Jeffrey Baker managed to OCR some of the images produced by my CAPTCHA program. This isn't terrible, because I knew that some of the images came out almost flat and could probably be OCRed, so I tweaked the rotation code to make more of the images come out with larger angles.
What I hadn't expected was that people would have such trouble reading them. I did a quick test and got 100% on a small set (about 50) of images, but some people can't even manage the sample images on that page. I certainly tuned the program so that I could read the images, and assumed that everyone else would manage the same. Clearly not.
That means that I can't increase the angles of the images to break the OCR.
So I got to thinking this morning while trying to forget a slightly weird dream (I was asleep and dreaming, in a dream; then I became aware that I was dreaming, but thought I was only one level deep before waking up, twice). The point of the 3D text was to force a translator to reconstruct the 3D world, which is (or should be) pretty easy for a human.
So, meet Sammy the Stick Man:
He gets rotated in lots of directions and you have to name which part of him is lit up. In this case the answer would be "left foot". There are a couple of problems. First: only six possible answers. That's not actually too bad for the use I have in mind, because I'll be giving the user lots of puzzles to solve so I can measure their success rate (which had better be well above 1/6 for a human). Second: there are too few distinct images. It would be too easy to have a human classify 1000 of them once and then have the computer solve new ones with a dumb image-closeness match.
So Sammy doesn't get released into the real world but maybe something will come of it.
I've finally got round to getting Skype working with SkypeOut. Seems good. People are free to try me over Skype (nick: aglangley), as I'd be interested to see what the computer-to-computer quality is like.