Originally posted by Sayeh Determining distance with any vision system comes down to a technique called "triangulation". That's why you have two eyes: comparing two viewpoints gives you depth perception. A related trick works from focus: when an image is brightest and its edges are sharpest, the image is "in focus". Whatever mechanism altered the focus to make that happen is queried, via position or synchro/servo feedback, and those values are translated via table lookup into a "real-world" distance. It's very simple.

The 'bot must know, from initial table data, that motor position "zero" corresponds to focus at some specific distance (for example, an inch away). Thus, when queried, the motors can return their offset from "zero", and that offset is translated into a focus distance.
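The table-lookup scheme described above can be sketched in a few lines. Note the calibration pairs below (motor steps versus focused distance) are invented illustration values, not figures from any real lens assembly:

```python
# Depth-from-focus via table lookup: a minimal sketch.
# The calibration pairs are made up for illustration.

# (motor steps from "zero", focused distance in inches)
CALIBRATION = [
    (0, 1.0),      # position "zero" focuses one inch away
    (100, 4.0),
    (250, 12.0),
    (500, 48.0),
]

def focus_distance(motor_steps):
    """Translate a queried motor position into a real-world distance
    by linear interpolation over the calibration table."""
    if motor_steps <= CALIBRATION[0][0]:
        return CALIBRATION[0][1]
    # walk consecutive table entries and interpolate inside the bracket
    for (s0, d0), (s1, d1) in zip(CALIBRATION, CALIBRATION[1:]):
        if s0 <= motor_steps <= s1:
            t = (motor_steps - s0) / (s1 - s0)
            return d0 + t * (d1 - d0)
    return CALIBRATION[-1][1]  # past the table: clamp to farthest entry

print(focus_distance(175))  # halfway between 100 and 250 steps -> 8.0
```

A real system would build the table during a one-time calibration pass and would likely need many more entries near the close-focus end, where small motor moves change the distance a lot.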

that's great for 3-D imaging, but it's also expensive... a simple sonar/IR array will work for determining distance in any direction...
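The sonar half of that suggestion is just a timing calculation: ping, wait for the echo, and convert the round trip into a range. A minimal sketch, assuming the nominal speed of sound in air (~343 m/s at 20 °C); any sensor-specific trigger/echo details are left out:

```python
# Sonar ranging sketch: distance from an ultrasonic echo's round-trip time.
# Assumes ~343 m/s speed of sound in air; real sensors need temperature
# compensation and a timeout for "no echo".

SPEED_OF_SOUND_M_S = 343.0

def sonar_distance_m(echo_time_s):
    """The ping travels out and back, so halve the round-trip distance."""
    return echo_time_s * SPEED_OF_SOUND_M_S / 2.0

# A roughly 5.8 ms round trip corresponds to about one metre:
print(round(sonar_distance_m(0.00583), 2))
```

An IR ranger is even simpler from the software side (usually an analog voltage you map through a lookup table, much like the focus table above), which is why a ring of these sensors is the cheap way to get all-around distance sensing.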

Originally posted by yavanna It will transfer it to voice data!!! They will hear it.

do you know how expensive a good OCR is? even then it's not 90% accurate... and that's with black letters on a white document, right in front of the sensor, with ample light and controlled conditions... now you want to read a blackboard with some teacher's sloppy handwriting in blue chalk? or a dry-erase board in red marker?

true, if you just go by contrast the colors don't matter much, but think about the distance... first it has to find the chalkboard on its own... unless you want the blind person to find the board...
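"Going by contrast" means reducing the image to brightness and thresholding it, so chalk colour genuinely stops mattering. A toy sketch on an invented 4x4 grayscale patch (bright strokes on a dark board); the threshold value is an assumption:

```python
# Contrast-based binarization: classify pixels as writing vs. board by
# brightness alone, so blue chalk and red marker come out the same.
# The 4x4 "image" and the threshold of 128 are invented sample values.

def binarize(gray, threshold=128):
    """Mark every pixel as writing (1) or board (0) by brightness."""
    return [[1 if px > threshold else 0 for px in row] for row in gray]

board = [
    [ 20,  20, 200,  20],   # a bright vertical stroke on a dark board
    [ 20, 210, 215,  20],
    [ 20,  20, 205,  20],
    [ 20,  20, 200,  20],
]
for row in binarize(board):
    print(row)
```

The hard part the post is pointing at isn't this step: it's locating the board in the scene and getting a usable image of it from across a room before any thresholding or OCR can even start.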

btw, what's a blind person doing looking at a chalkboard in the first place? and seeing-eye dogs are allowed in schools...