So I set up my iPhone to act as a webcam (which actually works surprisingly well). I set up RR to track the color red on a white door. I then had an Arduino control a laser pointer, and was amazed at how well RR was able to follow the red dot, and especially at how fast RR was. It would actually capture the laser streaks when the dot moved too fast for the camera, and lock back onto the dot when it stopped. Very cool software!

Now the question is: I'm going to try to get a second Arduino to target the dot from the first Arduino using the software. I see that when RR follows the first laser dot, it reports what look like coordinates. So I have two questions.

How would I go about calibrating the second laser against the coordinates RR reports?

How do I have RR talk to the second Arduino and feed it those coordinates, so the servos can aim its laser at the first dot?

Quick clarification: what's the overall goal here? If you are already communicating with an Arduino, you can run more than one servo on a single Arduino without having to communicate with a second one.

If you do need two separate systems, you will need something in the visual space to calibrate against. For example, you can sweep the second laser through the full range of where it can appear in the image and scale its servo positions accordingly. Alternatively, you can simply check where the laser dot is, move the second laser a small amount in that direction, and check again. Because you have a closed-loop system (movement to visual to movement, and so on) you can "fudge" it and just move in small increments. Eventually the second laser will converge on the center of the first laser dot.

STeven.

