I am building a robotic arm that would be capable of catching an object in mid-flight. I want to know whether I can use RoboRealm to calculate the Cartesian coordinates of a flying object in real time.

Anonymous

7 years

Sanjay,

You might be able to use it to detect the target, but we do not have any modules that do projectile prediction. You can do some of that via another application that reads the current target location from RR using the API.
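For the projectile-prediction part handled outside RoboRealm, one common approach is to fit a constant-acceleration (ballistic) model to the last few target positions read over the API and extrapolate forward. A minimal sketch in Python with NumPy; the function names, sampling times, and axes here are illustrative assumptions, not part of RoboRealm:

```python
import numpy as np

def fit_trajectory(times, positions):
    """Least-squares fit of x(t) = p0 + v*t + 0.5*a*t**2 per axis.

    times: shape (N,) sample timestamps in seconds.
    positions: shape (N, D) tracked target coordinates (e.g. D=2 or 3).
    Returns a (3, D) coefficient matrix: rows are position, velocity,
    acceleration for each axis.
    """
    t = np.asarray(times, dtype=float)
    # Design matrix for the constant-acceleration model.
    A = np.column_stack([np.ones_like(t), t, 0.5 * t**2])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(positions, dtype=float),
                                 rcond=None)
    return coeffs

def predict(coeffs, t):
    """Extrapolate the fitted trajectory to time t."""
    return coeffs[0] + coeffs[1] * t + 0.5 * coeffs[2] * t**2
```

In practice you would feed `fit_trajectory` a sliding window of the most recent COG readings (with their timestamps) and call `predict` for the time at which the arm can intercept.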

Thanks for the info. I would like to try using RoboRealm for stereo vision. I only want the Cartesian coordinates of the detected object. The software is fast enough to detect objects moving at reasonable speeds.
Is that possible?

Whether the software is fast enough will depend more on your camera and lighting level. The distance to the object will also matter: at 30fps the object cannot be moving very fast when close, but can move much faster when further away.
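The speed/distance trade-off follows from the pinhole camera model: the per-frame pixel motion of an object moving perpendicular to the optical axis is roughly f * (speed / fps) / distance. A quick sketch, where the focal length in pixels is an assumed example value, not a RoboRealm parameter:

```python
def pixels_per_frame(speed_mps, distance_m, fps=30, focal_px=600):
    """Approximate image-plane motion per frame (pinhole model).

    An object moving at speed_mps perpendicular to the optical axis,
    at distance_m from the camera, shifts by about
    focal_px * (speed_mps / fps) / distance_m pixels per frame.
    """
    return focal_px * (speed_mps / fps) / distance_m
```

For example, with these assumed numbers an object at 1 m moving 1 m/s crosses about 20 px between frames, while the same object at 10 m moves only about 2 px, so distant objects can move much faster before tracking breaks down.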

What is the object? There may be a way to detect the object in both stereo frames and just use the COG (center of gravity) offset rather than computing a full-frame stereo correspondence, which is much slower.
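The COG-offset idea amounts to triangulating a single point: find the object's centroid in each image and use the horizontal disparity between them. A hedged sketch for an idealized, rectified parallel stereo pair; the focal length, baseline, and principal point are placeholder values you would obtain from camera calibration:

```python
def cog_to_3d(cog_left, cog_right, focal_px, baseline_m, cx, cy):
    """Triangulate the object's 3D position from its COG in each image.

    Assumes rectified, parallel cameras: disparity d = xL - xR,
    depth Z = focal_px * baseline_m / d, and X, Y follow from the
    pinhole back-projection of the left-image centroid.
    Returns (X, Y, Z) in metres in the left camera's frame.
    """
    xl, yl = cog_left
    xr, _ = cog_right
    d = xl - xr
    if d <= 0:
        raise ValueError("non-positive disparity; check camera order "
                         "and rectification")
    Z = focal_px * baseline_m / d
    X = (xl - cx) * Z / focal_px
    Y = (yl - cy) * Z / focal_px
    return X, Y, Z
```

Since only one point per frame is triangulated, this runs far faster than a dense disparity map, which is why the COG offset is attractive for real-time tracking.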

Given that I use a stationary object and two low-resolution cameras (640x480) in a well-lit environment, can I use stereo vision in RoboRealm to detect the coordinates of the object?
I will move on to slow-moving objects and different lighting conditions later.
My PC: i5, 6GB RAM, no GPU (but I can buy any necessary hardware).

If yes, can you please give me a link to a tutorial on this?

Anonymous

7 years

Sanjay,

That should work ... unfortunately we do not have any tutorials on this particular topic. If you post the two images here, we can reply with a robofile that will get you started, or with reasons why it may not work.

STeven.

