I'm new to programming and RobotC, and I'm working on a maze project. I finally have the navigation portion complete; however, I also want to build a loop that will seek a flame (light) while navigating. I need help understanding how the NXTCam works. I found the following sample program, blob_chase, at mindsensors.com, but I don't understand how it works. How can I customize the code to recognize a light source? Ideally I'd like to drive my robot toward the source when it is found.

/************************************************************************************/
// blob_chase.c - fun demo of nxtcam using Robot C. Needs nxtcamlib.c.
// Gordon Wyeth
// 30 October 2007
// Updated 4/12/07 to use new version of nxtcamlib.c that works around Robot-C compiler
// bug.
//
// When a single blob is found in the image the robot will try to centre the blob by moving
// the motors. When more than one blob is found the robot halts and displays a message.
// The constants chosen work well with the standard NXT wheelchair robot with the
// camera mounted at the front of the robot looking down at the floor.
//
/************************************************************************************/

The NXTCam is a fun sensor but can be a little tricky to operate. The camera has 8 colour ranges that it uses to track objects. You configure these with NXTCamView, a PC-based program: connect the NXTCam to the PC with a USB cable, run NXTCamView, capture an image, and use the colour pickers to define a range that covers most of the light source you're trying to track. Then upload this new colour range to the camera and you're good to go. You can do this with up to 8 colour ranges.

You can download the NXTCamView program here: [LINK]. You will also need the drivers for the camera, which you can download from the Mindsensors page.

Thanks a bunch, I did exactly what you said and it worked perfectly. I captured a picture of a flame and uploaded it, and now whenever I move the flame, the blobs on the screen show where the flame is in relation to the camera. But now I have no idea how to write the code so that the camera can centre the NXT on the flame. I am trying to make my program navigate a maze, but whenever it sees a flame, the whole maze algorithm should stop, the NXT should centre on the flame, and it should make some kind of confirmation noise, or something along those lines. I already have the maze portion working, but I do not understand how to incorporate the flame-centering code into the maze-navigating code. So basically I have two questions: what is the code, or the format of the code, to centre the NXT on the flame? And how do I incorporate that code into my existing code?


Just out of curiosity, did you try the NXTCam driver that comes with the 3rd Party Driver Suite? It has more functionality than the standard Mindsensors one. One of the biggest advantages is that you can find the average centre of all the objects it has detected. Using this data you can then figure out how far off-centre the object is. If the object is more to the left, you need to turn your robot in that direction; the same applies in reverse.

The driver for the NXTCam is documented and comes with two examples. You can download it here: [LINK]. The documentation can be found in the Html folder and the driver is called "NXTCAM".

I just have the drivers from Mindsensors. I'm having problems and just want to simplify for now. What is a simple program that will recognize a colour? Like I mentioned before, I can capture an object in NXTCamView and then upload that colour to the camera. How do I build a simple program that will recognize the colour? And by uploading my "captured" colours, have I wiped out the default object tracking? When I now try the object tracking with the sample code from Mindsensors, I get nothing.


The main part of the camera is made up of two chips: the camera chip and a small processor that deals with the image information. (There is another chip in there, but it's not important for this explanation.) When you configured the colour ranges using NXTCamView, you uploaded this information to the image-processing chip. It scans the image data coming from the camera and checks whether any of the pixels match the colour ranges it has in its memory. These pixels form a rectangular object, and the camera can keep track of up to 8 of these objects. The colour matching and recognition all happen on the camera, not the NXT. The NXT retrieves the object data via I2C, and the driver stuffs it into various arrays.

Did you test the tracking with the NXTCamView program after you uploaded the new colour range? Did it show the various blobs on the screen it was tracking?

I am able to track the blobs in NXTCamView, and I will work on getting the NXTCam itself working in the future. However, because of my frustrations, I am trying to recognize colour in my maze in a simpler way, with my HiTechnic colour sensor. Here's my problem: my simple maze navigation is working well, and a very simple colour sensor program set to recognize "orange" (7) works well too, but when I try to put them together I'm messing up the logic somewhere. Here are my basic maze code and the colour sensor code. How can I merge these two so that the navigation continues until it senses "orange"? Ideally, I'd like it to continue searching the maze after finding the first "orange", looking for more "orange".

I've tried a lot of things. Here is the latest: it will play the song when I place "orange" in front of the sensor, but nothing below the second if statement works (check directions, movement, etc.). I also tried an else statement instead of the second if statement, with no luck. I've also tried moving the colour sensor (HTCS2) commands inside the go-straight routine and elsewhere, but haven't had any luck. I feel I'm missing some simple point of logic.

Hey guys, how do you get the NXTCam to follow a red ball? I know how to use NXTCamView to get it to track only the red ball, but I don't know how to get that into the RobotC code using nxtcamlib.c. Any help would be appreciated.
