Description:

Based on a modified version of the NXT shoot bot, the automatic tortoise feeder has three main components: a top-feeding food hopper with a motor-operated paddle that dispenses food, a color sensor for line following, and a reverse-mounted touch sensor which acts like a pull trigger. The touch sensor has a colored ball mounted in it which entices the tortoises to bite at it. Once that happens, the robot dispenses the food, then executes the line following program for several seconds and stops. At first the tortoises would bite at the ball just because it was brightly colored, but after only a few goes I believe they’ve figured out that pulling it gives food, which is a pretty impressive feat of reptile intelligence as far as I can tell.
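For readers curious about the program structure, here is a minimal ROBOTC-style sketch of the control flow described above. The port assignments, power levels, timings, and threshold are my own assumptions for illustration, not the builder’s actual code:

#pragma config(Sensor, S1, touchSensor, sensorTouch)
#pragma config(Sensor, S2, colorSensor, sensorCOLORRED)  // color sensor in red-reflection mode (assumed)

// Assumed layout: motorA turns the hopper paddle, motorB/motorC drive the wheels.
task main()
{
  int lineThreshold = 45;  // reading that separates the line from the floor (guess)

  while (true)
  {
    // Wait for a tortoise to tug the ball on the reverse-mounted touch sensor.
    while (SensorValue[touchSensor] == 0)
      wait1Msec(10);

    // Spin the paddle briefly to drop a few pellets from the hopper.
    motor[motorA] = 50;
    wait1Msec(750);
    motor[motorA] = 0;

    // Simple two-state line follow for about five seconds, then stop and re-arm.
    ClearTimer(T1);
    while (time1[T1] < 5000)
    {
      if (SensorValue[colorSensor] < lineThreshold)
      {
        motor[motorB] = 40;  // on the dark line: curve one way
        motor[motorC] = 10;
      }
      else
      {
        motor[motorB] = 10;  // off the line: curve back
        motor[motorC] = 40;
      }
    }
    motor[motorB] = 0;
    motor[motorC] = 0;
  }
}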

What inspired you to build this robot?

I found out about NXT after watching a video of the CubeStormer robot that was sent around at work, and immediately thought it would be fun to build a robot that could interact with our two pet redfoot tortoises. Reptiles aren’t particularly trainable animals, though ours are very food motivated, so a robotic feeder seemed like a fun project to try. Fortunately tortoises are relatively slow moving and benign, so building something to interact with them wasn’t that difficult. I also hadn’t seen any examples of NXT robots interacting with animals (though I think a friend of mine used the remote control shootbot to terrorize his cats?).

How long did it take you to make this?

Hard to say, as I started and stopped several times. The programming took about a day once I went through the ROBOTC tutorials from Carnegie Mellon. I almost gave up initially trying to program it with the included NXT-G software and left the project alone until I found out about ROBOTC. The construction took maybe a week or two? I tried a few different designs before the current one, all of which had various problems. It took a while to figure out a way to mount the touch sensor so that a tortoise could trigger it.

What are your future plans with the robot?

I’d like to try a modified mechanism for dispensing the food. The vertically mounted hopper and the irregular size of tortoise pellets make the amount dispensed each time really difficult to control. My current idea is to mount the dispenser horizontally and use either one of the rubber treads or maybe a track from a LEGO Technic set to dispense the food more like a conveyor belt. I might also try a different way to move the robot around than line following, possibly the distance sensor and some simple wall avoidance.

hmoor14 put together a fun little (Ok, it’s not THAT little…) robot. It’s a VEX robot that is able to keep itself upright while simultaneously acting as a punching bag! Take a look:

I asked hmoor14 a few questions about his robot:

1) What inspired you to build this robot?

I wanted to start learning about robots and how to control them. So, when I saw a video on a balancing robot, I decided I would try that project.

2) How long did it take you to make this?

This was my first robot, so it probably took longer than it should have!
I pretty much did it over the Christmas holidays and then some, so about a month part time. Most of the time was spent not on actually building the robot but on learning how to design it and testing the pieces. Just getting around the deadzone in the motors took me a few days.
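For readers who haven’t run into it, the deadzone is the band of small power values where the motor doesn’t move at all, which plays havoc with a balancer’s fine corrections. A common workaround is to remap the controller’s output so it starts at the smallest power that actually produces motion. A minimal ROBOTC sketch of that idea (the deadzone value here is invented; you would measure your own motors):

// Remap a -127..127 control value around an assumed motor deadzone.
int compensateDeadzone(int power)
{
  int deadzone = 15;  // hypothetical; measure the real minimum for each motor

  if (power == 0)
    return 0;
  if (power > 0)
    return deadzone + (power * (127 - deadzone)) / 127;
  return -deadzone + (power * (127 - deadzone)) / 127;
}

The balancing loop then sends motor[port2] = compensateDeadzone(output); instead of the raw control output.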

3) What are your future plans with the robot?

I’m fixing to take it apart; I need the parts for my next robot. But I am going to keep what I’ve learned (which was so, so much).

Close-up of the robot:

[All work done by Burf, original link: http://www.burf.org.uk/2012/01/01/mindsensors-rcx-multiplexer-controlled-via-android-and-robotc/]

We found another piece of Burf’s work on his blog. If you don’t know Burf, he was the creator of a previous Cool Project on our blog, LEGO George.

Here’s another amazing project from his blog that utilizes the RCX Multiplexer and an Android phone!

His blog reads,

——————————————————————————————————————————————

As you may be aware, I have been building a robot called Wheeler out of old parts (old grey pieces and RCX 9V motors, etc.). I was hoping to have it finished over the Christmas break but hit a small issue with driving the wheels under the new weight of the body. Anyway, what I did manage to get up and running is the top half of Wheeler and the controller, which is an Android phone (Dell Streak).

Mindsensors RCX Multiplexer

I was utterly impressed with the Mindsensors.com RCX Multiplexer and, using Xander’s driver suite (check BotBench), how fast I was up and running. I wish there was a way to run the RCX Multiplexer off the NXT power supply, but that’s a small thing compared to how useful it is. I wish I had 3 more of them so that I could control 16 RCX motors!

Android NXT Remote Control

So, to try and work out how to control the NXT via Android, I stumbled across the NXT Remote Control project, which is free to download. This uses LEGO’s Direct Commands to control the 3 motor ports on the NXT, which means it bypasses your own code and you have no control over it. However, what I managed to do is reduce it down to a very simple program that sends messages to the NXT which you can deal with in your own program. In RobotC, it sends messages that are compatible with the MessageParam command, so you can send a message ID and 2 params to the NXT and deal with them in RobotC any way you want to. Code will be available soon once I have tidied it up.
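To give an idea of what handling those messages looks like on the NXT side, here is a minimal ROBOTC receive loop built on the brick’s message and messageParm[] built-ins. The message IDs and their meanings are invented for illustration; Burf’s actual protocol is his own:

task main()
{
  int msgId, parm1, parm2;

  while (true)
  {
    // message stays 0 until a Bluetooth mailbox message arrives.
    while (message == 0)
      wait1Msec(5);

    msgId = message;          // which command the phone sent
    parm1 = messageParm[0];   // first parameter (e.g. left power - assumed meaning)
    parm2 = messageParm[1];   // second parameter (e.g. right power - assumed meaning)
    ClearMessage();

    if (msgId == 1)           // hypothetical "drive" command
    {
      motor[motorB] = parm1;
      motor[motorC] = parm2;
    }
    else if (msgId == 2)      // hypothetical "all stop" command
    {
      motor[motorB] = 0;
      motor[motorC] = 0;
    }
  }
}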

First of all, let me introduce myself: I’m Leon (aka dimastero/dimasterooo), and I was recently invited to contribute to this blog. So, as my first post, I’d like to tell you about my new Skype-controlled LEGO Mindstorms NXT Car.

I’ve been creating websites for a while now, and I was trying to think of a way to combine that with Mindstorms NXT. This project is the result. The project’s webpage is fairly simple – it’s got three arrows (one forward, two to the sides), a start button, and a stop button, along with instructions. Clicking the start button will begin a Skype conversation with my computer, after which you should share your screen; the NXT standing in front of my computer can then “see” the webpage with the arrows via your computer.

That’s where the cool part kicks in – when you click any one of the arrows or the stop button, the page changes to a different shade of gray. This shade of gray is then picked up by the NXT, which turns it into a Bluetooth message for the other NXT on the car. The car then drives in the direction the user tells it to, while remaining within a fenced-off area where the webcam can see it.
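A hedged sketch of what the NXT in front of the screen might be doing in ROBOTC, assuming it reads the page with a light sensor held against the screen. The shade thresholds and message IDs are invented for illustration; Leon’s real values would differ:

#pragma config(Sensor, S1, screenSensor, sensorLightInactive)  // LED off: read screen glow

// Watch the shared screen and forward a command over Bluetooth
// to the NXT on the car.
task main()
{
  int reading;

  while (true)
  {
    reading = SensorValue[screenSensor];

    if (reading > 70)        // lightest shade: forward (assumed band)
      sendMessage(1);
    else if (reading > 55)   // next shade: left (assumed band)
      sendMessage(2);
    else if (reading > 40)   // next shade: right (assumed band)
      sendMessage(3);
    else                     // darkest shade: stop (assumed band)
      sendMessage(4);

    wait1Msec(100);
  }
}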

So, until January the 18th, you can drive a LEGO Mindstorms NXT car from the comfort of your own home. To learn how and to find out more about this project, click the link below:

How it works

The iOS code uses iOS 5’s face detection algorithm to find the position of the face within the video frame. I then needed a way to communicate with the NXT robot and steer it. Since I didn’t want to go through the trouble of communicating with it over Bluetooth (and I don’t know how to do that!), I chose to communicate with the NXT using the Light Sensor that comes with it.

If I want the robot to go to the left, I dim the lower portion of the iPhone screen, and if I want it to go to the right, I increase its intensity. Also, when the phone does not see a face, I turn the lower portion of the screen black. This tells the robot to stop moving forward and spin in place until it finds a face.

In the ROBOTC code, I also make use of the sound sensor to start and stop the robot. A loud sound is used to toggle between start and stop.
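Putting the two mechanisms together, the robot’s side of the scheme might look roughly like this in ROBOTC; the ports, thresholds, and power levels are my guesses, not the project’s actual code:

#pragma config(Sensor, S1, lightSensor, sensorLightInactive)  // reads the phone screen, LED off
#pragma config(Sensor, S2, soundSensor, sensorSoundDB)

task main()
{
  bool running = false;
  int light;

  while (true)
  {
    // A loud sound toggles the robot between running and stopped.
    if (SensorValue[soundSensor] > 80)
    {
      running = !running;
      wait1Msec(500);  // crude debounce so one clap toggles only once
    }

    if (!running)
    {
      motor[motorB] = 0;
      motor[motorC] = 0;
    }
    else
    {
      light = SensorValue[lightSensor];

      if (light < 15)        // screen black: no face, spin in place
      {
        motor[motorB] = 25;
        motor[motorC] = -25;
      }
      else if (light < 45)   // dim: steer left
      {
        motor[motorB] = 20;
        motor[motorC] = 40;
      }
      else                   // bright: steer right
      {
        motor[motorB] = 40;
        motor[motorC] = 20;
      }
    }
    wait1Msec(50);
  }
}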

Thanks to Mark over at www.mastincrosbie.com for creating this incredible project and providing the information. Also, thanks to Xander for providing the community with drivers for using the mentioned sensors in ROBOTC.

You might remember the original Lego Street View Car I built in April. It was very popular at the Google Zeitgeist event earlier this year.

I wanted to re-build the car to use only the LEGO Mindstorms NXT motors. I was also keen to make it look more… car-like. The result, after 4 months of experimentation, is version 2.0 of the Lego Street View Car.

As you can see, this version of the car is styled to look realistic. I also decided to use my iPhone to capture images on the car. With iOS 5, the iPhone uploads any photos to PhotoStream, so I can access them directly in iPhoto.

The car uses the Dexter Industries dGPS sensor to record the current GPS coordinates.

The KML file that records the path taken by the car is transmitted using the Dexter Industries Wifi sensor once the car is within wireless network range.

Design details

The LEGO Street View Car is controlled manually using a second NXT acting as a Bluetooth remote. The remote control allows me to control the drive speed and steering of the car. I can also brake the car to stop it from colliding with obstacles. Finally, pressing a button on the remote triggers an image capture.

Every time an image is captured, the current latitude and longitude are recorded from the dGPS. The NXT creates a KML-format file in its flash filesystem, which is then uploaded from the NXT to a PC. Opening the KML file in Google Earth shows the path that the car drove, with placemarks for every picture taken along the way. Click on a placemark to see the picture.

For each GPS coordinate I create a KML Placemark entry that embeds descriptive HTML code using the CDATA tag. The image link in the HTML refers to the last image captured on disk.
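As a rough sketch of what that looks like on the brick, here is a hedged ROBOTC fragment that appends one such placemark with the NXT file I/O calls. The helper name, image naming scheme, and simplified KML are illustrative assumptions, not Mark’s actual code; the text is written in short chunks because ROBOTC’s NXT strings are limited in length:

// Append one (simplified) placemark to an already-open KML file.
void writePlacemark(TFileHandle hFile, TFileIOResult &ioResult,
                    float lat, float lon, int n)
{
  string s;

  WriteText(hFile, ioResult, "<Placemark>");
  WriteText(hFile, ioResult, "<name>");
  StringFormat(s, "Snapshot %d", n);
  WriteText(hFile, ioResult, s);
  WriteText(hFile, ioResult, "</name>");

  // Descriptive HTML goes inside a CDATA section; the image
  // reference is a path relative to the KML file.
  WriteText(hFile, ioResult, "<description>");
  WriteText(hFile, ioResult, "<![CDATA[");
  WriteText(hFile, ioResult, "<img src=");
  StringFormat(s, "%d.jpg", n);
  WriteText(hFile, ioResult, s);
  WriteText(hFile, ioResult, " />]]>");
  WriteText(hFile, ioResult, "</description>");

  WriteText(hFile, ioResult, "<Point>");
  WriteText(hFile, ioResult, "<coordinates>");
  StringFormat(s, "%f,", lon);   // KML coordinate order is lon,lat,alt
  WriteText(hFile, ioResult, s);
  StringFormat(s, "%f,0", lat);
  WriteText(hFile, ioResult, s);
  WriteText(hFile, ioResult, "</coordinates>");
  WriteText(hFile, ioResult, "</Point>");
  WriteText(hFile, ioResult, "</Placemark>");
}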

The images are captured by triggering the camera on my iPhone. I use an app called SoundSnap, which triggers the camera when the phone hears a loud sound. By placing the iPhone over the NXT speaker, I can trigger the iPhone camera by playing a loud tone on the NXT. While this is not ideal (Bluetooth would be better), it does the job for now.

To get the photos from the iPhone I use the PhotoStream feature in iOS 5. I select the pictures in iPhoto and export them to my laptop. The iPhone will only upload photos when I am in range of a wireless network.

Finally the Dexter Industries Wifi sensor is used to wirelessly transmit the KML file to my laptop over the wireless network.

The snippet from the KML file gives you an idea of what each placemark should look like.

Once the car has finished driving, press the orange button on the NXT to save the KML file. This writes a <LineString> entry which records the actual path of the car: simply a list of coordinates that defines a path along the Earth’s surface in Google Earth. For example:
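(The fragment below is made up for illustration; the real coordinates come from the dGPS readings.)

<LineString>
  <coordinates>
    -6.2675,53.3478,0
    -6.2671,53.3480,0
    -6.2666,53.3483,0
  </coordinates>
</LineString>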

From the NXT to Google Earth

How do we get the pictures and KML file from the NXT and into Google Earth? First of all we need to get all the data in one place. The KML file refers to the relative path of each image, so we can package the KML file and the images into a single directory.

An example of the output produced is shown below. In this test case I started indoors in my house and took a few pictures. As you can see, the dGPS has trouble getting an accurate reading indoors, so the pictures appear to be scattered around the map. I then drove the car outside and started to capture pictures as I drove. From Snapshot 10 onwards, the image placemarks track where the car actually is much more closely.

Video

I shot some video of the car driving outside my house. It was a windy, dull day, so the video is a little dark. The fun part is seeing the view from on board the car!

LEGO George the Giant Robot!

He is controlled via a PlayStation 2 controller: he can move about, rotate his upper body, move his arms and shoulders, and grab onto items. His head also rotates and moves up and down, and if you get too close, his eyes will rotate.

Video of LEGO George:

I asked burf2000 some questions about his robot:

What inspired you to build this robot?

“I have always loved robotics, and LEGO for me was a medium to build it in. I built another large robot last year, but it was not so successful; that one was based off the T1 from Terminator 3. I wanted to keep things simple on this one due to size. It weighs around 20 kg. I also loved the Short Circuit films (Johnny 5).”

How long did it take to make?

“This one took around 3 months of odd evenings and days. We just had a baby, so getting time has been quite hard. However, my wife is very supportive and knew I needed to build this for a show (http://www.greatwesternlegoshow.com/).”

What are your future plans with the robot?

“Glad you asked this. I am currently improving certain parts I am not happy with, like the shoulder joints, main bearing, and turning. Once they are done, I am going to build a second robot to keep him company. It’s going to be another large one, using more NXTs, and hopefully it will go around on its own. My aim is to get a whole display of large robots moving around and interacting with each other.”

Thank you, burf2000, for submitting LEGO George. We can’t wait to see his successor!

More Photos

The ROBOTC curriculum covers quite a bit of material ranging from basic movement to automatic thresholds and advanced remote control. This is plenty of material for the average robotics class. However, it is not enough for some ambitious teachers and students who have mastered the basics. For those individuals who strive to learn the ins and outs of ROBOTC, we offered a pilot course called “ROBOTC Advanced Training” in late July.

The focus of the class was on advanced programming concepts with ROBOTC. Trainees learned to make use of the NXT’s processing power and of third-party sensors that expand its capabilities. The class began with a review of the basic ROBOTC curriculum. It then moved into arrays, multi-tasking, custom user interfaces using the NXT LCD screen and buttons, and file input/output. The class worked together to write a custom I²C sensor driver for the Mindsensors Acceleration sensor.
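To give a flavor of what writing such a driver involves, here is a stripped-down ROBOTC sketch that polls a block of I²C registers on a custom sensor port. The device address and register values are illustrative placeholders; a real driver takes them from the sensor’s documentation:

#pragma config(Sensor, S1, accelSensor, sensorI2CCustom)

task main()
{
  byte request[3];
  byte reply[6];

  while (true)
  {
    request[0] = 2;     // bytes in the outgoing message: address + register
    request[1] = 0x02;  // sensor's I2C address (placeholder - check the docs)
    request[2] = 0x42;  // first register of the data block (placeholder)

    sendI2CMsg(S1, &request[0], 6);           // ask for 6 reply bytes
    while (nI2CStatus[S1] == STAT_COMM_PENDING)
      wait1Msec(2);                           // wait for the bus to finish

    readI2CReply(S1, &reply[0], 6);

    // reply[] now holds raw axis data, to be decoded per the datasheet.
    nxtDisplayTextLine(0, "raw x: %d", reply[0]);
    wait1Msec(100);
  }
}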

The capstone project for the course involved autonomous navigation in a grid world. The program allows the NXT to find the most efficient path to its goal while avoiding obstacles. The class learned about the “wavefront algorithm,” which enables autonomous path planning in a world delineated by a grid. The algorithm assumes that the robot uses only three movements: forward one block, right turn, and left turn. Based on these assumptions, each grid block has four neighbors: north, south, east, and west of the current block.

The grid world (for our project, a 10×5 grid) is represented in ROBOTC by a two-dimensional array of integers. The integer encoding is: robot = 99, goal = 2, obstacle = 1, empty space = 0. The wavefront begins at the goal and propagates outwards until every position has a value other than zero. Each empty-space neighbor of the goal is assigned a value of 3; each empty-space neighbor of the 3s is assigned a value of 4, and the pattern continues until there are no more empty spaces on the map. The robot then follows the most efficient path by repeatedly moving to the neighbor with the lowest value until it reaches the goal.
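As a rough illustration, the propagation step can be written along these lines in ROBOTC. The grid size and integer encoding follow the description above; the code is a sketch, not the class’s actual solution:

#define GRID_W 10
#define GRID_H 5

// 0 = empty, 1 = obstacle, 2 = goal, 99 = robot, as described above.
int grid[GRID_H][GRID_W];

void propagateWavefront()
{
  int x, y;
  int v = 2;           // the goal's value; the first ring becomes 3
  bool assigned = true;

  while (assigned)
  {
    assigned = false;
    for (y = 0; y < GRID_H; y++)
    {
      for (x = 0; x < GRID_W; x++)
      {
        if (grid[y][x] != v)
          continue;
        // Every empty north/south/east/west neighbor of a v-cell gets v+1.
        if (y > 0          && grid[y-1][x] == 0) { grid[y-1][x] = v + 1; assigned = true; }
        if (y < GRID_H - 1 && grid[y+1][x] == 0) { grid[y+1][x] = v + 1; assigned = true; }
        if (x > 0          && grid[y][x-1] == 0) { grid[y][x-1] = v + 1; assigned = true; }
        if (x < GRID_W - 1 && grid[y][x+1] == 0) { grid[y][x+1] = v + 1; assigned = true; }
      }
    }
    v++;               // push the wavefront out one ring
  }
}

Once the grid is filled, driving reduces to checking the four neighbors of the robot’s cell, moving to the one with the lowest value, and repeating until the goal’s cell is reached.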

It is very exciting to see autonomous path planning implemented in ROBOTC because this is similar to the way full scale autonomous vehicles work. Check out the video of the path planning in action and the full ROBOTC code below. Our future plans are to incorporate these lessons into a new curriculum including multi-robot communications. If this seems like the type of project you would like to bring to your classroom, check back throughout the year for updates and also in the spring for availability for next summer’s ROBOTC Advanced Class.