This week we divided the group into two teams to work on our hide-and-seek robots: one team to build the hider robot, the other to build the seeker robot.

The two robots can use many of the same parts, but the control systems will be different. The hider robot will be operated by a person with an RC transmitter. The seeker robot will control itself.

Team Hider started resurrecting a radio-controlled robot that we built last year. The robot fought in a battlebots contest at the end of the year, so it’s in pretty poor shape.
Team Seeker worked on a colour tracking system. We took the code that we developed on a laptop before Christmas and tried it on two Raspberry Pis. We found that the original model Raspberry Pi wasn’t fast enough to do the job we needed: it took 6 seconds to acquire an image from a webcam. We tried a Model 3 B instead. This was fast enough for our needs. We then started work on tuning the colour detection. There’s some more experimentation to be done to find the best colour range to work with.

Kevin went through the steps involved in finding an object with a particular colour in an image. He started on an image with six circles each of a different colour, and demonstrated finding a green circle in the image. Then he stepped through the Python code and explained each task.

Here’s the image:

OpenCV, the Open Source Computer Vision library, has lots of functions for transforming and processing images.

We started with a standard RGB (red, green, blue) JPEG image, which OpenCV stores in memory as BGR (blue, green, red). Then we transformed the image to HSV (hue, saturation, value) format. The HSV colour space has a very useful property: colours are described by their hue and saturation, while the value represents the intensity of the colour. This means that we can use H and S to find a colour, without having to worry much about lighting or shadows.
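Here’s a small sketch of that property using Python’s standard colorsys module (OpenCV’s cv2.cvtColor scales hue to 0–179 and saturation/value to 0–255, while colorsys uses 0–1, but the idea is the same): the same green at two different brightnesses keeps its hue and saturation, and only the value changes.

```python
import colorsys

# A bright green, and the "same" green in shadow (half the brightness).
bright = (0.0, 0.8, 0.0)
shadow = (0.0, 0.4, 0.0)

h1, s1, v1 = colorsys.rgb_to_hsv(*bright)
h2, s2, v2 = colorsys.rgb_to_hsv(*shadow)

print(h1, s1, v1)  # hue 1/3 (green), saturation 1.0, value 0.8
print(h2, s2, v2)  # same hue and saturation, value 0.4
```

So a threshold on H and S alone matches the circle whether it is brightly lit or in shadow.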

Next we used the known value for the green circle to apply a threshold to the image: any colour outside the target range is converted to black, and any colour inside the range is converted to white. Here’s what the thresholded image looked like:
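The thresholding works like OpenCV’s cv2.inRange: a pixel turns white only if every channel sits between a lower and an upper bound. Here’s a plain-Python sketch on a toy grid of HSV pixels — the green bounds below are illustrative guesses, not the tuned values we’re still experimenting with.

```python
# OpenCV-style ranges: H is 0-179, S and V are 0-255.
# Illustrative bounds for "green" (not our tuned values).
lower = (40, 80, 80)
upper = (80, 255, 255)

# A toy 2x3 "image" of (H, S, V) pixels.
image = [
    [(60, 200, 200), (10, 200, 200), (65, 120, 150)],
    [(90, 200, 200), (55, 90, 100), (170, 30, 40)],
]

def in_range(pixel, lower, upper):
    """White (255) if every channel lies inside the bounds, else black (0)."""
    return 255 if all(lo <= p <= hi for p, lo, hi in zip(pixel, lower, upper)) else 0

mask = [[in_range(px, lower, upper) for px in row] for row in image]
print(mask)  # [[255, 0, 255], [0, 255, 0]]
```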

Then we found the white area in the image. To do that, we used an OpenCV function that gets the co-ordinates for contours that can be drawn around the boundaries of all the white regions in the image. We calculated the area of each contour and took the largest. We’ll find out why this is useful later.
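The idea of picking the biggest white region can be sketched in plain Python with a simple flood fill over a toy mask (OpenCV does this properly with cv2.findContours and cv2.contourArea; the mask values here are made up):

```python
from collections import deque

# Toy binary mask: two white (255) blobs; the right-hand one is larger.
mask = [
    [255, 255, 0, 0, 0],
    [255, 0, 0, 255, 255],
    [0, 0, 0, 255, 255],
]

def regions(mask):
    """Collect the 4-connected white regions as lists of (row, col) pixels."""
    seen, found = set(), []
    for r, row in enumerate(mask):
        for c, v in enumerate(row):
            if v == 255 and (r, c) not in seen:
                blob, queue = [], deque([(r, c)])
                seen.add((r, c))
                while queue:
                    y, x = queue.popleft()
                    blob.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < len(mask) and 0 <= nx < len(mask[0])
                                and mask[ny][nx] == 255 and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            queue.append((ny, nx))
                found.append(blob)
    return found

largest = max(regions(mask), key=len)
print(len(largest))  # pixel count of the biggest white region
```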

To show that we had found the right circle, we calculated the co-ordinates of its centre-point. Finally, we drew a small cross at that centre-point to mark it, and displayed the full image on screen. This is what we ended up with:
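The centre-point can be sketched as the mean of the region’s pixel co-ordinates. (With OpenCV contours, the usual route is cv2.moments, where cx = m10 / m00 and cy = m01 / m00.) The co-ordinates below are illustrative:

```python
# Pixels of a white blob (illustrative (row, col) co-ordinates).
blob = [(1, 3), (1, 4), (2, 3), (2, 4)]

# Centre-point = mean of the co-ordinates on each axis.
cy = sum(y for y, x in blob) / len(blob)
cx = sum(x for y, x in blob) / len(blob)
print(cx, cy)  # 3.5 1.5
```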

Since we already had the contour, we also used it to draw a line around the perimeter of the circle.

Next, we took a photo of a blue marker, found the HSV value of the blue, and used that value to find the marker in a live image. We held the marker in front of a laptop webcam, moved the marker around, and watched as the spot moved with it. Because we use a contour, our method for finding a particular colour works for any shape, not just circles.

Michael introduced us to TensorFlow, a machine learning library. Once trained, TensorFlow can identify specific objects by name. It’s a lot more sophisticated than finding something by colour. We spent some time setting the library up on a Raspberry Pi. The Pi isn’t powerful enough to train the software, but it can run the recognition using models trained on more powerful computers.

Our final goal is to build an autonomous robot to play a game of hide and seek. We can use one of our remote-controlled battlebots from last year to hide, and the new robot to do the seeking on its own. One way to do the seeking would be to go after the biggest thing in the robot’s field of view that has a particular colour: the colour of the hiding robot. Another way would be to train a TensorFlow model with pictures of the hiding robot, so that the seeker can recognise it from different angles.

It’s going to take us a while to figure out what works best, and then we have to work out how to control our robot. It should be an interesting new year.

Last week, we used an LM35 temperature sensor to read the temperature in the room and report it on the Arduino serial console.
We were able to get a reading for the temperature of the air in the room, and the temperature of a cup of coffee touching the sensor. We couldn’t be sure, however, that the readings were correct. So, we decided to test with a potentiometer and a voltmeter.

We supplied 5 V to the potentiometer and fed the output to the Arduino. We used a voltmeter to measure the output from the pot and compared it to the reading from
the Arduino. Once we got the code working, we got good agreement on voltage readings between the meter and the Arduino.

The sensor presents a voltage on its output pin that represents the temperature it’s measuring. In its basic mode, the sensor reads temperatures between 2 and 150 degrees Celsius. Every degree is 10 mV, so the output voltage ranges from 20 mV to 1500 mV. Converting our voltage reading to a temperature is simple then: divide the voltage reading in millivolts by 10.

We can use one of the analog I/O pins on the Arduino to read the voltage. The analogRead() function returns a value between 0 and 1023 for voltages between 0 and 5 V.
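Putting the two conversions together — ADC reading to millivolts, then millivolts to degrees — here’s a sketch of the arithmetic in Python (the real code runs in Arduino C, and the example reading is made up):

```python
def adc_to_millivolts(reading, vref_mv=5000, steps=1024):
    """Convert a 10-bit analogRead() value (0-1023) to millivolts."""
    return reading * vref_mv / (steps - 1)

def lm35_celsius(millivolts):
    """The LM35 outputs 10 mV per degree Celsius."""
    return millivolts / 10

reading = 51  # an example analogRead() value
mv = adc_to_millivolts(reading)
print(round(mv), round(lm35_celsius(mv), 1))  # 249 24.9
```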

We digressed into how to use a multimeter correctly. The first thing to do is make sure that the meter is on the right setting.
We tried measuring a 9 V DC battery with the DC voltage setting and got 9.2 V. We then tried measuring the same battery with the AC voltage setting. This time we got 19.6 V.
There’s a lot of potential (sorry) to get confused, then. Worse again, if we try measuring 220 VAC with the meter set to DC, we get a reading close to zero. In other words, a live AC supply looks safe if we set our meter wrong.

Next, we start at the highest range and step down. For a 1.5 V battery, we start on the 200 V range, then step down to 20 V. Starting high protects the meter: if the voltage is higher than we expect, a range that’s too low could damage it.

Voltage readings are taken in parallel with the circuit, and while it is live.

Resistance and continuity readings, on the other hand, are taken with the component disconnected from the circuit.
We worked out why with a simple circuit:

F is a fuse. W is a wire between the two ends. We’re not sure if the fuse is blown. We try testing for continuity across the fuse by putting our meter leads on A and B.
Even if the fuse is blown, we will get continuity because W provides a circuit. We need to cut the wire to get a true reading.
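A quick arithmetic sketch of why, using the standard parallel-resistance formula and made-up resistances: whether the fuse is good (near zero ohms) or blown (effectively open circuit), the wire in parallel keeps the reading between A and B close to zero, so the continuity test beeps either way.

```python
def parallel(r1, r2):
    """Combined resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

wire = 0.01        # the wire W: close to zero ohms
good_fuse = 0.1    # a healthy fuse: also close to zero
blown_fuse = 1e9   # a blown fuse: effectively an open circuit

print(round(parallel(wire, good_fuse), 4))   # near zero
print(round(parallel(wire, blown_fuse), 4))  # still near zero
```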

Our first attempt at Arduino code for the voltage reading gave us a surprise. Our meter displayed 2.5 V. The Arduino displayed 2 V. We used this formula to calculate the voltage:
v = 5 * analogRead_reading / 1023

v was declared as a float; analogRead_reading was declared as an int. The Arduino code multiplied two ints, divided the result by another int (integer division, which truncates), and only then stored the truncated result in the float. When we made the 5 and 1023 floating point numbers (5.0 and 1023.0), we got the right answer.
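The truncation is easy to reproduce. Python’s / always produces a float, so this sketch uses // to mimic C’s integer division; the reading is a made-up mid-scale value:

```python
reading = 512  # a mid-scale analogRead() value, about 2.5 V

# What the original Arduino C code did: all-int arithmetic, so the
# division truncates (C's `5 * reading / 1023` with int operands).
truncated = (5 * reading) // 1023
print(truncated)  # 2 -- the surprise reading

# The fix: make the constants floating point (5.0 and 1023.0 in C).
correct = 5.0 * reading / 1023.0
print(round(correct, 2))  # 2.5
```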

Once we were happy with the Arduino code, we replaced the potentiometer with an LM35. Unfortunately, we didn’t notice the “bottom view” label on the datasheet drawing. We connected the 5 V supply to the wrong side of the sensor. It’s amazing how hot a temperature sensor can get! It was too hot to touch. And we couldn’t measure the temperature because…
After we disconnected the LM35, we made another discovery: we continued to get random voltage readings on the Arduino even though there was nothing connected. With the input pin left floating, analogRead() happily returns values picked up from electrical noise.

Next week, we’ll use an LM35 connected the right way around, and try controlling a relay to switch power on and off to an external device. We’ll build our temperature-controlled soldering station carefully: we want to solder with the tip of the iron, not the temperature sensor.