Tag Archives: motion control


Adafruit has released a $35 robotics HAT add-on for any 40-pin Raspberry Pi board. The Adafruit Crickit (Creative Robotics & Interactive Construction Kit) HAT is designed for controlling motors, servos, or solenoids using Python 3. The board is limited to powering 5 V devices and requires a 5 V power supply.

The Crickit HAT incorporates Adafruit’s “i2c-to-whatever” bridge firmware, called seesaw. With seesaw, “you only need to use two data pins to control the huge number of inputs and outputs on the Crickit,” explains Adafruit founder and MIT engineer Limor Fried in an announcement on the Raspberry Pi blog. “All those timers, PWMs, NeoPixels, sensors are offloaded to the co-processor. Stuff like managing the speed of motors via PWM is also done with the co-processor, so you’ll get smooth PWM outputs that don’t jitter when Linux gets busy with other stuff.”

Crickit HAT with and without Raspberry Pi

The Crickit HAT uses a “bento box” approach to robotics, writes Fried. “Instead of having eight servo drivers, or four 10A motor controllers, or five stepper drivers, it has just a little bit of everything,” she adds.

1x NeoPixel driver with 5V level shifter connected to the seesaw chip (not the Pi), so you won’t be giving up pin 18. It can drive over 100 pixels.

1x Class-D audio amplifier (3 W max into a 4-8 ohm speaker) connected to the I2S pins on the Pi for high-quality digital audio — even works on Zeros

1x micro-USB to serial converter port for updating seesaw with the drag-n-drop bootloader, or plugging into a computer; it can also act as a USB-to-serial console for logging in and running commands on the Pi.

Crickit HAT, front and back

Further information

The Adafruit Crickit HAT is currently listed as “out of stock” at $34.95. More information may be found in Adafruit’s Crickit HAT announcement and product page.

Continuing with the topic of stepper motors, this time Ed looks at back electromotive force (EMF) and its effects. He examines the relationship between running stepper motors at high speeds and power supply voltage requirements.

By Ed Nisley

Early 3D printers used ATX supplies from desktop PCs for their logic, heater and motor power. This worked well enough—although running high-wattage heaters from the 12 V supply tended to incinerate cheap connectors. More mysteriously, stepper motors tended to run roughly and stall at high printing speeds, even with microstepping controllers connected to the 12 V supply.

In this article, I’ll examine the effect of back EMF on stepper motor current control. I’ll begin with a motor at rest, then show why increasing speeds call for a much higher power supply voltage than you may expect.

Microstepping Current Control

As you saw in my March 2018 article (Circuit Cellar 332), microstepping motor drivers control the winding currents to move the rotor between its full-step positions. Chips similar to the A4988 on the Protoneer CNC Shield in my MPCNC sense each winding’s current through a series resistor, then set the H-bridge MOSFETs to increase, reduce or maintain the current as needed for each step. Photo 1 shows the Z-axis motor current during the first few steps as the motor begins turning, measured with my long-obsolete Tektronix Hall effect current probes, as shown in this article’s lead photo above.

Photo 1 Each pulse in the bottom trace triggers a single Z-axis microstep. The top two traces show the 32 kHz PWM ripple in the A and B winding currents at 200 mA/div. The Z-axis acceleration limit reduces the starting speed to 18 mm/s = 1,100 mm/min.

The upper trace (I’ll call it the “A” winding) comes from the black A6302 probe clamped around the blue wire, with the vertical scale at 200 mA/div. The current starts at 0 mA and increases after each Z-axis step pulse in the bottom trace. Unlike the situation in most scope images, the “ripple” on the trace isn’t noise. It’s a steady series of PWM pulses regulating the winding current.

The middle trace (the “B” winding) increases from -425 mA because it operates in quadrature with the A winding. The hulking pistol-shaped Tektronix A6303 current probe, rated for 100 A, isn’t well-suited to measure such tiny currents, as you can see from the tiny green stepper motor wire lying in the gaping opening through the probe’s ferrite core. Using it with the A6302 probe shows the correct relation between the currents in both windings, even if its absolute calibration isn’t quite right.

Photo 2 zooms in on the A winding current, with the vertical scale at 50 mA/div, to show the first PWM pulse in better detail. The current begins rising from 0 mA, at the rising edge of the step pulse, as the A4988 controller applies +24 V to the motor winding and reaches 110 mA after 18 µs. The controller then applies -24 V to the winding by swapping the H bridge connections. This causes the current to fall to 40 mA, whereupon it turns on both lower MOSFETs in the bridge to let the current circulate through the transistors with very little loss.

Photo 2 Zooming in on the first microstep pulse of Photo 1 shows the A4988 driver raising the stepper winding current from 0 mA as the motor starts turning. The applied voltage and motor inductance determine the slope of the current changes.

The next PWM cycle starts 15 µs later, in the rightmost division of the screen, where it rises from the 40 mA winding current set by the first pulse. It will also end at 110 mA, although that part of the cycle occurs far off-screen. You can read the details of the A4988 control algorithms and current levels in its datasheet, with the two-stage decreasing waveform known as “mixed decay” mode.

Although the H-bridge MOSFETs in the A4988 connect the motor windings directly between the supply voltage and circuit ground, the winding inductance prevents the current from changing instantaneously. The datasheet gives a nominal inductance of 4.8 mH, matching what I measured, but you can also estimate the value from the slope of the current changes. …
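That slope-based estimate can be checked against the numbers quoted for Photo 2 using the basic inductor relation V = L·(di/dt). The short sketch below is not from the original article; it ignores winding resistance and back EMF, which is why it comes in somewhat under the 4.8 mH nominal value:

```python
# Estimate winding inductance from the current slope: V = L * di/dt.
# Figures taken from Photo 2; winding resistance and back EMF ignored.
V_supply = 24.0   # applied winding voltage, volts
di = 110e-3       # current rise during the on-time, amps (0 -> 110 mA)
dt = 18e-6        # on-time, seconds

L_est = V_supply * dt / di   # henries
print(f"Estimated inductance: {L_est * 1e3:.1f} mH")  # ~3.9 mH vs. 4.8 mH nominal
```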

Note: We’ve made the October 2017 issue of Circuit Cellar available as a free sample issue. In it, you’ll find a rich variety of the kinds of articles and information that exemplify a typical issue of the current magazine.

The new year 2018 is almost upon us. It’s a special year for us because it marks Circuit Cellar’s 30th anniversary. In tribute to that, we thought we’d share an article from the very first issue.

Motion Triggered Video Camera Multiplexer by Steve Ciarcia

One of the most successful Circuit Cellar projects ever was the ImageWise video digitizing and display system (BYTE, May-August ’87). It seems to be finding its way into a lot of industrial applications. I suppose I should feel flattered that a whole segment of American industry might someday depend on a Circuit Cellar project, but I can’t let that hinder me from completing the project that was the original incentive for ImageWise. Let me explain.

How it all started

When I’m not in the Circuit Cellar I’m across town at INK or in an office that I use to meet a prospective consulting client so that he doesn’t think that I only lead a subterranean existence. Rather than discuss the work done for other clients to make my point, however, I usually demonstrate my engineering expertise more subtly by just leaving some of the electronic “toys” I’ve presented lying around. The Fraggle Rock lunchbox with the dual-disk SBI 80 in it gets them every time! ImageWise was initially conceived to be the “pièce de résistance” of these hardware toys. The fact that it may have had some commercial potential was secondary. I just wanted to see the expressions on the faces of usually stern businessmen when I explained that the monitor on the corner of my desk wasn’t a closed-circuit picture of the parking lot outside my office building. It was a live video data transmission from the driveway at my house in an adjacent town.

Implementing this video system took a lot of work and it seems like I’ve opened Pandora’s box in the process. It would have been a simple matter to just aim a camera at my house and transmit a picture to the monitor on the desk but the Circuit Cellar creed is that hardware should actually work, not just impress business executives. ImageWise is a standalone serial video digitizer (there is a companion serial input video display unit as well) which is not computer dependent. Attached to a standard video camera, it takes a “video snapshot” at timed intervals or when manually triggered. The 256×244-pixel (64-level grayscale) image is digitized and stored in a 62K-byte block of memory. It is then serially transmitted either as an uncompressed or run-length-encoded compressed file (this will generally reduce the 62K bytes to about 40K bytes per picture, depending upon content).

An ImageWise digitizer/transmitter normally communicates with its companion receiver/display at 28.8K bits per second. Digitized pictures therefore can be taken and displayed about every 14 seconds. While this might seem like a long time, it is quite adequate for surveillance activities and approximates the picture taking rate of bank security cameras.

“Real-Time” is relative

When we have to deal with remote rather than direct communication, “freeze-frame” imaging systems such as ImageWise can lose most of their “real-time” effectiveness as continuous-activity monitors due to slow transmission media. Using a 9600-bps modem, a compressed image will take about 40 seconds to be displayed. At 1200 bps it will take over 5 minutes!
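The timing claims above follow from simple arithmetic. A sketch (not from the original article), assuming 10 bits on the wire per byte (start + 8 data + stop) and the ~40-Kbyte compressed image described earlier:

```python
# Rough transmission times for a ~40-Kbyte compressed ImageWise frame.
image_bytes = 40_000                 # per the article, after RLE compression
bits_on_wire = image_bytes * 10      # 8N1 framing: start + 8 data + stop

for bps in (28_800, 9_600, 1_200):
    seconds = bits_on_wire / bps
    print(f"{bps:>6} bps: {seconds:6.0f} s")
# ~14 s at 28.8k, ~42 s at 9600, ~333 s (over 5 minutes) at 1200
```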

Of course, using such narrow logic could lead one to dismiss freeze-frame video imaging and opt for hardwired direct video, whatever the distance. However, unless you own a cable television or telephone company you might have a lot of trouble stringing wires across town. All humor aside, the only reason for using continuous monitoring systems at all is to capture and record asynchronous “events.” In the case of a surveillance system, the “event” is when someone enters the area under surveillance. For the rest of the time, you might as well be taking nature photos because, according to the definition of an event, nothing else is important. Most continuous surveillance video systems are, by necessity, real-time monitors as well. Because they have no way to ascertain that an event has occurred they simply record everything and ultimately capture the “event” by default. So what if there is 6 hours of video tape or 200 gigabytes of useless video data transmission around a 4-second “event.”

If we know exactly when an event occurs and take a freeze frame picture exactly at that time, there is no difference between its record of the event and a real-time recorder or snap-shot camera at the same instant. The only difference is that a freeze-frame recorder needs some local intelligence to ascertain that an event is occurring so that it knows when to snap a picture. Sounds simple, right?

To put real-timing into my driveway monitor, I combined a video camera and an infrared motion detector. When someone (or something) enters the trigger zone of the motion detector it will also be within the field of the video camera. If motion is detected, the controller triggers the ImageWise to capture that video frame at that instant and transmit the picture via modem immediately. The result is, in fact, real-time video, albeit delayed by 40 seconds. Using a 9600-bps modem, you will see what is going on 40 seconds after it has occurred. (Of course, you’ll see parts of the picture sooner as it is painting on the screen.) Subsequent motion will trigger additional pictures until eventually the system senses nothing and goes back to timed update. With such a system you’ll also gain new knowledge. You’ll know that it was the UPS truck that drove over the hedge because you were watching, but you aren’t quite sure who bagged the flower bed.

Of course knowing a little bit is sometimes worse than nothing at all. While a single video camera and motion detector might cover the average driveway, my driveway has multiple entrances and a variety of parking areas. When I first installed a single camera to cover the main entrance all it did was create frustration. I would see a car enter and park. If the person exited the vehicle they were soon out of view of the camera and I’d be thinking, “OK, what are they doing?” Rather than laying booby traps for some poor guy delivering newspapers, I decided to expand the system to cover additional territory. Ultimately, I installed three cameras and four motion detectors which could cover all important areas and provide enough video resolution to specifically recognize individuals. (Since I have four telephone lines into my house and only one is being used with ImageWise, I suppose the next step is to use one of them as a live intercom to speak to these visitors. A third line already goes to the home control system so I could entertain less-welcome visitors with a few special effects).

Motion Triggered Video MUX

Enough of how I got into this mess! What this is all leading to is the design of my motion triggered video camera multiplexing (MTVCM) system. I am presenting it because it was fun to do, it solved a particular personal problem, and if I don’t document it somehow, I’ll never remember what I’ve got wired the next time I work on it.

The MTVCM is a 3-board microcomputer-based 4-channel video multiplexer with optoisolated trigger control inputs (see figure 1). Unlike the high-tech, totally solid-state audio/video multiplexer (AVMUX) which I presented a couple of years ago (BYTE, Feb ‘86), the MTVCM is designed to be simple, lightning-proof, reliable, and above all flexible.

The MTVCM is designed for relatively harsh environments. To minimize wire lengths from cameras and sensors, the MTVCM is mounted in an outside garage where its anticipated operating temperature range is -20°C to +85°C. The MTVCM operates as a standalone unit running a preprogrammed control program or can be remotely commanded to operate in a specific manner. It is connected to the ImageWise and additional electronics in the house via a twisted-pair RS-232 line, one TTL “camera ready” line, and a video output cable. At the heart of the MTVCM is an industrial temperature version of the Micromint BCC52 8052-based controller which has an onboard full floating-point 8K BASIC, EPROM programmer, 48K bytes of memory, 24 bits of parallel I/O and 2 serial ports (for more information on the BCC52 contact Micromint directly, see the Articles section of the Circuit Cellar BBS, see my article “Build the BASIC-52 Computer,” BYTE, Aug ‘85, or send $2 to CIRCUIT CELLAR INK for a reprint of the original BCC52 article).

Because the BCC52 is well documented, I will not discuss it here.
The MTVCM is nothing more than a specific application of a BCC52 process controller with a little custom I/O. In the MTVCM the custom I/O consists of a 4-channel relay multiplexer board and a 4-channel optoisolated input board (Micromint now manufactures a BCC40R 8-channel relay output board and a BCC40D direct decoding 8-channel optoisolated input/output board. Their design is different and should not be confused with my MTVCM custom I/O boards). Each of my custom circuits is mounted on a BCC55 decoded and buffered BCC-bus prototyping board.

Figure 2 details the basic circuitry of the BCC55 BCC-bus prototyping board. The 44-pin BCC-bus is a relatively straightforward connection system utilizing a low-order multiplexed address/data configuration directly compatible with many standard microprocessors such as the Z8, 8085, and the 8052. On the protoboard all the pertinent busses are fully latched and buffered. The full 16-bit address is presented on J19 and J20 while the 8-bit buffered data bus is available at J21. J22 presents eight decoded I/O strobes within an address range selected via JP2.

The Multiplexer Board

Figure 3 is the schematic of the relay multiplexer added to the prototyping board. The relay circuit is specifically addressed at C900H and any data sent to that location via an XBY command [typically XBY(0C900H)=X] will be latched into the 74LS273. Since it can be destructive to attach two video outputs together, the four relays are not directly controlled by the latch outputs. Instead, bits D0 and D1 are used to address a 74LS139 one-of-four decoder chip. The decoder is enabled by a high-level output on bit D3. Therefore, a 1000 (binary) code selects relay 4 while a 1011 code selects relay 1. An output of 0000 shuts off the relay mux (eliminating the decoder and going directly to the relay drivers allows parallel control of the four relays).
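The encoding described above can be captured in a few lines. This is a hypothetical helper, not from the original BASIC program, with D3 as the decoder enable and D1:D0 as the relay address:

```python
def relay_select_code(relay):
    """Return the byte latched into the 74LS273 to select one relay (1-4).

    Bit D3 enables the 74LS139 decoder; bits D1:D0 address the relay.
    Per the article: 0b1000 selects relay 4, 0b1011 selects relay 1,
    and 0b0000 shuts off the relay mux entirely.
    """
    if relay == 0:
        return 0b0000              # all relays off
    if not 1 <= relay <= 4:
        raise ValueError("relay must be 0 (off) or 1-4")
    return 0b1000 | (4 - relay)

# In BASIC-52 the value would be written with XBY(0C900H)=X.
print(bin(relay_select_code(1)))   # 0b1011
print(bin(relay_select_code(4)))   # 0b1000
```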

All the normally-open relay contacts are connected together as a common output. Since only a single relay is ever on at one time, that video signal will be routed to the output. If the computer fails or there is a power interrupt, the default output state of a 74LS273 is normally high. Therefore, the highest priority camera should be attached to that input. If the system gets deep-sixed, the output will default to that camera and will still be of some use (I could also have used one of the normally-closed contacts instead but chose not to).

Fools and Mother Nature

I’m sure you’re curious so I will anticipate your question and answer it at the same time. With all the high-tech stuff that I continually present, how come I used mechanical relays? The answer is lightning! Anyone familiar with my writings will remember that I live in a hazardous environment when it comes to Mother Nature. Every year I get blasted and it’s always the high-tech stuff that gets blitzed.

Because the MTVCM has to work continuously as well as be reliable I had to take measures to protect it from externally-induced calamities. This meant that all the inputs and outputs had to be isolated. In the case of the video mux, the only low-cost totally isolated switches are mechanical relays. CMOS multiplexer chips like the ones I’ve used in other projects are not isolated and would be too susceptible. (Just think of the MTVCM as a computer with three 150-foot lightning collectors running to the cameras.) Relays still serve a useful purpose whatever the level of integrated circuit technology. They also work.

Because the infrared motion sensors are connected to the AC power line and their outputs are common with it, these too had to be isolated to protect the MTVCM. Figure 4 details the circuit of the 4-channel optoisolator input board which connects to the motion detectors.

The Optoisolator Board

The opto board is addressed at CA00H. Reading location CA00H [typically X=XBY(0CA00H)] will read the 8 inputs of the 74LS244. Bits 0-3 are connected to the four optoisolators and bits 4-7 are connected to a 4-pole DIP switch which is used for configuration and setup. Between the optoisolators and the LS244 are four 74LS86 exclusive-OR gates. They function as selectable inverters. Depending upon the inputs to the optoisolators (normally high or low) and the settings of DIP SW2, you can select what level outputs you want for your program (guys like me who never got the hang of using PNP transistors have to design hardware so that whatever programming we are forced to do can at least be done in positive logic).
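A sketch of how a program might unpack a byte read from CA00H, splitting the optoisolator bits from the DIP-switch bits and modeling the XOR polarity selection in software. The names are illustrative, not from the original code:

```python
def decode_opto_byte(raw, invert_mask=0b0000):
    """Split a byte read from CA00H (X=XBY(0CA00H) in BASIC-52).

    Bits 0-3: the four optoisolator inputs.
    Bits 4-7: the 4-pole DIP switch used for configuration.
    invert_mask models DIP SW2 selecting which inputs get inverted,
    the way the 74LS86 exclusive-OR gates do in hardware.
    """
    sensors = (raw & 0x0F) ^ (invert_mask & 0x0F)
    dip_switch = (raw >> 4) & 0x0F
    return sensors, dip_switch

# Example: sensors read as active-low 0b1101 (channel 1 tripped);
# inverting all four bits yields positive-logic 0b0010.
sensors, dip = decode_opto_byte(0b0011_1101, invert_mask=0b1111)
print(f"sensors={sensors:04b} dip={dip:04b}")  # sensors=0010 dip=0011
```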

The optoisolators are common units sold by Opto22, Gordos, and other manufacturers. They are generically designated as either IAC5 or IDC5 modules depending upon whether the input voltage is 115 VAC or 5-48 VDC. Since the motion detectors I used were designed to control AC flood lights, I used the IAC5 units connected across the lights.

Now that we have the hardware I suppose we have to have some software. For all practical purposes, however, virtually none is required. Since the MTVCM is designed with hardcoded parallel port addressing, you only need about a three-line program to read the inputs, make a decision and select a video mux channel; you know, something like READ CA00H, juggle it, and OUT C900H. I love simple software.
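That read/juggle/write loop maps to just a few lines in any language. A Python sketch with stand-in peek/poke functions (the real program uses BASIC-52’s XBY operator, and the sensor-to-camera mapping here is an assumption for illustration):

```python
OPTO_ADDR = 0xCA00   # optoisolator input board
MUX_ADDR = 0xC900    # relay multiplexer latch

def camera_for(sensor_bits):
    """Map the lowest tripped motion sensor (bits 0-3) to a mux code."""
    for channel in range(4):
        if sensor_bits & (1 << channel):
            return 0b1000 | (3 - channel)  # decoder enable + relay address
    return 0b1011                          # no motion: default to camera 1

def control_step(peek, poke):
    """One pass of the 'read, juggle, write' loop."""
    sensors = peek(OPTO_ADDR) & 0x0F
    poke(MUX_ADDR, camera_for(sensors))

# Try it with a fake memory map: sensor 3 (bit 2) tripped.
mem = {OPTO_ADDR: 0b0100}
control_step(mem.get, mem.__setitem__)
print(bin(mem[MUX_ADDR]))  # 0b1001 -> camera 3
```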

Of course, I got a little more carried away when I actually wrote my camera control program. I use a lot of REM statements to figure out what I did. Since it would take up too much room here, I’ve posted the MTVCM mux control software on the Circuit Cellar BBS (203-871-1988) where you can download it if you want to learn more. Basically, it just sits there looking at camera #1. If it receives a motion input from one of the sensors, it switches to the appropriate camera and generates a “camera ready” output (TTL output which is optoisolated at the other end) to the ImageWise in the house. It stays on that camera if it senses additional motion or switches to other cameras if it senses motion in their surveillance area. Eventually, it times out and goes back to camera #1.

Basically, that’s all there is to the MTVCM. If you are an engineer you can think of it as a lightning-proof electrically-isolated process-control system. If not, just put it in your entertainment room and use it as a real neat camera controller. Now I’ve opened a real bag of worms. Remotely controlling the ImageWise digitizer/transmitter from my office through the house to the MTVCM is turning into a bigger task than I originally conceived. Getting the proper picture and tracking someone in the driveway is only part of the task.

I can already envision a rack of computer equipment in the house which has to synchronize this data traffic. My biggest worry is not how much coordination or equipment it will involve, but how I can design it so that I can do it all with a three-line BASIC program! Be assured that I’ll tell you how as the saga unfolds.

Article first appeared in Issue 1 of Circuit Cellar magazine – January/February 1988

Many complicated motion control and power electronics systems comprise thousands of parts and dozens of embedded systems. Thus, it makes sense that a systems engineer like New Jersey-based John Roselle would have more than one workspace for simultaneously planning, designing, and testing multiple systems.

John Roselle’s space for designing circuits and electronic systems (Source: John Roselle)

Roselle recently submitted images of his space and provided some interesting feedback when we asked him about it.

My main work space for testing and debugging of circuits consists of nothing more than a kitchen table with two shelves attached to the wall. Shown in the picture (see above) is a 265-V digital motor drive for a fin control system for an underwater application. In a second room I have a computer design center.

I design and test mostly motor drives for motion control products for various applications, such as underwater vehicles, missile hatch door motor drives, and test equipment for testing the products I design.

A second room serves as “computer design center” (Source: John Roselle)

John’s third workspace is used mainly for testing and assembling. At times there might be two or three different projects going on at once, he added.

The third space is used to test and assemble systems (Source: John Roselle)

Do you want to share images of your workspace, hackspace, or “circuit cellar”? Send us your images and info about your space.

How do you clean a clean-energy generating system? With a microcontroller (and a few other parts, of course). An excellent example is US designer Scott Potter’s award-winning, Renesas RL78 microcontroller-based Electrostatic Cleaning Robot system that cleans heliostats (i.e., solar-tracking mirrors) used in solar energy-harvesting systems. Renesas and Circuit Cellar magazine announced this week at DevCon 2012 in Garden Grove, CA, that Potter’s design won First Prize in the RL78 Green Energy Challenge.

This image depicts two Electrostatic Cleaning Robots set up on two heliostats. (Source: S. Potter)

The nearby image depicts two Electrostatic Cleaning Robots set up vertically in order to clean the two heliostats in a horizontal left-to-right (and vice versa) fashion.

The Electrostatic Cleaning Robot in place to clean

Potter’s design can quickly clean heliostats in Concentrating Solar Power (CSP) plants. The heliostats must be clean in order to maximize steam production, which generates power.

The robot cleaner prototype

Built around an RL78 microcontroller, the Electrostatic Cleaning Robot provides a reliable cleaning solution that’s powered entirely by photovoltaic cells. The robot traverses the surface of the mirror and uses a high-voltage AC electric field to sweep away dust and debris.

Parts and circuitry inside the robot cleaner

Object-oriented C++ software, developed with the IAR Embedded Workbench and the RL78 Demonstration Kit, controls the device.

I designed a microcontroller-based mobile robot that can cruise on its own, avoid obstacles, escape from inadvertent collisions, and track a light source. In the first part of this series, I introduced my TOMBOT robot’s hardware. Now I’ll describe its software and how to achieve autonomous robot behavior.

Autonomous Behavior Model Overview
The TOMBOT is a minimalist system with just enough components to demonstrate some simple autonomous behaviors: Cruise, Escape, Avoid, and Home (see Figure 1). All the behaviors require left and right servos for maneuverability. In general, “Cruise” just keeps the robot in motion in lieu of any stimulus. “Escape” uses the bumper to sense a collision, then reverses and spins 180°. “Avoid” makes use of continuously sampled, forward-looking IR sensors to veer left or right upon approaching a close obstacle. Finally, “Home” uses the front photocells to guide the robot toward a strong, highly directional light source.

Figure 1: High-level autonomous behavior flow

Figure 2 shows more details. The diagram captures the interaction of TOMBOT hardware and software. On the left side of the diagram are the sensors, power sources, and command override (the XBee radio command input). All analog sensor inputs and bumper switches are sampled automatically (every 100 ms) during the Microchip Technology PIC32 Timer 1 interrupt. The bumper left and right switches undergo debounce using 100 ms as a timer increment. The analog sensor inputs are digitized using the PIC32’s 10-bit ADC. Each sensor is assigned its own ADC channel input. The collected data is averaged in some cases and then made available for use by the different behaviors. Processing other than averaging is done within the behavior itself.

Figure 2: Detailed TOMBOT autonomous model

All behaviors are implemented as state machines. If a behavior requests motor control, it is internally arbitrated against all other behaviors before motor action is taken. Escape has the highest priority (the power behavior is not yet implemented) and its state machine will dominate over all the other behaviors. If escape is not active, then avoid will dominate if its IR detectors sense an object less than 8″ in front of the TOMBOT. If escape and avoid are not active, then home will take over robot steering to track a light source immediately in front of the TOMBOT. Finally, cruise assumes command and takes the TOMBOT in a forward direction temporarily.
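The fixed-priority arbitration just described can be sketched as a simple loop over behaviors ordered from highest to lowest priority. This is a hedged illustration of the idea, not the actual PIC32 C implementation; the names and motor-request format are assumptions:

```python
# Priority arbitration over TOMBOT's behaviors, highest priority first.
# Each behavior returns a motor request (a (left, right) speed tuple) or
# None when inactive; the first active behavior wins the wheels.

def arbitrate(behaviors):
    for name, behavior in behaviors:
        request = behavior()
        if request is not None:
            return name, request
    return "idle", (0, 0)

# Stand-in behaviors: escape/avoid/home inactive, cruise always active.
behaviors = [
    ("escape", lambda: None),       # bumper not pressed
    ("avoid",  lambda: None),       # no obstacle within 8 inches
    ("home",   lambda: None),       # no light source detected
    ("cruise", lambda: (50, 50)),   # default: drive straight ahead
]
print(arbitrate(behaviors))  # ('cruise', (50, 50))
```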

A received command from the XBee RF module can stop and start autonomous operation remotely, which is very handy for system debugging. Complete values of all sensors and battery power can be viewed on the graphics display using a remote command, with the LEDs and buzzer announcing remote command acceptance and execution.

Currently, the green LED is used to signal that the TOMBOT is ready to accept a command. Red is used to indicate that the TOMBOT is executing a command. The buzzer indicates that the remote command has completed, coincident with the red LED turning on.

With behavior programming, there are a lot of considerations. For successful autonomous operation, calibration of the photocells, IR sensors, and servos is required. The good news is that each of these behaviors can be isolated (selectively comment out what is not needed prior to compile time), so that phenomena can be studied one at a time and the proper calibrations made. We will discuss this as we get a little deeper into the library API, but in general, behavior modeling does not require accurate calibration and is fairly robust under less-than-ideal conditions.

TOMBOT Software Library
The TOMBOT robot library is modular. Some experience with C programming is required to use it (see Figure 3).

Figure 3: TOMBOT Library

The entire library is written using Microchip’s PIC32 C compiler. Both the compiler and Microchip’s MPLAB 8.xx IDE are available as free downloads at www.microchip.com. The overall library structure is shown. At the highest level, the library has three main sections: Motor, I/O, and Behavior. We’ll cover each area in some detail.

TOMBOT Motor Library
All functions controlling the servos’ (left and right wheel) operation are contained in this part of the library (see Listing 1, Motor.h). In addition, the Microchip PIC32 peripheral library is also used. Motor initialization is required before any other library functions. Motor initialization starts up both the left and right servos in the idle position, using PIC32 PWM peripherals OC3 and OC4 and the dual Timer34 (32 bits) for period setting. C define statements are used to set the pulse period and duty cycle for both left and right wheels. These defines vary the pulse width from 1.5 ms to 2 ms for different CCW rotation speeds, and from 1.5 ms down to 1 ms for CW rotation, over a 20-ms period.

Listing 1: All functions controlling the servos are in this part of the library.

V_LEFT and V_RIGHT (velocity left and right) use the PIC32 peripheral library function to set duty cycle. The other motor functions, in turn, use V_LEFT and V_RIGHT with the define statements. See FORWARD and BACKWARD functions as an example (see Listing 2).

Listing 2: Motor function code examples

In the idle setting, both PWM outputs are set to the 1.5-ms center position, which should cause the servos not to turn. A servo calibration process is required to ensure that the center position does not result in any rotation. Each servo has a set screw that can be used to adjust motor idle to no-spin activity with a small Phillips screwdriver.
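The pulse widths above translate into PWM duty cycles over the 20-ms servo frame. A sketch of the arithmetic the C define statements would encode, using the usual continuous-rotation RC-servo convention (the exact values are illustrative, not taken from the TOMBOT source):

```python
# Continuous-rotation servo timing: pulses repeat every 20 ms.
# ~1.5 ms is the idle (no-rotation) center; longer pulses turn one way,
# shorter pulses the other.
PERIOD_MS = 20.0

def duty_cycle(pulse_ms):
    """Fraction of the 20-ms frame the PWM output is high."""
    return pulse_ms / PERIOD_MS

for label, pulse in (("full CCW", 2.0), ("idle", 1.5), ("full CW", 1.0)):
    print(f"{label:>8}: {pulse} ms pulse = {duty_cycle(pulse):.1%} duty")
```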

TOMBOT I/O Library

This is a collection of different low-level library functions. Let’s deal with these by examining their files and describing the function set, starting with timer (see Listing 3). It uses the Timer45 combination (a full 32 bits) as a precision timer for behaviors. C define statements set the different time values. The routine is non-interrupt at this time and simply waits on timer timeout to return.

Listing 3: Low-level library functions

The next I/O library function is ADC. There are a total of five analog inputs all defined below. Each sensor definition corresponds to an integer (32-bit number) designating the specific input channel to which a sensor is connected. The five are: Right IR, Left IR, Battery, Left Photo Cell, Right Photo Cell.

The initialization function initializes the ADC peripheral for the specific channel. The read function performs a 10-bit ADC conversion and returns the result. To facilitate operation across the five sensors, we use the SCAN_SENSORS function. This does an initialization and conversion of each sensor in turn. The results are placed in global memory where the behavior functions can access them. SCAN_SENSORS also performs a running average of the last eight samples of the left and right photocells (see Listing 4).

Listing 4: SCAN_SENSORS also performs a running average of the last eight samples
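The eight-sample running average is a common smoothing trick. A Python sketch of the equivalent logic (the real code is C on the PIC32, and the names here are illustrative):

```python
from collections import deque

class RunningAverage:
    """Average of the last N samples, as SCAN_SENSORS does for the photocells."""
    def __init__(self, n=8):
        self.samples = deque(maxlen=n)  # old samples fall off automatically

    def add(self, value):
        self.samples.append(value)
        return sum(self.samples) / len(self.samples)

# Smooth a noisy stream of 10-bit ADC readings from the left photocell.
photo_left = RunningAverage(8)
for adc_reading in (512, 520, 508, 516, 512, 514, 510, 512):
    avg = photo_left.add(adc_reading)
print(f"smoothed reading: {avg:.1f}")  # 513.0
```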

The next I/O library function is Graphics (see Listing 5). TOMBOT uses a 102 × 64 monochrome graphics display module that has both red and green LED backlights. There are also red and green LEDs on the module that are independently controlled. The module is driven by the PIC32 SPI2 interface and has several control lines: CS (chip select) and A0 (command/data).

Listing 5: The Graphics I/O library function

The graphics display relies on an 8 × 8 font, stored as a project file, for character generation. Within the library there are also cursor-position macros, functions to write characters or text strings, and functions to draw 32 × 32 bitmaps. The library’s graphic primitives cover initialization, module control, and writing to the module. The library writes to a RAM Vmap memory area, and the screen is then updated from this RAM area using the dumpVmap function. The LED and backlight controls are also included within the graphics library.
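A minimal sketch of the Vmap idea, assuming the page-organized layout common to 102 × 64 modules (eight pages of 102 column bytes). Function names are illustrative, and the real dumpVmap would push each byte over SPI2 rather than summing it.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* RAM shadow buffer ("Vmap") for a 102x64 display, one bit per
   pixel, organized as 8 pages of 102 column bytes (assumed layout). */
#define LCD_W 102
#define LCD_H 64
static uint8_t vmap[(LCD_H / 8) * LCD_W];

static void vmap_clear(void) { memset(vmap, 0, sizeof vmap); }

/* Drawing primitives touch only RAM, never the module directly. */
static void vmap_set_pixel(int x, int y) {
    if (x < 0 || x >= LCD_W || y < 0 || y >= LCD_H) return;
    vmap[(y / 8) * LCD_W + x] |= (uint8_t)(1u << (y % 8));
}

/* Stand-in for dumpVmap: walk the whole buffer as a refresh would;
   here it returns a checksum instead of writing to SPI2. */
static uint32_t vmap_dump(void) {
    uint32_t sum = 0;
    for (unsigned i = 0; i < sizeof vmap; i++) sum += vmap[i];
    return sum;
}
```

Separating drawing (RAM writes) from the dump keeps SPI traffic to one burst per screen update.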

The next part of the I/O library is delay (see Listing 6). It is just a series of software delays that can be used by other library functions. They were included only because of legacy use with the graphics library.

Listing 6: Series of different software delays

The next I/O library function set is UART-XBee (see Listing 7). This is the serial driver that configures and transfers data through the XBee radio on the robot side. The library is fairly straightforward. It has an initialize function to set up UART1B for 9600-bps 8N1 transmit and receive.

Listing 7: XBee library functions
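The UART setup boils down to one divisor calculation. This sketch assumes the PIC32’s standard-speed (16x) baud formula, UxBRG = Fpb/(16 × baud) − 1, and an 80-MHz peripheral clock; the 80-MHz figure is my assumption based on the Experimenter’s core clock, since the article does not give the clock configuration.

```c
#include <assert.h>
#include <stdint.h>

/* Baud-rate divisor for a PIC32 UART in standard-speed (16x) mode,
   with rounding to the nearest divisor. */
static uint32_t uart_brg(uint32_t fpb_hz, uint32_t baud) {
    return (fpb_hz + 8u * baud) / (16u * baud) - 1u;
}

/* Actual baud produced by a given divisor, for error checking. */
static uint32_t uart_actual_baud(uint32_t fpb_hz, uint32_t brg) {
    return fpb_hz / (16u * (brg + 1u));
}
```

At 80 MHz and 9,600 bps the residual baud error is well under 1%, which is comfortably within the XBee’s tolerance.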

Transmission is done one character at a time. Reception is handled by an interrupt service routine, in which the received character is retrieved and a semaphore flag is set. For this communication, I use a SparkFun XBee dongle configured through USB as a COM port and then run HyperTerminal or an equivalent application on a PC. The default XBee settings are all that is required (see Photo 1).

Photo 1: XBee PC to TOMBOT communications
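The receive path just described can be sketched like this. The names are hypothetical, and on the real PIC32 the ISR body would live inside the UART interrupt handler rather than being called directly.

```c
#include <assert.h>
#include <stdint.h>

/* One-character receive scheme: the ISR stores the byte and raises
   a flag; the main loop polls the flag. 'volatile' matters because
   both contexts touch these variables. */
static volatile uint8_t rx_char;
static volatile uint8_t rx_flag;   /* semaphore: 1 = char waiting */

/* Body of the UART RX interrupt handler (invoked by hardware). */
static void uart_rx_isr(uint8_t received) {
    rx_char = received;
    rx_flag = 1;
}

/* Main-loop side: returns 1 and the character if one has arrived. */
static int uart_poll(uint8_t *out) {
    if (!rx_flag) return 0;
    *out = rx_char;
    rx_flag = 0;       /* consume the semaphore */
    return 1;
}
```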

The next I/O library function set is the buzzer (see Listing 8). It uses a simple digital output (Port F bit 1) to control a buzzer. The functions initialize the buzzer control and then turn the buzzer on and off.

Listing 8: The functions initialize buzzer control

TOMBOT Behavior Library
The Behavior library is the heart of the autonomous TOMBOT and where integrated behavior happens. All of these behaviors require the use of the left and right servos for autonomous maneuverability. Each behavior is a finite state machine that interacts with the environment every 0.1 s. Each behavior has a designated priority for wheel operation, and these priorities are resolved by the arbiter for final wheel activation. Listing 9 shows the API for the entire Behavior library.

Listing 9: The API for the entire behavior library
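Here is a minimal sketch of how a fixed-priority arbiter over these behaviors could look. The priority ordering (Escape above Avoid above Home above Cruise) and the request encoding are my assumptions for illustration, not the actual library API.

```c
#include <assert.h>

/* Behavior slots in ascending priority order (assumed ordering). */
enum { NONE = -1, CRUISE, HOME, AVOID, ESCAPE, NUM_BEHAVIORS };

typedef struct { int active; int left_cmd; int right_cmd; } request_t;

static request_t requests[NUM_BEHAVIORS];

/* Each behavior posts a wheel request during its 0.1-s tick. */
static void post(int behavior, int left, int right) {
    requests[behavior].active = 1;
    requests[behavior].left_cmd = left;
    requests[behavior].right_cmd = right;
}

/* Highest-priority active request wins; returns it and its commands. */
static int arbitrate(int *left, int *right) {
    for (int b = ESCAPE; b >= CRUISE; b--) {
        if (requests[b].active) {
            *left = requests[b].left_cmd;
            *right = requests[b].right_cmd;
            requests[b].active = 0;  /* consume for this tick */
            return b;
        }
    }
    return NONE;
}
```

Because arbitration runs once per 0.1-s tick, a lower-priority behavior such as Cruise simply keeps re-posting until nothing outranks it.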

Let’s briefly cover the specifics.

“Cruise” just keeps the robot in motion in the absence of any stimulus.

“Escape” uses the bumper to sense a collision and then performs a 180° spin with reverse.

“Avoid” makes use of continuous forward-looking IR sensors to veer left or right upon approaching a close obstacle.

“Arbiter” is an internal function, intrinsic to the behavior library, that resolves competing behavior priorities for wheel activation.

Here’s an example of the Main function invoking different behaviors using the API (see Listing 10). Note that this is part of a main loop. Behaviors can be called within a main loop or “stacked up.” You can remove or stack behaviors as you choose (simply comment out what you don’t need and recompile). Keep in mind that Remote is a way for a remote operator to control operation or view status.

Listing 10: TOMBOT API Example

Let’s now examine the detailed state machine associated with each behavior to gain a better understanding of behavior operation (see Listing 11).

Listing 11: The TOMBOT’s arbiter

The arbiter is simple for TOMBOT; it is a fixed arbiter. If either Escape or Avoid is active, it abdicates to those behaviors and lets them resolve motor control internally. Home or Cruise motor-control requests are handled directly by the arbiter (see Listing 12).

Listing 12: Home behavior

Home is still being debugged and is not yet final. The goal of Home is to steer the robot toward a strong light source when it is not engaged in higher-priority behaviors.

The Cruise behavior sets the motors to forward operation for one second if no higher-priority behaviors are active (see Listing 13).

Listing 13: Cruise behavior

The Escape behavior tests the bumper switch state to determine whether a bump has been detected (see Listing 14). Once one is detected, it runs through a series of states: first an immediate backup, then a turn-around, and finally a move away from the obstacle.

Listing 14: Escape behavior
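The state sequence just described can be sketched as a small state machine. The tick counts and names here are placeholders, not the article’s actual timing.

```c
#include <assert.h>

/* Escape sequence: on a bump, back up, spin ~180 degrees, drive away. */
typedef enum { ESC_IDLE, ESC_BACKUP, ESC_SPIN, ESC_FORWARD } esc_state_t;

static esc_state_t esc_state = ESC_IDLE;
static int esc_ticks;

/* One 0.1-s tick: returns 1 while Escape wants the wheels. */
static int escape_step(int bumped) {
    switch (esc_state) {
    case ESC_IDLE:
        if (bumped) { esc_state = ESC_BACKUP; esc_ticks = 5; }
        break;
    case ESC_BACKUP:                     /* immediate backup */
        if (--esc_ticks == 0) { esc_state = ESC_SPIN; esc_ticks = 10; }
        break;
    case ESC_SPIN:                       /* ~180-degree turn */
        if (--esc_ticks == 0) { esc_state = ESC_FORWARD; esc_ticks = 5; }
        break;
    case ESC_FORWARD:                    /* move away from obstacle */
        if (--esc_ticks == 0) esc_state = ESC_IDLE;
        break;
    }
    return esc_state != ESC_IDLE;
}
```

Once triggered, the machine ignores further bumps until it returns to idle, which keeps a single collision from restarting the sequence mid-spin.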

This function is a response to the remote “C” (capture) command. It formats and dumps the left and right IR readings, the left and right photocell readings, and the battery voltage, in floating-point format, to the graphics display (see Listing 15).

Listing 15: The dump function

This behavior uses the IR sensors and determines if an object is within 8″ of the front of TOMBOT (see Listing 16).

Listing 16: Avoid behavior

If both sensors detect a target within 8″, the robot simply turns around and moves away (much like Escape). If only the right sensor detects an object in range, the robot spins away from the right side; if only the left, it spins away from the left side (see Listing 17).

Listing 17: Remote part 1
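The Avoid decision logic described above reduces to a small pure function. The command names and the boolean “in range” inputs here are illustrative only.

```c
#include <assert.h>

/* Avoid decision: both IR sensors in range -> turn around;
   right only -> spin away from the right side; left only -> spin
   away from the left side. */
typedef enum { AV_NONE, AV_TURN_AROUND, AV_SPIN_LEFT, AV_SPIN_RIGHT } avoid_t;

static avoid_t avoid_decide(int left_ir_near, int right_ir_near) {
    if (left_ir_near && right_ir_near) return AV_TURN_AROUND;
    if (right_ir_near)                 return AV_SPIN_LEFT;  /* away from right */
    if (left_ir_near)                  return AV_SPIN_RIGHT; /* away from left */
    return AV_NONE;
}
```

In the real behavior the two inputs would come from thresholding the averaged IR ADC readings against the ~8″ range.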

Remote behavior is fairly comprehensive (see Listing 18). There are 14 different cases. Each case is driven by a different XBee received radio character. Once a character is received the red LED is turned on. Once the behavior is complete, the red LED is turned off and a buzzer is sounded.

Listing 18: Remote part 2

The first case toggles Autonomous mode on and off. The other 13 are prescribed actions. Seven of the 13 were written to demonstrate TOMBOT’s mobile agility with multiple spins and back-and-forth moves. The final six are standard single-step debug commands such as stop, backward, and capture. Capture dumps all sensor output to the display screen (see Table 1).

Table 1: TOMBOT remote commands
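The remote dispatch is essentially one switch statement over the received character. This sketch shows only a few of the 14 cases, and the command characters are guesses except “C” (capture), which the article names.

```c
#include <assert.h>

enum { CMD_NONE, CMD_TOGGLE_AUTO, CMD_STOP, CMD_BACKWARD, CMD_CAPTURE };

static int autonomous = 0;

/* Map one received XBee character to a command; the real handler
   would also drive the red LED and buzzer around each action. */
static int remote_dispatch(char c) {
    switch (c) {
    case 'A': autonomous = !autonomous; return CMD_TOGGLE_AUTO;
    case 'S': return CMD_STOP;
    case 'B': return CMD_BACKWARD;
    case 'C': return CMD_CAPTURE;   /* dump sensor output to display */
    default:  return CMD_NONE;      /* ignore unknown characters */
    }
}
```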

Early Findings & Implementation
Implementation always presents a choice. In my particular case, I was interested in rapid development, so I elected to use non-interrupt code with a linear flow for easy debugging. This amounts to “blocking code.” Blocking code is used throughout the behavior implementation and causes the robot to be nonresponsive while a block is in progress. Blocking occurs wherever the timeout functions run; during those intervals the robot is “blind” to outside environmental conditions. Using a real-time operating system (e.g., FreeRTOS) to eliminate this problem is recommended.
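To make the blocking issue concrete, here is a sketch contrasting a blocking delay with the nonblocking deadline check an RTOS task or superloop could use instead; tick_now() stands in for a free-running hardware timer.

```c
#include <assert.h>
#include <stdint.h>

/* Simulated free-running timer for illustration. */
static uint32_t fake_ticks;
static uint32_t tick_now(void) { return fake_ticks; }

/* Blocking: nothing else runs until the deadline passes. */
static void delay_blocking(uint32_t ticks) {
    uint32_t start = tick_now();
    while ((uint32_t)(tick_now() - start) < ticks)
        fake_ticks++;  /* real code would spin on the hardware timer */
}

/* Nonblocking: caller records a start time and polls; other
   behaviors keep running between polls. */
static int deadline_expired(uint32_t start, uint32_t ticks) {
    return (uint32_t)(tick_now() - start) >= ticks;
}
```

The unsigned subtraction makes the comparison correct even when the tick counter wraps around.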

The TOMBOT also uses photocells for homing. These sensitive devices have varying responses and need to be calibrated to ensure correct behavior. A photocell calibration step is included in the baseline and used prior to operation.

TOMBOT Demo

The TOMBOT was successfully demoed to a large first-grade class in southern California as part of a Science, Technology, Engineering and Mathematics (STEM) program. The main behaviors were limited to Remote, Avoid, and Escape. With autonomous operation off, the robot demonstrated mobility and maneuverability. With autonomous operation on, the robot could interact with a student to demo avoid and escape behavior.

Tom Kibalo holds a BSEE from City College of New York and an MSEE from the University of Maryland. He has 39 years of engineering experience with a number of companies in the Washington, DC area. Tom is an adjunct EE faculty member at a local community college, and he is president of Kibacorp, a Microchip Design Partner.

The projects listed below placed at the top of Renesas’s RL78 Green Energy Challenge.

Electrostatic Cleaning Robot: Solar tracking mirrors, called heliostats, are an integral part of Concentrating Solar Power (CSP) plants. They must be kept clean to help maximize the production of steam, which generates power. Using an RL78, the innovative Electrostatic Cleaning Robot provides a reliable cleaning solution that’s powered entirely by photovoltaic cells. The robot traverses the surface of the mirror and uses a high voltage AC electric field to sweep away dust and debris.

Parts and circuitry inside the robot cleaner

Cloud Electrofusion Machine: Using approximately 400 times less energy than commercial electrofusion machines, the Cloud Electrofusion Machine is designed for welding 0.5″ to 2″ polyethylene fittings. The RL78-controlled machine is designed to read a barcode on the fitting which determines fusion parameters and traceability. Along with the barcode data, the system logs GPS location to an SD card, if present, and transmits the data for each fusion to a cloud database for tracking purposes and quality control.

Inside the electrofusion machine (Source: M. Hamilton)

The Sun Chaser: A GPS Reference Station: The Sun Chaser is a well-designed, solar-based energy harvesting system that automatically recalculates the direction of a solar panel to ensure it is always facing the sun. Mounted on a rotating disc, the solar panel’s orientation is calculated using the registered GPS position. With an external compass, the internal accelerometer, a DC motor and stepper motor, you can determine the solar panel’s exact position. The system uses the Renesas RDKRL78G13 evaluation board running the Micrium µC/OS-III real-time kernel.

Water Heater by Solar Concentration: This solar water heater is powered by the RL78 evaluation board and designed to deflect concentrated amounts of sunlight onto a water pipe for continual heating. The deflector, armed with a counterweight for easy tilting, automatically adjusts the angle of reflection for maximum solar energy using the lowest power consumption possible.

RL78-based solar water heater (Source: P. Berquin)

Air Quality Mapper: Want to make sure the air along your daily walking path is clean? The Air Quality Mapper is a portable device designed to track levels of CO2 and CO gasses for constructing “Smog Maps” to determine the healthiest routes. Constructed with an RDKRL78G13, the Mapper receives location data from its GPS module, takes readings of the CO2 and CO concentrations along a specific route and stores the data in an SD card. Using a PC, you can parse the SD card data, plot it, and upload it automatically to an online MySQL database that presents the data in a Google map.

Air quality mapper design (Source: R. Alvarez Torrico)

Wireless Remote Solar-Powered “Meteo Sensor”: You can easily measure meteorological parameters with the “Meteo Sensor.” The RL78 MCU-based design takes cyclical measurements of temperature, humidity, atmospheric pressure, and supply voltage, and shares them using digital radio transceivers. Receivers are configured to listen for incoming data on the same radio channel. The design simplifies the way weather data is gathered and eases the construction of local measurement networks, while being optimized for low energy usage and long battery life.

The design takes cyclical measurements of temperature, humidity, atmospheric pressure, and supply voltage, and shares them using digital radio transceivers. (Source: G. Kaczmarek)

Portable Power Quality Meter: Monitoring electrical usage is becoming increasingly popular in modern homes. The Portable Power Quality Meter uses an RL78 MCU to read power factor, total harmonic distortion, line frequency, voltage, and electrical consumption information and stores the data for analysis.

The portable power quality meter uses an RL78 MCU to read power factor, total harmonic distortion, line frequency, voltage, and electrical consumption information and stores the data for analysis. (Source: A. Barbosa)

High-Altitude Low-Cost Experimental Glider (HALO): The “HALO” experimental glider project consists of three main parts. A weather balloon is the carrier section. A glider (the payload of the balloon) is the return section. A ground base section is used for communication and displaying telemetry data (not part of the contest project). Using the REFLEX flight simulator for testing, the glider has its own micro-GPS receiver, sensors, and low-power MCU unit. It can take off, climb to a pre-programmed altitude, and return to a given coordinate.

Welcome to “Robot Boot Camp.” In this two-part article series, I’ll explain what you can do with a basic mobile machine, a few sensors, and behavioral programming techniques. Behavioral programming provides distinct advantages over other programming techniques: it is independent of any environmental model, it is more robust in the face of sensor error, and behaviors can be stacked and run concurrently.

My objectives for my recent robot design were fairly modest. I wanted to build a robot that could cruise on its own, avoid obstacles, escape from inadvertent collisions, and track a light source. I knew that if I could meet these objectives, other more complex behaviors would be possible (e.g., self-docking on low power). There are certainly many commercial robots on the market that could have met my requirements, but I decided that my best bet would be to roll my own. I wanted to keep things simple, and I wanted to fully understand the sensors and controls required for behavioral autonomous operation. The TOMBOT is the fruit of that labor (see Photo 1a). A colleague came up with the name TOMBOT in honor of its inventor, and the name kind of stuck.

In this series of articles, I’ll present lessons learned and describe the hardware/software design process. The series will detail TOMBOT-style robot hardware and assembly, as well as behavior programming techniques using C code. By the end of the series, I’ll have covered a complete behavior programming library and API, which will be available for experimentation.

DESIGN BASICS

The TOMBOT robot is certainly minimal, no frills: two continuous-rotation, variable-speed control servos; two IR (850 nm) analog distance measurement sensors (4- to 30-cm range); two CdS photoconductive cells with good lux response in visible spectrum; and, finally, a front bumper (switch-activated) for collision detection. The platform is simple: servos and sensors on the left and right side of two level platforms. The bottom platform houses bumper, batteries, and servos. The top platform houses sensors and microcontroller electronics. The back part of the bottom platform uses a central skid for balance between the two servos (see Photo 1).

Given my background as a Microchip developer and Academic Partner, I used a Microchip Technology PIC32 microcontroller, a PICkit 3 programmer/debugger, and the free Microchip IDE and 32-bit compiler for TOMBOT. (Refer to the TOMBOT components list at the end of this article.)

It was a real thrill to design and build a minimal capability robot that can—with stacking programming behaviors—emulate some “intelligence.” TOMBOT is still a work in progress, but I recently had the privilege of demoing it to a first grade class in El Segundo, CA, as part of a Science Technology Engineering and Mathematics (STEM) initiative. The results were very rewarding, but more on that later.

BEHAVIORAL PROGRAMMING

A control system for a completely autonomous mobile robot must perform many complex information-processing tasks in real time, even for simple applications. The traditional approach to building control systems for such robots is to separate the problem into a series of sequential functional components. An alternative is behavioral programming. The technique was introduced by Rodney Brooks of the MIT Robotics Lab, and it has been very successful in the implementation of many commercial robots, such as the popular Roomba vacuuming robot. It was even adopted for space applications like NASA’s Mars rovers and for military seekers.

Programming a robot according to behavior-based principles makes the program inherently parallel, enabling the robot to attend simultaneously to all hazards it may encounter as well as any serendipitous opportunities that may arise. Each behavior functions independently through sensor registration, perception, and action. In the end, all behavior requests are prioritized and arbitrated before action is taken. By stacking the appropriate behaviors, using arbitrated software techniques, the robot appears to show (broadly speaking) “increasing intelligence.” The TOMBOT modestly achieves this objective using selective compile configurations to emulate a series of robot behaviors (i.e., Cruise, Home, Escape, Avoid, and Low Power). Figure 1 is a simple model illustration of a behavior program.

Debugging a mobile platform that is executing a series of concurrent behaviors can be a daunting task. So, to make things easier, I implemented complete remote control using a wireless link between the robot and a PC. With this link, I can enable or disable autonomous behavior, retrieve the robot’s sensor status and mode of operation, and head off potential robot hazards. In addition, I implemented some operator feedback using a small graphics display, LEDs, and a simple sound buzzer. Note the TOMBOT’s power-up display in Photo 1b. We take Robot Boot Camp very seriously.

Minimalist System

As you can see in the robot’s block diagram (see Figure 2), the TOMBOT is very much a minimalist system with just enough components to demonstrate autonomous behaviors: Cruise, Escape, Avoid, and Home. All these behaviors require the use of left and right servos for autonomous maneuverability.

Figure 2: The TOMBOT system

The Cruise behavior just keeps the robot in motion in the absence of any stimulus. The Escape behavior uses the bumper to sense a collision and then performs a 180° spin with reverse. The Avoid behavior makes use of continuous forward-looking IR sensors to veer left or right upon approaching a close obstacle. The Home behavior utilizes the front optical photocells to guide the robot toward a strong, highly directional light source. It all should add up to some very distinct “intelligent” operation. Figure 3 depicts the basic sensor and electronic layout.

Photo 2a shows the dual Sharp IR sensors. Just below them is the photocell assembly. It is a custom board with dual CdS GL5528 photoconductive cells and 2.2-kΩ current-limiting resistors. Below this is a bumper assembly consisting of two SPDT snap-action switches with levers (All Electronics Corp. CAT# SMS-196, left and right) fixed to a custom prefab plastic front bumper. Also shown are the solderless breakout board and the left servo. Photo 2b shows the rechargeable battery pack that resides on the lower base platform and the associated power switch. The electronics stack is visible: the XBee/buzzer and graphics card modules reside on the 32-bit Experimenter. The Experimenter is plugged into a custom carrier board that connects through the solderless breakout to the rest of the system. Finally, note that the right servo is highlighted. The total TOMBOT package is not ideal; but remember, I’m talking about a prototype, and this particular configuration has held up nicely in several field demos.

Figure 5 depicts a second-generation bumper assembly. The same snap-action switches with extended levers are bent and fashioned to interconnect a bumper assembly as shown.

Figure 5: Second-generation bumper assembly

TOMBOT Electronics

A 32-bit Micro Experimenter is used as the CPU. This board is based on the high-end Microchip Technology PIC32MX695F512H, a 64-pin TQFP with 128 KB of RAM, 512 KB of flash memory, and an 80-MHz clock. I did not want to skimp on this component during the prototype phase. In addition, the 32-bit Experimenter supports a 102 × 64 monochrome graphics card with green/red backlight controls and LEDs. Since a full graphics library was already bundled with this Experimenter graphics card, it also represented good risk reduction during the prototyping phase. Details for both cards are available on the Kiba website.

The Experimenter supports six basic board-level connections to the outside world using the JP1, JP2, JP3, JP4, BOT, and TOP headers. A custom carrier board interfaces to the Experimenter via these connections and provides power and signal connections to the sensors and servos. The custom carrier accepts battery voltage and regulates it to +5 VDC. This +5 V is then further regulated by the Experimenter to its native +3.3-VDC operation. The solderless breadboard supports a resistor network to scale the +9-V battery voltage down for the +3.3-V PIC processor’s ADC. The breadboard also contains an LM324 quad op-amp to provide a buffer between the processor’s +3.3-V logic and the servos’ required +5-V operation. Figure 6 is a detailed schematic diagram of the electronics.

Figure 6: The design’s circuitry
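The battery-sense divider math can be sketched as follows. The 2:1 ratio used here (e.g., 20 kΩ over 10 kΩ) is my assumption; the article does not give the resistor values.

```c
#include <assert.h>

/* Voltage divider output: Vout = Vbat * Rbottom / (Rtop + Rbottom). */
static double divider_out(double vbat, double r_top, double r_bottom) {
    return vbat * r_bottom / (r_top + r_bottom);
}

/* Recover the battery voltage from a 10-bit ADC reading against
   the PIC32's 3.3-V reference. */
static double battery_volts(unsigned adc10, double r_top, double r_bottom) {
    double vin = 3.3 * (double)adc10 / 1023.0;
    return vin * (r_top + r_bottom) / r_bottom;
}
```

With 20 kΩ over 10 kΩ, a full 9-V battery presents 3.0 V to the ADC, safely below the 3.3-V rail while preserving most of the converter’s range.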

A custom card for the XBee radio carrier and buzzer was built that plugs into the Experimenter’s TOP and BOT connections. Photo 3 shows the modules and the carrier board. The robot uses a rechargeable 1,600-mAh battery system (typical of mid-range wireless toys) that provides hours of uninterrupted operation.

Photo 3: The modules and the carrier board

PIC32 On-Chip Peripherals

The TOMBOT uses PWM for the servos, a UART for the XBee, SPI and digital I/O for the LCD, analog input channels for all the sensors, and digital I/O for the buzzer and bumper detection. The key peripheral connections between the Experimenter and the rest of the system are shown in Figure 7.

Figure 7: Peripheral usage

The PIC32 pinouts and their associated Experimenter connections are detailed in Figure 8.

Figure 8: PIC32 peripheral pinouts and EXP32 connectors

The TOMBOT Motion Basics and the PIC32 Output Compare Peripheral

Let’s review the basics of TOMBOT motor control. The servos are Parallax (Futaba) continuous-rotation servos. With two-wheel control, the robot’s motion is controlled as per Table 1.

Table 1: Robot motion

The servos are controlled using a 20-ms (50-Hz) PWM pattern, where the PWM pulse width can range from 1.0 ms to 2.0 ms. The effects of the different pulse widths on the servos are shown in Figure 9.

Figure 9: Servo PWM control

The PIC32 microcontroller (used in the Experimenter) has five Output Compare modules (OCx, where x = 1, 2, 3, 4, 5). We use two of these peripherals, specifically OC3 and OC4, to generate the PWM that controls servo speed and direction. The OCx module can use either 16-bit Timer2 (TMR2) or 16-bit Timer3 (TMR3), or the two combined as 32-bit Timer23, as a time base and for the period (PR) setting of the output pulse waveform. In our case, we use Timer23 with PR set for a 20-ms (50-Hz) period. The OCxRS and OCxR registers are loaded with a value that controls the width of the pulse generated during the output period. This value is compared against the timer during each period cycle. The OCx output starts high, and when a match occurs, the OCx logic drives the output low. This repeats on a cycle-by-cycle basis (see Figure 10).

Figure 10: PWM generation
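The register math reduces to expressing the pulse width in timer ticks. The 625-kHz tick rate used here (80 MHz with a 1:128 prescaler) is an assumption for illustration; any tick rate works the same way as long as the 20-ms period fits in PR.

```c
#include <assert.h>
#include <stdint.h>

#define TICK_HZ 625000u              /* assumed Timer23 tick rate */
#define PR_20MS (TICK_HZ / 50u)      /* 20-ms (50-Hz) period in ticks */

/* Value to load into OCxRS for a given servo pulse width:
   the pulse width expressed in the same ticks as PR. */
static uint32_t servo_ocrs(uint32_t pulse_us) {
    return (uint32_t)((uint64_t)pulse_us * TICK_HZ / 1000000u);
}
```

At this tick rate the full 1.0-to-2.0-ms servo range maps to 625 counts of resolution, far finer than the servos can actually resolve.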

Next Comes Software

We set the research goals and objectives for our autonomous robot and covered the hardware associated with it. In the next installment, we will describe the software and operation.

Tom Kibalo holds a BSEE from City College of New York and an MSEE from the University of Maryland. He has 39 years of engineering experience with a number of companies in the Washington, DC area. Tom is an adjunct EE faculty member at a local community college, and he is president of Kibacorp, a Microchip Design Partner.

James Kim—a biomedical student at Ryerson University in Toronto, Canada—recently submitted an update on the status of an interesting prosthetic arm design project. The design features a Freescale 9S12 microcontroller and a Microsoft Kinect, which tracks arm movements that are then reproduced on the prosthetic arm.

He also submitted a block diagram.

Overview of the prosthetic arm system (Source: J. Kim)

Kim explains:

The 9S12 microcontroller board we use is Arduino form-factor compatible and was coded in C using Codewarrior. The Kinect was coded in C# using Visual Studio using the latest version of Microsoft Kinect SDK 1.5. In the article, I plan to discuss how the microcontroller was set up to do deterministic control of the motors (including the timer setup and the PID code used), how the control was implemented to compensate for gravitational effects on the arm, and how we interfaced the microcontroller to the PC. This last part will involve a discussion of data logging as well as interfacing with the Kinect.
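Kim’s article is still forthcoming, so as a generic illustration of the deterministic PID motor control he mentions, here is a textbook PID update step. The gains, the structure, and the idea of folding gravity compensation into the output are placeholders, not details from the project.

```c
#include <assert.h>

/* Minimal PID controller state; a gravity feed-forward term could
   be added to the returned output for arm control. */
typedef struct { double kp, ki, kd; double integral, prev_err; } pidctl_t;

/* One control-loop step: setpoint vs. measured position, dt in s. */
static double pid_step(pidctl_t *p, double setpoint, double measured,
                       double dt) {
    double err = setpoint - measured;
    p->integral += err * dt;                  /* accumulate I term */
    double deriv = (err - p->prev_err) / dt;  /* finite-difference D */
    p->prev_err = err;
    return p->kp * err + p->ki * p->integral + p->kd * deriv;
}
```

In a deterministic setup, pid_step would be called from a fixed-rate timer interrupt so that dt is constant.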

German engineer Jens Altenburg’s solar-powered hidden observing vehicle system (SOPHOCLES) is an innovative gas-detecting mobile robot. When the Texas Instruments MSP430-based mobile robot detects noxious gas, it transmits a notification alert to a PC, Altenburg explains in his article, “SOPHOCLES: A Solar-Powered MSP430 Robot.” The MCU controls an on-board CMOS camera and can wirelessly transmit images to the “Robot Control Center” user interface.

Take a look at the complete SOPHOCLES design. The CMOS camera is located on top of the robot. The radio modem is hidden behind the camera, so only the antenna is visible. A flexible cable connects the camera to the MSP430 microcontroller.

Altenburg writes:

The MSP430 microcontroller controls SOPHOCLES. Why did I need an MSP430? There are lots of other micros, some of which have more power than the MSP430, but the word “power” shows you the right way. SOPHOCLES is the first robot (with the exception of space robots like Sojourner and Lunakhod) that I know of that’s powered by a single lithium battery and a solar cell for long missions.

The SOPHOCLES includes a transceiver, sensors, a power supply, motor drivers, and an MSP430. Some block functions (i.e., the motor drivers or radio modems) are represented by software modules.

How is this possible? The magic mantra is, “Save power, save power, save power.” In this case, the most important feature of the MSP430 is its low power consumption. It needs less than 1 mA in Operating mode and even less in Sleep mode because the main function of the robot is sleeping (my main function, too). From time to time the robot wakes up, checks the sensor, takes pictures of its surroundings, and then falls back to sleep. Nice job, not only for robots, I think.

The power for the active time comes from the solar cell. High-efficiency cells provide electric energy for a minimum of approximately two minutes of active time per hour. Good lighting conditions (e.g., direct sunlight or a light beam from a lamp) activate the robot permanently. The robot needs only about 25 mA for actions such as driving its wheel, communicating via radio, or takes pictures with its built in camera. Isn’t that impossible? No! …

The robot has two power sources. One source is a 3-V lithium battery with a 600-mAh capacity. The battery supplies the CPU in Sleep mode, during which all other loads are turned off. The other source of power comes from a solar cell. The solar cell charges a special 2.2-F capacitor. A step-up converter changes the unregulated input voltage into 5-V main power. The LTC3401 changes the voltage with an efficiency of about 96% …

Because of the changing light conditions, a step-up voltage converter is needed for generating stabilized VCC voltage. The LTC3401 is a high-efficiency converter that starts up from an input voltage as low as 1 V.

If the input voltage increases to about 3.5 V (at the capacitor), the robot will wake up, changing into Standby mode. Now the robot can work.

The approximate lifetime with a fully charged capacitor depends on the robot’s tasks. With maximum activity, the charge is used up after one or two minutes and then the robot goes into Sleep mode. Under poor conditions (e.g., low light for a long time), the robot has an Emergency mode, during which the robot charges the capacitor from its lithium cell. Therefore, the robot has a chance to leave the bad area or contact the PC…

The control software runs on a normal PC, and all you need is a small radio box to get the signals from the robot.

The Robot Control Center serves as an interface to control the robot. Its main feature is to display the transmitted pictures and measurement values of the sensors.

Various buttons and throttles give you full control of the robot when power is available or sunlight hits the solar cells. In addition, it’s easy to make short slide shows from the pictures captured by the robot. Each session can be saved on a disk and played in the Robot Control Center…

Guido Ottaviani designs and creates innovative microcontroller-based robot systems in the city from which remarkable innovations in science and art have originated for centuries.

Guido Ottaviani

By day, the Rome-based designer is a technical manager for a large Italian editorial group. In his spare time he designs robots and interacts with various other “electronics addicts.” In a candid interview published in Circuit Cellar 265 (August 2012), Guido described his fascination with robotics, his preferred microcontrollers, and some of his favorite design projects. Below is an abridged version of the interview.

NAN PRICE: What was the first MCU you worked with? Where were you at the time? Tell us about the project and what you learned.

GUIDO OTTAVIANI: The very first one was not technically an MCU, that was too early. It was in the mid 1980s. I worked on an 8085 CPU-based board with a lot of peripherals, clocked at 470 kHz (less than half a megahertz!) used for a radio set control panel. I was an analog circuits designer in a big electronics company, and I had started studying digital electronics on my own on a Bugbook series of self-instruction books, which were very expensive at that time. When the company needed an assembly programmer to work on this board, I said, “Don’t worry, I know the 8085 CPU very well.” Of course this was not true, but they never complained, because that job was done well and within the scheduled time.

I learned a lot about how to optimize CPU cycles on a slow processor. The program had very little time to switch off the receiver to avoid destroying it before the powerful transmitter started.

Flow charts on paper, a Motorola developing system with the program saved on an 8” floppy disk, a very primitive character-based editor, the program burned on an external EPROM and erased with a UV lamp. That was the environment! When, 20 years later, I started again with embedded programming for my hobby, using Microchip Technology’s MPLAB IDE (maybe still version 6.xx) and a Microchip Technology PIC16F84, it looked like paradise to me, even if I had to relearn almost everything.

But, what I’ve learned about code optimization—both for speed and size—is still useful even when I program the many resources on the dsPIC33F series…

NAN: You worked in the field of analog and digital development for several years. Tell us a bit about your background and experiences.

GUIDO: Let me talk about my first day of work, exactly 31 years ago.

Being a radio amateur and electronics fan, I went often to the surplus stores to find some useful components and devices, or just to touch the wonderful receivers or instruments: Bird wattmeters, Collins or Racal receivers, BC 312, BC 603 or BC 1000 military receivers and everything else on the shelves.

The first day of work in the laboratory they told to me, “Start learning that instrument.” It was a Hewlett-Packard spectrum analyzer (maybe an HP85-something) that cost more than 10 times my annual gross salary at that time. I still remember the excitement of being able to touch it, for that day and the following days. Working in a company full of these kinds of instruments (the building even had a repair and calibration laboratory with HP employees), with more than a thousand engineers who knew everything from DC to microwaves to learn from, was like living in Eden. The salary was a secondary issue (at that time).

I worked on audio and RF circuits in the HF to UHF bands: active antennas, radiogoniometers, first tests on frequency hopping and spread spectrum, and a first sample of a Motorola 68000-based GPS as big as a microwave oven.

Each instrument had an HPIB (or GPIB or IEEE488) interface to the computer. So I started approaching this new (for me) world of programming an HP9845 computer (with a cost equivalent to 5 years of my salary then) to build up automatic test sets for the circuits I developed. I was very satisfied when I was able to obtain a 10-Hz frequency hopping by a Takeda-Riken frequency synthesizer. It was not easy with such a slow computer, BASIC language, and a bus with long latencies. I had to buffer each string of commands in an array and use some special pre-caching features of the HPIB interface I found in the manual.

Every circuit, even an analog one, was interfaced through digital ports. The boards were full of SN74xx (or SN54xx) ICs, just to perform simple functions such as switching, multiplexing, and the like. Here again, my gaps in knowledge were filled by the long-accumulated experience of the Bugbook series.

Well: audio, RF, programming, communications, interfacing, digital circuits. What was I still missing for a good background in my upcoming hobby of robotics? Ah! Embedded programming. But I’ve already mentioned that experience.

After all these design jobs, because my knowledge had spread across many different areas, it was natural to move into work as a system engineer, taking care of every aspect of a complex system, including project management.

NAN: You have a long-time interest in robotics and autonomous robot design. When did you first become interested in robots and why?

GUIDO: I’ve loved the very simple robots in toy store windows since I was young, the same ones I have on my website (Pino and Nino). But they were too simple, and making anything a little more sophisticated required too much electronics at that time.

After a big gap in my electronics activity, I discovered a newly published robotic magazine, with an electronic parts supplement. This enabled me to build a programmable robot based on a Microchip PIC16F84. A new adventure started. I felt much younger. Suddenly, all the electronics-specialized neurons inside my brain, after being asleep for many years, woke up and started running again. Thanks to the Internet (not yet available when I left professional electronics design), I discovered a lot of new things: MCUs, free IDEs running even on a simple computer, free compilers, very cheap programming devices, and lots of documentation freely available. I threw away the last Texas Instruments databook I still had on my bookshelf and started studying again. There were a lot of new things to know, but, with a good background, it was a pleasant (if frantic) study. I’ve also bought some books, but they became old before I finished reading them.

Within a few months, jumping among all the hardware and software versions Microchip released at an astonishing rate, I found Johann Borenstein et al’s book Where Am I?: Systems and Methods for Mobile Robot Positioning (University of Michigan, 1996). This report and Borenstein’s website taught me a lot about autonomous navigation techniques. David P. Anderson’s “My Robots” webpage (www.geology.smu.edu/~dpa-www/myrobots.html) inspired all my robots, completed or forthcoming.

I’ve never wanted to make a remote-controlled car, so my robots must navigate autonomously in an unknown environment, reacting to the external stimuli. …

NAN: Robotics is a focal theme in many of the articles you have contributed to Circuit Cellar. One of your article series, “Robot Navigation and Control” (224–225, 2009), was about a navigation control subsystem you built for an autonomous differential steering explorer robot. The first part focused on the robotic platform that drives motors and controls an H-bridge. You then described the software development phase of the project. Is the project still in use? Have you made any updates to it?

The “dsNavCon” system features a Microchip Technology dsPIC30F4012 motor controller and a general-purpose dsPIC30F3013. (Source: G. Ottaviani, CC224)

GUIDO: After I wrote that article series, the project kept evolving until the beginning of this year. There is a new switched power supply, a new audio sensor, and the latest dsPIC33-based version of the dsNav board (now commercially available online), plus some mechanical changes, improvements to the telemetry console, a lot of firmware modifications, and a successfully completed UMBmark calibration.

The goal has been reached. That robot was a lab for experimenting with sensors, solutions, and technologies. Now I’m ready for the next step: outdoors.

NAN: You wrote another robotics-related article in 2010 titled, “A Sensor System for Robotics Applications” (Circuit Cellar 236). Here you describe adding senses—sight, hearing, and touch—to a robotics design. Tell us about the design, which is built around an Arduino Diecimila board. How does the board factor into the design?

An Arduino-based robot platform (Source: G. Ottaviani, CC236)

GUIDO: That was the first time I used an Arduino. I’ve always used PICs, and I wanted to test this well-known board. In that case, I needed to interface many I2C and analog sensors, plus an I2C I/O expander. I didn’t want to waste time configuring peripherals. All the sensors had 5-V I/O, and the computing time constraints were not strict. The Arduino fit all of these prerequisites perfectly. The learning curve was very fast; there was already a library for every device I used, and there was no need for voltage level translation between 3.3 and 5 V. Everything was easy, fast, and cheap. Why not use it for these kinds of projects?

NAN: You designed an audio sensor for a Rino robotic platform (“Sound Tone Detection with a PSoC Part 1 and Part 2,” Circuit Cellar 256–257, 2011). Why did you design the system? Did you design it for use at work or home? Give us some examples of how you’ve used the sensor.

GUIDO: I already had a sound board based on classic op-amp ICs. I discovered the PSoC devices in a robotic meeting. At that moment, there was a special offer for the PSoC1 programmer and, incidentally, it was close to my birthday. What a perfect gift from my relatives!

This was another excuse to study a completely different programmable device and add something new to my background. The learning curve was not as easy as the Arduino one. It is really different because… it does a very different job. The new PSoC-based audio board was smaller, simpler, and with many more features than the previous one. The original project was designed to detect a fixed 4-kHz tone, but now it is easy to change the central frequency, the band, and the behavior of the board. This confirms once more, if needed, that nowadays, this kind of professional design is also available to hobbyists. …

NAN: What do you consider to be the “next big thing” in the embedded design industry? Is there a particular technology that you’ve used or seen that will change the way engineers design in the coming months and years?

GUIDO: As often happens, the “big thing” is one of the smallest ones. Many manufacturers are working on micro-, nano-, and picowatt devices. I’ve done a little (though not very extreme) experimenting with my Pendulum project. Using the sleep features of a simple PIC10F22P with some care, I’ve kept the pendulum’s bob oscillating for a year on a couple of AAA batteries, and it is still oscillating behind me right now.

With this kind of MCU, we can start thinking seriously about energy harvesting. We can get energy from light, heat, RF of any kind, movement, or whatever else is available, to build self-powered, autonomous devices. Thanks to smartphones, PDAs, tablets, and other portable devices, MEMS sensors have become smaller and less expensive.

In my opinion, all this technology—together with supercapacitors, solid-state batteries or similar—will spread many small devices everywhere to monitor everything.

When I first saw the Intel Industrial Control in Concert demonstration at Design West 2012 in San Jose, CA, I immediately thought of Kurt Vonnegut’s 1952 novel Player Piano. The connection, of course, is that the player piano in the novel and Intel’s Atom-based robotic orchestra both play preprogrammed music without human involvement. But the similarities end there. Vonnegut used the self-playing autopiano as a metaphor for a mechanized society in which wealthy industrialists replaced human workers with automated machines. In contrast, Intel’s innovative system demonstrated engineering excellence and created a buzz in the already positive atmosphere at the conference.

In “EtherCAT Orchestra” (Circuit Cellar 264, July 2012), Richard Wotiz carefully details the awe-inspiring music machine that’s built around seven embedded systems, each of which is based on Intel’s Atom D525 dual-core microprocessor. He provides information about the system you can’t find on YouTube or hobby tech blogs. Here is the article in its entirety.

EtherCAT Orchestra

I have long been interested in automatically controlled musical instruments. When I was little, I remember being fascinated whenever I ran across a coin-operated electromechanical calliope or a carnival hurdy-gurdy. I could spend all day watching the many levers, wheels, shafts, and other moving parts as it played its tunes over and over. Unfortunately, the mechanical complexity and expertise needed to maintain these machines makes them increasingly rare. But, in our modern world of pocket-sized MP3 players, there’s still nothing like seeing music created in front of you.

I recently attended the Design West conference (formerly the Embedded Systems Conference) in San Jose, CA, and ran across an amazing contraption that reminded me of old carnival music machines. The system was created for Intel as a demonstration of its Atom processor family, and was quite successful at capturing the attention of anyone walking by Intel’s booth (see Photo 1).

Photo 1—This is Intel’s computer-controlled orchestra. It may not look like any musical instrument you’ve ever seen, but it’s quite a thing to watch. The inspiration came from Animusic’s “Pipe Dream,” which appears on the video screen at the top. (Source: R. Wotiz)

The concept is based on Animusic’s music video “Pipe Dream,” which is a captivating computer graphics representation of a futuristic orchestra. The instruments in the video play when virtual balls strike against them. Each ball is launched at a precise time so it will land on an instrument the moment each note is played.

The demonstration, officially known as Intel’s Industrial Control in Concert, uses high-speed pneumatic valves to fire practice paintballs at plastic targets of various shapes and sizes. The balls are made of 0.68”-diameter soft rubber. They put on quite a show bouncing around while a song played. Photo 2 shows one of the pneumatic firing arrays.

Photo 2—This is one of several sets of pneumatic valves. Air is supplied by the many tees below the valves and is sent to the ball-firing nozzles near the top of the photo. The corrugated hoses at the top supply balls to the nozzles. (Source: R. Wotiz)

The valves are the gray boxes lined up along the center. When each one opens, a burst of air is sent up one of the clear hoses to a nozzle to fire a ball. The corrugated black hoses at the top supply the balls to the nozzles. They’re fed by paintball hoppers that are refilled after each performance. Each nozzle fires at a particular target (see Photo 3).

Photo 3—These are the targets at which the nozzles from Photo 2 are aimed. If you look closely, you can see a ball just after it bounced off the illuminated target at the top right. (Source: R. Wotiz)

Each target has an array of LEDs that shows when it’s activated and a piezoelectric sensor that detects a ball’s impact. Unfortunately, slight variations in the pneumatics and the balls themselves mean that not every ball makes it to its intended target. To avoid sounding choppy and incomplete, the musical notes are triggered by a fixed timing sequence rather than the ball impact sensors. Think of it as a form of mechanical lip syncing. There’s a noticeable pop when a ball is fired, so the system sounds something like a cross between a pinball machine and a popcorn popper. You may expect that to detract from the music, but I felt it added to the novelty of the experience.

The control system consists of seven separate embedded systems, all based on Intel’s Atom D525 dual-core microprocessor, on an Ethernet network (see Figure 1).

Figure 1—Each block across the top is an embedded system providing some aspect of the user interface. The real-time interface is handled by the modules at the bottom. They’re controlled by the EtherCAT master at the center. (Source: R. Wotiz)

One of the systems is responsible for the real-time control of the mechanism. It communicates over an Ethernet control automation technology (EtherCAT) bus to several slave units, which provide the I/O interface to the sensors and actuators.

EtherCAT

EtherCAT is a fieldbus providing high-speed, real-time control over a conventional 100 Mb/s Ethernet hardware infrastructure. It’s a relatively recent technology, originally developed by Beckhoff Automation GmbH, and currently managed by the EtherCAT Technology Group (ETG), which was formed in 2003. You need to be an ETG member to access most of their specification documents, but information is publicly available. According to information on the ETG website, membership is currently free to qualified companies. EtherCAT was also made a part of international standard IEC 61158 “Industrial Communication Networks—Fieldbus Specifications” in 2007.

EtherCAT uses standard Ethernet data frames, but instead of each device decoding and processing an individual frame, the devices are arranged in a daisy chain, where a single frame is circulated through all devices in sequence. Any device with an Ethernet port can function as the master, which initiates the frame transmission. The slaves need specialized EtherCAT ports. A two-port slave device receives and starts processing a frame while simultaneously sending it out to the next device (see Figure 2).

The last slave in the chain detects that there isn’t a downstream device and sends its frame back to the previous device, where it eventually returns to the originating master. This forms a logical ring by taking advantage of both the outgoing and return paths in the full-duplex network. The last slave can also be directly connected to a second Ethernet port on the master, if one is available, creating a physical ring. This creates redundancy in case there is a break in the network. A slave with three or more ports can be used to form more complex topologies than a simple daisy chain. However, this wouldn’t speed up network operation, since a frame still has to travel through each slave, one at a time, in both directions.
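The outbound-and-return circulation described above can be sketched in a few lines of Python. This is a toy model, not real EtherCAT framing; the slave class and its names are invented for illustration:

```python
# Toy model of EtherCAT's daisy-chain circulation: the frame visits each
# slave in order on the outbound pass, and the return pass is effectively
# pass-through, so the master gets back the fully processed frame.

class StampingSlave:
    """Hypothetical slave that just stamps its name to show ordering."""
    def __init__(self, name):
        self.name = name

    def process(self, frame):
        return frame + [self.name]

def circulate(frame, slaves):
    for slave in slaves:          # outbound: processed on the fly
        frame = slave.process(frame)
    return frame                  # return path: unchanged back to master

slaves = [StampingSlave(f"slave{i}") for i in range(3)]
print(circulate([], slaves))      # ['slave0', 'slave1', 'slave2']
```

The key point the model captures is that ordering is fixed by physical position in the chain, which is also what makes the position-addressing scheme described later possible.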

The EtherCAT frame, known as a telegram, can be transmitted in one of two different ways depending on the network configuration. When all devices are on the same subnet, the data is sent as the entire payload of an Ethernet frame, using an EtherType value of 0x88A4 (see Figure 3a).

Figure 3a—An EtherCAT frame uses the standard Ethernet framing format with very little overhead. The payload size shown includes both the EtherCAT telegram and any padding bytes needed to bring the total frame size up to 64 bytes, the minimum size for an Ethernet frame. b—The payload can be encapsulated inside a UDP frame if it needs to pass through a router or switch. (Source: R. Wotiz)

If the telegrams must pass through a router or switch onto a different physical network, they may be encapsulated within a UDP datagram using a destination port number of 0x88A4 (see Figure 3b), though this will affect network performance. Slaves do not have their own Ethernet or IP addresses, so all telegrams will be processed by all slaves on a subnet regardless of which transmission method was used. Each telegram contains one or more EtherCAT datagrams (see Figure 4).

Each datagram includes a block of data and a command indicating what to do with the data. The commands fall into three categories. Write commands copy the data into a slave’s memory, while read commands copy slave data into the datagram as it passes through. Read/write commands do both operations in sequence, first copying data from memory into the outgoing datagram, then moving data that was originally in the datagram into memory. Depending on the addressing mode, the read and write operations of a read/write command can both access the same or different devices. This enables fast propagation of data between slaves.
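As a rough sketch of those three command categories (dictionaries standing in for slave memory; none of this reflects the actual wire format):

```python
# Toy model of the three EtherCAT datagram command categories. A dict
# stands in for a slave's memory; the datagram carries one data block.

def apply_write(memory, addr, datagram):
    memory[addr] = datagram["data"]          # WR: datagram -> slave memory

def apply_read(memory, addr, datagram):
    datagram["data"] = memory.get(addr)      # RD: slave memory -> datagram

def apply_read_write(memory, addr, datagram):
    # RW: first read old memory into the datagram, then store what the
    # datagram originally carried -- one pass can shuttle data between slaves.
    incoming = datagram["data"]
    datagram["data"] = memory.get(addr)
    memory[addr] = incoming

slave_a = {0x10: "A-data"}
slave_b = {}
dg = {"data": "new-setpoint"}
apply_read_write(slave_a, 0x10, dg)   # dg now carries "A-data"
apply_write(slave_b, 0x10, dg)        # ...delivered downstream to slave_b
print(slave_b[0x10], slave_a[0x10])   # A-data new-setpoint
```

The read/write sequence is what enables the fast slave-to-slave propagation mentioned above: one telegram pass both collects a value and drops off another.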

Each datagram contains addressing information that specifies which slave device should be accessed and the memory address offset within the slave to be read or written. A 16-bit value for each enables up to 65,535 slaves to be addressed, with a 65,536-byte address space for each one. The command code specifies which of four different addressing modes to use. Position addressing specifies a slave by its physical location on the network. A slave is selected only if the address value is zero. It increments the address as it passes the datagram on to the next device. This enables the master to select a device by setting the address value to the negative of the number of devices in the network preceding the desired device. This addressing mode is useful during system startup before the slaves are configured with unique addresses. Node addressing specifies a slave by its configured address, which the master will set during the startup process. This mode enables direct access to a particular device’s memory or control registers. Logical addressing takes advantage of one or more fieldbus memory management units (FMMUs) on a slave device. Once configured, a FMMU will translate a logical address to any desired physical memory address. This may include the ability to specify individual bits in a data byte, which provides an efficient way to control specific I/O ports or register bits without having to send any more data than needed. Finally, broadcast addressing selects all slaves on the network. For broadcast reads, slaves send out the logical OR of their data with the data from the incoming datagram.
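Position addressing's auto-increment trick is easy to see in a short simulation (a sketch using the negative-address convention described above):

```python
# Each slave tests the address field for zero and increments it before
# forwarding, so an initial value of -k selects the slave k hops in.

def position_addressed_pass(address, num_slaves):
    """Return the index of the slave that handled the datagram, or None."""
    selected = None
    for index in range(num_slaves):
        if address == 0:
            selected = index   # this slave processes the datagram
        address += 1           # every slave increments as it forwards
    return selected

# To reach the fourth slave (three devices precede it), send address -3:
print(position_addressed_pass(-3, num_slaves=10))  # 3
print(position_addressed_pass(0, num_slaves=10))   # 0
```

Because selection depends only on physical position, this works before any slave has been configured with a node address, which is exactly why it is used during startup.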

Each time a slave successfully reads or writes data contained in a datagram, it increments the working counter value (see Figure 4).

Figure 4—An EtherCAT telegram consists of a header and one or more datagrams. Each datagram can be addressed to one slave, a particular block of data within a slave, or multiple slaves. A slave can modify the datagram’s Address, C, IRQ, Process data, and WKC fields as it passes the data on to the next device. (Source: R. Wotiz)

This enables the master to confirm that all the slaves it was expecting to communicate with actually handled the data sent to them. If a slave is disconnected, or its configuration changes so it is no longer being addressed as expected, then it will no longer increment the counter. This alerts the master to rescan the network to confirm the presence of all devices and reconfigure them, if necessary. If a slave wants to alert the master of a high-priority event, it can set one or more bits in the IRQ field to request the master to take some predetermined action.

TIMING

Frames are processed in each slave by a specialized EtherCAT slave controller (ESC), which extracts incoming data and inserts outgoing data into the frame as it passes through. The ESC operates at a high speed, resulting in a typical data delay from the incoming to the outgoing network port of less than 1 μs. The operating speed is often dominated by how fast the master can process the data, rather than the speed of the network itself. For a system that runs a process feedback loop, the master has to receive data from the previous cycle and process it before sending out data for the next cycle. The minimum cycle time TCYC is given by: TCYC = TMP + TFR + N × TDLY + 2 × TCBL + TJ. TMP = master’s processing time, TFR = frame transmission time on the network (80 ns per data byte + 5 μs frame overhead), N = total number of slaves, TDLY = sum of the forward and return delay times through each slave (typically 600 ns), TCBL = cable propagation delay (5 ns per meter for Category 5 Ethernet cable), and TJ = network jitter (determined by master).[1]
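Plugging the article's per-byte and per-slave figures into that formula gives a quick feel for the numbers. In this sketch the master processing time, byte count, slave count, and cable length are made-up example values:

```python
# Minimum cycle time per the formula above, in microseconds.
# Constants from the article: 80 ns/byte + 5 us frame overhead,
# ~600 ns forward+return delay per slave, 5 ns/m of Category 5 cable.

def cycle_time_us(master_us, data_bytes, n_slaves, cable_m, jitter_us=0.0):
    t_fr = data_bytes * 0.080 + 5.0           # frame transmission time
    t_dly = n_slaves * 0.600                  # sum of slave delays
    t_cbl = 2 * cable_m * 0.005               # cable delay, out and back
    return master_us + t_fr + t_dly + t_cbl + jitter_us

# e.g., 100 us of master processing, 128 data bytes, 8 slaves, 20 m cable:
print(round(cycle_time_us(100.0, 128, 8, 20.0), 2))  # 120.24
```

Note how the master's processing time dominates this example, matching the article's observation that network speed is rarely the bottleneck.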

A slave’s internal processing time may overlap some or all of these time windows, depending on how its I/O is synchronized. The network may be slowed if the slave needs more time than the total cycle time computed above. A maximum-length telegram containing 1,486 bytes of process data can be communicated to a network of 1,000 slaves in less than 1 ms, not including processing time.

Synchronization is an important aspect of any fieldbus. EtherCAT uses a distributed clock (DC) with a resolution of 1 ns located in the ESC on each slave. The master can configure the slaves to take a snapshot of their individual DC values when a particular frame is sent. Each slave captures the value when the frame is received by the ESC in both the outbound and returning directions. The master then reads these values and computes the propagation delays between each device. It also computes the clock offsets between the slaves and its reference clock, then uses these values to update each slave’s DC to match the reference. The process can be repeated at regular intervals to compensate for clock drift. This results in an absolute clock error of less than 1 μs between devices.
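A stripped-down sketch of that calculation for a single slave follows. Real EtherCAT uses per-port receive timestamps across the whole chain; the four timestamps here are hypothetical nanosecond values, and clock rates are assumed equal:

```python
# The slave timestamps the calibration frame on the outbound and return
# passes; the master pairs those with its own send/receive times to
# split the round trip into a one-way delay and a clock offset.

def dc_calibrate(t_ref_send, t_slave_out, t_slave_ret, t_ref_recv):
    round_trip = t_ref_recv - t_ref_send         # seen by the reference
    beyond = t_slave_ret - t_slave_out           # time spent past this slave
    delay = (round_trip - beyond) / 2            # one-way master -> slave
    offset = t_slave_out - (t_ref_send + delay)  # slave clock error
    return delay, offset

delay, offset = dc_calibrate(1000, 4250, 9250, 12400)
print(delay, offset)   # 3200.0 50.0 -> this slave's clock runs 50 ns ahead
```

The master would then write a correction based on `offset` into the slave's DC, repeating the measurement periodically to track drift.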

MUSICAL NETWORKS

The orchestra’s EtherCAT network is built around a set of modules from National Instruments. The virtual conductor is an application running under LabVIEW Real-Time on a CompactRIO controller, which functions as the master device. It communicates with four slaves containing a mix of digital and analog I/O and three slaves consisting of servo motor drives. Both the master and the I/O slaves contain an FPGA to implement any custom local processing that’s necessary to keep the data flowing. The system runs at a cycle time of 1 ms, which provides enough timing resolution to keep the balls flying on schedule.

I hope you’ve enjoyed learning about EtherCAT—as well as the fascinating musical device it’s used in—as much as I have.

Author’s note: I would like to thank Marc Christenson of SISU Devices, creator of this amazing device, for his help in providing information on the design.

Roboticists and soccer fans from around the world converged on Eindhoven, The Netherlands, from April 25–29 for the Robocup Dutch Open. The event was an interesting combination of sports and electronics engineering.

Soccer action at the Robocup Dutch Open

Since I have dozens of colleagues based in The Netherlands, I decided to see if someone would provide event coverage for our readers and members. Fortunately, TechtheFuture.com’s Tessel Rensenbrink was available and willing to cover the event. She reports:

Attending the Robocup Dutch Open is like taking a peek into the future. Teams of fully autonomous robots compete with each other in a soccer game. The matches are as engaging as watching humans compete in sports and the teams even display particular characteristics. The German bots suck at penalties and the Iranian bots are a rough bunch frequently body checking their opponents.

The Dutch Open was held in Eindhoven, The Netherlands, from the 25th to the 29th of April. It is part of Robocup, a worldwide educational initiative aiming to promote robotics and artificial intelligence research. The soccer tournaments serve as a test bed for developments in robotics and help raise the interest of the general public. All new discoveries and techniques are shared across the teams to support rapid development. The ultimate goal is to have a fully autonomous humanoid robot soccer team defeat the winner of the World Cup of Human Soccer in 2050.

In Eindhoven the competition was between teams from the Middle Size Robot League. The bots are 80 cm (2.6 ft) high, 50 cm (1.6 ft) in diameter, and move around on wheels. They have an arm with little wheels to control the ball and a kicker to shoot. Because the hardware is mostly standardized, the development teams have to make the difference with their software.

Once the game starts, the developers aren’t allowed to aid or moderate the robots, so the bots are equipped with all the hardware they need to play soccer autonomously. They’re mounted with a camera and a laser scanner to locate the ball and determine distances. A Wi-Fi network allows the robots on a team to communicate with each other and coordinate their strategy.

The game is played on a field similar to a scaled-down human soccer field. Playing time is limited to two halves of 15 minutes each, and the teams consist of five players. If a robot does not function properly, it may be taken off the field for a reset while the game continues. A human referee’s decisions are communicated to the bots over the Wi-Fi network.

The Dutch Open finals were between home team TechUnited and MRL from Iran. The Dutch bots scored their first goal within minutes of the start of the game, to the excitement of the predominantly orange-clad audience. Shortly thereafter a TechUnited bot went renegade and had to be taken off the field for a reset. But even a bot down, the Dutchies scored again. When the team increased its lead to 3–0, the match seemed all but won. But in the second half MRL came back strong and had everyone on the edge of their seats by scoring two goals.

When the referee signaled the end of the game, the score was 3–2 for TechUnited. By winning the tournament, the Dutch have established themselves as favorites for the World Cup held in Mexico in June. Maybe, just maybe, the Dutch will finally bring home a Soccer World Cup trophy.

The following video shows a match between The Netherlands and Iran. The Netherlands won 2-1.

It wasn’t the Blue Man Group making music by shooting small rubber balls at pipes, xylophones, vibraphones, cymbals, and various other sound-making instruments at Design West in San Jose, CA, this week. It was Intel and its collaborator Sisu Devices.

Intel's "Industrial Controller in Concert" at Design West, San Jose

The innovative Industrial Controller in Concert system on display featured seven Atom processors, four operating systems, 36 paintball hoppers, 2,300 rubber balls, a video camera for motion sensing, a digital synthesizer, a multi-touch display, and more. PVC tubes connect the various instruments.

Want a CNC panel cutter and controller for your lab, hackspace, or workspace? James Koehler of Canada built an NXP Semiconductors mbed-based system to control a three-axis milling machine, which he uses to cut panels for electronic equipment. You can customize one yourself.

Panel Cutter Controller (Source: James Koehler)

According to Koehler:

Modern electronic equipment often requires front panels with large cut-outs for LCDs, for meters, and, in general, openings more complicated than can be made with a drill. It is tedious to do this by hand and difficult to achieve a nice finished appearance. This controller allows it to be done simply and quickly, and to be replicated exactly.

Koehler’s design is an interesting alternative to a PC program. The self-contained controller enables him to run a milling machine either manually or automatically (following a script) without having to clutter his workspace with a PC. It’s both effective and space-saving!

The Controller Setup (Source: James Koehler)

How does it work? The design controls three stepping motors.

The Complete System (Source: James Koehler)

Inside the controller are a power supply and a PCB, which carries the NXP mbed module plus the necessary interface circuitry and a socket for an SD card.

The Controller (Source: James Koehler)

Koehler explains:

In use, a piece of material for the panel is clamped onto the milling machine table and the cutting tool is moved to a starting position using the rotary encoders. Then the controller is switched to its ‘automatic’ mode, and a script on the SD card is followed to cut the panel.

A very simple ‘language’ is used for the script: go to any particular (x, y) position, lift the cutting tool, lower the cutting tool, cut a rectangle of any dimension, cut a circle of any dimension, and so on. More complex instruction sequences, such as those needed to cut the rectangular opening plus four mounting holes for an LCD, are just combinations, called macros, of those simple instructions; every new device (meter mounting holes, LCD mounts, etc.) has its own macro. The complete script for a particular panel can be any combination of simple commands plus macros.

The milling machine, a Taig ‘micro mill’ with stepping motors, is shown in Figure 2. In its ‘manual’ mode, the system can be used as a conventional three-axis mill controlled via the rotary encoders. The absolute position of the cutting tool is displayed in inches, millimeters, or thousandths of an inch.
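Koehler’s script syntax isn’t published, so purely as an illustration, the simple-commands-plus-macros idea he describes might look like this tiny expander. Every command name, coordinate, and macro below is invented:

```python
# Hypothetical panel-cutting script: primitive commands plus named
# macros that expand into sequences of primitives.

MACROS = {
    # Invented macro: an LCD cut-out plus four mounting holes,
    # relative to the current tool position.
    "lcd": [
        ("rect", 71.0, 26.0),
        ("goto", -5.0, -5.0), ("drill",),
        ("goto", 81.0, -5.0), ("drill",),
        ("goto", 81.0, 31.0), ("drill",),
        ("goto", -5.0, 31.0), ("drill",),
    ],
}

def expand(script):
    """Flatten macro calls into the primitive command stream."""
    out = []
    for cmd in script:
        if cmd[0] == "macro":
            out.extend(MACROS[cmd[1]])
        else:
            out.append(cmd)
    return out

panel = [("goto", 10.0, 20.0), ("macro", "lcd"), ("circle", 8.0)]
for step in expand(panel):
    print(step)
```

A controller like Koehler’s would then translate each primitive into step pulses for the three axes; in manual mode the same primitives could be driven by the rotary encoders instead of a script.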