Sunday, December 15, 2013

Actually, the title is a bit of a tease. I know exactly where they are. But people can't seem to get their head around the coding. And that's holding back demand.

Do you think transistors (and therefore computers) are Boolean devices? Would you be uncomfortable if I told you that's wrong, and easily proved? Which means they are not logical and deterministic machines which will always carry out the same series of instructions given the same program, like they told you. Sorry.

Sure, logically they do. But physically, things are more complicated. I'm sure you guessed that already. If you happen to present a transistor with a gate level that just happens to be a critical 'in-between' voltage, the transistor will not switch into a state representing either zero or one, on or off, but instead goes gray.

The technical term is "metastable", because often it doesn't just sit maddeningly in-between - it balances there and generates white noise, blasting out randomness into the rest of the circuit.

This ambiguous value can propagate, if it falls within the metastable window of the next transistor. And the next.
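
To make the idea concrete, here's a toy model (the threshold voltages are invented for illustration, not taken from any datasheet): a sampled gate voltage reads as 0, 1, or 'gray', and gray survives a trip through an inverter.

```cpp
#include <cassert>

// Toy model of reading a gate voltage. The thresholds are invented
// for illustration only: below V_LOW it reads 0, above V_HIGH it
// reads 1, and anything in between is "gray" - neither state.
enum Logic { ZERO, ONE, GRAY };

const double V_LOW  = 0.8;   // assumed low threshold (volts)
const double V_HIGH = 2.0;   // assumed high threshold (volts)

Logic sample(double volts) {
    if (volts < V_LOW)  return ZERO;
    if (volts > V_HIGH) return ONE;
    return GRAY;             // metastable: not a 0, not a 1
}

// An inverter in this model: a gray input produces a gray output,
// which is how the ambiguity propagates down a chain of gates.
Logic invert(Logic in) {
    if (in == ZERO) return ONE;
    if (in == ONE)  return ZERO;
    return GRAY;
}
```

Real metastability is messier (it resolves probabilistically over time), but the point stands: once a gray value gets in, Boolean reasoning about the circuit stops working.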

Why don't you experience this all the time? Because pretty much every aspect of digital logic design at the physical level is intended to hide it. Big fat specifications with timing diagrams which say you should never present such indeterminate voltages, and if you do then it's your own fault. We have bistable latches, Schmitt triggers, lockup protection, thin films designed to decrease metastability, and design features so core we take them for granted...

Such as the clock. The entire point of a central propagated clock (and all the resources it requires) is to create a moment where all the transistors shout "change places!" and move through their metastable zone before any measurements get made. It is why "clockless logic" has devolved to just redefining what "clock" means.

And if any of these elaborate protections guarding each and every Boolean 'bit' (actually made from rushing flows of billions of electrons sloshing from potential to potential through the bulk silicon) fails for just the tiniest nanosecond, your computer crashes.

That's why they crash. That's what a "hardware fault" is, and why they're so frustratingly random.

It is the underlying reality of your machine asserting itself, in contradiction of your Bool-ish fantasies. You can't get rid of "noise" entirely, because much of it comes from the atoms within the wires your computer is made from. Just ask Benoit Mandelbrot.

The fact we can construct near-perfect self-correcting boolean simulation machines out of piles of reheated sand is really nothing short of a technological miracle. And is taken entirely for granted.

Students of Object Oriented Programming are taught the tenets of the faith: "Encapsulation, Abstraction, Polymorphism". But they think they are virtues, rather than necessary evils. Abstraction in particular gets totally out of control, with over-generalized interfaces that map well to human concepts (as defined by committees) that bear no relation whatsoever to how a real machine actually would perform the task.

It's why "shader" programmers for game engines are a special breed. They have to smash classic linear algorithms into parallel pieces which will fit into the graphics card pipeline. Abstraction doesn't help one whit. It is the enemy of performance.

There's an equivalent in relational database design: "Fourth Normal Form". (Or even Fifth!) Students are taught how to normalize their database designs to make them more logical, and are graded on it. Then you get to work on real high-performance transactional systems and quickly rip all DB designs back down to second (or first) normal form, because otherwise the system is too slow for words and users get angry.

If you are using abstraction to hide the details of a problem rather than reveal them, you are using it the wrong way around. Encapsulate the code, not the problem. You can't generalize from a sample of one.

This obsessive need to abstract away and deny the underlying machine is why we're very bad at quantum programming, which pretty much by definition is a sneaky way of arranging the dominoes of reality to fall in a certain way. And while reality is playing quantum dominoes, we keep designing programs as if the game is billiard balls.

And when you ask why, the answer is essentially "because it's easier for people to reason by analogy about billiards".

The assumption here is that the point of computer science is to create nice and easy structures for humans to comprehend.

Um... OK.

And that's why you can't have a quantum computer. Because the only metaphor or abstraction that has any value currently looks like this:

Sure, we have names for some of these concepts. "Superposition" and "entanglement" and so forth, but they have common characteristics and behaviours for which we have yet to find well-rounded words that everyone intrinsically understands. Unless you count "timey wimey".

So forget trying to understand Quantum Information Theory in terms of something else. There isn't one.

Sunday, December 8, 2013

While soldering things together, I get a lot of time to think about the general course of technology and so forth. And I'm now old enough that I've personally seen a large chunk of the story arc. So rather than post a work update, I wanted to get out of my head a thought that's been rattling around for a while, but recent events have solidified.

Let's start with what sounds like the intro to a terrible, tragic joke: What do the Newtown Shootings and the Space Program have in common? They are both ways an individual can leave their mark on history. I've been learning how Adam Lanza was apparently inspired by previous school shootings, and had newspaper clippings about such events going back a hundred years to one of America's earliest. He learned that such heinous acts were a path to notoriety, to fame, and he was right, because I just said his name, and you know who I mean.

Not too long ago, there were other ways of achieving such fame. You could become an Astronaut, and walk on the face of another world, for example. Granted it was unlikely - the first round of jobs generally went to the upstanding military types who had been kind enough to fight a war on their country's behalf, but the feeling was that soon we might all have the chance to do something that had never been done, to write our own small piece of the new history.

But that doesn't happen anymore. We aspire to efficient repetition, now. There are far fewer jobs around so new they don't have a name. There's a stagnant stability to our culture, and we've stopped doing the cool stuff because it was too difficult. No Concorde. No Space Shuttle. Not much to replace them. Maybe we'll go back to the moon next decade. Jupiter? Don't make us laugh.

Buzz Aldrin is reduced to punching bloggers in the face.

When the blue-sky options contract and all the papers talk about are the latest tragedy, when the walls of society close in, we lose something. Modern culture's mantra is that fame is all that matters, and if you're not born pretty or sporty in exploitable ways, there aren't many options left. A whole generation destined to die forgotten, ignored, because we withdrew the support structures and funding needed to feed those dreams.

We just passed discovery of the 1000th exoplanet. One seems to have water. There are, literally, new worlds to explore. But we think they're out of reach, so no-one cares.

Tuesday, December 3, 2013

Jeff Bezos has been making news today with his Amazon Octocopter delivery drones. I do like his prototypes - very heavy on the redundancy.

Heavy on the Redundancy

Possibly my favorite comments were by Bill Gates who (never one to give credit for anything, if he can help it) went almost in one breath from ridiculing the idea as stunt-like and pointless, to musing whether similar technology might not be useful for zooming medicines and messages around the kinds of roadless refugee camps he's become used to dealing with.

Anyway, I don't have the same budget those guys do, so my autonomous flying robot project is much more modest, and still scheduled to happen mostly over Christmas/New Years after I get some other things finished. In one of those synchronicities, the BLDC motors and controllers arrived yesterday along with the Bezos news. Here are all the pieces I have so far:

Apart from a battery and frame, that's all the components for a quadcopter! On the right is an inductive wireless power transfer system I'm hoping to integrate somehow, and at the back is the classic remote-controlled Hubsan X4 (clone) that I've learned so much from.

If rulers don't quite give you the scale, here's a coin. (note: somewhat foreshortened...) I'm guessing all the pieces weigh roughly half a chocolate bar. My first autonomous flying robot, filled with experimental software, needs to be incapable of harm or damage. Especially to me. (Because I'm the one who is going to get hit in the head by a confused droid. I do know this.)

That's an Arduino-clone "Pro Micro" with an Atmel '32U4, an MPU-6050 accelerometer/gyro (same tech as in the 'sonic screwdriver' I just built...), a magnetometer "compass", a barometric pressure "altimeter" (probably optional given my indoors aspirations) and a 2.4GHz radio transceiver. These pieces have been slowly turning up for a while now, and I've tested them all individually in other small projects.

The new components I couldn't do this without are the brilliant next generation of tiny ESCs (Electronic Speed Controllers) matched for the equally small motors. These can push 3 Amps each, and interface directly to the microcontroller.
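
For the curious, commanding these ESCs usually means sending the classic RC servo pulse: 1000 microseconds for off, 2000 for full power, repeated around 50 times a second. (I'm assuming these tiny ESCs follow the standard convention; verify against the datasheet before arming anything with propellers.) The mapping is trivial:

```cpp
#include <cassert>

// Map a throttle fraction (0.0 to 1.0) onto the standard RC ESC pulse
// width of 1000-2000 microseconds. Clamping matters: an out-of-range
// command to a spinning motor is how props end up in faces.
int throttleToMicros(double throttle) {
    if (throttle < 0.0) throttle = 0.0;   // never command below idle
    if (throttle > 1.0) throttle = 1.0;   // never exceed full power
    return 1000 + (int)(throttle * 1000.0 + 0.5);
}
```

On the Arduino, the Servo library's writeMicroseconds() can generate the actual pulses from this value.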

The brushless motors are Turnigy 3000KV, and weigh less than 5 grams. They are stuffed with neodymium magnets, and will last for years - probably outlast everything else, barring physical crash damage.

Oh, in case you're wondering about the small black can-like components, they are 5V 0.33F super-capacitors. All together, THREE FARADS of power supply smoothing will be distributed around the frame, at the points it is needed. That might power the droid for almost half a second - if the battery just fell off. Enough redundancy to soft-land. Perhaps.
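
A quick back-of-envelope check on that "almost half a second" (the 60 W draw and the 3 V cutoff voltage are my guesses, not measurements):

```cpp
#include <cassert>
#include <cmath>

// Usable energy in a capacitor discharged from vStart down to vCutoff:
// E = C * (V1^2 - V2^2) / 2. The load wattage and cutoff voltage below
// are illustrative assumptions, not measured figures.
double usableJoules(double farads, double vStart, double vCutoff) {
    return 0.5 * farads * (vStart * vStart - vCutoff * vCutoff);
}

double holdupSeconds(double farads, double vStart, double vCutoff,
                     double watts) {
    return usableJoules(farads, vStart, vCutoff) / watts;
}
```

Three farads from 5 V down to 3 V is about 24 J, which at a guessed 60 W draw is 0.4 seconds. Close enough to "almost half a second".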

So, if you're interested in autonomous quads, I'll be back to this topic very soon. I have two other robots to finish first, though. (They're coming along nicely, but I can't mention one until after Christmas! Shhh!)

Friday, November 29, 2013

People have been responding to the 50th Anniversary of Doctor Who in various ways. Apparently mine is to build Sonic Screwdrivers. The Fourth Doctor (Tom Baker) once mentioned in passing that his screwdriver had eight computers in it - an impossible throwaway line in 1978.

This wasn't entirely the original intention, but things just kind of escalated. (As they do.) The original prototype looked like this:

Arduino Leonardo, LCD screen, and MPU-6050 accelerometer/gyro.

But once the MPU-6050 was proven to work to my satisfaction, I wanted to build it into some kind of hand-held device that I could properly wave around without smashing.

There is a slightly serious background to this - I am a computer scientist with an interest in novel human-interface devices, and I'm planning to build some robots (automated telescopes and quadcopters) over Christmas that use the same sensor tech in their inertial navigation.

Think of this as the brain of a flying droid, but without the motors. A safe starting point.

Oh, and it has a bluetooth module in it as well, so theoretically one can connect a serial terminal to the screwdriver from a mobile phone, but that's proving a little trickier than I'd hoped. There's also an IR LED, so it could function as a universal remote if I programmed it to. If you like, you can accuse me of essentially re-inventing the Wiimote.

I find it makes sense to build a prototype, write some "hardware test" code that exercises it in some kind of demo mode, and then rebuild the hardware (with the software already installed) in stages so you can incrementally test that everything is working.

'Blank' case modelled roughly on Pertwee-era sonic.

This is all the internals assembled. Once I knew the rough size, I started making a 'case' out of PVC pipe, which is a cheap and easy material to work with. And I happen to have various sizes lying around the workshop.

Finding an appropriate "button" was one of the hardest parts. Eventually I pulled the switches from an old mouse and installed two.

Buttons added, channels to let the glowing out, and a thingie for the end. Charging up.

The LiPo battery is charged directly by the Arduino, so some calibration was needed to make sure it was treading the fine line between adequately charging, while not allowing flames to shoot out sideways.

Mostly assembled, just before the video. Lights being tested by USB commands.

I've since cut a hole for the battery power switch, and plugged up the open end with Silicone (which still lets the Arduino glow out) and once the new owner paints it, I'll post some final photos.

Only one thing hasn't quite turned out - the internal boost converter is actually providing better regulated 5V power than my computer does over USB. (Unexpected!) That means that when plugged in, it still preferentially draws power from the internal battery, which is not what one wants. Hence a hardware switch to isolate the battery. I will revisit this.

Of course, the hardware is in many ways the easy part. The point was to create a new platform for testing out ideas. The software has already taken far longer. The total hardware cost has been under $50 - it's essentially 'disposable'. But if you added up all my time, and costed me at my usual contracting rate, this is a multi-thousand dollar device. A bespoke artifact.

In the end, that's why The Doctor's sonic is so damn cool - Four hundred mythical years of software development, moved from device to device. That's a deep and insightful truth about Computing Science. Hardware is transitory, but code can be eternal.

Thursday, November 21, 2013

Summary

Here we go with another three cheap and cheerful modules from around eBay labelled as "Motor Drivers for Arduino".

A3967 EasyDriver $4.89

Dual HG7881 H-Bridge $2.30

ULN2003A "Stepper Driver" $1.78

The short version: I made smoke come out of the HG7881 under fairly normal operating conditions. The ULN2003A barely survived, probably because it can't go in reverse. The EasyDriver is a beautiful device, and the hands down winner, so long as you're driving a small stepper motor.

I'm still looking for a good solution to spin a couple of toy motors, though.

Schmalzhaus EasyDriver

So, let's start with the "winner" in terms of overall quality. This device just kept surprising me with its robustness and adaptability. So long as power was applied somewhere, in some form, it glowed its happy red LED and spun the motor around. 5V from the Arduino's USB Raw line, 12V from a plugpack, 6V from some batteries... whatevs.

This is how it's done.

Once powered, you only need to run the "Step" and "Dir" lines to the Arduino. All the rest are tied via pullups/downs to sensible 1/8 microstep defaults. In a real application I'd also want the reset and sleep pins under control for proper homing and power management, but that's a personal thing.
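
To give a feel for how simple Step/Dir control is: with the default 1/8 microstepping and a common 200-step (1.8 degree) motor, rotating by an angle is just counting pulses. (The function is mine, for illustration.)

```cpp
#include <cassert>

// Pulses needed on the Step pin to rotate a given angle. A typical
// 1.8-degree stepper has 200 full steps per revolution; at the
// EasyDriver's default 1/8 microstepping that's 1600 pulses per rev.
long pulsesForDegrees(double degrees, int fullStepsPerRev,
                      int microsteps) {
    double pulsesPerRev = (double)fullStepsPerRev * microsteps;
    return (long)(degrees / 360.0 * pulsesPerRev + 0.5);  // round
}
```

On the Arduino side you set the Dir pin once for the direction, then toggle Step that many times with a short delay between edges.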

The pin layout was strange at first, and seemed scattered around the board, but I understand the reasoning now - separating all the digital and motor power paths so that shorting any two adjacent pins is relatively benign.

This kind of sensible-ness probably comes from the fact this open source design is on its sixth iteration, going back as many years. All the bugs and weaknesses have been sorted out.

Unsurprisingly, it's another Allegro chip in there, literally along the same lines as the A4988 "StepStick" driver I reviewed last time, but an older device. And while the A4988 does seem to be able to throw more power around than the A3967, it also has a higher minimum required voltage.

That's really the principal difference: The A4988 doesn't operate below 8-9V, and the A3967 tends to overheat when you go above.

The newer A4988 is therefore for cases where 12V is your primary supply voltage, as in mains-powered devices like the RepRap. But the EasyDriver works best in the 5V range, and does it magnificently. In fact, it's the only driver I've used that could happily spin a stepper motor using a 4xAA battery pack.

So it will generally function from the same power source driving the Arduino, and uses fewer pins than anything so far. "EasyDriver" is the perfectly correct name.

It even has mounting holes. Luxury.

HG7881 Dual H-Bridge

So, here's one thing this chip is missing - thermal shutdown. I know this because I managed to release the magic smoke from one of the H-Bridges during a "back-and-forth" stress test (reversing the motor direction about twice a second). After about eight cycles the motor stopped, and in the silence and stillness I saw the tiny puff of smoke rising into the air before my nose. Like an ant had lit a minuscule cigar in condolence.

He's dead, Jim. And then the connector desoldered itself, and flew into my parts bin.

That's a shame, because otherwise this is a cool little module. It does the basic task of spinning a motor in either direction, and even handles 1 kHz PWM speed control. But I burned it out with a 4.8V battery pack and a Tamiya toy motor by reversing direction a few times, and that's why it costs $2.

What I was hoping to use it for... but it was too much power. Too much responsibility.

Although, granted, this wasn't a test I subjected the AdaFruit Shield to. (I really doubt it would survive any better.) And in fact, I would rank this above the V1 Motor Shield, since it was able to keep the two motors running at constant max speed while generating less heat. (I'd stopped the stress test on the AdaFruit well before then.) And it's easier to control - only four pins needed.

The test setup that killed it. Side by side comparison torturing. Joystick for variable killing control.

What this module might be good for (and I'll have to try it later with the remaining channel) is replacing the guts of small RC servo motors, to convert them for continuous rotation. Those DC motors are a whole level down in power requirement, and might be the right match.

The module might even be appropriate for its advertised use - driving stepper motors - so long as the motor was extremely small, and very slow.

Moments after death. Alas, no visible mark or melted plastic.

It would be interesting to see what this chip can do with better thermal management, but the board design won't let you slap a heatsink across it without shorting things. And those little packages are really bad at dissipating heat anyway.

ULN2003A "Stepper Driver"

Ha. "Stepper Driver". I've had some past experience trying to build stepper drivers out of the ULN2003A. Let's call it what it is: a block of power transistors, limited by the obvious problem with putting seven power transistors onto one die - you can't really have more than one turned on at once.

It's probably only my past experience with this chip that allowed the module to survive the tests and keep its magic smoke. I'm pretty sure it doesn't have thermal overload protection. (I don't intentionally set out to destroy hardware, but I do need to stress test before putting anything unknown into my droids.)

This chip was old 20 years ago. And that's kind of why it keeps turning up. It was never really intended to drive motors, (remember relays?) but "unipolar" stepper motors can be thought of as four relay coils, so some wag used it in a few early dot matrix printers, and so "stepper driver" it became.

But don't use it for that. Really. I hooked it up to a very similar stepper motor to the one I attached to the EasyDriver, energized one coil, and had to shut it down five seconds later when the heat level hit critical.

It's not very good for driving DC motors either, mostly because it can't reverse the current flow, so it only goes forward. It did out-survive the HG7881, probably because of that fact, since the equivalent "stress test" was half as stressful. (Plus the '2003 only has to manage current on one side of the motor - the H-Bridge cops it coming and going.)

What you can do with the dumb 'ol ULN2003A is all kinds of weird things the more specialized chips would freak out about. For example, the spindle motor of a hard drive is kind of odd - it's technically a three-winding "stepper" with a common terminal (almost a direct ancestor of modern brushless DC motors), so you have to pulse three pins in sequence to make it spin.

I wanted to try it out as a galvanometer, so I used the ULN2003A to keep one coil energized, and applied 1 kHz 'chopped' DC speed control to another winding to vary the current.

It did work - I could get the "needle" to swing to positions between the normal poles (like fine-control microstepping) but the repeatability was awful. Obvious in retrospect, given the spindle mass. Good 'galvos' have tiny spindles.

It has all kinds of other uses too. Powering a dozen LEDs. A simple speaker amp. As a level shifter to drive solid-state relays. For $1.80, it's a very versatile block of power transistors, and dumb-as-a-brick can have some advantages...

Sunday, November 17, 2013

Summary

While the old AdaFruit Motor Driver shield is cheap and versatile, it's also not very good. It struggles with even small motors, and if you want anything more sophisticated than "forward" and "back" then it's just not going to perform well.

The "StepStick" board is based on a more advanced chip that amazed me with its power. (I thought thermal shutdown was for certain, on a device that small without heatsinks... I was wrong.) But it is specialized to do one job - push stepper motors - and therefore lacks some flexibility to drive other motor types.

The A4988 is simply superior technology. Once you go DMOS, you realize how much BJTs suck. And having built stepper drivers before, I can appreciate its dead-simple microstepping interface. But the Motor Shield is easier to use.

Experimental Setup. AdaFruit motor driver + Leonardo on top,

Micro Pro + StepStick on the bottom.

Moving Stuff Around with Arduinos

Over the last few weeks, I've been putting some "robots" together. This is the first time I've added Arduinos to the mix, and so I wanted to try out some of the cheaper and more common hardware on eBay and see if it was any good.

I used various old motors and power supplies I had lying around, but the Arduinos and motor drivers are new, and barely weeks old at this point.

Arduino Leonardo (clone) $12

AdaFruit L293D Motor Shield (clone) $6.15

Micro Pro (clone) $7.30

RepRap StepStick (clone) $3.99

All the gear tested came from ShenZen or Hong Kong via eBay, and I'm generally very impressed with the quality and value of what I've received so far. A lot of this gear is often cheap because it's considered "flawed" in some way, but usually this is a result of design or specification, and the kind of classic mistakes (connectors on the 'wrong' side, from the point of view of convenience) that we've all made. The actual build quality is great.

AdaFruit Motor Shield (v1)

The AdaFruit motor shield is a good example. For $6.15, it is a lot of physical hardware. But the design isn't the best, and has dated pretty badly. I found it necessary to trim the pins on the connector blocks with some side-snips to make the shield fit correctly. (And I was also concerned by the sub-millimeter clearance between the M4 connector and the ICSP/SPI pins on the Leonardo... Hmm... high volts right next to the CPU. Safe!)

AdaFruit has actually stopped making this version - the v2 shield is much better in every way. But the simplicity and availability of the components means there will probably be clones made for years.

What Leonardo Compatibility Issues?

Astute readers might know that the v1 shield is not officially supported on the newer Leonardo, but I have a few skills, so that situation didn't last long. Physically it's compatible, but the Leo is a little different internally from the Uno or Nano and the stock library makes some assumptions.

I have a new class in my Droid library called the "Motivator" that can PWM any pin(s) for DC current control. Since it's the pin/PWM/Timer assignments that changed most on the Leo, this was the 'secret sauce' needed to make it run all four motors in DC chopper mode. If anyone needs the code, drop me a line.

Arduino Leonardo hidden by AdaFruit Motor Shield (v1)

And generally it works, but those L293D's get hot. And 'speed control' doesn't necessarily help - the transistors generate a lot of switching heat, so PWM can make things worse.
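
To put rough numbers on why PWM can make things worse (textbook first-order formulas with illustrative values, not measurements of this board): a darlington bridge like the L293D drops a volt or more per transistor while conducting, and every PWM edge adds a moment of half-on, half-off dissipation.

```cpp
#include <cassert>
#include <cmath>

// Conduction loss: the saturated transistors drop satVolts at the
// motor current, continuously. P = I * Vsat.
double conductionWatts(double amps, double satVolts) {
    return amps * satVolts;
}

// Switching loss: each PWM edge spends transitionSec partially on,
// dissipating roughly V*I/2 during that time; two edges per cycle.
// P = 0.5 * V * I * t_edge * f * 2.
double switchingWatts(double volts, double amps, double freqHz,
                      double transitionSec) {
    return 0.5 * volts * amps * transitionSec * freqHz * 2.0;
}
```

With the illustrative numbers (around 2-3 V total saturation drop at an amp), conduction alone is a couple of watts per motor - which is roughly why the little DIPs cook, and why faster PWM only adds to the bill.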

Classic Tamiya toy gearbox, with good quality DC motors.
Sounds like banshees in a sack when running.

The board was simply not capable of driving all four motors at once, or even two at once for any length of time. But in a "classroom" situation where you need to slap something together in 10 minutes which won't run for much longer than that, this is a fairly foolproof solution.

But for permanent emplacement, I just wouldn't trust it. And it uses up (or makes inaccessible, like the SPI connector) too many other resources that a 'real' project would need.

RepRap "StepStick" A4988

It took me a while to believe that the single chip below completely outperformed both L293D drivers in the Motor Shield, but it does. Allegro really need to be congratulated on producing a first-class device, and whoever at the RepRap project picked it as their driver of choice should also be given a round of applause.

The module drives bigger motors with more capability, requires fewer pins on the Arduino (always a critical thing) and frees up other internal microcontroller resources like timers. I ran it at twice the voltage of the Motor Shield (12V instead of 6V) with greater current. It has the usual modern array of thermal and short protection, which I inadvertently tested by wiring up the stepper wrong.

Pulled from an '80's era Epson 160-column dot-matrix printer.

They don't make 'em like they used to.

This was not the first stepper I tried, but the module was 'overdriving' my smaller motors and making them heat up (while remaining stone cold itself), so I wanted to see how large it could go. Driving this big motor at full current (actually more than twice its rated power) finally got the A4988 warm. Not hot, just warm.

I tested out the 1/16 microstepping, and it was smooth and precise. (little steppers are more jittery, but these big ones have enough spindle mass to microstep perfectly.)

I've also tried using the module to drive just a single DC motor - which does work quite well, although you are essentially sacrificing half the chip's capability by only using one channel. (Because the other channel is phase-linked, it is mostly useless for driving a second DC motor unless you have a "tank" configuration where sharing power between two wheels might make sense)

The "Micro Pro".

Leonardo equivalent, but a fraction of the size.

If you're wondering where the Arduino is, it's the other tiny board. I am loving the "Micro Pro". I lose two pins compared to its big brother, but also 9/10ths of the volume. Otherwise, it is exactly the same Atmel chip that's on the Leo.

The issue with the A4988 is that it's not a beginner's device, and it's not targeted at Arduino. There are no "libraries" to drive it, (although it's pretty simple) so I wouldn't ask a classroom full of kids to add them to their projects unless I wanted a lot of magic smoke to come out of drivers, arduinos, or even host PCs when they got the 5V and 12V lines mixed up.

But for old pros, it's a beautiful little chip, with many possibilities.

Conclusions

While the A4988 is a better driver, I'm coming to the understanding that both of these technologies have had their day. The only reason to get either is because you have a drawer full of old-school motors and projects already provisioned with them.

If you're designing a new device, then you want to use BLDCs - brushless DC motors - which are essentially the love child of both the previous technologies. Brushless motors are very much like three-phase steppers (instead of two-phase) with less "detents", or like DC motors with an 'electronic commutator'. These are hooked up to ESCs (Electronic Speed Controllers) that decode a digital PWM signal and drive the motors with momentary currents of 20 or 30 amps.
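
If you've never looked inside one, the 'electronic commutator' is surprisingly simple: a six-step sequence where one phase is driven high, one low, and one floats. (This is the textbook trapezoidal sequence, not pulled from any particular ESC's firmware; a real ESC also watches back-EMF or Hall sensors to know when to advance to the next step.)

```cpp
#include <cassert>

// Six-step commutation table for a 3-phase BLDC motor.
// Each row is {phase A, phase B, phase C}: 1 = driven high,
// -1 = driven low, 0 = floating.
const int COMMUTATION[6][3] = {
    { 1, -1,  0},   // A+ B-
    { 1,  0, -1},   // A+ C-
    { 0,  1, -1},   // B+ C-
    {-1,  1,  0},   // B+ A-
    {-1,  0,  1},   // C+ A-
    { 0, -1,  1},   // C+ B-
};

// Sanity check: a valid step drives exactly one phase high and one
// low, leaving the third floating.
bool validStep(const int row[3]) {
    int sum = 0, highs = 0, lows = 0;
    for (int i = 0; i < 3; i++) {
        sum += row[i];
        if (row[i] == 1)  highs++;
        if (row[i] == -1) lows++;
    }
    return sum == 0 && highs == 1 && lows == 1;
}
```

Step through the table in order and the magnetic field rotates; step through it backwards and the motor reverses. That's the whole trick.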

These BLDC motors are quieter, more efficient and more reliable than the previous generation, thanks to modern neodymium (and dysprosium) magnets, and our quantum FET mastery. Kudos to the Remote Control model community for pushing the envelope there.

If you look inside an ESC, you will often find what is essentially a commercial Arduino (Atmel 328 being quite popular) in charge of a bank of MOSFETs. A motor driver shield plus microcontroller in one. For $7. It's really hard to beat that. Some are reprogrammable.

If you're designing a new project, start with them. Forget steppers and DC. They're so last millennium.

Monday, November 11, 2013

These HC-SR04 ultrasonic ranging modules work brilliantly. They have a cone of about 40-60 degrees in front of them where pretty much anything - ceiling, coffee cup, pen, cloth, hand - will bounce the signal back and give a clear, clean, to-the-centimeter distance reading of the closest obstruction.

The only exception seems to be hitting smooth surfaces at high incident angles - in which case the return echo is bounced too well, and physically misses the receiver.
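
Converting the echo into a distance is a one-liner: the module reports round-trip time, so halve it and multiply by the speed of sound (roughly 0.0343 cm per microsecond at room temperature).

```cpp
#include <cassert>
#include <cmath>

// HC-SR04: the echo pulse width is the round-trip time of the ping,
// so divide by two for the one-way distance. 343 m/s at ~20 C is
// 0.0343 cm per microsecond; the speed shifts a little with
// temperature, which bounds the accuracy.
double echoToCm(double roundTripMicros) {
    return roundTripMicros * 0.0343 / 2.0;
}
```

So an echo of about 5831 microseconds means the nearest obstruction is a metre away.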

Did I mention I'm building a droid? Technically, a child's toy for Christmas, though I want it finished long before then. Actually, I'm building multiple "robots" at the moment, but this one is worth documenting.

eBay and the internet are making this a much easier task than it used to be. For $25, I got a complete 4wd "smart car chassis". I'm really hoping it arrives this week, since most of the rest of the pieces are already here and working.

This follows my current approach to robotics: Start with the sensors. It doesn't matter how fast you can go, if you don't know where you are.

as well as a couple of internal ones:
* Temperature
* Time
* Power Levels (maybe)

I've had each of these working individually now, but I'm learning one of the hard rules of Arduino development - combining devices together with one microcontroller gets exponentially more difficult, due to limited shared resources. Making all of those parts work together at once is taking some programming art. I've got it covered, though.

A new approach to programming toys.

When I was but a little programmer, I really wanted one of these:

To the point that I read and practically memorized the manual, while standing in the shop. I couldn't physically have one, but at least I could carry away the idea. In many ways that toy embodied programming as it was in the early 80's.

But as brilliant as it was (I should find out who designed it, and drop them a line... ooh, Nolan Bushnell might have been involved...) programming has changed considerably. I do not think an exact recreation of that toy would be beneficial to an understanding of modern computers. Here's what I mean:

Vincent implemented what was essentially a stripped-down BASIC (or perhaps LOGO) without loops or conditionals. A simple series of instructions like "Forward 80" or "Left 2x60" that would inevitably lead to a tumble down the stairs, which is why I assume that not many survive.

A robot like that, even with IF and REPEAT statements, spends most of its time doing 'nothing' but waiting for the timer on the current command to expire. Such droids are deaf, dumb, and insensible for most of their lives. And they act like it.

The only way to make such a 'bot respond to two things at once is essentially to write your own mini-scheduler for each unique combination. Fun!

But modern programming is all event driven. Not even multitasking, anymore. Node.js has shown us what happens if you stay single-threaded but make every call non-blocking - performance increases, and resource use decreases.

So I have reduced that concept to its essentials. My droid's "brain" is composed of 128 "signal" slots. Reading a signal is identical to reading a value out of an array, but changing a signal will cause any "codes" attached to that signal to execute, either "now" or "later".

The "later" timeout is probably going to be globally set to 1/10th of a second in early versions, but could be faster for droids that need more temporal resolution. What is critical is that the 'later' timeout is reset if another write to the signal is performed. (The 'now' codes are still executed immediately.)

There's an excellent reason for all this which involves a lot of computing science. With just those two 'event' types, we get a universal system.

A "code" is a set of arithmetic instructions almost identical to the standard accumulator logic of a pocket calculator, plus an if/skip statement.

On the surface, loops are not implemented - but any code can write another value to the signal which triggered it, causing it to trigger again. A classic "If signal>0 then signal = signal-1" will keep the code 'looping' until the counter reaches zero.

Infinite loops are not a bug in this environment. They are an essential way of getting things done. The interpreter is designed so that every signal could be "looping" simultaneously.

This creates a network of event-driven fragments of code. There are no explicit function calls either... code simply triggers other code by writing to signals. There are no parameters to pass - any needed values are read out of the signal array.

If you're into Comp. Sci., this is an extreme mash-up of core OCCAM and JavaScript concepts.

This all sounds a lot more complicated than it will appear in practice. "Codes" are simple lists of calculator arithmetic instructions (with the 128 signal registers being equivalent to memories), and each "Signal" is just two lists of codes: one to run now, one to run later. That's it. A universal Turing machine.

I'm hoping I can go entirely "symbolic" for this. (English is an assumption... "pre-linguistic" programming is possible.) Since only 128 signals and 64 code scripts are possible, each can be assigned a unique "icon" instead of number - to be redrawn by the droid's owner to suit their conceptions of what it means or does.

This "digital nervous system" can be patched and changed in real time without touching all the other signals or codes. Improvements to the droid's "program" can be made in-the-field with a remote control, without the tether of a programming cable or IDE. (Hey kids! Who wants to use an Integrated Development Environment? It's totally fun! Where are you going?)

The droid will slowly accumulate its responses over weeks and months (stored in EEPROM), rather than taking the "tabula rasa" approach of Vincent, who lost his memory on every reset and battery change.

No matter what cool new tricks you teach it, the droid should never forget that if it can't see the floor right in front of its face, it should stop going forward. (That's the third law of robotics!) Or what to do when power is low. Or how to fire the Lasers!

Toys are tools to spur the imagination, and our understanding of the world. Good toys are ones that can be woven into existing play, that can offer possibilities rather than constricting choices. That is a very fine line to walk when dealing with robots. I think the answer is to be very clear about limitations, but be as free as possible within them. Pre-approved capabilities, but not pre-approved uses.

I'll be releasing the code as soon as it's done. If you have an Arduino-based "collision avoidance robot" (not the official Arduino robot - I don't have one of those...) made out of a Leonardo/Atmega32u4, random cheap sensor modules, and a couple of motors, then you essentially have the same platform I'm building for. I personally think a compass is critical, (more than axis encoders) and you might even want to go for a full 6 or 9 DOF accelerometer/gyro/compass module. (GPS is probably way too much... probably...)

Oh, I've also written a "Beeps" module which makes a wonderful range of droid-sounding noises. I am genuinely astonished at the personality that a few warbled squeaks can imbue. I let it randomly generate "phrases" for a while, and kept the best sounding ones.

Wednesday, November 6, 2013

Have you seen how good accelerometer technology has got? I just bought three for $12. They have 16-bit resolution, and programmable range from 2 to 100G!

The tech has been improving ever since Nintendo introduced their Wii Nunchuck. Consumer mass production plus improving technology have driven some significant advances.

The techno-secret is the modern ability to etch Micro-Electro-Mechanical Systems (MEMS) onto a silicon wafer, alongside the electronics. Tiny metal-coated cantilevered beams, supported on silicon pivots, with masses measured in picograms.

I remember the original demos of this technology - micro-motors that seized up after ten seconds of runtime. A set of balance-scales that could weigh individual molecules. It was cool, but there didn't seem to be any obvious applications outside the chem lab. And silicon wafer tech works best when it's all sealed up, so it seemed like a mismatch of needs. Little did we know...

Turns out if you leave the molecule off the end of the tiny mass-scale, what you have is an inertial sensor. If the whole set of scales is moved to and fro in a way which affects the balance, then its measurement will reflect the acceleration it's under. Boom, you have an accelerometer. You'll need one for each XYZ axis, but hey, if we're etching them on a chip that's not really a problem. (Well, maybe the Z axis...)

The sensor in the Wii Nunchuck is now an early-generation analog model. The latest devices have better everything, including a major innovation - gyros!

Knowing your acceleration along the three cartesian axes is very useful - but there's a missing set of dimensions: rotations. If you rotate a 'simple' accelerometer, it has trouble distinguishing that from a lateral movement. A sideways impulse and a twist will both change the direction of the acceleration vector due to gravity (hereafter called 'down') by roughly the same angle, though the twist will generally leave the vector the same length. That subtle difference can be hard to pick out through sampling noise.

However, if you have two accelerometers a distance apart, then you can measure the instantaneous differential between them, which corresponds to the rotational motion. Their common, correlated component corresponds to the pure linear acceleration.

That's how good the sensors have got. They can measure this difference across the chip. Barely two millimeters.

This is powering a whole new generation of mechanical devices which know exactly where they are, and how they are moving. Exactly. Better than you do. Better than GPS, in a relative moment-to-moment sense.

One offshoot is the amazing new "Toy Quadcopters" you can buy (I just did), that sit in the air like a small flying drinks tray. Or other little home-made robots I've seen that balance on two wheels in the same manner as a Segway. (And can also carry drinks on a tray. Robotics people clearly get very thirsty.)

I'm going to use them on my telescope (along with a digital compass module) to tell which way it's pointed, and how it's moving under the influence of the motorized mount. This bypasses all the normal crap with axis encoders, which is excellent if you want to count motor shaft rotations. (because all your mechanics are perfect, predictable, and wobble-free. Ha!) That's the only case where taking measurements half-way through the mechanical chain can be expected to approximate the thing you really care about - where the optical tube is pointed relative to 'down' and 'north'.

16 bits (well, 14 really, on the 2G range including plus and minus sides) is a LOT of 'down' accuracy. As good as counting stepper increments before gearing, according to the math.

I'm also building some toy robots with the tech, but that's hardly new. Also perhaps a rugged 'wand'-like UI device that uses gestures to change settings, rather than buttons. Buttons are expensive. And big. And so... binary.

That's what happens when a 'real' computer scientist like me gets ahold of actual devices like this. I make them dance.

And while some of it is ported code from previous microcontroller projects, that's not the bulk of it. In fact, the ported code actually slowed me down, because re-adapting code for a different environment (especially subtle algorithms like red-black trees) requires you to be thinking simultaneously about two systems and all their differences.

I've been adding code and drivers pretty much as the hardware comes in. That would be more impressive if "driver" in this case wasn't 20 lines of real code, and the "devices" weren't more than a single chip that talks one of the standard inter-chip serial protocols. Or things arrived faster from Hong Kong. That would be nice.

I've had one partial failure so far, which took me a little time to figure out. The compass module is apparently a 3.3v I2C interface, and can't quite muster a signal of wide enough range to drive the 5v expectant microcontroller pins. Here's what that looks like on the digital oscilloscope:

The Arduino knows it's there, kind of, but the signal isn't clean enough to decode properly - all those "mid-way stops" instead of slamming straight from floor to ceiling in nice square-wave pulses. It does, rarely, manage a successful exchange, so I know the chip works.

Speaking of 'scope traces, I've also been playing around with "sonic" technology, in the form of 40kHz ultrasonic transducers. (When you stick one on the end of your oscilloscope probe, it looks a lot like the Pertwee-era sonic screwdriver.)

Here's what it looks like when you apply a single 'impulse' to the transmitter crystal (just a 5v-0v level change, in this case, which 'rings' the transmit crystal like hitting a bell, or tapping a wine glass.) and the resulting waveform generated by the receiver crystal. (which was placed face-to-face in this experiment.)

The high-frequency part of that signal isn't coming from an electronic source. It's purely the natural resonance of the piezoelectric crystal pair, as the "bell peal" dies away. I expect the actual output from the transmitter starts with a huge pulse and then exponentially decays, but it takes 8-10 cycles for the receive crystal to begin resonating in response, and even longer (40-60 cycles) for it to "reset".

Not actually what I expected. So I learned something. Science!

I'm waiting for a couple of these to arrive in the post:

Which are integrated range-finder modules, but I can tell (just by looking at the circuit board in photos) how to hack it to act in a slightly more sophisticated manner, so that a couple of Arduinos can communicate using the same original technology - ultrasonic remote TV controls - that possibly inspired some beloved fiction.

(Essentially, the middle chip is an 8-bit microcontroller. Remove that, and put the Arduino directly in charge!)

Well.. I might use phase-shift keying instead of direct amplitude modulation, because it's now so very easy to write a tiny daemon and dedicate it to the task.

We've got so used to 'general purpose' computers that we've neglected the advantages you get by putting a single dedicated, unhackable processor in charge. One that will continue doing exactly what you told it to, ten thousand times a second, until its 9v battery runs out. Maxwell's Demons, in silicon form.

Thursday, September 26, 2013

So, I stayed up late last night looking though my local Home Hardware superstore's on-line catalog, as I sometimes do, and had one of my ideas. So I went there today, bought some stuff, and before the sun went down I managed to make this.

It's a telescope mount, built from galvanized iron pipe, brass fittings, threaded rod, and some roller-skate bearings. The action is so smooth, I can start it spinning and come back a minute later.

Australians will recognize this as another example of Hills-Hoist based technology.

You can just see one of the bearings on the lower swivel. I've seen a lot of iron pipe used in the construction of amateur 'scope stands and mounts (and even the optical tube assembly) but I've taken it a step further and used matching brass fittings to create bearing blocks that accept standard ABEC 608s.

The bearings can be quickly removed, and disassembled, as I did just then. Those brass parts cost $4 each new, the skate bearings cost me about a buck each back in the day (and have been sitting on a shelf for years) and I had the nuts and 8mm threaded rod lying around from previous projects.

The real secret (and most expensive item) was the special "self-centering step drill" bit that I used to ream out the ends of the fittings, to create a 22mm 'seat' for the bearing to sit within. I chucked this bit into my drill press, and manually turned the bit, (as if I were tapping a hole) instead of powering up the press.

I have a lathe, but really didn't want to use it for this. Partly this was to find a method that works for people who don't have access to a small machine shop, and partly because I just don't like lathes - they're insanely dangerous. "Experiment" and "lathe" are two words that do not go together well.

Each only took a minute or two to carefully ream out to a depth of a couple of millimeters, even by hand. Brass is a lovely material to work... you can feel the chips coming off. (Brass was cheaper and more accurately machined than the equivalent galvanised steel, which was clearly die-cast, and had terrible dimensional accuracy. Don't substitute iron or steel, or use cast parts, you will regret it.)

Actually, the 22mm drill step left the fit a little too snug. The bearings would go in, but getting them out again took a small impact hammer. So I put a small grinding bit in my Dremel, and 'dusted' off a few more thou (just enough to smooth away the drill tooling marks, really) and now the bearings drop in and fall out perfectly. A file, or some sandpaper and time would have worked too.

Total cost for the two bearing assemblies? $20 in parts, and $25 in tooling.

Given that I've seen people using four 'pillow block' bearings that cost $15 each on eBay (if you're lucky), I consider this a win. Especially since mine are a fraction of the size, inline, and require no extra mounting hardware to attach to the pipe.

I not only have a new telescope mount, I have a technique for making them at will.

Also, those facing inline threads will come in very useful when I start attaching motors and gearing and torque plates. I have some designs already, but obtaining Arduinos and servo motors will take a little time. (Assuming I don't reallocate my CNC steppers.)

Sunday, September 22, 2013

So this is where I'm at with my 3D on-line virtual astrometrics laboratory:

A big sphere, modelled on the Hayden Planetarium, stuffed with visualizations of planetary and solar datasets (middle floor), an accurate 3D orrery (top floor), and a signal processing lab (bottom floor), floating in a virtual skybox.

I don't even have proper gravity yet, it's that primitive. :-)

I've had this web page hooked up to my telescope, and used it to process astrovideo data in real-time. It's not just a pretty face.

Pretty much everything is improved. Framerates are up. Caches are in. The LEP module now only requires 5MB of download to predict the moon's position to 1 meter accuracy, and runs twice as fast. The comms framework has moved beyond bouncing chat messages from ipad to desktop, to a level which allows peer-to-peer video streaming, in theory. (From WebSockets to WebRTC.)

Actually, I seem to be in a slight project pause. This happens from time to time. Usually once all my first-round technology goals are complete, and I realize it's all going to work as well as I hoped, I get kind of stunned by the sheer scale of it. It seems to be part of the process.

Step 1 is to dream your impossible dreams. That's the easy and fun part. Technology is the science and art of making the impossible real, so set out to do something impossible, but cool.

Step 2 is to learn why it's currently impossible. It often comes down to some critical component not being available, like perfect diodes, or superconductors, or a non-exponential algorithm.

Step 3 is to look around and realize that someone created exactly that last year, or found a way to make them unnecessary. Read lots of papers and science journals.

Now you're off to the races.

Because now you're in possession of that most valuable kind of knowledge: a single true fact that 99.999% of other people still believe to be false. You know it can be done, against the prevailing common-sense.

Of course, if you discover the critical component is still impossible, well, too bad. Project over. Try again some other day.

Failure is quick and easy. You write "does not work" and move on to a different approach. It's the usual outcome. It's when it all goes right that you have to occasionally stop for breath... In my case because I didn't really plan this far ahead. I wasn't sure I'd get here.

What matters now is engaging other people in the project, hence the very literal focus on 'Player Avatars' and peer-to-peer protocols. This is where the project strays slightly from the core astronomy focus, and starts involving psychology. For example, the particular choice of avatar that you 'inhabit' can have a profound impact on your self-perception, and even your ability to learn.

For example, I'm thinking of representing all 'players' in the environment as Astronauts. One-size-fits-all generic international space-suit to start with. You may get customizable patches. Michelle had the brilliant idea of letting the suits be 'marked up' by the environment - spend a lot of time in front of the Sun, and you get tanned. That kind of thing. If you have a webcam pointed at your head, we might even be able to go around 'visors up' and see each other's real faces.

At the opposite end of the scale, imagine if, upon entering what was purported to be a Massively Multi-Scientist Online Research Environment, you were embodied in the avatar of Donald Duck. A world populated with multi-coloured giant Donald Ducks. There would be an element of cognitive dissonance that would not serve the intentions of the environment.

One excellent researcher said (I paraphrase) "Avatars have a profoundly positive influence when they represent us at our best. As we wish to be."

I'm guessing if you're insanely, professionally interested in the stars, you secretly always wanted to be an astronaut. I know I did.

To this end, I've been watching documentaries on Space Suits from around the world. Even standardizing on suits has the opportunity to introduce a vast range of what I like to think of as 'virtual hats'. From homages to history, through to some rather stylish modern threads.