Archive for 2012

After yesterday’s reading and decoding exploration, here’s some code which will happily play back my daily log files, of which I now have over 4 years’ worth …

Sample output:

As you can see, this supports scanning entire log files, both plain text and gzipped. In fact, JeeMonLogParser@parseStream should also work fine with sockets and pipes:

The beauty – again – is total modularity: both the real serial interface module and this log-replay module generate the same events, and can therefore be used interchangeably. As the decoders work independently of either one, there is no dependency (“coupling”) whatsoever between these modules.

Not to worry: from now on I won’t bore you with every new JavaScript / CoffeeScript snippet I come up with – just wanted to illustrate how asynchronous I/O and events are making this code extremely easy to develop and try out in small bites.

I’m starting to understand how things work in Node.js. Just wrote a little module to take serial output from the RF12demo sketch and decode its “OK …” output lines:

Sample output:

This quick demo only has decoders for nodes 3 and 9 so far, but it shows the basic idea.

This relies on the EventEmitter class, which offers a very lightweight mechanism of passing around objects on channels, and adding listeners to get called when such “events” happen. A very efficient in-process pub-sub mechanism, in effect!

Here is the “serial-rf12demo” module which does the rest of the magic:

For quite some time, I’ve wanted to know just how much current the RFM12B module draws on power-up. Well, time for a test using the power booster described recently:

So the idea is to apply a sawtooth signal to the RFM12B, rising from 0 to 3V at the rate of say 10 Hz, and to measure the voltage drop across a 100 Ω resistor at the same time. This will have a slight effect on measurement accuracy – but no more than 2%, so I’m ok with it.
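That 2% figure is easy to sanity-check with a quick back-of-the-envelope calculation, using the 650 µA peak current reported below:

```javascript
// Worst-case measurement error: the voltage drop across the 100 Ω
// shunt resistor, relative to the 3 V supply.
const shunt = 100;        // Ω
const peak = 650e-6;      // A, peak current draw
const supply = 3.0;       // V

const drop = shunt * peak;            // 0.065 V lost across the resistor
const errPct = drop / supply * 100;   // ≈ 2.2 %
console.log(drop, errPct.toFixed(1) + '%');
```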

Here is the outcome:

The yellow trace is VCC, the supply voltage – from 0..3V. The magenta trace is the current consumption, which turns out to be 0..650 µA. As you can see, the current draw quickly rises between 1 and 2V, and then continues to increase sort of linearly.

Note that this power consumption can’t be reduced: we don’t have the ability to send any commands to the RFM12B until it has started up!

This type of analysis can also be done using the X-Y mode on most oscilloscopes:

It’s essentially the same picture as before: the sawtooth rises linearly, so in this case the voltage axis is equivalent to the time axis. Here’s what happens when the input signal is switched to a sine wave:

As expected, the essence of the curve hasn’t changed one bit. Because it really doesn’t matter how we vary VCC over time. But there’s an intriguing split in the curve – this is most likely caused by a different current consumption when VCC is rising vs when it is dropping. Keep in mind that the changes are occurring at 10 Hz, so there’s bound to be some residual charge in the on-board capacitors of the RFM12B module.

Anyway. It’s a bit of a silly distraction to do things this way, but now I do have a better idea of how current consumption increases on startup. This relatively high 0.65 mA current draw was the main reason for including a MOSFET in the new JeeNode Micro v2, BTW.

Note that the LED Node comes with pre-soldered SMD MOSFETs so you don’t have to fiddle with ‘em.

The LED Node is really just a JeeNode with a different layout and 3 high-power MOSFET drivers, to control up to 72W of RGB LED strips through the ATmega’s hardware PWM. Since there’s an RFM12B wireless module on board, as well as two free JeePorts, you can do all sorts of funky things with it.

As usual, the build progresses from the flattest to the highest components, so that you can easily flip the PCB over and press it down while soldering each wire and pin.

Let’s get started! So we begin with 7 resistors and 1 diode (careful, the diode is polarised):

Be sure to get the values right: 3x 1 kΩ, 3x 1 MΩ, and 1x 10 kΩ (next to the ATmega).

(note: I used three 100 kΩ resistors instead of 1 MΩ ones, as that’s what I had lying around)

Next, add the 4x 0.1 µF capacitors and the IC socket – lots of soldering to do on that one:

Then the MCP1702 regulator and the electrolytic capacitor (both are polarised, so here too, make sure you put them in the right way around), as well as the male 6-pin FTDI header:

Soldering the RFM12B wireless radio module takes a bit of care. It’s easiest if you start off by adding a small solder dot and hold the radio while making the solder melt again:

Then solder the remaining pins (I tend to get lazy and skip those which aren’t used, hence not all of them have solder). I also added the 3-pin orange 16 MHz ceramic resonator, the antenna wire, the two port headers, and the big screw terminal for connecting power:

Celebration time – we’ve completed the assembly of the LED Node v2!

Here’s a side view, with the ATmega328 added – as you can see it’s much flatter than v1:

And here’s a top view of the completed LED Node v2, in all its glory:

You can now connect the FTDI header via a USB BUB, and you should see the greeting of the RF12demo sketch, which has been pre-loaded onto the ATmega328.

To get some really fancy effects, check out the Color-shifting LED Node post from a while back on this weblog. You can adjust it as needed and then upload it through FTDI.

Next step is to attach your RGB strip (it should match the 4-pin connector on the far left). Be sure to use fairly sturdy wires as there are up to 2 amps going through each color pin and a maximum of 6 amps total through the “+” connector pin!

Lastly, connect a 12V DC power supply (making absolutely sure to get the polarity right!) and you will have a remote-controllable LED strip. Enjoy!

Maybe it’s a bit soon-ish to talk about this, but I often like to go slightly against the grain, so with everybody planning to look back at 2012 a few days from now, and coming up with interesting things to say about 2013 – heck, why not travel through time a bit early, eh?

The big events for me this year were the shop hand-over to Martyn and Rohan Judd (who continue to do a magnificent job), and a gradual but very definitive re-focusing on home energy saving and software development. Product development, i.e. physical computing hardware, is taking place in somewhat less public ways, but let me just say that it’s still as much part of what I do as ever. The collaboration with Paul Badger of Modern Device is not something you hear from me about very much, but we’re in regular and frequent discussion about what we’re both doing and where we’d like to go. For 2012, I’m very pleased with how things have worked out, and mighty proud to be part of this team.

The year 2012 was also the year which brought us large-scale online courses, such as Udacity and Coursera. I have to admit that I signed up for several of their courses, but never completed them. Did enough to learn some really useful things, but also realised that it would take probably 2 full days per week to actually complete this (assuming it wouldn’t all end up being above my head…). At the time – in the summer – I just didn’t have the peace of mind to see it through. So this is back on the TODO list for now.

My shining light is Khan Academy, an initiative which was started in 2006 by one person:

To me, this isn’t about the Khan Academy, Salman Khan, John Resig, or JavaScript. What is happening here is that education is changing in major ways, and now the tools are changing in equally fundamental ways. This world is becoming a place for people who take their future into their own hands. And there’s nothing better than the above to illustrate what that means for a domain such as Computer Science. This isn’t about a better teacher or a better book – this is about a new way of learning. On a global scale.

The message is loud and clear: “Wanna go somewhere? Go! What’s holding you back?” – and 2012 is where it all switched into a higher gear. There are more places to go and learn than ever, and the foundations of that learning are more and more based on open source – meaning that you can dive in as deep as you like. Given the time, I’d actually love to have a good look inside Node.js one day… but nah, not quite yet :)

I’ve been rediscovering this path recently, trying to understand even the most stupid basic aspects of this new (for me) programming language called JavaScript, iterating between total despair at the complexity and the breadth of all the material on the one hand, and absolute delight and gratitude as someone answered my question and helped me reach the next level. Wow. Everything is out there. BSD/MIT-licensed. Right in front of our nose!

All we need is fascination, perseverance, and time. None of these are a given. But we must fight for them. Because they matter, and because life’s too short for anything less.

So – yes, a bit early – for 2013, I wish you lots of fascination, perseverance… and time.

It’s all about dynamics, really. When software becomes so dynamic that you see the data, then all that complex code will vanish into the background:

This is the transformation we saw a long time ago when going from teletype-based interaction to direct manipulation with the mouse, and the same is happening here: if the link between physical devices and the page shown on the web browser is immediate, then the checkboxes and indicators on the web page become essentially the same as the buttons and the LEDs. The software becomes invisible – as it should be!

That demo from a few days back really has that effect. And of course then networking kicks in to make this work anywhere, including tablets and mobile phones.

But why stop there? With Tcl, I have always enjoyed the fact that I can develop inside a running process, i.e. modify code on a live system – by simply reloading source files.

With JavaScript, although the mechanism works very differently, you can get similar benefits. When launching the Node.js based server, I use this command:

nodemon app.coffee

This not only launches a web server on port 3000 for use with a browser, it also starts watching the files in the current directory for changes. In combination with the logic of SocketStream, this leads to the following behavior during development:

when I change a file such as app.coffee or any file inside the server/ directory, nodemon will stop and relaunch the server app, thus picking up all the changes – and SocketStream is smart enough to make all clients re-connect automatically

when changing a file anywhere inside the clients/ area, the server sends a special request via WebSockets for the clients, i.e. the web browser(s), to refresh themselves – again, this causes all client-side changes to be picked up

when changing CSS files (or rather, the Stylus files that generate it), the same happens, but in this case the browser state does not get lost – so you can instantly view the effects of twiddling with CSS

Let me stress that: the browser updates on each save, even if it’s not the front window!

The benefits for the development workflow are hard to overstate – it means that you can really build a full client-server application in small steps and immediately see what’s going on. If there is a problem, just insert some “console.log()” calls and watch the server-side (stdout) or client-side (browser console window).

There is one issue, in that browser state gets lost with client-side code changes (current contents of input boxes, button state, etc), but this can be overcome by moving more of this state into Redis, since the Redis “store” can just stay running in the background.

All in all, I’m totally blown away by what’s possible and within reach today, and by the way this type of software development can be done. Anywhere and by anyone.

That happens to include NPM, the Node Package Manager, so all I had to do was add the NPM bin dir to my PATH (in .bash_profile, for example) for globally installed commands to be found – PATH=/usr/local/share/npm/bin:$PATH

Not there yet, but I wanted to point out that Xcode plus Homebrew (on Mac – other platforms have their own variants) provide the base, with Node.js and NPM as foundation for everything else. Once you have those installed and working smoothly, everything else is a matter of obtaining packages through NPM as needed and running them with Node.js – a truly amazing software combo. NPM can also handle uninstalls & cleanup.

Let’s move on, shall we?

install SocketStream globally – npm install -g socketstream
(the “-g” is why PATH needs to be properly set after this point)

open a second browser window on the same URL, and marvel at how a chat works :)

So there’s some setup involved, and it’s bound to be a bit different on Windows and Linux, but still… it’s not that painful. There’s a lot hidden behind the scenes of these installation tools. In particular npm is incredibly easy to use, and the workhorse for getting tons and tons of packages from GitHub or elsewhere into your project.

The way this works, is that you add one line per package you want to the “package.json” file inside the project directory, and then simply re-run “npm install”. I did exactly that – adding “serialport” as dependency, which caused npm to go out, fetch, and compile all the necessary bits and pieces.
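As an illustration (the project name and version are made up), the resulting package.json looks along these lines:

```json
{
  "name": "demo-app",
  "version": "0.0.1",
  "dependencies": {
    "serialport": "*"
  }
}
```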

For yesterday’s demo, the above was my starting point. However, I did want to switch to CoffeeScript and Jade instead of JavaScript and HTML, respectively – which is very easy to do with the js2coffee and html2jade tools.

These were installed using – npm install -g js2coffee html2jade

And then hours of head-scratching, reading, browsing the web, watching videos, etc.

But hey, it was a pretty smooth JavaScript newbie start as far as I’m concerned!

Here’s a fun experiment – using Node.js with SocketStream as web server to directly control the LEDs on a Blink Plug and read out the button states via a JeeNode USB:

This is the web interface I hacked together:

The red background comes from pressing button #2, and LED 1 is currently on – so this is bi-directional & real-time communication. There’s no polling: signalling is instant in both directions, due to the magic of WebSockets (this page lists supported browsers).

I’m running blink_serial.ino on the JeeNode, which does nothing more than pass some short messages back and forth over the USB serial connection.

The rest is a matter of getting all the pieces in the right place in the SocketStream framework. There’s no AngularJS in here yet, so getting data in and out of the actual web page is a bit clumsy. The total code is under 100 lines of CoffeeScript – the entire application can be downloaded as ZIP archive.

Here’s the main client-side code from the client/code/app/app.coffee source file:

(some old stuff and weird coding in there… hey, it’s just an experiment, ok?)

The client side, i.e. the browser, can receive “blink:button” events via WebSockets (these are set up and fully managed by SocketStream, including reconnects), as well as the usual DOM events such as changing the state of a checkbox element on the page.

And this is the main server-side logic, contained in the server/rpc/serial.coffee file:

The server uses the node-serialport module to gain access to serial ports on the server, where the JeeNode USB is plugged in. And it defines a “sendCommand” which can be called via RPC by each connected web browser.

Most of the work is really figuring out where things go and how to get at the different bits of data and code. It’s all in CoffeeScript (i.e. JavaScript) on both client and server, but you still need to know all the concepts to get to grips with it – there is no magic pill!

There are tons of ways to make web pages dynamic, i.e. have them update in real-time. For many years, constant automatic full-page refreshes were the only game in town.

But that’s more or less ignoring the web evolution of the past decade. With JavaScript in the browser, you can manipulate the DOM (i.e. the structure underlying each web page) directly.
This has led to an explosion of JavaScript libraries in recent years, of which the most widespread one by now is probably jQuery.

In jQuery, you can easily make changes to the DOM – here is a tiny example:
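(The snippet itself isn’t included above – purely as an illustration of the style, with made-up ids and values, it might read:)

```html
<ul id="readings"></ul>
<script src="http://code.jquery.com/jquery-1.8.3.min.js"></script>
<script>
  // append one list item per value
  var values = [21.5, 22.1, 19.8];
  $.each(values, function (i, v) {
    $('#readings').append('<li>sensor ' + i + ': ' + v + '</li>');
  });
</script>
```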

And sure enough, the result comes out as:

But there is a major problem – with anything non-trivial, this style quickly ends up becoming a huge mess. Everything gets mixed up – even if you try to separate the JavaScript code into its own files, you still need to deal with things like loops inside the HTML code (to create a repeated list, depending on how many data items there are).

And there’s no automation – the more little bits of dynamic info you have spread around the page, the more code you need to write to keep all of them in sync. Both ways: setting items to display as well as picking up info entered via the keyboard and mouse.

There are a number of ways to get around this nowadays – with a very nice overview about seven of the mainstream solutions by Steven Sanderson.

I used Knockout for the RFM12B configuration generator to explore its dynamics. And while it does what it says, and leads to delightfully dynamically-updating web pages, I still found myself mixing up logic and presentation and having to think about template expansion more than I wanted to.

Then I discovered AngularJS. At first glance, it looks like just another JavaScript all-in-the-browser library, with all the usual expansion and looping mechanisms. But there’s a difference: AngularJS doesn’t mix concepts, it embeds all the information it needs in HTML elements and attributes.

AngularJS manipulates the DOM structure (better than XSLT did with XML, I think).

Here’s the same example as above, in Angular (with apologies for abusing ng-init a bit):
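(That snippet isn’t included above – as an illustration, with made-up names and an example CDN URL, a minimal AngularJS 1.x page in this spirit might look like:)

```html
<html ng-app>
<head>
  <script src="http://ajax.googleapis.com/ajax/libs/angularjs/1.0.3/angular.min.js"></script>
</head>
<body ng-init="values = [21.5, 22.1, 19.8]">
  <ul>
    <!-- one list item per value, kept in sync by AngularJS -->
    <li ng-repeat="v in values">sensor {{$index}}: {{v}}</li>
  </ul>
</body>
</html>
```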

The “ng-app” attribute is the key. It tells AngularJS to go through the element tree and do its magic. It might sound like a detail, but as a result, this page remains 100% HTML – it can still be created by a graphics designer using standard HTML editing tools.

More importantly, this sort of coding can grow without ever becoming a mix of concepts and languages. I’ve seen my share of JavaScript / HTML mashups and templating attempts, and it has always kept me from using JavaScript in the browser. Until now.

Another little demo I just wrote can be seen here. More physical-computing related. As with any web app, you can check the page source to see how it’s done.

For an excellent introduction about how this works, see John Lindquist’s 15-minute video on YouTube. There will be a lot of new stuff here if you haven’t seen AngularJS before, but it shows how to progressively create a non-trivial app (using WebStorm).

If you’re interested in this, and willing to invest some hours, there is a fantastic tutorial on the AngularJS site. As far as I’m concerned (which doesn’t mean much) this is just about the best there is today. I don’t care too much about syntax (or even languages), but AngularJS absolutely hits the sweet spot in the browser, on a conceptual level.

AngularJS is from Google, with MIT-licensed source on GitHub, and documented here.

And to top it all off, there is now also a GitHub demo project which combines AngularJS on the client with SocketStream on the server. Lots of reading and exploring to do!

As I dive into JavaScript, and prompted by a recent comment on the weblog, it
occurred to me that it might be useful to create a small list of books and
resources, for those of you interested in going down the same rabbit hole and starting out along a similar path.

Grab some nice food and drinks, you’re gonna need ‘em!

First off, I’m assuming you have a good basis in some common programming language,
such as C, C++, or Java, and preferably also one of the scripting languages, such as
Lua, Perl, Python, Ruby, or Tcl. This isn’t a list about learning to program,
but a list to help you dive into JavaScript, and all the tools, frameworks, and
libraries that come with it.

Because JavaScript is just the enabler, really. My new-found fascination with
it is not the syntax or the semantics, but the fast-paced ecosystem that is evolving around
JS.

One more note before I take off: this is just my list. If you don’t agree, or
don’t like it, just ignore it. If there are any important pointers missing (of course there are!),
feel free to add tips and suggestions in the comments.

JavaScript

There’s JavaScript (the language), and there are the JavaScript environments (in the
browser: the DOM, and on the server: Node). You’ll want to learn about them all.

In the browser

Next on the menu: the DOM,
HTML, and
CSS. This is the essence
of what happens inside a browser. Can be consumed in
small doses, as the need arises. Simply start with the just-mentioned Wikipedia links.

Not quite sure what to recommend here – I’ve picked this up over the years. Perhaps w3schools, this, or this. Focus on HTML5 and CSS3, as these are the newest
standards.

On the server

There are different implementations of JavaScript, but on the server, by far
the most common implementation seems to be Node.js. This
is a lot more than “just some JS implementation”. It comes with a standard
API, full of useful functions and objects.

Node.js is geared towards asynchronous & event-driven operation. Nothing blocks,
not even a read from a local disk – because in CPU terms, blocking takes too long.
This means that you tend to call a “read” function and give it a “callback”
function which gets called once the read completes. Very very different frame of
mind. Deeply frustrating at times, but essential for any non-trivial app which
needs to deal with networking, disks, and other “slow” peripherals. Including us mortals.

See also this great (but fairly long) list of tutorials, videos, and books at
Stack Overflow.

SPA and MVC

Note that JavaScript on the server replaces all sorts
of widespread approaches: PHP, ASP, and such. Even advanced web frameworks such as Rails and Django don’t play a role here. The server no longer acts as a templating system generating
dynamic web pages – instead it just serves static HTML, CSS, JavaScript, and image files, and responds to requests via Ajax or WebSockets (often using JSON in both directions).

The term for this is Single-page web application, even though it’s not about staying on a single page (i.e. URL) at all costs.
See this website for more background – also as PDF.

The other concepts bound to come up are MVC and MVVM.
There’s an article about MVC at A List Apart. And here’s an online book with probably more than you want to know about this topic and about JavaScript design patterns in general.

In a nutshell: the model is the data in your app, the view is its presentation (i.e. while browsing), and the controller is the logic which makes changes to the model. Very (VERY!) loosely speaking, the model sits in the server, the view is the browser, and the controller is what jumps into action on the server when someone clicks, drags, or types something. This simplification completely falls apart in more advanced uses of JS.

Dialects

I am already starting to become quite a fan of CoffeeScript,
Jade, and Stylus.
These are pre-processors for JavaScript, HTML, and CSS, respectively. Totally optional.

CoffeeScript is still JavaScript, so a good grasp of the underlying semantics is important.

It’s fairly easy to read these notations with only minimal exposure to the
underlying language dialects, in my (still limited) experience. No need to use them
yourself, but if you do, the above links are excellent starting points.

Just the start…

The above are really just pre-requisites to getting started. More
on this topic soon, but let me just stress that good foundational understanding of
JavaScript is essential. There are crazy warts in the language (which Douglas
Crockford frequently points out and explains), but they’re a fact of life that
we’ll just have to live with. This is what you get with a language which has now
become part of every major web browser in the world.

Graphs used to be made with gnuplot or RRDtool. Both generated on the server and then presented as images in the browser. This used to be called state of the art!

But that’s sooo last-century …

Then came JavaScript libraries such as Flot, which uses the HTML5 Canvas, allowing you to draw the graph in the browser. The key benefit is that these graphs can be made dynamic (updating through real-time data feeds) and interactive (so you can zoom in and show details).

But that’s sooo last-decade …

Now there is this, using the latest HTML5 capabilities and resolution-independent SVG:

That picture doesn’t really do justice to the way some of these tools adjust dynamically and animate on change. All in the web browser. Stunning – in features and in variety!

I’ve been zooming in a bit (heh) on tools such as Rickshaw and NVD3 – both with lots of fascinating examples. Some parts are just window dressing, but the dynamics and real-time behaviour will definitely help gain more insight into the underlying datasets. Which is what all the visualisation should be about, of course.

Another interesting project is Dashku, based on SocketStream and Raphaël. It’s a way to build a live dashboard – the essence only became clear to me after seeing this YouTube video. As you build and adjust it in edit mode, you can keep a second view open which shows the final result. Things automatically get synced, due to SocketStream.

Now, if only I knew how to build up my fu-level and find a way into all this magic…

It all starts with baby steps. Let me just say that it feels very awkward and humbling to stumble around in a new programming language without knowing how things should be done. Here’s the sort of gibberish I’m currently writing:

This must be the ugliest code I’ve ever written. Not because the language is bad, but because I’m trying to convert existing code in a hurry, without knowing how to do things properly in JavaScript / CoffeeScript. Of course it’s unreadable, but all I care for right now, is to get a real-time data source up and running to develop the rest with.

I’m posting this so that one day I can look back and laugh at all this clumsiness :)

The output appears in the browser, even though all this is running on the server:

Ok, so now there’s a “feed” with readings coming in. But that’s just the tip of the iceberg:

What should the pubsub naming structure be, i.e. what are the keys / topic names?

Should readings be managed per value (temperature), or per device (room node)?

What format should this data have, since inserting a decimal point is locale-specific?

How to manage new values, given that previous ones can be useful to have around?

Are there easy choices to make w.r.t. how to store the history of all this data?

How to aggregate values, but more importantly perhaps: when to do this?

And that’s just incoming data. There will also need to be rules for automation and outgoing control data. Not to mention configuration settings, admin front-ends, live development, per-user settings, access rights, etc, etc, etc.

I’m not too interested yet in implementing things for real. Would rather first spend more time understanding the trade-offs – and learning JavaScript. By doodling as I’m doing now and by examining a lot of code written by others.

If you have any suggestions on what I should be looking into, let me know!

“Experience is what you get while looking for something else.” – Federico Fellini

I recently came across SocketStream, which describes itself as “A fast, modular Node.js web framework dedicated to building single-page realtime apps”.

And indeed, it took virtually no effort to get this self-updating page in a web browser:

The input comes from the serial port; I just added this code:

That’s not JavaScript, but CoffeeScript – a dialect with a concise functional notation (and significant white-space indentation), which gets turned into JavaScript on the fly.

The above does a lot more than collect serial data: the “try” block converts the text to a binary buffer in the form of a JavaScript DataView, ready for decoding, and then publishes each packet on its corresponding channel. Just to try out some ideas…
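Not the actual code, but as a sketch (in plain JavaScript) of that text-to-DataView step:

```javascript
// Turn an RF12demo text line such as "OK 3 120 45" into a DataView,
// ready for binary decoding.
function toDataView(line) {
  const bytes = line.trim().split(/\s+/).slice(1).map(Number); // drop "OK"
  const view = new DataView(new ArrayBuffer(bytes.length));
  bytes.forEach((b, i) => view.setUint8(i, b));
  return view;
}

const v = toDataView('OK 3 120 45');
// Multi-byte values can now be read out with an explicit byte order,
// e.g. a little-endian 16-bit reading at offset 1:
console.log(v.getUint8(0), v.getUint16(1, true)); // 3 and 120 + 45 * 256
```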

I’m also using Jade here, a notation which gets transformed into HTML – on the fly:

And this is Stylus, a shorthand notation which generates CSS (yep, again on the fly):

All of these are completely gone once development is over: with one command, you generate a complete app which contains only pure JavaScript, HTML, and CSS files.

I’m slowly falling in love with all these notations – yeah, I know, very unprofessional!

Apart from installing SocketStream using “npm install -g socketstream”, adding the SerialPort module to the dependencies, and scratching my head for a few hours to figure out how all this machinery works, that is virtually all I had to do.

Development is blindingly fast when it comes to client side editing: just save any file and the browser(s) will automatically reload. With a text editor that saves changes on focus loss, the process becomes instant: edit and switch to the browser. Boom – updated!

The trade-off here is learning to understand these libraries and tools and playing by their rules, versus having to write a lot more yourself. But from what I’ve seen so far, SocketStream with Express, CoffeeScript, Jade, Stylus, SocketIO, Node.js, SerialPort, Redis, etc. take a staggering amount of work off my shoulders – all 100% open source.

Home monitoring and home automation have some very obvious properties:

a bunch of sensors around the house are sending out readings

with actuators to control lights and appliances, driven by secure commands

all of this within and around the home, i.e. in a fairly confined space

we’d like to see past history, usually in the form of graphs

we want to be able to control the actuators remotely, through control panels

and lastly, we’d like to automate things a bit, using configurable rules

In information processing terms, this stuff is real-time, but only barely so: it’s enough if things happen within say a tenth of a second. The amount of information we have to deal with is also quite low: the entire state of a home at any point in time is probably no more than a kilobyte (although collected history will end up being a lot more).

The challenge is not the processing side of things, but the architecture: centralised or distributed, network topology for these readings and commands, how to deal with a plethora of physical interfaces and devices, and how to specify and manage the automation rules. Oh, and the user interface. The setup should also be organic, in that it allows us to grow and evolve all the aspects of our system over time.

It’s all about state and messages: the state of the home, current, sensed, and desired, and the events which change that state, in the form of incoming and outgoing messages.

What we need is MOM, i.e. Message-oriented middleware: a core which represents that state and interfaces through messages – both incoming and generated. One very clean model is to have a core process which allows some processes to “publish” messages to it and others to “subscribe” to specific changes. This mechanism is called pubsub.

Ideally, the core process should be launched once and then kept running forever, with all the features and functions added (at least initially) as separate processes, so that we can develop, add, fix, refine, and even tear down the different functions as needed without literally “bringing down the house” at every turn.

There are a couple of ways to do this, and as you may recall, I’ve been exploring the option of using ZeroMQ as the core foundation for all message exchanges. ZeroMQ bills itself as “the intelligent transport layer” and it supports pubsub as well as several other application interconnect topologies. Now, half a year later, I’m not so sure it’s really what I want. While ZeroMQ is definitely more than flexible and scalable enough, it also is fairly low-level in many ways. A lot will need to be built on top, even just to create that central core process.

Another contender which seems to be getting a lot of traction in home automation these days is MQTT, with an open source implementation of the central core called Mosquitto. In MOM terms, this is called a “broker”: a process which manages incoming message traffic from publishers by re-routing it to the proper subscribers. The model is very clean and simple: there are “channels” with hierarchical names such as perhaps “/kitchen/roomnode/temperature” to which a sensor publishes its temperature readings, and then others can subscribe to say “/+/+/temperature” to get notified of each temperature report around the house, the moment it comes in.
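To make the wildcard idea concrete, here’s a hypothetical little matcher for MQTT-style “+” (single-level) wildcards – real MQTT brokers implement this themselves, and also support a “#” multi-level wildcard not shown here:

```javascript
// Illustration of MQTT-style topic matching with the single-level "+" wildcard.
// This is a toy helper, not part of any MQTT library.
function topicMatches(pattern, topic) {
  const p = pattern.split('/');
  const t = topic.split('/');
  if (p.length !== t.length) return false;        // "+" matches exactly one level
  return p.every((part, i) => part === '+' || part === t[i]);
}

topicMatches('/+/+/temperature', '/kitchen/roomnode/temperature'); // matches
topicMatches('/+/+/temperature', '/kitchen/roomnode/humidity');    // doesn't
```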

MQTT adds a lot of useful functionality, and optionally supports a quality-of-service (QoS) level as a way to handle messages that need reliable delivery (QoS level 0 messages use best-effort delivery, but may occasionally get dropped). The “retain” feature can hold on to the last message sent on each channel, so that when the system shuts down and comes back up or when a connection has been interrupted, a subscriber immediately learns about the last value. The “last will and testament” lets a publisher prepare a message to be sent out to a channel (not necessarily the same one) when it drops out for any reason.

All very useful, but I’m not convinced this is a good fit. In my perception, state is more central than messages in this context. State is what we model with a home monitoring and automation system, whereas messages come and go in various ways. When I look at the system, I’m first of all interested in the state of the house, and only in the second place interested in how things have changed until now or will change in the future. I’d much rather have a database as the centre of this universe. With excellent support for messages and pubsub, of course.

I’ve been looking at Redis lately, a “key-value store” which is not only small and efficient, but which also has explicit support for pubsub built in. So the model remains the same: publishers and subscribers can find each other through Redis, with wildcards to support the same concept of channels as in MQTT. But the key difference is that the central setup is now based on state: even without any publishers active, I can inspect the current temperature, switch setting, etc. – just like MQTT’s “retain”.
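To illustrate the difference, here’s a toy sketch of that “state plus pubsub” model – not Redis, just a few lines showing how retained state lets a late reader inspect the last value even when no publisher is active:

```javascript
// Toy sketch of a key-value store with pubsub - an illustration of the model,
// not an actual Redis client.
class TinyStore {
  constructor() { this.state = {}; this.subs = []; }
  set(key, value) {
    this.state[key] = value;                  // state is retained...
    this.subs.forEach(fn => fn(key, value));  // ...and each change is published
  }
  get(key) { return this.state[key]; }        // inspectable without any publisher
  subscribe(fn) { this.subs.push(fn); }
}

const store = new TinyStore();
store.set('kitchen/temperature', 21.5);
// A "subscriber" arriving later can still read the last known state:
const lastKnown = store.get('kitchen/temperature');
```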

Furthermore, with a database-centric core, we automatically also have a place to store configuration settings and even logic, in the form of scripts, if needed. This approach can greatly simplify publishers and subscribers, as they no longer need local storage for configuration. Not a big deal when everything lives on a single machine, but with a central general-purpose store that is no longer a necessity. Logic can run anywhere, yet operate off the same central configuration.

The good news is that with any of the above three options, programming language choice is irrelevant: they all have numerous bindings and interfaces. In fact, because the interconnections take place via sockets, there is not even a need for C-based interface code: any language that can open a socket and produce properly-formatted packets will do.

I’ve set up a basic installation on the Mac, using Homebrew. The following steps are not 100% precise, but this is more or less all that’s needed on Windows, Mac OS X, or Linux:

I learned to program in C a long time ago, on a PDP11 running Unix (one of the first installations in the Netherlands). That’s over 30 years ago and guess what… that knowledge is still applicable. Back in full force on all of today’s embedded µC’s, in fact.

I’ll spare you the list of languages I learned before and after that time, but C has become what is probably the most widespread programming language ever. Today, it is the #1 implementation language, in fact. It powers the gcc toolchain, the Linux operating system, most servers and browsers, and … well, just about everything we use today.

It’s pretty useful to learn stuff which lasts… but also pretty hard to predict, alas!

Not just because switching means you have to start all over again, but because you can become really productive at programming when spending years and years (or perhaps just 10,000 hours) learning the ins and outs, learning from others, and getting really familiar with all the programming language’s idioms, quirks, tricks, and smells.

C (and in its wake C++ and Objective-C) has become irreplaceable and timeless.

Fast-forward to today and the scenery sure has changed: there are now hundreds of programming languages, and so many people programming, that lots and lots of them can thrive alongside each other within their own communities.

While researching a bit how to move forward with a couple of larger projects here at JeeLabs, I’ve spent a lot of time looking around recently, to decide on where to go next.

The web and dynamic languages are here to stay, and that inevitably leads to JavaScript. When you look at GitHub, the most used programming language is JavaScript. This may be skewed by the fact that the JavaScript community prefers GitHub, or that people make more and smaller projects, but there is no denying that it’s a very active trend:

In a way, JavaScript went where Java once tried to go: becoming the de-facto standard language inside the browser, i.e. on the client side of the web. But there’s something else going on: not only is it taking over the client side of things, it’s also making inroads on the server end. If you look at the most active projects, again on GitHub, you get this list:

There’s something called Node.js in each of these top-5 charts. That’s JavaScript on the server side. Node.js has an event-based asynchronous processing model and is based on Google’s V8 engine. It’s also phenomenally fast, due to its just-in-time compilation for x86 and ARM architectures.

And then the Aha-Erlebnis set in: JavaScript is the next C !

Think about it: it’s on all web browsers on all platforms, it’s complemented by a DOM, HTML, and CSS which bring it into an ever-richer visual world, and it’s slowly getting more and more traction on the server side of the web.

Just as with C at the time, I don’t expect the world to become mono-lingual, but I think that it is inevitable that we will see more and more developments on top of JavaScript.

With JavaScript comes a free text-based “data interchange protocol”. This is where XML tried to go, but failed – and where JSON is now taking over.

My conclusion (and prediction) is: like it or not, client-side JavaScript + JSON + server-side JavaScript is here to stay, and portable / efficient / readable enough to become acceptable for an ever-growing group of programmers. Just like C.

Node.js is implemented in C++ and can be extended in C++, which means that even special-purpose C libraries can be brought into the mix. So one way of looking at JavaScript, is as a dynamic language on top of C/C++.

I have to admit that it’s quite tempting to consider building everything in JavaScript from now on – because having the same language on all sides of a network configuration will probably make things a lot simpler. Actually, I’m also tempted to use pre-processors such as CoffeeScript, Jade, and Stylus, but these are really just optional conveniences (or gimmicks?) around the basic JavaScript, HTML, and CSS trio, respectively.

It’s easy to dismiss JavaScript as yet another fad. But dismissing it out of ignorance would be a mistake – see the Blub Paradox by Paul Graham. Features such as list comprehensions are neat tricks, but easily worked around. Prototypal inheritance and lexical closures on the other hand, are profound concepts. Closures in combination with asynchronous processing (and a form of coding called CPS) are fairly complex, but the fact that some really smart guys can create libraries using these techniques and hide it from us mere mortals means you get a lot more than a new notation and some hyped-up libraries.

I’m not trying to scare you or show off. Nor am I cherry-picking features to bring out arguments in favour of JavaScript. Several languages offer similar – and sometimes even more powerful – features. Based on conceptual power alone, I’d prefer Common Lisp or Scheme, in fact. But JavaScript is dramatically more widespread, and very active / vibrant w.r.t. what is currently being developed in it and for it.

The interesting bit is the predictive aspect: you get a predicted price for the entire day ahead, which means you can plan your consumption! A win-win all around, since that sort of behavioural adjustment is probably what the energy company wants in the first place. Their concern is always (only?) the peak.

Is this our future? I’d definitely prefer it to “smart” grids taking decisions about my appliances and home. Better options, letting me decide whether to use, store, or pass along the solar energy production, for example.

Here’s another graph from that same site, showing this year’s trend in the Chicago area:

It’s pretty obvious that air-conditioners run on electricity, eh?

But look also at those rates… this is about an order of magnitude lower than the current rates in the Netherlands (and I suspect Western Europe).

Here are the rates I get from my provider, including huge taxes:

You can probably guess the Dutch in there – two tariffs: high is for weekdays during daytime, low is for weekends and at night. Hardly a difference, due to taxes :(

Here are the rates for natural gas, btw – just for completeness:

No wonder really, that different parts of the world, with their widely different income levels and energy prices, end up making completely different choices.

Solar panels are currently profitable after about 7..8 years in the Netherlands – which is reflected by a strong increase in adoption lately. But seeing the above graphs, I doubt that this would make much sense in any other part of the world right now!

There is a small but significant difference with regular JeeNodes (apart from their very different shape), in that all three MOSFETs are tied to pins with hardware PWM support. This is important to get flicker-free dimming, i.e. if you want to have clean and calm color effects. Software PWM doesn’t give you that (unless you turn all other interrupt sources off), and even with hardware PWM it requires a small tweak of the standard Arduino library code to work well.

The neat thing about the LED Node is the wireless capability, so you can control the unit in all sorts of funky ways.

But I didn’t like the very sharp pulses this board generates, which can cause problems with color shifts over long strips and can also produce a lot of RF interference, due to ringing in the LED drive current. The other thing which didn’t turn out to be as useful as I thought was the room board part.

So here’s the new LED Node v2:

The big copper areas on the left are extra-wide traces and cooling pads, dimensioned to support at least 2 Amps for each of the RGB colors, for a total of 6 A, i.e. 72 W LED strips @ 12 V. But despite the higher specs, this board will actually be lower profile, because it uses a different type of MOSFETs. They are surface mounted and come pre-soldered so you don’t have to fiddle with them (soldering such small components on relatively large copper surfaces requires a good soldering iron and some expertise).

This new revision has the extra resistors to reduce ringing, and replaces the room board interface with two standard 6-pin port headers: one at the very end, and one on the side. These are ports 1 and 4, respectively, matching a standard JeeNode and any plugs you like. If you want, you could still hook up a Room Board, but this is now no longer the only way to use the LED Node.

Wanna add an accelerometer or compass to make your LED strips orientation aware? Well… now you can! And then place them inside your bike wheels? Could be fun :)

There’s a huge world out there which I’ve never looked into: audio. And it has changed.

It used to be analog (and before my time: vacuum tubes, or “valves” as the British say).

Nowadays, it’s all digital and integrated. The common Class-D amplifier is made of digitally switching MOSFETs, with filters to remove the residual high-frequency content this switching generates – leaving just the “pure” audible portion to drive the speakers.

With the recent switch to a new small TV, away from the Mac Mini, for our TV & music system, I lost the original hook-up we had, which was a (far too cheap) little analog amplifier driving (far too expensive) speakers we’ve had here for a long time.

So now we have this TV with built-in tiny 2.5W speakers blasting to the rear – a far cry from the sound we had before. And no music playback capability at all in the living room right now. Not good!

Our needs are simple: CD-quality music (we’re no audiophiles) and decent TV sound. I am going to need a setup soon, as the Christmas vacation time nears.

Trouble is: the sound source for our music is on the Mac Mini server, which is in an impossible place w.r.t. the TV and the speakers. So my first thought was: an Airport Express. It can play over WiFi, and has optical audio output. But… the AE draws 4W in standby. And turning it on for each use is awkward: waiting a minute or more to get sound from the TV is not so great.

The other options for music are an Apple TV or a specially-configured Raspberry Pi.

The only remaining issue is how to get sound from line-level analog audio or (preferably) digital audio to the speakers. I ended up choosing something fairly simple and low-end, a component from miniDSP called “miniAMP”:

This takes all-digital I²S signals and produces 4x 10W audio. It needs a 12..24V @ 4A supply, i.e. a simple “brick” should do. But that’s just half a solution: it needs I²S…

This is where the “miniDSP” component comes in (the SOIC chip at the top is a PIC µC):

So the whole setup becomes as follows – and I’ll double up the miniAMP (one for each channel) if the output is not powerful enough:

The miniDSP takes 2x analog in, and produces up to 4x digital I²S out. The nice part is that it’s fully configurable, i.e. it can do all sorts of fancy sound processing:

This is perfect for our setup, which includes old-but-incredibly-good separate speakers for the highs and the lows. So a fully configurable cross-over setup is just what we need:

The way this works is that you set it up, burn the settings into the DSP front-end via USB, and then insert it into the audio chain.

It’s tempting to start tinkering with this stuff at an even lower level, but nah… enough other things to do. I do want to look into auto shut-off at some point though, to further lower power consumption when no audio is being played – but for now this will have to do.

Having just gone through some reshuffling here, I thought it might be of interest to describe my setup, and how I got there.

Let’s start with some basics – apologies if this all sounds too trivial:

backups are not archives: backups are about redundancy, archives are about history

I’d rather not need backups, but the real world keeps proving that things can fail – badly!

archives are for old stuff I want to keep around for reference (or out of nostalgia…)

If you don’t set up a proper backup strategy, then you might as well go jump off a cliff.

If you don’t set up archives, fine: some hold onto everything, others prefer to travel light – I used to collect lots of movies and software archives. No more: there’s no end to it, and especially movies take up large amounts of space. Dropping all that gave me my life back.

We do keep all our music, and our entire photo collection (each 100+ GB). Both include digitised collections of everything before today’s bits-and-bytes era. So about 250 GB in all.

Now the deeply humbling part: everything I’ve ever written or coded in my life will easily fit on a USB stick. Let’s be generous and assume it will grow to 10 GB, tops.

What else is there? Oh yes, operating systems, installed apps, that sort of thing. Perhaps 20..50 GB per machine. The JeeLabs Server, with Mac OSX Server, four Linux VM’s, and everything else needed to keep a bunch of websites going, clocks in at just over 50 GB.

For the last few years, my main working setup has been a laptop with a 128 GB SSD, and it has been fairly easy to keep disk usage under 100 GB, even including a couple of Linux and Windows VM’s. Music and photo’s were stored on the server.

I’m rambling about this to explain why our entire “digital footprint” (for Liesbeth and me) is substantially under 1 TB. Some people will laugh at this, but hey – that’s where we stand.

Backup…

Ah, yes, back to the topic of this post. How to manage backups of all this. But before I do, I have to mention that I used to think in terms of “master disks” and “slave disks”, i.e. data which was the real thing, and copies on other disks which existed merely for convenience, off-line / off-site security, or just “attics” with lots of unsorted old stuff.

But that has changed in the past few months.

Now, with an automatic off-site backup strategy in place, there is no longer a need to worry so much about specific disks or computers. Any one of them could break down, and yet it would be no more than the inconvenience of having to get new hardware and restore data – it’d probably take a few days.

The key to this: everything that matters, now exists in at least three places in the world.

I’m running a mostly-Mac operation here, so that evidently influences some of the choices made – but not all, and I’m sure there are equivalent solutions for Windows and Linux.

This is the setup at JeeLabs:

one personal computer per person

a central server

Sure, there are lots of other older machines around here (about half a dozen, all still working fine, and used for various things). But our digital lives don’t “reside” on those other machines. Three computers, period.

For each, there are two types of backups: system recovery, and vital data.

System recovery is about being able to get back to work quickly when a disk breaks down or some other physical mishap. For that, I use Carbon Copy Cloner, which does full disk tree copying, and is able to create bootable images. These copies include the O/S, all installed apps, everything to get back up to a running machine from scratch, but none of my personal data (unless you consider some of the configuration settings to be personal).

These copies are made once a day, a week, or a month – some of these copies are fully automatic, others require me to hook up a disk and start the process. So it’s not 100% automated, but I know for sure I can get back to a running system which is “reasonably” close to my current one. In a matter of hours.

That’s 3 computers with 2 system copies for each. One of the copies is always off-site.

Vital data is of course just that: the stuff I never want to lose. For this, I now use CrashPlan+, with an unlimited 10-computer paid plan. There are a couple of other similar services, such as BackBlaze and Carbonite. They all do the same: you keep a process running in the background, which pumps changes out over internet.

In my case, one of the copies goes to the CrashPlan “cloud” itself (in the US), the other goes to a friend who also has fast internet and a CrashPlan setup. We each bought a 2.5″ USB-powered disk with lots of storage, placed our initial backups on them, and then swapped the drives to continue further incremental backups over the net.

The result: within 15 minutes, every change on my disk ends up in two other places on this planet. And because these backups contain history, older versions remain available long after each change – even after deletion (I limit the history to 90 days).

That’s 1 TB of data, always in good shape. Virtually no effort, other than an occasional glance on the menu bar to see that the backup is operating properly. Any failure of 3 or more days for any of these backup streams leads to a warning email in my inbox (which is at an ISP, i.e. off-site). Once a week I get a concise backup status report, again via email.

The JeeLabs server VM’s get their own daily backup to Amazon S3, which means I can re-launch them as EC2 instances in the cloud if there is a serious problem with the Mac Mini used as server here. See an older post for details.

Yes, this is all fairly obvious: get your backups right and you get to sleep well at night.

But what has changed, is that I no longer use the always-on server as “stable disk” for my laptop. I used to try putting more and more data on the central server here, since it was always on and available anyway. Which means that for really good performance you need a 1 Gbit wired ethernet connection. Trivial stuff, but not so convenient when sitting on the couch in the living room. And frankly also a bit silly, since I’m the only person using those large PDF and code collections I’m relying on more and more these days.

So now, I’ve gone back to the simplest possible setup: one laptop, everything I need on there (several hundred GB in total), and an almost empty server again. On the server, just our music collection (which is of course shared) and the really always-on stuff, i.e. the JeeLabs server VM’s. Oh, and the extra hard disk for my friend’s backups…

Using well under 1 TB for an entire household will probably seem ridiculous. But I’m really happy to have a (sort of) NAS-less, and definitely RAID-less, setup here.

Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the Universe trying to produce bigger and better idiots. So far, the Universe is winning.
— Rick Cook, The Wizardry Compiled

The latest trend is to add comments to weblog posts, praising me in all sorts of truly wonderful (but totally generic) ways. The only purpose being to get that comment listed with a reference to some site peddling some stuff. Fortunately, the ploy is trivial to detect. So easy in fact, that filtering can be fully automated via the Akismet web service, plus a WordPress plug-in by that same name.

Here’s the trend on this weblog (snapshot taken about a week ago):

The drop comes from the fact that all posts on this weblog are automatically closed for comments after two weeks, and there were no new posts in July and August. So it’s just a bunch of, eh, slightly desperate people pounding on the door.

One of them got through in the past six months. The other 326 just wasted their time.

Something similar is happening on the discussion forum. And behind the scenes, some new work is now being done to make those constant attempts there just as futile :)

Winter has set in here – it’s down to minus 15°C at night, with this view from JeeLabs:

Speaking of Christmas: gives me an excuse to talk about some administrative details…

We are running the Shop and shipping orders as fast as we can right up until the Christmas break – including some new products as they become available. If you’re planning to receive items in time for Christmas, we recommend you place your order before the following dates:

UK

Standard 1st class: 18th Dec
Special Delivery Request: 21st Dec

Mainland Europe

Airmail/Airsure: 12th Dec
Special Delivery Request: 20th Dec

Outside Europe

Airmail: right now please!
Special Delivery Request: 18th Dec

If you don’t manage to get your order in before these dates, we will still process it right up until the 22nd Dec, but since the sleigh and reindeer are out on a rush job, you take your chances ….

Next thing to note is that dB and dBm (decibels) use a logarithmic scale. That’s a fancy way of saying that each step of 10 corresponds to a factor of 10 in power. From 0 to 10 dBm is a factor 10, i.e. from 1 mW to 10 mW. From 10 to 20 dBm is again a factor 10, i.e. 10 mW to 100 mW, etc. Likewise, -10 dBm is one tenth of 0 dBm (0.1 mW), etc.
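Those conversions are easy to express in code – a small sketch, matching the examples just given:

```javascript
// dBm <-> milliwatt conversions: 0 dBm = 1 mW, 10 dBm = 10 mW,
// -40 dBm = 0.0001 mW (i.e. 0.1 uW), and so on.
function dbmToMilliwatt(dbm) { return Math.pow(10, dbm / 10); }
function milliwattToDbm(mw) { return 10 * Math.log10(mw); }

dbmToMilliwatt(10);   // -> 10 (mW)
dbmToMilliwatt(-40);  // -> 0.0001 (mW), i.e. 0.1 uW
milliwattToDbm(100);  // -> 20 (dBm)
```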

The 500 KHz signal (peak #1) is therefore 10 mW (10 dBm), and the 1 MHz harmonic is roughly 100,000 times as weak at 0.1 µW (-40 dBm). It looks like a huge peak on the screen, but each vertical division down is one tenth of the value. The vertical scale on screen covers a staggering 1:100,000,000 power level ratio.

That 500 KHz sine wave is in fact very clean, despite the extra peaks seen at this scale.

Now let’s look at the same signal, on the output of the op-amp:

Not too bad (the second peak is still less than 1/30,000 of the original). Which is why the output shape at 500 KHz still looks very much like a pure sine wave.

At 1 MHz, the secondary peaks become a bit more pronounced:


And at 2 MHz, you can see that the output harmonics are again a lot stronger:


Not only has the level of the 2 MHz signal dropped from 9.23 dBm to 6.59 dBm, the second harmonic at 4 MHz is now only a bit under 1/100th of the main signal’s level. And that shows itself as a severely distorted sine wave in yesterday’s weblog post.

In case you’re wondering: those other smaller peaks around 1 MHz come from public AM radio – there are some strong transmitters, located only a few km from here!

Anyway – I hope you were able to distill some basic intuition from this sort of signal analysis, if this is all new to you. It’s quite a valuable technique, and within reach for most of us now, since most recent scopes include an FFT capability – the bread and butter of the analog electronics world…

Let’s now get back to digital again. Ah, bits and bytes, sooo much simpler!

Let’s look at that AD8532 dual op-amp mentioned yesterday and start with its “specs”:

The slew rate is relatively low for this unit. Its output voltage can only rise 5V per µs. In a way, this explains the ≈ 0.1 µs phase shift in the image which I’ll repeat again here:

As you can see, the 500 KHz sine wave takes about 200 ns to rise 1 division, i.e. 0.5V, so it’s definitely nearing the limit of this op-amp. Let’s push it a bit with 1 and 2 MHz sine waves:

Whoa! As you can see, the output cannot quite reproduce a 1 MHz input signal faithfully (there’s an odd little ripple), let alone 2 MHz in the second screen, which starts to diverge badly in both shape and amplitude. The vertical scale is 0.5V per division.
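As a sanity check: the steepest slope of a sine wave A·sin(2πft) is 2πfA, so we can estimate where a 5 V/µs op-amp runs out of steam. The ≈ 0.8 V amplitude below is my assumption, inferred from the 0.5 V per 200 ns slope mentioned above – not a measured value:

```javascript
// Steepest slope of A*sin(2*pi*f*t) is 2*pi*f*A - here expressed in V/us.
// Amplitude of ~0.8 V is an assumption inferred from the scope shot.
function maxSlewVoltsPerMicrosecond(freqHz, amplitudeV) {
  return 2 * Math.PI * freqHz * amplitudeV / 1e6;
}

const a = 0.8;
maxSlewVoltsPerMicrosecond(500e3, a); // ~2.5 V/us - fine for a 5 V/us op-amp
maxSlewVoltsPerMicrosecond(1e6, a);   // ~5.0 V/us - right at the limit
maxSlewVoltsPerMicrosecond(2e6, a);   // ~10 V/us - twice the limit: distortion
```

Which agrees nicely with what the screenshots show: 1 MHz just barely makes it, 2 MHz falls apart.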

Sine waves are “pure frequencies” – in a vague manner of speaking. It’s the natural way for things to oscillate (not just electrical signals, sine waves are everywhere!). The field of Fourier analysis is based on the great mathematical discovery that all repetitive signals (or motions) can be re-interpreted as the sum of sines and cosines with different amplitudes and frequencies.

You don’t have to dive into the math to benefit from this. Most modern oscilloscopes support an FFT mode, an amazing computed transformation which decomposes a repetitive signal into those sine waves. One of the simplest uses of FFT is to get a feel for how “pure” signals are, i.e. how close to a pure sine wave.
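For the curious, here’s what that decomposition boils down to – a naive DFT in JavaScript (the scope computes the same thing far more efficiently via the FFT). It shows that a pure sine really does collapse into a single spectral peak:

```javascript
// Naive DFT: magnitude of each frequency bin k, for real-valued samples.
// O(N^2), so only for illustration - an FFT does this in O(N log N).
function dftMagnitudes(samples) {
  const N = samples.length;
  const mags = [];
  for (let k = 0; k < N / 2; k++) {
    let re = 0, im = 0;
    for (let n = 0; n < N; n++) {
      re += samples[n] * Math.cos(2 * Math.PI * k * n / N);
      im -= samples[n] * Math.sin(2 * Math.PI * k * n / N);
    }
    mags.push(Math.sqrt(re * re + im * im));
  }
  return mags;
}

// 64 samples of exactly one sine cycle: all energy lands in bin 1.
const sine = Array.from({ length: 64 }, (_, n) => Math.sin(2 * Math.PI * n / 64));
const spectrum = dftMagnitudes(sine);
// spectrum[1] is large, every other bin is essentially zero
```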

Unfortunately, I have too many FFT scope shots for one post, so tomorrow I’ll post the rest and finish this little diversion into signal analysis. It’ll allow us to compare the above three signals in a more quantitative way.

The trouble with the Arbitrary Waveform Generator I use, is that it has a fairly limited output drive capability. I thought it was broken, and returned it to TTi, but they tested it and couldn’t find any problem. It’ll drive a 50 Ω load, but my habit of raising the signal to stay above 0V (for single-supply uses) probably pushed it too far via that extra DC offset.

I’d like to use a slow ramp as sort of a controllable power supply for JeeNodes and the AA Power Board to find out how they behave with varying input voltages. A simple sawtooth running from 0.5V to 4V would be very convenient – as long as it can drive 50 mA or so.

Here’s one way to do it:

This is an op-amp, connected in such a way that the output will follow exactly what the input is doing – hence the name buffer amplifier or “voltage follower”.

Quick summary of how it works – an op-amp always increases its output when “+” is above “-“, and vice versa. So whatever the output is right now, if you raise the “+” pin, the output will go up, until the “-” pin is at the same value.

It seems pointless, but the other key property of an op-amp is that the input impedance of its inputs is very high. In other words: it draws nearly no current. The input load is negligible.

The output current is determined by the limits of the op-amp. And the AD8532 from Analog Devices can drive up to 250 mA – pretty nice for a low-power supply, in fact!

Here’s the experimental setup (only one of the two op-amps is being used here):

Here you can see that the input voltage is exactly the same as the output:

As you can see, there’s a phase shift. It’s not really a big deal – keep in mind that the signal used here is a high-frequency wave, and that shift is in fact less than 0.1 µs. Irrelevant for a power supply with a slow ramp.

Tomorrow I’ll bombard you with scope shots, to illustrate how this op-amp based voltage follower behaves when gradually pushed beyond its capabilities. Nasty stuff…

Keep in mind that the point of this whole setup is to drive more current than the function generator can provide. As a test, I connected a 100 Ω resistor over the output, and sure enough nothing changes. The AD8532 will simply drive the 10..30 mA through the resistor and still maintain its output voltage.

The beauty of op-amps is that all this just works!

But there is a slight problem: the AD8532 can drive up to 250 mA, but it’s not short-circuit proof. If we ever draw over 250 mA, we’ll probably damage it. The solution is simple, once you think about how op-amps really work (from the datasheet):

The extra resistor limits the output current to the safe value, but the side-effect is that the more current you draw, the less “headroom” you end up with: if we draw 100 mA, then that resistor will have a 2V voltage drop, so the maximum output voltage will be 3V when the supply voltage is 5V.
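That trade-off is simple Ohm’s-law arithmetic – a tiny sketch, using the 20 Ω implied by the 2V-drop-at-100-mA example above:

```javascript
// Headroom left after the series resistor: Vout(max) = Vsupply - I * R.
function maxOutputVoltage(supplyV, seriesOhms, loadCurrentA) {
  return supplyV - seriesOhms * loadCurrentA;
}

// The example from the text: 5 V supply, ~20 ohm series resistor, 100 mA load.
maxOutputVoltage(5, 20, 0.1); // -> 3 (volts of headroom left)
```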

If you look at my experimental setup above, you’ll see a 22 Ω resistor tied to each output.

That’s it. This simple setup should make it possible to explore how simple circuits work with varying supply voltages. A great way to simulate battery limits, I hope!

The LED Node uses MOSFETs to drive the red, green, and blue LED strings, respectively.

Here’s the circuit (note that the LED strips must also include current-limiting resistors):

Well… in the LED Node v1, input pin B and resistor R2 are missing, and R1 is 10 kΩ.

This leads to a fair amount of electrical trouble – have a look:

The yellow line is the input, a 6V signal in this case (not 3.3V, as used in the LED Node). The blue line is the voltage over the MOSFET. The input is a 1000 Hz square wave with 20% duty cycle, i.e. 200 µs high, 800 µs low.

When the input voltage goes low, the N-MOSFET switches off. In this case, I don’t use an actual LED strip as load, but a 1 Ω power resistor, driven from a 2V power supply line to keep the heat production manageable during these tests. So that’s 2 A of current going through the MOSFET, and when it switches off that happens so quickly that the current simply has nowhere to go (the power supply is not a very nice conductor for such high-frequency events, alas).

As you can see, this signal ringing is so strong in this case that the voltage overshoots the 2V supply many times over.

Here are the leading edge (MOSFET turns on & starts to draw 2 A) and the trailing edge (MOSFET turns off & breaks the 2 A current) of that cycle again, in separate screenshots:

The horizontal time scale is 1 µs per division.

The vertical scales are 0.5 V and 5 V (!) per division for the input (yellow) and MOSFET voltage (blue), respectively. Note the 30V overshoot when turning that MOSFET off!

This has all sorts of nasty consequences. For one, such high frequency signals will vary across the length of the LED strip, which will affect the intensities and color balance.

But what’s much worse, is the electromagnetic interference these signals will generate. There’s probably a strong 5..10 MHz component in there. Yikes!

There are various solutions. One is to simply dampen the turn-on / turn-off slopes by inserting a resistor in series between the µC’s output pin and the MOSFET’s gate. If you recall the schematic above, I switched the output signal to pin B, made R1 = 1 MΩ and R2 = 1 kΩ. Here’s the effect – keeping all other conditions the same as before:

What a difference! Sure, the flanks have become quite soft, but that ringing has also been reduced to one fifth of the original case. And those soft flanks (about 2 µs on the blue line) will probably just make it easier to dim the LED strips to very low levels.
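That ≈ 2 µs flank is consistent with a simple RC estimate of the gate drive. The effective gate capacitance figure below is my assumption – a couple of nF is a plausible order of magnitude for this class of power MOSFET – not something measured here:

```cpp
#include <cassert>

// Back-of-the-envelope check of the ~2 us switching flank: the RC
// time constant of the 1 kOhm gate resistor (R2, from the text)
// against an assumed effective gate capacitance of 2 nF.

constexpr double R2_OHM   = 1.0e3;             // gate series resistor
constexpr double C_GATE_F = 2.0e-9;            // assumed, not measured
constexpr double TAU_S    = R2_OHM * C_GATE_F; // ~2 microseconds
```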

The little hump at about 1V is when this particular MOSFET starts to switch – these units were specifically selected to switch at very low voltages, so that they would be fully switched on at 3.3V. This helps reduce heat generation in the MOSFETs – an important detail when you’re switching up to 2 Amps. And indeed, the STN4NF03L MOSFETs used here don’t get more than hand-warm @ 2A – pretty amazing technology!

The new LED Node v2 will include those extra resistors in the MOSFET gate, obviously. And that 1 kΩ value for R2 seems just about right.

The other resistor (R1) is a pull-down; it only serves to avoid unpleasant power-up spikes, by keeping the MOSFET off until the µC enables its I/O pins and starts driving it.

In case you’re wondering about the ringing on the yellow input trace: there’s something called the Miller effect, which amplifies the capacitance between the drain and the gate, causing strong signals on the output to leak back through to the gate. The input signal from my signal generator has a certain impedance and can’t fully wipe them out.

It contains the TAOS TCS3414 color sensor. JeeLib now includes a new ColorPlug class which simplifies reading out this chip, as well as a colorDemo.ino sketch:

Sample output:

One nice use for this sensor and code is to determine the color temperature of white light sources, such as incandescent lamps, CFL’s, and LED’s. I’m trying to find a pleasant replacement for a few remaining warm white halogen lights around the house here and such a unit (especially portable) could be very handy when shopping for alternatives.

Hardware description in the Café to follow soon, as well as in the JeeLabs shop.

Here’s another new board, the Precision RTC Plug – this is a revision of a design by Lennart Herlaar from almost a year ago – my, my, this year sure went by quickly:

The current RTC Plug from JeeLabs will be kept as a low-end option, but this one reduces drift by an order of magnitude if you need it: that’s at most ≈ 1 second per week off over a temperature range of 0 .. 40°C. Or one minute per year.

Drift can go up to twice that over the full -40 .. +85°C range, but that’s still one sixth of the drift of the crystal used in the original RTC Plug – considerably better than that, in fact, since a plain crystal gets much worse over the extended temperature range. Here’s a comparison between both plugs, from the datasheet:

The way the Precision RTC works is with a Temperature Compensated Crystal Oscillator (TCXO): once a minute, the approximate temperature is determined and the capacitance used by the crystal oscillator is adjusted ever so slightly to try and keep the 32,768 Hz frequency right on the dot. Since the chip also knows how long it has been running, it can even apply an “aging” correction to compensate for this small effect in every crystal.

The temperature can be read out, but it’s only specified as accurate to ± 3°C.

No need to use any special software for this, all the normal clock functions are available through the same code as used with the original RTC Plug. If you want to use fancy functions, or perhaps calibrate things further for an even lower drift, you can access all the registers via normal I2C read and write commands.

The board will be added to the shop in a few days, and the wiki page on the Café updated.

As you can see, the shape and layout have not changed much in this revision:

Here’s the main part of the new JeeNode Micro v2 schematic:

Several major changes:

the power to the RFM12B module is now controlled via a MOSFET

the PWR pin is connected to the +3V pin with 2 diodes

there’s room for an optional boost regulator (same as on the AA Power Board)

and there’s even room for a RESET button

When you look at the PCB’s, you’ll see that the extra headers have all been removed, there is just one 9-pin header left – the “IOX” signal from v1 now controls power to the RFM12B.

Through a sneaky placement of the ISP header, there is still a way to connect a single-cell AA or AAA battery to opposite ends of the board.

This extra power control is intended to reduce the current consumption during startup, but I haven’t tried it yet. The idea is that the RFM12B will not be connected to the power source before the ATtiny starts and verifies that the voltage level is high enough to do so. After that, it can be turned on and immediately put to sleep – in practice, its power probably never needs to be turned off again.

The other main change has to do with the different power options:

2.2 .. 3.8V through the +3V pin, intended for 2-cell batteries of various kinds

3.5 .. 5.1V through the PWR pin, for 5V and LiPo use

0.9 .. 5.1V through the PWR pin when the boost regulator is present

The latter might seem the most flexible one, but keep in mind that the boost regulator has a 15 .. 30 µA idle current draw, even when the rest of the circuit is powered down, so this is not always the best option (and the extra switching supply components add to the cost).

As you can imagine, I’ll be running some final tests on all this in the next few days – but the new unit is now available for pre-order in the shop (“direct power” version only for now, the boost version will be available later this month). Design files are in the Café.

Ok, now that I have serial data from the P1 port with electricity and gas consumption readings, I would like to do something with it – like sending it out over wireless. The plan is to extend the homePower code in the node which is already collecting pulse data. But let’s not move too fast here – I don’t want to disrupt a running setup before it’s necessary.

So the first task ahead is to scan / parse those incoming packets shown yesterday.

There are several sketches and examples floating around the web on how to do this, but I thought it might be interesting to add a “minimalistic sauce” to the mix. The point is that an ATmega (let alone an ATtiny) is very ill-suited to string parsing, due to its severely limited memory. These packets consist of several hundreds of bytes of text, and if you want to do anything else alongside this parsing, then it’s frighteningly easy to run out of RAM.

So let’s tackle this from a somewhat different angle: what is the minimal processing we could apply to the incoming characters to extract the interesting values from them? Do we really have to collect each line and then apply string processing to it, followed by some text-to-number conversion?

This is the sketch I came up with (“Look, ma! No string processing!”):

This is a complete sketch, with yesterday’s test data built right into it. You’re looking at a scanner implemented as a hand-made Finite State Machine. The first quirk is that the “state” is spread out over three global variables. The second twist is that the above logic ignores everything it doesn’t care about.

Here’s what comes out, see if you can unravel the logic (see yesterday’s post for the data):

Yep – that’s just about what I need. This scanner requires no intermediate buffer (just 7 bytes of variable storage) and also very little code. The numeric type codes correspond to different parameters, each with a certain numeric value (I don’t care at this point what they mean). Some values have 8 digits precision, so I’m using a 32-bit int for conversion.

This will easily fit, even in an ATtiny. The moral of this story is: when processing data – even textual data – you don’t always have to think in terms of strings and parsing. Although regular expressions are probably the easiest way to parse such data, most 8-bit microcontrollers simply don’t have the memory for such “elaborate” tools. So there’s room for getting a bit more creative. There’s always a time to ask: can it be done simpler?
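The actual sketch is embedded in the post above, but the idea can be reconstructed in a few lines of plain C++. This is my own minimal version, not the original code: a couple of globals for the “state”, one character processed at a time, and everything uninteresting simply ignored:

```cpp
#include <cassert>
#include <cstdint>

// Minimal FSM scanner in the same spirit as the sketch described
// above (my reconstruction, not the original): it extracts the digits
// inside the (...) value field of a P1 line, skipping the decimal
// point, with no line buffer at all.

static uint8_t inValue;  // are we inside a (...) field?
static int32_t value;    // digits accumulated so far (32-bit: 8+ digits fit)

// feed one incoming character; returns true when a value is complete
bool scan(char c, int32_t* out) {
    if (c == '(') {
        inValue = 1;
        value = 0;
    } else if (inValue) {
        if (c >= '0' && c <= '9')
            value = value * 10 + (c - '0');
        else if (c == '*' || c == ')') {
            inValue = 0;
            *out = value;
            return true;
        }
        // anything else (e.g. the decimal point) is simply skipped
    }
    return false;
}
```

Feed it `1-0:1.8.1(00123.456*kWh)` one character at a time and out pops 123456 – no strings, no buffers.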

PS. I had a lot of fun coming up with this approach. Minimalism is an addictive game.

With the smart meter installed, I just couldn’t resist a quick readout check of the serial data on the public “P1 port”. It’s an (inverted) TTL serial signal @ 9600 baud, even parity, 7 bits, with data coming out the moment you put some voltage on the “request” line.

That request line is probably nothing other than the power feed of the optocoupler inside the unit, since the output voltage more or less matches the voltage I feed it. So this could probably be operated at 3.3V as well as 5V.

A good test case for the Hameg HMO2024 scope, which has serial bus decoding built-in:

The green bars indicate correct parity. Not only is decoding a breeze this way, the latest scope firmware update also added a “Bus Table” so that you can view the decoded data as a list and even dump it to a USB stick. Here’s the first part of what came out – as a CSV file:

(probably took some committee years of work to come up with this sort of gibberish)

Four electricity counter totals (night/day and consumed/produced, respectively), then two actual power consumption/production levels, then the gas meter readout. Note that the meter does not know the separate consumption and production levels – it only sees the total, but it can detect whether the flow is positive or negative.

Easy stuff. Access to the values used for our electricity and gas bill at last!

PS. This will also allow comparing and calibrating the results obtained by other means: three 2000 pulse/kWh counters attached to a JeeNode and three current transformers attached to the Flukso Meter. They each measure different things, but it’s all hooked up in such a way that the total consumption or production can be calculated with each setup.

The electricity company just installed a new “smart meter” – because they want to track consumed and produced electricity separately, something the total count on the old Ferraris-wheel meter cannot provide:

See that antenna symbol on there? Its green LED is blinking all the time.

At the bottom on the right-hand side is an RJ11 jack with a “P1” connection. This is a user-accessible port which allows you to get readings out once every 10 seconds. It’s opto-coupled with inverted TTL logic, generating a 9600 baud serial stream from what I’ve read. Clearly something to hook up one of these days.

The gas meter hanging just beneath it was also replaced:

Why? Because it sends its values out periodically over wireless to the smart meter, which then in turn sends it out via GPRS to the utilities company.

Apparently these gas counter values are only reported once an hour. Makes sense, in a way: gas consumption is more or less driven by heating demands, and aggregated over many households these probably vary fairly slowly – depending on outside temperature, wind, humidity, and how much the sun is shining. Not nearly as hard to manage as the electricity net, you just have to keep the gas pressure within a reasonable range.

Electricity is another matter. And now it’s all being monitored and reported. Not sure how often, though – every 2 months, 15 minutes, 10 seconds?
How closely will big brother be watching me? First internet & phone tracking, and now this – I don’t like it one bit…

Welcome to the 21st century. Everything you do is being recorded. For all future generations to come.

The thing with OpenTherm, is that the amount of current going through the wire is used by the boiler to send messages to the thermostat.

The reverse path, i.e. from thermostat to boiler, is signalled by voltage changes, which are considerably easier to detect. So let’s save that for later.

There is a small complication, in that the polarity of the wiring between boiler and thermostat is not defined, so either wire could be “+” or “−”. Of course once you’ve hooked things up, that polarity never changes again.

Can we measure the current going through a wire, without knowing in which direction it is flowing? We know it’ll be 5..7 mA for one signalling state and 20..25 mA for the other.

Here’s what I came up with:

There are two optocouplers in there, with the diodes connected in opposite ways. Depending on the direction of the current, one of them will block and the other one will light up – once the voltage over the 82 Ω resistor exceeds about 1V, i.e. at ≈ 12 mA.
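The switching threshold follows directly from Ohm’s law. Here’s a trivial model of my own to illustrate – the ≈ 1V LED turn-on figure is the MCT62 behaviour measured in the earlier experiment, and the sign of the current indeed doesn’t matter:

```cpp
#include <cassert>

// Model of the anti-parallel optocoupler detector: whichever way the
// current flows, one of the two LEDs sees the voltage across the
// 82 ohm resistor, and switches on once that exceeds roughly 1 V
// (the MCT62 threshold from the earlier measurement).

bool outputPulled(double current_mA) {
    double mag = current_mA < 0 ? -current_mA : current_mA;
    double dropV = 0.082 * mag;  // 82 ohm = 0.082 kOhm, kOhm * mA = V
    return dropV > 1.0;          // LED conducts, output gets pulled down
}
```

So the 5..7 mA state leaves the output high, while 20..25 mA pulls it down – in either direction.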

For documentation purposes, the actual build:

The voltage drop over this circuit is at most just over 1V, which may or may not interfere with proper operation of the boiler and thermostat. Testing will be needed to find out.

The first advantage of this circuit is that it works with either polarity without needing a bridge rectifier (which would introduce yet another voltage drop). In addition, the output signal is galvanically isolated from the OpenTherm loop, i.e. “floating”. That means it can be connected in whatever way is needed.

The second part of an “OpenTherm snooper” – if it ever materialises – will be to measure the voltage between the wires and hopefully also to self-power the rest of the circuit. Note that the optocoupler LED lights up when a high current is passing through, and this is also the state where the photo transistor is drawing more current through the 10 kΩ resistor.

Here’s the diode voltage (yellow) and output (blue), using the same ± 10V @ 50 Hz signal as yesterday. The vertical zero axis is one division down from the centre, for both traces:

Note how the output triggers on both positive and negative excursions of the input signal due to the anti-parallel LEDs, which is why it ends up having twice as many pulses. So the first half is one LED turning on and off, and the second half is the other LED – both lead to the common OUT pin being pulled down. For OpenTherm use, there’d never be both polarities – only one LED would be active, depending on how the circuit is connected.

The pulse-width asymmetry you see is an artefact of the way the sine wave is applied (using a 150 Ω resistor). This will not happen with a 7..25 mA current toggle and 82 Ω. And while the MCT62 is not one of the fastest optocouplers, especially with a 10 kΩ collector pull-up, I expect that the resulting pulses will still be accurate enough.

So far so good. I haven’t built the rest yet – just doodling and trying to figure it all out.

I haven’t given up on the OpenTherm Gateway yet, but I’ve also been toying with related ideas for some time to try and just listen in on that current/voltage conversation using a self-powered JeeNode, which then reports what it sees as wireless packets.

It’s all based on Optocouplers, so here’s a first circuit to try things out:

A very simple test setup, which I’m going to feed a ±10V sine wave @ 50 Hz, just because the component tester on my oscilloscope happens to generate exactly such a signal. The 1 kΩ resistor is internal to the component tester, in fact. Here’s what comes out:

The yellow trace is the voltage over the IR LED inside the optocoupler, the blue trace is the voltage on the OUT pin. VCC is a 3x AA Eneloop battery pack @ 3.75V – what you can see is that the LED starts to conduct at ≈ 0.8V, and generates just enough light at 0.975V for the photo transistor to start conducting as well, pulling down the output voltage. With 1.01V over the LED, it already generates enough light for the output to drop to almost 0V.

In other words: within a range of just 41 mV at about 1V, the optocoupler “switches on”.

So much for the first part of this experiment. My hope is that this behavior will be just right to turn this MCT62 optocoupler into a little OpenTherm current “snooper” – stay tuned…

Sometimes I see some confusion on the web regarding the units to measure power with.

Here’s a little summary, in case you ever find yourself scratching your head with this stuff:

Electric potential is sort of a “pressure level” when using the water analogy, expressed in Volts (V)

Current is the flow of electrons, and is expressed in Amperes (A)

Charge is the “amount of electricity”, and is expressed in Coulombs (C)

Power is the product of volts and amperes, and is expressed in Watts (W)

Another measure of power is Volt-Amperes, this is not the same as Watts in the case of alternating current with reactive loads, but let’s not go there for now…

To summarise with the water analogy:

Volts = how high has the water been pumped up

Amps = how much water is flowing

Coulombs = the amount of water

Watts = how much energy is being used (or generated)

You can probably guess from this list that pumping water up twice as high (V) takes twice as much energy, and that pumping up twice as much (A) also takes twice as much energy. Hence the formula:

Watt = Volt * Ampere

Other equations can also help clarify things. They all add time into the mix (in seconds).

Current is “charge per second”:

Ampere = Coulomb / second

This is also the way I estimate average current consumption when diving into ultra-low power JeeNode stuff: using the oscilloscope to integrate (sum up) all the instantaneous current consumptions over time, I get a certain Coulomb (or micro-coulomb) value. If that’s a periodic peak and the system is powered-down the rest of the time, then the estimate becomes: X µC used per Y sec, hence the average current consumption is X / Y µA. The advantage of working with Coulombs in this way, is that you can add up all the estimates for the different states the system is in and still arrive at an average current level.
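In code, that bookkeeping is nothing more than a sum and a division. An illustrative sketch – the numbers in the test are made up, not real scope measurements:

```cpp
#include <cassert>

// Average current from per-state charge estimates, as described above:
// add up the microcoulombs used in each state of one periodic cycle,
// then divide by the cycle time. Conveniently, uC / s = uA.

double avgCurrent_uA(const double charges_uC[], int n, double period_s) {
    double total = 0;
    for (int i = 0; i < n; ++i)
        total += charges_uC[i];  // charges add up linearly
    return total / period_s;
}
```

E.g. a 30 µC transmit burst plus 2 µC of housekeeping, once a minute, averages out to just over half a microamp.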

Another one: energy consumption is power accumulated over time. This is often expressed in Watt-hours (Wh) or kilowatt-hours (kWh):

a 5 mA load on batteries of 2000 mAh will run for 2000 / 5 = 400 hours

Battery capacities are roughly as follows for the most common types:

an AA cell has 2500 mAh @ 1.5V = 3.75 Wh

an AA rechargeable cell has 2000 mAh @ 1.2V = 2.4 Wh

an AAA cell has 1000 mAh @ 1.5V = 1.5 Wh

an AAA rechargeable cell has 800 mAh @ 1.2V = 0.96 Wh

a CR2032 coin cell has 200 mAh @ 3V = 0.6 Wh

Wanna be able to run for a week on a coin cell? Better make sure your circuit draws no more than 200 / (24 x 7) = 1.2 mA on average under optimal conditions.

Wanna make it run a year on that same coin cell? Stay under 22 µA average, and it will.

With 2 or 3 AA batteries, you get an order of magnitude more to consume, so if you can get the average under 200..220 µA, those batteries should also last a year (ignoring the fact that batteries always have some self-discharge, that is).
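These rules of thumb are easily captured in two one-liners (self-discharge ignored, as noted):

```cpp
#include <cassert>

// Battery life rules of thumb from the text: capacity divided by the
// average draw, and vice versa. Self-discharge is ignored here.

double runtimeHours(double capacity_mAh, double avg_mA) {
    return capacity_mAh / avg_mA;
}

double maxAvgDraw_mA(double capacity_mAh, double hours) {
    return capacity_mAh / hours;
}
```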

The difference between 2, 3, or 4 AA batteries in series only affects the voltage you get out of them. Chips do not run more efficiently on more voltage – on the contrary, in fact!

For low-power use: run your circuit on as low a voltage as possible, but no lower (wink).

It looks like the OpenTherm gateway is sensitive to noise and wiring lengths. All my attempts to move the gateway upstairs, next to the boiler/heater, failed. Somehow, this:

THERMOSTAT <=> GATEWAY <=> 10 m wire <=> HEATER

… is not the same as this!

THERMOSTAT <=> 10 m wire <=> GATEWAY <=> HEATER

The OpenTherm documentation (PDF) specifically allows up to 50 meters of untwisted wiring, but I’m clearly running into some issue here.

Time to drag the scope downstairs and hook it up between gateway and heater:

The yellow trace is the voltage between the two wires, while the blue trace is the current through those wires. I used a 1 Ω resistor and measured the voltage drop, but had to switch to the most sensitive scale (since I’m using the standard x10 probe), hence all that noise.

Still, you can see the magic of the way the OpenTherm protocol works:

in rest, there’s 6V between the wires and about 6 mA of current flowing (a 1 kΩ load)

this is used by the thermostat to power itself (by keeping a capacitor charged)

when the thermostat sends data, it briefly reduces its current draw

since the boiler (or gateway) is feeding a constant current, this makes voltage go up

that voltage change is then detected and decoded by the boiler / gateway

about 40 ms later, the boiler / gateway then sends a reply

it does this by briefly forcing more current down the wire

this in turn can be detected by the thermostat, which then decodes that reply

there’s a small residual ripple, as the thermostat tries to maintain its 7V idle level

I was going to perform the same measurement on the other side of the gateway, i.e. the connection to the thermostat, but for some reason the gateway really doesn’t like me touching anything or connecting any wires to it (let alone a grounded scope probe). Maybe some noise is picked up and feeding back into one of the PIC’s I/O pins, and completely throwing it off. Luckily, the whole gateway always resets properly when left alone again.

I also sometimes see the thermostat indicating a fault (even just by touching the wire with the scope probe) – so it seems to be getting some power, but it’s definitely not happy.

Maybe the gateway’s output circuit is too sensitive, due to some high-impedance parts in the circuit? That would explain why even just using some long wires two floors down prevents the gateway from working.

Hm, not good – especially since I only wish to monitor the wire, not control it…

Update – these problems are caused by a floating ground. More on this once I get it all sorted out. With many thanks to Schelte Bron for dropping by and helping analyse this!

The circuit will deliver a constant current by varying the voltage drop, even when the load varies. You can see this in the fairly flat curve on the Component Tester screenshot included yesterday: no matter what level positive voltage you apply to this thing, it’ll draw about 2 mA (just ignore the negative end of the scale).

Actually, I cheated a bit. The real two-transistor current source circuit looks like this:

By moving that 10 kΩ resistor away from the load, and tying it directly to “+” the circuit works even better. I’ve simulated it with an external power supply to drive that resistor separately, and get this CT screen:

Totally flat! – And that 2 mA current level is set by the 330 Ω resistor, by the way.

One use for this could be a constant-current LED driver (although its efficiency would be very low – you really need a switching circuit with an inductor to get good efficiency).

So how does this mind-bending circuit actually work?

The key point to note, is that the emitter-to-base junction is essentially a diode (which is probably why transistors are drawn the way they are!). And it has a fixed forward-drop voltage of about 0.65V. As long as the base is less than 0.65V above the emitter voltage, the transistor will be switched off. As soon as the base is raised higher, current will flow through that forward diode and the transistor will start to conduct.

This is also why you always need a current limiting resistor: the base voltage cannot rise above 0.65V – the junction will simply conduct more current instead. Until the current limits are exceeded and the transistor is destroyed, that is…

First, imagine that the leftmost transistor is absent: then the 10 kΩ will pull up the base of the rightmost transistor and cause it to fully conduct. The circuit now essentially acts as the load in series with the 330 Ω resistor. With a maximum load (a short-circuit), the whole supply voltage will end up across that 330 Ω resistor.

But…

With the leftmost transistor in place, something special happens: as soon as the voltage over the 330 Ω resistor rises above 0.65V, the leftmost transistor will start to conduct, pulling the base of the rightmost transistor down. It will continue to do so until the voltage over the 330 Ω resistor has dropped to 0.65V again. Because at some point the base of the rightmost transistor will be pulled so low that it no longer fully conducts – thus reducing the current through the 330 Ω, and thus lowering the voltage drop across it.

You’re seeing a neat little negative feedback loop in action. These two transistors are going to balance each other out to the point where the 330 Ω resistor ends up having a voltage drop of exactly 0.65V – regardless of what the load is doing!

To get 0.65V over 330 Ω, we need a 0.65/330 = 1.97 mA current.

And so that’s what this circuit will feed to the load. As you can see in that last scope capture, the regulation is extremely good between 0.65 and 9V.

By simply varying the 330 Ω value, we can set any desired fixed current level.
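So the design recipe boils down to I = 0.65 / R. A one-line helper of my own to illustrate:

```cpp
#include <cassert>

// The two-transistor current source regulates the sense-resistor drop
// to one base-emitter forward voltage (~0.65 V), so the current is
// simply Vbe / R. The 330 ohm value is the one from the circuit above.

double sourceCurrent_mA(double senseOhm) {
    return 0.65 / senseOhm * 1000.0;  // A -> mA
}
```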

The reason I’m bringing this up, is that this circuit is in fact used in the OpenTherm gateway – see this schematic (look for the upside-down PNP version). With some extra circuitry to set the resistor to either 100 Ω or 28 Ω (100 Ω in parallel with 39 Ω). So the gateway is driving either 7 mA or 25 mA through the thermostat.

Welcome to the magical world of electronics – it’s full of clever little tricks like this!

One last optimisation, now that the OpenTherm relay sends nice and small 3-byte packets, is to reduce the number of packets sent.

The most obvious reduction would be to only send changed values:

This is a trivial change, but it has a major flaw: if packets are lost – which they will be, once in a while – then the receiving node will never find out the new value until it changes again.

There are several ways to solve this. I opted for a very simple mechanism: in addition to sending all changes, also send out unchanged values every few minutes anyway. That way, if a packet gets lost, it will be re-sent within a few minutes, allowing the receiver to resynchronise its state with the sender.

Here’s the main code, which was rewritten a bit to better match this new algorithm:

This still keeps the last value for each id in a “history” array, but now also adds a “resend” counter. The reason for this is that I only want to re-send id’s which have been set at least once, and not all 256 of them (of which many are never used). Also, I don’t really want to keep sending id’s for which nothing has been received for a long time. So I’m setting the re-send counter to 10 every time a new value is stored, and then counting them down for each actual re-send.

The final piece of the puzzle is to periodically perform those re-sends:

And in the main loop, we add this:

Here’s how it works: every second, we go to the next ID slot. Since there are 256 of them, this will repeat roughly every 4 minutes before starting over (resendCursor is 8 bits, so it’ll wrap from 255 back to 0).

When examining the id, we check whether its resend counter is non-zero, meaning it has to be sent (or re-sent). Then the counter is decremented, and the value is sent out. This means that each id value will be sent out at most 10 times, over a period of about 42 minutes. But only if it was ever set.

To summarise, as long as id values are coming in:

if the value changed, it will be sent out immediately

if it didn’t, it’ll be re-sent anyway, once every 4 minutes or so

… but not more than 10 times, if it’s never received again
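For reference, here’s a compact reconstruction of that scheme in plain C++ – the real code is embedded in the post above, this is just my condensed version of the same logic:

```cpp
#include <cassert>
#include <cstdint>

// Re-send scheme as described above: each id keeps its last value plus
// a counter, set to 10 on every store and counted down on each re-send.
// The cursor advances once a second, wrapping 255 -> 0 (~4 minutes).

static uint16_t history[256];  // last value seen per id
static uint8_t  resend[256];   // remaining re-send budget per id
static uint8_t  resendCursor;  // 8 bits, wraps automatically

// store a fresh value; returns true if it changed (i.e. send right away)
bool store(uint8_t id, uint16_t value) {
    bool changed = history[id] != value;
    history[id] = value;
    resend[id] = 10;           // (re-)arm up to 10 periodic re-sends
    return changed;
}

// called once a second; returns true if this slot should be re-sent
bool resendTick(uint8_t* id, uint16_t* value) {
    uint8_t slot = resendCursor++;
    if (resend[slot] == 0)
        return false;          // never set, or re-send budget used up
    --resend[slot];
    *id = slot;
    *value = history[slot];
    return true;
}
```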

And indeed, this reduces the packet rate, yet sends and re-sends everything as intended:

The OpenTherm relay sketch presented yesterday sends out 9-byte packets containing the raw ASCII text received from the gateway PIC. That’s a bit wasteful of bandwidth, so let’s reduce that to a 3-byte payload instead. Here is some extra code which does just that:

I’m using a very hacky way to convert hex to binary, and it doesn’t even check for anything. This should be ok, because the packet has already been verified to be of a certain kind:

marked as coming from either the thermostat or the heater

the packet type is either Write-Data or Read-Ack

every other type of incoming packet will be ignored
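Here’s my guess at what such a packing could look like – the actual byte layout used in the sketch may well differ, but it shows the same quick-and-dirty hex trick, which can skip all validation precisely because the message was already checked:

```cpp
#include <cassert>
#include <cstdint>

// One plausible 3-byte packing of an 8-hex-digit OpenTherm message
// (my assumption: keep the data-id byte plus the 16-bit value, drop
// the already-checked message-type byte). No input validation, since
// only verified messages ever get here.

static uint8_t hexNibble(char c) {
    // handles '0'..'9' and uppercase 'A'..'F' only - by design
    return c < 'A' ? c - '0' : c - 'A' + 10;
}

// msg points to 8 hex digits, e.g. "10010A00"; fills a 3-byte payload
void packPayload(const char* msg, uint8_t payload[3]) {
    uint8_t bytes[4];
    for (int i = 0; i < 4; ++i)
        bytes[i] = (hexNibble(msg[2*i]) << 4) | hexNibble(msg[2*i + 1]);
    payload[0] = bytes[1];     // data id
    payload[1] = bytes[2];     // value, high byte
    payload[2] = bytes[3];     // value, low byte
}
```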

Note the shouldSend() implementation “stub”, to be filled in later to send fewer packets.

Now that the OpenTherm Gateway has been verified to work, it’s time to think about a more permanent setup.
My plan is to send things over wireless via an RFM12B on 868 MHz. And like the SMA solar inverter relay, the main task is to capture the incoming serial data and then send this out as wireless packets.

And here’s the first version of the otRelay.ino sketch I came up with:

The only tricky bit in here is how to identify each message coming in over the serial port. That’s fairly easy in this case, because all valid messages are known to consist of exactly one letter, then 8 hex digits, then a carriage return. We can simply ignore anything else:

if there is a valid numeric or uppercase character, and there is room: store it

if a carriage return arrives at the end of the buffer: bingo, a complete packet!

everything else causes the buffer to be cleared
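Those three rules translate into a tiny accumulator. This is my reconstruction of that logic, not the actual otRelay.ino code:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Serial framing per the rules above: store valid characters (digits
// and uppercase letters) while there is room, complete the packet on
// a carriage return at position 9, and reset on anything else.

static char buf[10];     // 1 letter + 8 hex digits + terminating zero
static uint8_t fill;

// feed one serial character; returns true when buf holds a message
bool collect(char c) {
    if (c == '\r') {
        if (fill == 9) {       // CR right at the end: complete packet
            buf[9] = 0;
            fill = 0;
            return true;
        }
        fill = 0;              // CR anywhere else: start over
        return false;
    }
    bool valid = (c >= '0' && c <= '9') || (c >= 'A' && c <= 'Z');
    if (valid && fill < 9)
        buf[fill++] = c;       // valid character and room left: store
    else
        fill = 0;              // noise or overflow: clear the buffer
    return false;
}
```

Any line noise before or after a message simply resets the state, so the next valid message still comes through cleanly.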

This isn’t the packet format I intend to use in the final setup, but it’s a simple way to figure out what’s coming in in the first place.

It worked on first try. Some results from this node, as logged by the central JeeLink:

One of the problems with just relaying everything, apart from the fact that it’s wasteful to send it all as hex characters, is that there’s quite a bit of info coming out of the gateway:

Not only that – a lot of it is in fact redundant. There’s really no need to send the request as well as the reply in each exchange. All I care about are the “Read-Ack” and “Write-Data” packets, which contain actual meaningful results.

Some smarts in this relay may reduce RF traffic without losing any vital information.

Before going into processing the data from Schelte Bron’s OpenTherm Gateway, I’d like to point to OpenTherm Monitor, a multi-platform application he built and also makes freely available from his website.

It’s not provided for Mac OSX, but as it so happens, this software is written in Tcl and based on Tclkit, by yours truly. Since JeeMon is nothing but an extended version of Tclkit, I was able to extract the software and run it with my Mac version of JeeMon:

Here’s the user interface which pops up, after setting up the serial port (it needed some hacking in the otmonitor.tcl script):

I left this app running for an hour (vertical lines are drawn every 5 minutes), while raising the room temperature in the beginning, and running the hot water tap a bit later on.

Note the high error count: looks like the loose wires are highly susceptible to noise and electrostatic fields. Even just moving my hand near the laptop (connected to the gateway via the USB cable) could cause the Gateway to reset (through its watchdog, no doubt).

Still, it looks like the whole setup works very nicely! There’s a lot of OpenTherm knowledge built into the otmonitor code, allowing it to extract and even control various parameters in both heater and thermostat. As the above window shows, all essential values are properly picked up, even though this heater is from a different vendor. That’s probably the point of OpenTherm: to allow a couple of vendors to make their products inter-operable.

But here’s the thing: neither the heater nor the thermostat are near any serial or USB ports over here, so for me it would be much more convenient to transmit this info wirelessly.

Using a JeeNode of course! (is there any other way?) – stay tuned…

PS. Control would be another matter, since then the issue of authentication will need to be addressed, but as I said: that’s not on the table here at the moment.

Another project I’ve been meaning to tackle for a very long time is to monitor the central heating and warm water system. Maybe – just as with electricity – knowing more about what’s going on will help us reduce our fairly substantial natural gas bill here at JeeLabs.

The gas heater is from Vaillant and it’s connected to a Honeywell ChronoTherm – this is a “modulating” thermostat which automatically chooses its set-points based on the time of day and the day of the week. It all works really well.

The heater upstairs and the thermostat in the living room are connected by a two-wire low-voltage connection, using the OpenTherm protocol. There’s not that much “open” about this protocol, but people have hacked their way in and have discovered all the basic information being exchanged between these units.

A while back, I got a free PCB (thx, Lennart!) of a circuit by Schelte Bron, called the OpenTherm Gateway, and since all the required components were listed and easily available from Conrad, I decided to give it a go. Here’s the whole thing assembled:

The documentation is very well done: schematics, parts list, troubleshooting, and more.

This is a “gateway” in that it sits between the heater and the thermostat, so it can not only listen in on the conversation but actually take over. Things you can do with it include adjusting the set-point (i.e. desired room temperature), feeding in the temperature from an outside sensor, setting the ChronoTherm’s clock, and probably more. I’m only interested in monitoring this stuff for now, i.e. reading what is being exchanged.

The gateway is based on an 8-bit PIC controller, and has some funky electronics to do its thing – because the way these signals are encoded is pretty clever: there are only two wires, yet the heater actually powers the thermostat through them, and supports bidirectional I/O (hint: it uses voltage and current modulation).
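On top of that electrical trick, the bit stream itself consists of Manchester-encoded 32-bit frames. As far as I understand the published protocol descriptions – the gateway’s PIC firmware is the authoritative source, so treat the layout and polarity here as assumptions – a frame could be built up roughly like this, sketched in Python:

```python
def ot_frame(msg_type, data_id, data_value):
    """Assumed 32-bit OpenTherm frame: parity (1) + msg-type (3) + spare (4)
    + data-id (8) + data-value (16), with even parity over all 32 bits."""
    word = ((msg_type & 0x7) << 28) | ((data_id & 0xFF) << 16) | (data_value & 0xFFFF)
    if bin(word).count('1') % 2:   # set the parity bit to make the count even
        word |= 1 << 31
    return word

def manchester(word):
    """Encode 32 bits MSB-first at (nominally) 1 kbit/s:
    '1' = high-to-low transition, '0' = low-to-high (assumed polarity)."""
    levels = []
    for i in range(31, -1, -1):
        bit = (word >> i) & 1
        levels += [1, 0] if bit else [0, 1]
    return levels
```

A start and a stop bit bracket each frame on the wire; the Manchester coding is what lets the receiver recover the clock from those two wires.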

One little gotcha is that this gateway brings out its interface as an RS232-compatible serial port. And to my surprise, I found out that I no longer have any computer here which can read out these +/- 12V level signals!

So the next task was to get things back into “normal” logic levels. Simple, although it’s a bit of a hack: remove the on-board MAX232 level converter chip, and insert wires to bring out the original 5V logic levels instead:

Op-amps are one of the building blocks of the modern analog electronics industry.

Here’s an interesting one, the MAX4470 .. MAX4474 series:

Simple layout, again in a tiny SMD package:

The other members of this family are dual and quad versions, if you need more op-amps.

This chip is nice because of its phenomenally low current consumption: 750 nA at 5V. It gets even better: at 3.3V, I measured a ridiculously low 190 nA!

Here are some more specs from the Maxim datasheet:

Might not be the highest-performance op-amp out there, but still – this thing could be quite handy to implement comparators, voltage followers, oscillators, amplifiers, filters, and more. Especially when the “power budget” is really really low.

PS. I’m assuming this chip isn’t oscillating with the above test setup, but in normal use you really need to tie the input pins to something to avoid that.

The TPS78233 from Texas Instruments looks like a standard LDO linear voltage regulator:

It takes an input voltage up to 5.5V and regulates it down to 3.3V (the above image from the datasheet is the 2.7V regulator). Not a spectacular voltage range, but it has a very nifty trick up its sleeve:

This regulator only draws 450 nA, i.e. 0.45 µA, when unloaded!

That’s about a quarter of the current consumption of the already-spectacular MCP1702 and MCP1703 used in JeeNodes – a ridiculously low 2.5 microwatts.

Here’s a little test setup (yep, those SMD’s are small – can you see the two 10 µF caps?):

To get a sense of this level of current consumption: 3x AA batteries of 2000 mAh would last 5 centuries (ehm, well, except for their pesky self-discharge) – which is a bit silly, of course.

To get another idea: when I measure the output voltage with a multi-meter, the current consumption “jumps” to about 750 nA. Why this relatively big change? Because most multi-meters have a 10..11 MΩ input impedance, and 3.3V over 11 MΩ is… 300 nA!
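Both of those figures are easy to verify – here’s a quick sanity check (plain Python, nothing chip-specific):

```python
# Current drawn by a typical 11 MOhm multimeter input at 3.3V:
meter_nA = 3.3 / 11e6 * 1e9
print(round(meter_nA), "nA")          # -> 300 nA, matching the observed jump

# 3x AA (2000 mAh, in series - capacity stays the same) feeding 450 nA:
hours = 2000e-3 / 450e-9              # ~4.4 million hours
print(round(hours / (24 * 365.25)))   # -> 507, i.e. roughly 5 centuries
```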

The fascinating thing about the TPS782xx series is that it achieves this extremely low idle current while still being able to regulate and supply up to 150 mA. Furthermore, that enable pin might come in very handy for certain ultra-low energy harvesting scenarios.

But I’m not going to replace the regulator on JeeNodes for a number of reasons:

The MCP1702 can handle input voltages up to 13V (vs only 5.5V for the TPS78233).

There’s no through-hole version, so this would not work for standard JeeNode kits.

Those extra savings only kick in when you get everything into the few-µA range, and so far, things like Room Nodes still draw a few dozen µA’s.

In many cases, when the max is 5.5V anyway, no regulator will be needed at all (note that running only the RFM12B on it may create a problem with signal levels).

But hey, it’s good to know that these chips exist. A few microwatts… wow!

Ok, so all the solar panels are in place and doing their thing (as much as this season allows, anyway). But seeing that live power usage on my desk all day long kept tempting me to try and optimise the baseline consumption just a tad more…

Previous readings have always hovered around 115 Watts, lately. Since the JeeLabs server + router + internet modem use about 30 W together, that leaves roughly 85 W unaccounted for. Note that this is without fridges, boilers, heat circulation pumps, gas heaters, or other intermittent consumers running. This baseline is what we end up consuming here no matter what – vampire power from devices in “standby” and other basic devices you want to keep running at all times, such as the phone and internet connection.

It’s not excessive, but hey: 100 W day-in-day-out is still over 850 kWh on a yearly basis.

Well, today I managed to get the baseline down waaay further:

That’s including the JeeLabs server + router + modem. So the rest of the house at JeeLabs is consuming under 40 W. Perfect: I’ve reached my secret goal of a baseline under 50 W!

Here’s how that “idle” power consumption was reduced this far:

I turned off an old & forgotten laptop and Ethernet switch, upstairs – whoops!

I removed another gigabit Ethernet switch under my desk (more on that later)

the 10-year old Mac Mini + EyeTV + satellite dish setup has been dismantled and replaced by a small all-in-one TV drawing 0.5W in standby (the monitor is re-used)

I’m switching to DVB-C (i.e. coax-based) reception, available from the internet modem by upgrading to the cheapest triple-play subscription with “analog + digital” channels

that means: no settop box, just the internet modem (already on anyway) and a new low-end but modern Sharp 22″ TV / DVB-C / DVD-player / USB-recorder

As it turns out, the Mac Mini (about 10 years old) plus the master-slave AC mains switch controlling everything else were drawing some 20 W – day in day out. Bit silly, and far too much unnecessary technology strung together (though working, most of the time).

The other biggie: no more always-on Ethernet switches, just the WRT320N wireless router in front of the server, with a second wired gigabit connection to my desk. That’s two really fast connections where it matters, everything else uses perfectly-fine WiFi.

The main reason for having an Ethernet switch near my desk was to allow experimenting with JeeNode-based EtherCards, Raspberry Pi’s, and so on. But… 1) that switch was really in the wrong place, it would be far more convenient to have Ethernet in the electronics corner at JeeLabs, and 2) why keep that stuff on all the time, anyway?

So instead, I’m now re-using a spare Airport Express as wireless-to-wired Ethernet extension router. Plug it in, wait a minute for it to settle down, and voilà – instant wired Ethernet anywhere there is an AC mains socket:

And if I need more connections, I can route everything through that spare Ethernet switch.

It’s not the smallest solution out there, but who cares. Why didn’t I think of all this before?

I was curious about the difference between Power-down and Standby in the ATmega328p. Power-down turns off the main clock (a 16 MHz resonator in the case of JeeNodes), whereas Standby keeps it running. I was quite surprised by the outcome… read on.

There’s an easy way to measure this, as far as software goes, because the rf12_sendWait() function has an argument to choose between the two (careful: 2 = Standby, 3 = Power-down – unrelated to the values in the SMCR register!).

I tweaked radioBlip.ino a bit, and came up with this simple test sketch:

With this code, it’s just a matter of hooking up the oscilloscope again in current measurement mode (a 10 Ω resistor in the power line), and comparing the two.

Here’s the standby version (arg 2):

… and here’s the power-down version (arg 3), keeping the display the same:

I’ve zoomed in on the second byte transmission, and have dropped the baseline by 21 mA to properly zoom in, so what you’re seeing is the ATmega waking up to store a single byte into the RFM12B’s transmit buffer, and then going back to sleep again.

The thing to watch is the baseline level of these two scope captures. In the first case, it’s at about 0.5 mA above the first division, and the processor quickly wakes up, does its thing, and goes back into standby again.

In the second case, there’s a 40 to 50 µs delay and “stutter” as the system starts its clock (the 16 MHz resonator), does its thing, and then goes back to that ultra-low power level.

There are bytes going out once every 200 µs, so fast wakeup is essential. The difference is keeping the processor drawing 0.5 mA more, versus a more sophisticated clock startup followed by a drop back to total shutdown.

What can also be gleaned from these pictures, is that the RF12 transmit interrupt code takes under 40 µs @ 16 MHz. This explains why the RF12 driver can work with a clock frequency down to 4 MHz.

The thing about power-down mode (arg 3), is that it requires a fuse setting different from what the standard Arduino uses. We have to select fast low-power crystal startup, in 258 cycles, otherwise the ATmega will require too much time to start up. This is about 16 µs, and looks very much like that second little hump in the last screen shot.

Does all this matter, in terms of power savings? Keep in mind that this is running while the RFM12B transmitter is on, drawing 21 mA. This case was about the ATmega powering down between each byte transmission. Using my scope’s math integral function, I measured 52.8 µC for standby vs 60.0 µC for power-down – so we’re talking about a nearly 14 % increase in power consumption during transmit!

The explanation for this seemingly contradictory result is that the power-down version adds a delay before it actually sends out the first byte to transmit. In itself that wouldn’t really make a difference, but because of the delay, the transmitter stays on noticeably longer – wiping out the gains of shutting down the main clock. Check this out:

Not so fast. The I/O pin is tied to a microcontroller running at 3.3 or 5V, so its voltage level will vary between 0 and a few volts. Whereas “+” is more likely to be 5V, 12V, or even 24V.

This means that to keep the PNP transistor switched off, we need to keep the base voltage at nearly the same level as that “+” line. Unfortunately, this is impossible – not only could high voltages on I/O pins of a µC damage them, there is also some protection circuitry on each pin to protect against electrostatic discharge (ESD). If you were to look inside the µC chip, you’d find something like this on each I/O pin:

What that means is that if you try to pull an I/O pin up to over VCC+0.7V, then that topmost diode will start to conduct. This is no problem as long as the current stays under 1 mA or so, but it does mean that the actual voltage of an I/O pin will never be more than 4V (when running on 3.3V). Which means that PNP transistor shown in the first image will always be on, regardless of the I/O pin state.

We’ll need a more complex circuit to implement a practical high-side power-on switch:

The workhorse, i.e. the real switch, is still the PNP transistor on the right. But now there’s an extra “stage” in front to isolate the I/O pin from the higher voltages on the base of that PNP transistor. There’s now essentially a low-side switch in front of the PNP.

When I/O is “0”, no current flows into the base of the NPN transistor, which means it won’t conduct, and hence no current flows into the base of the PNP transistor either.

When I/O is “1”, the NPN transistor will conduct and pull its collector towards ground. That leaves a 10 kΩ resistor between almost ground (0.4V) and almost high (“+” – 0.7V), since the base-to-emitter junction of a transistor is more or less a forward-conducting diode. So the base of the PNP transistor is pulled down, and the PNP transistor is switched on. The resistor values are not too critical here – making them both 10 kΩ would also work fine. But they have to be present to limit both base currents.
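To put some numbers on that reasoning – taking the two 10 kΩ resistors mentioned above and a 12V “+” rail as an example (all values assumed for illustration, not taken from a specific design):

```python
def switch_currents(v_io=3.3, v_plus=12.0, r_npn=10e3, r_pnp=10e3,
                    v_be=0.7, v_ce_sat=0.4):
    """Estimate both base currents in the two-stage high-side switch.
    All component values and voltage drops here are assumptions."""
    i_npn = (v_io - v_be) / r_npn                # current into the NPN base
    i_pnp = (v_plus - v_be - v_ce_sat) / r_pnp   # current pulled out of the PNP base
    return i_npn, i_pnp

i_npn, i_pnp = switch_currents()
print(i_npn * 1e3, i_pnp * 1e3)   # roughly 0.26 mA and 1.09 mA
```

Both stay well under a milliamp, which is why the exact resistor values are not critical – they just have to be there.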

A similar circuit can be created with two MOSFETs. With the proper choice of MOSFETs, this will in fact be a better option, because it can handle more current and will have less power loss (i.e. heat). The resistors will need to be placed differently.

Note that all circuits can be analysed & explained in the same way, as long as there are no feedback loops: step-by-step, reasoning about the effect of each stage on the next.

Yesterday’s post brought up some good comments, which I’d like to expand on a bit.

To summarise, this is about how to switch power to an electric circuit using an I/O pin.

Yesterday’s solution worked for me, but would fail if the voltage range is not as nicely predictable, i.e. trying to control say between 2 and 12V with an I/O pin which supplies 1.8 to 3.3V. In this case, the 0.7V diode drop of the base-to-emitter junction of a transistor won’t always be of much help.

Let’s examine some approaches. First, what is perhaps the most obvious way:

With a “normal” (BJT) NPN transistor, you feed it some current by making an I/O pin high, and it’ll conduct. There needs to be a resistor in series, large enough to limit the current, but small enough to drive the transistor into saturation (10 kΩ should work for loads up to say 25 mA, you can reduce it to switch more current).
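That “up to say 25 mA” claim follows from the base current and the transistor’s amplification – assuming a 3.3V I/O pin and a typical hFE of around 100 (my assumption; it varies a lot per transistor type and operating point):

```python
def max_load_mA(v_io=3.3, v_be=0.7, r_base=10e3, hfe=100):
    """Rough collector current a given base resistor can support
    before the BJT drops out of saturation (hFE assumed)."""
    i_base = (v_io - v_be) / r_base     # ~260 uA into the base
    return i_base * hfe * 1e3           # -> ~26 mA of load current

print(max_load_mA())
```

Halving the base resistor roughly doubles the switchable load, at the cost of more drive current from the I/O pin.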

With an N-MOSFET, you pull the gate up, again by setting an I/O pin high. In this case there should be a resistor to pull the gate down until the I/O pin is set up as an output, to prevent power-up glitches. This resistor can be much larger, 1 MΩ or more. MOSFETs need almost no current (“flow”), they just need voltage (“pressure”) to function.

The benefit of these circuits is that you can easily switch 5V, 12V, or even 24V – with an I/O pin which remains at very low voltage levels (say 1.8 to 3.3V).

In a perfect world, these would both be fine, and be very convenient: “1” is on, “0” is off.

Unfortunately, a transistor is not a perfect switch, so there will be some residual voltage drop over it (0.2..0.4V for the BJT, under 0.1V for the MOSFET). Also, the selected MOSFET has to switch on at low voltages – many types need 4V or more to fully switch on.

One problem with these “low-side” switches (i.e. in the ground wire), is that the circuits will start to float: with a small voltage drop over the transistor, all signal levels to this circuit will be raised slightly, and sometimes unpredictably. So if the circuit has any other connections to the microcontroller (or anything else, for that matter), then these levels will vary somewhat. It’s like shaking hands with someone while standing on a treadmill :)

What’s even worse: when the power is switched off, the circuit ends up being tied to its power supply “+” side, but disconnected from ground – this can cause all sorts of nasty problems with electricity finding its way through other connected pins.

Having said that: if the circuit to be switched has no other outside connections, then either of these setups will work just fine. One example is LEDs and LED strips – which is why the MOSFET Plug uses N-MOSFETs exactly as outlined here. All you need to do is stick with “Common Anode” type RGB LED’s, i.e. tie all the “+” pins (anodes) together to the power supply, and let the MOSFETs do the switching between the “-” pins (cathodes) and GND.

For anything more elaborate, we need “high-side switching” – coming up tomorrow!

The SMA Bluetooth relay described yesterday has to switch the power to the RN-42 module using an I/O pin on the ATmega. Currents are fairly small: up to about 50 mA.

I tried directly powering the RN42 from two I/O pins in parallel, but it looks like they don’t have enough current drive capacity for this purpose. So the task is to find a simple way to switch on power somehow.

The simplest solution would seem to be a P-MOSFET in the “high side” supply, i.e. between PWR and the RN-42’s supply pin, but there is a problem: PWR will be somewhere between 3.3 and 5V (actually it’s more like between 3.6 and 4.0V with the 3xAA Eneloop batteries I’m using), but the I/O pin on the ATmega won’t be more than 3.3V – since the ATmega sits behind a 3.3V voltage regulator. I tried the P-MOSFET, before realising that it’d always be driven on – the I/O pin voltage is sufficiently low to switch the MOSFET on, even with a logic “1” – not good!

MOSFETs are driven by voltage whereas transistors are driven by current, so an obvious thing to try next is a PNP transistor in more or less the same configuration. Voltage differences wouldn’t be so critical, since hardly any current flows when it’s off. Also, there’s the extra base-to-emitter voltage drop that every normal transistor has. Still, a simple PNP transistor might switch on if the difference in voltage is large enough – this can be overcome with a PNP Darlington transistor, which is simply two PNP transistors cascaded in a certain way. The property of these things – apart from their high amplification (hFE) – is that you need to drive the base with a slightly larger voltage difference: a lower voltage in this case, since these are PNP types. Two discrete PNP transistors would also have worked.

Here’s the circuit:

And sure enough, it works. I happened to have an SMD “BCV28” lying around:

The 10 kΩ resistor in series with the base limits the drive current to under 1 mA – more than enough to drive the Darlington into saturation, i.e. the state where the collector-to-emitter voltage drop is at its lowest.

That’s it. Every 5 minutes, a reading arrives on the central JeeLink, as shown by JeeMon:

This approach has as “benefit” that it’ll fail gracefully: even if anything goes wrong and things hang, the hardware watchdog will pull the ATmega out of that hole and restart it, which then starts off again by entering an ultra-low power mode for 5 minutes.
So even if the SMA is turned off, this sketch won’t be on more than about 1% of the time.

Here’s the energy consumption measurement of an actual readout cycle:

The readings are a bit noisy – perhaps because I had to use 1 mV/div over a 1 Ω resistor (the 10 Ω used before interfered too much with this new power-switching code).

As you can see, the whole operation takes about 4 seconds. IOW, this node is consuming 153 milli-Coulombs every 300 seconds. That’s 0.5 mC/sec, or 0.5 mA on average. So the estimated lifetime on 3x AA 1900 mAh Eneloops is 3800 hours – some 5 months.
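Spelling that estimate out, starting from the measured 153 mC per readout cycle:

```python
charge_mC, period_s = 153, 300          # one readout every 5 minutes
avg_mA = charge_mC / period_s           # ~0.51 mA average current
hours = 1900 / avg_mA                   # 3x AA Eneloop, 1900 mAh
months = hours / (24 * 30.44)           # ~5 months
print(round(avg_mA, 2), round(hours), round(months, 1))
```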

Update – The first set of batteries lasted until March 18th, 2013 – i.e. over 4 months.

The Bluetooth module in yesterday’s setup has a nasty power consumption profile:

The yellow line is total power consumption, which rises to over 60 mA at times, and the smaRelay.ino sketch is querying the SMA inverter roughly every 10.5 seconds. The drop in baseline is the ATmega going to sleep as it waits for the next period, so you can clearly see what the Bluetooth module is doing – while kept on and connected to the SMA, in fact.

I’m not sure that the Hameg’s math integral function is up to summing such fast-changing values, but it’s the best I’ve got to measure power consumption here at JeeLabs at the moment (well, either this or tracking the discharge on a hefty electrolytic capacitor).

Note the baseline consumption of about 5 mA, and the frequent but highly irregular brief power consumption pulses. That’s BT doing its frequency hopping thing, I assume.

Anyway, my goal was to get an estimate of the average power consumption, so here we go:

two cursors were used to peg the integral (summed) value over one cycle

charge usage over one 10.5 second period turns out to be 134 millicoulombs

that’s 134 / 10.5 ≈ 12.75 mC per second, i.e. 12.75 mA average

Whoa… not much of a candidate for battery-power this way!

That leaves a couple of options:

just power it via a USB adapter and move on

explore the RN-42’s low-power mode, which is claimed to go as low as 300 µA

completely turn off power to the RN-42

I’m inclined to go for the latter. I don’t really need solar PV readings that often, since the SMA accumulates its daily and total generated power anyway. And during the night, all this reading and reporting activity is also quite useless.

That would also solve – or rather: work around – the intermittent problems present in the current code, in which the sketch stops relaying after a few minutes. It always seems to get stuck after a while, waiting for incoming data from the Bluetooth module.

One readout every 10 minutes would probably be plenty for me, and since the SMA has a time-of-day clock which can be read out over BT, I can stop readouts during the night (or even simpler: add an LDR and switch off when it’s dark).

It looks like powering up, establishing a connection, and reading out one set of values can be done in under 6 seconds, so that leads to a 1% duty cycle. Let’s say 200 µA on average – this ought to run a year on 3x AA Eneloops.

Yesterday’s post shows how to read out the SMA solar PV inverter via Bluetooth. The idea was to install this on the Mac Mini JeeLabs server, which happens to be in range of the SMA inverter (one floor below). But that brings up a little dilemma.

Install a potentially kernel-panic-generating utility on the main JeeLabs server? Nah…

I don’t really care whether this issue gets fixed. I don’t want to have the web server go down for something as silly as this, and since it’s a kernel panic, there’s no point trying to move the logic into a Linux VM – the problem is more likely in Apple’s Bluetooth / WiFi stack, which will get used no matter how I access things.

The alternative is to implement a little “SMA Relay” using a JeeNode with a Bluetooth module attached to it, which drives the whole protocol and then broadcasts results periodically over RF12. That way I can figure out and control it.

I tried to use the SoftwareSerial library built into the newer Arduino IDE releases, but ran into problems with lost bytes – even with the software UART speed down to 19200 baud.

So I ended up first debugging the code on an Arduino Mega, which has multiple hardware UARTs and allows good ol’ debugging-with-print-statements, sending out that debug info over USB, while a separate hardware UART deals with all communication to and from the Bluetooth module.

Once that worked, all debugging statements were removed and the serial Bluetooth was switched to the main (and only) UART of the JeeNode. The extra 10 kΩ R’s in the RX and TX lines allow hooking up a USB BUB for re-flashing. The BUB will simply overrule, but needs to be removed to try things out:

Next step was to add a little driver to JeeMon again, the aging-but-still-working Tcl-based home monitoring setup at JeeLabs. Fairly straightforward, since it merely needs to extract a couple of 16-bit unsigned ints from incoming packets:

And sure enough, data is coming in (time in UTC):

… and properly decoded:

The ATmega code has been added as example to JeeLib on GitHub, see smaRelay.ino.
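The decoding side really is that simple – the AVR packs its values as consecutive little-endian 16-bit unsigned ints, so any language can pull them apart. Sketched here in Python rather than Tcl (what the fields mean depends on what smaRelay.ino actually packs, so this is deliberately generic):

```python
import struct

def decode_sma(payload):
    """Unpack consecutive little-endian uint16s from an RF12 payload,
    the way an AVR lays out a struct of uint16_t fields."""
    n = len(payload) // 2
    return struct.unpack('<%dH' % n, payload[:n * 2])

# e.g. bytes 34 12 00 01 decode to the values 0x1234 and 0x0100
print(decode_sma(bytes([0x34, 0x12, 0x00, 0x01])))
```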

I’m still debugging some details, as the Arduino sketch often stops working after a few minutes. I suspect that some sort of timeout and retry is needed, in case Bluetooth comms get lost occasionally. Bluetooth range is only a few meters here, especially with the reinforced concrete floors and walls at JeeLabs.

Anyhow, it’s a start. Suggestions and improvements are always welcome!

As pointed out in recent comments, the SMA solar PV inverter can be accessed over Bluetooth. This offers various goodies, such as reading out the daily yield and the voltage / power generation per MPP tracker. Since the SB5000TL has two of them, and my panels are split between 12 east and 10 west, I am definitely interested in seeing how they perform.

Besides, it’s fun and fairly easy to do. How hard could reading out a Bluetooth stream be?

Well, before embarking on the JeeNode/Arduino readout, I decided to first try the built-in Bluetooth of my Mac laptop, which is used by the keyboard and mouse anyway.

I looked at a number of examples out there, but didn’t really like any of ‘em – they looked far too complex and elaborate for the task at hand. This looked like a wheel yearning to be re-invented… heh ;)

The trouble is that the protocol is fully packetized, checksummed, etc. The way it was set up, this seems to also allow managing multiple inverters in a solar farm. Nothing I care about, but I can see the value and applicability of such an approach.

So what it comes down to is to send a bunch of hex bytes in just the right order and with just the right checksums, and then pulling out a few values from what comes back by only decoding what is relevant. Fortunately, the Nanode SMA PV Monitor project on GitHub by Stuart Pittaway already did much of this (and a lot more).
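About those checksums: if I read Stuart’s code correctly, the packets use the HDLC-style FCS-16 known from PPP framing – reflected polynomial 0x8408, initial value 0xFFFF, final complement. Treat that as my assumption about the SMA protocol; the routine itself is a classic:

```python
def fcs16(data):
    """HDLC/PPP FCS-16: reflected poly 0x8408, init 0xFFFF, final XOR 0xFFFF.
    Whether the SMA packets use exactly this variant is my assumption."""
    fcs = 0xFFFF
    for b in data:
        fcs ^= b
        for _ in range(8):
            fcs = (fcs >> 1) ^ 0x8408 if fcs & 1 else fcs >> 1
    return fcs ^ 0xFFFF

print(hex(fcs16(b"123456789")))   # standard check value for this CRC: 0x906e
```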

I used some templating techniques (in good old C) which are probably worth a separate post, to generate the proper data packets to connect, initialise, login, and ask for specific results. And here’s what I got – after a lot of head-scratching and peering at hex dumps:

The clock was junk at the time, but as you can see there are some nice bits of info in there.

One major inconvenience was that my 11″ MacBook Air tended to crash every once in a while. And in the worst possible way: hard kernel panic -> total reboot needed -> all unsaved data lost. Yikes! Hey Apple, get your stuff solid, will ya – this is awful!

The workaround appears to be to disable wireless and not exit the app while data is coming in. Sounds awfully similar to the kernel panics I can generate by disconnecting an FTDI USB cable or BUB, BTW. Needless to say, these disruptions are extremely irritating while trying to debug new code.

The syncRecv.ino sketch developed over the last few days is shaping up nicely. I’ve been testing it with the homePower transmitter, which periodically sends out electricity measurements over wireless.

Packets are sent out every 3 seconds, except when there have been no new pulses from any of the three 2000 pulse/kWh counters I’m using. So normally, a packet is expected every 3 seconds, but at night when power consumption drops to around 100 Watt, only every third or fourth measurement will actually lead to a transmission.

The logic I’m using was specifically chosen to deal with this particular case, and the result is a pretty simple sketch (under 200 LOC) which seems to work out surprisingly well.

How well? Time to fire up that oscilloscope again:

This is a current measurement, collected over about half an hour, i.e. over 500 reception attempts. The screen was set to 10s trace persistence mode (with “false colors” and “background” enabled to highlight the most recent traces and keep showing each one), so all the triggers are superimposed on one another.

These samples were taken with about 300 W consumption (i.e. 600 pulses per hour, one per 6s on average), so the transmitter was indeed skipping packets fairly regularly.

Here’s a typical single trigger, giving a bit more detail for one reception:

Lots of things one can deduce from these images:

the mid-level current consumption is ≈ 8 mA, that’s the ATmega running

the high-level current increases by another 11 mA for the RFM12B radio

almost all receptions are within 8..12 ms

most missing packets cause the receiver to stay on for up to some 18 ms

on a few occasions, the reception window is doubled

when that happens, the receiver can be on, but still no more than 40 ms

the 5 ms after proper reception are used to send out info over serial

the ATmega is on for less than 20 ms most of the time (and never over 50 ms)

it looks like the longer receptions happened no more than 5 times

If you ignore the outliers, you can see that the receiver stays on well under 15 ms on average, and the ATmega well under 20 ms.

This translates to a 0.5% duty cycle with 3s transmissions, or a 200-fold reduction in power over leaving the ATmega and RFM12B on all the time. To put that in perspective: on average, this setup will draw about 0.1 mA (instead of 20 mA), while still receiving those packets coming in every 3 seconds or so. Not bad, eh?

There’s always room for improvement: the ATmega could be put to sleep while the radio is receiving (it’s going to be idling most of that time anyway). And of course the serial port debugging output should be turned off for real use. Such optimisations might halve the remaining power consumption – diminishing returns, clearly!

But hey, enough is enough. I’m going to integrate this mechanism into the homeGraph.ino sketch – and expect to achieve at least 3 months of run time on 3x AA (i.e. an average current consumption of under 1 mA total, including the GLCD).

Plenty for me – better than both my wireless keyboard and mouse, in fact.

That homeGraph setup brought out the need to somehow synchronise a receiver to the transmitter, as illustrated in a recent post. See also this forum discussion, which made me dive in a little deeper.

Let’s take a step back and summarise what this is all about…

The basic idea is that if the transmitter is transmitting in a fixed cycle, then the receiver need not be active more than a small time window before and after the expected transmission. This might make it possible to reduce power consumption by two orders of magnitude, or more.

Here’s the initial syncRecv.ino sketch I came up with, now also added to JeeLib:

The idea is to turn the radio off for T – W milliseconds after a packet comes in, where T is the current cycle time estimate, and W the current +/- window size, and then wait for a packet to come in, but no later than T + W.

Each time we succeed, we get a better estimate, and can reduce the window size. Down to the minimum 16 ms, which is the granularity of the watchdog timer. For that same reason, time T is truncated to multiples of 16 ms as well. We simply cannot deal with time more accurately if all we have is the watchdog.
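The gist of that adaptive loop, transcribed into Python for clarity – the actual narrow/widen factors and limits in syncRecv.ino may well differ, these are my guesses:

```python
WATCHDOG = 16  # ms - granularity of the ATmega's watchdog timer

class SyncEstimator:
    """Sketch of the syncRecv idea: track cycle time T and window W,
    sleep T - W, then listen for at most 2 x W."""
    def __init__(self, cycle=3000, window=256):
        self.cycle = cycle // WATCHDOG * WATCHDOG   # T, in watchdog multiples
        self.window = window                        # W, the +/- uncertainty

    def sleep_ms(self):
        # radio (and ATmega) stay off for T - W, truncated to watchdog ticks
        return (self.cycle - self.window) // WATCHDOG * WATCHDOG

    def listen_ms(self):
        # then the radio listens until at most T + W
        return 2 * self.window

    def packet_received(self, measured_ms):
        # refine the estimate, narrow the window down to watchdog resolution
        self.cycle = measured_ms // WATCHDOG * WATCHDOG
        self.window = max(WATCHDOG, self.window // 2)

    def packet_missed(self):
        # no packet in the window: widen it again (doubling is a guess)
        self.window = min(512, self.window * 2)
```

Each successful reception halves the uncertainty, each miss doubles it – which matches the occasional doubled reception windows visible on the scope.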

Here are some results from three different JeeNodes:

(whoops – the offset sign is the wrong way around, because I messed up the subtraction)

Note that for timing purposes, this sketch first waits indefinitely for a packet to come in. Since the transmitter doesn’t always send out a packet, some of these measurement attempts fail – as indicated by multiple R’s.

That last example is a bit surprising: it was modified to run without powering down the radio in between reception attempts. What it seems to indicate is that reception works better when the radio is not put to sleep – not sure why. Or maybe the change in current consumption affects things?

As you can see, those watchdog timers vary quite a lot across different ATmega chips. These tests were done in about 15 minutes, with the sending sketch (and home power consumption levels) about the same during the entire period.

Still, these results look promising. Seems like we could get the estimates down to a few milliseconds and run the watchdog at its full 16 ms resolution. With the radio on less than 10 ms per 3000 ms, we’d get a 300-fold reduction in power consumption. Battery powered reception may be feasible after all!

The Flukso is a little open-source box which can read out a couple of CT current clamps and/or pulse counters to provide electricity / gas / water consumption details – via a JSON/REST interface, either locally or on the Flukso site (private or shared, your call):

The design is based on the Dragino, and includes an ATmega piggy-back board with extra circuitry specifically for reading out current clamps. It’s not every day that you see designs which can actually deal with power outages in such a way that the last readings get saved to EEPROM in the last few milliseconds – as the system is going down! – but that’s exactly what the Flukso does, showing its great attention to detail.

The Flukso meter’s designer, Bart Van Der Meerssche, is also the driving force behind the Electro:camp meetings, so we had some opportunities to chat and dream about the future these past few days. Lots of interesting options and wild ideas floating around. With Linux in the equation, a lot more sophistication becomes feasible.

To have a better test situation, I’ve decided to add a Flukso setup to JeeLabs, which is in fact trivial since it can connect over WiFi. Power consumption is under 3 W:

I added the three current clamps as follows (consumption only, no PV solar yield for now):

One 50 Amp CT clamp for the RCD-protected groups 1..3

One 50 Amp CT clamp for the non-RCD-protected groups 4..7

One 50 Amp CT clamp for the induction cooker, group 9

The total should match what I’m measuring with my other 2 pulse counters.

The daughterboard is a prototype with on-board RFM12B (software is work-in-progress).

The other news is that the pulse counter wiring has been fixed, so this is now correct:

That’s an early morning with heavy clouds. Hey, where’s that sun when you need it!

This has been a long time coming, and the recent Elektro:camp meet-up has finally pushed me to figure out the remaining details and get it all working (on foam board!):

Bottom middle is a JeeLink, which acts as a “boot server” for the other two nodes. The JeeNode USB on the left and the JeeNode SMD on the right (with AA Power Board) both now have a new boot loader installed, called JeeBoot, which supports over-the-air uploading via the RFM12B wireless module.

The check for new firmware happens when pressing reset on a remote node (not on power-up!). This mechanism is already quite secure, since you need physical access to the node to re-flash it. Real authentication could be added later.

The whole JeeBoot loader is currently a mere 1.5 KB, including a custom version of the RF12 driver code. Like every ATmega boot loader, it is stored in the upper part of flash memory and cannot be damaged by sketches running amok. The code does not yet include proper retry logic and better low-power modes while waiting for incoming data, but that should fit in the remaining 0.5 KB. The boot loader could be expanded to 4 KB if need be, but right now this thing is small enough to fit even in an ATmega168, with plenty of room left for a decent sketch.

The boot algorithm is a bit unconventional. The mechanism is driven entirely from the remote nodes, with the central server merely listening and responding to incoming requests in a state-less fashion. This approach should offer better support for low-power scenarios. If no new code is available, or if the server does not respond quickly, the remote node continues by launching the current sketch. If the current sketch does not match its stored size and CRC (perhaps because of an incomplete or failed previous upload attempt), then the node retries until it has a valid sketch. That last part hasn’t been fully implemented yet.

The boot server node can be a JeeLink, which has enough memory on board to store different sketches for different remote nodes (not all of them need to be running the same code). But it could also be another RFM12B-based setup, such as a small Linux box or PC.

This first test server has just two fixed tiny sketches built in: fast blink and slow blink. It alternately sends either one or the other, which is enough to verify that the process works. Each time any node is reset, it’ll be updated with one of these two sketches. A far more elaborate server sketch will be needed for a full-fledged over-the-air updatable WSN.

Good, but not perfect… I was curious about the actual current consumption of this latest version of the homeGraph.ino sketch. So here’s another timing snapshot:

This has trace persistence turned on, collecting multiple samples in one screen shot.

As you can see, the radio is turned on for ≈ 75 ms in this case (different JeeNode, slightly different watchdog timer period). Then after 30..35 ms the sketch goes to sleep again.

The second case is when nothing has been received after 150 ms: the “radioTimer” fires, and in this case the sketch turns off the radio as well, assuming there will be no packet coming in anymore.

But the interesting bit starts on the next packet coming in after one has been omitted. The logic in the code is such that the radio will be left on indefinitely in this case. As you can see, every late arrival then comes 100 ms later than expected: same radio-off + power down signature on the right-hand side of the screen capture, just much later.

Here is the code again:

And the main loop which calls this code:

That “if (!timingWasGood) …” code is probably wrong.

But it’s not so easy to fix, I’m afraid. Because the timing gets even more tricky if two or more packets are omitted, not just occasionally a single one. Sometimes, the radio doesn’t get switched off quickly – as you can see in this 30-minute capture at the end of the day, when power levels are starting to drop:

Maybe I need to try out a software PLL implementation after all. The other reason is that there seems to be a fair amount of variation in watchdog clock timing between different ATmega’s. For one of them I had to use 2800 as the recvOffTime value, while another worked a lot better with 3000. A self-locking algorithm would solve this, and would let us predict the next packet time with even more accuracy.

But this is still good enough for now. Normally the radio will only be on for about 100 ms every 3s, so that’s a 30-fold power reduction, or about 0.6 mA on average for the radio.

This still ought to run for a month or two on a 3x AA battery pack, at least for daytime use. Which brings up another idea: turn the whole thing off when it’s dark – the display is not readable without light anyway.

This concludes my little gadget to track home energy use and solar energy production:

The graph shows production vs total consumption in 15-minute intervals for the last 5 hours. A summary of this information is shown at the bottom: “+” is total solar production in the last 5 hours, “-” is total energy consumption in that same period.

The actual consumption values are not yet correct because the home energy pulse counter is wired incorrectly, but they will be once that is fixed. The total home consumption is currently 1327 – 1221 + 7 = 113 W, since the home counter is currently driven in reverse.

The graph is auto-scaling and I’m storing these values in EEPROM whenever it scrolls, so that a power-down or reset from say a battery change will only lose the information accumulated in the last 15 minutes.

Power consumption is “fairly low”, because the backlight has been switched off and the radio is turned off between predicted reception times. The mechanism works quite well when there is a packet every 3 or 6 seconds, but with longer intervals (i.e. at night), the sketch still keeps the receiver on for too long.

A further refinement could be to reduce the scan cycle when there are almost no new pulses coming in – and then picking up again when the rate increases. Trouble is that it’s impossible to predict accurately when packets will be skipped, so the risk is that the sketch quickly goes completely out of sync when packet rates do drop. The PLL approach would be a better option, no doubt.

But all in all, I’m quite happy with the result. The display is reasonably easy to read in daylight, even without the backlight. I’ll do a battery-lifetime test with a fresh new battery once the pulse counter wiring has been fixed.

The code has become a bit long to be included in this post – it’s available as homeGraph on GitHub, as part of the GLCDlib project. I’m still amazed by how much a little 200-line program can do in just 11 KB of flash memory, and how it all ends up as a neat custom gadget. Uniquely tailored for JeeLabs, but it’s all open source and easy to adapt by anyone.

Yesterday’s optimisation was able to achieve an estimated 60-fold reduction in power consumption, by turning on the radio only when a packet is predicted to arrive, and going into a deep sleep mode for the rest of the time.

That does work, but the problem is that the sending node isn’t sending out a packet every 3 seconds. Sometimes no packet is sent because there is nothing to report, and sometimes a packet might simply be lost.

Yesterday’s code doesn’t deal with these cases. It merely waits after proper reception, because in that case the time to the next packet is known and predictable (to a certain degree of accuracy). If a packet is not coming in at the expected time, the sketch will simply continue to wait indefinitely with the radio turned on.

I tried to somewhat improve on this, by limiting the time to wait when a packet does not arrive, turning the radio off for a bit before trying again. But that turns out to be rather complex – how do you deal with uncertainty when the last known reception is longer and longer ago? We can still assume that the sender is staying in its 3s cycle, but: 1) the clocks are not very accurate on either end (especially the watchdog timer), and 2) we need a way to resync when all proper synchronisation is lost.

Here’s my modified loop() code:

And here’s the slightly extended snoozeJustEnough() logic:

Yes, it does work, some of the time – as you can see in this slow scope capture:

Four of those blips are working properly, i.e. immediate packet reception, drop in current draw, and then display update. Two of them are missed packets (probably none were sent), which are again handled properly by turning the receiver off after 150 ms, instead of waiting indefinitely.

But once that first packet reception fails, the next try will be slightly too late, and then the receiver just stays on for the full 3 seconds.

And that’s actually just a “good” example I picked from several runs – a lot of the time, the receiver just stays on for over a dozen seconds. This algorithm clearly isn’t sophisticated enough to deal with periodic packets when some of ‘em are omitted from the data stream.

I can think of a couple of solutions. One would be to always send out a packet, even just a brief empty one, to keep the receiver happy (except when there is packet loss). But that seems a bit wasteful of RF bandwidth.

The other approach would be to implement a Phase Locked Loop in software, so that the receiver tracks the exact packet frequency a lot more accurately over time, and then gradually widens the receive window when packets are not coming in. This would also deal with gradual clock variations on either end, and if dimensioned properly would end up with a large window to re-gain “phase lock” when needed.

But that’s a bit beyond the scope of this whole experiment. The point of this demonstration was to show that with extra smarts in the receiving node, it is indeed possible to achieve low-power periodic data reception without having to keep the receiver active all the time.

Update – With the following timing tweak, everything works out a lot better already:

The receiver is now staying in sync surprisingly well – it’s time to declare victory!

The previous post showed that most of the power consumption of the homeGraph.ino sketch was due to the RFM12B receiver being on all the time. This is a nasty issue which comes back all the time with Wireless Sensor Networks: for ultra-low power scenarios, it’s simply impossible to keep the radio on at all times.

So how can we pick up readings from the new homePower.ino sketch, used to report energy consumption and production of the new solar panels?

The trick is timing: the homePower.ino sketch was written in such a way that it only sends out a packet every 3 seconds. Not always, only when there is something to report, but always exactly on the 3 second mark.

That makes it possible to predict when the next packet can be expected, because once we do receive a packet, we know that we don’t have to expect one for another 3 seconds. Aha, so we could turn the receiver off for a while!

It’s very easy to try this out. First, the code which goes to sleep:

The usual stuff really – all the hard work is done by the JeeLib library code.

And here’s the new loop() code:

In other words: whenever a packet has been received, process it, then go to sleep for 2.8 seconds, then wake up and start the receiver again.

Here’s the resulting power consumption timing, as voltage drop over a series resistor:

I can’t say I fully understand what’s going on, but I think it’s waiting for the next packet for about 35 ms with the receiver enabled (drawing ≈ 18 mA), and then another 35 ms is spent generating the graph and sending the new image to the Graphics Board over software SPI (drawing 7 mA, i.e. just the ATmega).

Then the µC goes to sleep, leaving just the display showing the data.

So we’re drawing about 18 mA for say 50 ms every 3000 ms – this translates to a 60-fold reduction in average current consumption, or about 0.3 mA (plus the baseline current consumption). Not bad!

Unfortunately, real-world use isn’t working out quite as planned… to be continued.

There’s no way the sketch will run for a decent amount of time on just a single AA battery.

Let’s take some measurements:

total power consumption on a 5V supply (via the USB BUB) is 20 mA

total power consumption on 3.3V is almost the same: 19 mA

the battery drain on the Eneloop at 1.28V is a whopping 95 mA

(the regulator on the AA board was selected for its low idle current, not max efficiency)

That last value translates to a run time of 20 hours on a fully charged battery. Fine for a demo, but definitely not very practical if this means we must replace batteries all the time.

Let’s try to reduce the power consumption of this thing – my favourite pastime.

The first obvious step is to turn off the Graphics Board backlight – a trivial change, since it can be done under software control. With the dark-on-light version of the GLCD this is certainly feasible, since that display is still quite readable in ambient light.

But the net effect with a 5V power supply is that we now draw 16.5 mA … not that much better!

The real power consumer is the RFM12B module, which is permanently on. Turning it off drops the power consumption to 6.0 mA (with the Graphics Board still displaying the last values). And putting the ATmega to sleep reduces this even further, down to 0.5 mA. Now we’re cookin’ – this would last some 6 months on 3 AA batteries.

Except that this variant of the “homeGraph” sketch is next to useless: it powers up, waits for a packet, updates the display, and then goes to sleep: forever… whoops!

With the new “homePower” setup now in place and working, it is time to say goodbye to a good companion – this ATmega168-based mousetrap which started it all, 4 years ago:

Note the dead spider :) – this thing has been soaked a few times, with water dripping from the kitchen on the next floor, yet all it took was a clean wipe, some time to dry, and it just resumed its duties over and over again:

Very few problems other than that corrosion (probably also inside the mini 3-pin jacks):

Less than a dozen resets / power cycles over all these years.

This is the predecessor of what eventually became the JeeNode, based on an RBBB by Modern Device. It works on 5V and uses resistor-based level converters for the RFM12B.

The gas measurements have been erratic for some time now, no doubt due to those bad contacts, so I’m going to look for another way to read those values.

And with it ends also the life of the node on the other side, a modified Arduino Mini plugged into the Mac Mini server using an FTDI cable:

Also based on perf board with Kynar wire-wrapping wire to connect it all together:

The code used between these two nodes was based on an early version of the RF12 driver protocol, and since it all worked perfectly, I never saw a need to change it. But I’m still going to miss these periodic packets in the logfiles:

Receiving the packets sent out yesterday is easy – in fact, since they are being sent out on the same netgroup as everything else here at JeeLabs, I don’t have to do anything. Part of this simplicity comes from the fact that the node is broadcasting its data to whoever wants to hear it. There is no need to define a destination in the homePower.ino sketch. Very similar to UDP on Ethernet, or the CAN bus, for that matter.

But incoming data like this is not very meaningful, really:

L 22:09:25.352 usb-A40117UK OK 9 2 0 69 235 0 0 0 0 103 0 97 18

What I have in mind is to completely redo the current system running here (currently still based on JeeMon) and switch to a design using ZeroMQ. But that’s still in the planning stages, so for now JeeMon is all I have.

To decode the above data, I wrote this little “homePower.tcl” driver:

It takes those incoming 12-byte packets, and converts them to three sets of results – each with the total pulse count so far (2000 pulses/KWh), and the last calculated instantaneous power consumption. Note also the “decompression” of the millisecond time differences, as applied on the sending side.

Calculation of the actual Watts being consumed (or produced) is trivial: there are 2000 pulses per KWh, so one pulse per half hour represents an average consumption (or production) of exactly one Watt.

To activate this driver I also had to add this line to “main.tcl”:

Driver register RF12-868.5.9 homePower

And sure enough, out come results like this:

This is just after a reset, at night with no solar power being generated. That’s 7 Watt consumed by the cooker (which is off, but still drawing some residual power for its display and control circuits), and 105 Watt consumed by the rest of the house.

Actually, you’re looking at the baseline power consumption here at JeeLabs. I did these measurements late at night with all the lights and everything else turned off (this was done by staring at these figures from a laptop on wireless, running off batteries). A total of 112 Watt, including about 24 Watt for the Wireless router plus the Mac Mini running the various JeeLabs web servers, both always on. Some additional power (10W perhaps?) is also drawn by the internet modem downstairs, so that leaves only some 80 Watt of undetermined “vampire power” drawn around the house. Not bad!

One of my goals for the next few months will be to better understand where that remaining power is going, and then try to reduce it even further – if possible. That 80 W baseline is 700 KWh per year after all, i.e. over 20% of the total annual consumption here.

Here are some more readings, taken the following morning with heavy overcast clouds:

This also illustrates why the wiring error is causing problems: the “pow3” value is now a surplus (counting down), but there’s no way to see that in the measurement data.

I’ve dropped the packet sending rate to at most once every 3 seconds, and am very happy with these results which give me a lot more detail and far more frequent insight into our power usage around here. Just need to wait for the electrician to come back and reroute counter 3 so it doesn’t include solar power production.

With pulses being detected, the last step in this power consumption sketch is to properly count the pulses, measure the time between ‘em, and send off the results over wireless.

There are a few details to take care of, such as not sending off too many packets, and sending out the information in such a way that occasional packet loss is harmless.

The way to do this is to track a few extra values in the sketch. Here are the variables used:

Some of these were already used yesterday. The new parts are the pulse counters, last-pulse millisecond values, and the payload buffer. For each of the three pulse counter sources, I’m going to send the current count and the time in milliseconds since the last pulse. This latter value is an excellent indication of instantaneous power consumption.

But I also want to keep the packet size down, so these values are sent as 16-bit unsigned integers. For count, this is not so important as it will be virtually impossible to miss over 65000 packets, so we can always resync – even with occasional packet loss.

For the pulse time differences, having millisecond resolution is great, but that limits the total to no more than about a minute between pulses. Not good enough in the case of solar power, for example, which might stop on very dark days.

The solution is to “compress” the data a bit: values up to one minute are sent as is, values up to about 80 minutes are sent in 1-second resolution, and everything above is sent as being “out of range”.

Here are the main parts of the sketch, the full “homePower.ino” sketch is now on GitHub:

Sample output, as logged by a process which always runs here at JeeLabs:

L 22:09:25.352 usb-A40117UK OK 9 2 0 69 235 0 0 0 0 103 0 97 18

Where “2 0 69 235” is the cooker, “0 0 0 0” is solar, and “103 0 97 18” is the rest.

Note that results are sent off no more than once a second, and the careful distinction between having data-to-send pending and actually getting it sent out only after that 1000 ms send timer expires.

The scanning and blinking code hasn’t changed. The off-by-one bug was in calling setblinks() with a value of 0 to 2, instead of 1 to 3, respectively.

That’s it. The recently installed three new pulse counters are now part of the JeeLabs home monitoring system. Well… as far as remote sensing and reporting goes. Processing this data will require some more work on the receiving end.

With yesterday’s solar setup operational, it’s now time to start collecting the data.

The pulse counter provides a phototransistor output which is specified as requiring 5..27V and drawing a current up to 27 mA, so my hunch is that it’s a phototransistor in series with an internal 1 kΩ resistor. To interface to the 3.3V I/O pins of a JeeNode, I used this circuit:

That way, if the circuit has that internal 1 kΩ resistor, the pin will go from 0 to 2.5V and act as a logic “1”. In case there is no internal resistor, the swing will be from 0 to 5V, but with the 10 kΩ resistor in series, this will still not harm the ATmega’s I/O pin (excess current will leak away through the internal ESD protection diode).

No need to measure anything, the above will work either way!

I considered building this project into a nice enclosure, but in the end I don’t really care – as long as it works reliably. As only 3 input pins are used, there’s a spare to also drive an extra LED. So here’s the result, built on a JeePlug board – using a 6-pin RJ12 socket:

The RJ12 socket pins are not on a 0.1″ grid, but they can be pushed in by slanting the socket slightly. The setup has been repeated three times, as you can see:

To avoid having to chase up and down the stairs while debugging too many things at once, I started off with this little sketch to test the basic pulse counter connections:

All the code does, is detect when a pulse starts and blink a LED one, two, or three times, depending on which pulse was detected. Note how the two MilliTimer objects make it easier to perform several periodic tasks independently in a single loop.

Tomorrow: logic to track pulse rates and counts, and sending results off into the air.

On a recent trip to Germany, we visited Uelzen, about 100 km north of Braunschweig. Its railway station was “upgraded” (pimped?) some 12 years ago by the Austrian artist Friedensreich Hundertwasser, resulting in something very Gaudi-like in style and appearance:

He was interested in organic shapes, creating round and uneven fairy-tale like interiors:

But the reason I’m mentioning all this, is that the station also includes this display panel:

The roof is covered with solar panels, and has been generating electricity for the community since 1997. Almost half a megawatt-hour produced so far, and still generating just under 8 kilowatts on a half-cloudy October day.

Way to go! That’s how to set a good example!

PS. Calculating back from the figures shown, this would appear to be just 42 m² of panels, which seems off by an order of magnitude. Oh, wait… that’s not accounting for the panels’ conversion efficiency – I’m guessing that to be somewhere in the 10% range for those panels.

The “benstream.lua” decoder I mentioned yesterday is based on coroutines. Since probably not everyone is familiar with them (they do not exist in languages such as C and C++), it seems worthwhile to go over this topic briefly.

Let’s start with this completely self-contained example, written in Lua:

There is some weird stuff in here, notably how the list start, dict start, and list/dict end are returned as “false”, “true”, and the empty table ({} in Lua), respectively. The only reason for this is that these three values are distinguishable from all the other possible return values, i.e. strings and numbers.

But the main point of this demo is to show what coroutines are and what they do. If you’re not familiar with them, they do take getting used to… but as you’ll see, it’s no big deal.

First, think about the task: we’re feeding characters into a function, and expect it to track where it is and return complete “tokens” once enough data has been fed through it. If it’s not ready, the “cwrap” function will return nil.

The trouble is that such code cannot be in control: it can’t “pull” more characters from the input when it needs them. Instead, it has to wait until it gets called again, somehow figure out where it was the last time, and then deal with that next character. For an input sequence such as “5:abcde”, we need to track being in the leading count, then skip the colon, then track the actual data bytes as they come in. In the C code I added to the EmBencode library recently, I had to implement a Finite State Machine to keep track of things. It’s not that hard, but it feels backwards. It’s like constantly being called away during a meeting, and having to pick up the discussion again each time you come back :)

Now look at the above code. It performs the same task as the C code, but it’s written as if it was in control! – i.e. we’re calling nextChar() to get the next character, looping and waiting as needed, and yet the coroutine is not actually pulling any data. As you can see, the test code at the end is feeding characters one by one – the normal way to “push” data, not “pull” it from the parsing loop. How is this possible?

The magic happens in nextChar(), which calls coroutine.yield() with an argument.

Yield does two things:

it saves the complete state of execution (i.e. call stack)

it causes the coroutine to return the argument given to it

It’s like popping the call stack, with one essential difference: we’re not throwing the stack contents away, we’re just moving it aside.

Calling a coroutine does almost exactly the opposite:

it restores the saved state of execution

it causes coroutine.yield() to return with the argument of the call

These are not full-blown continuations, as in Lisp, but sort of halfway there. There is an asymmetry in that there is a point in the code which creates and starts a coroutine, but from then on, these almost act like they are calling each other!

And that’s exactly what turns a “pull” scanner into a “push” scanner: the code thinks it is calling nextChar() to immediately obtain the next input character, but the system sneakily puts the code to sleep, and resumes the original caller. When the original caller is ready to push another character, the system again sneakily changes the perspective, resuming the scanner with the new data, as if it had never stopped.

This is in fact a pretty efficient way to perform multiple tasks. It’s not exactly multi-tasking, because the caller and the coroutine need to alternate calls to each other in lock-step for all this trickery to work, but the effect is almost the same.

The only confusing bit perhaps, is that argument to nextChar() – what does it do?

Well, this is the way to communicate results back to the caller. Every time the scanner calls nextChar(), it supplies it with the last token it found (or nil). The conversation goes like this: “I’m done, I’ve got this result for you, now please give me the next character”. If you think of “coroutine.yield” as sort of a synonym for “return”, then it probably looks a lot more familiar already – the difference being that we’re not returning from the caller, but from the entire coroutine context.

The beauty of coroutines is that you can hide the magic so that most of the code never knows it is running as part of a coroutine. It just does its thing, and “asks” for more by calling a function such as nextChar(), expecting to get a result as soon as that returns.

Which it does, but only after having been put under narcosis for a while. Coroutines are a very neat trick, which can simplify the code in languages such as Lua, Python, and Tcl.

I’m using a couple of Lua utility scripts – haven’t published them yet, but at least you’ll get an idea of how the decoding process can be implemented:

dbg.lua – this is the vardump script, extended to show binary data in hex format

benstream.lua – a little script I wrote which does “push-parsing” of Bencoded data

Note that this code is far too simplistic for real-world use. The most glaring limitation is that it is blocking, i.e. we wait for each next character from the serial port, while being completely unresponsive to anything else.

Taking things further will require going into processes, threads, events, asynchronous I/O, polling, or some mix thereof – which will have to wait for now. To be honest, I’ve become a bit lazy because the Tcl language solves all that out of the box, but hey… ya’ can’t have everything!

The RF12demo sketch was originally intended to be just that: a demo, pre-flashed on all JeeNodes to provide an easy way to try out wireless communication. That’s how it all started out over 3 years ago.

But that’s not where things ended. I’ve been using RF12demo as main sketch for all “central receive nodes” I’ve been working with here. It has a simple command-line parser to configure the RF12 driver, there’s a way to send out packets, and it reports all incoming packets – so basically it does everything needed:

This works fine, but now I’d like to explore a real “over-the-wire” protocol, using the new EmBencode library. The idea is to send “messages” over the serial line in both directions, with named “commands” and “events” going to and from the attached JeeNode or JeeLink. It won’t be convenient for manual use, but should simplify things when the host side is a computer running some software “driver” for this setup.

Here’s the first version of a new rf12cmd sketch, which reports all incoming packets:

Couple of observations about this sketch:

we can no longer send a plain text “[rf12cmd]” greeting – that too is now sent as a packet

the greeting includes the sketch name and version, but also the decoder’s packet buffer size, so that the other side knows the maximum packet size it may use

invalid packets are discarded, we’re using a fixed frequency band and group for now

command/event names are short – let’s not waste bandwidth or string memory here

I’ve bumped the serial line speed to 115200 baud to speed up data transfers a bit

It’s been over two years since I started looking for ways to collect solar energy on the roof, here at JeeLabs.

And then reality set in… the roofs of the houses in this street are covered with shingles from the late 70’s, and as we found out they contain asbestos – yuck! Then, earlier this year, we were told that nothing could be done on the roof unless those shingles were first officially removed by a specially-equipped third party:

So now at last, those “bad” shingles have been replaced and panels have been mounted:

That little bulge is a “Solar tube” which brings extra sunlight into the stairway.

That’s 2 x 5 panels on this side, and 3 x 4 on the other roof:

(the 10 panels you saw in the first picture are mounted on the roof to the far left)

For a total of 22 x 240 = 5280 Wattpeak. Not that they will ever all generate full power at the same time, because these two roofs are facing west and east, respectively. But hey, together they should still give us more than our annual 3100 kWh consumption. JeeLabs is about to become a net electricity producer!

Just one more step: wait for the SMA 5000 inverter to arrive and get it hooked up…

Crazy? Maybe. But there’s still some value in seeing so many products at one glance:

What would be even nicer, IMO, is a paper version with QR codes so you can instantly tie what you see to a web page, with more information, full quantity pricing details, and engineering specs / datasheets. Or at least a direct link between each full page and the web – that’s just 4 digits to enter, after all.

It feels a bit old-fashioned to leaf through such a catalog, but hey… when you don’t know exactly what you need (or more likely: what range of solutions is available), then that can still beat every parametric search out there.

RS Components have an excellent selection of electronic and mechanical parts, BTW.

One more experiment then. Let’s switch a little relay (the one used in the Relay Plug) at 10 Hz, and give it a serious beating. I’m going to put my dual power supply to work and put 5V @ 3A straight across the relay’s contacts. In other words, let’s see what happens when we switch the full 3A across those tiny relay contacts.

At first sight, it all looks reasonably ok:

That’s 5V across the relay while it’s open, and about 0.3V when it closes. So these contacts seem to have a resistance of about 0.1 Ω. The overshoot when the relay opens up again is probably in the power supply, as it recovers from just having been shorted out at 3A. Note that any inductance in the wiring will have this same effect. Inductance is a bit like a flywheel – it wants to keep going after such massive current (i.e. magnetic field) changes.

It’s quite impressive to see how this little relay rattles away at ≈ 10 Hz, working just fine.

But those transitions look quite different when we zoom in:

You’re seeing a mix of arcing and contact bounce, as all mechanical switches do.

Note that the switch hasn’t quite closed yet – there is still about 1.5V between the contact points, 30 µs after switching – due to contact bounce. This drops to 0.3V after some 400 µs, at which point the contacts are really firmly closed.

Here’s a much better example, this time using a 10 Ω resistor in series so the power supply just delivers 5V @ 0.5A without going into current-limiting mode:

Opening is again not quite what it seems, once you zoom in:

Two things happening here: the mechanical release of the switch (less resistance as the pressure on the contact decreases, followed by some arcing), and then a fairly linear ramp and some overshoot as the power supply recovers its 5V setting after having been shorted out. Think of a stretched rubber band, and how it “overreacts” when released.

So as you can see, a relay does some nasty things while switching on and off!

While on the subject of optocouplers, there’s another type besides “analog” ones and “digital” ones (which include a comparator), and that’s the opto-relay. Again with several kilovolts of isolation.

The Avago ASSR-1611 is an interesting one, for example, because it uses MOSFETs:

Basically, it lets you switch up to 60V at a few amps. To see how it performs, while I still have that linear ramp circuit up and running, I hooked it up – as a big mess-o-wires:

It’s getting pretty crowded in there. The ASSR-1611 is on the left. Here’s its schematic:

The interesting bit is that there are diodes in there, so it can deal with alternating current when hooked up differently, i.e. by using pins 4 and 6 instead of 5 and 6.

First thing to notice, is that this thing behaves in a strange way when switched at 1000 Hz:

It “sort of” triggers … slowly (keep in mind that turning on translates to shorting the output to ground, as before). And then it decides to turn off again very quickly. For some reason this repeats about 10 times per second.

To slow the triangle wave rate way down, I used a 10 µF capacitor instead of 0.1 µF:

Aha! Much better. At about 0.7 mA (purple voltage over 1 kΩ = 0.7V), this solid state relay switches on, and once the current drops back to almost 0, it switches off. Note how these readings match the specs nicely: turning on, i.e. the blue line dropping to 0, takes a few ms, whereas turning off is virtually instant.

At a few Euro each, these chips are not really cheaper than mechanical relays, but when you only need to switch a few dozen Volt at a few Amps, then this solution still has the benefit that it switches far more cleanly – with no arcing or mechanical wear. And it’s totally quiet, of course…

The past few days were about generating a linear ramp, in the form of a triangular wave, and as you saw, it was quite easy to generate – despite the lack of a function generator.

The result was a voltage alternating between about 0.6V and 3.0V in a linear fashion. Here’s why…

I want to see how the MCT62 optocoupler passes a signal through it. More specifically, how a linearly increasing voltage would come out. Let’s look at that chip schematic again:

So the idea is to apply that linear ramp through a current-limiting resistor into the opto’s LED. Then we put the photo-transistor in a simple 5V circuit, with again a current limiting resistor between collector and 5V – like this:

From left to right:

apply a triangle wave to the LED, which varies from 0.6 to 3.0V

there’s a 1 kΩ resistor in series, so the maximum current will stay well under 3 mA

the phototransistor is hooked up as a normal DC amplifier

there’s another 1 kΩ pullup, so this too cannot draw more than 5 mA current

Prediction:

when the LED is off, the output will stay at 5V, i.e. transistor stays off

until the input rises above the 1.2V threshold of the (IR) LED, not much happens

So if that behavior is linear, then the output voltage should drop linearly. Let’s have a look:

the YELLOW line is the triangle wave, as generated earlier

the PURPLE line is the voltage over the leftmost resistor

the BLUE line is the voltage on the transistor’s collector output

the RED line is the derivative of the BLUE line

the zero origin for all these lines in the image is at two divisions from the bottom

First of all, the purple line indeed rises slowly once the input gets above 1.0V, and it stays roughly 1.2V under the input signal (yellow line).

The blue line is the interesting one: it takes a bit of input current (i.e. LED light) for the transistor to start conducting, but once it does, the output voltage does indeed drop. Once we’re above 2.0V, the blue line becomes quite linear, as indicated by the fact that the red line is fairly flat between horizontal divisions 5 and 7.

So in this range (and probably quite a bit above), we have a linear transfer from input current to output current. Or voltage … it’s all the same with resistors.

In terms of current, we can use the purple line: it’s flat with a diode current between 0.7 and 1.7 mA (and probably beyond).

The output voltage only drops to just over 2V, so the phototransistor is still far from reaching saturation (“conducting all out”).

So what’s the point of all this, eh?

Well, one thing this illustrates is that you can get a pretty clean signal across such an optocoupler, as long as you stay in the linear range of it all. There is no real speed limitation, so even audio signals could be sent across reasonably well – without making any electrical connection, just a little light beam!

It’s not hard to imagine how this could be done with discrete components even, sending the light to a glass fiber over a longer distance.

You can call it wireless signal transmission, albeit of a different type: optical!

A year has passed, and it’s time for a get-together again, in Leuven, Belgium this time:

What is it?

Elektro:camp is a place where geeks meet to talk about smart metering, smart homes, smart grids, and smart ideas. Everything in and around the house related to electricity and electronics, really. Oh, and software.

This is a “barcamp” so there’s no fixed agenda: we make it up as we go, and we all present our ideas and discuss the ideas of others. Trust me, if things go as they did last time, then we’ll be scrambling to find enough time to go through everything that pops up.

Wanna have lots of fun with a bunch of geeks? Wanna show what you’re working on? Wanna present some new ideas? Wanna meet up in person? Be there!

Except that I don’t want a voltage between 1.25V and 3.75V but slightly lower. One way to accomplish this is to lower the reference “1/2 Vcc” voltage used by the comparator and integrator circuits. So I added a 22 kΩ resistor in parallel on one half of the voltage divider:

It’s now slightly asymmetric (we’re discharging faster than we’re charging), but more importantly, the signal now runs from about 0.6V to 3.0V, which is more in sync with what I need (more on that in an upcoming post).

Notice that on these screen shots, the waveforms look very nice and straight, although it’s hard to see just how linear those ramps really are.

This is where a scope with good math functions comes in. If you recall from mathematics, the derivative of a straight line is a constant. Or to put it differently: the straighter the line is, the closer its derivative should be to a constant value. Positive for upward slopes, and negative for downward slopes. Let’s zoom in a bit:

(the red line’s origin is centered vertically, the yellow line is at 1 division from the bottom)

That red line is the scope’s calculated derivative of the yellow line (it’s really just a matter of calculating differences between successive points). As you can see, the upward slope is pretty straight from 1.3V to 2.9V. The downward slope less so, IOW the capacitor discharge is not quite as linear. The signal was averaged over 128 samples in this last screen.

Excellent. I now have the signal I need to perform my experiment. Stay tuned.

In contrast to yesterday’s setup, there is no negative feedback here, but a resistor between “OUT” and “+”. So in this case, when the output rises, it’ll cause “+” to rise even more, instead of bringing it closer to the “-” input.

This circuit doesn’t work towards an equilibrium, it’s unstable. Let’s start with the output being VCC, i.e. +5V. What will the input need to be to make it change?

the desired change can only happen, when the “+” input drops below 2.5V

with OUT at +5, that’s 2.5V over 20 kΩ, i.e. 125 µA

to pull “+” under 2.5V, we need to draw at least 125 µA out through the 10 kΩ resistor

that’s 1.25V under 2.5V, i.e. with the IN level under 1.25V

So when IN drops under 1.25V, the op-amp has a change of heart so to speak, and starts bringing OUT down in an attempt to bring “+” and “-” back to the same level. The silly thing is that it can’t, because lowering OUT is never going to raise the “+” back up to 2.5V (still with me?). So the output just keeps dropping all the way to its minimum, which will be more or less equal to 0V.

Let’s review what happened: we started dropping the input level, and once it reached 1.25V, the output violently flipped from +5V to 0V.

Now the situation is reversed: how far do we need to raise the input to make the output go up again? Well, that’ll be 3.75V – using the same reasoning as before, but now based on 2.5V (the “-” pin) + 1.25V (the voltage over the 10 kΩ resistor).

So what this circuit does is flip between 0 and 5V, at trigger points 1.25V and 3.75V.

Now the magic part: we tie the input of this circuit to the output of yesterday’s circuit, and vice versa, creating a control loop. If you look back at yesterday’s scope image, you’ll see that the triangular wave flips at… 1.23V and 3.72V. What a coincidence, eh? Nah… it’s all by design. The comparator drives the integrator by feeding it 0 and 5V levels, and it switches when the integrator output reaches 1.25V and 3.75V. Since the capacitor requires a little time to charge, this ends up being a nicely controlled oscillator. Perfect!

For an upcoming experiment, I’m going to need a slowly rising voltage, and with my signal generator currently out for a check-up, it’s time to dive into some electronic circuitry again. Let’s try to generate a sawtooth or triangular wave signal with a few basic components. After my search for a simple sine wave generator, I’m happy to report that generating “ramp voltages” is actually a lot simpler.

The reason for this is that all you need to generate a linear ramp is a constant current into a capacitor. This automatically produces a linear voltage ramp. The electrical notation for such a circuit is:

The voltage over that capacitor will rise linearly over time. It looks so simple! (well, apart from figuring out how to build a constant current source, perhaps)

But that’s only half the story. What to do once the capacitor has been fully charged up? We need to discharge it again, clearly. One way to do this, is to put a transistor or MOSFET across the cap and periodically make it turn on (briefly!) to discharge the circuit again. Ok, so now we need a periodic pulse as well. Hmmm…

There is another solution: op-amps. The op-amp is a truly amazing little building block. What we need here, is to use two op-amps in different configurations: one as an integrator and one as a comparator.

Let’s start with the integrator, because that explains how we can get a linearly varying voltage out of the circuit:

An op-amp has two properties, both crucial for explaining this little 3-component circuit:

both inputs are very high resistance, so virtually no current flows in or out

the output is constantly adjusted to try and keep both inputs at the same voltage

That second property can also be described as: when “+” is higher than “-“, the output will go up (towards Vcc, the supply voltage), when it’s lower, the output will go towards 0 (the ground voltage in a single-supply setup).

Here we’re tying “+” to half Vcc, in this case 2.5V, so you can think of the op-amp as trying to do whatever it can to keep the “-” pin at 2.5V as well.

Let’s start with everything in perfect balance, then “+”, “-“, and “OUT” will all be at +2.5V, and the capacitor will have no charge. Now let’s take the input to +5V:

a constant current starts flowing through the resistor, since the other side is held at +2.5V

this current drives the “-” pin up, so the output will go down

how far down? well as much as needed to cancel out that incoming current

IOW, the same current is going to go into the capacitor, which starts to charge up

as the charge builds up, less current starts to flow into the capacitor

“no way” says the op-amp, and so it pulls its output lower

so a constant current flows into the cap, and the output drops lower and lower

at some point near 0V the op-amp reaches its limit, and the mechanism breaks down

To give a water analogy: think of pouring water into a glass at a constant rate while trying to keep the surface of the water the same. You have to gradually lower the whole glass to make this work. As you do, the glass (cap) will contain more and more water (voltage). You’ve integrated (collected) the water flow into the glass!

If we keep the input voltage high, nothing more will happen: the cap will end up being fully charged, and the op-amp can no longer do anything to keep the “-” input from rising all the way to the input voltage.

When we now drop the input voltage to 0V, the reverse happens. Current flows through the resistor in the other direction, and the cap starts discharging. Again, the op-amp will do whatever it can to maintain that “-” at 1/2 Vcc. It does this by raising its output pin causing the capacitor to discharge with that same constant current rate. And sure enough, the voltage over the capacitor drops linearly. Until we hit the limits, and the process stops.

Here’s the effect in action, as seen on an oscilloscope (R = 4.7 kΩ, C = 0.1 µF, and the op-amp is an OPA2340):

The blue line is the input signal, the yellow line is the triangular wave output. Neat, eh?

Tomorrow, I’ll add an op-amp as comparator to make this circuit oscillate all by itself.

It looks like Mr. Murphy has found some time to mess with things again…

The Optocoupler Plug is a little board to let you isolate two I/O pins. When you drive one end with a small voltage to light up a little LED inside, it shines that light onto a sensitive photo-transistor which then starts conducting. It’s effectively a tiny switch, driven by light.

The nice thing about light is that it lets you avoid an electrical connection. So this thing allows you to detect a (small) input voltage without actually making a connection to it. These units support over a thousand volt of isolation – perfect when messing with AC mains coupled circuits, for example.

But the Optocoupler plug is actually two plugs in one, because you can also use it as an output by using it in reverse: then, the two I/O pins on a port can be used as output to turn on the LEDs, and the phototransistors can then control some output circuit without having to actually connect to it.

Here’s the “dual-mode” configuration of the Optocoupler Plug:

In one mode, it can be used as input (with current limiting resistors on the right), in the other it’s an output (current-limiting resistors on the left, and solder jumpers on the right).

The unit I picked for this board is an MCT62:

Then we recently switched over to a HCPL-2631, without thinking much about it:

Whoops! Different pinout, but also entirely different beast: the MCT62 contains a pair of independent simple “analog” type optocouplers. The HCPL-2631 on the other hand is a “digital” model with some built-in amplification. Which means that this thing needs a power supply, and to stay within the 8-DIP pinout, this can only be realised by tying a common “ground” together and re-using the other pin as supply voltage.

There’s a lot more to describe about this apparently simple device, but for now all I can say is “we messed up!”. Fortunately, only four people have been affected by this so far, and we’ve contacted each one of them to resolve the problem and send a replacement out. To each of you, my apologies for the confusion and for wasting your time if you’ve been trying to get those faulty Optocoupler Plug kits to work.

With thanks to Jupe Software for reporting the issue, and saving others more grief!

These past few days, I’ve explored some of the ways we could use Bencode as data format “over the wire” in both very limited embedded µC contexts and in scripting languages.

Calling it “over the wire” is a bit more down-to-earth than calling Bencode a data serialisation format. Same thing really: a way to transform a possibly complex nested (but acyclic) data structure into a sequence of bytes which can be sent (or stored!) and then decoded at a later date. The world is full of these things, because where there is communication, there is a need to get stuff across, and well… getting stuff across in the world of computers tends to happen byte-by-byte (or octet-by-octet, if you prefer).

When you transfer things as bytes, you have to delimit the different pieces somehow. The receiver needs to know where one “serialised” piece of information ends and the next starts.

There are three ways to send multi-byte chunks and keep track of those boundaries:

send a count, send the data, rinse and repeat

send the bytes, then add a special byte marker at the end

send the bytes and use some “out-of-band” mechanism to signal the other side

Each of them has major implications and trade-offs for how a transmission works. With counts, if there is any sort of error, we’re hosed – because we lose sync, with no guaranteed way to ever recover from it.

With the second approach, we need to reserve some character code as end marker. That means it can’t appear inside the data. So then the world came up with escape sequences to work around this limitation. That’s why to enter a quote inside a string in C, you have to use a backslash: "this is a quoted \" inside a string" – and then you lose the backslash. It’s all solvable, of course… but messy.

The third approach uses a different trick: we send whatever we like, and then we use a separate means of communication to signal the end or some other state change. We could use two separate communication lines for example, sending data over one and control information over the other. Or close the socket when done, as with TCP/IP.

If you don’t get this stuff right, you can get into a lot of trouble. Like when in the 60’s, telephone companies used “in-band” tones on a telephone line to pass along routing or even billing information. Some clever guys got pretty famous for that – simply inserting a couple of tones into the conversation!

Closer to home, as noted recently – even the Arduino bootloader got bitten by this.

So how about Bencode, eh?

Well, I think it hits the sweet spot in tradeoffs. It’s more or less based on the second mechanism, using a few delimiters and special characters to signal the start and end of various types of data, while switching to a byte-counted prefix for the things that matter: strings with arbitrary content (hence including any bit pattern). And it sure helps that we often tend to know the sizes of our strings up front.

With Bencode, you don’t have to first build up the entire message in memory (or generate it twice) to find out how many bytes will be sent – as required if we had to use a size prefix. Yet the receiver also can prepare for all the bigger memory requirements, because strings are still prefixed with the number of bytes to come.

Also, having an 8-bit clean data path really offers a lot of convenience. Because any set of bytes can be pushed through without any processing. Like 32-bit or 64-bit floats, binaries, ZIP’s, MP3’s, video files – anything.

Another pretty clever little design choice is that neither string lengths nor signed integers are limited in size or magnitude in this protocol. They both use the natural decimal notation we all use every day. A bigger number is simply a matter of sending more digits. And if you want to send data in multiple pieces: send them as a list.

Lastly, this format has the property that if all you send is numerical and plain ASCII data, then the encoded string will also only consist of plain text. No binary codes or delimiters in sight, not even for the string sizes. That can be a big help when trying to debug things.

Yep – an elegant set of compromises and design choices indeed, this “Bencode” thing!

Note: for another particularly easy to read Python decoder, see Mathieu Weber‘s version.

Tcl

Tcl’s Bencode implementation by Andreas Kupries is called Bee and is part of Tcllib.

Tcllib is a Debian package, so it can be installed using “sudo apt-get install tcllib”.

Ok, so installation is trivial, but here we run into an important difference: Tcl’s data structures are not “intrinsically typed”. The type (and performance) depends on how you use the data, following Tcl’s “everything is a string” mantra.

Let’s start with decoding instead, because that’s very similar to the previous examples:

You can see the pieces being separated, and all the different parts of the message being decoded properly. One convenient feature is that the asString() and asNumber() calls can be mixed at will. So strings that come in but are actually numeric can be decoded as numbers, and numbers can be extracted as strings. For binary data, you can get at the exact string length, even when there are 0-bytes in the data.

The library takes care of all string length manipulations, zero-byte termination (which is not in the encoded data!), and buffer management. The incoming data is in fact not copied literally to the buffer, but stored in a convenient way for subsequent use. This is done in such a way that the amount of buffer space needed never exceeds the size of the incoming data.

The general usage guideline for this decoder is:

while data comes in, pass it to the decoder

when the decoder says it’s ready, switch to getting the data out:

call nextToken() to find out the type of the next data item

if it’s a string or a number, pull out that value as needed

the last token in the buffer will always be T_END

reset the decoder to prepare for the next incoming data

Note that dict and list nesting is decoded, but there is no processing of these lists – you merely get told where a dict or list starts, and where it ends, and this can happen in a nested fashion.

Is it primitive? Yes, quite… but it works!

But the RAM overhead is just the buffer and a few extra bytes (8 right now). And the code is also no more than 2..3 KB. So this should still fit fairly comfortably in an 8-bit ATmega (or perhaps even ATtiny).

Note that the maximum buffer size (hence packet size) is about 250 bytes. But that’s just this implementation – Bencoded data has no limitations on string length, or even numeric accuracy. That’s right: none, you could use Bencode for terabytes if you wanted to.

PS – It’s worth pointing out that this decoder is driven as a push process, but that actual extraction of the different items is a pull mechanism: once a packet has been received, you can call nextToken() as needed without blocking. Hence the for loop.

And those last two (Malta & Cyprus) bring the total to 50 countries where JeeLabs Shop has delivered products. Woohoo!
Thank you John & Andreas for reaching that milestone – welcome to the expanding community of JeeLabs experimenters!

(and thank you Martyn, for spotting this factoid)

Now that we’re on this topic anyway: the shop has just been upgraded to accommodate the new 21% VAT tax requirements in the Netherlands, effective as of October 1st. Death and taxes, as they say… the two inevitabilities in life!

After two years of total stability, we’ve had to raise the price on some items, to match costs and keep those ever-changing US$ exchange rates in check, but the good news is that all the major items such as JeeNodes, RBBB’s, and USB BUBs remain at the same level as before. This is where (modest) economies of scale start to kick in, so we’re not changing one bit of what has been working out so well.

Another point I’d like to single out here, is that Martyn and Rohan Judd have been doing a truly phenomenal job of steering a major shop transition over the summer into a pretty smoothly-running operation again – as it’s now all done from the UK. With a tip-of-the-hat to you fella’s, for making this happen in the best imaginable way – cheers!

There is more change cooking, but for the moment the only change you will see is that the shop’s email is now coming from order_assistance at jeelabs dot org. This address is from now on also the best place to get in touch, to make sure no message goes by unseen. Whether it is an inquiry about packages, questions about delivery times or problems with the supplied goods, we’ll take care of it. Yes, including me – I’m not going anywhere :-)

Oh, and then there’s that second disruption I mentioned a few days ago: the Café + Wiki at http://jeelabs.net/ have now been replaced by the new Redmine 2 system. The original wiki and doc pages can still be reached at “oldred.jeelabs.net”. For now.

Note that the forum is not affected. It’ll be switched over to Redmine later this year.

After having constructed some Bencode-formatted data, let’s try and read this stuff back.

This is considerably more involved. We’re not just collecting incoming bytes, we’re trying to make sense of them, figuring out where the different strings, ints, lists, and dicts begin and end, and – not obvious either perhaps – we’re trying to manage the memory involved in a maximally frugal way. RAM is a scarce resource on a little embedded µC, so pointers and malloc’s have to be avoided where possible.

Let’s start with the essential parsing task of figuring out when an incoming byte stream contains a complete Bencoded data structure. IOW, I want to keep on reading bytes until I have a complete “packet”.

All of the heavy-lifting is done in the EmBencode library. What we’re doing here is giving incoming bytes to the decoder, until it tells us that we have a complete packet. Here’s the output, using the test data created yesterday:

Looks obvious, but this thing has fully “understood” the incoming bytes to the point that it knows when the end of each chunk has been reached. Note that in the case of a nested list, all the nesting is included as part of one chunk.

There’s more to this than meets the eye.

First of all, this is a “push” scanner (aka “lexer”). Think about how you’d decode such a data stream. By far the simplest would be something like this (in pseudo code):

look at the next character

if it’s a digit:

get the length, the “:”, and then grab the actual string data

if it’s an “i”:

convert the following chars to a signed long and drop the final “e”

if it’s a “d” or an “l”:

recursively grab entries, until we reach an “e” at the end of this dict / list

But that assumes you’re in control – you’re asking for more data, IOW you’re waiting until that data comes in.

What we want though, is to treat this processing as a background task. We don’t know when data comes in and maybe we’d like to do other stuff or go into low-power mode in the meantime.
So instead of the scanner “pulling” in data, we need to turn its processing loop inside out, giving it data when there is some, and having it tell us when it’s done.

It’s written in (dense) C++ and implemented as a finite state machine (FSM). This means that we switch between a bunch of “states” as scanning progresses. That state is saved between calls, so that we’ll know what to do with the next character when it comes in.

There’s a fair amount of logic in the above code, but it’s a useful technique to have in your programming toolset, so I’d recommend going through this if FSM’s are new to you. It’s mostly C really, if you keep in mind that all the variables not declared locally in this code are part of a C++ object and will be saved from call to call. The “EMB_*” names are just arbitrary (and unique) constants. See if you can follow how this code flows as each character comes in.

The above code needs 7 bytes of RAM, plus the buffer used to store the incoming bytes.

There are tools such as re2c which can generate similar code for you, given a “token grammar”, but in simple cases such as this one, being able to wrap your mind around such logic is useful. Especially for embedded software on limited 8-bit microcontrollers, where we often don’t have the luxury of an RTOS with multiple “tasks” operating in parallel.

As mentioned a while back, I’m adopting ZeroMQ and Bencode on Win/Mac/Linux for future software development. The idea is to focus on moving structured data around, as foundation for what’s going on at JeeLabs.

So let’s start on the JeeNode side, with the simplest aspect of it all: generating Bencoded data. I’ve set up a new library on GitHub and am mirroring it on Redmine. It’s called “EmBencode” (embedded, ok?). You can “git clone” a copy directly into your Arduino’s “libraries” folder if you want to try it out, or grab a ZIP archive.

This serialSend sketch sends some data from a JeeNode/Arduino/etc. via its serial port:

Note that we have to define the connection between this library and the Arduino’s “Serial” class by defining the “PushChar” function declared in the EmBencode library.

One thing to point out is that this code uses the C++ “function overloading” mechanism: depending on the type of data given as argument to push(), the appropriate member function for that type of value gets called. The C++ compiler automagically does the right thing when pushing strings and numbers.

Apart from that, this is simply an example which sends out a bit of data – i.e. some raw values as well as some structured data, each one right after the other.

It looks like gobbledygook, but when read slowly from left to right, you can see each of the calls and what they generate. I’ve indented the code to match the structures being sent.

You can check out the EmBencode.h header file to see how the encoder is implemented. It’s all fairly straightforward. More interesting, perhaps, is that this code requires no RAM (other than the run-time stack). There is no state we need to track for encoding arbitrarily complex data structures.
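As an illustration of the overloading trick, here is a simplified stand-in (not the real EmBencode class, which pushes each byte out through PushChar) – the compiler picks the right push() variant purely from the argument type:

```cpp
#include <cassert>
#include <cstdio>
#include <cstring>
#include <string>

// Simplified stand-in for the encoder: overload resolution picks the
// right encoding for each argument type.
struct Encoder {
    std::string out;                    // collects the encoded bytes

    void push (long v) {                // numbers become "i<value>e"
        char buf [16];
        std::snprintf(buf, sizeof buf, "i%lde", v);
        out += buf;
    }
    void push (const char* s) {         // strings become "<len>:<chars>"
        char buf [16];
        std::snprintf(buf, sizeof buf, "%d:", (int) std::strlen(s));
        out += buf;
        out += s;
    }
    void startList () { out += 'l'; }   // structures simply nest ...
    void endList () { out += 'e'; }     // ... no state needed at all
};
```

Note that nothing needs to be buffered or remembered: each call appends its bytes and is done, which is why the encoder itself needs no RAM.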

(Try figuring out how to decode this stuff – it’s quite tricky to do in an elegant way!)

Tomorrow, I’ll process this odd-looking Bencoded data on the other side of the serial line.

One of the things I’d really like to do is hack on that Laser Cutter I described recently.

The electronics is based on a LaOS Board, but I’d like to see what you can do with an embedded Linux board such as a Raspberry Pi in this context – driving that LaOS board, for example. Because adding Linux to the mix opens up all sorts of neatness.

So here’s my new prototype development setup:

That oh-so-neat foam board acts as base, with two PCB’s fastened to it using, heh … remember these?

They’re called “splitpennen” in Dutch. Long live foam board and splitpennen!

This is a pretty nifty setup, in my opinion. Tons and tons of ways to implement features on this combo, and there’s plenty of power and storage on both boards to perform some pretty neat tricks, I expect.

Anyway – this is more a big project for cold winter days, really. It’ll take a long time before anything can come out of this, but isn’t it incredible how the prices of these things have reached a point where one can now dedicate such hardware to a project?

There are a couple of disruptions imminent regarding the JeeLabs web sites:

First one is a VAT increase in the Netherlands, starting October 1st. This will affect the shop.

Second one is the switch from Redmine 1 to Redmine 2, this will affect the café (docs and wiki).

Third one is a transition / migration from Drupal to Redmine – this will affect the forum.

Let me assure you that I hate each of the above changes at least as much as you do. Probably more.

But there’s no way around it. The VAT increase from 19% to 21% is a legal requirement, of course, so it’s both necessary and needs to happen on a specific date. A few days from now, that is. More news on that coming.

The switch from Redmine 1 to Redmine 2 was also inevitable, in the long run. I’m quite happy with Redmine as a system, but the current 1.0.1 setup has been out for two years, and I haven’t ever been able to easily update it. Yuck, yuck, yuck. In the end, the only way out was to wait for a solid 2.x release series, which also happens to come with a far better upgrade mechanism.

But instead of doing some sort of 1.x -> 2.x conversion for a range of projects in Redmine 1 (some public, but also several private ones), I’ve decided to just start over. With the help of a couple of people (thank you Steve, thank you Myra), just about all the main content of the café has now been transferred (converted, in fact).

Except the API documentation – this hasn’t been migrated, because I’ve been convinced that maintaining API documentation in Doxygen is a far better long-term solution (thanks Jasper). So instead of setting up wiki pages for that, all the JeeLabs libraries can now be extended with comments for Doxygen to generate nice docs from – which I intend to publish as part of the new “JeeLabs.net” site (for now, you can generate the JeeLib docs yourself, again thanks to Jasper’s check-ins).

The new Redmine website has been around for some time now (at redtry.jeelabs.org), but largely unexposed.

On October 1st, I’m going to take the plunge, and replace the current http://jeelabs.net/ site with the new one. This will probably break most URLs out there.

(Did I mention how much I hate these changes?)

Note: to avoid losing essential info, the current site will be moved to “oldred.jeelabs.net”, for reference. So if you can’t find what you’re looking for, you can prefix the url with “oldred.” and try again.

The fact is, that I’ve not made enough progress in the current situation for some time now, and the only way to get there is to break everything first, and then quickly try to repair the most harmful damage. Evolution just doesn’t cut it for me in a case like this, apparently.

Apart from the URL breakage, there are some additional horrid consequences:

To participate (submitting bugs, editing wiki pages, etc), you will need to register as a new user again. To make matters even worse, I’m not enabling auto-registration, to keep spammers at bay. So new registrations will not be instant.

(Did I mention that I hate spammers even more than these disruptive changes?)

Please use the same user name for registration as on the forum (I’ll explain why in a moment).

I’m dropping Markdown as wiki formatting language, and switching to Textile. This format comes from the Ruby world and is considerably better supported by Redmine (which is implemented using Ruby On Rails). The good news is that Textile formatting is very similar to Markdown for simple things, and much better at supporting more complex features (such as tables, colours, and even CSS styles).

The third disruption is probably going to cause the most frowning and cursing, but it too is becoming inevitable. Some time later this year, the forum will be migrated to Redmine as well. One reason is practical, in that Drupal admin is too much of a burden (for the three of us sharing that load: thank you Martyn, thank you Steve again). And since I’m not going to start using Drupal for more tasks here anyway, it really bothers me to have to keep a VM running with 1 GB of RAM allocated to it.

This is the reason why registration on the new Redmine site should be done with your existing forum name, where possible: it’ll become a forum as well, once that third big switch is flipped.

But there are also advantages to this forum migration. A major one of them is the nice integration we gain by having forums, issues, source code browsing, and wiki pages all in the same system. Something I’ve always been looking for, and frankly Redmine 2 has been moving in the right direction for some time now.

And lastly, note that “forum.jeelabs.net” will keep a copy of all the forum discussions, so those URLs will in fact not break – that forum will merely become read-only (preferably using a static copy, if I can figure out how).

Anyway. Major disruptions. I hope you’ll bear with me as this takes place, and that you’re willing to help out and pinpoint any problematic and painful spots this leads to. First there is trouble, then we can fix it. There’s no way back – so let’s at least try and make the way forward as effective as possible.

Once the big trouble spots have been identified and resolved, I can move forward on the documentation side of things again (hardware as well as software). That too has been long overdue. Jeelabs deserves better. Open source deserves better. And you deserve better from me. A lot better.

Now the good news: the new Redmine setup (currently at redtry.jeelabs.org) has recently been given a major makeover (thank you David), bringing it close in style to the daily weblog. Same logo, same fonts, same looks.

The best news though, I’m sure you’ll agree, is that the daily weblog isn’t going to change!

After yesterday’s basic connection of an LPCXpresso 1769 board to the Raspberry Pi, it’s time to get all the software bits in place.

With NXP ARM chips such as the LPC1769, you need a tool like lpc21isp to upload over a serial connection (just like avrdude for AVR chips). It handles all the protocol details to load .bin (or .hex) files into flash memory.

There’s a small glitch in that the build for this utility gets confused when compiled on an ARM chip (including the Raspberry Pi), because it then thinks it should be built in some sort of special embedded mode. Luckily, Peter Brier figured this out recently, and published a fixed version on GitHub (long live open source collaboration!).

So now it’s just a matter of a few commands on the RPi to create a suitable uploader:

Next step is to get that board into serial boot mode. As it turns out, we’re going to have to do similar tricks as with the JeeNode a few days ago. And sure enough, I’m running into the same timing problems as reported here.

But in this case, the boot load process is willing to wait a bit longer, so now it can all easily be solved with a little shell script I’ve called “upload2lpc”:

After connecting a JeeNode to a Raspberry Pi, let’s do the same with the LPCXpresso I mentioned a while back. Let’s do it in such a way that we can upload new code into it.

With an LPC1769, this is even simpler than with an ATmega328, because the serial boot loader is built into the chip out of the box. No need to install anything. So you can always force the chip back into serial boot loader mode – it can’t be “bricked”!

But the process is slightly different: you have to pull down a specific “ISP” pin while resetting the chip, to make it enter serial boot loader mode. So we’ll need one more GPIO pin, for which I’ll use the RPi’s GPIO 23. The wiring is even cleaner than before, because this time I planned a bit more ahead of time:
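In sketch form (a hedged illustration, not actual upload script code), the boot-entry dance looks like this – hold ISP low while pulsing reset, then release both:

```cpp
#include <cassert>
#include <functional>
#include <string>

// Hedged sketch of entering the LPC's serial boot loader: hold ISP
// (GPIO 23 here) low while pulsing RESET (GPIO 18), then release both.
// The actual pin writes are delegated to a callback, so the sequence
// itself can be tested off-target.
void enterSerialBoot (const std::function<void(int,int)>& writePin,
                      int resetPin = 18, int ispPin = 23) {
    writePin(ispPin, 0);    // pull ISP low first
    writePin(resetPin, 0);  // assert reset
    writePin(resetPin, 1);  // release reset while ISP is still low
    writePin(ispPin, 1);    // release ISP: ROM boot loader is running
}
```

On the RPi, the callback would simply write “0” or “1” into /sys/class/gpio/gpioN/value for the given pin.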

Except that this runs into the same problem as before. The LPCXpresso does not have the essential 3.3V regulator next to the ARM chip – it’s on the part that has been cut off (doh!). So again, I’m going to have to add an SMD MCP1703 regulator:

(that little critter is between pins 1 and 2, and upside down, to get the pinout right)

Here’s the complete hookup:

First step is again to make sure that Linux isn’t doing anything with the serial port:

remove the two args referring to /dev/ttyAMA0 in the file /boot/cmdline.txt

add a hash (“#”) in front of this line in /etc/inittab to comment it out:

T0:23:respawn:/sbin/getty -L ttyAMA0 115200 vt100

make the system aware of the changes in inittab with: kill -1 1 (or reboot)

Note – As of October 1st, the VAT rate in the Netherlands will increase from 19% to 21%. As a consequence, some price adjustments will also have to be made in the JeeLabs Shop. Whether a price goes up or down depends on many obscure factors, including exchange rates, current costs, stock levels, and the wish to stick to decently rounded values – so if you want to avoid any surprises and were planning to order stuff, please keep that switch-over date in mind.

Energia is a fork of Arduino for the Texas Instruments MSP430 Micro Controller.

It’s not the first fork, and won’t be the last. And if you’re used to the Arduino IDE, it’ll be delightfully familiar:

One really interesting aspect of this project is that the price of experimenting with embedded controllers drops to about $5 for a TI “LaunchPad” – so any throwaway project and kid-with-little-money can now use this:

To give you an idea why a simple little Linux board (the Carambola on OpenWrt in this case) is not a convenient platform for real-time use, let me show you some snapshots.

I’ve been trying to toggle two I/O pins in a burst to get some data out. The current experiment uses a little C program which does ioctl(fd, GPIO_SET, ...) and ioctl(fd, GPIO_CLEAR, ...) calls to toggle the I/O pins. After the usual mixups, stupid mistakes, messed up wires, etc, I get to see this:

A clean burst on two I/O pins, right? Not so fast… here’s what happens some of the time:

For 10 ms, during that activity, Linux decides that it wants to do something else and suspends the task! (and the system is idling 85% of the time)

If you compare the hex string at the bottom of these two screenshots, you can see that the toggling is working out just fine: both are sending out 96 bytes of data, and the first few bytes are identical.

The situation can be improved by running the time-critical processes with a higher priority (nice --20 cmd...), but there can still be occasional glitches.

Without kernel support, Linux cannot do real-time control of a couple of I/O pins!

But not quite as I thought, and I missed a hint in the scope signal capture yesterday (at 322 ms after the reset). My premature conclusion was that resetting the GPIO pin and then starting up avrdude was taking too long, so the boot loader would give up before receiving the proper starting character over the serial line.

But that doesn’t quite make sense. Looking at the screen shot again, we can see that the RF12demo greeting (blue line) starts about 800 ms after the RESET pin gets pulled low. Even though OptiBoot is supposed to wait a full second after power-up. And there are two handshake attempts well within that period.

The other subtle hint was in the not-quite-equidistant characters being sent out (yellow line). Why would handshake chars be sent out in such a repeatable yet irregular pattern?

Having written an avrdude replacement in Tcl for some experiments I did with JeeMon a while back, I decided to try and extend it a bit to toggle the reset pin right before sending out the data. That way no delay would interfere, so the reset would happen moments (probably mere milliseconds) before starting the boot loader serial handshake. I didn’t really want to start hacking on avrdude for such a “simple” task as toggling a GPIO pin on the Raspberry Pi. Besides, that replacement code is only 70 lines of Tcl, if all you have to deal with is the basic stk500 protocol understood by the boot loader.
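For reference, the initial exchange such a minimal stk500 implementation performs is tiny. These constants come from the stk500 protocol: ‘0’ is GET_SYNC, a space is CRC_EOP, and 0x14 + 0x10 is the INSYNC/OK reply:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// The first exchange of the stk500 boot loader protocol, as used by
// OptiBoot: send GET_SYNC + CRC_EOP, expect INSYNC + OK back.
const uint8_t STK_GET_SYNC = 0x30;   // the '0' wakeup character
const uint8_t CRC_EOP      = 0x20;   // a space, terminates each request
const uint8_t STK_INSYNC   = 0x14;
const uint8_t STK_OK       = 0x10;

std::vector<uint8_t> syncRequest () {
    return { STK_GET_SYNC, CRC_EOP };
}

bool inSync (const std::vector<uint8_t>& reply) {
    return reply.size() == 2
        && reply[0] == STK_INSYNC && reply[1] == STK_OK;
}
```

That’s why the scope shows “0” characters going out, and why a 0x14 byte coming back is the tell-tale sign that the boot loader responded.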

Controlling the GPIO pin also turned out to be pretty straightforward:
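My snippet was in Tcl, but the same idea in C++ takes only a handful of lines. A hedged sketch (the sysfs paths match the GPIO 18 setup, and the final write pulls reset low):

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Hedged sketch: toggling the reset pin boils down to a few writes
// into the sysfs GPIO tree. This helper only computes the
// (file, contents) pairs; a real version would open and write each one.
std::vector<std::pair<std::string,std::string>> resetWrites (int pin) {
    std::string dir = "/sys/class/gpio/gpio" + std::to_string(pin);
    return {
        { dir + "/direction", "out" },  // make the pin an output
        { dir + "/value",     "1"   },  // raise RESET
        { dir + "/value",     "0"   },  // drop it low: the ATmega resets
    };
}
```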

But no matter what I tried, and no matter what timing delays I inserted, the darn thing just wouldn’t upload!

Then, just for the heck of it, I tried this variation – reversing the open and reset order:

You can see data being sent to the JeeNode, and the 0x14 reply sent back to the RPi.

So was this all about timing? Yes and no. Let’s revisit yesterday’s list of pulses again:

A single pulse 322 ms after reset, and then several pulses at 673 ms (presumably the first boot loader protocol handshake character). The problem is really that first pulse – it’s not a valid character but a glitch!

What I think is happening is that the JeeNode resets, the glitch at 322 ms causes the boot loader to give up and launch the sketch, and then all subsequent boot handshake characters get ignored. Looks like opening the serial port produces a glitch on the transmit output pin.

By first opening the serial port and then doing the GPIO18 reset, the problem is avoided, and then it all works.

Thank you JeeMon – I hadn’t expected to fire you up again, but you’ve saved my day!

the GPIO pin is correctly pulsed high and then dropped low after 0.1 s

the scope triggers on that falling edge, presumably same time as the ATmega resetting

after 323 ms, I see a 30 µs blip on the outgoing serial pin

after 674 ms, it looks like avrdude sends out its “0” wakeup character

after about 750 ms, the JeeNode starts sending the RFM12demo greeting

after 946 ms, the second “0” wakeup goes out

after 1197 ms, the third and final “0” goes out

In other words: it looks like avrdude is starting to send the 0’s too late! – and as a result, the JeeNode’s boot loader passes control to the sketch and never enters the upload cycle. After a few seconds, avrdude then gives up:

avrdude: ser_recv(): programmer is not responding

Note that reset and serial communication both work properly, as verified several times.

Trouble is, apart from redoing it all in a single app, I see no way to reduce the startup time of avrdude. The SD card is perhaps not the fastest, but it’s no slouch either:

Now that the JeeNode talks to the Raspberry Pi, it’d be interesting to be able to reset the JeeNode as well, because then we could upload new sketches to it.

It’s not hard. As it so happens, the next pin on the RPi connector we’re using is pin 12, called “GPIO 18”. Let’s use that as a reset pin – and because this setup is going to become a bit more permanent here, I’ve made a little converter board:

This way, a standard JeeLabs Extension Cable can be used. All we need is a board with two 6-pin female headers, connected almost 1-to-1, except for GND and +5V, which need to be crossed over (the other wire runs on the back of the PCB to avoid shorts).

This takes care of the hardware side of resets. Now the software. This is a nice example of just how much is exposed in Linux (once you know where to find all the info).

There’s a direct interface into the kernel drivers called /sys, and this is also where all the GPIO pins can be exposed for direct access via shell commands or programs you write. Let’s have a look at what’s available by default:

$ sudo ls /sys/class/gpio/
export gpiochip0 unexport
$

The “export” entry lets us expose individual GPIO pins, so the first thing is to make pin 18 available in the /sys area:

echo 18 | sudo tee /sys/class/gpio/export

That will create a virtual directory called /sys/class/gpio/gpio18/, with entries such as direction (pin mode) and value (read or set the pin level).

The serial port (/dev/ttyAMA0) is only accessible by users in access group “tty” (and root). So first, let’s make sure user “pi” can access it, by adding ourselves to that group:

sudo usermod -a -G tty pi

Now logout and log back in to make these changes take effect.

> Note: not all RPi Linux distro’s are set up in the same way. If ttyAMA0’s group is “dialout” instead of “tty”, chances are that you’re already a member (type “id” to find out). In that case, skip the above usermod command.

But that’s not all. By default, there’s a “getty” process running on this serial port:

This lets you connect to the serial port and login. Very convenient, but in this case we don’t want to log in, we want to take over control of this serial port for talking to the JeeNode. So we have to disable getty on ttyAMA0:

And that’s it. There is no longer a process trying to respond to our serial port.

> Note: again, this may not be needed if you don’t see “ttyAMA0” listed in the ps output.

You also have to make sure that the kernel doesn’t log its console output to this serial port. Look in the file “/boot/cmdline.txt” and remove the text console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 if present. Then reboot.

Now we’re ready for a quick loopback test:

Connect a single jumper as indicated (a little jumper block would also work), connecting RXD with TXD. This means that everything sent out serially will be sent back and received serially as well. A very convenient test that we’ve got all the Linux stuff set up properly.

Oh, wait – first let’s get the utility installed which lets us type to the serial port:

sudo apt-get install screen

Now we can test the serial port, at last:

$ screen /dev/ttyAMA0 57600
…

You should see a blank screen, and whatever you type in should show up on the screen. If this is indeed the case, then there is data going out and back into the serial port. To really make sure, disconnect the jumper and the echoing will stop. Excellent. Almost there.

Screen is a very convenient utility, but you need to remember how to get back out of it. The key sequence is CTRL+A followed by “\” (the backslash key).

For now, you can leave it running, or start it up again later. The last step is to hook up the JeeNode. This requires 4 wires:

Note carefully the order of these female-to-female jumper wires, top to bottom:

on the RPi: green, white, black, gap, blue

on the JeeNode: gap, green, white, blue, gap, black

I used a JeeNode, pre-loaded with the standard RFM12B sketch, and set to receive packets from my main home monitoring setup here at JeeLabs. I also enabled “collect” mode (“1c”) so that it wouldn’t try to acknowledge packets, just lurking and reporting what comes in. And sure enough, it works:

Easy! (once you know how to overcome all the software and hardware hurdles, that is…)

The last part of this mini series about a low-cost semi-DIY laser cutter describes the electronics and firmware:

In the middle is an MBED microcontroller based on an ARM Cortex M3 chip (NXP’s LPC1768). It runs at 96 MHz, has 512 KB flash, 32 KB RAM, USB, and an Ethernet controller on board (but not the magjack).

The whole thing sits on the custom LaOS board, which includes an SD card socket, an I2C bus, some opto-isolators, power-supply interface (+5V and +24V), and two Pololu stepper drivers (or nearly compatible StepSticks).

There’s room for 1 or 2 more stepper drivers, a CAN bus interface, and more connectors for extra opto-isolated inputs and outputs. The on-board stepper drivers can be omitted, with the step/direction pins brought out, if you need to drive more powerful motors.

The MBED was briefly mentioned in a previous post, and can also be replaced by a suitably modded LPCXpresso 1769 board, at less than half the price.

The software is open source and has just been moved from the MBED site to GitHub. Note that at this point in time, the software is still based on the “mbed.ar” runtime support library, which is unfortunately not open source. Work is in progress to move to a fully open source “stack”, but that’s not the case today – the Ethernet library in particular hasn’t been successfully detached from its MBED context yet.

Ethernet connectivity is based on the lwIP code, which in turn is the big brother of uIP. Big in the sense that this supports a far more capable set of TCP/IP features than would be possible with an 8-bit microcontroller and its limited memory. Having 512 KB flash and 32 KB RAM in this context, means that you don’t have to fight the constraints of the platform and can simply focus on getting all the functionality you want in there.

Right now, the LaOS firmware drives the steppers, using a Gcode interpreter adapted from grbl, and includes support for Ethernet (TFTP, I believe) and the SD card. It also requires the I2C control panel. As a result, you can send a “job” to this thing via VisiCut, and it’ll save it on the SD card. The I2C-based control panel then lets you pick a job and start it. Quite a good workflow already, although there are still a fair number of rough edges.

(Note that if you listen very carefully, you can hear all the pioneers on this project scream for more volunteers and more help, on all sorts of levels and domains of expertise :)

What excites me about all this, apart from those über-cool laser cutting capabilities of course, is the potential to take this further since all the essential bits are now open source – and it’s not even really tied to a specific brand of laser.

So there you have it. It cuts, it burns, it smells, it’s yearning to be taken to the next level :)

water reservoir + circulation pump, for cooling, must be on when the laser is on

air pump (noisy aquarium type) – for “air assist”, keeps the fumes away from the lens

exhaust ventilator, mounted on the back, pushes the smelly & smoky air out

The pumps and ventilator are manually turned on from the front panel with separate switches. Not perfect, since this leaves room for operator error, but hey – it works.

Here’s a peek inside, with some packaging plastic still in place:

The shiny plate height can be adjusted using 4 connected screws turning in tandem.

You can see a little fluorescent light in the top center, and the air exhaust underneath it.

A 35 Watt infrared laser is serious stuff (it actually takes a few hundred watts of input power to generate that sort of output). Once focused, that beam can do major harm – cutting through stuff is what it’s supposed to do, after all.

Safety comes in the form of a reed relay which triggers when the lid of this thing is lifted (top right above), another one when the lid of the laser on the back is open, and a mains power switch with a key. The lid for the electronics is locked (from the X-Y compartment). The transparent viewing area is of a special type which blocks laser light, and the whole thing is made of fairly solid steel.

The beam is invisible (scary, eh?), but a tiny red “pointer” laser is mounted inside, showing roughly where the beam will hit. Extremely useful while doing dry runs – which in turn is very easy to do: just start the laser with the lid open, and you can see where it’s going to cut. Given that the cutting area is only about 20×30 cm, that’s not a luxury – you really have to check whether the material is properly placed. Here’s that last mirror, deflecting the beam down into the final focusing lens:

(Note: that’s not smoke, but the red panel’s reflection on the aluminium work surface)

The hollow tube is the “air assist”, blowing air to push the smoke away from the beam.

All the mirrors come pre-aligned and secured with a bit of hot glue. No need to tweak.

On the side is that little pointer laser. Since it’s tilted, the position of the spot is not exactly where the laser will hit, depending on how far into the material you place the focus spot. Focusing is actually quite simple: the laser comes with a little (lasered) acrylic piece, the side of which has a specific length. To focus, you place that on the work piece, and then lower or raise the object until the plastic spacer aligns with this metal mirror mount.

Raising / lowering the work piece is done manually. The whole thing sits on a (fixed size) “honeycomb” bed, which in turn sits on a manually adjustable metal “table”. Adjustment is done with a screw sticking through the bottom of the laser – and in fact the very first thing you need to cut is a piece of wood to act as large adjustment ring, which can then be grabbed from the front of the laser. Bit hard to explain, but then again this isn’t meant as assembly instruction – I just want to give you an impression of what sort of issues you have to deal with in such a setup. On the plus side: pioneering is fun! – with lots of opportunities to come up with clever improvements :)

Tomorrow, I’ll describe the electronics. This board (and the software developed for it) replaces the original board, which I’m told was hard to use, came with crude Windows-specific software, and was impossible to hack on and improve. The LaOS board has a microcontroller, two stepper motor drivers, an Ethernet port, and various other bits and bobs needed to deal with everything in the HPC laser. It’s actually quite general-purpose.

Yep, it has finally happened… it all started in July, when I got this (without the electronics board) and this (called the LaOS board).

The laser arrived like this:

This summer, a couple of early adopters in the Netherlands got together to figure out all the pesky little details that come up when you’re really just in pioneering mode. Make no mistake: this thing is far from ready for mainstream use. The laser itself is quite nice and ready to go, and definitely workable as far as all the mechanics go. But the electronics and software are still work-in-progress (I’m using the VisiCut software – which includes a driver that talks directly to the laser’s firmware over Ethernet).

Nevertheless – early as it may be, the total cost is less than €1,500 and you end up with an A4-sized laserable workspace which can cut up to 5 mm wood panels and 6 mm acrylic (engraving is more involved, but getting there). One of the first things we tried was this:

Sure enough, the circle is a snug fit and turns perfectly. Excellent alignment!

I’ll describe a few more details of this setup in the coming weblog posts, but also want to point out that you too can end up with such a laser cutter (just make sure to arrange for ventilation – you’ll quickly get very tired of the burning smell and fumes).

There’s a project by the designers of the replacement electronics, called LaOS (Laser Open Source) – and it’s definitely the best place to go right now if you want to find out what’s going on. Warning: the site is currently undergoing a messy transition to a new wiki – I’ve helped out a bit and am sure they’d love to see more people join in the effort of making this thing more practical for the non-hacker crowd.

One little gotcha: these 35 W laser cutters are produced in China and then tested/resold by a company in the UK, and they are really large and heavy. Having one shipped from the UK to you is fairly expensive, because of the delicate glass laser tube (several hundred Euro, most likely). We got around that by getting organised and doing a “group buy”, with one person actually volunteering to go to the UK, load his car up with a bunch of laser cutters, and driving back (!). Still pricey, but less so, and I think the UK vendor is in fact willing to make arrangements when enough units are being purchased and shipped at the same time. Anyway – it’s an aspect to keep in mind.

The counter is about to reach 1,000,000 – just after midnight today, in fact.

These log entries come from a JeeNode with a radioBlip sketch which just sends out a counter, roughly every 64 seconds, and goes into maximum low-power mode in between each transmission. That’s the whole trick to achieving very long battery lifetimes: do what ya’ gotta do as quickly as possible, then go back to ultra-low power, as deeply as possible.

The battery is a 1300 mAh LiPo battery, made for some Fujitsu camera. I picked it because of the nice match with the JeeNode footprint.

But the big news is that this battery has not been recharged since August 21st, 2010!

Which goes to show that:

lithium batteries can hold a lot of charge, for a long time

JeeNodes can survive a long time, when programmed in the right way

sending out a quick packet does not really take much energy – on average!

all of this can easily be replicated, the design and the code are fully open source

And it also clearly shows that this sort of lifetime testing is really not very practical – you have to wait over two years before being sure that the design is as ultra-low power as it was intended to be!

If we (somewhat arbitrarily) assume that the battery is nearly empty now, then running for 740 days (i.e. 17,760 hours) means that the average current draw is about 73 µA, including the battery’s self-discharge. Which – come to think of it – isn’t even that low. I suspect that with today’s knowledge, I could probably set up a node which runs at least 5 years on this kind of battery. Oh well, there’s not much point trying to actually prove it in real time…
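The arithmetic, for the record (a back-of-the-envelope helper, nothing more):

```cpp
#include <cassert>
#include <cmath>

// Back-of-the-envelope: average current draw is battery capacity
// divided by total runtime.
double avgCurrentMicroAmps (double capacity_mAh, double days) {
    double hours = days * 24;               // 740 days -> 17,760 hours
    return capacity_mAh / hours * 1000;     // mA -> uA
}
```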

One of the omissions in the original radioBlip code is that only a counter is sent out, with no indication at all of the remaining battery charge. So right now I have no idea how much longer this setup will last.

As you may recall, I implemented a more elaborate radioBlip2 sketch a while ago. It reports two additional values: the voltage just before and after sending out a packet over wireless. This gives an indication of the remaining charge and also gives some idea how much effect all those brief transmission power blips have on the battery voltage. This matters, because in the end a node is very likely to die during a packet transmission, while the radio module drains the battery to such a point that the circuit ceases to work properly.
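To give an idea of what such a payload might look like – the field names and voltage encoding below are my own illustration, not necessarily what radioBlip2 actually uses:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative payload: a counter plus VCC before and after the RF
// transmission, with each voltage squeezed into one byte.
struct BlipPayload {
    uint32_t ping;          // sequence counter
    uint8_t vccBefore;      // VCC just before sending
    uint8_t vccAfter;       // VCC right after sending
};

// Pack millivolts into one byte: 20 mV steps with a 1000 mV offset,
// i.e. a 1.0 .. 6.1 V range - an assumed encoding, for illustration.
uint8_t encodeVcc (int mv) { return (mv - 1000 + 10) / 20; }
int decodeVcc (uint8_t b)  { return 1000 + b * 20; }
```

Comparing vccBefore and vccAfter then shows how hard each transmission pulls the battery down as it nears the end of its life.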

Next time, as a more practical way of testing, I’ll probably increase the packet rate to once every second – reducing the test period by a factor of 60 over real-world use.

The ARM microcontrollers described in the past few days are a big step up from a “simple” ATmega328, but that’s only if you consider hundreds of kilobytes of flash storage and tens of kilobytes of RAM as being “a lot”.

Compared to notebooks and workstations, it’s still virtually nothing, of course.

But there’s another trend going on, with bang-for-the-buck going off the charts: small embedded Linux systems with integrated wired and/or wireless Ethernet. These are often based on Broadcom and Atheros chipsets – the same as found in just about every network router and gateway nowadays.

One particularly nice and low-cost example of this is the Carambola by 8devices:

There are I2C and SPI interfaces, which can also be used as general-purpose I/O pins, so this thing will interface to a range of things right out of the box.

One gotcha is that the 2×20 pins are on a 2mm grid, not 0.1″. Small size has its trade-offs!

The board will draw up to 1.5 W @ 3.3V (i.e. roughly 500 mA), but that can easily be reduced to about 0.4 W for a blank board with no wired Ethernet attached.

Here are some more specs, obtained from within Linux on the Carambola itself:

As you can see, this unit (like many routers) is based on a MIPS architecture. And it’s actually quite a bit faster than the Bifferboard I described a while back.

Like most low-end ARM chips, these systems often lack hardware floating point (it’s all implemented in software, just as an Arduino does for its ATmega). Don’t expect any number-crunching performance from these little boards, but again it’s good to point out that boards like these are priced about the same as an Arduino Uno.

One of the benefits of Linux is that it’s a full-fledged operating system, with numerous tools and utilities (though you often still need to cross-compile) and with solid full-featured networking built in. The amount of open source software available for Linux (on a wide range of hardware) is absolutely staggering.

Among the drawbacks of Linux in the context of Physical Computing is that it’s not strictly real time, so programming for it follows a different approach (busy loops for timing are not done, for example). Don’t expect to accurately pulse an I/O pin at a few hundred Hz or more. Linux was also definitely not made for ultra-low power use, such as in remote wireless nodes which you’d like to keep up and running for months or years on a single battery – there’s simply too much going on in a complete operating system.

The other thing about Linux is that it can be somewhat intimidating if you’ve never used it before. Part of this comes from its strong heritage from the “Unix world”. But given the current trends, I strongly recommend trying it out and getting familiar with it – Linux is very mature: it has been around for a while and will remain so for a long time to come. With boards such as the Carambola illustrating just how cheap it can be to have a go at it.

Interesting times. Now if only new software developments would keep up with all this!

Yesterday’s post presented an example of a simple yet quite powerful platform for “The Internet Of Things” (let’s just call it simple and practical interfacing, ok?). Lots of uses for that in and around the house, especially in the low-cost end of ATmega’s, basic Ethernet, and basic wireless communication.

What I wanted to point out with yesterday’s example, is that there is quite a bit of missed potential when we stay in the 8-bit AVR / Arduino world. There are ARM chips which are at least as powerful, at least as energy-efficient, and at least as low-cost as the ATmega328. Which is not surprising when you consider that ARM is a design, licensed to numerous vendors, who all differentiate their products in all sorts of interesting ways.

In theory, the beauty of this is that they all speak the same machine language, and that code is therefore extremely portable between different chips and vendors (apart from the inevitable hardware/driver differences). You only need one compiler to generate code for any of these ARM processor families:

In practice, things are a bit trickier, if we insist on a compiler “toolchain” which is open source, with stable releases for Windows, Mac, and Linux. Note that a toolchain is a lot more than a C/C++ compiler + linker. It’s also a calling convention, a run-time library choice, a mechanism to upload code, and a mechanism to debug that code (even if that means merely seeing printf output).

In the MBED world, the toolchain is in the cloud. It’s not open source, and neither is the run-time library. Practical, yes – introspectable, not all the way. Got a problem with the compiler (or more likely the runtime)? You’re hosed. But even if it works perfectly – ya can’t peek under the hood and learn everything, which in my view is at least as important in a tinkering / hacking / repurposing world.

Outside the MBED world, I have found my brief exploration a grim one: commercial compiler toolchains with “limited free” options, and proprietary run-time libraries everywhere. Not my cup of tea – and besides, in my view gcc/g++ is really the only game in town nowadays. It’s mature, it’s well supported, it’s progressing, and it runs everywhere. Want a cross compiler which runs on platform A to generate code for platform B? Can do, for just about any A and B – though building such a beast is not necessarily easy!

As an experiment, I wanted to try out a much lower-cost yet pin-compatible alternative for the MBED, called the LPCXpresso (who comes up with names like that?):

Except: half of that board is dedicated to acting as an upload/debug interface, and it’s all proprietary. You have to use their IDE, with “lock-in” written on every page. Amazing, considering that the ARM chip can do serial uploading via built-in ROM! (i.e. it doesn’t even have to be pre-flashed with a boot loader)

So I decided to break free from that straitjacket:

Yes, that’s right: you can basically throw away half the board, and then add a few wires and buttons to create a standard FTDI interface, ready to use with a BUB or other 3.3V serial interface.

(there’s also a small regulator mod, because the on-board 3.3V regulator seems to have died on me)

The result is a board which is pin-compatible with the MBED, and will run more or less the same code (it has only 1 user-controllable LED instead of 4, but that’s about it, I think). Oh, and serial upload, not USB anymore.

Does this make sense? Not really, if that means having to manually patch such boards each time you need one. But again, keep in mind that these boards cost about the same as an Arduino Uno, yet offer far more than even the Arduino Mega in features and performance.

The other thing about this is that you’re completely on your own w.r.t. compiling and debugging code. Well, not quite: there’s a gcc4mbed by Adam Green, with pre-built x86 binaries for Windows, Mac, and Linux. But out of the box, I haven’t found anything like the Arduino IDE, with GUI buttons to push, lots of code examples, a reference website, and a community to connect with.

For me, personally, that’s not a show stopper (“real programmers prefer the command line”, as they say). But getting a LED to blink from scratch was quite a steep entry point into this ARM world. Did I miss something?

Two more notes:

Yes, I know there’s the Maple IDE by LeafLabs, but I couldn’t get it to upload on my MacBook Air notebook, nor get a response to questions about this on the forum.

No, I’m not “abandoning” the Atmel ATmega/ATtiny world. For many projects, simple ways to get wireless and battery-operated nodes going, I still vastly prefer the JeeNode over any other option out there (in fact, I’m currently re-working the JeeNode Micro, to add a bit more functionality to it).

But it’s good to stray outside the familiar path once in a while, so I’ll continue to sniff around in that big ARM Cortex world out there. Even if the software exploration side is acting surprisingly hostile to me right now.

A while back, I came across this product, called the “mbed Internet of Things Gateway”:

It’s an ARM microcontroller with an Ethernet port, a µSD storage slot, and an RFM12B wireless transceiver. Very nicely packaged in an extruded-aluminium case with laser-cut front and back panels. Here’s what’s inside:

Not that much circuitry, as you can see – because all the heavy-lifting is done by the MBED board on the left.

That’s a 32-bit microcontroller, with built-in Ethernet and USB, plenty of I/O pins, and lots of features to connect to SPI, I2C, CAN, and other types of devices. Not to mention the 512 KB flash and 32 KB RAM memory – plenty to implement some serious functionality.

MBED comes with an intriguing “cloud-based” compiler and build environment, which is surprisingly effective. Here’s how it works, out of the box:

plug the MBED into USB and it’ll present itself as a memory disk with one HTML file on it
open that file in your browser and it takes you to the MBED site, where you write and compile your code in the online IDE

if the code compiles successfully, you end up with a file in your download folder

copy that file to the MBED’s USB drive

press on the MBED’s reset button, and that’s it … uploaded and running!

This is a very elegant workflow. No need to install any software to develop for MBED. And you can continue working wherever you are, as long as you have an internet connection and your MBED with you. You do need to sign up and register a (free) account on that MBED site – in return, they’ll do all the compiles for you.

This board is an exciting development. The cost is higher than with just a JeeNode + EtherCard, but there is also a lot more possible when you don’t have to fight the ATmega328’s strict flash and RAM memory constraints.

I’ll have more to say about this hardware and software tomorrow – stay tuned…

This is a follow-up to the Delayed power-up post, this time using some P-MOSFETs (in SOT-23 SMD form, i.e. tiny). The way I’m testing these, is by using a 1 kΩ resistor as simulated load, and hooking things up as follows:

These components were not chosen at random – I picked units with a very low switching threshold voltage, so that they can reliably be switched on from an I/O pin, even if we were to run at just 1.8V.

Here are the characteristics of the Philips BSH203 P-MOSFET:

Just over 1 Ω resistance when driven low by 1.8V, so with a 50 mA load, the voltage drop over this MOSFET will be just over 50 mV.

When placed in the Component Tester, and zooming in on the interesting bit, we get:

Each major horizontal division is 2V, so this thing switches on at about 0.5V.

For comparison, the characteristics of the Vishay SI2333 P-MOSFET:

And in this case, the Component Tester shows this (sorry, can’t zoom in with this setup):

A slightly higher turn-on voltage, but note that the ON resistance is considerably lower at 1.8V: only 0.05 Ω. Not surprising, when you consider that this MOSFET can probably switch over 5 A (without self-destructing from the heat dissipation).
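The difference in ON resistance is easy to put in perspective with Ohm's law. A quick sketch, using the figures quoted above and the 50 mA load from the BSH203 example:

```python
def mosfet_drop_mv(r_ds_on_ohm, load_ma):
    """Voltage lost across the MOSFET switch: V = I x R (mA x ohm = mV)."""
    return load_ma * r_ds_on_ohm

bsh203 = mosfet_drop_mv(1.0, 50)    # ~1 ohm at 1.8V gate drive -> 50 mV lost
si2333 = mosfet_drop_mv(0.05, 50)   # ~0.05 ohm -> a negligible 2.5 mV
```

Either drop is harmless for a 3.3V circuit, but the SI2333 leaves twenty times more headroom for heavier loads.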

Here’s what these tiny components look like, with wires soldered on for “debugging”:

Looks like either of these will do the trick, when switched from an I/O pin anyway.

But there’s some weirdness w.r.t. the amplitude. The built-in impedance is quoted as being 50 Ω, no doubt the reasoning being that if you terminate the cable with 50 Ω as well, then a standard coax cable will have the least possible signal degradation (ringing, reflections, and such).

There is a setting to adjust for the assumed load, so that the instrument can adjust its amplitude accordingly. The default is to assume a 50 Ω load at the end of the cable:

But this is where I get mightily confused. Here’s what I see with the scope set in high impedance mode:

That in itself is actually correct: the output amplitude is twice the expected value, since there is no 50 Ω terminating resistor. So let’s add it in (it’s an option inside the scope):

Huh? – Why isn’t the amplitude 3 Vpp? Why does it end up being only half that value?

But there’s something even stranger, and far more inconvenient about the TG2511. After a few minutes, I often get a message about the output signal being overloaded:

Doesn’t make sense. I’m never presenting a load heavier than 50 Ω, i.e. well within specs. Sometimes this happens even when I completely disconnect the load. Even more suspect: the green “OUTPUT” light doesn’t always match what’s happening. Sometimes the light is off, yet the output signal is still present!

In fact, I get this screen in the most unexpected situations. Maybe the TG2511’s output stage is broken? Yuck.

To recall yesterday’s reasoning, I’m looking for a way to keep the RFM12B from starting up too soon and drawing 0.6 mA before the microcontroller gets a chance to enable its ultra-low power sleep mode.

The solutions so far require an extra I/O pin, allowing the microcontroller to turn power on and off as needed, with the extra detail that power stays off until told otherwise.

But all I’m really interested in, is to keep that RFM12B from powering up too soon. After that, I never need to power it down again (and lose its settings) – at 0.3 µA, the RFM12B’s sleep mode will be just fine.

One solution is to use a dedicated chip for this, which can reliably send out a trigger when a fixed voltage threshold has been exceeded. That would still need a MOSFET, though, so this increases the cost (and footprint) of such a solution.

The other way would be to create a low-speed RC network, gradually charging a cap until a threshold is reached and turns on the MOSFET switch. Lower cost, no doubt, but in fact not flexible enough in case of a very slow power-on voltage ramp, as in the case of a solar cell charging a supercap or small battery. There is just no way to determine how long the delay needs to be – it might take days for the power rail to reach a usable level!

Yet another option is this little gem (thanks for the initial suggestion, Martyn!):

No I/O pin, no pull-up, nothing!

This trick takes advantage of what was originally considered a drawback of MOSFET switching: the fact that the gate voltage has to reach a certain level before the MOSFET will switch. Assuming that voltage is say 1.5V, then the MOSFET will be turned off as long as the power rail has not yet reached 1.5V, and once it rises above that value, it’ll automatically switch on. Magic!

Does it work? Well, I’m still waiting for some P-MOSFETs to arrive, but I’ve done a little test with an N-MOSFET, connected the other way around and using a 1 kΩ resistor as load. We can look at that combination as a component which has only two pins: a power rail and ground.

If the circuit works as expected, then when applying an increasing voltage, no current will flow until the threshold has been reached, and then it’ll switch on and start drawing current.

As it turns out, this is very easily observable using a Component Tester – like the one built into my scope:

The horizontal scale is the applied voltage (from about -5V to +5V), the vertical scale is the current through that component (from about -5 mA to +5 mA). The straight slanted line is characteristic of a 1 kΩ resistor.

But the interesting bit is that little dent: from under 0V to about 1.5V, the circuit draws no current at all. Once 1.5V or more are applied, the circuit starts conducting, and behaving like a plain 1 kΩ resistor again.
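That dent can be captured in a crude first-order model – an idealized switch with a sharp 1.5V threshold and negligible ON resistance. Real MOSFETs turn on gradually, and the negative half of the trace is carried by the body diode, whose forward drop is ignored here, so this is only a sketch of the shape on screen:

```python
def current_ma(v, v_threshold=1.5, r_kohm=1.0):
    """Idealized I-V curve of the N-MOSFET + 1 kohm combo on the Component Tester."""
    if 0 <= v < v_threshold:
        return 0.0        # below the gate threshold: the switch is open, no current
    return v / r_kohm     # everywhere else it behaves like the plain 1 kohm resistor
```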

Woohoo, this might actually work: just a single P-MOSFET would be all that’s needed!

One reason for yesterday’s exploration, is to figure out a way around a flaw of the RFM12B wireless radio module.

Let me explain – the RFM12B module has a clock output, which can be used to drive a microcontroller. The idea being that you can save a crystal that way. Trouble is that this clock signal has to be present on power-up, even though it can be configured over SPI in software, because otherwise the microcontroller would never start running and hence never get a chance to re-configure the radio. A nasty case of Catch 22 (or a design error?).

In short: the radio always powers up with the crystal oscillator enabled. Even when not using that clock signal!

The problem is that an RFM12B draws about 0.6 mA in this mode, even though it can be put to sleep to draw only 0.3 µA (once running and listening to SPI commands). In the case of energy harvesting, where you normally get very tiny amounts of energy to run off, this startup hurdle can be a major stumbling block.

See my low-power supply weblog post about how hard that can be, and may need extra hardware to get fixed.

So I’m trying to find a way to keep that radio powered down until the microcontroller is running, allowing it to be put to sleep right away.

For ultra-low power use, yesterday’s PNP transistor approach is not really good enough.

This is where an interesting aspect of MOSFETs comes in: they make great power switches, because all they need is a gate voltage to turn them on or off. When on, their resistance (and hence voltage drop) is near zero, and the voltage on the gate doesn’t draw any current. Just like a water faucet doesn’t consume energy to keep water running or blocked, only to change the state – so do MOSFETs.

But many MOSFETs typically require several volts to turn them on, which we may or may not have when running at the lower limit of 1.8V of an ATmega or ATtiny. So the choice of MOSFET matters.

Just like yesterday, we’ll need a P-channel MOSFET to let us switch the power supply rail:

Note the subtly different placement of the resistor. With a PNP transistor, it was needed to limit the current through the base (which then got wasted, but that current is needed to make the transistor switch). With a MOSFET there is no current, but now we need to make sure that the MOSFET stays off until a low voltage is applied.

Except that now R can be very large. It’s basically a pull-up, and can be extremely weak, say 10 MΩ. That means that when pulled low, the leakage current will be only 0.3 µA.
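That leakage figure is worth double-checking with Ohm's law – and note that the pull-up only wastes current while the I/O pin is actively holding the gate low:

```python
VCC = 3.3          # supply voltage
R_PULLUP = 10e6    # 10 Mohm pull-up on the MOSFET gate

leakage_ua = VCC / R_PULLUP * 1e6   # current in µA when the gate is pulled low
# roughly a third of a microamp - negligible for most battery budgets
```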

The trick is to find a P-MOSFET type which can switch using a very low gate voltage, so that it can still be fully switched on. I’ve ordered a couple of types to test this out, and will report once they arrive and measurements can be made.

All in all, this is a very nice solution, though – just 2 very simple components. The main drawback is that we still need to reserve an I/O pin for this.

Tomorrow, I’ll explore a refinement which does not even need an extra I/O pin.

This is an exploration to find out what circuit can be used to switch small electronic loads using a digital I/O pin.

The simplest solution by far, is to use the I/O pin to directly drive the circuit. This works great for low currents, such as an LED with series resistor. But there are limits to how much current you can draw:

As you can see from the datasheet, drawing 5 mA will already cause a voltage drop of about 0.2V, so the load will not get the full supply voltage. With larger loads, the drop becomes even more pronounced.

Another approach is to use a PNP transistor, as follows:

Here’s how it works, in case you’re not familiar with them PNP transistor thingies:

pulling the input pin down towards 0V, will turn the PNP transistor on

when on, the transistor will conduct and supply power to the load

leaving the pin floating or pulling it up, will cause the PNP transistor to turn off

In my setup, I’m using a very light load for now. The following measurements will be different as the load increases, but not as substantially as with the raw I/O pin drive. A circuit like this could easily drive a 100..250 mA load, if the base resistor has the right value – as we’ll see.

Time to try this out. I am quite interested in how the voltage drop over the transistor depends on the exact voltage placed on the base junction. Or rather: how much current I’ll need (since the resistor passes a known current once we know the voltage drop).

So as input, I’m using a 10 Hz sawtooth signal, which varies from 0 to 3V. Due to the way things are hooked up, it starts at 3V below the 3.3V power rail, i.e. 0.3V, and then rises to the level of the power rail.

Here are the results with a 10 kΩ resistor – both signals have been inverted, as I’m measuring relative to +3.3V:

(oops, ignore the yellow trigger point baseline)

At the start, the transistor is turned on strongly, and then the input voltage falls to zero (yellow trace). As you can see, the voltage drop over the transistor increases non-linearly as it gradually turns off. With only 1V driving it, the voltage drop increases from about 40 mV to 100 mV, i.e. 0.1V. With even less, it starts to switch off and gets the full power supply voltage (so nothing reaches the load).

Now let’s do some math. I used my transistor tester to determine that this particular BC557 PNP transistor had 0.8V drop over its base-to-emitter junction, and that its current amplification factor (hFE) is about 220x.

In other words: with 1.0V on the input, there is 1.0-0.8 = 0.2V on the 10 kΩ resistor. Ohm’s law (E = I x R, or I = E / R) implies that the current through the resistor will be 0.2V / 10kΩ = 20 µA. This current is essentially “wasted”, but given the 220 factor, it will allow the transistor to drive up to 20 µA x 220 = 4.4 mA. Not so great…

Update – not sure what I was smoking at the time I wrote the above paragraph. That’s 1.0V below VCC, roughly the switching threshold of this transistor when used with a 1 kΩ load. I’m not sure what point I was trying to make here, other than that a 20 µA base current is not enough to switch an RFM12B.

But if we pull the input to almost ground level, as would be the case with a digital I/O pin, then the base current increases to (3.3 – 0.8) / 10,000 = 250 µA, supporting a load of up to 250 µA x 220 = 55 mA.

If power waste is not an issue, we could reduce the base resistor to 1 kΩ, and get over 500 mA load switching capability (assuming the transistor is powerful enough). The base current will then be around 2.5 mA, a value which an ATmega I/O pin can still easily supply.
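The arithmetic in the last few paragraphs generalizes nicely. Here's a little helper that reproduces all three cases – the 0.8V drop and the hFE of 220 are the measured values for this particular BC557, and the function names are mine:

```python
V_BE = 0.8   # measured base-emitter drop of this BC557
HFE = 220    # measured current gain

def base_current_ua(v_below_vcc, r_base_ohm):
    """Base current when the input sits v_below_vcc volts under the supply rail."""
    return (v_below_vcc - V_BE) / r_base_ohm * 1e6

def max_load_ma(v_below_vcc, r_base_ohm):
    """Largest collector current the transistor can still switch: hFE x Ib."""
    return base_current_ua(v_below_vcc, r_base_ohm) * HFE / 1000

max_load_ma(1.0, 10_000)   # ~4.4 mA  - barely drives anything
max_load_ma(3.3, 10_000)   # ~55 mA   - input pulled all the way to ground
max_load_ma(3.3, 1_000)    # ~550 mA  - 1 kohm base resistor, ~2.5 mA base current
```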

But what if power use is important, i.e. if we can’t afford to waste those 20 µA or more?

Here’s the same circuit, with a 100 kΩ resistor instead:

Note how the emitter-to-collector voltage drop (blue line) rises. With 3V input differential, i.e. almost pulled to ground, the drop over the transistor is still under 0.15V, but we’ll need to keep the input voltage at least 1.5V below the power supply voltage to make this work. So in very low-voltage / low-power scenarios (i.e. running off two almost-depleted AA cells), this might become tricky.

The current losses are now only 1/10th, i.e. 25 µA when the power supply is 3.3V and the input is tied to ground. Then again, in the ultra-low power world that wasted 25 µA is not really such an impressive figure.

Still, for switching loads which draw up to a few hundred milliamps, this circuit using just a PNP transistor and a resistor is really quite practical. If you use a 1 kΩ resistor, the base current will be well within the I/O pin’s capabilities, and the transistor will usually have enough drive to switch its load.

The only other drawback is that the transistor will always add a small voltage drop of perhaps 50 .. 200 mV.

There is a third solution using MOSFETs, with its own set of trade-offs. To be continued…

The JeeLabs Shop gained some extra functionality about a year ago: it now lets you “sign up” and add a password to simplify re-ordering later.

What I didn’t know until today (thanks, Martyn) is that there is actually a way to access the order history and to manage your shipping address(es).

The trick is to go to http://jeelabs.com/account – which will redirect you to a login page unless your browser has already saved the relevant cookies:

Once logged in, you can see what you’ve ordered in the past:

In my case, most of these orders were of course just dummies, which I then cancelled.

Three things to note about this functionality:

yes, the shop will use cookies if you decide to sign-up when placing an order

you can’t change the info on existing orders (contact order_assistance at jeelabs dot org for that)

I’ll update the email confirmations sent out to mention this feature

I still think that there are plenty of smaller and larger inconveniences in this shop (hosted by Shopify), none of which I have control over unfortunately, but it’s good to know that this history mechanism is there if you need it.

To continue where I left off before the summer, let’s examine what a current clamp like this one does:

You put it around a single wire in your AC mains cabling and it’ll generate a voltage proportional with the current going through it. This unit has a built-in burden resistor, which means you get a ± 1V (AC!) output when the current through the wire is 30 A. So let’s have some fun and look at a couple of different loads, eh?

Let’s start off with an old-fashioned 25W incandescent light-bulb, which is a resistive load:

Note the vertical scale – these voltage levels are tiny. The scope calculates the Root Mean Square (RMS) voltage as being 3.52 mV in this case. That’s the voltage you’d need to draw as direct current to dissipate the same amount of power as this alternating current (let’s ignore phase shift and “reactive” vs “true” power for now).

Sure enough, a 75W light bulb draws three times as much power (note the different vertical scale):

Here’s a 2W LED light bulb, which uses an electronic circuit to pulse small amounts of energy from AC mains:

Now let’s take a vacuum cleaner, which is an inductive load, and quite a lot beefier too:

The “blips” are switching artifacts from the TRIAC control included in this unit. From the RMS value, my estimate would be that it draws about 1500W. Here’s the same vacuum cleaner, with its power throttled back to minimum:

As with the LED light, you can see the electronics kicking in and pulsing AC mains to throttle power back to around 500W.

Conclusion: a current clamp is a safe way to measure current in an AC mains wire, and it more or less reproduces the measured current as a small output voltage. Very small in fact, for light loads. To accurately determine the RMS value of a load as small as our 2W LED light, we’re going to have to read out this signal in the sub-millivolt range, and do so at perhaps 1000 Hz to collect enough readings per 50 Hz cycle.
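In code, the whole measurement chain boils down to a few lines – a sketch assuming 230V mains and ignoring power factor, just as above (the 1V-per-30A scale factor comes from the clamp's built-in burden resistor):

```python
import math

CLAMP_A_PER_V = 30.0   # this clamp: 1 V RMS out per 30 A RMS through the wire
MAINS_V = 230.0        # assumed nominal mains voltage

def rms(samples):
    """Root Mean Square of a list of instantaneous voltage readings."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def watts_from_clamp_mv(v_rms_mv):
    """Apparent power, ignoring power factor: clamp mV -> amps -> watts."""
    return v_rms_mv / 1000.0 * CLAMP_A_PER_V * MAINS_V

watts_from_clamp_mv(3.52)   # ~24.3 W, nicely close to the bulb's 25 W rating
```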

For some time now, there have been two of these Tizio lamps in the house. A gorgeous design and very practical:

As so many lamps from last century, they come with a halogen incandescent light bulb:

Lifetimes are great (I don’t think I ever replaced one), but efficiency less so. So I decided to replace them anyway:

These are simple 1.2W warm-white LED lights with built-in rectifier and current limiting resistors. Here’s the result:

The transplant ended up being very simple, the UV filter glass is no longer needed, the power consumption has dropped from 35W to 5W, and – if all is well – these LEDs will never need to be replaced again.

Easy peasy! Now if only all our incandescent lights had equally simple alternatives…

As you might have noticed, the JeeLabs Shop has been kept open and operational during the summer break. The reason for this, is that Martyn and Rohan Judd have set up what is now essentially a fantastic fulfilment service for JeeLabs.

As a result, this welcome message on the shop’s home page is now more or less permanent:

The limit for free shipping has been raised somewhat, due to the higher cost of shipping to “mainland” Europe from the UK. The other major change is that you need to use the indicated email address to make sure your message reaches me and all the people involved.

Note that I am out of the loop for day-to-day order processing, but not out of the loop in any other way!

There will be more changes, most of them probably behind the scenes, as we work out all the details of doing business this way. As far as the shop is concerned, the orders are still placed in the Netherlands, and I remain as before fully responsible and accountable for everything that happens – both good and bad. VAT processing (and VAT exemption for EU-based business outside the Netherlands) will also remain exactly as before.

I am quite confident that this change will allow me to spend more time on the R&D side of things regarding all current and future JeeLabs projects.

Which, dear reader, is – and remains – my main motivation for doing all this, of course.

Tada! I’m really proud to be able to present the JeeLabs new-and-official “look” to you!

Here is the new logo for JeeLabs – nicely distinctive and with a little wink towards battery-powered electronics:

As you can see, this has been incorporated into the weblog, as header and as “favicon”.

I hope you like it.

The proportions and font used for the “JeeLabs” text in the header above are still work-in-progress. In fact, I just used the Trebuchet MS font with smallcaps for now, because it sort of resembles the letters in the logo. If you have tips or suggestions for this, let me know…

One more announcement tomorrow, then “regular” JeeLabs posts will resume – promise!

We had a gorgeous vacation. The best part: the car broke down at the start of our trip, forcing us to make some quick decisions. So we ended up in Retournac, a little village in the south of the Auvergne while getting that car fixed (kudos to Volkswagen for their splendid service, which included a free rental car replacement). In fact, we liked this place so much that we decided to come back to it in the second part of our vacation – this little spot was unbelievably calm, with a great little Camping Municipal on the border of the Loire, and restaurants with fantastic 4-course plat du jour meals for the price of what would get us just about one pizza back home.

See that little green tent over on the left, under the trees? No? Oh well, that’s where Liesbeth and I set up camp :)

What else to do in France in high season, apart from going on lots of hikes and lazily reading books? Well, we visited lots of smaller and larger villages for one, such as these:

… and we chased all the scents in those gorgeous little markets everywhere:

The other half of our vacation was spent visiting French & Portuguese friends in the area.

It was a truly wonderful break … and now it’s time to get back to Physical Computing!

Ok, time to sign off for the summer break. This weblog will be off the air until September 1st – same as last year.

But unlike last year, the shop will stay open during the break: Martyn and Rohan Judd will be taking over all JeeLabs shop duties from the UK this summer. We’re making a range of preparations to get everything going smoothly, but please note that there will be some “reduced availability” issues during this time – i.e. a few more items out of stock than usual, and occasional delays while trying to prepare packages and get things out the door.

It’s been yet another truly fascinating year here. Somewhat fewer new products out the door than I would have wanted, but also quite a bit more work behind the scenes to make sure this all remains focused on fooling around with physical computing, wireless networking, and ultra-low power computing. And even though there has been another unplanned break early this year, things are actually starting to work out a lot better these days. As I’ve learned after over 1000 posts, the trick is to stay ahead of the weblog by a comfortably large margin, instead of having the daily publishing schedule dictate how to spend my time and my energy.
This summer break will give me an excellent opportunity to relax, re-focus, and then re-launch into the next yearly cycle – IOW: onwards!

Until then, I’ll leave you with a view of one of the more chaotic corners of the JeeLabs work area:

If physical computing – or even just technology in general – is your thing, then maybe some of these past 1075 posts will encourage you to follow your passion, nurture your curiosity, cherish your fascination, challenge your boundaries, and … be creative! Because there is infinite fun in creating and in learning from what others create.

To be continued in September. Have a wonderful time!

Note – Please send all questions about the shop, payments, and shipping to email address order_assistanceatjeelabsdotorg during the summer break – that way it will reach both the people handling the shop and me. Note also that I will be reading email only once a week during this period.

The SHT11 sensor used on the room board is a bit pricey, and the bulk source I used in the past no longer offers them. The good news is that there’s a HYT131 sensor with the same accuracy and range. And the same pinout:

This sensor will be included in all Room Board kits from now on.

It requires a different bit of software to read out, but the good news is that this is now standard I2C and that the code no longer needs to do all sorts of floating point calculations. Here’s a test sketch I wrote to access it:

And here’s some sample output (measurements increase when I touch the sensor):

As you can see, this reports both temperature and humidity with 0.1 resolution (0.1 °C and 0.1 %RH). The output gets converted to a float for display purposes, but apart from that no floating point code is needed to use this sensor. This means that the HYT131 sensor is also much better suited for use with the memory-limited JeeNode Micro.

It’ll take a little extra work to integrate this into the roomNode sketch, but as far as I’m concerned that’ll have to wait until after the summer break. Which – as far as this weblog goes – will start tomorrow.

One more post tomorrow, and then we “Northern Hemispherians” all get to play outside for a change :)

The guys at OpenEnergyMonitor – hi Glyn and Trystan! – have been working on a number of open source energy monitoring kits for some time now. With solar panels coming here soon, I thought it’d be nice to try out their EmonTX unit – which is partly derived from a bunch of stuff here at JeeLabs. Here’s the kit I got recently:

Following these excellent instructions, assembly was a snap (I added the 868 MHz RFM12B wireless module):

Whee, assembling kits is fun! :)

I had some 30A current clamps from SeeedStudio lying around anyway, so that’s what I’ll be using.

The transformer is a 9 VAC type, to help the system detect zero crossings, so that real power factors can be calculated. Unfortunately, this transformer doesn’t (yet) power the system (but it now looks like it might in a future version), so this thing also needs either FTDI or USB to power it.

These readings were made with a clamp on one wire of a 25W lightbulb load – first off, then on. The mains voltage estimated from the 9V transformer is a bit high – it’s usually about 230V around here. My plan is to measure and report two independent power consumers and one producer (the solar panel inverter), so I’ll dive into this in more detail anyway. But that’ll have to wait until after the summer break.

Speaking of which: the June discount ends tomorrow, just so you know…

Update – I have disconnected the burden resistors, since the SCT-013-030 has one built in. See comments.

Welcome to the Thursday Toolkit series, about tools for building Physical Computing projects.

Today, I’d like to present a nifty new arrival here at JeeLabs, called the Half Ohm:

It’s a brilliant little tool (with a cute name). What it does is convert milliohm to millivolt, and it works roughly in the range 0 .. 500 mΩ. Hence the name.
On the back is a coin cell, and there’s a tiny on-off switch. Luckily, it’s hard to forget to turn the thing off because there’s a bright red LED while it’s powered up.

Milliohms are tricky. Fortunately, Jaanus Kalde, who created this tool, has it all explained on his website.

So what can you do with a milliohm meter? Well, measure the resistance of your test leads, for one:

That’s 22.8 mΩ, i.e. 0.0228 Ω. Not surprising, because (almost) every conductor has some resistance.

Being able to measure such low resistances can be extremely useful to find shorts. For example, when using longer test leads, I can see that their own resistance is 74 mΩ. And with that knowledge, we can measure PCB traces:

Turns out that in this case I got about 112 mΩ, which means that this little 5 cm stretch of copper on the PCB has 37 mΩ resistance. And sure enough, the other ground pins have less resistance when the path is shorter, and more when the path is longer. This is very logical – note also that between the GND pins I’ll be measuring relatively low values because of the relatively “fat” ground plane, which reduces overall DC resistance.

To find shorts, we simply measure the resistance between any two points (with no power connected, of course). If the measured value is close to 75 mΩ, then it’s a short. If it’s well above say 150 mΩ, then it’s definitely not a short.

To locate a short circuit, we can now simply move probes towards the direction of lowest resistance.

Looks like the Half Ohm will be a great help for this type of hard-to-isolate problem!

PS. No need to do the subtraction in your head if you have a multimeter which supports relative measurements.

To continue yesterday’s discussion about level interrupts, the other variant is “edge-triggered” interrupts:

In this case, each change of the input signal will trigger an interrupt.

One problem with this is that it’s possible to miss input signal changes. Imagine applying a 1 MHz signal, for example: there is no way for an ATmega to keep up with such a rate of interrupts – each one will take several µs.

The underlying problem is a different one: with level interrupts, there is sort of a handshake taking place: the external device generates an interrupt and “latches” that state. The ISR then “somehow” informs the device that it has seen the interrupt, at which point the device releases the interrupt signal. The effect of this is that things always happen in lock step, even if it takes a lot longer before the ISR gets called (as with interrupts disabled), or if the ISR itself takes some time to process the information.

With edge-triggered interrupts, there’s no handshake. All we know is that at least one edge triggered an interrupt.

With “pin-change interrupts” in microcontrollers such as the ATmega and the ATtiny, things get even more complicated, because several pins can all generate the same interrupt (any of the PD0..PD7 pins, for example). And we also don’t get told whether the pin changed from 0 -> 1 or from 1 -> 0.

The ATmega328 datasheet has this to say about interrupt handling (page 14):

(note that the “Global Interrupt Enable” is what’s being controlled by the cli() and sei() instructions)

Here are some details about pin-change interrupts (from page 74 of the datasheet):

The way I read all the above is that a pin change interrupt gets cleared the moment its associated ISR is called.

What is not clear, however, is what happens when another pin change occurs before the ISR returns. Does this get latched and generate a second interrupt later on, or is the whole thing lost? (that would seem to be a design flaw)

For the RF12 driver, to be able to use pin-change interrupts instead of the standard “INT0” interrupt (used as a level interrupt), the following is needed:

every 1 -> 0 pin change needs to generate an interrupt so the RFM12B can be serviced

every 0 -> 1 pin change can be ignored

The current code in the RF12 library is as follows:

I made that change from an “if” to a “while” recently, but I’m not convinced it is correct (or that it even matters here). The reasoning is that servicing the RFM12B will clear the interrupt, and hence immediately cause the pin to go back high. This happens even before rf12_interrupt() returns, so the while loop will not run a second time.

That RF12 code is definitely flawed in the general case when more I/O pins could generate the same pin change interrupt, but for now I’ve ruled that out (I think), by initializing the pin change interrupts as follows:

In prose:

make the RFM12B interrupt pin an input

enable the pull-up resistor

allow only that pin to trigger pin-change interrupts

as last step, enable that pin change interrupt

Anyway – I haven’t yet figured out why the RF12 driver doesn’t work reliably with pin-change interrupts. It’s a bit odd, because things do seem to work most of the time, at least in the setup I tried here. But that’s the whole thing with interrupts: they may well work exactly as intended 99.999% of the time. Until the interrupt happens in some particular spot where the code cannot safely be interrupted and things get messed up … very tricky stuff!

The ATmega’s pin-change interrupt has been nagging at me for some time. It’s a tricky beast, and I’d like to understand it well to try and figure out an issue I’m having with it in the RF12 library.

Interrupts are the interface between real world events and software. The idea is simple: instead of constantly having to poll whether an input signal changes, or some other real-world event occurs (such as a hardware count-down timer reaching zero), we want the processor to “somehow” detect that event and run some code for us.

The mechanism is very useful: it’s an effective way to reduce power consumption – go to sleep, and let an interrupt wake the processor up again – and it means we don’t have to keep checking for the event all the time.

It’s also extremely hard to do these things right, because – again – the ISR can be triggered any time. Sometimes, we really don’t want interrupts to get in our way – think of timing loops, based on the execution of a carefully chosen number of instructions. Or when we’re messing with data which is also used by the ISR – for example: if the ISR adds an element to a software queue, and we want to remove that element later on.

The solution is to “disable” interrupts, briefly. This is what “cli()” and “sei()” do: clear the “interrupt enable” and set it again – note the double negation: cli() prevents interrupts from being serviced, i.e. an ISR from being run.

But this is where it starts to get hairy. Usually we just want to prevent an interrupt from happening now – but we still want it to happen. And this is where level-interrupts and edge-interrupts differ.

A level-interrupt triggers as long as an I/O signal has a certain level (0 or 1) and works as follows:

Here’s what happens at each of those 4 points in time:

an external event triggers the interrupt by changing a signal (it’s usually pulled low, by convention)

the processor detects this and starts the ISR, as soon as its last instruction finishes

the ISR must clear the source of the interrupt in some way, which causes the signal to go high again

finally, the ISR returns, after which the processor resumes what it had been doing before

The delay from (1) to (3) is called the interrupt latency. This value can be extremely important, because the worst case determines how quickly our system responds to external interrupts. In the case of the RFM12B wireless module, for example, and the way it is normally set up by the RF12 code, we need to make sure that the latency remains under 160 µs. The ISR must be called within 160 µs – always! – else we lose data being sent or received.

The beauty of level interrupts, is that they can deal with occasional cli() .. sei() interrupt disabling intervals. If interrupts are disabled when (1) happens, then (2) will not be started. Instead, (2) will be started the moment we call sei() to enable interrupts again. It’s quite normal to see interrupts being serviced right after they are enabled!

The thing about these external events is that they can happen at the most awkward time. In fact, take it from me that such events will happen at the worst possible time – occasionally. It’s essential to think all the cases through.

For example: what happens if an interrupt were to occur while an ISR is currently running?

There are many tricky details. For one, an ISR tends to require quite a bit of stack space, because that’s where it saves the state of the running system when it starts, and then restores that state from when it returns. If we supported nested interrupts, then stack space would at least double and could easily grow beyond the limited amount available in a microcontroller with limited RAM, such as an ATmega or ATtiny.

This is one reason why the processor logic which starts an ISR also disables further interrupts. And re-enables interrupts after returning. So normally, during an ISR no other ISRs can run: no nested interrupt handling.

Tomorrow I’ll describe how multiple triggers can mess things up for the other type of hardware interrupt, called an edge interrupt – this is the type used by the ATmega’s (and ATtiny’s) “pin-change interrupt” mechanism.

The latter depends on whether this setup will become part of the permanent home automation system at JeeLabs.

As switcher I’ll use a no-name brand from eBay – it delivers 5V at over 1 A and draws about 8 mA without load:

Lots of pesky little details need to be worked out, such as how to get a Sitecom WLA-1000 USB WiFi dongle working, how to set up what is essentially “kiosk mode”, and how to control the display backlight. I’d also like to hook up an RFM12B directly to the main board, to see how convenient this is and what can be done with it.

There’s a nice article at next.kolumbus.no about setting up something quite similar. Long live the sharing culture!

Total system cost should be roughly €100..150, since I recovered the screen from an old Dell laptop.

Ultra-low power computing is a recurring topic on this weblog. Hey – it’s useful, it’s non-trivial, and it’s fun!

So far all the experiments, projects, and products have been based on the ATmega from Atmel. It all started with the ATmega168, but for some time now it has all been centered around the ATmega328P, where “P” stands for pico power.

There’s a good reason to use the ATmega, of course: it’s compatible with the Arduino and with the Arduino IDE.

With an ATmega328 powered by 3.3V, the lowest practical current consumption is about 4 µA – that’s with the watchdog enabled to get us back out of sleep mode. Without the internal watchdog, i.e. if we were to rely on the RFM12B’s wake-up timer, that power-down current consumption would drop considerably – to about 0.1 µA:

Whoa, that’s a factor 40 less! Looks like a major battery-life improvement could be achieved that way!

Ahem… not so fast, please.

As always, the answer is a resounding “that depends” – because there are other power consumers involved, and you have to look at the whole picture to understand the impact of all these specs and behaviors.

First of all, let’s assume that this is a transmit-only sensor node, and that it needs to transmit once a minute. Let’s also assume that sending a packet takes at most 6 ms. The transmitter power consumption is 25 mA, so we have a 10,000:1 sleep/send ratio, meaning that the average current consumption of the transmitter will be 2.5 µA.

Then there’s the voltage regulator. In some scenarios, it could be omitted – but the MCP1702 and MCP1703 used on JeeNodes were selected specifically for their extremely low quiescent current draw of 2 µA.

The RFM12B wireless radio module itself will draw between 0.3 µA and 2.3 µA when powered down, depending on whether the wake-up timer and/or the low-battery detector are enabled.

That’s about 5 to 7 µA so far. So you can see that a 0.1 µA vs 4 µA difference does matter, but not dramatically.

I’ve been looking at some other chips, such as ATXmega, PIC, MSP430, and Energy Micro’s ARM. It’s undeniable that those ATmega328’s are really not the lowest power option out there. The 8-bit PIC18LF25K22 can keep its watchdog running with only 0.3 µA, and the 16-bit MSP430G2453 can do the same at 0.5 µA. Even the 32-bit ARM EFM32TG110 only needs 1 µA to keep an RTC going. And they add lots of other really neat extra features.

In terms of low power there are at least two more considerations: other peripherals, and battery size / self-discharge.

In a Room Node, there’s normally a PIR sensor to detect and report motion. By its very nature, such a sensor cannot be shut off. It cannot even be pulsed, because a PIR needs a substantial amount of time to stabilize (half a minute or more). So there’s really no other option than to keep it powered on at all times. Well, perhaps you could turn it off at night, but only if you really don’t care what happens then :)

Trouble is: most PIR sensors draw a “lot” of current. Some over 1 mA, but the most common ones draw more like 150..200 µA. The PIR sensor I’ve found for JeeLabs is particularly economical, but it still draws 50..60 µA.

This means that the power consumption of the ATmega really becomes almost irrelevant. Even in watchdog mode.

The other variable in the equation is battery self-discharge. A modern rechargeable Eneloop cell is quoted as retaining 85% of its charge over 2 years. Let’s assume its full charge is 2000 mAh, then that’s 300 mAh loss over 2 years, which is equivalent to about 17 µA of continuous self-discharge.

Again, the 0.1 µA vs 4 µA won’t really make such a dramatic difference, given this figure. Definitely not 40-fold!

As you can see, every microamp saved will make a difference, but in the grand scheme of things, it won’t double a battery’s lifetime. There’s no silver bullet, and that Atmel ATmega328 remains a neat Arduino-compatible option.

That doesn’t mean I’m not peeking at other processors – even those that don’t have a multi-platform IDE :)

As hinted at yesterday, I intend to use the ZeroMQ library as foundation for building stuff on. ZeroMQ bills itself as “The Intelligent Transport Layer”, and frankly, I’m inclined to agree. Platform and vendor agnostic. Small. Fast.

So now we’ve got ourselves a pipe. What do we push through it? Water? Gas? Electrons?

The next can of worms: how does a sender encode structured data, and how does a receiver interpret those bytes?
Have a look at this Comparison of data serialization formats for a comprehensive overview (thanks, Wikipedia!).

Yikes, too many options! This is almost the dreaded language debate all over again…

Ok, I’ve travelled the world, I’ve looked around, I’ve pondered on all the options, and I’ve weighed the ins and outs of ‘em all. In the name of choosing a practical and durable solution, and to create an infrastructure I can build upon.
In the end, I’ve picked a serialization format which most people may have never heard of: Bencode.

Not XML, not JSON, not ASN.1, not, well… not anything “common”, “standard”, or “popular” – sorry.

Let me explain, by describing the process I went through:

While the JeeBus project ran, two years ago, everything was based on Tcl, which has implicit and automatic serialization built-in. So evidently, that was selected as the mechanism at the time (using Tequila).

But that more or less constrains all inter-operability to Tcl (similar to using pickling in Python, or even – to some extent – JSON in JavaScript). All other languages would be second-rate citizens. Not good enough.

XML and ASN.1 were rejected outright. Way too much complexity, serving no clear purpose in this context.

Also on the horizon: JSON, a simple serialization format which happens to be just about the native source code format for data structures in JavaScript. It is rapidly displacing XML in various scenarios.

But JSON is too complex for really low-end use, and requires a relatively large amount of effort and memory to parse. It’s based on reserved characters and an escape character mechanism. And it doesn’t support binary data.

Next in the line-up: Bernstein’s netstrings. Very elegant in its simplicity, and requiring no escape convention to get arbitrary binary data across. It supports pre-allocation of memory in the receiver, so datasets of truly arbitrary size can safely be transferred.

But netstrings are a bit too limited: only strings, no structure. Zed Shaw extended the concept and came up with tagged netstrings, with sufficient richness to represent a few basic datatypes, as well as lists (arrays) and dictionaries (associative arrays). Still very clean, and now also with exactly the necessary functionality.

(Tagged) netstrings are delightfully simple to construct and to parse. Even an ATmega could do it.

But netstrings suffer from memory buffering problems when used with nested data structures. Everything sent needs to be prefixed with a byte count. That means you have to either buffer or generate the resulting byte sequence twice when transmitting data. And when parsed on the receiver end, nested data structures require either a lot of temporary buffer space or a lot of cleverness in the reconstruction algorithm.

Which brings me to Bencode, as used in the – gasp! – Bittorrent protocol. It does not suffer from netstring’s nested size-prefix problems or nested decoding memory use. It has the interesting property that any structured data has exactly one representation in Bencode. And it’s trivially easy to generate and parse.

Bencode can easily be used with any programming language (there are lots of implementations of it, and new ones are easy to add), and with any storage or communication mechanism. As for the Bittorrent tie-in… who cares?

So there you have it. I haven’t written a single line of code yet (first time ever, but it’s the truth!), and already some major choices have been set in stone. This is what I meant when I said that programming language choice needs to be put in perspective: the language is not the essence, the data is. Data is the center of our information universe – programming languages still come and go. I’ve had it with stifling programming language choices.

Does that mean everybody will have to deal with ZeroMQ and Bencode? Luckily: no. We – you, me, anyone – can create bridges and interfaces to the rest of the world in any way we like. I think HouseAgent is an interesting development (hi Maarten, hi Marco :) – and it now uses ZeroMQ, so that might be easy to tie into. Others will be using Homeseer, or XTension, or Domotiga, or MisterHouse, or even… JeeMon? But the point is, I’m not going to make a decision that way – the center of my universe will be structured data. With ZeroMQ and Bencode as glue.

And from there, anything is possible. Including all of the above. Or anything else. Freedom of choice!

Update – if the Bencode format were relaxed to allow whitespace between all elements, then it could actually be pretty-printed in an indented fashion and become very readable. Might be a useful option for debugging.

Welcome to the Thursday Toolkit series, about tools for building Physical Computing projects.

Time for a different subject. All the tools discussed so far have been about electronics and computing hardware.

But what about the flip side of that computing coin – software?

As mentioned recently, software has no “fixed point”. There’s no single center of its universe. Everything is split across the dimension of programming language choice. We’re operating in a computing world divided by language barriers – just like in the real world.

Here’s the language divide, as seen on GitHub (graph extracted from this Zoom.it site):

It’s easy to get carried away by this. Is “my” language up? Or is it down? How does language X compare to Y?

Yawn.

Programming language choice (as in real life, with natural languages) has huge implications, because to get to know a language really well, you have to spend 10,000 hours working with it. Maybe 9,863 if you try really hard.

As we learn something, we get better at it. As we get better at something, we become more productive with it.

So… everyone picks one (or perhaps a few) of the above languages and goes through the same process. We learn, we evolve, and we gain new competences. And then we find out that it’s a rabbit hole: languages do not inter-operate at a very low level. One of the best ways to inter-operate with other software these days is probably something called ZeroMQ: a carefully designed low-fat interface at the network-communication level.

The analogy with real-world spoken languages is intriguing: we all eat bread, no matter what our nationality is or which language we speak (bear with me, I’m simplifying a bit). We can walk into a shop in another country, and we’ll figure out a way to obtain some bread, because the goods and the monetary exchange structure are both bound to be very similar. Language will be a stumbling block, but not a show stopper. We won’t starve.

In the same way, you can think of information exchanges as bread. If we define appropriate data structures and clear mappings to bits and bytes, then we can get them from one system to the other via libraries such as ZeroMQ.

Which brings me to the point I’m trying to make here: programming language choice is no longer a key issue!

What matters, are the high-level data structures we come up with and the protocols (in a loosely defined manner) we use for the interactions. The bread is what it’s about (data). Money is needed to make things happen (e.g. ZeroMQ), and programming languages are going to differ and change over time anyway – so who cares.

We should stop trying to convince each other that everything needs to be written in one programming language. Humanity has had plenty of time to deal with similar natural language hurdles, and look where we stand today…

I feel reasonably qualified to make statements about languages. I speak four natural languages more or less fluently, and I’ve programmed in at least half a dozen programming languages for over two years each (some for over a decade, and with three I think I may have passed that 10,000 hour mark). In both contexts, I tend to favor the less widespread languages. It’s a personal choice and it works really well for me. I get stuff done.

Then again, this weblog is written in English, and I spend quite a bit of my time and energy writing in C. That more or less says it all, really: English is the lingua franca of the (Western) internet, and C is the universal language used to implement just about everything else on top of it. That’s what de facto standards are about!

So what will I pick to program in for Physical Computing, embedded micros, laptops, and the web? The jury is still out on that, but chances are that it will not be any of the first 12 languages in either of those two lists above.

But no worries. We’ll still be able to talk to each other and both have fun, and the software I write will be usable regardless of your mother’s tongue – or your father’s programming language :)

As you may have seen in a number of discussions on the forum (such as this one), things are still in flux w.r.t. documentation of software / libraries / hardware coming out of JeeLabs.

I’ve been agonizing for ages about this. It’s a recurring theme and it drives me up the wall about once a year.

Like with so many things, ya’ can’t get very high (or far) if you keep changing shoulders to stand on…

The good news is that the wait is over. All documentation and collaborative editing is going to be done with Redmine, the same system which has been driving the Café for some time now. But a few things will change:

Redmine has now evolved to version 2.0.3, and I’ll be using subversion to easily track updates

anyone can register to participate, but the process involves an administrator (goodbye, spammers)

the system supports generating PDF’s, so we can have good web docs and good paper-like docs

better support for page hierarchies and automatic page lists, i.e. more high-level structure

Also, I found an excellent theme for the wiki, which gives the whole thing a clean look and nice layout. Like so:

Clean, minimal, and compatible with all modern browsers, as far as I can tell.

But this isn’t about good looks at all, really. That’s just an enabler, to finally make it worthwhile to pour tons and tons of my time into this.

And now that I have your attention…

The above has been set up, but it’s very, very early days. Things may get (slightly) worse before they get better – i.e. I’m not going to do much more maintenance on the current Café pages at http://jeelabs.net/ – neither the hardware pages, nor the software documentation, nor any of the other wiki pages or user-contributed info.

The reason to announce this here anyway, is that I want to make a really serious effort to get it right. I would like Physical Computing software and hardware such as from JeeLabs to be maximally fun to explore, easy to get acquainted with, fully open to adopt and tweak, and truly, truly, truly effective and practical. No fluff, no nonsense, but a rich resource which improves over time.

I love writing (heck, 1000+ posts ought to have made that clear by now) and as I said, I’m willing to pour lots of time and effort into this. But I can’t do it alone. You can help by telling me what sort of info you need, where you’re coming from, what style and structure would work well for you, and you can help point out the errors, the gaps, the omissions, the mistakes… or anything you don’t agree with and consider substantial enough to bring up.

You can of course also help a lot more than that, by participating in this initiative to get a really good collaborative documentation site going (I’m willing to beg on my knees, bribe you in some innocent way, or pile up the compliments if that helps). Everybody is busy, but I think there is value in trying to coordinate efforts like this.

To put it all in perspective: this new documentation site is not “my” site (other than providing the infrastructure). Even though I’ll probably be one of the main contributors, it’s not anyone’s site, in that nothing on it should be written in first-person form. No I’s and me’s to wonder about who said what. It needs to become everyone’s site, a live knowledge base, with full and honest attribution to everyone who volunteers to get involved.

Let’s get the focus on audience right. Let’s get the structure right. And let’s get the content right. In that order.

Here’s the bad news (yeah, I know, should normally have started off with this) …

No spotlights on this endeavor for the next three months. No fame. No riches. Only blood, sweat, and tears.

This weblog post will remain the only one to draw attention to this documentation challenge. I’m inviting you to participate and help shape things. I hope some of you will find a suitable amount of time, right now or later on.

Note that I haven’t mentioned “code” until now. That’s not because code is irrelevant. On the contrary – part of this work will be to write new code, redo things done so far, and if possible even to “lead by example”. The same goes for technical documentation and for tutorials which can go far beyond just telling what the code does. And for automatically generated documentation from comments or other text files. It all has a place.

But it’s easy to get swamped by it all – as I’ve been for so long – and never reach a practical point. Best thing for me to do now, is to try and pick a single direction for documentation and run with it. You’re welcome to tie your own interests and efforts into this – I’m sure we can figure out ways to make things work nicely together.

One last point – this isn’t really limited to software or hardware from JeeLabs. To me, this whole Jeelabs thing is just an umbrella to go off and play with “Computing Stuff tied to the Physical World”, wherever that leads to. I use this as basis to try and stay focused (hah!) and keep aiming in a somewhat coherent (hah again!) direction.

Wanna help make the above happen? Email me some thoughts and I’ll set up editor access for you.

What a bargain! – Now compare it to this one at NewEgg (yeah, no enclosure or power supply, I know):

(note how this drive has more RAM included as cache even than the total storage on that 1980’s disk!)

Let’s ignore inflation and try to compare storage prices across this 32-year stretch:

$4995 for 26 MB is $192 per megabyte

$170 for 3 TB is $57 per terabyte – six extra zeros

in other words: storage has become ≈ 3.37 million times cheaper

Then again, hard drives are so passé … it’s all SSD and cloud storage, nowadays.

The amazing bit is not merely the staggering size increase and price reduction, but the fact that this happened within less than a lifetime. Bill’s, Steve’s – anyone over 50 will have witnessed this, basically.

Might be useful to think about this when putting our work in context of… a few years down the road from now.

As described in this recent post, it should be possible to create a simple fixed frequency oscillator using just a few low-cost components. This could then be used as interrupt source to wake up an ATmega every millisecond or so.

Here’s a first attempt, based on a widely-used circuit, as described in Fairchild’s Application Note 118:

I used a CD4049 hex inverter, since I had them within easy reach:

The two resistors are 10 kΩ, the capacitor is 0.1 µF – and here’s what it does:

The yellow trace is VOUT, the blue trace is V1. Pretty stable oscillation at 456 Hz.

Unfortunately, the current draw is a bit high with these components: 140 µA idle, and 450 µA when oscillating! That defeats the purpose: yesterday’s approach takes half as much current, using just a single 0.1 µF cap.

If someone has a tip for a simple 0.5 .. 1 KHz oscillator which consumes much less power, please let me know…

Following yesterday’s trial, here is the code which uses the pin-change interrupt to run in a continuous cycle:

The main loop is empty, since everything now runs on interrupts. The output signal is the same, so it’s working.

Could we somehow increase the cycle frequency, to get a higher time resolution? Sure we can!

The above code discharges the cap to 0V, but if we were to discharge it a bit less, it’d reach that 1.66V “1” level more quickly. And sure enough, changing the “50” loop constant to “10” increases the rate to 500 Hz, a 2 ms cycle:

As you can see, the cap is no longer being discharged all the way to 0V.
A shorter discharge cycle than this is not practical however, since the voltage does need to drop to a definite “0” level for this whole “cap trick” to work.

So how do we make this consume less power? Piece of cake: turn the radio off and go to sleep in the main loop!

The reason this works, is that the whole setup continuously generates (and processes) pin-change interrupts.
As a result, this JeeNode SMD now draws about 0.23 mA and wakes up every 2 ms using nothing more than a single 0.1 µF cap tied to an I/O pin. Mission accomplished – let’s declare (a small) victory!

PS. Exercise for the reader: you could also use this trick to create a simple capacitance meter :)

This continues where yesterday left off, trying to wait less than 16 milliseconds using as little power as possible.

First off, I’m seeing a lot of variation, which I can’t explain yet. I decided to use a standard JeeNode SMD, including regulator and RFM12B radio, since that will be closer to the most common configuration anyway.

Strangely enough, this sketch now generates a 704 µs toggle time instead of 224 µs, i.e. 44 processor cycles per loop() iteration. I don’t know what changed since yesterday, and that alone is a bit worrying…

The other surprise is that power consumption varies quite a bit between different units. On one JN SMD, I see 1.35 mA, on another it’s only 0.86 mA. Again: I have no idea (yet) what causes this fairly substantial variation.

How do we reduce power consumption further? The watchdog timer is not an option for sleep times under 16 ms.

The key point is to find some suitable interrupt source, and then go into a low-power mode with all the clock/timing circuitry turned off (in CMOS chips, most of the energy is consumed during signal transitions!).

Couple of options:

run the ATmega off its internal 8 MHz RC clock and add a 32 KHz crystal

add extra circuitry to generate a low-frequency pulse and use pin-change interrupts

connect the RFM12B’s clock pin to the IRQ pin and use Timer 2 as divider

add a simple RC to an I/O pin and interrupt on its charge cycle

use the RFM12B’s built-in wake-up timer – to be explored in a separate weblog post

Option 1) has the drawback that you can’t run with standard fuse settings anymore: the clock will have to be the not-so-accurate 8 MHz RC clock and the crystal oscillator needs to be set appropriately. It does seem like this would allow short-duration low-power waiting with a granularity of ≈ 30 µs.

Option 2) needs some external components, such as perhaps a low-power 7555 CMOS timer. This would probably still consume about 0.1 mA – pretty low, but not super-low. Or maybe a 74HC4060, for perhaps 0.01 mA (10 µA) power consumption.

Option 3) may sound like a good idea, since Timer 2 can run while the rest of the ATmega’s clock circuits are switched off. But now you have to keep the RFM12B’s 10 MHz crystal running, which draws 0.6 mA … not a great improvement.

Option 4) seems like an option worth trying. The idea is to connect these components to a spare I/O pin:

By pulling the I/O pin low as an output, the capacitor gets discharged. When turning the pin into a high-impedance input, the cap starts charging until it triggers a transition from a “0” to a “1” input, which could be used as pin-change interrupts. The resistor value R needs to be chosen such that the charge time is near to what we’re after for our sleep time, say 1 millisecond. Longer times can then be achieved by repeating the process.

It might seem odd to do all this for what is just one thousandths of a second, but keep in mind that during this time the ATmega can be placed in deep-sleep mode, consuming only a few µA’s. It will wake up quickly, and can then either restart another sleep cycle or resume its work. This is basically the same as a watchdog interrupt.

Let’s first try this using the internal pull-up resistor instead, and find out what sort of time delays we get:

(typo: the “80 µs” comment in the above screen shot should be “15 µs” – see explanation below)

This code continuously discharges the 0.1 µF capacitor connected to DIO1, then waits until it charges up again:

With the internal pull-up we get a 3.4 ms cycle time. By adding an extra external resistor, this could be shortened. The benefit of using only the internal pull-up, is that it also allows us to completely switch off this circuit.

We can see that this ATmega switches to “1” when the voltage rises to 1.66V, and that its internal pull-up resistor turns out to be about 49 kΩ (I determined this the lazy way, by tweaking values on this RC calculator page).

Note that discharge also takes a certain amount of time, i.e. when the output pin is set to “0”, we have to keep it there a bit. Looks like discharge takes about 15 µs, so I adjusted the asm volatile (“nop”) loop to cycle 50 times:

In other words, this sketch is pulling the IO pin low for ≈ 15 µs, then releases it and waits until the internal pull-up resistor charges the 0.1 µF capacitor back to a “1” level. Then this cycle starts all over again. Easy Peasy!

So there you have it: a single 0.1 µF capacitor is all it takes to measure time in increments of roughly 3.4 ms. Current consumption should be somewhat under 67 µA, i.e. the current flowing through a 49 kΩ resistor @ 3.3V.

Tomorrow, I’ll rewrite the sketch to use pin-change interrupts and go into low-power mode… stay tuned!

The watchdog timer in the ATmega has a shortest interval of about 16..18 milliseconds. Using the watchdog is one of the best ways to “wait” without consuming much power: a fully powered ATmega running at 16 MHz (and 3.3V) draws about 7 mA, whereas that drops a thousandfold when put to sleep, waiting for the watchdog to wake it up again.

The trouble is: you can’t wait less than that minimum watchdog timer cycle.

What if we wanted to wait say 3 ms between transmitting a packet and turning on the receiver to pick up an ACK?

Short time delays may also be needed when doing time-controlled transmissions. If low power consumption is essential, then it becomes important to turn the transmitter and receiver on at exactly the right time, since these are the major power consumers. And to wait with minimal power consumption in between…

One approach is to slow down the clock by enabling the built-in pre-scaler, but this has only a limited effect:

The loop toggles an I/O bit to allow verification of the reduced clock rate. The above code draws about 1.6 mA, whereas the same code running at full speed (16 MHz) draws about 8.8 mA. Note that these measurements ended up a bit lower, but that was at 3.3V – I’m running from a battery pack in these tests, i.e. at about 4.0V.

The I/O pin toggles at 2.23 KHz in slow mode, vs 573 KHz at full speed, which indicates that the system clock was indeed running 256 times slower. A 2.23 KHz rate is equivalent to a 224 µs toggle cycle, which means the system needs 14 processor cycles (16 µs each) to run through this loop() code. Most of it is the call/return stack overhead.

So basically, we could now wait a multiple of 16 µs while consuming about 1.6 mA – that’s still a “lot” of current!

Not terribly convenient. I prefer something like this – have had one of them around here at JeeLabs for ages:

Then again, both of these measuring devices are quite a long shot (heh) from today’s laser rangefinders:

For about €82 at Conrad – no, I don’t have stock options, they are privately owned :) – you get these specs:

That’s 2 mm accuracy from 0.5 to 50 meters, i.e. one part in 25,000 (0.004%). Pretty amazing technology, considering that it’s based on measuring the time it takes a brief pulse to travel with (almost) the speed of light!

But you’ll need a 9V battery to make this thing work – everything needs electricity in today’s “modern” world.

I’ve been wondering for some time whether the power consumption of an ATmega varies depending on the code it is running. Obviously, sleep modes and clock rate changes have a major impact – but how about plain loops?

To test this, I uploaded the following sketch into a JeeNode:

Interrupts were turned off to prevent the normal 1024 µs timer tick from firing and running in between. And I’m using “volatile” variables to make sure the compiler doesn’t optimize these calculations away (as they’re not used).

The result is that the code pattern does indeed show up in the chip’s total current consumption:

The value displayed is the voltage measured over a 10 Ω resistor in series with VCC (the JeeNode I used had no regulator, and was running directly off a 3x AA battery pack @ 3.95V).

What you can see is that the power consumption cycles between about 8.4 mA and 8.8 mA, just by being in different parts of the code. Surprising perhaps, but it’s clearly visible!

The shifts in the second loop are very slow – due to the fact that the ATmega has no barrel shifter. It has to use a little loop to shift by N bits. To get a nice picture, those shifts are performed only 5,000 times instead of 50,000.

The high power consumption is during the multiplication loop, the low consumption is during the shift loop.

In the end, I had to use a lot of tricks to create the above oscilloscope capture, because there was a substantial amount of 50 Hz hum on the measured input signal. Since the repetition rate of the signal I was interested in was not locked to that 50 Hz AC mains signal, most of the “noise” went away by averaging the signal over 128 triggers.

The other trick was to use the scope’s “DC offset” facility to lower the signal by 80 mV. This allows bumping the input sensitivity all the way up to 2 mV/div without the trace running off the screen. An alternative would be to use AC coupling on the input, but then I’d lose the information about the actual DC levels being measured.

As you can see, shifts by a variable number of bits do take quite a lot of time on an ATmega, relatively speaking!

Update – As noted in the comments, a shift by “321” ends up being done modulo 256, i.e. 65 times. If I change the shift to 3, the run times drop to being comparable to a multiply. The power consumption effect remains.

Yesterday, I made an effort to remove some glitches which I thought were due to the switching regulator used inside the ±15V DC-DC converter. To be honest: it didn’t really make any sense when I saw this on the scope…

Here’s the signal with the power supply turned off, and only the ground cable still connected:

Clearly the same signal – it appears even when the 1 meter ground cable isn’t tied to anything: it’s an antenna!

In other words: I’m probably just picking up one of the FM transmitters in this region. I should of course have turned on the scope’s built-in 20 MHz bandwidth filter. There was no reason to look for frequencies that high with a switcher in the 60..100 KHz region. And the fact that neither a bypass capacitor nor various inductors made much difference should have been a clue. How embarrassing.

The dual power supply described yesterday had a nasty spike every 5 µs. I tried damping them with one of these:

(they are called “varkensneus” – pigs nose – in Dutch, ’cause that’s what they look like, seen from the end)

But the results were not very substantial when adding one to the supply output. When I added one on both the input and the output of the 7812 regulator, things did improve a bit further:

The yellow trace is the output with ferrite core between the DC-DC converter output and the 7812 linear regulator input, and the blue trace is from a second ferrite core added at the end, i.e. the linear regulator’s output pin.

Note the scale of this oscilloscope capture: 10 nsec/div, so this is a 100 MHz high-frequency signal of about ± 200 mV. The second ferrite core almost halves these spikes’ amplitudes.

In conclusion: these are very brief ± 100 mV glitches, super-imposed on the +12V supply output voltage – i.e. about ±1% of the regulated supply output voltage. The glitches don’t change much with a 1 kΩ load, i.e. 12 mA.

It’s an artifact of the switching inside the DC-DC converter – looks like there’s not much more I can do about it!

The first option would be to take a dual-winding 12 VAC transformer, add a bridge rectifier, two beefy electrolytic capacitors, and voilà: ±12V, right?

Not so fast… this is called an unregulated supply. It has a couple of drawbacks: 1) the voltages will actually be considerably larger than ±12V, 2) the voltages will change depending on the current drawn, and 3) the voltages can have a lot of residual ripple voltage. Let’s go through each of these:

Voltage levels – a 12 VAC transformer generates a 50 Hz alternating current (60 Hz in the US) with an RMS voltage of about 12 VAC. For a sine wave, this corresponds to a peak voltage which is 1.414 times as high, i.e. about 17 Volts at the peaks. With a bridge rectifier, you end up topping each of the two caps off at nearly 17V DC (minus a diode drop or two).

Regulation – or rather: lack thereof. Since the input is a sine wave which only peaks at 17V, the caps will be charged up to this value only a couple of dozen times per second. In between, current drawn will simply discharge them, causing the voltage to drop. Large current = much lower voltage.

Ripple voltage – this variation on the power supply is called ripple. It’ll be either the same frequency of AC mains, or double that value – depending on the rectification circuit used. So that’s a 50..120 Hz signal on top of what was supposed to be a fixed supply voltage (that’s why bad audio amplifiers can “hum”).

There’s a very simple solution to all these issues: add 2 linear regulators to generate a far more stable supply voltage (one for the positive and one for the negative supply). The most widely used regulator chips are the 78xx series (+) and the 79xx series (-). You give them a few more Volts than what they are designed to deliver, add a few caps for electrical stability, and that’s it. In this case, we need one 7812 and one 7912 to get ±12V.

But I’m not so fond of power line transformers in my circuits, because you have to hook them up to AC mains on one side – that’s 230 VAC, needing lots of care to prevent accidents. Besides, we only need a few dozen milliamps for this Component Tester anyway.

So instead, I decided to use a DC-to-DC converter – a nifty little device which takes DC in and transforms it to another DC level. The nice thing is that there are “dual” variants which can generate both positive and negative voltages at the same time.

I picked the Traco TMA0515D, which generates up to 30 mA @ ±15V, using just 5V as input. Its efficiency is specified as about 80%, so the 900 mW it supplies will need about 1.125W of input power. At 5V, that translates to 225 mA, well within range of a USB port – how convenient!

Here is the circuit I’ve built up:

As you can see, it uses very few components. And the output is galvanically isolated from the input supply – nice!

Such DC-DC converters are surprisingly small, at least for low-power units like this one (black block on the left):

With a bit of forethought, almost everything can be connected together with its own wires:

It worked as expected (caveat: the 78xx and 79xx pinouts are different!), but there were two small surprises:

the unloaded DC-DC converter output was about ±25V, these units are clearly not internally regulated!

the outputs from this assembled unit are indeed + and – 12V, but with some residual switching noise:

That DC-DC converter appears to be based on a 100 KHz switching regulator (5 µs between on and off transitions), and these spikes are making it all the way to the output pins, straight through those linear regulators!

It probably won’t matter for a component tester operating at 50..1000 Hz, but this too should be fairly easy to fix – by inserting a couple of ferrite beads for example: small inductors which filter out such high frequency “spikes”.

With analog circuitry, stable and smooth power supplies tend to be a lot more important!

Here’s another idea in the continuing search for long autonomous JeeNode run times:

The basic circuit is an Eneloop AA(A) cell, driving the AA Power Board to boost its voltage to 3.3V. There’s a 1 kΩ resistor in series with the battery, as well as a Schottky diode to limit the voltage drop to about 0.3V during times of “high” current consumption. I’ll explain why later on.

On the input side is a really simple circuit: a solar cell with a series diode, simply feeding the Eneloop battery when there is solar energy available.

The solar cell I’m using is that same 4.5V @ 1 mA cell I’ve been using all the time in my recent experiments. It is surprisingly good at generating some electricity indoor, even behind the coated double-glazing we have here.

The 1 kΩ resistor in series will let me measure the actual current flowing across it – 1 µA will read out as a 1 mV drop (that’s Ohm’s law again, of course!). So with a charging current of up to say 200 µA, this conveniently matches the 200 mV lowest range of most multimeters. And 0.2V is not a dramatic voltage drop, so the circuit should continue to work – almost the same as without those measurement resistors included.

A similar 1 kΩ resistor has also been inserted between the battery and the AA Power Board, but in this case we have to be more careful: a JeeNode will briefly pull 25 mA while in transmit mode, and the 1 kΩ resistor would effectively shut off input power with such currents. So I added a diode with minimal forward drop in parallel – it’ll interfere with my readings, but I’m really only interested in the ultra-low power consumption phases.

Here’s my “flying circus” concoction:

I’ve added some wires to easily allow clipping various meters on.

Now, clearly, 4V is way over the 1.3V nominal of an Eneloop battery. But here’s why this setup should still work:

this solar cell is so feeble that its voltage will collapse when drawing more than a fraction of a milliamp

solar cells may be shorted out – doing so switches them from constant-voltage to constant current mode

As for the Eneloop, my assumption is that it doesn’t really care much about being overcharged at these very low power levels. In the worst case of continuous sunshine for days on end, it’ll be fed at most 1 mA, even when full. That will probably just lead to a tiny amount of internal heating.

So let’s try and predict how this will work out, in terms of battery lifetimes…

I’ll take a JeeNode + Room Board as reference point, which draws about 60 µA continuous, on average (50 µA for the PIR, which needs to remain always-on). That’s on the 3.3V side of the AA Power board. So with a (somewhat depleted) AA battery @ 1.1V, that means the battery would have to supply 180 µA with a perfect boost regulator.

Unfortunately, perfect boost regulators are a bit hard to get. The chip on the AA Power Board does reasonably well, with about 20 µA idle and about 60..70% conversion efficiency. Let’s just lump those together as 50% efficiency – then the continuous power draw for a Room Node would be about 360 µA. Let’s round that up to 400 µA.

An Eneloop AA battery has about 1900 mAh capacity, but it loses some energy due to self-discharge. The claim is that it retains 85% over 2 years, so this battery can effectively give us about 1600 mAh of power.

The outcome of this little exercise, is that we ought to get some 4000 hours run-time out of one fully-charged AA cell, i.e. 166 days, almost six months. Not bad, but a little lower than I would have liked to see.

If the solar cell were to deliver 0.5 mA for 4 hours per day, averaged over an entire year (that might be optimistic), then that’s 4 x 365 x 0.5 = 730 mAh per year. That comes down to an average current of 83 µA.

IOW, roughly one fifth of the total power needs could be supplied by the solar cell. Not enough for total autonomy, but still, it’s a start. Note that most of these last figures were pulled out of thin air at this stage: I don’t know yet!

Yet another idea would be to add an extra diode from the solar cell straight to the JeeNode +3V pin. IOW, when there is sufficient sunlight, we off-load the boost circuit altogether and charge up a capacitor of say 100..1000 µF on the JeeNode itself. No more losses, other than the AA Power Board’s quiescent current consumption.

Welcome to the Thursday Toolkit series, about tools for building Physical Computing projects.

(I won’t call this a “lab power supply”, for reasons explained below)

In a weblog post a while ago, I took apart a standard computer power supply unit (PSU). Now, instead, let’s do the opposite and turn it into a useful tool for experimentation with electrical circuits:

What you see here is a neat little way to repurpose any standard ATX power supply. Just snip off most of the wires, except for that 20-pin connector, and assemble this neat little ATX adapter board by Benjamin Jordan:

It’s available as a simple kit with a few basic components and all the connectors and binding posts.

The reason this is convenient is that it makes it somewhat easier to work with an ATX power supply (especially if it gets mounted on or near that power supply). There are push-buttons to toggle the supply on and off (except for the 5V standby voltage on the rightmost blue post, which is always on). There’s a LED to indicate whether the power supply is on (red) or in standby mode (green), and there’s an orange LED to indicate that power is OK.

All the main voltages are nicely arranged on binding posts, with matching ground return posts (all tied together internally), and there are holes to get to those same voltages via alligator clips – this is clearly for experimentation!

There is no high voltage anywhere, so this thing is completely safe in terms of voltage. But there is still a risk:

The currents available in most PC power supplies are phenomenal: 25 and 35 Amps on the 3.3V and 5V voltage rails, respectively. That’s what modern CPU’s and memory chips and all the supporting logic need, nowadays. The +12V supply is also pretty powerful, and normally used for all those terabyte disk drives people seem to be using.

This means that no matter how we touch it, it won’t hurt us – anything under 40V is considered safe, since our skin resistance prevents any serious amount of current from flowing. But an electrical short circuit can (and will!) still easily vaporize thin copper traces on a low-power PCB. In other words: this is totally safe in terms of voltage, but the currents caused by shorts can generate sparks and enough heat to destroy components, wires, and PCB’s.

The above PCB itself is ok – its wide and thick gold-plated copper traces were designed to carry heavy currents.

What this means is that this setup is indeed a very cheap way to get lots of useful voltages for experimentation, but that it’s not the same thing as a “laboratory power supply” which also needs to have adjustable current limits.

Here are the voltages I measured coming out of this thing:

+5V standby, actual value, unloaded: 5.16 V

+3.3V, actual value, unloaded: 3.39 V

+5V, actual value, unloaded: 5.19 V

+12V, actual value, unloaded: 12.01 V

-12V, actual value, unloaded: -11.35 V

Close enough, and more importantly: most are slightly high. That means we could add very precise low-dropout regulators to get the voltages exactly right, or we could add a current-sensing circuit and limiter, to get the extra feature needed to turn this into a cheap yet beefy lab power supply.

As long-time readers will know, I’ve been working on and off on a project called JeeMon, which bills itself as:

JeeMon is a portable runtime for Physical Computing and Home Automation.

This also includes a couple of related projects, called JeeRev and JeeBus.

JeeMon packs a lot of functionality: first of all a programming language (Tcl) with built-in networking, event framework, internationalization, unlimited precision arithmetic, thread support, regular expressions, state triggers, introspection, coroutines, and more. But also a full GUI (Tk) and database (Metakit). It’s cross-platform, and it requires no installation, due to the fact that it’s based on a mechanism called Starkits.

I’ve built several versions of this thing over the years, also for small ARM Linux boards, and due to its size, this thing really can go where most other scripting languages simply don’t fit – well under 1 MB if you leave out Tk.

One of (many) things which never escaped into the wild, a complete Mac application which runs out of the box:

JeeMon was designed to be the substrate of a fairly generic event-based / networked “switchboard”. Middleware that sits between, well… everything really. With the platform-independent JeeRev being the collection of code to make the platform-dependent JeeMon core fly.

Many man-years have gone into this project, which included a group of students working together to create a first iteration of what is now called JeeBus 2010.

And now, I’m pulling the plug – development of JeeMon, JeeRev, and JeeBus has ended.

There are two reasons, both related to the Tcl programming language on which these projects were based:

Tcl is not keeping up with what’s happening in the software world

the general perception of what Tcl is about simply doesn’t match reality

The first issue is shared with a language such as Lisp, e.g. SBCL: brilliant concepts, implemented incredibly well, but so far ahead of the curve at the time that somehow, somewhere along the line, its curators stopped looking out the window to see the sweeping changes taking place out there. Things started off really well, at the cutting edge of what software was about – and then the center of the universe moved. To mobile and embedded systems, for one.

The second issue is that to this day, many people with programming experience have essentially no clue what Tcl is about. Some say it has no datatypes, has no standard OO system, is inefficient, is hard to read, and is not being used anymore. All of it is refutable, but it’s clearly a lost battle when the debate is about lack of drawbacks instead of advantages and trade-offs. The mix of functional programming with side-effects, automatic copy-on-write data sharing, cycle-free reference counting, implicit dual internal data representations, integrated event handling and async I/O, threads without race conditions, the Lisp’ish code-is-data equivalence… it all works together to hide a huge amount of detail from the programmer, yet I doubt that many people have ever heard about any of this. See also Paul Graham’s essay, in particular about what he calls the “Blub paradox”.

I don’t want to elaborate much further on all this, because it would frustrate me even more than it already does after my agonizing decision to move away from JeeMon. And I’d probably just step on other people’s toes anyway.

Because of all this, JeeMon never did get much traction, let alone evolve much via contributions from others.

Note that this isn’t about popularity but about momentum and relevance. And JeeMon now has neither.

If I had the time, I’d again try to design a new programming environment from scratch and have yet another go at databases. I’d really love to spend another decade on that – these topics are fascinating, and so far from “done”.
Rattling the cage, combining existing ideas and adding new ones into the mix is such an addictive game to play.

But I don’t. You can’t build a Physical Computing house if you keep redesigning the hammer (or the nails!).

Welcome to the Tuesday Teardown series, about looking inside the technology around us.

After the recent server troubles (scroll down a bit), I had to replace one of the 500 GB Hitachi drives in the Mini.

I decided to switch to a 128 GB SSD for the system disk, with up to 6x faster transfer rates:

It came with an interesting USB-to-SATA adapter included. Which looks like this inside:

And on the bottom:

(sorry: no teardown of the SSD, it’s probably just a bunch of black squares anyway!)

The scary part was replacing the disk in the Mac Mini’s “unibody” Aluminium case – as explained on YouTube.

But I definitely wanted to keep the server setup in a single enclosure. First lots of disk formatting, re-shuffling, and copying and then I just went ahead and did it. The good news: it worked. The system disk is now solid state!

I had hoped that the most accessible drive would be the one to replace, but unfortunately it was the top one (when the Mini is placed on its feet) – so a full dismantling was required – look at all those custom-shaped parts:

The other thing I did was to add an external 2 TB 2.5″ USB drive, to hold all Time Machine backups for both these server disks as well as two other Macs here at JeeLabs. This drive will spin up once an hour, as TM does its thing.

Summary: the JeeLabs server now maintains a good up to date image of the entire system disk at all times, ready to switch to, and everything gets backed up to an external USB drive once an hour (these backups usually only take a minute or so, due to the way Time Machine works). All four VM’s get daily backups to the cloud, as well as now being included in Time Machine (Parallels takes care to avoid huge amounts of disk file copying).

That means all the essentials will be stored in at least three places. I think I can go back to the real work, at last.

There’s plenty of room for growth: 8 GB of RAM and less than half of the system disk space used so far.

The AS1323 boost converter mentioned a while back claims an extraordinarily low 1.65 µA idle current when unloaded. At the time, I wasn’t able to actually verify that, so I’ve decided to dive in again:

A very simple circuit, but still quite awkward to test, due to its size:

Bottom right is incoming power, bottom left is boosted 3.3V output voltage. Input voltage is 1.65V for easy math.

The good news is that it works, and it shows me an average current draw of 4.29 µA:

The yellow line is the output voltage, with its characteristic boost-decay cycle. For reference: the top of the graph is at 3.45V, so the output voltage is roughly between 3.30 and 3.36V (it rises a bit with rising supply voltage).

The blue line is the voltage over a resistor inserted between supply ground and booster ground. I’m using 10 Ω, 100Ω, or 1 kΩ, depending on expected current draw (to avoid a large burden voltage). So this is the input current.

The red line is the accumulated current, but it’s not so important, since the scope also calculates the mean value.

Note that there’s some 50 Hz hum in the current measurement, and hence also in its integral (red line).

Aha! – and here’s the dirty little secret: the idle current is specified in terms of the output voltage, not the input voltage! So in case of a 1.65V -> 3.3V idle setup, you need to double the current (since we’re generating it from an input half as large as the 3.3V out), and you need to account for conversion losses!

IOW, for 100% efficiency, you’d expect 1.6 µA * (3.3V / 1.65V) = 3.2 µA idle current. Since the above shows an average current draw of 4.29 µA, this is about 75% efficient.
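Here is that calculation spelled out in a few lines of Python, using the figures quoted above:

```python
# The quoted idle current refers to the output side, so even a lossless
# converter would draw it scaled up by the Vout/Vin ratio at the input.
v_in, v_out = 1.65, 3.3
i_idle_spec = 1.6e-6                      # quoted idle current (A, at the output)
i_ideal = i_idle_spec * (v_out / v_in)    # input current at 100% efficiency
i_measured = 4.29e-6                      # mean input current from the scope (A)
efficiency = i_ideal / i_measured
print(f"ideal input current: {i_ideal * 1e6:.1f} uA")   # 3.2 uA
print(f"efficiency: {efficiency * 100:.0f}%")           # about 75%
```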

Not bad. But not that much better than the LTC3525 used on the AA Power Board, which was ≈ 20 µA, IIRC.

More worrying is the current draw when loaded with 10 µA, which is more similar to what a sleeping JeeNode would draw, with its wireless radio and some sensors attached:

Couple of points to note, before we jump to conclusions: the boost regulator is now cycling at a somewhat higher frequency of 50 Hz. Also, I’ve dropped the incoming voltage to a more realistic 1.1V, i.e. one third of the output.

With a perfect circuit, this means the input current should be around 30 µA, but it ends up being about 52 µA, i.e. 57% efficiency. I have no idea why the efficiency is so low – I would have expected about 70% from the datasheet.

Further tests with 1.65V in show that 1 µA out draws 6.72 µA, 10 µA out draws 29.6 µA, 100 µA out draws 261 µA, 1 mA out draws 2.51 mA, and 10 mA out draws 30.9 mA. Not quite the 80..90% efficiency from the datasheet.
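Assuming the output stays near 3.3V, the efficiency at each of these load points works out as follows – a quick Python sketch of the arithmetic:

```python
# Efficiency at each measured load point: eff = (Vout * Iout) / (Vin * Iin),
# assuming a 3.3V output and the 1.65V input used in these tests.
v_in, v_out = 1.65, 3.3
measurements = [   # (I_out in A, measured I_in in A)
    (1e-6,   6.72e-6),
    (10e-6,  29.6e-6),
    (100e-6, 261e-6),
    (1e-3,   2.51e-3),
    (10e-3,  30.9e-3),
]
for i_out, i_in in measurements:
    eff = (v_out * i_out) / (v_in * i_in)
    print(f"{i_out * 1e6:8.0f} uA out -> {eff * 100:4.1f}% efficient")
```

The efficiency peaks around the 100 µA .. 1 mA loads and drops off at both extremes – nowhere near the 80..90% figure.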

My hunch is that the construction is affecting optimal operation, and that better component choices may need to be made – I just grabbed some SMD caps and a 10 µH SMD inductor I had lying around. More testing needed…

For maximum battery life, the one thing which really matters is the current draw while the JeeNode is asleep, since this is the state it spends most of its time in. So minimal consumption with 5..10 µA out is what I’m after.

To keep things in perspective: 50 µA average current drawn from one 2000 mAh AA cell should last over 4 years. A JeeNode with Room Board & PIR (drawing 50 µA, i.e. 200 µA from the battery) should still last almost a year.
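The back-of-the-envelope math behind those battery-life figures:

```python
# Rough battery life estimate, ignoring self-discharge and the
# usable-capacity derating of real cells.
def battery_life_years(capacity_mah, current_ua):
    hours = capacity_mah * 1000.0 / current_ua
    return hours / (24 * 365)

print(f"{battery_life_years(2000, 50):.1f} years")    # over 4 years at 50 uA
print(f"{battery_life_years(2000, 200):.1f} years")   # about 1 year at 200 uA
```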

Update – when revisiting the AA Power Board, I now see that it uses 25 µA from 1.1V with no load, and 59 µA with 10 µA load (down to 44 µA @ 1.5V in). The above circuit works (but does not start) down to 0.4V, whereas the AA Power Board works down to 0.7V – low voltages are not really that useful, since they increase the current draw and die quickly thereafter. Another difference is that the above circuit will work up to 2.3V (officially only 2.0V), and the AA Power Board up to at least 6V (which is out of spec), switching into step-down mode in this case.

With all this effort to create a good sine wave for use as a Component Tester, and all the testing on it, I might as well put the CT to the test which started it all, i.e. the one built into my Hameg HMO2024 oscilloscope.

Nothing easier than that – just hook up a probe to the signal it generates on the front panel, and do an FFT on it:

(yeah, a green trace for a change – the other channels were tied up for another experiment…)

Ok, but not stellar: 1.6% harmonic distortion on the 3rd harmonic – more than with the Phase Shift Oscillator.

The frequency is also about 10% high, although that’s usually not so important for a Component Tester.

PS. The ∆L units should be “dB”, not “dBm” – Hameg says they’ll fix this oversight in the next firmware update.

During the month of June, everyone who has previously purchased something from the JeeLabs shop (i.e. before June 1st 2012) can use discount code “JEE2012GO” to get 12% off all items.

And while I’m at it, let me list some other important dates for this period, here at JeeLabs:

June 1st – summer sale kicks off with this post

June 30th – sale ends at midnight, 0:00 CEST time

July 1st – weblog is suspended during the summer

July 15th – shop enters “summer fulfillment mode”

August 16th – shop resumes normal operation

August 31st – daily weblog resumes

Most of these are more or less the same as in the past years. The 2-month summer hiatus on the daily weblog gives me time to recharge and put things in perspective, and gives you some time off to enjoy other activities :)

The summer fulfillment mode is new. I’m currently making arrangements to keep the shop open this time, by having new orders and support handled by someone else. More details will follow once everything is in place.

Anyway – now you know what’s coming as far as JeeLabs and yours truly is concerned for the summer period.

The nice thing about this unit is that it’s fully self-contained (with a 9V battery on the back) and that it has all the bits and pieces on board to check a multimeter’s (DC) voltage, (DC) current, and resistance measurements.

It comes with a calibration report – the voltage has been trimmed to exactly 5V, but the rest will have slightly different values due to component and temperature tolerances. Also, it was calibrated at 70°F (21.1°C):

Here are my HP 34401A measurements, with only 15 minutes warm-up (it’s now about 23.5°C here at JeeLabs):

Very close – more than close enough to start checking the VC170 multimeter I described recently, for example:

Easily within spec. Note that a VC170 only has 400 µA + 400 mA ranges, and 1 mA only shows 2 decimal points.

Here’s a higher-spec VC940, which I find unconvincing – I use it rarely anyway, due to its slow refresh rate:

Here’s a very low end Extech MN15 – it performs worse than the VC170 and can only display values up to 1999:

And finally, as a flash from the past, a cheap analog multimeter – this one is probably over 30 years old:

We’ve sure come a long way, from trying to guess the value while not mixing up all those scales!

This reaffirms my choice of using the VC170 for day-to-day use, with the high-end HP 34401A used for top accuracy and for long-running experiments (handheld multimeters always auto-shutdown much too quickly).

As you can see, the DMMCheck is a superb little tool to quickly do a sanity check of your multimeter(s). There’s now also a DMMCheck Plus with extra signals to check AC voltage + current, and even frequency + duty cycle.

If you take lots of measurements over the years, it’s well worth getting something like this to verify your DMM.

This all relates to a discipline called metrology (no, not “meteo”, but “metrics”) – i.e. the science of measurement.

I couldn’t quite wrap my head around it, so I re-drew it in a different way – it’s still the same circuit:

The key is that a MOSFET can switch a voltage with nearly no voltage drop, i.e. a signal on its gate can turn it from near-infinite to near-zero resistance. There’s no “bipolar junction” involved, therefore no 0.6V .. 0.8V threshold.

I’m not going to explain the circuit, but I’ve built it up and did some measurements to show its behavior:

Since a supercap has all sorts of odd behavior w.r.t. deep discharge and such, I replaced it with a 6800 µF electrolytic cap for this experiment, in parallel with a 1 kΩ resistor to simulate a load of a few mA.

Instead of the solar panel, I’m using a 2.9V power supply, limited to supply at most 10 mA. In other words, when connected to the capacitor, it will reduce its voltage a bit while the charging current is high, and then end up as 2.9V once the capacitor has been fully charged up. This makes it similar to a solar cell with limited capacity.

Enough talking. Let’s see how this thing behaves, while tracking a number of voltage levels at the same time:

There’s a lot to describe here:

the RED line is the voltage from the power supply, i.e. Vsolar

the YELLOW line is the voltage over the capacitor, i.e. Vcc

the GREEN line is the voltage between drain and source of the MOSFET

the BLUE line is the voltage on the gate of the MOSFET

all signals have their zero origin at 2 divisions from the bottom

power was turned on after 2 seconds from the left edge of the screen

power was turned off again about 5 seconds later

The RED line is actually the YELLOW line minus the GREEN line.

The first thing to note is really the whole point of this circuit: the voltage on the capacitor (YELLOW) rises up to 2.84V, while the input voltage (RED) reaches 2.90V, so there’s only a 0.06V voltage drop over that FET while it conducts. That’s a ten-fold improvement over a silicon diode, and three-fold over a Schottky diode.

I’m using a BC557 for the PNP transistor, a BC549 for the NPN transistor, and a VN2222 as MOSFET, just because I happened to have those lying around. That MOSFET in particular is not a great fit here, really.

The other peculiar thing about this circuit is that the MOSFET is used with current flowing through it in the wrong direction, from drain to source! But most MOSFETs won’t mind – they really act a bit like (controllable) resistors.

The most interesting bit is the GREEN signal, i.e. the voltage over the MOSFET. At input levels under about 1.5V it does conduct, but with a substantial voltage drop – first from the built-in diode conducting, and then gradually the MOSFET turns on and its low resistance takes over, with a total voltage drop of some 30..60 mV.

When the input voltage is completely switched off, the MOSFET goes into high impedance mode within a fraction of a second, which can be deduced from the fact that the GREEN and YELLOW lines meet up and overlap.

Lastly, once the voltages drop below about 0.4V, the gate voltage on the MOSFET rises a bit, but this is not enough to turn it on, and also not very important since the whole circuit is now essentially “dead”.

Here’s the second event in greater detail, i.e. when the input voltage drops – or in this case, gets switched off:

The MOSFET is switched off within milliseconds, with the cap now holding a higher voltage than the input.

The result of it all is that the capacitor soaks up almost all the voltage it can get, with no diode forward voltage drop involved. When the input voltage drops, the circuit disconnects it from the cap so it’ll retain its charge. Brilliant!

Update – Wanted to get a bit more info on-screen, so here’s another scope capture (oops, green is now yellow):

It better displays the elegant “swirly” charge ramp, with an odd little 64 mV bump on the MOSFET when the cap is full (maybe that’s the power supply switching to constant voltage mode). Given that the cap voltage reaches 2.82V and is loaded down by a 1 kΩ resistor, we can deduce that the current through the MOSFET must be 2.82 mA at that point, and therefore that its resistance is 23 Ω in this circuit (64.57 mV / 2.82 mA, Ohm’s law again!).
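That deduction is just Ohm’s law applied twice – spelled out in Python:

```python
# Ohm's law, twice: the 1 kOhm load fixes the current, and the drop
# across the MOSFET then gives its effective resistance.
v_cap = 2.82           # V over the 1 kOhm load resistor
i_load = v_cap / 1000  # -> 2.82 mA flows through load and MOSFET
v_fet = 0.0645         # the ~64 mV seen across the MOSFET
r_fet = v_fet / i_load
print(f"R = {r_fet:.0f} ohm")   # about 23 ohm
```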

When running off solar power, ya’ gotta deal with lack of (sun-) light…

As shown in a recent post, a 0.47F supercap appears to have enough energy storage to get through the night, at least when assuming that the day before was sunny and that the nights are not too long.

There are still a couple of issues to solve. One which I won’t go into yet, is that the current approach won’t start up properly when there is only a marginal power budget to begin with. That’s a hard problem – some other time!

But another tactic to alleviate this problem, is to try and sail through a low-power situation by reducing power consumption until (hopefully) more energy becomes available again, later on.

Here’s my first cut at implementing a “survival strategy”, using the radioBlip2 sketch:

It’s all in the comments, really: when power is low, try to avoid sending packets, since turning on the transmitter is by far the biggest power “hog”. And when power is really low, don’t even measure VCC – just slow down even more in maximally efficient sleep mode – I’ve set the times to 5 and 60 minutes. The 1-hour sleep being a last effort to get through a really rough night…

But I’ve also added some kamikaze logic: when running low, you don’t just want the sketch to go into sleep mode more and more and finally give up without having given any sign of life. So instead, when the sketch is about to decide whether it should send a packet again, it checks whether the voltage is really way too low after what was supposedly a long deep-sleep period. If so, and before power runs out completely, it will try to send out a packet anyway, in the knowledge that this might well be its last good deed. That way, the central node might have a chance to hear this final swan song…

The thresholds are just a first guess. Maybe there are better values, and maybe there is a better way to deal with the final just-about-to-die situation. But for now, I’ll just try this and see how it goes.
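For illustration, here is the decision logic sketched in Python (the real sketch is C, and it skips the VCC measurement entirely when power is really low) – the threshold and timing values below are placeholders, not the actual ones used in radioBlip2:

```python
# Simplified survival-strategy decision logic. All values hypothetical.
VCC_OK, VCC_LOW = 3.0, 2.7                           # thresholds, volts
SLEEP_NORMAL, SLEEP_LOW, SLEEP_MAX = 60, 300, 3600   # seconds

def next_action(vcc, after_long_sleep):
    """Return what to do next, given the measured supply voltage."""
    if vcc >= VCC_OK:
        return ("send", SLEEP_NORMAL)   # plenty of power: business as usual
    if vcc >= VCC_LOW:
        return ("sleep", SLEEP_LOW)     # low: skip the transmission
    if after_long_sleep:
        return ("send", SLEEP_MAX)      # kamikaze: one last packet anyway
    return ("sleep", SLEEP_MAX)         # really low: maximum-length sleep

print(next_action(3.2, False))   # ('send', 60)
print(next_action(2.8, False))   # ('sleep', 300)
print(next_action(2.5, True))    # ('send', 3600)
```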

One last point worth mentioning: all the nodes running this sketch can use the same group and node ID, because they are transmit-only. There is never a need to address packets sent towards them. So the BLIP_ID inside the payload is a way to still disambiguate incoming packets and understand which exact node each one came from.

Re-using the same node ID is useful in larger setups, since the total number of IDs in a group is limited to 30.

I’ll do some tests with the above logic. Let’s hope this will keep nodes alive through long, dark winter nights.

So will it ever be possible to run a JeeNode or JeeNode Micro off solar power?

Well, that depends on many things, really. First of all, it’s good to keep in mind that all the low-power techniques being refined right now also apply to battery consumption. If a 3x AA pack ends up running 5 or even 10 years without replacement, then one could ask whether far more elaborate schemes to try and get that supercap or mini-lithium cell to work are really worth the effort.

One fairly practical option would be a single rechargeable EneLoop AA battery, plus a really low-power boost circuit (perhaps I need to revisit this one again). The idea would be to just focus on ultra-low power consumption, and move the task of charging to a more central place. After all, once the solar panels on the roof of JeeLabs get installed (probably this summer), I might as well plug the charger into AC mains here and recharge those EneLoop batteries that way!

Another consideration is durability: if supercaps only last a few months before their capacity starts to drop, then what’s the point? Likewise, the 3.4 mAh Lithium cell I’ve been playing with is rated at “1000 cycles, draining no more than 10% of the capacity”. With luck, that would be about three years before the unit needs to be replaced. But again – if some sort of periodic replacement is involved anyway, then why even bother generating energy at the remote node?

I’m not giving up yet. My KS300 weather station (868 MHz OOK, FS20’ish protocol) has been running for over 3 years now, I’ve never replaced the 3x AA batteries it came with – here’s the last readout, a few hours ago:

After measuring the forward voltage drop over a diode, I should also have measured the reverse leakage current, i.e. how much current the diode lets through when it’s supposed to be blocking. I never did until now, because I couldn’t detect any current in a quick check I did a while back. Time to build a better setup – here’s what I used:

The voltmeter’s own 10 MΩ or so internal resistance will skew the readings by 10%, but that’s no big deal.

It turns out that the reverse leakage current is pretty small when applying 5V:

1N4004 – a high power diode: 1.3 mV = 1.3 nA

1N4148 – a low power diode: 3.4 mV = 3.4 nA

BAT34 – a Schottky diode: 50 mV = 50 nA

That’s nanoamps, i.e. milli-milli-milli-amps. The Schottky diode does indeed leak a tad more than the others. Here are the specs of that BAT34 diode – note that the reverse current could even be used as temperature sensor!
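Converting those readings back to currents only makes sense with a known sense resistance – the mV-to-nA mapping implies a 1 MΩ resistor (an assumption on my part), with the meter’s own ~10 MΩ in parallel accounting for the 10% skew mentioned above:

```python
# Reverse leakage from the voltmeter readings, assuming the voltage is
# measured across a 1 MOhm sense resistor (hypothetical value, inferred
# from the 1 mV = 1 nA mapping in the text).
R_SENSE = 1e6   # ohms

def leakage_na(reading_mv):
    return (reading_mv / 1000.0) / R_SENSE * 1e9   # result in nA

for name, mv in [("1N4004", 1.3), ("1N4148", 3.4), ("BAT34", 50)]:
    print(f"{name}: {leakage_na(mv):.1f} nA")
```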

FWIW, I found a minuscule “RB751S” SMD Schottky diode, about 1 mm long, which does a bit better at 7.0 nA:

It was quite a challenge to get some wires soldered onto it. I used the core of 30 AWG Kynar “wirewrap” wire:

Anyway – the BAT34 is good enough: 50 nA leakage is acceptable while dealing with circuits which consume µA’s.

Yeay! – The JeeNode made it through the night on a 0.47F supercap, for the first time ever at JeeLabs:

Sorry for the awkward / missing scale, here’s some context:

vertical is voltage: 50 = 2V, 100 = 3V, 150 = 4V, 200 = 5V

blue is VCC before sending, green is VCC after sending

graph runs from 11:45 yesterday to 10:45 the next morning, i.e. 23 hours

that’s two VCC measurements and one packet transmission every minute

The supercap had been charged by the solar cell for 3 days, no load. When connecting the JeeNode (BOD set to 1.8V, on-board 100 µF i.s.o. regulator, already running), I placed it in a cardboard box to block out the light:

the first upward blip is at 12:45, during 5 minutes of exposure to sunlight

then back into the box until 18:30, depleting the supercap for a few hours

after that, the node was kept in the light to try and charge up enough for the night

at 20:00, the charge had gone up to 4.42V and 3.86V, respectively

at around 6:30 the next morning, the lowest point was reached: 3.44V and 2.88V

from then on, the cell started charging again from the morning light (no direct sunlight yet)

looks like about 10% of the packets never arrived (probably mostly due to collisions)

At noon, the cap voltage had risen to 4.9V (note that the RFM12B is now operating above its official 3.8V max).

So there you have it: one packet per minute powered by solar energy, harvested indoor near a window.
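A rough sanity check on the overnight current draw, using I = C × ΔV / Δt with the “after sending” readings – crude, since some morning light may already have been charging the cap near the end:

```python
# Average discharge current estimated from the supercap's voltage sag.
C = 0.47             # farad
dv = 3.86 - 2.88     # volts, from 20:00 to the 6:30 low point
dt = 10.5 * 3600     # that same interval, in seconds
i_avg = C * dv / dt
print(f"average draw: {i_avg * 1e6:.0f} uA")   # roughly 12 uA
```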

Update – FWIW, this setup lasted a second day, but then it died again… we’re not done yet!

One last experiment I wanted to do after the recent sine wave circuits, was to compare a few different op-amps.

I’m including the original one here as well – the LM358, running at ±13.6V:

Here’s the LT1413, running at ±14.4V:

And here’s the NE5532ANG, running at ±15.3V:

In each case, the supply voltage was adjusted until the output sine wave was ±10 V, with all other components identical. Note the slight difference in oscillation frequency.

What’s also interesting, is the mean output voltage: it should be 0V with an ideal circuit. Looks like the NE5532ANG performs best – within 1%. It’s described as being an “Internally Compensated Dual Low Noise Operational Amplifier”. The second harmonic is at -51 dB, i.e. 0.28% harmonic distortion – an excellent signal!

As a quick test with that last op-amp, I reduced the supply voltage to ±2.5V – the effect was a slightly higher frequency of 522 Hz, a much lower output of 2.14 Vpp, i.e. ±1.07V, but relatively far off-center: 240 mV. Harmonic distortion rises to 3.5% in this case. But that’s not surprising: the NE5532ANG is only specified down to ±3V, and it’s not a “rail-to-rail” op-amp, which means it cannot generate an output voltage too close to its supply voltage (with a ±5V supply, distortion drops back to 1.25%).
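The dB-to-distortion conversions used here are simple enough to script (the ratio is 10^(dB/20)):

```python
import math

# Convert a harmonic's level (dB relative to the fundamental) to a
# distortion percentage, and back.
def db_to_pct(db):
    return 10 ** (db / 20) * 100

def pct_to_db(pct):
    return 20 * math.log10(pct / 100)

print(f"{db_to_pct(-51):.2f}%")    # 0.28% for the NE5532ANG
print(f"{db_to_pct(-49):.2f}%")    # 0.35%
print(f"{pct_to_db(3.5):.1f} dB")  # the low-supply case, about -29 dB
```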

Welcome to the Thursday Toolkit series, about tools for building Physical Computing projects.

Another post about frequencies – this time I’ve assembled a DFD4A from Almost All Digital Electronics:

It’s a low-cost frequency counter which goes all the way up to 3 GHz. Here it’s measuring a 10 MHz signal from my Frequency Generator, while synchronized to the Rubidium frequency standard.

As you can see, it’s spot on – the last digit flips between 0 and 1 every so often, that’s all.

As with the Capacitance Meter I assembled recently, this kit comes with detailed build instructions. Except that this time I didn’t really want to build it, so I got the pre-built version instead, including the connectors and (fully) plastic enclosure. The front plate already has all the right cutouts, and a printed piece of paper (!) glued to the front. Works ok, but I suspect that it’ll get dirty over time.

The unit came with all the parts, I just had to solder a few components and wires in place after inserting all the switches and BNC connectors.

One thing missing was the 9V battery clip – but not to worry, I have a couple of those lying around anyway.

The reason to get this particular unit was its high frequency range of well over the 868 MHz and 2.4 GHz frequencies I may want to measure here at JeeLabs. The main difference with a professional unit is probably the fact that it doesn’t have many input signal options:

HF measures from 0 to 30 MHz, with 5 Vpp max into a high impedance input

UHF measures from 10 to 3000 MHz over a 50 Ω input (max 15 dBm)

No way to directly measure the 868 MHz output from an RFM12B, I suspect – i.e. it probably won’t be sensitive enough to measure 0 dBm.

The slow measurement mode continuously collects data for one second, so you get 1 Hz resolution on the HF range and 100 Hz resolution on the UHF range (since that’s essentially just a ÷ 100 prescaler).

The fast measurement mode runs 10 times per second, i.e. a gate time of 0.1s – so this gives 10 Hz resolution on HF and 1000 Hz (1 kHz) resolution on UHF.
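The resolution in all four combinations follows from the gate time and the prescaler – a gated counter resolves one count per gate interval, multiplied by whatever prescaler sits in front of it:

```python
# Frequency resolution of a simple gated counter.
def resolution_hz(gate_s, prescaler=1):
    return prescaler / gate_s

print(resolution_hz(1.0))        # HF,  slow: 1 Hz
print(resolution_hz(1.0, 100))   # UHF, slow: 100 Hz
print(resolution_hz(0.1))        # HF,  fast: 10 Hz
print(resolution_hz(0.1, 100))   # UHF, fast: 1000 Hz
```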

It’s a bit odd that the display shows more significant digits than are being measured in all but FAST + HF mode, but no big deal – the current mode is clearly visible from the switch settings.

Knowing that the counter is very accurate (for now – it’ll no doubt gradually drift slightly), it’s time to find out how accurate the TG2511 AWG’s frequency is when not synchronized to the Rubidium standard:

Software – a virtual world, artificially constructed, and limited only by imagination

Hardware – a real world, where electrons and atoms set the rules and the constraints

I’ve long been pondering about the difference between the two, and how I enjoy both, but in very different ways. And now I think I’ve figured out, at last, what makes each so much fun and why the mix is so interesting.

I’ve spent most of my professional life in the software world. This is the place which you can create and shape in whatever way you like. You set up your working environment, you pick and extend your tools, and you get to play with essentially super-natural powers where nearly everything goes.

No wonder people like me get hooked to it – this entire software world is one big addictive game!

The hardware world is very different. You don’t set the rules, you have to discover and obey them. Failure to do so leads to non-functional circuits, or even damage and disaster. You’re at the mercy of real constraints, and your powers are severely limited – by lack of knowledge, lack of instruments, lack of … control.

Getting stuff working in either world can be exhilarating and deeply satisfying. Yes! I got it right! It works!

All of this appeals to an introvert technical geek like me, and all of this requires little human interaction, with all its complex / ambiguous / emotional aspects. It’s a competition between the mind and the software / hardware. There are infinitely many paths, careers, and explorations lying ahead. This is the domain of engineers and architects. This is where puzzles meet minds. I love it.

The key difference between software and hardware, when you approach it from this angle, is how things evolve over time: with software, there is no center of gravity – everything you do can become irrelevant or obsolete later on, when a different approach or design is selected. With hardware, no matter how elaborate or ingenious your design, it will have to deal with the realities of The World Out There.

So while after decades of software we still move from concept to concept, and from programming language to programming language, the hardware side more and more becomes a stable domain with fixed rules which we understand better and better, and take more and more advantage of.

In a nutshell: software drifts, hardware solidifies.

Old software becomes useless. Old hardware becomes used less. A very subtle difference!

The software I’ve built in the past becomes irrelevant as it gets superseded by new code and things are simply no longer done the way they used to be. There’s no way to keep using it, beyond a certain point.

Hardware might become too bulky or slow or power-consuming to keep using it, or it might mechanically wear out. But I can still hook up a 40-year old scope and it’ll serve me amazingly well. Even when measuring the latest chips or MOSFETs or LCDs or any other stuff that didn’t exist at the time.

Software suffers from bit rot – this happens especially when not used much. Hardware wears out, but only when used. If you put it away, it can essentially survive for decades and still work.

In practice, this has a huge impact on how things feel when you fool around – eh, I mean experiment – to try and to learn new things.

Software needs to be accompanied by documentation about its internals and it needs to be frequently used and revisited to keep it alive. Writing software is always about adding new cards to an existing house of cards – assuming I can remember what those cards were before. It’s all virtual, and it tends to fade and become stale if not actively managed.

Hardware, on the other hand, lives in a world which exists even when you don’t explore it. Each time I sit down at my electronics bench, I think “hm, what aspect of the real world shall I dive into this time?”.

I love ‘em both, even though working on software feels totally different from working on hardware.

Welcome to the Tuesday Teardown series, about looking inside the technology around us.

Today’s episode will be a short one, it’ll become clear why halfway down this page…

This is a little bargain LED flashlight, nothing to it really:

Three AAA (not AA) cells, a toggle button, 24 + 4 white LEDs, and that’s it. Press the button once, and the 4 LEDs on the side turn on, press again to light the 24 on the top, and again to turn it off.

Quite a bright light BTW. The 4 LEDs draw 190 mA, with the 24 it rises to 270 mA. That’s perhaps 4 hours of use with 24 LEDs before the batteries run out.

The circuit is as ridiculously simple as can be – one 4.7 Ω resistor and a switch:

That “metal” reflector is actually plastic with a chrome finish.
The PCB is one-sided, no doubt to lower the cost:

(it won’t take much bending to create a short with that top wire!)

Using Ohm’s law (V = I x R), we can deduce the LED forward voltage X from the drop across the resistor: 4.5 – X = 0.190 x 4.7 – in other words, X = 4.5 – 0.190 x 4.7 ≈ 3.6V. Note that the light intensity will vary considerably with battery voltage and that this lamp won’t work at all with 3 AAA EneLoop batteries, which only supply 1.2V to 1.3V each when fully charged.
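The same calculation in Python form:

```python
# The battery voltage splits between the 4.7 ohm resistor and the LEDs.
v_batt = 4.5   # three fresh AAA cells
i = 0.190      # measured current, A
r = 4.7        # series resistor, ohms
v_led = v_batt - i * r
print(f"Vf = {v_led:.1f} V")   # about 3.6V
```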

The reason I’m opening up this trivial little gadget is not to marvel at the deep electronic engineering that went into it, but to show how custom plastics and a custom case make something quite practical and nice to the touch. The top and bottom have a rubbery feel to them. The bottom has a little plastic hook in it, which can be folded out.

The bigger news today is a bit of a mess, unfortunately.

Last night I decided to upgrade the JeeLabs server from Mac OSX 10.7.3 to 10.7.4 – that update had been out for a few days, worked fine on two other machines here, so it seemed safe to apply the update to the server as well.

It failed.

This server is connected via wired Ethernet, and I usually only look at the GUI via a VNC-like “Screen Sharing” mechanism built into Mac OSX. It works well, because that connection is re-established even when the machine is in an exclusive “Updating…” mode, so you get to track progress even while the system is busy replacing some of its own bits and pieces. No screen needed, even though part of the admin interface sometimes uses the GUI.

Last night, the server failed to come back online. Which is a major hassle, because then I have to move it to another spot to hook up a mouse, keyboard, and monitor to see what’s going on. Never happened before.

Trouble is (probably), that I turned the darn thing off forcefully. I knew that all the VM’s had been properly shut down, and I heard the characteristic reboot “pling”, so I thought it was waiting for a GUI response… or something.

Then the trouble started. Hooked it up, did a restart: no go. So I restarted it in recovery mode, and did a disk check/repair of all the disks. Guess what: the startup disk with all VM’s could not be repaired… whoops!

Time to kick my backup strategy in gear. I have two in place: local hourly Time Machine backups to a second drive, and daily backups for all VM’s to the cloud.

To make a very long night story short: the local hourly backups are fine, but they do not include the VM’s (whole-file backups of a VM disk every hour is not really practical). And the daily backups? Well, they are indeed all there – I can get any day in the past 3 months back, for any of the 4 VM’s. Awesome.

But Turnkey Linux does things a bit differently. Very clever in fact: it only backs up the minimum. The Linux Debian packages for example: these are not backed up, but re-installed from some other source. The rest is a well thought-out mix of full and incremental backups, and it all works just as expected.

Except that my VM’s are about two years old now, and no longer the latest base images used by Turnkey Linux. No problem, they say: you can get the latest, and then recover your own stuff on top of that.

So I spent about 6 hours trying to work out how to get my VM’s back up from the Amazon S3 storage. No joy. I can see all the files being restored, but the result is not a working VM. At some point, package installs & updates hang, with either udev restart problems or bootdisk image generation problems.

And now the crazy bit: the JeeLabs weblog + forum + café sites are all back up again (phew!). I restored from Time Machine to a freshly freed disk partition, and restarted the Mac – it’s alive! Right now, the server is running from the new disk partition, but with the 4 VM disk images still on the damaged partition. So apparently they did not get any damage, although the Mac file system structure on that disk seems to be hosed.

I’ll spend some time thinking about how to clean up this mess, and how to avoid it in the future. The good news is that I lost no data – just a lot of time and some hair. Yikes … this really was uncomfortably close to the edge!

The moral: test the backup strategy regularly. It can still break, even when not changing anything!

Update – All systems are “go” again.

Update 2 – Final diagnosis: one of the 2 internal disks was getting too hot, leading to intermittent failure, so it was hardware after all – unrelated to the 10.7.4 software. And it was probably all my fault, because I placed a (fairly warm) router on top of the Mac Mini. I’m going to replace the failed system drive with an SSD.

Let’s move on, now that we have a clean sine wave. The goal is to produce a ±10V sine wave to use for constructing a Component Tester. The sine wave produced so far was merely ±65 mV.

I re-used the same circuit as yesterday, but with slightly different settings. First of all, I replaced the op-amp by an LM358, which can handle higher voltages. Next, I halved all the R’s to 5 kΩ and doubled all the C’s to 0.2 µF. This reduces the loading of the feedback loop – it shouldn’t really affect the frequency.

To increase the output voltage, I connected the oscillator output signal to a non-inverting op-amp circuit:

In a nutshell: this circuit tries to keep Ve as close to zero as possible at all times. IOW, the op-amp will constantly adjust its output so that the tap on the Rf:Rg voltage divider tracks the Vin voltage on the “+” input pin.

Using Rf = 10 kΩ, and Rg = 470 Ω, its gain will be about 22x. The nice property of this circuit is that it has a very high input impedance, so there is virtually no current draw from the oscillator.
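The gain formula, plus the oscillator swing needed to reach ±10V out:

```python
# Non-inverting amplifier: the output is divided by Rf:Rg before being
# compared with the input, so the gain is 1 + Rf/Rg.
rf, rg = 10e3, 470.0
gain = 1 + rf / rg
print(f"gain: {gain:.1f}x")               # about 22x
v_in_needed = 10 / gain                   # oscillator swing for +-10V out
print(f"needs +-{v_in_needed:.2f} V in")  # about +-0.45V
```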

And sure enough, the output of this op-amp is a sine wave with many volts of output swing. Now it’s simply a matter of cranking up the supply voltages to ±13.6V and bingo, a ±10V sine wave:

Very clean. Better even than the original circuit – the 2nd harmonic is now -49 dB w.r.t. the base frequency. That’s just 0.35% of harmonic distortion – excellent!

That second op-amp came for free, since an LM358 DIP-8 package has two of them anyway. So the first op-amp is oscillating (at about 470 Hz) and the second op-amp brings the output level to ±10V.

It’s quite a mess on the mini-breadboard I used, but who cares – that’s what prototypes are for:

One last check is needed to make sure that the LM358 is suitable. A component tester measures the effects of an unknown component in series with a 1 kΩ resistor. So in the worst case, with a simple wire as “unknown component”, the maximum current through that resistor will be ±10 mA. Luckily, according to the specs, an LM358 can supply at least 10 mA, and typically up to 20 mA on its output.

So that’s it: our Component Tester will need a ±13.6V supply, an LM358, and a few R’s and C’s. That supply voltage is not critical, as long as it’s stable – the output level could be adjusted to ±10V via a trimpot.

After the pretty bad sine wave trial of the last two days, it’s time to try another circuit:

This is a “Phase Shift Oscillator” from the same op-amp book as the other one. I used half a TLV2472.

This one is actually a bit simpler to explain: the op-amp is set up with 25..50x amplification, i.e. almost a comparator (with 50x amplification, a 50 mV input above or below the 2.5V midpoint will drive the output to its limit). And indeed, the output signal of the op-amp looks somewhat like a heavily clipped sine wave:

The 3 resistors and 3 capacitors create 3 RC “low-pass” filters in series, removing all the higher frequencies, i.e. harmonics. A fairly clean sine wave comes out at the end, as you can see here:

The only problem is that the signal level has been reduced from a ±2.5 V level to ≈ ±65 mV, a 40-fold reduction!

So the op-amp itself has to amplify that level back up to produce the clipped ±2.5V signal again.

The frequency is determined by “phase shifts”. Each RC filter changes the phase of its input signal, and it will be by 60° at a certain frequency, so that 3 of them in series will then shift it by 180°. Since the signal is fed back to the “-” pin of the op-amp, that’s exactly the proper signal to generate the opposite output, i.e. shifted 180° out of 360°. This analog stuff gets complicated – don’t worry too much about it: just pick R and C values to get the right frequency, and make all of them the same.
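To make that 60° idea a bit more concrete, here’s the idealized math, assuming three identical and unloaded RC stages – the R and C values below are just hypothetical examples, and since the stages in the real circuit load each other, the actual oscillation frequency ends up lower than this simple formula predicts:

```javascript
// One (unloaded) RC low-pass shifts the phase by atan(2*pi*f*R*C).
// Three identical stages reach the required 180 degrees when each one
// shifts 60 degrees, i.e. when 2*pi*f*R*C = tan(60 deg) = sqrt(3).
function idealPhaseShiftFreq(r, c) {
  return Math.sqrt(3) / (2 * Math.PI * r * c);
}

// hypothetical example values: 10 kOhm and 0.1 uF
console.log(idealPhaseShiftFreq(10e3, 0.1e-6).toFixed(0) + ' Hz'); // -> 276 Hz
```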

I used 0.1 µF caps i.s.o. 10 nF caps, i.e. 10x larger than the original circuit.
With these values, the oscillation in my setup turned out to occur at just about 440 Hz, i.e. a pure musical “A” tone!

I did have to increase the gain (1.5 MΩ / 55.2 kΩ = 27 in the above setup) to force oscillation. I changed RF to 1 MΩ and RG to 22 kΩ, for a gain of about 45. This RG value is a bit low – it loads down the last RC section quite a bit.

What you’re seeing here is a classical example of a negative feedback loop, which ends up in a very stable state of oscillation. It oscillates because the RC chain delays the feedback signal by half a cycle – about 1.1 ms at 440 Hz. So the op-amp constantly overshoots around its mid-point (the 2.5V applied to the “+” input), but does so in a very controlled way. The amplitude can’t increase any further, since the op-amp is clipping at its limits already, and the amplification factor is large enough to keep boosting the swing up to that limit. You can see the startup ramp and stabilization when powering up:

Here’s the FFT spectrum analysis of the generated sine wave:

A clean signal compared to the previous experiment. The 2nd harmonic is ≈ 42 dB below the fundamental wave, the rest is even lower. Using this calculator, we can see that this represents about 0.8% harmonic distortion.
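In case that calculator isn’t at hand, the dB-to-percentage conversion is a one-liner (a plain Node.js sketch, function name is mine):

```javascript
// Convert a harmonic level in dB below the fundamental (dBc) into a
// distortion percentage: every 20 dB is a factor of 10 in voltage.
function dbcToPercent(db) {
  return 100 * Math.pow(10, -db / 20);
}

console.log(dbcToPercent(42).toFixed(2) + ' %'); // -> 0.79 %
console.log(dbcToPercent(49).toFixed(2) + ' %'); // -> 0.35 %
```

The second call matches the -49 dB figure of the amplified version described elsewhere on this weblog.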

The only issue is that the signal is much weaker than the ±10V needed for a standard Component Tester.

After yesterday’s failed attempt to generate a clean sine wave, I started experimenting a bit further. How could the Op-amp book be so wrong about the quadrature oscillator circuit?

The nice thing about op-amps in DIP-8 packages, is that most of them use the same pinout, so it’s very easy to swap them out and test different brands and types. The TLV2472 only supports up to 6V as power supply, most of the other ones go much higher – usually above 30V, i.e. ±15V.

Here’s the list of op-amp chips I tried (yeah, got quite a bunch of them in my lab, for various reasons):

TLV2474

LM358N

LM833N

NE5532ANG

OP2340

NJM14558D

MCP6023

LT1413

All of them had similar behavior, i.e. clipping at both limits of the voltage range, except for the LT1413:

Still nowhere near a sine wave, BTW. But what’s more interesting is that the voltage swing of this signal was just 4.5 Vpp, while the op-amp was being driven from a ±15V power supply in this particular case. And for some reason, it was “oscillating” at 1.25 kHz (about 8x higher than the other mode).

I have no idea what was going on. When trying to reproduce this a second time, I couldn’t get this behavior back. I suspect a loose connection, or perhaps some odd interaction due to the breadboard.

I’m not really interested in tracking down this issue, since it looks like this quadrature oscillator circuit is not suitable for a Component Tester – not without some sort of amplitude regulation anyway.

So there you have it – analog circuits also need to be debugged, as you can see!

Update – this issue has now been resolved, see the comments on yesterday’s weblog post.

After the recent pretty disappointing results with a transformer-based Component Tester, I’d like to try and generate a ± 10 V sine wave at approximately 50 Hz in some other way. Using as few components as possible.

This is where we enter, eh, squarely into the analog electronics domain. Yes, we could generate it with an ATmega, but frankly that sounds like a bit of overkill, would require a fair amount of filtering to remove residual switching effects, and besides we’d still have to amplify it up to 10 Vpp.

Time to introduce some new circuitry!

One of the most incredible electronic building blocks, invented in the World War II era, was the Operational Amplifier, or “op-amp” for short.

There’s way too much to say about this amazingly universal circuit, which even has its own schematic symbol:

A positive and negative power supply pin, a positive and a negative input, and an output pin. That’s it.

I’ve only just started exploring op-amps, really – one superb resource on the web comes in the form of a free eBook from 2002 on the Texas Instruments site, titled “Op Amps For Everyone”, by Ron Mancini.

In his chapter on Sine Wave Oscillator, he mentions a “Quadrature Oscillator” built from two op-amps:

It uses very few components. This one was dimensioned for about 1.6 kHz, so I started with capacitors ten times as large, i.e. 0.1 µF, to lower the oscillation frequency. Here’s the result, using a TLV2472 dual op-amp:

Powered by a supply of ±2.5V (i.e. 0 / 2.5V / 5V), I see this result on the scope, when attached to the sine output:

Yeah, right. Clipping like crazy, i.e. overshooting into the limiting 0V and 5V power lines. The FFT shows it’s not anywhere near a pure sine wave, even though the shape vaguely resembles one:

A pure sine wave would have a single peak at the oscillating frequency.

Here’s the cosine output, again showing that it’s running way outside its linear range:

So yeah, we’re generating a 160 Hz signal, but it’s no sine wave and it would be useless as a Component Tester.

Welcome to the Thursday Toolkit series, about tools for building Physical Computing projects.

(this is again a bit of a side excursion, about checking the quality of a measuring instrument)

I recently visited a friend who had to get his frequency meter’s calibration verified to a fairly high precision. Thinking of the Rubidium clock I got from eBay, he came up with the idea of using a transfer standard.

The thing with accuracy, is that you have to have an absolute reference to compare against. One option is to go to a “calibration lab” and have them test, adjust, and certify that your instrument has a certain accuracy. Awkward, expensive, and usually a bit over the top for “simple” hobby uses.

So the other way to do things is to “transfer” the calibration in some way. Buy or build a device which can keep the desired property stable, calibrate it to some standard, move it to where the instrument needing calibration is located, and compare the two. Or vice versa: match it to the instrument, then compare it with a standard.

The latter is exactly what we ended up doing. First we created a little Arduino daughter board with a “VC-TCXO” on it: that’s a “Temperature Compensated Xtal Oscillator” which can be fine-tuned through a voltage. Here’s the setup, created and built by Rohan Judd:

On the left, an SPI-controlled digital potmeter, on the right a VC-TCXO running at 10 MHz.

Via a sketch, the VC-TCXO was fine-tuned to produce an exact 10.000,000 MHz readout on the frequency counter we wanted to verify. This was done at about 18°C, but a quick test showed that this VC-TCXO was indeed accurately keeping its frequency, even when cooled down by more than 10°C.

I took this device back home with me, and set up my frequency generator to use the Rubidium clock as reference. So now I had two devices on my workbench at JeeLabs, both claiming to run at 10 MHz …

Evidently, they are bound to differ to some degree – the question was simply: by how much?

Remember Lissajous? By hooking up both signals to the oscilloscope, you can compare them in X-Y mode:

Channel 1 (yellow) is the VC-TCXO signal, some sort of odd square wave – I didn’t pay any attention to proper HF wiring. Channel 2 (blue) is the sine wave generated by the frequency generator.

The resulting image is a bit messy, but the key is that when both frequencies match up exactly, then that image will stay the same. If they differ, then it will appear to rotate in 3D. It’s very easy to observe.

The last trick needed to make this work is simply to adjust the frequency generator until the image does indeed stop rotating. This is extraordinarily sensitive – the hard part is actually first finding the approximate setting!

After a bit of searching and tweaking, and after having let everything warm up for over an hour, I got this:

IOW, the frequency I transferred back to JeeLabs with me was 9.999,999,963 MHz. We’re done!

To put it all into perspective: that highlighted digit is 0.1 ppb (parts per billion!). So the frequency counter appears to be 3.7 ppb slow. Assuming that the transfer standard did not lose accuracy during the trip, and that my Rubidium clock is 100% accurate. Which it isn’t of course, but since its frequency is based on atomic resonance properties, I’m pretty confident that it’s indeed accurate to better than 0.1 ppb.
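Those ppb figures are just a simple ratio – here’s the arithmetic as a quick Node.js check (function name is mine):

```javascript
// Deviation of a measured frequency from its nominal value,
// expressed in parts per billion.
function ppb(measured, nominal) {
  return (measured - nominal) / nominal * 1e9;
}

console.log(ppb(9.999999963e6, 10e6).toFixed(1) + ' ppb'); // -> -3.7 ppb
```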

The real story here, though, is that such breath-taking accuracy is now within reach of any hobbyist!

Some first results from trying to run a JeeNode off a 24 x 32 mm indoor solar cell…

In each of the cases described below, I’m using a JeeNode without regulator and with 100 µF cap hooked up, with fuses and settings as described in this post. The cap should have enough energy to cushion the dip from a small packet transmission. I’m using the latest radioBlip2 sketch, which now sends out the following 7-byte payload:

The benefit of this version, is that the sketch reports not just the battery level but also how far the battery level drops after sending out a packet once a minute. That value is sent out in the next packet, so it always lags.

To get started, I connect the JeeNode to a BUB, which charges the 100 µF cap to 5V (and runs the RFM12B slightly above spec). Then I disconnect and hook it up to the solar setup. This way I don’t have to deal with startup problems yet – which is an entirely different (and tricky) problem.

Yesterday’s elaborate setup didn’t get very far, unfortunately. Two different runs gave me just a few packets:

That’s 4.50V and 3.94V before and after transmission, respectively. But a 0.47F supercap has a lot less energy in it than that 3.4 mAh Lithium cell used in the first tests above, so it’ll probably run down a lot faster.

After one hour, voltages drop to 4.28V and 3.72V.
Two hours: 4.14V and 3.60V. Five hours: 3.92V and 3.36V. I’m not sure this will work – unless the node sends less often at night, or always manages to restart reliably the next day.

Time for another experiment, this time combining my small solar panel with the 3.4 mAh Lithium battery which seems to work so well. The circuit I’m going to try is as follows:

Here’s the construction, cozily attached to the back of the solar cell:

Same solar cell, I think it can supply up to 4.5V @ 1 mA in full sunlight.

The tricky bit is that the rechargeable lithium cell needs to be treated with care. For maximum life, it needs to be hooked up to a voltage source between 2.8V and 3.2V, and the charge current has to be limited.

Note that the 1 kΩ resistor is put in series with the battery not only to charge it, but also when taking charge out of it. Seems odd, but that’s the way the datasheet and examples show it. Then again, with a 10 µA current draw the voltage drop and losses are only about 10 mV. A diode bypass could be added later, if necessary.

The diode after the regulator has the nice effect of dropping the 3.3V output to an appropriate value, as well as blocking all reverse current flow. There is no further circuitry for the regulator, since I don’t really care what it does when there is too much or too little power coming from the solar cell. Let’s assume it’s stable without caps.

It all looks a bit wasteful, i.e. linearly regulating the incoming voltage straight down to 3.3V regardless of PV output levels and discarding the excess. But given that this little 3V @ 3.4 mAh battery has already supported a few days of running time when fully charged, maybe it’s still ok.

With all this tinkering with solar panels, little batteries, supercaps, etc, you often need to prevent current from leaking away. The usual approach is to insert a diode into the circuit.

Diodes conduct current in one direction and block the current in the opposite direction.

Well, that’s the theory anyway. In real life, diodes only conduct current once the voltage is above a certain level, and they tend to leak minute amounts of current when blocking the reverse voltage.

For ultra-low power use and the low voltage levels involved, you need to be careful about the type of diode used. A regular 1N4148 silicon diode has a forward drop of about 0.65V, quite a bit when supplies are 2..3V!

The Schottky diode has a much lower voltage drop. It’s usually specified as 0.3..0.4V, but it really depends on the amount of current passed through it.

To see the properties of the BAT43 Schottky diode I’ve been using, I created this simple test setup:

A 10 Hz “sawtooth” voltage is used to create a signal rising from -3V to +3V in a linear fashion, 10 times a second. This means that the current through the 100 kΩ resistor will go from -30 µA to +30 µA. We can then watch the voltage over the diode, and how it goes from a blocking to a conducting state:

The yellow trace is the sawtooth signal applied to the circuit. The blue trace is the voltage over the diode. Note the difference in vertical scale.

You can see that with negative voltages, the diode just blocks. As it should.
With positive voltages up to 1.2V, i.e. a current up to 12 µA, the voltage drop over the diode is under 0.15V, rising slowly to about 0.175V at 30 µA.

Changing the resistor to 10 kΩ to increase the current by a factor of 10, we get this:

Same picture, different scale. At 300 µA, the voltage drop is now about 0.23V, and it’s fairly flat at that point.
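As a little aside: two of those readings are enough to estimate the diode’s exponential slope, using the standard diode relation ΔV = n·Vt·ln(I2/I1). A sketch, plugging in the values read off the scope shots above (function name is mine):

```javascript
// Estimate the n*Vt slope of a diode from two measured (current, voltage)
// points on the exponential part of its I/V curve:
//   V2 - V1 = n * Vt * ln(I2 / I1)
function diodeSlope(i1, v1, i2, v2) {
  return (v2 - v1) / Math.log(i2 / i1);
}

// 30 uA @ 0.175 V and 300 uA @ 0.23 V, as measured above
var nVt = diodeSlope(30e-6, 0.175, 300e-6, 0.23);
console.log((nVt * 1000).toFixed(1) + ' mV'); // -> 23.9 mV per e-fold of current
```

With Vt ≈ 25.9 mV at room temperature, that slope corresponds to an ideality factor close to 1 – consistent with Schottky diode behavior.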

Now that there’s low-power vccRead() code to measure the current power supply voltage, we can finally get a bit more valuable info from the radioBlip sketch, which sends out one packet per minute.

So here’s a new radioBlip2 sketch which combines both functions. To support more test nodes, I’m adding a byte to the payload for a unique node ID, as well as a byte with the measured voltage level:

As a quick test I used a JeeNode without regulator, running off an electrolytic 1000 µF cap, charged to 5V via a USB BUB, and then disconnected (this is running the RFM12B module beyond its 3.8V max specs, BTW):

As shown in this post, it is possible to read out the approximate level of VCC by comparing the internal 1.1 V bandgap with the current VCC level.

But since this is about tracking battery voltage on an ultra-low power node, I wanted to tinker with it a bit further, to use as little energy as possible when making that actual supply voltage measurement. Here’s an improved bandgap sketch which adds a couple of low-power techniques:

First thing to note is that the ADC is now run in noise-reduction mode, i.e. a special sleep mode which turns off part of the chip to let the ADC work more accurately. With the nice side-effect that it also saves power.

The other change was to drop the 250 µs busy waiting, and use 4 ADC measurements to get a stable result.

The main delay was replaced by a call to loseSomeTime() of course – the JeeLib way of powering down.

Lastly, I changed the sketch to send out the measurement results over wireless, to get rid of the serial port activity which would skew the power consumption measurements.

Speaking of which, here is the power consumption during the call to vccRead() – my favorite graph :)

As usual, the red line is the integral of the current, i.e. the cumulative energy consumption (about 2300 nC).

And as you can see, it takes about 550 µs @ 3.5 mA current draw to perform this battery level measurement. The first ADC measurement takes a bit longer (25 cycles i.s.o. 13), just like the ATmega datasheet says.

The total of 2300 nC corresponds to a mere 2.3 µA average current draw when performed once a second, so it looks like calling vccRead() could easily be done once a minute without draining the power source very much.
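Spelled out, the conversion is just charge divided by time – nanocoulombs per second are nanoamps (a quick Node.js sketch, function name is mine):

```javascript
// Average current drawn by a periodic measurement: charge used per
// event, divided by the repetition interval (nC / s = nA).
function avgCurrent(chargeNC, intervalSec) {
  return chargeNC / intervalSec;
}

console.log(avgCurrent(2300, 1) + ' nA');             // once a second -> 2300 nA, i.e. 2.3 uA
console.log(avgCurrent(2300, 60).toFixed(1) + ' nA'); // once a minute -> ~38 nA
```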

The final result is pretty accurate: 201 for 5V and 147 for a 4V battery. I’ve tried a few units, and they all are within a few counts of the expected value – the 4-fold ADC readout w/ noise reduction appears to be effective!

Update – The latest version of the bandgap sketch adds support for an adjustable number of ADC readouts.

Let’s face it – some parts of the JeeNode / JeePlug documentation aren’t that great. Some of it is incomplete, too hard, missing, obsolete, or in some cases even just plain wrong.

I think that the fact that things are nevertheless workable is mostly because the “plug and play” side of things still tends to work – for most people and in most cases, anyway. You assemble the kits, solder the header, hook things up, plug it into USB, get the latest code, upload an example sketch, and yippie… success!

But many things can and do go wrong – electrically (soldering / breadboarding mistakes), mechanically (bad connections), and especially on the software side of things. Software on the host, but most often the problems are about the software “sketch” running on the JeeNode. You upload and nothing happens, or weird results come out.

Ok, so it doesn’t work. Now what?

There’s a chasm, and sooner or later everyone will have to cross it. That’s when you switch from following steps described on some web page or in some PDF document, to taking charge and making things do what you want, as opposed to replicating a pre-existing system.

To be honest, following instructions is boring – unless they describe steps which are new to you. Soldering for the first time, heck even just connecting something for the first time can be an exhilarating experience. Because it lets you explore new grounds. And because it lets you grow!

As far as I’m concerned, JeeLabs is all about personal growth. Yours, mine, anyone’s, anywhere. Within a very specific domain (Physical Computing), but still as a very broad goal. The somewhat worn-out phrase applies more than ever here: it’s better to teach someone how to fish (which can feed them for a lifetime) than to give them a fish (which only feeds them for a day).

IMO, this should also drive how documentation is set up: to get you going (quick start instructions) and to keep you going, hopefully forever (reference material and pointers to other relevant information). A small part of the documentation has to be about getting a first success experience (“don’t ask why, just do it!”), but the main focus should be on opening up the doors to infinitely many options and adventures. Concise and precise knowledge. Easy to find, to the point, and up to date.

Unfortunately, that’s where things start to become complicated.

I’m a fast reader. I tend to devour books (well, “skimming” is probably a more accurate description). But I don’t really think that thick books are what we need. Sure, they are convenient to cover a large field from A to Z. But they are reducing our options, and discourage creative patterns – What if I try X? What if I combine Y and Z? What if I don’t want to go a certain way, or don’t have exactly the right parts for that?

This weblog, on the other hand, is mostly a stream of consciousness – describing my adventures as I hop from one topic to the next, occasionally sticking to it for a while, and at times diving in to really try and push the envelope. But while it may be entertaining to follow along, that approach has led to over 1000 articles which are quite awkward as documentation – neither “getting started” nor “finding reference details” is very convenient. Worse still, older weblog posts are bound to be obsolete or even plain wrong by now – since a weblog is not (and should not be) about going back and changing them after publication.

I’ve been pondering for some time now about how to improve the documentation side of things. There is so much information out there, and there is so much JeeLabs-specific stuff to write about.

Write a book? Nah, too static, as I’ve tried to explain above.

Write an eBook? How would you track changes if it gets updated regularly? Re-read it all?

A website? That’s what I’ve been doing with the Café, which is really a wiki. While it has sections about software and hardware, I still find it quite tedious (and sluggish) for frequent use.

I’ve been wanting to invest a serious amount of time into a good approach, but unfortunately, that means deciding on such an approach first, and then putting in the blood, sweat, and tears.

My hunch is that a proper solution is not so far away. The weblog can remain the avant garde of what’s going on at JeeLabs, including announcing new stuff happening on the documentation side of things. Some form of web-based system may well be suited for all documentation and reference material. And the forum is excellent in its present role of asking around and being pointed to various resources.

Note that “reference material” is not just about text and images. There is so much information out there that pointers to other web pages are at least as important. Especially if the links are combined with a bit of info so you can decide whether to follow a link before being forced to surf around like a madman.

The trick is to decide on the right system for a live and growing knowledge base. The web is perfect for up-to-date info, and if there’s a way to generate decent PDFs from (parts of) it, then you can still take it off-line and read it all from A to Z on the couch. All I want, is a system which is effective – over a period of several years, preferably. I’m willing to invest quite a bit of energy in this. I love writing, after all.

Suggestions would be welcome – especially with examples of how other sites are doing this successfully.

This is the sort of thing the members of the volt-nuts mailing list ponder about, I would imagine.

In my case, with now over half a dozen ways to measure voltage here (numerous hand-held multimeters, mostly), I just wanted to know which one to trust most and to what extent.

The solution comes in the form of a transfer voltage standard – an item you can order and have shipped to you, which then gives a certain level of confidence that it will provide a fixed voltage reference. As it turns out, Geller Labs offer just such a thing at low cost – it’s called the SVR 2.0:

Each board is “burned in” (kept turned on) for 200 hours and calibrated at the temperature you specify (I asked for 21°C). You even get the measured temperature coefficient at that spot (mine is 1.7 ppm/°C), so you can in fact predict the voltage it will generate at a slightly different temperature. Now that’s serious calibration!

And guess what – after a 30-minute warm-up (both the 34401A and the SVR), it’s spot on.

No last-digit jitter, nothing. A constant 10.000,00 readout. The current room temperature is 21.1°C, heh.

Think about it for a second: as hobbyist, you can order a precision second-hand instrument from eBay, Google around a bit to find a little voltage standard, have ‘em shipped from different parts of the planet, get them here within two weeks, hook up some wires, wait 30 minutes, and they match to 0.000,1 % precision.

Given that this instrument is from the 90’s, I’m massively impressed. This 34401A HP thing rocks!

Voltage? Current? Resistance? Game over – for me, this is more than enough precision for serious use.

Times are in UTC and we’re in the CEST time zone, so this was two hours later – i.e. around 4:30 AM.

I left it there for another few days, but unfortunately once dead this setup never recovers. The main reason for this is probably that the RFM12B starts up in a very power hungry mode (well, relatively speaking anyway) with a 0.6 mA current consumption – because it starts with the crystal oscillator enabled.

Maybe the self-leakage of the supercap was still too high, and would be (much) lower after a few days in the mostly-charged state, so I’m not ruling out using supercaps just yet. But as it stands, not getting through even a single night is not good enough – let alone being used in darker spots or on very dark winter days.

The first thing to note is that there is no temperature sensor in the soldering iron. In other words, this is an adjustable unit, but it’s not temperature-controlled – the 150..450°C scale around the rotating knob is bogus.

Just removing the knob and a washer around the potmeter is enough to examine the board up close:

A couple of resistors, caps, an inductor, and a little transformer – that’s all. Oh, and a little TRIAC in a TO-92 housing (just beneath the transformer). Here’s the other side:

A plain single-sided low-cost PCB. No surprises here – this is a very low-cost unit, after all.

So how does it work? Well, it’s basically a simple dimmer. But instead of dimming an incandescent lightbulb, it dims the heater coil inside the iron. The way this works is that the start of each AC mains half-cycle gets switched off – and only after a specific delay does the TRIAC start conducting. The whole circuit is essentially an adjustable delayed pulse generator, synchronized to the AC mains zero crossings.

Here’s what it looks like on the scope (as measured via a differential probe for isolation):

The entire AC mains cycle is 20 ms (50 Hz), half a cycle is therefore 10 ms, and in this mid-range setting, each half of the sine wave is switched on after about 5 ms, i.e. halfway into the sine, at the peak voltage in this case.
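The delivered power for a given firing delay follows from integrating sin² over the conducting part of each half cycle – this is standard phase-control math, not something read off this particular board. A little Node.js sketch (function name is mine):

```javascript
// Fraction of full sine-wave power delivered by a phase-cut dimmer that
// fires the TRIAC "alpha" radians into each half cycle:
//   integral of sin^2 from alpha to pi, normalized to the full half cycle,
//   which works out to (pi - alpha + sin(2*alpha)/2) / pi.
function powerFraction(alpha) {
  return (Math.PI - alpha + Math.sin(2 * alpha) / 2) / Math.PI;
}

// firing halfway into each 10 ms half cycle, as in the scope shot:
console.log(powerFraction(Math.PI / 2).toFixed(2)); // -> 0.50
```

So the mid-range knob setting in the scope shot delivers about half the full power – even though it switches on right at the peak of the sine.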

Does it work? Sure, turning the knob will definitely adjust the tip temperature – but not very directly. Instead of a feedback loop, we merely control the amount of power going into the iron, and assuming a fairly steady heat dissipation, the iron will then stabilize more or less around a specific temperature.
Just like a lightbulb, such a circuit will “dim” a soldering iron just fine this way.

The only drawback is that it’s not tightly controlled. When using the iron and pushing it against a thick copper wire or a big copper surface, the iron will cool off. Real temperature control requires a feedback loop which senses this change and counteracts the effect by pushing more power in when needed.

For simple uses, the crude approach is fine, but if you plan to solder under lots of different conditions (through-hole, SMD’s, PCB ground planes, thick copper wires) then a more expensive type might be more convenient.

Anything lower than that and the sketch stops sending out packets once a minute – but then again, that’s probably just the brownout detector of the ATmega kicking in!

To get it back up, I re-connected the power supply at 2.1 V and the node started its blips again… lower didn’t work, my hunch is that the RFM12B’s clock circuit needs that slightly higher voltage level to start oscillating.

Capacitors are all about storing and releasing charge. The main difference with batteries is that this charge is stored directly as electrical energy, whereas a battery converts to / from chemical energy in some form.

In an ideal capacitor, charge and discharge follow an exponential curve. Charging takes place when connected to a fixed supply via a resistor, discharging is a matter of placing a resistor across the capacitor:

The “time constant” is the time it takes for the discharge to drop to 36.8% of the starting voltage, or for the charge to rise to 63.2% of the final voltage. It can be calculated using the formula: T (seconds) = R (ohm) x C (farad).

This property makes it easy to measure the value of a capacitor: charge it up to a known voltage, then discharge it through a known resistor and measure the time it takes for the voltage to drop to 36.8% of the original voltage.
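In code, that measurement principle boils down to a single division – a hedged sketch, not the actual PIC firmware of the meter described below:

```javascript
// If a charged capacitor takes tSec seconds to fall to 36.8% of its
// starting voltage through a known resistor, then T = R * C gives
// C = T / R (in farads).
function capFromDischarge(tSec, rOhm) {
  return tSec / rOhm;
}

// e.g. a 1 second discharge through 100 kOhm:
console.log((capFromDischarge(1, 100e3) * 1e6).toFixed(1) + ' uF'); // -> 10.0 uF
```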

There is excellent documentation including very detailed assembly instructions, leading to this:

And sure enough, it works as expected – measuring a 10 µF cap in this case.

The only two drawbacks I found are that it doesn’t measure caps larger than 50 µF, and that there is no on-off switch. With a tool like this, you tend to want to use it from time to time and put it away after use. Without the switch, you have to disconnect the battery each time – a bit awkward and inconvenient.

This meter is based on a pre-flashed PIC controller. There’s one button to calibrate its zero value, and a convenient “auto-zero” mechanism, which keeps adjusting for exactly 0 pF when no capacitor is connected.

Due to the wonders of automation, yours truly was able to sneak away for a few days without missing a beat on the weblog and webshop (but away from the forum) – with Liesbeth and me ending up on the other side of Europe:

The “Blue Mosque”, and lots more fascinating / touristy things. A humbling experience for a Westerner like me.

With apologies for not responding immediately to all emails – I’ll catch up on this in the next few days.

The ATmega’s (and ATtiny’s for that matter) all have a 10-bit ADC which can be used to measure analog voltages. These ADC’s are ratiometric, meaning they measure relative to the analog reference voltage (usually VCC).

On a 5V Arduino, that means you can measure 0..5V as 0..1023, or roughly 5 mV per step.

On a 3.3V JeeNode, the measurements are from 0 to 3.3V, or roughly 3.3 mV per step.

There’s no point connecting VCC to an analog input and trying to measure it that way, because no matter what you do, the ADC readout will be 1023.

So can we figure out what voltage we’re running at? This would be very useful when running off batteries.

Well, there is also a “bandgap reference” in each ATmega/ATtiny, which is essentially a 1.1V voltage reference. If we read out that value relative to our VCC, then a little math will do the trick:

So all we have to do is measure that 1.1V bandgap reference voltage and we can deduce what VCC was!
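Here’s that little bit of math as a sketch – this is just the underlying formula, the actual vccRead() code in the sketch scales its result into a single byte:

```javascript
// With the 1.1V bandgap as the ADC input and VCC as the reference,
// the 10-bit readout is 1.1 / VCC * 1023, so VCC can be recovered as:
function vccFromBandgap(adc) {
  return 1.1 * 1023 / adc;
}

console.log(vccFromBandgap(225).toFixed(2) + ' V'); // -> 5.00 V
console.log(vccFromBandgap(341).toFixed(2) + ' V'); // -> 3.30 V
```

Note that the result is only as good as the bandgap itself – see the accuracy remarks below.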

Unfortunately, the Arduino’s analogRead() doesn’t support this, so I’ve set up this bandgap demo sketch:

Sample output, when run on a JeeNode SMD in this case:

There’s a delay in the vccRead() code, which helps stabilize the measurement. Here’s what happens with vccRead(10) – i.e. 10 µs delay instead of the default 250 µs:

Quite different values as you can see…

And here’s the result on an RBBB with older ATmega168 chip, running at 5V:

I don’t know whether the 168’s bandgap accuracy is lower, but as you can see these figures are about 10% off (the supply voltage was measured to be 5.12 V on my VC170 multimeter). IOW, the bandgap accuracy is not great – as stated in the datasheet, which specifies 1.0 .. 1.2V @ 25°C when VCC is 2.7V. Note also that the bandgap reference needs 70 µs to start up, so it may not immediately be usable when coming out of a power-down state.

Still, this could be an excellent way to predict low-battery conditions before an ATmega or ATtiny starts to run out of steam.

Welcome to the Thursday Toolkit series, about tools for building Physical Computing projects.

Equivalent Series Resistance, or ESR, is the resistance of a capacitor. Huh? Let me explain…

A perfect capacitor has a specific capacitance, no resistance, and no inductance. Think of a capacitor as a set of parallel plates, close to each other, but isolated. When you apply a voltage, electrons flow in on one side and electrons flow out on the other side until the voltage (potential difference) across the plates “pushes back” enough to prevent more electrons from flowing. Then the flow stops.

It’s a bit of a twisted analogy, but that’s basically what happens. A capacitor acts like a teeny weeny battery.

But no real capacitor is perfect, of course. One of the properties of a capacitor is that it has an inner resistance, which can be modeled as a resistor in series with a perfect capacitor. Hence the term “ESR”.

Resistance messes things up. For any current that flows, it eats up some of that energy, creating a voltage drop and, more importantly, generating waste heat inside the capacitor.

ESR is something you don’t want in hefty power supplies, where big electrolytic capacitors are used to smooth out the ripple voltage coming from rectified AC, as provided by a transformer for example. With large power supplies, these currents going in and out of the capacitor lead to self-heating. This warms up the electrolyte in the caps, which in turn can dramatically reduce their lifetimes. Caps tend to age over time, and will occasionally break down. So to fix old electronic devices: check the big caps first!

Measuring ESR isn’t trivial. You have to charge and discharge the cap, and watch the effects of the inner resistance. And you have to cover a fairly large capacitance range.

This ESR70 instrument from Peak Instruments does just that, and also measures the capacitance value:

It’s protected against large voltages, in case the capacitor under test happens to still have a charge in it (a cap is a tiny battery, remember?). The clips are gold-plated to lower the contact resistance – and removable, nice touch!

In this example, I used a 47 µF 25V electrolytic capacitor, and it ended up being slightly less than 47 µF and having an ESR of 0.6 Ω as you can see.

If this cap were used in a 1A power supply to filter the ripple from a transformer, then its ESR could generate up to 0.6 W of heat – which would most likely destroy this little capacitor in no time.

Fortunately, big caps have a much lower ESR. It measured 0 (i.e. < 0.01 Ω) with a 6800 µF unit, for example.
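That heat figure is just Ohm’s law at work – a tiny sketch to make the P = I² × R arithmetic explicit (assuming the full ripple current flows through the cap, which is the worst case):

```python
# Power dissipated in the capacitor's equivalent series resistance.
def esr_heat_watts(ripple_amps, esr_ohms):
    return ripple_amps ** 2 * esr_ohms
```

1A through 0.6 Ω gives 0.6 W, while 1A through 0.01 Ω gives a mere 10 mW.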

As with last week’s unit, this is not an indispensable instrument. But very convenient for what it does.

To summarize: a tiny ML614 rechargeable Lithium cell of a mere 3.4 mAh is used to power a JeeNode running the radioBlip sketch, which is going to send out one small packet per minute (the period is actually slightly longer).

I charged the battery overnight from a 3.0V power supply through a 1 kΩ resistor, as described in the datasheet. As expected, the battery voltage without load is now indeed 2.97V.

The test is really simple, but it’s going to take a little while: connect, see packets come in, with a counter being increased for each packet, and then just wait for the whole thing to stop sending. Here we go:

The range will need to be good, since packets have to cross a reinforced concrete floor to reach my receiver.

Here’s the log of packets I got (the first packet seems to have been missed):

The amount of solar energy available indoor is very limited… a very small fraction of outside, I’m sure.

Still, this unit has been running in the house here for a few years now, and not near a window either:

It’s an attractive goal: gadgets which you buy (or build) once and then use essentially forever!

We don’t use the alarm clock mode of this thing, so the beeper never goes off, but it does listen for the DCF77 clock standard transmitter in Germany once a day to stay in sync. It’s also slightly glow-in-the-dark, so this clock remains readable at night.

The fact that this clock works so well tells me that, with proper care, we should be able to run simple nodes inside the house with a solar cell of perhaps a few square centimeters, just like this clock.

And, whether battery- or supercap-powered, that’s precisely what I’d like to get going…

As with the AC-mains connected ultra-low power supply, I suspect that reliable startup will be the hardest part. Such an energy source will have very little spare energy and charge up very slowly, so when the JeeNode comes out of power-up (or brownout) reset, it’ll have to be careful to not cut off the hand that is feeding it, so to speak.

The tiny rechargeable Lithium batteries I mentioned recently are another way to try and retain some charge overnight, just like the supercap mentioned last week.

First thing to do was to charge it up for a day, using a 1 kΩ resistor and a 3.0V supply:

I adjusted the radioBlip sketch, to switch back to 8 MHz (because the ATmega will be running well below 3.3V):

And I used these fuse settings:

efuse 0x06 = BOD 1.8V

hfuse 0xDE = OptiBoot (512b)

lfuse 0xCE = fast 16 MHz resonator startup

This should allow a JeeNode to work all the way down to 1.8V (the RFM12B radio only officially supports down to 2.2V but usually still works a bit below that). I also used a JeeNode with no regulator, and added a 100 µF cap to handle the peak currents during packet transmission (100 µF is a bit excessive – less probably also works fine):

And sure enough, even with 2.75 V left in the battery, it starts up fine and starts sending out packets.

Unfortunately, I accidentally shorted out the battery while fiddling with the cables – so the charging process needs to be repeated for duration tests :(

Elektro:Camp is a convention on Smart Metering, Smart Home, Smart Grid and Smart Ideas, in a BarCamp style.

In October 2011, I attended this really interesting get-together about smart metering, monitoring, home automation, and more. Very heavy on technology. It’s basically a few rooms filled with lots of human ingenuity for two days :)

The Elektro:Camp event is scheduled twice a year in various locations, and is coming up again very soon:

Due to a silly scheduling mistake, I can’t make it this time, unfortunately. But if you like the stuff on this weblog, then you’ll most likely also be delighted with everything being presented and discussed at Elektro:Camp!

The tiny solar cell chip presented a couple of days ago has been doing some indoor sun-bathing:

I’ve left it alone for some 3 days, just to give it a chance to charge up the 0.47 F supercap as far as possible. The voltage after all that time (partly sunny on most days) is now only just over 2.78 V, so this isn’t really going to work indoors, I’m afraid. Nor outside during the winter, probably – it’s just too weak.

The other solar cell I tried is also a very small one, rated at about 5V but only 1 or 2 mA, IIRC:

It’s currently in the shadow, but during these same 3 days it has had its share of sunlight (still indoor, and again behind double-glazing). Much better: about 4.75V on the second day, and unchanged since then.

This might actually do the trick. I’ll wait for another experiment to finish and will then hook up the JeeNode running my radioBlip sketch to see how it goes.

Welcome to the Thursday Toolkit series, about tools for building Physical Computing projects.

Today’s episode is about a little gadget called the DCA55 Semiconductor Analyzer from Peak Electronics:

It’s a nifty little self-contained unit, which identifies a range of 2- and 3-pin semiconductors, their pinouts, and some useful characteristics:

NPN and PNP Bipolar Junction Transistors and Darlingtons

Various types of MOSFETs and Junction FETs

Low-power thyristors and triacs

Diodes and diode networks, as well as LEDs

The convenient bit is that you just hook up all the pins, press ON, and this gadget will figure it out, all by itself.

Here’s a BC549C transistor, i.e. a very common high-gain NPN transistor:

And here’s an example from the datasheet, showing all the info you get:

I wouldn’t call this unit indispensable – most of this can also be derived with a battery, a few resistors, and a multimeter – but it’s darn convenient, if you regularly re-use stuff from your spare parts bin, as I often do.

Got a tip from Lennart Herlaar a long time ago about a tiny CPC1824 solar cell from Clare with 4V output:

It’s packaged as a SOIC-16 chip, so clearly the light collecting capabilities of this thing will be limited. But with all this ultra-low power stuff going on here at JeeLabs, I thought I’d give it a whirl anyway. It’s trivial to hook up:

In bright sunlight, you get over 4V with a 100 µA short-circuit current according to the datasheet.

I added a BAT34 Schottky diode in series (which has a low voltage drop) and placed it all on a little breadboard together with a 0.47 F supercap – the solar chip is mounted on a little SOIC breakout board:

The initial voltage was under half a volt, but rising (very) slowly and steadily while exposed to light.

Let’s just leave this thing exposed to light near a south-facing window for a week or so, eh?

Welcome to the Tuesday Teardown series, about looking inside the technology around us.

The other day, Ard Jonker pointed me to this item available at the Dutch Lidl stores for €12.95:

A solar LED light you put in the floor outside, which automatically lights up when it gets dark.

It’s about 14 cm in diameter, and 6 cm deep – let’s have a look inside:

A solar cell, with two white LEDs, held in place by two screws yearning to be removed:

The red leads connect to an on/off switch which can be accessed from outside.
The batteries are 800 mAh, according to the specs, and look like standard replaceable AAA cells. The PCB has a chip on the other side:

Hey – not bad, two NiMH cells and a little chip to drive the LEDs. This could easily accommodate a JNµ!

The DIP-8 chip in there seems to have logic for turning the LEDs on only when it’s dark (weak solar cell voltage, I assume). It does a bit more though, as this scope trace of one of the LED shows:

Probably some sort of charge-pumping, to drive the LEDs beyond the 2.5V supplied by the batteries. The power consumption is about 9.5 mA, so these lights should last through the night if there is enough sunlight during the day to fully recharge the batteries.
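A quick back-of-the-envelope check on that claim – assuming an ideal battery and ignoring converter losses and incomplete charging:

```python
# Naive runtime estimate: capacity divided by average draw.
def runtime_hours(capacity_mah, draw_ma):
    return capacity_mah / draw_ma
```

800 mAh at 9.5 mA is roughly 84 hours – plenty for one night, even on a partial charge.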

Neat. This could make an excellent power source plus enclosure for a JeeNode Micro, but note that the big metal ring is essential – it presses the glass and rubber seal tight against the rest of the enclosure “cups”.

Yesterday, I charged a 0.47F 5.5V supercap to 5.1V and kept charging it for 24 hours to reduce the leakage current.

Actually, I lowered it to 5.01V in the last hour – there’s a slight memory effect, so right after lowering it, the voltage actually rises when power is disconnected.

Next step is to measure the supercap’s self-discharge time from 5.00V to 1.84V (i.e. 36.8% of 5V) – that’ll give the time constant of the RC circuit (the real capacitance, in parallel with an imaginary internal current leakage resistor). Note that this is not the same as the ESR of a cap, which is about charge & discharge current losses.

Ok, let’s disconnect the power supply and track the voltage readings in high-impedance mode. It is 10:17 here, and the voltage has just dropped to 5.00V – with the power supply removed.

…

Time passes. Unfortunately, waiting for the voltage to drop to 1.84V (i.e. 36.8% of 5V) would take a bit long, so let’s throw some math at this and come up with a quicker way to measure leakage current:

for T = R x C, we need to measure a drop to 36.8% (i.e. a factor 0.368) of the original voltage

since the charge decay curve is exponential, we can estimate when 0.5 T will happen

Hmmm…. that amount of leakage is three orders of magnitude higher than with a 47 µF electrolytic cap, but it might still be usable as power source for a JeeNode or JeeNode Micro. Here’s my reasoning:

suppose the JN/JNµ draws 12 µA on average – a tough target, but it should be feasible

then we’re effectively draining the supercap twice as fast as its self-discharge

it looks like the supercap can hold a charge down to 1.8V for 56 hours on its own

note that 1.8V is too low for RFM12B use, but the microcontroller would still work

with the added load from the JN/JNµ, this halves to 28 hours, i.e. slightly over a day

so the challenge will be to fully recharge the supercap to 5V at least once a day
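The halving argument in the list above can be sketched as a constant-current approximation (crude, since the real decay is exponential, but good enough to see the scaling):

```python
# Hold-up time if the stored charge is drained at a constant current.
def hold_time_hours(c_farads, v_start, v_end, leak_amps, load_amps=0.0):
    charge = c_farads * (v_start - v_end)        # coulombs available
    return charge / (leak_amps + load_amps) / 3600
```

When the node’s average draw equals the self-discharge current, the hold-up time is exactly halved.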

A solar cell might just do it – assuming it’s large enough to overcome a dark and cloudy winter day. And the good news is that supercaps can charge up very fast, so a short period of bright light could be enough.

Update – There’s a lot more to supercaps than this…

As suggested by @jpc in a comment yesterday, I had a look at some documentation from Panasonic, in particular Part 2. And sure enough, they show that a supercap can be modeled as a whole set of capacitors in parallel, each with their own – often substantial – series resistance. It takes a while to “reach them” with charge, so to speak. Which explains why a long charge time increases the charge and voltage:

And which also explains why the supercap tends to drop quickly at first:

Having seen the discharge tail off much more than expected (i.e. flatten out and retain voltage), I can confirm that a supercap behaves considerably differently from a plain electrolytic capacitor.

The good news is that for our intended purpose, this might actually work out quite well: a solar cell, keeping the supercap charged up fairly well most of the time, with just night-time JeeNode activity to drain the charge a bit, and occasional dark days, especially in wintertime.

Update #2 – Three days have passed, and the voltage is still 3.23V, so T will be over 6 days, and the corresponding discharge rate even lower than estimated above. Bit of a puzzle – the discharge tails off considerably, apparently. Which is good news in fact, because that leaves more charge for a JeeNode to use. I’m ending this experiment for now: real-world testing with a JeeNode sending packets will be more useful.

Now that I have this super-high-impedance multimeter, it’s time to revisit the venerable supercap:

That’s a whopping 0.47 Farad, the size of a little coin cell, and as you can see, this unit is rated 5.5V (most supercaps are 2.7V, I suspect that this is actually made of two 1F 2.7V units placed in series).
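If that guess is right, the math works out: two equal capacitors in series halve the capacitance (and double the voltage rating) –

```python
# Capacitors in series combine like resistors in parallel.
def series_capacitance(*caps_farads):
    return 1 / sum(1 / c for c in caps_farads)
```

Two 1 F units in series give 0.5 F – close enough to the rated 0.47 F.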

The beauty of a supercap is that it’s like a little battery, but with fewer limitations – you can’t really overcharge it, for one, because it doesn’t turn electric energy into chemical energy. There is no conversion: put 5V on it, and it’ll draw current and gobble up electrons until it reaches 5V, then it’ll stop.

So for example for solar-powered ultra-low power nodes, this could be a pretty nice solution. Solar cell -> diode -> supercap -> JeeNode. Max charge rate while the sun shines, and then it simply stops once the supercap is full. The only thing is to not exceed that 5.5V maximum, to which supercaps are very sensitive.

But there’s a problem. Supercaps can have a substantial self-discharge rate. When I connected 5.3V to it, the voltage immediately jumped to 5.3V, but when I disconnected that cable, it also dropped back to around 4.7V in just a few seconds – a normal capacitor sure isn’t supposed to work that way!

As it turns out, supercaps tend to “learn” to keep charge better over time. The longer you expose them to a voltage, the lower their self-discharge rate becomes. The isolation barrier needs time to build up, apparently (I’ve had this supercap on the shelf for over a year). Which is great, because presumably these cells would be kept charged most of the time, with the node only depleting them slightly when sending out a packet. So ideally, all we really need is for the supercap to retain enough energy overnight.

It’s time to put these unique components to the test!

The first encouraging fact is that indeed, when fed 5.1V for a couple of minutes, the voltage no longer drops as radically when disconnected. It now drops to 5.03V in a few seconds, but tends to hold its value after that. So it does indeed look like these supercaps can be “taught” to better retain their charge.

This test is going to take some time. First thing I’m going to do is to just keep the supercap charged to 5.1V (note that the power supply voltage calibration is pretty good – slightly less so for the low mA’s):

Let’s just leave it there to stabilize for about 24 hours. Stay tuned…

And while we’re at it, let’s compare a regular transistor (a.k.a. BJT) to yesterday’s MOSFET.

Again a small test setup, but this time it also needs a 10 kΩ resistor between input signal and base:

The reason for that extra resistor is that the base of an NPN BJT is essentially connected to ground via what looks like a forward-biased diode. So the voltage on the base doesn’t normally rise above 0.7V. Without a current-limiting resistor, the transistor would get damaged (and perhaps also the circuit driving it).

Compare this to yesterday’s screen shot and you’ll see that a BJT behaves like a MOSFET, sort of:

The main difference is that the switching point is much lower, around 0.7V – which happens to be just about the point where the base-to-emitter junction starts to conduct.

Here’s the same as X-Y graph (with again the X axis adjusted to 500 mV/div for full scale):

Compared to the MOSFET, the switch-over is steeper, i.e. more like a digital on-off switch. Note also that although the base-to-emitter voltage will be at 0.7V, the collector-to-emitter voltage is in fact below that, almost zero!

What might not be immediately apparent from the above plot, is that a transistor has a much more linear behavior (even if steeper, i.e. with more amplification). In that small range between about 0.65V and 0.75V, it’s in fact a great linear amplifier – which is what transistors were initially used for, and on a huge scale.

A simple way to describe them is that BJTs are current-driven, whereas MOSFETs are voltage-driven.

For a nice article about how to use BJTs for signal amplification, see this page on the PCBheaven website.

The BJT was at the start of the semiconductor revolution, decades ago. The MOSFET added a new and very different component, perfect for switching enormous loads with amazingly little power loss.

For the dual-voltage supply of a few days ago, either a MOSFET or BJT will probably work. With the BJT, there will be a higher residual voltage – so a check is needed to make sure it switches properly with a feedback pin voltage of only 0.41V. The MOSFET has no such issues, it’s essentially a controllable resistor: no bipolar junctions or diode-like behavior in sight.

In a previous post, I mentioned using a MOSFET to short out a resistor. So how does that work?

Well, a MOSFET is like a voltage-controlled switch. To be more precise, an N-channel enhancement type MOSFET is like an infinite resistance when the gate-to-source voltage is zero, and turns into a very low resistance when the gate-to-source voltage is a few volts positive.

To examine this in more detail, I created a test setup like this:

By applying a linear ramp voltage on the gate, we can see what it does with varying voltages. When open, the output should be 5V, and when conducting, it should drop to almost 0V. Let’s examine this in real life:

The blue line is the input voltage on the gate (by definition a sloped straight line), and the yellow line is the voltage on the output (i.e. between drain and resistor). Let’s try and read this:

the gate voltage takes 10 divisions to reach 3V, so that’s 0.3 V/div

the MOSFET starts conducting at around 1.8V and is fully on at ≈ 2.4V

at slightly over 2.1V, the drain-to-source resistance is about 1 kΩ

The red trace is the derivative of the output, so the output change is maximal at just over 2.2V.

There’s no linear behavior, in terms of gate-to-source voltage (the derivative is never constant, except in the fully-open and fully-closed regions), but what you can see is that the MOSFET will switch just fine with a logic signal (anything switching between under 1.8V and over 2.4V will work perfectly).

There are more ways to look at this. Here’s an X-Y plot, with the linear ramp on the horizontal axis:

Note that – if you think about it – in X-Y mode, it doesn’t really matter what sort of signal is placed on the gate as long as it has the same voltage range. Here’s a sine wave to illustrate this perhaps somewhat surprising property:

It’s a good exercise to try and understand exactly why the two above screenshots are the same.

Lastly, here is a zoomed-in measurement, to get more precise data using the scope’s cursor features:

As you can see, a 0.33V change on the gate is all it takes between the “almost-OFF” and “almost-ON” states.

I’ll leave it as exercise for the reader to plot the resistance of this particular MOSFET at different gate voltages. With a bit more setup, the scope’s math functions should in fact be able to display that plot on-screen.

So there you have it: a MOSFET switches on voltage, and a scope + function generator makes it easy to see that behavior. Note that even without these instruments, with nothing more than a potentiometer and a multimeter, you could in fact derive exactly the same information. It would merely be a bit more work.

Welcome to the Thursday Toolkit series, about tools for building Physical Computing projects.

This episode is not about an instrument you will normally need, but about using a high-end unit.

Once you get into measuring instruments, there’s a trap – the kick of going after models which have more and more resolution and accuracy. First, let me explain the difference – i.e. roughly speaking:

resolution is the number of digits you can measure

accuracy is how close that value is to the real value

So you could have a 3-digit multimeter which is spot-on, and in most scenarios it’d probably be much more useful than a 5-digit multimeter which delivers meaningless results because it’s not properly calibrated.

The trouble with this search for perfection is that it can be addictive – see the time-nuts site for one example of keeping track of the EXACT time. Over the top for most mortals, but hey, I can relate to this sort of craziness :)

And recently I fell into the same trap. I’ve got quite a few hand-held multimeters, but when someone pointed out some eBay offers of a 6.5 digit HP/Agilent bench-top multimeter, I simply couldn’t resist and bought one:

An amazing instrument – above, it’s measuring between 1.8 and 2.0 µV with the probes shorted out. It’s a second-hand unit, probably from the 90’s, so it’ll be out of calibration by now. I could send it to a calibration lab, where they tweak the thing until it’s back to its sub-ppm accuracy, but that might well cost as much as what I paid for it. So for now I’ll just assume its accuracy is decent, perhaps in the 5-digit range. More than good enough for the experiments at JeeLabs anyway. This is all for fun, after all.

One of the interesting specs of this multimeter is a selectable input resistance of over 10,000 MΩ on DC ranges up to 10V. This extremely high value is great for measuring the leakage of a capacitor. Let’s try it:

first, a 47 µF 25V cap is charged to slightly over 5V for a few minutes

then, the power supply is disconnected and it starts discharging

finally, we measure the time it takes to discharge from 5V to 3.16V

this was determined to be well over six hours (I stopped waiting!)

I picked this voltage range because 3.16V is 63.2% of 5V, so the measured time corresponds to the time constant of the T = R x C formula for capacitor discharge. In other words:

20000 s = R x 47 µF

therefore, the internal leakage resistance R = 20000 / 47 ≈ 425 MΩ

this translates to an internal leakage current of under 5 V / 425 MΩ ≈ 12 nA

So without even having an instrument which can measure such extremely low currents, we can arrive at an estimate of the leakage of this particular 47 µF 25V electrolytic capacitor, and under 12 nA is not bad!

Update – see the comments below, the leakage is even lower because the discharge should be measured to 1.84V instead of 3.16V – so it’s well under 10 nA for this capacitor, in fact!
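Redoing the arithmetic with that correction, using the same 20,000 s figure as above and scaling the partial discharge up to a full time constant via the exponential decay law:

```python
import math

# Time to fall from V0 to V is t = -tau * ln(V / V0), so a partial
# discharge measurement can be scaled up to the full time constant.
def tau_from_partial_drop(seconds, v_start, v_now):
    return -seconds / math.log(v_now / v_start)

def leakage_ohms(tau_seconds, c_farads):
    return tau_seconds / c_farads      # from T = R x C
```

20,000 s from 5V to 3.16V corresponds to a time constant of about 43,600 s, i.e. roughly 930 MΩ for 47 µF – and a leakage current of just over 5 nA, confirming the “well under 10 nA” conclusion.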

For a sensor I’ve been fooling around with, I needed a supply which can switch between 5V and 1.4V, supplying up to about 200 mA.

There are several ways to do this, but I decided to use the MCP1825 adjustable voltage regulator:

The trick is to create an adjustable voltage divider, using a MOSFET to short out one of the resistors:

When off, the MOSFET does nothing, with R2 and R3 in series. When on, R3 is essentially shorted out.

The regulator varies its output voltage (top of R1) such that the level between R1 and R2 always stays at 0.41V:

So the task is to come up with suitable values for R1, R2, and R3. Let’s start with the 5V output and R1 = 10 kΩ:

5V = 0.41V x (10 kΩ + R2) / R2

then 5 x R2 = 0.41 x (10,000 + R2) = 4,100 + 0.41 x R2

and 5 x R2 – 0.41 x R2 = 4,100, i.o.w. 4.59 x R2 = 4,100

that would make R2 = 4,100 / 4.59 = 893 Ω

Now for the 1.4V output level (where R2′ is R2 in series with R3):

1.4V = 0.41V x (10 kΩ + R2′) / R2′

then 1.4 x R2′ = 0.41 x (10,000 + R2′) = 4,100 + 0.41 x R2′

and 1.4 x R2′ – 0.41 x R2′ = 4,100, i.o.w. 0.99 x R2′ = 4,100

that would make R2′ = 4,100 / 0.99 = 4141 Ω

But that’s not quite right, because R2 and R2′ have to be in the range 10 .. 200 kΩ. This is easy to fix by making R1 = 220 kΩ. Then the above values all increase by a factor 22 as well – bringing both R2 and R2′ nicely in range:

for 5V: R2 = 19.6 kΩ

for 1.4V: R2′ = 91.1 kΩ

IOW, two resistors of 19.6 kΩ and 71.5 kΩ in series would work, whereby the 71.5 kΩ resistor can be shorted out with the MOSFET to take it out of the loop.

These are not very convenient values for resistors in the E12 series – let’s try and improve on that. After all, we can choose these values any way we like, as long as their ratios stay the same. With 15 kΩ and 54.7 kΩ, R1 would have to be 168 kΩ. That’s not so bad: we could use 15 kΩ and 56 kΩ, with 68 kΩ in series with 100 kΩ for R1.
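Here’s a quick check of those resistor values – a sketch assuming the 0.41V feedback voltage mentioned above (the function name is mine):

```python
# The regulator adjusts its output so the R1/R2 junction sits at 0.41V:
#   Vout = 0.41 * (R1 + R2) / R2
def output_voltage(r1_ohms, r2_ohms, vref=0.41):
    return vref * (r1_ohms + r2_ohms) / r2_ohms

R1 = 68e3 + 100e3      # 68 k + 100 k in series = 168 k
R2 = 15e3              # lower divider leg
R3 = 56e3              # shorted out by the MOSFET for the 5V setting
```

With R3 shorted, the output comes out at about 5.00V; with R3 in circuit, about 1.38V – close enough to the 1.4V target.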

With 5 V input, the output is still 4.86 V @ 200 mA, proving that the MCP1825 is indeed a low-dropout regulator. The switching edges look clean on the oscilloscope, with rise and fall times of ≈ 30 µs (1 µF cap charge/discharge).

It all started on October 25th in 2008, with a weblog post about – quite appropriately – the Arduino.

Then it took a few more months to evolve into a daily habit, and yet another few months to set up a shop, but apart from that it has all remained more or less the same ever since.

You might have been following this from the start, and you might even have been going through the long list of daily posts later, but there you have it – a personal account of my adventures in the world of Physical Computing. If anything, these years have been the source of immense inspiration and delight. I’ve been able to re-connect to my inner geek, or rather: my inner ever-curious and joyful child. And to so many like-minded souls – thank you.

“Standing on the shoulder of giants” is a bit over-used as a phrase, but it really does apply when it comes to technology and engineering. What we can do today is only possible because many generations of tinkerers, inventors, and researchers before us have created the foundations and the tools on which we can build today. It feels silly even to try and list them – such a list would be virtually endless.

I’m not a technocrat. I think our IT world has done its share to rob people of numerous meaningful and competence-building jobs, and to introduce new mind-numbing and RSI-inducing repetitive tasks. Our (Western) societies have become de-humanized as more and more screens take over in the most unexpected workplaces, and our car trips and train rides are turning us into very selectively-social beings, reserving not just our emotions but even our respect and courtesy for our families and the people we choose as our friends. Technology’s impact on daily life is a pretty horrible mess, if you ask me.

But what drives me, are the passion and the creativity and the excitement in the field of technology. Not for the sake of technology, but because that’s one of the major domains where cognition and rationality have free rein. You can learn (and reason) all about history, medicine, psychology, or you can invent (and reason about) things which do new things, be it electrical, mechanical, biological, informational, or otherwise. Technology as a source of boundless evolution and innovation is breath-taking, we “merely” have to tap it and put it to good use.

And what thrills me most is not what I can do in that direction, but what others have done in the past and are still doing every day. Learning about all that existing technology around us is like looking into the minds of the persons who came up with all that stuff, feeling their struggles, their puzzles, and ultimately the solutions they came up with. I’m in awe of all the cleverness that has emerged before us, and even more in awe of the thought that this will no doubt go on forever.

It’s really all about nurturing curiosity, asking questions, and solving the puzzles they bring to the surface:

I have no special talents. I am only passionately curious. — Albert Einstein

Here’s the good news: we all have that ability. We all came into the world the same way. We can all be explorers.

If you start doing this early on in life and hold onto it, you’ll never be hungry and you’ll never get bored. And if you didn’t have that opportunity back then: nothing of substance prevents you from starting today!

We live in amazing times. Ubiquitous internet and access to knowledge. Open source Physical Computing. Online communities with a common language. This weblog is simply my way of reciprocating all these incredible gifts.

As an alternative to supercaps, I recently ordered a Lithium rechargeable battery from Digikey:

It’s not quite what you might think, though: its size is only 6.8 x 1.4 mm, with a tiny 3.4 mAh capacity :)

Got ten of them, as part of a larger order, and they came packaged as follows:

So far so good, but now the crazy part. These batteries were sent out in a separate 23x23x5 cm box:

With a warning label …

… and another warning label:

The max discharge current of these things is 1.5 mA according to the specs. I doubt they’ll even go up to 15 mA when shorted! By the way, does that dented corner qualify as “damaged” ? … I want my money back! :)

As it so happens, someone very recently brought to my attention a site called www.udacity.com, which announces itself as simple and as clearly as can be:

Free online university classes for everyone.

It’s a phenomenally exciting initiative, second only to the Khan Academy, if you ask me:

The idea: great video lectures plus exercises to let anyone with (good) internet access learn some major topic really well. You have to be fluent in English, evidently, but apart from that the courses seem to be designed to give the broadest possible group of people access to this new form of – literally! – world-class education.

These guys are serious – with a pool of well-known researchers and teachers, and set up to scale massively (the class on Artificial Intelligence which led to all this had over 160,000 people signed up).

The format is slightly different from the Khan Academy in that the courses start on a fixed date and have a fixed duration. So you really have to “sign up” for class if you want to benefit from what they have to offer.

As it so happens, these classes start tomorrow, Monday, April 16th and they will last for 7 weeks.

It looks like there will be a bunch of videos each week, plus some homework assignments, which you can then follow whenever you have time that week. You can enroll in multiple courses, but I’m sure they will be repeated at a later date, so it’s probably best to just pick what feels like a good match right now.

What can I say? IMO, this is a unique chance to learn about modern software programming on many levels. Whether you’ve never built any software or whether you are curious about how some really sophisticated problems can be solved, these six courses cover a breathtaking range of topics.

I don’t know how these courses will turn out, but I do know about some of the names involved, and frankly, I’d have loved to have this sort of access when starting out in programming.

FWIW, out of curiosity, I’ve signed up for CS101. What a nice birthday present.

There has never been a better time to learn than now. This world will never be the same again.

The DHT11 and DHT22 sensors measure temperature and humidity, and are easy to interface because they only require a single I/O pin. They differ in their measurement accuracy, but are in fact fully interchangeable.

There is code for these sensors floating around on the web, but it all seems more complicated than necessary, and I really didn’t want to have to use floating point. So I added a new “DHTxx” class to JeeLib which reads them out and reports temperature in tenths of a degree and humidity in tenths of a percent.

The analog plug contains an MCP3424 4-channel ADC which has up to 18 bits of resolution and a programmable gain up to 8x. This can measure microvolts, and it works in the range of ± 2.048 V (or ± 0.256 V for 8x gain).

However, the analog_demo example sketch was a bit limited, reading out just a single fixed channel, so I’ve added a new AnalogPlug class to JeeLib to simplify using the Analog Plug hardware. An example:

This interfaces to an Analog Plug on port 1, and uses 0x69 as default I2C device address. There are a number of ways to use this, but if you want to read out multiple channels, you have to select the proper channel and then wait for at least one conversion to complete. Since conversions take time, especially at 18-bit resolution, a delay() is needed to get proper results.

Sample output:

I tied a 1.5V battery to channel 1 and left the rest of the pins unconnected. Touching both battery pins lowers the voltage briefly, as you can see.

These results are in microvolts, due to this expression in the code:

long uvolts = ((adc.reading() >> 8) * 1000) / 64;

Here’s the reasoning behind this formula:

the reading() call returns 32 bits of I2C data, but we only need the first 24 bits

of these 24 bits, the first 6 will simply be a sign-extended copy of bit 18

multiplying by 1000 and then dividing by 64 scales this to microvolts, since each 18-bit step corresponds to 1000 / 64 = 15.625 µV

It’s a bit convoluted, but as you can see, the measured value comes out as about 1.477 Volts, with a few more digits of resolution. If you do the math, you’ll see that the actual “step” size of these measurements is 1000 / 64 = 15.625 µV – and it drops to under 2 µV when used with 8x gain!
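
To make the arithmetic concrete, here’s that same expression as a plain C++ helper (the function name and the sample count are made up for illustration, it’s not part of the AnalogPlug class):

```cpp
#include <cassert>

// Convert a raw 32-bit reading (18-bit mode, 1x gain) to microvolts,
// using the same expression as above: keep the upper 24 bits, then
// scale by 1000 / 64 = 15.625 uV per step.
long toMicrovolts(long raw) {
    return ((raw >> 8) * 1000) / 64;
}
```

Feeding it a hypothetical 18-bit count of 94500 (sign-extended into the upper 24 bits) yields 1,476,562 µV, i.e. the ≈ 1.477 V shown above.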

With this sort of precision, electrical noise can easily creep in. But it’s pretty neat: 5 digits of precision for 4 channels, with nothing more than one teeny little I2C chip.

Welcome to the Thursday Toolkit series, about tools for building Physical Computing projects.

Yet another useful package from Conrad (NL #418714) – a set of 390 resistors from 10 Ω through 1 MΩ:

Resistors come in specific values, based on a logarithmic range: you’ll see them organized as “E6”, “E12”, or “E24” series, meaning that they are split up into 6, 12, or 24 values per decade, respectively.

Here’s some info about what’s in that above box:

This is actually mostly a subset of the E6 range (which is 10, 15, 22, 33, 47, 68) – see this Wikipedia article about preferred numbers for how and why things are organized that way.

The point is that you can never have enough resistors, which can probably be considered to be the most elementary components in electronics. Whether for limiting the current through a LED or creating a voltage divider, these things just tend to get used all over the place.

But what if you need a different value?
Well, that’s usually easy to solve: by combining two resistors, either in series or in parallel, you can often get very close to the value you’re after.

The formula for two resistors in series is simply the sum of their values:

Rseries = R1 + R2

The formula for two resistors in parallel is slightly more complicated:

Rparallel = (R1 x R2) / (R1 + R2)

(this can easily be explained using Ohm’s law, I’ll be happy to write a post about this if you’re interested)

Here’s an online calculator which will find the proper values – although I recommend doing the math yourself, at least initially, because it will help you get a good grasp of how resistors work together.
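
The two formulas translate directly into code – a minimal C++ sketch (the function names are mine, just for illustration):

```cpp
#include <cassert>

// Combined resistance of two resistors in series: simply the sum
double rSeries(double r1, double r2) {
    return r1 + r2;
}

// Combined resistance of two resistors in parallel
double rParallel(double r1, double r2) {
    return (r1 * r2) / (r1 + r2);
}
```

So a 470 Ω and a 220 Ω resistor in series give 690 Ω, and two 1 kΩ resistors in parallel give 500 Ω.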

This has been an often-requested feature, so I’ve added a way to get an Ethernet reply back after you call tcpSend() in the EtherCard library:

The one thing to watch out for, is that – over time – packets going out and coming back are going to interleave in unforeseen ways, so it is important to keep track of which incoming reply is associated with which outgoing request. Fortunately, the EtherCard library already has some crude support for this:

Each new tcpSend() call increases an internal session ID, which consists of a small integer in the range 0..7 (it wraps after 8 calls).

You have to store the last ID to be able to look for its associated reply later, hence the “session” variable, which should be global (or at least static).

There’s a new tcpReply() call which takes that session ID as argument, and returns a pointer to the received data if there is any, or a null pointer otherwise. Each new reply is only returned once.
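
The wrap-around of those session IDs can be illustrated with a one-liner – this just mimics the numbering scheme described above, it’s not the actual EtherCard code:

```cpp
#include <cassert>

// Next session ID: a small integer in the range 0..7, wrapping after 8 calls
unsigned char nextSession(unsigned char id) {
    return (id + 1) & 7;
}
```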

A simple version of this had been hacked in there in a Nanode-derived version of EtherCard, so I thought I might as well bring this into the EtherCard library in a more official way.

This code – the whole EtherCard library in fact – is fairly crude and not robust enough to handle all the edge cases. One reason for this is that everything is going through a single packet buffer, since RAM space is so tight. So that buffer gets constantly re-used, for both outgoing and incoming data.

Every time I go through the EtherCard code, my fingers start itching to re-factor it. I already did quite a few sweeps of the code a while back as a matter of fact, but some of the cruft still remains (such as callback functions setting up nested callbacks). It has to be said though, that the code does work pretty well, with all its warts and limitations, and it’s non-trivial, so I’d rather stick to hopping from one working state to the next, instead of starting from scratch, working out all the cases, and tracking out all the new bugs that would introduce.

The biggest recent change was the addition of a “Stash” mechanism, which is a way to temporarily use the RAM inside the ENC28J60 Ethernet controller as scratchpad for all sorts of data. It’s already useful in its current state because it lets you “print” data to it in the Arduino way to construct a request or a reply for an Ethernet session.

There are a few more steps planned, with the goal of avoiding the need for a full packet buffer in the ATmega’s RAM. Once that goal is reached, it should also become possible to track more than one session at the same time, so that more frequent requests (in and out) should be possible. There is no reason IMO, why an ENC28J60-based Ethernet board should be much less capable than a Wiznet-based one (apart from needing a bit more flash memory for the library code, and not supporting multi-packet TCP sessions).

The remaining steps to get away from the current high demands on RAM space are:

generate the final outgoing packet directly from one or more stashes, without going through our RAM-based buffer

collect the incoming request into a stash as well, again to avoid the RAM buffer, and to quickly release the receiver buffer again

reduce the RAM buffer to only store the headers and the first few bytes of data – this should not affect too much of the current code

add logic to easily “read” incoming data from a stash as an Arduino stream (just as “writing” to a stash is already implemented)

Not there yet, but thinking this through in detail is really the first step…

The switches are custom-designed, using a silicone mat with buttons, each of them holding some sort of little carbon-lined conducting pad. When pressed, they connect two traces on the PCB and that’s it!

Oh, wait, the other side has two more components and some simple battery clips:

The electrolytic cap just helps the battery supply power for the IR LED, I presume, while the other component is a 3.45 MHz resonator, part of the frequency-generating circuit.

Here is a scope trace of the emitted IR light when pressing a single button:

This was picked up with an AMS302 light sensor, BTW. You can see the two pulse trains, i.e. the button press gets repeated twice. Perhaps not as easy to see, is the fact that “ON” is not represented by a simple IR pulse, but by a pulse train. This allows the receiver to filter out noise and random pulses, by filtering and detecting pulses only when modulated in this way.

As you can see in the zoomed-in section, the pulse train consists of turning the IR LED on and off at a 36 kHz rate.

This is within the detection range of the TSOP34838 IR receiver, as used on the Infrared Plug, even though that receiver is optimized for 38 kHz modulation. Don’t be put off by the term “modulation” in this context, BTW – it simply means that the 38 kHz frequency used to drive the IR LED is turned on and off in a certain pattern.

Each key has its own pattern. This remote appears to use the RC5 protocol. Here’s a snapshot of one keypress using the TSOP34838 chip, which detects, demodulates, and then outputs a clean logic signal:

I’ve enabled the tabular pulse search listing, which gives us information about the encoding used by this remote:

829 µs for a short “OFF”

953 µs for a short “ON”

1738 µs for a long “OFF”

1752 µs for a long “ON”

Decoding such a pulse train is fairly easy, and as you can see, the component count for such IR transmissions is extremely low and hence very low-cost. Which also explains the popularity of this system!
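
As a sketch of that first decoding step: with only two nominal pulse widths in use (one or two RC5 half-bit times of roughly 889 µs), a single threshold is enough to classify each measured pulse. The threshold value here is my own pick, roughly halfway between the two groups measured above:

```cpp
#include <cassert>

// Classify a measured pulse width (in microseconds) as one or two
// RC5 half-bit periods, using a threshold between the two groups
int halfBits(long widthUs) {
    return widthUs > 1300 ? 2 : 1;
}
```

The four measured values above then classify as 829 → 1, 953 → 1, 1738 → 2, and 1752 → 2, which is what a Manchester-style decoder needs to reconstruct the bits.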

PS. I’ve switched to light oscilloscope screen shots as a trial. The colors are not as pronounced, but it seems to be a little easier on the eyes. Here’s the same info, in the dark version as it shows on-screen:

Oscilloscopes are the “printf” of the electronics world. Without a “scope” you can only predict and deduce what’s happening in a circuit, not actually verify (let alone “see”) it. Here’s what an oscilloscope does: on the vertical axis, you see what happens, on the horizontal axis you see when it happens. It’s a voltmeter plus time-machine.

That doesn’t mean you can’t get anything done in Physical Computing without one. A simple multimeter is a lot cheaper and will get you a long way in figuring out the electrical behavior of a circuit – not to mention finding shorts and connection mistakes. So the first thing to get is a multimeter, not a scope. Always.

The trouble is that ATmega’s are so friggin’ darn fast. We can’t observe events on their time scale, and more importantly: many problems will zoom past us and get lost before we have a chance to see anything!

So I’m going to revise my advice about oscilloscopes somewhat: if you solder together kits and basic components, then yeah, a multimeter is plenty. But if you hook up non-trivial chips and need to debug the combination of hardware and software, then you really need all the help you can get. Be it a logic analyzer for digital signals, buses, and pulse-trains, or a scope to investigate the electric behavior of a fast circuit.

Note that a logic analyzer can be a lot cheaper than a scope. The reason being that they are electrically much simpler – they just need to collect a bunch of digital logic levels (rapidly), whereas a scope needs to collect much richer signals (ranging from millivolts to hundreds of volts, and with all sorts of signal processing to make sure you’re seeing the real thing and not some artifact of the instrument itself).

If you’ve been following this weblog a bit, you’ll have seen quite a few scope screen shots in some of the posts. One of the most important uses for my scope here at JeeLabs is to figure out power consumption while trying to optimize a JeeNode’s ultra-low power mode. Power consumption is an analog thing, so that’s where a scope comes in. And when you look at the amount of detail a modern scope can show, it’s clear that this level of insight really comes from such an instrument.
See the recent Watchdog tweaking and Room Node analysis for some examples.

Does that mean you have to shell out a few thousand dollars to do something similar? Not at all.

First of all, visualization isn’t everything. A couple of years ago, I used one JeeNode to measure the power consumption of another JeeNode, see the Power consumption tracker post, and the software for it. Less insight perhaps, and no geeky screen shots, but plenty of info to try and optimize the power consumption by trial-and-error. Just tweak your sketch and measure, over and over again.

Second point I’d like to make, is that such power measurements are fairly slow, so any scope will do. Even a 10 MHz model will be able to accurately display changes from one microsecond to the next.

There are a couple of ways to get such a “low-end” scope (don’t let that term fool you, any oscilloscope can be extremely useful as long as things don’t change too fast):

These last two options are lower cost, but more limited since they don’t really include a full “front-end” to handle a wide range of input voltages. For circuits with only a few volts, they may still be sufficient.

Normal “sweeping” analog scopes are ok, but storage scopes (analog or digital) are considerably better because you can “capture” an event and keep it on the screen to investigate. Such a feature will cost more though.

Here’s an example of how a €100 second-hand Tek 475 (analog & non-storage) scope can be used to measure that same power consumption as in the Watchdog tweaking post – it’s the same waveform:

Two essential tricks were used: 1) the watchdog is firing at ≈ 60 Hz, so the scope trace fires constantly, and 2) it triggers on one pulse but displays the next one, using x10 horizontal magnification.

The above screen shows 2 mA and 200 µs per division. The vertical scale could have been zoomed in further, but for the horizontal scale I’m sort of at the limit unless I start using delayed sweeps. Here’s the whole unit:

No storage, no screen capture, no USB, so this was done by darkening the room and holding a camera in front of the scope. It took a couple of tries, but hey – it is possible to estimate power consumption this way!

What I’m trying to say is that you too can do this sort of work with an investment of €100 to €150.

If you intend to do more with electronics (and let me assure you: this sort of fooling around is geek heaven, and addictive!) – then consider holding off just a bit longer if need be, and save up for a Rigol or Owon scope. These “DSO’s” are mature, have tons of useful features, and they can store lots of detail (that’s the “S” in DSO).

Is this a case of “if you have a hammer then everything starts looking like a nail”? All I know is that my insight in ultra-low power consumption and optimization has increased significantly since getting an oscilloscope.

Our usage (i.e. Liesbeth’s and mine) was about 3000 kWh in 2011. That includes electric cooking, but note that heating and warm water is provided through natural gas.

That puts us in the late 1950’s w.r.t. US electricity consumption levels – yo, Elvis! :)

I’ve started to get involved in a local initiative (see this Dutch website if you’re interested – “duurzaamheid” is all the rage these days, it seems), with all sorts of simple and not-so-simple ways planned 1) to consume less, 2) to switch to renewable sources, and 3) to fall back to natural resources for the rest. It’s not an all or nothing game, more a way to plot a practical trajectory for improving things over the next couple of years.

Here’s the JeeLabs neighborhood:

Lots of space to catch some sunlight on all those rooftops – but careful with that chimney’s shadow!

Now that solar energy has become so cheap (Wp prices including inverter have dropped below €1.70), we’re finally getting together with a couple of neighbors here to actually make it happen. This year, and hopefully before the summer is gone again!

The aim is to try and get 4000 to 5000 Wp onto our roof (16..21 panels of 100×160 cm), which would cover our entire yearly electricity needs, even without pushing for further savings. For the 52°N latitude of the Netherlands, panel + inverter efficiencies are estimated in the 80..85% range, nowadays.

That’s just half the story, gas consumption is the other biggie – but hey, ya gotta’ start somewhere, eh?

Capacitors have a “leakage current”, i.e. when you charge them up fully, the leakage current will cause them to slowly lose that charge. I was wondering about this in the context of an ultra-low power JeeNode, which has a 10 µF buffer cap right after the voltage regulator. Does its leakage affect battery lifetimes?

Time to do a quick test – I used the 47 µF 25V cap included with JeeNode kits these days:

So how do you measure leakage currents, which are bound to be very small at 3.3V? Well, you could charge up the cap and then insert a multimeter in series in its most sensitive range. This multimeter goes down to 0.1 µA resolution, although its accuracy is specified as 1.6 % + 5 digits, so the really low values aren’t very precise.

A simpler way is to use the RC time constant as a basis. The idea is that a real-world cap can be treated like a perfect cap (which would keep its charge forever) plus a resistor in parallel. That resistor merely “happens” to be situated inside the cap.

What I did was charge the cap from a 3x AA battery pack which was just about 4.0V, then disconnect the battery and watch the discharge on the oscilloscope:

As you can see, it took 500 seconds for the charge in the capacitor to drop by some 2.5V – note the exponential decay, which matches the assumption that the leakage comes from a fixed resistance.

Can we derive the leakage from this? Sure we can!

The formula for RC discharge is:

T = R x C

Where T (in seconds) is the time for the cap to discharge by 63.2 percent, R is the discharge resistor (in ohms), and C is the capacitor size (in farads).

Above, it took 500 seconds to drop from 3.98 V to 1.48 V, which by pure accident is almost exactly 63.2 %, so T = 500 and C = 0.000,047 – giving us all the info needed to calculate R = 500 / 0.000,047 = 10638298 ≈ 10.6 MΩ.

Using Ohm’s law (E = I x R), that means the leakage current at the start is 4 V / 10.6 MΩ = 0.376 µA.
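
Those two formulas are easily captured in code – a small sketch of the calculation (the function names are just for illustration):

```cpp
#include <cassert>
#include <cmath>

// T = R x C, so the equivalent discharge resistance is R = T / C
double dischargeResistance(double seconds, double farads) {
    return seconds / farads;
}

// Ohm's law, E = I x R, rewritten as I = E / R
double leakageCurrent(double volts, double ohms) {
    return volts / ohms;
}
```

With T = 500 s and C = 47 µF this gives ≈ 10.6 MΩ, and at 4 V that works out to ≈ 0.376 µA – the same numbers as above.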

The good news is that such a result would not be of any concern with ultra-low power JeeNodes – the regulator + ATmega + RFM12B use an order of magnitude more than that, even when powered down.

But the bad news is that this result is in fact completely bogus: to measure the charge, I placed the oscilloscope probe over the cap, and it happens to have 10 MΩ internal resistance itself. So basically the entire discharge behavior shown above was caused by the probe i.s.o. the capacitor’s own leakage!

So it looks like I’ll need a different setup to measure real leakage, which is probably in the nanoamp range…

The Hameg HMO2024 scope just got a firmware upgrade – wow, it just keeps getting better and better.

Support for up to 6 calculated values (was 2), based on any of the input channels – now with optional statistics:

And one of the things I really missed dearly – the ability to see all decoded serial data in tabular form:

The top two traces show the SCL and SDA data in analog form, the next group is the color-coded serial data, and at the bottom is the list of packets. As you scroll through the table, the traces adjust to show the related information. Still shown at the bottom are the 6 auto-measured items I configured in the first screen.

Last big new feature is the capability to search through stored traces, again with a table to help navigation:

It’s all firmware, evidently, but I hadn’t expected the development to keep on moving the capabilities of this oscilloscope forward to such an extent. And these aren’t just gimmicks, such features can be extremely useful!

Welcome to the Thursday Toolkit series, about tools for building Physical Computing projects.

Two weeks ago, I extolled the virtues of the multimeter for measuring various electrical units.

With voltages, things are very simple: you’ve got two probes, and you can stick them anywhere in your circuit to measure the potential difference between two points. The input impedance of any modern multimeter is usually 10 MΩ or more, which means the load caused by measuring is negligible in just about all cases.

Let’s apply Ohm’s law: 10 MΩ over 1V is just 0.1 µA current, and over 230V it’s still just 23 µA current.

But with current measurements, things change: a multimeter in current measurement mode is essentially a short. You place the probe pins between the supplier and consumer of current to measure the Amps, milliamps, or microamps. That also means you can’t just go probing around at random: sticking the probes between + and – of a power supply, or even just a battery, basically creates a short. The result is a huge current, which will blow the internal fuse of the multimeter. Very often, the fuse is a 500 mA type (to protect a 400 mA range).

That’s why the VC170 (left) is better than the VC160 (right) – voltage and current are on different jacks:

But there’s another aspect of current measurement with multimeters to be aware of: burden voltage.

When measuring current, multimeters insert a small resistance in series with the load, i.e. between the two probe pins, and then work by measuring the voltage drop across them (Ohm’s law, again!).

So placing a multimeter between current supplier and consumer actually introduces a small voltage drop. How much? Well, that depends both on the multimeter and on the selected range.

Here’s the VC170 with a 1 mA current through it – in its 400 mA range:

I used the VC160 multimeter to measure the voltage over the VC170 multimeter, which is in current measurement mode. This is one example why having several multimeters can come in handy at times.

Not bad – roughly 1 mV to measure 1 mA, so the burden resistor in this unit for the 400 mA range is somewhere around 1.3 Ω. Note also that with 400 mA, the voltage drop will rise to over 500 mV!

Let’s repeat this with the VC170 in µA range, i.e. measuring up to 4000 µA:

Hmmm… the voltage drop with 1 mA is now 100 mV, i.e. a 100 Ω burden resistor. Not stellar.

Why is this a problem? Let’s take an example from the JeeNode world: say you want to measure the current consumed by the JeeNode once it has started up and entered some sort of low-power state in your sketch. You expect to see a few µA, so you place the multimeter in µA mode.

The JeeNode starts up, powered from say a 3.6V 3x AA battery pack with EneLoops. It starts up in full power mode, briefly drawing perhaps 10 mA. You’ve got the multimeter in series, which in µA mode means that you’ve got a 100 Ω resistor in series with the battery.

The problem: at 10 mA, a 100 Ω resistor will cause a 1V drop (BTW, make sure you can dream these cases of Ohm’s law, it’s an extremely useful skill). That comes out as 100 V/A burden voltage.

So the battery gives out 3.6V, but only 2.6V reaches the JeeNode. Supposing its ATmega is set to the default fuse settings, then the brown-out detector will force a reset at 2.7V – whoops! You’re about to witness a JeeNode constantly getting reset – just by measuring its current consumption!

In the 400 mA range, the voltage drop at 10 mA will be 13 mV and affect the circuit less (1.3 V/A burden voltage).
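
The underlying arithmetic is just Ohm’s law again – a tiny helper making the two cases above explicit (the function name is mine):

```cpp
#include <cassert>
#include <cmath>

// Voltage lost across the meter's burden resistor: E = I x R
double burdenDrop(double amps, double ohms) {
    return amps * ohms;
}
```

A 10 mA startup peak through the 100 Ω burden of the µA range drops a full volt, while the 1.3 Ω of the 400 mA range only drops 13 mV.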

The good news is that the multimeter still does auto-ranging. As you can see in the above example, 1 mA is shown with 2 significant decimals, so it’s still possible to read out ± 10 µA in this mode (don’t assume it’ll be accurate beyond perhaps ± 30 µA, though).

Can this problem be avoided? Sure. Several ways:

stick to the higher current ranges, even if that means you can’t see low values very precisely

add a Schottky diode in forward mode over the multimeter – this will limit the voltage drop to about 0.3V, even during that brief 10 mA startup peak

get a better instrument – this is easier said than done, most multimeters have a 1..100 V/A burden voltage (!)

One caveat with Dave’s solution is that it is never in stock. I’ve been trying to get one for years without luck. He occasionally gets new ones made, but they tend to sell out within nanoseconds, AFAICT!

PortI2C is a subclass of Port (defined here), which handles raw I/O for one port (1..4 or 0)

there’s an “enum” which defines some constants, specifically for PortI2C use

there’s a “constructor” which takes two arguments (the second one is optional)

there are four member functions available to any instance of class PortI2C

But that’s not all. Since PortI2C derives publicly from Port, all the members of the Port class are also available to PortI2C instances. So even when using a PortI2C instance as I2C bus, you could still control the IRQ pin on it (mode3(), digiWrite3(), etc). An I2C port is a regular port with I2C extensions.

Note this line:

PortI2C (uint8_t num, uint8_t rate =KHZMAX);

This is the constructor of the PortI2C class, since it has the same name as the class. You never call it directly, it gets called automatically whenever a new instance of PortI2C is declared.

This constructor takes one or two arguments: the last argument can be omitted, in which case it will get a default value of KHZMAX, which is the constant 1, as defined in the preceding enum.

Note that the first argument is required. The following instance declaration will generate a compile error:

PortI2C myPort; // compile-time error!

There’s no way to create an instance of a port without specifying its port number (an int from 1 to 4, or 0). Instead, you have to use either one of the following lines:

And this is where things get nasty: PortI2C is a subclass of Port, which also has a constructor requiring a port number. So the PortI2C constructor somehow has to pass this information to the Port constructor.
To see how this is done, look at the PortI2C constructor function, defined in Ports.cpp:
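
In outline, it’s done with a base-class initializer – here’s a simplified stand-in for the two classes (not the actual Ports.cpp code, just the constructor-chaining pattern):

```cpp
#include <cassert>

// Simplified stand-ins, showing how a subclass constructor passes its
// arguments up to the base-class constructor via the initializer list
class Port {
    unsigned char portNum;
public:
    Port (unsigned char num) : portNum (num) {}
    unsigned char number() const { return portNum; }
};

class PortI2C : public Port {
    unsigned char rate;
public:
    enum { KHZMAX = 1 };
    // "Port (num)" hands the port number to the base-class constructor
    PortI2C (unsigned char num, unsigned char r =KHZMAX)
        : Port (num), rate (r) {}
    unsigned char speed() const { return rate; }
};
```

Declaring `PortI2C myPort (3);` thus initializes both the Port part (port number 3) and the I2C rate, which defaults to KHZMAX.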

DeviceI2C is not a subclass, but it does need to refer to the PortI2C instance specified as 1st argument and remember the bus address. The way this is done is through member variables “port” and “addr”. These are defined at the top of the class, and initialized in the DeviceI2C constructor.

The reason we can’t use subclassing here, is that a device is not a port, it’s merely associated with a port and I2C bus, since multiple devices can coexist on the bus. The “&” notation is a bit like the “*” pointer notation in C, I’ll refer you to C++ language documentation for an explanation of the difference. It’s not essential here.

Not being a subclass of PortI2C, means we can’t simply send I2C packets via send(), write(), etc. Instead, we have to go through the “port” variable. Here’s the above write() member function in more detail:

uint8_t write(uint8_t data) const { return port.write(data); }

In other words, instead of simply calling “write()”, we have to call “port.write()”. No big deal.
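
Here’s the same pattern as a self-contained sketch – again simplified stand-ins rather than the real JeeLib classes, just to show the reference member and the delegation:

```cpp
#include <cassert>

// Minimal stand-in for the bus class
class PortI2C {
public:
    PortI2C (unsigned char num) { (void) num; }
    unsigned char write (unsigned char data) { return data; } // pretend-send
};

// A device is associated with a bus - it is not a kind of bus
class DeviceI2C {
    PortI2C& port;      // reference to the bus this device sits on
    unsigned char addr; // the device's address on that bus
public:
    DeviceI2C (PortI2C& p, unsigned char a) : port (p), addr (a) {}
    unsigned char address() const { return addr; }
    // not inherited - explicitly delegated to the bus object
    unsigned char write (unsigned char data) const { return port.write(data); }
};
```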

So much for the intricacies of C++ – I hope this’ll allow you to better read the source code inside JeeLib.

Welcome to the Tuesday Teardown series, about looking inside the technology around us.

Well, not a very “deep” teardown, just opening it up and looking inside a conventional 400W PC power supply:

When turned on, but not powered up, the power consumption is a substantial 2.8 W. IOW, that’s your computer when turned off.
But the nasty surprise was that even with the mechanical switch in the off position, it still draws 0.04 W! Oh well, the sticker says “2006”, so let’s assume things have improved since then.

Here’s the top view inside:

Two large heatsinks with two fans blowing air across, the bottom fan is on the outside of the case.

These caps scare me, I had it powered up briefly, so I’d probably get a jolt if I were to touch them now:

Two small transformers in there, on the right. And here are three more:

One last toroidal one where the main circuitry appears to be – note the one-sided PCB with jumpers:

And that board at the right is filled with varistors, etc – noise and surge suppression, no doubt:

Go to the website for the full-size view. Looking at the number of transformers, this supply is probably similar. The basic idea is simple: generate a high-frequency AC signal, and feed it through some transformers for galvanic isolation and to produce the much lower voltages at much higher currents. A high frequency is used i.s.o. 50 Hz because transformers are more efficient that way. There’s a feedback mechanism to regulate the output voltages.

The TL494 chip (which is not necessarily the same as used in this particular supply) is the heart of a PWM Power Control Circuit, which seems to drive it all. It generates pulses, and varies the ON-time as a way to regulate the generated output voltages. I think.

What I never understood is how you can regulate multiple voltages with what looks like only one feedback loop. In the schematic, the +12 and +5 V outputs are brought together as a single regulating signal through 2 resistors. What if the power draw from those 12V and 5V sections differ widely over time?

Anyway, go to that website mentioned earlier to read more about how it all works. I’m sure it does since there must be hundreds of millions of these on the planet by now…

Update – This particular unit will turn on without adding 10 Ω resistors, as sometimes suggested for lab use of such PSU’s. Voltage unloaded is 3.39V, 5.18V, and 11.99V, so close enough – with a little extra to compensate for wire losses. Big downside for lab use of such a “raw” PSU, is the nearly unlimited current that will flow with a short-circuit – guaranteed to vaporize lots of things! One solution would be to add basic current sensing and MOSFETs to switch off when pre-set values are being exceeded. With proper dimensioning, the added current drop need not be more than perhaps 100 mV, so the generated voltages would still be “in spec”. The + and – 12V would make a nice ±10V supply for analog experiments with dual-supply op-amps, for example.

The Heading Board has been sold out for some time now. I’ve not been reordering it because it’s a bit quirky (needing the IRQ pin as well) and probably also not really all that sensitive.

To continue to offer a solution, I’ve decided to switch to the Modern Device 3-axis Compass Board instead:

As you can see, it has a port-compatible header footprint on one side. The other side is for use with a 5V system, such as an RBBB or Arduino. Which is why there is also a 3.3V regulator on there.

The board is slightly smaller than a standard JeePlug and does not have port headers on both sides to support daisy chaining, but apart from that it’s totally JeeNode-/port-compatible. You can simply put it on the end of the chain if you want to mechanically stack this along with other I2C-enabled plugs.

Be careful about the orientation: it is not the same as for other plugs, and there’s no “dot” next to the “P” pin.

I’ve added a very basic implementation in JeeLib to access the HMC5883 chip on this board, with demo:
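Since the demo isn't reproduced here, here's a hedged sketch of one detail involved in reading the chip: each axis comes back as two bytes, most-significant byte first, which have to be combined into a signed 16-bit value (the helper name is mine, not JeeLib's):

```cpp
#include <cassert>
#include <cstdint>

// The HMC5883 sends each axis reading MSB first; join the two bytes and
// reinterpret the result as a signed 16-bit quantity.
int16_t toAxis(uint8_t msb, uint8_t lsb) {
    return (int16_t) (((uint16_t) msb << 8) | lsb);
}
```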

The other day, I mentioned a way to keep approximate track of time via the watchdog timer, by running it as often as possible, i.e. in a 16 ms cycle.

This brought out some loose ends which I’d like to clean up here.

First of all, the loseSomeTime() code now runs 60 times per second, so optimizing it might be useful. I ended up moving some logic out of the loop, to keep the time between power-down states as short as possible:

The result is this power consumption profile, every ≈ 16 ms:

That’s still about 22 µs @ 6.5 mA. But is it, really?

The above current measurement was done between the battery and the power supply of a JeeNode SMD. Let’s redo this without regulator, i.e. using a “modded” JeeNode with the regulator replaced by a jumper:

Couple of observations:

different ATmega, different watchdog accuracy: 17.2 vs 16.3 ms

the rise and fall times of the pulse are sharper, i.e. not dampened by a 10 µF buffer cap

new behavior: there’s now a 0.4 mA current during 80 µs (probably the clock starting up?)

that startup phase adds another 75 nC to the total charge consumed

note that there is a negative current flow, causing the charge integral to decrease

The worrying bit is that these two ways of measuring the current pulses differ so much – I can’t quite explain it. One thing is clear though: an adjusted fuse setting with faster clock startup should also make a substantial difference, since this now needs to happen 60 times per second.

A second improvement is to assume that when a watchdog cycle gets interrupted, half the time has passed – on average, that’s the best we can guess, assuming the interrupt source is independent:
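As a sketch (not the actual JeeLib code), the adjustment amounts to this:

```cpp
#include <cassert>

// An uninterrupted watchdog cycle counts in full; an interrupted one counts
// for half, since on average that's the best guess for when it was cut short.
unsigned elapsedEstimateMs(unsigned sliceMs, bool interrupted) {
    return interrupted ? sliceMs / 2 : sliceMs;
}
```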

The last issue I wanted to bring up here, is that small code optimizations can sometimes make a noticeable difference. When running the test sketch (same as in this post) with an 8192 millisecond argument to loseSomeTime(), the above code produces the following profile:

The reason the pulse is twice as wide is that the “while” in there now loops a few times, making the run time almost 50 µs between power-down phases. As Jörg Becker pointed out in a comment, the ATmega has no “barrel shifter” hardware, meaning that “shift-by-N” is not a hardware instruction which can run in one machine cycle. Instead, the C runtime library needs to emulate this with repeated shift-by-1 steps.
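Here's a hypothetical before/after illustration of that kind of change (not the actual JeeLib code): hoisting the variable shift out of the loop condition avoids redoing the shift-by-1 emulation on every pass:

```cpp
#include <cassert>

// Before: "1 << bits" with a variable count is emulated on an AVR by repeated
// shift-by-1, and evaluating it in the condition repeats that work each pass.
unsigned slicesSlow(unsigned msleft, unsigned bits) {
    unsigned n = 0;
    while (msleft >= (1u << bits)) {   // shift re-emulated every iteration
        msleft -= 1u << bits;
        ++n;
    }
    return n;
}

// After: the shift is hoisted out, so the emulation loop runs only once.
unsigned slicesFast(unsigned msleft, unsigned bits) {
    unsigned n = 0, slice = 1u << bits;
    while (msleft >= slice) {
        msleft -= slice;
        ++n;
    }
    return n;
}
```

Both versions compute the same result; only the amount of repeated work differs.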

By changing the while loop from this:

… to this:

… we get this new power consumption profile (the horizontal scale is now 20 µs/div):

IOW, this takes 20 µs less time. Does it matter?
Well, that might depend on who you ask:

Marketing guy: “50 µs is 67% more than 30 µs – wow, that means your sketch might last 5 years i.s.o. 3 years on the same battery!”

Realist: “50 µs i.s.o. 30 µs every 8 seconds – nah, those extra 120 nC or so will merely add 15 nA to the average current consumption.”

The moral of this story: 1) careful how you measure things, and 2) optimize where it matters.

Anyway, I’ve added all three changes to JeeLib. It won’t hurt, and besides: it leads to smaller code.

First there was the 7th HackersNL meeting in Utrecht. The name of the event is unfortunate, IMO (this whole “hacker” moniker doesn’t sit well with normal people, i.e. 99.9% of humanity), but the presentations were both absolutely fantastic. First a wide range of design topics by David Menting, including his “linear clock”, for which he designed custom hardware based on a standard tiny Linux + WiFi board. Then a talk by Jaap Vermaas and Peter Brier about turning a cheap laser cutter into a pretty amazing unit, by ripping out the driver board and software and replacing them with their own custom hardware built around an MBED module, plus software (wiki). Both cutting edge, if you pardon the pun, and above all a pressure cooker where two dozen people get to talk about “stuff”, mostly related to Physical Computing. Everything is open source.

If you live in the neighborhood of Utrecht, I can highly recommend this recurring meeting, scheduled for the last Thursday of each month – so take note, hope to see you there, one day!

The other event was the Air Quality Egg Workshop, by Joe Saavedra. Basic idea: a sensor unit, to measure air quality in some way, plus an “egg” base station which can tie into Pachube (both ways), relays the sensor data, and includes an RGB color light plus push-button.

Except that it doesn’t exist yet. We built a wired prototype based on a Nanode with SparkFun protoshield, a CO sensor, an NO2 sensor, and a DHT22 temperature/humidity sensor.

Here’s my concoction (three of the sensors were mounted away from the heat generated by the Nanode):

It’s now sitting next to the JeeLabs server, feeding Pachube periodically. We’ll see how it goes, since apparently these sensors need 24..48 hours to stabilize. Here are some of the readings so far:

What I took away from this, is:

Whee, there sure is a lot more fun stuff waiting to be explored!

When you put a fantastic bunch of creative people together, you get magic!

Not enough time! Would it help to keep flying westwards to cram more hours into a day?

Welcome to the Thursday Toolkit series, about tools for building Physical Computing projects.

Today just some more general notes about stuff which you probably already have: screwdrivers, pliers, tweezers, that sort of stuff. None of this is electronic – but some details do tend to matter in this context.

The toolkit I picked for this series is item 814892 from Conrad, or rather 046027, which is the multimeter plus this set, as a package deal:

Don’t expect top-of-the-line professional tools – just stuff which ought to work nicely. The idea is that if any of these tools breaks, then apparently you’re using it a lot (or handling it too roughly), so that’s a good moment to buy a better-quality version of that particular tool – and the rest still comes in handy. By then you’ll have gained plenty of experience with it, and you’ll be better equipped to pick a good brand which meets your needs. Either way, it’s worth the initial expense!

One of the tools you’ll use a lot is a pair of side-cutters, to snip off the wires of resistors, caps, etc. after soldering these components into your circuit or onto your board. The one in this set works, but also illustrates the kind-of-average build quality of these items:

The jaws will cut just fine, but they are not 100% parallel – they cut better near the tip (which is what matters most anyway) than further in, where these cutters don’t fully close. But hey – they do work.

Other items in this toolbox are: various types of screwdrivers (flat, Phillips, and Torx), hex spanners, and such. Nothing spectacular, but they come in small sizes – very convenient for electronics use.

There’s a little magnetic LED light (yawn), a loupe (oh so handy, at times, with SMD), and some less common utilities like a magnet on a telescopic pointer and a long “gripper” – useful to get screws accidentally dropped in some hard-to-reach spots, I suppose.

Furthermore there are two types of tweezers in this collection, a straight “reverse-action” type which opens when squeezed, and one bent to the side. Both can be extremely useful, for very different purposes: the straight one acts like a weak clip, since it springs back closed when released. It can be used to gently hold something in place while you’re soldering or measuring it (it does conduct heat, so don’t put it too close to the spot you want to solder).

The standard tweezer is an excellent example of a prolongement du corps – an extension of your body, letting you do more than you’d think possible. I prefer this “angled” type with a bend in it over straight models. It takes very little time to learn to pick up and manipulate tiny SMD components with it. I remember quite well how amazed I was when trying this for the first time with sub-millimeter SMDs – felt a bit like being a neuro-surgeon :)

None of these items are very special. You probably have most of them already. Otherwise, just be sure to get at least the side-cutters, the standard tweezers, and a loupe (or small magnifying glass) … even if you don’t do SMD.

Last week, I described how the PortI2C + DeviceI2C definitions in JeeLib work together to support daisy-chaining multiple “JeePlugs” on a “JeePort”. To describe how, I need to go into some C++ concepts.

PortI2C and DeviceI2C (and MemoryPlug) are each defined as a C++ class – think of this as a little “software module” if you like. But a class by itself doesn’t do much – just like the C type “int” doesn’t do much – until you create a variable of that type. JeeLib is full of classes, but to make any one of them come alive you have to create an instance – which is what the following line does:

PortI2C myPort (3);

This could also have been written as follows, but for classes the parentheses are preferable:

PortI2C myPort = 3;

The class is “PortI2C”, the new variable is “myPort”, and its initial value is based on the integer 3.

In C++, instances are “constructed”. If a class defines a constructor function, then that code will be run while the instance is being set up. In this case, the PortI2C constructor defined inside JeeLib takes the port number and remembers it inside the new instance for later use. It also sets up the I/O pins to properly initialize the I2C bus signals. You can find the code here, if you’re curious.

So now we have a “myPort” instance. We could use it to send and receive data on the I2C bus it just created, but keeping track of all the plugs (i.e. devices) on the bus would be a bit tedious.

The next convenience JeeLib provides, is support per plug. This is what the DeviceI2C class does: you tell it what port to use, and the address of the plug:

DeviceI2C plugOne (myPort, 0x20);

Same structure: the class is “DeviceI2C”, the new variable is “plugOne”, and the initial value depends on two things: a port instance and the integer 0x20. The port instance is that I2C port we set up before.

The separation between PortI2C and DeviceI2C is what lets us model the real world: each port can act as one I2C bus, and each bus can handle multiple plugs, i.e. I2C devices. We simply create multiple instances of DeviceI2C, giving each of them a different variable name and a unique bus address.

The Memory Plug example last week takes this all even further. There’s a “MemoryPlug” class, i.e. essentially a specialized DeviceI2C which knows a little more about the EEPROM chips on the Memory Plug.

In C++, this sort of specialization is based on a concept called subclassing: we can define a new class in terms of an existing one, and extend it to behave in slightly different ways (lots of flexibility here).
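Here's a minimal standalone sketch of the same pattern – deliberately simplified, and not the actual JeeLib code:

```cpp
#include <cassert>

// A port, a generic I2C device on that port, and a subclass which bakes in
// the Memory Plug's fixed 0x50 bus address.
class PortI2C {
public:
    explicit PortI2C(int num) : portNum (num) {}
    int port() const { return portNum; }
private:
    int portNum;
};

class DeviceI2C {
public:
    DeviceI2C(const PortI2C& p, int address) : bus (p), addr (address) {}
    int busAddress() const { return addr; }
private:
    const PortI2C& bus;  // which bus this device sits on
    int addr;            // its unique address on that bus
};

// Subclassing: a MemoryPlug is-a DeviceI2C, with the address filled in.
class MemoryPlug : public DeviceI2C {
public:
    explicit MemoryPlug(const PortI2C& p) : DeviceI2C (p, 0x50) {}
};
```

The constructor chain runs bottom-up: constructing a MemoryPlug first runs the DeviceI2C constructor with the fixed 0x50 address, just as described above.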

In the code, you can see this in the first line of the class definition:

The JeeLib library has a convenient loseSomeTime() function which puts the ATmega in low-power mode for 16 to 60,000 ms. This is only about 10% accurate, because it uses the hardware watchdog, which is based on an internally RC-generated 128 kHz oscillator.

But the worst bit is that when you use this in combination with interrupts to wake up the ATmega, then you can’t tell how much time has elapsed, because the clock is not running. All you know when waking up, is that no more than the watchdog timeout has passed. The best you can assume is that half of it has passed – but with loseSomeTime() accepting values up to 1 minute that’s horribly imprecise.

Can we do better? Yes we can…

Internally, loseSomeTime() works by cutting up the requested time into smaller slices which the watchdog can handle. So for a 10000 ms request, for example, loseSomeTime() would wait 8192 + 1024 + 512 + 256 + 16 ms to reach the requested delay, approximately. Convenient, except for those long waits.
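The slicing itself can be sketched as a greedy loop over the watchdog's power-of-two timeouts (an illustration of the idea, not the actual JeeLib implementation):

```cpp
#include <cassert>
#include <vector>

// The watchdog supports power-of-two timeouts from 16 up to 8192 ms; cut the
// requested time up greedily, largest slice first.
std::vector<unsigned> watchdogSlices(unsigned ms) {
    std::vector<unsigned> slices;
    for (unsigned slice = 8192; slice >= 16; slice >>= 1)
        while (ms >= slice) {
            slices.push_back(slice);
            ms -= slice;
        }
    return slices;
}
```

For a 10000 ms request this yields exactly the 8192 + 1024 + 512 + 256 + 16 ms sequence mentioned above.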

First of all, note how 8192 ms ends up being 8255 ms, due to the watchdog timer inaccuracy.

But the main result is that to perform this sketch, the ATmega will draw 5 mA during about 50 µs. The rest of the time it’ll be a few µA, i.e. powered down. These wake-ups draw virtually no current, when averaged.

The downside is that under these conditions, interrupts can cause us to lose track of time, up to 8192 ms.

So let’s try something else. Let’s instead run the watchdog as briefly as possible:

Current consumption now changes to this:

Because of a loop in the loseSomeTime() code which now runs faster, the running time drops by half in this case (and hence the total charge in nanocoulombs halves too). But note that we’re now waking up about 60 times per second.

This means that interrupts can now only mess with our sense of time by at most 16 ms. Without interruptions (i.e. most of the time), the watchdog just completes and loseSomeTime() adds 16 ms to the millis() clock.

Let’s try and estimate the power consumption added by these very frequent wake-ups:

each wake-up pulse draws 5.5 mA for about 25 µs

the charge consumed by each pulse is 122 nC

there are (roughly) 60 wake-up pulses per second

so per second, these pulses consume 60 x 122 nC ≈ 7.3 µC in total

that comes down to an average current consumption of 7.3 µA

That’s not bad at all! By waking up 60 times per second (and going back to sleep as quickly as possible), we add only 7.3 µA current consumption to the total. The reason this works, is that wake-ups only take 25 µs, which – even at 60 times per second – hardly adds up to anything.
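The arithmetic in that list boils down to one line: charge per pulse times pulses per second is charge per second, which is by definition the average current.

```cpp
#include <cassert>
#include <cmath>

// Coulombs per pulse times pulses per second = coulombs per second = amperes.
double averageCurrentA(double pulseChargeC, double pulsesPerSecond) {
    return pulseChargeC * pulsesPerSecond;
}
```

Plugging in the measured 122 nC at 60 pulses per second gives the 7.3 µA quoted above.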

So this technique might be a very nice way to keep approximate track of time while mostly in sleep mode, with the ability to wake up whenever some significant event (i.e. interrupt) happens!

PS. In case you’re wondering about the shape of these signals, keep in mind that I’m measuring current draw before the regulator and 10 µF capacitor on the JeeNode.

When you have two nearly identical sine wave signals and you want to compare them, one technique is to plot one against the other, creating what is known as a Lissajous curve.

Lissajous curves make nice images, and even nicer videos because of the phase shifts.

So let’s take two signal generators and try it out, eh?

On the X-axis, I’m going to plot a 10 MHz sine wave from the new AWG, described on this weblog a few days ago. The frequency accuracy and stability of its output signal is within 1 or 2 ppm, according to the TG2511 specs.

On the Y-axis, let’s connect a second 10 MHz sine wave from a cheap DDS, also described on this weblog a few months back. This has a simple crystal, so I’d expect 50 to 100 ppm frequency accuracy, i.e. within 1000 Hz.

When you connect these to an oscilloscope and put it in X-Y mode, you get pictures like these:

The more the two signals are in phase, the more the result will look like a straight line, slanted at 45° (from the bottom left to the top right). When exactly 180° out of phase, it will show a straight line from top left to bottom right. Everything in between creates ovals, and when the signals are 90° out of phase (either lagging or leading), the result is a perfect circle.
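The 90° case is easy to check numerically: plotting y = sin(ωt + 90°) = cos(ωt) against x = sin(ωt), every point lands on the unit circle. A small sketch:

```cpp
#include <cassert>
#include <cmath>

// One point of a Lissajous figure: x = sin(t), y = sin(t + phase).
// At phase = 90 deg, x^2 + y^2 = 1 for every t (a circle); at phase = 0,
// y equals x (the 45-degree line).
struct Point { double x, y; };

Point lissajous(double t, double phaseRad) {
    return { std::sin(t), std::sin(t + phaseRad) };
}
```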

So the big thing about Lissajous curves is that they let you compare the relative phase of two sine waves.

In practice, signals from different sources will tend to change phase over time, i.e. “drift”, as one sine wave is slightly slower or faster than the other. This creates a way to precisely compare two frequency generators: measure how long it takes for the phase to go from 0° to 180° (or 360°, which is 0° again), and you get an idea how long it takes for one signal to catch up with (or lag) the other by one full sine wave. The trouble with this approach is that sometimes these cycles are too fast to see, let alone time manually.
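The relation behind that timing trick is simple: a full 360° drift means one source has gained exactly one whole cycle on the other, so the frequency difference is the reciprocal of the drift period.

```cpp
#include <cassert>

// If the Lissajous pattern takes T seconds to drift through a full 360
// degrees, the two sources differ by exactly 1/T Hz.
double frequencyOffsetHz(double secondsPerFullDrift) {
    return 1.0 / secondsPerFullDrift;
}
```

E.g. a pattern which takes half a second to drift through 360° means the two generators differ by 2 Hz – already hard to time by hand.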

With an adjustable frequency source, there’s also another way: adjust a known frequency until the shape stays the same, and you’ll have “measured” the frequency of the other signal in terms of the adjusted one since they must now be equal. It’s very much like tuning a musical instrument by ear and adjusting for a “zero beat”.

That’s what I did, and I ended up with the following result for this test setup:

IOW, the cheap DDS is running 0.03% slow – i.e. about 300 ppm! And it’s not holding that frequency very well either, because the DDS soon starts drifting again. Not really surprising for such a low-cost unit off eBay – but it’s still a useful signal source: lots of experiments and measurements can be done at a fairly decent 0.03% accuracy level, after all.

Triggered by the recent signal generator checks, and those FM radio stations creeping into the signal yesterday, I wanted to do another test to see how and when this happens, using a series of scope FFT snapshots.

Here’s a 50 Ω coax cable of 2m length hooked up, with 50 Ω termination on both sides but no signal. The scale is 5 dBm per division, and I’ve zoomed into this very low range with a baseline of about -105 dB. The following 200 MHz wide FFT measurements were all done with the scope input set to max sensitivity, i.e. 1 mV/div:

Note the slight FM radio station RF signal pickup, even with a fully terminated coax cable!

Same thing, but disconnected on one end, i.e. only one 50 Ω terminator inside the scope:

Here’s the spectrum with no coax at all, i.e. nothing connected to the scope, but its 50 Ω shunt still enabled:

When also adding an external 50 Ω terminator, the lower frequencies drop ever so slightly further:

And here’s what happens when the 2m 50 Ω coax cable is attached back on, without the 50 Ω termination:

As you can see, the coax cable now acts as an antenna, picking up a few more signals at 38.45 MHz and 46.38 MHz, with FM reception shooting up to 20 dB above the noise floor. Even though it’s shielded!

The slight drop in noise across the screen from 0 to 200 MHz is probably nothing more or less than the scope’s bandwidth: a 200 MHz scope is specified as having a 3 dB drop at 200 MHz, which fits amazingly well with what all the above screen shots are showing.

These tests confirm the superb signal processing specs of the Hameg oscilloscope front end: a -105 dB noise floor @ 1 mV/div maximum sensitivity. For even lower noise levels (and a higher frequency range, as would be needed for 868 MHz and 2.4 GHz RF measurements), probably only a “real” spectrum analyzer will do better.

Whee… with a multi-meter probe wire attached as antenna, I can easily pick up all major AM radio stations:

Tomorrow, I’ll close off with one more post about signal processing: accurate frequency measurements.

Update – As with yesterday’s post, these FFT’s were produced with the Rectangle window function. As a bonus, here’s the frequency spectrum produced by the noise generator in my new AWG:

Down by 30 dB at 50 MHz, but a pretty good source of white noise at lower frequencies. The AWG can add an adjustable amount of this noise to the generated waveforms – can be useful to see how well a filter, demodulator, or other detector behaves, for example.

As shown in yesterday’s post, once you’re looking at signals in the high MHz range, it’s easy to make mistakes. Looking at that screen shot again, you can see a whole bunch of 90..100 MHz spikes:

These are in fact local FM radio stations, being picked up by my clunky scope probe hookup to the AWG. In other words, the probe acts as an antenna: radiated RF signals are accidentally received and mixed in with the conducted 25 MHz signal. The way to get rid of these is to use “shielding” to keep those radio waves out.

Here’s the same signal, using a 50 Ω coax cable all the way between signal generator and oscilloscope:

No more weird stuff, just the 25 MHz multiples – indicating that there’s a slight distortion in the generated 25 MHz sine wave. This is normal for any signal generated by a direct digital synthesizer, because the waveform is created through a digital-to-analog converter, fed at 125 million samples per second in this case. IOW, it’s an approximated sine wave (merely 5 datapoints per sine wave @ 14-bit resolution).

So the big change is that the FM radio stations are gone, and that the signal’s noise level is now in fact a few dB cleaner than before – as you can see from the slightly lower tail towards 200 MHz.

I did have to use a small trick to make these graphs comparable: the second one has the scope’s 50 Ω internal terminator enabled, so that the path from signal source to signal destination is now done by the book: a 50 Ω source (in the AWG), feeding a 50 Ω coax cable, terminated by 50 Ω at the destination (in the scope). This does “attenuate” (i.e. reduce) the signal level by half, so I had to raise the baseline by 3 dB on the second FFT screen shot to make the height of the 25 MHz peak identical in both screens.

One other minor difference is that the second graph is smoothed over 256 samples i.s.o. 64 – cleaning up the resulting line slightly more.

So you see… it’s possible to do RF-type stuff without understanding all the details – which I certainly don’t, yet – and get decent results. The 25 MHz wave coming into the scope is very clean: the strongest harmonic is some 50 dB below the signal itself, which means it carries 100,000 times less energy than the main signal.

Tomorrow, another post about this topic: cables, termination, and noise…

Update – As John Beale pointed out in the comments below, the FFT baseline is caused by the choice of FFT windowing function. Here’s the 50 Ω coax example again, using a Hanning window:

Much better for comparing relative dB differences between peaks.

(Tomorrow’s post will also use the default Rectangle window, sorry about that…)

Earlier this week, I described how a fixed frequency can be used to stabilize others.

Well… as part of my continuing drive to set up a more complete workbench here at JeeLabs, I’ve decided to get another piece of equipment which relies on this mechanism, called an “Arbitrary Waveform Generator” (AWG) or “function generator” or “signal generator” – three names for essentially the same instrument, as far as I can tell.

An AWG produces a repetitive electrical signal, such as a sine wave or a square wave. Very roughly speaking, you can think of sine waves as “pure analog” and square waves as “pure digital” frequencies.

The unit I picked is a fairly advanced one, the TG2511 from TTi (Thurlby Thandar Instruments) in the UK:

(Check out that box underneath – as a reference it now makes a lot more sense, eh?)

It produces sine waves and square waves up to 25 MHz, and has tons of other waveforms built in, including ramp, triangle, pulse, noise, and more. In fact, since it’s an AWG, you can load any waveform shape into it, and it’ll reproduce it at up to 125 Mega-samples per second and 14-bit resolution (it goes up to 6 MHz in this mode).

Two other major capabilities of such a unit are: the ability to “sweep” across a range of frequencies and being able to “modulate” the generated signal with another one in numerous ways: AM, FM, PM, PWM, and FSK.

As with the Hameg HMO2024 oscilloscope and the GW-Instek GPD-2303S power supply, this thing can be remotely controlled over USB. So it can be driven from a computer to perform complex and/or lengthy tests.

This model does more than I need, but there was a good “price burner” offer at Distrelec, so I decided to go for it. Function generators are not the most important instruments for an electronics lab, but they are extremely useful to learn about all sorts of analog electronics, and to illustrate various concepts and effects “for real”. Note that for lower frequencies, you can generate rough arbitrary waveforms with simply an ATmega and a few resistors.

Here’s the FFT spectrum of its 25 MHz sine wave – a few spikes at 25 MHz multiples, as expected, plus a bunch of 90..105 MHz spikes which also appear when the AWG output is off (more about those tomorrow):

Such an AWG is not limited to strictly analog uses, by the way. This unit should also be able to generate a serial bit-stream, like an RS232 message, for example. Such patterns can be loaded via USB on the front panel.

I intend to put this instrument to good use here at JeeLabs, not in the least to create good examples for future weblog posts and to illustrate relevant electronics concepts in that huge playground called Physical Computing.

Actually, my suggestion for this series would be to get item 046027, which includes a whole set of additional tools for only €12 extra. It won’t break the bank, and it gets you various screwdrivers, tweezers, a simple loupe, a lamp, and a few more items.

Anyway, back to the multimeter. Trust me – this is one of those lab instruments which will enable you to learn more about electricity than anything else. And this is one of those cases where a small amount of money will go a huge way – this particular unit lets you measure voltage, current, resistance, frequency, and more. The VC170 even does non-contact AC mains sensing, to detect live wires from a short distance.

I’ve got over half a dozen multimeters by now. Low-cost as well as expensive / more accurate ones. My favorite one is this VC170 (or rather, its predecessor, the VC160 which I’ve been using for several years now). Why? Because it’s very small, it’s fast and responsive, and it offers an excellent set of trade-offs.

Some more expensive ones are very sluggish (but also produce considerably more accurate 5-digit readings), some beep very annoyingly all the time, and some don’t have the sensitivity you need. Of all the multimeters I have, I end up using my trusty VC160 most of the time. It does what I need, and it doesn’t fill up my desk.

You can’t really go wrong with this. You’ll want more than one multimeter if you really get into electronics. Here’s a not-too-contrived example: measuring incoming and outgoing voltages of a power regulator at the same time, as well as incoming and outgoing currents – that’s 4 multimeters! So by the time you want a more advanced one, this first unit will still come in handy in certain use cases.

The good news is that one is fine for a huge range of situations. This one will measure up to 230 VAC mains (with a small caveat, see below), and all the way down to fractions of a µA of current (ultra-low power, anyone?).

Learning how to make the most of a multimeter is a story far beyond this initial Thursday Toolkit series. But it’s really easy to get started and learn along the way. Even just fiddling with a resistor, or a capacitor and a resistor, and measuring what happens in various hookups can be a great way to understand Ohm’s law, and all the basics of electronic circuits. Do two resistors in series draw more or less current? What is the resistance of two resistors in parallel? How much voltage are my near-dead batteries giving out, and how are they performing under load? Is that power supply doing what it’s supposed to do? And perhaps most important of all: are the proper voltages being applied to the different parts of my circuit? Trivial stuff with a multimeter – you can simply measure it!
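The series/parallel questions in that list have one-line answers, for instance:

```cpp
#include <cassert>

// Two resistors in series simply add; two in parallel combine as the
// product over the sum. With a fixed supply voltage, more resistance means
// less current (I = V/R), so a series pair draws less than either one alone.
double seriesR(double r1, double r2)   { return r1 + r2; }
double parallelR(double r1, double r2) { return r1 * r2 / (r1 + r2); }
```

A multimeter across two 100 Ω resistors will confirm this directly: 200 Ω in series, 50 Ω in parallel.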

Multimeters are very robust, especially auto-ranging ones like this, which can take any voltage and figure out all by themselves whether it’s over 100 V or in the millivolt range. But there are ways to break things. Big currents always tend to cause trouble, and even the best multimeter won’t be pleased if you push a few amps through it while it’s trying to measure microamps. Which is why the above set of input jacks is actually quite nice: voltage and current are very different quantities, and you have to hook up the measuring cables in specific ways to measure the different types of units. But mess-ups do happen… I’ve blown fuses inside my multimeters a few times – fortunately, they are easy to replace.

All multimeters have trade-offs. This one gets many of them right though, and does auto-ranging.

Then again, this multimeter seems to be at its limit when asked to auto-range on 230 VAC, i.e. AC mains around here – it displays “OL” (overload). It measures 230 VAC just fine, though, when using the “Select” button to fix it to the maximum range before taking the measurement.

The other thing is not to get carried away by the 4-digit display. You’ll be able to distinguish 3999 from 4000, but that’s not absolute accuracy: you shouldn’t expect to be spot on when measuring 3.999 V versus 4.000 V. The accuracy is only about 1.5%, so the reading might well be 3.940 V or 4.060 V. What the extra digits do give you is a fairly accurate view of slight fluctuations: the absolute value may be off a bit, but you will be able to see small dips and increases in voltage, current, resistance, etc.

And to be honest: 1.5 % accuracy is actually pretty amazing for such a low-cost instrument, if you compare it to the old analog multimeters which you had to read out by estimating the position of their needle!

The VC170 added a function I’ve dearly missed on the VC160: frequency measurements. Its specs say that it works up to 10 MHz, but a quick test here tells me that it’ll work up to at least 25 MHz with a 1 Vpp signal (wait for tomorrow’s post to find out how I tested that).
The frequency range is in fact very convenient for microcontroller debugging of timing loops, for example – I’ll go into this in a future post.

So much for the multimeter. If you solder electronic circuits together, all I can say is: get one!

This would set up three C++ objects, where each knows how to reach and control its own plug.

But that’s not all. Suppose plug #3 is a Memory Plug, i.e. an EEPROM memory of 128..512 kB. JeeLib contains extra support code to easily read and write data to such a plug, in the form of a C++ class called “MemoryPlug”. It’s an I2C device, but it always has a fixed bus address of 0x50, which for convenience is already built into the JeeLib code. To use this, all we have to do is replace that last plugThree definition above by this line:

MemoryPlug plugMem (myPort);

Once this works, we get a lot of functionality for free. Here’s how to send an I2C packet to plug #1:

Or you can save a 3-byte string to the Memory Plug, on page 12, at offset 21:

plugMem.save(12, "abc", 21, 3);

There’s a lot going on behind the scenes, but the result leads to a fairly clean coding style with all the details nicely tucked away. The question remains how this “tucking away” with C++ classes and objects is done.

Welcome to the Tuesday Teardown series, about looking inside the technology around us.

Over two years ago (gosh, time flies), I reported about a low-cost AC metering device called Cost Control:

It seems to be available from several sources, not just Conrad and ELV, under different brand names. Not sure they are identical on the inside, but the interesting bit is that they transmit on 868 MHz and seem to go down to fairly low power levels as well as all the way up to 16A:

So let’s have a look inside, eh? Here’s the back side of the PCB:

Not much to see, other than a thick bare copper wire, which probably acts as the shunt resistor.

The rest appears to be built around 3 main chips, two of which are epoxied in, so I can’t see what they are:

Flipping this thing over, we can see the different sections. I had expected a special purpose AC power measuring chip, but it looks like this thing is built around a quad LM2902 op-amp:

The rest of the analog circuitry, plus an MPU of some kind running at 4-something MHz, is here:

The 24LC02 is a 2 Kbit I2C EEPROM, for the node ID and some calibration constants, I presume.

And here’s the wireless transmitter, running off a 16 MHz crystal:

Being 16 MHz, it’s a bit unlikely that this is a HopeRF RFM12B (or its transmit-only variant), alas. The blob at the center bottom goes to an antenna wire on the other side of the board.

Would love to be able to decode the wireless signal (1 packet every 5s, very nice!). Either that, or find out how they are measuring the power from 1..3600W – the remote actually displays in tenths of a Watt.

A week ago, there was a post about various clock options and their accuracy.

These clocks generate a stable pulse or sinewave, basically. But what if you need a different frequency?

Suppose you get a very accurate 1 pulse-per-second (i.e. 1 Hz) signal from somewhere, but you want to keep track of time in microseconds? IOW, you need a 1 MHz clock, preferably just as accurate. One way to do this is to use a “Voltage Controlled Oscillator” (VCO). It can be any frequency really – the idea is to divide its output down to 1 Hz and then compare it with your reference clock. If it’s either too slow or too fast, adjust the voltage used to set the precise frequency of the VCO, and bingo – within no time (heh, so to speak), your VCO will be “locked” onto the reference and generate its target frequency, at just about the same accuracy as the 1 pps reference.

My Rubidium clock came with a 63.8976 MHz VCO as part of the bargain:

With no control voltage it generates a sinewave-ish very high frequency signal from just a 3.3V power supply:

That frequency is not as awkward as it looks: 638976 = 3 * 13 * 16384, so you can get 100 Hz out of it with a few simple dividers, as well as any integral fraction of that (including 1 Hz). Another way of going about this is to divide the clock by a simple power of two, say 256 or 4096, and then pass the resulting square wave to an ATmega’s timer/counter input. I haven’t hooked up this VCO to the Rb clock yet, since there’s a bit more logic involved – look up “phase locked loop” (PLL) if you’re interested.

Another source of very stable clock signals is the GPS navigation system (see also this note). Their clocks used to be deliberately made a little bit jittery for civilian use, but this averages out over time, so you can still lock onto it and get a very accurate long-term reference. Look up Allan variance to find out more about short- vs long-term stability – it’s fascinating stuff, but as with most things: once you get into the details it can become quite complex.

To summarize: with a VCO you can produce any frequency you like given some stable reference. So I’m happy with my 10 MHz @ 10 ppt atomic clock, for those rare cases when I’ll need it. And for its geek factor, of course…

Do all these extreme accuracies matter? Well, apart from TDMA, think of it this way: an 868 MHz RFM12B wireless radio with 1 ppm accuracy may be off by 868 Hz. That’s no big deal because the RFM12B’s receiver uses Automatic Frequency Control (AFC) to tune itself into the incoming signal, but with bandwidths in the kilohertz range, you can see that all of a sudden a couple of ppm isn’t so academic any more!

There are several scenarios where it’d be nice to detect the pulse of a blinking LED – especially low-power, because then we can sense it with a long-lasting battery-powered setup, such as a JeeNode or JNµ.

Fortunately, that’s fairly easy to do. I used this test setup to try things out:

The left-hand side is a test pulse, generating 10 ms pulses once a second to simulate a typical indicator light. It’s simple enough with no further explanation needed.

The right-hand side of the above circuit is the actual pulse sensor we’re trying to implement. It’s a voltage divider with on the upper half a fixed resistor (well, a trimmer, but we only have to adjust it once) and the lower half is a Light Dependent Resistor (LDR) – like these two examples:

We want to generate one electrical pulse for each incoming light pulse, in such a way that it could trigger an ATmega’s digital input pin. With a clean pulse we could then set up a pin-change interrupt and keep the ATmega asleep most of the time.

The trouble is that LDR’s and voltage dividers are analog instead of digital. One way would be to constantly read out the signal as analog input. But this sort of polling and continuous ADC use eats up quite a bit of power – a digital signal would be a lot better, as it’d allow us to use pin-change interrupts.

No worries. A digital signal is also a voltage, but it has to stay under a certain limit to be treated as digital “0” and above another limit to act as digital “1”. Here are the specs from the ATmega328 datasheet:

With a JeeNode running at 3.3V, we get: “0” ≤ 1V and “1” ≥ 2V. Note that in theory voltages between 1 and 2V will have indeterminate results, but in practice the signal will work fine as long as it doesn’t stay forever within that gray zone.

The trick is to make that LDR sensor as sensitive as possible. The LDR which I used is a fairly standard one (same one as included with the Room Board) and rises to over 1.5 MΩ resistance when dark. Let’s assume 1 MΩ as extra margin, then we could use 470 kΩ as upper resistor of the above resistor divider, and the resulting signal would be about 2.2V when dark.

The way I maximized the dark-state resistance was to place it in a small black plastic cap, as shown in the above photograph. This is essential, as you’ll see.

Now the actual pulse detection: the resistance of an LDR drops (quite dramatically) in the presence of light, so the trick is to place it close enough to the blinking LED that we want to “read out”. I placed my blinking test LED a few millimetres from the black cap (which is open at the end, of course):

Here’s a scope snapshot of the LED pulse (channel 1, yellow trace) and the detected signal (channel 2, blue trace):

You can see the LDR signal dropping when light is detected, and that the LDR actually needs a bit of time to react. For 10 ms pulses, it’s plenty fast enough, though.

This configuration is probably ok – the voltage swings from about 1.8V (a marginal “1”) down to 0.7V (a clean “0”). The whole setup really depends on first getting the dark resistance as high as possible (i.e. shielded from any stray light) and pulling it down enough during the LED blink (i.e. close enough to pick up a good LED signal).

When the LED is inserted inside the plastic tube, the signal becomes much stronger – but recovery is slower:

It all hinges on the pull-up resistor, really. Which is why the best way to create this sensor is to use an adjustable 1 MΩ trimpot, and tweak it. You won’t need an oscilloscope or even a multimeter to get optimal results:

very important: shield the LDR from stray light as well as you can

pick as high a resistance as possible which still gives a “1” signal (between 100 kΩ and 1 MΩ)

place the LDR + shield near enough to the LED to generate a “0” pulse

tweak and iterate the above steps until it works reliably under all conditions

For minimal power consumption, the pull-up resistor should be as large as possible. Example: with an optimal pull-up of 1 MΩ and the LDR’s dark resistance about 1 MΩ as well, the quiescent current draw will be (Ohm’s law: I = E/R) 3.3 V / 2 MΩ = 1.65 µA, an excellent value for ultra-low power nodes. During the LED light pulse, this will increase to at most twice that (i.e. if the LDR resistance were to drop completely to 0 Ω).

Note that a more sensitive sensor design will be needed if you want to actually measure the length of the pulse with a decent accuracy, but for simple counting purposes where incident light can be kept out, there is nothing simpler than this LDR + pull-up trimmer, probably.

The size is great, of course – but the current consumption isn’t: I measured 1.9 mA idle current @ 5V.

The other inconvenience, in the context of JeeNodes, is that this sensor expects a 5..9V supply voltage.

Using my new accurately adjustable power supply, I was able to establish that it actually works all the way down to 3.6V – with current consumption down to 1.3 mA. But that’s still far from the 50 µA current consumption of the PIR sensor used in the Room Board, so this rules out ultra-low power battery nodes. The detection range is specified as 2 to 3 m, not stellar but probably enough for many uses.

Here are the two sides of this really tiny sensor (whoops, I accidentally cracked the lens):

The chip marked 7144-1 appears to be a 4.4V LDO regulator, with excellent < 5 µA idle current and 0.06 V drop-out voltage under light load, but that seems to point to a circuit which really expects to run at 4.4V internally.

I have no idea what this spec means on the eBay page:

It’s definitely not referring to the idle power consumption of this sensor. Too bad!

Should we try and design our own PIR sensor? I wonder what that would take – some way to stabilize on an average detector level, and then detecting changes in that value? Using an ultra-low power op-amp?

With just the basic Arduino library support, i.e. if you have to do everything with delay() calls, it’s a lot harder to do things like making two LEDs blink at an independent rate – as in this blink_timers example:

This illustrates a simple way of using the millisecond timers: calling one with “poll(ms)” will return true once every “ms” milliseconds. The key advantage is that you can do other things in between these calls. As in the above example, where I used two independent timers to track the blinking rate of two LEDs.

This example uses the millitimers in automatic “retrigger” mode, i.e. poll() will return true every once in a while, because whenever poll() returns true, it also re-arms the timer to start over again.

There may be cases where you need a one-shot type of trigger. This can also be handled by millitimers:

Note that the MilliTimer class implements software timers, and that you can have as many as you like. No relationship to the hardware timers in the ATmega, other than that this is based on the Arduino runtime’s “millis()” function, which normally uses hardware TIMER0 internally.

Welcome to the Thursday Toolkit series, about tools for building Physical Computing projects.

The very first tool you’ll need – inevitably – when going beyond breadboards and wire jumpers to hook stuff together, i.e. when building things which need to become more or less permanent, is a soldering iron.

A soldering iron is just a heater which gets hot enough to melt solder. For the solder used in electronics, the iron’s tip is usually kept at between 275°C and 375°C. That’s more than hot enough to give you a serious burn when touched. So the whole idea of a soldering iron is really to get that heat in the right place, while giving you a way to hold the thing and manipulate it fairly precisely.

There are tons of different models, costing from €10 to €1000. The idea here is to pick one which doesn’t burn a hole in your pocket (heh, turned off, I mean :) – The target I’ve set myself for this initial Thursday Toolkit series is to be able to get all the tools you need for having oodles of fun with various Physical Computing projects for a total of under €150.

That rules out a lot of soldering irons, and forces us to focus on two essential features, i.e. that the soldering iron has enough heat to work well, and also has some sort of basic temperature control. A soldering iron which is too cold will be an awful time-consuming hassle, but one which is too hot will burn and damage electrical components, and will oxidize the solder much too quickly. The big fat uncontrolled “after-burners” used by electricians and plumbers are not suitable here.

As mentioned in the initial post, I decided to buy all the tools at Conrad, item 588417 in this case:

(just the iron and the two tubes at the left are included – the rest was ordered separately)

What I like about this 45W unit is that it has a solid base and sort of a temperature control, letting you regulate how much heat gets generated. This is definitely a low-end unit. Another option, with a better (smaller!) soldering iron, is the Aoyue 936 (here’s a link to a Dutch shop carrying this particular model).

The Conrad unit is a soldering iron heated at 230 VAC. Let’s have a look in close up:

It’s all about heat, and keeping it away from your hand. You hold it like a big pencil or marker, and after an hour or so of use, you’ll note that the middle of that thing gets warm, but not too hot – which is the whole idea. The metal part is the hot end, as you’ll quickly find out once you touch it and get a nasty burn. Trust me, you will get burned at least once – it comes with the hobby…

As I said, this is a low-end unit. One of the compromises is that the hot end is fairly large – so holding this thing steady and accurately placing the tip where you want it takes some practice. But no worries – everyone starts out this way, and many of us keep on working with such a unit for years. It works fine.

The other compromise is that this unit isn’t really controlled by a thermostat, it’s really just trying to keep the tip at a somewhat constant temperature, based on thermal flow in free air. Let’s take it apart:

The shiny metal barrel is the heater. Some nichrome wire, wound inside an isolated jacket no doubt. Much like toasters, hair driers, etc – but only 45W. In the middle sits a big metal core, with the pointy tip we’ll be soldering with. Its main task is to conduct the heat to the tip, and being such a large piece of metal, it’ll keep a reasonably constant temperature, even when the tip touches the copper and wires of the circuit being soldered.

There are two heat-insulated wires to the heater, powered from AC mains. The third wire is ground, and is attached directly to the barrel. This provides three types of safety: 1) if the heater breaks down, it’ll cause a short to ground and blow your AC mains fuse instead of electrocuting you, 2) if you accidentally burn through a wire carrying AC mains current (such as the soldering iron’s own!) it’ll also blow a fuse, and 3) the tip of the soldering iron is at ground potential, so any static electricity around your circuit will be conducted safely away from the sensitive electronic parts.

Then there’s the base, where the hot soldering iron is kept between soldering jobs. Note the metal spring / holder, which keeps the soldering iron itself hot, but tries to stay reasonably cool to the touch on the outside. You’re not going to get burned touching it – just a quick reminder that there’s something very hot inside!

And then there’s this thing:

That’s actually a synthetic sponge. It’ll probably make more sense once you soak it in water:

Part of the skill needed to solder stuff together, is to keep a good clean soldering tip. Solder tends to oxidize, so over time you’ll get in the habit of wiping that scorching hot tip clean and applying fresh solder. The wet sponge is one way to clean that tip – it’ll sizzle and scorch a bit, but it works fine.

So much for the venerable soldering iron. Get one, don’t go overboard on features (a small size is great, but it’ll cost ya’). Far more important is to get a decent one and practice, practice, practice! – I won’t go into the actual soldering skills here, there are plenty of articles, books, and weblogs on internet, so my suggestion would be to just google around a bit. And then: practice – there’s no magic pill around that.

Next week, I’ll go into one of the best other investments you can make – apart from the soldering iron.

My existing lab power supply delivers 30V @ 3A, which is more than enough for normal use, but it uses linear fine + coarse potentiometers, which are in fact not optimal for really fine adjustments. I’ve been using it a lot, and I really have been wanting something more convenient for quite some time.

So I decided to get a second and more high-end unit, the GW-Instek GPD-2303S:

It even comes with a “calibration certificate”, FWIW:

There are many lab power supplies out there, and I intend to come up with a really good option for low end use in the context of the Thursday Toolkit series, but I’ve got enough future projects piled up here to justify this instrument for JeeLabs. Everything other than the ultra-low power experiments will benefit from this.

BTW, if you’re looking for a DIY design which is coming along very nicely, check out the EEVblog episode list, where Dave Jones has over half a dozen fascinating videos about how he is designing a really nice Arduino-compatible power supply, with all the bells and whistles you might be after: finely programmable voltage and current range, an LCD display, rotary switches for adjustment, etc. Here’s the first one in the series.

The GPD-2303S delivers 2x 30V @ 3A, i.e. up to 180W of controlled DC power. There’s a 3-channel unit with extra 2.5/3.3/5V output, even a 4-channel unit, but I’ve got enough supplies here now to cover such needs.

Note that lab power supplies are designed to “float” w.r.t. ground. The reason for this is described in my two weblog posts here and here. So you can hook them into your setup in any way you like. Even doing some totally crazy stuff like adding a 50V DC component to AC mains would be possible…

Anyway. The nice thing about this supply (even though its shape is a bit deep for my workspace), is that it includes two independent supplies which can be used in series (double the voltage) or in parallel (double the current) to get 0..60V or 0..6A capability, and that both voltage and current can be controlled and measured very accurately (not quite down to the 1 mV/mA levels as they claim, but close).

This power supply is not a really high-end one, though (which would cost even more), since there is no remote sensing, for example. So small losses over the cabling are not compensated for. I’m not too worried, because with large currents I’m usually not really concerned about 10 mV error.

More important is that it’s a linear power supply with only 1..2 mV ripple, and that the current limit can accurately be set to very low levels. By setting it to 50 mA, say, you can avoid most damage when hooking up things the wrong way – as so often happens while messing around with circuits.

Also very nice is that this unit is programmable – meaning that you can control it fully via USB. That opens the door to all sorts of stress and limits testing, i.e. plotting the effects of a slow voltage ramp on a circuit, for example.

Sooo… with this new addition to JeeLabs, I hope to stay out of Mr. Murphy’s path a bit more!

I’m going to look at two different units, the older/smaller/cheaper PAR-1000 supporting 16 different addresses, and the newer YC-3500 supporting up to 256 different addresses and switching up to 3500W:

Here’s the PAR-1000, once opened (you need a TX9 Torx screwdriver for both units):

There’s a .22 µF X2 cap as transformer-less power supply, in series with a 100 Ω resistor (hidden in black heat shrink tubing, bottom right, next to it). According to this calculator, you can get up to 12.2 mA out of that, when using a bridge rectifier (which is under the cap, using discrete 1N4007 diodes).

The measured power consumption is 0.58 W. Note that due to the way these transformer-less power supplies work, this power is always consumed, whether the relay is turned on or not.

There’s an interesting post-production “mod” in this unit, on the relay, i.e. top middle in the above image. After removing the tiewrap and glue, this interesting part emerges – in series with AC mains:

I’m guessing some sort of overheating protection for the relay, a PTC resistor?

Here’s the copper-side of the PAR-1000’s PCB, with what looks like lots of solder flux residue:

And here’s the YC-3500, in a slightly larger enclosure and using a relay which can switch up to 16A:

Same 100 Ω resistor but beefier 0.33 µF X2 cap, bringing the maximum current to 18.2 mA. Measured power consumption is 0.81 W – what a waste for an always-on device which is merely switching another device!

Here’s the underside of the YC-3500’s PCB:

Both single-sided non-epoxy PCB’s have SMD’s on one side and through-hole parts on the other, but the amount of solder on the SMD side suggests to me that everything has either been soldered on by hand or glued on and wave-soldered. The extra solder on the left increases the PCB’s current carrying capacity, BTW.

These 433 MHz units respond to simple packets using the On-Off-Keying (OOK) protocol. There’s no way to control them directly, other than via RF – and even if there were, there would be no way for a home automation system to know their state since these units are receive-only. The relay is off after power loss. There’s an LED to indicate the actual on/off state.
The choice of 24V relays is wise – they need much less current than 5V or 12V ones.

Note the 433 MHz antenna – a single loop of copper wire in one case, and a loop plus coil in the other!

Tracking time is as old as… well, time itself really.
FWIW, I stopped wearing a wristwatch about a year ago. When traveling, I often don’t carry a convenient way to tell the time. I like it that way because it pushes me to leave a bit earlier and enjoy the journey a bit more, instead of stressing out to reach some location on earth at some particular point in time. “Onthaasten” as the Dutch say (“un-hurry”).

That wristwatch was one of the most beautiful time-pieces I ever owned, and darn accurate – less than a minute off per year. More than accurate enough for day-to-day use without ever adjusting it (except for DST).

Still, time is everything. Cell phones make very efficient use of bandwidth (and energy) by using time-division multiple access (TDMA), i.e. taking up specific time slots to get a transmitter to talk to the receiver it wants to reach, without collisions. It’s a common technique in many advanced networks, not just with cell phones.

TDMA requires all parties to be aware of time. No wonder that cell-phone towers need the Rubidium clock I described in the past two days. If your timing is off, you end up jamming others, and to avoid that the system then needs to introduce wider gaps around each slot. More gaps = more time & energy wasted, as each unit has to wait longer in receive mode to be certain it picks up the entire packet, and more gaps = more unused bandwidth.

For good timing, you need to have every node in sync within a millisecond or so. Perfect timing means a receiver can turn on exactly when the transmitter starts, and switch off right after the end. But the need for exact time is bad news for the simplest ultra-low power nodes, which tend to use an RC-controlled watchdog timer for the sleep modes. On an ATmega, watchdog accuracy is only about 10% in the worst case:

Ok, time to get into some terminology…

one percent (%) is 1 per 100, of course

one part per million (ppm) is 0.0001 percent

one part per billion (ppb) is 0.0000001 percent

one part per trillion (ppt) is 0.0000000001 percent

It’s easy to make mistakes with so many zero’s, so let’s approach it from another angle: a year has about 31.5 million seconds, so let’s specify time accuracy in the amount of error over a year. And let’s not fuss over 50%, I’ll round things up or down a bit for convenience.

My trusty old Seiko Lasalle wristwatch:

estimated 1 min/year, i.e. an astoundingly good 2 ppm

ATmega watchdog accuracy:

worst case: 10 % = can be over a month per year off

if supply is 3.3V ± 0.1V and temp is 25°C ± 10°C: 1 % = under 4 days per year

After yesterday’s intro of my “get your own atomic clock”, which is really just doodling, here’s the next step:

The clock, and the PCB panel it came attached to, has been placed in an all-plastic enclosure along with a little 15V @ 1.7A switching power supply. This thing needs quite a bit of power and actually gets quite hot. Nevertheless, I expect that placing it inside this relatively small plastic enclosure will not be a problem because much of the heat seems to be generated simply to keep the Rubidium “physics package” inside at a certain fixed temperature. For that same reason, I suspect that the heat sink on which this clock is mounted is not so much meant to draw heat away, but to maintain a stable temperature and improve stability.

To get an idea: 10 to the power -11 frequency stability is less than 0.3 milliseconds per year error!

This particular unit (they are not all identical, even when called “FE-5680A”) also needs a 5V logic supply.

I haven’t yet decided how to bring out various signals, so I’ll hook up the 50Ω BNC connector on the back first and wait with the rest. Also needed: a LED power-on light, LED indicators for the “output valid” and “1 pulse-per-second” signals (via a one-shot to extend the 1 µs second pulse), and a 7805 regulator. Here’s the front – so far:

I don’t intend to keep this energy-drain running at all times, but it’ll be there at the flick of a switch to generate a stable 10 MHz signal when needed. One of the things you can do with it is calibrate other clocks, and compare their accuracy + drift over time and temperature.

Geeky stuff. For a lot more info about precise time and frequency tracking, see the Time Nuts web site.

Tomorrow, I’ll describe some of the trade-offs w.r.t. time for JeeNodes and wireless sensors.

Triggered by a video on EEVblog of a Rubidium frequency standard and its teardown, I decided to get one myself. These things are available from eBay for around €40 these days, as recalls from cellphone towers, apparently:

It’s about the size of a hard drive, it’s completely closed, and there really is nothing to it – connect power, wait a few minutes for things to stabilize, and out comes a 10 MHz signal – a perfect black box:

There’s a capacitor in the output circuit, so the resulting signal is AC-coupled and hence centered around 0V and 1.5 V peak-to-peak.
There’s also a 1 pulse-per-second (PPS) output signal, with a 1 µs pulse (at first I thought it didn’t work, but the pulse is really there).

The big deal about such a Rubidium-based atomic clock is its accuracy. More on this tomorrow…

The JeeNode Micro is based on an ATtiny84, which has quite a bit less hardware functionality built-in than an ATmega. There is some rudimentary byte-shifting hardware for sending or receiving a serial bit stream, but that’s already assigned to the SPI-style RFM12B interface.

So how about a serial port, for debugging? Even just serial out would be a big help, after all.

Luckily, the developers of the Arduino-Tiny library have thought of this, and have implemented a software solution. Better still, it’s done in an almost completely compatible way.

Here’s an example test sketch:

Look familiar? Of course it does: it’s exactly the same code as for a standard ATmega sketch!

The one thing you have to keep in mind, is that only a few baud rates are supported:

9600, 38400, and 115200 baud

The latter is unlikely to work when running on the internal 8 MHz clock, though. It’s all done in software. For an ATtiny84 running at 8 MHz, the serial output appears on PB0. This is pin 2 on the chip and pin 10 on the 10-pin header of the JNµ, marked “IOX”.

The code for this software-based serial port is fairly tricky, using embedded assembly code and C++ templates to create just the right timing loops and toggle just the right pin.

Note also that since this is all done in software, interrupts cannot occur while each byte is being sent, and sending eats up all the CPU time – the code only resumes after the print() and println() calls once all data has been sent.

But apart from these details, you get an excellent debugging facility – even on an ATtiny!

Welcome to a second new initiative on this weblog: a weekly series about tools, i.e. the stuff you can use to design and create stuff, in the context of Physical Computing, that is. Again, you can bookmark this Toolkit link to find back all the related posts on this weblog, now and later.

People regularly ask about what to get, how to get started, and sometimes I see comments indicating that maybe a few basic extra investments might help understand and fix a problem much quicker.

Tools can be anything: the soldering iron you use, various electronics “lab instruments”, but also the software you use, and even the computer setup you work with. There is no “best” answer. It’s all matter of goals, interest levels, amount of involvement, and of course budget.

What I’d like to do is start off this series from scratch. I vividly remember the time when I re-booted my interest in electronics a few years ago and started JeeLabs to get into Physical Computing. It was very confusing. Do I get the best tools money can buy? Sure, dream on, but if going broke is not an option, what do I get first? When should I buy specific items? Which items are risk-free? Is really everything required? What’s “everything”, anyway? What would be the absolute minimum? Is this the start of never-ending upgrades?

The good news is: you can start having immense fun, and learn, and build stuff for less than the price of an Xbox. To draw on a theme from Alice in Wonderland: you can pick the red pill or the blue pill, it’s all up to you. The red pill is: watch videos, play games, surf and consume, follow the pack, compare yourself (and keep up) with others. The blue pill is: launch yourself into a new adventure, find out what so many explorative minds before you have invented, discover the gift of boundless learning, and start contributing to change the path of the future – your own, of your friends, of your community, or maybe even of your whole world. It’s all possible, these journeys are totally real (and indeed also non-virtual). Today. Now!

The even better news is, that these make incredibly nice gifts (note that gifts are not tied to a particular time of year – the best time to give IMO, is when you feel like it and can turn it into a genuine act of generosity).

Thinking about how to start off this series (which, incidentally, will be open for guest writers, so feel free to suggest topics or contribute with posts), I decided to take on the role of someone who really wants to dive into Physical Computing and has to start from scratch: knowing nothing, having nothing, eager to learn, willing to buy what’s needed – or indeed, having received some sort of starter set as a gift.

I came up with a list of items: some tools, and a fun kit to build, which can catapult you into this world of technical invention and creation. It’s meant as a suggestion – no more. Whatever works for you, ok?

Next question was – how to make this meaningful, i.e. how can people get hold of this stuff, if they simply want to get started? I decided to select appropriate items from Conrad, a mail-order shop with outlets all over Europe, which has been in the business of supplying all sorts of electronics and hobby products for many decades. You’ll find cheaper stuff in China, and you may be served better by a local company you already know, but if you really start from scratch, Conrad is a fine mail-order source of hobby-oriented products for the Europe region. And although I don’t know them as well, I suspect that Jameco has a similar audience in the US.

I do not have even the slightest affiliation with Conrad – I just order from them once in a while (and know from experience that returns and cancellations are handled in a courteous and responsive manner). Their website is not the fastest or the most convenient, but hey, it works.

Here is a list of what I found and will be discussing in the upcoming installments. Unfortunately, it appears that these item numbers are not identical across different countries – these links are to Conrad’s Dutch site:

Cost so far: € 86.04, including VAT and free shipping (Dutch prices, other countries should be similar).

That leaves plenty of spare cash in our sub-Xbox budget to buy one more thing: a delightful robotic kit (item 191451) – the same as used for the TwitLEDs project. Total expenses: € 146.03 (over 40% of which is that robot).

The coming weekly posts are going to describe these items in detail, and explain why less is too little and more is not essential. Feel free to pick alternatives, but don’t omit too many of these items. Even that robot (or some starter project) is essential. Walk first, then run. But as you’ll see, even walking is fun!

My reasoning for this approach is as follows: when starting out, you need enough to get going, to be able to really learn and get used to everything, and to build up the skills which will allow you to step up to more advanced tools – but only then, and only if you decide that you want to take it further!

Nobody in their right mind would start learning to play the violin on a Stradivarius. Well.. I have to admit that my well-documented recent oscilloscope acquisition sure feels like a “Strad”. And I’m glad I didn’t get it any sooner or skip the Rigol trial, because I would probably have had no idea how to make use of it otherwise.

So this series will be about picking tools, making the very most of them, and focusing on the world beyond.

It’s not the tools that matter. It’s what they enable. And it’s for everyone who’s interested, from age 7 to 77.

Update – Although this will slightly exceed the total budget, I recommend also getting a large set of resistors, such as Conrad’s 418714. I’ll go into this in one of the upcoming Toolkit posts.

The current version only knows about a few boot loaders so far, but it’s table-driven and can quickly be extended.

So now that it exists, let’s use the power of crowdsourcing and make it really useful, eh?

If you’ve got an ATmega328-based board with a bootloader on it, upload this sketch to find out what it reports. If it says “UNKNOWN” and you happen to know exactly what boot loader is present, please let me know what output you get (i.e. the two CRC values) and the name/type of the boot loader, and I’ll add it to the table.
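The table-driven lookup at the heart of the sketch boils down to something like this – shown here as a JavaScript sketch rather than the actual Arduino code, and with made-up placeholder CRC values, not real entries from the table:

```javascript
// Table-driven boot loader identification: match the two CRC values
// computed over the boot area against known pairs. The CRC numbers below
// are placeholders for illustration - NOT the real table entries.
const table = [
  [0x1234, 0xabcd, 'OptiBoot (placeholder entry)'],
  [0x5678, 0xdcba, 'Duemilanove (placeholder entry)'],
];

function identify(crc1, crc2) {
  for (const [a, b, name] of table)
    if (a === crc1 && b === crc2) return name;
  return 'UNKNOWN'; // please report the CRC pair so it can be added
}

console.log(identify(0x1234, 0xabcd)); // 'OptiBoot (placeholder entry)'
console.log(identify(0, 0));           // 'UNKNOWN'
```

Extending it is just a matter of appending one line to the table – which is exactly why crowdsourcing new entries works so well here.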

I’ll update the code for each new boot loader. The sketch is maintained as a gist on GitHub, so please be sure to get the latest version before you try this. Your boot loader might already be in there!

Note that this is not limited to JeeNodes or RBBBs. Anything with an ATmega328 that will run this Arduino sketch can be included.

Welcome to a new initiative on this weblog: a weekly series about taking something “interesting” apart and peeking under the hood. I’m calling it the Tuesday Teardown series, and since they’ll all be tagged “Teardown”, that link you see will bring up all posts, accumulating as we walk down this path.

The idea is to look at some neat existing technology and find out how things were engineered, which is after all often a highly creative process, reflecting the outcome of a lot of problem-solving and deep insight about the design and production of all sorts of products. Since this weblog is all about creativity, technology, and exploration, it seemed like an obvious fit to look at how “stuff” was made.

This series of posts is also a departure in that I’ll be passing the microphone to guests once in a while. There is plenty of technology – both excellent and awful – to be able to keep this weekly topic alive for a long time… if you have suggestions, would like to contribute a complete story, or simply want me to translate or do part of the writing for you – please get in touch!

To start off, here’s a little dive into an amazing piece of engineering: a vintage-2005 Apple Power Mac G5 (2x 2.5 GHz PowerPC, each dual-core), which a friend and I recently took apart, after it had suffered a catastrophic breakdown – as you’ll see.

Here’s the shiny new Power Mac, as presented in the marketing brochures (it’s about 50x50x20 cm):

The interesting bit is that at the time, these CPU’s were hitting the limits of personal computer cooling capabilities, yet Apple wanted to really keep noise levels down. As a result, an elaborate set of cooling zones was created, each with quiet cooling fans operating independently and adapting to demands.

I wasn’t really interested in the top part (drive bays and expansion slots), or the middle part (motherboard and memory expansion). I wanted to see the CPU cooling solution:

This is an oblique top view of the cooling unit, sitting on top of the two CPU boards – which are separate from the big motherboard (no doubt easier to service and upgrade this way). The whole unit looks and behaves like a mini car radiator, and indeed, it uses what seems to be the same sort of thick blue-ish liquid coolant (glycol) as you’d put in your car (or your fridge, as cooling blocks).

The whole Power Mac can draw over half a kilowatt, and no doubt quite a bit of that goes to these CPU’s when maxed out. Since all of it ends up as heat, this really is an impressive feat of engineering.

Trouble is… after a few years, things tended to fail. In a pretty ugly way, in this case:

Massive leakage. Taking the board with it, to the point where the solder joints got corroded:

Interesting detail – look at the immense number of capacitors on there. Here’s the other side:

Oh, and this isn’t a run-of-the-mill double-layer PCB either – check it out:

Even 7 years later, “awesome” only barely covers the level of engineering that must have gone into this.

PS. I also extracted the power supply, rated 600W, to see whether that could be re-used at JeeLabs somehow. But the PSU didn’t really like me – my first attempt at powering it up beyond the default standby state produced fireworks inside and a smelly puff of smoke. It probably needed a certain load to function properly. Oh well.

I ran out of USB BUB interface boards in the shop the other day (all my fault, not paying attention), so these last few days a slightly different version is being sent out. This is an earlier, but nearly equivalent, design by Lennart Herlaar – and as it so happens, I had a bunch of them lying around – a perfect substitute until the BUB’s by Modern Device are back in stock next week.

As with the BUB, you need to solder on the 6-pin FTDI connector for use with JeeNodes and RBBBs:

The “settings” for this board are to pass the 5V supply voltage from the USB connector (solder jumper, already installed), and to set the logic levels jumper to 3.3V, as required by the JeeNode (or 5V for RBBB use). It’s not that critical, really.

Sooo… slightly different unit and shape, but does the same job as before, i.e. connecting up to USB for power, as well as serial-over-USB uploading and debugging.

My PC has been updated. I left it unattended for a month, and now I’m powering it up again. It’s got a new motherboard, a new display, and a new OS revision. It’s quiet, because it’s all-SSD now, and it’s actually a bit slower than the previous one.

The above paragraph is a mix of reality and fiction, BTW. Because I’m talking about two things at once – the Mac I work on, and… my brain. Both have changed :)

The past month has been extremely chaotic for me. I’ve been trying to figure out what I really want to do, and how to make it happen. The outcome surprised me: I absolutely want to keep doing what I’ve been doing these past few years, with JeeLabs. So the good news, if you’ve been following along, is that I will. But there will be changes, because the intensity of it all is not sustainable for me, not at the previous energy level anyway. I will spread out stories over more weblog posts – thus also making it easier for you to keep up and follow along.

In this day and age of instant gratification, mass consumption, and immediate mail-order fulfillment, I’m going to go against the grain and buck the trend – by reducing the frequency of JeeLabs shop fulfillments in the short term, dealing with shop-related tasks less often. The shop will become even more of a secondary activity here, but fulfillment improvements are in the pipeline. The product range will grow further, but the pace and scale of commerce most likely not. It gives me pleasure to send out packages and to stay in contact with the people who are going to use these products. The shop isn’t about volume and turnover, but about allowing others to reproduce and extend some projects I’m coming up with and working on. Because making stuff is fun.

My passion, my energy, and my time will remain focused on the weblog, or rather on the projects that drive it all. Whether the frequency can stay as is, time will tell. I hope it can – with occasional breaks in the year – because the daily cycle is great fun, keeps me focused, and is clearly being appreciated.

As Seth Godin describes in his manifesto, the schooling system has taken our dreams away. I’ve been lucky to keep (or rather, rediscover) mine, and want to help as much as I can to make sure others will be able to latch onto their dreams as well, with curiosity and creativity as the driving forces – in the context of Physical Computing, that is.

The internet, at least the part I care about, is evolving into an extraordinary global learning powerhouse. It started with Wikipedia and led to the inspiring TED presentations, MIT’s Open Courseware, and the Khan Academy (an absolutely astounding initiative which is turning the way education works on its head). There is no excuse anymore for not knowing what you’d like to know, it’s all there.

And as I’m finding out, there is no excuse anymore for not sharing what you know, either.

History is about to repeat itself…

With this 954th post, I have an important announcement to make: I’m slamming on the brakes and taking a one month break away from this weblog.

It’s a bit radical and unexpected, but there is no way around it. This weblog is “driven by passion”, as you will probably know, and the crazy bit is that there’s just too much going on here to keep things going smoothly. I’ve been running behind on shop fulfillment again, and I’ve been running behind even more on answering emails and with helping out on the forum. First thing I hope this will do, is to let me catch up and regain my footing.

In sharp contrast to last year’s emergency stop, this time it’s not so much lack of ideas or lack of energy, but lack of clear focus and direction. The stories I would love to tell need more time – diving into various aspects of physical computing in considerably more depth and detail than what’s been happening on the weblog lately. And it’s not happening because the daily bite-sized cycle is chopping up my attention (even at times when I have enough weblog posts queued up for many days on end – go figure!).
And maybe it’s also a hill climbing issue.

I’ve updated the alphabetical and chronological indexes to all the posts on this weblog, to give you something to go through in the coming weeks. It’s a stopgap measure, but it’ll just have to do – there should be enough there to pique your interest and keep you busy in the month ahead.

The difference with last year, is that I’m putting a precise cap on the duration of this “outage”: 30 days from now. That’s when this weblog will resume, probably with some announcements and adjustments to its style and format.

Talk to you one month from now!

PS. If you want to learn about electricity, then there are numerous resources on the web. Let me single out one: a 50-minute video by Walter Lewin at MIT about batteries and power (lecture 10 on this page). You can get a deep understanding of what a battery is, why its internal resistance matters, what power is, how heat comes out, what shorting a battery does, and even sparks. It’s a fantastic presentation, and the video was just picked at random!

To summarize: a straight line going through (0,0) represents a purely resistive effect. The slope of the line is related to actual resistance. With resistors, once you know the voltage, you know the current (and vice versa).

Here’s a diode, i.e. a component with very specific properties (this shows why it’s called a semiconductor!):

With negative voltages, it just blocks (horizontal line, infinite resistance). With positive voltages it’s essentially a short circuit (vertical line, almost zero resistance). Note the “knee”: a diode starts conducting at about 0.7V.

Here’s a blue LED:

Very much like a diode (the “D” in LED stands for diode, after all). Except that the knee is higher, at around 3V.

Next up, a couple of zener diodes. Note first of all that these were connected in reverse compared to the diode and LED shown earlier, so the graphs are rotated by 180° compared to those. In the forward direction, a zener conducts like a regular diode, at around 0.7V. The difference is that when it’s blocking, it will at some point “avalanche” and start conducting anyway. This very specific voltage is what makes zeners special. But note how that avalanche knee is round and imprecise for low-voltage types: zeners under 6V or so are not very accurate for regulating a voltage – but 9.1V is fine.

Neat, huh? Each type of component has its distinctive analog signature when viewed on a CT!

So far, you’d be forgiven for concluding that a Component Tester is simply a hardware function plotter. With the horizontal axis being the voltage applied, and the vertical axis being the current flowing through the component.

Ah, but wait… here’s a 1 µF capacitor, showing that capacitors are fundamentally different beasts:

This is where things start to go crazy. No current at maximum and minimum voltage? Lots of current at zero volts? Positive and negative current at that zero-volt position? What’s going on here?

The thing to keep in mind is that this is not simply a function of voltage vs current. We’re applying a sine wave – a voltage which very uniformly and smoothly varies between -10V and +10V. Think of a swinging pendulum, oscillating over and over again in a constant pattern.

Note also that the component is being driven through a 1 kΩ resistor, limiting the maximum current through it. So we’re looking at the capacitor while it’s in fact part of a circuit – i.e. a 1 kΩ resistor in series with our 1 µF cap.

Let’s start at the right. The capacitor is fully charged to +10V, and our voltage is starting to decrease. When the voltage is +9V, the cap is still +10V, so it starts sending out charge in the form of current to try and regain the balance. So a positive current flows out when the voltage is at +9V. If that voltage stayed at +9V, it would soon stop, since the charge drops, and the capacitor reaches +9V equilibrium again. But as this happens, the voltage keeps on dropping. In fact, it drops faster and faster, so more and more current leaks out while catching up.

At 0V, the rate of descent (dare I say slope or derivative?) is maximal, as you can see when you look at a sine wave. So at that point, the capacitor is leaking charge as fast as it can – at the rate of 4 mA in this case.

The voltage doesn’t stop dropping, though. It keeps on dropping to -10V, although it’s slowing down again. So the current still flows out of the cap, but slower and slower. At -10V, the voltage is no longer dropping at all, and the charge will have caught up – no more current, i.e. 0 mA.

Now the roller coaster ride repeats the other way around. The capacitor has -10V charge (lack of charge, if you wish to look at it that way), and voltage is about to start rising again. This time, charge has to be fed into the cap to try and equalize voltages, and so the current is now negative.

And sure enough, the lower negative side of the circle goes through the same changes. Until we reach +10V again.

So what you’re looking at is not a function, but the path of a point in space, racing around a circular path (ok… oval, since you insist). That point in space leaves a trail on the screen, and that’s the resulting image.
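For those who like to see this in code, here’s a minimal numeric sketch of that racing point, assuming an ideal 1 µF cap driven directly by a ±10 V, 50 Hz sine wave. The CT’s 1 kΩ series resistor is ignored to keep the idea visible, which is why the peak current comes out near 3 mA rather than matching the screen exactly:

```javascript
// Trace the (voltage, current) point for an ideal capacitor driven by a
// sine wave. The current follows the slope of the voltage: i = C * dv/dt.
// Assumed values: 1 uF, +/-10 V peak, 50 Hz (series resistor ignored).
const C = 1e-6;             // farads
const VPEAK = 10;           // volts
const W = 2 * Math.PI * 50; // angular frequency, rad/s

function point(t) {
  const v = VPEAK * Math.sin(W * t);         // applied voltage
  const i = C * VPEAK * W * Math.cos(W * t); // i = C * dv/dt
  return { v, i };
}

// at the zero crossing (t = 0): no voltage, maximum current (~3.1 mA)
// at the voltage peak (t = 5 ms): full voltage, no current
console.log(point(0));
console.log(point(0.005));
```

Sample `point(t)` over one full 20 ms cycle, plot i against v, and you get exactly the oval on the screen.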

Phew! Still there?

The reason this happens is that a capacitor has state (or memory, if you like). It will respond to an external voltage differently, depending on the amount of charge it currently holds. Applying +5V to an empty cap will generate a different current than applying +5V to a capacitor which is currently charged up to +10V, or whatever. Current will start to flow to balance things out, but this takes time.

Very loosely speaking, you could say that capacitors “live in the time domain”. Unlike resistors – which just resist the same way under any circumstance.

Here’s the trace of an inductor (the secondary coil of a small transformer in this case):

Hey, it looks like inductors also have state! And yes indeed, they do. Capacitors and inductors are very similar, electrically. They both “live in the time domain”, although through very different mechanisms.

The state of a capacitor is its current charge level, i.e. the “amount of electricity” inside it at any particular time.

The state of an inductor is the magnetic field it has built up. When you send an electric current through a coil, that coil becomes an electro-magnet and starts generating a magnetic field around it. When the current stops, the magnetic field wants to keep going. But it can’t, so it starts fading – and while it does, it induces an electric current in the same direction, trying to sustain the flow. This effect (plus a little resistance) is what causes the tilted shape shown above.

As you can see, it has the same weird effect: no current at maximum or minimum voltage, and either positive or negative current at zero volts.
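The inductor version of the earlier capacitor sketch looks like this – voltage now follows the slope of the current, v = L · di/dt. All values here are assumptions for illustration (an idealized 100 mH coil carrying a 10 mA peak sine-wave current at 50 Hz):

```javascript
// The inductor is the dual of the capacitor: instead of i = C * dv/dt,
// the roles swap and the coil develops a voltage v = L * di/dt.
// Assumed values: 100 mH, 10 mA peak sine-wave current, 50 Hz.
const L = 0.1;              // henries
const IPEAK = 0.01;         // amps
const W = 2 * Math.PI * 50; // rad/s

function coil(t) {
  const i = IPEAK * Math.sin(W * t);         // current through the coil
  const v = L * IPEAK * W * Math.cos(W * t); // v = L * di/dt
  return { i, v };
}

// same 90-degree shift as the capacitor, with i and v trading places:
console.log(coil(0));     // zero current, maximum voltage (~0.31 V)
console.log(coil(0.005)); // peak current (10 mA), zero voltage
```

Plotted as v against i, this traces the same sort of oval – just tilted by whatever series resistance the real coil adds.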

The point of these little demos was to show how current and voltage stop being linearly inter-related with caps and inductors. Because they mess with time. The charge which came in today could come out tomorrow, for example.

With constant voltages, capacitors and inductors are boring. But when their time effects are pitted against voltages which change over time, then nifty things can happen. It’s probably fair to say that the discovery of DC (direct current) brought electricity to the world, whereas AC (alternating current) brought electronics to the world.

For measuring DC, you can get by with a voltmeter. For AC, you need a voltmeter-over-time, a.k.a. an oscilloscope.

I hope this gives you a feel for what’s going on in electronic circuits. The behaviors shown here are universal, i.e. caps will behave like this every time, no matter what else sits around them, and getting an intuition about how these components react to voltages is a fantastic way to figure out all sorts of more complex circuits.

There’s tons more to explore about signals and circuits: filters, phase effects, crazy stuff called “complex numbers” (values with a “real” and an “imaginary” part, go figure!), switching perspectives from the “time domain” to the “frequency domain”, and Fourier transforms. None of this matters, if all you want is to turn on a lamp or work with digital signals. But if you’ve ever wondered how electronic stuff really works: trust me… it’s fascinating.

Is anyone interested in any of this? I’d love to write a series about it one day, where intuition comes first, insight a close second, and where all the mathematics involved will become totally obvious (seriously!).

PS. Here’s my intuitive summary of what R’s, C’s, and L’s do (and what makes each of them unique):

Resistors turn electrical energy into heat (no way back with a resistor)

Capacitors store energy as electric charge, and can give it back later

Inductors store energy as a magnetic field, and can give it back later

Hameg scopes have often included a “Component Tester” (CT) and mine’s no exception. It’s a really nifty way to identify a component and understand its basic characteristics. It requires a sine wave signal and an oscilloscope:

Don’t fret too long about the above circuit (copied from this PDF, which I found via Google). It’s just to show that setting up something like this is very easy – but you do need an oscilloscope with X-Y capability.

The basic idea is to apply a sine wave of say 10 VAC @ 50 Hz to the part you want to identify, and then to display the voltage across that component versus the current through it.

My scope has the equivalent of the above simple CT circuit built in:

Two pins, on which a 50 Hz voltage is applied which varies between +10V and -10V in the form of a pure sine wave. For some reason, the scope won’t let me take screen dumps to USB in this mode, so I’ll use camera shots.

Here’s what you see with nothing connected (note the full scale: ±10 V on X and ±10 mA on Y):

The horizontal axis shows the applied voltage, and as you can see, no current is flowing. Because air insulates!

Let’s short the two pins with a copper wire (I’ve reduced the image scale to reduce this weblog post’s length):

Of course: no matter what voltage we try to put between the pins, the wire will force it to 0V, and will simply pass -10 .. +10 mA of current. As you can see in the schematic, there’s a 1 kΩ resistor in series to limit the current.

These two images of open vs shorted set the stage. Now let me insert a couple of different components, so you can see how they behave when subjected to this 10 Vpp sine wave. First, let’s insert a 1 kΩ resistor:

Make sense? Now have a look at a 10 kΩ resistor:

So what we have so far is resistance varying from zero ohms (shorted) to infinite ohms (open), with two values in between. It all ends up as a straight line, with the slope varying from horizontal to vertical. That’s it: resistance!

If you want an explanation: this is Ohm’s law, visualized. Voltage and current are proportional, i.e. V = I x R.
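That little bit of arithmetic can be sketched in a few lines of JavaScript – the 1 kΩ value is the CT’s internal series resistor mentioned above, everything else is idealized:

```javascript
// Ohm's law on the Component Tester: current through the device under
// test, with the CT's internal 1 kOhm resistor in series.
const R_SERIES = 1000; // ohms, per the CT circuit described above

function currentThrough(rDut, volts) {
  return volts / (R_SERIES + rDut); // I = V / R, applied to the series pair
}

// at the +10 V peak of the test sine wave:
console.log(currentThrough(0, 10));     // shorted pins: 0.01 A, i.e. 10 mA
console.log(currentThrough(1000, 10));  // 1 kOhm: 0.005 A, i.e. 5 mA
console.log(currentThrough(10000, 10)); // 10 kOhm: ~0.0009 A, i.e. ~0.9 mA
```

Note how the shorted case lands exactly on the ±10 mA full scale seen in the screen shots – the series resistor is what sets that limit.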

So much for the basic stuff. Tomorrow, I’ll show you a couple of considerably more interesting components.

Yesterday, I tried to get to grips with how a capacitive power supply works. Real samples are a bit messy, though.

So let me try something else this time, and simplify a bit:

The setup is very similar, but I’m leaving out the zener diode and the rest, and more importantly, I’m going to feed a real 50 Hz sine wave signal into this circuit, using my little sine wave generator.

Here again, because scopes need to measure with a common ground, I’m placing that common ground in between the resistor and the cap, and I’m using the scope’s internal “invert” feature to treat this as if the channel 1 probe were connected the other way around. Here’s what we get with this new setup:

The vertical scale of the resistor was adjusted to display the same amplitude for the resistor as for the capacitor.

the yellow line is the voltage over the capacitor

the blue line is the voltage over the resistor

the red line is the sum of the yellow and blue lines

The scale of the red line is not quite accurate, but its shape is. So the red line is in essence the input signal.

So what’s going on here?

As you can see, all these signals are 50 Hz sine waves. That’s quite remarkable already. Obviously the red line is a 50 Hz sine wave, since that’s what we’ve been feeding in. But so is the voltage over the capacitor, the voltage over the resistor, and hence also the current through this circuit!

What you see, is a set of sine waves which differ only in phase and in amplitude:

the voltage over the capacitor (yellow) lags the input signal (red): it’s forever trying to catch up

the current through the capacitor, i.e. the voltage over the resistor (blue), is leading in phase

And something else, as we saw yesterday: the current through the capacitor is related directly to the slope of the capacitor’s voltage change (i.e. its derivative). When the yellow line is steepest, the blue line is at its highest.

Let’s throw one more calculation into the mix: power. Power is input voltage (red) times input current (blue):

Looks very sine wave’ish again! There is some amplitude variation, which can probably be attributed to signal asymmetry or amplitude differences between the different sine waves. The phase of this wave is different from the ones already shown, but note that it’s also twice the frequency, i.e. 100 Hz.
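That frequency doubling is no accident – it falls straight out of multiplying two sine waves of the same frequency. Here’s a little check, using normalized stand-ins for the voltage and current traces (the phase shift PHI is an assumed illustrative value, not measured from the scope):

```javascript
// Why the power trace runs at 100 Hz: multiply two 50 Hz sine waves that
// differ only in phase. PHI is an assumed phase lead for illustration.
const W = 2 * Math.PI * 50; // rad/s
const PHI = Math.PI / 3;    // assumed phase lead of the current

function power(t) {
  const v = Math.sin(W * t);       // input voltage (normalized)
  const i = Math.sin(W * t + PHI); // input current, leading in phase
  return v * i;
}

// Product-to-sum identity: sin(a)*sin(a+phi) = cos(phi)/2 - cos(2a+phi)/2
// i.e. a constant (the average, "real" power) plus a 100 Hz ripple.
const t = 0.0031; // an arbitrary instant
const a = W * t;
const identity = Math.cos(PHI) / 2 - Math.cos(2 * a + PHI) / 2;
console.log(Math.abs(power(t) - identity) < 1e-12); // true
```

The constant term, cos(phi)/2, is the real power – and it shrinks as the phase shift grows, which is exactly the point of the next paragraphs.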

Let’s step back for a moment. With a purely resistive circuit (as used in a resistive transformer-less supply), the current and voltage would be in lock step, i.e. sine waves with exactly the same phase, and the power consumption would be maximal (high current at times of high voltage).

With this capacitive setup, currents get “moved around”. That means power consumption will be less than when voltage and current line up in phase (since matching phases yields the largest possible product).

I’ll include one more screenshot, this time using the same vertical scale for all signals (except power):

This puts things more in perspective: the voltage over the capacitor (yellow) is slightly lagging the input signal (red) now. And the input current (blue) is out of phase w.r.t. the input signal (red). So the power consumption (red x blue) is substantially lower than with a resistive circuit: when the current is maximal, the input voltage is only a fraction of its maximum range. IOW, we’re drawing current when it “costs” little.

This is why a capacitively coupled supply is cheap to run: the electricity company charges us for real power (i.e. the average of V x I over time) – in effect, only for what happens inside a pure resistor.

But we’re doing a lot more: we’re taking charge out of the AC mains line on one half of the cycle and pushing it back on the other half. It might seem as if that doesn’t use up energy, but it does: current through a wire causes resistive losses (in the form of heat), and “returning” that energy one half cyc