Archive for January 2013

Ok, so now I have this table with incoming data, updated in real time – Ajax polling is so passé – with unit conversions, proper labeling, and locations associated with each device:

As I’ve said before: neat as a gimmick, but… yawn, who wants to look at this sort of stuff?

Which sort of begs the question: what’s the point of home monitoring?

(Automation is a different story: rules and automation could certainly be convenient)

What’s the point? Flashy / fancy dashboards with clever fonts? Zoomable graphs? Statistics? Knowing which was the top day for the solar panels? Calculating average temperatures? Predicting the utility bill? Collecting bragging rights or brownie points?

Don’t get me wrong: I’ve got a ton of plans for all this stuff (some of them in other directions than what you’d probably expect, but it’s way too early to talk about that).

But for home monitoring… what’s the point? Just because we can?

The only meaningful use I’ve been able to come up with so far is to save on energy (oh wait, that’s now called “reducing the carbon footprint”). And that has already been achieved to a fairly substantial degree, here at JeeLabs. For that, all I really need are a few indicators to see the main energy consumption patterns on a day-to-day basis. Heck, a couple of RGB LEDs might be enough – so who needs all these figures, once you’ve interpreted them, drawn some conclusions, and adjusted your behaviour?

I’ve implemented the following with a new “compile” module in HouseMon – based on Arduino.mk, which is also available for the Raspberry Pi via Debian apt:

In other words, this allows uploading the source code of an Arduino sketch through a web browser, managing these uploaded files from the browser, and compiling + uploading the sketch to a server-side JeeNode / Arduino. All interaction is browser-based…

It’s only a proof of concept, but it opens up some fascinating options. Here’s an example of use:

(note that both avr-gcc + avrdude output and the exit status show up in the browser!)

The processing flow is fairly complex, given the number of different tasks involved, yet the way Node’s asynchronous system calls, SocketStream’s client-server connectivity, and Angular’s two-way webpage bindings work together is absolutely mind-blowing, IMO.

Last month, I described how to hook up a JeeNode with a Blink Plug to Node.js via SocketStream (“SS”) and a USB connection.
Physical Computing with a web interface!

That was before AngularJS (“NG”), using traditional HTML and CoffeeScript code.

With NG and SS, there’s a lot of machinery in place to interface a web browser with a central Node process. I’ve added a “blink-serial” module to HouseMon to explore this approach. Here’s the resulting web page (very basic, but fully dynamic in both directions):

I’m not quite happy yet with the exact details of all this, but these 3 files are all there is to it, and frankly I can’t quite imagine how a bidirectional real-time interface could be made any simpler – given all the pieces involved: serial interface, server, and web browser.

The one thing that can hopefully be improved upon, is the terminology of it all and the actual API conventions. But that really is a matter of throwing lots of different stuff into HouseMon, to see what sticks and what gets messy.

Onwards!

PS – What’s your take on these screenshots? Ok? Or was the white background preferable?

This post won’t be for everyone. In fact, it may well sound like an utterly ridiculous idea…

After yesterday’s post, it occurred to me that maybe some readers will be interested in what these so-called “programmer’s editors” such as vim and emacs are all about.

When developing software, you need a tool to edit text. Not just enter it, nor even correct it, but really edit it. Now perhaps one day we’ll all laugh about this concept of having a bunch of lines with characters on a screen, and working with tools to arrange those characters in files in a very very specific way. But for the past decades up to this very day, that’s how software development happens. Don’t blame me, I’m just stating a fact.

an Integrated Development Environment, such as Visual Studio, Eclipse, Komodo

command-based programmer’s editor – the main ones are really only Vim and Emacs

Everything but the last option is GUI-based, and while all of those tools have lots and lots of “keyboard shortcuts”, these are usually added for the “power users”, with the main idea being that everything should be reachable by clicking on menus, buttons, and text. Mouse-based, in other words.

There’s nothing wrong with using the mouse. But when it comes to text, the fact is that you’re going to end up using the keyboard anyway. Having to switch between the two is awkward. The other drawback of using a mouse for these tasks is that menus can only offer a limited amount of choice – for the full breadth and depth of a command-based interface, you’re going to have to navigate through a fairly large set of clever visual representations on the screen. Navigation means “pointing” in this case. And pointing requires eye-hand coordination.

As I said yesterday, I’m going back to keyboard-only. Within reason, that is. Switching windows and apps can be done through the keyboard, and so can many tasks – even in a web browser. But not all. The trick is to find a balanced work flow that works well… in the long run. Which is – almost by definition – a matter of personal preference.

Anyway. Let’s say you want to find out more about vim…

In a nutshell:

you type commands at it: simple keys, control keys, short sequences of keys

each of them means something, and you have to know them to be able to use them

luckily, a small subset of commands is all it takes for simple (albeit tedious) editing

it does a lot more than edit a file: jump around, see multiple files, start compiles, …

it’s all about monospaced text files, and it scales to huge files without problems

vim (like emacs) is a world in itself, offering hundreds (thousands!) of commands

bonus: a stripped version of vim (or vi) is present on even the tiniest Linux systems

In vim, you’re either in insert mode (typing text), or in visual mode, or in shell-like command mode. Typing the same key means different things in each of these modes!

Make no mistake: vim is huge. Here’s the built-in help for just the CTRL+W command:

Yeah, vim also supports all sorts of color-scheme fiddling. But here’s the good news:

you only need to remember how to enter insert mode (“i” or “a”), how to exit it (ESC), how to enter command mode (“:”), how to save (“:w”+CR), and how to quit (“:q”+CR) – and most importantly: how to get to the help section (“:help”+CR).

a dozen or so letter combinations will get you a long way: just read up on what the letters “a” to “z” do in visual mode, and you’ll be editing for real in no time

there is logic in the way these commands are set up and named, there really is – even a few minutes spent on vim’s extensive built-in introductory help texts are worth it

some advanced features are awesome: such as being able to undo any change even after quits / relaunches, and saving complete session state per project (“:mks”)

If you try this for a few days (and fight that recurring strong urge to go back to the editor you’re so familiar with), you’ll notice that some actions start to become automatic. That’s your spine taking over. This is what it’s really all about: delegating the control of the editor to a part of you which doesn’t need your attention.

When was the last time you had to think about how to get into your car? Or how to drive?

Probably the best advice I ever got from someone about learning vim: if you find yourself doing something over and over again and it feels tedious, then vim will have some useful commands to simplify those tasks. And that is the time to find them and try them out!

Vim (and emacs) is not about good looks. Or clever tricks. It’s about delegation.

There’s a reason why many people stick to a programming language and reuse it whenever possible: it’s hard work to learn another one! So once you’ve learned one well, and in due time really, really well, then you get a big payback in the form of a substantial productivity boost. Because – as I said – it’s really a lot of hard work to become familiar, let alone proficient, with any non-trivial technology – including a programming language or an OS.

Here’s me, a few weeks ago:

I was completely overwhelmed by trying to learn Node.js, AngularJS, SocketStream, CoffeeScript, and more. To the point of not being able to even understand how it all fits together, and knowing that whatever I’d try, I would stumble around in the dark for ages.

I’ve been through the learning curve of three programming languages before. I like to think that I got quite good at each of them, making code “sing” on occasion, and generally being able to spend all my time and energy on the problem, not the tools. But I also know that it took years of tinkering. As Malcolm Gladwell suggests in his book Outliers, it may take some 10,000 hours to become really good at something. Maybe even at anything?

Ten. Thousand. Hours. Of concentrating on the work involved. That sure is a long time!

But here’s the trick: just run, stumble, and learn from each re- / discovery!

That doesn’t mean it’s easy. It’s not even easier than the last time, unfortunately: you can’t get good at learning new stuff – you still have to sweat it through. And there are dead ends. Tons of them. This is the corner I painted myself into a couple of days ago:

Let me explain:

I thought I was getting up to speed with CoffeeScript and the basics of AngularJS

… even started writing some nice code for it, with lots of debug / commit cycles

big portions were still clear as mud to me, but hey… I got some stuff working!

The trouble is: instead of learning things the way all these new tools and technologies were meant to be used, by people far more knowledgeable than me, I started fitting my old mental models on top of what I saw.

The result: a small piece, working, all wrapped into code that made it look nice to me.

That’s a big mistake. Frameworks such as AngularJS and SocketStream always have their own “mindset” (well, that of their designers, obviously) – and trying to ignore it is a sure path to frustration and probably even disaster, in the long term.

Ya can’t learn new things by just mapping everything to your old knowledge!

Yesterday, I realised that what I had written as code was partly just a wrapper around the bits of NG and SS which I started to understand. The result: a road block, sitting in the way of real understanding. My code wasn’t doing much, just re-phrasing it all!

So a few days ago, I tore most of it all down again (I’m talking about HouseMon, of course). It didn’t get the startup sequence right, and it was getting worse all the time.

Guess what? Went through a total, frustratingly disruptive, re-factoring of the code. And it ended up so much simpler and better… it blew me away. This NG + SS stuff is incredible.

Here’s me, after checking in the latest code a few hours ago:

No… no deep understanding yet. There’s no way to grasp it all in such a short time (yes, weeks is a short time) – not for me, anyway – but the fog is lifting. I’m writing less code again. And I’m finding my way around the web more and more. The AngularFun project on GitHub and this post by Brian Ford are making so much more sense now.

I’ve also decided to try and change one more habit: I’m going back to the vim editor, or rather the MacVim version of it. I’ve used TextMate for several years (now v2, and open source), but the fact is that pointing at things sucks. It requires eye-hand coordination, whereas all command-line and keyboard-driven editing activity ends up settling itself in your muscle memory. Editors such as vim and emacs are amazing tools because of that.

Now I don’t know about you, but when I learn new things, the last thing on earth I’d want is to spend my already-strained mental energy on the process instead of the task at hand.

So these days, I’m in parallel learning mode: with head and spine working in tandem :)

(Not planned this way, but still a nice follow-up on yesterday’s Junk USB post…)

While thinking about some minor tweaks for the JeeNode USB board, I wanted to try out a different LiPo charger chip – mostly to reduce costs, given that not everyone using the JN USB would be interested in the LiPo charge capability (it’s also fine as a JeeNode-with-built-in-USB-BUB after all).

So I had a look at the MCP73832T – in fact, Paul Badger and I went ahead and had a new board made with it:

The good news: as a LiPo charger, it works absolutely fine.

The bad news: without an attached LiPo battery, it’s not usable.

It turns out that this chip uses some sort of charge/discharge cycle. This is what happens without LiPo attached:

IOW, it’s delivering 4.2V for a while, and then dropping the voltage to see whether the LiPo itself will fill in the gaps. A pretty clever way to figure out the state of the attached battery, if you ask me.

One way to use the chip without attached LiPo would be to bypass the Vin and Vout pins of the chip, i.e. just disable it altogether via a (solder-) jumper. Drawbacks: 1) you have to remember this, and act accordingly, 2) this means PWR would be 4.2V with LiPo attached, and 5V without, and 3) when bypassed, there would be no over-current protection for the USB port.

Especially that 3rd issue is bad – JeeNodes are about tinkering with stuff, and JN USB’s are about tinkering while attached to a computer USB port. Without over-current protection, tinkering can damage your computer – scary!

There is one more way to solve this, but it’s not very practical: add a big electrolytic cap which sort of takes the place of a battery. I used a 6800 µF (which pulls too much current on power-up, BTW). The result:

A voltage on the PWR pin which carries 3.8 .. 4.2V, with a 10 Hz ripple. Not great, but good enough to make the JeeNode’s internal regulator work. Except that a 6800 µF capacitor is huge and highly impractical, of course!

Sooo… back to the MAX1555 it is. That chip works differently: it senses the charge current and the output voltage.

Note to self: don’t replace chips without testing all essential modes they’ll be used in.

And good bye Mr. Murphy, how considerate of you to drop by once in a while…

It makes sense once you know it … but that’s the whole thing with being a newbie, eh?

That array-with-properties behaviour is actually very useful, because it lets you create collections which can be looped over, while still offering members in an object-like manner. Very Lua-ish. The same can be done with Object.defineProperty, but that’s more involved.
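Since JavaScript arrays are just objects under the hood, this hybrid behaviour is easy to demonstrate – here’s a minimal stand-alone illustration (the variable names are mine, just for the example):

```javascript
// JavaScript arrays are objects, so they can carry named
// properties alongside their indexed elements.
const readings = [17.2, 18.5, 16.9];
readings.location = 'living room';   // a property, not an element

// The indexed part loops like a normal array...
let sum = 0;
for (let i = 0; i < readings.length; ++i)
  sum += readings[i];

// ...while the named property is still there, object-style.
console.log(readings.length);    // 3 - properties don't affect length
console.log(readings.location);  // 'living room'
```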

For the full story on “array-like” objects, see this detailed page. It gets real messy inside – but then again, so does any other language. As long as it’s easy to find answers, I’m happy. And with JavaScript / CoffeeScript, finding answers and suggestions on the web is trivial.

On another note: Redis is working well for storing data, but there is a slight impedance mismatch with JavaScript. Storing nested objects is trivial by using JSON, but then you lose the ability to let Redis do a lot of nifty sorting and extraction. For now, I’m getting good mileage with two hashes per collection: one for key-to-id lookup, one for id-based storage of each object in JSON format. But I’m really using only a tiny part of Redis this way.
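To make the two-hashes-per-collection scheme concrete, here’s a toy in-memory stand-in (with Redis, the two objects below would be two hashes, accessed via HSET / HGET – the class and key names here are purely illustrative):

```javascript
// Toy stand-in for the two-hashes-per-collection scheme:
// one hash maps a lookup key to a numeric id, the other stores
// each object as JSON under that id.
class Collection {
  constructor() {
    this.keyToId = {};   // hash 1: key -> id
    this.byId = {};      // hash 2: id -> JSON blob
    this.nextId = 1;
  }
  put(key, obj) {
    const id = this.keyToId[key] || (this.keyToId[key] = this.nextId++);
    this.byId[id] = JSON.stringify(obj);  // nested objects are trivial
    return id;
  }
  get(key) {
    const id = this.keyToId[key];
    return id ? JSON.parse(this.byId[id]) : undefined;
  }
}

const c = new Collection();
c.put('meter-1', { temp: 21.5, room: 'study' });
console.log(c.get('meter-1').temp);  // 21.5
```

The trade-off mentioned above shows up immediately: once the object is a JSON blob, Redis can no longer sort or filter on its inner fields.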

Since we’re measuring half the voltage of the supercaps, you can see that they have all reached around 2.8V at this point. The interesting bit will be to see whether the voltage ever rises in the time between DIO1 “on” cycles – indicating a charge surplus!

Here’s the next 24 hours, only the 2 and 20 MΩ loads benefit from this cloudy winter day:

Another day, some real sunlight, not enough yet to survive the next night, even w/ 20 MΩ:

Note that the only curves which hold any promise, are the ones which can permanently stay above that hourly on/off cycle, since that DIO-pin top-up won’t be there to bail us out when using these solar cells as a real power source.

I’ll leave it for now – and will collect data for some more days, or perhaps weeks if there’s any point with these tiny solar cells. Will report in due time … for now: patience!

PS. Note that these fingernail-sized CPC1824 chips are a lot more than just some PV material in an SOIC package. There appears to be a switching boost regulator in there!

With the hardware ready to go, it’s time to take that last step – the software!

Here is a little sketch called slowLogger.ino, now in JeeLib on GitHub:

It’s pretty general-purpose, really – measure 4 analog signals once a minute, and report them as a wireless packet. There are a couple of points to make about this sketch:

The DIO1 pin will toggle approximately once every 64 minutes, i.e. one hour low / one hour high. This is used in the solar test setup to charge the supercaps to about 2.7V half of the time. The charge and discharge curves can provide useful info.
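One simple way to get that 64-minute toggle from a minute counter is to use bit 6 of the count – a sketch of the idea (in JavaScript for illustration, not necessarily the exact code in slowLogger.ino):

```javascript
// Bit 6 of a minute counter flips every 64 minutes, giving
// roughly one hour low / one hour high on the DIO1 pin.
const dioState = minutes => (minutes >> 6) & 1;

console.log(dioState(0));    // 0 - first 64 minutes: low
console.log(dioState(64));   // 1 - next 64 minutes: high
console.log(dioState(128));  // 0 - and so on
```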

The analog channel is selected by doing a fake reading, and then waiting 100 ms. This sets the ADC channel to the proper input pin, and then lets the charge on the ADC sample-and-hold capacitor settle a bit. It is needed when the output impedance of the signal is high, and helps set up a more accurate voltage in the ADC.

The analog readings are done by summing up each value 32 times. When dividing this sum by 32, you get an average, which will be more accurate than a single reading if there is noise in the signal. By not dividing the value, we keep the maximum possible amount of information about the exact voltage level. Here, it helps get more stable readings.

As it turns out, the current values reported are almost exactly 10x the voltage in millivolts (the scale ends up as 32767 / 3.3), which is quite convenient for debugging.
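On the receiving side, converting the summed-up raw value back to millivolts is a one-liner (assuming a 3.3 V ADC reference and the 32× summing described above, so full scale is 32 × 1023):

```javascript
// raw = sum of 32 ten-bit readings, full scale = 32 * 1023 at 3.3 V
const toMillivolts = raw => raw * 3300 / (32 * 1023);

// A 1.0 V input reads about 310 per sample, i.e. 9920 summed up:
console.log(toMillivolts(9920).toFixed(1));  // "1000.0"
// ...which is why raw / 10 is such a handy mental approximation.
```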

Having said all this, you shouldn’t expect miracles. It’s just a way to get minute-by-minute readings, with hopefully not too many fluctuations due to noise or signal spikes.

It’s all a big hack, and since I didn’t plan the board layout, it all ended up being hard to hookup to a JeeNode, so I had to tack on a Proto Board:

Those solar cells are tiny! – if they don’t work, I’ll switch to more conventional bigger ones.

On the back, some resistors and diodes:

I used a JeeNode USB (about 60 µA consumption in power-down, due to the extra FTDI chip), but am powering it all via a 3x AA battery pack to get full autonomy. The idea is to place this thing indoors, facing south through a window:

These dark and cold winter days are not really effective for solar energy: the entire month of January will probably not generate more energy than two peak days in July!

Still, it’s a good baseline to try things with. And one of the experiments I haven’t given up on is making nodes run off solar power, using a supercap to hold the charge. Maybe they’ll not last through the night, but that’s fine – there are still various uses (especially outdoor) where being able to run during daytime with nodes that never need to have their batteries changed would be really nice.

In previous attempts, I’ve always immediately tried to power the actual node, but now I’d like to try something simpler: a solar cell, a supercap, and a resistor as load. Like this:

I’m using a tiny solar cell by Clare again, the CPC1824, with the following specs:

Not much, but then again, it’s a cell which is just the size of a fingernail. As SOIC-16 package, and with the specs of the available current next to it:

There may be a flaw in this approach, in that the leakage of the supercap could completely overshadow the current draw from the resistors. But my hope is that supercaps get better over time when kept charged. Hmmm… not sure it applies if they run down every night.

So the second part of the idea, is to alternate solar cell use and dumb charging – just to measure how that affects output voltage over time. One hour, DIO will be on, and put the supercap on about 2.7V, the other hour it’ll be off and the solar cell takes over. With a bit of luck, the output voltage changes might show a pattern, right?

I think it’s worth a try and have made a setup with 4x the above – more tomorrow…

Good progress on the HouseMon front. I’m still waggling from concept to concept like a drunken sailor, but every day it sinks in just a little bit more. Adding a page to the app is now a matter of adding a CoffeeScript and a Jade file, and then adding an entry to the “routes” list to make a new “Readings” page appear in the main menu:

The only “HTML” I wrote for this page is readings.jade, using Foundation classes:

And the only “JavaScript” I wrote, is in readings.coffee to embellish the output slightly:

The rest is a “readings” collection, which is updated automatically on the server, with all changes propagated to the browser(s). There’s already quite a bit of architecture in place, including a generic RF12demo driver which connects to a serial port, and a decoder for some of the many packets flying around the house here at JeeLabs.

But the exciting bit is that the server setup is managed from the browser. There’s an Admin page, just enough to install and remove components (“briqlets” in HouseMon-speak):

In this example, I installed a fake packet generator (which simply replays one of my logfiles in real time), as well as a small set of decoders for things like room node sketches.

So just to stress what there is: a basic way of managing server components via the browser, and a basic way of seeing the live data updates, again in the browser. Oh, and all sorts of RF12 capturing and decoding… even support for the Announcer idea, described recently.

Did I mention that the data shown is LIVE? Some of this stuff sure feels like magic… fun!

(that Arduino.mk file in the sketchbook/ dir is well-commented and worth a read)

That’s it. You now have the following commands to perform various tasks:

make - no upload
make upload - compile and upload
make clean - remove all our dependencies
make depends - update dependencies
make reset - reset the Arduino by tickling DTR on the serial port
make raw_upload - upload without first resetting
make show_boards - list all the boards defined in boards.txt

The first build is going to take a few minutes, because it compiles the entire runtime code. After that, compilation should be fairly snappy. Be sure to do a make clean whenever you change the board type, to re-compile for a different board. The make depends command should be used when you change any #include lines in your code or add / remove source files to the project.

Ok, time to try out some real code, after yesterday’s proposal for a simple node / sketch / packet format announcement system.

I’m going to use two tricks:

First of all, there’s no need for the node itself to actually send out these packets. I’ve written a little “announcer” sketch which masquerades as the real nodes and sends out packets on their behalf. So… fake info to try things out, without even having to update any nodes! This is a transitional state, until all nodes have been updated.

And second, I don’t really care about sending these packets out until the receiving end actually knows how to handle this info, so in debug mode, I’ll just report it over serial.

Note that packets get sent to “node 0”, which cannot exist – so sending this out is in fact harmless: today’s RF12 listeners will ignore such packets. Except node 31, which sees these new packets come in with header byte 64 (i.e. 0x40). No RF12 driver change needed!
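In RF12 header terms (0x80 = CTL, 0x40 = DST, 0x20 = ACK, low 5 bits = node ID – see RF12.h for the actual definitions), such a packet is easy to recognise on the receiving end. A sketch of the check, in JavaScript since that’s where the server-side decoding happens:

```javascript
// RF12 header bits, mirroring the constants in RF12.h:
const RF12_HDR_CTL = 0x80, RF12_HDR_DST = 0x40,
      RF12_HDR_ACK = 0x20, RF12_HDR_MASK = 0x1F;

// "Sent to node 0" means the DST bit is set but the node ID is zero -
// a combination normal traffic never uses.
const isManagement = hdr =>
  (hdr & RF12_HDR_DST) !== 0 && (hdr & RF12_HDR_MASK) === 0;

console.log(isManagement(64));    // true  - the 0x40 header above
console.log(isManagement(0x45)); // false - directed at node 5
console.log(isManagement(5));    // false - broadcast from node 5
```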

Here’s the essence of the test sketch:

With apologies for the size of this screenshot – full code is on GitHub.

Note that this is just for testing. For the actual code, I’d pay far more attention to RAM usage and place the actual descriptive data in flash memory, for example.

And here’s some test output from it on the serial port:

Yeah, that looks about right. Next step will be to pick this up and do something with it. Please give me a few days for that, as the “HouseMon” server-side code isn’t yet up to it.

To be continued soon…

PS. Another very useful item type would be the maximum expected time between packets. It allows a server to automatically generate a warning when that time is grossly exceeded. Also a nice example of how self-announcement can help improve the entire system.

Yesterday’s post was about the desire to automate node discovery a bit more, i.e. having nodes announce and describe themselves on the wireless network, so that central receivers know something about them without having to depend on manual setup.

The key is to make a distinction between in-band (IB) and out-of-band (OOB) data: we need a way to send information about ourselves, and we want to send it by the same means, i.e. wirelessly, but recognisable in some way as not being normal packet data.

One way would be to use a flag bit or byte in each packet, announcing whether the rest of the packet is normal or special descriptive data. But that invalidates all the nodes currently out there, and worse: you lose the 100% control over packet content that we have today.

Another approach would be to send the data on a different net group, but that requires a second receiver permanently listening to that alternate net group. Not very convenient.

Luckily, there are two more alternatives (there always are!): one extends the use of the header bits already present in each packet, the other plays tricks with the node ID header:

There are three header combinations in use, as described yesterday: normal data, requesting an ACK, and sending an ACK. That leaves one bit pattern unused in the header. We could define this as a special “management packet”.

The out-of-band data we want to send to describe the node and the packet format is probably always sent out as broadcast. After all, most nodes just want to report their sensor readings (and a few may listen for incoming commands in the returned ACK). Since node ID zero is special, there is one combination which is never used: sending out a broadcast and filling in a zero node ID as origin field. Again, this combination could be used to mark a “management packet”.

I’m leaning toward the second because it’s probably compatible with all existing code.

So what’s a “management packet”, eh?

Well, this needs to be defined, but the main point here is that the format of management packets can be fully specified and standardised across all nodes. We’re not telling them to send/receive data that way, we’re only offering them a new capability to broadcast their node/packet properties through these new special packets.

So here’s the idea, which future sketches can then choose to implement:

every node runs a specific sketch, and we set up a simple registry of sketch type ID’s

each sketch type ID would define at least the author and name (e.g. “roomNode”)

the purpose of the registry is to issue unique ID’s to anyone who wants a bunch of ’em

once a minute, for the first five minutes after power-up, the sketch sends out a management packet, containing a version number, a sequence number, the sketch type ID, and optionally, a description of its packet payload format (if it is simple and consistent enough)

after that, this management packet gets sent out again about once an hour

management packets are sent out in addition to regular packets, not instead of

This means that any receiver listening to the proper net group will be able to pick up these management packets and know a bit more about the sketches running on the different node ID’s. And within an hour, the central node(s) will have learned what each node is.

Nodes which never send out anything (or only send out ACKs to remote nodes, such as a central JeeLink), probably don’t need this mechanism, although there too it could be used.

So the only remaining issue is how to describe packets. Note that this is entirely optional. We could just as easily put that description in the sketch type ID registry, and even add actual decoders there to automate not only node type discovery, but even packet decoding.

Defining a concise notation to describe packet formats can be very simple or very complex, depending on the amount of complexity and variation used in these data packets. Here’s a very simple notation, which I used in JeeMon – in this case for roomNode packets:

light 8 motion 1 rhum 7 temp -10 lobat 1

These are pairs of name + number-of-bits, with a negative value indicating that sign extension is to be applied (so the temp field ranges from -512..+511, i.e. -51.2..+51.1°C).
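A decoder for this notation only needs to walk through the packet bit by bit. Here’s a sketch in JavaScript – it assumes fields are packed LSB-first in a little-endian byte array, which is how AVR GCC lays out the bit-fields in these sketches:

```javascript
// Decode "name bits" pairs from a little-endian packed byte array.
// A negative bit count means the field is sign-extended.
function decodePacket(spec, bytes) {
  let bitPos = 0;
  const out = {};
  for (const [name, width] of spec) {
    const w = Math.abs(width);
    let value = 0;
    for (let i = 0; i < w; ++i, ++bitPos)
      if (bytes[bitPos >> 3] & (1 << (bitPos & 7)))
        value |= 1 << i;
    if (width < 0 && (value & (1 << (w - 1))))
      value -= 1 << w;            // sign extension
    out[name] = value;
  }
  return out;
}

// The roomNode format: light 8 motion 1 rhum 7 temp -10 lobat 1
const roomNode = [['light', 8], ['motion', 1], ['rhum', 7],
                  ['temp', -10], ['lobat', 1]];
console.log(decodePacket(roomNode, [100, 111, 251, 3]));
// -> { light: 100, motion: 1, rhum: 55, temp: -5, lobat: 0 }
```

Note how the temp field comes out as -5, i.e. -0.5°C, thanks to the sign extension requested by the “-10”.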

Note that we need to fit a complete packet description in a few dozen bytes, to be able to send it out as management data, so probably the field names will have to be omitted.

This leads to the following first draft for a little-endian management packet format:

3 bits – management packet version, currently “1”

5 bits – sender node ID (since it isn’t in the header)

8 bits – management packet number, incremented with each transmission

16 bits – unique sketch type ID, assigned by JeeLabs (see below)

optional data items, in repeating groups of the form:

4 bits – type of item

4 bits – N-1 = length of following item data (1..16 bytes)

N bytes – item data

Item types and formats will need to be defined. Some ideas for item types to support:

type 0 – sketch version number (a single byte should normally be enough)
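Just to make the bit layout above concrete, here’s how the fixed part plus one item might be assembled. This is a sketch of the draft format, not settled code – in particular, which nibble of the item header holds the type is my assumption:

```javascript
// Assemble a draft management packet: 3-bit version, 5-bit node ID,
// packet number, 16-bit sketch type ID (little-endian), then items.
function buildMgmtPacket(nodeId, seq, sketchId, items) {
  const bytes = [
    (1 & 0x07) | ((nodeId & 0x1F) << 3),  // version 1 + sender node ID
    seq & 0xFF,                           // management packet number
    sketchId & 0xFF, (sketchId >> 8) & 0xFF,
  ];
  for (const [type, data] of items) {     // assumed: type in low nibble
    bytes.push((type & 0x0F) | ((data.length - 1) << 4));
    bytes.push(...data);
  }
  return bytes;
}

// Node 5, first packet, sketch type 0x1234, one item: sketch version 7
console.log(buildMgmtPacket(5, 1, 0x1234, [[0, [7]]]));
// -> [ 41, 1, 52, 18, 0, 7 ]
```

Even with one item, the whole thing is only 6 bytes – well within the size budget of a single RF12 packet.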

The current use of the wireless RF12 driver is very basic. That’s by design: simple works.

All you get is the ability to send out 0..66 bytes of data, either as broadcast or directed to a specific node. The latter is just a little trick, since the nature of wireless communication is such that everything goes everywhere anyway – so point to point is simply a matter of receivers filtering out and ignoring packets not intended for them.

The other thing you get with the RF12 driver, is the ability to request an acknowledgement of good reception, which the receiver deals with by sending an “ACK” packet back. Note that ACKs can contain data – so this same mechanism can also be used to request data from another node, if they both agree to use the protocol in this way, that is.

So there are three types of packets: data sent out as is, data sent out with the request to send an ACK back, and packets with this ACK. Each of them with a 0..66 byte payload.

But even though it all works well, there’s an important aspect of wireless sensor networks which has never been addressed: the ability for nodes to tell other nodes what/who they are. As a consequence, I always have to manually update a little table on my receiving server with a mapping from node ID to what sketch that node is currently running.

Trivial stuff, but still a bit inconvenient in the longer run. Why can’t each node let me know what it is and then I don’t have to worry about mixing things up, or keeping that table in sync with reality every time something changes?

Well… there’s more to it than a mapping of node ID’s – I also want to track the location of each node, and assign it a meaningful name such as “room node in the living room”. This is not something each node can know, so the need to maintain a table on the server is not going to disappear just by having nodes send out their sketch + version info.

Another important piece of information is the packet format. Each sketch uses its own format, and some of them can be non-trivial, for example with the OOK relay sending out all sorts of packets, received from a range of different brands of OOK sensors.

It would be sheer madness to define an XML DTD or schema and send validated XML with namespace info and the whole shebang in each packet, even though that is more or less the level of detail we’re after.

When power is so scarce that microcoulombs matter, you want to keep packet sizes as absolutely minimal as possible. A switch from a 1-byte to a 2-byte payload increases the average transmit power consumption of the node by about 10%. That’s because each packet has a fixed header + trailer byte overhead, and because the energy consumption of the transmitter is proportional to the time it is kept on.
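That 10% figure is easy to check with a back-of-the-envelope model. Assuming roughly 9 bytes of fixed per-packet overhead (preamble, sync, header, length, CRC – the figure that makes the numbers come out, not an exact count from the driver):

```javascript
// Transmit energy is proportional to on-air bytes:
// fixed overhead + payload. Assume ~9 bytes of overhead.
const OVERHEAD = 9;
const txCost = payload => OVERHEAD + payload;

const increase = (txCost(2) / txCost(1) - 1) * 100;
console.log(increase.toFixed(0) + '%');  // "10%" for one extra byte
```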

The best way to keep packet sizes minimal, is to let each node pick what works best!

That’s why I play lots and lots of tricks each time I come up with a remote sketch. The room node gets all its info across in just 4 bytes of data. Short packets keep power consumption low and also reduce the chance of collisions. Short is good.

The P1 port connection I set up recently for picking up data from the utility company’s smart meter isn’t working reliably yet, so I dove a bit deeper into it.

Here’s how I now think the P1 interface is implemented inside that smart meter:

There’s no way to ascertain whether a PNP or NPN transistor is used, without opening the box (impossible without breaking a seal, so definitely not an option!). But given that NPN is more common, the above circuit is my best guess.

The resistance was measured between the DATA and GND pins. The resistance between DATA and REQUEST is usually over 1 MΩ, which indicates that the phototransistor is open. Makes sense: the pulldown keeps DATA low regardless of the REQUEST line state, and the phototransistor pulls it high when the internal IR LED in that optocoupler is lit. That also explains the awkward inverted TTL logic levels provided by this interface.

What you can see is that the REQUEST line is really nothing but the power supply to the isolated side of the optocoupler. But the surprise is the value of that pulldown resistor!

Let’s do the math: when REQUEST is 5V, and the phototransistor is conducting, it’ll have about a 0.2V collector-to-emitter voltage drop, leaving 4.8V to feed the 180 Ω resistor (I measured 4.75 V, so yeah, close enough).

My conclusions from all this are: there’s no external pull-up or pull-down needed after all, and my hunch is that it’ll work just fine with REQUEST powered from 3.3V (reducing the current somewhat). All you have to make sure of when working with this P1 interface is that your REQUEST pin can supply those 25-odd milliamps.
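Spelled out, those 25-odd milliamps follow directly from the measurements above (180 Ω pulldown, ~0.2 V saturated collector-to-emitter drop):

```javascript
// Current through the P1 interface's internal 180 ohm pulldown when the
// phototransistor conducts: (REQUEST voltage - C-E drop) / R.
const R = 180;        // ohm, measured pulldown
const vceDrop = 0.2;  // volt, typical saturated phototransistor drop

function p1CurrentmA(vRequest) {
  return (vRequest - vceDrop) / R * 1000;
}

console.log(p1CurrentmA(5.0).toFixed(1)); // prints "26.7" (mA at 5V)
console.log(p1CurrentmA(3.3).toFixed(1)); // prints "17.2" (mA at 3.3V)
```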

If the above is accurate, then nothing forces us to actually use that resistor, by the way. We could simply connect the REQUEST and DATA pins and leave GND dangling. In fact, by re-using the phototransistor in a different way, one could even make it work in active-high mode again (only if no other P1 devices are connected, evidently).

You’re looking at the final prototype of the latest Flukso meter, which can be connected to AC current sensors, pulse counters, and the Dutch smart metering “P1” port. Here’s the brief description from that website:

Flukso is a web-based community metering application. Install a Fluksometer near your fuse box and you will be able to monitor, share and reduce your electricity consumption through this website.

The interesting bit is that it’s all based on a Linux board with wired and wireless Ethernet, plus a small ATmega-based add-on board which does all the real-time processing.

But the most exciting news is that the new version, now entering production, will include an RFM12B module with the JeeNode-compatible protocol. A perfect home automation workstation. Yet another interesting aspect is that Bart Van Der Meersche, the mastermind behind Flukso, is working on getting the Mosquitto MQTT broker running permanently on that same Flukso meter.

Here’s the basic layout (probably slightly different from the actual production units):

Flukso runs OpenWRT, and everything in it is based on the Lua programming language, which is really an excellent fit for such environments. But even if Lua is not something you want to dive into, the open-endedness of PubSub means this little box, drawing just a few watts, can interface to a huge range of devices – from RF12 to WiFi to LAN – and everything flowing in and out of that little box becomes easily accessible via MQTT.

PS. I have no affiliation with Flukso whatsoever – I just like it, and Bart is a nice fellow :)

Last year, I got one of these little low-cost audio Digital-to-Analog Converters on eBay:

Takes digital SPDIF coax in, and produces analog left and right channels out. I got them before the bigger picture of the audio chain in the living room got fleshed out, and in the end I don’t really need this type of converter anymore.

Besides, there was some hiss and hum with these, so it’s not really high-end anyway.

I’m a bit surprised by the hiss/hum issues, since the audio DAC is specified as -90 dB THD+N and 105 dB Dynamic Range, which to me sounds like it should be pretty good. Maybe not audiophile level, but hey… neither are my ears, anyway!

I’m a big fan of test-driven development, an approach where test code is often treated as more important than the code you’re writing to actually do the work. It may sound completely nuts – the thought of writing tons of tests which can only get in the way of making changes to your newly-written beautiful code later, right?

In fact, TDD goes even further: the idea is to write the test code as soon as you decide to add a new feature or fix a bug, and only then start writing the real stuff. Weird!

But having used TDD in a couple of projects in the past, I can only confirm that it’s in fact a fantastic way to “grow” software. Because adding new tests, and then seeing them pass, constantly increases the confidence in the whole project. Ever had the feeling that you don’t want to mess with a certain part of your code, because you’re afraid it might break in some subtle way? Yeah, well… I know the feeling. It terrifies me too :)

With test scaffolding around your code, something very surprising happens: you can tear down the guts and bring them back up in a different shape, because the test code will help drive that process! And re-writing / re-factoring usually leads to a better architecture.

There’s a very definite hurdle in each new project to start using TDD (or BDD). It’s a painful struggle to spend time thinking about tests, when all you want to do is write the darn code and start using it! Especially at the start of a project.

I started coding HouseMon without TDD/BDD on my plate, because I first wanted to expose myself to coding in CoffeeScript. But I’m already checking out the field in the world of Node.js and JavaScript. Fell off my chair once again… so many good things going on!

Here’s an example, using the Mocha and should packages with CoffeeScript:

And here’s some sample output, in one of many possible output formats:

I’m using kind of a mix of TDD’s assertion-style testing, and BDD’s requirement-style approach. The T + F tricks suit me better, for their brevity, but the “should.throw” stuff is useful to handle exceptions when they are intentional (there’s also “should.not.throw”).

But that’s just the tip of the iceberg. Once you start thinking in terms of testing the code you write, it becomes a challenge to make sure that every line of that code has been verified with tests, i.e. code coverage testing.
And there too, Mocha makes things simple. I haven’t tried it, but here is some sample output from its docs:

On the right, a summary of the test coverage of each of the source files, and in the main window an example of a few lines which haven’t been traversed by any of the tests.

Ok, well, baby steps or not – there’s just no way to avoid it… I need to start coding!

I’ve set up a new project on GitHub, called HouseMon – to have a place to put everything. It does almost nothing, other than what you’ll recognise after yesterday’s weblog post:

This was started with the idea of creating the whole setup as installable “Briqs” modules, which can then be configured for use. And indeed, some of that will be unavoidable right from the start – after all, to connect this to a JeeNode or JeeLink, I’m going to have to specify the serial port (even if the serial ports are automatically enumerated one day, the proper one will need to be selected in case there is more than one).

However, this immediately adds an immense amount of complexity over simply hard-coding all the choices in the app for my specific installation. Now, in itself, I’m not too worried by that – but with so little experience with Node.js and the rest, I just can’t make these kinds of decisions yet.
There’s too big a gap between what I know and understand today, and what I need to know to come up with a good modular system. The whole thing could be built completely out of npm packages, or perhaps even the lighter-weight Components mechanism. Too early to tell, and definitely too early to make such key decisions!

Just wanted to get this repository going to be able to actually build stuff. And for others to see what’s going on as this evolves over time. It’s real code, so you can play with it if you like. And of course make suggestions and point out things on the issue tracker. I’ll no doubt keep posting about this stuff here whenever the excitement gets too high, but this sort of concludes nearly two weeks of continuous immersion and blogging about this topic.

It’s time for me to get back to some other work. Never a dull moment here at JeeLabs! :)

Wow – maybe I’m the last guy in the room to find out about this, but responsive CSS page layouts really have come a long way. Here are a bunch of screen shots at different widths:

Note that the relative scale of the above snapshots is all over the map, since I had to fit them into the size of this weblog, but as you can see, there’s a lot of automatic shuffling going on. With no effort whatsoever on my part!

Here’s the Jade definition to generate the underlying HTML:

How cool is that?

This is based on the Foundation HTML framework (and served from Node.js on a Raspberry Pi). It looks like this might be an even nicer, eh, foundation to start from than Twitter Bootstrap (here’s a factual comparison, with examples of F and B).

Phew! You don’t want to know how many examples, frameworks, and packages I’ve been downloading, trying out, and browsing lately… all to find a good starting point for further software development here at JeeLabs.

Driving this is my desire to pick modern tools, actively being worked on, with an ecosystem which allows me to get up to speed, leverage all the amazing stuff people keep coming up with, and yet stay firmly on the simple side of things.
Because good stuff is what you end up with when you weed out the bad, and the result is purity and elegance … as I’ve seen confirmed over and over again.

A new insight I didn’t start out from, is that server-side web page generation is no longer needed. In fact, with clients being more performant than servers (i.e. laptops, tablets, and mobile phones served from a small Linux system), it makes more and more sense to serve only static files (HTML, CSS, JS) and generic data (JSON, usually). The server becomes a file system + a database + a relatively low-end rule engine for stuff that needs to run at all times… even when no client is connected.

Node.js – JavaScript on the server, based on the V8 engine which is available for Intel and ARM architectures. A high-performance standards-compliant JavaScript engine.

The npm package manager comes with Node.js and is a phenomenal workhorse when it comes to getting lots of different packages to work together. Used from the command line, I found npm help to be a great starting point for figuring out its potential.

SocketStream is a framework for building real-time apps. It wraps things like socket.io (now being replaced by the simpler engine.io) for bi-directional use of WebSockets between the clients and the server. Comes with lots of very convenient features for development.

AngularJS is a great tool to create very dynamic and responsive client-side applications. This graphic from a post on bennadel.com says it all. The angularjs.org site has good docs and tutorials – that matters, because there’s quite a bit of – fantastic! – stuff to learn.

Connect is what makes the HTTP webserver built into Node.js incredibly flexible and powerful. The coin dropped once I read an excellent introduction on project70.com. If you’ve ever tried to write your own HTTP webpage server, then you’ll appreciate the elegance of this timeout module example on senchalabs.org.

Redis is a memory-based key-value store, which also saves its data on file, so that restarts can resume where it left off. An excellent way to cache file contents and “live” data which represents the state of a running system. Keeping data in a separate process is great for development, especially with automatic server restarts when any source file changes. IOW, the basic idea is to keep Redis running at all times, so that everything else can be restarted and reloaded at will during development.

Foundation is a set of CSS/HTML choices to support uniform good-looking web pages.

CoffeeScript is a “JavaScript dialect” which gets transformed into pure JavaScript on the fly. I’m growing quite used to it already, and enjoy its clean syntax and conciseness.

Jade is a shorthand notation which greatly reduces the amount of text (code?) needed to define HTML page structures (i.e. elements, tags, attributes). Like CoffeeScript, it’s just a dialect – the output is standard HTML.

Stylus is again just a dialect, for CSS this time. Not such a big deal, but I like the way all these notations let me see the underlying structure and nesting at a glance.

After all this fiddling with Node.js on my Mac, it’s time to see how it works out on the Raspberry Pi. This is a running summary of how to get a fresh setup with Node.js going.

Linux

I’m using the Raspbian build from Dec 16, called “2012-12-16-wheezy-raspbian”. See the download page and directory for the ZIP file I used.

The next step is to get this image onto an SD memory card. I used a 4 GB card, of which over half will be available once everything has been installed. Plenty!

Good instructions for this can be found at elinux.org – in my case the section titled Copying an image to the SD card in Mac OS X (command line). With a minor tweak on Mac OS X 10.8.2, as I had to use the following command in step 10:
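The screenshot of that command is missing here; it was most likely something along these lines – note the device number is an example, so double-check yours with “diskutil list” before writing, since dd will happily overwrite the wrong disk:

```shell
# unmount (not eject) the SD card, then write the image to the raw device
diskutil unmountDisk /dev/disk1
sudo dd bs=1m if=2012-12-16-wheezy-raspbian.img of=/dev/rdisk1
```

The tweak on Mac OS X is the lowercase “1m” block size and the use of the raw /dev/rdiskN device, which speeds up the copy considerably.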

Next: insert the SD card and power up the RPi! Luckily, the setup comes with SSH enabled out of the box, so only an Ethernet cable and 5V power with USB-micro plug are needed.

When launched and logged in (user “pi”, password “raspberry” – change it!), it will say:

Please run 'sudo raspi-config'

So I went through all the settings one by one, overclocking set to “medium”, only 16 MB assigned to video, and booting without graphical display. Don’t forget to exit the utility via the “Finish” button, and then restart using:

$ sudo shutdown -r now

Now is a good time to perform all pending updates and clean up:

$ sudo -i
# apt-get update
# apt-get upgrade
# apt-get clean
# reboot

The reboot at the end helps make sure that everything really works as expected.

Up and running

That’s it, the system is now ready for use. Some info about my 256 MB RPi:

Oh, one more thing – the RPi doesn’t come with “git” installed, so let’s fix that right now:

$ sudo apt-get install git

There. Now you’re ready to start cookin’ with node and npm on the RPi. Onwards!

PS. If you don’t want to wait two hours, you can download my build, unpack with “tar xfz node-v0.8.16-rpi.tgz”, and do a “cd node-v0.8.16 && make install” to copy the build results into /usr/local/ (probably only works if you have exactly the same Raspbian image).

It’s probably me, but I’m having a hard time dealing with data as arrays and hashes…

Here’s what you get in just about every programming language nowadays:

variables (which is simply keyed access by name)

simple scalars, such as ints, floats, and strings

indexed aggregation, i.e. arrays: blah[index]

tagged aggregation, i.e. structs: blah.property

arbitrarily nested combinations of the above

JavaScript structs are called objects and tag access can be blah.tag or blah['tag'].

It would seem that these are all one would need, but I find it quite limiting.

Simple example: I have a set of “drivers” (JavaScript modules), implementing all sorts of functionality. Some of these need only be set up once, so the basic module mechanism works just fine: require "blah" can give me access to the driver as an object.

But others really act like classes (prototypal or otherwise), i.e. there’s a driver to open a serial port and manage incoming and outgoing packets from say a JeeLink running the RF12demo sketch. There can be more than one of these at the same time, which is also indicated by the fact that the driver needs a “serial port” argument to be used.

Each of these serial port interfaces has a certain amount of configuration (the RF12 band/group settings, for example), so it really is a good idea to implement this as objects with some state in them. Each serial port ends up as a derived instance of EventEmitter, with raw packets flowing through it, in both directions: incoming and outgoing.

Then there are packet decoders, to make sense of the bytes coming from room nodes, the OOK relay, and so on. Again, these are modules, but it’s not so clear whether a single decoder object should decode all packets on any attached JeeLink or whether there should be one “decoder object” per serial interface object. Separate objects allow more smarts, because decoders can then keep per-sensor state.

The OOK relay in turn, receives (“multiplexes”) data from different nodes (and different types of nodes), so this again leads to a bunch of decoders, each for a specific type of OOK transmitter (KAKU, FS20, weather, etc).

As you can see, there’s sort of a tree involved – taking incoming packet data and dissecting / passing it on to more specific decoders. In itself, this is no problem at all – it can all be represented as nested driver objects.

As a final step, the different decoders can all publish their readings to a common EventEmitter, which will act as a simple bus – same as an MQTT broker with channels, with the same “nested key” strings to identify each reading.

So far so good. But that’s such a tiny piece of the puzzle, really.

Complexity sets in once you start to think about setup and teardown of this whole setup at run time (i.e. configuration in the browser).

Each driver object may need some configuration settings (the serial port name for the RF12demo driver was one example). To create a user interface and expose it all in the browser, I need some way of treating drivers as a generic collection, independent of their nesting during the decoding process.

Let’s call the driver modules “interfaces” for now, i.e. in essence the classes from which driver instances can be created. Then the “drivers” become instantiations of these classes, i.e. the objects which actually do the work of connecting, reading, writing, decoding, etc.

One essential difference is that the list of interfaces is flat, whereas a configured system with lots of drivers running is often a tree, to cope with the gradual decoding task described a bit above.

How do I find all the active drivers of a specific interface? Walk the driver tree? Yuck.

Given a driver object, how do I find out where it sits in the tree? Store path lists? Yuck.

Again, it may well be me, but I’m used to dealing with data structures in a non-redundant way. The more you link and cross-link stuff (let alone make copies), the more hassles you run into when adding, removing, or altering things. I’m trying to avoid “administrative code” which only keeps some redundant invariants intact – as much as possible, anyway.

Aren’t data structures supposed to be about keeping each fact in exactly one place?

PS. My terminology is still in flux: interfaces, drivers, devices, etc…

Update – I should probably add that my troubles all seem to come from trying to maintain accurate object identities between clients and server.

The new code is progressing nicely. First step was to get a test table, updating in real time:

(etc…)

It was a big relief to figure out how to produce graphs again – e.g. power consumption:

The measurement resolution from the 2000 pulse/kWh counters is excellent. Here is an excerpt of power consumption vs solar production on a cloudy and wet winter morning:

There is a fascinating little pattern in there, which I suspect comes from the central heating – perhaps from the boiler and pump, switching on and off in a peculiar 9-minute cycle?

Here are a bunch of temperature sensors (plus the central heating set-point, in brown):

There is no data storage yet (I just left the browser running on the laptop collecting data overnight), nor proper scaling, nor any form of configurability – everything was hard-coded, just to check that the basics work. It’s a far cry from being able to define and configure such graphs in the browser, but hey – one baby step at a time…

Although the D3 and NVD3 SVG-based graphing packages look stunning, they are a bit overkill for simple graphing and mean more code for the browser to load and run. Maybe some other time – for now I’m using the Flot package for these graphs, again.

Will share the code as soon as I can make up my mind about the basic app structure!

Note that the API of these decoders is still changing. They are now completely independent little snippets of code which do only one thing – no assumptions on where the data comes from, or what is to be done with the decoded results. Each decoder takes the data, creates an object with decoded fields, and finishes by calling the supplied “cb” callback function.
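In code, such a decoder boils down to very little. Here is a sketch of the shape described – the packet layout and field names are invented for illustration, not the actual room-node format:

```javascript
// One self-contained decoder: raw bytes in, decoded object out via cb.
// No assumptions about where the data comes from or what happens to the
// result - the caller supplies both the bytes and the callback.
function decodeDemo(raw, cb) {
  // hypothetical 3-byte packet: light level, humidity, temperature
  cb({
    light: raw[0],
    humi: raw[1],
    temp: (raw[2] - 50) / 10, // invented encoding, for illustration only
  });
}

decodeDemo([120, 48, 75], (result) => console.log(result));
```

Because each decoder only transforms bytes into an object and hands it to cb, the same snippet can be driven from a serial port, a test harness, or replayed log data without any changes.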

Here is some sample output, including a bit of debugging:

As you can see, this example packet used 19 bytes to encode 10 values plus a format code.

Explanation of the values shown:

usew is 0: no power is being drawn from the grid

genw is 6: power fed into the grid is 10 x 6, i.e. ≈ 60W

p1 + p3 is current total consumption: 8 + 191, i.e. 199W

p2 is current solar power output: 258W

With a rate of about 50 Kbit/sec using the standard RF12 driver settings, and with some 10 bytes of packet header + footer overhead, this translates to (19 + 10) * 8 * 20 = 4,640 µs of “on-air” time once every 10 seconds, i.e. still under 0.05 % bandwidth utilisation. Fine.
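The same calculation, spelled out step by step with the figures quoted above:

```javascript
const bitrate = 50e3;      // bits per second, standard RF12 settings
const overheadBytes = 10;  // packet header + footer
const payloadBytes = 19;   // the P1 packet measured above
const intervalSec = 10;    // one packet every 10 seconds

const usPerBit = 1e6 / bitrate;                          // 20 µs per bit
const onAirUs = (payloadBytes + overheadBytes) * 8 * usPerBit;
const utilisation = onAirUs / (intervalSec * 1e6);

console.log(onAirUs);                              // prints 4640 (µs)
console.log((utilisation * 100).toFixed(3) + '%'); // prints "0.046%"
```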

After yesterday’s hardware hookup, the next step is to set up the proper software for this.

There are a number of design and implementation decisions involved:

how to format the data as a wireless packet

how to generate and send out that packet on a JeeNode

how to pick up and decode the data in Node.js

Let’s start with the packet: there will be one every 10 seconds, and it’s best to keep the packets as small as possible. I do not want to send out differences, like I did with the otRelay setup, since there’s really not that much to send. But I also don’t want to put too many decisions into the JeeNode sketch, in case things change at some point in the future.

The packet format I came up with is one which I’d like to re-use for some of the future nodes here at JeeLabs, since it’s a fairly convenient and general-purpose format:

(format) (longvalue-1) (longvalue-2) ...

Yes, that’s right: longs!

The reason for this is that the electricity meter counters are in Watt-hour, and will immediately exceed what can be stored as 16-bit ints. And I really do want to send out the full values, also for gas consumption, which is in 1000ths of a m3, i.e. in liters.

But for these P1 data packets that would be a format code + 8 longs, i.e. at least 25 bytes of data, which seems a bit wasteful. Especially since not all values need longs.

The solution is to encode each value as a variable-length integer, using only as many bytes as necessary to represent each value. The way this is done is to store 7 bits of the value in each byte, reserving the top-most bit for a flag which is only set on the last byte.

With this encoding, 0 is sent as 0x80, 127 is sent as 0xFF, whereas 128 is sent as two bytes 0x01 + 0x80, 129 is sent as 0x01 + 0x81, 1024 as 0x08 + 0x80, etc.

Ints up to 7 bits take 1 byte, up to 14 bits take 2, … up to 28 bits take 4, and full 32-bit values take 5 bytes.
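On the receiving (Node.js) side, the same scheme can be expressed quite compactly. This is my own JavaScript rendition of the encoding described above, not the actual sketch code:

```javascript
// Encode one unsigned integer as a variable-length byte sequence:
// 7 data bits per byte, top bit set only on the last byte of each value.
function encodeVarint(n) {
  const bytes = [(n & 0x7f) | 0x80]; // last byte carries the end-of-value flag
  for (n = Math.floor(n / 128); n > 0; n = Math.floor(n / 128))
    bytes.unshift(n & 0x7f);
  return bytes;
}

// Decode a whole payload of concatenated varints back into numbers.
function decodeVarints(bytes) {
  const values = [];
  let v = 0;
  for (const b of bytes) {
    v = v * 128 + (b & 0x7f);
    if (b & 0x80) { values.push(v); v = 0; } // flag bit marks end of value
  }
  return values;
}

console.log(encodeVarint(0));    // prints [ 128 ]     i.e. 0x80
console.log(encodeVarint(127));  // prints [ 255 ]     i.e. 0xFF
console.log(encodeVarint(128));  // prints [ 1, 128 ]  i.e. 0x01 + 0x80
console.log(encodeVarint(1024)); // prints [ 8, 128 ]  i.e. 0x08 + 0x80
console.log(decodeVarints([1, 129, 8, 128])); // prints [ 129, 1024 ]
```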

It’s relatively simple to implement this on an ATmega, using a bit of recursion:

The full code of this p1scanner.ino sketch is now in JeeLib on GitHub.

Tomorrow, I’ll finish off with the code used on the receiving end and some results.

Just in time for 2013, I hooked up the smart meter which was installed a month ago:

This connects DIO4 to the output of the P1 port, which has the following pinout:

The circuit is as follows:

The request pin has to be pulled to 5V (it very likely just powers the isolated side of the built-in optocoupler). The 10 kΩ signal pull-up is needed to improve the rising edge of the signal, and the 10 kΩ resistor to the I/O pin prevents problems when the input signal rises above the 3.3V powering the ATmega.