The fact is that this is probably what most visitors to this blog will be interested in.

I’ve had another session hacking the barmesh slicing code, which is now creating these interesting subdivisions:

It can even generate 17 slices of a part without crashing, though it takes a few minutes:

It’s still only testing against the points, and not the edges and faces (hence the arcs), but that will only make it crash less as the shapes will be smoother.

It’s a little unclear what to do next. Maybe I should tidy the code further and clear up all these special cases I’ve been hitting and had to hack in to make it work. When a subdividing line crosses the r=0.5 threshold and I calculate its location, I’m setting it back to exactly 0.5. I don’t think this is the most reliable way to make it work.
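For concreteness, here’s a sketch of that interpolate-and-snap hack in throwaway Python. The names are mine and this is not the actual barmesh code:

```python
def snap_to_threshold(p0, p1, d0, d1, threshold=0.5):
    """Find where the distance field crosses the threshold along a bar
    from p0 to p1, given the distances d0 and d1 at the endpoints.

    Hypothetical sketch, not the real code.  Linear interpolation gives
    a point whose recomputed distance is only approximately 0.5, so the
    stored value is snapped back to exactly the threshold -- the hack
    described above.
    """
    assert (d0 - threshold) * (d1 - threshold) <= 0, "no crossing on this bar"
    lam = (threshold - d0) / (d1 - d0)   # interpolation parameter in [0, 1]
    x = p0[0] + lam * (p1[0] - p0[0])
    y = p0[1] + lam * (p1[1] - p0[1])
    return (x, y), threshold             # distance recorded as exactly 0.5
```

The snapping hides the fact that the interpolated point isn’t truly on the contour, which is exactly why it feels unreliable.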

I’m going to do some other coding, now that I’ve got this result. The code would fall apart if I touched it again.

Next on the list of things to do is clear out the vast quantity of rubbish left in the code, completely redo the subdivision loops and make the logic robust, apply it to multiple z-levels and plot slices, then make it test against edges and faces (not just points), and package it into a self-contained (but very slow) version of the slicer.

I don’t know how long this will take, as there are many other distractions available.

By popular demand, I am working on a new Z-slicing algorithm, which is open source and in Python and can be found here. (My latest parts order is taking too long to be picked and come in the post.)

The code is not in any state to be used by anyone as I conduct some very meandering software development. I am unexpectedly basing everything on these BarMesh structures. This is a neat way to represent a triangulated manifold, such as the STL triangle file that contains the 3D input geometry. But also, instead of basing the slice on a strict XY grid (or weave) as I’ve done before, I’m using a second BarMesh to handle the 2D partitioning of the plane.

I don’t really know what I am doing, but if it gets messy and results in malformed folded cells, at least I can choose to constrain the BarMesh to conform to an XY grid weave structure, which I know works.

I’m just slicing (with a radius) against the points in the input geometry as this creates a more difficult slice geometry to begin with (here it’s an impeller shape with 38474 triangles, so I’m not starting with a toy example).
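The point-only test is simple enough to sketch. This is my own illustrative version, not the real code:

```python
def point_inside_offset(sample_xy, z, mesh_points, radius):
    """Return True if (x, y, z) lies within `radius` of any point of the
    input geometry -- i.e. inside the offset surface generated by the
    points alone.  A hypothetical sketch of the test applied at each
    barmesh node; the names are mine, not from the actual code."""
    sx, sy = sample_xy
    for (px, py, pz) in mesh_points:
        if (sx - px)**2 + (sy - py)**2 + (pz - z)**2 <= radius**2:
            return True
    return False

# The arcs in the plots come from this: each mesh point within `radius`
# of the slice plane contributes a circle to the slice boundary.
```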

When the slicer is working, I can extend the code to test against the edges and faces of the input geometry.

I’m going for simplicity, and not being too constrained by speed or memory usage. There’s a lot more memory available than we need, and I’m counting on investigations into some weird Python compiler systems to provide the performance of C++, without the disadvantages of using C++.

I maintain that if you take away the speed advantage of C++ on a particular platform, it loses its reason to exist. Therefore the question of whether you should be using C++ is not to be found by looking at C++ itself, but by trying to beat what it supposedly does best using another language.

I can’t predict what will win. But the experience I am about to have ought to be extremely relevant to a programming team that is starting a new product and is having to choose what language they commit to.

This particular slicer will be for 3D printing, and it will not notice the problem of mismatched triangle edges or self-intersecting input geometry. (Self-intersecting inputs can come when you throw in some support structures.) It will be optimized for taking hundreds of slices at different Z-levels. It will work by finding the offset surface at a particular radius, and then offsetting back in by that radius to get the “true” surface, after the interior contours have been identified by tracking them up and down in 3D to prove full enclosure.

I was going great guns with my PALvario system, until I came back to looking at the barometer data. I redesigned the Arduino code to be more intelligent than simply waiting 10 milliseconds for the reading to become ready from the MS5611 device, and programmed it to go and do something else with this otherwise wasted time.

Suddenly all my readings were noise.

I won’t bore you by recounting how I isolated the problem, or how I suffered a delay of 2 hours due to a bug in the Arduino code where calling delayMicroseconds(0) actually delays by 4096 microseconds.

The result is a sudden step variation by 3 metres (the vertical lines are every 2 seconds).

The MS5611 barometer comes with a set of calibration constants used in a formula for converting its raw pressure reading (which is temperature sensitive) into a temperature-compensated value using its own temperature reading. You have to code the formula yourself, since the device is too small to do its own arithmetic processing.
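For reference, the first-order compensation from the MS5611 datasheet goes like this (shown in Python for clarity; on the Arduino you do the same arithmetic with 64-bit integers):

```python
def ms5611_compensate(D1, D2, C):
    """First-order compensation from the MS5611 datasheet.
    D1 = raw pressure, D2 = raw temperature, C[1]..C[6] = the factory
    calibration constants read from the device's PROM.
    Returns (TEMP in hundredths of a degree C, P in hundredths of a mbar).
    Integer arithmetic throughout, as on the microcontroller."""
    dT = D2 - C[5] * 2**8                    # difference from reference temp
    TEMP = 2000 + dT * C[6] // 2**23         # actual temperature
    OFF = C[2] * 2**16 + C[4] * dT // 2**7   # offset at actual temperature
    SENS = C[1] * 2**15 + C[3] * dT // 2**8  # sensitivity at actual temperature
    P = (D1 * SENS // 2**21 - OFF) // 2**15  # temperature-compensated pressure
    return TEMP, P

# Datasheet worked example: C1..C6 = 40127, 36924, 23317, 23282, 33464,
# 28312 with D1 = 9085466, D2 = 8569150 gives TEMP = 2007 (20.07 degrees)
# and P = 100009 (1000.09 mbar).
```

Note the compensation only holds if the temperature reading is current, which is what the spikes below make a mess of.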

Here are the temperature graphs from the device:

My interpretation was that it’s warming up when I’m asking for a reading every 25ms, and then cooling down to a lower working temperature when I’m asking for a reading every 30ms.

But it’s worse than that.

Suppose I read at a constant interval of about 30 ms (25+5), except once every 10 seconds I insert an extra 45 ms delay, like so:

This gives the lower yellow line in the graph. Most of the time it’s at 35.6 degrees, and when we have that extra delay, it spikes immediately down to 35.35 degrees.

The upper white line is from when I reversed the condition, so the delay(50) happened always, except every 10 seconds there was a delay(5). Here we were at 36.32 degrees with a sudden spike up to 36.45 degrees. True, these are small amounts, but they occur in the space of 30 milliseconds, which is the problem: it means the temperature compensation applied to the raw pressure reading taken in the next 10 ms is going to be out of date.

If I insert another delay of 150 ms into the loop, I don’t get spikes any more, and the average temperature settles in at a hotter 36.83 degrees.

There cannot be a physical reason for this, as the device is too small to carry any heating effects, so it must be due to a regulator of some sort.

It’s as if the circuitry dynamically adapts itself to the number of readings you are taking per second. Then if you change the gap between readings for even one instant, it tries to adapt its power handling to the new cycle rate and causes a glitch.

This behaviour is not disclosed in the datasheet. What it does say, on page 5, is that:

The best noise performance from the module is obtained when the SPI bus is idle and without communication to other devices during the ADC conversion

But is SPI better than the I2C interface? The statement is ambiguous, so I tried the SPI interface out on Sunday (before all this blew up) and found no difference in noise levels. The problem I have here is not noise, because it’s completely predictable and not coming from any stray hardware electrical signals.

I experimented with lots of different cases.

(By the way, a very useful upgrade to the Serial monitor on the Arduino IDE would be to make it plot points and lines in a window if any output line contains something like the string “P5.1,3.7”. It’s extremely critical, but very tedious, to plot printed data into these graphs, and it would be a major step if this could be done directly out of the Arduino in this debugging tool. This is so important I might try to implement it in their code myself one day.)

This is exceptionally boring. And I’ve missed my lunch again. I had been looking forward to building a set of funky filters to immediately detect altitude changes from very dense data.

So anyway, I got the timing all evened out by inserting a reliable skip delay into the place where it reads pressure every 50 milliseconds on the dot:
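The trick, sketched here in Python rather than Arduino code (where you’d do the same thing with micros() arithmetic), is to schedule each read against an absolute deadline instead of sleeping a fixed time after the previous read, so jitter in the other work never changes the spacing:

```python
import time

READ_PERIOD = 0.050  # 50 ms between pressure reads

def run_fixed_rate(read_pressure, do_other_work, n_reads):
    """Drift-free periodic read loop -- an illustrative sketch, not the
    actual firmware.  The next deadline is advanced by a constant period
    rather than by delaying a constant time after the read, so variable
    work in the loop doesn't accumulate into timing drift."""
    readings = []
    next_deadline = time.monotonic()
    for _ in range(n_reads):
        do_other_work()                        # variable-length other work
        remaining = next_deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)              # the "reliable skip delay"
        readings.append(read_pressure())       # fires every 50 ms on the dot
        next_deadline += READ_PERIOD           # absolute deadline, no drift
    return readings
```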

Then I tried running the vibration motors in the loop, and it went tits up again. Here’s the graph of temperature and altitude when the motors are doing their stuff for 10 seconds, and then are all off for ten seconds:

This is no good.

The only way forward for this is to wire it up to an entirely independent clean microcontroller which reads from it on a regular cycle and doesn’t do anything else except transfer the data to a main board through some mechanism.

There’s a chap in Australia making the blueflyvario who’s well ahead with this technology, and sends the readings directly from the MS5611 to an Android App via bluetooth.

I wonder if he has experienced this issue with the MS5611, or avoided it because it wasn’t exposed by his design.

Sensor readings generally have to be processed before you can use them. Patrick’s explanation of how he filtered the CoffeeMon signal (by picking the maximum value in each time window) suggested that there’s something fishy going on and it would be a mistake to treat the readings as subjected to mere noise.

Here’s a zoomed-in section of my fridge temperature as it rises by about 0.75 degrees an hour, or 12 units of the 1/16th of a degree that my Dallas OneWire DS18B20 digital temperature sensor reads at its maximum 12 bits of resolution.

The readings don’t jump between more than two levels when the temperature is stable. You’d expect some more random hopping from signal noise.

Indeed, applying a crude Gaussian filter doesn’t seem to do much good. This (in green) is the best I got by convolving it with a kernel 32 readings wide (equating to about 25 seconds).

The filtered version still has steps, but with a rough slope at the level changes. This filter is very expensive, and not any better than the trivially implementable alpha-beta filter, which smoothed it like so:
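For the record, an alpha-beta filter is only a few lines. The gains and sample interval here are illustrative values, not the ones I used on the fridge data:

```python
def alpha_beta_filter(readings, dt=0.78, alpha=0.1, beta=0.005):
    """Alpha-beta filter over a series of readings -- a minimal sketch
    with made-up gains.  It tracks a smoothed level x and a rate v,
    correcting both by a fraction of each residual.  One multiply-add
    per sample, so it is far cheaper than convolving with a 32-wide
    Gaussian kernel."""
    x, v = readings[0], 0.0
    out = []
    for z in readings:
        x += v * dt              # predict the next level from the rate
        r = z - x                # residual against the new reading
        x += alpha * r           # correct the level
        v += (beta / dt) * r     # correct the rate
        out.append(x)
    return out
```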

I have experienced much joy from this hardware hacking. I must have spent a couple of hundred pounds on components. The bits arrive in little plastic trays like very expensive chocolate sweeties. There’s always a thrill when you first wire them up and they actually work perfectly. Not only that, you can have fun with them the next day and the day after that, because they have not turned into poop.

I have a few surpluses by now. I got a realtime clock which is 5V, and a microSD card reader which is 3V3; the Jeenodes run on 3.3V and the normal arduinos are 5V, so I can’t easily use either as the controller for the datalogger. Some of the more idiot-proof breakout boards have converters on them, so they are safe for either voltage. Adrian has warned me to prepare for the coming of the 1.8V standard everywhere soon. I bought a combined ArduLog-RTC Data Logger, which for the moment is not playing ball.

Meanwhile, I’ve made a rule for the data logging of sensor data. Don’t do it. It’s not an end in itself. Too often people take on projects to collect sensor data and upload it to the internet (it’s Tuesday, so the site must be called Xively) with the idea that anyone else in the world could download it and [rolls eyes] “Do whatever they want with it.”

“Like what?”

“Whatever they want!”

If you can’t think of a single interesting application for your data, why do you think anyone else in the world will be able to? And even if there was anyone in the world who could do something with it, they’re probably the sort of person who’d have their own data which is guaranteed to be a lot more interesting to them than yours. There’s a reason we don’t have a CCTV channel of someone else’s back door at night on cable TV.

I’ve formulated a stronger principle:

The value of sensor data is inversely proportional to the product of the time that has elapsed since it was collected and the distance you are from the subject of the data.

Let’s take a simple case.

Patrick made CoffeeBot by putting the coffee machine onto an electronic weighing scale connected to an Arduino with an ethernet shield that talks to the internet.

I’ve been playing around with some geometric signal processing on the Atmega328-based Arduino kit for my run-time line fitting routine, when it occurred to me that I ought to know if I should be using floats or long ints as the basis for this system.

Short ints are only 2 bytes with a maximum value of 32767, so you’re always overflowing them and it’s not worth the hassle. Therefore you have to use long ints, which are 4 bytes, the same as a float, so saving precious memory is not a factor in this decision.

Anyways, I woke up this morning and decided I needed some benchmarking.

It was an expensive London and Cambridge weekend for me and Becka (£99.20 return train ticket each), but the chance to get home on Sunday night directly from the middle of London to the middle of Liverpool in under three hours, without needing to be awake, beat the plan of car-shuttling onto a local train via some backstreet parking spot in St Albans to avoid driving into the centre of London.

You win some, you lose some.

I got a motion accepted by the members at the UnlockDemocracy AGM on Saturday for electionleaflets.org to be done properly. This has the potential to bring some professionalism to the situation in time for the next election.

Then I spent 36 hours working and sleeping in the basement of the National Audit Office at the Accountability Hack 2014 on my project, with Becka getting predictably very bored at times.

The purpose of the project was to learn how to use PDF.js, which Francis told me about the day before.

I thought I had a good chance with it (being as it is completely practical and could be implemented by the Public Accounts Committee right away), but it did not even get an honourable mention. That honour went to Richard whose Parliamentary Bill analyser disclosed how many goats would need to be skinned to print out the Act, among other things. For more details, see my blogpost from six years ago: The vellum has got to go.

We met Rob for dinner who had a brain machine on the bookshelf, which Becka was very taken with. I can tell you that someone will be learning how to solder in the next couple of weeks, because that is the only way they are going to get one of their own.

Just when I thought it was over for the summer, there came a chance to go flying at Llangollen. It seems there are more hang-gliding conditions this year than kite-surfing conditions, which is not what I’d hoped.

I’ve begun various arduino experiments here in DoESLiverpool, which necessitated moving closer to Adrian’s desk on account of knowing no electronics, there being bugger all adequate instructions on how to wire anything up.

Oh yes, he says, obviously VCC is standard code for “power in” for that red square in the centre-left of the picture that contains a microSD card and requires 3.3V of power — even though this is nowhere stated and all the other circuits in this kit use 5V.

Whatever.

It’s not much of a standard when this is immediately contradicted by the thin thing on the bottom left of the picture (called a Jeenode) which labels its corresponding power pin “PWR”, and the low-power bluetooth blue board in the middle of the white panel which calls its power pin “VIN” for “voltage in”, and the red “real-time clock” thing above it which labels its power pin “5V”, which is so much better because: (a) it is immediately understandable by the man in the street, (b) it conveys the crucial information about the level of voltage required, and (c) it uses one fewer character when the labels are already too small to read without a magnifying glass, which I do not have but should get.

Well, no, actually, because the thing in the middle with the USB plug has two power pins on it, one called “5V” and the other called “3V3”, so they were forced to be sensible.

Of course, I’ll be proved wrong when I find a peripheral that contains both “VCC” and “VIN” pins.

Don’t get me started on all the other pin names, especially on the different arduino boards on which they’ve failed to mark out these all-important SPI pins that are either pins 11, 12 and 13, or pins 4, 1 and 3, or pins 51, 50 and 52, or you have to look it up on this handy diagram if you have a Jeenode.

I think electronics got off to a bad start from the very beginning when they decided that current flows in the opposite direction to the electrons. From then on it’s been seven human generations of miscodings and mistakes that have been adopted as conventions, resulting in something not unlike spelling in the English language: you can’t see the problem once you have gotten used to it.