This is a bit of a brain dump so that I don’t forget this little tidbit in future.

Scenario

You have a shiny new Samba 4 Active Directory domain controller (or two) responsible for the domain ad.youroffice.example.com. You have a couple of DNS servers that are responsible for the non-AD parts of the domain and for the parent, youroffice.example.com. To have everything go through one place, you’ve set these servers up with slave zones for ad.youroffice.example.com.

Joining your first Windows 7 client yields a message like this one. You’re able to resolve yourdc.ad.youroffice.example.com on the client, but not anything under the _msdcs subdomain.
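The likely culprit: Samba provisions _msdcs.ad.youroffice.example.com as a zone in its own right, so slaving only the parent AD zone leaves _msdcs unresolved. A sketch of the extra slave zone (BIND assumed; addresses and file paths here are placeholders, not a real config):

```
// On the office DNS servers. 192.0.2.10 is a placeholder for the DC.
zone "ad.youroffice.example.com" {
    type slave;
    file "slaves/ad.youroffice.example.com";
    masters { 192.0.2.10; };
};

// Samba serves _msdcs as a separate zone: slave it too.
zone "_msdcs.ad.youroffice.example.com" {
    type slave;
    file "slaves/_msdcs.ad.youroffice.example.com";
    masters { 192.0.2.10; };
};
```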

Some time back, Lenovo made the news with the Superfish fiasco. Superfish was a piece of software that intercepted HTTPS connections by way of a trusted root certificate installed on the machine. When the software detected a browser attempting to make an HTTPS connection, it would intercept it and make the connection on the browser’s behalf.

When Superfish negotiated the connection, it would generate a certificate for that website on the fly, which it would then present to the browser. This allowed it to spy on the web page content for the purpose of advertising.

Now Dell has been caught shipping an eDellRoot certificate on some of its systems; both laptops and desktops are affected. This morning I checked the two newest computers in our office, both Dell XPS 8700 desktops running Windows 7. Both were built on the 13th of October and shipped to us; both arrived on the 23rd of October, and both were taken out of their boxes, plugged in, and duly configured.

I pretty much had two monitors and two keyboards in front of me, performing the same actions on both simultaneously.

Following configuration, one was deployed to a user, the other was put back in its box as a spare. This morning I checked both for this certificate. The one in the box was clean, the deployed machine had the certificate present.

Dell’s dodgy certificate in action

How do you check on a Dell machine?

A quick way is to hit Logo+R (Logo = the “Windows key”, the “Command key” on a Mac, or whatever it is on your keyboard; some have a penguin), type certmgr.msc and press ENTER. Under “Trusted Root Certification Authorities”, look for “eDellRoot”.

Another way is, using IE or Chrome, try one of the following websites:

Future recommendations

It is clear that the manufacturers do not have their users’ interests at heart when they ship Windows with new computers. Microsoft has recognised this and now promotes Signature Edition computers, a move I happen to support. However, this should be the standard, not an option.

There are two reasons why third-party software should not be bundled with computers:

The user may not have a need or use for the software in question, either not requiring its functionality or preferring an alternative.

All non-trivial software is a potential security attack vector and must be kept up to date. The version released on the OEM image is guaranteed to be at least months old by the time your machine arrives at your door, and will almost certainly be out-of-date when you come to re-install.

So we wind up either spending hours uninstalling unwanted or out-of-date crap, or we spend hours obtaining a fresh clean non-OEM installation disc, installing the bare OS, then chasing up drivers, etc.

This assumes the OEM image is otherwise clean. It is apparent, though, that more than just demo software is being loaded onto these machines: malware is being shipped.

With Dell and Lenovo now both in on this act, it’s now a question of whether we can trust OEM installs. The evidence suggests that no, we can no longer trust such images, and must consider any OS installation not done by the end user as suspect.

The manufacturers have abused our trust. As far as convenience goes, we have been had. It is clear that an OEM-supplied operating system does not offer any greater convenience to the end user, and instead, puts them at greater risk of malware attack. I think it is time for this practice to end.

If manufacturers are unwilling to provide machines with images that would comply with Microsoft’s signature edition requirements, then they should ship the computer with a completely blank hard drive (or SSD) and unmodified installation media for a technically competent person (of the user’s choosing) to install.

Well, I’ve been thinking a lot lately about single board computers. There’s a big market out there. Since the Raspberry Pi, there’s been a real explosion in what’s available to the small end of town, the individual. Prior to this, development boards were mostly in the four-figure price range.

So we’re now rather spoiled for choice. I have a Raspberry Pi. There’s also the BeagleBone Black, Banana Pi, and several others. One gripe I have with the Raspberry Pi is the complete absence of any kind of analogue input. There’s an analogue line out; you can interface some USB audio devices (although I hear two is problematic), or you can get an I2S module.

There’s a GPU in there that’s capable of some DSP work, and a CLKOUT pin that can generate a wide range of frequencies. That sounds like the beginnings of a decent SDR. However, there’s one glitch: while I can use the CLKOUT pin to drive a mixer and the GPIOs to do band selection, there’s nothing that will take that analogue signal and sample it.

If I want something wider than audio frequencies (and even a 192kHz audio CODEC is not guaranteed to perform above ~20kHz), I have to interface via SPI, and the pickings there are somewhat slim. Then I read this article on a DIY single-board computer.

That got me thinking about whether I could do my own. At work we use the Technologic Systems TS-7670 single-board computers, and as nice as those machines are, they’re a little slow and RAM-limited. Something that could work as a credible replacement there would be nice too, the key needs being RS-485, Ethernet and an 85°C temperature rating.

Form factor is a consideration here, and I figured something modular, using either header pins or edge connectors would work. That would make the module easily embeddable in hobby projects.

Since all the really nice SoCs are BGA packages, I figured I’d first need to know how easily I could work with them. We’ve got a stack of old motherboards sitting in a cupboard that I figured I could raid for BGAs to play with, just to see first-hand how fine the pins were. Then a crazy thought came to me: maybe, for prototyping, I could do it dead-bug style?

The key thing here is being able to solder directly to a ball securely, then route the wire to its destination. I may need to glue the chip to a bit of grounded foil to keep the capacitance in check. So the first step, I figured, would be to try removing some components from the boards I had lying around to see this first-hand.

In amongst the boards was one old 386 motherboard that I initially mistook for a 286, minus the CPU. The empty PLCC socket is for an 80387 maths co-processor. The board was in the cupboard for a good reason: corrosion from the CMOS battery had pretty much destroyed key traces in one corner of the board.

Corrosion on a motherboard caused by a CMOS battery

I decided to take to it with the heat gun first. The above picture was taken post-heatgun, but you can see just how bad the corrosion was. The ISA slots were okay, and so were a stack of other useful IC sockets, ICs, passive components, etc.

With the heat gun at full blast, I’d just wave it over an area of interest until the board started to de-laminate, then with needle-nose pliers, pull the socket or component from the board. Sometimes the component simply dropped out.

At one point I heard a loud “plop”. Looking under the board, one of the larger surface-mounted chips had fallen off. That gave me an idea, could the 386 chip be de-soldered? I aimed the heat-gun directly at the area underneath. A few seconds later and it too hit the deck.

All in all, it was a successful haul.

Parts off the 386 motherboard

I also took apart an 8-bit ISA joystick card. It had some nice looking logic chips that I figured could be re-purposed. The real star though was the CPU itself:

Intel NG80386SX-20

The question comes up: what does one do with a crusty old 386 that’s nearly as old as I am? A quick search turned up this scanned copy of the Intel 80386SX datasheet. The chip has a 16-bit data bus with 23 bits’ worth of address lines (bit 0 is assumed to be zero). It requires a clock at double the chip’s operating frequency (there’s an internal divide-by-two); this particular chip runs internally at 20MHz. Nothing jumped out as being scary. Could I use this as a practice run for making an ARM computer module?

I also have some SIMMs lying around, but SDRAM modules look easier to handle, since the controllers on board synchronise with what would otherwise be the front-side bus. The datasheet does not give a minimum clock (although clearly it’s not DC; DRAM needs to be refreshed) and mentions a clock frequency of 33MHz when run at a CAS latency of 1. It just so happens that I have a 33MHz oscillator. There are a couple of nits in this plan though:

The SDRAM modules are 3.3V, the CPU is 5V: no problem, there are level-conversion chips out there.

The SDRAM modules are 64 bits wide. We’ll have to buffer the data through eight 8-bit registers, with writes doing a read-modify-write cycle, and use a 2-to-4 decoder driven by address bits 1 and 2 from the CPU to select the CE pins on two of the registers.

Each SDRAM module holds 32MB. We have a 23-bit address bus, which with 16-bit words gives us a total address space of 16MB. Solution: the old 8-bit computers of yesteryear used bank-switching to address more RAM/ROM than they had address lines for. We can interface an 8-bit register at I/O address 0x0000 (easily decoded with a stack of Schottky diodes and a NOT gate) to hold the remaining address bits, mapping the selected bank into the lower 8MB of physical memory. We then hijack the 386’s MMU to map the 8MB chunks, and use page faults to switch memory banks. (If we put the SRAM and ROM up in the top 1MB, this gives us ~7MB of memory-mapped I/O to play with.)
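The bank-switching arithmetic can be sketched as a toy model (my own numbers here: I’m assuming both 32MB modules are used, giving 64MB behind an 8MB window):

```python
# Toy model of the proposed bank-switching scheme, not real hardware.
SDRAM_TOTAL = 64 * 1024 * 1024   # two 32MB modules
WINDOW_SIZE = 8 * 1024 * 1024    # banked window in the CPU's address space
NUM_BANKS = SDRAM_TOTAL // WINDOW_SIZE   # 8 banks -> 3 bank-register bits

def sdram_address(bank, cpu_addr):
    """Translate a CPU address inside the banked window to an SDRAM address."""
    assert cpu_addr < WINDOW_SIZE, "address outside the banked window"
    assert bank < NUM_BANKS, "no such bank"
    return bank * WINDOW_SIZE + cpu_addr

# e.g. bank 3, CPU address 0x1234 lands at 3 * 8MB + 0x1234 in SDRAM:
print(hex(sdram_address(3, 0x1234)))   # -> 0x1801234
```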

So, no show stoppers. There’s an example circuit showing an ATMega8515 interfaced to a single SDRAM chip to drive a VGA interface, with some example code, commented in German. Unfortunately you’d learn more German in an episode of Hogan’s Heroes than I know, but I can sort-of figure out the sequences used to read from and write to the SDRAM chip. Nothing looks scary there either. This SDRAM tutorial seems to be a goldmine.

Thus, it looks like I’ve got enough bits to have a crack at it. I can run the 386 from that 33MHz brick, which will give me a chip running at 16.5MHz. Somewhere I’ve got the 40MHz brick from that motherboard lying around (I liberated it some time ago), but that can wait.

A first step would be to try interfacing the 386 chip to an AVR, feeding it instructions one step at a time to check that it’s still alive. Then the next steps should become clear.

Well, lately I’ve been doing a bit of work hacking the firmware on the Rowetel SM1000 digital microphone. For those who don’t know it, this is a hardware (microcontroller) implementation of the FreeDV digital voice mode: it’s a modem that plugs into the microphone/headphone ports of any SSB-capable transceiver and converts FreeDV modem tones to analogue voice.

I plan to set this unit of mine up on the bicycle, but there are a few nits that I had.

There’s no time-out timer

The unit is half-duplex

If there’s no timeout timer, I really need to hear the tones coming from the radio to tell me it has timed out. Others might find a VOX feature useful, and there’s active experimentation in the FreeDV 700B mode (the SM1000 currently only supports FreeDV 1600) which has been very promising to date.

Long story short, the unit needed a more capable UI, and importantly, it also needed to be able to remember settings across power cycles. There’s no EEPROM chip on these things, and while the STM32F405VG has a pin for providing backup-battery power, there’s no battery or supercapacitor, so the SM1000 forgets everything on shut down.

ST do have an application note on their website on precisely this topic. AN3969 (and its software sources) discuss a method for using a portion of the STM32’s flash for this task. However, I found their “license” confusing. So I decided to have a crack myself. How hard can it be, right?

There’s 5 things that a virtual EEPROM driver needs to bear in mind:

The flash is organised into sectors.

These sectors when erased contain nothing but ones.

We store data by programming zeros.

The only way to change a zero back to a one is to do an erase of the entire sector.

The sector may be erased a limited number of times.

So on this note, a virtual EEPROM should aim to do the following:

It should keep tabs on what parts of the sector are in use. For simplicity, we’ll divide this into fixed-size blocks.

When a block of data is to be changed, if the change can’t be done by changing ones to zeros, a copy of the entire block should be written to a new location, and a flag set (by writing zeros) on the old block to mark it as obsolete.

When a sector is full of obsolete blocks, we may erase it.

We try to put off doing the erase until such time as the space is needed.
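The ones-to-zeros rule in the list above is worth internalising; it can be modelled in a few lines (a sketch of NOR-flash programming semantics, not the real driver):

```python
# Programming NOR flash can only clear bits (1 -> 0); restoring a 1
# requires erasing the whole sector.
SECTOR_SIZE = 16384

def erased_sector():
    """A freshly erased sector contains nothing but ones."""
    return bytearray(b'\xff' * SECTOR_SIZE)

def program(sector, offset, data):
    """Program bytes: the stored value is the bitwise AND of old and new."""
    for i, b in enumerate(data):
        sector[offset + i] &= b

sector = erased_sector()
program(sector, 0, b'\x5a')   # 0xff & 0x5a == 0x5a: works as expected
program(sector, 0, b'\xa5')   # 0x5a & 0xa5 == 0x00, NOT 0xa5!
print(hex(sector[0]))         # -> 0x0
```

This is why a changed block must be re-written elsewhere and the old copy merely flagged obsolete.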

Step 1: making room

The first step is to make room for the flash variables. They will be directly accessible in the same manner as variables in RAM, however from the application point of view, they will be constant. In many microcontroller projects, there’ll be several regions of memory, defined by memory address. This comes from the datasheet of your MCU.

The MCU here is the STM32F405VG, which has 1MB of flash starting at address 0x08000000. This 1MB is divided into (in order):

Sectors 0…3: 16kB each, starting at 0x08000000

Sector 4: 64kB, starting at 0x08010000

Sectors 5 onwards: 128kB each, starting at 0x08020000

We need at least two sectors, as when one fills up, we will swap over to the other. Now it would have been nice if the arrangement were reversed, with the smaller sectors at the end of the device.

The Cortex-M4 CPU is basically hard-wired to boot from address 0; the BOOT pins on the STM32F4 decide how that gets mapped. The very first few words are the interrupt vector table, and it MUST be the thing the CPU sees first. Unless told to boot from external memory or system memory, address 0 is aliased to 0x08000000, i.e. flash sector 0. Thus, if you are booting from internal flash, you have no choice: the vector table MUST reside in sector 0.

Normally code and interrupt vector table live together as one happy family. We could use a couple of 128kB sectors, but 256kB is rather a lot for an EEPROM storing maybe 1kB of data tops. Two 16kB sectors are just dandy; in fact, we’ll throw in a third one for free since we’ve got plenty to go around.

However, the first sector will have to be reserved for the interrupt vector table, which will have that space to itself.

There’s rather a lot here, and so I haven’t reproduced all of it, but this is the same file as before at revision 2389, but a little further down. You’ll note the .isr_vector is pointed at the region called FLASH which is most definitely NOT what we want. The image will not boot with the vectors down there. We need to change it to put the vectors in the VECTOR region.

THAT’s better! Things will boot now. However, there is still a subtle problem that initially caught me out here. Sure, the shiny new .eeprom section is unpopulated, BUT the linker has helpfully filled it with zeros. We cannot program zeroes back into ones! Either we have to erase it in the program, or we tell the linker to fill it with ones for us. Thankfully, the latter is easy (stm32_flash.ld at 2395):

We have to do two things: one, tell the linker we want the region filled with the pattern 0xff; two, make sure the region actually gets emitted by telling the linker to write a 0xff as the very last byte. Otherwise it’ll think, “Huh? There’s nothing here, I won’t bother!” and leave it as a string of zeros.
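I won’t reproduce the whole file here, but the relevant part looks something like this (a reconstruction from memory; the region and section names are my guesses, not the actual stm32_flash.ld):

```
/* Assumes a MEMORY region named EEPROM covering the reserved sectors.
 * FILL() sets the padding pattern to 0xff, and the trailing BYTE()
 * forces the linker to emit the otherwise-empty section. */
.eeprom :
{
    . = ALIGN(4);
    KEEP(*(.eeprom))                            /* variables placed in .eeprom */
    FILL(0xff);
    . = ORIGIN(EEPROM) + LENGTH(EEPROM) - 1;    /* pad to the end... */
    BYTE(0xff)                                  /* ...and write the last byte */
} > EEPROM
```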

Step 2: Organising the space

Having made room, we now need to decide how to break this data up. We know the following:

We have 3 sectors, each 16kB

The sectors have an endurance of 10000 program-erase cycles

Give some thought as to what data you’ll be storing. This will decide how big to make the blocks. If you’re storing only tiny bits of data, more blocks makes more sense. If however you’ve got some fairly big lumps of data, you might want bigger blocks to reduce overheads.

I ended up dividing the sectors into 256-byte blocks. I figured that was a nice round (binary sense) figure to work with. At the moment, we have 16 bytes of configuration data, so I can do with a lot less, but I expect this to grow. The blocks will need a header to tell you whether or not the block is being used. Some checksumming is usually not a bad idea either, since that will clue you in to when the sector has worn out prematurely. So some data in each block will be header data for our virtual EEPROM.

If we didn’t care about erase cycles, we could just make all blocks data blocks; however, it’d be wise to track wear and avoid erasing and attempting to use a depleted sector, so we need somewhere to record this. 256 bytes gives us enough space to stash an erase counter and a map of which blocks are in use within that sector.

So we’ll reserve the first block in the sector to act as this index for the entire sector. This gives us enough room to have 16-bits worth of flags for each block stored in the index. That gives us 63 blocks per sector for data use.

It’d be handy to be able to use this flash region for a few virtual EEPROMs, so we’ll allocate some space for a virtual ROM ID. It is prudent to do some checksumming, and the STM32F4 has a CRC32 module, so in that goes. We might also choose not to use all of a block, so we should throw in a size field (8 bits, since the size can’t be bigger than 255). If we pad this out with a reserved byte, we get a header with the following structure:

Offset  Bits 15…8       Bits 7…0
+0      CRC32 Checksum
+2        (continued)
+4      ROM ID          Block Index
+6      Block Size      Reserved

So that subtracts 8 bytes from the 256 bytes, leaving us 248 for actual program data. If we want to store 320 bytes, we use two blocks, block index 0 stores bytes 0…247 and has a size of 248, and block index 1 stores bytes 248…319 and has a size of 72.

I mentioned there being a sector header, it looks like this:

Offset  Bits 15…0
+0      Program Cycles Remaining
+2        (continued)
+4        (continued)
+6        (continued)
+8      Block 0 flags
+10     Block 1 flags
+12     Block 2 flags
…

No checksums here, because it’s constantly changing. We can’t re-write a CRC without erasing the entire sector, we don’t want to do that unless we have to. The flags for each block are currently allocated accordingly:

Bits 15…1: Reserved
Bit 0: In use

When the sector is erased, all blocks show up with all flags set to ones, so the flags are considered “inverted”. When we come to use a block, we clear the “in use” bit to zero, leaving the rest as ones. When we obsolete a block, we set its entire flags word to zeros. We can use the other bits as needed for accounting purposes.

Thus we have now a format for our flash sector header, and for our block headers. We can move onto the algorithm.

Step 3: The Code

This is the implementation of the above ideas. Our code needs to worry about 3 basic operations:

reading

writing

erasing

This is good enough if the size of a ROM image doesn’t change (the normal case). For flexibility, I made my code work crudely like a file: you can seek to any point in the ROM image and start reading or writing, or you can blow the whole thing away.

Constants

It is bad taste to leave magic numbers everywhere, so constants should be used to represent some quantities:

VROM_SECT_SZ=16384:
The virtual ROM sector size in bytes. (Those watching Codec2 Subversion will note I cocked this one up at first.)

VROM_SECT_CNT=3:
The number of sectors.

VROM_BLOCK_SZ=256:
The size of a block, in bytes.

VROM_START_ADDR=0x08004000:
The address where the virtual ROM starts in Flash

VROM_START_SECT=1:
The base sector number where our ROM starts

VROM_MAX_CYCLES=10000:
Our maximum number of program-erase cycles

Our programming environment may also define some, for example UINTx_MAX.

Derived constants

VROM_BLOCK_CNT = VROM_SECT_SZ / VROM_BLOCK_SZ:
The number of blocks per sector, including the index block

VROM_SECT_APP_BLOCK_CNT = VROM_BLOCK_CNT - 1:
The number of application blocks per sector (i.e. total minus the index block)
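As a sanity check, the constants and derived figures above (in Python here for brevity, though the real driver is C); the results should agree with the “63 blocks per sector” arrived at earlier:

```python
# Constants from the text.
VROM_SECT_SZ    = 16384     # sector size in bytes
VROM_SECT_CNT   = 3         # number of sectors
VROM_BLOCK_SZ   = 256       # block size in bytes
VROM_MAX_CYCLES = 10000     # program-erase endurance

# Derived constants.
VROM_BLOCK_CNT = VROM_SECT_SZ // VROM_BLOCK_SZ   # blocks/sector, incl. index
VROM_SECT_APP_BLOCK_CNT = VROM_BLOCK_CNT - 1     # application blocks/sector

print(VROM_BLOCK_CNT, VROM_SECT_APP_BLOCK_CNT)   # -> 64 63
```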

CRC32 computation

I decided to use the STM32’s CRC module for this, which takes its data in 32-bit words. There’s also the complexity of checksumming a structure that includes its own CRC. I played around with Python’s crcmod module, but couldn’t find an arrangement that would let the CRC field stay in place during computation.

So I copy the entire block, headers and all, to a temporary copy on the stack, set the CRC field in the header to zero, then compute the CRC. Since I need to feed the data in as 32-bit words, I pack 4 bytes into a word, big-endian style. Where I have fewer than 4 bytes, the least-significant bits are left at zero.
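The packing step looks something like this (a Python model of what the C code does; the CRC itself comes from the STM32 peripheral, which as I understand it computes the CRC-32/MPEG-2 variant over 32-bit words):

```python
def pack_words(data):
    """Pack bytes into big-endian 32-bit words; a short final group is
    padded with zeros in the least-significant bits."""
    words = []
    for i in range(0, len(data), 4):
        chunk = data[i:i + 4]
        w = 0
        for b in chunk:
            w = (w << 8) | b
        w <<= 8 * (4 - len(chunk))   # pad the LSBs of a short chunk
        words.append(w)
    return words

# The block is copied, the CRC field zeroed, then the whole copy is fed
# through the CRC unit one word at a time.
print([hex(w) for w in pack_words(b'\x01\x02\x03\x04\x05')])
# -> ['0x1020304', '0x5000000']
```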

Locating blocks

We identify each block in an image by the ROM ID and the block index. We need to search for these when requested, as they can be located literally anywhere in flash. There are probably cleverer ways to do this, but I chose brute force: cycle through each sector and block, check whether the block is allocated (in the index), check the checksum, check whether it belongs to the ROM we’re looking for, then check whether it’s the right index.

Reading data

To read from the above scheme, having been told a ROM ID (rom), a start offset and a size (the latter two in bytes), and given a buffer we’ll call out, we first need to translate the start offset into a sector, a block index and a block offset. This is simple integer division and modulus.

The first and last blocks of our read, we’ll probably only read part of. The rest, we’ll read entire blocks in. The block offset is only relevant for this first block.

So we start at the block we calculate to have the start of our data range. If we can’t find it, or it’s too small, then we stop there, otherwise, we proceed to read out the data. Until we run out of data to read, we increment the block index, try to locate the block, and if found, copy its data out.
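The offset arithmetic can be sketched like so (a Python sketch, not the real C; 256-byte blocks minus the 8-byte header gives 248 data bytes per block, as above):

```python
VROM_DATA_PER_BLOCK = 256 - 8   # block size minus the 8-byte header

def read_chunks(offset, size):
    """Yield (block_index, offset_in_block, length) tuples covering a byte
    range, splitting it across 248-byte data blocks.  Only the first
    block has a non-zero starting offset."""
    block, block_off = divmod(offset, VROM_DATA_PER_BLOCK)
    while size > 0:
        length = min(VROM_DATA_PER_BLOCK - block_off, size)
        yield (block, block_off, length)
        size -= length
        block += 1
        block_off = 0

# The 320-byte example from earlier: block 0 holds bytes 0…247 (248 bytes),
# block 1 holds bytes 248…319 (72 bytes).
print(list(read_chunks(0, 320)))   # -> [(0, 0, 248), (1, 0, 72)]
```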

Writing and Erasing

Writing is a similar affair: we look for each block, and if we find one, we overwrite it by copying the old data to a temporary buffer, copying our new data in over the top, then marking the old block as obsolete before writing the new one out with a new checksum.

The trickery is in invoking the wear-levelling algorithm on an as-needed basis. We mark a block obsolete by setting its header fields to zero, but when we run out of free blocks, we go looking for sectors that are full of obsolete blocks waiting to be erased. When we encounter a sector that has been erased, we write a new header at its start and proceed to use its first data block.

In the case of erasing, we don’t bother writing anything out, we just mark the blocks as obsolete.

Implementation

The full C code is in the Codec2 Subversion repository. For those who prefer Git, I have a git-svn mirror (yes, I really should move it off that domain). The code is available under the Lesser GNU General Public License v2.1 and may be ported to run on any CPU you like, not just ST’s.

Well, I just had a “fun” afternoon. For the past few weeks, the free DNS provider I was using, yi.org, has been unresponsive. I had sent numerous emails to the administrator of the site, but heard nothing. Fearing the worst, I decided it was time to move. I looked around, and found I could get an id.au domain cheaply, so here I am.

I’d like to thank Tyler MacDonald for providing the yi.org service for the last 10 years. It helped a great deal and, until recently, was a really great service. I’d still recommend it to people were the site still up.

So, I put the order in on a Saturday, and the domain was brought online on the Monday evening. I slowly moved my Internet estates across to it: redirecting old URLs to new ones, making the old email address an alias of the new one, moving mailing list subscriptions over, etc. Most of the migration would take place this weekend, when I’d set things up properly.

One of the things I thought I’d tackle was DNSSEC. There are a number of guides, and I followed this one.

Preparations

Before doing anything, I installed dnssec-tools as well as the dependencies, bind-utils and bind. I had to edit some things in /etc/dnssec-tools/dnssec-tools.conf to adjust some paths on Gentoo, and to set preferred signature options (I opted for RSASHA512 signatures, 4096-bit key-signing keys and 2048-bit zone-signing keys).

Getting the zone file

I constructed a zone file using what I could extract using dig:

The following is a dump of more or less what I got. Obviously the nameservers were for my domain registrar initially and not the ones listed here.

Signing the zone

Next step, is to create domain keys and sign it.

$ zonesigner -genkeys longlandclan.id.au

This generates a heap of files. Apart from the keys themselves, two are important as far as your DNS server is concerned: dsset-longlandclan.id.au. and longlandclan.id.au.signed. The former contains the DS records that you’ll need to give to your registrar; the latter is what your DNS server needs to serve up.

Updating DNS

I figured the safest bet was to add the domain records first, then come back and do the DS keys, since there’s a warning that messing with those can break the domain. At this time I had Zuver (my registrar) hosting my DNS, so over I trundled to add a record to the zone, only to discover there weren’t any options there to add the needed records.

“Okay, maybe they’ll appear when I add the DS keys,” I think. Their DS key form looks like this:

Turns out the 12345 goes by a number of names, such as “key ID” and, in the Zuver interface, “key tag”. So in they went. The record is literally of the form:

${DOMAIN} IN DS ${KEY_ID} ${ALGO} ${DIGEST_TYPE} ${DIGEST}

The digest, if it has spaces, is to be entered without spaces.

Oops, I broke it!

So, having added these keys, I noted (as I thought might happen) that the domain stopped working. I found I still couldn’t add the records, so I now had to (quickly) move my DNS over to another DNS server, one that permitted these kinds of records. I figured I’d host it myself and get someone to act as a secondary.

First step was to take that longlandclan.id.au.signed file, throw it into the BIND server’s data directory and point named.conf at it. To make sure you can hook a slave up to it, create an ACL that matches the IP addresses of your possible slaves, and add that to the allow-transfer option for the zone:
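Something along these lines (a sketch from memory; the addresses and file paths are placeholders, not my real config):

```
acl "slaves" {
    192.0.2.53;            // placeholder: your slave's address(es)
    198.51.100.53;
};

zone "longlandclan.id.au" {
    type master;
    file "data/longlandclan.id.au.signed";
    allow-transfer { "slaves"; };
};
```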

Make sure that from another machine in your network, you can run dig +tcp axfr @${DNS_IP} ${DOMAIN} and get a full listing of your domain’s contents.

I really needed a slave DNS server and so went looking around, finding one in BuddyNS. I then spent the next few hours arguing with BIND as to whether it was authoritative for the domain or not. Long story short: when you restart BIND, make sure you restart ALL instances of it. In my case I found a rogue instance running with the old configuration.

BuddyNS was fairly simple to set up (once BIND worked). You basically sign up, pick two of their DNS servers, and submit those to your registrar as the authoritative servers for your domain. I ended up picking two DNS servers, one in the US and one in Adelaide. I also added an alias to my host using my old yi.org domain.

Adding nameservers

Working again

After doing that, my domain worked again, and DNSSEC seemed to be working. There are a few tools you can use to test it.

Updating the zone later

If for whatever reason you wish to update the zone, you need to sign it again. In fact, you’ll need to sign it periodically as the signatures expire. To do this:

$ zonesigner longlandclan.id.au

Note the lack of -genkeys.

My advice to people trying DNSSEC

Before proceeding, make sure you know how to set up a DNS server, so you can pull yourself out of the crap if it comes your way. Setting this up with some registrars is a one-way street: once you’ve added keys, there’s no removing them or going back; you’re committed.

Once domain signing keys are submitted, the only way to make that domain work will be to publish the signed record sets (RRSIG records) in your domain data, and that will need a DNS server that can host them.

This is more a quick dump of some proof-of-concept code. We’re in the process of writing communications drivers for an energy management system, many of which need to communicate with devices like Modbus energy meters.

Traditionally I’ve just used the excellent pymodbus library with its synchronous interface for batch-processing scripts, but this time I need real-time and I need to do things asynchronously. I can either run the synchronous client in a thread, or, use the Twisted interface.

We’re actually using Tornado for our core library, and thankfully there’s an adaptor module to allow you to use Twisted applications. But how do you do it? Twisted code requires quite a bit of getting used to, and I’ve still not got my head around it. I haven’t got my head fully around Tornado either.

So how does one combine these?

The following code pulls out the first couple of registers out of a CET PMC330A energy meter that’s monitoring a few circuits in our office. It is a stripped down copy of this script.

#!/usr/bin/env python
'''
Pymodbus Asynchronous Client Examples -- using Tornado
--------------------------------------------------------------------------
The following is an example of how to use the asynchronous modbus
client implementation from pymodbus.
'''
#---------------------------------------------------------------------------#
# import needed libraries
#---------------------------------------------------------------------------#
import tornado
import tornado.ioloop
import tornado.platform.twisted
tornado.platform.twisted.install()
from twisted.internet import reactor, protocol
from pymodbus.constants import Defaults
#---------------------------------------------------------------------------#
# choose the requested modbus protocol
#---------------------------------------------------------------------------#
from pymodbus.client.async import ModbusClientProtocol
#from pymodbus.client.async import ModbusUdpClientProtocol
#---------------------------------------------------------------------------#
# configure the client logging
#---------------------------------------------------------------------------#
import logging
logging.basicConfig()
log = logging.getLogger()
log.setLevel(logging.DEBUG)
#---------------------------------------------------------------------------#
# example requests
#---------------------------------------------------------------------------#
# simply call the methods that you would like to use. An example session
# is displayed below. Note that unlike the synchronous version of the
# client, the asynchronous version returns deferreds, which can be
# thought of as a handle to the callback that will receive the result
# of the operation. We handle the result by attaching callbacks to the
# deferred.
#---------------------------------------------------------------------------#
def beginAsynchronousTest(client):
    io_loop = tornado.ioloop.IOLoop.current()

    def _dump(result):
        logging.info('Register values: %s', result.registers)

    def _err(result):
        logging.error('Error: %s', result)

    rq = client.read_holding_registers(0, 4, unit=1)
    rq.addCallback(_dump)
    rq.addErrback(_err)

    #-----------------------------------------------------------------------#
    # close the client at some time later
    #-----------------------------------------------------------------------#
    io_loop.add_timeout(io_loop.time() + 1, client.transport.loseConnection)
    io_loop.add_timeout(io_loop.time() + 2, io_loop.stop)
#---------------------------------------------------------------------------#
# choose the client you want
#---------------------------------------------------------------------------#
# make sure to start an implementation to hit against. For this
# you can use an existing device, the reference implementation in the tools
# directory, or start a pymodbus server.
#---------------------------------------------------------------------------#
defer = protocol.ClientCreator(
    reactor, ModbusClientProtocol).connectTCP("10.20.30.40", Defaults.Port)
defer.addCallback(beginAsynchronousTest)
tornado.ioloop.IOLoop.current().start()

… or how to emulate Red Hat’s RPM dependency hell in Debian with Python.

There are times I love open source systems and times when it’s a real love-hate relationship. Nowhere is this more true than when trying to build Python module packages for Debian.

On Gentoo this is easy: in the past we had g-pypi. I note that’s gone now, replaced by a g-sorcery plug-in called gs-pypi. Both work. The latter is nice because it gives you an overlay with potentially every Python module on PyPI.

Building packages for Debian in general is fiddly but not difficult, and most Python packages follow the same structure: a script, setup.py, calls on distutils and provides a package builder and installer. You call it with some arguments, it builds the package and plops the files in the right place for dpkg-buildpackage, and the output gets bundled up in a .deb.

Easy. There’s even a helper script, stdeb, that plugs into distutils and does the Debian packaging for you. However, stdeb will not source dependencies for you. You must do that yourself.
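For what it’s worth, the stdeb invocation itself is simple. A sketch, assuming you’re sitting in an unpacked source tree that has a setup.py (the bdist_deb command is the one stdeb’s documentation describes):

# stdeb registers extra distutils commands; bdist_deb runs the whole
# debianisation in one go and drops the result under deb_dist/:
python setup.py --command-packages=stdeb.command bdist_deb

# The finished package:
ls deb_dist/*.deb

None of this, of course, helps with the dependencies — those are still on you.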

So, quickly, building a package for Debian becomes reminiscent of re-living the bad old days of early Red Hat Linux releases, prior to yum and apt4rpm, when the RPM you had just obtained needed another that you had to hunt down from somewhere.

Then you get the people who take the view: why have just one package builder when you can have two? fysom needs pybuilder to build. No problem, I’ll just grab that. I checked it out of GitHub, and uh oh: it uses itself to build, and it needs other dependencies.

Lovely. It gets better, though: those dependencies need pybuilder to build. I just love circular dependencies!

So, as it turns out, in order to build this you’ll need to enlist pip to install those dependencies behind Debian’s back (I just love doing that!), and then you’ll have what you need to actually build pybuilder and, ultimately, fysom.

There have been reports of web browser sessions between people outside China and websites inside China being hijacked to inject malware. Dubbed the “Great Cannon”, this malware has the sole purpose of carrying out distributed denial of service attacks on websites that the Chinese Government attempts to censor from its people. Whether it is the Government itself doing this deliberately, or someone who has hijacked major routing equipment, is fundamentally irrelevant here: either way, the owner of said equipment needs to be found and a stop put to this malware.

I can understand that you wish to prevent people within your borders from accessing certain websites, but let me make one thing abundantly clear.

COUNT ME OUT!

I will not accept my web browser, which is OUTSIDE China, being hijacked and used as a mule for carrying out your attacks. It is illegal for me to carry out these attacks, and I do not authorise the use of my hardware or Internet connection for this purpose. If this persists, I will block executable code from any and all Chinese-owned websites in my browser.

This will hurt Chinese business more than it hurts me. If you want to ruin yourselves economically, go ahead, it’ll be like old times before the Opium Wars.

This afternoon, whilst waiting for a build job to complete I thought I’d do some further analysis on my annual mileage.

Now, I don’t record my odometer readings daily (perhaps I should), but I do capture them every Sunday morning. So I can reasonably assume that the distance covered on each day of a “run” is the total distance divided by the number of days. I’m using a SQLite3 database to track this; the question is, how do I extract this information?

This turned out to be the key to the answer. I needed to enumerate all the days between two points. SQLite3 has a julianday function, and with that I have been able to extract the information I need.
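As a quick illustration of what julianday gives you (checked here from Python’s sqlite3 module, the dates being arbitrary): it turns an ISO-8601 date into a fractional count of days, so consecutive dates are exactly 1.0 apart and date arithmetic becomes plain subtraction.

import sqlite3

db = sqlite3.connect(":memory:")

# julianday() converts a date string into a Julian Day number (a REAL);
# midnight falls on the .5 boundary, so whole dates differ by whole days.
(jd1, jd2, diff), = db.execute(
    "SELECT julianday('2015-01-01'), julianday('2015-01-08'), "
    "julianday('2015-01-08') - julianday('2015-01-01')")
print(jd1, jd2, diff)  # 2457023.5 2457030.5 7.0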

Then there are the views.

CREATE VIEW run_id AS
    SELECT s.rowid AS start_id,
           (SELECT rowid FROM odometer
             WHERE bike_id = s.bike_id
               AND timestamp > s.timestamp
               AND action = 'stop'
             ORDER BY timestamp ASC LIMIT 1) AS stop_id
      FROM odometer AS s
     WHERE s.action = 'start';

CREATE VIEW "run" AS
    SELECT start.timestamp AS start_timestamp,
           stop.timestamp AS stop_timestamp,
           start.bike_id AS bike_id,
           start.odometer AS start_odometer,
           stop.odometer AS stop_odometer,
           stop.odometer - start.odometer AS distance,
           julianday(start.timestamp) AS start_day,
           julianday(stop.timestamp) AS stop_day
      FROM (run_id JOIN odometer AS start ON run_id.start_id = start.rowid)
           JOIN odometer AS stop ON run_id.stop_id = stop.rowid;

The first view breaks up the start and stop events, and gives me row IDs for where each “run” starts and stops. I then use that in my run view to calculate distances and timestamps.

Here’s where the real voodoo lies: to enumerate days, I start at the very first timestamp in my dataset, find the Julian Day for that, then keep adding one day until I get to the last timestamp. That gives me a list of Julian days that I can marry up with the data in the run view.

CREATE VIEW distance_by_day AS
SELECT day_of_year, avg_distance FROM (
    SELECT days.day - julianday(date(days.day, 'start of year')) AS day_of_year,
           sum(run.distance / max((run.stop_day - run.start_day), 1)) / count(*)
               AS avg_distance
      FROM run,
           (WITH RECURSIVE
                days(day) AS (
                    SELECT julianday((SELECT min(timestamp) FROM odometer))
                    UNION ALL
                    SELECT day + 1 FROM days
                    LIMIT cast(round(
                        julianday((SELECT max(timestamp) FROM odometer))
                        - julianday((SELECT min(timestamp) FROM odometer))) AS int)
            ) SELECT day FROM days) AS days
     WHERE run.start_day <= days.day
       AND run.stop_day >= days.day
     GROUP BY day_of_year) dist_by_doy;
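To see the run_id and run views in action, here’s a minimal, self-contained sketch using Python’s sqlite3 module. The odometer schema (bike_id, timestamp, odometer, action) is inferred from the columns the views reference, and the sample readings are made up:

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE odometer (bike_id INT, timestamp TEXT, odometer REAL, action TEXT);

-- One "run": started on a Sunday, stopped the following Sunday, 70 km covered.
INSERT INTO odometer VALUES (1, '2015-06-07', 1000.0, 'start');
INSERT INTO odometer VALUES (1, '2015-06-14', 1070.0, 'stop');

-- Pair each 'start' row with the next 'stop' row for the same bike.
CREATE VIEW run_id AS
    SELECT s.rowid AS start_id,
           (SELECT rowid FROM odometer
             WHERE bike_id = s.bike_id
               AND timestamp > s.timestamp
               AND action = 'stop'
             ORDER BY timestamp ASC LIMIT 1) AS stop_id
      FROM odometer AS s
     WHERE s.action = 'start';

-- Join the pairs back to the readings to get distances and Julian days.
CREATE VIEW run AS
    SELECT stop.odometer - start.odometer AS distance,
           julianday(start.timestamp) AS start_day,
           julianday(stop.timestamp) AS stop_day
      FROM (run_id JOIN odometer AS start ON run_id.start_id = start.rowid)
           JOIN odometer AS stop ON run_id.stop_id = stop.rowid;
""")

for distance, start_day, stop_day in db.execute(
        "SELECT distance, start_day, stop_day FROM run"):
    print("%.0f km over %.0f days = %.0f km/day"
          % (distance, stop_day - start_day, distance / (stop_day - start_day)))
# prints: 70 km over 7 days = 10 km/day

That per-day figure is exactly what the distance_by_day view then averages across years, one Julian day at a time.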

I’ve been a long-time user of PGP, having had a keypair since about 2003. OpenPGP has a nice advantage in that it’s a more social arrangement: verification is done by physically meeting people. I think it is more personal that way.

However, you can still get isolated islands. My old key was a branch of the strong set, having been signed by one person who did a lot of key-signing, but sadly, thanks to Heartbleed, I couldn’t trust it anymore. So I’ve had to start anew.

The alternative way to secure communications is to use some third party, like a certificate authority, and use S/MIME. This is the other side of the coin: a company verifies who you are, and that company is then entrusted to do its job properly. If you trust the company’s certificate in your web browser or email client, you implicitly trust every valid, non-revoked certificate that company has signed. As such, there is a proliferation of companies acting as CAs, and a typical web browser will come with a list as long as your arm/leg/whatever.

I’ve just set up one such certificate for myself, using StartCOM‘s CA as the authority. If you trust StartCOM, and want my GPG key, you’ll find a S/MIME signed email with my key here. If you instead trust my GPG signature and want my S/MIME public key, you can get that here. If you want to throw caution to the wind, you can get the bare GPG key or S/MIME public key instead.

Update: I noticed GnuPG 2.1 has been released, so I now have an ECDSA key; fingerprint B8AA 34BA 25C7 9416 8FAE F315 A024 04BC 5865 0CF9. You may use it or my existing RSA key if your software doesn’t support ECDSA.