Let’s say you have a Raspberry Pi running a web site (like this one) and want to have a backup.

It turns out with a little netcat magic, you can do just that.

1. ssh to the Pi, and run this (for Raspbian or other Linux distros)

sudo netcat -l 2222 < /dev/mmcblk0

or, for FreeBSD

sudo netcat -l -p 2222 < /dev/mmcsd0

2. On your “regular” computer (in my case a Mac mini), run this

netcat think.random-stuff.org 2222 > ~/Documents/pi.img

3. Grab a new micro SD card and copy the pi.img image onto it. You can see how here. I use df to figure out which /dev/disk device is the SD card, then use Disk Utility to unmount anything on that card, then I use dd to do the copy, e.g.

sudo dd if=~/Documents/pi.img of=/dev/rdisk4 bs=1m

4. Grab a spare Pi, plug in the SD card, plug it into a power source and let it boot.

5. Make sure you can load a page on the web site, then unplug the network cable from the old Pi and plug it into the new Pi. Now make sure you can reload the page. Or load a different page.

6. Profit.
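If you’d rather script the receiving end than remember the netcat invocation, step 2 can also be a few lines of Python. This is just a sketch; the host name and port are the ones from the example above, and nothing about it is specific to the Pi:

```python
import socket

def receive_image(host, port, out_path, bufsize=1 << 20):
    """Connect to the Pi's netcat listener and stream the SD card
    contents into a local image file. Returns bytes received."""
    total = 0
    with socket.create_connection((host, port)) as sock:
        with open(out_path, "wb") as out:
            while True:
                chunk = sock.recv(bufsize)
                if not chunk:  # listener closed the connection: done
                    break
                out.write(chunk)
                total += len(chunk)
    return total

# Equivalent of step 2 above:
# receive_image("think.random-stuff.org", 2222, "pi.img")
```

Scripting it this way makes it easy to add a progress printout or kick off the dd step automatically once the byte count stops growing.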

Considerations…

If your Pi is in the middle of some database operations or someone happens to be updating the website while you’re doing the netcat, the result might not be great. If you find that your copy isn’t working, try shutting down any services like MySQL or Apache before you do the netcat.

Security! If you set up netcat like this and don’t prevent other people from getting at port 2222 (or whatever port you pick), someone else can beat you to it and slurp out your entire SD card’s contents.

Your new SD card has to have the same or greater capacity. Note that not all SD cards labeled 8GB or 16GB, etc. are the same size. Either make sure you have two of the same brand and model, or you might have to use a bigger one for the clone.
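Because of that size variation, it’s worth comparing raw byte counts before committing to dd; `ls -l` gives the image’s size and `diskutil info` gives the card’s. The check itself is trivial (the byte counts below are made up purely for illustration):

```python
def image_fits(image_bytes, card_bytes):
    """True when the saved image will fit on the target card."""
    return image_bytes <= card_bytes

# Two cards both sold as "16GB" can differ by hundreds of MB:
print(image_fits(15931539456, 15931539456))  # identical model: True
print(image_fits(15931539456, 15715598336))  # slightly smaller card: False
```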

Using the raw device is maybe a hold-over from my old UNIX days; back then the raw device tended to be faster than the block device. These days I think using /dev/disk4 vs. /dev/rdisk4 (or whatever number yours is) is pretty much the same.

This is not a substitute for a regular backup scheme. I just want to be able to have something like a hot spare that I can pop in and get running almost instantly if something goes wrong. Cloning once a week (or in the case of this blog, every few years the way I’ve been going) is good enough. Testing the clone every so often is a great idea, too.

I’m one step further along on the quest for the Holy Grail of kiosk displays. We use a lot of iPads here at the museum; most of them are either in Lab Shield brackets or built into cabinets. The whole idea behind building a kiosk is to make it hard for people to break into your device, so naturally the power and home buttons are behind lock and key.

The trouble is, there are a few quirks in the iPads. One is that after running for a few days, the display seems to lose its sensitivity to touch. The only way so far to deal with that seems to be to open the bracket, hit the power button to make it sleep, then hit it again to make it wake up. Then launch the app again and lock the whole thing back up. Not something the visitor services people want to do a lot. Nor I.

Sometimes visitors also break out of the kiosk app (we’re using iCab Mobile, an absolutely fantastic app, by the way). I have no idea how, but sometimes the iPad will be doing something completely different. One of ours now has a spiffy new home screen background thanks to someone who got in.

I’m very hopeful that my newly found method of remote booting/app launching will help at least take some of the pain out of the reset process. Here’s how:

Set up Activator to launch whatever app (e.g. iCab Mobile) you want to have start up at boot time. Hook the app to the “Anywhere -> Power -> Connected” event.

If you’re still ssh’ed in from step 5, type ‘reboot’ and hit return.

Voilà. Your iPad will reboot, and if you’re crossing your fingers just right, the app will launch after it boots. It seems to do this pretty reliably unless you do it too many times in too short a time.

I’ve set up ssh keys for the iPad so that I can run a command like this from my desktop machine:

ssh -i .ssh/id_rsa_ipadkiosk root@192.168.1.52 reboot

Next up on my list is to build a web app that lets us reboot any iPad. Then we can carry around an iPad running the web app, and reboot troublesome exhibits with the swipe of a finger.
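Until that web app exists, the ssh one-liner above is easy to wrap in a little Python, which is roughly how I’d expect such a tool to invoke it. The key path, user, and address here are just the values from the example, not anything canonical:

```python
import subprocess

def reboot_command(ip, key=".ssh/id_rsa_ipadkiosk", user="root"):
    """Build the ssh command that reboots one kiosk iPad."""
    return ["ssh", "-i", key, "%s@%s" % (user, ip), "reboot"]

def reboot_ipad(ip):
    """Actually run it; returns ssh's exit status."""
    return subprocess.call(reboot_command(ip))

# e.g. reboot_ipad("192.168.1.52")
```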

Update: I should note that for this to work, the iPad has to be plugged in to a power source. During the boot sequence the hardware must sense the power source and generate the same event that gets generated if you plug it in after it’s booted.

I’m still on the quest for the perfect free kiosk app for Mac OS X. I’ve used Plainview to good effect in several exhibits. I’ve used Opera in one, and I’m using Firefox with R-Kiosk in another. Which program to use depends a lot on the nature of the web page(s) to be displayed in the kiosk and the way the user expects to interact with the kiosk.

For the Firefox kiosk, I needed a sure-fire way to reload the home page after a certain amount of idle time. The trouble is that Firefox doesn’t lend itself to being controlled via AppleScript. A lot of poking around led me to learn that AppleScript System Events can simulate a keypress, that Option-Home will make Firefox load the home page, and that there’s a really handy little application called Full Key Codes that tells you what the key code is for any key you press on a Mac.

That let me put together this handy little Python script that watches how long it’s been since a user has done something. First it waits until there’s been any user activity at all, then it waits until there’s been no activity for 90 seconds. Then it uses osascript to run a little bit of AppleScript that sends Option-Home (key code 115) to Firefox.

I suspect there’s a way to eliminate the osascript by using the Python objc module, but this is good enough for me…

#!/usr/bin/env python
#========================================================================
#
# idle.py - makes Firefox load home page after user inactivity
#
# Set the value of 'timeout' to the number of seconds of idle
# time you want to allow before reloading the home page
#
# Comment out the calls to status() if you don't want to see any
# messages
#
#========================================================================
from subprocess import Popen
from time import sleep

from Quartz.CoreGraphics import *

# From /System/Library/Frameworks/IOKit.framework/Versions/A/Headers/hidsystem/IOLLEvent.h
NX_ALLEVENTS = int(4294967295)  # 32-bits, all on.

# Set this value to the number of seconds of idle time before
# resetting Firefox
timeout = 90.0


def status(state, last, idle):
    """Print out some info about where we are"""
    print "%10s %.1f %.1f" % (state, last, idle)


def getIdleTime():
    """Get number of seconds since last user input"""
    idle = CGEventSourceSecondsSinceLastEventType(1, NX_ALLEVENTS)
    return idle


def doReset():
    """Use osascript to tell Firefox to reload the home page"""
    reset = """/usr/bin/osascript \
        -e 'tell application "Firefox"' \
        -e '  activate' \
        -e 'end tell' \
        -e 'tell application "System Events"' \
        -e '  key code 115 using option down' \
        -e 'end tell' """
    Popen(reset, shell=True)
    sleep(1)  # Prevent osascript keypress from triggering below


last = 0.0
idle = 0.0
while True:
    # These two lines can also be at the very end if you don't want to
    # load the home page once when the program starts
    status('--- RESET', last, idle)
    doReset()

    idle = getIdleTime()
    last = idle

    # Wait for user activity
    while idle >= last:
        status('wait', last, idle)
        sleep(1)
        idle = getIdleTime()

    # Wait for idle to become bigger than timeout
    while idle < timeout:
        status('triggered', last, idle)
        sleep(1)
        idle = getIdleTime()

I just discovered something very cool. As an only occasional coder (Python, mostly, when I get the chance to write code), I don’t have a very good grasp of where all the header files for Mac OS X frameworks live.

Let’s say that you want to know where CGEventSourceSecondsSinceLastEventType is defined. Just hunt for it in Spotlight and it will turn up all the places it shows up in header files.
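If you’d rather script that hunt than click around in Spotlight, a simple directory walk does the same job. The /System/Library/Frameworks path is the old-style location mentioned above; this is a sketch, and newer systems bury the headers inside Xcode instead:

```python
import os

def find_in_headers(symbol, root):
    """Return paths of .h files under root that mention symbol."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".h"):
                path = os.path.join(dirpath, name)
                try:
                    with open(path, errors="ignore") as f:
                        if symbol in f.read():
                            hits.append(path)
                except OSError:
                    pass  # unreadable file: skip it
    return hits

# e.g. find_in_headers("CGEventSourceSecondsSinceLastEventType",
#                      "/System/Library/Frameworks")
```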

At the museum, we’re running a site where we want people to comment but where we’re also sitting ducks for spam comments. Trouble is, the site is running behind a web proxy. That means that all the comments are seemingly from the same IP address, namely that of the proxy host. That, in turn, prevents any meaningful spam detection.

I poked around the web a while, looking for a way to get the real IP addresses, and finally rolled my own solution.

It boils down to this: if you’re running behind an up-to-date Apache server that’s doing the proxying for you, all of the incoming HTTP requests should have the X-Forwarded-For header set to the originating client’s IP address.
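For illustration only (this is not the actual functions.php snippet, which isn’t reproduced here), picking the client address out of that header looks like this; the header can carry a comma-separated chain of addresses when a request crosses several proxies, and the original client comes first:

```python
def client_ip(x_forwarded_for, fallback="0.0.0.0"):
    """Return the originating client's address from an
    X-Forwarded-For value, or a fallback if the header is absent."""
    if not x_forwarded_for:
        return fallback
    return x_forwarded_for.split(",")[0].strip()

print(client_ip("203.0.113.9, 10.1.2.3"))  # prints 203.0.113.9
print(client_ip(None, "192.0.2.1"))        # no header: prints 192.0.2.1
```

One caution: clients can forge this header, so it’s only trustworthy when the request really did come through your own proxy.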

Once I verified that this is the case, I put this snippet of code into my functions.php file.

Then don’t do what I did. After googling to no avail, I went so far as to completely reinstall Mac OS X. No good.

[Update: Actually, what I did was clone a different system that I had recently set up and used it as the “new installation”. Had I done a total, from DVD reinstallation, it would have fixed the problem but I wouldn’t have discovered the cause.]

The answer? Make sure you didn’t set the Remote Login preferences in System Preferences/Sharing to “Only these users” and then forget to add the new user to the list!

D’oh.

Hopefully this will help the next person who’s looking for the answer.

I needed to do something about the Mac minis that were accumulating on the table in my office. Digging around, I found this Rubbermaid organizer on Amazon.

It turns out to be nearly perfect. The unit is very sturdy, was easy to put together, and the shelf height is just right. There’s enough clearance for airflow but not so much that you feel space is being wasted.

I used self-stick cable tie anchors and cable ties to mount the power bricks and used double-stick mounting tape as stops to keep things in place. The old-style minis are heavy enough and are pretty non-slip, so I just put some tape at the front of the shelf to keep them from sliding off. The one new-style mini was pretty slippery so I used the tape to actually stick the base to the shelf.

The unit came with vertical rods that go in the back of each column of shelves to keep them from sliding out the back, but I decided to leave those out. That way I can slide each shelf forward to get DVDs into the mini, or back to get at the connectors.

The weak spot of the minis is the power cord (at least on the pre-2010 models), which comes out quite easily. I tied those down as well and am pretty sure they won’t jiggle their way out. I have four minis in the rack right now, along with a Drobo with 10TB of disk. I’m going to be adding a fifth mini with a stackable disk drive; that’s why there’s a double-high slot still open on the mini side of the rack.

Cable management is an issue, mostly because of the power bricks’ long cables. I may fiddle with how I fold the cables into the shelves a bit more.

The whole thing plus a UPS and monitor/keyboard/mouse sits nicely on some steel shelves in our A/V equipment room at the museum. I still need to time how long the UPS runs. I’m only going to have the public web site minis on it.

I moved this blog from one of the Mac minis in my basement to the other (I’m trying to put everything on the newer one to free the other one up) yesterday. Originally I had been blogging using Plone (from about 2005-2007) and then moved to WordPress. Moving the Plone part seemed like it was more work than I wanted to put in, so it’s goodbye to those posts.
OGC (re)discovers URLs, but let’s tighten up the terminology a bit
http://think.random-stuff.org/posts/ogc-rediscovers-urls-but-lets-tighten-up-the-terminology-a-bit
Fri, 02 Jul 2010

I had seen this tidbit that Sean Gillies writes about in the recent OGC newsletter. My thoughts were along the lines of Sean’s. I never understood the big deal behind URNs.

EDIT: Forget the semi-rant, see the comments, and then go read about URI…

But in re-reading Sean’s post and the OGC news coming out of the June 2010 meetings, I think the terminology is a bit imprecise. Too bad the source document, 10-124r1 isn’t available on the OGC web site (promised for mid-July, I see) to see if the issue is in the document or in the news page. Here’s the news page version:

‘OGC Identifiers – the case for http URIs’

The OGC Members approved release of ‘OGC Identifiers – the case for http URIs’ [OGC 10-124r1] as an OGC Whitepaper. According to the current OGC policy, either URNs or http URIs may be used in OGC standards. However, the use of http URIs (a) resolves some deployment challenges and (b) provides an opportunity for easier engagement with broader communities. So OGC should now consider taking the next step, and mandate the use of http URIs for persistent identifiers in OGC specifications. This whitepaper canvasses a number of issues around this proposal.

http URI Policy

The OGC Members approved the following as official OGC policy to be included in the OGC Policies related to OGC standards [OGC 06-135rN]:

OGC TC directs the OGC-NA that all new OGC identifiers issued for persistent public OGC resources shall be http URIs, instead of URNs

New standards and new major versions of existing standards shall use http URIs for persistent public OGC resources to replace OGC URN identifiers defined in previous standards and versions, unless OGC-NA approves an exception

Operational Implications: OGC should carefully manage (maintain for the long term) the http://www.opengis.net domain and identifiers in this domain

A URI can be further classified as a locator, a name, or both. The term “Uniform Resource Locator” (URL) refers to the subset of URIs that, in addition to identifying a resource, provide a means of locating the resource by describing its primary access mechanism (e.g., its network “location”). The term “Uniform Resource Name” (URN) has been used historically to refer to both URIs under the “urn” scheme [RFC2141], which are required to remain globally unique and persistent even when the resource ceases to exist or becomes unavailable, and to any other URI with the properties of a name.

A URN is a kind of URI. What is called an “http URI” is really “just” a URL in RFC3986. And, a URN need not (or I should say “need no longer”) be something with “urn:” in the scheme. A URL could be a URN based on the last part of the definition above, “any other URI with the properties of a name”.

Therefore, an “http URI” (from the OGC wording) can be either a URL or a URN, based on section 1.1.3 of RFC3986. Of course, the URN is really a URL with the additional uniqueness and persistence properties. So let’s just call OGC’s newly mandated URIs URLs.

There are two primary motivations for using RFC2141 URNs. One is as a globally unique name managed by some authority. The other is as a persistent identifier, sometimes used to map onto a URL with a resolver. The trouble with the latter is that URLs really work better in the first place, and I’m guessing that’s what 10-124r1 says.

So here’s what I think they should have said in the TC:

URI Policy

The OGC Members approved the following as official OGC policy to be included in the OGC Policies related to OGC standards [OGC 06-135rN]:

OGC TC directs the OGC-NA that all new OGC identifiers issued for persistent public OGC resources shall be http URLs, instead of RFC2141 URNs

New standards and new major versions of existing standards shall use http URLs for persistent public OGC resources to replace OGC RFC2141 URN identifiers defined in previous standards and versions, unless OGC-NA approves an exception

Sorry to be so pedantic. Back in the day, there would have been half a dozen people at any given TC who would have been able to argue the finer points of this for hours….

(And I just figured out what the OGC-NA is. I guess it’s the “Naming Authority”.)