One of the long-term goals for PiBox is to integrate it into the home. I want to see the PiBox Server provide centralized communication for an array of IoT devices. One way to do this is with Bluetooth Low Energy, aka BLE, aka Bluetooth 4.0. This specification allows very low-power sensors to report information to services that can do something useful with it: close the shades when the temperature goes up, let the roomba loose when the dog’s shed fur is accumulating, turn off the stove when granny forgets to take her meds.

Before I could do this I needed to get Bluetooth classic working on the server. This turned out to be fairly easy. The kernel supports it, and there are the BlueZ userspace tools, plus the bluez-tools wrappers to make it even easier. After much procrastination I finally got around to testing these on a server with a 4.0-compatible USB dongle. After a little fiddling with the /etc/bluetooth/audio.conf and /etc/asound.conf files I was able to play a simple wav file to a bluetooth speaker. Easy-peasy.
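I don’t have the exact files handy anymore, but the ALSA side looked something like this: the BlueZ 4-era bluetooth PCM plugin configured in /etc/asound.conf, with the speaker’s MAC address filled in (the address below is a placeholder):

```
# /etc/asound.conf - route the default PCM to a bluetooth speaker
pcm.!default {
    type plug
    slave.pcm {
        type bluetooth
        device "00:11:22:33:44:55"   # the speaker's MAC address
        profile "auto"
    }
}
```

With that in place, anything that plays through the ALSA default device ends up on the speaker.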

The next logical step was to get omxplayer, the hardware-accelerated video player for the Raspberry Pi that I use to play videos full screen, to switch from the analog output to ALSA. Turns out this can’t be done, or at least it isn’t done yet: omxplayer doesn’t support ALSA, so you can’t play the audio stream from a video over bluetooth. So much for my drive-in movie with wireless speakers. I looked at what might be required for this (there were hints from the omxplayer developers) but it seems a bit over my head and would distract from the larger-scale goals of the project. I’m a one-man team. There’s only so much time in a year.

So I’ve started banging my head for alternatives. The obvious one is to plug an analog-to-bluetooth transmitter into the analog output port of the Pi. Too easy. Though I have one on order for the short term (hey, we still use it when we go camping). But that would also require a larger box to hide the transmitter. And how do I switch between bluetooth and analog output? I may not always want bluetooth. Nah, I’m just kidding. Everyone loves bluetooth.

Another alternative is to switch to a different video player, like mplayer. That would get me a software switchable option for audio output. I tried this on a Model B+ but the performance was terrible since the GPU isn’t used. First, to get a nearly smooth playback the image had to be halved while being decoded on the CPU. And there was no way to rescale it to the display size at display time. So this was functionally unusable.

The next alternative (this is computing and there is ALWAYS another alternative) is to switch boards. I looked at several competitors like BananaPi and Odroid. The former looks like a possibility. But then I took another look at the newest version of the Pi, the Raspberry Pi 2 Model B. It’s a quad-core running at 900MHz, a big step up from the original Pi’s single 700MHz core. Not to mention the possibility of overclocking and additional on-board memory. So mplayer, which does work with ALSA but requires the CPU to do the decoding, might just work on this board.

So I placed an order for one Pi 2, which has a slightly different Broadcom chip. I’ve already rebuilt the toolchain for the new chip. That’s an adventure in itself, but fortunately it doesn’t appear to require major changes to the toolchain. I’m not sure whether the toolchain is optimized for it. It doesn’t use the latest Linaro gcc releases, so I doubt it. But if my toolchain shows any possibility of running mplayer then I can always upgrade my Crosstool-NG release to get access to the latest Linaro bits.

This past weekend I took PiBox out in the trailer for a field trial. The box was mounted under a cabinet with power and HDMI wrapped around to a 7″ HDMI-input monitor. Power on worked fine and the system came right up. But not everything worked as expected. Here is the summary.

SMB playback

The first issue was with shared files over SMB. I first tried to watch a movie using my LG Volt phone, which runs Android 4.4.2. I use ES File Explorer on Android to get to the video files. The phone could access the files and start the playback but the network would drop out after a while. I then tried it with a Galaxy Tab 2 tablet running Android 4.2.2. This worked much better and I watched several movies without interruption. The PiBox Media Player also worked perfectly. So I think the problem was with the phone’s wifi. That’s interesting since I often use the phone at the gym when I run on the treadmill to watch Netflix without problem. Apparently it works better with 3G than it does with wifi.

One thing of interest here: the tablet notified me at one point of “unusual magnetic activity” in the area. I got this when I used an astronomy app (setting up to use my telescope – hey, it wasn’t all work on this field trial, we were camping). I have no idea how it sensed this but wonder, if it was real, if it had any impact on local wifi network performance.

Cabling

The media server was cabled to a 7″ display in the trailer. This required an extra power cable. Then we also had a digital TV box that added more power and HDMI cables. Finally, I had an HDMI switch to go between the TV receiver and the Media Server. That amounts to a boatload of cables that didn’t exist with the old TV/DVD player. Fortunately, the digital TV box and HDMI switch will go away when I add a digital TV dongle to the Media Server, though I’ll still have a TV antenna cable. Even so, there are too many cables. I need a way to power the display and connect to it with fewer exposed cables, leaving just the power and TV antenna lines exposed, which is much better. I’m not sure how I’ll solve that problem yet.

Wifi Setup

The wifi setup works well when connecting to the local router. Using the PiBox Media Server as a wireless access point had some minor problems. First, there was a problem with the way I set CCMP vs TKIP. There are two ways to configure the network: bui-network-config (the GTK+ app run from the launcher) and the web interface. I fixed the web interface, via the web backend (piboxd), to do the right thing, which is to use TKIP. But bui-network-config uses its own code for the same task.
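I won’t reproduce the exact knobs piboxd sets here, but if the access point is driven by hostapd, the CCMP vs TKIP distinction comes down to a couple of lines like these (fragment, illustrative):

```
# hostapd.conf fragment (illustrative)
wpa=3                  # accept both WPA1 and WPA2 clients
wpa_key_mgmt=WPA-PSK
wpa_pairwise=TKIP      # pairwise cipher offered to WPA1 clients
rsn_pairwise=CCMP      # pairwise cipher offered to WPA2 clients
```

Getting those two cipher lines wrong is an easy way to end up with clients that can see the AP but never complete the handshake.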

I’ve fixed bui-network-config but the long term solution is to have bui-network-config use the same backend. The problem is that bui-network-config was designed as a standalone tool that can be used outside of PiBox. So the question is how to maintain that capability. I think the solution is going to be to have a shared library that both bui-network-config and piboxd can use for this purpose.

Multiple USB sticks

Something that kind of shocked me was a problem with multiple USB sticks. With one stick plugged in videos were available. With two or three they were not. This may have been a problem with the USB ports. I still need to check on that. Another problem may be with how VideoFE handles the databases created by VideoLib on the sticks. Or it might just be that the sticks were messed up. This simply needs more research.

Video Browsing

Field trials are good to find out just how usable the device really is. In this case I found that the simplistic alphabetical list of videos was slow to move through using the keyboard’s arrow keys. I also found that, for some reason, the poster art was slow to load (or at least appeared that way). This poster issue is new, as I’d not seen this problem at home. Again, it may be due to networking issues at this site.

The important issue here is that I need a way to fine tune the list of videos. Pressing a few keys should be sufficient to reduce the displayed list. I would need a way to cancel that search (to get the full list back) and/or a way to timeout the current search. If it does timeout, does the display go back to the full list? How does the user know what the current search state is (as in timeout state, etc.)?

It’s taken more than two years but I’ve finally finished the first versions of the PiBox Media Server and Media Player. The cases are stained wood boxes from a hobby store, but they provide me with information on how to deal with layout in a 3D-printed case. Building them also required me to look at acquiring parts such as specialized cables and connectors. Some of the small breakout boards I used can be combined into custom boards in future versions.

The goal of this project has always been to build a complete software system from scratch, using off-the-shelf hardware as much as possible, to create a self-contained set of devices that network with each other. The hardware is all off the shelf and the few breakout boards are open hardware designs. The software was built from the ground up, starting with a custom cross toolchain up through the custom applications and UI. It’s even getting some Android apps to go with it (future development).

The only thing that didn’t come out as I’d hoped is the projector used in the Media Player. I was hoping to find a development board for this but all I can find is a “low-cost” pico projector. And integrating the TI DLP chip is a little beyond my scope at the moment (maybe with time that will change).

But it’s a working set of prototypes. I plan on deploying them this summer on our camping trips. I used them last year without cases. This past winter our trailer was broken into and the TV/DVD player was stolen. So now the Media Server, with a 7″ HDMI monitor, will replace it. The only thing missing from it is digital TV. I have a dongle but wasn’t able to get omxplayer working with it yet. That will come with more work this summer.

For now, it’s a start. If you read this and look over the photos then drop me a line and let me know your thoughts.

I’ve been using Apache 2.2 for quite some time. Lately I’ve noticed CentOS and other distros using Apache 2.4. The new Apache breaks a bunch of stuff, as does newer PHP.

One thing that needed changing is a switch in directory listing tools. I have an archive on my web site that is running Scott Evan’s Indices software. It’s just a prettifier for file listings but it worked well and was easy to set up and configure for my personal taste. But this doesn’t work on Apache 2.4 for some reason. I haven’t figured out why. However I did find an alternative that works.

h5ai provides more functionality out of the box and is even easier to configure: copy a directory to your document root and add a new path entry to the default DirectoryIndex. I don’t have an example of this because so far I’m just using it behind a firewall at work. But it’s quite nice. Still need to look into customization options. But out of the box it’s quite useful.
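For the record, the DirectoryIndex change amounts to something like this; the exact _h5ai index path varies by h5ai release, so check the copy you unpacked:

```
# Apache config fragment - fall back to h5ai when no index file exists
DirectoryIndex index.html index.php /_h5ai/server/php/index.php
```

Any directory without its own index file then gets the h5ai listing automatically.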


When I was just getting started with embedded development I found that many tutorials on cross compiling required setting up some shell functions and variables before working on builds. This is a necessity for embedded work because the build for the target platform won’t use the same toolchain and build libraries as the host platform. Unfortunately, the setups for the various publicly available build systems are all different. Angstrom/OpenEmbedded vs Yocto vs Buildroot vs Mentor Graphics vs whoever: they all have their own setup.

Then I created my own build platform, PiBox. I did this mostly to teach myself the whole platform bring-up process. One of the side effects of this (along with having to build custom OPKG packages for PiBox) was the need to bounce from one build to another quickly. I wanted them all to use the same navigation commands and automatically point wherever they needed to with environment variables. Thus was born cdtools.

Use case

Let’s say I have a build system for the toolchain, kernel and root file system for my Raspberry Pi (re: PiBox). I do out-of-tree builds and packaging to avoid clutter when looking for updates with git status. So while my source lives under …/raspberrypi/src, my builds are under …/raspberrypi/bld and packages generated from the build are under …/raspberrypi/pkg. So far so good. That’s not overly complex.

But now I need to bounce over to a kernel tree and make some updates and test them. The updates I end up with will be integrated back into the PiBox build but before that happens I want to test them in a kernel-only tree (remember: PiBox is a build of multiple components – it is, in fact, a metabuild that downloads components, unpacks them, patches them, builds them and packages them). So how do I bounce over to the kernel-only tree and run tests and then bounce back to the PiBox tree to integrate the changes?

Here’s another scenario: I have a slew of metabuilds that do nothing but download 3rd party utilities, configure them as needed for PiBox and then package them as OPKG packages. These builds are then wrapped in another metabuild that builds all of those package-metabuilds in order to simplify creating a complete PiBox release! This top-level metabuild has dependencies on the package metabuilds, so I may find myself performing the top-level metabuild, finding a bug (due to an upstream change to the branch I’m using) in a 3rd party utility, switching over to that package metabuild to fix, commit and push it, and then bouncing back to the top-level metabuild to rebuild the packages again.

What I need are common navigation tools for all projects and a way to change that navigation based on which project I want to work on right at this moment. The way to do that is with shell variables. But I need to do this without having all kinds of project-specific shell scripts that are all different. There needs to be some consistency in how they are used. I want to set the project name but use the same navigation to end up in source, build and package trees no matter which project I’m in.

cdtools

This shell script acts as a front end to a directory of configuration scripts, one for each project in which I do builds. This script lives in ~/bin and sources any shell files found under ~/bin/env*. In this way I separate builds for work from builds I do at home by placing the environment configurations in different directories, as in

~/bin/env.muse

~/bin/env.work
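The loader itself can be tiny. Here’s a minimal sketch of what it might look like; the real cdtools does more (the cdlist support described later, for one):

```shell
# Sketch of the ~/bin/cdtools loader: source every shell file
# found under the env* subdirectories of the base directory.
reload_cdtools() {
    base="${1:-$HOME/bin}"
    for envdir in "$base"/env*; do
        [ -d "$envdir" ] || continue
        for f in "$envdir"/*; do
            [ -f "$f" ] && . "$f"
        done
    done
}
reload_cdtools    # picks up ~/bin/env.muse, ~/bin/env.work, ...
```

Because the env files are sourced rather than executed, the project functions and aliases they define land directly in the current shell.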

Whenever I add a new project (and this happens frequently for me) I just drop a new shell file into one of the env subdirectories and then reload cdtools.

. ~/bin/cdtools

This makes it easy to make changes without having to logout and back in. It also makes it easy to do within a screen session so I can test new or modified configurations under one screen window without affecting the other windows until I’m ready for the update.

shell functions

Each project has its own shell file which contains a project specific shell function that sets up environment variables and aliases. The environment variables are always the same but point to different things. Typically I use variables prefixed with GM_, though not always.

SRCTOP – top of all source trees, under which you find all project specific directories.

GM_HOME – top of the project directory, under which you find the project source, build and package trees, along with others as needed.

GM_WORK – a scratch pad area often common between projects.

GM_BUILD – the build directory for the project.

GM_SRC – the source directory for the project.

GM_PKG – the packaging directory for the project.

Other variables can be set to update the executable PATH, library path, Java path, or anything else that’s needed. As long as the same variables are used in each shell file (with unused variables ignored), switching from one project to another will always properly set up the environment.

For the use case, the PiBox shell function is rpi and the Linux kernel shell function is kernel. Typing either configures my environment to access, build and navigate their respective trees.

aliases

Remembering the environment variables is hard. So is typing them. So the shell function maps them to navigation aliases. All navigation aliases are prefixed with cd, as in

cdt – top of all source trees, under which you find project directories

cdh – top of the project directory

cdx – source tree under the project directory

cdb – build tree under the project directory

cdp – package tree under the project directory

cda – archive directory, where a local copy of downloaded source is kept, to avoid having to redownload unless absolutely necessary (this includes git or mercurial managed trees, as needed)

cde – Extras directory

A special alias, cd?, is set up to call a project shell function that lists the aliases, important environment variable configuration and help in cloning and accessing remote trees. All projects use the same aliases. Additional aliases can be set up as long as the context of the alias is the same for each project.
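Putting the variables and aliases together, a project’s env file might look something like this sketch; the paths and the listrpi helper are illustrative, not the actual PiBox configuration:

```shell
# ~/bin/env.muse/raspberrypi (hypothetical sketch)
# DESC: PiBox: Embedded environment for ARM-based system using buildroot
rpi() {
    export SRCTOP="$HOME/src"
    export GM_HOME="$SRCTOP/raspberrypi"
    export GM_SRC="$GM_HOME/src"
    export GM_BUILD="$GM_HOME/bld"
    export GM_PKG="$GM_HOME/pkg"
    # Single quotes so the variables expand when the alias is used,
    # letting another project function repoint them later.
    alias cdt='cd $SRCTOP'
    alias cdh='cd $GM_HOME'
    alias cdx='cd $GM_SRC'
    alias cdb='cd $GM_BUILD'
    alias cdp='cd $GM_PKG'
    alias "cd?"='listrpi'   # this project's help function
}
```

Typing rpi in any shell then repoints the whole navigation set at the PiBox trees.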

help functions

Until you’ve used cdtools for a while it can be hard to remember the aliases. It’s even harder to remember the sometimes convoluted git or mercurial (or other source code management system) commands necessary to work with remote repositories. A help function is included in each shell file. This is always prefixed with “list”. In our use case, the helper function is listrpi or listkernel. But you don’t have to remember this because the alias cd? always points to the currently active project’s help function.

cdlist

So what happens when you have umpteen projects and you can’t remember which shell function sets up a particular project? You can get a list of available functions and their descriptions using the cdlist function, which is included in the cdtools script. The cdlist function browses all shell files and extracts the shell function name, a description and the name of the file it’s found in. The description uses a tag in a shell comment, as in this example.

# DESC: PiBox: Embedded environment for ARM-based system using buildroot

To search, I can pass cdlist any string found in the DESC field, such as “pibox”.

All shell files with descriptions that include the string “pibox” (case insensitive) are listed. Now I can see the shell function (rpi), which is listed first, and the file in which that function is defined (raspberrypi), which is listed at the end of the line inside brackets.
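A sketch of how cdlist might be implemented; the real script likely differs, and the second argument exists here only to make the sketch self-contained:

```shell
# Sketch of cdlist: scan the env files for DESC tags matching a
# pattern and print "function - description [filename]".
cdlist() {
    pattern="${1:-.}"
    base="${2:-$HOME/bin}"    # defaults to the real env file location
    for f in "$base"/env*/*; do
        [ -f "$f" ] || continue
        # Keep only files whose DESC line matches, case insensitively
        desc=$(grep -i '^# DESC:' "$f" | grep -i -- "$pattern")
        [ -n "$desc" ] || continue
        # First shell function defined in the file
        func=$(grep -m1 '^[a-zA-Z_][a-zA-Z0-9_]*()' "$f" | cut -d'(' -f1)
        printf '%s - %s [%s]\n' "$func" "${desc#\# DESC: }" "$(basename "$f")"
    done
}
```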

Example usage

First, load all cdtools scripts.

. ~/bin/cdtools

Now setup to work in the PiBox tree. Then change to the source tree, see what’s there and then switch to the packaging tree and do the same.

Note the “2” in the directory paths: it comes from the argument passed to the shell function, so I can now have multiple copies of the same repo. And this trick can go further by using additional arguments to the shell function to embed different-yet-related repositories under a single shell function.

bui fakekey
bui network-config 2

And so forth.

Extras

The cdtools script currently includes support for color coding the cdlist output using ANSI escape sequences. This works in most terminal types. More importantly, there is no reason you can’t extend cdtools to support additional functionality that you want to embed in your own set of shell files.

Consistency of practice

What you should get out of this is the consistency of use when moving from one project to another. How I build projects can vary greatly: PiBox uses GNU Make while one of its packages (launcher, for example) uses Autoconf. This happens all the time because when doing a system build (a build of an OS plus utilities plus custom kernels plus custom apps, etc.) you may depend on many 3rd party source trees with a variety of build patterns. But that doesn’t matter. What matters is whether I can bounce over to a project and have pretty much everything I need in place when I get there.

Networking in QEMU

I’ve recently had a need to test UEFI booting for a disk image. I stumbled upon OVMF, a UEFI firmware build for virtual machines that handles UEFI booting. It works pretty well, though it sometimes doesn’t remember its config. But that’s not why I’m writing this.

In my disk image I need dual network interfaces that are on specific networks. I don’t need tap interfaces; I can live with the slower user-mode interfaces. But I needed to put them on separate networks.

I googled till I was blue in the face trying to figure out how to do this. You can set the dhcpstart address, but qemu-system-x86_64 spews the following message:

device 'user' could not be initialized

It doesn’t say why, but after fiddling with it for a long time it became obvious that this was a configuration error. There just doesn’t seem to be any explanation as to what configuration is wrong.

Finally I found a site with an example. The key appears to be that you have to include the host’s network address on the network that interface will be in. Without this specification there is no mapping of the guest interface to a host interface. Here is what the correct configuration should look like.
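Here is a sketch of such an invocation; the network addresses, forwarded ports and NIC model are illustrative, not my exact setup:

```shell
qemu-system-x86_64 \
    -m 2048 \
    -netdev user,id=net0,net=192.168.76.0/24,dhcpstart=192.168.76.9,hostfwd=tcp::2222-:22 \
    -device e1000,netdev=net0 \
    -netdev user,id=net1,net=192.168.77.0/24,dhcpstart=192.168.77.9,hostfwd=tcp::2223-:22 \
    -device e1000,netdev=net1 \
    disk.img
```

The net= option on each -netdev entry is what pins that interface to its own network.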

Now both interfaces will come up on the assigned networks. The hostfwd option is used to map host ports to guest ports: the first port number is the host’s, and it forwards to the port on the guest. Here I’m mapping host ports to the ssh port on the guest. Since I’m using user-mode networking this is required; without the mapping there is no way to reach the guest from the host.

KVM Speedup

Another thing I discovered when running qemu-system-x86_64 is that it was slow on my Fedora 21 box. The reason is simple: you can’t run this command manually with the KVM speedups (which are significant) if libvirtd is already running. You can go through the hoops of tying the two together, but for a quick qemu session with full virtualization and much better performance, just shut off libvirtd.

sudo service libvirtd stop

That’s using the archaic service command instead of the ugly systemctl interface (thank you very little, systemd), but it works. Again, this is for Fedora. Your distro mileage will vary.
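With libvirtd out of the way, a KVM-accelerated session is just a matter of adding -enable-kvm to the command line (image name and memory size illustrative):

```shell
qemu-system-x86_64 -enable-kvm -m 2048 -hda disk.img
```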

I can no longer push to gitorious, so I tried to run the import of all my repos (and there are a lot) into GitLab. This failed miserably. Only two of 34 repos were imported. Some of the repos actually got created on GitLab, but there is nothing in them. So I tried to import one of them manually. It’s been running for about 20 minutes now and nothing is happening. No errors, no messages, no nothing. Just a spinning wheel.

I dug around and found the process for migrating a repo, be it to GitLab or GitHub or whatever. It requires manual creation of each repository. While gitorious allowed me to have different projects with different repos under them, GitHub does not. At least I don’t see a way to do that. GitLab does, but they call them groups. The import appears to have created the groups but that’s all it did.

So I guess I can try doing this manually under each group, after manually creating each repo. *sigh* This is a strong argument for running my own git server and gitweb.

Then restart apache and make sure the server name resolves somehow (DNS or /etc/hosts) and you’re good to go.

But with Apache 2.4 something changed. This doesn’t work. You get the nice welcome page that says the web server is running but you don’t get the directory index. I found a solution online that hid the solution behind a requirement to “like” the answer but it became very clear what the problem was: the welcome.conf configuration file overrides my directory index.
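For reference, the stock welcome.conf on Fedora looks roughly like this (from memory; check your own copy):

```
# /etc/httpd/conf.d/welcome.conf
<LocationMatch "^/+$">
    Options -Indexes
    ErrorDocument 403 /.noindex.html
</LocationMatch>
```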

The Options -Indexes line in welcome.conf is the problem. The easiest solution, in this case where all I want is a web server that shows a directory listing, is to comment out that whole block. Restarting the web server and accessing my file server works as expected now.

I had this problem on Fedora 21. I’ve seen others have it with Ubuntu. So the source of welcome.conf content may be the apache project itself. In other words, this solution may be applicable to any distribution using Apache 2.4 or later.

I was job hunting from late November to late January. Since few of the people needed for interviews tend to be available during the 12th month of the year, I was able to devote a lot of time to PiBox development. However, once the new year rolled in I was bouncing around from office to office in my finest nerd sweaters and cleanest hiking shoes. Fortunately, no one was interested in my fashion sense. But it required a bit of traveling and lots of phone chatting, which played havoc with the concentration required to do development work. So little PiBox work got done.

Fortunately all that ended in mid-February when HGST decided I was the least offensive candidate that week. But it takes a while to get back into a project like PiBox. So I thought I’d start slowly by finally getting around to writing some end user and developer documentation on the wiki.

I managed to complete a user guide to explain how to use the media systems. Turns out it’s pretty easy, which is a good thing. The developer document was a bit harder since I wasn’t sure if I wanted to explain how to use and modify the development platform’s build system or explain how to write apps for the media systems. Turns out I did a little of both. That’s a good thing too. I think.

What’s still missing is the protocol document explaining how messages are passed around the media system components. I’ve got some diagrams up and an idea of what I need to say. I just need to get some time to finish it up.

And then I can get back to my issues list. So much to do. So little time. Anyone wanna help?

I’ve been working on building custom Debian and Ubuntu distributions for use under VirtualBox. One advantage that both have over Fedora is debootstrap. This tool allows you to create a default rootfs from pre-compiled packages inside a directory. You can then chroot into that directory to install the kernel image, extra packages and do any additional customizations.

Where this gets really cool is using debootstrap with a few additional tools on an image file. The process is fairly simple. First you create an image file using dd and add a partition table for two partitions, boot and rootfs, using parted. Loop mount (kpartx) and bind mount boot under rootfs. Format the partitions (mkfs.ext3). Now you’re ready to install the rootfs using debootstrap. Once installed, chroot into that directory and run customization scripts. Finally, exit the chroot, unmount and install a bootloader such as grub. That creates the raw disk image. You can then use qemu-img to convert it to various VM image formats such as those for VirtualBox, Xen and KVM.

I handle this using a front end script that runs 7 steps, most of which are outside of the chroot but a few that are in it. The seven steps are

Create image file

Create partition table

Loop mount and bind mount

Install rootfs with debootstrap

Copy in chroot scripts and data files

Chroot and run those scripts, then unmount

Convert to VM image
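As a sketch (not the actual front end script), the seven steps map to commands roughly like this; sizes, distro, paths and device names are illustrative, and most of it needs root:

```shell
IMG=debian.img

# 1. Create the image file (2GB here)
dd if=/dev/zero of=$IMG bs=1M count=2048

# 2. Create the partition table: small boot partition plus rootfs
parted -s $IMG mklabel msdos \
    mkpart primary ext3 1MiB 200MiB \
    mkpart primary ext3 200MiB 100%

# 3. Loop mount the partitions and mount boot under the rootfs
kpartx -av $IMG                      # creates /dev/mapper/loop0p1, loop0p2
mkfs.ext3 /dev/mapper/loop0p1
mkfs.ext3 /dev/mapper/loop0p2
mkdir -p /mnt/rootfs
mount /dev/mapper/loop0p2 /mnt/rootfs
mkdir -p /mnt/rootfs/boot
mount /dev/mapper/loop0p1 /mnt/rootfs/boot

# 4. Install the rootfs
debootstrap --arch amd64 jessie /mnt/rootfs

# 5. Copy in the chroot scripts and data files
cp setup.sh /mnt/rootfs/tmp/

# 6. Chroot, run the scripts, then unmount
chroot /mnt/rootfs /tmp/setup.sh     # kernel image, packages, customizations
umount /mnt/rootfs/boot /mnt/rootfs
kpartx -d $IMG
# ...install grub to the image's MBR here; details vary by host distro

# 7. Convert the raw image for the VM of choice
qemu-img convert -O vdi $IMG debian.vdi
```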

These seven are pretty common for either Debian or Ubuntu. There are small variations, such as what you include with debootstrap and perhaps how you need to do your loop mounts. The differences come from the chroot scripts and data files. There is one script that sets up the chroot environment as necessary to perform additional package installations. Setup can include network interfaces, setting the locale and installing prerequisites necessary to do other installations.

Within the chroot script is the installation of the Linux kernel image. This gets installed under /boot, which was bind mounted outside of the chroot so we have separate boot and rootfs partitions when we boot the image. What’s interesting is that most information you find online about this process assumes you’re building debian or ubuntu on debian or ubuntu. But I’m not. I’m on Fedora.

This means that when you get to installing grub the instructions you find don’t always match what you need to do. Fedora 21 uses grub2. Debian uses grub or grub2, and Ubuntu has grub. Does it matter? Turns out it doesn’t. All that matters is that the grub from your distro is installed to /boot on the image and that the MBR of the image file points to it. That way qemu-img can create an appropriate image for your VM.

For bonus points, the use of qemu-img allows you to convert this raw disk image into a variety of VM image formats. So a single image build is quickly converted to the VM environment of choice.

In the end I found that testing the VM image under VirtualBox was pretty easy, including getting the guest utilities installed as part of a first-boot process. Those utils need to rebuild kernel modules and you can’t do that easily from within the chroot. It’s easier to just create a firstboot script that installs the utils and rebuilds the modules, and that gets run the first time the image boots in the VM and then removes itself afterward.

So now I can quickly bring up a Debian VM. The whole process takes about 15 minutes unattended. The only problem is it’s not Fedora. If I could do this under Fedora or CentOS, I’d be sooo happy.