The GTK widget layout is done via a Glade XML file, which can be
edited visually using Glade.
This template includes working callbacks to handle the File
and Help menus and File Save/Open dialogs, with dummy
handlers for selecting filenames and the Edit menu's
cut/copy/paste.
The main canvas uses Cairo for
graphics rendering, and includes example code from the
cairo package.

To build your own application on top of this, first get the code. You can either grab it from Hackage
with cabal unpack cairo-appbase, or clone the git repo.

To add widgets, install Glade from your distro's package system and run glade data/main.glade. Note that you must run cabal
install to put the glade file in the correct place for your application to pick it up.
To modify the code, edit src/cairo-appbase.hs. Hooking up functions to widgets is very simple: get a widget by name
(which you set in glade file), and hook one of its signals (which you found in the Signals tab in glade) to an IO () action:

-- eg., assuming the GladeXML handle is bound to xml, and the menu item
-- was named "menuitem_cut" in the glade file:
cut <- xmlGetWidget xml castToMenuItem "menuitem_cut"
onActivateLeaf cut myCut

The template code includes a trivial definition of myCut:

myCut :: IO ()
myCut = putStrLn "Cut"

A real application will want to pass data to the callback. In C, this is fairly tedious as you only have a single void * to
pass to callbacks as "user_data", and applications typically do lots of marshalling and unmarshalling to pass data around. In
Haskell however, you can make yourself a more complex callback handler and use a curried version of it in each instance:
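For example, here is a sketch of that idea (myEditAction and the menu-item names below are illustrative, not part of the template):

```haskell
-- One handler for Cut/Copy/Paste, with the action name curried in.
-- (myEditAction is a made-up name; the template itself defines a
-- separate trivial handler for each action.)
myEditAction :: String -> IO ()
myEditAction name = putStrLn name

-- Hooked up with gtk2hs, each menu item gets a curried version, eg.:
--
--   cut  <- xmlGetWidget xml castToMenuItem "menuitem_cut"
--   onActivateLeaf cut  (myEditAction "Cut")
--   copy <- xmlGetWidget xml castToMenuItem "menuitem_copy"
--   onActivateLeaf copy (myEditAction "Copy")

main :: IO ()
main = myEditAction "Cut"   -- behaves just like the template's myCut
```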

Erik de Castro Lopo discussed currying at length in his April 2006 post,
GTK+ Callbacks in OCaml.
The
Haskell GTK+ bindings have been around a long time, but were only
recently cabalized and uploaded to Hackage. I put together cairo-appbase in August 2006 when I was
playing with it, but now that I have more time for Haskell I've updated it and uploaded it to Hackage. Enjoy, and hack away!

The conventional way of doing embedded development is to cross-compile everything then copy it onto the target,
but working natively allows you to use "normal" tools and workflows.
We want to issue commands directly to a shell on the development board or phone prototype, and speed up the compilation step by distributing it to a faster machine such as your workstation.
This isn't the usual way to do things, but I like working this way, and here's how to make it work faster.

This article explains how to configure a Debian PC host and a Debian target system so that development done on the target invokes the cross-compiler on the host. The advantage offered by this approach is a speed-up of compile times. Note that this does not speed up other aspects of building, such as source configuration (which can be slow for packages using GNU autotools), linking or installation.

We assume that a full Debian system is available for development on the target: packages can be built natively using gcc and a full toolchain (binutils, ld etc.), and
tools such as automake, autoconf, libtool, version control systems etc. are available.

The setup we work with uses Debian on both the host PC and the target.
The examples will use Debian sh4 on the target, with the sh4-linux-gnu-gcc
cross-compiler installed on the build host. For other target architectures, simply replace all instances of sh4-linux-gnu- with the appropriate prefix, eg. arm-linux-gnueabi-.

In this article, commands executed natively on the target device will use the prompts target$ (target# as root), and commands executed on the x86 build host will use host$ and host#.

The first step is to ensure you can build software natively on the target. For GCC:

target$ gcc hello.c -o hello

and for autotools projects:

target$ ./configure
target$ make

ccache

Next, install ccache:

target# apt-get install ccache

ccache keeps a cache of compiled object files, so that the same compilation does not need to be repeated. This cache lives outside of your source tree, so it persists across invocations of 'make clean'. ccache compares the pre-processed source, so a source file will only be recompiled if it or any of its included headers has changed. The usual way to use ccache is simply to set your C compiler to "ccache gcc".

target$ ccache gcc hello.c -o hello

and for autotools projects:

target$ CC="ccache gcc" ./configure
target$ make

Debian also sets things up so that if you put /usr/lib/ccache ahead of /usr/bin in your PATH, ccache will be used for native builds whenever gcc is invoked:

target$ export PATH=/usr/lib/ccache:$PATH

That is useful to set up, but not necessary for this setup with distcc.

An aside about compiler naming

Before we move on to cross-compiling, it's important to realize that the native compiler is also available under its full architecture prefix:

target$ sh4-linux-gnu-gcc hello.c -o hello
The distinction between "native" and "cross-" compiling is then just a matter of what machine you are running this compiler program on. If you run sh4-linux-gnu-gcc
on an x86 machine, you are cross-compiling, but if you run sh4-linux-gnu-gcc on an sh4 machine then you are just compiling. Of course the compiler binaries are
different; the point is that a shell script which calls the compiler by its full name would work without modification on either machine.

distcc

distcc allows you to use a compiler running on a different, faster machine. This involves running a server (distccd) on that machine, and it is far easier to set up than you might expect.

First, install distcc on the build host:

host# apt-get install distcc

On Debian, enable the server by editing /etc/default/distcc to set STARTDISTCC="true" and adding the target's network to ALLOWEDNETS, then restart the service. So that we can see that compilation is really running on the host, watch the log file in a separate window:

host# tail -f /var/log/distccd.log

Then, on the client (ie. the target system) we also install distcc:

target# apt-get install distcc

We do not need to modify the distcc configuration on the target, as it will not be running the server, so Debian's defaults are fine.
However, we do need to set an environment variable to specify which machine(s) to compile on:

target$ export DISTCC_HOSTS='host'

You run distcc in a similar manner to ccache, by simply setting your C compiler. Note that we are only distributing compilation, not
linking, so we run just the compilation step:

target$ distcc sh4-linux-gnu-gcc -c hello.c

The C file was transferred over the network to the host, where distccd invoked the cross-compiler and then sent the results back to the target. The end result is the same as
if sh4-linux-gnu-gcc had been run directly on the target, but we avoided using the slower CPU of the target system.

To take full advantage of distcc, you can run distccd on multiple build hosts and list them all in the DISTCC_HOSTS environment variable on the target:

target$ export DISTCC_HOSTS='host1 host2 host3'

Then use eg. "make -j 10" to run many compiles in parallel; the jobs will be farmed out across the build hosts.

Combining ccache and distcc

You can quite simply put these two tools together, by calling:

target$ ccache distcc sh4-linux-gnu-gcc -c hello.c

Alternatively, you can set CCACHE_PREFIX to "distcc" before calling ccache:

target$ export CCACHE_PREFIX="distcc"
target$ ccache sh4-linux-gnu-gcc -c hello.c

The first time we run this the code is cross-compiled on the build host and sent back to the target, and ccache keeps track of that. The second time we run this, ccache
notices that it already has a stored copy of the output hello.o, and decides to use that rather than calling the compiler. (From ccache's point of view, the compiler is
"distcc sh4-linux-gnu-gcc").

For autotools projects, you can simply do the following before calling ./configure:

target$ export CCACHE_PREFIX="distcc"
target$ export CC="ccache sh4-linux-gnu-gcc"

After which the ./configure step will write Makefiles which specify to compile with ccache, so the rest of your build (ie. make -j 10) just
works as normal without any new settings or any other change to your workflow.

For more discussion of combining distcc with ccache, see the distcc(1) man page.

Summary

By combining both ccache and distcc we can:

avoid redundant compilations, and

distribute required compilations to a faster build host.

The result is faster build times, which speeds up your development cycle and allows you to work more efficiently on the target system itself.

AUBE/Metadecks Live is a music production tool designed for live use. A track
like this is made by setting up a bunch of sample, rhythm and effects
units, playing them for a while and recording the result.

This post uses the HTML5 <audio> tag. If the audio controls are not present then the problem may simply be that your browser does not support HTML5 <audio> with Ogg Vorbis (in which case upgrade to one that does). If you are reading this in a feed reader or via a planet aggregator, then the problem may be that the reader or aggregator strips the HTML5 <audio> tag -- in which case you might want to switch to a more modern reader, or upgrade your planet.


I just released
Sighttpd version 1.1.0,
which includes support for streaming Ogg Vorbis from standard input.
In an earlier post introducing
a new HTTP streaming server (sighttpd 1.0.0),
I described how sighttpd could be used to stream raw data, such as plain text:

$ while true; do date; sleep 1; done | sighttpd

and H.264 elementary video streams but not Ogg, because an Ogg stream needs to have setup headers prepended for each
codec stream. "Instead, we would need to do something like Icecast:
buffering these headers and serving them first to each client that connects before continuing with live Ogg pages".

So, that's exactly what version 1.1.0 introduces, with a new <OggStdin> module.
The sighttpd.conf setup is similar to the normal <Stdin> configuration.

When a client connects to a stream somewhere in the middle of a song, these headers from the
beginning are required in order to decode the audio data.
sighttpd writes the pages containing the 3 header packets to a temporary file (created with
mkstemp(3)). When a new client connects, the contents of that file are sent to it
with sendfile(2) before jumping into the current contents of the stream.

I'm not trying to make a replacement for Icecast, but instead building a more general
streaming server -- and of course I want it to have good Ogg support! So, please try it out,
and leave some feedback in the comments or in email to me or ogg-dev :)

AUBE/Metadecks Live is a music production tool designed for live use. A track
like this is made by setting up a bunch of sample, rhythm and effects
units, playing them for a while and recording the result.

The rhythms are made with a simple drum machine, which is basically a matrix
of triggers tied up to sample players. These are fed through a cascade of
delays to get the rolling effect -- I love feeding a short delay to provide
echo into a longer delay which matches the beat, so that the individual
sounds combine with each other to make a more complex rhythm.

The rhythm is sent through a resonant low-pass filter; as the track starts
off, the cutoff of that filter is raised to give the effect of opening up
the whole track. It's a pretty simple technique, used in tracks like
Fatboy Slim's Right Here, Right
Now.

The filtered version is called the "wet" part of the mix, and the unfiltered
version is the "dry" part. Changing the amount of these is useful: the dry part
provides definition (the attacks of each drum are clearly audible), and the wet
part has a more interesting texture. In a sequencer you might program the "wetness"
of the effect; I like to work with it more directly by feeding the two
versions into a cross-fader and switching between them live. If you are quick enough
with the controls then your other arm is free for doing handstands :)

Happstack is a Haskell web applications framework.
I hadn't played with it in a while but
Happstack 0.5.0 was recently released
so I decided to try it out. You can get it with
cabal:

$ cabal update
$ cabal install happstack

Happstack has a pretty detailed tutorial, which is actually a
self-hosted happstack site that you can cabal install
and dig around in. It takes a while though, so let's just get
into it. The tutorial doesn't actually start showing
any code until section 7,
"first shot at happstack", which shows you how to run a Hello World
server from ghci, Haskell's REPL:

Prelude> :m + Happstack.Server
Prelude Happstack.Server> simpleHTTP nullConf $ ok "Hello, World!"

A REPL is great for playing around, but some real code to read
for an example server is
ControllerBasic.hs.

At the top of that file we get hit with this:

mzero corresponds to a 404 and mzero `mappend` f = f,
while if f is not mzero then f `mappend` g = f.

That's not even code, it's a comment.
Like, omg why would anyone talk like that? lol

It's talking about a type called ServerPartT, which you can think of as
an abstract part of your web server, like the part that handles "everything under
/articles" or "all the images". If you connect a bunch of these together
you get your whole web server.
Anyway, it turns out that it's much more fun if you simply pronounce ServerPartT as
"Server Party".

So what's all this about monoids?
Mathematically speaking, a monoid is a simple party game that some data objects
can play when they get together. This is a mathematical definition in the
sense that mathematicians are fun at parties.

The rules of the game are just that you have some way of appending things together;
the tricky Haskell name for this is mappend, named after
the famous French mathematician
M. Append. Whenever you mappend two things together you get another thing
of the same type that can also be mappended. There's also an empty element
called mempty, or here called mzero(*).

So a monoid is just a way of saying how you connect things up. In terms of
ServerPartTs:

mzero corresponds to a 404:
The empty part of your server is
404 Not Found;
ie. if your server contained
no application parts at all, it would just have to return 404 for any request. In
general, if a ServerPartT can't handle the current request (eg. the ServerPartT
for images doesn't handle /articles), then it'll act like mzero for that
request.

mzero `mappend` f = f:
mappend is the way that you connect up two server parts. Basically you just try
server parts one after another: when a request comes along, if the first ServerPartT
can't handle it, ie. acts like mzero, then try the next ServerPartT
(and hey let's call it f).

if f is not mzero then f `mappend` g = f:
On the other hand, if the first server part can handle the request,
ie. it does not return 404 and is not mzero, then use it and ignore all the
other ServerPartTs (call them g). The whole server is acting just like f
by itself!

The point is that because ServerPartT follows all the rules of the monoid party game,
you can suddenly use all the functions available in
Data.Monoid,
like mconcat which takes a whole list of objects and works out what would happen if they were all
mappended together. This allows you to simply make a list of ServerPartTs and use the first one
that doesn't return 404: you don't even need to write a function for evaluating your whole server, you can
just use the plain old boring mconcat from the base libraries!
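The behaviour described above can be sketched with a toy first-match monoid (Part and runPart are made-up names, not Happstack's actual types):

```haskell
-- A toy stand-in for ServerPartT: a part maps a request path to
-- Maybe a response. Nothing plays the role of mzero, ie. 404.
newtype Part = Part { runPart :: String -> Maybe String }

instance Semigroup Part where
  -- Try f first; if it acts like mzero (Nothing), fall through to g.
  Part f <> Part g = Part (\req -> maybe (g req) Just (f req))

instance Monoid Part where
  mempty = Part (const Nothing)   -- a server with no parts: always 404

images, articles :: Part
images   = Part (\req -> if req == "/images"   then Just "an image"   else Nothing)
articles = Part (\req -> if req == "/articles" then Just "an article" else Nothing)

-- Plain old mconcat from the base libraries builds the whole "server".
server :: Part
server = mconcat [images, articles]

main :: IO ()
main = do
  print (runPart server "/articles")  -- Just "an article"
  print (runPart server "/nonesuch")  -- Nothing: no part handled it, 404
```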

The structure of monoids (stuff that can be appended) is pretty trivial, but very common. I highly recommend
sigfpe's
Haskell Monoids and their Uses
to learn about some other more general uses.

As for Happstack: it's obviously a bit deeper than your average web framework. In this article I've only looked at
the basic idea behind making a server; it has many more features for managing data, transactions and scaling.
So what do you think? Is the monoidal mumbo-jumbo useful or does it just add a layer of confusion?
Would servers really wear party hats to a ServerPartT?

(*) because ServerPartT is the awesome kind of monoid formed by the MonadPlus type class,
obviously.

oggz-validate builds on the correctness checks imposed by liboggz when
writing Ogg packets. Whereas the low-level libogg simply allows an application to construct arbitrary
Ogg packets and push them into a stream, liboggz checks each packet against some basic constraints, such as that packets within each track appear in order, with non-decreasing timestamps.

oggz-validate works by
reading the input file and attempting to reproduce its sequence of packets.
It creates both a reader and a writer and feeds the output of the reader into the writer;
any errors in stream creation are reported as validation errors.

For example, the check for "packets out of order" uses liboggz's parsing of codec granulepos
to interpret timestamps of many free codecs including Ogg Dirac, FLAC, Speex, Theora and Vorbis.
Also, there is a simple constraint in the specification for Ogg Theora that the
BOS (Beginning Of Stream) header packet for Theora must come before that for Vorbis (or
another audio codec).

What oggz-validate does not do is check that the contents of the codec streams are valid for
that codec. Such checking is left up to codec-specific tools such as
vorbose, and flac --test.

Video streaming must be reliable and glitch-free. Video hosting
sites must be able to offer streams that adapt to the available bandwidth, and clients
must be able to take advantage of this.

Adaptive streaming refers to a system which allows a video streaming client to
request different versions of a stream according to the bandwidth it has
available, and to change this selection on the fly, during the course of streaming.
Such a system of course requires the streaming server to have various versions of
a stream available, each in different bitrates. In order to allow the client to
switch streams on the fly the content must be produced in such a way that corresponding
video frames in the different representations can be easily accessed and decoded.

The first stage in building an adaptive streaming system is making it work for static content,
ie. files on disk. The second stage is making it work for live content, ie. streams coming
from a video production system consisting of cameras, mixing desks and random people in
black tshirts. The first is mainly a technical problem; the second requires developing both
technology and production processes.

Microsoft have a proprietary technology for adaptive bitrate streaming called
Smooth Streaming,
and an extension for
Live Smooth Streaming.
Apple are following a more open path, pursuing standardization of their specifications through
the IETF, in the current form of the
HTTP Live Streaming
Internet-Draft. This extends the m3u playlist format with
durations, sequence numbering, caching and stream information hints.

Ogg does not yet have an adaptive streaming specification; this should be developed in a
way that is compatible with open specifications, while also taking into account the various
quirks of Ogg. For example, the client must have access to codec setup headers for each
bitrate representation, and the system must accommodate chained Ogg resources (as commonly
used for streaming Ogg).
In the
W3C Media Fragments
working group we are developing specifications for addressing fragments
of media resources. The ongoing development of
Ogg Skeleton allows Ogg to take advantage
of these, allowing faster seeking through
OggIndex and gapless playback through hints on
presentation time.

Encouraging use of these features requires tool support and demonstrations of novel
applications for video mash-ups. Video on the web should be a means of creative
expression, allowing new applications that mash up parts of many videos and present
the result seamlessly to the user. This goal makes Ogg fun, and brings us beyond
thinking about video on the Web as just a different way of watching pre-packaged TV-style
content.
