<p><em>Amir Chaudhry: thoughts, comments &amp; general ramblings (posts tagged ‘OCaml’)</em></p>
<h1 id="unikernels-at-polyconf">Unikernels at PolyConf!</h1>
<p><em>Amir Chaudhry, 2015-07-04, <a href="http://amirchaudhry.com/unikernels-polyconf-2015">http://amirchaudhry.com/unikernels-polyconf-2015</a></em></p>
<p><strong><em>Updated: 14 July (see below)</em></strong></p>
<script async="" class="speakerdeck-embed" data-id="1076a457408d42d7bb9da27dd88b68c8" data-ratio="1.77777777777778" src="//speakerdeck.com/assets/embed.js"></script>
<p>Above are my slides from a talk at PolyConf this year. I was originally going
to talk about the <a href="http://amirchaudhry.com/brewing-miso-to-serve-nymote/">MISO</a> tool stack and personal clouds (i.e. how we’ll
build <a href="http://nymote.org/blog/2013/introducing-nymote/">towards Nymote</a>) but after some informal conversations with
other speakers and attendees, I thought it would be <em>way</em> more useful to focus
the talk on unikernels themselves — specifically, the ‘M’ in MISO. As a
result, I ended up completely rewriting all my slides! Since I pushed this
post just before my talk, I hope that I’m able to stick to the 30min time slot
(I’ll find out very soon).</p>
<p>In the slides I mention a number of things we’ve done with MirageOS so I
thought it would be useful to list them here. If you’re reading this at the
conference now, please do give me feedback at the end of my talk!</p>
<ul>
<li><em>Thomas’ Hello world and REST service</em>, <a href="http://roscidus.com/blog/blog/2014/07/28/my-first-unikernel/">“My First Unikernel”</a></li>
<li><em>Magnus on</em> <a href="http://www.skjegstad.com/blog/2015/03/25/mirageos-vm-per-url-experiment/">“A unikernel experiment: A VM for every URL”</a></li>
<li><em>Mindy on <a href="http://www.somerandomidiot.com/blog/2014/08/19/i-am-unikernel/">“I Am Unikernel (and So Can You!)”</a></em></li>
<li>
<p><em>The <a href="https://github.com/mirage/mirage-skeleton">mirage-skeleton repo</a>, which has a number of examples</em></p>
</li>
<li><em>My previous posts (referred to in the talk)</em>
<ul>
<li><a href="http://amirchaudhry.com/from-jekyll-to-unikernel-in-fifty-lines/">“From Jekyll site to Unikernel in fifty lines of code.”</a></li>
<li><a href="http://amirchaudhry.com/heroku-for-unikernels-pt1">“Towards Heroku for Unikernels”</a></li>
<li><a href="http://amirchaudhry.com/bitcoin-pinata/">“The Bitcoin Piñata!”</a></li>
<li><a href="http://nymote.org/blog/2013/introducing-nymote/">“Introducing Nymote”</a></li>
</ul>
</li>
</ul>
<p>To get involved in the development work, please do join the
<a href="http://lists.xenproject.org/cgi-bin/mailman/listinfo/mirageos-devel">MirageOS devel list</a> and try out some of the examples for
yourselves!</p>
<h3 id="update--14-july">Update — 14 July</h3>
<p>The video of the talk is now available and it’s embedded below. Overall, the
talk seemed to go well and there was enough time for questions.</p>
<p>At the end of the talk, I asked people to give me feedback and shared a URL
to a very short form. There were 21 responses with an average rating of
<strong>4.52/5.00</strong>. I’m quite pleased with this and the feedback was also useful.
In a nutshell, the audience seemed to really appreciate the walkthrough (which
encourages me to make some screencasts). There was one comment that I didn’t
do enough justice to the security benefits. Specifically, I could have drawn
more reference to the OCaml TLS work, which prevents bugs like heartbleed.
Security is definitely one of the key benefits of MirageOS unikernels (see
<a href="https://mirage.io/blog/why-ocaml-tls">here</a>), so I’ll do more to emphasise that next time.</p>
<p>Here’s the video and I should mention that the slides seem to be a few
seconds ahead. You’ll notice that I’ve left the feedback link live, too. If
you’d like to tell me what you think of the talk, please do so! There are some
additional comments at the end of this post.</p>
<div class="flex-video">
<iframe width="540" height="304" src="https://www.youtube.com/embed/zi2TdMXs7Cc" frameborder="0" allowfullscreen=""></iframe>
</div>
<!-- I find it a little awkward watching myself give a talk, especially when I
recognise things I should have said (or obvious mistakes).
-->
<p>Finally, here are a few things I should clarify:</p>
<ul>
<li>Security is one of the critical benefits, which is why we need new systems
for personal clouds (rather than legacy stacks).</li>
<li>We still get to use all the existing tools for storage (e.g. EBS), it
doesn’t have to be Irmin.</li>
<li>The <a href="https://mirage.io/blog/introducing-irmin">Introducing Irmin</a> post is the one I was trying to point
an audience member at.</li>
<li>When I mention the DNS server, I said it was 200MB when I actually meant
200<strong>KB</strong>. More info in the <a href="http://nymote.org/docs/2013-asplos-mirage.pdf">MirageOS ASPLOS paper</a>.</li>
<li>I referred to the <a href="http://hubofallthings.com">“HAT Project”</a> and you should also check out the
<a href="http://mor1.github.io/publications/pdf/aarhus15-databox.pdf">“Databox paper”</a>.</li>
<li>A summary of other unikernel approaches is also <a href="http://www.linux.com/news/enterprise/cloud-computing/819993-7-unikernel-projects-to-take-on-docker-in-2015/">available</a>.</li>
</ul>
<h1 id="heroku-for-unikernels-pt2">Towards Heroku for Unikernels: Part 2 - Self-Scaling Systems</h1>
<p><em>Amir Chaudhry, 2015-04-03, <a href="http://amirchaudhry.com/heroku-for-unikernels-pt2">http://amirchaudhry.com/heroku-for-unikernels-pt2</a></em></p>
<p>In the <a href="http://amirchaudhry.com/heroku-for-unikernels-pt1/">previous post</a> I described the continuous end-to-end system
that we’ve set up for some of the MirageOS projects — automatically going from
a <code>git push</code> all the way to live deployment, with everything under
version-control.</p>
<p>Everything I described previously already exists and you can set up the
workflow for yourself, the same way many others have done with the Travis CI
scripts for testing/build. However, there are a range of exciting
possibilities to consider if we’re willing to extrapolate <em>just a little</em> from
the tools we have right now. The rest of this post explores these ideas and
considers how we might extend our system. </p>
<p>Previously, we had finished the backbone of the workflow and I discussed a few
ideas about how we should flesh it out — namely more testing and some form of
logging/reporting. There’s substantially more we could do when we consider
how lean and nimble unikernels are, especially if we speculate about the
systems we could create as our <a href="http://amirchaudhry.com/brewing-miso-to-serve-nymote/">toolstack</a> matures. A couple of
things immediately come to mind. </p>
<p>The first is the ability to boot a unikernel only when it is required, which
opens up the possibility of highly-elastic infrastructure. The second is the
ease with which we can push, pull or otherwise distribute unikernels
throughout a system, allowing new forms of deployment to both cloud and
embedded systems. We’ll consider these in turn and see where they take us,
comparing with the current ‘mirage-decks’ deployment I described in
<a href="http://amirchaudhry.com/heroku-for-unikernels-pt1/">Part 1</a>.</p>
<h2 id="demand-driven-clouds">Demand-driven clouds</h2>
<p>The way cloud services are currently provisioned means that you may have
services operating and consuming resources (CPU, memory, etc), even when there
is no demand for them. It would be significantly more efficient if we could
just <em>activate</em> a service when required and then shut it down again when the
demand has passed. In our case, this would mean that when a unikernel is
‘deployed to production’, it doesn’t actually have to be <em>live</em> — it merely
needs to be ready to boot when demand arises. With tools like
<a href="https://github.com/MagnusS/jitsu">Jitsu</a> (Just-In-Time Summoning of Unikernels), we can work
towards this kind of architecture. </p>
<h3 id="summon-when-required">Summon when required</h3>
<p>Jitsu allows us to have unikernels sitting in storage then ‘summon’ them into
existence. This can occur in response to an incoming request and with <em>no
discernible latency</em> for the requester. While unikernels are inactive, they
consume only the actual physical storage required and thus do not take up any
CPU cycles, nor RAM, etc. This means that more can be achieved with fewer
resources and it would significantly improve things like utilization rates of
hardware and power efficiency.</p>
<p>In the case of the <a href="http://decks.openmirage.org">decks.openmirage.org</a> unikernel that I
discussed last time, it would mean that the site would only come online if
someone had requested it and would shut down again afterwards. </p>
<p>In fact, we’ve already been working on this kind of system and
<a href="https://www.usenix.org/conference/nsdi15/technical-sessions/presentation/madhavapeddy">Jitsu will be presented at NSDI</a> in Oakland, California this May.
In the spirit of looking ahead, there’s more we could do.
<!-- ([PDF][jitsu-paper]) --></p>
<h3 id="hyper-elastic-scaling">Hyper-elastic scaling</h3>
<p>At the moment, Jitsu lets you set up a system where unikernels will boot in
response to incoming requests. This is already pretty cool but we could take
this a step further. If we can boot unikernels on demand, then we could use
that to build a system which can automate the <em>scale-out</em> of those services to
match demand. We could even have that system work across multiple machines,
not just one host. So how would all this look in practice for ‘mirage-decks’?</p>
<h4 id="auto-scaling-and-dispersing-our-slide-decks">Auto-scaling and dispersing our slide decks</h4>
<p>Our previous toolchain automatically boots the new unikernel as soon as it is
pulled from the git repo. Using Jitsu, our deployment machine would pull the
unikernel but leave it in the repo — it would only be activated when someone
requests access to it. Most of the time, it may receive no traffic and
therefore would remain ‘turned off’ (let’s ignore webcrawlers for now). When
someone requests to see a slide deck, the unikernel would be booted and
respond to the request. In time it can be turned off again, thus freeing
resources. So far, so good.</p>
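<p>The scale-out policy described above can be sketched as a tiny calculation: boot one unikernel per slice of incoming load, capped so a spike cannot run away with the budget. All numbers below are invented for illustration; a real system would measure load rather than hard-code it.</p>

```shell
# Toy model of the scale-out decision (hypothetical numbers throughout):
REQ_RATE=950   # current incoming requests/sec (assumed, would be measured)
PER_VM=200     # requests/sec one unikernel handles comfortably (assumed)
MAX_VMS=8      # predefined spending cap

want=$(( (REQ_RATE + PER_VM - 1) / PER_VM ))   # ceiling division
if [ "$want" -gt "$MAX_VMS" ]; then want=$MAX_VMS; fi
echo "boot $want unikernel(s)"   # prints: boot 5 unikernel(s)
```

<p>The cap is what turns a potential Denial of <em>Credit</em> back into a bounded cost: past <code>MAX_VMS</code>, extra demand is shed rather than paid for.</p>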
<p>Now let’s say that a certain slide deck becomes <em>really</em> popular (e.g. posted
to HackerNews or Reddit). Suddenly, there are <em>many</em> incoming requests and we
want to be able to serve them all. We can use the one unikernel, on one
machine, until it is unable to handle the load efficiently. At this point,
the system can create new copies of that unikernel and automatically balance
across them. These unikernels don’t need to be on the same host and we should
be able to spin them up on different machines.</p>
<p>To stretch this further, we can imagine coordinating the creation of those new
unikernels nearer the <em>source</em> of that demand, for example starting off on a
European cloud, then spinning up on the East coast US and finally over to the
West coast of the US. All this could happen seamlessly and the process can
continue until the demand passes or we reach a predefined limit — after all,
given that we pay for the machines, we don’t really want to turn a Denial of
<em>Service</em> into a Denial of <em>Credit</em>. </p>
<p>After the peak, the system can automatically scale back down to being largely
dormant — ready to react when the next wave of interest occurs.</p>
<h4 id="can-we-actually-do-this">Can we actually do this?</h4>
<p>If you think this is somewhat fanciful, that’s perfectly understandable — as I
mentioned previously, this post is very much about <em>extrapolating</em> from where
the tools are right now. However, unikernels actually make it very easy to
run quick experiments which indicate that we could iterate towards what I’ve
described. </p>
<p>A recent and somewhat extreme experiment ran a
<a href="http://www.skjegstad.com/blog/2015/03/25/mirageos-vm-per-url-experiment/">unikernel VM for <em>each URL</em></a>. Every URL on a small static
site was served from its own, self-contained unikernel, complete with its own
web server (even the ‘rss.png’ icon was served separately). You can read the
post to see how this was done and it also led to an interesting
<a href="http://lists.xenproject.org/archives/html/mirageos-devel/2015-03/msg00110.html">discussion</a> on the mailing list (e.g. if you’re only serving a
single item, why use a web server at all?). Of course, this was just an
<em>experiment</em> but it demonstrates what is possible now and how we can iterate,
uncover new problems, and move forward. One such question is how to
automatically handle networking during a scale-out, and this is an area where
tools like <a href="http://amirchaudhry.com/brewing-miso-to-serve-nymote/#signpost">Signpost</a> can be of use.</p>
<p>Overall, the model I’ve described is quite different to the way we currently
use the cloud, where the overhead of a classic OS is constantly consuming
resources. Although it’s tempting to stick with the same frame of reference
we have today, we should recognise that the current model is inextricably
intertwined with the traditional software stacks themselves. Unikernels allow
completely new ways of creating, distributing and managing software and it
takes some thought in order to fully exploit their benefits. </p>
<p>For example, having a demand-driven system means we can deliver more services
from just the one set of physical hardware — because not all those services
would be consuming resources at the same time. There would also be a dramatic
impact on the economics, as billing cycles are currently measured in hours,
whereas unikernels may only be active for seconds at a time. In addition to
these benefits, there are interesting possibilities in how such scale-outs can
be coordinated across <em>different</em> devices.</p>
<h2 id="hybrid-deployments">Hybrid deployments</h2>
<p>As we move to a world with more connected devices, the software and services
we create will have to operate across both the cloud and embedded systems.
There have been many names for this kind of distributed system, ranging from
ubiquitous computing to dust clouds and the ‘Internet of Things’ but they all
share the same idea of running software at the edges of the network (rather
than just cloud deployments).</p>
<p>When we consider the toolchain we already have, it’s not much of a stretch to
imagine that we could also build and store a unikernel for ARM-based
deployments. Those unikernels can be deployed onto embedded devices and
currently we target the <a href="http://openmirage.org/wiki/xen-on-cubieboard2">Cubieboard2</a>. </p>
<!-- For the example of our static websites, it would be straightforward to serve them from cubieboards that reside from our homes, thus further minimising the costs to run such infrastructure. However, they could be configured such that if demands begins to peak, then an automated scale-out can occur from the Cubieboard onto the public cloud instead. -->
<!-- You could even set up such a system to push the well-tested unikernels out onto embedded devices elsewhere (think IoT). In this way you only need a Minimal cloud infrastructure for your IoT service, in order to push new code out to end points, where the work is actually done (within a user's home). Think of the Goodnight Lamp, This can drastically reduce cost and any loss of the central service means end devices can keep working. (requires Signpost?). Have a central location where devices can pick up updates from. Doesn't need to do any more than coordinating stuff and devices can work P2P. V cheap to run and make money from selling devices. -->
<p>We could make such a system smarter. Instead of having the edge devices
constantly polling for updates, our deployment process could directly <em>push</em>
the new unikernels out to them. Since these devices are likely to be behind
NATs and firewalls, tools like <a href="http://amirchaudhry.com/brewing-miso-to-serve-nymote/#signpost">Signpost</a> could deal with the issue
of secure connectivity. In this way, the centralized deployment process
remains as a coordination point, whereas most of the workload is dealt with by
the devices the unikernels are running on. If a central machine happens to be
unavailable for any reason, the edge-devices would continue to function as
normal. This kind of arrangement would be ideal for Internet-of-Things style
deployments, where it could reduce the burden on centralised infrastructure
while still enabling continuous deployment.</p>
<p>In this scenario, we could serve the traffic for ‘mirage-decks’ from a
unikernel on a Cubieboard2, which could further minimise the cost of running
such infrastructure. It could be configured such that if demand begins to
peak, then an automated scale-out can occur from the Cubieboard2 directly out
onto the public cloud and/or <em>other Cubieboards</em>. Thus, we can still make use
of third-party resources but only when needed and of the kind we desire. Of
course, running a highly distributed system leads to other needs.</p>
<h2 id="remember-all-the-things">Remember all the things</h2>
<p>When running services at scale it becomes important to track the activity and
understand what is taking place in the system. In practice, this means logging
the activity of the unikernels, such as when and where they were created and
how they perform. This becomes even more complex for a distributed system.</p>
<p>If we also consider the logging needs of a highly-elastic system, then another
problem emerges. Although scaling up a system is straightforward to
conceptualise, scaling it back <em>down</em> again presents new challenges. Consider
all the additional logs and data that have been created during a scale-out —
all of that history needs to be merged back together as the system contracts.
To do that properly, we need tools designed to manage distributed data
structures, with a consistent notion of merges.</p>
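<p>For the simplest case, timestamped append-only logs, the merge on scale-down is just an ordered interleaving. The sketch below fakes two short-lived nodes and merges their histories; Irmin exists precisely because real systems need the much harder case of concurrent, conflicting updates, not just this trivial one. Node names and timestamps are invented.</p>

```shell
# Two hypothetical unikernels each leave a timestamped log...
printf '2015-04-03T10:00:01 node-a boot\n2015-04-03T10:00:09 node-a halt\n' > /tmp/node-a.log
printf '2015-04-03T10:00:04 node-b boot\n2015-04-03T10:00:07 node-b halt\n' > /tmp/node-b.log

# ...and after scale-down the histories merge into one ordered record.
# `sort -m` merges already-sorted inputs; timestamps make it deterministic.
sort -m /tmp/node-a.log /tmp/node-b.log > /tmp/merged.log
cat /tmp/merged.log
```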
<p><a href="http://amirchaudhry.com/brewing-miso-to-serve-nymote/#irmin">Irmin</a> addresses these kinds of needs and it enables a style of
programming very similar to the Git workflow, where distributed nodes fork,
fetch, merge and push data between each other. Building an end-to-end logging
system with Irmin would enable data to be managed and merged across different
nodes and keep track of activity, especially in the case of a scale down. The
ability to capture such information also means the opportunity to provide
analytics to the creators of those unikernels around performance and usage
characteristics. </p>
<p>The use of Irmin wouldn’t be limited to logging as the unikernels themselves
could use it for managing data in lieu of other file systems. I’ll refrain
from extrapolating too far about this particular tool as it’s still under
rapid development and we’ll write more as it matures.</p>
<!-- With something like [Irmin][irmin-post], you may even be able to receive notifications about the type of incoming traffic and raise the limit if you so wish. May be able to configure your embedded devices to scale up to the hosted provider if there's sufficient demand. -->
<h2 id="on-immutable-infrastructure">On immutable infrastructure</h2>
<p>You may have noticed that one of the benefits of the unikernel approach arises
because the artefacts themselves are not altered once they’re created.
This is in line with the recent resurgence of ideas around ‘immutable
infrastructure’. Although there isn’t a precise definition of this, the
approach is that machines are treated as replaceable and can be regularly
re-provisioned with a known state. Various tools help existing systems to
achieve this but in the case of unikernels, everything is already under
version control, which makes managing a deployment much easier.</p>
<p>As our approach is already compatible with such ideas, we can take it a step
further. Immutable infrastructure essentially means the artefact produced
<em>doesn’t matter</em>. It’s disposable because we have the means to easily recreate
it. In our current example, we still ship the unikernel around. In order to
make this ‘fully immutable’, we’d have to know the state of all the packages
and code used when <em>building</em> the unikernel. That would give us a complete
manifest of which package versions were pulled in and from which sources.
Complete information like this would allow us to recreate any given unikernel
in a highly systematic way. If we can achieve this, then it’s the manifest
which generates everything else that follows.</p>
<p>In this world-view, the unikernel itself becomes something akin to caching.
You use it because you don’t want to rebuild it from source — even though
unikernels are quicker to build than a whole OS/App stack. For more
security-critical applications, you may want to be assured of the code that is
pulled
in, so you examine the manifest file before rebuilding for yourself. This also
allows you to pin to specific versions of libraries so that you can explicitly
adjust the dependencies as you wish. So how do we encode the manifest? This
is another area where Irmin can help as it can keep track of the state of
package history and can recreate the environment that existed for any given
build run. That build run can then be recreated elsewhere without having to
manually specify package versions. </p>
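<p>In its simplest form, such a manifest is just a pinned list of packages and versions, plus a digest that identifies the build. The sketch below is a made-up example (the package names and version numbers are invented, and a real manifest would be generated from the package manager rather than written by hand):</p>

```shell
# Hypothetical build manifest: the exact package versions that produced a
# unikernel. Anyone holding this file could pin to the same versions before
# rebuilding, making the binary itself disposable.
cat > /tmp/manifest.txt <<'EOF'
mirage 2.3.0
mirage-xen 2.2.0
cohttp 0.17.1
tcpip 2.4.1
EOF

# A digest of the manifest identifies the build: same manifest, same
# dependency environment, so the unikernel becomes akin to a cache entry.
sha256sum /tmp/manifest.txt | cut -d ' ' -f 1
```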
<p>There’s a lot more to consider here as this kind of approach opens up new
avenues to explore. For the time being, we can recognise that the unikernel
approach lends itself to achieving immutable infrastructure.</p>
<h2 id="what-happens-next">What happens next?</h2>
<p>As I mentioned at the beginning of this post, most of what I’ve described is
speculative. I’ve deliberately extrapolated from where the tools are now so as
to provoke more thoughts and discussion about how this new model can be used
in the wild. Some of the things we’re already working towards but there are
many other uses that may surprise us — we won’t know until we get there and
experimenting is half the fun.</p>
<p>We’ll keep marching on with more libraries, better tooling and improving
quality. What happens with unikernels in the rest of 2015 is largely up to
the wider ecosystem. </p>
<p>That means you.</p>
<hr />
<p class="footnote">
Thanks to Thomas Gazagnaire and Richard Mortier for comments on an earlier draft.
</p>
<!-- TODO- xref with Nymote somehow. The above infra is needed for those apps to provide a resilient service. etc -->
<h1 id="heroku-for-unikernels-pt1">Towards Heroku for Unikernels: Part 1 - Automated Deployment</h1>
<p><em>Amir Chaudhry, 2015-03-31, <a href="http://amirchaudhry.com/heroku-for-unikernels-pt1">http://amirchaudhry.com/heroku-for-unikernels-pt1</a></em></p>
<p>In my <a href="http://amirchaudhry.com/from-jekyll-to-unikernel-in-fifty-lines/">Jekyll to Unikernel post</a>, I described an automated
workflow that would take your static website, turn it into a MirageOS
unikernel, and then store that unikernel in a git repo for later deployment.
Although it was written from the perspective of a static website, the process
was applicable to any MirageOS project.
This post covers how things have progressed since then and the kind of
automated, end-to-end deployments that we can achieve with unikernels. </p>
<p>If you’re already familiar with the above-linked post then it should be clear
that this will involve writing a few more scripts and ensuring
they’re in the right place. The rest of this post will go through a real
world example of such an automated system, which we’ve set up for building and
deploying the unikernel that serves our slide decks — <a href="https://github.com/mirage/mirage-decks">mirage-decks</a>. Once
you’ve gone through this post, you should be able to recreate such a workflow
for your own needs. In Part 2 of this series I’ll build on this post and
consider what the possibilities could be if we extended the system using
some of our <a href="http://amirchaudhry.com/brewing-miso-to-serve-nymote/">other tools</a> — thus arriving at something very much
like our own Heroku for Unikernels.</p>
<h3 id="standardised-build-scripts">Standardised build scripts</h3>
<p>Almost all of our OCaml projects now use Travis CI for build and testing (and
deployment). In fact, there are so many libraries now that we recently put
together an <a href="https://github.com/ocaml/ocaml-travisci-skeleton">OCaml Travis Skeleton</a>, which means we don’t
have to manually keep the scripts in sync across all our repos — and fewer
copy/paste/edits means fewer mistakes. </p>
<p>If you’re familiar with the build scripts from <a href="http://amirchaudhry.com/from-jekyll-to-unikernel-in-fifty-lines#setting-up-travis-ci">last time</a>, then
you can browse the new scripts and you’ll see that they’re broadly similar.
In many cases you may well be able to depend on one or other of the scripts
directly and for a handful of scenarios, you can fork and patch them to
suit you (i.e. for MirageOS unikernels). We can do this because we’ve made it
quick to set up an OCaml environment using an <a href="https://launchpad.net/~avsm">Ubuntu PPA</a>. The rest
of the work is done by the <code>mirage</code> tool itself so once that’s installed, the
build process becomes fairly straightforward. The complexity around secure
keys was also <a href="http://amirchaudhry.com/from-jekyll-to-unikernel-in-fifty-lines/#sending-travis-a-private-ssh-key">covered last time</a>, which allowed us to commit the
final unikernel to a deployment repo. That means the remaining step is
to automate the deployment itself.</p>
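<p>For reference, bootstrapping that environment on a fresh Ubuntu machine looked roughly like the following at the time (the PPA is the one linked above; exact package names and versions will have changed since, so treat this as a sketch rather than current instructions):</p>

```shell
# Illustrative toolchain setup via the avsm PPA (needs sudo and network):
sudo add-apt-repository -y ppa:avsm/ppa
sudo apt-get update
sudo apt-get install -y ocaml opam
opam init -a
opam install mirage    # installs the `mirage` command-line tool
```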
<h3 id="automated-deployment-of-unikernels">Automated deployment of unikernels</h3>
<p>Committing the unikernel to a deployment repo is where the previous post ended
and a <a href="http://amirchaudhry.com/unikernels-for-everyone/">number of people</a> forged ahead and wrote about their
experiences deploying onto AWS and Linode. Many of these deployments
(understandably) involve a number of quite manual steps. It would be
particularly useful to construct a set of scripts that can be fully automated,
such that a <code>git push</code> to a repo will automatically run through the cycle of
building, testing, storing and <em>activating</em> a new unikernel. We’ve done
exactly this with some of our repos and this post will talk through those
scripts. </p>
<h4 id="the-deployment-options--xen-or-nix">The deployment options — Xen or *nix</h4>
<p>MirageOS unikernels can currently be built for Xen and Unix backends. This is
a straightforward step and typically the build matrix is already set up to
test that both of them build as expected. For this post, I’ve only considered
the Xen backend as that’s our chosen deployment method but it would be equally
feasible to deploy the unix-based unikernels onto a *nix machine in much the
same way.
In this sense, you get to choose whether you want to deploy the unikernels
onto a <a href="http://en.wikipedia.org/wiki/Hypervisor#Classification">Hypervisor</a> (for isolation and security) or whether running
them as unix-processes better suits your needs.
<!-- If you step back and think about what this means, it's *almost*
like considering the
[difference between a Type-1 and Type-2 hypervisor][hyp-class] and selecting
between them. -->
The unikernel approach means that <em>both</em> options are open to
you, with little more than a command-line flag between them.</p>
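<p>Concretely, that flag is the only difference in the build steps. Using the MirageOS 2.x syntax current when this was written (later releases changed this to <code>mirage configure -t &lt;target&gt;</code>):</p>

```shell
# Same source, two backends: only the configure flag changes.
mirage configure --unix   # build as an ordinary Unix process
make
mirage configure --xen    # build the same code as a Xen guest image
make
```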
<p>In terms of the deployment machines there are several options to consider. The
most obvious is to set up a dedicated host, where you have full access to the
machine and can <a href="http://wiki.xenproject.org/wiki/Xen_Project_Beginners_Guide">install Xen</a>. Another is to have a machine
running on EC2 and <a href="http://somerandomidiot.com/blog/2014/08/19/i-am-unikernel/">create scripts</a> to deal with unikernels. You
could also build and deploy onto <a href="http://openmirage.org/wiki/xen-on-cubieboard2">Xen on the Cubieboard2</a>. If you’d
rather test out the complete system first, you could set up an appropriate
<a href="http://www.skjegstad.com/blog/2015/01/19/mirageos-xen-virtualbox/">machine in Virtualbox</a> to work with.</p>
<p>For our workflow, we use Xen unikernels which we deploy to a dedicated host.
For the sake of brevity, I won’t go into the details of how to set up
the machine but you can follow the instructions linked above.</p>
<h4 id="the-scripts-for-decksopenmirageorg">The scripts for decks.openmirage.org</h4>
<p><a href="https://github.com/mirage/mirage-decks">Decks</a> is the source repo that holds many of our slides, which
we’ve presented at conferences and events over the years (I admit that I have
yet to <a href="https://github.com/mirage/mirage-decks/issues/49">add mine</a>). The repo compiles to a unikernel that can
then serve those slides, as you see at <a href="http://decks.openmirage.org">decks.openmirage.org</a>. For
maximum fun-factor, we usually run that unikernel from a Cubieboard2 when
giving talks.</p>
<p><img src="http://amirchaudhry.com/images/singles/mirage-cubieboard.jpg" alt="mirage-decks-on-cubieboard" /></p>
<p>The toolchain for this unikernel includes build, store and deploy. We’ll
recap the first two steps before going through the final one.</p>
<p><strong>Build</strong> — In the root of the decks source repo, you’ll notice the
<code>.travis.yml</code> file, which fetches the standard build script mentioned earlier.
Building the unikernel proceeds according to the options in the build matrix. </p>
<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="l-Scalar-Plain">language</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">c</span>
<span class="l-Scalar-Plain">install</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">wget https://raw.githubusercontent.com/ocaml/ocaml-travisci-skeleton/master/.travis-mirage.sh</span>
<span class="l-Scalar-Plain">script</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">bash -ex .travis-mirage.sh</span>
<span class="l-Scalar-Plain">env</span><span class="p-Indicator">:</span>
<span class="l-Scalar-Plain">matrix</span><span class="p-Indicator">:</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">OCAML_VERSION=4.02 MIRAGE_BACKEND=unix MIRAGE_NET=socket</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">OCAML_VERSION=4.02 MIRAGE_BACKEND=unix MIRAGE_NET=direct</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">OCAML_VERSION=4.02 MIRAGE_BACKEND=xen</span>
<span class="l-Scalar-Plain">MIRAGE_ADDR=&quot;46.43.42.134&quot; MIRAGE_MASK=&quot;255.255.255.128&quot; MIRAGE_GWS=&quot;46.43.42.129&quot;</span>
<span class="l-Scalar-Plain">DEPLOY=1</span>
<span class="l-Scalar-Plain">global</span><span class="p-Indicator">:</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">secure</span><span class="p-Indicator">:</span> <span class="s">&quot;....</span><span class="nv"> </span><span class="s">encrypted</span><span class="nv"> </span><span class="s">data</span><span class="nv"> </span><span class="s">....&quot;</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">secure</span><span class="p-Indicator">:</span> <span class="s">&quot;....</span><span class="nv"> </span><span class="s">encrypted</span><span class="nv"> </span><span class="s">data</span><span class="nv"> </span><span class="s">....&quot;</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">secure</span><span class="p-Indicator">:</span> <span class="s">&quot;....</span><span class="nv"> </span><span class="s">encrypted</span><span class="nv"> </span><span class="s">data</span><span class="nv"> </span><span class="s">....&quot;</span>
<span class="l-Scalar-Plain">...</span></code></pre></div>
<p>In this case, two builds occur for Unix and one for Xen with different
parameters being used for each. If you look at the
<a href="https://github.com/mirage/mirage-decks/blob/master/.travis.yml">actual travis file</a>, you’ll notice there are 26 lines of
encrypted data. This is how we pass the deployment key to Travis CI, so that
it has push access to the <em>separate</em> <a href="https://github.com/mirage/mirage-decks-deployment">mirage-decks-deployment</a>
repo. You can read the section in the previous post to see how we
<a href="https://github.com/mirage/mirage-decks-deployment">send Travis a private key</a>.</p>
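<p>A plausible sketch of the split-and-reassemble round trip behind those encrypted lines is below. The file names are illustrative; on Travis, each chunk would be encrypted as its own <code>secure:</code> entry and reassembled by the build script after decryption.</p>

```bash
# Round trip for shipping a deploy key via Travis secure variables.
# `travis encrypt` has a small size limit per variable, hence the
# splitting into chunks (file names are illustrative).
set -e
head -c 1679 /dev/urandom > deploy_key      # stand-in for the private key

# Split: base64-encode without wrapping, then cut into 100-character
# chunks; each line would become one encrypted `secure:` entry.
base64 -w0 deploy_key | fold -w 100 > key_chunks

# Reassemble, as the build script would after the chunks are
# decrypted back into environment variables:
tr -d '\n' < key_chunks | base64 -d > deploy_key.out
cmp deploy_key deploy_key.out               # byte-identical round trip
```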
<p><strong>Store</strong> — One of the combinations in the build matrix (configured for Xen),
is intended for deployment. When that unikernel is completed, an additional
part of the script is triggered that pushes it into the deployment repo. </p>
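<p>That additional step amounts to committing the compressed image and updating a pointer file. The sketch below uses a local bare repository to stand in for GitHub and a dummy file for the built image; on Travis, the real clone (with push access via the deploy key) and <code>$TRAVIS_COMMIT</code> would be used instead.</p>

```bash
# Sketch of the 'store' step: compress the freshly built Xen
# unikernel and push it into the deployment repo.
set -e
VM=mir-decks
BUILD_ID=abc123                          # on Travis: $TRAVIS_COMMIT

git init -q --bare deployment.git        # stand-in for the GitHub repo
git clone -q deployment.git deploy
head -c 64 /dev/urandom > $VM.xen        # stand-in for the built image
bzip2 -9 -k $VM.xen                      # compress the image

mkdir -p deploy/xen/$BUILD_ID
cp $VM.xen.bz2 deploy/xen/$BUILD_ID/
echo $BUILD_ID > deploy/xen/latest       # pointer read by deploy.sh
( cd deploy
  git add xen
  git -c user.name=ci -c user.email=ci@example.org \
      commit -q -m "Travis build $BUILD_ID"
  git push -q origin HEAD )
```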
<h4 id="deployment-scripts">Deployment scripts</h4>
<p>After the ‘build’ and ‘store’ steps above, we have a
<a href="https://github.com/mirage/mirage-decks-deployment">deployment repository</a> with a collection of Xen unikernels. For
this stage, we have a new set of scripts that live in this repo alongside those
unikernels. Specifically, you’ll notice a folder called <code>scripts</code> that
contains four files. </p>
<div class="highlight"><pre><code class="language-bash" data-lang="bash">.
├── Makefile
├── README.md
├── scripts
│   ├── crontab
│   ├── deploy.sh
│   ├── install-hooks.sh
│   └── post-merge.hook
...</code></pre></div>
<p>A quick summary of the setup is that we clone the repo onto our deployment
machine and install some hooks there. Then a simple cronjob will perform
<code>git pull</code> at regular intervals. If a merge event occurs, then it means the
repo has been updated and another script is triggered. That script removes the
currently running unikernel and boots the latest version from the repo. It’s
fairly straightforward and I’ll explain what each of the files does below.</p>
<p><strong>Makefile</strong> — After cloning the repo, run <code>make install</code>. This will trigger
<code>install-hooks.sh</code> to set things up appropriately. It’s worth remembering that
from this point on, the git repo on the deployment machine will not be
identical to the deployment repo on GitHub.</p>
<p><strong>install-hooks.sh</strong> — The first two lines ensure that the commands
will be run from the root of the git repo. The third line symlinks the
<code>post-merge.hook</code> file into the appropriate place within the <code>.git</code> directory.
This is the folder where customized <a href="http://www.git-scm.com/book/en/v2/Customizing-Git-Git-Hooks">git hooks</a> need to be placed in
order to work. The final line adds the file <code>scripts/crontab</code> to the
deployment machine’s list of cron jobs.</p>
<div class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nv">ROOT</span><span class="o">=</span><span class="k">$(</span>git rev-parse --show-toplevel<span class="k">)</span> <span class="c"># obtain path to root of repo</span>
<span class="nb">cd</span> <span class="nv">$ROOT</span>
<span class="c"># symlink the post-merge.sh file into the .git/hooks folder</span>
ln -sf <span class="nv">$ROOT</span>/scripts/post-merge.hook <span class="nv">$ROOT</span>/.git/hooks/post-merge
crontab scripts/crontab <span class="c"># add to list of cron jobs</span></code></pre></div>
<p><strong>crontab</strong> — This file is a cronjob that sets up the deployment machine to
perform a <code>git pull</code> on the deployment repo at regular intervals. Changing the
file in the repo will ultimately cause it to be updated on the deployment
machine (cf. <code>deploy.sh</code>). At the moment, it’s set to run every 11 minutes.</p>
<div class="highlight"><pre><code class="language-bash" data-lang="bash">*/11 * * * * <span class="nb">cd</span> <span class="nv">$HOME</span>/mirage-decks-deployment <span class="o">&amp;&amp;</span> git pull</code></pre></div>
<p><strong>post-merge.hook</strong> — Since we’ve already run the Makefile, this file is
symlinked from the appropriate place on the deployment machine’s copy of the
repo. When a <code>git pull</code> results in new commits being downloaded and merged,
then this script is triggered immediately afterwards. In this case, it just
executes the <code>deploy.sh</code> script.</p>
<div class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nv">ROOT</span><span class="o">=</span><span class="k">$(</span>git rev-parse --show-toplevel<span class="k">)</span> <span class="c"># obtain path to root of repo</span>
<span class="nb">exec</span> <span class="nv">$ROOT</span>/scripts/deploy.sh <span class="c"># execute the deploy script</span></code></pre></div>
<p><strong>deploy.sh</strong> — This is where the work actually happens and you’ll notice that
there really isn’t much to do! I’ve commented in the code below to explain
what’s going on.</p>
<div class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nv">VM</span><span class="o">=</span>mir-decks
<span class="nv">XM</span><span class="o">=</span>xm
<span class="nv">ROOT</span><span class="o">=</span><span class="k">$(</span>git rev-parse --show-toplevel<span class="k">)</span>
<span class="nb">cd</span> <span class="nv">$ROOT</span>
crontab scripts/crontab <span class="c"># Update cron scripts</span>
<span class="c"># Identify the latest build in the repo and then use</span>
<span class="c"># the generic Xen config script to construct a</span>
<span class="c"># specific file for this unikernel. Essentially,</span>
<span class="c"># &#39;sed&#39; just does a find/replace on two elements and</span>
<span class="c"># the result is written to a new file.</span>
<span class="c">#</span>
<span class="nv">KERNEL</span><span class="o">=</span><span class="nv">$ROOT</span>/xen/<span class="sb">`</span>cat xen/latest<span class="sb">`</span>
sed -e <span class="s2">&quot;s,@VM@,$VM,g; s,@KERNEL@,$KERNEL/$VM.xen,g&quot;</span> <span class="se">\</span>
&lt; <span class="nv">$XM</span>.conf.in <span class="se">\</span>
&gt;<span class="p">|</span> <span class="nv">$KERNEL</span>/<span class="nv">$XM</span>.conf
<span class="c"># Move into the folder with the latest unikernel.</span>
<span class="c"># Remove any uncompressed Xen images found there</span>
<span class="c"># (since we may be starting a rebuilt unikernel).</span>
<span class="c"># Unzip the compressed unikernel.</span>
<span class="c">#</span>
<span class="nb">cd</span> <span class="nv">$KERNEL</span>
rm -f <span class="nv">$VM</span>.xen
bunzip2 -k <span class="nv">$VM</span>.xen.bz2
<span class="c"># Instruct Xen to remove the currently running</span>
<span class="c"># unikernel and then start up the new one we</span>
<span class="c"># just unzipped.</span>
<span class="c">#</span>
sudo <span class="nv">$XM</span> destroy <span class="nv">$VM</span> <span class="o">||</span> <span class="nb">true</span>
sudo <span class="nv">$XM</span> create <span class="nv">$XM</span>.conf</code></pre></div>
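<p>For reference, the <code>xm.conf.in</code> template that <code>sed</code> rewrites only needs the two placeholders. A hypothetical minimal version (the memory and network settings here are illustrative) might look like this:</p>

```
# xm.conf.in -- @VM@ and @KERNEL@ are substituted by deploy.sh
name   = "@VM@"
kernel = "@KERNEL@"
memory = 32
vif    = ['bridge=br0']
```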
<p>At this point, we now have a complete system!
Of course, this arrangement isn’t perfect and
there are a number of things we could improve. For example, it depends on a
cron job, which means it may take a while before a new unikernel is live.
Replacing this with something triggered on a webhook could be an improvement,
but it does mean exposing an end-point to the internet. The scripts will also
redeploy the <em>current</em> unikernel, even if the only change is to the crontab
schedule. Some extra work in the deploy script, using some git tools, might
work around this. </p>
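<p>One way to do that extra work would be a guard in the <code>post-merge</code> hook that only runs the deploy script when the pulled commits actually touched the unikernel images (assuming, as above, that they live under <code>xen/</code>). The throwaway demonstration below builds two commits, one touching only <code>scripts/</code> and one touching <code>xen/</code>, and applies the check:</p>

```bash
# Demonstration of the guard in a throwaway repo: only a change
# under xen/ should trigger a redeploy.
set -e
git init -q demo
cd demo
git config user.email hook@example.org
git config user.name hook
mkdir scripts xen
echo 'crontab contents' > scripts/crontab
git add . && git commit -q -m "scripts only"
echo new-build-id > xen/latest
git add . && git commit -q -m "new unikernel"

# The check the hook would perform; HEAD~1..HEAD stands in for
# ORIG_HEAD..HEAD, the range a `git pull` merge brings in.
if git diff --name-only HEAD~1 HEAD | grep -q '^xen/'; then
  echo redeploy      # here the hook would exec scripts/deploy.sh
else
  echo skip
fi
```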
<p>Despite these minor issues, we do have a completely end-to-end workflow that
takes us all the way from pushing some new changes to deploying a new
unikernel! An additional feature is that <em>everything</em> is checked into version
control, right from the scripts to the completed artefacts (including a method of
transmitting secure keys/data over public systems). </p>
<p>There is minimal work done outside the code you’ve already seen, though there
is obviously some effort involved in setting up the deployment machine.
However, as mentioned earlier, you could either use the Unix-based unikernels
or experiment with a <a href="http://www.skjegstad.com/blog/2015/01/19/mirageos-xen-virtualbox/">VirtualBox VM running Xen</a> just to test out this
entire toolchain. </p>
<p>Overall, we’ve only added around 20 lines of code to the initial 50 or so that
we use for the Travis CI build. So for <em>less than 100 lines of code</em>, we have
a <em>complete</em> end-to-end system that can take a MirageOS project from a
<code>git push</code>, all the way through to a live deployment. </p>
<h3 id="fleshing-out-the-backbone">Fleshing out the backbone</h3>
<p>In our current system, if the unikernel <em>builds</em> appropriately then we just
assume it’s ok to deploy to production. Fire and forget! What could
possibly go wrong! Of course, this is a somewhat naive approach and for any
critical system it would be better to hook in some additional things.</p>
<h4 id="testing-frameworks">Testing frameworks</h4>
<p>One obvious improvement would be to introduce a more thorough testing regimen,
which would include running unit tests as part of the build. Indeed, various
libraries in the MirageOS project are already moving towards this model
(e.g. see the <a href="http://openmirage.org/wiki/weekly-2015-03-11#Qualityandtest">notes</a> for links). </p>
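<p>Hooking such tests into the existing setup could be as simple as an extra command in the Travis <code>script</code> phase, so that a failing test suite blocks the store and deploy steps. The target names here are illustrative:</p>

```yaml
script:
  - make test    # run the unit tests first; any failure fails the build
  - make build   # only a fully tested unikernel proceeds to 'store'
```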
<p>It’s even possible to go beyond unit tests and introduce more
functional/systems/stress testing on the complete unikernel before permitting
deployment. This would help to surface any wider issues as services interact
and we could even simulate network conditions — achieving something like
‘staging on steroids’. </p>
<h4 id="logging-and-notifications">Logging and notifications</h4>
<p>The scenario we have above also assumes that things work smoothly and nobody
needs to know anything. It would be useful to hook in some form of logging
and reporting, such that when a new unikernel is deployed a notification can
be sent/stored somewhere. In the short term, there are likely existing tools
and ways of doing this so it would be a matter of putting them together.</p>
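<p>As a stop-gap, even a couple of lines at the end of <code>deploy.sh</code> would leave an audit trail. The log path, <code>$KERNEL</code> and <code>$WEBHOOK_URL</code> below are placeholders:</p>

```bash
# Record each deployment locally and (optionally) ping a webhook.
KERNEL=${KERNEL:-xen/demo-build}   # set by deploy.sh in practice
echo "$(date -u '+%Y-%m-%dT%H:%M:%SZ') deployed $KERNEL" >> deploys.log
# A notification failure should never abort a deploy, hence `|| true`.
[ -n "$WEBHOOK_URL" ] && \
  curl -fs -X POST -d "text=deployed $KERNEL" "$WEBHOOK_URL" || true
```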
<h4 id="looking-ahead">Looking ahead</h4>
<p>Overall, with the above model, we can easily set up a system where we go from
writing code, to testing it via CI, to deploying it to a staging server for
functional tests, and finally pushing it out into live deployment. All of
this can be done with a few additional scripts and minimal interaction from
the developer. We can achieve this because we don’t have to concern ourselves
with large blobs of code, multiple different systems and keeping environments
in sync. Once we’ve built the unikernel, the rest almost becomes trivial. </p>
<p>This is close enough for me to declare it as a ‘Heroku for unikernels’ but
obviously, there’s much more we can (and should) do with such a system. If we
extrapolate <em>just a little</em> from where we are now, there are a range of
exciting possibilities to consider in terms of automation, scalability and
distributed systems. Especially if we incorporate other aspects of the
<a href="http://amirchaudhry.com/brewing-miso-to-serve-nymote/">toolstack we’re working towards</a>. </p>
<p><a href="http://amirchaudhry.com/heroku-for-unikernels-pt2/">Part 2</a> of this series is where I’ll consider these possibilities, which will
be more speculative and less constrained. It will cover the kinds of systems
we can create once the tools are more mature and will touch on ideas around
hyper-elastic clouds, embedded systems and what this means for the concept of
immutable infrastructure.</p>
<p>Since we already have the ‘backbone’ of the toolchain in place, it’s easier to
see where it can be extended and how.</p>
<p><em>Edit: The second part of this series is now up -
“<a href="http://amirchaudhry.com/heroku-for-unikernels-pt2/">Self Scaling Systems</a>”</em></p>
<hr />
<p class="footnote">
Thanks to Anil Madhavapeddy and Thomas Leonard for comments on an earlier
draft and Richard Mortier for his work on the deployment toolchain.
</p>
<h2>The Bitcoin Piñata!</h2>
<p><em>Amir Chaudhry, 2015-02-10, <a href="http://amirchaudhry.com/bitcoin-pinata">http://amirchaudhry.com/bitcoin-pinata</a></em></p>
<p>Last summer we announced the beta release of a clean-slate implementation of
TLS in pure OCaml, alongside a <a href="http://openmirage.org/blog/introducing-ocaml-tls">series of blog posts</a> that described
the libraries and the thinking behind them. It took two hackers six months
— starting on <a href="https://goo.gl/maps/GpcQs">the beach</a> — to get the stack to that point and
their <a href="https://tls.openmirage.org">demo server</a> is still going strong. Since then, the team has
continued working and recently <a href="http://media.ccc.de/browse/congress/2014/31c3_-_6443_-_en_-_saal_2_-_201412271245_-_trustworthy_secure_modular_operating_system_engineering_-_hannes_-_david_kaloper.html#video">presented</a> at the 31st Chaos
Communication Congress.</p>
<p>The latest example goes quite a bit further than a server that just displays
the handshake. This time, the team have constructed a Xen unikernel that’s
holding a private key to a bitcoin address and are asking people to try and
<em>break in</em>. Hence, they’ve called it the <strong><a href="http://ownme.ipredator.se">Bitcoin Piñata</a></strong>!*</p>
<h2 id="what-the-bitcoin-piata-does">What the Bitcoin Piñata does</h2>
<p><a href="http://ownme.ipredator.se"><img src="http://amirchaudhry.com/images/btc-pinata/btc-pinata.png" alt="Bitcoin Pinata" /></a></p>
<p>The Piñata unikernel will transmit its private bitcoin key if you can
successfully set up a TLS connection <strong>but</strong> it’s rigged so that it will <em>only</em>
create that connection if you can present the certificate it’s expecting to
see — which has been <em>signed appropriately</em>. Of course, you’re not being given
the secret key with which to do that signing and that means there should be
<em>no way</em> for anyone to form a TLS connection with the Piñata.
In order to get the private key to the bitcoin address, you’ll have to smash
your way in.</p>
<p>Helpfully (perhaps), things are set up so that you <em>can</em> make the Piñata talk
to itself, allowing you to <a href="http://en.wikipedia.org/wiki/Man-in-the-middle_attack">eavesdrop</a> on a successful connection and
see the encrypted traffic. In addition, all the <a href="https://github.com/mirleft/btc-pinata">code and libraries</a> are
open-source so you can look through any of the codebase. There isn’t anything
that anyone will have to reverse engineer, which should make this a little
more enjoyable.</p>
<p>This contest is set to run until mid-March or whenever the coins are taken.
If someone does manage to get in, please do let us know how!</p>
<h3 id="the-rubber-hose-approach">The Rubber-hose approach</h3>
<p>Of course there are many other ways to get at the private key and as many
people like to comment, the human element is sometimes the weakest link —
after all, a safe is only as secure as the person with the combination.</p>
<p>In this case, there is obviously a secret key or certificate <em>somewhere</em>
that could be presented so it may be tempting to go hunting for that. Perhaps
phishing attempts on the authors may yield a way forward, or maybe just
straight-forward <a href="http://en.wikipedia.org/wiki/Rubber-hose_cryptanalysis">Rubber-hose cryptanalysis</a>! Sure, these
options might provide a result<sup>†</sup> but this is meant to be fun.
The authors haven’t specified any rules but please be nice and focus on the
technical things around the Piñata<sup>‡</sup>. Don’t be this guy.</p>
<p><img src="http://amirchaudhry.com/images/btc-pinata/pinata-kid-bat.gif" alt="Pinata-kid-bat" /></p>
<h2 id="whats-the-point-of-this-contest">What’s the point of this contest?</h2>
<p>Even though the Bitcoin Piñata is clearly a contest, nobody is deluding
themselves into thinking that if the coins are still there in March, that
somehow the stack can be declared ‘undefeated’ — while pleasing, that
result wouldn’t necessarily <em>prove</em> anything. Contests have their place but as
Bruce Schneier <a href="https://www.schneier.com/crypto-gram/archives/1998/1215.html#contests">already pointed out</a>, they are not useful mechanisms
to judge security.</p>
<p>However, it does give us the chance to engage in some shameless self-promotion
and try to draw vast amounts of attention to the work. That, and the chance to
stress-test the stack in the wild. Ultimately, we <em>want</em> to use this code in
production but must take a lot of care to get there and want to be sure that
it can bear up. This is just one more way of learning what happens when
putting something ‘real’ out there. </p>
<p>If the Bitcoins <em>do</em> end up being taken, then there’s <em>definitely</em> something
valuable that the team can learn from that. Regardless of the Piñata, if we
have more people exploring the <a href="https://github.com/mirleft/">TLS codebase</a> or trying it out for
themselves, it will undoubtedly be A Good Thing. </p>
<h5 id="responsible-sidenote">Responsible sidenote</h5>
<p><em>For clarity and to avoid any doubt, please be aware that the TLS codebase is
missing external code audits and is not yet intended for use in any security
critical applications. All development is done in the open, including the
tracking of <a href="https://github.com/mirleft/ocaml-tls/issues?q=label%3A%22security+concern%22+">security-related issues</a>, so please do consider
auditing the code, testing it in your services and reporting issues.</em></p>
<p class="footnote">* If you've never come across a piñata before, hopefully
the gif in the post gives you an idea. If not, the
<a href="https://en.wikipedia.org/wiki/Pinata">wiki page</a>
will surely help, where I learned that the origin may be Chinese rather
than Spanish!
</p>
<p class="footnote"><sup>&dagger;</sup> Of course, I'm not suggesting that
anyone would actually go this far. I'm simply acknowledging that there is
a human factor and asking that we put it aside.
</p>
<p class="footnote"><sup>&Dagger;</sup> Edit to add: After seeing
<a href="https://twitter.com/andreasdotorg/status/565193815183876096">
Andrea's tweet</a> I should point out that <strong>any part of
MirageOS</strong>, including the networking stack, OCaml runtime etc is a
legitimate vector. It's why there's a
<a href="https://raw.githubusercontent.com/mirleft/btc-pinata/master/opam-full.txt">
manifest of the libraries</a> that have been used to build the Piñata!
</p>
<h2>Unikernel demo at FOSDEM</h2>
<p><em>Amir Chaudhry, 2015-02-06, <a href="http://amirchaudhry.com/unikernel-arm-demo-fosdem">http://amirchaudhry.com/unikernel-arm-demo-fosdem</a></em></p>
<p>Last weekend was spent at one of the world’s biggest open source conferences,
FOSDEM. You can check out <a href="http://nymote.org/blog/2014/fosdem-summary/">last year’s review</a> to get an idea of
the scale of the event. Since there’s no registration process, it’s difficult
to estimate how many people attend but given how many rooms there are, and how
full they are, it’s easily several thousand. I was impressed last year at how
smoothly things went and the same was true this year.</p>
<p>The main reason to attend this time was to run a demo of MirageOS from an ARM
board — one of the main advances since the previous conference. I looked over
all the things we’d achieved since last year and put together a demo that
showcases some of the capabilities as well as being fun.</p>
<h3 id="from-a-unikernel-on-an-arm-board">2048 from a Unikernel on an ARM board</h3>
<p>The demo was to serve the 2048 game from a Unikernel running on a Cubieboard2
with its own access point. When people join the wifi network, they get
served a static page and can begin playing the game immediately. </p>
<p>The components I needed for the demo were:</p>
<ul>
<li>
<p>Code for the 2048 game — I was able to lift code from a
<a href="https://github.com/ocamllabs/2048-tutorial/">tutorial last year</a>, which <a href="http://erratique.ch">Daniel</a>, <a href="http://www.lpw25.net">Leo</a>, <a href="https://github.com/yallop">Jeremy</a> and
<a href="http://gazagnaire.org">Thomas</a> all contributed to. It was first run at <a href="http://cufp.org/2014/t7-leo-white-introduction-to-ocaml.html">CUFP 2014</a> then
adapted and presented at <a href="http://booking.agilefaqs.com/functional-conf-2014#workshop-52-info">Functional Conf</a> in India (see the
<a href="http://gazagnaire.org/fuconf14/">IOCaml notebook</a>). Attendees wrote the code in OCaml, which was
then compiled into pure JavaScript (via <a href="http://ocsigen.org/js_of_ocaml/">js_of_ocaml</a>). The result can be run
completely in the browser and only involves serving two files.</p>
</li>
<li>
<p>Code for making a static website — Since the game is completely
self-contained (one HTML file and one JS file), I only needed to convert a static
website into a unikernel. That’s trivial and
<a href="http://amirchaudhry.com/unikernels-for-everyone/">many people have done it before</a>.</p>
</li>
<li>
<p>A Cubieboard with a wifi access point — There are pre-built images on the
<a href="http://blobs.openmirage.org">MirageOS website</a>, which make part of this easy. However, getting the
wifi access point up involves a few more steps.</p>
</li>
</ul>
<p>The first two pieces should be straightforward and indeed, I had a working
unikernel serving the 2048 game within minutes (the Unix version on my laptop).
The ARM deployment is where things were a little
more involved. Although it was technically straightforward to set up, it
still took a while to get all the pieces together. A more detailed
description of the steps is in my <a href="https://github.com/amirmc/fosdemo">fosdemo repository</a> and in
essence, it revolves around configuring the wifi access point and setting up a
bridge (thanks to <a href="http://somerandomidiot.com">Mindy</a>, <a href="http://www.skjegstad.com">Magnus</a> and <a href="https://github.com/pqwy">David</a> for getting this
working).</p>
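<p>The gist of that configuration is a network bridge joining the board’s wired interface to a hostapd-managed wireless interface. A rough sketch of the hostapd side is below; the interface name, SSID and channel are illustrative, and the actual steps are in the fosdemo repository:</p>

```
# /etc/hostapd/hostapd.conf (illustrative values)
interface=wlan0
bridge=br0
ssid=2048game
hw_mode=g
channel=6
```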
<p>Once this was all up and running, it was a simple matter to configure the
board to boot the unikernel on startup, so that no manual intervention would
be required to set things up at the booth.</p>
<h4 id="running-the-demo">Running the demo</h4>
<p>I gave the demo at the Xen booth and it went very well. There was a small
crowd throughout my time at the booth and I’m convinced that the draw of a board
with glowing LEDs should not be underestimated. Many people were happy to
connect to the access point and download the game to their browser, but there
were two main things I learnt.</p>
<p>Firstly, demos involving games will work if people actually <em>know</em> the game.
This is obvious, but I’d assumed that most people had already played 2048 —
especially the crowd I’d expect to meet at FOSDEM. It turned out that around
a third of people had no idea what to do when the game loaded onto their
browser. They stared blankly at it and then blankly at me. Of course, it was
trivial to get them started and they were soon absorbed by it — but it still
felt like some of the ‘cool-factor’ had been lost.</p>
<p>The second thing was that I tried to explain too much to people in much too
short a time. This particular demo involved Xen unikernels, js_of_ocaml and a
Cubieboard2 with a wifi access point.
There’s a surprisingly large amount of technology there, which
is difficult to explain to a complete newcomer within one or two minutes. When
it was obvious someone hadn’t heard of unikernels, I focused on the approach
of library operating systems and the benefits that Mirage brings. If a visitor
was already familiar with the concept of unikernels, I could describe the rest
of the demo in more detail.</p>
<p>Everything else did go well and next time I’d like to have a demo like this
running with <a href="https://github.com/MagnusS/jitsu">Jitsu</a>. That way, I could configure it so that a unikernel
would spin up, serve the static page and then spin down again. If we can
figure out the timing, then providing stats in the page about the lifetime of
that unikernel would also be great, but that’s for another time.</p>
<p><a href="https://twitter.com/amirmc/status/561525704161243137"><img src="http://amirchaudhry.com/images/web/fosdem15-tweet.png" alt="Tweet at FOSDEM 2015" /></a></p>
<h5 id="sidenote-the-beginnings-of-a-personal-cloud">Sidenote: The beginnings of a ‘personal cloud’</h5>
<p>One of the things we’re keen to work towards is the idea of
<a href="http://nymote.org">personal clouds</a>. It’s not a stretch to imagine that a Cubieboard2,
running the appropriate software, could act as one particular node in a
network of your own devices. In this instance it’s just hosting a fun and
simple game but more complex applications are also possible.</p>
<h3 id="huge-range-of-sessions-and-talks">Huge range of sessions and talks</h3>
<p>Of course, there was lots more going on than just my demo and I had a great
time attending the talks. Some in particular that stood out to me were those
in the <a href="https://fosdem.org/2015/schedule/track/open_source_design/">open source design</a> room, which was a new addition this year. It
was great to learn that there are design people out there who would like to
contribute to open source (<a href="https://twitter.com/amirmc">get in touch</a>, if that’s you!). I also had a
chance to meet (and thank!) Mike McQuaid in his <a href="https://fosdem.org/2015/schedule/event/homebrew_the_good,_bad_and_ugly_of_osx_packaging/">Homebrew talk</a>.
FOSDEM is one of those great events where you can meet in person all those
folks you’ve only interacted with online.</p>
<p>Overall, it was a great trip and I thoroughly recommend it if you’ve never
been before!</p>
<h2>Brewing MISO to serve Nymote</h2>
<p><em>Amir Chaudhry, 2015-01-20, <a href="http://amirchaudhry.com/brewing-miso-to-serve-nymote">http://amirchaudhry.com/brewing-miso-to-serve-nymote</a></em></p>
<p>The <a href="http://nymote.org/blog/2013/introducing-nymote/">mission of Nymote</a> is to enable the creation of resilient
decentralised systems that incorporate privacy from the ground up, so that
users retain control of their networks and data. To achieve this, we
reconsider all the old assumptions about how software is created in light of
the problems of the modern, networked environment. Problems that will become
even more pronounced as more devices and sensors find their way into our lives.</p>
<p>We want to make it simple for anyone to be able to run a piece of the cloud
for their own purposes and the first three applications Nymote targets are
Mail, Contacts and Calendars, but to get there, we first have to create solid
foundations.</p>
<h3 id="defining-the-bedrock">Defining the bedrock</h3>
<p>In order to create applications that work for the user, we first have to
create a robust and reliable software stack that takes care of fundamental
problems for us. In other words, to be able to assemble the applications we
desire, we must first construct the correct building blocks.</p>
<p>We’ve taken a clean-slate approach so that we can build long-lasting solutions
with all the benefits of hindsight but none of the baggage. As
mentioned in earlier posts, there are three main components of the stack,
which are: <a href="http://nymote.org/software/mirage/">Mirage</a> (OS for the Cloud/IoT), <a href="http://nymote.org/software/irmin/">Irmin</a> (distributed datastore)
and <a href="http://nymote.org/software/signpost/">Signpost</a> (identity and connectivity) - all built using the <a href="http://ocaml.org">OCaml</a>
programming language.</p>
<h4 id="using-the-miso-stack-to-build-nymote">Using the MISO stack to build Nymote</h4>
<p>As you’ve already noticed, there’s a useful acronym for the above tools —
<strong>MISO</strong>. Each of the projects mentioned is a serious undertaking in its own
right and each is likely to be impactful as a stand-alone concept. However,
when used together we have the opportunity to create applications and services
with high levels of security, scalability and stability, which are not easy to
achieve using other means. </p>
<p>In other words, MISO is the <em>toolstack</em> that we’re using to build Nymote —
Nymote is the <em>decentralised system</em> that works for its users.</p>
<p>Each of the projects is at a different phase, but they have all made great
strides over the last year.</p>
<h4 id="mirage">Mirage</h4>
<p>Mirage — a library operating system that constructs unikernels — is the most
mature part of the stack. I previously wrote about the
<a href="http://nymote.org/blog/2014/announcing-first-mirage-release/">Mirage 1.0 release</a> and only six months later we had an
<a href="http://openmirage.org/blog/announcing-mirage-20-release">impressive 2.0 release</a>, with continuing advances throughout the year.
We achieved major milestones such as the ability to deploy unikernels to
ARM-based devices, as well as a clean-slate implementation of the transport
layer security (TLS) protocol.</p>
<p>In addition to the development efforts, there have also been many
presentations to audiences, ranging from <a href="http://amirchaudhry.com/describing-miso-entrepreneur-first-2014/">small groups of startups</a>
all the way to <a href="http://media.ccc.de/browse/congress/2014/31c3_-_6443_-_en_-_saal_2_-_201412271245_-_trustworthy_secure_modular_operating_system_engineering_-_hannes_-_david_kaloper.html#video">prestigious keynotes</a> with 1000+ attendees. Ever
since we’ve had ARM support, the talks themselves have been delivered from
unikernels running on Cubieboards and you can see the growing collection of
slides at <a href="http://decks.openmirage.org">decks.openmirage.org</a>.</p>
<p>All of these activities have led to a tremendous increase in public awareness
of unikernels and the value they can bring to developing robust, modern
software as well as the promise of <a href="https://medium.com/@darrenrush/after-docker-unikernels-and-immutable-infrastructure-93d5a91c849e">immutable infrastructure</a>.
As more people look to get involved and contribute to the codebase, we’ve also
begun curating a set of <a href="https://github.com/mirage/mirage-www/wiki/Pioneer-Projects">Pioneer Projects</a>, which are suitable for a
range of skill-levels.</p>
<p>You can find much more information on all the activities of 2014 in the
comprehensive <a href="http://openmirage.org/blog/2014-in-review">Mirage review post</a>. As it’s the most mature
component of the MISO stack, anyone interested in the <em>development of code</em>
towards Nymote should join the <a href="http://lists.xenproject.org/cgi-bin/mailman/listinfo/mirageos-devel">Mirage mailing list</a>.</p>
<ul>
<li><em>Source code</em> - <a href="https://github.com/mirage">Mirage org on GitHub</a></li>
</ul>
<h4 id="irmin">Irmin</h4>
<p>Irmin — a library to persist and synchronize distributed data structures —
made significant progress last year. It’s based on the principles of Git, the
distributed version control system, and allows developers to choose the
appropriate combination of consistency, availability and partition tolerance
for their needs.</p>
<p>Early last year Irmin was released as an alpha with the ability to speak
‘fluent Git’ and by the summer, it was supporting user-defined merge
operations and fast in-memory views. A couple of summer projects improved the
<a href="http://gazagnaire.org/pub/FGM15.pdf">merge strategies</a> and synchronisation strategies, while an
external project — Xenstore — used Irmin to <a href="http://openmirage.org/blog/introducing-irmin-in-xenstore">add fault-tolerance</a>.</p>
<p>More recent work has involved a big clean-up in the user-facing API (with nice
<a href="http://samoht.github.io/irmin/">developer documentation</a>) and a cleaner high-level REST API.
Upcoming work includes proper documentation of the REST API, which means Irmin
can more easily be used in non-OCaml projects, and full integration with
Mirage projects. </p>
<p>Irmin is already being used to create
<a href="https://opam.ocaml.org/packages/imaplet-lwt/imaplet-lwt.0.1.3/">a version controlled IMAP server</a> and
<a href="https://github.com/samoht/dog">a version controlled distributed log system</a>. It’s no surprise
that the first major release is coming <a href="https://github.com/mirage/irmin/issues?q=is%3Aopen+is%3Aissue+milestone%3A1.0.0">very soon</a>!</p>
<ul>
<li><em>Source code</em> - <a href="https://github.com/mirage/irmin">Irmin on GitHub</a></li>
</ul>
<h4 id="signpost">Signpost</h4>
<p>Signpost will be a collection of libraries that aims to provide identity and
connectivity between devices. Forming efficient connections between
end-points is becoming ever more important as the number of devices we own
increases. These devices need to be able to recognise and reach each other,
regardless of their location on the network or the obstacles in between. </p>
<p>This is very much a nascent project and it involves a lot of work on
underlying libraries to ensure that security aspects are properly considered.
As such, we must take great care in how we implement things and be clear about
any trade-offs we make. Our thoughts are beginning to converge on a design we
think will work and that we would entrust with our own data, but we’re
treating this as a case of ‘Here Be Dragons’. This is a critical piece of the
stack and we’ll share what we learn as we chart our way towards it.</p>
<p>Even though we’re at the design stage of Signpost, we did substantial work
last year to create the libraries we might use for implementation. A
particularly exciting one is <a href="https://github.com/MagnusS/jitsu">Jitsu</a> — which stands for Just In Time
Summoning of Unikernels. This is a DNS server that spawns unikernels in
response to DNS requests and boots them in real-time with no perceptible lag
to the end user. In other words, it makes much more efficient use of
resources and significantly reduces latency of services for end-users —
services are only run <em>when</em> they need to be, in the <em>places</em> they need to be. </p>
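<p>The core behaviour can be caricatured in a few lines of OCaml. Everything
below — the names, the address scheme, the state handling — is invented for
illustration and bears no relation to Jitsu’s actual code.</p>

```ocaml
(* Hypothetical sketch of just-in-time summoning: answer a DNS query
   by booting the unikernel that serves the requested name on demand. *)
type vm_state = Running of string  (* the VM's IP address *)

let vms : (string, vm_state) Hashtbl.t = Hashtbl.create 16

(* Stand-in for actually booting a unikernel and learning its IP. *)
let boot_unikernel name =
  Printf.sprintf "10.0.0.%d" (Hashtbl.hash name mod 254 + 1)

let handle_dns_query name =
  match Hashtbl.find_opt vms name with
  | Some (Running ip) -> ip            (* already up: answer immediately *)
  | None ->
    let ip = boot_unikernel name in    (* boot in real time... *)
    Hashtbl.replace vms name (Running ip);
    ip                                 (* ...then answer the query *)
```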
<p>There’s also been lots of efforts on other libraries that will help us
<em>iterate towards</em> a complete solution. Initially, we will use pre-existing
implementations but in time we can take what we’ve learned and create more
robust alternatives. Some of the libraries are listed below (but note the
friendly disclaimers!). </p>
<ul>
<li><em>Source code</em>
<ul>
<li><a href="https://github.com/dsheets/ocaml-sodium">Bindings to libsodium</a></li>
<li><a href="https://github.com/dsheets/ocaml-dnscurve">Implementation of DNSCurve</a></li>
<li><a href="https://github.com/dsheets/ocaml-libmacaroons">Bindings to libmacaroons</a></li>
</ul>
</li>
</ul>
<h4 id="ocaml">OCaml</h4>
<p><a href="http://ocaml.org">OCaml</a> is a mature, powerful and highly pragmatic language. It’s
proven ideal for creating robust systems applications and
<a href="http://ocaml.org/learn/companies.html">many others</a> also recognise this. We’re using it to create all the
tools you’ve read about so far and we’re also helping to improve the ecosystem
around it.</p>
<p>One of the major things we’ve been involved with is the coordination of the
OCaml Platform, which combines the OCaml compiler with a coherent set of tools
and workflows to be more productive in the language and speed up development
time. We presented the first major release of these efforts at OCaml 2014 and
you can <a href="http://ocaml.org/meetings/ocaml/2014/ocaml2014_7.pdf">read the abstract</a> or <a href="https://www.youtube.com/watch?v=jxhtpQ5nJHg&amp;list=UUP9g4dLR7xt6KzCYntNqYcw">watch the video</a>.</p>
<p>There’s more to come, as we continue to improve the tooling and also support
the community in <a href="http://amirchaudhry.com/towards-governance-framework-for-ocamlorg">other ways</a>.</p>
<h3 id="early-steps-towards-applications">Early steps towards applications</h3>
<p>Building blocks are important but we also need to push towards working
applications. There are different approaches we’ve taken to this, which
include building prototypes, wireframing use-cases and implementing features
with other toolstacks. Some of this work is also part of a larger
<a href="http://usercentricnetworking.eu">EU funded project</a>* and below are brief summaries of the things we’ve
done so far. We’ll expand on them as we do more over time.</p>
<p><strong>Mail</strong> - As mentioned above, a prototype IMAP server exists (<a href="https://opam.ocaml.org/packages/imaplet-lwt/imaplet-lwt.0.1.3/">IMAPlet</a>)
which uses Irmin to store data. This is already able to connect to a client to
serve mail. The important feature is that it’s an IMAP server which is version
controlled in the backend and can expose a REST API from the mailstore quite
easily.</p>
<p><strong>Contacts</strong> - We first made wireframe mockups of the features we might like
in a contacts app (to follow in later post) and then built a
<a href="https://github.com/yansh/contacts-app">draft implementation</a>. To get here, code was first written in OCaml
and then put through the <a href="http://ocsigen.org/js_of_ocaml/">js_of_ocaml</a> compiler. This is valuable as it
takes us closer to a point where we can build networks using our address books
and have the system take care of sharing details in a privacy-conscious manner
and with minimal maintenance. The <a href="http://yansnotes.blogspot.co.uk/2015/01/work-summary-ocaml-labs.html">summary post</a> has more detail.</p>
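<p>The js_of_ocaml route means ordinary OCaml bytecode is translated to
JavaScript, so most application logic needs no browser-specific code at all. A
minimal sketch of the round trip (the build commands in the comment assume
js_of_ocaml is installed via OPAM):</p>

```ocaml
(* main.ml — plain OCaml; js_of_ocaml maps print_endline onto the
   JavaScript console, so this runs unchanged in a browser. *)
let greet name = "Hello, " ^ name ^ "!"
let () = print_endline (greet "contacts app")

(* Compile to bytecode, then translate to JavaScript:
     ocamlfind ocamlc main.ml -o main.byte
     js_of_ocaml main.byte     (* emits main.js for a web page *) *)
```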
<p><strong>Calendar</strong> - This use-case was approached in a completely different way as
part of a hackathon last year. A rough but functional prototype was built over
one weekend, with a team formed at the event. It was centralised but it
tested the idea that a service which integrates intimately with your life (to
the point of being very invasive) can provide disproportionate benefits. The
<a href="http://seedcamp.com/seedhack-5-0/">experience report</a> describes the weekend and our app — Clarity —
won first place. This was <em>great</em> validation that the features are desirable
so we need to work towards a decentralised, privacy-conscious version.</p>
<h3 id="time-to-get-involved">Time to get involved!</h3>
<p>The coming year represents the best time to be working on the MISO stack and
using it to make Nymote a reality. All source code is publicly available and
the projects are varied enough that there is something for everyone. Browse
through issues, consider the <a href="https://github.com/mirage/mirage-www/wiki/Pioneer-Projects">projects</a> or simply write online and
share with us the things you’d like to see.
This promises to be an exciting year!</p>
<p><em>Sign up to the <a href="http://nymote.us5.list-manage.com/subscribe?u=8a83b2d5453bba2ee5838b4ad&amp;id=a41245094c">Nymote mailing list</a> to keep up to date!</em></p>
<p class="footnote">* The research leading to these results has received
funding from the European Union's Seventh Framework Programme FP7/2007-2013
under the UCN project, grant agreement no 611001.
</p>
<!-- ========================================================= -->
Unikernels for everyone!Amir Chaudhry2015-01-19T13:00:00+00:00http://amirchaudhry.com/unikernels-for-everyone
<p>Many people have now set up unikernels for blogs, documenting their
experiences for others to follow. Even more important is that people are
going beyond static sites to build unikernels that provide more complicated
services and solve real-world problems.</p>
<p>To help newcomers get started, there are now even more posts that use
different tools and target different deployment methods. Below are summaries
of some of the posts I found interesting and that will make it easier for you
to try out different ways of creating and deploying your unikernels.</p>
<h3 id="unikernel-blogs-with-mirageos">Unikernel blogs with MirageOS</h3>
<p>Mindy picked up where the <a href="http://amirchaudhry.com/from-jekyll-to-unikernel-in-fifty-lines/">first set of instructions</a>
finished and described her work to get an Octopress blog running on Amazon EC2.
As one of the first people outside the core team to work on this, she had a
lot of interesting experiences — which included getting into the Mirage
networking stack to debug an issue and submit a bugfix! More recently, she
also wrote a couple of excellent posts on <em>why</em> she uses a unikernel for her
blog. These posts cover the security concerns (and responsibility) of running
networked services on today’s Internet and the importance of owning your
content — both ideas are at the heart of the work behind <a href="http://nymote.org/">Nymote</a> and are
well worth reading.</p>
<ul>
<li><em>Mindy’s posts</em>
<ul>
<li><em>Overview</em> - <a href="http://www.somerandomidiot.com/blog/2014/08/18/i-am-unikernel/">“I Am Unikernel (and So Can You!)”</a></li>
<li><em>First in her Mirage series</em> - <a href="http://www.somerandomidiot.com/blog/2014/03/14/its-a-mirage/">“It’s a Mirage! (or, How to Shave a Yak.)”</a></li>
<li><a href="http://www.somerandomidiot.com/blog/2014/08/11/attack-surface-area/">“Attack Surface: Why I Unikernel, Part 1”</a></li>
<li><a href="http://www.somerandomidiot.com/blog/2014/08/14/my-content-is-mine/">“My Content Is Mine: Why I Unikernel, Part 2”</a></li>
</ul>
</li>
</ul>
<p>Ian took a different path to AWS deployment by using Vagrant and Test Kitchen
to get his static site together and build his unikernel, and then Packer to
create the images for deployment to EC2. All succinctly explained with code
available on GitHub for others to try out!</p>
<ul>
<li><em>Ian on</em> <a href="https://github.com/iw/mirage-jekyll">“Mirage with Jekyll on Amazon EC2”</a></li>
</ul>
<p>Toby wanted to put together a blog that was a little more complicated than a
traditional static site, with specific features like subdomains based on tags
and the ability to set future dates for posts. He also pulled in some other
libraries so he can use Mustache for server-side rendering, where his blog
posts and metadata are stored as JSON and rendered on request.</p>
<ul>
<li><em>Toby on</em> <a href="http://ocaml.is-awesome.net/2014/11/building-a-blog-with-mirage-os">“Building a Blog with MirageOS”</a></li>
</ul>
<p>Chris saw others working to get unikernel blogs on EC2 and decided he’d try
getting his up and running on Linode instead. He is the first person to
deploy his unikernel to Linode and he provided a great walkthrough with helpful
screenshots, as well as brief notes about the handful of differences compared
with EC2. Chris also wrote about the issue he had with clean URLs (i.e
serving <code>/about/index.html</code> when a user visits <code>/about/</code>) — he describes the
things he tried out until he was finally able to fix it. </p>
<ul>
<li><em>Chris’ posts</em>
<ul>
<li><em>Setting up a unikernel</em> - <a href="http://christopherbothwell.com/ocaml/mirage/2014/12/03/about-not-found.html">“About Not Found”</a></li>
<li><em>Deploying to Linode</em> - <a href="http://christopherbothwell.com/ocaml/mirage/linode/2014/12/08/hello-linode.html">“Hello Linode”</a></li>
</ul>
</li>
</ul>
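<p>The clean-URL fix Chris describes comes down to a small rewrite of the
requested path before looking it up in the store — sketched here in plain
OCaml rather than his actual code:</p>

```ocaml
(* Resolve directory-style URLs to the index file inside them, so a
   request for /about/ is served from /about/index.html. *)
let resolve_path path =
  if path = "" || path = "/" then "/index.html"
  else if path.[String.length path - 1] = '/' then path ^ "index.html"
  else path
```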
<p>Phil focused on getting unikernels running on Cubieboards, which are
ARM-based development boards similar to the Raspberry Pi. He starts by taking
Mirage’s pre-built <a href="http://blobs.openmirage.org">Cubieboard images</a> — which make it easy to
get Xen and an OCaml environment set up — and installing one on the board. He
also noted the issues he came across, along with the simple tweaks he made to
fix them, and finally serves a Mirage hello
world page.</p>
<ul>
<li><em>Phil on</em> <a href="http://philtomson.github.io/blog/2014/09/10/some-notes-on-building-and-running-mirage-unikernels-on-cubieboard2/">“Some Notes on Building and Running Mirage Unikernels on Cubieboard2”</a></li>
</ul>
<h3 id="more-than-just-static-sites">More than just static sites</h3>
<p>Static sites have become the new ‘hello world’ app. They’re simple to manage,
low-risk and provide lots of opportunities to experience something new. These
aspects make them ideal for discovering the benefits (and trade offs) of the
unikernel approach and I look forward to seeing what variations people come up
with — for instance, there aren’t any public instructions for deploying to
Rackspace, so it would be great to read about someone’s experiences there.
However, there are many other applications that fit the above
criteria of simplicity, low risk and plentiful learning opportunities. </p>
<p>Thomas Leonard decided to create a unikernel for a simple REST service for
queuing package uploads for 0install. His post takes you from the very
beginning, with a simple hello world program running on Xen, all the way
through to creating his REST service. Along the way there are lots of code
snippets and explanations of the libraries being used and what they’re doing.
This is a great use-case for unikernels and there are a lot of interesting
things to take from this post, for example the ease with which Thomas was able
to find and fix bugs using regular tools. There’s also lots of information on
performance testing and optimisation of the unikernel, which he covers in a
follow-up post, and he even built tools to visualise the traces. </p>
<ul>
<li><em>Thomas’ posts</em>
<ul>
<li><em>Hello world and REST service</em> - <a href="http://roscidus.com/blog/blog/2014/07/28/my-first-unikernel/">“My First Unikernel”</a></li>
<li><em>Profiling and optimisations</em> - <a href="http://roscidus.com/blog/blog/2014/08/15/optimising-the-unikernel/">“Optimising the Unikernel”</a></li>
<li><em>Tool to visualise traces</em> - <a href="http://roscidus.com/blog/blog/2014/10/27/visualising-an-asynchronous-monad">“Visualising an Asynchronous Monad”</a></li>
</ul>
</li>
</ul>
<p>Of course, there’s much more activity out there than described in this post as
people continually propose ideas on the <a href="http://lists.xenproject.org/cgi-bin/mailman/listinfo/mirageos-devel">Mirage mailing list</a> — both
for things they would like to try out and issues they came up against. In my
<a href="http://nymote.org/blog/2014/from-jekyll-site-to-unikernel/">last post</a>, I pointed out that the workflow is applicable to any
type of unikernel and, as Thomas showed, with a bit of effort it’s already
possible to create useful, real-world services using the many libraries that
already exist. There’s also a lot of scaffolding in the <a href="https://github.com/mirage/mirage-skeleton">mirage-skeleton</a>
repo that you can build on which makes it even easier to get involved. If you
want to dive deeper into the libraries and perhaps learn OCaml, there are lots
of <a href="http://ocaml.org/learn/books.html">resources online</a> and <a href="https://github.com/mirage/mirage-www/wiki/Pioneer-Projects">projects</a> to get involved with too.</p>
<p>Now is a great time to try building a unikernel for yourself and as you can
see from the posts above, shared experiences help other people progress
further and branch out into new areas. When you’ve had a chance to try
something out please do share your experiences online! </p>
<p class="footnote">This post also appears on the <a href="http://nymote.org/blog/2015/unikernels-for-everyone/">Nymote blog</a>.</p>
<!-- ===== LINKS ===== -->
Towards a governance framework for OCaml.orgAmir Chaudhry2015-01-08T18:15:00+00:00http://amirchaudhry.com/towards-governance-framework-for-ocamlorg
<p>The projects around the OCaml.org domain name are becoming more established
and it’s time to think about how they’re organised. 2014 saw a <em>lot</em> of
activity, which built on the <a href="http://www.cl.cam.ac.uk/projects/ocamllabs/news/index.html#OnlineatOCamlorg">successes from 2013</a>.
Some of the main things that stand out to me are:</p>
<ul>
<li>More <a href="http://ocaml.org/contributors.html">volunteers</a> contributing to the public website with
translations, bug fixes and content updates, as well as many new visitors —
for example, the new page on <a href="http://ocaml.org/learn/teaching-ocaml.html">teaching OCaml</a> received over 5k
visits alone. The increasing contributions are a result of the earlier work on
<a href="http://amirchaudhry.com/announcing-new-ocamlorg/">re-engineering the site</a> and there are many ways to get involved
so please do <a href="https://github.com/ocaml/ocaml.org/labels/contribute%21">contribute</a>!</li>
</ul>
<p><a href="http://opam.ocaml.org/"><img style="float: right; margin-left: 10px" src="http://amirchaudhry.com/images/web/opampkg-2015-01-08.png" /></a></p>
<ul>
<li>The relentless improvements and growth of OPAM, both in terms of the
repository — with over 1000 additional packages and several
<a href="http://lists.ocaml.org/pipermail/opam-devel/2014-October/000781.html">new repo maintainers</a> — and also improved workflows (e.g the new
<a href="http://opam.ocaml.org/blog/opam-1-2-pin/">pin functionality</a>).
The OPAM site and package list also moved to the ocaml.org domain, becoming
the substrate for the OCaml Platform efforts. This began with the work towards
<a href="http://opam.ocaml.org/blog/opam-1-2-0-beta4/">OPAM 1.2</a> and there is much more to come (including closer
integration in terms of styling). Join the <a href="http://lists.ocaml.org/listinfo/platform">Platform list</a> to
keep up to date.</li>
</ul>
<ul>
<li>Much more activity on the <a href="http://lists.ocaml.org">mailing lists</a> in general and user groups
requesting to have lists made (e.g the <a href="http://lists.ocaml.org/listinfo/teaching">teaching list</a>). If anyone
has a need for a new list, just ask on the
<a href="http://lists.ocaml.org/listinfo/infrastructure">infrastructure list</a>!</li>
</ul>
<p>There is other work besides what I’ve mentioned and I think that, by any measure,
all the projects have been quite successful. As the community continues to
develop, it’s important to clarify how things currently work to improve the
level of transparency and make it easier for newcomers to get involved.</p>
<h3 id="factors-for-a-governance-framework">Factors for a governance framework</h3>
<p>For the last couple of months, I’ve been looking over how larger projects
manage themselves and the governance documents that are available. My aim has
been to put such a document together for the OCaml.org domain without
introducing burdensome processes. There are a number of things that stood out
to me during this process, which have guided the approach I’m taking.</p>
<p>My considerations for an OCaml.org governance document:</p>
<ul>
<li>
<p>A governance document is not <em>necessary</em> for success but it’s valuable to
demonstrate a commitment to a <strong>stable decision-making process</strong>. There are
many projects that progress perfectly well without any documented processes
and indeed the work around OCaml.org so far is a good example of this (as well
as OCaml itself). However, for projects to achieve a scale greater than the
initial teams, it’s a significant benefit to encode in writing how things work
(NB: I didn’t define the <em>type</em> of decision-making process —
merely that it’s a stable one).</p>
</li>
<li>
<p>It must <strong>clarify its scope</strong> so that there is no confusion about what the
document covers. In the case of OCaml.org, it needs to be clear that the
governance covers the domain itself, rather than referring to the website. </p>
</li>
<li>
<p>It should <strong>document the reality</strong>, rather than represent an aspirational
goal or what people <em>believe</em> a governance structure should look like. It’s
very tempting to think of an idealised structure without recognising that
behaviours and norms have <em>already</em> been established. Sometimes this will be
vague and poorly defined but that might simply indicate areas that the
community hasn’t encountered yet (e.g it’s uncommon for any new project to
seriously think about dispute resolution processes until they have to). In
this sense, the initial version of a governance document should simply be a
written description of how things currently stand, rather than a means to
adjust that behaviour. </p>
</li>
<li>
<p>It should be <strong>simple and self-contained</strong>, so that anyone can understand
the intent quickly without recourse to other documents. It may be tempting to
consider every edge-case or try to resolve every likely ambiguity but this
just leads to large, legal documents. This approach may well be necessary
once projects have reached a certain scale but to implement it sooner would be
a case of premature optimisation — not to mention that very few people would
read and remember such a document.</p>
</li>
<li>
<p>It’s a <strong>living document</strong>. If the community decides that it would prefer a
new arrangement, then the document conveniently provides a stable starting
point from which to iterate. Indeed, it <em>should</em> adapt along with the project
that it governs. </p>
</li>
</ul>
<p>With the above points in mind, I’ve been putting together a draft governance
framework to cover how the OCaml.org domain name is managed. It’s been a
quiet work-in-progress for some time and I’ll be getting in touch with
maintainers of specific projects soon. Once I’ve had a round of reviews, I’ll
be sharing it more widely and posting it here!</p>
<!-- [![FIGURE 06.1 Governance versus anarchy on Flickr](http://amirchaudhry.com/images/web/governance-alpha.png)](https://www.flickr.com/photos/jurgenappelo/5201270923/) -->
Writing Planet in pure OCamlAmir Chaudhry2014-04-29T09:30:00+00:00http://amirchaudhry.com/writing-planet-in-pure-ocaml
<p>I’ve been learning OCaml for some time now but not really had a problem that
I wanted to solve. As such, my progress has been rather slow and sporadic
and I only make time for exercises when I’m travelling. In order to focus my
learning, I have to identify and tackle something specific. That’s usually
the best way to advance and I recently found something I can work on.</p>
<p>As I’ve been trying to write more blog posts, I want to be able to keep as
much content on my own site as possible and syndicate my posts out to other
sites I run. Put simply, I want to be able to take multiple feeds from
different sources and merge them into one feed, which will be served from
some other site. In addition, I also want to render that feed as HTML on a
webpage. All of this has to remain within the OCaml toolchain so it can be
used as part of <a href="http://openmirage.org/">Mirage</a> (i.e. I can use it when
<a href="http://amirchaudhry.com/from-jekyll-to-unikernel-in-fifty-lines">building unikernels</a>). </p>
<p>What I’m describing might sound familiar and there’s a well-known tool that
does this called <a href="http://en.wikipedia.org/wiki/Planet_(software)">Planet</a>. It’s a ‘river of news’ feed reader, which
aggregates feeds and can display posts on webpages and you can find the
<a href="http://www.planetplanet.org">original Planet</a> and its successor <a href="http://intertwingly.net/code/venus/docs/index.html">Venus</a>, both written in Python.
However, Venus seems to be unmaintained as there are a number of
<a href="https://github.com/rubys/venus/issues">unresolved issues and pull requests</a>, which have been
languishing for quite some time with no discussion. There does appear to be
a more active Ruby implementation called <a href="http://feedreader.github.io/">Pluto</a>, with recent commits and
no reported issues.</p>
<!--
\[Rant: Frankly, the naming of these versions leaves a lot to be desired.
When you know exactly what you're supposed to Google for you're fine, but
until then you're just on a random-walk through space websites. I'm
lucky I managed to get to the Wikipedia page.\]
-->
<h3 id="benefits-of-a-planet-in-pure-ocaml">Benefits of a Planet in pure OCaml</h3>
<p>Although I could use one of the above options, it would be much more
useful to keep everything within the OCaml ecosystem. This way I can make
the best use of the <a href="https://queue.acm.org/detail.cfm?id=2566628">unikernel approach</a> with Mirage (i.e lean,
single-purpose appliances). Obviously, the existing options don’t lend
themselves to this approach and there are <a href="https://forge.ocamlcore.org/tracker/index.php?func=detail&amp;aid=1349&amp;group_id=1&amp;atid=101">known bugs</a> as a lot has
changed on the web since Planet Venus (e.g the adoption of HTML5).
Having said that, I can learn a lot from the existing implementations and
I’m glad I’m not embarking into completely uncharted territory.</p>
<p>In addition, the OCaml version doesn’t need to (and <em>shouldn’t</em>) be written
as one monolithic library. Instead, pulling together a collection of
smaller, reusable libraries that present clear interfaces to each other
would make things much more maintainable. This would bring substantially
greater benefits to everyone and <a href="https://opam.ocaml.org/">OPAM</a> can manage the dependencies. </p>
<!--
OPAM makes managing dependencies easy so having a number of single-
purpose libraries is A Good Thing and costs almost nothing. This
approach has already worked well with examples like an [IP address
library][ipaddr] and the [OCaml markdown library][OMD], which can be
used by multiple projects.
-->
<h3 id="breaking-down-the-problem">Breaking down the problem</h3>
<p>The first cut is somewhat straightforward as we have a piece that deals with
the consumption and manipulation of feeds and another that takes the result
and emits HTML. This is also how the original Planet is put together, with a
library called <a href="https://pypi.python.org/pypi/feedparser/">feedparser</a> and another for templating pages. </p>
<p>For the feed-parsing aspect, I can break it down further by considering Atom
and RSS feeds separately and then even further by thinking about how to (1)
consume such feeds and (2) output them. Then there is the HTML component,
where it may be necessary to consider existing representations of HTML. These
are not new ideas and since I’m claiming that individual pieces might be
useful then it’s worth finding out which ones are already available.</p>
<h4 id="existing-components">Existing components</h4>
<p>The easiest way to find existing libraries is via the
<a href="http://opam.ocaml.org/packages">OPAM package list</a>. Some quick searches for <code>RSS</code>, <code>XML</code>, <code>HTML</code>
and <code>net</code> bring up a lot of packages. The most relevant of these seem to be
<a href="https://opam.ocaml.org/packages/xmlm/xmlm.1.2.0/">xmlm</a>, <a href="https://opam.ocaml.org/packages/ocamlrss/ocamlrss.2.2.2/">ocamlrss</a>, <a href="https://opam.ocaml.org/packages/cow/cow.0.9.1/">cow</a> and maybe <a href="http://opam.ocaml.org/packages/xmldiff/xmldiff.0.1/">xmldiff</a>. I noticed that
nothing appears when searching for <code>Atom</code>, but I do know that <code>cow</code> has an
Atom module for creating feeds. In terms of turning feeds into pages and
HTML, I’m aware of <a href="https://github.com/ocaml/ocaml.org/blob/master/script/rss2html.ml">rss2html</a> used on the <a href="http://ocaml.org">OCaml</a> website and parts of
<a href="http://opam.ocaml.org/packages/ocamlnet/ocamlnet.3.7.3/">ocamlnet</a> that may be relevant (e.g <code>nethtml</code> and <code>netstring</code>) as well as
<code>cow</code>. There is likely to be other code I’m missing but this is useful as a
first pass. </p>
<p>Overall, a number of components are already out there but it’s not obvious
if they’re compatible (e.g HTML) and there are still gaps (e.g Atom). Since
I also want to minimise dependencies, I’ll try and use whatever works but
may ultimately have to roll my own. Either way, I can learn from what
already exists. Perhaps I’m being overconfident but if I can break things
down sensibly and keep the scope constrained then this should be an
achievable project. </p>
<h3 id="the-first-baby-steps---an-atom-parser">The first (baby) steps - an Atom parser</h3>
<p>As this is an exercise for me to learn OCaml by solving a problem, I need to
break it down into bite-size pieces and take each one at a time. Practically
speaking, this means limiting the scope to be as narrow as possible while
still producing a useful result <em>for me</em>. That last part is important as I
have specific needs and it’s likely that the first thing I make won’t be
particularly interesting for many others. </p>
<p>For my specific use-case, I’m only interested in dealing with Atom feeds as
that’s what I use on my site and others I’m involved with. Initial feedback
is that creating an Atom parser will be the bulk of the work and I should
start by defining the types. To keep this manageable, I’m only going to deal
with my own feeds instead of attempting a fully compliant parser (in other
words, I’ll only consider the subset of <a href="https://tools.ietf.org/html/rfc4287">RFC4287</a> that’s relevant to me).
Once I can parse, merge and write such feeds I should be able to iterate
from there. </p>
<p>To make my requirements more concrete:</p>
<ul>
<li>Only consider <em>my own</em> Atom feeds for now</li>
<li>Initially, be able to parse and emit just one Atom feed</li>
<li>Then be able to merge 2+ feeds, specifically:
<ul>
<li>Use tag-based feeds from my personal site as starting points</li>
<li>Be able to de-dupe content</li>
</ul>
</li>
<li>No database or storage (construct it afresh every time)</li>
<li>Minimise library dependencies</li>
</ul>
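<p>To make the advice about ‘defining the types’ concrete, a minimal model of
the RFC4287 subset above might start like this. The field choices are my own
guess at a useful core rather than a settled design, and the merge simply
de-dupes entries by their <code>id</code> and sorts newest-first:</p>

```ocaml
(* A deliberately small subset of Atom (RFC 4287): just enough to
   parse, merge and re-emit a personal feed. *)
type entry = {
  id : string;        (* atom:id, used for de-duping *)
  title : string;
  updated : float;    (* seconds since epoch, used for sorting *)
  content : string;
}

type feed = {
  feed_title : string;
  feed_id : string;
  entries : entry list;
}

(* Merge several feeds: concatenate entries, drop duplicate ids,
   newest entries first. No storage needed; rebuilt on every run. *)
let merge_entries feeds =
  let seen = Hashtbl.create 64 in
  List.concat_map (fun f -> f.entries) feeds
  |> List.filter (fun e ->
       if Hashtbl.mem seen e.id then false
       else (Hashtbl.add seen e.id (); true))
  |> List.sort (fun a b -> compare b.updated a.updated)
```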
<!--
Perhaps these requirements are already too much and I may decide to dial
it down even further (e.g just figure out how to consume *one* feed),
but I won't really know until I get started. For example, I can imagine
that I'll need one bunch of code to deal with Atom feeds and then
perhaps I can make another (feedparser), that depends on it and others
to deal with general feeds.
-->
<h4 id="timeframes-and-workflow">Timeframes and workflow</h4>
<p>I’ve honestly no idea how long this might take and I’m treating it as a
side-project. I know there are many people out there who could produce a
working version of everything in a week or two but I’m not one of them (yet).
There are also <em>a lot</em> of ancillary things I need to learn on the way, like
packaging, improving my knowledge of Git and dealing with build systems. If
I had to put a vague time frame on this, I’d be thinking in months rather
than weeks. It might even be the case that others start work on parts of
this and ship things sooner but that’s great as I’ll probably be able to use
whatever they create and move further along the chain.</p>
<p>In terms of workflow, everything will be done in the open, warts and all, and
I expect to make embarrassing mistakes as I go. You can follow along on my
freshly created <a href="https://github.com/amirmc/ocamlatom">OCaml Atom</a> repo, and I’ll be using the issue tracker as
the main way of dealing with bugs and features. Let the fun begin.</p>
<!-- acknowledgements -->
<hr />
<p><em>Acknowledgements:</em> Thanks to <a href="http://erratique.ch">Daniel</a>, <a href="http://ashishagarwal.org">Ashish</a>, <a href="https://github.com/Chris00">Christophe</a>,
<a href="http://philippewang.info/">Philippe</a> and <a href="http://gazagnaire.org">Thomas</a> for discussions on an earlier draft of this post
and providing feedback on my approach.</p>
<!-- links -->
From Jekyll site to Unikernel in fifty lines of code.Amir Chaudhry2014-03-10T18:30:00+00:00http://amirchaudhry.com/from-jekyll-to-unikernel-in-fifty-lines
<p><a href="http://openmirage.org">Mirage</a> has reached a point where it’s possible to easily set up
end-to-end toolchains to build <a href="http://queue.acm.org/detail.cfm?id=2566628">unikernels</a>! <!--\[If you're not sure what that is, read the post [What is a unikernel?][amc-unikernel]\]-->
My first use-case is to be able to generate a unikernel which can serve my
personal static site but to do it with as much automation as possible. It
turns out this is possible with less than 50 lines of code.</p>
<p>I use Jekyll and GitHub Pages at the moment so I wanted a workflow that’s as
easy to use, though I’m happy to spend some time up front to set up and
configure things.
The tools for achieving what I want are in good shape so
this post takes the example of a Jekyll site (i.e this one) and goes through
the steps to produce a unikernel on
<a href="https://travis-ci.org">Travis CI</a> (a continuous integration service) which can later be
deployed. Many of these instructions already exist in various forms but
they’re collated here to aid this use-case. </p>
<p>I will take you, dear reader, through the process and when we’re finished,
the workflow will be as follows:</p>
<ol>
<li>You’ll write your posts on your local machine as normal</li>
<li>A push to GitHub will trigger a unikernel build for each commit</li>
<li>The Xen unikernel will be pushed to a repo for deployment</li>
</ol>
<p>To achieve this, we’ll first check that we can build a unikernel VM locally,
then we’ll set up a continuous integration service to automatically build
them for us and finally we’ll adapt the CI service to also deploy the built
VM. Although the amount of code required is small, each of these steps is
covered below in some detail.
For simplicity, I’ll assume you already have OCaml and Opam
installed – if not, you can find out how via the
<a href="http://realworldocaml.org/install">Real World OCaml install instructions</a>.</p>
<h2 id="building-locally">Building locally</h2>
<p>To ensure that the build actually works, you should run things locally at
least once before pushing to Travis. It’s worth noting that the
<a href="https://github.com/mirage/mirage-skeleton">mirage-skeleton</a> repo contains a lot of useful, public domain examples
and helpfully, the specific code we need is in
<a href="https://github.com/mirage/mirage-skeleton/tree/master/static_website">mirage-skeleton/static_website</a>. Copy both the <code>config.ml</code>
and <code>dispatch.ml</code> files from that folder into a new <code>_mirage</code> folder in your
jekyll repository.</p>
<p>Edit <code>config.ml</code> so that the two mentions of <code>./htdocs</code> are replaced with
<code>../_site</code>. This is the only change you’ll need to make and you should now
be able to build the unikernel with the unix backend. Make sure you have
the mirage package installed by running <code>$ opam install mirage</code> and then run:</p>
<p><em>(edit: If you already have <code>mirage</code>, remember to <code>opam update</code> to make sure you’ve got the latest packages.)</em></p>
<div class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nv">$ </span><span class="nb">cd </span>_mirage
<span class="nv">$ </span>mirage configure --unix
<span class="nv">$ </span>make depend <span class="c"># needed as of mirage 1.2 onward</span>
<span class="nv">$ </span>mirage build
<span class="nv">$ </span><span class="nb">cd</span> ..</code></pre></div>
<p>That’s all it takes! In a few minutes there will be a unikernel built on
your system (symlinked as <code>_mirage/mir-www</code>). If there are any errors, make
sure that Opam is up to date and that you have the latest version of the
static_website files from <a href="https://github.com/mirage/mirage-skeleton">mirage-skeleton</a>. </p>
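<p>Incidentally, that <code>./htdocs</code> edit is easy to script if you ever rebuild from a fresh checkout. Below is a scratch sketch that uses a stand-in file so it runs anywhere; on your own repo the single <code>sed</code> line is all you’d need (GNU <code>sed</code> shown; BSD/macOS <code>sed</code> wants <code>-i ''</code>):</p>

```shell
# Scratch demonstration: create a stand-in config.ml containing the two
# ./htdocs mentions, then rewrite them in place.
mkdir -p _mirage
printf 'let fs = "./htdocs"\nlet alt = "./htdocs"\n' > _mirage/config.ml
sed -i 's|\./htdocs|../_site|g' _mirage/config.ml
grep -c '\.\./_site' _mirage/config.ml   # prints 2
```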
<h3 id="serving-the-site-locally">Serving the site locally</h3>
<p>If you’d like to see this site locally, you can do so from within the
<code>_mirage</code> folder by running the unikernel you just built. There’s more
information about the details of this on the <a href="http://openmirage.org/wiki/mirage-www">Mirage docs site</a>
but the quick instructions are:</p>
<div class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nv">$ </span><span class="nb">cd </span>_mirage
<span class="nv">$ </span>sudo mirage run
<span class="c"># in another terminal window</span>
<span class="nv">$ </span>sudo ifconfig tap0 10.0.0.1 255.255.255.0</code></pre></div>
<p>You can now point your browser at http://10.0.0.2/ and see your site!
Once you’re finished browsing, <code>$ mirage clean</code> will clear up all the
generated files. </p>
<p>Since the build is working locally, we can set up a continuous integration
system to perform the builds for us.</p>
<h2 id="setting-up-travis-ci">Setting up Travis CI</h2>
<p><img style="float: right; margin-left: 10px" src="http://amirchaudhry.com/images/jekyll-unikernel/travis.png" /></p>
<p>We’ll be using the <a href="https://travis-ci.org">Travis CI</a> service, which is free for open-source
projects (so this assumes you’re using a public repo). The benefit of using
Travis is that you can build a unikernel <em>without</em> needing a local OCaml
environment, but it’s always quicker to debug things locally.</p>
<p>Log in to Travis using your GitHub ID which will then trigger a scan of your
repositories. When this is complete, go to your Travis accounts page and
find the repo you’ll be building the unikernel from. Switch it ‘on’ and
Travis will automatically set your GitHub post-commit hook and token for you.
That’s all you need to do on the website.</p>
<p>When you next make a push to your repository, GitHub will inform Travis,
which will then look for a YAML file in the root of the repo called
<code>.travis.yml</code>. That file describes what Travis should do and what the build
matrix is. Since OCaml is not one of the supported languages, we’ll be
writing our build script manually (this is actually easier than it sounds).
First, let’s set up the YAML file and then we’ll examine the build script.</p>
<h3 id="the-travis-yaml-file---travisyml">The Travis YAML file - .travis.yml</h3>
<p>The <a href="http://docs.travis-ci.com/user/ci-environment/#CI-environment-OS">Travis CI environment</a> is based on Ubuntu 12.04, with a
number of things pre-installed (e.g. Git, networking tools, etc.). Travis
doesn’t support OCaml (yet) so we’ll use the <code>c</code> environment to get the
packages we need, specifically, the OCaml compiler, Opam and Mirage. Once
those are set up, our build should run pretty much the same as it did locally.</p>
<p>For now, let’s keep things simple and only focus on the latest releases
(OCaml 4.01.0 and Opam 1.1.1), which means our build matrix is very simple.
The build instructions will live in the file <code>_mirage/travis.sh</code>; the
<code>.travis.yml</code> file will change into that folder and then trigger the
script. This means our YAML file should look like:</p>
<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="l-Scalar-Plain">language</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">c</span>
<span class="l-Scalar-Plain">before_script</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">cd _mirage</span>
<span class="l-Scalar-Plain">script</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">bash -ex travis.sh</span>
<span class="l-Scalar-Plain">env</span><span class="p-Indicator">:</span>
<span class="l-Scalar-Plain">matrix</span><span class="p-Indicator">:</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">MIRAGE_BACKEND=xen DEPLOY=0</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">MIRAGE_BACKEND=unix</span></code></pre></div>
<p>The matrix enables us to have parallel builds for different environments and
this one is very simple as it’s only building two unikernels. One worker
will build for the Xen backend and another worker will build for the Unix
backend. The <code>_mirage/travis.sh</code> script will clarify what each of these
environments translates to. We’ll come back to the <code>DEPLOY</code> flag later on
(it’s not necessary yet). Now that this file is set up, we can work on the
build script itself.</p>
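<p>To see how those rows reach the build script: Travis exports each matrix line into the worker’s environment before running <code>travis.sh</code>, so you can mimic a worker locally. A minimal sketch:</p>

```shell
# Mimic one worker locally: Travis sets the matrix row's variables in
# the environment before the script runs.
MIRAGE_BACKEND=xen
case "$MIRAGE_BACKEND" in
  xen)  echo "worker 1: mirage configure --xen" ;;
  unix) echo "worker 2: mirage configure --unix" ;;
esac
# prints: worker 1: mirage configure --xen
```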
<h3 id="the-build-script---travissh">The build script - travis.sh</h3>
<p>To save time, we’ll be using an Ubuntu PPA to quickly get
<a href="https://launchpad.net/~avsm">pre-packaged versions of the OCaml compiler and Opam</a>, so the
first thing to do is define which PPAs each line of the build matrix
corresponds to. Since we’re keeping things simple, we only need one PPA
that has the most recent releases of OCaml and Opam.</p>
<div class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c">#!/usr/bin/env bash</span>
<span class="nv">ppa</span><span class="o">=</span>avsm/ocaml41+opam11
<span class="nb">echo</span> <span class="s2">&quot;yes&quot;</span> <span class="p">|</span> sudo add-apt-repository ppa:<span class="nv">$ppa</span>
sudo apt-get update -qq
sudo apt-get install -qq ocaml ocaml-native-compilers camlp4-extra opam</code></pre></div>
<p>[NB: There are many <a href="https://launchpad.net/~avsm">other PPAs</a> for different combinations of
OCaml/Opam which are useful for testing]. Once the appropriate PPAs have
been set up it’s time to initialise Opam and install Mirage. </p>
<div class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">export </span><span class="nv">OPAMYES</span><span class="o">=</span>1
opam init
opam install mirage
<span class="nb">eval</span> <span class="sb">`</span>opam config env<span class="sb">`</span></code></pre></div>
<p>We set <code>OPAMYES=1</code> to get non-interactive use of Opam (it defaults to ‘yes’
for any user input) and if we want full build logs, we could also set
<code>OPAMVERBOSE=1</code> (I haven’t in this example).
The rest should be straightforward and you’ll end up with an
Ubuntu machine with OCaml, Opam and the Mirage package installed. It’s now
trivial to do the next step of actually building the unikernel!</p>
<div class="highlight"><pre><code class="language-bash" data-lang="bash">mirage configure --<span class="nv">$MIRAGE_BACKEND</span>
mirage build</code></pre></div>
<p>You can see how we’ve used the environment variable from the Travis file and
this is where our two parallel builds begin to diverge. When you’ve saved
this file, you’ll need to change permissions to make it executable by doing
<code>$ chmod +x _mirage/travis.sh</code>.</p>
<p>That’s all you need to build the unikernel on Travis! You should now commit
both the YAML file and the build script to the repo and push the changes to
GitHub. Travis should automatically start your first build and you can
watch the console output online to check that both the Xen and Unix backends
complete properly. If you notice any errors, you should go back over your
build script and fix it before the next step.</p>
<h2 id="deploying-your-unikernel">Deploying your unikernel</h2>
<p><img style="float: right; margin-left: 10px" src="http://amirchaudhry.com/images/jekyll-unikernel/octocat.png" /></p>
<p>When Travis has finished its builds it will simply destroy the worker and
all its contents, including the unikernels we just built. This is perfectly
fine for testing but if we want to also <em>deploy</em> a unikernel, we need to get
it out of the Travis worker after it’s built. In this case, we want to
extract the Xen-based unikernel so that we can later start it on a Xen-based
machine (e.g Amazon, Rackspace or - in our case - a machine on <a href="http://www.bytemark.co.uk">Bytemark</a>).</p>
<p>Since the unikernel VMs are small (only tens of MB), our method for
exporting will be to commit the Xen unikernel into a repository on GitHub.
It can be retrieved and started later on and keeping the VMs in version
control gives us very effective snapshots (we can roll back the site without
having to rebuild). This is something that would be much more challenging
if we were using the ‘standard’ web toolstack.</p>
<p>The deployment step is a little more complex as we have to send the
Travis worker a private SSH key, which will give it push access to a GitHub
repository. Of course, we don’t want to expose that key by simply adding it
to the Travis file so we have to encrypt it somehow. </p>
<h3 id="sending-travis-a-private-ssh-key">Sending Travis a private SSH key</h3>
<p>Travis supports <a href="http://docs.travis-ci.com/user/encryption-keys/">encrypted environment variables</a>. Each
repository has its own public key and the <a href="http://rubygems.org/gems/travis">Travis gem</a> uses
this public key to encrypt data, which you then add to your <code>.travis.yml</code>
file for decryption by the worker. This is meant for sending things like
private API tokens and other small amounts of data. Trying to encrypt an SSH
key isn’t going to work as it’s too large. Instead we’ll use
<a href="https://github.com/avsm/travis-senv">travis-senv</a>, which encodes, encrypts and chunks up the key into smaller
pieces and then reassembles those pieces on the Travis worker. We still use
the Travis gem to encrypt the pieces to add them to the <code>.travis.yml</code> file.</p>
<p>While you could give Travis a key that accesses your whole GitHub account, my
preference is to create a <em>new</em> deploy key, which will only be used for
<a href="https://help.github.com/articles/managing-deploy-keys#deploy-keys">deployment to one repository</a>.</p>
<div class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c"># make a key pair on your local machine</span>
<span class="nv">$ </span><span class="nb">cd</span> ~/.ssh/
<span class="nv">$ </span>ssh-keygen -t dsa -C <span class="s2">&quot;travis.deploy&quot;</span> -f travis-deploy_dsa
<span class="nv">$ </span><span class="nb">cd</span> -</code></pre></div>
<p>Note that this is a 1024 bit key so if you decide to use a 2048 bit key,
then be aware that Travis <a href="https://github.com/avsm/travis-senv/issues/1">sometimes has issues</a>. Now that we have
a key, we can encrypt it and add it to the Travis file. </p>
<div class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c"># on your local machine</span>
<span class="c"># install the necessary components</span>
<span class="nv">$ </span>gem install travis
<span class="nv">$ </span>opam install travis-senv
<span class="c"># chunk the key, add to yml file and rm the intermediate</span>
<span class="nv">$ </span>travis-senv encrypt ~/.ssh/travis-deploy_dsa _travis_env
<span class="nv">$ </span>cat _travis_env <span class="p">|</span> travis encrypt -ps --add
<span class="nv">$ </span>rm _travis_env</code></pre></div>
<p><code>travis-senv</code> encrypts and chunks the key locally on your machine, placing
its output in a file you decide (<code>_travis_env</code>). We then take that output
file and pipe it to the <code>travis</code> ruby gem, asking it to encrypt the input,
treating each line as separate and to be appended (<code>-ps</code>) and then actually
adding that to the Travis file (<code>--add</code>). You can run <code>$ travis encrypt -h</code>
to understand these options. Once you’ve run the above commands,
<code>.travis.yml</code> will look as follows.</p>
<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="l-Scalar-Plain">language</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">c</span>
<span class="l-Scalar-Plain">before_script</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">cd _mirage</span>
<span class="l-Scalar-Plain">script</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">bash -ex travis.sh</span>
<span class="l-Scalar-Plain">env</span><span class="p-Indicator">:</span>
<span class="l-Scalar-Plain">matrix</span><span class="p-Indicator">:</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">MIRAGE_BACKEND=xen DEPLOY=0</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">MIRAGE_BACKEND=unix</span>
<span class="l-Scalar-Plain">global</span><span class="p-Indicator">:</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">secure</span><span class="p-Indicator">:</span> <span class="s">&quot;....</span><span class="nv"> </span><span class="s">encrypted</span><span class="nv"> </span><span class="s">data</span><span class="nv"> </span><span class="s">....&quot;</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">secure</span><span class="p-Indicator">:</span> <span class="s">&quot;....</span><span class="nv"> </span><span class="s">encrypted</span><span class="nv"> </span><span class="s">data</span><span class="nv"> </span><span class="s">....&quot;</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">secure</span><span class="p-Indicator">:</span> <span class="s">&quot;....</span><span class="nv"> </span><span class="s">encrypted</span><span class="nv"> </span><span class="s">data</span><span class="nv"> </span><span class="s">....&quot;</span>
<span class="l-Scalar-Plain">...</span></code></pre></div>
<p>The number of secure variables added depends on the type and size of the key
you had to chunk, so it could vary from 8 up to 29. We’ll commit
these additions later on, alongside additions to the build script.</p>
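<p>If you’re curious what the chunking amounts to, the sketch below shows the rough idea with a throwaway file. This is an illustration of the approach only, not <code>travis-senv</code>’s actual code:</p>

```shell
# Illustration: base64-encode the key, then split it into ~100-character
# pieces; each piece becomes one `secure:` entry after `travis encrypt`.
head -c 800 /dev/urandom > demo_key      # stand-in for the real key
base64 < demo_key | tr -d '\n' | fold -w 100 > _travis_env
wc -l < _travis_env                      # number of chunks to encrypt
```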
<p>At this point, we also need to make a repository on GitHub
and add the public deploy key so
that Travis can push to it. Once you’ve created your repo and added a
README, follow GitHub’s instructions on <a href="https://help.github.com/articles/managing-deploy-keys#deploy-keys">adding deploy keys</a>
and paste in the public key (i.e. the content of <code>travis-deploy_dsa.pub</code>). </p>
<p>Now that we can securely pass a private SSH key to the worker
and have a repo that the worker can push to, we need to
make additions to the build script.</p>
<h3 id="committing-the-unikernel-to-a-repository">Committing the unikernel to a repository</h3>
<p>Since we can set <code>DEPLOY=1</code> in the YAML file we only need to make
additions to the build script. Specifically, we want to ensure that: only
the Xen backend is deployed; only <em>pushes</em> to the repo result in
deployments, not pull requests (we do still want <em>builds</em> for pull requests).</p>
<p>In the build script (<code>_mirage/travis.sh</code>), which is being run by the worker,
we’ll have to reconstruct the SSH key and configure Git. In addition,
Travis gives us a set of useful <a href="http://docs.travis-ci.com/user/ci-environment/#Environment-variables">environment variables</a> so we’ll
use the latest commit hash (<code>$TRAVIS_COMMIT</code>) to name the VM (which also
helps us trace which commit it was built from).</p>
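<p>If you want to try that naming scheme outside Travis, where <code>$TRAVIS_COMMIT</code> is unset, here is a small sketch (the <code>local-build</code> placeholder is mine):</p>

```shell
# Outside Travis, TRAVIS_COMMIT is unset; this fallback (illustrative)
# reproduces the per-commit folder naming with a placeholder.
TRAVIS_COMMIT=${TRAVIS_COMMIT:-local-build}
mkdir -p "$TRAVIS_COMMIT"
ls -d "$TRAVIS_COMMIT"
```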
<p>It’s easier to consider this section of code at once so I’ve explained the
details in the comments. This section is what you need to add at the end of
your existing build script (i.e. straight after <code>mirage build</code>).</p>
<div class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c"># Only deploy if the following conditions are met.</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">&quot;$MIRAGE_BACKEND&quot;</span> <span class="o">=</span> <span class="s2">&quot;xen&quot;</span> <span class="se">\</span>
-a <span class="s2">&quot;$DEPLOY&quot;</span> <span class="o">=</span> <span class="s2">&quot;1&quot;</span> <span class="se">\</span>
-a <span class="s2">&quot;$TRAVIS_PULL_REQUEST&quot;</span> <span class="o">=</span> <span class="s2">&quot;false&quot;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then</span>
<span class="c"># The Travis worker will already have access to the chunks</span>
<span class="c"># passed in via the yaml file. Now we need to reconstruct </span>
<span class="c"># the GitHub SSH key from those and set up the config file.</span>
opam install travis-senv
mkdir -p ~/.ssh
travis-senv decrypt &gt; ~/.ssh/id_dsa <span class="c"># This doesn&#39;t expose it</span>
chmod <span class="m">600</span> ~/.ssh/id_dsa <span class="c"># Owner can read and write</span>
<span class="nb">echo</span> <span class="s2">&quot;Host some_user github.com&quot;</span> &gt;&gt; ~/.ssh/config
<span class="nb">echo</span> <span class="s2">&quot; Hostname github.com&quot;</span> &gt;&gt; ~/.ssh/config
<span class="nb">echo</span> <span class="s2">&quot; StrictHostKeyChecking no&quot;</span> &gt;&gt; ~/.ssh/config
<span class="nb">echo</span> <span class="s2">&quot; CheckHostIP no&quot;</span> &gt;&gt; ~/.ssh/config
<span class="nb">echo</span> <span class="s2">&quot; UserKnownHostsFile=/dev/null&quot;</span> &gt;&gt; ~/.ssh/config
<span class="c"># Configure the worker&#39;s git details</span>
<span class="c"># otherwise git actions will fail.</span>
git config --global user.email <span class="s2">&quot;user@example.com&quot;</span>
git config --global user.name <span class="s2">&quot;Travis Build Bot&quot;</span>
<span class="c"># Do the actual work for deployment.</span>
<span class="c"># Clone the deployment repo. Notice the user,</span>
<span class="c"># which is the same as in the ~/.ssh/config file.</span>
git clone git@some_user:amirmc/www-test-deploy
<span class="nb">cd </span>www-test-deploy
<span class="c"># Make a folder named for the commit. </span>
<span class="c"># If we&#39;re rebuiling a VM from a previous</span>
<span class="c"># commit, then we need to clear the old one.</span>
<span class="c"># Then copy in both the config file and VM.</span>
rm -rf <span class="nv">$TRAVIS_COMMIT</span>
mkdir -p <span class="nv">$TRAVIS_COMMIT</span>
cp ../mir-www.xen ../config.ml <span class="nv">$TRAVIS_COMMIT</span>
<span class="c"># Compress the VM and add a text file to note</span>
<span class="c"># the commit of the most recently built VM.</span>
bzip2 -9 <span class="nv">$TRAVIS_COMMIT</span>/mir-www.xen
git pull --rebase
<span class="nb">echo</span> <span class="nv">$TRAVIS_COMMIT</span> &gt; latest <span class="c"># update ref to most recent</span>
<span class="c"># Add, commit and push the changes!</span>
git add <span class="nv">$TRAVIS_COMMIT</span> latest
git commit -m <span class="s2">&quot;adding $TRAVIS_COMMIT built for $MIRAGE_BACKEND&quot;</span>
git push origin master
<span class="c"># Go out and enjoy the Sun!</span>
<span class="k">fi</span></code></pre></div>
<p>At this point you should commit the changes to <code>.travis.yml</code> (don’t forget
the deploy flag) and <code>_mirage/travis.sh</code> and push the changes to GitHub.
Everything else will take place automatically and in a few minutes you will
have a unikernel ready to deploy on top of Xen! </p>
<p>You can see both the complete YAML file and build script in use on my
<a href="https://github.com/amirmc/www-test">test repo</a>, as well as the <a href="https://travis-ci.org/amirmc/www-test">build logs</a> for that repo
and the <a href="https://github.com/amirmc/www-test-deploy">deploy repo</a> with a VM.</p>
<p><em>[Pro-tip: If you add <code>[skip ci]</code> anywhere in your
commit message, Travis will skip the build for that commit.
This is very useful if you’re making minor changes, like updating a
README.]</em></p>
<h2 id="finishing-up">Finishing up</h2>
<p>Since I’m still using Jekyll for my website, I made a short script in my
jekyll repository (<code>_deploy-unikernel.sh</code>) that builds the site, commits the
contents of <code>_site</code> and pushes to GitHub. I simply run this after I’ve
committed a new blog post and the rest takes care of itself.</p>
<div class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c">#!/usr/bin/env bash</span>
jekyll build
git add _site
git commit -m <span class="s1">&#39;update _site&#39;</span>
git push origin master</code></pre></div>
<p>Congratulations! You now have an end-to-end workflow that will produce a
unikernel VM from your Jekyll-based site and push it to a repo. If you
strip out all the comments, you’ll see that we’ve written less than 50 lines
of code! Admittedly, I’m not counting the 80 or so lines that came for free
in the <code>*.ml</code> files but that’s still pretty impressive.</p>
<p>Of course, we still need a machine to take that VM and run it but that’s a
topic for another post. For the time-being, I’m still using GitHub Pages
but once the VM is hosted somewhere, I will:</p>
<ol>
<li>Turn off GitHub Pages and serve from the VM – but still using Jekyll in
the workflow.</li>
<li>Replace Jekyll with OCaml-based static-site generation.</li>
</ol>
<p>Although all the tools already exist to switch now, I’m taking my time so
that I can easily maintain the code I end up using.</p>
<h2 id="expanding-the-script-for-testing">Expanding the script for testing</h2>
<p>You may have noticed that the examples here are not very flexible or
extensible but that was a deliberate choice to keep them readable. It’s
possible to do much more with the build matrix and script, as you can see
from the Travis files on my <a href="https://github.com/amirmc/amirmc.github.com/tree/master/_mirage">website repo</a>, which were based on
those of the <a href="https://github.com/mirage/mirage-www">Mirage site</a> and <a href="https://github.com/mor1/mort-www">Mort’s site</a>.
Specifically, you can note the use of more environment variables and case
statements to decide which PPAs to grab. Once you’ve got your builds
working, it’s worth improving your scripts to make them more maintainable
and cover the test cases you feel are important.</p>
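<p>As a hint of what that looks like, here is a sketch of a case statement keyed off extra matrix variables (<code>OCAML_VERSION</code> and <code>OPAM_VERSION</code> are names I’ve assumed for illustration, mirroring the avsm PPA naming at the time):</p>

```shell
# Sketch: pick the PPA from assumed matrix variables rather than
# hard-coding a single one.
OCAML_VERSION=4.01.0 OPAM_VERSION=1.1.1
case "$OCAML_VERSION,$OPAM_VERSION" in
  4.01.0,1.1.1) ppa=avsm/ocaml41+opam11 ;;
  4.00.1,1.1.1) ppa=avsm/ocaml40+opam11 ;;
  *) echo "untested combination" >&2; exit 1 ;;
esac
echo "$ppa"    # prints avsm/ocaml41+opam11
```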
<h3 id="not-just-for-static-sites-surprise">Not just for static sites (surprise!)</h3>
<p>You might have noticed that in very few places in the toolchain above have I
mentioned anything specific to static sites per se. The workflow is simply
(1) do some stuff locally, (2) push to a continuous integration service
which then (3) builds and deploys a Xen-based unikernel. Apart from the
convenient folder structure, the specific work to treat this as a static
site lives in the <code>*.ml</code> files, which I’ve skipped over for this post. </p>
<p>As such, the GitHub+Travis workflow we’ve developed here is quite general
and will apply to almost <em>any</em> unikernels that we may want to construct.
I encourage you to explore the examples in the <a href="https://github.com/mirage/mirage-skeleton">mirage-skeleton</a> repo and
keep your build script maintainable. We’ll be using it again the next time
we build unikernel devices.</p>
<hr />
<p><em>Acknowledgements:</em> There were lots of things I read over while writing this
post but there were a few particularly useful things that you should look up.
Anil’s posts on <a href="http://anil.recoil.org/2013/09/30/travis-and-ocaml.html">Testing with Travis</a> and
<a href="http://anil.recoil.org/2013/10/06/travis-secure-ssh-integration.html">Travis for secure deployments</a> are quite succinct (and
were themselves prompted by <a href="http://blog.mlin.net/2013/02/testing-ocaml-projects-on-travis-ci.html">Mike Lin’s Travis post</a> several
months earlier). Looking over Mort’s <a href="https://github.com/mor1/mort-www/blob/master/.travis-build.sh">build script</a> and that of
<a href="https://github.com/mirage/mirage-www/blob/master/.travis-ci.sh">mirage-www</a> helped me figure out the deployment steps as well as improve
my own script. Special thanks also to <a href="http://erratique.ch">Daniel</a>, <a href="http://www.lpw25.net">Leo</a> and <a href="http://anil.recoil.org">Anil</a> for
commenting on an earlier draft of this post.</p>
<hr />
<h1><a href="http://amirchaudhry.com/switching-from-bootstrap-to-zurb-foundation">Switching from Bootstrap to Zurb Foundation</a></h1>
<p><em>Amir Chaudhry, 2013-11-26</em></p>
<p>I’ve just updated my site’s HTML/CSS and moved from Twitter Bootstrap to
<a href="http://foundation.zurb.com/learn/features.html">Zurb Foundation</a>. This post captures my subjective notes on the
migration.</p>
<h4 id="my-use-of-bootstrap">My use of Bootstrap</h4>
<p>When I originally set this site up, I didn’t know what frameworks existed or
anything more than the basics of dealing with HTML (and barely any CSS). I
came across Twitter Bootstrap and immediately decided it would Solve All My
Problems. It really did. Since then, I’ve gone through one ‘upgrade’ with
Bootstrap (from 1.x to 2.x), after which I dutifully ignored all the fixes
and improvements (note that Bootstrap was up to v2.3.2 while I was still
using v2.0.2). </p>
<p><img src="http://amirchaudhry.com/images/switch-to-foundation/responsive-design.png" alt="Responsive Design" /></p>
<p>For the most part, this was fine with me but for a while now, I’ve been
meaning to make this site ‘responsive’ (read: not look like crap from a
mobile). Bootstrap v3 purports to be mobile-first so upgrading would likely
give me what I’m after but v3 is <a href="http://getbootstrap.com/getting-started/">not backwards compatible</a>,
meaning I’d have to rewrite parts of the HTML. Since this step was
unavoidable, it led me to have another look at front-end frameworks, just to
see if I was missing anything. This was especially relevant since we’d
<a href="http://amirchaudhry.com/announcing-new-ocamlorg/">just released</a> the new <a href="http://ocaml.org">OCaml.org</a>
website, itself built with Bootstrap v2.3.1 (we’d done the design/templating
work long before v3 was released). It would be useful to know what else is
out there for any future work.</p>
<p>Around this time I discovered Zurb Foundation and also the numerous
comparisons between them (note: Foundation seems to come out ahead in most
of those). A few days ago, the folks at Zurb released
<a href="http://zurb.com/article/1280/foundation-5-blasts-off--2">version 5</a>, so I decided that now is the time to kick the
tires. For the last few days, I’ve been playing with the framework and in
the end I decided to migrate my site over completely. </p>
<p><a href="http://foundation.zurb.com/learn/features.html"><img src="http://amirchaudhry.com/images/switch-to-foundation/zurb-yeti.png" alt="Foundation's Yeti" /></a></p>
<h4 id="swapping-out-one-framework-for-another">Swapping out one framework for another</h4>
<p>Over time, I’ve become moderately experienced with HTML/CSS and I can
usually wrangle things to look the way I want, but my solutions aren’t
necessarily elegant. I was initially concerned that I’d already munged
things so much that changing anything would be a pain. When I first put the
styles for this site together, I had to spend quite a bit of time
overwriting Bootstrap’s defaults so I was prepared for the same when using
Foundation. Turns out that I was fine. I currently use <a href="http://jekyllrb.com">Jekyll</a> (and
<a href="http://jekyllbootstrap.com">Jekyll Bootstrap</a>) so I only had three template files and a couple of
HTML pages to edit and because I’d kept most of my custom CSS in a separate
file, it was literally a case of swapping out one framework for another and
bug-fixing from there onwards. There’s definitely a lesson here in using
automation as much as possible.</p>
<p>Customising the styles was another area of concern but I was pleasantly
surprised to find I needed <em>less</em> customisation than with Bootstrap. This
is likely because I didn’t have to override as many defaults (and probably
because I’ve learned more about CSS since then). The one thing I seemed to
be missing was a way to deal with code sections, so I just took what
Bootstrap had and copied it in. At some point I should revisit this.</p>
<p>It did take me a while to get my head around Foundation’s grid but it was
worth it in the end. The idea is that you should design for small screens
first and then adjust things for larger screens as necessary. There are
several different default sizes which inherit their properties from the size
below, unless you explicitly override them. I initially screwed this up by
explicitly defining the grid using the <code>small-#</code> classes, which obviously
looks ridiculous on small screens. I fixed it by swapping out <code>small-#</code> for
<code>medium-#</code> everywhere in the HTML, after which everything looked reasonable.
Items flowed sensibly into a default column for the small screens and looked
acceptable for larger screens and perfectly fine on desktops. I could do
more styling of the mobile view but I’d already achieved most of what I was
after. </p>
<h4 id="fixing-image-galleries-and-embedded-content">Fixing image galleries and embedded content</h4>
<p>The only additional thing I used from Bootstrap was the <a href="http://getbootstrap.com/javascript/#carousel">Carousel</a>. I’d
written some custom helper scripts that would take some images and
thumbnails from a specified folder and produce clickable thumbnails with a
slider underneath. Foundation provides <a href="http://foundation.zurb.com/docs/components/orbit.html">Orbit</a>, so I had to spend time
rewriting my script to produce the necessary HTML. This actually resulted
in cleaner HTML and one of the features I wanted (the ability to link to a
specific image) was available by default in Orbit. At this point I also
tried to make the output look better for the case where JavaScript is
disabled (in essence, each image is just displayed as a list). Below is an
example of an image gallery, taken from a previous post, when I
<a href="http://amirchaudhry.com/joined-the-computer-lab/">joined the computer lab</a>.</p>
<div class="gallery">
<noscript><small><em>Note: The gallery needs JavaScript but I've tried to make it degrade gracefully. -Amir</em></small></noscript>
<ul class="inline-list">
<li><a data-orbit-link="join-comp-lab-1"><img src="/images/join-comp-lab/join-comp-lab-thumb-1.png" alt="join-comp-lab-thumb-1" /></a></li>
<li><a data-orbit-link="join-comp-lab-2"><img src="/images/join-comp-lab/join-comp-lab-thumb-2.png" alt="join-comp-lab-thumb-2" /></a></li>
<li><a data-orbit-link="join-comp-lab-3"><img src="/images/join-comp-lab/join-comp-lab-thumb-3.png" alt="join-comp-lab-thumb-3" /></a></li>
</ul>
<ul data-orbit="" data-options="next_on_click:true; timer_speed:3000; pause_on_hover:false; bullets:false;">
<li class="gallery-image" data-orbit-slide="join-comp-lab-1"><img src="/images/join-comp-lab/join-comp-lab-1.jpg" alt="join-comp-lab-1" /></li>
<li class="gallery-image" data-orbit-slide="join-comp-lab-2"><img src="/images/join-comp-lab/join-comp-lab-2.jpg" alt="join-comp-lab-2" /></li>
<li class="gallery-image" data-orbit-slide="join-comp-lab-3"><img src="/images/join-comp-lab/join-comp-lab-3.jpg" alt="join-comp-lab-3" /></li>
</ul>
</div>
<p>Foundation also provides a component called <a href="http://foundation.zurb.com/docs/components/flex_video.html">Flex Video</a>, which allows the
browser to scale videos to the appropriate size. This fix was as simple as
going back through old posts and wrapping anything that was <code>&lt;iframe&gt;</code> in a
<code>&lt;div class="flex-video"&gt;</code>. It really was that simple and all the Vimeo and
YouTube items scaled perfectly. Here’s an example of a video from an
earlier post, where I gave a <a href="http://amirchaudhry.com/wireframe-demos-for-ocamlorg/">walkthrough of the ocaml.org site</a>.
Try changing the width of your browser window to see it scale.</p>
<div class="flex-video widescreen vimeo">
<iframe src="http://player.vimeo.com/video/61768157?byline=0&amp;portrait=0&amp;color=de9e6a" width="540" height="303" frameborder="0" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen="true">Video demo</iframe>
</div>
<h4 id="framework-differences">Framework differences</h4>
<p>Another major difference between the two frameworks is that Bootstrap
uses <a href="http://lesscss.org">LESS</a> to manage its CSS whereas Foundation uses <a href="http://sass-lang.com">SASS</a>. Frankly,
I’ve no experience with either of them so it makes little difference to me.
It’s worth bearing in mind for anyone whose workflow involves
pre-processing. Also, Bootstrap is available under the
<a href="http://getbootstrap.com/getting-started/#license-faqs">Apache 2 License</a>, while Foundation is released under
the <a href="http://foundation.zurb.com/learn/faq.html#question-3">MIT license</a>.</p>
<h4 id="summary">Summary</h4>
<p>Overall, the transition was pretty painless and most of the time was spent
getting familiar with the grid, hunting for docs/examples and trying to make
the image gallery work the way I wanted. I do think Bootstrap’s docs are
better but Foundation’s aren’t bad. </p>
<p>Although this isn’t meant to be a comparison, I much prefer Foundation to
Bootstrap. If you’re not sure which to use then I think the secret is in
the names of the frameworks. </p>
<ul>
<li>Bootstrap (for me) was a <em>great</em> way to ‘<em>bootstrap</em>’ a site quickly with
lots of acceptable defaults – it was quick to get started but took some
work to alter. </li>
<li>Foundation seems to provide a great ‘<em>foundation</em>’ on which to create more
customised sites – it’s more flexible but needs more upfront thought. </li>
</ul>
<p>That’s pretty much how I’d recommend them to people now.</p>
<h2>Announcing the new OCaml.org</h2>
<p><em>Amir Chaudhry, 2013-11-20, <a href="http://amirchaudhry.com/announcing-new-ocamlorg">http://amirchaudhry.com/announcing-new-ocamlorg</a></em></p>
<p>As some of you may have noticed, the new OCaml.org site is now live! </p>
<p>The DNS may still be propagating so if <a href="http://ocaml.org">http://ocaml.org</a> hasn’t updated for you then try http://166.78.252.20. This post is in two parts: the first is the announcement and the second is a call for content.</p>
<h3 id="new-ocamlorg-website-design">New OCaml.org website design!</h3>
<p>The new site represents a major milestone in the continuing growth of the OCaml ecosystem. It’s the culmination of a lot of volunteer work over the last several months and I’d specifically like to thank <a href="https://github.com/Chris00">Christophe</a>, <a href="http://ashishagarwal.org">Ashish</a> and <a href="http://philippewang.info/CL/">Philippe</a> for their dedication (the <a href="https://github.com/ocaml/ocaml.org/commits/master">commit logs</a> speak volumes). </p>
<p><a href="http://amirchaudhry.com/wireframe-demos-for-ocamlorg/"><img src="http://amirchaudhry.com/images/ann-new-ocamlorg/ocaml-home-wire.png" alt="OCaml.org Wireframes" /></a></p>
<p>We began this journey just over 8 months ago with paper, pencils and a lot of ideas. This led to a comprehensive set of <a href="http://amirchaudhry.com/wireframe-demos-for-ocamlorg/">wireframes and walk-throughs</a> of the site, which then developed into a collection of <a href="https://github.com/ocaml/ocaml.org/wiki/Site-Redesign">Photoshop mockups</a>. In turn, these formed the basis for the html templates and style sheets, which we’ve adapted to fit our needs across the site. </p>
<p>Alongside the design process, we also considered the kind of structure and <a href="http://lists.ocaml.org/pipermail/infrastructure/2013-July/000211.html">workflow we aspired to</a>, both as maintainers and contributors. This led us to develop completely new tools for <a href="http://pw374.github.io/posts/2013-09-05-22-31-26-about-omd.html">Markdown</a> and <a href="http://pw374.github.io/posts/2013-10-03-20-35-12-using-mpp-two-different-ways.html">templating</a> in OCaml, which are now available in OPAM for the benefit of all.</p>
<p>Working on all these things in parallel definitely had its challenges (which I’ll write about separately) but the result has been worth the effort.</p>
<p><a href="http://ocaml.org"><img src="http://amirchaudhry.com/images/ann-new-ocamlorg/ocaml-home-mockup.png" alt="OCaml.org" /></a></p>
<p>The journey is ongoing and we still have many more improvements we hope to make. The site you see today primarily improves upon the design, structure and workflows but in time, we also intend to incorporate more information on packages and documentation. With the new tooling, moving the website forward will become much easier and I hope that more members of the community become involved in the generation and curation of content. This brings me to the second part of this post.</p>
<h3 id="call-for-content">Call for content</h3>
<p>We have lots of great content on the website but there are parts that could do with a refresh and gaps that could be filled. As a community driven site, we need ongoing contributions to ensure that the site best reflects its members. </p>
<p>For example, if you do commercial work on OCaml then maybe you’d like to add yourself to the <a href="http://ocaml.org/community/support.html">support page</a>? Perhaps there are tutorials you can help to complete, like <a href="http://ocaml.org/learn/tutorials/99problems.html">99 problems</a>? If you’re not sure where to begin, there are already a number of <a href="https://github.com/ocaml/ocaml.org/issues?labels=content">content issues</a> you could contribute to. </p>
<p>Although we’ve gone through a bug-hunt already, feedback on the site is still very welcome. You can either <a href="https://github.com/ocaml/ocaml.org/issues">create an issue</a> on the tracker (preferred), or email the infrastructure list. </p>
<p>It’s fantastic how far we’ve come and I look forward to the next phase!</p>
<h2>Migration plan for the OCaml.org redesign</h2>
<p><em>Amir Chaudhry, 2013-11-06, <a href="http://amirchaudhry.com/migration-plan-ocaml-org">http://amirchaudhry.com/migration-plan-ocaml-org</a></em></p>
<p>We’re close to releasing the new design of ocaml.org but need help from the
OCaml community to identify and fix bugs before we switch next week.</p>
<p>Ashish, Christophe, Philippe and I have been discussing how we should go
about this and below is the plan for migration. If anyone would like to
discuss any of this, then the <a href="http://lists.ocaml.org/listinfo/infrastructure">infrastructure list</a> is the best
place to do so.</p>
<ol>
<li>
<p>We’ve made a <strong><a href="https://github.com/ocaml/ocaml.org/tree/redesign">new branch</a></strong> on the main ocaml.org repository with
the redesign. This branch is a fork of the master and we’ve simply cleaned
up and replayed our git commits there.</p>
</li>
<li>
<p>We’ve built a live version of the new site, which is visible at
<strong><a href="http://preview.ocaml.org">http://preview.ocaml.org</a></strong> - this is rebuilt every few minutes
from the branch mentioned above. </p>
</li>
<li>
<p>Over the course of one week, we ask the community to review the new site
and <strong><a href="https://github.com/ocaml/ocaml.org/issues">report any bugs or problems</a></strong> on the issue tracker. We <em>triage</em>
those bugs to identify any blockers and work on those first. This is the
phase we’ll be in from <em>today</em>.</p>
</li>
<li>
<p>After one week (7 days), and after blocking bugs have been fixed, we
<strong>merge the redesign branch</strong> into the master branch. This would
effectively present the new site to the world. </p>
</li>
</ol>
<p>During the above, we would not be able to accept any new pull requests on
the master branch but would be happy to accept them on the new redesign
branch, hence the restriction of the time frame to one week.</p>
<p>Please note that the above is only intended to merge the <em>design</em> and
<em>toolchain</em> for the new site. Specifically, we’ve created new landing
pages, have new style sheets and have restructured the site’s contents as
well as made some new libraries (<a href="http://pw374.github.io/posts/2013-09-05-22-31-26-about-omd.html">OMD</a> and <a href="http://pw374.github.io/posts/2013-10-03-20-39-07-OPAMaging-MPP.html">MPP</a>). The new toolchain
means people can write files in markdown, which makes contributing content a
lot easier. </p>
<p>Since the files are on GitHub, people don’t even need to clone the site
locally to make simple edits (or even add new pages). Just click the ‘Edit
this page’ link in the footer to be taken to the right file in the
repository and GitHub’s editing and pull request features will allow you to
make changes and submit updates, all from within your browser (see the
<a href="https://help.github.com/articles/creating-and-editing-files-in-your-repository">GitHub Article</a> for details). </p>
<p>There is still work to be done on adding new features but the above changes
are already a great improvement to the site and are ready to be reviewed by
the OCaml community and merged.</p>
<h2>Review of the OCaml FPDays tutorial</h2>
<p><em>Amir Chaudhry, 2013-10-28, <a href="http://amirchaudhry.com/fpdays-review">http://amirchaudhry.com/fpdays-review</a></em></p>
<p><a href="http://fpdays.net/2013/sessions/index.php?session=24"><img style="float: right; margin-top: 10px; margin-left: 10px" src="/images/web/fpdays-logo.png" /></a>
Last Thursday a bunch of us from the OCaml Labs team gave an OCaml tutorial
at the <a href="http://fpdays.net/2013/sessions/index.php?session=24">FPDays</a> conference (an event for people interested in Functional
Programming). <a href="https://github.com/yallop">Jeremy</a> and I led the session with <a href="http://www.lpw25.net">Leo</a>, <a href="https://github.com/dsheets">David</a> and
<a href="http://philippewang.info/CL/">Philippe</a> helping everyone progress and dealing with questions.</p>
<p><img style="float: left; margin-right: 10px" src="/images/fpdays2013/fpdays2013-01.jpg" />
It turned out to be by far the <em>most popular session</em> at the conference with
over 20 people all wanting to get to grips with OCaml! An excellent turnout
and a great indicator of the interest that’s out there, especially when you
offer a hands-on session to people. This shouldn’t be a surprise as we’ve
had good attendance for the general <a href="http://www.meetup.com/Cambridge-NonDysFunctional-Programmers/">OCaml meetups</a> I’ve run
and also the <a href="http://ocamllabs.github.io/compiler-hacking/2013/09/17/compiler-hacking-july-2013.html">compiler hacking sessions</a>, which Jeremy and
Leo have been building up (do sign up if you’re interested in either of
those!). We had a nice surprise for attendees:
<a href="http://en.wikipedia.org/wiki/Galley_proof">uncorrected proof</a> copies of Real World OCaml and luckily, we had just
enough to go around.</p>
<p>For the tutorial itself, Jeremy put together a nice sequence of exercises
and a <a href="https://github.com/ocamllabs/fpdays-skeleton">skeleton repo</a> (with helpful comments in the code) so that people
could dive in quickly. The event was set up to be really informal and the
rough plan was as follows:</p>
<ol>
<li>
<p><em>Installation/Intro</em> - We checked that people had been able to follow the
<a href="http://amirchaudhry.com/fpdays-ocaml-session/">installation instructions</a>, which we’d sent them in advance.
We also handed out copies of the book and made sure folks were comfortable
with <a href="http://opam.ocaml.org">OPAM</a>.</p>
</li>
<li>
<p><em>Hello world</em> - A light intro to get people familiar with the OCaml
syntax and installing packages with OPAM. This would also help people to get
familiar with the toolchain, workflow and compilation. </p>
</li>
<li>
<p><em>Monty Hall browser game</em> - Using <a href="http://ocsigen.org/js_of_ocaml/"><code>js_of_ocaml</code></a>, we wanted
people to create and run the <a href="http://en.wikipedia.org/wiki/Monty_Hall_problem">Monty Hall problem</a> in their
browser. This would give people a taste of some real world interaction by
having to deal with the DOM and interfaces. If folks did well, they could
add code to keep logs of the game results.</p>
</li>
<li>
<p><em>Client-server game</em> - The previous game was all in the browser (so could
be examined by players) so here the task was to split it into a client and
server, ensuring the two stay in sync. This would demonstrate the
re-usability of the OCaml code already written and give people a feel for
client server interactions. If people wanted to do more, they could use
<a href="http://opam.ocaml.org/pkg/ctypes/0.1.1/">ctypes</a> and get better random numbers. </p>
</li>
</ol>
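<p>To give a flavour of step 3, the game logic itself is quite small. Below is a minimal, hypothetical OCaml sketch of a Monty Hall simulation, not the tutorial's actual <code>js_of_ocaml</code> code (the names <code>trial</code> and <code>win_rate</code> are my own); it relies on the fact that, since the host always opens a goat door, switching wins exactly when the first pick was wrong:</p>

```ocaml
(* Monty Hall simulation sketch: compare the 'stay' and 'switch'
   strategies over many random trials. *)
let () = Random.self_init ()

(* One game: returns true if the chosen strategy wins the car. *)
let trial ~switch =
  let car = Random.int 3 in    (* door hiding the car *)
  let pick = Random.int 3 in   (* player's initial pick *)
  if switch then pick <> car   (* switching wins iff the first pick was wrong *)
  else pick = car              (* staying wins iff the first pick was right *)

(* Fraction of n trials won with the given strategy. *)
let win_rate ~switch n =
  let wins = ref 0 in
  for _i = 1 to n do
    if trial ~switch then incr wins
  done;
  float_of_int !wins /. float_of_int n

let () =
  Printf.printf "stay:   %.3f\n" (win_rate ~switch:false 100_000);
  Printf.printf "switch: %.3f\n" (win_rate ~switch:true 100_000)
```

Compiled and run with <code>ocamlc</code>, this should print win rates near 1/3 for staying and 2/3 for switching, which is the counter-intuitive result the browser game demonstrates.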
<p>We did manage to stick to the overall scheme as above and we think this is a
great base from which to improve future tutorials. It has the really nice
benefit of having visual, interactive elements and the ability to run things
both in the browser as well as on the server is a great way to show the
versatility of OCaml. <code>js_of_ocaml</code> is quite a mature tool and so it’s
no surprise that it’s also used by companies such as Facebook (see the recent
<a href="http://www.youtube.com/watch?v=gKWNjFagR9k">CUFP talk by Julien Verlaguet</a> - skip to <a href="http://www.youtube.com/watch?feature=player_detailpage&amp;v=gKWNjFagR9k#t=1149">19:00</a>). </p>
<p>We learned a lot from running this session so we’ve captured the good, the
bad and the ugly below. This is useful for anyone who’d like to run an
OCaml tutorial in the future and also for us to be aware of the next
time we do this. I’ve incorporated the feedback from the attendees as well
as our own thoughts.</p>
<p><img src="/images/fpdays2013/fpdays2013-03.jpg" alt="Heads down and hands on" /></p>
<h3 id="things-we-learnt">Things we learnt</h3>
<h4 id="the-good">The Good</h4>
<ul>
<li>
<p>Most people really did follow the install instructions beforehand. This
made things so much easier on the day as we didn’t have to worry about
compile times and people getting bored. A few people had even got in touch
with me the night before to sort out installation problems. </p>
</li>
<li>
<p>Many folks from OCaml Labs also came over to help people, which meant
no-one was waiting longer than around 10 seconds before getting help. </p>
</li>
<li>
<p>We had a good plan of the things we wanted to cover but we were happy to
be flexible and made it clear the aim was to get right into it. Several
folks told us that they really appreciated this loose (as opposed to rigid)
structure. </p>
</li>
<li>
<p>We didn’t spend any time lecturing the room but instead got people right
into the code. Having enough of a skeleton to get something interesting
working was a big plus in this regard. People did progress from the early
examples to the later ones fairly well.</p>
</li>
<li>
<p>We had a VM with the correct set up that we could log people into if they
were having trouble locally. Two people made use of this.</p>
</li>
<li>
<p>Of course, it was great to have early proofs of the book and these were
well-received.</p>
</li>
</ul>
<p><img src="/images/fpdays2013/fpdays2013-02.jpg" alt="RWO books galore!" /></p>
<h4 id="the-bad">The Bad</h4>
<ul>
<li>
<p>In our excitement to get right into the exercises, we didn’t really give
an overview of OCaml and its benefits. A few minutes at the beginning would
be enough and it’s important so that people can leave with a few sound-bites.</p>
</li>
<li>
<p>Not everyone received my email about installation, and certainly not the
late arrivals. This meant some pain getting things downloaded and running
especially due to the wifi (see ‘Ugly’ below). </p>
</li>
<li>
<p>A few of the people who <em>had</em> installed didn’t complete the instructions
fully but didn’t realise this until the morning of the session. There was a good
suggestion about having some kind of test to run that would check
everything, so you’d know if there was something missing.</p>
</li>
<li>
<p>We really should have had a cut-off where we told people to use VMs
instead of fixing installation issues and 10-15 minutes would have been
enough. This would have been especially useful for the late-comers.</p>
</li>
<li>
<p>We didn’t really keep a record of the problems folks were having so we
can’t now go back and fix underlying issues. To be fair, this would have
been a little awkward to do ad-hoc but in hindsight, it’s a good thing to
plan for.</p>
</li>
</ul>
<h4 id="the-ugly">The Ugly</h4>
<ul>
<li>The only ugly part was the wifi. It turned out that the room itself was a
bit of a dead-spot and that wasn’t helped by 30ish devices trying to connect
to one access point! Having everyone grab packages at the same time in the
morning probably didn’t help. It was especially tricky as all our
mitigation plans seemed to revolve around at least having local connectivity.
In any case, this problem only lasted for the morning session and was a
little better by the afternoon. I’d definitely recommend a backup plan in
the case of complete wifi failure next time! One such plan that Leo got
started on was to put the repository and other information onto a flash
drive that could be shared with people. We didn’t need this in the end but
it’ll be useful to have something like this prepared for next time. If
anyone fancies donating a bunch of flash drives, I’ll happily receive them!</li>
</ul>
<p>Overall, it was a great session and everyone left happy, having completed
most of the tutorial (and with a book!). A few even continued at home
afterwards and <a href="https://twitter.com/richardclegg/status/393458073052139520">got in touch</a> to let us know that they got
everything working.
Thanks to <a href="https://twitter.com/MarkDalgarno">Mark</a>, <a href="https://twitter.com/JacquiDDavidson">Jacqui</a> and the rest of
the FPDays crew for a great conference!</p>
<p><img src="/images/fpdays2013/fpdays2013-04.jpg" alt="RWO Book giveaway" /></p>
<p>(Thanks to Jeremy, Leo, David and Philippe for contributions to this post)</p>
<h2>FP Days OCaml Session</h2>
<p><em>Amir Chaudhry, 2013-10-22, <a href="http://amirchaudhry.com/fpdays-ocaml-session">http://amirchaudhry.com/fpdays-ocaml-session</a></em></p>
<p>On Thursday, along with <a href="https://github.com/yallop">Jeremy</a> and
<a href="http://www.lpw25.net">Leo</a>, I’ll be running an OCaml Hands-on Session at
the <a href="http://fpdays.net/2013/">FPDays conference</a>. Below are some prep
instructions for attendees.</p>
<h3 id="preparation-for-the-session">Preparation for the session</h3>
<p>If you’re starting from scratch, installation can take some time so it’s
best to get as much done in advance as possible. You’ll need OPAM (the
package manager), OCaml 4.01 (available through OPAM) and a few libraries
before Thursday. If you have any issues, please contact Amir.</p>
<ul>
<li>
<p><strong>OPAM</strong>: Follow the instructions for your platform at <a href="http://opam.ocaml.org/doc/Quick_Install.html">http://opam.ocaml.org/doc/Quick_Install.html</a>.
OPAM requires OCaml so hopefully the relevant dependencies will kick in and
you’ll get OCaml too (most likely version 3.12). You can get a cup of
coffee while you wait. After installation, run <code>opam init</code> to initialise OPAM.</p>
</li>
<li>
<p><strong>OCaml 4.01</strong>: We actually need the latest version of OCaml but OPAM
makes this easy. Just run the following (and get more coffee):</p>
</li>
</ul>
<div class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nv">$ </span>opam update
<span class="nv">$ </span>opam switch 4.01.0
<span class="nv">$ </span><span class="nb">eval</span> <span class="sb">`</span>opam config env<span class="sb">`</span></code></pre></div>
<ul>
<li><strong>Libraries</strong>: For the workshop you will need to check that you have the
following installed: <code>libffi</code>, <code>pcre</code> and <code>pkg-config</code>. How you do this
depends on your platform, so on a Mac with Homebrew I would do
<code>brew install libffi pcre pkg-config</code>, while on Debian or Ubuntu I would
start with <code>apt-get install libffi-dev</code>. After this, two OCaml packages it’s worth
installing in advance are <code>core</code> and <code>js_of_ocaml</code> so simply run:</li>
</ul>
<div class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nv">$ </span>opam install core js_of_ocaml</code></pre></div>
<p>OPAM will take care of the dependencies and the rest we can get on the day!</p>
<h2>Feedback requested on the OCaml.org redesign</h2>
<p><em>Amir Chaudhry, 2013-09-24, <a href="http://amirchaudhry.com/ocamlorg-request-for-feedback">http://amirchaudhry.com/ocamlorg-request-for-feedback</a></em></p>
<p>There is a work-in-progress site at
<a href="http://ocaml-redesign.github.io">ocaml-redesign.github.io</a>, where we’ve
been developing both the tools and design for the new ocaml.org pages. This
allows us to test our tools and fix issues before we consider merging
changes upstream.</p>
<p>There is a more detailed post coming about all the design work to date and
the workflow we’re using, but in the meantime, feedback on the following
areas would be most welcome. Please leave feedback in the form of issues on
the <a href="https://github.com/ocamllabs/sandbox-ocaml.org/issues">ocaml.org sandbox repo</a>. You can also raise points on the
<a href="http://lists.ocaml.org/listinfo/infrastructure">infrastructure mailing list</a>.</p>
<ol>
<li>
<p><strong>OCaml Logo</strong> - There was some feedback on the last iteration of the
logo, especially regarding the font, so there are now several options to
consider. Please look at the images on the
<a href="https://github.com/ocaml/ocaml.org/wiki/Draft-OCaml-Logos">ocaml.org GitHub wiki</a> and then leave your feedback on
<a href="https://github.com/ocamllabs/sandbox-ocaml.org/issues/16">issue #16 on the sandbox repo</a>.</p>
</li>
<li>
<p><strong>Site design</strong> - Please do give feedback on the design and any glitches
you notice. Text on each of the new landing pages is still an initial draft
so comments and improvements there are also welcome (specifically: Home
Page, Learn, Documentation, Platform, Community). There are already a few
<a href="https://github.com/ocamllabs/sandbox-ocaml.org/issues">known issues</a>, so do
add your comments to those threads first. </p>
</li>
</ol>
<h2>Wireframe demos for OCaml.org</h2>
<p><em>Amir Chaudhry, 2013-03-14, <a href="http://amirchaudhry.com/wireframe-demos-for-ocamlorg">http://amirchaudhry.com/wireframe-demos-for-ocamlorg</a></em></p>
<h3 id="making-mockups">Making mockups</h3>
<p>Over the last few months, I’ve been working on various aspects of the <a href="http://ocaml.org">OCaml.org</a> design project. This covers things like the design, information architecture and how to incorporate new functionality. One of the methods for thinking through these was to put together a bunch of wireframes using <a href="http://www.balsamiq.com">Balsamiq</a> and use these to express (and generate) ideas as well as get feedback quickly.</p>
<p>If you haven’t used wireframes before, think of them as a slightly more advanced form of sketching things out on a whiteboard. The best aspect is that it’s far quicker, easier and <em>cheaper</em> to iterate using wireframes than on an actual website. As you’ll see below, you can also convey a lot of information about how a site might work by showing people a clickable demo.</p>
<p>I want to make this work public and I thought the best way would be to show you some screencasts of how the upcoming <a href="http://ocaml.org">OCaml.org</a> website might work and also make the demo available to all of you. The three videos below cover three aspects of the site and I’d encourage you to go through them in order (about 16 minutes in total). Apologies if my screen isn’t particularly clear in the videos but you can visit the demo for yourself and see things in more detail (link and info on feedback at the end of this post).</p>
<h3 id="video-walkthroughs">Video walkthroughs</h3>
<p>For those who’d like to watch the videos back-to-back and scaled to fit your browser window, you can <a href="http://vimeo.com/couchmode/album/2301640">view the Vimeo album in ‘couchmode’</a>. Otherwise, individual videos are embedded below (total time 16m17s).</p>
<div class="flex-video widescreen vimeo">
<iframe src="http://player.vimeo.com/video/61768157?byline=0&amp;portrait=0&amp;color=de9e6a" width="540" height="303" frameborder="0" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen="true">Video Part 1 - Overview - http://player.vimeo.com/video/61768157</iframe>
</div>
<div class="flex-video widescreen vimeo">
<iframe src="http://player.vimeo.com/video/61768235?byline=0&amp;portrait=0&amp;color=de9e6a" width="540" height="304" frameborder="0" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen="true">Video Part 2 - Documentation - http://player.vimeo.com/video/61768235</iframe>
</div>
<div class="flex-video widescreen vimeo">
<iframe src="http://player.vimeo.com/video/61768273?byline=0&amp;portrait=0&amp;color=de9e6a" width="540" height="304" frameborder="0" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen="true">Video Part 3 - Continuous Integration - http://player.vimeo.com/video/61768273</iframe>
</div>
<h3 id="public-wireframe-demo">Public wireframe demo</h3>
<p>A demo you can interact with can be found at <a href="https://ocaml.mybalsamiq.com/projects/public-demo/naked/0_home?key=b897ea86d8a8199c6e46b3295ddf630dfa33e5e1">OCaml.org wireframe demo</a> and image files for each page are available on the <a href="https://github.com/ocaml/ocaml.org/wiki/Wireframes">github ocaml.org wiki</a>. Please bear in mind the following:</p>
<ul>
<li>
<p>Not everything that looks like it might be clickable actually is (and vice versa). There’ll be a toggle on the bottom right of the browser window that will highlight what can be clicked.</p>
</li>
<li>
<p>There are parts of the site which are ‘work in progress’ and are marked as such.</p>
</li>
<li>
<p>The designs you see aren’t necessarily final. Your feedback will help shape our decisions and the best way to provide it is via the <a href="http://lists.ocaml.org/listinfo/infrastructure">infrastructure mailing list</a>.</p>
</li>
</ul>
<h2>OCaml - Installation and hello world</h2>
<p><em>Amir Chaudhry, 2012-10-04, <a href="http://amirchaudhry.com/ocaml-installation-and-hello-world">http://amirchaudhry.com/ocaml-installation-and-hello-world</a></em></p>
<p class="footnote">This post is part of a series where I'm trying to teach myself OCaml.<br />
You might want to <a href="http://amirchaudhry.com/thirty-days-of-ocaml/">start at the beginning</a>.</p>
<p>It’s been a few days into my OCaml experience so this is a write-up of what I’ve come across so far. I’ve spent more time reading background rather than getting stuck in so I’ve copied in some of the links that I’ve found interesting/useful at the end.</p>
<h2 id="installation">Installation</h2>
<p>There are a number of ways you can get OCaml on your machine. The most obvious would be to get the source via the <a href="http://caml.inria.fr/ocaml/release.en.html">release page</a>, but you could also use something called <a href="http://godi.camlcity.org/godi/index.html">GODI</a>, which apparently bundles a bunch of other stuff alongside the language.</p>
<p>I don’t really want to install from source ‘by hand’ and I definitely don’t need all the stuff that comes with GODI. I happen to use <a href="http://mxcl.github.com/homebrew/">Homebrew</a> on my machine, so I checked to see if I can install that way. Turns out I can, so that’s what I ended up doing.</p>
<div class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nv">$ </span>brew install objective-caml</code></pre></div>
<p>The current version of OCaml is 4.00.0 and you can check the version you have installed by typing <code>ocaml -version</code> in your terminal.</p>
<h4 id="notes"><em>notes</em></h4>
<p>Be aware that Homebrew has its own installation process and depends on Ruby. To get Ruby, someone recommended that I install via <a href="https://rvm.io/">Ruby Version Manager</a>. To be able to do the above, you’ll need to have the OSX developer tools installed, which means having <a href="https://developer.apple.com/xcode/">Xcode</a>.</p>
<h2 id="hello-world">Hello World!</h2>
<p>The first thing to do is get a <a href="http://en.wikipedia.org/wiki/Hello_world_program">hello world</a> programme working. Since OCaml is a compiled language, that means writing the necessary source code into a file, compiling it and then executing it. In this case I only need one line that prints ‘hello world’ to the screen and I’m taking it from INRIA’s site (<a href="http://caml.inria.fr/pub/docs/u3-ocaml/ocaml-steps.html">link</a>).</p>
<div class="highlight"><pre><code class="language-ocaml" data-lang="ocaml"><span class="n">print_string</span> <span class="s2">&quot;Hello, world!</span><span class="se">\n</span><span class="s2">&quot;</span><span class="o">;;</span></code></pre></div>
<p>Take the one line above and save it in a file called <code>hello.ml</code>. Now we need to compile that file using the OCaml compiler. At the command prompt, type the following:</p>
<div class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nv">$ </span>ocamlc -o hello hello.ml
<span class="nv">$ </span>./hello</code></pre></div>
<p>Since I don’t really understand the first line I should break it down. <code>ocamlc</code> is the command that invokes the compiler, the option <code>-o hello</code> names the output executable (in this case ‘hello’) and the final argument is the source code file. It’s useful to look at the man page for <code>ocamlc</code> to see what other options are available. The second line executes the programme, which prints hello world to the screen with a line-break.</p>
<p>I also notice that I now have two other files in addition to the source, <code>hello.cmi</code> and <code>hello.cmo</code>. According to the man page, these are the ‘compiled interface’ and ‘compiled object code file’ respectively. I have no idea what that means but removing the files doesn’t affect the executable.</p>
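<p>To check my understanding of the compile step, here’s a slightly richer variant of my own (not from the INRIA page) that builds the message as a string with <code>Printf.sprintf</code> from the standard library, then prints it. It compiles exactly the same way, with <code>ocamlc -o hello2 hello2.ml</code>:</p>

```ocaml
(* hello2.ml: build the greeting as a string first, then print it.
   Compile with: ocamlc -o hello2 hello2.ml *)
let greeting name = Printf.sprintf "Hello, %s!\n" name

let () = print_string (greeting "world")
```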
<h3 id="ocaml-toplevel">OCaml ‘toplevel’</h3>
<p>Even though OCaml is a compiled language, there’s something called the ‘toplevel’ that allows interactive use (more on <a href="http://caml.inria.fr/pub/docs/manual-ocaml-4.00/manual023.html">toplevel</a>). To enter this mode, you simply type <code>ocaml</code> at the prompt, so let’s try running the above hello world program using the toplevel.</p>
<div class="highlight"><pre><code class="language-text" data-lang="text">$ ocaml
OCaml version 4.00.0
# print_string &quot;Hello, world!\n&quot;;;
Hello, world!
- : unit = ()
# #quit;;
$</code></pre></div>
<p>The <code>$</code> prompt is the command line and the <code>#</code> prompt is where toplevel is awaiting a new line of input. The input can span multiple lines and is terminated by <code>;;</code> (as in the source code).</p>
<p>To exit toplevel, type <code>#quit;;</code>. It took me three attempts to get that right. I haven’t really played around with toplevel much and I think I’m likely to stick with source code and compiling until I’m a little more comfortable with the syntax. </p>
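<p>To convince myself that input really can span multiple lines, here’s a small example of my own (not from the manual): the toplevel evaluates nothing until it reaches the terminating <code>;;</code>, and the same lines also work in a compiled <code>.ml</code> file:</p>

```ocaml
(* In the toplevel, a definition can span several lines;
   nothing is evaluated until the terminating ;; *)
let square x =
  x * x;;

(* Applying it: the toplevel replies with `val result : int = 25` *)
let result = square 5;;
```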
<p>That’s pretty much where I am for the moment and everything so far has been straightforward. Obviously, I should push myself a bit harder :)</p>
<h2 id="resources">Resources</h2>
<p>This is the material I found and have been looking over for the last couple of days. Useful as background but I’d say I’m in danger of (semi-productive) procrastination if I’m not careful.</p>
<ul>
<li><a href="http://caml.inria.fr/pub/docs/manual-ocaml-4.00">OCaml Manual</a> (from INRIA)</li>
<li><a href="http://caml.inria.fr/about/taste.en.html">OCaml examples</a> (also INRIA)</li>
<li><a href="http://en.wikibooks.org/wiki/Objective_Caml/Introduction">OCaml Intro</a> (wikibooks)</li>
<li><a href="https://ocaml.janestreet.com/?q=node/82">Minsky on ML</a> (Jane Street)</li>
<li><a href="https://sites.google.com/site/steveyegge2/ocaml">OCaml pros</a> (Steve Yegge)</li>
<li><a href="http://www.podval.org/~sds/ocaml-sucks.html">OCaml cons</a> (Sam Steingold)</li>
<li><a href="http://www.thinkocaml.com">Think OCaml</a> (PDF book)</li>
</ul>
<!--
http://news.ycombinator.com/item?id=112129
http://dave.fayr.am/posts/2011-08-19-lets-go-shopping.html
-->
<h1 id="thirty-days-of-ocaml"><a href="http://amirchaudhry.com/thirty-days-of-ocaml">Thirty Days of OCaml</a></h1>
<p><em>Amir Chaudhry, 2012-10-01</em></p>
<p><a href="http://www.flickr.com/photos/jeremyandchanel/6131620285/"><img src="/images/ocaml-30days/camel-bactrian-silhouette.jpg" alt="Bactrian Camel Silhouette" /></a></p>
<p>I’ve set myself a challenge that over the next thirty days, I’ll teach myself some <a href="http://en.wikipedia.org/wiki/Functional_programming">functional programming</a> using <a href="http://en.wikipedia.org/wiki/OCaml">OCaml</a>. This will be my first experience of FP in addition to learning a new language so I expect it’ll be quite challenging. </p>
<p>As I go along, I’ll try and write regular posts – hopefully one a day – describing my experiences and frustrations as well as the questions that occur to me. I suspect a number of the posts might be summaries to help me organise my understanding, especially where things seem unexpected. For the times where I can formulate well-posed questions, I’ll put them up in places like <a href="http://stackoverflow.com/questions/tagged/ocaml">Stack Overflow</a> as well as describing them here. Either way, keeping some form of diary will be useful for me and maybe also for other folks who’ve considered trying out FP and OCaml in particular.</p>
<p>Obviously, I’m not completely new to programming but I wouldn’t say I have any deep experience (especially compared to the folks I tend to hang out with). Just to give you a better picture, my coding experience so far has involved:</p>
<ul>
<li>Some C/C++ with a little <a href="http://en.wikipedia.org/wiki/Object-oriented_programming">OOP</a> programming for a High Energy Physics simulation (during my undergrad - a long time ago so I’ve forgotten everything)</li>
<li>A bunch of <a href="http://www.mathworks.com/products/matlab/">Matlab</a> and <a href="http://en.wikipedia.org/wiki/Bash_(Unix_shell)">Bash</a> scripting for analysis of neuroimaging data (during my PhD)</li>
<li>A bunch of <a href="http://en.wikipedia.org/wiki/Visual_Basic_for_Applications">VBA</a> within MS Excel for analysing behavioural data (also during my PhD – I really wish someone had pointed me to Python back then)</li>
<li>Part of <a href="http://learnpythonthehardway.org">Learn Python the Hard Way</a> (I keep dipping in and out of this)</li>
</ul>
<p>I tried to dump most of my old scripts – at least those I could find – into a <a href="https://github.com/amirmc/PhD_stuff">github repo</a>, but I still haven’t collated the Matlab bits. You can judge my coding skills for yourself.</p>
<p>By the end of the 30 days, I’m hoping to have the ability to pick up and read other people’s OCaml code and write basic stuff for myself. I don’t actually have a particular project in mind but I’m not going to let that be an excuse for <em>not</em> getting started.</p>
<p>I’ll post again tomorrow with the mundane parts of how to get things installed and running on a Mac*. Onward!</p>
<p class="footnote">*Admittedly, I already did this some time ago but I want this series of posts to start right from the beginning.</p>