OASIS is a tool that helps OCaml developers integrate configure, build and install systems into their projects. It creates standard entry points in the source code build system, allowing external tools to analyse projects easily.

This tool is freely inspired by Cabal, which is the same kind of tool for Haskell.

You can find the new release here and the changelog here. More information about OASIS in general is available on the OASIS website.

Here is a quick summary of the important changes:

Build and install annotation files.

Use builtin bin_annot and annot tags.

Tag .mly files on the same basis as .ml and .mli files (required by menhir).

Remove the 'program' constraint from C dependencies. Previously, when a library had C sources and e.g. an executable depended on that library, changing the C sources and running '-build' did not trigger a rebuild of the library. By adding these dependencies (rather than removing the constraint), rebuilds now seem to work fine.

Some bug fixes

Features:

no_automatic_syntax (alpha): disable the automatic inclusion of -syntax camlp4o for packages that match the internal heuristic (a dependency ending in .syntax, or a well-known syntax package).

compiled_setup_ml (alpha): fix a bug when using multiple arguments to the configure script.

This new version is a small release to catch up with all the fixes and pull requests present in the VCS that had not yet been published. This should make the lives of my dear contributors easier -- thanks again for being patient.

I would like to thank again the contributors to this release: Christopher Zimmermann, Jerome Vouillon, Tomohiro Matsuyama and Christoph Höger. Their help is greatly appreciated.

You can find the new release here and the changelog here. More information about OASIS in general is available on the OASIS website.

Here is a quick summary of the important changes:

Added a -remove switch to the setup-clean subcommand, designed to remove unaltered generated files completely rather than simply emptying their OASIS section.

Translate the path of ocamlfind on Windows to make it bash/win32 friendly.

The Description field is now parsed as more structured text (paragraphs/verbatim).

Features:

stdfiles_markdown (alpha): set the default extension of StdFiles (AUTHORS, INSTALL, README) to '.md'. Use Markdown syntax for standard files, and use comments that hide the OASIS section and digest. This feature should help with direct publishing on GitHub.

disable_oasis_section (alpha): allows DisableOASISSection to be specified in the package, with a list of expandable filenames. Any generated file in this list gets no OASIS section digest or comment headers and footers, and is therefore regenerated every time `oasis setup` is run (any changes made to it are lost). This feature is mainly intended for use with StdFiles so that, for example, INSTALL.txt and AUTHORS.txt (which often won't be modified) can have the extra comment lines removed.
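Put together with the feature flag, a minimal _oasis sketch might look like this (the package name, version and file list are illustrative, not taken from a real project):

```
OASISFormat:         0.4
Name:                example
Version:             0.1.0
Synopsis:            Example package
Authors:             Someone
License:             BSD-3-clause
AlphaFeatures:       disable_oasis_section
DisableOASISSection: INSTALL.txt, AUTHORS.txt
```

With this, INSTALL.txt and AUTHORS.txt are rewritten wholesale on each `oasis setup`, with no OASIS section markers inside.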

compiled_setup_ml (alpha): allow setup.ml to be precompiled, for speed.

This new version closes 4 bugs, mostly related to the parsing of _oasis. It also includes a lot of refactoring to improve the overall quality of the OASIS code base.

The big project for the next release will be to set up a Windows host for regular builds and tests on this platform. I plan to use WODI for this setup.

I would like to thank again the contributors to this release: David Allsopp, Martin Keegan and Jacques-Pascal Deplaix. Their help is greatly appreciated.

You can find the new release here and the changelog here. More information about OASIS in general is available on the OASIS website.

Here is a quick summary of the important changes:

Change BSD3 and BSD4 to BSD-3-clause and BSD-4-clause to comply with DEP5, add BSD-2-clause.

BSD3 and BSD4 are still valid but marked as deprecated.

Enhance .cmxs support through the generation of .mldylib files.

When one of the modules of a library has the same name as the library, ocamlbuild tends to turn just that module into the .cmxs. Using a .mldylib file fixes that problem, and the .cmxs really contains all the modules of the library.

Refactor oasis.cli to be able to create subcommand plugins.

Exported modules now start with CLI.

Display plugins in the manual.

Designed so that thread-safety is possible.

Try to minimize the number of functions.

Make better choices of names and API.

A subcommand plugin 'dist' to create tarballs is in preparation, as a separate project.

Remove the plugin-list subcommand; it was limited and probably not used. A better alternative will appear in the next version.

The setup-dev subcommand is now hidden and will soon be removed.

I published a quick intermediate version, 0.4.1, a few days after the previous release; it was a bug fix related to threads. I also decided to skip last month's release: I was in the US at the time and didn't have enough time to work on OASIS (Christmas vacation and a trip to the US). This month I am back on track.

This new version doesn't feature a lot of visible changes. I mostly worked on the command line interface code, in order to be able to create external plugins. A first external plugin is almost ready, but it needs some more polishing before release. This first plugin project is a port of a script that I have used for a long time and that was present in the source code of oasis (oasis-dist.ml). It will be a project of its own, with a different release cycle. The point of this plugin is to create a .tar.gz out of an OASIS-enabled project.

I also must admit that I am very happy to see contributors sending me pull requests through GitHub. It helps me a lot, and I also realize that the learning curve to get into the OASIS code is steep. This last point is something I will try to improve.

You can find the new release here and the changelog here. More information about OASIS in general is available on the OASIS website.

I have recently resumed my work on OASIS, and this will hopefully be the version that leads to quicker development iterations. The development process had slowed down because I feared introducing regressions, or new fields in _oasis. This was a pain, so I decided to change my development model.

Features

The most important step is the introduction of the AlphaFeatures and BetaFeatures fields. They allow pieces of code to be introduced that are only activated if the corresponding features are listed in those fields. This should help the project stay always ready to release.

The features also cover other aspects, like flag_tests and flag_docs, which were introduced in OASIS v0.3.0. In fact, the features API is now used to introduce every enhancement while keeping backward compatibility with regard to OASISFormat. Rather than defining a ~since_version:0.3 for fields, we use a feature that handles the maturity level of the change. When I feel a specific feature is ready to ship, I just change InDev Alpha to InDev Beta and then to SinceVersion 0.4. In the long term, once we no longer support any OASIS version that predates the SinceVersion, the feature will always be true and I will fully integrate it into the code.

The only constraint around features is: if you use the AlphaFeatures or BetaFeatures fields, you must use the latest OASISFormat.

pure_interface: an OCamlbuild feature that allows handling a .mli without a corresponding .ml file
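For instance, enabling an alpha feature in _oasis might look like this (a hedged sketch; the package metadata is made up, and note the latest OASISFormat is required as stated above):

```
OASISFormat:   0.4
Name:          example
Version:       0.1.0
AlphaFeatures: pure_interface
```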

Automate

Another topic is the automation of release testing. For OASIS v0.3.0, I ran tests on all platforms manually, late in the development of v0.3.0, and the fixing was painful. So I have set up a Jenkins instance that automates testing on Linux. In the long term, I plan to also set up a Mac OS X builder and start looking at Windows as well. This should help me catch errors early and fix them quickly.

However, for v0.4.0 I decided to just release what I have, which has mainly been tested on Linux. The point here is to release and iterate quickly, rather than wait for perfection. Hopefully end-user testing will quickly uncover new bugs.

Time boxed release

In the coming months, I will try to do time-boxed releases: a new version of OASIS every 15th of the month. The point is to iterate faster and avoid long delays between releases.

One week ago, a thread started on the opam-devel mailing list about the possibility of creating binary snapshots to distribute OPAM packages. Distributing binary packages with everything already compiled is pretty useful when you want to make sure that everyone has the same versions of packages installed, and you don't want to spend time configuring all your colleagues' computers.

As a matter of fact, I was also interested in seeing that happen. I have several computers where I want to install a set of packages, and I want to snapshot OPAM archives when I am ready for an upgrade. For a long time I tested another source distribution, GODI; I even wrote a puppet module to drive it. That was a fun experience, but as with any source-only distribution there are drawbacks: even if it is fully automated, you get a lot of errors and timeouts when trying to build everything from source.

This is how opam2debian started. I wanted a replacement for puppet-godi that takes advantage of OPAM and of the possibility to distribute my set of packages everywhere quickly.

The goals of this project are:

generate a Debian binary package with all the dependencies on external packages (like libpcre) set

use OPAM to build everything

use standard Debian process to build package

create snapshots of the OPAM repository that allow rebuilding exactly the same thing on different arches

The real challenge behind opam2debian was to use the standard Debian process to create the package, in order to get all the power of the dependency computation provided by the Debian maintainer scripts. My great achievement on this topic was to find a tool called proot, which allows you to bind-mount a directory as an unprivileged user. This is an achievement because it lets you create a package using a directory where you are not allowed to write (i.e. the /opt/opam2debian directory, which is owned by root). This makes it possible to build on any Jenkins builder without root access, using the standard Debian way of building packages.

Usage example

I need to compile a set of packages to install on all my Jenkins builders. Here is the list of packages I need; I also want to use the latest OCaml version. I want to build a Debian package for Debian Wheezy, which only has OCaml 3.12.1.

Getting the package list right can be tricky, because the process stops at the first error -- and if it is the last package that fails to build, that can take a long time. I have implemented a --keep-build-dir option in case you want to tune the package list live.

The program builds everything, and that can take quite a while. At the end you get a file opam2debian-test2_20131105_amd64.deb with a size of 232MB. That is big, but the biggest directory is $OPAMROOT/4.01.0/build; I am not sure whether I can remove it, but we may be able to save some space there (the build directory represents 50% of the package size).

A nice thing to note is that the result is a standard OPAM install. You can install, update and upgrade packages from there, as root. However, I would recommend not doing that, and instead rebuilding a newer Debian package to upgrade.

Install

You will need the opam and proot Debian packages, available in Debian jessie and sid. You will also need various OCaml libraries (cmdliner, calendar, fileutils and jingoo).

After 1.5 months of work, I am proud to officially release OUnit 2.0.0. This is a major rewrite of OUnit that adds various features I think were missing from OUnit1. The very good news is that the port of the OASIS test suite has proven that this new version of OUnit can drastically improve the running time of a test suite.

OUnit is a unit test framework for OCaml. It allows one to easily create unit-tests for OCaml code. It is based on HUnit, a unit testing framework for Haskell. It is similar to JUnit, and other XUnit testing frameworks.

a timer that makes tests fail if they take too long, only available with the processes runner (I was not able to implement it cleanly for the threads and sequential runners)

the ability to parametrize filenames, so that you can use OUNIT_OUTPUT_FILE=ounit-$(suite_name).log and have $(suite_name) replaced by the test suite name

locks to avoid accessing the same resource within a single process or across the whole application (typically to avoid doing a chdir while another thread is doing a chdir elsewhere)

an in_testdata_dir function to locate test data, if any

Migration path

OUnit 2.0.0 still provides the OUnit module, which is exactly the same as in the last OUnit 1.x version. This way, you are not forced to migrate. However, this means that you gain no advantage from the new release, and even some slowdown due to the increased complexity of the code. So I strongly recommend upgrading to OUnit2.

brackets are now inlined, so bracket setUp f tearDown becomes let x = bracket setUp tearDown test_ctxt in

make sure that you don't change global process state (like the current directory via chdir, or the environment via Unix.putenv), and that you don't rely on one test setting something up for the next test

The OASIS test suite migration

In order to check that everything was working correctly, I migrated the OASIS test suite to OUnit2. This is a big test suite (210 test cases), and it includes quite long sequences of tests (end-to-end tests, from calling oasis setup to compiling and installing the results). The migration was really time consuming, and I hoped to see a significant speedup of the tests with OUnit2.

The migration was quite heavy because this test suite had a big design problem: it modified the test data in place. I think I picked this design because I thought it would decrease the running time. As a matter of fact, this was a huge mistake that kept producing failing test cases whenever one of the previous tests had failed. I have refactored all this, and now we start by copying the test data into a temporary directory, which ensures that every test starts from pristine data.

During the redesign, I decided to reduce the number of tests by merging some of them. This should have no big impact on the running time, although it means the comparison with OUnit v1 is not a pure 1:1 one, even though exactly the same things are being tested. This explains the loss of ~50 tests, which have in fact been merged into other tests.

The overall speedup is 4x compared to OUnit v1 when using processes. However, there is a 12% slowdown compared to OUnit v1 for the sequential runner, and a 15% slowdown when using the OUnit v1 compatibility layer. While these are not very good scores, I hope they are small enough to be outweighed by the huge win of being able to run tests in parallel with processes.

And now the magic!

At this point, if you read the numbers carefully, you will have noticed that there is a 4.5x speedup between the sequential runner and 4 processes for OUnit2. Since we are only actively testing in 4 shards, this looks strange: I don't expect a super-linear speedup from the use of processes. I have checked that every test was indeed running, and found no solution to this mystery. Right now, I think it is due to the fact that we are running fewer tests in each of more processes, which should lighten the load on the GC (which may not trigger at all). I am not sure about this explanation, and I will welcome any bug report that shows a problem in the implementation of either the sequential or the processes runner. Still, this is great.

Help still wanted

If you find any bugs in OUnit v2, now is the time to report them on the OUnit BTS.

If you want to try to fix bugs by yourself, please checkout the latest version of OUnit:

Still remaining to do, but quite straightforward:

sys admin (website, release process)

update the whole documentation

Some things that I decided not to do for OUnit 2.0 release:

introduce a 'cached' state, to avoid rerunning a test when you can programmatically determine that the result will be the same.

The main development is now done, but before releasing I decided to test it on a real-scale application first. The first big migration to OUnit2 will be the OASIS test suite. This is a pretty big test suite (100+ tests) that takes a fair amount of time to run. I hope that during the next week I will be able to port the whole test suite and come back with some timing results.

The problem is that if you were using 2 or 3 temporary files, the level of indentation became high. I have decided to switch to a more imperative approach, registering the tear-down function inside the test context:

For a long time, I thought it was the only solution. But last week, I read again the documentation and found another solution.

My main concerns are the onlyif and last() parts; they don't look clean to me. The problem is that I cannot define the entry all at once, and if I use a value that is set late, the node cannot be targeted in between.

The clean way to do this is to first define the target attribute. Typically, in augeas changes:

set spec[user = '$name']/user '$name'

This way, if the node doesn't exist it is created, and you can then use it directly:

The big trick here is that defnode needs a value, but most of the time you cannot set a value for the node -- because it has none. To solve this, you set a placeholder value with defnode, apply your changes, and clear the node at the end.
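As a sketch of that pattern in a puppet manifest (the resource title, user name and placeholder value are illustrative; the changes mirror the spec[user = ...] example above):

```puppet
augeas { 'sudoers-spec-backup':
  context => '/files/etc/sudoers',
  changes => [
    # defnode needs a value, so give the node a throwaway one;
    # this also creates the node if it does not exist yet.
    'defnode sp spec[user = "backup"] placeholder',
    'set $sp/user backup',
    # ...and clear the placeholder at the end, since a spec node has no value.
    'clear $sp',
  ],
}
```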

This recent discovery has simplified a lot of the augeas changes I use.

Feel free to leave a comment about your own techniques for dealing with augeas and puppet.

The OASIS website had not been updated for a while, so I decided to take a shot at making it more up to date. This blog post is about the pipeline I have put in place to automatically update the website. It is the first end-to-end 'continuous deployment' project I have achieved.

Among the user visible changes:

an invitation to circle the OASIS G+ page, which is now the official channel for small updates in OASIS.

an invitation to fork the project on GitHub, since it is now the official repository for OASIS.

some links to the documentation for the bleeding-edge version of the OASIS manual and API.

The OASIS website repository is also on GitHub. Feel free to fork it and send me a pull request if you see any mistake.

The website is still using a lot of markdown processed by pandoc. But there are some new technical features:

no more index.php; we use a templating system to point to the latest version

we use a Jenkins job to regenerate the website daily, or after any update to OASIS itself.

Since I have started using Python quite a lot, I decided to use it for this project. It has a lot of nice libraries, and it helped me quickly get something done for the website (and it provides plenty of ideas for equivalent tools in OCaml).

The daily generation: Jenkins

I have a Jenkins instance running, so I decided to use it to compile the new website once a day, with updated documentation and links. This Jenkins instance also monitors changes to the OASIS source code, so I can do something even more precise: regenerate the website after every OASIS change.

I also use the Jenkins instance to generate a documentation tarball for the OASIS manual and API. This makes it possible to display the latest manual and API documentation, so I can quickly browse it and spot errors early.

Another good point about Jenkins is that it can store SSH credentials. So I created a build user with its own SSH key on the OCaml Forge, and I use it to publish the website at the end of the build.

The OCaml Forge has a nice SOAP API, but one needs to be logged in to access it. This is unfortunate, because I just want to access public data. The only way I found to gather my data was to scrape the OCaml Forge.

I use beautifulsoup to parse the HTML downloaded from the Files tab of the OASIS project and extract all the relevant information. I use curl to download the documentation tarballs, both for released versions and for the latest development version.

Template

I feed the gathered data to mako and process all the .tmpl files in the repository to create the matching files.

Among the things that I have turned into templates:

index.php has been transformed into index.mkd.tmpl; it used to be a hackish PHP script scraping the RSS feed of updates, and it is now a clean template.

robots.txt.tmpl; see the following section for an explanation

documentation.mkd.tmpl, in order to list all versions of the documentation.

Fix documentation and indexing

One of the problems with providing access to all versions of the documentation is that people can end up reading an old version. To prevent that, I use two different techniques:

prevent search engines from indexing old versions.

warn users that they are reading an old version.

To prevent search engines from indexing old files, I have created a robots.txt that lists the URLs of all the old documentation. This should be enough to keep search engines away from the wrong pages.
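The generation step can be sketched in shell (the version list and URL layout are made up for illustration; the real file is produced from robots.txt.tmpl):

```shell
# Emit one Disallow line per old documentation version.
old_versions="0.2.0 0.3.0"
{
  echo "User-agent: *"
  for v in $old_versions; do
    echo "Disallow: /oasis-doc/$v/"
  done
} > robots.txt
cat robots.txt
```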

To warn users that they are reading the wrong version, I have added a box saying "you are not viewing the latest version". This part was tricky, but beautifulsoup v4 provides a nice API to edit HTML in place. I just had to find the right CSS selector for the position where I want to insert my warning box.

Publish

The ultimate goal of the project is 'continuous deployment'. Rather than picking which version to deploy and doing the process by hand, I let Jenkins deploy every version.

Deploying the website used to be a simple rsync, but for this project I decided to use a fancier method. I spent a few hours deciding which framework was best for automatic deployment. There are two main frameworks around: capistrano (Ruby) and fabric (Python).

Fabric is written in Python, so I picked it because it was a good fit for the project. Fabric's biggest feature is that it is an SSH wrapper.

The fabric script is quite simple; to understand it, you just have to know that local runs a local command and run runs a command on the target host.

The fabfile.py script does the following:

create a local tarball using the OASIS website html/ directory

upload the tarball to ssh.ocamlcore.org

uncompress it and replace the htdocs/ directory of the oasis project

remove the oldest tarballs; we keep a few versions to be able to perform a rollback.
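The first three steps can be sketched locally in shell, with temporary directories standing in for the remote host (paths and names are illustrative; the real script uses fabric's local and run over SSH):

```shell
# Local demo of the deploy steps; cp stands in for the scp/upload step.
src=$(mktemp -d); remote=$(mktemp -d)
mkdir "$src/html"
echo '<h1>OASIS</h1>' > "$src/html/index.html"
tar czf "$src/site.tar.gz" -C "$src" html   # 1. create a local tarball
cp "$src/site.tar.gz" "$remote/"            # 2. upload it (scp in real life)
mkdir "$remote/htdocs"
tar xzf "$remote/site.tar.gz" -C "$remote/htdocs" --strip-components=1  # 3. replace htdocs/
# 4. old tarballs would be rotated here
ls "$remote/htdocs"
```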

With this new process, the website is updated automatically within 3 minutes of a successful OASIS build.

This blog post is a little recipe for performing a Debian migration on a node using Puppet, along with some other good practices.

We run all the following commands as root; this is one of those exceptional situations where you should have a real root session (login through the console, or su -).

I tend to avoid using the X server while doing an upgrade, so my preferred setup is a laptop to take notes and check things on the internet, plus a session on the computer being upgraded (ssh + su -, or logging in as root on the console). In both cases, I use screen during the upgrade so that I can handle disconnections.

Create or update your admin-syscheck script

First of all, a good practice is to have a script that runs various tests to check that everything on the system is OK. This is useful in general, not only for upgrades, but during an upgrade it is particularly useful. I call this script admin-syscheck; it is a simple bash script.

This script checks various aspects of the system and serves as the external record of the most advanced knowledge I have gathered about setting up each service. For example, I know that having *.bak or *.dpkg-dist files in /etc/ means that something needs to be merged and a file should be deleted. Another example is setting up the right aliases for 127.0.0.1 and ::1 (which you can differentiate using getent ahostsv4 and getent ahostsv6).

I have packaged this script and distribute it through a specific apt-get repository, but you could just distribute it using puppet. I recommend running it daily, both to track changes (e.g. after an apt-get dist-upgrade) and to check that the setup is aligned with my most up-to-date knowledge about setting up each service (i.e. it is my external record).

In our case, we are interested in checking for the presence of old and new configuration files, before and after upgrading. Here is the relevant section of my script:

Setting $fix to true spawns a vim -d old new command in which you can merge the changes and then delete the leftover file. This is extremely handy.
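The original snippet is not reproduced here, but based on the description above the check might be sketched like this (the function name and scratch demo are illustrative, not the real admin-syscheck code):

```shell
# Find leftover configuration files and, optionally, merge them interactively.
fix=false
check_leftovers() {
  find "$1" \( -name '*.dpkg-old' -o -name '*.ucf-old' \
            -o -name '*.dpkg-dist' -o -name '*.ucf-dist' -o -name '*.bak' \) |
  while read -r f; do
    echo "leftover: $f (merge with ${f%.*})"
    if [ "$fix" = true ]; then
      vim -d "${f%.*}" "$f"   # merge by hand...
      rm -i "$f"              # ...then delete the leftover
    fi
  done
}

# Demo on a scratch directory:
demo=$(mktemp -d)
touch "$demo/sudoers" "$demo/sudoers.dpkg-dist"
check_leftovers "$demo"
```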

Upgrading to Wheezy

I strongly recommend reading the upgrade chapter of the release notes first. It gives you a more complete overview of the upgrade procedure; I only go through the basic steps here.

1. Update everything on the system:

$> apt-get update
$> apt-get dist-upgrade

2. Check that the current configuration applies cleanly:

$> puppet agent --test

3. Run admin-syscheck:

$> admin-syscheck

And fix all the problems.

4. Disable puppet:

I use a cronjob to run puppet, so I just comment out the job's line (/etc/cron.d/puppet-custom). If you run the daemon, disable puppet by stopping it and preventing it from starting: edit /etc/default/puppet and set START=no.

5. Fix your sources and pinning:

Change squeeze to wheezy in /etc/apt/sources.list and remove unneeded files from /etc/apt/sources.list.d/ (you may keep certain sources that refer to stable, like google-chrome.list).

$> rm /etc/apt/sources.list.d/* # Check if it is OK to do this on your system.

I tend, though, to fully purge /etc/apt/sources.list except for the main line (removing backports and security is fine for a short time). The first run of puppet after the upgrade will reset this file anyway.

$> rm /etc/apt/preferences.d/* # (at least the files that pin some version)

You can alternatively remove all pinning from /etc/apt/preferences.
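The squeeze-to-wheezy sources change described above boils down to a single sed; here it is demonstrated on a scratch copy (the real edit targets /etc/apt/sources.list and needs root):

```shell
# Rename the release in a throwaway copy of sources.list.
f=$(mktemp)
cat > "$f" <<'EOF'
deb http://ftp.debian.org/debian squeeze main
deb http://security.debian.org/ squeeze/updates main
EOF
sed -i 's/squeeze/wheezy/g' "$f"
cat "$f"
```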

6. Now you start the real upgrade:

$> apt-get update
$> apt-get dist-upgrade

7. During the upgrade you will be asked whether you want to keep old configuration files or install the newer ones from the maintainer.

I have always wondered what to answer here. After a few major upgrades, my answer is: always install the configuration files from the maintainer, unless the service has ultra-specific settings that could break during the upgrade.

The only file that I should not upgrade on my system is /etc/sudoers. In this very specific case, you need to make sure before the upgrade that the old and new configurations can coexist. In the squeeze-to-wheezy case, I just set up a few extra augeas rules to set secure_path before the upgrade, and it was fine. This is typically the kind of situation where you are thankful to have a real root session.

8. The upgrade can be long and may require various fixes, removing and re-adding packages to circumvent problems. At the end you will have a set of *.dpkg-old and *.ucf-old files (and some *.dpkg-dist and *.ucf-dist files). The *-old files are your old versions of the files, and the corresponding files are the maintainer's versions; the *-dist files are the maintainer's versions, and the corresponding files are your old versions.

People working with augeas and puppet will appreciate that they probably have zero changes to make for this to work (since it only does a few replacements in configuration files).

9. Once you are happy with the changes, copy /etc.new back to /etc and go back to step 3 until the difference is almost 0.

10. Re-enable the automatic run of puppet.

Follow this procedure for at least one computer of each category you have (e.g. Desktop and Server nodes). Once you are fully confident that your new puppet setup works, you can use the 'further upgrade' procedure for the other nodes.

Further upgrade.

This one is super easy compared to a first upgrade:

1. Re-enable puppet and have it run at least once:

$> puppet agent --test

2. Merge the *.{dpkg,ucf}-{dist,old} files with their corresponding files (you can run admin-syscheck with fix=true). This is mostly a sanity check, since you should already have solved most problems with the 'first upgrade' procedure.

And to get the root password for mysql, just log into the node "mysqlserver":

$> sekred get root@mysql
Cie8ieza

Setting a password for an SSH-only user

This example is quite typical of the broken fully-automated scenario with passwords:
- you set up a remote host only accessible through SSH
- you create a user and set its SSH public key to authorize access
- your user cannot access the account, because SSH prevents logins for password-less accounts!

In other words, you need to log into the node, set a password for the user and mail it back to them... which somewhat defeats the "automation" provided by puppet.

So the command test "$(getent shadow $name | cut -f2 -d:)" = "!" tests for a password-less account. If that is the case, it creates a password using sekred get --uid $name $name@login and sets it through chpasswd.
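A simulation of that check (the shadow line is faked, and the sekred/chpasswd step is only echoed, since the real command runs as root on the host):

```shell
# Second colon-separated field of a shadow entry is the password hash;
# a bare '!' means no password is set.
shadow_entry='foo:!:15000:0:99999:7:::'
if test "$(echo "$shadow_entry" | cut -f2 -d:)" = '!'; then
  echo "foo has no password set; sekred + chpasswd would be invoked here"
fi
```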

Note that $user_passwd uses a shell expansion that is only evaluated when the command runs, on the host. The --uid flag of sekred assigns ownership of the password to the given user id.

So now the user (foo) can log into the node and retrieve their password using sekred get foo@login.

Try it!

Sekred was a very short project, but I am pretty happy with it. It solves a long-standing problem and helps cover an extra mile of automation when setting up new nodes.

Logcheck is pretty straightforward to install. It scans your logs every two hours and sends you a report of what is happening. It is not a very precise tool, and you have to tune it a little so it only sends you what is relevant (install extra rules to ignore what you know is of no interest). Whenever you start to see log entries like this:

It is a good time to think about changing your hard drive -- but it may already be too late.

Smartmontools (a.k.a. smartd) is a tool dedicated to monitoring hard drives, and it does a good job. I think it is not installed by default in Debian, but it should be. It checks your hard drive for SMART capabilities and monitors the drive's health using its internal tools. In the case of bad blocks, you will start to see entries like this:

Obviously when you see this log, it is also a good time to change your hard drive.

In the case of my wife's drive, I only got data from logcheck. That means the error is not that serious (a transient failure: something is wrong, but the drive can cope with it). But I still decided to get a new drive for my wife.

Whenever I receive a new drive, the first thing I do is check it for errors. You can do that using the badblocks program in write mode. It takes ages (count up to 1 day for 1TB over USB), but at the end you know that you have a good candidate -- one worth installing your data on.

You just have to follow this procedure:

1. dmesg | grep sd
2. try to find which drive is the one you want to test in the output of step 1
3. cfdisk /dev/sdX, sdX being the drive you want to test
4. check that what you see in cfdisk is what you expect to test: the right name and capacity for the drive
5. sudo badblocks -wvs /dev/sdX
6. run sudo tail -f /var/log/syslog in parallel, just in case
7. wait

If errors appear in /var/log/syslog, you know something bad is happening. Even a single failing block is not OK: it is NOT normal for a HD to start its life with failing blocks. In that case, repack the drive and send it back for replacement ASAP.
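Since badblocks -w destroys everything on the target, I like to wrap the call in a guard. This is only a sketch: the burnin function name and the confirmation string are mine, and /dev/sdX is a placeholder as above.

```shell
# Guarded wrapper around the destructive surface test described above.
burnin() {
  dev="$1"; confirm="$2"
  if [ "$confirm" != "yes-i-am-sure" ]; then
    echo "refusing: badblocks -w erases everything on $dev"
    return 1
  fi
  sudo badblocks -wvs "$dev"
}

# Without the confirmation string it only prints a warning:
burnin /dev/sdX || true
```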

In the case of my hard drive:
smartd mail:

The following warning/error was logged by the smartd daemon:
Device: /dev/sda [SAT], 15 Currently unreadable (pending) sectors

I am not blaming any particular brand (like Western Digital); all the computer parts I have ever bought had to follow the same procedure, and it is a known fact that computer parts have a non-zero chance of being DOA (dead on arrival) or of dying after a few weeks. As a consumer you should be aware of that and take action to avoid spending 10h configuring your computer only to see it fail after a week... The time spent testing is a win in the long run.

True. I think the approach of puppet is not really UNIXish, probably on purpose. The biggest issue is probably the PKI: it breaks frequently for unknown reasons. The "non intuitive configuration language" is probably a matter of taste. I think the language is not very well designed and a bit strange, but I can cope with that. The "attempt at versioning" -- if I understand correctly what it means -- refers to the fact that when Puppet replaces a file, it moves the old file to a bucket. This is not a good default, but you can say "backup => '.puppet-bak'" and you get almost the same behavior as ".dpkg-old".

Ruby

False debate. We could discuss Ruby, PHP, Java or whatever pet language people have invented for hours. I am not a fan of Ruby, but it is still nice as a general-purpose language. To my mind, Ruby is still better for writing daemons than bash.

The config of this node is not complex, but 3s is not that bad for something that runs every 30min. If you need sub-second speed for this kind of thing, you are probably not looking for this kind of tool. Is 144s of server time per day a big deal?

With a much more complex setup, I can reach 30s for a run, although at that point I manage a lot of things with it.

Lack of basic functionality (e.g. replace a line of text)

False and True. Augeas allows you to replace a single value (even more precisely than a line). Just have a look at the augeas type. It is pretty nice and allows you to do things like replacing "Defaults env_reset" with "Defaults env_reset, !tty_tickets" in 4 lines of code. So this is not exactly "a single line of text", but there are other ways to do it.
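For example, here is a minimal sketch of the augeas type changing a single value without rewriting the whole file, using the sshd_config lens (the resource title is mine). The sudoers edit mentioned above works the same way, with a path under /files/etc/sudoers whose exact shape depends on the lens -- inspect it first with augtool print.

```puppet
augeas { 'sshd-no-root-login':
  context => '/files/etc/ssh/sshd_config',
  changes => 'set PermitRootLogin no',
}
```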

False. Well, if you organize your code with manifests/site.pp and manifests/classes/*.pp, there is a separation between the two. Next, you can try inheritance and define to create specific high-level features.

Horrific error messages

False-ish. Hey, at least there are error messages ;-) Now, most of the errors related to the programming language are useless (at least as cryptic as a C++ error message). But that is usual with error messages in programming languages.

Catastrophic upgrade paths

True. Multi-version installation is horrible and you have to fix a lot of stuff to keep a sane overall configuration.

Lack of IPv6 support

I am not sure I understand this point; I use puppet over IPv6...

To whoever is considering puppet: it is worth a try. It is a nice system that really helps to maintain a decent configuration across nodes.

I recently read this book and it was enlightening -- that is the least I can say. It is organized as a set of recipes addressing the most common problems you have with puppet. The whole point is to be practical, and it gives you various ways to achieve the same goal, depending on whether you want a quick and dirty hack or a long-term solution. The book achieves this goal pretty well. However, it is not for beginners: Puppet itself is not for beginners, and you should already have written manifests and run into problems to really take advantage of this book.

I read the book in 5 days (only while commuting), and at each page I thought: "OMG, they do that like this" or "of course, you should use git + that", and waited impatiently to get back home to test all this stuff. The book is really about technical details and how to organize yourself to write nice Puppet classes. I have been an "irregular" user of Puppet for 4 years; I was mainly running it to distribute files on all my computers. For the last 3 weeks, I have been applying the recipes found in this book, and it is just like discovering that you own something worth a million euros.

Let's look at some examples:

Setting a value in a configuration file:

Before:
Replace the whole file.

Now:
Use Augeas to change just the value and leave the rest of the file untouched. The book made me discover Augeas and it rocks!!!

Now:
A shiny git repository in my home directory, with a Makefile to validate the Puppet syntax, simulate a run, and effectively apply it to the current node before deploying it through git push.

Installing exim4:

Before:
Nothing, because I didn't think it was possible.

Now:
Generate a configuration file using a template, install it, run the exim4 update script and restart the service.

Installing sshd:

Before:
Copy a file and restart the service.

Now:
Use Augeas and a template with a loop to change what is needed on each node, depending on which users I want to allow to connect to it. Use a different SSH port for every vserver that shares the same IP, and distribute an /etc/ssh_config that propagates the default ports for all the vservers... (just being able to do that is worth the price of the book).

Have a problem and try to solve it:

Before:
Complain, whine and give up.

Now:
Go to Puppet Forge, take inspiration from similar modules, re-read part of the book, find an idea and apply it.

And so forth and so on. This book made me love Puppet again. I warmly recommend it.

In my effort to automate most of the stuff needed to make a release, I am working on a SOAP to OCaml converter. Richard W.M. Jones has kindly allowed me to take over his project "ocsoap". The project is now located here and I am working on making it compatible with the FusionForge SOAP interface.

My first step was to make it compatible with OASIS and ocamlbuild. The project is Makefile based and needed some extra care to become ocamlbuild based.

The oasis-fication itself was a piece of cake. Just describe the dependencies of the project and make some extra choices, like compiling the camlp5 extension to a .cma rather than a .cmo. You can see the _oasis and myocamlbuild.ml results, compared to the initial Makefile.

The function rule creates a rule from a name and a function that executes it. The labels ~prod and ~dep are the right and left parts of %.cdo: %.cd in the Makefile. When we call env "%.cd", it replaces the % by the part matched in ~prod and ~dep.

Next we return a Rule.action. The action in this case is a command Cmd. This rule uses a sequence S, which is the command line itself. The sequence is made of smaller pieces that can be a placeholder for tag content (T plus the tags that will trigger it), an atom (A, typically a command-line option), or a filename (P).

For a file src/foo.ml, the generated command will be

cduce $(TAG) --compile -I src src/foo.ml

where $(TAG) will be replaced by the content of flag ["file:src/foo.ml"; "cduce"; "compile"] content, but also by the content of flag ["cduce"; "compile"] content: the flags just need to be a subset. For example, if you want to add --verbose to the command line, just add flag ["cduce"; "compile"] (S[A"--verbose"]);; somewhere in myocamlbuild.ml.
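Putting the pieces above together, a minimal sketch of such a rule in myocamlbuild.ml could look like this (the rule name and the -I handling are my reconstruction, not the project's exact code):

```ocaml
(* %.cd -> %.cdo: build the command from tags (T), atoms (A) and paths (P). *)
rule "cduce: cd -> cdo"
  ~prod:"%.cdo"
  ~dep:"%.cd"
  (fun env _build ->
     let cd = env "%.cd" in
     let tags = tags_of_pathname cd ++ "cduce" ++ "compile" in
     Cmd (S [A "cduce"; T tags; A "--compile";
             A "-I"; P (Pathname.dirname cd); P cd]))
```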

For the next rule, we need to compile %.cdo to %.cmo. It is trickier because in this case, we want %.cmi to be built first when possible. It is not required to build it first if there is no %.mli file.

The difference with the former rule is that we call cduce_prepare_compile, which in turn calls (build [[env "%.cmi"]]). The call to build asks ocamlbuild to compile src/foo.cmi, but we don't care about the result, i.e. in case of Outcome.Bad exn we don't fail. This way we don't stop the build process, which continues and produces a .cmo and .cmi straight out of the .cdo. The .cdo itself is compiled as a .ml file with ocamlfind ocamlc, except that we apply cduce --mlstub as a preprocessor: A"-pp"; Quote(S[cduce; ..; A"--mlstub"]).
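A sketch of the tolerant .cmi pre-build; the body of cduce_prepare_compile is my reconstruction, only the build [[env "%.cmi"]] call and the ignored Outcome.Bad come from the text:

```ocaml
(* Ask ocamlbuild for the .cmi but ignore failure: with no .mli the
   .cmi will be produced from the .cdo itself later. *)
let cduce_prepare_compile build cd =
  let cmi = Pathname.update_extension "cmi" cd in
  List.iter
    (function
       | Outcome.Good _ -> ()
       | Outcome.Bad _ -> ())  (* no %.mli: don't stop the build *)
    (build [[cmi]])
```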

The last rule I will comment on is the one that transforms a .intf into a .ml. This rule is particular because it is totally different from the corresponding pieces of the original Makefile.

Here we decided to translate the .intf directly into a .ml file. The good thing about ocamlbuild is that it has a powerful dynamic dependency scheme: you generate the .ml file, which in turn generates a .ml.depends, and it then gets compiled the standard way. In the Makefile, you need to compute the .depends file using a different process that does everything before the compilation even starts (in fact, before the inclusion of the .depends). We also use the trick of invoking an OCaml printer (pr_o.cmo) with camlp5o, which directly outputs a standard .ml file.
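The .intf -> .ml rule might then be sketched as follows (again my reconstruction: camlp5o with the OCaml-syntax printer pr_o.cmo writes a plain .ml file that ocamlbuild then processes, .ml.depends included, the standard way):

```ocaml
rule "intf -> ml"
  ~prod:"%.ml"
  ~dep:"%.intf"
  (fun env _build ->
     Cmd (S [A "camlp5o"; A "pr_o.cmo"; P (env "%.intf");
             A "-o"; P (env "%.ml")]))
```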

Don't hesitate to post a comment if you have questions about OASIS and ocamlbuild.

OASIS is a tool to integrate a configure, build and install system in your OCaml project. It helps to create standard entry points in your build system and allows external tools to analyse your project easily. It is the building brick of OASIS-DB, a CPAN for OCaml.

This release took almost 18 months to complete. That is too long, and I will explain in another blog post how I am trying to improve this right now (esp. using continuous integration).

This new release fixes a small bug (1 line) that prevents setup.ml from running with OCaml 4.00. If you have projects that were generated with a former release, consider upgrading them with OASIS 0.3.0.

There is some big new stuff coming with this release. OASIS now supports a Pack: field for libraries, which allows you to pack your library using -for-pack, and so on.

We also compile .cmxs (dynlink objects for native code) by default and publish them in the META file. This feature was implemented to get more libraries to provide .cmxs and to help projects like Ocsigen take advantage of it. If you want to get rid of this at configure time, you can use ./configure --override native_dynlink false.

We introduce two new default flags for configure: --enable-tests and --disable-docs. These implicit flags define whether we run Test sections or compile Document sections. They are especially useful to reduce the number of dependencies, because the dependencies of Test sections are excluded by default. We recommend setting Build$: flag(tests) on any Library or Executable section that is only useful for tests. This allows you to really cut down the number of dependencies.
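For example, a hypothetical _oasis fragment for a test-only executable could look like this (the section and file names are made up):

```
Executable test_main
  Path:         tests
  MainIs:       test_main.ml
  Build$:       flag(tests)
  BuildDepends: oUnit

Test main
  Command: $test_main
  Run$:    flag(tests)
```

With this, oUnit is only required when you run ./configure --enable-tests.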

The last change I want to introduce concerns the old setup-dev subcommand, which is now deprecated. It has been replaced by 2 different update schemes. I am pretty excited by this feature, which in fact comes from OASIS users (esp. the Lwt project). The former scheme was to have a big setup.ml that always calls the command oasis to update itself. This was complex and not very useful. We now have 2 modes: dynamic and weak.

dynamic allows you to have a small setup.ml and to keep your VCS history clean, but you need to install OASIS. weak needs a big setup.ml but only calls the command oasis if someone changes something in _oasis. This mode targets projects that want to be checked out from VCS and built without installing OASIS. The diffs generated by weak mode don't pollute the VCS history too much because most of the time they make sense. For example, when you upgrade your package version number in _oasis, it produces a 6-line change where the version number changes in every META, setup.ml and so on.

This version has been tested on:

Linux (Debian Sid and Debian Squeeze with GODI, OCaml 3.12)

Windows Vista with MS Visual Studio 2008 toolchain, OCaml 3.12

Mac OS X, OCaml 3.12 from GODI

You can download it here or use your favorite package manager:
Debian (UPDATE: pending upload)

$ apt-get install -t experimental oasis

GODI

$ godi_console perform -build apps-oasis

odb.ml

$ ocaml odb.ml oasis

Here is the complete changelog:

EXTREMELY IMPORTANT changes (read this)

Fix a bug with scanf %S@\n for OCaml 4.00. We were unfortunately relying on an undocumented tolerance of Scanf in the previous version. You should consider making a new release using this version, which fixes this.

PACKAGES uploaded to oasis-db will be automatically "derived" before the OCaml 4.00 release (i.e. oUnit v1.1.1 will be regenerated with this new version as oUnit v1.1.1~oasis1).

PACKAGES not uploaded to oasis-db need to be regenerated. In order not to break 3rd-party tools that consider a tarball constant, I recommend creating a new version.

Thanks to the INRIA OCaml team for synchronizing with us on this point.

Major changes:

Handle the field Pack: true to create packed libraries. It also installs .mli files for documentation into the target directory. The pack option is only supported for OASISFormat: 0.3; you will need to update the version of your _oasis file to match it.

Introduce --enable-tests to disable tests and docs at the oasis level. It seems a very common pattern to have a Flag tests that turns tests off by default. This is now defined as a standard variable; you should remove your previous Flag tests but you can continue to use flag(tests) where needed.

The Run$: flag(tests) is implicit for the section Test main. The default value is false for tests. If all the test executables are flagged correctly (Build$: flag(tests)), you get rid of the dependency on oUnit.

Documentation works the same way, but the default is true. (Closes: #866)

Allow defining interdependent flags.

In order to allow interdependent flags, we transform lazy values back into unit -> string functions. This allows changing a flag value on the command line and updating all the dependent values. (Closes: #827, #938)

none: this is the default mode, and the one you should use when distributing tarballs. No updates are performed at all.

weak: the update is only triggered when something changes in `_oasis`; we keep all generated files.

dynamic: the content of 'setup.ml' is ultra small (<2kB); we only keep a small setup.ml, Makefile and configure.

The choice between weak and dynamic depends on your needs with regard to the VCS and the presence of oasis. The weak mode allows you to check the project out from the VCS and work on it without installing 'oasis', as long as you don't change the file _oasis. But it clutters your VCS history with changes to the build system each time you change something in _oasis. The 'dynamic' mode gives you no VCS history pollution but makes it mandatory to have the oasis libraries installed.
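If I remember the CLI correctly, the mode is chosen with the -setup-update flag of oasis setup (this is from memory, check oasis setup --help):

```
$ oasis setup                        # none: for release tarballs
$ oasis setup -setup-update weak     # big setup.ml, oasis only needed when _oasis changes
$ oasis setup -setup-update dynamic  # small setup.ml, oasis required
```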

Don't copy executables in ocamlbuild

Avoid copying executables to their real names. This lets us call ocamlbuild a single time for the whole build rather than calling it n times (n = number of executable sections) and copying the resulting executables.

This speeds up the build process because ocamlbuild doesn't have to compute/scan dependencies each time.

The drawback is that you have to use $foo when you want to call Executable foo, because $foo expands to '_build/.../main.byte'.

Change the way we parse command-line-like options (CCOpt, CCLib and the like). We have implemented a real POSIX command-line parser, except that variables are processed by Buffer.add_substitute (unless correctly escaped, using Buffer.add_substitute escaping).

For example:

CCOpt: -DEXTERNAL_EXP10 -L/sw/lib "-framework vecLib"

It will be parsed correctly and output according to the target OS.

Minimize the dependencies of the project.

In order to ease building oasis, we have minimized the number of dependencies. You only need to install ocamlmod, ocamlify and ocaml-data-notation for a standard build without tests. The dependencies on pcre, extlib and ocamlgraph have been dropped. The remaining dependencies are hidden behind the flag tests.

3 years ago I bought an OCZ Core v2 SSD... I was very disappointed by the write freezes, the price and the capacity. Recently a friend of mine told me that I should try the new ones, which have improved a lot in the meantime. So I decided to give SSDs another try on my laptop. I bought a Crucial M4 128GB SSD, because they seem to be of better quality than OCZ (and probably because of my first failed attempt).

The price is correct (~130€) and after some tests it seems quite good at both write and read speed. So from this point of view, I am more convinced than after my first attempt.

Now, the real problem with an SSD is how to migrate your old data to the new drive. I had always upgraded my HD to a larger one, so a simple dd if=/dev/sda of=/dev/sdb bs=32M was enough... With an SSD you have to migrate to a smaller capacity. This means juggling a little bit with your partitions to get something that fits.

In order to improve the lifetime of my SSD a little bit, I tried to follow various pieces of advice you can find on the Internet. One piece of advice that seems to make sense is to align your partitions to a multiple of 2048 sectors (rather than the default, which is to start at sector 63). I followed this forum to do that.

I ended up with all partitions aligned to 2048.
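You can check the alignment with a tiny shell helper; this is only a sketch (the is_aligned name is mine), with the real start sectors coming from e.g. /sys/block/sdb/sdb1/start:

```shell
# A start sector is aligned if it is a multiple of 2048 (1 MiB with 512B sectors).
is_aligned() {
  [ $(($1 % 2048)) -eq 0 ] && echo aligned || echo misaligned
}

is_aligned 2048   # typical first partition after realignment
is_aligned 63     # the old default start sector
# on a real disk: is_aligned "$(cat /sys/block/sdb/sdb1/start)"
```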

The GOOD news: doing the data migration for Linux is a breeze. As a matter of fact, Linux copes pretty well with being moved around, ending up on a different partition and so on... It just took me the time to transfer the data, update grub and the disk UUIDs, and reboot.

The BAD news: doing the data migration for Windows is hell. I always keep a Windows partition to test various OCaml software, but Windows doesn't like to be moved around.

It especially doesn't like the first sector of its partition being moved (i.e. from sector 63 to sector 2048). I spent a week trying to 'repair windows' but it was a pure waste of time. Thinkpads come with a special way to store passwords which is not compatible with the rescue mode of Windows, so you cannot access the repair mode -- because you need the Administrator password -- which is not readable...

I decided to go back to the standard first sector at 63... Now Windows complained about a missing 'hal.dll'. You know why! Because the partition number had changed (from /dev/sda1 to /dev/sda3). Boot into Linux, run fdisk, enter expert mode, fix the order -- after having remounted the root partition read-only and unmounted everything else. Grub complained a little bit and fell into rescue mode. I fixed that following these instructions and running 'grub-install --recheck /dev/sda'.

So after 2 weeks of fighting, I ended up with a working Windows (honestly, this is a pain; don't try to play with Windows partitions, this OS is a lot more sensitive than Linux).