Search Results: "Tollef Fog Heen"

16 April 2016

I moved my blog around a bit and it appears that static pages are now
in favour, so I switched to that, by way of
Hugo. CSS and such needs more tweaking, but
it'll make do for now.
As part of this, RSS feeds and such changed; if you want to subscribe
to this (very seldom updated) blog, use
https://err.no/personal/blog/index.xml

reproducible.debian.net
Vagrant Cascadian has set up a new armhf node using a Raspberry Pi 2. It should soon be added to the Jenkins infrastructure.
diffoscope development
diffoscope version 42 was released on November 20th. It adds a missing dependency on python3-pkg-resources and, to prevent similar regressions, another autopkgtest to ensure that the command line is functional when Recommends are not installed. Two more encoding-related problems have been fixed (#804061, #805418). A missing Build-Depends on binutils-multiarch has also been added to make the test suite pass on architectures other than amd64.
Package reviews
180 reviews have been removed, 268 added and 59 updated this week.
70 new fail to build from source bugs have been reported by Chris West, Chris Lamb and Niko Tyni.
New issue this week:
randomness_in_ocaml_preprocessed_files.
Misc.
Jim MacArthur started to work on a system to rebuild and compare packages built on reproducible.debian.net using .buildinfo and snapshot.debian.org.
On December 1-3rd 2015, a meeting of about 40 participants from 18 different free software projects will be held in Athens, Greece with the intent of improving the collaboration between projects, helping new efforts to be started, and brainstorming on end-user aspects of reproducible builds.

22 April 2015

I've had a half-broken temperature monitoring setup at home for quite
some time. It started out with an Atom-based NAS, a USB-serial adapter
and a passive 1-wire adapter. It sometimes worked, then stopped
working, then started when poked with a stick. Later, the NAS was
moved under the stairs and I put a Beaglebone Black in its old place.
The temperature monitoring thereafter never really worked, but I
didn't have the time to fix it. Over the last few days, I've managed
to get it working again, of course by replacing nearly all the
existing components.
I'm using the DS18B20 sensors. They're about USD 1 apiece on eBay
(when buying small quantities) and seem to work quite well.
My first task was to address the reliability problems: Dropouts and
really poor performance. I thought the passive adapter was
problematic, in particular with the wire lengths I'm using and I
therefore wanted to replace it with something else. The BBB has GPIO
support, and various blog posts suggested using that. However, I'm
running Debian on my BBB which doesn't have support for DTB
overrides, so I needed to patch the kernel DTB. (Apparently, DTB
overrides are landing upstream, but obviously not in time for Jessie.)
I've never even looked at Device Tree before, but the structure was
reasonably simple and with a sample override from
bonebrews it was easy enough to come up with my patch.
This uses pin 11 (yes, 11, not 13; read the bonebrews article for an
explanation of the numbering) on the P8 block. This needs to be
compiled into a .dtb. I found the easiest way was just to drop the
patched .dts into an unpacked kernel tree and then run make
dtbs.
Once this works, you need to compile the w1-gpio kernel module,
since Debian hasn't yet enabled that. Run make menuconfig, find it
under "Device drivers", "1-wire", "1-wire bus master", build it as a
module. I then had to build a full kernel to get the symversions
right, then build the modules. I think there is or should be an
easier way to do that, but as I cross-built it on a fast AMD64
machine, I didn't investigate too much.
Insmod-ing w1-gpio then works, but for me, it failed to detect any
sensors. Reading the data sheet, it looked like a pull-up resistor on
the data line was needed. I had enabled the internal pull-up, but
apparently that wasn't enough, so I added a 4.7kOhm resistor between
pin 3 (VDD_3V3) on P9 and pin 11 (GPIO_45) on P8. With that in
place, my sensors showed up in /sys/bus/w1/devices and you can read
the values using cat.
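Reading those sysfs files can be scripted instead of using cat. Here is a minimal Python sketch; the `28-*` device-id pattern is the usual DS18B20 family prefix, and the parsing assumes the standard w1-therm output format (these details are my assumptions, not taken from the original post):

```python
# Read DS18B20 temperatures exposed by the w1-therm driver under
# /sys/bus/w1/devices. The "28-*" glob matches the DS18B20 family code.
import glob

def parse_w1_slave(text):
    """Return the temperature in Celsius, or None if the CRC check failed."""
    lines = text.strip().splitlines()
    if len(lines) < 2 or not lines[0].endswith("YES"):
        return None  # the driver flags a bad read with "NO" on the first line
    _, _, raw = lines[1].partition("t=")
    return int(raw) / 1000.0  # the driver reports millidegrees

def read_all_sensors():
    """Read every 1-wire thermometer currently visible in sysfs."""
    readings = {}
    for path in glob.glob("/sys/bus/w1/devices/28-*/w1_slave"):
        sensor_id = path.split("/")[-2]
        with open(path) as f:
            readings[sensor_id] = parse_w1_slave(f.read())
    return readings
```

The CRC check matters with long wire runs: a flaky bus shows up as "NO" reads rather than wrong values, so skipping those keeps the graphs clean.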
In my case, I wanted the data to go into collectd and then to
graphite. I first tried using an Exec plugin, but never got it to
work properly. Using a Python plugin worked much better and my
graphite installation is now showing me temperatures.
Now I just need to add more probes around the house.

16 November 2014

Apparently, people care when you, as privileged person (white, male,
long-time Debian Developer) throw in the towel because the amount of
crap thrown your way just becomes too much. I guess that's good, both
because it gives me a soap box for a short while, but also because if
enough people talk about how poisonous the well that is Debian has
become, we can fix it.
This morning, I resigned as a member of the systemd maintainer team.
I then proceeded to leave the relevant IRC channels and announced this
on twitter. The responses I've gotten have almost all been
heartwarming. People have generally been offering hugs, saying thanks
for the work put into systemd in Debian and so on. I've greatly
appreciated those (and I've been getting those before I resigned too,
so this isn't just a response to that). I feel bad about leaving the
rest of the team, they're a great bunch: competent, caring, funny,
wonderful people. On the other hand, at some point I had to draw a
line and say "no further".
Debian and its various maintainer teams are a bunch of tribes (with
possibly Debian itself being a supertribe). Unlike many other
situations, you can be part of multiple tribes. I'm still a member of
the DSA tribe for instance. Leaving pkg-systemd means leaving one of
my tribes. That hurts. It hurts even more because it feels like a
forced exit rather than because I've lost interest or been distracted
by other shiny things for long enough that you don't really feel like
part of a tribe. That happened with me with debian-installer. It was
my baby for a while (with a then quite small team), then a bunch of
real life things interfered and other people picked it up and ran with
it and made it greater and more fantastic than before. I kinda lost
touch, and while it's still dear to me, I no longer identify as part
of the debian-boot tribe.
Now, how did I, standing stout and tall, get forced out of my tribe?
I've been a DD for almost 14 years, I should be able to weather any
storm, shouldn't I? It turns out that no, the mountain does get worn
down by the rain. It's not a single hurtful comment here and there.
There's a constant drum about this all being some sort of conspiracy
and there are sometimes flares where people wish people involved in
systemd would be run over by a bus or just accusations of
incompetence.
Our code of conduct says, "assume good faith". If you ever find
yourself not doing that, step back, breathe. See if there's a
reasonable explanation for why somebody is saying something or
behaving in a way that doesn't make sense to you. It might be as
simple as your native tongue being English and theirs being something
else.
If you do genuinely disagree with somebody (something which is
entirely fine), try not to escalate, even if the stakes are high.
Examples from the last year include talking about this as a war and
talking about "increasingly bitter rear-guard battles". By using and
accepting this terminology, we, as a project, poison ourselves. Sam
Hartman puts this better than me:

I'm hoping that we can all take a few minutes to gain empathy for
those who disagree with us. Then I'm hoping we can use that
understanding to reassure them that they are valued and respected
and their concerns considered even when we end up strongly
disagreeing with them or valuing different things.

I'd be lying if I said I didn't ever feel the urge to demonise my
opponents in discussions. That they're worse, as people, than I
am. However, it is imperative to never give in to this, since doing
that will diminish us as humans and make the entire project poorer.
Civil disagreements with reasonable discussions lead to better
technical outcomes, happier humans and a healthier project.

3 May 2014

The GNOME and many other infrastructures have recently been attacked by a huge amount of subscription-based spam against their Mailman instances. What the attackers were doing was simply launching GET calls against a specific REST API URL, passing all the parameters needed for a subscription request (and confirmation) to be sent out. Understanding it becomes very easy when you look at the following example taken from our apache.log:

As you can see, the attackers were sending all the relevant details needed for the subscription to go forward (specifically the full name, the email, the digest option and the password for the target list). At first we tried to stop the spam by banning the subnets the requests were coming from; then, when it was obvious that more subnets were being used and manual intervention was needed, we tried banning their User-Agents. Again no luck: the spammers were smart enough to change them every now and then, making them match an existing browser User-Agent (with a good chance of hitting a lot of false positives).
Now you might be wondering why such an attack caused a lot of issues and pain. Well, the attackers made use of addresses found around the web for their malicious subscription requests. That means we received a lot of emails from people who had never heard about the GNOME mailing lists but had received around 10k subscription requests seemingly sent by themselves.
It was obvious we needed to look at a backup solution, and luckily someone on our support channel pointed out that the freedesktop.org sysadmins had recently added CAPTCHA support to Mailman. I'm now sharing the patch and providing a few more details on how to properly set it up on either DEB- or RPM-based distributions. Credit for the patch goes to Debian Developer Tollef Fog Heen, who has been so kind as to share it with us.
Before patching your installation, make sure to install the python-recaptcha package on DEB-based distributions (tested on Debian with Mailman 2.1.15) and python-recaptcha-client on RPM-based distributions. (I personally tested it against Mailman release 2.1.15, RHEL 6.)
The Patch

EPEL 6 related details
A few additional details should be provided in case you are setting this up against a RHEL 6 host (or any other machine using the EPEL 6 package python-recaptcha-client-1.0.5-3.1.el6):
Importing the recaptcha.client module will fail for some strange reason; importing it correctly can be done this way:

and then fix the imports, also making sure sys.path.append('/usr/share/pyshared') is not there:

from recaptcha import captcha

That's not all: the package still won't work as expected, given that the API_SSL_SERVER, API_SERVER and VERIFY_SERVER variables in captcha.py are outdated (filed as bug #1093855). Substitute them with the following ones:
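The replacement values themselves are missing from this copy of the post. The substitution would look roughly like this; the Google-hosted endpoints below are what the pre-2014 reCAPTCHA API used, given as my best reconstruction rather than a quote from the original:

```python
# captcha.py: replace the outdated endpoints with the Google-hosted ones.
# These values reflect the old (pre-2014) reCAPTCHA API and may themselves
# be outdated by now; double-check against current reCAPTCHA documentation.
API_SSL_SERVER = "https://www.google.com/recaptcha/api"
API_SERVER = "http://www.google.com/recaptcha/api"
VERIFY_SERVER = "www.google.com"
```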

10 February 2014

Dustin Kirkland wrote an interesting post about fingerprint authentication [1]. He suggests using fingerprints for identifying users (NOT for authentication) and gives an example of a married couple sharing a tablet and using fingerprints to determine whose apps are loaded.
In response Tollef Fog Heen suggests using fingerprints for lightweight authentication, such as resuming a session after a toilet break [2].
I think that one of the best comments on the issue of authentication for different tasks is in XKCD comic 1200 [3]. It seems obvious that the division between administrator (who installs new device drivers etc) and user (who does everything from playing games to online banking with the same privileges) isn't working, and never could work well, particularly when the user in question installs their own software.
I think that one thing which is worth considering is the uses of a signature. A signature can be easily forged in many ways and they often aren't checked well. It seems that there are two broad cases of using a signature: one is to enter into a legally binding serious contract such as a mortgage (where wanting to sign is the relevant issue), and the other is cases where the issue doesn't matter so much (e.g. signing off on a credit card purchase, where the parties at risk can afford to lose money on occasion for efficient transactions). Signing is relatively easy, but that's because it either doesn't matter much or because it's just a legal issue which isn't connected to authentication. The possibility of serious damage (sending life savings or incriminating pictures to criminals in another jurisdiction) being done instantly never applied to signatures. It seems to me that in many ways signatures are comparable to fingerprints, and both of them aren't particularly good for authentication to a computer.
In regard to Tollef's ideas about lightweight authentication, I think that the first thing that would be required is direct user control over the authentication required to unlock a system. I have read about some Microsoft research into a computer monitoring the office environment to better facilitate the user's requests; an obvious extension to such research would be to have greater unlock requirements if there are more unknown people in the area or if the device is in a known unsafe location. But apart from that sort of future development, it seems that having the user request a greater or lesser authentication check, either at the time they lock their session or by policy, would make sense. Generally users have a reasonable idea about the risk of another user trying to log in with their terminal, so users should be able to decide that a toilet break when at home only requires a fingerprint (enough to keep out other family members) while a toilet break at the office requires greater authentication. Mobile devices could use GPS location to determine unlock requirements; GPS can be forged, but if your attacker is willing and able to do that then you have a greater risk than most users.
Some users turn off authentication on their phone because it's too inconvenient. If they had the option of using a fingerprint most of the time and a password for the times when a fingerprint can't be read, then it would give an overall increase in security.
Finally it should be possible to unlock only certain applications. Recent versions of Android support widgets on the lock screen so you can perform basic tasks such as checking the weather forecast without unlocking your phone. But it should be possible to have different authentication requirements for various applications. Using a fingerprint scan to allow playing games or reading email in the mailing list folder would be more than adequate security. But reading the important email and using SMS probably needs greater authentication. This takes us back to the XKCD cartoon.

29 November 2013

I'm running a local unbound instance on my laptop to get working
DNSSEC. It turns out that with the captive portal NSB (the Norwegian
national rail company) uses, this doesn't work too well and you get into an
endless series of redirects. Changing resolv.conf so you use the
DHCP-provided resolver stops the redirect loop and you can then log
in. Afterwards, you're free to switch back to using your own local
resolver.

3 October 2013

Dustin Kirkland recently wrote that "Fingerprints are
usernames, not passwords". I don't really agree, I think fingerprints
are fine for lightweight authentication. iOS at least allows you to
only require a pass code after a time period has expired, so you don't
have to authenticate to the phone all the time. Replacing no
authentication with weak authentication (but only for a fairly short
period) will improve security over the current status, even if it's
not perfect.
Having something similar for Linux would also be reasonable, I think.
Allow authentication with a fingerprint if I've only been gone for
lunch (or maybe just for a trip to the loo), but require password or
token if I've been gone for longer. There's a balance to be struck
between convenience and security.

27 June 2013

NSCA is a tool used to submit passive check results to Nagios.
Unfortunately, an incompatibility was recently introduced between
wheezy clients and old servers. Since I don't want to upgrade my
server, this caused some problems and I decided to just get rid of
NSCA completely.
The server side of NSCA is pretty trivial, it basically just adds a
timestamp and a command name to the data sent by the client, then
changes tabs into semicolons and stuffs all of that down Nagios'
command pipe.
The script I came up with was:
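The script itself is missing from this copy of the post. Based on the description above (add a timestamp and a command name, turn tabs into semicolons, write to the command pipe), a minimal reconstruction might look like the following Python sketch; the pipe path and the field order are assumptions, and unlike the original this keeps the client-supplied hostname rather than overriding it:

```python
#!/usr/bin/env python
# Minimal server-side replacement for NSCA: read send_nsca-style lines
# ("host<TAB>service<TAB>status<TAB>message") on stdin, prepend a timestamp
# and the Nagios external-command name, turn tabs into semicolons, and
# append the result to Nagios' command pipe.
import sys
import time

COMMAND_PIPE = "/var/lib/nagios3/rw/nagios.cmd"  # adjust for your install

def to_command(line, now=None):
    """Convert one tab-separated check result into a Nagios external command."""
    now = int(now if now is not None else time.time())
    fields = line.rstrip("\n").split("\t")
    return "[%d] PROCESS_SERVICE_CHECK_RESULT;%s" % (now, ";".join(fields))

def main():
    with open(COMMAND_PIPE, "w") as pipe:
        for line in sys.stdin:
            if line.strip():
                pipe.write(to_command(line) + "\n")

if __name__ == "__main__":
    main()
```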

The reason for the hostname in the line (even though it's overridden)
is to be compatible with send_nsca's input format.
Machines submit check results over SSH using its excellent
ForceCommand capabilities, the Chef template for the authorized_keys
file looks like:
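The template is likewise missing here. A rendered authorized_keys entry restricting the client to the receiving script would look roughly like this (the wrapper path and key material are placeholders, not taken from the post):

```
command="/usr/local/bin/receive-check-results",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAAB3... checks@client1
```

The command= option is what implements the ForceCommand-style restriction: whatever the client asks to run, sshd executes only the named script.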

18 June 2013

Recently, there's been discussions on IRC and the debian-devel mailing
list about how to notify users, typically from a cron script or a
system daemon needing to tell the user their hard drive is about to
expire. The current way is generally "send email to root" and for
some bits "pop up a notification bubble, hoping the user will see
it". Emailing me means I get far too many notifications. They're
often not actionable (apt-get update failed two days ago) and
they're not aggregated.
I think we need a system that at its core has level and edge triggers
and some way of doing flap detection. A level trigger means "tell me
if a disk is full right now". An edge trigger means "tell me if the checksums
have changed, even if they now look ok". Flap detection means "tell
me if the nightly apt-get update fails more often than once a week".
It would be useful if it could extrapolate some notifications too, so
it could tell me "your disk is going to be full in $period unless you
add more space".
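These three trigger types are easy to express in code. A toy Python sketch, with class names and thresholds invented purely for illustration:

```python
# Toy models of the three trigger types: level, edge, and flap detection.
from collections import deque

def level_trigger(value, threshold):
    """Level: alert whenever the value is over the threshold right now."""
    return value > threshold

class EdgeTrigger:
    """Edge: alert only when the observed value changes between checks."""
    def __init__(self):
        self.last = None
    def fire(self, value):
        changed = self.last is not None and value != self.last
        self.last = value
        return changed

class FlapDetector:
    """Flap: alert when more than max_failures of the last `window` runs failed."""
    def __init__(self, window=7, max_failures=1):
        self.history = deque(maxlen=window)
        self.max_failures = max_failures
    def fire(self, ok):
        self.history.append(ok)
        return list(self.history).count(False) > self.max_failures
```

With a window of 7 and max_failures of 1, the flap detector stays quiet for the odd nightly apt-get failure but fires when it fails twice in a week, matching the example above.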
The system needs to be able to take in input in a variety of formats:
syslog, unstructured output from cron scripts (including their exit
codes), snmp, nagios notifications, sockets and fifos and so on.
Based on those inputs and any correlations it can pull out of it, it
should try to reason about what's happening on the system. If the
conclusion there is "something is broken", it should see if it's
something that it can reasonably fix by itself. If so, fix it and
record it (so it can be used for notification if appropriate: I want
to be told if you restart apache every two minutes). If it can't fix
it, notify the admin.
It should also group similar messages so a single important message
doesn't drown in a million unimportant ones. Ideally, this should be
cross-host aggregation. It should be possible to escalate
notifications if they're not handled within some time period.
I'm not aware of such a tool. Maybe one could be rigged together by
careful application of logstash, nagios, munin/ganglia/something and
sentry. If anybody knows of such a tool, let me know, or if you're
working on one, also please let me know.

8 April 2013

Hello everyone! I am very excited to report about the awesome progress we made with Tanglu, the new Debian-based Linux-Distribution.
First of all, some off-topic info: I don't feel comfortable with posting too much Tanglu stuff to Planet-KDE, as this is often not KDE-related. So, in future, Planet-KDE won't get Tanglu information unless it is KDE-related. You might want to take a look at Planet-Tanglu for (much) more information.
So, what happened during the last weeks? Because I haven't had lectures, I worked nearly full-time on Tanglu, setting up most of the infrastructure we need. (This will change next week, when I have lectures again; I also have work to do on other projects, not just Tanglu. ^^) Also, we already have an awesome community of translators, designers and developers. Thanks to them, the Tanglu website is now translated into 6 languages; more are in the pipeline and will be merged later. Also, a new website based on the Django framework is in progress.
The Logo-Contest
We've run a logo contest to find a new and official Tanglu logo, as the previous logo draft was too close to the Debian logo (I asked the trademark people at Debian). More than 30 valid votes (you had to be subscribed to a Tanglu mailing list) were received for 7 logo proposals, and we now have a final logo:
I like it very much.
Fun with dak
I decided to use dak, the Debian Archive Kit, to handle the Tanglu archive. Choosing dak over smaller and easier-to-use solutions had multiple reasons; the main one is that dak is way more flexible than the smaller solutions (like reprepro or mini-dak) and able to handle the large archive of Tanglu. Also, dak is lightning fast. And I would have been involved with dak sooner or later anyway, because I will implement the DEP-11 extension to the Debian Archive later (making the archive application-friendly).
Working with dak is not exactly fun. The documentation is not that awesome, and dak contains a lot of stuff hardcoded for Debian, e.g. it often expects the unstable suite to be present. Also, running dak on Debian Wheezy turned out to be a problem, as the Python module apt_pkg changed its API and dak naturally had problems with that. But with the great help of some Debian ftpmasters (many thanks for that!), dak is now working for Tanglu, managing the whole archive. There are still some quirks which need to be fixed, but the archive is in a usable state, accepting and integrating packages.
The work on dak is also great for Debian: I resolved many issues with non-Debian dak installations, and made many parts of dak Wheezy-proof. Also, I added a few functions which might also be useful for Debian itself. All patches have of course been submitted to upstream-dak.
Wanna-build and buildds
This is also nearly finished. Wanna-build, the software which manages all buildds for an archive, is a bit complicated to use. I still have some issues with it, but it does its job so far. (I need to talk to the Debian wanna-build admins for help; e.g. wanna-build seems to be unable to handle arch:all-only packages, and build logs are only submitted in parts.)
The status of Tanglu builds can be viewed at the provisional Buildd-Status pages.
Setting up a working buildd is also tricky: it involves patching sbuild to escape bug #696841 and applying various workarounds to make the buildd work and upload packages correctly. I will write instructions on how to set up and maintain a buildd soon. At the moment we have only one i386 buildd up and running, but more servers (three, to be precise) are prepared and need to be turned into buildds.
After working on Wanna-build and dak, I fully understand why Canonical developed Launchpad and its Soyuz module for Ubuntu. But I think we might be able to achieve something similar using just the tools Debian already uses (maybe a little less comfortable than LP, but setting up our own LP instance would have been much more trouble).
Debian archive import
The import of packages from the Debian archive has finished. Importing the archive resulted in many issues and some odd findings (I didn't know that there are packages in the archive which haven't received an upload since 2004!), but it has finally finished, and the archive is in a consistent state at the moment. To have a continuous package import from Debian while a distribution is in development, we need some changes to wanna-build, which will hopefully be possible.
Online package search
The online package search is (after resolving many issues; who expected that?) up and running. You can search for any package there. Some issues remain, e.g. the file-contents listing doesn't work and changelog support is broken, but the basic functionality is there.
Tanglu Bugtracker
We now also have a bugtracker, based on the Trac software. The Tanglu bugtracker is automatically synced with the Tanglu archive, meaning that you find all packages in Trac to report bugs against them. dak will automatically update new packages every day. Trac still needs a few comfort adjustments, e.g. submitting replies via email or tracking package versions.
Tanglu base system
The Tanglu metapackages have been published in a first alpha version. We will support GNOME-3 and KDE4, as long as this is possible (= enough people working on the packaging). The Tanglu packages will also depend on systemd, which we will need in GNOME anyway, and which also allows some great new features in KDE.
A side effect of using systemd is, at least for the start, that Tanglu boots a bit slowly, because we haven't done any systemd adjustments and because the systemd version is very old. We will have to wait for the systemd and udev maintainers to merge the packages and release a new version before this improves. (I don't want to do this downstream in Tanglu, because I don't know the plans for that at Debian; I only know the information Tollef Fog Heen & Co. provided at FOSDEM.)
The community
The community really surprised me! We got an incredible amount of great feedback on Tanglu, and most people liked the idea of Tanglu. I think we are one of the least-flamed new distributions ever started. Also, without the very active community, kickstarting Tanglu would not have been possible. My guess was that we might have something running next year; now, with the community's help, I see a chance for a release in October.
The only thing people complained about was the name of the distribution. And to be really honest, I am not too happy with the name either. But finding a name was an incredibly difficult process (finding something all parties liked), and Tanglu was a good compromise. Tanglu has absolutely no meaning; it was taken because it sounded interesting. The name was created by combining the Brazilian Tangerine (Clementine) and the German Iglu (igloo). I also don't think the name matters that much; I am more interested in the system itself than in its name. Also, companies produce a lot of incredibly weird names; Tanglu is relatively harmless compared to that.
In general, thanks to everyone participating in Tanglu! You are driving the project forward!
The first (planned) release
I hereby announce the name of the first Tanglu release, 1.1 Aequorea Victoria. It is Daniel's fault that Tanglu releases will be named after jellyfish; you can ask him why if you want. I picked Aequorea because this kind of jellyfish was particularly important for research in molecular biology: GFP, a green fluorescent protein, caused a small revolution in science and resulted in a Nobel Prize in 2008 for the researchers involved in GFP research. (For the interested: you can tag proteins with GFP and determine their position using light microscopy. GFP also made many other fancy new lab methods possible.)
Because Tanglu itself is more or less experimental at the moment, I found the connection to research just right for the very first release. We don't have a date yet for when this version will be officially released, but I expect it to be in October, if the development speed increases a little and more developers get interested and work on it.
Project Policy
We will need to formalize the Tanglu project policy soon, both the technical and the social policies. In general, regarding free software and technical aspects, we strictly adhere to the Debian Free Software Guidelines, the Debian Social Contract and the Debian Policy. Some extra stuff will be written later; please be patient!
Tanglu OIN membership
I was approached by the Open Invention Network about joining it as a member. In general, I don't have objections to doing that, because it will benefit Tanglu. However, the OIN has a very tolerant public stance on software patents, which I don't like that much; Debian did not join the OIN for this reason. For Tanglu, I think we could still join the OIN without anyone thinking that we support that stance on software patents. Joining would simply be pragmatic: we support the OIN as a way to protect the Linux ecosystem from software patents, even if we don't like its stance on software patents and see the issue differently.
Because this affects the whole Tanglu project, I don't want to decide this alone, but to get some feedback from the Tanglu community before making a decision.
Can I install Tanglu now?
Yes and no. We don't provide installation images yet, so trying Tanglu is a difficult task (you need to install Debian and then upgrade it to Tanglu). If you want to experiment with it, I recommend trying Tanglu in a VM.
I want to help!
Great! Then please catch one of us on IRC or subscribe to the mailing lists. The best thing is not to ask for work, but to suggest something you want to do; others will then tell you if that is possible and maybe help with the task.
Packages can for now only be uploaded by Debian Developers, Ubuntu Developers or Debian Maintainers who have contacted me directly and whose keys have been verified. This will be changed later, but at the current state of the Tanglu archive (= fewer safety checks for packages), I only want people to upload stuff who definitely have the knowledge to create sane packages (you can also prove that otherwise, of course). We will later establish a new-member process.
If you want to provide a Tanglu archive mirror, we would be very happy, so that the main server doesn't have to carry all the load.
If you have experience in creating Linux Live-CDs or have worked with the Ubiquity installer, helping with these parts would be awesome!
Unfortunately, we cannot reuse parts of Linux Mint Debian, because many of their packages don't build from source and are repackaged binaries, which is a no-go for the Tanglu main archive.
Sneak peek
And here is a screenshot of the very first Tanglu installation (currently more Debian than Tanglu):

Something else
I have been involved in Debian for a very long time now, first as Debian Maintainer and then as Debian Developer, and I never thought much about the work the Debian system administrators do. I didn't know how dak worked, or how Wanna-build handles the buildds, or what exactly the ftpmasters have to do. By not knowing, I mean I knew the very basic theory and what these people do. But that is something different from experiencing how much work setting up and maintaining the infrastructure is, and what an awesome job these people do for Debian, keeping it all up and running and secure! Kudos for that, to all people maintaining Debian infrastructure! You rock! (And I will never ever again complain about slow buildds or packages which stay in NEW for too long.)

22 March 2013

Update: This isn't actually that much better than letting them
access the private key, since nothing is stopping the user from
running their own SSH agent, which can be run under strace. A better
solution is in the works. Thanks Timo Juhani Lindfors and Bob Proulx
for both pointing this out.
At work, we have a shared SSH key between the different people
manning the support queue. So far, this has just been a file in a
directory where everybody could read it and people would sudo to the
support user and then run SSH.
This has bugged me a fair bit, since there was nothing stopping a
person from making a copy of the key onto their laptop, except policy.
Thanks to a tip, I got around to implementing this and figured writing
up how to do it would be useful.
First, you need a directory readable by root only, I use
/var/local/support-ssh here. The other bits you need are a small
sudo snippet and a profile.d script.
My sudo snippet looks like:
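The snippet itself is missing from this copy of the post. A hypothetical reconstruction, assuming a support group and a support user as described above (group name, user name, and paths are all placeholders):

```
# /etc/sudoers.d/support-ssh (reconstruction, not the original snippet):
# let members of the support group start the agent and load the shared
# key as the support user, without a password.
%support ALL = (support) NOPASSWD: /usr/bin/ssh-agent, /usr/bin/ssh-add
```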

The key is unavailable for the user in question because ssh-add is
sgid and so runs with group ssh and the process is only debuggable for
root. The only thing missing is that there's no way to have the agent
prompt before using a key, and I would like it to die, or at least unload
keys, when the last session for a user is closed, but that doesn't seem
trivial to do.

29 January 2013

Over the last couple of weeks, I have been working on getting
binary packages for Varnish modules built. In the current version,
you need a built, unpacked source tree to build a module
against. This is being fixed in the next version, but until then, I
needed to provide such a tree in the build environment somehow.
RPMs were surprisingly easy, since our RPM build setup is much simpler
and doesn't use mock/mach or other chroot-based tools: just make a
source RPM available, then unpack and compile it.
Debian packages, on the other hand, were not easy to get going.
My first problem was just getting the Varnish source package into the
chroot. I ended up making a directory in /var/lib/sbuild/build,
which is exposed as /build once sbuild runs. The other hard part
was getting Varnish itself built. sbuild exposes two hooks that
could work: a pre-build hook and a chroot-setup hook. Neither
worked: pre-build is called before the chroot is set up, so we can't
build Varnish there, while chroot-setup is run before the
build-dependencies are installed and runs as the user invoking
sbuild, so it can't install packages.
Sparc32 and similar architectures use the linux32 tool to set the
personality before building packages. I ended up abusing this
mechanism: I set HOME to a temporary directory containing a .sbuildrc
which sets $build_env_cmnd to a script which in turn unpacks the
Varnish source, builds it and then chains to dpkg-buildpackage. Of
course, the build-dependencies for modules don't include all the
build-dependencies for Varnish itself, so I have to extract those from
the Varnish source package too.
No source available at this point, mostly because it's beyond ugly.
I'll see if I can get it cleaned up.
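Until then, here is a rough sketch of the mechanism described above; the wrapper name and all paths are assumptions, not the actual code:

```shell
# Sketch of the temporary-HOME/.sbuildrc trick (all names assumed).
tmp_home=$(mktemp -d)

# A .sbuildrc pointing $build_env_cmnd at a wrapper script:
cat > "$tmp_home/.sbuildrc" <<'EOF'
$build_env_cmnd = '/build/varnish-then-build';
EOF

# The wrapper: build Varnish from the source package exposed in
# /build, then chain to the normal dpkg-buildpackage run.
cat > "$tmp_home/varnish-then-build" <<'EOF'
#!/bin/sh
set -e
( cd /build \
  && dpkg-source -x varnish_*.dsc varnish-src \
  && cd varnish-src \
  && dpkg-buildpackage -b -uc -us )
exec dpkg-buildpackage "$@"
EOF
chmod +x "$tmp_home/varnish-then-build"

# sbuild would then be invoked with HOME pointing at this directory:
# HOME="$tmp_home" sbuild ...
```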

28 January 2013

Michael Biebl and I are giving a talk on systemd in Debian at
FOSDEM on Sunday morning at 10. We'll be talking a bit about the
current state in Wheezy, what our plans for Jessie are and what Debian
packagers should be aware of. We would love to get input from people
about what systemd in Jessie should look like, so if you have any
ideas, opinions or insights, please come along. If you're just
curious, you are also of course welcome to join.

17 January 2013

gitano is not entirely unlike the
non-web, server side of github. It allows you to create and manage
users and their SSH keys, groups and repositories from the command
line. Repositories have ACLs associated with them. Those can be
complex ("allow user X to push to master in the doc/ subtree") or
trivial ("admin can do anything"). Gitano is written by Daniel
Silverstone, and I'd like to thank him both for writing it and for
holding my hand as I went stumbling through my initial gitano setup.
Getting started with Gitano can be a bit tricky, as it's not yet
packaged and fairly undocumented. Until it is packaged, it's install
from source time. You need luxio, lace,
supple, clod, gall and gitano
itself.
luxio needs a make install LOCAL=1, the others will be installed
to /usr/local with just make install.
Once that is installed, create a user to hold the instance. I've
named mine git, but you're free to name it whatever you would like.
As that user, run gitano-setup and answer the prompts. I'll use
git.example.com as the host name and john as the user I'm setting this
up for.
To create a user, run ssh git@git.example.com user add john
john@example.com "John Doe", then add their SSH key with ssh
git@git.example.com as john sshkey add workstation < /tmp/john_id_rsa.pub.
To create a repository, run ssh git@git.example.com repo create
myrepo. Out of the box, this only allows the owner (typically
"admin", unless overridden) to do anything with it. To change ACLs,
you'll want to grab the refs/gitano/admin branch. This lives
outside of the space git usually uses for branches, so you can't just
check it out. The easiest way to check it out is to use
git-admin-clone. Run it as git-admin-clone
git@git.example.com:myrepo ~/myrepo-admin and then edit in
~/myrepo-admin. Use git to add, commit and push as normal from
there.
To change ACLs for a given repo, you'll want to edit the
rules/main.lace file. A real-world example can be found in the
NetSurf repository, and the description of the lace syntax
might be useful. A lace file consists of four types of lines:

Comments, which start with -- or #

defines, which look like define name conditions

allows, which look like allow "reason" definition [definition ...]

denials, which look like deny "reason" definition [definition ...]

Rules are processed one by one from the top, and processing
terminates whenever a matching allow or deny is found.
Conditions can be matches against an update, such as ref
refs/heads/master to match updates to the master branch. To create
groupings, you can use the anyof or allof verbs in a definition.
Allows and denials are checked against all the definitions listed,
and if all of them match, the appropriate action is taken.
Pay some attention to which conditions you group together: a basic
operation (is_basic_op, aka op_read and op_write) happens before git
is even involved, so you don't have a tree at that point; rules that
rely on tree contents will therefore never match during that check.
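The example rules from the original post were lost in the migration; a hypothetical rule of the problematic kind (the condition names here are invented for illustration, not taken from the gitano docs) might look like:

```
-- Hypothetical sketch: is_basic_op is checked before git is invoked,
-- so a definition that also needs tree contents can never match then.
define doc_writers allof is_basic_op tree doc/
allow "Docs team may touch doc/" doc_writers
```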

4 September 2012

We recently switched from Buildbot to Jenkins at work, for
building Varnish on various platforms. Buildbot worked-ish, but
was a bit fiddly to get going on some platforms such as Mac OS and
Solaris. Where buildbot has a daemon on each node that is responsible
for contacting the central host, Jenkins uses SSH as the transport and
centrally manages retries if a host goes down or is rebooted.
All in all, we are pretty happy with Jenkins, except for one thing:
The job configurations are a bunch of XML files, and the way you are
supposed to configure them is through a web interface. That doesn't
scale particularly well when you want to build many very similar
jobs. We want to build multiple branches, some of which are not
public, and we want to build on many slaves. The latter we could
partially solve with matrix builds, except that a matrix build fails
as a whole if a single slave fails with an error that would succeed
on retry. As the number of slaves increases, such failures become
more common.
To solve this, I hacked together a crude tool that takes a YAML
file and writes the XML files. It's nowhere near as well
structured and pretty as liw's jenkinstool, but it is quite good
at translating the YAML into a bunch of XML files. I don't know if
it's useful for anybody else, and there is no documentation and so on,
but if you want to take a look, it's on github.
Feedback is most welcome, as usual. Patches even more so.

23 July 2012

At work, we have a rotation of who is on call at a given time. We
get few calls, but they do happen, so it's important to ensure both
that a person is available and that they're aware they are on call
(so they don't stray too far from their phone or a computer).
In the grand tradition of abusing spreadsheets, we are using google
docs for the roster. It's basically just two columns, one with date
and one with user name. Since the volume is so low, people tend to be
on call for about a week at a time, 24 hours a day.
Up until now, we've just had a pretty old and dumb phone that people
have carried around, but that's not really swish, so I have
implemented a small system which grabs the current data, looks up the
support person in LDAP and sends SMSes when people go on and off duty
as well as reminding the person who's on duty once a day.
If you're interested, you can look at the (slightly redacted)
script.

21 October 2011

Before I start, I'll admit that I'm not a real RPM packager. Maybe
I'm approaching this from completely the wrong direction; what do I
know?
I'm in the process of packaging Varnish 3.0.2 which includes mangling
the spec file. The top of the spec file reads:

%define v_rc
%define vd_rc %{?v_rc:-%{?v_rc}}

Apparently, this is not legal, since we're trying to define v_rc as a
macro with no body. It is, however, not possible to directly define it
as an empty string which can later be tested for; you have to do
something like:

%define v_rc %{nil}
%define vd_rc %{?v_rc:-%{?v_rc}}

Now, this doesn't work correctly either: %{?macro} tests whether
macro is defined, not whether it's an empty string, so instead of
two lines we have to write:
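The four lines themselves were lost in the blog migration; based on the surrounding description, they would have looked roughly like this (a reconstruction, not the original spec):

```
%define v_rc %{nil}
%if 0%{?v_rc} != 0
%define vd_rc -%{?v_rc}
%endif
```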

The 0%{?v_rc} != 0 workaround is there so that an empty macro doesn't
accidentally leave us with just != 0, which would be a syntax error.
I think having four lines like that is pretty ugly, so I looked for a
workaround and figured that, OK, I'll just rewrite every use of
%{vd_rc} to %{?v_rc:-%{?v_rc}}. There are only a couple, so the
damage is limited. Also, I'd then just comment out the v_rc
definition, since that makes it clear what you should uncomment to
have a release candidate version.
In my naivety, I tried:

# %define v_rc ""

# is used as a comment character in spec files, but apparently not
for defines: the define was still processed, and the build stopped
pretty quickly.
Luckily, doing # % define v_rc "" (note the space between % and
define) seems to work fine and is not processed. I have no idea how
people put up with this or whether I'm doing something very wrong.
Feel free to point me at a better way of doing this, of course.

5 October 2011

We use SugarCRM at work and I've complained about its not-very-RESTy
REST interface. John Mertic, a (the?) SugarCRM Community Manager,
asked me what problems I'd had (apart from its lack of RESTfulness)
and I said I'd write a blog post about it.
In our case, the REST interface is used to integrate Sugar and RT so
we get a link in both interfaces to jump from opportunities to the
corresponding RT ticket (and back again). This should be a fairly
trivial exercise or so you would think.
The problems, as I see them, are:

Not REST-y.

Exposes the database tables all the way through the REST interface

Lack of useful documentation forcing the developer to cargo cult and
guess

Annoying data structures

Forced pagination

My first gripe is the complete lack of REST in the URLs. Everything
is just sent to https://sugar/service/v2/rest.php, usually as a POST,
but sometimes a GET. It's not documented which to use where.
The POST parameters we send when logging in are:
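The parameter list itself was lost when the blog moved; from what I remember of the v2 API, the login request is form-encoded with a JSON blob in rest_data, roughly like this (a sketch, with invented values):

```
method=login
input_type=JSON
response_type=JSON
rest_data={"user_auth": {"user_name": "john",
                         "password": "5f4dcc3b5aa765d61d8327deb882cf99",
                         "version": "1"},
           "application": "rt-integration"}
```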

Nothing seems to actually care about the value of application, nor
about the user_auth.version value. The password is the MD5 of the
actual password, hex encoded; I'm not sure why, as this adds
absolutely no security, but it is. This is also not properly
documented.
This gives us a JSON object back with a somewhat haphazard selection
of attributes (reformatted here for readability):
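The response shown in the original post was lost; it is shaped roughly like this (a sketch, with invented values):

```
{
  "id": "s3ss10n1d",
  "module_name": "Users",
  "name_value_list": {
    "user_id": {"name": "user_id", "value": "seed_john_id"},
    "user_name": {"name": "user_name", "value": "john"}
  }
}
```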

What is the module_name? No real idea. In general, when you get
back an id and a module_name field, it tells you that the id refers
to an object that exists in the context of the given module. Not
here, though, since the session id is not a user.
The worst part is the name_value_list concept, which is used all over
the REST interface. First, it's not a list, it's a hash. Secondly, I
have no idea what would be wrong with just using the keys directly in
the top-level object, so the object would have looked somewhat like:
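The example object was also lost in the migration; presumably something along these lines (values invented):

```
{
  "id": "s3ss10n1d",
  "user_id": "seed_john_id",
  "user_name": "john"
}
```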

Some people might argue that since you can have custom field names,
this could cause clashes. Except it can't, since custom fields are
all suffixed with _c.
So we're now logged in and can fetch all opportunities. This we do by
posting:
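The request body from the original post was lost; reconstructed from memory of the v2 API, it looks roughly like this (a sketch; the $-variables are explained below):

```
method=get_entry_list
input_type=JSON
response_type=JSON
rest_data={"session": "s3ss10n1d",
           "module_name": "Opportunities",
           "query": $where,
           "order_by": "",
           "offset": $next,
           "select_fields": $fields,
           "link_name_to_fields_array": $links,
           "max_results": 1000,
           "deleted": 0}
```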

$where is opportunities_cstm.rt_id_c IS NOT NULL. Yes, that's
right: an SQL fragment, right there, and you have to know that the
opportunities_cstm and opportunities tables will be joined because we
are using a custom field. I find this completely crazy.

$next starts out at 0 and we're limited to 1000 entries at a time.
There is, apparently, no way to say "just give me all you have".

$fields is an array, in our case consisting of id, name,
description, rt_id_c and rt_status_c. To find out the field
names, look at the database schema or poke around in the SugarCRM
studio.

$links is to link records together. I still haven't been able to
make this work properly and just do multiple queries.

1000 is the maximum number of records. No, you can't say -1 and get
everything.

Why is this a list rather than a hash? Again, I don't know; a hash
would make more sense to me.
The resulting JSON looks like:

Why this works when my attempts at using a proper name_value_list
didn't, I have no idea.
I think that pretty much sums it up. I'm sure there are other
problems in there (such as the more than 100 lines of support code
needed for the roughly 20 lines of code that do useful work), though.