On October first, another full year of work at Haxx had passed since I last summed one up (see my previous posts about Haxx’s first year and second year). Three years working for Haxx full-time, and it has been another great year with lots of fun and challenges, and with us enjoying being independent.

During this year I ended my previous engagement with that large chip company and got a new assignment for the same customer that both Björn and Linus were working for at the time. It has been a big adventure for me as I dove straight into unknown territory, and I’ve spent my work days since then as a product manager, making an embedded Linux distribution. In this role I’ve travelled to the US, China and South Korea during the year, and I’m serving as a member of an advisory board in a related organization on behalf of my customer! I recently agreed to extend this contract to at least April 2013. Partly due to this new assignment I haven’t worked very much on foss-sthlm activities recently, but after the summer I’ve really made an effort to get this back up to speed.

Later during the year, Linus changed assignments to a new customer when we signed a sort of partnership contract with a leading global embedded software company, and he has since done a whole series of little projects for them. After the summer, Linus has grabbed a couple of curl-related projects, some of which are still in progress.

Björn stuck around at the same customer during the entire year, and he’s been working as an engineer and developer in the team that actually makes the product I am a manager for.

This year we made more Haxx merchandise. Towels, stickers and jackets have now been sent out into the world to make our name more visible in a few weird corners of the universe.

We visited FSCONS 2011 and FOSDEM 2012, two really nice conferences for FOSS fans like us, and we got to meet a lot of friends and like-minded people there.

We continue to see a demand on the market for highly skilled embedded developers, including for embedded Linux and open source related activities. We wouldn’t mind extending our merry team, so we decided to document a list of requirements for what it takes to get hired by us. So far not a single person has applied…

Going strong after 12 years in the making. For the 27th time we’re gathering friends in the Stockholm, Sweden area who are interested in technology, open source, beers, slightly inaccurate Monty Python quotes, reverse engineering electronics and similar very important topics. We might also have a beer or two and talk rubbish.

On October 31st 2012 we invite any and all of our tech-oriented friends to visit

Facebook contributes a fix to libcurl’s multi interface to overcome a problem with more than 1024 file descriptors.

When we introduced the multi interface to libcurl (what feels like) one hundred years ago, we kept things simple in some ways. One way it shows: an application that wants to do many transfers in parallel asks libcurl to do them, and then extracts the set of file descriptors (sockets!) from libcurl (using curl_multi_fdset) to wait for, as plain fd_sets. fd_set is the variable type made for select(), so this API choice pretty much forced applications to use select(). select() has its fair share of problems, where possibly the biggest one is that it has problems with file descriptors > 1024.
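
In code, that older pattern boils down to something like this condensed sketch (error handling omitted; it assumes a CURLM handle that transfers have already been added to):

```c
#include <sys/select.h>
#include <curl/curl.h>

/* drive all previously added transfers using the fd_set-based API */
static void wait_and_perform(CURLM *multi)
{
  int still_running = 0;

  curl_multi_perform(multi, &still_running);

  while(still_running) {
    fd_set fdread, fdwrite, fdexcep;
    int maxfd = -1;
    struct timeval timeout = { 1, 0 }; /* re-init each lap; select() may
                                          modify it */

    FD_ZERO(&fdread);
    FD_ZERO(&fdwrite);
    FD_ZERO(&fdexcep);

    /* extract libcurl's descriptors as plain fd_sets... */
    curl_multi_fdset(multi, &fdread, &fdwrite, &fdexcep, &maxfd);

    /* ...which pretty much forces select() on us. Descriptors >=
       FD_SETSIZE (commonly 1024) cannot be handled this way. A real
       application should also handle maxfd == -1 (nothing to wait
       for yet) by sleeping briefly instead. */
    select(maxfd + 1, &fdread, &fdwrite, &fdexcep, &timeout);

    curl_multi_perform(multi, &still_running);
  }
}
```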

Later on we introduced an enhanced version of the multi interface for libcurl that allows an application to use whatever method it pleases. I tend to refer to that variation as the multi_socket API after its main function curl_multi_socket_action. That’s the high performance, event-driven API.

As you may be aware, event-driven code makes things a bit more complicated at times, so many people still prefer to use the older and simpler multi interface, and thus they remained forced to use select(). But now that era has ended. Now…

This poll(3)-like function basically works as a replacement for curl_multi_fdset() + select(). Starting in libcurl 7.28.0 (strictly speaking in commit de24d7bd4c03ea3), any application can use it for this purpose and thus avoid the problem with many file descriptors.

This new function doesn’t use any struct from the “real” poll() or its associated headers, to make sure that it works even on systems without a real poll() implementation. It instead uses private curl versions of both the struct and the defines. An application can of course also tell curl_multi_wait to additionally wait for a set of its own private file descriptors, just like with poll() or select().
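
A minimal sketch of the same loop rewritten with the new function (same assumptions as the earlier sketch):

```c
#include <curl/curl.h>

/* drive all previously added transfers using curl_multi_wait */
static void wait_and_perform(CURLM *multi)
{
  int still_running = 0;

  curl_multi_perform(multi, &still_running);

  while(still_running) {
    int numfds = 0;

    /* waits internally on libcurl's own sockets - no fd_set and no
       FD_SETSIZE limit - for at most 1000 milliseconds. The NULL/0
       arguments are where an array of struct curl_waitfd holding the
       application's own descriptors would go. */
    curl_multi_wait(multi, NULL, 0, 1000, &numfds);

    curl_multi_perform(multi, &still_running);
  }
}
```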

The patch set that brought this function was provided by Sara Golemon, a friend from a related project…

Thanks to autoconf and automake we have an established, more or less standardized way to build and install tools, libraries and other software: we build them with ‘make’ and we install them with ‘make install’. This works great, and it works equally fine when we build stuff cross-compiled.

For testing, however, the established concept and procedure is not as good. For testing we have ‘make test’ or ‘make check’, which typically first builds whatever needs to be built for the tests to run and then runs them all.

This is not good enough

Why? Because in lots of use cases we build software using a cross-compiler on a build system that can’t run the produced executables. Therefore we need to first build the tests, then install them (somewhere reachable from the target system) and finally execute them. It must be possible to run these steps independently, since at least the building and installing will sometimes happen on a different host than the execution of the tests.
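
As a sketch of how such a split could look in plain make terms (all target and variable names here are hypothetical, not an established convention):

```make
# hypothetical three-step split: build, install, run
test-build:              # runs on the build host, possibly cross-compiling
	$(MAKE) -C tests all

test-install: test-build # put the test programs where the target can reach them
	$(MAKE) -C tests install DESTDIR=$(TESTROOT)

test-run:                # runs later, on the target system itself
	cd $(TESTROOT) && ./run-tests
```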

Introducing ptest

Within the Yocto project, Björn Stenberg has pushed for ptest to be the basis of this new reform and concept. The responses he’s gotten so far have been positive, and there’s a pending updated patch to be posted to the upstream oe-core list soon.

The work does not end there

Even if or when this gets incorporated into OpenEmbedded and Yocto – and I really think it is a matter of when, since I believe we can work out all the flaws and quirks until virtually everyone involved is happy – the bulk of the changes really should be done upstream, in hundreds and thousands of open source packages. We (as upstream open source projects) need to start doing testing in at least two separate steps: one step that builds everything that needs to be built for the tests, and a second step that runs the suite. In a cross-compiled scenario, the two steps could then get executed first on the host system and then on the target system.

I expect that this will mean a whole bunch of patches and scripts that have to be maintained within OpenEmbedded for a while, while we try to get things merged into upstream projects, and I also foresee that a certain percentage of all projects just won’t accept this new approach and will reject all patches in this vein.

Output format

I think the most controversial part of these suggested “universal” changes is the common test suite output format. A common format is of course required so that we can “supervise” the output and results from any package without having to know any of its specifics.

While the ptest output format follows the automake test output syntax, I expect many projects that have already selected a particular output format to rather stick with that. Hopefully we can then make such projects introduce a separate make target or option that runs the test suite with the standard output format.
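
For reference, the automake-style output is simply one result line per test, which is what makes it easy to supervise generically (the test names here are made up):

```
PASS: test-connect
FAIL: test-resolve-ipv6
SKIP: test-requires-root
```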

One little step forward

Building full-fledged cross-compiled Linux distributions that are completely tested on target will remain hard work for a while longer. But we are improving things, one step at a time.

Of course, ‘ptest’ is just what Björn currently calls the system within the yocto/OE environment; it is not supposed to be a catchy name for this idea outside of there. The ‘p’ refers to package, as opposed to for example system tests, and makes it less generic than simply ‘test’.

This is a slightly edited version of a genuine email I received in May 2012:

Dear Mr. Stenberg -

I recently came across the text you co-authored with Michael Schrenk, Webbots, Spiders, and Screen Scrapers, and was wondering if you might be interested in being a paid expert witness in a lawsuit we’re handling.

One of the major claims in the suit is unauthorized computer access in the form of a massive, multi-year campaign of screen scraping, and we’re looking for a qualified expert who can make the activity make sense to a jury (in the unlikely event that this matter reaches trial; fewer than 2% of cases do, in federal court).

We’re in Los Angeles, California, as is the case (and naturally would cover travel expenses, an hourly or per diem expert witness fee, etc). If you’re interested (or even if you’re not), please let me know? You can reach me via email or at (xyz) xyz-xyzx.

I responded to this mail saying that I’d rather not, due to the distance and travel it’d require, but I never heard back from them again, so I have no idea what ever happened in this case or who got to be the expert in the end…

Notice: this is neither advanced nor secret trickery. This is just something I’ve found that even tech-savvy people around me haven’t done, so it’s worth being highlighted.

When I upgraded to fiber from ADSL, I had to give up the fixed IPv4 address that I had been using for around 10 years and switch to a dynamic DNS service.

In this situation I see lots of people everywhere use dyndns.org and similar services, which dynamically assign your new IP to a fixed host name so that you and your computer-illiterate friends can still reach your home site.

I suggest a minor variation of this that still avoids having to run your own dyndns service on your server. It only requires that you already admin and control a DNS domain, but who doesn’t these days?

1. Get that dyndns host name at the free provider, so that the name holds your current IP. Let’s call it home.dyndns.org.

2. In your own DNS zone for your domain (example.com), add an entry for your host ‘home.example.com’ as a CNAME pointing over to home.dyndns.org (see the zone file sketch after this list).

3. Now you can ping or ssh or whatever to ‘home.example.com’ and it will remain your home IP even when it moves.

4. Smile, and keep using that host name in your own domain for your dynamic IP.
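
For step 2, the zone entry could look like this, in BIND-style zone file syntax (the names are just the examples from above):

```
; in the example.com zone file
home    IN    CNAME    home.dyndns.org.
```

Note the trailing dot, which makes the CNAME target an absolute name rather than one relative to your own zone.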

We may not get a very warm and sunny summer this year in Sweden, but now at least we have the Haxx towels to use during the very brief moments on the beach! Each towel measures 140 x 70 centimeters, and if you click on the image you’ll get a higher resolution version to browse.

I got this awesome curl sighting from the US101, near SFO, sent to me a few days ago (click for a slightly larger version) – showing curl on a very large billboard in an ad for Dice.

It is clearly meant to look like a unix-style command line that invokes curl. The command line would only be syntactically correct if the user truly has two hosts named “techJobs” and “SanFran”, in which case curl would attempt to get their root HTTP documents off them on port 80.

It is followed by a C++/C99-style comment that really adds an odd bit of context.

To me, all the sign shows is a company that desperately tries to look techy, but in trying too hard it fails really miserably. I’m really honored my work found its way to such a high-profile spot though!

BACKGROUND

I am the project leader and maintainer of the curl project. We are the open source project that makes libcurl, the transfer library, and curl, the command line tool. It is among many things a client-side implementation of HTTP and HTTPS (and some dozen other application layer protocols). libcurl is very portable and there exist around 40 different bindings to libcurl for virtually all languages/environments imaginable. We estimate we might have upwards of 500 million users or so. We’re entirely volunteer-driven, without any paid developers or particular company backing.

HTTP/1.1 problems I’d like to see addressed

Pipelining – I can see how something that better deals with increasing bandwidth combined with stagnated RTTs can improve the end users’ experience. It is not easy to implement in a nice manner and provide in a library like ours.

Many connections – to avoid the problems with pipelining and queueing on the connections, many parallel connections are used instead, which seems like a general waste that can be improved.

HTTP/2.0

We’ve implemented HTTP/1.1 and we intend to continue to implement any and all widely deployed protocols for data transfers that appear on the Internet. This includes HTTP/2.0 and similar related protocols.

curl has not yet implemented SPDY support, but fully intends to do so. The current plan is to provide SPDY support with the help of spindly, a separate SPDY library project that I lead.

We’ve chosen to support SPDY due to the momentum it has and the multiple existing implementations that A) have multi-company backing and B) prove it to be a truly working concept. SPDY seems to address HTTP’s pipelining and many-connections problems in a decent way that appears to work in reality too. I believe SPDY keeps enough HTTP paradigms that upgrading will be easy for most parties, and yet the ones who can’t or won’t upgrade can remain with HTTP/1.1 without too much pain. Also, while Spindly is not production-ready, it has still given me the sense that implementing a SPDY protocol engine is not rocket science and that the existing protocol specs are good.

By relying on external libs for protocol and implementation details, my hope is that we should be able to add support for other potentially coming HTTP/2.0-ish protocols that get deployed and used in the wild. In the curl project we’re unfortunately rarely able to be very proactive due to the nature of our contributors, which tends to make us follow the rest and implement and go with what others have already decided on.

I’m not aware of any competitor to SPDY that is deployed or used to any particular and notable extent on the public internet, so no other “HTTP/2.0 protocol” has been considered by us. The two biggest protocol details people keep mentioning that speak against SPDY are the SSL and compression requirements, yet I like both of them. I intend to continue to participate in discussions and technical arguments on the ietf-http-wg mailing list on HTTP details for as long as I have time and energy.

HTTP AUTH

curl currently supports Basic, Digest, NTLM and Negotiate for both host and proxy.
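
As a minimal sketch of how an application picks among these in libcurl (the URL and credentials are of course placeholders):

```c
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/protected");
    curl_easy_setopt(curl, CURLOPT_USERPWD, "user:secret");

    /* pick a specific method, or pass CURLAUTH_ANY to let libcurl
       negotiate the best one the server offers */
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH, CURLAUTH_DIGEST);

    /* proxy authentication is selected separately */
    curl_easy_setopt(curl, CURLOPT_PROXYAUTH, CURLAUTH_NTLM);

    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```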

Similar to the HTTP protocol itself, we intend to support any widely adopted authentication protocols. The HOBA, SCRAM and Mutual auth suggestions all seem perfectly doable and fine from my perspective.

However, if there’s no proper logout mechanism provided for HTTP auth, I don’t foresee any particular desire from browser vendors or web site creators to use any of these, just like they don’t use the older ones to any significant extent either. And for automatic (non-browser) use alone, I’m not sure there’s motivation enough to add new auth protocols to HTTP, as at least historically we seem to rarely be able to pull anything through that isn’t pushed for by at least one of the major browsers.

The “updated HTTP auth” work should be kept outside of the HTTP/2.0 work as far as possible, and just like RFC 2617 is separate from RFC 2616, it should remain separate this time around too. The auth mechanism should not be too tightly knit to the HTTP protocol.

The day we can cleanse connection-oriented authentications like NTLM from the HTTP world will be a happy day, as their state awareness is a pain to deal with in a generic HTTP library and its code.