
6 June 2020

As a member of the Norwegian Unix
User Group, I have the pleasure of receiving the
USENIX magazine
;login:
several times a year. I rarely have time to read all the articles,
but try to at least skim through them all as there is a lot of nice
knowledge passed on there. I even carry the latest issue with me most
of the time to try to get through all the articles when I have a few
spare minutes.
The other day I came across a nice article titled
"The
Secure Socket API: TLS as an Operating System Service" with a
marvellous idea I hope can make it all the way into the POSIX standard.
The idea is as simple as it is powerful. By introducing a new
socket() option IPPROTO_TLS to use TLS, and a system wide service to
handle setting up TLS connections, one both makes it trivial to add TLS
support to any program currently using the POSIX socket API, and gains
system wide control over certificates, TLS versions and encryption
systems used. Instead of doing this:

int fd = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);

the program code would be doing this:

int fd = socket(PF_INET, SOCK_STREAM, IPPROTO_TLS);

According to the ;login: article, converting a C program to use TLS
would normally mean modifying only 5-10 lines of code, which is amazing
compared to using, for example, the OpenSSL API.
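To give a feel for the approach, here is a minimal sketch of a
client. IPPROTO_TLS and the TLS_REMOTE_HOSTNAME socket option are
taken from the article; the header providing them would come from the
SSA itself (not from a standard system header), the address is a
placeholder, and error handling is omitted:

#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include "in_tls.h"  /* assumed SSA header defining IPPROTO_TLS etc. */

int main(void)
{
    int fd = socket(PF_INET, SOCK_STREAM, IPPROTO_TLS);
    const char *host = "example.com";

    /* Tell the SSA which hostname to validate the certificate against. */
    setsockopt(fd, IPPROTO_TLS, TLS_REMOTE_HOSTNAME, host, strlen(host) + 1);

    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(443) };
    inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);

    /* The TLS handshake happens inside connect(); subsequent reads and
     * writes are encrypted and decrypted by the system service. */
    connect(fd, (struct sockaddr *)&addr, sizeof(addr));

    const char *req = "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n";
    write(fd, req, strlen(req));
    close(fd);
    return 0;
}
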
The project has set up the
https://securesocketapi.org/
web site to spread the idea, and the code for a kernel module and the
associated system daemon is available from two GitHub repositories:
ssa and
ssa-daemon.
Unfortunately there is no explicit license information with the code,
so its copyright status is unclear. A
request to clarify
this has gone unanswered since 2018-08-17.
I love the idea of extending socket() to gain TLS support, and
understand why it is an advantage to implement this as a kernel module
and system wide service daemon, but cannot help thinking that it
would be a lot easier to get projects to move to this way of setting
up TLS if it were done with a user space approach, where programs
wanting to use this API could simply link with a wrapper
library.
I recommend you check out this simple and powerful approach to more
secure network connections. :)
As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

2 June 2020

I have been quite absent from Debian stuff lately, and this has increased since COVID-19 hit us. In this blog post I'll try to sketch what I have been doing to help fight COVID-19 these last few months.

In the beginning

When the pandemic reached Argentina the government started a quarantine. We engineers (like engineers around the world) started to think about how to put our abilities to use to help with the situation. Some worked towards providing more protective equipment for medical staff, some towards increasing the number of ventilators available. Another group of people started thinking about other ways of helping. In Bahía Blanca the idea arose of monitoring some variables remotely and en masse.

Simplified Monitoring of Patients in Situations of Mass Hospitalization (MoSimPa)

This is where the idea of remotely monitored devices came in, and MoSimPa (from the Spanish "monitoreo simplificado de pacientes en situación de internación masiva") started to take form. The idea is simple: oximetry (SpO2), heart rate and body temperature will be recorded and, instead of being shown on a display on the device itself, will be transmitted and monitored in one or more places. This way medical staff do not have to approach a patient constantly, and each staff member can monitor more patients at the same time. On-site monitoring can also happen using a cellphone or tablet.

The devices do not have a screen of their own and almost no buttons, making them cheaper to build and thus more in line with the current economic reality of Argentina.

This is where the project Para Ayudar was created. The project aims to produce the aforementioned non-invasive device to be used in health institutions, hospitals, intra-hospital transports and homes.

It is worth noting that the system is designed as a complementary measure for continuous monitoring of a patient. Care should be taken to check that symptoms and overall patient status do not indicate an immediate threat to life. In other words, it is NOT designed for ICUs.

The importance of early pneumonia detection

A vast majority of Covid pneumonia patients I met had remarkably low oxygen saturations at triage, seemingly incompatible with life, but they were using their cellphones as we put them on monitors. Although breathing fast, they had relatively minimal apparent distress, despite dangerously low oxygen levels and terrible pneumonia on chest X-rays.

This greatly reinforced the idea we were on the right track.

The project from a technical standpoint

As the project is primarily designed for and by Argentinians, the current system design and software documentation is written in Spanish, but the source code (or at least most of it) is written in English. Should anyone need it in English, please do not hesitate to ask me.

General system description

The system comprises the devices, a main machine acting as a server (in our case, for small setups, a Raspberry Pi) and the possibility of accessing the data through cell phones, tablets or other PCs on the network.

The hardware

As of today this is the only part for which I still can't provide schematics, but I'll update this blog post and the technical documentation with them as soon as I get my hands on them.

Again, the design is meant to be built in Argentina, where getting our hands on hardware is not easy. Moreover it needs to be as cheap as possible, especially now that the Argentinian currency, the peso, depreciates further every day. So we decided on using an ESP32 as the main microcontroller and a set of Maxim sensor devices. Again, more info when I have them at hand.

The software

Here we have many more components to describe. Firstly, the ESP32 code is built with the Arduino SDK. This part of the stack will receive many updates as soon as the first hardware prototypes are out.

For the rest of the stack I decided to go ahead with whatever is available in Debian stable. Why? Well, Raspbian provides a Debian stable-based image and I'm a Debian Developer, so things should come naturally to me on that front. Of course each component has its own packaging. I'm one of Debian's Qt maintainers, so using Qt will also be quite natural for me. Plots? Qwt, of course. And with that I have most of my necessities fulfilled. I chose PostgreSQL as the database server and Mosquitto as the MQTT broker.

And for managing patients, devices, locations and internments (CRUD anyone?) there is currently a Qt-based application called mosimpa-abm.

ABM main screen

ABM internments view

The idea is to replace it with a web service so it doesn't need to be confined to the RPi or require installation on other machines. I considered using WebAssembly, but I would have to also build PostgreSQL in order to compile Qt's plugin.

Translations? Of course! As I have already mentioned, the code is written in English. Qt makes it easy to translate applications, so I maintain a Spanish translation as the code changes (we are primarily targeting Spanish-speaking people). But of course this also means it can be easily translated to whichever language is necessary.

24 May 2020

I am very happy to report that a more reliable
VLC
bittorrent plugin was just uploaded into Debian. This fixes a
couple of crash bugs in the plugin, hopefully making the VLC
experience even better when streaming directly from a bittorrent
source. The package is currently in Debian unstable, but should be
available in Debian testing in two days. To test it, simply install
it like this:

apt install vlc-plugin-bittorrent

After it is installed, you can try to use it to play a file
downloaded live via bittorrent like this:
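
For example, with a placeholder URL pointing to a web-hosted .torrent
file:

vlc https://example.com/free-movie.torrent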

It also supports magnet links and local .torrent files.
As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

12 May 2020

It has been way too long since my last interview, but as the
Debian Edu / Skolelinux
community is still active, and new people keep showing up on the IRC
channel #debian-edu and
the debian-edu mailing
list, I decided to give it another go. I was hoping someone else
might pick up the idea and run with it, but this has not happened as
far as I can tell, so here we are. This time the announcement of a new
free software tool to
create a school year
book triggered my interest, and I decided to learn more about its
author.
Who are you, and how do you spend your days?
My name is Yvan MASSON, I live in France. I have my own one-person
business in computer services. The work consists of visiting my
customers (private homes, local authorities, small businesses) to give
advice, install computers and software, fix issues, and provide
computing usage training. I spend the rest of my time enjoying my
family and promoting free software.
What is your approach for promoting free
software?
When I think that free software could be suitable for someone, I
explain what it is, with simple words, give a few known examples, and
explain that while there is no fee it is a viable alternative in many
situations. Most people are receptive when you explain how it is
better (I simplify arguments here, I know that it is not so simple):
Linux works on older hardware, there are no viruses, and the software
can be audited to ensure the user is not spied upon. I think the most
important thing is to keep a clear but moderate message: when you try
too hard to convince, people feel attacked and stop listening.
How did you get in contact with the Skolelinux / Debian Edu
project?
I can not remember how I first heard of Skolelinux / Debian Edu,
but probably on planet.debian.org. As I have been working for a
school, I have an interest in this type of project.
The school I am involved in is a school for "children" between 14
and 18 years old. The French government has recommended free software
since 2012, but they do not always use free software themselves. The
school computers are still using the Windows operating system, but all
of them have the classic set of free software: Firefox ESR,
LibreOffice (with the excellent extension Grammalecte that indicates
French grammatical errors), SumatraPDF, Audacity, 7zip, KeePass2, VLC,
GIMP, Inkscape...
What do you see as the advantages of Skolelinux / Debian
Edu?
It is free software! Built on Debian, I am sure that users are not
spied upon, and that it can run on low end hardware. This last point
is very important, because we really need to improve "green IT". I do
not know enough about Skolelinux / Debian Edu to tell how it is better
than another free software solution, but what I like is the "all in
one" solution: everything has been thought of and prepared to ease
installation and usage.
I like Free Software because I hate using something that I can not
understand. I do not say that I can understand everything nor that I
want to understand everything, but knowing that someone / some company
intentionally prevents me from understanding how things work is really
unacceptable to me.
Secondly, and more importantly, free software is a requirement to
prevent abuses regarding human rights and environmental care.
Humanity can not rely on tools that are in the hands of a small group of
people.
What do you see as the disadvantages of Skolelinux / Debian
Edu?
Again, I don't know this project well enough. Maybe a dedicated website?
Debian wiki works well for documentation, but is not very appealing to
someone discovering the project. Also, as Skolelinux / Debian Edu uses
OpenLDAP, it probably means that Windows workstations cannot use
centralized authentication. Maybe the project could use Samba as an
Active Directory domain controller instead, allowing Windows desktop
usage when necessary.
(Editors note: In fact Windows workstations can
use
the centralized authentication in a Debian Edu setup, at least for
some versions of Windows, but the fact that this is not well known can
be seen as an indication of the need for better documentation and
marketing. :)
Which free software do you use daily?
Nothing original: Debian testing/sid with Gnome desktop, Firefox,
Thunderbird, LibreOffice...
Which strategy do you believe is the right one to use to
get schools to use free software?
Every effort to spread free software into schools is important,
whatever it is. But I think, at least where I live, that IT
professionals maintaining schools networks are still very "Microsoft
centric". Schools will use any working solution, but they need people
to install and maintain it. How to make these professionals aware of
free software and train them in solutions like Debian Edu /
Skolelinux is a really good question :-)

8 May 2020

Half a year ago,
I
wrote about the Jami communication
client, capable of peer-to-peer encrypted communication. It
handles messages as well as audio and video. It uses distributed hash
tables instead of central infrastructure to connect its users to each
other, which in my book is a plus. I mentioned briefly that it could
also work as a SIP client, which came in handy when the higher
educational sector in Norway started to promote Zoom as its video
conferencing solution. I am reluctant to use the official Zoom client
software, due to their copyright
license clauses prohibiting users from reverse engineering it (for
example to check the security) and benchmarking it, and thus prefer to connect to
Zoom meetings with free software clients.
Jami worked OK as a SIP client to Zoom as long as there was no
password set on the room. The Jami daemon leaks memory like crazy
(approximately 1 GiB a minute) when I am connected to the video
conference, so I had to restart the client every 7-10 minutes, which
is not great. I tried to get other SIP Linux clients to work
without success, so I decided I would have to live with this wart
until someone managed to fix the leak in the dring code base. But
another problem showed up once the rooms were password protected. I
could not get my dial tone signaling through from Jami to Zoom, and
dial tone signaling is used to enter the password when connecting to
Zoom. I tried a lot of different permutations with my Jami and
Asterisk setup to try to figure out why the signaling did not get
through, only to finally discover that the fundamental problem seems
to be that Zoom is simply not able to receive dial tone signaling when
connecting via SIP. There seems to be nothing wrong with the Jami and
Asterisk end; it is simply broken on the Zoom end. I got help from a
very skilled VoIP engineer figuring out this last part. And being a
very skilled engineer, he was also able to locate a solution for me.
Or to be exact, a workaround that solves my initial problem of
connecting to password protected Zoom rooms using Jami.
So, how do you do this? I am sure you are wondering by now. The
trick is already
documented
by Zoom, and it is to modify the SIP address to include the room
password. What is most surprising about this is that the
automatically generated email from Zoom with instructions on how to
connect via SIP does not mention this. The SIP address to use normally
consists of the room ID (a number), an @ character and the IP address
of the Zoom SIP gateway. But Zoom understands a lot more than just the
room ID in front of the at sign. The format is "[Meeting
ID].[Password].[Layout].[Host Key]", and here you can see how you
can both enter the password, control the layout (full screen, active
presence and gallery) and specify the host key to start the meeting.
The full SIP address entered into Jami to provide the password will
then look like this (all using made up numbers):

sip:657837644.522827@192.168.169.170

Now if only Jami would reduce its memory usage, I could even
recommend this setup to others. :)
As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

29 April 2020

Curiosity got the better of me when
Slashdot
reported that New Jersey was desperately looking for
COBOL programmers,
and a few days later it was reported that
IBM
tried to locate COBOL programmers.
I thus decided to have a look at free software alternatives to
learn COBOL, and had the pleasure to find
GnuCOBOL was
already in
Debian. It used to be called OpenCOBOL, and is a "compiler"
transforming COBOL code to C or C++ before giving it to GCC or Visual
Studio to build binaries.
I managed to get in touch with upstream, and was impressed with the
quick response, and also was happy to see a new Debian maintainer
taking over when the original one recently asked to be replaced. A
new Debian upload was done as recently as yesterday.
Using the Debian package, I was able to follow a simple COBOL
introduction and make and run simple COBOL programs. It was fun to
learn a new programming language. If you want to test for yourself,
the GnuCOBOL Wikipedia
page has a few simple examples to get you started.
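As a taste, here is the classic first program (my own minimal
example, not taken from the Wikipedia page), saved as hello.cob:

IDENTIFICATION DIVISION.
PROGRAM-ID. HELLO.
PROCEDURE DIVISION.
    DISPLAY "Hello, world!".
    STOP RUN.

Compile it to a native binary and run it using the GnuCOBOL compiler
driver:

cobc -x -o hello hello.cob
./hello
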
As I do not have much experience with COBOL, I do not know how
standards compliant it is, but it claims to pass most tests from the
COBOL test suite, which sounds good to me. It is nice to know it is
possible to learn COBOL using software without any usage restrictions,
and I am very happy that such a nice free software project as this is
available. If you, like me, are curious about COBOL, check it out.
As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

30 March 2020

Before and during FOSDEM 2020, I agreed with the people (developers, supporters, managers) of the UBports Foundation to package the Unity8 Operating Environment for Debian. Since 27th Feb 2020, Unity8 has been known as Lomiri.
Recent Uploads to Debian related to Lomiri
Over the past 7-8 weeks the packaging progress has been slowed down due to other projects I am working on in parallel. However, quite a few things have been achieved:

review forks of unity-api, ubuntu-download-manager and unity-app-launch under the names lomiri-api, lomiri-download-manager, lomiri-app-launch.

request upstream releases of lomiri-api and lomiri-download-manager

package and upload lomiri-api to Debian unstable (unfortunately still in Debian's NEW queue)

package and upload lomiri-download-manager to Debian unstable (ditto)

package (and with 'package' I mean Debian policy compliant packaging) lomiri-app-launch (no upload, yet, as there are some strange unit test failures that need more debugging)

package and upload qtsystems (under the umbrella of the Debian Qt/KDE Maintainers' team) to Debian unstable (pending review in Debian's NEW queue)

package and upload qtfeedback (under the umbrella of the Debian Qt/KDE Maintainers' team) to Debian unstable (pending review in Debian's NEW queue)

package and (upload) [1] qtpim (under the umbrella of the Debian Qt/KDE Maintainers' team) to Debian unstable (pending review in Debian's NEW queue)

The packages qtsystems, qtfeedback, and qtpim are not official Qt5 components, so I had to package Git snapshots of them, with all the implied consequences regarding ABI and API compatibility, possible Debian-internal library transitions, etc.
Especially packaging qtsystems was pretty tricky due to a number of failing unit tests when the package was built in a clean chroot (as is the case on Debian's buildd infrastructure). I learned a lot about DBus and DBus mocking while working on getting all those unit tests to finally pass in chrooted builds.
Unfortunately, the Lomiri App Launch component still needs more work due to (finally only) one unit test (jobs-systemd) not always passing. Sometimes the test gets stuck and then fails after reaching a timeout. I'll add it to my list of those unreproducible build failures I have recently seen in several GTest related unit test scenarios. Sigh...
Credits
A great thanks goes to Lisandro Perez Meyer from the Debian KDE/Qt Team for providing an intro and help on Qt Debian packaging and an intro on symbols handling with C++ projects.
Another big thanks goes to Dmitry Shachnev from the Debian KDE/Qt Team for doing a sponsored upload [1] of qtpim (and also a nice package review).
Also a big thanks goes to Marius Gripsgard for his work on forking the first Lomiri components on the UBports upstream side.
Previous Posts about my Debian UBports Team Efforts

[1] Unfortunately, I missed a crucial element of the GPG key update workflow as a Debian Developer. My GPG key was about to expire at the end of March 2020. I renewed its expiration date and exported the public key to the public PGP/GPG keyservers. However, to be able to upload packages to Debian, one also has to push the public key to Debian's own keyring server, which I missed. Thus, I won't be able to upload any packages myself before the end of April and will depend on DD colleagues helping out by sponsoring my uploads.
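
For reference, the renewal and the missed step look roughly like this
(a sketch; the key ID is a placeholder, and keyring.debian.org is the
keyring server Debian uses):

gpg --edit-key 0xDEADBEEF    # run 'expire' to set the new date, then 'save'
gpg --keyserver keyring.debian.org --send-keys 0xDEADBEEF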

There have been a lot of stories about the Coronavirus, and with them a lot of political blame-games. The first step that India took, a lockdown, was and is a good step, but it came without a plan for how the poor and the needy, and especially the huge internal migrant population India has, would be affected by it. A 2019 World Economic Forum report puts the number at 139 million people. That is a huge number of people, and there are a variety of both push and pull factors which have displaced them. While there have been attempts in the past, and probably will be more in the future, they will be hampered unless we have trustworthy data, and there is a lot that still needs to be done there. In recent years, both primary and secondary data have generated a lot of controversy within India as well as abroad, so there is no point in rehashing all of that. Even the definition of who is a "migrant" needs to be well established, just as who is a "farmer" does. The simplest lacuna in the latter is that those who have land are counted as farmers, but tenant farmers and their wives are not, hence the true numbers are never known. Whether this is an India-specific problem, or similar definition issues exist in the rest of the world, I don't know.

How our Policies fail to reach the poor and the vulnerable
The sad part is that most policies in India are made in "castles in the air". An interview by The Wire shares the conundrum of those who are affected and the policies which are enacted for them (it's a YouTube video, sorry).

If one watches the interview with an open and fresh mind, it is clear why there was a huge reverse migration from Indian cities to villages. The poor and marginalized have always seen the Indian state as an extortionate force, so it doesn't make sense for them to be in the cities. The Prime Minister's announcement of food for 3 months was a clear indication to the migrant population that for 3 months they would have no work. Faced with such a scenario, the best option for them was to return to their native places. While the videos shown were of huge numbers of migrants in Delhi, this was the scenario in most states and cities, including Pune, my own city. Another interesting point which was made is that most of the policies will need the migrants to be back in the villages. Most of these are tied to accounts which are opened in villages, so even if they want to have the benefits they will have to migrate to the villages in order to use them. Of course, everybody in India knows how leaky the administration is. The late Shri Rajiv Gandhi once famously and infamously remarked how leaky the Public Distribution System and similar systems are: it is only 10 paise out of a rupee which reaches the poor. And he said this about 30 years ago. There have been numerous reports on both IPS (Indian Police Service) reforms and IAS (Indian Administrative Service) reforms over the years; many of the committee reports have been in the public domain, and in fact were part of the election manifesto of the ruling party in 2014, but no movement has happened on that front. The only thing which has happened is that people from the ruling party have been appointed to various posts, which is the same as under earlier governments.
I was discussing with a friend, a contractor and builder, the construction labour issues which were pointed out in the report, and whether it is true that many a time the migrant labour is not counted. While he shared a number of cases he knew of, a more recent case in public memory was when some labourers died while building Amanora Mall, which is perhaps one of the largest malls in India. There were a few accidents while constructing the mall. Apparently, the insurance money which should have gone to the migrant labourers was taken by somebody close to the developers who were building the mall. I have a friend who lives in Jharkhand and is a labour officer. She has shared with me so many stories of how the labourers are exploited. Keep in mind she is a labour officer appointed by the state and her salary is paid by the state. So she always has to maintain a balance between ensuring workers' rights and the interests of the state, private entities etc., which are usually in cahoots with the state, and it is possible that a lot of the time the state wins over workers' rights. Again, as a labour officer she doesn't have that much power, and when she was new to the work she was often frustrated, but as she remarked a few months back, she has started taking it easy (routinized) as it wasn't doing her any good anyway. Also, there have been plenty of cases of labour officers being murdered, so it is easier to understand why one tries to retain some sanity while doing such a job.

The Indian response and the World Response
The Indian response has been the lockdown and very limited testing. We seem to be following the pattern of the UK and U.S., which have been slow to respond and slow to test. In the past Kerala showed the way, but this time even that is not enough. At the end of the day we need to test, test and test, just as urged by the WHO Director-General. India is trying to create its own cheap test kits with ICMR approval; for example, a firm from my own city Pune, Mylab, has been given approval. We will know how good or bad they are only after they have been field-tested. For ventilators we have asked Mahindra and Mahindra, even though there are companies like Allied Medical and others who have exported to the EU and elsewhere, whose offers the Govt. is still taking time to think through. This is similar to how in the UK some companies which are close to the Govt. but have no experience in making ventilators have been given orders, while those who have experience and were exporting to Germany and other countries have not. The playbook is eerily similar. In India, we don't have the infrastructure for any new patients, period. Heck, only a couple of states have done something proper for the anganwadi workers. In fact, last year there were massive strikes by anganwadi workers all over India, but only NDTV showed a bit of it, along with some of the news channels from South India. Most mainstream channels chose to ignore it.
On the world stage, how some of the other countries have responded perhaps needs sharing. For example, I didn't know that Cuba had so many doctors, nor about the politics between it and Brazil. Or the interesting statistics shared by Andreas Backhaus, which seem to show how distributed the issue is age-wise, rather than confined to just a few groups, as has been told in the Indian media. What was surprising for me is the 20-29 age group, the bulk of our population, which has not been discussed much in the Indian media. The HBR article also makes a few key points which I hope both the general public and policymakers, in India as well as elsewhere, take note of.
What is worrying, though, is that people can apparently be infected twice or more, as seems to be the case in Singapore, China and elsewhere. I have read enough Robin Cook and Michael Crichton books to be aware that viruses can do whatever they like. They will mutate over time; how things will happen then is anybody's guess. What I found interesting is the World Economic Forum article which hypothesizes that it may be two viruses which got together, as well as a research paper from the Journal of Proteome Research which has recently been published. The biggest myth flying around is that summer will halt or kill the spread, which even some of my friends have fallen victim to. While a part of me wants to believe them, a simple scientific fact is that viruses have been around us and evolved over time, just like we have. In fact, there have been cases of people dying due to the common cold and other such things. Viruses are so prevalent it's unbelievable. What is and was interesting to note is that bat-borne viruses as well as pangolin viruses had been theorized and shared by Chinese researchers going all the way back to the 90s. The problem is, even if we killed all the bats in the world, some other virus would take its place for sure. One of the ideas I had, though I don't know if it's feasible or not, is that at least in places like airports we should have some sort of screening and labs working on virology. Of course, this will mean more expenses for flying passengers, but for public health and safety maybe it would be worth doing. In any case, virologists would have a field day cataloguing various viruses, and it would make it harder for viruses to spread as fast as this one has. The spread of the virus also showed a lack of leadership in most of our leaders, who didn't react fast enough. While one hopes people do learn from this, I am afraid the whole thing is far from over. These are unprecedented times, and I hope all are maintaining social distancing and going out only when needed.

git clone REPO
./REPO/bootstrap.sh

... something eerily similar to the infamous curl pipe bash
method which I often decry. As a short-term workaround, I relied on
the SHA-1 checksum of the repository to make sure I have the right
code, by running this both on a "trusted" (ie. "local") repository and
the remote, then visually comparing the output:
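
One way to do this, as a sketch, is to print the commit hash of HEAD
on both sides:

git rev-parse HEAD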

One problem with this approach is that SHA-1 is now considered as
flawed as MD5, so it can't be used as an authentication mechanism
anymore. It's also fundamentally difficult for humans to compare
hashes.
The other flaw with comparing local and remote checksums is that we
assume we trust the local repository. But how can I trust that
repository? I can either:

audit all the code present and all the changes done to it afterwards

or trust someone else to do so

The first option here is not practical in most cases. In this specific
use case, I have audited the source code -- I'm the author, even --
what I need is to transfer that code over to another server.
(Note that I am replacing those procedures with Fabric, which
makes this use case moot for now as the trust path narrows to "trust
the SSH server" which I already had anyways. But it's still important
for my fellow Tor developers who worry about trusting the git server,
especially now that we're moving to GitLab.)
But anyways, in most cases, I do need to trust some other fellow
developer I collaborate with. To do this, I would need to trust the
entire chain between me and them:

the git client

the operating system

the hardware

then the hosting provider (and that hardware/software stack)

and then backwards all the way back to that other person's computer

I want to shorten that chain as much as possible, make it "peer to
peer", so to speak. Concretely, it would eliminate the hosting
provider and the network, as attackers.

OpenPGP verification
My first reaction is (perhaps perversely) to "use OpenPGP" for this. I
figured that if I sign every commit, then I can just check the latest
commit and see if the signature is good.
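Making that automatic is easy with standard git configuration (the
key ID below is a placeholder):

git config --global user.signingkey 0xDEADBEEF
git config --global commit.gpgsign true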
The first problem here is that this is surprisingly hard. Let's pick
some arbitrary commit I did recently:

That's the output of git log -p in my local repository. I signed
that commit, yet git log is not telling me anything special. To
check the signature, I need something special: --show-signature,
which looks like this:
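For the latest commit, the invocation is:

git log -1 --show-signature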

Important part: Can't check signature: No public key. No public
key. Because of course you would see that. Why would you have my
key lying around, unless you're me. Or, to put it another way, why
would that server I'm installing from scratch have a copy of my
OpenPGP certificate? Because I'm a Debian developer, my key is
actually part of the 800 keys in the debian-keyring package,
signed by the APT repositories. So I have a trust path.
But that won't work for someone who is not a Debian developer. It will
also stop working when my key expires in that repository, as it
already has on Debian buster (current stable). So I can't assume I
have a trust path there either. That said, one could work with a
trusted keyring like we do in the Tor and Debian projects, and only
work inside that project.
But I still feel uncomfortable with those commands. Both git log and
git show will happily succeed (return code 0 in the shell) even
though the signature verification failed on the commits. Same with
git pull and git merge, which will happily advance your branch
even if the remote has unsigned or badly signed commits.
To actually verify commits (or tags), you need the git
verify-commit (or git verify-tag) command, which seems to do
the right thing:
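
For example, checking the latest commit and echoing the shell's exit
code:

git verify-commit HEAD
echo $?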

At least it fails with some error code (1, above). But it's not
flexible: I can't use it to verify that a "trusted" developer (say one
that is in a trusted keyring) signed a given commit. Also, it is not
clear what a failure means. Is a signature by an expired certificate
okay? What if the key is signed by some random key in my personal
keyring? Why should that be trusted?

Worrying about git and GnuPG
In general, I'm worried about git's implementation of OpenPGP
signatures. There have been numerous cases of interoperability problems
with GnuPG specifically that led to security issues, like EFAIL or
SigSpoof. It would be surprising if such a vulnerability did not
exist in git.
Even if git did everything "just right" (which I have myself found
impossible to do when writing code that talks with GnuPG), what does
it actually verify? The commit's SHA-1 checksum? The tree's checksum?
The entire archive as a zip file? I would bet it signs the commit's
SHA-1 sum, but I just don't know, off the top of my head, and neither
the git-commit nor the git-verify-commit documentation says exactly
what is happening.
I had an interesting conversation with a fellow Debian developer
(dkg) about this and we had to admit those limitations:

<anarcat> i'd like to integrate pgp signing into tor's coding
practices more, but so far, my approach has been "sign commits" and
the verify step was "TBD"
<dkg> that's the main reason i've been reluctant to sign git
commits. i haven't heard anyone offer a better subsequent step. if
torproject could outline something useful, then i'd be less averse
to the practice.
i'm also pretty sad that git remains stuck on sha1, esp. given the
recent demonstrations. all the fancy strong signatures you can make
in git won't matter if the underlying git repo gets changed out from
under the signature due to sha1's weakness

In other words, even if git implements the arcane GnuPG dialect just
so, and would allow us to set up the trust chain just right, and
would give us meaningful and workable error messages, it still would
fail because it's still stuck in SHA-1. There is work underway to
fix that, but in February 2020, Jonathan Corbet described that work as
being in a "relatively unstable state", which is hardly something I
would like to trust to verify code.
Also, when you clone a fresh new repository, you might get an entirely
different repository, with a different root and set of commits. The
concept of "validity" of a commit, in itself, is hard to establish in
this case, because a hostile server could put you backwards in time,
on a different branch, or even on an entirely different
repository. Git will warn you about a different repository root with
warning: no common commits but that's easy to miss. And complete
branch switches, rebases and resets from upstream are hardly more
noticeable: only a tiny plus sign (+) instead of a star (*) will
tell you that a reset happened, along with a warning (forced update)
on the same line. Miss those and your git history can be compromised.

Possible ways forward
I don't consider the current implementation of OpenPGP signatures in
git to be sufficient. Maybe, eventually, it will mature away from
SHA-1 and the interface will be more reasonable, but I don't see that
happening in the short term. So what do we do?

git evtag
The git-evtag extension is a replacement for git tag -s. It's
not designed to sign commits (it only handles tags), but at least it
uses a stronger algorithm (SHA-512) to checksum the tree, and will
include everything in that tree, including blobs. If that sounds
expensive to you, don't worry too much: it takes about 5 seconds to
tag the Linux kernel, according to the author.
Unfortunately, that checksum is then signed with GnuPG, in a manner
similar to git itself, in that it exposes GnuPG output (which can be
confusing) and is likely similarly vulnerable to mis-implementation of
the GnuPG dialect as git itself. It also does not allow you to specify
a keyring to verify against, so you need to trust GnuPG to make sense
of the garbage that lives in your personal keyring (and, trust me, it
doesn't).
And besides, git-evtag is fundamentally the same as signed git tags:
checksum everything and sign with GnuPG. The difference is it uses
SHA-512 instead of SHA-1, but that's something git will eventually fix
itself anyways.

kernel patch attestations
The kernel also faces this problem. Linus Torvalds signs the releases
with GnuPG, but patches fly all over mailing lists without any form of
verification apart from clear-text email. So Konstantin Ryabitsev has
proposed a new protocol to sign git patches which uses SHA256 to
checksum the patch metadata, commit message and the patch itself, and
then sign that with GnuPG.
It's unclear to me what, if anything, this solves. As dkg
argues, it would seem better to add OpenPGP support to
git-send-email and teach git tools to recognize that (e.g. git-am)
at least if you're going to keep using OpenPGP anyways.
And furthermore, it doesn't resolve the problems associated with
verifying a full archive either, as it only attests "patches".

jcat
Unhappy with the current state of affairs, the author of fwupd
(Richard Hughes) wrote his own protocol as well, called
jcat, which provides signed "catalog files" similar to the ones
provided in Microsoft Windows.
It consists of "gzip-compressed JSON catalog files, which can be
used to store GPG, PKCS-7 and SHA-256 checksums for each file". So
yes, it is yet again another wrapper to GnuPG, probably with all the
flaws detailed above, on top of being a niche implementation,
disconnected from git.

The Update Framework
One more thing dkg correctly identified is:

<dkg> anarcat: even if you could do exactly what you describe,
there are still some interesting wrinkles that i think would be
problems for you.
the big one: "git repo's latest commits" is a loophole big enough to
drive a truck through. if your adversary controls that repo, then
they get to decide which commits to include in the repo. (since
every git repo is a view into the same git repo, just some have more
commits than others)

In other words, unless you have a repository that has frequent commits
(either because of activity or by a bot generating fake commits), you
have to rely on the central server to decide what "the latest version"
is. This is the kind of problem that binary package distribution
systems like APT and TUF solve correctly. Unfortunately, those
don't apply to source code distribution, at least not in git form: TUF
only deals with "repositories" and binary packages, and APT only deals
with binary packages and source tarballs.
That said, there's actually no reason why git could not support the
TUF specification. Maybe TUF could be the solution to ensure
end-to-end cryptographic integrity of the source code
itself. OpenPGP-signed tarballs are nice, and signed git tags can be
useful, but from my experience, a lot of OpenPGP (or, more accurately,
GnuPG) derived tools are brittle and do not offer clear guarantees,
and definitely not to the level that TUF tries to address.
This would require changes on the git servers and clients, but I think
it would be worth it.

Other Projects

OpenBSD
There are other tools trying to do parts of what GnuPG is doing, for
example minisign and OpenBSD's signify. But they do not
integrate with git at all right now. Although I did find a
hack to use signify with git, it's kind of gross...

Golang
Unsurprisingly, this is a problem everyone is trying to solve. Golang
is planning on hosting a notary which would leverage a
"certificate-transparency-style tamper-proof log" which would be ran
by Google (see the spec for details). But that doesn't resolve the
"evil server" attack, if we treat Google as an adversary (and we should).

Python
Python had OpenPGP going for a while on PyPI, but it's unclear if it
ever did anything at all. Now the plan seems to be to use TUF but
my hunch is that the complexity of the specification is keeping that
from moving ahead.

Docker
Docker and the container ecosystem have, in theory, moved to TUF in the
form of Notary, "a project that allows anyone to have trust over
arbitrary collections of data". In practice however, in my somewhat
limited experience,
setting up TUF and image verification in Docker is far from trivial.

Android and iOS
Even in what is possibly one of the strongest models (at least in
terms of user friendliness), mobile phones are surprisingly unclear
about those kinds of questions. I had to ask if Android had end-to-end
authentication and I am still not clear on the answer. I have no
idea of what iOS does.

Conclusion
One of the core problems with everything here is the common usability
aspect of cryptography, and specifically the usability of verification
procedures. We have become pretty good at encryption. The harder
part (and a requirement for proper encryption) is verification. It
seems that problem still remains unsolved, in terms of usability. Even
Signal, widely considered to be a success in terms of adoption and
usability, doesn't properly solve that problem, as users regularly
ignore "The security number has changed" warnings...
So, even though they deserve a lot of credit in other areas, it seems
unlikely that hardcore C hackers (e.g. git and kernel developers)
will be able to resolve that problem without at least a little bit of
help. And since TUF seems like the state-of-the-art specification in
this area, it would seem wise to start adopting it in the git
community as well.
Update: git 2.26 introduced a new gpg.minTrustLevel to "tell
various signature verification codepaths the required minimum trust
level", presumably to control how Git will treat keys in your
keyrings, assuming the "trust database" is valid and up to date. For
an interesting narrative of how "normal" (without PGP) git
verification can fail, see also A Git Horror Story: Repository
Integrity With Signed Commits.
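As a sketch, requiring at least marginal trust for signatures to be
accepted would look like this (the option accepts the values
undefined, never, marginal, fully and ultimate):

git config --global gpg.minTrustLevel marginal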

2 March 2020

Today, after many months of development, a new release of
Nikita
Noark 5 core project was finally
announced
on the project mailing list. The Nikita free software solution is
an implementation of the Norwegian archive standard Noark 5 used by
government offices in Norway. These were the changes in version 0.5
since version 0.4, see the email link above for links to a demo
site:

Updated to the Noark 5 version 5.0 API specification.

Changed formatting of _links from [] to {} to match the IETF draft
on JSON HAL (see the sketch after this list).

Merged Registrering and Basisregistrering from version 4 into a
combined Registrering.

DokumentObjekt is now a subtype of ArkivEnhet.

Introduced the new entity Arkivnotat.

Changed all relation keys to use /v5/ instead of /v4/.

Corrected to use new official relation keys when possible.

Renamed Sakspart to Part and connected it to Mappe, Registrering
and Dokumentbeskrivelse instead of only Saksmappe.

Moved Korrespondansepart connection from Journalpost to
Registrering.

Moved Part and Korrespondansepart from package sakarkiv to
arkivstruktur.

Renamed presedensstatus to presedensStatus.

Use new JSON content-type "application/vnd.noark5+json".

Updated prepopulated format list to use PRONOM codes.

Implemented endpoint for system information.

Implemented national identifiers for both file and record.

Implemented comments.

Implemented sign off.

Implemented conversion.

Improved/implemented OData search and paging support for more entities.

No longer exposes attribute Dokumentobjekt.referanseDokumentfil,
one should use the relation in _links instead.
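
To illustrate the _links change mentioned above: with JSON HAL,
_links is a JSON object keyed by relation name rather than an array.
A sketch, with placeholder relation key and URLs:

"_links": {
  "self": {
    "href": "https://n5.example.com/api/arkivstruktur/arkiv/some-id"
  },
  "https://rel.arkivverket.no/noark5/v5/api/arkivstruktur/arkivdel/": {
    "href": "https://n5.example.com/api/arkivstruktur/arkiv/some-id/arkivdel"
  }
}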

18 November 2017

A month ago, I blogged about my work to
automatically
check the copyright status of IMDB entries, and try to count the
number of movies listed in IMDB that is legal to distribute on the
Internet. I have continued to look for good data sources, and
identified a few more. The code used to extract information from
various data sources is available in
a
git repository, currently available from github.
So far I have identified 3186 unique IMDB title IDs. To gain
better understanding of the structure of the data set, I created a
histogram of the year associated with each movie (typically release
year). It is interesting to notice where the peaks and dips in the
graph are located. I wonder why they are placed there. I suspect
World War II caused the dip around 1940, but what caused the peak
around 2010?

I've so far identified ten sources for IMDB title IDs for movies in
the public domain or with a free license. This is the statistics
reported when running 'make stats' in the git repository:

249 entries ( 6 unique) with and 288 without IMDB title ID in free-movies-archive-org-butter.json
2301 entries ( 540 unique) with and 0 without IMDB title ID in free-movies-archive-org-wikidata.json
830 entries ( 29 unique) with and 0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
2109 entries ( 377 unique) with and 0 without IMDB title ID in free-movies-imdb-pd.json
291 entries ( 122 unique) with and 0 without IMDB title ID in free-movies-letterboxd-pd.json
144 entries ( 135 unique) with and 0 without IMDB title ID in free-movies-manual.json
350 entries ( 1 unique) with and 801 without IMDB title ID in free-movies-publicdomainmovies.json
4 entries ( 0 unique) with and 124 without IMDB title ID in free-movies-publicdomainreview.json
698 entries ( 119 unique) with and 118 without IMDB title ID in free-movies-publicdomaintorrents.json
8 entries ( 8 unique) with and 196 without IMDB title ID in free-movies-vodo.json
3186 unique IMDB title IDs in total

The entries without IMDB title ID are candidates to increase the
data set, but might equally well be duplicates of entries already
listed with an IMDB title ID in one of the other sources, or represent
movies that lack an IMDB title ID. I've seen examples of all these
situations when peeking at the entries without IMDB title ID. Based
on these data sources, the lower bound for the number of movies listed
in IMDB that are legal to distribute on the Internet is somewhere
between 3186 and 4713.
It would be great for improving the accuracy of this measurement,
if the various sources added IMDB title ID to their metadata. I have
tried to reach the people behind the various sources to ask if they
are interested in doing this, without any replies so far. Perhaps you
can help me get in touch with the people behind VODO, Public Domain
Torrents, Public Domain Movies and Public Domain Review to try to
convince them to add more metadata to their movie entries?
Another way you could help is by adding pages to Wikipedia about
movies that are legal to distribute on the Internet. If such a page
exists and includes a link to both IMDB and The Internet Archive, the
script used to generate free-movies-archive-org-wikidata.json should
pick up the mapping as soon as Wikidata is updated.
As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Several of these research papers are based on data collected from
hundreds of thousands or millions of disks, and their findings are eye
opening. The short story is: do not implicitly trust RAID or
redundant storage systems. Details matter. And unfortunately there
are few options on Linux addressing all the identified issues. Both
ZFS and Btrfs are doing a fairly good job, but have legal and
practical issues of their own. I wonder how cluster file systems like
Ceph do in this regard. After all, there is an old saying: you know
you have a distributed system when the crash of a computer you have
never heard of stops you from getting any work done. The same holds
true if fault tolerance does not work.
Just remember, in the end, it does not matter how redundant or how
fault tolerant your storage is, if you do not continuously monitor its
status to detect and replace failed disks.

31 October 2017

I was surprised today to learn that a friend in academia did not
know there are easily available web services for writing
LaTeX documents as a team. I thought it was common knowledge, but to
make sure at least my readers are aware of it, I would like to mention
these useful services for writing LaTeX documents. Some of them even
provide a WYSIWYG editor to ease writing even further.
There are two commercial services available,
ShareLaTeX and
Overleaf. They are very easy to
use. Just start a new document, select which publisher to write for
(ie which LaTeX style to use), and start writing. Note, these two
have announced their intention to join forces, so soon it will only be
one joint service. I've used both for different documents, and they
work just fine.
ShareLaTeX is free
software, while the latter is not. According to an
announcement from Overleaf, they plan to keep the ShareLaTeX code
base maintained as free software.
But these two are not the only alternatives.
Fidus Writer is another free
software solution with the
source available on github. I have not used it myself. Several
others can be found on the nice
alternativeTo
web service.
If you like Google Docs or Etherpad, but would like to write
documents in LaTeX, you should check out these services. You can even
host your own, if you want to. :)

25 October 2017

Recently, I needed to automatically check the copyright status of a
set of The Internet Movie Database
(IMDB) entries, to figure out which of the movies they refer
to can be freely distributed on the Internet. This proved to be
harder than it sounds. IMDB certainly lists movies without any
copyright protection, where the copyright protection has expired or
where the movie is licensed using a permissive license like one from
Creative Commons. These are mixed with copyright protected movies,
and there seems to be no way to separate these classes of movies using
the information in IMDB.
First I tried to look up entries manually in IMDB,
Wikipedia and
The Internet Archive, to get a
feel for how to do this. It is hard to know for sure using these
sources, but it should be possible to be reasonably confident a movie
is "out of copyright" with a few hours' work per movie. As I needed to check
almost 20,000 entries, this approach was not sustainable. I simply
can not work around the clock for about 6 years to check this data
set.
I asked the people behind The Internet Archive if they could
introduce a new metadata field in their metadata XML for IMDB ID, but
was told that they leave it completely to the uploaders to update the
metadata. Some of the metadata entries had IMDB links in the
description, but I found no way to download all metadata files in bulk
to locate those ones and put that approach aside.
In the process I noticed several Wikipedia articles about movies
had links to both IMDB and The Internet Archive, and it occurred to me
that I could use the Wikipedia RDF data set to locate entries with
both, to at least get a lower bound on the number of movies on The
Internet Archive with a IMDB ID. This is useful based on the
assumption that movies distributed by The Internet Archive can be
legally distributed on the Internet. With some help from the RDF
community (thank you DanC), I was able to come up with this query to
pass to the SPARQL interface on
Wikidata:
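
A sketch of such a query, assuming the Wikidata properties P345 (IMDB
ID), P724 (Internet Archive ID) and P577 (publication date):

SELECT ?imdb ?ia ?date ?title WHERE {
  ?film wdt:P31/wdt:P279* wd:Q11424 .       # anything that is a film
  ?film wdt:P345 ?imdb .                    # IMDB title ID
  ?film wdt:P724 ?ia .                      # Internet Archive ID
  OPTIONAL { ?film wdt:P577 ?date . }       # release date, if known
  OPTIONAL { ?film rdfs:label ?title .
             FILTER(LANG(?title) = "en") }  # English title, if known
}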

If I understand the query right, for every film entry anywhere in
Wikipedia, it will return the IMDB ID and The Internet Archive ID, and
when the movie was released and its English title, if either or both
of the latter two are available. At the moment the result set contains
2338 entries. Of course, it depends on volunteers including both
correct IMDB and The Internet Archive IDs in the Wikipedia articles
for the movie. It should be noted that the result will include
duplicates if the movie has entries in several languages. There are
some bogus entries, either because The Internet Archive ID contains a
typo or because the movie is not available from The Internet Archive.
I did not verify the IMDB IDs, as I am unsure how to do that
automatically.
I wrote a small Python script to extract the data set from Wikidata
and check if the XML metadata for the movie is available from The
Internet Archive, and after around 1.5 hours it produced a list of 2097
free movies and their IMDB IDs. In total, 171 entries in Wikidata lack
the referred Internet Archive entry. I assume the 70 "disappearing"
entries (ie 2338-2097-171) are duplicate entries.
This is not too bad, given that The Internet Archive reports
to contain 5331
feature films at the moment, but it also means more than 3000
movies are missing from Wikipedia or are missing the pair of references
on Wikipedia.
I was curious about the distribution by release year, and made a
little graph to show how the amount of free movies is spread over the
years:
I expect the relative distribution of the remaining 3000 movies to
be similar.
If you want to help, and want to ensure Wikipedia can be used to
cross-reference The Internet Archive and The Internet Movie Database,
please make sure entries like this are listed under the "External
links" heading on the Wikipedia article for the movie:

17 October 2017

An earlier article showed that
private key storage is an important problem to solve in any
cryptographic system and established keycards as a good way to store
private key material offline. But which keycard should we use? This
article examines the form factor, openness, and performance of four
keycards to try to help readers choose the one that will fit their
needs.
I have personally been using a YubiKey NEO, since a 2015
announcement
on GitHub promoting two-factor authentication. I was also able to hook
up my SSH authentication key into the YubiKey's 2048 bit RSA slot. It
seemed natural to move the other subkeys onto the keycard, provided that
performance was sufficient. The mail client that I use
(Notmuch) blocks when decrypting messages,
which could be a serious problem on large email threads from encrypted
mailing lists.
So I built a test harness and got access to some more keycards: I bought
a FST-01 from its creator,
Yutaka Niibe, at the last DebConf and Nitrokey donated a Nitrokey
Pro. I also
bought a YubiKey 4
when I got the NEO. There are of course other keycards out there, but
those are the ones I could get my hands on. You'll notice none of those
keycards have a physical keypad to enter passwords, so they are all
vulnerable to keyloggers that could extract the key's PIN. Keep in mind,
however, that even with the PIN, an attacker could only ask the keycard
to decrypt or sign material but not extract the key that is protected by
the card's firmware.

Form factor
The four keycards have similar form factors: they all connect to a
standard USB port, although both YubiKey keycards have a capacitive
button by which the user triggers two-factor authentication and the
YubiKey 4 can also require a button
press
to confirm private key use. The YubiKeys feel sturdier than the other
two. The NEO has withstood two years of punishment in my pockets along
with the rest of my "real" keyring and there is only minimal wear on the
keycard in the picture. It's also thinner so it fits well on the
keyring.
The FST-01 stands out from the other two with its minimal design. Out of
the box, the FST-01 comes without a case, so the circuitry is exposed.
This is deliberate: one of its goals is to be as transparent as
possible, both in terms of software and hardware design and you
definitely get that feeling at the physical level. Unfortunately, that
does mean it feels more brittle than other models: I wouldn't carry it
in my pocket all the time. There is a
case
that may protect the key a little better, but it does not provide an
easy way to hook it into a keyring. In the group picture above, the
FST-01 is the pink plastic thing, which is a rubbery casing I received
along with the device when I got it.
Notice how the USB connectors of the YubiKeys differ from the other two:
while the FST-01 and the Nitrokey have standard USB connectors, the
YubiKey has only a "half-connector", which is what makes it thinner than
the other two. The "Nano" form factor takes this even further and almost
disappears in the USB port. Unfortunately, this arrangement means the
YubiKey NEO often comes loose and falls out of the USB port, especially
when connected to a laptop. On my workstation, however, it usually stays
put even with my whole keyring hanging off of it. I suspect this adds
more strain to the host's USB port but that's a tradeoff I've lived with
without any noticeable wear so far. Finally, the NEO has this peculiar
feature of supporting NFC for certain operations, as LWN previously
covered, but I haven't used that
feature yet.
The Nitrokey Pro looks like a normal USB key, in contrast with the other
two devices. It does feel a little brittle when compared with the
YubiKey, although only time will tell how much of a beating it can take.
It has a small ring in the case so it is possible to carry it directly
on your keyring, but I would be worried the cap would come off
eventually. Nitrokey devices are also twice as thick as the Yubico
models, which makes them less convenient to carry around on keyrings.

Open and closed designs
The FST-01 is as open as hardware comes, down to the PCB design
available as KiCad files in this Git
repository. The
software running on the card is the
Gnuk firmware that implements the
OpenPGP card protocol, but you can
also get it with firmware implementing a true random number generator
(TRNG) called
NeuG
(pronounced "noisy"); the device is
programmable through a
standard Serial Wire
Debug (SWD) port. The
Nitrokey Start model also runs the Gnuk firmware. However, the Nitrokey
website announces only ECC and RSA 2048-bit
support for the Start, while the FST-01 also supports RSA-4096.
Nitrokey's founder Jan Suhr, in a private email, explained that this is
because "Gnuk doesn't support RSA-3072 or larger at a reasonable speed".
Its devices (the Pro, Start, and HSM models) use a similar chip to the
FST-01: the STM32F103
microcontroller.
Nitrokey also publishes its hardware designs on
GitHub, which show that the Pro is basically a
fork of the FST-01, according to the
ChangeLog.
I opened the case to confirm it was using the STM MCU, something I
should warn you against; I broke one of the pins holding it together
when opening it, so now it's even more fragile. But at least I was able
to confirm it was built using the STM32F103TBU6 MCU, like the FST-01.
But this is where the comparison ends: on the back side, we find a SIM
card reader that holds the OpenPGP
card that, in turn, holds
the private key material and does the cryptographic operations. So, in
effect, the Nitrokey Pro is really an evolution of the original OpenPGP
card readers.
Nitrokey confirmed the OpenPGP card featured in the Pro is the same as
the one shipped by
the Free Software Foundation Europe (FSFE): the
BasicCard built by ZeitControl. Those cards,
however, are covered by NDAs and the firmware is only partially open
source.
This makes the Nitrokey Pro less open than the FST-01, but that's an
inevitable tradeoff when choosing a design based on the OpenPGP cards,
which Suhr described to me as "pretty proprietary". There are other
keycards out there, however, for example the
SLJ52GDL150-150k
smartcard suggested by
Debian developer Yves-Alexis Perez, which he prefers as it is certified
by French and German authorities. In that blog post, he also said he was
experimenting with the GPL-licensed OpenPGP
applet implemented by the French
ANSSI.
But the YubiKey devices are even further away in the closed-design
direction. Both the hardware designs and firmware are proprietary. The
YubiKey NEO, for example, cannot be upgraded at all, even though it is
based on an open firmware. According to Yubico's
FAQ,
this is due to "best security practices": "There is a 'no upgrade'
policy for our devices since nothing, including malware, can write to
the firmware."
I find this decision questionable in a context where security updates
are often more important than trying to achieve a bulletproof design,
which may simply be impossible. And the YubiKey NEO did suffer from a
critical security
issue
that allowed attackers to bypass the PIN protection on the card, which
raises the question of the actual protection of the private key material
on those cards. According to Niibe, "some OpenPGP cards store the
private key unencrypted. It is a common attitude for many smartcard
implementations", which was confirmed by Suhr: "the private key is
protected by hardware mechanisms which prevent its extraction and
misuse". He is referring to the use of tamper
resistance.
After that security issue, there was no other option for YubiKey NEO
users than to get a new keycard (for free, thankfully) from Yubico,
which also meant discarding the private key material on the key. For
OpenPGP keys, this may mean having to bootstrap the web of trust from
scratch if the keycard was responsible for the main certification key.
But at least the NEO is running free software based on the OpenPGP card
applet and the
source is still available on
GitHub. The YubiKey 4, on the
other hand, is now closed
source,
which was controversial when the new model was announced last year. It
led the main Linux Foundation system administrator, Konstantin
Ryabitsev, to withdraw his
endorsement
of Yubico products. In response, Yubico argued that this approach was
essential to the security of its
devices,
which are now based on "a secure chip, which has built-in
countermeasures to mitigate a long list of attacks". In particular, it
claims that:

A commercial-grade AVR or ARM controller is unfit to be used in a
security product. In most cases, these controllers are easy to attack,
from breaking in via a debug/JTAG/TAP port to probing memory contents.
Various forms of fault injection and side-channel analysis are
possible, sometimes allowing for a complete key recovery in a
shockingly short period of time.

While I understand those concerns, they eventually come down to the
trust you have in an organization. Not only do we have to trust Yubico,
but also hardware manufacturers and designs they have chosen. Every step
in the hidden supply chain is then trusted to make correct technical
decisions and not introduce any backdoors.
History, unfortunately, is not on Yubico's side: Snowden revealed the
example of RSA Security
accepting what renowned cryptographer Bruce Schneier described as a
"bribe"
from the NSA to weaken its ECC implementation, by using the presumably
backdoored Dual_EC_DRBG
algorithm. What makes Yubico or its suppliers so different from RSA
Security? Remember that RSA Security used to be an adamant opponent of
the degradation of encryption standards, campaigning against the
Clipper chip in the first
crypto wars.
Even if we trust the Yubico supply chain, how can we trust a closed
design using what basically amounts to security through obscurity?
Publicly auditable designs are an important tradition in cryptography,
and that principle shouldn't stop when software is frozen into silicon.
In fact, a critical vulnerability called
ROCA
disclosed recently affects closed "smartcards" like the
YubiKey 4
and allows full private key recovery from the public key if the key was
generated on a vulnerable keycard. When speaking with Ars
Technica,
the researchers outlined the importance of open designs and questioned
the reliability of certification:

Our work highlights the dangers of keeping the design secret and the
implementation closed-source, even if both are thoroughly analyzed and
certified by experts. The lack of public information causes a delay in
the discovery of flaws (and hinders the process of checking for them),
thereby increasing the number of already deployed and affected devices
at the time of detection.

This issue with open hardware designs seems to be a recurring topic of
conversation on the Gnuk mailing
list. For
example, there was a
discussion
in September 2017 regarding possible hardware vulnerabilities in the STM
MCU that would allow extraction of encrypted key material from the key.
Niibe referred to a
talk
presented at the WOOT 17
workshop, where Johannes Obermaier and Stefan Tatschner, from the
Fraunhofer Institute, demonstrated attacks against the STM32F0 family of
MCUs. It is still unclear if those attacks also apply to the older STM32F1
design used in the FST-01, however. Furthermore, extracted private key
material is still protected by the user's passphrase, but the Gnuk uses a weak
key derivation function, so brute-forcing attacks may be possible.
Fortunately, there is work in progress to
make GnuPG hash the passphrase before sending it to the keycard, which
should make such attacks harder if not completely pointless.
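To illustrate the general idea (a sketch of the technique only, not
GnuPG's actual mechanism or parameters), hashing on the host means the
card never sees the raw passphrase and every brute-force guess against
extracted material has to pay the full cost of the key derivation:

# Illustrative sketch of host-side passphrase strengthening; not
# GnuPG's actual mechanism or parameters.
import hashlib
import os

passphrase = b"correct horse battery staple"
salt = os.urandom(16)  # would need to be stored alongside the key
# Each brute-force guess now costs 600,000 PBKDF2 iterations instead
# of a single cheap comparison against a weakly derived value.
card_secret = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 600_000)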
When asked about the Yubico claims in a private email, Niibe did
recognize that "it is true that there are more weak points in general
purpose implementations than special implementations". During the last
DebConf in Montreal, Niibe
explained:

If you don't trust me, you should not buy from me. Source code
availability is only a single factor: someone can maliciously replace
the firmware to enable advanced attacks.

Niibe recommends that users "build the firmware yourself", also saying the
design of the FST-01 uses normal hardware that "everyone can replicate".
Those advantages are hard to deny for a cryptographic system: using more
generic components makes it harder for hostile parties to mount targeted
attacks.
A counter-argument here is that it can be difficult for a regular user
to audit such designs, let alone physically build the device from
scratch but, in a mailing list discussion, Debian developer Ian Jackson
explained
that:

You don't need to be able to validate it personally. The thing spooks
most hate is discovery. Backdooring supposedly-free hardware is harder
(more costly) because it comes with greater risk of discovery.
To put it concretely: if they backdoor all of them, someone (not
necessarily you) might notice. (Backdooring only yours involves
messing with the shipping arrangements and so on, and supposes that
you specifically are of interest.)

Since, as far as we know, the STM microcontrollers are not
backdoored, I would tend to favor those devices instead of proprietary
ones, as such a backdoor would be more easily detectable than in a
closed design. Even though physical attacks may be possible against
those microcontrollers, in the end, if an attacker has physical access
to a keycard, I consider the key compromised, even if it has the best
chip on the market. In our email exchange, Niibe argued that "when a
token is lost, it is better to revoke keys, even if the token is
considered secure enough". So like any other device, physical compromise
of tokens may mean compromise of the key and should trigger
key-revocation procedures.

Algorithms and performance
To establish reliable performance results, I wrote a benchmark program
naively called crypto-bench
that could produce comparable results between the different keys. The
program takes each algorithm/keycard combination and runs 1000
decryptions of a 16-byte file (one AES-128 block) using GnuPG, after
priming it to get the password cached. I assume the overhead of GnuPG
calls to be negligible, as it should be the same across all tokens, so
comparisons are possible. AES encryption is constant across all tests as
it is always performed on the host and fast enough to be irrelevant in
the tests.
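A simplified sketch of that benchmark loop (assuming a hypothetical
16-byte test.gpg file encrypted to the key on the card under test)
could look like this:

# Simplified sketch of the benchmark loop; test.gpg is a hypothetical
# 16-byte file encrypted to the key stored on the keycard under test.
import subprocess
import time

def decrypt():
    subprocess.run(
        ["gpg", "--quiet", "--decrypt", "test.gpg"],
        stdout=subprocess.DEVNULL,
        check=True,
    )

decrypt()  # prime the agent so the PIN/passphrase is cached
start = time.monotonic()
for _ in range(1000):
    decrypt()
print("mean time: %.3fs" % ((time.monotonic() - start) / 1000))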
I used the following:

Nitrokey Pro 0.8 (latest firmware)

FST-01, running Gnuk version 1.2.5 (latest firmware)

YubiKey NEO OpenPGP applet 1.0.10 (not upgradable)

YubiKey 4 4.2.6 (not upgradable)

I ran crypto-bench for each keycard, which resulted in the following:

Algorithm         Device        Mean time (s)
ECDH-Curve25519   CPU           0.036
                  FST-01        0.135
RSA-2048          CPU           0.016
                  YubiKey-4     0.162
                  Nitrokey-Pro  0.610
                  YubiKey-NEO   0.736
                  FST-01        1.265
RSA-4096          CPU           0.043
                  YubiKey-4     0.875
                  Nitrokey-Pro  3.150
                  FST-01        8.218

There we see the performance of the four keycards I tested, compared
with the same operations done without a keycard: the "CPU" device. That
provides the baseline time of GnuPG decrypting the file. The first
obvious observation is that using a keycard is slower: in the best
scenario (FST-01 + ECC) we see a four-fold slowdown, but in the worst
case (also FST-01, but RSA-4096), we see a catastrophic 200-fold
slowdown. When I
presented
the results on the Gnuk mailing list, GnuPG developer Werner Koch
confirmed those "numbers are as expected":

With a crypto chip RSA is much faster. By design the Gnuk can't be as
fast - it is just a simple MCU. However, using Curve25519 Gnuk is
really fast.
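For concreteness, the slowdown factors quoted above can be recomputed
directly from the table:

# Recomputing the slowdown factors from the benchmark table above.
cpu   = {"ECDH-Curve25519": 0.036, "RSA-2048": 0.016, "RSA-4096": 0.043}
fst01 = {"ECDH-Curve25519": 0.135, "RSA-2048": 1.265, "RSA-4096": 8.218}

for algo in cpu:
    print("%s: %.0fx slower" % (algo, fst01[algo] / cpu[algo]))
# ECDH-Curve25519: 4x slower
# RSA-2048: 79x slower
# RSA-4096: 191x slower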

And yes, the FST-01 is really fast at doing ECC, but it's also the only
keycard that handles ECC in my tests; the Nitrokey Start and Nitrokey
HSM should support it as well, but I haven't been able to test those
devices. Also note that the YubiKey NEO doesn't support RSA-4096 at all,
so we can only compare RSA-2048 across keycards. We should note,
however, that ECC is slower than RSA on the CPU, which suggests the
Gnuk ECC implementation used by the FST-01 is exceptionally fast.
In
discussions
about improving the performance of the FST-01, Niibe estimated the user
tolerance threshold to be "2 seconds decryption time". In a new
design
using the STM32L432 microcontroller, Aurelien Jarno was able to bring
the numbers for RSA-2048 decryption from 1.27s down to 0.65s, and for
RSA-4096, from 8.22s down to 3.87s. RSA-4096 is still beyond the
two-second threshold, but at least it brings the FST-01 close to the
YubiKey NEO and Nitrokey Pro performance levels.
We should also underline the superior performance of the YubiKey 4:
whatever that thing is doing, it's doing it faster than anyone else. It
does RSA-4096 faster than the FST-01 does RSA-2048, and almost as fast
as the Nitrokey Pro does RSA-2048. We should also note that the Nitrokey
Pro also fails to cross the two-second threshold for RSA-4096
decryption.
For me, the FST-01's stellar performance with ECC outshines the other
devices. Maybe it says more about the efficiency of the algorithm than
the FST-01 or Gnuk's design, but it's definitely an interesting avenue
for people who want to deploy those modern algorithms. So, in terms of
performance, it is clear that both the YubiKey 4 and the FST-01 take the
prize in their own areas (RSA and ECC, respectively).

Conclusion
In the above presentation, I have evaluated four cryptographic keycards
for use with various OpenPGP operations. What the results show is that
the only efficient way of storing a 4096-bit encryption key on a keycard
would be to use the YubiKey 4. Unfortunately, I do not feel we should
put our trust in such closed designs so I would argue you should either
stick with 2048-bit encryption subkeys or keep the keys on disk.
Considering that losing such a key would be catastrophic, this might be
a good approach anyway. You should also consider switching to ECC
encryption: even though it may not be supported everywhere, GnuPG
supports having multiple encryption subkeys on a keyring: if one
algorithm is unsupported (e.g. GnuPG 1.4 doesn't support ECC), it will
fall back to a supported algorithm (e.g. RSA). Do not forget your
previously encrypted material doesn't magically re-encrypt itself using
your new encryption subkey, however.
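As a sketch of that setup (assuming GnuPG 2.1 or later and a
hypothetical key fingerprint), one could add both an ECC and an RSA
encryption subkey so that older implementations fall back to RSA:

# Sketch: add both an ECC and an RSA encryption subkey, so clients
# without ECC support can fall back to RSA. Assumes GnuPG >= 2.1;
# the fingerprint is hypothetical.
import subprocess

FINGERPRINT = "0123456789ABCDEF0123456789ABCDEF01234567"  # hypothetical

for algo in ("cv25519", "rsa2048"):
    subprocess.run(
        ["gpg", "--quick-add-key", FINGERPRINT, algo, "encr", "1y"],
        check=True,
    )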
For authentication and signing keys, speed is not such an issue, so I
would warmly recommend either the Nitrokey Pro or Start, or the FST-01,
depending on whether you want to start experimenting with ECC
algorithms. Availability also seems to be an issue for the FST-01. While
you can generally get the device when you meet Niibe in person for a few
bucks (I bought mine for around $30 Canadian), the Seeed online
shop says the device is out of
stock
at the time of this writing, even though Jonathan McDowell
said
that may be inaccurate in a debian-project discussion. Nevertheless,
this issue may make the Nitrokey devices more attractive. When deciding
on using the Pro or Start, Suhr offered the following advice:

In practice smart card security has been proven to work well (at least
if you use a decent smart card). Therefore the Nitrokey Pro should be
used for high security cases. If you don't trust the smart card or if
Nitrokey Start is just sufficient for you, you can choose that one.
This is why we offer both models.

So far, I have created a signing subkey and moved that and my
authentication key to the YubiKey NEO, because it's a device I
physically trust to keep itself together in my pockets and I was already
using it. It has served me well so far, especially with its extra
features like U2F and
HOTP
support, which I use frequently. Those features are also available on
the Nitrokey Pro, so that may be an alternative if I lose the YubiKey. I
will probably move my main certification key to the FST-01 and a
LUKS-encrypted USB disk, to keep that certification key offline but
backed up on two different devices. As for the encryption key, I'll wait
for keycard performance to improve, or simply switch my whole keyring to
ECC and use the FST-01 or Nitrokey Start for that purpose.

16 October 2017

Following the news about the ROCA vulnerability (weak key
generation in Infineon-based smartcards, more info here and
here) I can confirm that the Almex smartcard I mentioned in
my last post (which
is Infineon-based) is indeed vulnerable.
I've contacted Almex to get more details, but if you were
interested in buying that smartcard, you might want to refrain for
now.
It does *not* affect keys generated off-card and later injected
(the process I use myself).

14 October 2017

I find it fascinating how many of the people being locked inside
the proposed border wall between the USA and Mexico support the idea. The
proposal to keep Mexicans out reminds me of
the
propaganda twist from the East German government calling the wall
the Antifascist Bulwark after erecting the Berlin Wall, claiming
that the wall was erected to keep enemies from creeping into East
Germany, while it was obvious to the people locked inside it that it
was erected to keep the people from escaping.
Do the people in the USA supporting this wall really believe it is a
one-way wall, only keeping people on the outside from getting in,
while not keeping people on the inside from getting out?

So those removal bugs' severities will be raised to RC in approximately a month.

We still don't have any solutions for Qt 4 or 5.

For the Qt 5 case we will probably keep the bug open until Qt 5.10, which should bring OpenSSL 1.1 support, is in the archive, *or* until the FTP masters decide to remove OpenSSL 1.0. In the latter case the fate will be the same as with Qt 4, described below.

For Qt 4 we do not have patches available and there will probably be none in time (remember we do not have upstream support). That, plus the fact that we are actively trying to remove it from the archive, means we will remove OpenSSL support. This might mean that apps using Qt 4:

- Might cease to work.
- Might keep working:
  - Informing their users that no SSL support is available (the programmer did a good job).
  - Not informing their users that no SSL support is available and establishing connections nonetheless (the programmer might not have done a good job).

10 October 2017

A long time
ago, I switched my GnuPG setup to a smartcard-based one. I kept
using the same master key, but:

copied the rsa4096 master key to a master smartcard, for
when I need to sign (certify) other keys;

created rsa2048 subkeys (for signature, encryption and
authentication) and moved them to an OpenPGP smartcard for daily
usage.
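
Creating those subkeys can be scripted; here is a minimal sketch
(assuming GnuPG 2.1 or later and a hypothetical master key
fingerprint). Moving each subkey to the card is then done
interactively with the keytocard command inside gpg --edit-key:

# Sketch of creating the three rsa2048 subkeys described above.
# Assumes GnuPG >= 2.1; the fingerprint is hypothetical.
import subprocess

FINGERPRINT = "89ABCDEF0123456789ABCDEF0123456789ABCDEF"  # hypothetical

for usage in ("sign", "encr", "auth"):
    subprocess.run(
        ["gpg", "--quick-add-key", FINGERPRINT, "rsa2048", usage],
        check=True,
    )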

I've been working with that setup for a few years now and it is
working perfectly fine. The signature counter on the OpenPGP basic
card is a bit north of 5000, which is large but not that huge, all
things considered (and not counting authentication and decryption key
usage).

One very nice feature of using a smartcard is that my laptop (or
other machines I work on) never manipulates the private key
directly but only sends requests to the card, which is a really huge
improvement, in my opinion. But it's also not the perfect solution
for me: the OpenPGP
card uses a proprietary platform from ZeitControl, named BasicCard. We have very little information
on the smartcard, besides the fact that Werner Koch trusts
ZeitControl not to mess up. One caveat for me is that the card
does not use a certified secure microcontroller like you would find
in smartcard chips used in debit cards or electronic IDs. That
means it hasn't really been audited by a competent hardware lab, and
thus can't be considered secure against physical attacks. The
card OS software and the application implementing the OpenPGP
specification are not public and have not been audited
either, to the best of my knowledge.

At one point I was interested in the Yubikey
Neo, especially since the architecture Yubico used was common:
a (supposedly) certified platform (secure microcontroller, card OS)
and a GlobalPlatform / JavaCard virtual machine. The applet used in
the Yubikey Neo is open source, too, so
you could take a look at it and identify any issues.

Unfortunately, Yubico transitioned
to a less common and more proprietary infrastructure for the Yubikey
4: it's no longer JavaCard based, and they don't provide the
applet source anymore. This was not really seen as a good move by a
lot of people, including Konstantin
Ryabitsev (kernel.org administrator). Also, it wasn't
possible even for the Yubikey Neo to actually build the applet
yourself and inject it into the card: when the Yubikey leaves the
facility, the applet is already installed and the smartcard is
locked (for obvious security reasons). I've tried asking about
getting a naked/empty Yubikey with developer keys to load the applet
myself, but it was apparently not possible, or would have required
signing an NDA with NXP (the chip maker), which is not really
possible as an individual (not that I really want to anyway).

In the meantime, a coworker actually wrote an OpenPGP JavaCard
applet, with the intention of supporting the latest version of the
OpenPGP specification, and especially elliptic curve
cryptography. The applet is called SmartPGP and has been released on the ANSSI GitHub
repository. I investigated a bit, and found a
smartcard with the right
specifications: certified (in
France or Germany), and supporting JavaCard 3.0.4 (required for
ECC). The card can do RSA2048 (unfortunately not RSA4096) and EC
with NIST (secp256r1, secp384r1, secp521r1) and Brainpool (P256,
P384, P512) curves.

I've ordered some cards, and when they arrived I started playing.
I built the SmartPGP applet and pushed it to a smartcard, then
generated some keys and tried them with GnuPG. I'm right now in the
process of migrating to a new smartcard based on that setup, which
seems to work just fine after a few days.

Part two of this series will describe how to build the applet and
inject it into the smartcard. The process is already documented here
and there, but there are a few things not to forget, like how to lock
the card after provisioning, so I guess having the complete
process somewhere might be useful in case some people want to
reproduce it.