6 June 2020

As a member of the Norwegian Unix
User Group, I have the pleasure of receiving the
USENIX magazine
;login:
several times a year. I rarely have time to read all the articles,
but try to at least skim through them all as there is a lot of nice
knowledge passed on there. I even carry the latest issue with me most
of the time to try to get through all the articles when I have a few
spare minutes.
The other day I came across a nice article titled
"The
Secure Socket API: TLS as an Operating System Service" with a
marvellous idea I hope can make it all the way into the POSIX standard.
The idea is as simple as it is powerful. By introducing a new
socket() option IPPROTO_TLS to use TLS, and a system-wide service to
handle setting up TLS connections, one both makes it trivial to add TLS
support to any program currently using the POSIX socket API, and gains
system-wide control over the certificates, TLS versions and encryption
systems used. Instead of doing this:

int socket = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);

the program code would be doing this:

int socket = socket(PF_INET, SOCK_STREAM, IPPROTO_TLS);

According to the ;login: article, converting a C program to use TLS
would normally modify only 5-10 lines of code, which is amazing
compared to using, for example, the OpenSSL API.
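As a rough illustration, here is a sketch of what such a client could look like. The setsockopt() option for naming the remote host follows the paper's description, but the constant values below are assumptions for illustration; in practice they come from the project's in_tls.h header.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define IPPROTO_TLS          715 /* assumed; normally supplied by the SSA's in_tls.h */
#define TLS_REMOTE_HOSTNAME  87  /* assumed option number, for illustration only */

int main(void)
{
    int fd = socket(PF_INET, SOCK_STREAM, IPPROTO_TLS);

    /* Tell the system TLS service which hostname to validate the
       server certificate against. */
    const char *host = "www.example.com";
    setsockopt(fd, IPPROTO_TLS, TLS_REMOTE_HOSTNAME, host, strlen(host) + 1);

    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(443) };
    inet_pton(AF_INET, "93.184.216.34", &addr.sin_addr); /* illustrative address */
    connect(fd, (struct sockaddr *)&addr, sizeof(addr)); /* TLS handshake happens here */

    /* From here on, plain send()/recv(); encryption happens below the API. */
    const char *req = "GET / HTTP/1.1\r\nHost: www.example.com\r\n\r\n";
    send(fd, req, strlen(req), 0);

    char buf[4096];
    ssize_t n = recv(fd, buf, sizeof buf - 1, 0);
    if (n > 0) { buf[n] = 0; fputs(buf, stdout); }
    close(fd);
    return 0;
}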
The project has set up the
https://securesocketapi.org/
web site to spread the idea, and the code for a kernel module and the
associated system daemon is available from two github repositories:
ssa and
ssa-daemon.
Unfortunately there is no explicit license information with the code,
so its copyright status is unclear. A
request to clarify
this has gone unanswered since 2018-08-17.
I love the idea of extending socket() to gain TLS support, and
understand why it is an advantage to implement this as a kernel module
and system-wide service daemon, but I cannot help thinking that it
would be a lot easier to get projects to move to this way of setting
up TLS if it were done with a user-space approach, where programs
wanting to use this API could just link with a wrapper
library.
I recommend you check out this simple and powerful approach to more
secure network connections. :)
As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

I just did a quick test of different compression options in Debian. The source file is a 1.1G MySQL dump file. The time is user CPU time on an i7-930 running under KVM; the compression programs may have different levels of optimisation for other CPU families.
Facebook people designed the zstd compression system (here's a page giving an overview of it [1]). It has some interesting new features that can provide real differences at scale (like unusually large windows and pre-defined dictionaries), but I just tested the default mode and the -9 option for more compression. For the SQL file, zstd -9 provides significantly better compression than gzip -9 while taking slightly less CPU time, and zstd with the default option (equivalent to zstd -3) gives much faster compression than gzip -9 while also producing slightly smaller output. For this use case bzip2 is too slow for inline compression of a MySQL dump, as the dump process locks tables and can hang clients. The lzma and xz compression algorithms provide significant benefits in size but the time taken is grossly disproportionate.
In a quick check of my collection of files compressed with gzip, I was only able to find one file that got less compression with zstd with default options, and that file got better compression with zstd -9. So zstd seems to beat gzip everywhere by every measure.
The bzip2 compression seems to be obsolete: zstd -9 is much faster and produces slightly smaller output.
Both xz and lzma offer a combination of compression and time taken that zstd can't beat (for this file type at least). The ultra compression mode 22 gives 2% smaller output files, but almost 28 minutes of CPU time for compression is a bit ridiculous. There is a threaded mode for zstd that could potentially allow a shorter wall clock time for zstd --ultra -22 than lzma/xz while also giving better compression.

Compression       Time    Size

zstd              5.2s    130m
zstd -9           28.4s   114m
gzip -9           33.4s   141m
bzip2 -9          3m51    119m
lzma              6m20    97m
xz                6m36    97m
zstd -19          9m57    99m
zstd --ultra -22  27m46   95m

Conclusion
For distributions like Debian, which have large archives of files that are compressed once and transferred a lot, the zstd --ultra -22 compression might be useful with multi-threaded compression. But given that Debian already has xz in use it might not be worth changing until faster CPUs with lots of cores become more commonly available. One could argue that for Debian it doesn't make sense to change from xz, as hard drives seem to be gaining capacity (and shrinking in physical size) faster than the Debian archive is growing. One possible reason for adopting zstd in a distribution like Debian is that there are more tuning options for things like memory use. It would be possible to have packages for an architecture like ARM, which tends to have less RAM, compressed in a way that decreases memory use on decompression.
For general compression such as compressing log files and making backups, it seems that zstd is the clear winner. Even bzip2 is far too slow, and in my tests zstd clearly beats gzip for every combination of compression and time taken. There may be some corner cases where gzip can compete on compression time due to CPU features, optimisation for specific CPUs, etc, but I expect that in almost all cases zstd will win for both compression size and time. As an aside, I once noticed the 32-bit build of gzip compressing faster than the 64-bit version on an Opteron system; the 32-bit version had assembly optimisation and the 64-bit version didn't at that time.
To create a tar archive you can run tar czf or tar cJf to create an archive with gzip or xz compression. To create an archive with zstd compression you have to use tar --zstd -cf, which is 7 extra characters to type. It's likely that for most casual archive creation (e.g. for copying files around on a LAN or USB stick) saving 7 characters of typing is more of a benefit than saving a small amount of CPU time and storage space. It would be really good if tar got a single-character option for zstd compression.
The external dictionary support in zstd would work really well with rsync for backups. Currently rsync only supports zlib; adding zstd support would be a good project for someone (unfortunately I don't have enough spare time).
Now I will change my database backup scripts to use zstd.
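For example, a pipeline along these lines (a sketch; the database name and file names are placeholders):

# compress the dump inline so tables stay locked as briefly as possible
mysqldump --single-transaction mydb | zstd -9 -T0 > mydb-$(date +%F).sql.zst

# test the archive, then restore by decompressing to stdout
zstd -t mydb-2020-06-06.sql.zst
zstd -dc mydb-2020-06-06.sql.zst | mysql mydb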
Update:
The command tar acvf a.zst filenames will create a zstd-compressed tar archive; the a option to GNU tar makes it autodetect the compression type from the file name. Thanks, Enrico!
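For example:

tar acvf backup.tar.zst mydir/   (the a option picks zstd from the .zst suffix)
tar xvf backup.tar.zst           (extraction autodetects the compression by default)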

4 June 2020

I've been struggling with replacing parts of my old sysadmin
monitoring toolkit (previously built with Nagios, Munin and Smokeping)
with more modern tools (specifically Prometheus, its "exporters" and
Grafana) for a while now.
Replacing Munin with Prometheus and Grafana is fairly straightforward:
the network architecture ("server pulls metrics from all nodes") is
similar and there are lots of exporters. They are a little harder to
write than Munin modules, but that makes them more flexible and
efficient, which was a huge problem in Munin. I wrote a Migrating
from Munin guide that summarizes those differences. Replacing
Nagios is much harder, and I still haven't quite figured out if it's
worth it.

How does Smokeping work
Leaving those two aside for now, I'm left with Smokeping, which I used
in my previous job as a decentralized looking glass to diagnose
routing issues; it was handy for debugging long-term
problems. Smokeping is a strange animal: it's fundamentally similar to
Munin, except it's harder to write plugins for, so most people just
use it for ping, something at which it excels.
Its trick is this: instead of doing a single ping and returning a
single metric, it does multiple pings and returns multiple
metrics. Specifically, Smokeping will send multiple ICMP packets (20
by default), with a low interval (500ms by default) and a single
retry. It also pings multiple hosts at once, so it can scan many
targets quickly. You therefore see network conditions affecting one
host reflected in hosts further down (or up) the chain. The multiple
metrics also mean you can draw graphs with "error bars", which
Smokeping shows as "smoke" (hence the name). You also get per-metric
packet loss.
Basically, Smokeping runs a single fping command and collects the
output in an RRD database:
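A representative invocation, assuming the stock FPing probe (a reconstruction for illustration, not the literal command), would be:

fping -C $count -q -B $backoff -r 1 -i $mininterval -p $hostinterval -t $timeout $targets

(fping takes -i, -p and -t in milliseconds, so the second-based settings below get converted.)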

where those parameters are, by default:

$count is 20 (packets)

$backoff is 1 (avoid exponential backoff)

$timeout is 1.5s

$mininterval is 0.01s (minimum wait interval between any target)

$hostinterval is 1.5s (minimum wait between probes on a single target)

It can also override things like the source address and TOS
fields. This probe will complete in between 30 and 60 seconds, if my
math is right (at 0% and 100% packet loss respectively).

How to draw Smokeping graphs in Grafana
A naive implementation of Smokeping in Prometheus/Grafana would be to
use the blackbox exporter and create a dashboard displaying those
metrics. I've done this at home, and then I realized that I was
missing something. Here's what I did.

Set the Right Y axis Unit to percent (0.0-1.0) and set
Y-max to 1

Then set the entire thing to Repeat, on target,
vertically. And you need to add a target variable like
label_values(probe_success, instance).
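For reference, the underlying queries can be as simple as the following (a sketch assuming the blackbox exporter's standard metric names, not the exact dashboard definition):

probe_duration_seconds{instance=~"$target"}
1 - probe_success{instance=~"$target"}

The first graphs the round-trip time per target; the second turns probe failures into a 0.0-1.0 "loss" value.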

The result looks something like this:
Not bad, but not Smokeping
This actually looks pretty good!
I've uploaded the resulting dashboard in the Grafana dashboard
repository.

What is missing?
Now, that doesn't exactly look like Smokeping, does it? It's pretty
good, but it's not quite what we want. What is missing is variance,
the "smoke" in Smokeping.
There's a good article about replacing Smokeping with
Grafana. They wrote a custom script to write samples into InfluxDB
so unfortunately we can't use it in this case, since we don't have
InfluxDB's query language. I couldn't quite figure out how to do the
same in PromQL. I tried:
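The exact queries aren't reproduced here; hypothetical reconstructions in the same spirit, assuming the blackbox exporter's probe_duration_seconds metric, would be:

stddev_over_time(probe_duration_seconds[$__interval])
stdvar_over_time(probe_duration_seconds[$__interval])
max_over_time(probe_duration_seconds[$__interval]) - min_over_time(probe_duration_seconds[$__interval])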

The first two give zero for all samples. The latter works, but doesn't
look as good as Smokeping. So there might be something I'm missing.
SuperQ wrote a special exporter for this called
smokeping_prober that came out of this discussion in the blackbox
exporter. Instead of delegating scheduling and target definition
to Prometheus, the targets are set in the exporter.
They also take a different approach than Smokeping: instead of
recording the individual variations, they delegate that to Prometheus,
through the use of "buckets". Then they use a query like this:
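The query isn't shown here; presumably it is a histogram_quantile() over the prober's latency buckets, along these lines (the metric name is an assumption):

histogram_quantile(0.9, rate(smokeping_response_duration_seconds_bucket[$__interval]))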

This is the rationale behind SuperQ's implementation:

Yes, I know about smokeping's bursts of pings. IMO, smokeping's data
model is flawed that way. This is where I intentionally deviated
from the smokeping exact way of doing things. This prober sends a
smooth, regular series of packets in order to be measuring at
regular controlled intervals.
Instead of 20 packets, over 10 seconds, every minute. You send one
packet per second and scrape every 15. This has the same overall
effect, but the measurement is, IMO, more accurate, as it's a
continuous stream. There's no 50 second gap of no metrics about the
ICMP stream.
Also, you don't get back one metric for those 20 packets, you get
several. Min, Max, Avg, StdDev. With the histogram data, you can
calculate much more than just that using the raw data.
For example, IMO, avg and max are not all that useful for continuous
stream monitoring. What I really want to know is the 90th percentile
or 99th percentile.
This smokeping prober is not intended to be a one-to-one replacement
for exactly smokeping's real implementation. But simply provide
similar functionality, using the power of Prometheus and PromQL to
make it better.
[...]
one of the reason I prefer the histogram datatype, is you can use
the heatmap panel type in Grafana, which is superior to the
individual min/max/avg/stddev metrics that come from smokeping.
Say you had two routes, one slow and one fast. And some pings are
sent over one and not the other. Rather than see a wide min/max
equaling a wide stddev, the heatmap would show a "line" for both
routes.

That's an interesting point. I have also ended up adding a heatmap
graph to my dashboard, independently. And it is true that it shows
those "lines" much better... So maybe, if we ignore legacy, we're
actually happy with what we get, even with the plain blackbox
exporter.
So yes, we're missing pretty "fuzz" lines around the main lines, but
maybe that's alright. It would be possible to do the equivalent to
the InfluxDB hack, with queries like:
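Again, hypothetical reconstructions assuming blackbox exporter metric names:

min_over_time(probe_duration_seconds[$__interval])
avg_over_time(probe_duration_seconds[$__interval])
max_over_time(probe_duration_seconds[$__interval])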

The output looks something like this:
Looks more like Smokeping!
But there's a problem there: see how the middle graph "dips" sometimes
below 20ms? That's the min_over_time function (incorrectly, IMHO)
returning zero. I haven't quite figured out how to fix that, and I'm
not sure it is better. But it does look more like Smokeping than the
previous graph.
Update: I forgot to mention one big thing that this setup is
missing. Smokeping has this nice feature that you can order and group
probe targets in a "folder"-like hierarchy. It is often used to group
probes by location, which makes it easier to scan a lot of
targets. This is harder to do in this setup. It might be possible to
set up location-specific "jobs" and select based on that, but it's not
exactly the same.

Welcome to the May 2020 report from the Reproducible Builds project.
One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. Nonetheless, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into seemingly secure software during the various compilation and distribution processes.
In these reports we outline the most important things that we and the rest of the community have been up to over the past month.

Recent years saw a number of supply chain attacks that leverage the increasing use of open source during software development, which is facilitated by dependency managers that automatically resolve, download and install hundreds of open source packages throughout the software life cycle.

This means that anyone can recreate the same binaries produced from our official release process. Now anyone can verify that the release binaries were created using the source code we say they were created from. No single person or computer needs to be trusted when producing the binaries now, which greatly reduces the attack surface for Sia users.

Synchronicity is a distributed build system for Rust build artifacts which have been published to crates.io. The goal of Synchronicity is to provide a distributed binary transparency system which is independent of any central operator.
The Comparison of Linux distributions article on Wikipedia now features a Reproducible Builds column indicating whether distributions approach and progress towards achieving reproducible builds.

Drop the (default) shell=False keyword argument to subprocess.Popen so that the potentially-unsafe shell=True is more obvious. []

Perform string normalisation in Black [] and include the Black output in the assertion failure too [].

Allow a bare try/except block when cleaning up temporary files with respect to the flake8 quality assurance tool. []

Rename in_dsc_path to dsc_in_same_dir to clarify the use of this variable. []

Abstract out the duplicated parts of the debian_fallback class [] and add descriptions for the file types. []

Various commenting and internal documentation improvements. [][]

Rename the Openssl command class to OpenSSLPKCS7 to accommodate other command names with this prefix. []

Misc:

Rename the --debugger command-line argument to --pdb. []

Normalise filesystem stat(2) birth times (i.e. st_birthtime) in the same way we do with the stat(1) command's Access: and Change: times to fix a nondeterministic build failure in GNU Guix. (#74)

Ignore case when ordering our file format descriptions. []

Drop, add and tidy various module imports. [][][][]

In addition:

Jean-Romain Garnier fixed a general issue where, for example, LibarchiveMember's has_same_content method was called regardless of the underlying type of file. []

Daniel Fullmer fixed an issue where some filesystems could only be mounted read-only. (!49)

Emanuel Bronshtein provided a patch to prevent the build of the Docker image from containing parts of the build. (#123)

Mattia Rizzolo added an entry to debian/py3dist-overrides to ensure the rpm-python module is used in package dependencies (#89) and moved to using the new execute_after_* and execute_before_* Debhelper rules [].

Add a separate, canonical page for every new release. [][][]

Generate a latest release section and display that with the corresponding date on the homepage. []

Use Jekyll's absolute_url and relative_url where possible [][] and move a number of configuration variables to _config.yml [][].

Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Other tools
Elsewhere in our tooling:
strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. In May, Chris Lamb uploaded version 1.8.1-1 to Debian unstable and Bernhard M. Wiedemann fixed an off-by-one error when parsing PNG image modification times. (#16)
In disorderfs, our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues, Chris Lamb replaced the term "dirents" with "directory entries" in human-readable output/log messages [] and applied the astyle source code formatter with its default settings to the main disorderfs.cpp source file [].
Holger Levsen bumped the debhelper-compat level to 13 in disorderfs [] and reprotest [], and for the GNU Guix distribution Vagrant Cascadian updated the versions of disorderfs to version 0.5.10 [] and diffoscope to version 145 [].

Juri Dispan:

Testing framework
We operate a large and many-featured Jenkins-based testing framework that powers tests.reproducible-builds.org that, amongst many other tasks, tracks the status of our reproducibility efforts as well as identifies any regressions that have been introduced. Holger Levsen made the following changes:

System health status:

Improve page description. []

Add more weight to proxy failures. []

More verbose debug/failure messages. [][][]

Work around strangeness in the Bash shell: let VARIABLE=0 exits with an error. []

Fail loudly if there are more than three .buildinfo files with the same name. []

Document how to reboot all nodes in parallel, working around molly-guard. []

Further work on a Debian package rebuilder:

Workaround and document various issues in the debrebuild script. [][][][]

Improve output in the case of errors. [][][][]

Improve documentation and future goals [][][][], in particular documenting two real-world test cases for an impossible-to-recreate build environment [].

Find the right source package to rebuild. []

Increase the frequency we run the script. [][][][]

Improve downloading and selection of the sources to build. [][][]

Improve version string handling. []

Handle build failures better. [][][]

Also consider "Architecture: all" .buildinfo files. [][]

In addition:

kpcyrd, for Alpine Linux, updated the alpine_schroot.sh script now that a patch for abuild had been released upstream. []

Alexander Couzens of the OpenWrt project renamed the brcm47xx target to bcm47xx. []

Mattia Rizzolo fixed the printing of the build environment during the second build [][][] and made a number of improvements to the script that deploys Jenkins across our infrastructure [][][].

Lastly, Vagrant Cascadian clarified in the documentation that you need to be user jenkins to run the blacklist command [] and the usual build node maintenance was performed by Holger Levsen [][][], Mattia Rizzolo [][] and Vagrant Cascadian [][][].

To make the results accessible and storable, and to create tools around them, they should all follow the same schema: a reproducible builds verification format. The format tries to be as generic as possible to cover all open source projects offering precompiled binaries. It stores the rebuilder results of what is reproducible and what is not.

Do you own your Bitcoins, or do you trust that your app allows you to use your coins while they are actually controlled by them? Do you have a backup? Do they have a copy they didn't tell you about? Did anybody check the wallet for deliberate backdoors or vulnerabilities? Could anybody check the wallet for those?

Elsewhere, Leo had posted instructions on his attempts to reproduce the binaries for the BlueWallet Bitcoin wallet for iOS and Android platforms.
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

IRC: #reproducible-builds on irc.oftc.net.

This month's report was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Jelle van der Waa and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

3 June 2020

At work we have a wonderful Python tool that lets us send CoAP messages to and from our products. Perfect for development work. However, recently I needed to install a copy of the tools onto my personal laptop, because the only work laptops I have access to have completely dead batteries and so are not suitable for taking out into a field to perform RF range tests.

As a company we have chosen not to package internal development tools. I think this is a mistake, but it is not my decision. So I simply copied across the CoAP tools directory and tried to run them. Obviously nothing worked! However, the error messages were enough to work out what dependencies I needed to resolve.
Error #1: Missing libasan.so.2 shared library
By a process of deduction we found that gcc-5 contains libasan2, which is quite old. Debian's snapshot archive was our saviour here:
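Something along these lines does the trick (the snapshot date and suite are illustrative; pick one that still carries gcc-5's libasan2):

echo 'deb http://snapshot.debian.org/archive/debian/20160101T000000Z/ sid main' | sudo tee /etc/apt/sources.list.d/snapshot.list
sudo apt -o Acquire::Check-Valid-Until=false update
sudo apt install libasan2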

Great, that was enough to get the port bindings to work.
Error #2: google.protobuf
When I tried to issue CoAP requests I ran into a missing Python 3 import, google.protobuf. Fortunately this is packaged in Debian buster as part of python3-protobuf:
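sudo apt install python3-protobuf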

BigBlueButton, aka BBB, is a WebRTC conferencing solution that, among many features, allows recording a conference for later replay.
Together with my colleague François Trahay, we have been working on a set of scripts (bbb-downloader) that make it easy (on Linux) to download recordings of BBB conferences, for local backup, video editing, upload to video sharing platforms, etc. This is particularly useful in our distance learning contexts, where students may have to catch up on a live session that was recorded.
We have integrated a hackish solution to capture, as a single video, sessions that contained slide deck presentations. Let me explain why this was necessary.
A nice feature of BBB is that, to present a slide deck, you don't need to share your screen (as a video stream); you just upload your file, which is auto-converted to images that are sent to participants, in sync with your next/previous browsing of the slides.
This is great for participants with low bandwidth, who can see the slides (static images) instead of receiving a full-screen video stream.
But a side effect is that the recording of a class/conference made by BBB replays the slides just as was done live: displaying images one after the other.
While it is easy to retrieve the audio, participants' webcams, or screen sharings as video streams directly available from the recording replay app, it is not the same for the slides, which don't come as a video.
Our script performs a replay, using a Docker container which drives Selenium under the hood, to capture the full replay as a single video, which then includes the slides and everything. You can see my demo of this process in the following video:
bbb-downloader full capture demo.
Performing this capture takes a long time, since the recordings are replayed in real time, but it works. Kudos to elgalu/docker-selenium for the Docker env.
Feel free to test it and profit, or to report issues in the GitHub issues of the repo: https://github.com/trahay/bbb-downloader/.

2 June 2020

Because of the lock-down in France and thanks to Lucas, I have been able to make some progress rebuilding Debian with clang instead of gcc.

TLDR
Instead of patching clang itself, I used a different approach this time: patching Debian tools or implementing workarounds to mitigate issues.
The percentage of packages failing dropped from 4.5% to 3.6% (from 1400 packages to 1110, out of a total of 31014).
I focused on two classes of issues:

Symbol differences
Historically, symbol management for C++ in Debian has been a pain. Russ Allbery wrote a blog post in 2012 explaining the situation. AFAIK, it hasn't changed much.
Once more, I took the dirty approach: if there are new or missing symbols, don't fail the build.
The rationale is the following: packages in the Debian archive are supposed to build without any issue. If there are new or missing symbols, it is probably clang generating a different library, but this library very likely works as expected (and is usable by a program compiled with g++ or clang). It is purely a different approach taken by the compiler developers.
In order to mitigate this issue, before the build starts, I am modifying dpkg-gensymbols to transform the error into a warning.
So, the typical Debian errors "some new symbols appeared in the symbols file" or "some symbols or patterns disappeared in the symbols file" will NOT fail the build.
Unsurprisingly, all but one package (libktorrent) built.
Even if I am pessimistic, I reported a bug on dpkg-dev to evaluate whether we could improve dpkg-gensymbols not to fail in these cases.

For maintainers & upstream
Maintainer of Debian/Ubuntu packages? I am providing a list of failing packages per maintainer: https://clang.debian.net/maintainers.php
For upstream, it is also easy to test with clang. Usually, apt install clang && CC=clang CXX=clang++ <build step> is good enough.
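For a Debian source package, that typically translates to something like the following sketch (the package name is a placeholder, and some build systems need the compiler overridden differently):

apt source somepackage
cd somepackage-*/
CC=clang CXX=clang++ dpkg-buildpackage -us -uc -b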

Conclusion
With these two changes, I have been able to fix about 290 packages. I think I will be able to get that number down a bit more, but we will soon reach a plateau, as many warnings/issues will have to be fixed in the C/C++ code itself.

I have been quite absent from Debian stuff lately, but this has increased since COVID-19 hit us. In this blog post I'll try to sketch what I have been doing to help fight COVID-19 these last few months.

In the beginning

When the pandemic reached Argentina the government started a quarantine. We engineers (like engineers around the world) started to think about how to put our abilities to use to help with the situation. Some worked towards providing more protective equipment for medical staff, some towards increasing the number of ventilators available. Another group of people started thinking about other ways of helping. In Bahía Blanca the idea arose of monitoring some variables remotely and en masse.

Simplified Monitoring of Patients in Situations of Mass Hospitalization (MoSimPa)

This is where the idea of remotely monitored devices came in, and MoSimPa (from the Spanish "monitoreo simplificado de pacientes en situación de internación masiva") started to take form. The idea is simple: oximetry (SpO2), heart rate and body temperature will be recorded and, instead of being shown on a display on the device itself, they will be transmitted and monitored in one or more places. This way medical staff doesn't have to approach a patient constantly, and more patients can be monitored by the same staff at the same time. In-place monitoring can also happen using a cellphone or tablet.

The devices do not have a screen of their own and almost no buttons, making them cheaper to build and thus more in line with the current economic reality of Argentina.

This is where the project Para Ayudar was created. The project aims to produce the aforementioned non-invasive device to be used in health institutions, hospitals, intra-hospital transport and homes.

It is worth noting that the system is designed as a complementary measure for the continuous monitoring of a patient. Care should be taken to check that symptoms and overall patient status don't indicate an immediate life threat. In other words, it is NOT designed for ICUs.

The importance of early pneumonia detection

A vast majority of Covid pneumonia patients I met had remarkably low oxygen saturations at triage seemingly incompatible with life but they were using their cellphones as we put them on monitors. Although breathing fast, they had relatively minimal apparent distress, despite dangerously low oxygen levels and terrible pneumonia on chest X-rays.

This greatly reinforced the idea we were on the right track.

The project from a technical standpoint

As the project is primarily designed for and by Argentinians, the current system design and software documentation are written in Spanish, but the source code (or at least most of it) is written in English. Should anyone need it in English, please do not hesitate to ask me.

General system description

The system comprises the devices, a main machine acting as a server (in our case, for small setups, a Raspberry Pi), and the possibility of accessing the data through cell phones, tablets or other PCs on the network.

The hardware

As of today this is the only part for which I still can't provide schematics, but I'll update this blog post and the technical docs with them as soon as I get my hands on them.

Again, the design is meant to be built in Argentina, where getting our hands on hardware is not easy. Moreover it needs to be as cheap as possible, especially now that the Argentinian currency, the peso, is more depreciated every day. So we decided on an ESP32 as the main microprocessor and a set of Maxim sensor devices. Again, more info when I have them at hand.

The software

Here we have many more components to describe. Firstly, the ESP32 code is done with the Arduino SDK. This part of the stack will receive many updates as soon as the first hardware prototypes are out.

For the rest of the stack I decided to go ahead with whatever is available in Debian stable. Why? Well, Raspbian provides a Debian-stable-based image and I'm a Debian Developer, so things should come naturally to me on that front. Of course each component has its own packaging. I'm one of Debian's Qt maintainers, so using Qt will also be quite natural for me. Plots? Qwt, of course. And with that I have most of my necessities fulfilled. I chose PostgreSQL as the database server and Mosquitto as the MQTT broker.
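As a rough sketch of how a device-side reading could reach the broker, assuming the common Arduino PubSubClient MQTT library (the topic layout, payload format and hostnames below are hypothetical; the real firmware presumably differs):

#include <WiFi.h>
#include <PubSubClient.h>

WiFiClient wifi;
PubSubClient mqtt(wifi);

void setup() {
  WiFi.begin("ssid", "password");               // placeholders
  while (WiFi.status() != WL_CONNECTED) delay(100);
  mqtt.setServer("raspberrypi.local", 1883);    // the RPi running Mosquitto
}

void loop() {
  if (!mqtt.connected()) mqtt.connect("mosimpa-device-01");
  float spo2 = 97.0, bpm = 72.0, temp = 36.6;   // stand-ins for Maxim sensor reads
  char payload[64];
  snprintf(payload, sizeof(payload),
           "{\"spo2\":%.1f,\"bpm\":%.1f,\"temp\":%.1f}", spo2, bpm, temp);
  mqtt.publish("mosimpa/device-01/vitals", payload);  // hypothetical topic
  mqtt.loop();
  delay(5000);
}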

And for managing patients, devices, locations and internments (CRUD anyone?) there is currently a Qt-based application called mosimpa-abm.

ABM main screen

ABM internments view

The idea is to replace it with a web service so it doesn't needs to be confined to the RPi or require installations in other machines. I considered using webassembly but I would have to also build PostgreSql in order to compile Qt's plugin.

Translations? Of course! As I have already mentioned the code is written in English. Qt allows to easily translate applications, so I keep a Spanish one as the code changes (and we are primarily targeting spanish-speaking people). But of course this also means it can be easily translated to whichever language is necessary.

Here is my transparent report for my work on the Debian Long Term Support (LTS) and Debian Extended Long Term Support (ELTS), which extend the security support for past Debian releases, as a paid contributor.
In May, the monthly sponsored hours were split evenly among contributors depending on their max availability - I was assigned 17.25h for LTS (out of 30 max; all done) and 9.25h for ELTS (out of 20 max; all done).
A survey will be published very shortly to gather feedback from all parties involved in LTS (users, other Debian teams...). Let us know what you think, so we can start the forthcoming (Stretch) LTS cycle in the best conditions.
Discussion is progressing on the funding and governance of larger LTS-related projects. Who should decide: contributors, Freexian, sponsors? Do we fund with a percentage or by capping the resources allocated to security updates? I voiced concerns over funding these at the expense of smaller, more organic, more recurrent tasks that are less easy to specify but nevertheless contribute greatly to the overall quality.
ELTS - Wheezy

1 June 2020

A Quick Recap from last year:
Kotlin is being packaged under the Google Summer of Code within the Debian organization itself. The major reason for bringing Kotlin into Debian is to update all the Android packages, which are now heavily dependent upon the Kotlin libraries.
The major work to bring Kotlin into Debian was done for version 1.3.30 by Saif Abdul Cassim (who goes by m36 on IRC) as part of his GSoC 2019.
All his contributions to the team can be found in his blog posts.
So, for now, we have a bootstrap package and a Kotlin package for version 1.3.30.
There were still changes needed, as we lacked some of the dependencies for Kotlin, and the source package lacked copyright information and didn't comply with Debian standards.
What has the present year brought for Kotlin?
To be specific, the following dependencies were mainly left for Kotlin:

JLine3

intellij-community-idea

kotlin-bootstrap

And we lack documentation to get newbies started :(
Most importantly, the crucial part was, and still is, figuring out how to upload the package.
For GSoC'20, three students are selected as a part of project Android SDK tools in Debian.
What's the work done/left?
Work Done

A couple of dependencies were completed and reside in the NEW queue: JLine3 (done by @samyak-jn, myself) and intellij-community-idea (finished by @The_LoudSpeaker, Raman Sarda).

The kotlin package residing in m36's repository had a couple of issues that needed to be fixed to meet Debian standards, but Kotlin was building fine locally with the mentioned dependencies. :D

I (Samyak Jain) took on the work of converting all the commits to patches, since all the changes had been made directly to the source, and then fixed the rules and control files to meet Debian standards. Debian is very particular about its license policies. The copyright file was a pending task that has now been completed for good.
The newer package exists at Samyak's repo.

I set up an initial wiki page for Kotlin as well, so everyone can follow. Thanks, Hans (@_hc) for the help with that. The wiki page for Kotlin exists here.

What's Blocking?

The most uncertain thing to decide is how Kotlin will be uploaded to the Debian archive.

What is the problem being faced?
The kotlin-bootstrap package consists of JAR files for various dependencies of Kotlin, such as Gradle, the Kotlin compiler, and kotlinx. The package is added to the build-depends of the main package so that the JAR files can be provided. Since kotlin-bootstrap consists of binaries (JAR files), it is not feasible to upload the package as free software.
The other workaround was the Gradle 6.4 version, which consists of Kotlin files and generates a suitable JAR. But since that package needs the Kotlin language itself, it was never updated, as it would create a cyclic dependency.
The final workaround proposed building Kotlin from itself, which was a pretty impressive suggestion. But we still have to see whether the solution is feasible, because as far as I last checked and conversed with ebourg on the mailing list here, Emmanuel Bourg mentioned very clearly that a rebuilt package is what we are interested in. So, this is a work in progress.
However, I fail to see how we can drop the kotlin-bootstrap package entirely: without it Kotlin cannot be built, because each and every JAR file present in the bootstrap is needed.
That pretty much covers the ongoing work and the update on the kotlin package. We intend to bring Kotlin to the Debian archive as soon as possible :)
Have any queries or suggestions for Kotlin?
Please feel free to drop a message in the #debian-mobile channel on OFTC.

Here's my (eighth) monthly update about the activities I've done in the F/L/OSS world.

Debian
This month marks my 15 months of contributing to Debian.
And my 6th month as a DD! \o/
Whilst I love doing Debian stuff, I have started spending more time on the
programming side now. And I hope to keep it this way for some time.
Of course, I'll keep doing the Debian stuff, but just in a lesser amount.
Anyway, the following are the things I did in May.

Sponsored git-repo-updater and mplcursors for Sudip.

Mentoring for newcomers.

FTP Trainee reviewing.

Moderation of -project mailing list.

Experimenting and improving Ruby libraries FTW!
I have been very heavily involved with the Debian Ruby team for over a year now.
Thanks to Antonio Terceiro (and GSoC), I've started experimenting with and taking more
interest in the upstream development and improvement of these libraries.
This has the sole purpose of learning. It has gotten fun since I started doing Ruby,
and I hope it stays this way.
This month, I opened some issues and proposed a few pull requests. They are:

Issue #85 against ruby-dbus asking if they still use rDoc for doc generation.

Debian LTS
Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases
to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group
of volunteers and companies interested in making it a success.
This was my eighth month as a Debian LTS paid contributor. I was assigned 17.25 hours and worked on
the following things:

Issued DLA 2210-1, fixing CVE-2020-3810, for apt.
This update was prepared by the maintainer, Julian. I just took care of the paperwork.
For Debian 8 "Jessie", this problem has been fixed in version 1.0.9.8.6.

Created the LTS Survey on the self-hosted LimeSurvey instance.

Other(s)
Sometimes it gets hard to categorize work/things into a particular category.
That's why I am writing all of those things inside this category.
This includes two sub-categories, as follows.

Personal:
This month I could get the following things done:

Wrote and published my first Ruby gem/library/tool on RubyGems!
It's open source and the repository is here.
Bug reports and pull requests are welcome!

Wrote a small Ruby script (available here) to install Ruby gems from a Gemfile(.lock).
I needed this when I hit a bug while using ruby-standalone, which Antonio fixed pretty quickly!

The Open Source Initiative held their twice-annual multi-day "face-to-face" board meeting, this time held virtually, and I participated in the accompanying conversations on strategy, tactical and governance issues, as well as the usual discussions regarding licensing and policy (minutes pending). I also attended the regular monthly meeting for Software in the Public Interest (minutes).

Various alterations for the continuous integration pipeline. [...][...]

Reproducible builds
One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. However, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into ostensibly secure software during the various compilation and distribution processes.
The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
The initiative is proud to be a member project of the Software Freedom Conservancy, a not-for-profit 501(c)(3) charity focused on ethical technology and user freedom.
Conservancy acts as a corporate umbrella allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

Opened a pull request to make the documentation for the Wand Python/ImageMagick graphics library build in a reproducible manner. [...]

Categorised a large number of packages and issues in the Reproducible Builds "notes" repository.

In disorderfs, our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues, I replaced the term "dirents" with "directory entries" in human-readable output/log messages [...] and applied the astyle source code formatter with its default settings to the main disorderfs.cpp file [...].

Elsewhere in our tooling, I made the following changes to diffoscope, our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues, including preparing and uploading versions 142, 143, 144, 145 and 146 to Debian:

Investigated and triaged freerdp, keystone, nginx, tcpreplay & thunderbird, as well as tended to the general upkeep of the dla-needed.txt and ela-needed.txt files, adding various notes, references, attributions and citations.

Issued DLA 2201-1 to prevent a Denial of Service (DoS) vulnerability in ntp, the network time protocol server/client. ntp allowed an "off-path" attacker to block unauthenticated synchronisation via a server mode packet with a spoofed source IP address, because transmissions were rescheduled even if a packet lacked a valid "origin timestamp".

Issued DLA 2203-1 for the SQLite database to prevent a denial of service attack. In the event of a semantic error in an aggregate query, SQLite did not return early from the resetAccumulator() function which would lead to a crash via a segmentation fault.

One of the things I maintain in Debian is OpenOCD. I say maintain, but it's so far required very little work, as it's been 3 years since a release (0.10.0). I've talked about doing a git snapshot package for some time (I have an email from last DebConf in my inbox about it, and that wasn't the first time someone had asked), but never got around to it. Spurred on by some moves towards a 0.11.0 release I've built a recent snapshot and uploaded it to the experimental suite in Debian.
Of particular interest is the support for more recent architectures that this brings - ARMv8/aarch64 and RISC-V being the big ones, but also MIPS64 and various other ARM improvements. I no longer have access to Xilinx Zynq or Mellanox Bluefield platforms to test against, so I've just done some basic tests with a Sheevaplug and BusPirate/STM32F103, but those worked just fine.
Builds should hopefully happen shortly. Enjoy!

30 May 2020

Something I've found myself doing as the pandemic rolls on is picking
out and (re-)reading through sections of the GNU Emacs
manual and the
GNU Emacs Lisp reference
manual. This
has got me (too) interested in some of the recent history of Emacs
development, and I did some digging into archives of emacs-devel from
2008 (15M
mbox) regarding the change to turn Transient Mark mode on by default
and set mark-even-if-inactive to true by default in Emacs 23.1.
It's not always clear which objections to turning on Transient Mark
mode by default take into account the mark-even-if-inactive change.
I think that turning on Transient Mark mode along with
mark-even-if-inactive is a good default. The question that remains
is whether the disadvantages of Transient Mark mode are significant
enough that experienced Emacs users should consider altering Emacs
default behaviour to mitigate them. Here's one popular blog arguing
for some
mitigations.
How might Transient Mark mode be disadvantageous?
The suggestion is that it makes using the mark for navigation rather
than for acting on regions less convenient:

setting a mark just so you can jump back to it (i) is a distinct
operation you have to think of separately; and (ii) requires two
keypresses, C-SPC C-SPC, rather than just one keypress

using exchange-point-and-mark activates the region, so to use it
for navigation you need to use either C-u C-x C-x or C-x C-x
C-g, neither of which are convenient to type, or else it will be
difficult to set regions at the place you've just jumped to because
you'll already have one active.

There are two other disadvantages that people bring up which I am
disregarding. The first is that it makes it harder for new users to
learn useful ways in which to use the mark when it's deactivated.
This happened to me, but it can be mitigated without making any
behavioural changes to Emacs. The second is that the visual
highlighting of the region can be distracting. So far as I can tell,
this is only a problem with exchange-point-and-mark, and it's
subsumed by the problem of that command actually activating the
region. The rest of the time, Emacs' automatic deactivation of the
region seems sufficient.
How might disabling Transient Mark mode be disadvantageous?
When Transient Mark mode is on, many commands will do something
usefully different when the mark is active. The number of commands in
Emacs which work this way is only going to increase now that Transient
Mark mode is the default.
If you disable Transient Mark mode, then to use those features you
need to temporarily activate Transient Mark mode. This can be fiddly
and/or require a lot of keypresses, depending on exactly where you
want to put the region.
Without being able to see the region, it might be harder to know where
it is. Indeed, this is one of the main reasons for wanting Transient
Mark mode to be the default: to avoid confusing new users. I don't
think this is likely to affect experienced Emacs users often, however,
and on occasions when more precision is really needed, C-u C-x C-x
will make the region visible. So I'm not counting this as a
disadvantage.
How might we mitigate these two sets of disadvantages?
Here are the two middle grounds I'm considering.
Mitigation #1: Transient Mark mode, but hack C-x C-x behaviour

(defun spw/exchange-point-and-mark (arg)
  "Exchange point and mark, but reactivate mark a bit less often.

Specifically, invert the meaning of ARG in the case where
Transient Mark mode is on but the region is inactive."
  (interactive "P")
  (exchange-point-and-mark
   (if (and transient-mark-mode (not mark-active))
       (not arg)
     arg)))

(global-set-key [remap exchange-point-and-mark] 'spw/exchange-point-and-mark)

We avoid turning Transient Mark mode off, but mitigate the second of
the two disadvantages given above.
I can't figure out why it was thought to be a good idea to make C-x
C-x reactivate the mark and require C-u C-x C-x to use the action
of exchanging point and mark as a means of navigation. There needs to
be a binding to reactivate the mark, but in roughly ten years of having
Transient Mark mode turned on, I've found that the need to reactivate
the mark doesn't come up often, so the shorter and longer bindings
seem the wrong way around. Not sure what I'm missing here.
Mitigation #2: disable Transient Mark mode, but enable it temporarily more often
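The code for this isn't reproduced here; a minimal sketch of the approach, assuming we want the mark-* commands to temporarily enable Transient Mark mode (the idea credited to Stefan Monnier in the footnotes), might be:

;; Turn Transient Mark mode off...
(transient-mark-mode 0)

;; ...but have the mark-* commands enable it temporarily.  The special
;; value `lambda' means "enabled until the mark is next deactivated";
;; see the docstring of `transient-mark-mode'.
(defun my/enable-tmm-temporarily (&rest _)
  (setq-local transient-mark-mode 'lambda))

(dolist (cmd '(mark-word mark-sexp mark-paragraph mark-defun mark-whole-buffer))
  (advice-add cmd :after #'my/enable-tmm-temporarily))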

Here we remove both of the disadvantages of Transient Mark mode given
above, and mitigate the main disadvantage of not activating Transient
Mark mode by making it more convenient to activate it temporarily.
For example, this enables using C-M-SPC C-M-SPC M-( to wrap the
following two function arguments in parentheses. And you can hit
M-h a few times to mark some blocks of text or code, then operate on
them with commands like M-% and C-/ which behave differently when
the region is active.1
Comparing these mitigations
Both of these mitigations handle the second of the two disadvantages
of Transient Mark mode given above. What remains, then, is

under the effects of mitigation #1, how much of a barrier to using
marks for navigational purposes is it to have to press C-SPC
C-SPC instead of having a single binding, C-SPC, for all manual
mark setting2

under the effects of mitigation #2, how much of a barrier to taking
advantage of commands which act differently when the region is
active is it to have to temporarily enable Transient Mark mode with
C-SPC C-SPC, M-= or one of the mark-* commands?

These are unknowns.3 So I'm going to have to experiment, I think,
to determine which mitigation to use, if either. In particular, I
don't know whether it's really significant that setting a mark for
navigational purposes and for region-marking purposes are distinct
operations under mitigation #1.
My plan is to start with mitigation #2 because that has the additional
advantage of allowing me to confirm or disconfirm my belief that not
being able to see where the region is will only rarely get in my way.

The idea of making the mark-* commands activate the mark comes
from an emacs-devel post by Stefan Monnier in the archives linked
above.

One remaining possibility I'm not considering is mitigation #1
plus binding something else to do the same as C-SPC C-SPC. I
don't believe there are any easily rebindable keys which are
easier to type than typing C-SPC twice. And this does not deal
with the two distinct mark-setting operations problem.

Another way to look at this is the question of which of setting
a mark for navigational purposes and activating a mark should get
C-SPC and which should get C-SPC C-SPC.

A new version of drat arrived on CRAN overnight, once again taking advantage of the fully automated process available for such packages with few reverse dependencies and no open issues. As we remarked at the last release fourteen months ago, when we scored the same nice outcome: being a simple package can have its upsides!
This release is mostly the work of Felix Ernst who took on what became a rewrite of how binary macOS packages are handled. If you need to distribute binary packages for macOS users, this may help. Two more small updates were made, see below for full details.
drat stands for drat R Archive Template, and helps with easy-to-create and easy-to-use repositories for R packages. Since its inception in early 2015 it has found reasonably widespread adoption among R users, because repositories with marked releases are the better way to distribute code.
As your mother told you: friends don't let friends install random git commit snapshots. Rolled-up releases it is. drat is easy to use, documented by five vignettes, and just works.
The NEWS file summarises the release as follows:

Changes in drat version 0.1.6 (2020-05-29)

Changes in drat functionality

Support for the various (current) macOS binary formats was rewritten (Felix Ernst in #89 fixing #88).

I know most Debian people know about this already. But in case you
don't follow the usual Debian communications channels, this might
interest you!
Given most of the world is still under COVID-19 restrictions, that we
want to work on Debian, and that there is no certainty as to what the
future holds in store for us, our DPL, fearless as they always are,
had the bold initiative to make this weekend into the first-ever
miniDebConf
Online
(MDCO)!
So, we are already halfway through DebCamp (which means you can come
and hang out with us in the debian.social DebCamp Jitsi
lounge, where some
impromptu presentations might happen, or not).
Starting tomorrow morning (11AM UTC),
we will have a quite interesting set of talks. I am reproducing the
schedule here:

Saturday 2020.05.30

Time (UTC)     Speaker            Talk
11:00 - 11:10  MDCO team members  Hello + Welcome
11:30 - 11:50  Wouter Verhelst    Extrepo
12:00 - 12:45  JP Mengual         Debian France, trust european organization
13:00 - 13:20  Arnaud Ferraris    Bringing Debian to mobile phones, one package at a time
13:30 - 15:00  Lunch Break        A chance for the teams to catch some air
15:00 - 15:45  JP Mengual         The community team, United Nations Organizations of Debian?
16:00 - 16:45  Christoph Biedl    Clevis and tang - overcoming the disk unlocking problem
17:00 - 17:45  Antonio Terceiro   I'm a programmer, how can I help Debian

Sunday 2020.05.31

Time (UTC)     Speaker            Talk
11:00 - 11:45  Andreas Tille      The effect of Covid-19 on the Debian Med project
12:00 - 12:45  Paul Gevers        BoF: running autopkgtest for your package
13:00 - 13:20  Ben Hutchings      debplate: Build many binary packages with templates
13:30 - 15:00  Lunch break        A chance for the teams to catch some air
15:00 - 15:45  Holger Levsen      Reproducing bullseye in practice
16:00 - 16:45  Jonathan Carter    Striving towards excellence
17:00 - 17:45  Delib*             Organizing Peer-to-Peer Debian Facilitation Training
18:00 - 18:15  MDCO team members  Closing

* subject to confirmation

Timezone
Remember this is an online event, meant for all of the world! Yes, the
chosen times seem quite Europe-centric (but they are mostly a function
of the times the talk submitters requested).
Talks run 11:00 to 18:00 UTC, which means 06:00 to 13:00 in Mexico (GMT-5),
20:00 to 03:00 in Japan (GMT+9), 04:00 to 11:00 in Western
Canada/USA/Mexico (GMT-7), and somewhere in between for the rest of the
world.
(No, this was clearly not optimized for our dear usual beer
team. Sorry! I
guess we need you to be fully awake at beer time!)

[update] Connecting!
Of course, I didn't make it clear at first how to connect to the
Online miniDebConf, silly me!