
3 June 2020

Here's a practical example of applying Template Haskell to reduce
the amount of boilerplate code that is otherwise required. I wrote the below
after following this excellent blog post by Matt Parsons. This post is much
higher-level; read Matt's blog for the gorier details.
Liquorice
Liquorice is a toy project of mine from a few years ago that lets you
draw 2D geometric structures similar to LOGO. Liquorice offers two interfaces:
pure functions that operate on an explicit Context (the pen location,
existing lines, etc.), and a second "stateful" interface where the input and
output are handled in the background.
I prefix the pure ones P. and the stateful ones S. in this blog post for
clarity.
The stateful interface can be much nicer to use for larger drawings.
Compare example8b.hs,
written in terms of the pure functions, and the stateful equivalent
example8.hs.
The majority of the stateful functions are "wrapped" versions of the pure
functions. For example, the pure function P.step takes two numbers
and moves the pen forward and sideways. Its type signature is
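The excerpt omits the signature itself; judging from the description (and from the shape of P.place quoted later), it is presumably:

```haskell
P.step :: Int -> Int -> Context -> Context
```

and the generated stateful wrapper would then have the shape S.step :: Int -> Int -> State Context ().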

Writing these wrapped functions for the 29 pure functions is boilerplate that
can be generated automatically with Template Haskell.
Generating the wrapper functions
Given the Name of a function to wrap, we construct an instance of FunD, the
TH data-type representing a function definition. We use the base name of the
incoming function as the name for the new one.
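As a standalone sketch (the names here are mine, not the Liquorice source), the construction can be tried outside a splice by building the Dec by hand and pretty-printing it:

```haskell
import Language.Haskell.TH

-- Sketch: build a FunD (a TH function definition) for a wrapper,
-- reusing the base name of the wrapped function. The body expression
-- is a placeholder variable.
mkWrapSketch :: Name -> [Name] -> Exp -> Dec
mkWrapSketch fn args body =
  FunD (mkName (nameBase fn)) [Clause (map VarP args) (NormalB body) []]

main :: IO ()
main = putStrLn (pprint (mkWrapSketch (mkName "step")
                                      [mkName "a", mkName "b"]
                                      (VarE (mkName "rhs"))))
```

Pretty-printing the Dec shows the shape of the generated definition: the wrapper's name, its argument patterns, and its body.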

To determine how many arguments the wrapper function needs to accept, we
need to know the input function's arity. We use Template Haskell's reify
function to get type information about the function, and derive the arity from
that. Matt Parsons covers this exactly in his blog.
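As a rough sketch of that idea (this is not the actual code; the real version gets the Type from reify, whereas here one is built by hand for the demonstration), the arity falls out of counting arrows:

```haskell
import Language.Haskell.TH

-- Sketch: derive an arity from a Type by counting (->) arrows.
arity :: Type -> Int
arity (ForallT _ _ t)          = arity t
arity (AppT (AppT ArrowT _) t) = 1 + arity t
arity _                        = 0

-- Int -> Int -> Context -> Context, built by hand for the demo.
stepType :: Type
stepType = arrow int (arrow int (arrow context context))
  where
    arrow x y = AppT (AppT ArrowT x) y
    int       = ConT (mkName "Int")
    context   = ConT (mkName "Context")

main :: IO ()
main = print (arity stepType)
```

The wrapper itself then takes one argument fewer than this count, since the trailing Context is threaded implicitly by the State monad.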

We can use the list "args" directly in the clause part of the function
definition, as the data-type expects a list. For the right-hand side, we need
to convert from a list of arguments to function application. That's a simple
left-fold:

We use TH's oxford brackets for the definition of rhs. This permits us
to write real Haskell inside the brackets, and get an expression data-type
outside them. Within we have a splice (the $( )), which does the opposite:
the code is evaluated at compile time and generates an Exp that is then
converted into the equivalent Haskell code and spliced into place.
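The fold can also be tried outside a splice; this sketch (my own names, not the Liquorice source) builds the application expression with a left-fold over AppE and pretty-prints the result:

```haskell
import Language.Haskell.TH

-- Sketch: turn a function name and a list of argument names into
-- nested function application: f [a, b]  ==>  (f `AppE` a) `AppE` b.
applyArgs :: Name -> [Name] -> Exp
applyArgs f = foldl (\e a -> AppE e (VarE a)) (VarE f)

main :: IO ()
main = putStrLn (pprint (applyArgs (mkName "step") [mkName "a", mkName "b"]))
```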
Finally, we need to apply the above to a list of Names. Sadly, we can't get at
the list of exported names from a Module automatically. There is an open
request for a TH extension for
this. In the meantime,
we export a list of the functions to wrap from the Pure module and operate
on that:

import Liquorice.Pure
wrapPureFunctions = mapM mkWrap pureFns

Finally, we 'call' wrapPureFunctions at the top level in our state module
and Template Haskell splices all the function definitions into place.
The final code ended up at only around 30 lines, and saved about the
same number of lines of boilerplate. But in doing this I noticed some missing
functions, and it will pay dividends if more pure functions are added.
Limitations
The current implementation has one significant limitation: it cannot handle
higher-order functions. An example of a pure higher-order function is
place, which moves the pen, performs an operation, and then moves it back:

P.place :: Int -> Int -> (Context -> Context) -> Context -> Context

Wrapping this is not sufficient because the higher-order parameter has
the pure function signature Context -> Context. If we wrapped it, the
stateful version of the function would accept a pure function as the
parameter, but you would expect it to accept another stateful function.
To handle these, at a minimum we would need to detect the function arguments
that have type Context -> Context and replace them with State Context ().
The right-hand side of the wrapped function would also need to do more work to
handle wrapping and unwrapping the parameter. I haven't spent much time
thinking about it but I'm not sure that a general purpose wrapper would work
for all higher-order functions. For the time being I've just re-implemented the
half-dozen of them.
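A hand-written wrapper for such a function might look like the following sketch. The Context here is a stand-in (the real Liquorice Context carries more state); the point is that the higher-order argument becomes a State Context () action whose drawing effects persist while the pen position is restored:

```haskell
import Control.Monad.State

-- Stand-in for the real Liquorice Context (assumption: it tracks at
-- least the pen position plus accumulated drawing state).
data Context = Context { posX, posY, linesDrawn :: Int }
  deriving (Show, Eq)

-- Pure higher-order function in the style of P.place: move the pen,
-- run the sub-drawing, then restore the pen position.
place :: Int -> Int -> (Context -> Context) -> Context -> Context
place dx dy f c =
  let drawn = f c { posX = posX c + dx, posY = posY c + dy }
  in  drawn { posX = posX c, posY = posY c }

-- Hand-written stateful version: the sub-drawing is itself a stateful
-- action, which is why the simple generated wrapper cannot handle it.
placeS :: Int -> Int -> State Context () -> State Context ()
placeS dx dy body = do
  (x0, y0) <- gets (\s -> (posX s, posY s))
  modify (\s -> s { posX = posX s + dx, posY = posY s + dy })
  body
  modify (\s -> s { posX = x0, posY = y0 })

main :: IO ()
main = print (execState (placeS 3 4 (return ())) (Context 0 0 0))
```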

28 May 2020

For more than a few decades now (!), I've been running my own
server. First it was just my old Pentium 1 squatting on university
networks, but it eventually grew into a real server somewhere at the dawn
of the millennium. Apart from the university days, the server was mostly
hosted over ADSL links, first a handful of megabits, up to the current
25 Mbps down, 6 Mbps up that the Bell Canada network seems to allow
to its resellers (currently Teksavvy Internet, or TSI).

Why change?
Obviously, this speed is showing its age, especially in this age
of Pandemia where everyone is on videoconferencing all the time. But
it's also inconvenient when I need to upload large files on the
network. I also host a variety of services on this network, and I
always worry that any idiot can (rather trivially) DoS my server, so I
often feel I should pack a little more punch at home (although I have
no illusions about my capacity to resist any sort of DoS attack at
home, of course).
Also, the idea of having gigabit links at home brings back the idea of
the original internet, that everyone on the internet is a
"peer". "Client" and "servers" are just a technical distinction and
everyone should be able to run a server.

Requirements
So I'm shopping for a replacement. The requirements are:

static or near-static IP address: I run a DNS server with its IP
in the glue records (although the latter could possibly be
relaxed). ideally a /29 or more.

all ports open: I run an SMTP server (incoming and outgoing) along
with a webserver and other experiments. ideally, no firewall or
policy should be blocking me from hosting stuff, unless there's an
attack or security issue, obviously.

clean IP address: the SMTP server needs to have a good reputation,
so the IP address should not be in a "residential space" pool.

IPv6 support: TSI offers IPv6 support, but it is buggy (I
frequently have to restart the IPv6 interface on the router
because the delegated block stops routing, and they haven't been
able to figure out the problem). ideally, a /56.

less than 100$/mth, ideally close to the current 60$/mth I pay.

(All amounts in $CAD.)

Contestants
I wrote a similar message asking major ISPs in my city for those
services, including business service if necessary:

I have not contacted those providers:

Bell Canada: I have sworn, two decades ago, never to do business
with that company ever again. They have a near-monopoly on almost
all telcos in Canada and I want to give them as little money as
possible.

I might have forgotten some, let me know if you're in the area and
have a good recommendation. I'll update this post with findings as
they come in.
Keep in mind that I am in a major Canadian city, less than a kilometer
from a major telco exchange site, so it's not like I'm in a rural
community. This should just work.

TSI
First answer from TSI was "we do not provide 30mbps upload on
residential services", even though they seem to have that package on
their website. They confirmed that they "don't have a option more
than 10 mbps upload."
TSI were the first to respond, within 24h.

Oricom
They offer a 100/30 link for 65$ plus 25$ for a static IP.
No IPv6 yet, unlikely to come soon. No services blocked, they have
their own PoP within Videotron's datacenters so clients come out from
their IP address space.
I can confirm that the IP is fairly static from the office.
Oricom were the second to respond, within 24h, but required a phone
call instead of an email exchange. Responded within 6 hours after
leaving a voicemail.

Ebox
Ebox claims my neighborhood supports 400 Mbps down, but offered me a
100/30 package with 350 GB bandwidth per month for 54.95$/mth or
unlimited for 65$/mth.
Many ports are blocked, which makes it impossible for me to use their
service:

port 25 blocked incoming

port 25 filtered outgoing (only allowed to their servers)

port 53 blocked incoming

No static IP addressing, shared dynamic space so no guarantee on
reputation. IPv6 only on DSL, so no high speed IPv6.
Ebox took the longest to respond, about 48 hours.

Beanfield / Openface
Even though they have a really interesting service (50$/mth for
unlimited 1gbps), they are not in my building. I did try to contact
them over chat, they told me to call, and I left a message. They
responded saying they mostly offer business services for now, no
residential in Montreal.

26 May 2020

Problems With Cruises
GQ has an insightful and detailed article about Covid19 and the Diamond Princess [1], I recommend reading it.
FastCompany has a brief article about bookings for cruises in August [2]. There have been many negative comments about this online.
The first thing to note is that the cancellation policies on those cruises are more lenient than usual and the prices are lower. So it's not unreasonable for someone to put down a deposit on a half-price holiday in the hope that Covid19 goes away (as so many prominent people have been saying it will), in the knowledge that they will get it refunded if things don't work out. Of course if the cruise line goes bankrupt then no-one will get a refund, but I think people are expecting that won't happen.
The GQ article highlights some serious problems with the way cruise ships operate. They have staff crammed into small cabins, and the working areas allow transmission of disease. These problems can be alleviated: they could allocate more space to staff quarters and have more capable air conditioning systems to put in more fresh air. During the life of a cruise ship significant changes are often made; replacing engines with newer, more efficient models, changing the size of various rooms for entertainment, and installing new waterslides are all routine. Changing the staff-only areas to have better ventilation and more separate space (maybe capsule-hotel style cabins with fresh air piped in) would not be a difficult change. It would take some money and some dry-dock time, which would be a significant expense for cruise companies.
Cruises Are Great
People like social environments, they want to have situations where there are as many people as possible without it becoming impossible to move. Cruise ships are carefully designed for the flow of passengers. Both the layout of the ship and the schedule of events are carefully planned to avoid excessive crowds. In terms of meeting the requirement of having as many people as possible in a small area without being unable to move, cruise ships are probably ideal.
Because there is a large number of people in a restricted space there are economies of scale on a cruise ship that aren't available anywhere else. For example the main items on the menu are made in a production-line process; this can only be done when you have hundreds of people sitting down to order at the same time.
The same applies to all forms of entertainment on board; they plan the events based on statistical knowledge of what people want to attend. This makes it more economical to run than land-based entertainment, where people can decide to go elsewhere. On a ship a certain portion of the passengers will see whatever show is presented each night, regardless of whether it's singing, dancing, or magic.
One major advantage of cruises is that they are all-inclusive. If you are on a regular holiday, would you pay to see a singing or dancing show? Probably not, but if it's included then you might as well do it, and it will be pretty good. This benefit is really appreciated by people taking kids on holidays: if kids do things like refuse to attend a performance that you were going to see, or reject food once it's served, it won't cost any extra.
People Who Criticise Cruises
For the people who sneer at cruises, do you like going to bars? Do you like going to restaurants? Live music shows? Visiting foreign beaches? A cruise gets you all that and more for a discount price.
If Groupon had a deal that gave you a cheap hotel stay with all meals included, free non-alcoholic drinks at bars, day long entertainment for kids at the kids clubs, and two live performances every evening how many of the people who reject cruises would buy it? A typical cruise is just like a Groupon deal for non-stop entertainment from 8AM to 11PM.
Will Cruises Restart?
The entertainment options that cruises offer are greatly desired by many people. Most cruises are aimed at budget travellers, and the price is cheaper than a hotel in a major city. Such cruises greatly depend on economies of scale; if they can't get the ships filled then they would need to raise prices (thus decreasing demand) to try to make a profit. I think that some older cruise ships will be scrapped in the near future and some of the newer ships will be sold to cruise lines that cater to cheap travel (i.e. P&O may scrap some ships, and some of the older Princess ships may be transferred to them). Overall I predict a decrease in the number of middle-class cruise ships.
For the expensive cruises (where the cheapest cabins cost over US$1000 per person per night) I don't expect any real changes; maybe they will have fewer passengers and higher prices to allow more social distancing or something.
I am certain that cruises will start again, but it's too early to predict when. Going on a cruise is about as safe as going to a concert or a major sporting event. No-one is predicting that sporting stadiums will be closed forever or live concerts will be cancelled forever, so really no-one should expect that cruises will be cancelled forever. Whether companies that own ships or stadiums go bankrupt in the meantime is yet to be determined.
One thing that's been happening for years is themed cruises. A group can book out an entire ship or part of a ship for a themed cruise. I expect this to become much more popular when cruises start again, as it will make it easier to fill ships. In the past it seems that cruise lines let companies book their ships for events but didn't take much of an active role in the process. I think that the management of cruise lines will look to aggressively market themed cruises to anyone who might help; for starters they could reach out to every 80s and 90s pop group, since those fans are all old enough to be interested in themed cruises and the musicians won't be asking for too much money.
Conclusion
Humans are social creatures. People want to attend events with many other people. Covid19 won't be the last pandemic, and it may not even be eradicated in the near future. The possibility of having a society where no-one leaves home unless they are in a hazmat suit has been explored in science fiction, but I don't think that's a plausible scenario for the near future and I don't think that it's something that will be caused by Covid19.

12 May 2020

I recently wrote a final paper on the history of written numerals.
In the process, I discovered this fascinating tidbit that didn't
really fit in my paper, but I wanted to put it somewhere.
So I'm writing about it here.
If I were to ask you to count as high as you could on your fingers
you'd probably get up to 10 before running out of fingers.
You can't count any higher than the number of fingers you have,
right? The Romans could! They used a place-value system, combined
with various gestures to count all the way up to 9,999 on two hands.

The System
(Note that in this diagram 60 is, in fact, wrong,
and this picture swaps the hundreds and the thousands.)
We'll start with the units.
The last three fingers of the left hand, middle, ring, and pinkie,
are used to form them.
Zero is formed with an open hand, the opposite of the finger
counting we're used to.
One is formed by bending the middle joint of the pinkie,
two by including the ring finger and three by including the
middle finger, all at the middle joint.
You'll want to keep all these bends fairly loose, as otherwise
these numbers can get quite uncomfortable.
For four, you extend your pinkie again, for five, also raise your
ring finger, and for six, you raise your middle finger as well, but
then lower your ring finger.
For seven you bend your pinkie at the bottom joint, for eight
adding your ring finger, and for nine, including your middle
finger. This mirrors what you did for one, two and three, but
bending the finger at the bottom joint now instead.
This leaves your thumb and index finger for the tens.
For ten, touch the nail of your index finger to the inside of your top thumb
joint.
For twenty, put your thumb between your index and middle fingers.
For thirty, touch the nails of your thumb and index fingers.
For forty, bend your index finger slightly towards your palm
and place your thumb between the middle and top knuckle of your
index finger.
For fifty, place your thumb against your palm.
For sixty, leave your thumb where it is and wrap your index
finger around it (the diagram above is wrong).
For seventy, move your thumb so that the nail touches between the
middle and top knuckle of your index finger.
For eighty, flip your thumb so that the bottom of it now
touches the spot between the middle and top knuckle of your index
finger.
For ninety, touch the nail of your index finger to your bottom
thumb joint.
The hundreds and thousands use the same positions on the right
hand, with the units being the thousands and the tens being the
hundreds. One account, from which the picture above comes, swaps
these two, but the first account we have uses this ordering.
Combining all these symbols, you can count all the way to 9,999
yourself on just two hands. Try it!
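The place-value bookkeeping behind this is just decimal digit extraction; here is a small sketch (the function and its labels are mine, using the ordering from the first account):

```haskell
-- Which digit each group of finger positions shows for a number 0-9999:
-- units and tens on the left hand, hundreds and thousands on the right.
fingerDigits :: Int -> [(String, Int)]
fingerDigits n =
  [ ("left units",      n `mod` 10)
  , ("left tens",       n `div` 10   `mod` 10)
  , ("right hundreds",  n `div` 100  `mod` 10)
  , ("right thousands", n `div` 1000 `mod` 10)
  ]

main :: IO ()
main = print (fingerDigits 1234)
```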

History

The Venerable Bede
The first written record of this system comes from the Venerable
Bede, an English Benedictine monk who died in 735.
He wrote De computo vel loquela digitorum,
On Calculating and Speaking with the Fingers, as the introduction
to a larger work on chronology, De temporum ratione.
(The primary calculation done by monks at the time was calculating
the date of Easter, the first Sunday after the first full moon of
spring).
He also includes numbers from 10,000 to 1,000,000, but it's unknown whether
these were inventions of the author; they were likely rarely used regardless.
They require moving your hands to various positions on your body,
as illustrated below, from Jacob Leupold's Theatrum Arithmetico-Geometricum,
published in 1727:

The Romans
If Bede was the first to write it, how do we know that it came from Roman times?
It's referenced in many Roman writings, including this bit from the Roman
satirist Juvenal who died in 130:

Felix nimirum qui tot per saecula mortem
distulit atque suos iam dextera computat annos.
Happy is he who so many times over the years has cheated death
And now reckons his age on the right hand.

Because of course the right hand is where one counts hundreds!
There's also this Roman riddle:

Nunc mihi iam credas fieri quod posse negatur:
octo tenes manibus, si me monstrante magistro
sublatis septem reliqui tibi sex remanebunt.
Now you shall believe what you would deny could be done:
In your hands you hold eight, as my teacher once taught;
Take away seven, and six still remain.

If you form eight with this system and then remove the symbol for seven, you
get the symbol for six!

10 May 2020

This review, for reasons that will hopefully become clear later, starts
with a personal digression.
I have been interested in political theory my entire life. That sounds
like something admirable, or at least neutral. It's not. "Interested"
means that I have opinions that are generally stronger than my depth of
knowledge warrants. "Interested" means that I like thinking about and
casting judgment on how politics should be done without doing the work of
politics myself. And "political theory" is different than politics in
important ways, not the least of which is that political actions have
rarely been a direct danger to me or my family. I have the luxury of
arguing about politics as a theory.
In short, I'm at high risk of being one of those people who has an opinion
about everything and shares it on Twitter.
I'm still in the process (to be honest, near the beginning of the process)
of making something useful out of that interest. I've had some success
when I become enough of a part of a community that I can do some of the
political work, understand the arguments at a level deeper than theory,
and have to deal with the consequences of my own opinions. But those
communities have been on-line and relatively low stakes. For the big
political problems, the ones that involve governments and taxes and laws,
those that decide who gets medical treatment and income support and who
doesn't, to ever improve, more people like me need to learn enough about
the practical details that we can do the real work of fixing them, rather
than only making our native (and generally privileged) communities better
for ourselves.
I haven't found my path helping with that work yet. But I do have a
concrete, challenging, local political question that makes me coldly
furious: housing policy. Hence this book.
Golden Gates is about housing policy in the notoriously underbuilt
and therefore incredibly expensive San Francisco Bay Area, where I live.
I wanted to deepen that emotional reaction to the failures of housing
policy with facts and analysis. Golden Gates does provide some of
that. But this also turns out to be a book about the translation of
political theory into practice, about the messiness and conflict that
results, and about the difficult process of measuring success. It's also
a book about how substantial agreement on the basics of necessary
political change can still founder on the shoals of prioritization,
tribalism, and people who are interested in political theory.
In short, it's a book about the difficulty of changing the world instead
of arguing about how to change it.
This is not a direct analysis of housing policy, although Dougherty
provides the basics as background. Rather, it's the story of the
political fight over housing told primarily through two lenses: Sonja
Trauss, founder of BARF (the Bay Area Renters' Federation); and a Redwood
City apartment complex, the people who fought its rent increases, and the
nun who eventually purchased it. Around that framework, Dougherty writes
about the Howard Jarvis Taxpayers Association and the history of
California's Proposition 13, a fight over a development in Lafayette, the
logistics challenge of constructing sufficient housing even when approved,
and the political career of Scott Wiener, the hated opponent of every city
fighting for the continued ability to arbitrarily veto any new housing.
One of the things Golden Gates helped clarify for me is that there
are three core interest groups that have to be part of any discussion of
Bay Area housing: homeowners who want to limit or eliminate local change,
renters who are vulnerable to gentrification and redevelopment, and the
people who want to live in that area and can't (which includes people who
want to move there, but more sympathetically includes all the people who
work there but can't afford to live locally, such as teachers, day care
workers, food service workers, and, well, just about anyone who doesn't
work in tech). (As with any political classification, statements about
collectives may not apply to individuals; there are numerous people who
appear to fall into one group but who vote in alignment with another.)
Dougherty makes it clear that housing policy is intractable in part
because the policies that most clearly help one of those three groups hurt
the other two.
As advertised by the subtitle, Dougherty's focus is on the fight for more
housing. Those who already own homes whose values have been inflated by
artificial scarcity, or who want to preserve such stratified living
conditions as low-density, large-lot single-family dwellings within short
mass-transit commute of one of the densest cities in the United States,
don't get a lot of sympathy or focus here except as opponents. I
understand this choice; I also don't have much sympathy. But I do wish
that Dougherty had spent more time discussing the unsustainable promise
that California has implicitly made to homeowners: housing may be
impossibly expensive, but if you can manage to reach that pinnacle of
financial success, the ongoing value of your home is guaranteed. He does
mention this in passing, but I don't think he puts enough emphasis on the
impact that a single huge, illiquid investment that is heavily encouraged
by government policy has on people's attitude towards anything that
jeopardizes that investment.
The bulk of this book focuses on the two factions trying to make housing
cheaper: Sonja Trauss and others who are pushing for construction of more
housing, and tenant groups trying to manage the price of existing housing
for those who have to rent. The tragedy of Bay Area housing is that even
the faintest connection of housing to the economic principle of supply and
demand implies that the long-term goals of those two groups align.
Building more housing will decrease the cost of housing, at least if you
build enough of it over a long enough period of time. But in the short
term, particularly given the amount of Bay Area land pre-emptively
excluded from housing by environmental protection and the actions of the
existing homeowners, building more housing usually means tearing down
cheap lower-density housing and replacing it with expensive higher-density
housing. And that destroys people's lives.
I'll admit my natural sympathy is with Trauss on pure economic grounds.
There simply aren't enough places to live in the Bay Area, and the number
of people in the area will not decrease. To the marginal extent that
growth even slows, that's another tale of misery involving "super
commutes" of over 90 minutes each way. But the most affecting part of
this book was the detailed look at what redevelopment looks like for the
people who thought they had housing, and how it disrupts and destroys
existing communities. It's impossible to read those stories and not be
moved. But it's equally impossible to not be moved by the stories of
people who live in their cars during the week, going home only on weekends
because they have to live too far away from their jobs to commute.
This is exactly the kind of politics that I lose when I take a superficial
interest in political theory. Even when I feel confident in a guiding
principle, the hard part of real-world politics is bringing real people
with you in the implementation and mitigating the damage that any choice
of implementation will cause. There are a lot of details, and those
details matter. Without the right balance between addressing a long-term
deficit and providing short-term protection and relief, an attempt to
alleviate unsustainable long-term misery creates more short-term misery
for those least able to afford it. And while I personally may have less
sympathy for the relatively well-off who have clawed their way into their
own mortgage, being cavalier with their goals and their financial needs is
both poor ethics and poor politics. Mobilizing political opponents who
have resources and vote locally isn't a winning strategy.
Dougherty is a reporter, not a housing or public policy expert, so
Golden Gates poses problems and tells stories rather than describes
solutions. This book didn't lead me to a brilliant plan for fixing the
Bay Area housing crunch, or hand me a roadmap for how to get effectively
involved in local politics. What it did do is tell stories about what
political approaches have worked, how they've worked, what change they've
created, and the limitations of that change. Solving political problems
is work. That work requires understanding people and balancing concerns,
which in turn requires a lot of empathy, a lot of communication, and
sometimes finding a way to make unlikely allies.
I'm not sure how broad the appeal of this book will be outside of those
who live in the region. Some aspects of the fight for housing generalize,
but the Bay Area (and I suspect every region) has properties specific to
it or to the state of California. It has also reached an extreme of
housing shortage that is rivaled in the United States only by New York
City, which changes the nature of the solutions. But if you want to
seriously engage with Bay Area housing policy, knowing the background
explained here is nearly mandatory. There are some flaws (I wish
Dougherty had talked more about traffic and transit policy, although I
realize that could be another book), but this is an important
story told well.
If this somewhat narrow topic is within your interests, highly
recommended.
Rating: 8 out of 10

8 May 2020

Half a year ago,
I
wrote about the Jami communication
client, capable of peer-to-peer encrypted communication. It
handles messages, audio and video. It uses distributed hash
tables instead of central infrastructure to connect its users to each
other, which in my book is a plus. I mentioned briefly that it could
also work as a SIP client, which came in handy when the higher
educational sector in Norway started to promote Zoom as its video
conferencing solution. I am reluctant to use the official Zoom client
software, due to their copyright
license clauses prohibiting users from reverse engineering it (for example
to check its security) and benchmarking it, and thus prefer to connect to
Zoom meetings with free software clients.
Jami worked OK as a SIP client to Zoom as long as there was no
password set on the room. The Jami daemon leaks memory like crazy
(approximately 1 GiB a minute) when I am connected to the video
conference, so I had to restart the client every 7-10 minutes, which
is not great. I tried to get other SIP Linux clients to work
without success, so I decided I would have to live with this wart
until someone managed to fix the leak in the dring code base. But
another problem showed up once the rooms were password protected. I
could not get my dial tone signaling through from Jami to Zoom, and
dial tone signaling is used to enter the password when connecting to
Zoom. I tried a lot of different permutations with my Jami and
Asterisk setup to try to figure out why the signaling did not get
through, only to finally discover that the fundamental problem seems to
be that Zoom is simply not able to receive dial tone signaling when
connecting via SIP. There seems to be nothing wrong with the Jami and
Asterisk end; it is simply broken on the Zoom end. I got help from a
very skilled VoIP engineer figuring out this last part. And being a
very skilled engineer, he was also able to locate a solution for me.
Or to be exact, a workaround that solves my initial problem of
connecting to password protected Zoom rooms using Jami.
So, how do you do this, I am sure you are wondering by now. The
trick is already
documented
from Zoom, and it is to modify the SIP address to include the room
password. What is most surprising about this is that the
automatically generated email from Zoom with instructions on how to
connect via SIP does not mention this. The SIP address to use normally
consists of the room ID (a number), an @ character and the IP address
of the Zoom SIP gateway. But Zoom understands a lot more than just the
room ID in front of the at sign. The format is "[Meeting
ID].[Password].[Layout].[Host Key]", and you can see how you
can both enter the password, control the layout (full screen, active
presence and gallery) and specify the host key to start the meeting.
The full SIP address entered into Jami to provide the password will
then look like this (all using made up numbers):

sip:657837644.522827@192.168.169.170

Now if only Jami would reduce its memory usage, I could even
recommend this setup to others. :)
As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

6 May 2020

Currently there is political debate about when businesses should be reopened after the COVID-19 quarantine.
Small Businesses
One argument for reopening things is for the benefit of small businesses. The first thing to note is that the protests in the US say "I need a haircut", not "I need to cut people's hair". Small businesses won't benefit from reopening sooner.
For every business there is a certain minimum number of customers needed to be profitable. There are many comments from small business owners who want things to remain shut down. When the government has declared a shutdown, paused rent payments, and provided social security to employees who aren't working, a small business can avoid bankruptcy. If they suddenly have to pay salaries or make redundancy payouts, and have to pay rent while they can't make a profit due to customers staying home, they will go bankrupt.
Many restaurants and cafes make little or no profit at most times of the week (I used to be 1/3 owner of an Internet cafe and know this well). For such a company to be viable you have to be open most of the time so customers can expect you to be open. Generally you don't keep a cafe open at 3PM to make money at 3PM, you keep it open so people can rely on there being a cafe open there; someone who buys a can of soda at 3PM one day might come back for lunch at 1:30PM the next day because they know you are open. A large portion of the opening hours of most retail companies can be considered as either advertising for trade at the profitable hours or as loss making times that you can't close because you can't send an employee home for an hour.
If you have seating for 28 people (as my cafe did) then for about half the opening hours you will probably have 2 or fewer customers in there at any time, and for about a quarter of the opening hours you probably won't cover the salary of the one person on duty. The weekend is when you make the real money, especially Friday and Saturday nights when you sometimes get all the seats full and people coming in for takeaway coffee and snacks. On Friday and Saturday nights the 60 seat restaurant next door to my cafe used to tell customers that my cafe made better coffee. It wasn't economical for them to have a table full for an hour while they sell a few cups of coffee; they wanted customers to leave after dessert and free the table for someone who wants a meal with wine (alcohol is the real profit for many restaurants).
The plans for reopening with social distancing mean that a 28 seat cafe can only have 14 chairs or fewer (some plans have 25% capacity, which would mean 7 people maximum). That means decreasing the revenue of the most profitable times by 50% to 75% while not decreasing the operating costs much. A small cafe has 2-3 staff when it's crowded, so there's no possibility of reducing staff by 75% when reducing the revenue by 75%.
My Internet cafe would have closed immediately if forced to operate in the proposed social distancing model. It would have had 1/4 of the trade and about 1/8 of the profit at the most profitable times, even if enough customers were prepared to visit, and social distancing would kill the atmosphere. Most small businesses are barely profitable anyway; most small businesses don't last 4 years in normal economic circumstances.
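The capacity arithmetic above can be put in a quick back-of-envelope sketch (an illustration using the numbers from this post, not a business model):

```python
# Seats permitted under the proposed distancing limits for a 28-seat cafe.
def seats_allowed(seats, capacity):
    return int(seats * capacity)

seats = 28
print(seats_allowed(seats, 0.50))  # 14 seats at half capacity
print(seats_allowed(seats, 0.25))  # 7 seats at quarter capacity
# Peak-hour revenue roughly tracks seats, so the cut is 50% to 75%,
# while a 2-3 person crew at busy times cannot shrink proportionally.
```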
This reopen movement is about cutting unemployment benefits, not about helping small business owners. Destroying small businesses is also good for big corporations: kill the small cafes and restaurants and McDonald's and Starbucks will win. I think this is part of the motivation behind the astroturf campaign for reopening businesses.
Forbes has an article about this [1].
Psychological Issues
Some people claim that we should reopen businesses to help people who have psychological problems from isolation, to help victims of domestic violence who are trapped at home, to stop older people being unemployed for the rest of their lives, etc.
Here is one article with advice for policy makers from domestic violence experts [2]. One thing it mentions is that the primary US federal government program to deal with family violence had a budget of $130M in 2013. The main thing that should be done about family violence is to make it a priority at all times (not just when it can be a reason for avoiding other issues) and allocate some serious budget to it. An agency that deals with problems that affect families and only has a budget of $1 per family per year isn't going to be able to do much.
There are ongoing issues of people stuck at home for various reasons. We could work on better public transport to help people who can't drive. We could work on better healthcare to help some of the people who can't leave home due to health problems. We could have more budget for carers to help people who can't leave home without assistance. Wanting to reopen restaurants because some people feel isolated ignores the fact that social isolation is a long term ongoing issue for many people, and that many of the people who are affected can't even afford to eat at a restaurant!
Employment discrimination against people in the 50+ age range is an ongoing thing; many people in that age range know that if they lose their job and can't immediately find another they will be unemployed for the rest of their lives. Reopening small businesses won't help that; businesses running at low capacity will have to lay people off, and it will probably be the older people. Also the unemployment system doesn't deal well with part time work. The Australian system (which I think is similar to most systems in this regard) reduces unemployment benefits by $0.50 for every dollar that is earned in part time work, which effectively puts people who are doing part time work because they can't get a full-time job in the highest tax bracket! If someone is going to pay for transport to get to work, work a few hours, then get half the money they earned deducted from unemployment benefits, it hardly makes it worthwhile to work. While the exact health impacts of COVID-19 aren't well known at this stage, it seems very clear that older people are disproportionately affected, so forcing older people to go back to work before there is a vaccine isn't going to help them.
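The taper described above comes down to one line of arithmetic (a simplified sketch; real benefit rules have thresholds and exemptions this ignores):

```python
# Each dollar earned part-time reduces benefits by $0.50, so the effective
# marginal rate on those earnings is 50%, before tax and transport costs.
def net_gain(earned, taper=0.50):
    """Money actually kept from part-time earnings under the taper."""
    return earned - taper * earned

print(net_gain(100))  # 50.0 kept from every $100 earned
```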
When it comes to these discussions I think we should be very suspicious of people who raise issues they haven't previously shown interest in. If the discussion of reopening businesses seems to be someone's first interest in the issues of mental health, social security, etc., then they probably aren't that concerned about such issues.
I believe that we should have a Universal Basic Income [3]. I believe that we need to provide better mental health care and challenge the gender ideas that hurt men and cause men to hurt women [4]. I believe that we have significant ongoing problems with inequality, not small short term issues [5]. I don't think that any of these issues require specific changes to our approach to preventing the transmission of disease. I also think that we can address multiple issues at the same time, so it is possible for the government to devote more resources to addressing unemployment, family violence, etc. while also dealing with a pandemic.

30 April 2020

Here is my monthly update covering what I have been doing in the free software world during April 2020 (previous month's report). Looking it over prior to publishing, I am surprised how much I got done this month; I felt that I was not only failing to do all the extra things I had planned, but doing far less than normal. But let us go easy on ourselves; nobody is nailing this.

Made some small changes to my tickle-me-email library, which implements Getting Things Done (GTD)-like behaviours in IMAP inboxes, in order to decode various headers correctly [...] and correct the counting logic in the send-later command's message limit. [...]

Worked with @dormando on an architecture-specific problem in the Memcached caching system to fix grossly incorrect behaviour on big-endian architectures. [...]
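As an illustration of the class of bug involved (not the actual Memcached fix): the same 32-bit value is laid out differently in memory on big- and little-endian machines, so code that reads raw bytes without an explicit byte order gives different answers on each:

```python
import struct

value = 0x12345678
big = struct.pack(">I", value)     # big-endian layout, as on s390x or SPARC
little = struct.pack("<I", value)  # little-endian layout, as on x86-64
print(big.hex())     # 12345678
print(little.hex())  # 78563412

# Going through an explicit byte order round-trips on any architecture:
assert struct.unpack(">I", big)[0] == value
```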

As part of my duties of being on the board of directors of the Open Source Initiative and Software in the Public Interest I attended their respective monthly meetings and participated in various licensing and other discussions occurring on the internet, as well as the usual internal discussions regarding logistics, licensing, policy, liaising with the ClearlyDefined project and so on. In particular, I on-boarded the Ganeti project to SPI.

Reproducible builds
One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. However, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into ostensibly secure software during the various compilation and distribution processes.
The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
The initiative is proud to be a member project of the Software Freedom Conservancy, a not-for-profit 501(c)(3) charity focused on ethical technology and user freedom.
Conservancy acts as a corporate umbrella allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

Wrote a 20-page funding report to the Open Technology Fund -- whilst the Reproducible Builds project has submitted monthly reports to the otf-active mailing list, this final report described in detail the status of each objective, our overall lessons and our future plans.

Add support for custom .zip filename filtering and exclude two patterns of files generated by Maven projects in "fork" mode. (#13)

disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues. This month I fixed a long-standing issue by not dropping UNIX groups in FUSE multi-user mode when we are not root. (#1)
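The logic of the fix, as I understand it (a Python paraphrase for illustration; disorderfs itself is C++), is simply to guard the privileged call: setgroups() fails for non-root processes, so only attempt it when running as root:

```python
import os

def should_drop_groups(uid):
    """Drop supplementary UNIX groups only when running as root (uid 0)."""
    return uid == 0

# Guarding the privileged call (illustrative, not the actual disorderfs code):
if should_drop_groups(os.getuid()):
    os.setgroups([])  # would raise PermissionError for an unprivileged process
```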

Categorised a large number of packages and issues in the Reproducible Builds "notes" repository.

Elsewhere in our tooling, I made the following changes to diffoscope, our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues, including preparing and uploading versions 139, 140, 141 and 142 to Debian:

Comparison improvements:

Dalvik .dex files can also serve as APK containers, so restrict the narrower identification of .dex files to files ending with this extension, and widen the identification of APK files to whenever file(1) discovers a Dalvik file. (#28)
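A hypothetical sketch of that identification rule (the function and its shape are mine; the real diffoscope code differs):

```python
def classify(filename, file1_output):
    """Classify using both the extension and file(1)'s description."""
    is_dalvik = "Dalvik" in file1_output
    if filename.endswith(".dex") and is_dalvik:
        return "dex"   # narrower rule: only files actually named *.dex
    if is_dalvik:
        return "apk"   # wider rule: Dalvik content under any other name
    return "other"

print(classify("classes.dex", "Dalvik dex file version 035"))  # dex
print(classify("app.apk", "Dalvik dex file version 035"))      # apk
```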

It was discovered that there was a path-traversal issue in the Apache Shiro Java security framework (CVE-2020-1957) where a specially-crafted request could cause an authentication bypass. I therefore issued DLA 2181-1 to address this.

Issued ELA-225-1 for dom4j, a library for working with various XML formats on the Java platform, to address an XML external entity vulnerability (CVE-2020-10683). This type of attack occurs when XML input containing a reference to an internet-facing entity is processed by a weakly configured XML parser. This attack may lead to the disclosure of confidential data, denial of service, server-side request forgery as well as other system impacts.
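To show the shape of such an attack and one blunt mitigation (a generic sketch, not dom4j's actual fix): an XXE payload smuggles an external entity in via the DOCTYPE, so refusing any input that declares a DOCTYPE blocks this class of attack outright:

```python
# A classic XXE payload: the DOCTYPE declares an external entity pointing at
# a local file, which a misconfigured parser would happily expand.
XXE_PAYLOAD = (
    '<?xml version="1.0"?>'
    '<!DOCTYPE data [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>'
    '<data>&xxe;</data>'
)

def reject_doctype(xml_text):
    """Refuse XML carrying a DOCTYPE before it ever reaches the parser."""
    if "<!DOCTYPE" in xml_text:
        raise ValueError("refusing XML with a DOCTYPE declaration")
    return xml_text

try:
    reject_doctype(XXE_PAYLOAD)
except ValueError as exc:
    print("blocked:", exc)
```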

Issued ELA-224-2 to update the earlier fix to ntp, adding further protection that was not present in the previous update.

onioncircuits (0.6-3): Mark a non-deterministic autopkgtest as "flaky" for now to ease migration (#930448) and use DEB_VERSION_UPSTREAM_REVISION over manually parsing dpkg-parsechangelog in debian/rules.

26 April 2020

Artemisia Lomi or Artemisia Gentileschi (July 8, 1593 – c. 1656) was an Italian Baroque painter, now considered one of the most accomplished seventeenth-century artists working in the dramatic style of Caravaggio. In an era when women had few opportunities to pursue artistic training or work as professional artists, Artemisia was the first woman to become a member of the Accademia di Arte del Disegno in Florence and had an international clientele.

Laura Maria Caterina Bassi (October 1711 – 20 February 1778) was an Italian physicist and academic. She received a doctoral degree in Philosophy from the University of Bologna in May 1732. She was the first woman to earn a professorship in physics at a university. She is recognized as the first woman in the world to be appointed a university chair in a scientific field of studies. Bassi contributed immensely to the field of science while also helping to spread the study of Newtonian mechanics through Italy.

Maria Gaetana Agnesi (16 May 1718 – 9 January 1799) was an Italian mathematician, philosopher, theologian, and humanitarian. She was the first woman to write a mathematics handbook and the first woman appointed as a mathematics professor at a university.[5]

Elena Lucrezia Cornaro Piscopia or Elena Lucrezia Corner (5 June 1646 – 26 July 1684), also known in English as Helen Cornaro, was a Venetian philosopher of noble descent who in 1678 became one of the first women to receive an academic degree from a university, and the first to receive a Doctor of Philosophy degree.

Maria Tecla Artemisia Montessori (MON-tiss-OR-ee; August 31, 1870 – May 6, 1952) was an Italian physician and educator best known for the philosophy of education that bears her name, and her writing on scientific pedagogy. At an early age, Montessori broke gender barriers and expectations when she enrolled in classes at an all-boys technical school, with hopes of becoming an engineer. She soon had a change of heart and began medical school at the Sapienza University of Rome, where she graduated with honors in 1896. Her educational method is still in use today in many public and private schools throughout the world.

Rita Levi-Montalcini OMRI OMCA (22 April 1909 – 30 December
was awarded the 1986 Nobel Prize in Physiology or Medicine jointly with colleague
Stanley Cohen for the discovery of nerve growth factor (NGF). From 2001 until
her death, she also served in the Italian Senate as a Senator for Life. This
honor was given due to her significant scientific contributions. On 22 April
2009, she became the first Nobel laureate ever to reach the age of 100, and
the event was feted with a party at Rome's City Hall. At the time of her
death, she was the oldest living Nobel laureate.

Samantha Cristoforetti (born 26 April 1977, in Milan) is an Italian European Space Agency astronaut, former Italian Air Force pilot and engineer. She holds the record for the longest uninterrupted spaceflight by a European astronaut (199 days, 16 hours), and held the record for the longest single spaceflight by a woman until it was broken in June 2017 by Peggy Whitson and later by Christina Koch. She is also the first Italian woman in space, and is known as the first person to brew an espresso in space.

21 April 2020

My fellow Debianites,
It's been one month, one week and one day since I
decided to run for this DPL
term. The Debian community has been through a variety of interesting times during the last decade,
and instead of focusing on grand, sweeping changes for Debian, core to my
DPL campaign was to establish a sense of normality and stability so that we can work on
community building, continue to focus on technical excellence and serve our users the best we can.
Things don't always work out as we plan, and for many of us, Debian recently had
to take a back seat to personal priorities.
Back when I posted my intention to run, there were
125,260 confirmed cases of
COVID-19 globally. Today, that number is 20 times higher, with the
actual infected number likely to be significantly higher. A large number
of us are under lock-down, where we not only fear the disease and its effect on
local hospitals and how it will affect our loved ones, but also our very livelihoods
and the future of our local businesses and industry.
I don't mean to be gloomy with the statement above, I am after all, an optimist -
but unfortunately it does get even worse. Governments and corporations
around the world have started to take advantage of COVID-19 in negative ways and
are making large sweeping changes that undermine the privacy and rights of individuals
everywhere.
For many reasons, including those above, I believe that the Debian project
is more important and relevant now than it's ever been before. The world needs a
free, general purpose operating system, unburdened by the needs of profit,
which puts the needs of its users first, providing a safe and secure platform for
the computing needs of the masses.
While we can't control or fix all the problems in the world, we can control our
response to it, and be part of the solutions that bring the change we want to
see.
During my term as DPL, I will be available to help with problems in our community
to the maximum extent that my time permits. If we help ourselves, we will be in
a better position to help others. If you (or your team) get stuck and are in
need of help, then please do not hesitate to e-mail me.
A few thank-yous
As incoming DPL, I'd like to thank Sam Hartman on behalf of the project
for the work that he's done over the last year as DPL. It's a tremendous
time commitment that requires constant attention to detail. On Sunday,
Sam and I had a handover meeting where we discussed various DPL
responsibilities including finances, delegations (including specifics of
some delegations), legal matters, outreach and other questions I had.
I'd also like to thank Sam for taking the time to do this.
Thank you to Sruthi Chandran and Brian Gupta who took the time to also run
for DPL this year. Both candidates brought important issues to the forefront
and I hope to work with both of them on those in the near future.
DPL Blog
Today, I've started a new blog for the Debian
Project Leader to help facilitate
more frequent communication, and to reach a wider audience via Planet
Debian. This will contain supplemental information to what I send to the
debian-devel-announce mailing list.
Want to help?
In my platform,
I listed some key areas that I'd like to work on. My work
won't be limited to those, but it should give you some idea of the type of DPL
that I'll be. If you'd like to get involved, feel free to join the #debian-dpl
channel on the oftc IRC network, and please introduce yourself along with any
areas of interest that you'd like to contribute to.


10 April 2020

Get the Champagne ready, we have released the final images of TeX Live 2020.
Due to COVID-19, DVD production will be delayed, but we have decided to release the current image and update the net installer. The .iso image is available on CTAN, and the net installer will pull all the newest stuff. Currently we are working on getting those packages updated during the freeze to the newest level in TeX Live.
Before providing the full list of changes, here a few things I would like to pick out:

LuaHBTeX: lualatex is now based on LuaHBTeX, meaning that one can use the HarfBuzz renderer, which in particular for complicated scripts (Tibetan, Bengali, etc.) works better than the Lua-based renderer. Note that luatex itself remains normal LuaTeX; only lualatex uses LuaHBTeX.

Versioned containers: this is a change under the hood we have been working on slowly over the last half year. Many distributions had problems with the changing content of our package containers (foobar.tar.xz) while the name never changed. We have now changed all the infrastructure and TeX Live Manager to work with versioned containers (foobar.rNNNNN.tar.xz). This should help quite a few distributors!

Haranoaji: the default font for Japanese text was for a long time the IPAex fonts, one of the few free fonts available. With 2020 we have switched to the Haranoaji font family, which provides better support for JIS90/04 charsets, and more weights.

Most of the above features have been available already either via tlpretest or via regular updates, but are now fully released on the DVD version.
Thanks goes to all the developers, builders, the great CTAN team, and everyone who has contributed to this release!
Finally, here are the changes as listed in the master TeX Live documentation:
General:

The \input primitive in all TeX engines, including tex, now also accepts a group-delimited filename argument, as a system-dependent extension. The usage with a standard space/token-delimited filename is completely unchanged. The group-delimited argument was previously implemented in LuaTeX; now it is available in all engines. ASCII double quote characters (") are removed from the filename, but it is otherwise left unchanged after tokenization. This does not currently affect LaTeX's \input command, as that is a macro redefinition of the standard \input primitive.

New option cnf-line for kpsewhich, tex, mf, and all other engines, to support arbitrary configuration settings on the command line.

The addition of various primitives to various engines in this and previous years is intended to result in a common set of functionality available across all engines (LaTeX News #31).

epTeX, eupTeX: New primitives \Uchar, \Ucharcat, \current(x)spacingmode, \ifincsname; revise \fontchar?? and \iffontchar. For eupTeX only: \currentcjktoken.
LuaTeX: Integration with HarfBuzz library, available as new engines luahbtex (used for lualatex) and luajithbtex. New primitives: \eTeXgluestretchorder, \eTeXglueshrinkorder.
pdfTeX: New primitive \pdfmajorversion; this merely changes the version number in the PDF output; it has no effect on any PDF content. \pdfximage and similar now search for image files in the same way as \openin.
pTeX: New primitives \ifjfont, \iftfont. Also in epTeX, upTeX, eupTeX.
XeTeX: Fixes for \Umathchardef, \XeTeXinterchartoks, \pdfsavepos.
Dvips: Output encodings for bitmap fonts, for better copy/paste capabilities (https://tug.org/TUGboat/tb40-2/tb125rokicki-type3search.pdf).
MacTeX: MacTeX and x86_64-darwin now require 10.13 or higher (High Sierra, Mojave, and Catalina); x86_64-darwinlegacy supports 10.6 and newer. MacTeX is notarized and command line programs have hardened runtimes, as now required by Apple for install packages. BibDesk and TeX Live Utility are not in MacTeX because they are not notarized, but a README file lists URLs where they can be obtained.
tlmgr and infrastructure:

Automatically retry (once) packages that fail to download.

New option tlmgr check texmfdbs, to check consistency of ls-R files and !! specifications for each tree.

Use versioned filenames for the package containers, as in tlnet/archive/pkgname.rNNN.tar.xz; should be invisible to users, but a notable change in distribution.

catalogue-date information no longer propagated from the TeX Catalogue, since it was often unrelated to package updates.
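The versioned container naming mentioned above (tlnet/archive/pkgname.rNNN.tar.xz) is straightforward to parse; here is a hypothetical sketch (not tlmgr's own code, which is Perl):

```python
import re

# Matches versioned container names like "foobar.r54321.tar.xz".
CONTAINER_RE = re.compile(r"^(?P<name>.+)\.r(?P<rev>\d+)\.tar\.xz$")

def parse_container(filename):
    """Return (package name, revision) for a versioned container, else None."""
    m = CONTAINER_RE.match(filename)
    if not m:
        return None
    return m.group("name"), int(m.group("rev"))

print(parse_container("foobar.r54321.tar.xz"))  # ('foobar', 54321)
print(parse_container("foobar.tar.xz"))         # None: unversioned name
```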

That's all, let the fun begin! And again, thanks to all the developers, builders, the great CTAN team, and everyone who has contributed to this release!

1 April 2020

Here is my transparent report for my work on the Debian Long Term Support (LTS) and Debian Extended Long Term Support (ELTS), which extend the security support for past Debian releases, as a paid contributor.
In March, the monthly sponsored hours were split evenly among contributors depending on their max availability - I was assigned 30h for LTS (out of 30 max; all done) and 20h for ELTS (out of 20 max; I did 0).
Most contributors claimed vulnerabilities by performing early CVE monitoring/triaging on their own, making me question the relevance of the Front-Desk role. It could be due to a transient combination of higher hours volume and lower open vulnerabilities.
Working as a collective of hourly paid freelancers makes it more likely to work in silos, resulting in little interaction when raising workflow topics on the mailing list. Maybe we're reaching a point where regular team meetings would be beneficial.
As previously mentioned, I structure my work keeping the global Debian security in mind. It can be stressful though, and I believe current communication practices may deter such initiatives.
ELTS - Wheezy

No work. ELTS has few sponsors right now and few vulnerabilities to fix, which is why I could not work on it this month. I gave back my hours at the end of the month.

Elizabeth Cochran Seaman[1] (May 5, 1864[2] – January 27, 1922), better known by her pen name Nellie Bly, was an American journalist who was widely known for her record-breaking trip around the world in 72 days, in emulation of Jules Verne's fictional character Phileas Fogg, and an exposé in which she worked undercover to report on a mental institution from within.[3] She was a pioneer in her field, and launched a new kind of investigative journalism.[4] Bly was also a writer, inventor, and industrialist.

Delia Ann Derbyshire (5 May 1937 – 3 July 2001)[1] was an English musician and composer of electronic music.[2] She carried out pioneering work with the BBC Radiophonic Workshop during the 1960s, including her electronic arrangement of the theme music to the British science-fiction television series Doctor Who.[3][4] She has been referred to as "the unsung heroine of British electronic music,"[3] having influenced musicians including Aphex Twin, the Chemical Brothers and Paul Hartnoll of Orbital.[5]

Charity Adams Earley (5 December 1918 – 13 January 2002) was the first African-American woman to be an officer in the Women's Army Auxiliary Corps (later WACS) and was the commanding officer of the first battalion of African-American women to serve overseas during World War II. Adams was the highest ranking African-American woman in the army by the completion of the war.

19 March 2020

Today I'd like to post a few updates about COVID-19, as well as some advice, all gathered from credible sources.
Summary

Coronavirus causes health impacts requiring hospitalization in a significant percentage of all adult age groups.

Coronavirus also can cause no symptoms at all in many, especially children.

Be serious about social distancing.

COVID-19 is serious for young adults too
According to this report based on a CDC analysis, between 14% and 20% of people aged 20 to 44 require hospitalization due to COVID-19. That's enough to be taken seriously. See also this CNN story.
Act as if you are a carrier because you may be infected and not even know it, even children
Information on this is somewhat preliminary, but it is certainly known that a certain set of cases is asymptomatic. This article discusses manifestations in children, while this summary of a summary (note: not original research) suggests that 17.9% of people may not even know they are infected.
How serious is this? Serious.
This excellent article by Daniel W. Johnson, MD, is a very good read. Among the points it makes:

Anyone that says it's no big deal is wrong.

If we treat this like WWI or WWII and everyone does the right things, we will be harmed but OK. If many but not all people do the right things, we'll be like Italy. If we blow it off, our health care system and life as we know it will be crippled.

If we don't seriously work to flatten the curve, many lives will be needlessly lost.

Advice
I'm going to just copy Dr. Johnson's advice here:

You and your kids should stay home. This includes not going to church, not going to the gym, not going anywhere.

Do not travel for enjoyment until this is done. Do not travel for work unless your work truly requires it.

Avoid groups of people. Not just crowds, groups. Just be around your immediate family. I think kids should just play with siblings at this point: no play dates, etc.

When you must leave your home (to get groceries, to go to work), maintain a distance of six feet from people. REALLY stay away from people with a cough or who look sick.

When you do get groceries, etc., buy twice as much as you normally do so that you can go to the store half as often. Use hand sanitizer immediately after your transaction, and immediately after you unload the groceries.

I'm not saying people should not go to work. Just don't leave the house for anything unnecessary, and if you can work from home, do it.
Everyone on this email, besides Mom and Dad, is at low risk for severe disease if/when they contract COVID-19. While this is great, that is not the main point. When young, well people fail to do social distancing and hygiene, they pick up the virus and transmit it to older people who are at higher risk for critical illness or death. So everyone needs to stay home. Even young people.
Tell every person over 60, and every person with significant medical conditions, to avoid being around people. Please do not have your kids visit their grandparents if you can avoid it. FaceTime them.
Our nation is the strongest one in the world. We have been through other extreme challenges and succeeded many times before. We WILL return to normal life. Please take these measures now to flatten the curve, so that we can avoid catastrophe.

I'd also add that many supermarkets offer delivery or pickup options that allow you to get your groceries without entering the store. Some are also offering to let older people shop an hour before the store opens to the general public. These could help you minimize your exposure.
Other helpful links
Here is a Reddit megathread with state-specific unemployment resources.
Scammers are already trying to prey on people. Here are some important tips to avoid being a victim.
Although there are varying opinions, some are recommending avoiding ibuprofen when treating COVID-19.
Bill Gates had some useful advice. Here s a summary emphasizing the need for good testing.

git clone REPO
./REPO/bootstrap.sh

... something eerily similar to the infamous curl pipe bash
method which I often decry. As a short-term workaround, I relied on
the SHA-1 checksum of the repository to make sure I have the right
code, by running this both on a "trusted" (ie. "local") repository and
the remote, then visually comparing the output:
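The command itself isn't reproduced above. As a rough, self-contained sketch of the idea (the throwaway repository below is hypothetical; in practice the argument to git ls-remote would be an ssh:// or https:// URL):

```shell
#!/bin/sh
# Sketch: compare the tip commit hash of a "trusted" local clone with
# the hash the remote advertises for the same ref. A throwaway
# repository stands in for both sides so the example is self-contained.
set -e
repo=$(mktemp -d)
git init -q "$repo"
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# Hash of the local ("trusted") checkout:
local_hash=$(git -C "$repo" rev-parse HEAD)

# Hash advertised by the "remote" (here the same repo over the file
# transport; in practice, an ssh:// or https:// URL):
remote_hash=$(git ls-remote "$repo" HEAD | cut -f1)

echo "local:  $local_hash"
echo "remote: $remote_hash"
[ "$local_hash" = "$remote_hash" ] && echo "hashes match"
```

Note that this only compares the tips of the two repositories, and the hashes being compared are still SHA-1.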

One problem with this approach is that SHA-1 is now considered as flawed as MD5, so it can't be used as an authentication mechanism anymore. It's also fundamentally difficult for humans to compare hashes.
The other flaw with comparing local and remote checksums is that we
assume we trust the local repository. But how can I trust that
repository? I can either:

audit all the code present and all the changes made to it afterwards,

or trust someone else to do so

The first option here is not practical in most cases. In this specific
use case, I have audited the source code -- I'm the author, even --
what I need is to transfer that code over to another server.
(Note that I am replacing those procedures with Fabric, which
makes this use case moot for now as the trust path narrows to "trust
the SSH server" which I already had anyways. But it's still important
for my fellow Tor developers who worry about trusting the git server,
especially now that we're moving to GitLab.)
But anyways, in most cases, I do need to trust some other fellow
developer I collaborate with. To do this, I would need to trust the
entire chain between me and them:

the git client

the operating system

the hardware

then the hosting provider (and that hardware/software stack)

and then backwards all the way back to that other person's computer

I want to shorten that chain as much as possible, make it "peer to
peer", so to speak. Concretely, it would eliminate the hosting
provider and the network as potential attackers.

OpenPGP verification
My first reaction is (perhaps perversely) to "use OpenPGP" for this. I
figured that if I sign every commit, then I can just check the latest
commit and see if the signature is good.
The first problem here is that this is surprisingly hard. Let's pick
some arbitrary commit I did recently:

That's the output of git log -p in my local repository. I signed
that commit, yet git log is not telling me anything special. To
check the signature, I need something special: --show-signature,
which looks like this:
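The output isn't reproduced here, but the invocation can be sketched in a hypothetical throwaway repository (with an unsigned demo commit, since signing requires a key):

```shell
#!/bin/sh
# Sketch: ask git log to also report signature status. On this unsigned
# demo commit nothing special is printed; on a signed commit whose
# public key is missing from the keyring, gpg instead reports
# "Can't check signature: No public key" -- and git still exits 0.
set -e
repo=$(mktemp -d)
git init -q "$repo"
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "demo commit"
git -C "$repo" log --show-signature -1
```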

Important part: Can't check signature: No public key. No public
key. Because of course you would see that. Why would you have my
key lying around, unless you're me. Or, to put it another way, why
would that server I'm installing from scratch have a copy of my
OpenPGP certificate? Because I'm a Debian developer, my key is
actually part of the 800 keys in the debian-keyring package,
signed by the APT repositories. So I have a trust path.
But that won't work for someone who is not a Debian developer. It will
also stop working when my key expires in that repository, as it
already has on Debian buster (current stable). So I can't assume I
have a trust path there either. That said, one could work with a trusted keyring, like we do in the Tor and Debian projects, and only work inside that project.
But I still feel uncomfortable with those commands. Both git log and
git show will happily succeed (return code 0 in the shell) even
though the signature verification failed on the commits. Same with
git pull and git merge, which will happily push your branch ahead
even if the remote has unsigned or badly signed commits.
To actually verify commits (or tags), you need the git
verify-commit (or git verify-tag) command, which seems to do
the right thing:
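The output isn't reproduced here either; a minimal sketch of the failure mode (an unsigned commit in a hypothetical throwaway repository) looks like this:

```shell
#!/bin/sh
# Sketch: unlike git log/show, git verify-commit exits non-zero when it
# cannot verify a signature -- including the trivial case of a commit
# that is not signed at all.
repo=$(mktemp -d)
git init -q "$repo"
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "unsigned commit"

git -C "$repo" verify-commit HEAD 2>/dev/null
status=$?
echo "verify-commit exit code: $status"
```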

At least it fails with some error code (1, above). But it's not
flexible: I can't use it to verify that a "trusted" developer (say one
that is in a trusted keyring) signed a given commit. Also, it is not
clear what a failure means. Is a signature by an expired certificate
okay? What if the key is signed by some random key in my personal
keyring? Why should that be trusted?

Worrying about git and GnuPG
In general, I'm worried about git's implementation of OpenPGP signatures. There have been numerous cases of interoperability problems with GnuPG specifically that led to security issues, like EFAIL or SigSpoof. It would be surprising if such a vulnerability did not exist in git.
Even if git did everything "just right" (which I have myself found impossible to do when writing code that talks to GnuPG), what does it actually verify? The commit's SHA-1 checksum? The tree's checksum? The entire archive as a zip file? I would bet it signs the commit's SHA-1 sum, but I just don't know off the top of my head, and neither git-commit nor git-verify-commit says exactly what is happening.
I had an interesting conversation with a fellow Debian developer
(dkg) about this and we had to admit those limitations:

<anarcat> i'd like to integrate pgp signing into tor's coding
practices more, but so far, my approach has been "sign commits" and
the verify step was "TBD"
<dkg> that's the main reason i've been reluctant to sign git
commits. i haven't heard anyone offer a better subsequent step. if
torproject could outline something useful, then i'd be less averse
to the practice.
i'm also pretty sad that git remains stuck on sha1, esp. given the
recent demonstrations. all the fancy strong signatures you can make
in git won't matter if the underlying git repo gets changed out from
under the signature due to sha1's weakness

In other words, even if git implements the arcane GnuPG dialect just
so, and would allow us to setup the trust chain just right, and
would give us meaningful and workable error messages, it still would
fail because it's still stuck in SHA-1. There is work underway to
fix that, but in February 2020, Jonathan Corbet described that work as
being in a "relatively unstable state", which is hardly something I
would like to trust to verify code.
Also, when you clone a fresh new repository, you might get an entirely
different repository, with a different root and set of commits. The
concept of "validity" of a commit, in itself, is hard to establish in
this case, because a hostile server could put you backwards in time,
on a different branch, or even on an entirely different
repository. Git will warn you about a different repository root with
warning: no common commits but that's easy to miss. And complete
branch switches, rebases and resets from upstream are hardly more
noticeable: only a tiny plus sign (+) instead of a star (*) will
tell you that a reset happened, along with a warning (forced update)
on the same line. Miss those and your git history can be compromised.

Possible ways forward
I don't consider the current implementation of OpenPGP signatures in
git to be sufficient. Maybe, eventually, it will mature away from
SHA-1 and the interface will be more reasonable, but I don't see that
happening in the short term. So what do we do?

git evtag
The git-evtag extension is a replacement for git tag -s. It's
not designed to sign commits (it only verifies tags) but at least it
uses a stronger algorithm (SHA-512) to checksum the tree, and will
include everything in that tree, including blobs. If that sounds
expensive to you, don't worry too much: it takes about 5 seconds to
tag the Linux kernel, according to the author.
Unfortunately, that checksum is then signed with GnuPG, in a manner
similar to git itself, in that it exposes GnuPG output (which can be
confusing) and is likely similarly vulnerable to mis-implementation of
the GnuPG dialect as git itself. It also does not allow you to specify
a keyring to verify against, so you need to trust GnuPG to make sense
of the garbage that lives in your personal keyring (and, trust me, it
doesn't).
And besides, git-evtag is fundamentally the same as signed git tags:
checksum everything and sign with GnuPG. The difference is it uses
SHA-512 instead of SHA-1, but that's something git will eventually fix
itself anyways.

kernel patch attestations
The kernel also faces this problem. Linus Torvalds signs the releases with GnuPG, but patches fly all over mailing lists without any form of verification apart from clear-text email. So Konstantin Ryabitsev has proposed a new protocol to sign git patches, which uses SHA-256 to checksum the patch metadata, commit message and the patch itself, and then signs that with GnuPG.
It's unclear to me what this solves, if anything at all. As dkg argues, it would seem better to add OpenPGP support to git-send-email and teach git tools to recognize that (e.g. git-am), at least if you're going to keep using OpenPGP anyways.
And furthermore, it doesn't resolve the problems associated with
verifying a full archive either, as it only attests "patches".

jcat
Unhappy with the current state of affairs, the author of fwupd
(Richard Hughes) wrote his own protocol as well, called
jcat, which provides signed "catalog files" similar to the ones
provided in Microsoft Windows.
It consists of "gzip-compressed JSON catalog files, which can be
used to store GPG, PKCS-7 and SHA-256 checksums for each file". So
yes, it is yet again another wrapper to GnuPG, probably with all the
flaws detailed above, on top of being a niche implementation,
disconnected from git.

The Update Framework
One more thing dkg correctly identified is:

<dkg> anarcat: even if you could do exactly what you describe,
there are still some interesting wrinkles that i think would be
problems for you.
the big one: "git repo's latest commits" is a loophole big enough to
drive a truck through. if your adversary controls that repo, then
they get to decide which commits to include in the repo. (since
every git repo is a view into the same git repo, just some have more
commits than others)

In other words, unless you have a repository that has frequent commits
(either because of activity or by a bot generating fake commits), you
have to rely on the central server to decide what "the latest version"
is. This is the kind of problem that binary package distribution
systems like APT and TUF solve correctly. Unfortunately, those
don't apply to source code distribution, at least not in git form: TUF
only deals with "repositories" and binary packages, and APT only deals
with binary packages and source tarballs.
That said, there's actually no reason why git could not support the
TUF specification. Maybe TUF could be the solution to ensure
end-to-end cryptographic integrity of the source code
itself. OpenPGP-signed tarballs are nice, and signed git tags can be
useful, but from my experience, a lot of OpenPGP (or, more accurately,
GnuPG) derived tools are brittle and do not offer clear guarantees,
and definitely not to the level that TUF tries to address.
This would require changes on the git servers and clients, but I think
it would be worth it.

Other Projects

OpenBSD
There are other tools trying to do parts of what GnuPG is doing, for
example minisign and OpenBSD's signify. But they do not
integrate with git at all right now. Although I did find a hack to use signify with git, it's kind of gross...

Golang
Unsurprisingly, this is a problem everyone is trying to solve. Golang
is planning on hosting a notary which would leverage a
"certificate-transparency-style tamper-proof log" which would be run
by Google (see the spec for details). But that doesn't resolve the
"evil server" attack, if we treat Google as an adversary (and we should).

Python
Python had OpenPGP going for a while on PyPI, but it's unclear if it
ever did anything at all. Now the plan seems to be to use TUF but
my hunch is that the complexity of the specification is keeping that
from moving ahead.

Docker
Docker and the container ecosystem have, in theory, moved to TUF in the
form of Notary, "a project that allows anyone to have trust over
arbitrary collections of data". In practice however, in my somewhat
limited experience,
setting up TUF and image verification in Docker is far from trivial.

Android and iOS
Even in what is possibly one of the strongest models (at least in
terms of user friendliness), mobile phones are surprisingly unclear
about those kinds of questions. I had to ask if Android had end-to-end
authentication and I am still not clear on the answer. I have no
idea of what iOS does.

Conclusion
One of the core problems with everything here is the common usability
aspect of cryptography, and specifically the usability of verification
procedures. We have become pretty good at encryption. The harder
part (and a requirement for proper encryption) is verification. It
seems that problem still remains unsolved, in terms of usability. Even
Signal, widely considered to be a success in terms of adoption and
usability, doesn't properly solve that problem, as users regularly
ignore "The security number has changed" warnings...
So, even though they deserve a lot of credit in other areas, it seems
unlikely that hardcore C hackers (e.g. git and kernel developers)
will be able to resolve that problem without at least a little bit of
help. And since TUF seems to be the state-of-the-art specification in this area, it would seem wise to start adopting it in the git community as well.
Update: git 2.26 introduced a new gpg.minTrustLevel to "tell
various signature verification codepaths the required minimum trust
level", presumably to control how Git will treat keys in your
keyrings, assuming the "trust database" is valid and up to date. For
an interesting narrative of how "normal" (without PGP) git
verification can fail, see also A Git Horror Story: Repository
Integrity With Signed Commits.

11 March 2020

I have worked a bit on the fonts I use recently. From the main font I
use every day in my text editor and terminals to this very website, I
did a major and (hopefully) thoughtful overhaul of my typography, in
the hope of making things easier to use and, to be honest, just
prettier.

Editor and Terminal: Fira Mono
This all started when I found out about the JetBrains Mono font. I found the idea of ligatures fascinating: the result is truly beautiful. So I did what I often do (sometimes to the despair of some fellow Debian members) and filed an RFP to document my research.
As it turns out, Jetbrains Mono is not free enough to be packaged in
Debian, because it requires proprietary tools to build. I nevertheless
figured I could try another font so I looked at other monospace
alternatives. I found the following packages in Debian:

These are other "programmer fonts" that caught my interest but somehow haven't landed in Debian yet:

Because Fira Code had ligatures, I ended up giving it a shot. I really like the originality of the font. See, for example, how the @ sign looks when compared to my previous font, Liberation Mono:
Liberation Mono
Fira Mono
Interestingly, a colleague (thanks ahf!) pointed me to the Practical Typography post "Ligatures in programming fonts: hell no", which makes the very convincing argument that ligatures are downright dangerous in programming environments. In my experience with the font, it also did not always give the results I expected. I also remembered that the Emacs Haskell mode had a tendency to insert crazy syntactic sugar like this into source code without being asked, which I found extremely frustrating.
Besides, Emacs doesn't support ligatures, unless you count horrendous hacks that operate at display time. That's because Emacs' display layer is not based on a modern rendering library like Pango, but on some scary legacy code that very few people understand. On top of the complexity of the codebase, there is also resistance to including a modern rendering library.
So I ended up using Fira Mono everywhere I use fixed-width fonts, even though it's not packaged in Debian. That involves the following configuration in my .Xresources (no, I haven't switched to Wayland):
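The snippet itself isn't reproduced above. As a hypothetical sketch of what such an .Xresources entry can look like (the resource names are standard, but the face name and size here are assumptions, not the author's actual values):

```
! Hypothetical sketch: select Fira Mono in X terminal emulators.
XTerm*faceName: Fira Mono
XTerm*faceSize: 11
URxvt*font: xft:Fira Mono:size=11
```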

Update: I forgot to mention one motivation behind this was to work
around a change in the freetype interpreter, discussed in bug
866685 and my upgrades
documentation.

Website: Charter
That "hell no" article got me interested in the Practical Typography web book, which I read over the weekend. It was an eye-opener, and I realized I had already internalized some of those concepts; in fact, it's probably after reading the Typography in ten minutes guide that I ended up trying Fira Sans a few years ago. I have since removed that font, however, after realising it was taking up an order of magnitude more bandwidth than the actual page content.
I really loved the book, so much so that I actually bought it. I
liked the concept of it, the look, and the fact that it's a living
document. There's a lot of typography work I would need to do on this site to catch up with the recommendations from Matthew Butterick. Switching fonts is only one part of this, but it's
something I was excited to work on. So I sat down and reviewed the
free fonts Butterick recommends and tried out a few. I ended up
settling on Charter, a relatively old (in terms of computing)
font designed by Matthew Carter (of Verdana fame) in 1987.
Charter really looks great and is surprisingly small. While a single
version of Fira varies between 96KiB (Fira Sans Condensed) and 308KiB
(Fira Sans Medium Italic), Charter is only 28KiB! While it's still
about as large as most of my articles, I found it was a better
compromise and decided to make the jump. This site is now in Serif,
which is a huge change for me.
The change was done with the following CSS:
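The stylesheet itself isn't reproduced above; a hypothetical sketch of such a change (the font file path is an assumption) could look like:

```css
/* Hypothetical sketch: serve Charter as a webfont, falling back to a
   locally installed Charter or any serif. The path is an assumption. */
@font-face {
    font-family: "Charter";
    src: url("fonts/charter_regular.woff2") format("woff2");
    font-display: swap;
}
body {
    font-family: "Charter", "Bitstream Charter", serif;
}
```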

I've only done some preliminary testing of how this will look. Although I tested on a few devices (my phone, e-book tablet, an
iPad, and of course my laptop), I fully expect things to break on
your device. Do let me know if things look better or worse. For
future comparison, my site is well indexed in the Internet
Wayback Machine and can be used to look at the site before the
change. For example, compare the previous article here with its
earlier style.
The changes to the theme are of course available in my custom ikiwiki
bootstrap theme (see in particular commits 0bca0fb7 and
d1901fb8), as usual.
Enjoy, and let me know what you think!
PS: I considered just setting the Charter font in CSS and not adding
it as a @font-face. I'm still considering that option and might do
so if the performance cost is too big. The Fira Mono font is actually set like this for the preformatted sections of the site, but
because it's more common (and it's too big) I haven't added it as a
font-face. You might want to download the font locally to benefit from
the full experience as well.
PPS: As it turns out, an earlier version of this post featured exactly that: a non-webfont version of Charter, which works fine if you have a good Charter font available. But it looks absolutely terrible if, like many Linux users, you have the nasty bitmap font shipped with xfonts-100dpi and xfonts-75dpi. So I fixed the webfont, as it's unlikely this site would render reasonably well on Linux until those packages are removed or bitmap font rendering is disabled.

7 March 2020

After brewing in experimental for a while, and getting a first outing in the Ubuntu 19.10 release (both as 1.9), APT 2.0 is now landing in unstable. 1.10 would have been a boring, weird number, eh?
Compared to the 1.8 series, the APT 2.0 series brings several new features, as well as improvements in performance and hardening. A lot of code has been removed as well, reducing the size of the library.

Highlighted Changes Since 1.8

New Features

Commands accepting package names now accept aptitude-style patterns. The
syntax of patterns is mostly a subset of aptitude, see apt-patterns(7) for
more details.

apt(8) now waits for the dpkg locks - indefinitely, when connected
to a tty, or for 120s otherwise.

When apt cannot acquire the lock, it prints the name and pid of the process
that currently holds the lock.

A new satisfy command has been added to apt(8) and apt-get(8).

Pins can now be specified by source package, by prepending src: to the
name of the package, e.g.:

Package: src:apt
Pin: version 2.0.0
Pin-Priority: 990

This will pin all binaries of the native architecture produced by the source package apt to version 2.0.0. To pin packages across all architectures, append :any.

Performance

Distribution of rred and decompression work during update has been
improved to take into account the backlog instead of randomly
assigning a worker, which should yield higher parallelization.

Incompatibilities

The apt(8) command no longer accepts regular expressions or wildcards as
package arguments, use patterns (see New Features).

Hardening

Credentials specified in auth.conf now only apply to HTTPS sources, preventing malicious actors from reading credentials after they redirect users from an HTTP source to an HTTP URL matching the credentials in auth.conf. Another protocol can be specified; see apt_auth.conf(5) for the syntax.

Developer changes

A more extensible cache format, allowing us to add new fields without
breaking the ABI

All code marked as deprecated in 1.8 has been removed

Implementations of CRC16, MD5, SHA1, SHA2 have been removed

The apt-inst library has been merged into the apt-pkg library.

apt-pkg can now be found by pkg-config

The apt-pkg library now compiles with hidden visibility by default.

Pointers inside the cache are now statically typed. They cannot be
compared against integers (except 0 via nullptr) anymore.

python-apt 2.0
python-apt 2.0 is not yet ready; I'm hoping to add a new, cleaner API for cache access before making the jump from 1.9 to 2.0 versioning.

libept 1.2
I've moved the maintenance of libept to the APT team. We need to investigate
how to EOL this properly and provide facilities inside APT itself to
replace it. There are no plans to provide new features, only bugfixes
/ rebuilds for new apt versions.

17 November 2017

It finally happened
On the 6th of April 2017, I finally took the plunge and applied for Debian Developer status. On 1 August, during DebConf in Montréal, my application was approved. If you're paying attention to the dates, you might notice that that was nearly 4 months ago already. I was trying to write a story about how it came to be, but it ended up long. Really long (the current draft is around 20 times longer than this entire post). So I decided I'd rather do a proper bio page one day and just do a super short version for now, so that someone might end up actually reading it.
How it started
In 1999... no wait, I can't start there, as much as I want to; this is a short post. So: in 2003, I started doing some contract work for the Shuttleworth Foundation. I was interested in collaborating with them on tuXlabs, a project to get Linux computers into schools. For the few months before that, I was mostly using SuSE Linux. The open source team at the Shuttleworth Foundation all used Debian though, which seemed like a bizarre choice to me, since everything in Debian was really old and its boot-floppies installer program kept crashing on my very vanilla computers.

SLUG (Schools Linux Users Group) group photo. SLUG was founded to support the tuXlab schools that ran Linux.

My contract work later turned into a full-time job there. This was a big deal for me, because I didn't want to support Windows ever again, and I didn't ever think it would even be possible for me to get a job where I could work on free software full time. Since everyone in my team used Debian, I thought that I should probably give it another try. I did, and I hated it. One morning I went to talk to my manager, Thomas Black, and told him that I just didn't get it and needed some help. Thomas was a big mentor to me during this phase. He told me that I should try upgrading to testing, which I did, and somehow I ended up on unstable, and I loved it. Before that, I used to subscribe to a website called freshmeat that listed new releases of upstream software, and I would download and compile them myself so that I always had the newest versions of everything. Debian unstable made that whole process obsolete, and I became a huge fan of it. Early on I also hit a problem where two packages tried to install the same file, and I was delighted to find how easily I could find package state and maintainer scripts and fix them to get my system going again.
Thomas told me that anyone could become a Debian Developer and maintain packages in Debian, and that I should check it out, and joked that maybe I could eventually snap up "highvoltage@debian.org". I just laughed, because back then you might as well have told me that I could run for president of the United States; it really felt like something rather far-fetched and unobtainable at that point, but the seed was planted :)
Ubuntu and beyond

Ubuntu 4.10 default desktop Image from distrowatch

One day, Thomas told me that Mark was planning to provide official support for Debian unstable. The details were sparse, but this was still exciting news. A few months later Thomas gave me a CD with just "warty" written on it and said that I should install it on a server so that we could try it out. It was great: it used the new debian-installer, installed fine everywhere I tried it, and the software was nice and fresh. Later Thomas told me that this system was going to be called Ubuntu and that the desktop edition had naked people on it. I wasn't sure what he meant and was kind of dumbfounded, so I just laughed and said something like "Uh, ok". At least it made a lot more sense when I finally saw the desktop pre-release version and when it got the byline "Linux for Human Beings". Fun fact: one of my first jobs at the foundation was to register the ubuntu.com domain name. Unfortunately I found it was already owned by a domain squatter, and it was eventually handled by legal.
Closer to Ubuntu's first release, Mark brought a whole bunch of Debian developers who were working on Ubuntu over to the foundation, and they were around for a few days getting some sun. Thomas kept saying "Go talk to them! Go talk to them!", but I felt so intimidated by them that I couldn't even bring myself to walk up and say hello.
In the interest of keeping this short, I'm leaving out a lot of history, but later on I read through the Debian packaging policy and really started getting into packaging, and also discovered Daniel Holbach's packaging tutorials on YouTube. These helped me tremendously. Some day (hopefully soon), I'd like to do a similar video series that might help a new generation of packagers.
I've also been following DebConf online since DebConf 7, which was incredibly educational for me. Little did I know that just 5 years later I would attend one, and another 5 years after that I'd end up being on the DebConf Committee, having also already been on a local team for one.

DebConf16 Organisers, Photo by Jurie Senekal.

It's been a long journey for me, and I would like to help anyone who is also interested in becoming a Debian maintainer or developer. If you ever need help with your package, upload it to https://mentors.debian.net and if I have some spare time I'll certainly help you out and sponsor an upload. Thanks to everyone who has helped me along the way, I really appreciate it!

14 November 2017

I have recently released version 2.2 of WadC ("Wad Compiler"), a lazy functional programming language and IDE for the construction of Doom maps. The biggest change in this version is a reworking of the preferences system (to use the Java Preferences API), the wadcli command-line interface now respecting preferences, and a new preferences UI dialog (adapted from Quake Injector).
There are two new example maps: A Labyrinth
demonstration contributed by
"Yoruk", and a Heretic map Bird
Cage by yours truly. These are
both now amongst the largest examples in the collection, although laby.wl was
generated by a higher-level program.
For more information see the release
notes and
the reference, or
check out the new gallery of examples
or skip straight to downloads.
I have no plans to work on WadC further (but never say never, I suppose).