Jeff Epler's blog

About me

I've been a computer programmer since I first started typing in program
listings on a Commodore VIC-20 when I was about 8. My hobbies include
electronics, CNC manufacturing, photography, and beer- and winemaking.

I live with my wife, cat, and lots of left-over parts from unfinished
projects in Lincoln, Nebraska, USA.

It remains unclear to me whether effective quantum computing -- which I think of as something that can implement algorithms like Grover's and Shor's -- is a mere matter of engineering, or whether it requires one or more scientific breakthroughs. So many companies doing public-facing research in the area act like it's the former, but authors like Mikhail Dyakonov, who know much more than I do, act like it's the latter. The thing about scientific breakthroughs is that it's hard to put timelines on them; the hoped-for breakthrough might never come.

As bad as C/C++ can be, particularly for systems that have to be secure in the face of untrusted input, it's simply not financially possible for companies to transition their legacy systems to anything else. Take some random and unnamed commercial codebase for example -- while 'sloccount' isn't the Word of God, here's a big application that it says would cost just north of $100 million to write from scratch. But it also depends on any number of file format libraries; just one of them, sloccount reports, is $43 million in its own right. There are maybe 3 or 4 such libraries, and they're all in C++. And then there's the one that the company buys in binary-only form, so you're stuck with a single C++ ABI unless you fork over a 5- or 6-digit sum for a fresh build with a different toolchain!

You simply can't rewrite this in Rust or another "safe" language, not even just your core codebase. You also have to grow 5 or 6 people who are such domain experts that they can write, from scratch, file format libraries for formats where the (complicated × niche) product is huge. And again, while 'sloccount' is not the Word of God, it might say your schedule for just the two subtasks above is 10 years for a staff of 100, which already dwarfs your current development staff. May the address-space layout randomizer have mercy on us all.
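For reference, sloccount's dollar figures fall out of the Basic COCOMO model. A rough sketch of the arithmetic (the 2000-KSLOC codebase size here is an assumption for illustration; the coefficients are COCOMO's "organic" defaults, which sloccount uses):

```shell
# Basic COCOMO, "organic" mode, as used by sloccount's default estimate.
# The 2000-KSLOC codebase size is an assumed figure, not the real one.
awk 'BEGIN {
    ksloc    = 2000
    effort   = 2.4 * ksloc ^ 1.05     # person-months (~7000)
    schedule = 2.5 * effort ^ 0.38    # elapsed months (~6 years)
    printf "effort: %.0f person-months, schedule: %.1f months\n", effort, schedule
}'
```

Multiply the effort by a loaded annual salary and a multi-million-line codebase lands in nine-digit dollar territory quickly.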

The Gaia spacecraft is pretty amazing. Run a gigapixel camera, in space, for 5 years and somehow get the data on billions of stars home over a mere 3 Mbit/s link. "only a few dozen pixels around each object can be downlinked". The in-space processing must be pretty sophisticated and high-performance.

"[T]here’s the AI trained to identify toxic and edible mushrooms, which simply picked up on the fact that it was presented with the two types in alternating order. This ended up being an unreliable model in the real world."

I thoroughly disagree with the author's assertion of the equal epistemic(?) status of the two fields "date of birth" and "sex/gender" on a birth certificate. I am at home with a world where a 5-second or even 50-year investigation of the shape of a body can't accurately reveal this (once assumed to be objective and unchanging) characteristic. Just think of it like pronouncing a baby a habitual criminal based on the debunked science of phrenology! On the other hand, the truth of passing days and years seems just about as objective as anything, and I find nothing particularly sinister in the way we codify it into a civil calendar which in turn enables legal contracts like "the term of the lease shall be 12 months from November 8, 2018". Hopefully we someday arrive in a world where even if there's some reason to write down quick notes on the shape of baby genitals (one weird trick for telling babies apart with ~P(0.5)!), nobody insists on printing anything about it on our everyday ID cards, or imagines it should inform our use of pronouns or whether we should prefer white wine or lite beer.

At a first guess, I'm at or above that 321-hour average: 8,000 miles of driving per year at an average of 25 mph gives 320 hours, not including whatever I do in rental cars. On the other hand, we have made a choice to do driving vacations the last few years, racking up 1,200 miles at a go; that driving at least brings much greater rewards than the drive to work! ETA: Average people get 120 hours of vacation?

I love mosh, using it to connect to a
long-lived screen session from multiple computers (laptop, chromebook,
$DAY_JOB, and phone).

One problem with it is that a mosh-server process that has no living
client will linger for a very long time (forever?). This can happen for
various reasons, such as letting a device's battery run down.

With some brainstorming help from the participants from #mosh, I came up
with a way to automatically kill old mosh-server processes that probably
represented defunct clients without killing ones that may represent working
clients.

For a long time, my basic setup has been to run a command equivalent to:

mosh myserver -- bin/cs

where cs is a script which sets some environment variables and ends with
exec screen, so I already had an excellent location to put cleanup code.

I also felt that the cleanup rule I wanted was: kill all mosh-server processes
started from the same client. $SSH_CLIENT was suggested for this, but it's not
useful since my portable devices naturally connect from different networks with
different IP addresses (that's the point of mosh, after all!). So I jumped
to the conclusion that I needed a unique identifier. Why not involve a UUID?

As soon as I realized I needed to pass the UUID on the commandline or
environment of cs, I realized that meant it would show up in the
mosh-server commandline. So that led to the following snippet ($i is already
identified as being the UUID= argument):
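Roughly like this (a sketch rather than the exact original; it assumes cs is a /bin/sh script, that pgrep is available, and that $PPID is the mosh-server which launched cs):

```shell
# $i holds the UUID=... argument from the cs command line. Kill every
# mosh-server whose command line carries the same UUID -- those are
# defunct sessions from this same client device -- but spare $PPID,
# the mosh-server that launched this very script.
for pid in $(pgrep -f "mosh-server.*$i"); do
    [ "$pid" = "$PPID" ] || kill "$pid"
done
```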

That seems to work! One final wrinkle, though: in mosh irssi connectbot,
there's no way to specify the mosh commandline directly. However, there is
"Requested mosh server". I created a wrapper script reading:

#!/bin/sh
exec mosh-server "$@" -- bin/cs UUID=...

and used that script (with full path, in my case) as the configuration
value.

Now I shouldn't have to worry about mosh-server processes piling up
on my main Linux box anymore.

Recently I was googling for a script to compare tar files, and found references
to a perl script (which I did not read) which reportedly did this by
expanding both tar files and then diffing the trees. This would actually
have been fine for my case, but some people noted that their use case
involved tarfiles that were too big to extract comfortably. I assume that
this is due to space considerations, but doubtless there are time
considerations too.
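For what it's worth, the comparison can be done without extracting anything, at least at the level of member lists (a sketch; old.tar and new.tar are placeholder names, and this compares names only, not file contents):

```shell
# Compare two tar files by their member name lists, without extracting.
# (tar -tvf would also compare size/mode/owner, at the cost of spurious
# timestamp differences.)
tar -tf old.tar | sort > /tmp/old.lst
tar -tf new.tar | sort > /tmp/new.lst
diff /tmp/old.lst /tmp/new.lst    # exit status 0 means identical member lists
```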

Ting offers great inexpensive cellphone service,
so when Ingrid's organization needed to provide a cellphone for an employee, I
naturally suggested she use them. However, Ting doesn't presently have an API
to check usage. So I put together a Python program that can screen-scrape this
information.

A number of times, I've said that I like the Windows 7 feature that allows
a window to easily be half-maximized. I got tired of waiting for it to be
added to my favorite window manager, so I wrote a script that uses
the program wmctrl to half-maximize windows, then bound it
to key presses in my window manager. Now, with a press of ctrl+alt+[QWER],
I can half-maximize a window into 4 locations on my dual-monitor setup.