As described in that linked post, trackpoint input data varies wildly.
Combine that with the server's options to configure everything and this
post becomes a bit pointless, as almost every single behaviour can be
changed.

The linked post also describes the three subjective pressure ranges: no real
physical pressure, some physical pressure, and serious pressure. The line
between the first two ranges is roughly where the trackpoint sends deltas at
the maximum reporting rate (100Hz) but with a value of 1. Below that pressure, the
intervals increase but the delta remains at 1. Above that pressure, the
interval remains constant at 10ms but the deltas increase. I've used the
default kernel trackpoint sensitivity of 128 for any data listed here. Here is
the visualisation of how deltas and intervals change again.

The default pointer acceleration profile in the X server is the simple
profile. We know from the earlier posts that it has a double-plateau shape.
On a trackpoint, mm/s doesn't make sense, so let's look at it in units/ms
instead. A unit is simply a device-specific measurement of
distance/pressure/tilt/whatever - it all depends on the device. On trackpoints that
is (mostly) sideways pressure or tilt. On mice and touchpads we can convert units to mm
based on their resolution. On trackpoints, we don't have a physical
reference and we thus have to deal with it in units. The obvious problem
here is that 1 unit on one device does not equal 1 unit on another device. And for
configurable trackpoints, the definition of a unit changes as the sensitivity changes.
And that's after the kernel has already mangled it (where it does; it doesn't for all devices).
So here's a box of asterisks; please sprinkle them liberally.

The smallest delta the kernel can send is 1. At a hardware report rate of
100Hz, continuous pressure to the smallest detected threshold thus generates
1 unit every 10 milliseconds or 0.1 units/ms. If I push uncomfortably hard,
I can get deltas of around 10 units every 10ms or 1 unit/ms. In other words,
we'd better zoom in here. Let's look at the meaningful range of this curve.

On my trackpoint, below 0.1 units/ms means virtually no pressure (pressure
range one). Pressure range two is 0.1 to 0.4, approximately. Beyond that is
pressure range three but that is also the range that becomes pointless
quickly - I simply wouldn't want to press this hard in normal operation.
1 unit per ms (10 units per report) is very high pressure. This means
the pointer acceleration curve is actually defined for the usable range with
only outliers hitting the maximum acceleration. For mice this curve was
effectively a constant acceleration for all but slow movements (see here).
However, any configuration can change this curve to a point where none of the above applies.

Back to the minimum constant movement of 0.1 units/ms. That one effectively
matches the start of the 'no accel' plateau. Anything below that will be
decelerated, i.e. a delta of 1 unit will result in a pointer delta of less than 1
pixel. In other words, anything up to where you have to apply real pressure
is decelerated.

The constant factor plateau goes all the way to 0.4 units/ms. Then there's
the buggy jump to a factor of ~1.5, followed by a smooth curve to 0.8
units/ms where the factor maxes out. A bit of testing here suggests that 0.4
units/ms is in the upper limits of the second pressure range mentioned above.
Going past 0.6 or 0.7 is definitely well within the third pressure range
where things get uncomfortable quickly. This means that the acceleration bug
is actually sitting right in the highest interesting range. Apparently
no-one has noticed for 10 years.

But what does it matter? Well, probably not even that much. The only
interesting bit I can see here is that we have deceleration for most
low-pressure movements and a constant acceleration of 1 for most realistic
movements. I very much doubt that the range above 0.4 really matters.

But hey, this is just the default configuration. It is affected when
someone changes the speed slider in GNOME, or when someone changes the
sensitivity at the sysfs level. Other trackpoints won't have the exact same
behaviour. Any analysis is thrown out of the window as soon as someone
changes the sysfs sensitivity or increases the acceleration threshold.

Let's talk sysfs - if we increase my trackpoint sensitivity to 200, the
deltas coming from the trackpoint change. First, the pressure required to
give me a constant stream of events often gives me deltas of size 2 or 3. So we're
half-way into the no acceleration plateau here. Higher pressures easily give
me deltas of size 10 or 1 unit per ms, the edge of the image above.
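
For reference, this is roughly how that change is made. The sysfs path below
is from my machine and will differ on yours - look for a serio device with a
sensitivity attribute:

  # read the current sensitivity (the kernel default is 128)
  $ cat /sys/devices/platform/i8042/serio1/serio2/sensitivity
  128
  # bump it to 200 - this does not persist across reboots
  $ echo 200 | sudo tee /sys/devices/platform/i8042/serio1/serio2/sensitivity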

I wish I could analyse this any further but realistically, the only takeaway
here is that any change in configuration options results in some version of
trial-and-error by the user until the trackpoint moves as they want it to.
But without knowing all those options, we just cannot know
what exactly is happening.

However, what this is useful for is comparing it to libinput.
libinput got a custom trackpoint acceleration function in 1.8, designed
around the hardware delta range. The idea was that you (or someone) measures
the trackpoint device's range once; if it's outside the assumed default
ranges, we add a hwdb entry and voilà, it scales back to the right ranges and
that device is fixed for good.

Except - this doesn't work. libinput scales into the delta range and
calculates the factor from that but it doesn't take the time stamps into
account. It works on the assumption that a trackpoint's deltas arrive at a constant
frequency with a varying delta. That is simply not the case and the dynamic
range of the trackpoint is so small that any acceleration of the deltas results
in jerky movement.

This is of course fixable: we can just convert the deltas into a speed and
then apply the acceleration curve based on that. So that's the next task; if
you're interested in it, subscribe to this issue.
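
A minimal sketch of that fix, assuming a hypothetical curve - the function
names and thresholds below are invented to match the ranges discussed above,
they are not libinput's actual code:

  /* Convert each delta into a speed in units/ms and feed that into the
   * acceleration curve, instead of applying the curve to the raw delta.
   * Illustrative only - the thresholds mirror the ranges discussed above. */
  #include <math.h>
  #include <stdint.h>

  static double
  accel_factor(double units_per_ms)
  {
          if (units_per_ms < 0.1)
                  return units_per_ms / 0.1; /* decelerate low pressure */
          if (units_per_ms < 0.4)
                  return 1.0;                /* constant plateau */
          return 2.0;                        /* cap for high pressure */
  }

  static double
  accelerate_delta(int32_t delta, uint64_t time_ms, uint64_t last_time_ms)
  {
          double dt = (double)(time_ms - last_time_ms); /* 10ms, 20ms, ... */
          double speed = fabs(delta) / dt;              /* units/ms */

          return delta * accel_factor(speed);
  }

This way a delta of 1 every 20ms (0.05 units/ms) is treated as slower than a
delta of 1 every 10ms - exactly the information the delta-only approach
throws away.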

Wednesday, June 13, 2018

This post does not describe a configuration system. If that's all you care
about, read this post and go be angry at someone else. Anyway, with that out of the
way let's get started.

For a long time, libinput has supported model quirks (first added in Apr
2015). These model quirks are bitflags applied to some devices so we can
enable special behaviours in the code.
Model flags can be very specific ("this is a Lenovo x230 Touchpad") or
generic ("This is a trackball") and it just depends on what the specific
behaviour is that we need. The x230 touchpad for example has a custom
pointer acceleration but trackballs are marked so they get some config
options mice don't have/need.
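
As a sketch of that mechanism - the enum, struct and helper below are
invented for illustration, not libinput's actual identifiers:

  /* illustrative only - these are not libinput's real identifiers */
  #include <stdint.h>

  enum model_flags {
          MODEL_LENOVO_X230_TOUCHPAD = (1 << 0),
          MODEL_TRACKBALL            = (1 << 1),
  };

  struct device {
          uint32_t model_flags;
  };

  /* hypothetical helper, defined elsewhere */
  void expose_trackball_config_options(struct device *device);

  static void
  apply_model_quirks(struct device *device)
  {
          /* trackballs get config options mice don't have */
          if (device->model_flags & MODEL_TRACKBALL)
                  expose_trackball_config_options(device);
  }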

In addition to model tags we also have custom attributes.
These are free-form and provide information that we cannot get from the
kernel. These too can be specific ("this model needs a pressure threshold of
N") or generic ("bluetooth keyboards are an external keyboards").

Overall, it's a good system. Most users never have to care that we even have
this. The whole point is that any device-specific quirks need to be merged
only once for each model, then everyone with the same device gets to benefit
on the next update.

Originally quirks were hardcoded, but this required rebuilding libinput
for any change. So we moved this to utilise the udev hwdb. In exchange for
the trivial work of fetching udev properties, we got a lot of flexibility in
how we match against devices. For example, an entry may look like this (the
property value below is illustrative):
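
  # illustrative entry - the property value here is an example
  libinput:name:*AlpsPS/2 ALPS GlidePoint:dmi:*svnDellInc.:pnLatitudeE6330:*
   LIBINPUT_ATTR_PRESSURE_RANGE=100:90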

The above uses a name match and the dmi modalias match to apply a property
for the touchpad on the Dell Latitude E6330. The exact match format is
defined by a bunch of udev rules that ship as part of libinput.

Using the udev hwdb made the quirk storage a plaintext file that
can be updated independently of libinput, including local overrides for
testing things before merging them upstream. Having said that, it's
definitely not public API and can change even between
stable branch updates as properties are renamed or rescoped to fit the
behaviour more accurately. For example, a model-specific tag may be renamed
to a behaviour-specific tag as we find more devices affected by the same
issue.

The main issue with the quirks now is that we keep accumulating more and
more of them and I'm starting to hit limits with the udev hwdb match
behaviour. The hwdb is great for single matches but not so great for
cascading matches where one match may overwrite another match. The hwdb match
system is largely implementation-defined so it's not always predictable
which match rule wins out in the end.

Second, debugging the udev hwdb is not at all trivial. It's a bit like git -
once you're used to it it's just fine but until then the air turns yellow
with all the swearing being excreted by the unsuspecting user.

So long story short, libinput 1.12 will replace the hwdb model quirks
database with a set of .ini files. The model quirks will be installed in
/usr/share/libinput/ or whatever prefix your distribution prefers instead.
It's a bunch of files with fairly simplistic instructions: each [section]
has a set of MatchFoo=Bar directives and ModelFoo=bar or AttrFoo=bar tags.
See this file for an example. If all MatchFoo directives apply to a device, the
Model and Attr tags are applied. Matching works in inter- and
intra-file sequential order so the last section in a file overrides the
first section of that file and the highest-sorting file overrides the
lowest-sorting file. Otherwise the tags are accumulated, so if two files
match on the same device with different tags, both tags are applied. So far,
so unexciting.
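
For a flavour of the format, here's a made-up section - the match values and
the attribute are invented for illustration:

  # illustrative only - not a real entry
  [Touchpad pressure override]
  MatchUdevType=touchpad
  MatchName=*SynPS/2 Synaptics TouchPad
  MatchDmiModalias=dmi:*svnLENOVO*
  AttrPressureRange=100:90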

Sometimes it's necessary to install a temporary local quirk until upstream
libinput is updated or the distribution updates its package. For this, the
/etc/libinput/local-overrides.quirks file is read in as well (if it
exists). Note though that the config files are considered internal API, so
any local overrides may stop working on the next libinput update. Should've
upstreamed that quirk, eh?
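
A local override uses the same format. A hypothetical
/etc/libinput/local-overrides.quirks might contain:

  # temporary local quirk until it's merged upstream (values invented)
  [My touchpad override]
  MatchName=*Some Touchpad Name*
  AttrPressureRange=40:35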

These files give us the same functionality as the hwdb - we can drop in
extra files without recompiling. They're more human-readable than a hwdb
match and it's a lot easier to add extra match conditions to it. And we can
extend the file format at will. But the biggest advantage is that we can
quite easily write debugging tools to figure out why something works or
doesn't work. The libinput list-quirks tool shows what tags apply to
a device and using the --verbose flag shows you all the files and
sections and how they apply or don't apply to your device.
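
For instance, something like this - the event node and the output line are
made up for illustration:

  $ libinput list-quirks /dev/input/event4
  AttrPressureRange=100:90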

Thursday, June 7, 2018

This time we talk trackpoints. Or pointing sticks, or whatever else you want
to call that thing between the GHB keys. If you don't have one and you've
never seen one, prepare to be amazed. [1]

Trackpoints are tiny joysticks that react to pressure [2], convert that
pressure into relative x/y events and pass that on to whoever is
interested in it. The harder you push, the higher the deltas.
This is where the simple and obvious stops and it gets difficult. But then
again, if it was that easy I wouldn't write this post, you wouldn't have
anything to read, so somehow everyone wins. Whoop-dee-doo.

All the data and measurements below refer to my trackpoint, the one on a
Lenovo T440s. It may not apply to any other trackpoints, including those
on different laptop models or even on the same laptop model with different
firmware versions. I've written the below with a lot of cringing and
handwringing. I want to present data that is irrefutable, but the
universe is against me and what the universe wants, the universe gets.
Approximately every second sentence below has a footnote of "actual results
may vary". Feel free to re-create the data on your device though.

Measuring trackpoint range is highly subjective, so you'll have to trust me
when I describe how specific speeds/pressure ranges feel. There are three
ranges of pressure on my trackpoint (sort-of):

Pressure range one: When resting the finger on the trackpoint I don't
really need to apply noticeable pressure to make the trackpoint send events. Just
moving the finger on the trackpoint makes it send
events, albeit sporadically.

Pressure range two: Going beyond range one requires applying real
pressure and feels to me like we're getting into RSI territory. Not a
problem for short periods, but definitely not something I'd want all the
time. It's the pressure I'd use to cross the screen.

Pressure range three: I have to push hard. I definitely
wouldn't want to do this during everyday interaction and it just feels wrong
anyway. This pressure range is for testing maximum deltas,
not one you would want to use otherwise.

The first and second ranges are easier to delineate than the second and
third, because going from almost no pressure to some real pressure is a clear
step. Going from some pressure to too much pressure is blurrier; there is
some overlap between the second and third ranges. Either way, keep these
ranges in mind as I'll be using them in the explanations below.

Ok, so with the physical conditions explained, let's look at what we have to
worry about in software:

It is impossible to provide a constant input to a trackpoint if you're
a puny human. Without a robotic setup you just cannot apply constant
pressure so any measurements have some error. You also get to enjoy a
feedback loop - pressure influences pointer motion but that pointer motion
influences how much pressure you inadvertently apply. This riddles any
comparison with errors. I don't know whether I'm applying the same
pressure on the two devices I'm testing, and I don't know whether a
user I'm asking to test something uses constant/the same/the right pressure.

Not all trackpoints are created equal. Some trackpoints (mostly in
Lenovos) have configurable sensitivity - 256 levels of it. [3] So one
trackpoint measured does not equal another trackpoint unless you keep track
of the firmware-set sensitivity. Those trackpoints also have other
toggles. More importantly, AFAIK this type of trackpoint also has a
built-in acceleration curve. [4] Other trackpoints (ALPS) just have a
fixed sensitivity; I have no idea whether those have a built-in acceleration
curve or merely a linear-ish pressure->delta mapping.

Due to some design choices we made years ago, systemd increases the
sensitivity on some devices (the POINTINGSTICK_SENSITIVITY property).
So even on a vanilla install, you can't actually rely on the trackpoint
being set to the manufacturer default. This was an attempt to make
trackpoints behave more consistently: systemd had the hwdb and it seemed
like the right place to put device-specific quirks. In hindsight, it was the
wrong design choice.
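
Such an entry in systemd's hwdb looks roughly like this - the match string
and value are illustrative, not copied from systemd:

  # in the style of systemd's 70-pointingstick.hwdb (illustrative)
  evdev:name:TPPS/2 IBM TrackPoint:dmi:*svnLENOVO*
   POINTINGSTICK_SENSITIVITY=200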

Deltas are ... unreliable. At high sensitivity and high pressures you
might get a sequence of [7, 7, 14, 8, 3, 7]. At lower pressure you get the
deltas at seemingly random intervals. This could be because it's
hard to keep exactly constant pressure, or it could be a hardware issue.

evdev has been the default driver for almost a decade and before that it
was the mouse driver for a long time. So the kernel will "Divide 4 since
trackpoint's speed is too fast" [sic] for some trackpoints. Or by 8. Or not
at all. In other words, the kernel adjusts for what the default userspace
is and userspace is based on what the kernel provides. On the newest ALPS
trackpoints the kernel has stopped doing any in-kernel scaling (good!) but
that means that the deltas are out by a factor of 8 now.

Trackpoints don't always have the same pressure ranges for x/y. AFAICT the
y range is usually a bit less than the x range on many or most trackpoints.
A bit weird because the finger position would suggest that strong vertical
pressure is easier to apply than sideways pressure.

(Some? All?) Trackpoints have built-in calibration procedures to find and
set their own center-point. Without that you'll get the trackpoint
eventually being ever so slightly off center over time, causing a mouse
pointer that just wanders off the screen, possibly into the woods, without
the obligatory red cape and basket full of whatever grandma eats when she's
sick.

So the calibration is required but can be triggered accidentally by the
user: If you push with the same pressure into the same
direction for 2-5 seconds (depending on $THINGS) you trigger the calibration
procedure and the current position becomes the new center point. When you
release, the cursor wanders off for a few seconds until the calibration sets
things straight again. If you ever see the cursor buzz off in a fixed
direction or walk backwards for a centimetre or two, you've triggered that
calibration. The only way to avoid this is to make sure the pointer
acceleration mechanism allows you to reach any target within 2
seconds and/or never forces you to apply constant pressure for more than 2
seconds. Now there's a challenge...

Ok. If you've been paying attention instead of hoping for a TLDR that's more
elusive than Godot, we're now aware of the various drawbacks of collecting data
from a trackpoint. Let's go and look at data. Sensitivity is set to the
kernel default of 128 in sysfs, the default reporting rate is 100Hz. All
observations are YMMV and whatnot, especially the latter.

Trackpoint deltas are integers, but the dynamic range of delta values is tiny. You
mostly get 1 or 2, and it requires a fair bit of pressure to get up to
5 or more. At low pressure you get deltas of 1, but less frequently.
Visualised, the relationship between deltas and the interval between deltas
is like this:

At low pressure, we get deltas of 1 but high intervals. As the pressure
increases, the interval between events shrinks until at some point the
interval between events matches the reporting rate (100Hz/10ms). Increasing the
pressure further now increases the deltas while the intervals remain at the
reporting rate. For example, here's an event sequence at low pressure,
re-created here in evemu-record format with illustrative timestamps:
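
  E: 0.000000 0002 0000 0001   # EV_REL / REL_X 1
  E: 0.000000 0000 0000 0000   # SYN_REPORT +0ms
  E: 0.010039 0002 0000 0001   # EV_REL / REL_X 1
  E: 0.010039 0000 0000 0000   # SYN_REPORT +10ms
  E: 0.032102 0002 0000 0001   # EV_REL / REL_X 1
  E: 0.032102 0000 0000 0000   # SYN_REPORT +22ms
  E: 0.042083 0002 0000 0001   # EV_REL / REL_X 1
  E: 0.042083 0000 0000 0000   # SYN_REPORT +10ms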

Note the event in there with a 22ms interval? Maintaining constant
pressure is hard. You can create recordings like this for your own device by running evemu-record.

Pressing hard I get deltas up to maybe 5. That's staying within the second
pressure range outlined above, I can force higher deltas but what's the
point. So the dynamic range for deltas alone is terrible - we have a
grand total of 5 values across the comfortable range.

Setting the sensitivity higher than the default produces higher deltas,
including deltas greater than 1 before the report rate is reached. Setting
it lower than the default (does anyone do that?) produces smaller deltas.
But doing so means changing the hardware properties, similar to how some
gaming mice can switch dpi on the fly.

I leave you with a fun thought exercise in correlation vs. causation: your
trackpoint uses PS/2, your touchpad probably uses PS/2. Your trackpoint has
a reporting rate of 100Hz but when you touch the touchpad half the bandwidth
is used by the touchpad. So your trackpoint sends half the events when
you have the palm resting on the touchpad. From my observations, the deltas
don't double in size. In other words, your trackpoint just slows down to
roughly half the speed. I can reduce the reporting rate to approximately a
third by putting two or more fingers onto the touchpad. Trackpoints haven't changed
that much over the years but touchpads have. So the takeaway is: 10 years ago
touchpads were smaller and trackpoints were faster. Simply because you could
use them without touching the touchpad. Mind blown (if true, measuring these things is hard...)

Well, that was fun, wasn't it. I'm glad you stayed that long, because I did
and it'd feel lonely otherwise. In the next post I'll outline the pointer acceleration
curves for trackpoints and what we're going to do about that. Besides
despairing, that is.

[1] I doubt you will be, but it always pays to be prepared.
[2] In this post I'm using "pressure" to mean sideways pressure, not downwards
pressure. Some trackpoints can handle downwards pressure and modify the
acceleration based on it (or expect userland to do so).
[3] Not that this number is comparable across devices: the Lenovo CompactKeyboard USB
with Trackpoint has a default sensitivity of 5 - any laptop trackpoint would
be unusable at that low value (their default is 128).
[4] I honestly don't know this for sure but ages ago I found a hw spec
document that actually detailed the process. Search for "TrackPoint System
Version 4.0 Engineering Specification", page 43 "2.6.2 DIGITAL TRANSFER
FUNCTION"

Wednesday, June 6, 2018

Thanks to Daniel Stone's efforts, libinput is now on gitlab. For a longer explanation on the move from the old freedesktop infrastructure (cgit, bugzilla, etc.) to the gitlab instance hosted by freedesktop.org, see this email.

All open bugs have been migrated from bugzilla to gitlab too, the documentation has been updated accordingly, and we're ready to go. The new base URL for libinput in gitlab is:
https://gitlab.freedesktop.org/libinput/.