Tuesday, September 20, 2016

First a definition: a trackstick is also called trackpoint, pointing stick, or "that red knob between G, H, and B". I'll be using trackstick here, because why not.

This post is the continuation of libinput and the Lenovo T450 and T460 series touchpads, where we focused on a stalling pointer when moving the finger really slowly. Turns out the T460s at least, and possibly others in the *60 series, have another bug causing much worse behaviour that we didn't notice for ages because we were focusing on the high-precision cursor movement. Specifically, the pointer would just randomly stop moving for a short while (spoiler alert: 300ms), regardless of the movement speed.

libinput has built-in palm detection and one of the things it does is to disable the touchpad when the trackstick is in use. It's not uncommon to rest the hand near or on the touchpad while using the trackstick and any detected touch would cause interference with the pointer motion. So events from the touchpad are ignored whenever the trackpoint sends events. [1]

On (some of) the T460s the trackpoint sends spurious events. In the recording I have, random events appear at 9s, then again 3.5s later, then 14s later, then 2s later, etc. Each time, our palm detection code would assume the trackpoint was in use and disable the touchpad for 300ms. If you were using the touchpad while this was happening, the touchpad would suddenly stop moving for 300ms and then continue as normal. Depending on how often these spurious events come in and the user's current caffeination state, this was somewhere between odd, annoying and infuriating.

The good news is: this is fixed in libinput now. libinput 1.5 and the upcoming 1.4.3 releases will have a fix that ignores these spurious events and makes the touchpad stalls a footnote of history. Hooray.

Monday, September 19, 2016

This post explains how the evdev protocol works. After reading this post you
should understand what evdev is and how to interpret evdev event dumps to
understand what your device is doing. The post is aimed mainly at users
having to debug a device, so I will leave out or simplify some of the
technical details. I'll be using the output from evemu-record as an example
because that is the primary debugging tool for evdev.

What is evdev?

evdev is a Linux-only generic protocol that the kernel uses to forward
information and events about input devices to userspace. It's not just for
mice and keyboards but for any device that has any sort of axis, key or
button, including things like webcams and remote controls. Each device is
represented as a device node in the form of /dev/input/event0, with the
trailing number increasing as you add more devices. The node numbers are
re-used after you unplug a device, so don't hardcode the device node into a
script. The device nodes are also only readable by root, so you need to run
any debugging tools as root too.

evdev is the primary way to talk to input devices on Linux. All X.Org
drivers on Linux use evdev as the protocol, and so does libinput. Note that
"evdev" is also the shortcut used for xf86-input-evdev, the X.Org
driver to handle generic evdev devices, so watch out for context when you
read "evdev" on a mailing list.

Communicating with evdev devices

Communicating with a device is simple: open the device node and read from
it. Any data coming out is a struct input_event, defined in
/usr/include/linux/input.h:
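The struct is just a struct timeval timestamp followed by a 16-bit type, a 16-bit code and a signed 32-bit value. As a rough sketch (assuming a 64-bit Linux system; the device node is an example and reading it requires root), you can read and unpack these events in Python:

```python
import struct

# struct input_event: struct timeval (two native longs) + __u16 type +
# __u16 code + __s32 value. "llHHi" matches that layout on 64-bit Linux.
EVENT_FORMAT = "llHHi"
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)

def parse_event(data):
    """Unpack one raw input_event into a dict."""
    sec, usec, etype, code, value = struct.unpack(EVENT_FORMAT, data)
    return {"time": sec + usec / 1e6, "type": etype, "code": code, "value": value}

# Reading from a real device node (requires root):
# with open("/dev/input/event0", "rb") as fd:
#     while True:
#         print(parse_event(fd.read(EVENT_SIZE)))
```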

I'll describe the contents later, but you can see that it's a very simple
struct.

Static information about the device such as its name and capabilities can
be queried with a set of ioctls. Note that you should always use libevdev
to interact with a device; it blunts the few sharp edges evdev has. See the
libevdev documentation for usage examples.

evemu-record, our primary debugging tool for anything evdev, is very
simple. It reads the static information about the device, prints it, and
then simply reads and prints all events as they come in. The output is in a
machine-readable format but it's annotated with human-readable comments
(starting with #). You can always ignore the non-comment bits. There's a
second command, evemu-describe, that only prints the description and
exits without waiting for events.

Relative devices and keyboards

The top part of an evemu-record output is the device description. This is
a list of static properties that tells us what the device is capable of. For
example, the USB mouse I have plugged in here prints:
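The original recording isn't included here, but a representative description for a typical USB mouse looks like this (reconstructed, not from an actual device):

```
# Input device name: "Logitech USB Optical Mouse"
# Input device ID: bus 0x03 vendor 0x46d product 0xc05a version 0x111
# Supported events:
#   Event type 0 (EV_SYN)
#     Event code 0 (SYN_REPORT)
#   Event type 1 (EV_KEY)
#     Event code 272 (BTN_LEFT)
#     Event code 273 (BTN_RIGHT)
#     Event code 274 (BTN_MIDDLE)
#   Event type 2 (EV_REL)
#     Event code 0 (REL_X)
#     Event code 1 (REL_Y)
#     Event code 8 (REL_WHEEL)
#   Event type 4 (EV_MSC)
#     Event code 4 (MSC_SCAN)
```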

The device name is the one (usually) set by the manufacturer and so are the
vendor and product IDs. The bus is one of the "BUS_USB" and similar
constants defined in /usr/include/linux/input.h. The version is often
quite arbitrary, only a few devices have something meaningful here.

We also have a set of supported events, categorised by "event type" and
"event code" (note how type and code are also part of the struct input_event).
The type is a general category, and
/usr/include/linux/input-event-codes.h defines quite a few of those.
The most important types are EV_KEY (keys and buttons), EV_REL (relative
axes) and EV_ABS (absolute axes). In the output above we can see that we
have EV_KEY and EV_REL set.

As a subitem of each type we have the event code. The event codes for this device are
self-explanatory: BTN_LEFT, BTN_RIGHT and BTN_MIDDLE are the left, right and
middle button. The axes are a relative x axis,
a relative y axis and a wheel axis (i.e. a mouse wheel). EV_MSC/MSC_SCAN is
used for raw scancodes and you can usually ignore it.
And finally we have the EV_SYN bits but let's ignore those, they are always
set for all devices.

Note that an event code cannot be on its own, it must be a tuple of (type,
code). For example, REL_X and ABS_X have the same numerical value and
without the type you won't know which one is which.

That's pretty much it. A keyboard will have a lot of EV_KEY
bits set and the EV_REL axes are obviously missing (but not always...).
Instead of BTN_LEFT, a keyboard would have e.g. KEY_ESC, KEY_A, KEY_B, etc.
90% of device
debugging is looking at the event codes and figuring out which ones are
missing or shouldn't be there.

Exercise: You should now be able to read an evemu-record
description from any mouse or keyboard device connected to your computer and
understand what it means. This also applies
to most special devices such as remotes - the only thing that changes are
the names for the keys/buttons. Just run sudo evemu-describe and pick any
device in the list.

The events from relative devices and keyboards

evdev is a serialised protocol. It sends a series of events and then a
synchronisation event to notify us that the preceding events all belong
together. This synchronisation event is EV_SYN SYN_REPORT; it is generated
by the kernel, not the device, and hence all EV_SYN codes are always
available on all devices.

Let's have a look at a mouse movement. As explained above, half the line is
machine-readable but we can ignore that bit and look at the human-readable
output on the right.

Keyboard events look mostly the same as button events. But wait, there is
one difference: we also have a value of 2. For key events, a value of 2
means "key repeat". If you're on the tty, then this is what generates
repeat keys for you. In X and Wayland we ignore these repeat events and
instead use XKB-based key repeat.

Now look at the keyboard events again and see if you can make sense of the sequence.
We have an Enter release (but no press), then ctrl down (and repeat),
followed by a 'c' press - but no release. The explanation is simple - as
soon as I hit enter in the terminal, evemu-record started recording so it
captured the enter release too. And it stopped recording as soon as ctrl+c
was down because that's when it was cancelled by the terminal. One important
takeaway here: the evdev protocol is not guaranteed to be balanced. You may
see a release for a key you've never seen the press for, and you may be
missing a release for a key/button you've seen the press for (this happens
when you stop recording). Oh, and there's one danger:
if you record your keyboard and you type your password, the keys will show
up in the output. Security experts generally recommend not publishing event
logs with your password in them.

Exercise: You should now be able to read an evemu-record
events list from any mouse or keyboard device connected to your computer and
understand the event sequence. This also applies to most special devices such as
remotes - the only thing that changes are the names for the keys/buttons.
Just run sudo evemu-record and pick any device listed.

Absolute devices

Things get a bit more complicated when we look at absolute input devices
like a touchscreen or a touchpad. Yes, touchpads are absolute devices in
hardware and the conversion to relative events is done in userspace by e.g.
libinput. The output of my touchpad is below. Note that I've manually removed a few
bits to make it easier to grasp, they will appear later in the multitouch
discussion.

We have a BTN_LEFT again and a set of other buttons that I'll explain in a
second. But first we look at the EV_ABS output. We have the same naming
system as above. ABS_X and ABS_Y are the x and y axis on the device,
ABS_PRESSURE is an (arbitrary) ranged pressure value.

Absolute axes have a bit more
state than just a simple bit. Specifically, they have a minimum and maximum
(not all hardware has the top-left sensor position on 0/0, it can
be an arbitrary position, specified by the minimum). Notable here is that
the axis ranges are simply the ones announced by the device - there is no
guarantee that the values fall within this range and indeed a lot of
touchpad devices tend to send values slightly outside that range.
Fuzz and flat can be safely ignored, but resolution is interesting. It is
given in units per millimeter and thus tells us the size of the device. In
the above case, (5112 - 1024)/42 means the device is 97mm wide. The
resolution is quite commonly wrong; a lot of axis overrides need the
resolution changed to the correct value.
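The arithmetic, spelled out with the example values above:

```python
# Physical size from an absolute axis: (max - min) / resolution,
# where resolution is given in units per millimeter.
axis_min, axis_max, resolution = 1024, 5112, 42
width_mm = (axis_max - axis_min) / resolution
print(round(width_mm))  # ~97mm
```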

The axis description also has a current value listed. The kernel only sends
events when the value changes, so even if the actual hardware keeps sending
events, you may never see them in the output if the value remains the same.
In other words, holding a finger perfectly still on a touchpad creates
plenty of hardware events, but you won't see anything coming out of the
event node.

Finally, we have properties on this device. These are used to indicate
general information about the device that's not otherwise obvious. In this
case INPUT_PROP_POINTER tells us that we need a pointer for this device (it
is a touchpad after all, a touchscreen would instead have INPUT_PROP_DIRECT
set). INPUT_PROP_BUTTONPAD means that this is a so-called clickpad, it does
not have separate physical buttons but instead the whole touchpad clicks.
Ignore INPUT_PROP_TOPBUTTONPAD because it only applies to the Lenovo *40
series of devices.

Ok, back to the buttons: aside from BTN_LEFT, we have BTN_TOUCH. This one
signals that the user is touching the surface of the touchpad (with some
in-kernel defined minimum pressure value). It's not just for finger-touches,
it's also used for graphics tablet stylus touches (so really, it's more
"contact" than "touch" but meh).

The BTN_TOOL_FINGER event tells us that a finger is in detectable range. This
gives us two bits of information: first, we have a finger (a tablet would have
e.g. BTN_TOOL_PEN) and second, we may have a finger in proximity without
touching. On many touchpads, BTN_TOOL_FINGER and BTN_TOUCH come in the same
event, but others can detect a finger hovering over the touchpad too (in which
case you'd also hope for ABS_DISTANCE being available on the touchpad).

Finally, the BTN_TOOL_DOUBLETAP up to BTN_TOOL_QUINTTAP tell us whether the
device can detect 2 through to 5 fingers on the touchpad. This doesn't actually
track the fingers, it merely tells you "3 fingers down" in the case of
BTN_TOOL_TRIPLETAP.

Exercise: Look at your touchpad's description and figure out if the size
of the touchpad is correct based on the axis information [1]. Check how many
fingers your touchpad can detect and whether it can do pressure or distance detection.

The events from absolute devices

Events from absolute axes are not really any different than events from
relative devices which we already covered. The same type/code combination with
a value and a timestamp, all framed by EV_SYN SYN_REPORT events. Here's an
example of me touching the touchpad:
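The actual recording isn't included, but a representative sequence looks roughly like this (reconstructed; timestamps and values are made up):

```
E: 0.000001 0001 014a 0001   # EV_KEY / BTN_TOUCH          1
E: 0.000001 0001 0145 0001   # EV_KEY / BTN_TOOL_FINGER    1
E: 0.000001 0003 0000 3098   # EV_ABS / ABS_X              3098
E: 0.000001 0003 0001 2894   # EV_ABS / ABS_Y              2894
E: 0.000001 0003 0018 0035   # EV_ABS / ABS_PRESSURE       35
E: 0.000001 0000 0000 0000   # ------------ SYN_REPORT ------------
E: 0.021751 0003 0018 0037   # EV_ABS / ABS_PRESSURE       37
E: 0.021751 0000 0000 0000   # ------------ SYN_REPORT ------------
E: 0.043602 0003 0000 3102   # EV_ABS / ABS_X              3102
E: 0.043602 0003 0018 0041   # EV_ABS / ABS_PRESSURE       41
E: 0.043602 0000 0000 0000   # ------------ SYN_REPORT ------------
E: 0.067237 0001 014a 0000   # EV_KEY / BTN_TOUCH          0
E: 0.067237 0001 0145 0000   # EV_KEY / BTN_TOOL_FINGER    0
E: 0.067237 0003 0018 0000   # EV_ABS / ABS_PRESSURE       0
E: 0.067237 0000 0000 0000   # ------------ SYN_REPORT ------------
```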

In the first event you see BTN_TOOL_FINGER and BTN_TOUCH set (this touchpad
doesn't detect hovering fingers). An x/y coordinate pair and a pressure value.
The pressure changes in the second event, the third event changes pressure and
location. Finally, we have BTN_TOOL_FINGER and BTN_TOUCH released on finger up,
and the pressure value goes back to 0. Notice how the second event didn't
contain any x/y coordinates? As I said above, the kernel only sends updates on
absolute axes when the value changed.
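Three fingers down look similar; a representative (again reconstructed, not from a real recording) first frame might be:

```
E: 0.000001 0001 014a 0001   # EV_KEY / BTN_TOUCH           1
E: 0.000001 0001 014e 0001   # EV_KEY / BTN_TOOL_TRIPLETAP  1
E: 0.000001 0003 0000 2331   # EV_ABS / ABS_X               2331
E: 0.000001 0003 0001 2503   # EV_ABS / ABS_Y               2503
E: 0.000001 0003 0018 0045   # EV_ABS / ABS_PRESSURE        45
E: 0.000001 0000 0000 0000   # ------------ SYN_REPORT ------------
```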

In the first event, the touchpad detected all three fingers at the same time.
So we get BTN_TOUCH, x/y/pressure and BTN_TOOL_TRIPLETAP set. Note that the
various BTN_TOOL_* bits are mutually exclusive. BTN_TOOL_FINGER means
"exactly 1 finger down" and you can't have exactly 1 finger down when you have
three fingers down. In the second event, x and pressure update (y has no event,
it stayed the same).

In the event after the break, we switch from three fingers to one finger.
BTN_TOOL_TRIPLETAP is released, BTN_TOOL_FINGER is set. That's very common.
Humans aren't robots, you can't release all fingers at exactly the same time, so
depending on the hardware scanout rate you have intermediate states where one
finger has left already, others are still down. In this case I released two
fingers between scanouts, one was still down. It's not uncommon to see a full
cycle from BTN_TOOL_FINGER to BTN_TOOL_DOUBLETAP to BTN_TOOL_TRIPLETAP on finger
down or the reverse on finger up.

Exercise: test out the pressure values on your touchpad and see how close
you can get to the actual announced range. Check how accurate the multifinger
detection is by tapping with two, three, four and five fingers. (In both cases,
you'll likely find that it's very much hit and miss).

Multitouch and slots

Now we're at the most complicated topic regarding evdev devices. In the
case of multitouch devices, we need to send multiple touches on the same
axes. So we need an additional dimension and that is called multitouch
slots (there is another, older multitouch protocol that doesn't use
slots but it is so rare now that you don't need to bother).

First: all axes that are multitouch-capable are repeated as an ABS_MT_foo axis.
So if you have ABS_X, you also get ABS_MT_POSITION_X and both axes have the
same axis ranges and resolutions. The reason here is
backwards-compatibility: if a device only sent multitouch events, older
programs listening only to the ABS_X etc. events wouldn't work. Some axes may
only be available for single-touch (ABS_MT_TOOL_WIDTH in this case).

We have an x and y position for multitouch as well as a pressure axis.
There are also two special multitouch axes that aren't really axes:
ABS_MT_SLOT and ABS_MT_TRACKING_ID. The former specifies which
slot is currently active, the latter is used to track touch points.

Slots are a static property of a device. My touchpad, as you can see above,
only supports 2 slots (min 0, max 1) and thus can track 2 fingers at a
time. Whenever the first finger is set down, its coordinates will be tracked
in slot 0; the second finger will be tracked in slot 1. When the finger in
slot 0 is lifted, the second finger continues to be tracked in slot 1, and
if a new finger is set down, it will be tracked in slot 0. Sounds more
complicated than it is: think of it as an array of possible touchpoints.

The tracking ID is an incrementing number that lets us tell touch
points apart and also tells us when a touch starts and when it ends. The two
values are either -1 or a positive number. Any positive number means "new touch"
and -1 means "touch ended". So when you put two fingers down and lift them
again, you'll get a tracking ID of 1 in slot 0, a tracking ID of 2 in slot
1, then a tracking ID of -1 in both slots to signal they ended. The tracking
ID value itself is meaningless, it simply increases as touches are created.
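The slot/tracking-ID bookkeeping can be sketched as a toy model (a hypothetical helper, not libinput code; the event code values are the real ones from linux/input-event-codes.h):

```python
# Toy model of multitouch slots: events address the touch in the currently
# active slot; a tracking ID of -1 ends that touch.
ABS_MT_SLOT = 0x2f
ABS_MT_POSITION_X = 0x35
ABS_MT_TRACKING_ID = 0x39

def apply_frame(events, nslots=2):
    """Apply one SYN_REPORT frame's worth of (code, value) pairs to an
    empty array of touchpoints and return the resulting slot states."""
    slots = [None] * nslots   # None means no active touch in that slot
    current = 0
    for code, value in events:
        if code == ABS_MT_SLOT:
            current = value
        elif code == ABS_MT_TRACKING_ID:
            slots[current] = {} if value != -1 else None
        elif slots[current] is not None:
            slots[current][code] = value
    return slots

# Two fingers down: tracking ID 1 lands in slot 0, tracking ID 2 in slot 1.
frame = [(ABS_MT_TRACKING_ID, 1), (ABS_MT_POSITION_X, 100),
         (ABS_MT_SLOT, 1), (ABS_MT_TRACKING_ID, 2), (ABS_MT_POSITION_X, 200)]
print(apply_frame(frame))
```

In a real reader the slot state of course persists across frames; this only illustrates the addressing scheme.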

We have a tracking ID (387) signalling finger down, as well as a position
plus pressure. Then some updates and eventually a tracking ID of -1
(signalling finger up). Notice how there is no ABS_MT_SLOT here - the kernel
buffers those too, so while you stay in the same slot (0 in this case) you
don't see any events for it. Also notice how you get both single-finger as
well as multitouch in the same event stream. This is for backwards
compatibility [2].

This was a really quick two-finger tap that illustrates the tracking IDs nicely.
In the first event we get a touch down, then an ABS_MT_SLOT event. This
tells us that subsequent events belong to the other slot, so it's the other
finger. There too we get a tracking ID + position. In the next event we get
an ABS_MT_SLOT to switch back to slot 0. Tracking ID of -1 means that touch
ended, and then we see the touch in slot 1 ended too.

Note that "scroll" is something handled in userspace, so what you see here
is just a two-finger move. Everything in there is something we've already
seen, but pay attention to the two middle events: as updates come in for
each finger, the ABS_MT_SLOT changes before the updates are sent. The kernel
filter for identical events is still in effect, so in the third event we
don't get an update for the X position on slot 1. The filtering is
per-touchpoint, so in this case this means that slot 1 position x is still
on 3511, just as it was in the previous event.

That's all you have to remember, really. If you think of evdev as a
serialised way of sending an array of touchpoints, with the slots as the
indices, then it should be fairly clear. The rest is then just about actually
looking at the touch positions and making sense of them.

Exercise: do a pinch gesture on your touchpad. See if you can
track the two fingers moving closer together. Then do the same but only move
one finger. See how the non-moving finger gets fewer updates.

That's it. There are a few more details to evdev but much of that is just
more event types and codes. The few details you really have to worry about
when processing events are either documented in libevdev or abstracted away
completely. The above should be enough to understand what your device does,
and what goes wrong when your device isn't working. Good luck.

[1] If not, file a bug against systemd's hwdb and CC me so we can put
corrections in.
[2] We treat some MT-capable touchpads as single-touch devices in libinput
because the MT data is garbage

Friday, September 16, 2016

libinput's touchpad acceleration is the cause of a few bugs and outcry from
a quite vocal (maj|in)ority. A common suggestion is "make it like
the synaptics driver". So I spent a few hours going through the pointer
acceleration code to figure out what xf86-input-synaptics actually does (I don't think
anyone knows at this point) [1].

If you just want the TLDR: synaptics doesn't use physical distances but
works in device units coupled with a few magic factors, also based on device
units. That pretty much tells you all that's needed.

Also a disclaimer: the last time some serious work was done on acceleration
was in 2008/2009. A lot of things have changed since and
since the server is effectively un-testable, we ended up with the mess below
that seems to make little sense. It probably made sense 8 years ago and
given that most or all of the patches have my signed-off-by it must've made
sense to me back then. But now we live in the glorious future and holy cow
it's awful and confusing.

Synaptics has three options to configure speed: MinSpeed, MaxSpeed and
AccelFactor. The first two are not explained beyond "speed factor" but given
how accel usually works, let's assume they all somehow work as a
multiplication on the delta (so a factor of 2 on a delta of dx/dy gives you
2dx/2dy). AccelFactor is documented as "acceleration factor for normal
pointer movements", so clearly the documentation isn't going to help clear
up any confusion.

I'll skip the fact that synaptics also has a pressure-based motion
factor with four configuration options because oh my god what have we done.
Also, that one is disabled by default and has no effect unless set by the
user. And I'll also only handle default values here, I'm not going to get
into examples with configured values.

Also note: synaptics has a device-specific acceleration profile (the only
driver that does) and thus the acceleration handling is split between the
server and the driver.

Ok, let's get started. MinSpeed and MaxSpeed default to 0.4 and 0.7. The
MinSpeed is used to set constant acceleration (1/min_speed) so we always
apply a 2.5 constant acceleration multiplier to deltas from the touchpad.
Of course, if you set constant acceleration in the xorg.conf, then it
overwrites the calculated one.

MinSpeed and MaxSpeed are mangled during setup so that MaxSpeed is actually
MaxSpeed/MinSpeed and MinSpeed is always 1.0. I'm not 100% sure why, but
the later clipping to the min/max speed range ensures that we never go below a 1.0 acceleration factor
(and thus never decelerate).

The AccelFactor default is 200/diagonal-in-device-coordinates. On my T440s
it's thus 0.04 (and will be roughly the same for most PS/2 Synaptics
touchpads). But on a Cyapa with a different axis range it is 0.125. On a
T450s it's 0.035 when booted into PS/2 and 0.09 when booted into RMI4.
Admittedly, the resolution halves under RMI4 so this possibly maybe makes
sense. It doesn't quite make as much sense when you consider the x220t, which
also has a factor of 0.04 even though the touchpad is only half the size of
the T440s.

It's correct that the frequency is roughly 80Hz but I honestly don't know
what the 100 packets/s reference refers to. Either way, it means that we
always apply a factor of 12.5, regardless of the timing of the events.
Ironically, this one is hardcoded and not configurable unless you happen to know that it's the X server option VelocityScale or ExpectedRate (both of them set the same variable).

Ok, so we have three factors: 2.5 as a function of MinSpeed, 12.5 because of
80Hz (??) and 0.04 for the diagonal.

When the synaptics driver calculates a delta, it does so in device
coordinates and ignores the device resolution (because this code pre-dates
devices having resolutions). That's great until you have a device
with uneven resolutions like the x220t. That one has 75 and 129 units/mm for
x and y, so for any physical movement you're going to get almost twice as
many units for y than for x. Which means that if you move 5mm to the right
you end up with a different motion vector (and thus acceleration) than when
you move 5mm south.

The core X protocol actually defines how acceleration is supposed to be
handled. Look up the man page for XChangePointerControl(), it sets a
threshold and an accel factor:

The XChangePointerControl function defines how the pointing device
moves. The acceleration, expressed as a fraction, is a multiplier
for movement. For example, specifying 3/1 means the pointer moves
three times as fast as normal. The fraction may be rounded
arbitrarily by the X server. Acceleration only takes effect if the
pointer moves more than threshold pixels at once and only applies to
the amount beyond the value in the threshold argument.

Of course, "at once" is a bit of a blurry definition outside of maybe
theoretical physics. Consider the definition of "at once" for a gaming mouse
with 500Hz sampling rate vs. a touchpad with 80Hz (let us fondly remember
the 12.5 multiplier here) and the above description quickly dissolves into
ambiguity.

Anyway, moving on. Let's say the server just received a delta from the synaptics driver. The
pointer accel code in the server calculates the velocity over time,
basically by doing a hypot(dx, dy)/dtime-to-last-event. Time in the server
is always in ms, so our velocity is thus in device-units/ms (not adjusted
for device resolution).

Side-note: the velocity is calculated across several delta events so it gets
more accurate. There are some checks though so we don't calculate across
random movements: anything older than 300ms is discarded, anything not in
the same octant of movement is discarded (so we don't get a velocity of 0
for moving back/forth). And there are two calculations to make sure we only
calculate while the velocity is roughly the same and don't average between
fast and slow movements. I have my doubts about these, but until I have some
more concrete data let's just say this is accurate (although since the whole
lot is in device units, it probably isn't).

Anyway. The velocity is multiplied with the constant acceleration (2.5, see
above) and our 12.5 magic value. I'm starting to think that this is just
broken and would only make sense if we used a delta of "event count" rather
than milliseconds.

It is then passed to the synaptics driver for the actual acceleration
profile. The first thing the driver does is remove the constant acceleration
again, so our velocity is now just v * 12.5. According to the comment this
brings it back into "device-coordinate based velocity" but this seems wrong
or misguided since we never changed into any other coordinate system.

The driver applies the accel factor (0.04, see above) and then clips the
whole lot into the MinSpeed/MaxSpeed range (which is adjusted to move
MinSpeed to 1.0 and scale up MaxSpeed accordingly, remember?).
After the clipping, the pressure motion factor is calculated and applied. I
skipped this above but it's basically: the harder you press the higher the
acceleration factor. Based on some config options. Amusingly, pressure
motion has the potential to exceed the MinSpeed/MaxSpeed options. Who knows
what the reason for that is...

Oh, and btw: the clipping is actually done based on the accel factor set by
XChangePointerControl(), which is passed into the acceleration function here.

So we have a factor set by XChangePointerControl() but it's only used to
determine the maximum factor we may have, and then we clip to that. I'm
missing some cross-dependency here because this is what the GUI acceleration
config bits hook into. Somewhere this sets things and changes the
acceleration by some amount but it wasn't obvious to me.

Alrighty. We have a factor now that's returned to the server and we're back
in normal pointer acceleration land (i.e. not synaptics-specific). Woohoo.
That factor is averaged across 4 events using Simpson's rule to smooth
out abrupt changes. Not sure this really does much, I don't think we've ever
done any evaluation on that. But it looks good on paper (we have that in
libinput as well).

Now the constant accel factor is applied to the deltas.
So far we've added the factor, removed it (in synaptics), and now
we're adding it again. Which also makes me wonder whether we're applying the
factor twice to all other devices, but right now I'm past the point where I
really want to find out. With all the above, our acceleration factor is,
more or less:

f = units/ms * 12.5 * (200/diagonal) * (1.0/MinSpeed)

and the deltas we end up using in the server are

(dx, dy) = f * (dx, dy)

But remember, we're still in device units here (not adjusted for
resolution).
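As a sanity check, the whole chain can be written down in a few lines (a sketch with the default values from above; a diagonal of 5000 device units is assumed so AccelFactor comes out at 0.04, and the clipping and pressure factor are omitted):

```python
# Approximate synaptics factor: f = units/ms * 12.5 * (200/diagonal)
# * (1.0/MinSpeed), then applied to the raw device-unit deltas.
MIN_SPEED = 0.4                  # synaptics MinSpeed default
CONST_ACCEL = 1.0 / MIN_SPEED    # 2.5
RATE_FACTOR = 12.5               # the hardcoded "80Hz" multiplier
ACCEL_FACTOR = 200 / 5000        # 200/diagonal-in-device-units, ~0.04

def accelerate(dx, dy, dt_ms):
    """Return the accelerated delta for a raw delta over dt_ms."""
    velocity = (dx * dx + dy * dy) ** 0.5 / dt_ms  # device units per ms
    f = velocity * RATE_FACTOR * ACCEL_FACTOR * CONST_ACCEL
    return f * dx, f * dy

print(accelerate(10, 0, 10))  # velocity of 1 unit/ms -> factor 1.25
```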

Anyway. You think we're finished? Oh no, the real fun bits start now. And if
you haven't headdesked in a while, now is a good time.

After acceleration, the server does some scaling because synaptics is an
absolute device (with axis ranges) in relative mode [2]. Absolute devices
are mapped into the whole screen by default but when they're sending
relative events, you still want a 45 degree line on the device to map into
45 degree cursor movement on the screen. The server does this by
adjusting dy in line with the device-to-screen ratio (taking device
resolution into account too); dx is left as-is. Now you have the delta
that's actually applied to the cursor. Except that we're in device
coordinates, so we map the current
cursor position to device coordinates, then apply the delta, then map back
into screen coordinates (i.e. pixels). You may have spotted the flaw here:
when the screen size changes, the dy scaling changes and thus the pointer
feel. Plug in another monitor, and touchpad acceleration changes. Also: the
same touchpad feels different on laptops when their screen hardware differs.

Ok, let's wrap this up. Figuring out what the synaptics driver does
is... "tricky". It seems much like a glorified random number scheme. I'm
not planning to implement "exactly the same acceleration as synaptics" in
libinput because this would be insane and, despite my best efforts, I'm not
quite that insane yet. Collecting data from synaptics users is almost meaningless,
because no two devices really employ the same acceleration profile (touchpad
axis ranges + screen size) and besides, there are 11 configuration options
that all influence each other.

What I do plan though is collect more motion data from a variety of
touchpads and see if I can augment the server enough that I can get a clear
picture of how motion maps to the velocity. If nothing else, this should
give us some picture on how different the various touchpads actually behave.

[1] fwiw, I had this really great idea of trying to get behind all this,
with diagrams and everything. But then I was printing json data from the X
server into the journal to be scooped up by a sed and python script to print
velocity data. And I questioned some of my life choices.
[2] why the hell do we do this? Because synaptics at some point became a
device that announces the axis ranges (seemed to make sense at the time, 2008) and
then other things started depending on it, and with all the fixes to the
server to handle absolute devices in relative mode (for tablets) we painted
ourselves into a corner. Synaptics should switch back to being a relative
device, but last I tried it breaks pointer acceleration and that a) makes
the internets upset and b) restoring the "correct" behaviour is, well, you
read the article so far, right?

Friday, September 9, 2016

A great new feature has been merged during this 1.19 X server development cycle: we're now using threads for input [1]. Previously, there were two options for how an input driver would pass on events to the X server: polling or from within the signal handler. Polling simply adds all input devices' file descriptors to a select(2) loop that is processed in the mainloop of the server. The downside here is that if the server is busy rendering something, your input is delayed until that rendering is complete. Historically, polling was primarily used by the keyboard driver because it just doesn't matter much when key strokes are delayed - both because you need the client to render them anyway (which it can't when it's busy) and possibly also because we're just so bloody used to typing delays.
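The polling approach boils down to a select() loop; here's a minimal sketch, with pipes standing in for the input device file descriptors:

```python
import os
import select

# Two pipes stand in for two input devices' file descriptors.
dev1_r, dev1_w = os.pipe()
dev2_r, dev2_w = os.pipe()

os.write(dev1_w, b"key-event")   # pretend device 1 produced an event

# The server's main loop: wait until any device fd is readable, then
# process it. Any rendering done in the same loop delays this step.
readable, _, _ = select.select([dev1_r, dev2_r], [], [], 0)
for fd in readable:
    print("event data:", os.read(fd, 64))
```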

The signal handler approach circumvented the delays by installing a SIGIO handler for each input device fd and calling that when any input occurs. This effectively interrupts the process until the signal handler completes, regardless of what the server is currently busy with. A great solution to provide immediate visible cursor movement (hence it is used by evdev, synaptics, wacom, and most of the now-retired legacy drivers) but it comes with a few side effects. First of all, because the main process is interrupted, the bit where we read the events must be completely separate to the bit where we process the events. That's easy enough, we've had an input event queue in the server for as long as I've been involved with X.Org development (~2006). The drivers push events into the queue during the signal handler, in the main loop the server reads them and processes them. In a busy server that may be several seconds after the pointer motion was performed on the screen but hey, it still feels responsive.

The bigger issue with the use of a signal handler is: you can't use malloc [2]. Or anything else useful. Look at the man page for signal(7); it literally has a list of allowed functions. This leads to two weird side-effects: one is that you have to pre-allocate everything you may ever need for event processing, the other is that you need to re-implement any function that is not currently async signal-safe. The server actually has its own implementation of printf for this reason (for error logging). Let's just say this is ... suboptimal. Coincidentally, libevdev is mostly async signal-safe for that reason too. It also means you can't use any libraries, because no-one [3] is insane enough to make libraries async signal-safe.

We were still mostly "happy" with it until libinput came along. libinput is a full input stack and expecting it to work within a signal handler is somewhere between optimistic, masochistic and sadistic. The xf86-input-libinput driver doesn't use the signal handler and the side effect of this is that a desktop with libinput didn't feel as responsive when the server was busy rendering.

Keith Packard stepped in and switched the server from the signal handler to using input threads. Or more specifically: one input thread on top of the main thread. That thread controls all the input devices' file descriptors and continuously reads events off them. It otherwise provides the same functionality the signal handler did before: visible pointer movement and shoving events into the event queue for the main thread to process them later. But of course, once you switch to threads, problems have 2 you now. A signal handler is "threading light": only one code path can be interrupted and you know you continue where you left off. So synchronisation primitives are easier than in threads, where both code paths continue independently. Keith replaced the previous xf86BlockSIGIO() calls with corresponding input_lock() and input_unlock() calls and all the main drivers have been switched over. Still, some interesting race conditions kept happening, though as of today we think most of these are solved.

The best test we have at this point is libinput's internal test suite. It creates roughly 5000 devices within about 4 minutes and thus triggers most code paths to do with device addition and removal, especially the overlaps between devices sending events before/during/after they get added and/or removed. This is the largest source of possible errors as these are the code paths with the most simultaneous access to the input devices by both threads. But what the test suite can't test is normal everyday use. So until we get some more code maturity, expect the occasional crash and please do file bug reports. They'll be hard to reproduce and detect, but don't expect us to run into the same race conditions by accident.

[1] Yes, your calendar is right, it is indeed 2016, not the 90s or so
[2] Historical note: we actually mostly ignored this until about 2010 or so when glibc changed the malloc implementation and the server was just randomly hanging whenever we tried to malloc from within the signal handler. Users claimed this was bad UX, but I think it's right up there with motif.
[3] yeah, yeah, I know, there's always exceptions.

Tuesday, September 6, 2016

On Fedora, if you have mate-desktop or cinnamon-desktop installed, your GNOME touchpad configuration panel won't work (see Bug 1338585). Both packages install a symlink to assign the synaptics driver to the touchpad. But GNOME's control-center does not support synaptics anymore, so no touchpad is detected.
Note that the issue occurs regardless of whether you actually use MATE/Cinnamon; merely installing either package is enough.

Unfortunately, there is no good solution to this issue. Long-term both MATE and Cinnamon should support libinput but someone needs to step up and implement it. We don't support run-time driver selection in the X server, so an xorg.conf.d snippet is the only way to assign a touchpad driver. And this means that you have to decide whether GNOME's or MATE/Cinnamon's panel is broken at X start-up time.
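For reference, such an xorg.conf.d snippet looks roughly like this (a generic example of the mechanism, not the exact file either package ships):

```
Section "InputClass"
        Identifier "touchpad catchall"
        MatchIsTouchpad "on"
        Driver "synaptics"
EndSection
```

Because these snippets are evaluated when the X server starts, whichever driver the snippet names wins for the whole session.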

If you need the packages installed but you're not actually using Mate/Cinnamon itself, remove the following symlinks (whichever is present on your system):

I'm using the T450 and T460 as reference devices, but this affects all laptops from the Lenovo *50 and *60 series. The Lenovo T450 and T460 have the same touchpad hardware, but unfortunately it suffers from what is probably a firmware issue. On really slow movements, the pointer has a halting motion. That effect disappears when the finger moves faster.

The observable effect is that of a pointer stalling, then jumping by 20 or so pixels. We have had a quirk for this in libinput since March 2016 (see commit a608d9) and detect this at runtime for selected models. In particular, what we do is look for a sequence of events that only update the pressure values but not the x/y position of the finger. This is a good indication that the bug triggers. While it's possible to trigger pressure changes alone, triggering several in a row without a change in the x/y coordinates is extremely unlikely. Remember that these touchpads have a resolution of ~40 units per mm - you cannot hold your finger that still while changing pressure [1]. Once we see such pressure-only changes, we reset the motion history we keep for each touch. The next event with an x/y coordinate will thus not calculate a delta to the previous position and not trigger a move. The event after that is handled normally again. This avoids the extreme jumps but there isn't anything we can do about the stalling - we never get the event from the kernel. [2]
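The logic of the quirk can be sketched as follows (a simplified Python illustration of the idea, not libinput's actual C code):

```python
def motion_delta(history, event):
    """Feed one touch event; return the (dx, dy) motion delta, or None.
    A pressure-only event resets the motion history, so the next x/y
    event has no previous position to diff against - and thus no jump."""
    if "x" not in event:            # pressure changed, x/y did not
        history.clear()
        return None
    history.append((event["x"], event["y"]))
    if len(history) < 2:
        return None                 # no previous position, no delta
    (px, py), (x, y) = history[-2], history[-1]
    return (x - px, y - py)

history = []
events = [{"x": 100, "y": 50}, {"x": 101, "y": 50},
          {"pressure": 41},         # firmware stall: pressure-only event
          {"x": 120, "y": 50}]      # 19 units away - would be a big jump
deltas = [motion_delta(history, e) for e in events]
```

The last event produces no delta at all instead of a 19-unit jump; only the event after it resumes normal motion.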

Anyway. This bug popped up again elsewhere so this time I figured I'll analyse the data more closely. Specifically, I wrote a script that collected all x/y coordinates of a touchpad recording [3] and produced a black and white image of all device coordinates sent. This produces a graphic that's interesting but not overly useful:

Roughly 37000 touchpad events. You'll have to zoom in to see the actual pixels.

I modified the script to assume a white background and colour any x/y coordinate that was never hit black. So an x coordinate of 50 would now produce a vertical 1 pixel line at 50, a y coordinate of 70 a horizontal line at 70, etc. Any pixel that remains white is a coordinate that is hit at some point, anything black was unreachable. This produced more interesting results. Below is the graphic of a short, slow movement right to left.

A single short slow finger movement

You can clearly see the missing x coordinates. More specifically, there are some events, then a large gap, then events again. That gap is the stalling cursor where we didn't get any x coordinates. My first assumption was that it may be a sensor issue and that some areas on the touchpad just don't trigger. So what I did was move my finger around the whole touchpad to try to capture as many x and y coordinates as possible.
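The inverted-coverage script essentially boils down to this (a sketch of the idea, not the actual script used):

```python
def coverage_image(points, width, height):
    """A pixel is white (1) when both its x and its y coordinate were
    hit at some point; every never-seen coordinate leaves a black (0)
    vertical or horizontal line across the image."""
    xs = {x for x, _ in points}
    ys = {y for _, y in points}
    return [[1 if x in xs and y in ys else 0 for x in range(width)]
            for y in range(height)]

# Two touch points on a 4x4 device: rows and columns 0 and 3 stay black.
img = coverage_image([(1, 1), (2, 2)], 4, 4)
```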

Let's have a look at the recording from a T440 first because it doesn't suffer from this issue:

Sporadic black lines indicating unused coordinates but the center is purely white, indicating every device unit was hit at some point

Ok, that looks roughly as expected. The black areas are irregular, mostly on the edges, and likely caused by me simply not covering those areas. In the center it's white almost everywhere; that's where the most events were generated. And now let's compare this to a T450:

A visible grid of unreachable device units

The difference is quite noticeable, especially if you consider that the T440 recording had under 15000 events while the T450 recording had almost 37000. The T450 has a patterned grid of unreachable positions. But why? We currently use the PS/2 protocol to talk to the device but we should be using RMI4 over SMBus instead (which is what Windows has done for a while, and luckily the RMI4 patches are on track for kernel 4.9). Once we talk to the device in its native protocol we see a resolution of ~20 units/mm and it looks like the T440 output:

With RMI4, the grid disappears

Ok, so the problem is not missing coordinates in the sensor. Besides, at the resolution the touchpad has, a single 'pixel' not triggering shouldn't be much of a problem anyway.

Maybe the issue had to do with horizontal movements or something? The next approach was for me to move my finger slowly from one side to the other. That's actually hard to do consistently when you're not a robot, so the results are bound to be slightly different. On the T440:

The x coordinates are sporadic with many missing ones, but the y coordinates are all covered

You can clearly see where the finger moved left to right. The big black gaps on the x coordinates mostly reflect me moving too fast but you can see how the distance narrows, indicating slower movements. Most importantly: vertically, the strip is uniformly white, meaning that within that range I hit every y coordinate at least once.
And the recording from the T450:

Only one gap in the y range, sporadic gaps in the x range

Well, still looks mostly the same, so what is happening here? Ok, last test: this time an extremely slow motion from left to right. It took me 87 seconds to cover the touchpad. In theory this should render the whole strip white if all x coordinates are hit. But look at this:

An extremely slow finger movement

Ok, now we see the problem. This motion was slow enough that almost every x coordinate should have been hit at least once. But there are large gaps and, most notably, larger gaps than in the recording above that came from a faster finger movement. So what we have here is not an actual hardware sensor issue; the firmware is working against us, filtering events out. Unfortunately, that's also the worst possible result, because while hardware issues can usually be worked around, firmware issues are a lot more subtle and less predictable. We've also verified that newer firmware versions don't fix this, and trying out some tweaks in the firmware didn't change anything either.

Windows is affected by this too and so is the synaptics driver. But it's not really noticeable on either, and all reports so far were against libinput, with some even claiming that it doesn't manifest with synaptics. But each time we investigated in more detail it turned out that the issue was still there (synaptics uses the same kernel data, after all) but that because of different acceleration methods users just don't trigger it. So my current plan is to change the pointer acceleration to match something closer to what synaptics does on these devices. That's hard because synaptics is mostly black magic (e.g. synaptics' pointer acceleration depends on screen resolution) and hard to reproduce. Either way, until that is sorted at least this post serves as a link to point people to.

Many thanks to Andrew Duggan from Synaptics and Benjamin Tissoires for helping out with the analysis and testing of all this.

[1] Because pressing down on a touchpad flattens your finger and thus changes the shape slightly. While you can hold a finger still, you cannot control that shape
[2] Yes, predictive movement would be possible but it's very hard to get this right
[3] These are events as provided by the kernel and unaffected by anything in the userspace stack