Thursday, March 21, 2019

I had to work on an image yesterday where I couldn't install anything and the set of pre-installed tools was quite limited. And I needed to debug an input device, a job usually done with libinput record. So eventually I found that hexdump supports formatting of the input bytes but it took me a while to figure out the right combination. The various resources online only got me partway there. So here's an explanation which should get you to your results quickly.

By default, hexdump prints identical input lines as a single line with an asterisk ('*'). To avoid this, use the -v flag as in the examples below.

hexdump's format string is a single-quote-enclosed string that contains the iteration count, the element size and a double-quote-enclosed printf-like format string. So a simple example is this:

$ hexdump -v -e '1/2 "%d\n"'
-11643
23698
0
0
-5013
6
0
0

This prints 1 element ('iteration') of 2 bytes as an integer, followed by a linebreak. Or in other words: it takes two bytes, converts them to an int and prints it. If you want to print the same input value in multiple formats, use multiple -e invocations.
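For instance, to see each 16-bit value in two formats at once, supply two -e format strings; both are applied to the same input bytes. The sample bytes below are made up for illustration and assume a little-endian machine:

```shell
# Print each 2-byte value twice: first as signed decimal, then as hex.
# Both -e format strings consume the same 2-byte block of input.
printf '\x01\x00\xff\xff' | hexdump -v -e '1/2 "%d "' -e '1/2 "0x%04x\n"'
# On a little-endian machine this prints:
#   1 0x0001
#   -1 0xffff
```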

Friday, March 15, 2019

Ho ho ho, let's write libinput. No, of course I'm not serious, because
no-one in their right mind would utter "ho ho ho" without a sufficient
backdrop of reindeers to keep them sane. So what this post is instead is me
writing a nonworking fake libinput in Python, for the sole purpose of
explaining roughly what libinput's architecture looks like. It'll be to
libinput what a Duplo car is to a Maserati. Four wheels and something
to entertain the kids with but the queue outside the nightclub won't be
impressed.

The target audience is those who need to hack on libinput and for whom the
balance of understanding vs. total confusion is still shifted towards the
latter. So in order to make it easier to associate various bits, here's a
description of the main building blocks.

libinput uses something resembling OOP except that in C you can't have nice
things unless what you want is a buffer overflow\n\x80b1001af81a2b1101.
Instead, we use opaque structs, each with accessor methods and an unhealthy
amount of verbosity. Because Python does have classes, those structs are
represented as classes below. None of this will be actual working Python
code; I'm just borrowing the syntax.
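Something like this - a hypothetical illustration of how an opaque struct plus accessor functions maps onto a class; the names here are mine, not libinput's real identifiers:

```python
# In C: an opaque struct whose members are only reachable through
# accessor functions. In Python: a class with "private" attributes.
class Device:
    def __init__(self, name):
        self._name = name  # like a member of an opaque struct

    def get_name(self):    # the accessor is the only sanctioned way in
        return self._name
```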

We have two different modes of initialisation, udev and path. The udev
interface is used by Wayland compositors and adds all devices on the given
udev seat. The path interface is used by the X.Org driver and adds only one
specific device at a time. Both interfaces have the dispatch() and
get_events() methods, which are how every caller gets events out of
libinput.

In both cases we create a libinput device from the data and create an event
about the new device that bubbles up into the event queue.
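In fake Python (class and method names are my own approximations, not libinput's real API), the two initialisation modes might look like this:

```python
class Device:
    def __init__(self, path):
        self.path = path

class DeviceAddedEvent:
    def __init__(self, device):
        self.device = device

class Libinput:
    def __init__(self):
        self.devices = []
        self._queue = []

    def dispatch(self):
        pass  # would read and process pending data from the device fds

    def get_events(self):
        events, self._queue = self._queue, []
        return events

    def _add_device(self, path):
        dev = Device(path)
        self.devices.append(dev)
        self._queue.append(DeviceAddedEvent(dev))  # bubbles up the queue
        return dev

class LibinputUdev(Libinput):
    def assign_seat(self, seat):
        # enumerate all input devices on the udev seat
        # (stubbed: pretend udev gave us two device nodes)
        for path in ("/dev/input/event0", "/dev/input/event1"):
            self._add_device(path)

class LibinputPath(Libinput):
    def add_device(self, path):
        # the X.Org driver adds exactly one device at a time
        return self._add_device(path)
```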

But what really are events? Are they real or just a fidget spinner of our
imagination? Well, they're just another object in libinput.

You get the gist. Each event is actually an event of a subtype with a few
common shared fields and a bunch of type-specific ones. The events often
contain some internal value that is calculated on request.
For example, the API for the absolute x/y values returns mm, but
we store the value in device units instead and convert to mm on request.
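A sketch of that event hierarchy - the subtype name, fields and resolution value here are all made up for illustration:

```python
class Event:
    def __init__(self, device, type_):
        self.device = device
        self.type = type_  # a field shared by all event subtypes

class Device:
    resolution = 10  # device units per mm, invented for this sketch

class PointerEventAbsolute(Event):  # hypothetical subtype
    def __init__(self, device, x, y):
        super().__init__(device, "POINTER_MOTION_ABSOLUTE")
        self._x = x  # stored in device units...
        self._y = y

    def absolute_x_mm(self):
        # ...and converted to mm only when the caller asks for it
        return self._x / self.device.resolution
```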

So, what's a device then? Well, just another
I-cant-believe-this-is-not-a-class with relatively few surprises:
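Something like this, with all names being illustrative rather than libinput's actual identifiers:

```python
class Device:
    def __init__(self, path, seat):
        self.path = path          # e.g. /dev/input/event0
        self.seat = seat
        self.name = None          # filled in once the device is opened
        self.capabilities = []    # e.g. "pointer", "keyboard", "touch"

    def has_capability(self, cap):
        return cap in self.capabilities
```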

Our evdev device is actually a subclass (well, C, *handwave*) of the
public device and its main function is "read things off the device node".
And it passes that on to a magical interface. Other than that, it's
a collection of generic functions that apply to all devices. The interface
is where most of the real work is done.
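Sketched out with invented method names, the evdev device and its hand-off to the interface might look like:

```python
class Device:
    pass  # stands in for the public device class

class EvdevDevice(Device):
    """'Subclass' of the public device (in C: a struct embedding it)."""
    def __init__(self, interface):
        self.interface = interface

    def read_events_from_device_node(self):
        # stub: the real code reads struct input_event from the fd
        return []

    def dispatch(self):
        # the main job: read things off the device node, pass them on
        for event in self.read_events_from_device_node():
            self.interface.process(event)
```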

The interface is chosen based on the udev device type and is where the
device-specifics happen. The touchpad interface deals with touchpads, the
tablet and switch interfaces with those devices, and the fallback interface
handles mice, keyboards and touch devices (i.e. the simple devices).

Each interface has very device-specific event processing and can be
compared to the Xorg synaptics vs wacom vs evdev drivers. If you are fixing a touchpad bug, chances are you only need to care about the touchpad interface.
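A rough sketch of that mapping - the ID_INPUT_* strings are real udev properties, everything else here is invented for illustration:

```python
class FallbackInterface:          # mice, keyboards, touch devices
    def process(self, event):
        pass  # straightforward pointer/keyboard/touch handling

class TouchpadInterface(FallbackInterface):
    def process(self, event):
        pass  # the touchpad magic: tapping, gestures, palm detection, ...

class TabletInterface(FallbackInterface):
    def process(self, event):
        pass  # pens, proximity, tool handling

class SwitchInterface(FallbackInterface):
    def process(self, event):
        pass  # lid and tablet-mode switches

# pick the interface based on the udev device type
INTERFACES = {
    "ID_INPUT_TOUCHPAD": TouchpadInterface,
    "ID_INPUT_TABLET": TabletInterface,
    "ID_INPUT_SWITCH": SwitchInterface,
}

def interface_for(udev_type):
    return INTERFACES.get(udev_type, FallbackInterface)()
```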

Pointer acceleration sits behind a separate filter abstraction, with one
implementation per acceleration method. The advantage of this system is
twofold. First, the main libinput code only needs one place where we
really care about which acceleration method we have. And second, the
acceleration code can be compiled separately for
analysis and to generate pretty graphs. See the pointer
acceleration docs. Oh, and it also allows us to easily have per-device pointer acceleration methods.
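A toy version of such a filter - the class layout is a sketch and the speed-to-factor mapping is invented; the real acceleration curves are considerably more involved:

```python
class Filter:
    """One subclass per pointer acceleration method."""
    def dispatch(self, dx, dy, time):
        raise NotImplementedError

class FlatFilter(Filter):
    """No acceleration curve, just a constant factor."""
    def __init__(self, speed):
        self.factor = 1.0 + speed   # made-up mapping from speed setting

    def dispatch(self, dx, dy, time):
        # scale the raw deltas; callers never care which method this is
        return dx * self.factor, dy * self.factor
```

The device picks one filter at init time; everything else just calls dispatch().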

Finally, we have one more building block - configuration options. They're a
bit different in that they're all deliberately similar to each other - not
identical, but close enough that switching from one option to the next is
easy.
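As a sketch, using tapping as the example: the real API is a set of C functions (libinput_device_config_tap_set_enabled() and friends); the Python names below are made up, but each option follows this same get/set/default trio:

```python
class TapConfig:
    """One config option: a default, a getter and a setter."""
    def get_default(self):
        return False              # tapping is off by default

    def __init__(self):
        self._enabled = self.get_default()

    def set_enabled(self, enabled):
        self._enabled = enabled

    def get_enabled(self):
        return self._enabled
```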

And that's basically it, those are the building blocks libinput has. The
rest is detail. Lots of it, but if you understand the architecture outline
above, you're most of the way towards diving into the details.

One of the features in the soon-to-be-released libinput 1.13 is
location-based touch arbitration. Touch arbitration is the process of
discarding touch input on a tablet device while a pen is in proximity.
Historically, this was provided by the kernel wacom driver but libinput has
had userspace touch arbitration for quite a while now, allowing for touch
arbitration where the tablet and the touchscreen part are
handled by different kernel drivers.

Basic touch arbitration is relatively simple: when a pen goes into
proximity, all touches are ignored. When the pen goes out of proximity,
new touches are handled again. There are some extra details (especially where the kernel handles arbitration too) but let's ignore those for now.
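In pseudocode, basic arbitration boils down to a single flag (all names invented, not libinput's internals):

```python
class TouchArbitration:
    """Minimal sketch of basic (non-location-aware) touch arbitration."""
    def __init__(self):
        self.pen_in_proximity = False

    def pen_proximity(self, in_proximity):
        self.pen_in_proximity = in_proximity

    def touch_begin_allowed(self):
        # while the pen is in proximity, new touches are discarded;
        # once it leaves, new touches are handled again
        return not self.pen_in_proximity
```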

With libinput 1.13 and in preparation for the Dell Canvas Dial Totem, the
touch arbitration can now be limited to a portion of the screen only. On the
totem (future patches, not yet merged) that portion is a square slightly
larger than the tool itself. On normal tablets, that portion is a rectangle,
sized so that it should encompass the user's hand and the area around the pen,
but not much more. This enables users to use both the pen and touch input at
the same time, providing for bimanual interaction (where the GUI itself
supports it of course). We use the tilt information of the pen (where
available) to guess where the user's hand will be to adjust the rectangle
position.
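The location-based variant adds a rectangle test, roughly like this sketch (names invented; the real code sizes and shifts the rectangle based on pen position and, where available, tilt):

```python
def touch_discarded(touch_x, touch_y, rect):
    """Discard only touches that land inside the exclusion rectangle
    positioned around the pen; touches elsewhere go through."""
    x, y, width, height = rect
    return x <= touch_x < x + width and y <= touch_y < y + height
```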

There are some heuristics involved and I'm not sure we got all of them right
so I encourage you to give it a try and file an issue when it doesn't
behave as expected.