If you are a modern programmer of Windows apps, there are numerous frameworks for you, hundreds of SDKs, scripted wrappers, IDEs to hide behind, and just layers upon layers of goodness to keep you safe and sane. So, when it comes to using some of the core Windows APIs directly, you can be forgiven for not even knowing they exist, let alone how to use them from your favorite environment.

I’ve done a ton of exploration on the intricacies of the various Linux interfaces; Spelunking Linux goes over everything from auxv to procfs, and quite a few in between. But what about Windows? Well, I’ve recently embarked on a new project, lj2win32 (not to be confused with the earlier LJIT2Win32). The general purpose of this project is to bring the goodness of TINN to the average LuaJIT developer. Whereas TINN is a massive project that strives to cover the entirety of the known world of common Windows interfaces, and provides a ready-to-go multi-tasking programming environment, lj2win32 is almost the opposite. It does not provide its own shell; rather, it just provides the raw bindings necessary for the developer to create whatever they want. It’s intended to be a simple luarocks install, much in the way ljsyscall works for creating a standard binding to UNIX-like systems without much fuss or intrusion.

In creating this project, I’ve tried to adhere to a couple of design principles to meet some objectives.

First objective is that it must ultimately be installable using luarocks. This means that I have to be conscious about the organization of the file structure. To wit, everything of consequence lives in a ‘win32’ directory. The package name might ultimately be ‘win32’. Everything is referenced from there.

Second objective, provide the barest minimum bindings. Don’t change the names of things, don’t introduce non-Windows semantics, don’t create several layers of class hierarchies to objectify the interfaces. Now, of course there are some very simple exceptions, but they should be fairly limited. The idea being, anyone should be able to take this as a bare minimum and add their own layers atop it. It’s hard to resist objectifying these interfaces though; everything from Microsoft’s ancient MFC and ATL to every framework since has thrown layers of object wrappers on the core Win32 interfaces. In this case, wrappers and other suggestions will show up in the ‘tests’ directory. That is fertile ground for all manner of fantastical object wrapperage.

Third objective, keep the dependencies minimal. If you do programming in C on Windows, you include a couple of well known header files at the beginning of your program, and the whole world gets dragged in. Everything is pretty much in a global namespace, which can lead to some bad conflicts, but they’ve been worked out over time. In lj2win32, there are only a couple of things in the global namespace; everything else is either in some table, or within the ffi.C facility. Additionally, the wrappings are clustered in a way that follows the Windows API Sets. API Sets are a mechanism Windows has for pulling apart interdependencies in the various libraries that make up the OS. In short, an API Set is just a name (which so happens to end in ‘.dll’) used by the loader to load in various functions. If you use these special names, instead of the traditional ‘kernel32’ or ‘advapi32’, you might pull in a smaller set of stuff.

With all that, I thought I’d explore one particular bit of minutia as an example of how things could go.

The GetSystemMetrics() function call is a sort of dumping ground for a lot of UI system information. Here’s where you can find things like how big the screen is, how many monitors there are, how many pixels are used for the menu bars, and the like. Of course this is just a wrapper on items that probably come from the registry, or various devices and tidbits hidden away in other databases throughout the system, but it’s the convenient developer friendly interface.

The signature looks like this:

int WINAPI GetSystemMetrics(
    _In_ int nIndex
);

A simple enough call. And a simple enough binding:

ffi.cdef[[
int GetSystemMetrics(int nIndex);
]]

Of course, there is the ‘nIndex’ parameter, which in the Windows headers is covered by a bunch of manifest constants, and which in LuaJIT might be defined thus:
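Something along these lines (just a few of the SM_* constants shown; the values match the Windows SDK headers):

```lua
local ffi = require("ffi")

-- a handful of the SM_* manifest constants, with values
-- taken from the Windows SDK headers
ffi.cdef[[
static const int SM_CXSCREEN       = 0;
static const int SM_CYSCREEN       = 1;
static const int SM_CMOUSEBUTTONS  = 43;
static const int SM_CMONITORS      = 80;
static const int SM_MAXIMUMTOUCHES = 95;
]]
```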

So, this meets the second objective of bare minimum binding. But it’s not a very satisfying programming experience for the LuaJIT developer. How about just a little bit of sugar? Well, I don’t want to violate that same second objective of non-wrapperness, so I’ll create a separate thing in the tests directory. The systemmetrics.lua file contains a bit of an exploration in getting system metrics.

All of this allows you to do a couple of interesting things. First, what if you wanted to print out all the system metrics? This same technique can be used to put all the metrics into a table to be used within your program.
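A sketch of what that enumeration might look like (assuming the module exposes both its names table and the sysmetrics lookup table; the exact export names here are my guess):

```lua
local sm = require("systemmetrics")  -- the tests/systemmetrics.lua module

-- walk the names dictionary, looking each metric up by its symbolic name
for name, entry in pairs(sm.names) do
    print(name, sm.sysmetrics[name])
end
```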

OK, so what? Well, systemmetrics.names is a dictionary matching a symbolic name to the value used to get a particular metric. And what’s this magic with the ‘sysmetrics[key]’ thing? Well, let’s take a look back at that hand waving from the systemmetrics.lua file.

So, what’s happening here with the setmetatable thing is this: Lua has a way of setting some functions on a table which dictate the behavior it will exhibit in certain situations. In this case, the ‘__index’ function, if it exists, takes care of the cases where you try to look something up and it isn’t directly in the table. So, in our example, doing the ‘sysmetrics[key]’ thing is essentially saying, “Try to find a value associated with the string ‘key’. If it’s not found, then do whatever is associated with the ‘__index’ value.” In this case, ‘__index’ is a function, so that function is called, and whatever it returns becomes the value associated with that key.
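Stripped to its essentials, the pattern looks something like this (the lookup table here is a stand-in for the real GetSystemMetrics call, so the sketch runs anywhere):

```lua
-- stand-in for the real metric lookup; on Windows this would
-- call ffi.C.GetSystemMetrics with the index matching 'key'
local fakeLookup = {
    SM_CMONITORS = 2,
    SM_CXSCREEN  = 1920,
}

local sysmetrics = setmetatable({}, {
    -- __index is only consulted when 'key' is not already
    -- present in the table itself
    __index = function(self, key)
        return fakeLookup[key]
    end,
})

print(sysmetrics["SM_CMONITORS"])  --> 2
print(sysmetrics.SM_CXSCREEN)      --> 1920
```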

I know, it’s a mouthful, and metatables are one of the more challenging aspects of Lua to get your head around, but once you do, they’re a powerful concept.

How about another example, which is a more realistic and typical case:

local function testSome()
    print(sysmetrics.SM_MAXIMUMTOUCHES)
end

In this case, the exact same mechanism is at play. In Lua, there are two ways to get a value out of a table. The first one we’ve already seen, where the ‘[]’ notation is used as if the thing were an array. In the ‘testSome()’ case, the ‘.’ notation is being utilized. This accesses the table as if it were a data structure, but it’s exactly the same as accessing it as an array, at least as far as the invocation of the ‘__index’ function is concerned. The ‘SM_MAXIMUMTOUCHES’ is taken as a string value, so it’s the same as doing sysmetrics['SM_MAXIMUMTOUCHES'], and from the previous example, we know how that works out.

Now, there’s one more thing to note from this little escapade. The implementation of the helper function:

local function getSystemMetrics(what)
    local entry = nil;
    local idx = nil;

    if type(what) == "string" then
        entry = exports.names[what]
        if not entry then return nil end
        idx = entry.value;
    else
        idx = tonumber(what)
        if not idx then
            return nil;
        end
        entry = lookupByNumber(idx)
        if not entry then return nil end
    end

    local value = ffi.C.GetSystemMetrics(idx)
    if entry.converter then
        value = entry.converter(value);
    end

    return value;
end

There’s all manner of nonsense in here. The ‘what’ can be either a string or something that can be converted to a number. This is useful because it allows you to pass in symbolic names like “SM_CXBLAHBLAHBLAH” or a number 123. That’s great depending on what you’re interacting with and how the values are held. You might have some UI for example where you just want to use the symbolic names and not deal with numbers.

The other thing of note is that ‘entry.converter’ bit at the end. If you look back at the names table, you’ll notice that some of the entries have a ‘converter’ field associated with them. This is an optional function that can be associated with an entry. If it exists, it is called, with the value returned from the system call passed to it. In most cases, what the system returns is a number (number of mouse buttons, size of screen, etc). In some cases, the value returned is ‘0’ for false and non-zero for true. Well, as a Lua developer, I’d rather just get a bool in those cases where it’s appropriate, and this helper function is in a position to provide that for me. This is great because it means I don’t have to check the documentation to figure it out.

There is one more helper in the file, which generates those constants in the ffi.C space, so that you can still do this:

ffi.C.GetSystemMetrics(ffi.C.SM_MAXIMUMTOUCHES)
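One way such generation might work (a sketch, assuming the names table described above, where each entry carries a numeric value field):

```lua
local ffi = require("ffi")

-- build one big cdef string from the names table, so that each
-- symbolic name becomes reachable through ffi.C
local function generateConstants(names)
    local defs = {}
    for name, entry in pairs(names) do
        defs[#defs + 1] = string.format("static const int %s = %d;", name, entry.value)
    end
    ffi.cdef(table.concat(defs, "\n"))
end
```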

So, there you have it. You can go with the raw traditional sort of ffi binding, or you can spice things up a bit and make things a bit more useful with a little bit of effort. I like doing the latter, because I can generate the more traditional binding from the table of names that I’ve created. That’s a useful thing for documentation purposes, and in general.

I have stuck to my objectives, and this little example just goes to prove how esoteric minute details can be turned into approachable things of beauty with a little bit of Lua code.


The subject of scheduling and async programming has been a long running theme in my blog. From the very first entries related to LJIT2Win32, through the creation of TINN, and most recently (within the past year), the creation of schedlua, I have been exploring this subject. It all kind of started innocently enough. When node.js was born, and libuv was ultimately released, I thought to myself, ‘what prevents anyone from doing this in LuaJIT without the usage of any external libraries whatsoever?’

It’s been a long road. There’s really no reason for this code to continue to evolve. It’s not at the center of some massively distributed system. These are merely bread crumbs left behind, mainly for myself, as I explore and evolve a system that has proven itself to be useful at least as a teaching aid.

In the most recent incarnation of schedlua kernel, I was able to clean up my act with the realization that you can implement all higher level semantics using a very basic ‘signal’ mechanism within the kernel. That was pretty good as it allowed me to easily implement the predicate system (when, whenever, waitForTruth, signalOnPredicate). In addition, it allowed me to reimplement the async io portion with the realization that a task waiting on IO to occur is no different than a task waiting on any other kind of signal, so I could simply build the async io atop the signaling.
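As a rough sketch (spawn() and yield() here stand in for schedlua’s actual task primitives, so this is illustrative rather than the real implementation), ‘whenever’ can be little more than a spawned task that polls its predicate:

```lua
-- rough sketch; spawn() and yield() are stand-ins for
-- schedlua's own scheduling primitives
local function whenever(predicate, functor)
    local function watcher()
        while true do
            local result = predicate()
            if result then
                -- hand the predicate's result to the functor
                functor(result)
            end
            yield()    -- give other tasks a chance to run
        end
    end

    return spawn(watcher)
end
```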

schedlua has largely been a Linux based project, until now. The crux of the difference between Linux and Windows comes down to two things in schedlua. The first thing is timing operations. Basically: how do you get a microsecond-accurate clock on the system? On Linux, I use the ‘clock_gettime()’ system call. On Windows, I use ‘QueryPerformanceCounter’ and ‘QueryPerformanceFrequency’. In order to isolate these, I put them into their own platform specific timeticker.lua files, and they both just have to surface a ‘seconds()’ function. The differences are abstracted away, and the common interface is that of a stopwatch class.
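On the Windows side, the timeticker sketch might look something like this (the cdef declarations are minimal stand-ins for the real Win32 typedefs, and this is only a sketch of the idea, not the actual file):

```lua
local ffi = require("ffi")

-- minimal stand-ins for the Win32 declarations involved
ffi.cdef[[
typedef struct { int64_t QuadPart; } LARGE_INTEGER;
int QueryPerformanceCounter(LARGE_INTEGER *lpPerformanceCount);
int QueryPerformanceFrequency(LARGE_INTEGER *lpFrequency);
]]

-- the counter frequency is fixed at boot, so query it once
local freq = ffi.new("LARGE_INTEGER[1]")
ffi.C.QueryPerformanceFrequency(freq)
local frequency = tonumber(freq[0].QuadPart)

-- the one function each platform's timeticker must surface
local function seconds()
    local count = ffi.new("LARGE_INTEGER[1]")
    ffi.C.QueryPerformanceCounter(count)
    return tonumber(count[0].QuadPart) / frequency
end
```

On the Linux side, the same seconds() function would be backed by clock_gettime() instead.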

That was good for time, but what about alarms?

The functions in schedlua related to alarms are: delay, periodic, runningTime, and sleep. Together, these allow you to run things based on time, as well as delay the current task as long as you like. My first implementation of these routines, going all the way back to TINN, was to run a separate ‘watchdog’ task, which in turn maintained its list of waiting tasks and scheduled them. Recently, I thought, “why can’t I just use the ‘whenever’ semantics to implement this?”.

Now, the implementation of the alarm routines comes down to this:

local function taskReadyToRun()
    local currentTime = SWatch:seconds();

    -- the list of waiting tasks is kept in due-time order,
    -- so only the first entry needs to be checked
    local task = SignalsWaitingForTime[1];
    if not task then
        return false;
    end

    if task.DueTime <= currentTime then
        return task
    end

    return false;
end

local function runTask(task)
    signalOne(task.SignalName);
    table.remove(SignalsWaitingForTime, 1);
end

Alarm = whenever(taskReadyToRun, runTask)

The Alarm module still keeps a list of tasks that are waiting for their time to execute, but instead of using a separate watchdog task to keep track of things, I simply use the schedlua built-in ‘whenever’ function. This basically says, “whenever the function ‘taskReadyToRun()’ returns a non-false value, call the function ‘runTask()’ passing the parameter from taskReadyToRun()”. Convenient, end of story, simple logic using words that almost feel like an English sentence to me.

I like this kind of construct for a few reasons. First of all, it reuses code. I don’t have to code up that specialized watchdog task time and time again. Second, it wraps up the async semantics of the thing. I don’t really have to worry about explicitly calling spawn, or anything else related to multi-tasking. It’s just all wrapped up in that one word ‘whenever’. It’s relatively easy for me to explain this code, without mentioning semaphores, threads, conditions, or whatever. I can tell a child “whenever this is true, do that other thing”, and they will understand it.

So, that’s it. First I used signals as the basis to implement higher order functions, such as the predicate based flow control. Now I’m using the predicate based flow control to implement yet other functions such as alarms. Next, I’ll take that final step and do the same to the async IO, and I’ll be back to where I was a few months back, but with a much smaller codebase, and cross platform to boot.

Last time around, I introduced some simple things with lj2procfs. Being able to simply access the contents of the various files within procfs is a bit of convenience. Really what lj2procfs is doing is just giving you a common interface to the data in those files. Everything shows up as simple lua values, typically tables, with strings, and numbers. That’s great for most of what you’d be doing with procfs, just taking a look at things.

But, on Linux, procfs has another capability. The /proc/sys directory contains a few interesting directories of its own:

abi/
debug/
dev/
fs/
kernel/
net/
vm/

And if you look into these directories, you find some more interesting files. For example, in the ‘kernel/’ directory, we can see a little bit of this:

Now, these are looking kind of interesting. These files contain typically tunable portions of the kernel. On other unices, these values might be controlled through the sysctl() function call. On Linux, that function would just manipulate the contents of these files. So, why not just use lj2procfs to do the same?

Let’s take a look at a few relatively simple tasks. First, I want to get the version of the OS running on my machine. This can be obtained through the file /proc/sys/kernel/version.
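With lj2procfs, that read is just a chained table access (the require path here follows the project layout described earlier):

```lua
local procfs = require("lj2procfs.procfs")

-- each path component under /proc/sys becomes a table lookup
print(procfs.sys.kernel.version)
```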

And what about setting the hostname? First of all, you’ll want to do this as root, but it’s equally simple:

procfs.sys.kernel.hostname = 'alfredo'

Keep in mind that setting the hostname in this way is transient, and it will seriously mess things up, like your ability to sudo after this. But, there you have it.

Any value under /proc/sys can be retrieved or set using this fairly simple mechanism. I find this to be very valuable for two reasons. First of all, spelunking these values makes for great discovery. More importantly, being able to capture and set the values makes for a fairly easily tunable system.

As an example of how this can be used for system diagnostics and tuning: you can capture the kernel values using a simple command that just dumps what you want into a table, and send that table to anyone else for analysis. Similarly, if someone has come up with a system configuration that is great for a particular task, tuning the VM allocations, networking values, and the like, they can send you the configuration (just a string value that is a Lua table) and you can apply it to your system.

This is a tad better than simply trying to look at system logs to determine after the fact what might be going on with a system. Perhaps the combination of these live values, as well as correlation with system logs, makes it easier to automate the process of diagnosing and tuning a system.

Well, there you have it. The lj2procfs thing is getting more concise, as well as becoming more usable at the same time.

Recently, as a way to further understand all things Linux, I’ve been delving into procfs. This is one of those virtual file systems on Linux, meaning the ‘files’ and ‘directories’ are not located on any real media; they are conjured up in realtime from within the kernel. If you take a look at the ‘/proc’ directory on any Linux machine, you’ll find a couple of things. First, there are a bunch of directories with numeric values as their names.

Yes, true to the unix philosophy, all the things are but files/directories. Each one of these numbers represents a process that is running on the system at the moment. Each one of these directories contains additional directories, and files. The files contain interesting data, and the directories lead into even more places where you can find more interesting things about the process.

Here are the contents of the directory ‘1’, which is the first process running on the system:

Some actual files, some more directories, some symbolic links. To find out the details of what each of these contains, and their general meaning, you need to consult the procfs man page, as well as the source code of the linux kernel, or various utilities that use them.

Backing up a bit, the /proc directory itself contains some very interesting files as well:

Again, the meanings of each of these is buried in the various documentation and source code files that surround them, but let’s take a look at a couple of examples. How about that uptime file?

8099.41 31698.74

OK. Two numbers. What do they mean? The first one is how many seconds the system has been running. The second one is the number of seconds all cpus on the system have been idle since the system came up. Yes, on a multi-proc system, the second number can be greater than the first. And thus begins the actual journey into spelunking procfs. If you’re like me, you occasionally need to know this information. Of course, if you want to know it from the command line, you just run the ‘uptime’ command, and you get…

06:38:22 up 2:18, 2 users, load average: 0.17, 0.25, 0.17

Well, hmmm, I get the ‘up’ part, but what’s all that other stuff, and what happened to the idle time thing? As it turns out, the uptime command does show the up time, but it also shows the logged in users, and the load average numbers, which actually come from different files.

It’s like this. Whatever you want to know about the system is probably available, but you have to know where to look for it, and how to interpret the data from the files. Often times there’s either a libc function you can call, or a command line utility, if you can discover and remember them.

What about a different way? Since I’m spelunking, I want to discover things in a more random fashion, and of course I want easy lua programmatic access to what I find. In steps the lj2procfs project.

In lj2procfs, I try to provide a more manageable interface to the files in /proc. Most often, the information is presented as lua tables. If the information is too simple (like /proc/version), then it is presented as a simple string. Here is a look at that uptime example, done using lj2procfs:

return {
    ['uptime'] = {
        seconds = 19129.39,
        idle = 74786.86,
    };
}

You can see that the simple two numbers in the uptime file are converted to meaningful fields within the table. In this case, I use a simple utility program to turn any of the files into simple lua value output, suitable for reparsing, or transmitting. First, what does the ‘parsing’ look like?

--[[
    seconds idle

    The first value is the number of seconds the system has been up.
    The second number is the accumulated number of seconds all processors
    have spent idle.  The second number can be greater than the first
    in a multi-processor system.
--]]
local function decoder(path)
    path = path or "/proc/uptime"

    local f = io.open(path)
    local str = f:read("*a")
    f:close()

    local seconds, idle = str:match("(%d*%.?%d+)%s+(%d*%.?%d+)")

    return {
        seconds = tonumber(seconds);
        idle = tonumber(idle);
    }
end

return {
    decoder = decoder;
}

In most cases, the output of the /proc files is meant to be human readable, at least on Linux. Other platforms might prefer these files to be more easily machine readable (binary). As such, the files are readily parseable, mostly with simple string patterns.

So, this decoder is one of many. There is one for each of the file types in the /proc directory, or at least the list is growing.

They are in turn accessed using the Decoders class.

local Decoders = {}

local function findDecoder(self, key)
    local path = "lj2procfs.codecs."..key;

    -- try to load the intended codec file
    local success, codec = pcall(function() return require(path) end)
    if success and codec.decoder then
        return codec.decoder;
    end

    -- if we didn't find a decoder, use the generic raw file loading
    -- decoder.
    -- Caution: some of those files can be very large!
    return getRawFile;
end

setmetatable(Decoders, {
    __index = findDecoder;
})

This is a fairly simple forwarding mechanism. You could use this in your code by doing the following:
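Something like this (the require path is my assumption about the project layout):

```lua
local Decoders = require("lj2procfs.Decoders")

-- 'uptime' is not a field of the Decoders table, so findDecoder
-- is invoked via __index; it requires lj2procfs.codecs.uptime
-- and hands back that codec's decoder function
local decode = Decoders.uptime
local info = decode("/proc/uptime")

print(info.seconds, info.idle)
```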

When you try to access the procfs.uptime field of the Decoders class, it will go: “Hmmm, I don’t have a field in my table with that name; I’ll defer to whatever was set as my __index value, which so happens to be a function, so I’m going to call that function and see what it comes up with.” The findDecoder function will in turn look in the codecs directory for something with that name. It will find the code in uptime.lua and execute it, handing it the path specified. The uptime decoder will read the file, parse the values, and return a table.

And thus magic is practiced!

It’s actually pretty nice because having things as lua tables and lua values such as numbers and strings, makes it really easy to do programmatic things from there.

In some cases, the output can be a bit tricky: since it’s trying to be human readable, there may be header lines and a variable number of columns. But you have the full power of Lua to do the parsing, including something like lpeg if you so choose. Here’s the parser for the ‘/proc/interrupts’ file, for example:
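The original parser isn’t reproduced here, but a sketch of the approach described below, for a single line of the file, might look like this:

```lua
-- Sketch: parse one line of /proc/interrupts, e.g.
--   " 19:   1234   5678   IO-APIC  19-fasteoi  eth0"
local function parseInterruptLine(line)
    -- isolate the interrupt number (or name, like 'NMI') from the rest
    local num, rest = line:match("^%s*(%w+):%s+(.*)$")
    if not num then return nil end

    -- the leading run of numeric columns holds the per-CPU counts;
    -- whatever follows is the description
    local cols, desc = rest:match("^([%d%s]*%d)%s*(.-)%s*$")
    local counts = {}
    if cols then
        for n in cols:gmatch("%d+") do
            counts[#counts + 1] = tonumber(n)
        end
    end

    return { interrupt = num, counts = counts, description = desc }
end

local entry = parseInterruptLine(" 19:   1234   5678   IO-APIC  19-fasteoi  eth0")
print(entry.interrupt, entry.counts[1], entry.counts[2], entry.description)
```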

In this case, I’m running on a VM which was configured with 4 cpus. I had run previously with a VM with only 3 CPUs, and there were only 3 CPU columns. So, in this case, the patterns first isolate the interrupt number from the remainder of the line, then the numeric columns are isolated from the interrupt description field, then the numbers themselves are matched off using an iterator (gmatch). The table generated looks something like this:

To make spelunking easier, I’ve created a simple script which just calls the procfs thing, given a command line argument of the name of the file you’re interested in looking at.

#!/usr/bin/env luajit
--[[
    procfile

    This is a general purpose /proc/<file> interpreter.

    Usage:
        $ sudo ./procfile filename

    Here, 'filename' is any one of the files listed in the
    /proc directory.

    In the cases where a decoder is implemented in Decoders.lua
    the file will be parsed, and an appropriate value will be
    returned and printed in a lua form appropriate for reparsing.

    When there is no decoder implemented, the value returned is
    "NO DECODER AVAILABLE"

    example:
        $ sudo ./procfile cpuinfo
        $ sudo ./procfile partitions
--]]
package.path = "../?.lua;"..package.path;

local procfs = require("lj2procfs.procfs")
local putil = require("lj2procfs.print-util")

if not arg[1] then
    print ([[
USAGE:
    $ sudo ./procfile <filename>

    where <filename> is the name of a file in the /proc
    directory.

    Example:
        $ sudo ./procfile cpuinfo
]])
    return
end

local filename = arg[1]

print("return {")
putil.printValue(procfs[filename], " ", filename)
print("}")

Once you have these basic tools in hand, you can begin to look at the various utilities that are used within Linux, and try to emulate them. For example, the ‘free’ command will show you roughly how memory currently sits on your system, in terms of how much is physically available, how much is used, and the like. Its typical output, without any parameters, might look like:

I couldn’t quite figure out where the -/+ buffers/cache: values come from yet. I’ll have to look at the actual code for the ‘free’ program to figure it out. But, the results are otherwise the same.

Some of these files can be quite large, like kallsyms, which might argue for an iterator interface instead of a table interface. But some of the files have meta information as well as a list of fields. Since the number of large files is fairly small, it made more sense to cover the broader cases, and tables do that fine. Even kallsyms, fairly large as it is, will still fit nicely into a table.

Of course, you’re not limited to simply printing to stdout. In fact, that’s the least valuable thing you could be doing. What you really have is programmatic access to all these values. If you had run this command as root, you would get the actual addresses of these routines.

And so it goes. lj2procfs gives you easy programmatic access to all the great information that is hidden in the procfs file system on Linux machines. These routines make it relatively easy to gain access to the information, and utilize tools such as luafun to manage it. Once again, the Linux system is nothing more than a very large database. Using a tool such as Lua makes it relatively easy to access all the information in that database.

So far, lj2procfs just covers reading the values. In this article I did not cover the fact that you can also get information on individual processes. Aside from this, procfs actually allows you to set some values as well. This is why I structured the code as ‘codecs’: you can encode, and decode. So, in future, setting a value will be as simple as ‘procfs.something.somevalue = newvalue’. This will take the guesswork out of doing command line ‘echo …’ incantations for esoteric, seldom used values. It also makes it easy to achieve great things programmatically through script, without even relying on various libraries that are meant to do the same.

Honestly, I don’t know what all the fuss is about. What is systemd? It’s that bit of code that gets things going on your Linux machine once the kernel has loaded itself. You know, dealing with bringing up services, communicating between services, running the udev and dbus stuff, etc.

--[[
    Test using SDJournal as a cursor over the journal entries

    In this case, we want to try the various cursor positioning
    operations to ensure they work correctly.
--]]
package.path = package.path..";../src/?.lua"

local SDJournal = require("SDJournal")
local sysd = require("systemd_ffi")

local jnl = SDJournal()

-- move forward a few times
for i=1,10 do
    jnl:next();
end

-- get the cursor label for this position
local label10 = jnl:positionLabel()
print("Label 10: ", label10)

-- seek to the beginning, print that label
jnl:seekHead();
jnl:next();
local label1 = jnl:positionLabel();
print("Label 1: ", label1);

-- seek to label 10 again
jnl:seekLabel(label10)
jnl:next();
local label3 = jnl:positionLabel();
print("Label 3: ", label3)
print("label 3 == label 10: ", label3 == label10)

In this case, we have a simple journal object which makes it relatively easy to browse through the systemd journals that are lying about. That’s handy. Combined with the luafun functions, browsing through journals suddenly becomes a lot easier, with the full power of Lua to form very interesting queries, or other operations.

--[[
    Test cursoring over journal, turning each entry
    into a lua table to be used with luafun filters and whatnot
--]]
package.path = package.path..";../src/?.lua"

local SDJournal = require("SDJournal")
local sysd = require("systemd_ffi")
local fun = require("fun")()

-- Feed this routine a table with the names of the fields
-- you are interested in seeing in the final output table
local function selection(fields, aliases)
    return function(entry)
        local res = {}
        for _, k in ipairs(fields) do
            if entry[k] then
                res[k] = entry[k];
            end
        end
        return res;
    end
end

local function printTable(entry)
    print(entry)
    each(print, entry)
end

local function convertCursorToTable(cursor)
    return cursor:currentValue();
end

local function printJournalFields(selector, flags)
    flags = flags or 0
    local jnl1 = SDJournal();

    if selector then
        each(printTable, map(selector, map(convertCursorToTable, jnl1:entries())))
    else
        each(printTable, map(convertCursorToTable, jnl1:entries()))
    end
end

-- print all fields, but filter the kind of journal being looked at
--printJournalFields(nil, sysd.SD_JOURNAL_CURRENT_USER)
--printJournalFields(nil, sysd.SD_JOURNAL_SYSTEM)

-- printing specific fields
--printJournalFields(selection({"_HOSTNAME", "SYSLOG_FACILITY"}));
printJournalFields(selection({"_EXE", "_CMDLINE"}));

-- to print all the fields available per entry
--printJournalFields();

In this case, we have a simple journal printer, which will take a list of fields, as well as a selection of the kinds of journals to look at. That’s quite useful as you can easily generate JSON or XML, or Lua tables on the output end, without much work. You can easily select which fields you want to display, and you could even change the names along the way. You have the full power of lua at your disposal to do whatever you want with the data.

In this case, the SDJournal object is pretty straightforward. It simply wraps the various ‘sd_xxx’ calls within the library to get its work done. What about some other cases? Does the systemd library need to be used for absolutely everything it does? The answer is ‘no’; you can do a lot of the work yourself, because at the end of the day, the passive part of systemd is just a bunch of file system manipulation.

Here’s where it gets interesting in terms of decomposition.

Within the libsystemd library, there is the sd_get_machine_names() function:
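Its declaration (as found in systemd’s sd-login.h) is:

```c
int sd_get_machine_names(char ***machines);
```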

Great, for those who already know this call, you can allocate a char * array, get the array of string values, and party on. But what about the lifetime of those strings? If you’re doing it as an iterator, when do you ever free stuff, and isn’t this all wasteful?

So, looking at that code in the library, you might think, ‘wait a minute, I could just replicate that in Lua, and get it done without doing any ffi stuff at all!’
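A sketch of that thought, assuming the files_in_directory() iterator shown below (the machine_names name itself is hypothetical):

```lua
-- enumerate machine names the way the library does: each
-- registered machine shows up as a file under /run/systemd/machines
local function machine_names()
    local names = {}
    for _, name in files_in_directory("/run/systemd/machines") do
        names[#names + 1] = name
    end
    return names
end
```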

OK, that looks simple. But what’s happening with that ‘files_in_directory()’ function? Well, that’s the meat and potatoes of this operation.

local function nil_iter()
    return nil;
end

-- This is a 'generator' which will continue
-- the iteration over files
local function gen_files(dir, state)
    local de = nil
    while true do
        de = libc.readdir(dir)

        -- if we've run out of entries, then return nil
        if de == nil then return nil end

        -- check the entry to see if it's an actual file, and not
        -- a directory or link
        if dirutil.dirent_is_file(de) then
            break;
        end
    end

    local name = ffi.string(de.d_name);

    return de, name
end

local function files_in_directory(path)
    local dir = libc.opendir(path)
    if dir == nil then return nil_iter, nil, nil; end

    -- attach a finalizer, so the directory handle
    -- is closed on garbage collection
    ffi.gc(dir, libc.closedir);

    return gen_files, dir, nil;
end

In this case, files_in_directory() takes a string path, like “/run/systemd/machines”, and just iterates over the directory, returning only the files found there. It’s convenient in that it will skip so-called ‘hidden’ files, and things that are links. This simple technique can be the cornerstone of a lot of things that view files in Linux. The function leverages the libc opendir() and readdir() functions, so there’s nothing new here, but it wraps it all up in a convenient iterator, which is nice.

systemd is about a whole lot more than just browsing directories, but that’s certainly a big part of it. When you break it down like this, you begin to realize that you don’t actually need to use a ton of stuff from the library. In fact, it’s probably better and less resource intensive to just ‘go direct’ where it makes sense. In this case, it was just implementing a few choice routines to make file iteration work the same as it does in systemd. As this binding evolves, I’m sure there is other low-hanging fruit that I’ll be able to pluck to make it even more interesting, useful, and independent of the libsystemd library.

In this article: Isn’t the whole system just a database? – libdrm, I explored a little bit of the database nature of Linux by using libudev to enumerate and open libdrm devices. After that, I spent some time bringing up a USB module: LJIT2libusb. libusb is a useful cross platform library that makes it relatively easy to gain access to the usb functions on multiple platforms. It can enumerate devices, deal with hot plug notifications, open up, read, write, etc.

At its core, on Linux at least, libusb tries to leverage the udev capabilities of the target system, if those capabilities are there. This means that device enumeration and hot plugging actually use libudev under the covers. In fact, the code for enumerating those usb devices within libusb does exactly that.

There’s more stuff of course, to turn that into data structures which are appropriate for use within the libusb view of the world. But, here’s the equivalent using LLUI and the previously developed UVDev stuff:

local function isUsbDevice(dev)
    if dev.IsInitialized and dev:getProperty("subsystem") == "usb" and
        dev:getProperty("devtype") == "usb_device" then
        return true;
    end

    return false;
end

each(print, filter(isUsbDevice, ctxt:devices()))

It’s just illustrative, but it’s fairly simple to understand, I think. The ‘ctxt:devices()’ call is an iterator over all the devices in the system. The ‘filter’ function is part of the luafun functional programming routines available to Lua. The ‘isUsbDevice’ is a predicate function, which returns ‘true’ when the device in question matches what it believes makes a device a ‘usb’ device. In this case, it’s the subsystem and devtype properties which are used.
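To make the pattern concrete without needing real hardware, here’s a self-contained sketch of the same predicate-and-filter idea in plain Lua. The Device() mock and the tiny filter() are stand-ins of my own; real code would use luafun’s filter() against a live udev context:

```lua
-- A mock stand-in for a udev device: just a table of properties.
local function Device(props)
    return {
        IsInitialized = true,
        getProperty = function(self, key) return props[key] end,
    }
end

local devices = {
    Device{ subsystem = "usb",   devtype = "usb_device" },
    Device{ subsystem = "block", devtype = "disk" },
    Device{ subsystem = "usb",   devtype = "usb_interface" },
}

-- the predicate, exactly as in the real code above
local function isUsbDevice(dev)
    return dev.IsInitialized
        and dev:getProperty("subsystem") == "usb"
        and dev:getProperty("devtype") == "usb_device"
end

-- a tiny filter over a plain array, standing in for luafun's filter()
local function filter(pred, list)
    local out = {}
    for _, v in ipairs(list) do
        if pred(v) then out[#out + 1] = v end
    end
    return out
end

local usb = filter(isUsbDevice, devices)
print(#usb)    -- prints 1; only the first mock device matches
```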

Being able to easily query devices like this makes life a heck of a lot easier. No funky code polluting my pure application. Just these simple query predicates written in Lua, and I’m all set. So, instead of relying on libusb to enumerate my usb devices, I can just enumerate them directly using udev, which is what the library does anyway. Enumeration and hotplug handling is one part of the library. The other part is the actual sending and receiving of data. For that, the libusb library is still primarily important, as replacing that code will take some time.

Where else can this great query capability be applied? Well, libudev is just a nice wrapper atop sysfs, which is that virtual file system built into Linux for gaining access to device information and control of the same. There’s all sorts of stuff in there. So, let’s say you want to list all the block devices?

local function isBlockDevice(dev)
    if dev.IsInitialized and dev:getProperty("subsystem") == "block" then
        return true;
    end

    return false;
end

That will get all the devices which are in the subsystem “block”. That includes physical disks, virtual disks, partitions, and the like. If you’re after just the physical ones, then you might use something like this:

local function isPhysicalBlockDevice(dev)
    if dev.IsInitialized and dev:getProperty("subsystem") == "block" and
        dev:getProperty("devtype") == "disk" and
        dev:getProperty("ID_BUS") ~= nil then
        return true;
    end

    return false;
end

Here, a physical device is indicated by subsystem == ‘block’ and devtype == ‘disk’, plus the existence of the ‘ID_BUS’ property, the assumption being that any physical disk will show up on one of the system’s buses. This won’t catch an SD card though. For that, you’d use the first predicate, and then look for a property related to being an SD card. Same goes for ‘cd’ vs ramdisk, or whatever. You can make these queries as complex or simple as you want.

Once you have a device, you can simply open it using the “SysName” parameter, handed to an fopen() call.
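As a sketch, that open step might look like the following (openDevice() is a made-up helper of mine; it assumes the node named by SysName lives under /dev, which is the common case on Linux, and uses Lua’s io.open in place of a raw fopen()):

```lua
-- Hypothetical helper: turn a udev SysName (e.g. "sda") into an
-- open handle on the corresponding device node under /dev.
local function openDevice(sysname, mode)
    return io.open("/dev/" .. sysname, mode or "rb")
end

-- usage (reading a raw block device typically requires root):
-- local f = openDevice("sda")
```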

I find this to be a great way to program. It makes the creation of utilities such as ‘lsblk’ relatively easy. You would just look for all the block devices and their partitions, and put them into a table. Then separately, you would have a display routine, which would consume the table and generate whatever output you want. I find this much better than the typical Linux tools which try to do advanced display using the terminal window. That’s great as far as it goes, but not so great if what you really want is a nice html page generated for some remote viewing.
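That gather/display split can be sketched in plain Lua. The gathered table here is mock data of my own; a real gather step would run the block-device queries shown earlier:

```lua
-- Step 1: gather. A real version would fill this table by filtering
-- udev devices with a predicate like isBlockDevice(); these entries
-- are mock data for illustration.
local function gatherBlockDevices()
    return {
        { name = "sda",  devtype = "disk"      },
        { name = "sda1", devtype = "partition" },
    }
end

-- Step 2: display. Completely separate from gathering, so the same
-- table could just as easily feed an html page for remote viewing.
local function renderText(devs)
    local lines = {}
    for _, d in ipairs(devs) do
        lines[#lines + 1] = string.format("%-8s %s", d.name, d.devtype)
    end
    return table.concat(lines, "\n")
end

print(renderText(gatherBlockDevices()))
```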

At any rate, this whole libudev exploration is a great thing. You can list all devices easily, getting every bit of information you care to examine. Since it’s all scriptable, it’s fairly easy to tailor your queries on the fly, looking at things, discovering, and the like. I discovered that the thumb print reader in my old laptop was made by Broadcom, and my webcam by 3M. It’s just so much fun.

Well there you have it. The more you spelunk, the more you know, and the more you can fiddle about.

And what’s this then? It’s the draw_crtc_lines.lua example from the LJIT2RenderingManager repository. I am using a LuaJIT wrapped libdrm to draw graphics on the console of my Linux machine. Yah, the console, where text is usually the norm. But, really, text is just a bunch of glyphs, which is nothing but a bunch of graphics right?

That was three years ago. With the Raspberry Pi, it was actually a fairly straightforward thing to go get a handle on the screen, and thus a pointer to the frame buffer data. Just a couple of function calls…

With the image here, I am using libdrm, which is at the core of graphics on Linux systems. The way it works is, you have fairly low level access to the graphics subsystem within the Linux kernel itself. Then there’s this userspace code, as represented by libdrm, which wraps up various ioctl() calls to make it easy to interact with those kernel functions. Then other things, from the console to X11 windows, are built atop this very lowest level. Throw in libudev and libevdev for input device handling, and you begin to have a UI subsystem.

Using libdrm isn’t particularly hard, but it’s pretty darned hard to find any crisp, clear example articles or code to show you the way. Most of the examples are associated with mode setting, or fairly low level tests, or are very out of date. I’m not quite sure why that is, other than the fact that most people don’t really care about this lowest level stuff; they just use the higher level frameworks such as Qt, SDL, and what have you. I was hard pressed to find anything like “how to draw to the current screen’s frame buffer directly”. So, here’s my version:

--[[
    Draw on the frame buffer of the current default crtc

    The way to go about drawing to the current screen, without changing modes is:
        Create a card
        From that card, get the first connection that's actually connected to something
        From there, get the encoder it's using
        From there, get the crt controller associated with the encoder
        From there, get the framebuffer associated with the controller
        From there, we have the width, height, pitch, and data ptr

    So, that's enough to do some drawing
--]]
package.path = package.path..";../?.lua"

local ffi = require("ffi")
local bit = require("bit")
local bor, band, lshift, rshift = bit.bor, bit.band, bit.lshift, bit.rshift

local libc = require("libc")
local utils = require("utils")
local ppm = require("ppm")
local DRMCard = require("DRMCard")

local function RGB(r, g, b)
    return band(0xFFFFFF, bor(lshift(r, 16), lshift(g, 8), b))
end

local function drawLines(fb)
    local color = RGB(23, 250, 127)
    for i = 1, 400 do
        utils.h_line(fb, 10+i, 10+i, i, color)
    end
end

local card, err = DRMCard();
local fb = card:getDefaultFrameBuffer();
fb.DataPtr = fb:getDataPtr();
print("fb: [bpp, depth, pitch]: ", fb.BitsPerPixel, fb.Depth, fb.Pitch)

local function drawRectangles(fb)
    utils.rect(fb, 200, 200, 320, 240, RGB(230, 34, 127))
end

local function draw()
    drawLines(fb)
    drawRectangles(fb);
end

draw();

ppm.write_PPM_binary("draw_crtc_lines.ppm", fb.DataPtr, fb.Width, fb.Height, fb.Pitch)

There, clear as can be, right? It is actually fairly easy once you have the right wrappers and an understanding of how these things are supposed to work. The comment at the top of the code block explains the steps that go into it. Basically, you get a ‘card’, which is a representation of the video card in question. In this case, we just grab the first available card. If there are two in a system, you could be more specific about it. From the card, there are connectors (like VGA, DVI, HDMI, etc.). So, you choose one of those, and ultimately you find out which frame buffer is associated with it. From there, you can ask the frame buffer for its data pointer, and then you’re ready to go.

Well, that’s certainly a mouthful, just to get a pointer to the screen’s frame buffer data! The way it breaks down actually isn’t that bad. First, you do that drmIoctl to get a handle on the framebuffer. This is just how the kernel knows which framebuffer you’re talking about, because there can be many. Then you use that handle to get some information which is useful for doing an mmap() call (the offset from the beginning of the ‘file’ which represents the frame buffer). And finally, you make the mmap call so that you can get a pointer to the actual data of interest.
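Those steps can be sketched with the LuaJIT ffi. This is illustrative only; the drm-specific ioctl numbers and structs (DRM_IOCTL_MODE_GETFB, DRM_IOCTL_MODE_MAP_DUMB) come from the drm headers and are deliberately left out rather than guessed at here:

```lua
local ffi = require("ffi")

-- Only the final mmap() step is shown concretely. Steps 1 and 2 are
-- drm ioctls: one to turn a framebuffer id into a buffer handle, and
-- one (the MAP_DUMB ioctl) to turn that handle into a fake file
-- offset that the drm fd understands.
ffi.cdef[[
void *mmap(void *addr, size_t length, int prot, int flags, int fd, long offset);
]]

local PROT_READ, PROT_WRITE = 1, 2   -- values from <sys/mman.h> on Linux
local MAP_SHARED = 1

-- Step 3: given the drm file descriptor, the buffer size, and the
-- fake offset from the MAP_DUMB ioctl, get a pointer to the pixels.
local function mapFramebuffer(fd, size, mapOffset)
    return ffi.C.mmap(nil, size,
        PROT_READ + PROT_WRITE, MAP_SHARED, fd, mapOffset)
end
```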

At this point, you now have a pointer to the data portion of the framebuffer, and you can start drawing to your heart’s content. Yep, it’s that easy.

Well this is all fine and dandy then. If you don’t feel like creating your own graphics rendering engine from scratch, then you could enlist the power of libpixman (LJIT2pixman – Drawing on Linux), another low level portion of the graphics pipeline on Linux. With libpixman, you can take this pointer you got from the framebuffer, and give it to one of the pixman_image_create_bits() calls, to get yourself a pixman drawing canvas, then you’re all set to use all the goodness pixman has to offer.

That’s very nifty, and this is how windowing systems are born.

Using libdrm can be a daunting task to the uninitiated (such as myself). Through nice Lua wrapping, and a bit of objectification, it can be tamed, and drawing on the screen is no harder than drawing into any other bit of memory you might have. One added bonus here. Since you have a pointer to the colors on the screen, doing a screencast capture, or desktop sharing, can’t be too far away…