Musings of a Creator

Replicating Screens

What you see here is a snapshot of my screen. The big window on the left is an instance of the SciTE IDE running a bit of Lua code. On the right is a scaled-down version of the same window.

And sooooo…..?

A few years back, when I was exploring doing distributed desktop stuff, I wrote a bunch of C# code. Mostly it was interop code to Windows API calls, such as networking, GDI, and OpenGL. Well, since I’m totally into Lua these days, and wanting to achieve the same, I’ve been re-writing that interop code using the LuaJIT FFI facility.

Along the way, there have been a couple of little hurdles (doing callbacks through JIT'd code) and some real head-scratchers (I just can't seem to get beyond OpenGL 1.1 on my Windows Vista-based laptop). But finally, I have enough nailed down that this actually works.

What’s going on here? Well, the first part of it is rather simple. If you search the interwebs for “programmatic screen capture Windows”, one of the first articles you might find shows three different methods of doing screen captures. The code is circa 2006, but really, it hasn’t changed at all in the intervening years. Various other articles show you that the “Print Screen” key on your keyboard actually takes a copy of the screen and places it on the clipboard.

The way I do it uses the ancient GDI interface. Why? Because it still works circa 2012 on Windows 7, and it worked way back in Windows 95. So, even if it’s not the most modern of APIs, it simply works. The relevant piece of code looks like this:
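A sketch of that call through the LuaJIT FFI is below. SRCCOPY is the standard GDI raster-op constant; the handle and size variables (hdcMem, hdcScreen, width, height) stand in for values set up elsewhere in the routine.

```lua
local ffi = require("ffi")

ffi.cdef[[
typedef void * HDC;
typedef int BOOL;
BOOL BitBlt(HDC hdcDest, int nXDest, int nYDest, int nWidth, int nHeight,
            HDC hdcSrc, int nXSrc, int nYSrc, uint32_t dwRop);
]]
local gdi32 = ffi.load("gdi32")

local SRCCOPY = 0x00CC0020

-- Copy the screen's device context into the memory DC that
-- backs our bitmap.
gdi32.BitBlt(hdcMem, 0, 0, width, height, hdcScreen, 0, 0, SRCCOPY)
```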

That’s calling the GDI based BitBlt() function. You pass in various device context handles, and other parameters, and out comes a copy of the screen, stuffed into a bitmap. Yah, I’m hiding some details, but this is the business end of the routines involved. Setting up the bitmap you’re actually copying into looks something like this:
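Something along these lines, using the GDIDIBSection wrapper described next (the exact constructor signature here is my sketch of it):

```lua
-- Create a 32 bits-per-pixel DIB section the size of the screen.
-- Along with the bitmap handle, the object holds a pointer to
-- the actual pixel data.
local hbmScreen = GDIDIBSection(width, height, 32)
```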

I have a GDIDIBSection object coded up, which takes care of the details of creating a GDI based DIB Section, which is a fancy name for what is more commonly known as a bitmap. You simply give the width, height, and number of bits per pixel (32 or 24), and what you get back is a bitmap that Windows knows how to deal with. At the same time, you get a pointer to the actual pixels.

Playing with pixels, for things like drawing, can be a chore. So, instead of having to code against the pixel pointer directly, I wrap that up in a tidy little package known as the PixelAccessor. You instantiate one like this:
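A sketch of that instantiation, feeding the accessor the DIB section created above (the constructor shape is an assumption):

```lua
-- Wrap the DIB section so its pixels can be read and written
-- through a uniform interface.
local hbmScreenAccessor = PixelAccessor(hbmScreen)
```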

Why bother? Well, then you have a convenient interface to just about any source of data, including the DIBSection. It could just as easily represent the frames of data from a webcam, or any other source of 2D data. With the accessor, you have a couple of methods:
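In spirit, the pair looks like this (the names Get and Set are illustrative, not the actual API):

```lua
-- Read and write raw elements of the underlying 2D data.
local value = accessor:Get(x, y)
accessor:Set(x, y, value)
```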

Between the two, particularly if you’re dealing with pixels, you have GetPixel and SetPixel. Well, once you have those, you have the world in your hands. And just for extra kicks, if you want to draw a red line across the image, you can use a simple 2D renderer:
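As a sketch (the renderer’s name and method signatures here are illustrative):

```lua
-- Draw a horizontal red line across the image before it
-- gets turned into a texture.
local graphPort = Renderer2D(hbmScreenAccessor)
graphPort:SetColor(255, 0, 0, 255)        -- opaque red, RGBA
graphPort:DrawLine(0, 10, width - 1, 10)  -- left edge to right edge
```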

And lastly, we need a nice wrapper for the OpenGL based Texture objects:

screenTexture = Texture(hbmScreenAccessor)

That one line will create an OpenGL texture based on information it gets from the Accessor you passed in. Basically, the width, height, and pixels to copy. It makes a couple of assumptions, like it uses RGBA for its internal storage, but that’s a fairly safe thing to do.

And lastly, when you want to alter the texture (like with 30 fps screen updates):

screenTexture:CopyPixelBuffer(hbmScreenAccessor)

That is, assume the same texture object, but copy new data into it.

So, to wrap it all up, 30 times a second, or whatever frame rate I set at the beginning, I execute the following function:
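In outline, that per-frame function looks like this. CopyPixelBuffer is the call shown above; CaptureScreen and Render stand in for the capture and draw steps, and those names are illustrative:

```lua
local function tick()
    -- BitBlt the screen into the DIB section.
    CaptureScreen(hbmScreen)

    -- Push the new pixels into the existing texture.
    screenTexture:CopyPixelBuffer(hbmScreenAccessor)

    -- Draw the texture scaled to fill the current window.
    screenTexture:Render(0, 0, windowWidth, windowHeight)
end
```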

That is, capture the screen, tell the texture object to render itself to fill the window. So, if you resize the copied window, the image will scale up and down, with the full filtering support that OpenGL has to offer. That’s pretty spiffy I think.

One of the primary interesting things about this, to me, is that all of the code is Lua. Other than the bits that are actually OS calls, everything from setting up the window, the window callbacks, even pixel twiddling if I want to draw on the image before displaying, is all done in Lua, which is what I’ve been after. No GLFW, no nothing, just plain Lua. Now, this does leave me with a small interop layer to write when it comes to iOS, Android, and MacOS, but I can easily see writing that interop code in Lua, instead of C, Objective-C, Java, or whatever.

There was a lot of head-banging involved in bringing the code to this point, but it’s been well worth it. Now I have complete control of my graphics environment, and can even contemplate doing screen sharing while doing design. That’s what I’m ultimately after with this, so it’s been a great next step.

Displaying a small copy of a window next to itself is not the most earth-shattering thing in the world. But the machinery behind it is enabling, and provides more stepping stones for getting more interesting things done. With enough of these stepping stones in place, programming on Windows, or in any environment, becomes much more approachable and pleasant.

The next big thing I have to tackle is the OpenGL Extensions. Given the dynamic nature of Lua, I have an idea this is going to prove to be a much better way to program than using the typical GLEW library.