Category Archives: software libre


I just love when I forget to add ‘volatile’ and the compiler happily optimizes away a chunk of code.

After staring at the screen for a while trying to figure out why it didn't work as expected, I went for a quick nap. When I got back I noticed several warnings about it that had been invisible to my eyes before.

After many hours without proper sleep, 'B' and 'b' look the same and your mind tricks you into seeing one when you really typed the other. (.load script […] error: undefined is not a function. Gah! But in the REPL I just type myobj.Blargh and it says [Function].)

Today I woke up almost as tired as I went to bed yesterday. Most people are not working because of a multi-day holiday. Or something like that.

Like yesterday, I tried (unsuccessfully) to figure out why WebVfx refuses to play nice with gstshm. So I went for a walk to clear my mind.

One of the nicest things about living in Berisso is that I have, really really close, almost-virgin fields and beaches, an island, "normal" city stuff and industrial/maritime landscapes. Today I went to Ensenada; there are many places that look like stills from movies such as Tank Girl or Mad Max. I toyed around the docks and abandoned ships. I also met a woman who kinda looked like Lori Petty does these days. Scary.

Lately I've been thinking a lot about how I can make nice and easily customizable interfaces for video applications. My idea of 'nice' is kind of orthogonal to what most of my expected user base will want, and by 'easily customizable' I don't mean 'go edit this glade file / json stage / etc'.

Clutter and MX are great for making good-looking interfaces and, like Gtk, have something that resembles CSS to style stuff, and they can load a UI from an XML or JSON file. However, sooner or later they will need a mix of developer and designer. And unless you do something up front, the interface is tied to the backend process that does the heavy video work.

So, seeing all the good stuff we are doing with Caspa, the VideoEditor, WebVfx and our new magical synchronization framework, I wondered:

Why, instead of using Gtk, can't I make my UI with HTML and all the fancy things that are already made?

And while we are at it I want process isolation, so if the UI crashes (or I want to launch more than one to see different UI styles side by side) the video processing does not stop. Of course, should I want tighter coupling, I can embed WebKit in my application and make a JavaScript bridge to avoid having to use something like WebSockets to interact.

One can always dream…

Then my muse appeared and commanded me to type. Thankfully, mine is not like the one the poor soul in "Blank Page" had.

So I type, and I type, and I type.

'Till I made this: two GStreamer pipelines, outputting to auto audio and video sinks and also to a WebKit process. Buffers travel through shared memory; they are still copied more than I'd like, but that makes things a bit easier and helps decouple the processes, so if one stalls the others don't care (and anyway, for most of the things I want to do I'll need to make a few copies). Lucky me, I can throw beefier hardware at it and play with more interesting things.

I expect to release this in a couple of weeks when it's more stable and usable; as of today it tends to crash if you stare at it a bit too hard.

“It’s an act of faith, baby”

Using WebKit to display video from a GStreamer application. Something free to whoever knows who the singer is without using image search.

Lately I've been working with a lot of technologies that are a bit outside of my comfort zone of hardware and low-level stuff: JavaScript, HTML-y things and node.js. At first it was a tad difficult to wrap my head around all that asynchronism and things like hoisting, the value of 'this' here, and inheritance.

Then, all of a sudden, I had an epiphany and I wrote a truly marvellous piece of software. Now I can use Backbone.io on the browser and the server, with the same models and codebase on both without a single change. Models are automatically synchronized. On top of that there's a redis transport, so I can sync models between different node instances in real time without hitting the storage (mongo in this case). And the icing on the cake is that a Python compatibility module is about to come.

When a potential client approaches me for a quote I normally give two estimates: one if I am allowed to write something about the work, and another (substantially higher) if they refuse.

I never said a word about open sourcing it, naming names or something like that.

Most of the time I explain, as politely as I can, that nobody is going to 'steal' their wonderful idea. And also that it is just a very simple variation on stuff found in textbooks, and the only original thing they did was to put a company logo on it.

Some time ago we needed to connect as many USB cameras as possible to a single computer and capture full HD video and audio. Most of our systems, despite having a lot of connectors, really have just one host controller and a hub inside.

While the available bandwidth may be more than enough when using a compressed format, the number of isochronous transfers is rather limited. Our minimal use case called for three C920 cameras. On a normal system (one host controller behind a hub) the best we could achieve was two at 1280×720@30fps with audio and a third without audio, and only one at 1920×1080@30fps with audio.
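A back-of-the-envelope check (the camera bitrate is my guess; the 80% periodic budget comes from the USB 2.0 spec) showing that raw bandwidth was never the problem; it's the isochronous scheduling that runs out first.

```python
# USB 2.0 high-speed: 480 Mbit/s, of which at most 80% of each microframe
# may be reserved for periodic (isochronous/interrupt) transfers.
bus_mbps = 480
periodic_budget_mbps = bus_mbps * 0.8

# Rough MJPEG bitrate of a C920-class camera at 1080p30 (assumed figure).
camera_mbps = 24

cameras_by_bandwidth = periodic_budget_mbps // camera_mbps
# Bandwidth alone would allow around 16 such cameras, yet in practice the
# controller's isochronous slots were exhausted at two or three.
```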

So, we needed to add more controllers. USB 2.0 add-on cards are a thing of the past, but luckily they were replaced with faster USB 3.0 ones. Most USB 3.0 controllers also feature a USB 2.0 controller and hub for older devices, but some (very rare) cards have a dedicated USB 2.0 controller for each port.

Given this, I went ahead and bought two cards, each of a different brand and chipset.

One of them had a NEC PD720200. It worked like a charm, but sadly it only has one USB 2.0 controller.

The other sported a VIA VL800. This one has one USB 2.0 controller per port (this can be seen with lsusb -t). That lovely discovery didn't last long, as the controller crashed all the time: at best it would stop responding, but sometimes it locked my system hard. The guys at VIA have a very interesting definition of meeting the specs. I've spent a whole weekend patching kernels trying to make it behave. Now I have a quite expensive and sophisticated paperweight.

Testing procedures:

I ssh'd into the target machine and ran, in several consoles:

– watch -n1 'dmesg | tail -n 16' to have a log should the system crash hard.

It is really wonderful how much computing power we have nowadays. The first time I compiled a kernel it took a good four hours. On my current machine (not quite new…) it takes about forty minutes from a clean tree and around ten from an already compiled one.

I've made a couple of experiments with Tetra. Right now the code that manages disconnection of live sources (say, someone pulls the cable and walks away with one of our cameras) kind of works; it certainly does on my system, but with different sets of libraries sometimes the main Gst pipeline just hangs there, and it really bothers me that I'm unable to get it right.

So I decided to really split it into a core that does the mixing (either manual or automatic) and different pipelines that feed it. Previously I had success using the inter elements (with interaudiosrc hacked so its latency is acceptable) to have another pipeline with video from a file mixed with live content.

Using the inter elements and a dedicated pipeline for each camera worked fine: the camera pipeline could die or disappear and the mixing pipeline churned happily. The only downside is that it puts some requirements on the audio and video formats.

Something I wasn't expecting was that CPU utilization dropped: before, I had two threads using 100% and 30% of CPU time (and many others below 10%), with both cores at 80% load on average. With different pipelines linked with inter elements I had two threads, one at 55% and a couple of others near 10%; both cores a tad below 70%.

Using shmsrc / shmsink yielded similar performance results, but as a downside it behaved just like the original regarding sources being disconnected, so for now I'm not considering them for ingesting video. On the other hand, latency was imperceptible, as expected.

This is more or less a direct translation of the examples found at gstreamer/tests/examples/controller/*.c to their equivalents using the gi bindings for GStreamer under Python. The documentation can be found here. Reading the source also helps a lot.

The basic premise is that you can attach a controller to almost any property of an object, set an interpolation function and give it (time, value) pairs so the property is smoothly changed. I'm using a pad as a target instead of an element just because it fits my immediate needs, but it really can be any Element.

Here I created two test sources, one with bars and another with static that also has a horizontal offset. If we were to start the pipeline right now (p.set_state(Gst.State.PLAYING)) we would see something like this:

So far it works. Now I’d like to animate the alpha property of s0 (the sink pads of a videomixer have interesting properties like alpha, zorder, xpos and ypos). First we create a control source and set the interpolation mode:

If you are not running this from the interpreter, remember to add GObject.MainLoop().run(), otherwise the script will exit instead of keeping the pipeline playing. Here I've used absolute times; to animate in the middle of a playing state you need to get the current time and set the points accordingly. Something like this will do in most cases:

Avoiding too much bookkeeping

You can get the controller and control source of an element with:
control_binding = element.get_control_binding('property')
if control_binding:
    control_source = control_binding.get_property('control_source')