Author: Edward Hervey

7 years since starting the Collabora Multimedia adventure,
7 years of challenges, struggles, and proving we could tackle them,
7 years of success, pushing FOSS in more and more areas (I can still hear Christian say “de facto” !),
7 years of friendship, jokes, rants,
7 years of being challenged to outperform oneself,
7 years of realizing you were working with the smartest and brightest engineers out there,
7 years of pushing the wtf-meter all the way to 11, yet translating that in a politically correct fashion to the clients,
7 years of life …
7 years … that will never be forgotten, thanks to all of you

It’s never easy … but it’s time for me to take a long overdue break, see what other exciting things life has to offer, and start a new chapter.

So today is my last day at Collabora. I’ve decided that after 17 years of non-stop study and work (i.e. since I last took more than 2 weeks vacation in a row), it was time to take a break.

What’s next ? Tackling that insane todo-list one compiles over time but never gets to tackle :). Some hacking and GStreamer (obviously), some other life related stuff, traveling, visiting friends, exploring new technologies and fields I haven’t had time to look deeper into until now, maybe do some part-time teaching, write more articles and blogposts, take on some freelance work here and there, … But essentially, be in full control of what I’m doing for the next 6-12 months.

Who knows what will happen. It’s both scary … and tremendously exciting.

PS 1: While my position at Collabora as Multimedia Domain Lead has already been taken over by the insane(ly amazing) Olivier Crete (“tester” of GStreamer fame), Collabora is looking for more Multimedia engineers. If you’re up for the challenge, contact them.

History so far

For the past 6-9 months, as part of some of the tasks I’ve been handling at Collabora, I’ve been working on setting up a continuous build and testing system for GStreamer. For those who’ve been following GStreamer for long enough, you might remember we had a buildbot instance back around 2005-2006, which continuously built and ran checks on every commit. And when it failed, it would also notify the developers on IRC (in more or less polite terms) that they’d broken the build.

The result was that master (sorry, I mean main, we were using CVS back then) was guaranteed to always be in a buildable state and tests always succeeded. Great, no regressions, no surprises.

At some point in time (around 2007 I think ?) the buildbot was no longer used/maintained… And eventually subtle issues crept in: you weren’t guaranteed that checkouts would always compile, tests eventually broke, you’d need to track down what introduced a regression (git bisect makes that easier, but avoiding it in the first place is even better), etc…

What to test

Fast-forward to 2013: after talking so much about it, it was time to put such a system back in place. Quite a few things have changed since:

There’s a lot more code. In 2005, when 0.10 was released, the GStreamer project was around 400kLOC. We’re now around 1.5MLOC ! And I’m not even taking into account all the dependency code we use in cerbero, the system for building binary SDK releases.

There are more usages that we didn’t have back then, and new modules under the GStreamer project umbrella (rtsp-server, editing-services, orc, …).

We provide binary releases for Windows, MacOSX, iOS, Android, …

The problems to tackle were “What do we test ? How do we spot regressions ? How to make it as useful as possible to developers ?”.

In order for a CI system to be useful, you want to keep the signal-to-noise ratio as high as possible. Just enabling a massive bunch of tests/use-cases with millions of things to fix is totally useless. Not only is it depressing to see millions of failed tests, but you also can’t spot regressions easily, and eventually people don’t care anymore (it’s just noise). You want the system to become a simple boolean (either everything passes, or something failed; and if it failed, it was because of the last commit(s)). In order to cope with that, you gradually activate/add items to do and check. The bare minimum was essentially testing whether all of GStreamer compiled on a standard Linux setup. That serves as a reference point. If someone breaks the build, the system becomes useful: you’ve spotted a regression, you can fix it. As time goes by, you start adding other steps and builds (make check passes on gstreamer core, activate that; passes on gst-plugins-base, activate that; cerbero builds fully/cleanly on Debian, activate that; etc…).

The other important part is that you want to know as quickly as possible whether a regression was introduced. If you need to wait 3 hours for the CI system to report a regression … the person who introduced it will have gone to sleep or been taken up by something else. If you know within 10-15mins, then it’s still fresh in their head, they are most likely still online, and you can correct the issue as quickly as possible.

Finally, what do we test ? GStreamer has gotten huge. In that sentence, “GStreamer” is actually not just one module, but a whole collection (GStreamer core, gst-plugins*, but also ORC, gst-rtsp-server, gnonlin, gst-editing-services, …). Whatever we produce for every release … must be covered. So this now includes the binary releases (formerly from gstreamer.com, but handled by the GStreamer project itself since 1.x). So we also need to make sure nothing breaks on all the platforms we target (Linux, Android, OSX, iOS, Windows, …).

To summarize

CI system must be set-up progressively (to detect regressions)

CI system must be fast (so person who introduced the regression can fix it ASAP)
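The progressive-gating idea above can be sketched as a toy Python model (the step names and structure are illustrative only, not taken from the actual GStreamer CI setup):

```python
# Toy model of a progressively-gated CI run: only steps that have been
# explicitly activated ("gating") can turn the overall result red, so
# known-noisy steps don't drown out real regressions.
# All names here are illustrative, not from the actual GStreamer CI.

from dataclasses import dataclass

@dataclass
class Step:
    name: str
    passed: bool
    gating: bool  # only activated steps count towards the boolean result

def ci_result(steps):
    """The whole run collapses to a single boolean: did any gating step fail?"""
    failures = [s.name for s in steps if s.gating and not s.passed]
    return (len(failures) == 0, failures)

steps = [
    Step("build gstreamer core", passed=True, gating=True),
    Step("make check gstreamer core", passed=True, gating=True),
    # Known-noisy step, not activated yet: its failure is recorded but
    # does not turn the run red.
    Step("make check gst-plugins-base", passed=False, gating=False),
]

ok, failures = ci_result(steps)
print(ok, failures)  # prints: True []
```

Once the noisy step is fixed up, flipping its `gating` flag to `True` makes any future failure of it count as a regression.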

So yes, it’s been quite some time since the last blogpost. A lot has been going on, but that will be for future blogposts.

Currently at GUADEC 2014 in Strasbourg, France. Lots of interesting talks and people around.

Quite a few discussions with several people regarding future GStreamer improvements (1.4.0 is freshly out, we need to prepare 1.6 already). I’ll most likely be concentrating on the whole QA and CI side of things (more builds, make them clearer, make them do more), plan how to do nightly/weekly releases (yes, finally, I kid you not !). We also have a plan to make it faster/easier/possible for end-users (non-technical ones) to get GStreamer in their hands. More on that soon (unless Sebastian blogs about it first).

Since most of the PiTiVi hackers are going to be present in The Hague next week for GUADEC and the Collabora Multimedia annual meetup, we decided to organize a hackfest/meetup on Tuesday 27th July in the afternoon, just before GUADEC. I’m still trying to figure out a venue to hold this; I’ll keep you informed via blog/mail/twitter.

If you want to learn more about PiTiVi, chat with the hackers, get face-to-face assistance/discussion, learn what’s planned next, etc… you’re welcome to join. And yes, if you want to complain, you can also come by.

UPDATE: It will take place in the area below the Paris Room (stairs going down in the middle of the open space).

The Google Summer of Code deadline for student applications is nearing, and I thought it would be good to highlight some projects that are dear to me:

PiTiVi : The inclusion of PiTiVi in Ubuntu Lucid has brought about a lot more interest, and while the reception is overall positive for what it can do right now, we know there’s still a lot more work to make it the swiss-army-penknife of editing. A student has already proposed adding support for effects, but other items that would be great to have are:

Collaborative editing : Allowing several people across a network/internet to work on a common project, using the telepathy project.

Improving the Media Asset Management support of PiTiVi (i.e. solving the problem of having hours and hours of footage: how to search/sort/tag it efficiently). Leveraging existing technologies instead of reinventing the wheel is a must here.

Auto-cleanup of editing: Average consumers don’t know how to shoot, the camera moves, there’s some hissing in the audio, the color-balance is always changing, … This would be an excellent project for somebody with lots of video editing experience who has got an eye to making an editing project look professional (i.e. anti-youtube) and wants to help automatize all the tedious little tasks one has to take care of. The goal would be to come up with a system to cleanup footage on import, help define proper transitions,.. This would go hand in hand with effects support.

Wine DirectShow GStreamer backend. I mentored Trevor Davenport last year who proved that using GStreamer as a back-end for DirectShow on Wine is possible, but quite a bit of work is required to finish up this task to have it merged in wine trunk.

GStreamer: Supporting VA-API in GStreamer would be a must, especially considering the awesome work Gwenole Beauchesne has done to make it support VDPAU as a backend. VLC has already switched to supporting only VA-API for hardware acceleration, and it’s time GStreamer caught up.

The PiTiVi proposals can be submitted under both the GStreamer and GNOME organizations, the Wine/DirectShow project with the Wine organization, and the GStreamer projects… with GStreamer.

You can also hop into the IRC channels of the various projects to discuss your proposals. On the freenode network: #gstreamer, #pitivi, #winehq

The best proposals are always the ones where students have got an itch they want to scratch as opposed to being told what to do, so if you’ve got something in particular you’d like to hack on in the projects above, don’t hesitate to contact the maintainers/developers of those projects and propose your ideas… but hurry up… tic tic tic

As some people noticed in the PiTiVi community, I haven’t been working that much on PiTiVi over the past few months. I did mention the work I was doing was somewhat related to PiTiVi and that hopefully I’d be able to talk about it openly.

That day has now arrived.

GStreamer editing services

The “GStreamer Editing Services” is a library to simplify the creation of multimedia editing applications. Based on the GStreamer multimedia framework and the GNonLin set of plugins, its goals are to suit all types of editing-related applications.

The GStreamer Editing Services are cross-platform and work on most UNIX-like platforms as well as Windows. They are released under the GNU Library General Public License (GNU LGPL).

Why ?

Because writing audio/video-editors is a lot of work, and we should make it as easy as possible for people to write such applications while being able to leverage the power of GStreamer and not requiring a PhD in nuclear engineering.

The GStreamer Editing Services (GES) introduces 3 concepts:

GESTimeline : This is your central container corresponding to a Timeline; you can add Tracks and Layers to it. It is also a GStreamer Element, so you can use it in any GStreamer pipeline.

GESLayer : This corresponds to the user-visible part of the Timeline. This is where the user lays out the various LayerObjects (files, transitions) he wishes to use. The LayerObjects can be as simple or advanced as required (ex : a FileSource can have a ‘mute’ property, an ‘overlay’ property, a ‘rotate video’ property, …) and those objects will take care of properly filling up the Tracks. Applications can create their own subclasses of LayerObjects for their custom usage, implementing the logic of what TrackObjects to create in the background, and not have to worry about anything else.

GESTrack : This corresponds to the media part of the Timeline. An audio editor will only require one audio Track, a video editor will require one audio and one video Track, etc… These parts don’t have to be visible to the user… nor to the application developer.
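How the three concepts relate can be sketched as a toy Python model (this is NOT the real GES API, which is C/GObject-based; all names and structure here are only illustrative of the description above):

```python
# Toy model of the three GES concepts: a Timeline holds Layers (the
# user-visible objects) and Tracks (the media streams that the layer
# objects fill up behind the scenes). Illustrative only, not the GES API.

class Track:
    def __init__(self, media_type):
        self.media_type = media_type   # e.g. "audio" or "video"
        self.track_objects = []

class LayerObject:
    """E.g. a file source or a transition laid out by the user."""
    def __init__(self, name, media_types):
        self.name = name
        self.media_types = media_types

class Layer:
    def __init__(self):
        self.objects = []

class Timeline:
    def __init__(self):
        self.layers = []
        self.tracks = []

    def add_object(self, layer, obj):
        # The layer object takes care of filling up the matching tracks;
        # the application never has to touch the tracks directly.
        layer.objects.append(obj)
        for track in self.tracks:
            if track.media_type in obj.media_types:
                track.track_objects.append(obj.name)

timeline = Timeline()
timeline.tracks = [Track("audio"), Track("video")]   # a video-editor setup
layer = Layer()
timeline.layers.append(layer)
timeline.add_object(layer, LayerObject("clip1.ogv", {"audio", "video"}))
```

The point of the split: the application only manipulates Layers and LayerObjects, while the Tracks get populated automatically.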

Why another library in addition to GNonLin?

The answer to that is that GNonLin will remain a media-agnostic set of elements whose goals are to be able to easily use parts of streams (i.e. from GStreamer elements) and arrange them through time. While this makes GNonLin very flexible… it also means there is quite a bit of extra code to write before getting to the ‘video-editing’ concepts.

Can I write a slideshow/audio/video/cutter/<crack-editor-idea> with it ?

Short answer : yes. Longer version : yes, but you might have to write your own LayerObject subclasses if you have some really specific use-cases in mind.

Where can I find it ?

The git repository is located here. Documentation can be generated in docs/libs/ if you have gtk-doc, and you can find some minimalistic examples in tests/examples/. I will be gradually adding more documentation and examples.

GstDiscoverer, Profile System and EncodeBin

GES alone isn’t enough to end up with a functional editor. There are a couple of peripheral multimedia-related tasks that need to be done, and to solve that I’ve also been working on some other items. All of the following can be found in the gst-convenience repository.

GstDiscoverer

Those of you familiar with PiTiVi/Jokosher/Transmageddon/gst-python development might already be using the python variant of this code. The goal is to be able to get as much information (what’s the duration, what tags does it have, how many streams, of what type, using what codecs, …) from a given URI (file, network stream, …) as fast as possible. While the code already existed, it was only python-based. So I rewrote it in C, with several improvements over it. It can be used synchronously or asynchronously, and comes with a command-line test application in tests/examples/.
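The shape of such a synchronous/asynchronous interface can be sketched in Python (purely illustrative: the real discoverer is a C API with different names, and the returned values here are canned placeholders, not real media inspection):

```python
# Toy sketch of the sync/async interface shape described above.
# Illustrative only: the real code is a C API; a real implementation
# would preroll a GStreamer pipeline on the URI instead of returning
# canned data.

import threading

def discover(uri):
    """Synchronous variant: block until all the info is gathered."""
    return {
        "uri": uri,
        "duration": 120.0,                 # seconds (canned value)
        "streams": ["audio", "video"],     # stream types found
        "tags": {},                        # metadata tags
    }

def discover_async(uri, callback):
    """Asynchronous variant: return immediately, invoke callback when done."""
    def worker():
        callback(discover(uri))
    t = threading.Thread(target=worker)
    t.start()
    return t

results = []
t = discover_async("file:///tmp/clip.ogv", results.append)
t.join()   # results now holds one info dict
```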

EncodeBin

Creating encoding pipelines, despite what many people might think, is not a trivial business and requires constantly thinking about a lot of little details. In order to make this as smooth as possible, I have written a convenience element for encoding : encodebin. It only has one required property : a GstEncodingProfile. Once you have set that property, you can then add that element to your pipeline, connect the various streams you wish to encode, connect to your sink element… and put the pipeline to PLAY.

Encoding profiles are not a new idea, and there have been many discussions in the past on how to properly solve this problem. Instead of concentrating on how to best store the various profiles for encoding… I decided to tackle the beast the other way round and, after having sent an RFC to the GStreamer mailing-list and collected as many use-cases as possible, came up with a proposal for a C based encoding profile description (see gst-libs/gst/profile/gstprofile.h). I still have some more use-cases to test and a few extra things to implement in encodebin, but so far the current profile system seems to fit all scenarios.
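The profile idea, a container profile holding the stream profiles to encode into it, can be sketched as a toy Python model (names are illustrative, not the actual C API from gstprofile.h):

```python
# Toy sketch of the encoding-profile idea: a single profile object
# describes the container format and the streams to encode into it.
# Illustrative only; the real description is a C API.

from dataclasses import dataclass, field

@dataclass
class StreamProfile:
    media_type: str       # "audio" or "video"
    encoding_format: str  # caps-like string, e.g. "audio/x-vorbis"

@dataclass
class EncodingProfile:
    name: str
    container_format: str           # e.g. "application/ogg"
    streams: list = field(default_factory=list)

ogg_profile = EncodingProfile(
    name="ogg-theora-vorbis",
    container_format="application/ogg",
    streams=[
        StreamProfile("audio", "audio/x-vorbis"),
        StreamProfile("video", "video/x-theora"),
    ],
)

# An encodebin-like element would read this one property and build the
# matching encoder/muxer pipeline internally.
```

This is why a single required property is enough: everything the element needs to know about the output is in the profile.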

The remaining problem to solve… is to figure out how to store those encoding profiles in a persistent way for all applications to benefit from them.

Next steps

Using all of the above in PiTiVi. But also looking forward to seeing comments/feedback/requests from people who wish to use any of the above in their applications.

In addition to that, I will be at the Maemo Barcelona Long Weekend starting from Friday, where we will try to nail down the UI and code requirements for creating a video editor for Maemo. All of the above should make it much easier to do than anticipated.

(This was initially posted as a reply to a blogpost regarding PiTiVi being proposed as a default application in the upcoming Ubuntu Lucid. That comment was removed for an unknown reason, so I thought it best to put it here, and it would also be interesting for other people)

What a depressing post (in some aspects). I’ll answer the various questions/comments/rants all the same.

PiTiVi doesn’t support DV/mpeg4/whatever-format

Where did you get that idea from ? PiTiVi doesn’t come shipped with codecs; it relies on GStreamer to provide the needed plugins/decoders/etc… If you load a DV file in PiTiVi and you don’t have the plugins, the missing-plugin system should appear, proposing that you download the needed plugin.

Collabora pushed PiTiVi aggressively into Ubuntu

That’s 100% totally wrong. I personally had chats with Jono and Rick Spencer about having PiTiVi shipped as a default application, and Canonical was interested in the idea of having a video editor shipped by default. All of this was far from being enforced, or us (Collabora) going out of our way to have PiTiVi shipped by default. And nothing’s engraved in stone at this point. If we (the PiTiVi development team) get feedback/help on improving what’s bothering people by the Lucid release date, and people deem it good enough to be shipped by default, great ! If we get no help… well… PiTiVi won’t die, and people will still be able to use it via PPAs.

Why ship PiTiVi as default app and not another video editor ?

I’d say the main reason is that all the dependencies (except for goocanvas, which is pretty slim) are already shipped by default : GStreamer, GTK, python. All the other editors would require bringing in more dependencies. I’ll let Canonical/Ubuntu confirm that or not.

lack of features …

On this part we have always taken the stand of making sure existing features are as solid as possible before adding new ones. In terms of video editing, that means input/output format support needs to be rock solid, and trimming/cutting rock solid. Check out how many of the clips/movies/documentaries/… out there make use of video effects, for how long, and how many don’t.
The two features we find critically missing are : video transitions and overlaying. Yesterday I merged the videomixing branch to master, which enables setting transparency on every video stream (like Sony Vegas does). It still has some issues, but having it in master will force/speed up the bugfixing process.
Video effects are not a top priority. Getting those… without being able to do the features above is pointless. We won’t diverge from that point of view. Helping us get the above rock solid as fast as possible … will mean you will see video effects sooner.

To people throwing generic rants about <video-editor-name> sucking

Write a video editor (or any non-trivial multimedia application), then come back and rant about other people’s applications sucking. Then we might have a proper discussion. In the meantime… you’re not improving the situation.

PiTiVi is dead or no longer maintained

The 3 main developers (who also happen to be hired by Collabora, and that includes myself) have been working on other company work in the meantime. Having Collabora employ those 3 developers means ensuring they are paid to have time to work on it too. (I’d personally love to have people working 100% of the time on PiTiVi … but you need to take into account the reality of running a business.)
We’re progressively getting more company time for PiTiVi (Brandon has been back on it full time for the past few weeks, for example). It’s far from being abandoned/dead; we just do it at our own pace. It’s freely available (LGPL, no copyright attribution required) and will always stay that way. We always welcome contributions and are pretty fast to review/commit patches.

Drop in on #pitivi on irc.freenode.net or send us a mail at pitivi-pitivi@lists.sourceforge.net, and come give your feedback: what can be improved, what’s good and should be kept, and … who knows … be part of the PiTiVi team.