Saturday, December 14, 2013

I recently had some issues with iTunes and lost a few of the entries in its database. Maybe
some little bug in iTunes, I don't know exactly. Anyway, this led to some data loss: some
of my ratings were gone. I knew it would happen, it was only a matter of time, but it
made me think it was time to look for a possible solution: embedding the ratings into the
mp3 files themselves.

I wasn't able to find a good tool to do this, even though I'm sure some exist, but I
noticed it would be pretty simple to implement myself. So I wrote this little tool, which you might find useful as well:
https://github.com/carlonluca/ITunesEmbed.

The tool parses the library, extracts ratings and play counts, and embeds those values into
the popularimeter (POPM) frame of the ID3v2 tags of the mp3s in your collection.
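For reference, the ID3v2.3 POPM frame body is the email (null-terminated), a one-byte rating (0-255) and a big-endian play counter. A minimal sketch of building that body follows; the actual tool relies on taglib for the full tag handling, so this is only illustrative:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Sketch of the POPM (Popularimeter) frame body as defined by ID3v2.3:
// <email> 0x00 <rating byte> <32-bit big-endian play counter>.
// Frame headers, sizes and sync-safe integers are left to taglib.
std::vector<uint8_t> buildPopmBody(const std::string& email,
                                   uint8_t rating,
                                   uint32_t playCount) {
    std::vector<uint8_t> body(email.begin(), email.end());
    body.push_back(0x00);   // terminator for the email string
    body.push_back(rating); // 0 = unknown, 1-255 = rating
    for (int shift = 24; shift >= 0; shift -= 8)
        body.push_back((playCount >> shift) & 0xFF); // big-endian counter
    return body;
}
```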

How to Build

Building the tool is simple: you just need Qt 4 (Qt 5 should be fine as well; I don't think
many changes are needed), taglib and
LightLogger. You can install
taglib using macports, aptitude or whatever you prefer.

How to Use

The tool is a command line tool. Run the application without parameters and the help will be
shown. You'll need to export your library to XML using iTunes. Then provide the absolute
file path of the library XML and an email address to associate with the data (multiple
popularimeter frames can be embedded into an mp3). Beware that all the popularimeter frames
associated with the given email address will be removed.
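Note that iTunes stores ratings on a 0-100 scale (20 points per star) while POPM uses a single 0-255 byte, so some mapping is needed. A simple linear mapping could look like this sketch; the convention the tool actually uses may differ:

```cpp
#include <cstdint>

// Illustrative linear mapping from the iTunes 0-100 rating scale to the
// single POPM rating byte (0-255). Other conventions exist (e.g. fixed
// per-star values), so treat this as an assumption, not the tool's rule.
uint8_t itunesRatingToPopm(int itunesRating /* 0-100 */) {
    if (itunesRating <= 0) return 0;
    if (itunesRating >= 100) return 255;
    return static_cast<uint8_t>(itunesRating * 255 / 100);
}
```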

Of course, this is just a draft. So look at the code, check what it does, and back up your
entire library before doing anything. I don't want to be held responsible if you burn it :-)
Bye!

Thursday, September 12, 2013

I don't like to reinvent the wheel, and there are many logging libraries out there, but I
still couldn't find what I wanted: they were all either too complex or not quite right.
That is why I took my old macros and added some ideas from this good tutorial:
http://www.drdobbs.com/cpp/logging-in-c/201804215.

It supports colored logs: coloring is implemented using ANSI escape sequences on
Linux and MacOS. On Windows I disabled it, but you can enable it if you're using something
like cygwin. For iOS I use XcodeColors, a great project:
https://github.com/robbiehanson/XcodeColors.
A specific implementation is therefore reserved for that platform.
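As a rough illustration of the ANSI approach (the escape sequences are standard; the function itself is illustrative, not LightLogger's actual API):

```cpp
#include <string>

// Wraps text in standard ANSI color escape sequences, as done on
// Linux/macOS. On Windows the text is returned unchanged, mirroring the
// "disabled by default" behavior described in the post.
enum class LogColor { Red = 31, Green = 32, Yellow = 33 };

std::string colorize(const std::string& text, LogColor c) {
#if defined(_WIN32)
    return text;  // coloring disabled on Windows by default
#else
    return "\033[" + std::to_string(static_cast<int>(c)) + "m"
         + text + "\033[0m";  // reset attributes after the text
#endif
}
```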

It is possible to change the "sink" of the logs by implementing a new output.

I preferred printf-style formatting of logs to the stream implementation, but I tried
to provide both anyway. The use of a "null sink" when logs are disabled should,
together with compiler optimizations, keep the overhead minimal.

Each log is flushed to avoid issues related to buffering. This might increase the
overhead, but it is simple to remove.

On Windows/Linux/iOS a stack trace function is also available to show
the current call stack.

It should be entirely thread-safe.

Each log can be associated with a tag; I commonly use this with grep to filter
logs by module.

There are still things I don't like about this approach, like it being impossible to use
in C sources, needing Objective-C++ instead of plain Objective-C, and so on. Still, someone
may find it useful, so I uploaded it to github:
https://github.com/carlonluca/LightLogger.

How to Use

Just include it in your sources and you're done. The most useful functions are the level-based ones:
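The level-based family presumably looks something like this sketch; log_info appears later in the post, but the signature and body here are assumed, not the actual LightLogger API:

```cpp
#include <cstdarg>
#include <cstdio>

// Hypothetical printf-style, level-based wrapper. The real library pairs
// this with log_warn, log_err, log_debug, etc., and routes output per
// platform; this sketch just prints to stdout.
bool log_info(const char* format, ...) {
    va_list args;
    va_start(args, format);
    std::vprintf(format, args);
    std::printf("\n");
    va_end(args);
    std::fflush(stdout);  // each log is flushed, as described in the post
    return true;
}
```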

These functions work differently according to the platform: on Android they send INFO logs to
logcat, on iOS they print colored text to the Xcode console (so the XcodeColors plugin is needed if
you keep colors enabled), on Windows they simply print text, and on Mac OS/Linux they print
ANSI-colored text to the shell. I use the return type to do something like:
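Assuming the error-level function returns false so the call can be propagated directly, the idiom could look like this sketch; names and signatures are illustrative, not the actual LightLogger API:

```cpp
#include <cstdio>

// Hypothetical sketch: an error-level function that logs and returns
// false, so callers can log and bail out in a single statement.
bool log_err_sketch(const char* msg) {
    std::fprintf(stderr, "E: %s\n", msg);
    return false;  // lets the caller write: return log_err_sketch("...");
}

bool openDevice(bool available) {
    if (!available)
        return log_err_sketch("device not available"); // logs, returns false
    return true;
}
```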

There are a few macros I use to configure it per project: COLORING_ENABLED enables colors,
BUILD_LOG_LEVEL_* sets the logging verbosity and XCODE_COLORING_ENABLED enables/disables
XcodeColors support.

On github you'll find Qt, iOS/Xcode and Android sample projects.

How it Works

Pretty simple: wrapper functions like log_info(...) use the template class LC_Log to
print the string. LC_Log delegates the actual log call to a class of type T, which can
be implemented according to your needs. The delegates I have currently implemented are:

LC_Output2Std: outputs to standard output, adding ANSI escape codes.

LC_Output2FILE: writes the logs to a file (no escape codes here).

LC_OutputAndroid: implements logging to logcat.

LC_Output2XCodeColors: implements logging using XcodeColors format.
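A minimal sketch of this delegate pattern, with simplified illustrative bodies (the real LC_Log also handles levels, tags and printf-style formatting):

```cpp
#include <iostream>
#include <sstream>
#include <string>

// Policy classes playing the role of the delegates listed above.
struct OutputStd {   // like LC_Output2Std, minus the ANSI codes
    static void write(const std::string& msg) { std::cout << msg << std::endl; }
};

struct OutputNull {  // the "null sink" used when logging is disabled
    static void write(const std::string&) {}
};

struct OutputCapture {  // extra delegate used here to observe the output
    static std::string last;
    static void write(const std::string& msg) { last = msg; }
};
std::string OutputCapture::last;

// Sketch of LC_Log<T>: accumulate the message in a stream and hand it to
// the delegate T when the logger object is destroyed.
template <typename T>
class LC_LogSketch {
public:
    std::ostringstream& stream() { return m_stream; }
    ~LC_LogSketch() { T::write(m_stream.str()); }  // deliver on destruction
private:
    std::ostringstream m_stream;
};
```

Usage is then a one-liner such as `LC_LogSketch<OutputStd>().stream() << "hello";`, and swapping the template argument swaps the sink without touching call sites.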

Hope you can find this useful. If you improve this, share your work! If you find issues,
report them!
Bye! ;-)

Thursday, August 15, 2013

In the github project for PiOmxTextures I recently uploaded a sample player (tools/POCPlayer) that can
be used for debugging and performance testing. It is meant to show the performance of
the QtMultimedia backend and of Qt itself on the Pi. It is entirely Qt/QML based, with the
GUI written in QML/Javascript. It shows the common controls of a media player and a service for metadata extraction. This is a demo of the current state:

How to Build

This is simple: it is a standard Qt application, with C++/Qt and QML code. A standard
build of Qt version >= 5.1 should be sufficient. There is no interaction whatsoever with
the plugin at build time.

Limitations

The current version can handle images and videos. It performs pretty well in 1080p with
720p videos, but 1080p videos are quite difficult to handle when other animations are
running concurrently. Animations not involving video perform really well.
What you see in the video is running on the eglfs plugin: running on more interesting
layers like Wayland would require some changes to how EGL images are handled in the QtMultimedia
plugin and in PiOmxTextures. Unfortunately, the QtWayland module does not seem to be working
perfectly at the moment, so considerable work might be needed.

Thursday, August 8, 2013

Please take into consideration that this image is based on an old firmware and an old PiOmxTextures version.

As some still have trouble building PiOmxTextures, I built a new image based on
the Raspbian wheezy image released on the 26th of July (http://www.raspberrypi.org/downloads). This
version of the image also contains Qt version 5.1.2 (35fe76d9c343f98ec55c7cacd8c402e90d91ef38)
and an OpenMAX-based Qt Multimedia plugin (PiOmxTextures). The available Qt modules are:

qtbase

qtscript

qtxmlpatterns

qtjsbackend

qtdeclarative

qtsensors

qtquick1

qtgraphicaleffects

qtwebkit

qtwayland

qtsvg

qtserialport

qttools

qttranslations

qtquickcontrols

qtmultimedia

qtimageformats

The mediaservice plugin installed is the openmaxil plugin
(available here: https://github.com/carlonluca/pi). It is based on ffmpeg version 2.0,
which is statically linked into the PiOmxTextures library available in /usr/lib.
Note that the wayland module and the wayland and xkbcommon libs are included in the image, but I
never tested them. Also, PiOmxTextures won't work with wayland, only with the eglfs
platform plugin.

How to Test

Of course all this is experimental; if you seriously intend to use it in production I
suggest you start working on the sources in https://github.com/carlonluca/pi. I placed a
build of the POCPlayer (sources are available in the same repository under tools/POCPlayer)
in the home directory of the pi user. You can run it and see how it works:

./POCPlayer [video_file]

By pressing the "l" key, you can see the available options. The POC player is experimental
as well and is entirely implemented in QML.
You can help improve and fix both the PiOmxTextures lib and the openmaxil plugin by simply replacing
libPiOmxTextures in /usr/lib and/or libopenmaxilmediaplayer.so in /usr/local/Qt-rasp-5.1.2/plugins/mediaservice.

Sunday, May 26, 2013

There are some pretty common libraries that come in very handy when developing for common
embedded systems like Android and iOS. One of these is the
ICU library, which is useful when you need
to work on code page conversion, collation, transliteration, etc.

ICU is available as a system library on some platforms, but it is quite large (libicudata
alone is more than 23MB). It is possible to reduce the size considerably by
reading this, and rebuilding.

ICU is perfectly portable to Linux, MacOS, Android, iOS, embedded Linux, etc. The process
of cross-building is quite simple, but it still took me a couple of hours to build for
Linux, MacOS, iOS device and simulator, and Android. So I found some scripts around and
fixed them a bit (maybe the originals were a little outdated). I therefore write here a
couple of notes on how to do it quickly (tested on ICU 51.1).

Download the sources

You might now want to modify uconfig.h or the data to avoid including data that is useless
for your application: http://userguide.icu-project.org/icudata.

Build for Android

Cross-building ICU requires building it first for the host system where the cross-build runs,
then for the target system. So, if we're building for Android from Linux, let's first
build ICU for Linux:

cd $MY_DIR
mkdir build_icu_linux
cd build_icu_linux

and use this script to build from there (change the variables and build options
according to your needs):

In the icu_build directory you should have all you need to build your new application.
Note that I didn't use the NDK directly here, but the standalone toolchain produced by the
make-standalone-toolchain.sh script in the NDK.
Also note that part of ICU is already in /system/lib on some Android devices, but I don't
think there is any guarantee that it will be on every device (it is not part of the standard
Android interface), and I don't know exactly what is included in that build.

Building for iOS

The same approach can be applied to cross-build for iOS: in this case I built first for
MacOS, and then for the iOS device and simulator.

Post a comment if you find something wrong!
Of course the same approach might work similarly for other embedded devices with a
proper toolchain :-)
Not too difficult, but still might speed up your work! ;-)

In some previous posts I developed a custom QML component that renders video in a QML scene using
hardware-accelerated decoding, rendering without passing through the ARM side. This resulted in
good performance on the Raspberry Pi even with 1080p high profile h264 videos.

Many bugs need to be fixed and the code should be refactored a
little, but it shows that this is possible and that it works well. So I decided to take the next step:
modifying Qt to make it possible to use the "standard"
QtMultimedia module
to access the same decoding/rendering implementation. This would allow better integration with Qt and
let users recompile without changing anything in their implementation.

The QtMultimedia module uses gstreamer on Linux to provide multimedia capabilities; unfortunately
gstreamer is not hardware accelerated on the Pi unless you use something like gst-omx.

Thus, I started to look at the QtMultimedia module sources in Qt5 and found out (as I was hoping) that the Qt guys
have done, as usual, a very good job designing the concept, providing the classic plugin structure for
multimedia backends as well. Unfortunately, also as usual, not much documentation is provided on how to implement a new backend, but
it is not that difficult to figure out by looking at the other implementations.

Design

In the end, I came up with a structure like this: I implemented a new QtMultimedia backend providing minimal
MediaPlayer
and
VideoOutput
functionality, leveraging a "library version" of the PiOmxTextures sample code,
which in turn uses a "bundled" version of omxplayer that uses the OpenMAX texture render component as
a sink for the video.

As said, the Qt guys have done a good job! I had to change almost nothing in the Qt implementation; everything
is inside the plugin (apart from a minimal modification to the texture mapping: for some reason it was upside-down and inverted).

Results

The result is pretty good: I don't see many differences from the previous custom QML component (the decoding and rendering code is the
same and the QML component is implemented using the exact same principle, so nothing has really changed).
I'm only beginning to play with this; I've just tried a couple of things. In the video you can see the "standard" qmlvideo
and qmlvideofx examples provided with the Qt sources.

How to Build

Clone the repo somewhere, then use the prepare_openmaxil_backend.sh script in tools. It
will compile PiOmxTextures as a shared lib and place everything you need in the openmaxil_backend directory. Copy that directory
recursively into your Qt source tree in qt_source_tree/qtmultimedia/src/plugins, naming it simply openmaxil.
Some changes are needed in the Qt tree to make it compile the new backend automatically instead of the gstreamer backend, to fix the
texture mapping, and to make the "standard" qmlvideo and qmlvideofx examples work. No real modification to the code is needed: it is sufficient
to instantiate the QQuickView those examples use with a specific class definition, and to provide the plugin with the instance of the QQuickWindow containing the media player. These changes can be applied to the qtmultimedia tree using the patch in the tools directory in git. Then build the qtmultimedia module with:

path_to_qmake/qmake "CONFIG+=raspberry"

You'll find all you need here: https://github.com/carlonluca/pi.

How to Use

After you have built the plugin, you can simply use the "standard" Qt API for MediaPlayer and VideoOutput. The only restriction is that
the plugin needs access to a QQuickView in order to reach the renderer thread of the Qt Scene Graph. This might be an issue, but I've not
found another solution yet.
What you have to do is simply provide your application with the QQuickView by using this exact class, which must be included in your
application:
class RPiQuickView
{
public:
    static QQuickView* getSingleInstance();
};

Q_DECL_EXPORT QQuickView* RPiQuickView::getSingleInstance() {
    static QQuickView instance;
    return &instance;
}
This is needed because the plugin will look for the RPiQuickView::getSingleInstance() symbol, which should be found after the
dynamic linker has linked the plugin to the executable. You'll also need to add -rdynamic to the LFLAGS of your application, to
ensure the linker adds the symbol to the symbol table. This is what I added to the qmlvideo and qmlvideofx examples to make them work.
It is of course not elegant, but I couldn't find a better way in a reasonable time.
Of course, you'll have to copy the Qt libraries you built to your Pi, together with libPiOmxTextures.so (unless you built it
statically) and the ffmpeg libraries (do not use the ffmpeg libs already on your Pi, as those likely won't work; use the ones
compiled by the compile_ffmpeg.sh script in tools).

What Remains to Be Done

Most of the calls are not implemented, just the minimum needed to get video on the screen. The audio implementation is also still
missing (though the OMX_MediaProcessor class should be ready to play audio as well), and only the QtQuick side has been considered:
I've never had the time to look at the widget implementation.
If you find bugs, report an issue on github. If I find the time, I'll answer.

Edit 6.25.2013

Instantiation of the QQuickView using the RPiQuickView class is no longer needed as of 30e24106c5dd7a5998d49d7093baef49f332b1d2. I tested this revision with Qt 5.1.1 and everything seems to keep working correctly.

Friday, April 19, 2013

Ok, I did this a couple of months ago, but I just realized it might be of help to someone who is currently
using Necessitas Qt4 for some project and cannot yet use Qt5.
This is sample code showing how to create a custom QML component in the Qt4 Necessitas port that
uses hardware acceleration on any Android device with API level 11 or higher. The result is pretty good; you
can check the demo I uploaded to youtube a couple of months ago (the third application shown is the one
implemented over Qt 4):

The description says it all: "The third sample code uses a custom QML component written in C++ using a Qt 4.8.2 port for Android
(Necessitas). Regular QML animations are then applied to the custom component. The semi-transparent image
is a regular QML Image element with alpha set to 0.5."

How it Works

As you can see from the code, a custom QML component is created and placed in the QML scene. That component
instantiates some Java classes through JNI glue code and uses the standard Android MediaPlayer to start
decoding video and playing audio. The sink is set to a
SurfaceTexture instance,
which provides the OpenGL texture that the custom QML component renders in the QML scene. The result is pretty
good.

Sunday, March 17, 2013

According to what I read around, it should be possible to install some build of Qt 5 using
apt on the Raspberry Pi by adding the Qt repo: read here.

However, some asked for a version of Qt compiled for the Pi, so if that is not an option for you for some reason,
this is an entire image with many packages for which Qt has support; it includes wayland and QtMultimedia (which
works out of the box).
This is the configuration of the build:

Setup Qt Creator to Cross-Compile

You should now be able to set up Qt Creator to cross-compile. Open Qt Creator and go to Window -> Options
-> Build & Run. Open the "Compilers" tab and add the new cross-compiler.
Set up the Qt version by selecting the qmake binary from /usr/local/Qt-rasp-5.0.2/bin/qmake.
Set up a new kit by selecting both the compiler and the Qt version you just added.
You should be done.

Download the Package

The package is large since it contains the entire image, including all the needed packages.
Try to download from here (sorry, I experienced too much traffic so I had to remove it, you can try with this torrent or this ftp, but both will take long; please let me know if you experience troubles).
The user id is pi and the password is rasp for both the pi user and root.

Edit 04.16.2013

I see that many are still trying to download and the server is permanently at full upload speed, I suppose it is better to start using p2p: use this ed2k link to download. I'm sharing this with priority. If you're already downloading via FTP, then please complete the download as quickly as possible; I'll remove the file in a couple of days. In case you experienced issues, please leave a message.

Sunday, March 3, 2013

Ok, I've been meaning to do this for quite some time but never had the time to actually do it. I tried it quickly twice,
but without success because of many issues. Now I have invested some hours and made it to the end of the journey :-)
I'll therefore try to describe here the steps to build Qt 5.0.1 (the current version in the Qt git) on the new wheezy image with Wayland support.

Building the Qt Fundamental Modules

Of course the procedure is almost identical to the one used for Qt 5.0 that I described
here. I only did a couple of things to speed up the process; you can choose how to do it. I briefly describe some of the steps here.

libdbus-1-dev is needed to get the QtDBus module compiled from qtbase, libudev-dev provides udev support,
libssl-dev OpenSSL support, and libasound2-dev gives Qt what it needs for ALSA support.
The GStreamer libs are mainly used by the qtmultimedia and qtwebkit modules. If the environment is set up
correctly for gstreamer support, the configure script will report success.
libffi-dev and libpixman-1-dev are needed to compile the qtwayland module and its dependencies. libsqlite3-dev,
libicu-dev and libfontconfig1-dev are needed only if you intend to use QtWebKit.

Instead of loopback-mounting the image on your system to get a correct sysroot, I quickly scp'ed the needed
binaries from my board to a newly created sysroot. In particular I copied:

/lib

/usr/lib

/usr/include

/opt

I'll refer to the directory containing all of this as rasp_sysroot. Quick and dirty; you might also consider
using rsync, though.
As a final note, scp has the somewhat pleasant side effect of following the symlinks in libs.

During compilation I got an error indicating that the header "vchost_config.h" could not be found. I solved it
by editing the file rasp_sysroot/opt/vc/include/interface/vmcs_host/vcgencmd.h:
33c33

< #include "vchost_config.h"
---
> #include "linux/vchost_config.h"

This is maybe not very elegant, but it is sufficient. You could add an include path in qmake.conf or similar, but
it seemed fine that way :-)

At this point qtbase should have compiled successfully. Now you should compile at least the qtscript, qtjsbackend
and qtdeclarative modules:

Copy the resulting libs and headers to your_sysroot_path.
Before compiling the wayland library, the wayland scanner
is needed to generate C code from the Wayland protocols. To compile it, open a new environment for standard host
compilation, build wayland-scanner and place it in the PATH:

At this point I have to say I had issues when running the qmake binary. Unfortunately I couldn't track
down all the reasons, but it seems libxkbcommon couldn't be found. According to the .pro file, the qtCompileTest
function is used to check whether config.test/xkbcommon can be built. Apparently, the inclusion of X11/keysym.h
couldn't be satisfied, even though it shouldn't be needed... and although that file could be compiled, the
qtwayland.pro check still failed, so I simply removed the checks for xkbcommon and the rest of the build
procedure succeeded.

At this point the server should be running. Now open another shell and try to run any Qt application using the wayland-brcm
platform plugin:

$ cd your_app_path
$ ./your_app_bin -platform wayland-brcm &

Now you should see the window on the screen.
It is possible, however, that some EGL/OpenGL error occurs, like "eglCreatePixmapSurface failed: 3003, global image id: 0 0";
in that case consider increasing the memory reserved for the GPU, as that is a bad allocation error. Simply add gpu_mem=n,
where n is the number of MBs to assign to the GPU, to the /boot/config.txt file. Read here for more information:
http://elinux.org/RPiconfig.

Building QtWebKit

For more details refer to this.
It seems the Qt guys have done a good job on QtWebKit. Making it work simply requires building and running: compile the
qtwebkit module as described, then copy the libraries back to the device and load a WebView element.
The only thing that still seems to be missing is 16-bit color depth support: if you try to run you might see a mess on the screen,
because QtWebProcess is writing a 24-bit image in 16-bit mode. More details on this
here.
Anyway, it now seems sufficient to set the framebuffer to 24 bits to make it work:

$ fbset -depth 24

No need to modify the eglfs plugin anymore. The EGL configuration seems to correctly reflect the framebuffer color depth.

Building QtMultimedia

QtMultimedia is the module responsible for handling multimedia content. On Linux it is based on gstreamer, which is available,
as already said, for the Raspberry Pi. However, gstreamer relies on plugins to decode/render multimedia content, and most of those
are clearly not hardware accelerated, which makes it nearly useless for video playback on an embedded platform.

There is, however, a plugin that is supposed to use the RPi's accelerated OpenMAX libraries: gst-omx. I have to say I have never seen
it work well, so I'm not sure whether it works on the Pi. It is an interesting subject, but I don't know if or when I'll
get my hands on it.
I did try the QtMultimedia module a couple of times, and I could play a couple of videos, but the result was clearly unusable. Something like this should play the video (no audio):

Saturday, February 23, 2013

As requested, I shared the sources of the demo videos I posted recently. I tested these
components with a few videos, and they seem to work "reasonably well" even for
1080p h264 high profile with 5.1 audio. The current implementation uses a player class that decodes the data and a surface class that renders it. Rendering the video on multiple surfaces seems to work.

Beware that the code is not complete; it is only a proof of concept of how to implement this.
If you need to use it in production code, you'll have to work on it quite a bit. There are
many TODOs left and no testing has been run on the classes. The cleanup code must be completely
rewritten, and only the pause/resume/stop commands are implemented at the moment. Also consider going through the relevant code looking for leaks; I didn't pay much attention while implementing because I intended to refactor, sorry.

Only 1080p resolution is currently supported; I never even tried anything different, so you'll
probably have to look around and see where I hardcoded those values (I was in a hurry :-)).
There are many unused classes in the code; I left them there only because they might be
useful for new implementations.

I started working on other things recently, so I really have little time to work on this. Still,
I see that many are interested, so I decided that incomplete code is better than no code.
Also, I have to say I have no practical need for these components; I only worked on this as a challenge in my spare time. Now that there is no challenge anymore, I have lost some interest and am looking for a new one :-D

This is the github URL of the repo (PiOmxTextures is the project directory):

The current implementation of the OMX_MediaProcessor class uses the components implemented
in the omxplayer code, with modifications
to some of them. The modified sources are placed in the omxplayer_lib directory in the
project sources: I chose this structure to make it relatively simple to merge changes
from the omxplayer sources.

How to build

To build the project, you'll need a build of the Qt libraries, version 5.0.0 at least.
Instructions on how to build can be found around the web. I also wrote a quick
article on
that if you need it (this is the updated version for 5.0.1).

Once you have your Qt build and Qt Creator set up, you can open the .pro file. You should also
have the Raspberry sysroot somewhere on your system, and Qt Creator should know about it.
The project has the same dependencies as omxplayer, so you need those as well. I tested this only against a specific build of ffmpeg, the one omxplayer was using when I last merged: to compile it you can use this script, which is included in the source tree. Just running it with the number of compilation threads to use should be sufficient:

Note that the sample application renders a file whose path is hardcoded in the
qml file. Change that if you want to test the sample code.

As said, this is just a proof of concept and I do not have much time to maintain it. If
you want to help fix the code and merge new changes, issue a merge request!
I hope this code can be of help to other open source projects! Feel free to leave a comment if you have any! Bye!

Sunday, February 10, 2013

I was recently asked to do a little research on how to implement animations on a video surface in
Android, somewhat similarly to what I did in the previous posts on the RPi. It seemed interesting,
so I did some investigation.

After reading something in Android's documentation, I took a random low-cost Android 4.0 tablet and
started analyzing the problem.

The first thing I tried was creating a simple VideoView and applying some regular Android animations
to it. I tried a TranslateAnimation and a ScaleAnimation, but the video geometry didn't change; only
a black square representing the view was animated. This seems to be more or less similar to this.

I also tried the 3.1 animation system; this time the video actually moved, but it left a trace behind. Both
these defects might be related to how video rendering is performed at the lower levels, so this might not
happen on other boards.

The only other thing I tried before digging into the OpenGL world was to actually "create" an animation by changing the
layout parameters applied to the VideoView. By interpolating the values like a damped harmonic oscillator I got the result
shown in the video. Implementing it more accurately might give much better results.
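As a sketch in C++ (the demo itself is Android/Java), such a damped-harmonic interpolation of a layout value could look like this; the frequency and damping constants are illustrative, not taken from the actual demo:

```cpp
#include <cmath>

// Underdamped oscillator response: the value overshoots the target and
// settles. t is the time since the animation started, in seconds.
double dampedValue(double t, double from, double to) {
    const double omega = 8.0;   // angular frequency (assumed)
    const double zeta  = 0.35;  // damping ratio, < 1 so it oscillates (assumed)
    const double decay  = std::exp(-zeta * omega * t);
    const double omegaD = omega * std::sqrt(1.0 - zeta * zeta); // damped frequency
    return to + (from - to) * decay * std::cos(omegaD * t);
}
```

Evaluating this each frame and writing the result into the view's layout parameters reproduces the bouncy settling effect.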

I therefore started to look at something different: starting from API level 11, SurfaceTexture might be the solution to all
these needs. By using this class as a surface for the MediaPlayer, it is possible to stream the video to an OpenGL texture. This seems
to work pretty well (see the video) and is not difficult to implement if you know OpenGL.

Anyway, for simple tasks OpenGL might be overkill, so I tried to find some Android class that could let me render the
texture without building the entire application in OpenGL. I have not found one yet (if you do, please add a comment!), but I started to think that, once
again, Qt might be the answer :-)

The third sample application you see in the video is a custom QML item created by rendering, in the Qt OpenGL context, the texture
provided by the SurfaceTexture class controlled via JNI. The result is very good. The QML code I used is exactly the same as
in the previous posts for the RPi sample application. The Qt port for Android I used is the old Necessitas alpha 4.

EDIT: If API level 14+ is available, then it is possible to render the texture provided by the SurfaceTexture in a TextureView (thanks Tim for pointing this out!): this is the fourth sample in the video.

Sunday, January 20, 2013

While working on completing the custom
QML component for rendering video on the
Raspberry Pi, I asked myself: does it really make sense to always reinvent the
wheel? And shouldn't the key point of open source be the sharing of code,
ideas and work?
I answered myself no, and then yes :-)
So why go on reimplementing demuxing, audio decoding, audio rendering, subtitles,
etc.? The Raspberry Pi is already a target of the https://github.com/huceke/omxplayer implementation. So I completely got rid of my implementation and
started leveraging an adaptation of the omxplayer code, turning it into a library to use in a
similar QML component. In a few hours, this was the result:

Unfortunately, I still have no code that I can share, and I'm still experiencing some
"interruptions" during rendering which do not seem to appear in omxplayer, but
if anyone wants to take this road, the result seems really encouraging, as
you can see! You can start from the code I posted here if you want.