This summer I participated in a programming internship at the Audiovisual Technology Center (CeTA) in Wrocław. CeTA is developing a number of very exciting projects, and the one I had the pleasure to work on is AlgAudio.

(download links available below)

AlgAudio is a new signal processing framework that we’ve been developing from scratch. The user builds an audio processing network by placing “building blocks” of simple operations, connecting them together, configuring their parameters, and defining how the parameters should influence each other. The network works in real time, so any changes to the parameters are immediately reflected in the output audio. This makes AlgAudio a perfect tool for live performances.

The general concept was inspired by Max/MSP, but AlgAudio is intended to provide a higher-level interface. It is meant to be used by musicians, so we do our best to make it easy to work with without any programming or mathematical skills. The building blocks usually represent more complex operations (compared to Max or PureData), and expose a number of parameters that can be configured manually, or controlled live via an external controller.

At this stage, AlgAudio is ready to be tried out. Most core features are already implemented, so you can actually build really interesting synthesizers. However, there is still a lot to be done. Most importantly, the module collections need expansion (currently only very basic modules are available – for example, there are no audio filtering blocks at the moment), the module browser needs a better hierarchical structure, a number of parameter-connecting modes are missing, the UI needs various improvements, subpatching and polyphonic features need work, and we need a test framework to ensure top quality.

We also believe that creating external modules should be very simple, so that third-parties can provide their own module collections. The API is not quite stable yet, but we’re getting there. This is what an example module looks like:

But what if you would like to inject custom features? AlgAudio has a built-in plugin system. Each module may come with custom logic. The C++ interface allows interaction with literally any other AlgAudio component (you might even get as far as to, say, get your module to modify how other modules are displayed!). Below is the source of a simple module that sums the values of two parameters, and outputs the result as a third parameter.

We are releasing AlgAudio under the terms of the GNU Lesser General Public License, so you’ll be free to use it however you like. The AlgAudio source code is hosted on GitHub.

At this point, development of AlgAudio will slow down significantly, as we are looking for funding and contributors. We would be grateful if you could let us know about your interest in AlgAudio (contact either me or CeTA).

Download

AlgAudio will be getting an official website soon; until then, you can download binaries for Linux and Windows from GitHub (OS X support is planned). Downloads for 1.99.1

Please keep in mind that this is not a stable release, and we cannot guarantee that AlgAudio won’t crash sometimes. If you find any bugs, please report them here.

Prevent full-screen games from minimizing when switching workspaces

(11 Aug 2015)

When I play games on my Ubuntu desktop, I like to switch workspaces a lot. For example, while waiting for a respawn I will quickly switch to a second workspace to select a different music track, or to write a quick reply on IM. What I find very inconvenient is that a lot of games, by default, will minimize when I switch workspaces. Because of that, it takes me more time to return to the game – a workspace-switch shortcut, and then alt+tab.

It turns out that this is an SDL feature, so all games built with SDL will behave this way. However, there is an easy, little-known way to disable it. Simply set the following environment variable:

export SDL_VIDEO_MINIMIZE_ON_FOCUS_LOSS=0

before starting your game. Or, if you dislike this feature as much as I do, you may want to set that variable in your .profile file, or maybe even /etc/environment.

Enjoy flawless workspace switching when gaming!

Multi-OS gaming w/o dual-booting: Excellent graphics performance in a VM with VGA passthrough

(15 Aug 2014)

Note: This article is a technology/technique outline, not a detailed guide or a how-to. It explains what VGA passthrough is, why you might be interested in it, and where to start.

Even with the current abundance of Linux-native games (both indies and AAAs), and with WINE reliably running almost any not-so-new software, many gamers who use Linux on a daily basis tend to switch to Windows for playing games. Regardless of one’s attitude towards non-free software, it has to be admitted that if you wish to try out some of the newest titles, you have no choice but to run them on a Windows installation. This is why so many gamers dual-boot: with two operating systems installed on the same machine, they use Windows for playing games and Linux for virtually everything else, limiting their use of Microsoft’s OS to gaming only. This popular technique seems handy – you get the luxury of using Linux, and the gaming performance of Windows.

But dual-booting is annoying, because you need to reboot to switch contexts. Need to IM your friend while playing? Save your game, shut down Windows, reboot to Linux, launch the IM client, reboot to Windows, load your game. Switching takes a long time and is inconvenient, so the player may feel discouraged from doing it at all.

What if you could run both operating systems at once? That’s nothing new: run a virtual machine on your Linux host, install Windows within it, and voilà! But a plain virtual machine is no good for gaming – the performance will be terrible. Playing chess might work, but any 3D graphics won’t do, because of the lack of hardware acceleration: the VM emulates a simple graphics adapter and displays its output in a window of the host OS.

And that is where VGA passthrough comes in, and solves this issue.

1. The idea

The key to getting decent graphics in a VM is to grant the virtual machine full access to your graphics card. This means that your host OS will not touch this piece of hardware at all, and the guest OS will be able to use it like any other (emulated) hardware. The guest OS (presumably Windows) will load its own drivers for the graphics adapter, and will communicate with it natively! Therefore it will have full access to hardware acceleration and any other goodies that gear might provide (e.g. HDMI audio). The idea of passing a VGA adapter to a virtual machine is usually called VGA passthrough.

Sounds crazy? Let me tease you: my setup is capable of smoothly running Watch_Dogs and Tomb Raider (2013) on Ultra settings at 60+ FPS within that virtual machine, using an NVIDIA GTX 770. And I get the luxury of running both OSes at once – so I can switch between them in an instant, without shutting down either one! This is astonishingly convenient.

Because the dedicated graphics hardware will be reserved for the guest system, the host will need another graphics adapter to display anything. So there comes the first hardware requirement: you need at least two graphics adapters. This is not uncommon, however – many new Intel processors have built-in graphics – and if you are a gamer, chances are you have invested in a dedicated graphics card – so that makes two graphics adapters already. Let the host system use the integrated graphics, and the guest will get the powerful dedicated graphics for games. Because both graphics adapters work independently and there is no way to compose their video output¹, you will need two separate displays, one for each system. This means either a set of two monitors, or a monitor with two video inputs (so that you can switch between them). You might also experiment with a KVM switch.

Also keep in mind that this is not an easy thing to set up. While some claim to have succeeded on their first try, many others have struggled a lot. Personally, I spent about two weeks tuning things up to get my VGA passthrough running – and if we count hardware research and preparations, it took me two months. But it was completely worth it! My current setup consists of:

Intel i7-4790K (4 x 2 x 4.0GHz)

ASRock Z97 Extreme6

NVIDIA GTX 770 4GB

and some 16 gigs of RAM

also, a monitor with multiple video inputs (I switch video source using buttons on the monitor)

Ubuntu 14.04

As I have mentioned, this set is capable of running very demanding games at maxed-out settings with amazing results. How does it work in practice? It feels as if I were running both systems at once. For example, while I play a game under Windows, my Linux has an IM client running. Because I mix the sound from both systems, I can hear the notification when I get a message. So I pause the game, switch the monitor’s video source with a hotkey, respond to the message, and switch the video back. If I had two monitors, I would play on one of them with the host system using the other – so I wouldn’t even need to touch the monitor to switch OSes, I would just need to rotate my head a little bit ;-)

Getting here was a lot of work, but a lot of fun too! The first step is to meet the…

2. Hardware requirements

Yeah – not every machine will be able to do this trick. As already mentioned, you need two graphics adapters. However, it is not possible to pass through the graphics integrated in your CPU! This is because passthrough works by separating a PCI device from the host system and attaching it to the guest OS; therefore you can only pass through dedicated graphics hardware. Not much of a problem in practice, but an important note.

You also need to ensure that your CPU and mainboard support an IOMMU – the I/O virtualization extensions which are necessary for passing through a PCI device. Intel calls their IOMMU technology “VT-d”, while AMD refers to it as “AMD-Vi”. This is an absolute must, so if you are buying new hardware, make sure both your processor and the chipset support an IOMMU²!
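As a quick diagnostic (a common check, not from the original article): on Linux, the kernel log mentions DMAR (Intel) or AMD-Vi when the IOMMU is present and enabled in the firmware.

```shell
# Look for IOMMU-related kernel messages; an empty result usually means
# VT-d/AMD-Vi is absent, or disabled in the BIOS/UEFI, or the kernel was
# booted without intel_iommu=on / amd_iommu=on.
dmesg | grep -i -e dmar -e iommu
```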

Also, if you plan to use a CPU integrated graphics adapter for the host system, make sure that the mainboard supports it, and that it has a video output!

You will get the best results if you use a multi-core CPU. Demanding games require not only powerful graphics hardware, but a decent CPU as well! It is possible to reserve some of the CPU’s cores for the VM – this way you can ensure that the guest OS is granted enough computational power. For example, in my setup the host OS uses 2 cores, while the other 6 are at Windows’ disposal.

Also, as explained, you need a monitor with several inputs, or a set of two monitors. I am not aware of any way to get this working on a laptop, as most laptops have just one display, and you cannot manually switch between its video sources¹.

So the full list of requirements is:

IOMMU compatible CPU and mainboard

A dedicated PCI graphics adapter (for passing through)

Graphics hardware for the host OS (can be integrated in CPU)

Monitor with multiple video inputs (recommended two monitors)

(Recommended: multi-core CPU).

Warning: Note that you DO NOT NEED a multi-OS graphics card! Contrary to popular belief, non-Quadro NVIDIA cards will work well, with no hardware modifications of any kind!

3. Methods

There are two popular passthrough techniques – one involves Xen virtualization, the other uses Qemu and VFIO. Having played around with both, I am personally a fan of the Qemu way – it seems much easier to set up, I get more control over my VM, customizations are easier, and, most importantly, it works with virtually any PCI graphics adapter!

There is a lot of confusion on the Internet concerning what results each method may yield. Some say the Qemu method can never grant decent performance; they claim that only Xen can perform primary VGA passthrough, while Qemu’s secondary VGA passthrough will be very inefficient. However, numerous people (including me) confirm that they get awesome performance with Qemu. On the other hand, it is clear that passthrough with Xen will only work with multi-OS graphics cards. This is not a problem for Radeon users, as probably all new Radeons will do just fine with Xen. However, if your NVIDIA card is not a Quadro, you have no chance with Xen! – unless you burn several resistors on the board, a mod that makes your card think it is a Quadro… I do not recommend such hardware modifications to anyone; even if you trust the Internet that much, the risk of rendering your precious hardware useless is far too high to make it worth the effort. Qemu, on the other hand, should work well with absolutely any PCI card.

Given these reasons, as well as customization options, I have decided to stick with Qemu. For the rest of this article, I will be describing this particular method.

There is one particularly comprehensive guide on how to set everything up using the Qemu method here – at the time of writing, this forum thread has more than 2500 replies, so finding details in it may be hard, but on the other hand every possible scenario is covered somewhere in there :) I can highly recommend that guide, but if you want to learn about the general idea first, stay with me before you jump there!

4. The software

Obviously, things won’t work out of the box – there are necessary preparations on the software side too.

First, you will need to patch your kernel a bit, and compile it with several options enabled. At the time of writing, the ACS override patch and VGA arbiter fixes need to be applied manually, as they are not (yet?) included in the mainline kernel. You can find details in the guide I linked.

You will also need to configure your kernel a bit. The key is not only to ensure it activates the appropriate IOMMU modules, but also to forbid it from loading any drivers for the card you want to pass through.

Most likely it will also be necessary to use the git development version of Qemu – some necessary features are not yet available in stable releases. Also, when playing with Qemu, it is worth trying KVM – chances are that hardware virtualization will significantly improve the virtual machine’s CPU performance.

You may also want to write a few scripts that take care of the remaining details (such as binding the PCI card to the vfio module) before starting Qemu to run the virtual machine.
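For illustration, binding a device to vfio-pci typically looks something like the following (the PCI address 0000:01:00.0 and the vendor/device IDs are examples, not taken from the original setup; the vfio-pci module must already be loaded):

```shell
# Detach the GPU from whatever driver currently owns it...
echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
# ...then tell vfio-pci to claim devices with this vendor/device ID.
echo 10de 1184 > /sys/bus/pci/drivers/vfio-pci/new_id
```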

Also, it may be tricky to get the right order of installing drivers in the guest OS. It took me a while to realize that I needed to disable Qemu’s emulated VGA – otherwise the NVIDIA drivers won’t detect the dedicated hardware :-)

The greatest issue I encountered is that Windows is very sensitive to hardware changes. Even the slightest change in my virtual machine (different Qemu options) would immediately cause Windows to never boot again, and no web guide on dealing with these particular boot-time BSoDs ever helped… So eventually I had to re-install the whole guest OS; after about 10 re-installs I was completely fed up with it. However, as long as I do not experiment with Qemu settings, there are no such problems at all.

There is one more thing that I believe is worth setting up. It’s cool to play Windows games, but there are also many great Linux-native titles. Obviously, if your system boots up with the dedicated graphics hardware disabled, any demanding game won’t run. For this reason I have configured GRUB so that at boot time I can choose whether the system should use the graphics card, or whether it should be disabled. This probably can’t be done any more simply – there is no way to get Xorg to switch from one VGA adapter to another while it is running… But it’s not that much of a hassle anyway.
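As a hypothetical sketch of what the “VM gaming” boot entry might carry on its kernel command line (assuming an Intel system and the pci-stub module; the device IDs are examples, not from the original setup):

```shell
# Enable the IOMMU and reserve the dedicated GPU for the VM, so the host
# never loads a driver for it. The other GRUB entry simply omits pci-stub.ids.
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on pci-stub.ids=10de:1184"
```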

5. Peripherals

How about the keyboard and mouse – should you pass them through too? You might, but it is not necessary; I use Synergy to share my mouse and keyboard between the systems, just as if they were two displays of one system. Very convenient. The script that starts Qemu for me also launches the Synergy server on Linux; the client, running in Windows, starts automatically on boot.

If you want, you can also set up networking for the guest system – Qemu has very good support for interface bridging, so it is not difficult to grant the guest OS internet access.

One could also pass through audio devices, but I believe this is not necessary – especially if you do not care about hardware audio acceleration. In that case you can have Qemu emulate a sound device and play its output like any other app in the host OS would. As a result, you can hear both systems on the same speakers or headphones!

Personally, I have even gone so far as to prepare a simple app that talks to my monitor via I²C and tells it when to switch video input – this way I can use a hotkey instead of navigating its OSD menus. The same hotkey also switches my keyboard and mouse between the systems, thanks to Synergy’s customizability.
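A similar effect can nowadays be achieved with the ddcutil tool instead of a custom app (this is an alternative I am suggesting, not what the author used; the input value is monitor-specific, so treat the number below as a placeholder):

```shell
# VCP feature 0x60 is the input-source selector on DDC/CI-capable monitors.
# Run `ddcutil capabilities` first to learn which values your monitor accepts.
ddcutil setvcp 0x60 0x0f
```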

6. Conclusions

I have used this configuration for a few weeks now, and I am yet to find a game that would not perform outstandingly in this environment. Graphics performance is just as if I dual-booted, CPU performance is only a tiny bit worse (but still awesome). The ability to keep all my apps running under Linux while I play games, be it a web browser, IM client, teamspeak or whatever else might be useful – is incredibly convenient!

Switching between systems in less than a second is really a game-changer for me (pun intended…)!

If you are excited about this technique, go ahead and read the guide. Be ready for a challenge, and do not give up if things don’t work at first – you won’t regret it! Good luck!

Want to know more? I will be happy to answer your general questions, but if you need help or want to learn about technical details, the best place to find answers is here.

¹) Unless your motherboard has a video multiplexer, like NVIDIA Optimus… but using it would be difficult, as you would need to control the mux manually. I believe this might be achievable, but it would most certainly require specialized drivers, which do not exist right now.

²) It’s not as simple as “all new hardware supports it”, for both CPUs and mainboards. You may find lists of IOMMU-compatible hardware on the Internet, but it is probably best to ask the manufacturer directly – if they do not list it on their website, try dropping them an email. In my experience, all manufacturers are very keen to respond to enquiries concerning such sophisticated features! ;-)

C++11: std::threads managed by a designated class

(16 May 2014)

Recently I noticed an unobvious problem that may appear when using an std::thread as a class field. Due to its tricky nature, I believe it is more than likely to bite anyone who is not careful enough when implementing C++ classes. Its solution also provides an elegant example of what has to be considered when working with threads in object-oriented C++, so I decided to share it.

Consider a scenario where we would like to implement a class that represents a particular thread activity. We would like it to:

start a new thread it manages when an instance is constructed

stop it when it is destructed

I will present the obvious implementation, explain the problem with it, and describe how to deal with it.

Okay – but the default std::thread constructor is pretty pointless here: it “creates a new thread object which does not represent a thread”. So a pretty obvious solution is to explicitly use another constructor, which actually launches the thread, in the member initializer list:

the_thread(&MyClass::ThreadMain, this)

Then we can implement the thread’s routine in the ThreadMain method. The destructor takes care of stopping the thread gracefully and waiting for it to terminate.

This seems like an elegantly implemented class managing the thread. It will even [seem to] work correctly!

But there is a critical problem with it.

The new thread will start running as soon as it is constructed. It will not wait for the constructor to finish its work (why should it?). It may therefore touch class fields before they are initialized – and sometimes, at random, everything will appear to work anyway, lulling you into a false sense of security and correctness. It will not even wait for the other class fields to be constructed, so it may legitimately crash when accessing stop_thread. And what if your class has other fields that are not atomic? The thread can start using uninitialized objects – problems guaranteed.

One possible solution would be to have the constructor notify the thread, from within its body, when it is safe to start. This might be accomplished with an atomic flag or a condition variable. Keep in mind that it would need to be constructed before the std::thread, so that it is ready to use when the thread starts (so the order of construction really does matter!).

This might work for the code I use for demonstration, but in the general case it will do no good. Imagine a hierarchy of polymorphic thread-managing classes, and imagine that ThreadMain is virtual. In that scenario its address is resolved through the vtable. But during construction, the vtable pointer still refers to the class currently being constructed – not to the final, most-derived class! This means the starting thread could call the wrong version of the function, leading to a variety of confusing behavior. The idea of notifying it when to start won’t help here.

A universal solution is to prepare the class in two steps.

First, construct it.

When it is ready to use, call Start() which will actually launch the thread.

Like this:

MyClass instance;
instance.Start();

So we will need to start with no thread running, and construct it dynamically when Start is called. My first thought was:

But this just feels wrong. Pointers? Seriously, are we forced to manually manage memory?

No. std::thread is movable! So we can use that boring default std::thread constructor (not so pointless after all, right?) to construct the member at the beginning, and then, when Start() is called, substitute it with an actual running thread. Like that:

This scenario has taught me to stay vigilant when mixing asynchronous execution with the things C++ does at the lower levels. I hope it has opened your eyes too – or at least that you will remember it as a simple yet interesting case!

Dynamic linker tricks: Using LD_PRELOAD to cheat, inject features and investigate programs

(2 Apr 2013)

Linux puts you in full control. This is not always obvious from everyone’s perspective, but a power user loves to be in control. I’m going to show you a basic trick that lets you heavily influence the behavior of most applications, which is not only fun, but also, at times, useful.

I hope the resulting output is obvious: ten randomly selected numbers in 0–99, hopefully different each time you run this program.

Now let’s pretend we don’t have the source of this executable. Either delete the source file or move it somewhere else – we won’t need it. We are going to significantly modify this program’s behavior, without touching its source code or recompiling it.

For this, let’s create another simple C file:

int rand(){
return 42; //the most random number in the universe
}

We’ll compile it into a shared library.

gcc -shared -fPIC unrandom.c -o unrandom.so

So what we have now is an application that outputs some random data, and a custom library which implements the rand() function so that it always returns 42. Now… just run random_nums this way, and watch the result:

LD_PRELOAD=$PWD/unrandom.so ./random_nums

If you are lazy and did not try it yourself (and somehow fail to guess what might have happened), I’ll let you know: the output consists of ten 42s.

This may be even more impressive if you first:

export LD_PRELOAD=$PWD/unrandom.so

and then run the program normally. An unchanged app, run in an apparently usual manner, seems to be affected by what we did in our tiny library…

Wait, what? What just happened?

Yup, you are right, our program failed to generate random numbers, because it did not use the “real” rand(), but the one we provided – which returns 42 every time.

But we *told* it to use the real one. We programmed it to use the real one. Besides, at the time we created that program, the fake rand() did not even exist!

This is not entirely true: we did not choose which rand() we wanted our program to use. We only told it to use rand().

When our program is started, certain libraries (which provide functionality needed by the program) are loaded. We can learn which ones with ldd:
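For example (a reconstruction – the exact paths and addresses will differ on your machine):

```shell
$ ldd random_nums
        linux-vdso.so.1 =>  (0x00007fff4bdfe000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f48c03ec000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f48c07e3000)
```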

What you see in the output is the list of libraries needed by random_nums. This list is built into the executable, and is determined at compile time. The exact output may differ slightly on your machine, but libc.so must be there – this is the file which provides core C functionality, including the “real” rand().

We can have a peek at which functions libc provides. I used the following to get the full list:

nm -D /lib/libc.so.6

The nm command lists the symbols found in a binary file. The -D flag tells it to look for dynamic symbols, which makes sense, as libc.so.6 is a dynamic library. The output is very long, but among many other standard functions it indeed lists rand().

Now, what happens when we set the environment variable LD_PRELOAD? It forces additional libraries to be loaded into a program. In our case, it loads unrandom.so for random_nums, even though the program itself never asks for it. The following command may be interesting:
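Presumably this was ldd run with the preload in effect (a reconstruction, assuming unrandom.so and random_nums were built as described above):

```shell
# With the preload set, ldd shows unrandom.so among the loaded libraries.
LD_PRELOAD=$PWD/unrandom.so ldd ./random_nums
```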

Note that it lists our custom library. And this is exactly why its code gets executed: random_nums calls rand(), but because unrandom.so is loaded, it is our library that provides the implementation of rand(). Neat, isn’t it?

Being transparent

This is not enough. I’d like to be able to inject some code into an application in a similar manner, but in such a way that the application can still function normally. It’s clear that if we implemented open() as a simple “return 0;”, the application we want to hack would malfunction. The point is to be transparent – to actually call the original open():
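A first, naive attempt might look like this (a sketch – note the fatal flaw discussed right below):

```c
#include <sys/types.h>

/* WRONG: this does not reach the original open() at all. */
int open(const char *pathname, int flags, mode_t mode) {
    /* ... some custom code ... */
    return open(pathname, flags, mode);   /* calls itself - endless recursion! */
}
```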

Hm, not really. This won’t call the “original” open(…) – obviously, it is an endless recursive call.

How do we access the “real” open function? We need to use the programming interface of the dynamic linker. It’s simpler than it sounds – have a look at this complete example, and then I’ll explain what happens in it:
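The listing itself is not shown in this excerpt; a reconstruction consistent with the explanation below (same typedef name, same headers) would be:

```c
#define _GNU_SOURCE   /* needed for RTLD_NEXT in dlfcn.h */
#include <dlfcn.h>
#include <stdio.h>

/* Alias for a pointer-to-function type matching the original open().
 * (A fully complete version would also forward the optional mode
 * argument used with O_CREAT; this sketch ignores it.) */
typedef int (*orig_open_f_type)(const char *pathname, int flags);

int open(const char *pathname, int flags) {
    /* the "evil injected code" goes here */
    printf("The victim used open(...) to access '%s'!!!\n", pathname);

    /* Find the next "open" symbol on the dynamic library stack... */
    orig_open_f_type orig_open;
    orig_open = (orig_open_f_type)dlsym(RTLD_NEXT, "open");

    /* ...and forward the call to it, returning its result as ours. */
    return orig_open(pathname, flags);
}
```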

The dlfcn.h header is needed for the dlsym function we use later. That strange #define directive instructs the compiler to enable some non-standard features – we need it for RTLD_NEXT in dlfcn.h. The typedef just creates an alias for a complicated pointer-to-function type, with arguments just like the original open’s – the alias name is orig_open_f_type, which we’ll use later.

The body of our custom open(…) consists of some custom code. The last part creates a function pointer, orig_open, which will point to the original open(…). To get that function’s address, we ask dlsym to find the next “open” symbol on the dynamic library stack. Finally, we call that function (passing the same arguments that were passed to our fake open), and return its return value as ours.

As the “evil injected code” I simply used:

printf("The victim used open(...) to access '%s'!!!\n",pathname); //remember to include stdio.h!

To compile it, I needed to slightly adjust compiler flags:

gcc -shared -fPIC inspect_open.c -o inspect_open.so -ldl

I had to append -ldl, so that this shared library is linked to libdl, which provides the dlsym function. (Nah, I am not going to create a fake version of dlsym, though this might be fun.)

So what do I get as a result? A shared library which implements the open(…) function so that it behaves exactly like the real open(…) – except that it has a side effect of printfing the file path :-)

If you are not convinced this is a powerful trick, it’s time you tried the following:

LD_PRELOAD=$PWD/inspect_open.so gnome-calculator

I encourage you to see the result yourself, but basically it lists every file this application accesses. In real time.

I believe it’s not that hard to imagine why this might be useful for debugging or investigating unknown applications. Please note, however, that this particular trick is not quite complete, because open() is not the only function that opens files… For example, there is also open64() in the standard library, and for full investigation you would need to create a fake one too.

Possible uses

If you are still with me and enjoyed the above, let me suggest a bunch of ideas for what can be achieved with this trick. Keep in mind that you can do all of the below without the source of the affected app!

Gain root privileges. Not really, don’t even bother, you won’t bypass any security this way. (A quick explanation for pros: no libraries will be preloaded this way if ruid != euid)

Cheat games: Unrandomize. This is what I did in the first example. For a fully working cheat you would also need to implement custom random(), rand_r() and random_r(). Some apps may instead read from /dev/urandom or similar – you could redirect them to /dev/null by calling the original open() with a modified file path. Furthermore, some apps may have their own random number generation algorithm, and there is little you can do about that (unless you go as far as the last point below). But this looks like an easy exercise for beginners.

Cheat games: Bullet time. Implement all the standard time-related functions so that they pretend time flows two times slower. Or ten times slower. If you correctly calculate new values for time measurement, timed sleep functions, and the rest, the affected application will believe time runs slower (or faster, if you wish), and you can experience awesome bullet-time action.
Or go one step further and let your shared library also be a DBus client, so that you can communicate with it in real time. Bind some shortcuts to custom commands, and with some additional calculations in your fake timing functions you will be able to enable and disable the slow-mo or fast-forward whenever you wish.

Investigate apps: List accessed files. That’s what my second example does, but this could also be pushed further, by recording and monitoring all of the app’s file I/O.

Investigate apps: Monitor internet access. You might do this with Wireshark or similar software, but with this trick you could actually gain control of what an app sends over the web, and not just look, but also affect the exchanged data. Lots of possibilities here, from detecting spyware, to cheating in multiplayer games, or analyzing & reverse-engineering protocols of closed-source applications.

Investigate apps: Inspect GTK structures. Why limit ourselves to the standard library? Let’s inject code into all GTK calls, so that we can learn what widgets an app uses and how they are structured. This could then be rendered either to an image or even to a GtkBuilder file! Super useful if you want to learn how some app manages its interface!

Sandbox unsafe applications. If you don’t trust some app and are afraid that it may wish to rm -rf / or do some other unwanted file activity, you could redirect all its file I/O to e.g. /tmp by appropriately modifying the arguments it passes to all file-related functions (not just open(), but also e.g. the ones that remove directories). It’s a trickier approach than a chroot, but it gives you more control. Note that it is only as safe as your “wrapper” is complete, so unless you really know what you’re doing, don’t actually run any malicious software this way.
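
Here is a deliberately incomplete sketch of that idea (the /tmp/jail prefix is my arbitrary choice): absolute paths passed to open() get remapped under a jail directory. A usable sandbox would also have to wrap openat(), fopen(), unlink(), rename(), mkdir() and many more – which is exactly why the warning above applies.

```c
/* jail.c - redirect absolute paths under /tmp/jail (incomplete sketch).
 * Build:  gcc -shared -fPIC -o jail.so jail.c -ldl
 * Use:    mkdir -p /tmp/jail && LD_PRELOAD=./jail.so ./untrusted_app
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>

/* Prefix absolute paths with the jail directory; pass relative ones through. */
static const char *jail_path(const char *path, char *buf, size_t n)
{
    if (path && path[0] == '/') {
        snprintf(buf, n, "/tmp/jail%s", path);
        return buf;
    }
    return path;
}

int open(const char *path, int flags, ...)
{
    static int (*real_open)(const char *, int, ...);
    if (!real_open)
        real_open = dlsym(RTLD_NEXT, "open");

    char buf[4096];
    path = jail_path(path, buf, sizeof buf);

    mode_t mode = 0;
    if (flags & O_CREAT) {              /* third argument only with O_CREAT */
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);
        va_end(ap);
    }
    return real_open(path, flags, mode);
}
```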

Implement features. zlibc is an actual library which is run precisely this way; it uncompresses files on the fly as they are accessed, so that any application can work with compressed data without even realizing it.

Fix bugs. Another real-life example: some time ago (I am not sure whether this is still the case) Skype – which is closed-source – had problems capturing video from certain webcams. Because the source could not be modified (Skype is not free software), this was fixed by preloading a library that worked around these video capture problems.

Manually access an application’s own memory. Do note that you can reach all of the app’s data this way. This may not sound impressive if you are familiar with software like CheatEngine/scanmem/GameConqueror, but they all require root privileges to work – LD_PRELOAD does not. In fact, with a number of clever tricks your injected code can access all of the app’s memory, because it gets executed by that application itself, so you can modify everything the application can. You can probably imagine that this allows a lot of low-level hacks… but I’ll post an article about that another time.

These are only the ideas I came up with. I bet you can find some more – if you do, share them by commenting!

e4rat – decreasing bootup time on HDD drives
(originally published Sun, 17 Mar 2013, at https://rafalcieslak.wordpress.com/2013/03/17/e4rat-decreasing-bootup-time-on-hdd-drives/)

This time I will describe how to set up e4rat in order to speed up your Ubuntu’s boot. Let’s begin with some motivation: my netbook used to boot in ~40 seconds; with e4rat, it takes ~10-15 seconds. Impressive, isn’t it? Let’s see how this trick works, and I’ll show you how to enable it on your machine.

Prerequisites

Note: e4rat will only help on HDD drives. If you installed your system on an SSD, it won’t make any difference (I will explain why later on, but in that case you may already be uninterested in this article). For SSDs, ureadahead, which is installed with Ubuntu by default, already does its best to improve the boot time. Mechanical HDD drives, however, can benefit a lot from e4rat.

Note 2: You need to have your system installed on an ext4 partition in order to use e4rat (which is the default in most cases). Furthermore, a kernel no older than 2.6.31 is required. No worries – Ubuntu has shipped a suitable kernel since 10.04! Also, e4rat is confirmed to work great with all Ubuntu releases since 11.04 Natty Narwhal.

How does it work?

First, let’s think about why your machine takes so much time to boot. Investigating the boot process, one can learn that on a physical HDD most of the startup time is spent waiting for the drive to access data (if you wish to investigate this on your own, bootchart is the utility that will help you). This makes a lot of sense: the drive needs to spin up the platters and move its mechanical parts to read information, so reading a file takes a long time – and there can be thousands of files required during boot! Things are even worse: if those files are scattered across the whole disk, much more seeking has to be done (this also explains why the drive can be very noisy on startup!). Luckily, there is little or no file fragmentation on ext4 filesystems, so at least once a file is found, reading it requires little further mechanical movement. But just seeking to all those files is enough to make your boot take a long time.

The first observation to make is that every time you boot your system, almost the same files are required. This is fairly intuitive: starting Ubuntu on the same machine should require roughly the same files every time. What if we could find out what these files are, and somehow move them close to each other on the drive, so that accessing them requires less back-and-forth head movement? Yeah… that is basically what e4rat does for you.

First, we’ll let e4rat inspect your boot-up, so that it can learn which files are needed to start your system. Then we’ll use its file reallocation tool to move these files into a pattern as optimal as possible. Finally, we’ll let it start before your Ubuntu boots – it will load all the required files to RAM at once (which takes much, much less time, because they will be read as one large block of concatenated data), and Ubuntu will continue the boot using the data in RAM. Because RAM is insanely fast, this preload takes a total of 2 seconds on my Ubuntu 12.04 on an ASUS 1225c. Therefore, the boot process should be significantly faster.

Please note that you should redo this process every time you upgrade your Ubuntu to a new release. This is because a lot of core system files are replaced during an upgrade, and many different files will then be used to start your system, so for best effect they should be relocated again. This is also the reason why using e4rat on an Ubuntu development version is not advised: there, lots of system files change with updates on a daily basis… which kills the idea of remembering what has to be preloaded for bootup.

Do you like this trick? If so, let me show you how to install and configure e4rat.

How to enable it?

1. Installing e4rat. First, you need to get e4rat onto your system. Unfortunately, it is not available in the Software Center. Therefore, begin by looking at the e4rat downloads page. Choose the latest release, and download the .deb file suitable for your system (amd64 for a 64-bit system / i386 for 32-bit). Don’t install the file yet!

The default boot-up aid in Ubuntu is ureadahead. It does a similar job, but it never relocates files. Therefore, while it is helpful in the case of SSD drives, it does not improve much when an HDD is involved. Because ureadahead conflicts with e4rat, we’ll need to uninstall ureadahead first. The easiest way to do that is running this command:

sudo apt-get purge ureadahead

Note: This will warn you that you are about to remove ubuntu-minimal too. Don’t worry, this will not destroy your Ubuntu. ubuntu-minimal is just an empty package that ensures all other packages default for Ubuntu (like ureadahead) are present on your system. Therefore it’s safe to continue.

The next step is to install the e4rat .deb file we downloaded. You can double-click it and the Software Center will help. I prefer running:

sudo dpkg -i e4rat_file_name.deb

Once this is done, we can let it learn about your boot process!

2. Collecting startup files data. This step is about getting e4rat to know what files your system needs to boot up. This is fairly simple, and requires little work.

Start by restarting your system. Wait for the GRUB boot menu to appear (you may need to hold the Shift key in order to access this menu). When you are presented with the system selection menu, do not boot your Ubuntu. Instead, use the arrow keys to highlight the entry you would normally use (most likely it’s called “Ubuntu” or “Ubuntu, with Linux version-version-version”). Press e to enter edit mode.

Do not be afraid of editing this data! Changes done here are not persistent. Therefore the worst thing that can happen while messing up here (unless you intentionally enter malicious commands) is that your Ubuntu will fail to start – and it will be back to normal the next time you start your computer as usual.

The exact contents of what you will see depend on your system version. However, what we are going to edit is common to all of them. Look for a line that starts with: linux /boot/vmlinuz-… Use the arrow keys to reach the end of this line, and add the following at its end:

init=/sbin/e4rat-collect

(Note: This line can be longer than your screen, and it will wrap around – if you are unsure where to add this text, go one line below the one we would like to edit and press the left arrow key, which should take you to the end of the previous line – right where you need to add the above text.)

Then press Ctrl+X, which will start the system using this new argument.

For the next 120 seconds e4rat will be watching which files are loaded during boot. Pro tip: if you open your browser (or any other application you use frequently) within these 2 minutes, e4rat will think the browser’s files are essential to boot, and will pre-load them every time you start your system – this way your frequently used apps will also start faster!

Once your system is up, make sure everything went right by testing whether the file /var/lib/e4rat/startup.log is present. I do it by running:

file /var/lib/e4rat/startup.log

If it says it’s a UTF-8 text file, everything’s fine. If it says there is no such file, you need to redo this step carefully – you must have somehow not launched e4rat-collect.

3. Relocating files.

Now we’ll need to boot into low-level text mode. This is because file relocation won’t work while the whole system is running. No worries – again, it sounds scarier than it really is.

To enter it, we’ll do something similar to the previous step. Restart your system, select your OS in GRUB boot menu, and press e to edit it. Look for the same line as previously (the one that starts with linux /boot/vmlinuz-… ), but this time we need to add some other text. Type:

single

and press Ctrl+X to boot your system. After few seconds you will see command prompt (if not, press Ctrl+Alt+F1). The following command will start file relocation, according to data e4rat collected:

e4rat-realloc /var/lib/e4rat/startup.log

It can take a long time. Do not worry if it does not finish within several minutes. It needs to move lots of data on your hard drive, and because it’s a slow one, that takes time. If you have little free disk space, it can take even longer. Just be patient, it will finish eventually.

Once it finishes, it will tell you some more or less interesting data about how well it was able to move your files.

It is recommended to run this command multiple times, until it clearly says that no further improvements are possible. Every time you run it, it should take less time, and really, it’s worth the wait – the better the files are located, the faster your system will boot once we finish.

When it says that no further improvements are possible, we are done with this step. Do not shut down your computer.

4. Enabling e4rat to preload files every time you start your system.

We are almost done. The last thing that has to be done is modifying the way your Ubuntu starts, so that it can benefit from e4rat. Still in text mode, run:

nano /etc/default/grub

A full-screen text-editor will appear, with a config file open. It’s very intuitive to use. Find a line that starts with

GRUB_CMDLINE_LINUX_DEFAULT="..."

Leave whatever is between the quote marks, and add init=/sbin/e4rat-preload . For example, on my system this line looked like this:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

so I changed it into:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash init=/sbin/e4rat-preload"

I hope that’s clear. Press Ctrl+O and then Enter to save the file, then press Ctrl+X to close the editor.

To apply the changes we’ve just made, run the following command:

update-grub

Once it finishes, restart your machine using this command:

reboot

Now your Ubuntu should start normally. Well… almost. If everything went right, it should start faster. If you are lucky, it will start lightning fast (and it will be just as fast on every reboot from now on)!

I hope you found this guide useful, and that your system boot time shocked you. If so, thank the people who develop e4rat!

Final note: uninstalling e4rat. If for some reason you want to revert the changes you made while installing e4rat, here are the instructions. First, revert the changes to /etc/default/grub we introduced in step 4. Run sudo update-grub to apply this change. Run sudo apt-get purge e4rat to uninstall e4rat, and sudo apt-get install ubuntu-minimal ureadahead to restore ureadahead. Note that the file relocation is not reversible, but it does no harm.

There is something wrong with the new UDS system
(originally published Thu, 28 Feb 2013, at https://rafalcieslak.wordpress.com/2013/02/28/there-is-something-wrong-with-the-new-uds-system/)

When I read the news about Canonical’s decision to change the way the Ubuntu Developer Summit is held (original announcement here), I was totally astonished. I expected this change to cause a lot of buzz within the community, especially given the fact that all recent Canonical decisions are considered very controversial. The decision surprises me heavily, as I can spot a large number of problems it may cause, as well as problems with the way the decision itself was handled. Jono Bacon’s article explaining the decision did not satisfy me either: it explains the general reasoning behind the idea, but it does not clarify everything.

UDS is a part of a long-term Ubuntu tradition. Every six months, developers from all over the world would meet in order to plan development for the upcoming release, brainstorm ideas, discuss problems and collaborate in many ways to ensure that the next Ubuntu is going to rock. But the event is not (was not?) just about planning. It was a chance for the community to actually meet, to get to know each other, to tighten the bonds within the community. I believe this is crucial for being deeply engaged in the community, and for ensuring that the relations within the community, as well as its structure and organisation, are well and sound (and isn’t it important to have friends within the community?). We all know that a large number of Canonical employees work remotely, and I feel this may be one of the key facts which contributed to the decision to convert UDS into an online meeting; apparently some folks at Canonical realized that people do not need to meet in order to be productive. Moreover, a very important part of UDS happened outside the sessions – people would discuss brilliant ideas during dinner, some would seek aid for their team by looking for interested folks, others would flash their mobile device with the help of more experienced ones, and finally, people would teach each other a lot. It is clear that none of this will happen in an online UDS.

From what I understood, the idea is to make UDS available to everyone, so that all contributors, regardless of where they live and how far they are willing to travel, can participate in the sessions. I can, however, see some significant inconsistency here. The first issue is the choice of Google Hangouts for sessions. I agree it is a great, handy tool for video conferences, and I use it a lot myself, but it cannot be assumed that everyone is fine with the G+ policy; there are indeed people who do their best to avoid any Google products. We are told that IRC sessions will be provided for those who can’t join the videos, but that doesn’t make much sense, because it does not differ at all from remote participation in the summits which were real meetings (people who were unable to travel to UDS could use IRC to contact session participants; the IRC log was displayed live in the room, so that everyone could interact with the discussion even if they were miles away – I participated this way during UDS-Q, and the experience was actually quite satisfying, even though I couldn’t see the faces of the people I was speaking with).

I also have to express my doubts about session organisation. While some UDS sessions were indeed held by fewer than 10 people, many others would grab the interest of more (e.g. the ones from the Community track), resulting in more than 50 developers in the room plus at least 20 on IRC, with at least 30-40 of them participating actively in the discussion. Now, if the point is to let everyone participate, then we are aiming for even more participants. Please imagine 80-100 people in a single G+ hangout. Even a number like 30 seems bizarre! Either this will end in a huge mess, or only some people will get a voice (which, again, breaks the idea of opening UDS to everyone).

I also have concerns about the way it is said to be organised. The event is going to be two days long, and it will take place between 4pm and 10pm UTC. Obviously, that means that a big part of the world will be sleeping then, and another part will be at work. And that, once more, is against the principle of opening UDS to everyone. I don’t see any reason why this can’t be a 24h event, with sessions spread more or less evenly, so that those living in Australia can participate too. I also believe that some teams might want to schedule meetings at times that suit their people best – why limit them to a few hours, if this is going to be an online event?

The length of the event is also interesting. Two days. Two days of a few hours of discussion each. Compare that to the traditional five-day UDS with sessions from 9am to 6pm. Add the fact that the online UDS will take place twice as frequently, and the conclusion is that we’ll need to be far more efficient to discuss all that is needed. Will this be enough time? Luckily, the event length can be fine-tuned if needed. Some speculate this may be related to Ubuntu switching to a rolling release model.

One of the main problems with how the decision was handled is that it was 1) a surprise, 2) immediately effective. I opened up Planet Ubuntu on Wednesday and learned that the UDS is next week. A lot of time to prepare discussion topics, isn’t it? At the time of writing, there is not a single blueprint registered for this UDS. I could go on explaining why it was a terrible idea to announce it this late, but I believe you get the idea. Please also note that Canonical never signalled such an idea before. Until the announcement, everything suggested that the next UDS would take place as usual – this time in Oakland – and I expect there may be people who had already reserved their time. I feel that such crucial decisions need to come with some kind of transitional period.

With all respect to Canonical and their right to manage the money they own (UDS is a really expensive event, every time I try to imagine the amount of money that had to be involved in Copenhagen, my mind suffers stack overflows), I am very skeptical about this decision, both because of the reasons I explained, and because of some that I’d rather not share publicly. Time will tell how it will affect planning, development and community. I hope the lack of such meetings won’t have a major impact.

And please keep in mind that regardless of what changes are done to the way we organize our work, Ubuntu community will always make sure your favorite OS is the best possible! :-)

vModSynth 1.0 released
(originally published Sun, 10 Feb 2013, at https://rafalcieslak.wordpress.com/2013/02/10/vmodsynth-1-0-released/)

Didn’t I mention that for the last 2 months I have been working on a synthesizer application?

I am pleased to announce that vModSynth 1.0 is now publicly released and available to download.

What is vModSynth? It’s a modular software synthesizer for Linux. It is not intended to be as convenient as possible, but rather to resemble the look & feel of a real, analog, modular synthesizer. See for yourself:

vModSynth allows you to play with a modular synth on your computer. You are free to choose any modules you wish, you can connect them however you want, and you will hear the result immediately. The synthesizer intentionally resembles the look of a hardware modular synthesizer (I was inspired by the modules manufactured by synthesizers.com), and it imitates the behavior of one.

vModSynth integrates perfectly with external MIDI devices – you can play it with an external keyboard, and you can bind any knob to a knob/slider on your physical device, so that you can actually feel the synthesizer and modify its parameters just as on a real one! You can also connect any external sequencing application, like Rosegarden or harmonySEQ.

There are a number of modules you can add to your setup: an all-in-one oscillator, some effects, some processing modules, whatever you like. If I continue to develop this project, the number of modules will certainly grow, as creating new ones is very easy.

There are no limits on the number of modules, the number of connections, or loops in connections – you are absolutely free. Just build your own synthesizing path and hear it in action.

The source file can be downloaded here. Compilation is as simple as ./configure && make && make install . A detailed user manual is available in the ./doc directory.

There is also a PPA available for Ubuntu 12.04/12.10/13.04. To add it and install, use sudo add-apt-repository ppa:smartboyhw/vmodsynth-release && sudo apt-get update && sudo apt-get install vmodsynth . Thanks to Howard Chan for maintaining the PPA!

Questions, ideas, bugs? Please contact me directly, or leave a comment here. I have not yet gauged the level of interest others may have in this piece of software, and I need to investigate whether it makes sense to set up a bug tracker etc. However, if you are interested in contributing to this project, writing awesome new modules or generally helping me make vModSynth an awesome synthesizer, you will be welcomed with my arms wide open!

Dynamically changing Ubuntu Phone wallpaper for your desktop
(originally published Sun, 13 Jan 2013, at https://rafalcieslak.wordpress.com/2013/01/13/dynamically-changing-ubuntu-phone-wallpaper-for-your-desktop/)

We have all seen it already. The super-elegant welcome screen seen in all demonstrations of the Ubuntu Phone OS is appreciated by many for its brilliant design and simplicity.

Because of that, some have tried to recreate it for use as a desktop wallpaper. Among the several versions that are available, I liked Michał Prędotka’s version the most. It was adapted into many different colors by Michael Hall – he has even created a video tutorial on how to make your own color scheme for this wallpaper.

I love the idea of different simple wallpapers that share the design, but vary in colors. But I’m lazy, and I don’t want to change my wallpaper everyday to enjoy another color scheme. Ideally the wallpaper would change automatically. But if the design is identical and only colors change, then it may be neat to change the colors smoothly.

(these images are low-res, and are not meant to be downloaded)

I have written a small script which does that for you. It runs in the background, and every now and then it creates a new shade of color for your wallpaper, so the color changes smoothly. By default, it takes a full day for the colors to repeat (but you can change the period easily) – it’s yellow in the morning, green near noon, blue in the afternoon, violet in the evening, and red at night. You may want the colors to change more slowly (it could be cool to have the full cycle take a week – this way every day brings a new color!). If you wish, it is also simple to modify the color selection algorithm however you like.

You can download the script here. After extracting, simply launch the run file (you may wish to modify the script first). I have also added the script to the list of apps started when I log in, so that it runs in the background whenever I am using my desktop. By default, it refreshes the wallpaper every 10 minutes, which is frequent enough for the shade changes to be unnoticeable.

And of course the Dash and Launcher’s chameleonic features follow the mood of your wallpaper!