Posts

I saw a question online recently which basically asked whether big-O notation always holds true in practice. For example, in theory, searching through a vector for an item should be O(n) whereas searching a map should be O(log n). Right?

Let’s imagine we create a vector with 100,000 random integers in it. We then sort the vector and run two different searches on it. The first search starts at the beginning, compares each element against the value we’re searching for, and simply keeps incrementing until we find it. A naive search, you might say. Without resorting to std:: algorithms, how might you improve it?

A worthy idea might be a recursive binary, or bisection, search. With 100,000 elements that needs at most 17 bisections (since 2^17 > 100,000) to find any value, compared to, say, 75,000 comparisons to reach a value in the upper quadrant of our range with the brute force approach.

It’s fairly safe to say that the binary search is going to be much faster, right?

Are you surprised to learn that the binary search actually took (very) slightly longer than the brute force approach?

Welcome to the world of modern processors. What’s going on there is a healthy dose of branch prediction and plenty of processor cache hits: the linear scan marches through memory sequentially, so the prefetcher keeps the next elements in cache and the loop branch is predicted almost perfectly, whereas the binary search jumps around the array and defeats both. That’s what makes the brute force approach viable.

I guess the moral of this story is, as always, don’t optimise prematurely – because you might actually be making things worse! Always profile your hot spots and work empirically rather than on what you think you know about algorithm efficiency.

The code used here is reproduced below – it should compile on VS 2015, clang and g++ without issue. You’ll need Boost for the timers. You may see wildly different timings depending on optimisation levels and other factors :)
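This isn’t the original listing, but a minimal sketch along the same lines, using std::chrono in place of the Boost timers and a fixed seed so runs are repeatable; in practice you’d repeat each search many times to get stable numbers:

#include <algorithm>
#include <chrono>
#include <iostream>
#include <random>
#include <vector>

int main() {
    // 100,000 random integers, then sorted.
    std::mt19937 gen(42);
    std::uniform_int_distribution<int> dist(0, 999999);
    std::vector<int> v(100000);
    for (auto& x : v) x = dist(gen);
    std::sort(v.begin(), v.end());

    // A target in the upper quadrant of the sorted range.
    const int target = v[75000];
    using Clock = std::chrono::steady_clock;

    // Brute force: march from the front until we hit the target.
    auto t0 = Clock::now();
    std::size_t linear = 0;
    while (v[linear] != target) ++linear;
    auto t1 = Clock::now();

    // Bisection: halve the candidate interval until it collapses.
    std::size_t lo = 0, hi = v.size();
    while (lo < hi) {
        const std::size_t mid = lo + (hi - lo) / 2;
        if (v[mid] < target) lo = mid + 1; else hi = mid;
    }
    auto t2 = Clock::now();

    using us = std::chrono::microseconds;
    std::cout << "linear: index " << linear << " in "
              << std::chrono::duration_cast<us>(t1 - t0).count() << "us\n"
              << "binary: index " << lo << " in "
              << std::chrono::duration_cast<us>(t2 - t1).count() << "us\n";
}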

C++11 gives us two indispensable smart pointers, std::unique_ptr and std::shared_ptr. So much has been written about these that there’s no point in my re-hashing any of it, other than to re-iterate that if you are using “naked” owning pointers in your code these days, you are simply doing it wrong.
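For the avoidance of doubt, the before-and-after looks like this (Widget is just a stand-in type):

#include <iostream>
#include <memory>

struct Widget {
    ~Widget() { std::cout << "Widget destroyed\n"; }
};

void old_way() {
    Widget* w = new Widget;  // naked owning pointer...
    // ...so every return path (and every exception) must remember to:
    delete w;
}

void new_way() {
    std::unique_ptr<Widget> w(new Widget);  // ownership is explicit
}  // destroyed automatically on every path out of the function

int main() {
    old_way();
    new_way();
}

std::shared_ptr behaves the same way but reference-counts, for the rarer cases where ownership genuinely must be shared.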

Here’s another little “gotcha” from my experimentations in running C# apps cross-platform. The following code implements a system-wide mutex to ensure that only one instance of the program is running at any one time:

using System;
using System.Threading;

public static class Program
{
    public static void Main(string[] args)
    {
        Mutex mutex = null;
        try
        {
            mutex = Mutex.OpenExisting("test.martyndavis.com");
            // If we *can* open it, we're an ADDITIONAL instance.
            Console.WriteLine("Application is already running");
            return;
        }
        catch (WaitHandleCannotBeOpenedException)
        {
            // The mutex doesn't exist, so we are the only
            // instance running on this machine.
            mutex = new Mutex(true, "test.martyndavis.com");
        }

        try
        {
            Console.WriteLine("Doing stuff... press a key");
            Console.Read();
        }
        finally
        {
            // Although Mutex implements IDisposable, Dispose() doesn't
            // call ReleaseMutex(), hence try/finally instead of 'using'.
            mutex.ReleaseMutex();
        }
    }
}

Running this on Windows, it behaves as one would expect: run one instance and you get “Doing stuff… press a key”; try to run a second instance in another DOS box and you get “Application is already running”.

On Linux though it’s a different story – each instance runs as if it’s the only copy on the machine.

Doing some digging turns up this note in Mono’s release docs: shared handles are now turned off by default. You can turn them back on by setting the environment variable MONO_ENABLE_SHM, but for the application I’m working on I think I’ll simply avoid named mutexes and use something else to ensure one instance – a lock file in the application’s data directory will be simple, and will work cross-platform.
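A sketch of that idea (the path is a placeholder, and it assumes the runtime enforces FileShare.None across processes – worth verifying on your target Mono version):

using System.IO;

static class SingleInstance
{
    static FileStream lockFile;  // held open for the life of the process

    // Returns true if we got the lock, false if another instance holds it.
    public static bool TryAcquire(string path)
    {
        try
        {
            // FileShare.None: a second instance's open attempt fails while
            // we hold the handle, and the OS drops it if we crash.
            lockFile = new FileStream(path, FileMode.OpenOrCreate,
                                      FileAccess.ReadWrite, FileShare.None);
            return true;
        }
        catch (IOException)
        {
            return false;
        }
    }
}

Usage would be a single check at the top of Main: if TryAcquire fails, print a message and return.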

As I’ve mentioned a couple of times before, I’ve been working on a C# application, using GTK# to enable me to create a GUI application which can run on Linux, Mac or Windows. It works well, but I’ve been working through some “gotchas” here and there. Here’s a useful little tip… Gtk# is not a thread-safe toolkit.
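The usual remedy is to touch widgets only from the GUI thread and marshal updates from worker threads across with Gtk.Application.Invoke – a minimal sketch, with Thread.Sleep standing in for real work:

using System.Threading;
using Gtk;

class InvokeDemo
{
    static void Main()
    {
        Application.Init();
        var win = new Window("Invoke demo");
        var label = new Label("working...");
        win.Add(label);
        win.DeleteEvent += delegate { Application.Quit(); };
        win.ShowAll();

        new Thread(() =>
        {
            Thread.Sleep(2000);  // stand-in for real work
            // Widgets must only be touched on the GUI thread,
            // so hand the update back via Application.Invoke.
            Application.Invoke(delegate { label.Text = "done"; });
        }).Start();

        Application.Run();
    }
}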

I’ve been experimenting with using GTK# and C# to build GUI applications which will run on Linux, Windows and Mac. Monodevelop has a graphical form designer for Gtk# front ends, but it’s not as simple as building WinForms applications under Visual Studio. An example is the TreeView widget – it’s designed from an MVC perspective, with a ListStore or TreeStore object containing the model (the data), various objects collectively representing the view (columns, cells, cell renderers and so on), and controller functions allowing you to sort and filter the data. All the building needs to be done in code rather than at design time. This is a lot to wrap your head around if you simply want to display some data in a tabular format, as I did when I wanted to display the parsed contents of a log file!

To make this easier for me in future, I’ve written a small class which hides just about everything. Read on…
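To give a sense of the boilerplate such a class hides, here is roughly what displaying two columns takes with the raw API (a sketch, not the class from this post; the log-file names are placeholders):

using Gtk;

class TreeViewDemo
{
    static void Main()
    {
        Application.Init();
        var win = new Window("TreeView demo");

        // The model: a ListStore with one typed column per field.
        var store = new ListStore(typeof(string), typeof(int));
        store.AppendValues("error.log", 42);
        store.AppendValues("access.log", 1337);

        // The view: a TreeView bound to the model, one renderer-backed
        // column per field, mapped by model column index.
        var view = new TreeView(store);
        view.AppendColumn("File", new CellRendererText(), "text", 0);
        view.AppendColumn("Lines", new CellRendererText(), "text", 1);

        win.Add(view);
        win.DeleteEvent += delegate { Application.Quit(); };
        win.ShowAll();
        Application.Run();
    }
}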

The Arduino IDE does a great job of simplifying the creation of programs for Arduino (and clones) which is great if you don’t have much programming experience.

But if you’ve done more than a bit of programming in the past, you’ll soon find the dinky IDE a little frustrating to use. For example, my preferred text editor, by far, is vim, and I find it quite jarring to be forced to use a basic text editor to build programs. Yes, you can select “use external editor” in the settings, but it’s clumsy. I’ve written many an Arduino program in vim, then switched to the Arduino IDE to compile it. It’s just not streamlined – it’s too slow and clunky.

It was therefore with delight I saw in the Kubuntu 12.04 repository (I recently re-installed everything on my desktop after falling back in love with KDE) that there’s a package called arduino-mk which promises the ability to build Arduino programs directly from the command line. Which means you can use vim (or emacs if you’re weird :) and makefiles.
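With the package installed, a project’s Makefile can be as small as this (the board tag, port and include path here are examples – check the package documentation for your board and install):

# Makefile for an Arduino sketch built with arduino-mk
BOARD_TAG    = uno
ARDUINO_PORT = /dev/ttyACM0
include /usr/share/arduino/Arduino.mk

After that, make builds the sketch and make upload flashes it to the board.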

I can’t be the only victim of this… I’ve experienced it on two different machines, on two versions of Kubuntu (11.10 and 12.04).

If I go into System Settings → Application Appearance → Fonts, select “Enabled” for “Use anti-aliasing”, then click “Configure” and select “Use sub-pixel rendering”, then (probably after a restart) all the fonts in the browser (Chrome, Chromium or Firefox) look absolutely horrible. I’ve spent two days, on and off, trying to fix this.

The fix I’ve found is to delete ~/.fonts.conf and ~/.kderc (not sure at this stage which is the culprit), log out and back in, and, hey presto, fonts are OK again.