Taking advantage of multiplication's associative property, you can get minimal error in O(n) by making two runners, one forward and one backward, and calculating the result as result[i] = forward[i-1] * backward[i+1].
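A minimal JavaScript sketch of the two-runner idea (the function and variable names are mine, not from the original post):

```javascript
// Build running products from each end, then combine them so that
// result[i] is the product of every element except nums[i].
function productExceptSelf(nums) {
  const n = nums.length;
  const forward = new Array(n);  // forward[i]  = nums[0] * ... * nums[i]
  const backward = new Array(n); // backward[i] = nums[i] * ... * nums[n - 1]
  for (let i = 0; i < n; i++) {
    forward[i] = i === 0 ? nums[i] : forward[i - 1] * nums[i];
  }
  for (let i = n - 1; i >= 0; i--) {
    backward[i] = i === n - 1 ? nums[i] : backward[i + 1] * nums[i];
  }
  const result = new Array(n);
  for (let i = 0; i < n; i++) {
    const left = i === 0 ? 1 : forward[i - 1];       // product before i
    const right = i === n - 1 ? 1 : backward[i + 1]; // product after i
    result[i] = left * right;                        // result[i] = forward[i-1] * backward[i+1]
  }
  return result;
}
```

For example, `productExceptSelf([1, 2, 3, 4])` gives `[24, 12, 8, 6]`.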

Given n pots of gold arranged in a line, each containing a different quantity of coins, you are allowed to remove one pot from either end, and then the other player removes one. This is repeated until no pots remain. Write an algorithm to find the set of moves that collects the most coins in this two-player game. Assume your opponent also plays optimally.

This can easily be optimized from O(2^n) to O(n^2) by moving to array indexes and memoization. Note: I'm ignoring the cost of sum, slice and concat for big-O purposes. If you only want the total, you can keep track of it instead of an array and save a bunch of memory.
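Here is a sketch of that totals-only variant in JavaScript, using array indexes and memoization as described (this is my own reconstruction, not the original code):

```javascript
// Most coins the first player can collect from pots, assuming the
// opponent also plays optimally. Memoized over the (i, j) index pair.
function maxGold(pots) {
  const n = pots.length;
  const memo = new Map();
  function best(i, j) { // best score for the player to move on pots[i..j]
    if (i > j) return 0;
    if (i === j) return pots[i];
    const key = i * n + j;
    if (memo.has(key)) return memo.get(key);
    // After we take a pot, the opponent replies optimally,
    // leaving us the worse of the two remaining positions.
    const takeLeft = pots[i] + Math.min(best(i + 2, j), best(i + 1, j - 1));
    const takeRight = pots[j] + Math.min(best(i + 1, j - 1), best(i, j - 2));
    const value = Math.max(takeLeft, takeRight);
    memo.set(key, value);
    return value;
  }
  return best(0, n - 1);
}
```

For example, with pots `[8, 15, 3, 7]` the first player can guarantee 22 coins by taking from the right first.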

After looking at skew heaps, merge sort and Cartesian trees, I figured there has to be a way to combine all three. My idea is to start with a Cartesian tree, take its root as the first element and then merge its leaves in a way that skews them to the right. Child swapping is employed to keep the tree shallow. The number of comparisons required for this sort is consistently less than quick sort's. I also changed the Cartesian tree generator to use a stack instead of a parent pointer to save memory.

Since any heap tree, including a Cartesian tree, degrades into a sorted linked list when it is fully unbalanced, it makes sense to try to unbalance the Cartesian tree. It turns out that a skew heap works great for this. The following Cartesian skew tree sort now approaches the speed of quick sort (in number of compares) for random data while maintaining linear time for nearly sorted data.
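The child-swapping merge at the heart of a skew heap is small enough to sketch; this is a generic min-heap version in JavaScript (my own illustration, not the post's implementation):

```javascript
function node(value) { return { value, left: null, right: null }; }

// Merge two skew (min-)heaps: always merge into the right child,
// then unconditionally swap the children to keep the tree shallow.
function skewMerge(a, b) {
  if (a === null) return b;
  if (b === null) return a;
  if (b.value < a.value) { const t = a; a = b; b = t; } // keep the smaller root
  a.right = skewMerge(a.right, b);
  const left = a.left; // the unconditional swap is what "skews" the heap
  a.left = a.right;
  a.right = left;
  return a;
}
```

Inserting is merging with a single node, and popping the minimum is merging the root's two children, so the whole priority queue is just this one function.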

Thinking about Cartesian tree sort some more, it seems logical to also use the Cartesian tree as the priority queue, since it is already a heap. Performance appears to be the same, without the extra memory requirement of an external priority queue.

Cartesian tree sort solves heap sort's problem of not taking advantage of partially sorted data by building a Cartesian tree. This tree is then traversed pre-order, inserting nodes into a priority queue; nodes are removed from the priority queue as others are being inserted. Since the Cartesian tree is also a heap, nodes are never removed out of order. In the case of partially sorted data, the Cartesian tree degrades into an unbalanced tree resembling a linked list. Since there is then only one element per row, the priority queue pushes and pops the same element without doing any comparisons.

This has θ(n log n) worst-case running time and θ(n) running time for sorted data. Cartesian tree sort does require extra space for the tree and the priority queue. Wikipedia on Cartesian tree sort.

This flattened Cartesian tree graph has an x-axis of the element index and a y-axis of the node's depth. The root is the topmost dot. Cartesian trees can be built in linear time and will return the original data in order if the tree is traversed in-order.
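The linear-time, stack-based construction mentioned above can be sketched like this in JavaScript (names are mine; the stack holds the right spine of the tree built so far):

```javascript
// Build a min-heap-ordered Cartesian tree in linear time.
// Each element is pushed and popped from the spine stack at most once.
function cartesianTree(values) {
  const spine = []; // nodes on the current right spine, root first
  for (const value of values) {
    const node = { value, left: null, right: null };
    let last = null;
    // Pop nodes larger than the new value; the last one popped
    // becomes the new node's left subtree.
    while (spine.length > 0 && spine[spine.length - 1].value > value) {
      last = spine.pop();
    }
    node.left = last;
    if (spine.length > 0) spine[spine.length - 1].right = node;
    spine.push(node);
  }
  return spine.length > 0 ? spine[0] : null;
}

// In-order traversal returns the original data in its original order.
function inorder(node, out = []) {
  if (node === null) return out;
  inorder(node.left, out);
  out.push(node.value);
  inorder(node.right, out);
  return out;
}
```

Because the tree is heap-ordered, the minimum is always at the root, while the in-order sequence preserves the input.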

Insertion sort works by iterating over a list once, starting from the left. It compares each element with the previous one, swapping them if they are out of order, and keeps swapping the same element, moving it left, until the comparison succeeds or the element reaches the first position. Wikipedia on Insertion sort.
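In JavaScript, that description maps almost line for line onto a pair of loops (a sketch of the standard algorithm, not the animation code itself):

```javascript
function insertionSort(list) {
  for (let i = 1; i < list.length; i++) {
    // Keep swapping list[j] left until it is in order or in first position.
    for (let j = i; j > 0 && list[j] < list[j - 1]; j--) {
      const t = list[j]; list[j] = list[j - 1]; list[j - 1] = t;
    }
  }
  return list;
}
```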

An interesting thing about insertion sort is that it often runs faster than many θ(n log n) algorithms for small datasets of 8-20 elements, even though its worst-case runtime is θ(n^2).

A heap implementation of a priority queue is very similar to heap sort and can easily be turned into one. The implementation difference is that the queue doesn't know the final heap size, so it has to build the heap from the ground up. This requires finding parent nodes in the heap, not just child nodes. Performance is nearly identical, except that the queue requires extra O(n) space. Neither of these heap implementations takes advantage of partially sorted lists.
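The "ground up" part is the push operation: append the new value, then bubble it up by comparing with parents. A minimal min-heap sketch in JavaScript (my own illustration):

```javascript
// Grow-as-you-go binary heap push for an array-backed min-heap.
// The parent of index i lives at (i - 1) >> 1, so we walk upward.
function heapPush(heap, value) {
  heap.push(value);
  let i = heap.length - 1;
  while (i > 0) {
    const parent = (i - 1) >> 1;
    if (heap[parent] <= heap[i]) break; // heap order restored
    const t = heap[i]; heap[i] = heap[parent]; heap[parent] = t;
    i = parent;
  }
  return heap;
}
```

Heap sort never needs this upward walk because it knows the final size and can heapify top-down; the queue pays for its flexibility with these parent comparisons.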

Heap sort is similar to selection sort, but instead of scanning the entire list for the minimal value it uses a heap to select it. There is an added step of rebuilding the heap every time the minimal value is removed. Heap sort's worst-case runtime is θ(n log n). In this implementation I use a max heap. Wikipedia on Heap sort.
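An in-place max-heap version can be sketched like this in JavaScript (a standard implementation, not necessarily the one animated here):

```javascript
// In-place heap sort with a max heap: heapify the array, then repeatedly
// swap the maximum to the end and sift the new root back down.
function heapSort(list) {
  const n = list.length;
  function siftDown(i, size) {
    while (true) {
      let largest = i;
      const l = 2 * i + 1, r = 2 * i + 2;
      if (l < size && list[l] > list[largest]) largest = l;
      if (r < size && list[r] > list[largest]) largest = r;
      if (largest === i) return;
      const t = list[i]; list[i] = list[largest]; list[largest] = t;
      i = largest;
    }
  }
  for (let i = Math.floor(n / 2) - 1; i >= 0; i--) siftDown(i, n); // build heap
  for (let end = n - 1; end > 0; end--) {
    const t = list[0]; list[0] = list[end]; list[end] = t; // max to the end
    siftDown(0, end);                                      // rebuild the heap
  }
  return list;
}
```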

The next set of sorts is based on selection sort. Selection sort works by finding the minimal value in a list and swapping it into the first position. This is repeated on the remainder of the list, swapping the next minimal value into the second position, and so forth until the list is sorted. Wikipedia on Selection sort.
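As a sketch in JavaScript (standard textbook form):

```javascript
function selectionSort(list) {
  for (let i = 0; i < list.length - 1; i++) {
    let min = i;
    for (let j = i + 1; j < list.length; j++) {
      if (list[j] < list[min]) min = j; // find the minimal remaining value
    }
    const t = list[i]; list[i] = list[min]; list[min] = t; // swap it into place
  }
  return list;
}
```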

Quick sort is a classic divide-and-conquer algorithm. It works by picking a pivot element and dividing the list into two sublists based on whether each element is less than or greater than the pivot. This is repeated with new pivots in the smaller lists until the list size equals one.

Implementing this algorithm in place is interesting since you don't know in advance how many elements will end up on each side of the pivot. To get around this, you can swap the pivot element with the last element in the list. Then you iterate over the list, moving all the elements less than the pivot in front of all the elements greater than it. Once this is done, you swap the last element (which is the pivot) with the first element that is greater than the pivot. This puts the pivot in the middle, with all the smaller elements to its left and all the larger elements to its right, so the pivot is now in its correct position. Wikipedia on Quick sort.
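The partition scheme described above (park the pivot at the end, group the smaller elements at the front, then swap the pivot in) is essentially the Lomuto partition; a JavaScript sketch:

```javascript
function swap(list, a, b) { const t = list[a]; list[a] = list[b]; list[b] = t; }

// In-place quick sort. The pivot choice (middle element) is my own
// illustration; any pivot works once it is swapped to the end.
function quickSort(list, lo = 0, hi = list.length - 1) {
  if (lo >= hi) return list;
  const mid = lo + Math.floor((hi - lo) / 2);
  swap(list, mid, hi);                 // park the pivot at the end
  const pivot = list[hi];
  let boundary = lo;                   // first element >= pivot
  for (let i = lo; i < hi; i++) {
    if (list[i] < pivot) swap(list, i, boundary++);
  }
  swap(list, boundary, hi);            // pivot into its correct position
  quickSort(list, lo, boundary - 1);   // recurse on the smaller sublists
  quickSort(list, boundary + 1, hi);
  return list;
}
```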

Gnome sort works by stepping forward in the list if two adjacent elements are in order. If they are out of order, it swaps them and steps back. This behavior emulates insertion sort, but with many more swaps. Wikipedia on Gnome sort.
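The whole algorithm fits in one loop (a sketch in JavaScript):

```javascript
function gnomeSort(list) {
  let i = 0;
  while (i < list.length) {
    if (i === 0 || list[i - 1] <= list[i]) {
      i++; // in order (or at the front): step forward
    } else {
      const t = list[i]; list[i] = list[i - 1]; list[i - 1] = t;
      i--; // out of order: swap and step back
    }
  }
  return list;
}
```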

Odd-even sort is another bubble sort variant and is similar to cocktail sort in that it makes alternating passes. Instead of changing direction like cocktail sort, it tests all the even-indexed pairs in one pass and then all the odd-indexed pairs in the next. Wikipedia on Odd-even sort.
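A JavaScript sketch of the alternating passes:

```javascript
function oddEvenSort(list) {
  let sorted = false;
  while (!sorted) {
    sorted = true;
    // start = 1 compares pairs (1,2), (3,4), ...; start = 0 compares (0,1), (2,3), ...
    for (let start = 1; start >= 0; start--) {
      for (let i = start; i < list.length - 1; i += 2) {
        if (list[i] > list[i + 1]) {
          const t = list[i]; list[i] = list[i + 1]; list[i + 1] = t;
          sorted = false; // any swap forces another round of passes
        }
      }
    }
  }
  return list;
}
```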

Inspired by Wikipedia's algorithm pages, I decided to recreate their animations using JavaScript and HTML5 canvas elements. IE 8 does not support canvas; however, new versions of all other modern browsers do: Mozilla Firefox, Google Chrome, Safari, and Opera.

Bubble sort works by looking at every element in the list to be sorted and swapping adjacent pairs that are out of order. If it makes it to the end of the list without swapping any elements, it stops and the list is in order. If not, it starts at the beginning again. Wikipedia on Bubble Sort.
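In JavaScript that is simply (a sketch of the standard algorithm):

```javascript
function bubbleSort(list) {
  let swapped = true;
  while (swapped) {
    swapped = false; // a full pass without swaps means the list is sorted
    for (let i = 0; i < list.length - 1; i++) {
      if (list[i] > list[i + 1]) {
        const t = list[i]; list[i] = list[i + 1]; list[i + 1] = t;
        swapped = true;
      }
    }
  }
  return list;
}
```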

Having noticed it was Pi Day before Google alerted me to the fact, I'm feeling pretty dorky. Following some links, I found the current record holder for the most digits of pi computed. Seeing that his algorithm is recursive, I thought, "Hey, I could write that in F# with bigint." So here I am on Pi Day computing pi.

There is no built-in square root method for bigint, so I had to make my own. It takes about 30 seconds on my i7 to compute the first 100,000 digits. I confirmed the results against the record holder's data.
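The original was F# with bigint; the same idea, an integer square root by Newton's method, can be sketched in JavaScript with BigInt (my own reconstruction, not the code referenced above):

```javascript
// Integer square root of a BigInt by Newton's method:
// iterate x <- (x + n / x) / 2 until it stops decreasing.
function isqrt(n) {
  if (n < 0n) throw new RangeError("isqrt of negative number");
  if (n < 2n) return n;
  let x = n;
  let next = (x + n / x) / 2n;
  while (next < x) { // the sequence decreases monotonically to floor(sqrt(n))
    x = next;
    next = (x + n / x) / 2n;
  }
  return x; // largest integer x with x * x <= n
}
```

Starting the iteration at a better initial guess (e.g. from the bit length of n) converges much faster; starting at n keeps the sketch short.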

After a break from game development and finishing a work project in F#, I'm back, and it's time for me to leave XNA behind and move on to a collection of purpose-built libraries. For my graphics engine I'm choosing OGRE by way of Mogre, a .NET wrapper for it.

After digging into the C# SDK, I figured a good start would be porting MogreForm. To get this code to run: install the SDK, create a project, set it to .NET 2.0, add a reference to Mogre_d (C:\MogreSDK\bin\Debug), and set the working directory to C:\MogreSDK\bin\Debug. The sample pulls resources through relative paths, so you need to run in the SDK Debug directory.

On a side note I need to make my own custom theme for better syntax highlighting.

```fsharp
// Normally we would use the foreach syntax, which enumerates the values,
// but in this case we need CurrentKey too
while seci.MoveNext() do
    for pair in seci.Current do
        ResourceGroupManager.Singleton.AddResourceLocation(pair.Value, pair.Key, seci.CurrentKey)
```