Some thoughts on your (and my) thoughts.

Monday, April 13, 2015

As someone who solves problems for a living, there are a few options available as far as current models of engagement (or, more traditionally, employment) are concerned:

Engaged full time

Engaged part time

Freelance (i.e. somehow the work and the worker find each other)

I would argue that [1] and [2] are special cases of [3], wherein [3] just runs in a loop and the extra overhead of performing the search and matching is avoided, hence gaining some efficiency by eliding the constant extra cost of running the matching algorithm.

[3] is the purest form of engagement (in some sense), but some industries (such as the Bollywood music industry) are more conducive to this model than others, such as a person working in a factory, churning out either tangible goods or code (please pardon the word "factory"; some of you might be offended, and you should be, since not all code is created equal; a lot of code is also a work of art). I'm ignoring some of the things that come with full-time (or part-time) employment, such as health insurance, since I want to focus on what I consider the most important aspects of employment: impact and engagement.

Why is it that some industries have a preference for a certain model and others prefer another model?

Is it in the best interests of both parties to gravitate towards the most flexible model in most situations?

In which situations do less flexible models thrive, making a longer-term, contractual type of engagement the better fit?

In which situations do more flexible models thrive, making a shorter-term, freelance type of engagement the better fit?

There's an entire class of algorithms and data structures designed around how efficiently they utilize the host system's cache while processing data. These are called cache-efficient algorithms and data structures [pdf].

There are two classes of cache-efficient algorithms and data structures:

One class tunes the algorithm for a particular size of cache or cache hierarchy. The algorithm is aware of the cache sizes, the number of caching levels, and the relative speeds of each of these caching levels.

Another class is oblivious to the underlying cache sizes and layouts, and is provably optimal for any cache size and caching layout (sounds magical, doesn't it!). These are called Cache Oblivious Algorithms. See this link on Cache Oblivious Algorithms for more details, and this link for more details on the model and the assumptions made in the Cache Oblivious model.

An example of a cache-efficient algorithm that is also cache-oblivious is Linear Search.

An example of a cache-inefficient algorithm (which is nonetheless cache-oblivious, since it has no machine-specific tuning parameters) is Binary Search.

An example of a cache-efficient data structure that isn't cache-oblivious is the B-Tree (since B is the tuning parameter for the particular machine on which we are running).

Without getting into the details, the complexity of running Binary Search on an array in the Disk Access Model (DAM) (where we are only concerned with the number of disk blocks read, not the number of comparisons made) is O(log(N/B)) = O(log N - log B), since we must load a block from disk on (almost) every probe until we reach a small enough sub-array (of size B) such that no more jumps within that sub-array will trigger another disk I/O operation to fetch another disk block. The optimal complexity for searching ordered data on disk is realized by laying the data out recursively in a static search tree (the so-called van Emde Boas layout), which reduces the complexity to O(log_B N).
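To see where the O(log(N/B)) figure comes from, here is a small sketch (the function and the parameter values are my own illustration, not from the post) that counts the distinct blocks a binary search reads in the DAM:

```python
import math

def blocks_touched(n, b, target):
    """Distinct size-b disk blocks a binary search over a sorted array of
    n items touches while searching for index `target` (its DAM cost)."""
    blocks = set()
    lo, hi = 0, n
    while lo < hi:
        mid = (lo + hi) // 2
        blocks.add(mid // b)       # each probe reads (at most) one new block
        if mid < target:
            lo = mid + 1
        else:
            hi = mid
    return len(blocks)

# With N = 2**20 items and B = 2**10 items per block, the search touches
# about log2(N/B) = 10 distinct blocks, not log2(N) = 20.
n, b = 2 ** 20, 2 ** 10
print(max(blocks_touched(n, b, t) for t in range(0, n, n // 64)))
```

The early probes all land in different blocks; only once the search interval fits inside one block do further probes stop costing I/Os, which is exactly the log(N/B) behaviour described above.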

However, implementing that structure is somewhat complicated, and we should ask ourselves if there is a way to get the best of both worlds, i.e.:

Runtime efficiency of the cache-oblivious recursive layout, and

Implementation simplicity of the standard Binary Search algorithm on an ordered array

Turns out, we can reach a compromise if we use the square-root trick. This is how we'll proceed:

Promote every sqrt(N)'th element to a new summary array. We use this summary array as a first-level lookup structure.

To lookup an element, we perform binary search within this summary array to find the possible extents within the original array where our element of interest could lie.

We then use binary search on that interesting sub-array of our original array to find our element of interest.

For example:

Suppose we are searching for the element '7', we will first perform binary search on the top array; i.e. [10,22,33,43] and identify that 7 must lie in the original values sub-array at an index that is before the index of 10. We then restrict our next binary search to the sub-array [5,7,8,10].

Suppose we are searching for the element '22', we first identify the sub-array [11,21,22,25,26,33] as potentially containing a solution and perform binary search on that sub-array.

Even though we are asymptotically performing the same number of overall element comparisons, our cache locality has gone up: the number of block transfers we'll perform for a single lookup is now 2·log(√N/B) = log N - 2·log B, which is an additive log B smaller than the log(N/B) = log N - log B transfers for our normal binary search on the sorted array.
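As a concrete sketch of the scheme above (the names are my own, and this variant promotes each segment's largest element to the summary array):

```python
import bisect
import math

def build_summary(arr):
    """Promote roughly every sqrt(N)-th element of the sorted list `arr`
    (the last element of each size-sqrt(N) segment) into a summary array."""
    step = max(1, math.isqrt(len(arr)))
    return arr[step - 1::step], step

def two_level_search(arr, summary, step, x):
    """Binary search the summary for the one segment that can contain x,
    then binary search only that segment of `arr`. Returns an index or -1."""
    i = bisect.bisect_left(summary, x)           # first segment whose max >= x
    lo, hi = i * step, min((i + 1) * step, len(arr))
    j = bisect.bisect_left(arr, x, lo, hi)       # search just that segment
    return j if j < len(arr) and arr[j] == x else -1

arr = [5, 7, 8, 10, 11, 21, 22, 25, 26, 33, 40, 43]
summary, step = build_summary(arr)   # summary = [8, 21, 26, 43], step = 3
print(two_level_search(arr, summary, step, 22))  # -> 6
print(two_level_search(arr, summary, step, 9))   # -> -1
```

Each lookup performs two short binary searches, one over the √N-sized summary and one over a √N-sized segment, which is where the 2·log(√N/B) block-transfer count comes from.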

When I started, I used a wall to support me when I kicked up from the dolphin pose

Forearm strength, shoulder strength, and general flexibility can be developed by practising multiple rounds of SuryaNamaskaar

Andrei Ram Om's video on the Feathered Peacock pose (above) touches on some subtle but important aspects of the back position (arched) and the differences between various hand positions. I think it's extremely relevant to know these differences when you start, so that you can experience the full flavour of the pose.

Insight:
Suppose n = 6. Consider some candidate (target answer X), say 1101. Can this be correct? We can verify by dividing and checking the remainder: 1101 mod 6 = 3 ≠ 0, so 1101 is not a multiple of 6 and can't be the answer (and even a multiple might not be the smallest X).

This basically means that we want to find a subset of powers of 10 which, when added together, leave a remainder of 0 (mod n). We can solve this using dynamic programming, with a technique similar to the one used to solve the subset sum problem. The solution to our problem (as to the subset sum problem) is pseudo-polynomial; specifically, our problem is solved in time O(n log X).

Here is the ideone link, and below is the code (in Python) to solve it.
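The original ideone code isn't reproduced here, so below is a sketch of one way to solve what the post appears to describe: finding the smallest positive multiple of n whose decimal digits are all 0s and 1s, i.e. the smallest sum of distinct powers of 10 that is 0 (mod n). The function name is mine, and it uses a BFS over remainders in place of the subset-sum-style table, which explores the same O(n) states:

```python
from collections import deque

def smallest_01_multiple(n):
    """Smallest positive multiple of n whose decimal digits are all 0 or 1.

    BFS over remainders mod n: start from the number "1" and append a 0 or
    1 digit per step. There are only n distinct remainders, so the work is
    pseudo-polynomial in n, and the first path to remainder 0 spells the
    smallest such number (BFS reaches shorter numbers first).
    """
    if n <= 0:
        raise ValueError("n must be positive")
    parent = {1 % n: (None, "1")}   # remainder -> (previous remainder, appended digit)
    queue = deque([1 % n])
    while 0 not in parent:
        r = queue.popleft()
        for d in "01":
            nr = (r * 10 + int(d)) % n
            if nr not in parent:
                parent[nr] = (r, d)
                queue.append(nr)
    digits = []                      # walk back from remainder 0 to the start
    r = 0
    while r is not None:
        r, d = parent[r]
        digits.append(d)
    return int("".join(reversed(digits)))

print(smallest_01_multiple(6))   # -> 1110
print(smallest_01_multiple(7))   # -> 1001
```

For n = 6 the answer must be even with digit sum divisible by 3, and 1110 is indeed the smallest 0/1-digit number satisfying both.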

Follow up problem:

Find a number 'X' given an 'n' (similar to the problem above) except that you are allowed to only use digits {0, 3, 5, 8}. Can you always find a solution for any 'n'?

Saturday, November 01, 2014

It's common for me to see blog posts by companies talking about the high traffic volumes (in terms of QPS/RPS) they handle and the amount of data they process, and that's super cool. But then there's another class of facts I see floating around a lot: the size of their Hadoop or serving cluster, where "dozens or hundreds" of machines in the fleet seems to be something to be proud of. I don't understand this way of thinking, or at least don't see the point of it. As I see things, it's nicer if you can get more done with fewer machines, not more machines in your fleet.

Sunday, October 12, 2014

Gaurav in his blog post describes in great detail what a static to dynamic transform is, when it is applicable, how to dynamize a static data structure, and the costs of inserting and looking up values in a dynamized data structure (relative to the costs in the corresponding static structure). I'll skip all of that since it's been presented so well in the link above, but will give a short description of the essentials.

What is a static data structure? A static data structure is one where it isn't computationally efficient to insert a new element, and typically involves re-building the whole structure to be able to add just one element. e.g. A sorted array. If you want to keep an array of elements sorted, then inserting a single element could involve shifting all the elements one place to the right.

What is a dynamic data structure? A dynamic data structure is one where adding a single element is computationally efficient and doesn't involve touching every element in the data structure, e.g. a height-balanced binary search tree. Inserting a single element into a height-balanced binary search tree takes O(log n) time, including any rebalancing rotations. See this page to read more about the differences between static and dynamic data structures.

What is the amortized insertion time to insert an element in a dynamized sorted array? Inserting a single element into a dynamized sorted array costs O(log n) amortized per insertion.

How much extra space do you need to dynamize a sorted array? You need O(n) extra space to dynamize a sorted array, since you need intermediate storage space when merging parts (levels) of the dynamized data structure.

What is the query time in a dynamized sorted array? Searching for an element in a sorted array costs O(log n), whereas searching in a dynamized sorted array costs O(log² n).

We can see that the overhead of dynamizing a sorted array is something we can live with. In fact, it's almost unbelievable that we can dynamize a sorted array by paying only as much as an O(log n) overhead per insertion, and an O(log n) overhead per query.
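As a rough sketch of the transform (class and method names are my own), the Bentley-Saxe "logarithmic method" keeps O(log n) sorted levels of doubling sizes and merges them on insert, the way binary addition carries:

```python
import bisect

class DynamizedSortedArray:
    """Bentley-Saxe (logarithmic method) dynamization of a sorted array.

    Level i is either empty or a sorted list of exactly 2**i elements, so
    there are at most O(log n) levels. Insertion works like binary
    addition: carry and merge until an empty level is found. This gives
    O(log n) amortized insertion and O(log^2 n) worst-case lookup.
    """

    def __init__(self):
        self.levels = []  # levels[i] is [] or a sorted list of 2**i items

    def insert(self, x):
        carry = [x]  # a sorted run of size 2**i at step i
        i = 0
        while i < len(self.levels) and self.levels[i]:
            # Level occupied: merge it into the carry and keep carrying.
            carry = sorted(self.levels[i] + carry)  # a linear merge also works
            self.levels[i] = []
            i += 1
        if i == len(self.levels):
            self.levels.append(carry)
        else:
            self.levels[i] = carry

    def contains(self, x):
        # Binary search each non-empty level: O(log n) searches, O(log n) each.
        for level in self.levels:
            j = bisect.bisect_left(level, x)
            if j < len(level) and level[j] == x:
                return True
        return False
```

This carry-and-merge discipline is, loosely speaking, the same idea behind how LSM-tree style stores keep their SSTable compactions affordable.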

Static to Dynamic transforms in practice: Consider that you're working with an inherently static data structure such as the SSTable (Sorted String Table), and that your system must maintain all its data as part of some SSTable. In such a case, inserting even a single row means that you need to rebuild a new SSTable containing all the rows from the previous SSTable plus the newly inserted row. This obviously means that the cost of inserting a single row can be linear in N, N being the number of elements in the newly created SSTable. This is extremely undesirable, since it means that inserting N elements into the system will cost O(N²).

This is exactly where the Static to Dynamic Transform comes in super handy. You just apply the transform and, almost magically, you've gone from an overall running time of O(n²) down to O(n log n) for inserting n elements into the data structure.