Complexity of an Algorithm

Suppose we have a problem and two different algorithms that solve it. How can we decide which algorithm is better? We could implement and run both of them (this is called experimental evaluation), but that is expensive and time-consuming. There should be another way to decide, and indeed there is. Let's see.

The running time of an algorithm depends on the computer, the programming language, the compiler, and so on; for example, the same algorithm may run faster in one programming language than in another. Therefore, an algorithm should be analyzed mathematically, in a machine-independent way. This is called asymptotic analysis. Asymptotic analysis lets us evaluate how an algorithm behaves as the input size grows: we calculate the growth rate instead of the actual running time. There are several ways to express the growth rate of an algorithm, but every method shares the same basic idea: express bounds on the growth-rate function. There are three common asymptotic notations: Big O, Big Θ, and Big Ω. In this post, we will talk about Big O notation.

Big O Notation

Big O notation is used to express an upper bound on the growth rate. When we calculate Big O, we only care about the most significant term of the function; constants have no influence on the growth rate. Let's see some examples.

f(n) = n^2 + 23n + 6 —–> For a large n, the constant 6 does not affect the growth rate, and even for a very big n such as 24123124 (a random large value), the 23n term has no remarkable effect either. In this function, n^2 has the most significant effect on the growth rate. Therefore, the given function is O(n^2).

If a loop variable in the algorithm is divided or multiplied by a constant value on each iteration, the time complexity grows much more slowly than n; the loop body runs only about log n times, so the complexity is O(log n). For example: for(int i = 2; i < n; i *= 4) // —> "i" is multiplied by a constant value.

If the algorithm contains a single loop whose variable is incremented or decremented by a constant value, the operations inside the loop execute n times, so the time complexity of the algorithm is O(n). For example:

If the algorithm contains nested loops, the time complexity is O(n^c), where c is the number of nested loops. For example:

for(int i = 0; i < n; i++)
{
    for(int b = 0; b < n; b++)
    {
        // some constant time complexity operations
    }
}

There are two nested "for loops", so the time complexity is O(n^2).

Bubble sort is a classic example of this pattern. It uses two nested "for loops", and the operations inside the loops are constant-time operations. So, the time complexity of bubble sort is O(n^2).

O(2^n) [Fibonacci numbers (recursive)]: The recursive calculation of Fibonacci numbers is an example of O(2^n). Let's see the algorithm.

In the introduction, we asked how to decide which of two algorithms for the same problem is better. Then we talked about Big O notation and gave some examples. Above, there are two different algorithms for calculating Fibonacci numbers (actually there are three, best, iterative, and recursive, but we covered only two of them): an iterative one and a recursive one. We calculated the time complexity of both and obtained two different results: the iterative algorithm has O(n) time complexity, while the recursive one has O(2^n). O(2^n) is a serious time complexity. If n equals 100, 2^100 operations take approximately 10^30 nanoseconds, while the age of the universe is about 10^26 nanoseconds. Interesting, right? So we should choose the iterative algorithm, and we decided that by calculating the time complexity. This shows the importance of time complexity.