(push(array, element) puts the new element at the end of the array and increases the array’s length by 1. pop(array, index) removes the element at that index from the array, shifting all the elements at greater indices down by one and decrementing the array’s length, and returns the removed element. merge is the same as in mergesort.)
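Given those helpers, here is one plausible Python sketch of the algorithm described below: each element that is greater than its successor is popped into a second array, which is sorted recursively and merged back in. The function name `split_sort` and the backtracking step after a pop are my reconstruction, not necessarily the original code.

```python
def merge(a, b):
    """Standard mergesort merge of two sorted lists."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def split_sort(array):
    """Sketch: pull out each element greater than its successor,
    recursively sort only those, and merge. Mutates its argument."""
    unsorted = []
    i = 0
    while i < len(array) - 1:
        if array[i] > array[i + 1]:
            unsorted.append(array.pop(i))  # pop shifts later elements left
            if i > 0:
                i -= 1  # the newly adjacent pair may now be out of order
        else:
            i += 1
    if not unsorted:
        return array  # no inversions left: already sorted
    return merge(array, split_sort(unsorted))
```

After the loop the remaining array has no adjacent inversions, so it is sorted and only `unsorted` needs the recursive call.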

Instead of simply splitting the array in the middle like mergesort, it splits it so that one resulting array doesn’t need to be recursively sorted. Let $n$ be the length of the array to be sorted. Applying the Master Theorem gives us

$$T(n) = a\,T\!\left(\frac{n}{b}\right) + f(n) = T\!\left(\frac{n}{b}\right) + 2n - 1,$$

so $f(n) = 2n - 1$ and $a = c = 1$ in the statement of the Master Theorem.

However, $b$ is one over the probability that an element is greater than the next element and so goes into the array to be recursively sorted. For example, if there's a 25% chance that array[i] > array[i + 1] (for all i), then $b = 4$. $b$ is clearly greater than $1$, since the length of the unsorted array grows smaller with every recursive call, so taking the logarithm with base $b$ of $1$ will always give us $0$, which is less than $c$. Then $T(n) = \Theta(f(n)) = \Theta(n)$.
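To make that correspondence explicit (under the simplifying assumption that each comparison array[i] > array[i + 1] succeeds independently with probability $p$ — an assumption I'm introducing here, which need not hold for a fixed permutation):

$$\mathbb{E}[\text{size of recursive subarray}] = p\,(n-1) \approx \frac{n}{b} \quad\Longrightarrow\quad b = \frac{1}{p},$$

so the 25% example gives $b = 1/0.25 = 4$.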

But that can’t be true, so the Master Theorem isn't applicable for some reason; I suspect it's because $b$ isn't constant, but I don’t know how to prove that. The worst case of the sorting algorithm clearly requires a quadratic number of comparisons and the best case a linear number, so by analogy with bubblesort, insertion sort, etc., I’m guessing this algorithm also makes a quadratic number of comparisons on average.
