When I first had to study this theorem, I found that, for me personally, the meaning of $\epsilon$ was somewhat difficult to understand and memorise. I believe I have found a simpler way to state this theorem, and I am wondering whether it can be used equivalently, or whether there is a flaw in my reasoning.

Let's look at the first case in particular, $f \in O(n^{\log_b{a}-\epsilon})$. For simplicity, let's assume $\log_b(a) = 2$. If we choose an infinitesimally small value for $\epsilon$, then this case basically expresses that $f$ must be asymptotically less than or equal to $n^{1.999 \ldots}$. In other words, $f$ must be asymptotically strictly less than $n^2$. I am wondering whether this means that we can write this first case of the theorem as $f \in o(n^{\log_b{a}})$ (and, following the same logic, $f \in \omega(n^{\log_b{a}})$ for the third case), rather than the (IMO) more convoluted alternative?

2 Answers

Perhaps it is the condition $\varepsilon > 0$ which is the root of your confusion. It is supposed to mean that $\varepsilon$ is a positive real constant (with respect to $n$). If we allowed $f \in \omega(n^{\log_b a})$ in the third case, then for $a = b = 2$ and $f(n) = n \log n$ we would have $f \in \omega(n^{\log_2 2}) = \omega(n)$, so the theorem would conclude $T(n) = \Theta(f(n)) = \Theta(n \log n)$; but the true solution of $T(n) = 2T(n/2) + n \log n$ is $\Theta(n \log^2 n)$, so that conclusion is false.
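To make the gap between $\omega(n)$ and $\Omega(n^{1+\epsilon})$ concrete, here is a quick numeric sanity check (a Python sketch; the choice $\epsilon = 0.1$ and the sample sizes are mine): $f(n) = n \log_2 n$ outgrows $n$, yet for a fixed $\epsilon > 0$ its ratio to $n^{1+\epsilon}$ tends to $0$.

```python
import math

# With a = b = 2 we have n^(log_b a) = n. Take f(n) = n * log2(n).
# f is in omega(n):             f(n)/n = log2(n)            -> infinity,
# but for any fixed eps > 0, f is NOT in Omega(n^(1+eps)):
#                               f(n)/n^(1+eps) = log2(n)/n^eps -> 0.
eps = 0.1  # an arbitrary fixed choice, for illustration only
for m in (10, 20, 40, 80, 160):
    n = 2 ** m
    f = n * math.log2(n)
    print(f"n=2^{m}: f/n = {f / n:g}, f/n^(1+eps) = {f / n ** (1 + eps):g}")
```

The first ratio grows without bound while the second eventually shrinks toward $0$, which is exactly why $n \log n$ satisfies the weakened $\omega$-condition but not the theorem's $\Omega(n^{1+\epsilon})$ hypothesis.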

On a side note, you write "$f$ must be asymptotically less than or equal to $n^{1.999\ldots}$". This does mean that $f$ must be bounded by $n^2$, though not for the reason you think it does. Recall that $1.999\ldots = 2$, so that statement is rather trivial...

I am wondering if this means that we can write this first case of the theorem as $f \in o(n^{\log_b{a}})$ (and following the same logic, $f \in \omega(n^{\log_b{a}})$ for the third case), rather than the more convoluted alternative?

That is indeed a natural attempt to understand that "convoluted" condition in simple terms. Unfortunately, it is not correct.

Consider the recurrence $T(n)=T(n/2)+\frac1{\log_2 n}$, that is, $a=1$, $b=2$ and $f(n)=\frac1{\log_2 n}$, so that $f \in o(n^{\log_2 1}) = o(1)$. If your proposed first case of the master theorem held, we would have $T(n)=\Theta(n^{\log_2 1})=\Theta(1)$. However, writing $n = 2^m$,
$$T(2^m)=T(2^{m-1})+\frac1m=T(2^{m-2})+\frac1m+\frac1{m-1}=\cdots=T(1)+\frac1m+\frac1{m-1}+\cdots+1.$$
Letting $m$ go to infinity, we see that $T(n)$ is not bounded, because the harmonic series diverges.
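As a numeric sanity check (a Python sketch; taking the base value $T(1)=0$ is my assumption, purely for illustration), the unrolling above gives $T(2^m)=T(1)+H_m$, where $H_m$ is the $m$-th harmonic number, and $H_m \approx \ln m$ grows without bound:

```python
import math

# Unrolling T(2^m) = T(2^(m-1)) + 1/m gives T(2^m) = T(1) + H_m,
# where H_m = 1 + 1/2 + ... + 1/m is the m-th harmonic number.
# Since H_m ~ ln(m) + 0.5772..., T(n) is unbounded, hence not Theta(1).
def T(m: int, T1: float = 0.0) -> float:  # value at n = 2^m; T1 = T(1) assumed
    return T1 + sum(1.0 / k for k in range(1, m + 1))

for m in (10, 100, 1000, 10000):
    print(f"m={m}: T(2^m) = {T(m):.4f}, ln(m) = {math.log(m):.4f}")
```

The printed values keep climbing in step with $\ln m$, matching the divergence of the harmonic series.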

For this and a variety of other reasons, the use of an arbitrarily small positive constant, usually denoted by $\epsilon$, pops up constantly in different places, especially in asymptotic estimates and complexity analysis. You may want to get used to it, or even get addicted to it.

Exercise 1. Verify that for any $\epsilon>0$, it is not true that $\frac1{\ln n}\in o(n^{-\epsilon})$ although $\frac1{\ln n}\in o(n^0).$
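Not a solution, but a numeric illustration for one concrete choice, $\epsilon = 0.5$ (my pick): if $\frac1{\ln n}$ were in $o(n^{-\epsilon})$, the ratio $\frac{1/\ln n}{n^{-\epsilon}} = \frac{n^{\epsilon}}{\ln n}$ would tend to $0$; instead it visibly blows up.

```python
import math

# If 1/ln(n) were o(n^(-eps)), then (1/ln n) / n^(-eps) = n^eps / ln(n)
# would tend to 0; instead it tends to infinity for any fixed eps > 0.
eps = 0.5  # one fixed choice; the exercise asks for all eps > 0
for n in (10**2, 10**4, 10**8, 10**16):
    print(f"n=1e{round(math.log10(n))}: n^eps/ln(n) = {n ** eps / math.log(n):g}")
```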

Exercise 2. Construct a counterexample for the master theorem where $a=4$, $b=2$ for

- the first case, with $f \in o(n^{\log_b{a}})$ instead of $f \in O(n^{\log_b{a}-\epsilon})$ for some $\epsilon > 0$, and
- the third case, with $f \in \omega(n^{\log_b{a}})$ instead of $f \in \Omega(n^{\log_b{a}+\epsilon})$ for some $\epsilon > 0$.