Unfortunately, there is a limit to the performance gain we can expect from
parallelization. This limit was described by Gene Amdahl in
1967. Here is the relevant passage from his paper:

For over a decade prophets have voiced the contention that the
organization of a single computer has reached its limits and that
truly significant advances can be made only by interconnection of a
multiplicity of computers in such a manner as to permit co-operative
solution...The nature of this overhead (in parallelism) appears to be
sequential so that it is unlikely to be amenable to parallel
processing techniques. Overhead alone would then place an upper limit
on throughput of five to seven times the sequential processing rate,
even if the housekeeping were done in a separate processor...At any
point in time it is difficult to foresee how the previous bottlenecks
in a sequential computer will be effectively overcome.

What we can deduce from this is that, in general, any parallelized
program still contains some part that must run sequentially. As we add
more and more processors, this sequential part comes to dominate the
total running time and eventually places an upper bound on the
achievable speedup. We will demonstrate this upper limit in one of the
next sections.
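To get a feel for the bound before that demonstration, the argument can be sketched numerically. The snippet below is an illustrative sketch, not code from this book: it evaluates the usual formulation of Amdahl's law, S(n) = 1 / (s + (1 - s) / n), where s is the assumed fraction of the program that must run sequentially and n is the number of processors.

```python
def amdahl_speedup(sequential_fraction, processors):
    """Maximum speedup predicted by Amdahl's law.

    sequential_fraction -- fraction of the work that cannot be
                           parallelized (between 0 and 1)
    processors          -- number of processors available
    """
    s = sequential_fraction
    return 1.0 / (s + (1.0 - s) / processors)

if __name__ == "__main__":
    # Hypothetical program: assume 10% of it is inherently sequential.
    s = 0.10
    for n in (1, 2, 4, 8, 16, 1024):
        print(f"{n:>5} processors -> speedup {amdahl_speedup(s, n):.2f}")
    # However many processors we add, the speedup can never
    # exceed 1 / s, i.e. 10x for this program.
```

Running it shows the speedup climbing quickly at first and then flattening out as the sequential 10% dominates, which is exactly the ceiling Amdahl describes.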