First of all: What is the hyper-period? The hyper-period is the smallest
interval of time after which the periodic pattern of all the tasks repeats. It
is typically defined as the LCM (least common multiple) of the periods of the tasks
(in a periodic task system).
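
For example (with illustrative periods of my choosing), a task set with periods
10, 19 and 41 ms has a hyper-period of LCM(10, 19, 41) = 7790 ms. A minimal C
sketch of the computation:

    #include <stdio.h>

    static long gcd(long a, long b) { return b ? gcd(b, a % b) : a; }
    static long lcm(long a, long b) { return a / gcd(a, b) * b; }

    int main(void)
    {
        long periods[] = {10, 19, 41};   /* illustrative task periods, in ms */
        long hyper = 1;
        for (int i = 0; i < 3; i++)
            hyper = lcm(hyper, periods[i]);
        printf("hyper-period = %ld ms\n", hyper);   /* prints 7790 */
        return 0;
    }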

A small hyper-period value has applications in several areas of real-time
scheduling.

There is little room for improvement when using the mathematical definition of
the hyper-period: it is like trying to reduce the result of 5 * 5 because you think
that 25 is too big.

But what would happen if the periods were not integer numbers but were defined as
ranges of valid values? That is, each period is defined as a nominal value plus a
tolerance. In fact, this is the usual way engineers and physicists deal with
physical magnitudes.

What is remarkable is that with this "engineering" definition of period, it is
possible to exponentially reduce the value of the resulting hyper-period.
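
To illustrate the effect with hand-picked numbers (this is not the actual
algorithm of my proposal, just the idea), allowing each period to move within a
+/-10% tolerance band lets us replace the periods 10, 19 and 41 with the
harmonic values 10, 20 and 40, collapsing the hyper-period from 7790 to 40:

    #include <stdio.h>

    static long gcd(long a, long b) { return b ? gcd(b, a % b) : a; }

    static long lcm_all(const long *p, int n)
    {
        long h = 1;
        for (int i = 0; i < n; i++)
            h = h / gcd(h, p[i]) * p[i];
        return h;
    }

    int main(void)
    {
        long nominal[]  = {10, 19, 41};  /* nominal periods               */
        long adjusted[] = {10, 20, 40};  /* 20 in [17.1, 20.9] and
                                            40 in [36.9, 45.1] (+/-10%)   */

        printf("nominal  hyper-period = %ld\n", lcm_all(nominal, 3));   /* 7790 */
        printf("adjusted hyper-period = %ld\n", lcm_all(adjusted, 3));  /*   40 */
        return 0;
    }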

This problem is the first issue that must be addressed if we want to use EDF in a
real-time system. Once we know how to analyse the schedulability of a basic task set,
it is possible to extend the solution to include further restrictions such as
precedence constraints between tasks, context-switch overhead, or mutually exclusive
resources.
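
To make the connection with the hyper-period concrete, here is a sketch of the
classic processor-demand criterion for EDF (assuming synchronous periodic tasks
with constrained deadlines, D <= T, and task parameters invented for the
example): the task set is schedulable iff the demand bound function never
exceeds t at any absolute deadline up to the hyper-period, which is exactly
where a small hyper-period pays off:

    #include <stdio.h>

    /* Periodic task: worst-case execution time C, relative deadline D,
       period T (all in the same time unit). */
    struct task { long C, D, T; };

    static long gcd(long a, long b) { return b ? gcd(b, a % b) : a; }
    static long lcm(long a, long b) { return a / gcd(a, b) * b; }

    /* Demand bound function: execution demand that must complete in
       any interval of length t. */
    static long dbf(const struct task *ts, int n, long t)
    {
        long demand = 0;
        for (int i = 0; i < n; i++)
            if (t >= ts[i].D)
                demand += ((t - ts[i].D) / ts[i].T + 1) * ts[i].C;
        return demand;
    }

    static int edf_schedulable(const struct task *ts, int n)
    {
        double u = 0;
        long hyper = 1;

        for (int i = 0; i < n; i++) {
            u += (double)ts[i].C / ts[i].T;   /* necessary: U <= 1 */
            hyper = lcm(hyper, ts[i].T);
        }
        if (u > 1.0)
            return 0;

        /* Check every absolute deadline up to the hyper-period. */
        for (int i = 0; i < n; i++)
            for (long t = ts[i].D; t <= hyper; t += ts[i].T)
                if (dbf(ts, n, t) > t)
                    return 0;
        return 1;
    }

    int main(void)
    {
        struct task ts[] = { {1, 4, 4}, {2, 6, 6}, {3, 12, 12} };
        printf("schedulable: %s\n", edf_schedulable(ts, 3) ? "yes" : "no");
        return 0;
    }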

The first solution to this problem was proposed in 200? by Sanjoy Baruah. Three
years later I proposed another solution based on a completely different property.
This new solution paved the way for the optimal aperiodic service for EDF.

Dynamic memory allocation (malloc/free) is a technique that has been widely used
since the very beginning of computer science. It was studied in depth during the
1960s and 1970s, mainly with the intention of addressing the fragmentation problem.
Later the goal shifted to speeding up the operations (allocation and de-allocation).
Despite the many research efforts, it is still an open problem that has not been
fully settled.

A major breakthrough was achieved a few years ago: the design of the TLSF
(Two-Level Segregated Fit) allocator. It is a fast, constant-time allocator with
very low fragmentation.
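
The key to its constant-time behaviour is a two-level mapping that turns a block
size into a pair of free-list indices using only bit operations. A minimal
sketch of that mapping (it assumes a GCC/Clang builtin for the bit scan and
sizes of at least SL_COUNT bytes; the real allocator also special-cases small
blocks and alignment):

    #include <stdint.h>
    #include <stdio.h>

    #define SL_LOG2  4                    /* 2^4 = 16 second-level lists */
    #define SL_COUNT (1u << SL_LOG2)

    /* Index of the most significant set bit, i.e. floor(log2(x)). */
    static int fls_u32(uint32_t x)
    {
        return 31 - __builtin_clz(x);
    }

    /* O(1) mapping of a block size to (first, second) level indices. */
    static void mapping(uint32_t size, int *fl, int *sl)
    {
        *fl = fls_u32(size);                               /* power-of-two range */
        *sl = (size >> (*fl - SL_LOG2)) & (SL_COUNT - 1);  /* linear sub-range   */
    }

    int main(void)
    {
        int fl, sl;
        mapping(460, &fl, &sl);   /* 460 lies in [256, 512), sub-range 12 */
        printf("size 460 -> fl=%d, sl=%d\n", fl, sl);
        return 0;
    }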

Basically, there are two problems regarding dynamic memory: fragmentation and
temporal efficiency. There are many misconceptions around dynamic memory allocation;
one of them is that lower fragmentation can be achieved by more complex and costly
algorithms. For example, it is generally accepted that the "best-fit" policy causes
less fragmentation, but it is not true!

In short, yes, insofar as different processor architectures behave differently
in the presence of buggy code. Therefore, if we compile our application for
several target processors, it is likely that programming errors which do not
manifest themselves on some of them will show up on others.
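
A classic illustration (a made-up snippet, not taken from any real code base) is
the signedness of plain char, which the C standard leaves implementation-defined:
it is signed on the usual x86 ABIs but unsigned on ARM and PowerPC, so the very
same comparison behaves differently depending on the target:

    #include <stdio.h>

    int main(void)
    {
        char c = (char)0xFF;   /* -1 where char is signed, 255 where unsigned */

        /* The typical bug: storing getc()-style int values in a plain
           char and comparing against a negative sentinel. */
        if (c == -1)
            printf("plain char is signed on this target\n");
        else
            printf("plain char is unsigned on this target\n");
        return 0;
    }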

Although it may seem a complex task (cross-compiler suites, different processors,
etc.), thanks to the enormous effort made by the open source community in the
development of the GNU GCC toolchain, and to the incredible work of Fabrice Bellard
on the QEMU emulator, it is almost trivial.