Entropy can be interpreted as a measure of the a priori uncertainty about the
outcome of the measurement of an experiment, assuming that we measure it through the given partition
(i.e., we are only told which atom of the partition contains the result).
Thus, the finer a partition is, the higher the resulting entropy. In particular, the trivial
partition {X} has entropy 0, since there is only one possible outcome, so there is no
uncertainty at all. On the other hand, such a measurement gives no information at all
about the “real” outcome of the experiment, which reflects the complementary interpretation of entropy
as the information gained from the measurement:
the more uncertainty there is about the outcome of the measurement,
the more information about the “real” outcome is obtained by learning it.
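As a quick numerical illustration of this remark, here is a minimal Python sketch (the function name and the encoding of a partition by the list of its atoms' measures are our own conventions) of the entropy Hμ(𝒫) = −∑ μ(A) log μ(A):

```python
import math

def partition_entropy(measures):
    """Entropy H_mu(P) = -sum mu(A) log mu(A) over the atoms A of the
    partition, encoded by the list `measures` of the atoms' measures
    (assumed nonnegative and summing to 1).  Atoms of measure zero
    contribute nothing, by the convention 0 * log 0 = 0."""
    return sum(-m * math.log(m) for m in measures if m > 0)

# Trivial partition {X}: one atom of full measure, entropy 0.
print(partition_entropy([1.0]))              # 0.0
# Refining an atom only increases the entropy:
print(partition_entropy([0.5, 0.5]))         # log 2
print(partition_entropy([0.5, 0.25, 0.25]))  # larger still
```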

2.

Equally intuitive is the fact that, among all measurable partitions of X into n atoms, the maximum
possible entropy is attained at those whose atoms are equally likely (i.e., all atoms have
measure 1/n).
This can be proved by means of standard calculus, and
a direct computation shows that the maximum value is log n.
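The computation behind this maximum can also be sketched via Jensen's inequality, a standard alternative to the calculus argument: writing pᵢ for the measures of the atoms,

```latex
H_\mu(\mathcal{P})
  \;=\; \sum_{i=1}^{n} p_i \log\frac{1}{p_i}
  \;\le\; \log\!\left(\sum_{i=1}^{n} p_i \cdot \frac{1}{p_i}\right)
  \;=\; \log n,
```

by the concavity of the logarithm, with equality if and only if pᵢ = 1/n for every i.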

3.

Since the definition of entropy involves only the measures of the atoms of the given partition, two
partitions which are equal modulo sets of measure zero have the same entropy.

4.

There is a natural correspondence between finite measurable partitions and finite sub-σ-algebras of ℬ. For this reason,
to each finite sub-σ-algebra 𝒜 we can associate an entropy Hμ⁢(𝒫), where 𝒫 is the (unique) partition which generates 𝒜. For short, we denote this entropy by Hμ⁢(𝒜).