A discussion based on a calculator trick: take 'n' repeated square
roots of 'a', subtract 1, multiply by 'b', add 1, and square 'n'
times; the result comes out very close to a^b. The explanation is
closely related to logarithms.
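
A minimal sketch of the trick in Python (the function name, the
default n = 20, and the test values here are illustrative assumptions,
not taken from the original discussion). The key fact is that after n
square roots, a^(1/2^n) = e^(ln(a)/2^n) ≈ 1 + ln(a)/2^n, so the
subtract/multiply/add steps replace ln(a) with b*ln(a), and squaring
n times exponentiates back to roughly a^b:

    import math

    def calculator_trick(a, b, n=20):
        """Approximate a**b using only square roots and basic arithmetic."""
        x = a
        for _ in range(n):           # n square roots: x ~ 1 + ln(a)/2**n
            x = math.sqrt(x)
        x = (x - 1) * b + 1          # subtract 1, multiply by b, add 1
        for _ in range(n):           # square n times: x ~ exp(b*ln(a)) = a**b
            x = x * x
        return x

    print(calculator_trick(2, 10))   # roughly 1024 = 2**10

Larger n improves the linear approximation of the logarithm, but too
many square roots exhaust the calculator's precision, which is why the
result is only very close to a^b rather than exact.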

I was taught to round numbers like 3.5 or 4.5 to the nearest even
integer, so that 3.5 rounds up and 4.5 rounds down. Is there any good
reason why grade-schoolers are instead taught to always round up?
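
A quick way to see the two conventions side by side (a minimal Python
sketch; the helper round_half_up is a hypothetical name, while the
built-in round() really does round halves to even for floats):

    import math

    def round_half_up(x):
        """The grade-school rule: a .5 fraction always rounds up (for positive x)."""
        return math.floor(x + 0.5)

    for x in [2.5, 3.5, 4.5, 5.5]:
        # round() sends halves to the nearest even integer, so both
        # 3.5 and 4.5 become 4; round_half_up pushes every .5 upward.
        print(x, round(x), round_half_up(x))

One common argument for round-half-to-even is that always rounding .5
up biases sums of rounded values upward, while rounding halves to the
nearest even integer splits them evenly.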

Different calculators evaluate x^x differently for small positive values of x. Piecing together
clues about equivalent fractions, decimals, and degree measures, Doctor Peterson
gets to the roots of the discrepancies.
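
A small sketch of the kind of discrepancy at issue (comparing a direct
power against exp(x*ln x) is an assumed stand-in for how different
calculators might evaluate the power internally; the specific
calculators from the original discussion aren't reproduced here).
Since x*ln(x) → 0 as x → 0+, x^x approaches 1, and different
evaluation paths can disagree in the last displayed digits:

    import math

    for x in [0.1, 0.01, 0.001, 1e-6]:
        direct = x ** x                      # the language's power operator
        via_exp = math.exp(x * math.log(x))  # one way a calculator might do it
        print(f"x={x:g}  x**x={direct:.15f}  exp(x*ln x)={via_exp:.15f}")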