"The default timer resolution on Windows is 15.6 ms - a timer interrupt 64 times a second. When programs increase the timer frequency they increase power consumption and harm battery life. They also waste more compute power than I would ever have expected â€" they make your computer run slower! Because of these problems Microsoft has been telling developers to not increase the timer frequency for years. So how come almost every time I notice that my timer frequency has been raised it's been done by a Microsoft program?" Fascinating article.

On all modern computer hardware, you can set timers in single-shot mode for any duration you want, with microsecond-to-nanosecond accuracy. So how come we are still using periodic timers, apart from truly periodic phenomena such as round-robin scheduling without any external timing disturbance? Is it so costly to set up timers this way, in terms of CPU cycles or power consumption?

The APIC timer is dependent upon the bus & CPU frequency, which modern systems adjust on the fly depending on CPU load. The PIT has a reliable independent clock; however, I know from experience that the PIT IS expensive to re-program. It does seem that the timer API was designed for the PIT, so it wouldn't surprise me if there are legacy reasons for keeping the periodic timer design.
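To make the re-programming cost concrete: the PIT runs off a fixed ~1.193182 MHz clock, and changing its rate means writing a command byte plus a 16-bit reload value over slow legacy port I/O on every change. A sketch of just the divisor arithmetic (the port writes themselves are omitted since they need ring-0 access; the constants are the standard PIT values):

```c
#include <stdint.h>

#define PIT_BASE_HZ 1193182u  // PIT input clock, ~1.193182 MHz, fixed

// 16-bit reload value to program the PIT for a target interrupt rate.
// Each reprogram costs one command-port write plus two data-port writes
// (low byte, then high byte) — part of why one-shot use is expensive.
static uint16_t pit_reload_for_hz(unsigned hz) {
    uint32_t d = PIT_BASE_HZ / hz;
    if (d > 0xFFFF) d = 0;  // 0 encodes 65536, the slowest rate (~18.2 Hz)
    return (uint16_t)d;
}

// The interrupt rate actually obtained, given the integer divisor.
static double pit_actual_hz(uint16_t reload) {
    uint32_t d = reload ? reload : 65536u;
    return (double)PIT_BASE_HZ / d;
}
```

Note the integer divisor also means most target rates are only approximated, on top of the I/O cost.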

In theory an OS should be able to use APIC timers and recompute the appropriate timer scaling factors on the fly as the CPU changes frequency. However, therein might lie the problem: I think the CPU sleep/scaling functions are often controlled in System Management Mode, whereas the system timers are controlled by the OS. The following lacks a needed citation on Wikipedia, but...
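The rescaling itself is simple arithmetic: the local APIC timer counts down from an initial-count register at some input frequency divided by a configurable divider, so the OS only has to recompute the count whenever the frequency changes. A sketch of that conversion (the function name is illustrative, not a real API; the hard part is learning the new frequency, not this math):

```c
#include <stdint.h>

// Initial-count value for the local APIC timer, which ticks at
// (timer_input_hz / divider). If timer_input_hz changes with the bus
// or CPU frequency, the OS must redo this computation for pending
// deadlines — otherwise they fire early or late.
static uint32_t apic_count_for_us(uint64_t timer_input_hz,
                                  uint32_t divider,
                                  uint64_t interval_us) {
    return (uint32_t)(timer_input_hz / divider * interval_us / 1000000u);
}
```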

"Since the SMM code (SMI handler) is installed by the system firmware (BIOS), the OS and the SMM code may have expectations about hardware settings that are incompatible, such as different ideas of how the Advanced Programmable Interrupt Controller (APIC) should be set up."

Also, using variable-rate timer references may result in timer drift that does not occur with the PIT. Not that I think it should matter much; in my opinion, typical consumer software should never really need such fast and precise timing anyway.
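The drift in question accumulates linearly whenever ticks are converted to time using a frequency that no longer matches the rate the timer is actually running at; a back-of-the-envelope sketch:

```c
// Error accumulated when the OS converts elapsed ticks to seconds using
// a stale assumed frequency while the timer really runs at actual_hz.
// A fixed-clock reference like the PIT avoids this class of drift.
static double drift_seconds(double assumed_hz, double actual_hz,
                            double wall_seconds) {
    double ticks = actual_hz * wall_seconds;  // ticks that really elapsed
    return ticks / assumed_hz - wall_seconds; // reported minus true time
}
```

For example, a reference that silently runs 100 ppm fast accumulates about a third of a second of error per hour, which is why the OS has to re-calibrate whenever the rate can change behind its back.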

Indeed, that's something that's been bugging me forever about APIC timers: how are they supposed to replace the legacy PIT if the rate at which these timers run keeps changing, partly due to factors outside of OS control?