Alex Martelli wrote:
> "Peter Hansen" <peter at engcorp.com> wrote in message
> news:39C6DA7B.2D19B1C1 at engcorp.com...
> [snip]
> > > In theory, it's hardware dependent. But reasonably modern
> > > hardware should be able to get within a millisecond if the
> > > software is sufficiently discerning, so in practice it
> > > probably is not hardware dependent on modern systems.
> >
> > Is the latter statement more than just theory? I strongly doubt that
> > Windows, for example, would consistently come in any closer than +/- ten
> > or twenty milliseconds. Maybe on a real-time operating system.
> Millisecond resolution is available through the "multimedia timers"
> on Windows if your hardware supports it (most PCs built in the last
> few years do, but not all).
I know there are alternatives. Unfortunately the message I was replying
to (in response to a question about the *accuracy*, not the *precision*,
of time.sleep()) implied millisecond accuracy. Regardless of
precision, clock ticks, or anything else, my point was that using any
such call on a Windows system, where the process goes to sleep for a
while (or even busy-waits) and then returns after attempting to wait at
least X milliseconds, is doomed to be no more accurate than in the tens
of milliseconds. Windows NT *might* look better in many cases, if you
focus on precision (actually, resolution might be the better term here),
but even with the "realtime" priority it's at the mercy of some kernel
routine that feels like locking out multitasking for a while.
Basically, Windows (any flavor) may, on a whim, be off doing just about
anything else including launching some silly screen blanker which
proceeds to consume 98% of the CPU time playing Qix. I was just trying
to warn the original inquirer not to rely on any such system for doing
accurate delays under Windows. Some of the time it might look like it
works, but (and maybe I should have mentioned I think in terms of
realtime OS stuff) the *worst case* is going to be completely hopeless.
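If anyone wants to see this for themselves, here's a rough sketch that
measures how far time.sleep() overshoots a requested delay (it uses the
modern time.perf_counter; the details of how bad the worst case gets will
of course depend on the OS, the scheduler, and what else is running):

```python
import time

def measure_sleep_error(requested_ms=10.0, trials=100):
    """Sleep repeatedly and record how much each sleep overshoots
    (or undershoots) the requested delay, in milliseconds."""
    errors = []
    for _ in range(trials):
        start = time.perf_counter()
        time.sleep(requested_ms / 1000.0)
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        errors.append(elapsed_ms - requested_ms)
    # Return (best, mean, worst) overshoot observed
    return min(errors), sum(errors) / len(errors), max(errors)

if __name__ == "__main__":
    lo, avg, hi = measure_sleep_error()
    print("overshoot ms: min=%.3f mean=%.3f max=%.3f" % (lo, avg, hi))
```

Run it while something else is hogging the CPU (or while that screen
blanker kicks in) and watch the max column: the mean may look tolerable,
but it's the worst case that sinks you if you actually need accurate
delays.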
--
Peter Hansen