
core.time

Module containing core time functionality, such as Duration (which
represents a duration of time) or MonoTime (which represents a
timestamp of the system's monotonic clock).

Various functions take a string (or strings) to represent a unit of time
(e.g. convert!("days", "hours")(numDays)). The valid strings to use
with such functions are "years", "months", "weeks", "days", "hours",
"minutes", "seconds", "msecs" (milliseconds), "usecs" (microseconds),
"hnsecs" (hecto-nanoseconds - i.e. 100 ns) or some subset thereof. There
are a few functions that also allow "nsecs", but very little actually
has precision greater than hnsecs.
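For instance, a short sketch of converting between units with convert (all identifiers come from core.time itself):

```d
import core.time;

void main()
{
    // 2 days expressed in hours
    assert(convert!("days", "hours")(2) == 48);

    // Conversions to larger units use truncating division.
    assert(convert!("hours", "days")(30) == 1);

    // hnsecs are the usual smallest unit: 1 hnsec == 100 nsecs.
    assert(convert!("nsecs", "hnsecs")(250) == 2);
}
```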

enum ClockType;

What type of clock to use with MonoTime / MonoTimeImpl or
std.datetime.Clock.currTime. They default to ClockType.normal,
and most programs never need to deal with the others.

The other ClockTypes are provided so that other clocks provided by the
underlying system can be used with MonoTimeImpl or
std.datetime.Clock.currTime without having to use the C API directly.

In the case of the monotonic time, MonoTimeImpl is templatized on
ClockType, whereas with std.datetime.Clock.currTime, it's a template
argument to the function. That's because the type of the clock affects
the resolution of a MonoTimeImpl object, whereas with
std.datetime.SysTime, the resolution is always hecto-nanoseconds
regardless of the source of the time, so currTime returns the same type
no matter which clock is used.

ClockType.normal, ClockType.coarse, and ClockType.precise
work with both Clock.currTime and MonoTimeImpl.
ClockType.second only works with Clock.currTime. The others only
work with MonoTimeImpl.
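As a sketch, instantiating with a non-default clock type looks like this (ClockType.coarse shown; the clock type becomes part of the monotonic time's type):

```d
import core.time;

void main()
{
    // Monotonic time from the coarse clock.
    auto t1 = MonoTimeImpl!(ClockType.coarse).currTime;
    auto t2 = MonoTimeImpl!(ClockType.coarse).currTime;
    assert(t2 >= t1);

    // MonoTime is just MonoTimeImpl instantiated with ClockType.normal.
    static assert(is(MonoTime == MonoTimeImpl!(ClockType.normal)));
}
```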

normal

Use the normal clock.

bootTime

Linux-Only

Uses CLOCK_BOOTTIME.

coarse

Use the coarse clock, not the normal one (e.g. on Linux, that would be
CLOCK_REALTIME_COARSE instead of CLOCK_REALTIME for
clock_gettime if a function is using the realtime clock). It's
generally faster to get the time with the coarse clock than the normal
clock, but it's less precise (e.g. 1 msec instead of 1 usec or 1 nsec).
However, it is guaranteed to still have sub-second precision
(just not as high as with ClockType.normal).

On systems which do not support a coarser clock,
MonoTimeImpl!(ClockType.coarse) will internally use the same clock
as MonoTime does, and Clock.currTime!(ClockType.coarse) will
use the same clock as Clock.currTime. This is because the coarse
clock is doing the same thing as the normal clock (just at lower
precision), whereas some of the other clock types
(e.g. ClockType.processCPUTime) mean something fundamentally
different. So, treating those as ClockType.normal on systems where
they weren't natively supported would give misleading results.

Most programs should not use the coarse clock, exactly because it's
less precise, and most programs don't need to get the time often
enough to care, but for those rare programs that need to get the time
extremely frequently (e.g. hundreds of thousands of times a second) but
don't care about high precision, the coarse clock might be appropriate.

Currently, only Linux and FreeBSD support a coarser clock, and on other
platforms, it's treated as ClockType.normal.

precise

Uses a more precise clock than the normal one (which is already very
precise), but it takes longer to get the time. Similarly to
ClockType.coarse, if it's used on a system that does not support a
more precise clock than the normal one, it's treated as equivalent to
ClockType.normal.

Currently, only FreeBSD supports a more precise clock, where it uses
CLOCK_MONOTONIC_PRECISE for the monotonic time and
CLOCK_REALTIME_PRECISE for the wall clock time.

processCPUTime

Linux,Solaris-Only

Uses CLOCK_PROCESS_CPUTIME_ID.

raw

Linux-Only

Uses CLOCK_MONOTONIC_RAW.

second

Uses a clock that has a precision of one second (in contrast to the
coarse clock, which has sub-second precision like the normal clock).

FreeBSD is the only system which specifically has a clock set up for
this (it has CLOCK_SECOND to use with clock_gettime which
takes advantage of an in-kernel cached value), but on other systems, the
fastest function available will be used, and the resulting SysTime
will be rounded down to the second if the clock that was used gave the
time at a more precise resolution. So, it's guaranteed that the time
will be given at a precision of one second and it's likely the case that
will be faster than ClockType.normal, since there tend to be
several options on a system to get the time at low resolutions, and they
tend to be faster than getting the time at high resolutions.

So, the primary difference between ClockType.coarse and
ClockType.second is that ClockType.coarse sacrifices some
precision in order to get speed but is still fairly precise, whereas
ClockType.second tries to be as fast as possible at the expense of
all sub-second precision.

threadCPUTime

Linux,Solaris-Only

Uses CLOCK_THREAD_CPUTIME_ID.

uptime

FreeBSD-Only

Uses CLOCK_UPTIME.

uptimeCoarse

FreeBSD-Only

Uses CLOCK_UPTIME_FAST.

uptimePrecise

FreeBSD-Only

Uses CLOCK_UPTIME_PRECISE.

struct Duration;

Represents a duration of time of weeks or less (kept internally as
hnsecs), e.g. 22 days or 700 seconds.

In std.datetime, it is also used as the result of various arithmetic
operations on time points.

Use the dur function or one of its non-generic aliases to create
Durations.
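For example, creating Durations with dur and its non-generic aliases:

```d
import core.time;

void main()
{
    // dur with a unit string, or the equivalent non-generic alias
    assert(dur!"seconds"(5) == seconds(5));
    assert(dur!"days"(22) == days(22));

    // Durations of different units can be combined and compared.
    assert(days(1) == hours(24));
    assert(minutes(1) + seconds(30) == seconds(90));
}
```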

It's not possible to create a Duration of months or years, because the
variable number of days in a month or year makes it impossible to convert
between months or years and smaller units without a specific date. So,
nothing uses Durations when dealing with months or years. Rather,
functions specific to months and years are defined. For instance,
std.datetime.Date has add!"years" and add!"months" for adding
years and months rather than creating a Duration of years or months and
adding that to a std.datetime.Date. But Duration is used when dealing
with weeks or smaller.

Returns a TickDuration with the same number of hnsecs as this
Duration.
Note that the conventional way to convert between Duration and
TickDuration is using std.conv.to, e.g.:
duration.to!TickDuration()

split takes the list of time units to split out as template arguments.
The time unit strings must be given in decreasing order. How it returns
the values for those units depends on the overload used.

The overload which accepts function arguments takes integral types in
the order that the time unit strings were given, and those integers are
passed by ref. split assigns the values for the units to each
corresponding integer. Any integral type may be used, but no attempt is
made to prevent integer overflow, so don't use small integral types in
circumstances where the values for those units aren't likely to fit in
an integral type that small.

The overload with no arguments returns the values for the units in a
struct with members whose names are the same as the given time unit
strings. The members are all longs. This overload will also work
with no time strings being given, in which case all of the time
units from weeks through hnsecs will be provided (but no nsecs, since it
would always be 0).

For both overloads, the entire value of the Duration is split among the
units (rather than splitting the Duration across all units and then only
providing the values for the requested units), so if only one unit is
given, the result is equivalent to total.
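A sketch of both overloads (note that the full value of the Duration is split among the requested units, and anything smaller than the smallest requested unit is truncated away):

```d
import core.time;

void main()
{
    auto d = dur!"days"(12) + dur!"minutes"(7) + dur!"usecs"(501_223);

    // Overload with ref arguments: values are assigned in the order
    // that the unit strings were given.
    long days, hours, seconds;
    d.split!("days", "hours", "seconds")(days, hours, seconds);
    assert(days == 12);
    assert(hours == 0);
    assert(seconds == 7 * 60);

    // Overload with no arguments: returns a struct with long members
    // named after the unit strings.
    auto r = d.split!("days", "hours", "seconds")();
    assert(r.days == 12 && r.hours == 0 && r.seconds == 420);
}
```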

alias MonoTime = MonoTimeImpl!(ClockType.normal);

Alias for MonoTimeImpl instantiated with ClockType.normal. This is
what most programs should use. It's also what much of MonoTimeImpl uses
in its documentation (particularly in the examples), because that's what's
going to be used in most code.

struct MonoTimeImpl(ClockType clockType);

Represents a timestamp of the system's monotonic clock.

A monotonic clock is one which always goes forward and never moves
backwards, unlike the system's wall clock time (as represented by
std.datetime.SysTime). The system's wall clock time can be adjusted
by the user or by the system itself via services such as NTP, so it is
unreliable to use the wall clock time for timing. Timers which use the wall
clock time could easily end up never going off due to changes made to the
wall clock time or otherwise waiting for a different period of time than
that specified by the programmer. However, because the monotonic clock
always increases at a fixed rate and is not affected by adjustments to the
wall clock time, it is ideal for use with timers or anything which requires
high precision timing.

So, MonoTime should be used for anything involving timers and timing,
whereas std.datetime.SysTime should be used when the wall clock time
is required.

The monotonic clock has no relation to wall clock time. Rather, it holds
its time as the number of ticks of the clock which have occurred since the
clock started (typically when the system booted up). So, to determine how
much time has passed between two points in time, one monotonic time is
subtracted from the other to determine the number of ticks which occurred
between the two points of time, and those ticks are divided by the number of
ticks that occur every second (as represented by MonoTime.ticksPerSecond)
to get a meaningful duration of time. Normally, MonoTime does these
calculations for the programmer, but the ticks and ticksPerSecond
properties are provided for those who require direct access to the system
ticks.
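The normal way that MonoTime would be used is to take two timestamps and subtract them:

```d
import core.time;

void main()
{
    auto before = MonoTime.currTime;
    // ... do stuff ...
    auto after = MonoTime.currTime;

    // Subtracting two MonoTimes yields a Duration.
    Duration timeElapsed = after - before;
    assert(timeElapsed >= Duration.zero);
}
```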

MonoTime is an alias to MonoTimeImpl!(ClockType.normal) and is
what most programs should use for the monotonic clock, so that's what is
used in most of MonoTimeImpl's documentation. But MonoTimeImpl
can be instantiated with other clock types for those rare programs that need
it.

The current time of the system's monotonic clock. This has no relation
to the wall clock time, as the wall clock time can be adjusted (e.g.
by NTP), whereas the monotonic clock always moves forward. The source
of the monotonic time is system-specific.

On Windows, QueryPerformanceCounter is used. On Mac OS X,
mach_absolute_time is used, while on other POSIX systems,
clock_gettime is used.

Warning: On some systems, the monotonic clock may stop counting
when the computer goes to sleep or hibernates. So, the
monotonic clock may indicate less time than has actually
passed if that occurs. This is known to happen on
Mac OS X. It has not been tested whether it occurs on
either Windows or Linux.

MonoTimeImpl zero();

A MonoTime of 0 ticks. It's provided to be consistent with
Duration.zero, and it's more explicit than MonoTime.init.

This is generally fine, and by its very nature, converting from
system ticks to any type of seconds (hnsecs, nsecs, etc.) will
introduce rounding errors, but if code needs to avoid any of the
small rounding errors introduced by conversion, then it needs to use
MonoTime's ticks property and keep all calculations in ticks
rather than using Duration.

Adding or subtracting a Duration to/from a MonoTime results in
a MonoTime which is adjusted by that amount.
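For example, a common pattern is computing a deadline by adding a Duration to the current monotonic time:

```d
import core.time;

void main()
{
    auto start = MonoTime.currTime;

    // MonoTime + Duration -> MonoTime adjusted by that amount.
    auto deadline = start + seconds(10);
    assert(deadline > start);
    assert(deadline - start == seconds(10));
}
```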

const pure nothrow @nogc @property long ticks();

The number of ticks in the monotonic time.

Most programs should not use this directly, but it's exposed for those
few programs that need it.

The main reasons that a program might need to use ticks directly is if
the system clock has higher precision than hnsecs, and the program needs
that higher precision, or if the program needs to avoid the rounding
errors caused by converting to hnsecs.

static pure nothrow @nogc @property long ticksPerSecond();

The number of ticks that MonoTime has per second - i.e. the resolution
or frequency of the system's monotonic clock.

e.g. if the system clock had a resolution of microseconds, then
ticksPerSecond would be 1_000_000.

pure nothrow @nogc @safe long convClockFreq(long ticks, long srcTicksPerSecond, long dstTicksPerSecond);

Converts a number of clock ticks at one clock frequency (in ticks per
second) to the equivalent number of ticks at another frequency, e.g.:

// one tick is one second -> one tick is a hecto-nanosecond
assert(convClockFreq(45, 1, 10_000_000) == 450_000_000);
// one tick is one microsecond -> one tick is a millisecond
assert(convClockFreq(9029, 1_000_000, 1_000) == 9);
// one tick is 1/3_515_654 of a second -> 1/1_001_010 of a second
assert(convClockFreq(912_319, 3_515_654, 1_001_010) == 259_764);
// one tick is 1/MonoTime.ticksPerSecond -> one tick is a nanosecond
// Equivalent to ticksToNSecs
auto nsecs = convClockFreq(1982, MonoTime.ticksPerSecond, 1_000_000_000);

pure nothrow @nogc @safe long ticksToNSecs(long ticks);

Convenience wrapper around convClockFreq which converts ticks at
a clock frequency of MonoTime.ticksPerSecond to nanoseconds.

It's primarily of use when MonoTime.ticksPerSecond is greater than
hecto-nanosecond resolution, and an application needs a higher precision
than hecto-nanoseconds.
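A sketch of the equivalence between ticksToNSecs and the general-purpose conversion:

```d
import core.time;

void main()
{
    long ticks = MonoTime.currTime.ticks;

    // ticksToNSecs converts ticks at MonoTime.ticksPerSecond to nanoseconds,
    // which is the same as asking convClockFreq for a 1 GHz target frequency.
    long nsecs = ticksToNSecs(ticks);
    assert(nsecs == convClockFreq(ticks, MonoTime.ticksPerSecond,
                                  1_000_000_000));
}
```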

Warning: TickDuration will be deprecated in the near future (once all
uses of it in Phobos have been deprecated). Please use
MonoTime for the cases where a monotonic timestamp is needed
and Duration when a duration is needed, rather than using
TickDuration. It has been decided that TickDuration is too confusing
(e.g. it conflates a monotonic timestamp and a duration in monotonic
clock ticks) and that having multiple duration types is too awkward
and confusing.

Represents a duration of time in system clock ticks.

The system clock ticks are the ticks of the system clock at the highest
precision that the system provides.

static immutable long ticksPerSec;

The number of ticks that the system clock has in one second.

If ticksPerSec is 0, then TickDuration failed to
get the value of ticksPerSec on the current system, and
TickDuration is not going to work. That would be highly abnormal
though.

static immutable TickDuration appOrigin;

The tick of the system clock (as a TickDuration) when the
application started.

static pure nothrow @nogc @property @safe TickDuration zero();

It's the same as TickDuration(0), but it's provided to be
consistent with Duration and FracSec, which provide zero
properties.

static pure nothrow @nogc @property @safe TickDuration max();

Largest TickDuration possible.

static pure nothrow @nogc @property @safe TickDuration min();

Most negative TickDuration possible.

long length;

The number of system ticks in this TickDuration.

You can convert this length into the number of seconds by dividing
it by ticksPerSec (or by using one of the appropriate property
functions to do it).

Returns a Duration with the same number of hnsecs as this
TickDuration.
Note that the conventional way to convert between TickDuration
and Duration is using std.conv.to, e.g.:
tickDuration.to!Duration()

The current system tick. The number of ticks per second varies from
system to system. currSystemTick uses a monotonic clock, so it's
intended for precision timing by comparing relative time values, not for
getting the current system time.

On Windows, QueryPerformanceCounter is used. On Mac OS X,
mach_absolute_time is used, while on other POSIX systems,
clock_gettime is used. If mach_absolute_time or
clock_gettime is unavailable, then POSIX systems use
gettimeofday (the decision is made when TickDuration is
compiled), which, unfortunately, is not monotonic, but if
mach_absolute_time and clock_gettime aren't available, then
gettimeofday is the best that there is.

Warning:
On some systems, the monotonic clock may stop counting when
the computer goes to sleep or hibernates. So, the monotonic
clock could be off if that occurs. This is known to happen
on Mac OS X. It has not been tested whether it occurs on
either Windows or Linux.

Generic way of converting between two time units. Conversions to smaller
units use truncating division. Years and months can be converted to each
other, small units can be converted to each other, but years and months
cannot be converted to or from smaller units (due to the varying number
of days in a month or year).
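For example, note the truncating division and the years/months restriction:

```d
import core.time;

void main()
{
    // Years and months convert to each other...
    assert(convert!("years", "months")(2) == 24);
    assert(convert!("months", "years")(13) == 1);  // truncating

    // ...and so do the smaller units.
    assert(convert!("weeks", "days")(1) == 7);
    assert(convert!("seconds", "minutes")(119) == 1);  // truncating

    // But converting between years/months and smaller units
    // does not compile at all.
    static assert(!__traits(compiles, convert!("years", "days")(1)));
}
```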

This is the portion of the time which is smaller than a second and it cannot
hold values which would be greater than or equal to a second (or less than
or equal to a negative second).

It holds hnsecs internally, but you can create it using either milliseconds,
microseconds, or hnsecs. What it does is allow for a simple way to set or
adjust the fractional seconds portion of a Duration or a
std.datetime.SysTime without having to worry about whether you're
dealing with milliseconds, microseconds, or hnsecs.

FracSec's functions which take time unit strings do accept
"nsecs", but because the resolution of Duration and
std.datetime.SysTime is hnsecs, you don't actually get precision higher
than hnsecs. "nsecs" is accepted merely for convenience. Any values
given as nsecs will be converted to hnsecs using convert (which uses
truncating division when converting to smaller units).
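A sketch using FracSec.from (note how "nsecs" values are truncated to hnsec resolution, since 1 hnsec == 100 nsecs):

```d
import core.time;

void main()
{
    auto fs = FracSec.from!"msecs"(500);
    assert(fs.msecs == 500);
    assert(fs.hnsecs == 5_000_000);

    // "nsecs" is accepted for convenience, but truncated to hnsecs.
    auto fine = FracSec.from!"nsecs"(1_550);
    assert(fine.hnsecs == 15);
    assert(fine.nsecs == 1_500);  // the odd 50 nsecs were truncated away
}
```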

static pure nothrow @nogc @property @safe FracSec zero();

A FracSec of 0. It's shorter than doing something like
FracSec.from!"msecs"(0) and more explicit than FracSec.init.