I have a C program that is meant to be run in parallel on several processors. I need to be able to record the execution time (which could be anywhere from 1 second to several minutes). I have searched for answers, but they all seem to suggest using the clock() function, which involves dividing the number of clocks the program took by the CLOCKS_PER_SEC value.

I'm not sure how the CLOCKS_PER_SEC value is calculated.

In Java, I just take the current time in milliseconds before and after execution.

Is there a similar thing in C? I've had a look, but I can't seem to find a way of getting anything better than one-second resolution.

I'm also aware a profiler would be an option, but am looking to implement a timer myself.
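
You can use clock(); here is a minimal sketch of the pattern, using only the standard <time.h> facilities:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        clock_t begin = clock();

        /* here, do your time-consuming job */

        clock_t end = clock();
        double time_spent = (double)(end - begin) / CLOCKS_PER_SEC;

        printf("time spent: %f seconds\n", time_spent);
        return 0;
    }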

Note that this returns the time as a floating-point type, so it can be more precise than a second (e.g. you measure 4.52 seconds). Precision depends on the architecture; on modern systems you easily get 10 ms or lower, but on older Windows machines (from the Win98 era) it was closer to 60 ms.

clock() is standard C; it works "everywhere". There are system-specific functions, such as getrusage() on Unix-like systems.
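
For example, a minimal sketch using getrusage() to read user and system CPU time (POSIX; error handling is reduced to a single check):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* ... the work to measure goes here ... */

        struct rusage usage;
        if (getrusage(RUSAGE_SELF, &usage) == 0) {
            /* ru_utime is user CPU time, ru_stime is system CPU time */
            double user = usage.ru_utime.tv_sec + usage.ru_utime.tv_usec / 1e6;
            double sys  = usage.ru_stime.tv_sec + usage.ru_stime.tv_usec / 1e6;
            printf("user: %f s, system: %f s\n", user, sys);
        }
        return 0;
    }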

Java's System.currentTimeMillis() does not measure the same thing. It is a "wall clock": it can help you measure how much time it took for the program to execute, but it does not tell you how much CPU time was used. On a multitasking system (i.e. all of them), these can be widely different.
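
A small sketch that makes the difference visible, assuming POSIX sleep(), which consumes wall-clock time but almost no CPU time:

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        time_t  wall_begin = time(NULL);
        clock_t cpu_begin  = clock();

        sleep(2);  /* the wall clock advances; CPU time barely does */

        printf("wall: %.0f s, CPU: %f s\n",
               difftime(time(NULL), wall_begin),
               (double)(clock() - cpu_begin) / CLOCKS_PER_SEC);
        return 0;
    }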

It gives me very random results: I get a mixture of large, small, and negative numbers for the same piece of code. GCC 4.7, Linux 3.2, AMD64.
– user972946 Jun 2 '13 at 1:40

Yes: clock() returns a time in some internal scale called "clocks", and CLOCKS_PER_SEC is the number of clocks per second, so dividing by CLOCKS_PER_SEC yields a time in seconds. In the code above, the value is a double so you can scale it at will.
– Thomas Pornin Nov 7 '15 at 16:56

Big warning: clock() returns the amount of time the OS has spent running your process, not the actual amount of time elapsed. This is fine for timing a block of code, but not for measuring time elapsing in the real world.
– user3703887 Mar 28 '16 at 18:31

He said he wants to measure a multi-threaded program. I'm not sure clock() is suitable for this, because it sums up the running times of all threads, so the result will look as if the code were run sequentially. For such things I use omp_get_wtime(); a minimal sketch follows below. But of course I need to make sure the system is not busy with other processes.
– Youda008 Oct 15 '16 at 8:12
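
For reference, a minimal sketch of the omp_get_wtime() approach mentioned above (OpenMP; compile with, e.g., -fopenmp):

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        double begin = omp_get_wtime();

        /* ... parallel work to measure ... */

        printf("wall time: %f seconds\n", omp_get_wtime() - begin);
        return 0;
    }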

I should mention some things even though this thread was more relevant a year ago: CLOCKS_PER_SEC is a long int with the value 1000000, so an undivided clock() value is in microseconds, not CPU clock cycles. It therefore doesn't need to account for dynamic frequency scaling, since it counts microseconds (which would coincide with clock cycles only on a 1 MHz CPU). I wrote a short C program printing that value (a sketch follows below), and it was 1000000 on my i7-2640M laptop, whose dynamic frequency ranges from 800 MHz to 2.8 GHz, with Turbo Boost going as high as 3.5 GHz.
– DDPWNAGE Aug 17 '17 at 0:32
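
A sketch of such a program (not DDPWNAGE's original) can be as short as:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        printf("CLOCKS_PER_SEC = %ld\n", (long)CLOCKS_PER_SEC);
        return 0;
    }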

Yes, it'll work on Windows with a C library that supports the gettimeofday call. It actually doesn't matter what the compiler is; you just have to link against a decent libc, which, in the case of MinGW, is not the default Windows one.
– Wes Hardaker Jan 10 '14 at 18:22

This one is better and more reliable than the accepted one.
– Harshit Gupta Sep 12 '14 at 6:02

A lot of answers have been suggesting clock() and then CLOCKS_PER_SEC from time.h. This is probably a bad idea, because this is what my /bits/time.h file says:

    /* ISO/IEC 9899:1990 7.12.1: <time.h>
       The macro `CLOCKS_PER_SEC' is the number per second of the value
       returned by the `clock' function. */
    /* CAE XSH, Issue 4, Version 2: <time.h>
       The value of CLOCKS_PER_SEC is required to be 1 million on all
       XSI-conformant systems. */
    #  define CLOCKS_PER_SEC  1000000l
    #  if !defined __STRICT_ANSI__ && !defined __USE_XOPEN2K
    /* Even though CLOCKS_PER_SEC has such a strange value CLK_TCK
       presents the real value for clock ticks per second for the system. */
    #   include <bits/types.h>
    extern long int __sysconf (int);
    #   define CLK_TCK ((__clock_t) __sysconf (2))  /* 2 is _SC_CLK_TCK */
    #  endif

So CLOCKS_PER_SEC might be defined as 1000000, depending on what options you use to compile, and thus it does not seem like a good solution.

Thanks for the information, but is there any better alternative yet?
– ozanmuyes Oct 16 '14 at 21:00

This is not a practical problem: yes, POSIX systems always have CLOCKS_PER_SEC == 1000000, but at the same time, they all use 1 µs precision for their clock() implementation; incidentally, this has the nice property of reducing sharing problems. If you want to measure potentially very quick events, say below 1 ms, then you should first worry about the accuracy (or resolution) of the clock() function, which is necessarily coarser than 1 µs in POSIX, but is also often much coarser; the usual solution is to run the test many times. The question as asked did not seem to require that, though.
– AntoineL Apr 22 '15 at 15:29

ANSI C only specifies second-precision time functions. However, if you are running in a POSIX environment, you can use the gettimeofday() function, which provides microsecond resolution of the time elapsed since the UNIX Epoch.
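
A minimal sketch of that approach:

    #include <stdio.h>
    #include <sys/time.h>

    int main(void)
    {
        struct timeval begin, end;
        gettimeofday(&begin, NULL);

        /* ... the work to measure ... */

        gettimeofday(&end, NULL);
        double elapsed = (double)(end.tv_sec - begin.tv_sec)
                       + (double)(end.tv_usec - begin.tv_usec) / 1e6;
        printf("wall time: %f seconds\n", elapsed);
        return 0;
    }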

As a side note, I wouldn't recommend using clock(), since it is badly implemented on many (if not all?) systems and not accurate. Besides, it only refers to how long your program has spent on the CPU and not the total lifetime of the program, which, according to your question, is what I assume you would like to measure.

The ISO C standard (assuming this is what ANSI C means) purposely does not specify the precision of the time functions. Then, specifically on a POSIX implementation or on Windows, the precision of the wall-clock functions (see Thomas's answer) is in seconds. But clock()'s precision is usually greater, and always 1 µs in POSIX (independently of the accuracy).
– AntoineL Apr 22 '15 at 15:18

This gives the difference between two time_t values as a double. Since time_t values are only accurate to a second, it is of limited value in printing out the time taken by short-running programs, though it may be useful for programs that run for long periods.
– Jonathan Leffler Dec 5 '16 at 1:12

For whatever reason, passing a pair of clock_ts to difftime seems to work for me to the precision of a hundredth of a second. This is on Linux x86. I also can't get the subtraction of stop and start to work.
– ragerdl Dec 13 '16 at 19:39

@ragerdl: You need to pass clock() / CLOCKS_PER_SEC to difftime(), as it expects seconds.
– alk Jul 21 '17 at 15:11

CLOCK_MONOTONIC
    Clock that cannot be set and represents monotonic time since some unspecified starting point. This clock is not affected by discontinuous jumps in the system time (e.g., if the system administrator manually changes the clock), but is affected by the incremental adjustments performed by adjtime(3) and NTP.
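
A minimal sketch using clock_gettime() with CLOCK_MONOTONIC (on older glibc you may need to link with -lrt):

    #define _POSIX_C_SOURCE 199309L  /* expose clock_gettime in strict modes */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec begin, end;
        clock_gettime(CLOCK_MONOTONIC, &begin);

        /* ... the work to measure ... */

        clock_gettime(CLOCK_MONOTONIC, &end);
        double elapsed = (double)(end.tv_sec - begin.tv_sec)
                       + (double)(end.tv_nsec - begin.tv_nsec) / 1e9;
        printf("wall time: %f seconds\n", elapsed);
        return 0;
    }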

Comparison of execution time of bubble sort and selection sort

I have a program which compares the execution time of bubble sort and selection sort. To find out the execution time of a block of code, compute the time before and after the block, as in the sketch below.
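
A minimal sketch of that pattern, using clock() as in the accepted answer (the bubble_sort function and the input size N here are illustrative placeholders, not the original program's code):

    #include <stdio.h>
    #include <time.h>

    static void bubble_sort(int *a, int n)
    {
        for (int i = 0; i < n - 1; i++)
            for (int j = 0; j < n - 1 - i; j++)
                if (a[j] > a[j + 1]) {
                    int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
                }
    }

    int main(void)
    {
        enum { N = 20000 };
        static int data[N];
        for (int i = 0; i < N; i++)
            data[i] = N - i;       /* worst case: reverse-sorted input */

        clock_t t0 = clock();      /* time before the block */
        bubble_sort(data, N);
        clock_t t1 = clock();      /* time after the block */

        printf("bubble sort took %f seconds\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC);
        return 0;
    }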

This doesn't really add anything new compared with adimoh's answer, except that it fills in 'the executable code' block (or two of them) with some actual code. And that answer doesn't add anything that wasn't in Alexandre C's answer from two years earlier.
– Jonathan Leffler Dec 5 '16 at 1:22