No. It is an overgeneralization but not a dangerous one. In practice, with
current C++ implementations and any I can imagine in the future (considering
that this state of affairs has persisted for about 10 years or so) printf will
be faster than cout. In theory cout *can* be faster, and I think it was Dietmar
Kuhl (modulo spelling) who once made a really really fast implementation --
Andrei Alexandrescu tried the same feat with some of the STL, called YASLI (Yet
Another Standard Library Implementation), but it was never completed beyond, as I
recall, an implementation of vector, and perhaps string, but I'm not sure.

What you should mainly be concerned about instead is correctness and
maintainability.

Unfortunately, for iostreams these concerns are in direct conflict. There is far
better type safety than for the printf family, although still with UB for some
input operations. On the other hand, for anything but the most trivial formatting
and parsing, the iostream code becomes really verbose & messy, downright ugly,
employing such complex functionality that whole tomes have been written about it.

But, for simple test & research & learning programs you can use a simple subset
of iostream functionality where the type safety outweighs the verbosity.

For those kinds of small programs there's no contest, really: at least for the
novice, iostreams are the default choice, the only sane choice.

> I am using Intel Core 2 duo processor E7400 @ 2.8 GHz and 4GB of RAM
>
> A friend of mine said "printf is always faster than cout" and got the
> output of the same program as
>
> cout : 0.14
> printf: 0.10
>
> How did he get the output so fast ?

Is this the whole output? Where are the 2000000 asterisks?
The reason I mention this is that this is a very bad comparison of the two.
You do not take into account formatting and actual I/O.
When printing so many characters without explicitly flushing the output buffer,
the difference may be caused by different flushing strategies of cout and printf,
which in normal situations would not apply.
You only measure the time to put a character in a buffer and an unspecified flushing of the output buffer.
The timing for printing each time a floating point variable on a new line
may show very different results, depending on whether you use endl or '\n' with cout.
>
> printf was slightly faster!
>
> But I think the statement "printf is faster than cout " is nothing but
> dangerous over generalization.
>
> Am I correct?
>
> I am using Intel Core 2 duo processor E7400 @ 2.8 GHz and 4GB of RAM
>
> A friend of mine said "printf is always faster than cout" and got the
> output of the same program as
>
> cout : 0.14
> printf: 0.10
>
> How did he get the output so fast ?
>
> I think for 1000000 iterations my friend's output is impossible! Tell
> me whether I got approximately correct output or my friend?

When printing to the console? It doesn't matter because printing to
the console is probably hundreds if not thousands of times slower than
any speed difference between std::cout and std::printf. Any such
difference will be almost completely overwhelmed by the slowness of the
console.

Now, if you were writing to a file, that can make a big difference in
many cases.

For this particular use, with the particular implementation you
were using.
> But I think the statement "printf is faster than cout " is
> nothing but dangerous over generalization.
> Am I correct?

Yes. In particular, for the precise program you've written,
there's a good chance that actual IO is dominating both cases,
so the speed of the library code doesn't mean anything. In
fact, this will probably be the case for most uses of the
library.
> I am using Intel Core 2 duo processor E7400 @ 2.8 GHz and 4GB
> of RAM
> A friend of mine said "printf is always faster than cout"

Which is ridiculous. Theoretically, cout can be slightly
faster, since it doesn't have to do any "parsing".
Theoretically, printf can be slightly faster, because there's
only one function call for complex formatting, as opposed to
many. Practically, it all depends, and if you find a large
difference, all it means is that one of them hasn't been
implemented very efficiently.
> and got the output of the same program as
> cout : 0.14
> printf: 0.10
> How did he get the output so fast ?

What was he outputting to? And what does clock() measure on
your system? (The presence of a getchar() at the end suggests
Windows, in which case, clock() is broken, and actually measures
elapsed time, rather than CPU.)
> I think for 1000000 iterations my friend's output is
> impossible! Tell me whether I got approximately correct output
> or my friend?

If clock() works correctly, both of your figures are way too
large. On my Linux box, I get very close to 0 for both. (Your
output requires no formatting, so there is practically no CPU
involved in either case.)

[...]
> > But I think the statement "printf is faster than cout " is
> > nothing but dangerous over generalization.
> > Am I correct?
> No. It is an overgeneralization but not a dangerous one. In
> practice, with current C++ implementations and any I can
> imagine in the future (considering that this state of affairs
> has persisted for about 10 years or so) printf will be faster
> than cout. In theory cout *can* be faster, and I think it was
> Dietmar Kuhl (modulo spelling) who once made a really really
> fast implementation

Dietmar's implementation of iostream beat any implementation of
printf I've seen, in terms of speed, and in at least one version
of g++, outputting to cout was faster than printf for some types
of output. (It's hard to generalize---his output to std::cout
would normally be done with putc, and not printf, using
<stdio.h>, and putc is probably faster than printf.)

In practice, the major vendors haven't bothered because their
implementations of iostream are already "fast enough".
> What you should be mainly be concerned about instead, is
> correctness and maintainability.
> Unfortunately for iostreams these concerns are in direct
> conflict. There is far better type safety that for printf
> family, although still with UB for some input operations. On
> the other hand, for any but the most trivial formatting and
> parsing, the iostream code becomes really verbose & messy,
> downright ugly, employing so complex functionality that whole
> tomes have been written about it.

Less so than printf, if you use it correctly. But advanced
formatting is never simple. (And of course, neither has any
support for formatting when variable-width fonts are used. In
this sense, they're both from an earlier time.)

On Jul 21, 11:28 am, Juha Nieminen <> wrote:
> Prasoon wrote:
> > Which is faster "cout" or "printf" ?
> When printing to the console? It doesn't matter because
> printing to the console is probably hundreds if not thousands
> of times slower than any speed difference between std::cout
> and std::printf. Any such difference will be almost completely
> overwhelmed by the slowness of the console.
> Now, if you were writing to a file, that can make a big
> difference in many cases.

For small files, which the system can cache in its memory. For
a large enough file, you'll end up using all of the system
buffers, the writes will require an actual write to disk, and
things will slow down considerably. Try writing 100K, then 200K,
up to a couple of MB. You'll find that a graph of the elapsed
execution times is decidedly non-linear. (Of course, if you're
using clock(), under Linux, nothing will change, since it's only
under Windows that clock() doesn't work correctly.)

James Kanze wrote:
>> I think for 1000000 iterations my friend's output is
>> impossible! Tell me whether I got approximately correct output
>> or my friend?
>
> If clock() works correctly, both of your figures are way too
> large. On my Linux box, I get very close to 0 for both. (Your
> output requires no formatting, so there is practically no CPU
> involved in either case.)

I think it measured the elapsed time in my case. I redirected the
output of the code to a file and got much smaller values than my
previous ones.

If you are not using dynamic formatting, using constant data,
then cout.write() is about as fast as fwrite(). The whole point
is that these block-write functions take the data as-is and
send it on its merry way.

If you take a look at your results, the timings seem to be negligible.
The differences in timing are not significant, given the OS's scheduling
priorities and the speed of the platform's I/O channel(s). In other words,
the time you save here will be wasted waiting for user input,
a hard drive, internet transmission, etc.

On Jul 21, 12:36 pm, tni <> wrote:
> James Kanze wrote:
> >> I think for 1000000 iterations my friend's output is
> >> impossible! Tell me whether I got approximately correct output
> >> or my friend?
> > If clock() works correctly, both of your figures are way too
> > large. On my Linux box, I get very close to 0 for both. (Your
> > output requires no formatting, so there is practically no CPU
> > involved in either case.)
> Windows console output is extremely slow.

And how does that relate to clock()? The standard says that
"The clock function returns the implementation's best
approximation to the processor time used by the program since
the beginning of an implementation-defined era related only to
the program invocation." There are, of course, enough weasel
words in there to make just about anything formally conform, but
the intent is clear that it should be related to the CPU time
used by the program (not the system), insofar as such is
available. Console output under Linux isn't particularly fast
either, but it's system time, not charged to the program, and it
doesn't show up in clock().

(Presumably, the reason Windows does what it does is for
backwards compatibility with MS-DOS, where no better
approximation was available.)

On Jul 21, 6:53 pm, Prasoon <> wrote:
> >If clock() works correctly, both of your figures are way too
> >large. On my Linux box, I get very close to 0 for both. (Your
> >output requires no formatting, so there is practically no CPU
> >involved in either case.)
> I think it measured the elapsed time in my case. I redirected the
> output of the code to a file and got much smaller values than my
> previous ones.

If you're under Windows, it measures elapsed time. If you're
only under Windows, you can use the function GetProcessTimes to
obtain the CPU time. (The lpUserTime field in the returned
struct corresponds roughly to what clock() should return.)

>If you're under Windows, it measures elapsed time. If you're
>only under Windows, you can use the function GetProcessTimes to
>obtain the CPU time. (The lpUserTime field in the returned
>struct corresponds roughly to what clock() should return.)

I also use Ubuntu 9.04 frequently. So no problem with that.

James Kanze wrote:
> On Jul 21, 12:36 pm, tni <> wrote:
>> Windows console output is extremely slow.
>
> And how does that relate to clock()? The standard says that
> "The clock function returns the implementation's best
> approximation to the processor time used by the program since
> the beginning of an implementation-defined era related only to
> the program invocation." There are, of course, enough weasel
> words in there to make just about anything formally conform, but
> the intent is clear that it should be related to the CPU time
> used by the program (not the system), insofar as such is
> available.

My interpretation of the weasel words is that it's very reasonable to
include system time.
> Console output under Linux isn't particularly fast
> either,

Well, Linux (terminal is KDE Konsole 4.2.2) is faster than Windows by a
factor of 150. I would call that fast.
> but it's system time, not charged to the program, and it
> doesn't show up in clock().

Nope. System time is certainly included in the clock() value on my Linux
systems.

So any issue with clock() reporting real time on Windows is WAY smaller
than the difference in console output performance vs. Linux.

When redirecting the output to a file, Linux is about 4x faster; the
writes are completely cached, user+sys time is approximately equal to
real time on both.

(The numbers are for VS 2005 on Windows, GCC 4.3 on Linux; MinGW 4.4 on
Windows is about 10% faster than VS 2005 for this test. MinGW is using
the Windows standard libs, so it's not surprising that it's much slower
than GCC on Linux.)

On Jul 22, 4:42 pm, tni <> wrote:
> James Kanze wrote:
> > On Jul 21, 12:36 pm, tni <> wrote:
> >> Windows console output is extremely slow.
> > And how does that relate to clock()? The standard says that
> > "The clock function returns the implementation's best
> > approximation to the processor time used by the program
> > since the beginning of an implementation-defined era related
> > only to the program invocation." There are, of course,
> > enough weasel words in there to make just about anything
> > formally conform, but the intent is clear that it should be
> > related to the CPU time used by the program (not the
> > system), insofar as such is available.
> My interpretation of the weasel words is that it's very
> reasonable to include system time.

It's debatable. Is the system part of the program, or not? My
first interpretation would be that it isn't, but the point can
easily be argued both ways.

What is clear is that it shouldn't return elapsed time unless no
better alternatives exist (e.g. under MS-DOS).
> > Console output under Linux isn't particularly fast
> > either,
> Well, Linux (terminal is KDE Konsole 4.2.2) is faster than
> Windows by a factor of 150. I would call that fast.

I've not measured the actual difference, but Linux terminal
output is visibly slower than output to /dev/null, or even
output to a remote file. I would call that slow.
> > but it's system time, not charged to the program, and it
> > doesn't show up in clock().
> Nope. System time is certainly included in the clock() value
> on my Linux systems.

I only tried it on one Linux system; the time from clock() was
the same whether the output went to the terminal or to
/dev/null.

I'm afraid I don't understand that sentence.
> When redirecting the output to a file, Linux is about 4x
> faster; the writes are completely cached, user+sys time is
> approximately equal to real time on both.

It depends on how much you're writing. There's a distinct point
where the caching stops working, and the elapsed time makes a
jump.

Of course, in most real applications, you'll be synchronizing
the important writes anyway, to avoid the caching. (In my work,
about the only non-synchronized writes are logging output. And
that very quickly becomes large enough that caching stops
working as well.)
