Bill Baxter wrote:
> Benji Smith wrote:
>> Then, because this algorithm needed to be deployed to heterogeneous environments, a colleague of mine ported my code to C++. He did a straight transliteration of the code, preserving the same semantics from the Java to C++.
> Does that mean that wherever you did "new Foo" he did a "new Foo" also?
Yes. He also subsequently did a "delete Foo" when he was finished with the object.
>> When we timed both implementations, we discovered that mine was 40 percent faster. Several of the C++ developers on my team were completely incredulous, and they made it their personal quest to optimize the C++ version so that it was the performance winner.
>> They eventually caught up to, and surpassed, the performance of the Java code.
> Any idea by how much the C++ surpassed the Java in the end? Was it about the same margin (~40%) or significantly more or less? It's a big difference between 10x the Java performance vs say only 5% faster.
The C++ version eventually outperformed the Java version by 10-15%.

Jason House wrote:
> In a head-to-head C vs. D comparison on the computer go mailing list, it was reported that D was slower than C by a factor of 1.5. That's close enough for me to consider D sufficiently fast. I don't know how that compares to Java.
> http://www.mail-archive.com/computer-go@computer-go.org/msg00663.html
I bet that if they compared DMD with DMC, they'd have found no difference. If you write "C style" code in D, you will get exactly the same results you get from C.

Walter Bright wrote:
> Jason House wrote:
>> In a head-to-head C vs. D comparison on the computer go mailing list, it was reported that D was slower than C by a factor of 1.5. That's close enough for me to consider D sufficiently fast. I don't know how that compares to Java.
>> http://www.mail-archive.com/computer-go@computer-go.org/msg00663.html
> I bet that if they compared DMD with DMC, they'd have found no difference. If you write "C style" code in D, you will get exactly the same results you get from C.
The same guy elaborates a bit in
http://www.mail-archive.com/computer-go@computer-go.org/msg00666.html
and it actually ends up sounding like it should have been posted in this thread.
A quote: "You can write code a little faster in D [than C], but you can
write finished bug-free code a LOT faster."

Walter Bright wrote:
> Jason House wrote:
>> It was reported that D was slower than C by a factor of 1.5.
> I bet that if they compared DMD with DMC, they'd have found no difference.
That makes it sound like DMC is 1.5 slower than your average C.

Georg Wrede wrote:
> Walter Bright wrote:
>> Jason House wrote:
>>> It was reported that D was slower than C by a factor of 1.5.
>> I bet that if they compared DMD with DMC, they'd have found no difference.
> That makes it sound like DMC is 1.5 slower than your average C.
Your average C? No. It might be 1.5 slower than the specific C compiler the benchmarker used for that specific application. Performance for particular applications varies all over the map for different C compilers.

== Quote from Walter Bright (newshound@digitalmars.com)'s article
> Georg Wrede wrote:
> >> Walter Bright wrote:
> >>> Jason House wrote:
> >>>> It was reported that D was slower than C by a factor of 1.5.
> >>> I bet that if they compared DMD with DMC, they'd have found no difference.
> >> That makes it sound like DMC is 1.5 slower than your average C.
> Your average C? No. It might be 1.5 slower than the specific C compiler the benchmarker used for that specific application. Performance for particular applications varies all over the map for different C compilers.
That may be, but this specific C compiler is most likely gcc on Linux or VS C++ on
Windows. The D compiler is probably dmd. It's a bit shocking to see a 50%
difference. Is there information on which compilers were used? And is there any
reason to believe the specifics of the benchmark could produce such a wide difference?

Waldemar wrote:
> == Quote from Walter Bright (newshound@digitalmars.com)'s article
>> Georg Wrede wrote:
>>> Walter Bright wrote:
>>>> Jason House wrote:
>>>>> It was reported that D was slower than C by a factor of 1.5.
>>>> I bet that if they compared DMD with DMC, they'd have found no
>>>> difference.
>>> That makes it sound like DMC is 1.5 slower than your average C.
>> Your average C? No. It might be 1.5 slower than the specific C compiler
>> the benchmarker used for that specific application. Performance for
>> particular applications varies all over the map for different C compilers.
> That may be, but this specific C compiler is most likely gcc on Linux or VS C++ on
> Windows. The D compiler is probably dmd. It's a bit shocking to see a 50%
> difference. Is there information which compilers were used? And is there any
> reason to believe the specifics of the benchmark could produce such a wide difference?
In the past it's been mentioned that DMD (and DMC by extension, I assume) produces somewhat sub-optimal code for floating-point ops, and there may be one or two other scenarios as well. So any test that exercises these features heavily may give a somewhat skewed picture of language performance.
Sean

Waldemar wrote:
> == Quote from Walter Bright (newshound@digitalmars.com)'s article
>> Georg Wrede wrote:
>>> Walter Bright wrote:
>>>> Jason House wrote:
>>>>> It was reported that D was slower than C by a factor of 1.5.
>>>> I bet that if they compared DMD with DMC, they'd have found no
>>>> difference.
>>> That makes it sound like DMC is 1.5 slower than your average C.
>> Your average C? No. It might be 1.5 slower than the specific C compiler
>> the benchmarker used for that specific application. Performance for
>> particular applications varies all over the map for different C compilers.
> That may be, but this specific C compiler is most likely gcc on Linux or VS C++ on
> Windows. The D compiler is probably dmd.
If he's using gcc, he should do benchmark comparisons with gdc.
> It's a bit shocking to see a 50%
> difference. Is there information which compilers were used? And is there any
> reason to believe the specifics of the benchmark could produce such a wide difference?
There's every reason to believe it. Often, people who write benchmarks never check to see exactly what they are actually benchmarking. I've seen all of the following:
1) using the wrong compiler switches
2) assuming one is testing string handling speed, when actually the benchmark was extremely sensitive to how the compiler handled the / operation on integers
3) assuming one is benchmarking some calculation speed, when one is actually benchmarking some innocuous looking C library function call
4) etc. etc.
In other words, you don't know what you're benchmarking until you run a profiler on it. And nobody runs profilers <g>. The old adage that 90% of your code execution time is in 10% of the code applies to benchmarks, too. Unless you actually dig in and measure it, sure as heck that 10% will not be where you think it is.