The C# compiler is like 5-10x faster than the C++ compiler. Does that prove that real-world managed code runs faster than native code?

There are several reasons for this. Most importantly, C++ is a far more complex language (grammar wise) than C# so it takes longer to parse (particularly thanks to templates, which are a Turing-complete compile-time language), and the C++ compiler performs far more extensive optimizations (in the .Net world, most optimizations are performed by the JIT, so the work of the C# compiler is relatively light).

Whether managed or native is faster has no simple answer. With native code it is possible to perform more optimizations, so it is possible to write faster code than with managed. However, in practice it may require extremely complex and hard-to-maintain code to actually beat managed code. A good example of this is the optimization series that Raymond Chen and Rico Mariani once did: Raymond's final optimized C++ version was faster than the .Net version, but he had to write his own memory allocation algorithms and things like that in order to get there. By comparison, the naïve .Net version was orders of magnitude faster than the naïve C++ version, and not even that much slower than the final optimized C++ version.

And that was done back in 2005, with .Net 2.0. There have been many significant performance improvements in .Net (particularly regarding start-up and the GC) since then.

@SteveRichter: No. Perhaps the C# compiler is better parallelized during the compilation process. C++ is the fastest language there is (in production use).

ok. I am just guessing that the C++ compiler, having been written in C++, is impossible to refactor. I am finding it very sluggish, and the compiler errors are oftentimes misleading. The other day the header declaration of a function returned a different type than the definition in the code, and the compiler just went nuts, telling me I had over 100 errors. I run into that kind of scenario very frequently, forcing me to change my coding style so that I recompile very frequently, after small code changes.

ok. I am just guessing that the C++ compiler, having been written in C++, is impossible to refactor.

Nice theory, except the C# compiler is written in C++ too (Mono's C# compiler is written in C#, but MS.Net's isn't).

It's C++'s insane complexity, I tell you. C++'s grammar isn't even LALR, so you can't use automated compiler generation tools like yacc or bison to create a parser for it. It is just stupidly convoluted in many places.

Whether managed or native is faster has no simple answer. With native code it is possible to perform more optimizations, so it is possible to write faster code than with managed. However, in practice it may require extremely complex and hard-to-maintain code to actually beat managed code.

In my C++ code all strings are in std::wstring and I am making a lot of use of unique_ptr<class>. And since I do a lot of function calls which return these types, there must be a lot of constructors and destructors being called. Compared to C# simply returning a reference.

In my C++ code all strings are in std::wstring and I am making a lot of use of unique_ptr<class>. And since I do a lot of function calls which return these types, there must be a lot of constructors and destructors being called.

Could be, but not necessarily. In a lot of cases, the C++ compiler is able to avoid copying objects (copy elision). Even more so since the introduction of r-value references and move constructors in C++11. And these things are aggressively inlined, even across compilation units. Doing those kinds of optimizations is what makes your code faster, but the compilation slower.

And std::unique_ptr doesn't have a copy constructor. That's kind of its main purpose (it does have a move constructor however, but that's no more expensive than copying a pointer).

Nice theory, except the C# compiler is written in C++ too (Mono's C# compiler is written in C#, but MS.Net's isn't).

what was Anders talking about last year when describing the data structures used in the Roslyn project? Where they did not have to recreate the entire parse tree, or whatever it was called, for every source code change. I recall him saying they had redone the entire compiler.

Nice theory, except the C# compiler is written in C++ too (Mono's C# compiler is written in C#, but MS.Net's isn't).

It's C++'s insane complexity, I tell you. C++'s grammar isn't even LALR, so you can't use automated compiler generation tools like yacc or bison to create a parser for it. It is just stupidly convoluted in many places.

LOL... it's funny. Way back I loved C, and to this day I will code in C or in C# but not in C++ unless I am forced to. I have never liked what they did with C++, and I never found a real-world case where I could not write what I needed in C if I had to use C or C++.

as for speed, well, .net can be faster or slower depending on what is being done, how it's done, and many other factors.

runtime speed, that is, not the speed of the compiler.

compiler speed -- well, as was posted, what has to be done to compile C/C++ is way different than .net.

also remember that a .net compiler only generates IL, not native code. a C/C++ compiler does generate native code.

Wed, 02 Jan 2013 15:53:22 GMT | figuerres | Coffeehouse - is managed code faster than native?

One thing I found in both C# and C++ (STL) is to stay away from enumerators (foreach and begin()). Instead use simple integer loop indexing. There is a measurable performance drop in both cases when using enumerators.

The C# compiler is like 5-10x faster than the C++ compiler. Does that prove that real-world managed code runs faster than native code?

That's an absurd conclusion. Depending on language complexity and the optimizations applied, compilation time can vary wildly. One of the Go compilers can compile large code bases in less than a second. The Go language was designed to allow for that, but it's also a product of how much optimization is applied in that particular compiler. There is still a need for a release compiler with a different compile-time vs run-time trade-off. The MLton compiler performs whole-program analysis and optimization - that is obviously going to be more time-consuming than local optimizations. The conclusion is so absurd that it leads one to believe you want to spur a heated debate.

One thing I found in both C# and C++ (STL) is to stay away from enumerators (foreach and begin()). Instead use simple integer loop indexing. There is a measurable performance drop in both cases when using enumerators.

Where performance doesn't matter, using an enumerator is fine.

If the use of an STL enumerator rather than an integer indexer is the thing tanking your performance, then something has gone so far wrong in your app it's unreal.

My suggestion is that you benchmark your code before optimising it. Optimise only the bits that are actually in your hotpath, and choose algorithms that can be easily parallelized or which have known correct library implementations with lower big-Ohs to the ones you're using.

The rest of your code should focus on being readable and correct instead of being fast.

If the use of an STL enumerator rather than an integer indexer is the thing tanking your performance, then something has gone so far wrong in your app it's unreal.

Wow, that is quite a generalization for someone who has no clue what my code looks like. I'm talking about tight loops that do DSP processing (realtime FFT, filtering, FFT analysis, etc.) as well as code that does asynchronous multithreaded low-level IO. In such cases the overhead of using enumerators can easily add a 30% performance drop in such tight loops. I've done the benchmarks and that is what they show. What do your benchmarks show for such cases?

If you think "something has gone wrong" in my code, then, with all due respect, you should stick to writing simple code where no one gives a crap about actual performance.

Wow, that is quite a generalization for someone who has no clue what my code looks like. I'm talking about tight loops that do DSP processing (realtime FFT, filtering, FFT analysis, etc.) as well as code that does asynchronous multithreaded low-level IO. In such cases the overhead of using enumerators can easily add a 30% performance drop in such tight loops. I've done the benchmarks and that is what they show. What do your benchmarks show for such cases?

I think you might get a bigger win by taking a step back and asking whether other kinds of data structures and coding styles might give you better performance.

For example, a flat array will give you faster data accesses than an STL list, even when using integer indexers.

And if you're really spending all of your time in a hot loop doing memory/CPU operations like FFTs, try writing them as a GPU shader in HLSL; you'll get orders of magnitude of speed-up by doing so. That's what I do for password cracking, for example. I certainly don't use the STL at all for the hot loops of mine that actually make the room so hot you need four water-cooling pumps and a heat-sink the size of a table to keep the processors from melting (but as you say, I only write code where performance doesn't matter, right?)

Focusing on minor things like iterators versus integer indexers tends to hide the wood for the trees - as you said before, you spend a lot of time doing low-level asynchronous IO, and the syscall to kick that off will take tens of thousands of times longer to complete than the difference between an STL iterator and an overloaded array indexer.

And the STL iterator is not contractually bound to be slower - it might actually be faster on some machines or in future versions of the CRT. For example, integer array accesses need to be bounds-checked for safety, whereas iterators do not.

Moral of the story is that writing easy-to-read, obviously-correct, good-practice code for the most part, and only ever optimising (and liberally commenting and benchmarking) genuine hot loops, tends to lead to better, more reliable and longer-lived code.

Well I was throwing different examples in there, so different approaches are required in each case.

To simplify things, for this example let's just say I'm implementing a DSP algorithm in C#. In that case there is quite a big difference between using for and foreach. In my algorithm I analyze the FFT results, then classify peaks based on certain criteria. Then there are additional algorithms that enumerate over those results and analyze the musical relationships between all the collected peaks. So as you can see, there is a lot of looping.

What I have found is that the performance increase when switching to for loops from foreach loops is roughly 30%.

The IO example is a bit more complex, because profiling just a specific part of the management code itself (C++) has shown that that part also had a roughly 30% increase in performance when going to simple integer indexing. Now, the IO in other parts of the code is a much bigger bottleneck; however, changing the management code to use integer indexing resulted in an overall improvement of 5% or so. Not as big, but still worthwhile.

BTW, in this particular case the DSP algorithm needs to run on WP7/8, so I can't use the GPU. I can use C++ on WP8, but to keep it backwards compatible with WP7 I prefer not to do that. The point is that with the right coding approach it is fast enough that I don't need to resort to something else.

That's an absurd conclusion. Depending on language complexity and the optimizations applied, compilation time can vary wildly. One of the Go compilers can compile large code bases in less than a second.

I do not know other compilers. I was just figuring that the compiler would be the flagship app of a language and a lot of effort would be put into making it run well. And my recent experience using Visual C++ is that it is a bit shabby, kind of like the language and the compiler have been hacked together over the years. Which would be fine if I had an alternative. But C# does not handle structs very well, and Microsoft says Windows shell code should be written in C++.

And since C# is very close to C++ in performance, you shouldn't worry about that. Often you end up making a cleaner, faster program because it is easier using C#, which gives you more time and less intimidation when trying to improve your program.

To me: C# for most apps and decent 3D games, C++ for extreme games. And it is not limited to the .Net platform.

With very few exceptions, people who write assembly by hand write worse assembly (in terms of correctness and in terms of performance) than comes out the back of the C++ compiler, and worse assembly than comes out of the C# JITter.

And people who use __asm in C++ not only make their code non-portable to platforms other than x86 (including ARM and x64), they also cause the compiler to turn off inlining for that function, turn off optimisations for the function, and save and restore all of the registers it heuristically thinks the block touches. It can also cause the compiler to make really suboptimal use of registers in the code:

Consequently the inline code inside a C++ function:

__asm { xor ebx, ebx; mov [_local], ebx }

is technically equivalent to _local = 0, but can be thousands of times slower in practice, since the compiler now has to save EBX over the block, no longer knows that _local is zero for optimisations later in the function, can't initialize _local at the same time as it initializes all of the other local variables (using STOSDs rather than MOVs), can't inline the function, can't make the function EBP-less, can't shuffle the asm block around to get better store/fetch performance on the processor, can't use SIMD, and can't perform any compile-time checks of the code.

Even worse - if you dare to use that code in a __declspec(naked) function without saving EBX over the call yourself, you might find that EBX is a register you shouldn't be blindly destroying - and if you do it in kernel mode, there's an exploitable error in there too.

It's also much harder to do algorithmic improvement in lower-level languages. For example, doing a quick-sort in assembler is so difficult that in practice people implement easier-to-write, less-likely-to-go-wrong, but worse-big-Oh algorithms when forced to use lower-level languages. Consequently, two days' work on a handwritten assembler algorithm is likely to yield a slower result than an equivalent amount of work spent optimising the algorithm in a higher-level language.

Moral of the story is that optimal assembler > optimal C++, but for nearly all values of "you", your assembler <<< the Microsoft C++ compiler's release output of your C++.

Oh, tosh. You could make that claim of anything that compiles to native. Speed is dependent on the compiler, the runtime, and how crappy the user's code is.

sure, but C++ is also one of a kind with its you-don't-pay-for-what-you-don't-use philosophy; being native doesn't necessarily mean control or speed given high enough abstractions.

Thu, 03 Jan 2013 19:31:03 GMT | Ion Todirel

I haven't really seen a good argument, ever, on why managed (or dynamic/static) languages inherently perform better or worse than unmanaged ones. You can look at benchmarks all day, but all they will tell you is that some compilers/runtimes are better at optimization than others. Nothing about languages.

Fri, 04 Jan 2013 21:55:42 GMT | Bass

Dynamic languages versus static languages is easy: dynamic languages need more runtime checks and an embedded parser in order to run code. Static languages get to remove certain checks (e.g. C++ doesn't need to decide at runtime whether + on a variable should use an ADD opcode to add two integers, an FADD, or possibly a full-blown strncat, whereas PHP, given a random variable, doesn't know until runtime).

Managed languages versus unmanaged ones is less obviously one side or the other - managed languages get to make certain optimisations based on runtime data that static compilers can't, whereas static languages get to make slower but ultimately more effective optimisations, because they're not under pressure to return quickly so that the runtime isn't held up by the JIT.

Similarly, the heap versus the GC is much of a muchness. The GC has to do expensive collects, but native heaps have to do expensive free-block coalescing, and they lose performance due to things like fragmentation pushing up the working set of the process, making accesses more likely to fault.

What I would really like though is the ability to tell C# to compile my app down not to CIL but to x86 and for it to burn in or link to a GC implementation. If it could strip all symbols, types and reflective information from the binary like I can strip PDBs from C++ that would be extraordinarily awesome too.

Fri, 04 Jan 2013 22:40:23 GMT | evildictaitor

The situations where you can know that when a function is called from somewhere it will always be passed values of a certain type are the same in static and dynamic languages (that's how type inference works). And in the situations where you can't, static typing doesn't buy you anything, because you'd have to check the type anyway (e.g. typecasting unpredictable input from I/O) or use interfaces to accomplish what dynamic languages give you for free via duck typing. (Also poorly and more verbosely.)

Fri, 04 Jan 2013 23:30:31 GMT | Bass

There are several reasons for this. Most importantly, C++ is a far more complex language (grammar wise) than C# so it takes longer to parse (particularly thanks to templates, which are a Turing-complete compile-time language), and the C++ compiler performs far more extensive optimizations (in the .Net world, most optimizations are performed by the JIT, so the work of the C# compiler is relatively light).

Based on research by folks at LLVM, it seems that the include system is the main offender; with modules they proved that compilation time can be greatly reduced. The include system is flexible, and for any project you could probably find the structure that yields the optimal compilation times, but not without sacrificing maintainability and readability.

The situations where you can know that when a function is called from somewhere it will always be passed values of a certain type are the same in static and dynamic languages (that's how type inference works).

Type inference can sometimes tell what type a parameter will be (if it can, you get fast code; if it can't, you have to do a slow runtime check).

Static types mean that you can always tell what a parameter will be, so you never need a slow runtime check asking what a parameter is in order to decide whether + means ADD or FADD or a call to variant_Add.

Can you point out a situation where type inference can't tell what a type would be, but a programmer can?

Sat, 05 Jan 2013 17:34:25 GMT | Bass

There are a few major things that I feel constitute what I expect out of dynamic languages:

Types as values instead of types as variables

Duck typing

Optional metaprogramming / reflection

I'd really like to know why, on an objective level, these features would affect performance in any way that a static language would not suffer from, i.e. from virtual calls or type casting when needing to do certain things in a static language, like I/O or polymorphism and some kinds of signalling. Reflection/metaprogramming is the one feature that I can see really complicating optimization (especially AOT), because it allows the programmer to literally f**k with code in arbitrary ways at runtime. But that sort of thing is optional even in dynamic languages, and I think it's just "slow" in any language that offers it.

I like strongly typed dynamic languages. Also, the language doesn't need to be compiled at runtime; it could have a compiler that precompiles directly to machine code, if that improves performance.

Just to be specific... a dynamic language should still have a rich type system that exposes a reasonable amount of the functionality available on the CPU. If all it supports is the String datatype or floats, well, that's not really what I'm looking for.

JavaScript on the V8 engine is pretty interesting because, at least on Debian's benchmarks game, it performs competitively with Mono and Java (both considered very fast), despite having inherited JavaScript's various optimization difficulties, not doing any precompilation like they do, and being significantly newer.

Can you point out a situation where type inference can't tell what a type would be, but a programmer can?

function add(var1, var2) { return var1 + var2; }

alert(add(1, 2)); alert(add("1", "2"));

In this case, add will be implemented by a call, not by an ADD because two type inference passes will come back with different types.

If this was done via a C++ template the first time will be an ADD opcode for integer addition and the second one will be done by a call to a string concatenation routine.

Sat, 05 Jan 2013 21:30:41 GMT | evildictaitor

I don't get why that would require runtime type checking. The types of the values you are passing to add can be unambiguously determined to be (integer, integer) and (string, string). Thus the compiler could create two functions (or even inline the code), one for add(string, string) and one for add(int, int).

Sun, 06 Jan 2013 00:48:05 GMT | Bass

I don't get why that would require runtime type checking. The types of the values you are passing to add can be unambiguously determined to be (integer, integer) and (string, string). Thus the compiler could create two functions (or even inline the code), one for add(string, string) and one for add(int, int).

If you think type inference in real programs is quick, easy, or accurate, I suggest you go work on a compiler team for a short while. They will disabuse you of this opinion.

Perhaps then you will understand why knowing what a type is allows you to generate faster code than trying to guess what it is.

If you think type inference in real programs is quick, easy, or accurate, I suggest you go work on a compiler team for a short while. They will disabuse you of this opinion.

Perhaps then you will understand why knowing what a type is allows you to generate faster code than trying to guess what it is.

Explain when it isn't accurate. In your example, it would be trivial for the compiler to know the types: the quotes give away the strings, and the fact that the values are bare numbers gives away the ints. Thus the compiler could optimize the function call without losing the advantages of dynamic behavior.

I'm trying to understand the instances when you wouldn't know what a type would be ahead of time. The only times I can think of involve metaprogramming and indirect calls (i.e. polymorphism/interfaces/virtual calls, like using something like an "IAddable" interface and one function, which is what static languages use to get around the lack of duck typing in the language).

I didn't say writing an optimizing compiler is easy for a dynamic language (or for a static language, even), just that I don't see what static languages inherently buy in performance (i.e. why static languages are "inherently" more optimizable). If there isn't an inherent performance advantage, I want to see at least some example of a hard AI problem involved in decorating variables with types.

Because type inference between functions is a whole-program optimisation, and is expensive and infeasible for large programs. Inlining can also only be done for certain classes of functions - namely small ones with no recursion.

Ultimately your solution for making a program written without types faster is to change it into a program written with types, by type inference. Surely even you can see that it's faster (and more likely to find bugs) to ask the programmer what the type should be, rather than to try (and often fail) to infer the type by second-guessing the programmer.

Sun, 06 Jan 2013 19:15:07 GMT | evildictaitor

It can't be that complicated to do a search for places a function is called? That's O(n lg n) on the size of the code base at best?

I don't agree with making code more verbose to help the compiler. Isn't the point of programming languages to take needless complexity away from the programmer? If we wanted to tell the computer everything, we'd be using assembly language.

I like this quote from the author of the Ruby language:

Often people, especially computer engineers, focus on the machines. They think, "By doing this, the machine will run faster. By doing this, the machine will run more effectively. By doing this, the machine will something something something." They are focusing on machines. But in fact we need to focus on humans, on how humans care about doing programming or operating the application of the machines. We are the masters. They are the slaves. --Yukihiro Matsumoto

I've done Java for a while professionally. But now, after doing Ruby for a while, I've really started to understand this, especially why statically typed OO makes programming far more complicated than it needs to be. The main thing is how much effort the team spends on pointless taxonomy, which is pretty much eliminated with Ruby. (In fact, I would say assembly is even better on this front. Static OO languages add complexity that simply doesn't even exist at the CPU level!) I'd say overall productivity is just much better, especially the more people join a project. Fewer people arguing endlessly over how an object model should look.

It would be a shame if we still used static languages in the mainstream 20, 30 years from now.

Sun, 06 Jan 2013 19:27:03 GMT | Bass

I think having classical object creation is also a bit easier to optimize than prototypes. But it ought to be possible to optimize both using some compiler heuristics. Although not every case of prototype-based object creation can be optimized, obviously the ones that kinda look like a class declaration would be easier (think of what TypeScript generates).

With how fast JavaScript is these days (and JavaScript is not really the best example of an easy-to-optimize dynamic language), it makes me wonder whether a dynamic language, still with duck typing and types as values but with some minor modifications (e.g. classes, richer numeric types) to help with static optimizations, could be made C++ fast. Anyway, I'd definitely like to learn more about this sort of thing.

It can't be that complicated to do a search for places a function is called? That's O(n lg n) on the size of the code base at best?

For n functions with m instructions, where each parameter is v indirections from a type-fix point, you have at least O(nmv) work to resolve the function. If function lengths are independent of codebase size and type indirection grows with lg(n), this is O(n^2 lg(n)) at the very least, and that's assuming the type of the value passed to each argument is transparently obvious. And as soon as you get a recursion loop or calls from external libraries, the answer becomes undecidable ( pow($v, $n) { return ($n == 1) ? $v : pow($v, $n-1) * $v; } has an indeterminate type to the compiler).

I don't agree with making code more verbose to help the compiler. Isn't the point of programming languages to take needless complexity away from the programmer? If we wanted to tell the computer everything, we'd be using assembly language.

You assume that the types add verbosity to the language just to get performance gain. Static types let you discover problems with your code as well. Telling my compiler that I intend to return an array of doubles from this function and it being able to tell me that foo("2", 1)[0] is a double, not an array of doubles is something that helps me keep typos, errors and knock-on effects of changing internal functions (which potentially affects the correctness of the callers) to a minimum.

Essentially static types are a really basic set of unit tests. They say "Hey - this function is going to return a double. So unit test it so that it can never return an array, or a file pointer".

But unlike hand-written unit tests, it's a complete set of unit tests: the check covers all inputs and guarantees that the function will return something of the declared type.
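A rough runtime analogue of that claim, sketched in Python (the returns decorator is hypothetical, standing in for what a static checker proves without running the code at all): a unit test checks one input, while the type contract fires on every call.

```python
# A hypothetical runtime stand-in for a static return-type guarantee: the
# decorator checks the contract on *every* call, whereas a unit test covers
# exactly the inputs someone thought to write down.

def returns(expected_type):
    def wrap(fn):
        def checked(*args, **kwargs):
            result = fn(*args, **kwargs)
            assert isinstance(result, expected_type), (
                f"{fn.__name__} returned {type(result).__name__}, "
                f"not {expected_type.__name__}")
            return result
        return checked
    return wrap

@returns(float)
def half(x):
    return x / 2

print(half(5))       # the contract holds on this call and every other

def test_half():     # a unit test, by contrast, covers exactly one input
    assert half(4) == 2.0
```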

> I've done Java for a while professionally.

I feel sorry for you. Java is a terrible language, and I can see how, if you think all statically typed languages are like Java and all dynamic ones are like Ruby, you'd conclude that static languages suck. But before you condemn all static languages, I'd like to point out that not all of them are so clumsy and verbose for so little benefit.

-- evildictaitor, Sun, 06 Jan 2013 19:45:37 GMT

I don't buy the types-as-safety argument. Types are very, very poor code contracts. You can enforce contracts in dynamic languages too. Static types also give you only a poor man's immutability: so what if I can't change int a into "foo"? I can change it into 6, and that might totally change the behavior of the program. Erlang is an example of a language that is dynamic but has first-class support for contracts, and its immutability is completely enforced. It is a dynamic language used to build routing and switching equipment that is expected to be totally impervious to anything.

I don't really believe type problems cause many bugs; it's more the contents of the types (passing bad values) that cause bugs, and statically typed languages don't help there. Again, code contracts help, and immutability makes it a bit easier to reason about the behavior of the code.

I've also done C# professionally (not nearly as long as Java), and while I agree it is a better language than Java, it feels like a dated approach; it's too much like Java. And I prefer Java, with all its memory management, over C++. That covers all the mainstream static languages, I think.

> I don't buy the types-as-safety argument. Types are very, very poor code contracts. You can enforce contracts in dynamic languages too. Also, immutability.

You can, but experience shows that most people using dynamic languages don't spend the time they gain from writing code faster on writing better unit tests and other contracts.

Type safety is a ubiquitous set of basic unit tests on your code that is hard to opt out of. Indeed, Spec# goes further and tries to build even better contracts into the language, essentially heading in completely the opposite direction from dynamic languages.

I'm not saying that static languages fix all your problems. It's possible to make really bad logic bugs in static languages, but types catch a lot of the basic typos and the knock-on problems caused by refactoring internal functions.

You need unit tests and security reviews for code written in static languages and in dynamic languages alike, but in the absence of formal security procedures and unit tests, dynamic languages tend to come out vastly worse in security and correctness reviews.

-- evildictaitor, Sun, 06 Jan 2013 19:57:39 GMT

Again, all you are arguing for is an extremely limited form of code contracts. Adding a bunch of complexity to a language to implement a safety feature poorly is not really a good idea.

All your apparent vulnerabilities affect static languages as well, and I don't really see an argument for how duck typing magically makes something more robust. In fact, I would argue the opposite: by having less code to worry about, you can write more robust code, and duck typing helps you write less code. There is some interesting literature on the academic side showing problems along these lines in static languages.

> Again, all you are arguing for is an extremely limited form of code contracts. Adding a bunch of complexity to a language to do something other languages do, poorly, is not really a good idea.

On the contrary, I think making types more powerful makes code better. That's why, when I write code in C/C++, I use __in, __inout, const, __notnull and all sorts of other annotations to make my code better. I run static analysis tools to find cases where a null dereference or buffer overflow might occur; they find these algebraically, rather than requiring me to have thought of them in advance in unit tests.

That's why I really love Spec#. It's a way of building your unit tests inline with your code - and not only do your unit tests get auto-tested on each build, but the tests are algebraic (and therefore complete) rather than ad-hoc, and your compiler gets the opportunity to omit checks that it doesn't need because they are contractually verified.

> All your apparent vulnerabilities affect static languages as well

They only affect them where data becomes typeless. SQL injection happens because you're mixing SQL code with attacker-controlled data. A statically typed approach would be to say that SQL code simply doesn't compose with an attacker-controlled string, and hence no SQL injection is ever possible.
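A sketch of that non-composability idea in Python (SqlQuery is hypothetical, not a real library API): attacker-controlled strings simply don't compose with SQL text, and can only enter as bound parameters.

```python
# Hypothetical sketch: a query type that never composes with raw strings.
# Attacker-controlled data can only enter as a bound parameter, never as SQL text.

class SqlQuery:
    def __init__(self, sql: str, params: tuple = ()):
        self.sql = sql          # trusted SQL text, written by the programmer
        self.params = params    # untrusted values, kept separate from the text

    def __add__(self, other):
        # Composing with anything but another SqlQuery is a type error,
        # so "SELECT ..." + user_input simply cannot happen.
        if not isinstance(other, SqlQuery):
            raise TypeError("SQL only composes with SQL, not with strings")
        return SqlQuery(self.sql + other.sql, self.params + other.params)

    def bind(self, value) -> "SqlQuery":
        # The only door for untrusted data: a placeholder plus a parameter.
        return SqlQuery(self.sql + " ?", self.params + (value,))

query = SqlQuery("SELECT * FROM users WHERE name =").bind("Robert'); DROP TABLE users;--")
print(query.sql)     # the attack string travels as data, never as code
print(query.params)
```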

Similarly, if HTML were strongly typed and you wrote an AST out to the browser instead of an HTML string, XSS wouldn't exist. Silverlight may be dead, but it never had XSSes, for the simple reason that nobody in their right mind would try to dynamically build a Silverlight app on the fly and glue attacker-controlled strings directly into the code.
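The same idea for markup, sketched in Python with a hypothetical Element node type: attacker-controlled text is a text node in the AST, escaped at render time, so it can never be parsed as tags.

```python
# Hypothetical sketch: building markup as an AST instead of by string
# concatenation. Attacker-controlled text is a *text node*, so it is escaped
# on render and can never become markup.

from html import escape

class Element:
    def __init__(self, tag, *children):
        self.tag = tag
        self.children = children  # child Elements or plain-text strings

    def render(self) -> str:
        inner = "".join(
            c.render() if isinstance(c, Element) else escape(c)
            for c in self.children
        )
        return f"<{self.tag}>{inner}</{self.tag}>"

comment = Element("p", "User wrote: ", "<script>alert(1)</script>")
print(comment.render())  # the payload comes out as inert escaped text
```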

If your data never landed in the filesystem as a typeless blob next to your code, but instead went fully typed into your database, then arbitrary-upload vulnerabilities wouldn't exist either.

My point is that you get better security by having stronger types. Dynamic languages decouple the unit tests and security non-composability requirements from your code, so developers have to maintain them separately, and too many just don't have them at all, leading to worse and less secure code.

-- evildictaitor, Sun, 06 Jan 2013 20:11:42 GMT

I really don't buy it at all. First of all, dynamic languages can be (and often are) strongly typed. I would argue Ruby is much more strongly typed than C, with all its type-punning shenanigans.

If you want to see what robust dynamic languages can do, again, look at Erlang. It is far more robust than anything you can throw at me about static typing. Duck typing just makes more sense for OOP.

All this complexity exists in static languages to implement what duck typing already does; they do it in a manner that is far less powerful, and then come up with ugly hacks like templates or if-type-then-cast to overcome the mess.

Unfortunately C++ somehow won out over Smalltalk, and Java (and by extension C#) duplicated the poor designs of those languages. Because of that, I had to waste hundreds and hundreds of hours of my life in taxonomy meetings where the same two opinionated people duke out their perfect object model for every stupid feature, when I could have been programming.

The dynamic-language community wasn't helping either, implicitly conceding that their languages aren't systems programming languages and acting totally apathetic about performance, but the JS wars helped a bit to show that dynamic languages can be pretty damn fast. So I'm hopeful.

The part that doesn't map well is metaprogramming (e.g. PHP's eval()), but that's to be expected. It's a small part of it all, and, guess what, I've yet to meet a static language that doesn't have some kind of metaprogramming (or a hacky way to accomplish something similar), because metaprogramming is often useful.

> I really don't buy it at all. First of all, dynamic languages can be (and often are) strongly typed. I would argue Ruby is much more strongly typed than C, with all its type-punning shenanigans.

C is not a strongly typed language. Nobody claimed it was. Even C++ is only barely strongly typed.

> If you want to see what robust dynamic languages can do, again I would look at Erlang. Far more robust than anything you can throw at me about static typing. Duck typing just makes more sense for OOP.

If we're fighting over what weird crazy languages can do, I'll see your Erlang and raise you Haskell and Spec#.

> All this complexity exists in static languages to implement what duck typing can do, and they do it in a manner that is incredibly less powerful and come up with ugly hacks like templates or if-type-then-cast to overcome the mess.

Implicit duck typing is possible in strongly typed languages. It is no more a unique feature of dynamic languages than garbage collection is.
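One concrete instance: Python's typing.Protocol provides statically checkable structural ("duck") typing, so the claim can be demonstrated directly (a minimal sketch; Quacks, Mallard and Brick are invented names).

```python
# Compile-time duck typing ("structural typing") via typing.Protocol: a class
# satisfies Quacks by shape alone, with no inheritance and no explicit
# interface declaration. Static checkers such as mypy verify this at analysis
# time; runtime_checkable lets us demonstrate the structural match here.

from typing import Protocol, runtime_checkable

@runtime_checkable
class Quacks(Protocol):
    def quack(self) -> str: ...

class Mallard:            # never mentions Quacks anywhere
    def quack(self) -> str:
        return "quack"

class Brick:
    pass

def poke(d: Quacks) -> str:   # a static checker rejects poke(Brick()) up front
    return d.quack()

print(isinstance(Mallard(), Quacks))  # True: structural match
print(isinstance(Brick(), Quacks))    # False
```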

> I had to waste hundreds and hundreds of hours of my life in taxonomy meetings where the same two opinionated people duke out their perfect object model for every stupid feature, when I could have been programming.

You seem to be under the (false) impression that boring and ineffective design meetings are the sole preserve of companies that use statically typed languages, occur more commonly in them, or are somehow a feature of them. This is not true.

I've watched hopeless PHP design teams spend days in rooms discussing how to write code before bothering to try stuff out. On the other hand, I've watched C# teams in other companies communicate through the code itself: someone requests an interface (e.g. ILogin), which the front-end person codes against while the backend programmer implements it simultaneously.

The fact that you know two bad C++ programmers doesn't make C++ a bad language (there are much better reasons why C++ is a bad language). And the fact that you're a good Ruby programmer doesn't mean you wouldn't write better, faster, more secure code in a different language.

It is possible to write an x86 emulator in Haskell. Does this mean Haskell programs are generally as fast as C ones (since Haskell can then run anything C can run)? No, of course not.

And you can write a Haskell interpreter in C. Does this mean that C is beautiful, strongly typed, garbage collected and functional, since C can do everything Haskell can do? Again, an absurd statement.

The fact that it is possible to convert one language to another doesn't mean the former gets any of the benefits of the latter.

-- evildictaitor, Sun, 06 Jan 2013 20:48:07 GMT

PHP is weakly typed. "Weakly typed" means the language does bad(tm) things instead of failing (at compile time or at runtime) when it encounters a type mismatch. I'm pretty sure any scary example from PHP you can come up with takes advantage of this.

Ruby and Python are fairly strongly typed, but dynamic. And you can have static but weakly typed, like pointers in C: the type of a pointer is merely a suggestion. Heh. Also, rich type systems, where a string is not just a f'ing pointer, are helpful. This is where the REAL robustness problems occur: not in duck typing, and certainly not in types-as-values. Duck typing has almost no real disadvantage that I can see, and types-as-values can be considered syntactic sugar at best.

I don't disagree that weak typing is a bad idea. I think you might be arguing against weak typing, which is commonly confused with dynamic typing, because many dynamic languages happen to be somewhat weakly typed (PHP and JavaScript are no exception). So let's just agree that strongly typed languages are preferable.
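The distinction shows up in a couple of lines of Python, which is strongly but dynamically typed: a type mismatch fails loudly at runtime instead of being silently coerced the way PHP or JavaScript would coerce it.

```python
# Strong vs. weak typing, illustrated in Python (strongly but dynamically
# typed): a type mismatch fails loudly at runtime instead of being coerced.

try:
    result = "2" + 3           # PHP would coerce this to 5; JavaScript to "23"
except TypeError as e:
    result = f"TypeError: {e}" # Python refuses to guess

print(result)

# "Dynamic" only means checks happen at runtime; it says nothing about
# coercion. Explicit conversion is fine:
print(int("2") + 3)
```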

> Guess what: I've yet to meet a static language that doesn't have some kind of metaprogramming (or a hacky way to accomplish something similar), because metaprogramming is often useful.

C/C++ doesn't have runtime metaprogramming.

Also, I've never needed it (other than for reverse-engineering other people's binaries), so I dispute that metaprogramming does anything that can't be done better by other programming constructs.

-- evildictaitor, Sun, 06 Jan 2013 20:50:28 GMT

It has LoadLibrary, which lets you add code at runtime. This is how it gets around not having metaprogramming.

-- Bass, Sun, 06 Jan 2013 20:51:31 GMT

Neither is almost anything, but realistically it's available on every mainstream and even obscure OS (though it might not be named the same thing). If you don't have it, a lot of things (plugins, etc.) become a lot harder to do.

> Neither is almost anything, but realistically it's available on every mainstream and even obscure OS (might not be named the same thing). If you don't have it, a lot of things (plugins, etc.) become a lot harder to do.

It only becomes hard to add native-code plugins to a closed-source project. If you're open source, you just build the plugin into the source code and recompile (or add it via the command line to your compiler and it gets #if-included in).

Also, most programs don't need plugins, and for those that do, the plugin interface is a very small part of the design. It certainly doesn't determine the language you use to write the code, or whether unit tests are a central feature of the language or something you can choose to ignore to your later regret.

And if that weren't enough, LoadLibrary isn't an eval, and isn't really equivalent to adding dynamic code to the codebase. It's dynamically adding more static code, which is quite different. Nobody in their right mind generates a library from a string of runtime data just so they can LoadLibrary it in.
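The difference reads clearly in Python (used here purely as an illustration): importlib dynamically chooses which pre-built, static code to load, much like LoadLibrary, while eval() manufactures code out of runtime data.

```python
# The distinction sketched in Python: importlib loads code that already existed
# as a static artifact, while eval() manufactures code out of runtime data.

import importlib

# LoadLibrary-style: dynamically choosing WHICH static code to load.
mod_name = "math"                        # decided at runtime...
mod = importlib.import_module(mod_name)  # ...but the code was fixed at build time
print(mod.sqrt(16.0))

# eval-style: the code itself is runtime data; if user_input were attacker
# controlled, the attacker would be writing your program.
user_input = "2 + 2"
print(eval(user_input))
```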

-- evildictaitor, Sun, 06 Jan 2013 21:11:50 GMT

Also, about obscure languages: they show how you can do dynamic languages right. Erlang's primary weakness is that it decided to copy the syntax of Prolog, which makes it very unapproachable for beginners.

You have languages like Ruby, which isn't obscure at all and has beautiful syntax. I mean, it's all over the place these days, especially for new development. It has a much better syntax than Erlang and it's also very strongly typed. Ditto for Python. Very well-designed languages with, unfortunately, very slow runtimes. That makes them useless for some things, which is incredibly unfortunate.

Then you have JS, which is less pretty, though people have made it pretty via CoffeeScript. And JS is very fast, much faster than any dynamic language before it, because some people made what seems to be the first serious attempt at speeding up a dynamic language.

At some point you will have the niceness of Ruby and Python, the robustness of Erlang, and the speed of JavaScript merge. That will be the language of the future.

I see no major difference between loading a library and eval, other than that they work with different languages. You are, in fact, adding code to your process either way; it just happens that one of them is machine code. It's still code.

> Also the thing about obscure languages, it shows how you can do dynamic languages right. Erlang's primary weakness is that it decided to copy the syntax of Prolog. That makes it very unapproachable for beginners.

> You have language like Ruby which isn't that obscure at all and have syntax that read almost like poetry. I mean, it's all over the place these days, especially for new development. It has a much better syntax than Erlang and it's also very strongly typed. Ditto for Python. Very well designed languages, with unfortunately, very slow runtimes. That makes them useless for some things, which is incredibly unfortunate.

> Then you have JS which is less pretty, but people have made it pretty via CoffeeScript. But JS is very fast, much faster than any dynamic language ever was.

> At some point you will have the niceness of Ruby and Python, the robustness of Erlang, and the speed of JavaScript merge. That will be the language of the future.

You say that, but then you accuse statically typed languages of being equivalent to the dreadful C++ and Java, missing out on the elegance and simplicity of Haskell.

I'm not arguing against dynamic languages because I think that PHP is dreadful (it really, really is, by the way) or because I think that C++ and Java are the languages God himself would use (he really, really wouldn't). I'm arguing it because I feel that building unit tests into the language via formalized contracts is the only way to get people to write them at all, and unit tests are the difference between good and bad code.

Forcing crappy developers to write unit tests, and having the compiler immediately abort and tell them when they do something wrong, is critical to maintaining a healthy codebase. And making it easy for developers to see where semantics have changed slightly when refactoring, so that refactoring is easy rather than painful, is crucial to building systems that are effective.

Static types are a way of saying that writing unit tests is part of writing code. Too many people think unit tests are optional, or something we need a full-time "tester" to write. Contracts and types let you find typos immediately, make code more immediately understandable when reading, and make it easier to algebraically prove correctness.

That's why I like static languages, and that's why I feel that going from C# to Spec# is a step forward, while going from C#/ASP.NET to PHP is a step backward.

> I see no major difference between loading a library and eval, other than working with different languages. You are in fact adding code to your process both ways, it just happens that one is machine code. It's still code.

It is not code that is highly coupled to runtime data. That's the difference.

In a strongly typed dynamic language, I again don't see how changing a type1 into a type2 is going to cause anything other than a type-mismatch exception (or even a compile-time error) unless type1 and type2 are both ducks. The whole "unit test" argument is totally bogus. Static types are not helping you at all; strong types are. You aren't getting type safety from static languages. You are just getting more complicated code.

Duck typing basically does the work of adding interfaces to your code so you don't have to think about it. That's all. You can implement it in a static language with a *-ton of interfaces, something like 2^n interfaces, where n is the number of methods/properties (one interface per subset of members). That would be ugly (especially for big classes), but it's doable. Some "pretty OO" designs seem to approach that, with their interface soup. See the Android SDK. On what planet is that preferable to duck typing?

> In a strongly typed dynamic language, again I don't see how changing a type1 to a type2 is going to cause anything other than a type mismatch exception or even a compile-time error unless type1 and type2 are both ducks.

Because all primitive types duck-type to each other by default. An int is a double is a string is an array is an object.

> Duck typing is basically doing the work of adding interfaces to your code so you don't have to think about it. That's all. You can implement it in static languages with a *-ton of interfaces,

Duck typing can be done automatically by an IDE and can be built into a statically typed language. Duck typing is not a feature of, nor the sole preserve of, dynamic languages. Stop citing it as such.

-- Bass, Sun, 06 Jan 2013 21:58:09 GMT

It's kind of Ruby to helpfully report the typo of 5 instead of "5" only when you manage to generate a test case that goes exactly through that chain, rather than just telling you up front that you made a typo. Your customer also gets to appreciate the benefits of dynamic languages when he can't get his work done because of your "500 server error", caused by the error being detected only when he tries it.

> It's kind of Ruby to helpfully report the typo of 5 instead of "5" only when you manage to generate a test case that exactly goes through that chain, rather than just telling you that you did a typo up front. Your customer also gets to appreciate the benefits of dynamic languages when he can't get his work done because of your "500 server error" caused by the error being detected only when he tries it.

My point is that your statement was wrong. Not 20% wrong, not 90% wrong. 100% wrong. So maybe try to be a little more accurate in your criticisms next time.

Yeah, sure, the magic of C#'s compiler catches bugs before you deploy. So you're saying that with C# I'd never need a static verification tool (like the lint tools for Ruby, or JSLint), because the compiler is just that awesome. Okay.

> System Error
>
> Wow, that shouldn't have happened
>
> We're sorry. Something has gone terribly wrong.
>
> If you would like to let us know about this, please Contact Us and let us know what happened.
>
> Reference:

Well, you should tell the Channel9 team that, because I've gotten this error something like 20 times in this thread so far, courtesy of their superior C# codebase. To be fair, this is the only website I regularly visit that is written in C# (they are hard to come by, with bigger ones like MySpace failing and all, despite their happy-customer C# code), so the sample size is small. But it doesn't sell the compiler's anti-bug technology to me very well.

Ruby OR C#, hopefully you have integration/unit tests that do more than just load your page. Hopefully you are using a static verification tool (!= the compiler), a code coverage tool, and a tool that runs your unit tests BEFORE you deploy (realistically, BEFORE you check in). None of this is ever optional in the real world(tm)!

But really, if you want code contracts, use code contracts (you can, even in Ruby!). Don't try to pass off type declarations as if they were code contracts.

Anyway, the point of this conversation for me was to see whether I could learn anything new or interesting about static languages, something that objectively shows static languages are superior. That WOULD be interesting. But it's not really working. All I am getting is a genuine confusion between weakly typed and dynamically typed. So, uh, good luck.

-- Bass, Sun, 06 Jan 2013 22:33:21 GMT

@Bass: Can you point to a dynamically typed native language out there? Note: dynamically typed, not dynamic.

-- Ion Todirel, Mon, 07 Jan 2013 11:23:59 GMT

The bit that annoys me about this type of discussion is that most dynamic languages give you less flexibility and choice than statically typed languages to do what you want.

In C#, for instance, if you want values rather than variable-sites to have types, use the "object" keyword, or just use an interface:
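The C# snippet that belongs here is missing; as a hedged sketch of the idea (every parameter behaving like C#'s object, so the types live on the values rather than on the variable sites), a hypothetical untyped add looks like this in Python:

```python
# Hypothetical sketch of the untyped "add" under discussion: every parameter
# behaves like a C# "object", so types live on the values, not the sites.

def add(a, b):
    return a + b   # dispatches on whatever types arrive at runtime

print(add(2, 3))       # arithmetic when the values are numbers
print(add("2", "3"))   # concatenation when they are strings: the silly
                       # mistake discussed below still runs happily
```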

but if you do that, you still don't get compile-time or immediate IDE feedback when you write add("2", "3"), meaning the gap between making a silly mistake and getting the opportunity to fix it grows. It also raises the barrier to refactoring your code, since when you change internal stuff it's now much harder to quickly see where an error was caused, because errors can surface far from your typo.

The static-versus-dynamic debate is a debate about choice, and about whether some forms of basic error checking (itself a rudimentary form of unit testing) are a first-class citizen of the language. Spec# takes this further and builds proper unit tests, with algebraic solvers, into the language, to the improvement rather than the detriment of programmer productivity, because in Spec# the cost of writing unit tests shrinks yet further, to just those tests that are too complex to write as requires/ensures contracts.

Dynamically typed languages pull the other way. They say that variable sites shouldn't be able to "ensure" or "require" features of the thing that sits inside them. This puts a greater burden on the compiler to infer the type and ensure type safety internally (assuming the language is even compiled; most are interpreted, taking a double hit on speed).

Increasingly, C# gives you the most flexibility of all these languages to program how you like. Use imperative programming if you want, or functional programming if you prefer. Use compiler-based assertions to pick up common typos and logic errors, and use the dynamic keyword for when it's actually useful.

Indeed, with Spec# you can bring more of your testing into your code, making it possible for the compilation to "know" that you've tested for, and proved the impossibility of, certain code paths. And since in real life hardly anyone unit-tests their code, this is a way of bringing testing to normal developers, and of stopping testing from being "the thing you get someone else to do just before you launch the product".

The future of languages is having static types where possible and dynamic types for the few times when it's actually needed.