
Actually, that's not the case. As already mentioned by PhrostyMcByte above, here's the quote from Microsoft's FAQ:

apps will get deployed on end-user devices as fully self-contained natively compiled code (when .NET Native enters production), and will not have a dependency on the .NET Framework on the target device/machine.

This would mean it gets compiled like C or C++. You would still need the .NET redistributable for any libraries you reference, just as you have with C++ libraries or DLLs in traditional Windows development.

I see your point - libs will still be needed, whether the .NET Framework or individual DLLs. However, from the comments on the article (yes, I RTFM!!!) it seems you will be able to statically link everything together into a single, self-contained executable.

Correct me if I am mistaken, but I'm pretty sure that if they are using the backend they are skipping the lexing and parsing steps and going straight to the generation of the intermediate representation. That would mean that there is no generated C++ code to see.

That is precisely what they announced. No correction needed. They use that C++ backend to emit code for specific processor architectures (and core counts) and do global optimizations.

The raw speed of the code might actually diminish, since the .NET runtime could have optimized it better for the specific environment (CPU model, available RAM, phase of the moon, etc.). On the other hand, startup would benefit - no more need to just-in-time compile, and no memory needed for compiling. Then again, the runtime might use some cycles to further optimize code during execution, whereas with this approach the code won't change any further. In any case, great for instant startup.

Well, the ART preview native compiler on Android 4.4 is on-device, so it could compile to native on the device, but I expect Google will accelerate that step by precompiling on their servers, taking device characteristics into account. Microsoft could do that too if they want.

The raw speed of the code might actually diminish since the .NET runtime could have optimized it better for the specific environment (CPU model, available RAM, phase of the moon, etc).

MS announced that developers will still need to pass hints to the compiler about which architecture, CPU core count, available memory, etc., to compile for. You can (cross-)compile to multiple architectures.

This technology is already at work when deploying apps for Windows Phone 8: Developers pass IL code to the store, native compilation is performed per device type in the cloud (CPU architecture, OS version, memory,...) and the binary is passed to the device.

The raw speed of the code might actually diminish since the .NET runtime could have optimized it better for the specific environment (CPU model, available RAM, phase of the moon, etc).

I hate to break it to you, but the original Pentium is now obsolete. Compiling for a specific CPU variant doesn't help much these days. I'm also unaware of any JIT compiler that adjusts the code for available RAM. You might have a point about the phase of the moon.

Basically you're citing the standard tropes used in defense of JIT. Theoretically it can make some difference, but when I ask for benchmarks showing code on a JIT running faster than straight-to-binary, all I hear is crickets.

Pentium? Do you really think x86 evolution ended with the Pentium?!? Here's a _subset_ of cases that differ on current designs and can make a performance difference:

CMOV (conditional move): can be beneficial for some processors while mostly useless for others.

Macro-op fusion. Some processors fuse a compare-type instruction with the conditional branch that depends on the comparison. Which types of compare can be fused with which types of conditional branch differs between designs.
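To make the CMOV point concrete, here's a minimal C sketch (function names are my own): the ternary form may compile to a conditional move or to a compare-and-branch depending on the target tuning, while the second variant is branch-free by construction, which can win when the branch is unpredictable.

```c
#include <stdint.h>

/* Depending on -march/-mtune, compilers emit either CMOV or a
 * compare-and-branch for this pattern on x86. The C semantics are
 * identical either way. */
int32_t select_max(int32_t a, int32_t b)
{
    return (a > b) ? a : b;
}

/* Explicitly branchless variant using a mask derived from the
 * comparison result: all-ones if a > b, all-zeros otherwise. */
int32_t select_max_branchless(int32_t a, int32_t b)
{
    int32_t mask = -(int32_t)(a > b);
    return (b & ~mask) | (a & mask);
}
```

Both forms compute the same result; only the generated instructions differ.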

Do you have any numbers? GCC has a bunch of different cost models for scheduling on different CPUs and regularly gets new ones added. That said, I'm not sure I've ever seen a vast amount of difference, and less so recently than in the GCC 3 days of PIII versus P4, which had very different costs.

Also, fun thing: with multiversioning, GCC can now compile multiple function versions using different instruction-set subsets and switch at load time.
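What multiversioning (e.g. GCC's `target_clones` attribute) automates can be sketched by hand: probe a CPU feature once and pick an implementation through a function pointer. Everything below is illustrative, not GCC's actual mechanism - the probe is a stub and the "SIMD" variant is a placeholder:

```c
#include <stddef.h>

/* Portable scalar sum. */
static long sum_scalar(const int *v, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += v[i];
    return s;
}

/* Stand-in for a vectorized variant; a real one would use intrinsics. */
static long sum_simd(const int *v, size_t n)
{
    return sum_scalar(v, n);
}

/* Hypothetical feature probe; a real build might call something like
 * __builtin_cpu_supports() here. */
static int cpu_has_wide_vectors(void)
{
    return 0;
}

/* Choose the implementation once, based on the probe. */
long (*sum_dispatch(void))(const int *, size_t)
{
    return cpu_has_wide_vectors() ? sum_simd : sum_scalar;
}
```

GCC's load-time version does the same selection via an ifunc resolver, with no hand-written dispatch code.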

According to the article, the .NET Native runtime is a (not yet complete) implementation of .NET. This means that Wine + .NET Native = a Microsoft-built .NET runtime on Linux. This is good news because this may be a way to take those .NET technologies missing from Mono, such as WPF, and still use them on Linux.

Another reason this is good news: we're one step closer to being able to develop Windows installers in .NET. Lately I've been using NSIS, and it is the most stupid, idiotic language.

LINQ isn't missing from Mono:
http://stackoverflow.com/quest... [stackoverflow.com]
All the WPF stuff definitely is, but a good chunk of .NET 3.5 and up is implemented. They haven't been assaulted by attack-lawyers yet, either.

I skimmed over the links, but I probably just missed it. So apps take 60% less time to start, and they use 15% less memory. What about run-time performance? How much faster are they when executing?

During runtime, a .NET app already runs compiled code; this just saves the JIT compilation step.

However, they also announced (in a later session at //build/) that the new compilers (including the JITs) will take advantage of SIMD. For some application types this can allegedly lead to serious (as in 60%) performance gains. Games were mentioned.

Well, that depends. JIT needs to be *really* fast. That limits the optimization it can do. Pre-compilation to native allows more processing time for optimizations between the CIL and the machine code than a JIT can really afford.

Many years ago there was an R&D project inside a large tech company. It was exploring many of the hot research topics of the day, topics like mobile code, type based security, distributed computing and just in time compilation using "virtual machines". This project became Java.

Were all these ideas actually good? Arguably, no. Mobile code turned out to be harder to do securely than anyone had imagined, to the extent that all attempts to sandbox malicious programs of any complexity have repeatedly failed. Integrating distributed computing into the core of an OO language invariably caused problems due to the super leaky abstraction, for instance, normal languages typically have no way to impose a deadline on a method call written in the standard manner.

Just in time compilation was perhaps one of the worst ideas of all. Take a complex memory and CPU intensive program, like an optimising compiler, and run it over and over again on cheap consumer hardware? Throw away the results each time the user quits and do it all again when they next start it up? Brilliant, sounds like just the thing we all need!

But unfortunately the obvious conceptual problems with just in time compilers did not kill Java's love for it, because writing them was kind of fun and hey, Sun wasn't going to make any major changes in Java's direction after launch - that might imply it was imperfect, or that they made a mistake. And it was successful despite JITC. So when Microsoft decided to clone Java, they wanted to copy a formula that worked, and the JITC concept came along for the ride.

Now, many years later, people are starting to realise that perhaps this wasn't such a great idea after all. .NET Native sounds like a great thing, except it's also an obvious thing that should have been the way .NET worked right from the start. Android is also moving to a hybrid "compile to native at install time" model with the new ART runtime, but at least Android has the excuse that they wanted to optimise for memory and a slow interpreter seemed like the best way to do that. The .NET and Java guys have no such excuses.

That's rather a bizarre claim, considering intermediate code and p-code have been around for decades, and virtual machines just as long. There's nothing particularly unique about Java (or .NET, for that matter).

You miss the point entirely. The vast majority of CPU time in most applications is spent in a relatively few leaf subroutines. What the JIT does is just compile those bits that are found to be CPU intensive.

In tests I did some time ago with the early compilers, .NET code was actually faster than C implementing the same algorithm. The reason is that it can perform global optimizations, inlining aggressively. Sure, that can be done with C (and you do not even need macros), but it takes extra work, slows down the compiler if too much is put into header files, and programmers usually miss some of the routines that need inlining.

Modern generational garbage collectors are also faster than malloc/free, and do not suffer fragmentation.

Delaying compilation makes it architecture-neutral: the same distributable serves 32-bit, 64-bit, ARM, etc. What is needed is to cache the results of previous compiles, which leaves a slight but usually negligible start-up penalty.

Compiling all the way to machine code at build time is an archaic C-grade idea that became obsolete thirty years ago for most common applications.

The reason is that it can perform global optimizations, in-lining aggressively.

So can all semi-modern C++ compilers. This is a compiler technology, not a language concern.

Modern generational garbage collectors are also faster than malloc/free, and do not suffer fragmentation.

Perhaps true, but this ignores the fact that C++ can effectively bypass heap allocation completely for programmer-defined hot spots. Sure, this pushes the optimisation work on to the programmer rather than the compiler, but it still means a significant performance win. Java can't do this to anything like the same degree.
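The "bypass the heap" technique the parent alludes to can be sketched in C as a bump-pointer arena: one upfront block (often on the stack), pointer-bump allocation in the hot path, and free-everything-at-once. No per-object malloc/free at all. Names and layout here are my own:

```c
#include <stddef.h>

/* Bump-pointer arena over a caller-supplied block. Allocation is a
 * bounds check plus a pointer increment; there is no per-object free. */
typedef struct {
    unsigned char *base;
    size_t cap;
    size_t used;
} arena_t;

static void *arena_alloc(arena_t *a, size_t n)
{
    if (a->used + n > a->cap)
        return NULL;            /* arena exhausted */
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

/* Release everything at once by rewinding the bump pointer. */
static void arena_reset(arena_t *a)
{
    a->used = 0;
}
```

A production version would also round `used` up for alignment; the point is that the hot loop never touches the general-purpose allocator.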

The reason is that it can perform global optimizations, in-lining aggressively. Sure that can be done with C (and you do not even need macros), but it takes extra work, slows down the compiler if too much is put into header files, and programmers usually miss some of the routines that need in-lining.

Modern static compilers have been advancing too. Automatic inlining is now very well established. With link time optimization, this even happens across translation units.
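A minimal illustration of the inlining point: within one translation unit, a `static` helper like the one below is inlined at -O2, and with `-flto` the same happens even when the definition and its callers live in different files. The code itself is just a stand-in:

```c
/* A tiny accessor that an optimizing compiler inlines at -O2. With
 * -flto the same inlining happens across translation units. */
static int clamp01(int x)
{
    if (x < 0) return 0;
    if (x > 1) return 1;
    return x;
}

int clamp_sum(const int *v, int n)
{
    int s = 0;
    for (int i = 0; i < n; i++)
        s += clamp01(v[i]);     /* the call disappears after inlining */
    return s;
}
```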

You can stop pretending that our multicore processors with 64 gigs of ram can't handle them.

...and perhaps you can stop pretending that HLL bloatware has anywhere near the performance of hand-coded apps *on the same CPU*. It has nothing to do with GUIs per se, either. It's about frameworks with years and years of bloat, slow code in black boxes that can't be sped up (or often even bugfixed, as we have sadly come to learn), about ridiculous amounts of memory getting chewed up by library after library...

Memory and CPU power are there to be used, so why not take advantage of them? And what the hell is a hand-coded app? Or are you referring to programming against a runtime versus programming directly against the OS? And what does eschewing OO approaches mean? Are you talking about an application that encapsulates all its functionality without referencing any external resources or dependencies?

While I agree with you on most points, I don't think the last sentence is warranted -- because people aren't perfect. Just because you can write a piece of code doesn't mean you should. 1) If the problem has been solved exactly like you wanted, and the resulting code has 5 years of bug fixing behind it, it's probably a good idea to use it (a contrived example: are you going to rewrite Linux?). 2) Why waste time solving a problem that's already been solved by someone else (assuming it aligns with your specific needs)?

Memory and CPU power are there to be used so why not take advantage of it.

Battery life, for one. The less time a task takes, the less energy your application draws from the battery over the course of its running, and the less energy the screen draws from the battery while the user waits for it to finish.

You could write everything in assembly if you wanted to, and with careful optimization you could probably produce faster code, but it would take several orders of magnitude longer to do.

No. You can't do that unless the platform is locked down hardware-wise, and that's not been the case with the major OSes for quite some time now. The best tool -- to date -- for anything serious aimed at a major OS is C. By far. Not C++, not Objective-C, not C#, not asm... just C.

due to NIH syndrome

No. That's not it at all. I don't care where it was invented; that's a symptom, not the actual problem. The problem is that bringing in other people's code results in a loss of maintainability, quite often a loss of focus on the precise problem one is attempting to address, and a loss of understanding of exactly what is going on, which in turn leads to other bugs and performance shortcomings. OPC comes into play at multiple levels: attempts to manage memory for you; libraries; canned packages of every type; and "handy" language features that hide the details from you. NIH-because-it-wasn't-you just *looks* like the problem, but the real problem is what NIH code actually does to the end result, and that's a real thing, not a matter of "I don't like your style" or some personality syndrome.

If the goal is the highest possible quality, then the job has to be fully understood and carefully crafted from components you can service from start to finish, the only exceptions being where it *must* interface with the host OS. Even then you're likely to get screwed. Need UDP ports to work right? Stay away from OS X. Need file dialogs to handle complex selections? MS's were broken for at least a decade straight. Need font routines that rotate consistently? Windows would give it to you various ways depending on the platform. And so on.

Better to write your own code if you can possibly manage it. You know, so it'll work, and if your customer finds an error, so you can fix it instead of punting it into Apple or MS's lap.

It boggles the mind that people *still* use the term "bloated" simply because they are utilizing frameworks that might not be limited to just the exact set of things you need.

I use "bloated" when my version of something is 1 MB, and a friend's, with fewer lines of code, is 50 MB and runs the target functionality at a fraction of the speed, not to mention the loading and startup differences. It's not just about a library routine that isn't called (well, until there are a lot of them, or if they're very large... linkers really ought to toss those out anyway); it's primarily about waste in every function call, clumsy memory management that tries to be everything to everybody and ends up causing hiccups and brain farts at random times, libraries that bring in other libraries that bring in other libraries until you've got a house of a thousand bricks, where you only actually laid a few of them, and you have *no idea* of the integrity of the remaining structure. Code like that is largely out of your control. Bloated. Unmaintainable. Opaque. Unfriendly to concurrently running tasks.

Look at your average iOS application. 20 megs. 50 megs. Or more. For the most simpleminded shite you ever saw; it could have been implemented in 32 KB of good code and (maybe) a couple of megs of image assets. That's what I'm talking about, right there. Bloat. It's that zone where a craft is swamped by clumsy apprentices who think they understand a lot more than they do. Where one fellow creates beautiful, strong, custom furniture, and the other guy buys a $59.95 box from IKEA and turns a few cams. The good news is that there will always be a place for those who can really craft, because there's a never-ending source of challenges where crap just won't do. And despite rumors to the contrary, end users do know the difference -- especially once they've been exposed to both sides of the coin.

I'm known to be a low level person and professionally a C programmer. And I agree with you on some stuff.

However...

No, I won't use C to do something in 1 KB of memory and 3 weeks of coding; I will use Python in 10 MB of memory and 1 day of coding. Simply because my time costs more than 10 MB of memory. So stop demonising higher-level languages and accept that they have their perfectly legit uses as long as their limitations are understood. Keep in mind that if Android used C and not Java, we would have about 5 non-crap apps.

OK. Your time costs more than 10 MB of RAM. But does it cost more than 10 MB of RAM times 100 apps times one million users? In the latter case, the guys who collectively write the 100 apps are costing the users collectively an extremely large amount of resources (think money).
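Spelling out that multiplication (treating "MB" as MiB, which is my assumption), 10 MB per app times 100 apps times a million users is on the order of a petabyte of collectively consumed RAM:

```c
/* 10 MB per app x 100 apps x 1,000,000 users, in bytes. */
unsigned long long collective_waste_bytes(void)
{
    const unsigned long long per_app = 10ULL * 1024 * 1024; /* 10 MB */
    return per_app * 100ULL /* apps */ * 1000000ULL /* users */;
}
```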

It depends on how the app is going to be used. I program with C, C++, and Python, as appropriate.

I program on niche embedded devices which have 16 MB of RAM and still cost a lot of money for what they are. C is the only way there. However, through the development cycle we have a lot of bugs that get attributed to improper memory management - null dereferencing, memory leaks and the like. This makes the development cycle longer, which is acceptable.

Now, on the mobile market, it is implied that the consumer prefers a lot of relatively stable applications in a short period of time. The tradeoff for this...

The best tool -- to date -- for anything serious aimed at a major OS is C. By far. Not C++

Not only that, but it must only be written by a true Scotsman as well.

There are many major, serious, projects written in C++.

GCC, LLVM, Firefox, Qt, WebKit, the JVM, Libre/OpenOffice.

In fact, GCC recently switched from C to C++. Basically, C++ provides exactly the same machine model as C, except that it gives you a more programmable compiler and richer abstractions. There are very, very few places where it's worth using C instead.

When your app is going to be used by a billion people, potentially dozens of times a day, every extra millisecond it takes to load costs a man year. Every extra K it consumes costs a terabyte. You should pretend your work will be so popular if you want it to be so popular.

> You should pretend your work will be so popular if you want it to be so popular.

No you should not. This is amateur/hobbyist talk. When you have a billion customers, you will have plenty of money to optimize... or by then, you will be able to afford to keep plenty of "terabytes" online. Writing apps for billions of users when all you are likely to have is a few hundred/thousand users... is plain delusional and is inviting trouble.

Highly optimized, hyper-scalable code does not come for free. You will...

the fact is that in the vast majority of cases saving 5ms while expending 5 times the development effort

For something that takes 20 ms to execute, making it take only 15 ms will help your application update its view every vertical blank instead of every two vertical blanks.
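The arithmetic behind that: at 60 Hz, each vertical blank gives a budget of about 16.7 ms, so 20 ms of work spills into a second vblank (30 fps effective), while 15 ms fits in one (60 fps). A small helper (my own) to check the claim:

```c
/* How many vertical-blank intervals does a frame taking `work_ms`
 * consume at the given refresh rate? 20 ms at 60 Hz -> 2 vblanks
 * (30 fps effective); 15 ms -> 1 vblank (60 fps). */
int vblanks_needed(double work_ms, double refresh_hz)
{
    double budget_ms = 1000.0 / refresh_hz;  /* ~16.7 ms at 60 Hz */
    int n = 1;
    while (work_ms > n * budget_ms)
        n++;
    return n;
}
```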

due to NIH syndrome

One cause of NIH syndrome is disagreement with the licensing terms of the pre-existing libraries. Another is that the pre-existing libraries happen not to be ported to a platform that you plan to support.

Is C really that hard to develop in? After all, the chief advantage of C# isn't really C# itself, but the .NET libraries. C/C++ with good libraries strikes me as a reasonably good option. If I'm just going to end up compiling it down to machine code anyway, why bother with .NET at all? I get it if you have an existing code base you want to squeeze some more cycles out of, but if I were starting a new project tomorrow, give me one reason why a C# compiler is the way to go as opposed to C++.

You can't be serious! C is *substantially* lower-level than C#; you should only use C as a portable assembly language. I've spent decades writing assembly, C, and higher level languages and I'd pick C# over C in an eyeblink for anything that doesn't require access to the bare metal (well, personally I'd pick a functional language, but these days I work in industry...)

Otherwise, the performance of raw C is overrated. Or better: the developers who benefit most from C's performance are the ones who can't write algorithms. Also, developing reusable algorithms in C is a major PITA.

The small amount of your post that even makes sense is absurdly wrong. For fuck's sake, *nix kernels have been implementing complex process and cycle allocation algorithms for four decades now, almost all of it written in C. That's not even talking about various tools in userland that invoke fairly complex logic.

Your rant makes little sense - almost everyone upthread was talking about C/C++, not just C.

Which, in itself, makes little sense. Actually programming in C++ is vastly different from programming in C. In fact, the main criticisms of C (low-level, tedious, error-prone by-hand resource management, unsafe constructs, etc.) can almost entirely be avoided in C++ with no performance penalty. In fact, often with a performance gain.

It can't use an `int` as a key or value - it operates on pointers to something abstract. Meaning not only that that something has to be dynamically allocated, but that if it is small - like `int` or even `long` - the overhead of dynamic memory would (typically) quadruple memory consumption. Which is clearly why things like glib aren't used in kernel space.
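This is the usual argument for the intrusive containers kernels use instead: the link field is embedded in the object itself, so adding an element needs no separate node allocation, and the payload can be any size. A simplified sketch of the classic `container_of` pattern (not the actual Linux implementation):

```c
#include <stddef.h>

/* Intrusive singly linked list: the node lives inside the payload
 * struct, so pushing an element allocates nothing. */
struct slist_node {
    struct slist_node *next;
};

struct item {
    int value;
    struct slist_node link;   /* embedded, not pointed-to */
};

/* Recover the enclosing struct from a pointer to its embedded node. */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

static void slist_push(struct slist_node **head, struct slist_node *n)
{
    n->next = *head;
    *head = n;
}
```

The cost is that each object can only sit on the lists its struct declares links for - a tradeoff kernels accept for zero allocation overhead.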

If you're trying to write something for an 8-bit microcontroller or a retro video game console, a lot of C compilers aren't necessarily competitive with hand-tuned assembly in terms of size or speed of the object code. Or what optimizing compiler targeting the 6502 do you recommend?

This is actually fucking awesome. They've got native compilation of Win32/64 desktop and server apps on the road-map. You're right, nobody cares about the Windows Store, which is why they targeted those apps first (you know, developers, developers, developers and all that shit).

The FAQ [microsoft.com] clearly states that they're planning to propagate this feature to all .NET apps.

Desktop apps are a very important part of our strategy. Initially, we are focusing on Windows Store apps with .NET Native. In the longer term we will continue to improve native compilation for all .NET applications.

I'm guessing that means .NET 4.5+ apps, which in turn means Windows 8+. So here's hoping that Windows 9 is not gonna suck so many donkey balls.

The Native Image Generator (Ngen.exe) is a tool that improves the performance of managed applications. Ngen.exe creates native images, which are files containing compiled processor-specific machine code, and installs them into the native image cache on the local computer. The runtime can use native images from the cache instead of using the just-in-time (JIT) compiler to compile the original assembly.

Yes, it does produce native code.

No, it doesn't produce an executable, ready for redistribution.

I do not disagree with the approach, but there is still a difference. If done right, it might be a blessing: code is optimized for the local CPU. If done poorly (as MS likes to do sometimes), it might mean irreproducible bugs, performance regressions, or outright no effect at all if the cache gets corrupted somehow.

I agree with the essence of the statement, but it's written in childish absolutes such as "nobody" that obviously aren't true. Maybe he should have said, "The majority of .NET developers aren't doing Metro. When you expand support for this feature, then it'll be interesting to the rest of us." But some people live in a world that revolves around them and cry when they get left out. I hope this feature comes to the rest of the .NET platform, but I'm not going to cry about it.

Desktop apps are a very important part of our strategy. Initially, we are focusing on Windows Store apps with .NET Native. In the longer term we will continue to improve native compilation for all .NET applications.

I'm not your Google bitch, so you can figure out where that quote came from on your own yeah?

No need to use Mono for ARM. .NET has been supported on numerous architectures, including ARM, as part of Windows CE for years. Sure, it was only a subset of the full framework, but not for any technical reason except keeping the footprint small.

WinRT (not to be confused with Windows RT; we're talking about the API set now) does often feel like a waste of effort to me, although there is something to be said for identifying/creating a sandbox-friendly set of APIs to use in creating sandboxed software...

.NET apps compiled for "AnyCPU" will, technically, run just fine on Windows RT on ARM. The reason why you can't actually run such desktop apps is because it is blocked by signature verifier (any desktop app must be signed by MS to run on RT). It's a DRM thing, not a technical limitation.

Oh, and huge parts of Office use .NET these days, alongside the older native code. Ditto for VS and many other products.

It's already completely possible to run native or managed desktop apps on RT. You just need to "jailbreak" it first to remove the signature enforcement on user-mode full-trust binaries. RT 8.0 has been jailbroken since like a year ago...

The jailbreak for RT 8.1 is in development. Microsoft put a completely unjustifiable amount of effort (IMO) into making sure RT 8.1 sucks even more than RT 8.0, but nothing that complex is perfect. If you have a gen1 RT device (anything except a Surface 2 or Lumia 2520), you...

It'd be nice if MS would officially support loading standard Win32 applications, compiled for ARM, without jailbreaking, though.

Now, obviously they want to avoid a situation where a consumer tries to install a shrink-wrapped x86 program that won't run on a different architecture - but there are other means, e.g. distributing platform-specific binaries through the Windows Store.