My definition of a "systems" programming language pretty much goes like this, plus or minus a couple of nitpicky points:

A program, whether statically compiled or dynamic memory-wise, is completely self-contained when targeting a given execution platform, and it runs on the target OS at ANY patch level that claims to maintain compatibility.

The resultant program must also play nicely in kernel space if required, meaning the host language provides the mechanisms and semantics to accommodate kernel module/driver-style communication and enable sane, safe kernel citizenship.

BTW Charles, I'm still waiting for Niko Matsakis' Rust talk to become available, if you would be so kind :)

Honestly... Anders seems happier working on TypeScript (JavaScript) than being under non-stop pressure as the father of C# and .NET. Unless BUILD was just THAT good, there's no other way to seem that stoked.

He seems to be having fun again instead of needing to be so serious, boring, and safe, as with such a military-grade C# implementation. This seems way healthier for the old fella; it puts some spring back in his step.

What I'd like to see is something similar in nature to TS's type definition files, but more generic and JS-engine agnostic, so you could feed it straight to a JS engine. That way, near-perfect type information could be fed to Chakra to specialize the compiled code, so it emits a near-perfect, high-performance compiled version that the actual code can use as a prepared, specialized runtime, ready to rock whenever the code actually runs. Sure, JS is dynamic, but helping the compiler do its job where dynamism isn't required can't be bad; it's just being compiler-friendly, if it adds any value or benefit. Whenever it finally happens, you could almost enforce run-time security once JS engines support function freezing in the VM, since the templated types could carry the freeze keywords where we intend them, helping the runtime code stay unmodified and used only for its coded intention.
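To make the idea concrete, here's a minimal TypeScript sketch. The engine-side consumption of type hints is speculative (no JS engine does this today), and "function freezing in the VM" doesn't exist as described, but Object.freeze already approximates the runtime-immutability half of it, since functions are objects:

```typescript
// Hypothetical type hint, in the spirit of TypeScript's .d.ts declaration
// files: a signature an engine could (speculatively) use to specialize
// compiled code ahead of time. Erased at compile time, never called here.
declare function distanceHint(x1: number, y1: number, x2: number, y2: number): number;

// "Freezing" approximated today with Object.freeze: it locks the object's
// properties so runtime code can't swap the function out from under you.
const mathApi = Object.freeze({
  distance(x1: number, y1: number, x2: number, y2: number): number {
    return Math.hypot(x2 - x1, y2 - y1);
  },
});

// Attempts to replace the frozen function fail (silently, or with a
// TypeError in strict mode), keeping the runtime code unmodified.
console.log(Object.isFrozen(mathApi)); // true
console.log(mathApi.distance(0, 0, 3, 4)); // 5
```

Note Object.freeze is shallow and protects the binding, not the VM-level compiled code, so a real "freeze in the VM" keyword would be a stronger guarantee than this.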

Good to see this, since I apparently missed it... and I can't wait to get some more juicy brain-food à la Charles.

I asked the question about being able to offload compilation to the GPU. Of course, the kind of data today's GPUs are good at hardly fits compilation at all, but the point is that Windows NT was forward-looking, and GPUs are becoming more and more general-purpose, even gaining access to shared memory. With some forward thinking, there may be a way to have the MSBuild build system pre-process the code cleverly so we can compile on our thousands of GPU cores. Right now they're tailored toward uniform numeric data, sure, but maybe with a tweak here or there we could finally use all, or even a fraction, of that compute power to compile on GPUs instead of doing the whole compilation process on our CPUs. I'd be a very happy and very impressed camper, since my GPU sits relatively idle while compiling anyway, and it's not like I have a huge compilation farm. I rarely play a game AND compile at the same time, since they're both very resource-hungry tasks, so they're usually separate for me. So why not try to offload some computation to the GPU during the compilation process? We need Dave Cutler working jointly with AMD/NVIDIA on the problem.

I'm also glad they're planning to go parallel (it had better be without explicit threads, so the runtime can manage things). I missed this live session and was going to ask about any parallelism of the JS code inside the Chakra engine, since it SHOULD be smart enough, knowing the AST and data flow of the generated code, to intelligently create safe concurrency.
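A sketch of the kind of data-flow independence such an engine would have to detect. Today the developer has to express the independence explicitly (e.g. Promise.all, or workers for true parallelism); the hope above is that the engine could infer it from the AST. The function names here are invented for illustration:

```typescript
// Sum a half-open range of an array. Each call touches a disjoint slice,
// so two calls over non-overlapping ranges have no data dependence.
async function sumRange(nums: number[], lo: number, hi: number): Promise<number> {
  let total = 0;
  for (let i = lo; i < hi; i++) total += nums[i];
  return total;
}

async function parallelSum(nums: number[]): Promise<number> {
  const mid = nums.length >> 1;
  // No shared mutable state between the halves -> safe to overlap.
  // An AST/data-flow-aware engine could, in principle, discover this
  // independence itself instead of needing it spelled out.
  const [left, right] = await Promise.all([
    sumRange(nums, 0, mid),
    sumRange(nums, mid, nums.length),
  ]);
  return left + right;
}

parallelSum([1, 2, 3, 4, 5]).then((s) => console.log(s)); // 15
```

(Promise.all on its own only interleaves on one thread; actual multi-core execution would need the runtime to schedule the independent pieces on real parallel hardware, which is exactly the engine-managed part.)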

INTERESTINGLY, you can run a Linux kernel in JavaScript: http://bellard.org/jslinux/ (it takes a minute or a few to start up), if you want to see how crazy it is. You need to know some Linux command-line commands to get around, but the kernel seems to actually be running.

You can use "uname -a", "ls", "dir", "pwd", etc. Have a look in the /bin, /usr/bin or /usr/sbin directories to see the programs you can run; there's even a little C compiler, so enjoy!

Yeah... the biggest and most stupid problem is that type info is used during development to help prevent bugs, and then all of it is lost in between, over the wire, only to be reconstructed afterwards in the engines... COMPLETELY redundant, since the type info is something like 92% the same on both sides.

This may be a bad idea, since it WOULD add more payload, but IDEs and browsers should work together on some type/function-signature/forward-declaration hinting script standard for browsers to use, based on the same type info devs code with, so that at least the type info wouldn't be lost over the wire for zero reason.

This all depends on whether the extra download size negates the time the engine spends parsing and optimizing without types versus with types provided. Basically, a function-signature definition include file would give the browser all the type info, as the skeleton of every function, to help the runtime avoid type checking and inference.
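A rough sketch of what such a "skeleton" include file could look like. No such browser standard exists; the format below is just TypeScript-style forward declarations, and the names (UserRecord, loadUser, renderUser) are invented for illustration:

```typescript
// Hypothetical signature-skeleton file fetched alongside a script:
// shapes and forward declarations only, no bodies.
interface UserRecord {
  id: number;
  name: string;
}

// Forward declaration only: tells the engine the shape of a function
// before the implementing script arrives. Erased at compile time.
declare function loadUser(id: number): UserRecord;

// The real script then matches the already-known skeleton, so in theory
// the engine could skip type inference at these call sites.
function renderUser(user: UserRecord): string {
  return `#${user.id}: ${user.name}`;
}

console.log(renderUser({ id: 7, name: "Ada" })); // "#7: Ada"
```

The skeleton itself compresses well (it's just names and primitive type tags), which is the crux of the payload-vs-parse-time trade-off mentioned above.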

In my opinion, types are being added as needed, and they're almost always needed; that sounds like the reason standards bodies began in the first place: to standardize common practice for interoperability. So yes, it seems a need exists here in some way or another. As long as it's optional for compatibility and provides a real boost somehow, it might be interesting.