Looking back – motivating .NET Core

First let’s look back to understand how the .NET platform was packaged in the past. This helps to motivate some of the decisions and ideas that resulted in the creation of .NET Core.

.NET – a set of verticals

When we originally shipped the .NET Framework in 2002, there was only a single framework. Shortly after, we released the .NET Compact Framework, which was a subset of the .NET Framework that fit within the footprint of smaller devices – specifically, Windows Mobile. The Compact Framework was a separate code base from the .NET Framework. It included the entire vertical: a runtime, a framework, and an application model on top.

Since then, we’ve repeated this subsetting exercise many times: Silverlight, Windows Phone, and most recently Windows Store. This leads to fragmentation because the .NET platform isn’t a single entity but a set of platforms, owned by different teams and maintained independently.

Of course, there is nothing wrong with offering specialized features in order to cater to a particular need. But it becomes a problem if there is no systematic approach and specialization happens at every layer with little to no regard for corresponding layers in other verticals. The outcome is a set of platforms that share APIs only by virtue of having started from a common code base. Over time this causes more divergence unless explicit (and expensive) measures are taken to converge APIs.

What is the problem with fragmentation? If you only target a single vertical then there really isn’t any problem. You’re provided with an API set that is optimized for your vertical. The problem arises as soon as you want to target the horizontal, that is, multiple verticals. Now you have to reason about the availability of APIs and come up with a way to produce assets that work across the verticals you want to target.

Today it’s extremely common to have applications that span devices: there is virtually always a back end that runs on the web server, there is often an administrative front end that uses the Windows desktop, and a set of mobile applications that are exposed to the consumer, available for multiple devices. Thus, it’s critical to support developers in building components that can span all the .NET verticals.

Birth of portable class libraries

Originally, there was no concept of code sharing across verticals. No portable class libraries, no shared projects. You were essentially stuck with creating multiple projects, linked files, and #if. This made targeting multiple verticals a daunting task.

In the Windows 8 timeframe we came up with a plan to deal with this problem. When we designed the Windows Store profile we introduced a new concept to model the subsetting in a better way: contracts.

Originally, the .NET Framework was designed around the assumption that it’s always deployed as a single unit, so factoring was not a concern. The very core assembly that everything else depends on is mscorlib. The mscorlib provided by the .NET Framework contains many features that can’t be supported everywhere (for example, remoting and AppDomains). This forces each vertical to subset even the very core of the platform. That, in turn, made it very complicated to build a class library experience that lets you target multiple verticals.

The idea of contracts is to provide a well factored API surface area. Contracts are simply assemblies that you compile against. In contrast to regular assemblies, contract assemblies are designed around proper factoring. We deeply care about the dependencies between contracts and that they have a single responsibility instead of being a grab bag of APIs. Contracts version independently and follow proper versioning rules; for example, adding APIs results in a new version of the assembly.

We’re using contracts to model API sets across all verticals. The verticals can then simply pick and choose which contracts they want to support. The important aspect is that verticals must support a contract either wholesale or not at all. In other words, they can’t subset the contents of a contract.

This allows reasoning about the API differences between verticals at the assembly level, as opposed to the individual API level that we had before. This aspect enabled us to provide a class library experience that can target multiple verticals, also known as portable class libraries.

Unifying API shape versus unifying implementation

You can think of portable class libraries as an experience that unifies the different .NET verticals based on their API shape. This addressed the most pressing need, which is the ability to create libraries that run on different .NET verticals. It also served as a design tool to drive convergence between verticals, for instance, between Windows 8.1 and Windows Phone 8.1.

However, we still have different implementations – or forks – of the .NET platform. Those implementations are owned by different teams, version independently, and have different shipping vehicles. This makes unifying API shape an ongoing challenge: APIs are only portable when the implementation is moved forward across all the verticals but since the code bases are different that’s fairly expensive and thus always subject to (re-)prioritization. And even if we could do a perfect job with converging the APIs: the fact that all verticals have different shipping vehicles means that some part of the ecosystem will always lag behind.

A much better approach is unifying the implementations: instead of only providing a well factored view, we should provide a well factored implementation. This would allow verticals to simply share the same implementation. Convergence would no longer be something extra; it’s achieved by construction. Of course, there are still cases where we may need multiple implementations. A good example is file I/O, which requires using different technologies based on the environment. However, it’s a lot simpler to ask each team owning a specific component to think about how their APIs work across all verticals than to retroactively provide a consistent API stack on top. That’s because portability isn’t something you can provide later. For example, our file APIs include support for Windows Access Control Lists (ACL), which can’t be supported in all environments. The design of the APIs must take this into consideration and, for instance, provide this functionality in a separate assembly that can be omitted on platforms that don’t support ACLs.

Machine-wide frameworks versus application-local frameworks

Another interesting challenge has to do with how the .NET Framework is deployed.

The .NET Framework is a machine-wide framework. Any changes made to it affect all applications taking a dependency on it. Having a machine-wide framework was a deliberate decision because it solves these issues:

It allows centralized servicing

It reduces disk space

It allows sharing native images between applications

But it also comes at a cost.

For one, it’s complicated for application developers to take a dependency on a recently released framework. You either have to take a dependency on the latest OS or provide an application installer that installs the .NET Framework along with the application. If you’re a web developer, you might not even have this option because the IT department tells you which version you’re allowed to use. And if you’re a mobile developer, you really have no choice other than the version the OS you target provides.

But even if you’re willing to go through the trouble of providing an installer in order to chain in the .NET Framework setup you may find that upgrading the .NET Framework can break other applications.

Hold on – aren’t we saying that our upgrades are highly compatible? We are. And we take compatibility extremely seriously. We have rigorous reviews for any changes made to the .NET Framework. And for anything that could be a breaking change we have dedicated reviews to investigate the impact. We run a compat lab where we test many popular .NET applications to ensure that we don’t regress them. We also have the ability to tell which .NET Framework the application was compiled against. This allows us to maintain compatibility with existing applications while providing better behavior for applications that opted into targeting a later version of the .NET Framework.

Unfortunately, we’ve also learned that even compatible changes can break applications. Let me provide a few examples:

Adding an interface to an existing type can break applications because it might interfere with how the type is being serialized.

Adding an overload to a method that previously didn’t have any overloads can break reflection consumers that never handled finding more than one method.

Renaming an internal type can break applications if the type name was surfaced via a ToString() method.
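The overload case in particular is easy to hit. Here is a minimal sketch (the Widget type and Probe method are hypothetical names, but the reflection behavior itself is standard):

```csharp
using System;
using System.Reflection;

// Hypothetical library type. In "v1" it shipped with a single Parse method;
// "v2" added an overload -- an additive, ostensibly compatible change.
public class Widget
{
    public int Parse(string text) { return text.Length; }
    public int Parse(string text, bool strict) { return text.Length; } // added in v2
}

public static class Program
{
    // Caller code written against v1, which assumed Parse had exactly one overload.
    public static string Probe()
    {
        try
        {
            // GetMethod(string) throws as soon as a second overload exists.
            MethodInfo m = typeof(Widget).GetMethod("Parse");
            return m.Name;
        }
        catch (AmbiguousMatchException)
        {
            return "ambiguous";
        }
    }

    public static void Main()
    {
        Console.WriteLine(Probe());
    }
}
```

Against v1 this code ran fine; against v2 the same call throws, even though no existing API was changed or removed.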

Those are all rare cases but when you have a customer base of 1.8 billion machines being 99.9% compatible can still mean that 1.8 million machines are affected.

Interestingly enough, in many cases fixing impacted applications is fairly trivial. But the problem is that the application developer isn’t necessarily involved when the break occurs. Let’s look at a concrete example.

You tested your application on .NET Framework 4, and that’s what you installed with your app. But one day, one of your customers installs another application that upgrades the machine to .NET Framework 4.5. You don’t know your application is broken until that customer calls your support. At this point, addressing the compat issue in your application is fairly expensive: you have to get the corresponding sources, set up a repro machine, debug the application, make the necessary changes, integrate them into the release branch, produce a new version of your software, test it, and finally release an update to your customers.

Contrast this with the case where you decide you want to take advantage of a feature released in a later version of the .NET Framework. At this point in the development process, you’re already prepared to make changes to your application. If there is a minor compat glitch, you can easily handle it as part of the feature work.

Due to these issues, it takes us a while to release a new version of the .NET Framework. And the more drastic the change, the more time we need to bake it. This results in the paradoxical situation where our betas are already fairly locked down and we’re pretty much unable to take design change requests.

Two years ago, we started to ship libraries on NuGet. Since we didn’t add those libraries to the .NET Framework, we refer to them as “out-of-band”. Out-of-band libraries don’t suffer from the problem we just discussed because they are application-local. In other words, the libraries are deployed as if they were part of your application.

This pretty much solves all the problems that prevent you from upgrading to a later version. Your ability to take a newer version is only limited by your ability to release a newer version of your application. It also means you’re in control of which version of the library is used by a specific application. Upgrades are done in the context of a single application, without impacting any other application running on the same machine.

This enables us to release updates in a much more agile fashion. NuGet also provides the notion of preview versions, which allow us to release bits without yet committing to a specific API or behavior. This supports a workflow where we can provide you with our latest design and – if you don’t like it – simply change it. A good example of this is immutable collections. It had a beta period of about nine months. We spent a lot of time trying to get the design right before we shipped the very first stable version. Needless to say, the final design – thanks to the extensive feedback you provided – is far better than the initial version.

Enter .NET Core

All these aspects caused us to rethink our approach to modeling the .NET platform moving forward. This resulted in the creation of .NET Core:

.NET Core is a modular implementation that can be used in a wide variety of verticals, scaling from the data center to touch based devices, is available as open source, and is supported by Microsoft on Windows, Linux and Mac OSX.

Let me go into a bit more detail on what .NET Core looks like and how it addresses the issues I discussed earlier.

Unified implementation for .NET Native and ASP.NET

When we designed .NET Native, it was clear that we couldn’t use the .NET Framework as the foundation for the framework class libraries. That’s because .NET Native essentially merges the framework with the application and then removes the pieces that aren’t needed by the application before generating the native code (I’m grossly simplifying this process here; for more details, take a look at this deep dive). As I explained earlier, the .NET Framework implementation isn’t factored, which makes it quite challenging for a linker to reduce how much of the framework gets compiled into the application – the dependency closure is just too large.

ASP.NET 5 faced similar challenges. Although it doesn’t use .NET Native, one of the goals of the new ASP.NET 5 web stack was to provide an XCOPY-deployable stack so that web developers don’t have to coordinate with their IT department in order to take dependencies on later versions. In that scenario it’s also important to minimize the size of the framework, as it needs to be deployed alongside the application.

.NET Core is essentially a fork of the .NET Framework whose implementation is also optimized around factoring concerns. Even though the scenarios of .NET Native (touch-based devices) and ASP.NET 5 (server-side web development) are quite different, we were able to provide a unified Base Class Library (BCL).

The API surface area for the .NET Core BCL is identical for both .NET Native and ASP.NET 5. At the bottom of the BCL we have a very thin layer that is specific to the .NET runtime. We currently have two implementations: one specific to the .NET Native runtime and one specific to CoreCLR, which is used by ASP.NET 5. However, that layer doesn’t change very often. It contains types like String and Int32. The majority of the BCL consists of pure MSIL assemblies that can be shared as-is. In other words, the APIs don’t just look the same – they share the same implementation. For example, there is no reason to have different implementations for collections.

On top of the BCL, there are app-model specific APIs. For instance, the .NET Native side provides APIs that are specific to Windows client development, such as WinRT interop. ASP.NET 5 adds APIs such as MVC that are specific to server-side web development.

We think of .NET Core as not being specific to either .NET Native or ASP.NET 5 – the BCL and the runtimes are general purpose and designed to be modular. As such, it forms the foundation for all future .NET verticals.

NuGet as a first class delivery vehicle

In contrast to the .NET Framework, the .NET Core platform will be delivered as a set of NuGet packages. We’ve settled on NuGet because that’s where the majority of the library ecosystem already is.

In order to continue our effort of being modular and well factored, we don’t just provide the entire .NET Core platform as a single NuGet package. Instead, it’s a set of fine-grained NuGet packages:

For the BCL layer, we’ll have a 1-to-1 relationship between assemblies and NuGet packages.

In addition, we’ve decided to use semantic versioning for our assembly versioning. The version number of the NuGet package will align with the assembly version.

The alignment of naming and versioning between assemblies and packages helps tremendously with discovery. It is no longer a mystery which NuGet package contains System.Foo, Version=1.2.3.0 – it’s provided by the System.Foo package in version 1.2.3.
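For illustration, a project can then reference BCL packages like any other dependency. In an ASP.NET 5-style project.json this might look as follows (the package versions and target framework moniker here are illustrative, not prescriptive):

```json
{
  "dependencies": {
    "System.Collections": "4.0.0",
    "System.IO.FileSystem": "4.0.0"
  },
  "frameworks": {
    "dnxcore50": { }
  }
}
```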

NuGet allows us to deliver .NET Core in an agile fashion. So if we provide an upgrade to any of the NuGet packages, you can simply upgrade the corresponding NuGet reference.

Delivering the framework itself on NuGet also removes the difference between expressing 1st party .NET dependencies and 3rd party dependencies – they are all NuGet dependencies. This enables a 3rd party package to express, for instance, that it needs a higher version of the System.Collections library. Installing this 3rd party package can now prompt you to upgrade your reference to System.Collections. You don’t have to understand the dependency graph – you only need to consent to making changes to it.
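To make the “all NuGet dependencies” point concrete, here is a sketch of what such a third-party package manifest could look like (the package name and version numbers are invented for illustration):

```xml
<!-- Hypothetical ThirdParty.Json.nuspec: the package declares that it
     needs at least version 4.0.11 of the System.Collections package,
     exactly as it would for any other library dependency. -->
<package>
  <metadata>
    <id>ThirdParty.Json</id>
    <version>2.1.0</version>
    <dependencies>
      <dependency id="System.Collections" version="4.0.11" />
    </dependencies>
  </metadata>
</package>
```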

The NuGet-based delivery also turns the .NET Core platform into an app-local framework. The modular design of .NET Core ensures that each application only needs to deploy what it needs. We’re also working on enabling smart sharing if multiple applications use the same framework bits. However, the goal is to ensure that each application logically has its own framework, so that upgrading it doesn’t interfere with other applications running on the same machine.

Our decision to use NuGet as a delivery mechanism doesn’t change our commitment to compatibility. We continue to take compatibility extremely seriously and will not perform API or behavioral breaking changes once a package is marked as stable. However, the app-local deployment ensures that the rare case where a change considered additive breaks an application is isolated to development time only. In other words, for .NET Core these breaks can only occur after you upgrade a package reference. At that very moment, you have two options: address the compat glitch in your application or roll back to the previous version of the NuGet package. But in contrast to the .NET Framework, those breaks will not occur after you’ve deployed the application to a customer or to the production server.

Enterprise ready

The NuGet deployment model enables agile releases and faster upgrades. However, we don’t want to compromise the one-stop-shop experience that the .NET Framework provides today.

One of the great things about the .NET Framework is that it ships as a holistic unit, which means that Microsoft tested and supports all components as a single entity. For .NET Core we’ll provide the same experience. We’ll create the notion of a .NET Core distribution. This is essentially just a snapshot of all the packages at the specific versions in which we tested them.

The idea is that our teams generally own individual packages. Shipping a new version of a team’s package only requires that the team test their component in the context of the components they depend on. Since you’ll be able to mix-and-match NuGet packages, there can obviously be cases where certain combinations of components don’t play well together. Distributions will not have that problem because all components are tested in combination.

We expect distributions to ship at a lower cadence than individual packages. We are currently thinking of up to four times a year. This allows for the time it will take us to run the necessary testing, fixing, and sign-off.

Although .NET Core is delivered as a set of NuGet packages, that doesn’t mean you have to download packages each time you create a project. We’ll provide an offline installer for distributions and also include them with Visual Studio, so that creating new projects will be as fast as today and won’t require internet connectivity during development.

While app-local deployment is great for isolating the impact of taking dependencies on newer features it’s not appropriate for all cases. Critical security fixes must be deployed quickly and holistically in order to be effective. We are fully committed to making security fixes as we always have for .NET.

In order to avoid the compatibility issues we have seen in the past with centralized updates to the .NET Framework it’s essential that these only target the security vulnerabilities. Of course, there is still a small chance that those break existing applications. That’s why we only do this for truly critical issues where it’s acceptable to cause a very small set of apps to no longer work rather than having all apps run with the vulnerability.

Foundation for open source and cross platform

From past experience we understand that the success of open source is a function of the community around it. A key aspect to this is an open and transparent development process that allows the community to participate in code reviews, read design documents, and contribute changes to the product.

Open source enables us to extend the .NET unification to cross platform development. It actively hurts the ecosystem if basic components like collections need to be implemented multiple times. The goal of .NET Core is having a single code base that can be used to build and support all the platforms, including Windows, Linux and Mac OSX.

Of course, certain components, such as the file system, require different implementations. The NuGet deployment model allows us to abstract those differences away. We can have a single NuGet package that provides multiple implementations, one for each environment. However, the important part is that this is an implementation detail of this component. All the consumers see a unified API that happens to work across all the platforms.
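One way to picture this is a single package that carries one shared reference (contract) assembly plus one implementation per environment. The folder layout below is an illustrative sketch, not an exact convention:

```text
System.IO.FileSystem/            (single NuGet package, one public API)
├── ref/
│   └── System.IO.FileSystem.dll       reference assembly everyone compiles against
└── runtimes/
    ├── win/
    │   └── System.IO.FileSystem.dll   Windows implementation
    └── unix/
        └── System.IO.FileSystem.dll   Linux/Mac implementation
```

Which implementation is deployed is decided at restore/publish time; consumers only ever see the single API defined by the reference assembly.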

Another way to look at this is that open source is a continuation of our desire to release .NET components in an agile fashion:

Open Source offers quasi real-time communication for the implementation and overall direction

Releasing packages to NuGet.org offers agility at the component level

Distributions offer agility at the platform level

Having all three elements allows us to offer a broad spectrum of agility and maturity.

Although we’ve designed .NET Core so that it will become the foundation for all future stacks, we’re very much aware of the dilemma of creating the “one universal stack” that everyone can use.

We believe we found a good balance between laying the foundation for the future while maintaining great interoperability with the existing stacks. I’ll go into more detail by looking at several of these platforms.

.NET Framework 4.6

The .NET Framework is still the platform of choice for building rich desktop applications and .NET Core doesn’t change that.

For Visual Studio 2015 our goal is to make sure that .NET Core is a pure subset of the .NET Framework. In other words, there won’t be any feature gaps. After Visual Studio 2015 is released, our expectation is that .NET Core will version faster than the .NET Framework. This means there will be points in time when a feature is only available on .NET Core-based platforms.

We’ll continue to release updates to .NET Framework. Our current thinking is that the release cadence will roughly be the same as today, which is about once a year. In these updates, we’ll bring the innovations that we made in .NET Core to the .NET Framework. We’ll not just blindly port all the feature work, though – it will be based on a cost-benefit analysis. As I pointed out, even additive changes to the .NET Framework can cause issues for existing applications. Our goal is to minimize API and behavioral differences while not breaking compatibility with existing .NET Framework applications.

There are also investments that are exclusively being made for the .NET Framework such as the work we announced in the WPF Roadmap.

Mono

Many of you asked what the .NET Core cross platform story means for Mono. The Mono project is essentially an open source re-implementation of the .NET Framework. As such, it shares the richness of the APIs with the .NET Framework but it also shares some of its problems, specifically around the implementation factoring.

Mono is alive and well with a large ecosystem on top. That’s why, independent of .NET Core, we also released parts of the .NET Framework Reference Source under an open source friendly license on GitHub. This was done to allow the Mono community to close the gaps between the .NET Framework and Mono by using the same code. However, due to the complexity of the .NET Framework, we’re not set up to run it as an open source project on GitHub. In particular, we’re unable to accept pull requests for it.

Another way to look at it: The .NET Framework has essentially two forks. One fork is provided by Microsoft and is Windows only. The other fork is Mono which you can use on Linux and Mac.

With .NET Core we’re able to develop an entire .NET stack as a full open source project. Thus, having to maintain separate forks will no longer be necessary: together with the Mono community we’ll make .NET Core great for Windows, Linux and Mac OSX. This also enables the Mono community to innovate on top of the leaner .NET Core stack as well as taking it to environments that Microsoft isn’t interested in.

Windows Store & Windows Phone

Both the Windows Store 8.1 and Windows Phone 8.1 platforms are much smaller subsets of the .NET Framework. However, they are also a subset of .NET Core. This allows us to use .NET Core as the underlying implementation for both of these platforms moving forward. So if you’re developing for those platforms you are able to directly consume all innovations without having to wait for an updated framework.

It also means that the number of BCL APIs available on both platforms will be identical to the ones you can see in ASP.NET 5 today. For example, this includes non-generic collections. This will make it much easier for you to bring existing code that runs on top of the .NET Framework into the touch-based application experience.

Another obvious side effect is that the BCL APIs in Windows Store and Windows Phone are fully converged and will remain converged, as both platforms are now powered by .NET Core.

Sharing code between .NET Core and other .NET platforms

Since .NET Core forms the foundation for all future .NET platforms, code sharing with .NET Core-based platforms has become friction-free.

This raises the question how code sharing works with platforms that aren’t based on .NET Core, such as the .NET Framework. The answer is: it’s the same as today, you can continue to use portable class libraries and shared projects:

Portable class libraries are great when your common code is platform-independent as well as for reusable libraries where the platform-specific code can be factored out.

Shared projects are great when your common code has a few bits of platform-specific code, since you can adapt it with #if.

For more details on how to choose between the two, take a look at this blog post.
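As a concrete illustration of the shared-project approach, here is a minimal sketch. The StoragePaths class and the WINDOWS_UWP symbol are hypothetical names; only the #if mechanism itself is the point:

```csharp
using System;

// Hypothetical file in a shared project, compiled separately into each
// head project. WINDOWS_UWP is an illustrative symbol -- real projects
// define their own conditional compilation symbols per head.
public static class StoragePaths
{
    public static string GetDataDirectory()
    {
#if WINDOWS_UWP
        // Windows Store head: use the app's isolated local folder.
        return Windows.Storage.ApplicationData.Current.LocalFolder.Path;
#else
        // Desktop / server heads: fall back to the current directory.
        return System.IO.Directory.GetCurrentDirectory();
#endif
    }
}

public static class Program
{
    public static void Main()
    {
        Console.WriteLine(StoragePaths.GetDataDirectory());
    }
}
```

Each head project compiles the same source file, but the defined symbols select the platform-specific branch at compile time.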

Moving forward, portable class libraries will also support targeting .NET Core based platforms. The only difference is that if you only target .NET Core based platforms you don’t get a fixed API set. Instead, it’s based on NuGet packages that you can upgrade at will.

If you also target at least one platform that isn’t based on .NET Core, you’re constrained by the APIs that can be shared with it. In this mode, you’re still able to upgrade NuGet packages but you may get prompted to select higher platform versions or completely drop support for them.

This approach allows you to co-exist in both worlds while still reaping the benefits that .NET Core brings.

Summary

The .NET Core platform is a new .NET stack that is optimized for open source development and agile delivery on NuGet. We’re working with the Mono community to make it great on Windows, Linux and Mac, and Microsoft will support it on all three platforms.

We’re retaining the values that the .NET Framework brings to enterprise-class development. We’ll offer .NET Core distributions that represent a set of NuGet packages that we tested and support together. Visual Studio remains your one-stop shop for development. Consuming NuGet packages that are part of a distribution doesn’t require an internet connection.

We acknowledge our responsibility and will continue to ship critical security fixes without requiring any work from the application developer, even if the affected component is exclusively distributed as a NuGet package.

Questions or concerns? Let us know by commenting on this post, by sending a tweet to @dotnet, or by starting a thread in the .NET Foundation forums. Looking forward to hearing from you!

Like everyone else, I'm excited about .NET Core, but something bothers me… You say that one of the motivations for .NET Core is to solve the issue that updating the machine-wide framework can break applications that were built against an older version; but this scenario mostly affects desktop applications, and if I understand correctly, the desktop is not in the scope of .NET Core… so it doesn't solve anything for desktop apps.

I would cast my vote for, over time, coming up with new .NET Core-based versions of existing libraries and project types not already in scope for .NET Core. For example, MVC and Entity Framework are both releasing new versions that are not backwards compatible but are either directly evolved from, or at least "highly inspired by", their previous versions, and compatible with the newer, more modern .NET Core.

It would be great to see similar efforts for WPF, System.Drawing, Windows Services, etc. In some cases they may still be Windows-only (probably the case for both of these examples), but that's OK — in the long run it will allow Microsoft to stop maintaining two versions of .NET, and allow us to share code without PCLs and only deal with NuGet dependencies. As with the MVC and EF updates, each app could choose if/when to bite the bullet and accept the major breaking change.

So what about alternative platforms for distribution of .NET Core? NuGet is Windows only. I think it will be essential to figure out a best practice for distribution to Mac and Linux-based systems, not just Windows-based package managers.

FYI: cross-platform .NET Native – it's probably not possible to port the Visual C++/Windows version to Linux. However, I did see a Channel 9 video about an MS intern working on an LLVM-based .NET precompilation tool. His work went into an open source project that is still active. That might be another route to a Linux/MacOS version of .NET Native.

In 2002, shortly after the Microsoft .NET Framework version 1.0, there was the Shared Source Common Language Infrastructure initiative: what are the main differences and improvements of releasing .NET Core as open source compared to that old initiative?

I find the framework-via-NuGet approach very intriguing and am dealing with a similar situation on top of .NET, where I maintain a framework made up of several dozen libraries that should support individual release cycles. I like the name 'distribution' very much for describing the "works in a tested fashion" release of the combined NuGet packages. Was looking for a descriptive name myself 🙂

Now, another question in this regard – I haven't yet analyzed the dependency graph within .NET Core, but I'm assuming you will have dependencies between some of the packages? Will you specify the upper boundary for your dependencies in the NuGet packages, or will you leave it open until there is an actual breaking change that stops the next major version from working with the dependent package?

Like other commenters here, I'm still very confused about how (or even if) .NET Core will ever impact the desktop. You talk about cross-platform development a lot, but you can't deploy a Windows Store app to Linux or OSX (or even Windows Vista/7), so what's the plan for these platforms? That's something I've never seen an answer for.

I'd like to know a lot more about how the "System.Windows" and "System.Windows.Forms" .NET libraries will migrate to become cross-platform to run also on Linux and MacOS. Will these libraries be renamed to e.g. "System.NET.Forms" to remove the previous tie-in to the "Windows" OS? Who will develop and support the underlying presentation layer <-> OS bindings for Linux and MacOS? If we are talking about Linux, then which OS GUI(s) will be supported? Will X.Window manager, KDE, XFCE, E17, LXDE be supported for example? It seems to me that with over 20 different GUI/ Window managers for different Linux variants, it could be a difficult job for MS to support them all with a mapping to "System.Windows.Forms" or whatever its successor will be called.

You guys are getting this all wrong. I don't see why you think MS is responsible for creating a platform for everything.

If they do, great, but that's a giant ask. Is it supposed to be better than, or at least as good as, native on every platform? If it's not going to be better (like WPF), then what's the point? The world is flush with second-rate frameworks already. The key thing here is a solid framework to base your app on, then interop with whatever the target's best-in-breed native UI view is, that is, if you want to be world class right now. If you just want max reach for your CRUD or forms app, switch to HTML5. This is meant for something way better than that, IMHO.

Sounds great, but you created one of the main problems yourself (starting with 4.5) by not allowing the .NET Framework to be installed side-by-side, so applications could choose between 4.0, 4.5, 4.5.1, and 4.5.2 in a shared environment, and by doing things like keeping 4.0 as the displayed version number even though 4.5 is far from it, shipping a different CLR under the same version number, and silently upgrading the CLR on your dev machine just by installing VS 2012+, which can still target 4.0, but not really 4.0.

Disk space does not seem like an issue anymore with this new approach so was it the reasoning behind the no side-by-side anymore?

Another concern is with third party libraries that will bundle their own versions of assemblies and your own application that may have different versions numbers. Will that lead to something like having two System.Collections.dll files in your deployment folder? Little return of DLL hell?

How long does NuGet keep back-level versions of libraries? If I have to do maintenance on a one- or two-year-old project, will I still be able to get the libs, or must I save them as part of my source repository?

For those of us with no current interest in cross platform: You don't mention the WinRT stack. When will GDI and Win32 go away as a foundational layer for the .NET Framework? Where do we go to find out info on the future path of WinRT and the full desktop? The obvious questions about the roadmap for the .NET Framework side of the house and the desktop are still out there. The info provided in "The Roadmap for WPF" is not much more than a list of bug fixes. This article at least outlines the plan for .NET Core. We need the same for the .NET Framework, please.

I'm glad to see the direction you guys are going with more agile deliverables and per-app library deployments. However, I agree with the sentiment expressed by others that the desktop needs to be supported in .NET Core. At minimum this means console applications. I'd like to see MS develop a new, cross-platform UI technology that is not WPF and does not use XAML (which is a horrible, overly verbose language).

There's a reason WinForms and WPF coexist after so many years: writing WPF applications is extremely powerful but overly complex. I also don't think WinStore/Metro apps fill this niche, as the sandbox these live in is not really appropriate for many enterprise apps, and they're obviously not cross-platform ready. Perhaps HTML + Razor syntax (with C#) would work for a desktop technology as well as it's worked for ASP.NET MVC?

I also have to say that MS really needs to stop lying about its support for "non-favored" platforms. It's clear that you guys are phasing out development of the .NET Framework proper, so you should be HONEST about that rather than giving empty pledges of ongoing support. Your developers have seen you do this over and over again: Silverlight, EF6, VB6 (just look at how many people are still angry at you after all these years!)

You're not sparing anyone's "feelings" by telling these white lies. It makes it hard for managers to make decisions about what dev platforms to use, and it hurts developers' careers by giving them false hope that their skill set will remain relevant without having to learn newer technologies. Tell people what support will really be like going forward for the .NET Framework: security fixes and critical bugs, with few if any new features.

Now that you've decided on .NET Core, why not open source Silverlight (which you aren't going to promote anyway)? The community can port its code to run on top of .NET Core and everyone will be happy.

@Ron: How is moving away from COM based objects and the win32 based C API done on a large scale?

The same as in small scale — very carefully 🙂

Seriously though, there is nothing wrong with using COM and Win32; in many cases they are simply the operating system APIs that we have to use to implement our libraries, such as System.IO.FileSystem.

The key design goal is to keep the exposed APIs independent of the OS so that it's possible to support the same APIs on different operating systems. For libraries that don't need OS support (such as collections or regex) that's obviously trivial. For OS-related services it means we need to think about how we factor the functionality to deal with optional features that can't be supported everywhere. The post provides the example of ACLs.
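One way to picture this factoring is to keep the core type OS-neutral and ship the optional capability as a separate layer that can fail cleanly where the OS has no equivalent concept. The sketch below is purely illustrative; `FileHandle` and `FileAclExtensions` are hypothetical names, not actual .NET Core APIs:

```csharp
using System;
using System.Runtime.InteropServices;

// OS-neutral core type: identical surface on every platform.
public class FileHandle
{
    public string Path { get; }
    public FileHandle(string path) { Path = path; }
}

// The optional feature (ACLs) lives in a separate set of extension methods,
// so the core API never varies; the optional part throws
// PlatformNotSupportedException where the OS lacks the capability.
public static class FileAclExtensions
{
    public static string GetAccessControl(this FileHandle file)
    {
        if (!RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
            throw new PlatformNotSupportedException("ACLs are not available on this OS.");
        return "ACL for " + file.Path; // placeholder for a real ACL query
    }
}
```

Callers that never touch the ACL extensions compile and run everywhere; callers that do must be prepared for the feature to be absent.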

@Thomas Levesque: You say that one of the motivations for .NET Core is to solve the issue that updating the machine-wide framework can break applications that were built against an older version; but this scenario mostly affects desktop applications, and if I understand correctly, the desktop is not in the scope of .NET Core… so it doesn't solve anything for desktop apps.

That's a fair point and I'm quite sympathetic to that viewpoint. However, performing fundamental changes to a complex product like .NET is always a journey. Whenever we try to make too many drastic changes everybody loses, because the chance of us getting it right is slim.

That's why it's best to make the innovations in areas that (1) immediately add value and (2) are somewhat safe, so that we can adjust if things turn out to be problematic.

As explained in the post, the .NET Framework is a complex system, but we're fully committed to moving it forward. We understand that the current story isn't addressing all the needs, but we think of .NET Core as a stepping stone in the right direction.

@Jeremiah Gowdy: Will we be able to write console applications using .NET core / .NET native?

That's something we're working on. We'll absolutely have the Console APIs available in .NET Core. In fact, I believe this one is coming online on GitHub in a couple of weeks.

The other part that you'll need for running console apps is an application model that allows you to start & run your code. Currently, the only runner we have is ASP.NET's console application template which is mostly geared for web workers.

As far as .NET Native goes: it's on the list, but we're currently focused on scenarios that are relevant for touch-based clients.

For .NET Core we're currently focusing our resources on completing the support for touch-based devices and ASP.NET related scenarios.

We believe that we're currently serving our desktop customers best by investing in the existing .NET Framework stack, as this ensures that our investments are immediately consumable from existing code. For brand new code, such as immutable collections, we also make sure that it will work on top of the .NET Framework.

@Yiannis Berkos: Let's say that I want to create a desktop application targeting windows, mac and linux. How can this be done?

I'm not aware of any plans to provide a cross-platform UI framework, if that's what you're asking for.

In general, I don't think we believe in using the exact same bits on all platforms. From personal experience I'd say the goal of any successful cross-platform strategy is maximizing sharing while not compromising the experience toward the lowest common denominator.

So I'd say: think of your app as a burger, representing the layering. The bottom bun will require OS-specific implementations, but we'll probably provide a good chunk of it. The top layer (the UI) is also very likely to require leveraging OS- and experience-specific functionality. The beefy part in the middle represents your business logic, and you should make sure you can share most of it across all platforms. OS-specific concepts should be pushed either down or up (via inversion of control).

Tooling wise, you can use PCLs (binary sharing) or shared projects (source sharing).

There are also additional tools which you can use to maximize sharing in the top layer. For example, Xamarin.Forms is a thin abstraction over the OS UI stack that allows you to share the UI code that can be the same across platforms. It also allows you to special-case certain OSs.
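The "push OS-specific concepts down via inversion of control" idea can be sketched in a few lines of C#. This is a minimal illustration under assumed names (`IFileStorage`, `OrderService`, `InMemoryStorage` are all hypothetical), not any real .NET API:

```csharp
using System.Collections.Generic;

// Shared "beefy middle" layer: pure business logic with no OS dependencies.
// IFileStorage is the abstraction; each platform-specific "bun" supplies its
// own implementation (Win32, POSIX, WinRT, or an in-memory fake for tests).
public interface IFileStorage
{
    void Write(string name, string contents);
}

public class OrderService
{
    private readonly IFileStorage _storage;

    // The OS-specific concept is pushed down via constructor injection.
    public OrderService(IFileStorage storage) { _storage = storage; }

    public void SaveOrder(string id)
    {
        // Business logic stays portable; only IFileStorage varies per OS.
        _storage.Write(id + ".json", "{ \"id\": \"" + id + "\" }");
    }
}

// One possible bottom bun: a trivial in-memory implementation.
public class InMemoryStorage : IFileStorage
{
    public readonly Dictionary<string, string> Files = new Dictionary<string, string>();
    public void Write(string name, string contents) { Files[name] = contents; }
}
```

`OrderService` can then live in a PCL (binary sharing) or a shared project (source sharing) and compile unchanged on every vertical, while each platform head wires in its own `IFileStorage`.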

@Luigi Bruno: In 2012, after the Microsoft .NET Framework version 1.0, there was the Shared Source Common Language Infrastructure initiative: what are the main differences and improvements of making .NET Core released to open source if compared to that old initiative?

I assume you meant 2002, not 2012. The Shared Source Common Language Infrastructure (SSCLI), also known as "Rotor", wasn't open source; it was shared source. Technically, open source means that the license is an OSI-approved license. The .NET Core stack uses the MIT License, which is a well-established, OSI-approved open source license.

Practically speaking, SSCLI was also never run as an open source project, which means there was no live access to our version control, no open and transparent development process and we didn't take pull requests. On top of that, the license disallowed using the code to build a different product.

@Michael: Now, another question in this regard and I haven't yet analyzed the dependency graph within .NET Core, but I'm assuming you will have dependencies between some of the packages?

That's correct.

Will you specify the upper boundary for your dependencies in the NuGet packages or will you leave it open until there is an actual breaking change that stops the next major version from working with the dependent package?

Generally speaking, we always ship preview releases first, which don't commit to a final shape, and we will generally perform breaking changes between those releases. By the time we ship a stable package we're fairly confident that there are no breaking changes.

So we'll leave it open, as we don't do breaking changes. If there are breaks, then we consider that a bug and would rather fix the bug than ship updated packages that limit the version range. The latter results in a game that the ecosystem can't win.
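For reference, NuGet can express both policies in the package manifest: a bare version means "this version or higher", while bracket notation caps the range. A hypothetical `.nuspec` fragment (package IDs and version numbers are made up for illustration):

```xml
<!-- Hypothetical .nuspec fragment; IDs and versions are examples only. -->
<dependencies>
  <!-- Open lower bound: resolves to 4.0.10 or anything higher. -->
  <dependency id="System.Runtime" version="4.0.10" />
  <!-- Explicit range: at least 4.0.10 but below 5.0.0. -->
  <dependency id="System.Collections" version="[4.0.10,5.0.0)" />
</dependencies>
```

The answer above corresponds to the first form: leaving the upper bound open and treating any break as a bug to fix rather than a range to cap.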

@Stephen: You talk about cross-platform development a lot, but you can't deploy a windows store app to linux or OSX.

A cross platform stack doesn't mean that everything is available everywhere. That's simply not feasible without a massive cost and/or compromising the experience in a fundamental way.

Instead, cross platform means that you componentize the stack and for each component make it as broadly available as possible. Some components (such as the runtime) truly need to go everywhere in order to have a story in the first place. Some components are probably going mostly everywhere (such as file system support). Some components might only go to a few platforms (such as ACLs). And some components might only go to a single platform (such as WinRT support).

On top of that, you need tooling that allows you to reason about the availability of components across all the platforms. Your job as an application developer then is to factor your application according to its cross-platform needs. See my previous reply to Yiannis Berkos for more details.

@Rob Sherratt: I'd like to know a lot more about how the "System.Windows" and "System.Windows.Forms" .NET libraries will migrate to become cross-platform to run also on Linux and MacOS.

See my previous replies to Stephen and Yiannis Berkos. I'm not aware of any plans to provide a cross platform implementation for the UI technologies. I'm also doubtful that's what developers actually want.

I think it's possible to provide a thin abstraction layer (such as Xamarin.Forms), but a true shared UI technology would be pointless. It would simply mean that the resulting apps look and behave equally foreign on all devices.

@vibou: Disk space does not seem like an issue anymore with this new approach so was it the reasoning behind the no side-by-side anymore?

It's tempting to think that way, but that's actually no longer true. Many consumer devices, such as tablets or hybrid machines, have SSDs which, compared to regular disk drives, have a much more limited capacity.

Secondly, the consumer experience isn't really great if, on a new machine, each app they install has to bring a several-hundred-megabyte framework with it. Also, each time we service it, NGen needs to update the native images, which also binds CPU resources, depending on how many side-by-side frameworks exist.

We could try to make our side-by-side story more sophisticated by allowing smart sharing between frameworks but that's actually as complicated as ensuring that the new framework is compatible.

Also note that side-by-side doesn't mean we're out of the compat business either. At some point you'll want to update to the higher framework anyway, so we still need to invest in our compat strategy. That's why we came to the realization that, at this point, in-place updates are a better story for the .NET Framework.

However, we make sure that new components can be delivered in app-local fashion, even for .NET Framework applications. For example, immutable collections for .NET Framework developers are essentially side-by-side.

Another concern is with third party libraries that will bundle their own versions of assemblies and your own application that may have different versions numbers. Will that lead to something like having two System.Collections.dll files in your deployment folder? Little return of DLL hell?

While it's true that our NuGet components version independently, a given application will only use a single version (the highest version required by any component in your application).

We make sure that our components are always backwards compatible. Of course, there will always be cases where certain combinations don't blend well but that's why we also ship distributions which represent a set of components that are tested together and thus blend nicely. This will serve as the equalizer to minimize the different combinations that developers will encounter in practice.

You raise a bunch of good points. Of course, I can't promise that we won't make any mistakes moving forward, but we'll try really hard to be more transparent, open, and honest about our decision-making process.

However, you make one assumption that I'd like to debunk: it's easy to think from an outsider's perspective that we always have a perfect master plan that we simply don't share. That's not true. As Scott Hanselman said before, we're not nearly organized enough to be half as evil as some people think we are.

The reality is that engineering is very complicated and takes a lot of time. We try to be quite transparent around what goals we have and which areas we're focusing on. So when I say "I don't know" or "I'm not aware of any plans" don't take this as the magic code words for "according to my master plan that's never going to happen". It simply means that I don't know and I'm not aware of any plans.

@Meir: Now that you've decided on .NET Core, why not open source Silverlight (which you aren't going to promote anyway)?

As you've probably seen in the Silverlight roadmap there are currently no plans to build a Silverlight 6. We don't believe in "throwing over the fence open source". Open source isn't a way to get free labor or outsource the product team's responsibility to the community.

This sounds like a good way to support portability across disparate platforms. As an iOS/Droid/MacOSX/WinPhone(WPF/XAML)/WinForms/Win32/Linux programmer, I look forward to being able to cross platforms more quickly, using a common codebase.

Any plans for adding WinForms-like iOS/Droid/MacOSX/Linux modules to .NET Core? I could then use my favorite UI development stack (C# WinForms) to develop apps for all platforms (my favorites being iOS and MacOSX), with only minor re-tooling! Would it need to be done completely in the Mono space?

This article was very informative and engaging, but it contains several minor grammatical errors; I suggest a thorough proofread/correct cycle.

@Immo Landwerth: As a cross-platform developer in C and C++, I'm perhaps more aware of what's available than the strictly C# Windows developers here are, so I'm glad to see Microsoft focusing on the "core" that the community needs, instead of inventing a new wheel. Having said that, I can sympathise with them wanting to be able to use what they are used to for developing software on new platforms, but without a heck of a lot of effort from Microsoft and the Mono/Xamarin development teams, I don't see it happening soon.

Great information, and what a new direction for the .NET application platform! What's the relationship between .NET Core and WinRT? Are the .NET Core runtime and libraries built with WinRT on Windows? Are we expecting a cross-platform client-side library (UI technology) down the road, like a Silverlight reborn for mobile and desktop clients?

Is there any plan to continue or open source XNA? It's still a much beloved framework to this day even as it ages. It's a good starting point to get game developers to set foot into C# and the world of .NET. I think it would be beneficial to the MonoGame project to make that code available for them to use.

I need my next web application to run on Windows or Linux. I am concerned about how fast ASP.NET will run on Linux. Right now Mono is very slow. I need something that will have good performance. How long will we have to wait to test Kestrel out on Linux? I may have to just go with Java if the wait is much longer…

I think you should make a cross-platform UI, with VS designer support. It could be a WPF Lite without hard-to-port features (like the web browser control, specific font rendering options, etc.). Without a cross-platform UI, .NET Core will be limited to console or ASP.NET 5 apps, and will not be popular. Do you plan to support Android?

I worked at MS as a dev a long time back but now am all about Linux/Java/JavaScript stuff. This makes me excited about MS again. Good timing with "The Force Awakens" teaser: maybe the evil empire will awaken once again, but this time it will be much cooler.

As a developer of service applications, which we currently deploy as NT services, the equivalents on Linux/OS X (daemon processes) are just console applications. So support for console apps gives us both the ability to write command-line utilities on POSIX platforms and the ability to write services on those platforms too. Heck, with a console app we could implement FastCGI. Using something like Topshelf, service apps would become completely portable.

I appreciate you bringing over ASP.NET, but there are service applications that are not web apps that would really benefit from what you're working on. So if it's possible to keep the door open to making runnable console apps with the kind of performance we are hoping for, that would really let .NET explode into the POSIX world.

@Jens Larson: Why should Microsoft reinvent the wheel, when the Open Source community has already done this work? There are numerous projects that provide the "missing" functionality, if you know to look for them.

@airtonix: There are two separate repositories: github.com/…/referencesource (the one he is talking about in the line you quote) and github.com/…/corefx, which they have fully opened to outside developers. Microsoft is not taking pull requests on the Reference Source simply because it is past development and is being made available for the Mono Project to use to bring the two somewhat divergent implementations of .NET back together. .NET Core 5 is the future, and the open source development influence will make it into future releases of the .NET Framework, but in a somewhat more controlled fashion.

@Alex: All this means is that Microsoft has realized that there is more to the world of computing, especially at the server level, than the Windows Server products that they've developed, and they are accepting that reality. It doesn't mean that they will abandon developing Windows, but it may mean that eventually they do what Apple did in the late 1990's and open up portions of the Windows Source itself, or something similar; Apple, with Mac OS X, proved that it was viable to utilize the open source community to build a strong platform under the GUI and still keep some proprietary elements.

IMO, you're still missing the boat, guys. The DX12 rewrite and TPL/OpenCompute should be coalesced and dogfooded to hardware-accelerate OS internals, and exposed for devs to leverage. That should have targeted/extended the WPF API and been called WinRT.

Thanks a lot for the detailed post. It was time to write something about your plans and what the Core framework is. It is a little bit confusing to people; they struggle with what the new Core means.

It is interesting to read about what you are doing and how the teams are working. Explaining the verticals, and that there are several teams working on the different verticals, was interesting too. And I think people aren't aware of the complexity, and so of the need for the Core framework.

I'd like to suggest some points to help people understand what's going on in the .NET Framework and to keep them up to date with your plans and the technical possibilities as well as restrictions, just to help them select the right things for their application development and to help them master the complexity.

One blog post is not enough to explain the direction .NET is going in the future. Bring it back to developers' minds regularly. Explain it again and again.

Provide some more walkthroughs, text and video tutorials, small technical blogs and so on, just to explain what the Core framework is. Do it on a regular basis.

Show people how to use the core and the full framework. Explain when they should use the one and when the other. Show/explain example use cases and applications for the core and the full framework. Guide them intensively and regularly.

Do not only show what people can do. Show restrictions of the core and the full framework and show them what they cannot do with the one and/or the other framework vertical. Do it regularly.

That's all! 🙂

Thanks a lot for all the great work! Looking forward to what's coming, especially on the other platforms! Hope to see a lot of the stuff I like on Windows come to the Mac and Linux in the future! 🙂

That being said, it's getting too confusing to develop for the MS platform. Right now when I open the New Project window of VS just to create a Store app, I have a "Universal App" option, a "Windows Phone" option, as well as a "Windows Phone Silverlight" option. Apparently, soon there'll be a ".NET Core" option, right?

It's hard to believe all these frameworks for the same platform aren't created by competing companies! Unfortunately, what I read is this: "We are unable to produce a single framework for our one platform, yet we are going to provide a single framework for all other platforms including Mac and Linux."

Please don't get me wrong. I love working with the Microsoft platform and tools. I have tried Eclipse and NetBeans and they suck! I have used Android APIs as well as many firmware OSs. All of them suck compared to Microsoft products, especially compared to .NET and VS. I absolutely respect the fact that Microsoft doesn't treat their programmers like crp, as many other competitors (Google, for instance) do by dumping crppy products and frameworks on them.

I just wish one day the 'One Microsoft' dream becomes a reality again: to open Visual Studio and have that one, best option for creating a Windows Phone project. .NET and C# were that option on the desktop for years. Please bring us that one option for mobile and solve the confusion caused by too many incomplete choices.

First and most importantly, the issue of in-place framework upgrades is a nightmare in this environment. We are still stuck using 4.0 because there are a couple of apps with 4.5.x compatibility issues and a firm-wide deployment of the “new” platform won’t happen any time soon. Our team must argue for commissioning non-standard workstation OS builds if we want to make use of 4.5/4.6 for our app, and this creates enormous friction within a large corporate as you can probably imagine. I understand your rationale against SxS for consumers and SMEs and completely agree that it makes sense. However, couldn’t you provide some kind of option to build and deploy a desktop app with the complete framework packaged in, with no “smart sharing”? We’d have to take the hit on disk space and load-time efficiency but those are trivial matters for us, while being stuck on an old framework version certainly isn’t.

Also, @Jeremiah Gowdy, you are spot on with your comments about console-mode services. Our applications typically use Java on the middle tier for services of this type. If we could use .NET Core console services instead, that would open up some great synergies that we currently miss out on.

But in general, a big "way to go" to MSFT for this announcement; it's a watershed moment for .NET, no question.

The problem with Microsoft for the past lost decade has been that it has spent more time refactoring than it has spent taking existing investments forward. This is yet another massive refactoring to put us where we have been stuck since WPF came out, only 10 years later.

And now, just as WinRT is in bad need of maturing to hopefully someday replace WPF, MS is doing yet another massive rewrite and starting over yet again. Is it not indicative of all that is wrong with this approach that we have two async patterns in WinRT, just because MS insists on starting from scratch all the time?

However, what is truly worrying about this is not that you'll do it anyway, leading to yet another stack that will unsuccessfully try to get .NET FX apps out of the established Windows framework, which is the one with 1.5 billion users, not your new thing. What worries me is that 3 years from now you'll do it again, and start over with yet another stack because this one wasn't adopted outside of the ASP.NET folks, and all this effort would have been better spent on the one thing every .NET developer has been begging you to do:

Take existing .NET, that is WPF, WinForms, and .NET FX 4.5, and just let me make mobile apps with it using my existing code base. Scrap WinRT forever, as you did with Silverlight, and stop your internal cross-team wars. We already had a technology that worked, until the Windows 8 team decided, for no reason, to roll back the clock 10 years API-wise and start over with a half-baked framework that today has failed in the market.

In trying to solve a problem which does need fixing (ASP.NET on Linux), you're fragmenting Windows client development yet again. Yet you're achieving nothing: client development is going to stay on iOS's and Android's native stacks, and on the .NET FX for Windows stacks. Why would anybody create these .NET Native apps based on .NET Core for mobile devices, which won't run on Windows 7's 1.5B install base? Better yet, you can only create them on WinRT, which is pathetically immature. All so I can share some logic between the app and ASP.NET's backend? Who does this anyway? Especially when mobile apps aren't even written in C#, as most are Java or Objective-C. The 1% of people who will care about this won't matter. I'd much rather have either a WinRT API which can be used for desktop development, or a WPF version that can be used for mobile. If the price for that is that my .dll doesn't always work on some server backend, so be it. Yet most of the time, as long as you develop against mscorlib you're fine, and if your Linux variant needs #if statements via the shared-project approach, so be it.

First of all, let me say that I'm really excited about the whole .NET Core thing and your move toward open source.

Nevertheless, I have to disagree when you say:

"As you've probably seen in the Silverlight roadmap there are currently no plans to build a Silverlight 6. We don't believe in "throwing over the fence open source". Open source isn't a way to get free labor or outsource the product team's responsibility to the community."

Well, I think you missed the point here. Silverlight is a killer technology for building in-browser front-ends for line-of-business apps. I mean, productively. With C#, LINQ, and a predictable layout system designed for apps, not documents. Sure, it has its flaws, but for the needs of a typical LOB application it's like WPF (cool) without the need for the full-blown .NET Framework, easily consumable through the browser, even on Macs.

Now, you may have different plans for the future of the UI side of the .NET story, but why not let other people who are still interested in Silverlight take over the project? You don't have to participate if you think it's not worth the effort. It's not like getting free labor. It's letting the community decide whether the platform has to die or not. Maybe it still will. Maybe not.

If you think you are "maintaining great interoperability" without the ability to wrap WCF services with MEF contracts, interop will be degraded greatly, given the detail of the post and the question being ignored on every MSDN thread. I hope I'm wrong; @immo, please tell me I am. I get the sick feeling I'm not. It kind of makes the whole thing pointless if you're not going to include the best pieces. I wouldn't blame you; WCF and MEF are jewels, and I don't see anyone else giving away their best stuff, but it would instantly open up so many options. :{

I am really happy with the new path of the .NET team. Is there some chance that the .NET Native compiler will generate native code for non-Windows platforms? It would be great to use C# as a real evolution of C++.

I am super excited about the direction of .NET. You guys are just in the nick of time. When I look at new startups and a lot of fast-growing companies, I see a lot of Ruby, Python, and a lot of other open source tech. Consumer-facing app dev seems to revolve around open source a lot more than enterprise LOB app dev does. By fully embracing an open source path, I think you'll keep ASP.NET (and .NET in general) relevant for a very long time.

It's always nice to read about the future of the .NET stack. I wonder, though, if you can add something about Unity (the game engine, not the Patterns & Practices one). I understand that it probably depends only on Unity Technologies' decisions and one should ask them instead, but if there are any plans on the MS side about it, could you mention them, please?

Microsoft can immediately make its platforms more relevant by simply supporting the JRE. Today, migrating Android apps to Windows Phone is a major re-architecture. I realize Dalvik isn't a pure Java runtime, but it's closer than .NET. From my viewpoint, .NET is just Microsoft's version of a JRE.

This basically tells us: the .NET Framework is in maintenance mode; use .NET Core from now on. It's leaner and better, it's cloudy and mobile. In it you have ASP.NET, which is your cross-platform solution you can stick onto Windows, Mac, or Linux and use everywhere there is a browser, and you have .NET Native, which is native to Windows and will work great with the One Microsoft vision. That means you will have specific little .NET verticals, which will all share the same BCL, be supported on whatever new shiny device they may spit out, ranging from tiny micro boards to classic desktops and all the mobile jungle in between, and all that in a great-to-code-with modern, async, Azure-enabled Universal app model. And the rest of it is on legacy maintenance.

I seem to spend way too much time, effort, and money chasing Microsoft's next big idea, only to see it thrown out by the time I learn it and finish my first project. Then the cycle repeats. This gets old…

But pleeeeease stop this MS-wide bullshit spree of using the word "experience" as often and as senselessly as it gets. Example here: "You can think of portable class libraries as an experience that unifies the different .NET verticals based on their API shape."

Overall, .NET is like the rest: it is becoming garbage. There is no clear direction. Even C# is becoming garbage. Microsoft needs to separate the real core, the language, and the Microsoft products. Things like the integration of Entity Framework into .NET are the best example of what I call garbage, as is ASP.NET MVC with Razor. .NET Core is just a crutch for what developers don't understand about programming.

This announcement excites me most because if it comes to pass and gains some production-worthy reputation, I can finally do everything the JRE guys are doing, but with .NET, and with a better IDE (IMO) than Eclipse or IntelliJ…

I can't wait to go up to the senior architect, show him a website running on Apache on Linux, and have that conversation…

I'm not nearly as excited about this announcement and the .NET Native announcement as I am about what they will bring us in the future.

I can fully see a WPF version that sits on top of OpenGL instead of DirectX (seeing as IE 11 already implemented WebGL), which could also bring us WPF browser support in the future. But mostly it would make it possible to build WPF apps for Linux and Mac…

I can also see .NET Native supporting class libraries, Windows Forms, etc., opening up a whole new can of worms for AAA game-development platform choices, and games coming out built on .NET but compiled with .NET Native. E.g., imagine if the XNA Framework got picked up, converted to work on OpenGL or DirectX, and then supported .NET Native compilation.

@MustBe: The .NET Framework is in use on a lot of existing hardware, including the latest Windows 10 machines, so it makes perfect sense for them to maintain it in parallel with .NET Core. Also, .NET Core is in nearly constant flux at this time, so having a stable system for mainstream users makes more sense than subjecting them to the potential issues that simply dropping support for .NET Framework would cause.

I am not clear on Microsoft's strategy for the Internet of Things and support for devices like the Raspberry Pi. Is there a future for the .NET Micro Framework? Reading this article gives me the impression that support stops at "touch" devices, which leaves out most of the IoT space.

I don't think so – Mac users mostly don't want to use anything that doesn't conform absolutely to their UI. Also, consider the lineage: WPF was the first go-live prototype of a totally new GUI approach (versus Win32), heavily bound to the full .NET Framework. Silverlight was a second go-live prototype, based on WPF features stripped down enough to be usable in a browser (in fact, SL = CoreCLR + the native "agcore.dll" UI renderer/eventing). Then Silverlight was axed (partly because of the outside world, too). Now CoreCLR + part of the BCL + an AOT compiler + target bindings (devices, web/OWIN) are being open-sourced as ".NET Core", and the "agcore.dll" part of SL was at least partially rewritten as the again-stripped-down final WinRT UI, using enhanced COM binding (reflective from any language, so JS too). That's all because devices are constrained by battery power, so any heavy VM overhead is not good here (both CPU and RAM). During the last few years we were helping to test their prototypes, in fact 😉 As a portable option for LOB, Xamarin.Forms seems good enough (its data-binding concepts are similar to SL), and even more portable is MonoGame/XNA (and the Mono runtime used by these will, it seems, be continuously replaced everywhere by .NET Core, with MS support).

If you are creating a "native" exe for Windows, Linux, OS X, etc., then how can it run on a different kernel? How is that cross-platform? Did you mean that you need to create a different exe for each specific platform?

And in that case, who guarantees that all the underlying frameworks behave the same? E.g., templates, algorithms, OS thread scheduling, context switching, and the optimizations that depend on them. Given this, it is almost impossible to get exactly the same behavior even if you compile for each platform… unless you are running your code in a VM, and then it cannot be native.
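[Editorial sketch] The answer to the question above, as it has played out in today's .NET CLI (tooling that postdates this thread; the `dotnet` commands and runtime identifiers below are assumptions about the modern toolchain, not something described in the post): yes, a native/self-contained build is produced once per target platform from a single code base, while a framework-dependent IL build is one binary that runs on any OS with the runtime installed.

```shell
# One code base, one published output per target platform
# (RIDs such as win-x64 / linux-x64 / osx-x64 identify the target):
dotnet publish -c Release -r win-x64   --self-contained   # Windows .exe
dotnet publish -c Release -r linux-x64 --self-contained   # Linux binary
dotnet publish -c Release -r osx-x64   --self-contained   # macOS binary

# Alternatively, a single framework-dependent build: portable IL that is
# JIT-compiled by whatever installed runtime executes it, on any OS.
dotnet publish -c Release
```

So "cross-platform" here means the source and most of the libraries are shared; the per-platform differences (thread scheduling, syscalls, etc.) are absorbed by the runtime and base libraries for each target.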

Create something like what Xamarin is doing in order to help developers code once and run EVERYWHERE (Windows 10, Windows 10 Mobile, Android, and iOS).

If Microsoft could not buy Xamarin, at least do what they are doing on your own. Make something that enables us to achieve real NATIVE cross-platform development. It could dramatically increase the number of developers using .NET to create mobile applications, as well as the number of apps created for WP, since an app could be compiled for iOS, Android, AND WINDOWS PHONE. Hybrid apps like Apache Cordova have a lot of potential, but right now they offer a poor user experience compared to the native apps you could build with Objective-C on iOS, Java on Android, or C# (with Xamarin.Forms on WP and Windows 8). Porting .NET to other platforms is good, but go further, MS: create something that helps us "code once and run everywhere". And by EVERYWHERE I mean not just Windows; I mean native apps on Android, iOS, Windows PC, Windows Mobile, Xbox, HoloLens, and so on. Bring us a tool that makes Visual Studio Universal Apps REALLY UNIVERSAL (running on Android and iOS as well).

We want a framework that enables us to build NATIVE apps.

By the way, the Xamarin Starter edition has a very limited package size, so it is almost impossible to create even small apps with an organized architecture (multiple layers), and its licence fees are very expensive, too.

Hey M$, look at the opportunity a tool like Xamarin could bring to the Windows ecosystem. It could help you resolve the app gap Windows Phone has suffered so far. A tool that helps developers build native applications for Windows as well as for Android and iOS would attract Apple and Android developers' interest, and since it could compile for Windows Phone, they would have no reason not to target it. It would also bring mobile developers from other platforms (Android and iOS) to Visual Studio and .NET.

Hi, while researching the rendering of WPF and SL these days (the rewritten GDI, Direct2D, everything related to MILCore, and my beloved AgCore part of SL, which already has the potential to run everywhere), I can only point you elsewhere to see what's possible: Embarcadero built FireMonkey, and with it they do something native across platforms. I don't know more and am still getting to know this part of the world a little, but right now you can join their C++ boot camp too: http://community.embarcadero.com/blogs/entry/c-boot-camp-monday-august-8-2016-building-your-first-application-with-c-builder
(Truth is, their IDE is anything but fast, as it always was despite the native code; but the new FMX thing is interesting somehow.)
