StrathWeb. A free flowing web tech monologue. http://www.strathweb.com

Writing C# build scripts with FAKE, OmniSharp and VS Code
Sun, 04 Dec 2016 11:28:47 +0000

In this blog post I’d like to show an extremely productive – in my opinion – way of writing build scripts using C#. As a basis, we’ll use the excellent core FAKE library, FakeLib, which is written in F#, and consume it from C# scripts.

Sure, there are other projects/task runners like Cake or Bau that allow you to write C# build scripts (and a few more out there), but the approach I’d like to show you today is, I think, the most productive of all, so bear with me.

More after the jump.

What do I need from a C# build system / task runner?

So first of all, let’s consider what I want from a build system that lets me use C#.

- I want a targets API that works reasonably well and prints things out elegantly at the end
- I want a rich ecosystem of integrations – I don’t want to have to manually call into nuget.exe, the dotnet CLI or Azure APIs
- I don’t want a custom C# DSL / C# dialect – I’d rather stick to “standard C# scripting”

The last point is important – and I’m guilty as charged here. From my experience with the scriptcs project, I can say it’s really much better to write standardized C# scripts that can run on any runner, such as csi.exe, than to fragment the landscape with scripting dialects. csi.exe, after all, ships with MSBuild these days too, making C# scripting usable without any extra installation steps.

The main advantage is that if we stick to standardized scripting, we can easily provide language services – intellisense, refactoring, debugging and so on.

So how about FAKE?

So despite some great C# build systems being out there, none of them ticks all the boxes at the moment.

We’ve had various Twitter discussions about the nature of scripting on the .NET platform in the past, and this time around, Steffen suggested using FakeLib. I must admit – in all my ignorance – I didn’t know FAKE was structured in a way that lets the core lib be reused so easily.

I have actually been a big fan of FAKE myself, and used it in various projects wherever I could – F# is an excellent language for scripting – but due to my long term involvement in the C# scripting ecosystem, I was always gravitating towards doing things in C# when it was possible.

Turns out all the helpers and integrations in FAKE are completely reusable. The same applies to its targets API. This immediately ticks the first two boxes for me.

C# scripting intellisense and language services

With the OmniSharp project, it’s possible to have intellisense for scripts. In fact, not just intellisense, but fully fledged language services – refactoring capabilities, code navigation, you name it.

This is a tremendous boost for productivity, especially when dealing with a verbose language like C# and a verbose platform like .NET.

Note that at the moment, the best (or rather, “revamped”) support for CSX language services is only on the dev branch of OmniSharp (I only added this refreshed scripting support there recently). OmniSharp now has CSX support for both the desktop CLR and .NET Core – provided your scripts follow the standard C# scripting model.

This leads me back to my original points – with OmniSharp I can tick another box. Now, to be able to use it, and take advantage of its unbelievable productivity boost, I have to make sure to author my CSX in a way that doesn’t violate the last point – stick to standard C# scripting only.

What does it mean in practice?
– no custom preprocessor directives
– no custom host objects (magic global properties or methods injected into the script)
– no custom mechanism for assembly loading
– no implicit assembly references
– no implicit namespace imports

This sounds constraining, but in reality I don’t find it limiting at all, and hopefully by looking at our end result, you will agree.

VS Code

Whatever is described here will soon make it into C# extension for VS Code, but at the moment, you’d need to build OmniSharp from the dev branch and then point VS Code to your OmniSharp build using the following setting:

{
"omnisharp.path": "<Path to the omnisharp executable>"
}

What’s worth adding at this point is that in order for OmniSharp to light up in VS Code for CSX files, you need to have a project.json file (an empty one will do) next to your CSX file.

Putting it all together

You could really use any project to code along, I am using a little scripting demos project as the project to create my build script for.

Let’s start by adding a build.csx file and a bin folder (note – the folder name has no special meaning, it could be anything). Also, don’t forget the empty project.json.

In the bin folder, we’ll need to place 4 DLLs:

FakeLib.dll – FAKE core library

FSharp.Core.dll – F# core library

FSharpx.Extras.dll – F# to C# interop helpers

System.Runtime.dll – needed just in case you want to run your build script on Mono. Version 4.0.20.0

All these DLLs can be grabbed from NuGet – which is what I did, manually. Note that at the moment it’s not a function of our build system to resolve NuGet packages for itself, though it would be very easy to write a little bootstrapper tool that just downloads these dependencies and places them in the bin folder. These dependencies will likely not change often either, so it’s up to you to decide how much time you want to invest in the bootstrapping.

Another interesting (and cool) thing, is that FakeLib.dll is a single DLL that contains integration into dozens of tools and services like NuGet, unit test runners, Dotnet CLI, AppVeyor, Git, MSBuild, Xamarin and many more.

OK, so in build.csx, add the following directives to import the assemblies. We might also already import all the necessary namespaces.
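The original snippet is not reproduced here, but a sketch of what those directives might look like, assuming the bin folder layout from above (the FAKE namespaces and helper class names are FAKE 4.x-era assumptions – verify the exact names via intellisense):

```csharp
// assumption: the DLLs sit in ./bin next to build.csx
#r "bin/FakeLib.dll"
#r "bin/FSharpx.Extras.dll"

using System;
using System.Linq;
using Fake;

// loose F# functions surface to C# as static methods of static classes,
// so "using static" gives us a script-like experience
using static Fake.TargetHelper;
using static Fake.FileHelper;
```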

You could offload that to a separate CSX file, e.g. bootstrap.csx, and then #load that CSX from build.csx, but I don’t think it’s super necessary.

At the moment OmniSharp will not parse these reference changes in realtime, so when you add new assembly references, you need to restart OmniSharp. This is done by going to command palette in VS Code (ctrl+shift+p or CMD+shift+p) and selecting “Restart OmniSharp”. After the restart, the references to the new assemblies will be recognized.

I’d like to spend a moment explaining something here. I’m sure you noticed the “using static” – that’s because we are relying on one little trick. Because of the way F# and C# interop, loose F# functions, on which FAKE largely relies, are surfaced to C# as static methods of static classes.

As a consequence, by relying on the using static functionality of C# 6, we can mimic the F# experience. In the above example, we could just say:

using static Fake.FileHelper;
DeleteDirs(myDirs);

Which makes the scripting experience much better.

So with that in mind, we can proceed to define our targets. In this demo I will show you 4 sample targets:
– default – does nothing, just prints a message
– build – builds the project with MSBuild
– clean – cleans up some directories
– pack – creates a nuget package

So our shell structure would look like this (from now on I am skipping the “header” where we defined references and using statements to conserve space):
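A sketch of that shell, assuming the imports discussed earlier; Target and FromAction are named in the text, but the exact shape of the dependency and run helpers is an assumption to check against intellisense:

```csharp
// Args is the standard C# scripting way to get script arguments
var target = Args.FirstOrDefault() ?? "Default";

Target("Clean", FromAction(() => { /* clean up output folders */ }));
Target("Build", FromAction(() => { /* run MSBuild */ }));
Target("Pack", FromAction(() => { /* produce the NuGet package */ }));
Target("Default", FromAction(() => { Console.WriteLine("Hello from the Default target!"); }));

// "Build" depends on "Clean", "Pack" depends on "Build" –
// hypothetical shape of the dependency helper
Dependency("Build", "Clean");
Dependency("Pack", "Build");

Run(target);
```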

I think this is rather self-explanatory at this point, but let me quickly walk you through it. Target is, naturally, the FAKE target here. We have access to the method because we statically imported TargetHelper before.

The same applies to the dependency function, used to link our targets into dependency hierarchies, and to run, which we can use to invoke a specific target. Args is the standard C# scripting way of dealing with script arguments, so we can get a target name from there, or default to, well, “Default” – these will be passed in when we invoke the script from the command line.

So in our case, “Build” depends on “Clean”, and “Pack” depends on “Build”.

One more thing worth noting is that we need to use a little helper called FromAction. It comes from FSharpx.Extras.dll and converts a C# action (our simple lambda) into FSharpFunc<Unit, Unit>, which is what the FAKE API requires. We could simplify it further by creating a custom Target method which would do this conversion internally and delegate to the real FAKE Target, but I don’t think it’s necessary.

So let’s start filling up our targets with real logic. First the simple ones, “Default” and “Clean”:
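These two might look roughly like this – CleanDirs comes from FAKE’s FileHelper; the folder path is an assumption:

```csharp
Target("Default", FromAction(() =>
{
    Console.WriteLine("Running the Default target.");
}));

Target("Clean", FromAction(() =>
{
    // CleanDirs is one of the statically imported FileHelper functions
    CleanDirs(new[] { "./artifacts" });
}));
```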

This is also pretty easy to comprehend. FAKE’s MSBuildHelper exposes a build function, which takes a delegate that can modify the default MSBuildParams. Here we could set custom properties and so on.

Similarly to the Action case, we just need to convert a C# Func into an FSharpFunc. This is also something we do via FSharpx.Extras. Note that I wrote a single-line helper to reduce the amount of verbosity even further. It makes sense, since we will reuse this little helper in the other target too.
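Putting those pieces together, the “Build” target could be sketched like this – FromFunc is assumed to be the FSharpx counterpart of FromAction, and the solution path is made up:

```csharp
Target("Build", FromAction(() =>
{
    // MSBuildHelper.build takes a function that can adjust the default MSBuildParams
    MSBuildHelper.build(FromFunc((MSBuildParams p) =>
    {
        // here we could set custom MSBuild properties on p
        return p;
    }), "./src/MyProject.sln");
}));
```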

Of course everything here is fully discoverable and inspectable with OmniSharp intellisense, so even if it may not seem obvious at first glance, it’s actually super easy to work with!
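The “Pack” target could then be sketched as below – NuGetPack and CreateDir exist among FAKE’s helpers, but the parameter names and paths here are assumptions:

```csharp
Target("Pack", FromAction(() =>
{
    CreateDir("./artifacts");   // make sure the output folder exists

    NuGetHelper.NuGetPack(FromFunc((NuGetParams p) =>
    {
        p.ToolPath = "./nuget.exe";     // repoint FAKE away from the Chocolatey default
        p.OutputPath = "./artifacts";
        return p;
    }), "./MyProject.nuspec");
}));
```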

This task is a bit more complicated but also rather self-explanatory. We ensure the artifacts folder exists, and then hand off to FAKE’s NuGetHelper. By default FAKE would look for NuGet in the chocolatey install folder, but since I committed nuget.exe with my project, I am repointing FAKE to that particular executable.

Running the whole shebang

We can now run this – but you might ask, how? Well, as I mentioned, because we used standard C# scripting here, we can just use csi.exe, which was built by the Roslyn team as part of Roslyn and is the official C# script runner.

It also ships with MSBuild, so if you have MSBuild you do not have to install anything to run this build script – you just need to point it to csi.exe. CSI is available on the PATH on Windows boxes if you run the VS Developer Command Prompt. Otherwise you can find it in C:\Program Files (x86)\MSBuild\14.0\Bin.

Also, CSI is portable, so you could just copy it over if needed. Adam Ralph actually ILMerged all CSI dependencies into one file, so you can grab his ILMerged version too.

I will not go into details now, but it’s very easy to write a cmd or sh file which will pick up CSI from the proper place – maybe even wget it – and bootstrap the whole run; that is beyond the scope of this article.

CSI will work on Mono too, I believe you need Mono 4.6. We also need the System.Runtime reference for Mono specifically. You can see it below (my sample project is not x-plat, so running a Clean task only).


Bonus – interactive mode

Because we use standard C# scripting syntax, we can actually leverage the interactive mode (REPL) of CSI too. What I mean by that, is that we can start CSI REPL, #load our build script, and interact with it in the REPL context – run tasks manually, inspect variables, inject new tasks and so on.

The experience is not perfect at the moment, because CSI doesn’t respect the using statements of #load-ed scripts, but it’s nevertheless quite cool.

Announcing ConfigR 1.0

This is also the last release requiring full .NET 4.5/Mono – the next version of ConfigR is going to target .NET Standard.

Here’s an overview of the features that are in 1.0!

What’s new


New NuGet packages

ConfigR is no longer coupled to C# scripting – instead it’s merely a configuration abstraction, and it’s just the abstraction code that is contained in the main ConfigR package.

This means that while scripting is the primary (or rather – at the moment – the only) way to use ConfigR, the ConfigR abstraction could be implemented against any configuration provider, allowing you to seamlessly merge multiple sources into a single configuration model (along with the scripting one).

New Syntax

ConfigR 1.0 introduces a new, simplified syntax of building up your configuration. Instead of calling Add<T> method from your configuration CSX script, you can just use a global dynamic dictionary.

Additionally, changes are done to how you consume ConfigR in the host application, and that’s related to the fact that ConfigR can potentially have multiple configuration sources (as explained in the previous section).

Let’s have a look at some examples. First, the old syntax – no longer in use in ConfigR 1.0 – for some context.
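As an illustration only – the exact member names are assumptions, so check the ConfigR readme – the old model exposed an Add method to the configuration script, while 1.0 goes through a global dynamic dictionary:

```csharp
// old syntax (pre-1.0) – inside your configuration CSX:
Add("Timeout", TimeSpan.FromSeconds(30));

// ConfigR 1.0 – a global dynamic dictionary instead:
Config.Timeout = TimeSpan.FromSeconds(30);
```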

Other bug fixes and small improvements

A bunch of other bug fixes and improvements were made. For example, ConfigR now supports in-memory assemblies, meaning you can use it to configure applications which only exist in memory and are never flushed into a physical assembly on disk.

If you want to help build ConfigR (next step – netstandard!) – stop by GitHub!

Finally, huge thanks to Adam who has been the mastermind behind all this.

Lazy async initialization for expiring objects
Tue, 01 Nov 2016 19:53:26 +0000

Today I wanted to share something I found myself using quite a lot recently, and that is not supported out of the box by the .NET framework.

So, as part of the framework, we have Lazy<T>, which provides out-of-the-box support for deferring the creation of large or resource-intensive objects.

However, what if the object requires an async operation to be created, and what if its value expires after some time and needs to be recomputed? Let’s have a look at how to solve this.

Sample use case

Let’s imagine the following scenario – you are making service calls to an HTTP API using a typical OAuth 2.0 client credentials flow. This means, you need to obtain the token from the identity server in order to be able to use it to call the resource server. Typically you’d only fetch the token once it’s needed, and the operation of retrieving the token should be async, as it’s network bound. Additionally, according to the OAuth spec, client credentials flow doesn’t support refresh tokens, so once the original access token expires, you’ll need to re-request a new one.

All of that means that we are dealing with a lazy initialization (get the token only once we need it for the first time), with an async operation (network bound) and with an object that will expire (access token has a limited lifetime).

Let’s call our construct AsyncExpiringLazy<T> and support this scenario in a generic way.

Building an AsyncExpiringLazy<T>

In addition to the construct itself, we will also need a simple wrapper around the result object and its expiration timestamp, so let’s start there.
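A minimal sketch of such a wrapper – ExpirationMetadata<T> is the name used later in the post, but the property names here are assumptions:

```csharp
internal struct ExpirationMetadata<T>
{
    // the lazily created value
    public T Result { get; set; }

    // the timestamp after which the value is considered expired
    public DateTimeOffset ValidUntil { get; set; }
}
```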

So far so good – not much to explain there. Next let’s add AsyncExpiringLazy. Since properties can’t be async in C#, in our AsyncExpiringLazy, we’ll need to use an instance method as a way to fetch our underlying value.

Let’s first create the outline of the class, and then fill in the remaining blanks.

So the constructor will take in a value provider – the provider itself will receive the “previous/old”, expired (or expiring) value and will be responsible for providing a new one – wrapped in the ExpirationMetadata defining its expiration time.

Filling in the remaining blanks is surprisingly easy – it’s shown below. The only “trick” in the code is to use SemaphoreSlim for locking – since C# does not allow traditional lock statements to contain awaits.
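A sketch of what that could look like, assuming the ExpirationMetadata<T> wrapper described above (with Result and ValidUntil members):

```csharp
public class AsyncExpiringLazy<T>
{
    private readonly SemaphoreSlim _syncLock = new SemaphoreSlim(initialCount: 1);
    private readonly Func<ExpirationMetadata<T>, Task<ExpirationMetadata<T>>> _valueProvider;
    private ExpirationMetadata<T> _value;

    public AsyncExpiringLazy(Func<ExpirationMetadata<T>, Task<ExpirationMetadata<T>>> valueProvider)
    {
        if (valueProvider == null) throw new ArgumentNullException(nameof(valueProvider));
        _valueProvider = valueProvider;
    }

    private bool IsValueCreatedInternal =>
        _value.Result != null && _value.ValidUntil > DateTimeOffset.UtcNow;

    public async Task<T> Value()
    {
        // SemaphoreSlim instead of lock, because a lock body cannot contain awaits
        await _syncLock.WaitAsync().ConfigureAwait(false);
        try
        {
            if (IsValueCreatedInternal)
            {
                return _value.Result;
            }

            // pass the previous/old value to the provider and cache the new one
            _value = await _valueProvider(_value).ConfigureAwait(false);
            return _value.Result;
        }
        finally
        {
            _syncLock.Release();
        }
    }

    public void Invalidate() => _value = default(ExpirationMetadata<T>);
}
```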

Regarding invalidation, we can just reset the ExpirationMetadata back to its default value.

You can create an instance of AsyncExpiringLazy at any point – for example it could be a field in some class of yours. You will need to pass in a delegate responsible for value creation – it will give you a chance to inspect the old value too if needed.

From there on, it’s all about accessing the value whenever you need it. And if it expires, AsyncExpiringLazy will re-create it on next access.
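For example, for the OAuth scenario from earlier – FetchTokenAsync and the 55-minute lifetime are made up for illustration:

```csharp
// hypothetical token client – FetchTokenAsync would be your own HTTP call
var lazyToken = new AsyncExpiringLazy<string>(async previous =>
{
    var token = await FetchTokenAsync();
    return new ExpirationMetadata<string>
    {
        Result = token,
        ValidUntil = DateTimeOffset.UtcNow.AddMinutes(55)
    };
});

// first access fetches the token; later accesses reuse it until it expires
var accessToken = await lazyToken.Value();
```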

All the code for this article is located here on GitHub as a .NET Standard 1.3 library. Hope this helps in some way.

Introducing C# script runner for .NET Core and .NET CLI

The idea was super simple – I just wanted to be able to author C# scripts using .NET Core, leverage project.json to define the script dependencies and execute scripts cross-platform using the .NET CLI – via a dotnet script command.

The project is located here on Github. You can head over and have a look at readme to get started – but, briefly, the key features are listed here.

dotnet script is entirely self contained

You do not need to install anything globally if you don’t want to. Because the script runner itself is developed as a .NET CLI tool, it can be referenced from within a project.json – the same file in which you’d define the dependencies for your script – and restored using dotnet restore.

dotnet script runs cross platform

Your scripts are effectively small netcoreapp1.0 apps – they can reference any .NET Core packages compatible with it, and run on any platform supported by it.

dotnet script supports debugging

You can debug scripts executed with dotnet-script in Visual Studio Code. You can read the detailed instructions on how to get that set up here.

You’ll be able to set up breakpoints in the script code and step through it.

dotnet script supports code evaluation (file-less)

You can pass snippets of C# code directly to the runner and have it evaluate them for you, without needing a physical file on disk.

For example:

dotnet script eval "Console.WriteLine(\"Hi\");"

This is roughly equivalent to (and inspired by) Node.js’s:

node -e "console.log(\"foo\")"

dotnet script is compatible with C# Interactive

The runner is compatible with Roslyn’s own script runner – csi.exe (C# Interactive), which is part of VS Tooling. It does not attempt to introduce its own scripting dialect and exposes the same global properties – for example Args to access script arguments – as CSI.

If you’d like to help – see you over at Github! Special thanks already to Atif and Adam, as well as to Bernhard.

Strongly typed configuration in ASP.NET Core without IOptions<T>
Thu, 29 Sep 2016 09:31:16 +0000

There are several great resources on the Internet about using the new Configuration and Options framework of ASP.NET Core – like this comprehensive post by Rick Strahl.

Using strongly typed configuration is without question a great convenience and productivity boost for developers; what I wanted to show you today is how to bind IConfiguration directly to your POCO object – so that you can inject it directly into the dependent classes without wrapping it in IOptions.

POCO configuration with IOptions<T>

The typical IOptions<T>-driven configuration setup would look like the snippet below. To use this code, you also need to reference the Microsoft.Extensions.Options.ConfigurationExtensions package, which exposes the extension methods and also brings in the options framework package as a dependency.
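For reference, a typical setup along those lines – MySettings and the section name are placeholders:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // binds the "MySettings" section of appsettings.json to the POCO
    // and registers it in DI as IOptions<MySettings>
    services.Configure<MySettings>(Configuration.GetSection("MySettings"));
}
```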

This will allow you to load up MySettings from appsettings.json into MySettings POCO.

However, using the options framework also means that your configuration is registered in the DI container as IOptions<MySettings>, and that’s how you will need to inject it.

This typically wouldn’t matter but it also means that you will need to reference the Microsoft.Extensions.Options package everywhere where you want to consume this configuration (that’s where IOptions<T> is defined).

In other words, your POCO configuration is not exactly POCO anymore, as it drags the extra dependency alongside it, as you always consume it like this:
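A sketch of that consumption pattern:

```csharp
public class HomeController : Controller
{
    private readonly MySettings _settings;

    // the dependency is IOptions<MySettings>, not the POCO itself
    public HomeController(IOptions<MySettings> settings)
    {
        _settings = settings.Value;
    }
}
```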

It also doesn’t give you access to the POCO configuration instance straight away – it just buries it directly into the DI, so to access it immediately in the Startup class, you’d need to resolve it from DI.

EDIT: On top of that – as mentioned by Steven – the problem with IOptions<T> is that it will defer the evaluation of the configuration. This means that if your configuration file is incorrect, your app can crash later on, the first time IOptions<T>.Value is accessed, rather than at startup.

POCO configuration without IOptions<T>

You could, however, register the POCO configuration manually, and avoid the extra dependency on the options framework. This is shown below.
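A sketch of the manual registration, with the same placeholder names as before:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // Bind comes from Microsoft.Extensions.Configuration.Binder
    var settings = new MySettings();
    Configuration.GetSection("MySettings").Bind(settings);

    // the POCO instance is available immediately, and registered as-is in DI
    services.AddSingleton(settings);
}
```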

We are manually binding the configuration to the POCO, rather than having the options framework do it for us. This functionality is available in the Microsoft.Extensions.Configuration.Binder package, which needs to be referenced (it is not an extra dependency for us though, it is already referenced by the Microsoft.Extensions.Options.ConfigurationExtensions package). It also needs to be referenced only in the entry project (or your composition root).

Another small limitation of the options framework is that the POCO configuration class must have a parameterless constructor, as the framework will try to instantiate it for us.

However, with our custom approach, we can control how our class is instantiated. We could write extra extension methods that either take in an already existing instance, or a delegate responsible for its creation.

We can now consume these extension methods as shown in the next snippet, without having to worry about exposing a default constructor. The instance can be created upfront, or on demand through the delegate.
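A sketch of such extension methods and their consumption – ConfigurePoco is a hypothetical name, not part of any framework package:

```csharp
public static class ConfigurationServiceCollectionExtensions
{
    // variant taking an already existing instance
    public static TConfig ConfigurePoco<TConfig>(this IServiceCollection services,
        IConfiguration configuration, TConfig instance) where TConfig : class
    {
        configuration.Bind(instance);
        services.AddSingleton(instance);
        return instance;
    }

    // variant taking a delegate responsible for creating the instance
    public static TConfig ConfigurePoco<TConfig>(this IServiceCollection services,
        IConfiguration configuration, Func<TConfig> factory) where TConfig : class
        => services.ConfigurePoco(configuration, factory());
}

// usage – no parameterless constructor required on MySettings
var settings = services.ConfigurePoco(Configuration.GetSection("MySettings"),
    () => new MySettings("some constructor argument"));
```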

Required query string parameters in ASP.NET Core MVC
Mon, 19 Sep 2016 15:26:02 +0000

Today let’s have a look at two extensibility points in ASP.NET Core MVC – IActionConstraint and IParameterModelConvention. We’ll see how we can utilize them to solve a problem that is not handled out of the box by the framework – creating an MVC action that has mandatory query string parameters.

Let’s have a look.

IActionConstraint extensibility point

ASP.NET Core MVC allows us to participate in the decision making around selecting an action suitable to handle the incoming HTTP request. We can do that through the IActionConstraint extensibility point, which is a more powerful version of ActionMethodSelectorAttribute from “classic” ASP.NET MVC.

So, when creating a custom IActionConstraint, you effectively just have to handle one method – Accept, which should return true when the action is suitable for handling the current request, or false, if it isn’t. Naturally, the ActionConstraintContext object would give you access to the current HttpContext.
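As a working example, consider an action along these lines – the controller shape is assumed from the URLs discussed below:

```csharp
[Route("api/[controller]")]
public class ValuesController : Controller
{
    // id binds from the route; foo and bar can come from any binding source
    [HttpGet("{id}")]
    public string Get(int id, string foo, string bar)
        => $"id: {id}, foo: {foo}, bar: {bar}";
}
```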

In this case, the action takes 3 parameters – id, foo and bar. However, only id is mandatory – because it’s part of the route, the latter ones are optional. This means that all 4 of the following URLs would be valid and lead to our action:

GET api/values/5

GET api/values/5?foo=a

GET api/values/5?bar=b

GET api/values/5?foo=a&bar=b

Now, let’s restrict these URLs to only the last one – by making our query string parameters mandatory. MVC comes with the [FromQuery] attribute, which restricts binding to the query string only, but it still treats the parameters as optional, so the code shown below still wouldn’t work as we want; it would simply stop looking at other (non-query string) binding sources for our foo and bar parameters.

The solution is to implement our own attribute, which we will get to in a second.

But first let’s create an IActionConstraint. The constraint will be pretty simple – we will be creating a single instance of a constraint for each mandatory parameter, and if a matching parameter is not found on the current request (in the query string, naturally) then we will return false from the Accept method.
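A sketch of such a constraint – the class name is my own, but the IActionConstraint members (Order, Accept) are part of the framework interface:

```csharp
public class RequiredFromQueryActionConstraint : IActionConstraint
{
    private readonly string _parameter;

    public RequiredFromQueryActionConstraint(string parameter)
    {
        _parameter = parameter;
    }

    // high Order value, so that we run after the built-in constraints
    public int Order => 999;

    public bool Accept(ActionConstraintContext context)
        => context.RouteContext.HttpContext.Request.Query.ContainsKey(_parameter);
}
```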

We chose a high Order value to make sure our constraint runs last, after the built-in framework constraints (some of which have an order of 200).

The final piece is to apply this constraint to specific parameters.

Stitching it together via IParameterModelConvention

We could subclass the existing FromQueryAttribute (the one we originally deemed unsuitable on its own), since it will force the correct binding source for us, and make sure that the constraint is applied to any parameter decorated with our attribute. This is shown next:
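A sketch of that attribute, assuming an IActionConstraint implementation like the one described above, here named RequiredFromQueryActionConstraint (both class names are my own):

```csharp
public class RequiredFromQueryAttribute : FromQueryAttribute, IParameterModelConvention
{
    public void Apply(ParameterModel parameter)
    {
        if (parameter.Action.Selectors != null && parameter.Action.Selectors.Any())
        {
            // attach a constraint for this parameter to the action's selector
            parameter.Action.Selectors.Last().ActionConstraints.Add(
                new RequiredFromQueryActionConstraint(parameter.ParameterName));
        }
    }
}
```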

We are able to achieve this via IParameterModelConvention – which gives us the option to add an extra constraint to each action by visiting all of the parameters of the discovered actions. There are also other ways of applying it – for example, you could use IApplicationModelConvention too.

So now we could decorate our mandatory query string parameters with our new attribute, and voila!
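For illustration, the sample action from before with the (hypothetical) attribute applied:

```csharp
[HttpGet("{id}")]
public string Get(int id, [RequiredFromQuery] string foo, [RequiredFromQuery] string bar)
    => $"id: {id}, foo: {foo}, bar: {bar}";
```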

Right now, only the following URL GET api/values/5?foo=a&bar=b would lead us into the action above – the other combinations of parameters would result in 404.

[Controller] and [NonController] attributes in ASP.NET Core MVC
Thu, 08 Sep 2016 15:08:41 +0000

One of the late additions before the RTM release of ASP.NET Core MVC was the introduction of the [Controller] attribute, and its counterpart, [NonController], which were added in RC2.

Together, they allow you to control more precisely which classes should be considered by the framework to be controllers (or controller candidates) and which shouldn’t. They also help you avoid the nasty hacks we needed in e.g. ASP.NET Web API to opt out of the “Controller” suffix in the name.

Let’s have a look.

[Controller] vs [NonController] and the built-in conventions

ASP.NET Core MVC supports the concept of POCO controllers, so you no longer need to inherit from the base Controller class in order to create your HTTP endpoints.

If you choose to use POCO controllers, your class will be considered a valid controller simply if it has a Controller suffix in the class name.

So these are the two fundamental prerequisites when authoring MVC controllers:
– inherit from the Controller base class
– or use a Controller suffix in the name

As a result, the following two controller declarations will work out of the box:
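For illustration, the two variants might look like this – names and bodies are placeholders:

```csharp
// variant 1: inherits from the Controller base class
public class ItemsController : Controller
{
    [HttpGet("items")]
    public IActionResult Get() => Ok(new[] { "one", "two" });
}

// variant 2: POCO controller, discovered via the Controller suffix
public class ProductsController
{
    [HttpGet("products")]
    public IEnumerable<string> Get() => new[] { "a", "b" };
}
```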

A class that inherits from Controller but doesn’t use the Controller suffix in its name turns out to also be correct. The reason this works is that the base class Controller (or – to be more specific – its own base class, ControllerBase) is decorated with the [Controller] attribute, which indicates that the entire inheritance tree should be considered valid MVC controllers.

Another possibility is that you’d have a POCO controller that doesn’t have the Controller suffix in the name – in that case you can opt in with the [Controller] attribute. That is shown below:
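For example (a placeholder class):

```csharp
// no Controller suffix and no base class – opted in explicitly
[Controller]
public class Products
{
    [HttpGet("products")]
    public IEnumerable<string> Get() => new[] { "a", "b" };
}
```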

There is also a possibility where you’d have a class which uses the Controller suffix in the name, but which should not become an MVC POCO controller. That’s where you need to opt out of the controller scan by applying the [NonController] attribute.
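For example (again a placeholder):

```csharp
// has the Controller suffix, but should never be treated as an MVC controller
[NonController]
public class ReportController
{
    // a plain domain class, excluded from controller discovery
}
```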

Finally, what’s also worth mentioning is that [NonController] always has higher precedence than [Controller]. In fact, if [NonController] appears anywhere in the class hierarchy, the type and its descendants will never be considered controller candidates.

Building Analyzers & Refactoring Tools with Roslyn (from NDC Sydney)
Tue, 06 Sep 2016 06:59:19 +0000

Last month I was at the excellent NDC Sydney conference, where I did a talk about building code analyzers and refactoring tools with Roslyn. Below you can find the video, code and slides from the session.


Building a lightweight, controller-less, Markdown-only website in ASP.NET Core
Wed, 17 Aug 2016 19:41:37 +0000

In this blog post let’s have a look at building a lightweight site in ASP.NET Core.

In “classic” ASP.NET we had the WebPages framework – which allowed us to build sites composed only of views. This was perfect for lightweight projects, where we didn’t need the entire model-controller infrastructure.

At the moment, ASP.NET Core doesn’t have an equivalent yet (though one is being worked on), but we have already provided a similar type of experience via the WebApiContrib project (you can read more about the project here). With the help of some of the libraries from there, we can build controller-less sites for ASP.NET Core already.

In addition to that, we can combine it with Markdown tag helpers for content delivery – resulting in a very cool experience: authoring ASP.NET Core sites, without controllers, in Markdown, with Razor sprinkled on top to provide dynamic data.

Let’s have a look – more after the jump.

Getting started

To get started, let’s create a new empty ASP.NET Core project and add references to the relevant WebApiContrib.Core packages – WebApiContrib.Core.WebPages and WebApiContrib.Core.TagHelpers.Markdown.

What’s worth mentioning is that I also added the Microsoft.AspNetCore.StaticFiles middleware, so that we can serve up some CSS. The rest is – again – pretty standard stuff from the default web application template: Kestrel, IIS integration, as well as the default build and publish options.

Configuring WebApiContrib.Core.WebPages

In order to configure our web pages, we need to add the following to the Startup class:
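The exact method names come from WebApiContrib.Core.WebPages and may differ between versions – treat this as a sketch and check the package readme:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddWebPages();
}

public void Configure(IApplicationBuilder app)
{
    app.UseStaticFiles();

    // "Index" acts as the root ("homepage") view of the site
    app.UseWebPages("Index");
}
```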

The pattern is the same as when adding/configuring e.g. fully fledged MVC – first we “add” our framework, and then we “use” it.

The next step is to actually add some views – the views will serve as our pages, and their names will also define the routes – since we have no controllers in place, everything happens by convention. We’ll get back to this in a moment.

If you look again at the snippet above, we also configured the view called Index.cshtml to act as the root of the site (so our “homepage”) – it is the view that will be served when navigating to the root of the domain where the site will be deployed.

Adding layout

Now, let’s imagine that we are building a simple blog – this could be a good example for us.

The Index.cshtml view will be our root, listing all of the posts. Then each of the posts will be a separate view (separate physical file in the Views folder) and it will be authored as a Razor/Markdown combination.

However, before we get on to that, let’s first add all of the remaining bootstrapping that we need.

It’s quite typical that we’d want to have a shared layout for our site – this in Razor is normally represented by a _Layout.cshtml file. So let’s add that.

The responsibility of our _Layout.cshtml will be to provide the page title and load up the necessary CSS. This is shown below.
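A minimal version of such a layout could look like this – the CSS file name is whatever you dropped into wwwroot/css; markdown.css here is just my pick:

```html
<!DOCTYPE html>
<html>
<head>
    <title>@ViewBag.Title</title>
    <link rel="stylesheet" href="~/css/markdown.css" />
</head>
<body>
    @if (ViewBag.HideBackLink != true)
    {
        <a href="/">« Back to home</a>
    }
    @RenderBody()
</body>
</html>
```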

Our layout file uses the classic Razor ViewBag to read some data from the views it wraps around – each view will be able to pass a Title and a flag indicating whether a link back to the home page should be rendered or not.

Note: I am using a nice and elegant Markdown CSS file from here. This also means that the referenced CSS file exists in my wwwroot/css folder.

Configuring Markdown tag helper

Since we want to use Markdown to render our pages, and we already pulled the package for it, we just need to make it visible to our Razor views.

To do that, we need to add a _ViewImports.cshtml file, with the following content:

@addTagHelper "*, WebApiContrib.Core.TagHelpers.Markdown"

This will import the Markdown tag helpers (the package actually has two of them, we will be using just one though, the “basic” one).

The WebApiContrib.Core.TagHelpers.Markdown tag helper (you can read more about it here) allows us to use an <md> tag and write Markdown directly in our Razor views, and it will get auto-converted to HTML. The nice thing about it is that we get a great authoring experience – writing Markdown for content-driven sites is usually more efficient than writing pure HTML.

Also, once a Razor view is rendered, it will be cached, so the fact that we convert the Markdown on the fly doesn’t matter that much – only the first hit will be slower.

Adding views

At this point we can start adding our views. Since we already established that Index.cshtml will be our root, let’s add it. It will act as our table of contents.
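Reconstructed to match the description that follows – a title, the hidden back-link, and a Markdown list pointing at the two post views (the site title and the link targets are assumptions, following the view-name-equals-route convention):

```html
@{
    Layout = "_Layout";
    ViewBag.Title = "My blog";
    ViewBag.HideBackLink = true;
}
<md>
# My blog
 * [Announcing WebApiContrib for ASP.NET Core!](/WebApiContrib)
 * [Customizing FormatFilter behavior in ASP.NET Core MVC 1.0](/FormatFilter)
</md>
```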

Notice that we set the layout file and interact with the ViewBag identically to how you’d do it in a typical, fully-fledged MVC application. Since this is our root page, we hide the “back to home” link via the relevant flag (we just added that logic to _Layout.cshtml moments ago).

The content of the page itself is a simple Markdown header and a list with some links. We already mentioned in the beginning that the names of the views correspond to the links/routes available in our lightweight application. So – as a consequence – the above structure implies that we need to have WebApiContrib.cshtml and FormatFilter.cshtml in our Views folder.

So let’s add both of them. I am not going to show their entire structure here – because for demo purposes I used my old blog posts (I just grabbed the last two: this one and this one) and they are fairly long (btw – I write my posts in Markdown, obviously). Instead, in the snippets below, I will abbreviate their “content” part to save space.

But they are similar to our Index.cshtml – some Razor bootstrapping on top, and then the Markdown content.

WebApiContrib.cshtml:

@{
    Layout = "_Layout";
    var title = "Announcing WebApiContrib for ASP.NET Core!";
    ViewBag.Title = title;
    ViewBag.HideBackLink = false;
}
<md>
# @title
In the past, a [bunch of us](https://github.com/orgs/WebApiContrib/people) from the ASP.NET Web API community worked together on a WebApiContrib project (or really, *projects*, cause there were many of them!).
The idea was to provide an easy to use platform, a one stop place for community contributions for ASP.NET Web API - both larger add ons, such as HTML/Razor support for Web API, as well as smaller things like i.e. reusable filters or even helper methods. This worked extremely well - [WebApiContrib packages](https://www.nuget.org/packages?q=Tags%3A"WebApiContrib") were downloaded over 500k times on Nuget, and a nice community has emerged around the project on [Github](https://github.com/WebApiContrib).
(...) omitted for brevity (...)
</md>

FormatFilter.cshtml:

@{
    Layout = "_Layout";
    var title = "Customizing FormatFilter behavior in ASP.NET Core MVC 1.0";
    ViewBag.Title = title;
    ViewBag.HideBackLink = false;
}
<md>
# @title
When building HTTP APIs with ASP.NET Core MVC, the framework allows you to use *FormatFilter* to let the calling client override any content negotiation that might have happened on the server side.
This way, the client can for example force the return data to be JSON or CSV or any other format suitable (as long as the server supports it, of course).
(...) omitted for brevity (...)
</md>

One interesting note is that we can use the typical Razor mechanisms when authoring our posts. For example, in the above snippets we defined the title as a local variable and used it both to set the title of the page and to set the H1 of the Markdown content.

Similarly, you can make your life easier by injecting other dynamic content, leveraging loops or even accessing external services to pull some data.

You could even use the @inject directive to inject services into the views.
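For example, something along these lines – IPostRepository is a hypothetical service registered in the DI container, used here purely to illustrate the idea:

```html
@inject MyApp.Services.IPostRepository Posts

<md>
# Archive
@foreach (var post in Posts.GetAll())
{
    @:* [@post.Title](/@post.Slug)
}
</md>
```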

Source code & demo

So this is everything!

We now have a fully functional, controller-less, Markdown-driven, lightweight, ASP.NET Core site. We just have views, and all the content is written almost entirely in pure Markdown.

Of course this is a very basic implementation, but I hope it nudges you in a useful direction or inspires you to do cool things with ASP.NET Core.

Announcing WebApiContrib for ASP.NET Core
http://www.strathweb.com/2016/07/announcing-webapicontrib-for-asp-net-core/
Mon, 18 Jul 2016 06:42:58 +0000

In the past, a bunch of us from the ASP.NET Web API community worked together on a WebApiContrib project (or really, projects, cause there were many of them!).

The idea was to provide an easy to use platform, a one stop place for community contributions for ASP.NET Web API – both larger add-ons, such as HTML/Razor support for Web API, as well as smaller things like reusable filters or even helper methods. This worked extremely well – WebApiContrib packages were downloaded over 500k times on NuGet, and a nice community emerged around the project on GitHub.

Recently, we decided to restart the project, this time focusing on ASP.NET Core. Since the “brand” has caught on in the community and is fairly recognizable, we just called it WebApiContrib.Core.

There is already a bunch of things there:

Main package containing smaller features

BSON formatter

CSV formatter

JSONP formatter

PlainText formatter

Markdown tag helper

WebPages functionality (Razor pages without controllers!)

Conditional requests support based on RFC-7232

And it’s growing quickly. Several things that I blogged about on this blog – i.e. GlobalRoutePrefixConvention, FromBodyApplicationModelConvention or OverridableFilterProvider – are also there. Packages are already pushed to NuGet too, built against .NET Core RTM.

It would be wonderful if you joined this community effort – I am sure almost everyone working with ASP.NET Core has something interesting to contribute.