File uploads in ASP.NET Core work somewhat differently than we're used to. Previous versions of ASP.NET used HttpPostedFileBase to bind files server-side; in ASP.NET Core this concept is replaced by IFormFile.

The basics of uploading a file with ASP.NET MVC are very well explained here. To upload files with Razor and ASP.NET Core, take a look at the file upload docs from Microsoft.

Because JavaScript client frameworks such as Angular and Aurelia are so widely used nowadays, I'll focus on how to upload a file from an Aurelia web application to an MVC 6 (API) controller.

Aurelia is a JavaScript client framework that uses the Fetch API to make requests to the backend.

To begin, here's the HTML that lets the user select a file in the browser:
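The markup boils down to a plain file input (something like `<input type="file" files.ref="selectedFiles">` in Aurelia). To go with it, here's a hedged sketch of the client-side upload call; the endpoint name `api/upload`, the function name and the form field name are assumptions for illustration:

```javascript
// Hedged sketch: posting the selected files to the controller with the Fetch API.
function uploadFiles(files) {
    const form = new FormData();
    for (const file of files) {
        form.append("files", file);
    }
    // Deliberately no Content-Type header: the browser generates the
    // multipart/form-data boundary itself when the body is a FormData instance.
    return fetch("api/upload", { method: "POST", body: form });
}
```

Setting the Content-Type header manually is a common mistake here; leave it out and the browser handles the multipart boundary for you.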

What's striking is the absence of the method parameter. The default (synchronous) approach described in the ASP.NET Core docs uses this parameter to make the file available in the controller through model binding. In our SPA scenario, my experience is that this parameter always has a null value. Therefore I've chosen the asynchronous path with await Request.ReadFormAsync().

That's it! This is how you can upload one or more files from an Aurelia web application to an ASP.NET Core (API) controller.

https://blog.codenamed.nl/2016/01/19/using-knockoutjs-systemjs-es2015-jasmine-and-karma-with-asp-net-mvc-6-in-visual-studio-2015/ (Tue, 19 Jan 2016 10:53:58 GMT)

Or at least, a bleeding edge. One of many, for the Web is a fast-changing world.

A day away from the SPA

The main motivation for this post came from using Aurelia, a Single Page Application (SPA) framework that uses SystemJs as a module loader in combination with BabelJs as a transpiler, enabling you to work with ECMAScript 2015 today without worrying about browser support (or lack thereof, rather).

But of course, no matter how hip it's become, not every web application has to be a SPA (even though SPAs can have more than one page).
So at CodeNamed we wondered, how would all this work on a "regular", multi-paged MVC 6 application? And down the rabbit hole we went. This post is about us seeing the light at the end of said hole and wanting to document the steps in what has, at times, become a tortuous path so that you, dear reader, can walk in and out of it without a scratch. Until some of the packages we're going to use receive another update, that is. Welcome to the Web.

The setup

Let me start by saying that this post is not about how to use KnockoutJs, or how to write tests with Jasmine. It's neither about teaching you MVC 6 nor the new ES2015 syntax. Instead, this post is about getting all of these working together.

jspm is a package manager for the SystemJS universal module loader, built on top of the dynamic ES6 module loader

Within a Command Prompt, go to the root of the project, which is one level above the wwwroot folder, and install and initialize jspm as follows:

npm install jspm
jspm init

The init command will ask a series of questions. On most of those you can just press Enter to accept the default option. Just make sure that you set the baseUrl to ./wwwroot and that you choose babel as the transpiler.
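With jspm initialized, a layout or index page can load SystemJS, its configuration, and the application's entry module. A minimal sketch (script paths assume the ./wwwroot baseUrl chosen above):

```html
<script src="~/jspm_packages/system.js"></script>
<script src="~/config.js"></script>
<script>
    System.import("main");
</script>
```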

Here we're telling SystemJS to load a module called main. SystemJs will look inside the wwwroot/src folder (as defined in config.js) for a file called main.js.

2. main.js

main.js won't do much other than importing some of the modules that will be used throughout the application, like KnockoutJs or jQuery, so that we don't have to import them over and over in every module. Keep it DRY ;)

From this point on, the rest of the modules will have a central point to define the ko and $ variables.
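A hedged sketch of what main.js could look like; whether the libraries are exposed on window or re-exported is an implementation choice, and the window approach here is an assumption:

```javascript
// main.js — import shared libraries once, for the whole application.
import ko from "knockout";
import $ from "jquery";

// Expose them so other modules can use ko and $ without importing them again.
window.ko = ko;
window.$ = window.jQuery = $;
```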

The idea is that every MVC View gets a module (a JavaScript file) with the same name (although it doesn't matter; it's just for clarity). These modules will be extremely simple and will have only three tasks:

Import the Main module.

Import the KnockoutJs viewmodel for the current view, along with any other necessary modules (like, say, utility classes).

Call ko.applyBindings to get the view up and running.
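Sketched out, such a per-view module (index.js here, an assumed name, as is the viewmodel class) performs exactly those three tasks:

```javascript
// index.js — the module for Index.cshtml.
import "main";
import { FullNameViewModel } from "./viewmodels/fullName";

// Wire the viewmodel to the view.
ko.applyBindings(new FullNameViewModel());
```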

3. Index.cshtml

The first thing that Index.cshtml needs to do, then, is import the module that will apply the KnockoutJs bindings (which we'll create in a minute), and of course, define some HTML.
To keep things simple, the view will only display Name and Surname inputs and show the concatenated full name below. Here's the view in its entirety:
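A hedged reconstruction of what such a view could look like; the markup, CSS classes and binding names are assumptions for illustration:

```html
<div class="form-group">
    <label for="name">Name</label>
    <input id="name" class="form-control" data-bind="textInput: name" />
</div>
<div class="form-group">
    <label for="surname">Surname</label>
    <input id="surname" class="form-control" data-bind="textInput: surname" />
</div>

<p data-bind="text: fullName"></p>

<script>
    System.import("index");
</script>
```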

As you can see, I have tried to apply some Bootstrap 3 classes and I've added Knockout bindings to the fields, but that's where the excitement ends.

4. fullName.js

Next up, we'll create the viewmodel that binds to the Index.cshtml view. I've placed it in the viewmodels folder I created earlier within src. This viewmodel is as simple as the view it binds to: it just defines the three properties that are needed in the view, and it all happens in the class's constructor.
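A minimal sketch of such a viewmodel; the class and property names are mine, chosen to match the view, and ko is assumed to be available globally (set up by the main module):

```javascript
// fullName.js — the viewmodel bound to Index.cshtml (exported in the real module).
// ko is assumed to be available globally, courtesy of main.js.
class FullNameViewModel {
    constructor() {
        this.name = ko.observable("");
        this.surname = ko.observable("");
        // A computed that concatenates both names; trim() covers the empty case.
        this.fullName = ko.computed(() => `${this.name()} ${this.surname()}`.trim());
    }
}
```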

You might be wondering what the point is of this "extra" file. Why not call ko.applyBindings(...) in the viewmodel itself? Simple: unit testing. Having the call to applyBindings in a separate file with virtually no code means that

We won't need to test that file, and

We won't need to worry about mocking the ko object on every spec.

In my first attempt I was actually calling ko.applyBindings inside the viewmodel itself, but because I'm importing KnockoutJs in the main module and not in fullName where it is actually used, the test runner kept throwing a "ko is not defined" exception; and importing KnockoutJs into the test file didn't help.
As it happens, I'm happier with this setup. It might seem overkill for a small demo application such as this one, but not having to import KnockoutJs in every file on a large application, where you will need it on pretty much every module, is actually quite a welcome idea.

Start me up

At this point, we have a functional application. Go ahead, run it; bask in the wonder of your creation.
But, I hear you say, how can we guarantee our code is stable when we haven't written any tests?

Prepping to test

Earlier we installed the jQuery and KnockoutJS packages. Next up are Jasmine and Karma.

Jasmine is a behavior-driven development framework for testing JavaScript code

and Karma is a fantastic test-runner that will watch your JavaScript files for any changes and run the tests automatically, much like NCrunch does for .NET unit tests.

This time we'll install all the necessary packages from the npm registry so that we can benefit from VS2015's great built-in npm support.
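The exact commands didn't survive the migration of this post; reconstructed from the Karma configuration that follows (the package list is therefore an assumption, and the pinned version is explained below), they would have been along these lines:

```shell
npm install karma karma-jasmine karma-chrome-launcher karma-jspm jasmine-core --save-dev
npm install karma-babel-preprocessor@5.2.2 --save-dev
```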

Notice the --save-dev flag at the end of both commands. This tells npm that we'll only need these packages at development time.

Version mismatch

You might have noticed that we are installing the karma-babel-preprocessor package using a specific version.
That's because, at the time of this writing, Babel 6 is already out, but the current jspm (0.16.19) and SystemJs (0.19.9) versions don't support it yet. The latest version of karma-babel-preprocessor is already based on Babel 6. Version 5.2.2 is the latest version currently supported by jspm.

Babel 6 support is planned for SystemJS 0.20.0, which should be released soon.

We still need to configure Karma, but let's write a unit-test in Jasmine before we go any further.

fullName.spec.js

Inside the test folder, create the file that will hold our test. I called it fullName.spec.js, but it really doesn't matter.
The test itself will be as simple as our viewmodel, of course, but remember: the idea here is to get Jasmine and Karma up and running, not to explore the deep intricacies of JavaScript unit testing. My test file looks as follows:
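A hedged reconstruction of that spec; the import path and the description strings are assumptions, but note that the second expectation is deliberately wrong, matching the failing test we'll fix later:

```javascript
import { FullNameViewModel } from "../wwwroot/src/viewmodels/fullName";

describe("FullNameViewModel", () => {
    let sut;

    beforeEach(() => {
        sut = new FullNameViewModel();
    });

    it("starts out with an empty full name", () => {
        expect(sut.fullName()).toEqual("");
    });

    it("concatenates name and surname", () => {
        sut.name("Sergi");
        sut.surname("Papaseit");
        // Deliberately wrong for now; we'll fix it later to see Karma react.
        expect(sut.fullName()).toEqual("");
    });
});
```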

Configuring Karma

Karma requires quite a bit of configuration. Thankfully, a big chunk of the work will be done for us just by running a command and answering a couple of questions.
Again, then, within a Command Prompt, go to the root of the project (one level above the wwwroot folder), and run:

karma init

As I said, this will present you with a set of questions. Here's how to answer (by basically accepting every default answer):

Which testing framework do you want to use ?
Press tab to list possible options. Enter to move to the next question.
> jasmine
Do you want to use Require.js ?
This will add Require.js plugin.
Press tab to list possible options. Enter to move to the next question.
> no
Do you want to capture any browsers automatically ?
Press tab to list possible options. Enter empty string to move to the next question.
> Chrome
>
What is the location of your source and test files ?
You can use glob patterns, eg. "js/*.js" or "test/**/*Spec.js".
Enter empty string to move to the next question.
>
Should any of the files included by the previous patterns be excluded ?
You can use glob patterns, eg. "**/*.swp".
Enter empty string to move to the next question.
>
Do you want Karma to watch all the files and run the tests on change ?
Press tab to list possible options.
> yes

This will create a karma.conf.js file in the folder where you run the command. Now we'll have to edit this config file to tell Karma we're using a transpiler (BabelJs).

jspm

Open karma.conf.js, locate the line where the frameworks are defined and add jspm to the array.

frameworks: ["jspm", "jasmine"],

Next, we need to configure jspm. Add the following elements anywhere in the config file (I added them just below frameworks):
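A sketch of those sections, assuming the folder structure used throughout this post (the exact paths are assumptions):

```javascript
jspm: {
    // Where our source files and specs live, relative to the project root.
    loadFiles: ["wwwroot/src/**/*.js", "test/**/*.spec.js"],
    // Where jspm installed its packages and wrote its config.
    packages: "wwwroot/jspm_packages/",
    config: "wwwroot/config.js"
},

proxies: {
    // Map Karma's base paths onto the wwwroot folder so modules resolve.
    "/base/jspm_packages/": "/base/wwwroot/jspm_packages/",
    "/base/src/": "/base/wwwroot/src/"
}
```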

The "jspm" section basically tells Karma where the jspm packages and our own javascript files are; the "proxies" section maps the packages path so that Karma can understand where wwwroot stands in relation to it's baseUrl. Remember that we removed the baseUrl setting from config.js? That would confuse Karma if left there. Adding the proxies completes the mapping.

Needless to say, adapt the loadFiles section to your needs in case you haven't followed the same folder structure as mine.

Babel preprocessor setup

Ok people, hold on tight because this is the last step! If we tried to run the tests with Karma as it stands, Karma wouldn't be able to interpret the ES2015 code. Much like the application itself, Karma needs a transpiler.
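Add something along these lines to karma.conf.js (the paths again assume this post's folder structure):

```javascript
preprocessors: {
    // Transpile both the sources and the specs before running them.
    "wwwroot/src/**/*.js": ["babel"],
    "test/**/*.spec.js": ["babel"]
},

babelPreprocessor: {
    options: {
        sourceMap: "inline"
    }
}
```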

In the preprocessors section we're just telling Karma to use BabelJs to pre-process both our sources and the unit tests. In the babelPreprocessor section we can configure Babel itself. We could, for instance, use optional ES2016 features like class decorators.

Time to see what the fuss is all about

If you haven't used Karma or any other JavaScript test runner before you might be wondering why on earth you'd want to go through all this trouble to set it up. Then again, if you're using NCrunch with Visual Studio you probably already know what to expect and know that it's brilliant.

Once more, open a Command Prompt and go to the root of the project, the folder where karma.conf.js lies and run the following command:

karma start

Karma will open a browser window (a Chrome instance), but you can minimize and ignore it for now. It becomes useful when you want to debug failing tests.

Other than that, on the console you will see that Karma has run 2 tests, and that one of them is failing. I did that on purpose, remember?

Karma magic

Open fullName.spec.js and change this line expect(sut.fullName()).toEqual(""); into:

expect(sut.fullName()).toEqual("Sergi Papaseit");

And save the file while keeping an eye on the console... BAM! Karma has detected changes in one of the files it's been watching and has automatically run the tests again.
That means not having to take any extra steps to run your tests while you develop. Karma for the front end and NCrunch for the back end, and you'll know immediately if your code breaks any of the existing tests. If that doesn't put a smile on your face, I don't know what will.

Can I use this in the wild?

You definitely can, but what I've shown here is not how we use it in a production environment. The setup of this blog actually requires SystemJs to transpile with Babel on the fly, in the browser. As you can imagine, this imposes quite a speed penalty on the application.

So how do we use it? Without going into too much detail, because that is a whole post in and of itself, here's our setup:

We have gulp tasks defined that transpile the *.js files upon building the solution.

Other gulp tasks copy the transpiled files into a dist folder inside wwwroot.

SystemJs looks inside the dist folder for source files, which have already been transpiled.

We have a "watch" gulp task that keeps an eye on any of the source files for changes. If they change, they are rebuilt and placed inside the dist folder.

Karma still uses the files inside src to run the tests. It's ok if those are transpiled on the fly.

And there you have it. If you have any questions, remarks or comments, feel free to let me know!

https://blog.codenamed.nl/2015/10/04/unit-testing-javascript-timing-events-with-jasmine-aurelia-and-es2015/ (Sun, 04 Oct 2015 17:51:53 GMT)

While working on an Aurelia application, I found myself needing to unit-test a recurring function; a function that is called every so many seconds. This is called a timing event in JavaScript, and is achieved through the setTimeout and setInterval functions.

The setup

We have the following Home module, named home.js, defined in our Aurelia application.
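A hedged sketch of that module; the implementation details are assumptions based on the description that follows:

```javascript
// home.js — sketch of the Home module (exported as an ES2015 class in the real app).
class Home {
    constructor() {
        this.suggestion = null;
        this.intervalId = null;
    }

    // Called by Aurelia when the route activates; starts the polling loop.
    activate() {
        return Promise.resolve().then(() => this.startLoop());
    }

    // Set up a loop that fetches a new suggestion every 7 seconds.
    startLoop() {
        this.intervalId = setInterval(() => this.getRandomSuggestion(), 7000);
    }

    getRandomSuggestion() {
        this.suggestion = Math.floor(Math.random() * 100);
    }
}
```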

Very little happens here other than some initialization. The main thing is that the activate function calls startLoop, which in turn sets up a loop that will call getRandomSuggestion every 7 seconds. So how do we test this?

Aurelia, Jasmine and ECMAScript 2015

It does not depend on any other JavaScript frameworks. It does not require a DOM. And it has a clean, obvious syntax so that you can easily write tests.

And basically because it's damn easy to learn and work with, and because it has good documentation. Speaking of which, the code examples there aren't written in ECMAScript 2015 (ES2015 for short), but in the older JavaScript syntax that everyone is more familiar with.

How do you get Jasmine to work with ES2015? I'm assuming that if you're reading this post you already have an Aurelia application set up, probably based on the Aurelia-Skeleton-Navigation. If you don't, or it isn't, go take a look at it, since it contains a nice, basic unit-testing setup using Jasmine and ES2015.

We import the Home module and, in the beforeEach, initialize said module into the sut variable. SUT stands for System Under Test, and it's just a TDD/BDD way to refer to the module, class, etc. that you are testing.

If you've never worked with Jasmine before, beforeEach does exactly what its name suggests: it runs before each of the tests defined in the current test module, and it is used to initialize classes, variables or mock data so that every test starts afresh, ensuring no test depends on any other test's results. There is also an afterEach method, which is used to clean up after each test has run.

The tests

In an ideal TDD/BDD scenario you'd write a spec/test first, make it fail, and then write just enough code to make it pass. For the sake of readability I've already shown you the Home module's code, but I'll follow the steps I originally took and, even for that simple bit of code, we'll have two tests.

First off, we want to test that the loop is started, and the best way to achieve this is to "spy" on it. Jasmine spies are a way to mock an object or a method within that object; Jasmine basically substitutes the original object or method with a spy object, thereby allowing the test framework to check the interactions with that object or method.

So, we want to verify that the startLoop method is called when the activate method is called. Therefore, we "spy" on startLoop and then use one of the several handy methods that Jasmine spies provide: toHaveBeenCalled.
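A hedged sketch of that first spec (the exact description string is an assumption):

```javascript
it("should start the loop on activation", (done) => {
    spyOn(sut, "startLoop");

    sut.activate().then(() => {
        expect(sut.startLoop).toHaveBeenCalled();
        done();
    });
});
```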

done is just a callback that Jasmine allows you to pass to signal that some asynchronous task is completed. In our case it's necessary because activate returns a promise; without a call to done() our test would never finish.

Testing timing events

Testing methods that rely on the computer clock can be tricky, especially because you don't want to have to wait for the actual, real time to pass on every test run.

So how do we test that we are indeed fetching a new suggestion every so many seconds? Again, Jasmine to the rescue: Jasmine provides a clock API. When you call clock.install(), Jasmine substitutes the window.setInterval and window.setTimeout methods with an implementation of its own. This once again allows Jasmine to check on calls to these methods and to simulate some of their functionality in order to manipulate time.

The first thing we do in our test is to set up Jasmine's clock by calling jasmine.clock().install(). This allows us to simulate the passage of time by calling jasmine.clock().tick, passing it the number of milliseconds we need the clock to move forward.

After initializing the clock, we want to spy on getRandomSuggestion; we'll then wind the clock 14 seconds forward. Since we want getRandomSuggestion to be called every 7 seconds, we can expect the method to have been called twice.
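Sketched out (again with an assumed description string), the spec could look like this:

```javascript
it("should fetch a new suggestion every 7 seconds", () => {
    jasmine.clock().install();
    spyOn(sut, "getRandomSuggestion");

    sut.startLoop();
    jasmine.clock().tick(14001);

    expect(sut.getRandomSuggestion.calls.count()).toBe(2);
    jasmine.clock().uninstall();
});
```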

Always remember to call jasmine.clock().uninstall() at the end of the test so that the original setInterval and setTimeout functions are restored. If you have several tests depending on the clock API, a much better approach is to install it in the beforeEach and uninstall it in the afterEach method.

And there you have it. If you have any questions or suggestions let us know in the comments!

The web crawling bot

Once upon a time there was a web crawling bot, also known as a spider. It looked around your website for new and updated pages to add to the Google, Bing or DuckDuckGo index. On your "old" and well-indexed website this was no problem at all. But after developing your new website with one of the latest technologies, a single page application, this suddenly became a bit different. Google developed a scheme for search engines to crawl and index your content. If your SPA adopted this scheme, your content would show up in search results.

Supporting the AJAX crawling scheme

Once the crawling bot tries to crawl your site, you need to tell it that your site is heavily based on JavaScript and that it implements the AJAX crawling scheme. To do this you add an exclamation mark after the hash in your URL, so http://www.example.com/#example becomes http://www.example.com/#!example.
When the bot sees the ! right after the hash in your pretty URL, it will temporarily change the URL to a very ugly one: http://www.example.com/?_escaped_fragment_=example. The bot will send a request to this URL, where it is our job to handle this _escaped_fragment_ and serve the bot an HTML snapshot. This snapshot is then used by the bot to index the content of the page.

If you don't use hashes in your URL you have to tell the bot via a meta tag that the application is implementing the scheme:

<meta name="fragment" content="!">

Serving HTML snapshot

Once you've told the bot you created a website with a lot of JavaScript, it is time to serve it an HTML snapshot. An HTML snapshot is a static version of your dynamically generated content.
To create this snapshot you'll need a headless browser (we use PhantomJs) to crawl your page.

The headless browser requests your dynamic page, which loads all the necessary JavaScript and CSS so that the static HTML file can be used to create a snapshot for the bot.

Obviously you don't want to serve these snapshots to a real user, so how do we tell the difference between a bot and a real user? This is where the crawling scheme comes into play. The bot will request your webpage with the ugly version of the URL, i.e. with _escaped_fragment_ in it. In your server-side code (we use ASP.NET MVC and/or Web API) you need to handle it somewhat like this:

// Return the index view if the request is not from a bot and give control to your SPA framework
if (Request.QueryString["_escaped_fragment_"] == null)
{
    return View();
}

// If the request contains the _escaped_fragment_ then we return the created snapshot
try
{
    // Remove the ?_escaped_fragment_ part to be able to create a snapshot with PhantomJS
    var result = CrawlPage(Request.Url.AbsoluteUri.Replace("?_escaped_fragment_=", "")); // CrawlPage is requesting our WebApi
    return Content(result); // result is the HTML of the snapshot, which can be served to the bot
}
catch (Exception ex)
{
    return new HttpStatusCodeResult(HttpStatusCode.InternalServerError);
}

The CrawlPage method requests our SnapshotController, which is part of an external Web API. This controller, together with PhantomJS, creates the snapshot and returns the HTML to our web application. A perfectly good example of all this can be found on GitHub.

Maybe it takes some time to get your head around this, but your SPA website will be perfectly indexed using this approach.

Do you have another approach to make your SPA crawlable? Or have any questions? Sound off in the comments below.

In one of our projects we're using SASS as a CSS extension. Together with Visual Studio 2015, Node and Gulp we've got a nice build process. Until I installed a new machine...
Suddenly our gulp task to compile SASS files into CSS failed with a very descriptive error message:

Error: 'libsass' bindings not found. Try reinstalling 'node-sass'?

So the first thing I tried was reinstalling node-sass, but that is probably not the right way to go: we're using the node package gulp-sass, which in turn uses the node-sass package. My next thought was to look for a solution on their documentation page first:

gulp-sass is a very light-weight wrapper around node-sass, which in turn is a Node binding for libsass, which in turn is a port of Sass. Because of this, the issue you're having likely isn't a gulp-sass issue, but an issue with one of those three projects.

OK, that's clear: not a gulp-sass issue. Next I removed all the node modules (by the way, rimraf is an awesome node package for that) and tried reinstalling them as mentioned in this StackOverflow question. No luck so far.

Task Runner Explorer

Our gulp tasks are part of the build process in Visual Studio through the wonderful Task Runner Explorer, which is what had been showing me the error until now. I don't know why, but I hadn't tried running the gulp task from the command prompt, which, to my surprise, succeeded!
So when Task Runner Explorer ran my gulp task it failed, but running the task manually from the command prompt worked just fine...

After this I ran into a blog post from @mkristensen via this StackOverflow question, and my problem was solved. Apparently Visual Studio 2015 ships with an older version of Node and uses that instead of the version installed on my machine.

It keeps on surprising (and pleasing) me to see how well adopted and embraced Aurelia is, taking into account that it isn't officially out yet.

The good thing about that is that there are new Aurelia resources popping up almost every week. Here's a quick round-up:

Aurelia: Next Generation Web Apps - A talk by Rob Eisenberg.
The man behind Durandal and Aurelia himself gave a talk about Aurelia at the 2015 NDC conference in Oslo. This is the video of the talk. In this one-hour video you'll learn about 80% of what you need to know to get going with Aurelia. A must-see.

The New JavaScript: ES6 - by Rob Eisenberg.
At the same NDC, Rob Eisenberg gave a talk introducing what's new in ECMAScript 6, the new version of JavaScript. Watch as Rob guides you through the many changes in this major paradigm shift in the JavaScript language. A very handy video, since Aurelia relies heavily on ES6.

Bundling an Aurelia Application
Now that you have written some code and tested it, you'll probably want to get everything ready for deployment by bundling and minifying your JavaScript files. This post covers a lot of what you need to know in order to bundle an Aurelia app.

Aurelia provides a built-in router that makes it very easy not only to define routes for the many pages your application might have, but also to automatically generate a navigation bar with your defined routes.

Nothing too exciting but very, very handy. But what if we wanted to, for instance, give each link in our navigation bar a different css class? Thankfully, Aurelia's got this covered too.

Defining route settings

Aurelia supports defining extra settings for a route through a settings property; this property will be propagated to the navigation model, which means we'll be able to use it inside the repeat.for that generates the navigation bar:
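A sketch of what that looks like in practice; the route names, the cssClass setting and the markup are assumptions for illustration:

```javascript
// Route configuration with a custom settings property per route.
const routes = [
    { route: ["", "home"], name: "home", moduleId: "home", nav: true, title: "Home", settings: { cssClass: "nav-home" } },
    { route: "about", name: "about", moduleId: "about", nav: true, title: "About", settings: { cssClass: "nav-about" } }
];

// In the view, the settings travel along with the navigation model:
// <li repeat.for="row of router.navigation" class="${row.settings.cssClass}"> ... </li>
```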

And that's one of the things I love about Aurelia: it's as if the good folks over there have not only already thought of every possible thing you might need or want to use, but have also found the cleanest and most frictionless way for you to use and implement it.

I have to confess that every time I have to work with an external API and I'm faced with Xml serialization in C#, my reaction is: no biggie. And then, it hits me. I'll probably have to use custom namespaces. Meh.

The Google Shopping Feed specification uses its fair share of custom namespaces, so let's see how we can deal with that in order to serve the serialized feed through a Web API action method.

As you can see from the link above, we'll be using the Atom 1.0 specification, but using custom namespaces should work just as well if you'd rather use the RSS specification.

I recommend you download the Atom 1.0 example file from the specification page before we continue.

Defining the serializable classes

As you can see on the example file, the fun with namespaces starts right off the bat with a custom g namespace that is defined on the root element: <feed xmlns="http://www.w3.org/2005/Atom" xmlns:g="http://base.google.com/ns/1.0">

you're out of luck. For one, there's no way (that I know of) to define the g prefix for that namespace, and, to top it off, the feed element requires two namespaces, the other one being the Atom namespace.

I'm not the kind of person to say "I told you so", but I told you so. Namespace galore.

But this is at least straightforward: the Namespace property on the XmlElement attribute assumes that the given namespace is already defined somewhere, so we can just assign the right namespace. But how do we make sure the g prefix is applied on serialization? And that the feed element gets both its namespaces?

Where the magic happens

We want to end up returning a stream of XML through a Web API action method, and for that we can use the ApiController.Content<T> method since one of the overloads of the method allows us to use a custom formatter.

Web API has two built-in formatters, JSON and XML. Were it not for the custom namespaces we could just tell Web API to return our Feed class as XML and Robert's your mother's brother, but alas. Time for a custom formatter, then.

In order to create a custom formatter we have to create a class inheriting from XmlMediaTypeFormatter; in the constructor of our custom formatter we'll then be able to specify any custom XML namespaces we may need. We will also have to override the WriteToStreamAsync method, which is the method that does the actual serializing of our classes to XML.

As you can see, the Namespaces property, of type XmlSerializerNamespaces, will hold a collection of XML namespaces. We have to give a prefix to every namespace we add to the collection; in our case this is the g of the Google namespace.

We have them, now where do we use them?

Next up, the method we need to override. According to the documentation, WriteToStreamAsync is:

Called during serialization to write an object of the specified type to the specified writeStream.

So it is not a method we'll have to worry about calling ourselves; the Web API will take care of that once we tell it to use our custom formatter. Here's what it looks like.

One thing to notice is that, as the name implies, it's an asynchronous method and so it has to return a Task. Other than that, it mostly takes care of initializing a serializer for the type passed in the parameters; then, within the asynchronous Task definition, it gets the right serializer for the given type and calls the Serialize method on it.

Putting it all together

Now that we have everything nice and ready, how do we use it? Time to create a Web API action method that will return our shiny XML feed ready to be fed to Google.

To force Web API to read a "simple" type from the request body you need to add the [FromBody] attribute to the parameter.

Web API reads the request body at most once, so only one parameter of an action can come from the request body. If you need to get multiple values from the request body, define a complex type.

But the value of email is still null.

The JavaScript code is part of a generic method we use, which is why the content type is set to application/json; charset=utf-8. Although the example in the article mentioned above also uses the content type application/json, this is the source of our problem.

The default content type of an AJAX request is application/x-www-form-urlencoded; charset=UTF-8. So if we leave the content type out, or explicitly set it to application/x-www-form-urlencoded; charset=UTF-8, it should work, right? Well... no. Apparently the value should be formatted like this:

=value

Knowing this gives the final JavaScript code which will work:

$.ajax({
    type: "POST",
    contentType: "application/x-www-form-urlencoded; charset=UTF-8", // could be left out, as it is the default content type of an AJAX request
    url: "api/discount/saveEmailForDiscount",
    data: "=" + "some@email.com"
});

Before sending a simple type, consider wrapping the value in a complex type instead. This gives you the benefits of model validation on the server side, and makes it easier to extend your model if needed.