This is just Day 1! At the end of March we'll have as many as 10 great podcasts, amazing guests, Live Google Hangouts, blog posts, Twitter Chats on Wednesdays, and a huge collection of links and projects for you to explore.

This is actually a dual tutorial/how-to. I've been so impressed with the Raspberry Pi 2 that I've wanted to see how far one can take it. It's still a modest little machine, but in my experience it's definitely twice as fast or more at single-tasking, and perhaps 6x faster at multitasking, than the previous Raspberry Pi.

Basic Raspberry Pi setup

I use the Raspbian Operating System image for my Raspberry Pi 2. For the techies, it's a Debian Wheezy image; for the non-techies, that means it's a Unix.

You can see me VNC'ed into my Raspberry Pi 2 here. Of course, you can always connect it to your monitor or TV.

I wanted to see how hard it would be to run .NET on this Raspberry Pi. Depending on how deep you want to go, it's not hard at all.

Running ASP.NET on a Raspberry Pi 2

NOTE/DISCLAIMER: This is a point in time. It's a beta/daily build of an early thing. I'm sure this will get down to a few simple lines in the future, so don't panic thinking that ASP.NET on Linux will suck. It's early.

First, you can get an old (3 years old) version of the open source Mono runtime from the standard stable repositories:

sudo apt-get install mono-complete

And this will get you version 3.2.8. You can do basic stuff. Make a HelloWorld.cs and run it with

gmcs HelloWorld.cs
mono HelloWorld.exe
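Putting that together, here's the whole round trip as one sketch you could paste into a shell on the Pi. The file contents are my own minimal example, not from the original post, and this assumes mono-complete is already installed:

```shell
# Write a minimal HelloWorld.cs (example contents; any valid C# will do)
cat > HelloWorld.cs <<'EOF'
using System;

class HelloWorld
{
    static void Main()
    {
        Console.WriteLine("Hello from Mono on the Raspberry Pi 2!");
    }
}
EOF

gmcs HelloWorld.cs     # compile with the Mono C# compiler
mono HelloWorld.exe    # run the resulting assembly under the Mono runtime
```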

Debian likes to be very stable, I'm told, so if you want to get a very NEW version of Mono, like version 3.10 (that's "Three Point Ten"), and, as such, run things like ASP.NET 5, you'll need to do a little more work.

UPDATE and IMPORTANT NOTE: There are two options here. Build Mono from source, or use a custom repository from the Mono folks.

Option 1: Install Mono from The Mono Project's repositories

The Mono Project has its own repository for Debian distributions like Raspbian Wheezy. If you've already installed an older Mono, you'll want to sudo apt-get remove mono-complete first to tidy up.
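At the time of writing, the Mono Project's instructions for Debian-family systems looked roughly like this. The signing key ID and the "wheezy" suite name come from their docs of that era and may well have changed since, so check mono-project.com before copying:

```shell
# Add the Mono Project's package signing key (key ID from their docs, circa early 2015)
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF

# Register the Mono repository for Debian Wheezy (which Raspbian Wheezy tracks)
echo "deb http://download.mono-project.com/repo/debian wheezy main" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list

# Refresh the package list and install the newer Mono from that repository
sudo apt-get update
sudo apt-get install mono-complete
```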

Related Links

About Scott

Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.

Middleman is "a static site generator using all the shortcuts and tools in modern web development." With any static site generator you can run it all locally and then push/FTP/whatever the resulting HTML to any host. However, static site generators are even more fun when you can host the source code in Git and have your static site build and deploy in the cloud.

Middleman uses Ruby for its build system and views, and some of the Gems it uses are native gems. That means if you are a Windows user, your system will need not just Ruby, but the Ruby DevKit so you can build those native gems. The DevKit is a lovely set of tools that "makes it easy to build and use native C/C++ extensions such as RDiscount and RedCloth for Ruby on Windows."

Azure Websites supports not just ASP.NET today, but also node.js, PHP, Python, and Java, all built in. But not Ruby, directly, yet. Also, Azure Websites doesn't know formally about the idea of a static site generator. You might be thinking, oh, this'll be hard, I'll need to use a VM and do this myself.

However, even though Azure Websites are totally "platform as a service" there's still a Windows Virtual Machine underneath, and you can use the disk space however you like. You've got a LOT of control and can even get a hold of a console where you can run commands and install stuff. The Azure Portal lets you open a command line from your website.

Check me out, here in the new Azure Portal. This is where I did my practice work to see if I could programmatically download and install Ruby via a script. I tried a number of different commands, all from the browser, and explored a number of ideas. When I got it working, I put together a batch file called GetRuby. I could have also used a shell script or PowerShell, but Batch was easy given what I was doing.

ASIDE: You may recognize that console from this video I did about the "Super Secret Debug Console" in Azure. It's not so secret now, it's a feature. There is still a cool debug "sidecar" website for every Azure site, it's at http://YOURSITENAME.scm.azurewebsites.net/DebugConsole but now a version of the console is in the portal as well.

Azure Websites uses an open source project called Kudu to deploy from source locations like GitHub. Kudu supports custom deployment scripts where you can jump in and do whatever you like (within the limits of the security sandbox).

Basically I needed to do these things before running Middleman on my source:

Ensure Ruby is installed and in the path.

Ensure the DevKit (which includes native compilers, etc) is installed

Initialize and setup DevKit for builds

Update RubyGems to 2.2.3 until the Windows version of Ruby has this included

Install eventmachine 1.0.7, a problematic gem on Windows

Run the Ruby Bundler's update

Install Middleman

And then, on every deployment, run the Middleman static site generator.

middleman build

The first part is a one-time thing for a new website. I just need to make sure Ruby is around and in the path. The second part is what runs every time a source file for my static site generator is checked in. It runs middleman build. Then at the very end, Kudu takes the results from the /build folder and moves them to /wwwroot, which makes the changes live.

Here's an annotated part of the first bit, but the actual file is on GitHub. Note that I'm putting stuff in %temp% for speed. It turns out %temp% is a local drive, so it's a few times faster than using the main drive, which makes this deployment faster. However, it's cleared out often, so if I wanted things to be persistent but slower to deploy, I'd put them in D:\deployments\tools. As it is, the deploy is fast (less than a minute) when Ruby is there, and takes about 3 minutes to get and set up Ruby when it's not. The exists check handles the case when a deploy happens but %temp% has been cleared, so it'll get Ruby again.

NOTE: If this seems confusing or complex, it's because I like to give folks LOTS of detail. But just look at my repository. All we have is a standard "middleman init" site plus the Azure-generated deploy.cmd and my getruby.cmd. That's all you need, plus a Basic Azure Website. The getruby.cmd is just me automating what you'd have to do anyway on a Windows machine without Ruby.

REM Note that D:\local\temp is a LOCAL drive on Azure, and very fast
SET PATH=%PATH%;D:\local\temp\r\ruby-2.1.5-x64-mingw32\bin

pushd %temp%
REM If you need things to be persistent, then put them elsewhere, not in TEMP
if not exist r md r
cd r
if exist ruby-2.1.5-x64-mingw32 goto end

REM KuduSync and actual /build to /wwwroot is after this in deploy.cmd!

And in the Deploy.cmd all I needed to change was this under SETUP. This is where YOU can do whatever you like. Note since I'm using Batch, I need to put CALL in front of other Batch files (and Ruby uses them also!) otherwise my script will just end early.

ECHO CALLING GET RUBY

call getruby.cmd

ECHO WE MADE IT

Then later, still in Deploy.cmd, I just added \build to the source directory name.

And that's it. Now whenever I update my views or other things in my Middleman source on GitHub, it automatically deploys to my live site.

Yes, again, to be clear, I realize it's a static site generator that I could run locally and FTP the results, but I'm working in a small team and this is a great way for us to collaborate on our static site. Plus, when it's done, it's all done and I don't have to mess with it again.

Debugging Custom Azure Website Deployments

I thought debugging my GetRuby batch file was going to be a nightmare. However, it turns out that the Azure cross-platform command line (the Azure x-plat CLI, open source, and written in nodejs, BTW) can connect to Azure's log streaming service. "Azure Site Log Tail" lets me see the LIVE console output as the deploy happens!
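If you want to try the same thing, a rough sketch of the commands looks like this. The site name is a placeholder, and this assumes you have node.js and npm available locally:

```shell
# Install the open source, node.js-based Azure cross-platform CLI
npm install -g azure-cli

# Stream the live deployment/console log for your site as a deploy happens
# (replace YOURSITENAME with your actual Azure Website name)
azure site log tail YOURSITENAME
```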

Now, note that the need for this whole getruby.cmd file totally goes away if the Azure Websites folks start including Ruby and DevKit in the Azure Websites VM image by default. That would make Ruby, Rails, Gems, DevKit, etc. available to everyone. Do you want Ruby on Azure? Do you care? Sound off in the comments!

HELP: The batch file could use more testing, especially for robustness as well as Ruby-correctness as I'm likely confused about a few things, but it works for me and it's a great start. Sometimes different native gems don't build, though, or Gems complains about conflicting versions and asks me to run Bundler. I have no idea why. Running things twice clears it. It's either my bug or someone else's. :)

I'm just happy that Azure Websites is so flexible that I was able to console into it from the browser, look around, add my own custom deployment hook, and do something I initially didn't think was possible!

Give Azure Websites a try FOR FREE, no signup, no credit card for an hour in a sandbox with PHP, Node, ASP.NET, or Java at http://try.azurewebsites.net. (Full Disclosure, I helped a little with this site, so I'm a fan.)


Here's a few comments and disclaimers to start with. First, benchmarks are challenging. They are challenging to measure, but the real issue is that often we forget WHY we are benchmarking something. We'll take a complex multi-machine financial system and suddenly we're hyper-focused on a bunch of serialization code that we're convinced is THE problem. "If I can fix this serialization by writing a 10,000 iteration for loop and getting it down to x milliseconds, it'll be SMOOOOOOTH sailing."

Second, this isn't a benchmarking blog post. Don't point at this blog post and say "See! Library X is better than library Y! And .NET is better than Java!" Instead, consider this a cautionary tale, and a series of general guidelines. I'm just using this anecdote to illustrate these points.

Are you 100% sure what you're measuring?

Have you run a profiler like the Visual Studio profiler or DotTrace?

Are you considering warm-up time? Throwing out outliers? Are your results statistically significant?

Are the libraries you're using optimized for your use case? Are you sure what your use case is?

A bad benchmark

A reader sent me an email recently, concerned about serialization performance in .NET. They had read some very old blog posts from 2009 about perf that included charts and graphs, and did some tests of their own. They were seeing serialization times (for tens of thousands of items) over 700ms and sizes nearly 2 megs. The tests included serialization of their typical data structures in both C# and Java across a number of different serialization libraries and techniques. Techniques included their company's custom serialization, .NET binary DataContract serialization, as well as JSON.NET. One serialization format was small (1.8MB for a large structure) and one was fast (94ms), but there was no clear winner. This reader was at their wit's end and had decided, more or less, that .NET must not be up to the task.

To me, this benchmark didn't smell right. It wasn't clear what was being measured. It wasn't clear if it was being accurately measured, but more specifically, the overarching conclusion of ".NET is slow" wasn't reasonable given the data.

Hm. So .NET can't serialize a few tens of thousands of data items quickly? I know it can.

"Precision is not the same as accuracy. Accuracy is how close you are to the correct answer; precision is how much resolution you have for that answer."

So, use a Stopwatch when you need a stopwatch. In fact, before I switched the sample to Stopwatch I was getting numbers in milliseconds like 90, 106, 103, 165, 94, and after Stopwatch the results were 99, 94, 95, 95, 94. There's much less jitter.

If you don't use Stopwatch, look for a simple and well-regarded benchmarking library.

Second: Doing the math

In the code sample I was given, about 10 lines of code were the thing being measured, and 735 lines were the "harness" to collect and display the data from the benchmark. Perhaps you've seen things like this before? It's fair to say that the benchmark can get lost in the harness.

Consider using an existing harness for small benchmarks. One is SimpleSpeedTester from Yan Cui. It makes nice tables and does a lot of the tedious work for you. Here's a screenshot I borrowed from Yan's blog.

Something a bit more advanced to explore is HdrHistogram, a library "designed for recording histograms of value measurements in latency and performance sensitive applications." It's also on GitHub and includes Java, C, and C# implementations.

Third: Have you run a profiler?

Where is our application spending its time? Surprisingly, I think we've all seen people write complex benchmarks and poke at a black box rather than simply running a profiler.

Aside: Are there newer/better/understood ways to solve this?

This is my opinion, but I think it's a decent one and there's numbers to back it up. Some of the .NET serialization code is pretty old, written in 2003 or 2005 and may not be taking advantage of new techniques or knowledge. Plus, it's rather flexible "make it work for everyone" code, as opposed to very narrowly purposed code.

People have different serialization needs. You can't serialize something as XML and expect it to be small and tight. You likely can't serialize a structure as JSON and expect it to be as fast as a packed binary serializer.

Measure your code, consider your requirements, and step back and consider all options.

Fourth: Newer .NET Serializers to Consider

Now that we have a sense of what's happening and how to measure the timing, it's clear these serializers didn't meet this reader's goals. Some of them are old, as I mentioned, so what other newer, more sophisticated options exist?

Protocol buffers have many advantages over XML for serializing structured data. Protocol buffers:

are simpler

are 3 to 10 times smaller

are 20 to 100 times faster

are less ambiguous

generate data access classes that are easier to use programmatically

It was easy to add. There's lots of options and ways to decorate your data structures but in essence:

var r = ProtoBuf.Serializer.Deserialize<List<DataItem>>(memInStream);

The numbers I got with protobuf-net were exceptional and in this case packed the data tightly and quickly, taking just 49ms.

JIL - Json Serializer for .NET using Sigil

Jil is a JSON serializer that is less flexible than Json.NET and makes those small sacrifices in the name of raw speed. From their site:

Flexibility and "nice to have" features are explicitly discounted in the pursuit of speed.

It's also worth pointing out that some serializers work over the whole string in memory, while others like Json.NET and DataContractSerializer work over a stream. That means you'll want to consider the size of what you're serializing when choosing a library.

Jil is impressive in a number of ways, but particularly in that it dynamically emits a custom serializer (much like the XmlSerializers of old).

Jil is trivial to use. It just worked. I plugged it into this sample and it took my basic serialization times to 84ms.

result = Jil.JSON.Deserialize<Foo>(jsonData);

Conclusion: That's the thing about benchmarks. It depends.

What are you measuring? Why are you measuring it? Does the technique you're using handle your use case? Are you serializing one large object or thousands of small ones?

James Newton-King made this excellent point to me:

"[There's a] meta-problem around benchmarking. Micro-optimization and caring about performance when it doesn't matter is something devs are guilty of. Documentation, developer productivity, and flexibility are more important than a 100th of a millisecond."

What are your benchmarking tips and tricks? Sound off in the comments!


One of the nice things about any modular system (like ASP.NET) is the ability to swap out the parts you don't like. As the authors of ChameleonForms state, HTML forms is a pain. It's repetitive, it's repetitive, and it's boring. While ASP.NET MVC's Form Helpers help a lot, they felt that helper methods like Html.EditorForModel didn't go far enough or give you enough flexibility. ChameleonForms adds its own templating model and attempts to be as DRY as possible. It also takes a number of issues head on, like better handling for drop-down lists and lists of radio buttons, and it even supports Twitter Bootstrap 3 so you can bang out HTML forms ASAP.

ChameleonForms also has some convenient extra abilities, like being able to automatically infer/create a [DisplayName] so you don't have to. If you're doing Forms in English and your preferred Display Name will end up just being your variable name, this can be a useful time saver (although you may have opinions about its purity).

If you've got a lot of Forms to create and they're just no fun anymore, you should definitely give ChameleonForms a try. If you're a Twitter Bootstrap shop, doubly so, as that's where ChameleonForms really shines.


Shumway - Flash in JavaScript and HTML 5

"Shumway is in a race to stay relevant as Flash fades from the web, but there will always be a long-tail of Flash content that would/will be lost when Adobe or browsers stop supporting the Flash plugin."

Think about that. We've all largely got "Evergreen Browsers" now that update themselves as often as weekly, but sometimes it feels like Adobe Flash is being attacked daily, so we're told to update that as well. Flash itself has more than fallen from grace; as Chris points out, it's fading from the web itself. Fast forward a year or so to when there is no more Flash installed, but there's still Flash on the web. Enter Shumway - it's a renderer for SWF (Flash files) without native code! Shumway literally renders Flash content with nothing but JavaScript.

Why is it called Shumway? Again, Chris:

"The name "Shumway" is derived from "Gordon Shumway", the actual name of the TV character ALF: Flash -> Flash Gordon -> Gordon Shumway -> Shumway."

That's awesome. What else is awesome? "Shumway is written in TypeScript. It has an ActionScript interpreter and a JIT that generates JavaScript, compiled using eval()."

So Shumway is an HTML experiment that uses TypeScript (a modern typed JavaScript compiler/transpiler) to read ActionScript and resources and JIT the result into evaluated JavaScript. Fantastic. It's also open source and on GitHub. Even better, the Firefox Nightly is using Shumway for Flash videos on Amazon.com. This is the beginning of their test, I presume, to sunset Flash in Firefox.
