The less incoherent ramblings of a developer turned architect from London

tldr; a build script should achieve several things: it should build and test your code locally, run (virtually unchanged) on a build server, and be easy for the application's developers to understand.

The Huddle Desktop application is written in C# for Windows and Mac. With CAKE we can finally have a single cross-platform build script, written in the language used to develop the actual application.

What Is CAKE?

Cake (C# Make) is a cross-platform build system using a C# DSL – built on top of the Roslyn compiler and available on Windows, Linux and macOS (https://cakebuild.net/). It is completely open source and hosted on GitHub.

There are three key files: build.ps1 (the PowerShell bootstrapper for Windows), build.sh (the bash bootstrapper for Linux and macOS) and build.cake, the actual build script that the bootstrappers run. The bootstrappers download the various tools and packages required to run Cake.

What did Huddle use before CAKE?

Huddle has used various build mechanisms, dependent on project requirements, which have included rake (Ruby) and psake (PowerShell) as well as plain old batch/bash scripts.

For Huddle Desktop, there were batch and bash scripts, maintained separately for the two platforms. Eventually these were deprecated for individually configured steps within our build server, TeamCity.

These steps varied considerably between the Windows and macOS versions, could not be run locally on developer machines, and tied us to a specific build server; because they were configured within the build server, they were also isolated from our git repository.

The goal was to bring both platform builds into a single script, albeit with different platform specific tasks, capable of being run locally on development machines. We also wanted to gain the ability to run individual steps within the build, either locally or easily configured as separate builds on the build server.

Anatomy of a cake script

1 – Arguments

At the top of the script you can parse arguments passed into the build script. The example repository suggests default arguments of target (the first task to be run) and configuration (Release/Debug). You can rename these, but it's handy to leave them as they are. Arguments are referenced by name and can have a default value.

We also have a platform argument in our script,

var platform = Argument("platform", "windows").Trim().ToLower();
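Put together, the arguments section at the top of our script looks something like this (a sketch in Cake's C# DSL, run by the bootstrappers rather than as standalone C#; the target default here is assumed to match our AllStepsWindows default build):

```csharp
// Parse command line arguments, each with a sensible default
var target = Argument("target", "AllStepsWindows");
var configuration = Argument("configuration", "Release");
var platform = Argument("platform", "windows").Trim().ToLower();
```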

2 – Setup and Teardown

Cake supports the concept of Setup and Teardown blocks, which run regardless of which tasks are executed (as long as a call to RunTarget is made somewhere in the script). Setup and Teardown should be defined in the script above RunTarget.

At Huddle we use Setup to identify whether we are running locally or under a TeamCity agent (see Build server integration below).
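A sketch of that Setup block (Cake DSL, run under the Cake host; the log messages are illustrative):

```csharp
Setup(context =>
{
    // Runs once before the first task, whichever target was requested
    isTeamCityBuild = BuildSystem.TeamCity.IsRunningOnTeamCity;
    Information(isTeamCityBuild ? "Running under a TeamCity agent" : "Running locally");
});

Teardown(context =>
{
    // Runs once after the final task, even if a task failed
    Information("Build finished");
});
```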

3 – Tasks

Tasks are the actual actions performed by the build script, and we try to isolate them into atomic, single purpose tasks, such as cleaning the build directory,

Task("Clean")
    .Does(() =>
{
    CleanDirectory(buildDir);
});

The default target task, AllStepsWindows, chains the atomic tasks together to provide a full build.

Task("AllStepsWindows")
    .IsDependentOn("KillHuddle")
    .IsDependentOn("UpdateInstallerProject")
    .IsDependentOn("MsBuildWindows")
    .IsDependentOn("UnitTestsWindows")
    .IsDependentOn("UnitTestsOffice")
    .IsDependentOn("UnitTestsIntegration")
    .IsDependentOn("ImportUnitTestResults")
    .IsDependentOn("VerifyWindowsCodeSigning")
    .Does(() =>
{
});

The chaining of tasks means we can create a task that can build and run only unit tests (for quick compile/test runs), as well as a longer running task that builds and runs all tests including acceptance tests.
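For example, a hypothetical shorter target for those quick compile/test runs might chain only the build and unit-test tasks (the task name here is an assumption; the dependencies are the tasks shown above):

```csharp
// Quick compile/test cycle: skip installer, integration and signing steps
Task("QuickStepsWindows")
    .IsDependentOn("KillHuddle")
    .IsDependentOn("MsBuildWindows")
    .IsDependentOn("UnitTestsWindows");
```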

4 – RunTarget

At the very end of the script is the RunTarget method which, in our build script, runs the target argument received on the command line, defaulting to AllStepsWindows for a full build.

RunTarget(target);

Build server integration

CAKE provides support for build servers such as TeamCity. We can detect that we are running the build on a TeamCity agent, and then collect relevant information for customising the build.

if (BuildSystem.TeamCity.IsRunningOnTeamCity)
{
    Information(
        @"Environment:
        PullRequest: {0}
        Build Configuration Name: {1}
        TeamCity Project Name: {2}
        Build Number: {3}
        ",
        BuildSystem.TeamCity.Environment.PullRequest.IsPullRequest,
        BuildSystem.TeamCity.Environment.Build.BuildConfName,
        BuildSystem.TeamCity.Environment.Project.Name,
        BuildSystem.TeamCity.Environment.Build.Number
    );

    isTeamCityBuild = BuildSystem.TeamCity.IsRunningOnTeamCity;
    buildNumber = BuildSystem.TeamCity.Environment.Build.Number;
}
else
{
    Information("Not running on TeamCity");
}

Build server integration includes being able to report issues and fail builds, such as code signing failures,

if (isTeamCityBuild) TeamCity.BuildProblem("Code signing failed on MSI or EXE", "Code signing failed on MSI or EXE");

How did we create our build script?

Within TeamCity the Windows build consisted of eight distinct steps, in addition to the build server checking out the source code and collecting the output artifacts.

The macOS build consisted of two steps, with a large amount of logic placed in a standard make file.

Windows first

We decided to tackle the Windows build first, given that it was completely external to the git repo. We performed a lift and shift of every step into the build.cake file as skeleton tasks. The tasks were then fleshed out individually and debugged locally before attempting any builds on the build server itself. Not only was there no need to keep testing the script on the actual build server, but steps could also be debugged individually, greatly speeding up the process.

What were the hard bits?

TeamCity Parameters

With the old TeamCity builds we had various configuration parameters and environment variables used to customise different builds. Luckily TeamCity can pass these easily to the PowerShell script as arguments.

Normal configuration parameters can be referenced directly,

-configuration="%MSBuildConfiguration%"

whereas environment variables require the env. prefix,

-environment="%env.Environment%"

Advanced installer

We use Advanced Installer to package our Windows application as a setup.exe as well as a more enterprise-friendly MSI package. We have to configure Advanced Installer for versioning (read from TeamCity) as well as configuration values for the various environments we target.

Fortunately, Advanced Installer provides command line tools, which meant we could use the inbuilt StartProcess method to run the command line tool with appropriate parameters.

Old version of NUnit – manually gathering results

Cake provides support for unit testing frameworks, including the older NUnit 2.6.4 as well as the more modern NUnit 3. Because of a requirement to use the older version of NUnit, we had to include a step that imports the XML output of the tests when running under TeamCity, using the integration libraries (specifically TeamCity.ImportData).

When running locally the NUnit tests display the results to the console, so the import is not required.

Failing the build if incorrectly code signed

The final step was to verify that all the relevant setup and application files had been signed correctly with a code signing certificate. By using StartProcess we could execute the signtool.exe directly, collecting the exit codes and failing the build if any part of the application was not signed correctly by calling TeamCity.BuildProblem.
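A sketch of that verification task (Cake DSL; the signtool arguments and the setupExePath variable are assumptions for illustration):

```csharp
Task("VerifyWindowsCodeSigning")
    .Does(() =>
{
    // Exit code 0 means signtool verified the signature successfully
    var exitCode = StartProcess("signtool.exe",
        new ProcessSettings { Arguments = "verify /pa " + setupExePath });

    if (exitCode != 0)
    {
        if (isTeamCityBuild)
            TeamCity.BuildProblem("Code signing failed on MSI or EXE",
                                  "Code signing failed on MSI or EXE");
        throw new Exception("Code signing verification failed");
    }
});
```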

What have we left to do?

We still have to complete the macOS sections of the CAKE script, although the scaffolding was put in place while the CAKE script was being debugged for Windows.

It helped that we had already moved some projects to CAKE build scripts, so I could lean on Toby Henderson for a skeleton script and guidance on how to achieve what I wanted.

Generally, using CAKE has been an enjoyable experience so far, and has resulted in a much cleaner build process, maintained outside of any specific build server, but retaining the feedback and build server integration to which we have become accustomed. Roll on the macOS build!

Cracking conference

Before I witter on about my small part at NDC, I must take a moment to thank the NDC Conference team, and the agenda committee for creating a fabulous event.

It takes a huge amount of work and preparation to create an interesting agenda, and then make sure everything falls into place getting speakers and delegates from around the world to enjoy a smooth running conference that has a real community feel where everyone mingles and chats over a coffee or some food.

As a speaker, the ability to chat to people who had seen my presentation in a relaxed environment is especially welcome. It’s great to see what areas people were most interested in and where I might not have been clear about topics.

Of course, I did make it easier to find me at the NDC Party when both myself and Charlotte from NDC Conferences decided to dress up a bit. We even ended up on the big screens in the main conference area.

For those wondering whether it was a coincidence, of course not! The first time I wore my television suit to NDC Charlotte was disappointed the company didn’t make them for women. Four years later they do, and so it only seemed proper to fulfil her wish (however crazy or misplaced).

Give It A REST – my presentation

Another thank you to all who attended my presentation on designing a public REST API. I was genuinely surprised by how many people attended, and from the Q&A at the end it seems people are still working out a) whether they should be using REST, and b) how to get it right.

I hope my presentation provided some pointers and places to look, especially the resources on API design which, together with Toby Henderson, a colleague at Huddle, really helped me collect all my points into a coherent whole.

tldr; JavaScript really has taken over everything; it may even be configuring your proxy server settings via a PAC file. If you use the full .NET Framework, you may find your application cannot reconnect to the internet automatically after a network connection is restored; you have to re-detect the settings with GetSystemWebProxy when your application is back online.

Note : This does not affect .NET Core.

Stanley Kubrick once said,

‘If it can be written, or thought, it can be filmed’

Which was possibly borrowed by Jeff Atwood to coin a new Atwood’s Law,

‘any application that can be written in JavaScript, will eventually be written in JavaScript’

While debugging network connectivity issues with the Huddle Desktop for Windows application, I experienced just how true this is.

We found that when Huddle Desktop lost network connectivity; for instance, when being removed from a docking station and switching from wired network to WiFi, or moving between different WiFi networks, it detected the loss of the network but did not reconnect when it became available again.

This was only happening at companies with custom proxy server configurations, and was initially hard to reproduce, and even harder to then track down.

Clearly I needed to learn more about proxy servers …

Network proxies

So what is a proxy server? A proxy server is a computer system or an application that acts as an intermediary for requests from clients seeking resources from other servers (source: Wikipedia).

When you have a network proxy configured, every time you request a resource from the local network or the internet it is processed by the proxy server which will inform you where to look for that resource. This may even include delivering a modified or cached version of a resource from the proxy server itself. In the past, mobile phone operators have even used proxy servers to resize images on the fly to reduce data usage on their networks.

If you are a developer and have used Fiddler or Charles to inspect network traffic, you have used that application as a proxy server. When you inspect traffic from your local system, all the proxy server configuration happens in the background when you start Fiddler.

A proxy server can be configured manually to a specific IP address (this is how Fiddler achieves it), configured automatically at the same time as obtaining a network address via DHCP, or it can be determined by using a Proxy auto-configuration (PAC) file.

In the Windows internet settings, the PAC file option is the Use automatic configuration script check box, followed by the location of the script to be used.

I’m sure you mentioned JavaScript ages ago

A PAC file contains a JavaScript function, FindProxyForURL(url, host). This function returns a string with one or more access method specifications, which cause the user agent to use a particular proxy server or to connect directly (source: Wikipedia).

So every time you request a URI in your browser, it calls the JavaScript function FindProxyForURL, which returns the proxy server to use (or that no proxy should be used at all) for that URI. For one of our clients that JavaScript file has over 700 lines, mainly handling network segmentation across multiple countries.
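For illustration, a minimal PAC file might look like this (the hosts and proxy address are hypothetical; real corporate files, as noted, can run to hundreds of lines):

```javascript
// Hypothetical PAC file: internal hosts connect directly,
// everything else goes through a proxy, with DIRECT as a fallback.
function FindProxyForURL(url, host) {
    if (host.indexOf(".internal.example.com") !== -1) {
        return "DIRECT";
    }
    return "PROXY proxy.example.com:8080; DIRECT";
}
```

Real PAC files typically also use helper functions such as dnsDomainIs and isInNet, which the PAC environment provides.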

Importantly, the PAC file cannot be a local file path, but must be served over HTTP or HTTPS.

.NET Framework is special

A rifle through some .NET Framework pages also revealed that the full .NET Framework (not .NET Core) handles proxy servers separately from the underlying Windows operating system. This even includes processing PAC files with a JavaScript VM that has subtle differences from the one used by Windows.

As the .NET Framework maintains its own state of the network configuration, separate from that of the underlying operating system, the two can fall out of step with one another. This was precisely what our customers were seeing; our application (and some other .NET applications) could not recover from network disconnections, but most applications, and the operating system itself, recovered successfully.

In .NET the proxy settings can be read from WebRequest.DefaultWebProxy, or from WebRequest.GetSystemWebProxy. The difference is subtle; DefaultWebProxy supports overriding the proxy settings via the app.config file, but will use system settings where no such override exists. GetSystemWebProxy only reads proxy settings as configured in Internet Explorer. At Huddle we support the app.config file model for customers wishing to implement custom settings specific to our application.

It should be noted that .NET Core has changed this model. It relies solely on the underlying operating system for proxy configuration. Also, there are no standard app.config settings for overriding this configuration.

Reproducing the bug

So after this period of research we could finally reproduce the connectivity issue. We placed a simple PAC file into Amazon S3 that pointed to a local proxy server on our internal network.

We had spent some time setting the local proxy server directly, but this never reproduced the connectivity issue as it was the act of failing to load the PAC file which was the root cause. Once we could reproduce the issue it was down to debugging our desktop application to pinpoint what was happening.

It became clear that on a network disconnect, the configuration to use a PAC file was lost, and never recovered. It appears that when the network fails, the PAC file is reloaded in some manner, and when it too cannot be found, the network stack removes the PAC file as a proxy option, and defaults to use direct access only with no proxy server. The PAC file option is not tested again when network connectivity returns.

Our code was only using the DefaultWebProxy property, to respect any app.config setting, but recreating this did not restore the PAC file configuration. However, a call to GetSystemWebProxy did detect the PAC file configuration when the network connectivity returned.

The Solution

When we detect network connectivity returning, we query the proxy settings just as we do when starting the application.

If DefaultWebProxy provides a proxy, that must have come from either the app.config override or from a manually configured proxy server in system settings, so we use that.

If DefaultWebProxy does not return a proxy (which occurs after a network disconnect when a PAC file has been configured), but GetSystemWebProxy does return a proxy, we use that. If no proxy is returned by either method then we must have direct access with no proxy servers.

By making a call to GetSystemWebProxy to obtain a new instance of the WebProxy class and using this for future network calls, it processes the PAC file as expected to return the correct proxy server.
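A minimal C# sketch of that decision, with a hypothetical ResolveProxy helper (the real code reads WebRequest.DefaultWebProxy and WebRequest.GetSystemWebProxy(); here the two sources are passed in so the fallback logic can be seen in isolation):

```csharp
using System;
using System.Net;

// Hypothetical helper: decide which proxy to use once connectivity returns.
// defaultProxy stands in for WebRequest.DefaultWebProxy (app.config override or
// manually configured proxy); systemProxy for WebRequest.GetSystemWebProxy().
static IWebProxy ResolveProxy(IWebProxy defaultProxy, IWebProxy systemProxy, Uri destination)
{
    // A proxy surfaced here came from app.config or a manual system setting
    if (defaultProxy != null && !defaultProxy.IsBypassed(destination))
        return defaultProxy;

    // After a disconnect, a PAC-file configuration may only be visible via
    // GetSystemWebProxy, so re-detect before assuming direct access
    if (systemProxy != null && !systemProxy.IsBypassed(destination))
        return systemProxy;

    return null; // no proxy returned by either method: direct access
}

var destination = new Uri("https://example.com/");
var pacProxy = new WebProxy("http://proxy.local:8080"); // what GetSystemWebProxy might return
var empty = new WebProxy();                             // an unconfigured proxy bypasses everything

var chosen = ResolveProxy(empty, pacProxy, destination);
Console.WriteLine(chosen == null ? "direct" : chosen.GetProxy(destination).ToString());
```

The key design point is the order of the checks: an app.config or manual proxy always wins, and the system proxy is only consulted as a fallback.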

Many thanks to the organisers and crowd at DDDSouthWest last Saturday, and apologies for my having to dash off earlier than planned (thankfully x-rays at my local hospital proved I didn’t have any broken bones in my wrist).

After a run through with the Huddle dev team today, I have updated my presentation, Give it a REST – Tips for designing and consuming public APIs, with some corrections and a few extra slides and uploaded them to GitHub over here – https://github.com/westleyl/DDDSouthWest2018

Every now and then I get an e-mail in Outlook with an attachment that requires me to click to download the full message, after which the attachment doesn't seem to be properly aware of its file type. Sometimes the attachment even says something like Invoice.pdf, but when you tap on it to open it you get a dialog asking you to CHOOSE AN APP.

Saving the file and trying to open via the File Explorer app doesn’t work either.

I discovered the solution by accident, as I tried to forward the e-mail to another e-mail address (GMail) with the hope that it could parse the attachment MIME type correctly.

Here's the weird bit: the draft created after you click on Forward shows the attachment with the correct file type. Tap on that attachment and it now opens in the default application (in this case the PDF opens in Edge, which now previews PDF files).

It all seems a bit crazy, but it's a very quick workaround for a stupid bug.

Here's a very silly issue I hit the other day. It involves the classic use (or abuse?) of the LINQ operator SingleOrDefault, but unexpectedly getting more than one result returned. In this case an upgrade to a component had changed how data was stored in a SQLite database; we hadn't changed the schema, just the number of versions of data we stored per item.

It was quickly found, and we placed an exception handler around our call to SingleOrDefault, filtering the exception on its message text to decide when to clean up the database.

This was all caught in regression testing; after further testing to verify the fix, we were safe to release the upgrade.

Then we get a support call from a test user who says the very situation that we believed we had fixed had just occurred with their system. A quick check in the logs and it is indeed the same issue. Only, they are running the system on a Norwegian language version of Windows. Who knew that the .NET framework very kindly provides localization for exception messages? This makes sense, it’s the kind of thing a user might see in a dialog box. Here is what “Sequence contains more than one matching element” becomes in Norwegian,

Sekvensen inneholder mer enn ett samsvarende element

Oh dear. So our filtered exception handling has failed when used on a non-English system. Instead we are now catching that InvalidOperationException, then performing a proper check for multiple rows before dealing with them, rather than relying on the text of the exception message.
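A minimal, hypothetical sketch of that shape (the settings-lookup scenario, method and clean-up are illustrative; the real code repairs the database where the comment indicates):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical lookup: catch the InvalidOperationException thrown by
// SingleOrDefault, then confirm the duplicate-row condition by counting the
// matches - never by inspecting the (localised) exception message.
static string GetSetting(IReadOnlyList<(string Key, string Value)> rows, string key)
{
    try
    {
        return rows.SingleOrDefault(r => r.Key == key).Value;
    }
    catch (InvalidOperationException)
    {
        // On Norwegian Windows the message is "Sekvensen inneholder mer enn
        // ett samsvarende element", so the text cannot be relied upon
        var matches = rows.Where(r => r.Key == key).ToList();
        if (matches.Count > 1)
            return matches.First().Value; // the real code cleans up the database here

        throw; // some other invalid operation: let it propagate
    }
}

var rows = new List<(string Key, string Value)> { ("theme", "dark"), ("theme", "light") };
Console.WriteLine(GetSetting(rows, "theme")); // recovers from the duplicate instead of crashing
```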

One of the interesting talks on my first day at NDC Oslo this year was from Norm Johanson who works for Amazon within AWS cloud services. He ran through the tooling for AWS Serverless applications within Visual Studio 2017 (also available for Visual Studio 2015).

While the Visual Studio integration was slick, more interesting for me was the command line integration with the .NET CLI, which allows you to create new .NET Core applications from AWS project templates. Even better, you can deploy the service and invoke a Lambda function from the CLI too.

There was also a good example of using the AWS API Gateway to serve the static elements of a web site (CSS and HTML) from S3 storage, while the main ASP.NET Core application runs within Lambda.

In the normal Patch Tuesday updates my phone received Windows 10 Mobile update 15063.414, and after deleting my Garmin devices from my Bluetooth settings, Garmin Connect was able to pair with a new device successfully. The Garmin Connect app itself received an update to version 3.19.0.0 a few days later, and it now appears to be syncing activities successfully once more, and most of the time supports notifications again. The issues with Continuum remain, though.

I own a Lumia 950 XL (actually I own two, one needed a screen repair and is my backup). Neither are currently on an Insider Build so I was excited when Windows 10 Mobile Creators Update came out on production devices in early May, and I quickly updated both devices. I thought I’d leave it a bit of time to bed in before I gave my view on this update.

The Good

It does appear to be a bit faster, and Edge has new features which I'm enjoying. There's still a flurry of updates to all the core apps, some less welcome than others; Groove has taken a step back with some of its layouts, but mainly it's positive. The voice recognition definitely seems improved, coping much better with wind noise while cycling when replying to a text message through my Bluetooth headset (quite a challenge for voice recognition).

Subjectively there appear to be a few more occasions where the phone hangs at random, although that may have been before I tracked some of them down to Continuum. Windows 10 has always been a bit temperamental about using the camera from the lock screen, often not responding after taking a photo and viewing it, and I don't think that has changed.

The Bad – Continuum

I connect my phone to a Continuum dock at work, with a standard mouse, displaying on a 24” monitor. It's a great way of using Groove to play music and muck about with playlists of Radio 4 podcasts.

As I mentioned above, one of those more frequent crashes I can now definitely repeat. It occurs when your phone is connected to the Continuum dock but has locked itself and switched off its screen. If you disconnect at this point, it is highly likely that the phone will no longer switch on. If you get it connected back to the dock quickly you can recover: turn on the phone, wait for the lock screen, and then remove it from the dock. If you don't re-connect to the dock, the phone remains blank, and it requires a long power button press to reset it.

There is also a further Windows 10 Continuum bug. I suspect that to 'speed up' the launching of an app which was previously running on the phone, they are no longer killing the original app and restarting it, but activating it straight onto the Continuum display. The issue here is that the app has no idea it now has a 24” monitor to play with. This is especially true of Groove, which appears without any sensible master/detail layout; it acts as if it only has a single, albeit very, very wide, display. To solve this you have to kill the app in the app switcher and relaunch it, effectively doing what the phone used to do prior to the Creators Update.

The (downright) Ugly – Garmin Connect

The other heavy use I make of my phone is to transfer data from a Garmin Vivoactive activity tracker that records my cycle ride to work (250km a week) as well as swims and runs. Last year Windows 10 received a full UWP Garmin Connect app, so I can use the very same app on my Asus T100, Linx 7 tablet or 950 XL to sync the data via Bluetooth. When connected to the phone it also updates the watch with notifications, calendar entries and weather information. It's miles better than the USB cradle sync, as that only synchronises activities, not notifications, and requires you to have a USB cradle with you at work as well as at home. It's also way better than the iOS version of Garmin Connect, which looks old and dowdy in comparison and is much harder to navigate.

The moment the Creators Update was installed I struggled with synchronising data, and a Garmin device I received back from repair refused to pair with Garmin Connect at all. That left me with one watch that can sync notifications and another that can't, and even the one that can still connect only supports very small sets of activity data, such as swimming. Any activity with a serious amount of GPS data fails. Reading release notes and browsing some forums, it looks like the Creators Update contained improved Bluetooth LE connectivity. I suspect they subtly changed the Bluetooth software stack and that broke Garmin Connect, which may be getting timeouts if Bluetooth LE no longer likes supporting long-lived Bluetooth operations.

It's been well discussed on the Garmin Forums, and there is a Garmin FAQ regarding the issue that leads to a Windows Answers forum post which pretty much says the 'workaround' is to roll back to the Anniversary Update. I'm not sure that is even possible now that production devices are being updated to the Creators Update, and I don't really consider regressing to an older operating system to be a real workaround.

So I'm back to syncing my data via a Linx 7 tablet at work, and not having the smartwatch notifications (not a big deal, to be honest).

What's amazing is that this was a known issue on Insider builds, and everyone still ploughed ahead as if there wasn't a problem to be resolved. Microsoft should be giving Garmin some serious help with this, as they have produced a solid UWP app that's best of breed, at the exact point that companies like Adidas have pulled their own fab tracking app (they are closing down their entire miCoach service).

The future

Recently on All About Windows Phone, Steve Litchfield provided his view on Windows 10 Mobile and the future. I can only hope he is wrong and we do get a decent update to the 950 XL that fixes the issues with Garmin Connect I've described above. Everyone talks about the 'app gap' on Windows 10 Mobile, but it's not helped by cock-ups like this.

tldr;

One of my reasons for moving my blog from https://geekswithblogs.net/twickers was the discussions with friends about reviving the alt.net scene in the UK, as we got excited about the possibilities of .NET Core and getting our elbows deep in platforms other than Windows. So this is a joint post with the alt.net stream over at https://medium.com/altdotnet.

I know, you want to get a scoop on my subjective view of the future of .NET, but as George Santayana allegedly said, 'Those who cannot remember the past are condemned to repeat it', so it's time for a bit of history about how I got to where I am as an Application Architect at Huddle, primarily working in C#. Hopefully some of it might mirror your own experience.

My favourite languages

6502 Assembly

No joking about, this was the second programming language in which I became proficient. I'd worked through ZX Basic on a Sinclair ZX81, and graduated to BBC Basic on a BBC Model B – oh, the luxury of a proper keyboard, decent screen resolutions, and colour! Of course, with only 32KB of memory (not all of which was available for your actual programs) you hit the limits of what you could achieve with BBC Basic, both in terms of memory and speed of operation.

So, it was obvious, you picked up the Advanced User Guide and taught yourself 6502 assembly. Several months later you had a keyboard driven mouse pointer (mice were new, not everyone had them) which fitted into less than 1K including all the core logic as well as the background, mask and pointer that could be loaded ‘underneath’ the user memory.

It was hard work but good fun. It could also go wrong in so many ways … think C++ random pointers and allocations on steroids.

Turbo Pascal

University computer courses taught me Pascal; there was no Java yet (I'm old, otherwise why was I learning 6502?). This was the teaching language of its time, with a goal of instilling good practices with structured programming. It had a reputation for not being suited to real world use outside of academia.

Borland sorted out that reputation as an academic-only language when it released Borland Turbo Pascal. I discovered this on a work placement during my university course, and it's where I experienced my first ever IDE (Integrated Development Environment). To me, Turbo Pascal defined what a developer could expect from an IDE: decent help, integrated building and compiling, multiple file editing and a decent debugger. It meant I never went near a make file, for which I was glad for many years.

Borland Turbo Pascal was based on a compiler created by Anders Hejlsberg, who was a major force behind C# and the .NET framework, see later on in this story.

Visual Basic 1, 2, 3, 4, 5 and 6

The reason I'd written a 6502 assembly mouse pointer was that my neighbours had bought one of the first ever Apple Mac units in the UK. I knew the future was graphical, but even though during work placements I touched upon GEM (a rival to Windows), it wasn't until my first real jobs after university that I encountered Windows 3.0. Realistically this was the first 'usable' Windows, if only because it included decent sound card support and networking that … mostly worked. So, although I was still writing some DOS programs in Turbo Pascal, it was obvious there needed to be a similar tool for Windows. C and C++ were just too damn hard for most developers, especially with the Windows API.

Working for a small company inside of Sky Television, we were asked if we could provide the first in house weather system, and we grabbed this new product from Microsoft called Visual Basic. It had an IDE, you could create forms and just drag and drop controls onto them. You just double clicked a button and it opened a code window where you added some logic. You could easily show dialog boxes. You could access files. You couldn’t do databases properly (that came with the Jet database engine in Visual Basic 3.0) but suddenly everyone could write applications for Windows.

And I mean EVERYONE. This was the democratisation of programming. It was fun, like 6502 assembly, but with none of the pain. You could play. You could experiment. You didn’t have to know lots of stuff. It didn’t come with hundreds of libraries or frameworks – it came on three 720K floppy disks; there was no room for them.

Eventually I ended up working directly for Sky Television (now BSkyB). Those early VB1 programs became VB3 and VB4 programs, with 16-bit upgraded to 32-bit, and by the time we were programming in VB6 we were in a full-on COM based world. The Visual Studio install required CDs, especially for the MSDN help library, which was your local version of Stack Overflow (as long as you could guess what keywords Microsoft had decided to use). We had distributed systems handling the general election graphics, feeding anything from Excel spreadsheets to 3D virtual sets, with an Oracle database backend pushing Access out of the picture.

Visual Basic 6 was still an easy entry point to programming Windows, but by this point there were two types of Visual Basic programmers; those who dabbled, and those who really understood COM and could produce seriously robust and scalable systems with multiple tiers, reusable objects and threw XML around with abandon. It had lost the complete ease of use for which it had become loved, and for those who did push it to the limits, the limits were very apparent, and we occasionally looked over the fence at the C++ devs in envy. Of course, we didn’t quite understand the ‘fun’ the C++ devs were having with implementing COM …

SQL

I also have an affection for a SHOUTY language that to a structured programmer first appears bonkers. Gone are those loops and if .. then .. else blocks, and you have to change your mind to think in terms of data and the linkages between data sets. Yes, you can use a CURSOR to do a loop, but you shouldn’t. Really, you shouldn’t. The challenges in SQL are quite different, and the sense of satisfaction when a high performance query gets data back from tens of millions of rows in seconds is a real thrill.

It seemed a bit niche, until we got LINQ in the .NET framework, and suddenly I could write expressive data manipulation code in C# as well as SQL. Fab!
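That set-based style carries over almost directly into LINQ. As a minimal sketch (the data and names here are invented for illustration), the same GROUP BY thinking you would use in SQL looks like this in C#:

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        // Invented sample data standing in for rows from a table.
        var orders = new[]
        {
            new { Customer = "Acme",   Total = 120m },
            new { Customer = "Acme",   Total = 80m  },
            new { Customer = "Globex", Total = 200m },
        };

        // Set-based, SQL-like thinking: no explicit loop over rows,
        // just a declaration of the result you want (compare GROUP BY).
        var totals =
            from o in orders
            group o by o.Customer into g
            select new { Customer = g.Key, Total = g.Sum(o => o.Total) };

        foreach (var t in totals)
            Console.WriteLine($"{t.Customer}: {t.Total}");
        // Acme: 200
        // Globex: 200
    }
}
```

No cursors, no mutation of an accumulator variable; the query describes the shape of the answer and the runtime works out the iteration.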

C#

By the late 90’s the combination of C++, Visual Basic 6.0 and COM looked like it was hitting the limits of its capability, and Java was in the ascendant on servers, challenging the sales of Windows Server. It took quite a few years, growing out of early projects within Microsoft, but the world received a new framework – the .NET Framework – and a new language, C#, complete with curly braces and more than a passing nod to Java. The .NET Framework brought standardisation if not open source. Even the new language C# became a standard as ECMA-334:2003 / ISO/IEC 23270:2003.

I could have moved from Visual Basic 6.0 to Visual Basic .NET but I felt it was genuinely time for a change, and maintaining VB6 code next to VB.NET was just not enough of a context switch to stop you making damn silly mistakes. The decision was made easier by the lack of VB.NET code samples, and the fact that the simplicity of VB had finally disappeared completely.

In the first version of Visual Studio.NET if you removed the default form from a VB.NET project to replace it with your own, it wouldn’t compile. There was no helpful VB6 dialog to get you to wire up an entry point to your application. Suddenly, everyone who used VB was asked to understand the entry point of an application, and how to configure delegates. It was the end of the democratisation of programming at Microsoft …

… but us old VB devs were having fun with C#; we got proper access to threading, new constructs, and along came all sorts of goodies, like LINQ, generics, lambda functions and anonymous types and methods. We could no longer edit and continue, but apart from infamously running a general election system in edit and continue – editing the code live as the results came in – I never considered it a killer feature.

As I have hinted, all this complexity came at a cost. Combined with learning proper patterns and practices, and getting used to TDD and continuous integration servers, we found ourselves in a place where creating a simple web site would take at least five days – to do it properly.

A contractor, Charles, who I was working with, hit this with a web site for a relative. He was stunned by how something supposedly so simple took so much time and resulted in a web site that required knowledgeable developers to support. Of course, despite being in a startup, this is the point at which you had to admit you had finally become enterprise developers.

Those same enterprise developers could list all the steps in the ASP.NET page life cycle, which became the classic question thrown at Microsoft developer interviews.

It was no surprise that other languages took the place of the missing VB – Ruby (on Rails) and JavaScript allowed much quicker creation of prototypes and small systems. They might not scale, but then neither might the business plan, so at least you didn’t waste oodles of time creating a gold-plated application that no one used.

Summary – other languages

I’ve been digging through JavaScript almost as long as I’ve been programming in C#, and while I saw amazing stuff demonstrated by people like Helen Emerson in the early days, it has not yet become a language I love. Nor do I see it as a pariah; it is odd that it gets so much stick from developers who favour compiled languages. It’s like claiming a screw is a much better version of a nail. It’s quite a different beast, more complex both to manufacture and to use.

Not just MSDN

I also had the fortune that when I left BSkyB for a short-lived startup I ended up as an owner/contractor working with companies that were not Microsoft only shops. This meant I was quickly weaned off Visual SourceSafe (which did a great job of getting developers into source code control) and onto Subversion and became pragmatic about tooling and frameworks.

I was hanging round with the guys at London .NET Users, who were remarkably progressive for a user group started at the same time as .NET. Here was a user group where third party tools such as NUnit, Windsor IoC, and NDoc were considered standard.

There was also an acceptance of more pattern based development, and an eagerness to adopt initiatives such as the ASP.NET MVC framework. An openness and pragmatism was still combined with a loyalty to the .NET framework and, in general, C# as a top tier language.

In the Java shop where I worked, they were amazed when I demonstrated LINQ queries joining database results and in-memory arrays together with SQL-like ease. They quite liked C# generics too.
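To give a flavour of the kind of query that impressed them – this is a sketch with invented data, using a plain array to stand in for both the database result set and the in-memory list:

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        // Stands in for rows returned from a database query.
        var dbRows = new[]
        {
            new { Id = 1, Name = "Alice" },
            new { Id = 2, Name = "Bob" },
        };

        // A plain in-memory array, perhaps built up in application code.
        int[] activeIds = { 2, 3 };

        // One query joins the two sources with SQL-like ease.
        var active =
            from row in dbRows
            join id in activeIds on row.Id equals id
            select row.Name;

        Console.WriteLine(string.Join(", ", active)); // prints "Bob"
    }
}
```

The same join syntax works whether the sources are arrays, lists or (via a LINQ provider) tables in a database – which is precisely what made it so striking coming from a world where the database and the application code spoke different languages.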

Why Alt.NET

Part of the atmosphere within that user group was later inspired by the Alt.NET movement originating in Austin in the US. We were enjoying Alt.NET beers arranged by Sebastien Lambla and weekend un-conference days.

One of those days was where I first learnt how to configure git on Windows (still more of a challenge than it should be). At an Alt.NET beers I found a job at a digital download startup in Hammersmith, where I ended up working with Charles, encoding the entire Warner Brothers, EMI and Sony music catalogues to MP3 files and applying all that Alt.NET focus on the best open source tools, combined with .NET, to deliver a system for considerably less than our rivals.

For myself, in the UK,

Alt.NET was exciting.

Alt.NET was open.

Alt.NET welcomed everyone.

Alt.NET wasn’t elitist.

Alt.NET was noisy.

With .NET Core and cloud services, I believe we have the inflexion point in the .NET framework to get people excited again. We want to throw .NET Core open to everyone, make some noise and make developing fun and exciting.

It’s easier than ever to download a lightweight editor like Visual Studio Code, hack some C# together, compile it and copy it over to a cloud server and have your code running on an AWS Linux box within a few hours.

.NET Core – it’s the democratisation of programming again.

It’s open to all, let’s welcome everybody in.

Many thanks to the organisers of Get.NET 2017; Izabela Wawer from Sii Łódź for making sure the speakers and attendees were well looked after, and a personal thank you to Michał Śliwoń from the agenda committee for asking me to provide a presentation. It was a great conference and my first trip to Poland in 20 years, and I had real fun exploring the city, including a 10K run on Sunday morning. You can find some pictures of the city, and the run, on my Instagram feed https://www.instagram.com/westleyjam. I promise, I won’t leave it as long next time.