Wednesday, 29 June 2016

The last few days have involved learning lots of things, breaking lots of things, fixing them and then breaking some other things. I have lived the Facebook "Fail fast" motto!

Anyway, the latest fun and games is using Application Insights, a pretty swish Azure service that collects not only loads of server and performance data from your web apps but also allows you to collect custom metrics. I have a shared library to track these metrics, but realised I needed to be able to track events from multiple apps into the same App Insights bucket while keeping separate buckets for server data - in other words, to have more than one instrumentation key.

Initially, I simply passed the instrumentation key I wanted to my library like this:
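It boiled down to constructing the configuration by hand, something like this sketch (MetricsTracker is an illustrative name, not the real library class - the key detail is the bare new TelemetryConfiguration()):

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

// Illustrative wrapper: a bare new TelemetryConfiguration() has no
// telemetry channel attached, which is what blows up later.
public class MetricsTracker
{
    private readonly TelemetryClient _client;

    public MetricsTracker(string instrumentationKey)
    {
        var config = new TelemetryConfiguration
        {
            InstrumentationKey = instrumentationKey
        };
        _client = new TelemetryClient(config);
    }

    public void TrackEvent(string name)
    {
        _client.TrackEvent(name); // fails at runtime with no channel configured
    }
}
```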

But when called, this produced the rather obtuse error: "Telemetry channel should be configured for telemetry configuration before tracking telemetry".

This is another MS classic that makes sense only after you have worked out what went wrong, rather than telling you what you actually did.

The telemetry channel is set up in config, so why wasn't it working? Duh: because I had passed new TelemetryConfiguration() rather than getting TelemetryConfiguration.Active, which is what the default constructor uses. I didn't want to use Active, however, because changing it could affect the configuration being used by the whole app, and I wanted my change to stay local. Fortunately, TelemetryConfiguration provides two helper methods: CreateDefault(), which creates a copy of the active configuration and lets you change what you need (which I used), and CreateFromConfiguration(), which does the same thing from a configuration source if you need something more complex.

Once I had created the default configuration and simply changed the key - hey presto, it all worked.
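Concretely, the working version looks something like this (the GUID is a placeholder for the shared bucket's instrumentation key):

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

// CreateDefault() copies the active configuration - telemetry channel
// included - so overriding the key here leaves the app-wide
// TelemetryConfiguration.Active untouched.
var config = TelemetryConfiguration.CreateDefault();
config.InstrumentationKey = "00000000-0000-0000-0000-000000000000";

var client = new TelemetryClient(config);
client.TrackEvent("SharedLibraryEvent"); // goes to the shared bucket
```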

Tuesday, 28 June 2016

So we have an Azure SQL database that we have previously downloaded onto a test server every night, both as a backup and as a source of metrics and a test copy of the database. Apart from a few hiccups over the years (mostly when I updated the Azure instance and didn't check the backup process), it works OK.

The backup is a .Net exe which runs in Task Scheduler and calls an old SOAP web service to create the bacpac and store it in Blob storage. The bacpac is then downloaded and I call SqlPackage.exe to restore it onto a local instance of SQL Server.
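The restore end of that is essentially one SqlPackage call, roughly like this (the paths, server and database names are placeholders, and the SqlPackage location depends on which version of the DAC framework is installed):

```csharp
using System.Diagnostics;

class RestoreBacpac
{
    static void Main()
    {
        // /a:Import creates a new database on the target server from the bacpac.
        var psi = new ProcessStartInfo
        {
            FileName = @"C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\SqlPackage.exe",
            Arguments = @"/a:Import /sf:""C:\Backups\database.bacpac"" /tsn:localhost /tdn:TestCopy",
            UseShellExecute = false
        };

        using (var process = Process.Start(psi))
        {
            process.WaitForExit();
        }
    }
}
```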

Then the other day, we created a contained user - a user that is linked to the database rather than to a server login, which allows geo-replication to work without screwing up permissions during the copying (since contained users are really a security ID, not just a display name). This all worked online but, again, it broke the backup process - or, specifically, the database restore process.

Long story short, the web service we were using to create the bacpac does not add the "Containment" setting to the output. This might be because Azure seems to cheat on this setting and has it enabled automatically regardless of what settings your database has. I tried to use their new service API and I just couldn't get it to work. I then looked at the documentation for their .Net libraries, which was woefully lacking in guidance. I decided, for now, to manually modify the bacpac to at least get a working database.

The bacpac is just a zip file with another name, so by renaming it from .bacpac to .zip you can extract the files to make them easier to work with. I then edited model.xml and added the missing containment setting at the top, where the other settings are. This actually breaks the package, because there is a checksum in origin.xml.

Fortunately, there is a program called dacchksum, hosted on GitHub, which calculates the new hash for you. The idea is to put all the files back into the bacpac and run the program against it, which will tell you both the checksum currently in the file and what it should be. You then copy the correct value, replace the current value in origin.xml, and package it up again.

IMPORTANT: I had a problem with the app telling me the package was invalid, and the GitHub source doesn't build, so I couldn't debug it. The problem is caused by the way Windows sends files to a compressed folder. If you right-click an entire folder, you get a zip file of that name, but the zip has a root folder inside it above the child files. In other words, to make a valid package you must select the files themselves (model.xml, origin.xml etc.) so they sit at the root level of the archive, send those to the compressed folder, and then rename the result to what you want.
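If you would rather not fight the compressed-folder behaviour at all, the unpack/repack can be scripted. A sketch using System.IO.Compression (the file names are illustrative) - the crucial argument is includeBaseDirectory: false, which keeps model.xml and origin.xml at the root of the archive:

```csharp
using System.IO.Compression;

class RepackBacpac
{
    static void Main()
    {
        // A bacpac is just a zip with a different extension.
        ZipFile.ExtractToDirectory("database.bacpac", "unpacked");

        // ...edit unpacked\model.xml by hand, then fix the checksum in
        // unpacked\origin.xml using dacchksum...

        // includeBaseDirectory: false puts the files at the root of the
        // archive; true would wrap them in an "unpacked" folder and the
        // package would be rejected as invalid.
        ZipFile.CreateFromDirectory("unpacked", "database-fixed.bacpac",
            CompressionLevel.Optimal, includeBaseDirectory: false);
    }
}
```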

I did this and then imported the bacpac manually, which looked like it worked OK, but it did not correctly import the password for the Azure contained user. I checked the model and it does have some encrypted password information, but for reasons I have yet to work out this doesn't work - possibly because it uses a server key for security reasons, I don't know.

So be careful anyway, these portable users are not obviously portable - although the geo-replication seems to work OK!

Monday, 27 June 2016

Today has been a day when I have implemented precisely nothing and have spent the whole time fault-finding and fixing two problems. I have only managed to fix one of them, but that is all too common in software, especially when you manage several different parts of the same system.

1) The web app errors
I came in to an email saying that our site was not working. Although it seemed to be working for me, the error emails (very useful, highly recommended!) told a different story. Part of our web service is supposed to talk to a read-only replica of our database as part of our move towards higher bandwidth on our site; obviously, only read-only procs are supposed to use this connection, otherwise you get errors (which we had). I had seen this before and it seemed weird, since the code "clearly" showed that these methods were not calling the read-only replica and so should not have produced the errors. After probably too long, I noticed that my lazy loading was "very lazy": I had reused the same variable between the writable and the read-only databases, which meant whichever one was called first became the connection, and every subsequent database call would try and use it. Bah. But not too bad - at least it was an easy fix.
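A simplified sketch of the bug and the fix (class and member names are mine, not the production code; the connection strings are placeholders):

```csharp
using System.Data.SqlClient;

// Buggy: one lazily-initialised field shared by both properties, so
// whichever property is read first becomes THE connection for everything.
class DataAccessBuggy
{
    private SqlConnection _conn;

    public SqlConnection Writable =>
        _conn ?? (_conn = new SqlConnection("Server=primary;..."));

    public SqlConnection ReadOnly =>
        _conn ?? (_conn = new SqlConnection("Server=replica;..."));
}

// Fixed: each connection gets its own backing field, so writes and
// reads can never end up on the wrong server.
class DataAccessFixed
{
    private SqlConnection _writableConn;
    private SqlConnection _readOnlyConn;

    public SqlConnection Writable =>
        _writableConn ?? (_writableConn = new SqlConnection("Server=primary;..."));

    public SqlConnection ReadOnly =>
        _readOnlyConn ?? (_readOnlyConn = new SqlConnection("Server=replica;..."));
}
```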

Except I had stepped onto the slippery slope

2) The updated project
Since this site was last deployed, we have been moving some of our projects from Subversion to Visual Studio Team Services (online), which will hopefully track and streamline our deployments going forwards. This meant that some of the projects referenced in my solution were now referenced as NuGet packages, which is much neater and pretty sweet. What I hadn't realised was quite how bad some of my old library references were. Not only was I referencing projects, but also dlls inside other projects on disk, and even some really weird references like something that used to live in my Downloads folder. The new online projects were good because they were building online without any nasty physical references, but not all missing libraries cause a build failure. My web service had already been updated, and I rather optimistically deployed my fix with the project reference changes - and it failed online.

You would think the library loading errors would be more useful by now, but nope: "Cannot load Security.Cryptography or one of its dependencies, a strong named assembly is required". Well, my calling library is strong-named - I checked. Security.Cryptography doesn't seem to have any non-system dependencies, so what on earth is going on? Trying to roll the project back was a pain, since I had deleted the original folders once the VSTS projects had built - I hadn't changed the code and didn't need the old copies any more.

About two hours of fiddling, rebuilding, debugging, enabling remote desktop and general faff later, I narrowed it down to one library and restored the old project bindings just to get things working. So now would be a good time to find out exactly what is wrong using a local test server, and fix all the dependencies.

3) The broken web publish in Visual Studio 2015
The NuGet package system requires Visual Studio 2015 for some reason, so I opened my project in VS2015 (it was 2013 originally), built it and clicked "Publish". Bang - VS crashed. Followed some online suggestions. Clean. Crash. Delete profiles and recreate. Crash. Save the profile before deployment, choose Publish again, crash. Debug in another Visual Studio instance and there is some dirty error about a missing method in the publish mechanism. Like most Visual Studio searches, there are about a trillion forum questions about everything and about 100 different answers which might or might not work, including loads of "this worked for me". Still nothing. In the end, I had to use VS2013 to publish it - I didn't have any more will to try and get help.

The slope was still slippery though

4) The case of the missing database tables
I have a task that runs daily to back up our live database, download it to a test server and install the most recent copy as a test version of the database. It also runs some statistics processing - number of users etc. - but at some point recently it had obviously stopped working: the database was being created, but none of the data was being loaded from the bacpac.

The error log was hopeless - just another generic SQL Server error. I have no idea why it couldn't have logged the actual (and very simple) SQL error that occurred when SqlPackage was run: "You can only create a user with password in a contained database".

Of course you can - and our database is contained, which is why it contains the contained user in the first place, something we learned the hard way you need for geo-replication. Somehow the bacpac file did not include the containment setting, which caused the failure. Head is hurting now.

5) How to export the bacpac with the correct option
A Google search revealed a few other people had the same issue, and a lovely comment from MS that since it only affects contained users, it only affects a few people and would be low priority. Anyway, it was allegedly fixed according to the Connect issue although, very lazily, there were no details about when/how/where it was fixed. Clearly the fix didn't make it into the horrible old SOAP web service method of database backups we were using.

I clearly needed to use a newer method, which would be much more likely to work correctly. There are two ways you can allegedly work with SQL databases from .Net (three if you count the "classic" method): 1) the REST API and 2) the .Net libraries. Both techniques are woefully under-documented. I spent about two hours trying to get the REST API to work, since the current app uses an API-style syntax. Nothing. I eventually managed to get a 401 with the detail that "the Authorization header is not present". Really interesting, since it is COMPLETELY absent from the API documentation - not just which credentials are supposed to be used, but even that the header is required at all! OK, I'll assume the response is correct and I need an Authorization header.

Whoops, the slope is still descending

6) HttpWebRequest and authentication
The HttpWebRequest class in .Net is often used for API connections. It appears pretty obvious to use (although several people online recommended some better third-party alternatives). So what do you do? You set request.Credentials. OK, I'll try that. Nope. PreAuthenticate = true? Nope - that doesn't pre-authenticate, it only caches credentials for subsequent calls to the same URL. Now, what normally happens is that if a protected resource is requested, the first response is a 401 (Unauthorized), which MUST include a header saying which auth mechanisms are acceptable. The client then chooses one, sends it back in the correct format, and the second request returns the correct response if authentication is successful. Except it wasn't working. Does HttpWebRequest do the two calls automatically under the covers, or do I need to catch the first 401 and call again? I couldn't find an answer, which makes me think it should. Except it wasn't - or at least I was still getting a 401. Then I saw something weird about the web client not handling the 401 if it comes back in JSON format rather than plain HTML.
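For what it's worth, request.Credentials only ever kicks in when the 401 carries a WWW-Authenticate challenge the framework understands (Basic, Digest, NTLM, Negotiate) - HttpWebRequest then retries automatically. If the API just wants a token, you have to set the Authorization header yourself, up front. A sketch (the URL and token are placeholders, not the real Azure endpoint):

```csharp
using System;
using System.IO;
using System.Net;

class ApiCall
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("https://example.com/api/resource");

        // No challenge-response dance here: the header goes on the first
        // (and only) request.
        request.Headers[HttpRequestHeader.Authorization] = "Bearer <token>";

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}
```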

I'm tumbling down now, the top of the slope is far away

7) Fiddler problems
Fiddler is a great tool, but it is getting harder to use now with SSL sites. Although Fiddler should be able to decrypt SSL, it didn't work with this API request - the API connection just treated the trust as broken, even though I had trusted Fiddler's root certificate. So I can't even see what is happening - whether there are two requests, or whether something else weird is going on. Maybe the MS API is not returning the headers correctly so that the client can see them.

This was feeling like a real dead end so perhaps the .net libraries would be an easier way to perform the same job.

8) .net documentation is like a Brutalist housing estate
Most of you who program in .Net will have experienced the sinking feeling amongst the vast, vast reams of .Net documentation: the lack of examples, the lack of complete documentation, the lack of documentation of possible return values and errors, even basics like the specific format of a parameter rather than something useless like "the azure server name".

The documentation for .Net SQL Azure access is appalling. It is a vast list of auto-generated pages: no guidance on the front page, no list of basic tasks, just hundreds of classes. Basically unusable.

Then I tried finding the front page of the Azure documentation and, again, nothing under "Develop", nothing under "Manage", and no real thought given to moving away from these useless abstract verbs towards more concrete groupings that would let me find what I want.

9) I am dead
My system doesn't work and I have exhausted all my avenues for now, so unless I get some help on Stack Overflow, I might have to download the backups and not process them until I can afford for someone on my team to spend a week getting it working again.

Thursday, 9 June 2016

Wow, another one of those jobs which seems impossible until it works - and then it seems easy! It is also another reminder that the internet is too full of old, outdated information, poor answers to questions and anecdotal answers of the form "this random thing solved it for me...".

Anyway, I am attempting to set up an IPSEC VPN on my Netgear FVS318N, a wireless VPN router which is pretty cool feature-wise. I have the additional pain that it sits behind another DSL router, since the Netgear does not support VDSL, so first I had to set up port forwarding from one to the other. I then followed this guide to set up the IPSEC VPN: https://www.shrew.net/support/Howto_Netgear

On the suggestion of a colleague, I could test the work setup from work by using my phone as a mobile hotspot!

The first problem was that on Windows 10 there is a problem with DNE and something or other that stops the VPN client (Shrew Soft) from connecting - instead it times out and shows "disconnected from key daemon". I followed the instructions here to run a Cisco cleaner and the DNE setup: http://www.ruudborst.nl/shrewsoft-vpn-filter-blocks-traffic-on-windows-10/ - the Cisco one works for Shrew Soft. I think this only applies to Windows 10, but some people reported problems on Windows 8.

I then connected the VPN and it showed "tunnel established", but I couldn't ping any of the internal hosts on my company network. What is really confusing is that none of the guides seem to explain which IP addresses need to match the internal network and which need to be completely different, so here goes:

Mode Config
First Pool = A UNIQUE range of IP addresses that should NOT match the internal network. These will be the VPN client IP addresses.
WINS/DNS Server = This should MATCH your actual DNS/WINS server (at least I think so!) - they don't appear to be virtual IP addresses.
Local IP Address = Should MATCH the range of the INTERNAL network - usually the first three octets followed by a zero (i.e. the network address)
Local Subnet Mask = Should MATCH the subnet mask of the INTERNAL network, usually 255.255.255.0

Client Setup
Policy Tab = You must have a topology entry that routes your INTERNAL network IP addresses through the VPN tunnel rather than sending them through your default gateway, which is likely to be your main internet connection. In other words, this entry must match the Local IP Address/Subnet specified in the Mode Config setup. If you have multiple internal LANs (VLANs), you can add multiple entries here to route all of them through the tunnel to the VPN router.

Monday, 6 June 2016

Disclaimer: I am not an expert, professional or politician - just a developer having a rant!

Introduction

The British referendum on whether to remain a member of the EU has caused a lot of heated debate. Well, I say debate, but much of it has descended into a plethora of Straw Man and ad hominem fallacies. I use these big words to point out that these are well-known and well-documented logical fallacies, which have nevertheless done nothing to restrain the various supposedly educated or otherwise tongue-in-cheek arguments on either side.

A Straw Man argument means you disprove an irrelevant point while thinking you have disproved the main question. For instance, saying that we should leave the EU because you hate the European Court of Human Rights is a Straw Man, because the ECHR is not part of the EU but of the Council of Europe.

An ad hominem argument essentially says that an argument is discredited because, for some reason, you don't like or respect the person making it, or you don't believe they are qualified to make it. For example, we take something strange or questionable from Boris Johnson's life and then imply that he cannot be trusted on his views of Europe. Clearly, whether or not someone is qualified on a subject is not proof that their views are partially or entirely false (or true). Somebody said recently (I can't remember who), when asked "So you trust Boris Johnson, Michael Gove etc. to know what's right?", "I don't have to trust them, but I trust the truth of what they have said".

How do we change the EU?

Anyway, I have already largely digressed. I wanted to talk about an issue that seems to have gone largely unnoticed and certainly undiscussed: the claim that "if we remain in the EU, we can help change/improve it, whereas if we leave we cannot". Let us set aside the fact that we could of course have some influence from the outside, just as America has influence on us, and look at the other side of it. An idealistic statement like this sounds too good to argue with, right? Who could disagree with staying and improving it? There is just one possible problem.

How is that even possible?

The EU is not a complete monster. Some people might think it is, but most people think it is basically a large and expensive bureaucracy whose usefulness sits somewhere between "very" and "not very much". Clearly, there are things it has done and will do that are good, and some that are not. In order to improve it, though, we have to identify what is bad and then decide how to improve it.

Sovereignty

This is a biggie for lots of people - the idea of being self-governing. The Remain camp have taken pains to point out that not many of our laws come from Brussels, but even the most conservative estimates put this at a significant figure. It is one thing to be given laws that you should probably have implemented but never got round to, like harmonising technical standards and some environmental measures; but the fishing and farming quotas have been such a protectionist mess that some of the money we give to the EU gets given back to help the farmers (eh?). Now the EU, by definition, cannot exist without taking sovereignty away, since it is a federal system of rule. If it were not, it would be somewhere between a free trade area and a discussion forum. It must have authority over national governments to do its job of making decisions for the common good.

Verdict: Unchangeable

Influence

Influence is another of those things that sounds reasonable, but what does it really mean? Each country gets to choose one Commissioner (or rather its government does) regardless of size or prosperity; the Parliament, on the other hand, gives a seat per roughly a million people. Wow - one representative for a million people! Now, the Parliament cannot instigate or repeal legislation, so it is basically a rubber-stamp process. In fact, losing all 70 of the 70 times we voted against legislation proves how ineffective the process is - too many people who believe in the EU and will not countenance the idea that some legislation is bad.

That means we have about a 15th of the influence in the Parliament (which is split across parties anyway) and a 28th of the Commission. That is exactly how much influence we have - and how would that ever change? Do we ask for two Commissioners because we are wealthy, or just because we are awkward? Why would we ever be given more than the proportional share that already exists? I can't imagine any other workable system; we are stuck with what we have.

Verdict: Unchangeable

Waste

One of the big criticisms is waste, and that comes in two forms. First, the waste of the system itself: its 60 buildings in Brussels, buildings in Strasbourg and Luxembourg, and the famous gravy train where thousands of employees earn more than the UK Prime Minister! They pay a fixed flat rate of tax at 21% that sits outside any national system, which means that no-one inside the system is likely to vote for change and no-one outside can force them to. Can you imagine what would happen if the UK said, "we're leaving if you don't cut salaries and overheads"? Well, they wouldn't say it, and nothing would happen if they did. Can you reduce things in general? No, I can't see how. It is, after all, a bureaucracy; it believes in an outdated system of paperwork and regulation of every area of life. Have you seen the massive boxes of paper outside each office in the EU building? Basically, we pay massive amounts for people who love paperwork - and there is no obvious way to reduce it.

The other type of waste is what the EU spends money on, but that is, of course, highly subjective. Spending umpteen million or billion on farming is no doubt welcomed by farmers but decried by other struggling industries. Spending money on art facilities and museums etc. is dubious, since these should in my opinion be for national governments to prioritise, not for the EU to effectively ring-fence when other spending is not. It also has that bad smell of cronyism, where you pay money to people who will then naturally support you. Again, it is good to give money to NGOs, but are we really comfortable with the EU deciding which groups get how much on our behalf? Isn't the point that we should believe in these charities so we can support them directly? And who knows what lobbying costs the smaller charities who miss out on their slice of the pie because it is easier to make large payments to large organisations?

Like all bureaucracies, it will spend whatever money it is given and, again, I don't see how that would change, other than by trying to convince them that our priorities are their priorities - and, to reiterate, why would we ever expect to succeed as a 28th of the system?

Verdict: Unchangeable

Conclusion

I honestly can't think of any way in which we could "stay and improve" the EU apart from some tinkering; I think expecting more than that is naive. Comments like "we should just try it" or "we won't know until we try" are equally naive, considering that for the past 30 years we have been in the EU and trying to change it to suit our own ends. As I said before, why should we expect to? We cannot have it both ways: we either accept the EU for what it is and accept that it cannot be changed, or we leave it and try to establish relationships that are relevant for us in a modern world - relationships which don't have to rely on over-regulation but instead allow people the freedom to be creative, to invent new things and new working practices, and to let the innovative and efficient uproot incumbents who are lazily trying to live off past glories.

The only other option would be to return the EU to its free-trade roots, but I honestly can't see that happening. It looks a bit like the Labour vote in the 2010 General Election. Why would so many people vote for Labour after they had mismanaged the money? Because they were paid off by generous welfare payments and would never vote to lose them - who would? Apart from a few very affluent countries, many in the EU are struggling and the EU, at least, seems to be their saviour. Why would they ever vote to leave a system that gives them free money for pushing some paper around?

About Me

I work for PixelPin, where I am in charge of all development for the company - mostly .Net web applications, but also PHP, Android and iOS programming, as well as managing our hardware and cloud-based systems.

I live in Cheltenham, Gloucestershire in the UK which is lovely in the summer and miserable in the winter.