Monday, 23 May 2011

Error TSD03006: "has an unresolved reference to object". This pain in the neck occurred after creating a new VSTS database project and adding what I thought was perfect SQL. The error pointed to Database.sqlpermissions and said the role I had granted a permission for did not exist, even though it did!

There were two things wrong in my project, plus one assumption, which made it hard to track down. The fundamental problem was that I had spelt AUTHORIZATION with an S instead of a Z in my role SQL, which made it invalid, but my assumption was that errors are printed in dependency order (i.e. fix the first one to fix the rest). This assumption was wrong and the error for the role was the last one listed!

The other problem was that even if your database properties say your database is case-insensitive, the build environment will only resolve references whose case matches. For example, if you reference a column called id as [Id], the reference will fail.

Wednesday, 18 May 2011

Do you know what DRY means, and do you practise it? How do you code for the unknown? These two disciplines yield the highest possible quality in software production but are often misapplied or not applied at all. If things are left to individuals, the results will be variable at best and absent at worst. If management can instead insist on processes that include these, they will see high-quality software produced.

A friend of mine once told me about a software test that consists of writing a function that takes 3 integers and determines whether the 3 values could form a valid triangle. The test is: how many of the checks can you think of? How many do you think there are? One of them, for instance, is that the values cannot be negative, but there are around 16 tests, which together cover the possible permutations completely. The point here is that coding for things we know about and can think of is easy; coding for those we can't foresee is impossible by definition.
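As a sketch of the kind of checks involved (the names here are mine, not from the original test, and I only show a handful of the checks, not all 16):

```csharp
using System;

public static class TriangleChecker
{
    // Returns true if the three lengths can form a valid triangle.
    public static bool IsValidTriangle(int a, int b, int c)
    {
        // All sides must be positive (covers negative and zero values)
        if (a <= 0 || b <= 0 || c <= 0)
            return false;

        // Triangle inequality: each side must be shorter than the sum
        // of the other two. Cast to long so a + b cannot overflow Int32.
        if ((long)a + b <= c) return false;
        if ((long)b + c <= a) return false;
        if ((long)a + c <= b) return false;

        return true;
    }
}
```

Notice that even this small sketch has to worry about integer overflow in `a + b`, which is exactly the kind of check most people forget.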

So how do we code to cope with the unknown? Unit tests can be helpful, but not in all scenarios, because you have to set up each test to assume a certain set of inputs for an expected output; proving just one or two cases out of millions of possibilities is weak to say the least.

Experience and documentation can be useful. For instance, you know the range of values that an INT can take, although in most cases negative, zero and positive are enough to test a method. What about nullable fields? Do you test for null? What about the maximum values? What happens if you pass Int32.MaxValue into a method which adds 1 to it? Do you know what does, or should, happen? By writing down standard choice values for certain types, you can then build up permutations to use in tests. On this topic, it is also good not to allow fields in an object to be null if they are never allowed to be null in normal use. There is no need to test something that is illegal (unless it is to prove that the validation works).
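To answer the Int32.MaxValue question: in C# the behaviour is something you choose explicitly with the checked and unchecked keywords, and a minimal demonstration makes the difference obvious:

```csharp
using System;

public static class OverflowDemo
{
    public static int AddOneUnchecked(int value)
    {
        // The default for runtime arithmetic: silently wraps around,
        // so int.MaxValue + 1 becomes int.MinValue.
        unchecked { return value + 1; }
    }

    public static int AddOneChecked(int value)
    {
        // Throws System.OverflowException instead of wrapping.
        checked { return value + 1; }
    }
}
```

Calling `AddOneUnchecked(int.MaxValue)` wraps to int.MinValue, while `AddOneChecked(int.MaxValue)` throws an OverflowException, which is why MaxValue belongs in your list of standard test values.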

The other way that works really well for coding for the unknown is keeping methods in small, manageable chunks. You can then put very simple constraints on each one and 'know' that your code is bombproof, since it will fail in an expected way if called with illegal data rather than simply crash. Imagine having something like:

private void SomeMethod(int i, int j)
{
    if (i < 0)
        throw new ArgumentOutOfRangeException("i", "i cannot be less than zero");
    if (j < 0)
        throw new ArgumentOutOfRangeException("j", "j cannot be less than zero");

    // Now we can do the functionality with values we know are valid
}

Another way in which you can help is by using intelligent types. If an integer must always be greater than 0, you could either use an unsigned type or even create your own struct called IntGreaterThanZero or similar, which again throws an exception if you ever try to assign an illegal value to it. In this case, you catch the error much earlier in development and your method could become:

private void SomeMethod(IntGreaterThanZero i, IntGreaterThanZero j)
{
    // Now we know that our method will succeed because the
    // constraint has already been enforced on assignment
}
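A minimal sketch of what such a struct might look like (this is my own illustration, not a standard type; note one limitation of struct constructors: default(IntGreaterThanZero) would still hold zero):

```csharp
using System;

// A value type that can never be assigned a value <= 0 through its
// constructor, so methods taking it need no range check of their own.
public struct IntGreaterThanZero
{
    private readonly int value;

    public IntGreaterThanZero(int value)
    {
        if (value <= 0)
            throw new ArgumentOutOfRangeException("value", "value must be greater than zero");
        this.value = value;
    }

    public int Value { get { return value; } }

    // Allow the struct to be used wherever an int is expected.
    public static implicit operator int(IntGreaterThanZero i)
    {
        return i.value;
    }
}
```

The pay-off is that an illegal value fails loudly at the point of assignment, usually during development, rather than deep inside some method at runtime.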

This brings me to the other point, and that is DRY (don't repeat yourself), which says that if you have to do the same thing more than once, you should probably refactor. I don't mean that you cannot test for something being equal to true in more than one place, but if you are making carbon copies of the same code (or similar) then you are asking for trouble. Consider the following code:
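The original snippet has not survived here, so the following is a reconstruction of the pattern being discussed, with hypothetical names and ServiceCallMethod stubbed out: two methods that each duplicate the same null check after calling the service.

```csharp
using System;

public class ServiceClient
{
    // Hypothetical stand-in for the real remote call.
    private object ServiceCallMethod(string operation)
    {
        return null; // simulate a failed call for illustration
    }

    public object GetCustomer()
    {
        object result = ServiceCallMethod("GetCustomer");
        if (result == null)                      // null check here...
            throw new InvalidOperationException("Service returned null");
        return result;
    }

    public object GetOrder()
    {
        object result = ServiceCallMethod("GetOrder");
        if (result == null)                      // ...and the same check copied here
            throw new InvalidOperationException("Service returned null");
        return result;
    }
}
```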

How many things do you think are wrong with this code? Really, the difference between poor code and good code comes down purely to your ability to code for known issues and to anticipate others that are not obvious.

Firstly, we have two methods that do different things but end up calling the same method, and then both have to check for null and call something else. The problem here? If someone else calls ServiceCallMethod, how can we insist that they check the return value for null? If they don't, the problem could manifest much later in the code and might take several minutes, hours or even days to trace back to a bad service call. The point is that we can push the null check down into ServiceCallMethod, and it then becomes impossible to call the method without the null being checked (this assumes someone doesn't just call the service directly somewhere else, but that still comes back to DRY). This is not the whole story though. What else is missing?

Consider the bigger picture. Just from what you can see and understand in the code, there is one big unknown: what will happen when you call the service. It is not just a case of succeed or fail; it could throw an exception, it could return null or some other populated or semi-populated object, or it could return a whole host of errors related to the network, security or the service itself. Currently none of these is checked or catered for. You might say that these situations should simply allow the program to die, but that is naive and actually a wrong assumption. Leaving exceptions to fly up the stack can either cause other behaviour to fail, leading you to fix the wrong area, or let the exception be masked by a catch statement somewhere which might or might not re-throw. Where exceptions are possible, they should be caught and logged properly; it is then up to you whether you re-throw, retry, show a helpful message to the user ("Sorry, the remote service cannot be contacted") or do something else. You cannot always tell what exceptions are thrown by a method call, so by catching them as close to the call as possible, you let the people calling your method know what to expect and how to deal with it.
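As a sketch of the catch-log-and-rethrow idea (the names and the choice of a single wrapped exception type are mine, not a prescribed pattern):

```csharp
using System;

public class SafeServiceClient
{
    // Wraps any service call so that nulls and exceptions are caught
    // and logged right next to the call, and callers see exactly one
    // documented failure type instead of an unknown set of exceptions.
    public object CallService(Func<object> serviceCall)
    {
        object result;
        try
        {
            result = serviceCall();
        }
        catch (Exception ex)
        {
            Log(ex);
            throw new InvalidOperationException(
                "Sorry, the remote service cannot be contacted", ex);
        }

        if (result == null)
            throw new InvalidOperationException("The service returned null");

        return result;
    }

    private void Log(Exception ex)
    {
        // Stand-in for a real logging framework.
        Console.Error.WriteLine(ex);
    }
}
```

Callers of CallService now only ever have to handle one exception type, and every failure is logged at the point where it happened rather than wherever it eventually surfaces.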

Tuesday, 3 May 2011

Well, I say debacle, but to be fair probably only because you expect someone as large as Sony to have all the best security experts. It again underlines something I have mentioned many times before: the security issues are well known and documented, and have been for a long time; the problem is that most companies do not have the processes or quality control to actually audit these things. People make software updates, buy in third-party software and contract out parts of the system to others whom they cannot vouch for. These are day-to-day realities of IT companies, yet so many organisations are simply too lax with security. At least the data that was stolen (or potentially stolen?) from Sony was encrypted, so it is perhaps unlikely to be cracked, since cracking encryption is extremely difficult if you don't know what encryption method has been used. The problem with credit card numbers is that you know the answer is either numeric or numeric with dashes, so you have a crib to the solution. On the other hand, if you ensure your columns are not called CreditCard and ExpiryDate but something esoteric like CX25 and EX54, and possibly even padded to a fixed length, then people are unlikely to be able to deduce what the data is that they have stolen!

Even basics like locking out accounts that have been accessed too many times, or with multiple incorrect passwords, make it so much harder to brute-force a system.
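A minimal in-memory sketch of such a lockout (all names here are illustrative, and a real system would persist the counts and add a time window before unlocking):

```csharp
using System;
using System.Collections.Generic;

// Locks an account after too many consecutive failed login attempts,
// defeating simple brute-force password guessing.
public class LoginThrottle
{
    private const int MaxAttempts = 5;
    private readonly Dictionary<string, int> failures =
        new Dictionary<string, int>();

    public bool IsLockedOut(string userName)
    {
        int count;
        return failures.TryGetValue(userName, out count)
            && count >= MaxAttempts;
    }

    public void RecordFailure(string userName)
    {
        int count;
        failures.TryGetValue(userName, out count);
        failures[userName] = count + 1;
    }

    public void RecordSuccess(string userName)
    {
        // A successful login resets the counter.
        failures.Remove(userName);
    }
}
```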

I've been using Windows Workflow this year and, to be honest, I really like it. Well, I like it in theory, but there are a few things that are buggy (I am using version 3.5) and which could easily cause me not to use it if I had a choice:

The projects take an age to load and compile. Admittedly this is VS2005 with about 100 projects, but still, it is virtually unusable, and I would imagine that many organisations have solutions larger than this. I have a quad core with 8GB of RAM, so I would expect it to fly.

If you make the name of an activity the same as its class name, it will not tell you that this is invalid. What actually happens is that it creates a new member variable with the new name but keeps the old one, so the designer looks unchanged and the newly created member is left unreferenced. I am not sure why the names can't match.

I have had quite a few problems with VS2005 crashing and shutting down, and this, coupled with the time it takes to load the project, makes it impossible!

The Toolbox takes about two minutes to populate (the first time it is opened) with all the activities in my solution. I tried deleting a load of them from the toolbox so I could just have the ones I need, but for some reason they are all automatically added back in when you build.

For state-machine type systems, however, Workflow is definitely worth the hassle because it reduces the amount of code you would otherwise have to write to produce a pseudo state-machine in normal classes. The designer is pretty usable too. Just make sure you understand dependency properties and the correct levelling of activities, otherwise you will end up with a hotch-potch of non-reusable components that all require code activities everywhere to tie them together.

About Me

I work for PixelPin, where I am in charge of all development for the company, which mostly means .Net web applications but also PHP, Android and iOS programming, as well as managing our hardware and cloud-based systems.

I live in Cheltenham, Gloucestershire in the UK which is lovely in the summer and miserable in the winter.