Have you ever faced an issue in production, but had no idea what was going on?

A form is not saving everything, or an expected workflow is not triggering?

'It's impossible!', you say, 'It just can't do that!'. Or worse yet, '...are you sure we're looking at the right part of the system?'.

Well, it's happening, in production too, and you need to get it fixed ASAP.

You have seemingly endless possibilities of what the problem could be; if only you could debug it to really see what is happening!

This is exactly the situation where you need a solid logging strategy!

If you have to debug, you're not logging enough!

Not every problem appears as an error

It's obvious but worth emphasising - not every bug shows up as an error.

Sometimes it's that data just doesn't flow as you expect.

Isolating and fixing where those types of issues are occurring is where good logging can save you valuable time!

When you do have an error, you get a stack trace.

Sure, from the stack trace you know where an error is happening, but often that is not enough.

What else was that thread doing before it got the error?

What parameters was it working with?

Which guard clauses were being triggered?

A strategy for good logging

Therefore, you need to have a solid approach for logging.

Fundamentally, your logs should tell a story. They need to paint as clear a picture as possible as to what is going on.

You can't and shouldn't log everything.

However, if you log the important bits, when you're armed with nothing but a log to tackle a production issue, you'll thank yourself later!

Your logs should:

Provide enough information to solve a problem

Be able to target a specific section of code

Not affect performance of the entire system

Logger per class

Depending on what your code does, it's pretty safe to say that you don't want to turn on diagnostic logging across the entire code base.

Especially in production, the sheer quantity of log entries would be overwhelming, let alone the performance hit of writing them all.

So you want to be able to isolate parts of the system and log extra detail in specific areas.

This can be done by creating a logger per class. For example, using Common.Logging .NET you can do the following:

private static readonly ILog _log = LogManager.GetLogger(typeof(MyClass));

This gives you the option of selectively configuring the logging level at the namespace or class level.

Need to know what's happening in your EmailPublisher? Turn on DEBUG logging for just that class!
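As a sketch, assuming Common.Logging is sitting on top of log4net (the MyApp.Messaging.EmailPublisher logger name is purely illustrative), the per-class override might look like this:

<log4net>
  <!-- appender definitions omitted for brevity -->
  <root>
    <level value="INFO" />
    <appender-ref ref="FileAppender" />
  </root>
  <!-- everything logs at INFO, but this one class gets full DEBUG detail -->
  <logger name="MyApp.Messaging.EmailPublisher">
    <level value="DEBUG" />
  </logger>
</log4net>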

Use the correct logging level

The logging levels are different per logging framework, but the general guide of what to log at each level is the same:

DEBUG

Use to log information you'd want to know as a developer. I like to think of this as the "diagnostic" level - what would I need to know if I were investigating an issue?

INFO

Use to log the general business flow of the application. It should tell the higher level story of what the application is doing, rather than how (that's DEBUG).

WARNING

Use when your code recovers from an error or an unexpected situation, but continues to process. For example, if you handle an exception in a try ... catch, your code is doing so because it can carry on. Log it as a WARNING so you know it happened!

ERROR

Use when the application must stop execution because it can't carry on or automatically recover. Typically this will be when a thrown exception is unhandled until the very top layer of your app.
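To make the levels concrete, here's a minimal sketch using Common.Logging's ILog API (the order-processing names - the payment gateway, the retry queue and so on - are illustrative, not from a real app):

public void ProcessOrder(Order order)
{
    _log.InfoFormat("Processing order {0}", order.Id);   // INFO: business flow
    _log.DebugFormat("Order payload: {0}", order);       // DEBUG: diagnostic detail

    try
    {
        _paymentGateway.Charge(order);
    }
    catch (PaymentDeclinedException ex)
    {
        // WARNING: we've recovered and can carry on
        _log.Warn("Payment declined; order queued for retry", ex);
        _retryQueue.Enqueue(order);
    }

    // anything we can't handle bubbles up and gets logged as ERROR
    // at the very top layer of the app
}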

When creating a prototype or demonstration of a new app idea, you don't want to get bogged down in the backend. All your focus needs to be on the user interface, so how can you mock out the backend yet still make the app as functional as possible?

When working with Angular, you're typically dealing with a REST API. Hardcoding data or faking responses can make your app feel rigid and limited.

So what you want is a real API, a dynamic API, just without the fuss of making one from scratch.

If you're working with Azure SQL, however, CDC and Change Tracking are not supported, so they are out.

Temporal Tables are a new addition to SQL Server 2016 and are supported by Azure SQL. Temporal Tables allow SQL Server to take care of putting data into your designated history table as and when a record is altered.

There are however a couple of stipulations about what a Temporal Table's structure should be.

Any tracked table must have period start and end date columns on it. When working with an existing schema, this didn't appeal.

You must also maintain the history table structure yourself, and it must mirror the attributes of the source table. Especially when using SQL Server Data Tools (SSDT), this can be a real pain point.
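To illustrate those stipulations, here's a sketch of the DDL involved (the column names are illustrative):

CREATE TABLE crm.Customer
(
    CustomerId INT NOT NULL PRIMARY KEY,
    FirstName NVARCHAR(100) NOT NULL,
    LastName NVARCHAR(100) NOT NULL,
    -- the mandatory period start/end columns
    SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEndTime DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
-- the history table must mirror crm.Customer's columns,
-- and keeping the two in sync is on you
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = crm.CustomerHistory));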

Were those the only options I had?

What about using SQL Server JSON?

There is another option, though it is a bit more of a manual effort. SQL Server has introduced support for JSON data.

Consider the following:

SELECT
    FirstName,
    LastName
FROM
    crm.Customer
FOR JSON AUTO

Running this will result in a JSON string similar to:

[
    {
        "FirstName": "John",
        "LastName": "Smith"
    }
]

Wow, great, you say (with a hint of sarcasm). Well, what if we took advantage of this in a trigger on a table, and simply used it to serialise the record being changed?

Ah, now we're talking!

Change tracking using JSON

Within a trigger, we have access to two special tables: DELETED and INSERTED. These tables hold the before and after state of the record that is being changed. So, if we serialise and store those as JSON data in a table, we can track changes over time.

Let's get to an example. Consider that we have a Customer table and we want to start tracking the data changes that are made.

We'll need a history table to store the before and after state of the table. Something like this will do:
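The original script isn't reproduced here, so treat this as a sketch: a history table holding the before and after state as JSON, plus a trigger that populates it (the names are illustrative):

CREATE TABLE crm.CustomerHistory
(
    CustomerHistoryId INT IDENTITY(1,1) PRIMARY KEY,
    ChangedAtUtc DATETIME2 NOT NULL DEFAULT (SYSUTCDATETIME()),
    OldValues NVARCHAR(MAX) NULL,  -- serialised DELETED rows
    NewValues NVARCHAR(MAX) NULL   -- serialised INSERTED rows
);
GO

CREATE TRIGGER crm.Customer_TrackChanges
ON crm.Customer
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- serialise the before and after state of the change as JSON
    INSERT INTO crm.CustomerHistory (OldValues, NewValues)
    VALUES
    (
        (SELECT * FROM DELETED FOR JSON AUTO),
        (SELECT * FROM INSERTED FOR JSON AUTO)
    );
END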

Running some updates and then querying the history table gives us the result shown below. Note that we haven't added a CustomerNumber field to the table yet, so none of the JSON will have that as a property. This is highlighted by the SomeOtherField column I also added.

If we now add the CustomerNumber attribute to the table and update a record, the most recent history record's JSON will include a CustomerNumber property.

Re-running the query gives the following result:

To me, that is simply awesome! It means I don't have to worry about how my schema changes may affect the history records as and when they happen.

That's it in a nutshell

It's a simple approach, if a little manual at first glance.

The advantages that JSON provides us here are huge:

It works nicely with SQL Server Data Tools (SSDT) as you simply modify the schema as normal.

No schema maintenance on the history table as new attributes are added to the source table. In fact, with a little more effort, the same "history" table can be used for all tables should you wish.

No changes are needed to the table structure of tracked tables; each one just needs a trigger.

We're not limited in how we query the history data; we can always get it out and query it if we need to.

The code above is just for demonstration purposes, but hopefully you can see how it could be adapted to your situation.

SQL Sequences have been a great addition to SQL Server. However, when using them with SQL Server Data Tools (SSDT), I hit a bit of a snag.

My sequence was to be used for incrementing invoice numbers. Previously, the invoice number had used the row's IDENTITY column. I know, not ideal, but it served the immediate purpose of creating incrementing invoice numbers with little overhead.

On first deployment to a database, this all worked great. Job done, or so I thought.

When deploying to the database again, SSDT reset [finances].[InvoiceNumberSequence] to start at 1 again.

So why did SSDT reset the Sequence?

In SSDT, my sequence has to be a CREATE script. The deployment process checks for differences between the sequence object it would generate from the CREATE script and what is already there in the target database.

The use of RESTART WITH actually changes the definition of the Sequence, which SSDT sees as a difference, saying, "Hey, the InvoiceNumberSequence is different, I should update it to match the definition I have".

And so the sequence is summarily reset to 1. Not ideal to say the least.

Using sp_sequence_get_range to fix the Sequence

So what sp_sequence_get_range does for us is actually 'consume' the sequence - i.e. set the next value of the Sequence to a value we want, without changing its definition, by 'using' some values from it:
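The original script isn't shown, so here's a sketch of how that can look in a post-deployment script (the finances.Invoice table and its InvoiceNumber column are assumptions):

DECLARE @maxInvoice BIGINT =
    (SELECT ISNULL(MAX(InvoiceNumber), 0) FROM finances.Invoice);
DECLARE @current BIGINT =
    (SELECT CONVERT(BIGINT, current_value)
     FROM sys.sequences
     WHERE schema_id = SCHEMA_ID('finances')
       AND name = 'InvoiceNumberSequence');

IF @maxInvoice > @current
BEGIN
    DECLARE @first SQL_VARIANT;
    DECLARE @rangeSize BIGINT = @maxInvoice - @current;

    -- 'use up' enough values that the next one issued is past the
    -- current max, without ever altering the sequence's definition
    EXEC sys.sp_sequence_get_range
        @sequence_name = N'[finances].[InvoiceNumberSequence]',
        @range_size = @rangeSize,
        @range_first_value = @first OUTPUT;
END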

Can't you just download some JSON from the server?

The Angular startup process uses Providers to set up common settings across services. In a config block like the sketch below, the dataServiceProvider will ensure that all dataService objects have the correct API url.
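(An illustrative reconstruction - the original snippet isn't shown, and the setApiUrl method on the provider is an assumption.)

angular.module('app', ['app.data'])
    .config(['dataServiceProvider', function (dataServiceProvider) {
        // every dataService instance will now point at the right API
        dataServiceProvider.setApiUrl('https://example.com/api/');
    }]);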

However, fetching the settings from the server (essentially using $http) is an asynchronous call. Angular's config() phase will continue on and, as far as Angular is concerned, the app is now fully configured and ready to go.

But our XHR request to get the configuration may not have responded in time. The app isn't ready to start!

We need a Provider, yes, but it needs to be there at application start.

The solution then, is to create a Provider server-side, which is pre-cooked with the configuration settings we want.

Dynamically creating a Provider server-side

I generally use ASP.NET on the back-end, so this is an example implementation in the .NET world.
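The original implementation isn't included here, so this is a minimal sketch of the idea: an MVC action that serialises every client:-prefixed <appSettings> entry into a pre-cooked Angular module (the controller name and route are illustrative):

using System.Configuration;
using System.Linq;
using System.Web.Mvc;
using System.Web.Script.Serialization;

public class ClientConfigController : Controller
{
    // referenced as <script src="/clientconfig"></script> before the app bootstraps
    public ContentResult Index()
    {
        // pick out every <appSettings> key with the client: prefix
        var settings = ConfigurationManager.AppSettings.AllKeys
            .Where(key => key.StartsWith("client:"))
            .ToDictionary(key => key.Substring("client:".Length),
                          key => ConfigurationManager.AppSettings[key]);

        var json = new JavaScriptSerializer().Serialize(settings);

        // emit a 'pre-cooked' module the rest of the app can depend on
        var script = "angular.module('client.config', [])" +
                     ".constant('appConfig', " + json + ");";

        return Content(script, "application/javascript");
    }
}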

The key points that I like are:

Any <appSetting> key with a client: prefix is automatically available to the Angular app. This makes it nice and easy to add new settings as they arise.

There's no duplication of settings; if the server also needs the same setting, it can use it too.

Having a separate module for the config (client.config) allows any module that needs it to depend on it.

With a bit of refactoring, it's a reusable solution that can work with future projects!

When you're charging an hourly rate, you're actually selling hours, regardless of what you think you're selling - software development, training, etc. The trap here is that if you're not putting in the hours, you're not earning.

Is there a better way? What alternatives are there?

That's the question I've been asking recently as I do consulting work through my company. It was then that I came across the concept of value-based pricing.

Value-based pricing

The basic concept is to forget the hourly rate ... don't even think about it or mentally convert to it.

Instead, frame what you are going to charge based on the value you are going to provide to the client.

To the client, it's an investment. To increase product sales by $100k, they would be willing to invest $10k (10%) in new systems to make that happen. $10k becomes the figure you could charge to produce the $100k of value to the client.

The client is not interested in your costs, only the value you can add

The hidden benefit to all this is that both parties are working towards the same goal - creating that value.

With the "selling hours" approach, there's always that conflict of interest when something extra needs doing - the client wants you to do it as fast as possible because, hey, they're paying by the hour. On your side, meanwhile, there's no incentive (other than an ethical one) to do it any faster.

Value-based pricing allows you to focus on providing value to your client

Breaking the Time Barrier

My first introduction to the value-based concept was by the founder of FreshBooks, Mike McDerment, who wrote the book Breaking the Time Barrier. It's a quick read at about an hour, but the message really hits home thanks to its narrative style.

The fable of the dog walker, for example, resonated with me. She could no longer spend enough time walking dogs to be able to afford to continue doing it.

This was the time-and-materials, hourly-rate approach, which meant she was competing on price with other local dog walkers.

There was no other differentiator between her and the competition. Just raising her hourly rate alone would mean she was no longer competitive.

However, by changing her mindset and approaching her business from a more value-based approach, she focused on creating value for her clients. She found that her clients wanted their dogs to have the longest, happiest lives - they were part of their families after all.

That was where the dog walker could add value.

So, instead of just a dog walking service, it was repackaged as looking after the dogs' well-being, training them, arranging vet appointments, selecting the right food and so on.

Everything was now focused on adding value to the client. To the client, it was far easier to justify paying more.

With prices raised, the dog walker was able to continue in business and, thanks to the new focus, grow to employ more people.

They are not paying for your time, they are paying for the value you are providing them.

Ok sure, it's just a story

But it rings true.

Given what we do in the tech industry, we're often too quick to base things on the cost per hour - how long will it take to do this? Ok, that'll be X dollars then.

Our clients don’t care about our costs. They care about the value we create for them, so that’s what we should be asking them to pay for.


Although Angular has an ng-if, there is no direct concept of an ng-else, so you can easily end up writing some nasty conditional logic.
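For example (a minimal sketch, using a hypothetical order.isPaid flag), without an else you end up repeating and negating the same condition by hand:

<!-- no ng-else, so the condition is duplicated and negated manually -->
<div ng-if="order.isPaid">Thanks for your payment!</div>
<div ng-if="!order.isPaid">Payment is still outstanding.</div>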

Entreprogrammers

Be a fly-on-the-wall as Josh Earl, John Sonmez, Derick Bailey, and Charles Max Wood share their experiences as developers and entrepreneurs.

Granted it's a strange name, but give it a chance.

In a word, this one is inspiring! I've gained so much from these guys; they've really opened my eyes to what it takes to be an entrepreneur.

I loathe reality TV, but this is a reality podcast and I can't get enough of it. You're there as they talk through their problems and pitch in with advice and encouragement. It's all very honest and open. Great stuff!

I have a cstLabel directive which takes a numeric identifier and uses it to look up the friendly text to show to the user. So the directive takes a dependency on a lookupService, which can get the data from the server - all nice and straightforward:
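The original source isn't shown, so here's a sketch of that first version (the getText method on lookupService is an assumption):

angular.module('app')
    .directive('cstLabel', ['lookupService', function (lookupService) {
        return {
            restrict: 'E',
            scope: { lookupId: '=' },
            template: '<span>{{ text }}</span>',
            link: function (scope) {
                // every instance makes its own call to the service
                lookupService.getText(scope.lookupId).then(function (text) {
                    scope.text = text;
                });
            }
        };
    }]);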

When multiple instances of this directive are in the page, they each call the lookupService to get the friendly text. When this text comes from the server, that means a whole heap of requests for the same data!

Hang on, aren't services singletons? Can't you cache the data in the service?

Well, you can - in fact, there was caching in place in the app I was working on - but there was a race condition. Each instance of the cstLabel directive calls the same singleton service at pretty much the same time, and with a cold cache each call finds the cache empty and goes to the server for the data. It's only really a problem for a cold cache with lots of data involved, but I wanted to solve it as simply as possible.

The solution was to share data between the directive instances

This was done using another directive that could be required by the original directive using the require property on the DDO (Directive Definition Object). The new cstLabelContainer directive would be the one to go and get the lookups, with each cstLabel instance able to get that data thanks to the require hierarchy:
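Again as a sketch (the getAll method returning a promise of a lookup map is an assumption):

angular.module('app')
    .directive('cstLabelContainer', ['lookupService', function (lookupService) {
        return {
            restrict: 'A',
            controller: function () {
                // one request, shared by every cstLabel inside the container
                this.lookups = lookupService.getAll();
            }
        };
    }])
    .directive('cstLabel', [function () {
        return {
            restrict: 'E',
            require: '^cstLabelContainer',
            scope: { lookupId: '=' },
            template: '<span>{{ text }}</span>',
            link: function (scope, element, attrs, container) {
                // resolve this label's text from the shared data
                container.lookups.then(function (lookups) {
                    scope.text = lookups[scope.lookupId];
                });
            }
        };
    }]);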

This also made the publish faster! Because of the retry mechanism, each query timeout caused another attempt at running it - it would take 5 minutes before the publish aborted and I got the final timeout error. Now, it's done in less than a minute.