Tag Archives: cross.cutting

I’ve started to bring this “spike” to a close since I’ve figured out what I set out to learn using a few Azure services and have reached the point of refactoring. When I get to this point with a spike, I do the things in the task list you see here. I’ll step through the why for each of them.

Refactoring for Interfaces – I tend to build up a few concrete types that don’t need to be added to an IoC container for injection; I try not to abuse Unity even though it’s pretty awesome. Then I find that a member needs to be injected to reduce a bit of coupling across the solution. So I look for these opportunities across the solution, extract an interface, and either add it to the container or do some poor man’s injection without the container – it depends on the context of the type’s usage. More interfaces are generally better, and even marker interfaces serve a purpose, but I try not to go crazy with anything, interfaces included.
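The extract-an-interface plus poor man’s injection move might look like this. The post is about C#/Unity, but here’s a minimal sketch of the same idea in Java; `ReportStore` and `ReportService` are hypothetical names, not types from the actual solution:

```java
import java.util.ArrayList;
import java.util.List;

// Interface extracted from a concrete type so callers can be decoupled.
interface IReportStore {
    void save(String report);
    int count();
}

// The original concrete type now just implements the extracted interface.
class ReportStore implements IReportStore {
    private final List<String> reports = new ArrayList<>();
    @Override public void save(String report) { reports.add(report); }
    @Override public int count() { return reports.size(); }
}

class ReportService {
    private final IReportStore store;

    // "Poor man's injection": the default constructor news up the concrete
    // type, so callers that don't care keep working without a container...
    ReportService() { this(new ReportStore()); }

    // ...while this constructor lets a container (or a test) hand in any
    // IReportStore implementation.
    ReportService(IReportStore store) { this.store = store; }

    void process(String data) { store.save("processed: " + data); }
}
```

Whether the interface then goes into the container or stays constructor-injected by hand is the same judgment call the post describes: it depends on how the type is used.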

Logging – At this point, I know more than I did a few weeks ago and I’ve got a clearer idea of what I want to log. This time I just need to build up the event source class for the app based on what I learned. That’s it. No more, no less.
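The “event source class” idea – one class that owns every event the app can emit, with fixed ids and levels, instead of ad-hoc log calls scattered around – can be sketched like this. This is an illustrative Java analogue using `java.util.logging` (the post’s actual implementation would be a .NET EventSource); the event names and ids are hypothetical:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical central event definitions for the app: every loggable event
// lives here with a stable id and severity, built from what the spike taught
// us we actually need to log.
final class AppEvents {
    private static final Logger LOG = Logger.getLogger("MyApp");

    private AppEvents() {}  // static holder, not meant to be instantiated

    static void messageReceived(String queue) {
        LOG.log(Level.INFO, "[1001] message received on {0}", queue);
    }

    static void messageFailed(String queue, Exception e) {
        LOG.log(Level.SEVERE, "[1002] failed processing on " + queue, e);
    }
}
```

Callers then write `AppEvents.messageReceived("orders")` rather than inventing message strings inline, which keeps the event catalog in one reviewable place.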

Magic Strings and Numbers – This one is special. I litter the application with strings and sometimes numbers, and this is the best time to go back over the entire solution to pull them out into something like constants – that’s what worked for this exercise. I walk through all of the code to see if it still makes sense, especially the bits I haven’t seen in a few weeks. I sometimes forget what I was thinking, and clarifying with a better member or method name is always better than adding comments. And yes, I’ve got a small battery of tests to fire off after each changeset gets checked in.
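Pulling the literals into constants is a small mechanical refactor. A minimal sketch, again in Java with made-up names (`StorageSettings` and its values are hypothetical, not from the actual solution):

```java
// Hypothetical constants class collecting literals that used to be
// scattered through the solution as magic strings and numbers.
final class StorageSettings {
    private StorageSettings() {}

    static final String LOG_CONTAINER = "applogs"; // was a magic string
    static final int MAX_RETRIES = 3;              // was a magic number
    static final int RETRY_DELAY_MS = 500;
}

class Uploader {
    // Call sites now read as intent instead of bare literals.
    String describe() {
        return "uploading to " + StorageSettings.LOG_CONTAINER
             + " with " + StorageSettings.MAX_RETRIES + " retries";
    }
}
```

The payoff is exactly what the post describes: when you reread the code weeks later, the name carries the meaning, and a value that changes gets changed in one place.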

Plumbing for cross-cutting stuff – Now that the logging events are done, I need to plant them in the classes that are doing the work.

Not much here, just some habits I’ve been using over the years to keep solutions clean, readable, and hopefully maintainable.

When we notice our cloud has stopped raining, it’s time to take a look under the hood to see what happened. Or is there a better place to look before we raise the hood? A few questions to ask:

1) Was it something I did?

2) Was it something that happened inside one of the Azure instances?

3) Did the application run out of work?

4) Where can I look to see what was going on when it stopped?

Only you can answer the first question. If all of your tests weren’t passing and you promoted something to a production instance anyway, you might be able to answer this fairly easily.

The second question assumes you can get to your management portal and look at the analytics surfaced by Azure. There might have been, or might be, a problem with one or more of your instances restarting. I’ve never seen either of my instances stay down after a restart unless there was an unhandled exception getting tossed around. Usually I find these problems in the local dev fabric before I promote. Sometimes I don’t, though; on a few occasions, even though my tests were passing, I had missed some critical piece of configuration that my local configuration had and the cloud config was missing. I call this PIBKAC – problem is between keyboard and chair. Usually the analytics are enough to tell you if there were problems, and from there you can fix the configuration if needed, or restart your instances or any other Azure feature you’ve got tied to the application.

The third question is kind of a sunny day scenario where the solution is doing what it’s supposed to in a very performant way. However, sometimes ports can get ignored because of a configuration issue like the one I mentioned earlier. If you’ve been storing your own health monitoring points, you can probably tell whether your application has stopped listening for new requests or simply can’t process anything.

The fourth question is about having something that looks around the instance(s) and captures some of your system health points: how many messages am I receiving and trying to process? How quickly am I processing the incoming messages? Are there any logs that can tell me what was going on when it stopped raining?
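Those health points can be as simple as a few counters the worker updates as it runs, so you can distinguish “out of work” from “stuck.” A minimal sketch in Java (the class and thresholds are hypothetical, not the post’s actual monitoring code):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical health-point tracker: the worker bumps these counters as it
// receives and processes messages, and a watchdog can read them later.
class HealthPoints {
    private final AtomicLong received = new AtomicLong();
    private final AtomicLong processed = new AtomicLong();
    private volatile long lastActivityMillis = System.currentTimeMillis();

    void onReceived()  { received.incrementAndGet(); touch(); }
    void onProcessed() { processed.incrementAndGet(); touch(); }
    private void touch() { lastActivityMillis = System.currentTimeMillis(); }

    // Messages received but not yet processed: growing backlog with no
    // progress suggests "stuck", a zero backlog suggests "out of work".
    long backlog() { return received.get() - processed.get(); }

    // True if nothing has been received or processed for the given window.
    boolean idleFor(long millis) {
        return System.currentTimeMillis() - lastActivityMillis > millis;
    }
}
```

A watchdog that periodically logs `backlog()` and `idleFor(...)` gives you exactly the trail the fourth question asks for: what was going on when it stopped raining.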

I’ve been using Enterprise Library from the PnP team for >6 years and I still love the amount of heavy lifting it does for me. The wire-ups are usually easy and straightforward, and the support behind each library drop is constant and focused. Recently Enterprise Library 6 dropped with a bit of an overhaul to target .NET 4.5, among other things, and here’s a blog post by Soma that discusses a few of the changes at a high level.

I’ve used the Data and Logging Application Blocks, as well as Unity successfully. I had recently started wiring my solution to use the Azure Diagnostics listener to capture some of the diagnostic events, particularly instance restarts from configuration changes. Now, I think/hope I can use the logging application block to wire all of my logging events and push them to something simple like blob or table storage.

I’ve never liked a UI that I have to open up and look through; it makes my eyes tired and it’s annoying – I’d like something a little easier for looking up fatal and critical logs first and then going from there. PowerShell (PS) looks cool and fitting for something like this, and I can probably do something quick and dirty from my desktop to pull down critical, fatal, or warning logs, but I’m not a PS junkie. It would make for an interesting exercise to get some PS on me, though. Oh, on a side note, I picked up this book to (re)start my PS journey and so far it’s been worth the price I paid. Some of the EntLib docs mentioned pushing data to Azure storage, so I may just start there to see if this can work.