Tools

First of all I’d like to point out that I was kindly given a license by the folks at NDepend (not very often that sort of thing happens I can assure you!) and I’m under absolutely no obligation to write anything about it.

In the beginning…

The funny thing is that it was probably over a year ago when I first installed the product, without any specific requirement or expectation. I had a little play with it (on Visual Studio 2008 as I recall), then the work I ‘had’ to do overtook my will to learn this new product and it lay gathering dust on my hard drive. This probably explains why I haven’t posted in all that time!

But then…

Recently, I picked up an existing project (on Visual Studio 2010), and wanted to have a good look inside to see what I was getting myself into. I dusted off NDepend and told myself I’d give it a good go this time…

First Impressions

The first thing I learned is that this is one significant addin, and you realistically need to ‘know you need it’ before you get it (see the ‘gathering dust’ comment above). This also means you need to know what it can do for you – which is plenty!

If you’re reading this and thinking of trialling NDepend, then you either have problems to solve or you want to invest in ongoing improvement to your code. Both are very good reasons, as it happens.

NDepend has few limitations in what it can do, as it has your entire codebase, Visual Studio extensibility and its own powerful rules engine at its disposal. It also employs its own CQL (code query language), to allow you to find all sorts of patterns and complexity problems with your code.
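To give a flavour of what CQL looks like (this is an illustrative query of my own, not one from the product's built-in rule set), here's the sort of thing you can ask of your codebase:

```sql
-- Illustrative CQL: flag the ten most complex methods in the codebase
SELECT TOP 10 METHODS
WHERE CyclomaticComplexity > 20
ORDER BY CyclomaticComplexity DESC
```

Queries like this can be saved as rules, so violations show up every time you analyse the solution.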

The biggest problem is knowing where to start, or discovering that first task you want to achieve with it. It’s easy to get overwhelmed by the information it bombards you with when you spin it up.

To be fair, there are plenty of links trying to lead you to ‘what you’re looking at is…’

Reasons to try/buy

If you’re interested in the quality of your code, I believe there really is no equal. This is the tool you need. You may already be using FxCop in your build process to check for certain syntactic rules, and ReSharper for sorting out your code as you go, but NDepend can do all sorts of ‘different’ funky stuff (through CQL) that goes deep into your code to enforce things that would otherwise be difficult to do. It can obviously also do all the simple stuff, like showing you the dependencies between methods, classes and projects, redundant code, etc.

Things to be aware of

It’s a technical tool, and it’s easy to get a little overwhelmed with what it can do and where to start.

Time is needed to understand some of the concepts and power of the product.

You’ll need a beefy machine to avoid things slowing down with the addin loaded (I had to disable it for a while when I was using a solution with 60 projects as I was starting to experience memory issues). If you don’t want to run it in Visual Studio, you can run it in the standalone ‘Visual NDepend’ application.

I’ll admit I haven’t spent a lot of time with the interactive reports, and I don’t find some of the graphical representations of the metrics that easy to use.

I think like most products, you get comfortable with what you see as valuable, and tend to only try other things when you have time.

Summary

Clearly NDepend’s a very impressive tool for any serious development team to be using. It will help you to learn about reducing complexity, dependencies and generally designing your code in an efficient way. It’s basically all about improving quality.

It’s also a big product that’s not for the faint hearted. You basically get out what you put in as far as effort in understanding what it’s trying to achieve for you.

I think the key is finding the right balance between all the technical information it presents, the time you have available, and the business benefit you’ll get from code improvements.

As I said at the start, it can basically take you as far as ‘you’ want to go.

A good while ago I posted a stored proc that would generate insert statements for table data, along with simple filtering capability.

I broke this out again today, as I needed to recreate part of a database on a local machine. I didn’t have knowledge of the schema so I just went about fixing each constraint error and adding the reference tables as required to my script. After manually adding ‘SET IDENTITY_INSERT xx’ about a dozen times I added the functionality to the Stored Proc – so if your table has an identity column it will now wrap the results in IDENTITY_INSERT statements, saving you a bit more time, and headaches.
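To show what that wrapping looks like (the table and column names here are made up for illustration), the proc's output for a table with an identity column is now of this shape:

```sql
-- Hypothetical example of the generated output for a table
-- with an identity column on CustomerID
SET IDENTITY_INSERT dbo.Customer ON

INSERT INTO dbo.Customer (CustomerID, Name) VALUES (1, 'Alice')
INSERT INTO dbo.Customer (CustomerID, Name) VALUES (2, 'Bob')

SET IDENTITY_INSERT dbo.Customer OFF
```

Note that SQL Server only allows IDENTITY_INSERT to be ON for one table at a time, which is another reason to have the proc generate the ON/OFF pair around each table's inserts.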

An interesting side effect if you’re explicitly inserting identity values is that the order of your inserts may become pertinent – especially if you’ve got foreign keys referencing the same table. So I also added the capability to ‘order by’.

There’s no automatic way to switch off column headers, so you’ll need to configure that in the Results Output options in Query Analyzer (or Management Studio).

Many people use Adobe Illustrator or Photoshop for designing Web layouts, but a lot of companies use Visio – especially for higher-level ‘wireframes’. One of Visio’s problems (<= 2003 anyway) is that it doesn’t support pixel units. This means you’ll always be approximating the sizes of content areas and page elements, making the developer’s job more difficult.

Enter ‘Visio Guy’, and the Pixel Unit Line Shape. This is just a great addition for anyone using Visio. Before you know it you’ll be laying out pixel-perfect shapes, which the developers will then take great pleasure in declaring ‘this won’t work in IE6!’ 🙂

A thing to note with Visio as well: If you’re working with pretty small dimensions then the only way to get really precise is to zoom right in (like 500%), otherwise the whole ‘snap/glue’ thing will kick in based on the page units (probably mm), regardless of whether you’ve switched snap/glue off.

Many people (including me) are often still using SQL 2000 – which means that you might be using DTS (Data Transformation Services) to perform utility jobs on your databases. I’ve had a requirement to output data to Excel (not my choice – Excel usually isn’t!) – which is subtly different to text output as CSV*. Another requirement was to have a ‘generated’ file name in a known folder. This is as simple as a date/time-driven file name. The ‘job’ runs once (or more) a day and so creates a new file each time. Easy, right?

Well – it turns out that if you go through the normal motions of creating an Excel connection and using a Transform Data Task it’s likely to work first time and leave you thinking ‘great’. Unfortunately when you run it again you’re likely to get a message similar to the following:

Table ‘New_Table$’ not found.

After much digging (hence the reason for writing this), I discovered that in order to map the columns successfully you basically have to create the table in the spreadsheet before running the transformation (the ‘create table’ part of creating the transformation). If you’ve got a dynamic file name then the next time you’ll effectively have a blank spreadsheet without the required receiving ‘table’, and it will fail, unless you’ve recreated the sheet before the transformation.

OK, so assuming you want to output some data to a spreadsheet with a dynamic file name via a Transform Data task, here’s how you do it.

I put all the files in the same folder, so I use two global variables to build the file name – NetworkLocation and FinalLocation.

First, right-click on a blank part of your package screen and choose Package Properties, then the Global Variables tab.

Create the two global variables NetworkLocation and FinalLocation

Edit the NetworkLocation value to be the folder (including a final ‘\’) of where you want to store the files.

Leave FinalLocation blank (we’ll create that in the next step).

Add an ActiveX Script Task with the following code (whose file name format you can obviously tailor to your needs). It basically creates a date/time-based file name, and stores a concatenation of the network location and the file name in the FinalLocation global variable.
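The original listing isn't reproduced here, so here's a minimal sketch of such a task, assuming the global variable names above and a made-up "Export_" file name prefix:

```vbscript
' Sketch of the ActiveX Script Task: builds a date/time-based
' file name and stores the full path in FinalLocation.
Function Main()
    Dim sFileName
    sFileName = "Export_" & Year(Now) & _
                Right("0" & Month(Now), 2) & _
                Right("0" & Day(Now), 2) & "_" & _
                Right("0" & Hour(Now), 2) & _
                Right("0" & Minute(Now), 2) & ".xls"

    ' Concatenate the folder and file name into the global variable
    ' that the Dynamic Properties Task will read later.
    DTSGlobalVariables("FinalLocation").Value = _
        DTSGlobalVariables("NetworkLocation").Value & sFileName

    Main = DTSTaskExecResult_Success
End Function
```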

In the Destination tab, click ‘Create…’ next to ‘Table name:’ (a dialog will pop up, ‘Create Destination Table’). Change the table name (or column definitions) if you like, then COPY THE ENTIRE TEXT – and paste it into a text editor for safe keeping.

Click OK (this will create a new sheet in the spreadsheet you’ve specified in your connection, ready to receive your data). Your fields should then also be populated in the table below.

Check the Transformations tab – you should get a nice one-to-one mapping between source and destination columns.

Create a Dynamic Properties Task (Here’s where it all comes together)

Click ‘Add…’ to create a new property. Choose your Excel Connection and pick the ‘DataSource’ property, then click ‘Set…’

Choose ‘Global Variable’ as the source and FinalLocation as the variable – Click OK.

Click ‘Add…’ to create another property. Choose the DTSTask_DTSDataPumpTask_? in the Tasks node (your data transformation task) and pick the ‘DestinationObjectName’ property.

Choose ‘Query’ as the source and paste in your ‘CREATE TABLE‘ statement from earlier – Parse to check, then click OK.
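For reference, the statement the ‘Create Destination Table’ dialog generates is of roughly this shape (the sheet and column names here are hypothetical – yours will match your source columns):

```sql
-- Example only: an Excel 'table' is a worksheet, and the
-- DTS-generated statement uses Jet-style backquoted names
CREATE TABLE `New_Table` (
    `CustomerID` Int,
    `Name` VarChar (255),
    `CreatedDate` DateTime
)
```

This is the text you were asked to copy into a text editor earlier – it's reused here so the sheet can be recreated in each new file before the transformation runs.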

The net effect: the package sets the file name of the Excel connection and creates the destination ‘sheet’ from the global variables and the query, then does the transformation and stores the results in a date/time-named Excel spreadsheet in your configured folder.

If you ever need to change the query (columns) then you’ll need to regenerate the ‘CREATE TABLE’ statement and reflect it in the Dynamic Properties Task (5.c)

*Typically an Excel export covers your back with datatypes where a CSV export won’t – so when you open a CSV in Excel you’ll have to format columns specifically yourself – like converting dates, and also long numbers from scientific format to text format.

I’d seen addins that list Visual Studio shortcuts before, but seem to have lost them. I also used to use ReSharper, so memorised its shortcuts. I’m now back to ‘naked’ Visual Studio, so am having to re-learn the standard shortcuts.

I was surprised to find this article on MSDN that has code for a macro to produce a html page of all the shortcuts. Just run the macro, open it up and search in the browser whenever you’re having problems remembering – unless you really want to print it (in which case I’d be inclined to bang a bit of CSS in there 🙂 ). I might improve upon the whole situation if I find myself needing to refer to it more often, and will post updates here.

This had bugged me for a while. A lot of sites (including some of the ones I develop) tend to have a fixed-width layout these days, and some browsers (IE particularly) ‘always’ have a visible scrollbar. This means that the available screen width is constant whether the page scrolls or not.

Firefox, on the other hand (like Chrome/Opera/Safari), seems to have this off by default. This of course seems reasonable until you have a fixed-width, centred layout that ‘shifts’ when you switch from a non-scrolling to a scrolling page. It’s just a bit off-putting.

Fortunately Firefox is configurable and the following will fix that up for you. (I’m sure the other browsers are capable of something similar but I’m not using them much 🙂 )
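The fix itself isn't shown above, so here's a sketch of the approach: add a rule like the following to the userContent.css file in the chrome folder of your Firefox profile (create the file if it doesn't exist), then restart the browser.

```css
/* Force the vertical scrollbar to always be visible,
   so fixed-width centred layouts don't shift between
   scrolling and non-scrolling pages */
html {
  overflow-y: scroll !important;
}
```

If you control the site rather than the browser, the same `overflow-y: scroll` rule on `html` in the site's own stylesheet achieves the same thing for all visitors.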

I really thought I’d posted this before, but here’s a script I use quite often when I need to create a SQL user that needs to just have ‘execute’ access to Stored Procs.

This is usually used for web apps where the server is going through as a single user. I’d encourage anyone to ensure that you always start off with granting the minimum access required and only add more when actually needed. If you’ve got different roles in the application (viewer, administrator etc) then create different roles, with a user for each.

The intention is to remove the need to give direct table access (as we only call Stored Procs right?). The script doesn’t actually do the ‘grants’ for you, but will generate another batch that you can execute if you like. I prefer to put the grants in the script for each proc/function anyway, just so it’s a little easier to maintain, and you can specifically grant access to the ‘admin’ procs to the ‘admin’ role, whilst limiting any ‘normal’ users to the read-only/everyday ones.
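As a hypothetical example (these proc and role names are made up), the generated batch is just a series of statements of this shape, which you can run as-is or fold into each proc's own script:

```sql
-- Grant execute only; the roles never get direct table access.
-- Everyday procs go to the ordinary role...
GRANT EXECUTE ON dbo.Customer_Get TO WebAppRole
GRANT EXECUTE ON dbo.Customer_List TO WebAppRole

-- ...while 'admin' procs are limited to the admin role.
GRANT EXECUTE ON dbo.Customer_Delete TO WebAppAdminRole
```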

There’s satisfaction to be had from an app that connects to the database via an appropriate user, which is in a role that can only do what is necessary for the role, and can’t access tables directly. Not only is it neat, it’s also (relatively) secure.

Here’s the script…

— MSIMNER: This script will generate a role and user for SQL Access to a single database
— The role will have no specific access to read or write tables but will be granted access
— through execution of stored procedures ONLY