Musings about technology and things tangentially related

Archive for April, 2010

I’ve included below my Amazon.com review of the book “Making It Big In Software: Get the Job, Work the Org, Become Great”. I diligently read this book from cover to cover and just couldn’t seem to like it. It became pretty monotonous after a while to go through what felt like a very academic handling of what could have been a very interesting topic. This is in stark contrast to the other book I’m reading now, “Delivering Happiness” by Zappos CEO Tony Hsieh, which is a pragmatic blow-by-blow tale of how someone actually made it big by leveraging technology. My review:

I really wanted to like Sam Lightstone’s book “Making It Big In Software” and read it cover to cover, at times forcing myself to read on. There are some good points in the book, which at its best represents a blend between the interviewing style of “Founders at Work” and the pragmatic advice of “Career Warfare”. Unfortunately, the book is at its best far too infrequently to make it a recommended read.

Aside from lacking any really original advice (most of the insights offered are fairly common knowledge to folks who have spent a couple of years in the software industry), there are several other reasons I probably won’t be referring back to this book very frequently:

The questions were pretty much the same for every interview. That’s great for statistical comparability but really didn’t do much to draw out the stories from the interviewees. At one point, I found myself thumbing to the end of each interview to find out if the “Do you think graduate degrees are professionally valuable?” question was going to be asked again.

An earlier reviewer pointed out the value in the use of personas to illustrate examples. Done correctly, I agree that this is a very powerful technique. However, the software development antics of Moe, Larry, and Curly in this book seemed less like personas and more like an attempt to compensate for the lack of more illustrative examples.

Lots of borrowed material. Much of it from the standard software journeyman’s body of knowledge and some of it from popular authors such as Steven Covey, who seems to be a personal favorite of the author.

A chapter on compensation with salary ranges? C’mon, really? Aside from immediately dating the book, this is information that clearly could have been put out on a website and updated periodically so that the reference doesn’t get immediately stale.

This book may be of slightly more value (3 stars) to someone new to the field of software. I hope I’m not being unduly harsh, but I find it hard to see how folks who have been in the industry for 5–10 years could rate this book 4 or 5 stars.

These tool discussions are also recurring themes on all of the major discussion forums. It seems that every so often one of these questions hits StackOverflow and everyone chimes in with their favorite current tools. Invariably, for the .NET tool lists, there are some tools that always show up, enjoying near-universal advocacy in the .NET developer community. These include tools like Reflector and Fiddler on the free side and ANTS Profiler and ReSharper on the commercial side.

For this blog post, I’ve decided to go with 5 tools you’re not likely to find on any/many of these lists. While some of these tools are .NET-specific, other tools are just solid development tools that are likely to be great additions to any .NET team’s toolbox with the added benefit that they work across multiple technologies.

Badboy. Likely the biggest sleeper on my list. Badboy is an extremely easy-to-learn web application testing tool. Check out the online documentation to understand its features and then use it to guide your learning. Chances are that you’ll have most of the basic and intermediate level scripting tasks mastered within the first 30 minutes of using the tool. Compare the cost of a Badboy license ($45 / individual or $30 / each for a 10-pack) with the cost of your existing web application testing tool. Chances are you’d be saving hundreds, if not thousands, of dollars per license. If you need to scale beyond Badboy’s simple threading / load testing capabilities, Badboy scripts can be exported in a format consumable by Apache JMeter for more heavy-duty controller/generator type load testing. Also, the Wave Test Manager server, from the makers of Badboy, allows you to upload and share Badboy scripts across a project, schedule execution of the scripts, and access the reports from the tests on a central server.

Lightspeed ORM. When the discussion of Object Relational Mappers (ORMs) comes up, NHibernate and the Entity Framework are almost always at the forefront of the conversation. LLBLGen gets added to the list as well if commercial ORMs are on the table. Rarely, if ever, is the Lightspeed ORM from the Mindscape team down under brought up. It should be. If an awesome Visual Studio modeling experience and a second-generation LINQ provider don’t convince you, maybe the Rails-esque data migration facilities will. Still not convinced? Check out the custom LINQPad provider and the LINQ-to-SQL to Lightspeed drag-and-drop conversion. If there are new features you’d like to see or if you need bug fixes, Ivan and the team at Mindscape are all ears and provide a near legendary turnaround time.

Silverlight Spy. Let’s recap just in case you missed the news – Silverlight is hot!!! It’s a pretty significant change from either the MVC or WebForms approach most .NET web developers are used to, and it takes a while to wrap your mind around. Silverlight Spy does for Silverlight what Reflector did for the .NET Framework: it pulls back the covers so that you can inspect and understand. Silverlight Spy provides insight into the XAP package, isolated storage information, performance data, an accessibility view, and so much more. The message from Microsoft over the last 6 months has been – learn Silverlight. That task is made so much easier with Silverlight Spy at your side.

DTM Data Generator. Microsoft recently, finally, got around to including a data generator in some versions of Visual Studio. If you restrict yourself to SQL Server and are willing to deal with slow data generation, it might even be a good fit for you. RedGate’s SQL Data Generator, which I’ve written about before, is much more efficient at loading data, as long as you stick with SQL Server. If you’re looking for a data generation tool to meet your needs irrespective of the underlying database you use, DTM’s Data Generator is the tool for you. DTM’s data generator supports SQL Server, Oracle, MySQL, DB2, Sybase, and any database you can access through OLE DB or a DSN. It supports inserts of most major datatypes, including BLOB generation, and supports a variety of rules comparable to RedGate’s product, including the use of custom rules. The enterprise version can be executed from the command line in silent mode, making it perfect for generating data in preparation for the execution of an automated test suite.

Performance Analysis of Logs (PAL). This tool just doesn’t get enough love from the .NET development community. Oft maligned as the “poor man’s SCOM”, PAL can be a real timesaver and/or lifesaver. It’s so simple: capture the PAL-specified counters for the platform being monitored (most major MS products such as Windows Server, IIS, MOSS, SQL Server, BizTalk, Exchange, and AD are supported), import the counters, and let PAL do its thing. Its “thing” is producing a detailed report for the counters showing how they looked across the duration of the capture and when the counters exceeded thresholds. PAL also provides explanations for each of the counters and details the implications of exceeding the thresholds. More useful information for a better price you will not find.

I was up at Penn State’s IST school this past week giving a lecture to a class as part of our recruiting. As part of the class, which was about application integration, I touched on the HTTP protocol. I believe that it’s extremely important that everyone starting out in web application programming or web-based integration have a deep knowledge of the HTTP protocol. Although you should eventually read a book about HTTP and ultimately read the protocol itself, sometimes it’s easier to learn by tinkering. Along these lines, I thought it would be interesting to provide a quick demo of using Fiddler to inspect the HTTP protocol. I’ve included the screencast here. My apologies for the speed of the screencast. I was in a hurry to get it done and it sounds like I had an energy drink or five too many when I did the voice-over.
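If you’ve never looked at HTTP on the wire, here’s roughly what a minimal exchange looks like when you inspect it in Fiddler. This is a hypothetical request against example.com; real traffic carries many more headers, which is exactly why a tool like Fiddler is handy:

```
GET /index.html HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0
Accept: text/html

HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 1270

<!doctype html>
...
```

Note the structure on both sides: a start line, then headers, then a blank line, then an optional body. Once that pattern clicks, most of what Fiddler shows you becomes self-explanatory.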

I used Camtasia for Mac to record the screencast. Camtasia for Mac is a relatively new entrant to the marketplace and is priced at $99 to compete directly with ScreenFlow, the long-time incumbent in the Mac screencast market. The tool couldn’t be easier to use. It took no time at all to capture the screencast, and post-capture editing, an area where Camtasia has always shined, is both powerful and incredibly intuitive. If you’re in the market for a Mac screencasting tool, I can only recommend Camtasia. You can pick up a free 30-day trial and, after that, $99 introductory pricing will get you the full product.

Performance counters for WCF have been available ever since the first release of WCF with the .NET 3.0 Framework. As long as these counters have been available, Microsoft has been cautioning about the memory requirements and potential performance degradation associated with insufficient shared memory allocation. I thought that I had heard at the PDC that WCF 4 would fix some of this but going back to the WCF session video, it looks as if these counters won’t really be addressed by WCF 4 but instead superseded by the ETW instrumentation present in AppFabric. So, until everyone moves to AppFabric, I see a need for a bit more guidance than the “allocate enough memory” that Microsoft offers us.

Enabling WCF Performance Counters

Enabling WCF performance counters is a breeze and is covered pretty well elsewhere. The configuration change below will turn on all three types of WCF counters: Endpoint, Operation, and Service.
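Assuming the standard WCF diagnostics configuration, the change goes in the system.serviceModel section of your app.config or web.config:

```xml
<configuration>
  <system.serviceModel>
    <!-- performanceCounters accepts All, ServiceOnly, or Off -->
    <diagnostics performanceCounters="All" />
  </system.serviceModel>
</configuration>
```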

Your options for enabling the counters are All, ServiceOnly, and Off. WCF performance counters are included for a reason, so I wouldn’t recommend disabling them entirely. Instead, as a rule of thumb, enable “All” when you’re performing specific service debugging activities that require all the counters, and leave “ServiceOnly” on for normal operations, including in a production environment.

Calculating Performance Counter Memory Size

Before diving into sizing, it’s best to provide a bit of background on performance counter memory allocation. Managed performance counters consume memory that is shared across all the .NET processes running on a machine; essentially a memory-mapped file. Although the .NET Framework 1.0 and 1.1 used global shared memory, .NET 2.0 and above use separate shared memory per performance counter category, with each category having a default size of approximately 128KB (that is ¼ the default global shared memory).

You also need to know about services, endpoints, operations – the WCF counter groups:

Services. Services are at the root of the WCF hierarchy. Services can have multiple endpoints and expose multiple operations. WCF has 33 performance counters for each service.

Endpoints. WCF endpoints provide clients access to a service through an address, binding, and contract. You can provide multiple endpoints per service. WCF has 19 performance counters for each endpoint of a service.

Operations. A WCF service operation is a discrete function performed by a WCF service. WCF provides 15 performance counters for each operation, per endpoint, per service.

What you’re ultimately looking to come up with is a sizing for each one of the WCF performance counter categories. Without providing a mathematical formula, I’ll walk through a brief hypothetical example to calculate the sizes. In this example, I’ll assume that we have 20 services on a machine, each of these services has 3 endpoints, and each service has 10 operations exposed across each of the three endpoints:

We’ll assume an average size per performance counter of 350 bytes, which is a fairly conservative yet accurate estimate.
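Running the hypothetical numbers through a quick back-of-the-envelope calculation (20 services, 3 endpoints per service, 10 operations per endpoint, ~350 bytes per counter, and the counts from the counter groups above) makes the problem concrete:

```python
BYTES_PER_COUNTER = 350          # conservative average size per counter

SERVICES = 20
ENDPOINTS_PER_SERVICE = 3
OPERATIONS_PER_ENDPOINT = 10

# Counters per instance, per the WCF counter groups
SERVICE_COUNTERS = 33
ENDPOINT_COUNTERS = 19
OPERATION_COUNTERS = 15

service_bytes = SERVICES * SERVICE_COUNTERS * BYTES_PER_COUNTER
endpoint_bytes = (SERVICES * ENDPOINTS_PER_SERVICE
                  * ENDPOINT_COUNTERS * BYTES_PER_COUNTER)
operation_bytes = (SERVICES * ENDPOINTS_PER_SERVICE * OPERATIONS_PER_ENDPOINT
                   * OPERATION_COUNTERS * BYTES_PER_COUNTER)

print(f"Service category:   {service_bytes / 1024:.0f} KB")   # ~226 KB
print(f"Endpoint category:  {endpoint_bytes / 1024:.0f} KB")  # ~390 KB
print(f"Operation category: {operation_bytes / 1024:.0f} KB") # ~3076 KB
```

Note that even the service category alone is well past the ~128KB default per-category size.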

From the above numbers, you’ll hopefully notice two things. First, I hope you now understand why I recommend the “ServiceOnly” setting unless you’re in an environment where you absolutely need the other counters. Second, even with a medium size service load, we’ve exceeded the default performance counter category maximum memory and are quickly heading for the dreaded “System.InvalidOperationException: Custom counters file view is out of memory” exception.

Setting Performance Counter Memory Size

Aside from the mechanics of setting the performance counter category memory size, there is only so much guidance I can provide. What you set these values to will depend on a couple of factors:

Whether you’ve set WCF counters to “ServiceOnly” or to “All”. If you’ve used the former, you’ll only need to tweak the service-specific private memory. If you go with “All”, you’ll want to set each category’s memory space individually.

The math you do for your counter categories based upon the example I provided in the previous section.

For the size of separate shared memory, the DWORD FileMappingSize value in the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\&lt;category name&gt;\Performance is referenced first, followed by the value specified for the global shared memory in the configuration file. If the FileMappingSize value does not exist, then the separate shared memory size is set to ¼ of the global shared memory setting in the configuration file, which is 512KB by default.

To specify the WCF category-specific sizes, simply set the registry value for each of the three registry keys associated with the three WCF service categories and then reboot the machine. Keep in mind that, like other application sizing activities, sizing of the WCF counter memory will need to be repeated as the number of services, endpoints, and operations change on a particular machine.
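As a sketch, a .reg file for the three categories might look like the following. The sizes here are hypothetical (512KB, 1MB, and 4MB, comfortably above the totals from the earlier example); substitute the results of your own math, and verify the exact category key names under HKLM\SYSTEM\CurrentControlSet\Services on your machine, since they vary by WCF version (these are the 3.x names):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\ServiceModelService 3.0.0.0\Performance]
"FileMappingSize"=dword:00080000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\ServiceModelEndpoint 3.0.0.0\Performance]
"FileMappingSize"=dword:00100000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\ServiceModelOperation 3.0.0.0\Performance]
"FileMappingSize"=dword:00400000
```

FileMappingSize is specified in bytes, so dword:00080000 is 524,288 bytes (512KB), and so on up the list.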

I’ve been blogging for 4 years now and have never filled out the “About Me” section on my blog. I’ve had good intentions for a while but just never got around to it because my vision involved scanning in a bunch of older materials. I’ve finally carved out a bit of time to update the default blurb with suitable material, which you can find here. Unless you’ve known me for a long while, you’re sure to find out an interesting new thing or two. Give it a look!