In the abstract for “A Defined Process For Project Postmortem Review” authors Bonnie Collier, Tom DeMarco, and Peter Fearey state “Conventional wisdom in the software industry decrees that it’s good practice to conduct a postmortem study at the end of each project. Some would even suggest that this is not just a useful undertaking, but one of the fundamental principles of successful software development. The rationale authors most often cite for postmortem analysis is that only by analyzing our shortcomings can we learn to do better. We must begin by cataloguing such failures and learning from their patterns. The success of the postmortem, or of any learning process, demands a context that makes organizational learning possible. Participants are empowered when they know that each issue raised during the postmortem process must be added to the risk database and evaluated methodically on each subsequent project.”

While this is all true, most corporate postmortems are a waste of time, accomplishing little besides finger-pointing and assigning blame. In this series of articles, I will look at why postmortems matter and how we should go about conducting them.

Part 1: A Post-Mortem Takes Courage

Everyone loves to win. In fact, there is nothing better after a win than a post-game party where everyone gets together and congratulates each other, slapping backs and giving high fives. But is that all that should happen at the end of the game? How do you go into the next game assured of the same success? Every coach knows that no matter how the game went, or how perfect the team's performance felt, there is still learning to be done. Some time must be spent reviewing the film and calling out what was done well, what could use some work, and even whether there are gaps on the roster to fill. This review fine-tunes the team's performance and ensures that in future games, whether against easier or more difficult opponents, the team will avoid its past mistakes and continue to refine the things it does so well.

This review is even more important if the team loses. The coach would never just let the team slink home, hoping that they will do better next time.

At the end of technical projects, we should be doing the same: gathering together to review what was successful and what was not. We call this meeting a post-mortem, and we use it to identify process improvements, with the hope of mitigating future risks and promoting best practices.

A post-mortem is not easy, especially for a project that struggled or failed. I would even say that a post-mortem takes courage. Courage to dig into and relive the struggles. Let me be clear, though: this is not about chastising team members. We have all seen the movie where the coach chews out one of the players in front of the rest of the team at halftime. We know that player isn’t going to deliver much in the next half of the game. A good postmortem takes courage to point out the weaknesses of our team without pinning blame. Courage to admit what you and your teammates already know – that the team is not perfect and that you need to work on certain parts of your game.

If you are feeling less than courageous about conducting a post-mortem on your recently finished project, take heart. Your entire team was at the game. They already saw many of the areas where they struggled, so there should be few surprises. Just like the sports team, everyone already knows where they failed; they just need to own it and create a plan to overcome it next time.

I have lately been working with Apache JMeter to generate reports based on the data returned from a series of web service calls (details on how to do this will be in a later posting).

Apache JMeter is an Apache project that is predominately used as a load testing tool for analyzing and measuring the performance of a variety of services. Its main focus is web applications.
I have been struggling for quite some time with how to extract the response of an HTTP Sampler into a variable for later use. When the response is XML, I would just use an XPath Extractor, but in this case the response from the service is HTML. HTML is not guaranteed to be well-formed XML, since it allows SGML shortcuts that XML forbids, such as end-tag minimization (omitted closing tags like </p>), unquoted attribute values, and a number of others, so an XPath Extractor would periodically throw errors.

The problem I have been struggling with is how to use a Regular Expression Extractor to save the entire HTTP Sampler response to a variable in JMeter. The documentation for the Regular Expression Extractor states that it allows the user to extract values from a server response using a Perl-type regular expression.

After much trial and error, here is the expression that I came up with:

(?s)(^.*)

() – grouping

?s – an embedded pattern-match modifier. (?i) enables case-insensitive matching, (?m) enables multiline treatment of the input, (?s) enables single-line treatment of the input (it affects how the ‘.’ metacharacter is interpreted), and (?x) enables extended whitespace and comments. We are using (?s) to allow ‘.’ to match newline characters.

^ – matches the beginning of a string or line
. – matches any character. Normally this would exclude \n, but since we have prefaced our expression with (?s), it matches even these.

Nice…
Any Macintosh user who shares files with Windows or Linux users, whether over a network or on a thumb drive, has experienced the horror of the “Mac droppings” they leave behind. I am referring to the .DS_Store, .Trashes, and “._*” files that get strewn around the file system of the unsuspecting host. Usually, the user comes back, scrapes some of it off their shoe, and asks what the files are for. You end up apologizing and letting them know that you have no idea how they got there but that they can probably just delete them. Essentially, you are saying “Please clean up after me”.

Was it something I ate?
No, these are normal parts of the Mac OS that are hidden from the Macintosh user but not from a Windows or Linux user who views the filesystem you have been writing files to. To be fair, these are not really useless files to the Macintosh. .DS_Store files contain settings that are “cosmetic” in nature, such as Finder window position, view style, and icon position. However, .DS_Store files in OS X also store Finder “comments”, so in that sense, disabling .DS_Store files may result in loss of data. “._*” files contain extra information to go along with the main file’s data but don’t play well on non-HFS+ file systems. The .Trashes folder that gets created is just that: the trash. It is where the Macintosh puts items on the volume that are trashed.

How embarrassing, what can I do?
Good news, there are steps you can take to reduce or even eliminate those embarrassing droppings. Some of these methods may require you to carry a bag with you when you are visiting other file systems but at this point Apple has not given any other alternatives.

1. Disable the creation of .DS_Store files on network volumes. There is a way to tell your Macintosh that you don’t want it creating .DS_Store files on the network volumes you visit (note that this setting affects network shares, not local thumb drives):
Open Applications:Utilities:Terminal.app
Copy/paste “defaults write com.apple.desktopservices DSDontWriteNetworkStores true”
Hit return
You may need to log out or restart your computer for this change to take effect

2. dot_clean is your friend
Prior to Mac OS X 10.5 you had to delete the “._*” resource files manually. With 10.5, though, Apple introduced the dot_clean command, which can be executed on the directory in question. Type dot_clean /path/folder to join the “._*” files with their parent files. It is recursive, so it should take care of all of the child folders too. This should be done prior to ejecting the volume. Read OS X 10.5’s manual pages (man dot_clean) for more information. Although many on the web claim that dot_clean removes .DS_Store files, it does not. It only merges and removes “._*” files.

All of this could be wrapped up in a single script (which I may post later) that you could execute before ejecting volumes. It would be great to be able to configure your OS to execute the script automatically prior to ejecting a volume but that is not an option at this point.
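As a rough sketch of what such a pre-eject script might look like (the function name and the .DS_Store/.Trashes handling are my own additions; dot_clean is skipped gracefully on systems that do not have it):

```shell
# Hypothetical pre-eject cleanup; pass the mounted volume's path.
cleanup_volume() {
  vol=$1
  # dot_clean only exists on OS X 10.5 and later, so skip it elsewhere.
  if command -v dot_clean >/dev/null 2>&1; then
    dot_clean "$vol"
  fi
  find "$vol" -name '.DS_Store' -delete   # Finder metadata files
  find "$vol" -name '._*' -delete         # stray AppleDouble files
  rm -rf "$vol/.Trashes"                  # the volume's trash folder
}
```

You would then run something like cleanup_volume /Volumes/THUMBDRIVE just before ejecting the drive.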

Ok, you have been using a manually installed custom version of Ruby and find that you need to remove it from your system, either because you are done with it or because you are switching to a package manager to handle the tedious tasks of package installation and dependency tracking.

Removal is not usually difficult if you have kept your source and the makefile includes an uninstall target. Ruby, however, does not bless us with one.

Removing all of the files associated with the application is important since package managers use the files to indicate whether dependencies should be installed. If you don’t clean up completely, the package manager may not install the dependencies when you reinstall.

So, how do you remove Ruby from your system? To start, remember that in Unix, /usr/local is the “safe haven” for customizations made to the system; when you update the OS, /usr/local is left alone. This is good to know, since it is exactly where a source-built Ruby gets installed. Don’t touch the Ruby that ships with the OS.
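A sketch of the cleanup, run here against a throwaway prefix so nothing real is touched. The file list is representative rather than exhaustive, and you should point PREFIX at /usr/local only after verifying the paths yourself, never at the system Ruby:

```shell
# Simulate against a scratch prefix first; swap in /usr/local only
# once you are confident in the file list.
PREFIX=$(mktemp -d)

# Stand-ins for what a "make install" of Ruby typically leaves behind.
mkdir -p "$PREFIX/bin" "$PREFIX/lib/ruby" "$PREFIX/share/man/man1"
touch "$PREFIX/bin/ruby" "$PREFIX/bin/irb" "$PREFIX/bin/gem" \
      "$PREFIX/share/man/man1/ruby.1"

# The actual removal: executables, the library tree, and the man page.
for f in bin/ruby bin/irb bin/gem bin/erb bin/rdoc bin/ri; do
  rm -f "$PREFIX/$f"
done
rm -rf "$PREFIX/lib/ruby"
rm -f  "$PREFIX/share/man/man1/ruby.1"
```

Doing a dry run like this against a scratch directory is a cheap way to convince yourself the list is right before pointing it at the real prefix.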

For a long time, I have been using Fiddler or the browser developer tools to resolve the issues that naturally come along in web development. That was before I was introduced to Charles.

Charles, written by Karl von Randow at XK72 Ltd., is available as shareware for Windows, OS X, and Linux. It is an HTTP and SOCKS proxy server. Proxying requests and responses enables Charles to inspect and change requests as they pass from the client to the server, and the response as it passes from the server to the client.

The primary function of Charles, like that of the developer tools built into browsers, is to record the requests and responses of the current session for your inspection and analysis. Charles then goes beyond the capabilities of standard dev tools with features such as bandwidth throttling, breakpoints, and request rewriting.
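For command-line clients, pointing traffic at Charles is just a matter of the standard proxy environment variables. A minimal sketch, assuming Charles is listening on its default proxy port of 8888 (check the Proxy Settings dialog if you have changed it):

```shell
# Route CLI tools through a local Charles instance. 8888 is
# Charles's default HTTP proxy port (an assumption; adjust to match
# your own Proxy Settings).
export http_proxy=http://localhost:8888
export https_proxy=http://localhost:8888

# A tool run in this shell, e.g. curl http://example.com/, would now
# show up in Charles's recorded session.
```

Browsers are usually configured through the system proxy settings instead, which Charles can take over for you on launch.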

This is it! This is the beginning. I have wanted to start a blog for quite some time but really needed a forcing event to get me moving. Well, that forcing event has finally come. My company has requested that I begin blogging regularly. Nothing like a corporate edict to get me over the hump so here I go.

I have always thought that blogging would be a highly glamorous task, one in which I spend some time expounding on a topic of great interest to me, in the hope that a reader of my post would find their job a little easier or even be inspired to do something differently than before. We shall see. At this point it is much less glamorous and really more of a spotlight on my poor written communication skills, but I plan to remedy that.

In the months ahead, as I develop my writing skills, I will be posting entries on the topics that I find myself engaged in daily. I am a Solutions Architect for Axian Consulting with an emphasis on custom application development so the topics will probably vary greatly.

A little about me…

I have been developing software since 1990 when I was hired by ImageBuilder Software (formerly CDI) as a C programmer in MS-DOS. My first project was to write a CGM (Computer Graphics Metafile) reader. Many of you probably have never heard of a CGM but they still have a significant following amongst those involved with technical illustration, electronic documentation, geophysical data visualization and many other areas. I was fairly young when I worked with CGMs so I can still remember much of the specification but I will not be creating any posts on this topic. After CGMs I worked in presentation software (anyone remember Harvard Graphics?) and then moved into cross platform educational and multimedia software for the Macintosh and Windows OS. All of the development was in C++. Ah, those were the days! After a while, I became the Technology Director for Application Development and after 14 years decided my next job would definitely need to have more variety. Enter Axian Consulting…

I started at Axian back in 2003 looking for variety and have not been disappointed. Since then, I have been able to work on projects ranging from build systems to robotics and from simulators to web services. I have completed several projects in the .Net stack but I spend most of my time these days in the Java/JBoss stack.

Throughout my career, however, I have always had a steady interest in refining the development process and the implementation of best practices. I will definitely be posting on these areas sometime in the future.

Sometime soon, I will fill in my “About” page and maybe even add a glamour shot.