As a follow-up, here's what I wrote to the group to solidify my understanding of coupling and cohesion, since it's not a topic that comes up directly during development.

Let me describe how I view coupling and cohesion, and see how it lines up with your understanding. For me, coupling is the ease with which you can pull modules apart. You notice this when moving classes and functions around during refactoring. Until you make a change, you don't really feel the pain of tight coupling. To minimize this pain, you remove needless dependencies, depend on abstractions instead of concrete classes, pay attention to the Law of Demeter, etc. (i.e. use good design, e.g. SOLID…
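To make the "depend on abstractions instead of concrete classes" point concrete, here's a minimal Java sketch (the class and interface names are hypothetical, not from the original post): a service that depends on an interface can be pulled apart from any concrete implementation, which is exactly the kind of loose coupling you feel when refactoring.

```java
// Hypothetical example: loose coupling via an abstraction.

// The abstraction: ReportService only knows about this interface.
interface Printer {
    void print(String text);
}

// One concrete implementation; easy to swap without touching ReportService.
class ConsolePrinter implements Printer {
    public void print(String text) {
        System.out.println(text);
    }
}

// Loosely coupled: the dependency is injected, so ReportService can be
// moved, tested, or reused with any Printer implementation.
class ReportService {
    private final Printer printer;

    ReportService(Printer printer) {
        this.printer = printer;
    }

    void report(String data) {
        printer.print("REPORT: " + data);
    }
}

public class CouplingDemo {
    public static void main(String[] args) {
        new ReportService(new ConsolePrinter()).report("42 widgets sold");
    }
}
```

If ReportService instead constructed a ConsolePrinter directly, moving it to another module would drag the console dependency along with it — that's the pain of tight coupling showing up during a refactoring.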

I came across an amazing introduction to TDD today by Sandro Mancuso: Testing and Refactoring Legacy Code. The running time is 1 hour and 17 minutes, and it's worth every second. This is truly a master at work.

There is no single product that, used alone, will enable you to lose weight. Just as there is no single development practice that will get you high quality software.

You might see a TV ad for some piece of equipment, say something called the Ab Master 3000 (completely made up; it may or may not exist). The company selling said product might promote it by showing an extremely fit person, with a tagline like "you can get these amazing results by using the Ab Master 3000 for 30 minutes a day".

They aren't lying to you; you could get amazing results such as six-pack abs. But you'll only see them if you also have a proper diet and exercise regimen, and the willpower and persistence to keep at it long enough to get your body fat percentage low enough. Those who buy this product and use it and nothing more won't see the results they are picturing.

The same thing applies to software development practices such as TDD. A TDD ev…

I've been looking around for a new laptop for a while now, and I finally decided on the MacBook Air. Specifically, the 2014 13-inch model with a 128 GB SSD, a 1.4 GHz Core i5, and 8 GB of RAM. It's by far the best laptop I've ever owned, and I couldn't be happier with my purchase.

The decision to buy a MacBook Air was a tough one. There are a lot of great laptops available right now, and the MacBook Pro with the retina display was pretty tempting. Ultimately it was the light weight, long battery life, decent performance, stylishness, and price of the MacBook Air that influenced my decision.

Here's the list of laptops I was looking at:

MacBook Pro
MacBook Air
Asus Zenbook
Acer Aspire S7
HP Envy
HP Spectre
Toshiba Kirabook
Lenovo Yoga Pro 2
Microsoft Surface Pro 3
Dell XPS
I was leaning towards something light, such as an ultrabook, but I didn't want to pay much more than $1000. For me, it's hard to justify that much money on a laptop. I've bought cars for less money :). Kid…

In my previous post I linked to some basic JMeter test plans, as I was just getting comfortable with JMeter. Since then, I've found a way to make the scripts more maintainable by using user-defined variables. Here's the link to a simple script that uses variables.

The syntax for referencing a variable in JMeter is ${VARIABLE_NAME}.
What I like to do is add a User Defined Variables config element to the top of the test plan and put all the settings I'd like to tweak in there. For example, URLs, timeouts, etc. All in one place at the top of the script. This makes it easy to tune scripts, e.g. to test different pages, to tweak the number of users, etc.
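For illustration, here's roughly what a User Defined Variables element looks like inside the test plan's XML (a hand-trimmed sketch, not a complete plan — attributes vary a bit by JMeter version). A sampler elsewhere in the plan would then reference the value as ${BASE_URL}:

```xml
<Arguments guiclass="ArgumentsPanel" testclass="Arguments"
           testname="User Defined Variables" enabled="true">
  <collectionProp name="Arguments.arguments">
    <elementProp name="BASE_URL" elementType="Argument">
      <stringProp name="Argument.name">BASE_URL</stringProp>
      <stringProp name="Argument.value">http://example.com</stringProp>
    </elementProp>
  </collectionProp>
</Arguments>
```

In practice you'd create this element through the JMeter GUI rather than by hand; the point is that all the tunable values end up grouped in one block near the top of the JMX file.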

I've started a GitHub repository containing some basic JMeter test plans. While creating test plans is fairly straightforward, I find it easier and quicker to take an existing test plan and tweak it to suit my needs. Maybe it will help you as well.

Right now I've got two up there: one using the JMeter Standard Plugins to generate a constant load, and one with a simple loop controller loading a web page with 20 users and a 5 second delay.

If you're new to performance testing and just starting out with JMeter like me, it can be a little daunting. There's a lot to learn, and there's really no one thing you can read that will make it click. You just have to play with it for a while. Here are a few things that might help you get started.

Reference Material
Ophir Prusak from BlazeMeter has a couple of good intro videos on JMeter:
Learn JMeter in 60 Minutes
Advanced JMeter Training and Report Analysis
The thing I like about these videos is that they are clear, decently explained with just the right amount of information, and aren't too technical. They are aimed at promoting the BlazeMeter product, but they focus more on JMeter itself and contain a lot of information.

WebPageTest.org is a handy open source performance tool to use in conjunction with JMeter. It can analyze the page load speeds of applications available on the web.

Recently I needed to reproduce an Entity Framework deadlock issue. The test needed to run in NUnit, and involved firing off two separate threads. The trouble is that in NUnit, exceptions in threads terminate the parent thread without failing the test.

For example, here's a test that starts two threads: the first simply logs to the console, while the second throws an exception. I expected this test to fail. However, it actually passes.
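The original C# snippet isn't included here, but the same behavior is easy to demonstrate in plain Java (a hypothetical analogue, not the post's actual NUnit test): an exception thrown on a spawned thread never reaches the thread that started it, so a test runner watching only the main thread sees success.

```java
// Hypothetical analogue of the NUnit situation: an uncaught exception
// on a worker thread does not propagate to the thread that started it.
public class ThreadExceptionDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread logger = new Thread(() -> System.out.println("logging..."));
        Thread failer = new Thread(() -> {
            throw new RuntimeException("boom");
        });

        logger.start();
        failer.start();
        logger.join();
        failer.join();

        // We still get here: the RuntimeException killed only the worker
        // thread, so a test body shaped like this would "pass".
        System.out.println("main thread finished normally");
    }
}
```

One common fix is to capture the failure on the worker — for example via Thread.setUncaughtExceptionHandler or a shared field — and then re-throw or assert on it from the test thread after joining.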

Introduction
In this post I'll give a quick way to get some basic web performance metrics using both JMeter and Gatling.

JMeter is a well known, open source, Java based tool for performance testing. It has a lot of features, and can be a little confusing at first. Scripts (aka Test Plans) are XML documents, edited using the JMeter GUI. It has lots of options, supports a wide variety of protocols, and produces some OK looking graphs and reports.

Gatling is a lesser known tool, but I really like it. It's a Scala based tool, with scripts written in a nice DSL. While the scripts require some basic Scala, they are fairly easy to understand and modify. The output is a nice looking, interactive, HTML page.
Metrics
Below are the basic metrics gathered by both JMeter and Gatling. If you are just starting performance testing, these might be a good starting point.

Response Time – Difference between the time when the request was sent and the time when the response was fully received
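As a rough illustration of what both tools compute from this metric, here's a small Java sketch (a hypothetical helper, not code from JMeter or Gatling) that derives summary statistics from a list of per-request response times:

```java
import java.util.Arrays;

// Hypothetical sketch of the response-time math: given per-request
// timings in milliseconds, compute min, max, mean, and a percentile —
// roughly what JMeter's summary report and Gatling's HTML report show.
public class ResponseTimeStats {
    public static void main(String[] args) {
        long[] millis = {120, 95, 310, 140, 105, 500, 130, 98};
        Arrays.sort(millis);

        long min = millis[0];
        long max = millis[millis.length - 1];
        double mean = Arrays.stream(millis).average().orElse(0);

        // Nearest-rank 95th percentile (one of several common definitions).
        int rank = (int) Math.ceil(0.95 * millis.length) - 1;
        long p95 = millis[rank];

        System.out.printf("min=%d max=%d mean=%.1f p95=%d%n",
                          min, max, mean, p95);
    }
}
```

Percentiles matter more than the mean for load testing, since a handful of slow outliers (like the 310 ms and 500 ms samples above) can hide behind an acceptable-looking average.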

I've been following the #isTDDDead debate, and I've heard feedback loops brought up often. Part of the benefit of TDD is that you get quick feedback on your designs as you go, and you know immediately if you've violated any previous assumptions.

Feedback loops are also at the core of The Lean Startup by Eric Ries, in particular the Build, Measure, Learn feedback loop.

Feedback loops aren't just limited to software development. For example, do you ever think about how much you need to turn the steering wheel? There is no way to teach someone this ability - they need the feedback of the car's movement to know for sure.

I don't remember hearing much about feedback loops during my undergrad years (1995-2000). It was around that time that Kent Beck et al. were introducing JUnit, TDD, and XP. It wasn't until years later that I heard about these topics. Even then, I don't recall hearing about feedback loops as much as I do today.

My last couple of code reviews have turned up some common code smells: function names that don't adequately reflect behavior, misplaced functions, and sub-par function decomposition.

I don't think I'm the only developer afflicted with bad naming, since it's one of the two hardest things in computer science. However, the function placement and improper decomposition are a signal to me that I have a clarity problem.
I've been building software for almost 12 years, and I take my coding very seriously. I'm pretty sure I have OCD tendencies, especially when it comes to code. Ever have to shut off your car when you realize that the emblem on your key is facing the wrong way? I have :).
My desk is littered with programming books like The Pragmatic Programmer, The Practice of Programming, Clean Code, Refactoring, and Domain-Driven Design. But despite my years of development and the many programming books I've read, I still find software developmen…

The recent debate over #IsTDDDead is very interesting to me. Not because I believe TDD to actually be dead, but because I think it goes against the core beliefs of some software developers, especially those who practice TDD regularly and for whom it has become integral to how they build software.
Now that the shock of the proclamation that TDD is dead has subsided a little bit, we can start examining the reasons prompting this statement, and maybe incorporate some of these thoughts back into our own views of software development.
But for that to happen, we need to reflect on our own view of the world, and be open to the possibility that some of our views might be wrong. Admitting that we're wrong can be hard to do, especially if it's our long-standing beliefs that are in question. Strong opinions, weakly held.
I approach software development assuming that there is a better way, but I haven't found it yet. As a result, I'm always in pursuit of a better way…