SOLID principles in .NET revisited part 9: concentrating on enumerations

In the previous post we streamlined our demo code so that it adheres to the SOLID principles. You may still spot bits and pieces that violate some design principle. It's worth noting that in a large enterprise project it's very difficult to attain 100% SOLID compliance, if such a state exists at all. You might be able to spot "deviations" even in the most well-maintained code bases.

In this post we'll concentrate on enumerations. We saw earlier in this series how the misuse of enums can easily lead to maintainability problems. Enumerations are very popular due to their simplicity and how easily they can be used to model a list of valid values in a certain category.

We'll first build up a short case study riddled with design mistakes, with special emphasis on enumerations. We'll then improve the code in the next post.

Threshold evaluation model

In the case study we’ll simulate thresholds and how they can be evaluated. We’ll base our model on a hypothetical web performance test where we measure a range of statistics related to the behaviour of a web site during a load test. The user can specify conditions similar to the following before starting the test:

If the average URL response time exceeds 5 seconds then the performance test fails

If the number of successful URL calls per minute is less than 10 then the test fails

Note that the performance metrics, such as "average URL response time", and the evaluation operators, such as "is less than", can be extended with other values. These conditions are then evaluated at the end of the test against the actual results. If any threshold is broken then the test fails.

Let's see how this scenario can be modelled in code, with deliberate drawbacks and design errors built in.

Code starting point

We all love enumerations, right? Performance metric types and operators wonderfully fit into the following enumerations:
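The original code listing did not survive here, so below is a minimal sketch of what those enumerations might look like, based on the metrics and operators named in this post. The member names are assumptions derived from the example conditions above:

```csharp
// Hypothetical reconstruction of the two enums described in the post.
// Member names are assumptions based on the example threshold conditions.
public enum PerformanceMetricType
{
    UrlResponseTime,
    SuccessfulUrlCallsPerMinute
}

public enum EvaluationOperator
{
    GreaterThan,
    LessThan
}
```

Simple, compact and seemingly harmless; the trouble only shows up once logic starts branching on these values.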

Another thing that many programmers love is putting “special” code in dedicated services no matter what. We’ll follow that practice and put the code that evaluates the thresholds in a MetricEvaluationService. We’ll wrap the threshold evaluation result in a custom object:
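The wrapper object's listing is also missing here; a minimal sketch follows. The post only names the ThresholdBroken property, so everything else about the shape of this class is an assumption:

```csharp
// Hypothetical shape of the result wrapper. The post only mentions the
// ThresholdBroken property; the rest of the class shape is an assumption.
public class ThresholdEvaluationResult
{
    // True when the measured value violates the threshold condition.
    public bool ThresholdBroken { get; set; }
}
```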

We first check the metric type of the threshold and read the appropriate value from the PerformanceSummary object. We then branch the evaluation logic according to the metric type. Within each metric type we have a switch-block that evaluates the threshold according to the operator type. If the threshold limit is broken then we set the ThresholdBroken property of ThresholdEvaluationResult to true.
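The service's listing is missing as well, so here is a sketch of the if/switch structure described above, with the supporting types declared inline so it compiles on its own. The type and property names (Threshold, PerformanceSummary, Limit and so on) are assumptions beyond what the post names explicitly:

```csharp
// Supporting types declared inline so the sketch compiles on its own;
// their shapes are assumptions based on the post's description.
public enum PerformanceMetricType { UrlResponseTime, SuccessfulUrlCallsPerMinute }
public enum EvaluationOperator { GreaterThan, LessThan }

public class Threshold
{
    public PerformanceMetricType MetricType { get; set; }
    public EvaluationOperator Operator { get; set; }
    public double Limit { get; set; }
}

public class PerformanceSummary
{
    public double AverageUrlResponseTime { get; set; }
    public double SuccessfulUrlCallsPerMinute { get; set; }
}

public class ThresholdEvaluationResult
{
    public bool ThresholdBroken { get; set; }
}

public class MetricEvaluationService
{
    public ThresholdEvaluationResult EvaluateThreshold(
        Threshold threshold, PerformanceSummary summary)
    {
        var result = new ThresholdEvaluationResult();
        // First branch: pick the measured value based on the metric type.
        if (threshold.MetricType == PerformanceMetricType.UrlResponseTime)
        {
            // Second branch: a switch on the operator type for this metric.
            switch (threshold.Operator)
            {
                case EvaluationOperator.GreaterThan:
                    result.ThresholdBroken =
                        summary.AverageUrlResponseTime > threshold.Limit;
                    break;
                case EvaluationOperator.LessThan:
                    result.ThresholdBroken =
                        summary.AverageUrlResponseTime < threshold.Limit;
                    break;
            }
        }
        else if (threshold.MetricType == PerformanceMetricType.SuccessfulUrlCallsPerMinute)
        {
            // The same switch structure is duplicated for every metric type.
            switch (threshold.Operator)
            {
                case EvaluationOperator.GreaterThan:
                    result.ThresholdBroken =
                        summary.SuccessfulUrlCallsPerMinute > threshold.Limit;
                    break;
                case EvaluationOperator.LessThan:
                    result.ThresholdBroken =
                        summary.SuccessfulUrlCallsPerMinute < threshold.Limit;
                    break;
            }
        }
        return result;
    }
}
```

Note how every new metric type would force another 'else if' arm with yet another copy of the operator switch.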

What’s wrong with this code?

There are several things that have gone wrong. From what we've seen on SOLID we now know that the EvaluateThreshold method will be difficult to maintain in the future. If a new metric type and/or a new operator is added to the requirements then we'll need to extend the 'if' and 'switch' blocks, and obviously the PerformanceMetricType and EvaluationOperator enumerations as well. Furthermore, we've put the metric evaluation logic in a specialised class instead of the Threshold class, where it really belongs.

These are the major issues with this demo code that need some serious attention. We'll make the code better in the next post.