Non-Functional Requirements Are Not Nonsense

(But They Are In Need of A Makeover)

I have been thinking about this topic for quite some time, but like a lot of my thoughts it had ended up as a Trello card and gone no further (yeah, I have a kanban system for blog posts and I’m still not prolific). Then Gerben Wierda posted his evocatively titled piece on InfoWorld – Nonsense non-functional requirements – and I had to churn out this riposte.

When I get past the headline of Gerben’s article and look deeper, my interpretation is that what people consider to be traditional Non-Functional Requirements (NFRs) do not benefit from such categorisation or segregation. These considerations or factors do have to be taken into account; it’s simply that they are inextricably linked to the functional requirements. So much so that it is of no benefit, and makes no sense, to separate them out as another category of requirements. And I agree.

NFRs Are Not Just Requirements That Are Non Functional

Functional requirements are pretty straightforward (to comprehend conceptually, at least!) – they specify the functionality of the product, application or system under construction.

By the very name, non-functional requirements should be the antithesis of functional requirements. And unfortunately they are often thought of as such. In the same way functional requirements are the creative, positive possibilities, non-functional requirements are considered the conservative, negative constraints which your system must operate within. Non-functional requirements are thought of as only having primary colours at your disposal while you attempt to paint a masterpiece. (It’s no wonder they are sometimes ignored altogether!)

But they are not the antithesis of functional requirements. In fact, they are not even on the same plane as “functional” requirements. You see, while each functional requirement should be specific to one area or element of functionality, non-functional requirements are traditionally thought of more generically, for the application, system or product as a whole.

Example

For example, I may have a functional requirement which specifies what data I want to return from an API, how I want to filter it and maybe even why. This individual requirement is specific about an API endpoint and its data. Indeed, this is as far as some functional requirements go.

And then traditionally, I may have a performance NFR which states that no transactions or API calls for the system should take more than X milliseconds to complete.

I hope this example is familiar to some developers and architects out there, and it’s not just me! I’ve seen it many times. I know I’ve personally written some requirements specs like this!

The issue with this approach is that the performance NFR has no context. An arbitrary value (X milliseconds) has been assigned as a performance metric for ALL API calls and transactions. Regardless of what data is being returned, and whether it is a command (a transaction) or a query (a call) – despite the fact that CQS shows these are vastly different beasts in capability terms – we have assigned a single performance metric to all of them. When you consider it like this, it’s absolute madness.
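To make the madness concrete, here is a minimal sketch – the endpoints, names and timings are hypothetical illustrations of mine, not from any real system – showing how one blanket limit treats a trivial lookup and a legitimately heavy call identically:

```python
import time

def get_user(user_id):
    """Trivial lookup query: returns in a few milliseconds."""
    time.sleep(0.005)  # simulate a fast indexed read
    return {"id": user_id}

def generate_report():
    """Heavy aggregation: legitimately takes hundreds of milliseconds."""
    time.sleep(0.4)  # simulate scanning and summarising lots of data
    return {"rows": 100_000}

BLANKET_LIMIT_MS = 200  # the arbitrary "X milliseconds" applied to ALL calls

def within_limit(fn, *args, limit_ms=BLANKET_LIMIT_MS):
    """Return True if the call completes within the blanket limit."""
    start = time.perf_counter()
    fn(*args)
    return (time.perf_counter() - start) * 1000 <= limit_ms

print(within_limit(get_user, 1))      # passes trivially
print(within_limit(generate_report))  # "fails", though 400 ms may be fine here
```

The report call “fails” the NFR even if 400 ms is perfectly acceptable to its users, while the lookup tells us nothing – the single metric carries no information about either.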

Reasoning

So, why do we do it? We’re not mad (at least not all of us). So why then? And if we do provide context-specific, focused NFRs, why do we still keep them separate?

Well, I think that by segregating and labelling these as NFRs, we treat each non-functional consideration as a single, standalone requirement and try to satisfy each one in isolation, because that is how we handle functional requirements – if one functional requirement is not met, it doesn’t invalidate the others.

And Gerben’s article touches upon this absolute blind spot. As software engineers, we deal in absolutes: pass or fail, ones or zeros. There is no “sort of” with test results, no 0.5 in binary. And we apply the same thinking to functional vs non-functional requirements because they are categorised and labelled as such.

That is not to say I think NFRs are a waste of time because they don’t fit and should be scrapped (I don’t think Gerben does either, he just has an attention-grabbing headline). Rather, we have to think about them and treat them differently. They are not the same as functional requirements.

An Alternative Approach

For a while, I have been working on my own projects with this approach:

Non-functional requirements are not really requirements at all – NFRs should not be labelled as such (and yes, naming is important).

Instead, they are technical considerations or factors that have to be considered. Think of them as prompts when capturing the functionality of the system.

I like using the term “technical consideration” rather than NFR for two reasons:

1. The negative, constraining connotation conjured by “non-functional” no longer applies when you use a different, more positive term.

2. These are not requirements at all. They are constraints, or prompts for your consideration while specifying your requirements.

Consequently, functional requirements are not functional requirements either; they are simply requirements. And each and every requirement should factor in the technical considerations and constraints that are contextually appropriate for that functionality. Note that not all of these technical considerations will apply to every requirement. But using this approach, each one has at least been considered in its individual context.

Going Back To The Example

Taking our example, we can have a technical consideration for performance. Sure, we can set some benchmarks to make this a little more concrete and give ourselves a testing target to aim at, but it is far more valuable if we consider performance in the context of the individual API call requirement, and allow it to affect the requirement by making it more specific. We could, for instance, specify that the GET API call should have paging and a minimum/maximum page size in order to mitigate performance concerns. This considers the performance factor, and its effect is specific to the requirement in question. End users, product owners or “the business” can evaluate objectively whether this is acceptable, as it is directly attributable to the API call requirement. (It also means more to them than an arbitrary performance metric.)
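As a sketch of that paging idea – the endpoint, field names and limits here are my own illustration, not a prescribed design – the page-size cap is the performance consideration baked directly into the requirement:

```python
DEFAULT_PAGE_SIZE = 25
MAX_PAGE_SIZE = 100  # cap chosen so a single call stays within its latency budget

def get_orders(all_orders, page=1, page_size=DEFAULT_PAGE_SIZE):
    """Hypothetical GET /orders?page=&page_size= handler: page size is clamped,
    so no single call can ever return an unbounded result set."""
    page_size = max(1, min(page_size, MAX_PAGE_SIZE))
    start = (page - 1) * page_size
    return {
        "page": page,
        "page_size": page_size,
        "total": len(all_orders),
        "items": all_orders[start:start + page_size],
    }

orders = list(range(1, 1001))                         # 1,000 orders in the store
response = get_orders(orders, page=1, page_size=500)  # client asks for 500...
print(response["page_size"], len(response["items"]))  # ...but is capped at 100
```

The cap is something a product owner can reason about (“a page shows at most 100 orders”), unlike a bare millisecond figure.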

Summary

So to summarise: in my opinion, NFRs are necessary (but not evil, and definitely not nonsense). The problem is that the way they are categorised and segregated encourages misapplication of requirements as a whole. Treat them as technical factors that must be considered with each and every requirement, and use them to improve said requirements, rather than as separate, isolated statements of constraint.

Architectural Factors

I take this approach a stage further by considering a subset of architectural factors. These are the technical considerations that are specific to the architecture of the system, application or product. This is of course not mandatory, but a nuance I think of as beneficial as a technical lead or architect.

I have a free email course Introduction to Software & Technology Architecture, where I discuss the architectural factors worth considering for startups and SMEs looking to build robust and sustainable software. You can sign up for this free email course here.

I’ve worked with some great testers in the past, and they’re great to define NFRs with, because they’ll define a test and a pass criterion – “95% of pages must return a response within x ms” – and then set x as the maximum over a series of tests. As a developer, I get a clear pass/fail, detailed statistics on where my danger areas are (e.g. whether mean load time is within one standard deviation of x) and a realistic target.

Usually the customer has something vague like “it needs to be fast” and the testers will use their experience to turn that into an actionable requirement that the customer and developers agree to.

Usually there’s a good reason for the 95% because we know up front that a couple of pages will be logic or network heavy and may fail. For those pages we can use psychology tricks like progress bars or similar so that the users see a response whilst the work is being done. Or we do it asynchronously.
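A rough sketch of that approach – the timings below are invented for illustration, and the percentile cut is a deliberately simple one – sorts the sampled load times, sets x so the fastest 95% of samples fall within it, and reports the summary statistics developers use to spot danger areas:

```python
import statistics

# Ten hypothetical page-load samples from a series of tests (ms);
# one network-heavy page is a known outlier.
timings_ms = sorted([120, 135, 140, 150, 155, 160, 170, 180, 950, 210])

# Keep the fastest 95% of samples; x is the slowest time among those kept,
# so the known-heavy outlier pages don't inflate the target.
keep = int(len(timings_ms) * 0.95)
x = timings_ms[:keep][-1]

passed = sum(1 for t in timings_ms if t <= x)
print(f"target x = {x} ms; {passed}/{len(timings_ms)} pages within target")

# Danger-area statistics for developers (e.g. mean load time vs x).
mean = statistics.mean(timings_ms)
sd = statistics.pstdev(timings_ms)
print(f"mean = {mean:.0f} ms, standard deviation = {sd:.0f} ms")
```

With real data you would use far more samples and a proper percentile method, but the shape is the same: a measured, agreed target instead of an arbitrary one.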

Yep, that sounds very similar to my experience, but not necessarily with testers contributing so heavily. I guess if you are working in agile teams with testers available early in the project, you can ask for that input.