Like many people out there, I love high-performing cars. After all, who wouldn’t prefer a Lamborghini or Ferrari over a regular sedan? Many of us end up having to live with simply driving a regular sedan, though, because of a combination of finances and family responsibilities.

It’s the same with software. While we all want our software to perform faster, many of us settle for the less attractive option, which is just getting it to function. Inspired by how car manufacturers engineer their vehicles for ultimate performance, I’ve figured out how we can apply similar principles to our software systems. Here’s how.

As much as I love software development, the truth is that it wasn’t my dream job growing up. Rather than coding, I dreamed of being a Formula 1 driver. There was something that thrilled me about watching the cars go around the track at blistering speeds, and the skill of the drivers trying to manoeuvre the cars around the track as effectively as possible. It was the ultimate combination of human engineering and driver skill that teams needed to balance in order to outperform one another by mere milliseconds.

Since I wasn’t very good at racing, it was a career I couldn’t pursue, so I ended up getting into software development instead. It may not have the thrill of speeding around the track, but seeing software come together and move into production still gives me a huge rush of excitement.

I’ve kept up my interest in racing, though, and, surprisingly, by researching what top motor racing teams do to improve their vehicles’ speed, I’ve figured out a way that we can apply those same principles to improving software performance. As part of an article series, I want to take you through some of those critical applications, including:

Effective monitoring

Selecting the right architecture for the solution

Code optimisation, and

Continuous testing.

In the first article, I will focus on laying the foundations of monitoring and architecture. I will focus on code optimisation and testing in more detail in two later articles.

Measuring performance

Formula 1 teams use telemetry to gather millions of lines of data on every facet of their car’s performance, so that they know what they need to improve and where. We should apply the same principle to our software: before you can improve its speed, you need to know what is slowing it down and where the problem lies. To do this, you need to put monitoring in place so that you can properly evaluate the performance of your software.

Of course, you can’t measure everything when it comes to software because there is simply too much data. Monitoring can help you focus on identifying the specific areas that need the most improvement.

As a starting point, the following things are important to monitor in order to understand what is happening in your software. When building an application, monitoring these things will help you understand the specifics of your application and how it is used. The more you understand about the usage and performance of your application at different levels, the better you can design it.

Things to monitor on your hardware and OS

When optimising for performance, you first need to understand how the tools that you will be using to build or manage your software are functioning. Here are some things to look into:

Memory usage – Having an idea of the memory utilisation on the underlying servers, as well as how much memory your services consume at different times of day and under different loads, can help you understand whether your servers are sufficiently equipped for the software that you’re working on. If they are lagging, it may be a sign that you need to make things more memory efficient.

Network bandwidth and latency – Especially when users are connecting to your services via the internet, it is important to understand the latency that they experience, and the bandwidth you have available for connectivity. Knowing this will help you to understand how users are experiencing the performance of your system and allow you to make tweaks as a result.

CPU usage – By measuring how much CPU processing is utilised by your different applications and services, you can identify where the servers’ CPUs are taking strain. This will help you understand the load your software is placing on the server, allowing you to scale more effectively and identify poorly optimised code.

Disk usage – Knowing how much disk space is available is important because, if you are going to be storing customer content or scaling services to work with more users, you need to be aware of server disk space. This will help to not only ensure that you don’t run out of it, but also to understand how it affects the performance of the software in terms of speed.

Various caches at all layers – Memory is more than just RAM. Because of this, you should measure not just your RAM usage, but how memory is utilised across all the different caching layers on your hardware. This will help to identify ways your code optimisation can be improved.

OS or web browser versioning – Tracking these on both your server and users’ computers can provide an idea of who your users are and how they use your software. This will allow you to identify performance across browsers, as well as make further improvements to cater for the different operating systems and browsers.
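To make the hardware and OS metrics above a little more concrete, here is a minimal sketch of a metrics snapshot using only Python’s standard library. It assumes a Unix-like server, and the function and key names are my own; a real setup would use a proper monitoring agent and ship these values to a time-series database rather than printing them:

```python
import os
import resource
import shutil

def snapshot():
    """Collect a few basic host metrics using only the standard library.

    Unix-like systems only: os.getloadavg() and the resource module are
    not available everywhere. Illustrative, not production monitoring.
    """
    total, used, free = shutil.disk_usage("/")          # disk usage
    load1, load5, load15 = os.getloadavg()              # CPU load averages
    # Peak resident memory of this process (kilobytes on Linux)
    peak_rss_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return {
        "disk_used_pct": round(used / total * 100, 1),
        "cpu_load_1min": load1,
        "process_peak_rss_kb": peak_rss_kb,
    }

metrics = snapshot()
print(metrics)
```

Sampling something like this on a schedule, and tagging each sample with the host and time of day, is enough to start answering the “is the server equipped for this load?” questions above.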

Things to monitor on your application itself

When taking a closer look at the software that you are building itself, these are some useful questions to ask to make sure you are on track for optimal performance:

What is the most used section of a screen or page? – The most critical aspects of your software are where you need optimal performance. You may not always be able to optimise everything, so being able to focus on the key areas is critical.

Do the users require a different control layout to use the system optimally? – Sometimes, systems are designed to work a certain way and users end up using them differently. Understanding how people actually use your software can help you optimise it for their specific journeys.

Where does the system crash most often? – When things go wrong, you want to know where and why – not just for failures, but performance deviations as well. This will help you understand where the bottlenecks in your system lie, along with where code quality is poor and can be improved.

How does your responsive website render on different devices? – In this age of connectivity, people can interact with software using many different devices. This is something that you should be monitoring to understand how performance differs on each.

How long does it take to load data into the system? – To make performance gains, it is key to measure the interaction between your different services and the underlying databases as often as possible. The way your system handles data is what can lead to the biggest performance issues, and regularly checking on this can help you find ways to handle data more efficiently.

How long does it take to load a page on your site? – By measuring page load times across different user journeys, you can better understand how those journeys perform.

How does the latest deployment/build compare to previous builds? – Tracking this data will help you understand how new code may be affecting your software’s performance. When you’re measuring, try to ensure that as many factors as possible are considered; the more data you have, the more you can understand and possibly correct.
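The timing questions above can be answered with very little machinery. Here is a sketch of a timing decorator that records how long each call to a page or endpoint takes, then reports a 95th-percentile latency; the names (`timed`, `load_dashboard`) and the in-memory store are my own inventions, and a real system would tag each sample with the build version and send it to a metrics backend:

```python
import statistics
import time
from functools import wraps

# Hypothetical in-memory store; a real system would ship these samples
# to a metrics backend, tagged with the deployment/build version.
timings: dict[str, list[float]] = {}

def timed(name):
    """Record the duration of each call under the given endpoint name."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                timings.setdefault(name, []).append(time.perf_counter() - start)
        return wrapper
    return decorator

@timed("load_dashboard")
def load_dashboard():
    time.sleep(0.001)  # stand-in for real page/data loading work
    return "ok"

for _ in range(20):
    load_dashboard()

# 95th percentile: quantiles(n=20) returns 19 cut points; index 18 is p95
p95 = statistics.quantiles(timings["load_dashboard"], n=20)[18]
print(f"load_dashboard p95: {p95 * 1000:.2f} ms")
```

Comparing this percentile build-over-build is exactly the “latest deployment vs previous builds” check described above.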

A note on measuring performance during development and post-release

It makes sense to monitor how our software is performing once it is being used, but I would say that finding performance issues only at that stage is too late. To be most effective, you want to monitor everything during development as well as after release. While the development environment is less stable and it takes effort to put the necessary monitoring tools in place, the value of catching things quickly is, in my opinion, worth the setup time. Much like in racing, the car’s mechanics need to be checked before a race as well as when it is on the track, to make sure that it is set up to function well before it gets going and continues to function well when it is burning rubber.

Because there is such a difference between the two environments in software development, you need to benchmark based on those differences. To do this, you will need to measure performance over a period of days and weeks to establish what your ‘normal’ is. Once you have that, you can identify deviations in software performance more readily against these set standards.

If you have multiple development environments, each one should be benchmarked. The reason for this is simple: Each environment has a variety of its own environmental variables that affect performance, and you want a standard baseline to measure against. That way, deviations can be easily picked up. This makes it a lot easier to determine whether they are related to the code, database or environment.
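The baseline-and-deviation idea above can be sketched in a few lines: summarise a window of historical measurements into a “normal”, then flag anything too many standard deviations away from it. The numbers below are made up purely for illustration, and a 3-sigma threshold is one common choice, not a rule:

```python
import statistics

def build_baseline(samples):
    """Summarise historical response times (seconds) into a baseline."""
    return {"mean": statistics.mean(samples), "stdev": statistics.stdev(samples)}

def is_deviation(baseline, value, sigmas=3.0):
    """Flag a measurement more than `sigmas` standard deviations from normal."""
    return abs(value - baseline["mean"]) > sigmas * baseline["stdev"]

# Illustrative numbers only: a week of p95 response times for one endpoint.
history = [0.41, 0.39, 0.42, 0.40, 0.43, 0.38, 0.41]
baseline = build_baseline(history)

print(is_deviation(baseline, 0.40))  # within the normal range
print(is_deviation(baseline, 0.95))  # well outside it: worth investigating
```

Keeping one such baseline per environment, as described above, is what lets you tell a code regression apart from an environmental quirk.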

Chassis/Architecture: Why a solid base is important for building a solid solution

Once you have identified problems with your software through effective monitoring practices, you need to think about how to build a solution.

Thanks to the Fast and the Furious franchise, many people think that all you need is a fast engine, new tyres and some nitrous to turn any old car into a thoroughbred racer. Truthfully, while these things might allow for some short-term performance gains, if the structure that holds the car together – known as the chassis – is not designed for speed, then the car is likely to cause long-term maintenance issues and suffer consistent dips in performance as time goes on.

With software, it is very similar. We can write the most efficient code and have the best algorithms in place but, if we don’t have the right architecture for our system, we’re unlikely to make decent performance gains.

There are many different types of software architectures out there that all serve different purposes, and yet people often tend to only focus on the ones that they are most familiar with. This can result in inefficient processes that negatively affect performance.

Exploring different architectural designs can help you identify a better design for the intended use of your software, and come up with better, more effective solutions for problems that have arisen. To save you time, here’s an article I wrote where I unpack a bunch of different architectures and explain where I think they’re best utilised.

How I think about choosing architectures for software

On one particular project I worked on, I had to identify ways of changing an old, monolithic piece of software into something that performed better. When I first started this task, I immediately thought about splitting it up into smaller services and then writing APIs in Scala and a front-end in Java, because that was the tech stack that I was familiar with.

However, when I evaluated the situation and understood how the software could be improved to perform better, I realised that it was very heavily dependent on its database. This was where most of the complexity lay, which meant that it would be difficult to simply replace or split up the database without changing the function of this particular system.

My first instinct would’ve been wrong because having multiple microservices running would’ve placed unnecessary strain on the database, resulting in a performance bottleneck. Thanks to an effective monitoring process, and some research into different architectures, however, I could quickly identify a better solution for my particular problem.

So, rather than just going the microservices route, I decided to adopt:

A layered architecture that was more suited to database interactions, and

A programming language like Golang at an API level, which is less taxing on server memory and CPU.

To make this decision, I followed a process to determine which architecture would best help my system perform better:

First, I evaluated the data needs of the software.

Then, I identified how customers would interact with my system and what I would need to be able to scale my user base.

I assessed the technical skills in my team and used that to determine which systems and programming languages would be the easiest to adopt.

Finally, I evaluated the different architectures and programming languages against these criteria and chose the one which would effectively meet all of these needs.
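One way to make that final evaluation step concrete is a simple weighted scoring matrix. The criteria, weights and scores below are entirely made up for illustration – the real weighting depends on your project and team – but the shape of the comparison is the point:

```python
# Hypothetical criteria and weights, purely for illustration.
criteria_weights = {"data_needs": 0.4, "scalability": 0.3, "team_skills": 0.3}

# Made-up scores (1–5) for how well each candidate meets each criterion.
candidates = {
    "microservices": {"data_needs": 2, "scalability": 5, "team_skills": 3},
    "layered": {"data_needs": 5, "scalability": 3, "team_skills": 4},
}

def score(option):
    """Weighted sum of how well an option meets each criterion."""
    return sum(criteria_weights[c] * v for c, v in option.items())

best = max(candidates, key=lambda name: score(candidates[name]))
print(best)  # under these made-up weights, the layered option wins
```

A matrix like this doesn’t make the decision for you, but it forces you to state your criteria explicitly – which is most of the value of the process above.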

Having mapped out that path, I was in a position to start building the most effective solutions for the problems that we were faced with.

Hopefully, you now have a clear idea about why it is important to monitor your software, both during development and after it has been released, as well as what you should pay particular attention to when looking to enhance performance. You should also understand that correcting mistakes picked up during monitoring can only be done effectively when you use a solid architecture – or chassis – to build the solution.

In part 2 of this series, I will spend some time deep-diving into how you can optimise your codebase to really set it up to win. 🏁

When not changing the world of software development, Craig can also be found writing for Critical Hit as well as his own LinkedIn blog, designing board games for his company Risci Games and running long distances for no apparent reason whatsoever. He likes to think that his many passions only add to his magnetic charm and remarkable talent – at least, that’s what he tells himself when he goes to sleep at night!