What’s the Future for OO Programming Languages?

I spent most of my software career either writing, designing or architecting solutions built with a heavy object-oriented (OO) bias. I simply cannot even estimate the number of lines of C# that I’ve written or read. Even now, I love the abstract elegance of good OO code.

However, I’m left wondering how much of a future it has left.

Here are a few thoughts about where mainstream programming is going, and what choices programmers will favour.

Note: When I talk of OO in this article I’m referring to strict, compiled OO languages such as C# and Java, not to dynamic languages that support objects and classes like Python and JavaScript.

Web APIs

Microservices, the second and more complete incarnation of Service-Oriented Architecture (SOA), allow us to decompose our logic into reusable services that are accessed via open network protocols. This is crystallising into an accepted convention of REST/HTTP. The code for our microservice, i.e. what’s behind the API, doesn’t matter to the outside world. Is it OO? Is it even compiled? Who cares?

Indeed, programming web APIs is functional in its very nature.

This is partly why JavaScript is such a good fit for server-side code. Whilst JavaScript has been updated to incorporate many OO features such as classes, it remains functional at its core. To define a REST API in JavaScript is really just to map a function onto a route. And since functions are first-class citizens in JavaScript, well that’s just dandy.
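
To make that concrete, here’s a minimal sketch of the function-onto-route idea. I’ve used Python and Flask rather than JavaScript purely for illustration (the Express equivalent is essentially app.get('/hello', fn)); the route and handler names are made up:

```python
# A minimal sketch of "mapping a function onto a route".
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/hello/<name>")
def hello(name):
    # The API call is just a function invocation: input in, data out.
    return jsonify({"greeting": f"Hello, {name}"})

if __name__ == "__main__":
    app.run(port=8080)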

By contrast, take a look at some of the implementations of service interfaces in OO languages (e.g. .svc and .asmx in C#) and you’ll notice that they have shoehorned in an OO interface / class structure when in fact an API call invokes a function. Even a “Hello World” service compiles to a binary of astonishing size, because of all the plumbing that’s being done for you under the bonnet.

Look at the size of a NodeJS JavaScript “Hello World” app. A kilobyte? It’s faster too. A lot faster. Functional programming and functional languages definitely have the edge in this area of programming.

Serverless Functions

I don’t know how many of you have used AWS Lambda or its equivalents (Google Cloud Functions, Azure Functions). This is a nascent area of programming, made possible by cloud platforms. Essentially, serverless functions are pieces of code that are invoked by an event. However, they don’t have a permanent host, hence the “serverless” moniker.

When an event trigger occurs, such as a message being placed on a queue, the cloud platform spins up an instance to run the code that handles the event. When it’s finished, the instance can terminate and hence free up resources. This is an amazingly powerful concept, because it really is getting as close as possible to a pay-as-you-use model.

And the programming paradigm? Functional. Dynamic. You invoke a function to handle the event and the data that comes with it. Ideal language for this type of scenario? Python.

What you can achieve with a few imports and a few lines of code in Python requires a mind-bending number of references and components in many formal OO languages.
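
As a flavour of that, here’s a minimal sketch of a Lambda-style handler in Python. It assumes an SQS-triggered event shape, and process_order is a hypothetical stand-in for your business logic:

```python
import json

def handler(event, context):
    # AWS Lambda entry point: invoked per event, no permanent host.
    # Assumes an SQS-style trigger, where each record carries a message body.
    records = event.get("Records", [])
    for record in records:
        message = json.loads(record["body"])
        process_order(message)  # hypothetical business logic
    return {"processed": len(records)}

def process_order(message):
    print(f"handling order {message.get('orderId')}")
```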

UI

User interface (UI) is a very interesting area from a technology point of view. My old boss always used to say that the only thing a user ever knew of your software was the screen in front of him. You’d better make that good or your users will all assume everything else is rubbish as well. No matter how unfair, that’s the way it is.

As a result, UI is the area where the real pace of change is felt. You only need to go back a couple of versions in many pieces of software (or websites) and you will cringe at the clunky interface you used to take for granted. UI sells.

Good UIs are stateful too, and hence they lend themselves to OO design. Browser-based UIs were something of an exception, as postbacks to the server to refresh HTML content were effectively functional web API calls. The trend towards single-page applications and frameworks such as ReactJS and Angular has meant that even browser-based UIs are now falling in line with OO programming methods. Ironically, using JavaScript.

I see UI as one of the last great outposts of OO programming.

Frameworks

The open source movement, combined with open package repositories and package managers (e.g. NPM, PyPI, NuGet), has massively changed the way we use code. Software is built on software, and if a problem has been solved before, well, you’d be insane not to reuse the solution.

In terms of language and programming paradigm, the jury is out on this one. The available packages are so split that it’s hard to make predictions: there is a ton of functional JavaScript code on NPM, just as almost everything on NuGet is OO.

This is an interesting one to watch. Frameworks and framework extensibility were a case study for OO design patterns. Now, the functional world has found its own patterns for extensibility and reuse. I’m interested to see what wins out in this area.

Other Trends

The trend away from installed on-premise software to Software-as-a-Service (SaaS) is only going to accelerate. This will build on the points above, but OO’s real demise will be felt when companies are decommissioning system after system from their own datacentres to avoid having to maintain and support them.

Add to this the shift to cloud. Do the maths: cloud represents huge savings and huge opportunities if embraced correctly. Less so for a pure lift and shift of on-premise systems, but organisations that are architected from the ground up to take advantage of cloud and SaaS will have a huge competitive advantage. Again, this is more of the same: APIs, serverless functions, UI.

DevOps is a hot topic, and one close to our hearts at 345. If you look at what’s going on there you find a lot of scripting. Python is very strong, but so is Ruby. Bash is very procedural; PowerShell is mostly procedural but also built to support .NET. I think Python is going to win big in this area over the next few years. Just look at the AWS CLI, which is written in Python. Python is also widely adopted within Google.
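
For a taste of why Python reads so naturally here, consider a small sketch using boto3 (the AWS SDK for Python) to list stopped EC2 instances – the sort of ad-hoc operational query DevOps teams run all day. Purely illustrative:

```python
import boto3

# List stopped EC2 instances and their tags: a handful of lines of Python.
ec2 = boto3.client("ec2")
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["stopped"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance.get("Tags", []))
```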

Summary

This has been a whirlwind tour of a big subject area, sprinkled with a thin layer of thoughts. By all means add your own input to the comments.

I think the big growth areas in programming are going to be (1) further decomposition into microservices and APIs, (2) serverless functions to handle asynchronous processing, (3) SaaS applications replacing installed applications.

Of these, I think the first two will be dominated by functional programming and dynamic languages.

If you’ve been used to OO you’ve probably told yourself time and again that you’re doing it the “proper” way and those functional programmers / scripters are cowboys. Free your mind to accept the fluidity of a functional language: the possibilities are huge. And realise that the scripters aren’t cowboys after all, they just have a different way of thinking. They don’t rely on compilers, they rely on tests. Which is a good thing.

Do Businesses Dream of the Public Cloud?

In as little as a decade, the public cloud has emerged from nowhere, offering the dream of consumption-based computing to businesses across the world. Gone are the days of large capital outlays and long lead times to provision infrastructure; nor is there any need to run privately owned physical datacentres, or to host infrastructure such as servers and networking equipment in third-party hosted datacentres.

All that is needed now to provision huge swathes of infrastructure (IaaS), platform (PaaS) and software services (SaaS) is a laptop, a network connection, a browser and optionally a command line session – oh, and not forgetting a credit card.

Early in the cloud computing era, traditional organisations, such as those in the finance and government sectors, often viewed cloud computing with suspicion, as something not to be trusted: “how could the public cloud ever be as secure as my own data centre?” or “I want to have physical control of my data”.

The reality now is that a lot of these traditional barriers to cloud adoption have been eroded to the point of obliteration, for one simple reason: cost.

A decade of investment, and the occasional price war, by the public cloud providers – particularly the two biggest, Amazon’s AWS and Microsoft’s Azure – has given them such geographic reach and such a large catalogue of services that it is becoming ever harder to ignore them if you are serious about running your business using IT. Not to mention that both of these providers support marketplaces through which third-party companies can offer their own services, providing a frictionless way to provision and use third-party cloud services alongside the providers’ first-party services.

So, when it comes to adopting the public cloud, in what ways are businesses considering the jump? After all, cloud adoption, particularly for traditional businesses that have made huge investments in on-premise IT infrastructure, is likely to be a staged affair.

The Three Waves of Public Cloud Adoption

First Wave

The lure of computing on demand, with zero lead times for provisioning infrastructure and a perceived lighter operational management overhead, has proved too much for even some of the largest organisations. In a lot of cases IT is seen as an overhead to the business rather than an enabler of more efficient operations, better ROI and increased profitability; so the chance to reduce that overhead is seen as a quick win – a way to ‘lift and shift’ IT to reduce costs but, in all other respects, continue in the same vein as before.

The lift and shift of IT to the cloud typically starts with virtual machines and virtual networks, with private secure VPN links to allow office workers to continue to operate without too much disruption. It often mimics the on-premise configuration as much as possible. In terms of getting to grips with the magnitude of the cloud, this offers the least mind-bending option.

It would be remiss not to mention hybrid cloud solutions when it comes to the first wave. For larger organisations a hybrid solution can make sense during the transition to the public cloud, by virtue of the fact that some migration strategies can be complicated and time consuming. There is also the risk of incompatibilities when running on-premise applications in the public cloud, and in some cases existing workloads simply cannot move – such as those running on legacy mainframes or requiring specialised hardware. Hybrid solutions can help to alleviate the pressure of a large migration and offer a partial path to the public cloud.

Second Wave

The more enlightened organisations see the public cloud, and the myriad services on offer, not only as a way of taking advantage of cost savings and one-click provisioning of IT infrastructure, but also as a way of re-imagining their IT operations and applications: to further reduce costs, to increase operational agility, and to provide a greater service to their customers with systems that are more secure, always on and accessible from anywhere.

After the lift and shift of IT services has been completed and further cost savings are desired, re-architecting core applications to take advantage of the cloud and its native platform and software services is a natural next step. Once the cloud has been comprehended to some degree, the possibilities of improved development and operational capability germinate, and businesses seek to capitalise on them to streamline even further.

Businesses that are new and go straight to the cloud (why wouldn’t they?) have skipped the first wave. Just as millennials have never seen life prior to the iPod, so the second wavers have never known what it is like to buy, operate and watch IT infrastructure depreciate into decrepitude, only for the cycle to begin again.

Third Wave

Finally, there are the organisations that are ahead of the curve, thinking about the competitive advantage they can enjoy by fully embracing the cloud and its ability to offer vast computing power, cavernous storage capacity and internet-level scale that would make even the largest of organisations quake at the capital cost; using on-demand resources to constantly ask questions of their data and to provide deep insights on a near real-time basis, so that every ounce of profitability can be wrung out of their offerings to their customers.

At this stage, organisations no longer see IT merely as a way of running cost-effective, efficient services. It is used as a way to gain advantage over others and to squeeze the margins: because the data insights are accurate, fresh and available straight away, organisations can look beyond the curve instead of being behind it, anticipating demands and trends.

Fourth Wave?

Is there a fourth wave? Will cognitive services, robotics and AI underpin the next wave of applications built on the cloud, and be considered the fourth wave of adoption?

Cloud services are already becoming available in this area, and products such as Amazon’s Echo show us a glimpse of the sorts of applications organisations can start to consider offering – ways not simply to sell products and services to customers but to enrich their lives. Surely only the sheer capacity of computing, storage and services the cloud offers on a consumption basis can effectively support this theoretical wave.

The Future

Fast forward a few years and businesses will wonder how they ever operated without the public cloud. They will look back on the days of private datacentres – with water-cooled mainframes, halon fire systems and false floors with plumbing more complicated than a public sewage works – with the same disbelief that millennials probably reserve for music cassette tapes, VHS and only five channels on the TV*.

For those in the cloud business, such as ourselves at 345 – with software and consulting practices, and running and operating our own business entirely in the cloud – the future looks very bright indeed.

* In the UK before satellite services, on-demand television and streaming services there was only BBC1, BBC2, ITV, Channel 4 and Channel 5. Life’s a bitch…

Is Vertical Scalability Even a Thing Any More?

Enterprise computing relied heavily on large, expensive database infrastructure. The cloud has rendered this type of design obsolete. Vertical scalability has given way to elastic horizontal scalability.

Vertical scalability vs horizontal scalability

Vertical scalability means the ability to increase the capacity of your hardware to handle more load. This was definitely a thing in mainframe days (so I’m told), where you had a single “machine” – the mainframe – and then bought more components for it if needed.

In the era of enterprise computing, vertical scalability was usually most applicable to relational database servers. For a long time, scaling out application servers has been pretty straightforward, but a relational database management system (RDBMS) needs a single view of a data set in order to enforce relational constraints. The larger the dataset, the more memory, CPU and disk I/O the server needs in order to handle increased load.

Vertically scaled systems are therefore dinosaur technology. They are big, cumbersome, expensive to buy, expensive to maintain and prone to becoming single points of failure.

Horizontal scalability, on the other hand, means increasing the number of machines that act in parallel to process workload. There are some design constraints to horizontal scale, principally that the machines should be stateless*. In order to distribute parallel work you should not care which machine you direct load towards; it should behave the same regardless.

If vertical scale is dinosaur, horizontal scale is definitely mammalian. Small, nimble, cheap to procure, ideally cheap to maintain (lots of things that are the same bring the benefits of automation).

*Some web farms use sticky sessions, so a user’s state is tied to a server, but this is usually done with the explicit knowledge that a server outage will lose some transient state.
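
To make the statelessness constraint concrete: a node stays interchangeable if it keeps per-user state in a shared store rather than in its own memory. A rough sketch in Python, assuming a Redis instance is available (the host name and key scheme are invented):

```python
import json
import redis

# A shared store keeps the node stateless: any instance can serve any user.
store = redis.Redis(host="session-store.internal", port=6379)

def handle_request(user_id, item):
    # Load state from the shared store, not from process memory.
    raw = store.get(f"cart:{user_id}")
    cart = json.loads(raw) if raw else []
    cart.append(item)
    store.set(f"cart:{user_id}", json.dumps(cart))
    return cart
```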

The web changed all that. No matter how good your database server, when you have millions to hundreds of millions of users your ability to scale vertically will run out of road sooner or later. Early pioneers of sharded data architectures showed that you can scale better and more reliably using commodity hardware and distributing load across many machines than you can with very powerful – and expensive – dinosaur systems.

Database technologies have moved on as well. The evolution of NoSQL database engines has seen sharding baked into the database technology itself, instead of being a layer that you need to build yourself on top of an RDBMS. The popularity of MongoDB (https://www.mongodb.com/) is testament to this, making it into the top 5 most popular database technologies for the past couple of years. (It would be interesting to get stats on the prevalence of database technologies categorised by user load.) Cloud databases such as AWS DynamoDB (https://aws.amazon.com/dynamodb/) also bake sharding into their fundamental design.
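
In DynamoDB, for instance, the partition key on each item determines which shard it lives on, so sharding is part of the data model rather than a layer you bolt on. A hedged sketch with boto3, assuming a hypothetical orders table keyed on customer_id and order_id:

```python
import boto3

# The partition (hash) key routes each item to a shard automatically;
# the application never manages shards itself.
table = boto3.resource("dynamodb").Table("orders")  # hypothetical table

table.put_item(Item={"customer_id": "c-123", "order_id": "o-456", "total": 42})
response = table.get_item(Key={"customer_id": "c-123", "order_id": "o-456"})
print(response.get("Item"))
```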

The next generation: elastic scale

Cloud technology is a refinement of horizontal scale. For those of us building cloud apps, the exciting thing is that the cloud infrastructure provides us with most of the things we used to have to design into our solutions to make them scalable and resilient (load balancing, sharding, provisioning new instances, multiple copies of data, separate fault domains). We still need to understand how these work, and we need to use them wisely in order to get the best performance for the lowest running cost.

An important feature of cloud scaling though is that we no longer have to provision physical hardware. We can scale up by running a couple of lines of script, but just as importantly we can scale down again when we don’t need the capacity. Most systems run on cycles of load. Internet shoppers tend to be thin on the ground overnight, banking activity is busiest around payday, logistics is busiest at Christmas. When we build systems in our own datacentres we have to provision enough tin to handle our peaks. When we build in the cloud we can pay for what we need, when we need it.
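
And “a couple of lines of script” is barely an exaggeration. As an illustrative sketch with boto3, against a hypothetical Auto Scaling group:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out for the peak, then run the same call with a lower number
# when the load has passed - paying only for what is needed.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="web-frontend",  # hypothetical group name
    DesiredCapacity=10,
)
```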

For many businesses cloud offers massive potential cost savings, and elastic scale – paying for the capacity we need at any point in time – is a major component of that.

In summary

Large-scale computer systems being designed now should not have to rely on large, expensive hardware in order to make them run. Vertical scale is a thing of the past. The future belongs to elastic scale.

Microservices: A Restatement of SOA

345 Systems have just been exhibiting and presenting at API World over in San Jose (http://apiworld.co/). This has been a great conference and expo, focused on APIs and microservices, with a whole track on building solutions through integration of APIs.

Microservices are a really hot topic at the moment, and people are jumping on the bandwagon really quickly.

An observation I make in this regard is that microservices, to me, are simply a restatement of SOA principles. SOA was a big thing 10-15 years ago, so why is it that we are now getting so excited about microservices?

I have a theory that it’s because when SOA came out most people did SOA wrong. We used to build enterprise systems with distributed layers (COM+/MTS on Windows vs CORBA / Java EE / EJB on other platforms). After this, we entered a new world of web services, usually based on SOAP / XML, that allowed us to connect services over HTTP instead. This gave advantages in terms of ease of connectivity, navigation of firewalls, Internet connectivity, multi-site WAN connectivity etc. My contention is that for many people SOA was confused with web services. People used web services as a means of writing remote procedure calls (RPCs), without adopting SOA principles.

If you look back at any SOA architecture theory from back then you’ll find that autonomy of services was already an essential architectural element. The point was that you should be able to connect to an SOA service and all you would need to know was the signature of the API and the rest of the service, whether it be back-end processing, data storage or logic, would be hidden behind the service façade. The technology used to implement the service would be irrelevant as open protocols were used.

Sound familiar? That interfaces have now coalesced around REST and JSON instead of SOAP and XML is beside the point – it is implementation detail. The important architectural concept is the same. Microservices encapsulate a piece of logic, do a specific thing and do it well. Crucially, they do not expose any dependencies on other systems. This is the good SOA practice we’ve been striving for all along.

Perhaps one of the things that has changed perceptions is the hosting model, and the move from hosted servers to cloud computing. Cloud-based web applications tend to align closely with the microservice mindset because of the way cloud deployments tend to force you into deploying services as independent units.

In the old hosted server model, a server was a significant investment: it takes capital and ongoing resources to keep a server running in your datacenter, and so the natural inclination is to get the most out of each server. Many organisations therefore use a model where many web services are deployed onto a single server to maximize the use of that server. Whilst this makes sense from the resource-allocation perspective it means that dependencies for one web service can be inadvertently shared across all other services on the same machine. The same goes for operating system updates and drivers, let alone dealing with the impact of competing load on machine resources. Scalability in this hosting model is constrained by the need to size for the maximum workload of all of the combined services hosted on the machine.

The cloud model is different. Hosting instances, whether they be vCPUs or micro virtual machines, have no inherent capital cost. They attract a usage-based charge, but you right-size them to the workload you need, meaning that they achieve an efficiency of resource allocation in a different way. If you host a single service on a cloud instance you have control of the dependency stack, meaning that updates for one microservice cannot bleed into the dependency stack of another microservice. Each service is independently horizontally scalable through the addition of more instances, meaning that scalability is easier to achieve, but also right-sizing is easier as each microservice is sized to its own needs and not to the needs of the collective load.

Of course, there are still pitfalls. From a dependency perspective, one of the most important features of a microservices architecture is the isolation of data. This is also one of the main errors that I have seen with implementing web services in enterprises. People think that if they have 10 web services all pointing at the same database they’ve “done SOA”, even though they’ve introduced a nightmare of cross-dependencies. You find that you can’t upgrade one service because the others can’t handle the change in database schema. Then you find you need to upgrade them all at once, which effectively results in the monolithic deployment you were trying to get away from in the first place.

So the move to microservices not only needs to achieve isolated services in terms of scale and software-stack dependencies, but also isolation of data dependencies.

An early casualty of this architectural shift to HTTP APIs is that you lose the world of ACID transactions. This can be scary for people who have taken comfort in the certainty of transactional integrity for most of their professional lives. However, designing for eventual consistency instead of ACID consistency is a liberating change. ACID forces synchronicity and serialisation, and synchronicity is the destroyer of scalability. If you truly want to scale you have to design for each instance being stateless and autonomous.
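
In practice, designing for eventual consistency usually means making handlers idempotent, so redelivered or out-of-order messages do no harm. A minimal sketch – the in-memory set here is purely for illustration; in reality the dedupe record would live in a durable store or behind a unique database constraint:

```python
processed_ids = set()  # illustration only; use a durable store in reality

def handle_event(event):
    # Idempotent handler: a redelivered event is safely ignored, so the
    # system converges on a consistent state without a global transaction.
    event_id = event["id"]
    if event_id in processed_ids:
        return  # duplicate delivery, already applied
    apply_change(event)          # hypothetical local state change
    processed_ids.add(event_id)  # record only after a successful apply

def apply_change(event):
    print(f"applying {event['id']}")
```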

All of which we were saying about SOA 15 years ago. What’s changed is that the tech to support it has become a whole lot easier.

Enterprise Application Configuration #4: The Application Configuration Lifecycle

In this series of blog posts I am discussing aspects of Enterprise Application Configuration and how we have come across and resolved issues in real-life mission critical systems.

When I talk about Enterprise Applications, I generally refer to applications that are (a) mission critical and (b) distributed across multiple machines. Sometimes (c) geographically dispersed across different data centres also applies.

In this post I’m going to look at the configuration lifecycle.

We’ve all been there. A fix or update is needed. “Just hack this text into the web.config and restart the website, it’ll all be OK.”

(only for things to be distinctly not OK at some point later when this change gets regressed by the next deployment)

The problem here is that configuration needs to have some kind of deployment lifecycle, allowing configured values that are intended to be updated outside the application deployment lifecycle to be modified, but in a manner that gives you some control. In this context the “modify-on-the-fly” approach needs to be consigned to history.

When I talk of configuration data that varies outside of the application lifecycle, think of things like shipping rates in an eCommerce site. If the shipping rates change you shouldn’t have to redeploy your entire site. Similarly, it’s a bit much to go writing database tables for configuration such as shipping data because the data is essentially static. You could have a single JSON, YAML or XML file with this information in it, and simply update the document when required.

What you need is for your web nodes to detect that there has been a configuration update and refresh their configuration as required. Seamlessly. Controlled.
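
As a minimal sketch of that detect-and-refresh loop, assume the configuration lives in a local JSON file of shipping rates (with a hosted store the check would be an API call or a push notification instead, but the lifecycle is the same):

```python
import json
import os

CONFIG_PATH = "shipping_rates.json"  # e.g. {"standard": 4.99, "express": 9.99}

_cached = {"mtime": None, "data": None}

def get_config():
    # Reload only when the file has actually changed, so reads stay cheap
    # and every node picks up an update without a redeploy or restart.
    mtime = os.path.getmtime(CONFIG_PATH)
    if mtime != _cached["mtime"]:
        with open(CONFIG_PATH) as f:
            _cached["data"] = json.load(f)
        _cached["mtime"] = mtime
    return _cached["data"]
```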

At 345 Systems we’ve been working with large-scale enterprise systems with exactly this type of scenario. This is why we’ve developed the cloco Enterprise Application Configuration software, which is a flexible Configuration-as-a-Service API + tooling, allowing developers to build configuration updateability into their applications and support processes.

We think that developers need to think carefully about configuration data at design time, specifically to separate the configuration settings that are truly deploy-once (such as a connection string, although even that is debatable) from the settings that are intended to be updated during the life of the application, even if only infrequently.

We think that the tooling for creating, deploying, reading, validating and updating this configuration data should make it easy for developers and easy for administrators to put in place a configuration lifecycle that improves their application and business agility.

We invite you to read the overview of our product, cloco, and consider how this might solve configuration problems that you encounter in your applications. We’d also like you to let us know if there are any aspects of the application configuration lifecycle that we haven’t thought about, where we could enhance cloco to be even better.

If you’d like to find out more about how our cloco Enterprise Configuration Store can help you please get in touch via the contact page. We’d love to hear thoughts and comments so feel free to add those at the bottom of this post.

Enterprise Application Configuration #3: Version Rollback

In this series of blog posts I am discussing aspects of Enterprise Application Configuration and how we have come across and resolved issues in real-life mission critical systems.

When I talk about Enterprise Applications, I generally refer to applications that are (a) mission critical and (b) distributed across multiple machines. Sometimes (c) geographically dispersed across different data centres also applies.

In the first post in this series I discussed briefly some issues regarding the retention and access to version history for configuration in enterprise applications. In this post I’m going to look at the most valuable aspect of keeping a version history: the ability to roll back to the last known good version.

I have seen operators copying bits out of configuration files, pasting the snippets into Notepad (or emails, chat windows etc.), or doing a “copy and comment out” change to configuration files in order to have something to roll back to if their (albeit ill-advised, manual) configuration change does not succeed, or even makes things worse.

I have also worked in better environments where all changes are automated, but even in these cases the final configuration often comes from a combination of a template + build tokens, each of which may have been subject to change prior to a deployment. Whilst it is possible to roll the entire application back to a previous state it is often not possible to have a previous version of the actual configuration once a new deployment has been made, replacing the old version.

Whilst I think that deployment and DevOps automation is the way to go in almost all cases, the 345 way of thinking about configuration only serves to build on these good practices, not replace them.

We think that an important aspect of any Enterprise Configuration Store is support for versioning and rollbacks, and we have thought deeply about how best to achieve this in practice.

When we developed cloco (Cloud Configuration), our Enterprise Application Configuration platform, we recognised that saving the version history for diagnostic purposes was only part of the story. We also wanted to provide tooling to allow and support rollbacks.

The cloco configuration store is accessed via a REST service, with the clients never needing to actually see the raw data stored there. We also wanted to build tooling on top of this to make the API transparent – to expose the functions of the REST API in a way that can be consumed readily.

When we developed the cloco Admin Console, we aimed to give cloco users and administrators access to view version history. We always intended to allow them to also initiate a rollback of the configuration to a previous version. When you’re developing software you would expect to be able to roll back a class file in your source control, but to be able to roll back a configuration section in production? We think it’s needed.

We are open minded about how many previous versions we are likely to support (5? 10? 20?), but we recognise that for a system to be practical there should be a reasonable number of previous versions available, and that administrators should be able to select any of the previous versions to roll back to.

If you’d like to find out more about how our cloco Enterprise Configuration Store can help you please get in touch via the contact page. We’d love to hear thoughts and comments so feel free to add those at the bottom of this post.

Enterprise Application Configuration #2: Maintaining a Version History

In this series of blog posts I will be discussing aspects of Enterprise Application Configuration and how we have come across and resolved issues in real-life mission critical systems.

When I talk about Enterprise Applications, I generally refer to applications that are (a) mission critical and (b) distributed across multiple machines. Sometimes (c) geographically dispersed across different data centres also applies.

In this post I’m going to look at a very simple problem that we have come across in the configuration of Enterprise Applications, and that’s the lack of version history. We have encountered a number of debilitating outages in production systems when incorrect configuration changes have been made, which then ripple out across all the machines hosting the application and result in total loss of service.

Some examples of this we have come across are:

Changing a configuration section by pasting in an XML snippet, only to discover later that the XML was invalid and the configuration could not be read. This type of error can kill an entire application in no time.

Changing a configuration section, with incorrect configuration settings, but the changes only getting picked up when a machine rebooted due to Windows Updates 3 weeks later. This made the diagnosis difficult because the “what’s changed recently” investigation pointed initially to more recent but irrelevant updates.

Deployments making partial changes to a system’s configuration, but operators not being able to accurately find which updates have succeeded and which failed, leaving the system in an inconsistent state.

In all of these cases the lack of any kind of version history that operators can readily call upon made the resolution of the incident harder.

Another relevant aspect is the manner in which configuration is updated. Sometimes configuration is deployed via application deployment in an automated fashion, with tokenised templates populated with the correct values in each environment. This is great, and is definitely the way to go for static configuration. Even so, there are still many cases where configuration is updated manually by operators (don’t get me started), even if it’s “temporary”. This type of change is much harder to track because these changes are often made informally and/or in response to other issues.

On top of this there are configuration changes such as modifications to IIS or BizTalk that are made via an administration console. These are not subject to the same risk as direct manual changes to configuration sections, but they still have the capacity to bring down enterprise systems and leave little or no trace.

At 345, when we developed cloco (Cloud Configuration), our Enterprise Application Configuration product, we recognised that version history was an essential part of the toolkit. The freeware version of cloco will ship with version history disabled, but the paid-for version of the software already supports version history logging. Normally, you won’t even know it’s there – it’s just when things go wrong that you know that our configuration platform has your back and saves previous versions of your configuration.

Also, when we created the cloco components we wanted to embrace the fact that configuration updates are a feature of everyday system maintenance. We wanted our tooling to support and enhance good practices, not paper over the cracks. This is why we supply deployment tooling to enable application deployments to work in harmony with our configuration store, and application deployments form part of the tracked version history just as ad-hoc updates do.

I still wonder how people manage to support distributed applications without a clear way of seeing the changes that have been made to configuration in the lifetime of the application.

Please feel free to leave thoughts in the comments section, and if you have any experiences to share or would like to know how cloco can help your business, please get in touch via the contact form.

Enterprise Application Configuration #1: Introduction

In this post I will be introducing the main concepts regarding Enterprise Application Configuration, which will form the basis for the rest of this blog series.

Most applications need some kind of configuration to store useful settings and values that should not be hard-coded into the compiled program. Configuration can take various forms, but some of the most common uses are:

Storing values that allow a program to change behaviour depending on the user, which is very common in UI-driven applications such as multi-tenant ecommerce solutions.

Storing semi-static business data that is subject to change outside the lifecycle of the program code, such as tiers of shipping charges by weight in an ecommerce application.

In general, the larger and more complex an application, the greater the demand for configuration: the more settings will be held in configuration and the more complex the configured data. It is also these larger applications that tend to have the strongest governance requirements, and for which there is least tolerance for the loss of service that can result from an incorrect configuration change.

This is the problem space we at 345 Systems have been addressing lately, and giving a great deal of thought to. Broadly speaking, when we talk about systems requiring Enterprise Application Configuration we are looking at solutions that need these general features from their configuration solution:

Applications deployed across a number of separate machines.

Applications whose configuration needs to be synchronised across multiple machines.

Applications that are distributed across data centres, especially geographically dispersed data centres where latency of data access may be an issue.

Cloud-based applications where nodes are provisioned based on load, especially where administrators have no control over the actual node the application runs on.

Applications whose development lifecycle requires deployment through various environments (e.g. Development, System Test, Pre-Prod, Production), where tracking changes in configuration across these environments is vital.

Throughout the rest of this blog post series we will look at some of the issues we have come across when building and maintaining mission-critical enterprise systems, and how we sought to mitigate them when we wrote our own Enterprise Application Configuration solution, cloco.