The Rigidity of the Modern Organisation

Internally, any organisation can be viewed as a set of processes, where each process is composed of smaller processes or elementary activities. Each step works on inputs and produces outputs for successive activities in the process, and each involves the participation of machines or individuals.

Sometimes it is necessary to change those processes, for example to introduce new products or services, or to optimise existing operations. One type of change is particularly important to this document: computerisation. Computerisation has led to dramatic increases in transactional throughput by automating and speeding up activities, but that increase has come at the expense of the ability to adapt. The processes that constitute the internal structure of organisations have become rigid.

This essay does not consider the effects of physical automation, such as the installation of large robots or other industrial machines that are difficult to move or reconfigure. It is concerned with the processes that govern those machines and orchestrate people.

Here I present my thoughts on how flexibility in information processing has been lost and what can be changed to regain it.

Flexibility is a very broad term. It is having choices. It is the ability to steer. It is the capacity to remember and learn in order to adapt or respond well. It is a structural property: the ability to survive unexpected shocks without shattering. A plane’s wings, though exhibiting great strength, retain the form of wings precisely because they are designed to flex. Flexibility is the ability to adapt to a sufficiently wide range of conditions while remaining a coherent body.

If we look at nature we do not see highly optimised, economical structures. What we do see is redundancy and inefficiency. A typical microcomputer, while able to count to a trillion in no time, will fail catastrophically if just one component is compromised. A brain, by contrast, employs redundancy: it degrades gradually over years, withstands losses, repairs and rewires itself, and operates effectively despite the apparent inefficiency of carrying a great number of normally unused parts. A human body contains two kidneys, two lungs and multiple senses. Plants have many overlapping leaves that obscure each other, instead of one large leaf. The very fact that organisms replicate competitively reflects the gene’s strategy of redundancy: triumphing in numbers over rival genes.

Some of the great debates on social orders revolve around the issues of centralisation and decentralisation and in essence these debates are about systems, redundancy and rigidity.

A highly centralised system carries the risk of fragility through absence of redundancy. A critical linchpin may fail and the whole system come toppling down. Risks are increased by making a small number of fallible entities highly critical. A highly decentralised system carries the risk of breaking in response to unexpected events by simply being unable to remain a coherent body. The self-organising flock of birds can suddenly become two, or evaporate into a cloud of lost individuals.

The highly centralised system may suffer the cost of long communication and planning cycles, and may find it hard to respond to necessary changes in time. (Large dinosaurs are popularly said to have developed secondary nerve centres to cope with signal delays along their nervous systems.) On the other hand, the highly decentralised system may be unable to pull in the right direction, or more often may simply be unable to avoid implosion and becoming centralised.

An organisation of people is an animal. What it requires first and foremost is some kind of externally recognisable identity, something that delineates it from the rest of the world. The above observations also suggest that this delineated thing must tread a middle ground between two extremes if it is to survive. On the one hand a highly crystallised internal structure leads to fragility and over-dependence on key components; on the other hand too much flexibility leads to a loss of the identity that defines it in the first place: it becomes amorphous, easily divided or evaporated.

The aim of any organisation must be then to permit any necessary amount of internal change in response to threats, opportunities, or other environmental changes, while at the same time ensuring that the integrity of the organisation as a recognisable and coherent entity is preserved.

For example, consider a general vehicle manufacturer under external pressure to improve the performance of its cars, as opposed to, say, its trucks. It is a failure if the manufacturer is forced to sell its truck business in order to improve its cars. It is equally a failure if the organisation is simply too paralysed or lumbering to respond to that pressure at all. The surviving organisation in a competitive environment will be the one that can quickly improve its cars with little internal effort.

I have suggested what I mean by ‘flexibility’ and what level of it is needed. A thing is a delineated entity. Too flexible and it evaporates; too rigid and it shatters, erodes or simply becomes useless and vanishes from our realm of concern. The right amount of flexibility is whatever it takes to preserve identity in the face of environmental changes. This essay is not about what it means to strike that balance*; rather, it argues that, in automating for the sake of short-term growth, we have gone too far toward rigidity, and it highlights the areas that I think need to be addressed.

In the past, information management involved recording details of events on paper, filing them in cabinets, manually transcribing from template to template, duplicating documents by hand and physically moving them from department to department. In place of the database management system, there were books of indexes, folders and cabinets. In place of the web user interface showing outstanding daily tasks, there were racks of forms to process. In place of message queues, there were scribes who sat at desks with ink and blotter. Where today there are globally unique identifiers and electronically generated random numbers, in the past mechanical devices generated codes on the pull of a handle. *references

For many companies the paper trail was less critical. Organisations were often small and agile enough to conduct their operations by word of mouth. While operations by and large followed typical daily routines, there was sufficient flexibility to deal with the unexpected by ad-hoc adaptation through a series of quick communications between all relevant parties. The paper trail lagged behind in a primarily descriptive role for the benefit of the accountants and tax offices. For many smaller firms this is still the case today.

Population increases, industrialisation and consumerism led to growth in the size of companies and the amount of trade they conducted. The railroad and the automobile allowed the family business to evolve into the supermarket chain, and the local bank to become a financial institution. *references

As companies grew, informal, word-of-mouth management methods ceased to suffice. Transactional throughput increased and the quality of the paper trail suffered. More information and calculations than could be humanly managed led to difficulties in monitoring and control. Soon the management were compelled to sacrifice their dynamic, reactive approach to things, and began to insist on stricter adherence to formal processes. Records were produced as processes were conducted. The records themselves became as much prescriptive as descriptive in that messages generated as the result of some task became the instructions to start some other. Paper documents and files were increasingly integrated into daily operations, simultaneously fulfilling the role of audit trail and task initiation or specification.

As time went by, mass production, globalisation and advances in telecommunication resulted in even greater transactional throughput, and in the later part of the twentieth century widespread computerisation began to automate that paper trail away. The quills, blotters and cabinets vanished, to be replaced by data centres, database clusters, web farms, intranets, and ‘enterprise resource planning’ (ERP) systems.

For most types of organisation though, one thing has remained comparatively constant: the basic processes themselves.

A withdrawal of funds is still a withdrawal of funds: The customer arrives at the business boundary – a physical desk at a branch or an internet banking web server – and makes a request. This is first recorded somewhere, which then sets off a whole chain of events. Accounts in ledgers get updated. Funds get reinvested. Portfolios change. Audit trails are kept. The actors in these chains may no longer be scribes and cabinets, but in essence the processes remain recognisable and familiar.

Before computerisation, each departmental head was fully aware of the processes carried out, ensured the necessary physical resources were in place, managed changes to the execution of those processes by procuring the required cabinets, scribes, room space and so on, and supervised the individuals responsible for carrying out this information processing.

This was a comprehensive type of line management that involved understanding and communicating detailed business rules to orchestrate those scribes and cabinets, or stackers and shelves, or what have you, and in such a way that the processes made best use of the resources available. The departmental managers had to lay down the information processing procedures and make sure that subordinates put them into effect. Procedural improvements may have meant literal, physical rearrangement of those people and files, while specifying which boxes and batches had to be shipped to which other departments and when. *references

In contrast today however, a great majority of establishments find themselves in the situation that those processes are now being conducted by computers. They are almost entirely automated with only sparse semi-automated or manual activities. The detailed knowledge of how the business operates no longer lies in the domain of the executives, but in the domain of software specialists. When orders are placed, they occur as messages buried deep in the code of websites. Those messages result in changes to accounts and stock levels, but this happens without a human in sight.

These business-critical processes are even outsourced. In the past, business supporting industry consisted of all those manufacturers and craftsmen that produced cabinets, pens, paper, ink and other such physical essentials. Their responsibility was to produce the raw materials and tools, and provide the physical infrastructure that allowed business processes to take place. They did not actually conduct the business processes themselves. The companies that produced tailor-made filing cabinets and notepads did not work with the contents of those filing cabinets. Today, in stark contrast, it is possible to purchase complete accounting packages, stock control packages, or custom databases that not only store that business information but shuffle those files around too. The line between the external supplier of supporting services and internal operations has become blurred.

So, it is no longer the departmental manager of the past who lays down information processing procedures and rules; it is the technical analyst, software architect and software developer. (These roles may even sit outside the organisation, as is often the case today with small traders who cannot operate without the web and must rely on external web consultancies.) While it may seem that those rules and procedures are dictated by business operatives and management, as a general rule this is not the case. The actual behaviour of line-of-business systems, and thus the behaviour of large parts of the organisation, is usually gathered from various sources and assimilated into a unique, integrated view. This is done by analysts who then pass it on to be committed to software code and configuration. If an organisation can be described as a set of business processes, it is that uniquely integrated view that forms the description, and that view is now most often understandable and readable only by programmers, technical analysts or others from the world of IT.

While IT departments may nominate a domain expert as representative and facilitator of communication, that expert or analyst is rarely capable of coordinating changes in software itself. Similarly, those IT experts who are very often also business domain experts – the architects, the lead developers, the technical analysts – are rarely authorised to make business process changes.

Today, putting change into effect can be a daunting and demoralising undertaking that involves many stakeholders and often fails. IT is commonly perceived as expensive and unreliable in its delivery track record. Even the small business of today, with its online distribution, often finds itself in the stifling position of having to learn to program web applications, outsource the majority of its operations to a busy and expensive team of software developers, or tie itself down to a limited and rigid software package.

The end result, after many decades of computerisation and IT infrastructure development, is that companies are now capable of high transactional throughput, but at a price: they have lost control over their own processes, lost the ability to monitor them, and internal change has become enormously difficult and expensive.

Over time, computerisation has put control, monitoring, knowledge and management of business processes into the hands of technical people, who are not considered owners of those processes. This introduces difficulties into making changes to those processes.

After a skim through the above, it could be said that the obvious solution to flexibility would be to authorise the IT experts, analysts and architects to run the business processes. Let them decide what gets automated, what gets replaced, and let them make key decisions in areas like product development and so on. After all, the understanding of overall operations lies within the software and systems realm.

This is usually infeasible. These systems are nearly always implemented using programming languages and other computing artefacts that require highly specialised knowledge. Those technologies are subject to constant change and improvement, and they require individuals dedicated to their field. And on those occasions where business knowledge has flowed almost entirely to one or two individuals in the IT area, the conditions are fragile and the remaining obstacles political.

Conversely, that same requirement of special technical knowledge is a barrier to the business departments putting those changes into effect themselves. Managers overseeing life insurance processes are usually not programmers, for example.

The sad truth is that those very same managers are also denied up to date knowledge of the pertinent business processes and rules they should be familiar with. The big picture and the small details are often lost in the software code. When trying to change something, it is not uncommon to find IT departments trying to reverse engineer existing systems to learn and describe how the business actually operates. These exercises of rediscovery are a painful consequence of some of the problems presented later here, including staff turnover where there is inadequate documentation or quality control.

So these seemingly unbreakable barriers remain in place, and it is with these that changes are made painstakingly through an elaborate process of specification, which we delve into now.

Change originates from a primary business goal and proceeds as sequences of delegations, fanning out like a tree, with each boundary involving ever more remote descriptions of what needs to be done. Each different role attempts to translate incoming specifications into outgoing specifications, sometimes expressed in entirely different terms.

The illustration shows how a primary business goal fans out into further goals with each specification becoming more remote in terms of language and skills.

Let us consider an example: A large retailer has identified a problem that locally administered promotional activities, such as printed advertising of certain products at specific stores by local store management, are causing unexpected demand on warehouses and resulting in blips of stock shortage. This causes undesirable ‘demand noise’ that amplifies back up the chain. The retailer wants to solve this problem, improving the supply chain without having to curtail the local promotional activities.

This primary goal leads to the creation of a project. Usually some kind of governing body, a steering committee, is assembled involving stakeholders from various departments. This project results in a gap analysis of processes as they currently stand, and processes as they should be, after perhaps long consultation by analysts with various employees.

Each business unit is then involved in putting the necessary changes into effect:

· The local store manager needs better reports about stock levels in his local warehouse and the lead times on centrally placed orders;

· The local warehouse manager needs to know about upcoming promotions and central stock levels and planned ordering;

· The central buyers need better information about upcoming promotions and expected demand increases and will be notified daily of such things. The central buyers also need to know current stock levels in local and central warehouses, and it should be possible to gauge expected depletion times.

From the perspective of the warehouse manager at the local store, a change needs to be made to the software system that is used to help manage his stock levels. That system allows the placement of replenishment orders from more central stock, and it tracks deliveries, shipments and other losses. The system will now need to know about planned promotions so the warehouse manager can be automatically alerted of upcoming demand. He does not know how those planned promotions will get recorded in the system and does not particularly care.

It is the store manager who is concerned with making sure that the local marketing boss records all upcoming promotions into the system via some newly provided web application that IT are supplying. Let us say that in fact IT will merely modify the local warehouse application so that it exposes a web page that allows its database to be updated by the local marketing team. The warehouse manager need have no knowledge of this. Once the warehouse manager’s system is updated, local marketing staff will use that web page to record their planned promotions, and with that information the warehouse manager will now plan his stockroom replenishments.

There are many types of promotion in a retail organisation. There are fliers, TV adverts, placing products on the end of a shopping aisle, reducing the price, moving products to near the entrance, and so on. In order to forecast changes in demand as a result of a promotion, some statistical sales analysis needs to be performed with information like the type, date and ‘scale’ of the promotion as input. It is decided that this task can be fully automated, but that this kind of statistical analysis should be done by a central, shared service available to various warehouses and marketing departments.
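As a minimal sketch of what that shared demand-analysis service might offer: the names, promotion types and uplift figures below are invented for illustration, and a real service would run a proper statistical model rather than a lookup table.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Promotion:
    kind: str          # e.g. "flier", "tv", "aisle_end", "discount" (illustrative)
    start: date
    duration_days: int
    scale: float       # 0.0 to 1.0: how aggressively the product is promoted

# Illustrative uplift factors per promotion type, standing in for the
# statistical analysis the central shared service would actually perform.
UPLIFT = {"flier": 1.15, "tv": 1.40, "aisle_end": 1.25, "discount": 1.30}

def forecast_daily_demand(baseline_units: float, promo: Promotion) -> float:
    """Expected daily demand during the promotion window."""
    factor = UPLIFT.get(promo.kind, 1.0)
    # 'scale' interpolates between no effect (0.0) and the full uplift (1.0).
    return baseline_units * (1.0 + (factor - 1.0) * promo.scale)
```

A warehouse or marketing department would call such a service with its baseline sales and the planned promotion, and receive an expected demand figure in return.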

At a high level, all of the above present changes in interdepartmental communications accompanied by new messaging and reporting requirements on the respective departmental systems. At that level analysts and architects are responsible, with nominated business representatives, for formulating an integrated, overall picture of what needs to be done. Within the scope of that overall project, changes to each department’s systems (e.g. the warehouse system) can be gathered together as specific subprojects and kicked off in parallel.

Each subproject involves the work of analysts, who study the business domain. They discover terms like ‘aisle promotion’, ‘TV promotion’, ‘promotion start date’, ‘promotion duration’, ‘discount’ and so on. They discover specific activities in business processes like ‘plan promotion,’ ‘cancel promotion’ and ‘commit promotion.’ They assemble these to create specifications for how the software system should appear to the end user. Somewhere in those descriptions will be included the web page that allows the local marketing employee to select from a single drop-down list “Select Promotion Type:” (aisle, TV, flier, etc). Also somewhere there will be a description in the warehouse system that the “Reports” page should now include a new “Automatic Alerts” subsection and that one of these should be the “Promotion Demand Analysis Alert”, with an explanation of how this helps the warehouse managers and central buyers anticipate demand.
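The discovered activities ‘plan promotion’, ‘commit promotion’ and ‘cancel promotion’ amount to a small lifecycle, which might be sketched as follows. The states and allowed transitions here are assumptions for illustration, not taken from any real specification.

```python
# A tiny state machine for the promotion lifecycle the analysts discover.
# Every promotion starts life as a plan; it may then be committed or
# cancelled, and a committed promotion may still be cancelled.
ALLOWED_TRANSITIONS = {
    ("planned", "commit"): "committed",
    ("planned", "cancel"): "cancelled",
    ("committed", "cancel"): "cancelled",
}

def plan_promotion(kind: str) -> dict:
    """'Plan promotion': create a new promotion in its initial state."""
    return {"kind": kind, "state": "planned"}  # kind: e.g. "aisle", "tv", "flier"

def apply_action(promotion: dict, action: str) -> None:
    """Apply 'commit' or 'cancel' to a promotion, enforcing the lifecycle."""
    next_state = ALLOWED_TRANSITIONS.get((promotion["state"], action))
    if next_state is None:
        raise ValueError(f"cannot {action} a {promotion['state']} promotion")
    promotion["state"] = next_state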

Looking at the illustration above, where we have got to now is “Specify stock control system changes.”

Usually the outputs of this analysis are documents in the language of the business representative or system end user. They portray the business artefacts, the roles, entities and processes, in natural language using the terminology of the business domain. This output is often called the “business analysis” or merely “requirements.”

In our example scenario, this is taken as input by a more technical analyst, who is tasked with specifying requirements in ways that can be understood by programmers and other technical staff. Sometimes this is done using languages like the Unified Modelling Language (UML), or other dedicated modelling languages for describing things with formal semantics. The output is often called a “technical analysis”, or “requirements specification.” It is a very detailed description of the concepts and artefacts described in the earlier analysis, avoiding ambiguity, going into the details necessary for realisation.

Software designers then take these specifications and write software, design web screens, install network connections, buy hardware, and put all the changes into effect.

So, let us now zoom back out.

From that original business goal to improve the supply chain so that unexpected local promotions don’t cause shortages:

· A specification was put in place by a project team involving business representatives, analysts and IT architects to change the high-level interdepartmental processes.

· This changed the specification of the input messages and output reports (the general operations) between the departments.

· The collections of changes to each department and departmental system, and the introduction of new or decommissioning of old systems, were run as parallel subprojects.

· Each subproject accepted that set of changes to inputs and outputs of the department as the basic specification of departmental goals.

· Departmental teams of analysts and local representatives identified changes to processes, described those processes, and passed them as specifications to IT analysts. (While some manual tasks were changed by the departmental bosses, the majority of changes are in software.)

· Those analysts interpreted the requirements and passed them as specifications to software teams.

· These in turn led to specifications for network configurations.

· These in turn led to specifications for hardware installations, purchases and so on.

Automation has created a plethora of roles with responsibilities so alien from one another that entirely different languages are used and intermediating roles are necessary just to be able to translate between them. These translations tend to flow in one direction and are usually referred to as specifications, or some such equivalent. Putting change into effect today is laborious, complicated, expensive, and usually unsuccessful.

The key point to recognise, though, is that the outputs of the software teams are yet further specifications.
These are not for people to follow, as in the old days, but for computers to follow. Software is a set of processes, just the same as those business processes that were once conducted manually, but described in programming code. Where in the past a heavy batch of copied documents was carried in a box by an assistant from the claims department to the accounting department, today a line of code runs a bulk-insert SQL command to stuff data into an accounting database’s general ledger table, or sends an XML file by web service hosted on the accounting department’s server.
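As a minimal sketch of that modern equivalent, with invented table and column names (a real system would use a production database rather than an in-memory one):

```python
import sqlite3

# The accounting department's ledger, sketched as an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE general_ledger (claim_id TEXT, account TEXT, amount_pence INTEGER)"
)

# The 'batch of copied documents' arriving from the claims department.
claims_batch = [
    ("CLM-001", "payouts", 125_00),
    ("CLM-002", "payouts", 89_50),
    ("CLM-003", "recoveries", -40_00),
]

# One line of code carries the whole box between 'departments'.
conn.executemany(
    "INSERT INTO general_ledger (claim_id, account, amount_pence) VALUES (?, ?, ?)",
    claims_batch,
)
conn.commit()

total = conn.execute("SELECT SUM(amount_pence) FROM general_ledger").fetchone()[0]
```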

In the example above, from top to bottom, at each level and stage of the project, what was being passed as input to the successive activity was an updated procedural specification. The project steering committee commissioned a gap analysis, which recommended that the local warehouses be notified of expected promotions by local marketing departments – a change in macroscopic level business process. This change was included amongst other procedural changes for that department, and at a departmental level, in a dedicated subproject, the business analysts and IT architects recommended that the marketing operative be responsible for entering data into a modified warehouse stock control system. The technical analyst interpreted this into software change specifications specific to that system. The software change specifications went to the respective owners of all the processes embodied in the warehouse system as software, who merely changed those processes.

The only documented process that did not change, if it was identified at all, was the overall process for implementing change itself. It is this one process that seems to be neglected most of all, and that very neglect results in the ossification and inflexibility this document seeks to address. We will return to this later.

Changes to nested processes, illustrating a part of the case above. The boxes illustrate processes. The red processes are actual computer systems, and the red arrows represent changes to software systems.

It is important to recognise that on an abstract level, a business is a set of processes, where each process is a set of sub-processes. Each process, or sub-process, is executed by or involves the participation of a role, which is an abstraction from a concrete employee or physical resource.
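That recursive view, processes composed of sub-processes with each attached to a role, can be sketched directly. The process and role names below are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    """A process: executed by a role, composed of nested sub-processes."""
    name: str
    role: str                                   # abstraction over an employee or resource
    subprocesses: list["Process"] = field(default_factory=list)

def count_activities(p: Process) -> int:
    """Count the elementary activities (leaves) nested under a process."""
    if not p.subprocesses:
        return 1
    return sum(count_activities(s) for s in p.subprocesses)

# An illustrative fragment of the warehouse goods-receipt process.
receive_goods = Process("Receive delivery", "warehouse manager", [
    Process("Check delivery note against order", "warehouse clerk"),
    Process("Update stock levels with loaded goods", "warehouse manager", [
        Process("Find product in ledger", "stock register"),
        Process("Adjust quantity or add product line", "stock register"),
    ]),
])
```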

A process begins with some message, or combination of messages, and results in the generation of messages. Here is an illustration of a simple warehouse process, the receipt of a delivery of goods:

Illustration of a business process for handling the receipt of goods to a warehouse.

The process was initiated by the message or event that a delivery truck has arrived with a package of goods. It involved the participation of the warehouse stock register, warehouse manager and other roles. Depending on whether the delivery note is matched with an order, the process ends with either the shipment being rejected, or being accepted, the stock updated, and the record of the stock levels updated. (In essence the record of stock levels is an undirected message, useful for whoever needs to know the state of the warehouse. By altering the stock records, other processes may be invoked by those who subscribe to stock record changes. In this way, processes may interlink.)
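The subscription idea in that parenthesis can be sketched as a toy. This illustrates the interlinking mechanism only; it is not any particular messaging product, and all names are invented.

```python
subscribers = []      # processes interested in stock record changes
stock_levels = {}     # the 'record of stock levels': an undirected message
reorders = []         # output of a subscribed replenishment process

def subscribe(callback) -> None:
    subscribers.append(callback)

def update_stock(product: str, delta: int) -> None:
    """Alter the stock record, then notify every subscribed process."""
    stock_levels[product] = stock_levels.get(product, 0) + delta
    for notify in subscribers:            # interlinked processes are invoked here
        notify(product, stock_levels[product])

# A replenishment process that reacts whenever stock runs low.
def reorder_if_low(product: str, level: int) -> None:
    if level < 5:
        reorders.append(product)

subscribe(reorder_if_low)
update_stock("bottled ink", 3)    # the low stock level triggers a reorder
```

The stock record itself is passive; the interlinking arises purely from other processes choosing to subscribe to changes in it.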

The activities in the process may also be treated as nested sub-processes. Look above at the task “Update stock levels with loaded goods.” Within that activity the process below is being carried out:

The process of updating the stock levels records as conducted and understood by the warehouse manager. The product delivered and loaded is first looked up in the stock ledger; if it is there, its stock level is adjusted, and if not, a new product line is recorded in the ledger. In either case the quantity damaged during loading is also recorded. The process is repeated for each item delivered.

When the warehouse manager updates the stock levels, the information about products received is given to the activity as input and then some simple procedure of searching for those products in the ledger, updating their quantities, and closing the ledger is carried out. Clearly, this may be a manual or an automatic activity. Processes even on this level, as mentioned earlier, have not changed much since before the days of computerised automation.

The following illustration shows the process of updating the stock levels record nested within the warehouse goods receipt process:

It is clear now that the whole business can be seen as a set of processes. It can be described in terms of processes, and change to the business can be prescribed in terms of processes. (In addition, a society of businesses and individuals can be seen similarly, where processes describe the interactions between them.)

Going back to our historical business of papers and filing cabinets, if we were to map that business as processes using illustrations such as above, and if we were to contrast that with a similar process map for the same organisation after computerisation, we would find that not much has changed. The main change is in the assignment of roles. Where once there were desks, ink, quills and scribes, now there is electronic messaging; where once there were cabinets of paper documents, now there are databases or Windows and UNIX file systems. An illustration of this is presented in the next section.

The entire organisation can be described using process representations. Diagrams such as those above depict the processes in place. Software code is also a specification or description of processes and could also be represented using such diagrams. The only difference is that software code can be executed directly by machines. It is not often that the change process itself (the process of changing processes) is considered when introducing change, particularly the type of change that results in automation.

If on an abstract level there is not a great difference between the traditional manual process and the modern automated one, why should there be a difference in flexibility? Is it actually the case that effecting organisational change is getting more difficult?

The warehouse delivery process involving the use of a stock control system.

Looking at the ‘Update stock records with loaded goods process’ more closely, we see:
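A sketch of that activity in code, standing in for the pseudo-code of the original illustration, might read as follows. The names are invented for illustration.

```python
def update_stock_records(ledger: dict, delivered: list) -> None:
    """Update the stock ledger with a batch of delivered goods."""
    for product, qty_loaded, qty_damaged in delivered:
        if product in ledger:                        # find the product in the ledger
            ledger[product]["on_hand"] += qty_loaded
        else:                                        # otherwise open a new product line
            ledger[product] = {"on_hand": qty_loaded, "damaged": 0}
        ledger[product]["damaged"] += qty_damaged    # in either case record damage

# The ledger before and after a delivery of two items.
ledger = {"boxes of quills": {"on_hand": 10, "damaged": 1}}
update_stock_records(ledger, [("boxes of quills", 5, 0), ("bottles of ink", 12, 2)])
```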

To reiterate and drive the point home, the processes of updating the stock records with loaded goods are in each case fundamentally the same. For the non-technical reader, the specifications of the stock level update in ‘pseudo-code’ (a slightly more readable, informal rendition of a computer program) might be hard to understand, but that does not impede the execution of the process, because they need only be understood by the computers themselves. That textual computer code represents the same thing as what we saw previously:

The textual program code equated with the graphical process diagram.

The two are simply different views of the same abstract entity: the process. One is readable by people and can be used to help manage manual processes; the other is readable by machines and can be used to manage automatic processes.

The key difference between the two lies in the processes needed to change those processes themselves. In the past, changing operations involved telling people the new rules. Now it involves telling people to tell computers the new rules.

Changing processes where tasks have been automated requires the construction of specifications for software developers. Depending on the software development methods employed (and there are many), new tasks must be created for analysts, architects, developers and whoever else needs to be involved. This becomes increasingly prohibitive. The very act of automation results in a loss of control over the processes themselves. The processes get carried out, allowing high throughput, but they cannot easily be stopped and changed. Not only that: over time the areas of the business those processes cover become lost and forgotten to the business itself. The IT specialists become the domain experts, but effecting change in the domain is often beyond their remit.

Let’s assume that our warehouse has only ever stocked items in units. There is a new requirement to receive liquids into tanks. Tanks and vats are installed in the warehouse, each with its own label. The tanks and vats are general purpose, but it is not permitted for liquids to be mixed, and each container requires a proper cleaning when it becomes empty or its product contents are changed.
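As an illustration only, the new rules could be captured in a small model like the following sketch (Python, with invented names). It assumes containers track contents in litres, refuse mixed liquids, and flag themselves for cleaning once emptied:

```python
# Illustrative sketch (invented names) of the liquid-container rules:
# contents are tracked in litres, liquids may not be mixed, and a
# container must be cleaned once emptied before a new product goes in.

class Container:
    def __init__(self, label):
        self.label = label
        self.product = None       # product currently held, None if empty
        self.litres = 0.0
        self.needs_cleaning = False

    def receive(self, product, litres):
        if self.needs_cleaning:
            raise ValueError(f"{self.label} must be cleaned first")
        if self.product is not None and self.product != product:
            # Changing product requires emptying and cleaning first,
            # so mixing is refused outright.
            raise ValueError("liquids may not be mixed")
        self.product = product
        self.litres += litres

    def dispense(self, litres):
        self.litres -= litres
        if self.litres <= 0:
            self.litres = 0.0
            self.product = None
            self.needs_cleaning = True  # cleaning is due once empty

    def clean(self):
        self.needs_cleaning = False
```

The point is not the code itself but that the rules are small and easily stated; the difficulty discussed below lies in getting them into a deployed system.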

In a fully manual, pre-computerisation environment, the process of changing the processes themselves might have looked like this:

1) Inform the warehouse manager and schedule the change.

2) Warehouse manager introduces a new stock ledger called ‘liquid products’, which records the contents of the vats in litres at any given moment and helps with scheduling tank cleansing.

3) Test the new ledger with receipts of stock, simulated returns and leakages, and then approve the new ledgers.

4) Update standard procedure documentation, if it exists.

5) Warehouse manager simply adapts his behaviour in response to the change, updating the correct ledgers on receipt of either liquid or unit based goods.

Post-computerisation, if the computer system in place does not accommodate such functionality, either out of the box or through the purchase of some additional module licence, then the departmental manager is faced with two options:

1) Commission internal software development for extension or creation of an in-house system.

2) Take the risk and expense of employing consultants to customise and reconfigure an externally developed system.

The process of introducing such a change can be so elaborate and complex that an entire book could be dedicated to it. The great majority of IT endeavours fail. It can be expected from the outset that some combination of time, budget or functionality goals will not be met.

All of the tasks involved in introducing vats, loading and dispensing sockets, tubes, meters, inspection norms and health checks remain the same. In the automated process we are able to deal with more transactions and handle more vats, but introducing the automated system involves all the activities of any software development lifecycle: business analyses, technical analyses, test specifications, telecommunications and data storage upgrades, regression tests of interfacing systems and so on. They include steps similar to those shown in the earlier example of specifications upon specifications. It is a difficult, risky and fragile process. [References]

Further, once the warehouse manager has automated his stock control ledgers he is no longer the effective owner of those processes. The warehouse manager becomes the consumer of a software system that is tended and maintained by others. It is integrated into other systems and considering the impact of modifying such an intertwined thing can be a strong disincentive.

It is also true that in a manual environment there are some changes that are expensive to carry out. In particular, alteration of historical records to accommodate some new metric or property might involve a great deal of manual work. For example, if all bank accounts were to be given a new property, ‘credit rating,’ and if it was decided that the credit rating was to be listed with each account, it could be a time-consuming process to go through all the books, pull them out of the cabinets and add a new column with pen and ruler to each page. However, this is merely a question of throughput again. The main advantage of computerisation is that the capacity for organisational growth is greatly improved. Transactional throughput and archive storage size are dramatically increased. An organisation could achieve the same thing with thousands of scribes and vast vaults of papers. Updating historical records is just a matter of available capacity. It is not complex, it is not risky, and in both cases it is easy to implement – just that the time or cost to complete the operation would be prohibitive if it were manual. It would be inaccurate to compare this type of inhibition on flexibility with those imposed by computerisation, because it requires that we first imagine trying to carry out such manual changes with the same volume of data as today. The scenario is unrealistic: banks without computerisation would not have such high numbers of active accounts if the number of clerks in place was insufficient to manage them. The objection can be generalised: organisations do not grow much beyond their ability to manage themselves.

Processes may be the same, in terms of what activities are carried out and what their inputs and outputs are, but with automated roles there is one process that has been dramatically altered: the process of introducing further change. This is often overlooked.
Automating roles is effectively the same as outsourcing critical business operations to a separate department that speaks a completely different language. The department may be ten times faster in their execution of operations, but to change their operations is risky and expensive.

This deserves a quick mention. Is it true that computerised organisations lose sight of their actual processes? Is it not the case that once a process is automated, some manual process must be kept in place in the event of system failure or some catastrophe – the so-called “business continuity plan”? Is it not the case that the administrators should be fully aware of how the business truly works and, as part of their duties in the area of operational risk management, make sure that the business continuity plans can be carried out?

The answer to all of the above is that in most organisations nothing of the sort is done, and certainly not properly.

As changes that focus on growth at the expense of flexibility are made, the organisation gets more rigid. Perhaps this is symptomatic of all aging, living things. However, as the fitness trainer says, it is never too late to start exercising.

Overwhelmed by increasing volumes, frightened by mounting costs, attracted by the need for monitoring and higher information processing capacity, and perhaps motivated to keep technical pace with competitors, organisations rushed to automate and deploy systems. As business objectives emerged, in a keenness to fulfil these objectives, systems were extended, customised and complemented with yet more automated systems.

It has been said that this early phase of rapid growth is a hallmark of youth and eventually a successful organisation is bound to reach equilibrium where it can no longer expand organically, its internal complexity reaching the limits for its earning potential. While this may characterise the evolution of many companies, it is somewhat defeatist in that it makes no attempt to find an underlying mechanism of that organisational aging process in an attempt to resist it.

Rapid growth with unmanaged increase in complexity can lead to confusion and quality reduction at any stage in a company’s history. This is clearly illustrated in such examples as Toyota’s overemphasis on expansion, recently followed by the unfortunate recall of several million vehicles in the USA and elsewhere. [References]

Viewing the organisation as a set of processes, involving activities assigned to roles, we have seen that change itself often leads to a decreasing ability to effect further change, primarily because the detailed process of effecting change is not itself included in the process model under change, or is simply ignored. In other words, change is made without really understanding the consequences. Standard techniques like the “gap analysis” show you the difference between the current state and the expected one, but do not include future gap analyses, or the ease and effectiveness of future gap analyses, as a subject of change. Immediate savings or growth opportunities from automating or ‘improving’ something can be perceived with some clarity, but the cost of future changes to the deployed result is uncertain and often simply ignored.

The consequences of ignoring the impact on detailed change processes are manifold.

Automating an area introduces communication barriers
As soon as something is removed from a department to the world of software, change to that process involves at least two departments instead of one.
For example, when the processes were manual, switching from FIFO costing to unit-based costing would have been fairly simple to put into effect. Someone would have to sort the till receipts in order of time, and then go through the stock books updating the levels and recording the cost of each sale. In the automated world, it is no longer trivial to make procedural changes. Changes to the stock and accounting processes now involve a more elaborate software development process.
This software development procedure involves a prohibitive communication overhead and even comparatively small changes are often abandoned.
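For concreteness, the FIFO costing rule described above – process till receipts in time order, with each sale consuming the oldest stock first – can be sketched as follows. The data shapes and names are assumptions for illustration:

```python
from collections import deque

# A sketch of FIFO costing: sales are processed in time order, and each
# sale consumes the oldest purchase layers first. Data shapes and names
# are illustrative assumptions, not any particular system's design.

def fifo_cost_of_sales(purchases, sales):
    """purchases: list of (quantity, unit_cost) in the order received.
    sales: list of (timestamp, quantity) till receipts, possibly unsorted.
    Returns the total cost of the goods sold under FIFO."""
    layers = deque(purchases)             # oldest stock at the front
    total_cost = 0.0
    for _, quantity in sorted(sales):     # sort receipts in order of time
        while quantity > 0:
            qty, unit_cost = layers[0]
            used = min(qty, quantity)
            total_cost += used * unit_cost
            quantity -= used
            if used == qty:
                layers.popleft()          # this purchase layer is exhausted
            else:
                layers[0] = (qty - used, unit_cost)
    return total_cost
```

The rule itself is simple enough for a clerk with sorted receipts and a stock book; the expense lies not in the rule but in the process of getting any such change into an automated system.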
Another consequence is that the details of the processes become lost to the business. Those who really ought to know how, when and why the stock levels are recorded often only do so for a period around the time the software is being designed and deployed. As months and years go by, people leave, software changes, and the precise mechanisms can become forgotten. It is common across companies to see investments in system rediscovery – essentially re-learning how their business really works.

Now that the warehouse information process has been automated, the company licenses the use of “Commercial Third Party System X.” Although “System X” met ninety-five percent of functional requirements out of the box, it required some customisation by one or more external consultants. The external consultancy was selected on the basis of competency and the likelihood that it would, as a company, remain around for the next five or ten years. It is now not possible to make changes to the system or its integration with other systems without involving these consultants. There is also an in-house technician with knowledge of “System X,” who is the last remaining employee in the company who understands key areas of the installation.
This is a typical scenario for many companies. It has become the norm to perceive line of business systems as fragile, indispensable and critical, while those who keep these systems propped up are treated in precisely the same way.
Further, as more changes are introduced, the stovepipe system can evolve. Smaller systems are developed to compensate for deficiencies in the larger ones. Processes evolve simply to move data from one system to the next, as the systems cannot talk to one another directly. These in turn require automation. Efforts are made to consolidate things and cut down complexity (such as attempts at SOA integration), only for the budget to dry up halfway through the project or the obstacles to become insurmountable, leaving a mixture of systems aligned with either the newly introduced or the legacy paradigm. This evolution of complexity results in ever more granular domains of specialisation, each with its own experts, and each a stakeholder in any future project for change.
Both internal and external specialists monopolise their knowledge as a job security strategy, and make sure that changes come at a high price.

Costs and risks associated with staff turnover increase
Processes have been changed without regard for the impact on the process of achieving further change. This manifests itself in yet another way: staff turnover may incur additional costs. This may be because of a need for specialised knowledge, or because the environment has increased in complexity or technicality and requires a steeper learning curve. It may be that the knowledge of how and why things are the way they are is simply no longer there, or is so disparate between teams that it takes longer to compile and learn.
I have seen the almost humorous situation of technology being selected for its capability, without regard for the software’s popularity, only to find that after deployment internal staff trained in its upkeep actually left the company to set up external consultancies at high multiples of their original income. On occasion the consultants sold their services back to the same former employer.
Personnel or human resources departments often cannot cope with hiring or firing. In the past, recruitment for a particular field meant finding personable candidates with appropriate education and experience in the business area. Today the business processes are automated and hiring is more about looking for familiarity with software applications and the various techniques required to customise or operate them. HR departments are not usually conversant with the plethora of technologies in use. This encourages them to push the responsibility of hiring onto the relevant managers, washing their hands of it to some extent.
Prior to computerisation, where those responsible for induction, training, hiring and firing needed most importantly an understanding of the business area, costs associated with candidate performance, and hence HR performance, were reasonably easy to measure. HR, recruitment, or personnel departments could be held accountable, at least partially, for the kind of costs associated with such activities as training, learning curve length and so on. If someone took a relatively long time to catch up, whoever recruited the person could be held accountable.
In contrast, today it is practically unheard of for companies to maintain correlations and studies of learning curve, training, hiring or other HR related costs, and the impact on them as a result of technology choices. Today, managers insist they need some skill at a high price, and make technology decisions without any involvement of the HR department or proper consideration of the impact the technology choice has on HR activities. An unfortunate end result is apathy towards effective personnel management.

Use of third party systems means more business critical processes are outsourced
The most accessible example of this is perhaps the small online shop. Many today who would like to, or already do, participate in trade of goods and services find that traditional knowledge of commerce no longer suffices. In the past one learnt about bookkeeping, stock keeping, buying, selling and the laws and lore of one's particular business domain. Today it is somewhat different. There are three options for the simple shop owner who wants to sell online: learn to program, hire programmers (external or internal), or use a third party software package with some limited customisability.
Historically businesses have always been dependent on third parties for provision of infrastructure, but the actual orchestration of internal operations was very often the responsibility of the business’s key decision makers. Today however it seems that more and more are willing to let software run their business. Online marketing, ordering, accounting, stock keeping, bookkeeping, tax calculation and more, can all be done by machines. Simply purchase the system, host it somewhere, add liberal sprinklings of creative design and personal connections, then spend the next few years running around at the behest of the system making sure that the physical order and delivery processes keep up with what the system demands.
When a change becomes necessary, such as adding an entirely new product category, suddenly the business ‘owner’ must either become an adept programmer and web designer, manage programmers, or pay for expensive application customisations. This is a commonly heard lament today.
In larger corporations the same kind of difficulty applies. Entire sections of critical business processes become effectively owned by a third party supplier. In my experience the costs of internally developed systems versus externally supplied and customised systems often vary wildly from expectation. Sometimes a company invests so much into a system, only to discover that it lacks the customisability needed, that the entire business folds. Sometimes huge systems that would take large teams years to develop can be substituted by smarter systems developed in house at orders-of-magnitude cost reductions (and vice versa). The main problem with outsourcing business processes, particularly by purchasing applications, is that while they meet a need, flexibility in those areas becomes minimal. Further, these systems can quickly become obsolete and/or require frequent upgrades.

Technical solutions to problems of organisational complexity miss the point
To reiterate: the business is a set of processes; businesses change and adapt through processes for change; the business is a set of processes including those processes of realising change. A change to the business results in a change to those processes for change – a change to the business means a change to how it can change. As changes occur, how flexible or how rigid the organisation becomes is determined by the impact on those processes for change. In order to maintain flexibility, the impact on further change management must be considered. It seems trivial and obvious once it is spelled out: rigidity occurs through neglect of flexibility, but most organisations in recent times seem to have failed in this.
The problem is methodological and cultural. Business processes must first be recognised as real artefacts, identified, described and maintained. They must be as descriptive as possible, as opposed to prescribed dogmas, so that the business can be well understood and remain understood. Without this knowledge, there is no governance, only the illusion of governance. Once this knowledge is available, it must be made to include identified processes for effecting change. Any proposed change to the organisation would have an effect on the steps necessary to realise yet further change and so this total impact must be grasped holistically. With the necessary approach and mindset, it should become possible to take control of organisational flexibility, anticipating what changes lead down the path of ossification and age, and which paths can lead away from it.
Consequently, any attempt to improve flexibility through technical ideals, such as enterprise SOA integration, homogenised platforms, monolithic packages and so on, cannot come with any guarantee of success, because such attempts completely miss the point. These company-wide architectural panaceas are attempts at serving technical solutions to problems of structural complexity, with no obvious connection to organisational flexibility. One does not follow from the other, and without this connection it is difficult to understand why there should be any return on investment at all.
Such architectural solutions do not address resistance from domain fiefdoms and expertise monopolies. An “enterprise service bus” does not address consultancy costs for its platform-specific knowledge. A central data warehouse does not address the costs and risks of a highly centralised method of sharing knowledge. Most importantly, they do not address the cultural aspects of the organisation that led to the inflexible solutions in the first place.

Loss of quality
Both a cause and an effect of rigidity is loss of knowledge about systems and processes. People become tentative and hesitant to make changes, but when they do, they cannot effectively test the systems. For various reasons the test processes become lax, circumvented, or perceived as too costly to maintain. In some companies experiencing growth, it is the necessity for expedience that causes systems to be deployed without retention of testability, and so quality suffers.
A climate of uncertainty is established when quality and testability are secondary. This uncertainty is accompanied by a resistance to change, which can often result in ‘patches’ and supplementary systems, put in place out of fear of upgrading or replacing the original, resulting in an increasingly complex and less understood stovepipe of an organisation.
De-emphasis of quality results in failure, and organisational rigidity results in poor quality. For some it may seem counterintuitive that stringent Quality Assurance could result in flexibility. That very emphasis was the key ingredient in the rise of Japanese manufacturing processes:
“Ford Motor Company was simultaneously manufacturing a car model with transmissions made in Japan and the United States. Soon after the car model was on the market, Ford customers were requesting the model with Japanese transmission over the USA-made transmission, and they were willing to wait for the Japanese model. As both transmissions were made to the same specifications, Ford engineers could not understand the customer preference for the model with Japanese transmission. Finally, Ford engineers decided to take apart the two different transmissions. The American-made car parts were all within specified tolerance levels. On the other hand, the Japanese car parts had much closer tolerances than the USA-made parts – e.g., if a part were supposed to be one foot long, plus or minus 1/8 of an inch – then the Japanese parts were within 1/16 of an inch. This made the Japanese cars run more smoothly and customers experienced fewer problems.”

In a climate that emphasises growth at the expense of adaptability and flexibility, ossification and rigidity sets in. The loss of flexibility through computerisation manifests itself in various ways.

This is a fictional case study of an organisation, demonstrating all the problems of rigidity identified above.

For the sake of familiarity, let our stricken company be a shop. So that the example is non-trivial, it has a central, shared warehouse and some branches around town, and each branch has its own small warehouse. All ordering for replenishment of the central warehouse is done from a central department at HQ, and all local ordering is placed on the central warehouse. The shop currently sells a narrow range of products from a small selection of suppliers. All sales are brick and mortar. No online ordering is supported.

Up until recently the branch cash tills generated a spreadsheet of daily sales, which was sent by email as soon as available to HQ for processing. It was decided however that this process could be further automated to save some branch staff time. The end result was that the HQ accounts package was integrated with a web service, and custom software was created by in-house technicians to send till data via the web service to the accounts system. The custom software was not completely trivial: there was some cleaning of the data to be done, some recorded sales were not actual sales but goods returns or item cancellations, the till data was expressed in terms of bar codes and had to be matched with a file of product descriptions, and sometimes the data was not available at the expected time and had to be merged with data from the previous day. These are just some of the complications previously handled manually that had to be automated.
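The kind of cleaning involved might be sketched as below. All field names are invented; the sketch only illustrates filtering out non-sales, matching bar codes against the product description file, and merging leftover rows from the previous day:

```python
# Illustrative sketch (invented field names) of the kind of cleaning the
# custom till-feed software performed before sending data to HQ.

def clean_till_data(rows, product_descriptions, leftover_rows=()):
    """rows: dicts with 'barcode', 'amount' and 'type' fields.
    product_descriptions: mapping from bar code to product description.
    leftover_rows: rows carried over from the previous day's late data."""
    cleaned = []
    for row in list(leftover_rows) + list(rows):
        # Returns and cancellations are recorded by the tills but are
        # not sales, so they are filtered out here.
        if row["type"] != "sale":
            continue
        # Bar codes are matched with the product description file.
        description = product_descriptions.get(row["barcode"], "UNKNOWN")
        cleaned.append({"barcode": row["barcode"],
                        "description": description,
                        "amount": row["amount"]})
    return cleaned
```

None of this is conceptually difficult, but each such rule is a piece of business knowledge that, once buried in custom software, can quietly leave the general staff awareness.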

Time passed and the technicians who originally wrote that software left the organisation. The software ran with few hiccups, and was usually left alone by newer programmers. On the rare occasion that something needed changing, it was tentatively patched with little in-depth understanding and left to continue sending data to HQ.

Much later, as sales volumes picked up, two things became apparent: the selection of products needed widening, and predictive ability would be improved if the sales data could be available to HQ hourly. It was then that they realised that working out what needed to change was not going to be an easy task. No clear answers were immediately available, and it became obvious that investments would need to be made to re-discover how the business actually operated. To make matters worse, putting the change into effect involved much more than just telling someone what to do. It became clear that an elaborate and expensive set of methods would need to be followed in order to successfully document, change, test and deploy a system that was now critical to the operations of the business.

To the dismay of the business owners, it slowly dawned on them that this kind of piecemeal automation of business tasks had been going on now for some time in various other parts of the business too. It was even discovered that one enthusiastic technician had implemented a simple stock control system to help manage the small stock room at one of the branches, while another store manager with some knowledge of programming spreadsheets had created yet another system to do the same thing for another branch. As these hand-crafted solutions had migrated to the auspices of the technicians, the knowledge of the business operations had slowly drifted away.

In a desperate attempt to regain some semblance of control of the business, the bosses decided that all stock control for the central warehouse and stock rooms would be dealt with using a single third party system. To cut a long story short, the implementation of this third party stock control system ended up costing much more than anticipated, because of the customisations and integrations necessary, and all the reverse engineering that had to be done just to discover what needed replacing at all. This was further complicated by the resistance put up by the technicians who had grown attached to the home-baked systems their jobs depended on.

The shop management, after a significant investment of time and money, became the proud owners of an obscure system that they did not understand, on which their company was dependent, and which cost consultant rates for changes or advice. Granted, they had a system that was ‘configurable,’ but those who did the configuration were trained experts in that third party system. They took the place of the previous technicians as expertise monopolies. In short, for an illusory sense of control, the management had overspent and solved little.

The HR department is now perplexed. Interviewing for new hires over the last six months has been complicated. Those doing the interviewing no longer understand the skills being advertised. In the past it was simple: you had to have common sense, industry experience, a personable approach and knowledge of the basic administrative systems and processes. Suddenly that is all by the by. The processes are automated, the systems are obscure, the skills are technical and it is hard to gauge what kind of soft skills are really needed. More importantly, some staff members with a negative influence are hard to replace – they command higher salaries and more important areas of expertise. Further, the expected learning curves are much longer. New hires typically spend three months before even grasping the basics. Training in these new systems needs to be scheduled and completed, and then all the local customisations need to be learnt. Trial periods become longer. The HR department in essence washes its hands of hiring and firing and becomes the paper-pushing team it is today typically recognised as. Nobody in management seems to be officially tracking and correlating costs associated with hiring and retention in relation to technology choices, so HR cannot offer real metrics about itself and becomes somewhat apathetic.

Finally, through inadequacies in understanding of the systems and processes, the approach to introducing change becomes tentative and reticent. This is a self-reinforcing situation because in such circumstances it is difficult to test that changes really work, and it is difficult to see the unintended consequences. This leads to yet further hesitance. Glitches in business operations appear frequently. Quality suffers for both the staff members and the customers.

The company exhibits all of the problems identified in the previous sections. It is now a paralysed entity. At worst, no real long term growth is possible and the only way for it to go is down.

It no longer makes sense to talk about large and small organisations in terms of numbers of employees. An online shop processing thousands of transactions per day can, in theory, operate with only a handful of employees. However, the uncomplicated business with a small number of staff running its operations by word of mouth can be seen as flexible and unburdened by management overhead. This type of company is either a seminal venture or a long-established firm that occupies a niche and has peaked in terms of growth. In the former case it can follow only three paths: to become another old niche firm; to collapse for one reason or another; or to undergo the kind of fundamental transformations described earlier in order to allow it to scale.

Basically this type of company must either remain a fixture in a small niche, fold, or reform itself entirely in order to scale. This is not a picture of flexibility.

Here I attempt to present a situation in which every problem of flexibility above has been eliminated. This serves to exemplify possible solutions.

First, let us revisit what went wrong with the ‘worst company in the world:’

Automation of consolidating sales data took an understanding of the sales data and consolidation process away from the general staff awareness.

The knowledge of the software itself diminished, and so in effect the whole company forgot about how this area really worked.

Introducing change to that area became expensive and uncertain, as it always involved rediscovery.

Introducing change required complex software systems development processes, which are expensive and risky.

Similar automations had gone on unnoticed, with these ad-hoc results being understood and manageable only by a handful of people.

Ad-hoc automated solutions drifted to the technicians for supervision, resulting in an overall drain on the knowledge of processes and some repetition of the problems above.

The consolidation of ad-hoc systems into third party systems resulted in expensive dependencies on third party suppliers for business critical processes, and increased the costs related to hiring, training and so on.

The HR department became apathetic, their responsibilities were diminished, and they were held accountable for cost increases that were not their doing, because management were not correlating technology choices with personnel costs.

Lack of system quality and lack of knowledge of those systems resulted in further lack of quality. Internal processes became fragile, the atmosphere at the office worsened and customer service suffered.

Now let us look at the same picture in negative to start to get some idea of the “Best organisation in the world”:

WORST: Automation of consolidating sales data took an understanding of the sales data and consolidation process away from the general staff awareness.
BEST: It was possible to reduce long term headcount and increase transactional throughput by improving the way sales data was consolidated from tills to HQ, while at the same time keeping this process under the supervision of and fresh in the minds of the relevant business staff.

WORST: The knowledge of the software itself diminished, and so in effect the whole company forgot about how this area really worked.
BEST: The knowledge of the processes was always fresh in the minds of those business line managers responsible for their oversight. The overall picture of the business operations was readily available to anyone who needed to know.

WORST: Introducing change to that area became expensive and uncertain, as it always involved rediscovery.
BEST: Nothing more than trivial rediscovery was ever needed to implement any kind of procedural or structural change.

WORST: Introducing change required complex software systems development processes, which are expensive and risky.
WORST: Similar automations had gone on unnoticed, with these ad-hoc results being understood and manageable only by a handful of people.
BEST: Either processes were entirely maintainable by anyone with knowledge of the business area, or the process was not automated.

WORST: Ad-hoc automated solutions drifted to the technicians for supervision, resulting in an overall drain on the knowledge of processes and some repetition of the problems above.
BEST: While technicians may have remained involved, at least for infrastructure, no knowledge of processes was lost from the business.

WORST: The consolidation of ad-hoc systems into third party systems resulted in expensive dependencies on third party suppliers for business critical processes, and increased the costs related to hiring, training and so on.
BEST: All systems purchased had to meet the criteria of being well understood, popular platforms.

WORST: The HR department became apathetic, their responsibilities were diminished, and they were held accountable for cost increases that were not their doing, because management were not correlating technology choices with personnel costs.
BEST: Any technology choice was carefully monitored for its long term impact on the HR processes, including hiring, firing, training, etc. A scientific approach was adopted to help ascribe costs appropriately, and to help the business learn from its decisions. The HR department was made a stakeholder in key technology decisions, and was asked for estimates in any technology purchasing decision concerning long term costs.

WORST: Lack of system quality and lack of knowledge of those systems resulted in further lack of quality. Internal processes became fragile, the atmosphere at the office worsened and customer service suffered.

Each one of these possibilities in the ‘Best’ column raises the question, “How?” I will now address each item and present a general answer, and it will become apparent where these solutions are heading.

It was possible to reduce long term headcount and increase transactional throughput by improving the way sales data was consolidated from tills to HQ, while at the same time keeping this process under the supervision of and fresh in the minds of the relevant business staff.

How? The business processes automated and coordinated by machines must be clearly readable and understandable by business process participants and their execution must be clearly visible, just as it was before automation.

The knowledge of the processes was always fresh in the minds of those business line managers responsible for their oversight. The overall picture of the business operations was readily available to anyone who needed to know.

How? The business processes automated and coordinated by machines must be clearly readable and understandable by managers. They must be able to change the automated processes themselves. When processes are changed, the changes must remain clearly readable and understandable by process participants.

Nothing more than trivial rediscovery was ever needed to implement any kind of procedural or structural change.

How? The business processes automated and coordinated by machines must be clearly readable and understandable.

How? Minimising costs would mean knowing in advance what every possible change, and every outcome of change, would cost. That is infeasible. What is necessary instead is the ability to test the impact of a change easily, permitting some freedom to experiment. The processes should be clearly readable and understandable, as above, and changing them should be as simple as directly altering them, just as prior to automation; but testing remains necessary because there will always be unforeseen consequences. Testing should always be part of the change process. It adds some weight to that process, but it reduces the overall cost and cultivates an atmosphere of safety and freedom to play.
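To make this concrete, here is a minimal sketch of what a readable yet executable process description might look like, with a test run as part of any change. This is only an illustration of the idea, not a prescription; all step names and figures are hypothetical.

```python
# Sketch: a business process expressed as plain, named steps that a
# business reader can inspect and a machine can execute.
# All step names and figures are hypothetical.

def collect_till_totals(state):
    # Gather the day's takings from each till into one figure.
    state["total"] = sum(state["tills"])
    return state

def send_total_to_hq(state):
    # Forward the consolidated figure to head office.
    state["sent_to_hq"] = state["total"]
    return state

# The process itself is just ordered, readable data.
SALES_CONSOLIDATION = [
    ("Collect till totals", collect_till_totals),
    ("Send consolidated total to HQ", send_total_to_hq),
]

def run(process, state):
    # Execute each named step in order, so execution mirrors the description.
    for name, step in process:
        state = step(state)
    return state

# Testing as part of the change process: before altering a step,
# state the behaviour that must be preserved.
def test_consolidation_preserves_total():
    result = run(SALES_CONSOLIDATION, {"tills": [120, 85, 230]})
    assert result["sent_to_hq"] == 435

test_consolidation_preserves_total()
print([name for name, _ in SALES_CONSOLIDATION])
```

Because the process is plain data, anyone who can read the step names can review a proposed change, and the accompanying test makes experimenting with changes safe.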

Either processes were entirely maintainable by anyone with knowledge of the business area, or the process was not automated.

How? First, the incentive structures need to be such that people always incorporate company-wide flexibility into their decision making. When someone creates a spreadsheet macro to save time on a daily task, and most of the organisation is unfamiliar with spreadsheet macros, that person should weigh the cost of the action to the company, not just the personal benefit. The second part of the answer is in the same vein as above: any automated business process must be clearly readable, understandable and changeable by all relevant parties.

While technicians may have remained involved, at least for infrastructure, no knowledge of processes was lost from the business.

How? The business processes must be clearly readable and understandable by all relevant parties, including those managing the data storage and transmission facilities. A change to a process that everyone can read may still be infeasible if the volumes of data exceed the capacity of the network and databases, so planning for change needs quick assessments from those who manage the IT infrastructure. Indeed, the IT infrastructure would have internal processes of its own, subject to the same requirements.
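As a toy illustration of such a quick assessment, an infrastructure manager might check a proposed process change against the available transfer window before it is approved. Every figure here is invented for illustration.

```python
# Toy capacity check: would a proposed change to the consolidation
# process (sending itemised lines instead of daily totals) exceed
# the overnight transfer window? All figures are invented.

stores = 200
records_per_store = 50_000      # itemised sales lines per store per day
bytes_per_record = 256
link_bytes_per_sec = 1_000_000  # effective uplink, roughly 8 Mbit/s
window_seconds = 4 * 3600       # four-hour overnight window

total_bytes = stores * records_per_store * bytes_per_record
needed_seconds = total_bytes / link_bytes_per_sec

print(f"transfer needs {needed_seconds / 3600:.1f} h "
      f"of a {window_seconds / 3600:.0f} h window")
print("feasible" if needed_seconds <= window_seconds else "not feasible")
```

The point is not the arithmetic but that the assessment is quick, visible and expressible in terms any relevant party can check.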

All systems purchased had to meet the criteria of being well understood, popular platforms.

How? The platform must be very popular, with an abundance of open documentation: Windows or Unix, for example. The processes automated on these platforms must be readable and understandable by everyone relevant, without creating dependencies on third party suppliers or internal knowledge monopolies.

Any technology choice was carefully monitored for its long term impact on the HR processes, including hiring, firing, training, etc. A scientific approach was adopted to help ascribe costs appropriately, and to help the business learn from its decisions. The HR department was made a stakeholder in key technology decisions, and was asked for estimates in any technology purchasing decision concerning long term costs.

How? The comment is self explanatory, but I feel it is important to emphasise this point: most businesses simply do not understand the personnel-related costs that automation and technology choices have imposed on them. Executives fail to put these two aspects of the business together, and the result is poor decision making. It is all too common to hear of the ROI of an IT choice or of a 'strategic technology investment', yet even more common to see a business fold, a department close or an IT project fail for the simple reason that the organisation does not, and cannot, understand the actual cost.
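A hedged sketch of the kind of whole-cost estimate HR might contribute to a purchasing decision follows: the licence cost plus the personnel costs that ROI calculations usually omit. Every figure is invented for illustration.

```python
# Toy total-cost estimate for a technology choice over five years,
# including the personnel costs that ROI calculations often leave out.
# All figures are invented for illustration.

years = 5
licence_per_year = 40_000
annual_salary_premium = 8_000  # scarcer skills command higher pay
specialists_needed = 3
training_per_hire = 12_000
expected_hires = 4             # replacement hires over the period

licence_cost = licence_per_year * years
personnel_cost = (annual_salary_premium * specialists_needed * years
                  + training_per_hire * expected_hires)
total = licence_cost + personnel_cost

print(f"licence: {licence_cost}, personnel: {personnel_cost}, total: {total}")
```

In this invented case the personnel costs come close to matching the licence costs, which is exactly the kind of comparison an HR stakeholder could put in front of decision makers.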

Flexibility is strength and health within a certain scope that preserves the identity of the organisation. It is the ability to respond to changes and adapt, to withstand shocks and survive.

Automation of business processes permitted a continuation of organisational growth by allowing increases in transactional throughput.

Automation of business processes has resulted in a separation between the business domains and specialised technical domains, where knowledge critical to the decision making of both lies across a difficult division.

Any organisation can be described purely in terms of business processes.

Business processes have not dramatically changed. What has changed is that the roles assigned to the execution of tasks are increasingly filled by machines.

The main change in response to automation is to the process of introducing further change itself. It increasingly involves the processes found in the acquisition or development of software, and it is often poorly executed by organisations whose core skill is neither.

Automation of business processes very often has unwanted side effects: dependencies on external suppliers, expertise monopolies, loss of business knowledge, communication barriers, aggravated costs and risks in personnel management, decision makers baffled and confused by technical concerns, and degraded quality of service and products.

A major issue is that costs cannot be ascribed to personnel or to personnel management. Holding individuals, and consequently HR departments, accountable is practically impossible. HR departments succumb to apathy and delegate their responsibilities to departmental managers.

Almost all of the above problems are caused by describing processes in ways that only specialists can understand. Process oriented thinkers have been pushed into IT, and processes have been pushed to IT.

If the basic functions of management are staffing, planning, organising, leading and monitoring, then in most cases automation of business processes has helped with monitoring at the expense of all other functions.

Finally, the solution to all these problems is a cultural emphasis on flexibility and an understanding of its benefits, coupled with the right technologies: those that allow processes to be described and prescribed in ways that all can understand.

Such initiatives, and supporting technologies, are coming into maturity. I will look at these in a later post.