In Part 1 – Background – we set the stage for the debate, explaining why IP has started to make its way into historically closed, proprietary AMI meter networks.

In Part 2 – The Case for IPv6 – we gave the IPv6 evangelists in our group a chance to make their case for mandating the newer protocol in the solution.

In this Part 3 of the series we had planned a discussion of public addressability, but instead we’ve opted to try to draw some conclusions from the group.

Apples and Oranges

AMI projects can look very similar on the surface, but it is notoriously difficult to make meaningful comparisons across projects that exist in different geographic, regulatory and business environments.

The driving factors behind a significant technical decision in one project may be completely absent in an otherwise very similar AMI implementation, resulting in a different technical solution.

It is for these reasons that we decided to give each member of the group a chance to “sum up” their point of view instead of trying to synthesize a single conclusion that all of us could agree on.

We hope that this approach will provide the best coverage of the many factors that influenced this debate.

To avoid accusations of favouritism I’ve ordered the opinions by the contributors’ last names. :-)

Enjoy…Brian MacDonald

Erwin Frank-Schultz

Let's first summarise the benefits of IPv6 as outlined in the previous post:

More efficient, with lower latency, as it requires no address translation;

Security designed in from the start;

Supports many more addresses than IPv4; and

IPv6 eventually replaces IPv4 and is thus more future-proof.

In my view the efficiency gains are likely to be of limited benefit to AMI applications, as there are usually no stringent latency requirements (unlike in SCADA systems, for example). Furthermore, the security solutions that have been designed around IPv4 are good enough, have been standardised and are now implemented by most vendors. AMI devices are unlikely to be addressable on the public Internet, and any one system is unlikely to exceed the IPv4 private network space; the address space limitations of IPv4 are therefore not a significant concern.
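
As a rough, back-of-the-envelope check on that last point, the RFC 1918 private ranges can be totted up with Python's ipaddress module; the five-million-meter figure below is purely an illustrative assumption:

import ipaddress

# The three RFC 1918 private ranges available inside a closed AMI network.
private_blocks = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

total_private = sum(block.num_addresses for block in private_blocks)
meters = 5_000_000  # assumed size of a very large national AMI deployment

print(f"Private IPv4 addresses: {total_private:,}")  # 17,891,328
print(f"Meters to address:      {meters:,}")
print(f"Headroom:               {total_private / meters:.1f}x")

Even a deployment of five million meters uses less than a third of the private space, before any re-use behind multiple gateways.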

At some point equipment manufacturers will start to drop support for IPv4. The previous post argues that this will be later rather than earlier and I agree, especially as far as networking equipment is concerned. But what about end points, i.e. meters or in-home communication modules? It is not clear when manufacturers will start to offer IPv6-only versions. My personal view is that this is still some way off. There are too many installed networks. Producing IPv6-only devices will not save a significant amount on manufacturing cost and will limit the market into which devices can be sold.

Given the arguments above, mandating IPv6 is not necessary when procuring an AMI solution today. The risk of inflating costs by excluding cost effective and well-tested solutions outweighs the technical benefits. On the other hand, IPv6 is the future and does have some benefits. I would therefore include IPv6 as a selection criterion, but not assign much weight to it.
This position will need to be revised, and I can see the weighting assigned to the IPv6 criterion increasing over time until it becomes a mandatory requirement in 5 to 10 years’ time.

Brian MacDonald

I like new things – the shinier the better – and IPv6 has lots of new “things” many of which we’ve discussed in this series. There is no doubt in my mind that IPv6 will be better than IPv4 and that it will continue to slowly replace IPv4 as the dominant protocol.

However, “new” in the software domain is generally regarded as a “bad thing”, most notably from the security and reliability perspectives.

That’s not a criticism of today’s developers or the software development life cycle – just an observation that has been proven time and time again. The only way to identify and fix the defects that are guaranteed to exist in new software is to have it used as widely as possible for as long as possible.

IPv4 implementations have been subject to some of the most extensive production usage possible and there are still bugs being identified in code paths that have not been touched for years.

IPv6 has only just started on this journey and it is going to be a bumpy one – made worse by the amount of new firmware involved in the embedded and AMI world. Every device that is going to participate in the new IPv6 network will contain a new firmware implementation of the IPv6 stack. There will be conflicting vendor interpretations of the standard, software and interoperability defects, and revisions as IPv6 deployments grow. Delivering fixes to all of these firmware images and maintaining the requisite configuration management across the inventory of AMI meters will be a challenge.

To mitigate these risks I would defer the introduction of IPv6 software on the AMI meter for as long as possible.

How long is that?

The answer depends on whether the AMI meter vendor offers a viable upgrade path that will allow the move to IPv6 once the velocity of defect discovery has settled down to an acceptable level.

If there is an upgrade path via firmware update delivered over-the-air (OTA) then I would be happy to leave IPv6 out of the meter selection mandatory criteria.

On the other hand, if the vendor does not offer an upgrade path, then the risk of deploying meters with new and relatively untested firmware has to be measured against deploying meters that will remain on IPv4 for their useful life, e.g. ten years.

Michael Martin

IPv6 is essential to the success of future smart metering AMI solutions. When considering the long-term future, several core technical aspects demand more attention than they do with today's AMI needs, for which IPv4 is deemed adequate. The driving forces of change behind the next generation of AMI solutions will be:

Internet of Things;

Security; and

Distributed architecture.

The Internet of Things (IoT) is the next edition of the Internet itself – some call it Web 3.0. The definition of this next generation of the Internet has evolved and changed from what we envisaged just five years ago. It is still an immersive environment, but not one based upon avatars and games whereby the user is emulated within a simulated virtual world. It is now a real-world scenario whereby the user is immersed within their own physical environment and the devices surrounding them wrap the user in a dynamic bubble that maps and remaps as they move about their daily chores. In the same way, the IoT will make and break machine-to-machine relationships on demand and generate dynamic connections when needed to fulfill a request or a requirement. In the AMI environment, this will manifest as meters communicating with other meters, controllers, sensors and devices (or things) that share data. These things effectively move from a model that shares simple data towards one of information exchange and derived knowledge that is used to save energy, manage costs, control the smart home, and make lives better. Ultimately, we will migrate from this reactive knowledge world to one based upon predictive wisdom within the fabric of the AMI network.

Once you connect all of these things together, these ad-hoc networks pose a new potential risk. Therefore, a new level of security and protection of data and privacy is necessary. The threat is serious. If someone could attack the immersive bubble wrapping around you and your family, as well as the myriad devices within the fabric of your IoT model, then they may be able to do harm, or control aspects of your life or environment in ways you do not want. Protecting against such intrusions is therefore essential. The IoT needs to operate within a trusted DMZ. The AMI solution needs to be protected within this same trusted DMZ so as to avoid these same risks and to ensure that AMI devices and things can function as expected within a safe and protected environment.

In order for all of these changes to take place, we will see a change to the fundamental architecture of the web itself and, by extension, of the AMI network. We will move from a centralized model to a distributed model. The new model will combine client/server designs with peer-to-peer designs in a hybrid approach. It will become autonomous and create many mini webs, not just one massive Internet. It will be a web of webs numbering into the billions. A hybrid architecture that is mostly distributed in nature will result.

It is for these key drivers that we must have IPv6. The number of connections will explode exponentially as the number of devices grows into the many billions. IPv4 cannot support this volume of IP addresses. Next generation AMI networks need security and privacy by design, built into the fabric of the addressing schema in a way that only IPv6 can provide, not the bolt-on, afterthought approach that IPv4 employs today. With the change to the core architecture, the biggest change will come with the loss of a centralized approach, which is depended upon heavily by today's IPv4 designs. The core needs of Network Address Translation (NAT) and Dynamic Host Configuration Protocol (DHCP) servers will change dramatically and IPv4 will not be able to support this new model for the dynamic wrap-around web. Only IPv6 can meet these future needs.

Andy Stanford-Clark

I am asked this question on a regular basis when I take questions after presentations about emerging Internet of Things technologies.

I am quite clear on the answer for the general IoT case, and more so for the case of AMI network infrastructure: IPv4 is quite sufficient. Through the use and re-use of reserved networks such as 10/8, we're never going to run out of addresses.

My first observation is that there are enough risks to the security of Smart Metering systems without having them all publicly addressable and routable. Hiding them behind a diodic NAT gateway at least keeps them safe from prying eyes, port scans, ping discovery, and most DoS attacks. For situations where an HES needs to gain direct access to an AMI device, private networks – either physical, or virtual using VPNs – enable host computer systems and AMI devices to appear to be on the same network, enough to give the access required.

Secondly, the world is moving away from thinking about connectivity at the Network Services layer, and moving up the stack to the Application Services layer (layer 7 of the OSI model). This way of thinking brings the physical instantiation of a system closer to the mental model of the solution architect and implementer. Enterprise Messaging and other middleware messaging paradigms enable applications to talk to other applications, and more importantly allow the implementers to think about the solution in terms of functional entities communicating with each other. Of course, there need to be underlying network layers to support application-level messaging, but those lower layers are just enablers, and as long as they are sufficiently rich to support the patterns of interaction that the applications wish to use, then they will be fit for purpose.

For example, MQTT (http://mqtt.org) is an application-level publish-and-subscribe messaging protocol, widely used in M2M applications including AMI. All connections in MQTT are originated by the client – the meter, in the AMI case. Once the connection is established, a keep-alive protocol runs between client and broker (server), and communication is two-way for as long as the connection is maintained. The client re-connects whenever the connection drops. There are numerous possible connectivity patterns using MQTT based on time, the need to publish data, and the likelihood and urgency of messages being delivered to the client.
In the extreme case of an unexpected urgent message from the broker, a "shoulder-tap" using an out-of-band signaling mechanism tells the client to "call home" and thus receive the waiting message. This is rarely needed, though, as maintenance of the client-to-broker connection can usually be arranged to cover the expected communications patterns for any given application.
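
As a sketch of what this looks like in practice, here is a minimal meter-side client using the open-source Eclipse Paho Python library (paho-mqtt, 1.x callback style); the broker host, topic names and payload are illustrative assumptions, not part of any particular AMI product:

import paho.mqtt.client as mqtt

BROKER = "headend.example.com"   # assumed HES-side broker hostname
METER_ID = "meter-0001"          # illustrative client/meter identifier

def on_connect(client, userdata, flags, rc):
    # Subscribe to a per-meter command topic once connected; the broker
    # can then deliver commands over the meter-originated connection.
    client.subscribe(f"ami/{METER_ID}/cmd", qos=1)

def on_message(client, userdata, msg):
    print(f"Command received on {msg.topic}: {msg.payload!r}")

client = mqtt.Client(client_id=METER_ID)
client.on_connect = on_connect
client.on_message = on_message

# The client (meter) originates the connection and maintains a 60 second
# keep-alive, so it remains reachable from behind a NAT gateway.
client.connect(BROKER, 1883, keepalive=60)

# Publish a reading, then service the network loop; loop_forever() also
# handles automatic reconnection whenever the connection drops.
client.publish(f"ami/{METER_ID}/reading", "42.7 kWh", qos=1)
client.loop_forever()

Because the meter always dials out, the broker never needs a route in to the private network, which is exactly why this pattern sits so comfortably behind a NAT gateway.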

So, IPv4 with diodic NAT gateways will enable billions of services to connect securely and communicate and collaborate as members of the Internet of Things, of which AMI devices will form a small part.

Brian MacDonald is a Consulting Solution Architect and a member of IBM’s Global Centre of Competency for Energy and Utilities.

Contributors (and all-around good people to know)

Erwin Frank-Schultz is an IBM Executive IT Architect and leads the UK and Ireland Energy and Utilities technical community.

Michael Martin is an IBM Senior Executive Consultant in broadband networks and a member of IBM’s Global Centre of Excellence for Energy and Utilities.

Andy Stanford-Clark is an IBM Distinguished Engineer and the CTO for Smarter Energy in the UK and Ireland.

The opinions in this article are my own and don't necessarily represent IBM's positions, strategies or opinions.

Next we give the IPv6 evangelists in the group a chance to make their case for mandating the newer protocol in the solution.

One thing was clear right from the start – there wasn't a single argument put forward that could carry the day on its own.

IPv6 is more efficient and secure

The big win for IPv6 on the efficiency front has to be the elimination of NAT. IPv6 is able to support NAT, but with such a massive address space, who needs it? The IPv6 advocates argue that this delivers gains both in total network latency reduction and in protocol simplification.

Total latency in a network is more and more relevant in the M2M, Internet of Things (IoT) and utilities use cases, so any reduction is welcome. By freeing routers from the NAT execution path, both individual packet handling time and aggregate throughput should benefit.

Protocol simplification is a bit less clear. In its simplest form NAT is just about rewriting the addresses on packets as they traverse a router boundary. However, some protocols embed server addresses within their payloads. This means that routers must be aware of the protocol and rewrite the payloads appropriately if things are to continue working when a NAT router enters the picture. IPv6 can be viewed as facilitating innovation by getting rid of the need for NAT.
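
To make that concrete, consider FTP's PORT command, the classic example of an address embedded in a payload; this toy sketch (with made-up addresses) shows what an FTP-aware NAT has to rewrite:

# Toy illustration: FTP's PORT command embeds an IP address in the payload.
# A NAT router that only rewrites IP headers leaves this stale private
# address behind, so the server's data connection back to the client fails
# unless the router also understands and rewrites the FTP payload.

private_ip = "10.1.2.3"      # client's private address (illustrative)
public_ip = "203.0.113.5"    # NAT gateway's public address (illustrative)
data_port = 50000

def ftp_port_command(ip, port):
    # PORT h1,h2,h3,h4,p1,p2 -- address and port encoded in the payload
    return "PORT " + ",".join(ip.split(".") + [str(port // 256), str(port % 256)])

print(ftp_port_command(private_ip, data_port))
# -> PORT 10,1,2,3,195,80  (unroutable from the public Internet)
print(ftp_port_command(public_ip, data_port))
# -> what an FTP-aware NAT must rewrite the payload to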

Quality of Service (QoS) support was also thrown into the mix in the efficiency discussion. IPv6 advocates point to new headers that should allow routers to manage classes of traffic and “flows”. The consensus, though, was that all of this is optional and manufacturers are likely to do the minimum necessary to be compliant.

Multicast support also looks to be a win for IPv6 on the efficiency front with its ability to stream to multiple destinations.

The security side of the question is another area that is less clear-cut.

Both IPv4 and IPv6 provide support for IPsec – the IP-layer protocol for authenticating (including mutual authentication) and/or encrypting packets. In IPv4, IPsec was added after the fact through the RFC process and its implementation is optional. In IPv6, on the other hand, IPsec has been built in from the start and all implementations must support it. Note this doesn’t mean that all IPv6 traffic is authenticated and encrypted – it is up to the participating endpoints to decide whether to require use of the Authentication Header (AH) or Encapsulating Security Payload (ESP).

A second aspect of the security debate is the length of time that an implementation has been subjected to use in the field. Here IPv4 is the clear winner having been pounded on for decades. There is no doubt that new IPv6 implementations will have bugs including protocol implementation errors that will form the springboards for security exploits.

Given that both IPv4 and IPv6 provide IPsec support, all that can be said at this point is that IPv6 is no less secure than IPv4.

On balance we gave this area to IPv6 but only barely.

IPv4 is running out of space

Address space exhaustion has been talked about since the 1990s. IPv4 only has 2^32 (roughly 4.3 billion) unique addresses and they are pretty much gone. At the same time the demand for publicly routable addresses is exploding, both from new uses like machine-to-machine collaboration and from growing demand in countries like China and India.

Lack of available IPv4 addresses has been the single biggest driver behind the move to IPv6, but four factors have conspired to ease the pressure:

Classless Inter-Domain Routing (CIDR);

Widely available and low cost NAT routers;

Previously reserved networks such as 5/8 have been pressed into service; and

Institutions have released unused networks – some of them /8.

Here is a good Wikipedia article on the mechanics of IPv4 address exhaustion and how mitigation efforts have delayed the inevitable.

But does the dwindling IPv4 address space alone justify an AMI meter network based on IPv6? We concluded that it doesn’t.

In the context of our AMI network the address space exhaustion in IPv4 is critical only from a public addressability point of view. If we place the AMI meter network in a private address space then the IP addressing of the meter network could be delivered using one of the IANA reserved networks e.g. 10/8.
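
As a sketch of how such a scheme might be carved up (the per-concentrator /24 is an illustrative assumption, not a recommendation), Python's ipaddress module does the arithmetic:

import ipaddress

ami_network = ipaddress.ip_network("10.0.0.0/8")

# Illustrative plan: one /24 (254 usable hosts) per data concentrator.
subnets = ami_network.subnets(new_prefix=24)

print(f"/24 subnets in 10/8: {2 ** (24 - 8):,}")  # 65,536 concentrators
for subnet, _ in zip(subnets, range(3)):
    print(subnet)  # 10.0.0.0/24, 10.0.1.0/24, 10.0.2.0/24

Sixty-five thousand concentrator subnets of roughly 250 meters each covers on the order of 16 million endpoints before 10/8 alone is exhausted.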

There is at least one question that will cloud this neat conclusion. Is public addressability of AMI meters required? We'll come on to this point a bit later.

IPv6 is inevitable so just get it over with

We agreed that IPv6 is inevitable – the pressure on the IPv4 public address space that is coming from M2M and from the developing areas of the world will ensure it – the question is when.

There is a massive inventory of IPv4-only equipment that simply cannot be upgraded to join the IPv6 world. This fact alone ensures that IPv4 and IPv6 will have to coexist for as long as it takes to address the worldwide inventory of IPv4-stranded equipment. Is this going to take 10 years? 15 years? Regardless, organizations can rely on support for both protocols and supported techniques to connect IPv4 and IPv6 networks for the lifespan of any AMI project that starts now or in the near future.

We had to call this one a draw.

Public addressability in AMI – requirement or anathema?

As we said earlier, these arguments don’t exist alone – they have to be viewed from the end-to-end solution perspective. In the next article in this series we’ll tackle the question of public addressability in the AMI meter network and its impact on the IPv6 debate.

Brian MacDonald is an IBM Consulting Solution Architect and a member of IBM’s Global Centre of Competency for Energy and Utilities.

The other day we had an interesting debate about whether IPv6 is a red herring in the design of today's Advanced Metering Infrastructure (AMI) solutions. We covered some interesting points that I want to summarize here and in the next few blog posts.

What’s a “red herring”? The short answer is a literary device that leads the reader towards a false conclusion. See Wikipedia for the full story.

Some Background

The day the first electricity meter was installed, the business problem of collecting meter readings was created. Over the years the ability to collect meter reads remotely has evolved using both wired (e.g. Power Line Carrier or PLC) and wireless radio (e.g. radio frequency mesh) networks that enabled one-way and then two-way communications.

Figure 1 - High Level Network overview

In the past (and in many cases today) purchasing a meter vendor’s smart electricity meters required installation of that vendor’s Head End System (HES) to manage all communication with those meters. Beyond the Head End System in the diagram above, the network of meters is opaque – the meters are not directly addressable.

In the early days of AMI deployments this wasn’t seen as too much of a hindrance. At the application layer everyone just used application identifiers like Service Delivery Point ID that the HES could then map onto the closed meter network.

However, as AMI deployments have matured the network technology limitations imposed by a closed, proprietary meter network have become more relevant.

Interoperability has started to change things

Distribution and Supply companies around the world realized some time ago that the historical tight coupling of a vendor’s meter communications to a specific HES implementation presented challenges and costs. As an example, a multiple meter vendor strategy brings along with it the installation and operation of the corresponding HESs.

Regulatory and market pressure (particularly in the European Union) has incented meter vendors to develop arrangements where one agrees to build support for the other’s meters into their HES, and to start supporting more open, standards-based network technologies – including IP.

So why didn’t open, standards based protocols like IP get used from the start?

An answer (and there are many…) is that the initial implementations of the wireless radio or wired power line segment of the overall AMI network had very limited and fragile bandwidth. It was felt that the overhead that comes with protocols like IP would be an unacceptable drag on throughput.

A second answer would point to meter vendors protecting what each regarded as an important competitive advantage in their specific meter communications solution.

A third answer would be the security advantage (however dubious) of using a protocol that is not in the public domain.

Why is the focus always on IP and not some other open standard?

This question was a topic of intense debate but is, for all intents and purposes, decided. The industry voted with its dollars and almost all new AMI products – particularly networking gear – sport the “Supports IP” feature.

The short technical answer (from a paper whose reference I’ve lost) is that IP has demonstrated it can provide network services (layer 3 of the OSI model) in disparate physical (layer 1 or PHY of the OSI model) and media access control (part of layer 2 or MAC of the OSI model) environments such as Ethernet, DSL, mobile phones, cable modems, WiFi, power line carrier, etc. The argument goes that by standardizing on IP the emerging smart grid can avoid fragmenting across multiple PHY/MAC implementations and accelerate integration of management tools and enterprise applications.

A note of caution here – many people in this debate have confused IP with TCP/IP. You can find a good description of the differences on Wikipedia, but suffice it to say that many of the internet protocols people are familiar with, like FTP, SNMP, SSH, HTTP, etc., are connection-oriented and use the Transmission Control Protocol (the TCP in TCP/IP). Our friend IP is only interested in moving the data around – essentially addressing and routing.
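
The distinction is easy to see with Python's standard socket module: SOCK_STREAM asks TCP to build a connection on top of IP, while SOCK_DGRAM hands individual datagrams straight to IP's addressing and routing (the endpoints below are illustrative):

import socket

# TCP: connection-oriented. The TCP layer establishes and maintains a
# connection (handshake, ordering, retransmission) on top of IP.
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_sock.connect(("example.com", 80))         # illustrative endpoint
tcp_sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(tcp_sock.recv(64))
tcp_sock.close()

# UDP: connectionless. Each datagram is simply addressed and routed by
# IP; there is no connection, ordering or delivery guarantee.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_sock.sendto(b"meter reading", ("192.0.2.10", 9000))  # illustrative
udp_sock.close()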

In the next article in this series we’ll take a look at some of the arguments in favor of IPv6.

Brian MacDonald is an IBM Consulting Solution Architect and a member of IBM’s Global Centre of Competency for Energy and Utilities.

Solution Architects are often faced with the challenge of helping a customer choose a third-party vendor for hardware, software and services in a complex environment. In my next few posts I'll give you an overview of an approach that I've used to navigate this challenge and some of the pitfalls I've identified.

Let's start off with what I view as the single most important input to any vendor selection process – the requirements that the vendor's solution must meet. At the end of the day, the work invested in writing understandable, unambiguous and testable requirements will determine whether the vendor solution that is procured is capable of meeting the customer's business and technical needs.

Requirements break down into two major categories: functional and non-functional. Writing good functional requirements is the subject of many books, so let's spend some time where most Solution Architects focus – on the non-functional requirements, or NFRs.

A useful way to approach this is to look at three examples of the same requirement written by different authors.

Example 1

“The vendor's solution must be scalable to meet business needs.”

There really isn’t much good to be said about this NFR but it does illustrate some common pitfalls:

Ambiguity is a vendor’s friend. To win in a competitive procurement, vendors will present their solution in the best possible light and at the most competitive price point. This isn’t to say that vendors will deliberately mislead you, rather that the vendors’ responses will be based on their individual interpretations of the same requirements. This will make the proposals difficult to compare and evaluate. In this case our author has invited the vendors to apply their own interpretations to “scalability” and “business needs”;

The requirement is incomplete. There is no guidance to the vendor on the “what’s” or “how’s” that characterise the phrases “business needs” and “scalable”;

The vendor does not have to answer any “yes/no” questions asserting its ability to meet or comply with the requirement, or provide any quantitative answers. The response from each vendor may vary from a single sentence to several pages. Again, this will make side-by-side comparisons and scoring more difficult; and

In the future, there will be no way to test the vendor’s response to this NFR against the solution that is delivered. This could be very important in situations where the vendor’s product is having difficulty in a production setting.

Example 2

This example of the NFR has improved somewhat but it is still lacking many of the attributes we discussed in Example 1.

It is potentially testable, but there is still significant ambiguity in the shape of the transaction volumes across the processing day.

The NFR needs more detail on the definition of transaction success and transaction failure.

Example 3

“The vendor's proposed solution must be capable of successfully processing the business volumes set out in the following table.

Year   Customer record updates   Sales Transactions   00:00–07:59   08:00–15:59   16:00–23:59
1      20,000                    40,000               5%            75%           20%
2      80,000                    45,000               5%            75%           20%
3      500,000                   750,000              5%            75%           20%
4      2,000,000                 6,000,000            5%            75%           20%

In addition to the even spread of transactions within a time band in the table above, the vendor's proposed solution must be capable of supporting a peak transaction volume of 3x the average transaction volume over a period of one hour without measurable impact to the customer sales representative GUI responsiveness, where impact means an increase in text-screen-to-text-screen response time of more than 0.5 seconds.”
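
To see what this NFR actually asks a vendor to size for, here is a quick calculation (assuming, as the time bands suggest, that the table volumes are daily figures):

# Year 4 volumes from the table, assumed to be daily figures since the
# time bands carve up a single processing day.
sales_per_day = 6_000_000
band_share = 0.75        # 08:00-15:59 carries 75% of the day's volume
band_hours = 8
peak_multiplier = 3      # peak requirement from the NFR

avg_per_hour = sales_per_day * band_share / band_hours
peak_per_hour = avg_per_hour * peak_multiplier

print(f"Busy-band average: {avg_per_hour:,.0f} tx/hour (~{avg_per_hour/3600:,.0f}/sec)")
print(f"Peak to size for:  {peak_per_hour:,.0f} tx/hour (~{peak_per_hour/3600:,.0f}/sec)")
# Busy-band average: 562,500 tx/hour (~156/sec)
# Peak to size for:  1,687,500 tx/hour (~469/sec)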

Before going further, read each of the NFRs above from the point of view of a vendor in a competitive procurement. For each, how many interpretations of the requirement can you come up with, and how could they assist you (as a vendor) in reducing the total cost of the solution you would propose to this customer?

In this third example, the specificity and testability have improved, but it is still missing a specific “yes/no” question requiring the vendor to assert the proposed solution’s ability to meet this NFR.

The NFR could be further improved by asking the vendor to propose options to deal with the significant increase in volumes in year 4. Incrementally adding capacity versus building for end-state volumes on day 1 could result in cost savings to the customer. The NFR could also ask the vendors to provide details on how they would add the capacity in a non-disruptive manner.

The requirement implies that there are definitions of a “Customer record update” and a “Sales Transaction”, and of the criteria for “success”. The NFR should refer the vendors to those definitions, and the overall author of the requirements should be sure that those definitions are in place.

This version of the requirement also illustrates that there will always come a point at which we need to trade off being sufficiently prescriptive to get clear answers from the vendors against leaving them sufficient room to propose innovative and cost-effective solutions.

A common practice that I see is to have several individuals (typically domain experts) write NFRs for their specific area. This has the advantage of providing good coverage within each area, but it can expose the NFRs to internal inconsistency and contradictions if there is not an overall author or quality assurance process to reconcile conflicts.

Ideally, the Solution Architect will be involved in the writing of the non-functional requirements; however, you will often find that the NFRs have already been published or finalised before you are brought into the vendor selection process.

Here are some suggestions for actions that you can take to mitigate the exposure created by inadequate non-functional requirements:

Investigate whether it is possible, as part of the process, to write and send out a set of additional questions or requirements to augment areas of weakness in the procurement;

Work with the customer to create a set of clarifying questions to pose to vendors during interviews or demonstrations;

Where the procurement process has already concluded, provide the customer with input to the commercial agreements (e.g. Statement of Work) that will be written with the successful vendor; and

Work with the customer and project managers to identify high-risk aspects of the vendor’s solution and add activities to the project to mitigate those risks, e.g. performance testing.

In the next post I'll discuss some of the perils of scoring vendor responses.

Brian MacDonald is a Consulting Solution Architect in IBM Global Business Services and a member of the Global Centre of Competency for Energy and Utilities. He has designed and implemented smart metering solutions in North America and is now on assignment to the United Kingdom.

Please note that the postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions.

I think it is safe to say that the use of a Design Authority (DA) in cross-functional programmes like Smart Metering is now an accepted best practice.

But why are some DAs more effective than others?

During the five years that I’ve been working on Smart Metering and Meter Data Management System (MDMS) projects I’ve collected some “themes” that I think answer this question.

Start with an Executive Sponsor or “Champion”

Newly formed Design Authorities always need a senior executive sponsor or champion who can coach them on how to be effective within the organisation and help them put their recommendations into practice. Much as we’d all like to go from Novice to Revered Authority in one step – we can’t. It takes hard work, dedication and sustained excellence to build up organisational influence and a reputation for good advice and decisions. The same is true for each new DA.

The executive sponsor also provides much needed organisational legitimacy or “clout” to the new DA. The need for this role should decline as the DA builds its own reputation and organisational influence.

Frequent escalations or appeals to the executive sponsor can be an early warning sign that the DA is struggling to establish itself. On the other hand, evidence that project teams actively seek out the advice and counsel of the DA is an early indicator that the DA is succeeding.

Define “effective”

The stakeholders in cross-functional programmes like smart metering will have widely divergent perspectives on what makes a DA effective. For example:

IT managers will look to the DA to make and enforce standards across projects;

Technical architects will look to the DA to resolve (possibly through arbitration) complex technical issues and cross-project infrastructure conflicts; and

Business sponsors will look to the DA to protect committed business benefits.

Each of these perspectives could be a full-time job and, left on its own, a newly formed DA will struggle to adequately serve all stakeholders.

Choosing a realistic mix of priorities and putting measurements in place to support them will accomplish two very important things:

Focus the limited time and resources of the DA effectively:

A common challenge that a new DA must deal with is a barrage of topics and/or disputes. An agreed set of priorities will help in establishing the order in which topics are dealt with and allay stakeholder concern that they are not getting an appropriate share of the DA’s attention.

Quantify the benefits of the DA:

Measurements will allow the DA, and more importantly the stakeholders paying for the DA, to take an objective view of the DA's accomplishments;

A common criticism of a new DA that does not have clear measurements is that it only ever has meetings – it never makes decisions or recommendations.

Establish process early and document decisions

The DA should be an exemplar of architecture best practices including the use of artefacts like the Architecture Decisions document. This document and the process that goes with it will serve two very important purposes. First it will communicate DA decisions outward to stakeholders and affected parties. Second it will serve as a record of the thinking and evaluation that went into specific decisions that can be reviewed whenever debate on a given topic comes up again.

The process followed by a DA should have a degree of formalism to it such that people new to the DA can understand how it functions and how they can successfully engage with it.

Evolve the DA as it gains credibility and influence

The structure of the DA needs to reflect the size of the project or programme that it has been established to work on. Depending on the programme's size and scope, the DA can take a number of different shapes.

Successful DAs that I have worked with tend to be those that have started small with a direct link to a specific project and have then evolved as they have been given greater scope and responsibility.

Restrict membership

Ever been to a working group that fills a room and is still effective? Me neither.

One of the big challenges with any DA is keeping the number of participants to a workable number while still providing required coverage for complex topics and multiple stakeholder organisations.

A technique that can help with this challenge is to establish different categories of participation. This approach starts with a core decision making membership at the centre that then builds outward to include extended team members brought in to address specific domains or represent specific stakeholders. A third tier could include external subject matter experts who provide needed depth in areas where the DA membership may not have sufficient experience.

There is no “silver bullet”

Every DA I have been involved with has had to evolve from its starting point and address the points above to work within organisational structure and realities.

Special thanks to my colleagues in IBM Canada Delivery Excellence, Tom Bridge and Sharon Hartung, whose work on Design Authorities I’ve plagiarised shamelessly!

Please note that the postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions.