Business architecture view

Risk must be carefully monitored (through data collection), evaluated and acted upon. This means (see also the illustration below):

Enterprise business functions should be enriched to generate the risk-related data.

Those risk-related data need to be collected in the enterprise data warehouse together with other business data.

Some business processes need to be updated to embed risk-related activities.

A set of risk-related rules, logic and risk-related knowledge should be able to use the risk-related and other business data to detect acceptable limits of risk as well as interdependencies and correlations between different risks.

Some business processes for risk mitigation may be automatically activated (see the sketch after this list).

Risk-related indicators and alerts should be available in the form of dashboards and reports for different staff members.

Staff members should be able to initiate business processes based on the observed risk-related information.
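As a rough illustration (the names RiskRule and start_mitigation_process are hypothetical, not a prescribed design), the sketch below shows how risk-related rules might compare collected indicators against acceptable limits and automatically activate a mitigation process:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class RiskRule:
    """A risk-related rule: an acceptable limit for an indicator plus a mitigation action."""
    indicator: str                       # name of the risk-related indicator
    limit: float                         # acceptable limit for this indicator
    mitigation: Callable[[float], None]  # mitigation process to activate when the limit is breached

def start_mitigation_process(value: float) -> None:
    # Hypothetical hook into the process engine that starts a risk-mitigation process.
    print(f"Mitigation process started (observed value: {value})")

def evaluate_risk(indicators: Dict[str, float], rules: List[RiskRule]) -> None:
    """Compare the collected indicators (e.g. from the data warehouse) against the rules."""
    for rule in rules:
        value = indicators.get(rule.indicator)
        if value is not None and value > rule.limit:
            rule.mitigation(value)       # automatically activate the risk-mitigation process

# Example: an exposure indicator collected together with other business data.
evaluate_risk({"credit-exposure": 1.2e6},
              [RiskRule("credit-exposure", 1.0e6, start_mitigation_process)])
```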

Business-generic capabilities involved

The following business-generic capabilities are involved in the ERM platform:

Management by processes

Efficient data gathering channels

Single version of truth for data

Ingestion (into the data warehouse) of external information

Efficient dissemination channels

Effortless collaboration within groups / communities of practice

Formalized business logic

Supremacy of management by processes

Managing any work by processes is the key business capability which allows the enterprise to address risk-related issues in a proactive manner. Risk is strongly related to how the business processes are carried out. By understanding a process (i.e. by being able to simulate it), the business may predict how the risk changes during the execution of that process. The explicit description of processes makes it possible to add a few “check-points” within any process to examine its risk-related “health”.

Business processes act as a skeleton to which the enterprise adds risk management (as shown in the picture below) – each usual activity is enriched with risk-related monitoring and evaluation.

The risk evaluation may initiate some risk mitigation processes. The risk evaluation may be as complex as necessary, and it may include simulations (e.g. value at risk and stress testing), and the conduct of statistical and scenario analysis.

IT-generic capabilities involved

The following IT-generic capabilities are involved in the ERM platform:

Ideally, such dependencies can be generated from business processes and applications (i.e. from your EA repository).

In a real situation this illustration may become rather complex, so some techniques for better understanding may be necessary. For example, selecting a rectangle highlights all connected rectangles and links, as shown below:

And a final piece of advice: be careful with arrows – people may interpret them differently.

2011-08-17

In my experience, BPM and SE are very natural friends (with help from SOA, EA and BA) which work well together within a proper architecture.

Some basics: Any complex system is a dynamic set of artefacts (or building blocks?); in the case of a typical business system those artefacts are: processes, services, events, data structures, documents, rules, roles, activities, audit trails and KPIs. Artefacts are interconnected and interdependent. We have to anticipate potential changes: policies, priorities, compliance, technology, etc. Implementation of such changes necessitates the evolution of some artefacts and the relationships between them. It must be easy to modify all artefacts and relationships without causing any negative effects.

My main architectural principles for creating flexible systems:

All artefacts must be evolved to become digital, external, virtual and components of clouds

Then BPM, EA and SE have to work together to create explicit and executable models. The best example of executable models is executable business processes. Any business process is a relationship between many artefacts: who (roles) is doing what (business objects), when (business events), why (business rules), how (business activities or other processes) and with which results (KPIs). At the same time, such a process is an explicitly-defined coordination of services to create a particular result. So, there is a recursive relationship between services and processes (illustrated by the sketch after this list):

all our processes are services,

some operations of a service can be implemented as a process,

a process includes services in its implementation.
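The following minimal sketch (with hypothetical Service and Process classes) illustrates this recursion: a process is itself a service, and its implementation is an explicit coordination of other services, some of which may again be processes:

```python
class Service:
    """An explicitly-defined, operationally-independent unit of functionality."""
    def __init__(self, name):
        self.name = name

    def execute(self, request):
        # A leaf service (an indivisible activity) simply produces its result.
        return f"{self.name}({request})"

class Process(Service):
    """A process is a service whose implementation is an explicit coordination
    of other services (which may themselves be processes)."""
    def __init__(self, name, steps):
        super().__init__(name)
        self.steps = steps               # a simple ordered coordination, for brevity

    def execute(self, request):
        result = request
        for step in self.steps:          # coordinate the constituent services
            result = step.execute(result)
        return result

# "Russian dolls": a process built from plain services and another process.
invoicing = Process("invoicing", [Service("compute-price"), Service("issue-invoice")])
order_to_cash = Process("order-to-cash", [Service("validate-order"), invoicing, Service("ship")])
print(order_to_cash.execute("order#42"))
```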

This recursive relationship is the basis of a modelling procedure (the core of SE) whose purpose is to analyse a building block (a process or just an activity – what is it supposed to do, and should it be considered as a whole?) and to synthesize its implementation (how does it carry out its function, and should it be considered as a composite?) as the explicit coordination of other building blocks (processes or just activities).

It is an iterative procedure – it can be applied until only indivisible building blocks (i.e. activities) are left. During modelling it is necessary to collect and refine the different artefacts. To avoid getting bogged down in detail it is useful to construct building blocks recursively, like Russian dolls.

Of course, owing to the very nature of modelling as a creative problem-solving human activity, each person does it in his/her own way, and for the same subject two different people may produce two different models. The proposed modelling procedure can’t change this, but it does help to uncover the same artefacts.

Privacy considerations

Only authorized persons can actively contribute (i.e. add some text) to e-consultation services.

The identity of the person may be hidden.

Enrollment will include identity verification.

Further, a person can hide his/her identity behind an avatar.

The correspondence between an avatar and the identity is secret, but may be disclosed in the case of misbehavior. Example: Facebook.
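A possible sketch of such an arrangement (all names are hypothetical): the avatar-to-identity mapping is kept private and can only be disclosed through an explicit, audited operation:

```python
import uuid

class AvatarRegistry:
    """Keeps the avatar-to-identity mapping secret; discloses it only when justified."""
    def __init__(self):
        self._identity_by_avatar = {}    # private mapping, never exposed directly
        self.audit_trail = []            # every disclosure is recorded

    def enroll(self, verified_identity: str) -> str:
        """Enrollment assumes that identity verification has already taken place."""
        avatar = f"user-{uuid.uuid4().hex[:8]}"
        self._identity_by_avatar[avatar] = verified_identity
        return avatar                    # the person contributes under this avatar

    def disclose(self, avatar: str, reason: str) -> str:
        """Disclosure of the real identity, e.g. in the case of misbehavior."""
        self.audit_trail.append((avatar, reason))
        return self._identity_by_avatar[avatar]

registry = AvatarRegistry()
alias = registry.enroll("Jane Citizen (verified)")
print(alias)                                     # visible to everyone
print(registry.disclose(alias, "misbehavior"))   # authorized disclosure only
```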

Nomenclature of e-consultation services

This nomenclature may not yet be complete.

Question and answer discussion forums

A free-form, ongoing thematic discussion initiated within a community of interest. Each contribution is named. A discussion may be closed (restricted to the community) or open to everyone (even to the whole Internet). Examples: discussions on LinkedIn.

On-line polls

A time-bounded questionnaire.

E-petitions or on-line testimonies

A person (or an association) initiates a formal demand to public services. Such a demand should start a process which leads to a meaningful response. People other than the initiator can express their opinion (support or not) about the demand. Example: Reporting Damaged Roads and Paths https://www.contact.act.gov.au/app/answers/detail/a_id/22/~/reporting-damaged-roads-and-paths

E-panels

A time-bounded open group discussion, restricted to nominated participants, on issues of public interest.

Editorial consultations

A time-bounded multi-authoring of a document or a set of documents. There are several options for who can edit the text. One possible option is that many people contribute to the content via comments and a small (editorial) group modifies the text to reflect those comments.

Use of the e-government platform

E-consultation services are applications implemented on top of the e-government platform (see chapter 5). The latter provides various common services to facilitate the implementation of such applications and to keep the same look and feel for a better user experience. Each application is self-contained, developed in accordance with the platform’s rules, and may evolve without causing negative effects to others. The number of applications within the platform is not limited.

Enabling the public-private partnership

A systematic approach to critical IT issues such as authorization, data security and access control clears the way for the re-use of available data. It becomes possible to estimate how the disclosure of some of those data will affect the level of protection of the remaining data.

The ability to open some data makes it possible to attract private investment for improving some applications (it is assumed that applications are otherwise developed mainly through centralized capital investment).

The potential for attracting such investment should be estimated for each application.

The big picture of e-government platform

E-consultation services are e-government services. The e-government (as one of its functions) provides governmental services (via ICT) to partners (citizens, enterprises, associations, etc.). To introduce e-government without disturbing existing governmental applications, it is proposed to position the e-government as a layer between the partners and the existing governmental applications (as shown in the figure below).

The partners-facing part of the e-government is a collaborative extranet which is similar to popular social networking tools (Facebook, LinkedIn, etc.) and e-banking. Its main functionalities are the following:

secure repository for short messages, documents, and video;

dedicated (including role-based) information and functionality;

diverse services as small pluggable applications;

direct channel to the governmental business processes; and

unified view of central, regional and local governments.

The government-facing part of the e-government is the integration and coordination capabilities which are necessary to fulfill the needs of partners.

It is important that the whole e-government is separated from the existing governmental environment. This separation means operational and evolutionary independence.

The common functionality of the e-government platform is presented below.

Expected advantages:

Quick implementation

Easier maintenance

Explicit security

Uniformity for the users

Implementation principles

Keep the conceptual integrity.

Take into account socio-technical aspects, because how you do something is sometimes more important than what you do.

Unify the infrastructure and reach different mobile devices

Systematically use open source software

Provide security at the level of private banking

Ruthlessly validate the implementation with international experts, hacker groups and political parties

Develop in an agile manner and deploy step-by-step within the common architecture

Guarantee total traceability and records management

Exchange via electronic documents

Infrastructure implications

To cover the population, it is necessary to establish a network of social computing centres. The latter may be located at local community premises (e.g. a public library, “hôtel de ville” (town hall), etc.). Those centres may also provide wireless access points.

E-consultations are also more formal and structured than discussions in the informal virtual public sphere. They tend to have a set duration and agenda, employ moderators, and have topics for discussion pre-defined by the host.

There are five common types of e-consultations:

question and answer discussion forums

on-line polls

e-petitions or on-line testimonies

e-panels

editorial consultations

Usual challenges:

population coverage

integrity

visibility

transparency and disclosure obligations are vital (with confidentiality only applying on matters of a personal nature)

It has been proven that the deployment of e-government [E-government is the use of information and communication technologies (ICTs) to improve the activities of public sector organisations] brings the following advantages:

streamlining of the interactions of the citizens and business with the central, regional and local governments;

increase in the performance of workers at governmental agencies;

reduction in the possibilities for corruption.

How can an e-government implementation help Tunisia (which is already the top African country according to the UN e-government survey) to move forwards at this moment in its history?

Which e-government services (out of about 1000 items in e-government catalogues) should be the first priority?

Today, it appears that Tunisia urgently needs a much improved handling of political rights, provisioning of social security and, in general, establishment of trust between the population and the public sector organisations. Bearing this in mind, a list of potential e-government capabilities by domain could be the following.

373 unique replies from unique people (the first reply is used if not explicitly specified)

signal to noise ratio is almost 1 to 4

So, a possible summary of the EA status is “immature (discipline, practice, etc.) with problems in many aspects (acceptance, people, tools, definition, etc.) which is requested to deliver results with higher speed”. Maybe executability will help?

In this model, each layer is a set of services, each of which addresses particular concerns. The services are cloudable.

The business data layer comprises many pieces of information – names, dates, files, etc. – which are stored in existing repositories (e.g. databases, document management systems, web portals, directories, e-mail servers, etc.). Services at this layer are stateless, contain no business logic (although they may contain some access logic) and, usually, co-locate with their underlying databases. They are highly cloudable.

The business objects layer comprises the many objects specific to a particular business, e.g. a business partner, a product, etc. This layer hides the complexity for manipulating the objects, which are actually collections of data together with any dependencies between them. Again, services at this layer are stateless, contain no business logic (although they may contain some technical transformation logic), and are implemented as simple compositions. They too are highly cloudable.

The business routines (or regulations) layer comprises the actions which must be carried out on the business objects to perform the business activities. Services at this layer are stateless and implemented as complex compositions. The latter are defined in a normal programming language (e.g. Java, Python), an interpretive language (e.g. Jython) or, even, in BPEL. A specialised environment (actually a service called a “robot”) may be needed to execute these services, but this “robot” is rather cloudable.

The business monitoring layer analyses the execution of the business processes. A large amount of data (events and audit trails produced as the result of execution) is treated to extract any correlations and meaningful information. Services at this layer are both stateful and stateless, but they mainly operate in the “background” and thus are rather cloudable.

The business intelligence layer implements enterprise-wide planning, performance evaluation and control actions applied to the business processes. Services at this layer are cloudable.
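As a rough sketch (the service names are hypothetical, not part of any product), the first three layers might stack up as follows, each layer being a stateless composition of services from the layer below:

```python
# Business data layer: stateless access to an existing repository (no business logic).
def customer_data_service(customer_id):
    # In reality this would query a database co-located with the service.
    return {"id": customer_id, "name": "ACME SA", "country": "CH"}

def order_data_service(order_id):
    return {"id": order_id, "customer_id": 42, "amount": 1500}

# Business objects layer: hides the complexity of assembling an object from its data.
def order_object_service(order_id):
    order = order_data_service(order_id)                             # a simple, stateless composition
    order["customer"] = customer_data_service(order["customer_id"])
    return order

# Business routines layer: a more complex composition expressing a business regulation.
def approve_order_routine(order_id):
    order = order_object_service(order_id)
    # The "robot" executing such routines would also emit the events and audit trails
    # that the business monitoring layer analyses.
    return "auto-approved" if order["amount"] < 10000 else "manual approval required"

print(approve_order_routine(7))
```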

The multi-layer implementation model and some technologies

The figure below shows the relation of the multi-layer implementation model to some other technologies. Normally, some services are accessible from a portal or workplace. They “float” in an Enterprise Service Bus (ESB). The latter is used only for service-to-service connections at the technical level. Ideally, an ESB should be based on a solid computing basis which can be provided by a grid, modern virtualisation infrastructure or cloud computing.

Extra considerations about the composition of services (or integration)

Some of the services mentioned above can be qualified as compositions of other services. In addition, interactive services from a portal or workspace are also, in general, compositions. For example, a user may invoke different services whilst staying on the same page (thanks to AJAX).
Any composition of services may manipulate business data and thus may disclose them. The following considerations may help to reduce the risks related to data disclosure:

2011-07-01

Big picture

The effective use of cloud computing at the enterprise level is a two-way street:

the use of cloud should be architected for the needs and realities of a particular enterprise and

the application portfolio, technologies, etc. used in an enterprise should be adapted to achieve the full potential of cloud computing.

In general, EA deals with a system of systems. In general, those systems are distributed – each of them is an interrelated and interconnected set of business artefacts [events, rules, processes, documents, etc.] and technical artefacts [servers, OSes, databases, storage, applications, etc.] spread over the network. With cloud computing, the network becomes rather versatile (many zones with different characteristics) and transparent (it is easy to move some artefacts from one zone of the network to another).

Considering that EA knows all artefacts and (ideally all) relationships between them, EA should also know the impact (implementation time, risks, cost, performance, etc.) of a particular allocation of some artefacts to some zones, in order to optimise (easy to create, easy to operate, easy to maintain and easy to evolve) the allocation of all artefacts.

A simple allocation model

Let us consider cloud as a set of the following zone types (they are named using different colours):

Although some of these zone types (e.g. the VIOLET one) may never exist in a particular enterprise, all of them are listed for completeness. The BLUE and VIOLET zone types are built with a set of trusted service providers. The term “zone types” is used because an enterprise may have several zones of the same type (e.g. more than one provider for VIOLET zones).

Practically all artefacts may be in any of these zone types. During the continuous virtualisation of technical artefacts, almost all of them may be moved from the GOLD zone type to the ORANGE and GREEN zone types, and then to the BLUE and VIOLET zone types. Artefacts, such as applications, may or may not be transformed before the move.

The decision framework takes into account factors such as

data sensitivity,

security of data,

network latency,

the intensity of use,

artefact architecture,

the technologies involved,

dependencies between services,

SLAs,

BCDR requirements,

the existing zone (including its operating cost and risks),

the target zone (including its operating cost and risks),

the cost of moving,

etc.

Also, the decision framework reflects the business strategy, e.g. an organisation which anticipates a rather aggressive decentralisation shouldn’t promote the use of the ORANGE zone type.
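A minimal sketch of such a decision framework (the factor weights and penalties are illustrative assumptions to be replaced by your own rules):

```python
# Illustrative zone types from the allocation model.
ZONES = ["GOLD", "ORANGE", "GREEN", "BLUE", "VIOLET"]

# Hypothetical per-zone penalties for two of the factors (to be filled in by your own rules).
DATA_SENSITIVITY_PENALTY = {"GOLD": 0, "ORANGE": 1, "GREEN": 2, "BLUE": 4, "VIOLET": 5}
OPERATING_COST           = {"GOLD": 5, "ORANGE": 4, "GREEN": 3, "BLUE": 2, "VIOLET": 1}

def score(artefact, zone):
    """Lower score = better target zone for this artefact."""
    penalty  = artefact["sensitivity"] * DATA_SENSITIVITY_PENALTY[zone]
    penalty += artefact["cost_weight"] * OPERATING_COST[zone]
    penalty += artefact["cost_of_moving"].get(zone, 0)
    return penalty

def best_zone(artefact, allowed=ZONES):
    # The business strategy is reflected by restricting the allowed zone types.
    return min(allowed, key=lambda zone: score(artefact, zone))

ecm_service = {"sensitivity": 3, "cost_weight": 1, "cost_of_moving": {"BLUE": 2, "VIOLET": 3}}
print(best_zone(ecm_service))                               # all zone types allowed
print(best_zone(ecm_service, ["GREEN", "BLUE", "VIOLET"]))  # strategy forces a move off-premise
```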

The artefacts (business and technical) mentioned above are actually services which implement related artefacts, and sometimes there is a trivial dependency between business and technical artefacts. For example, business documents (business artefact) are implemented by Enterprise Content Management (ECM) services (technical artefacts) which require a database, file storage, application server, backup, monitoring, etc. (technical artefacts).

Services are useful for building cloud-optimised solutions

Ideally, a cloud-optimised solution is a set of interrelated and interconnected services which are good cloud citizens (or highly cloudable). In reality however, there are still many classic monolithic applications which are actually conglomerates of many potential services and therefore it is not easy to evaluate how cloudable they are. For this reason, any approaches for replacing monolithic applications (existing and/or new) by coordinated sets of services are very welcome. Some of the related concepts are mentioned below.

Services (defined as explicitly-defined and operationally-independent repeatable units of functionality that create a particular result) and, especially, stateless services are the best candidates for clouds (i.e. they are highly cloudable) – just add more instances but be careful about dependencies.

SOA (defined as an architectural approach for constructing software-intensive systems from a set of universally interconnected and interdependent services) is a way of thinking in terms of services (e.g. large, more functional, services are assembled from small, less functional, ones).

Idempotency (defined by Wikipedia as the property of certain operations to be applied multiple times without changing the result) applied to services helps to build reliable compositions of services – see the IRIS pattern from my book. Recently, the power of idempotency was demonstrated in April’s AWS issue – see http://www.twilio.com/engineering/2011/04/22/why-twilio-wasnt-affected-by-todays-aws-issues/. Also, note that the SAP BIT420 training course has an example of idempotent services.
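A minimal sketch of an idempotent service operation (hypothetical names): the result of a request is recorded under a request identifier, so re-invoking the operation (e.g. after a time-out within a composition of services) does not change the result:

```python
class PaymentService:
    """Idempotent 'pay' operation: safe to retry within a composition of services."""
    def __init__(self):
        self._results = {}   # request_id -> result of the first successful execution

    def pay(self, request_id: str, amount: float) -> str:
        if request_id in self._results:       # duplicate invocation: same result,
            return self._results[request_id]  # no second payment is made
        result = f"paid {amount} (request {request_id})"
        self._results[request_id] = result
        return result

service = PaymentService()
print(service.pay("req-1", 100.0))   # first call performs the payment
print(service.pay("req-1", 100.0))   # retry returns the same result unchanged
```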

The functioning of any enterprise is the coordination of many activities (human and automated) – see http://improving-bpm-systems.blogspot.com/2011/02/explaining-ea-business-architecture.html. Considering that the majority of those activities are actually invocations of services, it is possible to say that the functioning of the enterprise is the coordination of many services via different techniques: token-based, rule-based, event-based, data-based, manual-based, etc. If all coordination is made explicit (via BPM), this will provide the necessary information about all static and some dynamic relationships between services.

Typical questions about cloud computing

The following provides a typical list of enterprise-wide concerns relative to cloud computing.

How can we know if cloud computing is even appropriate for our company?

Which systems, applications and business processes are the best candidates for cloud outsourcing?

How can we effectively manage the interrelationships between systems, business processes and data we want to outsource with those that will remain in-house?

What would be the most effective cloud configuration for our company (private, public, hybrid, community, etc.)?

How do we protect sensitive agency data in the cloud?

How do we comply with industry records management requirements in a cloud environment?

What’s the best way to assess and manage our company’s risk profile in a cloud environment?

What are the actual costs of our IT operations today? What cost savings can be expected by transitioning to cloud computing?

How well are our current corporate IT investments performing? What performance improvements are possible in a cloud environment?

Where do we start? What are the steps to get from where we are today to a cloud environment?

It is clear that the simple allocation model (in which the decision framework is filled by your rules) and the ability to deliver solutions as a set of services will be of considerable help to address systematically those concerns.

2011-06-25

Decompose Into Patterns (DIP)

A friend of mine asked me to have a look at his first try of business process modelling in BPMN. The modelled process is well-known – “gestion de sinistres” or “claim processing”.

An apartment owner/leaseholder who has had an accident informs the property managing company (régie), which calls a repair service and validates the repair cost with the insurance company. The managing company then controls the work done by the repair service and asks the insurance company to reimburse the cost. The latter transfers the money to the former, which pays the invoice.

The following picture is an attempt to model this process.

This diagram does not show the structure of the process and is thus not easy to understand. Actually, there are four big steps in this process:

Submission of a claim to the managing company

Selection of an acceptable repair service by the managing company

Repair and control of repair

Submission of the invoice from the managing company to the insurance company and subsequent payment

For each of these steps there is a suitable practical process pattern to follow.
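A rough sketch of this decomposition (hypothetical function names; the process patterns themselves are not shown): the top-level process becomes an explicit sequence of the four sub-processes:

```python
def submit_claim(claim):            # step 1: submission of the claim
    return {**claim, "status": "submitted"}

def select_repair_service(claim):   # step 2: selection of an acceptable repair service
    return {**claim, "repairer": "chosen repair service"}

def repair_and_control(claim):      # step 3: repair and control of the repair
    return {**claim, "work": "controlled"}

def invoice_and_payment(claim):     # step 4: invoice to the insurance company and payment
    return {**claim, "status": "reimbursed"}

def claim_processing(claim):
    """Top-level process: an explicit sequence of the four sub-processes."""
    for step in (submit_claim, select_repair_service, repair_and_control, invoice_and_payment):
        claim = step(claim)
    return claim

print(claim_processing({"id": "sinistre-001"}))
```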

2011-04-17

I have noticed an enterprise pattern: Platform-Enabled Agile Solutions (PEAS). It is applicable to situations where it is highly desirable to advance with a new enterprise-wide initiative in an incremental way. This means that developing the final user requirements up-front is virtually impossible, because the users just do not know exactly what should be built and prefer to try these new things in real life. In addition, different departments (or target communities) advance at their own (obviously different) speeds. The classic approach to IT project management – define everything up-front – just does not work.

From the systemic point of view, it is necessary to provide many solutions (SOLs) which have a lot of similar functionality. The provisioning of SOLs should be carried out at the pace of the target community of practice. At any moment in time, each community may have a different pace and may need different functionality.

The proposed architecture (see the illustration above) is based on the following considerations:

The platform must standardise and simplify core elements of the future enterprise-wide system. For any elements outside the platform, new opportunities should be explored using agile principles. These twin approaches should be mutually reinforcing: the platform frees up resources to focus on new opportunities, while successful agile innovations are rapidly scaled up when incorporated into the platform.

An agile approach requires coordination at a system level.

To minimise duplication of effort in solving the same problems, there needs to be system-wide transparency of agile initiatives.

Existing elements of the platform also need to be challenged periodically. Transparency (publishing feedback and the results of experiments openly) will help to keep the pressure on the platform for continual improvement as well as short-term cost savings.

In this pattern, technical concerns are decoupled from business concerns. All of those concerns are addressed TOGETHER by the enterprise architecture.

Added later: the following illustration shows that the amount of effort for the implementation of solutions (which is proportional to "Functionality" x "Scope span") is reduced by the platform. Of course, the latter is a "common good" and a decision to build a platform should be taken strategically.

2011-02-19

The purpose of this post is to provide an explanation about Business Architecture (BA). Informally speaking, BA defines how work gets done within an enterprise. How work gets done is, of course, not completely unknown, but the knowledge is diffused throughout different instructions, strategic papers, reports, e-mails and in peoples’ heads. The aim of BA is to make this knowledge explicit, i.e. formal, externalized and operational, so it can be used for decision making, operating control, daily work, knowledge transfer, etc.

First, it is necessary to achieve a common understanding about certain concepts (and the relationships between them) used for constructing BA. Examples of such concepts are: function, process, service, capability, etc. These concepts are used to provide different views of the enterprise. It is important that these views are coherent and that interdependencies between them are explicit.

BA is a part of Enterprise Architecture (EA), and usually BA is the least understood / developed / implemented part of EA.

1 General

An enterprise creates a result which has value to a customer who pays for this result. The enterprise acts as a provider (supply-side) and the customer acts as a consumer (demand-side).

There is a (business) transaction between the provider and the consumer. From the point of view of the consumer (the outside-in view) the transaction is bounded by the pair “request and result”, e.g. from making an order to receiving goods. From the point of view of the provider (the inside-out view) the transaction is a set of several distinct activities (or units of work) which function together in a logical and coordinated manner to satisfy / delight the consumer. These activities are carried out in response to the consumer’s request which is an external business event for the provider.

2 Business functions

The collection of an enterprise’s activities serves as the foundation for the discovery of business functions (functions deliver identifiable changes to assets). Each function is an abstract and self-contained grouping of activities that collectively satisfy a specific operational purpose (e.g. management of relationships with partners). Functions are unique within the enterprise and should not be repeated. Some functions can be decomposed into smaller groups of activities, and thus the function architecture has a hierarchical structure. The structure of functions is not always the same as that of the organisation chart; in many cases, some organisational units can span several functions. Furthermore, organization charts may change while the function architecture does not.

The functional view emphasizes WHAT the whole enterprise does to deliver value to the customer (without the organizational, application, and process constraints). Usually, the hierarchical structure of business functions is very static (with a low rate of change). Meanwhile, business processes can change more frequently as a result of business process improvement or re-engineering initiatives.
The function architecture can be used in a number of ways:

to understand how organisational units are supporting each function and to identify instances where a function is supported by several organisational units (or is not supported by any organisational unit);

to reveal how functions are currently automated, including occurrences where there is an overly complex use of applications (e.g. multiple applications) and where there is no automation of functions in place;

to understand how assets (information) flow between functions, and to map out which functions produce information, which function(s) consume information and where there is no clear understanding of information movement and ownership;

to clarify how business processes can be constructed;

to determine which business performance metrics should be used.
In some senses, functions are the players in a team (i.e. the enterprise), but it is not clear how they are going to play together.

3 Value-streams

The collective use of activities to satisfy a customer’s request leads to the notion of a value-stream which is an end-to-end collection of those activities (both value-added and non-value-added) currently required by an enterprise to create a result for a customer. Value-streams are named according to an initiating event and its result. A few examples of value streams are provided below (mainly from www.enterprisebusinessarchitecture.com):

Prospect-to-Customer

Order-to-Cash (order fulfilment process)

Manufacturing-to-Distribution (manufacturing process)

Request-to-Service

Design-to-Build

Build-to-Order

Build-to-Stock

Insight-to-Strategy

Idea-to-Concept

Concept-to-Product

Product-to-Launch

Initiative-to-Results

Relationship-to-Partnership

Forecast-to-Plan

Requisition-to-Payables (procurement process)

Resource availability-to-Consumption

Acquisition-to-Obsolescence

Financial close-to-Reporting

Recruitment-to-Retirement

Awareness-to-Prevention

Value-streams are directly linked to the enterprise’s aspirations – its vision and related “ends” chain (see http://www.omg.org/spec/BMM/): desired results, goals and objectives. Ideally, each value-stream should align with at least one long-range objective and its business performance metrics [key performance indicators (KPIs)]. For example, one objective of the success of the “Order-to-Cash” value-stream may be measured as “96% of orders delivered within 3 days”. If this value-stream’s actual performance is delivering only “90% of orders within 3 days” then a corrective action should be taken (e.g. a new strategic initiative is developed and its priority determined).
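A minimal sketch of such a performance check (using the hypothetical figures from the example above):

```python
def on_time_ratio(orders, limit_days=3):
    """Share of orders delivered within the limit, e.g. for 'Order-to-Cash'."""
    on_time = sum(1 for order in orders if order["delivery_days"] <= limit_days)
    return on_time / len(orders)

def check_objective(orders, target=0.96):
    actual = on_time_ratio(orders)
    if actual < target:
        # In the enterprise this would trigger a new strategic initiative.
        return f"corrective action needed: {actual:.0%} < {target:.0%}"
    return f"objective met: {actual:.0%}"

orders = [{"delivery_days": d} for d in (2, 3, 5, 1, 4, 2, 3, 2, 6, 3)]
print(check_objective(orders))
```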

In addition to the reason WHY a value-stream exists, each value-stream is associated with an explicit HOW the desired results are achieved. Looking inside a value-stream reveals that there may be a few “integrated components” (or business cases – business transactions between a consumer and a provider). Usually, one of the “integrated components” is the main transaction which does the job and the others are collections of supporting/housekeeping activities. For example, “Order-to-Cash” includes “Fulfil order” (main), “Change order” (for cancellation and modification of an order by the customer), and “Review order” (for the consultation of an order by the customer).

Each “integrated component” is an ordered sequence of acts of applying functions to assets. Such a sequence is the explicit flow of assets (called inputs and outputs, or I/O). KPIs and timelines associated with the sequence provide additional execution details (e.g. the duration of the process from one point of I/O hand-off to another). Thus the value-stream view provides the context (without the organisational and application constraints) for its constituent activities, e.g. what timing, level of performance, etc. are necessary to reach the objective of the complete value-stream.

An enterprise consists of a collection of value-streams. Most large enterprises can be broken down into a dozen or more value-streams. The nomenclature of value-streams differs somewhat from one enterprise to another. Within an enterprise, its value-streams are interdependent; a value-stream may rely on the results of other value-streams. An example of this interdependency is the value-chain of an enterprise, i.e. a network of strategically relevant integrated components of value-streams of the enterprise.

Two further posts will cover "Linking WHY, WHAT and HOW" and "Managing the complexity of VEB".

4 Linking WHY, WHAT and HOW

So, an enterprise’s value-chain and value-streams are the high-level decomposition of the work of the (whole) enterprise into the work of many different activities. In such a decomposition, WHY + WHAT of the whole enterprise should be used to define WHY + WHAT of each activity. The glue between them is HOW. Let’s look at a fictitious scenario.

Stakeholders:
OK, your business model looks good. Now tell us about the operating model.

Future CEO:
Our business model is the WHY for our operating model. The latter starts by showing the relationships between the enterprise and its partners (suppliers, providers, customers, etc.) from the economic ecosystem. Within the enterprise we have identified 4 aggregations of value-streams: customer-centric (green), strategic-visioning (blue), people-caring (yellow) and business enabling (red), as well as the relationships between them.

So, for each value-stream (FUNC1), we know its input WHAT0, its output WHAT1 as well as its operating requirements WHY0.

Stakeholders:
Sounds great. And, can you assure us that FUNC1 is capable of operating as required?

Architect:
The desired performance of FUNC1 is guaranteed by its implementation (HOW1) as the explicit coordination of “smaller” functions. In some way, WHAT1 is decomposed into a set of WHAT2x. WHY0 is decomposed into a set of WHY1x, and FUNC1 is decomposed into a set of FUNC2x. They are all coordinated together. In the illustration below, the coordination is trivial, but in real cases it may be rather complex (e.g. an interaction of activities carried out by several interdependent functional roles).

Stakeholders:
Please continue until all FUNC# become “manageable” activities so that they can be bought, rented, outsourced and easily implemented.

Architect:

This will involve the explicit decomposition of each value-stream to reveal the horizontal (peers) and vertical (subordinated) structure.

... Some time later ...

Architect:
As a result of this decomposition, a directed graph can be obtained (see the figure below). This directed graph is represented as a river basin; it could also be represented as an iceberg in which the value-stream is the tip of the iceberg.

In this graph, nodes (i.e. activities) are connected by edges to show the dependencies between results (i.e. the result of activity C depends on the results of activities I, K, L and B). This means that the result of a particular activity contributes to the result of another activity (which is probably more valuable and thus more expensive). The timing of result generation may be different: some results can be produced in advance and stored for later, some results can be produced on demand and some results can be acquired just before they are needed.

The primary importance of such a graph (called a “value & expenses basin” or “VEB”) is to represent business performance – the business wants to delight the customers (by giving them what they want to pay for) and the shareholders (by creating a profit). As shown in the figure below, different activities contribute differently to the generation of the value (green arrows) and the associated expenses (red arrows). The width of the arrows signifies the relative amount of value or expense.

The VEB should help in the management of an enterprise. It represents a dynamic, actual and contextual contribution of different activities to the value and expenses associated with a particular result. The business can be attentive to different “tributaries” which are

a) the most value-adding,
b) the most wasteful,
c) doing worse than defined by WHY, and
d) doing better than defined by WHY.

Depending on the business needs, such a representation can display a particular instance of value creation or a set of instances (usually over a given period of time).
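A minimal sketch of a VEB (the figures are hypothetical; the activity names are taken from the example above): each activity carries its own value and expense contributions, which makes the most value-adding and most wasteful “tributaries” easy to single out:

```python
# Each activity: its contributions and the activities whose results it depends on.
veb = {
    "C": {"value": 50, "expense": 10, "depends_on": ["I", "K", "L", "B"]},
    "I": {"value": 5,  "expense": 2,  "depends_on": []},
    "K": {"value": 8,  "expense": 12, "depends_on": []},   # a wasteful tributary
    "L": {"value": 20, "expense": 4,  "depends_on": []},
    "B": {"value": 3,  "expense": 1,  "depends_on": []},
}

def totals(graph):
    value = sum(activity["value"] for activity in graph.values())
    expense = sum(activity["expense"] for activity in graph.values())
    return value, expense

def most_value_adding(graph):
    return max(graph, key=lambda n: graph[n]["value"] - graph[n]["expense"])

def most_wasteful(graph):
    return min(graph, key=lambda n: graph[n]["value"] - graph[n]["expense"])

print(totals(veb))              # overall value and expenses for this particular result
print(most_value_adding(veb))   # the tributary contributing the most net value
print(most_wasteful(veb))       # the tributary the business should be attentive to
```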

So, how is a VEB constructed?

A VEB is not a flow of control, an event processing network (EPN) or a PERT diagram. It can be considered as a flow of assets (or a data flow diagram), but this will be just an externally-visible representation of internal mechanisms. Such a representation is good enough for the reactive analysis of behaviour, but is not sufficient for active control and pro-active (predictive) analytics. It is necessary to have a dynamic model which can be used for execution (e.g. simulation) and from which the VEB can be generated.

The set of “internal mechanisms” (as mentioned above) is a superposition of different coordination techniques (token-based, rule-based, event-based, data-based, etc.) as illustrated in the following.

An activity from one value-stream (or business process) can obtain some assets (business objects) which belong to another value-stream (or business process). This is pull-like communication, e.g. the “Order-to-Cash” value-stream should know the customer’s address which is maintained by the “Prospect-to-Customer” value-stream.

An activity from one value-stream (or business process) can send some assets to another value-stream (or business process). The latter interprets the appearance of the assets as an event to be treated. This is push-like communication. Usually, there are three ways in which this treatment can occur (a sketch follows the list below):

a new instance should be started (e.g. for the manufacture of something) – initiating event;

an existing instance, which is waiting for this event, consumes the event and continues its work (e.g. the confirmation of a payment) – solicited event;

an existing instance, which does not expect this event, has to react to it – unsolicited event.
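A minimal sketch of these three treatments (all names are hypothetical):

```python
class ValueStream:
    """Push-like communication: incoming assets are treated as events."""
    def __init__(self, initiating_events):
        self.initiating_events = initiating_events   # event types that start a new instance
        self.waiting = {}                             # instance_id -> event type it waits for

    def on_event(self, event_type, payload):
        if event_type in self.initiating_events:
            return f"initiating event: new instance started for {payload}"
        for instance, expected in list(self.waiting.items()):
            if expected == event_type:
                del self.waiting[instance]
                return f"solicited event: instance {instance} consumes it and continues"
        return f"unsolicited event: ad-hoc reaction to {event_type}"

manufacturing = ValueStream(initiating_events={"order-received"})
manufacturing.waiting["inst-7"] = "payment-confirmed"

print(manufacturing.on_event("order-received", "order#42"))   # initiating event
print(manufacturing.on_event("payment-confirmed", None))      # solicited event
print(manufacturing.on_event("order-cancelled", None))        # unsolicited event
```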

In reality, the situation is rather complicated. An enterprise may have several value-streams running in parallel. Some activities can be shared between different value-streams and some value-streams may compete for limited resources. Some activities may be outsourced or insourced, etc. All of these complexities need to be taken into account.

Furthermore, in addition to the activities, there are several other artefacts (see chapter 6) which should be defined explicitly in the model.

5 Managing the complexity of VEB

The interactions between activities reveal the different relationships between them. In order to manage the complexity, the primary interest of any architecture is to bring structure to those activities and their relationships. There are several techniques (services, capabilities, and processes) which are discussed below.

Activities which are used by a number of other activities (i.e. commonly-used functions which are the result of specialisation) are wrapped as services (which function as some kind of independent building blocks). A service is a consumer-facing formal representation of a self-contained provider’s repeatable set of activities which creates a result for the consumer. (It is considered that there are internal [even within an enterprise] providers and consumers.) It is important that the internal functioning of a service is hidden from its consumers, so that some parts of the enterprise can be changed independently. For example, a “proper” service can be relatively easily outsourced. Services are expressed in terms of expected products, characteristics and delivery options (cost, quality, speed, capacity, geographic location, etc.) – this is the Service Level Agreement (SLA).

Complex services are created by means of the coordination of more simple services and/or activities (in the same way that an orchestra is a coordination of individuals and their actions). In this sense, an enterprise is a mega-service composed of a network of nano-services. Each service is associated with an owner who is responsible for delivering the promised results in all instances in which that service has been requested. That owner has

to know/estimate the demand-side needs (the service may have many different consumers who will be using it with different frequencies), and

to design/organise/create in advance the supply-side capabilities to ensure those needs are satisfied.

Capability is the proven possession of characteristics required to perform a particular service (to produce a particular result, which may include the required performance). Capability needs to “understand” the mechanics of delivering that service. The mechanics include the resources, skills, policies, powers/authorities, systems, information, other services, etc., as well as the coordination of work within the service.

So, how can one ensure that a service has the required characteristics? There are three options:

by contract (“re-active” approach) – acquire a service with the required characteristics, use it, check that its performance is acceptable and replace it if something is wrong with it;

by design (“pro-active” approach) – build a service model, run a simulation test, improve the model, build the service, use it, measure it, improve it, etc.

The first option works with some support services, the second option can work satisfactorily with lead services and the third option should be used for core business services. The core business services can’t be outsourced, can’t be bought and must not be “damaged” (otherwise the enterprise may no longer function).

One of the models of the mechanics of delivering a service is a business process – an explicitly-defined coordination of services and/or activities to produce a particular result. The explicit coordination brings several advantages.

It allows planning and simulation of the behaviour of a service to evaluate its performance. If that service uses other services, then the demand-side needs for those services can also be evaluated.

It can be made to be executable, thus guiding how work is done.

It allows checking that the actual behaviour of the service matches its intended behaviour, thus pro-actively detecting potentially problematic situations.

It allows the measurement within a service of the dynamics of different characteristics, e.g. valuing, costing, risk, etc.

So, there is a structure of services in which some services are composed from others via explicit processes. The use of explicit processes allows the objective definition of the capabilities of composed services.

2011-02-11

From my book "Improving enterprise business process management systems":

We recommend introducing control-oriented coordination using a step-by-step approach
via the “eclipse” pattern (see figure 5.6). At first, we “cover” only a tiny area of the whole process. Usually we start with the intra-application coordination, because this part of IT is considered as boring and not very rewarding. The first fragment of explicit coordination may be quite primitive; it is a duplication of some existing functionality which is just eclipsed by this process. Then we introduce more and more fragments. With time, we cover bigger and bigger areas by explicit coordination of existing fragments.

Figure 5.6 Use of the “eclipse” pattern for making coordination explicit

2011-02-10

A BPMN pool is normally associated with a participant. Often such a participant is associated with an organisational role, e.g. CFO. Obviously, an organisational role may include more than one functional role. As a result, within the same business process an organisational role may participate with different functional roles to carry out different activities. This looks like a typical use of swimlanes, but the question is: are those activities from the same process instance?

Consider the following process:

periodically (e.g. monthly), a manager orders several service-engineers to visit several clients for carrying out some work

a service-engineer contacts the assigned client, plans a visit and reports back to the manager the visit details

the service-engineer pays a visit to the client

after the visit, the service-engineer submits to the manager a report about the work done at the client's site

How many pools and instances?

Manager as a work planner – 1 instance (as quickly as possible)

Manager as a report validator – N instances (usual duration is a few days)

Service-engineer (actually, per visit) – N instances (usual duration is a few weeks)

Some of Keith’s arguments do not correspond to my experience with collaborative and process-based applications. Note that those applications were designed for clients (including international ones) based in Switzerland; similar applications for US-based clients might have to be different.

The work of a social worker is based on existing rules, procedures and laws. Some of them are expressed as processes. So, the process architecture is necessary; it must exist even though it is not visible (similar to the 90% of an iceberg under water); and preferably it should be explicit.

For example, an application for automating the “Office de faillite” (a governmental body that administers bankruptcies) is a mixture of ACM features and a classic BPMS, because the bankruptcy process template is defined in the law with many slight variations. Although each bankruptcy case (process instance) is different, they all use the same process architecture, which is the proof that each case follows the law.

<quote>In BPM the person who designs the process needs to be a data architect, but in ACM these are different roles. The person who designes the “process” does not need to be a data architect. </quote>

Although many BPMS vendors provide data modelling capabilities, it is not always the case that a BPMS-based implementation of a process-managed application forces the process architect to be a data architect. Some process-oriented applications just move existing data from one place to another or collect process metrics.

<quote>BPM needs strong capabilities for integration, but in ACM there is little or no need for field-level integration. ACM can work well with documents, reports, and links to other application user interface.</quote>

At the beginning, the users of collaborative applications are very happy with just having access to documents, reports and links. Then those users ask for the provisioning of more case-related information which is usually “mastered” in central resources. For example, a Word document should contain several attributes extracted from SAP.

In conclusion: considering that “knowledge workers” and “workers who are doing repeatable work” are working TOGETHER, the capabilities from both ACM and BPMS should work together. As the first step for achieving this synergy, it is necessary to provide the commonly-agreed reference models and reference architectures (independent from the tools).

2011-01-20

Sometimes we need to process, in one instance, a group of events collected from different instances. For example, incoming orders are collected and treated all together each hour. I call this pattern CPP:

One of the building blocks of an Event Processing Network (EPN) presented in “Event processing in action” (see http://epthinking.blogspot.com/) is the event processing agent. It can, in particular, aggregate many events from a stream. The use of such an agent (between pools, of course) looks like this:

I find it rather explicit. Maybe a future version of BPMN should consider including some building blocks of EPN?
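A minimal sketch of such an aggregating agent (hypothetical names): events from many instances are collected and then handed over, as one group, to a single processing instance:

```python
import datetime

class AggregatingAgent:
    """Collects events from different process instances and emits them as one batch."""
    def __init__(self):
        self.buffer = []

    def collect(self, event):
        self.buffer.append(event)           # e.g. an incoming order

    def flush(self):
        """Called on a timer (e.g. every hour); one instance then treats the whole group."""
        batch, self.buffer = self.buffer, []
        return {"collected_at": datetime.datetime.now().isoformat(), "events": batch}

agent = AggregatingAgent()
for order in ("order#1", "order#2", "order#3"):
    agent.collect(order)                    # events come from different instances

print(agent.flush())                        # one instance processes all orders together
```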