2018-04-28

1 The expression “all models are wrong” is wrong

"All models are wrong" is a well-known aphorism from statistics (attributed to the statistician George Box, 1976), which has been borrowed by other disciplines. A modern book on systems engineering claims that "The map is not the territory, the menu can’t be eaten, the drawings do not fly, the source code does not store the values of its variables during execution". Let's analyse these statements.

The territory existed before its map. A map is an informational (or, nowadays, digital) "twin" of the territory. Since the territory is a natural object (made by nature), its digital "twin" (made by man) is secondary and approximate.

The menu is the chef's plan and, at the same time, the informational (or, nowadays, digital) "twin" of kitchen services. Kitchen services are planned ahead for several good reasons: to discuss them with everyone involved, to organize work, to optimize costs and to reduce risks. Thus the menu (as a planning tool) helps in achieving the result, but it is not mandatory for providing the services. In this case, the informational (or, nowadays, digital) "twin" may appear before the physical "twin".

It is clear that the drawings do not fly, but there is no flight without them. The drawings are a necessary "part" of the aircraft, which is manufactured according to them. It is also clear that the drawings, in themselves, are not a sufficient "part" of the aircraft, because there is a long way from a drawing to a working copy. However, we can consider the drawings to be informational (or, nowadays, digital) "ancestors" of aircraft. In this case, the informational (or, nowadays, digital) "twin" is necessarily created before its physical "twin".

Well, finally, the computer program and its source code. What is the relationship between them? There is no program without its source code. The source code can be expressed in several forms: in a high-level language and in the language of machine instructions (i.e. assembler). This is common but not necessary, because the source code can be interpreted directly, without being translated into machine instructions. Wait, this looks rather familiar.

Wow, this is the genetic code of a bionic program! The genetic code does not contain all the details, but it determines (albeit partially) the future bionic system. Of course, any bionic system is an adaptive system with a complex "bootstrap" procedure, while modern software systems must be highly dependable.

So, in the digital world we copy mother nature: we create a piece of digital genetic code (in some programming language) and from it, with the help of the digital environment, we create a program. Thus, the source code is the main part of the program, and both are digital artefacts. In this case, there is only an informational (or, nowadays, digital) "twin". Well then, it is not a "twin" but an "original"! And this is a digital model.

                 Physical form           Digital form

Primarily        1. Territory            2. Menu (probably)
                                         3. Drawings (mandatory)
                                         4. Program (inevitable)

Secondarily      2. Meal                 1. Map
                 3. Plane

2 Obliterating differences between architecture and its description in the digital world

For the digital world, we must slightly adjust some of the provisions of ISO/IEC/IEEE 42010 Systems and software engineering - Architecture description. This standard clearly separates the architecture of the system from the description of the architecture. In accordance with this standard, the architecture description consists of models. But in the digital world, models can also be elements of the system-of-interest.

The usage of digital models:

simplifies the choice of elements and system-of-interest options,

allows making predictions about the behaviour of the system-of-interest and

replaces the system-of-interest itself, for example, for training purposes.

Such digital models are machine-readable and machine-executable. For example, a business process is not only an illustration, but also a piece of the source code of the system-of-interest. This increases the importance of Domain-Specific Languages (DSLs), through which some elements of the system-of-interest can be defined in business terms. For example, BPMN is a DSL. (Many years ago, with the advent of SGML and HTML, people began to say: the program becomes a document, and the document becomes a program.) Also, the appearance of machine-executable elements of the system-of-interest in the early stages of its life cycle allows us to speak about the emergence of the BizDevOps culture as a natural up-stream extension of the DevOps culture.
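As a toy illustration of "the document becomes a program": in the sketch below, a single process definition is both a human-readable illustration and an executable element of the system-of-interest. The process name and steps are invented, and a real system would use a BPMN engine rather than this hypothetical mini-DSL.

```python
# A minimal sketch (hypothetical DSL, not a real BPMN engine): the same
# process definition serves as documentation for stakeholders and as an
# executable element of the system-of-interest.

# The process model is plain data: ordered steps named in business terms.
ONBOARDING_PROCESS = [
    ("collect application", lambda ctx: ctx.update(application="received")),
    ("check identity",      lambda ctx: ctx.update(identity="verified")),
    ("open account",        lambda ctx: ctx.update(account="opened")),
]

def describe(process):
    """Render the model as a human-readable illustration."""
    return " -> ".join(name for name, _ in process)

def execute(process, ctx):
    """Run the very same model as a piece of source code of the system."""
    for _, step in process:
        step(ctx)
    return ctx

print(describe(ONBOARDING_PROCESS))
print(execute(ONBOARDING_PROCESS, {}))
```

The point of the sketch is that `describe` and `execute` consume one and the same artefact, so the illustration can never drift away from the implementation.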

The logic of the architecture viewpoints changes. Now, they are designed to systematically create model types, some of which will be digital, i.e. machine-executable and/or machine-readable elements (or nomenclatures, for example, a list of all roles). Architecture viewpoints become a kind of aqueduct column that supports the logic of creating digital systems.

Fragment of the longest (132 km) Roman aqueduct, Tunisia.
(by the way, some parts of this structure are still working and used by local people)

Relationships between models also change. Previously, it was considered that models and views were created solely for stakeholders; often, different models were created by different people, so the models had to be permanently aligned, e.g. by a chief architect. With digital models, there is a lot of interest in the semi-automatic and automatic creation of some models from already existing models. For example, if there is a functional map of the organization, then an initial version of the organizational structure can be proposed automatically.
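The functional-map example can be sketched as follows (the map itself is invented for illustration). The derived structure is only an initial proposal that an architect would still have to align.

```python
# A sketch of semi-automatic model derivation (illustrative data): given a
# functional map of the organisation, propose an initial organisational
# structure by creating one unit per top-level function.

FUNCTIONAL_MAP = {
    "sales":      ["lead generation", "contracting"],
    "operations": ["production", "logistics"],
    "finance":    ["accounting", "reporting"],
}

def initial_org_structure(functional_map):
    """Derive a first-cut org chart; a chief architect still aligns it."""
    return {f"{function} department": subfunctions
            for function, subfunctions in functional_map.items()}

print(initial_org_structure(FUNCTIONAL_MAP))
```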

It is observed that the difference between the system-of-interest and its architecture description is disappearing in two directions:

Some architecture description models become machine-executable elements of the system-of-interest.

Some elements of the system-of-interest can be used instead of some architecture description models.

It is clear that, for each type of system, some of its digital models are system-forming elements. Imagine a directed acyclic graph whose nodes are models, and assign to each edge a measure of the complexity of the "transition" between its nodes. Then the models from which one can easily create the majority of the other models are the system-forming models.
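A minimal sketch of this test (with an invented graph): for every model, compute how many other models can be derived from it and at what total transition cost. The cheapest, widest producers are the system-forming candidates.

```python
# A sketch of the system-forming test described above: a DAG whose nodes are
# models and whose weighted edges give the complexity of deriving one model
# from another. Models from which most other models are cheaply reachable
# are candidates for system-forming models. (Illustrative graph.)
import heapq

GRAPH = {  # model -> {derived model: transition complexity}
    "process model": {"role list": 1, "data model": 2},
    "data model":    {"report model": 1},
    "role list":     {},
    "report model":  {},
}

def derivation_cost(graph, source):
    """Cost of deriving all models reachable from `source` (Dijkstra)."""
    dist, queue = {source: 0}, [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue
        for nxt, w in graph[node].items():
            if d + w < dist.get(nxt, float("inf")):
                dist[nxt] = d + w
                heapq.heappush(queue, (d + w, nxt))
    return len(dist) - 1, sum(dist.values())  # (models reached, total cost)

for model in GRAPH:
    print(model, derivation_cost(GRAPH, model))
```

In this toy graph, "process model" reaches three other models at total cost 6, which makes it the system-forming candidate.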

All this is partially described in the series https://improving-bpm-systems.blogspot.com/search/label/%23BAW

2018-03-11

This blogpost is based on several recent LinkedIn (LI) discussions about the concept “capability” (see their URLs at the end of this blogpost).

Those endless discussions only confirm a well-known systemic observation: a complex concept is better understood via its relationships to other concepts. Thus, to define the concept “capability”, it is necessary to define together several related concepts, such as “function”, “service” and “process”. (Other concepts could be added on demand.)

Another complication is, again, a well-known systemic observation: different people see the same thing differently. This is addressed by “architecture viewpoints” (just as a 3D object may have several projections). The main problem with architecture viewpoints is that they must be aligned.

The aim of this article is to outline a main (or master) viewpoint which allows all other viewpoints to be aligned. (With special thanks to Michael Poulin for his valuable comments on this article.)

1 Different viewpoints on capability

So far, several viewpoints on the concept “capability” have been detected.

Demand viewpoint – to achieve our mission and vision we need a system with a particular performance of doing something. Demand-capability is a relative measure of the ability of a system (or its element) to do something at a particular level of performance.

This viewpoint is about WHAT and HOW-WELL without any information about WHO, HOW, WHERE, WITH-WHAT-RESOURCES, etc.

Supply viewpoint – we have a system with a particular performance because we built it and deployed some resources. Supply-capability is the proven performance of a system (or its element) doing something.

This viewpoint is about WHAT, HOW-WELL, WHO, HOW, WHERE, etc.

Reference viewpoint – all systems with a similar purpose (or mission) should be able to do this. Reference-capability is an ability of a system (or its element) to do something.

This viewpoint is about WHAT only. Typically, the reference viewpoint relates to a particular type of business, e.g. banking, rent-a-car, telecom, etc.
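The contrast between the three viewpoints can be sketched as data structures (the field names and values are assumptions for illustration, not from any standard):

```python
# A sketch of the three viewpoints: a reference-capability states WHAT only;
# a demand-capability states WHAT and HOW-WELL; a supply-capability adds
# WHO, HOW, WHERE, etc. (Field names and sample values are invented.)
from dataclasses import dataclass

@dataclass
class ReferenceCapability:          # WHAT only
    what: str

@dataclass
class DemandCapability:             # WHAT and HOW-WELL
    what: str
    required_performance: int       # e.g. transactions per hour

@dataclass
class SupplyCapability:             # WHAT, HOW-WELL, WHO, HOW, WHERE, ...
    what: str
    proven_performance: int
    who: str
    how: str
    where: str

demand = DemandCapability("settle payments", required_performance=1000)
supply = SupplyCapability("settle payments", proven_performance=1500,
                          who="back office", how="BPM suite", where="in-house")

# Ideally, the proven performance exceeds the requested performance.
print(supply.proven_performance >= demand.required_performance)
```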

2 Let us classify some of the existing approaches

ArchiMate 3.1: A capability represents an ability that an active structure element, such as an organization, person, or system, possesses. AS: It seems that it is the supply viewpoint.

TOGAF 9.1: A capability is an ability that an organization, person, or system possesses. Capabilities are typically expressed in general and high-level terms and typically require a combination of organization, people, processes, and technology to achieve. For example, marketing, customer contact, or outbound telemarketing. AS: It seems that it is the supply viewpoint.

BIZBOK 4.1: A capability is a particular ability or capacity that a business may possess or exchange to achieve a specific purpose or outcome. AS: It seems that it is the reference viewpoint.

Bas van Gils (Strategy Alliance): CAPABILITY = CAPacity x ABILITY. - ABILITY refers to skills and proficiency in a certain area. It should be noted that ability is a relative term: one actor (human, machine, computer) may have higher levels of proficiency than others. The level of ability can be increased due to (formal) training, and practice. - CAPacity refers to the degree to which actors (human, machine, computer) are available to use their skills to achieve a goal. Capacity can be influenced by freeing up / adding resources to the available pool. More information on the Strategy Alliance Website. AS: It seems that it is the supply viewpoint.

Mark Paauwe (https://www.dragon1.com/terms/capability-definition ) A capability is a set of tasks that a system is potentially able to perform at a certain performance level, but only with the use of required resources. AS: It seems that it is the supply viewpoint.

Michael Poulin (https://organicbusinessdesign.com/agile-business-capability-part-1/ ) - A business capability is an ability of an entity - person or organisation - to create or deliver a certain real-world effect (outcome) in a particular business execution context. If the context changes, yesterday's capability can vanish. The fact that you did something yesterday does not by itself mean that you can do it tomorrow. A capability exists only if all the resources needed for its realization are available. No resources - no capability; competencies/knowledge/skills are not enough for having the capability. You lose a capability if you outsource it. AS: It seems that it is the supply viewpoint.

Richard Hillier - A business capability is the ability to perform a business activity which is recognized as being required for success and which needs to be specifically managed. AS: It seems that it is the supply viewpoint.

So far, there is no demand viewpoint. Why?

3 Where is the demand viewpoint?

Any demand viewpoint is dynamic and organisation-specific. In any business, “bigger” capabilities (with emergent characteristics) are assembled from “smaller” capabilities (available or not yet). Because such emergent characteristics are exhibited as the result of interactions of the “smaller” capabilities between themselves and with other capabilities, some coordination of these interactions is mandatory.

Note: It is not a bottom-up approach, but a recursive combination of analysis (finding what "smaller" capabilities are necessary) and synthesis (proving that "smaller" capabilities and some coordination between them achieve "bigger" capability).

Imagine that an enterprise or solutions architect has to implement a particular demand-capability within an organisation (which is, obviously, a system). There are several choices:

1. Implement this demand-capability within the organisation as a coordination of some other capabilities.

2. Outsource this demand-capability via a Business-to-Business (B2B) partnership and access it in accordance with a contract between the two organisations.

3. Acquire this demand-capability as a commodity, maybe via a tender.

4. Ignore this demand-capability by providing some good reasons.

With option 1, the enterprise architect must choose a set of “smaller” capabilities and a way to coordinate them. The reference viewpoint, if any, may help to find those “smaller” capabilities. (Of course, some “smaller” capabilities may not be available yet and have to be implemented recursively.)
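The recursive combination of analysis and synthesis mentioned in the note above can be sketched as follows (the capability names, the set of available supply-capabilities and the decomposition are all invented):

```python
# A sketch of recursive analysis/synthesis (illustrative data): implement a
# demand-capability by choosing available "smaller" capabilities, recursing
# into those that are not yet available.

AVAILABLE = {"identity check", "payment", "notification"}      # supply side
DECOMPOSITION = {                                              # analysis
    "customer onboarding": ["identity check", "account setup"],
    "account setup":       ["payment", "notification"],
}

def implement(capability):
    """Return the coordination tree proving the "bigger" capability."""
    if capability in AVAILABLE:
        return capability                      # synthesis: already supplied
    smaller = DECOMPOSITION[capability]        # analysis: find the parts
    return {capability: [implement(part) for part in smaller]}

print(implement("customer onboarding"))
```

Note that the result is not a flat bottom-up list but a tree: each level records which "smaller" capabilities, under some coordination, achieve the "bigger" one.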

Also, saying that “to implement this capability we will use those two capabilities” is not enough, because the way those capabilities are coordinated will affect the performance of the resulting capability. Of course, various estimations of the performance of this future supply-capability may be provided.

Any demand-capability or reference-capability which is implemented by (or within) the organisation is called a function. Creating a function implies that several organisational, technical, contractual, resourcing, staffing and other changes must be carried out within the organisation. A function immediately has some performance approximation as a supply-capability, i.e. its expected performance is stated. Ideally, the performance of such a supply-capability exceeds the requested performance of the related demand-capability. (Sometimes the gap between them can be huge – remember that we never drive our cars at their maximum speed.)

An illustration of the relationships between various concepts is shown below. The left half of this illustration is the reference map of an organisation and the right half is the functional map of this organisation. The functional map is smaller than the reference map, because some capabilities were implemented as commodities or via B2B partnership. A formal procedure for moving from “left” to “right” can be produced on demand.

Because functions can’t provide a good approximation of their expected performance, organisations use services – a service is an arrangement to access one or more functions on a contractual basis. (Note: such access may be within the same organisation as well as between different organisations.) Because any service must take into consideration its contract (including SLA) and its expected usage, its performance can be anticipated better than for functions. Creating services also implies some organisational, technical, contractual, resourcing, staffing and other changes.

Nevertheless, neither functions nor services specify explicitly the coordination between “smaller” capabilities, so their estimations of the expected performance are still a guess. So far, only Business Process Management (BPM) allows the organisation to build, run and improve “bigger” capabilities in a predictive, transparent and provable manner, because a process is an explicit, formal, machine-readable and machine-executable coordination. Obviously, one can evaluate (with a high level of confidence) the performance of a “big” supply-capability by knowing the process, its usage and the performance of the “small” supply-capabilities.
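A minimal sketch of such an evaluation (with invented step names and numbers): once the coordination is explicit and, in this example, purely sequential, the expected end-to-end performance of the "big" capability follows directly from the "small" ones instead of being a guess.

```python
# A sketch of evaluating a "big" supply-capability from an explicit process
# (illustrative numbers): because the coordination is machine-readable, the
# end-to-end duration can be computed, not guessed.

# Sequential steps and the proven duration (hours) of each "small" capability.
PROCESS = [("receive claim", 1.0), ("assess claim", 4.0), ("pay claim", 0.5)]

def expected_duration(process):
    """For purely sequential coordination, the durations simply add up."""
    return sum(duration for _, duration in process)

print(expected_duration(PROCESS))  # 1.0 + 4.0 + 0.5
```

For other coordination techniques (parallel branches, loops, case handling), the formula changes, which is exactly why the coordination must be explicit.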

A few notes: Considering that there are many coordination techniques, there are no principal differences between BPM and Adaptive Case Management – see http://improving-bpm-systems.blogspot.bg/2014/03/coordination-techniques-in-bpm.html . BPM is actually a trio: a discipline to manage business via processes, software to manage processes themselves (BPM-suite tools), and practice & architecture. Also, orchestration and choreography are variants of coordination.

Some assets and skills are required to operate services and processes. Obviously, assets and skills may be outsourced (or insourced).

4 Big picture

The overall logic is the following:

Capability – The organisation has to be able to do something (because of its mission) with a particular level of performance (because of its vision).

Function – Some of the needed (demand-)capabilities must be implemented within the organisation, for example, because they are core-business capabilities. By definition, a function is already a supply-capability (as a system element of the organisation as a system), and some assets, skills and coordination have to be provided.

Service – Although a function is already a supply-capability, the evaluation of its performance is rather approximate. A service allows improving the evaluation of its expected performance by specifying its contractual conditions.

2018-01-15

1 About Digital Systems

A digital system is a system which builds the life cycles of its primary artefacts on the primacy of an explicit, formal, computer-readable and computer-executable representation of those artefacts (in other words, a digital representation of those artefacts). For example:

a house is designed digitally as an “ideal digital house”;

this digital form drives 3D printers and robots to build a real house;

this real house is equipped with IoT sensors which generate the “real digital house”, and

differences between the “ideal digital house” and the “real digital house” are used for maintenance and various improvements.
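The last step of the house example can be sketched as follows (the sensor names, values and tolerance are invented): the diff between the "ideal" and the "real" digital house points at maintenance actions.

```python
# A sketch of comparing the "ideal digital house" with the "real digital
# house" (illustrative sensor data): values that drift beyond a tolerance
# become candidates for maintenance.

IDEAL = {"roof tilt (deg)": 30.0, "wall humidity (%)": 5.0, "heat loss (kW)": 1.2}
REAL  = {"roof tilt (deg)": 30.0, "wall humidity (%)": 9.5, "heat loss (kW)": 1.9}

def deviations(ideal, real, tolerance=0.1):
    """Report every measured value that drifted beyond a relative tolerance."""
    return {key: (ideal[key], real[key])
            for key in ideal
            if abs(real[key] - ideal[key]) > tolerance * abs(ideal[key])}

print(deviations(IDEAL, REAL))
```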

Digital systems employ the concept of “digital twins” – computerized companions of physical assets that can be used for various purposes. The relationship between digital twins and physical assets is illustrated below.

Digital systems are uber-complex real-time systems of cyber-physical, socio-technical and classic IT systems with the following characteristics:

digital data and information in huge volumes;

software-intensive;

distributed and decentralized;

great influence on our society;

ability to interact with the physical world;

many essential characteristics which are required by design and by default (e.g. security, safety, privacy and resilience);

low cost of operation;

short time to market;

self-referential (some), and

long and complex life cycle.

This document outlines an approach for building digital systems which is based on synergy between:

the project (or work) management practices and

the digital systems life cycle management practices.

This approach facilitates the optimisation of work management practices for the digital systems life cycle. For example, if a digital system has two major components (bespoke and COTS), then each of them may have its own work management practice.

Let us consider the following hierarchy:

The type of a system-of-interest defines its DiSyLiCy (as a variant of the generic DiSyLiCy template).

The DiSyLiCy defines the DiSyLiCy management (because each phase of the DiSyLiCy may have its own management practice).

The DiSyLiCy management defines the work planning (overall and per phases) methods.

Work planning defines the work execution management (i.e. project management).

Note: In the context of this document, the concepts “system-of-interest” and “solution” are used interchangeably because a system-of-interest is a solution of a problem.

2 WHY the Digital Systems Life Cycle (DiSyLiCy) is important

We are dealing more and more with digital systems. They are intrinsically complex systems in which software primarily defines the behaviour of the system as a whole. The recent trends in digital systems show that such systems have the following common characteristics.

Such systems are assembled from many distributed elements which are deployed in various computing environments: in-house, in-cloud (SaaS, PaaS), at partners.

Elements of such systems have different granularity, e.g. platforms, applications, services and microservices.

Elements of such systems have different life cycles, e.g. some elements, especially business-facing, may require changes more often.

Elements of such systems have different ownership: FOSS, bespoke, commodity, community, service providers.

Elements of such systems may be shared between versions of the system and/or with other software-intensive systems.

There are many internal and external drivers for changes in those elements, e.g. security threats, natural evolution of the elements, morphing business requirements, continuous improvements.

The speed of changes in their elements must fit the required urgency, e.g. time-to-market, levels of the security risks, etc.

The trustworthiness (security, safety, resilience, privacy) of their elements becomes very critical in the digital era because even one “weak link” in an assembly may ruin common efforts.

The TCO of such systems follows the classic 20/80 ratio – 20 % to build (development and transition phases) a system and 80 % to operate and evolve it.

Obviously, concentrating only on the development phase of such systems is not enough, because such systems, once in production, must evolve very fast and in many unpredictable ways. Thus all the phases of the whole life cycle are equally important.

Also, a new “non-functional” (or quality) system characteristic, called “variability”, becomes very critical. “Most modern software needs to support increasing amounts of variability, i.e. locations in the software where behaviour can be configured. This trend leads to a situation where the complexity of managing the amount of variability becomes a primary concern that needs to be addressed.” ( http://program-transformation.org/Variability/SoftwareVariabilityManagement ).
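A minimal sketch of managed variability (the variation points and options are invented): if behaviour can be configured only at explicitly registered locations, then the amount of variability itself remains visible and manageable.

```python
# A sketch of variability as a first-class concern (hypothetical options):
# every location where behaviour can be configured is registered explicitly,
# so the amount of variability can itself be managed.

VARIATION_POINTS = {
    "payment provider": {"allowed": {"cardA", "cardB", "invoice"}, "default": "cardA"},
    "report language":  {"allowed": {"en", "fr", "de"}, "default": "en"},
}

def configure(overrides):
    """Resolve a concrete configuration, rejecting unregistered variability."""
    config = {}
    for point, spec in VARIATION_POINTS.items():
        value = overrides.get(point, spec["default"])
        if value not in spec["allowed"]:
            raise ValueError(f"{value!r} is not allowed for {point!r}")
        config[point] = value
    return config

print(configure({"report language": "fr"}))
```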

3 HOW the Digital Systems Life Cycle (DiSyLiCy) is composed

The assembled nature of software-intensive systems certainly complicates their life cycle, which must address:

the life cycle of each element and

the life cycle of the system as a whole.

It is clear that such systems share some common characteristics with systems-of-systems (making a system from elements without having direct ownership of them). Thus, coordination is critical for the seamless transition from one phase to another and for the seamless integration of various elements.

The necessary coordination is achieved by a combination of the following:

Architecture which is critical for good, right and successful software-intensive systems.

An explicit and tailorable generic DiSyLiCy template which is adjustable to the unique needs of the system-of-interest. This life cycle recommends providing various views and models at different phases.

Various architectural styles and techniques to optimise DiSyLiCy within phases and beyond phases for the system-of-interest.

Various work management practices for each phase and beyond phases.

4 WHAT is the Digital Systems Life Cycle (DiSyLiCy)

4.1 Overview of the generic DiSyLiCy template

The DiSyLiCy template comprises the following phases:

Business case phase

Architecting (or elaboration) phase

Construction (or build or implementation) phase, which may comprise the following sub-phases:

    Architecting sub-phase – if necessary

    Construction sub-phase

    Transition sub-phase – if necessary

Transition (or deployment) phase

Pilot (or lab) phase – optional

Production (or operating) phase

    Operations sub-phase

    Maintenance (or evolution) sub-phase – repetitive

        Architecting sub-phase – if necessary

        Construction sub-phase

        Transition sub-phase

Retiring phase

Decommissioning phase

Without sub-phases, the DiSyLiCy template is depicted in the figure below, which shows how a software-intensive system becomes more concrete during its life cycle.

The complexity of the construction phase must correspond to the complexity of its software-intensive system. The construction phase may simultaneously be:

recursive – complex system elements must be architected to produce elements which are simple enough to construct;

concurrent – some sub-phases may be executed in parallel (depending on the availability of resources and the dependencies between constructed elements).

This variant of the generic DiSyLiCy template is depicted in figure below.

Another variant is to decompose a complex system-of-interest during a single architecting phase.

In the same way, the production phase may have several maintenance sub-phases, as shown in the figure below.

Practically all the phases may be repetitive if some conditions of their completion have not been met.

Some phases may be carried out iteratively (or incrementally) in a few steps to achieve the target situation. Such an iterative way of execution is depicted in the figures below.

An initial situation

The situation after the first iteration.

The situation after the second iteration.

And the final situation.

Please note that such an iterative way of execution is very similar to agile management practices.

4.2 The DiSyLiCy phases vs the systemic description views

At each DiSyLiCy phase, the systemic description of the system-of-interest is updated. In other words, some views (and pertinent models) are prepared and some views (and pertinent models) are updated. The simplified dependencies (without sub-phases) between the DiSyLiCy phases (rows) and the systemic description views (columns) are shown in the table below.

5 Management of the DiSyLiCy

5.1 General

There are two types of logic in any management practice:

specific logic, which depends on the life cycle (thus called life cycle management), e.g. which phases to finish or which phases to start, and

generic logic, which does not depend on the life cycle, e.g. which works to finish and which works to start, depending on various conditions (typical in programme and project management) such as the availability of some resources, e.g. free staff. This is also called work management.

These two logics are strongly intertwined in life cycle management. For example:

the decision to implement a new system depends on the system’s potential business value and the capacity of some resources (generic logic);

the decision to complete the architecting phase depends on the quality of the systemic description (specific logic), and

the decision to start in parallel one or more construction phases depends on capacity of some resources (generic logic).

Ideally, the life cycle can be presented as a set of interrelated units-of-work which are managed by these two logics. Each unit-of-work has (at minimum) two associated events (the start and the finish) at which these management logics are applied. However, there are a lot of other ad-hoc events at which these two logics must be applied as well, for example, various incidents, capacity fluctuations, etc.

Thus the life cycle management is based on a set of events and the following considerations:

there is some natural hierarchy and some coordination between events;

some of those events are considered as management points at which some management decisions have to be taken;

some management decisions may require different levels of authority;

some management decisions may be delegated;

any management point is associated with a set of rules based on specific and general logic;

some events can be planned (they are also called milestones);

some work planning methods are available;

missing a milestone is also a management event;

the more events are planned and the fewer of them are missed, the more seamless the execution of the life cycle;

etc.
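The combination of the two logics at a management point can be sketched as follows (the rules, events and state fields are invented examples):

```python
# A sketch of a management point (hypothetical rules): a management decision
# applies specific logic (life-cycle state) and generic logic (resource
# availability) together, as described above.

def specific_logic(event, state):
    """Life-cycle rule: construction may start only after architecting."""
    if event == "start construction":
        return state.get("architecting") == "done"
    return True

def generic_logic(event, state):
    """Work-management rule: enough free staff must be available."""
    return state.get("free staff", 0) >= state.get("needed staff", 1)

def management_point(event, state):
    """A management decision applies both intertwined logics."""
    return specific_logic(event, state) and generic_logic(event, state)

state = {"architecting": "done", "free staff": 3, "needed staff": 2}
print(management_point("start construction", state))
```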

Classic project management is based on the management of work with the use of generic logic only.

5.2 Review of some pertinent management practices

Let us illustrate the life cycle management and work management practices.

PMI is a work management practice which is based exclusively on generic logic and the project life cycle. Obviously, it is mandatory to map the life cycle management of the system to be built onto the project management life cycle. PMI advocates developing a Work Breakdown Structure (WBS), which is a hierarchical decomposition of the total scope of work to be carried out by the project team to accomplish the project objectives and create the required deliverables. Obviously, the WBS is a waterfall-like bridge to life cycle management.

PRINCE2 is a work management practice which is based exclusively on generic logic and the project life cycle (which is more elaborate than PMI's).

Waterfall is a life cycle management practice which executes all its phases sequentially and tries to plan all work in advance. But the usage of the same planning method for all the phases is very inefficient.

Iterative is a life cycle management practice which allows incremental and iterative execution of some of its phases.

HERMES is an IT-oriented project management practice which uses a very simplified IT systems life cycle as the project life cycle.

TOGAF is a life cycle management practice which covers, primarily, the implementation of IT solutions. Its Architecture Development Method (ADM) was originally “waterfall-like”, but recently some iteration has been admitted.

ITSM is a life cycle management practice for IT services. It provides some planning for related works by outlining all necessary processes.

IT4IT is a life cycle management practice for IT solutions. It is an up-streamed version of ITSM; however, IT4IT says nothing about how to implement IT solutions.

DevOps is a life cycle management practice for IT changes, covering from coding to monitoring.

Agile (SCRUM) is a work management practice with an emphasis on software development. In other words, it is a mixture of a life cycle management practice and a work management practice, leaning towards the latter. It is very light on the solution architecture, which is presented as a set of small stories; thus the creation of work to be done is rather ad hoc. SCRUM is very strong in work management with its time-bound sprints; it promotes incremental and iterative execution of work, and short-term planning is possible. The SCRUM work management is presented in the figure below.

Case management is a work management practice. A case is a circumstance or undertaking that requires a set of works to obtain an acceptable result or achieve a goal. Case management focuses on the subject over which the works are performed (for example, a person or an insurance case) and is guided by the gradually emerging circumstances of the case.

Classic process management is a work management practice which formally defines a plan of work (as a flow-chart). A flow-chart may mimic a life cycle. Thus, the planning of work is very explicit.

PDCA is a work management practice for small changes which is carried out in four steps: Plan, Do, Check, Act.

Kanban is a method for work planning (scheduling).

Critical path is a method for work planning (scheduling) for projects and processes.

The table below shows how the 7 management practices compare to the DiSyLiCy phases. Because some of those practices are enterprise-wide, only the pertinent parts of them are considered. For example, only 3 of the 4 IT4IT value streams are considered (R2D, R2F, D2C).

                   PMI        PRINCE2    TOGAF      ITSM       IT4IT      SCRUM      DevOps
Business case                 Fully      Partially
Architecting       Partially  Partially  Fully
Construction       Fully      Partially  Partially  Partially  Partially  Fully      Fully
Transition         Partially  Partially  Partially  Fully      Partially  Partially  Fully
Pilot              Partially  Partially             Partially  Partially             Partially
Production                                          Fully      Partially             Partially
Retiring                                            Partially  Partially
Decommissioning                                     Partially  Partially
This table shows that there is no existing management practice which fully covers the DiSyLiCy.

5.3 Summary

The management of the DiSyLiCy is based on tailoring the generic DiSyLiCy template and on recommendations about which work management practices can be used for each phase.

6 Detailed description of the DiSyLiCy phases

Because of the size of this document, only one phase is described below.

6.1 Business case phase

Initiation

An appropriate authority (e.g. a corporate-wide standing Business & IT governance body) mandates an ad-hoc team for this phase to prepare an estimation for a solution to a given problem.

Goal

The goal of this phase is to estimate a solution so that the standing governance body can make an informed “Go / No-Go” decision.