
The Seven Fallacies of Business Process Execution

After 8+ years of intense research, the software industry and its customers are hitting a wall. The vision defined by BPM startups in the dotcom era has not materialized yet: we are still far from being able to use the business process models designed by business analysts to create complete executable solutions (even with minimal intervention from developers). The need for process-driven application models is real: Business Process Improvement initiatives are up and running everywhere in G2000 companies. Yet despite such a strong need to continuously improve processes, the BPM market remains marginal in 2007 (compared to what it could be). This contrasts sharply with the language of some vendors that were quick to portray themselves, in 2000, as the next Oracle of the Business Process Management System (BPMS) space...

So what happened? It is actually very easy to understand. It is the usual "I'll sell you what you want to buy" story. In these types of situations a series of misunderstandings usually arises, which leads to suboptimal solutions. If you add into the mix that most product managers, architects and developers have never talked to a business analyst, let alone tried to design a business process themselves beyond a few boxes and arrows, the current situation should come as no surprise to anybody.

Last week, Bruce Silver asked the critical question in "Roundtripping revisited". Bruce complains that there is a strong mismatch between the two key standards of BPM: BPMN (the Business Process Modeling Notation) and BPEL (the Business Process Execution Language). He pointed out the outstanding work of a team of researchers (Ouyang, Dumas, van der Aalst and ter Hofstede) who set out to create a BPMN-to-BPEL compiler, since that is often argued to be the missing link in current BPMS architectures. They have made great progress towards solving this problem, but their work is still incomplete. He also argued that we should give up on BPEL altogether and focus on what appears to be the successful path: creating an executable BPMN standard layered underneath the notation.


I have been working on this problem since 1997, and in 2002 I wrote two articles (1, 2), both of which are referenced in the OMG BPMN 1.0 specification. I would like to reiterate the arguments that I developed in these articles, perhaps more clearly, with a different example. My goal here is to explore the misunderstandings on which the current architecture of BPMSs is based, and to offer a new architecture blueprint on which a new class of Business Process Management Systems could be built.

Fallacy #1: Business analysts model their processes from a systems' point of view

If you talk to practitioners, they will tell you they model processes from the user's point of view, not from an execution or systems point of view. Their process model instructs the user what to do; they never model the responses of the systems to the user's input. There is a good reason for that: business continuity. If all systems fail, users need to know what to do for the business to continue operating. It is also the way business analysts think, and how they define and derive their metrics from processes. This user view is very important to the business because it directly relates to the workflow of activities that creates value. Business analysts never think in terms of system boundaries, execution, messages or business objects (but developers do). At most, the business analyst's understanding of a system is a screen, which really amounts to an electronic version of a paper form (to view or enter information).

Fallacy #2: Business users can easily learn BPMN and use all its features.

BPMN is a 300+ page specification. It is illusory to think that even a fraction of your business analysts will be able to master all these concepts. Michael zur Muehlen has run a survey of the most used constructs in BPMN (see slide 24), and his conclusion was that about 25 constructs are routinely used. Personally, I have created a tutorial for business analysts based on 10 key concepts, and even after paring down BPMN it was hard to convince the Lean Six Sigma Black Belts I worked with to adopt it.

BPMN has a lot of attributes that were put there just for BPEL generation, and these are generally ignored.

Fallacy #3: Business analysts should be able to create executable solutions from process models

I am not saying BPMS vendors are disingenuous in trying to sell you a BPMS with the argument that it actually does that. BPM started with good intentions: the vision of better Business/IT alignment, faster development cycles... The idea emerged that the business could actually produce models that could be turned into executable code. No wrongdoing there; this is in the same line as CASE tools, MDA, MDD, DSLs... This vision spoke to our dearest dreams: fast, easy, cheap. Each time I hear a vendor's spiel on this topic I think of John Lennon's song Imagine (i.e. I want to live in this world, but it is not going to happen in my lifetime). Vendors felt there was a real (and huge) market based on a solid idea, and when you combine that with the almost infinite pool of money that flowed from the VCs, well, you get what we have today. Some vendors succeeded better than others at delivering a fraction of that vision, but we have to admit that the vision has not been realized. Nobody can claim they have delivered a general-purpose engine that business analysts can use (even with minimal intervention from IT) to create a solution from process models. Big projects fail, and BPMS usage is marginalized and brings little benefit to an organization.

The joke I often tell people who want their business users to "craft" solutions is: the good news is that you just added 2000 developers to your organization; the bad news is that you just added 2000 developers. You want your users to be able to personalize solutions, not to build or even customize them. Note that in some well-constrained cases, it is ok to let business users customize some of the business logic (such as rules).

Fallacy #4: If we add a magical BPMS that creates solutions directly from business analysts' inputs, we would not need to develop any integration with existing systems, change existing systems of record, or do any QA.

Stated that way, I hope that by now everyone agrees that we will not see such a magical BPMS on the market for at least another 10 years. And yes, vendors have given up completely on taking developers out of the loop. Bruce notes, however:

a host of smaller companies began to demonstrate success both with BPM buyers and industry analysts by ignoring BPEL altogether. Vendors like Lombardi, Appian, and Savvion, focused on human-centric processes more than integration, led the way with a new style of BPMS in which executable design is layered directly on top of the process model, in the form of implementation properties of BPMN activities.

The tooling itself encouraged business-IT collaboration throughout the implementation cycle, and fit well in agile iterative methodologies that significantly shortened the cycle time from model to deployed solution.

Marlon Dumas, who responded to Bruce, agrees with me:

You won’t remove the developer from the BPM lifecycle, simply because no business analyst will ever be willing to write something that resembles an XPath expression, or any other expression language.

I would argue, as I said earlier, that these vendors have experienced limited success. As Bruce points out, they focus on human-centric processes, which I agree fit, for the most part, the centralized view of a business process engine developed by these vendors, especially when limited customization of, and integration with, existing systems is needed.

Fallacy #5: Business Process Execution must be centralized

Let's spend some time on this one. Bruce explains that he is confronted with a new problem:

In fact, more often than not, if [his BPMN users] have already made their BPM runtime decision, it is BPEL. It’s a standard, a commodity, available open source. It’s what IBM and Oracle use in their BPM runtime. So there are compelling factors in BPEL’s favor. But standardizing on both BPMN and BPEL? No, of course it’s not logical.

Having been in the roundtripping-is-dead camp for about a year, I now find myself having to confront this issue once again. In my BPMN training, for example, students want to know what strategies or patterns should they use in their BPMN diagrams that will fit well with their expected BPEL implementations. It’s not something I expected to think about when I started.

A BPMN/BPEL round-trip has been the holy grail of this industry. This was the vision initially proposed by BPMI.org, the founding organization of BPML and BPMN. What happened there? How could a few companies have created a successful market for human-centric processes, without the need for an intermediate orchestration language, simply by adding some execution semantics to BPMN? Others suggest that the problem comes from the fact that we have not yet found the right coordination language. Arzul Hasni, for instance, suggests that GRAFCET could be a better candidate than BPEL to achieve this round-trip. GRAFCET is a programming language dedicated to industrial automata (Arzul gives details in his post). In essence it is fairly close to BPEL.

Ouyang, Dumas, van der Aalst and ter Hofstede did a remarkable job at creating the BPMN/BPEL mapping. For those of you who, like me, have forgotten most of their college math, I published these UML diagrams for BPMN and BPEL; they may help you understand the divergence of semantics (i.e. the things you can express in one and the other) between the two specifications. The conclusion from this group of researchers is pretty clear:

A possible avenue for future work is to extend the proposed techniques to cover a larger subset of BPMN models, e.g. models involving exception handling and other advanced constructs such as OR-joins. Unfortunately, many advanced constructs of BPMN are under-specified and are still being refined by the relevant standardization body.

This concept of a centralized process engine is not new. It is the foundation behind 99.99% of the work that has been done in this space since the early 90s. This focus on centralized architectures can best be understood through this excellent presentation from Keith Swenson, VP of R&D at Fujitsu Computer Systems (who is very invested in XPDL, an interchange format for BPMN).

Unfortunately, this view is completely flawed, and I would like to spend some time explaining why. With this kind of thinking we are simply ignoring the very nature of business processes: enabling an organization to add value by transforming resources. Processes such as Source-to-Make or Quote-to-Cash all move "things" along a workflow of activities that ultimately (and hopefully) add value to the resources being transformed and consumed. The information systems are simply there to advance, capture and report the state of these resources and activities. Take any business object that describes a physical concept: Purchase Order, Invoice, Inventory Item, Employee, Customer... they all have a lifecycle (which can be described by a state machine - see Figure 2).

I would like to take the example of a Job Application business process (this is the Candidate-to-Employee process) that takes a candidate application and processes it to the point where the candidate can either be hired or his or her application can be rejected.

Here is a typical Job Application information model.

Figure 1. The Job Application Data Model

This job application has a lifecycle (please note that the Job Application data model -the content- is independent of its lifecycle and vice versa):

Figure 2. The Job Application Lifecycle

The Job Application lifecycle itself is independent of any Candidate-to-Employee business process. It is a piece of business logic that changes rarely, even though the processes that interact with it might change often. A company could also have several of these processes for the same lifecycle: for instance, one for VP positions, one for managers and one for all other employees. In other cases, because of regulations, some processes may involve additional activities (background checks...). These process variants are extremely common. However, for the most part a job application is a job application, and even though there could also be some job application lifecycle variants, they are for the most part decoupled from their process variants.

The question now is how would you go about implementing this Job Application Lifecycle component? The way I would do it is by creating a service that implements all the actions that will result in a state transition:

Figure 3. The Job Application Service

All these service operations will, in effect, execute some business logic resulting in a state transition. What's the best language to implement this service? Java/C#? BPEL? GRAFCET?

My preference is a message-oriented orchestration language like BPEL, because these resource lifecycles are long running (days, weeks, months, years). To illustrate that point, let's take the example of a customer resource: as a customer, I just canceled a 12-year relationship with a credit card company this week (which caused the lifecycle of my customer instance to transition to its final state), because I had to pay some extra fees due to what I felt was a broken billing process... Yes, processes do matter, and they could have added an activity to their process, without ever changing the Bill lifecycle, that would have kept me happy; but they didn't, they chose to maximize fees instead. BPEL is an ideal implementation language for such long-running lifecycles (not processes) because it understands messages (receive, send, invoke) and message correlations, and it can deal with parallel flows (yes, a resource can have composite states). In addition, BPEL engines have been designed to automatically handle dehydration/hydration of process instances, which is one less (painful) thing to implement.
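The dehydration/hydration behavior a BPEL engine provides can be pictured with a toy sketch: a long-running lifecycle instance is persisted between messages and re-loaded (correlated) by an identifier when a message finally arrives. This is only an illustration of the mechanism, assuming a simple in-memory store; real engines persist to a database and correlate on message content.

```python
import pickle

# Toy sketch of dehydration/hydration: a long-running lifecycle instance
# is persisted between messages and re-loaded (correlated) by its id.
store = {}  # stands in for the engine's persistence database

class CustomerLifecycle:
    def __init__(self, customer_id):
        self.customer_id = customer_id
        self.state = "active"

    def on_message(self, message):
        # business logic reacting to a correlated message
        if message == "cancel":
            self.state = "closed"  # final state of the customer resource

def dehydrate(instance):
    """Persist the instance so it consumes no resources while waiting."""
    store[instance.customer_id] = pickle.dumps(instance)

def hydrate(customer_id):
    """Re-load the instance when a correlated message arrives."""
    return pickle.loads(store[customer_id])

# The instance may wait days, months or years between messages...
dehydrate(CustomerLifecycle("cust-42"))
# ...much later, a message correlated on the customer id arrives:
instance = hydrate("cust-42")
instance.on_message("cancel")
print(instance.state)  # closed
```

The point is not the storage mechanism but the division of labor: the engine handles persistence and correlation so the lifecycle implementer does not have to.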

The BPEL implementation would look like this (using a vendor neutral BPEL notation):

Figure 4. The Implementation of the Job Application Service

I know a lot of people will tell me that this is a process, but it is not. It is a service implementing the lifecycle of a Job Application, independent of the processes and activities that may advance the state of the job application. A process is the set of activities that advance its state. Resource lifecycles and processes are decoupled; I don't think anyone can argue with that. Yet everyone is trying to model and implement processes without a clear understanding of the resource lifecycles, which are more or less built into the process model.
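The distinction can be made concrete with a minimal sketch of such a Resource Lifecycle Service: each operation runs some business logic and, if legal, advances the state. The state and operation names below are assumptions loosely based on Figures 2 and 3; the code is illustrative, not from the article.

```python
# Illustrative Resource Lifecycle Service: a guarded state machine whose
# operations advance the state of one Job Application instance.
class IllegalTransition(Exception):
    pass

class JobApplicationLifecycle:
    # (current_state, operation) -> next_state  (names are hypothetical)
    TRANSITIONS = {
        ("received", "screen"): "under-review",
        ("under-review", "reject"): "rejected",
        ("under-review", "retain"): "interview",
        ("interview", "reject"): "rejected",
        ("interview", "extend_offer"): "offer-extended",
        ("offer-extended", "accept_offer"): "hired",
        ("offer-extended", "decline_offer"): "rejected",
    }

    def __init__(self):
        self.state = "received"

    def invoke(self, operation):
        """Run the operation's business logic, then transition state."""
        key = (self.state, operation)
        if key not in self.TRANSITIONS:
            raise IllegalTransition(f"{operation} not allowed in state {self.state}")
        # ... the operation's business logic would execute here ...
        self.state = self.TRANSITIONS[key]
        return self.state

app = JobApplicationLifecycle()
app.invoke("screen")
app.invoke("retain")
app.invoke("extend_offer")
print(app.state)  # offer-extended
```

Note that nothing here says *which* process, or in what order of activities, drives these invocations: any number of process variants can advance the same lifecycle.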

So the choice that most people have made to standardize on a BPEL engine is the right one, by far. Note that because of SCA, your favorite programming language can easily be extended to incorporate BPEL semantics. In the past I would have favored BPEL-J over BPEL, but today, if you need to express some business logic in a traditional language, SCA makes it really simple to leverage orchestration capabilities in your favorite language (Java, C++, COBOL, PHP, ABAP...).

There is such a strong relationship between resource lifecycles and orchestration languages that the leading orchestration engines offer a state machine paradigm as a way to create your orchestration definition. This is the case for IBM Process Server and Microsoft Workflow Foundation. (I apologize if I forgot some; please let me know if you know of others.)

Please note that so far I am suggesting using an orchestration engine to implement the services that manage the lifecycle of resources; I have not yet talked about business processes or business process engines.

Before we start looking at the relationship between a lifecycle and a process, let's emphasize that a lifecycle is a very intuitive concept. Most business analysts could readily describe these lifecycles (say, using a UML notation). I would argue that almost anyone in an organization can understand these lifecycles, whatever their role. On the opposite end of the spectrum, however, I would argue that almost no one would be capable of designing (as in graphically designing, using BPMN) a business process that complies with the lifecycles of all the resources involved. Assuming you created such a model, let's say that you now create a process variant. How would you guarantee that the resource lifecycles were not impacted? How much QA would you need to do to verify that?

Process and resource lifecycles can only be reconciled during the process implementation, possibly by "bending" the process to make sure it complies with the lifecycles. This activity can only be performed by a developer, carefully mapping the requirements that business analysts expressed in BPMN and reusing the enterprise-class services that manage the lifecycle of the core resources of his or her organization.
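That reconciliation step can be pictured as a compliance check: replay the sequence of lifecycle operations a candidate process variant would invoke against the lifecycle's legal transitions, and flag any variant that would put the resource in an illegal state. A hypothetical sketch (the transition table and operation names are invented for illustration):

```python
# Sketch: check whether a process variant's sequence of lifecycle
# operations is legal under the resource lifecycle's transition table.
TRANSITIONS = {
    ("received", "screen"): "under-review",
    ("under-review", "retain"): "interview",
    ("interview", "extend_offer"): "offer-extended",
    ("offer-extended", "accept_offer"): "hired",
}

def complies(operations, start="received"):
    """Return True if the operation sequence follows legal transitions."""
    state = start
    for op in operations:
        if (state, op) not in TRANSITIONS:
            return False  # the process variant would violate the lifecycle
        state = TRANSITIONS[(state, op)]
    return True

print(complies(["screen", "retain", "extend_offer"]))  # True
print(complies(["screen", "extend_offer"]))            # False: skips interview
```

The developer's job is exactly this mapping: each BPMN activity is bound to lifecycle operations, and variants that fail the check are "bent" until they comply.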

Now, let's look at how a business analyst would create a Job Application business process definition using BPMN:

Figure 5. The Job Application Business Process

First, BPMN does not have the notion of a "resource", and a fortiori no notion of a "lifecycle"; at best someone could annotate a BPMN definition with the expected states at a given point in the process (as shown above). This is perfectly fine; this is how BPMN should be. Second, the business analyst is totally unaware of the operations that will be invoked on the Job Application service to advance its state. They belong to the systems view. Expecting the business analyst to add "invoke" activities in between the activities he or she describes as user activities is simply dead wrong. Unfortunately, the relationship that people set out to establish between BPMN and BPEL was the wrong one, and they ended up adding the core BPEL operation semantics of send, receive and invoke to the process notation. This is totally artificial and should never be used, unless the message being received or sent is a business message (such as a Job Application arriving on the desk of a recruiter), not an operation being invoked.

How does a business process get implemented? A business process execution environment is an assembly of services (Figure 6) interacting with each other (not a centrally orchestrated set of services). It is the interactions of the orchestrations implementing the resources' lifecycles, as well as the performance of human tasks, the events and simple service invocations, that advance the process.
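The difference between a central orchestrator and an assembly of interacting services can be sketched with a simple publish/subscribe wiring: no single engine drives the flow; each service reacts to the events of the others, and the process emerges from the wiring. All names below are illustrative.

```python
# Sketch: the process advances through interactions between services,
# not through a central engine stepping through a process definition.
subscribers = {}

def subscribe(event, handler):
    """Wire a service's handler to an event (done at assembly time)."""
    subscribers.setdefault(event, []).append(handler)

def publish(event, payload):
    """Deliver an event to every service wired to it."""
    for handler in subscribers.get(event, []):
        handler(payload)

log = []

# At assembly time, the lifecycle service and a notification service are
# both wired to the human task container's completion event:
subscribe("interview.completed", lambda p: log.append(f"lifecycle: retain {p}"))
subscribe("interview.completed", lambda p: log.append(f"notify: email {p}"))

# A recruiter completes the interview task; the wired services react.
publish("interview.completed", "candidate-7")
print(log)  # ['lifecycle: retain candidate-7', 'notify: email candidate-7']
```

In the blueprint of Figure 6, this wiring is what the SCA assembly expresses declaratively.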

Figure 6. The Job Application Process Implementation

The great news is that we already have all the technologies necessary to achieve this vision, including an assembly technology: the Service Component Architecture. Everything you see in this picture can be achieved with a combination of SCA 1.0, BPEL 2.0, Web Services (XSD and WSDL 1.1, because of BPEL 2.0), BPEL4People 1.0 and Human Tasks 1.0.

With BPEL you don’t have the freedom to ignore elements you don’t support. BPEL is BPEL and you have to support everything in the spec. The rest are called proprietary extensions. They live in their own namespaces, and a valid criticism of BPEL 1.1 is that real processes need too many of them. It’s a bit better in BPEL 2.0, but human tasks, subprocesses, and other basics still require extensions in 2.0, such as the nearly mythical BPEL4People.

This criticism does not apply anymore: WS-HumanTask and BPEL4People belong to the task container and are indeed separate from BPEL itself. Now, you can argue whether BPEL needs "subprocesses", but I would say that for an implementation language of Resource Lifecycle Services this is not critical: very few elements of a state machine are that reusable; they belong intimately to their resources.

At this point, and unfortunately, Microsoft is not participating in SCA or BPEL4People, so you cannot use Workflow Foundation as an alternative to a BPEL engine, even though it would do the job perfectly. You can, however, use WCF as a service container implementing services that can be invoked from SCA and your favorite BPEL engine. Microsoft itself does not have an assembly mechanism, so you cannot even implement this architecture blueprint in .Net. On the open source side, you have most of the components (SCA, BPEL and service containers), but a BPEL4People container is missing. This is not critical: basic human task containers are actually not too hard to build (though not to the level of BPEL4People and WS-HumanTask).

To understand the role of a developer in this new architecture, let's focus on the "Schedule Interview" activity of the process model (Figure 5). As you can see, this activity is featured in the process model (and it makes sense because, if the job application system is down, this is what a user would have to do), but as an optimization it was decided with the business that the scheduling task would be automated, on top of an Exchange server for instance. The Job Application lifecycle provides the hook (i.e. it requires) that an interview be scheduled after the candidate's application is retained. Note that the Job Application service does not know how this is implemented. It could well have been a human task too. At this point of my understanding, it is simply impossible to resolve this kind of design decision automatically. This is why process models must be completely separate from any execution semantics. Another design decision that would not impact the process definition is the fact that the candidate's application could happen in a different human task container. We could very well "assemble" this process with the candidate application taking place on a popular career site. Once the application is approved for interview, an activity would send an email to the candidate to point him or her to the process tasks (review offer, enter employment information). I bet you can't do that (easily) with your current BPMS architecture.
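The design decision described here, where the lifecycle requires an interview to be scheduled but does not know how, can be sketched as a pluggable binding chosen at assembly time. The class, functions and strings below are hypothetical illustrations, not part of any standard.

```python
# Sketch: the lifecycle service declares that it *requires* scheduling;
# the concrete binding (automated calendar vs. human task) is chosen at
# assembly time, without touching the lifecycle service itself.

def automated_scheduler(candidate):
    # e.g. implemented on top of a calendar/mail server
    return f"calendar invite created for {candidate}"

def human_task_scheduler(candidate):
    # e.g. implemented as a task in a human task container
    return f"task 'schedule interview for {candidate}' assigned to recruiter"

class JobApplicationService:
    def __init__(self, schedule_interview):
        # the required hook is injected when the solution is assembled
        self.schedule_interview = schedule_interview

    def retain_candidate(self, candidate):
        # the state transition to "interview" would happen here,
        # then the required hook fires:
        return self.schedule_interview(candidate)

svc = JobApplicationService(automated_scheduler)
print(svc.retain_candidate("candidate-7"))

# Re-assembling with a human task requires no change to the service:
svc = JobApplicationService(human_task_scheduler)
print(svc.retain_candidate("candidate-7"))
```

This is the same decoupling the article attributes to the SCA assembly: the binding changes, the lifecycle and the process definition do not.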

As a side note, you can see now that a task engine is not really a sub-component of a business process engine. Of course, this is how BPMSs are designed today, but in reality it is an independent component of the architecture, managing human tasks (Figure 6). These human tasks are naturally always related to one or more business processes, but they have a lifecycle of their own and interact directly with the resource lifecycle services. As Dominique Vauquier[1] puts it in his article: "Human tasks are grafted onto the resource lifecycle". In addition, as we have seen in the previous paragraph, it is critical to enable a "business process" to interact with several task containers.

I did not describe the role of rules or Master Data Management here (apologies, James), but they do play a crucial role and require specialized service containers, a.k.a. BRMS (Figure 6). The question that Michael zur Muehlen or Mark Proctor asks becomes totally irrelevant, because SCA makes it irrelevant (from a runtime perspective). SCA will let you choose the most appropriate invocation mechanism for a decision service (running in-process with your BPEL engine if it is technically possible). SCA offers a large degree of decoupling between the elements of this architecture, allowing them to be reused in different processes while choosing the best runtime configuration possible for each process.

I did not speak of the role of B2B either; I say a lot more about it in my two original articles (1, 2). This architecture blueprint supports B2B by enabling the definition of arbitrary boundaries within the assembly. For instance, I can "assemble" the two views of a purchase order lifecycle (buyer and seller). This is a tremendous advantage. Traditional "centralized" execution models impose an artificial discontinuity at B2B boundaries and force two different execution models: a centralized orchestration on each side and an assembly in the middle. In some ways my proposal is simply based on the original B2B process definition model of OASIS ebXML Business Process, but applied at the resource level, not just at the business partner level. This is why the execution models are continuous both inside an organization and at its periphery, as it interacts with its business partners.

Pretty much everyone I encountered in the "execution" standard working groups (such as BPML, BPEL and WS-CDL) was not a practitioner (and that includes me). They were developers and architects. They often focused on complex mathematical theories (such as the Pi-Calculus) without ever validating whether these theories' semantics would actually be enough to support business process execution. Typically, these technical committees would focus on 3 to 5 use cases to write their requirements. These use cases were often trivial, and rarely reflected the "real-world" complexity of business processes.

Business process execution semantics are difficult to conceptualize. It is actually so difficult that most executable processes are still painfully hard-coded in our solutions, one line at a time. If there were a better way, I am sure everyone would embrace it. I was encouraged to read the comments from the "Why Java Developers Hate BPM?" discussion. Not one comment complained about the validity of the abstraction. Even code kahunas such as JBoss's chief architect, Bill Burke (with whom I worked briefly as we built a human task container together, before he joined JBoss), comments:

I thought the same of BPM. That it was nothing more than XML scripting and the dumbing down of developers. Until I actually started looking into BPM frameworks ... I didn't see the value add these frameworks had to offer. When I started thinking [about them] as a reliable and fault tolerant state machine I really started to see the potential for BPM frameworks. Then when you start combining the use of transaction management and compensations with your business processes, you have some real nice abstractions to work with as you develop your applications.

Based on what I explained in the previous section, his and others' statements go in the right direction. Developers now see the difficulty of having to code state machines over and over, and how a generic engine could ease their job (in most cases).
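The "reliable state machine with compensations" idea in Bill Burke's comment can be sketched in a few lines: each completed step registers a compensation, and if a later step fails, the compensations run in reverse order to undo the work done so far. This is only an illustration of the pattern; real engines persist the state and run compensations transactionally.

```python
# Sketch of compensation-based recovery: undo completed steps in
# reverse order when a later step fails.
def run_with_compensation(steps):
    """steps: list of (action, compensation) callables."""
    done = []
    try:
        for action, compensation in steps:
            action()
            done.append(compensation)
    except Exception:
        for compensation in reversed(done):
            compensation()
        return "compensated"
    return "committed"

log = []

def failing_step():
    raise RuntimeError("ship failed")

steps = [
    (lambda: log.append("reserve"), lambda: log.append("unreserve")),
    (lambda: log.append("charge"),  lambda: log.append("refund")),
    (failing_step,                  lambda: None),
]
print(run_with_compensation(steps))  # compensated
print(log)  # ['reserve', 'charge', 'refund', 'unreserve']
```

Combined with persistence (the dehydration/hydration discussed earlier), this is exactly the abstraction that makes a generic engine more attractive than hand-coding the same machinery in every solution.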

Fallacy #7: Bruce Silver concludes his post by saying that "the collaborative implementation paradigm, in which executable design is layered on top of the BPMN model, is the way to go."

Bruce believes that a business process implementation should be driven from the business process model expressed in BPMN, successively adding annotations (collaboratively with developers) to achieve an executable process.

Unfortunately, this vision does not take into account the reality of business processes (as a workflow of activities that advance the state of resources), and I hope I have convinced you that this statement, even though it is conceptually a valid endeavor, could not be more wrong, because we cannot model the workflow of activities and the resource lifecycles together (at least in the current state of my knowledge). I foresee that for quite some time developers will translate the process definitions produced by business analysts into assemblies of human tasks, resource lifecycle services and other services (including decision and MDM services).

Now, this new architecture blueprint does not mean that the investment you made in your favorite BPMS is lost. You will, however, need to add a composite service container (such as a BPEL container) and an assembly container (SCA), and use your BPMS mostly as a human task container (which they actually are, for the most part, anyway). A human task container is a noble and important component of the architecture. The current BPMSs' task containers are very sophisticated and would be difficult to build yourself, so it was money well spent. I don't want to undermine the role of this container at all. I actually expect that within two years all the BPMS vendors will have adopted the vision presented in this article and transformed their suites to work within SCA assemblies and BPEL containers based on this blueprint.

I also argue that, at the end of the implementation, it is possible to automatically reconstruct an "as-is" view of the operating process. I have not proved that; it could become a research topic.

Conclusion

After so many years searching for the BPM magic bullet, the software industry is facing a wall. This wall can easily be overcome with a paradigm shift and a new factoring of the business logic, based on resource lifecycles. If we take the wrong turn today and still believe in these seven fallacies, we run the risk of having to throw all these products and standards away for lack of ROI, and of returning to coding everything by hand. If, however, we take the very same technology we have today and use it differently, we can deliver a vision that is very compelling to both the business and IT. I would not call that vision BPM per se; it is larger than BPM. I would rather call it "Composite Applications" or, more exactly, "Composite Solutions".

The Composite Solution vision speaks directly to what the business needs from IT:

Build solutions rapidly with projects as small as possible (rely on many iterations)

Change solutions rapidly and support an iterative lean six sigma approach

Be able to visualize the business design in operation at the present time without complex “current-state” projects

Be able to gain operational intelligence from the current business design without complex measurement projects

I argue that the capability of "being able to build / change the solution directly by creating / changing the business design" (no matter how desirable it appears to be) is antagonistic to these four requirements. The reason is that it leads to simplistic and rigid task-centric application models (as we can see in BPMSs today). These application models cannot meet the needs of the business, and typically result in increased project cost because, when real solutions need to be developed, they require a lot of custom development "around" the BPMS application model. To compound the problem, these suites, as pointed out by the "Why Java Developers Hate BPM?" discussion, do not yet offer a robust development environment for this custom code, suitable for large projects.

I argue that the vision moving forward is Composite Software, based on two composition models: assembly (SCA) and orchestration (BPEL). (Choreography is coming down the road, of course, but I will explain that in another article.) The technology to develop this blueprint is available today. In addition, BPEL and BPMN, as they are defined today, work. If something needs to be changed in BPMN, it is removing all execution semantics: BPMN should be designed to let the business analyst express himself or herself. If you want more details about how to construct composite software using these standards and this architecture blueprint, a mini-book was published on InfoQ last week.

The architecture of Composite Solution Platforms, as described in this paper, also offers a cleaner interface between SOA and BPM. It gives SOA the opportunity to build truly reusable services: the Resource Lifecycle Services, which can be reused across process domains and process variants. Because these Resource Lifecycle Services are reusable across processes, the implementation of any given process becomes that much cheaper, faster, and easier. The implementations of Resource Lifecycle Services are the "code" within the process. Thinking that a business analyst (or anyone else) would have the knowledge to code and recode these lifecycles in a graphical notation amidst the process definition is simply pushing BPM in the wrong direction.
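To make the idea concrete, a Resource Lifecycle Service can be pictured as a small state machine that owns the legal transitions of one resource and is reused by every process that touches it. The resource, states, and class names below are hypothetical illustrations (not drawn from any BPM standard or product); a minimal sketch in Python:

```python
# A (hypothetical) Resource Lifecycle Service for a job-application
# resource, modeled as a small state machine. Any process may drive
# it, but only through the transitions declared here.

class LifecycleError(Exception):
    """Raised when a process requests an illegal transition."""

class ApplicationLifecycle:
    # state -> set of legal successor states
    TRANSITIONS = {
        "received": {"under_review"},
        "under_review": {"accepted", "rejected"},
        "accepted": set(),
        "rejected": set(),
    }

    def __init__(self):
        self.state = "received"

    def advance(self, target):
        """Advance the resource to `target`, or fail loudly."""
        if target not in self.TRANSITIONS[self.state]:
            raise LifecycleError(f"{self.state} -> {target} is not allowed")
        self.state = target
        return self.state
```

Any number of processes can drive this lifecycle, but none of them can invent a transition the lifecycle does not declare, which is what makes the service reusable across process domains and variants.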

This blueprint, as a composite solution platform, already has an enterprise method that can support it: Praxeme. The Praxeme Institute is translating its artifacts into English and making great progress towards this goal.

Now, I do share some of the concerns from Bruce and Marlon about involving developers in the current technologies (SCA, BPEL...); this is why I have started an initiative called wsper. This initiative offers an abstract programming environment to simplify the work of developers and architects during lifecycle implementation and process assembly. It also helps construct a Composite Solution Platform from heterogeneous components, because it isolates the business logic implementation from these components (and their future evolutions). It also isolates the business logic from the evolution of standards.

I want to extend many thanks to Sandy Kemsley for providing so many useful links and comments.

[1] This article complements Dominique Vauquier's article ("The 6 Fallacies of Business Process Improvement"). Here we focused on business process modeling as it translates into execution; Dominique's article explores how business process modeling relates to Business Process Improvement projects. I translated Dominique's article from French (it has been accepted for publication on BPTrends.com in January 2008). For those of you who read French, the article can be found on page 39 of the "Guide of the Pragmatic Aspect" of the Praxeme enterprise method.

A useful list of fallacies though light on the role of business rules


I blogged about this article and discussed some of these fallacies as they relate to business rules and decision management. Some of them would be less true if BPM vendors/practitioners took decisions more seriously, and some of them are great analogies for similar issues in the rules space. Check out the post at www.ebizq.net/blogs/decision_management/2007/12... JT

Business-IT alignment should still be an aim


Jean-Jacques, I cannot but agree with many of the opinions you share here. However, you seem overly pessimistic regarding the possibility of aligning IT systems with business operations by bridging analyst-level process models (e.g. BPMN) with executable process models. Granted, there is no magic button that will bridge the two. However, sound methods and wisely chosen tool support can go a long way in this direction. For example, if we align BPMN models with BPEL process definitions, we can at least partly elucidate the impact that business-level changes have at the implementation level. Oracle BPA, for example, is a modest step in this direction. Clearly, it's not a silver bullet, and it won't magically solve the business-IT alignment equation, but still, it shows that something can be done to keep business models and code in sync. The question of how "task-centric" process models should be is valid, but perhaps orthogonal to the BPMN round-tripping debate. I mean, we can argue whether BPMN's task-centricity is the way to go for process modeling (at various levels of abstraction), but that's a separate point.

Re: A useful list of fallacies though light on the role of business rules


James:

BRMS are definitely my area of expertise and I second many of your comments. Just a couple of clarifications:

>> There is no reason why this information system cannot also decide how to act

Actually, I totally agree. I thought this was conveyed by "advance... the state", but I am glad you made it clearer.

>> I do think that collaboration is key - business users and analysts must be able to collaborate with IT to define processes and decisions

I think the question is really focused on "collaborate" vs. "communicate". I would argue that "communicate" is a better value proposition than "collaborate". To me, "collaborate" implies long joint sessions, whereas "communicate" conveys a better separation of work and a clean handoff. You collaborate because you can't reach the point where this handoff is possible.

Re: GRAFCET and resource management


Azrul:

thank you so much for bringing back so many memories. I used the GRAFCET in the early 1990s as I was building (industrial) process control systems for the semiconductor industry (using Objective-C and NeXT on the front-end). I had actually tried in 1999 to discuss these concepts with the team I was working with at eXcelon, so I do believe the concepts are good, but I also believe that BPEL can do the job just fine. It is not perfect, but I can live with it; compared to starting over, I'd rather fix a few things in BPEL. If you read about "wsper" you will also realize that the core of the wsper programming language is very close to the GRAFCET. I had not looked at the language in almost 10 years, so I can't claim this was a conscious decision, but I argue I can compile this language into BPEL.

Re: Business-IT alignment should still be an aim


Marlon:

thanks for your comments. I am very impressed by how far your team has been able to go. It means that BPEL as a language is pretty well designed, considering the fact that you are imposing the constraint of generating readable BPEL code.

I am actually not "pessimistic" about aligning BPMN with executable process semantics; I am simply saying that I am a bit surprised that this is the direction some people are looking at, because it negates the existence of the "resource" as a key ingredient of the process.

Now it does not mean that your work is not useful at all, as you mention understanding the impact of changes at the process definition level is a key benefit.

I am also pretty sure that you could go in the other direction and provide a view of the "process definition" once the BPEL code has been implemented, such that business users would automatically have the "as-is" view of the process should they look at improving it at a later stage (this has tremendous value, because analysts spend a lot of time just understanding the current state).

Finally, another area of interest could be "verification" that the process implementation (based on an assembly of BPEL definitions) actually implements the process definition.

>> we can argue if BPMN's task-centricity is the way to go for process modeling (at various levels of abstraction), but that's a separate point.

The key question is whether you want to take the point of view that "a process is the collection of activities that advance the state of resources as they are transformed or consumed"; this is a 100% task-centric proposition, whether the tasks are automated (James, this is where decision services would fit) or human. If you take the point of view that a process owns everything between the presentation layer and the data access layer, then I would say you are driven towards the kind of approach that you are exploring, and therefore you are forced to have developers tweak the BPEL code, or business analysts use BPMN in a way that writes the correct BPEL.

My proposal is quite different: it starts from the resource / business entity level and assumes their lifecycles to be fairly stable (and unbreakable, meaning a process cannot change the lifecycle of a resource once it has been defined). From that point, a process is simply an assembly of resource lifecycles and "activities"; there is much less code to write, and the BPEL code is written once and reused in every process the resource is involved in. I was actually quite amazed to see Dominique Vauquier come to the same conclusion, but from a pure methodology angle, trying to improve the way business analysts improve processes. I can only encourage you to read his article, which I translated from French and which will be published on BPTrends next month.

The problem with your approach is that you can never be sure that a process will not lead to unwanted transitions in the lifecycle of the resource.
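This point can be sketched in a few lines: if the lifecycle is owned by the resource and is unbreakable, a process can only request transitions, never redefine them, so an unwanted transition is rejected at run time. All names below are hypothetical illustrations, not part of any product:

```python
# A sketch (hypothetical names) of a process assembled from a
# resource lifecycle plus an ordinary activity. The Lifecycle object
# owns the legal transitions; the process may only *request* them.

class Lifecycle:
    """Owns a resource's legal transitions; unbreakable by processes."""
    def __init__(self, transitions, initial):
        self.transitions = transitions   # state -> set of next states
        self.state = initial

    def request(self, target):
        """Grant the transition if declared, otherwise fail loudly."""
        if target not in self.transitions.get(self.state, ()):
            raise ValueError(f"illegal transition {self.state} -> {target}")
        self.state = target
        return target

def hiring_process(score, lifecycle):
    """A process: one activity (a score computed elsewhere) and two
    lifecycle requests. It cannot create new transitions."""
    lifecycle.request("under_review")
    return lifecycle.request("accepted" if score >= 0.5 else "rejected")
```

If a process definition asks for a transition the lifecycle never declared, the request fails instead of silently corrupting the resource's state.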

A team of researchers at IBM Research in Zurich is working on this topic.


Marlon Dumas was kind enough to send me the link to the home page of Ksenia Ryndina, which contains many articles that explain their research (which is very applied, since they have already built some prototypes with WebSphere Business Modeler).

I have exchanged an email with Ksenia who confirmed the relationship between her work and this article. She recommends reading a couple of references available on her home page:

Good taxonomy and ontology for modeling data, service, process, human-wf


You describe exactly how many people miss how important "resources" (domain objects) are in SOA. People are too fixated on the business processes when modeling their services and orchestrations, and thus forget to create and govern a semantic canonical data model (or equivalent semantic transformations). This is what David Linthicum, Jack van Hoof, Nick Malik, you, me, and others blogged about this July. I have commented on fallacy #5, "Business Process Execution", and related it to the CDM discussion and the service taxonomy (processes vs. orchestrations) discussions in my blog: kjellsj.blogspot.com/2007/12/business-model-tax...

Interesting article, but we still need executable business processes

My comments on this article are based on my experience as a seasoned IT specialist; they are also expressed in my forthcoming book "Improving business process management systems" (see www.improving-bpm-systems.com/).

What about using the Process Virtual Machine ?


Hi Jean Jacques,

Find hereafter my last post on the BPM Corner community on how the Process Virtual Machine could be considered a core technology to implement most of the containers and modules required by the architecture you propose to handle business processes.

repeating patterns...


Firstly, congratulations on what is one of the most lucid and well-written process modelling articles I have ever read. Believe me, I read a lot... :-)

Historically, the progression from procedural to object-oriented programming languages has allowed us [humans] to build significantly more complex systems by establishing an appropriate set of conceptual and language-related apparatus with which to manage this complexity. Objects encapsulate data and behaviour that relate to a small piece of a much more complicated working system. The bigger composite solution is made easier to understand by the fact that we can progressively dismantle it into smaller pieces. Thus complexity is generally managed through a recursive process of division: dismantling a big thing into smaller pieces that are individually easier to understand.

The fact that resources can exist independently of a process and the fact that they can participate in more than one process means that the need to understand and model them as separate entities is an important conclusion. A workflow process therefore can be viewed as a resource [state machine] co-ordinator responsible for transitioning one or more resources through one or more transitions.

The complexity has always been in trying to understand and describe a process environment where the triggers responsible for firing these resource transitions can originate indeterminately from either a human and/or system generated event. This problem has been compounded with the introduction of Business Rules and Scheduling sub-systems.

In my mind, process models can be viewed as structural relationships between input and output resources, where the relationships between resources are defined as transition tables, and valid process executions as sequences of transitions contained within process resources.
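One way to sketch this coordinator view: each resource is described by a transition table, and the workflow is merely a coordinator feeding a shared event stream to several tables at once. The resources and events below are invented purely for illustration:

```python
# Each resource is described by a transition table mapping
# (state, event) -> next state; the workflow is just a coordinator
# feeding one event stream to every table. Names are invented.

ORDER_TABLE = {("placed", "pay"): "paid", ("paid", "ship"): "shipped"}
STOCK_TABLE = {("reserved", "ship"): "consumed"}

def fire(table, state, event):
    """Fire a transition, or stay put if the event does not apply."""
    return table.get((state, event), state)

def coordinate(events):
    """Transition two resources through a shared event stream."""
    order, stock = "placed", "reserved"
    for ev in events:
        order = fire(ORDER_TABLE, order, ev)
        stock = fire(STOCK_TABLE, stock, ev)
    return order, stock
```

The same event ("ship") advances two independent resources, which is exactly the "one coordinator, many state machines" reading of a workflow described above.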

Re: repeating patterns...


I guess in the end it's the recursive application of state machine semantics to progressively smaller and more detailed descriptions of a workflow, orchestration, or process. The notion of a task container is really just another way of understanding how transition events get triggered.

Re: repeating patterns...


Shaun:

thanks for your comments. I think you nailed it right there:

The fact that resources can exist independently of a process and the fact that they can participate in more than one process means that the need to understand and model them as separate entities is an important conclusion. A workflow process therefore can be viewed as a resource [state machine] co-ordinator responsible for transitioning one or more resources through one or more transitions.

I think that somehow "computing" led us down the wrong path. The way we build information systems today is in no way based on information system concepts but rather on "programming concepts". It is now time to invent an information- and process-centric programming model. This is just the beginning.

... and after 3 years...


We are publishing a whole new way to do things. OutSystems has a platform for agile software development that has been in use for many years. We too implemented BPM, facing all of these fallacies. On top of this experience we built our own modeling language. We came to realize that the tiny bit that makes the difference is having a strong binding to your data model and a strong binding to your interfaces. Our platform already had that. When using the same concept with BPM, data/business rules, and user interface (web screens), we reached a way of process design that can be created with the analysts, not merely handed to them. The developer then goes and implements the business rules or the actual screens...

I think the solution is in having good technologies to bind these three elements: data model / business layer, user interface, and business processes. You can have on each a different set of specialized roles that work together very well.

great article


Great, great article. Most people get confused about CASE tools and how such tools can help; some others (see the comments) think we need executable processes (and so what? computer science has had executable processes since it started). What they need, IMHO, is to learn about software engineering: read the paper en.wikipedia.org/wiki/No_Silver_Bullet and learn what a CASE tool is (en.wikipedia.org/wiki/Computer-aided_software_e...). It means Computer-Aided Software Engineering; just reading the name, we should understand the key word in CASE is "aided", not "magic". And last but not least, choose the right CASE tool for you and your company (or your team); a lot of people are working on that, see case-tools.org/ and www.uml-forum.com/ as two examples.

And BPM is sometimes the right way to see the logic and the business, but not (never!) the architecture and the quality attributes behind the logic... so it is impossible to think in terms of a silver bullet.

Re: great article


Thank you, sir. I had a question. You mentioned an article which proposed a method for transforming BPMN models into BPEL models. However, I noticed that the authors assume that each activity in the BPMN model is equivalent to a service invocation in the BPEL model. Does this contribute to identifying services? Could we call these kinds of approaches service identification approaches? In fact, they only cluster operations into services. I am really looking forward to knowing your opinion.

BPM in-a-can (4 years later)


After years as a developer & systems-engineer in the enterprise, I took a technology position at a small financial firm, who previous to my arrival, adopted one of the BPM-in-a-can solutions mentioned in this article.

At the outset I was extremely impressed that two business-critical processes had been implemented in this system. After two weeks of digging in, however, I was discouraged by the rat's nest of variables, functions, constants, rules, and under-the-carpet black magic that it took to achieve these goals. In other words, *NOT* the "anybody can code" solution it was sold as.

On one hand, I applaud the attempt to turn a process model into something executable. On the other hand, these process models lose nearly all of their clarity/value along the way -- by the time systems-integration and resource-lifecycle logic are bolted on.

Coming across this article was quite validating, because it highlights, in depth, how much consideration should be put into the theory of BPM to realize its value. (The key example being resource lifecycle versus process -- the value here cannot be stressed enough.)

So here's the problem....

When BPM is sold in a can, the core principles are either glazed over or hidden entirely. It undermines the core concepts of what makes BPM's potential so strong, and leaves the impression (to the unknowing business user) that they have actually implemented BPM.

My expedition to understand this from every angle continues, but I just wanted to provide some feedback from the wild. I found this article especially interesting: although it is four years old, it is still quite relevant today (2011).

Re: BPM in-a-can (4 years later)


@mohammad:

Personally, I would prefer these transformations to have a different set of semantics for "activity" and "service invocation". In general, most people place the service layer at the "data access layer" as opposed to the "resource lifecycle layer". My recommendation would be to use subprocesses to define "process activities" and to reserve BPMN activities for low-level system interactions such as a service invocation.

The general problem of BPMN-to-BPEL generation is that it completely ignores the resource lifecycle concept. In fact, resource lifecycles and business process definitions are complementary, not isomorphic. It would be best to focus first on a BPEL implementation of the resource lifecycles (Java or C# work too) and then use any process engine as a "process activity engine": merely a task engine that interacts with the resource lifecycle operations.
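A sketch of that separation, with invented names: the process engine below is nothing more than a task dispatcher, and every rule about what the resource may do lives in the lifecycle operations it is bound to:

```python
# The process engine as "merely a task engine": it dispatches named
# tasks, in order, to operations exposed by a separately implemented
# resource lifecycle. All names are hypothetical.

class TaskEngine:
    def __init__(self):
        self.ops = {}

    def bind(self, task, op):
        """Bind a task name to a lifecycle operation (a callable)."""
        self.ops[task] = op

    def run(self, tasks):
        """Execute tasks in order and collect the resulting states.
        Every rule about legal transitions lives in the operations,
        not in the engine."""
        return [self.ops[t]() for t in tasks]
```

The design choice is that swapping the process engine (or the process definition) never requires touching the lifecycle implementation, and vice versa.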

@Jesse: Yes, I think the problem is that lots of vendors have spent lots of money to build something that missed the target, and unfortunately the sales and marketing machine is charged with explaining to customers that what they built is actually a BPM solution, and that people should spend a lot of money to buy the product and implement processes with lots of consultants. The fact that no vendor or BPM consultant ever wrote a response to my article indicates to me that I was spot on and that engaging any discussion at that level would be deadly. I have talked to many of them, and frankly it makes me quite sad that 4 years later this article is still, for the most part, current, from BPMN to BPEL to BPM products. Unsurprisingly, most people who spent millions of their company's money drinking the vendor Kool-Aid are not going to spend much time showing that they were wrong.

I cannot emphasize enough how far a "resource lifecycle" analysis can go for both SOA and BPM. A chief architect once told me, after I introduced the concept to him, that he understood more about his own business in one hour of RL analysis than in the two years he had been with the company.

Decomposing Process Models


In 2007 I was enjoying the luxury of a few years in academia, on a quest for executable architectures when I came across this intriguing article. The concept of Resource life cycles has been very influential on my thinking ever since. I’ve used this example of the job application process many times in discussions on process decomposition and come back to read the article often since then.

The Seven Fallacies of Business Process Execution: the state data of the life cycle


I think I get it, except for the statement that the Application lifecycle is independent of the (annoyingly class-diagram-like) data model. Surely the data structure must hold the state vector for the Application life cycle? Where are the primary key and state variable of the Application?

Re: The Seven Fallacies of Business Process Execution: the state data of the life cycle


You only have an association between the two; there is no obligation for the data structure to "hold the state". I can change the application data structure without impacting its lifecycle; that's what I meant. Lifecycles are highly reusable across versions of a type or variants of the same type. In other words, the properties of a type are orthogonal to the states of the lifecycle. Of course, an association needs to exist for the system to operate properly.
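The association can be sketched as two separate structures linked only by an identifier (all names below are illustrative, not from the article's model): the data structure carries the properties, a separate registry carries the lifecycle state, and either side can evolve without touching the other:

```python
# The data structure and the lifecycle state are kept separate and
# linked only by the resource identifier. Fields can change across
# versions without touching the lifecycle, and vice versa.
# All names are illustrative.

from dataclasses import dataclass

@dataclass
class ApplicationData:
    """The versionable data structure; holds no lifecycle state."""
    app_id: str
    applicant: str
    resume: str = ""

class LifecycleRegistry:
    """Holds lifecycle state, keyed by the same resource identifier."""
    def __init__(self, initial="received"):
        self.initial = initial
        self._states = {}

    def state_of(self, app_id):
        return self._states.get(app_id, self.initial)

    def set_state(self, app_id, state):
        self._states[app_id] = state
```

Adding or removing fields on `ApplicationData` leaves `LifecycleRegistry` untouched, which is the orthogonality between a type's properties and its lifecycle states described above.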