Over the past few months I’ve been interviewing companies that have successfully applied social to their BPM initiatives. As part of this research, we’re identifying best practices for combining social with BPM, along with specific patterns for how BPM and social are coming together. The patterns identified thus far include:

Collaborative Discovery – Extending process discovery and design to include interactive real-time involvement of business users, customers, and partners.

Shared Development – Extending process development methodology and tools to support development collaboration between business and IT roles.

Of the 12 BPM programs interviewed for this research, most are applying social for collaborative discovery and shared development. But only a few of the companies interviewed are trying to apply social to runtime executable business processes – e.g., connecting BPM to Jive or Twitter to provide context during process execution.

So what gives? Why are so few BPM programs applying social to runtime processes? And those that are fall into the bucket of “social CRM,” applying social to customer service or customer experience processes. Yesterday, fellow blogger Sandy Kemsley and I discussed this over lunch at the BPM 2010 Conference, and we both concluded that BPM programs are all over the “collaborative discovery” and “shared development” patterns because vendors have been supporting these capabilities for a while. Fewer vendors have implemented the “process guidance” pattern because it requires a new way of modeling and analyzing processes and individual process tasks, as evidenced in our recently published "Forrester Wave™: Business Process Management Suites, Q3 2010" report.

It’s funny how things work out. Just after I wrapped up my conversation with Sandy, the next set of sessions focused on the topic of process guidance with researchers from academia and high-tech outlining different strategies for implementing guided processes and recommending best-next-action for process steps.

Although their solution was not completely integrated into a BPM suite environment, it was the first tangible – and well-thought-out – application of internal crowdsourcing I’ve seen applied to business processes. They walked through a very intriguing algorithm for evaluating and recalibrating recommendations and process guidance. I have to admit, seeing the algorithm got me excited :-). Using this crowdsourcing approach, teams could learn from one another very quickly without the need to constantly analyze and make changes to business processes. Or as one customer recently shared with me:

“We want everyone to adopt speedy Maria’s best practices, now that we realize speedy Maria gets her tasks done 10 times faster than the rest of the team.”
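To make the crowdsourcing idea concrete, here's a minimal sketch of how a guidance engine might learn next-best-action recommendations from the actions other participants took and how their cases turned out. This is only an illustration of the pattern; the presenters' actual algorithm is more sophisticated, and all task and action names below are hypothetical.

```python
from collections import defaultdict


class NextActionRecommender:
    """Toy crowdsourced process guidance: learn which next action
    tends to succeed after a given task, based on the recorded
    outcomes of other participants' completed cases."""

    def __init__(self):
        # (current_task, next_action) -> [successes, attempts]
        self.stats = defaultdict(lambda: [0, 0])

    def record(self, current_task, next_action, succeeded):
        s = self.stats[(current_task, next_action)]
        s[1] += 1
        if succeeded:
            s[0] += 1

    def recommend(self, current_task, top_n=3):
        # Rank candidate next actions by observed success rate.
        candidates = [
            (action, s[0] / s[1])
            for (task, action), s in self.stats.items()
            if task == current_task and s[1] > 0
        ]
        return sorted(candidates, key=lambda x: -x[1])[:top_n]


# Completed cases from the whole team train the recommender, so
# everyone benefits from "speedy Maria's" choices automatically.
rec = NextActionRecommender()
rec.record("review_claim", "request_docs", succeeded=False)
rec.record("review_claim", "call_customer", succeeded=True)
rec.record("review_claim", "call_customer", succeeded=True)
print(rec.recommend("review_claim"))
```

The point of the pattern is the feedback loop: no analyst has to stop and redesign the process model, because every completed case recalibrates the recommendations.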

This might sound like science fiction, but this type of functionality will become standard in BPM suites over the next several years as companies search for new and better ways to drive productivity and real-time process collaboration. For more details and best practices for connecting social to BPM initiatives, keep an eye out for my upcoming report: “Social Breaks the Log Jam on Business Process Improvement”.

Sound Off

I want to hear from you. Let me know what you think the next big thing in BPM is. Do you think process guidance will really help reduce the amount of analysis needed for continuous improvement? Or do you think it has potential but is too complicated to be implemented in commercial BPM suites? Post your thoughts in the comment section or feel free to shoot me a quick email at crichardson@forrester.com.

Thanks for the catch, Samuru. Yes, this was a joint presentation between the German Research Center for Artificial Intelligence (DFKI) and Vienna University of Technology. I had a chance to speak with them after their presentation and they seemed to work so well together I thought of them as one team. I could be wrong, but it looks like DFKI is taking the lead on applying the research to business environments.

I updated the blog post to also credit Vienna University of Technology. I'm also looking forward to a deeper dive in their solution for upcoming research, so stay tuned.

I strongly agree with you that this is the next big hot area: applying process mining techniques to suggest process improvements as you work. This fits particularly well with the Adaptive Case Management approach, where process is far less structured and more in need of guidance. I saw the paper you mentioned, and there was another good paper presented on Monday by Barbara Weber where the process log was generating suggestions for tasks to add to the end of your current list.

Just did a quick scan, but the paper does look like a fresh take on decision support – I’ll read thoroughly this weekend.

Interesting use of analysis, classification, and other AI techniques for inferring ad hoc processes from social networks (email, apparently, in this case). Certainly a compelling approach. One thing I’m interested to find out is how it integrates as part of an enterprise information system to provide pervasive system-wide flexibility. As you’ve written previously, process and MDM represent two sides of the same coin.

We have another take, a web-architecture for managing emergent processes step-by-step where each interaction is an opportunity to evaluate context and construct (not simply compose) custom system responses on-demand. The system is Resource-Oriented, building on REST and Service Oriented principles to provide a practical mechanism for mass customization.

Interactions invoke a standard method for the asynchronous processing of all requests, events and timed actions. The method leverages under-specification and a virtualized resource repository to support business process mash-ups. The architecture enables concepts (e.g. business rules; data; security; version control; master data management; etc.) to be mixed freely at run-time without fixed architectural structures that constrain cross-cutting concerns.

We had looked at ProM for process mining, but we’ll certainly take a good look at what these guys have done. It might be a great complement for our framework, adding another rich dimension of context.

Thanks for promoting innovation in process technology - there are a lot of great ideas and capabilities out there.

I completely agree with the importance of the topic. Note that in the context of the process mining tool ProM (www.processmining.org) we have been providing this functionality for several years. We call this "operational support", i.e., using process mining results based on historic information to support users working on running cases. We provide various techniques to make predictions about the future of a running case (e.g., the time it will be finished) and to recommend particular actions based on goals like minimizing flow time, costs, etc. See the recent paper "Wil M. P. van der Aalst, Maja Pesic, Minseok Song: Beyond Process Mining: From the Past to Present and Future. CAiSE 2010: 38-52" for pointers to literature.
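The flavor of operational support can be sketched in a few lines. This is a toy illustration of the remaining-time-prediction idea only, not ProM's actual API or algorithms, which use far richer models (the data below is invented):

```python
from statistics import mean


def predict_remaining_time(history, current_activity):
    """Predict how long until a running case finishes, given the
    activity it has just reached, by averaging what happened in
    completed historic cases. Each historic case is an ordered
    list of (activity, timestamp) events; the last event marks
    case completion."""
    samples = []
    for case in history:
        end_time = case[-1][1]
        for activity, ts in case:
            if activity == current_activity:
                # Time from this activity to case completion.
                samples.append(end_time - ts)
                break  # use first occurrence per case
    return mean(samples) if samples else None


# Two completed cases, timestamps in hours from case start.
history = [
    [("register", 0), ("check", 2), ("decide", 5)],
    [("register", 0), ("check", 4), ("decide", 9)],
]
print(predict_remaining_time(history, "check"))  # mean of (5-2, 9-4) = 4
```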

Thanks for the feedback Wil. I'm interested to get a briefing on the capability in ProM. My primary focus here is the crowdsourcing aspect - not just process mining or process intelligence. It's really learning based on actions taken by other participants and adjusting recommendations to drive successful outcomes. Does the ProM tool support a crowdsourcing model that applies social network analytics for internal social networks?

Do you think that process guidance should be delivered in the same format as the process model itself (e.g., BPMN)? Or do you envisage a separate application that prompts a user towards the 'next-best-action'?

I wondered whether the processes themselves are the interface, or simply the controller?

Process guidance is definitely the way to go as everything else is too complicated. The majority of processes in a modern enterprise environment are fairly unpredictable and unstructured and often need modification during execution. They need clearly defined goals and outcomes but the way to get there is subject to negotiation between the performers.

Clay, thanks for putting the subject of real-time discovery in the limelight. My team and I have been researching the subject of real-time process discovery for a long time, with the first conceptual solutions emerging in 2003. In 2005 we had working functionality, which we made available (and applied for a patent) in 2007 as the User-Trained Agent. This is not an idea, a concept or a thesis, but a real-world, working software component that is embedded into our Papyrus Platform and can be used, for example, without much additional effort in our Adaptive Case Management (ACM) solution.

I know the solutions considered by TU Vienna and v.d. Aalst in Eindhoven, but from my perspective they don't go far enough and aren't practical. There can be ANY data entity that is relevant for decision-making, and the scope of the recommendation engine must be freely definable. Time is actually one of the least important aspects of processes, because if they are highly unstructured each process may be different enough to warrant completely different execution paths. There is no overall time prediction possible. Therefore one has to focus on the individual decision points and map the prevailing data space into the decision engine. The UTA does that and discovers repeated patterns of execution.

You also mention that you focus on 'crowdsourcing', which is a bit odd for a business process, because usually there is not a 'crowd' involved. If I translate the populist term into 'learning from the actions of other users', then the UTA is doing that as a 'USER-TRAINED' functionality. It discovers actions that were chosen by people based on the currently available information in the process. And that means ALL INFORMATION: data and content and process information.

Just to look at 'Maria' and her execution time is not good enough, because she may be fast but does that by leaving out an important step of the process that leaves customers unhappy. So one looks at the overall process (data) state space. Additionally you need a process environment that is flexible enough to allow that kind of user decisioning. Most BPMS aren't.

Once again, the UTA is extremely simple to use and completely transparent to the business user. Its recommendations show up as a list of options with a probability ranking. User choice is again feedback for the training. Most of all, it does not require any intermediate process engineering steps to work. Learning and recommendations are completely in REAL-TIME!
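That loop, a probability-ranked list of options where the user's choice immediately becomes new training data, can be sketched like this. It is only an illustration of the described behavior, not the actual UTA implementation, and all context and action names are hypothetical:

```python
from collections import Counter


class UserTrainedRecommender:
    """Toy user-trained recommender: rank candidate actions by how
    often users chose them in a given context, and treat every new
    user choice as immediate training feedback."""

    def __init__(self):
        self.counts = {}  # context -> Counter of chosen actions

    def recommend(self, context):
        # Return (action, probability) pairs, highest probability first.
        chosen = self.counts.get(context)
        if not chosen:
            return []
        total = sum(chosen.values())
        return sorted(
            ((action, n / total) for action, n in chosen.items()),
            key=lambda x: -x[1],
        )

    def feedback(self, context, chosen_action):
        # The user's pick updates the model right away: no
        # intermediate process-engineering step is needed.
        self.counts.setdefault(context, Counter())[chosen_action] += 1


uta = UserTrainedRecommender()
uta.feedback("claim:missing_docs", "request_docs")
uta.feedback("claim:missing_docs", "request_docs")
uta.feedback("claim:missing_docs", "escalate")
print(uta.recommend("claim:missing_docs"))  # 'request_docs' ranked first
```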

I have posted some time ago a quite detailed discussion of the differences between the purely academic approach (i.e. ProM) and mine.

What is the key difference? I come from the perspective of processes emerging from scratch, whereas ProM looks at how to improve existing defined processes or works through activity logs. The Papyrus UTA will work with a completely empty process/case structure as well as with incredibly complex ones, and moreover it will reuse process knowledge from one process to another should it discover the same decision pattern. Thanks!

Clay,
We are finding that the more accurate description of what we do is real-time best practice guidance. We help our clients to forensically track down their best practices in GPS-like, step-by-step detail, then help all users to apply this in real time.
The results are transformational. We then enable all users to provide continuous feedback to keep refining the best practices. They end up with better than their previous best, for everyone.

Thanks for the update, David. Real-time best practice guidance is one aspect of Real-time Process Guidance. Ultimately, you (Panviva) are providing the user with real-time guidance on policy and procedure, outlining specific steps they need to follow. The key difference is that with Real-time Process Guidance (or what we're beginning to call "social process guidance"), the updates to recommended next steps would occur based on analytics (i.e., mining completed processes and activities to determine the next best course of action).