Alfresco’s interest in Activiti is as part of their open source enterprise content management suite: they don’t offer Activiti as a standalone commercial open source product, only bundled within their ECM. Activiti exists as an Apache-licensed open source project with about 1/3 of its main developers – likely representing more than 1/3 of the actual development effort – being Alfresco employees, making Alfresco the main project sponsor. Obviously, Alfresco’s document-centric interests are represented within the Activiti project, but that doesn’t make it unsuitable as a general-purpose BPMS; rather, Alfresco uses the BPM platform functionality for document flow and tasks, but doesn’t force content concepts into Activiti or require Alfresco in order to use Activiti. Activiti is continuing to develop functionality that has nothing to do with ECM, such as integration with MuleESB.

Activiti was one of the first BPMS platforms to execute BPMN 2.0 natively, and provides full support for the standard. It’s not a “zero-code” approach, but intended as a developer tool for adding high-performance, small-footprint BPM functionality to applications. You can read more about full Activiti functionality on the main project site and some nuances of usage on the blog of core developer Joram Barrez; in this post, I just want to cover the new functionality that I saw in this briefing.

Like all of the other BPMS out there, Activiti is jumping on the ad hoc collaborative task bandwagon, allowing any user to create a task on the fly, add participants to the task and transfer ownership of the task to another participant. The task definition can include a due date and priority, and have subtasks and attached content. Events for the task are shown in an activity feed sidebar, including an audit trail of actions such as adding people or content to the task, plus the ability to post a comment directly into the activity feed. The Activiti Explorer UI shows tasks that you create in the My Tasks tab of the Tasks page, although they do not appear in the Inbox tab unless (I think) the task is actually assigned to you. If someone includes you as a participant (“involves” you) in a task, then it shows in the Involved tab. This is pretty basic case management functionality, but provides quite a bit of utility, at least in part because of the ability to post directly to the activity feed: instead of having to build data structures specific to the task, you can just post any information in the feed as a running comments section. Mostly unconstrained, but at least it’s in a collaborative environment.

The other big new thing is a table-driven process definition as an alternative to the full BPMN modeler, providing a simpler modeling interface for business users to create models without having to know BPMN, or for fast process outlining. This allows you to create a process definition, then add any number of tasks, the order of which implies the sequence flow. Each task has a name, assignee, group (which I believe is a role rather than a direct assignment to a person) and description; you can also set the task to start concurrently with the previous task, which implies a parallel branch in the flow. Optionally, you can define the form that will be displayed for this task by adding a list of the properties to display, including name, type and whether each is mandatory; this causes an implicit definition of the process instance variables. The value of these properties can then be referenced in the description or other fields using a simple ${PropertyName} syntax. You can preview the BPMN diagram at any time, although you can’t edit in diagram mode. You can deploy and run the process in the Activiti Explorer environment; each task in the process will show up in the Queued tab of the Tasks page if not assigned, or in the Inbox tab if assigned to you. The same task interface as seen in the ad hoc task creation is shown at each step, with the addition of the properties fields if a form was defined for a task. The progress of the process instance can be viewed against the model diagram or in a tabular form. Indeed, for very simple processes without a lot of UI requirements, an entire process could be defined and deployed this way by a non-technical user within the Explorer. 
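Under the covers, a table-driven definition like this presumably compiles down to standard BPMN 2.0 XML with Activiti’s form extensions. Here’s a hand-sketched approximation of what one generated user task might look like – the activiti:formProperty element comes from Activiti’s extension namespace, the process and property names are my own invented example, and namespace declarations are omitted for brevity:

```xml
<process id="orderProcess" name="Order process" isExecutable="true">
  <startEvent id="start"/>
  <sequenceFlow sourceRef="start" targetRef="approveOrder"/>
  <userTask id="approveOrder" name="Approve order"
            activiti:candidateGroups="managers">
    <!-- ${orderId} and ${quantity} resolve against process instance variables -->
    <documentation>Approve order ${orderId} for ${quantity} iPads</documentation>
    <extensionElements>
      <!-- form properties implicitly define the instance variables -->
      <activiti:formProperty id="orderId" name="Order ID"
                             type="string" required="true"/>
      <activiti:formProperty id="quantity" name="Quantity"
                             type="long" required="true"/>
    </extensionElements>
  </userTask>
  <sequenceFlow sourceRef="approveOrder" targetRef="end"/>
  <endEvent id="end"/>
</process>
```

The ${PropertyName} references in the description map onto these implicitly-defined instance variables, which is why the table-driven editor can get away without a separate data modeling step.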
Typically, however, this will be used for business people to prototype a process or create a starting point; the model will then make a one-way trip into the Eclipse modeling environment (or, since it can be exported in BPMN, into any other BPMN-compliant tool) for the developers to complete the process application. Once the simple table-driven process is moved over to the Eclipse-based Activiti Modeler, it can be enhanced with BPMN attributes that can’t be represented in the table-driven definition, such as events and subprocesses.

There were a few other things, such as enhanced process definition and instance management functions, including the ability to suspend a process definition (and optionally, all instances based on that definition) either immediately or at a scheduled time in the future; some end-user reporting with configurable parameters; and integration of SMS notification functionality that sent me a text telling me that my order for two iPads had shipped. Sadly, the iPads never arrived.

We finished with a brief description of their roadmap for the future:

Hybrid workflow that allows on-premise and cloud deployment (including instant deployment on CloudBees) for different tasks in the same flow, solving the issue of exposing part of a process to external participants without putting the entire process off-premise.

Polyglot BPM, allowing Activiti to be called from other (non-Java) languages via an expanded REST API and language-specific libraries for Ruby, C#, JavaScript and others.
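Activiti already exposes a REST API today, which is what the polyglot libraries would presumably wrap. As a sketch, here’s how a non-Java client might start a process instance over REST – the endpoint path and JSON body shape follow the Activiti REST documentation as I understand it, while the host, port and the "orderProcess" definition key are invented for illustration:

```python
import json
from urllib import request

# Base URL of a (hypothetical) local Activiti REST deployment.
BASE_URL = "http://localhost:8080/activiti-rest/service"

def build_start_request(definition_key, variables):
    """Build (but do not send) the JSON body for starting a process instance."""
    payload = {
        "processDefinitionKey": definition_key,
        "variables": [{"name": k, "value": v} for k, v in variables.items()],
    }
    return json.dumps(payload)

def start_process(definition_key, variables):
    """POST the start request to the engine; needs a running Activiti server."""
    body = build_start_request(definition_key, variables).encode("utf-8")
    req = request.Request(
        BASE_URL + "/runtime/process-instances",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

print(build_start_request("orderProcess", {"quantity": 2}))
```

The same request could be issued from Ruby, C# or JavaScript just as easily, which is the whole point of the polyglot direction: the engine stays on the JVM, but callers don’t have to.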

It’s great to see Activiti continue to innovate after so much change (losing both the original product architect and their main partner) within a short period of time; it certainly speaks to their resiliency as an organization, as you would expect from a robust open source project.

I also talked with Scott Francis of BP3 about their new Activiti partnership; apparently the agreement was unrelated to the camunda departure, but definitely well-timed. I was curious about their decision to take on another BPM product, given their deep relationship with IBM (and formerly with Lombardi), but they see IBM BPM and Activiti as appealing to different markets due to organizational cultural choices. Certainly to begin with, most of their new Activiti customers will be existing Activiti customers looking for an enterprise support partner, just as many of their new IBM BPM customers are already IBM BPM customers; however, I’ve been in a couple of consulting engagements recently where organizations had both commercial and open source solutions under evaluation, so I’m anticipating a bit of channel conflict here. BP3 has no existing Activiti customers (or customers on any BPM platform other than IBM’s), and no significant open source contribution experience, but plans to contribute to the Activiti open source community, possibly with hybrid/HTML mobile front-ends, REST API architecture and other areas where they have some expertise from building add-ons to IBM BPM. Interestingly, they do not plan to build/certify WAS support for Activiti; although they didn’t see this as a big market, I’m wondering whether this also just cuts a bit too close to the IBM relationship.

Aside from the obvious potential for awkwardness in their IBM relationship, I see a couple of challenges for BP3: first, getting the people with the right skills to work on the Activiti projects. Since the IBM BPM skills are pretty hard to come by, they won’t be redeploying those people, so presumably have to train up other team members or make some new hires. The other challenge is around production support, which is not something that BP3 does a lot of now: typically, IBM would be the main production support for any IBM BPM installation even if BP3 was involved, although BP3 would support their own custom code and may act as triage for IBM’s support. With Activiti, they will have to decide whether they will offer full production support (and if not them, then who?) or just provide developer support during business hours.

At the end of 2012, I had a few hints that things at Alfresco’s Activiti BPM group were undergoing some amount of transition: Tom Baeyens, the original architect and developer of Activiti (now CEO of the Effektif cloud BPM startup announced last week), was no longer leading the Activiti project and had decided to leave Alfresco after less than three years; and camunda, one of the biggest Activiti contributors (besides Alfresco) as well as a major implementation consulting partner, was making noises that Activiti might be too tightly tied to Alfresco’s requirements for document-centric workflow rather than the more general BPM platform that Activiti started as. I’m not in a position to judge how Alfresco was controlling the direction and release cycle of Activiti, who was making the biggest contribution to the open source efforts, or what was said behind closed doors, but obviously things reached a breaking point, and this week camunda announced that they are forking a new open source project from Activiti, to be known as camunda BPM.

This is big news in the world of open source BPM. There are a few players already – Activiti, BonitaSoft, jBPM and ProcessMaker, to name a few – and it’s not clear that there’s enough demand for open source BPM software to warrant another entrant. Also, there have to be some hard feelings between the parties here, and this is a small community where you can’t really afford to make enemies, because you never know who you’re going to end up working with in years to come. This parting of the ways is described as “sad” by both camunda in their announcement post and by Joram Barrez (current Activiti lead core developer) in his post, and puts Activiti and camunda in direct competition for both existing Activiti users and future business. Signavio, whose process modeler is deeply integrated with camunda BPM, issued a press release stating that the camunda BPM fork will be good for Signavio customers, and including a nice quote from Tom Baeyens; keep in mind that Signavio just provided the funding for Baeyens’ new startup. It’s like the Peyton Place of BPM.

Leaving the personal (and personnel) aspects aside, camunda BPM is offering some significant additional capabilities beyond what is available in Activiti, mostly through open-sourcing their previously proprietary Activiti add-ons. I had a briefing a couple of weeks ago with Jakob Freund, camunda’s CEO, to get caught up on what they’re doing. camunda is about 20 people now, founded 4-1/2 years ago and completely self-funded. That makes them a bit small for launching an enterprise software product – including the implementation and support aspects – but also not driven to unreasonable growth since they have no external investors to please. Having once grown a consulting company to about twice that size without external funding, I can understand the advantages of maintaining the organic growth: control to pick the projects and products that you want to build, and to hand-pick a great team.

camunda BPM, like Activiti (and jBPM, for that matter), is not claiming to be a zero-code BPM suite – some would argue that even those claiming to be, aren’t – but a BPM engine and capabilities intended to be embedded within line-of-business enterprise applications. They see the zero-coding market as general tooling for non-strategic processes, likely served equally well or better by outsourcing or cloud solutions (Effektif, anyone?); instead, camunda targets situations where IT is a competitive differentiator, and BPM is just part of the functionality within a larger application. That doesn’t mean that there’s nothing for the non-technical business analyst here: BPMN is used as a bridge for business-IT alignment, and camunda is bringing their previously proprietary BPMN round-tripping capabilities into the new open source project. Their BPMN plugin for Eclipse provides an easy-to-use modeler for business analysts, or round-tripping with Signavio, Adonis and other modeling tools; camunda blogged back in June 2012 about how to integrate several different BPMN modelers with camunda BPM, although they have a definite preference for Signavio.

camunda BPM is a complete open source BPM stack under an Apache License (except for Eclipse, the framework for the designer/developer UI, which uses the Eclipse Public License). The Community (open source) edition will always be the most up-to-date edition – note that some commercial open source vendors relegate their community edition to being a version behind the commercial edition in order to drive revenue – with the Enterprise (commercial) edition lagging slightly to undergo further testing and integrations. The only capabilities available exclusively in the Enterprise edition are WebSphere Application Server (WAS) integration and Cockpit Pro, a monitoring/administration tool, although there is a Cockpit Light capability in the Community edition. You can see a Community-Enterprise feature comparison here, and a more complete list here. Unless you’re tied to WAS from the start, or need quite a bit of support, the Community edition is likely enough to get you up and running initially, allowing for an easier transition from open source to commercial.

However, the question is not really whether camunda has some great contributions to make to the Activiti code base (they do), but whether they can sustain and build an open source fork of Activiti. They have some good people internally to provide vision – Daniel Meyer for the core process engine architecture, Bernd Rücker for a technical consulting/product management view, Jakob Freund for the business aspects of BPM – and a development team experienced with the Activiti and camunda code bases. They have shown significant leadership in the Activiti open source community and development, so are likely capable of running a camunda BPM open source community, but need to make sure that they dedicate enough resources to it to keep it vital. There is a German camunda community already, but that’s not the same as an open source community, and also is only in German, so they have some work to do there.

And then there are the existing Activiti and camunda users. Existing camunda customers probably won’t be freaked out about the fork since the contributions important to them were being made by camunda anyway, but existing Activiti users (and prospects) aren’t just going to fall into camunda’s lap: they might be weighing the additional functionality against the bigger company, stable brand and existing community behind Activiti. Given some of the new UI features being rolled into Activiti from the Alfresco team, it’s fair to say that Alfresco will continue to innovate Activiti, and attempt to maintain their solid standing in the open source BPM market. There’s likely a small window for existing Activiti users to shift to camunda BPM if they want to: right now, the engine is identical and the migration will be trivial, but I expect that within six months, both sides will make enough changes to their respective projects that it will become a more significant effort. In other words, if you’re on Activiti or camunda now and are thinking of switching, do it now.

camunda could be ruffling a few feathers by declaring an open source fork rather than just rolling their proprietary offerings into the Activiti project; they might have been able to become a stronger influencer within the project by doing that, counteracting any (perceived) document-centric influence from Alfresco. Again, I’m not internal to either of the companies nor part of the Activiti open source community, so that’s just speculation.

Meanwhile, Alfresco remains officially silent on the whole business. Given that they had advance warning about this, that’s a pretty serious PR mistake.

I recently had my first briefing with BonitaSoft about their open source BPM product. Although the project has been going on for some time, with the first release in 2001, the company is only just over a year old; much of the development has been done as part of BPM projects at Bull. Their business model, like many open source companies, is to sell services, support and training around the software, while the software is available as a free download and supported by a broader community. They partner with a number of other open source companies – Alfresco for content management, SugarCRM for CRM, Jaspersoft for BI – in order to provide integrated functionality without having to build it themselves. They’ve obviously hit some critical mass point in terms of functionality and market, since their download numbers have increased significantly in the past year and have just hit a half million.

A French company, they have a strong European customer base, and a growing US customer base, mostly comprising medium and large customers. They’ve just announced the opening of two US offices, and the co-founder/CEO Miguel Valdés Faura is moving to the San Francisco area to run the company from there; that’s the second European company that I’ve heard of lately where the top executives are moving to the Bay area, indicating that the “work from anywhere” mantra doesn’t necessarily pan out in practice. They’ve hired Dave Cloyd away from open source content management company Nuxeo as a key person in building the US market; he was VP of sales at Staffware prior to the TIBCO acquisition, so knows both the open source and BPM sides.

Open source BPM solutions have been around for a while, but the challenges are the same as with any open source project: typically, it takes greater technical skills to get up and running with open source, especially if it doesn’t do everything that you need and has to be integrated with other (open source or not) products. In many cases, open source BPM provides the process engine embedded inside a larger solution created by a systems integrator or business process outsourcing firm; in other words, it’s more like a toolkit for adding process capabilities into another application or environment. BonitaSoft considers jBPM, Activiti and ProcessMaker to be in this “custom BPM development” camp, as opposed to the usual commercial players in the “standalone BPM suites” category; they see themselves as being able to play on both sides of that divide.

Taking a look (finally, after 35 minutes of PowerPoint) at a product demo, I saw their four main components of process modeling, process development, process execution, and process administration and monitoring.

The modeler is a desktop Eclipse-based application providing BPMN 2.0 modeling, including importing of BPMN models from other tools. The distinction between these tools is starting to blur, as all the vendors pick up the user interface tricks that make process modeling work better: auto-alignment, automatic connector creation, and tool tips suggesting the most likely next element to add. The distinguishing characteristics start to become how the non-standard modeling aspects are handled: data modeling and integration with other systems using proprietary connectors that go beyond the capabilities of a simple web services call, for example.

I like what they’ve done with some of the out-of-the-box connectors: the Sharepoint and Alfresco connectors allow you to browse and select a specific document repository event (such as check in a file) directly from within the process designer, and associate it with an activity in the process model. I saw a fairly comprehensive database connector that allowed for graphical query creation, and this connection can be used to transfer a data model from a database to the process model to build out the process instance data. There’s a wizard to create your own connectors, or browse the BonitaSoft community to find connectors created by others – a free marketplace for incremental functionality.

You can create a web form for a particular step in the process, which will auto-generate based on the defined data model, then allow new fields to be added based on external database calls, and reformatted in a graphical editor. Effectively, this capability allows a quick process-based application to be created with a minimum of code, just using the forms designer and connectors to databases and other systems.

Key performance indicators (KPIs) can be defined in the process modeler; these are effectively data objects that can be populated by any step of the process, then reported on via a BI engine such as the integrated Jaspersoft.

Although they describe their modeling as collaborative, it’s asynchronous collaboration: the model and associated forms are saved to the Bonita model repository, where they are properly versioned and can be checked out by another user.

The end-user experience uses an inbox metaphor in a portal, with the forms displayed as the user interacts with the process. Individual process instances (or entire processes) can be tagged with private labels by a user – similar to labels applied to conversations in Gmail – and categories can be applied to processes so that every instance of that process has the same category, visible to all users. Love the instance and process tagging: this is a capability that I’ve been predicting for years, and just starting to see it emerge.

I was surprised by the lack of flexibility in runtime environment: the only change that a user can make to a process at runtime is to reassign a task, although they are working on other features to handle more dynamic situations.

The big product announcements from last month, with the release of version 5.3, included process simulation and support for cloud environments with multi-tenancy and REST APIs. However, by this time we were getting to the end of our time and I didn’t get all the details; that will have to wait for another day, or you can check out the brief videos on their site.

An independently run and branded open source project, Activiti will operate separately from the Alfresco open source ECM system. Activiti will be built from the ground up to be a lightweight, embeddable BPM engine, but also designed to operate in scalable cloud environments. Activiti will be liberally licensed under Apache License 2.0 to encourage widespread usage and adoption of the Activiti BPM engine and BPMN 2.0, which is being finalized as a standard by the OMG.

John Newton, CTO of Alfresco, and Tom Baeyens, in his new role as Chief Architect of BPM, briefed me last week on Activiti. The project is led by Alfresco and includes SpringSource, Signavio and Camunda; Alfresco’s motivation was to have a more liberally-licensed default process engine, although they will continue to support jBPM. Alfresco will build a business around Activiti only for content-centric applications by tightly integrating it with their ECM, leaving other applications of BPM to other companies. I’ll be very interested to see the extent of their content-process integration, and if it includes triggering of process events based on document state changes as well as links from processes into the content repository.

They believe that BPEL will be replaced by BPMN for most general-purpose BPM applications, with BPEL being used only for pure service orchestration. Although that’s a technically virtuous viewpoint that I can understand, there’s already a lot of commitment to BPEL by some major vendors, so I don’t expect that it’s going to go away any time soon. Although they are only supporting a subset of the BPMN 2.0 standard now – which could be said of any of the process modelers out there, since the standard is vast – they are committed to supporting the full standard, including execution semantics and the interchange format.

Activiti includes a modeler, a process engine, an end-user application for participating in processes, and an administration console. Not surprisingly, we spent quite a bit of time talking about Activiti Modeler, which is really a branded version of Signavio’s browser-based BPMN 2.0 process modeler. This uses AJAX in a browser to provide similar functionality to an Eclipse-based process modeler, but without the desktop installation hassles and the geeky window dressing. It is possible to create a fully executable process model in the Activiti Modeler, although in most cases a developer will add the technical underpinnings, likely in a more developer-oriented environment rather than the Modeler. Signavio includes a file-based model repository, which has been customized for inclusion in the Activiti Modeler; it would be great to see if they can do something a bit more robust to manage the process models, especially for cloud deployments. They are including support for certain proprietary scripting instead of using Java code for some interfaces, such as their Alfresco interface.

Activiti Explorer provides a basic end-user application for managing task lists, working on tasks, and starting new processes. Without a demo, it was hard to see much of the functionality, although it appears to have support for private task lists as well as shared lists of unassigned tasks; a typical paradigm for managing tasks is to allow someone to claim an unassigned task from the shared list, thereby moving it to their personal list.

The Activiti Engine, which is the underlying process execution engine, is packaged as a JAR file with small classes that can be embedded within other applications, as is done in Alfresco for content management workflows. It can be easily deployed in the cloud, allowing for cross-enterprise processes. The only thing that I saw of Activiti Probe, the technical administration console, was its view on the underlying database tables, although it will have a number of other capabilities to manage the process engine as it develops. Not surprisingly, they don’t have all the process engine functionality available yet, but have been focusing on stabilizing the API in order to allow other companies to start working with Activiti before the GA release.

I also saw a mockup of Activiti Cycle, a design-time collaboration tool that includes views (but not editing) of process models, related documents from Alfresco, and discussion topics. Activiti Cycle can show multiple models and establish traceability between them, since their expectation is that an analyst and a developer would have different versions of the model. This is an important point: models are manually forward-engineered from the analyst’s to the developer’s version, and there are no inherent automated updates when the model changes, although there are alerts to notify when other versions of the same model are updated. This assumption that there can be no fully shared model between analyst and developer has formed a part of a long-standing discussion between Tom and me since before we met; although I believe that a shared model provides the best possible technical solution, it’s not so easy for a non-technical analyst to understand BPMN models once you get past the basic subset of elements. Activiti Cycle may not be in GA until after the other components, although they are working on it concurrently.

The screen shots that I saw looked nice, although I haven’t seen a demo yet; Tom gave credit to Alfresco’s UI designers for raising this above just another developer’s BPM tool into something that could be used by non-developers without a lot of customization. I’m looking forward to a demo next month, and seeing how this progresses to the November release and beyond.

I was invited to give a presentation at Ignite! Toronto this week, and decided to discuss how I’ve been using social media – Twitter, Flickr, Facebook, blogging – and some integration technologies including RSS and Python scripting to promote a new farmers’ market in my community. I’m on the local volunteer committee that acts as the marketing team for the market. Here’s the presentation; it’s not too clear on the video:

If you’re not familiar with Ignite, it’s a type of speed presentation: 20 slides, 5 minutes, and your slides auto-advance every 15 seconds. For a marathon presenter like me, keeping it down to 5 minutes is a serious challenge, but this was a lot of fun.

For a technology view, check out slide 17 in the slide deck, which shows a sort of context diagram of the components involved. Twitter is central to this “market message delivery framework”, displaying content from a number of sources on the market Twitter account:

I manually tweet when I see something of interest related to the market or food. Also, I monitor and retweet some of our followers, and reply to anyone asking a question via Twitter.

When I publish a post on my personal blog that is in the category “market”, Twitterfeed picks it up through the RSS feed and posts the title and link on Twitter. These are posted to both the market account and my own Twitter account, so you may have seen them if you’re following me there.

Each week, I save up a list of interesting links and other tweet-worthy info, and put them in a text file. My talented other half wrote a Python script that tweets one message from that file each hour for the two days prior to each Saturday market day.
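That script is his, not mine, but the logic described is simple enough to sketch. Everything here – the file format, the function names, the stubbed-out posting call – is my own reconstruction; a real version would call a Twitter API library and be run hourly from cron:

```python
from datetime import datetime

# Sketch of the queued-tweets script described above: one message per hour,
# but only during the two days before a Saturday market day.
SATURDAY = 5  # Monday is 0 in Python's datetime.weekday()

def in_tweet_window(now):
    """True on Thursday or Friday, the two days before market Saturday."""
    return now.weekday() in (SATURDAY - 2, SATURDAY - 1)

def next_message(path, sent_count):
    """Return the next unsent line from the queued-messages file, or None."""
    with open(path) as f:
        lines = [line.strip() for line in f if line.strip()]
    return lines[sent_count] if sent_count < len(lines) else None

def run_once(now, path, sent_count, post=print):
    """One hourly run: post the next queued message if we're in the window."""
    if not in_tweet_window(now):
        return sent_count
    msg = next_message(path, sent_count)
    if msg:
        post(msg)  # stub; replace with a real Twitter API call
        return sent_count + 1
    return sent_count
```

The nice property of driving it from a plain text file is that the weekly editorial work is just editing that file – no tooling required beyond a text editor.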

I connected my Flickr account with Twitter, and can either manually tweet a link to a photo directly from Flickr, or email a photo from my iPhone to a private Flickr email address that will cause the link to be tweeted. I could have used Twitpic for the latter functionality, but Flickr gives me better control over my photo archive.

The whole exercise has been a great case study on using social media for community projects with no budget, using some small bits of technology to tie things together so that it doesn’t take much of my time now that it’s up and running. I’d be doing most of the activities anyway: taking pictures of the market, cooking and blogging about it, and reading articles on local food and markets online. This just takes all of that and pushes it out to the market’s online community with very little additional effort on my part.

All week, the local tech community has been buzzing around the news that Bell Canada is throttling P2P traffic — specifically the widely-used BitTorrent protocol — for not only their direct Sympatico subscribers, but also for anyone who buys their supposedly unlimited DSL from a Sympatico reseller, such as TekSavvy. For those of you new to the traffic shaping/net neutrality wars that have been going on in North America over the past months, here’s why throttling P2P traffic isn’t good news:

Bell Canada (and our only other "last mile" carrier, Rogers Cable) are violating their role as common carriers: they’re supposed to deliver the data, regardless of what it is, subject to our individual bandwidth and download caps. As long as I’m not getting a higher bandwidth than I was promised, and don’t go over my monthly volume cap, I should be able to download whatever I want, whenever I want, because the contract that I signed with Bell implied that would be the case. If they can’t deliver that bandwidth, then they shouldn’t be selling it; furthermore, they should have taken the money made by all these years of overselling the same bandwidth and invested in improving the now-outdated infrastructure so that we wouldn’t have these problems now.

The carriers, Bell and Rogers, like to position this as allowing equal access to everyone instead of allowing those evil file-sharing types to hog the bandwidth, but they don’t exactly have altruistic motives: both of them sell services (cable and satellite TV) that compete with downloaded video, and they want you paying $40+ to them each month to watch the TV that they choose rather than be able to select from a wide variety of alternative — and legal — video available on the internet. Furthermore, Rogers wants to use the same bandwidth that you would use for free video downloads to download their pay-per-view movies instead.

Bell and Rogers have targeted the BitTorrent protocol for throttling even though it has many legal uses. Last week, CBC made history by making a TV program available, DRM-free, for download by BitTorrent. This allowed anyone in the world with broadband access to have access to Canadian programming that might not be available on their local TV stations. By throttling BitTorrent, however, Bell and Rogers are effectively blocking access to that Canadian content within Canada, forcing people to watch it on Bell or Rogers’ TV services. Personally, I use BitTorrent not just for that CBC show, but to download new releases of Ubuntu, and other large open source downloads where the source site provides BitTorrent as an option in order to reduce the bandwidth demands on their servers.

What this all comes down to is a violation of net neutrality: Bell and Rogers are deciding which traffic on the network gets higher priority. They’re doing it now because they’ve failed to make the necessary investments in infrastructure over the years that would allow them to actually deliver what they sell, and coincidentally they choose to throttle traffic that competes with their other business areas.

Suffice it to say that Bell Canada didn’t have a good week because of this — it was all over the news, the DSL resellers are talking about suing, and even the unions are in on the action. Enter Jason Laszlo, a spokesperson (apparently associate director of media relations) for Bell Canada, who was quoted extensively on this issue in the press:

"Regarding customers like Mount Sinai [a major Toronto hospital that was used as an example of how legal file sharing might be used for CAT scans], Laszlo said it’s their own fault for using a notorious application like file-sharing. ‘We’re blind to the content flowing through our pipes,’ he said. ‘Our goal is to ensure maximum efficiency for everyone.’" — Digital Journal, March 25th. ["Notorious"? Oh, puh-leeze. And if they were blind to the content, then they wouldn’t be throttling file sharing.]

"P2P programs are only employed by a small percentage of internet users, but they tend to make use of all the available bandwidth, Laszlo said. Reduced P2P use should provide a better balance between P2P and other users at peak times, he said. ‘I feel we’re on the side of good,’ he said." — CBC News, March 25th. [Throttling P2P is a good way to make sure that it is only ever employed by a small percentage of users, which is exactly what Bell wants.]

"Bell spokesman Jason Laszlo on Friday reiterated the company’s position — that it was shaping traffic in order to prevent a small portion of bandwidth hogs from slowing speeds down for all customers." — CBC News, March 28th.

Yes, those last two are real; his Facebook profile was posted on a broadband discussion forum yesterday afternoon (you can Digg the story here); he obviously was unaware of the impact of no privacy settings, since I was able to access his profile immediately after that even though we’re not directly connected and have no mutual friends.

So what’s the lesson to be learned from this mess? The public is now aware and mobilized on the impact of traffic shaping on their daily lives, even if they haven’t yet heard the term net neutrality. To paraphrase Peter Finch’s character from Network, we’re mad as hell and we’re not going to take this anymore.

Oh, yeah, lesson #2: don’t entrust media relations for a sensitive subject to an inexperienced junior who doesn’t know well enough not to post inappropriate comments to his publicly viewable Facebook profile.

I skipped this morning’s taxonomy/folksonomy smackdown featuring Seth Earley and Zach Wahl — I just wasn’t up for that much testosterone this early in the morning — and went to the best practices track to hear about how AvenueA|Razorfish implemented their internal wiki. I’m speaking next, so if this session isn’t sufficiently riveting, I’ll duck out early to review my notes.

Donna Jensen, their senior technical architect, took us through how they use a wiki as an intranet portal. She first spent some time defining wikis and discussing their benefits and challenges, particularly when used inside the firewall. She made a crack about how Ph.D. dissertations will be written on many of these points, which isn’t that far from the truth: things like encouraging active versus passive behaviour. And, although she claims that they’re breaking down behaviours tied to organizational silos, she admitted that no one can comment on the CEO’s blog although all others are open territory. At some point, even the top level executives have to learn that if they’re going to commit to Enterprise 2.0, it has to permeate all levels of the organization: no one should be exempt.

The platform that they used was MediaWiki (the software used to create Wikipedia) on a standard LAMP stack, giving them a completely open source base. They also use WordPress for internal blogs, maintaining the commitment to open source. Although they did do some customization, particularly in terms of creating templates such as project pages, they took advantage of many freely-available third-party extensions for functionality such as tag clouds, calendaring and skins. They use Active Directory for security, and allow only internal or VPN access: no external access or applications.

AA|RF put in the wiki with only a technical VP and a part-time intern, pretty much out of the box, and found that it wasn’t adopted. They did another cut with Jensen as technical architect (part-time) and a couple more interns, and arrived at their current state: no project management oversight, no content management system, and no creative designer, with the whole thing implemented in about 2,000 person-hours. As a web technology consulting company (although with little Web 2.0 experience), they can get away with this, but you may not want to try this one at home. They used agile scheduling, and eventually brought in some rigorous QA. Jensen feels that their only real mistake was not bringing in a creative designer earlier, since the wiki is apparently pretty technical looking. They haven’t yet put in a WYSIWYG editor, so everyone still needs to work in WikiText, which is likely a bit of a barrier for the non-techies.

Jensen talked about a few byproducts of the wiki adoption, such as the incremental upgrade model that tends to come with open source or SaaS products, rather than the monolithic (and often disruptive) upgrades of proprietary software. She also talked about how many IT departments won’t use open source because it leaves them with no vendor who is contractually compelled to help them — in other words, they have to take on the responsibility of finding a solution themselves. Another byproduct is the shift towards open source, and the savings that they can expect by replacing some of their current software platforms and their hefty maintenance fees with open source alternatives.

In their wiki environment, any kind of file can be uploaded, all pages (except the home page) are editable by everyone, and any content except client-confidential information can reside there. I really have to wonder how this would work if they upload a massive number of files: at what point do you need to add a content management system, and how painful is it going to be to do that later? Their wiki home page shows del.icio.us and Flickr feeds, internal blog feeds, Digg items and recently uploaded documents. One audience member asked if that meant that if anyone in the company tagged a public web page, it would be included on the home page; there was general shock around the room, and wonderment that you could do this without having some centralized body approving such content before it was surfaced to the rest of the company. I tried not to laugh out loud; is this such a radical idea? Obviously, the last year of being immersed in Web 2.0 has changed me, and I start wondering which of these things I would adopt if I were still running a 40-person consulting company. As the session goes on, the same question about how user tagging on the internet drives their intranet home page keeps coming up from the audience over and over.

What I found interesting (and I’m probably blowing their whole game by publishing this), is that they’re using public Web 2.0 tools to feed part of the home page: if something is tagged AARF on del.icio.us or Flickr, it shows up there. For Digg, however, you have to be a friend of AARF to have your items show up. Jensen said that she’ll be changing the AARF tag to something unguessable, although if you know how to track items and users through del.icio.us or Flickr, it wouldn’t be that difficult to figure out their new tag. She also said that they had run some analytics on whether these tags gave away any secrets about what they’re currently researching, and found that the mix is too varied for any patterns to emerge.
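The tag-driven home page Jensen described boils down to a simple pattern: poll the public RSS feed for a shared tag, and render the items as links on the intranet. Here’s a minimal sketch in Python of the feed-parsing half of that (the feed URL and tag are hypothetical; AA|RF’s actual implementation lives in PHP inside MediaWiki, and these details weren’t in the talk):

```python
import urllib.request
import xml.etree.ElementTree as ET


def parse_feed_items(rss_xml: str):
    """Extract (title, link) pairs from an RSS 2.0 feed document."""
    root = ET.fromstring(rss_xml)
    return [
        (item.findtext("title"), item.findtext("link"))
        for item in root.iter("item")
    ]


def fetch_tag_items(tag: str):
    """Fetch the public feed for a shared tag.

    The endpoint below is illustrative only; del.icio.us and Flickr each
    exposed their own per-tag feed URLs.
    """
    url = f"https://example.com/rss/tag/{tag}"  # hypothetical feed endpoint
    with urllib.request.urlopen(url) as resp:
        return parse_feed_items(resp.read().decode("utf-8"))
```

The side effect Jensen mentioned follows directly from this design: anything anyone tags with the company tag shows up, unmoderated, which is why an unguessable tag is the only real privacy control.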

The wiki is a portal in a very real sense, which was a bit of a revelation to me: I didn’t previously think of wikis as portals. Everyone has their own people page which they can format and populate as they wish, and which can include their recent file uploads and blog postings. On any page, adding a “portlet” is just a matter of copying and pasting a snippet of PHP code; the same cut-and-paste approach works for snippets such as the <embed> code that YouTube provides for every video on its site.

They’ve done some cool things with blogs as well, such as having mailing lists corresponding to blogs, and sending an email to that mailing list will auto-post it as a blog entry on the corresponding blog.
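The mail-to-blog bridge is conceptually simple: each blog gets a list address, the message subject becomes the post title, and the body becomes the post content. A hedged sketch of that mapping step in Python (the address-to-blog table and the function name are my own illustration; the talk didn’t describe the actual mechanism):

```python
from email import message_from_string

# Hypothetical mapping of mailing-list addresses to internal blog IDs.
LIST_TO_BLOG = {
    "design-blog@example.com": "design",
    "tech-blog@example.com": "tech",
}


def email_to_post(raw_message: str):
    """Map an inbound list email to a (blog_id, title, body) tuple.

    Returns None when the message isn't addressed to a known blog list.
    """
    msg = message_from_string(raw_message)
    blog_id = LIST_TO_BLOG.get(msg["To"])
    if blog_id is None:
        return None  # not a blog list address; ignore the message
    return (blog_id, msg["Subject"], msg.get_payload().strip())
```

From there, the actual posting could go through something like WordPress’s XML-RPC interface, though whether that’s what AA|RF used wasn’t mentioned.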

Jensen had some great ideas for wiki adoption, often centred around “wikivangelists” getting out there and helping people. I especially like the idea of the “days of wine and wikis” events. And they’re getting some great adoption rates.

I had to leave just before the end: she was running 7 minutes overtime and I had only 15 minutes between sessions to get to my own room to set up. It was hard to tear myself away, however; I found both Jensen’s presentation and the audience feedback to be riveting.

Ryan Herd, who heads the BPM centre of competence within RMB Private Bank, was up next to talk about the analysis that they did on open source BPM alternatives. Funny that the South Africans, like us understated Canadians, use the term “centre of competence” as opposed to the very American “center of excellence”.

Don’t tell Ismael Ghalimi, but Herd thinks that JBoss’s jBPM is the only open source BPM alternative; it was the only one that they evaluated, along with a number of proprietary solutions including TIBCO. Given that he’s here speaking at this conference, you can guess which one they picked.

Their BPM project started with some strategic business objectives:

- operational efficiency
- improved client service
- greater business process agility

and some technology requirements:

- a platform to define, improve and automate business processes
- real-time and historical process instance statistics
- single view of a client and their related activities

They found that they needed to focus on three things:

- Process: dynamic quality verification, exception handling that can step outside the defined process, and a focus on the end-to-end process.
- People: have their people be obsessed with the client, develop an end-to-end process culture in order to address SLAs, and create full-function teams rather than an assembly-line process.
- Systems: a single processing front-end, a reusable business object layer and centralized work management.

Next, they started looking at vendors, and for whatever reasons, open source was considered in the mix: quite forward-thinking for a bank. In addition to TIBCO and jBPM, they considered DST’s AWD, IBM’s BPM, eiStream (now Global 360) and K2: a month and a half to review all of the products, then another month and a half doing a more focussed comparison of TIBCO and jBPM.

For process design, jBPM has only a non-visual programmer-centric environment, and has support for BPEL but not (obviously, since it’s not visual) BPMN. It does allow modelling freedom, but that can be a problem with enforcing internal standards. It also has no process simulation. TIBCO, on the other hand, has a visual process modelling environment that supports BPMN, has a near zero-code process design and provides simulation. Point: TIBCO.

On the integration side, jBPM has no graphical application integration environment, although it has useful integration objects and methods and excellent component-based design. Adapters are available but not easily reused, and it has no out-of-the-box communication or integration facilities. TIBCO has a graphical front-end for application integration, and lots of adapters and integration facilities. Point: TIBCO.

On the UI side, jBPM has only a rudimentary web-based end user environment, whereas TIBCO has the full GI arsenal at their disposal. Point: TIBCO.

Overall, they found that the costs would be about the same (because of the greater jBPM customization requirement), but a much longer time to deploy with jBPM, which led them to choose TIBCO.

Given what they found, I find it amazing that they spent three months looking at jBPM, since jBPM is, in its raw form, a developer tool whereas TIBCO spans a broader range of analyst and developer functionality. The results as presented are so biased in favour of TIBCO that it should have been obvious long before any formal evaluation was done that jBPM wasn’t suited for their particular purposes and should not have made their short list; likely, open source was someone’s pet idea so was thrown into the mix on a lark. Possibly an open source BPM solution like Intalio, which wasn’t available as open source at the time of their evaluation, would have been a much better fit for their needs if they were really dedicated to open source ideals. I’m pretty sure that anyone in the room who had not considered open source in the past would run screaming away from it in the future.

Getting past the blatant TIBCO plug masquerading as a product comparison, Herd went on to show the architecture of their solution, which uses a large number of underlying services managed by a messaging layer to interface with the BPM layer — a fairly standard configuration. They expect to go live later this year.

Jason Maynard of Credit Suisse moderated a panel on investment opportunities in the new software industry, which included Bill Burnham of Inductive Capital, Scott Russell (who was with two different venture capital firms but doesn’t appear to be with one at this time, although his title is listed as “venture capitalist”), and Ann Winblad of Hummer Winblad Venture Partners.

This was more of an open Q&A between the moderator and the panel with no presentation by each of them, so again, difficult to blog about since the conversation wandered around and there were no visual aids.

Winblad made a comment early on about how content management and predictive analytics are all part of the collaboration infrastructure; I think that her point is that there’s growth potential in both of those areas as Web 2.0 and Enterprise 2.0 applications mature.

There was a lengthy discussion about open source, how it generates revenue and whether it’s worth investing in; Burnham and Russell are against investing in open source, although Winblad is quite bullish on it but believes that you can’t just lump all open source opportunities together. Like any other market sector, there are going to be winners and losers here. They all seem to agree, however, that many startups are benefiting from open source components even though they are not offering an open source solution themselves, and that there are great advantages to be had by bootstrapping startup development using open source. So although they might not invest in open source, they’d certainly invest in a startup that used open source to accelerate their development process and reduce development costs.

Russell feels that there are a number of great opportunities in companies where the value of the company is based on content or knowledge rather than the value of their software.

SaaS startups create a whole new wrinkle in venture funding: working capital management is much trickier due to the delay in revenue recognition, since payments tend to trickle in rather than being paid up front, even though the SaaS company needs to invest in infrastructure up front. Of course, I’m seeing some SaaS companies that are using hosted infrastructure rather than buying their own; Winblad discussed these sorts of rented environments, and other ways to reduce startup costs such as using virtualization to create different testing environments. There are still a lot of the same old problems, however, such as sales models. She advises keeping low to the ground, getting something out to a customer in less than a year, and getting a partner to help bring the product to market in less than two years. As she put it, frugality counts; the days of spending megabucks on unnecessary expenses went away in 2000 when the first bubble burst, and VCs are understandably nervous about investing in startups that exhibit that same sort of profligate spending.

Maynard challenged them each to name one public company to invest in for the next five years, and why:

Russell: China and other emerging markets require banking and other financial data, which companies like Reuters and Bloomberg (more favoured) will be able to serve. He later made comments about how there are plenty of opportunities in niche markets for companies that own and provide data/information rather than software.

Burnham: mapping/GPS software like Tele Atlas, that have both valuable data and good software. He would not invest in the existing middleware market, and specifically suggested shorting TIBCO and BEA (unless they are bought by HP) — the two companies whose user conferences that I’m attending this week and next.

Winblad: although she focusses on private rather than public investments, she thinks Amazon is a good bet since they are expanding their range of services to serve bigger markets, and have a huge amount of data about their customers. She thinks that Bezos has a good vision of where to take the company. She recommends shorting companies like CA, because they’re in the old data, infrastructure and services business.

Audience questions following that discussion focussed a lot on asking the VCs’ opinions on various public companies, such as Yahoo. Burnham feels that Yahoo is now in the entertainment industry, not the software industry, so is not a real competitor to Google. He feels that Google versus Microsoft is the most interesting battle to come. Russell thinks that Yahoo is a keeper, nonetheless.

Questions about investments in mobile produced a pretty fuzzy answer: at some point, someone will get the interface right, and it will be a huge success; it’s very hard for startups to get involved, however, since it requires long negotiations with the big providers.

Burnham had some interesting comments about investing in the consumer versus the business space, and how the metrics are completely different because marketing, distribution and other factors differ so much. Winblad added that it’s very difficult to build a consumer destination site now, like MySpace or YouTube. Not only are they getting into a crowded market, but many of the startups in this area have no idea how to answer basic questions about the details of an advertising revenue model, for example.

Burnham had a great comment about what type of Web 2.0 companies not to invest in: triple-A’s, that is, AdSense, AJAX and arrogance.

Winblad feels that there’s still a lot of the virtualization story to unfold, since it is seriously changing the value chain in data centres. Although VMware has become the big success story in this market, there are a number of other niches that have plenty of room for new players. She also thinks that companies providing specialized analytics — her example was basically about improving financial services sales by analyzing what worked in the past — can provide a great deal of revenue enhancement for their customers. As a final point on that theme, Maynard suggested checking out Swivel, which provides some cool data mashups.

First up after lunch is a panel on the role of open source in service management, moderated by Martin Griss of CMU West, and including Kim Polese of SpikeSource, and Jim Herbsleb and Tony Wasserman of CMU West.

Polese is included in the panel because her company is focussed on creating new business models for packaging and supporting open source software, whereas the other two are profs involved in open source research and projects.

The focus of the session is on how open source is increasingly being used to quickly and inexpensively create applications, both by established companies and startups: think of the number of web-based applications based on Apache and MySQL, for example. In many of these cases, a dilemma is created by the lack of traditional support models for open source components — that’s certainly an issue with the acceptance of open source for internal use within many organizations — so new models are emerging for development, distribution and support of open source.

Open source is helping to facilitate unbundling and modularization of software components: it’s very common to see open source components from multiple projects integrated with both commercial software components and custom components to create a complete application.

A question from the audience asked if there is a sense of misguided optimism about the usefulness of open source; Polese pointed out in response that open source projects that aren’t useful end up dying on the vine, so there’s some amount of self-selection that tends to promote successful open source components and suppress the less successful ones through market acceptance.

As I mentioned during the Brainstorm BPM conference a few weeks back, it’s very difficult to blog about a panel — much less structure than a regular presentation, so the post tends to be even more disjointed than usual. With luck, you’ll still get some of the flavour of the panel.