Welcome to the life of a data warehousing (DW) industry analyst. I’m often asked by Information and Knowledge Management (I&KM) professionals to address the perennial issue of which commercial DW solution is fastest or most scalable. Vendors ask me too, of course, in the process of them attempting to suss out rivals’ limitations and identify their own competitive advantages.

It’s always difficult for me to provide I&KM pros and their vendors with simple answers to such queries. Benchmarking is the blackest of black arts in the DW arena. It’s intensely sensitive to myriad variables, many of which may not be entirely transparent to all parties involved in the evaluation. It’s intensely political, because the answer to that question can influence millions of dollars of investments in DW solutions. And it’s a slippery slope of overqualified assertions that may leave no one confident that they’re making the right decisions. Yes, I’m as geeky as the next analyst, but even I feel queasy when a sentence coming out of my mouth runs on with an unending string of conditional clauses.

If we industry analysts offer any value-add in the DW arena’s commentary cloud, it’s that we can at least clarify the complexities. Here is how I frame the benchmarking issues that should drive I&KM pros’ discussions with DW vendors:

• Vendor DW performance-boost claims (10x, 20x, 30x, 40x, 50x, etc.) are extremely sensitive to myriad implementation factors. No two DW vendors provide performance numbers that are based on the same precise configuration. Also, vendors vary so widely in their DW architectural approaches that each vendor can claim that no rival could provide a configuration comparable to its own. For the purpose of comparing vendor scalability for the recently completed Forrester Wave on Enterprise Data Warehousing Platforms (to be published imminently), I broke out the approaches into several broad implementation profiles. Each of those profiles (which you’ll have to wait for the published Wave to see) may be implemented in myriad ways by vendors and users. And each specific configuration of hardware, software, and network interconnect of each of those profiles may be optimized to run specific workloads very efficiently--and be very suboptimal for others.

• Vendor DW apples-to-apples benchmarks depend on comparing configurations that are processing comparable workloads. No two DW vendors, it seems, base their benchmarks on the same specific set of query and loading tests. Also, no two vendors’ benchmarks incorporate the exact same set of parameters--in other words, the same query characteristics, same input record counts, same database sizes, same table sizes, same return-set sizes, same number of columns selected, same frequency distribution of values per column, same number of table joins, same source-table indexing method, same mixture of relational data and flat/text files in loads from source, same mixture of ad-hoc vs. predictable queries, same use of materialized views and client caches, and so forth.
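To make the apples-to-apples problem concrete, here is a minimal sketch of the kind of parameter-by-parameter comparison an evaluator would need before treating two vendor benchmarks as comparable. The field names and figures are invented for illustration--the full parameter list above is much longer:

```python
from dataclasses import dataclass, asdict

@dataclass
class BenchmarkConfig:
    """A few of the many parameters on which vendor benchmarks can diverge."""
    input_record_count: int
    database_size_tb: float
    table_join_count: int
    adhoc_query_pct: int          # share of ad-hoc vs. predictable queries
    uses_materialized_views: bool

def comparability_gaps(a: BenchmarkConfig, b: BenchmarkConfig) -> list:
    """Return the parameters on which two vendors' benchmark setups differ."""
    return [k for k, v in asdict(a).items() if asdict(b)[k] != v]

vendor_a = BenchmarkConfig(1_000_000_000, 10.0, 6, 50, True)
vendor_b = BenchmarkConfig(1_000_000_000, 10.0, 3, 80, False)
print(comparability_gaps(vendor_a, vendor_b))
```

Any non-empty result means the two numbers were not produced under the same conditions--and in practice, as noted above, the list is never empty.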

• Vendor DW benchmark comparisons should cover the full range of performance criteria that actually matter in DW and BI deployments. No two DW vendors report benchmarks on the full range of performance metrics relevant to users. Most offer basic metrics on query and load performance. But they often fail to include any measurements of other important DW performance criteria, such as concurrent access, concurrent query, continuous loading, data replication, and backup and restore. In addition, they often fail to provide any benchmarks that address various mixed workloads of diverse query, ETL, in-database analytics, and other jobs that execute in the DW.

• Different vendors’ DW benchmarks should use the same metrics for each criterion. Unfortunately, no two vendors in the DW market use the same benchmarking framework or metrics. Some report numbers framed in proprietary benchmarking frameworks that may be maddeningly opaque--and impossible to compare directly with competitors. Some report TPC/H, but often only when it puts them in a favorable light, whereas others avoid that benchmark on principle (with merit: it barely addresses the full range of transactions managed by a real-live DW). Others report “TPC/H-like” queries (whatever that means). Still others publish no benchmarks at all, as if they were trade secrets and not something that I&KM pros absolutely need to know when evaluating commercial alternatives. Sadly, most DW vendors tend to make vague assertions about “linear scalability,” “10-200x performance advantage [against the competition],” and “[x number of] queries per [hour/minute/sec] in [lab, customer production, or proof of concept].” Imagine sorting through these assertions for a living--which is what I do constantly.
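At a minimum, the throughput claims can be put on a common axis. A small sketch (the vendor names and numbers are made up, not any real vendor’s figures) that normalizes the “[x] queries per [hour/minute/sec]” assertions to queries per hour:

```python
# Seconds in each reporting interval vendors commonly quote.
SECONDS_PER = {"sec": 1, "minute": 60, "hour": 3600}

def queries_per_hour(count: float, unit: str) -> float:
    """Normalize a queries-per-interval claim to queries per hour."""
    return count * (3600 / SECONDS_PER[unit])

# Hypothetical claims, for illustration only.
claims = [("Vendor A", 12, "sec"), ("Vendor B", 500, "minute"), ("Vendor C", 25_000, "hour")]
for vendor, count, unit in claims:
    print(f"{vendor}: {queries_per_hour(count, unit):,.0f} queries/hour")
```

Of course, a common unit only helps if the underlying workloads were comparable in the first place--which, per the previous point, they almost never are.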

• DW benchmark tests should be performed and/or vouched for by reliable, unbiased third parties--i.e., those not employed by or receiving compensation from the vendors in question. If there were any such third parties, I’d be aware of them. Know any? Yes, there are many DW and DBMS benchmarking consultants, but they all make their living by selling their services to solution providers. I hesitate to recommend any such benchmark numbers to anybody who seeks a truly neutral third party.

• DW solution price-performance comparisons require that you base your analysis on an equivalently configured/capacity solution stack--i.e., hardware, software--for each vendor and also the full lifetime total cost of ownership for each vendor/solution. That’s a black art in its own right. Later this year, I’ll be doing a study that provides a methodology for estimating return on investment for DW appliance solutions.
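As a rough illustration of what that analysis involves, here is a sketch of a lifetime-TCO and price-performance calculation. All cost categories and figures are invented; a real model would include many more line items (staffing, facilities, migration, and so on):

```python
def lifetime_tco(hardware, software_license, annual_support, annual_ops, years):
    """Up-front costs plus recurring costs over the solution's lifetime."""
    return hardware + software_license + years * (annual_support + annual_ops)

# Hypothetical 5-year figures for one equivalently configured stack.
tco = lifetime_tco(hardware=2_000_000, software_license=1_500_000,
                   annual_support=300_000, annual_ops=400_000, years=5)
queries_per_hour = 40_000  # from a comparable mixed-workload benchmark
print(f"TCO: ${tco:,}  price-performance: ${tco / queries_per_hour:,.2f} per query/hour")
```

The arithmetic is trivial; the black art is getting equivalently configured stacks and honest throughput figures to plug into it.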

As an entirely separate issue, it does no good, competitively, for a DW vendor to assert performance enhancements that are only relative to a prior configuration of a prior version of its own product or technology. The customer has no easy assurance that the vendor is comparing its current solutions against a well-configured/engineered example of the prior solution. The vendor’s assertion of order-of-magnitude improvement over a prior version of its own product may be impressive, but only as a statement of how much it has improved its own technology, not how it fares against the competition. And such “past-self-comparisons” can easily backfire on the vendor, as customers and competitors may use them to insinuate that there were significant flaws or limitations in its legacy products.

Here’s my bottom-line advice to all DW vendors on positioning your performance assertions: Frame them in the context of the architectural advantages of your specific DW technical approach. Publish your full benchmark numbers with test configurations, scenarios, and cases explicitly spelled out. To the extent that you can aggregate 100s of terabytes of data, serve thousands of concurrent users and queries, process complex mixtures of queries, joins, and aggregations, ensure subsecond ingest-to-query latencies, and support continuous, high-volume, multiterabyte batch data loading, call all of that out in your benchmarks. To the extent that any or all of that is in your roadmap, call that out too.

Here’s my bottom-line advice to I&KM pros: Don’t expect easy answers. Think critically about all vendor-reported DW benchmarks. And recognize that no one DW platform can possibly be configured optimally for all requirements, transactions, and applications.

Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 36, on communications as a service and the future of SOA in light of hard economic times. [[and further edited/annotated in a gratuitous, unnecessary way by James Kobielus on the evening of January 28, 2009 for the purpose of populating this blog with sweet new stuff without having to work very hard]]

We're going to begin with an example of what keeps SOA alive and vibrant, and that is the ability for the architectural approach to grow inclusive of service types and therefore grow more valuable over time.

We're going to examine service-oriented communications (SOC), a variation on the SOA theme, and a way of ushering a wider variety of services -- in this case communications and collaboration services from the network -- into business processes and consumer-facing solutions. We're joined by a thought leader on SOC, Todd Landry, the vice president of NEC Sphere.

In the second half of our show, we'll revisit the purported demise of large-scale SOA and find where the wellsprings of enterprise architectural innovation and productivity will eventually come from.

We’ll also delve into the psychology of IT. What are they thinking in the enterprise data centers these days? Somebody’s thoughts might resuscitate SOA or perhaps nail even more spikes into the SOA coffin.

*********************************

Baer: I hate to use a cliché, but it’s like the last mile of enterprise workflow and enterprise processes. The whole goal of workflows was trying to codify what we do with processes and trying to implement our best practices consistently. Yet, when it comes to verbal communications, we’re still basically using technology that began with the dawn of the human voice eons ago.

Gardner: I've seen people use sign language.

Baer: Well, that may be too, and smoke signals.

Gardner: A certain finger comes up from time to time in some IT departments.

Kobielus: At least the use of a trusty index DTMS finger.

Gardner: There you go.

*********************************

Gardner: Jim Kobielus, isn’t there more to this on the consumer side as well? We've got these hand-held devices that people are using more and more with full broadband connectivity for more types of activities, straddling their personal and business lives and activities. We know Microsoft has been talking about voice recognition as a new interface goal for, what, 10 years now. What’s the deal when it comes to user habits, interfaces, and having some input into these back-end processes?

An Important Extension

Kobielus: That’s a huge question. Let me just unpeel the onion here. I see SOC as very much an important extension of SOA or an application of SOA, where the service that you're trying to maximize, share, and use is the intelligence that’s in people’s heads -- the people in the organization, in your team. You have various ways in which you can get access to that knowledge and intelligence, one of which is by tapping into a common social networking environment.

In the consumer sphere, the intelligence you want to gain access to resides in mobile assets -- human beings on the run. Human beings have various devices and applications through which they can get access to all manner of content and through which they can get access to each other.

So, in a consumer world, a lot of the SOC value proposition is in how it supports social networking. The Facebook environments provide an ever more service-oriented environment within which people can mash up not only their presence and profiles, but all of the content the human beings generate on the fly. Possibly, they can tag on the fly as well, and that might be relevant to other people.

There is a strong potential for SOC and that consumer Facebook-style paradigm of sharing everybody’s user-generated content that’s developed on the fly.

*********************************

Text-Mining Capability

Kobielus: One of the services in the infrastructure of the SOC that will be critically needed in a consumer or a business environment is a text-mining capability within the cloud. That can go on the fly to all these unstructured texts that have been generated, and identify entities, relationships, and sentiments to make that information quickly available. Or, it can make those relationships quickly available through search or through other means to people who are too busy to do a formal search or who are too busy to even do any manual tagging. We simply want the basic meanings to just bubble up out of the cloud.

*********************************

Kobielus: I want to add one last observation before we go to the "SOA is dead" topic. In order for this integration to happen in the cloud, the cloud providers need to federate their service registries with those of their enterprise customers. But, humans are reachable through a different type of registry called a directory, via the Lightweight Directory Access Protocol (LDAP) and other means.

Cloud providers need to federate their identity management in a directory environment with those of their customers. I don’t think the industry has really thought through all the federation issues to really make this service-oriented, business-communications-in-the-cloud scenario a reality any time soon.

Gardner: So we need an open Wiki-like phone book in the sky.

Kobielus: Exactly.

*********************************

Kobielus: The whole "SOA is dead" theme struck a responsive chord in the industry, because there's a lot of fatigue, not only with the buzzword, but the very concept. It’s been too grandiose and nebulous. It’s been oversold, expectations have been built up too high, and governance is a bear.

We all know the real-world implementation problems with SOA, the way it’s been developed and presented and discussed in the industry. The core of it is services. As Anne indicated, services are the unit of governance that SOA begot.

We all now focus on services. Now, we’re moving into the world of cloud computing and, you know what, a nebulous environment has gotten even more nebulous. The issues with governance, services, and the cloud -- everything is a service in the cloud. So, how do you govern everything? How do you federate public and private clouds? How do you control mashups and so forth? How do you deal with the issues like virtual machine sprawl?

The range of issues now being thrown into the big SOA hopper under the cloud paradigm is just growing, and the fatigue is going to grow, and the disillusionment is going to grow with the very concept of SOA. I just want to point that out as a background condition that I’m sensing everywhere.

*********************************

Kobielus’ comments on all the above, written by himself on the evening of January 28:

• “Trusty index DTMS finger”? What does that mean? I never said it. Whoever transcribed the audio misattributed it. What does “DTMS” stand for anyway? Go listen to the audio playback and tell me whether I actually said it. I’m too lazy to do so. Also, I can’t stand listening to my own voice (yeah, I know, you’d think otherwise, wouldn’t you, given how verbal I am).

• Mash up each other’s presence, content, intelligence? A virtual mashpit, so to speak. Feels slightly creepy, doesn’t it? Sort of like the movie “The Fly,” where the machine went haywire and mashed Jeff Goldblum’s DNA with an insect’s. Ewwww!

• Glad I mashed up identity management and service governance on the call--mashed and mushed LDAP directories and UDDI registries into each other conceptually--and federated them in that big Venn diagram in the sky. SOA for interpersonal communications depends on populating the governance bus with all that identity “metadata” (e.g., contacts, attributes, profiles, roles, demographics, interests, transactions, behavioral characteristics, clickstream, predictive model scores, etc.).

• Text mining will provide the auto-discovery mechanism for all the “identity metadata” that people are self-publishing in un- and semi-structured formats (often without fully realizing it) in the Web 2.0, social networking, wiki world.

• Controlling mashups. Mashup governance. Dave Linthicum introduced that concept a year or two ago, but I still don’t sense any clear feeling among vendors or users that it’s a hot button. I think everybody still regards user-created services (i.e., mashups) as outside the proper scope of SOA. But, then again, what’s the difference between a mashup and a rogue service? The former is not sanctioned by corporate IT but is ostensibly benign and is to be tolerated, if not encouraged or supported. The latter is also unsanctioned, and possibly benign, but under suspicion and to be decommissioned or neutralized at the first opportunity. And what’s the difference between mashups and virtual machine sprawl? The former proliferates but doesn’t necessarily hog resources or disrupt operations, whereas the latter also proliferates and consumes more than its fair share of resources. The relevant distinctions in these cases all concern where specifically a particular created/published service (be it a mashup service, Web service, or cloud service) sits on the governance spectrum in a given organization. Is it a sanctioned/supported or unsanctioned/unsupported service, from the point of view of the service governance “authorities”?

Tuesday, January 20, 2009

Response: Another headline that stops you cold (pun sort of intended). It’s intended as a comment on Steve Jobs’ health situation and the ramifications for Apple going forward. Rather than get into a pointless ramble on God, heaven, the soul, legacy, the afterlife, “cemeteries are full of indispensable men,” and the like, I’ll just point out that, where corporate succession planning is concerned, Jobs’ eventual demise was factored into people’s thinking when it was revealed a few years ago that he had battled cancer. And, in fact, he has built a brand that can certainly survive the loss of one or more individuals and keep on prospering. It’s instructive to look at the legacy of Walt Disney (Jobs, in fact, has been called the new Disney because he founded Pixar). Quibble as you might with how Mike Eisner and others have built Disney’s brand in the 43 years since Walt went to heaven, you can’t deny that the founder did something exquisitely right. And that he was just as indispensable in his heyday to his shop as Jobs is now to his. That said, I wish Steve Jobs a speedy recovery. I don’t know him. I’ve never met him. I’m not a big Apple fan, but that’s irrelevant. He’s a human being suffering from some nasty medical condition, and should be in all our prayers.

Sunday, January 18, 2009

Rich Wolski of Eucalyptus had some very interesting insights to share about the role of identity federation among public and private clouds. You'll see those thoughts when my Network World article publishes on February 9.

What Rich said reminded me of this article, which I published in Business Communications Review in fall 2006. It's about the need for multi-layered federation infrastructures for IP networking. It reminds me of the fact that clouds (aka "everything as a service") will also have to federate on every level.

***************************************

New Federation Frontiers in the IP Networking World

Federation is a concept much in vogue these days, and it is being applied to a growing range of telecommunications and computing infrastructures.

Where telecommunications is concerned, federation refers to an established industry practice: interconnection, routing, billing, clearing, revenue settlement, and other negotiated arrangements among affiliated service providers. Network federation allows subscribers to authenticate to their primary carrier and thereby gain single sign-on (SSO) access to services, applications, and content controlled by affiliated service providers. The alternative to federation is centralization—in other words, the long-discarded “Ma Bell” approach of one carrier controlling everything in the connected universe.

If you think about it, the Internet is the most successful network federation of all. It is a global federation of separate, cooperating networks built on universal adoption of the Internet Protocol (IP), Domain Name System (DNS), Uniform Resource Locator (URL), and other core standards developed under the auspices of the Internet Engineering Task Force (IETF) and other groups. In addition, as noted in “What is Federated Identity Management?” (Business Communications Review, August 2005), federations have been implemented widely in other telecommunications and distributed computing environments. For example, federated location-registry and roaming services enable interconnected cellular carriers to authenticate client devices, route incoming calls, apply appropriate calling features, and bill subscribers correctly. Furthermore, multi-institution automated teller machine (ATM) networks--such as Cirrus--operate under a type of federation, enabling users to log in remotely to their bank accounts from any affiliated institution’s machines.

In addition to these long-established approaches, new frontiers in standards-based cross-carrier federation are opening up. Many of those new federation initiatives fall under the broad architectural umbrella of IP Multimedia Subsystem (IMS). Increasingly, network industry standards groups are using the word “federation” to describe their cross-carrier IP interoperability frameworks. The IMS community is referencing federated identity management (IdM) standards--such as those developed by the Organization for the Advancement of Structured Information Standards (OASIS) and the Liberty Alliance--to facilitate convergence among diverse IP-based services. But they’re going beyond the Web services world’s federation protocols to define federation environments that build on IETF specifications such as DNS, Session Initiation Protocol (SIP), and E.164 Number Mapping (ENUM).

Figure 1 illustrates several layers of federation that are possible in a cross-carrier IP internetworking environment: federated IdM (user and device authentication, SSO, and roaming); federated service creation, provisioning, and coordination; and federated service provider peering (interconnection, policy declaration, addressing, and routing).

The industry is implementing IP federation in all of these areas. Recently, federation has popped up in several new industry standardization efforts, though not all of these new federation approaches have yet been implemented in production carrier internetworking environments.

Most notably, infrastructure vendors are integrating federated IdM SSO protocols within IMS’ Home Subscriber Server (HSS). In addition, the IPsphere Forum is developing commercial and technical frameworks to support federated cross-carrier service provisioning, signaling, and management, incorporating federated IdM standards plus a broad range of WS-* standards. Furthermore, an Internet Engineering Task Force (IETF) Working Group is developing standards under which Voice Over IP (VoIP) service providers will be able to flexibly federate amongst themselves through the DNS and ENUM infrastructures.
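The ENUM piece of that DNS-based federation work rests on a simple, fully specified naming convention (RFC 3761, the then-current ENUM specification): an E.164 phone number is reversed, its digits are dot-separated, and the result is suffixed with e164.arpa to form the DNS domain whose NAPTR records point at the subscriber’s SIP or other URIs. A minimal sketch, using a made-up example number:

```python
def enum_domain(e164_number: str) -> str:
    """Map an E.164 number to its ENUM domain per RFC 3761:
    strip the '+', reverse the digits, dot-separate, append e164.arpa."""
    digits = e164_number.lstrip("+")
    return ".".join(reversed(digits)) + ".e164.arpa"

print(enum_domain("+46766861004"))
```

A federated VoIP peer would then issue an ordinary DNS NAPTR query against that domain to discover how (and through which carrier) to route the call.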

Within a typical cross-carrier internetworking environment, federated IdM may be implemented in layers. For converged IP services, federated IdM may involve separate authentications at the application layer and network layer.

Increasingly, the application-layer authentications are relying on any or all of the federated IdM standards mentioned above. In fact, telecommunications carriers in many nations are among the most active implementers of the Liberty Alliance specifications, having deployed Liberty-based IdM services for application-layer account linking, SSO, and trusted attribute sharing across their catalogs of federated third-party services.

Application-layer federated IdM relies on carriers maintaining authoritative directories of user identities, credentials, roles, personalization settings, user preferences, and other attributes. Generally, each federated carrier and service provider operates as an identity provider (IdP), managing the master directory of its own registered subscribers along with their account profiles. Carriers may also provide real-time subscriber session state information to federated partners, thereby facilitating targeting and personalization of service delivery.

Within the underlying IP networking environment, network-layer authentications will increasingly rely on IMS’ HSS infrastructure. HSS is IMS’ principal network-layer IdM environment. Within each carrier’s IMS network, the HSS is a master directory that supports user authentication and authorization, subscriber profile management, session setup and management, call routing, and user roaming within carrier networks. There are no standards specifying exactly how the HSS must interact with its underlying user directory or database repository. Consequently, IMS infrastructure providers and carriers may rely on prevalent directory-access standards, such as the Lightweight Directory Access Protocol (LDAP), or WS-* standards, such as XML Query, for query, update, and management of their HSS repository.

The HSS is a master directory of device and user identity information relevant to network-level authentication, authorization, and roaming. For wireless networks, the HSS manages device and user identities such as International Mobile Subscriber Identity (IMSI), Temporary Mobile Subscriber Identity (TMSI), International Mobile Equipment Identity (IMEI), and Mobile Subscriber ISDN Number (MSISDN). With IMS, the HSS manages additional identities, including IP Multimedia Private Identity (IMPI) and IP Multimedia Public Identity (IMPU), which are URIs associated with single or multiple client devices.

To enable cross-carrier interconnection, SSO, and roaming, HSS environments must be federated through various approaches.

At the network layer of the IMS architecture, cross-HSS federation requires that each carrier also implement a Subscriber Location Function (SLF), and that each HSS and SLF implement the DIAMETER protocol (RFC 3588) for authentication, authorization, and accounting (AAA). Essentially, the HSS/SLF infrastructure in IMS environments is equivalent to the Home Location Register (HLR) and Visitor Location Register (VLR) services in cellular networks (one big difference is that the HSS/SLF is a media-, network-, and device-agnostic functional evolution, hence a functional superset, of the cellular-specific HLR/VLR infrastructure).

DIAMETER is an important piece of the IP networking federation equation. DIAMETER—the IMS successor to the widely adopted Remote Access Dial-In User Service (RADIUS) protocol--may be used for cross-carrier federated AAA in conjunction with the HSS. Wireline and wireless ISPs authenticate users at the application layer through DIAMETER/RADIUS servers that interface to authoritative directories of user identities, passwords, and other credentials. DIAMETER/RADIUS servers can serve as proxies, mediating between a front-end authenticating server and one or more back-end directories. As proxies, these servers can be set up to forward authentication and accounting messages to peer authentication servers in other application domains, which is essentially a federated IdM scenario.

In addition, DIAMETER is the principal access protocol that allows distributed IMS functions, no matter what carrier’s domain they happen to be deployed within, to interact with the carrier’s master HSS. Within the IMS infrastructure, the Interrogating Call Session Control Function (I-CSCF) queries the HSS using DIAMETER to retrieve the user location, in order to route a Session Initiation Protocol (SIP) request to its assigned Serving CSCF (S-CSCF). The S-CSCF uses DIAMETER to download user profiles from and upload user profiles to the HSS. And an IMS Application Server--controlling caller ID and other enhanced services--can use DIAMETER to query the HSS for subscriber presence, location, and other account profile data.

Industry efforts are underway to integrate IMS’ federated IdM infrastructure—centered on HSS and DIAMETER—with the Web services world’s IdM environment, in which application-layer directories and federated SSO protocols are predominant. In separate recent initiatives, both Sun Microsystems and Microsoft are positioning their federated IdM platforms and protocols as SSO adjuncts to carrier HSS infrastructures.

In April 2006, Sun and Lucent Technologies announced joint development of infrastructure products that provide standards-based SSO access to federated IMS services. Sun is providing its Sun Java System Federation Manager product to the initiative, whereas Lucent has provided a full IMS product suite that includes HSS and other IdM functionality.

Under this joint development initiative, Lucent is providing a suite of IMS infrastructure products to support federated IdM functionality. The following set of products is indicative of the functional components necessary for federated IdM over HSS in an IMS environment:

• Lucent Datagrid: This product integrates the diverse, federated carrier databases that contain subscriber data relevant to call processing, session management, messaging, and customer care.

• Lucent Unified Subscriber Data Server (USDS): This product provides HSS, HLR, and AAA functionality. It enables HSSs to be deployed in a centralized or decentralized/federated fashion. It allows access to subscriber profile data that is hosted inside or outside of a service provider's network and on diverse network platforms. It also enables operators to provide subscribers with a single service presentation environment even when roaming to another carrier’s network.

• Lucent Session Manager: In conjunction with the USDS, the Session Manager supports SSO, presence management, and session management across diverse, federated IMS-based services. It allows operators to provide integrated voice, data, video, multimedia, and other capabilities over IMS sessions. Through embedding of Sun’s technology, the Session Manager is implementing the Liberty Alliance federated IdM protocols, which provide SSO within multilateral federated environments. In addition, the product leverages the Liberty protocols to allow subscribers to selectively disclose particular profile information--such as current locations and previously stored preferences--to particular federated application and content providers. Furthermore, the Session Manager can be deployed for several core IMS functional roles, including Call Session Control Function, Service Broker, Service Capabilities Interaction Manager (SCIM), Policy Decision Function (PDF), and the Breakout Gateway Control Function (BGCF).

• Lucent Communication Manager: This product provides a unified portal presentation view for subscribers to access their IMS-based converged services from wireline or wireless clients. It supports integrated session control that is agnostic to the underlying application servers serving the subscriber and to the client devices through which services are being accessed.

• Lucent Vortex: This product provides a policy engine that may be distributed throughout a network to support personalization and customization of end-user views of federated IMS-based services. It allows network operators to quickly modify network behaviors to serve the special requirements of particular customer segments and ensure guaranteed quality of service.

Separately, Microsoft has been working with carriers throughout the world to integrate its own application-layer federated IdM stack with their IMS environments. Microsoft published its federated IMS vision in a June 2005 whitepaper called “Connected Services Framework and IMS: A Partnership for Success.”

Microsoft’s and Sun’s visions for federated IdM have many common themes, such as promoting IMS service convergence and aggregation, enabling SSO with trusted user attribute sharing, and implementing WS-* standards pervasively throughout carrier infrastructure. Both of them promote IMS convergence visions under which network-layer IdM services—such as the DIAMETER protocol interfaces—could conceivably be exposed as Web services and invoked from application-layer IdM services (though neither Microsoft nor Sun has committed to exposing DIAMETER APIs as Web services). In other words, they both point to the eventual unification of IMS application- and network-layer federation within a common service-oriented architecture (SOA) framework.

However, their approaches differ in two important respects.

First, Sun has been promoting the Liberty Alliance protocols in its carrier-federation roadmap, and implementing them in its work with Lucent. Microsoft, by contrast, has been implementing the rival WS-Federation protocol, as well as other WS-* specifications—such as WS-Trust—that it has a key role in developing. It’s important to note that the functional differences between the Liberty Alliance protocols and WS-Federation are not great, and that they both support federated account linking, strong authentication, SSO, trusted attribute sharing, privacy protection, and session management over multi-organization circles of trust.

Second, Sun has been working with Lucent to embed federated IdM protocols into the underlying IMS HSS/SLF infrastructure. Microsoft, by contrast, has focused on connecting its federated IdM infrastructure to IMS as an Application Server. In the IMS architectural framework, an Application Server is a functional component that hosts and executes calling and application services. In addition to application-layer SSO, other services that may be implemented as IMS Application Servers include Caller ID, Call Waiting, Push To Talk, Voice Mail, Short Message Service, Presence, and Location-Based Services. From the subscriber’s point of view, an Application Server may be located in the subscriber’s own home carrier’s network, or in a federated third-party network or service provider environment.

It’s not clear which, if either, of the two approaches—Sun’s embedding of federated IdM in IMS HSS vs. Microsoft’s integration of federated IdM as an IMS Application Server—is best. Embedding of industry-standard federation protocols in HSS may pay off for Sun/Lucent if other IMS infrastructure providers and carriers follow their lead.

Integration of the WS-Federation protocol as an IMS Application Server may pay off for Microsoft if it can convince IMS infrastructure providers and carriers to implement this protocol. However, it should be noted that Microsoft’s three-year-old WS-Federation specification has not achieved much adoption in the mainstream federated IdM community.

Federated Service Creation, Provisioning, and Coordination

The IMS framework is missing an important component: specifications that describe how IP services can be flexibly created, provisioned, and coordinated across federated carriers, application partners, and content publishers.

The IPsphere Forum is a telecommunications industry initiative to fill in this missing piece. The forum is an international consortium of service and infrastructure providers developing both the commercial and technical frameworks for federated cross-operator service delivery. The group, which has been in existence for more than a year, has established a formal liaison with the International Telecommunication Union Telecommunication Standardization Sector (ITU-T).

The IPsphere frameworks, still under development, implement SOA principles within the IMS architectural model. Leveraging WS-* specifications such as UDDI, IPsphere is defining a standards-based environment for provisioning network infrastructure, application, and content services—composed of modular “service elements”--to carriers, endpoints, and users across federated IP networks. Each service element is a software method or module that is hosted by a provider and published to a UDDI registry as a Web service. End-to-end IP voice, data, and multimedia services may be created from diverse service elements hosted by many federated providers. Providers link the services to their respective network and policy management infrastructures for runtime administration, optimization, and control.

Under the IPsphere commercial/technical framework, application-layer IdM services—such as Liberty Alliance-based SSO—are just one category of infrastructure interactions that may be federated across a “pan-provider” IMS environment. Boundaries between federated providers sit at the intercarrier interface (ICI), as defined under the IMS model. IMS defines Call State Control Function (CSCF) points that can be deployed at network boundaries, such as ICIs, for enforcing federation policies—such as security, trust, quality of service, revenue settlement, and service-level agreement (SLA) accountability--defined by cooperating IP service providers.

Across these network boundaries, federated service provisioning and coordination take place across the following functional service layers, or “strata,” as defined by IPsphere:

• Packet handling stratum: This corresponds to the seven-layer Open Systems Interconnection protocols, as implemented in the IMS model.

• Policy and control stratum: This corresponds to such IMS functional components as the “Policy Decision Function,” “Proxy Call Session Control Function,” “Policy Enforcement Point,” and “Common Open Policy Service.”

• Service signaling stratum: This stratum has no counterpart in the IMS model. It is the IPsphere layer at which federated pan-provider services are created, provisioned, and coordinated from elements hosted in diverse provider environments. Across this layer, the providers’ service creation environments exchange structured messages to manage the phases of federated service setup, execution, and assurance. IPsphere defines several models of federated message-driven service creation, including permissive Internet-like interactions among providers, policy-database-mediated linking of services at the ICI, and explicit linking of services at the ICI by the providers’ respective network management systems.

Under IPsphere’s commercial model, each federated service provider may perform one or both of the following functional roles: “Partners” or “Sellers.” Partners contribute resources in the form of registered component service elements from which Sellers assemble end-to-end services that are sold to users, who are also known as “Buyers.” Partners publish only those services/elements that they want Sellers to deliver to Buyers, using UDDI and other Web services standards for messaging-based service provisioning interactions with Sellers. Partners receive revenues from Buyers via settlement payments rendered by Sellers, who validate, authenticate, and bill the Buyers. Partners may also assemble component services contributed by various federated “Sub-Partners.”

Of course, negotiated contractual relationships determine how Partners, Sub-Partners, Sellers, and Buyers interact throughout the federated service provisioning and delivery life cycle. The flexible IPsphere federation framework allows participating organizations to offer whatever resources they choose, at whatever price the market will bear, under whatever federation partnering arrangements make business sense.

Federated Service Provider Peering

Within the fast-evolving world of IP networks, cross-provider federations are being established to facilitate end-to-end service interoperability.

In their drive to establish an end-to-end alternative to the public switched telephone network (PSTN), VoIP service providers (VSPs) are establishing their own federations. Federation—also called “VoIP peering”—enables VSPs to offer end-to-end “on-net” VoIP calls and other IP multimedia communications services to their own customers and to the customers of any federated VSP. As more VSPs federate with each other—preferably in multilateral arrangements—their collective on-net customer base will reach a critical mass at which VoIP becomes a cost-effective, full-service alternative to the PSTN. The number of calls that a VSP can complete on-net is directly proportional to the number of other federated VSPs and their customers.

Founded in 2004 and headquartered in London, XConnect is the world’s largest VSP peering/federation community and operates the world’s largest international private ENUM registry. XConnect provides VSP federation services to more than 150 VSPs and 123 million unique VoIP telephone numbers worldwide. Its VSP services include address protocol interoperability, ENUM interconnect call addressing and routing services, and authentication and identity services. In addition, XConnect provides multi-protocol interoperability, VoIP call security, and Spam over Internet Telephony (SPIT) prevention services to VSP members.

Separately, the IETF’s Session Peering for Multimedia Interconnect (SPEERMINT) Working Group is developing standards under which VSPs will be able to flexibly federate amongst themselves. The SPEERMINT specifications leverage the basic VoIP standards: SIP, Real-time Transport Protocol (RTP), and H.323. In addition, SPEERMINT is placing heavy reliance on DNS and the emerging DNS-integrated ENUM directory infrastructure to support a ubiquitous VSP federation address and policy administration environment.

Under SPEERMINT’s specifications, a federation is defined as “a group of VSPs [that] agree to receive calls from each other via SIP, agree on a set of administrative rules for such calls [such as settlement and abuse handling], and agree on specific rules for the technical details of the interconnection.” A VSP declares its membership in a federation by publishing to DNS a “domain policy” regarding the conditions under which they are willing to accept incoming communications per the rules of the federation. The specifications define the structure of these domain policies and the general approach for publishing them to DNS, using Dynamic Delegation Discovery System (DDDS) DNS records.

Under SPEERMINT’s approach, each VSP federation would identify itself by a unique URI, set membership eligibility criteria, define its internal policies and rules, and determine how to communicate those rules to member VSPs. SPEERMINT recommends but does not require that VSP federations use URLs to point to documents describing federation policies and rules.

Some of the VSP-federation policies, rules, and membership conditions that might be described in these documents include:

• Federated VSPs agree to use federation-designated ENUM infrastructure to translate existing numeric phone numbers to SIP addresses using DNS, to facilitate on-net VoIP call routing;
• Federated VSPs agree to accept SIP-based calls from each other via the public Internet, as long as each call uses Transport Layer Security (TLS) over Transmission Control Protocol and presents an X.509 certificate signed by a federation-designated public key infrastructure certificate authority;
• Federated VSPs agree to accept only those SIP-based calls from each other that were transmitted over a federation-wide virtual private network;
• Federated VSPs agree to accept all SIP-based calls from each other that originated from within the same country;
• Federated VSPs agree to accept only those SIP-based calls from each other that were routed through a central, federation-designated SIP proxy;
• Federated VSPs agree to have revenue settlements for calls from each other administered by a federation-designated clearinghouse; and
• Federated VSPs agree to use firewalls and other perimeter security devices to block SIP calls that violate federation-administered anti-SPIT rules.
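The ENUM translation named in the first condition follows a simple, well-defined rule (RFC 6116): strip the E.164 number to its digits, reverse them, separate them with dots, and append the registry suffix. A minimal sketch of that mapping, assuming Python and a hypothetical federation that uses the public e164.arpa tree (a private registry such as XConnect's would substitute its own suffix):

```python
def e164_to_enum_domain(number: str, suffix: str = "e164.arpa") -> str:
    """Map an E.164 phone number to the ENUM domain name that a
    federation's resolver would query for NAPTR records.

    Per RFC 6116: keep only the digits, reverse them, dot-separate
    them, and append the ENUM suffix.
    """
    digits = [c for c in number if c.isdigit()]
    if not digits:
        raise ValueError(f"no digits in E.164 number: {number!r}")
    return ".".join(reversed(digits)) + "." + suffix

# The resolver would then translate the NAPTR answer for this name
# into a SIP URI, enabling on-net routing of the call.
print(e164_to_enum_domain("+1-555-123-4567"))
# -> 7.6.5.4.3.2.1.5.5.5.1.e164.arpa
```

The DNS query itself and the NAPTR-to-SIP-URI rewrite are omitted here; the point is that the addressing scheme is pure string manipulation, which is what lets ordinary DNS infrastructure carry federation routing data.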

The SPEERMINT working group also points out that the same DNS-enabled federation approach may be used for peering among providers of SIP, instant messaging (IM), and other IP application services.

Though the SPEERMINT group doesn’t directly acknowledge the IPsphere Forum’s work, it’s clear that the two industry initiatives are complementary. The SPEERMINT effort defines an IP environment under which providers of a particular service—VoIP calling—may federate the policies under which they connect their users. IPsphere, by contrast, defines a larger IMS-based technical environment within which VSPs can provision and coordinate end-to-end VoIP and other services that conform to federation policies.

Likewise, the SPEERMINT and IPsphere frameworks require that end users and their devices authenticate using federated IdM protocols, at both the application layer (in the context of SOA and Web services) and network layer (in the context of IMS’ HSS). So there’s an important and growing role for the Liberty Alliance, SAML, and other federated IdM protocols in IP, IMS, SIP, VoIP, and IPTV federations.

Federation in a complex IP internetwork is a many-layered thing. In fact, federation—on many levels—is the key to convergence of diverse, pan-provider, multimedia IP services. Every new carrier, hosted application provider, and content publisher in the IMS fabric is another domain that must federate with existing providers in order to do business online.

***************************************

Back in the days I was a federation analyst. And an SOA analyst. Still am, but I've moved on.

My take: This is one of those headlines that absolutely stops you in your tracks. Didn’t Tom realize that Adolf had done the job himself--and long before the infant Mr. Mapother crawled out of L. Ron Hubbard’s stork-shaped spacecraft? By the way, “Valkyrie”—good popcorn movie, though, in all frankness, do we need yet another Nazi tale on the big screen? And do we need another reason to admit that, yes, Mr. Cruise has talent, but that he carries so much personal baggage on screen that it gets in the way of our viewing pleasure? In terms of big stars of my generation (true confession: Cruise is 5 years younger than me), Tom Cruise is a bit like Madonna: so absolutely cold and calculating that every new project seems more engineered than felt. You tend to focus on their ambition more than their message. And for all his talent as a politician, Barack Obama has a bit of that quality as well. A bit too tightly wound, though he certainly knows what he’s doing and why he’s doing it. I wish him well in his upcoming new job. I voted for him. But I’m still not sure who he is.

Ha ha. Yes, this analyst too is not averse to the usual “last wave/next wave” tropes, such as “xx 2.0” or “xx is dead.” So sue me.

Clouds are SOA 2.0. Cloud computing is to a great extent the future of SOA. However, this paradigm raises the SOA stakes while also accentuating the risks.

To the extent that organizations use governance to harness the richness of cloud environments, they will be able to supercharge their SOA initiatives while radically improving scalability and cost-effectiveness. Leveraging distributed cloud platforms, the next-generation SOA will be more fluid, flexible, and virtualized, managing ever more massive data sets and providing the agility to handle more complex mixed workloads of transactional applications, business intelligence, data mining, enterprise service bus, business process management, and other functions.

Clouds complicate the SOA governance picture, but it’s not as if many enterprises already have exemplary governance practices. In the real world, cloud computing, like SOA implementations, is often an ungovernable mess. By encouraging widespread reuse of scattered software components, SOA threatens to transform the enterprise application infrastructure into a sprawling, unmanageable hodgepodge of ad-hoc services. Without proper governance, SOA could allow anyone anywhere to deploy a new cloud service any time they wish, and anyone anywhere to invoke and orchestrate that service--and thousands of others—into ever more convoluted messaging patterns. In a governance-free environment, coordinated cloud service planning and optimization become frustratingly difficult. In addition, rogue cloud services could spring up everywhere and pass themselves off as legitimate nodes, thereby wreaking havoc on the delicate trust that underlies production SOA.

SOA governance is maturing as a discipline, while cloud computing—the new galaxy in which services will burst forth—is anything but. Unfortunately, the cloud arena may continue to evolve so fast over the next several years that it will be difficult for consensus service-governance practices to coalesce. Still, emerging cloud services can benefit from the many lessons learned by enterprise SOA governance implementers. Most important, you need a service catalog that maintains metadata about services and enables you to control development and construction of services and publish visibility and availability of services to consumers. Also, federation agreements should be set up to auto-provision service definitions between public clouds and enterprises’ Web services, REST, and other application environments.
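The service catalog described above is, at bottom, a registry with lifecycle gates: services carry metadata and a state, and only approved, published services are advertised to consumers. A minimal sketch in Python, with all names and states hypothetical:

```python
from dataclasses import dataclass

# Hypothetical lifecycle states, in forward order. A real governance
# regime would define its own states and approval workflow.
LIFECYCLE = ("draft", "approved", "published", "retired")

@dataclass
class ServiceEntry:
    name: str
    owner: str
    endpoint: str
    state: str = "draft"

class ServiceCatalog:
    """Toy service catalog: controls lifecycle transitions and exposes
    only published services to consumers."""

    def __init__(self):
        self._entries: dict[str, ServiceEntry] = {}

    def register(self, entry: ServiceEntry) -> None:
        self._entries[entry.name] = entry

    def promote(self, name: str, new_state: str) -> None:
        # Enforce forward-only lifecycle transitions -- the governance gate.
        entry = self._entries[name]
        if LIFECYCLE.index(new_state) <= LIFECYCLE.index(entry.state):
            raise ValueError(f"cannot move {name} back to {new_state}")
        entry.state = new_state

    def visible_to_consumers(self) -> list[str]:
        return [e.name for e in self._entries.values() if e.state == "published"]

catalog = ServiceCatalog()
catalog.register(ServiceEntry("quote-service", "lob-sales", "https://example.internal/quote"))
catalog.register(ServiceEntry("risk-service", "lob-finance", "https://example.internal/risk"))
catalog.promote("quote-service", "approved")
catalog.promote("quote-service", "published")
print(catalog.visible_to_consumers())  # only the published service is advertised
```

The same control point is where federation agreements would hook in: auto-provisioning a service definition into a partner cloud becomes a catalog-mediated transition rather than an ad-hoc deployment.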

So the outlook for strong service governance in this brave new paradigm remains cloudy, but with scattered patches of promise.

Friday, January 16, 2009

Drifting dream. Sitting on a beam. Two cells on a shaft. One fore, one aft. Protean complements. Sunday supplements. Brisk and happy beyond beyond belief, I take me in and what a relief. Every room well maintained. Every splash of blue contained within the lines. Rushing, shifting. Take me deep and go on drifting.

CENTER OF PAIN

Something. I can see the whole world fall away and there nothing but this pain. This morning I could feel the sheets rustling the leaves shaking but still this pain. Tonight I can relax and dwell in gray wet cloudy and fold my pain into nothing. Nothing.

ORGANISM

How strange when the infra-red ultra-thin membrane of dream blood ruptures and spills a dread film between the seeing-eye and a world itching and twitching inflamed rejecting a donor tissue.

PRASEODYMIUM

I wOUld wOrshIp grEEn glAss, bUt drIvEn tO cOnsIdEr...cOmpOsItIOn, thE rElAtIvE EAsE wIth whIch shArp pAnEs, slAppEd Up thrOUgh tIdY frAmEs, fIltEr whItE thrOUgh flAttEnEd sAlts, pOUndEd IntO rUdE AllIAncE...I Opt OUt. YEllOw Is hOw thEsE skYlInEs fAll, pOUrEd lIke sAndY sOIl In smEArY vAlEncE.

From: [self]
Sent: [eight years ago]
To: [people I knew way back when]
Subject: Language as an Object Worthy of Contemplation

All:

I highly recommend Chris Redgate's daily, syndicated, capsule newspaper column, "The Red Pencil," which focuses on the art of putting words together. Chris' 100-words-a-day ranks right up there with Doonesbury and my morning bowl of Cheerios. He/she (never been able to resolve that forename into a definite gender, and I guess it doesn't matter--anybody here seen Julia Sweeney's wonderful "It's Pat!" movie?) recently wrote about the distinction (or lack thereof) between prose and poetry, and spurred me to respond as follows:

**************************************************

Chris:

I enjoy your "Red Pencil" column, which I read in the Washington Post. I'm writing to respond to your recent two-part column on the difference between prose and poetry. On one level, I agree that in practice there is often little difference between prose and poetry as distinct literary genres. In practice, modern poetry is often simply prose chopped up and defaced with arbitrary carriage returns, tabs, punctuation, misspellings, and obscurities. Poetry often suffers from highfalutin abstractions, precious diction, adjectival overload, lack of point or narrative, and whining, self-pitying attitudes. And poets wonder why very few people buy or care about their work.

On another level, though, we can distinguish between prosaic and poetic expression, which, taken together and interwoven well, can enliven even the most mundane writing. Prosaic expression points to objects in the world (even if that world exists only in the writer's head, as many scientific hypotheses, for example, do). Poetic expression points back at itself, focusing on language as an object worthy of contemplation in its own right (write!). Language as an object worthy of contemplation--what do I mean by that? I mean the features of language that make it noteworthy, catchy, and memorable: meter, cadence, rhythm, rhyme, alliteration, tintinnabulation, imagery, word choice, etc. Language as a symbol system or an equation that we continually manipulate: grammar, syntax, etc. Language as a human artifact that is capable of conveying beauty and meaning through its very structure and sound.

The very best writing is both prose and poetry--you want to read, then re-read it, focusing on the objects that the writer is trying to depict, but also the object through which the writer depicts them. Through brevity, the best poetry encourages us to re-read. The best e-mails do too.

Jim

**************************************************

It's all art and artifice. I've spent my career trying to breathe life into technical topics of thudding complexity. Committing this sh*t to someone's memory requires stealth poetry.

BI is no longer just about back-office reporting. As BI solutions increasingly permeate the enterprise and span a wide range of applications, analytics-driven organizations recognize BI as a key corporate asset and a do-or-die platform. In today's turbulent and increasingly commoditized economy, enterprises must make better and faster decisions to stay competitive — and often just to keep their heads above water.

As BI grows more pervasive, complex, feature-rich, and mission-critical, it also becomes harder to implement effectively. Many information and knowledge management professionals question whether they architect, implement, and manage their BI initiatives properly. Doing so requires sound BI and performance management best practices — and an awareness of the myriad ways it can all go wrong.

Forrester's ongoing research compiles a litany of worst practices often committed, deliberately or inadvertently, by even the smartest, most experienced information and knowledge management professionals. Common deficiencies in many enterprise BI environments often manifest themselves at the application level, but the root causes of the problems go much deeper. The chief symptoms of suboptimal BI management practices include:

The lack of a single trustworthy view of all relevant information. Many organizations strive for a single unified view of disparate transactional data and commit themselves to the long-range goal of consolidating it all into an all-encompassing enterprise data warehouse (EDW). In practice, though, the goal of an uber-EDW is a moving target. EDW projects are frequently the victims of "scope creep," due to constantly changing requirements, relentless growth in the range of operational-data sources, and stubborn resource bottlenecks within IT. Insufficient focus on data quality and master data management (MDM) only adds to the lack of trust. Even data in a comprehensive EDW may be viewed as untrustworthy or, in a worst-case scenario, may simply be incorrect. As a result, BI application users resort to old-fashioned methods to collect and analyze data, such as running their own SQL queries and pulling data into spreadsheets for analysis.

BI applications too complex and confusing to use effectively. Crafting sophisticated BI applications for power users is important, but designing them for casual business users is far trickier. Even the most user-friendly, point-and-click BI applications often require users to slog through a daunting range of user interfaces, features, reports, metrics, dimensions, and hierarchies. Also, BI is just a subset of the surfeit of productivity tools that information workers must juggle just to perform their basic responsibilities. As a result, most BI end users have barely tapped the productivity potential of the tools at their disposal and often run back to IT to help them create new reports, queries, and dashboards.

BI applications too rigid to address even minor changes. Our modern world moves at lightning speed, but BI solutions are often too rigid to keep up with the changes. One simple change to a single source data element can result in a few changes to extract, transform, load (ETL) and data cleansing jobs, which may turn into several data model changes in operational data store (ODS), data warehouse (DW), and data marts; this in turn affects dozens of metrics and measures that could be referenced in hundreds of queries, reports, and dashboards.
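The ripple effect described above is easiest to see when BI lineage is modeled as a dependency graph and walked downstream from the changed element. A hedged sketch in Python, with all artifact names hypothetical:

```python
from collections import deque

# Toy lineage graph: each artifact maps to the artifacts it is built from.
# A real environment would populate this from ETL, DW, and BI metadata.
DEPENDS_ON = {
    "etl_load_orders": ["src.orders.amount"],
    "dw.fact_orders":  ["etl_load_orders"],
    "mart.sales":      ["dw.fact_orders"],
    "metric.revenue":  ["mart.sales"],
    "report.q1_sales": ["metric.revenue"],
}

def impacted_by(changed: str) -> set[str]:
    """Return every artifact downstream of a changed source element."""
    # Invert the edges, then breadth-first search from the changed element.
    downstream: dict[str, list[str]] = {}
    for artifact, sources in DEPENDS_ON.items():
        for s in sources:
            downstream.setdefault(s, []).append(artifact)
    seen: set[str] = set()
    queue = deque([changed])
    while queue:
        node = queue.popleft()
        for nxt in downstream.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# One changed source column flags every ETL job, data model, metric,
# and report downstream of it for review.
print(sorted(impacted_by("src.orders.amount")))
```

This kind of impact analysis doesn't make BI environments less rigid by itself, but it makes the cost of a change visible before the change is made, which is half the battle.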

As these problems illustrate, the typical BI environment is far from realizing its potential as a strategic business asset. Many organizations have responded by developing BI support centers or BI competency centers, but a BI Solution Center (BISC) offers a more business- and solutions-focused advance on these concepts. This article, which is based on the Forrester Report "Implementing Your Business Intelligence Solutions Center," details nine choices you'll need to make on the road to building an effective BISC.

BI Solutions Centers Cultivate BI Best Practices

Implementing BI technology is easy (relatively) — but getting value from those technology investments is the truly hard part. Recognizing that sound BI management practices are often the missing ingredient, many companies have begun to transform their project-based BI support groups into a more strategic function: the BI solutions center. Though the BISC has great promise, it is no silver bullet. Enterprises with more successful BI implementations often implement some form of BISC practices, but there is a wide range of BISC implementation options, and not all of them are appropriate for every scenario. Why? Because there are many different approaches, organizational structures, and modus operandi for BISCs, each with its own pros and cons.

Forrester defines a BISC as: A permanent, cross-functional organizational structure responsible for the governance and processes necessary to deliver or facilitate delivery of successful BI solutions, as well as serving as an institutional steward, protector, and forum for BI best practices.

How does a BISC differ from such kindred concepts as the BI competency center (BICC) and the BI center of excellence (BI COE)? Though it has the same core IT-centric functions (such as building OLAP cubes, deploying data warehouses, and writing ETL scripts) as a BICC or BI COE, the BISC differs in its business-led governance and solutions focus, as explained below.

The BISC, like a sharp business suit, must be cut, trimmed, and tailored to the contours of each organization. Each BISC must, at the very least, fit an organization's specific structure, people, business processes, technology, and especially the BI, data warehousing, and other analytics-relevant infrastructures.

The intersections of these four dimensions — process, people, data, and technology — create multiple BISC scenarios and approaches that information and knowledge management pros must weigh when developing the BISC best suited to their BI efforts. Detailed below are nine scenarios and approaches you must consider when implementing your BISC.

Consideration 1: Strategic Or Operational Objectives? Some organizations deploy BISCs that are purely strategic or advisory in nature. In those organizations, the BISC accepts the role of BI champion, providing subject matter experts and overseeing BI standards, methodologies, and a repository of best practices. When these BISCs take on more operational duties, they become responsible for tasks like the BI project management office (PMO), training, and vendor management. And in the ultimate operational manifestation of the BISC, it can also carry the full spectrum of delivering BI solutions — BI solutions-as-a-service.

Consideration 2: In-house or Outsourced? Enterprises deploying BI will need help from experienced consultants and systems integrators (SIs). This expertise is critical because BI is very much an art and will remain that for the foreseeable future, since it involves engineering a complex set of systems and data to address the changing imperatives of business organizations.

All successful, complete, scalable, "industrial strength" BI solutions require customization, application of best practices, and a significant systems integration effort. Because true best practices do not evolve from implementing two or three BI applications, internal resources with experience in dozens of successful BI implementations are difficult to find. A knowledge of best practices and lessons learned needs to be accumulated across hundreds of BI implementations — a privilege reserved for full-time systems integrators specializing in BI. As a result, most of the more successful BISC organizations include both internal and external staff.

Consideration 3: Virtual Or Physical? Organizations have a choice of leaving their BISC staff within their lines of business (LOBs) or functional departments, or moving them to a centralized physical BISC organization. Since members of a virtual BISC organizational structure have other management or hands-on responsibilities, they may lack BI focus and have to juggle conflicting priorities. Therefore, this type of structure is typically more appropriate for BISCs that are strategic and advisory in nature. On the other hand, a physical, dedicated, and centralized organization is often more appropriate for fully operational BISCs. However, these tend to become just another "cost center" — as any centralized function carries with it the burden of process, methodology, and organizational structure. This implies bureaucracy, red tape, and a lack of flexibility. While such a structure is a must for certain IT functions, like infrastructure, security, and many others, it could be a BI showstopper. The first time IT cannot respond quickly or efficiently enough to a new requirement, a typical BI user will run back to spreadsheets to build a homegrown model, run the analysis, and get the job done. Information and knowledge management pros must determine which BISC structure — virtual or physical — would be most effective within their organizational culture.

Consideration 4: Operational or Analytical in Scope? A BISC for some may focus on addressing the front-end access, presentation, delivery, and visualization requirements of analytic applications. Alternately, others may encompass a wider scope including data warehousing; data integration; data quality; master data management (MDM); and many other analytics-relevant infrastructures, processes, and tools.

Information and knowledge management pros can draw the scope of your BISC narrowly or broadly, and that line may depend greatly on how your company staffs, funds, and organizes these diverse IT groups. How far should a BISC go "upstream" to operational applications to draw that line? For example, is a database trigger implemented in an operational application for changed data capture (CDC) that feeds a DW part of the analytical or operational realm of your BISC's responsibilities? Is the data mart that calculates customer or product profitability and feeds these numbers to downstream operational applications an analytical or an operational data store? Such scope needs to be very well defined and managed to avoid the very real BISC "scope creep."

Consideration 5: Support IT only or All Stakeholders? In especially large, heterogeneous, and siloed organizations, corporate culture and other realities may not make a centralized strategic or operational BISC a practical proposition. However, even in such an environment, it's still possible and often beneficial to centralize BI infrastructure (servers, DW, ETL, and BI tools) and let each individual line of business and functional department manage its own prioritization and BI application development, while leveraging the centralized BI infrastructure. Developers are the ultimate customers of these more narrowly scoped BISCs, and in several real life examples Forrester found that this is a practical limit of how much responsibility a BISC can take on without running into "turf battles." Information and knowledge management pros must determine whether their organizational culture is ready to support BISC beyond BI infrastructure in scope.

Consideration 6: Type of Funding Model? BISC can be treated as a corporate cost center, and all departments across the enterprise can use and benefit from BISC services. The difficulty here is that this approach carries a stigma of "just another IT department/cost center." Furthermore, departments that are not yet set up to take advantage of the BISC will push back on carrying part of the cost burden. A cost allocation model based on the actual usage of BISC services can be fairer, but detailed, activity-based cost allocation models can be tricky to set up, implement, and manage.

Consideration 7: Narrow or Broad Scope? Though a BICC-ish BISC (one modeled on a traditional, technology-centric BI competency center) is certainly possible, it's not preferred. Forrester recommends business leadership and business-led governance orientation, not a technology-centric focus, for the BISC. The same road map principles that apply to the best practices of implementing BI apply to the BISC: strategy first, architecture next, technology last. In its ultimate breadth of scope, BISC could encompass as many as 20 major components, roles, and responsibilities (see Figure 6 in Forrester's complete report), so it's very important to start small and increase the scope slowly.

Consideration 8: Performance Measurement Approach? BISC stakeholders require transparent measurements of the BISC program's success in order to sustain its momentum and funding. BISC leaders must establish a clear set of BISC performance metrics and communicate them regularly. Some BISC performance metrics are obvious and easy to calculate: examples include the number of BI applications delivered and maintained by the BISC, the number of BI users, the number of reports in production and their usage patterns, reduced BI support staff, and reduced BI software and maintenance costs. Other metrics, such as improved information accuracy and turnaround time on BI requests, can be trickier to calculate and monitor. Most leading BI products come with prebuilt applications to monitor and analyze at least some of these metrics. If such out-of-the-box applications are not available from your preferred BI vendor, or if you are using multiple BI platforms, a centralized BI metrics management solution can be architected using products from vendors such as Appfluent Technology and Teleran Technologies.

Consideration 9: Isolated or Aligned With Other Solution Centers? No BI environment is an island from the rest of the data management infrastructure. Just as BI applications touch, depend on, and overlap with many related processes and technologies, BISCs cannot exist in isolation from other competency centers, solutions centers, or centers of excellence. Federation between the BISC and other data management competency centers is a best practice. Many such competency centers have existed in organizations for years, though they may not be recognized as distinct disciplines or organizations. Essentially, any group that defines, approves, and/or enforces standard practices for new projects or initiatives in any of these areas is a competency center.

To realize the full return on investment from BI, your organization's BISC should engage with all or most of the interdependent competency centers. To succeed in your specific organizational environment, your BISC must have clear lines of demarcation, cooperation, and integration with all these other relevant initiatives. Failing to define a clear charter with appropriate collaboration, communication, and change management processes between complementary efforts can be a fatal pitfall in your BISC initiative.

Download the Complete Report This article is based on the Forrester Report "Implementing Your Business Intelligence Solutions Center." The 16-page report includes charts, diagrams and seven recommendations not included in this article. Click here to download the free report.

Our topic this week, and this is the week of Dec. 15, 2008, marks our year-end show. Happy holidays to you all! But, rather than look back at this year in review, because the year changed really dramatically after September, I think it makes a lot more sense to look forward into 2009.

We're going to look at what trends may have changed in 2008, but with an emphasis on the impacts for IT users, and buyers and sellers in the coming year. We're going to ask our distinguished panel of analysts and experts for their predictions for IT in 2009.

To help us gaze into the crystal ball, we're joined by this week's BriefingsDirect Analyst Insights panel. Please let me welcome Jim Kobielus, senior analyst at Forrester Research.

Jim Kobielus: Hi, Dana. Hi, everybody.

*************************

Jim Kobielus, you're up. What are your five predictions?

Kobielus: I need to go home now. You stole all my predictions. Actually, that was great, Dana. I was taking notes, just to make sure that I don't repeat too many of your points unnecessarily, although I do want to steal everything you just said.

My five predictions for 2009 ... I'll start by listing them under a quick phrase and then I'll elaborate very quickly. I don't want to steal everybody else's thunder.

The five broad categories of prediction for 2009 are: Number one, Obama. Number two, cloud. Number three, recession. Number four, GRC -- that's governance, risk, and compliance. Then, number five, social networking.

Let me just start with [U.S. President Elect Barack] Obama. Obviously, we're going to have a new president in 2009. He'll most likely appoint a national chief technology officer or a national tech policy coordinator. Based on his appointments so far, I think Obama is going to choose a heavy hitter who has huge credibility and stature in the IT space.

We've batted around various names, and I'm not going to add more to the mix now. Whoever it is, it's going to be someone who's going to focus on SOA at a national level, in terms of how we, as a country, can take advantage of reuse, agility, transformation, optimization, and all the other benefits that come from SOA properly implemented across different agencies.

So, number one, I think Obama is going to make a major change in how the government deploys IT assets and spends them.

The maturing of clouds

Number two, cloud. Dana went to town on cloud, and I am not going to say much more, beyond the fact that in 2009, clouds are going to become less of a work in progress, in terms of public clouds and private clouds, and become more of a mature reality, in terms of how enterprises acquire functionality, how they acquire applications and platforms.

I break out the cloud developments in 2009 into a long alliterative list. Clouds will start up in greater numbers. They will stratify, which means that vendors like Google, Microsoft, and Amazon, with their cloud offerings, will build full stacks, strata, into their cloud services that include all the appropriate layers: application components, integration services, and platforms. So the industry will converge on more of a reference model for cloud in 2009.

They'll also stabilize. In other words, the clouds will become more mature and stable, and less scary for corporate IT to move applications and data to. And they'll standardize, around SOA and WOA standards: there will be more standard interfaces and application programming interfaces (APIs) focused on cloud computing, so you can move your applications and data from one cloud to another a bit more seamlessly than you can now with the proprietary clouds that are out there. And there are other "S" items that I won't share here.

Number three, recession. Clearly, we are in a deep funk, and it might get a lot worse before it gets better. That's clearly hammering all IT budgets everywhere. So, as Dana said, every user and every organization is going to look for opportunities to save money on their IT budgets.

They're going to put a freeze on projects. They're going to delay or cancel upgrades. Their users, as you said very nicely, Dana, are going to dip into petty cash and go around IT to get what they need. They're going to go to cloud offerings. So, the recession will hammer the entire IT industry and all budgets.

As far as GRC goes, government is cracking down. If it has to bail out the financial services industry, the auto industry, and other industries, the government is not going to do it with no strings attached.

Compliance, regulations, reporting requirements, the whole apparatus of GRC will be brought to bear on the industries that the government is saving and bailing out.

Then finally, social networking. Dana provided a very good discussion of how social networking will pervade everything in terms of applications and services.

The Obama campaign set the stage clearly for more WOA-style, Web 2.0, or social-networking style governance in this country and other countries. So, we'll see more uptake of social networking.

We'll see more BI take on a social networking character, in the sense of the mashup as a style of BI application, reporting, dashboarding, and development. Mashups for user self-service BI development will come to the fore. It will be a huge theme in the BI space in 2009 and beyond.

That really plays into the whole cost-control theme, which is that IT will be severely constrained in terms of budget and manpower. They're going to push more of the development work to the end user. End users will build reports that heretofore they relied on data modelers to build for them. Those are my five.

*************************

Gardner: We're just about out of time. Let's go quickly down our list for any last synthesis insights.

Jim Kobielus, senior analyst at Forrester Research, thanks for joining. What's your synthesis of what you have heard?

Kobielus: My synthesis is that we are living in a very turbulent and volatile time in the industry. Things are changing on many levels simultaneously, and a lot of it will just be hammered by the recession. Approaches like cloud, social networking, and everything will be driven by the need to cut cost and to survive through fiscal austerity for an indefinite period.

Wednesday, January 14, 2009

In the crowded marketplace of ideas, everybody's always trying to differentiate themselves. Declaring something "dead," "obsolete," "outmoded," "tired," "passé," or "so last year" is such a clichéd look-at-me technique that I tend not to give it too much credence. Yeah, I, like other analysts, am inclined to do it now and then, but you should interpret this sort of pronouncement as just part of the news cycle, to be taken with the usual coarse or fine grains of salt.

That said, here's my take. First off, I define service-oriented architecture (SOA) as an architectural paradigm: one that focuses on maximizing the sharing, reuse, and interoperability of corporate resources over distributed fabrics. In other words, it refers to an approach with a clear set of goals in efficiency, standardization, cost control, agility, and so forth. That paradigm and those goals/benefits are certainly not dead. Perhaps what is dead is the notion that this utopia can be realized purely over a Web services environment built on XML, SOAP, WSDL, WS-*, etc. Clearly, cloud computing, virtualization, Web 2.0, mashups, REST, social networking, and so forth show that a great many services—in fact, most new services—are not riding an "ESB" built on those interfaces and standards.

Just as important, life-cycle SOA governance—aka "service governance"—as a set of emerging best practices, is certainly not dead. In fact, it's more relevant than ever, though few enterprises have mastered it throughout the service life cycle (design time, runtime, etc.) and across all platforms, apps, and services. Moreover, service governance is getting much more challenging in a cloud-oriented environment, where literally EVERYTHING—from app components down through integration and hardware infrastructure (CPU, storage, etc.)—is, or potentially is, a service. And, as more enterprise app/integration/hardware services are outsourced to public clouds, governance will get ever trickier, both in terms of negotiating service contracts and in terms of setting up the requisite public/private service-federation relationships and infrastructure.

Service governance in the cloud is terra incognita. The cloud providers, cloud management tool providers, and their customers are all groping for a common set of approaches and, to some degree, trying to square it all with established SOA governance best practices. But how do you wrap controls around every atom in a billowing universe? Can we think—and tailor our governance—around that many dimensions without exploding from the sheer nebulous complexity of it all?

In every era, I’m tracking what’s being born, not what’s on its last legs. Our architectural orientation toward services is what SOA begot, and has bequeathed to this new age of the cloud.

Monday, January 12, 2009

Coral8 Webcast with Featured Guest Analyst from Forrester Research Explores Continuous Intelligence for Optimization of Cross-Channel Customer Interactions

Experts Share How Companies Can More Effectively Interact with Customers to Drive More Revenue and Raise Customer Loyalty --(BUSINESS WIRE)--Coral8, Inc.:

What: During tough economic times, it's vital for companies to capitalize on the growing online marketplace and optimize customer interactions. Coral8 is hosting a Webcast focused on innovative new technologies to help companies foster customer loyalty and drive more business in 2009.

Forrester Research estimates that U.S. online retail commerce will reach $335 billion in 2012, comprising 11% of the total market. Forrester also reports that the web will influence well over $1 trillion of in-store sales. Such market influences are leading many organizations to closely examine and optimize how they interact with customers across all channels.

Continuous Intelligence™ provides the platform, tools and techniques that allow organizations to constantly monitor all customer interaction channels, maintain a comprehensive “live” analytic profile of each customer, and drive immediate, personalized offers and service actions at the right time across all channels. This webcast will explain how companies can utilize rich untapped interaction sources – web, call center, ATMs, kiosks and more – to read and react to customer needs to drive faster purchasing decisions, cross-sell and up-sell products or services, and increase customer loyalty.

Join Coral8 for an interactive Webcast with featured guest James Kobielus, senior analyst at Forrester Research, and John Morrell, vice president of product marketing at Coral8, Inc., where they will explore:

• What is Continuous Intelligence, and where does it apply to the business?
• How can Continuous Intelligence and advanced analytics provide an evolutionary step toward true real-time situation awareness and response?
• What are the emerging best practices for applying CEP in the context of the enterprise data warehouse?
• How can CEP-driven Continuous Intelligence offer faster, richer information to influence and optimize customer actions across channels?

About Coral8, Inc. Based in Mountain View, Calif., Coral8, Inc. is a leading provider of Complex Event Processing (CEP) software and Continuous Intelligence™ solutions. Bringing together high-performance, innovative SQL-based programming and modeling and enterprise-class scalability and availability, Coral8 Engine™ is the fastest, most economical way to deploy powerful, sophisticated Continuous Intelligence™ that drives faster decisions and actions that positively impact revenue, customer service and operational efficiency. Coral8 is speeding the delivery of critical business information for customers worldwide, including Fortune 500 companies and global leaders within financial services, e-commerce, telecommunications, transportation, government and other rapidly growing vertical applications. For more information, visit www.coral8.com or call (650) 210-3810.

BI moves into the cloud. Enterprises of all sizes will adopt hosted, subscription-based services in greater numbers to supplement or, increasingly, replace their premises-based BI platforms. In a soft economy, any on-demand, pay-as-you-go offering becomes more attractive across all customer segments. Just as important, the increasing scalability, performance, flexibility, and availability demands on the enterprise BI infrastructure are spurring many users to consider outsourced offerings.

BI adopting Web 2.0 development paradigm. Mashups will move into mainstream BI practice as budget-stressed organizations push more development to users through self-service tools. The chief enablers for this new paradigm are the growing range of commercial, in-memory, BI-integrated mashup tools that let power users develop rich reports, dashboards, and analytic applications on the fly from within their browsers and spreadsheets. Data modelers and other traditional BI developers will supervise governance of user-generated BI mashups.

BI growing more federated. Enterprises will turn to federated data environments to support operational BI across stubbornly decentralized information silos that are scattered throughout their service-oriented architectures (SOAs). To respond to this growing requirement, IT organizations will supplement their enterprise data warehouses by beefing up their enterprise information integration middleware and semantic virtualization layers.

BI evolving into advanced analytic applications. Enterprises have substantially completed their adoption of core BI, enterprise data warehouse, and enterprise content management platforms and will increasingly turn to powerful predictive analytics, data mining, statistical analysis, and text analytics tools to leverage that information for business optimization. One consequence of this trend will be the growing adoption of in-database analytics techniques, under which users will process these compute- and data-intensive functions inside the enterprise data warehouse, taking advantage of that platform's massively parallel processing.
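
As a toy illustration of the in-database analytics idea, and not of any specific vendor's implementation, the following hypothetical sketch contrasts pulling raw rows into the client with pushing the aggregation into the database engine; SQLite stands in for an MPP warehouse, and the table and data are invented.

```python
import sqlite3

# Invented sample data in an in-memory database standing in for a warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, revenue REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 100.0), ("east", 300.0), ("west", 200.0)])

# Client-side approach: ship every matching row to the client, then aggregate.
rows = conn.execute(
    "SELECT revenue FROM sales WHERE region = 'east'").fetchall()
client_side = sum(r[0] for r in rows) / len(rows)

# In-database approach: the engine aggregates; only one number leaves it.
(in_db,) = conn.execute(
    "SELECT AVG(revenue) FROM sales WHERE region = 'east'").fetchone()
print(client_side, in_db)  # 200.0 200.0
```

On a real MPP warehouse, the second form lets the engine parallelize the scan across nodes and ship back a single result, which is the core appeal of pushing analytics into the database.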

About Me

James Kobielus is IBM's Big Data Evangelist. He is an industry veteran who spearheads IBM's thought leadership activities in big data, data science, enterprise data warehousing, advanced analytics, Hadoop, business intelligence, data management, and next best action technologies. He works with IBM's product management and marketing teams across the big data analytics portfolio. Prior to joining IBM, he was a leading industry analyst with firms including Forrester Research, Current Analysis, and Burton Group. He has spoken at such leading industry events as IBM Information On Demand, IBM Big Data Integration and Governance, Strata, Hadoop Summit, and Forrester Business Process Forum. He has published several business technology books and is a popular provider of original commentary on blogs, podcasts, bylined business/technology press publications, and many social media channels.