The ransomware attack that hit the world on Friday, and that may continue in a new form tomorrow (Monday, May 15, 2017), is not preventable, but the damage might have been far less if those in charge of institutional computer networks had done their jobs properly.

This malware, which was reportedly built from an exploit stolen from the U.S. National Security Agency, attacks a vulnerability in the no-longer-supported Microsoft Windows XP operating system (O/S). Even though Microsoft offers a patch for the vulnerability, Microsoft has little or no ability to push that patch to continuing users of an unsupported O/S, and certainly not to the zillions of pirated copies of XP.

Thus, if you are the CIO (Chief Information Officer) or other official in charge of institutional computers, what in the heck are you doing running the XP O/S? And, especially, why aren’t you doing everything possible to protect it while moving at full speed to get off of it?

Here’s what the New York Times reported today (May 14, 2017) about the lack of proactive protection, despite warnings, in Britain’s National Health Service (N.H.S.):

Britain’s defense minister, Michael Fallon, told the BBC on Sunday that the government was spending about 50 million pounds, about $64 million, to improve cybersecurity at the National Health Service, where many computers still run the outdated Windows XP software, which Microsoft had stopped supporting.

A government regulator warned [my emphasis] the N.H.S. last July that updating antiquated hardware and software was “a matter of urgency,” and noted that one hospital had already had to pay £700,000, about $900,000, to repair a breach that began after an employee clicked on a web link in an unsafe email.

“The threat from cyber attacks has not only put patient information at risk of loss or compromise but also jeopardizes access to critical patient record systems by clinicians,” the regulator, the Care Quality Commission, wrote in its report.

There should be consequences for those in charge of these institutional computers. This should have been a far less destructive incident – especially since the attack did not work against the Windows 10 O/S, which has been on the market for almost two years. This should have been a case of “Your System: Not Guilty as Charged.”

Recently I read an article from a consulting firm that guides software selections and advises on implementations. The article was about the top 5 reasons why executives are afraid to undertake ERP and/or Digital Transformation Projects. Reason #1 was executive fear that the project will take too long and cost too much. The article offered these two safeguards:

“Our annual ERP Report shows that most projects either take longer than expected and/or cost more than expected. This statistic is often what drives fear. However, implementing strong project controls and governance is one way to mitigate this risk. Another failsafe way to address this risk is to ensure that you have a comprehensive implementation project plan that includes many of the hidden costs of implementation that most ERP vendors and consultants fail to recognize.” (My emphasis added).

These are great techniques, and absolutely required for project success. But a project with hidden (“built-in”) surprises and problems needs more than those two remedies. The missing critical success factor: a proper “pre-sales” cycle.

I don’t mean a demo, particularly one oriented towards a huge checklist of features that all the top ERP systems will be able to handle; as I’ve written before (more than once), that kind of demo is a waste of everyone’s time and money.

A proper pre-sales cycle allows the time for sufficient “due diligence” in discovering, documenting, and examining the critical aspects of the target company’s business practices and policies, and why these practices are needed for the company’s success. These functions are what needs to be examined in sufficient depth, compared across competing ERP vendors, and demonstrated for usability and true degree of fit.

Simply saying “we perform function XYZ in our business operations” and having a vendor say “yes, we have a feature that allows you to do function XYZ” may not be enough. ERP sales reps are strongly urged to avoid getting into “implementation details” when trying to make a sale, on the grounds that these details will confuse the customer – and slow down the sales cycle. The unfortunate phrase is “Don’t confuse implementation with sales.”

But it is during “sales” that a company has the chance to discover whether what it will need during implementation is actually met by the software. Overlooking or bypassing the detailed examination of a critical functional fit during the sales cycle will mean an extension of time and money through a customization or enhancement during implementation – or perhaps a sub-optimal solution through “workarounds.” Sales and implementations are indeed tied very tightly together. In that old phrase, the “devil is in the details” … and that’s what executives are afraid of.

Insist on a full and proper pre-sales cycle. Don’t take vendor or consultancy reassurances that everything is “covered” when you haven’t seen it yourself – at least to the extent that you are personally confident that your critical features exist and can be implemented.

I had a conversation recently with someone who had a need for a software package to handle a series of transactions similar to those found in a law firm. In fact, the operation he ran was aligned with a legal department that used just that software package, and it appeared to be a perfect “fit” for his needs as well. He just needed to get his own license, installation, and training. So he found some funding within the larger organization for his needs.

But, before he could execute his contract with the software provider, an Information Technology Director interfered with the process. (I purposely did not say an Information Services Director, as you’ll see in a moment.) This IT Director was “pulling rank,” as they say, claiming the funding for his own department, which would “build” a suitable application instead of purchasing the off-the-shelf, good-fit software package.

To me, this seems a textbook case of a system headed straight for “Guilty as Charged.” The amount of money was a small fraction of the realistic cost of developing a system as robust as the desired package. The path of custom (“bespoke”) software is clogged with failed system development projects of all sizes and shapes. On top of that, the Director’s stated reason was that he had staff who needed to be kept busy. Bad idea.

The correct service option for that IT Director would have been to put on a customer-service hat and understand that the software package firm would indeed install the package, convert the data, and provide training to the end users. But an experienced and open-minded Director would also know that packaged software vendors have a limited budget for training, and that end users typically need much more training to fully integrate a new software package into their policies and procedures, even when it provides all the needed functions. The missing piece is a “business analyst” (BA) who is “embedded” and “lives” in that line-of-business operation long enough to accomplish that deep integration of the software package into daily business operations and reporting.

So there are two critical functions needed for the success of the highly functional packaged software offering:

1. Sufficient analyst time to understand and document the business requirements; learn the new package and work with the vendor’s trainers to envision and document how it will be used and what training is needed; and then deliver, reinforce, and adjust the training until the package is fully adopted and successfully integrated into daily use. This will take far longer than any software vendor will be on site (or than you can afford to keep them there).

2. Most packages today have extended features for reporting and analytics that never get deployed, because they are not used on “day one” and you don’t have a business analyst on staff to take on their learning and deployment after the initial go-live. This is the second area where the BA is extremely useful. The BA will work with the “super users” – those most adept with the software – and with managers to understand how to best use the data being collected in the system as information to improve business processes and results.

So, when you’re working with packaged software, and the “IT” department says they can “help” you, ask them to help you with #1 and #2 above. It will transform the relationship into one of service, and everyone will be far more satisfied with the result.

I talked recently with an acquaintance who was working with a software start-up to provide some booking and hospitality software for his own business venture. Overall, he was pretty happy with the software and its low monthly cost. But he wanted the software developer to add some features he needed. And that’s where I want to discuss how to keep a Cloud SaaS (software as a service) application out of the ‘guilty’ zone.

First, let’s all acknowledge that Cloud SaaS offerings are not intended to be the same client-specific application that a licensed, on-premise application provides. That is, you cannot expect to customize the application for your own needs. Configure – yes. Customize – no, because by definition this is the same base software package that every other subscriber is using, especially when you are running in a public, multi-tenant processing environment – that is, you’re sharing the ‘computer’ with other clients. So it’s not practical to let each customer have their own customized version of the underlying code.
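To make the configure-versus-customize distinction concrete, here is a minimal Python sketch. All names here are illustrative and invented, not any vendor’s actual design: every tenant executes the identical code, and only a per-tenant configuration record varies the behavior.

```python
from dataclasses import dataclass

SHARED_CODE_VERSION = "4.2.1"  # the same code base serves every subscriber

@dataclass
class TenantConfig:
    # Per-tenant switches: this is "configure," not "customize."
    currency: str = "USD"
    booking_cutoff_minutes: int = 60
    enable_loyalty: bool = False

def effective_cutoff(config: TenantConfig) -> int:
    # One shared function for all tenants; only the configuration differs.
    return config.booking_cutoff_minutes

tenant_a = TenantConfig()                                    # default settings
tenant_b = TenantConfig(booking_cutoff_minutes=30, enable_loyalty=True)
```

The point of the sketch: tenant B gets different behavior (a 30-minute cutoff, loyalty enabled) without anyone touching `effective_cutoff` – which is exactly why a multi-tenant vendor can serve thousands of subscribers from one code line.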

So, again by definition, your needs will match the needs of others in terms of what the software offers. And, by definition, while these may be best-of-breed capabilities, they are not a unique competitive advantage for you. They are a potential competitive advantage over those who don’t have software this good. Your challenge is to skillfully utilize the existing features to best support your business model, perhaps exploiting them in a way that others do not.

Now, if you “need” or want a new feature not already in this SaaS package, you’ll need to “ask” for it via whatever enhancement channels your vendor offers. In a mature package, there will be user groups, and other ways to place your request, and lobby others to support it. At some critical mass of “ask”, the developer will add “your” feature, and everyone else will also get it.

In a start-up company, there will be many, many such requests, all pending at the same time – and all competing with the company’s own grand design for the product. Yes, there are lots of software engineers at work, but they’re already busy with features prioritized ahead of your new request. At this point, the software is “not guilty as charged,” because it isn’t doing anything wrong, nor failing to do something it was designed to do.

But you’re going to feel that it’s “guilty as charged” if your feature doesn’t get in there, and that feature is holding back some aspect of your business. So get ready to make a case to someone higher up the chain than your software sales rep that this feature is important to “everyone” (or a large proportion of the clients) who use this SaaS application. Make a real business case – what the feature is, why it’s needed not just by you but by most clients or all clients, the pain or penalty from not having it. Especially focus on how it might be incorporated as a “configuration” offering or switch, so that those who do not want it can be excused from having to deal with it. And, is there any more “business” that you, the SaaS application customer, could give to the software developer? Perhaps a contract extension? Perhaps purchase a related product or service?

My friend had software that handled bookings for an event that took place on the hour, but only when demanded (that is, booked). And the software had a cut-off feature so that a “late” booking could be refused if made within a certain window of time. While this was good, my friend had the desire to accept a “blue bird” booking – one that was unplanned and within the cut-off window – when it was profitable to do so, that is, when his facility was already in use before and after this open time slot.

One way to configure this enhancement – a way that would definitely make the system “not guilty” – would be for the software developer to add a configuration switch that allows a booking to be made within the window, in a provisional status, with a fixed expiration time. This is similar to booking a ticket to a theater event, where your seat selection or ticket request is valid for a short period, such as ten minutes, and you see a timer clock running on the screen. This short-lead-time booking would be provisional while a message was sent to my friend’s business to accept or reject the booking before the timer ran out. Since the short-lead-time booking is clearly an impulse purchase, the timer must be short, and my friend’s business must be alert and ready to respond. Now the SaaS offering has a new feature that does not upset existing customers, yet may be very helpful to those customers who had the same requirement and worked around it.
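The provisional-booking switch can be sketched in a few lines of Python. This is a hypothetical illustration of the idea, not the actual SaaS product’s design; the class, field names, and ten-minute timer are all invented.

```python
from dataclasses import dataclass

HOLD_SECONDS = 600  # hypothetical ten-minute timer, like a theater seat hold

@dataclass
class BookingHold:
    slot: str
    created_at: float            # seconds on some shared clock
    status: str = "PROVISIONAL"

    def decide(self, accept: bool, now: float) -> str:
        # The business must accept or reject before the timer runs out.
        if now - self.created_at > HOLD_SECONDS:
            self.status = "EXPIRED"
        elif accept:
            self.status = "CONFIRMED"
        else:
            self.status = "REJECTED"
        return self.status

prompt_hold = BookingHold(slot="14:00", created_at=0.0)
prompt_hold.decide(accept=True, now=100.0)   # answered inside the window
late_hold = BookingHold(slot="15:00", created_at=0.0)
late_hold.decide(accept=True, now=700.0)     # timer already ran out
```

Because an unanswered hold simply expires, existing customers who never enable the switch see no change in behavior – which is the key to getting such a feature accepted by the vendor.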

Another, simpler configuration change might be to trigger a message for each denied booking. The message goes to the business and includes a way to contact the would-be booker. A mostly manual workaround would be to change the error message in the booking denial to include a phone number to call to manually request a short-lead-time booking.

The Wall Street Journal’s “CIO Journal” today has an article on why Cisco is buying AppDynamics just before it went public. The bottom line is that the hybrid Cloud – the predominant model today – makes it harder to manage the overall efficiency of having applications in multiple clouds, and harder to fix problems when they arise. Here’s the link, and some more thoughts below that.

I’ve said for a long time that if you are based around a packaged enterprise software offering, such as an ERP system like JD Edwards, Oracle E-business Suite, or SAP, then your system problems are much more likely to be something of your own doing, that is “your system” is not likely to be the guilty party in and of itself.

I’ve also said a number of times that custom (or “bespoke”) software is likely to be the reverse, simply because of the complexity of developing one-off software, the disconnect that so often occurs between vision, requirements, design, and final delivery, and the errors introduced all along the way. Back in the 1980s I spoke at software testing conferences about the need to find errors early in the design process, not in the final software.

And now, today, with the easy availability of so many applications “in the Cloud” – with platforms as a service, databases as a service, and infrastructures as a service – an organization can be running on “stacks” of offerings from many parties. Those applications may need to talk to each other. The underlying data centers (in the cloud) may be diverse. In short, the point of the article is that you need a way to track all of that.

As a suggestion, consider focusing on a single “stack” – for example, get all of your Cloud from a single source, whether that be Oracle, Microsoft/Azure, SalesForce.com, or another provider. Consider building a “sandwich” where:

The bottom layer is the Cloud platforms and services from one stack, such as Oracle infrastructure as a service, with an Oracle partner doing managed services, and using Oracle platforms as a service for application integrations, mobility, internet of things connection, etc.

The middle layer, or “meat”, is your core ERP application, such as JD Edwards and any related mission critical applications, such as a chargeback-rebate program, or a crop inspection program, etc.

The top sandwich layer is the Cloud-based Software as a Service (SaaS) applications that you want in order to augment your ERP, such as HCM, CRM, hospitality, eCommerce, supply chain planning, etc.

By focusing on a single provider “stack,” you will at least have some assurance that the applications are working on the same integration platform, and that the lower layers – infrastructure, platform, and perhaps database services – will work well, or at least better, with the upper-layer SaaS applications and your core (licensed) applications. Those core applications run in a managed-services Cloud facility that your in-house staff and your contracted managed-services team understand well and know how to fine-tune for optimum performance.

Maintenance supply inventories are often kept in a parts crib – a separate stockroom that handles only these items. Some parts cribs are unattended. In that case, how can the maintenance team (also known as the capital asset management group) know that all consumption of parts is recorded, and that the correct levels of inventory will always be on hand? The answer lies in part with vendor managed inventory (“VMI”) and in part with the Internet of Things (IoT).

To avoid having maintenance workers simply take parts, many small parts are stored in vending machines, similar to soft drink vending machines or the “traveler’s needs” machines seen in airports. A worker comes up to a machine, selects the part they need, such as a saw blade, and uses a keypad to enter the number of the maintenance work order to which the issued quantity will be charged. The machine then dispenses the part and records the issue. This satisfies the Accounting department, and it is also a first step in automating the replenishment process.
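The dispense-and-record flow amounts to a stock check, a decrement, and an issue record. Here is a minimal Python sketch; it is an illustrative model only, with invented part names and record shapes, not any vending vendor’s real interface.

```python
# Minimal sketch of an unattended dispense: the worker keys in a work
# order, the machine checks stock, dispenses, and records the issue.
stock = {"saw_blade": 5}
issue_log = []   # the record that Accounting (and replenishment) relies on

def dispense(part: str, work_order: str, qty: int = 1) -> bool:
    if stock.get(part, 0) < qty:
        return False                      # stock-out: nothing dispensed
    stock[part] -= qty
    issue_log.append({"part": part, "work_order": work_order, "qty": qty})
    return True

dispensed = dispense("saw_blade", "WO-1001")
```

Note that the issue record carries the work order number keyed in by the worker – that single field is what lets Accounting charge the consumption to the right job.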

Of course, a key consideration is that there should never be a “stock-out” when a worker needs a part; that’s where vendor managed inventory comes into play. In the simplest scenario, the parts supplier (vendor) regularly sends someone to check on the machines and physically resupply all parts up to the maximum (or other agreed-upon level) that should be in the machine. These parts are “consigned” to the machine, and therefore to the customer operating the parts crib. Only when the parts are issued, and charged to a work order, does the customer pay the vendor for the parts used.

In a planned preventative maintenance situation, the needed part will likely be on a parts list attached to the work order. In a break-and-fix situation, the needed part will not be on the parts list ahead of time, but still needs to be issued and charged to the work order. If the worker is somehow trying to get the part without a work order being generated, there still needs to be a mechanism, such as a departmental “charge card” or a standing work order, that can be used to record the issue.

Since part of the cost of a part reflects the paperwork required to buy, pay for, and replenish the inventory in a VMI situation, we want to do everything possible to reduce this paperwork, and thus the “friction” in the transaction. The less non-value-added effort, the lower the potential purchase price of the part after negotiation.

The first step in reducing this paperwork is to have the vending machine “write” an issue transaction to the maintenance or ERP system using common interface protocols. In the case of one ERP system (Oracle’s JD Edwards EnterpriseOne), a flag on the item master is set to indicate that a part can be issued, and immediately, automatically “received” against an open purchase order. This is a simultaneous issue-and-receive situation that lowers processing time and cost by the consuming organization. Since that purchase order is likely set to be for a year’s time period, it also lowers the cost for the vendor.
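The simultaneous issue-and-receive step might look like the following sketch. The flag, record layouts, and part numbers here are invented for illustration; they are not JD Edwards’ actual tables or field names.

```python
# Hypothetical sketch: when a flagged part is issued, a receipt is
# automatically written against the open blanket purchase order in the
# same step -- one transaction instead of two.
item_master = {"saw_blade": {"auto_receive": True, "unit_cost": 12.50}}
open_po = {"PO-777": {"part": "saw_blade", "received_qty": 0}}
receipts = []

def issue_and_receive(part: str, po: str, qty: int) -> dict:
    txn = {"part": part, "issued": qty, "received": 0}
    if item_master[part]["auto_receive"]:
        open_po[po]["received_qty"] += qty    # receipt in the same step
        txn["received"] = qty
        receipts.append({"po": po, "part": part, "qty": qty,
                         "amount": qty * item_master[part]["unit_cost"]})
    return txn

txn = issue_and_receive("saw_blade", "PO-777", 2)
```

The `receipts` list produced here is exactly the input the nightly evaluated-receipts run (described below) would consume.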

Finally, a function available in some ERP systems called “evaluated receipts” is run nightly that processes these issue-and-receive transactions into outstanding Accounts Payable transactions that are approved and ready to pay. Using wire transfer payment methods, these automatically approved transactions are paid without further intervention. So there’s a reduction of handling on the part of the consuming organization, and less handling, as well as automatic payment, to the vendor.
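A nightly evaluated-receipts run can be sketched as a batch that turns receipts directly into approved payables, with no supplier invoice to match. Again, the data shapes and amounts are illustrative only, not a real ERP schema.

```python
# Hypothetical sketch of an "evaluated receipts" nightly batch: each
# issue-and-receive transaction becomes an approved Accounts Payable
# voucher, skipping the usual invoice-matching step entirely.
receipts = [
    {"po": "PO-777", "part": "saw_blade", "qty": 2, "unit_cost": 12.50},
    {"po": "PO-777", "part": "drill_bit", "qty": 1, "unit_cost": 8.00},
]

def evaluated_receipts_run(receipts: list) -> list:
    payables = []
    for r in receipts:
        payables.append({
            "po": r["po"],
            "amount": r["qty"] * r["unit_cost"],
            "status": "APPROVED_FOR_PAYMENT",   # no invoice matching needed
        })
    return payables

payables = evaluated_receipts_run(receipts)
total_due = sum(p["amount"] for p in payables)
```

From here, a wire-transfer payment run can pick up everything marked approved, which is what removes the remaining manual handling on both sides.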

The result: less effort for everyone involved, and a chance to negotiate a lower win-win cost on the part. I would suggest that 1% is a reasonable cost reduction here, based on my own experience that even periodic requests for quotation can yield 2-3% cost reductions.

Taking this one step further, the supplying vendor has a notification that a part has been consumed. If it can track the “paper inventory” left in the vending machine, it can become more accurate in resupplying the machines, with less on-site effort. For the consuming maintenance organization, this improves the chance of eliminating a stock-out, and it is also a one-time cash-flow improvement, since replenishment purchases are made one unit at a time rather than in a larger reorder-point-driven replenishment order. I would suggest this could result in a one-time, 30-day cash-flow savings.

And the lower chance of a stock-out can mean valuable additional days of “up-time” and production, and perhaps improved safety, for the consuming organization.

And … There’s the intangible benefit of this being “easy” for everyone involved. It does involve a true trust relationship between supplier and consumer that takes time and care to develop.

How does the Internet of Things come into play here? Think of a parking garage with a red or green light over each parking space. That’s driven by a simple laser beam that detects whether a car is in the space. A similar laser beam can be installed in each slot of the vending machine, at the point where the remaining quantity of parts should trigger a reorder signal to the vendor. When the quantity of parts remaining in a particular slot is small enough to let that slot’s beam shine through to its receptor, the machine can transmit a signal to a collector or “orchestrator” that translates the signal into a reorder request for that part from the supplier.
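The slot-level sensor logic amounts to a simple threshold check followed by an outbound message. Here is a hypothetical sketch; the slot names, threshold, and message format are all invented for illustration.

```python
# Hypothetical sketch: the beam is uncovered once the stack of parts
# drops to the reorder point, and the machine emits a reorder signal
# to the collector/orchestrator.
REORDER_POINT = {"slot_A": 3}   # beam reaches its receptor at/below this qty

def beam_uncovered(slot: str, qty_remaining: int) -> bool:
    return qty_remaining <= REORDER_POINT[slot]

def poll_slot(slot: str, qty_remaining: int, outbox: list) -> None:
    if beam_uncovered(slot, qty_remaining):
        outbox.append({"event": "REORDER", "slot": slot})

signals = []
poll_slot("slot_A", 5, signals)   # beam still blocked: no signal
poll_slot("slot_A", 3, signals)   # beam shines through: reorder signal
```

The orchestrator’s job is then just to translate that `REORDER` event into the supplier’s replenishment request format.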

The supplier can receive these signals and dispatch a predetermined reorder or refill quantity to the consuming location. Perhaps the consuming organization will be allowed to perform the refill itself, further lowering the need for the supplier to send personnel on-site and reducing those visits to predetermined “cycle count” checks that ensure the machines and the refills are working properly.

This is very similar to a “kanban” system whereby a predetermined low quantity kicks off a request to a vendor to deliver a predetermined amount of replenishment inventory.

In the case where the supplier has software that performs “outbound inventory management” (as is the case with Oracle’s JD Edwards EnterpriseOne ERP), each consignment shipment of inventory is recorded as being held at a particular customer location – even down to a slot in a particular vending machine. Each notice of consumption reduces that inventory, and when the predetermined reorder-point level of inventory is reached, the software initiates a replenishment shipment – even without an IoT transaction.
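That supplier-side bookkeeping can be sketched as follows. The consignment ledger, quantities, and machine names are invented for illustration; this is not JD Edwards’ actual outbound inventory management schema.

```python
# Hypothetical sketch: the supplier tracks consigned quantity per machine
# slot, decrements it on each consumption notice, and cuts a replenishment
# shipment when the reorder point is hit -- no IoT message required.
consigned = {("machine_7", "slot_A"): 10}
REORDER_POINT = 3
REFILL_QTY = 7
shipments = []

def record_consumption(machine: str, slot: str, qty: int) -> None:
    key = (machine, slot)
    consigned[key] -= qty
    if consigned[key] <= REORDER_POINT:
        shipments.append({"to": machine, "slot": slot, "qty": REFILL_QTY})
        consigned[key] += REFILL_QTY   # paper inventory after the refill

record_consumption("machine_7", "slot_A", 4)   # 10 -> 6, above reorder point
record_consumption("machine_7", "slot_A", 4)   # 6 -> 2, triggers a refill
```

In effect, the IoT beam and this “paper inventory” ledger are two independent ways of producing the same replenishment trigger, which is why either can drive the kanban-style loop described above.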

Another variation on this system is where the vending machines are unlocked, but the entire parts crib requires a secure entry and exit. In that case, each part has an RFID tag, and the worker cannot exit the secure area without entering a work order number that will be automatically related to the RFID of the part or parts attempting an exit with the worker. This further reduces the effort involved, and the consuming organization can perform all the parts bin refills when the replenishment quantities show up on their dock.
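The secure-exit check can be sketched in a few lines. The tag IDs, part names, and records here are invented for illustration only.

```python
# Hypothetical sketch: every RFID tag leaving the crib must be tied to a
# work order before the door releases; the issue is recorded automatically.
tag_to_part = {"RFID-001": "saw_blade", "RFID-002": "bearing"}
exit_log = []

def attempt_exit(tags_detected, work_order=None):
    if tags_detected and work_order is None:
        return False                      # door stays locked
    for tag in tags_detected:
        exit_log.append({"part": tag_to_part[tag], "work_order": work_order})
    return True

blocked = attempt_exit(["RFID-001"])                        # no work order
allowed = attempt_exit(["RFID-001", "RFID-002"], "WO-2002")
```

The design choice worth noting: the gate, not the worker, creates the issue records, so multiple parts carried out in one trip are all charged to the same keyed-in work order.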

In short, this is a good example of how a system can do more than you thought it could, and if you’ve been complaining about your system, it’s another chance to say your system is “Not Guilty As Charged.”

Note: A presentation on this topic was made recently at the Quest Direct INFOCUS 2016 conference in Denver by myself and Scott Hollowell, CEO and Director of Services of Asset Management Systems, LLC (info@amseam.com). We hope to reprise this talk at Quest’s Collaborate 2017 conference in Las Vegas in April of 2017. The presentation is here

I am an Advisor to the Board of NVIEM, where Kiran Garimella invented the neural-visual engine that powers iKnowCentral.com, a nifty new way to digitally curate useful information for both public and private consumption. Examples include additional digital channels for your website and blogs; private curation of ‘tribal knowledge’ around sales and marketing assets; and, in a private instance branded as “Consultant Advantage,” the ability to map out business process flows (instead of using Visio) to document the “WHY” of implementation configuration and enhancement decisions. This can be very powerful as time goes by and “everyone forgets” just “why” a business processing option was selected, or “why” a user-defined master data element (such as a reporting code) was chosen for a specific purpose – e.g., a geographic meaning, or an obscure coding useful in a commissions or analytics report.

Kiran’s presentation – available by clicking the heading of the following node in iKnowCentral – was recently ranked in the Top 50 resources for getting the most out of your data analytics.

I was going through the (relatively) new “iKnowCentral.com” today, and experimented with a new feature – a hyperlink to another web site, stored within an “ICY” or filtered display of the iKnowCentral nodes.

I used the iKnowCentral ‘search’ bar to search for a particular set of nodes, called “NGDATA’s Business Analytics Curation” – this is a curation of business intelligence and analytics data put into iKnowCentral.com by Kiran Garimella (CTO) and Dan Conway (Data science advisor). The new features were:

All the child and grandchild nodes of this curation were not only there in the usual iKnowCentral display, but also in an interactive grid known as the “ICY” display. So by clicking on one of those children, or directly clicking on a grandchild in the ICY display, it enlarged itself, and became a temporary bookmark while I clicked elsewhere.

But Kiran had “hyperlinked” the subsequent destination link from the grandchild node directly inside the ICY display. Thus, it saved me from navigating the usual node-child-grandchild path to reach the hyperlink – to an outside website – and took me there immediately upon my first click in the ICY filter display. The result was this interesting article on business intelligence versus business analytics.

If you want to know more about how NVIEM’s iKnowCentral and the application branded “Consultant Advantage” can help you, please write to me, or get your own login to iKnowCentral with this link: https://iknowcentral.com/?n=JkdPUVkF7c3X96U and then follow the “sign-up” link in the upper right corner; sign-ups are through LinkedIn.

I’ve recently become an Advisor to the Board of NVIEM, the leader in neural-visual enterprise management with applications for digital curation of tribal knowledge, and the corralling of loose information.

Consider how useful it would be to visually navigate through the tribal knowledge that has built up inside a consulting firm:

Which consultants had which particular skills applied to what projects and clients?

Which projects and clients proved – or disproved – a particular expertise that the firm claims to have?

Where are the best presentations, demonstrations, white papers, and other ‘assets’ stored within the various document repositories of the firm?

In a particular project, or software implementation, all key decisions are usually documented, through project or meeting notes, as to the outcome and the decision to proceed forward. But where is the documentation about how or why those decisions were made? That information is very useful later on when updates happen, when circumstances change and the previous course doesn’t seem right anymore, when staff and consultants change and new people – without tribal knowledge – come on board. All of that loose information, that tribal knowledge, would benefit greatly from being “digitally curated.”

A curation is like a museum – where are the paintings? The classical paintings? The classical paintings of a given period? or of a given artist? or of a theme? Or like being in a library – Where are the books of fiction? The mystery novels? The historical mystery novels? The historical, romantic, mystery novels?

You can apply this idea of digital curation to communities, or ‘birds of a feather’ as well. Consider a community of firms that all partner with the same computer software developer, perhaps Oracle’s JD Edwards partner community in which I have been involved for many years. Each partner has their own web site. Oracle has a “Partner Network” website. Yet it is extremely hard, if not impossible, for a JD Edwards customer to identify which partner might have experience with pharmaceutical manufacturing cost accounting.

Consider solving that problem. A partner website might note experience with the life science industry; another place on the site might note experience with manufacturing or accounting problems. But if this is a driving factor in attracting new business to the partner, it is still not easy to identify the intersection of these capabilities and experiences. And you would not want to add layers and layers of such detail to your own website, if only because it would be exposed to your competitors as well as to potential customers.

But with iKnowCentral, this JD Edwards partner could own a top-level node within the overall JD Edwards community of iKnowCentral.com. And the partner’s primary child nodes could mimic their own website, building another digital channel – at very low cost – to drive traffic to their website. But beyond that, within “iKC,” the partner could freely and easily add child nodes that wave a digital flag saying “Hey! I have pharmaceutical manufacturing cost accounting experience!” – as well as any other ‘rifle shot’ experiences they’d like to promote. Landing on that node, the prospective visitor could be presented with some information (“digital assets”) suitable for public consumption, and would then be asked to ‘register’ and become ‘known’ to the partner firm before being invited into more secure nodes with deeper information about this cost accounting expertise. Even deeper, more secure nodes could be available to the partner’s own employees – project managers, solution architects, client executives – revealing more intimate details about those pharmaceutical manufacturing cost accounting experiences: client names, real-life details – in short, tribal knowledge, complete with links to private presentations, stored documents, and the like.

The best part is that, along the way, each node can contain digital advertising that earns the partner money from their visitors, thus paying for the entire experience.

More later, in future posts, about how this same iKnowCentral could transform the gathering of loose information and the sharing of tribal knowledge for a non-profit, especially one with a central or head office, many regional or state councils, and hundreds or thousands of local affiliates or chapters. And, how iKnowCentral could help government/municipal/regional agencies or services to group their digital assets and make them more friendly and accessible to end users trying to find information about available services.

And in one more post to come very soon, a discussion of how the neural-visual engine behind iKnowCentral can be pointed at a particular company’s own projects and implementations to document “why” decisions were made, using the very documentation that likely already exists. We call this “Consultants Advantage.”

With grateful thanks for sparkling input from former Oracle colleagues Cindy Sayers, Mark Nix, and Leanne Harper. This is an answer I provided in an Executive Forum discussion recently at ExecRank.com, as a follow-up to their Software and Internet Advisory Council meeting.

In short, it’s no different from on-premise software – the customer owns the integration problem. Software from different vendors does not usually share the same “Lego” blocks of integration, even when “open APIs” and “web services” are enabled. Only when those services and APIs use the same infrastructure do you have a chance that the vendor has made a successful integration, and even then, you need to consider whether that integration solves your particular needs.

In an integrated Cloud ERP suite, all the applications from that one vendor should be talking to each other, the same as with on-premise application suites such as ERP. If you want to change those integrations, the vendor should offer integration tools and services you can use. If this is a Cloud suite, then you’ll likely need a Platform as a Service (PaaS) offering that contains those tools. Recently I’ve begun hearing about “Integration as a Service” offerings intended to connect a vendor’s Cloud and on-premise applications, and even third-party applications that use the same integration technologies.

In the end, it will probably come down to the customer, who owns the integration. Simple or point-to-point integrations offered by vendors are usually not complete (that is, they don’t cover all the required integration points), or don’t move all the data a customer requires between the applications (only a subset of the data has been integrated), or they simply don’t work the way the customer wants them to work. So the customer will do it themselves, or hire consultants to write the integration.

Anytime you have multiple vendors, such as a Cloud situation where different applications come from different vendors (or the same scenario on-premise), vendor-supplied integrations might work in a perfect world. But you still have the issue of different update schedules from different vendors, and whether one vendor’s update is validated against every other vendor’s then-current releases (and there are often several current releases from every vendor whose integrations need to be maintained). The multi-to-multi puzzle is simply not going to be cost-effective for any one vendor to maintain. Recently I noticed that my Quicken was no longer accepting certain types of downloads from banks, or importing certain file types.
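The scale of that multi-to-multi puzzle is easy to quantify. With N vendors, there are N × (N − 1) / 2 vendor pairs, and if each vendor must keep R current releases validated, each pair multiplies out by R² release combinations. A quick back-of-the-envelope calculation (the vendor and release counts are illustrative, not drawn from any real landscape):

```python
from math import comb

def integration_pairs(num_vendors: int, releases_per_vendor: int) -> int:
    """Pairwise integrations to validate: every vendor pair, crossed with
    every combination of each vendor's currently supported releases."""
    vendor_pairs = comb(num_vendors, 2)            # N * (N - 1) / 2
    return vendor_pairs * releases_per_vendor ** 2

# Even a modest landscape explodes quickly:
integration_pairs(5, 3)    # 10 vendor pairs x 9 release combos = 90
integration_pairs(10, 3)   # 45 vendor pairs x 9 release combos = 405
```

Doubling the number of vendors roughly quadruples the validation burden, which is why no single vendor volunteers to own it.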

And, if you did get the integration to work, you’d find you have a master data management problem: who owns the ‘real’ customer, vendor, item, or employee master file when multiple cloud (or on-premise) software products each contain their own version of one or more of those files? Again, more integration work, this time likely requiring “MDM” tools as well.
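A minimal illustration of that master-data problem: two systems each hold their own version of the same customer, and something has to decide which attributes survive into the “golden” record. The survivorship rule below (highest-precedence source wins per attribute) and the sample records are invented for illustration; real MDM tools apply far richer matching and merge rules.

```python
def merge_masters(records: dict, precedence: list) -> dict:
    """Naive survivorship rule: for each attribute, take the value from the
    highest-precedence source system that has one (hypothetical MDM logic)."""
    golden = {}
    for system in precedence:                 # e.g. ERP wins over CRM
        for key, value in records.get(system, {}).items():
            golden.setdefault(key, value)     # keep the first (highest-ranked) value
    return golden

# Two systems, two views of the 'same' customer
crm = {"name": "Acme Corp", "phone": "555-0100"}
erp = {"name": "Acme Corporation", "tax_id": "12-3456789"}

golden = merge_masters({"crm": crm, "erp": erp}, precedence=["erp", "crm"])
# ERP's name and tax ID survive; CRM contributes the phone number it alone holds
```

Even this toy version shows the core question – which system is authoritative for which attribute – and that question only multiplies across customer, vendor, item, and employee masters.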

And beyond that, customizations and configurations in any of the cloud (or on-premise) solutions make the integrations that much more challenging. Is one system’s “sales order” the same as another’s? After decades of EDI (electronic data interchange), it’s still a challenging world unless one dominant vendor or company imposes its view of the world on everyone else, as Wal-Mart did some years back, or as the top-tier automotive companies did across their entire supply chains.

In the end, if everyone is satisfied with one-size-fits-all “best practices” built into Cloud software, how does one company use technology to differentiate itself from another? If the race to best practice is shortened and made easier, where is the competitive advantage? And how do you implement transformational technology in a Cloud delivery model that gives everyone the same software?