How many times have IT organizations spoken about application rationalization or infrastructure refreshes? Over time, many solutions have been created to solve the issue of the day or to take the business in a different direction. Once created, these applications never die; hence the application rationalization program. And once they are in place, refreshing the infrastructure becomes a challenge, since you don't know whether an existing application will run on the new hardware or operating system version. Yet I have often heard these two discussions held independently, without regard for the impact of one upon the other or, more importantly, the business implications.

What about the strategic planning process and the resulting action plans and allocation of resources? How often are the dependencies between the strategies addressed in order to make the action plans actionable? I have often seen strategies split between different executives to deliver without understanding the interdependencies, often resulting in increased costs, unnecessary issues down the road or complete failure of the strategy.

In my mind, this is one of the differences between projects and programs. Programs ensure that the interdependencies are recognized; that the dominos are set up so they fall down in the right order.

I remember one really effective yearly planning process. We covered a wall with the outline of a Gantt chart, with the strategies and associated 70+ candidate projects down the left side, each with an estimated duration and start date. Then each functional group was given a different color of small post-its. For each project, each function put a post-it in every month where their organization would be required to participate. Talk about being able to see resource contention at a glance! It also generated a lot of questioning and discussion to really understand what each project was trying to accomplish. Some projects were combined as a result, and newly identified dependencies drove changes to start dates and durations. This planning process yielded a set of seven programs, all of which were successfully delivered that year; the right dominos got selected, set up in the right places and were knocked over with precision!
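For those who like to see the exercise as data, the post-it wall boils down to a project-by-function-by-month matrix: wherever a cell stacks up more than one post-it, you have contention. Here is a minimal sketch in Python; the project and function names are invented for illustration.

```python
# A sketch of the post-it wall as data: for each project, the months in
# which each functional group must participate. All names are invented.
from collections import defaultdict

commitments = [
    # (project, function, months required)
    ("CRM upgrade", "Sales Ops", [1, 2, 3]),
    ("CRM upgrade", "IT Apps", [1, 2, 3, 4]),
    ("Billing rewrite", "Finance", [2, 3]),
    ("Billing rewrite", "IT Apps", [2, 3, 4, 5]),
    ("Data warehouse", "IT Apps", [3, 4]),
]

# Count how many projects need each function in each month -- the
# equivalent of several same-colored post-its stacked in one cell.
load = defaultdict(int)
for project, function, months in commitments:
    for month in months:
        load[(function, month)] += 1

# Flag contention: any function needed by more than one project at once.
contention = {key: n for key, n in load.items() if n > 1}
for (function, month), n in sorted(contention.items()):
    print(f"Month {month}: {function} needed by {n} projects")
```

Stacking the counts this way also makes the follow-up conversation concrete: each contended cell is a specific function in a specific month that needs to be negotiated.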

Something similar happened with an infrastructure refresh program. Based on which applications were not meeting the business's performance requirements, a set of applications was identified for refresh (the throw-more-hardware-at-it approach). Matched to this were the outstanding requests against each application, its part in the business process flow and any existing upstream or downstream projects. Each application was then evaluated for refresh, upgrade, replacement or retirement. By setting up the dominos in this order, we were able to create a program that was business driven and accomplished both application rationalization and infrastructure refresh.

This world of big data and analytics has somewhat overshadowed the data foundation that makes organizations run. Even the concepts of Master Data Management (MDM) and Data Governance sometimes mask the base that lies at the heart of doing business. Consider the following two scenarios:

Program 1 – Master Data Management (MDM)

A group of people from each functional area was brought together to determine the data needed to understand their business. Each was concerned that the data they cared most about was included in the data governance and maintenance process. While the group defined the core data elements that would tie all their individual pieces together, their main focus was to ensure that they could get the reporting they wanted to run their function. The end result was a heavily attributed master data hub, surrounded by complex integrations, that was easily accessed and centrally maintained by the data stewards from each function. The data was then supplied (duplicated and/or transformed) to the reporting and analytics engines to deliver on the reporting requirements of each individual function and ultimately the needs of the corporation as a whole. This was more of a metrics/KPI/reporting-based approach.

Program 2 – Critical Data Leverage (CDL)

A group of people from each functional area associated with the end-to-end business processes was brought together to determine the data attributes that drove the behavior of the company's business. They were concerned that their company presented a consistent face to their customers at all times while managing their job functions. These people discussed (argued even) about the data used to segment customers, group offerings, drive the go-to-market strategy, determine geographical differences and so on. The end result was a small data steward team that governed the core data for the company as a whole. They managed the master data hub that integrated that core data to all consuming applications and the reporting and analytics engines. Each of the functions delivered their own reporting and fed the analytics engines to meet the needs of the corporation as a whole. This was more of an operations-based approach.

Each of these approaches delivered the data foundation to tie the functions of the organization together and enable reporting and analytics. I have participated in programs that used each of these approaches, and I believe that critical data leverage comes first and master data management follows. In my experience, the groups who used the critical data leverage approach created a more flexible and scalable foundation for their company to grow on, and met their functional requirements as well. The metrics and KPIs came out of the operational model, as opposed to the KPIs driving the operational requirements. Be careful what you measure, because that is what you will get; better perhaps to know where you're going before determining how to measure whether you got there (this rather resembles the system integration cart being before the transformation horse). At the core, know what data drives your organization.

Even with analytics, big data and the desire to discover unknown patterns, it is still important to have a purpose behind the quest. There is a difference between data, information, knowledge, understanding and wisdom. What you’re looking for is wisdom to make better decisions; big data is a method to provide you some understanding based on the knowledge of the information and data that’s out there. The question is what do you want to be wise about?

Maybe we should be talking about Critical Data Leverage, Master Information Management and Big Knowledge.

I was having discussions with some colleagues who are participating in transformation programs at their respective companies. Each program is segmented in two: one part focused on business transformation led by one outside partner, the other on systems integration led by a different partner.

The question: what approach should be used to ensure you get the best possible outcome for your company? I suggest you get your dancing shoes on, because you will be balancing on a fine line. First, it would be good to know what style of dance and what song you are going to dance to before you begin. A lot of work was done before these partners were selected; I am sure there was a business case, budget projections, etc. But is it clear whether this is primarily a business transformation or a systems implementation program? These can be very different, with very different outcomes, even though many of the areas addressed may be the same: business requirements, process flows and taxonomies. The difference is where you start and who plays what role along the way.

A business transformation, according to businessdictionary.com, is a process of profound and radical change that takes a business in a new direction and to an entirely different level of effectiveness; a basic change of character with little or no resemblance to the past configuration or structure. Unless it's a startup (which wouldn't need to transform anyway), there are already organization structures and go-to-market strategies in place. But something has happened that makes the company believe it needs to change, and somewhat significantly, to be competitive. Does your program clearly know why the need to transform exists and what your executive sponsors believe success looks like? This is the role of the business transformation partner: to articulate why to transform and what to transform (business models, operational models, business process flows and organization structures). It's often in the business process flows where the transformation horse and the integration cart get sideways.

Let's back up for a moment, though. What is the systems integrator's role? If why and what are the transformation piece, then how to map the models and processes into technology solutions is the systems integration piece. This will be affected both by the application landscape currently in place and by total cost of ownership (vanilla or custom, fit for function, cloud based, etc.). The "how" is focused on technology, balanced with process and guided by transformation.

This is where the dancers can step on each other's toes. Both the transformation and integration partners have input into the business requirements and process areas and need to work together. Dare I say it: a RASCI is probably required in order to define who does what. It is likely this was not done when the partners were selected, because that was too deep a level of detail at the time. More important, though, your company has to define how to make decisions between model, process and technology tradeoffs. Your organization's magnitude of transformation, appetite for differentiation and budget will all play a part in establishing these guiding principles; this is the balance on the fine line. These tradeoff guiding principles will help to define the responsibilities of the transformation and integration roles and speed decision making all along the way.

It is essential that the transformation horse is ahead of the system integration cart. Without consensus on why and what (the horse), your program will not know the style of the dance. And without the tradeoff guiding principles (the reins), you won’t know the song to dance to. Both of these are necessary to determine how (the cart) to dance your dance.

I may have mixed a couple of metaphors, but I hope you can see that the transformation (why and what) should be clearly defined before bringing in systems integration (how) and determining the right balance of differentiation and total cost of ownership for your company.

Gartner's definition of the CIO reads: the person responsible for planning, choosing, buying and installing a company's computer and information-processing operation. Originally called data-processing managers, then management information system (MIS) directors, CIOs develop the information technology (IT) vision for the company. They oversee the development of corporate standards, technology architecture, technology evaluation and transfer; sponsor the business technology planning process; manage client relations; align IT with the business; and develop IT financial management systems. They also oversee plans to reinvest in the IT infrastructure, as well as in business and technology professionals. They are responsible for leading the development of an IT governance framework that will define the working relationships and sharing of IT components among various IT groups within the corporation.

That’s pretty encompassing and I think accurately describes what I call “Enterprise IT”, or what it takes to help your business run efficiently and effectively from an internal perspective. But what happens when your business is the business of technology? What if your revenues are based on a cloud offering? It’s one thing if internal operations are compromised, but it’s a whole different level if your livelihood is on the line. This situation can still be considered your company’s computer and information-processing operations and the domain of the CIO. However, product development and perhaps the CTO have some pretty big influence on the technology architecture; in fact they probably own those standards and decisions. So what’s a CIO to do?

That's where "Cloud IT" and being CIO of the Cloud come into play. While the CIO may have little to say about what's running, they may have a lot to say about how it runs. There is still no one better equipped to understand the operational and support aspects of running IT. And the truth of the matter is: the product people aren't so interested in running the cloud as they are in creating the cloud. By the way, the Gartner definition sort of neglects the operational and support aspects of the CIO role, since it stopped at install. Here is where the CIO must align IT not with "the business," as in the Gartner definition, but with the "business of the business". Because they know the IT professionals and the processes it takes to run technology, the CIO should be the partner entrusted with running revenue operations.

Are the Cloud CIO and the Enterprise CIO the same person? I remember a time when the company I was working for was acquired and the network was being run by the product development organization. You never knew what was going to happen when you plugged into the wall (this was a while ago, obviously): would there be connectivity or not? There is definitely segregation required between Cloud and Enterprise IT; security and access are very different between the two, not to mention the application set. There are also common skills required in both aspects of the CIO role, but within the organization the focus and priorities are very different. Of course, not every organization is faced with this dilemma, but it is becoming more prevalent as the cloud continues to evolve and grow.

What kind of CIO are you? What kind of IT organization are you a part of? Which one keeps you up at night?

This phrase was a common answer from me when asked whether our IT applications organization could do something for the business. When I first used the phrase back in 2002, it went "Anything is possible with time, money and resource". I modified it a bit later after realizing that not all resources are created equal. If you have never done something before, there is no shame in asking for help from those with the expertise. That way you get to learn, so next time you have the expertise. But it's really not so much about what the phrase says as it is about the attitude it represents.

I don’t know if this has ever happened to you, but in my career I have run into people who only know how to say why something can’t be done. They continually look for what won’t work, make excuses for why they don’t want to do something and generally throw up roadblocks every step of the way. For me, it’s not very much fun working with people who have that kind of attitude; everything seems to be a struggle. The easiest question can take weeks to be answered. In fact, depending on where they are in the organizational structure, an attitude of that nature can make it almost impossible to get anything done. Instead of engaging in a quest for answers, everything stops at them since they have already decided it’s never going to work.

On the other hand, if you have people who are constantly looking for how something can be made to work, I believe you have a pretty open environment. Instead of there being roadblocks, you have many people engaged in blowing holes through the rock trying to see if what’s on the other side will help. That’s how successful programs get done, because people have an attitude that it is possible.

Take a look at yourself. Are you a roadblock or a rock breaker? The attitude you take every day is up to you. There are always down times, but I fundamentally believe people want to be successful and make things work.

It doesn't matter what type of software you're implementing or for what purpose; Einstein's quote is the guide to success: finding the balance point of simple is the key to a vanilla implementation.

Software is created to perform a function, typically a function that many people do on a regular basis. The product designers spend lots of time understanding the commonalities of the function in order to create a product that follows Pareto's Principle, or the 80-20 rule. The 80% is the same for everyone; it's standard/vanilla and is going to work just fine for your purposes. It's how you handle the 20% that determines how vanilla your implementation will be.

From the software application perspective, here are three rules of keeping things vanilla.

Rule #1: Extend – never replace. If the application does it, use it; if a capability ships with the software, use it. Don't reinvent the wheel; extend from the hub to add the uniqueness you require. That's kind of your own 80-20 rule: you can use 80% of the functionality of the code that's there and just add on your own 20%. The total cost of ownership for the 20% is a lot less than 100% of your own development and support. Plus, you've already paid for the capability (and continue to pay support), so you certainly don't want to pay twice! The question to ask in regard to Rule #1: why doesn't the standard capability work for my company? 99 times out of 100, there is just one specific situation where you need something a little bit different.

Rule #2: Extend – never change. Never modify standard application code or data structures. I have always told my teams: “Tell me the 25 other ways you’ve considered before saying you have to modify standard application code”. Almost every software manufacturer provides a method to extend the functional capabilities of their package. You may need to do some unique calculation or capture some additional data. Use the hooks provided to interrupt the flow of the process, do what you need to do, then come right back in where you left off in the application flow. This is what allows you to continue to upgrade as new versions of the software are released. In my career (many, many moons), there have only been two times when a standard code change was required. And believe me – everyone who ever worked with that section of the application knew about it and how to deal with it, talk about documentation!
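To make Rule #2 concrete, here is a hedged sketch of the extend-via-hooks idea in Python. The OrderProcessor class stands in for standard vendor code that exposes pre- and post-processing extension points; the class, hook names and surcharge rule are all hypothetical, not any particular vendor's API.

```python
# Illustrative only: a stand-in for vendor software with extension hooks.
class OrderProcessor:
    """Stand-in for standard vendor code -- never modified directly."""

    def __init__(self):
        self.pre_hooks = []    # extension points exposed by the "vendor"
        self.post_hooks = []

    def process(self, order):
        for hook in self.pre_hooks:
            order = hook(order)          # interrupt the flow...
        order["status"] = "processed"    # ...standard logic runs untouched
        for hook in self.post_hooks:
            order = hook(order)          # ...then resume where we left off
        return order

# Your 20%: a unique surcharge calculation, attached without touching the
# standard process() code, so future upgrades remain safe.
def add_rush_surcharge(order):
    if order.get("rush"):
        order["total"] = round(order["total"] * 1.10, 2)
    return order

processor = OrderProcessor()
processor.pre_hooks.append(add_rush_surcharge)

result = processor.process({"total": 100.0, "rush": True})
```

The standard flow never changes; the unique behavior lives entirely in the hook, which is exactly what keeps the implementation upgradeable.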

Rule #3: Data drives flexibility and scalability. Every software application is configured to distinguish Company A from Company B. The data values that drive the function you are automating are what make your company yours and allow you to extend the functionality to meet the unique needs of your organization. For example, imagine you are a company that provides financing directly to some of your best customers, and this option is selected only 1% of the time. By creating a unique value, "Financed", associated with the quote and order, you can create a hook when that value appears to capture additional data and determine differentiated pricing, for instance. You don't change the way quotes and orders are entered or processed; you just add to it and perhaps do some overrides. If you decide later that you're not going to do that anymore, or will do it in a different way, then inactivate the "Financed" value or add a new value to drive behavior differently. You've handled the 1% and left the 99% as business as usual: vanilla.
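A minimal sketch of Rule #3, using the "Financed" example; the field names and handler logic are invented for illustration. The point is that the unique behavior hangs off a data value, so it can be switched off by inactivating the value rather than by changing process code.

```python
# Illustrative only: active data values and the extra behavior each
# value triggers. Inactivating a value removes its behavior -- no
# changes to the standard entry code below.
value_handlers = {
    "Financed": lambda order: {**order,
                               "needs_credit_check": True,
                               "pricing": "financed-rate"},
}

def enter_order(order):
    """Standard order entry, untouched; unique values hook in via data."""
    handler = value_handlers.get(order.get("sale_type"))
    if handler:                      # the 1% case
        order = handler(order)
    order["status"] = "entered"      # the 99% stays business as usual
    return order

standard = enter_order({"sale_type": "Direct", "total": 500})
financed = enter_order({"sale_type": "Financed", "total": 500})

# Decide later to stop offering financing? Inactivate the value:
# value_handlers.pop("Financed") -- no process code changes required.
```

The 99% path runs exactly as before; only orders carrying the "Financed" value pick up the extra data and pricing behavior.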

If you follow these three rules, you use what is there and have not modified the code or data structures provided by the software manufacturer: you have a vanilla implementation. You have added your 20% of uniqueness in a way that maintains the integrity of the supplied software. The ease of doing this varies based on the tools associated with the specific software application; some suppliers force the behavior, while with others you have to be more diligent.

Of course, there is a bit more to it than that. How you implement in a vanilla fashion is one thing, the bigger measure of the “vanilla-ness” of your implementation is what you choose to put into the 20%. As I described in the rules, this is a question of uniqueness and value; unless there is something to differentiate you from your competitors, why spend the money? How do you know what is truly unique and differentiating about your company; where is the balance point for simple as described by Einstein? Software can do anything you can imagine but you probably can’t afford for it to do everything. I always like to say “Just because you can, doesn’t mean you should”.

Start by understanding the best practices for the function and in your industry (the 80%), and then state specifically (in writing) what is really special about your company in these areas (the 20%). Be wary if anyone says, "We've always done this, we have to have it!" For every unique factor, describe how it differentiates you from your competitors and what benefits are derived. Don't forget about reporting; you will look at your data differently from others, and you may need to feed your business intelligence applications for a more holistic look. Based on this, have a rating and ranking exercise performed by the program sponsors and executive leadership. This sets you up to find the balance point of simple for your implementation.

Vanilla now becomes a budget decision. Figure costs for environments, configuration, testing and change management; that's your baseline. Determine rough estimates for the ranked list (only go as far down as you think necessary). From there you can determine the cut line based on the budget for the program (or adjust your budget accordingly). Voila, vanilla!
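As a rough sketch of that budget arithmetic (all figures and requirement names invented), rank the unique requirements from the rating exercise, then fund down the list until the differentiation budget runs out; the first item that doesn't fit marks the cut line.

```python
# Illustrative only: finding the cut line for differentiation spend.
baseline = 400_000   # environments, configuration, testing, change mgmt
budget = 550_000     # total program budget

# (requirement, rank score from the rating exercise, rough estimate)
ranked = [
    ("Customer financing flow", 95, 60_000),
    ("Custom quote approvals", 80, 50_000),
    ("Legacy report conversions", 60, 70_000),
    ("Nice-to-have portal tweaks", 30, 40_000),
]
ranked.sort(key=lambda r: r[1], reverse=True)  # highest-ranked first

funded, remaining = [], budget - baseline
for name, score, estimate in ranked:
    if estimate > remaining:
        break          # the cut line: everything below this rank waits
    funded.append(name)
    remaining -= estimate

print(f"Funded above the cut line: {funded}")
print(f"Unspent differentiation budget: {remaining}")
```

Anything below the cut line either waits for a later phase or prompts a conversation about adjusting the budget, which is exactly the decision the exercise is meant to force.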

There are three simple rules to follow for a vanilla implementation: never replace, never change and drive with data. The harder part is the choice of where to differentiate your implementation and your company. Vanilla is permission to play; the balance of simple is winning.

If your company is undertaking a major cross-functional transformation program, then you have probably engaged a partner (or two or three) to help you. There are many reasons for needing a partner: not enough in-house resources, knowledge gaps, etc. One of two things can happen: either you run the partner or the partner runs you. How do you know which situation you're in?

Often you do not possess the in-house expertise to lead and manage a change of this breadth and impact. It's understandable; you've just never had the experience before. To remedy that, you can either hire internally or ask a partner to provide this service for you. Hiring internally for a program of fixed duration can be a challenge, although in my experience there's always another program on the horizon; it's just a question of when it's actually going to start! Frequently you ask your partner to provide the program management services and support for your leadership. Here's where it can get tricky.

If the program management expertise is coming from the same partner and the same practice group, this is a red flag for me. Partners know how to run their methodology and that’s what you’re going to get; they will be very adept at doing their work the way they know how to do it, but they may not be as effective in helping you to do the work you need to get done or calling you on it if things aren’t getting done. This is what I call the fox watching the hen house. That may be okay if your organization is mature, doesn’t work in silos and has a high degree of trust between all cross functional stakeholders. Otherwise, you may experience delays in your program due to inconsistent expectations, slow decision making and reactive risk responses.

If you have a fox ready to eat your hens, you may want an insurance policy, or what a CFO I worked with called program assurance or independent audit (which your own legal department should be doing too). The CFO discovered this is not as easy to find as it sounds, although most of the big consulting firms have a practice. What are you really looking for here? You want to ensure that the program is going to be successful and the work is getting done, and to intervene if it isn't. First and foremost, I believe you need someone who isn't afraid to tell you and your partners what's working and what's not. In addition, they should ensure you are proactively mitigating risk (not just managing it). Identifying assumptions in order to manage expectations about your program is another service this third party can perform that your partners, who are busy delivering, may not think about (at least that has been my experience). This independent party is watching your back and looking out for the big "C" in your Company.

What choices do you have? You can use the assurance practice of your existing partner. In a large program, this should be happening anyway, preferably as part of your contract with minimal additional cost, since it's supposedly good for the partner as well as for you. In this case the practice, while independent, still has the same basis in the underlying methodology; you might only get a better-dressed fox. I look for a different point of view. The assurance practice of a different partner, or an independent third party with large implementation experience, is a good choice. This way there is no connection to the dollars-per-hour interest of the partner doing the heavy lifting, and no preconceived notions of how things ought to work.

Agree on the measures that tell you this third party is keeping you on track, such as mitigation plans being in place and executed accordingly, decisions being made at the right time and level, and milestones being reached according to plan; even base a portion of their fees on those measures. As a colleague and I discussed, there are no insurance policies for large-scale transformational change programs. Using a third party to tell it like it is allows you to keep on top of all the partners and your own resources and actions. They set you up to run your partners as opposed to having them run you.

As the saying goes, an ounce of prevention is worth a pound of cure. You can take action toward ensuring you are in the successful 40% of transformation programs.

Hens produce a lot of value over the long term. Make sure yours are protected from the foxes.