Syndication

I think there's a great opportunity waiting for someone to create some new software. The model has already been created; we just need to migrate it to corporate data. Here's the problem we need to solve: I need to be able to find data in my company, but I need to find it in a particular context.

There are a large number of existing solutions for finding data, but most of them rely on some sort of brute force approach. Google and other search engines can help find information in a database, but they aren't that great at finding information with a specific context. So if I run a search in my organization, assuming I have some way to search across the multiple data systems that exist there, I'll get a "hit" for every occurrence of a search term. However, if the search term is "customer inquiries," I may get a training manual that defines how to handle customer inquiries, pointers to a database on total customer inquiries, a Word document about specific customer inquiries and so forth. This is like searching for a needle in a stack of needles. The search is easy if any needle will do.

I can begin to bring human reasoning and cataloging to the effort by creating a Yahoo-like directory, relying on humans to create links to information. But that raises another question: who decides what to link to and how to link? What rules do we follow, if any? And how much of the information in a firm can accurately be catalogued and linked by humans? Let's assume that in any knowledge-based business, an individual generates documents, receives email and other information, reviews documents and information from colleagues and business partners, and downloads material from the internet or other sources. For grins, let's assume that an average knowledge worker generates, reviews, stores or downloads over 5 MB of data per day. In even a small firm (under 50 people), this means we are adding 250 MB of data every day, or over five gigabytes a month. In larger firms - who knows. Humans can't possibly catalog and create hierarchical links to all this information.

Another way we've tried to manage the information we know is by using Wikis. Wikis tend to grow rapidly as information that one person finds valuable ends up on the Wiki, and others follow suit. What tends to end up on Wikis are short instructions - how to change your 401K investment options or how to accomplish a specific coding task. Wikis are great for things you do occasionally and would otherwise bug your colleagues for help with.

So, a Google concept might not work because it does not provide contextual meaning to a search. Human cataloging is problematic given the volume of the data and the speed we are generating the data. Wikis are great for some knowledge management but are not very structured.

What we need is an application that can crawl our data and read tags that we associate with the information as we create or update documents. It seems to me that the tags we use for blogs can be added as "metadata" to the files we create. Then, we could create a crawler to crawl the files and establish links. A Google-like solution at that point could indicate the relative importance of certain documents through the number of links, but a search could include contextual information so you could look at documents that were important and met some contextual hurdles.
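The crawl-tag-and-rank idea can be sketched in a few lines. This is a minimal sketch, not a real crawler: the corpus, file names and tags below are all hypothetical, and "importance" is approximated by counting inbound links, Google-style, while the tags supply the context filter.

```python
from collections import Counter

# Hypothetical corpus: each document carries author-assigned tags and
# links to other internal documents.
documents = {
    "training_manual.doc": {"text": "how to handle customer inquiries",
                            "tags": {"training", "support"},
                            "links": ["inquiry_db.xls"]},
    "inquiry_db.xls":      {"text": "total customer inquiries by month",
                            "tags": {"metrics", "support"},
                            "links": []},
    "q3_report.doc":       {"text": "customer inquiries spiked in q3",
                            "tags": {"metrics", "finance"},
                            "links": ["inquiry_db.xls"]},
}

# Google-style importance: count how often each document is linked to.
inbound = Counter(target for doc in documents.values() for target in doc["links"])

def contextual_search(term, required_tags):
    """Return documents matching the term AND the tag context,
    ranked by inbound-link count (most-linked first)."""
    hits = [name for name, doc in documents.items()
            if term in doc["text"] and required_tags <= doc["tags"]]
    return sorted(hits, key=lambda name: inbound[name], reverse=True)

# A plain search for "customer inquiries" hits all three files;
# adding the "metrics" context drops the training manual.
print(contextual_search("customer inquiries", {"metrics"}))
```

The point of the sketch is the two-stage filter: the tag set narrows the haystack to the right kind of needle before link-based importance orders what remains.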

This approach still requires a human - to generate and enforce a set of consistent tags. Look at del.icio.us, for example: bookmarks are filed under whatever tags each individual decides. So the same post might be tagged "productivity" by one person and "innovation" by another. The "metadata" to control the search and provide context needs to be developed in advance, regulated and monitored. Good metadata and consistent tags could make finding information and files in your firm a lot easier.
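Enforcing that regulated vocabulary could be as simple as a lookup table that maps free-form tags to canonical terms and rejects anything outside the approved list. A minimal sketch, with an entirely hypothetical vocabulary:

```python
# Hypothetical controlled vocabulary: free-form tags map to a single
# canonical term before they are stored or indexed.
CANONICAL = {
    "productivity": "productivity",
    "efficiency":   "productivity",
    "innovation":   "innovation",
    "new-ideas":    "innovation",
}

def normalize_tags(raw_tags):
    """Map free-form tags onto the controlled vocabulary.

    Unknown tags are rejected rather than stored, which is what keeps
    the vocabulary regulated and the search context consistent."""
    unknown = [t for t in raw_tags if t.lower() not in CANONICAL]
    if unknown:
        raise ValueError(f"Tags not in controlled vocabulary: {unknown}")
    return sorted({CANONICAL[t.lower()] for t in raw_tags})

print(normalize_tags(["Efficiency", "innovation"]))
```

With something like this in the save path, two people tagging the same document "efficiency" and "productivity" end up filing it under one term instead of two.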

I think there's a great opportunity to bring some of these existing web technologies together to create a powerful new application to improve knowledge management immensely in larger firms. I'm very interested in finding specific needles in the needle haystack. Is anyone working on anything like this?

Alchemy was the "science" of turning ordinary metals into gold, which was especially popular during the Middle Ages. There's a need for some new alchemy, but this time we want to convert data into insight which will help firms be more productive and more innovative.

I've written before that most of the businesses we interact with are awash in data. Over the last ten years or so, information technology has expanded very rapidly, and we have created all types of information systems to speed transactions. I can vaguely remember when I first entered the work force that people actually wrote out or typed purchase orders (which copy do you keep? pink or goldenrod?) and managed much of their business on paper. Over the years, we've automated a wide range of transactional processes and created systems that generate a lot of data. This is a blessing and a curse. The automation has clearly made some processes more efficient, but the curse is that these systems generate more and more data every day and we simply don't know what to do with it. Take a look at Network Appliance and EMC and the other large storage vendors. They are raking it in because we are all generating a ton of data with no real purpose for its use.

Here's the news flash: Data is not valuable, in fact it creates significant cost. There is a progression that must occur before a firm can get value from its data. That progression is:

Data → Information → Insight → Action

I guess I should coin a new acronym: DIIA. Consider it trademarked and on its way to a new management book near you.

Data must be converted into usable formats that people or machines can review and make sense of. That's been the whole point of business intelligence, which has been a hot topic for several years now. The BI space has been too self-serving, however, in that for the most part business intelligence and other means of preparing and making sense of the data merely set the stage for someone to review the data and draw conclusions.

Once data is converted to information, there needs to be someone who gains knowledge or insight from the data. That means the data must be presented in a way that people can understand and review quickly to find trends or anomalies in the data, or be able to use the data to forecast or predict the future, which in my mind is an even more valuable use of information. But that information must be presented in a way that is easy to understand, provides real meaning to underlying trends and is "fresh".

Finally, to complete our alchemy, we need someone to take action. Converting data into usable formats and providing insights into the real meaning of the data is not helpful if there are no clear-cut actions to take once the insight is created. Someone needs to gain the insight and be willing and able to make decisions or draw conclusions and take action; otherwise the information is useless.

One real challenge with turning data into something valuable is that there is a set of steps or processes that need to be completed, and the value of the data can be ignored or missed in any of these steps. Even if the IT team gets it exactly right and provides meaningful, easy to use information that is easy to understand and easy to access, if no one is willing to take an action, why do we keep all that data?

In other words, this is not just an IT problem. The problems rest with IT until the data is presented in a fashion that creates real insight. After that, the problem is a cultural and decision making problem that rests in the business functions.

Unlike the alchemy of the Middle Ages, we can convert base materials into gold. There's an easy, four-step process to get it done. What stands in the way, most probably, is understanding what information is needed to make decisions and mustering the will to use the data once it is presented.

I had a chance recently to get into an email dialog with an occasional reader (thanks, Anders) and we had a short but lively conversation about business process automation. Anders's questions and thoughts centered on one question: "What should we automate?"

His argument is that the software industry is trying to convince us to automate everything - if it can be done by hand, the pitch goes, it can be done better in software. Now, clearly, there are some things that humans simply do better than machines, and those processes will not be automated. Some processes should be automated simply to remove the drudgery of having to do them by hand - integrated purchasing systems really do save money and time.

What we both think is missing are the definitions, rules and constructs required to help make decisions about automating a business function or process, and then about choosing a packaged software product or building something from scratch. Anders argues that a decision maker should usually choose packaged software for business functions that are not a competitive advantage for the business, and build systems for areas of the business that are.

I think that most decision makers (who are not IT folks) don't understand the tradeoffs between building a custom system and purchasing a packaged system, regardless of the competitive differentiation of the business function. Mostly, the differences come down to "whole product" issues.

Fans of this blog know I always return to Crossing the Chasm to evaluate an issue like this. The whole product is the set of features and attributes not immediately tied to the base product. When we purchase a packaged software product, the whole product includes support, documentation, training, the future product roadmap and so forth.

When we choose to build a new system ourselves, there are several constraints. First, it can cost more to build a system than to purchase a packaged one. Second, it is usually very difficult to achieve in a custom development the level of training and documentation that exists for packaged software. I've never seen a really well documented internally developed application. Third, you built it, you own it: there's no one else working with the software to extend it or find the bugs in it. Fourth, an internally developed application is typically limited to the technologies the internal IT group can and will learn and support.

On the flip side, a custom development can be built specifically to your processes and requirements. This is a good thing if your processes are well defined and make sense, but can be a real train wreck if you are "paving the goat path" - automating processes and decisions that didn't make sense to begin with.

The main problem with packaged solutions is that most firms are forced to make a choice - choose to modify their processes and implement packaged software in a "vanilla" fashion - that is, with little configuration, or try to rework the package to fit the existing processes. This rarely if ever works well. I find that most firms need to be willing to adopt the processes inherent in the software application to make the software truly productive.

When you are considering a software package, ask yourself what bare minimum functions you need for the software to work according to your needs. Ignore the bells and whistles because you'll probably not use them. Then, compare your business processes to the processes inherent in the vanilla system. Choose the product that meets your minimum functional needs and most closely matches the business process you intend to follow in the future. This will reduce configuration time, reduce long-term support costs and allow the software to function in the way it was intended.

Would you buy a farm tractor with the intent to convert it to an airplane simply because it has a very powerful engine? Why do we insist on doing the same thing with packaged software?

What you automate - the processes - are more important than how you automate - the systems. Whether you choose a custom development or packaged software, focus time on getting the processes right.

I find myself drawn back once again to the problems of disconnected data and information - what we've grown accustomed to calling "islands of information". I guess I come back to this challenge that most businesses face for a couple of reasons.

First, most business people will readily admit they've got this problem, but most won't admit they were the ones who created it. In many businesses, we reward people who have the best information, but we don't always reward people who share information effectively. Islands of information can be very helpful to people who seek to build power within a functional group or a line of business. Sharing information makes the information more valuable across the enterprise but can result in less power locally. Additionally, building an island of information and managing that information locally allows the manager or director of that business unit to massage the data and determine what data is reported or released to other groups. Much about islands of information is about personal power and fear.

Second, there's a problem between IT and business people. We business people expect IT to be done quickly and simply, but most firms only staff enough IT people to keep the baseline operations running. When we business people discover that we need a new database or new web application to support the latest process or product we've dreamed up, we turn to IT and expect it to be built in a matter of weeks. If IT were more involved in the planning of the new solution, they might be more able to deliver on our timeframes. However, our priorities and IT's priorities are rarely the same. So what happens? We find someone to build an Access database or a monstrous spreadsheet and base our new product or service on that advanced technology. Building new systems is not hard, just time consuming, and most IT staffs have been cut to the bone. Creating new Access databases and Excel spreadsheets isn't the answer - working early and often with IT is.

Third, while it is not hard to build IT systems, it is hard to build systems that encourage data sharing across an enterprise, since many folks on the business side have never worried too much about data from other business functions and might not understand its value. To marketing, market share and ad spending may be important. To finance, depreciation and EBITDA (whoops, promised fewer acronyms). To manufacturing, inventory levels and quality reporting. How does a firm build systems and data containers to manage this data, and how do the end users make sense of the data once the container is built?

One other problem is that most systems are built to solve a problem locally, with little thought to how they integrate or solve problems across the enterprise. When I create a new Access database with the list of firms who've leased machinery and their payment schedules, I never dream that one day some of that data may become important to the CEO and become part of his daily report. But stranger things happen in data management all the time.

How do we fix all this? I think everyone in a business function should have to spend time in IT. Not necessarily becoming an expert, but experiencing the demands and constraints that IT experiences. I think that every time someone in a business function wants to build a new complex spreadsheet or Access database instead of working with IT, they should be required to publish the data model and data elements of the new database, so that everyone in the business can understand what data is captured and whether or not that data may be important to other people.

Finally, a culture needs to be put in place that encourages and rewards sharing information across the enterprise. Otherwise we are all Robinson Crusoe on our desert islands of information.

My idea of productivity begins at the point where all of our corporate information and knowledge is put to use to help make decisions - every time.

I was discussing information productivity with a person who was interviewing for a job when the appropriate metaphor hit me. Most CFOs and CEOs are interested in receiving the maximum return for their shareholders from their corporate assets. Shareholder value is a driving force for senior management, and shareholder value is increased as the share price increases.

Share prices increase as the value of the goods and services produced grows faster than the cost to produce them. This means that a firm has to constantly improve its revenue stream, cut costs or get more from its existing assets. In my experience, CFOs are usually very good at finding and eliminating assets that do not perform, and investing and nurturing assets that are high performing. What does this say about how assets like knowledge assets and information are used?

Most firms generate copious amounts of data and have a tremendous amount of corporate knowledge. Yet, one of the most common complaints about information technology is the fact that while we have lots of data, we have very little actionable information. If information is important and a key asset to your business, why do we allow it to underperform? Why don't we squeeze every drop of value from our data and our corporate knowledge? Why is it OK for it to be "hard" to analyze and report on the data in our systems?

It seems to me that the first order of business for any CFO would be to crack open the nut that is his or her data and corporate knowledge and find a way to make those assets return more value. These have to be some of the most valuable, and most poorly utilized, assets in any business. Put it another way: if I am a textile manufacturer, my looms run 98% of the time. If I own a metal stamping business, my stamping machines run almost constantly. In both cases I try to wring as much value as possible from those (admittedly) tangible assets. Yet we often throw up our hands or shrug our shoulders when it comes to getting more value from our data! Data and corporate knowledge ARE corporate assets, and we should be mining them for all they're worth.

I've got a topic I want to discuss today - providing context to data - but I haven't found the appropriate metaphor for it. It seems to me that data in the abstract is like a newborn baby - interesting, helpless but full of potential. It also seems to me that data is like a seed - useless without the soil, water and sunlight to help it sprout.

I was thinking about this when watching two colleagues of mine introduce themselves. They happened to know each other through one colleague's husband. So the spouse of the person they both know says "I'm so and so's wife" and the other colleague says "I know so and so through my former boss" and the discussion continued until they put their nascent relationship into context with the people, places and experiences both had had. What this process does is to help the individuals decide very quickly whether or not the new colleague is trustworthy, reliable and capable.

We don't take the same approach to data or information, but it's probably even more important to provide context to our data. About the best most of us do when we create a new document is to try to give the data a descriptive name. My desktop is loaded with files called "Sales and Marketing projection - February 2005" or something similar. This is a poor first step towards creating actual context for the data and helping other people find, understand and use the data or information I've created.

It seems to me we should create file storage systems with a lot of context about the data, and the context should not be optional. When you save a new file, the system should ask you:

- What's the data for? A temporary analysis or something we'll need long term?
- What's the data about? A marketing project or a term paper for your graduate degree?
- Who created the data? You, a team of people, a vendor or someone else?
- How long should we consider this data valid? A week, a month, or forever in the case of the recipe for Coke?
- Who should have access to this data?
- How should this data be used?
- Is this data fact or conjecture?
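Those prompts amount to a required metadata schema attached to every file. A minimal sketch of what a file storage system might enforce at save time - the field names here are hypothetical, and each firm would define its own schema:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class FileContext:
    """Non-optional context captured when a file is saved.

    Because no field has a default, constructing this record without
    answering every question raises a TypeError - the context is
    mandatory, not optional."""
    purpose: str       # temporary analysis vs. long-term record
    subject: str       # what the data is about
    creator: str       # person, team, or vendor
    valid_days: int    # how long the data stays valid
    access: list       # who should have access
    usage: str         # how the data should be used
    kind: str          # "fact" or "conjecture"

    def expires(self, created=None):
        """Date after which the data should no longer be trusted."""
        created = created or date.today()
        return created + timedelta(days=self.valid_days)

ctx = FileContext(purpose="long term", subject="marketing projection",
                  creator="finance team", valid_days=30,
                  access=["finance", "marketing"],
                  usage="budget planning", kind="fact")
```

The design choice worth noting is that every field is required: the save simply fails until the author answers the questions, which is what "the context should not be optional" means in practice.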

This isn't a complete list, and I can't define the entire one, because to a certain extent each firm should shape how it captures the context of the data it creates; there are unique characteristics of information that may be specific to industries or other segments of business.

Putting our information into context so others can find it and use it means we are leveraging our internal knowledge and corporate assets more completely and are becoming more productive in using and sharing our resources.

Here at Thinking Faster we are always thinking about ways to help make people more productive. To me, one of the simplest ways to make people more productive is to provide them with tools to access all the data their company has created, in a method that is simple to create and easy to understand.

It constantly amazes me how difficult that is to do in reality, no matter how simple it seems in concept. People need information to do their jobs more effectively, but even more they need data to analyze trends and spot patterns so they can begin to take action before a problem is magnified or before an opportunity passes them by. Some larger firms have decided to implement enterprise business intelligence from the larger software firms. There are three challenges to implementing business intelligence as an enterprise application:

The CEO and CFO were told that other Enterprise systems (CRM, SCM, ERP) would solve all their data needs. To a point this was true. Each application creates and stores a vast volume of data. But none of them present the full picture of the end to end business process.

Enterprise business intelligence takes a long time to deploy and to learn. Imagine putting your team through a long ERP implementation (let's say 12 months, just for grins). Then let's say you decide a year or so after the implementation (just long enough to get everyone up to speed on using the application) that you still don't get all the data and reports you need. Now you've got to implement another enterprise application, which will take another 8 to 12 months to implement and another year to really learn. Four to five years later you finally have a solution to your data problem, just in time for...

The business to change. Business cycles are shrinking and firms need flexibility. Enterprise BI tools aren't flexible enough to change as the business changes.

Another problem with enterprise BI is that it generally is selected and implemented to solve the problems of one specific functional group - usually finance or sales. While the application may work for other groups, it never gets rolled out or the project peters out before it is rolled out company wide.

If we are to become more dynamic and flexible as businesses, if we are to share information across the business and even with our business partners, if we push decision making down the chain of command to speed up decision making and cut costs, EVERYONE in the business needs up to the minute access to data and the tools to make sense of the data. Otherwise, all we've done is expand decision making throughout the organization without providing the tools necessary to make those decisions responsibly. And that is a recipe for disaster.

Remember high school? Remember having to read a Shakespeare play or some exceptionally long and boring work by a now-forgotten Victorian author? And how your English teacher could extract information and meaning from the way Heathcliff glanced across the room?

That's exactly the way most decision makers feel today in data rich businesses. There's a lot of data out there, and somewhere deep inside that data there's some real information that can matter to the business. There's a trend line that indicates why customers are leaving, or an indication as to what's happening with product quality. How can you make distinctions and decisions based on several significant sources of data that generate thousands of megabytes every minute?

What managers need is what every high school kid relies on in senior English class - Cliff Notes. Only, these Cliff Notes are now called Dashboards and Score Cards. There is a distinction between the two.

Score Cards provide a short synopsis of key metrics in the business based on the "balanced scorecard" approach developed by Kaplan and Norton as a way to capture and measure business performance. This is a specific approach that captures data about Customers, Internal Business Processes, Financial Metrics and Learning and Growth Metrics. These metrics are captured and reported.

Dashboards are simply a method of distributing information to an interactive web portal or the desktop. Balanced Scorecard results can be published to a dashboard, or a firm can devise its own set of management metrics and publish its own metrics and measurements on a dashboard.

What's important about Score Cards and Dashboards is that they should present information that a manager needs about key performance indicators that allows the manager to understand what's currently happening, what processes and metrics are in or out of tolerance, and what items are most important. I'd like to see more information presented to managers about trend lines and predicting problems or challenges in the near future, but let's take it one step at a time.
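That "in or out of tolerance, most important first" idea can be sketched directly. The metrics and tolerance bands below are entirely hypothetical; the point is only that each KPI carries its acceptable range, and the scorecard surfaces out-of-tolerance items first:

```python
# Hypothetical KPIs, each with a tolerance band (low..high).
kpis = {
    "customer_churn_pct": {"value": 6.5, "low": 0.0, "high": 5.0},
    "inventory_turns":    {"value": 8.2, "low": 6.0, "high": 12.0},
    "days_sales_out":     {"value": 52,  "low": 30,  "high": 45},
}

def scorecard(metrics):
    """Return (name, status) pairs with out-of-tolerance items first,
    so a manager sees what needs attention before what's fine."""
    rows = [(name,
             "OK" if m["low"] <= m["value"] <= m["high"] else "OUT")
            for name, m in metrics.items()]
    return sorted(rows, key=lambda row: row[1] != "OUT")

for name, status in scorecard(kpis):
    print(f"{status:>3}  {name}")
```

A real dashboard adds drill-down and trend lines on top, but the core of a scorecard is exactly this: a small table of metrics, each judged against a tolerance, ordered by urgency.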

There are several applications I've worked with for dashboards, but my favorite is QlikTech's QlikView product. There's really only one reason I like this one over other dashboard and business intelligence tools. I like it because it can be deployed quickly and is very flexible. As my needs change, the application can quickly change to report the information that is most important to me.

So, use your Score Card as Cliff Notes on the Dashboard and get a handle on your business.

By now, if you've read the blog regularly, you know that I've got an opinion on just about every facet of business - especially where productivity and innovation are concerned. But the area that consistently amazes me is how little information we actually have.

After years of "investments" in information technology - and believe me, I was out there implementing ERP and CRM - we don't really know that much more about our business. Sure, we cut purchase orders more quickly, and we have a database of all of our customers...somewhere. But what does all this data actually do for me when I need to make decisions? Nothing.

Most of the "investment" in information technology has been in transactional systems. ERP, CRM, Supply Chain and other enterprise systems that have been the focus of IT professionals over the last ten years automated a lot of business processes. In that sense, they've been great for speeding up how we do things. But they came with a down side. These systems generate millions and millions of records every day. There's gold in that data - but how do you find it?

Today we are faced with lots and lots of data - but little information. What we need are some simple metrics (or key performance indicators) about the areas of our business we want to measure and manage. There is a significant amount of buzz around executive dashboards. These solutions have the right idea - putting key metrics on one or two screens for senior management to review and to use to make decisions. But why just senior management? Hasn't everyone in the business become a decision maker? And - what information is important? Who does the analysis of the information? What's the best way to present the data?

Productive and innovative firms are designing dashboards specific to their end users' needs, delivering information in a manner that is intuitive and can be used to make decisions. This information is interactive - users can "drill down" to get more details and make more informed decisions.