Category Archives: Risk Management

A lean financial infrastructure presumes the ability of every element in the value chain to preserve and generate cash flow. That is the fundamental essence of the lean infrastructure that I espouse. So what are the key elements that constitute a lean financial infrastructure?

And given the elements, what are the key tweaks that one must continually make to ensure that the infrastructure does not fall into entropy and that hard-won gains do not fall flat or decay over time? Identifying the building blocks, monitoring them, and making rapid changes go hand in hand.

The Key Elements or the building blocks of a lean finance organization are as follows:

Chart of Accounts: This is the critical unit that defines the starting point of the organization. It relays and groups all of the key economic activities of the organization into larger bodies of elements like revenue, expenses, assets, liabilities, and equity. Granularity in these activities might lead to a fairly extensive chart of accounts and require more work to manage and monitor, thus requiring an incrementally larger investment of time and effort. However, the benefits of granularity far exceed the costs, because granularity forces management to look at every element of the business.

The Operational Budget: Every year, organizations formulate the operational budget. That is generally a bottom-up rollup at a granular level that maps to the Chart of Accounts. It might follow a top-down directive around where the organization wants to land with respect to income, expense, balance sheet ratios, and so on. Hence, there is almost always a process of iteration in this step before the Budget is finally locked down. Be mindful, though, that there are feeders into the budget (relating to customers, sales, operational metrics targets, etc.) that are part of building a robust operational budget.

The Deep Dive into Variances: As you progress through the year, as part of the monthly closing process, you would inquire how actual performance is tracking against the budget. Since the budget has been done at a granular level and mapped exactly to the Chart of Accounts, it becomes easier to understand and delve into the variances. Be mindful that every element of the Chart of Accounts must be evaluated. The general inclination is to focus on the large items or large variances while skipping the small expenses and smaller variances. That method, while efficient, might not be effective in the long run for building a lean finance organization. The rule, in my opinion, is that every account has to be looked at, and the question should be: Why? If management has agreed on a number in the budget, then why are the actuals trending differently? Could it have been the budget itself, and we missed something critical in that process? Or has there been a change in the underlying economics of the business, or a change in activities, that might be leading to these "unexpected variances"? One has to take a scalpel to both favorable and unfavorable variances, since one can learn a lot about the underlying drivers; it might lead, managerially, to doing more of the better and less of the worse. Furthermore, this is also a great way to monitor leaks in the organization. Leaks are instances of cash dropping out of the system. Many little leaks can amount to a lot of cash in total. So do not disregard the leaks. Not only will plugging them preserve cash, but once you understand the leaks better, the organization will step up in efficiency and effectiveness with respect to cash preservation and delivery of value.
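The review loop above can be sketched in a few lines of code. The account names and amounts below are purely hypothetical; the point is that every account gets a row, favorable or unfavorable, large or small.

```python
# Budget-vs-actual variance review: examine EVERY account, not just the big ones.
# Sign convention: income positive, expenses negative, so variance > 0 is favorable.
# All account names and amounts are hypothetical illustrations.

budget = {"revenue": 500_000, "payroll": -200_000, "software": -12_000, "travel": -8_000}
actual = {"revenue": 520_000, "payroll": -205_000, "software": -15_000, "travel": -6_500}

def variance_report(budget, actual):
    """Return (account, variance, pct-of-budget) for every account."""
    report = []
    for account in budget:
        var = actual[account] - budget[account]
        pct = round(var / abs(budget[account]) * 100, 1)
        report.append((account, var, pct))
    return report

for account, var, pct in variance_report(budget, actual):
    label = "favorable" if var > 0 else "unfavorable"
    print(f"{account:10s} {var:+10,d} ({pct:+.1f}%) {label}")
```

The small "software" leak shows up on equal footing with the large payroll variance, which is exactly the discipline argued for above.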

Tweak the Process: You will find that as you deep dive into the variances, you might want to tweak certain processes so that these variances are minimized. This would generally be true for adverse variances against the budget. Seek to understand why the variance arose, and then understand all of the processes that occur in the background to generate activity in the account. Once you fully understand the process, it is a matter of tweaking it, marginally or structurally, in key areas that might resonate favorably across the financials in the future.

The Technology Play: Finally, evaluate the possibilities of using technology to surface issues early, automate repetitive processes, trigger alerts early on to mitigate issues later, and provide on-demand analytics. Use technology to free up time and enable more thinking around how to improve the internal handoffs that further economic value in the organization.

All of the above relate to managing the finance and accounting organization well within its own domain. However, there is a bigger step that comes into play once one has established the blocks and that relates to corporate strategy and linking it to the continual evolution of the financial infrastructure.

The essential question that the lean finance organization has to answer is: What can the organization do to address every element that preserves and enhances value to the customer, and how do we eliminate all non-value-added activities? This is largely a process question, but it forces one to understand the key processes and identify what percentage of each process is value-added to the customer versus non-value-added. This can be represented along a time or cost dimension. The goal is to maximize value-added activities, since the underlying presumption is that such activity leads to preservation of cash and also increases cash acquisition from the customer.

We have discussed financing via convertible debt and equity financing. There is a third element that is equally important and ought to be in the arsenal for financing the working capital requirements of the company.

Here are some common term sheet terms that you have to be aware of when opening up a credit facility.

Formula-based Line of Credit: There are some variants to this, but the key driver is that the LOC is extended against eligible receivables. Broadly, eligible receivables are defined as receivables that are within 90 days of the invoice date. There are some additional elements that can reduce the eligible base. Items that may be excluded are as follows:

Accounts outstanding for more than 90 days from invoice date

Credit balances over 90 days

Foreign AR. Some banks would specifically exclude foreign AR.

Intra-Company AR

Banks might impose a concentration limit. For example, any account that represents more than 30% of the AR that is outstanding may be excluded from the mix. Alternatively, credit may be extended up to the cap of 30% and no more.

Cross-Aging Limit of 35%, defined as those accounts where 35% or more of an account's receivables are past due (greater than 90 days). In such instances, the entire account is ineligible.

Pre-bills are not eligible. Services have to be rendered or goods shipped. That constitutes a true invoice.

In some instances, you may be precluded from including receivables from the government.

Non-Formula-based LOC: Credit is extended not against AR but based on what you negotiate with the bank. The bank will generally provide a non-formula-based LOC based on historical cash flows, EBITDA, and a board-approved budget. In some instances, if the bank believes you can capitalize the company via an equity raise in the near future, it may be inclined to raise the LOC.
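Taken together, the formula-based eligibility rules above amount to a borrowing-base calculation. The sketch below applies the 90-day, foreign-AR, intra-company, concentration, and cross-aging tests to a hypothetical AR ledger; the 80% advance rate and all thresholds and invoices are hypothetical assumptions, not terms from any particular bank.

```python
from datetime import date

TODAY = date(2024, 6, 30)
ADVANCE_RATE = 0.80          # hypothetical: banks advance a fraction of eligible AR
CONCENTRATION_CAP = 0.30     # per-customer cap, as in the 30% example above
CROSS_AGING_LIMIT = 0.35     # if >= 35% of a customer's AR is aged, exclude it all

# Hypothetical AR ledger: (customer, amount, invoice_date, foreign, intercompany)
invoices = [
    ("Acme",   100_000, date(2024, 5, 1),  False, False),
    ("Acme",    50_000, date(2024, 2, 1),  False, False),  # > 90 days old
    ("Globex",  60_000, date(2024, 6, 1),  False, False),
    ("Initech", 40_000, date(2024, 6, 15), True,  False),  # foreign AR, excluded
    ("HoldCo",  30_000, date(2024, 6, 1),  False, True),   # intra-company, excluded
]

def borrowing_base(invoices, today=TODAY):
    """Apply the eligibility tests and return the amount the bank would lend."""
    total_ar, aged_ar, eligible = {}, {}, {}
    for cust, amt, inv_date, foreign, interco in invoices:
        total_ar[cust] = total_ar.get(cust, 0) + amt
        age = (today - inv_date).days
        if age > 90:
            aged_ar[cust] = aged_ar.get(cust, 0) + amt
        elif not (foreign or interco):
            eligible[cust] = eligible.get(cust, 0) + amt
    grand_total = sum(total_ar.values())
    base = 0.0
    for cust, amt in eligible.items():
        # Cross-aging test: drop the entire account if it is badly aged.
        if aged_ar.get(cust, 0) / total_ar[cust] >= CROSS_AGING_LIMIT:
            continue
        # Concentration limit: cap any one customer at 30% of total AR.
        base += min(amt, CONCENTRATION_CAP * grand_total)
    return base * ADVANCE_RATE

print(f"Available to draw: ${borrowing_base(invoices):,.0f}")
```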

Interest Rate

In either of the above two cases, the interest rate charged is basically a prime reference rate plus some basis points. For example, the bank may spell out that the interest rate is the Prime Reference Rate + 1.25%. If the prime rate is 3.25%, then the cost to the company is 4.5%. Note, though, that if the company is profitable and the average tax rate is 40%, then the real cost to the company is 4.5% * (1 - 40%) = 2.7%.
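The after-tax arithmetic above fits in a couple of lines; this is a sketch using the example's own numbers.

```python
def after_tax_cost(prime, spread, tax_rate):
    """Effective after-tax cost of LOC interest for a profitable company."""
    nominal = prime + spread
    return nominal * (1 - tax_rate)

# Prime 3.25% + 1.25% spread = 4.50% nominal; at a 40% tax rate the real cost is 2.70%.
print(round(after_tax_cost(0.0325, 0.0125, 0.40), 4))  # → 0.027
```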

Maturity Period

For all facilities, there is a maturity period. In most instances, it is 24 months. Interest is paid monthly and the principal is due at maturity.

Facility Fees

Banks will charge a Facility Fee. Depending on the size of the facility, some amount could be due at close and some at the first anniversary of the date the facility contract was executed.

First Priority Rights

The Bank will have a first-priority UCC-1 security interest in all assets of the Borrower: present and future inventory, chattel paper, accounts, contract rights, unencumbered equipment, general intangibles (excluding intellectual property), and the right to proceeds from accounts receivable, inventory, and sales of intellectual property to repay any outstanding Bank debt.

The Bank may insist on having rights to the IP. That becomes another negotiation point. You can negotiate a negative pledge, which effectively means that you will not pledge your IP to any third party.

Bank Covenants

The Bank will also insist on some financial covenants. Some of the common covenants are:

Adjusted Quick Ratio which is (Cash held at the Bank + Eligible Receivables)/ (Current Liabilities less Deferred Revenue)

Trailing EBITDA requirement. Could be a six month or 12 month trailing EBITDA requirement

The Bank will require monthly financial statements prepared according to GAAP, along with a Bank Compliance Certificate.

Bank may seek an Audit or an independent review of the Financial Statements within 90-180 days after each fiscal year ends.

You will have to provide monthly AR and AP agings and an inventory breakdown.

In the event that there is a reforecast of the Budget or Operating Plan and it has been approved by the Board, you will have to provide the information to the bank as well.
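The Adjusted Quick Ratio covenant above lends itself to a quick monthly self-check before the compliance certificate goes out. The balances and the 1.25x covenant floor below are hypothetical.

```python
def adjusted_quick_ratio(cash_at_bank, eligible_ar, current_liabilities, deferred_revenue):
    """AQR = (Cash held at the Bank + Eligible Receivables)
             / (Current Liabilities - Deferred Revenue), as defined above."""
    return (cash_at_bank + eligible_ar) / (current_liabilities - deferred_revenue)

# Hypothetical month-end balances against a hypothetical 1.25x covenant floor.
aqr = adjusted_quick_ratio(cash_at_bank=400_000, eligible_ar=350_000,
                           current_liabilities=700_000, deferred_revenue=200_000)
print(f"AQR = {aqr:.2f}x, covenant {'met' if aqr >= 1.25 else 'BREACHED'}")
```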

Bank Oversight and Audit

The Bank will reserve the right to conduct a collateral audit for formula-based line of credit financing. You will have to pay the audit fees. In general, you can negotiate a cap on these fees and on the frequency of such audits.

Most of the above relates to the large number of startups that do not carry inventory. For companies that acquire inventory from international suppliers, there is another instrument worth knowing.

Banker's Acceptance

BAs are frequently used in international trade because of advantages to both sides. Exporters often feel safer relying on payment from a reputable bank than from a business with which they have little, if any, history. Once the bank verifies, or "accepts", a time draft, it becomes a primary obligation of that institution.

Here’s one typical example. You decide to purchase 100 widgets from Lee Ku, a Chinese exporter. After completing the trade agreement, you approach your bank for a letter of credit. This letter of credit makes your bank the intermediary responsible for completing the transaction.

Once Lee Ku, your supplier, ships the goods, it sends the appropriate documents, typically through its own bank, to your bank in the United States. The exporter now has a couple of choices: it could keep the acceptance until maturity, or it could sell it to a third party, perhaps to your bank, the one responsible for making the payment. In the latter case, Lee Ku receives an amount less than the face value of the draft, but it doesn't have to wait for the funds. The bank earns some fees and the supplier gets its money.

When a bank buys back the acceptance at a lower price, it is said to be "discounting" the acceptance. If your bank does this, it essentially has the same choices your Chinese exporter had. It could hold the draft until it matures, which is akin to extending the importer a loan. More commonly, though, the bank will charge you an up-front fee that is a percentage of the acceptance, anywhere from 2% to 4% of its value. In theory, you can get anywhere between 90 and 180 days of financing using a BA as an instrument to fund your inventory.
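The economics of the up-front fee can be sketched as follows. The $500K face value, 3% fee, and 180-day tenor are hypothetical, and the annualization is a simple (not compounded) approximation.

```python
def ba_proceeds(face_value, discount_fee):
    """Cash received today when the acceptance fee is charged up front."""
    return face_value * (1 - discount_fee)

def annualized_cost(discount_fee, tenor_days):
    """Simple annualized cost of the up-front fee over the financing period."""
    return discount_fee / (1 - discount_fee) * (365 / tenor_days)

# Hypothetical: a $500K acceptance, a 3% up-front fee, 180 days of financing.
print(ba_proceeds(500_000, 0.03))            # cash in hand today
print(round(annualized_cost(0.03, 180), 4))  # effective simple annual rate
```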

Dangers of Debt Financing

Debt financing can be a cheap financing method. However, it carries potential risk. If you are not able to service the debt, then the bank can, at the extreme, force you into bankruptcy. Alternatively, it can put you in forbearance and work out a plan to get back its principal. It can place you in receivership and collect the money on your behalf. These are all draconian triggers that may happen, and hence it is important to maintain a good relationship with your banker. Most importantly, give them any bad news ahead of time. It is really bad when they learn of bad news later; it will limit your ability to negotiate terms with the bank.

Manage Debt

In general, if you draw down against the LOC, it is always a good idea to pay it down as soon as possible. That ought to be your primary operational strategy. It will minimize interest expense, keep the line open, establish better rapport with the bank, and, most importantly, force you to become a more disciplined organization. You ought to regard bank financing as a bridge for your working capital requirements. To the extent you can minimize the bridge by converting your receivables to cash, minimizing operating expenses, and maximizing your margin, you will be in a happier place. Debt financing also buys you time to build value in the organization rather than relying on an equity line, which is a costly form of financing. Having said that, there will be times when your investors may push back on your debt financing strategy. In fact, if you have raised equity prior to debt, you may even have to get signoff from the equity investors. Their big concern is that leverage takes away from the value of the company. That is not necessarily true, because corporate finance theory suggests that intelligent debt financing can, in fact, increase corporate value. However, the investors may see debt as your way of stalling further investment requirements and thus deferring their inclination toward owning more of your company at a lower value.

Wall Street is the only place that people ride to in a Rolls Royce to get advice from those who take the subway. – Warren Buffett

So the big day is here. You have evangelized your product across various circles and the good news is that a VC has stepped forward to invest in your company. So the hard work is all done! You can rest on your laurels, sign the term sheet that the VC has pushed across the table, execute the stock purchase, voting, and investor rights agreements, get the wire, and you are up and running! Wait... sounds too good to be true, doesn't it? And yes, you are right! If only things were that easy. The devil is in the details. So let us go over some of the details that you need to watch out for.

1. First, the term sheet does not trigger the wire. Signing a term sheet does not mean that the VC will invest in your company. The road is still long and treacherous. All the term sheet does is require you to keep silent on the negotiations, and it may even prevent you from shopping the deal to anyone else. The key investment terms are laid out in the sheet and will be elaborated in much greater detail when the stock purchase agreement, the investor rights agreement, the voting agreement, and other documents are crafted.

2. Make sure that you have an attorney representing you. And more importantly, an attorney that has experience in the field and has reviewed a lot of such documents. As noted, the devil is in the details. A little "and" or "or" can set you back significantly. But it is just as important for you to know some of the key elements that govern an investment agreement. You can quiz your attorney on these, because some of them are important enough to impact your operating degrees of freedom in the company.

The starting point of a term sheet is the valuation of the company. You will hear the concept of pre-money valuation vs. post-money valuation. It is quite simple: Pre-Money Valuation + Investment = Post-Money Valuation. In other words, pre-money valuation refers to the value of a company not including external funding or the latest round of funding. Post-money thus includes the pre-money value plus the incremental injection of capital. Let us look at an example:

Suppose that an investor is looking to invest in a startup. Both parties agree that the company is worth $1 million and the investor will put in $250,000.

The ownership percentages will depend on whether this is a $1 million pre-money or post-money valuation. If the $1 million valuation is pre-money, the company is valued at $1 million before the investment and at $1.25 million after. If the $1 million valuation takes the $250,000 investment into consideration, it is referred to as post-money. Thus, in the pre-money case, the investor owns 20%. Why? The total valuation is $1.25M, which is the $1M pre-money plus $250K of capital, so the math translates to $250K / $1,250K = 20%. If the investor says that they will value the company at $1M post-money, what they are actually saying is that they are giving you a pre-money valuation of $750K. In other words, they will own 25% of the company rather than 20%. Your ownership goes down by five percentage points which, for all intents and purposes, is significant.
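The pre-money vs. post-money arithmetic above reduces to one line of code; here is a sketch using the example's own numbers.

```python
def ownership(investment, valuation, basis):
    """Investor ownership under a pre-money or post-money valuation quote.

    basis: "pre"  -> the quoted valuation excludes the new money
           "post" -> the quoted valuation includes the new money
    """
    post = valuation + investment if basis == "pre" else valuation
    return investment / post

# The example above: $250K going in against a $1M valuation.
print(ownership(250_000, 1_000_000, "pre"))   # → 0.2  (a 20% stake)
print(ownership(250_000, 1_000_000, "post"))  # → 0.25 (a 25% stake)
```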

3. When a round of financing is done, a security is issued in exchange for the cash received. You already have common stock, but that is not the security being exchanged. The company will issue preferred stock. Preferred stock comes with certain rights, preferences, privileges, and covenants; compared to common stock, it is a superior security. There are a number of important rights and privileges that investors secure via a preferred stock purchase, including a right to a board seat, information rights, a right to participate in future rounds to protect their ownership percentage (called a pro-rata right), a right to purchase any common stock that might come onto the market (called a right of first refusal), a right to participate alongside any common stock that might get sold (called a co-sale right), and an adjustment in the purchase price to reflect sales of stock at lower prices (called an anti-dilution right).

Let us examine this in greater detail now. There are two types of preferred: the regular vanilla Convertible Preferred and the Participating Preferred. As the latter name suggests, Participating Preferred allows the VC to receive back their invested capital and the cumulative dividends, if any, before common stockholders (that is, you), but also enables them to participate on an as-converted basis in the returns to you, the common stockholder. Here is the math: say the company raises $3M at a $3M pre-money valuation. By the pre-money math above, the post-money valuation is $6M, so the stake is 50%-50% owner-investor.

Let us say company sells for $25M. Now the investor has participating preferred or convertible preferred. How does the difference impact you, the stockholder or the founder. Here goes!

i. Participating Preferred. Investor gets their $3M back. There is still $22M left in the coffers. Investor splits 50-50 based on their participating preferred. You and Investor both take home $11M from the residual pool. Investor has $14M, and you have $11M. Congrats!

ii. Convertible Preferred. Investor gets 50% or $12.5M and you get the same – $12.5M. In other words, convertible preferred just got you a few more drinks at the bar. Hearty Congratulations!

Bear in mind that if the exit value is lower, the difference becomes more meaningful. Let us say the exit was $10M. The participating preferred holder gets $3M + $3.5M = $6.5M while you end up with $3.5M.
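The two payout waterfalls above can be captured in a small function. This is a sketch assuming a single investor, a 1x preference, and no accrued dividends, matching the example in the text.

```python
def payouts(exit_value, invested, investor_pct, participating):
    """Split exit proceeds between the investor and the common (founder) holders."""
    if participating:
        # Preference first, then share in the residual on an as-converted basis.
        residual = exit_value - invested
        investor = invested + residual * investor_pct
    else:
        # Convertible preferred: take the preference OR convert, whichever pays more.
        investor = max(invested, exit_value * investor_pct)
    return investor, exit_value - investor

# $3M raised at a $3M pre-money valuation -> a 50/50 split, as above.
print(payouts(25_000_000, 3_000_000, 0.5, participating=True))   # → (14000000.0, 11000000.0)
print(payouts(25_000_000, 3_000_000, 0.5, participating=False))  # → (12500000.0, 12500000.0)
print(payouts(10_000_000, 3_000_000, 0.5, participating=True))   # → (6500000.0, 3500000.0)
```

Note the `max` in the convertible branch: at a low exit, the convertible holder would take the $3M preference rather than converting, which is why the gap narrows as exits shrink.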

4. One of the key provisions is Liquidation Preferences. It can be a ticking time bomb. Careful! Some investors may ask for a multiple of their investment as a preference. This provision provides downside protection to investors. In the event of liquidation, the company has to pay back the capital injected for the preferred. That would be a 1X liquidation preference. However, you can have a 2X liquidation preference, which means the investor will get back twice what they injected. Most liquidation preferences range from 1X to 2X, although you can have higher liquidation preference multiples as well. Bear in mind, though, that this becomes important only when the company is forced to liquidate and sell off its assets. If all is gung-ho, this is a silent clause and no sweat off your brow.

5. Redemption rights. The right of redemption is the right to demand under certain conditions that the company buys back its own shares from its investors at a fixed price. This right may be included to require a company to buy back its shares if there has not been an exit within a pre-determined period. Failure to redeem shares when requested might result in the investors gaining improved rights, such as enhanced voting rights.

6. The terms could demand that a certain option pool, a pot of stock, be kept aside for existing and future employees or other service providers. It could range anywhere between 10% and 20% of the total stock. When you reserve this pool, you are cutting into your ownership stake. When you have a series of financings and each financing requires you to set aside a small pool, it dilutes you and your previous investors. In general, these pools are structured to give you headroom for at least 24 months to accommodate employee growth and provide incentives. The pool only becomes smaller with the passage of time.
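The compounding effect of reserving a pool at every round can be illustrated with a tiny sketch. The three 10% pools below are hypothetical, and the sketch isolates pool dilution only, ignoring the investor dilution that happens alongside it.

```python
def stake_after_pools(initial_stake, pools):
    """Founder stake after a series of financings, each reserving a new option
    pool expressed as a fraction of the post-round fully diluted cap table.
    Hypothetical illustration: ignores dilution from the new investors themselves."""
    stake = initial_stake
    for pool in pools:
        stake *= (1 - pool)
    return stake

# Three rounds, each carving out a 10% pool: 1.0 -> 0.9 -> 0.81 -> 0.729
print(round(stake_after_pools(1.0, [0.10, 0.10, 0.10]), 3))  # → 0.729
```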

7. Another term is the Anti-Dilution Provision. In its simplest form, anti-dilution rights are a zero-sum game; no one has an advantage over the other. However, they become important only when there is a down round. A down round means that the company is valued lower in a subsequent financing than it was previously. If a company is valued at $25M in Series A and $15M in Series B, the Series B would be considered a down round. There are two types of anti-dilution:

Full Ratchet Anti-Dilution: If the new stock is priced lower than the prior stock, the early investor's conversion price resets to the new, lower price. For example, if a prior investor paid $1.00 per share and the price was reset in a later round to $0.50, then the prior investor's shares convert into twice as much common stock. In other words, you are hit with major dilution, as are the later investors. This clause is a big hurdle for new investors.

Weighted Average Anti-Dilution: The old investor's conversion price is adjusted in proportion to the dilutive impact of the down round.
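Both flavors can be sketched as conversion-price adjustments. The broad-based weighted-average formula below is the common market formulation rather than anything specified in this post, and the share counts and round sizes are hypothetical.

```python
def full_ratchet(old_price, new_price):
    """Full ratchet: the conversion price simply resets to the lower new price."""
    return min(old_price, new_price)

def broad_based_weighted_average(old_price, new_price, shares_before, new_money):
    """Broad-based weighted-average adjustment:  NCP = OCP * (A + B) / (A + C)

    A = fully diluted shares outstanding before the down round
    B = shares the new money would have bought at the old price
    C = shares actually issued at the new, lower price
    """
    A = shares_before
    B = new_money / old_price
    C = new_money / new_price
    return old_price * (A + B) / (A + C)

# Hypothetical down round: $1.00 old price, $0.50 new price,
# 10M shares outstanding, $2M raised at the lower price.
print(full_ratchet(1.00, 0.50))  # → 0.5
print(round(broad_based_weighted_average(1.00, 0.50, 10_000_000, 2_000_000), 4))
```

The weighted-average price lands between $0.50 and $1.00 because it weighs how much cheap stock was actually issued, which is why it is gentler on the founders than a full ratchet.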

8. Pay to Play. These are clauses that work in your (the Company's) favor. Basically, investors have to invest some money in later financings, and if they do not, their rights may be reduced. However, while having these clauses may put your mind at ease, they may create problems in terms of syndicating or getting investments. Some investors are reluctant to put their money in when there are pay-to-play clauses in the agreement.

9. Right of First Refusal. A company has no obligation to sell stock in future financing rounds to existing investors. Some investors would like to participate and may seek pro-rata participation to keep their ownership stake the same post-financing. Some investors may even want super pro-rata rights, meaning they would be allowed to participate to such an extent that their new ownership in the company is greater than their previous ownership stake.

10. Board of Directors. A large board creates complexity; it is preferable to have a small but strategic board. New investors will require some representation. If too many investors request representation, the Company may have proportionally fewer internal representatives and may be outvoted on certain issues. Be aware of the dynamics of a mushrooming board!

11. Voting Rights. Investors may request certain veto authority or have rights to vote in favor of or against a corporate initiative. Company founders may want super-voting rights to exercise greater control. These matters are delicate and going one way or the other may cause personal issues among the participants. However, these matters can be easily resolved by essentially having carve-outs that spell out rights and encumbrances.

12. Drag-Along Provision. This might create an obligation on all shareholders of the company to sell their shares to a potential purchaser if a certain percentage of the shareholders (or of a specific class of shareholders) votes to sell to that purchaser. In early rounds, drag-along rights can often only be enforced with the consent of those holding at least a majority of the shares held by investors. These rights can be useful in the context of a sale where potential purchasers will want to acquire 100% of the shares of the company in order to avoid having responsibilities to minority shareholders after the acquisition. Many jurisdictions provide for such a process, usually when a third party has acquired at least 90% of the shares.

13. Representations and Warranties. Venture capital investors expect appropriate representations and warranties to be provided by key founders, management, and the company. Their primary purpose is to give the investors a complete and accurate understanding of the current condition of the company and its past history, so that the investors can evaluate the risks of investing in the company before subscribing for their shares. The representations and warranties will typically cover areas such as the legal existence of the company (including all share capital details), the company's financial statements, the business plan, asset ownership (in particular intellectual property rights), liabilities (contingent or otherwise), material contracts, employees, and litigation.

It is very rare that a company is in a perfect state. The warrantors have the opportunity to set out issues which ought to be brought to the attention of the new investors through a disclosure letter or schedule of exceptions. This is usually provided by the warrantors and discloses detailed information concerning any exceptions to or carve-outs from the representations and warranties. If a matter is referred to in the disclosure letter, the investors are deemed to have notice of it and will not be able to claim for breach of warranty in respect of that matter.

Investors expect those providing representations and warranties about the company to reimburse the investors for the diminution in share value attributable to the representations and warranties being inaccurate, or to exceptions that have not been fully disclosed. There are usually limits to the exposure of the warrantors (i.e., a dollar cap on the amount that can be recovered from individual warrantors); these are matters for negotiation when the documentation is being finalized. The limits may vary according to the severity of the breach, the size of the investment, and the financial resources of the warrantors. The limits which typically apply to founders are lower than for the company itself (where the company limit will typically be the sum invested, or that sum plus a minimum return).

14. Information Rights. In order for venture capital investors to monitor the condition of their investment, it is essential that the company provides them with certain regular updates concerning its financial condition and budgets, as well as a general right to visit the company and examine its books and records. This sometimes includes direct access to the company’s auditors and bankers. These contractually defined obligations typically include timely transmittal of annual financial statements (including audit requirements, if applicable), annual budgets, and audited monthly and quarterly financial statements.

15. Exit. Venture capital investors want to see a path from their investment in the company leading to an exit, most often in the form of a disposal of their shares following an IPO or by participating in a sale. Sometimes the threshold for a liquidity event will be a qualified exit. If used, it will mean that a liquidity event will only occur, and conversion of preferred shares will only be compulsory, if an IPO falls within the definition of a qualified exit. A qualified exit is usually defined as a sale, or an IPO on a recognized investment exchange, which, in either case, is of a certain value to ensure the investors get a minimum return on their investment. Consequently, investors usually require undertakings from the company and other shareholders that they will endeavor to achieve an appropriate share listing or trade sale within a limited period of time (typically anywhere between 3 and 7 years, depending on the stage of investment and the maturity of the company). If such an exit is not achieved, investors often build in structures which will allow them to withdraw some or all of their investment.

16. Non-Compete, Confidentiality Agreements. It is good practice for any company to have certain types of agreements in place with its employees. For technology start-ups, these generally include Confidentiality Agreements (to protect against loss of company trade secrets, know-how, customer lists, and other potentially sensitive information), Intellectual Property Assignment Agreements (to ensure that intellectual property developed by academic institutions or by employees before they were employed by the company will belong to the company), and Employment Contracts or Consultancy Agreements (which will include provisions to ensure that all intellectual property developed by a company's employees belongs to the company). Where the company is a spin-out from an academic institution, the founders will frequently be consultants of the company and continue to be employees of the academic institution, at least until the company is more established. Investors also seek to have key founders and managers enter into Non-Compete Agreements with the company. In most cases, the investment in the company is based largely on the value of the technology and the experience of the management team and founders. If they were to leave the company to create or work for a competitor, this could significantly affect the company's value. Investors normally require that these agreements be included in the Investment Agreement as well as in the Employment/Consultancy Agreements with the founders and senior managers, to enable them to have a right of direct action against the founders and managers if the restrictions are breached.

“The world’s entire scientific … heritage … is increasingly being digitized and locked up by a handful of private corporations… The Open Access Movement has fought valiantly to ensure that scientists do not sign their copyrights away but instead ensure their work is published on the Internet, under terms that allow anyone to access it.” – Aaron Swartz

Information, in the context of scholarly articles by researchers at universities and think tanks, is not a zero-sum game. In other words, one person having access does not leave anyone else with less. When you start erecting "Berlin walls" in the information arena within the halls of learning, then learning itself is compromised. In fact, contributing or granting the intellectual estate to the creative commons serves a higher purpose in society: access to information and, hence, a feedback mechanism that ultimately enhances the value of the end product itself. How? Because the product is now distributed across a broader and more diverse audience, and it is open to further critical analysis.

The universities have built a racket. They have erected a Chinese wall between learning in a cloistered environment and the world of those who are not immediate participants. The Guardian wrote an interesting article on this matter, and a very apt quote puts it all together.

“Academics not only provide the raw material, but also do the graft of the editing. What’s more, they typically do so without extra pay or even recognition – thanks to blind peer review. The publishers then bill the universities, to the tune of 10% of their block grants, for the privilege of accessing the fruits of their researchers’ toil. The individual academic is denied any hope of reaching an audience beyond university walls, and can even be barred from looking over their own published paper if their university does not stump up for the particular subscription in question.

This extraordinary racket is, at root, about the bewitching power of high-brow brands. Journals that published great research in the past are assumed to publish it still, and – to an extent – this expectation fulfils itself. To climb the career ladder academics must get into big-name publications, where their work will get cited more and be deemed to have more value in the philistine research evaluations which determine the flow of public funds. Thus they keep submitting to these pricey but mightily glorified magazines, and the system rolls on.”

JSTOR is a not-for-profit organization that has invested heavily in providing an online system for archiving, accessing, and searching digitized copies of over 1,000 academic journals. More recently, I noticed some effort on their part to allow the public access to only three articles over a period of 21 days. This stinks! This policy reflects an intellectual snobbery of beyond-Himalayan proportions. The only folks that have access to these academic journals and studies are professors and researchers affiliated with a university and university libraries. Aaron Swartz noted the injustice of hoarding such knowledge and tried to distribute a significant proportion of JSTOR’s archive through one or more file-sharing sites. And what happened thereafter was perhaps one of the biggest misapplications of justice. The same justice that disallows asymmetry of information on Wall Street is being deployed to preserve the asymmetry of information in the halls of learning.

MSNBC contributor Chris Hayes criticized the prosecutors, saying “at the time of his death Aaron was being prosecuted by the federal government and threatened with up to 35 years in prison and $1 million in fines for the crime of—and I’m not exaggerating here—downloading too many free articles from the online database of scholarly work JSTOR.”

The Associated Press reported that Swartz’s case “highlights society’s uncertain, evolving view of how to treat people who break into computer systems and share data not to enrich themselves, but to make it available to others.”

Chris Soghoian, a technologist and policy analyst with the ACLU, said, “Existing laws don’t recognize the distinction between two types of computer crimes: malicious crimes committed for profit, such as the large-scale theft of bank data or corporate secrets; and cases where hackers break into systems to prove their skillfulness or spread information that they think should be available to the public.”

Kelly Caine, a professor at Clemson University who studies people’s attitudes toward technology and privacy, said Swartz “was doing this not to hurt anybody, not for personal gain, but because he believed that information should be free and open, and he felt it would help a lot of people.”

And then there were some modest reservations, and Swartz’s actions were attributed to reckless judgment. I contend that this does injustice to someone of Swartz’s commitment and intellect … the recklessness was his inability to grasp the notion that an imbecile in the system would pursue 35 years of imprisonment and a $1M fine … it was not that he was unaware of what he was doing, but he believed, as do many, that scholarly academic research should be available as a free-for-all.

We have a Berlin wall that needs to be taken down. Swartz started that work but was unable to see it through. It is important not to rest in this endeavor; everyone ought to actively petition their local congressman to push bills that will allow open access to these academic articles.

John Maynard Keynes had warned of the folly of “shutting off the sun and the stars because they do not pay a dividend”, because what is at stake here is the reach of the light of learning. Aaron was at the vanguard leading that movement, and we should persevere to become those points of light that will enable JSTOR to disseminate the information that they guard so zealously.

“We chose steel and extra wide panels of glass, which is almost like crystal. These are honest materials that create the right sense of strength and clarity between old and new, as well as a sense of transparency in the center of the institution that opens the campus up to the street.”

Organizational transparency is the deliberate attempt by management to architect an organization that encourages open access to information, participation, and decision making, which ultimately creates a higher level of trust among the stakeholders.

The demand for transparency is becoming quite common. The users of goods and services are provoking the transparency question:

Shareholder demand for increased financial accountability in the corporate world

Increased media diligence

Increased regulatory diligence and requirements

Increased demand by social interest and environmental groups

Demands to see and check on compliance based on internal and external policies

There are two big categories that organizations must consider and subsequently address when putting systems in place to promote transparency:

External Transparency

Internal Transparency

External Transparency:

The key elements are that organizations have to make information accessible (while also weighing the risk of divulging too much), make the information actionable, enable sharing and collaboration, manage risks, and establish protocols and channels of communication that are open and democratic.

For example, it is important that employees be able to trace the integrity, quality, consistency and validity of information back to its creator. In an open environment, transparency also unravels the landscape of risks that an organization may be deliberately taking or may be carrying unknowingly. It bubbles up inappropriate decisions that can be dwelt on collectively by management and employees, and thus risks and improprieties are considerably mitigated. The other obvious benefit is that it surfaces overlap, wherein people spread across the organization may be doing the same thing in a similar manner. It affords a better shared-services platform and also builds a knowledge base and domain expertise that employees can tap into.

Internal Transparency:

The organization has to create the structure to encourage people to be transparent. Generally, people come to work with a mask on. What does that mean? Employees focus on the job at hand, but they may be interested in adding value in other ways besides their primary responsibility. In fact, they may want to approach their primary responsibility in an ingenious manner that would help the organization. But the mask or veil that they don separates their personal interests and passions from the obligations that the job demands. Now how cool would it be if the organization set up a remarkably safe system wherein the distinction between employees’ personal interests and their primary obligations materially dissolves? What I bet you would discover is higher levels of employee engagement. In addressing internal transparency, the organization would have successfully mined and surfaced the personal interests of an employee and laid them out among all participants in a manner that benefits the organization, the employee and their peers.

Thus, it is important to address both internal and external transparency. However, implementing a transparency ethos is not immune to challenges: increased transparency may distort intent, slow processes, increase organizational vulnerabilities, create psychological dissonance among employees or groups, create new factions and sometimes even result in poor decisions. Despite the challenges, the aggregate benefit of increased transparency over time would outweigh the costs. In the end, if the organization continues to formalize transparency, it will simultaneously create and encourage trust and the proper norms and mores that lay the groundwork for an effective workforce.

Reputation is often an organization’s most valuable asset. It is built over time through a focused commitment and response to members’ wants, needs, and expectations. A commitment to transparency will increasingly become a litmus test used to define an association’s reputation and will be used as a value judgment for participation. By gaining a reputation for value through extensive communications with stakeholders and a solid track record of truthfulness and high disclosure of information, associations will win the respect and involvement of current and future members.

Kanter and Fine use a great analogy: transparency is like an ocean sponge. These pore-bearing organisms let up to twenty thousand times their volume in water pass through them every day. Sponges can withstand this open, constant flow without inhibiting it because they are anchored to the ocean floor. Transparent organizations behave like these sponges: anchored to their mission, yet allowing people in and out easily. Transparent organizations actually benefit from the constant flow of people and information.

Plans to implement transparency

Businesses are fighting for trust from their intended audiences. Shel Holtz and John Havens, authors of “Tactical Transparency,” state that the realities of doing business in today’s environment “have emerged as the result of recent trends: declining trust in business as usual and the increased public scrutiny under which companies find themselves thanks to the evolution of social media.” It is important, now more than ever, for organizations to use tools successfully to be sincerely but prudently transparent in ways that matter to their stakeholders.

“Tactical Transparency” adopted the following definition for transparency:

Transparency is the degree to which an organization shares the following with its stakeholder publics:

▪ Its leaders: The leaders of transparent companies are accessible and are straightforward when talking with members of key audiences.

▪ Its employees: Employees of transparent companies are accessible, can reinforce the public view of the company, and are able to help people where appropriate.

▪ Its values: Ethical behavior, fair treatment, and other values are on full display in transparent companies.

▪ Its culture: How a company does things is more important today than what it does. The way things are done is not a secret in transparent companies.

▪ The results of its business practices, both good and bad: Successes, failures, problems, and victories all are communicated by transparent companies.

▪ Its business strategy: Of particular importance to the investment community but also of interest to several other audiences, a company’s strategy is a key basis for investment decisions. Misalignment of a company’s strategy and investors’ expectations usually results in disaster.

▪ Operational Transparency: That involves creating or following an ethics code, conflict-of-interest policies, and any other guidelines your organization creates.

▪ Transactional Transparency: This type of strategy provides guidelines and boundaries for employees so they can participate in the conversation in and out of the office. Can they have a personal blog that discusses work-related issues?

▪ Lifestyle Transparency: This is personalized information coming from sites like Facebook and Twitter. These channels require constant transparency and authenticity.

Create an Action Plan around policies and circumstances to promote transparency:

Holtz and Havens outline specific situations where tactical transparency can transform a business.

We are entering a new age wherein we are interested in developing a finer understanding of relationships between businesses and customers, organizations and employees, products and how they are being used, and how different aspects of the business and the organization connect to produce meaningful, actionable and relevant information. We are seeing a lot of data, and the old tools used to manage, process and gather insights from it (spreadsheets, SQL databases, etc.) do not scale to current needs. Thus, Big Data is becoming a framework for approaching how to process, store and cope with the reams of data being collected.

According to IDC, it is imperative that organizations and IT leaders focus on the ever-increasing volume, variety and velocity of information that forms big data.

Volume. Many factors contribute to the increase in data volume – transaction-based data stored through the years, text data constantly streaming in from social media, increasing amounts of sensor data being collected, etc. In the past, excessive data volume created a storage issue. But with today’s decreasing storage costs, other issues emerge, including how to determine relevance amidst the large volumes of data and how to create value from data that is relevant.

Variety. Data today comes in all types of formats – from traditional databases to hierarchical data stores created by end users and OLAP systems, to text documents, email, meter-collected data, video, audio, stock ticker data and financial transactions. By some estimates, 80 percent of an organization’s data is not numeric! But it still must be included in analyses and decision making.

Velocity. According to Gartner, velocity “means both how fast data is being produced and how fast the data must be processed to meet demand.” RFID tags and smart metering are driving an increasing need to deal with torrents of data in near-real time. Reacting quickly enough to deal with velocity is a challenge to most organizations.

SAS has added two additional dimensions:

Variability. In addition to the increasing velocities and varieties of data, data flows can be highly inconsistent, with periodic peaks. Is something big trending in social media? Daily, seasonal and event-triggered peak data loads can be challenging to manage, especially with social media involved.

Complexity. When you deal with huge volumes of data, it comes from multiple sources. It is quite an undertaking to link, match, cleanse and transform data across systems. However, it is necessary to connect and correlate relationships, hierarchies and multiple data linkages or your data can quickly spiral out of control. Data governance can help you determine how disparate data relates to common definitions and how to systematically integrate structured and unstructured data assets to produce high-quality information that is useful, appropriate and up-to-date.
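As a tiny, hedged sketch of that link-match-cleanse step, consider two hypothetical systems that describe the same customer slightly differently; the names, fields and figures below are invented for illustration. Normalizing a common key before joining is the simplest form of the data-governance matching described above:

```python
# Hypothetical records from two source systems describing the same customer.
crm_records = {"  ACME Corp ": {"region": "EMEA"}}
billing_records = {"acme corp": {"balance": 1200}}

def normalize(name):
    # Cleanse: strip whitespace and case differences before matching.
    return name.strip().lower()

# Re-key both sources on the normalized name so records can be matched.
crm = {normalize(k): v for k, v in crm_records.items()}
billing = {normalize(k): v for k, v in billing_records.items()}

# Link and merge: union of keys, fields combined per matched customer.
merged = {
    key: {**crm.get(key, {}), **billing.get(key, {})}
    for key in crm.keys() | billing.keys()
}
print(merged["acme corp"])  # {'region': 'EMEA', 'balance': 1200}
```

At scale the same idea requires fuzzy matching and governance rules, but the principle of agreeing on a common definition before integrating sources is unchanged.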

So, to reiterate, Big Data is a framework stemming from the realization that data has gathered significant pace and that its growth has exceeded an organization’s capacity to handle, store and analyze it in a manner that offers meaningful insights into the relationships between data points. I call this a framework, unlike other materials that call Big Data a consequence of the inability of organizations to handle mass amounts of data. I refer to Big Data as a framework because it sets the parameters around an organization’s decision as to when and which tools must be deployed to address data scalability issues.

Thus, to put the appropriate parameters around when an organization must consider Big Data as part of its analytics roadmap in order to better understand the patterns in its data, it has to answer the following ten questions:

What are the different types of data that should be gathered?

What are the mechanisms that have to be deployed to gather the relevant data?

How should the data be processed, transformed and stored?

How do we ensure that there is no single point of failure in data storage and data loss that may compromise data integrity?

What are the models that have to be used to analyze the data?

How are the findings of the data to be distributed to relevant parties?

How do we assure the security of the data that will be distributed?

What mechanisms do we create to implement feedback against the data to preserve data integrity?

How do we morph the big data model into new forms that accounts for new patterns to reflect what is meaningful and actionable?

How do we create a learning path for the big data model framework?

Some of the existing literature has commingled the Big Data framework with analytics. In fact, the literature has gone on to make a rather assertive claim: that Big Data and predictive analytics should be viewed in the same vein. Nothing could be further from the truth!

There are several tools available in the market to do predictive analytics against a set of data that may not qualify for the Big Data framework. While I was the CFO at Atari, we deployed business intelligence tools using MicroStrategy, and MicroStrategy had predictive modules. In my recent past, we explored SAS and Minitab tools for predictive analytics. In fact, even Excel can do multivariate analysis, ANOVA, regression and best-fit-curve analysis. These analytical techniques have been part of the analytics arsenal for a long time. Different data sizes may need different tools to instantiate relevant predictive analysis. This is a very important point, because companies that do not have Big Data ought to seriously reconsider their strategy of what tools and frameworks to use to gather insights. I have known companies that have gone the Big Data route although all data points (excuse my pun), even after incorporating capacity and forecasts, suggest that alternative tools are more cost-effective than implementing Big Data solutions. Big Data is not a one-size-fits-all model. It is an expensive implementation. However, for the right data size, which in this case would be very large, a Big Data implementation would be extremely beneficial and cost-effective in terms of total cost of ownership.
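To illustrate the point that modest data does not need a Big Data stack, here is the kind of best-fit-line regression a spreadsheet produces, sketched in plain Python. The monthly revenue figures are hypothetical:

```python
def fit_line(xs, ys):
    """Return (slope, intercept) of the ordinary least-squares line."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope is the covariance of x and y divided by the variance of x.
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical monthly revenue, in thousands.
months = [1, 2, 3, 4, 5, 6]
revenue = [100, 110, 120, 130, 140, 150]

slope, intercept = fit_line(months, revenue)
forecast_month_7 = slope * 7 + intercept
print(round(forecast_month_7))  # 160
```

A few lines like these, or their Excel equivalent, can carry a surprising amount of forecasting before distributed tooling earns its cost.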

Areas where Big Data Framework can be applied!

Some areas lend themselves to the application of the Big Data Framework. I have identified broadly four key areas:

Hadoop is becoming a more widely accepted tool for addressing Big Data needs. It grew out of papers Google published on MapReduce and the Google File System, which described how Google indexed the structured and text information it was collecting and presented meaningful, actionable results to users quickly. Hadoop itself is an open-source implementation of those ideas, and Yahoo invested heavily in its further development, hardening Hadoop for enterprise applications.

Hadoop runs on a large number of machines that don’t share memory or disks, with the Hadoop software running on each of these machines. Thus, if you have, for example, over 10 gigabytes of data, you take that data and spread it across different machines; Hadoop tracks where all of this data resides. The individual servers or machines are called nodes, and a group of nodes working together over the distributed data is called a cluster. Each server operates on its own little piece of the data, and once the data is processed, the results are delivered back to the client as a unified whole. The mechanism that consolidates the disparate pieces of information residing on the various nodes into one unified result is MapReduce, an important component of Hadoop: a map step processes each piece in parallel, and a reduce step combines the outputs. You will also hear of Hive, which is a data-warehouse layer built on top of Hadoop; it organizes structured and unstructured data so that queries against it run as MapReduce jobs, with redundancy handled across the cluster and a unified answer returned.
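That division of labor can be sketched in miniature. This is plain Python imitating the idea, not the actual Hadoop API; the shards and their contents are made up:

```python
from collections import defaultdict

def map_phase(shard):
    # Each "node" emits (word, 1) pairs for its own piece of the data.
    return [(word, 1) for line in shard for word in line.split()]

def reduce_phase(mapped_pairs):
    # Pairs sharing a key are combined into a single count.
    counts = defaultdict(int)
    for word, n in mapped_pairs:
        counts[word] += n
    return dict(counts)

# Pretend each list is the slice of data living on a separate node.
shards = [
    ["big data big insights"],
    ["big data small tools"],
]

# Map runs independently on every node; reduce merges the results
# into the unified whole that is returned to the client.
mapped = [pair for shard in shards for pair in map_phase(shard)]
word_counts = reduce_phase(mapped)
print(word_counts["big"])  # 3
```

In real Hadoop the map tasks run on the nodes that hold each block of data and the framework shuffles intermediate pairs between machines, but the map-then-reduce shape is the same.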

Personally, I have always been interested in Business Intelligence. I have always considered BI as a stepping stone, in the new age, to be a handy tool to truly understand a business and develop financial and operational models that are fairly close to the trending insights that the data generates. So my ear is always to the ground as I follow the developments in this area … and though I have not implemented a Big Data solution, I have always been and will continue to be interested in seeing its applications in certain contexts and against the various use cases in organizations.

MECE is a thought tool that has been used systematically at McKinsey. It stands for Mutually Exclusive, Collectively Exhaustive. We will go into both of these components in detail and then relate them to the dynamics of an organizational mindset. The presumption in this note is that the organizational mindset has been engraved over time or is being driven by the leadership. We are looking at MECE since it represents a tool used by the most blue-chip consulting firm in the world. And while doing that, we will, by the end of the article, arrive at the conclusion that this framework alone will not be a panacea for assessing every problem; rather, this framework has to reconcile with the active knowledge that most things do not fall neatly into the MECE framework, and thus an additional systems framework is needed to amplify our understanding for problem solving while leaving room for chance.

So to apply the MECE technique, first you define the problem that you are solving for. Once you are past the definition phase, well – you are now ready to apply the MECE framework.

MECE is a framework used to organize information which is:

Mutually exclusive: Information should be grouped into categories so that each category is separate and distinct without any overlap; and

Collectively exhaustive: All of the categories taken together should deal with all possible options without leaving any gaps.

In other words, once you have defined a problem, you figure out the broad categories that relate to it and then brainstorm through ALL of the options associated with those categories. So think of it as a mental construct: you move across a horizontal line with distinct, well-defined shades representing categories, and each of those partitions has a vertical construct with all of the options that exhaustively explain that shade. Once you have gone through that exercise, which is no mean feat, you will then be looking at an artifact that addresses the problem. And after you have done that, you individually look at every set of options and its relationship to its distinctive category … and hopefully you are well on your path to coming up with relevant solutions.
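The two MECE properties can even be checked mechanically for a proposed grouping. A minimal sketch, with hypothetical expense categories standing in for the "shades":

```python
def is_mece(categories, universe):
    """categories: dict of name -> set of items; universe: set of all items."""
    groups = list(categories.values())
    # Mutually exclusive: no item may appear in two categories.
    mutually_exclusive = all(
        groups[i].isdisjoint(groups[j])
        for i in range(len(groups))
        for j in range(i + 1, len(groups))
    )
    # Collectively exhaustive: together the categories cover everything.
    collectively_exhaustive = set().union(*groups) == set(universe)
    return mutually_exclusive and collectively_exhaustive

# Hypothetical expense items and a proposed categorization.
expenses = {"rent", "payroll", "travel", "software"}
grouping = {
    "fixed": {"rent", "payroll"},
    "variable": {"travel", "software"},
}
print(is_mece(grouping, expenses))  # True
```

The hard part of MECE is, of course, choosing categories worth checking; the check itself is the easy, mechanical tail end of the exercise.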

Now some may argue that my understanding of MECE is very simplistic. It may very well be. But I can assure you that it captures the essence of a very widely used framework in consulting organizations. This framework has been imported into large organizations and has cascaded down to organizations of different scales ever since.

Here is a link that would give you a deeper understanding of the MECE framework:

Now we are going to dig a little deeper. Allow me to digress and take you down a path less travelled. We will circle back to MECE and organizational leadership in a few moments. One of the quotes that has left a lasting impression on me is by the great Nobel Prize-winning physicist Richard Feynman.

“I have a friend who’s an artist and has sometimes taken a view which I don’t agree with very well. He’ll hold up a flower and say “look how beautiful it is,” and I’ll agree. Then he says “I as an artist can see how beautiful this is but you as a scientist takes this all apart and it becomes a dull thing,” and I think that he’s kind of nutty. First of all, the beauty that he sees is available to other people and to me too, I believe. Although I may not be quite as refined aesthetically as he is … I can appreciate the beauty of a flower. At the same time, I see much more about the flower than he sees. I could imagine the cells in there, the complicated actions inside, which also have a beauty. I mean it’s not just beauty at this dimension, at one centimeter; there’s also beauty at smaller dimensions, the inner structure, also the processes. The fact that the colors in the flower evolved in order to attract insects to pollinate it is interesting; it means that insects can see the color. It adds a question: does this aesthetic sense also exist in the lower forms? Why is it aesthetic? All kinds of interesting questions which the science knowledge only adds to the excitement, the mystery and the awe of a flower! It only adds. I don’t understand how it subtracts.”

The above quote by Feynman lays the groundwork for understanding two different approaches: the artist approaches the observation of the flower from the synthetic standpoint, whereas Feynman approaches it from an analytic standpoint. The two views are not antithetical to one another; in fact, you need both to form a holistic view and arrive at a conclusion, for the sum is greater than the parts. Feynman does not address the essence of beauty that the artist puts forth; he looks at the beauty of how the components and their mechanics interact, and how that adds to our understanding of the flower. This is very important because the following discussion will explore another concept to drive home this difference between analysis and synthesis.

There are two possible ways of gaining knowledge. Either we can proceed from the construction of the flower (the Feynman method), and then seek to determine the laws of the mutual interaction of its parts as well as its response to external stimuli; or we can begin with what the flower accomplishes and then attempt to account for this. By the first route we infer effects from given causes, whereas by the second route we seek the causes of given effects. We can call the first route synthetic, and the second analytic.

We can easily see how the cause effect relationship is translated into a relationship between the analytic and synthetic foundation.

A system’s internal processes — i.e. the interactions between its parts — are regarded as the cause of what the system, as a unit, performs. What the system performs is thus the effect. From these very relationships we can immediately recognize the requirements for the application of the analytic and synthetic methods.

The synthetic approach — i.e. to infer effects on the basis of given causes — is therefore appropriate when the laws and principles governing a system’s internal processes are known, but when we lack a detailed picture of how the system behaves as a whole.

Another example … we do not have a very good understanding of the long-term dynamics of galactic systems, nor even of our own solar system. This is because we cannot observe these objects for the thousands or even millions of years which would be needed in order to map their overall behavior.

However, we do know something about the principles, which govern these dynamics, i.e. gravitational interaction between the stars and planets respectively. We can therefore apply a synthetic procedure in order to simulate the gross dynamics of these objects. In practice, this is done with the use of computer models which calculate the interaction of system parts over long, simulated time periods.

The analytical approach — drawing conclusions about causes on the basis of effects — is appropriate when a system’s overall behavior is known, but we do not have clear or certain knowledge of the system’s internal processes or the principles governing them. There are also a great many systems for which we neither have a clear and certain conception of how they behave as a whole, nor fully understand the principles at work that cause that behavior. Organizational behavior is one such example, since it introduces the fickle spirits of employees that, in the aggregate, create a distinct character in the organization.

Leibniz was among the first to define analysis and synthesis as modern methodological concepts:

“Synthesis … is the process in which we begin from principles and [proceed to] build up theorems and problems … while analysis is the process in which we begin with a given conclusion or proposed problem and seek the principles by which we may demonstrate the conclusion or solve the problem.”

So we have wandered down this path of analysis and synthesis, and now we will circle back to MECE and the organization. The MECE framework is a prime example of the application of analysis in an organizational structure. The underlying hypothesis is that applying the framework will illuminate and add clarity to the problems we are solving for. But here is the problem: the approach can lead to paralysis by analysis. Applied exhaustively, the framework can leave one lost in the weeds, whereas it is just as important to view the forest. So organizations have to step back and assess at what point to stop the analysis, i.e. to decide that enough information has been gathered, and at what point to set out toward discovering a set of principles that will govern the actions taken to solve a set of problems. It is almost always impossible to gather all the information needed to make the best decision, especially as speed, iteration, distinguishing oneself from the herd quickly, stamping a clear brand, etc. become the hallmarks of great organizations.

Applying the synthetic principle in addition to “MECE think” leaves room for error and sub-optimal solutions. But it crowdsources the limitless power of imagination and pattern thinking that will allow the organization to make critical breakthroughs in innovative thinking. It is thus important that both principles are promulgated by the leadership as coexisting principles that drive an organization forward. Doing so ignites employee engagement, and it absorbs the stochastic errors that result when employees have not checked off every MECE condition.

In conclusion, it is important that the organization and its leadership set its architecture upon the traditional pillars of analysis and synthesis – MECE and systems thinking. And this architecture serves to be the springboard for the employees that allows for accidental discoveries, flights of imagination, Nietzschean leaps that transform the organization toward the pathway of innovation, while still grounded upon the bedrock of facts and empirical observations.
