The Search for Suspect Accounting


Financial analysts and regulators know a lot about how companies falsify accounting and misrepresent earnings. For example, if a company is moving items off the balance sheet or reporting earnings that move in the opposite direction of cash flows, it’s time to pay close attention. They also know that a large merger or acquisition is a common backdrop for earnings management. And they are familiar with the notion that a sudden jump in accruals suggests that a firm’s financial statements may not be a precise portrait of its economic health.

But what they and others know about accounting fraud or the misrepresentation of earnings is far outweighed by what they don’t know: Who’s manipulating earnings (legally or illegally) and how widespread is it? What percentage of accounting fraud do regulators and analysts miss? Is there any way to get better at catching the perpetrators?


In a study released in January, a group of nearly 400 CFOs said they believe that in a given period, one-fifth of companies are “distorting” earnings — that is, following the letter of generally accepted accounting principles but not necessarily the spirit. On average, the earnings distortion is as large as 10 cents on every $1 of earnings, the CFOs said. (The study, “The Misrepresentation of Earnings,” by Ilia Dichev of Emory University and John Graham and Campbell Harvey of Duke University, is based on an earlier survey of finance chiefs conducted with the assistance of CFO magazine.) Even if the problem is only half as prevalent as those finance chiefs think, it could impose a hefty premium on the cost of capital and damage the careers of senior executives once the shady dealings of such companies come to light.

Why do companies falsify or manage earnings? Many misstating firms trade at high valuations prior to being caught by the Securities and Exchange Commission, so it appears executives are trying to cover up a slowdown in financial performance, according to “Predicting Material Accounting Misstatements,” a 2011 paper by Patricia Dechow of the University of California, Berkeley, and three co-authors. Indeed, in the Dichev study, CFOs said “to influence stock price” was the top motivation for manipulating earnings within GAAP. Other motivations cited included the pressure to hit benchmarks, senior managers’ fear that poor performance will harm their careers, and the need to avoid violating debt covenants.

But close to 60% of the CFOs said something else — something disturbing: that they and their peers may be motivated to use accounting techniques to paint a rosy picture of earnings “because it will likely go undetected.” Amplifying the answer, one finance chief told the researchers that, depending on the industry, a firm could misrepresent earnings for two to three years without getting caught by an analyst. Indeed, the CFO said, it may take up to five years for anyone to uncover the sleight of hand.

Knowing the Score

Earnings manipulators are hard to spot, even when manipulation turns to fraud (see “Spotted Before the Fall,” next page). Studying firms that were subject to enforcement actions by the SEC, Dechow and co-authors Weili Ge of the University of Washington, Chad Larson of Washington University in St. Louis, and Richard Sloan of the University of California, Berkeley, found that companies that misreport earnings can look extremely vibrant.

During the period when companies misstated earnings, Dechow and her colleagues found, their cash margins were shrinking but cash sales were rising. Earnings manipulators were also often expanding their capital bases and operations. Finance-raising and related off-balance-sheet activities were also more common during misstatement periods.

In addition, price-earnings ratios and market-to-book ratios were often unusually high, and “misstating firms [had] unusually strong stock-return performance in the years prior to misstatement,” according to the study.

Considering the sheer computing power available for analyzing data, it seems as if analysts and regulators should be better at catching shoddy accounting and outright fraud, even if a firm’s public image doesn’t scream “bad guy.”

Indeed, models are being developed to spot earnings management, mostly in academia. A frequently cited measure of earnings management is the “M-Score,” developed by M.D. Beneish of Indiana University in the mid-1990s. The M-Score is a composite of eight ratios designed to “capture either financial statement distortions that can result from earnings manipulation or indicate a predisposition to engage in earnings manipulation,” wrote Beneish in a paper introducing the measure.

Beneish’s ratios are designed to capture unusual accumulation in receivables; unusual expense capitalization and declines in depreciation; and the extent to which accounting profits are supported by cash profits. Beneish validated the M-Score by correctly identifying 76% of firms subject to SEC accounting enforcement actions during a 10-year period.
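Beneish combines his eight ratios into a single score using fixed weights. The minimal Python sketch below uses the weights commonly quoted from the original paper (verify them against the source before relying on them); the input indices are assumed to be precomputed from the financial statements:

```python
def beneish_m_score(r):
    """Beneish M-Score from the eight component indices.

    `r` maps each index name to its value: DSRI (days' sales in
    receivables index), GMI (gross margin index), AQI (asset quality
    index), SGI (sales growth index), DEPI (depreciation index),
    SGAI (SG&A expense index), TATA (total accruals to total assets),
    and LVGI (leverage index).
    """
    # Weights as commonly quoted from Beneish's original model;
    # check them against the paper before relying on this sketch.
    return (-4.84
            + 0.920 * r["DSRI"]
            + 0.528 * r["GMI"]
            + 0.404 * r["AQI"]
            + 0.892 * r["SGI"]
            + 0.115 * r["DEPI"]
            - 0.172 * r["SGAI"]
            + 4.679 * r["TATA"]
            - 0.327 * r["LVGI"])

# A "neutral" firm: every year-over-year index at 1.0, zero accruals.
neutral = {k: 1.0 for k in ("DSRI", "GMI", "AQI", "SGI", "DEPI", "SGAI", "LVGI")}
neutral["TATA"] = 0.0
score = beneish_m_score(neutral)  # scores above roughly -1.78 are
# commonly read as flagging possible manipulation
```

A firm with every index at its neutral value scores about -2.48, comfortably below the commonly cited flag threshold; unusual receivables growth or large accruals push the score upward.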

Expanding on the M-Score was the F-Score, developed by Dechow, Ge, and Sloan in 2011. They also examined firms that were subject to SEC enforcement actions. The F-Score focuses not just on accrual quality but also on financial performance, nonfinancial measures (in particular, abnormal reductions in the number of employees), off-balance-sheet activities, and market-based measures for identifying misstatements.

In another recent, notable study, “Financial Statement Irregularities: Evidence from the Distributional Properties of Financial Statement Numbers,” Dan Amiram and Ethan Rouen of Columbia University and Zahn Bozanic of The Ohio State University created a measure to estimate the degree of financial reporting irregularities for a company in a given year, using “a method used by forensic investigators and auditors.” The measure is based on Benford’s Law, or the law of first digits, which refers to the frequency distribution of digits in real-life sources of data. According to Benford’s Law, for example, in a given distribution the number 1 occurs as the leading digit about 30% of the time.
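Under Benford's Law the expected frequency of each leading digit d is log10(1 + 1/d), so a simple screen compares a firm's observed first-digit frequencies against that curve. The sketch below summarizes the gap as a mean absolute deviation; that is one common summary statistic, not necessarily the exact measure Amiram and his co-authors construct:

```python
import math
from collections import Counter

def leading_digit(x):
    """First significant digit of a nonzero number."""
    x = abs(x)
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def benford_mad(values):
    """Mean absolute deviation of observed first-digit frequencies
    from the Benford expectation log10(1 + 1/d)."""
    counts = Counter(leading_digit(v) for v in values if v != 0)
    n = sum(counts.values())
    return sum(abs(counts[d] / n - math.log10(1 + 1 / d))
               for d in range(1, 10)) / 9

# Benford's expected frequency of a leading 1 is log10(2), about 30%.
```

Run over every line item in a filing, a larger deviation suggests the reported numbers depart from the distribution that naturally occurring financial data tend to follow.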

Amiram’s statistically driven measure avoids a basic problem of the M-Score and F-Score: a model for detecting earnings manipulators that is based only on those that have been caught by the SEC is inherently skewed. The researchers say their measure is also superior because it “does not require forward-looking information, does not require returns or price data, is available for essentially every firm with accounting information, and does not rely on peer group benchmarking.” They believe that it could indeed be deployed on a large scale by investors, regulators, auditors and researchers, “as a ‘first pass’ filter to flag companies that potentially file suspect financial disclosures.”

Earnings Cop

It’s still early days, but there are developments at the SEC as well. The SEC’s Financial Reporting and Audit Task Force announced it will be using something called the Accounting Quality Model, also known as RoboCop, to detect earnings management. The AQM is an econometrics-based quantitative analytic model that will spot earnings management by, among other things, determining whether a registrant’s financial statements stick out from those of other filers in its industry.

The AQM will look at discretionary accounting choices by, for instance, examining total accruals and then estimating discretionary accruals, according to Nicolas Morgan and Jennifer Feldman of DLA Piper in “The SEC Sharpens Its Talons” (CFO, February). “The model then classifies the estimated discretionary accruals as risk indicators (factors directly associated with earnings management) or risk inducers (strong incentives for earnings management),” wrote Morgan and Feldman.

Another risk indicator could be an accounting policy in which a high proportion of transactions are structured off-balance-sheet, says Craig Lewis, chief economist at the SEC, and a risk inducer could be a company losing market share to competitors. The AQM produces a score for each filing and compares it with the filer’s industry peer group, assessing the likelihood that fraud is occurring.
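The SEC has not published the AQM's internals, but the peer-group comparison Lewis describes can be illustrated with a toy screen that flags filers whose accrual proxy sits far from the industry distribution. The firm names, numbers, and z-score cutoff below are invented for illustration:

```python
import statistics

def flag_outliers(accruals_by_firm, z_cutoff=1.5):
    """Return firms whose accrual proxy is an outlier versus the
    peer-group distribution, mapped to their z-scores.

    The cutoff is an arbitrary illustration, not an SEC parameter.
    """
    values = list(accruals_by_firm.values())
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    return {firm: (a - mean) / sd
            for firm, a in accruals_by_firm.items()
            if sd and abs((a - mean) / sd) > z_cutoff}

# Invented peer group: firm "E" books far larger accruals than peers.
peers = {"A": 0.02, "B": 0.03, "C": 0.01, "D": 0.02, "E": 0.25}
flagged = flag_outliers(peers)  # only "E" is flagged
```

A real screen would estimate discretionary accruals first and condition on industry and size; the point here is only the mechanic of scoring each filer against its peer distribution.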

The AQM could vastly increase the number of comment letters companies receive from the SEC, although Lewis says the list could be adjusted to accommodate “evolving staff experiences and priorities.” Because it digs into XBRL data, a reporting language that many companies have yet to perfect, and because like any model it has its limits, the AQM is being greeted with skepticism in some quarters.

RDG Filings, a provider of XBRL solutions, says SEC investigations may become more like tax audits in which taxes were paid properly but the tax form was filled out incorrectly. “If your financial statements are in good order, but your XBRL is improperly built, the RoboCop may highlight your company as a candidate for further investigation,” RDG says on its website.

False Positives

Any analytical model, of course, can over-report occurrences. Write Dechow and her colleagues: “One unavoidable issue in developing models to detect misstatement is that the revelation of a misstatement by the SEC is a rare event. Thus, similar to bankruptcy prediction models, our models generate a high frequency of false positives.” That is, “many firms that do not have enforcement actions against them are predicted to have misstated their earnings.”

Moreover, analytical models can miss such things as the quality of the estimates and assumptions underlying the present-value calculations of assets, which normally appear in the footnotes to financial statements.

As one CFO told the Emory-Duke researchers, “It’s easy to spot changes in the assumptions[,] but the accounting profession does not ask firms to disclose what I call the ‘next sentence’ stating that the effect of the change is to increase earnings in the current period and increase risk on the balance sheet.”

What else would be hard for an analytical model to gauge? Academic researchers looking for earnings manipulators should analyze the credibility of the management team, one CFO told the Emory-Duke authors, because they “set the tone or culture which your internal accounting function will operate under.”

Models can be improved if researchers uncover more characteristics of companies that manipulate earnings and incorporate them. The F-Score, for example, misses some companies that manipulate sales because it only probes accrual-based measures. It skips techniques designed to boost sales, like channel-stuffing at quarter’s end, or “encouraging sales to customers with return provisions that violate the definition of a sale,” according to Dechow’s study. The four researchers see this as a fruitful area for further research and fine-tuning of their model.

Just because models aren’t perfect doesn’t mean that regulators won’t keep them in their tool belts. The SEC’s Lewis says if the commission can keep false positives to a manageable level, the AQM could be a powerful means of improving accounting, not just for identifying fraud but also for helping staff assist corporate filers with disclosing financials in compliance with GAAP.

As we know from the banking crisis and other historical debacles, however, businesses adapt to whatever regulators are searching for. RDG is already touting that it can help companies reduce the chances of being flagged by RoboCop. One day, analytical models could help identify who is really manipulating earnings and how widespread the practice is. In the meantime, the cat-and-mouse game between businesses that misrepresent or falsify their accounts and the builders of analytical models appears to have just begun.

We have a service for accessing financial statements electronically, directly from banks and companies. This helps prevent companies from intercepting and manipulating statements, or substituting false or fraudulent ones, when reporting them to auditors.

As counterpoint, I would urge readers to consider “Manipulation or Misreporting? When We’re Too Quick to Cry Fraud,” by Francine McKenna, published June 7, 2013 in The Capital Ideas Blog, The University of Chicago, Chicago Booth School of Business, http://bit.ly/1jABP4L, citing a paper by Prof. Ray Ball entitled, “Accounting Informs Investors and Earnings Management is Rife: Two Questionable Beliefs” http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2211288

I am shocked, shocked (?) to learn that earnings manipulation occurs, but I doubt the reasons are due to fear alone.

Instead, I would suggest that the principle of “greed is good,” uttered by that great philosopher Gordon Gekko in the movie “Wall Street,” is the primary reason for earnings misstatements. When CFOs (or others) have bonuses based primarily on earnings, then some distortion of that metric can be expected. After all, in a true capitalist society, if you do not pay for earnings, it does not occur.

Taking a cue from the research papers cited in the article, which treat the divergence of cash flows and earnings as a misrepresentation indicator, companies might be better served to reveal the truth about their performance if they were to base bonuses on a combination of earnings and cash flow.

Investment price is universally driven by the interplay between L. Welles Wilder, Jr.’s Relative Strength Index (RSI – Demand) and Abacus Road® (Supply). One might suppose that where there exists ‘room’ between actual earnings and the artificial construct that represents Supply in world markets, knowledgeable CFOs, as prompted by knowledgeable market makers, might be encouraged to ‘sweeten the pot’ from time to time!

I am an accountant, and part of the brotherhood.
Slowly, over decades, we have eroded the accuracy of financial statements for everything except income tax reporting. If you happen to follow Warren Buffett and his annual “Letters to Shareholders,” you can see his frustration over what he calls the difference between “intrinsic value” and “book value.” These measurements are critical in assessing a company’s worth and an investor’s decision to buy or sell a share, or to reward or discipline management.
His analysis is that “intrinsic value” should be the primary basis for making such decisions. Most people would agree.
Taking this further, we can easily state that “intrinsic value” is real and “book value” is only a very rough guide. Yet the only value that is actually stated and published is “book value.”
Making decisions without the real facts is asinine and dangerous.