With eight variables, Beneish profiled manipulators as: (1) growing quickly, (2) experiencing deteriorating fundamentals, and (3) adopting aggressive accounting practices. The first two are the incentive variables: admitting to quick but unsustainable growth would surely hurt the stock price. For his goal of detecting manipulators, Beneish used a probit model.

Why a probit model? In his sample, there are manipulators and innocent firms — coded as one and zero respectively. Estimating a probability by regressing a binary variable on his eight independent variables would not work, mainly because a linear fit can generate fitted probabilities greater than one or less than zero.

Probit is short for probability unit and is simply another name for a Z-score from the normal distribution. But what does the normal distribution have to do with the binary case of manipulator or not? Well, we assume that the probability of manipulation remains low at first, then increases rapidly, and then levels off as the eight variables go further into red-flag territory. This creates an S-shaped (sigmoidal) curve that can be represented by the cumulative normal.
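The S-shaped link is just the cumulative normal evaluated at a Z-score. A minimal sketch using only the standard library (the sample Z-values are chosen for illustration; note that the article's −1.78 cutoff appears among them):

```python
from math import erf, sqrt

def norm_cdf(z: float) -> float:
    """Cumulative standard normal: the S-shaped (sigmoidal) probit link."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Probability stays low at first, rises fastest near zero, then levels off.
for z in (-3.0, -1.78, 0.0, 1.78, 3.0):
    print(f"z = {z:+5.2f} -> P = {norm_cdf(z):.4f}")
```

Running this traces out the sigmoid: roughly 0.0013, 0.0375, 0.5, 0.9625, and 0.9987.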

Although called probit regression, probit analysis is not really a regression: no sum of squared errors is minimized for a dependent variable. Rather, probit analysis uses the cumulative normal and finds the coefficients on the eight independent variables that maximize the likelihood of generating the observed sample.
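The objective being maximized can be written down directly. A minimal sketch with one made-up red-flag index and a crude grid search (real software uses Newton-type optimizers, but the likelihood function is exactly this one; the data are invented for illustration):

```python
from math import erf, sqrt, log

def norm_cdf(z):
    """Cumulative standard normal distribution."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def log_likelihood(b0, b1, xs, ys):
    """Probit log-likelihood: log-probability of the observed 0/1 sample."""
    ll = 0.0
    for x, y in zip(xs, ys):
        p = norm_cdf(b0 + b1 * x)
        ll += log(p) if y == 1 else log(1.0 - p)
    return ll

# Toy sample: a single red-flag index, y = 1 for manipulators.
xs = [0.2, 0.5, 0.9, 1.4, 1.8, 2.3]
ys = [0, 0, 1, 0, 1, 1]

# Crude grid search over coefficient pairs (b0, b1).
candidates = [(b0 / 10, b1 / 10) for b0 in range(-50, 11) for b1 in range(0, 41)]
b0_hat, b1_hat = max(candidates, key=lambda b: log_likelihood(b[0], b[1], xs, ys))
print("fitted coefficients:", b0_hat, b1_hat)
```

The fitted slope comes out positive: a higher red-flag index raises the estimated probability of manipulation, which is exactly how Beneish structured his variables.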

Beneish’s eight independent variables can be divided into a manipulation group and a motivation group, and were structured so that an increase in the variable means a higher probability of manipulation:

Manipulation signals are: days sales in receivables index (DSRI) for revenue inflation; asset quality index (AQI) for expenditure capitalization; depreciation index (DEPI) for declining rate; and total accruals to total assets (TATA) for accounting not supported by cash.
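The four manipulation signals above combine with the motivation-side indices (gross margin index, GMI; sales growth index, SGI; SG&A index, SGAI; and leverage index, LVGI) into a single score. A minimal sketch using the coefficients widely cited from Beneish (1999) — verify them against the original paper before any real use:

```python
def m_score(dsri, gmi, aqi, sgi, depi, sgai, tata, lvgi):
    """Beneish M-score: a higher (less negative) value is more manipulator-like.
    Coefficients as widely cited from Beneish (1999)."""
    return (-4.84
            + 0.920 * dsri + 0.528 * gmi + 0.404 * aqi + 0.892 * sgi
            + 0.115 * depi - 0.172 * sgai + 4.679 * tata - 0.327 * lvgi)

# A firm with every index at its neutral value of 1 and zero accruals
# scores about -2.48, safely below the -1.78 cutoff.
neutral = m_score(1, 1, 1, 1, 1, 1, 0, 1)
print(round(neutral, 2), "-> flagged" if neutral >= -1.78 else "-> not flagged")
```
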

The motivation to rationalize unsustainable earnings is driven by the market, and the opportunity is provided by financial statement choices. In a 2003 cover story for CFA Magazine, Cynthia Harrington mentions “two of the three pegs of what is known in fraud circles as the fraud triangle: opportunity and the ability to rationalize the rightness of actions.” Rationalization easily creeps into lying. Aldert Vrij’s definition of lying is relevant to earnings management and manipulation: “a successful or unsuccessful deliberate attempt, without forewarning, to create in another a belief which the communicator considers to be untrue.”

Beneish incorporates incentives in his model as he goes beyond the abnormal discretionary accruals that play the main part in the modified Jones approach. Beneish’s risk-analytic model is important for investment practitioners and parallels the increasingly quantitative approach of the SEC with its accounting quality model. The SEC model augments “factors that indicate earnings management” with “factors that induce earnings management.”

In their recent FAJ article, Beneish and his colleagues classified firms with Z-scores greater than or equal to −1.78 as manipulators. Using the cumulative normal table, one can see that a Z-score of −1.78 corresponds to a probability of 0.0375 that the firm is a manipulator. Call it a 0.04 probability. For a firm above but close to the cutoff, and hence classified as a manipulator, the probability of its innocence is about 0.96, twenty-four times the probability of guilt.
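The arithmetic at the cutoff is easy to check directly:

```python
from math import erf, sqrt

def norm_cdf(z):
    """Cumulative standard normal distribution."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

p_guilt = norm_cdf(-1.78)        # probability of manipulation at the cutoff
p_innocent = 1.0 - p_guilt
print(f"P(manipulator) = {p_guilt:.4f}")     # 0.0375
print(f"P(innocent)    = {p_innocent:.4f}")  # 0.9625
# Rounding P(manipulator) to 0.04 gives the 24-to-1 odds in the text;
# the unrounded figure is closer to 26 to 1.
print(f"odds of innocence: {p_innocent / p_guilt:.1f} to 1")
```
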

Can we gain intuition on the Beneish cutoff? Claiming that manipulators lose about 40% of their value in the quarter when discovered and innocent firms earn 2% in a quarter, Beneish wants to protect portfolios by making it 20 to 30 times easier to call an innocent firm guilty than to call a guilty firm innocent — a ratio consistent with our simple odds calculation above.
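One way to ground the 20-to-30 range is a back-of-envelope cost comparison. The payoff numbers come from the article; treating them as per-quarter returns on a held position is an assumption for illustration:

```python
# A manipulator held until discovery loses ~40% while an innocent
# firm earns ~2%, so missing a manipulator costs the 40% loss plus
# the forgone 2% gain; a false alarm merely forgoes the 2% gain.
cost_missed_manipulator = 0.40 + 0.02
cost_false_alarm = 0.02

ratio = cost_missed_manipulator / cost_false_alarm
print(f"a missed manipulator costs about {ratio:.0f}x a false alarm")  # 21x
```

The resulting 21-to-1 cost ratio sits inside Beneish's 20-to-30 range and matches the rounded odds calculation above.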

On a holdout sample, the model misclassified 50% of the manipulators and 7.2% of the innocent. Although the 7.2% figure seems low, there are actually two kinds of innocent firms: (1) economic earnings communicators, and (2) within-the-rules managers. Only earnings managers and manipulators are likely to venture near red-flag territory and be above the cutoff.

We expect that future quantitative models will use probit (or its cousin logit) analysis with more recent data sets and incorporate more detection signals. For example, fraud-detection research based on Benford’s Law uses the distribution of digits as a flag; oddly enough, about 30% of the first digits in reported numbers should be 1. Word-analysis software now helps the SEC with lie detection. As motivated manipulators become more crafty, portfolio detectives need to keep up.
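The 30% figure follows from Benford's distribution, P(first digit = d) = log10(1 + 1/d). A quick sketch (the sample figures are invented for illustration):

```python
from math import log10

# Benford's Law: expected frequency of each leading digit 1-9.
benford = {d: log10(1 + 1 / d) for d in range(1, 10)}
print(f"P(first digit is 1) = {benford[1]:.3f}")  # ~0.301, the 30% in the text

def first_digit(x: float) -> int:
    """Leading significant digit of a positive number."""
    s = f"{abs(x):.15g}".lstrip("0.")
    return int(s[0])

# Compare a set of reported figures against the Benford expectation.
figures = [1250, 1890, 240, 1043, 980, 1102, 312, 1777]
ones_share = sum(first_digit(v) == 1 for v in figures) / len(figures)
print(f"share of leading 1s in sample: {ones_share:.2f} vs expected {benford[1]:.2f}")
```

A large gap between the observed and expected digit frequencies is the red flag this line of research exploits.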

Please note that the content of this site should not be construed as investment advice, nor do the opinions expressed necessarily reflect the views of CFA Institute.

Dennis McLeavey, CFA, was head of regulator and program recognition, as well as a content director for quantitative methods at CFA Institute. He is professor emeritus of finance and management science at the University of Rhode Island. McLeavey has also served as head of education, EMEA, at CFA Institute and as head of curriculum development for the CFA Program. He is coauthor of Quantitative Investment Analysis, a Wiley text in the CFA Institute Investment Series. He also coauthored Global Investments, Production Planning and Inventory Control, and Operations Research for Management Decisions, and is coeditor of Managing Investor Portfolios. His research has been published in Management Science, the Journal of Operations Research, and the Journal of Portfolio Management, among others. McLeavey has taught at the University of Western Ontario and the University of Rhode Island. He has served as chairperson of the CFA Institute Retirement Investment Policy Committee and as a New York Stock Exchange arbitrator. In 2008, he initiated the CFA Institute Take 15 series of webcasts. He serves on the Fund Advisory Board for the Global Perspectives Fund at the University of North Carolina Kenan-Flagler Business School, and he oversees the Ram Fund at the University of Rhode Island. McLeavey holds a bachelor’s degree in economics from the University of Western Ontario and a doctorate in production management and industrial engineering, with a minor in mathematics, from Indiana University.

I think that your point that “probit analysis is not really a regression” is misleading. Probit models explore the relationship between a dependent variable and several other independent variables. That makes it regression in my mind. It is not an Ordinary Least Squares (OLS) regression, but it typically is estimated by Maximum Likelihood, which can also be applied to the same types of linear models often estimated by OLS.

Thanks. You are right about the Maximum Likelihood Estimation. Still, I think Probit quacks a little more like Discriminant Analysis, and I would call it a classification technique. With binary variables, one cannot simply transform the 0–1 values and then enter a transformed value as a dependent variable. Without defining what “is is” or what “really a regression” is, I am hoping to stimulate more interest in a technique (probit and logit) that has much merit and that sometimes gets glossed over as a half-page addendum to text chapters on regression. (The link supporting the “not really a regression” comment also provides some explanation.) But those who call it probit regression would certainly agree with your comment.

Hi Luiz, Good question. Yes and no. Yes, it applies to stocks around the world because the methodology, the variables, the fraud, and the motivation are generic. But no, the study has a US SEC and US GAAP basis. Perhaps someone has done an international replication of which I am unaware. Dennis

The SEC is now developing a model for investigating phrasing in disclosures to better detect signals for possible fraud. This was reported in the Wall Street Journal and I have discussed it with at least one partner in a CPA firm. The SEC is concerned about too many false positives in models that use only numerical financial statement data. Have you or the CFA considered/done any work in this area?

Thanks Judy, I appreciate your comment pointing to human language technology or natural language processing because it is a growing and very interesting area. One of my colleagues has been exploring the topic. Sandra Peters at CFA Institute has proposed a broader project on technology/big data. She’s been attending several sessions at Columbia’s new Institute on Data Science and Engineering (IDSE).


