1. Motivating Example

If you regress the current quarter’s inflation rate, , on the previous quarter’s rate using data from FRED over the period from Q3-1987 to Q4-2014, then you get the AR(1) point estimate,

(1)

where the number in parentheses denotes the standard error, and the inflation-rate time series, , has been demeaned. In other words, if the inflation rate is points higher in Q1-2015, then on average it will be points higher in Q2-2015, points higher in Q3-2015, and so on… The function that describes the cascade of future inflation-rate changes due to an unexpected shock in period is known as the impulse-response function.

But, many interesting time-series phenomena involve multiple variables. For example, Brunnermeier and Julliard (2008) show that the house-price appreciation rate, , is inversely related to the inflation rate. If you regress the current quarter’s inflation and house-price appreciation rates on the previous quarter’s rates using demeaned data from the Case-Shiller/S&P Index, then you get:

(2)

These point estimates indicate that, if the inflation rate were points higher in Q1-2015, then the inflation rate would be points higher in Q2-2015 and the house-price appreciation rate would be points lower in Q2-2015.

Computing the impulse-response function for this vector auto-regression (VAR) is more difficult than computing the same function for the inflation-rate AR(1) because the inflation rate and house-price appreciation rate shocks are correlated:

(3)

In other words, when you see a point shock to inflation, you also tend to see a point shock to the house-price appreciation rate. Thus, computing the future effects of a shock to the inflation rate and a point shock to the house-price appreciation rate gives you information about a unit shock that doesn’t happen in the real world. In this post, I show how to account for this sort of correlation when computing the impulse-response function for VARs. Here is the relevant code.

2. Impulse-Response Function

Before studying VARs, let’s first define the impulse-response function more carefully in the scalar world. Suppose we have some data generated by an AR(1),

(4)

where , , and . For instance, if we’re looking at quarterly inflation data, , then . In this setup, what would happen if there was a sudden shock to in period ? How would we expect the level of to change? What about the level of ? Or, the level of any arbitrary for ? How would a point shock to the current inflation rate propagate into future quarters?

Well, it’s easy to compute the time expectation of :

(5)

Iterating on this same strategy then gives the time expectation of :

(6)

So, in general, the time expectation of any future will be given by the formula,

(7)

and the impulse-response function for the AR(1) process will be:

(8)

If you knew that there was a sudden shock to of size , then your expectation of would change by the amount . The figure below plots the impulse-response function for using the AR(1) point estimate from Equation (1).
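Since the post's point estimate isn't reproduced above, here is a minimal sketch with a hypothetical persistence parameter of 0.8. It computes the AR(1) impulse-response function and cross-checks it by comparing a shocked and an unshocked simulated path that share the same future noise:

```python
import numpy as np

def ar1_irf(rho, horizons):
    """Impulse-response of a demeaned AR(1): a unit shock at time t
    moves the time-(t + h) expectation by rho**h."""
    return rho ** np.arange(horizons + 1)

rho = 0.8      # hypothetical persistence; the post's estimate isn't shown here
irf = ar1_irf(rho, 12)

# Cross-check by simulation: the difference between a shocked and an
# unshocked path with identical future noise is exactly rho**h.
rng = np.random.default_rng(0)
eps = rng.normal(size=13)
base, shocked = np.zeros(13), np.zeros(13)
base[0], shocked[0] = 0.0, 1.0      # unit shock at t = 0
for t in range(1, 13):
    base[t] = rho * base[t - 1] + eps[t]
    shocked[t] = rho * shocked[t - 1] + eps[t]
assert np.allclose(shocked - base, irf)
```

The shared-noise trick makes the point cleanly: the shock's entire effect on future expectations is the geometrically decaying sequence, regardless of the noise realization.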

There’s another slightly different way you might think about an impulse-response function—namely, as the coefficients to the moving-average representation of the time series. Consider rewriting the data generating process using lag operators,

(9)

where , , and so on… Whenever the slope coefficient is smaller than , , we know that , and there exists a moving-average representation of :

(10)

That is, rather than writing each as a function of a lagged value, , and a contemporaneous shock, , we can instead represent each as a weighted average of all the past shocks that’ve been realized, with more recent shocks weighted more heavily.

(11)

If we normalize all of the shocks to have unit variance, then the weights themselves will be given by the impulse-response function:

(12)

Of course, this is exactly what you’d expect for a covariance-stationary process. The impact of past shocks on the current realized value had better be the same as the impact of current shocks on future values.
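The equivalence between the AR(1) recursion and its moving-average representation can be verified numerically. This sketch again uses a hypothetical persistence of 0.8; it builds a series recursively and then reconstructs the last observation as a geometrically weighted sum of all past shocks:

```python
import numpy as np

rho, T = 0.8, 200    # hypothetical persistence and sample length
rng = np.random.default_rng(1)
eps = rng.normal(size=T)

# AR(1) recursion: x_t = rho * x_{t-1} + eps_t, with x_0 = eps_0.
x = np.zeros(T)
x[0] = eps[0]
for t in range(1, T):
    x[t] = rho * x[t - 1] + eps[t]

# Moving-average form: x_t = sum_{j=0}^{t} rho**j * eps_{t-j}, so the
# MA weights are exactly the impulse-response function rho**j.
last = T - 1
ma_weights = rho ** np.arange(last + 1)
x_ma = ma_weights @ eps[last::-1]    # weighted sum of all past shocks
assert np.isclose(x[last], x_ma)
```

The MA weights here are literally the impulse-response function from the previous section, which is the covariance-stationarity point made above.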

3. From ARs to VARs

We’ve just seen how to compute the impulse-response function for an AR(1) process. Let’s now examine how to extend this to the setting where there are two time series,

(13)

instead of just . This pair of equations can be written in matrix form as follows,

(14)

where and . For example, if you think about as the quarterly inflation rate and as the quarterly house-price appreciation rate, then the coefficient matrix is given in Equation (2).

Nothing about the construction of the moving-average representation of demanded that be a scalar, so we can use the exact same tricks to write the -dimensional vector as a moving average:

(15)

But, it’s much less clear in this vector-valued setting how we’d recover the impulse-response function from the moving-average representation. Put differently, what’s the matrix analog of ?

Let’s apply the want operator. This mystery matrix, let’s call it , has to have two distinct properties. First, it’s got to rescale the vector of shocks, , into something that has a unit norm,

(16)

in the same way that in the analysis above. This is why I’m writing the mystery matrix as rather than just . Second, the matrix has to account for the fact that the shocks, and , are correlated, so that point shocks to the inflation rate are always accompanied by point shocks to the house-price appreciation rate. Because the shocks to each variable might have different standard deviations, for instance, while , the effect of a shock to the inflation rate on the house-price appreciation rate, , will be different than the effect of a shock to the house-price appreciation rate on the inflation rate, . Thus, each variable in the vector will have its own impulse-response function. This is why I write the mystery matrix as rather than .

If we pick the mystery matrix to be the inverse of the Cholesky decomposition of the shocks’ variance-covariance matrix, then it will have both of the properties we want, as pointed out in Sims (1980). The simple -dimensional case is really useful for understanding why. To start with, let’s write out the variance-covariance matrix of the shocks, , as follows,

(18)

where . The Cholesky decomposition of can then be solved by hand:

(19)

Since we’re only working with a -dimensional matrix, we can also solve for by hand:

(20)

So, for example, if there is a pair of shocks, , then will convert this shock into:

(21)

In other words, the matrix rescales to have unit norm, , and rotates the vector to account for the correlation between and . To appreciate how the rotation takes into account the positive correlation between and , notice that the matrix turns the shock into a vector that is pointing standard deviation in the direction and in the direction. That is, given that you’ve observed a positive shock, observing a shock would be a surprisingly low result.
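The hand-solved Cholesky factor of Equation (19) can be checked against a numerical routine. The standard deviations and correlation below are placeholders, not the post's estimates from Equation (3):

```python
import numpy as np

# Hypothetical shock standard deviations and correlation (the post's
# estimates aren't reproduced here).
s1, s2, corr = 0.5, 1.2, 0.4
Sigma = np.array([[s1**2,          corr * s1 * s2],
                  [corr * s1 * s2, s2**2         ]])

# Lower-triangular Cholesky factor, solved by hand as in Equation (19).
P = np.array([[s1,        0.0                      ],
              [corr * s2, s2 * np.sqrt(1 - corr**2)]])
assert np.allclose(P @ P.T, Sigma)
assert np.allclose(P, np.linalg.cholesky(Sigma))

# The inverse factor rescales and rotates the correlated shocks into
# orthonormal ones, which is exactly the pair of properties we wanted.
P_inv = np.linalg.inv(P)
assert np.allclose(P_inv @ Sigma @ P_inv.T, np.eye(2))
```

The last assertion is the "want operator" in code: applying the inverse factor to the shocks yields unit variances and zero correlation.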

If we plug into our moving-average representation of , then we get the expression below,

(22)

implying that the impulse-response function for is given by:

(23)

The figure below plots the impulse-response function for both and implied by a unit shock to using the coefficient matrix from Equation (2).
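Putting the pieces together, here is a sketch of the orthogonalized VAR impulse-response computation. My reading of Equation (23) is that the horizon-h response matrix is the h-th power of the coefficient matrix times the Cholesky factor; the coefficient matrix and shock covariance below are hypothetical stand-ins for the estimates in Equations (2)-(3):

```python
import numpy as np

# Hypothetical VAR(1) coefficient matrix and shock covariance; the
# post's point estimates aren't reproduced here.
A = np.array([[ 0.7, 0.1],
              [-0.2, 0.6]])
Sigma = np.array([[0.25, 0.06],
                  [0.06, 0.36]])
P = np.linalg.cholesky(Sigma)

def var_irf(A, P, horizons):
    """Orthogonalized impulse responses: Psi_h = A^h @ P, so column j
    traces out the effect of a one-standard-deviation orthogonal shock j."""
    out, Ah = [], np.eye(A.shape[0])
    for _ in range(horizons + 1):
        out.append(Ah @ P)
        Ah = Ah @ A
    return np.array(out)

irf = var_irf(A, P, 20)
assert np.allclose(irf[0], P)        # impact response is the Cholesky factor
assert np.allclose(irf[1], A @ P)
```

Note that the impact-period response is the Cholesky factor itself, which is where the shock ordering (inflation first, then house prices, or vice versa) matters.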

1. Motivation

How persistent has IBM’s daily trading volume been over the last month? How persistent have Apple’s monthly stock returns been over the last years of trading? What about the US’s annual GDP growth over the last century? To answer these questions, why not just run an OLS regression,

(1)

where denotes the estimated auto-correlation of the relevant data series? The Gauss-Markov Theorem says that an OLS regression will give a consistent, unbiased estimate of the persistence parameter, , right?

Wrong.

Although OLS estimates are still consistent when using time-series data (i.e., they converge to the correct value as the number of observations increases), they are no longer unbiased in finite samples (i.e., they may be systematically too large or too small when looking at rather than observations). To illustrate the severity of this problem, I simulate data of lengths , , and ,

(2)

and estimate the simple auto-regressive model from Equation (1) to recover . The figure below shows the results of this exercise. The left-most panel reveals that, when the true persistence parameter approaches one, , the bias approaches , or of the true coefficient size. In other words, if you simulate a time series of data points using , then you’ll typically estimate a !
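The simulation exercise is easy to replicate in miniature. The persistence parameter and sample lengths below are illustrative choices, not necessarily the ones used for the figure:

```python
import numpy as np

def ols_ar1(x):
    """OLS slope from regressing x_t on x_{t-1}, demeaning both sides
    (i.e., estimating the sample mean, which is the source of the bias)."""
    y, z = x[1:], x[:-1]
    y, z = y - y.mean(), z - z.mean()
    return (z @ y) / (z @ z)

def mean_estimate(rho, T, nsim=3000, seed=0):
    """Average OLS estimate across nsim simulated AR(1) paths."""
    rng = np.random.default_rng(seed)
    est = np.empty(nsim)
    for i in range(nsim):
        eps = rng.normal(size=T)
        x = np.zeros(T)
        for t in range(1, T):
            x[t] = rho * x[t - 1] + eps[t]
        est[i] = ols_ar1(x)
    return est.mean()

# The downward bias is large in short samples and shrinks as the sample
# grows (consistency), shown here with a hypothetical rho = 0.9.
short, long_ = mean_estimate(0.9, 25), mean_estimate(0.9, 400)
assert short < long_ < 0.9
```

Even at a few hundred observations the average estimate still sits below the true parameter, which is the pattern the figure's panels display.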

What is it about time series data that induces the bias? Why doesn’t this problem exist in a cross-sectional regression? How can it exist even when the true coefficient is ? This post answers these questions. All of the code can be found here.

2. Root of the Problem

Here is the short version. The bias in comes from having to estimate the sample average of the time series:

(3)

If you knew the true mean, , then there’d be no bias in . Moreover, the bias goes away as you see more and more data (i.e., the estimator is consistent) because your estimated mean gets closer and closer to the true mean, . Let’s now dig into why not knowing the mean of a time series induces a bias in the OLS estimate of the slope.

Estimating the coefficients in Equation (1) means choosing the parameters and to minimize the mean squared error between the left-hand-side variable, , and the right-hand-side variable, :

From here, it’s easy to solve for the expected difference between the estimated slope coefficient, , and the true slope coefficient, :

(7)

Note that is just the regression residual. So, this equation says that the estimated persistence parameter, , will be too high if big ‘s tend to follow periods in which is above its mean. Conversely, your estimated persistence parameter, , will be too low if big ‘s tend to follow periods in which is below its mean. How can estimating the time-series mean, , induce this correlation while knowing the true mean, , not?

Clearly, we need to compute the average of the time series given in Equation (3). For simplicity, let’s assume that the true mean is and the initial value is . Under these conditions, each successive term of the time series is just a weighted average of shocks:

(8)

So, the sample average given in Equation (3) must contain information about future shock realizations, . Consider the logic when the true persistence parameter is positive, , and the true mean is zero, . If the current period’s realization of is below the estimated mean, , then future ‘s have to be above the estimated mean by definition—otherwise, wouldn’t be the mean. Conversely, if the current period’s realization of is above the estimated mean, then future ‘s have to be below the estimated mean. As a result, the sample covariance between and must be negative:

(9)

As a result, when the true slope parameter is positive, the OLS estimate will be biased downward.

3. Cross-Sectional Regressions

Cross-sectional regressions don’t have this problem because estimating the mean of the right-hand-side variable,

(10)

doesn’t tell you anything about the error terms. For example, imagine you had data points generated by the following model,

(11)

where each observation of the right-hand-side variable, , is independently drawn from the same distribution. In this setting, the slope coefficient from the associated cross-sectional regression,

(12)

won’t be biased because isn’t a function of any of the error terms, :

(13)

So, estimating the mean won’t induce any covariance between the residuals, , and the right-hand-side variable, . All the conditions of the Gauss-Markov Theorem hold. If you only have a small number of observations, then may be a noisy estimate, but at least it will be unbiased.

4. Bias At Zero

One of the more interesting things about the slope coefficient bias in time-series regressions is that it doesn’t disappear when the true parameter value is . For instance, in the figure above, notice that the expected bias doesn’t vanish at and is negative there. Put differently, if you estimated the correlation between Apple’s returns in successive months and found a parameter value of , then the true coefficient is likely . In fact, Kendall (1954) derives an approximate expression for this bias when the true data generating process is an AR(1):

(14)

A simple two-period example illustrates why this is so. Imagine a world where the true coefficient is , and you see the pair of data points in the left panel of the figure below, with the first observation lower than the second. If as well, then we have that:

(15)

I plot this sample mean with the green dashed line. In the right panel of the figure, I show the distances and in red and blue respectively. Clearly, if the first observation, , is below the line, then the second observation is above the line. But, since , the second observation is just , so you will observe a negative correlation between and . The OLS estimate will be downward biased.
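My reading of the Kendall (1954) approximation referenced in Equation (14) is that the expected bias is roughly -(1 + 3*rho)/T; that formula is an assumption on my part, since the equation itself isn't reproduced above. Under that assumption, a quick simulation at a true parameter of zero confirms both the sign and the approximate size of the bias:

```python
import numpy as np

# Assumed form of Kendall's (1954) small-sample approximation for an
# AR(1) with an estimated mean: E[rho_hat - rho] ~ -(1 + 3*rho) / T.
def kendall_bias(rho, T):
    return -(1 + 3 * rho) / T

rho, T, nsim = 0.0, 50, 20000
rng = np.random.default_rng(42)
est = np.empty(nsim)
for i in range(nsim):
    x = rng.normal(size=T)           # with rho = 0, the series is white noise
    y, z = x[1:], x[:-1] - x[:-1].mean()
    est[i] = (z @ (y - y.mean())) / (z @ z)

# Even at rho = 0 the estimate is biased downward, roughly by -1/T.
assert est.mean() < 0
assert abs(est.mean() - kendall_bias(rho, T)) < 0.01
```

So with 50 observations of pure white noise, the regression systematically reports a small negative auto-correlation, exactly as the two-period example suggests.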

1. Motivation

Here, denotes the stock’s average abnormal returns, so the stock’s mispriced if . Suppose you don’t initially know whether or not the stock is priced correctly, whether or not , but as you see more and more data you refine your beliefs, . In fact, your posterior variance disappears as :

(2)

So, such pricing errors can’t persist forever unless there’s some limit to arbitrage, like trading costs or short-sale constraints, to keep you from trading it away.

However, pricing errors often don’t persist period after period. Instead, they tend to arrive sporadically, affecting the first period’s returns, skipping the next two periods’ returns, affecting the fourth and fifth periods’ returns, and so on… What’s more, arbitrageurs don’t have access to an oracle. They don’t know ahead of time which periods are affected and which aren’t. They have to figure this information out on the fly, in real time. In this post, I show that arbitrageurs with an infinite time series may never be able to identify a sporadic pricing error because they don’t know ahead of time where to look.

2. Sporadic Errors

What does it mean to say that a pricing error is sporadic? Suppose that, in each trading period, there is a key state variable, . If , then the stock’s abnormal returns are drawn from a distribution with ; whereas, if , then the stock’s abnormal returns are drawn from a distribution with . Let denote the probability that in a given trading period:

(3)

So, for every in the example above where the stock always has a mean abnormal return of . By contrast, if in every trading period, then the pricing error is sporadic.

You can think about this state variable in a number of ways. For instance, perhaps the market conditions have to be just right for the arbitrage opportunity to exist. In the figure above, I model the distance to this Goldilocks zone as a reflected random walk, :

(4)

Then, every time hits the origin, the mispricing occurs. Thought of in this way, the state variable represents a renewal process.
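The reflected-random-walk renewal structure described above is easy to simulate. This is a sketch with an illustrative step distribution and starting point; the mispricing state switches on exactly in the periods where the walk sits at the origin:

```python
import numpy as np

# Distance to the "Goldilocks zone" as a reflected random walk; the
# pricing error switches on (state = 1) whenever the walk touches zero.
# Unit steps and starting at the origin are illustrative choices.
rng = np.random.default_rng(7)
T = 10_000
steps = rng.choice([-1, 1], size=T)
s = np.zeros(T, dtype=int)
for t in range(1, T):
    s[t] = abs(s[t - 1] + steps[t])

x = (s == 0).astype(int)         # 1 in mispricing periods, 0 otherwise
hit_times = np.flatnonzero(x)    # renewal times: sporadic and irregular
print("fraction of mispricing periods:", x.mean())
```

Because returns of a random walk to the origin are recurrent but irregularly spaced, the resulting state variable produces exactly the sporadic, oracle-free pattern of pricing errors the post has in mind.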

3. Inference Problem

Suppose you see an infinitely long time series of abnormal returns. Let denote the distribution of abnormal returns when always, and let denote the distribution of abnormal returns when sometimes. So, if abnormal returns are drawn from , then there are no pricing errors; whereas, if abnormal returns are drawn from , then there are some pricing errors. Here’s the question: When can you conclusively tell whether the data was drawn from rather than ?

If a trader can perfectly distinguish between the pair of probability distributions, and , then, when you give him any randomly selected sequence of abnormal returns, , he will look at it and go, “That’s from distribution .”, or “That’s from distribution .” He will never be stumped. He will never need more information. Mutual singularity is the mathematical way of phrasing this simple idea. Let denote the set of all possible infinite abnormal return sequences:

(5)

We say that a pair of distributions, and , are mutually singular if there exist disjoint sets, and , whose union is such that for all sequences while for all sequences . If and are mutually singular then we can write . For example, if the alternative hypothesis is that every day, then since we know that all abnormal return sequences drawn from have a mean of exactly as while all abnormal return sequences drawn from have a mean of exactly as .

At the other extreme, you could imagine a trader being completely unable to tell a pair of distributions apart. It might be the case that any sequence of abnormal returns that is off limits in distribution is also off limits in distribution . That is, you might never be able to find a sequence of returns that you could use to reject the null hypothesis. Absolute continuity is the mathematical way of phrasing this idea. A distribution is absolutely continuous with respect to if implies that . This is written as .

In this post I want to know: for what kind of sporadic pricing errors is ? When can a trader never be completely sure that he’s seen a pricing error and not just an unlikely set of market events?

4. Main Results

Now for the two main results. Harris and Keane (1997) show that, i) if the pricing error happens frequently enough, then traders can identify it regardless of how large it is:

(6)

For instance, suppose that a stock’s abnormal returns are drawn from a distribution with a really small every single period (i.e., for all ). Then, just like standard statistical intuition would suggest, traders with enough data will eventually identify this tiny pricing error since .

By contrast, ii) if the pricing error is small and rare enough, then traders with an infinite amount of data will never be able to reject . They will never be able to conclusively know that there was a pricing error:

(7)

Again, means that traders can’t find a sequence of abnormal returns which would only have been possible under . This is a bit of a strange result. To illustrate, suppose that you and I both see a suspicious abnormal return at time . But, while you think it’s due to a sporadic arbitrage opportunity of size , I think it was just a random market fluctuation. If the probability that this pricing error recurs shrinks over time,

(8)

then no amount of additional data will enable us to conclusively settle our argument. There can be no smoking gun. We’ll just have to agree to disagree. This result runs against the standard Harsanyi doctrine which says that people who see the same information will end up with the same beliefs.

5. Proof Sketch

I conclude by sketching the proof for part ii) of Harris and Keane (1997)’s main result: if the pricing error is sufficiently small and rare, then traders will never be able to reject . To do this, I need to be able to show that, if , then every sequence of abnormal returns with the property that also has the property that . The easiest way to do this is to look at the behavior of the following integral:

(9)

If it’s finite, then every time it must also be the case that . Otherwise, you’d be dividing a positive number, , by .

So, let’s examine this integral. At its core, this integral is just a weighted average of the number of times that a mispricing should occur under since :

(10)

If we define as the number of periods in which there is a pricing error,

(11)

then we can further bound this integral as follows since :

(12)

However, we know that the probability that there is at least period where a pricing error occurs is just the inverse of the expected number of periods in which a pricing error occurs:

1. Motivation

There are many ways that you might measure the typical horizon of a stock’s demand shocks. For instance, Fourier methods might at first appear to be a promising approach, but first impressions can be deceiving. Here’s why: spikes in trading volume tend to be asynchronous. For example, you might see a -hour burst of trading activity starting at 9:37am, then another starting at 11:03am, and a third at 2:42pm, but you’d never see bursts of trading activity arriving and subsiding every hour like clockwork. Wavelet methods, as described in my earlier post, can handle these kinds of asynchronous shocks. Fourier methods can’t.

2. Spectral Analysis

Suppose there’s a minute-by-minute trading-volume time series that realizes hour-long shocks. To recover the -hour horizon from this time series, Fourier analysis tells us to estimate a collection of regressions at frequencies ranging from cycle per month to cycle per minute:

(1)

A frequency of cycles per day, for example, denotes the -minute time horizon since there are minutes in a trading day. The amount of variation at a particular frequency is then proportional to the power of that frequency, , defined as:

(2)

If the time series realizes hour-long shocks, then shouldn’t we find a peak in the power of the series at the hourly horizon, ? Isn’t this what computing the power of a time series at a particular frequency is designed to capture?

3. Asynchronous Shocks

Yes, but only if the shocks come at regular intervals. For instance, if the first minutes realized a positive shock, minutes through realized a negative shock, minutes through realized a positive shock again, and so on… then Fourier analysis would be the right approach. However, trading-volume shocks have irregular arrival times and random signs. Fourier analysis can’t handle this sort of asynchronous structure.

4. Simulation-Based Example

Let’s consider a short example to solidify this point. We simulate a month-long time series of minute-by-minute trading-volume data with -minute shocks by first randomly selecting minutes during which hour-long jumps begin, , and then adding white noise, :

(3)

Each of the jumps has a magnitude of and is equally likely to be positive or negative. The white noise has a standard deviation of . The resulting series is shown in the left-most panel of the figure below.

By construction, this process only has white noise and -minute-long shocks. That’s it. There are no other time scales to worry about. None. If Fourier analysis were the correct tool for identifying horizon-specific trading-volume fluctuations, then you’d expect there to be a spike in the power of the time series at the -minute horizon. But, what happens if we look for evidence of this -minute timescale by estimating the power spectrum shown in the middle panel of the figure above? Do we see any evidence of a -minute shock? No. There is nothing at the -minute horizon. Asynchronous shocks of a fixed length don’t show up in the Fourier power spectrum. They do, however, show up in the Wavelet-variance plot as shown in the right-most panel of the figure above where there is a clear spike at the -hour horizon.
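The simulation is easy to reproduce in sketch form. The sizes below (a roughly month-long minute grid, 60-minute bursts, random signs) follow the setup described above, though the exact counts and noise level are illustrative. Because the bursts arrive at random times with random signs, their phases cancel and the periodogram shows no clean spike at the hourly frequency:

```python
import numpy as np

# One month of minute-by-minute "volume": hour-long square bursts with
# random signs starting at random minutes, plus white noise.
rng = np.random.default_rng(3)
T, L = 22 * 390, 60        # ~22 trading days x 390 minutes; 60-min shocks
x = rng.normal(scale=0.1, size=T)
starts = rng.choice(T - L, size=30, replace=False)
for s in starts:
    x[s:s + L] += rng.choice([-1.0, 1.0])

# Periodogram of the demeaned series. Asynchronous arrival times smear
# the bursts' energy across low frequencies instead of concentrating it
# at 1/60 cycles per minute.
power = np.abs(np.fft.rfft(x - x.mean())) ** 2
freqs = np.fft.rfftfreq(T, d=1.0)
hourly_bin = np.argmin(np.abs(freqs - 1 / 60))
print("power at the 60-minute frequency:", power[hourly_bin])
print("max power at lower frequencies:", power[1:hourly_bin].max())
```

Running this, the power at the hourly frequency is dwarfed by the diffuse low-frequency mass, which is the failure mode the post attributes to Fourier methods.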

1. Introduction

Important market events often have a variety of interpretations. For example, a recent Financial Times article outlined several different readings of Facebook’s “feeble showing… in the weeks since its initial public offering”. “Maybe Morgan Stanley, which organized the IPO, got complacent. Maybe Facebook neglected to adapt its platform fully to the world of mobile devices. Maybe, if we are to believe the Los Angeles Times, the company, for all its users, is ‘losing its cool’.” The article then tossed another hat into the ring. “Those explanations are wrong. There may be a simpler explanation: political risk… Facebook is less a revolution in technology than a revolution in property rights. It is to social life what enclosure was to grazing. Fed-up users might begin to question Facebook’s claim to full ownership of so much valuable personal information that they, the public, have generated.”

Whatever you think the right answer is, one thing is clear: traders can hold the exact same views for entirely different reasons. Moreover, while these views happen to line up for Facebook, they have wildly different implications for how a trader should behave in the rest of the market. For instance, if you think the poor performance was a result of Morgan Stanley’s hubris, then you should change the way you trade their upcoming IPOs. Alternatively, if you think the poor performance was a consequence of Facebook losing its cool, then you should change the way you trade Zynga. Finally, if you agree with the Financial Times reporter and think the poor performance was due to privacy concerns, then you should change the way you trade other companies, like Apple, which hoard users’ personal information.

Motivated by these observations, this post outlines an asset-pricing model where each asset has many plausibly relevant features, and, in order to turn a profit, arbitrageurs must diagnose which of these is relevant using past data.

2. Feature Space

I study a market with assets. Let’s begin by looking at how I model each asset’s exposure to different features, each representing a different explanation for the asset’s performance. I use the indicator function, , to capture whether or not asset has exposure to the th feature:

(1)

For example, while both National Semiconductor and Sequans Communications are in the semiconductor industry, and , only National Semiconductor was involved in M&A rumors in Q1 2011, so but . Feature exposures are common knowledge. Everyone knows each value in the -dimensional matrix , so there is no uncertainty about whether or not National Semiconductor belongs to the semiconductor industry. Each asset’s fundamental value stems from its exposure to exactly half of the different payout-relevant features.

Fundamental values have a sparse representation in this space of features. Only of the possible features actually matter:

(2)

There are enough observations, , to estimate the value of the feature-specific shocks using OLS if you knew ahead of time which features to analyze; however, there are many more possible features, , than observations. Without an oracle, OLS is an ill-posed problem in this setting. This sparseness assumption embodies the idea that financial markets are large and diverse, so finding the right trading opportunity is a needle-in-a-haystack type problem.
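The ill-posedness claim can be made concrete with a small sketch. All of the sizes below (number of assets, features, and shocked features) are illustrative, not the model's calibration: with more candidate features than assets, the unrestricted regression isn't identified, but OLS restricted to the true support recovers the shock sizes:

```python
import numpy as np

# Sketch: with more candidate features (p) than assets (n), plain OLS on
# all features is ill-posed, but OLS restricted to the true support works.
rng = np.random.default_rng(5)
n, p, k = 50, 200, 2                  # assets, features, shocked features
X = rng.integers(0, 2, size=(n, p)).astype(float)   # feature exposures
true_support = [3, 117]               # which features actually realize shocks
beta = np.zeros(p)
beta[true_support] = rng.normal(size=k)
v = X @ beta + 0.1 * rng.normal(size=n)             # fundamental values

# Unrestricted problem: rank(X) <= n < p, so coefficients aren't identified.
assert np.linalg.matrix_rank(X) < p

# With an oracle pointing out the support, OLS recovers the shock sizes.
Xs = X[:, true_support]
beta_hat = np.linalg.lstsq(Xs, v, rcond=None)[0]
assert np.allclose(beta_hat, beta[true_support], atol=0.15)
```

The gap between these two cases is exactly the arbitrageurs' problem later in the post: estimation is easy once the relevant features are known; selecting them is the hard part.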

For analytical convenience, I study the case with and where of the assets have exposure to of the feature-specific shocks and of the assets have exposure to the other feature-specific shock. For example, if there is a shock to all big-box stores and to all companies based in Ohio, then there are no superstores based in Ohio like Big Lots in the list of assets. This is the simplest possible model in which the feature-specific average matters and every asset has exposure to the same number of shocks.

If only of the features actually realize shocks and each shock affects a separate subset of firms, then there are:

(3)

possible combinations of shocks. There are different shocks to choose from for the first shock, and only th of the remaining shocks will not overlap assets with the first shock. I index each combination with where denotes the true set of shocked features. Let denote the set of all features and denote the features associated with index . Nature selects which of the combinations of features realizes feature-specific shocks uniformly at random:

(4)

prior to the start of trading in period .

3. Asset Structure

We just saw what might impact asset values. Let’s now examine how these features actually affect markets. I study a model where nature selects fundamental values, , prior to the start of trading. And, these fundamental values are a function of components: the particular feature-specific shock affecting each asset together with an idiosyncratic shock:

(5)

where denotes the extent to which the th feature affects fundamental values:

(6)

and denotes the idiosyncratic shock. I use the th subscript to denote the idiosyncratic component of each stock for brevity, but always omit it when writing the -dimensional vector of feature-specific shocks, .

Each asset has exposure to only of the feature-specific shocks since it has exposure to a random subset of of all features. Thus, its fundamental volatility is given by:

(7)

since both the feature-specific shocks and each asset’s idiosyncratic shock have variance . The main benefit of forcing each asset to have exposure to exactly of the feature-specific shocks is that, under these conditions, every single one of the assets will have identical unconditional variance.

4. Naifs’ Objective Function

How does information about these feature-specific shocks gradually creep into prices? Naive asset-specific investors. There are such investors studying each of the stocks, one investor per shock. These so-called naifs choose how many shares to hold of a single asset, , in order to maximize their mean-variance utility over end-of-market wealth:

(8)

where is their risk-aversion parameter. The superscript is necessary because there are kinds of naifs trading each asset: one that has information about the feature-specific shock and one that has information about the idiosyncratic shock.

Naifs trading in each of the assets see a private signal each period, , about how a single shock affects their asset:

(9)

where denotes a signal about stock ‘s idiosyncratic component. For example, a naif studying Target Corp might get a signal about how the company’s fundamental value will rise due to an industry-specific supply-chain management innovation (big-box store feature-specific shock). The other naive asset-specific investor studying Target might then get a signal about how the company’s fundamental value will fall due to the unexpected death of their CEO (Target-specific idiosyncratic shock).

I make three key assumptions about how the naifs solve their optimization problem. First, I assume that these investors believe each period that they’ll hold their portfolio until the liquidating dividend at time . Second, I assume that, while naifs see private signals about the size of a feature-specific shock, they do not generalize this information and apply it to other assets with this feature. For example, the naif who realized that Target’s fundamental value will rise due to the supply-chain innovation won’t use this information to reevaluate the correct price of Wal-Mart. Third, these naive investors do not condition on current or past asset prices when forming their expectations. To continue the example, this same investor studying Target won’t analyze the average returns of all big-box stores to get a better sense of how big a value shock the industry-specific supply-chain innovation really was.

All of these assumptions are motivated by bounded rationality. A naive asset-specific investor must use all his concentration just to figure out the implications of his private signals. With no cognitive capacity left to spare, he can’t implement a more complex, dynamic, trading strategy (first assumption), extend his insight to other companies (second assumption), or use prices to form a more sophisticated forecast of the liquidating dividend value (third assumption). These naifs behave similarly to the newswatchers from Hong and Stein (1999) and also neglect correlations in a similar fashion to Eyster and Weizsacker (2010).

5. Baseline Equilibrium

We now have enough structure to characterize a Walrasian equilibrium with private valuations. When no market-wide arbitrageurs are present, the price of each asset is given by:

(10)

where denotes the index of the particular shock affecting the th asset.

What do these formulas mean for an arbitrageur? Suppose that the big-box store supply-chain innovation occurred and affected assets . Naive asset-specific investors neglect the fact that they could use the average returns of assets in the big-box industry to refine their beliefs about the size of the shock. As an arbitrageur, you can profit from this neglected information by deducing the size of the shock from the industry average returns:

(11)

where is given by:

(12)

Simply buy shares of the underpriced big-box stock whose , and short shares of the overpriced big-box stock whose .

Of course, in the real world, you wouldn’t have an oracle. You wouldn’t know ahead of time that the big-box store shock had occurred. Instead, you’d have to not only value the big-box store shock but also identify that the shock had occurred in the first place. Let’s now introduce arbitrageurs to the model and study this joint problem.

6. Arbitrageurs’ Objective Function

Arbitrageurs start out with no private information; however, unlike the naifs, they can observe all asset returns in period . They can then use this information to both value and identify feature-specific shocks, submitting market orders to maximize their risk-neutral utility over end-of-game wealth:

(13)

where is chosen as the model of the world that minimizes the arbitrageurs’ average prediction error over the assets’ fundamental values given the observed period prices, . In this model, much like that of Hong and Stein (1999), the naifs effectively serve as market makers.

Because there are more features than assets, , arbitrageurs must engage in model selection à la Barberis, Shleifer, and Vishny (1998) or Hong, Stein, and Yu (2007). Choosing the right model of the world is their main challenge: figuring out whether Facebook’s IPO failed due to Morgan Stanley’s complacency or due to under-appreciated political risks. If arbitrageurs knew which features to analyze ahead of time, , then their problem would be dramatically easier. It would be as if they had an oracle sitting on their shoulder interpreting market events for them. They would then be able to use the usual OLS techniques to form beliefs about the size of the feature-specific shocks:

(14)

where is restricted to columns , and is restricted to rows . Note that there is no hat over the choice of feature-specific shocks, ; only the has a hat over it. Only the exact values of the shocks are unknown.
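As a sketch of this oracle case, suppose (hypothetically) that returns load on a known binary exposure matrix and that only two features, whose identities the oracle reveals, were actually shocked. All dimensions and shock sizes below are invented for illustration. Restricting the regression to the shocked columns reduces the problem to ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions: 50 assets, 200 candidate features, of which only
# the features in S were shocked -- the oracle's information.
n_assets, n_features = 50, 200
S = [3, 17]                                   # indices of shocked features (assumed)

Theta = rng.integers(0, 2, size=(n_assets, n_features)).astype(float)  # 0/1 exposures
f_true = np.zeros(n_features)
f_true[S] = [2.0, -1.5]                       # true shock sizes (assumed)
returns = Theta @ f_true + rng.normal(0.0, 0.1, n_assets)

# With the shocked columns known, the full (under-determined) problem
# collapses to OLS on the restricted exposure matrix.
f_hat, *_ = np.linalg.lstsq(Theta[:, S], returns, rcond=None)
```

Without the oracle, the full regression of 50 returns on 200 features has no unique solution, which is exactly why the arbitrageurs below must select features before estimating.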

By contrast, the market-wide arbitrageurs in this model have to use some thresholding rule to cull the set of potential features down to a manageable number. They have to both select and estimate . While this daunting real-time econometrics problem is new to the asset-pricing literature, researchers and traders confront it every single day. As Johnstone (2013) argues, this sort of behavior “is very common, even if much of the time it is conducted informally, or perhaps most often, unconsciously. Most empirical data analyses involve, at the exploration stage, some sort of search for large regression coefficients, correlations or variances, with only those that appear ‘large’, or ‘interesting’ being retained for reporting purposes, or in order to guide further analysis.”

7. Bayesian Inference

Let’s now turn our attention to how a fully-rational Bayesian arbitrageur with non-informative priors should select which features to use. Bayes’ rule tells us that the posterior probability of a particular combination of shocks, , is proportional to the likelihood of observing the realized data given the combination, , times the prior probability of Nature choosing the combination of shocks, :

(15)

So, this arbitrageur will select the collection of at most features that maximizes the log-likelihood of the observed data:

(16)

since each of the combinations of shocks is equally likely.

Why is there an inequality sign in Equation (16)? That is, why isn’t the constraint ? Because some of the elements in will be small. After all, it’s drawn from a Gaussian distribution. A fully-rational Bayesian arbitrageur will want to ignore some of the smaller elements in since he faces overfitting risk. For instance, if all Houston-based firms realize a local tax shock that increases their realized returns to the tune of per year, then it will be impossible for a market-wide arbitrageur to spot this shock. Firm-level volatility can exceed per year. An arbitrageur trying to recover such a weak signal from amid so much noise is more likely to overfit the observed data and draw the wrong inference.
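In a toy instance, maximizing the Gaussian log-likelihood subject to the size constraint in Equation (16) amounts to a brute-force search over all feature subsets of size at most K for the one that minimizes the residual sum of squares (the two are equivalent under Gaussian errors). The dimensions, shock sizes, and noise level here are invented for illustration:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical small instance: 30 assets, 8 candidate features, at most
# K = 2 features may be included in the model.
n_assets, n_features, K = 30, 8, 2
Theta = rng.normal(size=(n_assets, n_features))     # feature exposures (assumed)
f_true = np.zeros(n_features)
f_true[[1, 6]] = [3.0, -2.0]                        # true shocked features
returns = Theta @ f_true + rng.normal(0.0, 0.5, n_assets)

def rss(cols):
    """Residual sum of squares after regressing returns on the chosen columns."""
    if not cols:
        return float(returns @ returns)
    f_hat, *_ = np.linalg.lstsq(Theta[:, cols], returns, rcond=None)
    resid = returns - Theta[:, cols] @ f_hat
    return float(resid @ resid)

# Enumerate every combination of at most K features and keep the
# likelihood maximizer (equivalently, the RSS minimizer).
subsets = [c for k in range(K + 1)
           for c in itertools.combinations(range(n_features), k)]
best = min(subsets, key=lambda c: rss(list(c)))
```

With strong shocks relative to the noise, the search recovers the true support; the exhaustive enumeration also shows why the problem explodes combinatorially when the number of features is large.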

Schwarz (1978) showed that fully Bayesian arbitrageurs in this setting should ignore all coefficients smaller than . This is the correct threshold for a Gaussian model in the following sense. Suppose that there were no shocks. That is, we had and for each of the assets. Then, we would like our estimator to tell us that there are no shocks with overwhelming probability:

(17)

where is an arbitrarily small number that is chosen in advance, and denotes the average over the set of assets with exposure to feature . This particular choice of comes from the fact that:

(18)

almost surely for a Gaussian model.
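The extreme-value fact behind this threshold is easy to check by simulation: the largest absolute value among Q independent standard-normal draws grows like the square root of 2·log(Q), so the ratio of the two tends to 1 as Q grows. The sample sizes below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)

# With no shocks, each feature-level average return is pure Gaussian noise.
# The largest of Q such draws grows like sqrt(2 * log(Q)), which is why a
# threshold of that order screens out spurious "shocks" with high probability.
for Q in [10**3, 10**5, 10**7]:
    z = rng.standard_normal(Q)
    ratio = np.abs(z).max() / np.sqrt(2.0 * np.log(Q))
    print(Q, round(ratio, 3))
```

As Q increases, the printed ratio settles near 1, which is the almost-sure limit referenced in Equation (18).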

8. Equilibrium with Arbitrageurs

Let’s now wrap up by looking at the effect of these market-wide arbitrageurs on equilibrium asset prices. Prices in period will be the same as before since arbitrageurs have no information in the first period. As a result, they do not trade in period . To solve for time prices as a function of arbitrageur demand, simply observe that market clearing implies:

(19)

Some simplification then yields:

(20)

Thus, we can see that the price of each asset will be a weighted average of the signals that the naifs receive and the arbitrageurs’ demand. An asset’s price will be higher if the naifs get more positive asset-specific signals or if arbitrageurs demand more as a result of a more positive feature-specific signal.

Suppose that, after observing period returns, arbitrageurs believe that features have realized a shock. If they are using the Bayesian information criterion, this means that for each the estimated was larger than . It’s possible to write the arbitrageurs’ beliefs about the value of each asset as a linear combination of an asset-specific component, , and the estimated feature-specific shock size, :

(21)

The asset-specific component, , comes from the fact that, if arbitrageurs believe that an asset’s value is due in part to a feature-specific shock of size , then they can use these beliefs to update their priors about the size of the asset’s idiosyncratic shock. Plugging this linear formula into arbitrageurs’ optimal portfolio holdings yields:

(22)

where the coefficient on can be simplified as follows:

(23)

This result implies that arbitrageurs decrease their demand for an asset with exposure to, say, a negative political-risk shock by shares for every increase in the size of the shock.

The key implication of this model is that including a shocked feature in the arbitrageurs’ model of the world will yield a price shock of size:

(24)

For instance, if arbitrageurs were using Bayesian updating, then there would be a discontinuous jump in the effect of a political-risk shock on social media companies like Facebook as the size of the shock crossed the threshold.
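A minimal sketch of why the jump is discontinuous, using a generic hard-thresholding rule as a stand-in for the arbitrageurs’ selection step. The threshold value below is assumed for illustration, not derived from the model:

```python
# Hard thresholding: an estimated shock enters the arbitrageurs' model of
# the world only if it clears the threshold lam.
lam = 2.0  # illustrative threshold, not the model's exact cutoff

def included_shock(f_hat, lam):
    """Shock size the arbitrageurs act on: zero below the threshold."""
    return f_hat if abs(f_hat) > lam else 0.0

# Just below vs. just above the threshold: the shock is first ignored
# entirely, then acted on at nearly its full size.
below = included_shock(lam - 1e-6, lam)
above = included_shock(lam + 1e-6, lam)
```

An infinitesimal change in the estimated shock moves the arbitrageurs’ response from zero to roughly the threshold size, which is the discontinuous jump in price impact described above.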