Views - Space Machine (http://www.spacemachine.net/views/)
Trading Wall Street from Silicon Valley: A Journey of Discovery

World Puzzle
Quant Quanto, Sat, 07 May 2016 (http://www.spacemachine.net/views/2016/5/world-puzzle)

This story has been retold many times, but the following narrative, excerpted from Gary Keller's The ONE Thing, is my favorite:

One evening, a young boy hopped up on his father's lap and whispered, “Dad, we don't spend enough time together.” The father, who dearly loved his son, knew in his heart this was true and replied, “You’re right and I’m so sorry. But I promise I’ll make it up to you. Since tomorrow is Saturday, why don’t we spend the entire day together? Just you and me!” It was a plan, and the boy went to bed that night with a smile on his face, envisioning the day, excited about the adventurous possibilities with his Pops.

The next morning the father rose earlier than usual. He wanted to make sure he could still enjoy his ritual cup of coffee with the morning paper before his son awoke, wound up and ready to go. Lost in thought reading the business section, he was caught by surprise when suddenly his son pulled the newspaper down and enthusiastically shouted, “Dad, I’m up. Let’s play!”

The father, although thrilled to see his son and eager to start the day together, found himself guiltily craving just a little more time to finish his morning routine. Quickly racking his brain, he hit upon a promising idea. He grabbed his son, gave him a huge hug, and announced that their first game would be to put a puzzle together, and when that was done, “we’ll head outside to play for the rest of the day.”

Earlier in his reading, he had seen a full-page ad with a picture of the world. He quickly found it, tore it into little pieces, and spread them out on the table. He found some tape for his son and said, “I want to see how fast you can put this puzzle together.” The boy enthusiastically dove right in, while his father, confident that he had now bought some extra time, buried himself back in his paper.

When I put the man together...

Within minutes, the boy once again yanked down his father’s newspaper and proudly announced, “Dad, I’m done!” The father was astonished. For what lay in front of him — whole, intact, and complete — was the picture of the world, back together as it was in the ad and not one piece out of place. In a voice mixed with parental pride and wonder, the father asked, “How on earth did you do that so fast?”

The young boy beamed. “It was easy, Dad! I couldn’t do it at first and I started to give up, it was so hard. But then I dropped a piece on the floor, and because it’s a glass-top table, when I looked up I saw that there was a picture of a man on the other side. That gave me an idea! When I put the man together, the whole world fell into place.”

What if a perpetual alpha fund can be organized and operated in such a way that it delivers on the following eight attributes: high Sharpe ratio, zero transparency, high liquidity, monitored risk exposures, few controls, just enough capacity, hyperactive turnover, and tiered fees, while preserving the key operational advantages enjoyed by a small prop shop? Don't you think this may be an interesting area of the design space to explore?

Things which matter most must never be at the mercy of things which matter least.

All good things come to an end: But why can’t good things be made to last?

Since its inception in September 2000, the iconic Nevsky Capital, a long-short global equity fund, posted an impressive cumulative gain in dollar terms of 1,213% over its 15-year lifespan through December 2015. The fund's manager, Martin Taylor, shut it down at the end of 2015 and returned all of the $1.5 billion in capital to investors.

In his final letter to fund investors, Martin Taylor cited persistent challenges in the current market environment as the reasons behind the decision. For over 21 years, a "broadly unchanged process" (i.e., one that marries the top-down forecasting of key macro-economic variables with the bottom-up forecasting of company earnings) had worked well for them. Until it didn't anymore. What are we to make of this? Why can't alpha be made to last?

It turns out that alpha is rather ephemeral even for high-frequency traders, according to Jacob Loveless: “Imagine every day you have to figure out a small part of the world. You develop fantastic machines, which can measure everything, and you deploy them to track an object falling. You analyze a million occurrences of this falling event, and along with some of the greatest minds you know, you discover gravity. It’s perfect: you can model it, define it, measure it, and predict it. You test it with your colleagues and say, ‘I will drop this apple from my hand, and it will hit the ground in 3.2 seconds,’ and it does. Then two weeks later, you go to a large conference. You drop the apple in front of the crowd...and it floats up and flies out the window. Gravity is no longer true; it was, but it isn’t now. That’s HFT. As soon as you discover it, you have only a few weeks to capitalize on it; then you have to start all over.”

HFT was measurably harder by 2010. Most of our models at that time were running at half-lives of three to six months.

— Jacob Loveless (“Barbarians at the Gateways”, 2013)
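Loveless's half-life observation lends itself to a toy calculation. The sketch below is illustrative only; the exponential-decay assumption and the numbers are mine, not measured data:

```python
# Toy model of alpha decay with a fixed half-life, in the spirit of
# Loveless's "half-lives of three to six months". The exponential-decay
# assumption and all figures here are illustrative, not empirical.

def remaining_alpha(initial_alpha, months, half_life):
    """Alpha (in bps per trade) left after `months` of decay."""
    return initial_alpha * 0.5 ** (months / half_life)

# A hypothetical signal worth 10 bps per trade with a 3-month half-life:
for m in (0, 3, 6, 12):
    print(f"month {m:2d}: {remaining_alpha(10.0, m, 3.0):.2f} bps")
```

A year out, the signal retains well under a tenth of its original edge, which is consistent with the "start all over" dynamic Loveless describes.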

Since alpha is hard to come by, might it be worthwhile to reconsider the possibility of creating passive portfolios that replicate hedge fund exposures to risk factors? Earlier research by Andrew Lo and Jasmina Hasanhodzic of MIT had examined just such a possibility, using a linear regression technique to identify common factors. However, the actual performance of such “replication funds” in the real world was somewhat disappointing. For example, the Diversifying Strategies fund, launched in 2009 and managed by AlphaSimplex, performed poorly over the three-year period 2011-2013, especially when compared against the S&P 500 benchmark in a rising stock market; the fund was shut down in 2014. The strategy of replicating hedge-fund returns had its limitations. Whether by linear regression or other techniques, it was again found that real-world financial markets are not so easily modeled after all.

Academic Arbitrage: A renewable source of alpha? Can you still find free money on the street if you look hard enough?

Let’s think about the problem in a slightly different way, and ask: Is there a renewable source of alpha that traders can tap into? It so happens that David McLean of the University of Alberta and Jeffrey Pontiff of Boston College examined a total of 72 market anomalies (i.e., good old-fashioned money-making opportunities!) that appeared in academic journals between 1972 and 2011. They found that average returns fell 35% after publication. They also uncovered evidence that traders jumped in to exploit anomalies following publication, with trading volumes rising and short interest increasing for stocks on the wrong side of arbitrage opportunities. Curiously, the anomalies start to produce improved returns again after a few more years. It appears that traders have a rather short attention span, and very likely have moved on to the next thing.

Just One Thing: How are we even related? I've always been a rational chimp and a loyal Mac user!

Here is an interesting question to consider: How might a “perpetual alpha fund” be organized? Does the trading firm do the same thing as usual, only different every day? Or does the trading firm do just one thing, the same every day but evolves along the way? What new framework do we need?

If this question sounds hard, then how about something a tad bit easier: How might a “perpetually flying paper airplane” be constructed? Here is an “instructable” that teaches how to build the “paper airplane that flies forever” and a video that demonstrates its perpetual flight characteristics along MIT’s famed “infinite corridor”:

No Magic: A shift in perspective is all that is needed.

Theoretically, this airplane will fly for as long as you continue to walk with and guide it (Image Credit: “The Coke and Mentos Guys”).

I’d rather write programs to write programs to write programs than write programs to write programs.

What’s the ONE Thing I can do such that by doing it everything else will be easier or unnecessary?

— Gary Keller ("The ONE Thing")

What's the ONE Thing I can do to get rid of a mouse such that by doing it everything else will be easier or unnecessary?

Since the gravitational potential energy of an upright domino is proportional to the fourth power of its size, a very small amount of input energy can be amplified quickly to knock down an impressively big domino. Start with a 2-inch domino, and arrange for each successive domino to be 50% larger than the one before. Then the 18th domino would be as tall as the Leaning Tower of Pisa. The 23rd domino would tower over the Eiffel Tower, and the 31st domino would loom over Mount Everest by almost 3,000 feet. The 57th domino would practically span the distance between the earth and the moon! (Image Credit: Overflow).
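The geometric-growth arithmetic above is easy to check with a few lines of code. This is a minimal sketch; the landmark comparisons follow the book's text and the heights are approximate:

```python
# Domino growth arithmetic: start at 2 inches, each successive domino
# 50% taller than the last. Landmark comparisons follow the book's text
# and are approximate.

def domino_height_inches(n, start=2.0, growth=1.5):
    """Height of the n-th domino (1-indexed), in inches."""
    return start * growth ** (n - 1)

def feet(inches):
    return inches / 12.0

print(feet(domino_height_inches(18)))   # ~164 ft, Leaning Tower of Pisa scale
print(feet(domino_height_inches(23)))   # ~1,247 ft, taller than the Eiffel Tower
print(feet(domino_height_inches(31)))   # ~31,958 ft, clearing Mount Everest
```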

Find the lead domino, and whack away at it until it falls.

— Gary Keller ("The ONE Thing")

I knew I had to transform Alcoa, but you can’t order people to change. So I decided I was going to start by focusing on one thing. If I could start disrupting the habits around one thing, it would spread throughout the entire company.

On a blustery October day in 1987, a herd of prominent Wall Street investors and stock analysts gathered in the ballroom of a posh Manhattan hotel. They were there to meet the new CEO of the Aluminum Company of America — or Alcoa, as it was known — a corporation that, for nearly a century, had manufactured everything from the foil that wraps Hershey’s Kisses and the metal in Coca Cola cans to the bolts that hold satellites together.

A few minutes before noon, the new chief executive, Paul O’Neill, took the stage. He looked dignified, solid, confident. Like a chief executive. Then he opened his mouth. “I want to talk to you about worker safety,” he said. “Every year, numerous Alcoa workers are injured so badly that they miss a day of work.

“I intend to make Alcoa the safest company in America. I intend to go for zero injuries.”

The audience was confused. Usually, new CEOs talked about profit margins, new markets and ‘synergy’ or ‘co-opetition.’ But O’Neill hadn’t said anything about profits. He didn’t mention any business buzzwords. Eventually, someone raised a hand and asked about inventories in the aerospace division. Another asked about the company’s capital ratios.

“I’m not certain you heard me,” O’Neill said. “If you want to understand how Alcoa is doing, you need to look at our workplace safety figures.” Profits, he said, didn’t matter as much as safety.

The investors in the room almost stampeded out the doors when the presentation ended.

Within a year of O’Neill’s speech, Alcoa’s profits would hit a record high. By the time O’Neill retired in 2000 to become Treasury Secretary, the company’s annual net income was five times larger than before he arrived, and its market capitalization had risen by $27 billion. Someone who invested a million dollars in Alcoa on the day O’Neill was hired would have earned another million dollars in dividends while he headed the company, and the value of their stock would be five times bigger when he left. What’s more, all that growth occurred while Alcoa became one of the safest companies in the world.

Same as Usual, Only Different
Quant Quanto, Sun, 01 May 2016 (http://www.spacemachine.net/views/2016/4/same-as-usual-only-different)

Instead of embracing and celebrating change — or lying about it and pretending to embrace it — I think we ought to stop talking about change altogether. Let’s ignore it, avoid it, and sidestep it. Instead of spending time thinking about change, let’s all sign up for zooming lessons.

— Seth Godin (1999)

Doing the same thing as usual, only different.

Seth Godin defines “zooming” as “doing the same thing as usual, only different”. Zooming, according to Godin, is about stretching your limits without threatening your foundation. It's about handling new ideas, new opportunities, and new challenges without triggering the change-avoidance reflex.

There are all kinds of zoomers, and all kinds of categories in which you can learn to zoom. A person who is able to zoom across a large area without getting stressed out is said to have a broad “zoomwidth”.

Take the franchised restaurants — McDonald’s, Baskin-Robbins, Pizza Hut — for example, none of them has any zoomwidth at all. The structure of these organizations, Godin explained, made any sort of adjustment seem like a major threat, rather than an opportunity to zoom. In fact, Kentucky Fried Chicken even had to change its name to KFC, just so it could start selling non-fried foods!

In contrast, Limited Inc. is a company that has great zoomwidth. At Limited stores, introducing a new clothing style is easy. It changes its merchandise at every store at least once a month — whether it needs to or not.

The big question: Why is it that the big opportunities, the really obvious chances we get to improve our businesses and our careers, almost always pass us by? The answer: big opportunities bring change, and change is painful. Godin concluded that as long as opportunity means “change”, and as long as change means “pain”, we will continue to miss our chances, unless we learn to zoom. For a business, the escape route from doom lies in growing, adapting and transforming the organization so it finally has ample room to zoom.

If a market were informationally efficient, i.e., all relevant information is reflected in market prices, then no single agent would have sufficient incentive to acquire the information on which prices are based.

— Joseph Stiglitz (2001)

If markets are efficient, they reflect all information, so there is no profit to be had from trading on information. If there is no profit to be had, traders with information won’t trade, so markets won’t reflect it, and will not be efficient. This is the Grossman-Stiglitz paradox in a nutshell. Indeed, if there is no profit to be had from trading on information, then why would anyone expend resources to acquire the information upon which prices are based in the first place?

Indeed, a visit to the “hall of fame” of equity market inefficiencies popular with quantitative traders reveals a potpourri of sources of alpha (i.e., active investment ideas for out-performing a passive benchmark), e.g., earnings forecast analysis, earnings surprises, insider trading disclosures, stock splits, secondary equity offerings and stock buybacks, mergers and acquisitions, sector analysis, common factor analysis, message board counts, Twitter sentiment, web traffic analysis, etc. Financial markets in general are far from perfect; many sources of inefficiency can be found at different times if one has the right tools and knows where to look.

A spectrum of return sources: from Beta to Exotic Beta to Alpha; it's all about having the right tools and knowing where to look.

An investment strategy that is quite popular with hedge funds is the “market neutral long-short portfolio”. In a typical setup starting with, say, $100 million, a long-short, market-neutral portfolio consists of $100 million in long positions and $100 million in short positions. After receiving $100 million from the short sale and spending $100 million on the long side, there is still $100 million in cash (the amount the fund started with); there is no net capital requirement to put on such a position. What the hedge fund manager typically does then is use the cash to put on an unleveraged futures position, e.g., in S&P 500 index futures, so as to capture the market return. This works because an ongoing market index futures position, reinvested at the contract expiration dates, closely tracks the index return.

So when a long-short portfolio and an index futures position are put together, what results is a total return equal to the return on the index (i.e., beta) plus whatever return captured from the long-short portfolios (i.e., alpha). This is called an “equitized” portfolio, named for the market return captured through putting on an equity market index futures position. Notice that all the money in the fund is working twice: once on the long side of the portfolio and once on the short side. And this alpha return comes on top of the market return. It is no wonder that David Leinweber dubbed this the “James Brown of quant stock strategies, the hardest working portfolio in the equity business.”
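The equitized-portfolio accounting can be sketched in a few lines. All figures below are hypothetical; the point is only that the total return decomposes into the index return (beta) plus the long-short spread (alpha):

```python
# Minimal accounting sketch of an "equitized" market-neutral portfolio.
# All numbers are hypothetical; the point is that total return
# decomposes into index return (beta) plus long-short spread (alpha).

capital = 100_000_000          # starting cash, $100M

long_return  = 0.08            # hypothetical return on the $100M long book
short_return = 0.05            # hypothetical return on the names sold short

# Long-short spread: gain on the longs minus the gain owed on the shorts.
alpha = long_return - short_return        # 0.03

index_return = 0.07            # hypothetical S&P 500 futures return (beta)

# Equitized total: the cash backs an unleveraged index futures position,
# so the fund earns beta on top of the long-short alpha.
total_return = index_return + alpha       # 0.10
print(f"total return: {total_return:.1%} on ${capital:,}")
```

Note how the same $100 million works twice, once on each side of the book, exactly as described above.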

The general plan of quantitative strategies, such as the popular market neutral long-short portfolio described above, is no mystery. After all, quantitative strategies are really just mathematical expressions of fundamental investment ideas, if one looks inside the process. Quantitative methodology allows many disparate concepts to come together in a single forecast. Because the process is automated, it can be applied to many financial securities, thus spreading little bets across many active positions and limiting risk in the process. So in many ways, quantitative investing is really not that much different from traditional investing, although it may sound quite dissimilar.

To err is human, but to really screw up… you need a computer.

— Anonymous

This monkey has a future career in statistical arbitrage; he is starting to see it now from the vantage point of Arrow-Debreu...

Some quantitative strategies work by pure arbitrage, essentially finding the three-and-a-half-cent pennies in the market before anyone else does. Arbitrage opportunities are sweet if you can find them; statistical arbitrage works just as well. But in an increasingly wired world where the global financial markets are fully electronic, such arbitrage opportunities are rare, and available only to those with bleeding-edge infrastructure, or scale of capital, or both. For the rest of the trading masses, strategies based on prediction of financial markets (adjusted for risk) are far more commonplace. The objective here is two-fold: increasing predictability increases investment return, while improving the consistency and reducing the downside error of predictive models reduces risk.

Maximizing Predictability: Just 3 places to look, but many stochastic combinations are possible (and don't forget about time horizon!).

A useful perspective on maximizing predictability in financial markets is depicted above. The perspective is attributed to Andrew Lo, but the picture is adapted from an illustration found in David Leinweber’s Nerds on Wall Street. When viewed from a high level, there are only three key decisions to make in any financial market prediction:

What to predict: One can choose to predict returns to an asset class, e.g., a broad market or an industry group, an exchange rate, interest rates, or returns to individual securities of many types. One can also choose to predict spreads (i.e., return differences) between individual securities or groups of securities. Predictions of volatility are useful for options-based strategies.

How to predict: One can choose from a wide variety of statistical and mathematical methods of prediction. Simple windowed regression methods are popular. Some choose more advanced methods, such as moving or expanding windows, kernel estimation, auto-regressive integrated moving average (ARIMA) time series models, or even neural networks.

What to predict with: These are the raw materials that feed into the prediction methods. Technical traders use only past prices to predict future prices, but this is quite rare in institutional trading. A wide selection of financial and economic data, e.g., commodity prices, foreign exchange rates, GDP announcements, analysts’ opinions, messages on bulletin boards, or even measures of Twitter sentiment, could find their respective predictive powers within the right context.
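As a concrete (and deliberately naive) illustration of the "simple windowed regression" idea from the second item, here is a sketch that regresses next-period return on the current return over a trailing window. The data, window length, and single-lag specification are all assumptions for illustration:

```python
# Toy "windowed regression" forecaster: regress next-period return on the
# current return over a trailing window, then predict one step ahead.
# The data and window length are made up for illustration.

def ols_slope_intercept(xs, ys):
    """Ordinary least squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def windowed_forecast(returns, window):
    """Predict the next return from the last `window` (r_t, r_{t+1}) pairs."""
    xs = returns[-window - 1:-1]       # predictors: r_t
    ys = returns[-window:]             # targets:    r_{t+1}
    a, b = ols_slope_intercept(xs, ys)
    return a + b * returns[-1]

# Hypothetical daily returns with mild mean reversion:
history = [0.010, -0.008, 0.006, -0.005, 0.004, -0.003, 0.002]
print(windowed_forecast(history, window=5))
```

Real institutional models would, per the third item, feed in far richer predictors than past returns alone.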

In an uncertain world, a stochastic world view, together with associated methodologies for conducting experiments and interpreting their outcomes, might be important. Last but not least, the time horizon over which everything interacts (i.e., long or short) plays just as important a role in determining success or failure, especially in electronic markets populated by high-frequency traders.

Now, here is an interesting meta-level question: Can hedge fund returns be predicted? Can hedge fund returns, assuming they are good, be replicated?

“Eddie Harris and Les McCann walked onto the stage and though they had hardly rehearsed at all, launched into an ad-libbed song that made history. Ironically enough, the song contained the line, “Real... compared to what?”

“Twenty years later, “perfect sound forever” brought us the CD version. There’s no pops and crackles, but to my ears, it’s just a reminder of the depth of the LP.

Montreux Jazz Festival (1969): Better on YouTube?

Trying to make it real... What!?

“Then they had us move everything to MP3. Now I’ve got the CD version ripped on my iPod. There are far fewer bits of data and it doesn’t sound as good, but it reminds me of the original.

“Now, I’ve got a Monster cable for my car that lets me broadcast the MP3 version of the CD version of the vinyl version of the live event over the FM airwaves to my car radio. It sounds like Eddie’s in the Holland Tunnel. And it’s not even close to music, but it reminds me of the way I felt when I heard the album.

“This is not just happening to music. It’s not just traditional media, either. An e-mail doesn’t communicate as much information as a meeting, and a voice mail is really hard to file. A PowerBar may have plenty of vitamins and stuff, but it’s just not as good as a real meal…

“This phenomenon creates a big opportunity. The opportunity to provide sensory richness, to deliver experiences that don’t pale in comparison to the old stuff. It’s not just … nostalgia — it’s a human desire for texture.

1936 Deluxe Edition Monopoly Game Extremely Rare.

Maps and Territories
Quant Quanto, Sun, 03 Apr 2016 (http://www.spacemachine.net/views/2016/4/maps-and-territories)

Terrain doesn’t fight wars. Machines don’t fight wars. People fight wars. It’s in the minds of men that war must be fought.

— John Boyd (1927-1997)

Developed by maverick military strategist and USAF Colonel John Boyd, the phrase “OODA Loop” refers to the decision cycle of observe, orient, decide, and act. Boyd applied the concept to the combat operations process, often at the strategic level in military operations. Boyd believed that “getting inside the decision cycle of an adversary” is crucial for winning wars. In a recent April 1 issue of the “Breaking Smart” series, Venkatesh Rao formulated the general concept of “map-territory distinction” and explained in detail how finding exploitable weaknesses in the adversary's map can be an important source of competitive advantage.

Red is operating with finger-tip feeling, and has a map of Blue's map. Blue is map-blind, and has no idea what Red is thinking. Who do you think is going to come out ahead? (Image Credit: Breaking Smart).

According to Rao, maps are used everywhere: geographic maps, organization charts, market evolution maps, genome maps, neural circuit maps, biome maps, sheet music, etc. In competitive situations, there are maps, maps of maps, maps of maps of maps, etc. One can also make maps of others’ behaviors. Maps can thus be viewed as the basis of all competition. After all, a map is a simplified model of directly experienced reality, or phenomenology in the context of discourse related to the philosophy of science.

Like models, maps are efficient and useful. They reduce the cognitive load of mindful attention to phenomenology via one’s senses. Phenomenological awareness is much more expensive than listening to a model in one’s head. A good map can lower the cost of actions by orders of magnitude. But, like models, maps carry a hidden cost. When reality changes and catches one unaware, costly failures can occur (e.g., the spectacular failure of LTCM in 1998, or the financial crisis of 2008).

There is also a less dramatic, but more serious, cumulative cost to “map addiction”, according to Rao, i.e., an atrophy of sense-awareness. “Map blindness” turns mere known-unknowns into unknown-unknowns. Almgren and Chriss have this to say about the limitations of all model-driven strategies:

“Finally, we note that any optimal execution strategy is vulnerable to unanticipated events. If such an event occurs during the course of trading and causes a material shift in the parameters of the price dynamics, then indeed a shift in the optimal trading strategy must also occur. However, if one makes the simplifying assumption that all events are either "scheduled" or "unanticipated," then one concludes that optimal execution is always a game of static trading punctuated by shifts in trading strategy that adapt to material changes in price dynamics.”

The opportunity cost of not developing phenomenological awareness is quite high: one is effectively denied the use of tacit knowledge that has not been organized into maps (or models) in conscious awareness. German World War II military strategists referred to this particular sense-awareness as Fingerspitzengefühl, or “finger-tip feeling”. Unlike closed-loop feedback, which signals where the model is wrong and how to adjust and compensate for the discrepancy, finger-tip feeling sensitizes one to the things the model does not even “know” about (i.e., where the model is not even wrong!).

A pure map-based navigation strategy is what control theorists call an open-loop strategy. One simply assumes the map is the territory, and navigates by it with eyes closed. This strategy is very cheap: a decision not to pay attention. Adding error feedback results in a closed-loop strategy, an incremental improvement that is quite a bit costlier. Now one must budget attention based on what the model assumes is important, and navigate by it with eyes wide shut. But a navigation strategy based on finger-tip feeling attempts to eliminate explicit maps from the loop altogether. By “instrumenting the phenomenology” directly, in a manner of speaking, one is finally navigating the territory not only with eyes open, but with an open mind.
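The open-loop versus closed-loop contrast can be made concrete with a toy navigation simulation; all parameters here are invented. A walker follows a straight-line "map" while an unmodeled drift pushes it off course, with and without error feedback:

```python
# Toy contrast between open-loop and closed-loop navigation.
# A walker tries to stay on a straight path (position 0) while an
# unmodeled crosswind pushes it off course each step. Values are invented.

DRIFT = 0.2   # per-step push the walker's "map" knows nothing about
GAIN  = 0.8   # fraction of observed error corrected each step (closed loop)

def navigate(steps, feedback):
    pos = 0.0
    for _ in range(steps):
        pos += DRIFT                       # reality diverges from the map
        if feedback:
            pos -= GAIN * pos              # observe the error, correct toward the path
    return pos

print(navigate(20, feedback=False))   # open loop: error grows with every step
print(navigate(20, feedback=True))    # closed loop: error settles near a small bound
```

Open loop, the error grows linearly; with feedback, it stays bounded. Finger-tip feeling, as described above, aims further still: noticing the unmodeled drift itself rather than merely correcting its symptoms.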

In finger-tip feeling based navigation, rather than budget attention based on assumed priorities, one deploys attention without importance judgment. This is a stage that precedes map-making and is vastly more expensive in terms of cognitive processing load. But this approach can achieve radical improvements in the long term. Incidentally, this is why recent advances in deep learning technology are widely considered to be significant. By instrumenting phenomenology rather than models, they can make sense of situations the model does not know about. But how does this work exactly?

I’d rather write programs to write programs than write programs.

— Richard Sites

A low-quality map requires a lot of expensive error feedback to just barely function. Sometimes it might even be worse than having no map. A high-quality map, on the other hand, might easily function well even with little feedback. But in competitive situations, one does not win with a better or more detailed map than the adversary. Instead, one wins by using finger-tip feeling to find exploitable weaknesses in the adversary’s map. “Fight the enemy, not the terrain”, as military strategist John Boyd once said. During a crisis, a feedback loop could be worse than an open-loop map; it is an automatic, subconscious habit that can be used against itself to cause a cascade of damage. For example, the Flash Crash of May 6, 2010 can be considered an extreme case of “feedback-amplified map-blindness” among an active subset of the market participants.

Unlike explicit map-and-model building, finger-tip feeling is not a one-time investment. Because the environment and one’s priorities can shift constantly, one has to always allocate a certain amount of attention to “finger-tip feeling” of the territory. One must also keep in mind that phenomenology is not reality; it is merely one’s experience of reality, limited by one’s senses and subconscious mental models. Therefore, it is advantageous to strive for continuous improvement in Fingerspitzengefühl through constant practice and deepening self-awareness, as if it were a form of basic cognitive R&D.

Venkatesh Rao recognized the value of multiple models, an insight he gained from an earlier study of map-territory gaps in formal models. When multiple models collide, as Rao observed, they create dissonances; and phenomenology tends to win over all of them. One can thus see reality through the debris. Furthermore, by simply deciding to value phenomenology over maps, one can realize much of the benefit of Fingerspitzengefühl. This happens to be the approach that the MIT roboticist Rodney Brooks had earlier adopted for building his collection of “robotic creatures”, whose “insect-level intelligence”, made possible by the underlying “subsumption architecture”, was first described in a seminal 1987 paper titled “Intelligence without Representation”. Brooks’s main insight was that AI suffers from abstraction, and that a system cannot reason beyond its representation. By reacting directly to the real world instead, representations (aka models) become unnecessary, thus greatly simplifying the construction of robots.

However, there is also value in blending multiple models together. In a surprising turn of events, the winning team of the $1 million Netflix Prize, BellKor’s Pragmatic Chaos, was actually a hybrid team. BellKor (AT&T Research), which won the first Progress Prize milestone in the contest, initially combined with the Austrian team Big Chaos to improve their scores. To pass the 10 percent mark, the Quebecois team Pragmatic Theory later joined up to create “BellKor’s Pragmatic Chaos.” The second-place team, The Ensemble, was also a composite. Arguably, the Netflix Prize’s most convincing lesson is that a diversity of approaches drawn from a large crowd is more effective than a smaller number of more powerful techniques. Joining forces allowed both teams to incorporate small, outlying techniques that are relatively inconsequential in the big picture, but crucial during the final stages where tweaking matters most.

“When we were approaching the first progress prize as the BellKor team, there were several other teams that joined together to make a real run at us, and that was surprising to us,” according to Chris Volinsky, originally of team BellKor. “The success of that collaboration told us that this was a real, powerful way to improve our scores. When you’re banging heads together in an office trying to come up with new ideas, you sometimes run out of ideas, and you need to bring in new people into the team, and that turned out to have a great benefit in terms of the predictive power of the models.”

Better solutions come from unorganized people who are allowed to organize organically. But something else also happened that was not entirely expected: Teams that had it basically wrong — but for a few good ideas — made the difference when combined with teams which had it basically right, but couldn’t close the deal on their own. The top two teams beat the challenge by combining teams and their algorithms into more complex algorithms incorporating everybody’s work. The more people joined, the more the resulting team’s score would increase. “One of the big lessons was developing diverse models that captured distinct effects,” commented Joe Sill of The Ensemble, “even if they’re very small effects.”
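The blending lesson can be sketched with a toy weighted ensemble. The models, weights, and ratings below are all invented (this is not BellKor's actual method): a noisy model whose errors run opposite to those of a "basically right" model improves the blend even though it is worse on its own:

```python
# Minimal sketch of blending diverse models' predictions, in the spirit
# of the Netflix Prize teams. Models, weights, and ratings are invented.

def rmse(preds, truth):
    """Root mean squared error of predictions against true ratings."""
    return (sum((p - t) ** 2 for p, t in zip(preds, truth)) / len(truth)) ** 0.5

truth   = [4.0, 3.0, 5.0, 2.0, 4.0]
model_a = [3.8, 3.4, 4.6, 2.5, 3.9]   # "basically right" model
model_b = [4.5, 2.2, 5.3, 1.2, 4.8]   # noisier model capturing distinct effects

# Fixed-weight blend; the weights are arbitrary for illustration.
blend = [0.7 * a + 0.3 * b for a, b in zip(model_a, model_b)]

print(rmse(model_a, truth), rmse(model_b, truth), rmse(blend, truth))
```

Because the two models' errors partially cancel, the blend's RMSE beats either model alone, which is the "small, distinct effects" dynamic Volinsky and Sill describe.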

What lessons might we draw from this that would illuminate the path forward for organizing a community of traders centered on a trading platform? How do we use models? What happens when models collide? How should we blend models? What is the phenomenology of financial trading? Can intelligence emerge from phenomenology?

Maps and Territories

Little Bets
By Quant Quanto | Fri, 01 Apr 2016 12:45:10 +0000
http://www.spacemachine.net/views/2016/4/little-bets

Here is an interesting question: “How do we organize the underlying trading platform so as to achieve the following desired dynamics: a large number of little bets guided by robust models built upon imperfect data leading to many small but early and sure wins?” Notice that we are not interested in “one big win from one big bet”, nor are we concerned with perfect data that may be expensive to collect or maintain. The focus here is on building robust models that are useful for trading.

Failing quickly to learn fast: thought experiments (Gedankenexperimente), driven by pre-mortems, let us speculate about potential antecedents for a designated consequent and so counter intrinsic human biases and blind spots; soft launches are a lot cheaper when they are simply imagined! But what about operating via Fingerspitzengefühl, or “fingertip feeling”, based upon phenomenological awareness and tacit knowledge?

Failing forward is a well-known empirical approach of learning from mistakes and failures in order to find the way forward. It is not so much that one intentionally tries to fail, but rather that one knows important discoveries will be made by being willing to be imperfect, especially at the initial stages of exploring new ideas or markets. Rough prototyping is often the method of choice for embracing the learning potential of failure, while affordable small bets are used to uncover unpredictable opportunities. The fast pace of change in a constantly evolving market highlights the value of the little bets approach, where moment-to-moment, creative opportunity-seeking has no substitute. Working from the ground up and learning from the environment, the trading platform crafts new tactics to address opportunities as they are discovered.

This is a whole new way of looking at the problem: one of experimentation and discovery, a creative approach to trading. Pre-conceived templates or strategies become obsolete. Two fundamental advantages of the little bets approach, according to Professor Saras Sarasvathy, are that (i) it puts the focus on what we can afford to lose rather than making assumptions about how much we can expect to gain, and (ii) it facilitates the development of capabilities as trading opportunities are sought and discovered. In short, affordable loss and capabilities development are the bedrock of the little bets approach to trading.

The Dragonfly Telephoto Array, a robotic imaging system optimized for the detection of extended ultra-low surface brightness structure. The ten Canon 400mm lenses are mounted on a common framework and are co-aligned to image simultaneously the same position on the sky, enabling removal of unwanted scattered light to reveal extremely faint galaxy structure that eludes even the largest, most advanced telescopes today. The Dragonfly "compound eye" is 10 times more sensitive and 1,000 times cheaper than the best large telescopes, and has already made a big new discovery about the structure of the universe. (Image Credit: University of Toronto/Yale University).

Dr. Carol Dweck, a professor of social psychology at Stanford University, initially developed the fixed versus growth mind-set distinction by studying how schoolchildren reacted to failure and challenges. To her surprise, she found that some students relished difficulty and challenge. Dozens of studies later, Dweck’s findings suggest that people exhibiting fixed mind-sets tend to gravitate to activities that confirm their abilities, whereas those with growth mind-sets tend to seek activities that expand their abilities. People with fixed mind-sets want to appear capable, even if that means not learning in the process. People with a growth orientation, on the other hand, are willing to take more risks since challenging experiences represent chances to grow.

We wonder if the "electronic brain" of a trading platform can be programmatically imbued with an inherent growth mind-set, anthropomorphically speaking, so as to more easily capture new opportunities for growth through experimentation, exploration and improvisation? After all, the market environment already specifies the underlying design constraints. Depending upon the time of day or the specifics of the trading calendar, one can learn a little from a lot of venues, or learn a lot from just a few venues. From this perspective, robust models that exert computational efforts probing the market for answers via little bets (i.e., which provide the foundational capabilities development and affordable loss protection) are beginning to look like a winning combination deserving of further investigation.

Perhaps the most important question that we can ask is this: What is the purpose of a trading platform? Is it to supply data and facts and to run models and strategies? Or is it to support experimentation and effortful problem-solving, facilitate the growth of new trading opportunities, and nurture a capacity for continuous learning from the market? It seems that little bets could be an interesting central organizing principle for a novel trading platform that can learn and adapt quickly.

It’s a numbers game after all: how can one realize the statistical information advantage of many small wins from little bets over one big bet, and do so without incurring the infrastructure overhead of traditional high-frequency trading?
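As a back-of-the-envelope illustration of that advantage, consider a toy simulation (the edge, stake sizes, and bet counts below are made up for illustration; this is not a trading strategy): staking the same total capital on one big bet versus one hundred small independent bets with the same per-dollar edge. The expected profit is identical, but the dispersion of outcomes, and the frequency of losing money overall, is dramatically smaller for the small-bet book.

```python
import random
import statistics

random.seed(7)

EDGE = 0.55          # probability that each bet wins (hypothetical)
STAKE = 100.0        # total capital wagered either way
N_TRIALS = 20000     # Monte Carlo repetitions

def one_big_bet():
    # Entire stake on a single win/lose outcome
    return STAKE if random.random() < EDGE else -STAKE

def many_small_bets(n=100):
    # Same total stake spread over n independent bets with the same edge
    size = STAKE / n
    return sum(size if random.random() < EDGE else -size for _ in range(n))

big = [one_big_bet() for _ in range(N_TRIALS)]
small = [many_small_bets() for _ in range(N_TRIALS)]

# Same expected profit, but the small-bet book has far lower dispersion
# and loses money far less often
print(statistics.mean(big), statistics.mean(small))
print(statistics.stdev(big), statistics.stdev(small))
big_loss = sum(x < 0 for x in big) / N_TRIALS
small_loss = sum(x < 0 for x in small) / N_TRIALS
print(big_loss, small_loss)
```

The variance shrinks roughly in proportion to the number of independent bets, which is exactly the “statistical information advantage” of many small wins.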

Content without method leads to fantasy; method without content to empty sophistry.

— Johann Wolfgang von Goethe (“Maxims and Reflections”, 1892)

“Perhaps the most important news of our day is that datasets — not algorithms — might be the key limiting factor to development of human-level artificial intelligence,” according to Alexander Wissner-Gross in a written response to the question posed by Edge: “What do you consider the most interesting recent scientific news?”

At the dawn of the field of artificial intelligence, two of its founders famously predicted that solving the problem of machine vision would take only a summer. We now know that they were off by half a century. Wissner-Gross began to ponder the question: “What took the AI revolution so long?” By reviewing the timing of the most publicized AI advances over the past 30 years, he found evidence for a provocative explanation: perhaps many major AI breakthroughs have actually been constrained by the availability of high-quality training datasets, and not by algorithmic advances. The timing of the key AI milestones bears this out:

The average elapsed time between key algorithm proposals and corresponding advances was about 18 years, whereas the average elapsed time between key dataset availabilities and corresponding advances was less than 3 years, or about 6 times faster.

If true, this hypothesis has foundational implications for future progress in AI. For example, prioritizing the cultivation of high-quality training datasets might allow an order-of-magnitude speedup in AI breakthroughs over purely algorithmic advances. After all, focusing on datasets rather than algorithms is a potentially simpler approach. “Although new algorithms receive much of the public credit for ending the last AI winter,” concluded Alexander Wissner-Gross, “the real news might be that prioritizing the cultivation of new datasets and research communities around them could be essential to extending the present AI summer.”

We wonder if algorithmic trading systems might similarly benefit from the cultivation of new datasets and research communities around them. What might that look like? How do we learn to work with imperfect data? What are the risks of trusting the data too much?

Datasets Over Algorithms

Trading Places
By Quant Quanto | Wed, 30 Mar 2016 19:58:00 +0000
http://www.spacemachine.net/views/2016/3/trading-places

David Swensen, chief investment officer at Yale University in charge of managing and investing its endowment assets, explained security selection as a tool of the investment professional: “One of the really important facts about security selection is that if you play for free, it’s a zero sum game. Because if you’re overweight on Ford and underweight on GM, there has to be some other investor, or group of investors, that are underweight on Ford or overweight on GM, because this is all relative to the market. And so, if you are overweight on Ford and underweight on GM, and somebody else is underweight on Ford and overweight on GM, at the end of the day the amount by which the winner wins equals the amount by which the loser loses. And so it’s a zero sum game. But of course, if you take into account the fact that it costs money to play the game, it turns into a negative sum game.”

Professor Robert Shiller explained Fisher’s Theory of Interest by way of an example (i.e., Crusoe A and Crusoe B on an island): “You can see that both A and B have achieved higher utility than they did when they didn’t trade. So this is the function of a lending market. A who wants to consume a lot this period, the production point is here, and B lends this amount of consumption to A, so that A can consume a lot, A can consume this much. B, since he’s lent it to A, consumes only this much this period. But you see they are both better off. They’ve both achieved a higher indifference, a higher utility.”
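Shiller’s chalkboard example can be made concrete with numbers. The sketch below assumes time-separable log utility over two periods (my assumption for illustration, not Shiller’s exact setup; the endowments and interest rate are invented): an impatient Crusoe A whose income arrives late borrows from a patient Crusoe B whose income arrives early, and both end up with higher utility than under no trade.

```python
import math

def utility(c1, c2, beta):
    # Time-separable log utility over two consumption periods
    return math.log(c1) + beta * math.log(c2)

R = 0.10  # market interest rate (assumed)

# Crusoe A: impatient (low beta), income arrives mostly in period 2
# Crusoe B: patient (high beta), income arrives mostly in period 1
A = {"beta": 0.5, "income": (40.0, 110.0)}
B = {"beta": 0.9, "income": (110.0, 40.0)}

def autarky_utility(agent):
    # Without a lending market, each Crusoe consumes his own endowment
    y1, y2 = agent["income"]
    return utility(y1, y2, agent["beta"])

def trade_utility(agent, r):
    # With a lending market, choose c1 to maximize utility subject to the
    # budget line c1 + c2/(1+r) = y1 + y2/(1+r).  For log utility the
    # optimum is c1 = W/(1+beta), where W is present-value wealth.
    y1, y2 = agent["income"]
    W = y1 + y2 / (1 + r)
    c1 = W / (1 + agent["beta"])
    c2 = (W - c1) * (1 + r)
    return utility(c1, c2, agent["beta"])

for name, agent in (("A", A), ("B", B)):
    print(name, round(autarky_utility(agent), 3), round(trade_utility(agent, R), 3))
```

Because the autarky bundle always lies on the traded budget line, the lending market can never make either Crusoe worse off, and here it strictly improves both, which is Shiller’s point.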

So the question here is: What really happens when you “trade” with another?

(a) Are you better off if and only if your counter-party is worse off (as in the stock trading example with Ford and GM)?

(b) Are you both better off (as in the island economy with Crusoe A and B)?

(c) None of the above (John Locke said “words” like “trading” get us all confused)?

Is “trading” in the “financial market” fundamentally different from trading in David Ricardo's “goods and services” market? Does Ricardo assume perfect information about the market, known to all participants? Does the concept of time even play a role in Ricardo’s model?

Is this how we trade fundamentally related yet relatively mispriced assets? All based upon differences in one’s preferences, circumstances, and predictions about the future?

Philip Maymin offered a parable that illuminates interesting aspects of financial trading. It goes as follows: A non-Jew once approached the two leading rabbis two thousand years ago. He asked the first to teach him the whole Torah while standing on one foot: in other words, quickly. The first rabbi chased him away with a stick.

The non-Jew asked the second rabbi, named Hillel. Hillel answered, and his response encoded what has come to be known as the golden rule: “What is hateful to you, do not do unto others. This is the whole Torah; the rest is commentary. Now go and study.”

According to Maymin, there are three important aspects here. First, the real Golden Rule of Hillel is not what you might usually think. He does not say to treat others as you would like them to treat you. Instead, he says to refrain from treating others as you would not like them to treat you. It is the difference between a command to do good and a command to abstain from evil. It is impossible to fulfill the duty to do good; one can always do more, and the goodness itself subjectively depends on others. But it is possible to fulfill the duty to abstain from evil: one can simply not hurt others, and the harm, if done, is more objectively noticeable.

Second, Hillel’s wisdom frames all ethical knowledge and teachings around this simple principle. In this way, when details begin to confuse, as they always tend to do, one can retreat to the big picture to see how it all fits in.

Third, Hillel points out that the Golden Rule is not the end of knowledge but rather the beginning. The important thing is not what you know, but what you have yet to find out.

If Hillel were a trader today, and a non-trader were to ask him to teach him all there is about financial hacking while standing on one foot, one would imagine Hillel might answer something like this: “Accumulate risks that are hateful to others; dispose of risks that are hateful to you. That is the whole of financial hacking; the rest is commentary. Now go and trade.”

The world is impossible to grasp in its entirety. The human mind can focus on only a small part of its vast confusion. Models project the detailed and complex world onto a smaller subspace where regularities appear, and then, in that smaller subspace, allow us to extrapolate and interpolate from the observed to the unknown. At some point, of course, the extrapolation will break down. But this strategy of reduction works very well in the physical sciences. Models in finance, by extension, use the same strategy in the hope that some of that magic will rub off.

The aim of finance, like that of physics, is to find not only the relationships between the abstractions themselves, e.g., markets, money, assets, securities, but also the relationships between the realities they represent. In both physics and finance the first major struggle is to gain some intuition about how to proceed; the second struggle is to transform that intuition into something more formulaic, a set of rules anyone can follow, rules that no longer require the original insight itself. One person’s breakthrough thus becomes everybody’s possession.

The Efficient Market Hypothesis imagines price movements to be a diffusion process, i.e., a random walk. One of its origins is the description of the drift of pollen particles through a liquid as they collide with its molecules. Einstein used the diffusion model to successfully predict how far, on average, the pollen particles move through the liquid as a function of temperature and time (a distance that grows as the square root of elapsed time), thus lending credence to the existence of hypothetical molecules and atoms too small to be seen.
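The square-root-of-time behavior is easy to check numerically. Here is a minimal sketch (a simple one-dimensional random walk with arbitrary step size; the walker counts are chosen only for speed): quadrupling the number of steps should roughly double the root-mean-square displacement.

```python
import random
import statistics

random.seed(42)

def rms_displacement(n_steps, n_walkers=5000, step=1.0):
    # Root-mean-square final displacement of simple 1-D random walkers
    finals = []
    for _ in range(n_walkers):
        x = 0.0
        for _ in range(n_steps):
            x += step if random.random() < 0.5 else -step
        finals.append(x * x)
    return statistics.mean(finals) ** 0.5

# Mean squared displacement grows linearly in time, so the RMS distance
# grows like sqrt(t): four times the steps, roughly twice the distance
r100, r400 = rms_displacement(100), rms_displacement(400)
print(round(r100, 2), round(r400, 2), round(r400 / r100, 2))
```

The same scaling law Einstein derived for pollen is what the random-walk model of prices inherits, for better or worse.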

For particles of pollen, the model is also a theory, and pretty close to a true one. For stock prices, however, it is only a model. It is how we choose to imagine the way changes in stock prices occur, not what actually happens. Models are simplifications, and simplification can be dangerous. It is naïve to imagine that the risk of every stock in the market can be condensed into just one quantity, its volatility σ. Risk has too many aspects to be accurately captured by that one number. In short, the Efficient Market Model’s price movements are too constrained and elegant to reflect the market accurately. After all, the movements of stock prices are more like the movements of humans than of molecules.

Model parameters that are implied from market prices are often easier to have an intuition about than are the market prices themselves, especially if the model is itself intuitive. For example, being told that an option has a particular price means nearly nothing, but being told that an option has a particular implied volatility gives a sense of meaning to it, something that can be pondered, something on which an opinion could be formed and a trade proposed. This is the power of model parameters. The idea is not that the model is correct, or that the assumptions can never be violated, but simply that the model is useful in explaining the risks. The parameters help the intuition.
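A minimal sketch of that inversion, assuming the Black-Scholes model (the quoted price and contract terms below are hypothetical): given a call price, back out the volatility the model implies via bisection, which works because the Black-Scholes call price is monotonically increasing in volatility.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Black-Scholes price of a European call option
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    # Bisection: the call price is monotonically increasing in sigma
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# A quoted price of 10.45 for an at-the-money one-year call (hypothetical)
iv = implied_vol(10.45, S=100, K=100, T=1.0, r=0.05)
print(round(iv, 4))
```

The quoted price of 10.45 means nearly nothing on its own; the implied volatility of roughly 20% is something a trader can ponder, compare, and form an opinion about.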

We need models to explain what we see and to predict what will occur. We use models for envisioning the future and influencing it. The world of people is unpredictable and begs for divination as well. At every moment we face choices with uncertain outcomes. Each decision, even one made on the spur of the moment, involves some imagined model for how the future may evolve and how our choices will affect it. We are always weighing the odds, estimating the relative importance of causality and chance. As time passes, possibilities narrow. Because our lifetime is finite, time, choice, risk, and reward are of the essence. Unless one can live in the perpetual present, one needs theories and models to exert some control. Theories and models are thus a kind of magic that bridges the visible and invisible worlds.

Models are analogies; they always describe one thing relative to something else. Models need a defense or an explanation. Theories, in contrast, are the real thing. They need confirmation rather than explanation. A theory describes an essence. The abstractions of mathematics are often more suitable than words for formulating theories. A successful theory can become a fact, i.e., by describing the object of its focus so accurately that the theory becomes virtually indistinguishable from the object itself. The creator of a theory is attempting to discover the invisible principles that hide behind the appearances. The role of theory is to make evident what is hidden. Unlike models, a theory doesn’t simplify. It observes the world and tries to describe the principles by which the world operates.

It takes hard work to master a model. But models splinter when you look at them closely. Theories are irreducible, the foundations on which new metaphors can be built. But a theory doesn’t have to be complete or unmodifiable. There are theories that are not exactly right, but they are not models. Theories are the thing itself; when you look closely, there isn’t anything more to see. The surface and the object, the outside and the inside, are one.

The similarity of physics and finance thus lies more in their syntax than their semantics. For example, financial modelers use a process similar to renormalization in physics to force their less than perfect, less than real models to fit the world they observe. They call this process calibration, the tuning of parameters in a model until it agrees with the observable prices of liquid securities whose values we know. But calibration in finance works much less well than renormalization in physics: in physics the normal and abnormal are governed by the same laws, whereas in markets the normal is normal only while people behave conventionally. In crises the behavior of people changes and normal models fail. While quantum electrodynamics is a genuine theory of all reality, financial models are only mediocre metaphors for a part of it. Financial models, because of their incompleteness, inevitably mask risk. When you use a model you are trying to shoehorn the real world into a container too small for it to fit perfectly.

In human affairs, history matters, and people are altered by every experience. But it’s not only the past that leaves its trace on humans. In physics, effects propagate only forward through time, and the future cannot affect the present. In the social sciences the imagined future can affect the present, and thereby the actual future, too. Despite this, the Efficient Market Model assumes that all uncertainties about the future are quantifiable. It claims that at any instant current prices reflect all current and past information, and that the best estimate of value is the current price. That’s why it is a model of a possible world rather than a theory about the one we live in.

In finance, a useful guiding principle is the Law of One Price: If you want to know the value of one financial security, your best bet is to use the known price of another security that’s as similar to it as possible. When we compare it with almost everything else in economics, the wonderful thing about this law of valuation by analogy is that it dispenses with utility functions, the undiscoverable hidden variables whose ghostly presence permeates economic theory. The Law of One Price, however, is not a consistent law of nature. It is a general reflection on the practices of fickle human beings, who, when they have enough time, resources, and information, would rather buy the cheaper of two similar securities and sell the richer, thereby bringing their prices into equilibrium. The law usually holds in the long run, in well-oiled markets with enough savvy participants. In crises, however, duress forces people to behave in what looks like irrational ways, and even in normal times there are persistent shorter- or longer-term exceptions to the law.

To use the Law of One Price that underpins financial modeling, one simply shows that a target security and its replicating portfolio have identical future payoffs under all circumstances. Most of the mathematical complexity of modeling in finance involves the description of the range of future behavior that composes all circumstances. One can easily invent more complicated models of risky stock prices that incorporate violent moves and ferocious outbursts of risk. But in using such models one gives up simplicity for a still imperfect but more complex model that doesn’t necessarily do better.
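A one-period binomial sketch makes the replication argument concrete (all the numbers here are invented for illustration): a call on a stock that can move to one of only two prices is replicated exactly by a mix of stock and cash, and the Law of One Price then pins the call’s value to the cost of that mix.

```python
def replicate_call(S0, up, down, r, K):
    # One-period binomial replication of a European call: find the
    # stock/cash mix whose payoff matches the call in both future states.
    Su, Sd = S0 * up, S0 * down
    Cu, Cd = max(Su - K, 0.0), max(Sd - K, 0.0)
    delta = (Cu - Cd) / (Su - Sd)        # shares of stock to hold
    cash = (Cd - delta * Sd) / (1 + r)   # cash position (negative = borrow)
    price = delta * S0 + cash            # Law of One Price: cost of replication
    return delta, cash, price

delta, cash, price = replicate_call(S0=100.0, up=1.2, down=0.9, r=0.05, K=100.0)
print(round(delta, 4), round(cash, 2), round(price, 2))
```

The portfolio pays exactly what the call pays in both the up and down states, so today’s call value must equal the replication cost; anything else invites arbitrage. Describing “all circumstances” with just two states is, of course, precisely the kind of simplification the surrounding text warns about.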

As with earthquakes, it may be wiser to ensure that one owns a portfolio that will not suffer too badly under disastrous scenarios than it is to try to estimate the probability of destruction. When models in physics fail, they fail precisely, and often expose a paradox that opens a door. When models in the social sciences fail, they fail bluntly, with no hint as to what went wrong and no clue as to what to do next. Financial models are always metaphors.

How finance is fundamentally different from physics.

Financial modeling is not the physics of markets. Physics models begin with the current state of the world and evolve it into the future. Financial models begin with current perceptions about the future and use them to move back into the present to estimate current values. And it is humans doing the perceiving. In other words, financial models don’t forecast; they simply transform one’s forecasts of the future into present value. The point of a model in finance is not the same as the point of a model in physics. In physics one wants to predict or control the future. In finance one wants to determine present value and goes about it by forming opinions about the future, about the interest rates or defaults or volatilities or housing prices that will come to pass. One uses a model to turn those opinions about the future into an estimate of the appropriate price to pay today for a security that will be exposed to that imagined future.

Overall, models are useful in finance and here are some of their major benefits: (i) models facilitate interpolation; (ii) models transform intuition into a dollar value; (iii) models are used to rank securities by value. However, to confuse a model with a theory is to believe that humans obey mathematical rules, and so to invite future disaster. Therefore, financial modelers must compromise. They must decide what small part of the financial world is of greatest current interest to them, describe its key features, and then mock up those features only. A successful financial model must have limited scope and must work with simple analogies. In the end you are trying to rank complex objects by projecting them onto a scale with only a few dimensions.

A good model can advance fashion by ten years.

— Yves Saint Laurent

References:

Derman, Emanuel (2011). Models. Behaving. Badly.: Why Confusing Illusion with Reality Can Lead to Disaster, on Wall Street and in Life. Free Press.

Theories and Models

A Short History of Work
By Quant Quanto | Fri, 25 Mar 2016 19:40:00 +0000
http://www.spacemachine.net/views/2016/3/history-of-work

What is the labor market for? The labor market is not just there to provide jobs, according to Richard Reeves, a researcher at Brookings, but actually performs three crucial roles: (i) matching labor to capital for production; (ii) providing a social anchor by packaging purposeful work into manageable pieces; and (iii) distributing the proceeds of wealth through wages.

Reeves observed that the labor market in most advanced economies has performed these functions well in the last seventy years or so, i.e., since World War II. Skills have been matched to capital, producing dramatic increases in economic output. Paid jobs have provided a social anchor for men and women, packaging purposeful work into manageable pieces. And until recently, wages have proved a successful mechanism for sharing the proceeds of wealth.

While the labor market continues to work pretty well as an economic institution in matching labor to capital for production, it is no longer working so well as a social institution for distribution. Structural changes in the economy, in particular skills-based technological change, mean that the wages for less-productive workers are dropping. At the same time, data from the Bureau of Labor Statistics indicates that the share of national income going to labor rather than capital is dropping.

Almost every social and economic policy debate is centered on improving the labor market.

Labor's share of U.S. national income has been dropping for 15 years. Why?

According to Roc Armenter, an economist at the Federal Reserve Bank of Philadelphia, there are three leading hypotheses purporting to explain the decline of the labor share in the U.S., but economists do not yet have a full grasp of the underlying determinants:

Capital Deepening: Technological innovations produce better and cheaper equipment that replaces workers and redistributes income from labor to capital. Capital should be viewed as at least a partial substitute for labor — more and more so as technology develops.

Income Inequality: Technological innovation is skill-biased, i.e., it augments productivity more for highly skilled workers than for low-skilled workers, causing low-skilled workers to become redundant and their wages to fall. Interestingly, this is somewhat offset by increasing wage inequality at the very top of the pay ladder.

Globalization: U.S. industries that are more labor intensive outsource their work to countries with cheap labor while industries that are more capital intensive remain in the U.S. The result is an increase in the capital share of income and a decrease in the labor share. However, the decline of labor share is a global phenomenon.

Sweat equity is entrepreneurial labor!

What could be missing from this picture? For one, the impact of technology startups, e.g., many of them going back to the Dotcom era of the late 1990s, has not been accounted for. Their many high-value exits, and the subsequent sales of stock by founders and early employees, occurred, coincidentally, after 2000. While the Bureau of Labor Statistics (BLS) has taken great pains to distinguish between wages and profits in proprietors’ income (i.e., the income of sole proprietorships and partnerships), scant attention has been paid to distinguishing between “sweat equity” and “preferred shares” in gains from participation in technology ventures such as Silicon Valley startups (other than, perhaps, the GAAP rule for corporate expensing of employee stock options starting in 2005).

Venkatesh Rao, who is advancing a radical-sounding hypothesis — entrepreneurs are the new labor — from the perspective of “balance of power” between the investors and technology entrepreneurs, observes that: “this restricted class of entrepreneurs is quite significant in terms of both numbers and economic impact, and is growing rapidly.” Given the increasing dominance of tech startups, e.g., the thundering herd of “Unicorns”, on the economic scene in recent years, it is quite plausible that the observed “drop” in the labor share as traditionally measured by BLS (i.e., under its seven “Income Components of Economic Output”) masks an increasingly larger slice of common stock sale by entrepreneurs from technology companies. After all, the vastly increased valuation of technology startups is derived from a combination of entrepreneurial labor (i.e., "sweat equity" or “common stock”) and venture capital (i.e., “preferred stock”), both of which are treated as capital share of income by the Bureau of Labor Statistics. In other words, the entrepreneurial labor share of income, whether realized through acqui-hire, M&A, or IPO, has effectively been camouflaged in the national economic statistics!

Could something like this also be happening to non-manufacturing service sectors such as R&D, graphic design, or data science? What about all the other sectors served by the “Ubers of X” of the world? What about all the “free work” going into game play? (Source: Roc Armenter).

An emerging trend in large-scale problem-solving known as “swarm work” is creating intense competition within a huge labor pool, and is pioneered by companies such as InnoCentive, 99Designs, and Kaggle. InnoCentive, for example, operates a contest format to crowd-source innovation solutions to important business, social, policy, scientific, and technical challenges. Founded in 2001 by Alpheus Bingham and Aaron Schach, InnoCentive has built up a network of 365,000 registered problem solvers from 200 countries who compete to provide ideas and solutions to various organizations such as Eli Lilly, AstraZeneca, Booz Allen Hamilton, Proctor & Gamble, NASA, Thomson Reuters, Department of Defense, and several government agencies in the U.S. and Europe. Since 2001, InnoCentive has posted over 2,000 challenges, reviewed over 59,000 solutions, and handed out more than 2,400 cash awards (ranging from $5,000 to $1 million) totaling $48 million.

From the standpoint of corporate and government clients, the InnoCentive approach is both cost-effective and efficient. A 2009 study commissioned by InnoCentive examined its economic impact on one client company, the Swiss agribusiness Syngenta, over a three-year period. In total, Syngenta ran 56 challenges at a cost of $10,000 each, and paid out about $1.9 million for successful solutions, for a total expense of about $2.5 million. Syngenta estimated that running 56 projects, at the rate of $120,000 per scientist per project, would have cost the company more than $6.7 million in salary alone. In general, the success rate of posted premium challenges is reportedly around 85%. From the worker’s point of view, however, introducing project-to-project competitions, while potentially rewarding, does not necessarily make for a stable income. It would seem that the InnoCentive model works best for those already in some kind of salaried position or with another source of income.
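The arithmetic behind those figures is simple enough to check, using the numbers reported above:

```python
# Syngenta's three-year InnoCentive experiment, figures as reported above
challenges = 56
posting_fee = 10_000       # cost per posted challenge
awards_paid = 1_900_000    # total paid out for successful solutions
in_house_rate = 120_000    # per scientist per project, Syngenta's estimate

innocentive_cost = challenges * posting_fee + awards_paid
in_house_cost = challenges * in_house_rate

print(f"${innocentive_cost:,} via InnoCentive vs ${in_house_cost:,} in salary alone")
```

That works out to $2.46 million via InnoCentive against $6.72 million in-house, matching the “about $2.5 million” and “more than $6.7 million” figures in the study.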

“We’ve got organizations that need to figure out how to make talent and work pools function globally,” said Dwayne Spradlin, who was InnoCentive’s CEO at the time of a 2010 Aspen Institute study called The Future of Work. “Organizations need to figure out a way to move from fixed procedures and infrastructure to variable ones in organizing and optimizing resources. And now we’ve got the millennial generation coming in, and if anything, they’re more project-based, not jobs-based, which means we need to think about how to orchestrate work talent in an environment of constant churn. There is a need for a whole new business science that can help organizations function more effectively in this ‘new normal,’ if you will.”

As it turns out, a big part of what Spradlin referred to five years ago as “a whole new business science” is, in its simplest form, a relatively recent phenomenon we now call the “Uberization of work”, which has resulted in the spread of a “Sharing Economy” that is reshaping the labor landscape. To wit, the rise of a slew of startups underscores this seismic shift in recent years across many service industries that employ human labor at the other end of the innovation spectrum: AirBnB (founded: August 2008), Uber (founded: March 2009), Postmates (founded: May 2011), Lyft (launched: summer 2012), Instacart (founded: June 2012), Handy (founded: June 2012), HomeJoy (founded: July 2012, RIP: July 2015), DoorDash (founded: February 2013), Washio (founded: March 2013), Shyp (founded: July 2013), etc. This sea change is simultaneously lifting productivity while creating downward pressure on wages across a wide range of service industry sectors, from travel to transportation, delivery, care-giving, and all different flavors of home services and errands.

The decoupling of the economic and social functions of the labor market — a result of the rise of the many "Ubers of X" — poses a stark policy challenge. Increasingly, the idea of a universal basic income is capturing the imagination and attention of policy intellectuals, across the globe and across the political spectrum. “We may find ourselves going into the future with fewer jobs for everybody,” said Michael Howard, coordinator of the U.S. Basic Income Guarantee Network. “So as a society, we need to think about partially decoupling income from employment.”

A Short History of Work (Source: The Rise of the Naked Economy).

The decoupling is complete — all too successfully — in the particular case of Ingress, a massively multi-player augmented reality game from Niantic Labs, which was recently spun off from Google. Not only is employment no longer needed for work to be performed; income has vanished, too. Think of it as “gamified work” on a global scale, where networks of humans act as sensors to the real world on behalf of machines in the cloud. Often described as “Foursquare meets capture the flag”, Ingress is played on Android phones, where work is cleverly disguised as play in a way that would have made a fence-painting Tom Sawyer proud.

The game is set in a universe in which so-called “Shapers” are changing the ways humans think. The Shapers do this via “exotic matter” that leaks through portals into this world. The goal of the game is to capture as many portals — usually landmarks and points of interest — as possible and to link them into fields that either enable or block this influence on the human mind. Two factions battle each other: the “Resistance”, who believe that humanity should not be controlled by an alien force; and the “Enlightened”, who believe that the exotic matter will help humanity transcend to a higher state. The game requires players to actually go outside, walk, cycle, or drive to a portal, and stand within 100 feet of it to interact with it. Players enjoy seeing their surroundings with new eyes and discovering new things in new places. The cooperative element of the game makes it social and often leads to meeting new acquaintances and friends.

So what is Niantic or Google getting out of this? They benefit from Ingress in a number of ways, chief among them map data and advertising. Consider map data: players must find the quickest way to each portal. Since most “agents” play the game on foot, this delivers valuable walking map data, for free. What’s more, all portals are landmarks and points of interest, so almost every landmark worldwide gets visited by agents. The alternative for Niantic or Google would have been to hire local teams of humans to walk the streets and collect data, much as Google does for Street View; on a global scale that would have been prohibitively expensive (though it would have nicely added to labor’s share of income). Alas, one does not get paid for simply playing games.

"Give 'em basic income and let 'em play games!": Could this be the end of work? Or is this the future of work? What's the difference anyway?

Would I ever leave this company? Look, I’m all about loyalty. In fact, I feel like part of what I’m being paid for here is my loyalty. But if there were somewhere else that valued loyalty more highly, I’m going wherever they value loyalty the most.

There are very few analytical questions about mass human behavior which admit of being decided on economical premises alone. And those few are not the ones we are dealing with at this conference.

— Robert M. Solow (“Lessons on the Income Maintenance Experiments”, 1986)

These experiments do not take place in a test-tube and they do not involve identical individuals. There is just a lot more going on than can possibly be controlled. And many of those things are not even economical at all.

— Robert M. Solow (“Lessons on the Income Maintenance Experiments”, 1986)

The very rigor [or the lack of rigor] of social experiments limits the policy relevance of the results.

The MAB Problem Formulation: Consider the problem of dynamic pricing with limited supply. A seller has k identical items for sale and faces n potential buyers ("agents") arriving sequentially. Each agent is interested in buying one item, and each agent's value for an item is an IID sample from some fixed distribution with support [0,1]. The seller offers a take-it-or-leave-it price to each arriving agent (possibly a different price for different agents) and aims to maximize his expected revenue. It has been recognized by Babaioff et al. that even in a setting with limited supply, the multi-armed bandit (MAB) approach can still be fruitfully applied.
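To make the formulation concrete, here is a minimal sketch of the setting as a bandit problem: each candidate price is an arm, the reward for offering price p to a buyer with value v is p if v ≥ p and 0 otherwise, and the seller stops once the k items are sold. The discretized price grid, the plain UCB1 rule, and the uniform value distribution in the usage example are illustrative assumptions of mine — Babaioff et al. analyze a more refined algorithm for the limited-supply case.

```python
import math
import random

def dynamic_pricing_ucb(n_buyers, k_items, prices, value_sampler, seed=0):
    """Posted-price selling with limited supply, treated as a bandit.

    Each arm is a candidate price from `prices`. Offering price p to a
    buyer with private value v yields revenue p if v >= p, else 0.
    Prices are chosen by the UCB1 rule; selling stops when the k items
    run out. Returns (total_revenue, items_sold).
    """
    rng = random.Random(seed)
    counts = [0] * len(prices)     # how often each price was offered
    revenue = [0.0] * len(prices)  # total revenue earned at each price
    sold, total = 0, 0.0

    for t in range(n_buyers):
        if sold >= k_items:
            break  # supply exhausted

        def ucb(i):
            if counts[i] == 0:
                return float('inf')  # try every price at least once
            mean = revenue[i] / counts[i]
            return mean + math.sqrt(2 * math.log(t + 1) / counts[i])

        i = max(range(len(prices)), key=ucb)   # most optimistic price
        v = value_sampler(rng)                 # buyer's IID value in [0, 1]
        r = prices[i] if v >= prices[i] else 0.0  # take-it-or-leave-it
        counts[i] += 1
        revenue[i] += r
        if r > 0:
            sold += 1
        total += r

    return total, sold
```

For example, with 2,000 uniform-[0,1] buyers, 200 items, and a coarse price grid, the learner concentrates its offers on the price with the best revenue-per-offer trade-off:

```python
total, sold = dynamic_pricing_ucb(
    n_buyers=2000, k_items=200,
    prices=[0.2, 0.5, 0.8],
    value_sampler=lambda rng: rng.random(),
)
```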

Soldiers or bandits? It may not be that easy to tell them apart when they're at rest...

The Manchu bannermen were a hereditary occupational caste, ranked above others in society, whose members were expected to devote themselves to the state. In China proper, bannermen did not cultivate the fields (as they had in Manchuria) but rather lived off stipends, paid partly in silver and partly in grain. The dynasty supported banner soldiers and their families from cradle to grave, with special allocations for travel, weddings, and funerals. The banner population grew faster than the need for soldiers; within a couple of generations, there were not enough positions in the banner armies for all adult males in the banners. Yet bannermen were not allowed to pursue occupations other than soldier or official. Consequently, many led lives of forced idleness, surviving on stipends.

Community Currency
Quant Quanto, Wed, 17 Feb 2016 05:31:54 +0000
http://www.spacemachine.net/views/2016/2/community-currency

Community currency allows localities and regions to create real wealth in their local economies by matching unmet needs (e.g., employing people and paying for local services like education, health care, fire and police protection, and road maintenance) with the under-utilized resources that could fill those gaps. The main barrier to matching unmet needs with under-utilized resources is often a lack of money.

Community currency also provides a way for the wealth that is produced locally to benefit local people, rather than being siphoned off to distant companies. This is because community currency circulates only locally and is not legal tender outside of its immediate community.

Double Speed: How fast did you say the Oars were going again?

The Toda Oar, for instance, is a community currency used in Toda City in Saitama Prefecture, near Tokyo. The Oar is issued and managed by the Community Currency Toda Oar Management Committee, which is staffed by volunteers. First issued in 2003, the Oar was intended to revitalize citizen activity and encourage mutual assistance. The unit of the currency is the Oar (equivalent to one yen), and both 10-Oar and 100-Oar bills are in circulation. As observed by Kurita et al., people’s favorable perception of their own community currency drives its circulation, and thus helps revitalize the local economy.

Establishing the circulation system for a new community currency so it does not "pool" in particular parts of the system. (Source: Community Currency Guide).

Complementary currencies in circulation all have curative power in their communities (Source: de la Rosa and Stodder).

Velocity of complementary currencies (circa 2012). In 2014, Bitcoin (not shown) is by far the speed king at 36! (Source: de la Rosa and Stodder).

Comparative velocity of fiat currencies in the world economy (circa 2004~2012). (Original Source: World Bank).

Do we have to behave in particular ways to justify compassion and support? Or is simply human dignity enough?

— Evelyn Forget

A "Millennial Hamster Tribe" member running in place trying to break free...

Solidarity among millennials facing an uncertain future: not everyone can be a card-carrying member of the "Millennial Hamster Tribe"!

A lot of our social services were based on the notion that there are a lot of 40 hour-per-week jobs out there, full-time jobs, and it was just a matter of connecting people to those jobs and everything will be fine. Of course, one of the things we know is that’s certainly not the case, particularly for young people who often find themselves working in precarious jobs, working in contracts for long periods of time without the benefits and long-term support that those of us who have been around longer take for granted.

— Evelyn Forget

Faith is the number one element. It isn’t something that spreads itself uniformly. Faith is concentrated in a few people at particular times and places. If you can involve young people in an atmosphere of hope and faith, then I think they’ll figure out how to get the answer. Faith and hope are absolutely central to everything one does.

— John Archibald Wheeler (1911-2008)

How does anyone recruit the ideal millennial candidates for a basic income study from among all Americans born between 1980 and 2005, i.e., one-third of the total U.S. population (circa 2013)?

A gentle helping hand that reaches across the generations. (Image Credit: IJCCR).