A Caffeinated Blog

Are elite athletes born that way or made that way through training? Or does that question present a false dichotomy? In The Sports Gene, David Epstein attempts to answer these questions in a very entertaining and readable way.

I thought one of the most interesting facts presented in this book is that there hasn’t been a white cornerback in the NFL (the position that demands the crème de la crème of speedy athletes) in over a decade. This is not to say that a white person cannot be speedy enough to be a corner in the NFL, because they certainly can. However, when it comes to natural sprinting prowess, the distribution of athletes with European heritage looks different than the distribution of athletes with West African heritage for reasons that can be attributed to genetics and evolution.

Epstein never explicitly puts it this way, but I hypothesize that the right tail of the distribution for those with West African heritage is fatter than the right tail for those with European heritage. For some reason, though, it has become controversial to even scientifically hypothesize about genetic differences in different ethnic groups because most people immediately assume (incorrectly I might add) that natural athletic prowess must come at the expense of natural intellectual abilities.
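A toy simulation can illustrate why a slightly fatter right tail matters so much at the extremes. All numbers below are invented purely for illustration (they don't represent any real population or real data); the point is only that two bell curves can be nearly indistinguishable in the middle while one group dominates far out in the tail.

```python
import random

random.seed(42)

# Two hypothetical populations with nearly identical "sprint ability"
# distributions. Group B's spread is just 5% larger, which fattens
# its right tail. These parameters are made up for illustration.
N = 500_000
group_a = [random.gauss(0.0, 1.00) for _ in range(N)]
group_b = [random.gauss(0.0, 1.05) for _ in range(N)]

# Count how many individuals in each group clear an extreme
# "elite sprinter" threshold, far out in the right tail.
threshold = 3.5
elite_a = sum(x > threshold for x in group_a)
elite_b = sum(x > threshold for x in group_b)

print(f"elite in A: {elite_a}, elite in B: {elite_b}")
```

Although the average member of each group is essentially identical, group B typically produces roughly twice as many individuals past the 3.5-sigma cutoff, which is the kind of lopsided effect at the elite extreme that the cornerback observation hints at.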

Another interesting fact I gleaned from this book is the following: “the CDC’s data suggest that of American men ages twenty to forty who stand seven feet tall, a startling 17 percent of them are in the NBA right now. Find six honest seven-footers, and one will be in the NBA.” However, it turns out that height alone isn’t necessarily the best predictor of success in basketball, although not being well over 6 ft. tall is a severe disadvantage that can only be overcome with something like a superhuman ability to jump (think Muggsy Bogues). The wingspan-to-height ratio is actually very important in basketball, and certain Moneyball-esque managers/coaches are already on to this idea.

Another interesting idea from this book — one that most athletes already understand anecdotally — is that there is no “one-size-fits-all” training plan for athletes. We all respond to different training stressors in different ways based on genetic factors.

No matter how you answered the two questions I posed at the outset, this book is an excellent read if you’re at all interested in athletics or genetics. There will surely be something in it that will both surprise and delight you. Read it and then get busy training!

As the author of this blog, my personal life will inevitably affect my writing. With that said, it’s probably a good time to announce that I’ll be the father to a daughter any day now!

I must admit that the whole idea of having a child, let alone raising one, is daunting. If most fathers are anything like me — which I suspect they are — they have that “oh shit” moment just seconds after they hear that their wife/significant other is pregnant. It’s during this time that most of us men realize that we don’t have the faintest clue about how to raise a child, let alone change a diaper. Then, the unnerving thought about how expensive fatherhood is sets in, both monetarily and in terms of having a “life” — I mean “time”.

Are kids really as expensive as we may make them out to be though? Well, that obviously depends, but in his book Selfish Reasons To Have More Kids, Bryan Caplan’s central argument is that kids cost the average person less than they think. In other words, thanks to the flawed advice of the Tiger Mothers out there, the perceived economic price of having great kids is higher than the actual economic price. Finally, a bit of good news to help assuage the financial anxiety I’ve been feeling!

Better yet, we also grossly underestimate the benefits that kids will bring to our lives. The idea that we are terrible at predicting what impact future events will have on our lives is not only currently fashionable in psychology, but it’s also true. This idea is especially relevant when we think of the impact that kids and grand-kids will have on our well-being and happiness. Kids may sound like a drag now (especially if, like me, you like to travel), but the joy most of them will bring to your life in later years is not only difficult to measure, but immeasurably valuable. I must concede, however, that at least a few parents out there may be overestimating the benefits of their kids (hopefully I’m not one of them), but I digress.

I decided to do some empirical investigation into this claim to see if Caplan was onto something. I asked several adults with grown children if they regretted not having more kids and the answer was unanimously “yes” (warning: I had a sample size of four and two of these individuals were my parents, who were obviously biased by the joy they experienced in raising a wonderful kid like yours truly).

Because we are prone to overestimate the costs (and underestimate the benefits), Caplan goes on to suggest that the average person ought to have more kids than they are currently planning on. Please note: this isn’t to say that everyone ought to have more kids (or even any kids at all), but rather that the average person ought to. If you can avoid looking through rose-colored glasses, you’ll probably realize that you’re more average than you think; therefore, you probably ought to have more kids.

The reason kids cost less than we think is that we place too much value on nurture and not enough on nature. In other words, most of what will determine how your kids turn out is determined by you and whom you choose to mate with. Caplan cites plenty of scientific evidence to support the claim that genetics matter more than nurture, so I won’t get into that here. Again, it’s important to reiterate that this is not to say that nurture doesn’t matter at all; it does. However, it matters less than the average parent thinks. According to Caplan, nurture works more in the short run, but nature has its way in the long run. For example, disciplining a child at a young age may make a certain behavior go away temporarily, but the child’s genes will eventually trump his parents’ attempts to instill discipline in him.

Many parents (and soon-to-be parents) find Caplan’s argument disturbing because it means that parents’ monumental efforts to improve their kids’ lives through a chaotic schedule are mostly for naught when it comes to how the child will turn out as an adult. I, however, think this is great news. It means that once you’ve married the right person, as I most certainly have, having kids will be less work than you think. The fact that screwing up your kids is hard is a liberating thought indeed!

Caplan also points out that many of the structured activities parents try to force their kids to do are for naught. In other words, if a kid doesn’t have an iota of natural athletic ability, there is no amount of practice (or expensive lessons) that will make them an elite athlete, or even a decent one. The same goes for things like musical ability. So if a kid doesn’t show the faintest interest in something then you and your kid will be better off by simply dropping that activity.

Although Caplan never mentions Nassim Taleb’s work on antifragility, I think the idea of antifragility meshes nicely with this kind of parenting style. Once you realize that your kids have antifragile properties (like all humans), the dangers of over-parenting become more apparent. You want to let your kids make a bunch of small, inconsequential mistakes instead of one big mistake that is of great consequence. Over-parenting removes some volatility from the child’s life, which magnifies the harm when it does occur.

In the end, I think Caplan makes a compelling case that children don’t really cost as much as most of us think. Considering that my first child is now on the way, I sure hope he’s right. Before I read Caplan’s book, I was certain that being a father was going to be at least two things: 1) an amazing experience and 2) really hard. It turns out it may be less expensive than I think too. Then again, perhaps I’m being naive.

Like many Americans, I recently suffered the great inconvenience of pulling weeds from my lawn due to social mores. Is the cost of pulling weeds really worth the aesthetic benefit? Perhaps, but this belief of mine could stem from something other than aesthetics. If so, why is it exactly that I care about whether or not my lawn is full of weeds?

Here’s an interesting fact: before the Civil War very few Americans had lawns. Today, in the continental United States alone, it is estimated that an area larger than the state of Iowa is covered in grass, the supposedly visually pleasing Poa pratensis (Kentucky bluegrass) kind. To the culturally literate American suburbanite, owning a home without having a well-kept lawn has become something of a taboo. Accordingly, many modern homeowners are burdened with spending their precious weekends mowing and maintaining these lawns in hopes of adhering to this cultural norm. Trying to figure out where exactly this norm came from is the type of thing that’s, well, damn interesting.

According to The Online Etymology Dictionary, the word “lawn” can be traced back to the 1540s. It’s important to note that the definition of the word “lawn” has changed over the centuries. In the sixteenth century “lawn” meant an open space or glade in the woods. In the seventeenth century it referred to land that wasn’t tilled, yet was still covered with grass. In the eighteenth century it had become a term for part of a garden or grounds that was covered with grass and kept neatly mowed, which is where the American notion of a front lawn originates. In her book, The Lawn: A History of an American Obsession, Virginia Scott Jenkins notes: “The notion of a front lawn began to take shape at the end of the eighteenth century, borrowed by a few wealthy Americans from French and English aristocratic landscape architecture.” [1]

In addition to being costly to maintain, a lawn required land, and not just anyone owned land back then. Given these facts, the origins of the lawn have more to do with signaling theory than with any aesthetic or practical reasons. Another way of putting this is to say that the lawn’s origins have more to do with sexual appeal than with any utilitarian appeal.

***

Signaling theory is a body of theoretical work that examines how individuals communicate information about themselves to each other. For humans, it has useful applications in the behavioral sciences, particularly in evolutionary psychology. The fundamental problem in evolutionary signaling games, however, is that dishonesty, or cheating, is incentivized.

Courtship and mating are perfect examples of evolutionary signaling games. Due to sexual selection, a lesser-known Darwinian idea, the peacock’s tail exists not for any utilitarian purpose but because of female sexual choice. If evolution were driven purely by natural selection, then the peacock’s tail would be a mystery, because it’s a hindrance that does not produce any survival advantage.

The peacock’s tail is by far the most hackneyed example in the field of evolutionary psychology, but it still makes for an interesting case study of sexual selection, signaling, and lawns. The creations of sexual selection, however, are sometimes disastrous for the species as a whole. For example, it has been suggested that the giant antlers of the Irish Elk (Megaloceros giganteus) were a sexually selected hindrance that ultimately caused the species to become extinct in Pleistocene Europe. It turns out that what makes one sexually attractive to one’s own species can make one an attractive meal to another species.

In the evolutionary game that is mating, males want to signal that they possess the traits females find attractive. However, it’s important to note that there are two types of signals: honest signals and dishonest signals. An honest signal must, by definition, be costly and thus difficult to imitate. If peacocks with bigger tails are preferred by peahens, then one should expect that every peacock would want to display large and lavish tail feathers. What makes the peacock’s large and lavish tail feathers a valuable and honest signal is that not every peacock can have them and that they are difficult to imitate.
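A back-of-the-envelope payoff sketch can make the logic of costly signaling concrete. All of the payoff numbers below are invented for illustration; the point is simply that a signal stays honest when its cost is bearable only for those who genuinely possess the advertised quality.

```python
# Toy handicap-principle sketch (all payoffs made up for illustration):
# a costly signal stays honest when only genuinely fit signalers can
# afford to produce it.
BENEFIT = 10  # benefit of being believed high-quality (e.g., mating success)

# Producing a lavish signal is far cheaper for a truly fit signaler
# than for a faker trying to imitate it.
cost = {"high_quality": 4, "low_quality": 12}

def net_payoff(quality):
    """Payoff from signaling, versus staying silent (payoff 0)."""
    return BENEFIT - cost[quality]

print(net_payoff("high_quality"))  # positive: signaling pays
print(net_payoff("low_quality"))   # negative: faking doesn't pay
```

Because the faker’s cost exceeds the benefit, low-quality individuals are better off not signaling at all, so the signal remains informative, which is exactly the property that makes the peacock’s tail (and, as argued below, the lawn) an honest signal.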

***

For human males, wealth and social status are important indicators of genetic fitness, hence it should be expected that males try to signal these traits. In this sense, then, lawns are analogous to the peacock’s tail. Since owning a lawn required land, and was costly in terms of maintenance, it served as a signal of being wealthy, and it was an honest signal at that, since it was difficult for peasants to fake. Parading into town and verbally announcing that one is wealthy would have also been a signal that some males may have used, though it reeks of cheapness. That just any old chump could do this speaks to its unreliability as a signal.

That lawns were costly is precisely what made them an honest signal. The well-known multitude of costs that come with lawn ownership explains where their sexual value comes from. Lawns came into existence not for any utilitarian purpose, but simply to signal waste. As many homeowners are well aware, lawns are similar to the Irish Elk’s antlers because they are a sexually selected hindrance to having fun on the weekend.

Lawns, then, are the perfect example of what the economist and sociologist Thorstein Veblen called conspicuous consumption, which is a type of consumption used strictly for signaling rather than for intrinsically utilitarian purposes. [2] That only a select class of people, who were part of the landed gentry, would be able to waste resources on something with no utilitarian purpose, like a lawn, is precisely what made lawns a reliable signal and precisely what brought them into existence.

The important lesson here is this: if weeds ever become difficult to cultivate, then the lawns of the future will likely be filled with weeds instead of the green velvety carpet. Instead of spending a weekend picking out the weeds, I may just find myself picking out the grass.

Many great thinkers, such as the recently deceased historian Jacques Barzun, have argued that the sun is setting on the West, meaning that the great American empire is in decline. Whether or not this is true, of course, is debatable. If you spend enough time in any major American city though, I suspect you’ll notice the cultural decay, or at least that people are so distracted by their phones that they have a hard time noticing the people around them. Perhaps I’m simply becoming crotchety as I age and this is just another personal rendition of the golden age fallacy. [1] Then again, perhaps not.

The late cultural critic Neil Postman was one of these thinkers who saw the cultural decay in America. In Amusing Ourselves to Death, Postman indicts the television as the root cause of this decay. This prescient book was published in 1985, long before the advent of tablets and smartphones, but many of its arguments are still applicable to modern times. Essentially, Postman argues that television is a medium that transforms our relationship with each other, and with our ability to obtain knowledge about the world, in a negative way. His belief is that writing is a medium suitable for rational discourse, whereas television is a medium suitable almost purely for entertainment. Given the current state of global media it’s hard to disagree, but fortunately for the human race the issue is not quite as black and white as Postman makes it sound.

***

Near the beginning of the book, Postman reminds us that two prominent writers of the 20th century, George Orwell and Aldous Huxley, prophesied about the state of human affairs in the distant future. Postman also points out that despite often being associated with envisioning the same thing, the two thinkers’ visions couldn’t have been more different. He eloquently captures the difference between their visions with the following passage, which is worth quoting at length:

What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one. Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism. Orwell feared that the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance. Orwell feared we would become a captive culture. Huxley feared we would become a trivial culture, preoccupied with some equivalent of the feelies, the orgy porgy, and the centrifugal bumblepuppy. As Huxley remarked in Brave New World Revisited, the civil libertarians and rationalists who are ever on the alert to oppose tyranny ‘failed to take into account man’s almost infinite appetite for distractions.’ In 1984, Huxley added, people are controlled by inflicting pain. In Brave New World, they are controlled by inflicting pleasure. In short, Orwell feared that what we hate will ruin us. Huxley feared that what we love will ruin us.

Before you accuse Postman of snobbery or elitism, remember that he is not arguing that television is inherently bad, but rather that it’s inherently bad for public discourse. The mind-numbing, yet often highly entertaining, shows on television are not a threat to our political process, but news programs like CNN and Fox News are. The reason Postman believes this is that news programming is about creating entertainment and making money, not about exploring the truth. Talking heads, ostensibly discussing some serious matter, will be interrupted with advertisements on virtually any news channel you can find. Is no issue important enough to be exempt from profiting off of it?

If you don’t believe news is about entertainment, and not truth, then consider the following anecdote Postman notes in the book involving a news anchor in Kansas. This particular news anchor was fired “because research indicated that her appearance ‘hampered viewer acceptance'”. I bet this happens more than most of us realize. Come to think of it, can you name a single horribly unattractive news anchor? I can’t, and the reason is that politics has become like show business, and “If politics is like show business,” writes Postman, “then the idea is not to pursue excellence, clarity or honesty but to appear as if you are, which is another matter altogether.”

While I’m largely sympathetic to Postman’s critique of television as a poor medium for public discourse, I don’t believe it applies to computers and the Internet as well as it applies to television. Postman wrote this book before the explosion of the Internet, which is too bad, because I’m sure he’d have plenty of interesting things to say about the Internet as well. If he were still alive perhaps he’d view, as I do, the rise of the Internet as a positive thing — perhaps his outlook on the future of human intelligence and discourse might not have been so grim.

Many people from a range of scientific disciplines have put forth an answer to the following grand question: what made us human? Surprisingly, many of these brilliant scientific minds overlooked a simple thing that most of us do every day, i.e., cook (or at least eat cooked food). In his book, Catching Fire, the biological anthropologist Richard Wrangham (of Harvard fame) takes a stab at answering the grand question by putting forward what he calls “the cooking hypothesis”. And his argument that cooking is what made us human is quite compelling.

Wrangham marshals scientific evidence that our ancestors learned to cook roughly 1.8 million years ago. But why are we the only apes that eat cooked food today?

Like many others, I think the answer to living better today can be found in our evolutionary past, but thanks to myriad cognitive biases in conjunction with the narrative fallacy, we have a tendency to grossly deceive ourselves. For example, both the raw meat eating Paleo dude and the new-age raw vegan hipster are making the same mistake. Both seem to embrace a teleological argument that concludes that humans weren’t meant to evolve beyond how they were, say, 10,000 years ago. Accordingly, they think they are improving their health through avoiding something that may turn out to be fundamentally human and essential to our well-being (i.e., cooking). Through their ability to craft a believable paleolithic narrative they have convinced themselves that it’s the other people who don’t get it.

It’s undeniably true that humans need food in order to live, but what do we really know about how we are supposed to eat? Or what we ate in the past? What role (if any) did cooked food play in our evolution? These types of questions are inherently difficult to answer with fact due to the scant record of evidence and the diversity that ranges across the human species. The problem with making grand theories about the past is that they often rest on too few facts. Evolution is a fact, but why and how it occurs still remains in the territory of theory. The fact of the matter is that we don’t know what made us human, but Wrangham’s cooking hypothesis (flawed as it may be) is an important narrative to consider.

So what do we know about cooked food? Most importantly, we know that cooking increases the amount of energy our bodies obtain from food. Surprisingly few Americans have heard of this experiment, but the BBC once persuaded a dozen people with high blood pressure to go on an “Evo Diet” at the Paignton zoo (this diet consisted of consuming raw food just like our ape cousins). The results were surprising. Even though the participants consumed massive amounts of raw food, it did wonders for improving their blood pressure and waistlines. It’s important to note that this was a short-term experiment and in all likelihood these participants would have eventually crossed into harmful territory without cooked food. Nonetheless, there is an important lesson to take away. Caloric deprivation, whether from pure fasting or simply eating very little, seems to provide health benefits.

Wrangham points out that cooking food fundamentally alters its caloric value and the nutrients we can absorb from it. This likely explains why primitive hunter-gatherers simply can’t survive on a purely raw-food diet: the human digestive system cannot extract enough calories and nutrients from raw food alone. Wrangham’s idea is that the extra energy that comes from cooked food gave the first cooks biological advantages, and ultimately reproductive advantages, that can be explained by the theory of natural selection.

For as much as the cooking hypothesis makes sense, I think it needs to be considered in conjunction with many other hypotheses as well (e.g., the Man-the-Hunter hypothesis). What made us human is most likely a complex intersection of many things. My own speculations are that both cooking and meat-eating were an integral part of the human enterprise. Wrangham, however, seems to naively downplay the importance of meat eating. “Meat eating”, he writes, “has had less impact on our bodies than cooked food”. He goes on to say that “Even vegetarians thrive on cooked diets. We are cooks more than carnivores.” However, it’s far too easy to get the causation backwards in these just-so narratives. In other words, are we cooks because we’re carnivores or are we carnivores because we’re cooks?

The very thing that Wrangham suggests made us human may also be the very thing that makes me a sucker for this narrative. Nonetheless, speculative as it is, Wrangham’s theory is fascinating food for thought.

Terms like “the paleo diet”, “primal lifestyle”, “evolutionary fitness”, and “ancestral health” all operate under the shared premise that our evolutionary past is the key to understanding how to be healthy today. While I certainly don’t agree with any movement that has one-size-fits-all recommendations, I do agree that this premise is an important place to start from. However, it can also be a dangerous place to start from, because naive reasoning without evidence can easily lead us astray (we all fall victim to this to some degree). Nonetheless, understanding how and why we evolved the way we did is one key to understanding how we can be healthier and live better today.

I recently finished reading The World Until Yesterday (a book I enjoyed more than I thought I would), which focuses on what we can learn from traditional societies in terms of our health, social relations, conflicts, child-rearing, and treatment of the elderly. As Diamond puts it in the book: “All human societies have been traditional for far longer than any society has been modern.” Even though I don’t agree with everything he wrote, I think Diamond makes a compelling case that we can learn from studying our hunter-gatherer ancestors (even though he doesn’t really advocate a paleo diet per se).

This seems like a reasonable starting point and many evolution enthusiasts share this belief. If one were simply to consider the fact that most of our species’ existence has been spent as hunter-gatherers it would seem that we have much to learn from our ancestors in terms of how to live well, both biologically and culturally. But is this so, or is this a Paleofantasy? [1]

The first thing to consider is that adaptations (both biological and cultural) don’t necessarily scale linearly. Just like technological growth, evolution can stagnate for long periods of time, and then quickly accelerate due to selection pressures and for other reasons that are opaque to us. For example, in just the last 10,000 years traits like lactose tolerance and blue eyes have appeared in the human race. It also turns out that many people have developed an ability to eat and properly digest grains (a big no-no for paleo diet advocates). [2] This evidence, however, doesn’t stop some paleo zealots from proclaiming that evolution cannot occur in such a short period of time.

The paleo diet theory, then, seems to make sense until one examines the evidence. If human evolution is accelerating, as the evidence shows that it is, then it doesn’t matter how long we ate a “paleo diet” for — what matters is what we have now adapted to eat. Then again, let’s not rush to dismiss the paleo diet (and its many variations) simply because some populations have adapted to tolerate grains. Maybe the paleo folks have a point, i.e., just because something is edible it doesn’t necessarily follow that it’s also an optimal (or even good) source of nutrition.

The real issue is that paleo logic can fail us in how we think about both health and culture (largely due to the narrative fallacy). This makes any sort of dietary advice very difficult to give, because certain populations have adapted to eat certain things that others haven’t. The implication is, of course, that there is no such thing as a one-size-fits-all optimal diet for everyone.

Paleo logic is a good starting point, but when something sounds nice in theory and doesn’t hold up empirically, I’d suggest ditching the bogus theory.

The deepest relationships you can have in life are with people (particularly the people that brought you into existence) and the things that sustain you, like food. Modernity has fundamentally altered our relationships with both people and food, sometimes for the better and sometimes for the worse. One thing that has changed in regard to our relationship with food is that we have become incredibly disconnected from the process that brings the food to our plate. This phenomenon seems to be creating a growing malaise among many of today’s young people, including the difficult to define hipster. [1] In other words, we have largely forgotten that we are, at our core, hunters (not desk jockeys).

Anyway, I recently finished reading Meat Eater, a reflection on hunting chock-full of personal anecdotes detailing Steven Rinella’s hunting and fishing adventures. It’s also somewhat of a philosophical inquiry and spiritual memoir. Or as Rinella himself puts it, “this book uses the ancient art of the hunting story to answer the questions of why I hunt, who I am as a hunter, and what hunting means to me.” It turns out that Rinella believes that spiritually connecting with the food that sustains us is part of what makes us human, and with this, I wholly agree.

According to the British primatologist Richard Wrangham, cooking is what made us human (a review of his book Catching Fire: How Cooking Made Us Human is forthcoming). I agree with Wrangham, but after reading Steven Rinella’s Meat Eater, I’d also argue that hunting is what allowed us to cook.

Some may say we were “born to run” (which may have some truth depending on how we define “run”), but I think it’s more accurate to say that we were born to hunt. [1] In light of our evolutionary past, it only makes sense that the desire to grow your own food and hunt or fish your own meat is as visceral a desire as there can be.

I have a confession to make though: the economist in me loathes the locavore movement and the idea of hunting/gathering all of your own food. However, the philosopher in me understands that there are quality differences, and ethical and aesthetic preferences, that aren’t always captured in naive economic and financial analysis. [2] In other words, not everything economists call a commodity is actually a commodity. The meat from a grain-fed, industrially raised cow, for example, may be financially cheaper than wild elk meat (especially if you hunt the elk yourself), but the quality of the nourishment provided by the wild elk meat, even if only spiritual, is different. This is something that naive rationalists cannot seem to understand.

Anyway, each chapter of the book depicts a different time in Rinella’s life. In the first chapter, he describes his introduction to hunting at the age of ten on the opening day of Michigan’s squirrel hunting season. The remaining chapters cover everything from fishing in the Yucatan to hunting in Alaska. While there are some practical tips included at the end of each chapter, this book is certainly not to be confused with a “how-to” guide.

I’ve had the itch to get into hunting/fishing my own meat for quite some time now. As I pursue that endeavor, it was nice to read Rinella’s account of why he hunts. Before you decide to take on any new hobbies, you probably ought to figure out why you want to do it. I’ve figured out why I want to hunt and fish, and I hope others do too.

Not too long ago I wrote a short post about Antifragile. At the time I wrote that post, I was working on this review, with plans to submit it to a few publications. Alas, I came to find out that several other reviewers beat me to the punch. The beauty (and perhaps tragedy too) of having a blog is that no piece of writing goes to waste — there is always a place to publish your writing. Sorry about that!

***

In The Bed of Procrustes, Nassim Taleb’s book of aphorisms, he wrote: “An idea starts to be interesting when you get scared of taking it to its logical conclusion.”

Here’s a thought experiment: suppose you were sending a package with delicate items to a friend – what would you write on the package? Since you don’t want the items to break, you’d likely write “fragile” on the package. Now, suppose for some strange reason you were sending some rocks (say, a pet rock) in a package to a friend – what would you write on the package then? Obviously you wouldn’t need to write anything on it, because the rocks are robust and wouldn’t be harmed even if they were grossly mishandled.

Those may sound like banal questions, with trivial answers, but here’s a more interesting question: what would you write on a package that you wanted to be purposely mishandled? Or, what should we call something that is the exact opposite of fragile, i.e., something that is not only robust, but actually gets stronger from being mishandled? Until fairly recently there was no word for this concept, but the answer is that you’d write the neologism “antifragile”, coined by Taleb in his provocative book Antifragile: Things That Gain From Disorder.

One of the most interesting things about being human is that we can understand things we can’t necessarily articulate. Often the reason we can’t articulate an idea is that we lack the linguistic tool to talk about it, but lacking the tool to talk about an idea doesn’t necessarily preclude us from understanding it. Taleb is not the first human to understand antifragility, but he is the first person to give us the linguistic tool to talk about it.

Taleb cites an interesting anecdote from the linguist Guy Deutscher near the beginning of the book. What Taleb learned from Deutscher was that the ancient Greeks didn’t have a word for the color blue, a fact first noticed only about 140 years ago. Strangely though, they still saw the color blue when they looked to the sky. They weren’t, then, physically color blind, but rather culturally color blind. Similarly, even though we have long understood some of the ramifications of antifragility when we’ve seen it, we moderns have been culturally blind to it.

Evolution is the perfect example of a system with antifragile properties. Biological evolution, in particular, thrives off of randomness, stressors, and volatility, which ultimately cause adaptations to occur in individual organisms, and evolution to occur in the species. For example, consider the discovery of penicillin. It was an evolutionary (and harmful) shock to bacteria, but new strains of bacteria have adapted to the point where they are immune to its devastating consequences. By trying to harm the bacteria, we have actually made them stronger, and this is precisely because they are antifragile. Although Taleb never explicitly puts it this way, I think for things biological, antifragility is a necessary trait.

One of the more practical things Taleb points out in the book is that the human body is also antifragile. Intuitively, and more importantly empirically, weightlifters have long understood this. They purposely break down their muscles through the stress of lifting heavy weights, and their muscles rebuild themselves stronger than before in adapting to the stress. Antifragile systems (like humans) benefit from the recently revived concept of hormesis, which is the idea that small doses of harmful substances (like alcohol) are actually good for you because they make you stronger. [2] Or, as the toxicologists would say, “The dose makes the poison.” Taleb, however, is scornful of those who use his ideas to justify the pseudo-science of homeopathy.

Another ancient idea that is discussed in the book is iatrogenics, which means harm caused by the healer. Taleb suggests that Mother Nature and our bodies have a natural way of healing themselves and we often cause more harm through trying to naively “fix” things with medicines and surgical procedures (although this is not to say that all medicines and surgical procedures are bad). The old practice of bloodletting is the perfect example. Taleb reminds us that iatrogenic effects occur in fields outside of medicine too, like our economies and educational institutions.

As astute readers of his prior works know, Taleb is not particularly fond of most bankers, academics, economists, and journalists for a plethora of reasons, namely because they suffer from a form of epistemic arrogance and naive rationalism that harms not only themselves, but others as well. In Antifragile, Taleb has coined the term “fragilista” to refer to these individuals. Fragilistas, according to Taleb, naively try to suck the antifragility out of systems that depend on it for their wellbeing, and they occasionally suffer from severe ethical lapses too.

Neurotic soccer moms are one such type of fragilista, and they join the company of fragilista economists and journalists like Alan Greenspan, Paul Krugman, Joseph Stiglitz, and Thomas Friedman. Fragilista Robert Rubin, the former high-ranking Citigroup official, had the honor of having “The Bob Rubin Problem” named after him because of an ethical lapse in which he finagled his way into getting financial upside without any possible downside, at the taxpayers’ expense (a violation of Hammurabi’s Code).

For readers unfamiliar with Hammurabi’s Code, Taleb described it as follows in a 2011 New York Times editorial: “If a builder builds a house for a man and does not make its construction firm, and the house which he has built collapses and causes the death of the owner of the house, that builder shall be put to death.” The central idea behind the code, then, is that no one knows more about the house than the builder, and he should not gain the upside without being exposed to the downside. The ancient Babylonians clearly recognized that the code would remedy situations with an upside-without-downside asymmetry, like there is with modern bankers’ bonuses.

***

For as much as I love this book (and Taleb’s other work), there is one thing that I must take issue with. It seems to me that being a humanist and being a proponent of antifragility are incompatible positions. Taleb, however, claims that he is both of these things. The reason I see this as a contradiction is that human biological evolution cannot progress without stress and selection pressures (of all kinds) on individual humans. Thus, our attempts at saving weak individuals and trying to eliminate individual suffering may come at the expense of fragilizing the human species as a whole. Humanists, in this sense, are fragilistas.

As a humanist, one should innately value all human life and want to limit human suffering to any extent possible (a position I’m in favor of). However, should this be done at the expense of fragilizing the species? Not all fragilistas have ill intentions, and good-hearted efforts to improve the human condition often paradoxically make things worse. As the old cliché goes, “the road to hell is paved with good intentions.”

In his philosophical novel Thus Spoke Zarathustra, Nietzsche introduced the ideas of both the ubermensch (translated as the overman or superman) and the last man. One interpretation of Nietzsche’s work could be that the last man is the evolutionary byproduct of a society run by fragilistas and that the ubermensch is the human ideal for proponents of antifragility (i.e., all that humans can become).

As a proponent of antifragility myself, this interpretation makes me uncomfortable, but I cannot deny the fact that there is definitely a Nietzschean ring to the idea of antifragility. If Nietzsche’s ubermensch can be understood to be the byproduct of humanity’s antifragility, then Zarathustra was right: humanity can overcome itself if only it avoids fragilizing itself.

Perhaps one of the things that defines us as humans, then, is that we morally, ethically, and systematically try to remove antifragility from the very antifragile system that created us (i.e., Mother Nature). After reading this book, I can’t help but think that Nietzsche understood something startlingly haunting about the human condition: whatever supersedes humanity will embrace its antifragility.

Notes:

[1] You can find my review of Fooled By Randomness here. You can find my review of The Black Swan here. You can find my review of The Bed of Procrustes here.

Rolf Dobelli is an author, novelist (unfortunately it’s difficult to find non-German translations of his work), entrepreneur, and the founder of Zurich Minds. [1] In 2010, he wrote a very provocative article titled “Avoid News”. I have now read the article several times (slowly), and it inspired this essay in response. You can read my response even if you haven’t read Dobelli’s article, but I highly recommend reading the article first.

***

In The Bed of Procrustes, the aphoristically elegant Nassim Taleb wrote: “To bankrupt a fool, give him information.” Rolf Dobelli shares this sentiment and claims that “news is to the mind what sugar is to the body”. In other words, it’s toxic!

Depending on how one defines “news”, I absolutely agree with Dobelli. However, I can’t help but notice that Dobelli never explicitly defined what “news” is in his piece — he certainly implicitly defined it — but there is ambiguity in this implied definition that needs to be resolved. Nonetheless, I get his point and I don’t want to quibble over semantics.

Dobelli claims that “Most people believe that having more information helps them make better decisions.” It’s important to point out not only that the people who hold this belief are wrong, but why: more information is sometimes better, not always. Sometimes more information makes things worse.

In this vein, there is a particularly important idea that can be generalized. Most people tend to believe that being proactive is better than doing nothing, but they miss the point that action often, though certainly not always, makes things worse. Much to your employer’s dismay, you may actually be a more valuable employee if you read essays on the Web all day than if you try to “make things better” at work. This is because doing “nothing” generally does not cause harm. Despite the fact that I know this is true, I cannot in good faith recommend that you tell this to your boss, because I don’t tell it to mine (skin in the game).

When it comes to reading, I’ve come up with my own heuristic (i.e., a rule of thumb) to deal with the “news” problem Dobelli describes: when choosing what to read, make sure that it *is not* harming your intellectual health instead of trying to improve your intellectual health. I find that most people follow the opposite strategy and thus I agree with Dobelli about the following: “Reading news to understand the world is worse than not reading anything.”

Is all “news” really as toxic as Dobelli suggests though? The answer is certainly no. Suppose, for instance, that you lived in Colorado Springs this past summer when the fires occurred there and that you followed Dobelli’s advice to a tee. Also, suppose that all your neighbors did the same. How would you have ever known when to evacuate your house? Most “news” is probably worthless, but some “news” can be very valuable, as is evidenced here. It’s absolutely true, then, that not all “news” is of equal value.

I’m sure Facebook updates would fall into the category of “news” in Dobelli’s lexicon, but I cannot agree with him that all Facebook posts are worthless either. Sure, some types of Facebook usage are harmful, but certainly not all. I’ve developed a connection with many people who share important, interesting, and insightful things on Facebook — things that create value in my life. However, too much of a good thing eventually becomes a bad thing, and the onus is on the individual to find that balance. [2]

Since I’ve come to the conclusion that not all “news” is inherently valuable or inherently worthless, I’m left to further conclude that what matters in today’s world is how you filter information. However, filtering itself leads to a whole host of problems (e.g., confirmation bias) that could (and probably should) be addressed in a separate essay.

Despite the fact that I agree with Dobelli, I do believe that his position is a bit extreme (after all, he uses a bold title that attracts attention much like many news articles do). Like most things in life, not all “news” is bad. Sure, most “news” is useless, irrelevant, or flat out wrong, but there are instances where “news” is valuable (the Colorado Springs fire example above).

Even though I know intellectually that not all “news” is bad, I sympathize with the gist of Dobelli’s argument because I also know that it’s easier to fast than diet. Since all value is subjective, it’s important to remember that some people know things that aren’t worth knowing. One of the beauties of life is that you get to decide what is.

Ever since I first read The Black Swan I’ve tried to apply the barbell strategy (i.e., a form of hedging using asymmetric diversification) to as many areas of life as possible. In short, the barbell strategy is a strategy in which you insulate yourself from negative Black Swans (extremely low probability events with devastating consequences), all the while still taking a small gamble to reap the benefits of positive Black Swans (extremely low probability events with an exponentially wild payoff).

Let me illustrate this concept as it would pertain to investing. A barbell strategy in investing would mean a portfolio made up of mostly (say, 90%) hyper-conservative investments and a small portion (say, 10%) of hyper-risky investments, with absolutely no midlevel risk. Not only does the barbell strategy apply to investing, it is equally applicable to diet and fitness. [1] Allow me to share an anecdote about my own experiences applying the barbell strategy with diet and fitness.
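To make the 90/10 split concrete, here’s a toy sketch of my own — the numbers are entirely hypothetical and not from Taleb — showing the barbell’s asymmetry: the downside is capped no matter what happens to the risky sliver, while the upside is open-ended.

```python
# Toy barbell portfolio (hypothetical numbers, my own illustration).
# 90% sits in a near-riskless asset earning a small return; 10% goes
# into a hyper-risky bet whose payoff multiplier can be anything from
# 0 (total loss) to very large (a positive Black Swan).

def barbell_value(total, safe_frac=0.90, safe_return=0.02,
                  risky_multiplier=0.0):
    """Portfolio value after one period under the barbell allocation."""
    safe = total * safe_frac * (1 + safe_return)
    risky = total * (1 - safe_frac) * risky_multiplier
    return safe + risky

# Worst case: the risky 10% goes to zero -- the loss is capped at 10%.
worst = barbell_value(100_000, risky_multiplier=0.0)   # 91,800

# Positive Black Swan: the risky 10% pays off 20x.
best = barbell_value(100_000, risky_multiplier=20.0)   # 291,800
```

Whatever the risky bet does, the floor is known in advance; that bounded-downside/unbounded-upside shape is the whole point of the strategy.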

+++

Prior to reading The Black Swan I was into triathlon. As a former competitive athlete in college, the sport helped fill a void in my life. However, I also naively thought that all of the chronic training was not only making me more aerobically fit, but healthier too. What a foolish belief that was! [2]

Rather abruptly after reading The Black Swan, I had an epiphany of sorts. I realized how chronically tired I had become and I decided to quit “training”. While formally training, I ate quite a bit and frequently (usually six times a day), all the while staying relatively lean. Luckily, I had read Loren Cordain’s book The Paleo Diet For Athletes long before I started seriously training, so at least most of the food I was eating was of a high quality (wild game, veggies, fish, grass-fed beef, fruit, quinoa, and some rice).

After I decided to quit training I was worried that I could never maintain a lean physique without rigorous aerobic efforts (i.e., the opposite of the barbell strategy). After all, the only time in my life I had achieved a level of leanness I was happy with was when I was training hard. Then again, I had never used a barbell strategy when it came to diet and fitness either.

Here’s what I decided to do: I ditched my training logs and quit doing any formal training altogether. I started walking a lot and would go to the local park near my house to do intense 10-15 minute workouts that included sprints, pull-ups, burpees, jump-squats, and push-ups (the kind of stuff I did to get in shape for lacrosse in college). I would also intermittently fast (up to 24 hours at a time) and vary my protein intake greatly every day (I also quit drinking a “recovery shake” 30 minutes after every workout), all the while still eating a Paleo-ish diet (albeit, I wasn’t too strict about this). To my surprise, I not only didn’t gain weight, I actually became leaner and stronger following this barbell strategy.

+++

Sometimes we miss the most obvious things. Until very recently, I missed two obvious places in which I should be implementing the barbell strategy in life. It’s fair to say that I simply have not been critical enough of my own habits. I’ve been blindly following “the experts’” advice that an alcoholic drink or two a day (in other words, drinking in moderation) is better for your health than binge drinking, which naturally makes sense, right?

Well, it made sense until I read a profound thought in Antifragile that Taleb attributed to Rory Sutherland (another thinker I greatly admire). [3] Sutherland suggested that the optimal policy for consuming alcohol would be to drink liberally a few days, coupled with several days of abstaining entirely. Chronically consuming alcohol every day, even if it’s in moderation, is similar to chronic cardio in this respect.

This idea is dangerous, particularly because alcoholic drinks have an interesting property: the amount of harm (or potential harm) they can cause doesn’t scale linearly (particularly the hangover part). For example, the eighth beer you drink in an evening is going to harm you (and worsen your hangover) much more than the seventh one did. Anyone who has been hungover (and I’m going to assume that means all readers of this blog) has intuitively understood this at some point in their life.
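The eighth-beer-versus-seventh-beer point is a claim about convexity: if harm grows faster than linearly with dose, each additional drink hurts more than the one before it. Here’s a toy sketch of my own (the squared-dose harm function is entirely made up for illustration, not a medical model):

```python
# Hypothetical convex harm function (my own toy model, not real toxicology):
# harm grows with the square of the dose, so marginal harm increases.

def harm(drinks):
    """Toy harm score for a given number of drinks in one evening."""
    return drinks ** 2

marginal_7th = harm(7) - harm(6)   # harm added by the seventh drink: 13
marginal_8th = harm(8) - harm(7)   # harm added by the eighth drink: 15

# The eighth drink hurts more than the seventh -- the signature of convexity.
assert marginal_8th > marginal_7th
```

Under any convex harm curve, splitting the same total consumption into a few heavier days plus several abstinent days exposes you to less cumulative harm than a steady daily dose, which is exactly the Sutherland/Taleb intuition.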

According to this line of logic, I’m going to suggest that drinking slightly beyond moderation (a healthy buzz, 3-4 glasses of wine or beers for me) is healthy if it is followed by several days of absolutely no drinking. If this sounds like hormesis, it is. [4]

Anyway, after I read this section of the book, I quickly realized that if this were true of alcohol, it’s probably applicable to coffee consumption too. Accordingly, I engaged Rory Sutherland on Twitter with the following question: Should one use a barbell strategy with coffee consumption too? I’m sure you can guess what his response was. [5]

Alas, since I love to drink coffee daily, I knew before I asked the question that I wasn’t going to like the answer. However, not only do I not like the answer, but I greatly fear that it is true (perhaps ignorance is bliss). Going a day without coffee is tough (I’m slowly working my way up to several days), but the Stoics were right. Nothing makes you appreciate something you enjoy like fasting from it for a while — nothing tastes better than that first cup of coffee after a day away from it. Of course, I suspect there are some health benefits too.

Notes:

[1] I want to make the following point clear: while not applicable to everything, the barbell strategy is still a brilliant heuristic (i.e., a rule-of-thumb).

[2] I don’t necessarily mean to discourage anyone from triathlon, as I certainly understand that there are psychological reasons for competing in grueling endurance sports too. This is, at least partially, why I was drawn to the sport.