
Tim Abraham

Nassim Taleb

PUBLISHED ON MAR 2016

Anyone who knows me also knows that I hold the writer Nassim Taleb as my personal hero. His ideas have had more influence on my way of thinking about the world than anything else - and by a long shot. I can remember back when I first read The Black Swan. I would go on really long walks and just think it over. It opened an intellectually curious side of me that has never closed since.

This month I read Antifragile for the third time. I chose it over The Black Swan and Fooled By Randomness because NNT considers it his magnum opus, and it brings together a lot of ideas from his earlier books and builds some cohesiveness around the concepts. Rather than dive right into the contents of the book, I want to first wax a bit about what all these books, and the author himself, are generally getting at.

I first discovered Nassim Taleb through a podcast called EconTalk that I used to listen to religiously. He’s been on the show a few times now, and on one of his more recent appearances he was asked, “how would you sum up your ideas?” The host was looking for an answer in the style of Rabbi Hillel, who is famous for being asked “how would you sum up the Torah while standing on one foot?” and replying something to the tune of “do to others what you would want them to do to you, and the rest is just commentary”. Nassim’s answer was “I have just one idea: decision making under opacity. Or what to do when you don’t know what to do.”

Now I think this perfectly sums up the idea of Antifragile, but what does it mean? I think the explanation is twofold. First, let’s unpack decision making under opacity.

Since the Enlightenment (and thanks in large part to the field of Economics, which has bled into Political Science, Sociology, Psychology, and basically all the soft sciences), we’ve been conditioned to think that we make rational decisions based on perfect information. We are taught this in school, and the leaders of today’s world go get a higher education in this stuff at a place called Business School. They’re taught tools to take into public life like “Expected Utility Theory”, “Game Theory”, “Decision Theory”, and a bunch of other theories built on one big assumption: that we are rational and have perfect information. But when do we ever have perfect information in the real world? Humans are complex, society is complex, and, moreover, Nature is complex. We don’t fully understand any of them, so it’s naive to think that we can predict the outcomes of our decisions and go with the one that provides the best results. But look around at the world as it is today. Every person with a powerful position in political or economic life has been schooled in this form of naive rationalism. The people making the most important decisions are using tools that assume the world is a smooth, understandable, and simple place. I don’t think I need to argue too hard to convince you that the world is not that simple. When we make decisions, we really don’t know everything. That’s what decision making under opacity means, and outside of casinos and a few board games, our decision making always has some opacity.

So we think we can “predict” the outcomes of most of the decisions we make, but we really can’t. Nassim argues that the field of prediction, and the statistics behind it, are often not a good fit for the problem. But they appear very rigorous, which gives us a false sense of security that we know what we’re doing. This leads to a very scary thing called optimization, in which we use our false ability to predict to expose ourselves to even more risk if we are wrong. It can also lead to many unintended consequences, since the world is complex and, as with the butterfly effect, we can’t possibly compute all the potential outcomes. Two obvious examples are the financial crisis and the Iraq war. The financial crisis was largely caused by overconfidence about house prices - an overconfidence that led to a system designed to function nicely as long as prices ticked up, but to completely implode should they tick down. That system was largely the result of over-optimizing. We used mathematical models, had prominent economists talk about the Great Moderation, and used the bell curve to rationalize phony regulations like the value-at-risk metric. The consequences of these bad decisions were about a trillion dollars in taxpayer bailouts, while Wall Street bankers got to keep the insane bonuses they’d already banked. When we went to war with Iraq, George W. Bush said he expected it to take a few weeks. He was only looking at the tip of the iceberg, and never saw the insurgency or the fighting between Sunni and Shiite Muslims. The fallout of the operation turned out to account for 99% of the cost, death toll, and time. In both cases people assumed they knew what they were doing, and that confidence led to even more risk taking, which eventually caused huge, costly blowups. We still don’t know if we’ve recovered from either, as there could be more hidden risks looming.

That brings us to “what to do when you don’t know what to do”. We need to make decisions, and we know that not every decision is going to lead to a good outcome. Nassim Taleb wants to live in a society where we can make errors, but with small costs that are locally confined. Currently, we do too much of the opposite - our decisions have widespread consequences. NNT argues that we should seek policies that live in the domain where mistakes actually improve the health of the system, rather than bringing down the whole system and everything in its wake.

In my next essay I’ll get into some of the ideas from Antifragile. They’ll always come back to this core concept.

TAGS: BOOKS, PHILOSOPHY