What you will learn from reading The Black Swan:
– How black swan events catch everyone off guard.
– How our brains aren’t evolutionarily wired to understand abstract concepts like statistics.
– What silent evidence is and why it matters.
The Black Swan Book Summary
The Black Swan is all about events that catch us off guard and how we are wired to be blind to them. We can’t see the invisible, the stuff that isn’t measured, and so we fall for the illusion that what we can see is all there is to see.
We see the obvious and visible consequences, not the invisible and less obvious ones. Yet those unseen consequences can be more meaningful.
This book summary will have you thinking smarter about the world and how to see more of the invisible.
Characteristics of a Black Swan:
Every Black Swan event has three attributes – unpredictability, extreme impact and retrospective explainability.
What are Models and where do they go wrong?
Models and constructions, our intellectual maps of reality, are not always wrong; they are only wrong in some specific applications.
The difficulty is that:
1. You do not know beforehand where the ‘map’ will be wrong.
2. The mistakes can lead to severe consequences.
It always remains the case that you know what is wrong with a lot more confidence than you know what is right.
Key Point — Always look for the observation that would prove you wrong.
People always seek confirmatory evidence. Instead, ask the question: what would prove this wrong?
True self-confidence is the ability to look at the world without the need to stroke one’s ego.
The Human Problem:
We naively generalise, which means we have a naive expectation of linearity. We think things will continue as they have, unaware that the biggest changes come unexpectedly. We generalise rules to reduce chaos.
On the fallibility of Memories:
Memory is more of a self-serving dynamic revision machine – you change the story at every remembrance without realising it.
A memory corresponds to the strengthening of connections through increased activity in a given sector of the brain – the more activity, the stronger the memory.
We have far too many possible ways to interpret past events for our own good.
Paranoia is a state of creating worrying stories over minute details. The story seems rational in hindsight, and the evidence appears to fit it.
If you’re interested in the unreliability of memory, check out this book summary:
Mistakes were Made But Not by Me Book Summary
Domain Specificity:
We react to a piece of information not on its logical merit, but on the basis of which framework surrounds it, and how it registers with our social-emotional system.
Our reactions, our mode of thinking, our intuitions depend on the context in which the matter is presented.
Interesting thought – empirically, sex, social class and profession seem to be better predictors of someone’s behaviour than nationality.
Adding a ‘because’ makes a statement seem far more plausible and far more likely.
Joey seemed happily married. He killed his wife.
Joey seemed happily married. He killed his wife to get her inheritance.
The first statement is broader and therefore more probable; yet the second, more detailed one seems more likely. This is the conjunction fallacy.
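The rule behind this is simple: a detailed story is a subset of the broad one, so P(A and B) can never exceed P(A). A minimal sketch, with population counts invented purely for illustration:

```python
# Conjunction fallacy: a more specific story can never be more probable
# than the broader statement it extends, because P(A and B) <= P(A).
# All counts below are made up for illustration.

population = 10_000          # hypothetical married men
killed_wife = 100            # event A: killed his wife
killed_for_inheritance = 20  # event A and B: killed her for the inheritance
                             # (necessarily a subset of event A)

p_a = killed_wife / population
p_a_and_b = killed_for_inheritance / population

print(p_a)        # 0.01
print(p_a_and_b)  # 0.002

# The broad statement is always at least as likely as the detailed one.
assert p_a_and_b <= p_a
```

The added detail makes the story feel more plausible while making it strictly less probable.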
Linear Progression and nonlinearities:
Our intuitions are not cut out for nonlinearities. Our ancestral environment was one where process and result were closely connected. Our emotional apparatus is designed for linear causality.
Linear progression, a platonic idea, is not the norm.
The ancients knew – the highest currency is respect.
History is seeing with the 20/20 hindsight glasses on.
Silent Evidence:
There is a distortion between what you see and what is. Survivors are shown. The dead are not.
There is no section anywhere on people who have committed crimes and not been caught.
Therefore, our representation of the standard criminal might be based on the properties of the less intelligent ones who were caught.
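This distortion can be simulated: if skill reduces the chance of being caught, a sample of caught criminals will understate the skill of criminals in general. A toy sketch, with the population and the catch rule invented for illustration:

```python
import random

random.seed(0)

# Silent evidence / survivorship bias sketch: we only observe the caught.
# Assume each criminal has a "skill" in [0, 1) and that the probability
# of being caught is (1 - skill). All numbers are invented.

criminals = [random.random() for _ in range(100_000)]
caught = [s for s in criminals if random.random() > s]  # higher skill, caught less often

avg_all = sum(criminals) / len(criminals)
avg_caught = sum(caught) / len(caught)

print(round(avg_all, 2))     # ~0.5: true average skill of all criminals
print(round(avg_caught, 2))  # ~0.33: the visible sample understates skill
```

The observed average is biased downward because the most skilled members of the population are silent evidence: they rarely appear in the sample at all.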
If both the positive and negative consequences of an action fell on the person taking it, our learning would be fast.
The risk tends to come from outside your models, as you tend not to be looking or measuring there.
The more information you give someone, the more hypotheses they will formulate along the way, and the worse off they will be. They see more random noise and mistake it for information.
Three expert excuses:
You were playing a different game (missed critical information in a different field)
You invoke the outlier
The almost right defence
Reference Points:
We cannot work without a point of reference. Less mental effort is needed to compare an item to a reference point.
The first fallacy people fall for: ignoring variability. A forecast should give a range of possible outcomes and timeframes rather than an exact figure, because an exact prediction will always be wrong.
Popper’s argument: in order to predict historical events you would need to predict technological innovation, which is itself fundamentally unpredictable.
Know how to rank beliefs not according to their plausibility but by the harm they may cause.
Focus on Consequences not probability:
This idea that in order to make a decision you need to focus on the consequences (which you can know) rather than the probability (which you can’t know) is the central idea of uncertainty.
I don’t know the odds of an earthquake in San Francisco, but I can imagine how San Francisco might be affected by one.
Why we can’t figure out what’s going on:
a) epistemic arrogance
b) platonic notion of categories – fooled by reductions
c) flawed tools of inference