What you will learn from reading Ergodicity:
– Why long-run gains trend towards zero whenever there is a risk of ruin, regardless of the expected value.
– How game overs change the probabilities of success.
– How financing big purchases using monthly instalments is more often than not a better decision.
Ergodicity Book Summary:
Ergodicity is another fantastic book by Luca Dellanna. He clearly explains the concept of ergodicity, which Nassim Taleb has called “The Most Important Property to Understand in Probability, in Life, in Anything.”
Reversibility and irreversibility:
In skiing, and life in general, it is not the best ones who succeed. It is the best ones of those who survive.
Maximising the expected returns of your choices is a good strategy only if the consequences of mistakes and misfortunes are reversible. Otherwise, it’s a stupid strategy.
In theory, performance determines success. The fastest skier wins the race, and the best-performing employee becomes the most successful one.
In practice, performance is subordinate to survival. It is the fastest racer of those who survive who wins races, it is the best-performing employee who doesn’t burn out who becomes the most successful, and so on.
In general, we can say that in any repeated activity, irreversibility absorbs future gains. This means that you cannot extrapolate future outcomes solely from the expected outcome of performing the activity once.
Distinguish between calculated risks whose consequences you can recover from and recklessness whose consequences might permanently debilitate you. There is a sweet spot where you expose yourself to the former but not the latter – that’s a good place to aim.
Against the Law of Large Numbers:
We generally assume the law of large numbers to be always relevant. In reality, it seldom is for individuals. It requires, well, a large number of trials. The problem is that in most real-life situations, we have a limited number of trials. For example, I cannot keep picking risky stocks until I get rich – a few bad results in a row, and I am broke.
Whenever an activity cannot be assumed repeatable at infinity, we should be wary of expecting to achieve its average outcome. Any form of “game-over” nullifies future gains, bringing the average down.
You can only rely on expected outcomes if you are guaranteed a large number of repetitions. Otherwise, they are misleading (the law of large numbers requires a large number of repetitions).
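A minimal simulation sketch can make this concrete (the 90/10 gamble and its payoffs are invented for illustration): a bet can have a strongly positive expected value per round and still ruin almost everyone who repeats it, because a single game-over wipes out all future gains.

```python
import random

# Hypothetical gamble: each round you either gain 60% (90% of the time)
# or lose everything (10% of the time). One-shot expected multiplier:
# 0.9 * 1.6 + 0.1 * 0.0 = 1.44, i.e. +44% per round "on average".
random.seed(0)

def lifetime_wealth(rounds=50, start=1.0):
    """Wealth of one person who repeats the gamble, reinvesting everything."""
    wealth = start
    for _ in range(rounds):
        if random.random() < 0.10:   # ruin: a single loss is irreversible
            return 0.0
        wealth *= 1.6
    return wealth

print(f"Expected multiplier of a single round: {0.9 * 1.6 + 0.1 * 0.0:.2f}")

# How many people are still solvent after 50 rounds? Roughly 0.9**50 ≈ 0.5%.
survivors = sum(1 for _ in range(100_000) if lifetime_wealth() > 0)
print(f"Players not ruined after 50 rounds: {survivors / 100_000:.2%}")
```

Despite the attractive per-round expectation, almost every individual ends the fifty rounds broke – the average is propped up by a handful of improbable survivors.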
Common Game Overs:
They include bankruptcies, injuries, severe depressions, burnouts, and break-ups of all kinds (between romantic partners, business partners, or friends).
Lifetime Outcome (Time Probability):
The lifetime outcome of an event is the final outcome of a person undergoing the event many times divided by the number of events.
Population Outcome (Expected Value):
The expectation value of an event is the sum of the outcomes of the event happening many times divided by the number of events.
So, the population outcome is the outcome of many people performing an action once. The lifetime outcome is the outcome of one person performing an action many times.
Non-Ergodic:
In particular, a system is ergodic if its population outcome coincides with the lifetime outcome of each of its components. Otherwise, it is non-ergodic.
The practical implication is that in ergodic systems, you can use the population outcome to make optimal decisions. In non-ergodic systems, you cannot.
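As a rough sketch of this distinction (using the well-known +50%/−40% coin-flip example, with the numbers assumed for illustration), you can compute both outcomes and watch them diverge:

```python
import random

random.seed(1)
UP, DOWN = 1.5, 0.6   # heads: +50%, tails: -40%

# Population outcome: 100,000 people each flip once.
population = [random.choice([UP, DOWN]) for _ in range(100_000)]
print(f"Average outcome of one flip across the population: "
      f"{sum(population) / len(population):.3f}")
# ≈ 1.05 – by expected value, the gamble looks favourable.

# Lifetime outcome: one person flips 1,000 times, reinvesting everything.
wealth = 1.0
for _ in range(1_000):
    wealth *= random.choice([UP, DOWN])
print(f"One person's wealth after 1,000 flips: {wealth:.3e}")
# Typically a vanishingly small number: the per-flip growth rate is
# sqrt(1.5 * 0.6) ≈ 0.95 < 1. Population and lifetime outcomes disagree,
# so the system is non-ergodic.
```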
Don’t envy the risks you wouldn’t take:
I would add that it is pointless to envy someone with whom you wouldn’t trade places in all parallel universes – including those in which his gambles didn’t pay off. For example, an entrepreneur whose venture had only a slim chance of succeeding.
Do you desire to take his gambles, or do you only desire the winning outcome?
Do not envy the survivors of ventures in which you didn’t participate.
Whenever you desire an outcome because you see those who benefited from it, it is good practice to ask yourself: do you want the outcome, or do you want the opportunity to take the gamble that produced it? If you only want the former but not the latter, you might be unprepared for what’s to come.
A system can work on average but fail locally:
As an individual, you do not care whether the system works on average. You care if it works for you. Averages hide local spikes in irreversibility; survival is based on the local.
The Pitfalls of Centralisation:
This tension between what happens on average and what happens locally is the main problem of centralisation. Centralised organisations such as the WHO or the EU are neither omniscient nor blessed with unlimited bandwidth. Their executives cannot read the mountains of data that describe every corner of the world.
They must rely on averages. They cannot make thousands of micro-decisions, each appropriate for a given corner of the world. They must take a single, one-size-fits-all decision. Even if these decisions work on average, they might have a terrible impact on some local populations. Centralised organisations make more sense in an ergodic world than in a non-ergodic one.
A major problem of centralisation is the lack of granularity. A central government cannot possibly review granular data and cannot enact policies that are granular enough to be effective everywhere. Instead, we get one-size-fits-all policies.
Hence the importance of bringing decision-making closer to the people involved. For example, if a decision can be taken at the province or state level, it should be taken at that level.
How Disagreements Occur:
As a side note, many disagreements between people in good faith come from one of the following two causes. One, they’re optimising different metrics. Two, they’re considering the marginal utility of a resource whose utility is nonlinear and of which they possess different quantities (example: $100 is less important to a millionaire than to a single parent working a part-time job – of course they will have different perceptions of the value of $100, even if neither is virtue signalling).
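A small sketch of the second cause, assuming logarithmic utility of wealth (one common model, not the only one) and invented wealth levels:

```python
import math

def marginal_value(wealth, gain=100):
    """Increase in log-utility from an extra `gain` dollars."""
    return math.log(wealth + gain) - math.log(wealth)

print(f"$100 to someone with $2,000:     {marginal_value(2_000):.5f}")
print(f"$100 to someone with $1,000,000: {marginal_value(1_000_000):.5f}")
# The same $100 moves the poorer person's utility roughly 500 times more,
# so two people arguing in good faith about whether $100 "matters" can both
# be right about their own situation.
```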
The gambler and the gamble:
The point is, the best strategy depends on whether you are the gambler or the gamble. If you are the gambler, you do not care about each gamble making money. You care about the aggregate of all gambles making money. Conversely, if you are the gamble (in this example, the founder), you do not care about the overall outcome of all gambles but only yours.
Even if you do not play gambling games, it might be useful to see yourself as a gambler. After all, each of your habits is a gamble in which you bet time and energy for a possible return. Similarly, any belief of yours is a gamble. Any job of yours, any relationship of yours, any idea, any decision – they are all investments of time and money in exchange for a future return.
Parties sharing a stake in the same venture might have different incentives. They might have different opinions on what is replaceable and what is not.
For a company or a population, replaceability of its members means ergodicity. For the individual members, the opposite applies.
As an individual, you cannot blindly rely on membership of a group for your survival. Instead, you can become an irreplaceable part of it. This way, the population deems your loss irrecoverable and must take action to prevent it at all costs.
The Barbell Strategy:
One lesson from Taleb’s work is that risk management is not about prudence but about removing the risks of “game-over” so that you can be aggressive with other risks.
Similarly, The Barbell Strategy is not about reducing risk in general. Instead, it is about limiting the part of yourself or of your assets that are exposed to irreversibility.
Kelly Criterion:
These two intuitions, “don’t go all-in” and “payoffs determine the relative size of the bet,” summarise a betting strategy known as the Kelly Criterion, named after the mathematician who invented it.
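For a simple binary bet, the criterion reduces to a one-line formula, f* = p − (1 − p)/b, where p is the probability of winning and b the net odds. A minimal sketch (the example probabilities and payoffs are illustrative):

```python
def kelly_fraction(p_win, net_odds):
    """Optimal bet size as a fraction of bankroll: f* = p - (1 - p) / b."""
    return p_win - (1 - p_win) / net_odds

# A 60%-likely even-money bet: stake 20% of the bankroll – never all of it.
print(f"{kelly_fraction(0.60, 1.0):.0%}")   # -> 20%
# The same 60% chance with a 2:1 payoff justifies a larger stake.
print(f"{kelly_fraction(0.60, 2.0):.0%}")   # -> 40%
```

Both intuitions show up directly: as long as there is any chance of losing, the fraction stays below 100%, and it grows with the payoff.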
The perks of being moody:
Moods cause us to intuitively bet more time and energy on activities with high payoffs and less on activities with low payoffs.
Imagine two hunter-gatherers looking for berries: Alice, who feels no moods, and Bob, who does. Alice would tend to sample every bush she walks by – an inefficient method. Bob would check the first bush, then the second one, then he might become discouraged (a mood) and walk for a bit before bending forward to check the bush next to him. Once he finds some berries, he gets excited (another mood) and checks all the bushes nearby. This is advantageous because, in nature, resources tend to cluster together. If a bush is particularly fruitful, the chances are that the ones around it are too, because they grow on the same fertile soil.
As the hunter-gatherers’ example showed, moderately moody people tend to be more efficient than moodless ones. (The keyword being “moderately” – excesses are bad.)
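The foraging story can be sketched as a toy simulation (the bush counts, cluster sizes, and the "skip five bushes when discouraged" rule are all assumptions made up for illustration):

```python
import random

random.seed(2)

# A strip of 1,000 bushes; berries cluster in fertile patches of ~10 bushes.
bushes = [0] * 1_000
i = 0
while i < len(bushes):
    if random.random() < 0.02:                      # start of a fertile patch
        for j in range(i, min(i + 10, len(bushes))):
            bushes[j] = random.randint(1, 5)
        i += 10
    else:
        i += 1

def forage(moody):
    """Berries gathered and bushes checked by one forager walking the strip."""
    collected, checked, i = 0, 0, 0
    while i < len(bushes):
        checked += 1
        found = bushes[i]
        collected += found
        if moody and found == 0:
            i += 5        # discouraged: walk past a few bushes without checking
        else:
            i += 1        # excited (or moodless): check the very next bush
    return collected, checked

for name, moody in [("Alice (moodless)", False), ("Bob (moody)", True)]:
    collected, checked = forage(moody)
    print(f"{name}: {collected} berries from {checked} bushes "
          f"({collected / checked:.2f} per bush checked)")
```

On clustered resources, Bob typically gathers slightly fewer berries in total but far more per bush checked – moods act as a cheap heuristic for "resources cluster, so search near recent successes".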
Skin in the Game and Irreversibility:
A single speeding fine does not prevent the driver from driving again. If it were only for fines, drivers would be incentivised not to go fast, but they would not have skin in the game.
Conversely, crashing is an event that can stop a driver from driving again – either because he died or because the police cancelled his driving licence. It is not incentives that provide skin in the game, but irreversibility.
Redistribution:
An important factor that influences whether a system working well on average also works well everywhere is redistribution.
The question that we must ask ourselves is, “When there is a local spike in load, can the system redistribute the load fast enough?”
A system that can redistribute load quickly is not immune to failure, just much less likely to fail. For example, even a system that can perfectly redistribute load will break once the total load exceeds the number of load-bearing units times each unit’s load capacity. The slower the system is at redistributing, the lower the maximum load it can withstand.
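A minimal sketch of this idea, with invented numbers (10 units, capacity 100 each, and a fixed time budget before an overloaded unit breaks):

```python
UNITS, CAPACITY, TIME_BUDGET = 10, 100, 5   # assumed figures for illustration

def survives_spike(spike, shed_per_step):
    """Can one unit hit by `spike` shed enough load to its neighbours in time?

    Each step it offloads at most `shed_per_step`; it breaks if it is still
    over capacity when the time budget runs out.
    """
    if spike > UNITS * CAPACITY:
        return False                 # even perfect redistribution cannot save it
    local_load = spike
    for _ in range(TIME_BUDGET):
        if local_load <= CAPACITY:
            return True
        local_load -= shed_per_step  # redistribute some load to other units
    return local_load <= CAPACITY

print(survives_spike(spike=300, shed_per_step=100))  # fast redistribution -> True
print(survives_spike(spike=300, shed_per_step=10))   # slow redistribution -> False
```

The same spike that a fast-redistributing system absorbs easily breaks a slow one, even though the total capacity is identical.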
Rethinking Monthly Installments:
As another example, a good friend of Luca’s recently purchased a car. He chose to pay for it in monthly instalments over five years, even though he had enough money to pay for it in cash. He explained that it allowed him to keep a buffer of money in the bank to manage unexpected problems. If he paid for the car in cash and lost his job next month, he would be in dire straits.
This makes sense! He decided to prioritise survival over optimisation, the unexpected over the expected. Conversely, a clueless economist might call it irrational to pay more for something when you have the option to pay less. Temporal distribution matters.
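A back-of-the-envelope sketch with made-up figures shows the asymmetry (the car price, living costs, and length of the income gap are all assumptions):

```python
# Illustrative numbers: a $20,000 car, $2,500/month of living costs,
# $20,000 in savings, and a 4-month gap with no income.
SAVINGS, CAR_PRICE, MONTHLY_COSTS, MONTHS_NO_INCOME = 20_000, 20_000, 2_500, 4
INSTALMENT = 380   # monthly instalment; totals more than the cash price over 5 years

# Option A: pay cash, keep no buffer.
cash_left_a = SAVINGS - CAR_PRICE - MONTHS_NO_INCOME * MONTHLY_COSTS
# Option B: pay in instalments, keep the buffer.
cash_left_b = SAVINGS - MONTHS_NO_INCOME * (MONTHLY_COSTS + INSTALMENT)

print(f"Pay cash:        {cash_left_a:>8,} -> {'ruined' if cash_left_a < 0 else 'survives'}")
print(f"Pay instalments: {cash_left_b:>8,} -> {'ruined' if cash_left_b < 0 else 'survives'}")
```

The instalment plan costs more in total, but only the cash plan exposes the buyer to a game-over.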
Pre-emptive redistribution increases resilience and opens up opportunities.
Mis-Measuring Corona:
“Number of cases per country” and “number of cases per country per 100,000 population” are less useful measures than they appear. Sure, they tell us how many people are infected (assuming correct data and transparent communication – a strong assumption). However, they do not tell us much about the state of healthcare. From this point of view, “number of hospitals with bed saturation higher than 95%” would be a better measure. Moreover, they do not tell us much about containment. “Number of active clusters” would have been a better metric. The point is, we measure to make decisions, and these decisions should consider local conditions. National averages are misleading.
R0 is another example. This coefficient is supposed to tell us how infectious the virus is. It estimates the expected number of cases directly generated by one case in a population where all individuals are susceptible to infection. From most practical points of view, it is useless at best and misleading at worst. First of all, it is not the property of just a virus – as the media depicted it. Instead, it is the property of a virus, a population, the quantity of the former in the latter, and the exposure of the latter to the former. For example, the same virus can have different R0 values depending on whether concerts are allowed. More importantly, as many pointed out, spending 120 minutes with a single friend has a different potential for contagion than spending 10 minutes with each of 12 friends. Again, averages are misleading.
Restricting Vs Expanding the Scope:
Restricting:
The easiest way to increase performance is to restrict the scope of its definition. For example, we can restrict the definition of fast from “fast during a whole championship” to “fast in this particular slope.” Or, we can restrict the definition of happiness from “a fulfilling career, a happy family, a good social life, and a healthy body” to “a prestigious job title” or “a coveted spouse.”
In both cases, we make the outcome easier to achieve, but it also means less and matters for a shorter time.
Expanding:
On the other hand, the easiest way to hide problems is to increase the scope of measurement.
If a town has a few districts whose population lives in poverty, it can conveniently hide the problem by talking about the average income measured across the city as a whole.
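A toy calculation with invented district figures shows how the city-wide average buries the struggling district:

```python
# Assumed toy figures: average household income per district, in dollars.
districts = {"North": 85_000, "East": 78_000, "Centre": 92_000, "South": 14_000}

city_average = sum(districts.values()) / len(districts)
print(f"City-wide average income: ${city_average:,.0f}")   # looks comfortable
for name, income in districts.items():
    flag = "  <- hidden by the average" if income < 25_000 else ""
    print(f"  {name}: ${income:,}{flag}")
```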
If your romantic life is in shambles, you can conveniently hide the problem by measuring your happiness across all your activities, including your career and your friends.
Expanding the scope of measurement to hide problems only lets them grow. **Problems grow to the size they need to be acknowledged.** A hidden problem is a problem that keeps growing, and that will damage us more painfully in the future.
Ergodicity provides us with a few tools to recognise whether, in a given context, it is safe to expand or restrict the scope of measurement. In general, in non-ergodic contexts, it is not safe to do so.