Expected-Value in Life-or-Death Decisions
The problem with cancer is not so much the physical struggle, but the mental struggle to make a rational decision about treatments. We can withstand nearly any adversity so long as we believe it's rational. And to believe something is rational, we have to believe in its value. But how do you measure the value of a life-or-death decision? Expected-value calculations are undefined when the costs and benefits are infinite. Death is an infinite loss. Prolonged life is an infinite benefit.
We have some capacity to deal with expected-value in life-or-death situations. For example, when we swerve out of the way to avoid a car accident, we take on some risk. But the probability of success is close to one, which practically removes it as a factor for consideration. There is no expected-value, just value. One action to save one life. That's easy.
In the case of cancer, treatment may have a 100% chance of six months of suffering for a 20% chance of prolonging your life another ten years. 20% multiplied by ten years is two years, but you can't tell a patient, "this choice is worth two years." While there is some research on maximizing QALYs, or quality-adjusted life years, such methods require a battery of thought experiments. Patients have to consider various trade-offs, gambles, and preference-ranks to get to the bottom of what they "truly want." But even if such tests could approximate the rational choice 95% of the time, the patient would have to undergo significant duress to comply. It takes an insane amount of guts to refuse treatment that has a 20% chance of success, even if it's supposedly worth two QALYs. A 20% chance is high enough to activate hope, but not low enough to activate resignation.
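The arithmetic above can be made explicit. A minimal sketch, using the hypothetical numbers from the example (the 0.4 quality weight is an arbitrary illustration, not a measured utility):

```python
# Hypothetical numbers from the example above, not clinical data.
p_success = 0.20        # chance the treatment works
years_gained = 10.0     # life extension if it works
suffering_years = 0.5   # six months of treatment, borne with certainty

# Naive expected value in life-years: 20% of ten years.
ev_years = p_success * years_gained
print(ev_years)  # 2.0

# A crude QALY-style adjustment: the six months of suffering are
# lived at reduced quality (0.4 here is an arbitrary weight).
quality_during_treatment = 0.4
ev_qaly = ev_years - (1 - quality_during_treatment) * suffering_years
print(round(ev_qaly, 2))  # 1.7
```

The point of the essay survives the calculation: "worth two QALYs" is a number no patient experiences as a reason.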
Our ancestors rarely encountered these kinds of probabilistic scenarios. Our default approach is always to grasp for survival. If there is famine, strive to survive. If you're sick, try to fight it. If you fail, well, it was a good two-week battle, and then you're done. Not a bad way to go out. There were no decades-long quests to cling onto life. In a way, modernity is worse, because we spend our golden years maximizing the number of last-ditch efforts to extend an increasingly poor quality-of-life.
The standard way to model the time factor in cost-benefit analyses is to discount the future. For most people, gains in the present are worth more than gains in the future. But time doesn't have to matter to us. When they have children, for example, some adults begin to exist in a timeless universe, one where they live just to monitor the growth of their children. Their main costs and benefits have been subsumed into their spawn. Should we go on vacation or not? It doesn't matter, because their children are living out their lives. Whether a decade or two passes by for the parents, it's no bother, since it's the arc of their children's story, something they can't control, that matters.
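The discounting mentioned above is usually exponential: a gain arriving t years out is divided by (1 + r)^t. A minimal sketch, with a 5% annual rate chosen purely for illustration:

```python
def present_value(future_gain: float, years: float, rate: float = 0.05) -> float:
    """Exponential discounting: what a future gain is worth today."""
    return future_gain / (1 + rate) ** years

# Ten life-years arriving a decade from now, discounted at 5% per year,
# are worth only about six life-years today:
print(round(present_value(10.0, 10), 2))  # 6.14
```

The choice of rate is exactly the kind of free parameter the next paragraph worries about: stack enough discounting functions and the analysis can be tuned to say anything.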
So the rationalist would find another function, a global discounting of costs and benefits. But as the layers of discounting functions increase, so does the risk of overfitting, rendering the analysis meaningless. Sometimes there are zero costs, zero benefits, over zero time, leading to a division by zero somewhere that makes the whole calculus undefined.
Rationality is a ritual, one that involves light, vague estimates of costs and benefits
They're so light that one wonders whether they can even be called rational. For example, when you ask for help and someone volunteers it, neither party is really measuring cost against benefit, because the stakes are so low. Rather, all our actions are just a matter of drives and culture. Likewise, voting is not a rational decision for most of the electorate.
Rational thinking is, at its core, an aesthetic exercise
Rational thinking, at its core, is an aesthetic exercise—not a moral, practical, or logical one. Nazi and Enlightenment thinkers alike were motivated by the beauty of their ideas. How else would you miss the irony of creating a country founded on both freedom and slavery? How else would you premise the creation of a master race on the annihilation of the most intelligent people in your country?
Morality would find those ideas repulsive. Practicality would find them expensive. It's only in beauty that paradoxes are permitted. Consider the Japanese. How else would you go from being a people who, in the first half of the twentieth century, conducted bioweapons experiments on humans, to being renowned in the second half for social programs for the elderly, low crime rates, and overall politeness? This is the same country that has the highest return rate for misplaced wallets. Why do the Japanese do this? Is it because it's the right thing to do? Or is it because it's the most elegant?
The Usefulness of Accuracy
Accurate statements aren't necessary, or even useful. For example, if your child gets mugged while walking down an alley in a rough neighborhood, then according to law and accuracy, the cause of their mugging was the mugger. Many other people have walked down that alley in that neighborhood and not gotten mugged. But you would still scold your child until they promise not to walk down that alley again, because that's what's most likely to prevent them from getting mugged in the future. You can cause your child to be less mugged in the future by blaming them for their mugging in the past.