New Micro-essays

Expected-Value in Life-or-Death Decisions

The problem with cancer is not so much the physical struggle as the mental struggle to make a rational decision about treatments. We can withstand nearly any adversity so long as we believe it's rational. And to believe something is rational, we have to believe in its value. But how do you measure the value of a life-or-death decision? Expected-value calculations are undefined when the costs and benefits are unbounded. Death is an infinite loss. Prolonged life is an infinite benefit.
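
In expected-value notation, the breakdown is easy to see. A minimal sketch (p here stands for the probability of survival; the notation is mine, not the essay's):

```latex
% With unbounded payoffs, the expectation has no defined value:
% it reduces to infinity minus infinity, an indeterminate form.
\mathbb{E}[V] = p \cdot (+\infty) + (1 - p) \cdot (-\infty) = \infty - \infty
```

No choice of p rescues the formula; infinity minus infinity has no value.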

We have some capacity to deal with expected-value in life-or-death situations. For example, when we swerve out of the way to avoid a car accident, we take on some risk. But the probability of success is close to one, which practically removes it as a factor for consideration. There is no expected-value, just value. One action to save one life. That's easy.

In the case of cancer, a treatment may carry a 100% chance of six months of suffering for a 20% chance of prolonging your life another ten years. 20% multiplied by ten years is two years, but you can't tell a patient, "this choice is worth two years." While there is some research on maximizing QALYs, or quality-adjusted life years, the assessments require a battery of thought experiments. Patients have to consider various trade-offs, gambles, and preference rankings to get to the bottom of what they "truly want." But even if such tests could approximate the rational choice 95% of the time, the patient would have to undergo significant duress to comply. It takes an insane amount of guts to refuse a treatment that has a 20% chance of success, even if it's supposedly worth two QALYs. A 20% chance is high enough to activate hope, but not low enough to activate resignation.
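
To make the arithmetic concrete, here is a minimal sketch in Python. The numbers mirror the hypothetical above; the quality weight for the treatment period is an illustrative assumption, not clinical data.

```python
# Expected-value arithmetic for the hypothetical treatment above.
# All numbers are illustrative, not clinical data.

def expected_life_years(p_success: float, years_gained: float) -> float:
    """Naive expected value: probability times payoff."""
    return p_success * years_gained

def expected_qalys(p_success: float, years_gained: float,
                   suffering_years: float, quality_weight: float) -> float:
    """Subtract the certain cost of the treatment period, weighted by
    quality of life. Weights run from 0 (as bad as death) to 1 (full health)."""
    certain_cost = suffering_years * (1.0 - quality_weight)
    return p_success * years_gained - certain_cost

print(expected_life_years(0.20, 10))       # 2.0  -- "worth two years"
print(expected_qalys(0.20, 10, 0.5, 0.3))  # ~1.65 -- after six months at 0.3 quality
```

The function is trivial to write; the hard part, as the essay argues, is that no patient experiences their life as its output.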

Our ancestors rarely encountered these kinds of probabilistic scenarios. Our default approach is always to grasp for survival. If there is famine, strive to find food. If you're sick, try to fight it. If you fail, well, it was a good two-week battle, and then you're done. Not a bad way to go out. There were no decades-long quests to cling to life. In a way, modernity is worse, because we spend our golden years maximizing the number of last-ditch efforts to extend an increasingly poor quality of life.

Illusion vs. Perception

Saying "it's all an illusion" is close to the truth. All we have is perception. Whether those perceptions are false, though, depends on the domain. In optics, we are nearly accurate. When you look at a tree, most of what you perceive is true. Optical illusions represent only a sliver of optics. Our memories of what we've seen, though, are probably fifty percent false.

Nil Time-Values

The standard way to model the time factor in cost-benefit analyses is to discount the future. For most people, gains in the present are worth more than gains in the future. But time doesn't have to matter to us. Some adults, for example, begin to exist in a timeless universe when they have children, one where they live just to monitor their children's growth. Their main costs and benefits have been subsumed into their spawn. Should we go on vacation or not? It doesn't matter, because their children are living out their lives. Whether a decade or two passes by for the parents, it's no bother, since it's the arc of their children's story, something they can't control, that matters.
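
For reference, the standard model is exponential discounting. A minimal sketch (the 5% rate is an illustrative assumption):

```python
# Exponential discounting: a gain of value v arriving t years from now
# is worth v / (1 + r)**t today. The rate r is an illustrative assumption.

def present_value(v: float, t: float, r: float = 0.05) -> float:
    return v / (1 + r) ** t

print(present_value(100, 0))   # 100.0  -- a gain today, at face value
print(present_value(100, 10))  # ~61.39 -- the same gain ten years out
```

For the timeless parents above, no value of r fits: their own gains aren't merely discounted, they've dropped out of the ledger entirely.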

So the rationalist would find another function, a global discounting of costs and benefits. But as the layers of discounting functions increase, so does the risk of overfitting, rendering the analysis meaningless. Sometimes there are zero costs and zero benefits over zero time, leading to division by zero somewhere, which makes the whole calculus undefined.
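
A sketch of that degenerate endpoint, continuing the assumptions above:

```python
# Stack a second, "global" discount on top and let costs, benefits,
# and the time horizon all shrink to zero: any ratio-based summary
# (benefit per unit cost, value per unit time) collapses to 0/0.

def benefit_cost_ratio(benefits: float, costs: float) -> float:
    return benefits / costs

benefit_cost_ratio(0.0, 0.0)  # raises ZeroDivisionError -- the calculus is undefined
```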

Social-Oriented Thinking

In the natural world, our perceptions are accurate. When we see a tiger jump at us, we see it correctly and make a quick decision to run or fight. In the social world, though, our perceptions are suspect, because we don't know who is or isn't a threat. Even if we discern a threat today, that person may not be one tomorrow. We can't keep a fixed mind about anyone. Since everybody is mind-reading everybody else's mind-reading, it helps to have a complex layer of contingencies. It's helpful to be thoroughly genuine until it isn't.

Social AI

Each time we reach a milestone in artificial intelligence, it looks trivial in retrospect. For example, when Deep Blue beat chess champion Garry Kasparov, we wrote off chess as a rote puzzle. But that doesn't mean we will always be unimpressed by every AI accomplishment. We have to properly define the most meaningful components of intelligence so that we know which mountains we've actually climbed.

The highest mountain that AI can climb will be in approximating the neocortex. The neocortex accounts for the largest share of humans' increase in brain mass relative to the rest of the Animal Kingdom. Since Nature is efficient, the size of the neocortex is an indication of the difficulty of its function. The neocortex is responsible for our higher-order cognition, such as mind-reading or empathy. It's through the neocortex that language recognition becomes language understanding. In other words, the most interesting goal-post in AI will be social.

The current AI goal-posts will seem trivial in retrospect. Multiple-object recognition and self-driving cars probably exercise only a small part of our brains. Even an AI that passes the Turing Test won't be that impressive; after all, small talk is ultimately inane. But the ability to navigate political alliances won't seem trivial.

A social AI would require the ability to ask for and give favors. A social AI would have to know whom to avoid and whom to approach. It would have to detect liars, and it would have to learn how to trust. Empathizing with a group of friends to figure out which restaurant to go to will demand far more processing prowess than merely counting moves in chess.
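
For contrast, the mechanical part of the restaurant problem is trivial. A minimal sketch of naive preference aggregation (a Borda count over hypothetical rankings); everything that makes the social version hard, such as inferring what each friend privately wants or spotting a polite lie, is absent from it:

```python
# Borda count over each friend's ranked restaurant preferences.
# Names and rankings are hypothetical. The hard, social part --
# reading the minds behind the stated rankings -- is not modeled here.

from collections import defaultdict

rankings = {
    "alice": ["thai", "sushi", "pizza"],
    "bob":   ["pizza", "thai", "sushi"],
    "carol": ["sushi", "thai", "pizza"],
}

scores: dict[str, int] = defaultdict(int)
for prefs in rankings.values():
    for points, restaurant in enumerate(reversed(prefs)):
        scores[restaurant] += points  # last place scores 0, first scores len - 1

print(max(scores, key=scores.get))  # "thai" -- the consensus compromise
```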

Therapy and Wine-Tasting

The cost-benefits of therapy are twisted to favor the wealthy. The potential upside of good therapy is life-changing, but figuring out who will or won't deliver that upside is maddeningly difficult. You could be stuck with a bad therapist for years and not know it, because the therapist keeps you busy spinning your wheels while you work on your "issues." The only patients who can properly sift through therapists are either the already emotionally well-balanced (who don't need a therapist) or the rich. For the rich, money is not a concern, meaning they can select from both expensive and inexpensive therapists.

Even though we have no studies showing that the quality of therapy is correlated with how much it costs, it's hard to imagine they aren't correlated. At the very least, therapists who make patients happy will be in high demand and will thus charge more. As a result, the rich have access to the widest range of quality therapists. In this way, money does buy happiness.

The fact that quality therapy might only be available to the rich makes therapy similar to wine-tasting. Refining your palate in wine-tasting requires a lot of money. Wine-tasting is also on the genetic cutting edge: you are either born with a "nose" or not. Likewise, therapy is on the frontier of human expression. Most therapy is bad, but the good therapists are great, and the great ones are Earth-shattering. As a result, the wealthy are the patrons of our collective endeavor to explore the benefits of mediated introspection. The upside is enormous, but for now, the ability to unlock it remains in the hands of a select few.

The Profundity of Convergent Eye Evolution

Convergent evolution makes the best case for an arrow, or some inherent will, in Nature to create or become something. Evolution is not all just random shifting sands, with cockroaches equally as interesting as humans. If a feature evolves multiple times in different species whose common ancestor lived hundreds of millions of years ago, it says that a certain class of solutions is not only universally useful in Nature, but reliably reachable.

The most profound case is the convergent evolution of the eye. The eye evolved at least three separate times, once for cephalopods (squid), again for vertebrates (mammals), and a third time for cnidarians (jellyfish), lineages separated by hundreds of millions of years. To be fair, profundity is a human construct, but it is also a proxy for complexity, which may exist in reality. That something as complex as the eye could evolve and re-evolve says something about the ease with which Nature can take a relatively undifferentiated canvas and spring forth a complex machine.

Extrapolating from that, the ease of eye evolution suggests that abiogenesis, the beginning of life, would also be easy. If it's "easy" to spring forth an eye from a simple patch of photoreceptor cells, then why shouldn't it also be easy to spring forth basic cells from a simple patch of amino acids caught up in loopy chemical reactions?