New Micro-essays

Benchmarking AI to Brain Modules

A naive way to measure AI progress is to compare the size of the computers involved to the size of the corresponding brain module. For example, Deep Blue, the computer that beat world chess champion Garry Kasparov in 1997, was about the size of a filing cabinet, while the chess-playing module inside our brain is maybe the size of a pea. The volume ratio between a pea and a filing cabinet is perhaps 1:50,000, which would make the chess AI roughly one fifty-thousandth as space-efficient as the human brain. And yet, within a decade, cell phones had enough processing power to play world-class chess.
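For concreteness, here's that back-of-the-envelope arithmetic in a few lines of Python. The dimensions are loose assumptions, not measurements, and depending on how big you imagine the pea and the cabinet, the ratio lands anywhere from the tens of thousands to the hundreds of thousands:

```python
import math

# Loose assumed dimensions -- the point is the order of magnitude,
# which swings widely with the assumptions.
pea_radius_cm = 0.5                                  # a pea roughly 1 cm across
pea_volume = (4 / 3) * math.pi * pea_radius_cm ** 3  # ~0.5 cm^3

cabinet_volume = 40 * 50 * 130                       # filing cabinet in cm^3 (~0.26 m^3)

ratio = cabinet_volume / pea_volume
print(f"pea: {pea_volume:.2f} cm^3   cabinet: {cabinet_volume:,} cm^3")
print(f"volume ratio: roughly 1:{ratio:,.0f}")
```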

Likewise, Siri or Alexa today may take up an entire data center, which is about 1,000 filing cabinets, whereas our brain's language-processing centers (Wernicke's area and Broca's area) are around the size of a walnut. In a decade, it's possible that Siri or Alexa could exist inside a small module on our phones.

In the future, computer scientists will attempt to approximate the neocortex, the part of the brain associated with our high-level cognitive features, including emotional intelligence. The neocortex is perhaps the size of a tennis ball. It might take the whole world's processing power today to approximate one human's neocortex, but eventually the computational needs will come down to the size of a data center, then a filing cabinet, and ultimately a chip that could fit in the palm of our hands.

Cancer and Cost-Benefit Analyses

The problem with cancer is not so much the physical pain and damage as the way it disrupts our internal cost-benefit analysis. When it comes to life-or-death situations, the costs and benefits are seemingly unlimited. Death is an infinite loss, and prolonged life, even for six months, feels like an infinite benefit.

We have some capacity for making snap life-or-death decisions, such as swerving out of the way to avoid a car accident, but the probability of success in those situations is close to 100%. In the case of cancer, if treatment has a 20% chance of prolonging your life another 10 years along with a 100% chance of six months of suffering, our decision-making breaks down. Theoretically, we could discount the future by some amount, but expected-value theory doesn't work for life-or-death decisions with uncertain outcomes. The expected value of 20% multiplied by 10 years is two years, but you can't say, "this choice is worth two years." Utilitarians would reply that you can factor in your risk aversion and add extra negative weights to pain. But then the cancer patient also has to apply a cost-benefit analysis to the decision-making process itself. It's incredibly stressful to conclude that you should decline a therapy whose odds of success are 20% or greater. Those odds are enough to activate hope within us, and however much we discount the future value of extra days of life, we're still wired to feel that it has infinite value.
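To see why the utilitarian patch doesn't settle anything, here's the arithmetic as a minimal sketch. The risk-aversion factor and the suffering weight below are assumptions invented for illustration, and, tellingly, they alone decide the sign of the answer:

```python
# The bare expected value: a 20% chance of 10 extra years works out
# to 2 years -- a number that doesn't capture the stakes.
p_success = 0.20
years_gained = 10
expected_years = p_success * years_gained  # 0.2 * 10 = 2.0

# The utilitarian patch: weight the certain suffering and discount the
# uncertain gain. Both weights are made up for illustration.
suffering_years = 0.5   # the guaranteed six months of treatment
suffering_weight = 3.0  # assume suffering counts triple
risk_aversion = 0.7     # assume uncertain gains are discounted by 30%

adjusted = risk_aversion * expected_years - suffering_weight * suffering_years
print(f"expected value:      {expected_years:.1f} years")
print(f"risk-adjusted value: {adjusted:+.1f} years")  # -0.1: the weights decide
```

Nudge either weight a little and the treatment flips from worth it to not worth it, which is exactly the breakdown the patient is left to live with.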

Towards the end of our lives, we desperately grasp at treatment after treatment, even as their efficacy runs out. This kind of grasping was useful for our survival in the past, such as persevering in the face of famine. But the nature of cancer has turned last-ditch efforts into a way of life. Doctors will urge us not to throw good money after bad when it comes to fighting cancer, but we, as patients, demand otherwise.

Illusion vs. Perception

Saying "it's all an illusion" is close. All we have is perception. Whether or not those perceptions are false depends on the domain, though. In optics, we have near accuracy. When you look at a tree, most of what you perceive is true. Optical illusions represent only a sliver of optics. Our memories of vision, though, are probably fifty-percent false.

Nil Time-Values

The standard way to model the time factor in cost-benefit analyses is to discount the future. For most people, gains in the present are worth more than gains in the future. But time doesn't have to matter to us. Some adults, when they have children, begin to exist in a timeless universe, one where they live just to monitor the growth of their children. Their main costs and benefits have been subsumed into their spawn. Should they go on vacation or not? It doesn't matter, because their children are living out their lives. Whether a decade or two passes by, it's no bother, since what matters is the arc of their children's story, something they can't control.

So the rationalist would find another function, a global discounting of costs and benefits. But as the layers of discounting functions increase, so does the risk of overfitting, rendering the analysis meaningless. Sometimes there are zero costs and zero benefits over zero time, leading to a division by zero somewhere that makes the whole calculus undefined.
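For concreteness, here's the standard exponential discount alongside that degenerate case, where zero benefits over zero time divide zero by zero. The 5% rate and the ten-year stream are assumptions for illustration:

```python
def discounted_value(benefits, rate):
    """Standard exponential discounting: sum of b_t / (1 + rate)^t."""
    return sum(b / (1 + rate) ** t for t, b in enumerate(benefits))

# A benefit of 1 per year for 10 years at an assumed 5% annual discount rate.
print(discounted_value([1.0] * 10, rate=0.05))  # ~8.11, not 10

# The timeless parent: zero benefits over a zero-length horizon. Normalizing
# value per unit of time divides zero by zero, and the calculus breaks down.
benefits, horizon = [], 0
try:
    print(sum(benefits) / horizon)
except ZeroDivisionError:
    print("undefined")
```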

Social AI

Each time we reach a milestone in artificial intelligence, it looks trivial in retrospect. For example, when Deep Blue beat chess champion Garry Kasparov, we wrote off chess as a rote puzzle. But that doesn't mean we will always be unimpressed by every AI accomplishment. We have to properly define the most meaningful components of intelligence so that we know which mountains we've actually climbed.

The highest mountain AI can climb will be approximating the neocortex. The neocortex represents the largest increase in brain mass in humans relative to the rest of the Animal Kingdom. Since Nature is efficient, the size of the neocortex is an indication of the difficulty of its function. The neocortex is responsible for our higher-order cognition, such as mind-reading and empathy. It's through the neocortex that language recognition becomes language understanding. In other words, the most interesting goal-post in AI will be social.

The current AI goal-posts will seem trivial in retrospect. Multiple-object recognition and self-driving cars probably engage only a small part of our brains. Even an AI that passes the Turing Test won't be that impressive; after all, small-talk is ultimately inane. But the ability to navigate political alliances won't seem trivial.

A social AI would require the ability to ask for and grant favors. It would have to know whom to avoid and whom to approach. It would have to detect liars, and it would have to learn how to trust. Empathizing with a group of friends and figuring out which restaurant to go to will demand far more processing prowess than merely counting moves in chess.

Sufficient Substitutes

3D printing was supposed to herald a revolution, one where ordinary consumers would print all sorts of widgets for their homes. And yet, after a decade or so of its existence, all we have is a transformation of a few corners of the industrial manufacturing process. Consumers solved their need for 3D printing long ago: people have small woodshops in their garages to build replacement legs for their furniture, and Home Depot already stocks a seemingly infinite spectrum of parts that can fill most gaps in the house.

Likewise, with virtual reality and cryptocurrency, we already achieve the same goals through other means. Virtual reality, for example, is supposed to transport us to new places, but we already have that through air travel and cinema. Cryptocurrencies are supposed to free us from government regulation of currency, but we already have that through untraceable commodities, such as gold or diamonds, or even dollar bills, which, unlike crypto, leave no trail at all.

People still use fax machines despite the existence of e-mail, and people still hail taxis despite the presence of Uber and Lyft. People care mostly about accomplishing goals, and if an existing tool already gets them 90% of the way there, the new technology doesn't have much room to grow.

The Limits of Collective Research and Reasoning

According to the Internet's best guess, everything both causes and doesn't cause cancer. If you search "Does such-and-such cause cancer?" you can find plenty of articles from supposedly reliable sources, such as WebMD, confirming the carcinogenic properties of almost anything. The same confusion holds for the side effects of drugs: everything causes headaches, nausea, and so on. And if you follow the consensus advice on what to eat and what not to eat, you will be left with only five possible foods in your diet.

There is a medical truth out there, but the current quality of research and the Internet's collective reasoning skills are insufficient to answer a broad swath of common questions.

Types of Social Acting

There are two kinds of acting: method and standard. In standard acting, you put on a smile even if you're unhappy or bored on the inside; the inner doesn't have to match the outer. In method acting, you first synchronize the inner with the outer. In order to smile, you think happy thoughts. Then when it's time to produce a smile, it all comes out naturally.

The problem with method acting in everyday life, though, is that it compromises your inner authenticity. You have to bend your understanding of the truth to accomplish a social outcome. Self-deception isn't inherently wrong; much of our social cognition steers us toward ideal social outcomes. For example, flirting is often unconscious in order to give ourselves plausible deniability. Or we believe the best in others for game-theoretic reasons, to encourage games of reciprocity.

But method acting adds a layer of self-deception on top of the unconscious one. At least with standard self-deception, your deceptions are consistent: if you unconsciously believe yourself to be more powerful than you are, you'll maintain that self-image even when the world proves otherwise. But if you consciously prop yourself up with positive thinking, then when you encounter resistance, the inner muscle responsible for maintaining your self-image may become exhausted, lose its force, and lead you to a reckoning.

Happiness may require self-deception, but if you consciously convince yourself to be happy, you'll have to keep applying pressure against your inner truth to make the outer truth work. Apply enough of that intentional thinking, and the cognitive load will distract you so much that your mannerisms will lack the vitality and life of someone behaving spontaneously. At that point, you would have been better off relying on the occasional fake smile rather than becoming a robot.

The Profundity of Convergent Eye Evolution

Convergent evolution makes the best case for an arrow in Nature, some inherent will to create or become something. Evolution is not all just random shifting sands, with cockroaches as interesting as humans. If a feature evolves multiple times in different species whose common ancestor lived hundreds of millions of years ago, it says that a certain class of solutions is not only universally useful in Nature but also reliably reachable.

The most profound case is the convergent evolution of the eye. The camera-type eye evolved at least three separate times: once in cephalopods (squid and octopuses), once in vertebrates (including mammals), and once in cnidarians (box jellyfish), lineages separated by hundreds of millions of years. To be fair, profundity is a human construct, but it is also a proxy for complexity, which may exist in reality. That something as complex as the eye could evolve and re-evolve says something about the ease with which Nature can take a relatively undifferentiated canvas and spring forth a complex machine.

Extrapolating from that, the ease of eye evolution suggests that abiogenesis, the beginning of life, would also be easy. If it's "easy" to spring forth an eye from a simple patch of photoreceptor cells, then why shouldn't it also be easy to spring forth basic cells from a simple soup of amino acids caught up in loopy chemical reactions?