Advanced AI is just advanced us
One of the best cases for a radical technological singularity comes from potential advances in Artificial Intelligence (AI). If an AI were to get smart enough to make itself smarter, then given the world's processing power, it could create a superintelligence that is light-years ahead of where we are today. However, one argument against this upward spike is that we are already that superintelligence.
Even though humans have irrevocably changed the world, in many ways it's the same. For example, birds fly, and now we fly. Animals move in packs, and now we have freeways. Electricity is exotic, but the things we do with it, such as recreating villages that now span greater distances, are familiar. We are already computers, and we created AI in our image. The result may end up being the preservation of all our existing faculties, just really enhanced.
After utopia arrives, life's annoyances will magnify, whether it's the common cold, traffic jams, or unrequited love
We may never appreciate utopia because it won't feel amazing. Petty annoyances, such as being stuck in traffic, won't automatically go away, even if we eliminate hunger. The drama of human relationships will endure even if all nations lay down their arms. And even if poverty is vanquished, death and decay will hang around our necks, potentially more so since we won't be distracted by our basic needs.
A hundred years from now, despite the Singularity or Moore's Law, 25% of restaurant tables will still wobble
In Minority Report, despite Maglev cars and floating user interfaces, people still catch colds. Similarly, a hundred years from now, around 25% of restaurant tables will still wobble. Never mind the Singularity or Moore's Law reaching its zenith, this fact about tables will remain. This philosophy is called "The Banality of Futurism," and by delving deeper into the wobbly table issue it's clear some problems weren't meant to be solved.
The current solution to wobbly tables, besides using folded sugar packets, is to sell tables with adjustable screws. Many restaurants already have these tables, but because of how inconvenient it is to find someone to lift the table while you bend over and get your hands dirty, the solution is not utilized. This leads to Principle #1 of the Banality of Futurism: The future may already be here, but we don't use it.
This assumes restaurant owners even bought tables with adjustable screws. While wobbly tables are a collective nuisance, the owners are individuals who have to look at a catalog of restaurant tables and each come to the same conclusion: "I should pay for the premium tables, so that my customers don't have wobbly tables." But because of the cost-saving motivation, combined with rationalizations such as "The floors in my new restaurant should be flat" or "We can just stick small wood chips underneath them," we end up with the situation we have today. Principle #2: The future may already be here, but the problem isn't annoying enough to solve at scale.
Continuing on to the immediate future, wobbly tables are still problematic. Let's say Apple designs "the perfect table," one that adjusts easily. Perhaps they're electrically adjustable with the push of a button. Or maybe the screws are designed such that you can easily adjust them with your toe. Again, by the same principle above, restaurant tables won't get the Apple treatment. Part of the problem is that it's a commodity item, like printers, so there's no incentive for one player to make the table and own the market. Principle #3: The future may already be here, but nobody wants to build it.
Further out, in the exotic future, if there were some cheap solution that involved fancy material science, we would have already found it. Imagine some hard, rubbery substance that expands or contracts based on continuous pressure—or lack thereof—over multiple days. The date of this material's discovery would be random and not linked to exponential increases in computing power or intelligence. Given how far we've already gone into material science, the discovery would have to be by luck, or barring that, by intense force. Furthermore, the difficulty in its discovery would likely imply a difficulty in manufacturing it as well, and so again, it won't be cheap.
There are many things like the wobbly table. For example, all of these will still be problems in 2116:
- Pizza boxes that won't stay completely shut
- Stray corners of paper towels left behind upon ripping
- Old refrigerators that make weird noises
- Stubbing our toes on the sharp edges of furniture
Perhaps the silver lining in the Banality of Futurism is that the room for growth won't be in fixing life's inconveniences, but rather in the human condition. If poverty is eliminated or if war becomes taboo, then maybe eating an apple pie on a wobbly table while blowing our noses won't be so bad.
If Apple wanted to create a lively 3D avatar for Siri, they could, but they don't for business reasons. Anything less than complete realism would make the character appear uncanny and creepy, turning off consumers. It's likely that Apple could even make Siri's voice more natural, given that her tone hasn't changed much since her release in 2011. But again, Apple would have to improve her naturalism by leaps and bounds to pass the uncanny valley and make Siri acceptable to consumers.
However, as individual AI domains increase in sophistication, a new opportunity will arise that involves mashing up disparate AI modules. For example, if Siri had a 3D face floating on the screen of a self-driving Tesla, the effect would be like talking to KITT, the artificially intelligent car from the 1980s television show Knight Rider. Instead of creeping us out, the combination would be delightful, tickling our curiosity. "Siri, take me to the grocery store." "Okay, would you like to stop by the bank along the way?" If you added some eyelashes and fleshy features, you might start to feel like you had a real-life shopping companion.
The final 20% of AI sophistication for voice assistants will probably cost the same as the first 80%. But, as we get closer, it will become easier for entrepreneurs to round up the current best-performers in AI and produce a symphony of life emulation.
Amateur is the next step after indie, with the breakout serendipity of non-talent being the new talent
First, there was Indie. Then there was Hipster. The next logical step is Amateur. Indie was the secret power of the connoisseur who could find treasure beyond the radio and big music stores. The indie fan lived near independent record stores and coffee shops with local budding acts. Then Myspace gave indie access to anybody who was either bored with their current music selection or had some obsessive desire to deepen their cultural purview. But then Urban Outfitters and social media-savvy music labels commercialized and repackaged that access.
But the war for obscurity and cultural elitism soldiers on, and it will probably focus on amateur acts. The aficionado who is so interested in music that they have to get to its core—which is just people sharing emotion with people—will spend their time at shitty open-mics or listening to tracks on SoundCloud from anybody with at most a thousand listens. There will be no buzz-following as the nerds seek out serendipitous discovery.
Of course, first-wave imitators, who in the previous generation were hipsters, will co-opt amateurism. Faux amateurism, in the form of willfully imperfect Etsy clothing, will be the style. But then this will be commercialized and re-packaged so that perhaps the Rebecca Blacks of the world, the people who just shell out a few grand for a vanity album, will dominate the airwaves.
Archaeology is like aliens trying to understand humans by analyzing bubble gum wrappers
If a meteor killed all humans and destroyed most of civilization, what would a curious alien find? Most likely they would discover a cell phone screen here, a half-sneaker there, and some bits of a gun over there—all traces of modern civilization. The odds that aliens would ever find evidence of transitional human societies—spearheads from the Assyrians, horse-drawn carriages from the British, or stone buildings from Babylon—are one in a billion. An alien is a billion times more likely to find a bubble gum wrapper than a piece of papyrus. The alien would then naturally conclude that some "intelligent designer" must have created this modern civilization as is, for there would be little or no record of its gradual development.
As long as people mate and affiliate with quality, status will always be important, even if class stops being so
The concept of class is on the way out, thanks to the rise of the creative class, which is creating an ever-expanding mélange of middle-class classlessness that eventually consumes everybody. However, people will still mate and affiliate with quality, letting narrow status-seeking take over the role that classism once played.
As long as sentience exists, so will competition. As long as competition exists, so will suffering
Sadly, we will never reach the goal of eliminating suffering from the universe. Whether it's in aliens, animals, A.I. replacements for humans, or even humans that don't make the transition through the Singularity, there will always be suffering.
We can start by defining these sufferers in the abstract as conscious, semi-intelligent "nodes." Each of these nodes had to evolve. Even if we create sentient, intelligent beings, they will just be an extension of the evolution of us. All nodes are subject to the rules of persistence which require resources to maintain their existence, even if it's just the burning of coal to power data centers.
The second principle is the scarcity of resources. There is no unlimited source of anything in the universe. Every node has a non-zero amount of physical material, and that material is ultimately limited. So some games between nodes—games that these nodes have trained themselves on through evolution—will be zero-sum games. As a result, the increase in value for one node (anything that helps perpetuate its persistence) will often require the decrease in value for other nodes. In other words, competition is inevitable.
Then, for nodes to be competitive for survival, they have to respond in kind to threats. They have to feel pain. Pain is essentially an alarm that provides bursts of data that dominate all other data on the buffer. So a computer meant to emulate pain like humans would encounter situations where it stops processing ambient data and instead focuses all of its energy on the alarm: i.e., the pain. It would gird itself up, creating an exceptional "emergency" circumstance whereby it would temporarily drain energy from other modules to deal with the thing causing the pain. The result would look like an approximation of the human experience of pain.
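This alarm mechanism can be sketched as a toy priority queue, where a pain signal preempts and flushes ambient data. The class and signal names here are purely illustrative, not drawn from any real robotics or AI framework:

```python
import heapq

# Toy model: "pain" as a high-priority alarm that preempts ambient processing.
AMBIENT, ALARM = 1, 0  # lower number = higher priority for heapq


class Node:
    def __init__(self):
        self.queue = []  # priority queue of (priority, message) tuples

    def sense(self, message, priority=AMBIENT):
        heapq.heappush(self.queue, (priority, message))

    def step(self):
        """Process one message; alarms always drain the buffer first."""
        if not self.queue:
            return None
        priority, message = heapq.heappop(self.queue)
        if priority == ALARM:
            # Emergency mode: drop ambient data and keep only other alarms,
            # mimicking how pain drains energy from other modules.
            self.queue = [m for m in self.queue if m[0] == ALARM]
            heapq.heapify(self.queue)
        return message


node = Node()
node.sense("camera frame")
node.sense("audio sample")
node.sense("overheating in actuator 3", priority=ALARM)
print(node.step())  # the alarm preempts everything else
```

After the alarm is handled, the ambient frames have been flushed, which is the point of the sketch: the emergency signal dominates the buffer rather than waiting its turn.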
As a result, nodes will harm other nodes in a competition for resources, and as a result, they will suffer at the hands of each other.
As the rate of invention goes down, the rate of innovation must rise to take its place, leading to a rise in entrepreneur-obsessives
Charles H. Duell, Commissioner of the U.S. Patent Office in 1899, once said, "Everything that can be invented has been invented." Even though this sounds foolish in retrospect, and even though the quote is probably apocryphal, that doesn't mean it can't be true, or at least partially true in the future. In certain categories, like the airplane, the rate of invention plateaued decades ago.
That's not to say that innovation will taper off, but rather, the future for growth will be depth. The future belongs to the obsessive entrepreneurs like Steve Jobs and Elon Musk. Smartphones existed before Steve Jobs, as did electric cars before Elon Musk. Airplanes existed before Richard Branson turned them into the affordable flying nightclubs of Virgin America, which he sold for $2.6 billion.
The history of business has been about catering to humans' five basic senses and their few basic desires. History going forward will be about staying within those confines and finding something new to offer.
Banality of Futurism
Science fiction usually portrays a future chock-full of mind-blowing or exotic experiences. However, our past experiences with technological enhancement seem to be a letdown. The world hasn't turned out like The Jetsons after all. Instead, innovation has a way of quickly becoming banal.
For example, when television first came out, the prediction was that we'd eliminate Harvard and all higher education institutions because quality learning could be piped to millions of people simultaneously into their living rooms. That never materialized, and instead, now we have reality TV programming.
The most accurate way to think about future technology is to give it a pre-emptive post-mortem. When some amazing new thing, like nanotechnology, comes out, how will humanity inevitably make it boring? For one thing, nanotechnology could arrive slowly, like over the span of 100 years. The first advancements from nanotech might lead to simply better 3D printers, which are already here and aren't that interesting. Or it might lead to a 20% more efficient medicine-delivery mechanism; nurses would just add another tube for "nanotech" to pipe along with your IV or your injection.
Becoming a vegetarian before being cryogenically frozen may be smart, just in case we wake up in a world that abhors animal-killers
Since the circle of empathy is widening for humans, one should consider becoming a vegetarian before being cryogenically frozen, just in case animal cruelty is so abhorred in the future that people won't unfreeze former animal-killers.
Benchmarking AI to Brain Modules
A naive way to measure AI progress is to look at the size of the computers involved. For example, Deep Blue, the computer that beat world chess champion Garry Kasparov in 1997, was about the size of a filing cabinet. Meanwhile, the chess-playing module inside our brain is maybe the size of a pea. The volume ratio between a pea and a filing cabinet is perhaps 1:50,000, which means the AI for playing chess had roughly one fifty-thousandth the efficiency of the human brain. However, within a decade, cell phones had enough processing power to play world-class chess.
Likewise, Siri or Alexa today may take up an entire data center, which is about 1,000 filing cabinets, whereas our brain's language-processing centers (Wernicke's area and Broca's area) are around the size of a walnut. In a decade, it's possible that Siri or Alexa could exist inside a small module on our phones.
In the future, computer scientists will attempt to approximate the neocortex, which is the part of the brain associated with our high-level cognitive features, including emotional intelligence. The volume of the neocortex is perhaps the size of a tennis ball. It might take the whole world's processing power today to approximate one human's neocortex. But eventually, the computational needs will come down to the size of a data center, then a filing cabinet, and then ultimately a chip that could fit in the palm of our hands.
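As a back-of-envelope illustration, the benchmark can be turned into a toy calculation: take a brain-module-to-machine volume ratio and ask how many Moore's-Law-style doublings would close the gap. The volumes and the two-year doubling period below are assumptions for illustration, not measurements:

```python
import math


def years_to_parity(volume_ratio, doubling_period_years=2.0):
    """Years until the machine shrinks to brain-module size, assuming
    capability per unit volume doubles every `doubling_period_years`."""
    return math.log2(volume_ratio) * doubling_period_years


# Chess: pea vs. filing cabinet, using the essay's rough 1:50,000 estimate.
print(f"chess: ~{years_to_parity(50_000):.0f} years to pea-sized parity")

# Language: a walnut (~10 cm^3, assumed) vs. a data center of ~1,000
# filing cabinets (~250,000 cm^3 each, assumed).
language_ratio = (1_000 * 250_000) / 10
print(f"language: ~{years_to_parity(language_ratio):.0f} years to walnut-sized parity")
```

The point of the sketch is only that the gap closes logarithmically: even million-fold volume disadvantages collapse within a few decades of steady doubling, which is consistent with chess migrating from a filing cabinet to a phone within ten years.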
Cloning will be difficult for friends who don't want to hear the same joke twice
In the case of human copying, it's important to realize that not everything gets copied. For example, bank accounts have to remain in the custody of the canonical version.
But more importantly, relationships can't be copied. Knowing this leads to some potentially interesting social conventions. For example, it may become a faux pas for the copy to try to re-approach the friends of the original. Doing so would create confusion among their friends, at the very least because they don't want to hear the same jokes twice. Loved ones need security knowing that they're investing in only one version, the legal, canonical one.
This hurdle could be bypassed, however, if a group of friends decided to copy themselves simultaneously.
How do you rationally approach the decision to do cryo? Assuming you decide it's technically feasible, the question then gets reduced to "Is cryo worth it?"
You could use a saw like Pascal's Wager, where you say something to the effect of, "If I even have a small chance at immortality, couldn't I justify an unlimited amount of effort to obtain it?" But then why not also relocate to the cryo facility prematurely to guarantee an optimal freeze? Why not wear a helmet every day?
Instead of weighing your decision to do cryo in the abstract, you could use a more practical heuristic: Determine whether or not you would regret waking up in the future. This kind of thinking factors in common objections to cryo: "I don't think I would survive the shock," or "I would feel so utterly alone." These statements seem to be connected to popular portrayals of time travel, like Encino Man, which show brutes from the past fumbling their way embarrassingly through the uncaring present. But perhaps this rejection is simply the default response to considering something as alien as the cryo experience.
If you dig deeper, applying a bit of the banality principle of futurism, cryo scenarios can actually be grounded in the familiar. Using myself as an example, imagine I'm in my mid-20s living in Boston in the 1770s. I spend my days as a dock worker, dreaming of one day becoming a shipping baron. I then meet someone who will freeze me in exchange for working a few extra hours a week to pay for life insurance. After some thought, I sign up for a monthly payment plan and then forget about it. A few years later, I am unexpectedly stricken with consumption, vapors, or some other eighteenth-century malady, and die at the age of 30. I am then promptly placed in a barrel, filled with ice, and shipped off to Northern Canada where I spend the next 230 years. Finally, I wake up in Austin, Texas. The year is 2006.
Walking through my first years in Austin, I would find myself in awe of how much better life was. Every time I ate a Big Mac I would think about how often I was hungry in Boston. Every time I got sick, I would recall how every sneeze used to mean that Black Death was around the corner. I would probably start off as a janitor, then work my way up to cashier at a convenience store while working on my vocabulary and manners. After a year, I would probably take classes for a simple vocation, something where I can use both my hands and my brain. Maybe I'd pick plumbing. I'd work hard, because I'd want money. I'd want money, because I'd have an insatiable curiosity, one that would drive me to either buy comforts that were once reserved for kings, such as silverware, or to experience things that were once reserved for magicians, such as flying. In other words, I would be a simple, hard-working man whose baseline expectations were so low because he had come from so far away.
Eventually I'd want a girlfriend and then a family. For me, the typical middle-class existence would suffice, but I wouldn't lose my sense of wonder about modern times: having babies that survive; public education; unlimited entertainment; never having to use an outhouse. None of it would get old. One Christmas, sitting in my living room with my wife and kids who are all healthy and safe, I would think to myself, "I'm really glad I worked extra hours in Boston so I could afford cryonics."
To come up with the story, I picked a realistic setting. In real life, I was 24 in 2006 and moved by myself from the San Francisco Bay Area to Austin, sight unseen. I had no friends and no job lined up; I just went. To add to the story, I weaved in tales from my father, who grew up in a refugee camp in India but then moved to Canada in the late 1960s with only $100 in his pocket. He often tells me about his first Big Mac.
Using a plausible story and gauging regret is such a simple way of thinking about cryonics. When we imagine the far future, it's often in one of two extremes: a crude apocalypse on one end or a shiny cybernetic simulation on the other. In a way, both ideas are horrific because of how uncertain they are. And it's that fear that shuts down all reasoning. But the future won't be extreme. Life will be mostly like today, except with less crime, poverty, and illness, i.e. a similar leap forward from where we were two hundred years ago. Maybe we'll have some wild, fantastical invention, such as the Holodeck, but ultimately life will remain the same: We will fall in love, we will fall out of love, we will strive for meaning and purpose, but we will still enjoy kicking back and watching movies. If you're scared of cryonics, you're ultimately scared of life, and maybe the cost of continuing it.
Delayed technophilia is the process by which we celebrate technological progress years after we've become inured to it
By the late 1990s, there were hundreds of millions of Internet users, and yet the Internet was dismissed at the time as a novelty. Even though Silicon Valley described it as a world-changing force, the popular understanding of it was as a place for time-wasters like pornography and cat photos. The Internet was conceived of as just one application out of many, something to sit alongside word processing or graphic design.
Now, the Internet is so integrated into our lives, that for many people, it's the only way they stay in touch, and for many businesses, it's the only way they make money. Our livelihoods are dependent on it, and so, in retrospect, it seems like a monumental innovation, perhaps even greater than the invention of the computer.
Likewise, airplanes and automobiles were initially received as novelties. Everybody was familiar with cars at the time, but they were primarily owned by the rich, and they were a nuisance on the road. A drive in one, for most people, facilitated something that they used to do by other means, such as walking or riding in a horse-drawn carriage. It wasn't until later, when nearly all transport took place via cars, that it became inconceivable to imagine the trappings of modern life without them.
New technologies initially focus only on acquiring new users; getting integrated into those same users' lives is a whole different challenge. Thus, groundbreaking technologies only get recognized as such much later on, even decades after actually breaking ground.
Earth had a Singularity-like uptick a century ago when the percentage of those free from violence and poverty jumped from 1% to 2%
Utopia has existed for the upper class since the dawn of history. If we define utopia by the general concepts of harmony and accord, then 1% of humans have shielded themselves from war, violence, hunger, and poverty for many millennia. Today, this circle includes perhaps 500 million out of the 7 billion living who have obtained freedom from those same global afflictions. 1 in 14 live in a utopia today, whereas 100 years ago, it was about 1 in 30. The moment this circle expanded from 1% to 2% marked the beginning of the knee bend in the graph of utopian progress, a bend that ends when more than 90% of humans live free from war, violence, hunger, and poverty. The timespan of this noticeable acceleration, relative to the history of hominids, is nearly instant, and so by at least one definition of the Singularity, it is already here.
Everybody buys the best house they can afford, which is why GDP growth alone isn't enough to end wage laboring
Sadly, wage laboring will be a permanent condition of human existence. Even if GDP keeps increasing an average of 3% every year for the next hundred years, wage laboring won't drop much, because of the simple fact that people will always consume as much as they can: nearly everybody buys the best house they can afford or borrow. And thus, the number of people living a life of leisure will not increase substantially in the coming decades, despite the inexorable increase in abundance afforded by technology.
For Generation X, life keeps improving; for Millennials, improvement is a given, and therefore boring
Futurism is steeped in the present, which means the singularity probably won't result in mind-uploading, but rather something beyond our imagination
Futurism often consists of extrapolation, not imagination. The futurists of the early twentieth century lived through urban transformations brought on by advances in transportation, and thus their visions involved megalopolises with flying cars. Early twenty-first century futurists lived through the advent of the Internet, which was a revolution in connectedness, and so by extrapolating super-connectedness, they've envisioned mind uploading as the logical conclusion. To extrapolate from the disappointment of past futurists, then, most likely the singularity will be something other than mind uploading and flying cars, something we haven't even thought of yet.
Futurism should be taught in school because the changing of ideas matters more now than the ideas themselves
Scarcity will always exist because those who take more will crowd out those who take less. Even as technology makes it so that one farmer can feed hundreds, we are ever clever in making new things scarce. The scarcity of diamonds, for example, shows that scarcity transcends caloric needs. We are also finding scarcity in areas that aren't limited in quantity per se, but more so limited in available quality, such as the scarcity of good opera seats or good homes in good cities.
And we are designed for this scarcity. We have a sliding scale of happiness that doesn't depend on our absolute quality of circumstance, but rather vaguely shoots for fifty-percent happiness since that's the optimal level required to keep us striving.
None of which is to say that utopia can't exist, but rather that it will exist in variables unrelated to abundance. So, for example, utopia might mean a decline in violence and the elimination of poverty; but even if technology leads to abundance, there will still be widespread discontentment. Abundance has existed many times in human history, and each time, we prevailed.
Hard vs Soft AI Problems
The problem with our understanding of AI is that just before we solve a challenge, like mastering chess, we often believe that we are on the verge of creating a superintelligence. And then, when we figure out those problems like we did with chess, we retroactively deem the problem simple, rote, and not indicative of broader human intellect. Whether we build a computer that can win Jeopardy or create a self-driving car that can run through a course in the Nevada desert, we always think we’re just about to build Skynet.
However, this should not be cause for disillusion, but rather perspective. We need better measurements. Consider this labeling: Divide AI problems into Soft and Hard ones. Hard AI problems approximate general human faculties and should be age-rated. So, for example, the ability to play chess is not general enough, nor is competing in Jeopardy. Being able to wander a desert without getting stuck, though, is general, and could be comparable to a general skill of a three- or four-year-old.
Another good example of a Hard AI problem is identifying all the objects on a table. As of 2017, you can point Google Lens at any single object and it will classify it. This improvement is big and could be comparable to the general skill of a two-year-old. Put an object in front of a two-year-old, and they can point and name it. They likely don't understand the meaning of the object, or if they do, it's limited to, "I can play with this" or "I can eat this."
We will develop a superintelligence when we solve Hard AI problems at the level of a ten-year-old. When an AI can name all the objects on a table, wander a street without bumping into things, pick up and use any tool, and engage in mildly amusing conversation, then we will have a good start. Then, if we add all the Soft AI solutions–the narrow, rote ones, such as the ability to write code or do complex math–we will have a prodigy that can program itself to become smarter, thus finally granting us artificial general intelligence.
Hipsters will be succeeded by hipostates: those whose apostasy is only to like things un-ironically
The successor to hipsters will be "hipostates," which is a compound of "hipster" and "apostate." While the term "apostate" indicates someone who has abandoned a religious affiliation (the opposite of apostle), it has relevance to the New Hipster.
Hipostates are people who like things un-ironically. They eat fast food not because they are trying to act poor, but because they think the food is tasty, reasonably priced, and an excellent delivery vehicle for calories. Whereas a hipster's prime motivation is to sample and remix a medley of subcultures, wearing them like a badge with a wink in their eye, hipostates eat fast food because it's good.
There are billions who eat cheap hamburgers, but that doesn't mean they deserve the label "hip." What distinguishes the hipostate is their awareness. They know that their interests may not be cool, but they partake regardless. They seize them, wear them nonchalantly, and that unabashedness is the next trend after hipsterism.
This new movement is similar to willful philistinism, which describes the behavior of the 1980s British upper-crust who didn't have time to read books. They proudly celebrated their dislike of opera, poetry, and all things intellectual to compensate. But hipostates may like opera and the finer things; they're just not "for" or "against" any broad categories like the Sloanes of Britain.
Whereas hipsters are into emulating the general feel of cool tastes, hipostates do them one better by being extremely particular in their tastes. The hipostate takes the time to delve into subcultures, picking and choosing pieces to suit their individual tastes. The hipster, on the other hand, is too busy partying to calibrate their consumption.
Historians will split the Information Age into two phases: one when Moore's Law seemed unstoppable, and one when it didn't
Once you learn about Moore's Law, you start to see it everywhere. Not only is CPU speed increasing exponentially, but so are hard disk storage and network technologies. Beyond the direct participants in Moore's Law-like growth patterns, there are secondary and tertiary fields that have also been affected: Weapons have become exponentially deadlier, and our ability to reap food from the Earth has become exponentially easier. There's a compounding effect to all this. Faster CPU speed makes it easier to do research, which makes it easier to invent the Internet, which then makes it easier to do research, which then makes it easier to make bullets and farm.
But just as we see Moore's Law everywhere, we are also noticing where it's absent. Despite the proliferation of many Moore's Laws in many fields, there are just some things that won't change. For example, tables in restaurants will still be uneven because the incentives to always bend over and adjust them aren't there, and the value of a stable table isn't high enough to justify the invention of affordable self-adjusting table technology. Exponentially increasing CPU speed won't mean that such technology will suddenly become affordable; even if it does, it may take 100+ years to get here. So don't hold your breath.
But that may just be a problem with the nature of human demand. We aren't demanding un-wobbly tables that much, and it doesn't affect the restaurant-going experience that much, so it doesn't necessarily get the fruits of technological progress as quickly as something core, like our ability to kill each other and farm food.
But even controlling for demand, we can find limits to Moore's Law on the supply side. For example, pervasive cell phone reception does not appear to be increasing exponentially. Even though cell towers are getting more powerful and cheaper to build, the cost to install them probably isn't changing much. Even as mobile phone bandwidth gets better in places where coverage already exists, it is still horrible on a cross-country drive across the United States, and it is likely to remain horrible for a couple more decades. Sure, there isn't enough demand in those rural areas to justify installing a cell tower, but the problem is also subject to labor costs, which are probably rising, as well as fixed resource costs, such as the fuel necessary to move installation equipment to those rural areas. It takes a certain amount of calories to dig a post for a cell tower, and that cost just won't get automatically obliterated by exponential technological progress.
We have one foot in the rapidly changing future and one in a world that is becoming increasingly banal. Our impatience for technological advances that should be here by now is encapsulated perfectly by the question, "Where are our damn flying cars?"
How much different would one long, unending life be than a set of little lives with their own beginnings, middles, and ends?
We might already know what immortality feels like. We live long lives, separated into stages, with discrete groups of relations and environments that come and go. The friends a person had 20 years ago may be different from the friends they have now, and so, at least to that old group of friends, they're dead. Their former coworkers, their former habits, and their old home all once represented them. If "All you touch and all you see, is all your life will ever be," as the Pink Floyd lyric goes, then all that people ever were exists in the memories of others with whom they are no longer acquainted. People get divorced, start new families, relocate, or enter new life stages, each of which brings them into contact with a new web of memories to live through.
So even if we conquer physical death and live one long, unending life, how much different would that be than living through a series of little lives with their own beginnings, middles, and ends?
Human copying will be like Dropbox, with the cloud copy always a few files behind the local one
Human copying might manifest similarly to cloud-based file storage. Even though the files exist on your laptop, on your phone, on your desktop computer, and in data centers, the hardware representations are not that important. Each location might be out of sync with the canon by a few files, but what matters is that there is a canon with a high probability of persistence.
If the canon decayed, you could recover your identity via these imperfect copies, with little personal disruption. This decay already happens in a way since we regularly shed our memory. Sometimes, after a night of heavy drinking, we lose whole folders of memories, and yet we carry on.
Which leads to the question, "What constitutes a significant loss of identity?" Death might become obsolete due to human copying, but if we lost 30% of our cloud backups, would we mourn the loss?
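The canon-recovery idea above can be made concrete with a minimal sketch, using an invented data model: each copy is a set of "memory files," and the canon is rebuilt from any file that a majority of copies still holds.

```python
from collections import Counter

# Hypothetical copies of one identity; each set is that copy's "memory files."
copies = [
    {"childhood", "college", "wedding", "last_tuesday"},
    {"childhood", "college", "wedding"},       # a few files behind the canon
    {"childhood", "college", "last_tuesday"},  # lost the wedding folder
]

def recover_canon(copies):
    """Rebuild the canonical set of memories by majority vote across copies."""
    counts = Counter(f for copy in copies for f in copy)
    quorum = len(copies) // 2 + 1
    return {f for f, n in counts.items() if n >= quorum}

print(sorted(recover_canon(copies)))
# -> ['childhood', 'college', 'last_tuesday', 'wedding']
```

With three imperfect copies, every memory held by at least two of them survives, which is why losing a few folders here and there causes little personal disruption.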
Human copying would make for interesting contract law, especially with vanishing pacts between originals and their copies
In a world where human copying is normal, vanishing pacts could also become normal. In a vanishing pact, you agree that if you wake up as the discarded version (d-version), you will vanish after a set period. Theoretically, these pacts could be made legally binding, with the government guaranteeing the destruction of d-versions. But most likely there will be some flexibility in the pact. At the very least, there will always be the possibility of a renegade d-version defying the government, breaking free, and choosing to live.
In the case of voluntary or pseudo-voluntary vanishing pacts, your willingness to make a copy of yourself is dependent largely on how well you would cooperate with yourself.
For example, your vanishing pact might be set to 24 hours, with your d-version living on for a full day while the c-version (canonical version) carries on as you. During those 24 hours, the d-version might meet someone interesting, go on a spontaneous date, and decide they want to see it through, keeping their identity and nullifying the vanishing pact.
The c-version may object to this. Perhaps they don't trust you (the d-version) now because you violated the vanishing pact, and they're worried you might try to steal their identity, taking their bank accounts and relationships. The c-version would then have to hunt you down. Perhaps they might even have the force of government behind them.
This pattern might become so widespread that all d-versions could receive mandatory marks to distinguish them from the c-version. Maybe they are given a slightly bent nose or a special QR code representing the timestamp and site of their copying.
If clones played the Prisoner's Dilemma, wouldn't the dominant strategy be silence, since each player knows their copy would do the same?
Human cloning would make for interesting game theory experiments. In the case of the classic Prisoner's Dilemma, it's rational for every prisoner to rat the other one out. But in the case of human cloning, the game has one additional assumption: both parties do the same thing. The two-dimensional decision matrix then becomes one-dimensional, with only two choices: we both snitch or we both keep quiet. You can assume that whatever choice you make, the other will make too, so it's always better to choose silence.
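The collapse of the decision matrix can be sketched in a few lines; the prison terms below are hypothetical, but any standard Prisoner's Dilemma payoffs give the same answer.

```python
# Hypothetical payoffs (years in prison; lower is better) for the classic
# Prisoner's Dilemma, indexed by (my_choice, their_choice).
PAYOFF = {
    ("snitch", "snitch"): 5,
    ("snitch", "silent"): 0,
    ("silent", "snitch"): 10,
    ("silent", "silent"): 1,
}

def best_choice_for_clones():
    """A perfect copy reasons identically, so the off-diagonal outcomes are
    unreachable: compare only the two diagonal cells of the matrix."""
    diagonal = {c: PAYOFF[(c, c)] for c in ("snitch", "silent")}
    return min(diagonal, key=diagonal.get)

print(best_choice_for_clones())  # -> silent
```

With the off-diagonal cells removed, the temptation to defect disappears, and mutual silence is trivially the better of the two remaining outcomes.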
If individualism becomes passé, then so will the fear of death, which will make those unfrozen from cryo feel old-fashioned
One thing to consider with cryonics is the possibility of waking up in a world that has none of the assumptions that your current one does.
For example, assume you are born in the 1500s, and that you believe strongly in heaven and hell. You grow up with an ascetic bent and feel horrible about some sins (real or imagined) you committed in your childhood. The possibility that you will spend an eternity in hell plagues you.
One day, a magician comes to town and says that he can freeze your body and hide it till some future date when science will make you live forever, thus avoiding hell. You decide to get life insurance, and after ten years, a horse kicks you in the chest, you die, and are then frozen by the magician.
You wake up in 2030 in a city that is 50% agnostic, 25% atheist, and 25% religious. You ask people about God and are shocked that most don't seem very interested in the topic. People ridicule you, which leads to some "soul-searching." You go to the library and read about the existence of other religions, something you didn't even know about, given your old provincial lifestyle. You stumble upon Darwin and Nietzsche, who fill you with doubt. You have your own personal Renaissance and question everything you thought you knew about religion.
What if there is no God? What if there's no heaven and hell? Your fear of hell was so crucial to your identity. It's why you could never form real relationships, and it's a big part of why you gave all your money to cryonics. The thought depresses you, and you regret having ever met that magician.
Likewise, if you purchase an $80K neuro freeze from Alcor in 2016, which preserves just your head and brain stem, you are going to end up waiting for a society that has developed some very exotic technologies. Mind uploading, cyborgs, and immortality will be commonplace by the time you are unfrozen. Even if you're an atheist today, you have to wonder how our attitudes about death will change in such a far-out future. It's difficult to imagine what kind of unravelings our philosophies would undergo, because our philosophies are like gases that expand to fill the containing intellect. Once our intellect expands, what if we don't even care about egos? What if individualism itself dies? If our future existence is a fluid matrix of software, holodeck-style lifestyles, and human copying, could our fear of death become as antiquated as our ancestors' fear of hell?
If Singularity stories often include infinite sexual fulfillment, don't Fleshlights and 3D porn mean it's already sort of here?
Woven into The Age of Spiritual Machines are some of the benefits of the Singularity, one of which is the fulfillment of infinite sexual fantasies. Upon reading those bits, the imagination goes to a holodeck or some other completely immersive virtual reality with tactile feedback.
But if we use a symptom-based measurement of the Singularity, might we already be there? High-resolution porn is already at our fingertips. If you use a Fleshlight in conjunction with a large-screen monitor and HD-quality porn, typing the specific fantasies you want into free video search engines, then it's pretty close to a holodeck. If you add in a 3D TV, you're even closer. If you acquire a remotely operated Fleshlight and connect a webcam that triggers certain contractions based on what happens on screen, at what point is this checkbox of the Singularity finally complete?
If the lower-class of today can eat better than Charlemagne, then the lower-class of tomorrow will somehow live better than Bill Gates
Trickle-down economics is working in the sense that each rung on the ladder of prosperity is gaining better and better simulations of what the upper rungs used to have exclusively. There's no doubt that a working-class American today can eat better every day than Charlemagne occasionally did. And the once-unique perks of Google, with signature chefs, back rubs, and on-site dentistry, are spreading to many unknown start-ups of far less consequence than Google.
Perhaps the Singularity won't be a punctuated heavenly moment, but a rising golden tower of continued social stratification, with each stratum fulfilling the once-wildest dreams of the others.
If we can regularly solve the Trolley Problem, then so can robots
We have a lot of issues with the Trolley Problem. On the one hand, we fret about having to figure out whether ten lives are worth one. On the other hand, we frequently solve the problem with a not-so-humble resignation, saying, "It's not our place to decide whether the lives of 100 adults are worth the life of one child." And yet, we solve Trolley Problems all the time. When an oncoming car swerves into your lane, and you have to decide whether to risk hitting a bicyclist to get out of the way, you are rapidly weighing the costs and benefits of risking one life to save another.
Some professions are riddled with such problems, such as being a police officer or soldier. While some of the people in those positions are vexed by making those decisions, many do so with ease. Likewise, if we're not worried about entrusting under-educated or under-trained people with making those decisions, we shouldn't fret about robots making them. If anything, we'll perfect the ease with which we already make those decisions, now with better inputs, consistency, and the benefit of a level head.
Immortality won't eliminate the fear of death; Death will just take on a new meaning
Death is a word, and the thought of that word is an experience for the living. Therefore, even if immortality removed the fear of death, the concept of death wouldn't die. Life itself would still be a death of sorts. Even if everybody's biological functions continued indefinitely, life could still stop. People might say, "I am dead inside," during moments devoid of vitality. And prolonged periods of suffering or anguish would be dreaded as much as one used to dread the march to the grave.
Innovation comes from discontentment, but is it possible to innovate so fast that we remain forever satisfied?
We are designed to address our discontentment with ingenuity. We're so well-designed to do so, that evolution created a discontentment treadmill whereby we are never satisfied, no matter how much we accomplish. But are there theoretical limits to that discontentment? What if our ingenuity overcomes and abolishes all theoretical limits to discontentment? Perhaps, we are already at that point, and Lexapro and other antidepressants exist because the discontentment treadmill has nowhere rational to go.
Innovation is like a virus. Principles like "always on" or "in the cloud" only have to be proven once before disrupting everything
One strategy for coming up with new high-tech products or services is to concoct superlative hypothetical situations out of existing technology. For example, a budding entrepreneur could look around at his office, point to something, and add the prefix "always on." "What would be different if we had an always-on camera? What would be different if my microphone was always on? What if the screen on my phone was always on? What if unlimited data was a genuine promise, and one could have always-on file transfers on their phones?"
Other exaggerated modifiers could be, "on your wrist," "the size of a pinhead," or "in the cloud." Given the inexorable trend of technological growth, this seemingly amateur parlor trick generates business ideas that reliably anticipate future trends. What one component gets, every component eventually gets. If something is "on your wrist" or "in the cloud" today, why couldn't everything else be that way tomorrow?
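As a toy illustration (the modifier and object lists here are invented), the parlor trick amounts to taking the cross product of exaggerated modifiers and everyday objects:

```python
from itertools import product

# Exaggerated modifiers crossed with ordinary office objects; each
# combination reads as a prompt for a hypothetical product idea.
MODIFIERS = ["always on", "on your wrist", "the size of a pinhead", "in the cloud"]
OBJECTS = ["camera", "microphone", "phone screen", "file transfer"]

def idea_prompts(modifiers, objects):
    """Generate one 'what if' question per modifier/object pair."""
    return [f"What if your {obj} were {mod}?"
            for mod, obj in product(modifiers, objects)]

for prompt in idea_prompts(MODIFIERS, OBJECTS)[:3]:
    print(prompt)
```

Four modifiers crossed with four objects already yield sixteen prompts, and extending either list grows the space multiplicatively, which is why the trick reliably generates ideas.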
Instead of preparing for the zombie apocalypse by stocking up on bicycles or canned foods, we should look to modern, tenacious animals for inspiration
For example, rats are thriving in the Age of Man, which means dumpster-divers will have digestive systems most prepared for the human wasteland. Rabbits are an example of the enduring importance of breeding. As a result, sex-crazed subcultures will likely outlast the more timid breeders. Farm animals such as sheep and cattle are also thriving today. Since it is probable that a few warlords will consolidate what farms remain in the End Times, it might also help to be an obedient workhorse.
In the future, the killing of a human-looking creature that can pass the Turing Test will be considered murder
In the movie Source Code, Jake Gyllenhaal plays a character who travels to alternative universes where a train is about to explode from a terrorist attack. While his directives are to ignore everything else and find the terrorists, he feels compelled to save the passengers in these alternative universes. The commanders who give him orders believe that these alternative universes are simply simulations with information necessary for the safety of the main world. But Jake is confronted with such strikingly high-resolution experiences in these other worlds that he can't help but feel compassion.
Consider Jake's viewpoint. Since there is, by definition, no observable difference between normal human beings and zombies pretending to be normal human beings, he has no way to access the inner experiences of anybody else. He can only verify his own consciousness, and so his ethics shouldn't depend on zombie-verification. Yes, we all want to know whether there is someone truly there receiving pain behind that visage, but in the absence of that certainty, you should err on the side of saving the zombies.
Is it rare or common in the universe to have a primordial soup spawn individual organisms competing for natural selection, instead of a single, giant organism, evolving as a whole?
Although sci-fi writers have done well to stretch the imagination as to what alien life might look like, they often project or extrapolate from creatures we know on Earth, such as reptile men or urchin-headed beasts. The more creative ones imagine floating ethereal tablets communicating telepathically, but even that is a projection of Earth-like individual organisms.
Is it possible to imagine life without individuality? Could alien life on another planet, from start to finish, be a single organism, possibly with internal independent parts that undergo natural selection, but ultimately combining into one piece? One planet, one life. Perhaps the alien life is fused with the planet itself, such that the entire planet is one organism. In which case, we may be peering into the universe looking for life on other planets, while the planets themselves peer back at us like floating eyes off in the distance, and together a series of organ-like planets in a solar system might form a single organism. Who knows, and we can't know, because we are forever biased by life as we know it.
It's possible that the announcements of a New Economy in the 1990s were early. If so, then everything today is extremely undervalued
Analysts justified the late-1990s dot-com bubble with predictions of the arrival of a New Economy, where the old rules of economics wouldn't apply. While the subsequent crash squashed those predictions at the time, it also soured people on the possibility that the prediction could ever come true. Twenty years later, a visit to Silicon Valley is like a visit to the circus, with astronomical housing prices surrounding lavish corporate campuses brimming with absurd perks, provided by companies whose only claim is making addictive time-wasters on social networks. But nobody is justifying this situation with announcements of a New anything, for fear of recapitulating that old, foolish bubble thinking. All of which makes this trend invisible, and perhaps, exploitable.
Law of Diminishing Enthusiasm
One of the caveats about accelerating change is that we may reach physical limits to how fast we can make computers. At some point, all the processors will work at the electron scale and can't be shrunk any further, or solving the overheating problem of CPUs may become intractable.
But there is another limit that could factor in: human demand. All technology is ultimately created to serve consumption. Without demand, there is no further development.
For example, at some point, we won't need anything beyond HD or Retina displays. The human eye won't appreciate any further refinements. We can already see the disinterest in CPU speed on computers. In the 1990s, even casual computer consumers cared about how many megahertz their machines had. Now, the people who know how many GHz or cores their laptop has are a small minority. Instead, innovation is being driven by the miniaturization of CPUs. But at some point, we will have the thinnest possible phone. Already, some people complain that the iPhone 5 is too thin, and therefore too easily slips out of their hands. After thinness, what's next?
That hasn't stopped CPU innovation, though, because there has been a massive expansion of cloud computing and web servers. Consumer demand is still driving CPU innovation, but by proxy, via demands from businesses like Google and Facebook that are servicing consumers.
At one point, it was video game consoles that were pushing the envelope of processing power. But after the PlayStation 3, there isn't much more that the gamer needs. Theoretically, the PlayStation 4 or PlayStation 5 will have as much graphical processing power as is used to render a 3D-animated Pixar film, but video gamers are drifting in the other direction, toward casual games on their iPhones, or are content with less graphically intense games on the Wii. So there is a step in the opposite direction: making slightly slower CPUs at a lower cost.
There's also a limited number of hours a human has. While a power user may own a laptop, a smartphone, and a tablet, they divide their time among all three. Can they add another device? Perhaps they will have backup smartphones, and tablets, and eReaders, but again, that will just reduce the amount of time they spend on each device. The introduction of another device diminishes each one's significance. We can only consume so much entertainment per day.
There is a pattern, though, where we sometimes think, "No more innovation will happen." For example, there is the famous (though false) quote from the Commissioner of the US patent office who said, "Everything that can be invented has been invented." Or there's another famous (though false) quote from Bill Gates: "640K of memory should be enough for anybody." And just when we were perhaps getting bored in 2009, James Cameron released Avatar into theaters, and 3D became the next big envelope pusher. We thought, "Alas, the PlayStation 4 would need to be at least twice as fast to handle all the 3D games!" But since then, consumers have become lukewarm on 3D. So already, we can see consumers turning away from new technology faster than we can innovate.
Law of Hierarchical Returns
Technological progress is more often hierarchical than incremental. The older the technology, the more foundational it is. For example, DNA laid the groundwork for multicellular organisms, which laid the groundwork for sexual reproduction, which paved the way for the Cambrian Explosion. There has been tremendous biological innovation since then, with only minor changes to the fundamental technology of DNA.
The invention of the Internet nests within the invention of computers, which nests within the harnessing of electricity. Facebook, Wikipedia, and Google nest within the invention of the World Wide Web, which nests within the invention of the Internet. It's more likely that future technologies will derive from previous technologies than constitute whole new classes of technology.
The accelerating march towards the Singularity may, therefore, manifest itself less like a rocket taking off, and more like matryoshka dolls, with smaller and smaller changes having a greater and greater impact.
Life-extension is matched with life-reduction: new cures matched with new risks; penicillin matched with skydiving
Life-extension necessitates a redefinition of life
Now that we're living beyond our ancestors' average life expectancy, it might make more sense to take a multiple-lives perspective, with each "life" spanning 15 to 20 years.
The stages are longer. If one spends ages 5 through 22 in school, that is like a lifetime as a student. Waking up for attendance, getting grades, and socializing with colleagues drives every student's daily existence, and then after 22, that rhythm stops. For most students, that final commencement ceremony is like a funeral.
If they then spend ages 22 through 40 being single and dating, then that is also a lifetime within those rhythms. Then raising children for 18-22 years is another lifetime. Having an empty nest, another, and so on. Life-extension means more and more lives stacked back-to-back like a bookshelf.
If such a mindset were prevalent, we might revive coming-of-age rituals, but rename them "coming-of-life" ones. In these parties, people would shed their past, maybe even their names. Extending further, a lifetime prison sentence could run between 15 and 25 years, because that's equal to "life" imprisonment; the difference between being imprisoned for 20 years versus 40 years is not much. Either way, that man or woman will not see their children undergo a lifetime's worth of aging. And when the criminal returns to the free world, they will be so different that they won't re-enter society with any of the same friends or assumptions. In other words, they won't be the same person.
One way to predict the future is to take every begrudging, "kids these days" statement, and assume that that will become the new normal
When you are young and rebelling against your elders, it's hard to imagine that you will grow old and turn into those same oppressive elders. It's especially difficult if you believe everything you did when you were young was rationally justified. How can the mere act of aging affect your reason for doing things? So when you do get older, the grouchiness has to be a complete surprise or relate to unanticipated changes to culture and society.
For example, there is a common saying of the form: "In my day, we had to walk 3 miles in the snow to get to school." The implication is that children are getting weaker. Today's corollary could be about mental toughness: "Kids these days have everything taken care of: Their video games play themselves, and if they get poor grades, they can always pop an Adderall."
Another common elderly complaint is about the decline in neighborly love: "Nobody knows their neighbors anymore." Today's corollary would be: "Kids these days feel that spending time on Facebook counts as 'socializing.'"
Probably the most common complaint has to do with the coarsening of culture, which has endured for so long, you would think that culture would be all but obliterated by now. And yet, it's easy to find new ways of being offended:
- Kids these days aspire to be Internet famous, even if it's just a 30-second clip of them hurting themselves.
- Kids these days don't know the difference between camp and crap.
- Kids these days can't focus. They just watch random clips on YouTube, the equivalent of America's Funniest Home Videos, ad nauseam.
- Kids these days go to online forums and call each other racist and homophobic slurs as a way to test their lack of sensitivity.
- Kids these days don't grow out of playing pranks or trolling.
- Kids these days think being offended is for losers.
Peaceful societies can afford more individualism, which paradoxically means more lone gunmen, which then means more surveillance
If war requires constituents to give their lives in concert for the greater good, then peace should lead to individualism. As peace expands, so shrinks our susceptibility to mass control. But the new peace is not absolute, and new outlets for violence against communities have emerged in the form of lone shooters and home-grown terrorists.
Pre-peace, we controlled violent people via broadcast, with religious or community leaders—who for most of history were the same people—pushing an agenda to the receptive masses. Post-peace, we have to control violent people via the network. The potential school shooter has to be watched by a web of faculty and parents, looking for telltale signs. A web of psychiatrists and officers has to watch the soldier with PTSD and catch them before they take weapons off base. And everybody else has to have their emails watched, and their purchases monitored, just in case one of them decides to give their life for the good of their own, twisted form of retribution.
Perhaps a social Singularity will occur when everybody becomes a nerd, fascinated only by their own personal, esoteric interests
If we define the singularity as a singular moment in history whereby all the old rules and patterns no longer apply, perhaps we're already passing through singularities in individual fields of academia. Many advances in academic fields are no longer relevant except to members of their own community. The papers are so esoteric now that peer review has been reduced to a light skimming for procedural standards, since the readers are no longer knowledgeable in those subjects.
Perhaps in the future, all interesting conversation and work will be purely reflexive, i.e. serving no utility outside of itself. People will build machines just so they can have something to write software for, which they can have conversations about, even though they will have mostly forgotten the original purpose of those machines. In other words, everybody will be nerds, whereby we define "nerd" as someone who uses a telephone to talk about telephones.
Pioneers usually come from outcasts, but it's only recently with computers when outcasts can turn obscure wanderings reliably into careers
There has been a resurgence in the pioneer's lottery, which is the age-old pursuit of being first. The first person to bump into the oil fields of Texas, or the first person to discover gold veins in California, or the first person to land in the New World were always first to reap the benefits of those new platforms. Now such opportunities for firsts are becoming more accessible and frequent. The first person to create the "million dollar webpage," where you can rent pixels for a dollar, became an instant millionaire. The first person who held up a Bitcoin QR code on a placard at a baseball game received $50,000 instantly. Those who had apps on the iOS App Store on its first day were set financially for the rest of the year. The people who were maybe a year late to the App Store but created the first apps in a niche stood to profit for many more years.
Earlier forms of this lottery depended on a significant amount of luck. Sure, you could increase your chance of discovering the Midwest by having a wanderlust, but for each pioneer that made an important discovery, there were thousands of wanderers who discovered nothing and were considered lost souls and outcasts.
But yesterday's lone rangers have become today's nerds. Initially, they seemed "lost," spending their teenage years affixed to screens playing games, only to become the first to create the same million-dollar games for Facebook. The frequency with which this is happening has reached the point where the armchair futurist can now rely on the exponential pace of technological change to continuously supply him with open vistas, ready to tap.
Robots will replace mundane tasks only after a century of nerds ruling the world. Someone has to create and maintain said robots
When a machine replaces someone's job, their individual increase in unemployment is more than made up for by the reduced cost of goods spread across the rest of the economy—so the theory goes. For optimistic futurists, machines will solve every problem, leaving us free to live a life of leisure.
However, these futurists and economists gloss over the transition to utopia. Engineers have to create these automatons, and while we could consider their jobs temporary, how temporary is temporary? What if it takes a thousand years of computer programmers working to automate everybody else's jobs, so that in the meantime the entire workforce becomes programmers?
Another scenario is that the future then becomes exclusively built for the machine-makers. Survival a hundred or so years from now could depend on having a minimum amount of technical literacy. This scenario could even come about in a roundabout way, whereby welfare becomes outsourced to companies like Google or Facebook, where in order to access your benefits, you have to perform certain technical maneuvers, like cashing in virtual money from playing games. Perhaps welfare is guaranteed to anybody who knows enough about technology to make a secure enough password to protect their Bitcoin wallets.
While the collective dream is one of an all-encompassing leisure life, there are so many roadblocks to getting there, that the roadblocks themselves could ultimately end up being the reality.
Statements like "Where is my jetpack!" lose their strength whenever partial innovations, like self-driving cars, arrive
We might be losing our wonderment about the future. The height of science-fiction excitement might have been in the 1950s when we optimistically imagined flying cars and interplanetary travel. While the delay in fulfilling those promises is the main contributor to the disillusionment, another contributor is the fact that tantalizing pieces of the future are arriving gradually, neutralizing our fantasies.
Holodecks, for example, which in Star Trek were rooms that could conjure up any desired world, are no longer a common fantasy. Is this because of the prevalence of virtual worlds in massively multiplayer online games? Is this because having information at our fingertips is essentially access to an infinite supply of visual vistas? Any movie, any television show, and any wholly immersive form of entertainment that you could wish for are at your command. And if you are tired of traveling through virtual worlds, you can simply travel in the real world. The cost of travel has lagged so much behind inflation, and remote work has become so prevalent, that for an expanding circle of people, one can be anywhere anytime.
We were once the social animal, but we're increasingly becoming the textual animal. All our thoughts and beliefs are bound up in text, as ideas are often first introduced to us via something we read on the web or in print. And when we verify said beliefs, such as when we're challenged, we Google for answers, further codifying our beliefs.
Our relationships are increasingly bound by text as well, in many cases, literally via text messaging, and more often via social media. And even though social media is largely constituted by photos, those photos would seem empty without accompanying captions or text replies. Even a "Like" is a textual response, albeit via shortcut. And even if the Internet doesn't account for half of the volume of one's social life, it may dominate more than half of our basic social functions. Keeping tabs on someone and feeling their presence are possibly half of what constitutes a friendship. At some point, the text starts to become the point, and an alien trying to understand humanity might see us first as a massive, colliding torrent of words, rather than flesh and blood.
Thanks to microtransactions, entrepreneurs, instead of wearing many hats, could start companies by giving out microhats with microshares
Entrepreneurs wear a ton of hats initially because it's hard to recruit people at such a small scale, especially without revenue. By offering small shares as bounties for little bits of work, perhaps entrepreneurs wouldn't be needed at all, and everybody could just wear the hats they want to wear. Such is one possible promise of microtransactions and cryptocurrencies: the ability to administer small pieces of value efficiently and fairly.
The Amish are actually very modern since it's only in modern times when the impracticality of being radically practical can be a point of pride
The Amish are usually considered a throwback, but it's more significant than that. Their lifestyle is a luxury. They are willfully recapitulating olden times, not just because older is more authentic, but because those ways aren't necessary anymore. If an Amish person were transported back in time to when subsistence farming was common and nobody had electricity, there would be nothing interesting or special about their religious posture. Their existence today is only interesting inasmuch as it is completely unnecessary to live with such chaste restrictions.
However, perhaps the Amish are actually on the cutting edge of retro-futurism. We are losing so many of our old traditions to productivity gains from technology that, at some point, people will have jobs simply to keep up the simulation of wage-based living. Perhaps a completely ludic economy, one where nobody has to work, will descend into anarchy, and so some restoration of the past will be necessary to keep order.
The cost of socks should approach zero
By now, the cost of socks should be nearly zero, at least according to traditional economics. And yet, at Wal-Mart, the price of socks stays flat or keeps rising. Some of this has to do with the fact that much of Wal-Mart's costs are real estate. And yet Amazon, a company with no storefronts, and thus no real estate problem, has the same issue with pricey socks. The larger issue is that traditional economics doesn't scale. If Wal-Mart can sell a more luxe sock to more people, the cheaper sock suddenly disappears. And this isn't necessarily a knock on the rationality of buyers. Big retailers, for marketing or efficiency, drop or hide more affordable options. Theoretically, a bargain hunter could scour the rest of the Internet for the cheapest sock, but they would discover that only small retailers sell cheap socks, and only in bulk or with higher shipping fees than Amazon's. Productivity is up, trade is up, but we can't realize those gains unless we realize them at scale.
The decline in violence will be followed by the comicality of violence
One consequence of the decline in violence (as described by Steven Pinker), is the eventual comicality of violence. Perhaps this started with Fight Club, in which men force themselves to fight each other. By the year 1999, when the movie was released, it was no longer socially acceptable to get into a bar brawl. At one point, such brawls were a rite of passage, but now they are a sure way to get an assault charge and a lengthy medical bill.
An obvious source of evidence might be "professional" wrestling, but that's not quite right, because wrestling is more of a continuation of theater and circus. Rather, the rise of Ultimate Fighting indicates an increasing comicality of violence, as it has taken over the role boxing once filled. In Ultimate Fighting, the violence is real to an absurd degree, with bones and flesh breaking regularly. Boxing, on the other hand, represented a civil break from the violence of real life. Consider, for example, the image of Bugsy Siegel taking a break from his Tommy gun to visit the ring.
Other signs of the redirection of violence include violent video games, our increasing interest in gore and zombies, and the constant parade of superhero action movies. Eventually, voluminous, non-superhero violence in movies, as was once visible in action films like Die Hard, will be reduced or abbreviated to the point of being mere footnotes and references to a bygone era.
The fact that we don't see any buffering icons puts the fiction in science fiction
We are so acclimated to quirks in user interfaces that we end up excluding them from science fiction. For example, when Minority Report was released, we didn't imagine that Tom Cruise's character would be making multiple flicks of the wrist because of mistaken gesture recognition. We didn't imagine that when the interstellar rescue video beams through space in Planet of the Apes, it would have a buffering icon, or that it would require a team of video anthropologists applying codec after codec. We didn't imagine that any time Captain Kirk fired a nuclear weapon, he would have to pull out a credit card-sized electronic book of numbers to be confirmed by his Secretary of Defense. Or that to input the destination of the Starship Enterprise, you would have to type .com or .net at the end of every address. The future will be as awkward as the future we live in today.
The financial sector is the fat and repository of excess economic activity
The financial sector isn't really a cancer. Rather, it's the fat and repository of excess economic activity. It's part of a larger trend of work becoming further and further removed from material things. While there will always be an inflexible demand for essential services, like waiters and policemen, all the growth is happening in inessential or abstract services, like entertainment and high technology. The mid-1950s were when the delivery of basic human needs (like food and safety) plateaued. Now innovation is in fun stuff like iPods and faster bandwidth. But even that will plateau, because people are growing delirious with the fast-changing pace of technology.
The majority of American workers filter their income through simulations of human value. Farmers in America keep growing corn that the government subsidizes and throws away, just so we can preserve a diorama of bucolic Americana. Financial workers are playing an even more purely abstract game.
This is the essence of our post-modern existence. Our concept of real value is now thoroughly warped and elevated. The reason people can't come to grips with the financial sector is that they don't have a concept of the absurd.
Economic swings will be larger and more frequent, and the financial industry will keep getting bigger. In 50 years, 80% of the wealth in the US will be in the financial sector. This is how the Singularity looks: people playing min-max games with monopoly money.
The future is boring because it arrives in waves of hype and prototypes until people are already inured to it, yet ready for it
The basic thought experiment according to the banality of futurism is, "Imagine that the future is boring." This can be broken down further into sub-thought-experiments:
Imagine that not everybody has access to the technology. We always assume that some great new technology is going to liberate everybody. Imagine that some medical advance does provide a really cool sci-fi outcome (cyborgs, immortality), but only for the very, very rich.
Imagine that some people are uninterested in the technology. There is still a vast unconnected proportion of the population that will remain offline out of disinterest. Some people have no need to get online and would rather watch TV and stick with their current menial job. Likewise, some people may have no interest in alien life forms, in living forever, in abundant wealth for everybody, in the cure for all diseases, or even in understanding the grand unifying theory of everything. Take the cure for all diseases: there are simply some people who would rather not take any medicine.
Imagine that there are differing levels of quality for the technology. There may be immortality, for example, but there will be a premium version, where you're preserved just as you were when you were 26, and a lite version, where you're just dithering in your final years indefinitely.
Imagine that the unveiling of the technological advance is spread over multiple generations. For example, the cure for cancer, which is already being smeared across the years, starts off with coping with cancer, then curing it with a low probability of success, then curing it with a high probability of success, and then finally it's fully gone. By the time the fourth generation comes around, the "cure for cancer" will not be heralded as some major turning point. Just look at the banality of curing polio.
The future of money is eCash everywhere, spreading money under a thousand mattresses, telling every cashier, "Here, try this card."
The unbanked will have eCash everywhere. They'll have cash on Facebook credits, on SnapCash, on PayPal, on unused MoneyPaks, and/or on gift cards. Paying for goods will be a matter of cycling through various balances. Security for them is in spreading themselves thin, having cash all over the place. If any one account gets compromised, it's as big of a deal as losing coins in the couch.
Bitcoin may provide the virtual glue between all these systems, though, so paying for a hot dog at the convenience store may use some obscure ePayNow-like service, with Bitcoin being used in the background.
The global decline in violence must be balanced by a global increase in punishment for lesser crimes
The global decline in violence must be balanced by a global increase in punishment for lesser crimes. We have a quota of outrage to dispense, and so venial sins of today will become the mortal sins of tomorrow, even if everybody's behavior is improving on an absolute scale.
The Industrial Revolution, farming, the Ice Ages: We've been preparing for the Singularity for 50,000 years
We have been preparing for our world turning upside-down ever since surviving our first Ice Age. Massive weather changes did not extinguish our ancestors but instead forced them to improvise and move to different biomes. We changed our clothes, modified our hunting techniques, and reorganized our societies. Such is our brain’s capacity for adaptation.
In 1970, the futurist Alvin Toffler predicted that the massive social upheaval of the latter half of the twentieth century would cause future shock, but so far, that hasn't happened. Whether from the ascendancy of feminism or the transition from farm labor to desk labor, our psych wards were supposed to be overflowing with trauma victims who couldn't cope with such changes. Fortunately, our DNA is imprinted with the protective coating of previous, survived traumas. Every time our ancestors' tribes were invaded and wiped out, the survivors had to code-switch or die in the transition. Those who couldn't did not pass on their genes.
The rhythm of history appears to be the layered cycling of short, medium, and long-term shocks, from tribal wipeouts to climate change, with long periods of stability in between. But even during stability, we still ideate about the End Times enough to prepare ourselves for annihilation. If you're a religious person, these thoughts might involve the Rapture or Armageddon. If you're a secular techie, they might involve the Singularity. Is it possible, then, that we're already ready for such cataclysmic change? Will we wind up bypassing future shock and end up future bored?
The initial trials of cryonics will be like Lasik or chemo, i.e. a hassle you have to repeat every couple decades
One way to think about cryonics is that it's just better healthcare, or just better health insurance. Consider this scenario: First, you sign up for a full-body cryo, taking out a life insurance policy for $200,000. Twenty years later, you get cancer and are frozen. Fifty years later, cancer is cured; they unfreeze you and cure yours. But now you're an old man in the year 2082. While core diseases like cancer and heart disease have been cured, people are still dying of old age.
And so you decide to sign-up for cryo again and take out another life insurance policy, this time for 75,000 yuan since costs have gone up and you're now in China. But you don't have enough money to afford that yet and the job market is completely changed. So you go back to school, take out a student loan, then try to get a job with a company that offers cryo as a benefit. 30 years later, you die of old age and go back into cryo.
You then wake up 100 years later, old age is cured, but people can still die of car accidents. However, there are portable cryo facilities in ambulances so that, if you have the right insurance, you can be preserved right then on the spot. And so you go back to school and take out another life insurance policy.
According to this pattern, cryo becomes kind of like Lasik, which you need to redo every 10 to 20 years; in other words, it's a hassle. Each time you are frozen and reawakened, you have to go through the rigmarole of reconnecting with new descendants who may or may not be interested in you. You have to rebuild your financial base and maybe learn a new language to integrate with the dominant hegemony. Maybe by the second or third round of cryo, you think, "You know what, if I had known it was going to be this much of a hassle to wait for immortality, I wouldn't have done it!"
The initial versions of mind-uploading will be bottles of consciousness that can wiggle wheelchairs with thought
The banality principle of futurism could be applied to mind uploading in interesting ways. Imagine we achieve mind uploading, but it isn't the ideal scenario, and not everybody does it. For example, your mind is uploaded to a computer (with redundant backups, of course), and that computer is attached to a motorized wheelchair. At this stage of mind uploading, engineers have gotten very good at input but haven't figured out output. They can attach cameras to your wheelchair and feed your computerized mind an HD stream of the real world, but your ability to act on that world is severely limited. Output is driven by general neural field activity, so while you can move your wheelchair, you can only do so clumsily. And while you can respond with synthesized speech, your repertoire is only 20 or so sentences. Perhaps a monitor attached to your wheelchair shows a 3D rendition of your face, but you can give it only three expressions: smiling, frowning, or neutral.
At this point, there are about 10,000 people in the world living this way, and yes, they live forever, but it's a shoddy existence. These uploaded minds aren't that interesting to deal with, and so funding for further development stops. Before, the drive to upload minds was to save those whose vital organs were failing; now that drive is gone. If you aren't yet uploaded, you can choose to let yourself die a natural death, or you can live in this limbo of mind-upload wheelchair land for who knows how many decades until the interfaces improve.
The Jetsons haven't arrived because the future is mass consumerism, all familiar and cheap
We want a future that is filled with blinking lights, neon spectacles, and shiny flying cars. We want it to be like science fiction, i.e., completely unfamiliar and exciting. And yet, the future will look very much like today. The experience of being in a self-driving car will be more like riding a compact bus for one. It won't be like Minority Report, with first-class four-seaters gliding on a web of interlocking magnetic levitation tracks. Since the future arrives through the channels of mass consumerism, everything has to be familiar and cheap in order to sell.
The Law of Accelerating Returns is matched by the Law of Accelerating Boredom
People think that what they want from the future is better living through technology, but what they actually want is to be surprised all the time. They don't want flying cars because they're a more efficient and convenient way of getting from A to B. They want them because they're novel. The common folk statistic is that technological change in the last 10 years has outstripped that of the previous 90, but the flip side is that changes that would have blown people away 90 years ago elicit a mild shrug now.
The link is the point, not the content at the end of that link, which describes not only Google but also the mind
Hyperlinks and search results are eclipsing content in importance. We often share, comment on, or save links before fully reading the link's contents. Oftentimes we use search engines not so much to learn anything in particular, but to know which keywords reveal what kind of results.
Metaphors for mental phenomena are steeped in the technology of the era. Formerly, brains were compared to hard disks, with streams of 0s and 1s representing blocks of content. Perhaps a revised metaphor is that the brain is like a web browser's bookmarks. We are indexing content and making little notes in anticipation of some future date when we are asked a question and need to recall that the question is indeed answerable somewhere.
The Matrix goes back as far as Plato's Cave: We still don't know what batteries we're powering, nor what devil we're serving
The idea that we are all living in a simulation, as presented in The Matrix, is meant to be a metaphor for leading inauthentic lives as unwitting cogs in someone else's machine. But such simulations may not be new. Following Dawkins's allusions in The Selfish Gene, human evolution can be seen as the product of an evolutionary war between replicators; or, more directly, we are part of a simulation that serves our genes. We don't know what batteries we're powering or what devil we're actually serving. So the question raised by The Matrix is not whether we are in a simulation, but which of the myriad larger games operate contrary to the simulation that is the human experience. The Matrix is only sad for Neo, its protagonist, because he is a wage slave: the machines created stultifying jobs to keep humans busy while harvesting them as batteries. The fact that some agent limits the free range of human expression for its own selfish purpose is what drives the revolutionary impulse.
If we can identify such larger games, and discover that they lead to our unhappiness, we should subvert them. Perhaps the revolution is already here, as technologies like the Pill subvert the plans our genes have for us.
The more GDP grows, the lazier consumers get, which means more jobs for those in marketing
If, in the future, everything is free, what will people do for money? Maybe everybody will work for marketing companies. One path to this scenario is that excess GDP is making people lazier.
For example, the GPL-published book Dive Into Python is available as a free download, and yet people still buy the printed book. These paying customers aren't stupid; they are paying for something else. They're paying the publishers to inform them of the book's existence. They're paying to have the book inserted into their shopping stream (whether at a brick-and-mortar store or online). And of course, they're still paying for the ink, pages, and binding, which many consumers still prefer to eBooks. Since the actual text of the book is free, nowhere in the shopping chain are they really paying for the content. They're paying for marketing.
If you went for the free download, you would first have to find out the book exists, maybe by constantly reading blogs. Then you would have to engage in an unfamiliar procurement experience, i.e., downloading a PDF from a third-party website, rather than sticking to a familiar e-commerce site like Amazon. Then you'd have to grapple with the inconvenience of navigating an eBook side by side with your code, without the natural bookmarks, highlighting, and dog-earing that paper books offer.
Marketing is fundamentally an informative practice. Consumers are rewarding companies for putting products in front of them, attaching a "Buy it Now" button, and shipping them out easily. The consumer is really paying for the curation, procurement, packaging, and distribution of the product. Hardly any of the money compensates for the creation of the actual content.
Despite how much flak Monster Cable gets for selling "premium" HD cables, shoppers clearly don't mind that much, especially when buying a $2,000 TV. Monster is being rewarded for putting the cables in front of them, right when they're making a purchase.
The most successful businesses today anticipate Moore's Law; the most successful ones of tomorrow, anticipate everybody doing the same
Computer manufacturing companies have long had dynamic business models that factored in Moore's Law. For example, Apple intentionally released an iPhone that the public couldn't afford because they knew the cost of storage and computer chips would continue to decline.
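As a sketch of how such a business model might factor in component declines, here is a minimal Python example. The two-year halving time and the dollar figures are illustrative assumptions, not numbers from the text.

```python
# Minimal sketch: projecting a component's cost under a Moore's-Law-style
# exponential decline. The two-year halving time and prices are assumed
# for illustration.
def projected_cost(initial_cost, years, halving_time=2.0):
    """Return the cost after `years`, assuming it halves every `halving_time` years."""
    return initial_cost * 0.5 ** (years / halving_time)

# A $64 component today would cost about $8 in six years under this assumption.
print(round(projected_cost(64.0, 6), 2))  # prints 8.0
```

A planner betting on this curve can price a product against where the component cost will be a year or two after launch, not where it is today.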
Moore's Law is perhaps an easy trend to anticipate, but increasingly it appears that disruptive businesses need to anticipate secondary or tertiary disruptions that come from exponential computing laws.
For example, the music industry doesn't have a Moore's Law per se, but it's conceivable that the "next BitTorrent" or the "next iTunes" is always just around the corner, ready to change the economics of distributing music. This is why, when Spotify entered the market, its margins seemed not merely lower than those of the then market leader, iTunes, but lower than those of some future, unknown market leader. Either that rival has yet to materialize, or Spotify has stayed well ahead of it, which is perhaps why Spotify is the current market leader.
The next hipsters will instead reject all preexisting fashions to the point that they become invisible, even amongst themselves
So much of fashion is about sticking to certain trends and styles to convey that I'm this "kind" of person: "I'm a rocker," "I'm a tech professional," "I'm into creativity and art." But within these kinds or species, there's incredible differentiation among organisms. Some species have lots of conformity of expression, such as investment bankers, and others have much more variability, such as New York fashionistas. But even with the variability of a New York fashionista, you can still look at them from across the street and think to yourself, "Oh, this is some 'New York fashionista'" even though their ensemble may be very individualized.
The hipster poses problems for this way of thinking. Hipster fashion defines itself by being syncretic, slapping together bits from other species, almost like a collage, and re-purposing them with some commentary such as irony or nostalgia. Despite this remixing, though, you can still spot a hipster. Even if they dress like a jazz musician from the '50s, there will still be dead giveaways: the curly mustache, the fake eyeglasses, or simply that the guy looks too young and savvy to actually be a professional jazz player.
Now that hipsters are falling out of fashion, the question is: what will take their place? While for most hipsters their dress was about conforming to their indie-pop youth scene, some were trying to push the limits of individuality (like our '50s jazz musician). The next logical step, then, is a new kind of style based on being sui generis, which means "of its own kind or genus." The members of this new movement will have to be so far from each other that they can't even be accused of being some syncretic, "out there" ironic hipster. They must not only avoid looking like a goth or a rocker or this or that; they must avoid even being identified as part of the new movement itself. Everybody in this new "group" will need a look that is simply undefined.
This will complicate one of the purposes of fashion, which is to identify your kind. The only way members of this "sui generis" class will be able to identify each other is if they can tell that the person is dressing really well, but simultaneously can't tell which box they belong to.
The next phase for technology is to innovate without causing progress traps, thus marking the end of cynicism
Technology is progress in the sense that it advances human aims. And yet, progress has a bad reputation because of progress traps. A progress trap occurs when a technological breakthrough that is useful on a small scale becomes catastrophic at a larger scale. For example, advances in fishing technology are a boon to the individual fishermen, but ruinous for the world when they all use it.
But what if the pace of technological change speeds up so much that it solves secondary or tertiary problems before they happen? Perhaps innovations in fishing net design do trigger progress traps, but then a new method for increasing the fecundity of fish arrives a year or two later, or a new system for monitoring and regulating fishing boats is invented.
At this point, the most noticeable breakthroughs in technology will be when patterns and ceilings formerly inherent in technology become broken. One-by-one, all cautionary tales will become moot and history will truly end.
The odds are very high that if we discover life on other planets, it'll be something boring, like a starfish
So much of the excitement about futurism is in answering age-old grand questions about life, such as, "Are we alone?" However, there are really only four types of alien life forms we could encounter, most of which aren't that exciting.
At the most basic level, there are things like pulsing bacteria, oxidizing froth, and plants. Discovering life that is immobile or semi-immobile is nearly equivalent to discovering a planet with interesting chemicals on it, which happens every so often. Encountering this would simply tell us that automata are easier to evolve than we thought. This discovery would be about as interesting as the discovery of anaerobic multicellular organisms on Earth, news that barely registered a blip.
The next level of interestingness would be the discovery of living things like camels, reindeer, or fish: independently moving, non-vegetable-style animals, i.e., the kinds of things that could become pets. That would be interesting to some extent, but after the initial excitement, they would spark about as much curiosity as strange marine life does. There are at least 750,000 undiscovered species in our oceans, a number likely to stay that high for a long time. At that point, we could say, "We're not alone," but try asking a solitary sailor far out on the ocean whether they feel alone. They get little solace from knowing there are strange creatures swimming beneath them.
The next level would be aliens that are sentient and advanced enough to have a culture. Perhaps they lack any capacity for space travel, which would make them less advanced than we are. At this level, an answer to "Are we alone?" could resemble the discovery of the New World by Europeans. During that era, the world must have seemed as large to humans as the universe seems to us today, and so discovering a whole new continent with previously unknown people and cultures must have been mind-blowing. And yet, it's difficult to find stories about just how earth-shattering this was to the scientific community or even to ordinary people.
Finally, the fourth level would be sentient aliens that have already mastered space travel. In that case, they would discover us first, or they already have. That might deliver the wondrous scenes of science-fiction movies like Contact. But this outcome is the least likely scenario for discovering aliens. It's ten times more likely that we'd discover a new world of savages, a hundred times more likely that we'd discover a new ocean of marine life, and a thousand times more likely that we'd simply discover exotic bacteria.
The odds that an advanced AI will be developed by something other than a Defense Department are only getting better
If an advanced AI, one smart enough to rule humans, had been developed in 1964, it would most likely have come from the defense department of a large superpower. This AI would have been weaponized or otherwise martial in nature, and therefore likely to be unfriendly to humans. But if an advanced AI were developed today it would most likely come from a high-tech consumer company like Google or Apple, and thus it would be initially developed to fulfill human desire.
Thus, the timing of the Singularity matters. Steven Pinker, in The Better Angels of Our Nature, suggests an exponential increase in human peacefulness over time, which implies that the longer we wait for advanced AI, the more likely it is to arise under the pretext of friendliness.
The premature prophets of peace, like Neville Chamberlain, have soured us on the possibility of timely prophets, like Steven Pinker
Is it better for a prophecy of peace to come true, or for a promise of doom to fail? The early-to-mid twentieth century saw a handful of proclamations about world peace. World War I was ironically called "The War to End All Wars," and the British Prime Minister Neville Chamberlain declared "peace for our time" following Hitler's promise of non-aggression. The disappointing bloodbaths that followed ruined all similar proclamations for generations. Steven Pinker, in The Better Angels of Our Nature, recently described how we have actually been on a path towards peace since the beginning of recorded history. The World Wars only seem uniquely destructive because world populations by then were much larger; people seem unaware that earlier centuries were full of wars that, relative to population, were just as deadly and far more frequent.
Pinker doesn't make any bold promises of peace, though, largely because of how ridiculous such statements seem in light of any ongoing war, but also because of how foolish past proclamations look in retrospect. Pinker is protecting his reputation as a futurist by just presenting the data and trend lines. But he is also keeping us from appreciating the kind of golden age that may already be here.
There are so many things like square-power laws that aliens probably also experience physics-based problems like rush hour
Rush hour is a universal phenomenon. If aliens exist, then they would experience traffic waves like humans do. One could interject and ask, "What if this alien's sun is always in the same spot, or there is no concept of diurnalism or nocturnalism? Wouldn't that prevent the formation of peak hours of traveling to and from work?"
That argument supposes that rush hours are only about alignment with the sun. Any thriving intelligent species will have travel waves, irrespective of the nature of its physical environment. Whether it's to get to the latest Coneheads bowling game or to synchronize their feeding, there will always be situations where more than half the population of a city wants to move at the same time. Each individual also requires about 100 times more square footage for transportation than for dwelling, and any successful, intelligent species would be plentiful and would therefore pack itself in. So if a city planner tried to handle the load during their equivalent of a Lakers game, they would need more than 99% of their city devoted to roadways or tubes, which no intelligent species would ever tolerate, and therefore they would accept traffic jams.
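The arithmetic behind that claim can be sketched in a few lines of Python. The dwelling area and travel share here are illustrative assumptions; only the 100x multiplier comes from the text, and the exact percentage depends on the numbers chosen.

```python
# Back-of-the-envelope version of the peak-travel argument. Each traveler
# in motion is assumed to need 100x the area they occupy at rest; the
# dwelling area and travel share are illustrative assumptions.
def road_share(dwelling_sqft, transport_multiplier, fraction_traveling):
    """Fraction of a city's area needed as roadway when `fraction_traveling`
    of residents are in motion at once."""
    transport_sqft = dwelling_sqft * transport_multiplier * fraction_traveling
    return transport_sqft / (transport_sqft + dwelling_sqft)

# With half the city in motion at once, roadways swallow roughly 98% of the map.
print(round(road_share(100, 100, 0.5), 2))  # prints 0.98
```

Even halving or doubling the assumptions leaves roadways dominating the map, which is the point: no species would pave that much, so traffic jams are the equilibrium.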
The shocking thing about the future is that future shock doesn't really happen: As change accelerates so does our boredom
The shocking thing about the future is that future shock doesn't really happen: as change accelerates, so does our boredom. Alvin Toffler predicted that future shock would become an increasingly common affliction, with people ever less able to cope with changes in their everyday lives. In reality, modern humans are more or less capable of absorbing rapid change. People who are susceptible to future shock have been, and continue to be, culled by society. For example, those who stuck to factory jobs in Detroit rather than switching cities or lines of work became disgruntled. Perhaps they were too old to resort to crime themselves, but their disaffection, if it didn't motivate their Gen X or Millennial children to do differently, might have led those children into crime, jail, and removal from the reproductive and social pool.
Someone living in Alvin Toffler's time saw the rapid proliferation of automobiles followed by the ascendancy of airplanes. The logical conclusion back then was that we'd all be shuttling to space with our personal robots by now. If the pace of lifestyle change had continued to modern times, perhaps we'd all be drooling in future shock. But the reality is that while technological change continues at an accelerating pace, lifestyle change has slowed down a bit, and we've become more accustomed to these changes. Nobody believes that the tools they use today will exist tomorrow, but we know that we'll adapt. Instead of lying in the bed of a truck fantasizing about space travel, the young, bored hipster snaps his fingers and complains, "Where is our damn flying car?"
The Singularity will be economic
Money is like Ethernet, binding everybody through the 0s and 1s of stored value. The speed of the network has been rising because of new technologies, new players, and new kinds of money. Technologies include e-commerce, credit card networks, and digital currencies. Players include humans and non-humans. The number of human players has grown because of population growth and the widening circle of moneyed participants in Third World economies. Non-human players include automated trading robots. New kinds of money include forms of debt and novel investment vehicles. Together, these factors indicate that the velocity of money is growing exponentially, in a Moore's Law-like curve.
If so, then perhaps the Singularity will be economic. Riches will cascade suddenly to the corners of civilization, with the overnight emergence of a global Leisure Class, wandering around like philosophers and artists in a School of Athens, just without the necessary slave economy to support it.
The ultimate goal of the Singularity is infinite happiness, not unlimited holodecks
Before Kurzweil popularized the term Singularity in The Age of Spiritual Machines, it referred to a concept in physics. A singularity is said to have occurred when gravitational forces cause matter to have infinite density and zero volume. Kurzweil then applied this imagery to accelerating computing power, describing an asymptotic curve that leads to a day when processing power increases as much in an hour as it once did in a hundred years.
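The compounding that drives Kurzweil's imagery is easy to illustrate. As a toy version, assume a steady Moore's Law doubling time of 18 months (the doubling period is my assumption here; Kurzweil argues the period itself shrinks over time, which is what produces the asymptote):

```python
# How many times does capability multiply over a century of steady doubling?
DOUBLING_YEARS = 1.5  # assumed Moore's Law-style doubling time

def capability_multiplier(years, doubling_years=DOUBLING_YEARS):
    """How many times capability multiplies over `years` of steady doubling."""
    return 2 ** (years / doubling_years)

print(f"{capability_multiplier(100):.2e}")  # roughly 1.2e+20 over a century
```

Even with a fixed doubling time, a century compresses into a number with twenty digits; with a shrinking doubling time, one gets the "an hour equals a hundred years" picture.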
Whether it was his intention or not, the Singularity has become a stand-in for utopia. Reading The Age of Spiritual Machines leads the interested nerd to believe the following:
- Accelerating computing power will fulfill our unlimited human potential
- This utopia is near
- It is inevitable
- And it will arrive in a flash
For some, the Singularity is the opportunity to leave death behind forever. For others, it's the dream of a holodeck that fulfills their every desire, including sexual ones.
So the Singularity has been redefined even further away from its physics origins, which makes it difficult to answer the question: "How will we know the Singularity is here?" Is it simply when the graph of processing power has a hockey-stick-shaped curve? Or is it only when the fruits of such processing power have arrived? If computers are really fast, but life is much the same as it is today, wouldn't it be a hollow Singularity? A better definition is as follows:
The Singularity is the point when we achieve longstanding, collective human dreams.
For example, if we find a cure for cancer, Alzheimer's and old age, then we've achieved a Singularity of sorts. If we achieve World Peace or if hunger and poverty are eradicated, then perhaps we've reached a Singularity. If GDP is so abundant that it is trivial for anybody to live a ludic life, one completely dominated by play instead of work, then the Singularity is indeed here.
Therefore, the Singularity is less about some specific point in time, and more about an experience when we feel that historically negative aspects of human existence have become obsolete.
We will therefore know it's near through the following symptoms:
- Unemployment is no longer a barrier to fulfillment
- Most military stand-offs end in standing down
- Children no longer fear disease because they know cures are around the corner
- News becomes obsolete because not enough bad, notable things happen
- Boredom is a bigger concern than survival
The when of friendly AI is just as important as the what: better to arise from a Google or an Apple than a Manhattan Project
If we are to design a friendly artificial intelligence, one that doesn't destroy us like in Terminator, we have to design it so that it's dependent on human happiness for its existence. For example, if AI is always in the service of making consumer goods better, then it propagates and evolves only insofar as we are happy with it. Apple's Siri, for instance, gets more developers and data centers assigned to it the better it is at making us happy. In a hundred years, Siri is likely to be incredibly advanced, but it will have no trouble obeying Asimov's Laws of Robotics. If, on the other hand, AI's development depends more on making humans unhappy (for example, in war machines), then it has a bigger chance of harming us as it develops in complexity.
Today isn't like the Jetsons because all the spectacular bits of technology have been filed off in the name of user-friendliness
Part of why there is a banality of futurism is that we compensate for future shock. Businesses are still clamoring to bring broadband to the other 30% or so of Americans who don't have it. We'll bend over backward making it seem user-friendly and approachable, saying "Hey, you can watch TV on it!" or "You can keep in touch with your grandkids!" In this way, all futurism becomes in some ways boring and old-fashioned. The writers of science fiction are over-eager for the future. They're novelty-seekers. But the vast majority of people are skeptical toward novelty. Change makes them uneasy. And it's not just a token resistance that is eventually papered over, but the kind of resistance that shapes change. All the strange or awkward parts of a new technology get filed off, usually under the banner of "user-friendliness," which dilutes anything wild or spectacular. It's no surprise people often complain, "Where is the future promised in The Jetsons?" It's already here, but it's presented as so natural a part of our everyday existence that people hardly notice.
Utopia is a spectrum, with the elimination of violence and poverty at one end, the Garden of Eden at the other
There are three kinds of utopia. There's utopia proper, which relates to the literary definition, where humankind lives in perfect harmony with each other and nature. We are not in that condition now. There's effective utopia, which is less than ideal but involves the achievement of collective human desires, including the elimination of war, violence, hunger, and poverty. Arguments can be made that such a utopia is in the process of arriving within a few generations, or that it already exists for a rapidly expanding circle, whereas before it only included a small elite. Even more ideal than the utopia proper, though, is the heavenly utopia, the one depicted in the Garden of Eden or promised by the technological singularity. In the heavenly utopia, human suffering has been eliminated, and happiness is infinitely obtainable. Whether or not a heavenly utopia, or "Heaven on Earth," will arrive is still a matter of faith. Whether or not a utopia proper will arrive is still a matter of socio-economic debate. And whether or not an effective utopia is arriving is plainly visible to anybody willing to look at the data.
We'll know we're making progress on Artificial General Intelligence when we start comparing AIs to 5-year-olds
Right now, artificial intelligence (AI) is about as smart as a two-year-old. This argument is based on two hard AI problems that we've solved. One is pathfinding. Thanks to DARPA, we can now throw a car into the desert and tell it to go from A to B, and it'll fumble its way there. Likewise, a two-year-old could waddle, however imperfectly, to get from one part of the house to another. The other solved problem is object identification. Thanks to MIT researchers, we can put a picture in front of a camera and an AI will tell you what the scene is about. Likewise, if you flipped a picture book in front of a two-year-old, they'd point and shout "apple!" or "fire truck!"
So it's 2017, and we have a two-year-old. Not bad. The question is, Can we get to a 5-year-old? A 5-year-old doesn't just have two impressive skills, but maybe five. Not only can they get from A to B, but they can find a tool they haven't used before and start toying with it. Not only can they identify single objects, but they can describe a group of objects on a table. This task is exponentially harder than single-object-identification because a bowl of fruits is many things. Not only is it a bowl, but it's the individual fruits within it, as well as breakfast. Achieving multi-object-identification is frequently called a Holy Grail for AI.
The final Holy Grail is artificial general intelligence (AGI). AGI is AI smart enough to make itself smarter, which if achieved, would be the end of the human era on Earth. An AGI has roughly the intelligence of a 10-year-old since that's about the age when some children can begin coding. However, given the jump in the number of Hard AI problems we would have to solve to go from a 2-year-old to a 5-year-old, we'd probably have to solve 30 Hard AI problems to match the brain of a preadolescent. None of this is to say that it's impossible to build AGI, but given how long it's taken to solve just 2 Hard AI problems, AGI is certainly not "around the corner."
We are evolving into polite computers
Consider the example of the rigid C-3PO, who is fastidious about protocol and other details. Likewise, we are evolving genes for worrying about rules, both small and large, both self-imposed and imposed by others. That rule-following is part and parcel of a world that is becoming increasingly numerate, integrated, and automated.
We might be shocked by the plainness of initial versions of mind-uploading, i.e. virtual humans who can only communicate via social media and email
One way of thinking about the banality of mind-uploading is to imagine that these minds only exist online. They can send emails, upvote things on Reddit, maintain Facebook pages, and play Second Life or World of Warcraft. They can create YouTube videos, but the videos are only screen grabs of their avatars doing twirls and virtual dance moves. These virtual minds feel pain, but only emotional pain, or at least they claim to when they write back saying, "Hey, you really hurt my feelings with your email."
The world doesn't know what to do with these people. At first, these minds don't have voting privileges, because the scope of their existence is limited (or so the rest of the world believes). Flesh people become afraid of losing their jobs to avatars. When given a choice between dying or "staying online," most people choose to die, so as to spare their loved ones the grief of having to be associated with such a marginalized caste.
We must colonize the Galaxy before Idiocracy becomes all-consuming and irreversible
A commonly mentioned reason for wanting to colonize another planet is to leave the Earth before we destroy it. A corollary reason could be to leave before evolution changes who we are. The Ancient Greeks, over the span of three generations, built an academic body of thought out of thin air. But what followed the School of Athens was a relative intellectual darkness, which points to the possibility that there are positive (and negative) attributes about us today that could disappear. Our thirst for interplanetary or interstellar travel could be evolved out, and then we would, as a species, lose this chance forever. We are on a collective march towards peace, and IQ levels appear to be increasing, but if a dumb, war-like gene proliferated, it could take over the world like a virus. While humans could roam the Earth for thousands or millions of years, our humanity, or at least the humanity that we cherish today, could be destroyed before we have a chance to preserve it somewhere else.
We shape the world, and it shapes us back, and the fact that we get used to it has been essential to our survival
If the prophecy of future shock, which is defined as the neurosis caused by a rapidly changing world, were ever going to come true, it would have already happened. We would have been shocked when we covered the Earth with farms, or when we developed mass weaponry, or when we worked sixteen-hour days in soot-covered factories, or when we crammed into small boxes in tall skyscrapers, or when we lit up the night with candles, or when we cooked all our meals with fire, or when we moved from the plains to the glaciers. We shape the world, and it shapes us back, and the fact that we get used to it has been essential for survival.
We should be able to prove the materiality of consciousness soon thanks to AI
We'll find out soon enough whether Daniel Dennett was right, that there is nothing special about consciousness. As we crawl up the artificial intelligence ladder, we should start to see basic forms of consciousness in our machines. We have a funny feeling of consciousness when we look at Deep Blue defeating Kasparov at chess. Or when we imagine the grid computing behind Siri, we feel that some intelligence is at work. But so far, our sense of a "ghost in the machine" isn't the same as watching a squirrel pause and scan its surroundings. We feel that there is some kind of consciousness in the squirrel, albeit primitive, even if it's not ours. Or take even the simplest creature, a starfish. When we poke it, we sense some kind of consciousness when it curls up. We know that it felt something, as if its reaction were its way of saying "ouch." So if we're indeed making progress towards a generalized artificial intelligence, we should be able to poke at our machines and sense a similar reaction.
When you can pay $10 for a painting as good as the Renaissance masters, art has to go beyond beauty
When photography first came out, it seemed to spell the end of the visual arts. But the art community adapted, went abstract, and reinforced the idea of its infinite adaptability. Art has one final post-modern act left, though.
Art will have to transcend its reason for existence in the first place: being the delivery mechanism for beauty. This is happening more and more as we see the prevalence of caption-based art. Modern art galleries—and even classic art galleries—are only showing works of art that become interesting once you read the caption. For example, one exhibition I encountered had a series of canvases with a few straight blue lines on them, and the caption said that the artist had spent ten years searching for the perfect arrangement of thin blue lines.
When you can pay $10 for a painting as good as the Renaissance masters, or when everywhere you look you see commodified beauty, whether it's beautifully rendered billboards or Apple Stores that look like churches, art has to go beyond beauty.
Right now, it's covering both bases, by being both aesthetically pleasing, but also with the caption, adding an extra layer of interestingness and curiosity. Those thin blue lines were beautiful to look at but would have otherwise been worthless without the caption.
But more and more, we see the beauty part diminished. One of my favorite pieces at the Tate Modern was a series of square paintings, each two inches thick and each representing a different period of painting. The surface of each painting was flat and a single color, but it was the top layer of a cake of layers, each layer made from a medium popular at a different time. So one painting was built from oils popular in the Renaissance, while the one next to it moved through watercolors and eventually acrylics. When you look at these paintings from the side, you can read the history of painting materials like reading the rings of a tree or the layers of sediment at a fault line.
Without the caption, these paintings are just five blandly painted blocks, i.e. absolutely worthless. Sure, the paintings were framed crisply, and at least these layers of cake were great to look at from the side, but eventually art will just let go of all pretense of beauty, even going in the opposite direction toward willful ugliness.
Which came first, numerate people or a numerate world?
From an evolutionary perspective, our current level of numeracy seems too high. When did our ancestors need calculus, for example? However, in today's environment, advanced math skills are crucial for certain high-reward fields, such as science or business. So did evolution "anticipate" we would live in a technologically-advanced society and create those genes ahead of time?
Initially, numeracy must have started as a trickle. If someone had the ability to count and separate provisions fairly, they would have become a better trader. If that gave them an advantage, then it would have also made trading more advantageous for others. So the evolution of a new skill also raises the utility for others of acquiring the same skill. As human society became more and more numbers-based, genes that accelerated numeracy evolved, such as, in a naive example, an attraction to numerate people. Numerate people then created a world where bricks and bushels of grain had to be counted, which then rewarded even more numerate people, until finally we have the situation we are in today.
Likewise, programmers are building the world, making us more dependent on hard-to-use computers, further making it advantageous to be a programmer.
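The feedback loop described above can be caricatured in a few lines: the payoff for being numerate grows with how numbers-based the world already is, so the trait snowballs. Every number here is invented purely for illustration:

```python
# A toy positive-feedback model of the numeracy story: the reproductive
# advantage of numeracy scales with the numerate share of the population.
# All parameters are invented for illustration.

def simulate(generations=120, numerate_share=0.01, base_advantage=0.05):
    shares = [numerate_share]
    for _ in range(generations):
        share = shares[-1]
        # Advantage grows as the world becomes more numbers-based.
        advantage = base_advantage * (1 + 10 * share)
        shares.append(min(1.0, share * (1 + advantage)))
    return shares

shares = simulate()
print(f"{shares[0]:.2f} -> {shares[-1]:.2f}")  # 0.01 -> 1.00
```

Growth is slow while numerates are rare, then runs away once the world they build starts rewarding the skill, which is the "trickle, then flood" shape the essay describes.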
While we haven't invented anything as profound as electricity, the number of things we've built from electricity is profound
The portable phones we carry are light-years ahead of the phones we had two decades ago, but consumer aircraft haven't changed much, if at all, in that same timespan. The miniaturization of electronics can be ascribed to Moore's Law, which describes the inexorable trend of packing transistors into smaller and smaller spaces, but aircraft innovation hit a ceiling of material and fuel costs early, costs which have remained fixed or gradually increased over time. If one were to put the historical innards of cellphones on a time-lapse, it would show a box with a battery that expands like a tumor, since battery shrinkage has not progressed as much as transistor shrinkage.
So while the acceleration of technology appears to be an unstoppable force, the shape of the change is more like the hands of a clock. The second hand represents tracks of innovation that are still growing swiftly, whereas the hour hand represents domains where technological growth has slowed or stopped. Collectively, all the hands are turning, and there is some noticeable sense that every couple of years technology is going to improve, like clockwork. But what has become clear is that there isn't some magic ensuring every technology will get better all the time. That idea started to die as soon as flying cars did not arrive on schedule. As inexorable as trends like Moore's Law and other innovations seem in retrospect, at any point they could stop and become like the airplane: a giant leap for humankind we now take for granted.
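The gap between the clock's second hand and hour hand is easy to put in rough numbers. The growth rates below are illustrative assumptions, not measurements: transistor density doubling roughly every two years, and battery energy density improving on the order of 5% per year:

```python
# Compounding two decades of improvement at a Moore's Law pace versus a
# slower, battery-like pace. Both rates are rough illustrative assumptions.
YEARS = 20

transistor_gain = 2 ** (YEARS / 2)  # density doubling every ~2 years
battery_gain = 1.05 ** YEARS        # energy density up ~5% per year

print(f"transistors: ~{transistor_gain:.0f}x, batteries: ~{battery_gain:.1f}x")
# transistors: ~1024x, batteries: ~2.7x
```

A thousand-fold gain sitting next to a not-quite-three-fold gain is exactly why the time-lapse phone looks like a shrinking circuit wrapped around a tumor-like battery.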