So much of ethics discourse ends with a sanity check. If you read any book espousing a particular ethic, whether Kantian or Consequentialist, its author will often refute errant ethical systems by taking them to their logical conclusions. For example, they will show that any sane person would object to such-and-such ethic because it condones murder in too many circumstances.
But why not just construct an ethic purely based on sanity checks? At every point, consult your meta-ethics and ask, "Is this something that any sane person would object to?"
By using such a heuristic, though, you'll often see the same objections come up: "This ethic makes you overly kind to the weak," or "This ethic condones murder too often."
Eventually, this list of common objections would shape up into a simple list of commands, similar to the Ten Commandments or the Koran. In which case, the logical conclusion of any ethic is religion.
Art and dreams may be quantum observational acts that spring the rest of the multiverse into existence
The typical image of the multiverse theory is that of comic-book style alternate universes, where for every hero in our universe there is a similar-looking but opposing villain in another one. In these alternate universes, there are Bizarro versions of everything, like instead of a Pax Americana it's a Pax Germania with statues of Hitler everywhere.
But alternate, hypothetical universes could mean anything. In its simplest form, the multiverse could include universes that are an exact copy of our current one except with one atom randomly changed. If such a universe exists, then one can conceive of a hypothetical universe where a trillion atoms have randomly changed, so that all of a sudden I find a rabbit sitting on top of my laptop. Or in the blink of an eye, I'm thrown into an actual replica of Lord of the Rings.
As they say, "There are no limits, except for your imagination." While this cliche implies there are endless possibilities, what if it means the complete opposite, that multiverses are somehow bounded by imagination? After all, the set of all conceivable universes is an infinitesimally small subset of all possible universes.
This puts a new spin on dreams and art. What if the act of having a dream or creating art is an act of observation from quantum mechanics? By the mere act of observing an imaginary world rendered in artwork, that universe with those characters and shapes and laws of physics exists in some way. Perhaps our understanding of the word "existence" is too limited, in that existence is all about acknowledgment.
At the very least, we can assume the same cognitive world as mammals, sensing ground as solid and bird-song as pleasant
Even though we can't ultimately know what is happening inside the heads of other animals—and there are some who argue we can't even really know what is going on inside the minds of fellow humans—we can at least make some probabilistic guesses as to the nature of animal cognition. Animals likely share our perception of physics. They must have a sense of solids. They must understand the Earth as "ground" that exists below their bodies. They must have a sense of cause and effect. Predators must at least share the same sense of time with us since they must anticipate the moves of prey. Deer react to the sight of predators by throwing their bodies towards safety, expecting relief. Animals see colors, they feel the brush of wind, and they hear birds sing. They might even know that those songs are coming from birds, and because birds are not harmful, the sound is somewhere between soothing and non-threatening.
By recognizing our shared cognitive world with animals, we can bypass difficult questions like whether animals feel. Since so much of human consciousness has to do with things outside of emotions and self-awareness, it may not be necessary to understand whether those things exist for other animals for us to extend the circle of empathy.
Belief in Consciousness
One path to understanding consciousness is to understand how people defend a belief in God. If you ask God-believers why they are that way, they will defend it by saying, "I just know it to be so." Or they may refer to some deeply spiritual experience in the past that is prominent in their mind. Likewise, if you ask consciousness-believers why they believe in qualia, which is the conscious experience of sensory perception, i.e. "knowing that blue is blue", they will emphatically say, "I just know it to be so."
Vouching for the authenticity of conscious experiences is still a memory-retrieval act, similar to vouching for the authenticity of God. The God-believer retrieves a placeholder for a prior cultivation of that belief, founded on memories that have reinforced that belief. Their belief is as real as witness testimony in trials, which is hazy, at best. But to the witness recalling the scene of a crime, they will often state, "It's like it was yesterday." Likewise, when someone summons their understanding of the color blue, they are plumbing a memory of the color blue, even if they're looking at it right this second. As soon as the words leave their mouth explaining their conviction of the special properties of their perception of that color, their conviction becomes a memory.
It's that conviction that gives us an internal theory of the mind. Consciousness is a story told by our brains to our brains that there is a there there, which helps integrate all sorts of useful functions for the brain, such as developing an ego, or taking one's working memory seriously as opposed to treating it like some fleeting illusion. Realness is as much a sense as our sense of touch. There are cognitions, such as fantasies or dreams, that we know to be false. How do we know so? Because our brains tell us so. Likewise, our brains tell us that conscious inputs are real, and we happily go along with the story.
Consciousness can be reduced to the vague sensation of inner complexity: I am complex, therefore I am
The experience of consciousness can be boiled down to a single sensation: the feeling of inner complexity. I am complex, therefore I am. It's a type of qualia, just like sight and smell, but of a lesser intensity, like our sense of balance. When we look at dogs and imagine that they're less human than us, or when our ancestors looked at slaves in an inhuman light, the assessment comes from a question: "Just how complex can their inner minds be?" Likewise, there are humans today whose minds are so much more complex (or more ordered, to use Kurzweil's terminology), that they would look down on us and think, "Well, just how conscious could those people be?"
Cost-benefit isn't a good behavioral model since, in practice, we experience no past cost, nor future benefit, only a present varying lack
Maybe we need to think beyond cost-benefit, or at least consider the time-based nature of cost-benefit. The premise of rationality and utilitarianism is that we are willing to absorb some cost now for some future benefit, and that somehow the benefit should outweigh the costs. But if we remove the time component, such that there is no future benefit, only benefit now, and no past cost, only the cost now, there might be a better framework for thinking about rationality.
For example, I was once excited about the prospect of eating a Philly Cheesesteak, and as I took the first bite, I savored it for just a split-second. Soon after, my mind was elsewhere. The lights in the restaurant flickered in an odd way, and I entered into an unpleasant trance. I paid an extra cost to fulfill a craving in a sit-down restaurant when simple sustenance would have sufficed. Plus, the setting wasn't that great, so ultimately the whole thing was cost.
But the anticipation was a benefit, costing me nothing on the drive to the restaurant. I had gone from a benefit-like phase before I took that first bite, to a cost phase when my mind shifted to other distractions. Life follows a similar rhythm, weaving in and out of cost and benefit phases, like a pendulum of attraction. It's hard to pinpoint when exactly we weigh the costs or benefits of each swing.
How bad can irrationality be? Chance-mating is the epitome of it, and yet that's how we got here
The criticism that someone is "irrational" seems to include a theory of rational philosophy that only has two dimensions—cost and benefit—while excluding a third one: time.
When choosing a mate, for example, which is probably the most important decision someone could make, some people make that decision within a few minutes, whereas others take a year or more to decide.
The former could be said to be irrational, being completely swayed by their emotions. But many—if not most—of us are the product of chance-mating. Even though the stakes are incredibly high, failing to make a snap decision and "hook up" with someone means you may never have the chance to hook up with that person again. Maybe that's the best possible person you could've ever mated with, and because you delayed, you will miss out on a genetic recombination strategy that has proven effective over millions of years.
Likewise, in making a decision to buy a house, even small decisions can make a difference of tens of thousands of dollars owed over the lifetime of the loan. For most people, nowhere else in their life will one hour of their time make that kind of a difference. The purely rational person could spend time according to their hourly rate, by comparison-shopping or by optimizing every single item on their mortgage statement. But most people have no problem forgoing that kind of detail-oriented work. They determine how much effort to put into the search not based on the cost-benefit, but on the time they have to seize an opportunity.
How much different would one long, unending life be than a set of little lives with their own beginnings, middles, and ends?
We might already know what immortality feels like. We live long lives, separated into stages, with discrete groups of relations and environments that come and go. The friends a person had 20 years ago may be different than the friends they have now, and so, at least to their old group of friends, they're dead. Their former coworkers, their former habits, and their old home, all once represented them. If "All you touch and all you see, is all your life will ever be," as the Pink Floyd lyrics go, then all that people ever were exists in the memories of others with whom they are no longer acquainted. People get divorced, start new families, relocate, or enter new life stages, which brings them into contact with a new web of memories to live through.
So even if we conquer physical death and live one long, unending life, how much different would that be than living through a series of little lives with their own beginnings, middles, and ends?
Human copying will be like Dropbox, with the cloud copy always a few files behind the local one
Human copying might manifest similarly to cloud-based file storage. Even though the files exist on your laptop, on your phone, on your desktop computer, and in data centers, the hardware representations are not that important. Each location might be out of sync with the canon by a few files, but what matters is that there is a canon with a high probability of persistence.
If the canon decayed, you could recover your identity via these imperfect copies, with little personal disruption. This decay already happens in a way since we regularly shed our memory. Sometimes, after a night of heavy drinking, we lose whole folders of memories, and yet we carry on.
Which leads to the question, "What constitutes a significant loss of identity?" Death might become obsolete due to human copying, but if we lost 30% of our cloud backups, would we mourn the loss?
If Judeo-Christian tyranny is, "It must be this way," then Buddhist tyranny is, "Who can really say which way is right?"
If you were a hypothetical emperor and had to choose between having your subjects influenced by Eastern philosophies and thought versus influenced by Western thought, what would you choose? For the sake of argument, let's over-simplify and grossly generalize Eastern and Western thought, defining them as follows:
Eastern ways of thinking express moral behavior as an outcome stemming from mindful action. The student who meditates, calms their mind, and sees how we're All One will indirectly find the righteous or virtuous path.
The Western way is to tell people explicitly, "this is good" or "this is bad." Think of The Bible or The Koran: They are just a Thousand Commandments. The equivalent in Eastern thought might be the Bhagavad Gita or Confucius's writings. But these are more like "good ideas" or aphorisms, like "No man who got up before sunrise every morning failed to make his family wealthy."
Again this is a gross over-simplification. For the sake of argument, let's assert that Easterners believe virtue is an indirect result of a peaceful mind, whereas Westerners believe virtue proceeds from following virtuous commandments.
As a Western emperor, you would have a society that springs up based on law-and-order, with codes of conduct everywhere. In 2009, the United States saw the rise of the Tea Party, a group of people for whom it was fashionable to carry a pocket-sized copy of the Constitution with them at all times. That is how Western revolutions look.
An Eastern revolution would not revolve around a text. Gandhi's revolution revolved around mass stillness exercises (non-violence) for example.
If you were a benevolent emperor and wanted your subjects to be happy, Eastern philosophy might be useful because it emphasizes inner peace. But if you were a more pragmatic ruler, you might assume that people aren't smart enough to think for themselves and that matters of virtue need to be codified and drilled into their heads.
If you wanted to be a tyrant, though, either philosophy could serve your ends, but in a different manner. The Westerners could be chained by Draconian laws and The Easterners could be chained by the lack of a clear and consistent legal system.
If the universe is deterministic, why can't all existence just be the tentative states of a pen-and-paper Turing machine?
Consider a DVD of the movie Fight Club. If someone placed it in a DVD player, they'd see that Ed Norton punches Brad Pitt in the face. If they don't put the DVD in the player, it's still true that Ed punches Brad. If all DVD players in the world broke down, but we retained the DVDs, it would still be true that in Fight Club, Ed punches Brad. The absence of the players doesn't invalidate the truth about that movie.
If all the DVDs were gone, people would still remember that Ed punches Brad. There would be no proof, but one could say that if we were to go back in time, we would find that indeed it was the case. This thought experiment could go further and further, asking questions like, "What if we erased everyone's memory?" Which raises the question: Is the medium of an event's existence necessary for the event's existence? Can existence be solely predicated on information, or do zeroes and ones have to be etched into a disc?
Likewise, it's possible that we are in a similar movie, a movie that isn't playing anywhere. It doesn't exist in the memory of any being, but it's just what happens in this particular, abstract, and grand, sequence.
If we can regularly solve the Trolley Problem, then so can robots.
We have a lot of issues with the Trolley Problem. On the one hand, we fret about having to figure out whether ten lives are worth one. On the other hand, we frequently solve the problem with a not-so-humble resignation, saying, "It's not our place to decide whether the lives of 100 adults are worth the life of one child." And yet, we solve Trolley Problems all the time. When an oncoming car swerves into your lane, and you have to make a choice of maybe hitting the bicyclist to get out of the way, you're rapidly weighing the cost-benefits of risking a life to save another.
Some professions are riddled with such problems, such as being a police officer or soldier. While some of the people in those positions are vexed by making those decisions, many do so with ease. Likewise, if we're not worried about entrusting under-educated or under-trained people with making those decisions, we shouldn't fret about robots making them. If anything, we'll perfect the ease with which we already make those decisions, now with better inputs, consistency, and the benefit of a level head.
Illusion vs. Perception
Saying "it's all an illusion" is close. All we have is perception. Whether or not those perceptions are false depends on the domain, though. In optics, we have near accuracy. When you look at a tree, most of what you perceive is true. Optical illusions represent only a sliver of optics. Our memories of vision, though, are probably fifty percent false.
Immortality won't eliminate the fear of death; Death will just take on a new meaning
Death is a word, and the thought of that word is an experience for the living. Therefore, even if immortality removed the fear of death, the concept of death wouldn't die. Life itself would still be a death of sorts. Even if everybody's biological functions continued indefinitely, life could still stop. People might say, "I am dead inside," during moments devoid of vitality. And prolonged periods of suffering or anguish would be dreaded as much as one used to dread the march to the grave.
In the future, the killing of a human-looking creature that can pass the Turing Test will be considered murder
In the movie Source Code, Jake Gyllenhaal plays a character who travels to alternative universes where a train is about to explode from a terrorist attack. While his directives are to ignore everything else and find the terrorists, he feels compelled to save the passengers in these alternative universes. The commanders who give him orders believe that these alternative universes are simply simulations with information necessary for the safety of the main world. But Jake is confronted with such strikingly high-resolution experiences in these other worlds that he can't help but feel compassion.
Consider Jake's viewpoint. Since there is effectively no difference between normal human beings and zombies pretending to be normal human beings, he has no way to access the inner experiences of anybody else. He can only verify his own consciousness, and so his ethics shouldn't depend on zombie-verification. Yes, we all want to know whether there is someone truly there receiving pain behind that visage, but in the absence of that certainty, you should err on the side of saving the zombies.
Life-extension necessitates a redefinition of life
Now that we're living beyond our ancestors' average life expectancy, it might make more sense to have a multiple lives perspective, with each "life" spanning between 15 and 20 years.
The stages are longer. If one spends ages 5 through 22 in school, that is like a lifetime as a student. Waking up for attendance, getting grades, and socializing with colleagues drives every student's daily existence, and then after 22, that rhythm stops. For most students, that final commencement ceremony is like a funeral.
If they then spend ages 22 through 40 being single and dating, then that is also a lifetime within those rhythms. Then raising children for 18-22 years is another lifetime. Having an empty nest, another, and so on. Life-extension means more and more lives stacked back-to-back like books on a shelf.
If such a mindset were prevalent, we might revive coming-of-age rituals, but rename them "coming-of-life" ones. In these parties, people shed their past, maybe even their names. Or extending even further, a lifetime prison sentence could be between 15 and 25 years, because that's equal to "life" imprisonment; the difference between being imprisoned for 20 years versus 40 years is not much. That man or woman will not see their children have a lifetime's worth of aging. And when the criminal returns to the free world, they will be so different that they won't re-enter society with any of the same friends or assumptions. In other words, they won't be the same person.
Our sense of shared humanity comes from perceiving any beige android with two arms, two legs, and language-processing as one of us
Every difference that matters is genetic. The angry driver who is raging out—that's genetic. The significant other who has incredible empathy—that's genetic. The best friend who is great at math, the aunt who has short fingers, the teacher who can run marathons at the age of 50, the co-worker who can't, etc.—all genetic. Picture a kindergarten school portrait. This group doesn't represent a bracket of "humans," but rather a garden of tulips, grasses, and pines, all from different periods, like the Jurassic or Cambrian. This bracket, when cross-pollinated with another equally strange bracket somewhere else, will lead to an even richer, out-of-this-world, descendant garden.
Our passing perception of each other's differences is not one of wonder, though. Perhaps it's in our interest to be blasé. If each time we walked down the street, it felt like a giraffe crashing through the gates of a zoo, it would overwhelm us. We couldn't create social norms without a sense of shared humanity. And yet, if we were to appreciate our differences, connecting every trait not just to different notes on a piano as geneticists or anthropologists would have us believe, but rather to entirely different types of instruments, only then, hearing this cacophony, would it click in our heads what it means to be human.
Philosophy is supposed to be the "love of wisdom," yet my philosophy department never had a course for that
Popular philosophy topics in college include epistemology, ethics, and existentialism. However, despite the etymology of philosophy being "love of wisdom," a course on living well is unlikely to appear on the same list.
Sometimes schools teach wisdom, but they use ideas from more than two millennia ago. Socrates, Aristotle, and Plato may have just been the first life coaches who had no qualms telling people how to live virtuously or how everything is just an illusion.
Finding appropriate authors for a course on wisdom is hard because no choice is going to be without controversy. The benefit that the Big Three Ancient Greek philosophers have, though, is that they're secular and they are so old that we can discuss them from a historically bland point-of-view. Instead of telling students, "This is how you should think," the professor can say, "This is how they thought back then. Oh, and they had slaves," i.e., "Don't take them seriously."
The "founders" of Western thought have gained that title because they're senior and secular.
Political correctness depends on the size of the mouthpiece, with populist news on one end, hushed tones on the other
The nature of ethics changes depending on where we hear those ethics. On one end of the spectrum is network news, which has mass appeal. These ethics tend toward irreproachable ideas, such as "All men are created equal." They're the same ethics that are discussed in high school history books, often perpetuating a mythology of the Founding Fathers and their concern for "life, liberty, and the pursuit of happiness."
On the opposite end of the spectrum is parental ethics. For example, mothers may teach their daughters things like man-catching, or dads may teach sons "how to be a man." The mainstream media—as well as academia, another land for mass ethics—scrubs out any notion of gender inequality.
In other words, the political correctness of ethics changes depending on who is listening. Somewhere in the middle is talk radio, since you often listen to it alone in your car, as opposed to television ethics, which are the ethics of the living room.
Consensus ethics, the kind of ethics that politicians talk about or we discuss in polite company, are ethics that we can all agree are for the good of all. Peer-to-peer ethics are designed to help both parties in a conversation. And parent-to-child ethics are just for one person since parents are trying to send their children on their way. The kind of ethics that gets handed down from parents tends to be the most selfish of imperatives. They're even worse than the ethics from friends; friends at least want you to follow the golden rule.
If there were no cure for headaches, we might not suffer them as much
When we reach for painkillers, the order of events seems to go from vexation to question to answer. We feel tense, then we ask, "What can be done about this?" to which the response is, "Use this." But the existence of a possible answer draws the question out of us, and in tandem the knowledge of the vexation.
For example, a mother is driving her son to school and notices he is quiet. She asks, "What's wrong?" to which he replies, "My head hurts." (In the past, he might have said, "My tummy hurts," or "I don't feel good.") Suddenly a Children's Tylenol appears in his mouth, which creates an entry for "headache" in his database of fixable things.
Perhaps even the question, "What's wrong?" wouldn't have been asked a couple of generations earlier because parents didn't have video games and pills in their panacea toolbox.
Since emotion drives everything we do, then being driven is the constant pleasure running through life
Life is pleasure. Even something as simple as picking up a glass of water is pleasurable on some level. Having the thought, "I want water" is pleasing. Determining to satisfy that urge is pleasing. The locomotion of the body that goes to the kitchen is pleasant. Swinging the cabinet door open, rifling one's fingers for a glass, and then gripping it are all kinesthetically satisfying. If one is tired or has shoulder pain, then at the very least, the promise of simple achievement is the pleasure.
Even interactions that are considered unpleasant are pleasurable. Listening to a co-worker you dislike is typically considered unpleasant, whether you speak up or not. If you do speak up, it's pleasurable to release the words that bubbled up to your throat. If you don't speak up, it's pleasurable to give energy to your willpower and subvert the urge to retort.
While there are many other emotions at play, such as boredom, anxiety, and irritation, yielding to any of these is a form of pleasure. Negative emotions lure us into action through the pleasing thought of their release. Even depression makes the prospect of collapsing on the couch and turning on the television the satisfaction of an urge. Since emotion drives everything we do, then being driven is the constant pleasure running through life.
The Universe is a giant, abstract Turing machine that doesn't need to be run
The universe is one big, abstract Turing machine, one that doesn't need to be run. According to the Church-Turing thesis, there exists some set of computing rules that, when played out, can approximate anything that is computable. Our universe is computable because everything that exists or happens has a logic to it. Even in the indeterministic world of quantum computing, nothing happens without reason. (Here, the word "reason" is used instead of "cause" so as to be agnostic about the nature of causality or time.) You can observe anything in our universe and ask "Why this and not that," and there will be an answer even if you can't know it.
The exception would be observations at the boundaries, whether at the start or end of time. "At some point, something must have come from nothing," so the saying goes, but using a time-independent, causality-independent framework would restate this to say, "Something must have had no reason." There had to be at least one irrational observation in the universe. Even in the case of infinite regress from incessantly asking, "Why this, not that?", one could step back and ask, "Why this infinite chain, and not something else?"
The opposite of reason is arbitrariness or assertion. Being part of an abstract Turing machine is consistent with the requirement for assertion. In an abstract model, the universe is asserted by possibility. It's possible there is a Turing machine with our assertions, simply because the set of all possible Turing machines exists, at least in the abstract.
Actually, one could deduce the universe is abstract without any knowledge of Turing machines, but the Turing machine metaphor helps us navigate our biases. If you picture a pen-and-paper Turing machine, it would have a network of nodes representing different bits and a pointer representing the current state. The current state could be now, and the bits could represent the position and velocity of every particle in the universe. Consciousness would then work similarly to the snapshots of each frame of computation in a video game like World of Warcraft.
Consider the mind of a non-playable character, like a soldier who paces aimlessly throughout the court, waiting for someone to ask them for a quest. The soldier could step up to a pond, see themselves and possibly even click on themselves, asking for a quest. All of it would flow from the scripted narratives of the video game. It may appear random, but randomness in computation only works with a "random" seed, which is something external to the system, like the computer's temperature or the exact millisecond that someone started the game. If we use a strict definition of the universe, one that encompasses any so-called multiverses, then there can be nothing outside of it, and therefore no randomness. As a result, this video game would play out the same way each time, with the character going to the pool, and clicking on themselves for a quest at the same frame. Is the universe coded this way? Not necessarily, but according to the Church-Turing thesis, it could be isomorphic to something like this.
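The point about seeds can be sketched in a few lines of Python. Everything here, from the `patrol` function to the seed value, is an illustrative invention, not part of any real game engine; the point is only that pseudo-randomness replays identically once the external seed is fixed.

```python
import random

# Computational "randomness" is deterministic once the seed is fixed.
# The seed is the only external ingredient; replay it and the NPC's
# walk replays identically.

SEED = 42  # stand-in for an external seed, e.g. the game's start-time millisecond


def patrol(seed, steps=5):
    """Generate an NPC's 'random' walk from a given seed."""
    rng = random.Random(seed)
    return [rng.choice(["north", "south", "east", "west"]) for _ in range(steps)]


# With the same seed, the game plays out the same way each time.
assert patrol(SEED) == patrol(SEED)
```

Strip away access to anything outside the system, as the strict definition of the universe does, and there is nothing left to seed with: every run is the same run.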
As for the clear sense that "I exist" or the problem of qualia, such as the blue-ness of the color blue, we shouldn't dock points from the abstract Turing model because we can't immediately answer those questions. We should try to understand the universe before the Cambrian explosion. Otherwise, we are limiting our understanding of reality by trying to accommodate the puzzles that have sprung up from the rare branch of evolution that is our minds. There is nothing, because there can't be anything, and yet something must be explained, and the only way to explain explanation without any human assumptions is that it's all just an explanation, a story, an idea, or an abstract machine that doesn't need to be "run" in order for us to know its reality.
The day we realize we're not much different than unplayable sentinels in video games is the day we understand our universe
Consider this Theory of Everything: The universe is a giant abstract computer that doesn't actually have to be run. The obvious counter-argument is that you can look all around you and see "stuff" that changes over time. And that even if we're in a simulation, there has to be a recipient of that "stuff" that processes it over time and appreciates/reacts to it.
The biggest barrier to accepting this Theory of Everything is our absolute faith in our senses. Here is a thought experiment to help undo this:
What if we built a naivete machine, one that had absolutely no preconceptions of the world? It would have no concept of time or cause-and-effect, just simply the ability to check variables with values filled in. Initially, it would be useless since it wouldn't have access to our world. So we'd give it a pointer to a memory address where we placed a video feed. The computer could run a loop, check the memory address, and notice some bits. And then after one frame, it would notice different bits there. So the computer would notice that as the loop advances, those bits are different than what was there before. Then it could notice patterns. It could notice that the bits make a lot more sense if composed in x-y coordinates. And maybe it could figure out that it's a series of 8-bit numbers, seemingly repeating every three. Those would be to us RGB, but to it, simply (0-255, 0-255, 0-255). The computer at this point would not necessarily have a sense of time, but maybe it would have a sense of a "tick" since it can execute one line at a time. To the machine, it "knows" that at each tick, there is a different value in that field.
It could have a history of the values of all the previous ticks. But it wouldn't know whether or not ticks in the future have to be calculated in order to be known. It would only know that to get information for future ticks would require ticking forward.
Based on the video feed, it could notice objects, but these wouldn't feel solid to the machine. Instead, the machine would see recurring patterns of clumps of bits. Eventually, it could form rules to predict what will happen on the next tick based on where the clumps were in previous ticks.
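This passage reads like an algorithm, so here is a minimal sketch of it in Python. The video feed is a fabricated stand-in, and the frame generator and the stride-guessing heuristic are my own assumptions about how such a machine might notice the three-byte pattern, not anything the thought experiment prescribes:

```python
def make_frames(num_frames, num_pixels):
    """Fabricate a hypothetical video feed: each pixel is three 8-bit
    values, and only the first channel drifts from tick to tick."""
    frames = []
    for t in range(num_frames):
        data = []
        for i in range(num_pixels):
            data += [(i + t) % 256, 100, 200]
        frames.append(bytes(data))
    return frames


class NaiveteMachine:
    """Knows nothing but: at each tick, check the buffer and record it."""

    def __init__(self):
        self.history = []  # raw values seen at each tick, nothing more

    def tick(self, buffer):
        # The only observation available: the bits differ from last tick.
        changed = bool(self.history) and buffer != self.history[-1]
        self.history.append(buffer)
        return changed

    def guess_stride(self):
        # Notice a repeating group size: bytes separated by the true
        # stride (3, i.e. RGB to us) vary least across the frame.
        frame = self.history[-1]
        best_stride, best_diff = None, float("inf")
        for s in range(1, 9):
            diffs = [abs(frame[i] - frame[i + s]) for i in range(len(frame) - s)]
            avg = sum(diffs) / len(diffs)
            if avg < best_diff:
                best_stride, best_diff = s, avg
        return best_stride


machine = NaiveteMachine()
for frame in make_frames(10, 64):
    machine.tick(frame)  # each tick: "the bits are different than before"
print(machine.guess_stride())  # the machine infers groups of three bytes
```

Note that nothing in this loop invokes time, objects, or causality: the machine only accumulates values tick by tick and finds structure in them, which is the point of the thought experiment.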
At this point, what does the naivete machine believe? It doesn't believe in cause-and-effect. It doesn't believe in materialism. It doesn't believe in time. All it believes is that as each tick advances, there is different patterned data in a memory address. Even the language I use to imagine how the machine thinks is prejudicial.
Trying to find the meaning of life is much harder than just doing meaningful things every day
Answering the question, "What is the meaning of life?" is impossible, at the very least, because it's too universal: there's your meaning vs. everybody else's meaning. So the question should instead be transformed into something personal, and maybe more local. Instead of asking yourself, "What is my purpose?" or "What is my meaning?" you could ask yourself, "Is this meaningful?" or, "Is that meaningful?"
A simple syntactic change to the question can make the difference between boggling one's mind with depressing thoughts and doing something proactive. Nearly everybody has at their disposal a handful of things that, if asked about, they would respond with a deeply affirmative, "Yes, this is meaningful." And if that's the case, then the original question, "What is the meaning of life?" becomes moot.
Utilitarianism and rationality make a big assumption: We care about things in proportion to how much we value them
Implicit in utilitarianism and rationality is some form of proportionalism: We supposedly care about things in proportion to how much we value them. Costs and benefits are supposed to be vaguely quantified, if not numerically, at least by comparison. Some sources of utility have a higher benefit than others. Some benefits are "worth" the cost — i.e. that donut on the table provides some benefit versus some cost to your health. But even basic quantification may be moot in light of the binary nature of our ultimate utility: we only care about one or two things, not dying, and possibly, reproducing. Everything else can have noisy or random valuations since most things usually don't directly contribute to either life or death.
Instead, decision-making is more about drives. Certain thoughts are pleasing, given the right mood, context, and of course habit. It's exciting to go to a nightclub, or it's your dream to have a child. Neither of those may ever deliver "utils" worth the cost, and in many cases, as studies in positive psychology are bearing out, we are often less happy upon achieving or doing the things we think will make us happy. Our mind just isn't equipped to weigh the costs and benefits of a single decision that may be a drop in the ocean of the recurring decisions that ultimately affect our ability to survive and thrive.
We're irrational, not because we're stupid, but because of the volume of decisions we have to make with limited information
One problem with reason is that the act of reasoning comes with an expectation that we have complete, or complete enough, information. But evolution didn't design us to be rational thinkers. Evolution designed us to maximize our outcomes in the face of incredible uncertainty. Intuition is maybe 90% of what happens in reason. When we begin a line of reasoning, the path is guided by feeling. People with brain injuries who've lost their emotional centers suddenly lose the ability to deduce. Extrapolation is the norm. Stereotyping is the norm. Jumping to conclusions is the norm. It's expecting otherwise that's irrational.
We aren't virtuous because it feels good, but because we believe it to be good, which is proof happiness isn't everything
Happiness is everything. You wouldn't ever do something that you didn't think would make you happy, right? You might object, saying that making other people happy is more important, but then it could be argued that the virtuous feeling of making others happy ultimately makes you happy. Some people believe that all motives or thought processes can be unified by single personal imperatives like happiness or pleasure. The objectivists believe everything can be ultimately reduced to self-interest. And some psychologists believe everything can be reduced to symptoms in the head.
But to focus solely on one's personal experience is to live life with a mirror constantly in one's face. It's akin to measuring the world only by the feelings it generates in you, and not by the actual content of the world. The philosopher Robert Nozick asked, "Would you choose to be a brain in a jar that was stimulated all the time to feel happy?" Most people would answer no, which Nozick ascribes to the reality principle. Being connected to reality is important in and of itself. We do things often because of their intrinsic value, not because of their felt qualities. People help others not just because it feels good, but because they believe it to be good. And while it could be argued that following one's righteousness feels good, the helper is not chasing the prospect of the good feeling of righteousness. If the payload of good feeling is removed from the equation, as it is often reduced to a split-second of satisfaction, then pretty soon it starts to look like the virtuous are chasing virtue for its own sake.
We have as much information as to whether or not to kill people in our dreams as we do in real life
Should you kill bad guys in your dreams? Sure, why not? They're not alive, right? Well, you only have two bases for this belief. One is that in your dreams, you remember having woken up before from a similar loopy world. You remember being in a place where there is no evidence of a dream world anywhere. And so you think to yourself, "I'll snap out of this, and none of it will exist anymore. Therefore, it's not real." So, you're 100% confident you can take down muggers or whoever you want with no regret. After all, there will be no material consequences in your waking world.
The second basis is that your dream world is fantastical and subject to your imagination, further proof it's not real. But let's deconstruct that. It's fantastical relative to our waking world, but why should that lead us to favor the waking world more? Isn't a world that is rigidly governed by rules, like the laws of physics, more insane than one without them? In fact, it drives physicists to insane deductions to try to make it all unify and fit together. It drives us religiously insane because we look around and think, "Wow, look at this order and perfection, it's got to have been designed by some infinite genius." And then we kill each other over who is more right about the true nature of this genius when really we're masking our own insecurities about these conclusions.
We should be able to prove the materiality of consciousness soon thanks to AI
We'll find out soon enough whether Daniel Dennett was right, that there is nothing special about consciousness. As we crawl up the artificial intelligence ladder, we should start to see basic forms of consciousness in our machines. We have a funny feeling of consciousness when we look at Deep Blue defeating Kasparov at chess. Or when we imagine the grid computing behind Siri, we feel that some intelligence is at work. But so far, our sense of a "ghost in the machine" isn't the same as watching a squirrel pause and scan its surroundings. We feel that there is some kind of consciousness in the squirrel, albeit primitive, even if it's not ours. Or take even the simplest creature, a starfish. When we poke it, we sense some kind of consciousness when it curls up. We know that it felt something, as if its reaction was it saying, "ouch." So if we're indeed making progress towards a generalized artificial intelligence, we should be able to poke at our machines and sense a similar reaction.
What we lose in zest by using passive voice, we gain in intrigue, as storms gather, and things happen
"The Internet was invented" sounds better than "Scientists invented the Internet" or "Al Gore invented the Internet." Even though passive voice is discouraged in English class, we crave it. Much of the human experience is about receiving things or things happening to us, with the actor or agent unknown. We can identify who invented the Internet, but doing so ignores the idea that technology itself has agency. "Technology wants something," with or without the specifically named inventors, and so the Internet, in a way, is something that simply appeared, as if by magic.
Active voice frames conversations with causality, which makes sense in the plot of a spy thriller with a character who is pushing the events of the story forward. But even then, one could frame it in terms like, "conspiracies formed, commands were sent, and computers got hacked."