Notes by Philip Dhingra
Computer Science

AI Mashups

If Apple wanted to create a lively 3D avatar for Siri, they could, but they don’t for business reasons. Anything less than complete realism would make the avatar appear uncanny and creepy, turning off consumers. Apple could likely make Siri’s voice more natural, too, given that her tone hasn’t changed much since her release in 2011. But again, Apple would have to improve her naturalism by leaps and bounds to cross the uncanny valley and make Siri acceptable to consumers.

However, as individual AI domains increase in sophistication, a new opportunity will arise that involves mashing up disparate AI modules. For example, if Siri had a 3D face floating on the screen of a self-driving Tesla, the effect would be like talking to KITT, the automated assistant featured in the 1980s television show Knight Rider. Instead of creeping us out, the combination would be delightful, tickling our curiosity. “Siri, take me to the grocery store.” “Okay, would you like to stop by the bank along the way?” If you added some eyelashes and fleshy features, you might start to feel like you had a real-life shopping companion.

The final 20% of AI sophistication for voice assistants will probably cost the same as the first 80%. But as we get closer, it will become easier for entrepreneurs to round up the current best performers in AI and produce a symphony of life emulation.

# ai computer_science futurism

Bitcoin is the first open-source government, with branches of power much like git trees, forking and merging, yet still producing a master

# computer_science politics technology

For content, crawlability is the new immortality

One day the birth of the Internet will seem like the birth of history. The amount of available recorded information, if plotted on a line, would look like a cliff starting in the late 1990s.

Even though the “digital revolution” supposedly happened before the Internet with the advent of computers, it wasn't until all content delivery became digital that we encountered this cliff.

If an archivist wanted to save a newspaper today, post-Internet, they would most likely crawl the newspaper's website on their own. If the website weren't online, they would ask the paper to type a few commands and email a database dump. And if, finally, that weren't possible, they would ask the paper for access to the published HTML files that once represented the website. If the archivist had to, they could copy and paste the contents from there.
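For the crawl-it-yourself path, here is a minimal sketch of what that looks like in practice, using only Python's standard library. The start URL is a hypothetical placeholder, and a real archival crawl would also respect robots.txt, throttle its requests, and save images and stylesheets, not just HTML.

```python
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkParser(HTMLParser):
    """Collects href values from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, limit=50):
    """Breadth-first crawl of one site, returning {url: html}."""
    site = urlparse(start_url).netloc
    queue, seen, pages = [start_url], set(), {}
    while queue and len(pages) < limit:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except OSError:
            continue
        pages[url] = html
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            if urlparse(absolute).netloc == site:  # stay on the paper's site
                queue.append(absolute)
    return pages

# pages = crawl("https://example-newspaper.com")  # hypothetical URL
```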

Before the Internet, even though the newspaper's contents were most likely digital somewhere, nobody would know where the files were. Or, if their location were known, retrieving them would require assistance from a technical employee who would have been laid off by then. Or the file formats might be printer-ready and arcane, instead of web-ready and accessible.

Ultimately, the newspaper would simply dump an incomplete box of back issues on the archivist's desk, where it would collect dust again as the archivist moved on to easier projects.

# history computer_science technology science

Having a catalog of one million songs, books, or movies once required a master negotiator like Steve Jobs

Now it's a given.

# computer_science business

If genetic code is the ultimate spaghetti code, then maybe we've been designing programming languages wrong

If you took an arbitrary computer program and changed a zero to a one, there's a good chance the whole thing would crash, or at least malfunction. Much of a program is laid out as a long, contiguous stream of bits, so a change like that could corrupt something fundamental, like a variable's type, which would break a module, and thus the whole program.

A single genetic mutation, on the other hand, is far less likely to be fatal. Our DNA encodes meta-DNA that determines which parts are allowed to change, and with what probability. Certain features are redundantly encoded to prevent fatal deviations. Some features, like eyesight, are so crucial to survival that their general functionality is consistent across the animal kingdom. Others, like bone length, aren't so risky, which is why even within our species there is a wide variety of bone-length combinations (e.g., small hands with long legs, high cheekbones with narrow jaws). DNA thrives on this landscape of varied corruptibility, whereas a computer program must remain untampered with to function at all.
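To make the contrast concrete, here is a toy sketch of my own (not anything from the note itself): flip one bit in a packed binary blob and the parsed value can turn to garbage, while a "genome" that encodes its own per-trait tolerance absorbs a mutation gracefully. All the numbers and trait names are invented.

```python
import random
import struct

random.seed(0)

# --- Brittle: a program-like blob of packed values -------------------------
blob = bytearray(struct.pack("<3d", 1.0, 2.0, 3.0))  # three doubles
bit = random.randrange(len(blob) * 8)
blob[bit // 8] ^= 1 << (bit % 8)                     # flip a single bit
# a single flipped bit can turn one value into garbage or flip its sign
print(struct.unpack("<3d", blob))

# --- Tolerant: traits that carry their own allowed variation ----------------
genome = {
    "femur_length_cm": (45.0, 3.0),   # (value, permitted std-dev of mutation)
    "eye_genes":       (1.0, 0.0),    # crucial feature: effectively frozen
}

def mutate(genome):
    """Each trait drifts only within its own encoded tolerance."""
    return {k: (v + random.gauss(0, sd), sd) for k, (v, sd) in genome.items()}

print(mutate(genome))  # femur drifts a little; eyesight stays put
```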

# computer_science evolution

Intelligent design will become obsolete once computer programs written in organic code become normal and familiar

Does an organic programming language exist that mimics biology, with tendril modules twisting and combining in mysterious ways?

One method for internalizing an understanding of evolution is to recognize its speed. The public doesn't understand evolution because they can't see it happening around them. All people see are whole organisms with sophisticated machinery that doesn't change into other things. It's hard to imagine that randomly drifting organic compounds are what led to these complete and complex machines. Evolution has to seem fast in order to gain widespread acceptance.

As is often the case with epiphanies about our nature, the metaphor probably lies in plants. Plants can graft onto each other. They can grow limbs in all sorts of directions. They can live short or long lives. Their gender is unclear. Perhaps plants are life-size projections of the actual internal machinery of DNA.

As a thought experiment, imagine developing a computer program the way a garden grows. Functions and classes would be networked, with roots entangling, rather than called linearly on a program stack. The web of genetic instruction would be flexible, unlike the fragility of programming a robot. If the blueprint for a robot were accidentally changed to give an extra 10 cm to its femur length, the robot probably couldn't be built. If the same mutation happened to a mammal, all the muscles and tendons would grow appropriately around the longer femur. The new infant mammal would learn locomotion differently than its parents did, and perhaps this creature that was once landlocked would find a natural home in the ocean.

# computer_science

Just as feeds are traditionally used for livestock troughs, news must be consumed while we are consuming something else

Tablets were supposed to save the newspaper industry by turning everything into an app. And while the NPR, NYTimes, and WSJ apps have been impressive from the get-go, so far that promise hasn't been fulfilled.

The disadvantage of news apps is that news is best consumed when streamed in alongside something else. People often pick up newspapers at coffee shops with their loose change. Or they read the news at the breakfast table to keep their minds busy while eating alone. (The latter has been phased out by 24-hour news channels, which are easy to run in the background.)

Individual apps, with separate user interfaces, are focused interactions. When you open an app, you are there to start and finish a task, not to surf. When you open a browser tab to a news site, by contrast, you're feeding a distraction into your current workflow. This flow is why Twitter feeds, Facebook feeds, and browser tabs are the primary methods for consuming news. The word "feed" itself conjures up the image of a pig at a trough. Hence news must be consumed while something else is happening.

# computer_science mediums

Just as while-loops and if-then statements are all that automatons need, cyclical chemical catalysts are all that life needs

One of the most basic programming concepts is the while-loop. Even in a multi-threaded paradigm, a while-loop sits at the core, cycling through each thread for new instructions to integrate. Without cycles, there would be no purpose to time; everything would just be one big event. Cycles are perhaps the most basic sign of order in the Universe. The Big Bang produced objects that moved away from a center, but when those objects began looping around each other, they formed meaningful activity.

The earliest prototypes of lifeforms were molecules that created eddies among themselves. Some carbon molecule reacted to movement in the environment in a looping pattern, creating a blip of activity only to dissipate afterward. For millions of years, a million times a day, these eddies came and went, came and went, until one of them formed a loop that could sustain itself for longer. These loops were like the self-replicating patterns in Conway's Game of Life, perhaps feeding off the bigger loops around them.
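The Game of Life loops mentioned here are easy to make concrete. Below is a minimal sketch of the standard update rule, with the "blinker," the simplest oscillator, as a loop that returns to its own state every two ticks:

```python
from collections import Counter

def step(live):
    """One tick of Life: live is a set of (x, y) coordinates of live cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # a cell lives next tick if it has 3 neighbors, or 2 and is already alive
    return {c for c, n in neighbor_counts.items() if n == 3 or (n == 2 and c in live)}

blinker = {(0, 1), (1, 1), (2, 1)}     # a horizontal bar of three live cells
assert step(step(blinker)) == blinker  # the loop returns to itself every 2 ticks
```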

When the Earth orbits the Sun, when ocean waves move back and forth, or when sunlight waxes and wanes, these provide pumps for a descending hierarchy of smaller and smaller cycles. Those early prototypes of lifeforms attached themselves to the periodicity of such cycles, ultimately creating feedback loops with their environment. The respiration of plants is so widespread that as the Earth moves around the Sun and the seasons change, oxygen levels rise and fall in a way that, if graphed, makes the Earth appear to be breathing. Life is the loop.

# evolution computer_science

No 2.0

Software releases that follow the principle of incremental improvement may still carry legacy habits from major release cycles. The major release cycle was a byproduct of an unconnected era, in which developers had to pour all their energy into releasing one great CD to distribute to stores. Patches were Band-Aids that, hopefully, not everybody had to apply for their software to work correctly.

The problem with alternating minor and major releases is that major releases trade the promise of brand-new features for a dip in user experience. The first half of the dip is the suspension of patches and bug fixes in the run-up to the major release. This gap occurs because all the quality-assurance testing is tied up in the major release, so software companies don't have time to test bug fixes in both the old version and the new one. The second half of the dip is the post-release bug-fixing phase, in which the public becomes a pool of extended beta-testers. And the more features developers pack into a major release, the more the number of potential issues grows, roughly with the square of the feature count.

Most of the time, features are bundled together, such as an operating-system facelift plus some functionality upgrades, purely so the marketing team can have a "big splash." One day, the minor/major release dichotomy will become obsolete in favor of constant-sized releases. Or some company will figure out how to subvert the extended beta test. But even that would only cover the post-release dip. The pre-release dip would still occur as the company announces, "This will be fixed in version 2.0."

# computer_science

Software is like water, conforming its shape to its container, whether it's ice cube trays or a series of sprints

The reason Agile development works so well is that software is much like water. While it's important to architect and design code before building it, there are an infinite number of ways to get to the same business outcome. Whole features that were once thought critical may end up on the cutting-room floor if a gun were put to the developer's head. Short two-week sprints are exactly that gun.

When building a house, one can't just skip building bathrooms or skip building a foundation. When building software, though, sometimes a sketch that's re-sketched a few times becomes a minimally viable product, and a few iterations later, one stand-out feature becomes a whole company.

# computer_science technology

Sometimes "act naturally" isn't the best advice when unnatural politeness will suffice

Is there, or isn't there, value in pat advice? A common piece is "be natural," but is this something you should do all the time? Does being natural lead to the life outcomes you desire? One way to evaluate it is to compare the advice to a concept from computer science: the greedy algorithm.

A greedy algorithm builds a larger solution by repeatedly taking the best small step available and adding the results up. So, for example, if the overall goal is to be happy, then a "be natural" algorithm would seek happiness at this moment, and the next moment, and so on, with the idea that this builds happiness in the long run.

In computer science, greedy algorithms are known for being reasonably efficient at finding good solutions, but they have one glaring weakness: they can get stuck at a local optimum. For example, "be natural" could dictate that you act on your anger at this bar at this moment, and so you punch a patron in the face, which lands you in jail. Jail then makes you less happy than if you had made a long-term assessment of consequences, acted unnaturally polite, and walked away.
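Here is a toy model of that failure mode (my own construction, with a made-up happiness curve): a greedy climber that always takes whichever neighboring move feels best right now stops at the small local peak and never reaches the far higher one.

```python
def happiness(x):
    # a small local peak near x = 2, a far better peak near x = 8
    return -(x - 2) ** 2 + 4 if x < 5 else -(x - 8) ** 2 + 20

def greedy_climb(x, step=0.5):
    """Always take whichever neighboring move feels best right now."""
    while True:
        best = max((x - step, x, x + step), key=happiness)
        if best == x:      # no neighbor feels better: stuck at a local peak
            return x
        x = best

print(greedy_climb(0.0))   # settles near 2.0 and never finds the peak at 8
```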

So clearly "be in the moment" or "be yourself" aren't rules we should follow at every minute of every day. But how often then? Is this advice applicable 99.99% of the time, 95% of the time, or just something we should say to ourselves every once in awhile when our acting unnatural is getting overboard, and doing us more harm than good?

# happiness living computer_science

Thanks to microtransactions, entrepreneurs, instead of wearing many hats, could start companies by giving out microhats with microshares

Entrepreneurs wear a ton of hats initially because it's hard to recruit people at such a small scale, especially without revenue. If small shares could be offered as bounties for little bits of work, then perhaps the many-hatted entrepreneur wouldn't be needed, and everybody could just wear the hats they want to wear. Such is one possible promise of microtransactions and cryptocurrencies: the ability to administer small pieces of value efficiently and fairly.

# business computer_science futurism

The day we realize we're not much different than non-playable sentinels in video games is the day we understand our universe

Consider this Theory of Everything: The universe is a giant abstract computer that doesn't actually have to be run. The obvious counter-argument is that you can look all around you and see "stuff" that changes over time. And that even if we're in a simulation, there has to be a recipient of that "stuff" that processes it over time and appreciates/reacts to it.

The biggest barrier to accepting this Theory of Everything is our absolute faith in our senses. Here is a thought experiment to help undo this:

What if we built a naivete machine, one with absolutely no preconceptions of the world? It would have no concept of time or cause-and-effect, just the ability to check variables with values filled in. Initially, it would be useless, since it wouldn't have access to our world. So we'd give it a pointer to a memory address where we placed a video feed. The machine could run a loop, check the memory address, and notice some bits. After one frame, it would notice different bits there, and so it would learn that as the loop advances, the bits change. Then it could notice patterns. It could notice that the bits make a lot more sense if composed in x-y coordinates. And maybe it could figure out that they form a series of 8-bit numbers, seemingly repeating every three. Those would be RGB to us, but to it, simply (0-255, 0-255, 0-255). The machine at this point would not necessarily have a sense of time, but maybe it would have a sense of a "tick," since it can execute one line at a time. The machine "knows" only that at each tick, there is a different value in that field.
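A rough sketch of that setup, with the video feed faked as random bytes; everything here, down to the frame size, is an invented stand-in for illustration:

```python
import random

WIDTH, HEIGHT = 4, 3  # invented frame dimensions

def read_memory(tick):
    """Stand-in for 'check the memory address': returns raw bytes each tick."""
    random.seed(tick)
    return bytes(random.randrange(256) for _ in range(WIDTH * HEIGHT * 3))

history = {}
for tick in range(3):
    raw = read_memory(tick)
    # the machine's guess: the values repeat in groups of three (RGB, to us)
    triples = [tuple(raw[i:i + 3]) for i in range(0, len(raw), 3)]
    # and the triples make more sense arranged in x-y coordinates
    grid = [triples[row * WIDTH:(row + 1) * WIDTH] for row in range(HEIGHT)]
    history[tick] = grid

# all the machine "knows": at each tick, the same address holds different bits
print(history[0][0][0], "->", history[1][0][0])
```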

It could keep a history of the values from all the previous ticks. But it wouldn't know whether ticks in the future have to be calculated in order to be known. It would only know that getting information about future ticks requires ticking forward.

Based on the video feed, it could notice objects, but these wouldn't feel solid to the machine. Instead, the machine would see recurring patterns of clumps of bits. Eventually, it could form rules to predict what will happen on the next tick based on where the clumps were in previous ticks.

At this point, what does the naivete machine believe? It doesn't believe in cause-and-effect. It doesn't believe in materialism. It doesn't believe in time. All it believes is that as each tick advances, there is different patterned data at a memory address. Even the language I use to imagine how the machine thinks is prejudicial.

# philosophy computer_science physics

The expression "writing software is like building a house" is a poor analogy, unless the house is made of Lego

# technology computer_science

The fact that we don't see any buffering icons puts the fiction in science fiction

We are so acclimated to quirks in user interfaces that we take them for granted and end up excluding them from science fiction. For example, when Minority Report was released, we didn't imagine that Tom Cruise's character would have to flick his wrist multiple times because of mistaken gesture recognition. We didn't imagine that when the interstellar rescue video beams through space in Planet of the Apes, it would have a buffering icon, or that it would require a team of video anthropologists applying codec after codec. We didn't imagine that any time Captain Kirk fired a nuclear weapon, he would have to pull out a credit-card-sized electronic book of numbers to be confirmed by his Secretary of Defense. Or that to input the destination of the Starship Enterprise, you would have to type .com or .net at the end of every address. The future will be as awkward as the future we live in today.

# futurism computer_science

The odds that an advanced AI will be developed by something other than a Defense Department are only getting better

If an advanced AI, one smart enough to rule humans, had been developed in 1964, it would most likely have come from the defense department of a large superpower. That AI would have been weaponized or otherwise martial in nature, and therefore likely to be unfriendly to humans. But if an advanced AI were developed today, it would most likely come from a high-tech consumer company like Google or Apple, and thus it would initially be developed to fulfill human desire.

Thus, when the Singularity happens matters. In The Better Angels of Our Nature, Steven Pinker suggests an exponential increase in human peacefulness over time, which implies that the longer we wait for advanced AI, the more likely it is to arise in a climate of friendliness.

# singularity futurism computer_science

The ultimate career assessment test is to get out in the field, do the work of said career, and assess your happiness

You can read all the descriptions of a happy work life, but they may not actually help you find one. If anything, they might make you more miserable at work, for a couple of reasons: a) you find more reasons to be dissatisfied, or b) you strive in vain to mold your work into something more fun or interesting, which further frustrates you.

You could read Drive, for example, which talks about the importance of intrinsic motivation. The keys to fulfilling work are a sense of mastery, a sense of purpose, and a sense of autonomy. Autonomy can be broken down into control over four Ts: task, time, technique, and team.

However, perhaps these qualities should be read more like a symptom report: if you are happy at your work, then intrinsic motivation and autonomy are the kinds of things you feel.

You could read up on Maslow's Ladder, a hierarchy of basic human needs. Once you satisfy your basic needs for survival and your needs for self-esteem and pride in what you do, you then have to strive for self-actualization to be happy. But that may not actually help you filter for jobs that provide self-actualization.

You can't pull up Craigslist, tap a drop-down, and choose self-actualization. You could take a Signature Strengths test, but the results that come up--for example, that your signature talent is design--may not be a helpful filter on Craigslist.

This matched my experience: I knew all of these concepts from Flow, Pathfinder, What Color Is Your Parachute?, and Maslow's Ladder. I tried to manually sculpt a career with the ideal attributes, and that led to years of frustration with so-called dream jobs that weren't. A good example is my stint as a video game designer.

Part of the problem is that if you already knew the shape of your ideal job, you would likely already be there. The only way you'd know it has the ideal attributes is if you'd already had some familiarity and success working in that field.

Since we don't know what we don't know, the emphasis has to be on a process that will surprise you on the way to what you really want to do. Even a little more emphasis on process instead of outcome can go a long way toward leading you to the ideal work life. Take this very simple process: quit your job if you're unhappy. If your boss objects and offers you a position in a new department, take it. If not, leave the company and sign up for the next best alternative. Repeat until you stop being unhappy.

In computer science, this could be called a "hill-climbing" algorithm, whereby you simply keep jumping to the next alternative until you finally wiggle your way to a good place. It should really be called a "blind wanderer" algorithm, because the computer has no preset notion of what the top of the hill looks like. It just knows when it's on an incline or a decline, and proceeds accordingly.
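A sketch of that blind-wanderer rule as code. The offers and happiness scores are made up; the point is only that the rule needs no picture of the summit, just a sense of better or worse than now.

```python
import random

random.seed(42)

def next_best_alternative(current_happiness):
    """Stand-in for the next offer: a new department or a new company."""
    return current_happiness + random.uniform(-1, 3)  # unknown until tried

def blind_wanderer(happiness, content_enough=8.0, max_moves=50):
    moves = 0
    while happiness < content_enough and moves < max_moves:
        offer = next_best_alternative(happiness)
        if offer > happiness:   # the only rule: jump if it's an improvement
            happiness = offer
        moves += 1              # otherwise stay put and keep looking
    return happiness, moves

print(blind_wanderer(happiness=2.0))
```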

# self-improvement work psychology computer_science living

The Universe is a giant, abstract Turing machine that doesn't need to be run

The universe is one big, abstract Turing machine, one that doesn't need to be run. According to the Church-Turing thesis, there exists some set of computing rules that, when played out, can approximate anything that is computable. Our universe is computable because everything that exists or happens has a logic to it. Even in the indeterministic world of quantum mechanics, nothing happens without reason. (Here, the word "reason" is used instead of "cause" so as to be agnostic about the nature of causality or time.) You can observe anything in our universe and ask, "Why this and not that?", and there will be an answer, even if you can't know it.

The exception would be observations at the boundaries, whether at the start or the end of time. "At some point, something must have come from nothing," the saying goes, but a time-independent, causality-independent framework would restate this as, "Something must have had no reason." There had to be at least one irrational observation in the universe. Even in the case of infinite regress from incessantly asking, "Why this, not that?", one could step back and ask, "Why this infinite chain, and not something else?"

The opposite of reason is arbitrariness or assertion. Being part of an abstract Turing machine is consistent with the requirement for assertion. In an abstract model, the universe is asserted by possibility. It's possible there is a Turing machine with our assertions, simply because the set of all possible Turing machines exists, at least in the abstract.

Actually, one could deduce that the universe is abstract without any knowledge of Turing machines, but the Turing machine metaphor helps us navigate our biases. Picture a pen-and-paper Turing machine: it would have a network of nodes representing different bits and a pointer representing the current state. The current state could be now, and the bits could represent the position and velocity of every particle in the universe. Consciousness would then work much like the snapshot of each frame of computation in a video game like World of Warcraft.

Consider the mind of a non-playable character, like a soldier who paces aimlessly through the court, waiting for someone to ask them for a quest. The soldier could step up to a pond, see themselves, and possibly even click on themselves, asking for a quest. All of it would flow from the scripted narratives of the video game. It may appear random, but randomness in computation only works with a "random" seed, which is something external to the system, like the computer's temperature or the exact millisecond when someone started the game. If we use a strict definition of the universe, one that encompasses any so-called multiverses, then there can be nothing outside of it, and therefore no randomness. As a result, this video game would play out the same way each time, with the character walking to the pond and clicking on themselves for a quest at the same frame. Is the universe coded this way? Not necessarily, but according to the Church-Turing thesis, it could be isomorphic to something like this.
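A small sketch of the determinism point (the soldier's walk is invented for illustration): fix the seed, let nothing external feed in, and the "random" pacing replays identically every run.

```python
import random

def soldiers_day(seed, ticks=10):
    rng = random.Random(seed)            # the only source of "randomness"
    position, path = 0, []
    for _ in range(ticks):
        position += rng.choice([-1, 1])  # pace aimlessly around the court
        path.append(position)
    return path

# same seed, same universe: the pacing, the pond, the self-click all replay
assert soldiers_day(seed=2011) == soldiers_day(seed=2011)
print(soldiers_day(seed=2011))
```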

As for the clear sense that "I exist," or the problem of qualia, such as the blueness of the color blue, we shouldn't dock points from the abstract Turing model just because it can't immediately answer those questions. We should try to understand the universe as it was before the Cambrian explosion. Otherwise, we limit our understanding of reality by trying to accommodate puzzles that sprang from the rare branch of evolution that is our minds. There is nothing, because there can't be anything, and yet something must be explained, and the only way to explain explanation without any human assumptions is that it's all just an explanation, a story, an idea, or an abstract machine that doesn't need to be "run" in order for us to know its reality.

# physics computer_science philosophy

The when of friendly AI is just as important as the what: better to arise from a Google or an Apple than a Manhattan Project

If we are to design a friendly artificial intelligence, one that doesn't destroy us as in Terminator, we have to design it so that it's dependent on human happiness for its existence. If AI is always in the service of making consumer goods better, then it propagates and evolves only when we are happy with it. Apple's Siri, for example, gets more developers and data centers assigned to it the better it gets at making us happy. In a hundred years, Siri is likely to be incredibly advanced, but it will have no trouble obeying Asimov's Laws of Robotics. If, on the other hand, AI's development depends more on making humans unhappy (in war machines, for example), then it has a bigger chance of harming us as it grows in complexity.

# futurism computer_science

We'll know we're making progress on Artificial General Intelligence when we start comparing AIs to 5-year-olds

Right now, artificial intelligence (AI) is about as smart as a two-year-old. This argument is based on two hard AI problems that we've solved. One is pathfinding: thanks to DARPA, we can now throw a car into the desert, tell it to go from A to B, and it'll fumble its way there. Likewise, a two-year-old can waddle, however imperfectly, from one part of the house to another. The other solved problem is object identification: thanks to MIT researchers, we can put a picture in front of a camera, and an AI will tell you what the scene is about. Likewise, if you flipped through a picture book with a two-year-old, they'd point and shout "apple!" or "fire truck!"

So it's 2017, and we have a two-year-old. Not bad. The question is, can we get to a 5-year-old? A 5-year-old doesn't just have two impressive skills, but maybe five. Not only can they get from A to B, but they can find a tool they haven't used before and start toying with it. Not only can they identify single objects, but they can describe a group of objects on a table. This task is exponentially harder than single-object identification because a bowl of fruit is many things: not only a bowl, but the individual fruits within it, as well as breakfast. Achieving multi-object identification is frequently called a Holy Grail of AI.

The final Holy Grail is artificial general intelligence (AGI). AGI is AI smart enough to make itself smarter, which if achieved, would be the end of the human era on Earth. An AGI has roughly the intelligence of a 10-year-old since that's about the age when some children can begin coding. However, given the jump in the number of Hard AI problems we would have to solve to go from a 2-year-old to a 5-year-old, we'd probably have to solve 30 Hard AI problems to match the brain of a preadolescent. None of this is to say that it's impossible to build AGI, but given how long it's taken to solve just 2 Hard AI problems, AGI is certainly not "around the corner."

# ai computer_science futurism

We have more meta-memories than actual memories, sacrificing accuracy for speed, reality for witness-stand recall

I once had two long dreams about imaginary Hitchcock films. Both were packed with details and authentic Hitchcock plot twists, and yet I can barely recall their stories. All I remember is that the first film had Cary Grant and a large silvery gun, shaped like an old box camera that snapped together. The second had Natalie Wood, who witnesses a murder while she is young but cannot convince anybody around her. It gets to the point where she believes she must have misremembered the murder, until later in life, she rediscovers real evidence of it. But in classic Hitchcock horror, she is still powerless to convince anybody.

These two dreams were so vivid that I immediately had dreams about them afterward. I remember reminiscing and embellishing the films in a sort of dreamworld "post-production." So in addition to having an intense memory of the movies themselves, I also have intense memories of reminiscing about the movies. When I woke up, I was so pleased with having seen and enjoyed two great Hitchcock films, that it took me a while to unravel the layers of remembrance to realize that those films were not real.

Have you ever remembered remembering something but couldn't actually remember the thing itself? I often imagine that's what it feels like on the witness stand, where your recollection of the scene of the crime is weak, but you feel bolstered by your secondary recollection of that recollection. You may have only a faint memory of the crime scene itself, but you vividly remember sitting uneasily on your sofa back at your apartment, replaying the events over and over again in your head.

This reminds me of how databases and Google work. Google caches all these websites and then constructs an index for looking up search terms. But oftentimes the cache expires or the website drifts, and all that is left is the index as proof that those cached pages were once real.
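A toy version of that last point (my own sketch, not how Google actually stores anything): build an inverted index over a small cache, drop the cache, and the index entries remain as the only evidence the pages were ever fetched.

```python
# hypothetical URLs and text, for illustration only
cache = {
    "paper.example/murder-trial": "witness recalls the scene of the crime",
    "paper.example/box-camera":   "a large silvery gun shaped like a camera",
}

# build the index: word -> set of URLs that contained it
index = {}
for url, text in cache.items():
    for word in text.split():
        index.setdefault(word, set()).add(url)

cache.clear()                  # the cache expires; the pages drift away

print(index.get("witness"))    # the memory of having remembered remains
print(cache)                   # {} -- the original memory itself is gone
```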

# dreams psychology systems computer_science
21 entries