AI

We'll know we're making progress on Artificial General Intelligence when we start comparing AIs to 5-year-olds

Right now, artificial intelligence (AI) is about as smart as a 2-year-old. This claim rests on two Hard AI problems that we've solved. One is pathfinding. Thanks to the DARPA Grand Challenge, we can now throw a car into the desert, tell it to go from A to B, and it'll fumble its way there. Likewise, a 2-year-old could waddle, however imperfectly, from one part of the house to another. The other solved problem is object identification. Thanks to MIT researchers, we can put a picture in front of a camera and an AI will tell us what the scene is about. Likewise, if you flipped through a picture book with a 2-year-old, they'd point and shout "apple!" or "fire truck!"
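To make the pathfinding half concrete, here's a minimal sketch of the kind of search that sits at the core of such systems: A* over a grid. Everything here (the grid, the unit step costs, the Manhattan heuristic) is an illustrative assumption, a toy far simpler than any real vehicle's planner.

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* search over a 2-D grid of 0 (free) / 1 (blocked) cells.

    Illustrative toy only: real vehicles plan over continuous space
    with noisy sensors, not a hand-built grid.
    """
    def h(cell):  # Manhattan-distance heuristic (admissible on a grid)
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (f, g, cell, path)
    seen = set()
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(
                    frontier,
                    (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]),
                )
    return None  # no route from A to B

# A tiny "desert" with one wall; the car fumbles its way around it.
desert = [[0, 0, 0],
          [1, 1, 0],
          [0, 0, 0]]
print(a_star(desert, (0, 0), (2, 0)))
# -> [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

The heuristic is what gives the behavior its "fumbling but directed" quality: the search doesn't wander aimlessly, it always expands the option that currently looks closest to B.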

So it's 2017, and we have a 2-year-old. Not bad. The question is: can we get to a 5-year-old? A 5-year-old doesn't just have two impressive skills, but maybe five. Not only can they get from A to B, but they can pick up a tool they've never used before and start toying with it. Not only can they identify single objects, but they can describe a group of objects on a table. That task is exponentially harder than single-object identification because a bowl of fruit is many things at once: it's a bowl, it's the individual fruits within it, and it's breakfast. Achieving multi-object identification is frequently called a Holy Grail for AI.
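A back-of-the-envelope calculation shows why "exponentially harder" is the right phrase. Assume, purely for illustration, a vocabulary of 1,000 object categories: a single-object classifier picks one of 1,000 labels, while a scene containing up to five of those objects already has trillions of possible compositions, before you even count emergent labels like "breakfast" that belong to the combination rather than to any one object.

```python
from math import comb

N = 1_000  # assumed vocabulary of 1,000 object categories (illustrative)

single_object_answers = N  # one label per picture
scenes_up_to_5_objects = sum(comb(N, k) for k in range(1, 6))

print(f"single-object labels:     {single_object_answers:,}")
print(f"scenes of <= 5 objects:   {scenes_up_to_5_objects:,}")
# roughly 8.3 trillion scenes -- and that's before emergent labels
# like "breakfast", which depend on the combination, not any one object
```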

The final Holy Grail is artificial general intelligence (AGI): AI smart enough to make itself smarter, which, if achieved, would mark the end of the human era on Earth. An AGI would have roughly the intelligence of a 10-year-old, since that's about the age at which some children can begin to code. But consider the jump between a 2-year-old and a 5-year-old: two solved Hard AI problems got us the toddler, and maybe five are needed for the kindergartner. If that growth compounds, we'd probably have to solve something like 30 Hard AI problems to match the brain of a preadolescent. None of this is to say that building AGI is impossible, but given how long it took to solve just two Hard AI problems, AGI is certainly not "around the corner."

We should be able to prove the materiality of consciousness soon thanks to AI

We'll find out soon enough whether Daniel Dennett was right that there is nothing special about consciousness. As we crawl up the artificial intelligence ladder, we should start to see basic forms of consciousness in our machines. We get a funny feeling of consciousness when we watch Deep Blue defeat Kasparov at chess, or when we imagine the grid computing behind Siri and sense that some intelligence is at work. But so far, our sense of a "ghost in the machine" isn't the same as watching a squirrel pause and scan its surroundings. We feel there is some kind of consciousness in the squirrel, albeit primitive, even if it isn't ours. Or take a far simpler creature, the starfish. When we poke it and it curls up, we sense some kind of consciousness: we know it felt something, as if its reaction were its way of saying "ouch." So if we're indeed making progress toward artificial general intelligence, we should be able to poke at our machines and sense a similar reaction.