January 12, 2018

The Purpose of AI

What does it mean for an AI to be good?

Is Omniscience Good?

There are benefits to having computer systems that know everything. For example, yesterday a friend recounted a story about leaving a laptop in a taxi in China. Local police stations in China have a system that can call up any recorded video anywhere in the city, so they used the approximate time of the taxi ride to obtain a video clip of the exact moment of the cab pickup. Soon, they had the plate numbers and called the driver, who promptly found the laptop and returned it to its owner. Today, routine total surveillance in China is coupled with AI systems that constantly sift through the vast stream of data to identify and track every individual person, catalog every interaction, and flag anomalous behavior.

This makes prosecuting crime very easy in China. The court will be presented with a video summary of footage of the accused in the hours and days before and after the crime. AI systems, connected to a total surveillance apparatus, are able to automate weeks of police work and create a narrative about why a person is guilty. The same systems also simplify the hard work of putting a rapid stop to uncomfortable social disruptions such as demonstrations and protests.

China has no Fourth and First Amendments to give it pause, and so that country gives us a glimpse of what is possible with widely available technology today. And maybe it is a picture of humanity's future everywhere. Quiet streets, low crime, no picketing. Never lose a laptop again.

Is that a good thing?

The Purpose of AI

In our pursuit of making AI systems that are more accurate, faster, and leaner, we risk losing sight of the fundamental design question: What is it for? The systems that we build are complex, with multiple intelligent agents interacting within them, some human and some not. So to understand the design of the whole system, we must ask, what is the role of the human, and what is the role of the AI?

Both humans and AI can perceive, predict, and generalize, so there is sometimes a misperception that the two roles are interchangeable. But they are not. Humans stand apart because their purpose is to be the active agents, the deciders. If that is the case, then what is the role of the AI? Can an AI have agency?

There are two forms of interaction between AI behavior and human behavior where agency seems messy.

  • AI can predict human behavior.
  • AI can shape human behavior.

The problem with optimizing a system around these two design goals is that they presume no role for human agency. It is assumed that a good system will make more accurate predictions - for example, the way that Facebook is very good at predicting which thing you will click on next. And it is assumed that a good system will be more effective at shaping future behavior - for example, the way Google is very good at arranging advertising in a way that maximizes your likelihood of clicking on it.
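
To make that design goal concrete, here is a minimal sketch of the click-maximizing pattern in Python. The impression log, item names, and numbers are invented for illustration, and this is not a claim about how any particular company's system works; the point is only that the objective contains nothing but predicted engagement.

    # A minimal sketch of the "predict and maximize clicks" pattern.
    # The impression log and item names below are invented for illustration.
    from collections import defaultdict

    # Each record: (item shown to the user, whether the user clicked)
    impression_log = [
        ("outrage_headline", True), ("outrage_headline", True),
        ("outrage_headline", False),
        ("local_news", True), ("local_news", False),
        ("long_read", False), ("long_read", False),
    ]

    # "Learn" a click model: the estimated click probability of each item.
    shows, clicks = defaultdict(int), defaultdict(int)
    for item, clicked in impression_log:
        shows[item] += 1
        clicks[item] += int(clicked)
    click_rate = {item: clicks[item] / shows[item] for item in shows}

    # The system's entire objective: show whatever is most likely to be clicked.
    ranking = sorted(click_rate, key=click_rate.get, reverse=True)
    print(ranking)  # ['outrage_headline', 'local_news', 'long_read']

Nothing in that objective asks whether the person made a better decision by clicking; the user appears only as a quantity to be predicted.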

If a system is designed around those principles alone, the humans are just treated as a random variable to be manipulated, and there is no decision maker in the design. These designs are incomplete. Like any engineered system, an AI is always designed for some purpose. When we do not consider that purpose, the actual decision makers are erased from the picture.

The proper purpose of an AI is this:

  • AI should amplify human decisions.

What a Good AI Does

The question of AI goodness comes down to how we can evaluate whether an AI is good or not. We cannot stop at evaluating merely whether an AI is more accurately predictive, or whether it is more effective in achieving an outcome.

We need to be transparent about answering the questions:

  • Who is the user?
  • What decisions do they make?
  • Are the decisions improved by the AI?

For example, with the Chinese surveillance system, the people being observed by the cameras are not making any decisions that are improved by the AI. The people on the street are not the users. The actual users are the people behind the console at the police station: they are the ones whose agency is amplified by the system. They use the system to help decide what to look at, who to call, and who to arrest. To understand whether the AI is good, we need to understand whether it is serving the right set of users, and whether their decisions are improved. That means beginning with an honest discussion of what it means for a police officer to make a good decision. The right answer is likely to be more complicated than a question of crime and punishment.

Most of us work on more prosaic systems. Today I spoke with a researcher who is applying AI to an educational system. She has a large dataset of creations (student programs) made by thousands of students, and she wants to make suggestions to new students about what pieces (program statements) they might want to include in their own creations. In her case, the target user is clearly the student making the creation, and the system is being optimized to predict the user's behavior.

However, the right metric is not predictive accuracy, but whether the user's decisions are improved. That gets to a more difficult discussion of what it means to make a good decision. For example, if most students in her data set share a common misconception about the subject being learned, then the system will optimize predictive accuracy by propagating the same misconception to future students. But that does not amplify the agency of users; it does not improve decision making. Instead, it is exactly the type of optimization that results in an AI that will dull the senses of users.
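
As a sketch of how that failure mode arises, consider a purely predictive suggestion engine like the one below. The student programs and statement strings are invented stand-ins for her dataset; the point is that ranking suggestions by frequency alone will faithfully reproduce a widespread misconception.

    # A sketch of a suggestion engine optimized for predictive accuracy alone.
    # The student programs below are invented; each is a list of statements.
    from collections import Counter

    past_student_programs = [
        ["read_scores()", "total = total + x", "print(total)"],
        ["read_scores()", "total = total + x", "print(total / count)"],
        # A common misconception: dividing by a fixed constant, not the count.
        ["read_scores()", "total = total + x", "print(total / 10)"],
        ["read_scores()", "total = total + x", "print(total / 10)"],
        ["read_scores()", "total = total + x", "print(total / 10)"],
    ]

    def suggest(partial_program, k=2):
        """Suggest the k statements used most often by past students
        that the new student has not already written."""
        counts = Counter(s for prog in past_student_programs for s in prog)
        ranked = [s for s, _ in counts.most_common() if s not in partial_program]
        return ranked[:k]

    # A new student partway through an averaging program:
    print(suggest(["read_scores()", "total = total + x"]))
    # -> ['print(total / 10)', 'print(total)']
    # The most "accurate" suggestion is the misconception, because it is common.

A metric aimed at amplifying the student's decisions would instead have to score suggestions by whether they helped the student learn or complete the task, not by how well they match the most common past behavior.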

This is the same problem being faced by Facebook and Google today. Misconceptions, lazy decision making, and addictive behavior are all common human phenomena that are easy to predict and trigger, and so when we optimize systems to maximize accuracy and efficacy in their interactions with humans, the systems solve the problem by propagating these undesirable behaviors. The AI succeeds at its optimization by robbing humans of their agency. But this is not inevitable: AI does not need to dehumanize its users.

Building Good AI is Hard

To build good AI, it is not enough to ask our AI to merely predict behavior or shape it reliably. We need to understand how to create AI that helps amplify the ability of human users to make good decisions. We need to understand who the users are and what decisions they make.

In the end, building a good AI means building an authentic understanding of what it means to make a good decision.

Posted by David at 09:14 AM | Comments (0)

January 16, 2018

Analyzing the Robot Apocalypse

On a long drive this last weekend, I chatted with my ten-year-old about the only thing I really know anything about, which is how computers work. We talked about all the parts that go into a computer we had built for playing video games, and he joked that we should figure it all out "before the robot apocalypse happens."

I thought this joke was an interesting window into his ten-year-old thoughts, so I pressed him on it: "How will we know when it's the robot apocalypse? Maybe it has already happened."

His response was simple: "It will happen when computers are sentient."

Sentient is a sophisticated word for a ten-year-old! What does he think it means? "What does it mean to be sentient? That's what I'm asking."

Cody has a quick answer: "A robot is sentient if it's self-aware."

Ah, a common science-fiction trope. "And what does self-awareness mean? We just talked about how the BIOS on the motherboard starts up by figuring out which CPU, GPU, memory, disk, and I/O you have plugged in. Doesn't that mean that computers are already self-aware?" I thought I would start a debate with him about self-awareness, but he surprised me by having a different answer.

He said, "That's not all. To be sentient a computer can't just follow simple rules. It has to do complicated things, like the way the computer sees it when you attack it with a big army in AOE.''

Cody's Four Levels of Sentience

At ten, Cody has played lots of games against computer AIs, and so he has a remarkably subtle understanding of what AI can do. There is no stumping him. So over our hour-long drive, we chatted more about it, and I was able to glean a four-level model of what sentience means to Cody.

Level 1: Self-awareness. Can you think about your own thinking? A computer scientist might say this is roughly a way of saying that a computer can see its own program; or do recursion; or that it is Turing-complete. Cody quickly noticed that sentience is not just about having inward thoughts, but about the sophistication of a computer's relationship with data.

Level 2: Generalization. Can you handle messy data? Instead of following a simple set of brittle rules, sentience seems to be about having robust rules that let you make decisions in new situations. Cody explains that an AI game opponent needs to be flexible, since no two game situations are exactly the same. Any rules need to generalize. The program needs to be soft and flexible, not rigid.

Level 3: Adaptability. Can you learn new patterns? A sentient program could certainly be surprised if you sneak up on it. But Cody explains that a sentient AI would not be repeatedly surprised by the same thing. Soon enough, "like within an hour," says Cody, it would recognize a new pattern and adapt to it. So a sentient AI should never stop training.

Level 4: Opinions. Can you imagine? Cody recently watched a remarkable video by Jennifer Doudna, one of the inventors of CRISPR, who is calling for a clinical moratorium. Cody explains that she is able to imagine a future world of inventions that does not exist yet, and this leads her to form a strong opinion about something that has never been seen. So imagination is another part of being sentient, he explained.

Cody's four levels of sentient AI are: (1) know yourself; (2) act based on patterns, not hard rules; (3) learn by adapting to new patterns; (4) extrapolate based on imagined situations that have not yet been observed.

I think it is a pretty good roadmap when preparing for the coming robot apocalypse!

Posted by David at 12:12 PM | Comments (0)