January 12, 2018

The Purpose of AI

Is Omniscience Good?

There are benefits to having computer systems that know everything. For example, yesterday a friend recounted a story about leaving a laptop in a taxi in China. Local police stations in China have a system that can call up any recorded video anywhere in the city, so they used the approximate time of the taxi ride to obtain a video clip of the exact moment of the cab pickup. Soon they had the plate number and called the driver, who promptly found the laptop and returned it to its owner.

Today, routine total surveillance in China is coupled with AI systems that constantly sift through the vast stream of data to identify and track every individual person, catalog every interaction, and flag anomalous behavior. This makes prosecuting crime very easy in China. The court can be presented with a videotape summary of footage of the accused in the hours and days before and after the crime. AI systems, connected to a total surveillance apparatus, are able to automate weeks of police work and construct a narrative about why a person is guilty. The same systems also simplify the hard work of putting a rapid stop to uncomfortable social disruptions such as demonstrations and protests. China has no First and Fourth Amendments to give it pause, and so that country gives us a glimpse of what is possible with widely available technology today. And maybe it is a picture of humanity's future everywhere. Quiet streets, low crime, no picketing. Never lose a laptop again.

Is that a good thing?

The Purpose of AI

In our pursuit of making AI systems that are more accurate, faster, and leaner, we risk losing sight of the fundamental design question: what is it for? The systems that we build are complex, with multiple intelligent agents interacting in the system, some human and some not. So to understand the design of the whole system, we must ask: what is the role of the human, and what is the role of the AI?

Both humans and AI can perceive, predict, and generalize, so there is sometimes a misperception that the two roles are interchangeable. But they are not. Humans stand apart because their purpose is to be the active agents, the deciders. If that is the case, then what is the role of the AI? Can an AI have agency? There are two forms of interaction between AI behavior and human behavior where agency seems messy:

- An AI can predict human behavior.
- An AI can shape human behavior.
The problem with optimizing a system around these two design goals is that they presume no role for human agency. It is assumed that a good system will make more accurate predictions - for example, the way Facebook is very good at predicting which thing you will click on next. And it is assumed that a good system will be more effective at shaping future behavior - for example, the way Google is very good at arranging advertising to maximize your likelihood of clicking on it. If a system is designed around those principles alone, the humans are treated as just a random variable to be manipulated, and there is no decision maker in the design. These designs are incomplete. Like any engineered system, an AI is always designed for some purpose. When we do not consider that purpose, the actual decision makers have been erased from the picture. The proper purpose of an AI is this:

An AI should amplify the agency of its human users by helping them make better decisions.
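To see what such an incomplete design looks like in code, here is a minimal sketch built around those two goals alone. Every name in it is hypothetical (there is no claim that this is how any real system is implemented); the point is only what the objective leaves out.

    def rank_feed(candidate_items, user_features, click_model):
        """Order content purely by predicted engagement."""
        scored = []
        for item in candidate_items:
            # click_model stands in for any trained predictor that
            # estimates P(click | user, item) from past behavior.
            p_click = click_model.predict(user_features, item)
            scored.append((p_click, item))
        # Arrange the feed to maximize the likelihood of the next click.
        # Note what is missing: any term for whether clicking serves
        # the user's own goals. The human appears only as a random
        # variable to be predicted and steered.
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [item for _, item in scored]

The objective is fully specified without any notion of a good or bad decision, which is exactly the incompleteness described above.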
What a Good AI Does

The question of AI goodness comes down to how we evaluate an AI. We cannot stop at evaluating merely whether an AI is more accurately predictive, or whether it is more effective in achieving an outcome. We need to be transparent about answering these questions:

- Who are the users of the AI?
- What decisions do those users make?
- Does the AI improve those decisions?
For example, with the Chinese surveillance system, the people being observed by the cameras are not making any decisions that are improved by the AI. The people on the street are not the users. The actual users are the people behind the console at the police station: they are the ones whose agency is amplified by the system. They use the system to help decide what to look at, whom to call, and whom to arrest. To understand whether the AI is good, we need to understand whether it is serving the right set of users, and whether their decisions are improved. That means beginning with an honest discussion of what it means for a police officer to make a good decision. The right answer is likely to be more complicated than a question of crime and punishment.

Most of us work on more prosaic systems. Today I spoke with a researcher who is applying AI to an educational system. She has a large dataset of creations (student programs) made by thousands of students, and she wants to make suggestions to new students about which pieces (program statements) they might want to include in their own creations. In her case, the target user is clearly the student making the creation, and the system is being optimized to predict the user's behavior. However, the right metric is not predictive accuracy, but whether the user's decisions are improved. That leads to a more difficult discussion of what it means to make a good decision. For example, if most students in her dataset share a common misconception about the subject being learned, then the system will maximize predictive accuracy by propagating the same misconception to future students. But that does not amplify the agency of users; it does not improve decision making. Instead, it is exactly the type of optimization that produces an AI that dulls the senses of its users (a sketch of this failure mode appears at the end of this post).

This is the same problem being faced by Facebook and Google today. Misconceptions, lazy decision making, and addictive behavior are all common human phenomena that are easy to predict and trigger, so when we optimize systems to maximize accuracy and efficacy in their interactions with humans, the systems solve the problem by propagating these undesirable behaviors. The AI succeeds at its optimization by robbing humans of their agency. But this is not inevitable: AI does not need to dehumanize its users.

Building Good AI is Hard

To build a good AI, it is not enough to ask the AI merely to predict behavior or to shape it reliably. We need to understand how to create AI that amplifies the ability of human users to make good decisions. We need to understand who the users are and what decisions they make. In the end, building a good AI means building an authentic understanding of what it means to make a good decision.
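To make the educational example concrete, here is a minimal sketch of the failure mode described above. All of the data and names are hypothetical; the point is only that an objective of pure predictive accuracy rewards suggesting whatever most prior students wrote, correct or not.

    # A sketch, with made-up data, of a statement recommender trained
    # only for predictive accuracy. If a misconception is common in the
    # corpus, the most accurate prediction is to suggest it again.

    from collections import Counter

    # Statements written by prior students after a prompt like
    # "loop over a list". Suppose most share an off-by-one bug.
    prior_student_statements = [
        "for i in range(len(xs) - 1):",   # off-by-one bug (majority)
        "for i in range(len(xs) - 1):",
        "for i in range(len(xs) - 1):",
        "for i in range(len(xs)):",       # correct, but in the minority
    ]

    def suggest_next_statement(corpus):
        """Maximize predictive accuracy: return the most likely statement."""
        counts = Counter(corpus)
        best, _ = counts.most_common(1)[0]
        return best

    # The "most accurate" suggestion propagates the misconception.
    print(suggest_next_statement(prior_student_statements))
    # -> "for i in range(len(xs) - 1):"

Nothing in this objective distinguishes a popular statement from a good one; to improve the student's decisions, the system would need some measure of learning outcomes, not just of likelihood.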
Posted by David at January 12, 2018 09:14 AM
Copyright 2018 © David Bau. All Rights Reserved.