December 18, 2017

In Code We Trust?

As world leaders show themselves prone to falsehood, corruption, greed, and malice, it is tempting to find a new authority in which to place our trust. In today's NYT, Tim Wu observes that the rise of Bitcoin is evidence of humanity's new trust in code: "In our fear of human error, we are putting an increasingly deep faith in technology."

But is this faith well-placed if we do not know how code works or why it does what it does?

Trust in AI Today Is About Trust in Testing

Take AI systems. Deep networks used to parse speech or recognize images are subjected to massive batteries of tests before they are used. And so in that sense they are more scrutinized than any person we might hire to do the same job. Trusting a highly scrutinized system seems much better than trusting something untested.

But here is one way that modern AI falls short: we do not expect most AIs to justify, explain, or account for their thinking. And perhaps we do not feel the need for any explanation. Even though explainability is often brought up in the context of medical decisions, my physician friends live in a world of clinical trials, and many of them believe that such rigorous testing is, on its own, the ultimate proof of utility. You can have all the theories in the world about why something should work, but no theory is as important as experimental evidence that it works. What other proof do we need beyond a rigorous test? Who cares what anybody claims about why it should work, as long as it actually does?

Battle: LeCun vs Rahimi

How much faith to place in empirical versus theoretical results is a debate that is currently raging among AI researchers. On the sidelines of the NIPS 2017 conference, a pitched argument broke out between Yann LeCun (the empiricist) and Ali Rahimi (the theoretician), who disagree on whether empirical AI results without a theoretical foundation just amount to a modern form of alchemy.

I side with Rahimi in his revulsion against blind empiricism, but perhaps for different reasons than his. I do not worship the mathematics of rigorous theory. I think the relationship with humans is what is important: we should not trust code unless its behavior is governed by rules that a person can understand.

The Mathematics of Interpretability

There are two reasons that test results need to be complemented by understandable rules. One is mathematical, and the other is philosophical.

Math first. Our modern AI systems, by their nature, respond to thousands of bits of input. So we should weigh any claim of thorough testing against a harsh fact: visiting each of the 2^1000 possible states of just a thousand bits of input (a mere 125 bytes) would require more tests than there are atoms in the observable universe, even if every atom contained a whole extra universe within it. Most realistic input spaces are far larger still, so no test can be thorough in the sense of covering any significant portion of the possibilities.
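To make the scale concrete, here is a quick back-of-the-envelope check (a minimal Python sketch; the 10^80 figure is the usual rough estimate for the number of atoms in the observable universe):

    # Rough scale check for the claim above (plain Python 3, exact integers).
    input_bits = 1000
    possible_inputs = 2 ** input_bits        # distinct states of 1000 bits (125 bytes)
    atoms = 10 ** 80                         # rough estimate: atoms in the observable universe
    atoms_nested = atoms ** 2                # a whole extra universe inside every atom

    print(len(str(possible_inputs)) - 1)     # ~301, i.e. possible_inputs is about 10^301
    print(atoms_nested < possible_inputs)    # True: still far too few atoms to serve as tests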

Furthermore, a sample can only accurately summarize a distribution under the assumption that the world never changes. But humanity imposes a huge rate of change on the world: we change our climate rapidly, we disrupt our governments and businesses regularly, we change our technology faster still, and whenever we create a new computer system, adversaries immediately set about changing the rules to beat it.

Testing is helpful, but "exhaustive" testing is an illusion.

The Philosophy of Interpretability

Philosophy next. This impossibility of testing every possible situation in advance is not a new problem: it has been faced by humanity forever (and, arguably, it is also one of the core problems facing all biological life).

It is in response to this state explosion that mankind invented philosophy, law, engineering, and science. These assemblages of rules are an attempt to distill what we think is important about the individual outcomes we have observed, so that when unanticipated situations arise, we can turn to our old rules and make good, sound decisions again. That is the purpose of ethics, case law, and construction standards. That is the reason that the scientific method is not just about observations, but about creating models and hypotheses before making them.

We should hold our code to the same standard. It is not good enough for it to perform well on a test. Code should also follow a set of understandable rules that let us anticipate its behavior.

Humans need interpretable rules so that we can play our proper role in society. We are the deciders. And to decide whether to use a machine, we need to be able to see whether its model of action matches what we think it should be doing, so that when it inevitably faces the many situations in a changing world that have never been tested, we can still anticipate its behavior.

If the world never changed and the state space were small, mechanisms would not be so important: tests would be enough. But that is not what code is for in the modern world. Code is humanity's way of putting complexity in a bottle. Therefore its use demands explanations.

Are Understandable Models Possible?

This philosophy of rule-making all sounds vague. Is it possible to do this usefully while still preserving the almost magical success of deep networks?

I am sure that it is possible, although I don't think it will necessarily be easy. There is probably some real math involved. And the world of AI may turn out to be easier to explain than it looks, or it may not. But it is worth a try.

So, I think, that will be the topic of my dissertation!

Posted by David at December 18, 2017 10:00 PM