Today Twitter and Facebook decided to manually limit the spread of the NY Post's unverified story about a hack on the Biden family. Taking responsibility for some of the broad impacts of their systems is an excellent move.
But the fact that FB+Twitter needed to intervene is a symptom of badly flawed systems. Left alone, those systems would have amplified the misinformation and caused widespread confusion. In other words, we all know our big social networks have a bug. It is a fundamental bug with ethical implications, but in the end it is a bug, and as engineers we need to learn to fix this kind of issue. As a field, we need to be willing to figure out how to design our systems to be ethical. To be good.
What does it mean for an AI to be good?
The fundamental reason Twitter and Facebook and Google are having such problems is that the objectives used to train these systems are wrong. We can easily count clicks, minutes of engagement, re-shares, transactions. So we maximize those. But we know that these are not actually the right goals.
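To make the contrast concrete, here is a minimal sketch, with hypothetical field names and weights, of what "training on the wrong objective" means: the ranking machinery is identical in both cases, and the only thing that changes is the quantity it is told to maximize.

```python
# Toy illustration: the same ranker, driven by two different objectives.
# All names and weights are hypothetical, not any platform's actual system.

from dataclasses import dataclass

@dataclass
class Item:
    predicted_clicks: float    # easy to count, easy to optimize
    predicted_reshares: float  # likewise
    decision_value: float      # hypothetical: how much this item helps
                               # the user make a better decision

def engagement_score(item: Item) -> float:
    # What today's systems effectively maximize: countable engagement.
    return item.predicted_clicks + 0.5 * item.predicted_reshares

def agency_score(item: Item) -> float:
    # What a good system would maximize instead. Estimating
    # decision_value is the hard, unsolved part.
    return item.decision_value

def rank(items: list[Item], score) -> list[Item]:
    # The ranker itself is neutral; the objective does all the work.
    return sorted(items, key=score, reverse=True)
```

Swapping `engagement_score` for `agency_score` in that last call is trivial; the entire difficulty lives in producing a trustworthy `decision_value` in the first place.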
The right goal? In the end, a system serves users, and so its purpose is to expand human agency. A good AI must help human users make better decisions.
Yet improving decisions is quite a bit harder than maximizing page views. It requires getting into subtle issues, developing an understanding of what it means to be helpful, informative, honest. And it means being willing to take on tricky choices that have traditionally been the realm of editors and policymakers. But it is possible. And, as a field, it is what we should be aiming for.
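One hedged sketch of what taking on those subtle issues could look like in practice, assuming (hypothetically) that human raters score items along exactly the axes named above and that a model is then trained to predict those ratings as its reward signal:

```python
# Hypothetical: raters label items as helpful, informative, and honest,
# and those labels become the training signal instead of engagement.

LABEL_AXES = ("helpful", "informative", "honest")

def rating_to_reward(ratings: dict[str, float]) -> float:
    # Simple average of per-axis ratings in [0, 1]. A real system would
    # have to resolve disagreements between raters and trade-offs between
    # axes, which is exactly the editorial and policy territory described
    # above.
    return sum(ratings[axis] for axis in LABEL_AXES) / len(LABEL_AXES)
```

Even this toy version makes the point: the moment you write down what "good" means, you are making editorial choices, and pretending otherwise just hides them inside the objective.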
A few more thoughts in previous posts:
The Purpose of AI
Volo Ergo Sum