The AI trust deficit predates AI
There are alien minds among us. Not the little green men of science fiction, but the alien minds that power the facial recognition in your smartphone, determine your creditworthiness and write poetry and computer code. These alien minds are artificial intelligence systems, the ghost in the machine that you encounter daily.
But AI systems have a significant limitation: Many of their inner workings are impenetrable, making them fundamentally unexplainable and unpredictable. Furthermore, constructing AI systems that behave in ways that people expect is a significant challenge.
If you fundamentally don’t understand something as unpredictable as AI, how can you trust it?
Trust plays an important role in the public understanding of science. The excerpt above – from an article by Mark Bailey, chair of Cyber Intelligence and Data Science at the National Intelligence University, Maryland, in The Conversation about whether we can trust AI – illustrates this.
Bailey treats AI systems as “alien minds” because of their, rather than their makers’, inscrutable purposes. They are inscrutable not just because their workings are obscured but because, even under scrutiny, it is difficult to determine how an advanced machine-based logic makes decisions.
Setting aside questions about the extent to which such a claim is true, Bailey’s argument about the trustworthiness of such systems can be stratified by its audience: AI experts and non-experts. It is with the latter that I take limited issue with Bailey’s contention. To non-experts – which I take to be everyone from those not trained as scientists (in any field) to those trained as such but unfamiliar with AI – the question of trust is more wide-ranging. They already place a lot of trust in (non-AI) technologies that they don’t understand, and probably never will. Should they rethink their trust in these systems? Or should we take their trust to be ill-founded and in need of ‘improvement’?
Part of Bailey’s argument is that there are questions about whether we can or should trust AI when we don’t understand it. Aside from AI in a generic sense, he uses the example of self-driving cars and a variation of the trolley problem. While these technologies illustrate his point, they also give the impression that AI systems making decisions misaligned with human expectations, and struggling to incorporate ethics, is a problem restricted to high technologies. It isn’t. The trust deficit vis-à-vis technology predates AI. Many of the technologies that non-experts trust, but which don’t uphold that trust (so to speak), are not high-tech; examples from India alone include biometric scanners (for Aadhaar), public transport infrastructure, and mechanisation in agriculture. This is because people’s use of any technology beyond their ability to understand is mediated by social relationships, economic agency, and cultural preferences – not technical know-how.
For the layperson, trust in a technology is really trust in an institution, in individuals, or even in some organising principle (traditions, religion, etc.) – and this is as it should be, perhaps even for the more sophisticated AI systems of the future. Many of us will never fully understand how a deep-learning neural network works, nor should we be expected to, but that doesn’t automatically make AI systems untrustworthy. I expect to be able to trust scientists in government and in respectable scientific institutions to discharge their duties in a public-spirited fashion and with integrity, so that I can trust their verdict on AI, or anything else in a similar vein.
Bailey also writes later in the article that some day, AI systems’ inner workings could become so opaque that scientists may no longer be able to connect their inputs with their outputs in a scientifically complete way. According to Bailey: “It is important to resolve the explainability and alignment issues before the critical point is reached where human intervention becomes impossible.” This is fair, but it also misses the point a little by limiting the entities that can intervene to individuals and built-in technical safeguards, like working an ethical ‘component’ into the system’s decision-making framework. A broader view would keep in the picture the public institutions, including policies, that will be responsible for translating AI systems’ output into public welfare. Even today in India, that’s what’s failing us – not the technologies themselves – and therein lies the trust deficit.
Featured image credit: Cash Macanaya/Unsplash.