Feel the pain
Emotional decision-making is undesirable in many contexts, but sometimes it needs to be part of the picture, insofar as our emotions hold a mirror to our morals. When machines make decisions, the opportunity to weigh that emotional input goes away. This is a recurring concern I keep hearing from people working with, or responding to, AI. Here are two recent examples I came across that set the concern out in two different contexts: loneliness and war.
This is Anna Mae Duane, director of the University of Connecticut Humanities Institute, writing in The Conversation:
There is little danger that AI companions will courageously tell us truths that we would rather not hear. That is precisely the problem. My concern is not that people will harm sentient robots. I fear how humans will be damaged by the moral vacuum created when their primary social contacts are designed solely to serve the emotional needs of the “user”.
And this is from Yuval Abraham’s investigation for +972 Magazine on Israel’s chilling use of AI to populate its “kill lists”:
“It has proven itself,” said B., the senior source. “There’s something about the statistical approach that sets you to a certain norm and standard. There has been an illogical amount of [bombings] in this operation. This is unparalleled, in my memory. And I have much more trust in a statistical mechanism than a soldier who lost a friend two days ago. Everyone there, including me, lost people on October 7. The machine did it coldly. And that made it easier.”