If AI is among us, would we know?
… an actual AI might be so alien that it would not see us at all. What we regard as its inputs and outputs might not map neatly to the system’s own sensory modalities. Its inner phenomenal experience could be almost unimaginable in human terms. The philosopher Thomas Nagel’s famous question – ‘What is it like to be a bat?’ – seems tame by comparison. A system might not be able – or want – to participate in the classic appraisals of consciousness such as the Turing Test. It might operate on such different timescales or be so profoundly locked-in that, as the MIT cosmologist Max Tegmark has suggested, in effect it occupies a parallel universe governed by its own laws.
The first aliens that human beings encounter will probably not be from some other planet, but of our own creation. We cannot assume that they will contact us first. If we want to find such aliens and understand them, we need to reach out. And to do that we need to go beyond simply trying to build a conscious machine. We need an all-purpose consciousness detector.
Interesting perspective by George Musser – that of a “consciousness creep”. In the larger scheme of things (of very complex things in particular), isn’t a consciousness creep statistically inevitable? Musser himself writes that “despite decades of focused effort, computer scientists haven’t managed to build an AI system intentionally”. As a result, perfect comprehension of the subsystem that confers intelligence upon the whole is likely to come only gradually – as we’re able to map more of the system’s actions to their stimuli. In fact, until that moment of perfect comprehension, our growing knowledge won’t reflect a ‘consciousness creep’ but a more meaningful, quantifiable ‘cognisance creep’ – especially if we already acknowledge that some systems have achieved self-awareness and are able to think and compute intelligently.
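To make “quantifiable” concrete, here is a minimal toy sketch in Python (my own illustration, not anything Musser proposes; the cognisance function, the model names and the logged data are all hypothetical): treat cognisance as the fraction of a system’s observed actions that our current explanatory model correctly predicts from their stimuli, and watch that fraction creep upward as the stimulus–action mapping improves.

    from typing import Callable, Sequence, Tuple

    # Toy illustration of a quantifiable "cognisance creep": cognisance is
    # the share of observed (stimulus -> action) pairs that our current
    # model of the system explains. As we map more of the system's actions
    # to their stimuli, the score creeps toward 1.0.

    Observation = Tuple[str, str]  # a logged (stimulus, action) pair

    def cognisance(model: Callable[[str], str],
                   log: Sequence[Observation]) -> float:
        """Fraction of logged actions the model predicts from their stimuli."""
        if not log:
            return 0.0
        hits = sum(1 for stimulus, action in log if model(stimulus) == action)
        return hits / len(log)

    log = [("ping", "pong"), ("query", "answer"), ("silence", "idle")]

    def early_model(stimulus: str) -> str:
        return "pong"  # a crude first model: explains only one behaviour

    def later_model(stimulus: str) -> str:
        return {"ping": "pong", "query": "answer", "silence": "idle"}[stimulus]

    print(cognisance(early_model, log))  # ~0.33 - little cognisance yet
    print(cognisance(later_model, log))  # 1.0   - the creep is complete

Nothing here detects consciousness, of course – it only measures how much of the system’s behaviour we can currently account for, which is precisely the distinction between a consciousness creep and a cognisance creep.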