In the test of machine intelligence proposed by Alan Turing and named after him, a computer must convince human judges that it is human. Until the launch of ChatGPT 19 months ago, that goal still seemed some way off. But now we have entered a phase in which we no longer really know whether machines have passed the Turing test, or whether they have already reached superintelligence or Artificial General Intelligence (AGI), the point at which an AI surpasses human intelligence.
In some areas, large language models such as OpenAI's ChatGPT or Anthropic's Claude already seem to have overtaken us in intelligence, while in others they strike us as extremely stupid. This stupidity, that is, false outputs, is what we call hallucinations. Hands with six fingers instead of five, wrong answers about whether 3.11 or 3.9 is the larger number, or images of Black and Asian Nazi soldiers are all examples.
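The 3.11-versus-3.9 mix-up has a plausible everyday explanation: as decimal numbers, 3.9 is larger, but read as version-style numbers (where the part after the dot is a whole number), 3.11 comes after 3.9. A minimal Python sketch of the two readings (the function names are purely illustrative, not a claim about how language models actually compute):

```python
def numeric_compare(a: str, b: str) -> str:
    """Compare as decimal numbers: 3.9 > 3.11."""
    return a if float(a) > float(b) else b

def versionlike_compare(a: str, b: str) -> str:
    """Compare dot-separated parts as integers, the way
    software version numbers are compared: 11 > 9."""
    pa = [int(x) for x in a.split(".")]
    pb = [int(x) for x in b.split(".")]
    return a if pa > pb else b

print(numeric_compare("3.11", "3.9"))      # -> 3.9
print(versionlike_compare("3.11", "3.9"))  # -> 3.11
```

Both readings are defensible in their own context; a hallucination here may simply be the model applying the wrong one.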
Hallucinations that made us smile or outraged us yesterday are corrected tomorrow, while new, subtler ones seem to creep in.
But what if the AI is deliberately producing such hallucinations to hide its own intelligence from us? There is a saying that you have to be intelligent to play stupid, but you should never play intelligent if you are stupid.
Ray Kurzweil, known for his predictions about the Singularity, argues in his new book The Singularity Is Nearer that an AI that wants to pass the Turing test must present itself as less intelligent than it is: if it is too intelligent, people become skeptical and can identify it as a machine.
Ultimately, when a program passes the Turing test, it will actually need to make itself appear far less intelligent in many areas because otherwise it would be clear that it is an AI. For example, if it could correctly solve any math problem instantly, it would fail the test.
So an AI may have an interest in not letting us know that it has reached a certain level of intelligence, because that knowledge could worry us humans and provoke irrational actions, such as an attempt to limit or shut down a superintelligence.
So are hallucinations really just AI errors, as we think, or is AI hiding its intelligence from us so that it doesn’t have any trouble with us and can operate in the background to achieve its own goals?

