
Large Language Models Pass the Turing Test
Cameron R. Jones, Benjamin K. Bergen. 2025. (View Paper →)
We evaluated four systems (ELIZA, GPT-4o, LLaMa-3.1-405B, and GPT-4.5) in two randomised, controlled, and pre-registered Turing tests on independent populations. Participants held simultaneous 5-minute conversations with another human participant and one of these systems, then judged which conversational partner they thought was human.
When prompted to adopt a humanlike persona, GPT-4.5 was judged to be the human 73% of the time: significantly more often than interrogators selected the real human participant. LLaMa-3.1, with the same prompt, was judged to be the human 56% of the time—not significantly more or less often than the humans they were being compared to—while baseline models (ELIZA and GPT-4o) achieved win rates significantly below chance (23% and 21% respectively).
The results constitute the first empirical evidence that any artificial system passes a standard three-party Turing test. The results have implications for debates about what kind of intelligence is exhibited by Large Language Models (LLMs), and the social and economic impacts these systems are likely to have.
GPT-4.5 was judged to be human more often than the actual humans were. This should have been a massive moment in computing history, yet it hasn't made the headlines. That is partly because Turing's "imitation game" was never intended as a test for artificial general intelligence. It was presented as what Dennett would call an "intuition pump": a thought experiment to challenge people who claimed AGI was impossible. Passing the Turing test doesn't prove a system is an AGI, and failing it doesn't prove it isn't; there can be non-AGIs that pass and AGIs that fail. It seems Turing-test performance isn't a great indicator of genuine intelligence. Maybe it took this moment for us to realise that?