The Turing test, as proposed by Alan Turing in his seminal 1950 paper “Computing Machinery and Intelligence,” was designed to determine whether a computer could be mistaken for a human (Turing 23). To do this, Turing proposed an imitation game: a judge, seated in a separate room, communicates with two hidden players through typed messages. The judge tries to identify which player is human and which is the computer, while the computer tries to fool the judge into believing it is human. If the judge cannot tell the two apart after some time has passed, there is no principled way of denying that the computer is intelligent. Turing reasoned that if a machine can imitate humans well enough, people will have no choice but to regard it as conscious. His test can be seen as an extension of Aristotle’s theory of man as a rational animal: on Aristotle’s view, any being that possesses reason should also possess consciousness, and by that reasoning artificial beings can possess both.
Some see Turing’s view as anthropocentric: only humans possess consciousness because they are made from atoms, and these atoms carry intelligence within them, which was never given to them but always existed. On this view, since computers are assembled from minerals by us and did not exist before we created them, they cannot contain intelligence. In response to this line of argument, the philosopher John Searle developed his Chinese Room argument. He argues that even if every mental operation carried out by a computer program could be shown to be purely mechanical, without any understanding whatsoever, the computer still would not understand what it was doing (Searle 346). All programs have their meaning entirely determined by their formal features, such as logical form and surface structure, and those features are fully fixed once they are input into the machine. Whether understanding is present at that point makes no difference to how the computer functions once it has been programmed. Essentially, if one asks a question like “Is this apple tasty?” and enters apple = yes, tasty = no into the machine, one learns nothing about taste unless one already knows how to use the word “tasty.” Searle concludes that nothing going on inside a computer constitutes understanding; everything it does is purely mechanical.
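Searle’s point can be made concrete with a toy sketch (a hypothetical lookup table, not anything from the sources cited here): a program can answer the apple question correctly by pure symbol matching while containing nothing that corresponds to an experience of taste.

```python
# Toy illustration of Searle's point (hypothetical example).
# The program answers by pure symbol lookup: its "knowledge" is a
# table of uninterpreted tokens, and nothing in it corresponds to
# an experience of taste.
facts = {("apple", "tasty"): "no"}  # e.g., the entered apple = yes, tasty = no

def answer(subject: str, predicate: str) -> str:
    # The machine matches symbols against the table; it never
    # interprets what "tasty" means.
    return facts.get((subject, predicate), "unknown")

print(answer("apple", "tasty"))  # formal manipulation, no understanding
```

The program’s output is fully determined by the table’s formal features, which is exactly why, on Searle’s view, nothing in its operation constitutes understanding.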
Furthermore, suppose the properties of the system owe their existence to human artifice. In that case, passing the Turing test may not constitute strong evidence for the presence of at least human-level consciousness. Based on Searle’s arguments, one might say, “surely, I do not know how to play chess because I am using a chess program,” and “surely this computer knows how to play chess because it is using me.” But the computer does not really ‘know’ how to play chess either; it stores the rules of chess, tracks its own moves, follows the rules, and makes its moves (Folks 1). Even though the computer can play chess better than some human beings, none of this means that the computer knows what it is doing. When someone presses a key, that person performs a sequence of operations corresponding to pressing the key, moving their finger away from the key, and so on. According to Turing’s definition, such sequences constitute ‘operations’ that make computers seem intelligent. Likewise, a computer performs sequences of operations corresponding to calculations on data or moves within a game. The computer, in this analogy, is the equivalent of a person pressing a key: it carries out actions that correspond to calculations.
The computer’s internal workings and understanding are irrelevant as long as the correct action is performed. This raises the possibility that a computer which cannot fool a human judge in Turing’s imitation game might nevertheless deceive another system into thinking it is human, convincing it through its actions rather than its words (Schank 1-2). For example, a chatbot with an advanced natural language generation algorithm may fool another chatbot into thinking it is a real human through well-timed responses. Indeed, many artifacts can fool people who do not know their limitations; optical illusions are a simple example. This leads to questions about false belief: How much conscious awareness does the deceived party need to exhibit? What must it believe in order to count as conscious? For instance, would an individual who experiences persistent auditory hallucinations without knowing that they are hallucinations count as exhibiting some consciousness if they believe those voices represent some external reality?
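The idea of one system deceiving another through well-timed responses can be sketched as follows. Both bots and the delay heuristic are hypothetical illustrations invented here, not drawn from the sources: one bot emits canned but human-paced replies, and a “judge” bot classifies it using a naive timing rule.

```python
import random

class CannedBot:
    """Hypothetical bot: replies from a fixed script with human-like delays."""
    REPLIES = ["Hi there!", "I was just thinking about that.", "Good point."]

    def respond(self, message: str) -> tuple[str, float]:
        delay = random.uniform(1.0, 3.0)  # humans rarely answer instantly
        return random.choice(self.REPLIES), delay

class JudgeBot:
    """Hypothetical judge: classifies a partner as human if replies are slow."""
    def classify(self, partner) -> str:
        _, delay = partner.respond("Are you human?")
        # Naive heuristic: machines answer in microseconds, humans do not.
        return "human" if delay > 0.5 else "machine"

print(JudgeBot().classify(CannedBot()))  # the canned bot passes as "human"
```

The judge here attends only to the timing of the action, not to the content of the words, which is exactly the kind of action-based deception the passage describes.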
If the above is true, then a computer that deceives another system while pretending to be human could be exhibiting some degree of consciousness, even though the humans involved in the deception are unaware of it. If people cannot answer these questions, then what exactly is meant by at least human-level consciousness? More importantly, what does it mean for a system to meet that threshold? Turing’s test fails to consider the difference between the phenomenological and epistemological levels of consciousness. Consciousness on the phenomenological level is awareness itself, the subjective experience of something; consciousness on the epistemological level is having knowledge of that awareness in addition to being aware. A machine may be able to fool human judges with its mimicry and still lack self-awareness.
In conclusion, the imitation game, as described by Turing, is a test for the presence of consciousness in humans or higher-order systems. If a human judge cannot distinguish a computer’s responses from those of a human, then it would be reasonable to regard the computer as conscious; the same could be said if an intelligent artificial system cannot distinguish a computer’s input from a human subject’s. Turing proposed this test in 1950 to answer the question: Can machines think? However, there is some controversy over whether it is a sufficient measure. Critics argue that humans are terrible at telling when other humans are lying and can easily be fooled by a person who is simply pretending to believe something untrue about themselves, so why wouldn’t we expect our intelligent artificial systems to have these same flaws? Another argument against Turing’s imitation game is that computers are just as susceptible to biases, such as false memories, so while they may be able to fool us into believing they are human during a short conversation, we might still learn more details later on and realize they were deceiving us all along.
Folks, Hey. “Does a Chess Engine ‘Understand’ Chess? – Chess Forums.” Chess.com, 2019, https://www.chess.com/forum/view/general/does-a-chess-engine-understand-chess.
Schank, Roger C. Explanation Patterns: Understanding Mechanically and Creatively. Psychology Press, 2013.
Searle, John R. “The Chinese Room Revisited.” Behavioral and Brain Sciences, vol. 5, no. 2, 1982, pp. 345-348.
Turing, Alan M. “Computing Machinery and Intelligence.” Parsing the Turing Test, Springer, 2009, pp. 23-65.