Introduction
Advances in computing have produced machines that execute exactly the tasks they are programmed to perform (Searle, 2006). On this view, computers can carry out tasks only when programmed by people; they cannot act independently of their programs. This perspective is best understood through the Chinese room argument, which holds that digital computers execute programs but do not have minds of their own. The argument was presented by the philosopher John Searle in his paper “Minds, Brains, and Programs”, published in 1980 in Behavioral and Brain Sciences. Searle’s claim that computers cannot have minds has been debated ever since, chiefly through the thought experiment known as the “Chinese room”. The aim of this paper is to give the reader a clear understanding of why a computer cannot have a mind through a careful discussion of Searle’s Chinese room argument.
The Chinese room thought experiment
Searle’s Chinese room thought experiment begins with the supposition that artificial intelligence research has produced a computer that behaves as if it understands Chinese: it takes Chinese characters as input and produces Chinese characters as output. According to Searle, if the computer performs this task convincingly enough to pass the Turing test, it will persuade a native Chinese speaker that the program itself is a living Chinese speaker. The question is whether such a performance amounts to understanding. Through a discussion of Searle’s thought experiment, this paper asks whether such a computer literally understands Chinese or merely simulates the ability to understand it (Searle, 2009). In the thought experiment, Searle supposes that he is alone in a closed room with an English-language version of the computer program together with paper, pencils, erasers and filing cabinets. He receives Chinese characters through a slot in the door, processes them according to the program’s English instructions, and passes Chinese characters back out as output, without understanding what any of the characters mean. As he repeats the process he becomes faster and more fluent at it, yet he still understands nothing of the Chinese. Searle’s point is that if a computer had passed the Turing test in this way, he could do exactly the same thing by running the program by hand.
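The procedure Searle describes, looking up incoming symbols in a rule book and copying out the prescribed reply, can be sketched in a few lines of Python. The rules and symbols below are invented placeholders, not Searle’s own example; the point of the sketch is that the lookup is purely syntactic, so the operator matches shapes and copies out replies without knowing what any symbol means:

```python
# Illustrative sketch only: the rule book below is an invented placeholder,
# not a real conversation program. The operation is purely syntactic.

RULE_BOOK = {
    "你好吗": "我很好",    # to the operator these are just opaque shapes
    "你是谁": "我是人",
}

def chinese_room(input_symbols: str) -> str:
    """Apply the rule book to an input string; no understanding involved."""
    # Unrecognised input gets a canned fallback reply, also from the book.
    return RULE_BOOK.get(input_symbols, "请再说一遍")

print(chinese_room("你好吗"))  # emits the prescribed reply: 我很好
```

Nothing in the lookup depends on what the strings mean; the same code would work with arbitrary meaningless tokens, which is exactly Searle’s point.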
According to Searle, there is no relevant difference between the computer’s role and his own in the thought experiment: each follows a program step by step, producing behaviour that the user interprets as intelligent conversation (Narayanan, 1991). Since Searle cannot understand the Chinese characters despite processing them according to the program, he concludes that the computer cannot understand the conversation either. Without understanding, he argues, we cannot describe what the machine is doing as “thinking”, and since it does not think, it cannot have a mind in the usual sense of the word. The fact that a computer can do what a human does by manipulating symbols on a purely syntactic basis does not mean it genuinely understands Chinese.
Searle’s Chinese room scenario is directed against a specific position he calls Strong AI. Strong AI is the view that a suitably programmed computer can understand natural language and possess other mental capabilities comparable to those of the humans whose behaviour it mimics. On this view, computers can play chess intelligently, make clever moves and understand language. By contrast, weak AI claims only that computers are intelligent tools, valuable above all in linguistics, psychology and other fields, because they can simulate human mental abilities. Weak AI does not claim that computers actually understand or think.
Biological Naturalism vs Strong AI
In arguing about whether computers can think, Searle developed a philosophical position known as biological naturalism. Biological naturalism holds that consciousness and understanding require the biological machinery found in human brains. According to Searle, the brain causes the mind, and human mental phenomena depend on the actual physical-chemical properties of real human brains. Searle claims that this machinery, which neuroscience calls the neural correlates of consciousness, must possess causal powers that make human conscious experience possible. Although Searle denies that programmed computers can think, he grants that a machine could in principle possess consciousness and understanding (Narayanan, 1991): the brain itself is a machine, but it gives rise to consciousness and understanding through non-computational machinery. If neuroscience could isolate the mechanism that gives rise to consciousness, it might become possible to build machines with consciousness and understanding. Without that specific machinery, however, Searle claims that consciousness cannot occur.
The Chinese room argument can also be compared with the Turing test, introduced by Alan Turing in 1950 to address the question of whether a machine can have a mind. In the Turing test, a human judge holds natural-language conversations with both a machine designed to generate human-like responses and a human. The participants are separated from the judge, and if the judge cannot reliably tell the machine from the human, the machine has passed the test. Turing did not intend the test to detect consciousness or understanding, since he considered those notions irrelevant to the question he was addressing. For Searle, as a philosopher, consciousness and understanding are precisely what must be investigated, and the Chinese room is designed to show that the Turing test cannot detect their presence even when a machine behaves just as a conscious mind would.
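The protocol just described can be sketched as a small simulation. The judge, repliers and questions below are all invented for illustration; the sketch only shows the structure of the test (the judge guesses which channel carries the machine, and the machine passes if the judge does no better than chance):

```python
# Sketch of the imitation-game protocol; all parties are invented stand-ins.
def imitation_game(judge, human_reply, machine_reply, questions):
    """Return True if the judge fails to identify the machine reliably."""
    correct = 0
    for i, q in enumerate(questions):
        # Alternate which channel (A or B) carries the machine each round.
        if i % 2 == 0:
            a, b, machine_is = human_reply(q), machine_reply(q), "B"
        else:
            a, b, machine_is = machine_reply(q), human_reply(q), "A"
        if judge(q, a, b) == machine_is:
            correct += 1
    # Passing means the judge did no better than chance.
    return correct <= len(questions) // 2

# A judge that spots the machine by a telltale reply never fails it:
caught = imitation_game(
    judge=lambda q, a, b: "A" if a == "MACHINE" else "B",
    human_reply=lambda q: "hello",
    machine_reply=lambda q: "MACHINE",
    questions=["how are you?"] * 10,
)
print(caught)  # -> False: the machine is identified and fails the test
```

As Searle notes, nothing in this procedure inspects consciousness or understanding; it measures only whether the outward behaviour is distinguishable.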
The Chinese room is constructed as an analogue of a modern computer with a von Neumann architecture: the instruction book stands in for the program, the paper and filing cabinets for the memory, the man for the CPU following instructions, and the pencil and eraser for the means of writing symbols into memory (Moural, 2003). In theoretical computer science such a machine is called Turing complete, since it has all the machinery needed to perform any computation a Turing machine can perform; given enough time and memory, it can simulate any other digital machine step by step. The Turing completeness of the Chinese room shows that it can simulate whatever a digital computer can do. It follows that a programmed computer can have a mind only if the Chinese room contains, or gives rise to, a Chinese-speaking mind.
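The mapping above (book = program, cabinets = memory, man = CPU, pencil and eraser = write head) can be made concrete with a minimal Turing-machine interpreter. The transition table below is an invented example that inverts a string of bits; it is a sketch of the architecture, not any program Searle discusses:

```python
# Minimal Turing-machine interpreter: the "man in the room" is the loop,
# the tape is the paper/cabinets, the rules are the instruction book.
def run_turing_machine(tape, rules, state="start", head=0, steps=1000):
    tape = list(tape)
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape[head] if 0 <= head < len(tape) else "_"
        state, write, move = rules[(state, symbol)]  # consult the book
        if head >= len(tape):
            tape.append(write)                       # extend the paper
        else:
            tape[head] = write                       # erase and rewrite
        head += 1 if move == "R" else -1
    return "".join(tape)

# Invented example program: flip every bit, halt at the blank cell "_".
RULES = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}

print(run_turing_machine("0110_", RULES))  # -> 1001_
```

The interpreter follows its rule table exactly as Searle follows his instruction book: mechanically, symbol by symbol, with no grasp of what the symbols are for.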
Conclusion
In conclusion, programmed computers can perform tasks similar to human tasks by simulating human performance, but I agree with Searle that a programmed computer cannot have a mind. Unlike the human mind, which is not restricted to any single task, a programmed computer executes only the tasks its human programmers have given it. For example, a robot programmed solely to reply in and translate Chinese may be unable to perform any other task, such as distinguishing a native Chinese speaker from someone who has merely learned to speak Chinese. The Turing test holds that a machine passes if a judge cannot reliably tell it from a human; yet if a programmed computer cannot tell the difference between a native Chinese speaker and someone with little understanding of Chinese, this suggests that the computer possesses neither consciousness nor understanding. Simulating what a human can do does not imply that a computer can think, since mimicking human performance shows only that computers depend on the human mind for what they do. Hence, a programmed computer cannot have a mind.
Finally, Searle’s Chinese room argument implies that computers do not understand the meaning of the information fed to them but merely process it according to a given syntax. No matter how the program is connected to the world or how much knowledge is written into it, Searle remains in the room manipulating Chinese symbols according to instructions that are purely syntactic and never tell him what any symbol stands for; he still does not understand Chinese. A programmed computer, likewise, performs only according to the instructions provided to it, without understanding the meaning of the information it processes. Hence, a computer cannot have a mind.
References
Gulick, R. V. (n.d.). Chinese room argument. Routledge Encyclopedia of Philosophy. https://doi.org/10.4324/9780415249126-w003-1
Moural, J. (2003). The Chinese Room Argument. John Searle, pp. 214–260. https://doi.org/10.1017/cbo9780511613999.010
Narayanan, A. (1991). The Chinese Room Argument: An exercise in computational philosophy of mind. Logical Foundations, 106–118. https://doi.org/10.1007/978-1-349-21232-3_12
Searle, J. (2006). The Chinese room argument. Encyclopedia of Cognitive Science. https://doi.org/10.1002/0470018860.s00159
Searle, J. (2009). Chinese room argument. Scholarpedia, 4(8), 3100. https://doi.org/10.4249/scholarpedia.3100