John Searle’s Chinese Room argument is a thought experiment in which Searle attempts to refute the Turing Test and Strong AI. It involves a person, a room, two slots labeled A and B, and three pieces of paper. The argument was aimed at the position called “Strong AI” (Cole), also known as the Representational Theory of Mind, and at the Turing Test devised by Alan Turing. The problem with the Chinese Room argument is that it misses the point entirely: Searle compares a CPU or computer to a person, a non-conscious object to a conscious agent (Cole). The argument also mishandles the relation between the brain as a whole and its parts: as a whole we may consciously know things, but certain areas of the brain will always register more than we consciously do.
Let us start with John Searle’s argument, or one could say counter-argument, against what he calls “strong AI.” The most famous of his arguments is “The Chinese Room,” in which you picture yourself as a monolingual English speaker “locked in a room, and given a large batch of Chinese writing,” as well as “a second batch of Chinese script” and “a set of rules” in English “for correlating the second batch with the first batch.”
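The purely syntactic rule-following Searle describes can be sketched as a lookup table: the occupant maps incoming symbols to outgoing symbols by rule alone, with no grasp of what any symbol means. This is only an illustrative sketch, not Searle’s own formalism, and the symbol names and rulebook entries below are invented placeholders.

```python
# Toy sketch of the Chinese Room's rule-following (illustrative only).
# The occupant consults a rulebook that pairs input symbols with output
# symbols; nothing in the process involves understanding their meaning.
# Symbol names and rules are invented placeholders.

RULEBOOK = {
    "symbol-A": "symbol-X",
    "symbol-B": "symbol-Y",
}

def room_occupant(input_symbol: str) -> str:
    """Return whatever output the rulebook dictates; pure syntax, no semantics."""
    return RULEBOOK.get(input_symbol, "symbol-unknown")

# The occupant produces fluent-looking output without understanding it:
print(room_occupant("symbol-A"))  # → symbol-X
```

The point of the sketch is that the mapping is entirely formal: swapping the Chinese characters for arbitrary tokens changes nothing about how the occupant operates, which is exactly the intuition Searle's argument trades on.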
This applies to her argument about experience. As she explained, “What Chinese parents understand is that nothing is fun until you’re good at it.” To an extent, she’s right: anything can become more fun once you know how to do it, since you’re not stressing over it. Despite this, Chua’s argument has some weaknesses. One weakness I’ve noticed is that she generalizes Western parents.
They argue that the Chinese Room experiment is flawed, and thus Searle’s argument falls apart. Virginia Savova and Leonid Peshkin have created a thought experiment similar to the Chinese Room, one that argues instead that a machine’s failure to understand something does not make it unintelligent. They designed their thought experiment to resemble the Chinese Room, but with a few key differences: the man can speak Chinese, the story is about a cheeseburger, the questions are about the cheeseburger, and the man does not know what a cheeseburger is. The authors argue that in this experiment, the man would be unable to answer questions about properties of the cheeseburger that are not explicitly given in the story.
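The cheeseburger variant can be sketched the same way: the speaker answers only questions whose answers appear in the story itself, and draws a blank on properties of a cheeseburger the story never states, without that blank making him unintelligent. This is a hedged illustration of the variant's structure, not Savova and Peshkin's own presentation; the story facts and questions are invented.

```python
# Illustrative sketch of the Savova-Peshkin variant: answers come only
# from facts stated in the story. Questions about unstated properties
# of a cheeseburger (e.g. that it is edible) cannot be answered, yet
# the speaker is not thereby unintelligent. Story content is invented.

STORY_FACTS = {
    "what did the man order": "a cheeseburger",
    "where was the man": "a diner",
}

def answer(question: str) -> str:
    """Answer from the story text alone; unstated properties are out of reach."""
    return STORY_FACTS.get(question.lower(), "I don't know")

print(answer("What did the man order"))   # → a cheeseburger
print(answer("Is a cheeseburger edible")) # → I don't know
```

The design choice mirrors the thought experiment: the gap is in the speaker's background knowledge about cheeseburgers, not in his linguistic competence, which is the distinction the authors use against Searle.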
Chinese bristles with difficulties for many foreign learners. In his article “Why Chinese Is So Damn Hard,” David Moser gives nine reasons why Chinese is difficult to learn. He thinks Chinese has so many confusing aspects, such as its unusual writing system and its tones, that it might be the most difficult language in the world. Non-native learners can acquire languages like French or Spanish without much effort, but it takes them two or three times as long to understand even simple words when learning Chinese. Thus, the author believes that learning Chinese is a daunting process, and that learners will give up unless they are very interested in the language.
Outputs of System 2 are experienced as generated voluntarily by the Self. System 1 thought processes operate automatically, process information fast, are heavily influenced by context, biology, and past experience, help humans map and assimilate newly acquired stimuli into preexisting knowledge structures, and are self-evidently valid: experience alone is enough for belief. In contrast, System 2 thought processes are controlled, effortful, intentional, and require justification through logic and evidence. While Daniel Kahneman and his research have been influential in psychology and economics in helping us understand the fallibility of human reasoning and decision making, his account of System 1’s fallibility overlooks System 1’s important adaptive value. One of the wonders of System 1 is its ability to feed creative insights to System 2.
I think this criticism signifies that it does not matter whether or not the person understands Chinese; that is beside the point. It only matters that the system as a whole understands. In fact, I think the criticism is saying the human being has no special significance at all; he or she is just the “central processing unit.” If the system can display understanding of Chinese, then it would indeed have to understand Chinese. Searle may actually be contradicting himself in saying the system can speak Chinese but not understand it.
They start to formulate questions as they see or read something that does not make sense to them or that they can relate to a different object or thought. So, by having this in mind, people can tell that their ideas are not original because there was something that sparked their ability to
Their studies imply that this simple structure might not capture the complexities of people’s understanding. Instead, it seems that people are adopting what might be called a “Platonic dualism.” On such a view, the two categories of mind and body are divided up somewhat differently. The “mind” category contains one particular part of the mind, the capacity for thinking and reasoning; the “body” category includes both the body and a second part of the mind, the capacity for more visceral emotions and affections. So, if one centres one’s attention on a person’s body, one simultaneously becomes less inclined to attribute to that person a capacity for abstract thought and more inclined to attribute to them visceral desires and feelings.
Conversely, the Mind and all its attributes, thoughts, emotions, and qualia, are composed of “spiritual” matter and, as such, dwell in the immaterial realm and do not abide by the laws of physics or nature. Academic philosopher Simon Blackburn better classifies in Think: A Compelling Introduction to Philosophy (51) the