Artificial Intelligence: The Philosophy Of Artificial Intelligence


Can we create humans? What does it take to be human? Are there certain prime attributes that are the necessary and sufficient conditions for someone (or something) to be called human? Is it possible to "imprint" these attributes onto a machine to make it "human"?
This article attempts to discuss these popular philosophical questions posed by the philosophers who have wracked their brains exploring the domain of Artificial Intelligence (AI). When I come to think of AI, I am strongly convinced that AI is a separate field altogether because of its comprehensive
Searle's Chinese room argument is the firmest theory that does so. Searle imagines an English speaker who is given a set of instructions allowing him to match questions asked in Chinese to the corresponding answers in Chinese; by following them, he would be able to respond in Chinese. A Chinese speaker outside the room would be compelled to believe that the English speaker knows Chinese and can "converse" in it with him, even though he cannot in any literal sense. He is merely following the steps given to him in the form of an instruction manual. On similar grounds, Searle argues that if a computer program can make a computer converse intelligently in some language, it does not follow that the computer understands that language. It has simply been fed instructions by the programmer, which it executes. When asked "How are you?", it searches for the code that instructs it how to answer. The kinds of answers it retrieves from the code are "I am good. What about you?", "I am doing well. Thank you.", and so on. Clearly, intelligence is not the sole criterion for deciding whether someone (or something) has a…
When Professor Hobby proposes to create a robot child who will love his parents deeply, one of his colleagues raises a thought-provoking question: "If the robot can love a person genuinely, what moral responsibility does that person hold toward that robot?" To this the professor replies that God created Adam in the beginning to love Him. The implication is that the child Mecha is being created to love us, the humans. But the question remains unanswered. I believe our moral responsibility would be to love the robot back. Why shouldn't it be? If a robot behaves like a human, thinks like a human, acts like a human, and loves and hates like a human, then why can it not have the same moral standing that humans have? Leaving this question unanswered, simply ignoring it, reveals that humans cannot stand anything or anyone smarter or more intelligent than themselves. We strive to be the supreme species, and in doing so we tend to objectify and belittle any other