Joshua Twigg
Professor A. Jenkins
English 102
March 28, 2017

Evaluation of "Stopping Killer Robots" by Frank Sauer

Autonomous machines and robots are no longer ideas of the future; they are the reality of today. Robotics and A.I. algorithms attract the brightest minds and the latest cutting-edge technology in the world. Like any technology ever developed, this one is being militarized for use in fully autonomous war machines. The article "Stopping 'Killer Robots': Why Now Is the Time to Ban Autonomous Weapons Systems," by Frank Sauer, details the reasons why this technology should be put to rest immediately. Sauer uses the rhetorical strategies of pathos, ethos, and logos to argue for banning autonomous war machines.
They will repeatedly perform these preprogrammed tasks within tightly set parameters. This information lays a compare-and-contrast foundation for readers who may not know the difference between an automatic system and the autonomous systems that he details in the article, and it helps the reader gain perspective on why a fully functioning, thinking weapon of war raises major ethical concerns. The article opens with a statement that appeals to pathos, the emotional appeal: "Autonomous weapons systems have drawn widespread media attention, particularly since last year's open letter signed by more than 3,000 artificial intelligence (AI) and robotics researchers warning against an impending 'military AI arms race'" (1). This draws the reader in, making him or her want to read more, and it is also intended to elicit an emotional response. Most people are familiar with the Terminator movies, starring Arnold Schwarzenegger, in which the A.I. war machines become self-aware through the Skynet system.
Given the major ethical implications that these machines pose, we should ban them before they are even put into mass production. Frank Sauer makes use of the rhetorical strategies of logos, pathos, and ethos throughout the article. He uses strong emotional statements that make the reader stop and think about the implications; these statements led me to imagine a possible Terminator-type scenario. He also uses logical and ethical persuasive statements to back up why these machines should be banned. Before getting into the depth of the article, he presents information from the opposing side to give the reader a larger picture of the topic, and he avoids any logical fallacies that would take credibility away from the article.

Works Cited

Sauer, Frank. "Stopping 'Killer Robots': Why Now Is the Time to Ban Autonomous Weapons Systems." Arms Control Today, Oct. 2016, p. 8+. General OneFile, go.galegroup.com/ps/i.do?p=ITOF&sw=w&u=mcc_main&v=2.1&id=GALE%7CA469754900&it=r&asid=dec98a6de55d1f2bfc317f601fdb3f04. Accessed 25
Evaluation of "Remote Weaponry: The Ethical Implications" by Suzy Killmister

Throughout time, warfare has evolved both in strategy and in mechanics. Armed forces no longer fight with swords or line up in trenches as commonly as they once did. It is only natural for something made to protect civilizations to evolve as strategists are introduced to new technologies. From swords to muskets to automatic rifles, the conversation now takes "man to man" contact out of the equation.
An example of his ability to use logic to support his argument is his reference to Apple's kill switch: "The theft of iPhones plummeted this year after Apple introduced a remote kill switch" (1), and "If this feature is worth putting in consumer devices, why not embed it in devices that can be so devastatingly repurposed -- including against their rightful owners at the Mosul Dam" (1). Here, his comparison to the kill switch's effectiveness in iPhones makes the reader consider the potential of implementing one in weapons. I believe his use of logic is consistently clear and successful, and his paper is free of fallacies. Beyond his own logic, Zittrain is also quick to cite expert opinions for support. One example is his elaboration on kill switches in missiles: "At least one foreign policy analyst has suggested incorporating GPS limitations" (1).
Bonnie Docherty does not support the idea of using robots for warfare, on moral grounds. She states: "It would undermine human dignity to be killed by a machine that can't understand the value of human life." She also calls for a ban on the use of robots in war "before humanity crosses what she calls a moral threshold." She emphasizes that these machines will completely change the way of war, much as gunpowder and nuclear weapons have done. Thus, she worries about what these machines are capable of doing and who will be held responsible in war.
Thompson illustrated what kind of world we would live in if work were to diminish: a world of ever-present, dominating robots, contentious politics, and abundant leisure time. For years people have said that robots will take over and dominate humans. This has always been treated as a myth, a topic to be brushed off. However, this fantasy is quickly becoming reality because of current trends in technology.
In conclusion, both authors used different rhetorical strategies in their articles. Carr believes that if we are not careful and depend too much on automation, we will become less capable. He believes that if this happens, there will be more robots than people.
As society continues to develop and make new plans, technology in today's world is starting to raise some questions. Patrick Lin is a philosopher and the director of the Ethics + Emerging Sciences Group at California Polytechnic State University. With the university's help, Lin wrote an essay called "The Big Question," in which he discusses technologies and ideas that many people seem to overlook today, in hopes of raising awareness about the coming industrial revolution in robotics. The changing of the world around us is already underway.
Brainless.com: Rhetorical Strategies in Carr's "Is Google Making Us Stupid?"

Do we depend on the Internet to answer all of our questions? Nicholas Carr, an American author, wrote "Is Google Making Us Stupid?", published in 2008 in The Atlantic, in which he examines the effects of the Internet on literacy, cognition, and culture. Carr begins his argument with the ending scene of Stanley Kubrick's 2001: A Space Odyssey.
Introduction: The purpose of this analysis is to examine the rhetorical appeals of arguments presented by two different authors who have written on the topic of Artificial Intelligence. Douglas Eldridge's "Why the Benefits of Artificial Intelligence Outweigh the Risks" presents the potential positives of the rise of Artificial Intelligence. He dispels some of the common myths regarding the risks of AI, suggesting that these myths are either unfounded or overstated.
The author's purpose in writing "Robot Invasion" was to demonstrate the effectiveness and relevance of robots in today's society. The author persuades the reader that robots are beneficial to society with statements such as "the robots will be able to unleash a productive boom." This statement exemplifies the positive impact robots have on our everyday lives by making our everyday tasks easier and positioning robots as a productive force.
In the New York Times Magazine article "Death by Robot," Robin Henig addresses how robots have contributed remarkably to society and become a part of human life, but argues that when it comes to choosing between two contradictory options of life and death, even with superior data and calculations, a robot would not be able to replace a human.
In "I, Robot" (Proyas, 2004), a film set in 2035, numerous self-driving cars punctuate the futuristic, streamlined setting. The film's 2035 is looking more and more like an accurate portrayal of the technology we might have in our own 2035, not least in its depiction of widely used self-driving cars and intelligent household robots.
Recently, the controversy over the future of these weapons has sparked many debates over whether to allow or outlaw them. The result of these disputes can affect our nation, so it is of the utmost importance to craft an analytically sound solution. To that end, it is critically important to implement a solution properly before a serious mistake is made. A reasonable approach to such an arduous challenge is to properly understand the benefits and risks of this debatable technology.
If this feels wrong to you, it is because it should. Robots are affecting our consciousness in ways some overlook; as a society, we are not becoming more adaptable or wiser. We are becoming fish in a fishbowl.
Rise of Artificial Intelligence and Ethics: Literature Review

"The Ethics of Artificial Intelligence," authored by Nick Bostrom and Eliezer Yudkowsky as a draft for the Cambridge Handbook of Artificial Intelligence, introduces five topics of discussion in the realm of Artificial Intelligence (AI) and ethics: short-term AI ethical issues, AI safety challenges, the moral status of AI, how to conduct ethical assessment of AI, and super-intelligent AI issues, that is, what happens when AI becomes much more intelligent than humans but lacks ethical constraints? This topic of ethics and morality within AI is of particular interest to me because I will be working with machine learning, mathematical modeling, and computer simulations during my upcoming summer internship at the Naval Surface Warfare Center (NSWC) in Norco, California. After I complete my master's degree at Northeastern University in 2020, I will become a full-time research engineer at this Navy laboratory. At the suggestion of my NSWC mentor, I have opted to concentrate my master's degree in Computer Vision, Machine Learning, and Algorithm Development, technologies all strongly associated with AI. Nick Bostrom, one of the article's authors, is a Professor in the Faculty of Philosophy at Oxford University and the Director of the Future of Humanity Institute within the Oxford Martin School.