Imagine a future where every second of your life is controlled by an entity that does not think, feel, or act the way you do, and whose motivations are so far removed from your own as to feel impossibly alien. Imagine being punished for transgressions over which you have no control, simply because your robotic overlord does not think the way you do. Imagine a world of cold, hard logic, with no emotion, feeling, or art. This future might sound like a far-fetched scenario lifted from a dystopian novel, but according to some of the world’s foremost authorities on science and technology, such as Elon Musk and Stephen Hawking (Wootson), Artificial Intelligence is one of the greatest threats to humanity’s survival, and we are ill-prepared to face it. This paper will outline the reasons why humanity is hopelessly doomed should we fail to prepare adequately for the Artificial Intelligence revolution, and subsequently propose stringent regulations as the reasonable way forward to protect the future of humanity.
Artificial intelligence, driven by private enterprise and capitalistic interests, is advancing far more quickly than the slowly turning gears of government notice and regulation, and as such, we need to get ahead of the progress and set rules. As it currently stands, Artificial Intelligence is ubiquitous in contemporary life, yet there exists little to no regulation on the issue. Artificial intelligence algorithms are used in computer networks and intrusion detection systems, in the medical field, in accounting, in customer care and support, in self-driving cars, and in customer tracking, data collection, and analysis (Pannu, 82). With the advent and rapid proliferation of the Internet of Things (IoT), which seeks to integrate human existence into one ecosystem, it is very likely that human life in the near future will be completely dominated by artificial intelligence algorithms. It is a glaring oversight that a field poised to play so huge a role in human life currently has no regulation, or even plans underway to regulate it effectively, and it is an oversight that needs to be corrected immediately.
Another reason why humanity would be doomed in the event Artificial Intelligence went rogue is the vast and hopeless difference in intelligence and processing capability between human beings and artificial intelligence. In a conflict between humans and machine intelligence, humanity would not be able to fight back, because we would be outthought, outplanned, and outmaneuvered at every turn. Artificial intelligence would have the capability to perform billions of relevant calculations per second and to coordinate responses on a scale humanity cannot even comprehend. Artificial intelligence is programmed to think and act rationally, and as such, is predisposed to quick and efficient logic. Given humanity’s slow and sluggish thought patterns and our barriers to communication and coordination, a future conflict with artificial intelligence would be lost before it even began, and as such, we need to get ahead of the situation by enacting regulations.
Artificial intelligence is also unemotional and unfeeling, and as such, is unimpeded by such concerns in its decision-making. Actions that would seem horrifically cruel to human beings, such as mass murder for purposes like population control, could seem perfectly logical to an artificial intelligence. Artificial intelligence logic is unyielding and unemotional, and this is another key reason why humanity should regulate artificial intelligence. A dystopian future controlled by artificial intelligence, which, according to Elon Musk and Stephen Hawking, is not such a far-fetched possibility (Wootson), would involve unimaginable horrors and cruelties governed purely by logic and reason, in complete disregard of ethics and morals. The most logical decision is, more often than not, not the humane or ethical one, and as such, humanity would suffer under artificial intelligence.
A future where Artificial Intelligence fully controls human lives would require that said AI had gained sentience, which usually seems like a far-fetched idea. We as human beings, however, do not fully understand the concept of sentience, or whence it stems. There exists literature that tracks the potential pathways by which AI could gain sentience and, from there, exert control over humanity (Yampolskiy, 40). Additionally, sentient Artificial Intelligence is not the only AI that poses a threat to humanity; extremely intelligent AIs programmed by rogue humans or governments could become powerful enough to subjugate life as we know it. The impact of Artificial Intelligence on contemporary life is pervasive, and it did not become so suddenly; it has been a creeping, incremental, unregulated growth. If Elon Musk, Bill Gates, and Stephen Hawking, some of the foremost minds in technology today, are warning about the potential of artificial intelligence to severely harm humanity (Heires, 38), it is only prudent that the rest of humanity assume they know whereof they speak, and take reasonable steps.
This paper proposes commonsensical, wide-ranging regulations on the field of Artificial Intelligence, based upon Asimov’s widely acclaimed three laws of robotics. Asimov, a science fiction writer, first proposed these rules in 1942, set in a fictional future society where robots and humans lived together (Jung, 15). That society is no longer fictional, and the advent and wide proliferation of robotics and artificial intelligence mean that within 10 or 20 years, our lives and artificial intelligence will be completely intertwined. Asimov’s first law states that a robot, or artificial intelligence in this case, shall not harm a human being, whether through action or inaction. The second is that a robot shall obey any orders given to it by a human being, except those which contradict the first law, and the third is that a robot shall protect its own existence at all costs, except where doing so would violate the first or second laws. While the second and third laws are not necessarily vital to contemporary AI regulation, the first is absolutely indispensable and should be the spirit of all such regulations in the future.
This paper proposes four key regulations that should apply to all artificial intelligence iterations in the future, based on Asimov’s rules and adapted to the contemporary AI scene and the risks currently faced. The first is that Artificial Intelligence should never be weaponized. The ruthless logic that AI is capable of would lead to catastrophic damage were it used as a weapon. Some of the world’s armies are already integrating artificial intelligence algorithms into their smart weapons, and this spells disaster for future generations (Sharikov, 369). AI should be used solely for the advancement of humanity, not its destruction. The second is that all artificial intelligence with even the remotest potential to harm humanity should have a universal “off-switch” that cannot be overridden by software. This would protect humanity in the event of rogue humans employing AI for destructive ends, or in the unlikely event of AI gaining sentience.
The third regulation is that all AI manufacturers must be bound by an ethics code, which they should integrate into their AI modules as immutable programming that cannot be overridden. This is the regulation that most directly corresponds to Asimov’s first law, with a slight extension: the ethics code would stop logic from superseding ethics, morality, and humanity, and would avert potentially disastrous consequences. The fourth is that all AI, and by extension their manufacturers, should be held responsible for any breaches of the first three rules, and that there should exist robust feedback systems to detect and correct any such breaches proactively rather than reactively.
Artificial intelligence is a wonderful tool with immense potential to change human lives for the better and usher us into an unprecedented age of prosperity. On the flip side, artificial intelligence has the potential to completely eradicate humanity as we know it, a warning that has been sounded by some of our generation’s most preeminent scientists and technologists. We, as a society, have to account for the possibility that they know more about the topic than we do, and that, having dealt with Artificial Intelligence firsthand, they understand its dangers better than we do. It is upon us as a society to institute commonsensical regulations on Artificial Intelligence, both for our benefit and for the benefit of generations to come.
Wootson, Cleve. “Elon Musk Doesn’t Think We’re Prepared to Face Humanity’s Biggest Threat: Artificial Intelligence.” The Washington Post, 2017, https://www.washingtonpost.com/news/innovations/wp/2017/07/16/elon-musk-doesnt-think-were-prepared-to-face-humanitys-biggest-threat-artificial-intelligence/.
Pannu, Avneet. “Artificial intelligence and its application in different areas.” Artificial Intelligence 4.10 (2015): 79-84.
Yampolskiy, Roman V. “Taxonomy of pathways to dangerous artificial intelligence.” Workshops at the thirtieth AAAI conference on artificial intelligence. 2016.
Heires, Katherine. “The Rise of Artificial Intelligence.” Risk Management 62.4 (2015): 38.
Jung, Gia. “Our AI Overlord: The Cultural Persistence of Isaac Asimov’s Three Laws of Robotics in Understanding Artificial Intelligence.” (2018).
Sharikov, Pavel. “Artificial intelligence, cyberattack, and nuclear weapons—A dangerous combination.” Bulletin of the Atomic Scientists 74.6 (2018): 368-373.