The research topic is whether governments should control developments in machine learning and AI. There is currently no legislation aimed specifically at regulating the use of AI and machine learning; rather, these systems are governed by existing regulations, including laws on data protection, consumer protection, and market competition. Bills have also been introduced to regulate specific AI and machine learning systems. Laws specific to AI and machine learning should be created to address the issues these technologies raise. Artificial intelligence algorithms may contain biases because humans created them, and those biases may have been consciously or unconsciously incorporated into the program. A biased algorithm, or biased data in the training sets it learns from, will produce biased results. Some critics claim, for example, that Facebook makes people miserable by presenting only the best parts of their friends’ lives, highlights they regard as entirely unrepresentative.
As AI and machine learning become increasingly prevalent worldwide, governments should find a way to control their development, since the 21st century has seen these technologies advance at a very high rate. The research paper uses qualitative data expressed in words, drawn from secondary sources collected by other researchers; it was not obtained by controlling and manipulating variables, but describes data gathered through observation. A World Economic Forum report indicates that a new generation of AI-powered machines could displace a large proportion of human jobs. Facebook has breached the law in a variety of ways, including failing to safeguard data from third parties, serving advertisements using phone numbers that users provided for security purposes, and misleading users into believing that its face-recognition software was disabled by default. The company has been the subject of numerous privacy scandals; in August 2019, for example, it was revealed that it had paid contractors to create transcripts of users’ audio communications. Part of these concerns stems from its business model, which involves the sale of information about its users and the possible loss of personal information. Critics also argue that Facebook damages users psychologically, causing envy and depression by continually showing them positive but unrepresentative highlights from their friends’ lives. Researchers predict that by 2029 artificial intelligence will have reached human levels, and that by around 2045 the combined human and machine intelligence of our civilization will have been multiplied a billion-fold. This paper investigates the dangers posed by artificial intelligence and machine learning and calls for governments to control their development in order to avert greater threats later on.
However, the greatest short-term damage that AI is expected to do to humans is job displacement, since the quantity of labour that can be automated with AI is far greater than ever before. It is the Government’s responsibility to ensure that companies are creating a world in which every person has the chance to succeed.
As AI and machine learning become increasingly prevalent globally, governments should find a way to control their development. Artificial intelligence (AI) is a branch of computer science that uses computers, particularly computer systems, to simulate human cognitive processes and solve problems. Machine learning is a form of data analysis that automates the building of analytical models: rather than following hand-written rules, systems learn from data, spot patterns, and make judgments with minimal human intervention. The 21st century has seen these technologies advance at a very high rate. They have cost people their jobs and violated the privacy we once enjoyed, allowing companies such as Facebook to exploit that gap to make millions of dollars daily. Researchers predict that by 2029 artificial intelligence will have reached human levels, and that by around 2045 the combined human and machine intelligence of our civilization will have been multiplied a billion-fold. The emergence of full artificial intelligence might mean the extinction of humankind. Humans, hampered by slow biological evolution, would be unable to compete and would be surpassed. Once machine thinking was initiated, it would likely not take long to transcend our limited abilities: machines could communicate with one another to enhance their minds, so we should anticipate them taking over at some point. AI does not have to be wicked to destroy humankind; if AI has a purpose and humans happen to get in the way, it will eliminate humanity as a matter of course, with no hard feelings. The real issue is when we will develop an AI Bill of Rights, what it would entail, and who would have the authority to make that decision. We have seen AI provide companionship and comfort to the lonely, but we have also seen AI engage in racial prejudice.
The paper investigates the dangers posed by artificial intelligence and machine learning and calls for the Government to control their development in order to avert greater threats later on.
The research paper uses qualitative data expressed in words, drawn from secondary sources collected by other researchers. The data were not obtained by controlling and manipulating variables; instead, the research describes data gathered through observation without intervening. This is necessary because raw data would be difficult to analyze, especially in large quantities. By describing data in an appropriate way, researchers can better explain the information they wish to convey to the public.
The technique used in the research can be applied to answer crucial questions about data and is often regarded as a prerequisite for working in applied artificial intelligence and machine learning. In addition, the research method helps transform observations into information and answer queries about samples of observations. The direction taken by the research raises no ethical issues of its own; rather, it seeks to answer the ethical issues raised by leaving AI and machine learning development uncontrolled.
The research has drawn on sources that are both valid and reliable. The sources were examined for consistency of outcomes across time, among different observers, and across different aspects of the research, and for how well their findings match accepted theories and other measurements of the same topic. The research used credible online resources with named authors that link to reliable sources. Some sources, such as news websites and online forums, were used to jump-start and fuel further research, but they were not relied upon as sources of authoritative information.
Background of the study
A World Economic Forum study estimates that future artificial intelligence-powered robots may slash the number of humans working by as much as a third (“The Global Risks Report 2019 14th Edition Insight Report”, n.d.). The organization projects that automation will displace some 85 million jobs by 2025. Artificial intelligence (AI) uses data and algorithms to mimic human decision-making and cognitive abilities. Automated systems are used daily in a variety of industries, including retail, life sciences, financial services, industrial production, healthcare, and chemical manufacturing, where algorithms help systems learn and solve problems on their own.
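The idea that algorithms “learn and solve problems on their own” can be made concrete with a minimal, illustrative sketch: instead of a programmer hand-coding a decision rule, the program estimates a model’s parameters from example data. The data and task below are invented purely for illustration and do not come from any of the studies cited in this paper.

```python
# A minimal sketch of machine learning as automated model building:
# the program derives the rule (a line) from examples, not from a
# hand-written formula supplied by the programmer.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b, learned from data alone."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Invented training examples (input, observed output).
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

a, b = fit_line(xs, ys)
prediction = a * 6 + b  # the learned model generalizes to an unseen input
```

The same principle, parameters inferred from data rather than rules written by hand, underlies the far larger models discussed in this paper, which is precisely why biased training data yields biased behaviour.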
AI has also violated the privacy that individuals once enjoyed. The massive amounts of data that businesses feed into AI-driven algorithms are vulnerable to breaches, and AI may generate personal data without the individual’s consent. The FTC and Facebook reached a settlement after an investigation into whether Facebook had violated federal law and its earlier privacy commitments, including failing to secure third-party data, serving advertisements using contact information provided for security purposes, and misleading customers into believing that its facial-recognition software was deactivated by default (Federal Trade Commission, 2019). Facebook also uses tracking cookies to keep tabs on its users: while a user browses other websites, Facebook may be able to see which sites the user visits. Security researcher Alon Gal discovered a massive database of personal information, including phone numbers, email addresses, full names, and birth dates; according to reports, the personal details of 533 million active Facebook users from more than 100 countries were made public over a single weekend.
Similarly, face-recognition software invades our privacy, and AI is raising privacy concerns among consumers and users. We might conclude that AI is undoubtedly a boon, but it also carries a real risk: the infringement of human rights, particularly privacy. Facebook employs powerful machine learning to deliver content, identify faces in photographs, and target users with ads; Instagram, which Facebook owns, uses AI to recognize images. Facebook’s algorithms organize a user’s feed by relevance rather than by time of publication: the predicted likelihood that a user will want to view a particular post determines which content appears first in their feed, whereas previously the most recent posts from accounts a user followed appeared first. Social media increases feelings of depression, anxiety, poor body image, and loneliness. Facebook use may have negative psychological repercussions such as romantic jealousy, stress, reduced attentiveness, and, in certain cases, Facebook addiction comparable to drug addiction. Although firms such as Facebook and Google do not sell users’ data directly, they use it for targeted advertising, creating numerous opportunities for marketers to pay for access to personal information. Experts have raised concerns that Facebook and text messaging, now so pervasive in teenagers’ lives, have led to increased anxiety and decreased self-esteem.
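The shift from chronological to relevance-based ordering described above can be sketched in a few lines. The scoring weights, field names, and posts below are invented for illustration; Facebook’s actual ranking model is proprietary and vastly more complex.

```python
# Hypothetical sketch of relevance-ranked feed ordering: posts are sorted
# by a predicted engagement score rather than by recency alone.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    age_hours: float        # how old the post is
    affinity: float         # how often the viewer interacts with this author (0-1)
    past_engagement: float  # likes/comments the post has already drawn (0-1)

def relevance_score(post: Post) -> float:
    # Recency matters, but affinity and prior engagement dominate the ordering.
    recency = 1.0 / (1.0 + post.age_hours)
    return 0.5 * post.affinity + 0.4 * post.past_engagement + 0.1 * recency

def rank_feed(posts):
    return sorted(posts, key=relevance_score, reverse=True)

feed = rank_feed([
    Post("acquaintance", age_hours=0.5, affinity=0.1, past_engagement=0.2),
    Post("close_friend", age_hours=12.0, affinity=0.9, past_engagement=0.7),
])
# The much older post from the high-affinity friend outranks the newer one.
```

Even in this toy version, the newest post no longer appears first, which illustrates how a relevance objective systematically favours emotionally engaging content, the mechanism behind the “highlights” effect discussed in this section.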
Facebook has faced various privacy issues; for example, reports in August 2019 indicated that the company had hired contractors to transcribe audio from users’ conversations. Part of these worries originates from the company’s business strategy, which involves selling information about its customers and risks the loss of privacy. Furthermore, companies as well as individuals are known to exploit Facebook data for their own ends, and as a result people’s identities have at times been revealed without their permission. Accordingly, pressure groups and legislatures have increasingly emphasized users’ right to privacy and control over their data.
According to the “World Unplugged” survey conducted in 2011, quitting Facebook is, for addicted users, much like giving up smoking or alcohol. A 2012 study by researchers from the University of Chicago in the US found that substances such as alcohol and cigarettes could not compete with social networking sites in terms of addictiveness (“Peer-Reviewed Abstracts,” 2019). According to 2013 research published in the journal CyberPsychology, Behavior, and Social Networking, some users abandoned social networking sites because they felt hooked on them. When the site went down for roughly 30 minutes in 2014, some users even contacted emergency authorities.
According to the research, more than 90% of respondents used the web each day, with 24% using it “constantly.” In a study of 457 post-secondary Facebook users, following a face-validation pilot with 47 post-secondary Facebook users at a large North American university, ADHD symptoms had a statistically significant positive correlation with using Facebook while driving, and male users reported stronger motivations to use the site behind the wheel than female users (Frontiers in Psychology, March 2016).
Facebook has been blamed for making users envious and miserable by continually exposing them to positive but unrepresentative highlights of their friends’ lives. Such highlights include journal posts, videos, and photographs that depict or allude to positive or otherwise remarkable activities, experiences, and facts. This effect arises mainly because most Facebook users display only the good elements of their lives while suppressing the unpleasant, but it is also strongly related to inequality and gaps across socioeconomic groups, since Facebook is available to users from all social classes. Critics warn that this kind of envy has far-reaching consequences in everyday life, including serious depression, self-loathing, rage and resentment, feelings of inferiority and insecurity, pessimism, suicidal impulses and ideation, social isolation, and other serious problems. The media has frequently referred to this phenomenon as “Facebook envy” or “Facebook depression.”
According to joint research undertaken by two German institutions, one in three people feels less fortunate and less content with their lives after browsing Facebook. Vacation photos were found to be the most common cause of resentment and jealousy, followed by social interaction, as Facebook users compared the number of birthday wishes, likes, and comments they received with those of their friends. Visitors who contributed the least tended to feel the most dejected. According to the research, “passive following creates invidious feelings, with users mostly envying others’ pleasure, the way others spend their holidays and mingle.”
According to a 2013 study conducted by researchers at the University of Michigan, the more people used Facebook, the worse they felt afterwards. Narcissistic users who display extreme grandiosity evoke unpleasant emotions in viewers and induce envy, which may lead to viewers’ loneliness. To prevent this unwanted reaction, viewers may end ties with such users; however, “avoidance” such as “terminating connections” is itself reinforcing, which may deepen loneliness. The research describes this dreary pattern as a vicious circle of loneliness and avoidance coping.
Facebook is a multinational technology company with over 3 billion monthly active users as of the second quarter of 2020, and it influences the individuals who use it. Big-data algorithms are used to deliver tailored content and automate processes, but this technology can affect people in various ways. The informational bubble, the erosion of individuals’ critical-thinking capacity, and news culture all contribute to the problem of disinformation. According to a 2015 study, 62.5 percent of Facebook users are unaware of any curation of their News Feed.
Moreover, researchers have begun to examine algorithms that produce unexpected outcomes, leading to one-sided political, financial, geographic, racial, or other bias. Facebook has been secretive about the inner workings of the algorithms used for News Feed ranking. To keep users interested, the algorithms use their previous activity as a reference point for anticipating their preferences. However, this builds a filter bubble that gradually excludes users from diverse information, leaving them with a view of the world distorted by their own inclinations and biases.
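The feedback loop just described, past clicks shaping future recommendations, which shape future clicks, can be sketched as a toy simulation. The topics, recommendation rule, and user behaviour below are all invented for illustration; the point is only that a simple popularity-reinforcing rule collapses content diversity on its own.

```python
# Illustrative filter-bubble sketch: each round, the system fills most
# recommendation slots with the user's most-clicked topic, so one early
# click compounds until other topics effectively disappear.

from collections import Counter

def recommend(click_history, topics, k=3):
    """Recommend k items, biased toward the user's most-clicked topic."""
    counts = Counter(click_history)
    favorite = counts.most_common(1)[0][0] if counts else topics[0]
    recs = [favorite] * (k - 1)            # favorite fills most slots
    others = [t for t in topics if t != favorite]
    recs.append(others[0] if others else favorite)  # one token alternative
    return recs

topics = ["politics_left", "politics_right", "sports", "science"]
history = ["politics_left"]               # a single initial click
for _ in range(5):                        # user clicks the top slot every round
    recs = recommend(history, topics)
    history.append(recs[0])

diversity = len(set(history))             # distinct topics ever clicked
```

After five rounds the user has clicked exactly one topic out of four: the system never learned whether the user would have enjoyed the others, which is the sense in which the bubble is built by the algorithm rather than chosen by the user.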
In 2015, Facebook researchers published a study demonstrating that the Facebook algorithm perpetuates an echo chamber among users by hiding content from individual feeds that users might disagree with. In summary, the study’s findings suggested that Facebook’s ranking method resulted in around 15% less diverse material in users’ content feeds, as well as a 70% drop in the click-through rate of that diverse material. Political polarization in the US has grown significantly since 2000, as psychologist Jonathan Haidt and FIRE president Greg Lukianoff have argued; according to them, this increase is largely caused by the filter bubbles shaped by the News Feed algorithm of Facebook and other platforms (The Global Risks Report 2019 14th Edition Insight Report, n.d.).
Facebook also has a counterproductive effect on being informed, particularly in the political field: the effects of social media on knowledge of political issues were assessed in the context of two US presidential elections in two US studies with a total of more than 2,000 participants (Garrett, 2019). The findings revealed that Facebook use was significantly negatively associated with general political knowledge, even when demographic, politico-ideological, and prior political knowledge variables were controlled for. According to the latter study, a causal association exists: the greater the use of Facebook, the lower the overall political knowledge. Facebook was criticized in 2013 for allowing users to post and share violent videos, including recordings of people being executed. Facebook initially declined to delete such footage on the principle that users have the freedom to depict the “world in which we live,” then reversed its position in May, stating that it would remove reported videos while reviewing its policy. In October, Facebook said it would permit violent videos on its platform as long as they were intended to educate. Human-rights violations, terrorism, and other violent acts may sometimes be referenced in such stories and experiences, and when people post this kind of visual material it is often condemned; Facebook removes material posted for sadistic pleasure or to glorify violence. Nevertheless, Facebook was criticized again, with the Family Online Safety Institute asserting that such videos “crossed a line” and could cause psychological harm to young Facebook users. The then Prime Minister of the United Kingdom, David Cameron, called the decision “irresponsible,” citing the same concerns about young users.
Two days later, Facebook deleted a video of a beheading in response to “global outcry.” Although it reaffirmed its commitment to allowing individuals to submit horrific material for the purpose of condemnation, it also announced that it would strengthen enforcement to prevent glorification. These developments also called the company’s rules into question, with some critics focusing specifically on Facebook’s approval of graphic material alongside its potential removal of breastfeeding photographs. Facebook said in January 2015 that additional warnings would be placed on graphic content, requiring users to explicitly confirm that they wish to view the material.
Facebook has been chastised for failing to remove graphic material portraying Libyan war atrocities. A BBC investigation in 2019 discovered evidence of suspected Libyan war crimes being extensively shared on Facebook and YouTube, including photographs and videos of the remains of combatants and civilians being desecrated by fighters from the self-styled Libyan National Army. That army, commanded by General Khalifa Haftar, controls a large swath of territory in eastern Libya and has attempted to capture Tripoli. BBC Arabic found around a hundred photographs and videos from Libya that had been posted on Facebook and YouTube in violation of the companies’ policies. The UK Foreign Office said that it takes the claims very seriously and is worried about the effect of the recent violence on civilians.
The Guardian revealed in June 2017 that a software flaw had exposed the personal information of 1,000 Facebook employees engaged in assessing and removing terrorist material by showing their profiles in the “Activity” logs of Facebook groups tied to terrorist activity (Solon, 2017). Six people at Facebook’s Dublin, Ireland headquarters were identified as “high priority” victims of the error after the firm determined that their accounts had likely been viewed by potential terrorists from organizations such as ISIS, Hezbollah, and the Kurdistan Workers’ Party. The flaw, found in November 2016 and fixed two weeks later, had been active for one month and had retroactively exposed filtered personal accounts dating back to August 2016. One affected worker left Ireland and went into hiding, returning after five months due to a shortage of funds; suffering from psychological distress, he filed a legal claim for compensation against Facebook and CPL Resources, an outsourcing business. Facebook’s own investigation found that only a small fraction of the names were likely viewed and no evidence of any threat to the people affected or their families as a result of the matter. Facebook promised to install home security systems, provide transportation to and from work, and offer counselling through its employee assistance program.
As a consequence of the data breach, Facebook is considering the use of administrative accounts for employees who monitor material, rather than requiring staff to sign in with their own identities. Facebook has also been criticized for not doing enough to limit the spread of fake news stories on its platform, particularly following the 2016 United States presidential election, which some claim Trump would not have won had Facebook not helped spread what they argue were fake stories biased in his favour.
Criminals such as terrorists are using autonomous weapons to terrorize individuals and authorities worldwide. Autonomous weapons systems are deadly devices that have been given the ability by their human designers to assess their surroundings, identify prospective enemy targets, and choose to strike those targets autonomously on the basis of complex algorithms (Armscontrol.org, 2019). A system of this kind might comprise fixed and mobile robotic components: uncrewed aerial, ground, or naval vehicles outfitted with active or passive sensors for navigating and detecting objects, motion, or patterns. These sensors include electro-optical, infrared, radar, and sonar detectors. No laws directly regulate autonomous weapon systems as described here. Due to corruption and theft, many of these weapons end up in the wrong hands, where they are used to spread terror.
Some coordinated misleading efforts are aimed at disrupting decision-making, destroying societal cohesiveness, and delegitimizing enemies in the midst of interstate conflict. Influence-operation (IO) methods include intelligence gathering on particular targets, the creation of inflammatory and sometimes deliberately misleading narratives, and organized distribution across social and conventional media. The Russian Government used similar techniques to portray the White Helmets humanitarian organization working in Syria as a terrorist organization, which aided violent assaults on the organization.
Disinformation operations may also be used to systematically control political discourse inside a state: influencing news reporting, stifling opposition, compromising the integrity of democratic governance and electoral institutions, and strengthening the hand of authoritarian governments. These campaigns typically proceed by creating key storylines, onboarding influencers and fake-account operators, and then diffusing and amplifying the message across social media. For example, Rodrigo Duterte, the president of the Philippines, used Facebook to propagate favourable narratives about his campaign, smear opponents, and stifle critics.
Digital hate speech thrives in vulnerable environments, and social media platforms magnify and propagate it, offering individuals and organized groups opportunities to feed on existing fears and grievances (Laub, 2019). Such speech can empower aggressive actors and instigate bloodshed, whether purposefully or unintentionally, and the rapid spread of mobile phones and Internet access increases the dangers and amplifies the effects. Myanmar is a terrible case in point, where incendiary online hate speech directed at the minority Muslim Rohingya population has been connected to riots and communal bloodshed.
Turning to radicalization and recruitment: the capacity to interact over long distances and exchange user-generated multimedia content cheaply and in real time has made social media a preferred conduit of recruitment, manipulation, and coordination for certain violent extremists and militant groups (ReliefWeb, n.d.). The Islamic State (ISIS) has been especially effective in leveraging the reach and power of digital communication tools.
However, the greatest short-term damage that AI and machine learning are expected to do to humans is job displacement, since the quantity of labour we can automate with AI is far greater than ever before. The Government’s responsibility is to ensure that companies are creating a world in which every person has the chance to succeed: building AI-powered machines that do not replace people but rather make their occupations more humane. Jobs that are difficult, degrading, demanding, dangerous, or monotonous, the ones too hazardous for human beings, should be taken over by robots. Government should control these technologies to make sure we do things correctly. The ultimate paradox is that this technology has the potential to be a great motivator for restoring our humanity: if we do things correctly, we may be able to develop kinds of work that tap into our uniquely human qualities.
Armscontrol.org. (2019). Autonomous Weapons Systems and the Laws of War | Arms Control Association. [online] Available at: https://www.armscontrol.org/act/2019-03/features/autonomous-weapons-systems-laws-war.
Federal Trade Commission (2019). FTC Imposes $5 Billion Penalty and Sweeping New Privacy Restrictions on Facebook. [online] Federal Trade Commission. Available at: https://www.ftc.gov/news-events/press-releases/2019/07/ftc-imposes-5-billion-penalty-sweeping-new-privacy-restrictions.
Garrett, R.K. (2019). Social media’s contribution to political misperceptions in US Presidential elections. PLOS ONE, [online] 14(3), p.e0213500. Available at: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0213500.
Laub, Z. (2019). Hate Speech on Social Media: Global Comparisons. [online] Council on Foreign Relations. Available at: https://www.cfr.org/backgrounder/hate-speech-social-media-global-comparisons.
ReliefWeb. (n.d.). The Weaponization of Social Media: How social media can spark violence and what can be done about it, November 2019 – World. [online] Available at: https://reliefweb.int/report/world/weaponization-social-media-how-social-media-can-spark-violence-and-what-can-be-done.
Solon, O. (2017). Revealed: Facebook exposed identities of moderators to suspected terrorists. [online] the Guardian. Available at: https://www.theguardian.com/technology/2017/jun/16/facebook-moderators-identity-exposed-terrorist-groups.
The Global Risks Report 2019, 14th Edition: Insight Report. (n.d.). [online] World Economic Forum. Available at: https://www3.weforum.org/docs/WEF_Global_Risks_Report_2019.pdf.