
Advanced Topics in Privacy, DP, and Cyber Law

Given the Government’s published National AI Strategy, assess the need for an AI Bill of Rights in the UK.

On 22 September 2021, the UK Government published the country’s first National AI Strategy, which sets out its plan to make the UK a global AI superpower. The strategy recognises the substantial impact that AI will have on businesses around the world and aims to build the most pro-innovation regulatory environment in the world, enabling businesses in the country to benefit from AI adoption and to compete successfully in the global market. The strategy is articulated around three major pillars. The first focuses on investing in the skills and resources that drive AI innovation. The second seeks to ensure that the benefits of AI innovation are shared across all sectors of the country’s economy. The third pillar, which is the focus of this assignment, aims to ensure that AI technologies are effectively regulated so that the potential risks and harms they pose are addressed.

How AI affects human rights

AI refers to the simulation of human intelligence in machines that are programmed to learn and perform human-like tasks. It has become pervasive thanks to improvements in computing power, growing volumes of data, and more capable algorithms, and it is expected to enhance human capacities and empower people over the next decade. However, as AI grows more sophisticated and ubiquitous, experts warn that it could pose significant threats to humanity. The increasing automation of jobs, racial and gender bias stemming from flawed or outdated data sources, the spread of fake news, and the adoption of autonomous weapons that operate without human oversight have all been cited among the biggest dangers posed by AI.

The past decade has witnessed the rapid rise of AI, which has transformed technology across a wide range of sectors and areas of life, including health, education, social care, work, and law enforcement. AI offers considerable opportunities for the advancement of human rights: it provides new tools for human rights investigations, documentation, and policy-making, and it enables more personalised education and health care, improving people’s quality of life. Nevertheless, AI also has the potential to violate human rights and undermine the laws that protect them. There are therefore several issues that the UK government needs to take into consideration when implementing its AI strategy in order to ensure that human rights are protected.

In the past few years, the field of AI has grown immensely in areas such as natural language processing, speech recognition, decision-making, and image and video generation. Breakthrough applications have emerged in domains including interactive personal assistants, medical diagnosis, autonomous driving, logistics, language translation, and sports. For the first time in history, decisions critical to human wellbeing are being made in part or even wholly by machines, from medical procedures and job applications to creditworthiness and prison sentencing. In the pre-algorithm world, organisations made these decisions within legal frameworks designed to ensure equity, transparency, and fairness in the decision-making process. Today, many of these decisions are made entirely by, or heavily influenced by, machines, and as more organisations go digital, AI increasingly informs crucial decisions. Experts warn that some of these decisions may be flawed, because what these systems learn and how they interpret it are shaped by the prejudices of the people who build them and by the partial data sets on which they are trained.

In recent years, an overwhelming and growing body of evidence has made it alarmingly clear that algorithmic decision systems are perpetuating injustice against certain demographic groups. AI systems are trained on real-world data sets, and they can inherit the biases of the people who build them and of the ways in which they are developed and deployed. This is referred to as algorithmic bias: systematic features of an algorithm that cause it to produce skewed or unfair outcomes. The effect has been to exacerbate rather than reduce injustice and discrimination, with black patients being de-prioritised for kidney transplants and black Uber drivers being dismissed over ‘racist’ facial recognition software.
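
To make the notion of algorithmic bias concrete, the minimal Python sketch below computes one widely used fairness signal: the difference in the rate of favourable outcomes a decision system gives to two groups, sometimes called the selection-rate gap or demographic parity difference. The groups, outcomes, and figures are invented purely for illustration.

```python
# A minimal bias check on invented data: compare the rate of favourable
# outcomes ("selection rate") the system gives to each group.
decisions = [  # hypothetical (group, outcome) pairs; 1 = favourable decision
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def selection_rate(group: str) -> float:
    outcomes = [outcome for g, outcome in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = selection_rate("group_a")  # 0.75 on this toy data
rate_b = selection_rate("group_b")  # 0.25 on this toy data
print(f"group_a: {rate_a:.0%}, group_b: {rate_b:.0%}, gap: {rate_a - rate_b:.0%}")
# A large gap is not proof of unlawful discrimination on its own, but it is the
# kind of signal an audit of an algorithmic decision system would investigate.
```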

The need for an AI Bill of Rights in the UK

As AI takes on a bigger decision-making role in both the public and private sectors, scientists and AI experts have identified the need for an AI Bill of Rights that protects people in the face of this transformative technology. As the UK government pursues its vision of making Britain a global hub for AI innovation, it must build the country’s AI ecosystem on democratic principles and universal human rights if the technology is to benefit society. One way to accomplish this is through an AI Bill of Rights that sets out the standards every existing or new AI technology must meet in order to protect the rights of individuals.

In both private and public institutions, algorithms that assist with hiring, provide health care, determine creditworthiness, and direct policing have been shown to treat certain groups of people differently. In recent years, examples have emerged of AI recruiting tools that were biased against women and persons with disabilities, and of prominent facial and speech recognition algorithms that are skewed against black people. Algorithmic bias can arise from various factors, such as unaddressed statistical biases, the use of prejudiced historical data, or reliance on unrepresentative training data. In many cases, algorithmic bias is not only unethical but may also amount to unlawful discrimination.

In the first case of its kind in Europe, two UK unions, the App Drivers and Couriers Union (ADCU) and the Independent Workers’ Union of Great Britain (IWGB), launched legal action against Uber in October 2021 over its use of facial recognition software known to be error-prone with people of colour. The action was triggered by the case of a black driver in the UK who lost his job when the automated face-scanning software failed to recognise him, leading to the deactivation of his account. In backing the action, the IWGB indicated that many other drivers had had their registration with Uber terminated as a result of alleged errors in the software.

With one in ten people in Europe now working through a digital platform such as Deliveroo or Uber, the gig economy is responsible for some of the most pressing challenges to individuals’ digital rights. Its workforce is disproportionately made up of people of colour, immigrants, and others for whom alternative employment options are often limited. In London, for instance, nine out of ten private hire drivers identify as Black, Asian, or mixed race, according to Transport for London data. Because their employment options are limited, many of these workers are likely to accept unjust pay and unfair working conditions, such as the absence of sick pay, parental leave, or holidays.

AI hiring tools have also been shown to amplify existing gender biases in recruitment. In 2018, the tech giant Amazon abandoned an AI recruiting tool it had been developing for four years after discovering that the software discriminated against women. The data used to build the algorithm was benchmarked against the company’s predominantly male workforce to determine an applicant’s fit; as a result, the tool downgraded CVs containing the word “women’s” and filtered out candidates who had attended women’s colleges. Similarly, mortgage approval algorithms used to predict creditworthiness have been found to discriminate against individuals on the basis of race and socioeconomic status, and some AI software in the healthcare sector has been shown to recommend medical support for the groups that access health care services most often rather than for those who need the services most.
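
To illustrate how such bias arises, the contrived Python sketch below trains a simple text classifier on invented historical screening decisions in which applications mentioning “women’s” were rejected; the model then reproduces that prejudice as a negative weight on the token. The data and modelling choices are assumptions made for illustration only and are not a reconstruction of Amazon’s actual system.

```python
# A contrived toy example: a classifier trained to imitate biased historical
# hiring decisions learns to penalise a gendered term. All data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of chess club, software engineering internship",
    "women's coding society president, software engineering internship",
    "rugby team captain, backend developer experience",
    "women's college graduate, backend developer experience",
    "open source contributor, machine learning projects",
    "women's hackathon winner, machine learning projects",
]
past_decisions = [1, 0, 1, 0, 1, 0]  # 1 = advanced, 0 = rejected (historical pattern)

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(resumes)
model = LogisticRegression().fit(X, past_decisions)

# The learned weight for the token "women" comes out negative: the model has
# encoded the historical prejudice rather than any measure of candidate quality.
weight = model.coef_[0][vectoriser.vocabulary_["women"]]
print(f"learned weight for 'women': {weight:.2f}")
```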

Data collected by means of AI also raises questions about privacy and transparency. The huge volumes of data that organisations feed into their AI-driven algorithms are vulnerable to data breaches. AI methods are also being used to identify individuals who wish to remain anonymous and to infer personal data without the individual’s permission. Without proper protection and regulatory assurances, AI in the health sector can pose significant risks to patient data. On social media platforms, the massive amounts of personal data that the sites gather and retain are susceptible to hacking and breaches, especially where critical security measures and access restrictions are not in place.
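
As an illustration of the re-identification risk, the sketch below uses invented records to show how a few quasi-identifiers (postcode, birth year, and gender) can be unique in combination even after names are removed, allowing an ‘anonymised’ record to be linked back to an individual through other public data.

```python
# A minimal sketch, with invented records, of how "anonymised" data can still
# identify people: quasi-identifiers are often unique in combination.
from collections import Counter

released_records = [  # hypothetical "anonymised" health data
    {"postcode": "SW1A", "birth_year": 1984, "gender": "F", "diagnosis": "asthma"},
    {"postcode": "SW1A", "birth_year": 1990, "gender": "M", "diagnosis": "diabetes"},
    {"postcode": "SW1A", "birth_year": 1990, "gender": "M", "diagnosis": "eczema"},
    {"postcode": "E1",   "birth_year": 1984, "gender": "F", "diagnosis": "hypertension"},
]

quasi_identifiers = ("postcode", "birth_year", "gender")
counts = Counter(tuple(r[q] for q in quasi_identifiers) for r in released_records)

# Any combination appearing only once can be matched against a public source
# (e.g. an electoral roll) to re-identify the person and reveal their diagnosis.
unique = [combo for combo, n in counts.items() if n == 1]
print(f"{len(unique)} of {len(released_records)} records have a unique quasi-identifier combination")
```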

The issues discussed above highlight the need for an AI Bill of Rights in the UK that sets out which AI technologies are permissible and the ground rules under which they may operate. Some of the failures of AI may be unintentional, but they raise critical issues that disproportionately affect already marginalised groups. A large proportion of these failings result from developers not using appropriate data sets and not auditing systems comprehensively before deployment. An AI Bill of Rights would help to address these problems by setting out six fundamental rules that developers must comply with in order to promote the interests of all people, protect privacy, and ensure the trustworthiness of AI systems. Further, such a bill would set out specific requirements governing every element of any AI used in the country, from its management to its supply and implementation.

Among the major concerns about the risks of AI technologies is the use of inaccurate data sets that embed past prejudice and exacerbate modern-day discrimination: facial recognition leading to wrongful, discriminatory arrests; digital assistants failing to recognise particular accents; facial recognition systems misidentifying people of colour; and discriminatory outcomes from mortgage approval algorithms. The government must ensure that powerful technologies respect the country’s democratic values and conform to the central tenet that every person should be treated fairly. An AI Bill of Rights can help to codify the values and tenets with which all AI technology must comply.

An AI Bill of Rights can also stipulate the rights and freedoms of individuals using, or subject to, AI technologies. Over the decades, human rights have expanded both in the number of rights recognised and in the mechanisms available within the international regime to protect them. In the twenty-first century, there is a critical need for a bill of rights that protects people against the powerful technologies the world has created and that sets out the rights and freedoms these technologies are expected to respect. Potential affirmative rights include the right to know when and how AI technologies are influencing decisions that affect individuals’ civil liberties and civil rights. The bill should also protect people from being subjected to AI technologies that have not been thoroughly audited to ensure that they are accurate, unbiased, and trained on adequately representative data sets. In addition, it should safeguard individuals from discriminatory or pervasive surveillance and monitoring in the workplace, the home, and the community, and it should establish a right to meaningful recourse where the use of an algorithm results in harm.
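
As an indication of what one element of such an audit could look like in practice, the sketch below checks whether the demographic make-up of a training set broadly matches a reference population; the figures and the tolerance threshold are hypothetical assumptions rather than any prescribed standard.

```python
# A minimal sketch of a training-data representativeness check, using
# invented figures and an arbitrary tolerance.
reference_population = {"group_a": 0.51, "group_b": 0.35, "group_c": 0.14}  # hypothetical
training_data_share  = {"group_a": 0.72, "group_b": 0.22, "group_c": 0.06}  # hypothetical

TOLERANCE = 0.10  # flag groups whose share falls this far below the reference

for group, expected in reference_population.items():
    observed = training_data_share.get(group, 0.0)
    if observed < expected - TOLERANCE:
        print(f"{group}: under-represented ({observed:.0%} in training data vs {expected:.0%} in population)")
```

A check of this kind would sit alongside accuracy and outcome-disparity tests; on its own it cannot guarantee fairness, but it makes one common source of bias visible before a system is deployed.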

The bill should also stipulate a mechanism for enforcing these rights. Corporations, for instance, should be required to disclose how their AI systems collect and process data in a way that is understandable to anyone. Sanctions for those who fail to respect the rights should be clearly defined; they might include banning biased algorithms from decision-making that directly affects people’s civil rights and liberties. Enforcement could also extend to the government refusing to buy software or technology products that do not respect these rights, and to regulations requiring all government contractors to use AI technologies that abide by them.
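
By way of illustration, a standardised, machine-readable disclosure might resemble the sketch below; the fields and the example system are hypothetical assumptions, not a format that any existing regulation prescribes.

```python
# A minimal sketch of a machine-readable transparency record; every field and
# value here is a hypothetical example, not a mandated schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class AITransparencyRecord:
    system_name: str
    purpose: str
    data_collected: list[str]
    automated_decision: bool
    human_review_available: bool
    audit_date: str
    contact_for_recourse: str

record = AITransparencyRecord(
    system_name="ExampleCo driver verification",  # hypothetical system
    purpose="Verify the identity of registered drivers",
    data_collected=["selfie image", "account ID", "device location"],
    automated_decision=True,
    human_review_available=True,
    audit_date="2022-01-15",
    contact_for_recourse="appeals@example.com",
)
print(json.dumps(asdict(record), indent=2))
```

Publishing such records in a common format would let regulators, researchers, and affected individuals inspect how a system collects and processes data without needing access to proprietary code.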

Conclusion

AI technologies have been integrated into many aspects of everyday life, from social media platforms to smart home appliances, and they are increasingly used by both public and private institutions to assess people’s qualities or skills, allocate resources, and otherwise make decisions that can have profound effects on individuals’ human rights. As a result, there is an urgent need to find the right balance between these technological advancements and the protection of human rights. Enacting an AI Bill of Rights would be a critical step in ensuring that these powerful technologies respect the country’s democratic values and the central tenet of fair treatment for all. The bill would not only codify the values and tenets with which all AI technology must comply but also stipulate the rights and freedoms of individuals that these technologies must respect.
