I anticipate becoming a Data Scientist, and I would like to perform this role at Amazon. A data scientist uses data to extract valuable insights and support data-driven decisions. There are numerous ethical debates in this career that I expect to encounter. The most prevalent concern is algorithmic bias in product recommendations and hiring processes. The issue revolves around whether data-driven algorithms inadvertently perpetuate bias against certain demographic groups or whether they are purely objective and non-discriminatory. This paper presents the debate, identifies the underlying positions and the biases behind them, and examines statistics that support or challenge each position.
Debate
Position One asserts that the data-driven algorithms powering product recommendations and hiring processes may unintentionally perpetuate bias based on demographics such as gender (Chen, 2023). This bias is attributed to historical data, which may contain inherent societal stereotypes. For instance, Amazon’s historical hiring data reflected discrimination against people of color in recruitment, and algorithms trained on that data are likely to perpetuate the trend. Since the algorithms learn from historical data, as Chen (2023) shows, they will continue to favor mainstream racial groups while hindering opportunities for people of color to become Amazon employees.
Position Two, in contrast, maintains that Amazon’s algorithms are designed with fairness and objectivity in mind (Amazon, n.d.). Proponents argue that any observed bias results from the data on which these algorithms are trained rather than from an inherent bias in the algorithms themselves. Amazon is portrayed as an organization committed to addressing this issue through continuous monitoring and improvement to mitigate potential bias. Amazon (n.d.) explains that the company uses the DPPL (difference in positive proportions in predicted labels) bias metric, for which an allowed range of values is set during deployment. Any deviation from this range triggers a “bias detected” alert. The frequency of these checks is adjustable and, according to the company, they are performed at regular intervals, such as every two days.
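To make this monitoring mechanism concrete, the sketch below shows, under stated assumptions, how a DPPL-style check could be computed on a batch of model predictions. It is not Amazon’s actual implementation: the function names, the ±0.1 allowed range, and the toy data are illustrative assumptions, and the metric itself is simply the difference in positive-prediction rates between two groups.

```python
import numpy as np


def dppl(predicted_positive, group):
    """Difference in Positive Proportions in Predicted Labels (DPPL).

    predicted_positive: 0/1 model predictions, where 1 is the favorable outcome.
    group: facet labels, "a" for the reference group and "d" for the monitored group.
    Returns q_a - q_d, which lies in [-1, 1]; values near 0 suggest parity.
    """
    predicted_positive = np.asarray(predicted_positive)
    group = np.asarray(group)
    q_a = predicted_positive[group == "a"].mean()  # positive-prediction rate, facet a
    q_d = predicted_positive[group == "d"].mean()  # positive-prediction rate, facet d
    return q_a - q_d


def check_bias(predictions, groups, allowed_range=(-0.1, 0.1)):
    """Return the DPPL value and whether it falls outside the allowed range (hypothetical threshold)."""
    value = dppl(predictions, groups)
    low, high = allowed_range
    return value, not (low <= value <= high)


# Toy batch of predictions for two facets; in a deployed system this check would
# run on fresh production data at a configurable interval (e.g., every two days).
preds = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
facets = ["a", "a", "a", "a", "a", "d", "d", "d", "d", "d"]
value, alert = check_bias(preds, facets)
print(f"DPPL = {value:.2f}, bias detected: {alert}")
```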
Those favoring Position One often draw on act utilitarianism. According to Quinn (2015), this perspective holds that an action is right when its benefits exceed the harms it causes and wrong when the harms outweigh the benefits. The theory is based on the principle of utility, where utility refers to producing happiness or preventing pain and suffering for an individual or community. In line with act utilitarianism, Position One emphasizes the greater good, focusing on improving the lives of marginalized groups impacted by algorithmic bias. This perspective posits that addressing bias is more than a practical necessity; it is a moral obligation to create a more equitable and just society.
The opposition to Position One cites deontological ethics. This ideology holds that certain acts are morally obligatory regardless of their consequences for human well-being. Opponents grounded in deontological ethics might argue that the algorithms themselves are neutral. They may also assert that Amazon has a duty to protect user privacy and maintain the integrity of its systems, and that, in this view, addressing bias may be seen as a distraction from that primary duty.
Supporters of Position Two might find support in virtue ethics. According to Quinn (2015), virtue ethics holds that a right action is one that a virtuous person, acting in character, would perform in the same circumstances. A virtuous person is an individual who possesses and lives out the virtues, which the theory defines as character traits a person needs to be truly happy and to flourish. Those who support Position Two might emphasize Amazon’s commitment to self-regulation and the virtues of fairness and non-discrimination in algorithm design. In this perspective, Amazon’s intent to create ethical algorithms is seen as virtuous, and any unintended bias is not a reflection of its ethical stance.
In contrast, opponents of Position Two might subscribe to social contract ethics. The theory contends that individuals in a civilized society have implicitly consented to the creation of moral rules for interpersonal conduct and to the existence of a government with the power to enforce these rules (Quinn, 2015). Social contract-based opponents of Position Two might therefore argue that Amazon should be held accountable for potential societal harm caused by biased algorithms. They might justify this by claiming that Amazon, as a tech giant, has an implicit social contract to ensure its algorithms are ethically sound and do not harm any demographic group.
Bias
Position One might exhibit a bias in favor of marginalized groups. This bias arises from a genuine concern for equity and justice, but it can lead to overlooking the practical challenges Amazon faces in addressing bias without compromising its overall business performance. Proponents of this position may give greater weight to cases where bias negatively impacts marginalized groups, potentially magnifying the ethical stakes at the expense of practical considerations. While a commitment to social justice drives their intentions, they may inadvertently underestimate the complexity of mitigating bias in real-world algorithms and the need for a balanced approach that keeps Amazon’s systems functional.
Conversely, Position Two might exhibit a bias in favor of corporate interests. This bias can result in downplaying or ignoring the real impact of biased algorithms on individuals and society. Proponents of this position might prioritize Amazon’s business goals and the integrity of its algorithms over addressing potential bias. Their stance might unintentionally understate the significance of biases that, even if unintended, can lead to discrimination and societal harm. While they emphasize the neutrality of algorithms, they might neglect the ethical responsibility of tech companies to minimize harm and ensure that their products are equitable for all users.
Statistics
Certain statistics support Position One. A study by researchers at USC found bias in up to 38.6% of the “truths” used by artificial intelligence (Gruet, 2022). In addition, a report by Ledford (2019) showed that millions of Black people are affected by racial bias in healthcare algorithms. Another report by Public Citizen (2023) claims that auto insurance costs more for minority groups because of algorithmic bias: non-white individuals pay approximately 30% more than white communities for motor insurance premiums despite similar or lower accident costs. Similarly, the report attributes part of the credit-score gap to biased algorithms, with white homebuyers’ scores averaging 57 points higher than those of Black homebuyers and 33 points higher than those of Latinx homebuyers.
Other statistics challenge Position Two. For example, Public Citizen (2023) demonstrates that, because of biased algorithms, mortgages are either inaccessible or more expensive for minority groups. The report illustrates that discriminatory mortgage pricing costs Black and Latinx borrowers at least $750 million every year. In addition, 6% or more of Black and Latinx applications are denied that would have been approved had the borrower been white. Public Citizen (2023) also indicates that Black defendants in the criminal justice system are approximately 77% more likely to be assigned higher risk scores than white defendants.
Assessing the ethical considerations of the sources used in this debate is crucial to determining their credibility and potential biases. Public Citizen is generally regarded as an ethical source; its primary goal is to safeguard the interests of the public, which aligns with the ethical perspective of promoting fairness and equity in data-driven algorithms. USC Viterbi (with which Gruet is affiliated), an academic institution, is also typically considered an ethical source because it conducts research with a commitment to academic rigor and unbiased inquiry. However, it is essential to scrutinize individual studies and researchers for potential conflicts of interest or biases arising from funding sources or personal affiliations. Nature (with which Ledford is affiliated), known for its high standards in scientific publishing, is likewise considered an ethical source; its articles undergo strict peer review to ensure quality and integrity. Nevertheless, ethical concerns would arise in the context of this debate if corporations, including tech companies like Amazon, funded the research published in Nature, since such funding could introduce a conflict of interest that affects the objectivity and ethical standing of the research. Therefore, while these sources are generally ethical, it is crucial to scrutinize individual studies and their funding sources to ensure that they maintain integrity and independence.
Conclusion
I align with Position One primarily because addressing algorithmic bias is fundamental to ensuring fairness, equity, and justice for all users. The ethical ideologies supporting this position emphasize the moral responsibility to address bias and create a just society. The debate surrounding algorithmic bias is unlikely to be fully resolved soon, as it is a multifaceted, ever-evolving issue. Amazon, along with other tech giants, will continue to grapple with the challenge of mitigating bias in algorithms, but progress will be incremental. Ethical concerns in data science are enduring, and vigilance and accountability are essential in the pursuit of ethical AI and data practices.
References
Amazon. (n.d.). Amazon AI fairness and explainability whitepaper. https://pages.awscloud.com/rs/112-TZM-766/images/Amazon.AI.Fairness.and.Explainability.Whitepaper.pdf
Chen, Z. (2023). Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communications, 10(1), 1–12.
Gruet, G. (2022). ‘That’s Just Common Sense’. USC researchers find bias in up to 38.6% of ‘facts’ used by AI. USC Viterbi. https://viterbischool.usc.edu/news/2022/05/thats-just-common-sense-usc-researchers-find-bias-in-up-to-38-6-of-facts-used-by-ai/
Ledford, H. (2019). Millions of black people affected by racial bias in health-care algorithms. Nature. https://www.nature.com/articles/d41586-019-03228-6
Public Citizen. (2023). Report: Algorithms are worsening racism, bias, discrimination. https://www.citizen.org/news/report-algorithms-are-worsening-racism-bias-discrimination/
Quinn, M. J. (2015). Ethics for the information age. Pearson.