Utilization of Artificial Intelligence (AI) programs has increased in recent years to enhance creativity and productivity within various areas, and the use of AI has spread to global markets. The World Intellectual Property Organization, a United Nations agency, published a report on technological trends that considered over 300,000 AI-related patents (World Intellectual Property Organization, 2019). At least 50 per cent of the patents were published in 2018 (World Intellectual Property Organization, 2019). The report summarized the risks, challenges, and opportunities that these systems pose for corporate governance, covering the analysis, optimization, customization, and automation of processes through software and machines in various industries. Despite these benefits, the use of such programs has posed major threats to environmental, social, and governance performance as well as corporate reputation (Grove et al., 2020). Boards must recognize the new risks linked to the use of these technologies, as well as the risks to a firm's reputation. This report explores the need to assess AI-related risks, examines the potential effects of those risks on firm reputation, and offers recommendations for best practices in addressing them.
2.0 The Need to Assess Risk Associated With AI
2.1 Regulatory and Legal Compliance
Artificial intelligence involves complex processes that carry potential risks once deployed. As such, firms must assess those risks before implementing AI systems, and must mitigate the risks of applying AI and advanced analytics models (Cheatham et al., 2019). There are several reasons for assessing the risks associated with artificial intelligence in firms. First, risk assessment helps ensure regulatory and legal compliance. Firms should verify that their use of these systems complies with applicable regulations and laws, including those governing privacy, intellectual property, and discrimination. For example, the lack of algorithmic transparency is a major discussion area in AI and highlights high-risk areas for governing AI and its design (Rodrigues, 2020). Opaque algorithms may deny people loans, jobs, and other benefits without an accessible explanation of how the decision was reached, exposing the firm to legal challenges. Failing to comply with the regulations results in reputational damage as well as increased costs from fines and legal action.
2.2 Check on Cyber and Security Issues
Another reason to assess risks is to check the cyber and security issues associated with AI use in firms. The systems may require access to large amounts of data, which can create vulnerability to cyber threats and security breaches. Data privacy attacks involve inferring the data used to train AI models, thereby compromising data privacy. As noted by Radanliev et al. (2020), security is a central concern in a cyber-physical system and spans both the physical and electronic domains; it requires the protection of information and assurance of data privacy. Given such threats, data protection may be paramount for firms, and dedicated systems could be employed to address these challenges. Analyzing the parameters and querying the models used in an AI system poses vulnerability threats (Rogers, 2023). In a membership inference attack, an attacker determines the probability that a specific record was part of the data set used to train a given AI model. Such breaches may lead to loss of data and damage to the reputation of a firm. This raises ethical concerns that need assessment to help safeguard data confidentiality. In addition, ethical concerns may arise in decision-making processes that rely on AI, so firms must consider the ethical implications of using AI in such decisions. Transparency and limited bias in AI systems could be critical for firms to avoid damage to their reputation.
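The membership inference attack described above can be illustrated with a minimal Python sketch. This is not a real attack implementation: the model, records, and confidence threshold below are hypothetical stand-ins, chosen only to show the common baseline idea that an overfitted model often behaves more confidently on records it was trained on.

```python
# Illustrative sketch of a confidence-threshold membership inference test.
# All names and values here are hypothetical, for explanation only.

def model_confidence(record):
    # Hypothetical model: overfitted models tend to return higher
    # prediction confidence on records they were trained on.
    training_set = {"alice", "bob"}
    return 0.98 if record in training_set else 0.55

def infer_membership(record, threshold=0.9):
    """Guess whether a record was in the training set, based on how
    confidently the model predicts on it."""
    return model_confidence(record) > threshold

print(infer_membership("alice"))    # attacker guesses: training-set member
print(infer_membership("mallory"))  # attacker guesses: not a member
```

The sketch shows why limiting the precision of exposed confidence scores, or restricting query access to a model, is a common mitigation for this class of attack.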
2.3 Operational Risks
Furthermore, assessing risks ensures that a firm is aware of the operational issues linked to the use of AI across its processes (Fosso Wamba et al., 2022). These systems can pose risks to operations, such as failing to perform as intended. This may cause errors that lead to service delays affecting a firm's operational needs, and it further risks eroding customer trust and the firm's public reputation. Effective risk assessment ensures that firms identify the major risks and implement methods to mitigate them.
3.0 Ways AI-Related Issues Could Impact Corporate Reputations
3.1 Bias and Discrimination
AI-related issues can negatively affect corporate reputations. For example, AI systems may exhibit discrimination and bias against certain groups, leading to reputational damage for a firm and public negativity toward its operations. A study by Gupta and Mishra (2022) described how, in recent years, AI has gained popularity in firms' recruitment processes yet may inherit prejudice. Gender bias has been identified as a critical area of concern when using AI in recruitment (Gupta and Mishra, 2022), affecting the implementation of systems that use AI throughout the candidate-selection process. One example is that Google's advertising system was found to show certain job ads to male candidates more often than to female candidates (Gupta and Mishra, 2022), illustrating how AI can perpetuate unconscious gender bias and affect a company's reputation. In another example, Dastin (2018) reported that Amazon scrapped its AI recruitment tool because it showed bias and discrimination against female candidates.
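One common way firms audit recruitment systems for the kind of bias described above is the "four-fifths rule": a selection rate for any group below 80 per cent of the highest group's rate is a widely used red flag for disparate impact. The following Python sketch illustrates this check; the applicant counts are invented for the example and the function names are this sketch's own, not any standard library's.

```python
# Illustrative adverse-impact check using the four-fifths rule.
# The hiring data below are hypothetical.

def selection_rates(outcomes):
    """outcomes maps group -> (number selected, total applicants)."""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def flags_adverse_impact(outcomes, ratio=0.8):
    """Flag each group whose selection rate falls below `ratio` times
    the highest group's selection rate."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return {g: r / highest < ratio for g, r in rates.items()}

hiring = {"male": (60, 200), "female": (30, 200)}
print(flags_adverse_impact(hiring))  # {'male': False, 'female': True}
```

Here the female selection rate (15%) is only half the male rate (30%), so the check flags the system for review; such an audit, run before deployment, is one concrete form the risk assessment discussed in this report can take.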
3.2 Security Breaches
Another way in which AI issues may affect firms is through security breaches. The systems may require access to people's personal information, which may create vulnerability to breaches of security systems. A breach can then cause a loss of data that affects the reputation of a company. For instance, Bernard et al. (2017) reported that Equifax's systems were hacked in a data breach that affected approximately 143 million individuals, leading to reputational damage for the company. The security breach also undermined customers' trust in the company.
3.3 Unforeseen Challenges
Furthermore, there may be unforeseen consequences of using AI in companies that impact operations and reputation. For instance, the systems may make decisions that conflict with a company's values and further damage its reputation. For example, Kentucky Fried Chicken had an AI automation misstep that led to public outcry, causing KFC to apologize and delete the advertisement (Segal, 2022). The company's automated system sent customers a promotional message on the anniversary of Kristallnacht, inviting them to treat themselves to tender cheese and fried chicken. Artificial intelligence can also be opaque, leading to transparency challenges: firms may find it difficult to understand how decisions are made, which can damage a company's reputation through outcomes such as customer distrust. In a real-life example from 2019, Facebook was criticized for its use of artificial intelligence to moderate content (Lauer, 2021); the process was faulted for lacking accountability and transparency.
3.4 Productivity and Performance Issues
Lastly, the use of artificial intelligence may be affected by productivity and performance issues. For example, the systems are prone to failing to execute operations within a firm, which may harm the reputation of a company and cause major challenges in service delivery. Gains in operational efficiency, supply chain operations, and maintenance may also be undermined by AI issues, and automated adaptation to changing market conditions could be impacted as well. One danger is incorrect predictions, which can be costly and can harm humans. A real-life example is the Ethiopian Airlines Boeing 737 MAX, whose automated flight-control intelligence malfunctioned (Seidel, 2019). The airplane was equipped with an automated anti-stall system, yet it crashed due to stalling, causing public outrage. In addition, lawsuits were filed against the manufacturer, with fines and compensation required from the company for those affected. For some time, the issue eroded people's trust and damaged the company's reputation, affecting its revenues.
3.5 Misusing AI
Misusing AI is a critical issue that can affect a firm's reputation. For example, Cambridge Analytica obtained the profiles and data of Facebook users without their consent. The data was used to deliver targeted advertisements during the 2016 United States election campaign, with personalized messaging employed to influence voters (Cadwalladr & Graham-Harrison, 2018). This damaged the company's reputation and further led to fines and distrust among customers.
While AI has shown potential to revolutionize firms, it carries major risks that affect processes wherever the systems are used. Firms must therefore proactively assess and manage these risks to minimize the threat of reputational damage, in addition to ensuring ethical operations and responsible use of AI systems across corporations. Risk assessment when using AI is critical because it encourages a culture of ethical compliance, transparency, and accountability in executing processes effectively. A firm must assess all risks of using AI, including system breaches and operational risks, to secure maximum benefits and increased revenues. The KFC example is critical for understanding the kind of decision-making misstep that AI can cause: the issue damaged the firm's reputation because customers reacted negatively to how the message and advertisement were presented. The other real-life examples presented show that the use of AI carries issues requiring risk assessment and proper management before the systems are deployed in various aspects of a business. Effective risk assessment therefore helps firms address the challenges linked to AI operations. Productivity and operational performance could also suffer if AI issues are not well addressed in the corporate world.
Given the potential issues that AI poses to firms, taking a strategic and proactive approach to the management and assessment of risks could be critical, based on the following recommendations.
- Effective risk assessment models and strategies must be employed in firms. Firms should execute a comprehensive assessment of AI-related risks before implementing such systems in different processes. This can include identifying the major risks and adverse effects linked to the systems, assessing those effects by checking the severity and likelihood of each risk, and putting models in place to mitigate them. Strategic teams can be included in this process, such as IT experts, legal personnel, and business-function experts, to address reputational damages and risks linked to AI.
- Firms must encourage accountability and transparency in the use of artificial intelligence systems across their operations. For example, offering an explanation of how the AI systems reach their decisions would be critical for employees' understanding. In addition, firms should establish clear lines of accountability for AI outputs and ensure that the desired checks and assessment models are in place.
- Lastly, firms must follow ethical considerations in the development, design, and deployment of artificial intelligence models. This may involve making sure that the systems avoid discrimination, respect human rights, and encourage fairness. Staff must be trained on the ethical issues of using the systems and, in the case of ethical concerns, ought to report the issues for further management.
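The severity-and-likelihood step in the first recommendation can be sketched in a few lines of Python. This is a minimal illustration, assuming a conventional 1-5 scale for each dimension; the listed risks and their scores are hypothetical examples, not assessments of any real firm.

```python
# Minimal sketch of likelihood x severity risk scoring, as described
# in the recommendations above. Risks and 1-5 scores are hypothetical.

def risk_score(likelihood, severity):
    """Combine a 1-5 likelihood and a 1-5 severity into a single score."""
    return likelihood * severity

ai_risks = {
    "algorithmic bias in hiring": (4, 5),
    "membership inference on training data": (2, 4),
    "model outage delaying service": (3, 3),
}

# Rank risks so that mitigation effort targets the highest scores first.
ranked = sorted(ai_risks.items(), key=lambda kv: risk_score(*kv[1]), reverse=True)
for name, (likelihood, severity) in ranked:
    print(f"{name}: {risk_score(likelihood, severity)}")
```

In practice a cross-functional team, as recommended above, would supply the scores; the value of even this simple model is that it forces the team to make its risk judgments explicit and comparable.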
References
Bernard, T. S., Hsu, T., Perlroth, N., & Lieber, R. (2017, September 7). Equifax says cyberattack may have affected 143 million in the U.S. The New York Times. Retrieved April 5, 2023, from https://www.nytimes.com/2017/09/07/business/equifax-cyberattack.html
Cadwalladr, C., & Graham-Harrison, E. (2018, March 17). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian. Retrieved April 5, 2023, from https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election
Cheatham, B., Javanmardian, K. and Samandari, H. (2019). Confronting the risks of artificial intelligence. McKinsey & Company. Available at: https://www.mckinsey.com/capabilities/quantumblack/our-insights/confronting-the-risks-of-artificial-intelligence (Accessed: April 5, 2023).
Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. Available at: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G (Accessed: April 5, 2023).
Fosso Wamba, S., Queiroz, M.M., Guthrie, C. and Braganza, A. (2022). Industry experiences of artificial intelligence (AI): benefits and challenges in operations and supply chain management. Production Planning & Control, 33(16), pp.1493-1497.
Grove, H., Clouse, M. and Xu, T. (2020). New risks related to emerging technologies and reputation for corporate governance. Journal of Governance and Regulation, 9(2).
Gupta, A. and Mishra, M. (2022). Ethical Concerns While Using Artificial Intelligence in Recruitment of Employees.
Lauer, D. (2021). Facebook’s ethical failures are not accidental; they are part of the business model. AI and Ethics, 1(4), pp.395-403.
Radanliev, P., De Roure, D., Walton, R., Van Kleek, M., Montalvo, R.M., Maddox, L.T., Santos, O., Burnap, P. and Anthi, E. (2020). Artificial intelligence and machine learning in dynamic cyber risk analytics at the edge. SN Applied Sciences, 2, pp.1-8.
Rodrigues, R. (2020). Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. Journal of Responsible Technology, 4, p.100005.
Rogers, J. (2023). Artificial intelligence risk & governance. AI & Analytics for Business, Wharton. Available at: https://aiab.wharton.upenn.edu/research/artificial-intelligence-risk-governance/ (Accessed: April 5, 2023).
Segal, E. (2022, November 14). KFC’s apology for sending a promotional message to Germans provides 7 crisis management lessons. Forbes. Retrieved April 5, 2023, from https://www.forbes.com/sites/edwardsegal/2022/11/13/kfcs-apology-for-sending-promotional-message-to-germans-provides-7-crisis-management-lessons/?sh=1a0762d9e06e
Seidel, J. (2019). How a confused AI may have fought pilots attempting to save Boeing 737 MAX8s. news.com.au. Retrieved April 5, 2023, from https://www.news.com.au/technology/innovation/inventions/how-a-confused-ai-may-have-fought-pilots-attempting-to-save-boeing-737-max8s/news-story/bf0d102f699905e5aa8d1f6d65f4c27e
World Intellectual Property Organization (2019). WIPO Technology Trends 2019: Artificial Intelligence. Available at: https://www.wipo.int/edocs/pubdocs/en/wipo_pub_1055.pdf (Accessed: April 5, 2023).