Users perceiving higher levels of transparency in communication with AI service robots are more likely to view them as ethical
Some consumers believe that AI service robots communicate openly and honestly, and these consumers are more inclined to view such robots as ethical. Urquhart, Reedman-Flint, and Leesakul (2019), examining the ethical ramifications of domestic robots, argue that ethically sound domestic robotics can be developed and maintained only when privacy concerns and the transparency of AI service robots toward the users who interact with them are addressed. The authors discuss why it is critical to understand user expectations when designing these robots, so that users do not perceive them as intrusive or unethical. The study thus supports a possible link between the perceived transparency of communication in AI service robot interactions and perceptions of ethical behavior (Urquhart et al., 2019). The authors contend that stronger trust between people and artificial intelligence systems in particular environments, such as homes or other living spaces, may produce more favorable impressions among users, which in turn leads to higher ratings of adherence to acceptable ethical standards.
Etemad-Sajadi, Soussan, and Schöpfer (2022) likewise explore how ethical concerns about human-robot interaction might influence a user's decision to employ a robot. According to the authors, consumers prefer openness in their interactions with AI service robots so that they can understand both the rationale behind the robots' choices and the principles underlying their development. Increasing the clarity of human-robot communication may therefore help users see potential advantages from an ethical perspective (Etemad-Sajadi et al., 2022). Greater feelings of safety during conversation or task completion, along with higher satisfaction with the services robotic systems offer, would positively influence perceived trustworthiness, which is likely to raise adoption rates for robotics technology and improve the customer experience. The end result could be enhanced profitability as well as better customer retention and loyalty, benefiting both suppliers and customers engaging with robots on any platform or in any circumstance.
The final section of the paper by Belk (2021) addresses the moral ramifications of artificial intelligence and service robots. It outlines specific issues that arise, such as how robots could use personal information or communicate with individuals while performing certain activities, and proposes possible methods for dealing with these problems, such as ensuring adherence to relevant privacy and technology laws and regulations. In addition, it offers ideas for improving communication between people and robots in order to boost confidence in AI systems (Belk, 2021). This relates directly to the proposition that users who perceive higher levels of transparency in communication with AI service robots will be more likely to view them as ethical: increased transparency into an AI system's operation fosters a better understanding of its behavior among consumers and citizens, creating greater acceptance of its use within society. For researchers aiming to develop robotic systems ethically while satisfying customer expectations of responsibility and dependability, this paper offers helpful guidance.
Users see AI as trustworthy when there is more openness in communication
Some consumers believe that AI service robots are more trustworthy because these robots communicate with them in a more transparent manner. Papagni and Koeszegi (2020) provide a sensemaking perspective on how to create reliable and intelligible robots. They argue that interactions between AI service robots and users should be designed for maximum transparency in order to raise users' perceptions of trustworthiness, and they specifically suggest three components: clarity through directness, personalisation through natural language processing, and accuracy based on exact knowledge of the robots' capabilities (Papagni & Koeszegi, 2020). If successfully implemented, these methods could increase usability, since users would feel more confident that such robots' communications accurately translate their intentions into workable plans. It is therefore plausible that users who experience greater openness in interactions with AI service robots will be more inclined to see them as trustworthy sources, opening a simpler road to adoption within society.
Furthermore, the notion that consumers will be more likely to consider AI service robots trustworthy when they experience greater openness in interactions with them is reinforced by Alaieri (2018). Alaieri examines how a robot's degree of autonomy and its capacity for open communication with humans affect how trustworthy people judge it to be, leading to a general rise in acceptance and mutual understanding. The work also examines methods for developing socially responsible robotics, including reviewing current laws and regulations pertaining to human behavior, considering potential dangers arising from interactions between people and machines, defining safety protocols, and offering choices on data privacy issues (Alaieri, 2018).
Finally, the study by Delgosha and Hajiheydari (2021), which explores how human users interact with consumer robots, shows that the authors' dual model of psychological ownership and trust helps explain user post-adoption behavior. Higher degrees of transparency in human-robot communication may increase perceived trustworthiness and result in more favorable post-adoption behavior from the user (Delgosha & Hajiheydari, 2021). Greater openness thus encourages greater credibility, which may result in improved interactions with AI service robot systems in many scenarios.
Risks to privacy that consumers perceive with regard to AI service robots
Consumers' perceptions of the privacy risks posed by AI service robots affect their willingness to contact and communicate with them. Song et al. (2022) examine the influence of communication quality and privacy concerns on consumers' intention to adopt artificial intelligence customer service robots. If users do not feel secure using these robotic services, they may be less likely to use them in the future (Song et al., 2022). This suggests that customers' perceptions of the privacy risks associated with AI service robots will affect their willingness to interact and communicate with them. To boost consumer confidence in their systems and encourage wider adoption of this technology within society, businesses offering AI-based customer service solutions must take steps such as preventing data leakage and guaranteeing transparency about how user data is used.
The paper by Seo and Lee (2021) illustrates how trust, perceived risk, and customer satisfaction can be integrated in the context of the rise of service robots in restaurants. It examines consumer trust, the perceived risk posed by AI service robots, and how these factors affect how satisfied customers are with their interactions and communications with the robots. Its main claim is that customers' views of the privacy hazards connected with AI service robots will strongly influence how eager they are to engage and communicate with these technologies (Seo & Lee, 2021). Restaurant owners implementing such robotic services may therefore find this study helpful in developing workable tactics to ensure optimal levels of customer engagement.
The complexity of customers' intentions to employ service robots is addressed by Chuah, Aw, and Yee (2021), who used a fuzzy-set qualitative comparative analysis (fsQCA) technique. The findings identify four distinct configuration types that affect consumer behavior toward service robots: provisioning experience and trustworthiness; ownership models and level of convenience; social acceptance and impression management; and preference accuracy and severity of outcome (Chuah et al., 2021). All of these factors can significantly affect a person's willingness to interact with AI service robots by directly shaping how they perceive the privacy hazards involved. This demonstrates the need to take these factors into account when developing more effective marketing plans for new projects employing artificial intelligence technologies, such as AI service robots.
The Song and Kim (2022) paper investigates how human-robot interactions influence customer acceptance of humanoid retail service robots. It predicts that, for customers to be open to adopting these robots, the robots must appear trustworthy, dependable, and courteous. Customers' perceptions of the privacy threats connected with AI service robots must therefore be taken into account: as people gain confidence that their information is safe, they will probably become more inclined to connect or speak with the robots (Song & Kim, 2022). Based on these findings, firms should take steps to secure customer data when using AI in the workplace so as not to slow consumer adoption. This can be accomplished by being open and honest with clients about security procedures, which helps foster confidence between customers and the businesses using this technology.
Alaieri, F. (2018). Ethics in Social Autonomous Robots: Decision-Making, Transparency, and Trust (Doctoral dissertation, Université d’Ottawa/University of Ottawa).
Belk, R. (2021). Ethical issues in service robotics and artificial intelligence. The Service Industries Journal, 41(13-14), 860-876.
Chuah, S. H. W., Aw, E. C. X., & Yee, D. (2021). Unveiling the complexity of consumers’ intention to use service robots: An fsQCA approach. Computers in Human Behavior, 123, 106870.
Delgosha, M. S., & Hajiheydari, N. (2021). How do human users engage with consumer robots? A dual model of psychological ownership and trust to explain post-adoption behavior. Computers in Human Behavior, 117, 106660.
Etemad-Sajadi, R., Soussan, A., & Schöpfer, T. (2022). How ethical issues raised by human–robot interaction can impact the intention to use the robot? International Journal of Social Robotics, 14(4), 1103-1115.
Papagni, G., & Koeszegi, S. (2020). Understandable and trustworthy explainable robots: a sensemaking perspective. Paladyn, Journal of Behavioral Robotics, 12(1), 13-30.
Seo, K. H., & Lee, J. H. (2021). The emergence of service robots at restaurants: Integrating trust, perceived risk, and satisfaction. Sustainability, 13(8), 4431.
Song, M., Xing, X., Duan, Y., Cohen, J., & Mou, J. (2022). Will artificial intelligence replace human customer service? The impact of communication quality and privacy risks on adoption intention. Journal of Retailing and Consumer Services, 66, 102900.
Urquhart, L., Reedman-Flint, D., & Leesakul, N. (2019). Responsible domestic robotics: Exploring ethical implications of robots in the home. Journal of Information, Communication and Ethics in Society.