Bibtex
@inproceedings{,
Booktitle= "Wirtschaftsinformatik (WI 2020), Band 1",
Year= "2020",
Title= "Trust in Smart Personal Assistants: A Systematic Literature Review and Development of a Research Agenda",
Author= "Naim Zierau and Christian Engel and Matthias Söllner and Jan Marco Leimeister",
Doi= "10.30844/wi_2020_a7-zierau",
Abstract= "Smart Personal Assistants (SPA) fundamentally influence the way individuals perform tasks, use services and interact with organizations. They thus bear an immense economic and societal potential. However, a lack of trust - rooted in perceptions of uncertainty and risk - when interacting with intelligent computer agents can inhibit their adoption. In this paper, we conduct a systematic literature review to investigate the state of knowledge on trust in SPAs. Based on a concept-centric analysis of 50 papers, we derive three distinct research perspectives that constitute this nascent field: user interface-driven, interaction-driven, and explanation-driven trust in SPAs. Building on the results of our analysis, we develop a research agenda to spark and guide future research surrounding trust in SPAs. Ultimately, this paper intends to contribute to the body of knowledge of trust in artificial intelligence-based systems, specifically SPAs. It does so by proposing a novel framework mapping out their relationship.",
Keywords= "Trust, Smart Personal Assistant, Conversational Agent, Literature Review, Research Agenda",
}
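The entry above carries no citation key, so BibTeX cannot resolve it from a `\cite` command. A minimal usage sketch, assuming a hypothetical key `zierau2020wi` (the key name is an assumption, not part of the original record):

```bibtex
% Same record with a (hypothetical) citation key added, so it can be
% referenced as \cite{zierau2020wi} from a LaTeX document.
@inproceedings{zierau2020wi,
  author    = {Naim Zierau and Christian Engel and Matthias S{\"o}llner and Jan Marco Leimeister},
  title     = {Trust in Smart Personal Assistants: A Systematic Literature Review and Development of a Research Agenda},
  booktitle = {Wirtschaftsinformatik (WI 2020), Band 1},
  year      = {2020},
  doi       = {10.30844/wi_2020_a7-zierau},
}
```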
Naim Zierau, Christian Engel, Matthias Söllner, and Jan Marco Leimeister: Trust in Smart Personal Assistants: A Systematic Literature Review and Development of a Research Agenda. Online: https://doi.org/10.30844/wi_2020_a7-zierau (Accessed: 24.11.2024)
Open Access