BibTeX
Cite as text
@inproceedings{wambsganss2020designing,
  author    = {Thiemo Wambsganss and Rainer Winkler and Pascale Schmid and Matthias Söllner},
  title     = {Designing a Conversational Agent as a Formative Course Evaluation Tool},
  volume    = {Band-1},
  doi       = {10.30844/wi_2020_k7-wambsganss},
  abstract  = {Today's graduating students face ever-changing environments when they enter their job life. Educational institutions must therefore continuously develop their course structure and content in order to prepare their students to be future employees. A very important means for developing the courses is the students' course evaluations. Due to financial and organizational restrictions, these course evaluations are usually carried out quantitatively and at the end of the semester. However, past research has shown that this kind of evaluation faces certain constraints such as low acceptance rates, only time-related insights and low-quality answers that do not really help the lecturer to improve the course. Drawing on social response theory, we propose that conversational agents as a formative course evaluation tool are able to address the mentioned problems by interactively engaging with students. Therefore, we propose a set of design principles and evaluate them with our prototype Eva.},
  keywords  = {Conversational agents, formative course evaluation, design science research, human-computer interaction},
}
Thiemo Wambsganss, Rainer Winkler, Pascale Schmid, Matthias Söllner: Designing a Conversational Agent as a Formative Course Evaluation Tool. Online: https://doi.org/10.30844/wi_2020_k7-wambsganss (retrieved 24 Nov 2024)
Open Access