1. Introduction
Artificial intelligence (AI) technologies are “the frontier of computational advancements that references human intelligence in addressing ever more complex decision-making problems” [1], including machine learning models, natural-language processing techniques, computer vision, robotics, algorithms, etc. In the field of higher education, AI is considered an effective tool and has been applied in many learning contexts. Industry reports show that the global AI education market was worth USD 1.82 billion in 2021 and is expected to grow at a compound annual growth rate of 36.0% from 2022 to 2030 [2]. Indeed, AI has enormous potential in future higher education: it is expected not only to serve as a tool that assists teachers in teaching, but also to act as an independent agent that replaces human teachers in automating education. For example, Georgia State University has launched higher education courses with AI as an independent teaching assistant [3]. By automatically supervising students’ learning performance, actively tracking their learning progress, and independently offering personalized feedback and suggestions, AI educators are fundamentally changing the learning patterns of college students, the teaching methods of teachers, and the relationship between teachers and students in higher education. Given the disruptive value of independent AI educators in student learning, exploring how their autonomous design features affect students’ intrinsic needs and intentions to use AI is of great significance for the sustainable development of higher education, the sustainable application of AI technology in higher education, and the sustainable self-driven learning of college students.
The proliferation of AI applications in education has also attracted scholars’ attention. Most recent studies focus on the perspective of the teacher and demonstrate that through the benefits provided by the autonomy of AI educators, teachers are able to free themselves from tedious teaching tasks such as homework correction, error analysis, personalized weakness analysis, and even basic knowledge teaching. In fact, autonomous AI educators can not only replace teachers in completing certain teaching tasks, but can also actively replace students in completing some learning processes, such as collecting learning materials, developing learning plans, etc. However, how students in higher education perceive this technology and whether they are willing to use such autonomous AI educators is still unknown. Considering that the student is the most central entity in the learning process, the current study attempts to investigate how different levels of AI autonomy will change students’ perceptions and intentions to use AI educators.
Unlike primary and secondary school students, college students have developed relatively mature cognitive abilities and can recognize their own unique needs in the learning process [4,5,6]. Higher education research documents that college students are motivated to take action to satisfy their own needs during the learning process [7,8,9,10], such as seeking information to satisfy the need to acquire information, enjoying themselves and relaxing to satisfy their entertainment needs, and building a relationship with other social actors to satisfy their social interaction needs. The success of AI educators would thus potentially be highly dependent on whether AI educators can meet students’ intrinsic needs. For example, when an AI educator actively meets students’ needs, it is likely to be welcomed; if an AI educator automatically replaces students in decision-making or actions, but cannot meet their needs, it is likely to be resisted. In this regard, an important but unexamined research question arises: how does the artificial autonomy of AI educators influence students’ intention to use them by satisfying students’ needs?
Extending previous higher education studies from the student perspective, the current study aims to explore the effect of the artificial autonomy of AI educators on students’ use intentions through user gratification. We first review the literature on AI education, artificial autonomy, and the U&G theory. Based on the literature, we categorize the artificial autonomy of AI educators into sensing autonomy, thought autonomy, and action autonomy to capture the autonomous ability of AI educators at each step of the problem-solving process. In addition, we focus on the most important and robust dimensions of U&G benefits, i.e., information seeking, social interaction, and entertainment. Next, we propose our hypotheses to theoretically elaborate on how the different categories of artificial autonomy (i.e., sensing, thought, and action autonomy) induce students’ usage intentions through the mediating effects of distinct U&G benefits (information seeking, social interaction, and entertainment). An online survey was conducted to test the proposed conceptual model, and the methodology and data analysis results are presented. Finally, Section 6 discusses our findings, theoretical contributions, and practical implications. The limitations and potential directions for future studies are also discussed.
2. Literature Review
2.1. Artificial Intelligence (AI) in Online Education
With the rapid growth of AI applications in education, AI education is increasingly attracting attention from global institutions, researchers, and educators. Previous research on AI education has diverged into three streams: one focusing on what AI can help teachers achieve, one focusing on how teachers perceive and accept AI in educational practice, and a third focusing on how teachers should use AI to maximize its effectiveness. Specifically, some scholars are interested in what AI can help teachers do. For example, Guilherme [11] argued that AI technology can facilitate the learning process and make it more efficient, while it may also neglect the connectedness between teacher and student. Cope et al. [12] suggested that AI will never replace humans in education, but it can make education more human. Other scholars investigated whether teachers are willing to accept AI. For example, Wang et al. [13] found that teachers’ AI readiness positively predicted AI-enhanced innovation, whereas teachers’ perceived threats from AI were negatively related to AI-enhanced innovation; AI-enhanced innovation was, in turn, positively associated with teachers’ job satisfaction. Kim et al. [14] showed that tutors may resist the use of an AI assistant due to technology overload; those who contributed more tended to rely more on AI assistants, but benefited little from AI. Finally, some researchers focused on how to use AI in a manner that maximizes educational achievement. For instance, Ouyang and Jiao [15] characterized AI education into AI-directed, AI-supported, and AI-empowered paradigms and analyzed the different AI roles and the ways AI connects with educational theories in each paradigm.
Although researchers have devoted much effort to exploring the impact of AI usage in education, they mainly focus on how incorporating AI into education influences educational achievement from the perspective of the teacher, while neglecting that students’ achievements are not only determined by the efforts of teachers, but also depend heavily on students’ own perceptions and motivation to study. Very few studies in the existing literature have focused on students’ perceptions and motivation to use AI education. For example, Kim et al. [16] empirically found that students perceive a higher level of social presence when an AI instructor has a human-like voice, which leads to greater perceived credibility of the AI instructor and stronger intentions to participate in the online courses it provides. Xia et al. [17] suggested that both teachers’ and students’ attributes can influence students’ achievement levels in AI learning from a self-determination theory perspective. Moreover, most of the existing literature focuses on the context of primary and secondary education, such as AI education in K-12 [18,19,20], AI education for elementary students [21,22,23], and AI education for middle school students [24,25,26], while little attention has been devoted to exploring undergraduates’ perceptions and acceptance of AI education [27,28,29].
In summary, although publications on the impact of AI education have proliferated, most are limited to the teacher’s perspective and focus on the context of primary and secondary education, while neglecting students’ perspectives as well as the context of higher education. In addition, most prior AI education studies focus only on the influence of incorporating AI (e.g., virtual teachers, intelligent systems) into teaching, while limited effort has been devoted to examining the effects of specific features of AI applications. According to prior studies, the design features of AI have significant and distinct impacts on the success of AI education [30,31,32,33]. Therefore, it is worth exploring how the design features of AI can motivate students to use AI education applications.
2.2. Artificial Autonomy
With advances in AI technologies, AI is expected to serve as an independent agent and accomplish tasks with little human intervention in many situations, such as self-driving cars, intelligent customer service, and AI art generators. In contrast to human autonomy, artificial autonomy [33], also known as machine autonomy [34] and AI autonomy [35], refers to “the extent to which a product is able to operate in an independent and goal-directed way without interference of the user” [36]. The term ‘autonomy’ is closely related to intelligence: when people believe that a machine or algorithm can independently sense the external environment, reason, plan, and take action to solve problems, they will classify it as AI [37,38]. Artificial autonomy is not dichotomous, but varies along a continuum from no autonomy to full autonomy [39]. The more situations machines can sense, and the wider the range of conditions under which they can reason, plan, and act independently of human intervention, the higher their level of artificial autonomy. Although artificial autonomy is denoted as an important feature of AI agents, its influence on consumers’ perceptions and acceptance of AI has rarely been examined. Most prior studies focused on AI features such as anthropomorphism [30,40,41], personalization [31,42,43], responsiveness [44,45], and authenticity [46], while little attention has been paid to exploring the potential of artificial autonomy to advance AI technology design.
The small stream of studies on the autonomous features of AI agents can be further divided into two categories, one focusing on consumers’ lay beliefs regarding artificial autonomy in the theorizing process, and the other explicitly examining the effect of artificial autonomy on consumer perceptions and acceptance intentions. Specifically, some scholars theorized the difference between an AI agent and a human agent as the presence or absence of autonomous intention. For example, Kim and Duhachek [32] drew on construal-level theory and theorized that consumers view AI agents as essentially different from human agents due to their lack of autonomous intentions, which results in consumers perceiving AI-offered and human-offered messages differently. Garvey et al. [47] theorized that consumers tend to perceive AI agents as lacking either benevolent or selfish autonomous intention, which leads to differences in their responses to offers provided by AI versus human agents. On the other hand, some studies investigated the impact of artificial autonomy and yielded mixed findings. For example, Hu et al. [33] found that artificial autonomy could improve perceptions of competence and warmth, which, in turn, led to a higher level of continued-usage intention. Meanwhile, Hong et al. [48] suggested that the autonomy of an AI music generator has no influence on consumer perceptions or their evaluation of the songs. Plaks et al. [49] also reported a non-significant effect of AI autonomy on users’ trust and choices. Ulfert et al. [50] revealed that a high level of AI autonomy reduced information load and techno-stress, but also reduced usage intention.
In summary, despite the great importance of artificial autonomy, the impacts of artificial autonomy on consumer perception and usage intention are still not clear. In the context of higher education, artificial autonomy may enable AI educators to independently teach and interact with college students, so as to achieve edutainment. However, little attention has been devoted to exploring the potential of artificial autonomy theory to advance AI education.
2.3. Uses and Gratification (U&G) Theory
The uses and gratification (U&G) theory is a classical theoretical framework in communication research for understanding users’ motivation to use mass media [51,52]. The U&G theory was initially proposed to identify the gratifications that lead individuals to proactively choose among various communication channels to meet their needs and desires [53]. In recent decades, scholars have continuously developed the U&G theory, attempting to understand why people use technology and the satisfaction they derive from it. The theory suggests that the acceptance and usage of technology are motivated user behaviors [54]. Users actively seek to meet their personal needs through the use of various technologies [55,56], such as the need to seek information, the need to acquire pleasure, and the need to embody their identity symbols [45,57,58,59,60]. The U&G theory explains users’ motivation to use technology, the factors that influence these motivations, and the impact of motivation on technology usage outcomes [61].
U&G research has been quite fruitful and has recently been applied to examine consumers’ usage of AI applications in various contexts. Some studies explored the factors that influence user gratification. For example, Lin and Wu [45] showed that perceived contingency was positively related to gratification, which was further associated with consumer engagement in the use of social media brand chatbots. Wald et al. [62] showed how dispositional, developmental, and social/contextual family typologies affect parents’ motivation to use a virtual assistant in their family home. Other studies investigated the impact of gratification on consumer perceptions and usage intentions. For example, Baek and Kim [63] examined how different motivation factors related to users’ trust in generative AI, as well as to perceived creepiness, which further related to continuance intention. McLean and Osei-Frimpong [60] explored how U&G benefits determined the usage of in-home voice assistants, as well as the moderating role of perceived privacy risks. However, there is still a lack of studies examining how artificial autonomy, one of the most important AI features, influences consumers’ motives to use AI teachers.
3. Research Model and Hypotheses Development
Research has shown that college students’ investments in learning and their media choices are more active than passive [53,55,56], and that they are more selective in accepting cutting-edge technologies [54,61]. As AI education applications become prevalent, the role of gratification in linking the artificial autonomy of AI educators to college students’ usage intention becomes more significant. Therefore, we propose a comprehensive framework (shown in Figure 1) to reveal how different types of artificial autonomy in AI educators affect usage intention through U&G benefits.
Drawing on the U&G theory [51,52], the literature on artificial autonomy [33,37,38], and the sense–think–act (STA) paradigm [64,65], this study provides a new perspective for understanding how the artificial autonomy of AI educators can improve usage intention. The proposed model takes into account that users proactively select AI educators when motivated by U&G benefits. In the following, we first categorize the artificial autonomy of AI educators into sensing autonomy, thought autonomy, and action autonomy. Next, we identify the most important and robust dimensions of U&G in the literature [45,66,67,68,69,70,71,72,73,74,75], i.e., information seeking, social interaction, and entertainment, as mediators. Finally, we develop hypotheses regarding how U&G benefits mediate the impact of the artificial autonomy factors (i.e., sensing autonomy, thought autonomy, and action autonomy) on the endogenous factor of usage intention.
3.1. Categorizing the Artificial Autonomy of AI Educators
Autonomy broadly means “self-rule” [76], “self-law” [77], or “self-government” [78,79], and has been applied to various entities, such as humans, machines, and political institutions. As mentioned, artificial autonomy describes the ability of machines, systems, or robots to perform tasks independently, in a human-like manner, with little or no human intervention. Thus, artificial autonomy is closely linked with specific problem-solving tasks [39]. Typically, a specific problem-solving task is addressed in three steps: sense, think, and act [64,65]. That is, task performers first sense and identify environmental factors, then think and reflect to generate a decision or plan, and finally act on the plan to solve the problem. Consistent with previous studies [33,39,80,81,82,83], we categorize artificial autonomy, based on the STA paradigm, into sensing autonomy, thought autonomy, and action autonomy to capture the autonomous ability of AI educators at each step of the problem-solving process.
Specifically, sensing autonomy refers to the ability of AI to autonomously sense the surrounding environment. For example, AI educators can see things in the environment through a camera, hear surrounding sounds through sound sensors, sense what is happening around them through other sensors, and recognize the user’s current biological status (such as heart rate, skin conductance, and blood pressure) through wearable devices. Thought autonomy refers to the ability of AI to make decisions or plans independently. For example, AI educators can determine the video and practice list based on the user’s progress, learning habits, and current status (e.g., emotions, flow, tiredness); determine the start and end times of learning based on the user’s past preferences and habits; and establish the next learning plan based on the user’s learning habits and current learning progress. These all reflect AI’s ability to think, reflect, and make decisions or plans independently. Action autonomy refers to the ability of AI to act, execute plans, or perform certain behavioral activities independently. For example, AI educators can autonomously play and stop the current video list; remind users to complete exercise questions, grade test papers, and generate score analysis reports; and execute teaching plans, teach specific chapters independently, and actively perform tutoring tasks.
3.2. Identifying the U&G Benefits of AI Educators
Prior studies have suggested that user gratifications for Internet and website use can serve as a fundamental framework for AI-acceptance research. Since the early 1990s, researchers have identified many gratification factors for Internet and website use, among which information seeking, social interaction, and entertainment are the most important and robust dimensions of U&G benefits [45,73,74,75]. However, their effects can vary across contexts. For example, Luo et al. [73] showed that information-seeking, social interaction, and entertainment gratifications were all crucial to improving usage behavior. Lin and Wu [45] and Lee and Ma [74] found that information-seeking and social interaction gratifications were positively associated with consumer engagement and users’ intention to share news, while entertainment gratification was not. Choi et al. [75] demonstrated that information seeking significantly influenced user satisfaction with a hotel’s Facebook page, while social interaction and entertainment did not.
Information-seeking gratification reflects the instrumental and utilitarian orientation of media or technology usage. In the current context, it refers to the extent to which AI educators can provide users with relevant and timely information [66,74,84]. For example, users can seek information and track the latest updates through their learning materials, the content generated by AI educators or shared by peer learners, and their past usage records.
Social interaction gratification reflects users’ motivation to use media or technology to contact or interact with others. Although social interaction motivations are considered to relate to human–human interaction in most cases, they have also been shown to be strongly related to human–AI interaction [44,60,85]. In our AI educator context, this refers to the extent to which AI educators can communicate with users and establish and maintain relationships with them [86,87]. For example, AI educators can proactively express concern for users, inquire whether users need certain services, answer users’ questions, chat with users, and establish friendships with users.

Entertainment gratification represents the hedonic orientation of media or technology usage. In the context of an AI educator, it refers to the extent to which AI educators can provide users with fun and entertaining experiences [66,88,89]. For example, users can derive pleasure and entertainment from the gamified learning process, the humorous teaching style of AI educators, and vivid and interesting case studies.
3.3. Hypotheses Development
3.3.1. The Sensing Autonomy and Usage Intention of AI Educators
We first propose that sensing autonomy may help improve users’ information-seeking, social interaction, and entertainment gratifications, which in turn lead to a higher level of usage intention. Sensing autonomy enables AI educators to actively sense the surrounding environment by observing, listening, visually recognizing, and monitoring students’ states at any time, including collecting data from users and the environment (such as text, sound, images, location, ambient temperature, object morphology, and biological status) and extracting information from these data. When students seek information, AI educators can autonomously sense their information needs and, through their real-time grasp of the environment and users’ statuses, enable users to issue hands-free commands. This type of autonomy allows users to experience fast, accurate, efficient, effective, and even hands-free information responses [33,80], resulting in enhanced information-seeking gratification. For example, when students find it difficult to understand a teaching video, AI educators may detect students’ confused expressions (such as frowning), actions (such as pausing the video or repeatedly rewatching it), or voice (such as students asking others for help). Based on these cues, AI educators can recognize that students may be struggling with this part of the learning content and may need detailed explanations and relevant exercises, and can proactively meet their information-seeking needs, such as by providing related pop-up information links, attaching relevant exercises after the video, or even switching directly to a more detailed teaching video.
Furthermore, information seeking is often considered an important reason for users to adopt AI technology applications [90,91,92]. As one of the main functions of an educator, the use of an AI educator is an effective way for users to obtain learning-related information [93,94]. For example, users may use AI educators to seek course information, exam information, personalized practice questions, past learners’ experiences, teacher recommendations, etc. Because AI educators can autonomously provide personalized information for users based on artificial intelligence technology, respond to users’ information-seeking needs at any time, allow users to evaluate information in a timely manner, and even help users make decisions and plans based on such information, users are likely to rely on AI educators to satisfy their information-seeking needs. Considering that AI usage is driven by utilitarian motivation [41,62,95], information-seeking gratification may enhance users’ intention to use AI educators. Therefore, we propose the following:
The sensing autonomy of an AI educator is positively related to usage intention due to the mediating effect of information-seeking gratification.
Sensing autonomy enables AI educators to actively monitor user status variations and detect changes in environmental conditions. Therefore, AI teachers can autonomously capture real-time changes, respond to user calls at any time, and be ready for user commands at any time. For example, users can awaken Xiaomi-brand smart products through the “classmate Xiao AI” voice command; when the smart product senses the user’s voice command, it will immediately respond “I’m here” (About XiaoAI, accessed on 23 December 2023).
Further, users are likely to interact with AI agents in a human–human interaction pattern [46,49,100,101]. For example, users may voice-command the AI educator to perform specific tasks, request that an AI educator answer questions, ask an AI educator about the course schedule, receive proactive care and reminders from the AI educator, etc. Because AI educators can interact with users in an anthropomorphic [30,40,102], personalized [30,46,101], authentic [46], and responsive [45,103,104] manner, users can clearly perceive the friendliness, care, and warmth of the AI educator [80,101,105]. Therefore, such human–computer interactions may meet or even exceed users’ expectations for social interaction. Based on the U&G theory, social interaction gratification is an important motivation driving users’ AI acceptance [44,57,59,106], and is likely to increase users’ intentions to use AI educators. Therefore, we propose the following:
The sensing autonomy of an AI educator is positively related to usage intention due to the mediating effect of social interaction gratification.
As mentioned, sensing autonomy allows users to experience a fast response from their AI teachers, and such interactions are mostly hands-free. This type of autonomy aligns with users’ existing impressions of future technology and enables them to enjoy a more techno-cool experience. That is, users can strongly feel the intelligence, competence, techno-coolness, and modernity of AI educators in their interactions. Research has suggested that such a modern, futuristic, and intelligent experience satisfies users’ higher-order psychological needs, allowing them to immerse themselves and have fun using the technology [107]. Research has also argued that perceived autonomy can promote users’ perception of intelligence, evoke positive emotions towards AI agents, and enhance users’ interest and enthusiasm [33,80], providing them with a sense of entertainment [108,109].
Moreover, the AI educator’s wisdom and ready responses provide users with an entertaining and interesting experience [110,111,112]. For example, AI educators may present the teaching process in a gamified form to make learning fun [113]. Even questions unrelated to learning can be answered in a timely manner. When users feel tired or bored during study breaks, AI educators can proactively offer mini-games to relax their bodies and minds. Since AI educators can be designed to be relaxing, humorous, and playful [41,110,114,115], and enable users to enjoy novel, modern, and futuristic intelligent experiences [107,116], users are likely to rely on AI educators for entertainment and fun. Considering that hedonic motivation is a prominent factor driving users’ AI acceptance [41,62,90], entertainment gratification may promote users’ intention to use AI teachers. Therefore, we propose the following:
The sensing autonomy of an AI educator is positively related to usage intention due to the mediating effect of entertainment gratification.
3.3.2. The Thought Autonomy and Usage Intention of AI Educators
We next elaborate on how thought autonomy may help improve users’ information-seeking, social interaction, and entertainment gratifications, which further influence usage intention. Thought autonomy enables AI educators to independently integrate and analyze information, evaluate, plan, and make decisions to provide users with personalized, clear, and rational decisions or plans [33,80]. For example, AI teachers can generate learning schedules based on users’ learning habits, recommend exercises and courses based on users’ learning progress, and provide advice on course selection (such as criminal law) and vocational certificate exams (such as legal professional qualification exams) based on users’ profiles and interests (such as a first-year student interested in law). Hence, when users have information-seeking needs, this type of autonomy enables AI teachers to actively search for and process information, which effectively reduces the chance of users experiencing information overload when confronted with massive amounts of information, saves their cognitive effort in information processing, and can even directly achieve the ultimate goal of an information search, i.e., decision making, leading to the efficient and effective satisfaction of users’ information-seeking needs [117]. As discussed, information-seeking gratification should be positively related to students’ usage intentions. Therefore, we propose the following:
The thought autonomy of an AI educator is positively related to usage intention due to the mediating effect of information-seeking gratification.
Thought autonomy enables AI educators to actively collect and analyze information, including user preferences and behavioral habits. Therefore, AI educators can provide personalized decision-making guidance based on the uniqueness of each user, making users feel understood by AI educators. Highly personalized decisions or plans can also make users clearly perceive that they are cared for by AI educators. Research has shown that users’ perceptions of understanding and personalization in human–computer interactions can significantly enhance their perception of social presence [102], help to establish friendships between users and virtual agents [101], and make users more willing to engage in human–computer interactions [116]. In addition, thought autonomy enables AI educators to process massive amounts of relevant information quickly and accurately, with an analytical ability that can even surpass the human brain [118,119]. Therefore, from the perspective of users, AI educators are good teachers and friends who can provide personalized, intelligent, comprehensive, and clear guidance and decision-making advice, thereby increasing users’ trust in and dependence on AI educators, as well as their willingness to interact with AI teachers to obtain more decision-making advice [90,120]. As discussed, social interaction gratification should be positively associated with students’ usage intention. Therefore, thought autonomy is likely to foster usage intention through the mediating effect of social interaction gratification. We propose the following:
H2b. The thought autonomy of an AI educator is positively related to usage intention due to the mediating effect of social interaction gratification.
Thought autonomy is decision-making-oriented and can help users solve problems regarding information processing and decision making. From the users’ perspective, AI educators are always ready to solve problems for them; for example, by determining a learning plan for new courses, helping students prepare for exams within a limited time, helping students overcome knowledge weaknesses, and showing how to establish mind maps. Thus, users may feel a sense of pleasure that their problems can be solved successfully and perceive that their needs are valued and fulfilled by AI educators. In addition, this type of autonomy can create an intelligent experience for users, increase their interest in using AI teachers, and enhance their happiness and enthusiasm during use [80]. As discussed, enhanced entertainment gratification will increase students’ intention to use AI educators. Therefore, thought autonomy is likely to lead to a higher level of usage intention by enhancing entertainment gratification. We propose the following:
H2c. The thought autonomy of an AI educator is positively related to usage intention due to the mediating effect of entertainment gratification.
3.3.3. Action Autonomy and the Usage Intention of AI Educators
We finally hypothesize that action autonomy may improve usage intention by enhancing information-seeking, social interaction, and entertainment gratifications. Action autonomy enables AI educators to independently complete tasks with minimal or no user intervention, including device operation, sound or video playback, information searches, and proactive reminders. This type of autonomy allows AI educators to serve as users’ agents in a hands-free manner; that is, AI educators can perform certain actions without users manually issuing commands [99]. When users need to search for certain information, action autonomy may allow AI educators to complete the search and proactively send the results to the user before any manual intervention, and even to perform relevant behaviors beyond the user’s expectations [121,122]. This may make information searching more efficient and enhance users’ information-seeking gratification. As discussed, information-seeking gratification will increase students’ intentions to use AI educators. Thus, we propose the following:
H3a. The action autonomy of an AI educator is positively related to usage intention due to the mediating effect of information-seeking gratification.
Action autonomy enables AI educators to act as agents for users; that is, to replace users in taking action. For example, AI educators can turn on devices without user intervention, automatically download learning materials, perform grading, automatically play teaching videos according to an established course schedule, and issue time alerts to users during exams. When AI performs tasks on behalf of users, users can feel the friendliness, care, kindness, and assistance of AI educators, perceive the intimacy between themselves and AI educators, and establish and maintain social relationships with them [33,80]. This may lead to a higher level of social interaction gratification. Furthermore, as previously discussed, social interaction gratification will increase students’ intention to use AI educators. Therefore, we propose the following:
H3b. The action autonomy of an AI educator is positively related to usage intention due to the mediating effect of social interaction gratification.
Action autonomy allows users to enjoy AI educators’ autonomous task execution and serving behavior without additional input, which is consistent with human lay beliefs about future technology. As mentioned, when AI meets users’ high-level psychological needs for techno-coolness, users can have fun using AI applications and generate enthusiasm for using AI [107]. Previous studies have also pointed out that AI-enabled virtual assistants with action autonomy can lead users to enjoy intelligent experiences and encourage positive emotional responses [123,124]. Therefore, we believe that action autonomy can satisfy users’ hedonic pursuit of futuristic and modern intelligent experiences, which further increases entertainment gratification. As discussed, enhanced entertainment gratification will increase students’ intention to use AI educators. Thus, we propose the following:
H3c. The action autonomy of an AI educator is positively related to usage intention due to the mediating effect of entertainment gratification.
Taken together, our conceptual model is presented in Figure 2.
4. Methods
4.1. Sampling and Data Collection
Due to COVID-19, students have widely adapted to online education, and an increasing number of brands and schools are developing “AI teacher” products that provide AI-based intelligent learning services for students. For example, an AI educator developed by iFLYTEK is responsible for teaching knowledge across all subjects in primary and secondary schools, interacting with students, and generating personalized mind maps. The AI educator Khanmigo, developed by Khan Academy, can teach students mathematics and computer programming. In general, online AI-based education applications now cover the full scope of teaching, self-learning, and exam preparation, generate personalized knowledge graphs for students, offer intelligent correction and rapid diagnosis, identify the weak points behind incorrect answers, and provide targeted practice exercises. In addition, many AI teachers support multi-scenario dialogues, supervise students’ learning, and accompany students in their daily lives through humorous real-time interactions.
The target participants in this study were college students. Following previous studies, considering that undergraduate students, master’s students, and doctoral students (1) all study in the environment of colleges, (2) have the need for self-motivated learning, and (3) need active and specialized teaching provided by AI educators, we incorporated undergraduate students, master’s students, and doctoral students as college students into our subject pool [125,126,127]. Participants were recruited through the Credamo platform.
To provide participants with examples of AI educators, this study first provided them with a video introduction (50 s) describing the services provided by AI educators. The video showcases a tablet-based AI education application from the perspective of students. In the video, students can interact with an AI educator by clicking on the screen, through a voice wake-up, or via text commands. The online teaching functions of AI virtual educators are introduced, including online course teaching, intelligently tracking students’ progress in real time, independently analyzing knowledge gaps and visualizing mind maps, automatically developing personalized learning journeys, and actively interacting with students (e.g., answering questions, reminding students to start learning). Participants were told that this AI education application for college students had not yet been launched on the market, so the brand hoped to investigate the attitudes of college students towards the AI-teacher product before launch. The brand name was hidden, and no brand-related information was provided in the video. After watching the video, participants were asked to evaluate their perceptions of and attitudes towards the AI-educator product. We first conducted a pilot study by collecting 50 responses from college students to the questionnaire and made some minor modifications in terms of language and clarity. All participants in the pilot study were excluded from the main survey. A total of 673 unique responses were collected in November 2023.
4.2. Measurement Scales
The measurement scales of this study are divided into two parts. The first part comprises the measurement scales of the conceptual model. Measurement items for all constructs were adopted from scales in the previous literature, as shown in Table 1. Sensing autonomy was measured by four items borrowed from Hu, Lu, Pan, Gong, and Yang [33] (α = 0.880). The four items on thought autonomy were adapted from Hu, Lu, Pan, Gong, and Yang [33] (α = 0.836). The four items on action autonomy were borrowed from Hu, Lu, Pan, Gong, and Yang [33] (α = 0.903). Information-seeking gratification was measured by three items adapted from Lin and Wu [45] (α = 0.810). Social interaction gratification was measured by a five-item scale adapted from Lin and Wu [45] (α = 0.822). Entertainment gratification was measured using three items adapted from Lin and Wu [45] (α = 0.763). The three items on usage intention were borrowed from McLean and Osei-Frimpong [60] (α = 0.845). All items were measured on a 7-point Likert scale, ranging from 1 (strongly disagree) to 7 (strongly agree). The second part collected demographic information on the participants, including gender, age, grade level, experience of using AI educators, experience of using AI applications other than AI educators, and experience in participating in online education.
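The Cronbach’s alpha values reported above summarize internal consistency: the ratio of shared variance to total scale variance across a construct’s items. A minimal sketch of the computation in Python, using simulated 7-point Likert responses (not the study’s actual data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Simulated 7-point Likert responses: three items driven by one latent trait
rng = np.random.default_rng(0)
latent = rng.normal(4, 1.2, size=(500, 1))
items = np.clip(np.round(latent + rng.normal(0, 0.8, size=(500, 3))), 1, 7)
alpha = cronbach_alpha(items)
```

Because the three simulated items share a common latent driver, alpha comfortably exceeds the conventional 0.7 threshold used in this study.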
4.3. The Profiles of Respondents
After excluding invalid responses, in which the demographic information did not match the sampling criteria of this study, 673 valid responses were collected. The specific statistical results of respondent profiles are shown in Table 2.
Firstly, 69.39% of the respondents were undergraduate students aged 17 to 22, with 51.18% being female. In total, 15.63% of the respondents were first-year students, 18.85% were second-year students, 35.33% were third-year students, and 30.19% were fourth-year students.
Secondly, 24.96% of the respondents were master’s students aged 21 to 25, with 48.21% being female. In total, 55.36% of the respondents were first-year master’s students, 32.14% were second-year master’s students, and 12.50% were third-year master’s students.
Finally, 5.65% of the respondents were doctoral students aged 22 to 29, with 50.00% being female. In total, 71.05% of the respondents were first-year doctoral students, 18.42% were second-year doctoral students, and 10.53% were third-year or above doctoral students.
Overall, 50.37% of the participants were female. No participants had used AI educators; responses from those who had used them were labeled invalid and excluded. A total of 84.84% of participants had used AI applications other than AI educators, and only 15.16% had not. Regarding experience in participating in online education, 89.01% of participants stated that they frequently participated, 6.98% stated that they had participated occasionally, and 4.01% had almost never participated.
5. Results
5.1. An Assessment of the Measurement Model
This study used partial least squares structural equation modeling (PLS-SEM) [129] to analyze the data and used SmartPLS 3 software to estimate the hypothesized model with a bootstrap resampling procedure (5000 randomly generated subsamples). We first examined the multicollinearity of the model by using the inner collinearity assessment function in SmartPLS. The results showed that all VIF values in the model were less than three [130]. In addition, we adopted Harman’s single-factor test to assess common method bias, a widely used method for this purpose [131]. By performing exploratory factor analysis on all items, the results revealed the existence of a multi-factor structure: (a) the factor analysis yielded more than one factor (four factors in total), (b) the eigenvalues of all four extracted factors were greater than one, and (c) the first factor accounted for 36.32% of the variance in the sample. Thus, Harman’s test indicates that there is no serious common method bias in our data.
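Harman’s single-factor test checks whether one general factor dominates the unrotated factor solution across all items. A common approximation inspects the eigenvalues of the items’ correlation matrix; a minimal illustration with simulated multi-construct data (not the study’s data) follows:

```python
import numpy as np

def harman_first_factor_share(X: np.ndarray) -> float:
    """Share of total variance captured by the first unrotated component."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(Z, rowvar=False)))[::-1]
    return eigvals[0] / eigvals.sum()

# Simulated responses: 16 items loading on 4 independent constructs
rng = np.random.default_rng(1)
factors = rng.normal(size=(800, 4))
loadings = np.kron(np.eye(4), np.full((1, 4), 0.8))   # block-diagonal loadings
X = factors @ loadings + rng.normal(0, 0.6, size=(800, 16))
share = harman_first_factor_share(X)
```

With a genuine multi-factor structure, the first component’s variance share stays well below the 50% benchmark, mirroring the 36.32% reported for the study’s data.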
Table 3 presents the descriptive statistical analysis of the conceptual model. We then assessed reliability, convergent validity, and discriminant validity. Firstly, Table 3 lists the Cronbach’s Alpha values of each construct. All Cronbach’s Alpha values were greater than 0.7, indicating good reliability of the constructs [132]. Secondly, Table 1 presents the factor loadings of all items, with all items having factor loadings greater than 0.7 [133]. Table 3 lists the average variance extracted (AVE) value for each construct, and all values are greater than 0.5 [134]. In addition, Table 3 shows that all CR values are greater than 0.7 [135]. Therefore, the results indicated good convergent validity. Finally, as shown in Table 4, the square root of the AVE for each factor is greater than all the correlation coefficients between constructs [134]. In addition, Table 5 lists the heterotrait–monotrait (HTMT) values. Each HTMT value is less than 0.85, which satisfies the criterion [136]. Taken together, the results suggested good discriminant validity.
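The HTMT criterion compares the average correlation between items of different constructs with the geometric mean of the average correlations within each construct. A minimal sketch of the ratio for a pair of constructs, again on simulated data:

```python
import numpy as np

def htmt(X: np.ndarray, Y: np.ndarray) -> float:
    """Heterotrait-monotrait ratio for two constructs' item matrices."""
    k1 = X.shape[1]
    k2 = Y.shape[1]
    R = np.corrcoef(np.hstack([X, Y]), rowvar=False)
    hetero = R[:k1, k1:].mean()                            # between-construct
    mono_x = R[:k1, :k1][np.triu_indices(k1, 1)].mean()    # within construct 1
    mono_y = R[k1:, k1:][np.triu_indices(k2, 1)].mean()    # within construct 2
    return hetero / np.sqrt(mono_x * mono_y)

# Two simulated constructs whose latent variables correlate at 0.5
rng = np.random.default_rng(2)
f = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=800)
X = f[:, [0]] + rng.normal(0, 0.6, size=(800, 3))
Y = f[:, [1]] + rng.normal(0, 0.6, size=(800, 4))
val = htmt(X, Y)
```

The ratio approximates the latent correlation (here about 0.5), so distinct constructs stay below the 0.85 threshold applied in Table 5.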
5.2. Structural Model and Hypothesis Testing
Figure 3 shows the results of the path analysis, including the standardized path coefficients β, the p-values, and the R-squared values. The standardized root mean square residual (SRMR) of the SmartPLS model is 0.049 < 0.080, indicating a good fit of the model [137,138]. Following the bias-corrected and accelerated bootstrap confidence interval method recommended by Preacher and Hayes [139], we examined the mediating effects with 5000 bootstrap subsamples. Table 6 presents the results of the mediation analysis.
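The bootstrap logic behind the mediation tests can be sketched briefly: resample respondents with replacement, re-estimate the a-path (X → M) and b-path (M → Y, controlling for X), and form a confidence interval from the distribution of a·b. The sketch below uses a simple percentile interval rather than the bias-corrected and accelerated interval the study applied, and simulated data rather than the survey responses:

```python
import numpy as np

def boot_indirect_ci(x, m, y, n_boot=2000, seed=0):
    """Percentile bootstrap 95% CI for the indirect effect a*b in X -> M -> Y."""
    rng = np.random.default_rng(seed)
    n = len(x)
    effects = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                        # resample respondents
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]                       # path a: M ~ X
        design = np.column_stack([np.ones(n), xb, mb])     # Y ~ 1 + X + M
        b = np.linalg.lstsq(design, yb, rcond=None)[0][2]  # path b: M's coefficient
        effects[i] = a * b
    return np.percentile(effects, [2.5, 97.5])

# Simulated mediation: true indirect effect = 0.5 * 0.5 = 0.25
rng = np.random.default_rng(3)
x = rng.normal(size=400)
m = 0.5 * x + rng.normal(size=400)
y = 0.5 * m + rng.normal(size=400)
lo, hi = boot_indirect_ci(x, m, y)
```

A mediating effect is judged significant when, as in Table 6, the resulting interval excludes zero.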
5.2.1. The Results of Sensing Autonomy on the Usage Intentions of AI Educators
Firstly, the results of the structural equation indicate that the standard path coefficient for the influence of sensing autonomy on information seeking is β = −0.075, p = 0.143, indicating that sensing autonomy does not have a significant impact on information-seeking gratification. In addition, the results of the mediation effect analysis do not support the mediating role of information-seeking gratification in the influence of sensing autonomy on usage intention (95% CI = [−0.068, 0.011], p = 0.152). Thus, although information-seeking gratification significantly increases intentions to use AI educators (β = 0.386, p = 0.000), the mediating effect is not significant. H1a was not supported.
Secondly, the results of the structural equation indicate that the impact of sensing autonomy on social interaction is significant (β = 0.315, p = 0.000), indicating that the increase in sensing autonomy of AI educators has a significant positive impact on social interaction gratification. Additionally, the mediation analysis results also demonstrate the mediating role of social interaction gratification in the influence of sensing autonomy (95% CI = [0.046, 0.134], p = 0.000). This indicates that the improvement in the sensing autonomy of AI educators significantly enhances the perception of social interaction gratification among college students, thereby enhancing their intention to use them. Thus, H1b was supported.
Thirdly, the results of the structural equation indicate that the impact of sensing autonomy on entertainment is positive (β = 0.383, p = 0.000), indicating that the sensing autonomy of AI educators significantly positively increases entertainment gratification. Moreover, the mediation analysis results also provide evidence for the mediating role of entertainment gratification in the influence of sensing autonomy (95% CI = [0.045, 0.132], p = 0.000). This indicates that with the increase in sensing autonomy, the entertainment needs of college students have been satisfied to a higher level, resulting in a stronger intention to use AI educators. Thus, H1c was supported.
We also tested the direct effect and total indirect effect of sensing autonomy on usage intention. The standard path coefficient for the direct effect of sensing autonomy is β = 0.049, p = 0.302, and the 95% CI of the total indirect effect is [0.055, 0.237], indicating that the effect of sensing autonomy on usage intention is fully mediated.
5.2.2. The Results of Thought Autonomy on the Usage Intentions of AI Educators
Firstly, the standard path coefficient for the impact of thought autonomy on information seeking is β = 0.215, p = 0.000, indicating that the increase in the thought autonomy of AI educators has a significant positive impact on information-seeking gratification. Furthermore, the mediation analysis results show that information-seeking gratification mediates the impact of thought autonomy (95% CI = [0.036, 0.136], p = 0.001). This indicates that with the increase in thought autonomy, the perception of information-seeking gratification among college students significantly improves, leading to stronger intentions to use AI educators. Thus, H2a was supported.
Secondly, the impact of thought autonomy on social interaction is significant (β = 0.238, p = 0.000). Mediation analysis results demonstrate the mediating role of social interaction gratification in the influence of thought autonomy (95% CI = [0.028, 0.107], p = 0.002) on usage intention. This indicates that the increase in the thought autonomy of AI educators significantly enhances the perception of social interaction gratification among college students, thereby inducing their intention to use them. Thus, H2b was supported.
Thirdly, the results of the structural equation show that the effect of thought autonomy on entertainment is not significant (β = −0.063, p = 0.312). Furthermore, the results of the mediation effect analysis do not support the mediating role of entertainment gratification in the influence of thought autonomy on usage intention (95% CI = [−0.045, 0.014], p = 0.341). That is, although entertainment gratification significantly increases the intention to use AI educators, the mediating effects are not significant (β = 0.223, p = 0.000). Thus, H2c was not supported.
The direct effect and total indirect effect of AI educator thought autonomy on usage intention were also tested. The standard path coefficient for the direct effect of thought autonomy is β = 0.047, p = 0.231, and the 95% CI of the total indirect effect is [0.046, 0.221], indicating that the effect of thought autonomy on usage intention is fully mediated.
5.2.3. The Results of Action Autonomy on the Usage Intentions of AI Educators
Firstly, the standard path coefficient for the impact of action autonomy on information seeking is β = 0.374, p = 0.000, indicating that the increase in AI educators’ action autonomy has a significant positive impact on information-seeking gratification. In addition, the mediation analysis results show that information-seeking gratification mediates the impact of action autonomy (95% CI = [0.092, 0.200], p = 0.000) on usage intention. These results indicate significant indirect effects of AI educator action autonomy on usage intention through information-seeking gratification; thus, H3a was supported.
Secondly, the results of the structural equation show that the influence of action autonomy on social interaction is not significant (β = 0.102, p = 0.076). Moreover, the results of the mediation effect analysis do not support the mediating role of social interaction gratification in the influence of action autonomy on usage intention (95% CI = [−0.002, 0.060], p = 0.088). Thus, although social interaction gratification is significantly associated with usage intention (β = 0.267, p = 0.000), H3b was not supported.
Thirdly, the results of the structural equation demonstrate that action autonomy significantly influences entertainment (β = 0.306, p = 0.000). Furthermore, the mediation analysis results also provide evidence for the mediating role of entertainment gratification in the influence of action autonomy (95% CI = [0.033, 0.111], p = 0.001) on usage intention. That is, an increase in action autonomy leads to a higher level of entertainment gratification and thus induces stronger intentions to use AI educators. Thus, H3c was supported.
Similarly, the direct effect and total indirect effect of action autonomy on the usage intention of AI educators were tested. The standard path coefficient for the direct effect is β = 0.030, p = 0.533, and the 95% CI of the total indirect effect is [0.150, 0.331], indicating that the effect of action autonomy on usage intention is fully mediated.
6. Discussion
Drawing on the uses and gratification theory, our study aims to analyze how the artificial autonomy of an AI educator leads to students’ usage intentions by improving user gratification. Specifically, regarding the U&G benefits, we focus on the three most salient and robust dimensions: information-seeking gratification, social interaction gratification, and entertainment gratification. Previous research has highlighted the various categories of AI autonomy and their importance; however, to our knowledge, few studies classify autonomy into multiple types as distinct influencing factors for usage intention. To this end, this study proposes a novel theoretical model that takes the sensing autonomy, thought autonomy, and action autonomy of AI educators as factors of intention to use and examines the intermediary role of user gratifications (i.e., information-seeking gratification, social interaction gratification, and entertainment gratification). By doing this, our findings provide new insights into how artificial autonomy motivates users to use an AI educator through multiple gratifications.
The findings reveal that the three types of AI educator autonomy are associated with different user gratifications, which extends the findings of recent studies on the influence of artificial autonomy in sustainable higher education. Firstly, our study demonstrates that the sensing autonomy of AI educators is positively related to usage intention through the mediating effects of social interaction gratification and entertainment gratification. Contrary to our proposed hypothesis, our findings showed no significant relationship between sensing autonomy and information-seeking gratification, resulting in an insignificant indirect effect of information-seeking gratification in the impact of sensing autonomy on usage intention. A possible explanation is that, unlike primary and secondary school students, the learning process of students in higher education is often self-driven, self-determined, and self-regulated [140,141,142,143]. Students in higher education more actively engage in their own learning, know how to learn, and understand how to find solutions for their learning problems. Thus, their information-seeking needs exist more in their minds and less in external expressions that the AI can detect. For example, when an information-seeking need arises, students may not communicate with their classmates (which would allow their voices to be sensed by the AI educator) but are more likely to start searching directly. Although AI educators are able to sense changes in students’ statuses, such as their pausing a video, they still cannot fulfill students’ need for information seeking because the students have already taken action.
Second, our findings show that the thought autonomy of AI educators is positively related to usage intention due to the mediating effects of information-seeking gratification and social interaction gratification. However, contrary to the proposed hypothesis, the indirect linkage between thought autonomy and usage intention is not significant because of the insignificant influence of thought autonomy on entertainment gratification. A possible reason for the non-significant relationship between thought autonomy and entertainment is that thought autonomy, which enables an AI educator to generate decisions or plans, is more problem-solving-oriented and associated more with utilitarian goals than hedonic goals [144,145]. Additionally, prior studies found that although AI autonomy may increase users’ passion [80], it also may lead to negative experiences such as techno-stress [50]. Thus, it is possible that students who were served by AI-educator-offered decisions or plans focus more on utilitarian needs, such as evaluating whether the decision is optimal and whether the plan is feasible, so they cannot relax and entertain themselves.
Third, our findings indicate that the action autonomy of AI educators is positively related to usage intention by increasing information-seeking and entertainment gratification. However, the proposed hypothesis about the indirect linkage between action autonomy and usage intention through the mediating path of social interaction gratification was not supported. A possible explanation for the unverified relationship between action autonomy and social interaction gratification is that, because AI with a high degree of autonomy can complete tasks autonomously without human intervention, this leaves users with no opportunity for human–machine interaction [146,147]. For example, AI educators with a lower level of action autonomy require clear commands from users before taking action. During the interaction process of giving and receiving commands between users and AI educators, users may perceive a sense of interaction with social agents and further establish social connections with AI educators. However, AI educators with a higher level of action autonomy can autonomously take actions without commands from users. As a result, users need not interact with the AI educators, making it difficult to establish perceptions of social connection with AI educators.
6.1. Theoretical Contributions
Our study makes several contributions to existing knowledge. First, most prior studies in the AI-education literature focused on the perspective of teachers, such as what an AI educator can do for human teachers and whether human teachers are willing to accept AI educators, while very little research paid attention to the perspective of students. In addition, among the few studies regarding students’ usage of AI educators, most efforts were devoted to primary and secondary education, while fewer were allocated to higher education. Moreover, previous studies seldom delved into the effects of the specific design features of AI educators. Our findings contribute to the sustainable education literature relating to AI-driven education by disclosing the power of technology design to satisfy students’ intrinsic needs in sustainable education. Our study developed a theoretical model that integrated artificial autonomy, one of the most important features of an AI educator, and user gratifications as factors influencing usage by students in higher education. Specifically, our findings reveal that the sensing autonomy of AI educators is positively related to usage intention through increased social interaction and entertainment gratification; thought autonomy enables an AI educator to fulfill students’ information-seeking and social interaction needs and thus increase students’ usage intention; while action autonomy helps the AI educator fulfill the demands for information seeking and entertainment to induce usage intention. This finding reconciles the concerns in prior research from the teacher perspective [11,148], and is aligned with recent studies from the student perspective [149,150], which emphasize the importance of research from both perspectives.
Thus, our study highlights the importance of the artificial autonomy of AI educators in enhancing usage intention through user gratification, and further offers a comprehensive angle for future research to understand the differentiated power of three types of artificial autonomy on distinct gratifications from the student perspective.
Second, we contribute to the artificial autonomy literature by disclosing its power in determining user gratification and usage. Although there is a growing body of research on the effects of AI design features, such as anthropomorphism, responsiveness, and personalization, very few efforts have been devoted to examining the effect of artificial autonomy. More importantly, prior studies have reached mixed conclusions on the influence of artificial autonomy. Our study provides an integrated perspective on how different types of artificial autonomy affect distinct user gratifications and further influence usage intention in the context of higher education, which, to some extent, can reconcile the mixed findings. Specifically, our findings show that students in higher education are motivated to use AI educators by different benefits, and the different benefits are influenced by distinct types of artificial autonomy. For example, we find that sensing autonomy enables AI educators to fulfill social interaction and entertainment needs, but is not able to increase information-seeking gratification. The thought autonomy of AI educators increases students’ information-seeking and social interaction gratifications, but is not related to entertainment gratification. Action autonomy induces students to use AI educators through their information-seeking and entertainment motivations, but cannot motivate student usage by satisfying social interaction needs. Therefore, our findings emphasize the nonidentical effects of artificial autonomy in AI educators on students’ usage intention through the dynamic mediating paths of multiple user gratifications.
Third, our study reveals the significant power of leveraging the U&G theory to investigate the impact of AI design features on AI usage intentions. The U&G theory has a long history of development. A large number of scholars have drawn on this theory to investigate the antecedent factors and consequent outcomes of multiple gratifications. However, very few previous studies have drawn on the U&G theory to examine the role of artificial autonomy in improving AI usage. Our findings disclose the power of the U&G theory in two ways. With regard to the antecedent factors of gratification, our findings disclose different factors of distinct gratifications. Specifically, students’ information-seeking gratification is positively associated with the thought autonomy and action autonomy of AI educators. Social interaction gratification is increased by sensing autonomy and thought autonomy. Entertainment gratification is enhanced by sensing autonomy and action autonomy. Regarding the consequent outcomes of gratifications, we find that, in the context of higher education, information-seeking, social interaction, and entertainment gratifications are all positively related to AI educator usage intentions. Our findings highlight the distinct role of different types of user gratification in the effects of AI autonomous features on usage intention, which extends the extant understanding of the effects of artificial autonomy and how students’ use of AI educators is driven by different motivations in the context of higher education.
6.2. Practical Implications
Our study also has several practical implications. Although AI education is not a new concept, AI technology is still far from widespread in higher education. How AI educators should be designed to promote students’ usage intention remains a challenge for suppliers and designers. While it is important for higher education schools and teachers to implement innovative technologies from a sustainable perspective, for those technologies to be deeply involved in the learning process, it is first necessary to understand how students perceive the technologies, particularly their different motivations to use them. In other words, a better understanding of student gratification and the intention to use an AI educator is a critical first step in implementing AI technology to effectively improve the sustainable development of higher education. Our findings highlight important areas that the suppliers and designers of AI educators need to consider, such as the autonomous design of AI educators and the gratifications that motivate students in higher education to use AI educators.
First, our study offers insights from the student perspective into how students perceive and react to AI educators with different types of artificial autonomy. Our findings provide specific guidelines for suppliers of AI educators, such as the important roles of information-seeking, social interaction, and entertainment gratifications in inducing students to use AI educators. Additionally, while all three gratifications were identified as significant benefits that an AI educator should deliver, suppliers should understand that students in higher education may attach more weight to particular gratifications when using an autonomous AI educator. Thus, when suppliers lack the capacity to guarantee all three gratifications, we recommend giving the highest priority to students’ most salient needs and matching the autonomous design of the AI educator to those needs.
Second, our findings identify important autonomous features of AI educators for designers to consider when designing differentiated AI educators. Our results show that sensing autonomy plays a significant role in social interaction and entertainment gratifications, thought autonomy is essential for information-seeking and social interaction gratifications, and action autonomy is critical to increasing information-seeking and entertainment gratifications. When designing AI educators for different usage purposes, such as social interaction with students or helping students seek information, designers should attach the corresponding types of autonomous features to AI educators to enhance students’ usage intentions.
Third, our study provides specific guidelines for aligning the suppliers and designers of AI educators. In some cases, the requirements proposed by the supplier cannot be met by the designer, and our findings suggest possible workarounds. For example, when designers are unable to provide the thought autonomy required by suppliers, our findings suggest that they can provide action autonomy to meet users’ information-seeking needs and sensing autonomy to satisfy users’ social interaction needs, achieving effects similar to those of thought autonomy. Similarly, when sensing autonomy cannot be offered, designers can provide thought autonomy to increase social interaction gratification and action autonomy to enhance entertainment gratification. When suppliers require higher levels of the information-seeking and entertainment gratifications that action autonomy would otherwise induce, our findings recommend that designers attach a sensing autonomy feature to satisfy the need for entertainment and increase thought autonomy to improve information-seeking gratification.
6.3. Limitations and Future Directions
This study has several limitations, which suggest directions for future research. First, our data were collected from 673 college students in China. Future research is encouraged to take cultural factors into consideration and extend our research model to other countries. Second, this study adopted a survey method to verify the influence of AI-educator autonomy on students’ usage intentions and the mediating role of U&G benefits. Future research may build on this study, for example by using experimental methods that manipulate high and low levels of artificial autonomy to measure their impact on college students’ intentions to use AI educators and the mediating role of gratifications, or by verifying the generalizability of our findings in field settings. Third, this study used video-viewing to familiarize participants with the core functions and usage experience of AI educators, ensuring that they had a basic understanding of such systems. As AI education applications are implemented in higher education, future research can employ scenarios based on specific AI education applications and collect data from college students who have actually used an AI educator to verify our findings. Finally, this study did not distinguish between types of college students, such as university level and professional discipline. Future research may compare different types of college students to explore possible boundary conditions of our proposed model.
Author Contributions: Conceptualization, W.N. and W.Z.; Methodology, C.Z. and X.C.; Software, W.N.; Validation, W.Z., C.Z. and X.C.; Writing—Original Draft Preparation, W.N. and W.Z.; Writing—Review and Editing, C.Z. and X.C. All authors have read and agreed to the published version of the manuscript.
Institutional Review Board Statement: This study followed the ethical guidelines of the Declaration of Helsinki. Approval for this particular type of study was not required in accordance with the policy of the corresponding author’s institution.
Informed Consent Statement: Informed consent was obtained online from all individual participants included in the study prior to their enrollment.
Data Availability Statement: All information and data from this study are contained in the manuscript.
Conflicts of Interest: The authors declare no conflicts of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 3. Structural model (Note: ***: p < 0.001; ns: not significant at 95% level).
Measurement items and factor loadings.
| Construct | Item | Factor Loading | Reference |
|---|---|---|---|
| Sensing Autonomy | This AI educator can autonomously be aware of the state of its surroundings. | 0.864 | Hu, Lu, Pan, Gong and Yang [33] |
| | This AI educator can autonomously recognize information from the environment. | 0.832 | |
| | This AI educator can independently recognize objects in the environment. | 0.873 | |
| | This AI educator can independently monitor the status of objects in the environment. | 0.860 | |
| Thought Autonomy | This AI educator can autonomously provide me choices of what to do. | 0.792 | Hu, Lu, Pan, Gong and Yang [33] |
| | This AI educator can independently provide recommendations for action plans for assigned matters. | 0.833 | |
| | This AI educator can independently recommend an implementation plan for the assigned matters. | 0.828 | |
| | This AI educator can autonomously suggest what can be done. | 0.821 | |
| Action Autonomy | This AI educator can independently complete the operation of the skill. | 0.862 | Hu, Lu, Pan, Gong and Yang [33] |
| | This AI educator can independently implement the operation of the skill. | 0.885 | |
| | This AI educator can autonomously perform the operation of the skill. | 0.881 | |
| | This AI educator can carry out the operation of skills autonomously. | 0.891 | |
| Information-seeking gratification | I can use this AI educator to learn more about the lectures. | 0.863 | Lin and Wu [45] |
| | I can use this AI educator to obtain information more quickly. | 0.845 | |
| | I can use this AI educator to be the first to know information. | 0.846 | |
| Social interaction gratification | I can use this AI educator to communicate and interact with it. | 0.751 | Lin and Wu [45] |
| | I can use this AI educator to show concern and support to it. | 0.766 | |
| | I can use this AI educator to get opinions and advice from it. | 0.740 | |
| | I can use this AI educator to give my opinion about it. | 0.789 | |
| | I can use this AI educator to express myself. | 0.775 | |
| Entertainment gratification | I can use this AI educator to be entertained. | 0.865 | Lin and Wu [45] |
| | I can use this AI educator to relax. | 0.852 | |
| | I can use this AI educator to pass the time when bored. | 0.749 | |
| Usage intention | I plan to use the AI educator in the future. | 0.894 | McLean and Osei-Frimpong [60] |
| | I intend to use the AI educator in the future. | 0.894 | |
| | I predict I would use the AI educator in the future. | 0.832 | |
Descriptive statistics of respondents.
| Category | Profile | Percentage |
|---|---|---|
| Undergraduate | Age | 17–22 |
| | Female | 51.18% |
| | Male | 48.82% |
| | First-year | 15.63% |
| | Second-year | 18.85% |
| | Third-year | 35.33% |
| | Fourth-year | 30.19% |
| Master’s | Age | 21–25 |
| | Female | 48.21% |
| | Male | 51.79% |
| | First-year | 55.36% |
| | Second-year | 32.14% |
| | Third-year | 12.50% |
| PhD students | Age | 22–29 |
| | Female | 50.00% |
| | Male | 50.00% |
| | First-year | 71.05% |
| | Second-year | 18.42% |
| | Third-year or above | 10.53% |
| Gender | Female | 50.37% |
| | Male | 49.63% |
| Experience of using AI applications other than AI educators | Yes | 84.84% |
| | No | 15.16% |
| Experience in participating in online education | Frequently participate | 89.01% |
| | Participated, but not much | 6.98% |
| | Almost never participated | 4.01% |
Descriptive statistics, reliability, and validity assessment.
| Construct | Mean | SD | Cronbach’s Alpha | CR | AVE |
|---|---|---|---|---|---|
| Sensing Autonomy | 5.17 | 1.07 | 0.880 | 0.917 | 0.735 |
| Thought Autonomy | 5.55 | 0.95 | 0.836 | 0.891 | 0.670 |
| Action Autonomy | 5.43 | 1.11 | 0.903 | 0.932 | 0.774 |
| Information-seeking Gratification | 6.04 | 0.81 | 0.810 | 0.888 | 0.725 |
| Social interaction Gratification | 5.73 | 0.81 | 0.822 | 0.876 | 0.585 |
| Entertainment Gratification | 5.28 | 1.10 | 0.763 | 0.863 | 0.678 |
| Usage Intention | 5.79 | 0.95 | 0.845 | 0.906 | 0.764 |
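The CR and AVE values above can be reproduced directly from the standardized factor loadings in the measurement table. A minimal sketch (plain Python, using the standard congeneric-model formulas and assuming uncorrelated measurement errors), shown here for the four sensing-autonomy items:

```python
# Composite reliability (CR) and average variance extracted (AVE)
# from standardized factor loadings, assuming uncorrelated errors.

def composite_reliability(loadings):
    s = sum(loadings)
    err = sum(1 - l ** 2 for l in loadings)  # error variance per standardized item
    return s ** 2 / (s ** 2 + err)

def average_variance_extracted(loadings):
    return sum(l ** 2 for l in loadings) / len(loadings)

# Standardized loadings of the four sensing-autonomy items (measurement table).
sensing = [0.864, 0.832, 0.873, 0.860]

print(round(composite_reliability(sensing), 3))       # 0.917, matching the table
print(round(average_variance_extracted(sensing), 3))  # 0.735, matching the table
```

Applying the same two functions to the loadings of the other constructs reproduces the remaining CR and AVE columns.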
Correlation coefficients between constructs (diagonal elements are the square roots of the AVEs).
| | SA | TA | AA | IG | SG | EG | UI |
|---|---|---|---|---|---|---|---|
| Sensing Autonomy (SA) | 0.857 | | | | | | |
| Thought Autonomy (TA) | 0.628 | 0.819 | | | | | |
| Action Autonomy (AA) | 0.549 | 0.547 | 0.880 | | | | |
| Information-seeking Gratification (IG) | 0.266 | 0.372 | 0.451 | 0.851 | | | |
| Social interaction Gratification (SG) | 0.520 | 0.491 | 0.652 | 0.652 | 0.765 | | |
| Entertainment Gratification (EG) | 0.512 | 0.345 | 0.376 | 0.376 | 0.534 | 0.823 | |
| Usage Intention (UI) | 0.450 | 0.446 | 0.689 | 0.689 | 0.699 | 0.567 | 0.874 |
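The diagonal entries of this table are the square roots of the AVEs from the reliability table, and the Fornell–Larcker criterion for discriminant validity requires each diagonal entry to exceed the correlations in its row and column. A quick check for the sensing-autonomy entry:

```python
import math

# Fornell-Larcker check for Sensing Autonomy: the square root of its AVE
# (the diagonal entry) must exceed its correlations with every other construct.
ave_sa = 0.735                                          # AVE of SA (reliability table)
corrs_sa = [0.628, 0.549, 0.266, 0.520, 0.512, 0.450]   # SA's correlations

diag = math.sqrt(ave_sa)
print(round(diag, 3))                     # 0.857, the diagonal entry in the table
print(all(c < diag for c in corrs_sa))    # True: criterion satisfied for SA
```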
Heterotrait–monotrait ratio.
| | SA | TA | AA | IG | SG | EG | UI |
|---|---|---|---|---|---|---|---|
| Sensing Autonomy (SA) | | | | | | | |
| Thought Autonomy (TA) | 0.733 | | | | | | |
| Action Autonomy (AA) | 0.615 | 0.629 | | | | | |
| Information-seeking Gratification (IG) | 0.313 | 0.451 | 0.526 | | | | |
| Social interaction Gratification (SG) | 0.610 | 0.592 | 0.469 | 0.801 | | | |
| Entertainment Gratification (EG) | 0.617 | 0.415 | 0.573 | 0.464 | 0.661 | | |
| Usage Intention (UI) | 0.520 | 0.530 | 0.541 | 0.831 | 0.838 | 0.695 | |
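The HTMT ratio for a pair of constructs is the mean correlation between their items (heterotrait–heteromethod) divided by the geometric mean of the average within-construct item correlations (monotrait–heteromethod). A minimal sketch on a hypothetical item correlation matrix (illustrative values only, not the study’s data):

```python
import math

def htmt(R, a, b):
    """HTMT ratio for item index sets a and b over a correlation matrix R."""
    hetero = [R[i][j] for i in a for j in b]           # between-construct pairs
    mono_a = [R[i][j] for i in a for j in a if i < j]  # within construct A
    mono_b = [R[i][j] for i in b for j in b if i < j]  # within construct B
    mean = lambda xs: sum(xs) / len(xs)
    return mean(hetero) / math.sqrt(mean(mono_a) * mean(mono_b))

# Hypothetical 4-item correlation matrix: items 0-1 form construct A, 2-3 form B.
R = [[1.0, 0.6, 0.3, 0.3],
     [0.6, 1.0, 0.3, 0.3],
     [0.3, 0.3, 1.0, 0.8],
     [0.3, 0.3, 0.8, 1.0]]

print(round(htmt(R, [0, 1], [2, 3]), 3))  # 0.433 = 0.3 / sqrt(0.6 * 0.8)
```

Values below the conventional 0.85 (or 0.90) threshold, as in the table above for most pairs, indicate adequate discriminant validity.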
Results of mediation analysis.
| Relationship | Total Indirect Effect (CI) | Indirect Effect (CI) |
|---|---|---|
| Sensing autonomy → Information seeking → Usage | [0.055, 0.237] | [−0.068, 0.011] |
| Sensing autonomy → Social interaction → Usage | | [0.046, 0.134] |
| Sensing autonomy → Entertainment → Usage | | [0.045, 0.132] |
| Thought autonomy → Information seeking → Usage | [0.046, 0.221] | [0.036, 0.136] |
| Thought autonomy → Social interaction → Usage | | [0.028, 0.107] |
| Thought autonomy → Entertainment → Usage | | [−0.045, 0.014] |
| Action autonomy → Information seeking → Usage | [0.150, 0.331] | [0.092, 0.200] |
| Action autonomy → Social interaction → Usage | | [−0.002, 0.060] |
| Action autonomy → Entertainment → Usage | | [0.033, 0.111] |
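Confidence intervals for indirect effects, such as those reported above, are typically obtained by percentile bootstrapping. A generic sketch of the procedure on synthetic data (not the study’s data; for simplicity the b-path is estimated by a simple regression of the outcome on the mediator rather than adjusted for the predictor):

```python
import random
import statistics

def slope(x, y):
    """OLS slope of y on x (simple regression)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

random.seed(42)
n = 300
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.5 * xi + random.gauss(0, 1) for xi in x]   # a-path with true slope 0.5
y = [0.4 * mi + random.gauss(0, 1) for mi in m]   # b-path with true slope 0.4

# Percentile bootstrap of the indirect effect a*b.
boots = []
for _ in range(1000):
    idx = [random.randrange(n) for _ in range(n)]
    xs = [x[i] for i in idx]
    ms = [m[i] for i in idx]
    ys = [y[i] for i in idx]
    boots.append(slope(xs, ms) * slope(ms, ys))

boots.sort()
lo, hi = boots[24], boots[974]  # 95% percentile confidence interval
print(lo > 0)  # True here: the CI excludes zero, so mediation is supported
```

When the resulting interval excludes zero, as for most paths in the table, the indirect effect is considered significant.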
References
1. Berente, N.; Gu, B.; Recker, J.; Santhanam, R. Managing artificial intelligence. MIS Q.; 2021; 45, pp. 1433-1450.
2. Grand View Research. AI in Education Market Size, Share & Trends Analysis Report by Component (Solutions, Services), by Deployment, by Technology, by Application, by End-Use, by Region, and Segment Forecasts, 2022–2030. Available online: https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-education-market-report (accessed on 24 December 2023).
3. Sparks, S.D. An AI Teaching Assistant Boosted College Students’ Success. Could It Work for High School?. Available online: https://www.edweek.org/technology/an-ai-teaching-assistant-boosted-college-students-success-could-it-work-for-high-school/2023/10 (accessed on 24 December 2023).
4. Halpern, D.F. Teaching critical thinking for transfer across domains: Disposition, skills, structure training, and metacognitive monitoring. Am. Psychol.; 1998; 53, 449. [DOI: https://dx.doi.org/10.1037/0003-066X.53.4.449] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/9572008]
5. Sherman, T.M.; Armistead, L.P.; Fowler, F.; Barksdale, M.A.; Reif, G. The quest for excellence in university teaching. J. High. Educ.; 1987; 58, pp. 66-84. [DOI: https://dx.doi.org/10.2307/1981391]
6. Anderson, J.R.; Corbett, A.T.; Koedinger, K.R.; Pelletier, R. Cognitive tutors: Lessons learned. J. Learn. Sci.; 1995; 4, pp. 167-207. [DOI: https://dx.doi.org/10.1207/s15327809jls0402_2]
7. Deci, E.L.; Vallerand, R.J.; Pelletier, L.G.; Ryan, R.M. Motivation and education: The self-determination perspective. Educ. Psychol.; 1991; 26, pp. 325-346. [DOI: https://dx.doi.org/10.1080/00461520.1991.9653137]
8. Dörnyei, Z. Motivation in action: Towards a process-oriented conceptualisation of student motivation. Br. J. Educ. Psychol.; 2000; 70, pp. 519-538. [DOI: https://dx.doi.org/10.1348/000709900158281] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/11191185]
9. Liu, W.C.; Wang, C.K.J.; Kee, Y.H.; Koh, C.; Lim, B.S.C.; Chua, L. College students’ motivation and learning strategies profiles and academic achievement: A self-determination theory approach. Educ. Psychol.; 2014; 34, pp. 338-353. [DOI: https://dx.doi.org/10.1080/01443410.2013.785067]
10. Abeysekera, L.; Dawson, P. Motivation and cognitive load in the flipped classroom: Definition, rationale and a call for research. High. Educ. Res. Dev.; 2015; 34, pp. 1-14. [DOI: https://dx.doi.org/10.1080/07294360.2014.934336]
11. Guilherme, A. AI and education: The importance of teacher and student relations. AI Soc.; 2019; 34, pp. 47-54. [DOI: https://dx.doi.org/10.1007/s00146-017-0693-8]
12. Cope, B.; Kalantzis, M.; Searsmith, D. Artificial intelligence for education: Knowledge and its assessment in AI-enabled learning ecologies. Educ. Philos. Theory; 2021; 53, pp. 1229-1245. [DOI: https://dx.doi.org/10.1080/00131857.2020.1728732]
13. Wang, X.; Li, L.; Tan, S.C.; Yang, L.; Lei, J. Preparing for AI-enhanced education: Conceptualizing and empirically examining teachers’ AI readiness. Comput. Hum. Behav.; 2023; 146, 107798. [DOI: https://dx.doi.org/10.1016/j.chb.2023.107798]
14. Kim, J.H.; Kim, M.; Kwak, D.W.; Lee, S. Home-tutoring services assisted with technology: Investigating the role of artificial intelligence using a randomized field experiment. J. Mark. Res.; 2022; 59, pp. 79-96. [DOI: https://dx.doi.org/10.1177/00222437211050351]
15. Ouyang, F.; Jiao, P. Artificial intelligence in education: The three paradigms. Comput. Educ. Artif. Intell.; 2021; 2, 100020. [DOI: https://dx.doi.org/10.1016/j.caeai.2021.100020]
16. Kim, J.; Merrill, K., Jr.; Xu, K.; Kelly, S. Perceived credibility of an AI instructor in online education: The role of social presence and voice features. Comput. Hum. Behav.; 2022; 136, 107383. [DOI: https://dx.doi.org/10.1016/j.chb.2022.107383]
17. Xia, Q.; Chiu, T.K.; Lee, M.; Sanusi, I.T.; Dai, Y.; Chai, C.S. A self-determination theory (SDT) design approach for inclusive and diverse artificial intelligence (AI) education. Comput. Educ.; 2022; 189, 104582. [DOI: https://dx.doi.org/10.1016/j.compedu.2022.104582]
18. Ali, S.; Payne, B.H.; Williams, R.; Park, H.W.; Breazeal, C. Constructionism, ethics, and creativity: Developing primary and middle school artificial intelligence education. Proceedings of the International Workshop on Education in Artificial Intelligence K-12 (Eduai’19); Macao, China, 10–16 August 2019; pp. 1-4.
19. Su, J.; Zhong, Y.; Ng, D.T.K. A meta-review of literature on educational approaches for teaching AI at the K-12 levels in the Asia-Pacific region. Comput. Educ. Artif. Intell.; 2022; 3, 100065. [DOI: https://dx.doi.org/10.1016/j.caeai.2022.100065]
20. Touretzky, D.; Gardner-McCune, C.; Martin, F.; Seehorn, D. Envisioning AI for K-12: What should every child know about AI?. Proceedings of the AAAI Conference on Artificial Intelligence; Honolulu, HI, USA, 27 January–1 February 2019; pp. 9795-9799.
21. Ottenbreit-Leftwich, A.; Glazewski, K.; Jeon, M.; Jantaraweragul, K.; Hmelo-Silver, C.E.; Scribner, A.; Lee, S.; Mott, B.; Lester, J. Lessons learned for AI education with elementary students and teachers. Int. J. Artif. Intell. Educ.; 2023; 33, pp. 267-289. [DOI: https://dx.doi.org/10.1007/s40593-022-00304-3]
22. Kim, K.; Park, Y. A development and application of the teaching and learning model of artificial intelligence education for elementary students. J. Korean Assoc. Inf. Educ.; 2017; 21, pp. 139-149.
23. Han, H.-J.; Kim, K.-J.; Kwon, H.-S. The analysis of elementary school teachers’ perception of using artificial intelligence in education. J. Digit. Converg.; 2020; 18, pp. 47-56.
24. Park, W.; Kwon, H. Implementing artificial intelligence education for middle school technology education in Republic of Korea. Int. J. Technol. Des. Educ.; 2023; pp. 1-27. [DOI: https://dx.doi.org/10.1007/s10798-023-09812-2] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/36844448]
25. Zhang, H.; Lee, I.; Ali, S.; DiPaola, D.; Cheng, Y.; Breazeal, C. Integrating ethics and career futures with technical learning to promote AI literacy for middle school students: An exploratory study. Int. J. Artif. Intell. Educ.; 2023; 33, pp. 290-324. [DOI: https://dx.doi.org/10.1007/s40593-022-00293-3]
26. Williams, R.; Kaputsos, S.P.; Breazeal, C. Teacher perspectives on how to train your robot: A middle school AI and ethics curriculum. Proceedings of the AAAI Conference on Artificial Intelligence; Vancouver, BC, Canada, 2–9 February 2021; pp. 15678-15686.
27. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education—Where are the educators?. Int. J. Educ. Technol. High. Educ.; 2019; 16, 39. [DOI: https://dx.doi.org/10.1186/s41239-019-0171-0]
28. Dodds, Z.; Greenwald, L.; Howard, A.; Tejada, S.; Weinberg, J. Components, curriculum, and community: Robots and robotics in undergraduate ai education. AI Mag.; 2006; 27, 11.
29. Corbelli, G.; Cicirelli, P.G.; D’Errico, F.; Paciello, M. Preventing prejudice emerging from misleading news among adolescents: The role of implicit activation and regulatory self-efficacy in dealing with online misinformation. Soc. Sci.; 2023; 12, 470. [DOI: https://dx.doi.org/10.3390/socsci12090470]
30. Liu, K.; Tao, D. The roles of trust, personalization, loss of privacy, and anthropomorphism in public acceptance of smart healthcare services. Comput. Hum. Behav.; 2022; 127, 107026. [DOI: https://dx.doi.org/10.1016/j.chb.2021.107026]
31. Shin, D.; Chotiyaputta, V.; Zaid, B. The effects of cultural dimensions on algorithmic news: How do cultural value orientations affect how people perceive algorithms?. Comput. Hum. Behav.; 2022; 126, 107007. [DOI: https://dx.doi.org/10.1016/j.chb.2021.107007]
32. Kim, T.W.; Duhachek, A. Artificial intelligence and persuasion: A construal-level account. Psychol. Sci.; 2020; 31, pp. 363-380. [DOI: https://dx.doi.org/10.1177/0956797620904985] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32223692]
33. Hu, Q.; Lu, Y.; Pan, Z.; Gong, Y.; Yang, Z. Can AI artifacts influence human cognition? The effects of artificial autonomy in intelligent personal assistants. Int. J. Inf. Manag.; 2021; 56, 102250. [DOI: https://dx.doi.org/10.1016/j.ijinfomgt.2020.102250]
34. Etzioni, A.; Etzioni, O. AI assisted ethics. Ethics Inf. Technol.; 2016; 18, pp. 149-156. [DOI: https://dx.doi.org/10.1007/s10676-016-9400-6]
35. Mezrich, J.L. Is artificial intelligence (AI) a pipe dream? Why legal issues present significant hurdles to AI autonomy. Am. J. Roentgenol.; 2022; 219, pp. 152-156. [DOI: https://dx.doi.org/10.2214/AJR.21.27224]
36. Rijsdijk, S.A.; Hultink, E.J.; Diamantopoulos, A. Product intelligence: Its conceptualization, measurement and impact on consumer satisfaction. J. Acad. Mark. Sci.; 2007; 35, pp. 340-356. [DOI: https://dx.doi.org/10.1007/s11747-007-0040-6]
37. Wang, P. On defining artificial intelligence. J. Artif. Gen. Intell.; 2019; 10, pp. 1-37. [DOI: https://dx.doi.org/10.2478/jagi-2019-0002]
38. Formosa, P. Robot autonomy vs. human autonomy: Social robots, artificial intelligence (AI), and the nature of autonomy. Minds Mach.; 2021; 31, pp. 595-616. [DOI: https://dx.doi.org/10.1007/s11023-021-09579-2]
39. Beer, J.M.; Fisk, A.D.; Rogers, W.A. Toward a framework for levels of robot autonomy in human-robot interaction. J. Hum.-Robot. Interact.; 2014; 3, 74. [DOI: https://dx.doi.org/10.5898/JHRI.3.2.Beer]
40. Pelau, C.; Dabija, D.-C.; Ene, I. What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological anthropomorphic characteristics in the acceptance of artificial intelligence in the service industry. Comput. Hum. Behav.; 2021; 122, 106855. [DOI: https://dx.doi.org/10.1016/j.chb.2021.106855]
41. Mishra, A.; Shukla, A.; Sharma, S.K. Psychological determinants of users’ adoption and word-of-mouth recommendations of smart voice assistants. Int. J. Inf. Manag.; 2021; 67, 102413. [DOI: https://dx.doi.org/10.1016/j.ijinfomgt.2021.102413]
42. Ameen, N.; Tarhini, A.; Reppel, A.; Anand, A. Customer experiences in the age of artificial intelligence. Comput. Hum. Behav.; 2021; 114, 106548. [DOI: https://dx.doi.org/10.1016/j.chb.2020.106548] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32905175]
43. Ameen, N.; Hosany, S.; Paul, J. The personalisation-privacy paradox: Consumer interaction with smart technologies and shopping mall loyalty. Comput. Hum. Behav.; 2022; 126, 106976. [DOI: https://dx.doi.org/10.1016/j.chb.2021.106976]
44. Jiang, H.; Cheng, Y.; Yang, J.; Gao, S. AI-powered chatbot communication with customers: Dialogic interactions, satisfaction, engagement, and customer behavior. Comput. Hum. Behav.; 2022; 134, 107329. [DOI: https://dx.doi.org/10.1016/j.chb.2022.107329]
45. Lin, J.-S.; Wu, L. Examining the psychological process of developing consumer-brand relationships through strategic use of social media brand chatbots. Comput. Hum. Behav.; 2023; 140, 107488. [DOI: https://dx.doi.org/10.1016/j.chb.2022.107488]
46. Alimamy, S.; Kuhail, M.A. I will be with you Alexa! The impact of intelligent virtual assistant’s authenticity and personalization on user reusage intentions. Comput. Hum. Behav.; 2023; 143, 107711. [DOI: https://dx.doi.org/10.1016/j.chb.2023.107711]
47. Garvey, A.M.; Kim, T.; Duhachek, A. Bad news? Send an AI. Good news? Send a human. J. Mark.; 2022; 87, pp. 10-25. [DOI: https://dx.doi.org/10.1177/00222429211066972]
48. Hong, J.-W.; Fischer, K.; Ha, Y.; Zeng, Y. Human, I wrote a song for you: An experiment testing the influence of machines’ attributes on the AI-composed music evaluation. Comput. Hum. Behav.; 2022; 131, 107239. [DOI: https://dx.doi.org/10.1016/j.chb.2022.107239]
49. Plaks, J.E.; Bustos Rodriguez, L.; Ayad, R. Identifying psychological features of robots that encourage and discourage trust. Comput. Hum. Behav.; 2022; 134, 107301. [DOI: https://dx.doi.org/10.1016/j.chb.2022.107301]
50. Ulfert, A.-S.; Antoni, C.H.; Ellwart, T. The role of agent autonomy in using decision support systems at work. Comput. Hum. Behav.; 2022; 126, 106987. [DOI: https://dx.doi.org/10.1016/j.chb.2021.106987]
51. Baxter, L.; Egbert, N.; Ho, E. Everyday health communication experiences of college students. J. Am. Coll. Health; 2008; 56, pp. 427-436. [DOI: https://dx.doi.org/10.3200/JACH.56.44.427-436]
52. Severin, W.J.; Tankard, J.W. Communication Theories: Origins, Methods, and Uses in the Mass Media; Longman: New York, NY, USA, 1997.
53. Cantril, H. Professor Quiz: A Gratifications Study. Radio Research; Duell, Sloan & Pearce: New York, NY, USA, 1940; pp. 64-93.
54. Blumler, J.G.; Katz, E. The Uses of Mass Communications: Current Perspectives on Gratifications Research; Sage Publications: Beverly Hills, CA, USA, 1974; Volume III.
55. Rubin, A.M. Media uses and effects: A uses-and-gratifications perspective. Media Effects: Advances in Theory and Research; Bryant, J.; Zillmann, D. Lawrence Erlbaum Associates, Inc.: Mahwah, NJ, USA, 1994; pp. 417-436.
56. Rubin, A.M. Uses-and-gratifications perspective on media effects. Media Effects; Routledge: Oxfordshire, UK, 2009; pp. 181-200.
57. Cheng, Y.; Jiang, H. How do AI-driven chatbots impact user experience? Examining gratifications, perceived privacy risk, satisfaction, loyalty, and continued use. J. Broadcast. Electron. Media; 2020; 64, pp. 592-614. [DOI: https://dx.doi.org/10.1080/08838151.2020.1834296]
58. Xie, Y.; Zhao, S.; Zhou, P.; Liang, C. Understanding continued use intention of AI assistants. J. Comput. Inf. Syst.; 2023; 63, pp. 1424-1437. [DOI: https://dx.doi.org/10.1080/08874417.2023.2167134]
59. Xie, C.; Wang, Y.; Cheng, Y. Does artificial intelligence satisfy you? A meta-analysis of user gratification and user satisfaction with AI-powered chatbots. Int. J. Hum.-Comput. Interact.; 2022; 40, pp. 613-623. [DOI: https://dx.doi.org/10.1080/10447318.2022.2121458]
60. McLean, G.; Osei-Frimpong, K. Hey Alexa… examine the variables influencing the use of artificial intelligent in-home voice assistants. Comput. Hum. Behav.; 2019; 99, pp. 28-37. [DOI: https://dx.doi.org/10.1016/j.chb.2019.05.009]
61. Valentine, A. Uses and gratifications of Facebook members 35 years and older. The Social Media Industries; Routledge: Oxfordshire, UK, 2013; pp. 166-190.
62. Wald, R.; Piotrowski, J.T.; Araujo, T.; van Oosten, J.M.F. Virtual assistants in the family home. Understanding parents’ motivations to use virtual assistants with their Child(dren). Comput. Hum. Behav.; 2023; 139, 107526. [DOI: https://dx.doi.org/10.1016/j.chb.2022.107526]
63. Baek, T.H.; Kim, M. Is ChatGPT scary good? How user motivations affect creepiness and trust in generative artificial intelligence. Telemat. Inform.; 2023; 83, 102030. [DOI: https://dx.doi.org/10.1016/j.tele.2023.102030]
64. Siegel, M. The sense-think-act paradigm revisited. Proceedings of the 1st International Workshop on Robotic Sensing; Örebro, Sweden, 5–6 June 2003.
65. Hayles, N.K. Computing the human. Theory Cult. Soc.; 2005; 22, pp. 131-151. [DOI: https://dx.doi.org/10.1177/0263276405048438]
66. Luo, X. Uses and gratifications theory and e-consumer behaviors: A structural equation modeling study. J. Interact. Advert.; 2002; 2, pp. 34-41. [DOI: https://dx.doi.org/10.1080/15252019.2002.10722060]
67. Kaur, P.; Dhir, A.; Chen, S.; Malibari, A.; Almotairi, M. Why do people purchase virtual goods? A uses and gratification (U&G) theory perspective. Telemat. Inform.; 2020; 53, 101376.
68. Azam, A. The effect of website interface features on e-commerce: An empirical investigation using the use and gratification theory. Int. J. Bus. Inf. Syst.; 2015; 19, pp. 205-223. [DOI: https://dx.doi.org/10.1504/IJBIS.2015.069431]
69. Boyle, E.A.; Connolly, T.M.; Hainey, T.; Boyle, J.M. Engagement in digital entertainment games: A systematic review. Comput. Hum. Behav.; 2012; 28, pp. 771-780. [DOI: https://dx.doi.org/10.1016/j.chb.2011.11.020]
70. Huang, L.Y.; Hsieh, Y.J. Predicting online game loyalty based on need gratification and experiential motives. Internet Res.; 2011; 21, pp. 581-598. [DOI: https://dx.doi.org/10.1108/10662241111176380]
71. Hsu, L.-C.; Wang, K.-Y.; Chih, W.-H.; Lin, K.-Y. Investigating the ripple effect in virtual communities: An example of Facebook Fan Pages. Comput. Hum. Behav.; 2015; 51, pp. 483-494. [DOI: https://dx.doi.org/10.1016/j.chb.2015.04.069]
72. Riskos, K.; Hatzithomas, L.; Dekoulou, P.; Tsourvakas, G. The influence of entertainment, utility and pass time on consumer brand engagement for news media brands: A mediation model. J. Media Bus. Stud.; 2022; 19, pp. 1-28. [DOI: https://dx.doi.org/10.1080/16522354.2021.1887439]
73. Luo, M.M.; Chea, S.; Chen, J.-S. Web-based information service adoption: A comparison of the motivational model and the uses and gratifications theory. Decis. Support Syst.; 2011; 51, pp. 21-30. [DOI: https://dx.doi.org/10.1016/j.dss.2010.11.015]
74. Lee, C.S.; Ma, L. News sharing in social media: The effect of gratifications and prior experience. Comput. Hum. Behav.; 2012; 28, pp. 331-339. [DOI: https://dx.doi.org/10.1016/j.chb.2011.10.002]
75. Choi, E.-k.; Fowler, D.; Goh, B.; Yuan, J. Social media marketing: Applying the uses and gratifications theory in the hotel industry. J. Hosp. Mark. Manag.; 2016; 25, pp. 771-796. [DOI: https://dx.doi.org/10.1080/19368623.2016.1100102]
76. Darwall, S. The value of autonomy and autonomy of the will. Ethics; 2006; 116, pp. 263-284. [DOI: https://dx.doi.org/10.1086/498461]
77. Moreno, A.; Etxeberria, A.; Umerez, J. The autonomy of biological individuals and artificial models. BioSystems; 2008; 91, pp. 309-319. [DOI: https://dx.doi.org/10.1016/j.biosystems.2007.05.009] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/17719170]
78. Schneewind, J.B. The Invention of Autonomy: A History of Modern Moral Philosophy; Cambridge University Press: Cambridge, UK, 1998.
79. Formosa, P. Kantian Ethics, Dignity and Perfection; Cambridge University Press: Cambridge, UK, 2017.
80. Pal, D.; Babakerkhell, M.D.; Papasratorn, B.; Funilkul, S. Intelligent attributes of voice assistants and user’s love for AI: A SEM-based study. IEEE Access; 2023; 11, pp. 60889-60903. [DOI: https://dx.doi.org/10.1109/ACCESS.2023.3286570]
81. Schepers, J.; Belanche, D.; Casaló, L.V.; Flavián, C. How smart should a service robot be?. J. Serv. Res.; 2022; 25, pp. 565-582. [DOI: https://dx.doi.org/10.1177/10946705221107704]
82. Falcone, R.; Sapienza, A. The role of decisional autonomy in User-IoT systems interaction. Proceedings of the 23rd Workshop from Objects to Agents; Genova, Italy, 1–3 September 2022.
83. Guo, W.; Luo, Q. Investigating the impact of intelligent personal assistants on the purchase intentions of Generation Z consumers: The moderating role of brand credibility. J. Retail. Consum. Serv.; 2023; 73, 103353. [DOI: https://dx.doi.org/10.1016/j.jretconser.2023.103353]
84. Ko, H.; Cho, C.-H.; Roberts, M.S. Internet uses and gratifications: A structural equation model of interactive advertising. J. Advert.; 2005; 34, pp. 57-70. [DOI: https://dx.doi.org/10.1080/00913367.2005.10639191]
85. Ki, C.W.C.; Cho, E.; Lee, J.E. Can an intelligent personal assistant (IPA) be your friend? Para-friendship development mechanism between IPAs and their users. Comput. Hum. Behav.; 2020; 111, 106412. [DOI: https://dx.doi.org/10.1016/j.chb.2020.106412]
86. Oeldorf-Hirsch, A.; Sundar, S.S. Social and technological motivations for online photo sharing. J. Broadcast. Electron. Media; 2016; 60, pp. 624-642. [DOI: https://dx.doi.org/10.1080/08838151.2016.1234478]
87. Park, N.; Kee, K.F.; Valenzuela, S. Being immersed in social networking environment: Facebook groups, uses and gratifications, and social outcomes. Cyberpsychol. Behav.; 2009; 12, pp. 729-733. [DOI: https://dx.doi.org/10.1089/cpb.2009.0003]
88. Eighmey, J. Profiling user responses to commercial web sites. J. Advert. Res.; 1997; 37, pp. 59-67.
89. Eighmey, J.; McCord, L. Adding value in the information age: Uses and gratifications of sites on the World Wide Web. J. Bus. Res.; 1998; 41, pp. 187-194. [DOI: https://dx.doi.org/10.1016/S0148-2963(97)00061-1]
90. Canziani, B.; MacSween, S. Consumer acceptance of voice-activated smart home devices for product information seeking and online ordering. Comput. Hum. Behav.; 2021; 119, 106714. [DOI: https://dx.doi.org/10.1016/j.chb.2021.106714]
91. Ahadzadeh, A.S.; Pahlevan Sharif, S.; Sim Ong, F. Online health information seeking among women: The moderating role of health consciousness. Online Inf. Rev.; 2018; 42, pp. 58-72. [DOI: https://dx.doi.org/10.1108/OIR-02-2016-0066]
92. Gordon, I.D.; Chaves, D.; Dearborn, D.; Hendrikx, S.; Hutchinson, R.; Popovich, C.; White, M. Information seeking behaviors, attitudes, and choices of academic physicists. Sci. Technol. Libr.; 2022; 41, pp. 288-318. [DOI: https://dx.doi.org/10.1080/0194262X.2021.1991546]
93. Hernandez, A.A.; Padilla, J.R.C.; Montefalcon, M.D.L. Information seeking behavior in ChatGPT: The case of programming students from a developing economy. Proceedings of the 2023 IEEE 13th International Conference on System Engineering and Technology (ICSET); Shah Alam, Malaysia, 2 October 2023; pp. 72-77.
94. Poitras, E.; Mayne, Z.; Huang, L.; Udy, L.; Lajoie, S. Scaffolding student teachers’ information-seeking behaviours with a network-based tutoring system. J. Comput. Assist. Learn.; 2019; 35, pp. 731-746. [DOI: https://dx.doi.org/10.1111/jcal.12380]
95. Dinh, C.-M.; Park, S. How to increase consumer intention to use Chatbots? An empirical analysis of hedonic and utilitarian motivations on social presence and the moderating effects of fear across generations. Electron. Commer. Res.; 2023; pp. 1-41. [DOI: https://dx.doi.org/10.1007/s10660-022-09662-5]
96. Aitken, G.; Smith, K.; Fawns, T.; Jones, D. Participatory alignment: A positive relationship between educators and students during online masters dissertation supervision. Teach. High. Educ.; 2022; 27, pp. 772-786. [DOI: https://dx.doi.org/10.1080/13562517.2020.1744129]
97. So, H.-J.; Brush, T.A. Student perceptions of collaborative learning, social presence and satisfaction in a blended learning environment: Relationships and critical factors. Comput. Educ.; 2008; 51, pp. 318-336. [DOI: https://dx.doi.org/10.1016/j.compedu.2007.05.009]
98. Tackie, H.N. (Dis)connected: Establishing social presence and intimacy in teacher–student relationships during emergency remote learning. AERA Open; 2022; 8, 23328584211069525. [DOI: https://dx.doi.org/10.1177/23328584211069525]
99. Nguyen, N.; LeBlanc, G. Image and reputation of higher education institutions in students’ retention decisions. Int. J. Educ. Manag.; 2001; 15, pp. 303-311. [DOI: https://dx.doi.org/10.1108/EUM0000000005909]
100. Dang, J.; Liu, L. Implicit theories of the human mind predict competitive and cooperative responses to AI robots. Comput. Hum. Behav.; 2022; 134, 107300. [DOI: https://dx.doi.org/10.1016/j.chb.2022.107300]
101. Pal, D.; Vanijja, V.; Thapliyal, H.; Zhang, X. What affects the usage of artificial conversational agents? An agent personality and love theory perspective. Comput. Hum. Behav.; 2023; 145, 107788. [DOI: https://dx.doi.org/10.1016/j.chb.2023.107788]
102. Munnukka, J.; Talvitie-Lamberg, K.; Maity, D. Anthropomorphism and social presence in Human–Virtual service assistant interactions: The role of dialog length and attitudes. Comput. Hum. Behav.; 2022; 135, 107343. [DOI: https://dx.doi.org/10.1016/j.chb.2022.107343]
103. Lv, X.; Yang, Y.; Qin, D.; Cao, X.; Xu, H. Artificial intelligence service recovery: The role of empathic response in hospitality customers’ continuous usage intention. Comput. Hum. Behav.; 2022; 126, 106993. [DOI: https://dx.doi.org/10.1016/j.chb.2021.106993]
104. Jiang, Y.; Yang, X.; Zheng, T. Make chatbots more adaptive: Dual pathways linking human-like cues and tailored response to trust in interactions with chatbots. Comput. Hum. Behav.; 2023; 138, 107485. [DOI: https://dx.doi.org/10.1016/j.chb.2022.107485]
105. Harris-Watson, A.M.; Larson, L.E.; Lauharatanahirun, N.; DeChurch, L.A.; Contractor, N.S. Social perception in Human-AI teams: Warmth and competence predict receptivity to AI teammates. Comput. Hum. Behav.; 2023; 145, 107765. [DOI: https://dx.doi.org/10.1016/j.chb.2023.107765]
106. Rhim, J.; Kwak, M.; Gong, Y.; Gweon, G. Application of humanization to survey chatbots: Change in chatbot perception, interaction experience, and survey data quality. Comput. Hum. Behav.; 2022; 126, 107034. [DOI: https://dx.doi.org/10.1016/j.chb.2021.107034]
107. Mamonov, S.; Koufaris, M. Fulfillment of higher-order psychological needs through technology: The case of smart thermostats. Int. J. Inf. Manag.; 2020; 52, 102091. [DOI: https://dx.doi.org/10.1016/j.ijinfomgt.2020.102091]
108. Doty, D.H.; Wooldridge, B.R.; Astakhova, M.; Fagan, M.H.; Marinina, M.G.; Caldas, M.P.; Tunçalp, D. Passion as an excuse to procrastinate: A cross-cultural examination of the relationships between Obsessive Internet passion and procrastination. Comput. Hum. Behav.; 2020; 102, pp. 103-111. [DOI: https://dx.doi.org/10.1016/j.chb.2019.08.014]
109. Eigenraam, A.W.; Eelen, J.; Verlegh, P.W. Let me entertain you? The importance of authenticity in online customer engagement. J. Interact. Mark.; 2021; 54, pp. 53-68. [DOI: https://dx.doi.org/10.1016/j.intmar.2020.11.001]
110. Drouin, M.; Sprecher, S.; Nicola, R.; Perkins, T. Is chatting with a sophisticated chatbot as good as chatting online or FTF with a stranger? Comput. Hum. Behav.; 2022; 128, 107100. [DOI: https://dx.doi.org/10.1016/j.chb.2021.107100]
111. Chubarkova, E.V.; Sadchikov, I.A.; Suslova, I.A.; Tsaregorodtsev, A.; Milova, L.N. Educational game systems in artificial intelligence course. Int. J. Environ. Sci. Educ.; 2016; 11, pp. 9255-9265.
112. Kim, N.-Y.; Cha, Y.; Kim, H.-S. Future english learning: Chatbots and artificial intelligence. Multimed.-Assist. Lang. Learn.; 2019; 22, pp. 32-53.
113. Huang, J.; Saleh, S.; Liu, Y. A review on artificial intelligence in education. Acad. J. Interdiscip. Stud.; 2021; 10, pp. 206-217. [DOI: https://dx.doi.org/10.36941/ajis-2021-0077]
114. Pizzoli, S.F.M.; Mazzocco, K.; Triberti, S.; Monzani, D.; Alcañiz Raya, M.L.; Pravettoni, G. User-centered virtual reality for promoting relaxation: An innovative approach. Front. Psychol.; 2019; 10, 479. [DOI: https://dx.doi.org/10.3389/fpsyg.2019.00479]
115. Ceha, J.; Lee, K.J.; Nilsen, E.; Goh, J.; Law, E. Can a humorous conversational agent enhance learning experience and outcomes? Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems; Yokohama, Japan, 8–13 May 2021; pp. 1-14.
116. Sung, E.C.; Bae, S.; Han, D.-I.D.; Kwon, O. Consumer engagement via interactive artificial intelligence and mixed reality. Int. J. Inf. Manag.; 2021; 60, 102382. [DOI: https://dx.doi.org/10.1016/j.ijinfomgt.2021.102382]
117. Eppler, M.J.; Mengis, J. The concept of information overload—A review of literature from organization science, accounting, marketing, MIS, and related disciplines. Inf. Soc. Int. J.; 2004; 20, pp. 271-305. [DOI: https://dx.doi.org/10.1080/01972240490507974]
118. Jarrahi, M.H. Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Bus. Horiz.; 2018; 61, pp. 577-586. [DOI: https://dx.doi.org/10.1016/j.bushor.2018.03.007]
119. Nowak, A.; Lukowicz, P.; Horodecki, P. Assessing artificial intelligence for humanity: Will AI be our biggest ever advance? Or the biggest threat? [Opinion]. IEEE Technol. Soc. Mag.; 2018; 37, pp. 26-34. [DOI: https://dx.doi.org/10.1109/MTS.2018.2876105]
120. Chong, L.; Zhang, G.; Goucher-Lambert, K.; Kotovsky, K.; Cagan, J. Human confidence in artificial intelligence and in themselves: The evolution and impact of confidence on adoption of AI advice. Comput. Hum. Behav.; 2022; 127, 107018. [DOI: https://dx.doi.org/10.1016/j.chb.2021.107018]
121. Endsley, M.R. Supporting Human-AI Teams: Transparency, explainability, and situation awareness. Comput. Hum. Behav.; 2023; 140, 107574. [DOI: https://dx.doi.org/10.1016/j.chb.2022.107574]
122. Zhang, Z.; Yoo, Y.; Lyytinen, K.; Lindberg, A. The unknowability of autonomous tools and the liminal experience of their use. Inf. Syst. Res.; 2021; 32, pp. 1192-1213. [DOI: https://dx.doi.org/10.1287/isre.2021.1022]
123. Cui, Y.; van Esch, P. Autonomy and control: How political ideology shapes the use of artificial intelligence. Psychol. Mark.; 2022; 39, pp. 1218-1229. [DOI: https://dx.doi.org/10.1002/mar.21649]
124. Osburg, V.-S.; Yoganathan, V.; Kunz, W.H.; Tarba, S. Can (A)I give you a ride? Development and validation of the cruise framework for autonomous vehicle services. J. Serv. Res.; 2022; 25, pp. 630-648. [DOI: https://dx.doi.org/10.1177/10946705221118233]
125. Mohammadi, E.; Thelwall, M.; Kousha, K. Can Mendeley bookmarks reflect readership? A survey of user motivations. J. Assoc. Inf. Sci. Technol.; 2016; 67, pp. 1198-1209. [DOI: https://dx.doi.org/10.1002/asi.23477]
126. Li, J.; Che, W. Challenges and coping strategies of online learning for college students in the context of COVID-19: A survey of Chinese universities. Sustain. Cities Soc.; 2022; 83, 103958. [DOI: https://dx.doi.org/10.1016/j.scs.2022.103958]
127. Eisenberg, D.; Gollust, S.E.; Golberstein, E.; Hefner, J.L. Prevalence and correlates of depression, anxiety, and suicidality among university students. Am. J. Orthopsychiatry; 2007; 77, pp. 534-542. [DOI: https://dx.doi.org/10.1037/0002-9432.77.4.534] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/18194033]
128. Malhotra, Y.; Galletta, D.F. Extending the technology acceptance model to account for social influence: Theoretical bases and empirical validation. Proceedings of the 32nd Annual Hawaii International Conference on Systems Sciences; Maui, HI, USA, 5–8 January 1999; 14.
129. Hair, J.F., Jr.; Hult, G.T.M.; Ringle, C.M.; Sarstedt, M. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM); Sage Publications: Thousand Oaks, CA, USA, 2021.
130. Petter, S.; Straub, D.; Rai, A. Specifying formative constructs in information systems research. MIS Q.; 2007; 31, pp. 623-656. [DOI: https://dx.doi.org/10.2307/25148814]
131. Podsakoff, P.M.; MacKenzie, S.B.; Lee, J.-Y.; Podsakoff, N.P. Common method biases in behavioral research: A critical review of the literature and recommended remedies. J. Appl. Psychol.; 2003; 88, pp. 879-903. [DOI: https://dx.doi.org/10.1037/0021-9010.88.5.879]
132. Nunnally, J.C. Psychometric Theory; McGraw-Hill Book Company: New York, NY, USA, 1978; pp. 86–113, 190–255.
133. Comrey, A.L. A First Course in Factor Analysis; Academic Press: New York, NY, USA, 1973.
134. Fornell, C.; Larcker, D.F. Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res.; 1981; 18, pp. 39-50. [DOI: https://dx.doi.org/10.1177/002224378101800104]
135. Urbach, N.; Ahlemann, F. Structural equation modeling in information systems research using partial least squares. J. Inf. Technol. Theory Appl.; 2010; 11, 2.
136. Henseler, J.; Ringle, C.M.; Sarstedt, M. A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci.; 2015; 43, pp. 115-135. [DOI: https://dx.doi.org/10.1007/s11747-014-0403-8]
137. Kim, S.S.; Malhotra, N.K.; Narasimhan, S. Research note—Two competing perspectives on automatic use: A theoretical and empirical comparison. Inf. Syst. Res.; 2005; 16, pp. 418-432. [DOI: https://dx.doi.org/10.1287/isre.1050.0070]
138. Turel, O.; Yuan, Y.; Connelly, C.E. In justice we trust: Predicting user acceptance of e-customer services. J. Manag. Inf. Syst.; 2008; 24, pp. 123-151. [DOI: https://dx.doi.org/10.2753/MIS0742-1222240405]
139. Preacher, K.J.; Hayes, A.F. Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models. Behav. Res. Methods; 2008; 40, pp. 879-891. [DOI: https://dx.doi.org/10.3758/BRM.40.3.879] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/18697684]
140. MacMahon, S.J.; Carroll, A.; Osika, A.; Howell, A. Learning how to learn—Implementing self-regulated learning evidence into practice in higher education: Illustrations from diverse disciplines. Rev. Educ.; 2022; 10, e3339. [DOI: https://dx.doi.org/10.1002/rev3.3339]
141. Broadbent, J.; Poon, W.L. Self-regulated learning strategies & academic achievement in online higher education learning environments: A systematic review. Internet High. Educ.; 2015; 27, pp. 1-13.
142. Wingate, U. A framework for transition: Supporting ‘learning to learn’ in higher education. High. Educ. Q.; 2007; 61, pp. 391-405. [DOI: https://dx.doi.org/10.1111/j.1468-2273.2007.00361.x]
143. Sagitova, R. Students’ self-education: Learning to learn across the lifespan. Procedia-Soc. Behav. Sci.; 2014; 152, pp. 272-277. [DOI: https://dx.doi.org/10.1016/j.sbspro.2014.09.194]
144. Chen, C.Y.; Lee, L.; Yap, A.J. Control deprivation motivates acquisition of utilitarian products. J. Consum. Res.; 2017; 43, pp. 1031-1047. [DOI: https://dx.doi.org/10.1093/jcr/ucw068]
145. Chebat, J.-C.; Gélinas-Chebat, C.; Therrien, K. Lost in a mall, the effects of gender, familiarity with the shopping mall and the shopping values on shoppers’ wayfinding processes. J. Bus. Res.; 2005; 58, pp. 1590-1598. [DOI: https://dx.doi.org/10.1016/j.jbusres.2004.02.006]
146. Murray, A.; Rhymer, J.; Sirmon, D.G. Humans and technology: Forms of conjoined agency in organizations. Acad. Manag. Rev.; 2021; 46, pp. 552-571. [DOI: https://dx.doi.org/10.5465/amr.2019.0186]
147. Möhlmann, M.; Zalmanson, L.; Henfridsson, O.; Gregory, R.W. Algorithmic management of work on online labor platforms: When matching meets control. MIS Q.; 2021; 45, pp. 1999-2022. [DOI: https://dx.doi.org/10.25300/MISQ/2021/15333]
148. Fryberg, S.A.; Markus, H.R. Cultural models of education in American Indian, Asian American and European American contexts. Soc. Psychol. Educ.; 2007; 10, pp. 213-246. [DOI: https://dx.doi.org/10.1007/s11218-007-9017-z]
149. Kim, J.; Merrill, K., Jr.; Xu, K.; Sellnow, D.D. I like my relational machine teacher: An AI instructor’s communication styles and social presence in online education. Int. J. Hum.-Comput. Interact.; 2021; 37, pp. 1760-1770. [DOI: https://dx.doi.org/10.1080/10447318.2021.1908671]
150. Kim, J.; Merrill, K.; Xu, K.; Sellnow, D.D. My teacher is a machine: Understanding students’ perceptions of AI teaching assistants in online education. Int. J. Hum.-Comput. Interact.; 2020; 36, pp. 1902-1911. [DOI: https://dx.doi.org/10.1080/10447318.2020.1801227]
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
With the rapid development of artificial intelligence (AI) technology, AI educators have become a reality. The advancement and growing application of AI technology in higher education not only provide teachers with more efficient tools for long-term, focused teaching, but also open new active and independent spaces for the sustainable, self-motivated learning of college students. Understanding the effects of AI educator design is therefore essential to the sustainable development and deployment of AI-driven courses at universities. This paper investigates how the autonomy design of AI educators influences students' usage intentions by examining how artificial autonomy satisfies students' needs. Drawing on the uses and gratifications (U&G) framework, we theorize how AI educator autonomy (i.e., sensing autonomy, thought autonomy, and action autonomy) influences students' intentions to use an AI educator through the mediating effects of U&G benefits (i.e., information-seeking gratification, social interaction gratification, and entertainment gratification). Based on an online survey (N = 673) of college students, we found that the sensing autonomy of AI educators is positively associated with usage intention through the mediating effects of social interaction and entertainment gratifications; the thought autonomy of AI educators is positively related to usage intention, mediated by information-seeking and social interaction gratifications; and the action autonomy of AI educators is positively linked with usage intention through the paths of information-seeking and entertainment gratifications. Our findings offer both theoretical contributions and practical implications.
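The mediation paths summarized above are typically tested with bootstrapped indirect effects (cf. Preacher and Hayes [139]). The following is a minimal, hypothetical sketch of that procedure on simulated data — the variable names and values stand in for constructs such as sensing autonomy, entertainment gratification, and usage intention, and are not the study's actual dataset or model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated stand-ins for X -> M -> Y (e.g., sensing autonomy ->
# entertainment gratification -> usage intention). N matches the
# abstract's reported sample size; effect sizes are illustrative.
n = 673
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)            # mediator partly driven by X
y = 0.4 * m + 0.1 * x + rng.normal(size=n)  # outcome mostly driven by M

def indirect_effect(x, m, y):
    """a*b estimate: slope of M on X times slope of Y on M (controlling for X)."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

# Percentile bootstrap of the indirect effect (2000 resamples).
boot = np.array([
    indirect_effect(x[idx], m[idx], y[idx])
    for idx in (rng.integers(0, n, n) for _ in range(2000))
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```

If the bootstrap confidence interval excludes zero, the indirect (mediated) path is taken as significant; this is the logic behind the mediation claims in the abstract, though the paper itself estimates the full model with PLS-SEM.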
Details

1 School of Business, Ningbo University, Ningbo 315211, China;
2 Department of Student Affairs, Zhejiang University, Hangzhou 310027, China;
3 College of Economics and Management, Zhejiang Agricultural and Forestry University, Hangzhou 310007, China;