Medical error has an estimated economic impact of between $735 billion and $980 billion annually in the United States alone (Andel, Davidow, Hollander, & Moreno, 2012). Furthermore, estimates indicate that preventable medical errors occur in one of every three hospital admissions (Classen et al., 2011) and result in 98,000 patient deaths per year (Kohn, Corrigan, & Donaldson, 1999). Ultimately, the evidence is clear: medical error causes patient harm, and much of this error is preventable (Hogan et al., 2012).
In an effort to reduce preventable errors and patient harm, research has focused on factors that contribute to medical error (e.g., Pham et al., 2011). Teamwork failures (e.g., poor communication) have been identified as contributing to 68.3% of patient harm events, making them a major source of preventable medical error (The Joint Commission, 2014). In response to this evidence, the healthcare industry has begun to recognize the value of team training. However, despite recent increases in the use of team training in healthcare, we know little about how effective these practices are. Given that healthcare is predicted to be the largest industry in the world by 2018 (Woods, 2009) and that team training is becoming increasingly prevalent in the healthcare industry (e.g., Weaver, Dy, et al., 2014), we believe it is necessary to gain a deeper understanding of team training’s effectiveness in healthcare.
Thus, the purpose of this study is to meta-analytically examine the effects of team training in the healthcare industry and to examine the factors that enhance or attenuate its effectiveness. Specifically, we seek to answer three overarching questions in this paper: (a) Is team training in healthcare effective? (b) Under what conditions is healthcare team training most effective? and (c) How does healthcare team training influence bottom-line organizational outcomes and patient outcomes? The answers to these questions will inform three key scholarly areas. First, although prior evidence suggests that team training is beneficial (see Salas, DiazGranados, et al., 2008), it is unknown whether these results, which are based largely on teams that differ greatly in context from healthcare, hold in the healthcare industry. Therefore, to answer our first research question (Is team training in healthcare effective?), the current paper estimates the effectiveness of team training in healthcare by evaluating (a) reactions to training (i.e., the extent to which trainees enjoy the training and perceive it to be useful), (b) learning outcomes (i.e., the degree to which trainees acquire knowledge, skills, and abilities [KSAs]), (c) transfer (i.e., the extent to which trainees demonstrate trained KSAs on the job), and (d) results, including organizational outcomes and patient outcomes.
To answer our second research question (Under what conditions is healthcare team training most effective?), we examine moderators of team training effectiveness, with some moderators increasing our understanding of characteristics that are largely unique to healthcare (e.g., patient acuity), whereas other moderators seek to inform the larger team training literature (e.g., provision of feedback). Ultimately, we use the meta-analytic findings to provide practical implications for the design of team training interventions in the healthcare industry.
Last, in an attempt to understand the pathway through which healthcare team training influences organizational and patient outcomes (answering our third research question, “How does healthcare team training influence bottom-line organizational outcomes and patient outcomes?”), we test a theoretical model originally proposed in traditional training literature (Alliger, Tannenbaum, Bennett, Traver, & Shotland, 1997; Tharenou et al., 2007). This model tests a sequential pathway among training criteria to understand the process by which training is effective and how healthcare team training influences bottom-line organizational outcomes. Specifically, we propose and test whether healthcare team training influences evaluation criteria via a sequential effect wherein training causes reactions, which lead to learning, which subsequently influences transfer, which then causes results (i.e., the sequential model of healthcare team training).
Healthcare Teams and Healthcare Team Training
Team training has been defined as a learning strategy in which a learner or group of learners systematically acquire(s) teamwork KSAs to impact cognitions, affect, and behaviors of a team (Salas, DiazGranados, Klein, et al., 2008; Salas, Nichols, & Driskell, 2007). Thus, team training is either an individual-level or team-level intervention which frequently has the goal of improving outcomes at multiple levels of analysis, including individual (e.g., individual learning), team (e.g., cohesion), and organizational (e.g., return on investment) levels.
While team training is not restricted to healthcare (Tannenbaum et al., 1991), its use has been widely encouraged within the healthcare industry (Shekelle et al., 2013). For example, recent estimates show that 75% of all medical students receive team training (Beach, 2013) and that as many as 1.5 million individuals have received Team Strategies and Tools to Enhance Performance and Patient Safety (TeamSTEPPS) training (Global Diffusion of Healthcare Innovation Working Group, 2015), a popular team training program designed specifically for healthcare providers. Despite the prevalence of team training in healthcare, evidence of its effectiveness is not well established. Although one could argue that evidence of training effectiveness in general (i.e., not specific to teams or the healthcare setting; e.g., Arthur, Bennett, Edens, & Bell, 2003) could be used as proxy evidence for the effectiveness of healthcare team training, we argue that the unique healthcare environment and the content of healthcare team training programs warrant their own investigation. To this end, the purpose of this section is to briefly describe the characteristics that differentiate healthcare teams, and their associated training programs, from those found in other settings. To situate these characteristics within the scholarly literature, we use this section to position healthcare teams within existing team taxonomies and frameworks (e.g., Hollenbeck et al., 2012; Wildman et al., 2012) such that our results may inform the training literature for similar team types. Specifically, we assert that many healthcare teams have low temporal stability (Hollenbeck et al., 2012), a short team life span (Wildman et al., 2012), a functional role structure (Wildman et al., 2012), high skill differentiation (Hollenbeck et al., 2012), a rotating leadership structure (Wildman et al., 2012), high authority differentiation (Hollenbeck et al., 2012), and high interdependence (Wildman et al., 2012). As we describe below, each of these features shapes team training in ways that necessitate an independent examination within this context.
To begin, 24/7 patient care demands high member fluidity and, in turn, short team lifespans and low temporal stability. These factors make the training of transportable, generic competencies more important in healthcare in comparison to specific competencies that are often trained outside of healthcare teams (Ellis et al., 2005). Similarly, team training is sometimes administered before healthcare providers become practicing clinicians who are part of a healthcare team (e.g., in nursing school, medical school), increasing the importance of generic competency training for those who have not yet been assigned a set of specific job duties.
Importantly, however, whereas the personnel on the team are fluid, the roles required by the team are often stable (Andreatta, 2010); many of the various roles on a healthcare team (e.g., anesthesiologist, surgeon) can be performed only by a team member of a particular discipline and/or profession who is licensed and trained to fulfill the job duties assigned to such roles (e.g., an anesthesiologist must oversee the administration of drugs to induce patients prior to surgery). This member composition creates a unique need for emphasis on communication and conflict management skills that can be used in interdisciplinary and interprofessional teams, which are ripe for miscommunication. Further, this member composition requires adjustment to traditional team training programs’ focus on mutual support and backup behavior, as not all members of a healthcare team are licensed or equipped to provide backup on tasks (e.g., a surgeon cannot step in for a missing anesthesiologist).
Additionally, healthcare teams are unique in that there is often a rotating leadership structure (Wildman et al., 2012). For instance, healthcare providers shift leadership during surgery; namely, the anesthesiologist holds a leadership position while the patient is being induced, after which point leadership shifts to the surgeon so that surgery can begin. Further, healthcare teams are characterized by high authority differentiation, which is reflected in healthcare teams’ reliance upon a chain of command (Hollenbeck et al., 2012). For instance, a senior physician has the final say when working with a team of nurses and medical interns. This creates a unique need for healthcare team training programs to place a major emphasis on functional leadership, as well as on content tailored to hierarchical decision-making.
Moreover, healthcare teams often work under conditions of intensive task interdependence (Sundstrom, DeMeuse, & Futrell, 1990), which is rarely found in other fields and is essential to team performance (Taplin et al., 2015). This creates a unique need for training teamwork KSAs that are applicable to highly interdependent teams.
Finally, although some healthcare teams work with patients who impose relatively low workloads, many healthcare teams work in high-stakes, fast-paced departments that require swift and accurate decisions. Thus, many healthcare teams require team training under time-sensitive, high-stress, simulated conditions that are not as critical for transfer in fields outside of healthcare.
Together, these features make healthcare team training a unique form of training that is likely to be developed and implemented differently than training in more traditional teams, necessitating a unique examination of team training in the healthcare field. As a result of these characteristics, we have also chosen to examine several moderators of training effectiveness that we argue are uniquely relevant to the healthcare team training literature. For example, we focus on the training of interdisciplinary and/or interprofessional team members, as well as the type of sample (e.g., clinician or student) and the acuity of the patients whom healthcare teams treat. Although these moderators are largely specific to the healthcare industry, we also examine a series of moderators that are not unique to healthcare (e.g., feedback), which may inform training in similar types of teams that are traditionally understudied: highly interdependent teams with high skill differentiation and low temporal stability.
In sum, given the needs of healthcare providers, the purpose of this study is to meta-analytically examine the effects of team training in the healthcare industry and to examine the factors that enhance or attenuate its effectiveness. Specifically, the primary goal of the current paper is to estimate the effectiveness of team training in healthcare by evaluating (a) reactions to training (i.e., the extent to which trainees enjoy the training and perceive it to be useful), (b) learning outcomes (i.e., the degree to which trainees acquire knowledge, skills, and abilities [KSAs]), (c) transfer (i.e., the extent to which trainees demonstrate trained KSAs on the job), and (d) results, including organizational outcomes and patient outcomes. We note that our analysis of team training effectiveness for each of these criteria is based exclusively on repeated measures (i.e., pre- and post-training assessment of an outcome) and independent groups (i.e., control group vs. training group) primary study designs, which allows us to make stronger causal attributions about the effectiveness of team training than typical meta-analyses based on correlational studies. Our second goal is to propose and test a causal chain implied by Kirkpatrick (1956, 1996) using meta-analytic path analysis. Specifically, as part of this goal, we aim to determine whether healthcare team training influences evaluation criteria via a progressive effect wherein training causes reactions, which lead to learning, which subsequently influences transfer, which then causes results (i.e., the sequential model of healthcare team training). The third goal of the paper is to examine a series of moderators of healthcare team training effectiveness, including training design, trainee composition, and the work environment, in an effort to better understand the conditions under which healthcare team training is most effective. We use the meta-analytic findings to provide practical implications for the design of team training interventions in the healthcare industry.
A Theory of Healthcare Team Training Effectiveness
Given the importance of teamwork in the healthcare industry and the increasing prevalence of training designed to improve teamwork in this industry (Weaver, Dy, & Rosen, 2014), we begin our quantitative review of team training in healthcare by leveraging existing models of training, training transfer (e.g., Baldwin & Ford, 1988), and training evaluation (e.g., Kirkpatrick, 1956, 1996) to formalize a theory of team training effectiveness in healthcare, which we present in Figure 1.
Figure 1. A theoretical model of healthcare team training effectiveness.
To elaborate on our theory of healthcare team training effectiveness, we begin by discussing the evaluation criteria that are used to define training “effectiveness.”
Healthcare Team Training Evaluation Criteria
Kirkpatrick’s (1956, 1996) model of training evaluation criteria is a widely used framework that consists of four criteria: (a) trainee reactions, (b) learning, (c) transfer (also known as behavior), and (d) results. We selected Kirkpatrick’s evaluation criteria as our criteria for training effectiveness in the current meta-analysis because of its widespread use within training practice (Patel, 2010) in addition to its use within prior meta-analytic work on training (e.g., Arthur et al., 2003). Below, we elaborate on each of the criteria in more detail, followed by a discussion of how these criteria influence each other.
Reaction criteria represent the extent to which individual trainees enjoy the training and/or find it useful (Alliger et al., 1997; Kirkpatrick, 1996). Reactions to training are evaluated in 91% of reported training evaluations (Patel, 2010) and are important indicators of employee response to change initiatives in general, including training. For instance, the organizational development literature explains that change in organizations is not well implemented unless employees have positive reactions to the changes (Oreg, Vakola, & Armenakis, 2011). Although there is a pop-culture myth that employees typically dislike training (see Kelly, 2012, March 6, for popular press coverage of this issue), it is likely that teamwork training contextualized for the healthcare setting will be well received. In particular, because many healthcare team training programs (e.g., TeamSTEPPS) involve videos and demonstrative content, training is likely to satisfy or exceed trainees’ expectations; these methods realistically tap into past experiences, increasing motivation (Vroom, 1964) and making gains from training more substantial (Smith-Jentsch, Cannon-Bowers, Tannenbaum, & Salas, 2008). Therefore, we anticipate that the change in pre- and post-training reactions (i.e., the difference between trainees’ pre-training perceptions of whether the training will be enjoyable/useful and their post-training evaluation of whether the training was enjoyable/useful) is positive, indicating that healthcare providers’ expectations are typically met or exceeded (i.e., positive change reflects that expectations of enjoyment/utility were exceeded, whereas negative change indicates expectations were not met). Based on this rationale, we hypothesize the following:
Hypothesis 1: Healthcare team training improves reactions from pre- to post-training.
Learning has been defined as “a relatively permanent change in knowledge or skill produced by experience” (Weiss, 1990, p. 172) and is often emphasized as the most fundamental criterion in training evaluation (Campbell, 1988). Prior work has questioned the extent to which “soft skills,” such as skills related to teamwork and communication, can be acquired during training (and subsequently transferred to the job; Foxon, 1993; Laker & Powell, 2011; Merriam & Leahy, 2005), primarily because these soft skills are often contrasted with “hard skills” that are arguably easier to learn and apply. Therefore, some may question the extent to which team training (i.e., a soft skill training) results in actual knowledge acquisition. Given the importance and frequent use of these soft skills (e.g., communication) in healthcare, and given that knowledge acquisition is one of the primary goals of healthcare team training, we expect that clinicians and students undergoing team training perceive the content to be valid and necessary, thus motivating trainees during the training session (Grohmann, Beller, & Kauffeld, 2014), which results in skill acquisition (Holton, 1996). Therefore, we anticipate that learning occurs as a result of healthcare team training, and we hypothesize the following:
Hypothesis 2: Healthcare team training increases learning scores from pre- to post-training.
Transfer is defined as “the use of trained knowledge and skills back on the job” (Burke, Hutchins, & Saks, 2013, p. 265). Healthcare team training may transfer to the job because of teamwork’s importance in coping with job demands (Demerouti, Bakker, Nachreiner, & Schaufeli, 2001). Specifically, although traditional training programs typically increase job demands (e.g., training a surgical resident how to perform a new surgical procedure increases the scope of procedures he or she is expected to perform), team training may increase job resources that help employees cope with job demands (e.g., team communication training may free up resources that would otherwise be devoted to stressful communication difficulties; Crawford, LePine, & Rich, 2010). Thus, whereas job demands related to traditional training programs may be a barrier to training transfer, team training may transfer to the job because it provides trainees with resources that allow them to cope with those demands.
Hypothesis 3: Healthcare team training increases the use of KSAs on-the-job from pre- to post-training.
Results are a “measure of the final results that occur due to training, including increased sales, higher productivity, bigger profits, reduced costs, less employee turnover, and improved quality” (Kirkpatrick, 1996, p. 56) that occur at the organizational level. Given that healthcare requires a team of providers to care for a patient, it seems that team training should intuitively lead to better patient care and in turn, better patient outcomes and lower patient mortality. As originally argued by Wright and McMahan (1992) and then evaluated in the training literature by Tharenou, Saks, and Moore (2007), training as a human resource practice should improve results because (a) training is an investment in human capital that adds value to the firm, (b) training reinforces behaviors that are aligned with the organization’s strategy, and (c) training increases employee KSAs, which are necessary to achieve desired organizational results (an issue which we discuss in more detail below). Borrowing this logic for a healthcare setting, we surmise that team training as a strategic human resource practice should lead to improved organizational and patient outcomes because it strengthens human capital, aligns behaviors with the organization’s strategy, and increases employee KSAs. With this in mind, we hypothesize the following:
Hypothesis 4: Healthcare team training improves results such that organizational outcomes and patient outcomes are improved from pre- to post-training.
In summary, we seek to evaluate the extent to which team training in the healthcare industry is effective and anticipate that healthcare team training will markedly improve the aforementioned criteria of training reactions, learning, transfer, and results (as displayed in Figure 1).
The Sequential Model of Healthcare Team Training
Previous meta-analytic work on training has indicated that training effectiveness varies depending on the training criterion that is used to evaluate effectiveness (i.e., reactions, learning, transfer, or results; Arthur et al., 2003), with findings suggesting that training has the weakest impact on the most distal criterion used (i.e., results). Taking this a step further, some interpretations of Kirkpatrick’s work suggest there may be a progressive effect wherein training impacts reactions, which induce learning, which then influences behaviors, and finally, results (Alliger et al., 1997). Although this sequential model has been implied in prior work (Alliger et al., 1997; Tharenou et al., 2007), neither a full theoretical account nor an empirical test of this mediational model has been included in the prior literature. We formally propose a progressive relationship between healthcare team training and the Kirkpatrick criteria, which we label the “sequential model of healthcare team training,” as shown in Figure 2.
Figure 2. The sequential model of healthcare team training.
In this model, healthcare team training has direct effects on reactions, learning, transfer, and results in addition to an indirect mediational chain wherein reactions predict learning, which leads to transfer, and finally, results.
Learning theory, together with a well-established body of higher education literature based on the idea that students learn more when they have high course satisfaction (Cohen, 1981; Knowles, 1973), explains a potential link between reactions and learning. Specifically, some have suggested that perceptions of satisfaction and utility are related to learning because negative trainee/student reactions may hinder attentional focus on the training content, whereas positive reactions may increase attentional span (Brown, 2005; Tomkins, 1984). The link between reactions and learning has been included in prior models of training effectiveness (Kirkpatrick, 1996; Mathieu, Tannenbaum, & Salas, 1992), and although some meta-analytic work has found weak empirical support for these assertions (Alliger et al., 1997), more recent meta-analytic evidence suggests reactions are positively related to learning (particularly affective learning, declarative knowledge, and procedural knowledge; Sitzmann, Brown, Casper, Ely, & Zimmerman, 2008). Although the existing evidence has yet to be applied to team training in the healthcare context, we argue that healthcare providers should exhibit similar attentional benefits when they perceive their training to be useful and enjoyable, supporting a positive relationship between reactions and learning in the sequential model of healthcare team training. Based on this rationale, we expect the following:
Hypothesis 5: Trainee reactions positively predict learning.
Upon examining the next link in the progressive chain of training evaluation criteria, it seems logical and consistent with previous theory that learning would lead to transfer; particularly, this may be true because trained KSAs must first be learned before they can be transferred to the work environment (Huang, Blume, Ford, & Baldwin, 2015). As learning occurs, neural networks form and become reinforced as information from the network is consistently retrieved (Hikosaka et al., 1999); therefore, the more that this information is retrieved during the learning process, the easier it is to retrieve when one is on the job (i.e., it is less effortful; Barnett & Ceci, 2002). Given our focus on healthcare-specific teams, we propose that learning leads to transfer because higher levels of learning may indicate more accurate and more readily accessible neural networks of learned KSAs, which increase the ease with which these KSAs can be applied to the job. Thus, we hypothesize:
Hypothesis 6: Learning positively predicts training transfer.
Finally, our sequential model of healthcare team training suggests that transfer leads to enhanced results. A framework of training transfer proposed by Kozlowski and Salas (1997) postulates transfer as a multilevel phenomenon that impacts organizational learning and organizational norms, which may lead to enhanced results. Specifically, they propose that transfer may occur vertically, wherein learned KSAs are transferred upward across organizational levels (e.g., individual, team, organizational). This vertical transfer process facilitates organizational learning, the establishment of organizational teamwork norms, and the development of leadership skills related to team coordination that all lead to subsequent organizational and patient outcomes. Thus, we formally hypothesize that:
Hypothesis 7: Training transfer positively predicts results.
Although others have proposed transfer as a key mediator of the training-results relationship (Tharenou et al., 2007; Wright, McCormick, Sherman, & McMahan, 1999), no prior work has formally theorized a sequential model of team training in which reactions predict learning, followed by transfer, and results (see Figure 2). Given the above support for the relationships among reactions, learning, transfer, and results, we expect to find support for the sequential model of healthcare team training as displayed in Figure 2, including the implied partial mediation chain wherein training predicts reactions, which predict learning, which leads to transfer, which subsequently impacts results. Ideally, knowledge of this mediational chain will inform scholars and practitioners alike of the process via which training impacts bottom-line results.
Moderators of Healthcare Team Training Effectiveness
The third goal of the current paper is to determine under what conditions healthcare team training is most effective. Baldwin and Ford (1988) conducted a review of the literature on training transfer and proposed that training inputs, including training design features, trainee characteristics, and characteristics of the work environment, optimize learning and facilitate subsequent transfer of learning. This framework has been supported with qualitative (e.g., Ford & Weissbein, 1997) and quantitative (i.e., meta-analytic) evidence (Blume, Ford, Baldwin, & Huang, 2010), and thus we base our moderators of healthcare team training effectiveness on Baldwin and Ford’s (1988) original work. Of note, we do not include all antecedents to training transfer that are proposed in Baldwin and Ford’s (1988) original work as moderators of healthcare team training effectiveness in the current study due to a lack of primary studies available to investigate these moderators (e.g., self-efficacy, personality, motivation, supervisory support, transfer climate).
Training design and implementation features
Training strategy (also referred to as instructional strategy) has been defined as the set of tools, method of delivery, and training content that constitutes an instructional approach (Salas & Cannon-Bowers, 1997) and can include information (e.g., lecture), demonstration (e.g., videos), and/or practice (e.g., simulation; Cannon-Bowers & Salas, 1991). Information involves providing training content via PowerPoint slides, lectures, or computer-based modules to facilitate accurate mental model formation of the training materials (Craik, 1943). Alternatively, demonstration allows trainees to acquire KSAs by allowing them to view contextualized examples (e.g., in videos, or provided by live actors) such that trainee attentional processes are improved (Bandura, 1977). Finally, practice involves an action-based approach to learning the training material and is critical to learning, as it incorporates real-world problem-solving techniques that promote skill acquisition and action learning by enhancing motivational processes and retention (Bandura, 1977; Cannon-Bowers & Salas, 1991; Kuhl, 1992; Revans, 1982). The use of multiple instructional strategies (information, demonstration, and practice) is likely to be more effective than the use of a single instructional strategy (e.g., information) not only because it appeals to a broader range of trainees’ learning styles (Franzoni & Assar, 2009), but also because multiple strategies involve both passive learning (e.g., information) and active learning (e.g., demonstration and practice), which have been shown to be more effective when combined (Zapp, 2001). Ideally, the long-term retention of learning and the use of trained competencies require (a) the presentation of relevant information to be learned, (b) the demonstration of KSAs to be learned, and (c) opportunities for trainees to practice using the skills (Salas & Cannon-Bowers, 2001). Because each of these training strategies (information, demonstration, and practice) provides a unique enhancement to the learning process (i.e., information provides the training content, demonstration induces social learning, and practice supports action-based learning), we expect training programs that utilize a combination of these training techniques to be more effective than those that utilize only a single technique (see also Taylor, Russ-Eft, & Chan, 2005).
Hypothesis 8: Training strategy moderates the effectiveness of healthcare team training such that programs involving multiple training implementation strategies (i.e., information, demonstration and practice) are more effective (i.e., display greater improvement in reactions, learning, transfer, and results) than training programs that involve only one strategy (e.g., information only, demonstration only, practice only).
Although providing the opportunity to use the trained KSAs is critical to any effective training program, evidence shows that unguided practice is insufficient for learning. Specifically, practice that does not provide guidance or correction may yield an inaccurate mental model of the KSAs to be acquired (Kluger & DeNisi, 1996). Therefore, feedback that is tailored to trainee needs is essential for long-term retention of knowledge (Salas & Cannon-Bowers, 2001) and for establishing correct mental models and transferring skills to the job (Schmidt & Bjork, 1992). According to feedback intervention theory (Kluger & DeNisi, 1996), feedback enables individuals to regulate behavior by comparing feedback to goals or standards. Thus, by promoting self-regulation in the use of trained skills, feedback improves skill acquisition. Therefore, we hypothesize the following:
Hypothesis 9: Healthcare team training that provides feedback to trainees is more effective (i.e., displays greater improvement in reactions, learning, transfer, and results) than healthcare team training that does not provide feedback.
The use of simulators within medical education has become a staple of educational services offered to students and practicing clinicians alike (Kunkler, 2006). In fact, many simulators mimic part or all of a patient, including programmable breathing, speaking, and life-like skin. Typically, these simulations are developed to maximize physical fidelity, or the extent to which the physical appearance and the behavior of the simulation match those of the real-life environment (Miller et al., 2012). Medical simulations have historically emphasized the importance of physical fidelity in training, and research suggests that high-fidelity medical simulations are effective at creating a realistic learning environment that facilitates transfer to patient care settings (Issenberg, McGaghie, Petrusa, Lee Gordon, & Scalese, 2005). In fact, simulators that are high in physical fidelity may be more effective than their less physically realistic counterparts due to their ability to replicate key features of the clinical work environment (Woodworth & Thorndike, 1901), particularly when training events share contiguity, similarity, and frequency with events occurring in the transfer environment (Hays & Singer, 1989). Therefore, we hypothesize that higher physical fidelity will result in more effective healthcare team training:
Hypothesis 10: Team training programs that leverage simulators that are high on physical fidelity are more effective (i.e., display greater improvement in reactions, learning, transfer, and results) than team training programs that use simulators that are lower in physical fidelity.
Trainee composition
Interprofessional training involves team members from more than one profession (e.g., physician, nurse, and respiratory therapist) trained as a group. Interdisciplinary training differs from interprofessional training in that it involves training team members of more than one discipline (e.g., cardiology, pediatrics, and labor and delivery). For example, a nurse and a physician working within cardiology are of the same discipline but are in different professions, whereas a physician in obstetrics and a physician working within cardiology are of the same profession but in different disciplines (Mitchell et al., 2012). Traditionally, medical teams are professionally diverse and, depending on the needs of the patient, these team members may often be from different disciplines. Training multiple professions or disciplines together results in what is arguably the most realistic training environment, one in which team members can learn about others’ professions/disciplines and practice interacting with these individuals to increase transfer to the job (Woodworth & Thorndike, 1901). However, training homogeneous teams (i.e., one discipline and/or one profession) may result in more effective team training by allowing for content that is maximally relevant to the trainee’s profession/discipline. In other words, training content that contains more discipline- and profession-specific language and examples (e.g., the Anesthesia Crew Resource Management training program) is more likely to increase trainees’ valence (Vroom, 1964) and enhance trainee motivation (Colquitt, LePine, & Noe, 2000; Grohmann et al., 2014). Taken together, this likely results in greater training effectiveness (Holton, 1996). Moreover, training homogeneous teams reduces competing goals among trainees and helps eliminate status differences that exist when teams are composed of multiple disciplines and/or professions (Cronin & Weingart, 2007; Hall, 2005). Based on this rationale, we argue that team training delivered to teams that are interprofessional or interdisciplinary will likely be less effective than training delivered to homogeneous (i.e., noninterprofessional or noninterdisciplinary) teams. As such, we predict:
Hypothesis 11: Team training delivered to teams who are (a) homogeneous in profession and (b) homogeneous in discipline is more effective (i.e., displays greater improvements in reactions, learning, transfer, and results) than team training delivered to trainees who are interprofessional or interdisciplinary, respectively.
Team training in the healthcare industry is administered to two different types of samples: healthcare students (e.g., medical students, nursing students) and practicing clinicians (e.g., physicians, nurses; Weaver, Lyons, et al., 2010). These types of trainees vary in experience, which may influence the extent to which they benefit from team training. Specifically, healthcare students tend to lack knowledge about teamwork in healthcare given their minimal prior experience with teamwork in healthcare settings. As such, these students often enter team training with a “blank slate” for learning and transferring teamwork KSAs, resulting in large gains when they attend a team training program. In contrast, practicing clinicians have likely acquired some teamwork KSAs on the job and therefore may benefit less from team training than students. In addition, given that healthcare students may be highly motivated to learn from team training in order to obtain future employment, they may exhibit larger gains from training than practicing clinicians, who are already employed and may therefore be less motivated to acquire knowledge from a team training program. Because students are not employees of an organization in which organizational or patient results would be assessed, we do not make any predictions regarding the impact of sample type on results as a criterion. Therefore, we hypothesize the following:
Hypothesis 12: Sample type moderates the effectiveness of healthcare team training such that team training is more effective (i.e., displays greater improvements in reactions, learning, and transfer) for students than for practicing clinicians.
Characteristics of the work environment
Among various environmental characteristics, cognitive load has been identified as a characteristic that hinders both learning (i.e., cognitive load theory; Sweller, 1988) and transfer (Holton, Baldwin, & Holton, 2003). In the healthcare environment, we note that the cognitive load of healthcare teams varies substantially with the patient acuity (i.e., health status of the patient) that the unit typically treats (Commission & Dumpel, 2005). For example, emergency medicine, intensive care units, and trauma care have been denoted as high-acuity units (Harper & McCully, 2007); these units impose a high cognitive load because they typically treat high volumes of patients who require immediate, intensive care. In comparison, lower-acuity units such as ambulatory units, elderly/nursing care, and/or home healthcare are characterized by lower cognitive load because these patients typically require less emergent medical attention. Members of high-acuity units may have difficulty applying team-related KSAs to the job because the cognitive load of treating patients may impede the application of team training to the work environment. For example, an emergency room physician who is responsible for three patients who all require immediate, emergency care has less time and fewer cognitive resources to recall recent training on how to communicate with emergency medical technicians during a patient handoff, and transfer of this training may suffer as a result. As such, we extend prior work suggesting that training effectiveness is reduced under conditions of high cognitive load (Holton et al., 2003; Sweller, 1988) into the healthcare field by hypothesizing that the patient acuity of the unit moderates the effectiveness of team training in healthcare.
Hypothesis 13: Unit acuity moderates the effectiveness of healthcare team training such that team training is more effective (i.e., displays greater improvements in reactions, learning, transfer, and results) when unit acuity is low rather than high.
Method
Healthcare Team Training Effectiveness
Literature search
To answer Hypotheses 1 through 4 and 8 through 13, we conducted an extensive literature search to identify published and unpublished evaluations of healthcare team training. The databases searched include PsycINFO, ProQuest Dissertations and Theses, Academic Search Premier, Business Source Premier, MEDLINE, CINAHL, PubMed, OVID, Science Direct, and Google Scholar. The following terms were used: hospital, healthcare, medical, medicine, medical facility, medical students, nursing students, team, teamwork, nontechnical skills paired with: training, education, TeamSTEPPS, intervention, Crew Resource Management, and Crisis Resource Management. In addition, we manually searched each of the following journals for relevant articles: Academic Medicine, Academy of Management Journal, Academy of Management Learning & Education, Cognitive Modeling, Ergonomics in Design, Human Factors, International Journal of Training and Development, Journal of Applied Psychology, Medical Education, and Simulation in Healthcare. Our search included articles from the earliest available start date for each database through April 2015. In addition to our searches in the aforementioned databases, we searched for additional unpublished work by manually searching conference programs (Academy of Management Best Paper Proceedings, Human Factors and Ergonomics Proceedings papers, Society for Industrial and Organizational Psychology Annual Conference Program, and INGroup conference programs) and contacting relevant authors in the field.
Inclusion criteria
Our initial searches identified 26,971 unique articles. After removing 26,565 articles from our pool of potentially relevant primary studies because they did not involve healthcare team training, the 487 remaining articles were reviewed against the following inclusion criteria. Studies were deemed eligible for inclusion if they (a) were written in English, (b) compared pre-training and post-training measures or compared a control group with a training group, (c) reported the sample size and enough information to calculate a Cohen’s d effect size, (d) evaluated team training as a single intervention rather than as part of a package of quality improvement interventions, and (e) were primarily (i.e., more than 50%) focused on training teamwork KSAs, as healthcare team training initiatives often incorporate a combination of both technical and nontechnical education.1
In the event that appropriate statistics were not reported in a primary study deemed eligible for inclusion, the corresponding author was contacted. These inclusion criteria yielded 129 eligible studies totaling 146 independent samples.
Coding of primary studies
Two authors independently coded each of the 129 studies included in the meta-analysis. Agreement between the coders was 92% and all discrepancies were resolved through discussion. When coding, reactions were defined as measures evaluating the extent to which trainees enjoyed the training and/or thought it was useful. Following Kraiger et al. (1993), we defined learning as a change in affect (e.g., attitudinal outcomes such as attitudes toward teamwork and motivational outcomes such as self-efficacy), cognition (e.g., measures pertaining to declarative knowledge), or skill (e.g., display of skills such as teamwork behaviors) immediately after or within one day of team training. Kraiger et al. (1993) have argued that there are three conceptually distinct forms of learning: affective-, cognitive-, and skill-based learning. Affective learning outcomes refer to the attitudinal and motivational changes that occur as a result of participation in training (e.g., the extent to which a team member values teamwork; Kraiger et al., 1993). Cognitive learning outcomes refer to the class of variables related to acquired verbal knowledge, organization of knowledge, and cognitive strategies (e.g., the extent to which a team member has knowledge of team member roles and responsibilities during an emergency cesarean section; Gagne, 1984; Kraiger et al., 1993). Skill-based learning outcomes refer to the class of variables related to the development of technical or motor skills (e.g., the extent to which a team member can enact a standard series of steps for patient handoffs; Kraiger et al., 1993). Whereas more typical definitions of training transfer focus on behaviors and transfer of skills alone (Baldwin & Ford, 1988), we distinguish transfer as affective transfer (i.e., motivational and attitudinal changes that are retained or occur on the job), cognitive transfer (i.e., verbal knowledge, organization of knowledge, and cognitive strategies retained or used on the job), or skill-based transfer (i.e., technical or motor skills retained or used on the job). Additionally, we consider medical errors as an indicator of training transfer. Because errors can occur as a result of skill-based factors (e.g., unintentionally placing an intravenous line into an artery instead of a vein) or cognitive factors (e.g., lack of knowledge that an intravenous line should be inserted into a vein and not an artery), we consider them separately from cognitive and skill-based outcomes; yet, we stipulate that reduced errors indicate successful transfer of trained KSAs. Transfer was therefore operationalized as affective outcomes (e.g., confidence that there is improvement of teamwork on-the-job), cognitive outcomes (e.g., decision making), skill-based outcomes (e.g., improved teamwork or taskwork), or medical errors (e.g., needlestick injuries) that were measured at least one day after training. Within skill-based transfer, separate effect sizes were calculated for teamwork performance (i.e., performance of teamwork-related skills such as initiating a team debrief) and clinical task performance (i.e., performance of task-related job duties such as administering a medication on time) to isolate the differential effects of team training on teamwork skills and task-related skills, respectively. Results were defined as organizational outcomes (e.g., safety climate, and non-ICU length of stay) or patient outcomes (e.g., patient satisfaction, patient mortality).
Further, studies were coded for training strategy (i.e., multiple strategies [a combination of information, demonstration, and/or practice] or single-strategy [information-only, demonstration-only, practice-only]), provision of feedback (i.e., yes/no), and simulator fidelity (i.e., high [human patient simulator/HPS, actor] or low [manikin, paper scenario]). Additionally, studies were coded for patient acuity of the unit (i.e., units were coded as high acuity if they were a unit that treated patients with urgent conditions such as ICU, or low acuity if they were a unit that cared for more stable, and less urgent patients such as postsurgical care; Harper & McCully, 2007; Holden, Eriksson, Andreasson, Williamsson, & Dellve, 2015; Raby, 2007; Sir, Dundar, Barker Steege, & Pasupathy, 2015), trainee composition (i.e., samples were coded as students or clinicians, interprofessional [i.e., consisting of >1 type of profession; e.g., nurse and physician] and interdisciplinary teams [i.e., consisting of >1 type of discipline; e.g., pediatrics and cardiology]), and study design (i.e., repeated measures and/or independent groups).
Each study was coded for a Cohen’s d effect size or the information necessary to calculate a Cohen’s d effect size. When appropriate, Cohen’s d values were reverse-coded to reflect positive trends (e.g., patient dissatisfaction was reverse-coded to reflect patient satisfaction). When a primary study reported multiple, nonindependent effect sizes, the effect sizes were combined in a linear composite (Nunnally, 1978), unless the necessary information to calculate a composite was not reported, in which case the effect sizes were averaged into a single aggregate effect size. Coded information from the primary studies is presented in Table 1.
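To make the compositing step concrete, the sketch below (not taken from the present study’s analysis code) illustrates one way a set of nonindependent Cohen’s d values from a single sample might be combined into a unit-weighted linear composite; the composite-variance formula and the example values are assumptions chosen for illustration.

```python
import numpy as np

def composite_d(ds, mean_intercorrelation):
    """Combine m nonindependent Cohen's d values from one study into a single
    composite effect size for a unit-weighted sum of standardized measures,
    assuming the composite variance equals m + m*(m-1)*r_bar."""
    ds = np.asarray(ds, dtype=float)
    m = ds.size
    composite_sd = np.sqrt(m + m * (m - 1) * mean_intercorrelation)
    return ds.sum() / composite_sd

# Hypothetical example: three correlated learning measures from one sample.
print(composite_d([0.45, 0.60, 0.52], mean_intercorrelation=0.50))
# When intercorrelations are unavailable, the text notes that effect sizes
# were simply averaged instead: np.mean([0.45, 0.60, 0.52]).
```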
Meta-analytic procedures
The current meta-analysis combined effect sizes from three types of study designs: repeated measures (pre- vs. post-training), independent groups (training vs. control), and independent groups with repeated measures (pre- and post-training measures in training and control groups). Therefore, we used the procedures described in Morris and DeShon (2002), which are designed to combine effect sizes from multiple study designs into a single meta-analytic effect size. As part of this procedure, all primary study effect sizes were transformed into a common metric (in this case, a repeated measures metric). This is consistent with Morris and DeShon’s (2002) recommendation and with prior meta-analytic work in which the research involves the investigation of within-person change (Smither, London, & Reilly, 2005), as is the case in the current study (i.e., the current research is interested in estimating pre-training to post-training within-person change). To ensure that all study designs could be combined into a single effect size, we first calculated separate effect sizes for each design and evaluated whether these effect sizes were significantly different from each other (for additional information regarding the calculation of repeated measures d values, refer to the Appendix). Following Morris and DeShon (2002), a random effects model was used to conduct the meta-analysis, and all effect sizes were weighted by the reciprocal of the sampling variance (Hedges & Olkin, 1985). We then corrected the meta-analytic effect sizes for unreliability in the criterion measure using an artifact distribution (following recommendations in Hall & Brannick, 2002) that was created using internal consistency estimates (i.e., the independent variable of “training” was assumed to have perfect reliability; the mean internal consistency estimate of all training evaluation measures was .91, and the mean internal consistency estimates of reactions, learning, transfer, and results were .91, .90, .94, and .91, respectively).
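As an illustration of this procedure, the following minimal sketch shows how independent-groups effect sizes might be placed on a repeated measures metric, pooled under a random effects model with inverse-variance weights, and corrected for criterion unreliability. It is not the study’s actual analysis code; the pre-post correlation, the DerSimonian-Laird estimator, and the reliability value are assumptions made for the sketch rather than reproductions of Morris and DeShon’s (2002) exact computations.

```python
import numpy as np

def ig_to_rm(d_ig, pre_post_r):
    """Convert an independent-groups d to a repeated measures metric, assuming
    the change-score SD equals SD * sqrt(2 * (1 - r)) (cf. Morris & DeShon, 2002);
    pre_post_r is an assumed pre-post correlation."""
    return d_ig / np.sqrt(2 * (1 - pre_post_r))

def random_effects_mean(d, v):
    """Inverse-variance weighted random effects mean with a DerSimonian-Laird
    estimate of between-study variance (tau^2)."""
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1 / v
    fixed_mean = np.average(d, weights=w)
    q = np.sum(w * (d - fixed_mean) ** 2)
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - (len(d) - 1)) / c)
    w_star = 1 / (v + tau2)
    mean_d = np.average(d, weights=w_star)
    se = np.sqrt(1 / w_star.sum())
    return mean_d, mean_d - 1.96 * se, mean_d + 1.96 * se

def correct_for_unreliability(mean_d, r_yy):
    """Disattenuate the meta-analytic d for criterion unreliability using the
    mean reliability from an artifact distribution (e.g., r_yy = .91)."""
    return mean_d / np.sqrt(r_yy)
```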
To identify potential outliers, we calculated the sample-adjusted meta-analytic deviance value for each primary study effect size and plotted these values on a scree plot for each criterion of interest (Huffcutt & Arthur, 1995). One potential outlier was identified for transfer as a criterion, and four potential outliers were identified for results as a criterion. Upon inspection, there appeared to be no calculative or reporting errors in these effect sizes and no substantive reason to exclude the data from the current meta-analysis. In the absence of any justifiable reason to exclude these effect sizes, we retained them in the current paper (Cortina, 2003), and we note that the meta-analytic effect sizes changed very little when these effect sizes were excluded (the observed transfer d changed by .04 and the observed results d changed by .01).
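The outlier screening step can be sketched as follows; the exact scaling used here for the sample-adjusted meta-analytic deviance is an assumption of the illustration rather than a reproduction of Huffcutt and Arthur’s (1995) formulas.

```python
import numpy as np

def samd(d, v):
    """Sample-adjusted meta-analytic deviance, sketched as each study's
    deviation from the mean effect computed without that study, scaled by
    that study's sampling-error SD (scaling is an illustrative assumption)."""
    d, v = np.asarray(d, float), np.asarray(v, float)
    out = np.empty_like(d)
    for i in range(d.size):
        mean_without_i = np.delete(d, i).mean()
        out[i] = (d[i] - mean_without_i) / np.sqrt(v[i])
    return out

# Sorting absolute SAMD values in descending order and plotting them (a scree
# plot) helps flag studies whose values fall far above the elbow of the curve.
```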
Meta-analytic effect sizes were considered to be significant (p < .05) if the 95% confidence interval did not include zero (Whitener, 1990). As an initial step to identify whether moderator analyses were appropriate, we first calculated a confidence interval around τ², representing the degree of heterogeneity in the superpopulation of true effect sizes (Aguinis, Gottfredson, & Wright, 2011; Borenstein, Hedges, Higgins, & Rothstein, 2009), for each Kirkpatrick criterion (reactions, learning, transfer, and results). The confidence interval for reactions as a criterion included zero, suggesting a lack of significant heterogeneity to search for moderators (95% CI [−.02, .13]). However, confidence intervals for learning (95% CI [.34, .45]), transfer (95% CI [.15, .21]), and results criteria (95% CI [.09, .13]) excluded zero, supporting a heterogeneous superpopulation of effect sizes and a subsequent search for moderators for these criteria. To evaluate whether effect sizes were significantly different, we utilized Zou’s (2007) procedure to calculate a modified asymptotic confidence interval, which was interpreted as indicating a significant difference if the interval excluded zero. This procedure performs better than standard confidence intervals when evaluating effect size differences because it more appropriately handles correlation heterogeneity (Bonett, 2008; Zou, 2007). Finally, the trim-and-fill procedure (Duval & Tweedie, 2000) was used to examine publication bias in the current meta-analysis. Following recommendations provided by Kepes, Banks, McDaniel, and Whetzel (2012), we ran a fixed effects trim-and-fill analysis (of our random effects meta-analysis) on the database of effect sizes for each Kirkpatrick criterion separately (i.e., reactions, learning, transfer, results); no studies were imputed to the left of the mean for any of the criteria, suggesting that publication bias did not artificially inflate the meta-analytic effects in the current study (trim-and-fill funnel plots are presented in Figure 3).
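For readers interested in the comparison procedure, the following is a minimal sketch of a modified asymptotic (MOVER-type) confidence interval for the difference between two independent effect sizes, in the spirit of Zou (2007); the function and the input values are our own illustration under that assumption, not code used to produce the estimates reported in this study.

```python
import numpy as np

def zou_difference_ci(est1, ci1, est2, ci2):
    """Modified asymptotic 95% CI for the difference between two independent
    effect sizes (MOVER approach; cf. Zou, 2007). ci1 and ci2 are the
    (lower, upper) limits of the individual 95% CIs around est1 and est2."""
    l1, u1 = ci1
    l2, u2 = ci2
    diff = est1 - est2
    lower = diff - np.sqrt((est1 - l1) ** 2 + (u2 - est2) ** 2)
    upper = diff + np.sqrt((u1 - est1) ** 2 + (est2 - l2) ** 2)
    return lower, upper

# Example with made-up values: a difference between two subgroup deltas is
# treated as significant when the resulting interval excludes zero.
print(zou_difference_ci(0.60, (0.45, 0.75), 0.40, (0.20, 0.60)))
```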
Figure 3. Funnel plots from publication bias analyses.
We also present separate meta-analyses of published and unpublished effect sizes in Table 2
to supplement the trim and fill analysis (reactions are not included because this criterion was based exclusively on published studies; Kepes et al., 2012). Upon comparing published and unpublished meta-analytic results using a modified asymptotic confidence interval (Zou, 2007), learning (95% CI [−.37, .65]), transfer (95% CI [−.25, .42]), and results (95% CI [−.84, 1.78]) do not appear to be biased by publication status (i.e., the confidence intervals all included zero), supporting the trim-and-fill results. We also conducted Begg and Mazumdar’s (1994) rank correlation test, Egger’s test of the intercept (Egger, Davey Smith, Schneider, & Minder, 1997), and a cumulative meta-analysis (Borenstein, Hedges, Higgins, & Rothstein, 2009) as additional tests of publication bias. Begg and Mazumdar’s rank correlation test indicated nonsignificant rank order correlations between the effect size and the standard error for all criteria (reactions: τ = .10, p > .05; learning: τ = .01, p > .05; transfer: τ = .04, p > .05; results: τ = .09, p > .05), supporting a lack of publication bias. Similarly, Egger’s intercept test revealed nonsignificant intercepts when the standardized effect size was regressed onto the inverse of the standard error (reactions: β0 = .47, p > .05; learning: β0 = .84, p > .05; transfer: β0 = −.14, p > .05; results: β0 = 1.58, p > .05), again supporting a lack of publication bias. Finally, results of the cumulative meta-analyses also supported a lack of publication bias because there was no substantial drift in the cumulative meta-analyses across criteria (i.e., for each criterion, the final meta-analytic effect size was included in the confidence interval of the cumulative meta-analytic effect size as long as at least three studies had been included in the cumulative analysis).
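The regression- and rank-based publication bias tests can be sketched as follows; this is an illustrative implementation using standard statistical libraries, not the code used to produce the values reported above.

```python
import numpy as np
from scipy.stats import kendalltau
import statsmodels.api as sm

def begg_mazumdar(d, se):
    """Rank correlation between effect sizes and their standard errors;
    a nonsignificant tau is consistent with little small-study bias."""
    tau, p = kendalltau(d, se)
    return tau, p

def egger_test(d, se):
    """Egger's regression: the standardized effect (d / se) is regressed on
    precision (1 / se); the intercept's test indexes funnel asymmetry."""
    std_effect = np.asarray(d, float) / np.asarray(se, float)
    precision = 1 / np.asarray(se, float)
    model = sm.OLS(std_effect, sm.add_constant(precision)).fit()
    return model.params[0], model.pvalues[0]  # intercept and its p value
```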
The Sequential Model of Healthcare Team Training
Literature search
To test the sequential model of healthcare team training, we first ran a series of meta-analyses to obtain estimates of the intercorrelations among the Kirkpatrick criteria (i.e., reactions, learning, transfer, and results). These intercorrelations were then inserted into a metamatrix of correlations among healthcare team training and the criteria which were subsequently used to test the model. Ideally, we would have preferred to estimate meta-analytic intercorrelations among the criteria only in samples of healthcare teams; however, a search of our meta-analytic database of healthcare team effect sizes resulted in too few primary studies that reported the necessary intercorrelations. Therefore, we decided to use primary studies from the broader training literature in our meta-analysis of intercorrelations among the criteria under the assumption that the intercorrelations among training criteria would not differ substantially across contexts. In doing so, we first located the articles meta-analyzed by Alliger, Tannenbaum, Bennett, Traver, and Shotland (1997). Then, we searched articles published since 1997 (thus updating Alliger et al.) that cited one or more of the following: Kirkpatrick (1996); Kirkpatrick (1956), and/or Kirkpatrick (1967). Additionally, we searched for training evaluation studies using the same procedures described above for our training effectiveness meta-analysis.
Inclusion criteria
To be included, articles needed to report sample size and a correlation, or enough information to calculate a correlation, between at least two of Kirkpatrick’s training criteria (i.e., reactions, learning, transfer, or results). This resulted in the inclusion of 53 primary studies and 56 independent samples.
Coding of primary studies
Coding procedures mirrored those described earlier (i.e., all studies were double-coded using the same construct definitions described above). All coded information from the primary studies in the meta-analysis of intercorrelations among the Kirkpatrick criteria is included in Table 3.
Meta-analytic procedures
Hunter and Schmidt’s (2004) meta-analytic procedures were used to estimate meta-analytic intercorrelations among the Kirkpatrick criteria (corrected for unreliability using the artifact distributions described above). Intercorrelations among the Kirkpatrick criteria were then input into a meta-analytic correlation matrix along with correlations between the criteria and healthcare team training (i.e., our original meta-analyses of d values were converted to correlations). This metamatrix was then used to test the progressive sequential model (see Figure 2), using the harmonic mean of the matrix as the sample size for the path analysis (N = 1,034; Shadish, 1996; Viswesvaran & Ones, 1995).
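To illustrate the path-analytic step, the sketch below shows how standardized path coefficients for one equation of the sequential model could be recovered directly from a meta-analytic correlation matrix, with the harmonic mean of the cell sample sizes serving as N; all matrix entries and sample sizes shown are hypothetical placeholders rather than the estimates reported in this study.

```python
import numpy as np
from scipy.stats import hmean

# Hypothetical meta-analytic correlations among training (T), reactions (R),
# learning (L), transfer (X), and results (O); placeholder values only.
labels = ["T", "R", "L", "X", "O"]
R = np.array([
    [1.00, 0.25, 0.40, 0.32, 0.20],
    [0.25, 1.00, 0.30, 0.22, 0.15],
    [0.40, 0.30, 1.00, 0.45, 0.25],
    [0.32, 0.22, 0.45, 1.00, 0.35],
    [0.20, 0.15, 0.25, 0.35, 1.00],
])

# Harmonic mean of the per-cell sample sizes serves as N for the path model.
n_harmonic = hmean([1200, 950, 1100, 980, 900])  # hypothetical cell Ns

def standardized_betas(corr, labels, dv, ivs):
    """Standardized regression weights for one equation of the path model,
    computed from the correlation matrix as beta = Rxx^-1 * rxy."""
    idx = {name: i for i, name in enumerate(labels)}
    x = [idx[name] for name in ivs]
    y = idx[dv]
    rxx = corr[np.ix_(x, x)]
    rxy = corr[np.ix_(x, [y])]
    return np.linalg.solve(rxx, rxy).ravel()

# Sequential model: each criterion is regressed on training plus the
# preceding criterion (e.g., learning on training and reactions).
print(standardized_betas(R, labels, dv="L", ivs=["T", "R"]))
```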
Results
Tables 4 and 5 present the results of the meta-analysis across outcome criteria, in addition to results for each Kirkpatrick training effectiveness criterion. All results are presented as both observed d values and corrected d values (δ; corrected for unreliability) in a repeated measures metric that represents a standardized pre-training/post-training change score. Before combining effect sizes from different types of study designs, we first calculated effect sizes for each design separately. Using Zou's (2007) asymptotic confidence intervals, results indicated that effect sizes did not differ significantly across repeated measures and independent groups designs (repeated measures: k = 112, N = 14,408, δ = .60; independent groups: k = 15, N = 2,806, δ = .40; Zou's [2007] modified asymptotic 95% CI [−.07, .45]). Given that these effect sizes did not differ significantly, and given that most of the primary studies involved a repeated measures design, we followed Morris and DeShon's (2002) procedures by combining effect sizes across study designs into a single effect size in a repeated measures metric (see also Smither et al., 2005).
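The comparisons of effect sizes reported throughout the Results rely on Zou's (2007) modified asymptotic (MOVER) interval for the difference between two estimates. A minimal sketch is given below, assuming each effect size is supplied with its own lower and upper confidence limits; the limits in the commented example are placeholders, since only the point estimates are shown above.

```python
def zou_difference_ci(d1, ci1, d2, ci2):
    """Modified asymptotic confidence interval for d1 - d2 (Zou, 2007).
    ci1 and ci2 are (lower, upper) limits for d1 and d2, respectively.
    If the returned interval excludes zero, the two effect sizes are
    judged to differ significantly."""
    l1, u1 = ci1
    l2, u2 = ci2
    diff = d1 - d2
    lower = diff - ((d1 - l1) ** 2 + (u2 - d2) ** 2) ** 0.5
    upper = diff + ((u1 - d1) ** 2 + (d2 - l2) ** 2) ** 0.5
    return lower, upper

# Example for the design comparison above (individual limits are illustrative):
# zou_difference_ci(.60, (.49, .71), .40, (.20, .60))
```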
The Effectiveness of Healthcare Team Training
The first objective of our meta-analysis was to assess the extent to which team training was effective in influencing each of Kirkpatrick's (1996) training evaluation criteria (i.e., reactions, learning, transfer, and results). Upon examining each evaluation criterion separately, reactions were significantly improved by healthcare team training (δ = .53, 95% CI [.33, .73], excluding zero); however, these analyses are based on a small number of independent samples (k = 5) and should be viewed as tentative. Nonetheless, Hypothesis 1 was supported.
The effect size for learning was δ = .89 (k = 79, 95% CI [.66, 1.11]), suggesting that KSAs are acquired during healthcare team training, thereby providing full support for Hypothesis 2. Moreover, results indicate that team training causes gains in affective-based learning (δ = .80, k = 38, 95% CI [.58, 1.02]), cognitive-based learning (δ = .84, k = 28, 95% CI [.36, 1.32]), and skill-based learning (δ = .98, k = 48, 95% CI [.63, 1.33]). Modified asymptotic confidence intervals (Zou, 2007) comparing each of these learning criteria included zero (affective-based vs. cognitive-based 95% CI [−.44, .45]; cognitive-based vs. skill-based 95% CI [−.29, .61]; affective-based vs. skill-based 95% CI [−.15, .44]), suggesting that team training in healthcare does not differentially affect learning criteria.
The effect size for transfer as a criterion was δ = .67 (k = 63, 95% CI [.52, .82]), indicating that targeted on-the-job KSAs improve by more than half of a standard deviation after healthcare team training is implemented, showing full support for Hypothesis 3. Specifically, affective transfer (δ = .66, k = 15, 95% CI [.44, .89]) and skill-based transfer (δ = .77, k = 57, 95% CI [.59, .94]) were improved by team training, whereas results for cognitive transfer, although of similar magnitude as other transfer criteria, were based on a limited number of studies (δ = .59, k = 3, 95% CI [−.09, 1.27]) and require future replication. Zou (2007) modified asymptotic confidence intervals comparing affective and cognitive transfer (95% CI [−.50, .80]), cognitive and skill-based transfer (95% CI [−.90, .38]), and affective and skill-based transfer (95% CI [−.33, .15]) suggest that healthcare team training does not affect these types of transfer differently. Upon examining discrete forms of skill-based transfer, team training in healthcare appeared to substantially improve teamwork performance (δ = .48, k = 45, 95% CI [.29, .66]) and clinical task performance (δ = 1.00, k = 22, 95% CI [.74, 1.24]) and to reduce medical errors (δ = −.50, k = 8, 95% CI [−.88, −.13]). Therefore, Hypothesis 3 was supported. Of note, a comparison of skill-based transfer criteria using a modified asymptotic confidence interval (Zou, 2007) suggests that healthcare team training results in stronger transfer of clinical task performance (i.e., task-related skills) than teamwork performance (i.e., teamwork-related skills; 95% CI [.17, .71]).
Findings also supported Hypothesis 4, which suggested that healthcare team training improves results (δ = .37, k = 47, 95% CI [.21, .52]). Specifically, organizational outcomes (δ = .34, k = 31, 95% CI [.19, .49]), including safety climate (δ = .31, k = 24, 95% CI [.14, .48]), and patient outcomes (δ = .38, k = 20, 95% CI [.10, .66]) were shown to improve as a result of healthcare team training. We also report more granular effect sizes for non-ICU length of stay (δ = .18, k = 3, 95% CI [−.01, .37]), patient satisfaction (δ = .37, k = 2, 95% CI [.05, .70]), and patient mortality (δ = −.36, k = 5, 95% CI [−.45, −.26]), although we do not interpret these effect sizes because they are based on too few primary studies. Therefore, in answering the question, "To what extent is healthcare team training effective in changing reactions, learning, transfer, and results?", our meta-analysis suggests that healthcare team training is effective across all of Kirkpatrick's training criteria.
The Sequential Model of Healthcare Team Training
A secondary goal of our current meta-analysis was to test the sequential model of healthcare team training. Results from the meta-analyses among criteria are displayed in Table 5, and the final metamatrix used to estimate the model is displayed in Table 6.
The model displayed good fit, χ2(3 df): 16.18, CFI = .98, RMSEA = .07, SRMR = .03, TLI = .93, and all paths were significant except the path from reactions to learning (see Figure 4).
Figure 4. The sequential model of healthcare team training. Standardized estimates. * p < .05, N = 1,034, χ2(3 df): 16.18, CFI = .98, RMSEA = .07, SRMR = .03, TLI = .93.
Therefore, there appears to be a sequential effect among the Kirkpatrick criteria, although the cascade appears to begin with learning rather than reactions. Additionally, the direct effects of healthcare team training on each of the Kirkpatrick criteria were moderate and significant, suggesting that team training influences the Kirkpatrick criteria both directly and indirectly through a sequential effect (except for the results criterion, which appears to be primarily influenced by the indirect cascading sequential effect). To formally test whether the relationship between healthcare team training and results was mediated by the reactions/learning/transfer pathway, we tested the significance of several specific indirect effects in the sequential model as well as the total indirect effect using a Monte Carlo confidence interval (Preacher & Selig, 2012). For each of these tests, we first examined the parameter covariance matrix; all covariances were negligible (the largest was .0004) and were subsequently treated as if they were zero (following Preacher & Selig, 2012). The specific indirect effect of training → reactions → learning → transfer → results was not significant (i.e., the Monte Carlo 95% confidence interval included zero: −.004, .001), suggesting the relationship between training and results is not mediated by the reactions/learning/transfer pathway. The specific indirect effect of training → learning → transfer → results was significant (95% CI: .01, .03), and the specific indirect effect of training → transfer → results was significant (95% CI: .06, .11), suggesting that the relationship between training and results is partially mediated by learning and transfer (complete mediation was not supported because the direct effect of training on results was significantly different from zero). Finally, we tested the total indirect effect (i.e., the cumulative effect of the aforementioned three specific indirect effects) between training and results, which was significant (95% CI: .08, .14). Therefore, it appears that the mediational pathway implied in the sequential model of healthcare team training is supported, with the caveat that the cascade begins with learning rather than reactions. These results fail to support Hypothesis 5, which posited that reactions would positively predict learning, although full support was found for Hypotheses 6 and 7, which predicted that learning would positively predict transfer, which would then predict results.
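To clarify the Monte Carlo procedure (Preacher & Selig, 2012) used for these indirect effects, the sketch below simulates the product of path coefficients from their sampling distributions. The path estimates and standard errors shown in the commented call are placeholders, not the values from Figure 4, and parameter covariances are set to zero, mirroring the analysis reported above.

```python
import numpy as np

def monte_carlo_indirect_ci(paths, ses, n_draws=20000, alpha=0.05, seed=1):
    """Monte Carlo confidence interval for a product-of-coefficients
    indirect effect (Preacher & Selig, 2012). Each path coefficient is
    drawn from a normal distribution defined by its estimate and standard
    error; parameter covariances are treated as zero."""
    rng = np.random.default_rng(seed)
    draws = np.column_stack(
        [rng.normal(b, se, n_draws) for b, se in zip(paths, ses)]
    )
    indirect = draws.prod(axis=1)  # product of sampled path coefficients
    lo, hi = np.percentile(indirect, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Placeholder example for a chain such as training -> learning -> transfer -> results:
# monte_carlo_indirect_ci(paths=[.20, .45, .30], ses=[.04, .05, .04])
```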
As a supplementary contribution to the paper, we tested the sequential model of healthcare team training using estimates from the traditional training literature to see whether the sequential model is unique to healthcare team training or whether it is also appropriate for the general training literature. To do this, we used the metamatrix displayed in Table 6, except we replaced the training-criteria correlations with estimates from Arthur et al.'s (2003) meta-analysis of training effectiveness (we converted the reported d values to correlations). Results from this analysis are displayed in Figure 5.
Figure 5. The sequential model of training. Standardized estimates. Based on Arthur et al.’s (2003) meta-analytic relationships between training and Kirkpatrick’s training evaluation criteria. * p < .05, N = 2,021, χ2(3 df): 31.43, CFI = .98, RMSEA = .07, SRMR = .03, TLI = .92.
When applied to the generic training literature, the sequential model exhibits good fit, χ2(3 df): 31.43, CFI = .98, RMSEA = .07, SRMR = .03, TLI = .92, and all paths are significant (although the path from reactions to learning is weak, β = .05, similar to the weak path coefficient found in our test of the sequential model of healthcare team training). To formally test the mediational pathway implied by Figure 5, we estimated a Monte Carlo confidence interval for the specific indirect effects and the total indirect effect. All confidence intervals excluded zero (95% CI for training→reactions→learning→transfer→results: .0002, .002; 95% CI for training→learning→transfer→results: .01, .02; 95% CI for training→transfer→results: .06, .09; 95% CI for total indirect effect: .07, .11), supporting partial mediation of the relationship between training and results via reactions, learning, and transfer (complete mediation was not supported because the direct effect from training to results was significant).
Moderators of Healthcare Team Training Effectiveness
The third goal of the meta-analysis was to identify under what conditions healthcare team training is most effective. Moderator analyses were conducted on (a) the entire database of effect sizes, collapsing across training evaluation criteria (i.e., multiple effect sizes from a single sample were averaged to create a single effect size per independent sample unless the necessary information to calculate a composite was reported, in which case, a linear composite was calculated; Nunnally, 1978) and (b) each Kirkpatrick training evaluation criterion, separately. Because only five primary studies in our meta-analytic database examined reactions to healthcare team training and because the reactions effect size did not contain significant heterogeneity, we did not analyze moderators for reactions as a criterion. Moderator results are presented in Table 7.
Hypothesis 8 predicted that training programs using multiple training strategies are more effective than single-strategy training programs. Although the results demonstrate that team training programs implementing multiple strategies (i.e., information, demonstration, and/or practice) are effective at fostering overall team training effectiveness (δ = .63, k = 112, 95% CI [.51, .75]), learning (δ = .89, k = 67, 95% CI [.72, 1.0]), transfer (δ = .70, k = 44, 95% CI [.50, .90]), and results (δ = .48, k = 32, 95% CI [.27, .68]), these effect sizes did not differ significantly from the single-strategy effect sizes when using a modified asymptotic confidence interval (overall effectiveness 95% CI [−.45, .45]; learning 95% CI [−1.69, .56]; transfer 95% CI [−.44, .07]; results 95% CI [−.52, .14]). Therefore, Hypothesis 8 was not supported.
Hypothesis 9, which suggested that team training programs incorporating feedback are more effective than programs that do not use feedback, was not supported. Interestingly, results showed larger effect sizes for training programs that do not involve feedback for the overall effectiveness criterion (feedback: δ = .62, k = 66, 95% CI [.36, .88]; no feedback: δ = 1.26, k = 14, 95% CI [.84, 1.68]), and a modified asymptotic confidence interval (Zou, 2007) indicated a significant difference between these effect sizes (95% CI [−.85, −.12]). Similar results were found for learning (feedback: δ = .98, k = 43, 95% CI [.62, 1.35]; no feedback: δ = 1.28, k = 12, 95% CI [.81, 1.76]), transfer (feedback: δ = .45, k = 23, 95% CI [.13, .77]; no feedback: δ = .69, k = 3, 95% CI [.13, 1.24]), and results (feedback: δ = .45, k = 17, 95% CI [.21, .69]; no feedback: δ = 2.24, k = 2, 95% CI [.26, 4.22]), although the modified asymptotic confidence intervals comparing feedback versus no feedback included zero (learning: 95% CI [−.56, .19]; transfer: 95% CI [−.78, .38]; results 95% CI [−2.08, .19]) and some of these effect sizes (i.e., transfer and results) are based on few primary studies and should be interpreted with caution.
Hypothesis 10, which stated that training programs with high physical fidelity are more effective than training programs that are lower in physical fidelity, was also not supported. Contrary to expectations, high physical fidelity training exhibited lower effect sizes than low physical fidelity training for overall effectiveness (high fidelity: δ = .80, k = 30, 95% CI [.59, 1.01]; low fidelity: δ = 1.01, k = 11, 95% CI [−.09, 2.10]), learning (high fidelity: δ = .66, k = 10, 95% CI [.23, 1.08]; low fidelity: δ = 2.76, k = 4, 95% CI [−.53, 6.06]), and transfer (high fidelity: δ = .54, k = 13, 95% CI [.27, .80]; low fidelity: δ = .71, k = 8, 95% CI [.34, 1.08]). However, modified asymptotic confidence intervals indicate that the high/low fidelity conditions were not significantly different (overall effectiveness 95% CI [−.43, .33]; learning 95% CI [−1.67, .19], transfer 95% CI [−.43, .03]). Fidelity as a moderator of the effect of team training on results is also presented in Table 7, although we do not interpret this finding because it is based on too few primary studies.
Hypothesis 11a stated that healthcare team training is more effective when delivered to teams that are not interprofessional than teams that are interprofessional. This hypothesis was not supported for overall effectiveness (interprofessional: δ = .57, k = 102, 95% CI [.46, .68]; noninterprofessional: δ = .64, k = 26, 95% CI [.36, .92]), learning (interprofessional: δ = .73, k = 50, 95% CI [.55, .92]; noninterprofessional: δ = .95, k = 16, 95% CI [.58, 1.32]), transfer (interprofessional: δ = .70, k = 46, 95% CI [.56, .85]; noninterprofessional: δ = .27, k = 12, 95% CI [−.13, .67]), or results (interprofessional: δ = .37, k = 37, 95% CI [.18, .55]; noninterprofessional: δ = .33, k = 6, 95% CI [−.05, .72]) because modified asymptotic confidence intervals included zero (overall effectiveness 95% CI [−.31, .20]; learning 95% CI [−.47, .16], transfer 95% CI [−.02, .85]; results 95% CI [−.34, .41]). Similarly, although Hypothesis 11b stated that team training is more effective for teams that are not interdisciplinary than teams that are interdisciplinary, this was not supported. Effect sizes did not differ significantly across interdisciplinary/noninterdisciplinary teams for overall effectiveness (interdisciplinary: δ = .55, k = 38, 95% CI [.33, .77]; noninterdisciplinary: δ = .54, k = 72, 95% CI [.42, .67]), learning (interdisciplinary: δ = .57, k = 22, 95% CI [.24, .90]; noninterdisciplinary: δ = .89, k = 31, 95% CI [.69, 1.09]), transfer (interdisciplinary: δ = .55, k = 15, 95% CI [.24, .85]; noninterdisciplinary: δ = .77, k = 35, 95% CI [.58, .95]), or results (interdisciplinary: δ = .28, k = 16, 95% CI [.14, .43]; noninterdisciplinary: δ = .37, k = 27, 95% CI [.18, .56]) because modified asymptotic confidence intervals included zero (overall effectiveness 95% CI [−.24, .23]; learning 95% CI [−.65, .05], transfer 95% CI [−.46, .15]; results 95% CI [−.50, .26]).
Hypothesis 12, which predicted that team training is more effective for students than clinicians, was not supported. Results suggest that team training in healthcare is equally beneficial for students and clinicians in terms of overall effectiveness (clinicians: δ = .61, k = 112, 95% CI [.49, .72]; students: δ = .68, k = 25, 95% CI [.37, .99]), learning (clinicians: δ = .91, k = 50, 95% CI [.72, 1.11]; students: δ = .70, k = 23, 95% CI [.35, 1.04]), and transfer (clinicians: δ = .68, k = 58, 95% CI [.52, .83]; students: δ = .59, k = 4, 95% CI [−.07, 1.24]). Modified asymptotic confidence intervals also failed to reveal significant differences between students and clinicians (overall effectiveness 95% CI [−.34, .22]; learning 95% CI [−.16, .51], transfer 95% CI [−.47, .73]). We do not draw comparisons for the results criterion because students are not employees of an organization in which results would be assessed.
Hypothesis 13 predicted that low acuity units would exhibit greater team training effectiveness in comparison with high acuity units. However, Hypothesis 13 was not supported. Results indicate that trainees in low and high acuity units exhibit similar overall training effectiveness (high acuity: δ = .49, k = 47, 95% CI [.30, .68]; low acuity: δ = .51, k = 9, 95% CI [.22, .80]), and a modified asymptotic confidence interval included zero (95% CI [−.30, .25]). Further, too few primary studies were available to reliably compare unit acuity within each criterion, although we present these results, where available, in Table 7.
Discussion
Summary of Results
Our meta-analysis of healthcare team training effectiveness contributes to the literature in four ways. First, our meta-analysis indicates that healthcare team training is effective. Specifically, healthcare team training appears to (a) surpass employees' pre-training utility and enjoyment expectations, (b) induce learning, (c) transfer learned material to the job, and (d) lead to improved organizational and patient results. Although team training may receive less attention from healthcare managers because it is perceived to be "soft skills" training, our results suggest that team training improves objective criteria such as patient mortality and, surprisingly, produces stronger transfer to task performance than to teamwork performance. Second, our results support the sequential model of healthcare team training, a model that has been implied but never empirically tested in prior work (Alliger et al., 1997), by showing a sequential effect of healthcare team training on learning, transfer, and results. This model helps to explain the pathway through which healthcare team training affects bottom-line results; further, it clarifies the role of reactions as a training evaluation criterion (i.e., our meta-analysis did not support the utility of reactions in the sequential model of healthcare team training, reinforcing prior suspicions that reactions should not be relied on as a sole indicator of training effectiveness because they have minimal relationships with other criteria; Holton, 1996; Hook & Bunce, 2001). Interestingly, results from the sequential model in the generic training literature (labeled the "sequential model of training") support a slightly different progressive effect for training across all contexts. Specifically, whereas reactions appear to be excluded from the cascade of training criteria that subsequently influence results in the healthcare team training model, the sequential model of training suggests that the progressive effect does indeed begin with reactions. Although the effect of reactions on learning (β = .05, p < .05) was relatively weak, the role of reactions in the mediational chain was significant, suggesting that reactions do have downstream effects on results (although learning and transfer appear to play a larger role in influencing results). Third, our findings illustrate that healthcare team training is effective under a variety of conditions regardless of the training strategy, team composition (interprofessional/interdisciplinary), sample type (students/clinicians), and patient acuity of the trainee's unit, suggesting that practitioners should not restrict the implementation of team training to specific clinical contexts. Fourth, our results exhibited two counterintuitive findings: training that involved feedback exhibited weaker effects than training that did not, and the fidelity of the training program did not influence training effectiveness.
The Uniqueness of Team Training in Healthcare Settings
We have previously suggested that team training in healthcare requires unique design considerations and, owing to contextual factors, may function differently than team training outside of the healthcare setting. Before moving forward to discuss the implications of our findings, we revisit the idea that healthcare team training is unique in light of those findings.
Our results support some commonalities between team training in healthcare and team training in other industries. Namely, we found that team training is effective (i.e., improves reactions, learning, transfer, and results) in the healthcare setting, which is similar to prior findings that training is effective in other settings (Arthur et al., 2003; Salas, DiazGranados, et al., 2008). However, although our results show that team training in healthcare is effective in improving each of these criteria, it is interesting to note that whereas training of all forms across all industries improves reactions, learning, transfer, and results equally (i.e., uncorrected meta-analytic d values are .60, .63, .62, and .62 for reactions, learning, transfer, and results, respectively; Arthur et al., 2003, which did not report corrected d values), our findings suggest that healthcare team training is most effective at improving learning (uncorrected meta-analytic d = .79), followed by transfer (uncorrected meta-analytic d = .62), reactions (uncorrected meta-analytic d = .48), and results (uncorrected meta-analytic d = .33). Therefore, it appears that in answering our first research question (Is team training in healthcare effective?), healthcare team training outperforms traditional training when effectiveness is defined as learning, and underperforms traditional training when effectiveness is defined as reactions and results.
Our results also suggested a second way in which healthcare team training is unique compared to team training outside of this industry when answering our research question, "How does healthcare team training influence bottom-line organizational outcomes and patient outcomes?" Specifically, we found that the sequential paths among Kirkpatrick's evaluation criteria differed between our sequential model of healthcare team training (see Figure 4) and our sequential model of training (see Figure 5), with reactions having no impact on the causal chain in the former, as compared to a modest impact of reactions in the latter. We discuss potential reasoning behind this finding in the theoretical implications section, below.
Additionally, although prior generic training literature has found support for a series of moderators of training effectiveness, we generally found a lack of support for these variables as moderators of the effectiveness of healthcare team training, including trainee characteristics (i.e., sample type [clinicians vs. students], membership [interprofessional, interdisciplinary]), training design (i.e., training strategy, simulation fidelity), and work environment (i.e., unit acuity) variables (Baldwin & Ford, 1988). The one exception was the finding that a training design variable, feedback, negatively impacted training effectiveness, although this is still in contrast to traditional training findings, which we discuss in more detail below. Thus, in answering our research question, "Under what conditions is healthcare team training most effective?", it appears that our moderator results are in contrast to both theory (Baldwin & Ford, 1988) and practical guidelines for team training in other settings (e.g., Salas & Cannon-Bowers, 2000) because our results suggest that healthcare team training is generally effective regardless of the context. We expand more on the implications of these findings in the section below.
Theoretical Implications
Ultimately, we sought to answer three questions in the current study: (1) Is team training in healthcare effective? (2) Under what conditions is healthcare team training most effective? and (3) How does healthcare team training influence bottom-line organizational outcomes and patient outcomes? We elaborate on our theoretical contributions by answering each of these questions below. In doing so, we show how our results suggest potential revisions to long-standing theories of training and learning.
Question 1: Is team training in healthcare effective?
As an answer to the first question posed above, our study shows team training is effective within healthcare and produces significant improvements across training evaluation criteria. Specifically, findings of our meta-analysis are consistent with previous examinations of both training and teams (Ellis, Bell, Ployhart, Hollenbeck, & Ilgen, 2005) in that training is effective at improving reactions, learning, and transfer (Arthur et al., 2003), and that more distal outcomes (i.e., results) display weaker effects (De Wit, Greer, & Jehn, 2012; DeChurch & Mesmer-Magnus, 2010). Thus, prior findings from the generic training literature appear to extend to team training in healthcare contexts, which was previously unexplored (Salas, DiazGranados, et al., 2008).
Question 2: Under what conditions is healthcare team training most effective?
Upon answering this question, moderator results revealed that team training is generally effective regardless of the training design and implementation, trainee characteristics, and characteristics of the work environment. The only exception to this was the finding that feedback appeared to decrease the effectiveness of healthcare team training. In reviewing the primary studies included in this meta-analysis, feedback was often delivered by a physician or senior staff member to a more junior staff member or student (Hicks, Kiss, Bandiera, & Denny, 2012; Nielsen, Randall, & Christensen, 2010). The high power distance in these scenarios may have increased anxiety during the training program, which may have inhibited learning and subsequent transfer (Lyons et al., 2015; Salas, Klein, et al., 2008). These results may also reflect Kluger and DeNisi’s findings that feedback can be a “double-edged sword” (Kluger & DeNisi, 1996; Kluger & DeNisi, 1998) wherein feedback can decrease performance especially when it draws attention to the self instead of the task. Because feedback was delivered as part of a team training program that involves a substantial proportion of person-related content (e.g., feedback about one’s communication skills) rather than task-related content, feedback in this context may be substantially more anxiety-provoking than when it is delivered in task training. Lastly, although there was insufficient detail in the current set of studies to examine this as a moderator, it is possible that the feedback provided to trainees was unclear; in a meta-analysis of error management training, Keith and Frese (2008) found that low-clarity task feedback was associated with significantly worse training effectiveness in comparison to high-clarity feedback. Future research should continue to examine whether feedback has unintentional negative consequences in team training contexts outside of healthcare and examine the pathways through which feedback reduces team training effectiveness.
With the exception of feedback as a moderator of training effectiveness, all other moderators investigated in the current study resulted in nonsignificant changes in healthcare team training effectiveness. Because our investigation of moderators in the current paper was based on Baldwin and Ford's (1988) model of training transfer, our results suggest this model may fail to extend to healthcare contexts, and simultaneously highlight advances that can be made in the current theoretical literature on learning and transfer. For instance, contrary to existing theories of learning that support the use of multiple learning strategies in concert (Franzoni & Assar, 2009; Zapp, 2001), our results suggested that training programs involving a single training strategy (i.e., information, demonstration, or practice) were just as effective as training programs that used multiple strategies. One explanation for this surprising finding may be that any effectiveness associated with selected training strategies could be confounded with the effectiveness of leveraging a training needs analysis; specifically, it may be the case that the single-strategy training programs were selected based on a thorough training needs analysis, whereas the multiple-strategy training programs may have been based on a haphazard "try everything and see what works" approach instead of informing training design using a thorough needs analysis. If this were the case, it comes as no surprise that a single information-only training strategy selected through a thorough training needs analysis, for example, would be as effective as a combination of robust training strategies selected without a thorough needs analysis (Goldstein, Braverman, & Goldstein, 1991). Therefore, future research is needed to tease apart whether this finding truly contradicts traditional learning theory or whether it reflects a confounded training design element, namely the needs analysis.
Similarly, traditional healthcare training theory highlights the importance of selecting a high fidelity simulator under the notion that high fidelity simulators would be more capable of mimicking human anatomy and physiology and are therefore the most conducive to transfer (Kassirer, Kuipers, & Gorry, 1982). However, our results suggest that the selection of a simulator with high physical fidelity does not influence team training effectiveness. Although high fidelity simulators may be critical to training effectiveness for technical skill training, high physical fidelity may not be necessary to create a realistic teamwork environment. Therefore, future healthcare training literature may benefit from delineating the conditions under which high physical fidelity is necessary (e.g., technical skills training) and specifying other forms of fidelity (e.g., psychological fidelity) that may be more critical to team training.
Nonsignificant moderator analyses may also highlight theoretical gaps involving learner differences. Interestingly, we found that trained clinicians and students experienced similar learning and transfer gains from healthcare team training. Therefore, although clinicians may benefit from team training by leveraging their previous experiences with teamwork during training, and students' lack of experience and supportive learning culture may position them for substantial training gains, healthcare team training programs appear to be similarly effective in both groups. Future work would benefit from a better understanding of whether and how the learning processes in these groups differ, now that results appear to support the use of healthcare team training in both groups.
In this same vein, results of our meta-analysis suggested that homogeneous teams did not outperform mixed composition teams that were diverse in discipline or profession. This finding is salient to the debate on the impact of diversity in teams, wherein some scholars have found a positive benefit of diversity (e.g., Horwitz & Horwitz, 2007); yet, others have suggested that diversity may result in negative team outcomes such as conflict (e.g., Pelled, 1996). Although our results are confined to the healthcare context, they contribute to this debate by supporting the idea that diversity appears to neither hamper nor benefit team training effectiveness in healthcare. However, future work could build on these findings to examine whether these results extend to other types of diversity, such as demographic diversity.
Lastly, workload has been a key consideration in the training transfer literature, which holds that support is essential to reduce workload and permit opportunities to apply trained KSAs for transfer (Burke & Saks, 2009; Cromwell & Kolb, 2004). However, our findings imply that workload, operationalized here as the acuity of a unit's patients, does not significantly impact healthcare team training effectiveness. This finding may bear important theoretical implications for operationalizing workload at higher levels of analysis, rather than simply at the individual worker level, which has posed challenges to team and organizational researchers (Funke, Knott, Salas, Pavlas, & Strang, 2012). Workload is a dynamic construct that is influenced by individual differences, resources, extent of demand, and time; accordingly, resource allocation theory focuses on the ability to allocate individual cognitive or physical resources to meet or satisfy external task demands (Hancock & Warm, 1989). In fact, Carayon and Gurses (2008) indicate that workload in healthcare should be examined at multiple levels within the clinical care environment, as workload manifests differently at the job level, patient level, unit level, and situation level. Importantly, each level of workload introduces an understanding of clinicians' needs to cope with demands in the workplace. Therefore, although our findings highlight that unit-level workload (as defined by a unit's patient acuity) does not impact healthcare team training effectiveness and its ability to transfer, more variation may exist at lower levels (Carayon & Gurses, 2008), such as the individual patient, clinician, or healthcare team level (Bedwell, Salas, Funke, & Knott, 2014).
Question 3: How does healthcare team training influence bottom-line organizational outcomes and patient outcomes?
As an answer to this question, our results support prior suggestions that training affects results via learning and transfer both within and outside of the healthcare context (i.e., the sequential model of team training, Figure 4, and the sequential model of training, Figure 5; Alliger et al., 1997; Kirkpatrick, 1996). Our paper is the first to show evidence of a sequential and progressive effect among Kirkpatrick criteria via path analysis that explicates how training affects bottom-line results. Interestingly, results demonstrated two slightly different theoretical pathways within the healthcare context and across all contexts. Specifically, within healthcare contexts, findings support prior literature that questions the utility of reactions in training evaluations and in Kirkpatrick’s framework (Holton, 1996; Hook & Bunce, 2001) by showing that reactions do not have downstream effects on more distal effectiveness criteria. Moreover, these findings emphasize the role of learning in initiating the sequential effect to influence transfer and results, a finding that has important practical implications for healthcare training, as we discuss below. This finding is consistent with Huang et al.’s (2015) examination of learning as a mediator between trainee inputs and training transfer, and prior literature supporting learning as a significant predictor of transfer (Blume et al., 2010). However, across all contexts, the sequential model of training demonstrated that reactions do have (weak) downstream effects on results via learning and transfer. This raises an interesting, new theoretical question: what is it about the healthcare team training context that reduces the importance of reactions as a criterion? Future work would benefit from addressing this question, although we suspect that there may be a ceiling effect of reactions in healthcare (i.e., healthcare workers may uniformly feel negative toward participating in team training opportunities, as they already spend a significant portion of their time engaging in educational activities at the expense of providing patient care [Block et al., 2013], restricting any relationship between training reactions and other criteria).
Although prior literature has theorized several pathways through which training influences results (vertical transfer; Kozlowski & Salas, 1997), the current paper is the first to illustrate that organizational results and patient outcomes appear to be influenced by the learning-to-transfer pathway. In other words, our results suggest that learning and transfer are key mechanisms through which healthcare organizations see improvements in their bottom line. Because results failed to support the role of reactions in initiating the sequential model in healthcare team training and showed only a weak effect of reactions in generic training contexts, Kirkpatrick's original framework of training evaluation criteria may be amended to emphasize the key roles of learning and transfer in influencing an organization's bottom line.
Practical Implications
Our work has several practical implications, the most obvious being that team training is a valuable human capital strategy in healthcare that can affect organizational and patient outcomes. Recent changes in Medicare policy require that perceptions of quality care, including teamwork and patient satisfaction, be assessed in order for hospitals to receive payment for the provision of Medicare services (Medicare, 2014). Given these policy changes and our current findings that support the effectiveness of healthcare team training across multiple criteria, hospital management may wish to implement team training in their hospitals to increase perceptions of teamwork and patient satisfaction, which could lead to subsequent funding from national incentives in the hospital value-based purchasing program portion of the Affordable Care Act.
Our findings further inform practice in the selection of trainees and training design. First, our results imply that team training can be initiated at any stage of a clinician's career trajectory; that is, team training is effective for both students and practicing clinicians. As noted earlier, team training has been widely implemented within medical school curricula (Beach, 2013; Kirch, 2007), and 75% of medical school graduates have received team training. However, our results suggest that team training opportunities should be embedded within every healthcare curriculum (e.g., nursing school, medical school) and within continuing education programs for practicing clinicians. Our findings also suggest that training design is important for healthcare practitioners to consider, but not in the ways one might think. Specifically, our findings suggest that expensive, high-fidelity team training is no more effective than inexpensive, low-fidelity team training. Thus, we suggest that a simulator be selected on the basis of a needs analysis rather than solely on its physical fidelity. For example, if a needs analysis reveals that communication within the team is a primary training need, a computerized manikin capable of fully replicating eye movements, heartbeat, blood pressure, and breathing rate may not be necessary to facilitate effective team training. We suggest that instead of emphasizing physical fidelity, practice should incorporate tools that are high in psychological fidelity. That is, scenarios should replicate key psychological features (e.g., time stress, high stakes) of the work environment and task to which the training is intended to transfer (Kozlowski & DeShon, 2004) rather than focusing on an exact replication of the physical work environment.
In addition, results reveal that team training programs involving some task training content are no more or less effective than "pure" teamwork-only team training programs. That is, although previous literature has encouraged a focus on teamwork KSAs (e.g., Stevens & Campion, 1994), inclusion of clinical or task-relevant training content within a team training program does not reduce its effectiveness, suggesting that healthcare organizations may combine trainings such that trainee time is well spent and meets more than one organizational imperative. Furthermore, our results suggest that practitioners should exercise caution when using feedback as part of a healthcare team training program, given our findings that such programs are less effective than those that do not include feedback. Finally, the results of the current meta-analysis also point to the importance of assessing learning before and after training, given the critical role of learning as a predictor of organizational and patient outcomes in the sequential model of healthcare team training. That is, although more than 90% of organizations assess reactions to training, learning appears to have greater subsequent implications for organizational outcomes and should be assessed in more training programs as an early indicator of training program effectiveness.
Limitations and Future Research
The most notable limitation involves the small number of primary studies that were included in our estimate of healthcare team training's effect on reactions. A large number of effect sizes for training reactions were not included in this meta-analysis because they were evaluated using single-group, post-training-only designs. Therefore, we are limited in what information we can provide on training reactions and the extent to which training strategy, feedback, simulator fidelity, team composition, sample type, and unit acuity affect trainee reactions. Future research should build on recent efforts to revitalize interest in trainee reactions (Harman, Ellington, Surface, & Thompson, 2015; Sitzmann et al., 2008) and use the current work as a signpost for future studies on trainee reactions. This limitation of a small k also extends to several of our more specific empirical indicators (i.e., cognitive transfer, non-ICU length of stay, patient satisfaction, and patient mortality); we caution the reader when interpreting small-k results because the effect size values may be less stable than findings based on a larger k. Thus, we encourage future studies to examine these outcomes within the healthcare team training context. Similarly, we note that several of the moderator analyses involved unbalanced subgroups (i.e., a comparison of two effect sizes in which one effect size is based on substantially more primary studies). Although some unbalanced subgroups may arise from practitioners who have followed best practices from the training literature (e.g., leveraging a multiple-strategy approach; Salas et al., 2012), it is important to interpret results from unbalanced subgroups with caution. For instance, interprofessional teams as compared with homogeneous teams exhibited no significant differences in the transfer criteria; however, this may result from comparisons made in unbalanced subgroups (i.e., a k of 46 [interprofessional teams] in comparison with a k of 12 [homogeneous teams]). Uneven subgroups may reduce our power to detect significant differences because of the small k in one of the subgroups.
As an additional limitation involving our moderator analyses, we were unable to investigate several moderators of training effectiveness that are presented in the Baldwin and Ford (1988) transfer of training model because these factors are not commonly reported in the journals that most frequently publish team training studies (e.g., medical journals). As such, we were unable to account for specific training features such as the framing of training (i.e., how supervisors framed training to trainees), design features such as whether learning objectives were specified, the number of examples provided, or which platforms were used to provide training (e.g., PowerPoint). Moreover, trainee characteristics were rarely reported, leaving the contribution of individual-level trainee inputs to healthcare team training effectiveness largely unexplored (e.g., trainee motivation, trainee self-efficacy, goal orientation; Christoph, Schoenfeld, & Tansky, 1998; Colquitt et al., 2000). Further, although we acknowledge the work environment's critical role in training transfer (Baldwin & Ford, 1988; Blume et al., 2010; Ford & Weissbein, 1997; Tracey, Tannenbaum, & Kavanagh, 1995), we found insufficient instances of these factors (e.g., supervisory support, opportunity to use trained KSAs, policy and procedural change, systems of reinforcement) reported in the literature to examine their effect in the current meta-analysis. Therefore, future research would benefit from using these unexplored areas to guide the next generation of scholarly work in this area.
Further, we note that future research should attempt to isolate the effects of team training that targets true team competencies (e.g., collaborative problem solving) from training that focuses on individual-level team competencies (e.g., assertive communication). By doing so, researchers can better explore and understand whether the level of team training content influences team training outcomes (e.g., it may be the case that individual competencies are easier to train and therefore lead to stronger outcomes, whereas team competencies require intrateam coordination during training that may impede learning and transfer). In the current study, a majority of the team training programs trained individual-level competencies; thus, we did not examine this distinction in the current meta-analysis.
One final limitation involves the approach used to analyze the sequential model of healthcare team training (see Figure 4). As previously stated, there were too few primary studies reporting intercorrelations among Kirkpatrick’s (1956, 1996) criteria (i.e., reactions, learning, transfer, results) within a healthcare sample to calculate meta-analytic estimates of the intercorrelations using only healthcare samples. As such, to test the mediational pathway among the criteria, it was necessary to include training studies that used samples outside of the healthcare industry in these meta-analytic intercorrelations. This is in contrast to the approach used to examine the effectiveness of healthcare team training (Hypotheses 1-4) and the moderators of healthcare team training effectiveness (Hypotheses 8-13), which relied solely on studies involving team training with a healthcare sample. Although we have no reason to theorize that meta-analytic intercorrelations among the criteria would differ across healthcare samples and other organizational samples, we note that future research should test the model using intercorrelations among the criteria drawn from healthcare samples, exclusively, to confirm our results.
Conclusion
The current meta-analysis demonstrates the effectiveness of team training in the healthcare industry. In particular, our results suggest that team training is associated with positive reactions, acquired learning, positive transfer, and enhanced results (i.e., organizational outcomes and patient outcomes), suggesting that team training is an effective team development intervention with the potential to improve patient health. Furthermore, our results suggest a modest sequential effect among Kirkpatrick criteria such that participation in healthcare team training promotes learning, which in turn induces use of the training on the job, which ultimately improves results, a pathway that appears to exist in both healthcare and nonhealthcare industries. The current meta-analysis also provides encouraging evidence that team training in healthcare is effective across various compositions of trained teams and training strategies (with the exception of feedback included in training, which hindered healthcare team training effectiveness). We hope that these results can inform theory and practice, and we conclude by encouraging healthcare practitioners to implement team training.
Appendix: Equations for Morris and DeShon (2002) Procedures
Following Morris and DeShon (2002), when a standardized mean difference (i.e., d value) in the repeated measures metric was not reported, a d value in repeated measures studies was calculated as follows:

d_{RM} = \frac{M_{Post} - M_{Pre}}{SD_{Change}} \qquad (1)
where MPost and MPre refer to the mean post- and pre-training scores, respectively, and SDChange refers to the standard deviation of the change scores from pre-training to post-training. The standard deviation of the change scores was calculated as:

SD_{Change} = \sqrt{SD_{Pre}^{2} + SD_{Post}^{2} - 2\, r_{Pre.Post}\, SD_{Pre}\, SD_{Post}} \qquad (2)
where SDPre and SDPost refer to the standard deviation of the pre-training and post-training scores, respectively, and rPre.Post is the correlation between pre- and post-training scores. Because most studies did not directly report rPre.Post, the inverse sampling variance-weighted average rPre.Post across repeated measures studies was substituted into the equation (rPre.Post = .47; Morris & DeShon, 2002).
Standardized mean differences in independent groups studies were calculated as follows:

d_{IG} = \frac{M_{E} - M_{C}}{SD_{pooled}} \qquad (3)
where ME and MC refer to the mean of experimental and control group scores, respectively, and SDpooled refers to the pooled standard deviation across experimental and control groups. The pooled standard deviation was estimated with Equation 4.

SD_{pooled} = \sqrt{\frac{(n_{E} - 1)SD_{E}^{2} + (n_{C} - 1)SD_{C}^{2}}{n_{E} + n_{C} - 2}} \qquad (4)
In Equation 4, nE and nC refer to the experimental and control group sample sizes, respectively. SDE and SDC refer to the standard deviations of the experimental and control groups, respectively. Independent groups effect sizes were transformed into the repeated measures metric via Equation 5.

d_{RM} = \frac{d_{IG}}{\sqrt{2(1 - r_{Pre.Post})}} \qquad (5)
Effect sizes from independent groups with repeated measures studies were calculated in the repeated measures metric as follows:

d_{IGRM} = d_{RM,E} - d_{RM,C} \qquad (6)
where dRM,E refers to the repeated measures d value (see Equation 1) calculated within the experimental group and dRM,C refers to the repeated measures d value calculated within the control group.
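A compact sketch of these conversions appears below. It assumes the equations as reconstructed above and uses the weighted mean pre-post correlation of .47 as the default whenever rPre.Post is unavailable; function names are illustrative.

```python
import numpy as np

R_PRE_POST = 0.47  # weighted mean pre-post correlation (Morris & DeShon, 2002)

def d_repeated_measures(m_pre, m_post, sd_pre, sd_post, r=R_PRE_POST):
    """Equations 1-2: standardized pre/post change using the SD of change scores."""
    sd_change = np.sqrt(sd_pre ** 2 + sd_post ** 2 - 2 * r * sd_pre * sd_post)
    return (m_post - m_pre) / sd_change

def d_independent_groups(m_e, m_c, sd_e, sd_c, n_e, n_c):
    """Equations 3-4: standardized mean difference with pooled SD."""
    sd_pooled = np.sqrt(((n_e - 1) * sd_e ** 2 + (n_c - 1) * sd_c ** 2) / (n_e + n_c - 2))
    return (m_e - m_c) / sd_pooled

def ig_to_rm(d_ig, r=R_PRE_POST):
    """Equation 5: transform an independent-groups d into the repeated measures metric."""
    return d_ig / np.sqrt(2 * (1 - r))

def d_igrm(d_rm_experimental, d_rm_control):
    """Equation 6: independent groups with repeated measures design."""
    return d_rm_experimental - d_rm_control
```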
Footnotes
1 As it is common for teamwork and taskwork to be trained in the same training program, we examined whether the percentage of teamwork content influenced our results by estimating a meta-analytic regression wherein the primary study effect size was predicted from the percentage of teamwork content in the training program. Results of this regression indicate that teamwork content did not significantly predict the strength of effect sizes for any of the Kirkpatrick evaluation criteria, with the exception that the results criterion improved when there was more teamwork content in the training program (β = −.42, p > .05 for reactions, β = .06, p > .05 for learning, β = .22, p > .05 for transfer, and β = .38, p < .05 for results). This helps to reduce concerns that results may be inflated because some of the training programs included task-related training content (although it is worth noting that the mean amount of task-related training content across studies was only 10%).
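The meta-analytic regression described in this footnote can be approximated with a weighted least squares model in which each study is weighted by the inverse of its sampling variance. The sketch below is illustrative only; the arguments are placeholders rather than our coded data.

```python
import numpy as np
import statsmodels.api as sm

def meta_regression(effect_sizes, variances, pct_teamwork_content):
    """Weighted least squares meta-regression: predict study effect sizes
    from the percentage of teamwork content, weighting each study by the
    inverse of its sampling variance."""
    X = sm.add_constant(np.asarray(pct_teamwork_content, dtype=float))
    weights = 1.0 / np.asarray(variances, dtype=float)
    fit = sm.WLS(np.asarray(effect_sizes, dtype=float), X, weights=weights).fit()
    return fit.params[1], fit.pvalues[1]  # slope for teamwork content and its p value
```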
Acknowledgements
Corresponding Author
This work was supported in part by contract NNX16AB08G with the National Aeronautics and Space Administration (NASA) to Rice University. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs or the United States government. All opinions expressed in this article are those of the authors and do not necessarily reflect the official opinion or position of the University of Central Florida, or the Department of Defense. Correspondence concerning this article should be addressed to Eduardo Salas, Department of Psychology, Rice University, 6100 Main Street, Sewall Hall 429C, Houston, TX 77005.
Email: [email protected]
Publication History
Received February 27, 2015
Revision received March 25, 2016
Accepted March 28, 2016
First published online June 16, 2016
Aguinis, H., Gottfredson, R. K., & Wright, T. A. (2011). Best-practice recommendations for estimating interaction effects using meta-analysis. Journal of Organizational Behavior, 32, 1033–1043. https://doi.org/10.1002/job.719
*Ajeigbe, D. O. (2012). Nurse-physician teamwork in the emergency department (Doctoral dissertation). Retrieved from ProQuest.
*Al-Ammar, S. A. (1994). The influence of individual and organizational characteristics on training motivation and effectiveness (Doctoral dissertation). Retrieved from ProQuest.
*Allen, L. A. (2010). An evaluation of a shared leadership training program (Doctoral dissertation). Retrieved from ProQuest. (3420897)
*Alliger, G. M. (1988). Do zero correlations really exist among measures of different intellectual abilities? Educational and Psychological Measurement, 48, 275–280. https://doi.org/10.1177/0013164488482001
*Alliger, G. M., & Horowitz, H. M. (1989). IBM takes the guessing out of testing. Training and Development Journal, 43, 69–73.
Alliger, G. M., Tannenbaum, S. I., Bennett, B. J., Traver, H., & Shotland, A. (1997). A meta-analysis of the relations among training criteria. Personnel Psychology, 50, 341–358. https://doi.org/10.1111/j.1744-6570.1997.tb00911.x
Andel, C., Davidow, S. L., Hollander, M., & Moreno, D. A. (2012). The economics of healthcare quality and medical errors. Journal of Healthcare Finance, 39, 39–50.
*Anderson, M., LeFlore, J. L., & Anderson, J. M. (2012). Evaluating videotaped role-modeling to teach crisis resource management principles. Clinical Simulation in Nursing, 9, 343–354. https://doi.org/10.1016/j.ecns.2012.05.007
Andreatta, P. B. (2010). A typology for healthcare teams. Healthcare Management Review, 35, 345–354. https://doi.org/10.1097/HMR.0b013e3181e9fceb
*Antle, B. F., Frey, S. E., Sar, B. K., Barbee, A. P., & van Zyl, M. A. (2010). Training the child welfare workforce in healthy couple relationships: An examination of attitudes and outcomes. Children and Youth Services Review, 32, 223–230. https://doi.org/10.1016/j.childyouth.2009.08.023
*Armour Forse, R., Bramble, J. D., & McQuillan, R. (2011). Team training can improve operating room performance. Surgery, 150, 771–778. https://doi.org/10.1016/j.surg.2011.07.076
*Arora, S., Cox, C., Davies, S., Kassab, E., Mahoney, P., Sharma, E., . . . Sevdalis, N. (2014). Towards the next frontier for simulation-based training: Full-hospital simulation across the entire patient pathway. Annals of Surgery, 260, 252–258. https://doi.org/10.1097/SLA.0000000000000305
Arthur, W., Jr., Bennett, W., Jr., Edens, P. S., & Bell, S. T. (2003). Effectiveness of training in organizations: A meta-analysis of design and evaluation features. Journal of Applied Psychology, 88, 234–245.
*Baldwin, T. T. (1992). Effects of alternative modeling strategies on outcomes of interpersonal-skills training. Journal of Applied Psychology, 77, 147–154. https://doi.org/10.1037/0021-9010.77.2.147
Baldwin, T. T., & Ford, J. K. (1988). Transfer of training: A review and directions for future research. Personnel Psychology, 41, 63–105. https://doi.org/10.1111/j.1744-6570.1988.tb00632.x
Bandura, A. (1977). Social learning theory. Oxford, UK: Prentice Hall.
Barnett, S. M., & Ceci, S. J. (2002). When and where do we apply what we learn? A taxonomy for far transfer. Psychological Bulletin, 128, 612–637. https://doi.org/10.1037/0033-2909.128.4.612
Beach, S. (2013, August 2). Annual medical school graduation survey shows gains in team training. Retrieved from www.aamc.org/newsroom/newsreleases/351120/080213.html
Bedwell, W. L., Salas, E., Funke, G. J., & Knott, B. A. (2014). Team workload: A multilevel perspective. Organizational Psychology Review, 4, 99–123. https://doi.org/10.1177/2041386613502665
Begg, C. B., & Mazumdar, M. (1994). Operating characteristics of a rank correlation test for publication bias. Biometrics, 50, 1088–1101. https://doi.org/10.2307/2533446
*Bennett, W., Jr., Alliger, G. M., Eddy, E. R., & Tannenbaum, S. I. (2003). Expanding the training evaluation criterion space: Cross aircraft convergence and lessons learned from evaluation of the air force mission ready technician program. Military Psychology, 15, 59–76.
*Bledsoe, M. D. (1999). Correlations in Kirkpatrick's training evaluation model (Doctoral dissertation). Retrieved from ProQuest.
Block, L., Habicht, R., Wu, A. W., Desai, S. V., Wang, K., Silva, K. N., . . . Feldman, L. (2013). In the wake of the 2003 and 2011 duty hours regulations, how do internal medicine interns spend their time? Journal of General Internal Medicine, 28, 1042–1047. https://doi.org/10.1007/s11606-013-2376-6
Blume, B. D., Ford, J. K., Baldwin, T. T., & Huang, J. L. (2010). Transfer of training: A meta-analytic review. Journal of Management, 36, 1065–1105. https://doi.org/10.1177/0149206309352880
*Bolman, L. (1971). Some effects of trainers on their T groups. The Journal of Applied Behavioral Science, 7, 309–325. https://doi.org/10.1177/002188637100700303
Bonett, D. G. (2008). Meta-analytic interval estimation for bivariate correlations. Psychological Methods, 13, 173–181. https://doi.org/10.1037/a0012868
Borenstein, M., Hedges, L. V., Higgins, J. P., & Rothstein, H. R. (2009). Introduction to meta-analysis. West Sussex, UK: Wiley. https://doi.org/10.1002/9780470743386
*Bretz, R. D., Jr., & Thompsett, R. E. (1992). Comparing traditional and integrative learning methods in organizational training programs. Journal of Applied Psychology, 77, 941–951. https://doi.org/10.1037/0021-9010.77.6.941
*Brock, D., Abu-Rish, E., Chiu, C. R., Hammer, D., Wilson, S., Vorvick, L., . . . Zierler, B. (2013). Interprofessional education in team communication: Working together to improve patient safety. British Medical Journal Quality & Safety, 22, 414–423. https://doi.org/10.1136/bmjqs-2012-000952
*Brodsky, D., Gupta, M., Quinn, M., Smallcomb, J., Mao, W., Koyama, N., . . . Pursley, D. M. (2013). Building collaborative teams in neonatal intensive care. British Medical Journal Quality & Safety, 22, 374–382. https://doi.org/10.1136/bmjqs-2012-000909
Brown, K. G. (2005). An examination of the structure and nomological network of trainee reactions: A closer look at “smile sheets.” Journal of Applied Psychology, 90, 991–1001.
*Budin, W. C., Gennaro, S., O’Connor, C., & Contratti, F. (2014). Sustainability of improvements in perinatal teamwork and safety climate. Journal of Nursing Care Quality, 29, 363–370. https://doi.org/10.1097/NCQ.0000000000000067
Burke, L. A., Hutchins, H. M., & Saks, A. M. (2013). Best practices in training transfer. In M. A. Paulidi (Ed.), Psychology for business success (Vol. 3, pp. 115–132). Santa Barbara, CA: Praeger.
Burke, L. A., & Saks, A. M. (2009). Accountability in training transfer: Adapting Schlenker’s model of responsibility to a persistent but solvable problem. Human Resource Development Review, 8, 382–402. https://doi.org/10.1177/1534484309336732
Campbell, J. P. (1988). Training design for productivity improvement. In J. P. Campbell & R. J. Campbell (Eds.), Productivity in organizations (pp. 177–215). San Francisco, CA: Jossey-Bass.
Cannon-Bowers, J. A., & Salas, E. (1991). Toward an integration of training theory and technique. Human Factors, 33, 281–292.
*Capella, J., Smith, S., Philp, A., Putnam, T., Gilbert, C., Fry, W., . . . Remine, S. (2010). Teamwork training improves the clinical care of trauma patients. Journal of Surgical Education, 67, 439–443. https://doi.org/10.1016/j.jsurg.2010.06.006
Carayon, P., & Gurses, A. P. (2008). Nursing workload and patient safety: A human factors engineering perspective. In R. G. Hughes (Ed.), Patient safety and quality: An evidence-based handbook for nurses (pp. 1–14). Rockville, MD: Agency for Healthcare Research and Quality.
*Carbo, A. R., Tess, A. V., Roy, C., & Weingart, S. N. (2011). Developing a high-performance team training framework for internal medicine residents: The ABC’S of teamwork. Journal of Patient Safety, 7, 72–76. https://doi.org/10.1097/PTS.0b013e31820dbe02
*Carhart, E. D. (2012). Effects of crew resource management training on medical errors in a simulated prehospital setting (Doctoral dissertation). Retrieved from ProQuest.
*Carpenter, J. (1995). Interprofessional education for medical and nursing students: Evaluation of a programme. Medical Education, 29, 265–272. https://doi.org/10.1111/j.1365-2923.1995.tb02847.x
*Castner, J., Foltz-Ramos, K., Schwartz, D. G., & Ceravolo, D. J. (2012). A leadership challenge: Staff nurse perceptions after an organizational TeamSTEPPS initiative. The Journal of Nursing Administration, 42, 467–472. https://doi.org/10.1097/NNA.0b013e31826a1fc1
*Catchpole, K. R., Dale, T. J., Hirst, D. G., Smith, J. P., & Giddings, T. A. (2010). A multicenter trial of aviation-style training for surgical teams. Journal of Patient Safety, 6, 180–186. https://doi.org/10.1097/PTS.0b013e3181f100ea
*Caylor, S., Aebersold, M., Lapham, J., & Carlson, E. (2015). The use of virtual simulation and a modified TeamSTEPPS training for multiprofessional education. Clinical Simulation in Nursing, 11, 163–171. https://doi.org/10.1016/j.ecns.2014.12.003
*Ceravolo, D. J., Schwartz, D. G., Foltz-Ramos, K. M., & Castner, J. (2012). Strengthening communication to overcome lateral violence. Journal of Nursing Management, 20, 599–606. https://doi.org/10.1111/j.1365-2834.2012.01402.x
*Chen, H. J. (2010). Linking employees’ e-learning system use to their overall job outcomes: An empirical study based on the IS success model. Computers & Education, 55, 1628–1639. https://doi.org/10.1016/j.compedu.2010.07.005
Christoph, R. T., Schoenfeld, G. A., Jr., & Tansky, J. W. (1998). Overcoming barriers to training utilizing technology: The influence of self-efficacy factors on multimedia-based training receptiveness. Human Resource Development Quarterly, 9, 25–38. https://doi.org/10.1002/hrdq.3920090104
*Chung, S. P., Cho, J., Park, Y. S., Kang, H. G., Kim, C. W., Song, K. J., . . . Cho, G. C. (2011). Effects of script-based role play in cardiopulmonary resuscitation team training. Emergency Medicine Journal, 28, 690–694. https://doi.org/10.1136/emj.2009.090605
Classen, D. C., Resar, R., Griffin, F., Federico, F., Frankel, T., Kimmel, N., . . . James, B. C. (2011). ‘Global trigger tool’ shows that adverse events in hospitals may be ten times greater than previously measured. Health Affairs, 30, 581–589. https://doi.org/10.1377/hlthaff.2011.0190
*Clay-Williams, R., McIntosh, C. A., Kerridge, R., & Braithwaite, J. (2013). Classroom and simulation team training: A randomized controlled trial. International Journal for Quality in Healthcare, 25, 314–321. https://doi.org/10.1093/intqhc/mzt027
*Clement, R. W. (1982). Testing the hierarchy theory of training evaluation: An expanded role for trainee reactions. Public Personnel Management, 11, 176–184. https://doi.org/10.1177/009102608201100210
Cohen, P. A. (1981). Student ratings of instruction and student achievement: A meta-analysis of multisection validity studies. Review of Educational Research, 51, 281–309. https://doi.org/10.3102/00346543051003281
Colquitt, J. A., LePine, J. A., & Noe, R. A. (2000). Toward an integrative theory of training motivation: A meta-analytic path analysis of 20 years of research. Journal of Applied Psychology, 85, 678–707. https://doi.org/10.1037/0021-9010.85.5.678
Commission, T. J. N. P., & Dumpel, H. (2005). The California Nursing Practice Act: Safe staffing standards by scope, ratio, and acuity. Part II. California Nurse, 10, 20–26.
*Cooley, E. (1994). Training an interdisciplinary team in communication and decision-making skills. Small Group Research, 25, 5–25. https://doi.org/10.1177/1046496494251002
Cortina, J. M. (2003). Apples and oranges (and pears, oh my!): The search for moderators in meta-analysis. Organizational Research Methods, 6, 415–439. https://doi.org/10.1177/1094428103257358
Craik, K. (1943). The nature of explanation. Cambridge, England: Cambridge University Press.
Crawford, E. R., Lepine, J. A., & Rich, B. L. (2010). Linking job demands and resources to employee engagement and burnout: A theoretical extension and meta-analytic test. Journal of Applied Psychology, 95, 834–848. https://doi.org/10.1037/a0019364
*Crofts, J. F., Ellis, D., Draycott, T. J., Winter, C., Hunt, L. P., & Akande, V. A. (2007). Change in knowledge of midwives and obstetricians following obstetric emergency training: A randomised controlled trial of local hospital, simulation centre and teamwork training. British Journal of Obstetrics and Gynaecology, 114, 1534–1541. https://doi.org/10.1111/j.1471-0528.2007.01493.x
Cromwell, S. E., & Kolb, J. A. (2004). An examination of work-environment support factors affecting transfer of supervisory skills training to the workplace. Human Resource Development Quarterly, 15, 449–471. https://doi.org/10.1002/hrdq.1115
Cronin, M. A., & Weingart, L. R. (2007). Representational gaps, information processing and conflict in functionally diverse teams. The Academy of Management Review, 32, 761–773. https://doi.org/10.5465/AMR.2007.25275511
*Curran, V., Heath, O., Adey, T., Callahan, T., Craig, D., Hearn, T., . . . Hollett, A. (2012). An approach to integrating interprofessional education in collaborative mental healthcare. Academic Psychiatry, 36, 91–95. https://doi.org/10.1176/appi.ap.10030045
DeChurch, L. A., & Mesmer-Magnus, J. R. (2010). The cognitive underpinnings of effective teamwork: A meta-analysis. Journal of Applied Psychology, 95, 32–53. https://doi.org/10.1037/a0017328
*Deering, S., Rosen, M. A., Ludi, V., Munroe, M., Pocrnich, A., Laky, C., & Napolitano, P. G. (2011). On the front lines of patient safety: Implementation and evaluation of team training in Iraq. Joint Commission Journal on Quality and Patient Safety, 37, 350–356.
Demerouti, E., Bakker, A. B., Nachreiner, F., & Schaufeli, W. B. (2001). The job demands-resources model of burnout. Journal of Applied Psychology, 86, 499–512. https://doi.org/10.1037/0021-9010.86.3.499
de Wit, F. R., Greer, L. L., & Jehn, K. A. (2012). The paradox of intragroup conflict: A meta-analysis. Journal of Applied Psychology, 97, 360–390. https://doi.org/10.1037/a0024844
Duval, S., & Tweedie, R. (2000). Trim and fill: A simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics, 56, 455–463. https://doi.org/10.1111/j.0006-341X.2000.00455.x
*Eden, D., & Shani, A. B. (1982). Pygmalion goes to boot camp: Expectancy, leadership, and trainee performance. Journal of Applied Psychology, 67, 194–199. https://doi.org/10.1037/0021-9010.67.2.194
Egger, M., Davey Smith, G., Schneider, M., & Minder, C. (1997). Bias in meta-analysis detected by a simple, graphical test. British Medical Journal, 315, 629–634. https://doi.org/10.1136/bmj.315.7109.629
Ellis, A. P. J., Bell, B. S., Ployhart, R. E., Hollenbeck, J. R., & Ilgen, D. R. (2005). An evaluation of generic teamwork skill training with action teams: Effects on cognitive and skill-based outcomes. Personnel Psychology, 58, 641–672.
*Faerman, S. R., & Ban, C. (1993). Trainee satisfaction and training impact: Issues in training evaluation. Public Productivity & Management Review, 16, 299–314. https://doi.org/10.2307/3380872
*Fernandez, R., Pearce, M., Grand, J. A., Rench, T. A., Jones, K. A., Chao, G. T., & Kozlowski, S. W. (2013). Evaluation of a computer-based educational intervention to improve medical teamwork and performance during simulated patient resuscitations. Critical Care Medicine, 41, 2551–2562. https://doi.org/10.1097/CCM.0b013e31829828f7
*Fernandez Castelao, E., Russo, S. G., Cremer, S., Strack, M., Kaminski, L., Eich, C., . . . Boos, M. (2011). Positive impact of crisis resource management training on no-flow time and team member verbalisations during simulated cardiopulmonary resuscitation: A randomised controlled trial. Resuscitation, 82, 1338–1343. https://doi.org/10.1016/j.resuscitation.2011.05.009
Ford, J. K., & Weissbein, D. A. (1997). Transfer of training: An updated review and analysis. Performance Improvement Quarterly, 10, 22–41.
Foxon, M. (1993). A process approach to the transfer of training. Australian Journal of Educational Psychology, 9, 130–143.
*Fransen, A., Oei, S. G., Schuit, E., Van Deven, J., & van Tetering, A. (2014). How simulation-based teamwork training translates into benefits for patients (TOSTI study): A multicenter, cluster-randomized controlled trial. Society for Simulation in Healthcare, 9, 410.
Franzoni, A. L., & Assar, S. (2009). Student learning styles adaptation method based on teaching strategies and electronic media. Journal of Educational Technology & Society, 12, 15–29.
*Frayne, C. A., & Latham, G. P. (1987). Application of social learning theory to employee self-management of attendance. Journal of Applied Psychology, 72, 387–392. https://doi.org/10.1037/0021-9010.72.3.387
Funke, G. J., Knott, B. A., Salas, E., Pavlas, D., & Strang, A. J. (2012). Conceptualization and measurement of team workload: A critical need. Human Factors, 54, 36–51. https://doi.org/10.1177/0018720811427901
Gagne, R. M. (1984). Learning outcomes and their effects: Useful categories of human performance. American Psychologist, 39, 377–385. https://doi.org/10.1037/0003-066X.39.4.377
*Gibson, C. B. (2001). Me and us: Differential relationships among goal-setting training, efficacy and effectiveness at the individual and team level. Journal of Organizational Behavior, 22, 789–808. https://doi.org/10.1002/job.114
*Gifford, W. A. (2011). Development and evaluation of a leadership intervention to influence nurses’ use of clinical guideline recommendations (Doctoral Dissertation). Retrieved from ProQuest.
*Gist, M. E. (1989). The influence of training method on self-efficacy and idea generation among managers. Personnel Psychology, 42, 787–805. https://doi.org/10.1111/j.1744-6570.1989.tb00675.x
*Gist, M. E., Schwoerer, C., & Rosen, B. (1989). Effects of alternative training methods on self-efficacy and performance in computer software training. Journal of Applied Psychology, 74, 884–891. https://doi.org/10.1037/0021-9010.74.6.884
Global Diffusion of Healthcare Group. (2015). Global diffusion of healthcare innovation study: Accelerating the journey. Retrieved from http://wish-qatar.org/summit/2015-summit/global-diffusion-of-healthcare-innovation
Goldstein, I. L., Braverman, E. P., & Goldstein, H. (1991). Needs assessment. Developing Human Resources, 5–35.
Grohmann, A., Beller, J., & Kauffeld, S. (2014). Exploring the critical role of motivation to transfer in the training transfer process. International Journal of Training and Development, 18, 84–103. https://doi.org/10.1111/ijtd.12030
*Guerrero, S., & Sire, B. (2001). Motivation to train from the workers’ perspective: Example of French companies. The International Journal of Human Resource Management, 12, 988–1004. https://doi.org/10.1080/713769684
*Gupta, R. T., Sexton, J. B., Milne, J., & Frush, D. P. (2015). Practice and quality improvement: Successful implementation of TeamSTEPPS tools into an academic interventional ultrasound practice. American Journal of Roentgenology, 204, 105–110. https://doi.org/10.2214/AJR.14.12775
*Hagemann, V., Kluge, A., & Kehren, C. (2014). Evaluation of crew resource management interventions for doctors-on-call. Paper presented at the Human Factors and Ergonomics Society Europe Chapter 2014 Annual Conference.
Hall, P. (2005). Interprofessional teamwork: Professional cultures as barriers. Journal of Interprofessional Care, 19, 188–196. https://doi.org/10.1080/13561820500081745
Hall, S. M., & Brannick, M. T. (2002). Comparison of two random-effects methods of meta-analysis. Journal of Applied Psychology, 87, 377–389. https://doi.org/10.1037/0021-9010.87.2.377
*Haller, G., Garnerin, P., Morales, M. A., Pfister, R., Berner, M., Irion, O., . . . Kern, C. (2008). Effect of crew resource management training in a multidisciplinary obstetrical setting. International Journal for Quality in Healthcare, 20, 254–263. https://doi.org/10.1093/intqhc/mzn018
*Halverson, A. L., Andersson, J. L., Anderson, K., Lombardo, J., Park, C. S., Rademaker, A. W., & Moorman, D. W. (2009). Surgical team training: The Northwestern Memorial Hospital experience. Archives of Surgery, 144, 107–112. https://doi.org/10.1001/archsurg.2008.545
*Hamilton, N. A., Kieninger, A. N., Woodhouse, J., Freeman, B. D., Murray, D., & Klingensmith, M. E. (2012). Video review using a reliable evaluation metric improves team function in high-fidelity simulated trauma resuscitation. Journal of Surgical Education, 69, 428–431. https://doi.org/10.1016/j.jsurg.2011.09.009
Hancock, P. A., & Warm, J. S. (1989). A dynamic model of stress and sustained attention. Human Factors, 31, 519–537.
*Hänsel, M., Winkelmann, A. M., Hardt, F., Gijselaers, W., Hacker, W., Stiehl, M., . . . Müller, M. P. (2012). Impact of simulator training and crew resource management training on final-year medical students’ performance in sepsis resuscitation: A randomized trial. Minerva Anestesiologica, 78, 901–909.
*Hansen, K. S., Uggen, P. E., Brattebø, G., & Wisborg, T. (2007). Training operating room teams in damage control surgery for trauma: A followup study of the Norwegian model. Journal of the American College of Surgeons, 205, 712–716. https://doi.org/10.1016/j.jamcollsurg.2007.06.015
*Hanyok, L. A., Walton-Moss, B., Tanner, E., Stewart, R. W., & Becker, K. (2013). Effects of a graduate-level interprofessional education program on adult nurse practitioner student and internal medicine resident physician attitudes towards interprofessional care. Journal of Interprofessional Care, 27, 526–528. https://doi.org/10.3109/13561820.2013.790881
Harman, R. P., Ellington, J. K., Surface, E. A., & Thompson, L. F. (2015). Exploring qualitative training reactions: Individual and contextual influences on trainee commenting. Journal of Applied Psychology, 100, 894–916. https://doi.org/10.1037/a0038380
Harper, K., & McCully, C. (2007). Acuity systems dialogue and patient classification system essentials. Nursing Administration Quarterly, 31, 284–299. https://doi.org/10.1097/01.NAQ.0000290426.41690.cb
*Harrison, J. K. (1992). Individual and combined effects of behavior modeling and the cultural assimilator in cross-cultural management training. Journal of Applied Psychology, 77, 952–962. https://doi.org/10.1037/0021-9010.77.6.952
Hays, R. T., & Singer, S. J. (1989). Simulation fidelity: Definitions, problems, and historical perspectives. In R. T. Hays & S. J. Singer (Eds.), Simulation fidelity in training system design: Bridging the gap between reality and training (pp. 1–22). New York, NY: Springer-Verlag. https://doi.org/10.1007/978-1-4612-3564-4
*Heard, L. A., Fredette, M. E., Atmadja, M. L., Weinstock, P., & Lightdale, J. R. (2011). Perceptions of simulation-based training in crisis resource management in the endoscopy unit. Gastroenterology Nursing, 34, 42–48. https://doi.org/10.1097/SGA.0b013e31820b2239
Hedges, L., & Olkin, I. (1985). Statistical models for meta-analysis. New York, NY: Academic Press.
*Hicks, C. M., Kiss, A., Bandiera, G. W., & Denny, C. J. (2012). Crisis Resources for Emergency Workers (CREW II): Results of a pilot study and simulation-based crisis resource management course for emergency medicine residents. Canadian Journal of Emergency Medical Care, 14, 354–362.
Hikosaka, O., Nakahara, H., Rand, M. K., Sakai, K., Lu, X., Nakamura, K., . . . Doya, K. (1999). Parallel neural networks for learning sequential procedures. Trends in Neurosciences, 22, 464–471. https://doi.org/10.1016/S0166-2236(99)01439-3
*Hobgood, C., Sherwood, G., Frush, K., Hollar, D., Maynard, L., Foster, B., . . . the Interprofessional Patient Safety Education Collaborative. (2010). Teamwork training with nursing and medical students: Does the method matter? Results of an interinstitutional, interdisciplinary collaboration. Quality & Safety in Healthcare, 19, Article e25.
Hogan, H., Healey, F., Neale, G., Thomson, R., Vincent, C., & Black, N. (2012). Preventable deaths due to problems in care in English acute hospitals: A retrospective case record review study. British Medical Journal Quality & Safety, 21, 737–745. https://doi.org/10.1136/bmjqs-2011-001159
Holden, R. J., Eriksson, A., Andreasson, J., Williamsson, A., & Dellve, L. (2015). Healthcare workers’ perceptions of lean: A context-sensitive, mixed methods study in three Swedish hospitals. Applied Ergonomics, 47, 181–192. https://doi.org/10.1016/j.apergo.2014.09.008
Hollenbeck, J. R., Beersma, B., & Schouten, M. E. (2012). Beyond team types and taxonomies: A dimensional scaling conceptualization for team description. The Academy of Management Review, 37, 82–106.
Holton, E. F. (1996). The flawed four-level evaluation model. Human Resource Development Quarterly, 7, 5–21. https://doi.org/10.1002/hrdq.3920070103
Holton, E., Baldwin, T. T., & Holton, E. F. (2003). Improving learning transfer in organizations (1st ed.). San Francisco, CA: Jossey-Bass.
*Hook, K., & Bunce, D. (2001). Immediate learning in organizational computer training as a function of training intervention affective reaction, and session impact measures. Applied Psychology: An International Review, 50, 436–454. https://doi.org/10.1111/1464-0597.00066
Horwitz, S. K., & Horwitz, I. B. (2007). The effects of team diversity on team outcomes: A meta-analytic review of team demography. Journal of Management, 33, 987–1015. https://doi.org/10.1177/0149206307308587
*Hsu, Y. C., Jerng, J. S., Chang, C. W., Chen, L. C., Hsieh, M. Y., Huang, S. F., . . . Hung, K. Y. (2014). Integrating team resource management program into staff training improves staff’s perception and patient safety in organ procurement and transplantation: The experience in a university-affiliated medical center in Taiwan. BMC Surgery, 14, 51. https://doi.org/10.1186/1471-2482-14-51
Huang, J. L., Blume, B. D., Ford, J. K., & Baldwin, T. T. (2015). A tale of two transfers: Disentangling maximum and typical transfer and their respective predictors. Journal of Business and Psychology. Advance online publication. https://doi.org/10.1007/s10869-014-9394-1
Huffcutt, A. I., & Arthur, W. (1995). Development of a new outlier statistic for meta-analytic data. Journal of Applied Psychology, 80, 327–334. https://doi.org/10.1037/0021-9010.80.2.327
*Hughes, K. M., Benenson, R. S., Krichten, A. E., Clancy, K. D., Ryan, J. P., & Hammond, C. (2014). A crew resource management program tailored to trauma resuscitation improves team behavior and communication. Journal of the American College of Surgeons, 219, 545–551. https://doi.org/10.1016/j.jamcollsurg.2014.03.049
Hunter, J. E., & Schmidt, F. L. (2004). Methods of meta-analysis: Correcting error and bias in research. Thousand Oaks, CA: Sage.
Issenberg, S. B., McGaghie, W. C., Petrusa, E. R., Lee Gordon, D., & Scalese, R. J. (2005). Features and uses of high-fidelity medical simulations that lead to effective learning: A BEME systematic review. Medical Teacher, 27, 10–28. https://doi.org/10.1080/01421590500046924
*Jankouskas, T. S., Haidet, K. K., Hupcey, J. E., Kolanowski, A., & Murray, W. B. (2011). Targeted crisis resource management training improves performance among randomized nursing and medical students. Simulation in Healthcare, 6, 316–326. https://doi.org/10.1097/SIH.0b013e31822bc676
*Jones, F., Podila, P., & Powers, C. (2013). Creating a culture of safety in the emergency department: The value of teamwork training. The Journal of Nursing Administration, 43, 194–200. https://doi.org/10.1097/NNA.0b013e31828958cd
*Kalisch, B. J., Aebersold, M., McLaughlin, M., Tschannen, D., & Lane, S. (2015). An intervention to improve nursing teamwork using virtual simulation. Western Journal of Nursing Research, 37, 164–179. https://doi.org/10.1177/0193945914531458
Kassirer, J. P., Kuipers, B. J., & Gorry, G. A. (1982). Toward a theory of clinical expertise. The American Journal of Medicine, 73, 251–259. https://doi.org/10.1016/0002-9343(82)90187-5
Keith, N., & Frese, M. (2008). Effectiveness of error management training: A meta-analysis. Journal of Applied Psychology, 93, 59–69. https://doi.org/10.1037/0021-9010.93.1.59
*Kellicut, D. C., Kuncir, E. J., Williamson, H. M., Masella, P. C., & Nielsen, P. E. (2014). Surgical team assessment training: Improving surgical teams during deployment. American Journal of Surgery, 208, 275–283. https://doi.org/10.1016/j.amjsurg.2014.03.008
Kelly, D. (2012, March 6). Why people hate training, and how to overcome it. Retrieved from https://www.mindflash.com/blog/2012/03/why-people-hate-training-and-how-to-overcome-it/
Kepes, S., Banks, G. C., McDaniel, M., & Whetzel, D. L. (2012). Publication bias in the organizational sciences. Organizational Research Methods, 15, 624–662. https://doi.org/10.1177/1094428112452760
*Khanal, P. (2014). Design, development and evaluation of collaborative team training method in virtual worlds for time-critical medical procedures (Doctoral dissertation). Retrieved from ProQuest.
*Kilday, D., Spiva, L. A., Barnett, J., Parker, C., & Hart, P. (2013). The effectiveness of combined training modalities on neonatal rapid response teams. Clinical Simulation in Nursing, 9, 249–256. https://doi.org/10.1016/j.ecns.2012.02.004
*Kim, L. Y. (2015). The effects of simulation-based TeamSTEPPS interprofessional communication and teamwork training on patient and provider outcomes (Doctoral dissertation). University of California, Los Angeles. (3637609)
*Kimrey, M. M., Green, B., Spiva, L. A., Delk, M. L., Patrick, S., & Gallagher, E. (2011). Effectiveness of team training on fall prevention. WellStar. Retrieved from http://www.wellstar.org/education/documents/nursingresearchconference/2013/2013-kimrey-et-al-fall-prevention-presentation.pdf
Kirch, D. (2007, November). Culture and the courage to change. Presidential address to Association of American Medical Colleges Annual Meeting, Washington, DC.
*Kirkman, B. L., Rosen, B., Tesluk, P. E., & Gibson, C. B. (2006). Enhancing the transfer of computer-assisted training proficiency in geographically distributed teams. Journal of Applied Psychology, 91, 706–716. https://doi.org/10.1037/0021-9010.91.3.706
Kirkpatrick, D. L. (1956). How to start an objective evaluation of your training program. Journal of the American Society of Training Directors, 10, 18–22.
Kirkpatrick, D. L. (1967). Evaluation of training. In R. Craig & L. Bittel (Eds.), Training and development handbook (pp. 87–98). New York, NY: American Society of Training and Development.
Kirkpatrick, D. L. (1996). Great ideas revisited: Revisiting Kirkpatrick’s four-level model. Training & Development, 50, 54–59.
*Kirschbaum, K. A., Rask, J. P., Brennan, M., Phelan, S., & Fortner, S. A. (2012). Improved climate, culture, and communication through multidisciplinary training and instruction. American Journal of Obstetrics and Gynecology, 207, 200.e1–200.e7. https://doi.org/10.1016/j.ajog.2012.06.036
Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119, 254–284. https://doi.org/10.1037/0033-2909.119.2.254
Kluger, A. N., & DeNisi, A. (1998). Feedback interventions: Toward the understanding of a double-edged sword. Current Directions in Psychological Science, 7, 67–72. https://doi.org/10.1111/1467-8721.ep10772989
Knowles, M. (1973). The adult learner: A neglected species. Houston, TX: Gulf Publishing Company.
Kohn, L. T., Corrigan, J. M., & Donaldson, M. (1999). To err is human: Building a safer health system. Washington, DC: Institute of Medicine.
*Kolbe, M., Weiss, M., Grote, G., Knauth, A., Dambach, M., Spahn, D. R., & Grande, B. (2013). TeamGAINS: A tool for structured debriefings for simulation-based team trainings. British Medical Journal Quality & Safety, 22, 541–553. https://doi.org/10.1136/bmjqs-2012-000917
*Koutantji, M., McCulloch, P., Undre, S., Gautama, S., Cunniffe, S., Sevdalis, N., . . . Darzi, A. (2008). Is team training in briefings for surgical teams feasible in simulation? Cognition, Technology & Work, 10, 275–285.
Kozlowski, S. W., & DeShon, R. P. (2004). A psychological fidelity approach to simulation-based training: Theory, research and principles. In E. Salas, L. R. Elliott, S. G. Schiflett, & M. D. Coovert (Eds.), Scaled worlds: Development, validation, and applications (pp. 75–99). Burlington, VT: Ashgate Publishing.
Kozlowski, S. W. J., & Salas, E. (1997). An organizational systems approach to implementation and transfer of training. In J. K. Ford, S. W. J. Kozlowski, K. Kraiger, E. Salas, & M. Teachout (Eds.), Improving training effectiveness in work organizations (pp. 3–90). Mahwah, NJ: Erlbaum.
Kraiger, K., Ford, J. K., & Salas, E. (1993). Application of cognitive, skill-based, and affective theories of learning outcomes to new methods of training evaluation. Journal of Applied Psychology, 78, 311–328. https://doi.org/10.1037/0021-9010.78.2.311
Kuhl, J. (1992). A theory of self-regulation: Action versus state orientation, self-discrimination, and some applications. Applied Psychology, 41, 97–129. https://doi.org/10.1111/j.1464-0597.1992.tb00688.x
Kunkler, K. (2006). The role of medical simulation: An overview. International Journal of Medical Robotics and Computer Assisted Surgery, 2, 203–210. https://doi.org/10.1002/rcs.101
Laker, D. R., & Powell, J. L. (2011). The differences between hard and soft skills and their relative impact on training transfer. Human Resource Development Quarterly, 22, 111–122. https://doi.org/10.1002/hrdq.20063
*Lam, S. S., & Schaubroeck, J. (2000). A field experiment testing frontline opinion leaders as change agents. Journal of Applied Psychology, 85, 987–995.
*LaPoint, J. L. (2012). The effects of aviation error management training on perioperative safety attitudes. International Journal of Business and Social Science, 3, 77–90.
*Lee, C., Bernstein, P., Chazotte, C., Angert, R., Bernstein, J., McGowan, A., . . . Goffman, D. (2013). 757: Interdisciplinary obstetric simulation training improves team performance. American Journal of Obstetrics and Gynecology, 208, S318. https://doi.org/10.1016/j.ajog.2012.10.095
*Lee, P., Allen, K., & Daly, M. (2012). A ‘Communication and Patient Safety’ training programme for all healthcare staff: Can it make a difference? British Medical Journal Quality & Safety, 21, 84–88. https://doi.org/10.1136/bmjqs-2011-000297
*Levy, P. D., Dancy, J. N., Stowell, S. A., Hoekstra, J. W., Arthur, C. L., Wilson, C. H., . . . Hiestand, B. (2014). Lessons in flying: Crew resource management as a quality improvement method for acute coronary syndromes care. Critical Pathways in Cardiology, 13, 36–42. https://doi.org/10.1097/HPC.0000000000000002
*Liaw, S. Y., Zhou, W. T., Lau, T. C., Siau, C., & Chan, S. W. (2014). An interprofessional communication training using simulation to enhance safe care for a deteriorating patient. Nurse Education Today, 34, 259–264. https://doi.org/10.1016/j.nedt.2013.02.019
*Liu, L., Grandon, E. E., & Ash, S. R. (2009). Trainee reactions and task performance: A study of open training in object-oriented systems development. Information Systems & E-Business Management, 7, 21–37. https://doi.org/10.1007/s10257-007-0049-x
Lyons, R., Lazzara, E. H., Benishek, L. E., Zajac, S., Gregory, M., Sonesh, S. C., & Salas, E. (2015). Enhancing the effectiveness of team debriefings in medical simulation: More best practices. Joint Commission Journal on Quality and Patient Safety, 41, 115–125.
*Mager, D. R., & Lange, J. (2014). Teambuilding across healthcare professions: The ELDER project. Applied Nursing Research, 27, 141–143. https://doi.org/10.1016/j.apnr.2013.06.006
*Mahoney, J. S., Ellis, T. E., Garland, G., Palyo, N., & Greene, P. K. (2012). Supporting a psychiatric hospital culture of safety. Journal of the American Psychiatric Nurses Association, 18, 299–306. https://doi.org/10.1177/1078390312460577
*Mahramus, T., Penoyer, D., Sole, M. L., & Bowe, E. (2013). The impact of a teamwork training program intervention on teamwork performances in cardiopulmonary arrest events short version: Improving teamwork performance in cardiopulmonary arrest through a teamwork intervention. Simulation in Healthcare, 8, 562. https://doi.org/10.1097/01.SIH.0000441629.07068.3a
*Malec, J. F., Torsher, L. C., Dunn, W. F., Wiegmann, D. A., Arnold, J. J., Brown, D. A., & Phatak, V. (2007). The mayo high performance teamwork scale: Reliability and validity for evaluating key crew resource management skills. Simulation in Healthcare, 2, 4–10. https://doi.org/10.1097/SIH.0b013e31802b68ee
*Marshall, N. E., Vanderhoeven, J., Eden, K. B., Segel, S. Y., & Guise, J. M. (2015). Impact of simulation and team training on postpartum hemorrhage management in non-academic centers. The Journal of Maternal-Fetal & Neonatal Medicine, 28, 495–499. https://doi.org/10.3109/14767058.2014.923393
*Martineau, J. W. (1996). A contextual examination of the effectiveness of a supervisory skills training program (Doctoral dissertation). Retrieved from ProQuest Information & Learning.
*Martocchio, J. J., & Webster, J. (1992). Effects of feedback and cognitive playfulness on performance in microcomputer software training. Personnel Psychology, 45, 553–578. https://doi.org/10.1111/j.1744-6570.1992.tb00860.x
*Matharoo, M., Haycock, A., Sevdalis, N., & Thomas-Gibson, S. (2014). Endoscopic non-technical skills team training: The next step in quality assurance of endoscopy training. World Journal of Gastroenterology, 20, 17507–17515. https://doi.org/10.3748/wjg.v20.i46.17507
*Mathieu, J. E., Martineau, J. W., & Tannenbaum, S. I. (1993). Individual and situational influences on the development of self-efficacy: Implications for training effectiveness. Personnel Psychology, 46, 125–147. https://doi.org/10.1111/j.1744-6570.1993.tb00870.x
*Mathieu, J. E., Tannenbaum, S. I., & Salas, E. (1992). Influences of individual and situational characteristics on measures of training effectiveness. Academy of Management Journal, 35, 828–847. https://doi.org/10.2307/256317
*Mayer, C. M., Cluff, L., Lin, W.-T., Willis, T. S., Stafford, R. E., Williams, C., . . . Amoozegar, J. B. (2011). Evaluating efforts to optimize TeamSTEPPS implementation in surgical and pediatric intensive care units. Joint Commission Journal on Quality and Patient Safety, 37, 365–374.
*McCaffrey, R., Hayes, R. M., Cassell, A., Miller-Reyes, S., Donaldson, A., & Ferrell, C. (2012). The effect of an educational programme on attitudes of nurses and medical residents towards the benefits of positive communication and collaboration. Journal of Advanced Nursing, 68, 293–301. https://doi.org/10.1111/j.1365-2648.2011.05736.x
*McCulloch, P., Mishra, A., Handa, A., Dale, T., Hirst, G., & Catchpole, K. (2009). The effects of aviation-style non-technical skills training on technical performance and outcome in the operating theatre. Quality & Safety in Healthcare, 18, 109–115. https://doi.org/10.1136/qshc.2008.032045
*McEvoy, G. M. (1997). Organizational change and outdoor management education. Human Resource Management, 36, 235–250. https://doi.org/10.1002/(SICI)1099-050X(199722)36:2<235::AID-HRM5>3.0.CO;2-Y
Medicare. (2014). HCAHPS: Patients’ perspectives of care survey. Retrieved from https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/HospitalHCAHPS.html
*Meier, A. H., Boehler, M. L., McDowell, C. M., Schwind, C., Markwell, S., Roberts, N. K., & Sanfey, H. (2012). A surgical simulation curriculum for senior medical students based on TeamSTEPPS. Archives of Surgery, 147, 761–766. https://doi.org/10.1001/archsurg.2012.1340
Meliones, J. N., Alton, M., Mericle, J., Ballard, R., Cesari, J., Frush, K. S., & Mistry, K. (2008). 10-Year experience integrating strategic performance improvement initiatives: Can the Balanced Scorecard, Six Sigma, and team training all thrive in a single hospital? In K. Henriksen, J. B. Battles, M. A. Keyes, & M. L. Grady (Eds.), Advances in patient safety: New directions and alternative approaches (Vol. 3, pp. 1–13). Rockville, MD: Agency for Healthcare Research and Quality.
Merriam, S., & Leahy, B. (2005). Learning transfer: A review of the research in adult education and training. PAACE Journal of Lifelong Learning, 14, 1–25.
*Meurling, L., Hedman, L., Felländer-Tsai, L., & Wallin, C.-J. (2013). Leaders’ and followers’ individual experiences during the early phase of simulation-based team training: An exploratory study. British Medical Journal Quality & Safety, 22, 459–467. https://doi.org/10.1136/bmjqs-2012-000949
*Meurling, L., Hedman, L., Sandahl, C., Felländer-Tsai, L., & Wallin, C. J. (2013). Systematic simulation-based team training in a Swedish intensive care unit: A diverse response among critical care professions. British Medical Journal Quality & Safety, 22, 485–494. https://doi.org/10.1136/bmjqs-2012-000994
*Miles, M. B. (1965). Changes during and following laboratory training: A clinical-experimental study. Journal of Applied Behavioral Science, 1, 215–242. https://doi.org/10.1177/002188636500100302
Miller, D., Crandall, C., Washington, C., III, & McLaughlin, S. (2012). Improving teamwork and communication in trauma care through in situ simulations. Academic Emergency Medicine, 19, 608–612. https://doi.org/10.1111/j.1553-2712.2012.01354.x
*Mishra, A., Catchpole, K., & McCulloch, P. (2009). The Oxford NOTECHS System: Reliability and validity of a tool for measuring teamwork behaviour in the operating theatre. Quality & Safety in Healthcare, 18, 104–108. https://doi.org/10.1136/qshc.2007.024760
Mitchell, P., Wynia, M., Golden, R., McNellis, B., Okun, S., Webb, C. E., & Von Kohorn, I. (2012). Core principles & values of effective team-based healthcare. Washington, DC: Institute of Medicine.
*Morey, J. C., Simon, R., Jay, G. D., Wears, R. L., Salisbury, M., Dukes, K. A., & Berns, S. D. (2002). Error reduction and performance improvement in the emergency department through formal teamwork training: Evaluation results of the MedTeams project. Health Services Research, 37, 1553–1581. https://doi.org/10.1111/1475-6773.01104
*Morgan, L., Hadi, M., Pickering, S., Robertson, E., Griffin, D., Collins, G., . . . New, S. (2015). The effect of teamwork training on team performance and clinical outcome in elective orthopaedic surgery: A controlled interrupted time series study. BMJ Open, 5, Article e006216. https://doi.org/10.1136/bmjopen-2014-006216
Morris, S. B., & DeShon, R. P. (2002). Combining effect size estimates in meta-analysis with repeated measures and independent-groups designs. Psychological Methods, 7, 105–125. https://doi.org/10.1037/1082-989X.7.1.105
*Müller, M. P., Hänsel, M., Fichtner, A., Hardt, F., Weber, S., Kirschbaum, C., . . . Eich, C. (2009). Excellence in performance and stress reduction during two different full scale simulator training courses: A pilot study. Resuscitation, 80, 919–924. https://doi.org/10.1016/j.resuscitation.2009.04.027
*Muñoz, G. J., Atoba, O. A., & Arthur, W., Jr. (Under review). An empirical investigation of the reciprocal relationships between team-level affective reactions and team performance. Organizational Behavior and Human Decision Processes.
*Nicotera, A. M., Mahon, M. M., & Wright, K. B. (2014). Communication that builds teams: Assessing a nursing conflict intervention. Nursing Administration Quarterly, 38, 248–260. https://doi.org/10.1097/NAQ.0000000000000033
*Nielsen, K., Randall, R., & Christensen, K. B. (2010). Does training managers enhance the effects of implementing team-working? A longitudinal, mixed methods field study. Human Relations, 63, 1719–1741.
*Nielsen, P. E., Goldman, M. B., Mann, S., Shapiro, D. E., Marcus, R. G., Pratt, S. D., . . . Sachs, B. P. (2007). Effects of teamwork training on adverse outcomes and process of care in labor and delivery: A randomized controlled trial. Obstetrics and Gynecology, 109, 48–55. https://doi.org/10.1097/01.AOG.0000250900.53126.c2
*Noe, R. A., & Schmitt, N. (1986). The influence of trainee attitudes on training effectiveness: Test of a model. Personnel Psychology, 39, 497–523. https://doi.org/10.1111/j.1744-6570.1986.tb00950.x
*Noonan, L. E., & Sulsky, L. M. (2001). Impact of frame-of-reference and behavioral observation training on alternative training effectiveness criteria in a Canadian military sample. Human Performance, 14, 3–26. https://doi.org/10.1207/S15327043HUP1401_02
Nunnally, J. C. (1978). Psychometric theory. New York, NY: McGraw-Hill.
*O’Connor, P., Byrne, D., O’Dea, A., McVeigh, T. P., & Kerin, M. J. (2013). “Excuse me:” teaching interns to speak up. Joint Commission Journal on Quality and Patient Safety, 39, 426–431.
*Oh, Y., & Chun, S. (2013). Board 393-research abstract analysis on team dynamics performance in ACLS of new medical residents (Submission #905). Simulation in Healthcare, 8, 573. https://doi.org/10.1097/01.SIH.0000441645.75612.ef
*Ong, M. E. H., Quah, J. L. J., Annathurai, A., Noor, N. M., Koh, Z. X., Tan, K. B. K., . . . Fook-Chong, S. (2013). Improving the quality of cardiopulmonary resuscitation by training dedicated cardiac arrest teams incorporating a mechanical load-distributing device at the emergency department. Resuscitation, 84, 508–514. https://doi.org/10.1016/j.resuscitation.2012.07.033
Oreg, S., Vakola, M., & Armenakis, A. (2011). Change recipients’ reactions to organizational change: A 60-year review of quantitative studies. Journal of Applied Behavioral Science, 47, 461–524. https://doi.org/10.1177/0021886310396550
*Østergaard, H. T., Østergaard, D., & Lippert, A. (2004). Implementation of team training in medical education in Denmark. Quality & Safety in Healthcare, 13, i91–i95. https://doi.org/10.1136/qshc.2004.009985
*Paige, J. T., Garbee, D. D., Kozmenko, V., Yu, Q., Kozmenko, L., Yang, T., . . . Swartz, W. (2014). Getting a head start: High-fidelity, simulation-based operating room team training of interprofessional students. Journal of the American College of Surgeons, 218, 140–149. https://doi.org/10.1016/j.jamcollsurg.2013.09.006
*Paige, J. T., Kozmenko, V., Yang, T., Gururaja, R. P., Hilton, C. W., Cohn, I., Jr., & Chauvin, S. W. (2009). Attitudinal changes resulting from repetitive training of operating room personnel using of high-fidelity simulation at the point of care. The American Surgeon, 75, 584–590.
Patel, L. (2010). State of the industry report: ASTD’s definitive review of workplace learning and development trends. Alexandria, VA: American Society for Training & Development.
*Patterson, M. D., Geis, G. L., LeMaster, T., & Wears, R. L. (2013). Impact of multidisciplinary simulation-based training on patient safety in a paediatric emergency department. British Medical Journal Quality & Safety, 22, 383–393. https://doi.org/10.1136/bmjqs-2012-000951
*Paull, D. E., Mazzia, L. M., Wood, S. D., Theis, M. S., Robinson, L. D., Carney, B., . . . Bagian, J. P. (2010). Briefing guide study: Preoperative briefing and postoperative debriefing checklists in the Veterans Health Administration medical team training program. American Journal of Surgery, 200, 620–623. https://doi.org/10.1016/j.amjsurg.2010.07.011
*Peckler, B., Prewett, M. S., Campbell, T., & Brannick, M. (2012). Teamwork in the trauma room: Evaluation of a multimodal team training program. Journal of Emergencies, Trauma, and Shock, 5, 23–27. https://doi.org/10.4103/0974-2700.93106
Pelled, L. H. (1996). Demographic diversity, conflict, and work group outcomes: An intervening process theory. Organization Science, 7, 615–631. https://doi.org/10.1287/orsc.7.6.615
Pham, J. C., Story, J. L., Hicks, R. W., Shore, A. D., Morlock, L. L., Cheung, D. S., . . . Pronovost, P. J. (2011). National study on the frequency, types, causes, and consequences of voluntarily reported emergency department medication errors. The Journal of Emergency Medicine, 40, 485–492. https://doi.org/10.1016/j.jemermed.2008.02.059
*Posmontier, B., Montgomery, K., Smith Glasgow, M. E., Montgomery, O. C., & Morse, K. (2012). Transdisciplinary teamwork simulation in obstetrics-gynecology healthcare education. The Journal of Nursing Education, 51, 176–179. https://doi.org/10.3928/01484834-20120127-02
*Pratt, S. D., Mann, S., Salisbury, M., Greenberg, P., Marcus, R., Stabile, B., . . . Sachs, B. P. (2007). John M. Eisenberg Patient Safety and Quality Awards. Impact of CRM-based training on obstetric outcomes and clinicians’ patient safety attitudes. Joint Commission Journal on Quality and Patient Safety, 33, 720–725.
Preacher, K. J., & Selig, J. P. (2012). Advantages of Monte Carlo confidence intervals for indirect effects. Communication Methods and Measures, 6, 77–98. https://doi.org/10.1080/19312458.2012.679848
*Prewett, M. S., Brannick, M. T., & Peckler, B. (2013). Training teamwork in medicine: An active approach using role play and feedback. Journal of Applied Social Psychology, 43, 316–328. https://doi.org/10.1111/j.1559-1816.2012.01001.x
*Quiñones, M. A., Ford, J. K., Sego, D. J., & Smith, E. M. (1995). The effects of individual and transfer environment characteristics on the opportunity to perform trained tasks. Training and Research Journal, 1, 29–49.
Raby, C. C. (2007). Safe and efficient obstetrical care: A solution at last. Retrieved from https://awhonn.confex.com/awhonn/2007/recordingredirect.cgi/id/110
*Ralyea, C. M. (2013). For labor and delivery staff, how does the implementation of TeamSTEPPS compared to current practice impact quality indicators over a six-month period? (Doctoral dissertation). Retrieved from ProQuest.
*Reeves, E. T., & Jensen, J. M. (1972). Effectiveness of program evaluation. Training and Development Journal, 26, 36–41.
*Renz, S. M., Boltz, M. P., Wagner, L. M., Capezuti, E. A., & Lawrence, T. E. (2013). Examining the feasibility and utility of an SBAR protocol in long-term care. Geriatric Nursing, 34, 295–301. https://doi.org/10.1016/j.gerinurse.2013.04.010
Revans, R. (1982). The origins and growth of action learning. Lund, Sweden: Studentlitteratur.
*Ricci, M. A., & Brumsted, J. R. (2012). Crew resource management: Using aviation techniques to improve operating room safety. Aviation, Space, and Environmental Medicine, 83, 441–444. https://doi.org/10.3357/ASEM.3149.2012
*Riethmüller, M., Fernandez Castelao, E., Eberhardt, I., Timmermann, A., & Boos, M. (2012). Adaptive coordination development in student anaesthesia teams: A longitudinal study. Ergonomics, 55, 55–68. https://doi.org/10.1080/00140139.2011.636455
*Riley, W., Davis, S., Miller, K., Hansen, H., Sainfort, F., & Sweet, R. (2011). Didactic and simulation nontechnical skills team training to improve perinatal patient outcomes in a community hospital. Joint Commission Journal on Quality and Patient Safety, 37, 357–364.
*Roberts, N. K., Williams, R. G., Schwind, C. J., Sutyak, J. A., McDowell, C., Griffen, D., . . . Wetter, N. (2014). The impact of brief team communication, leadership and team behavior training on ad hoc team performance in trauma care settings. American Journal of Surgery, 207, 170–178. https://doi.org/10.1016/j.amjsurg.2013.06.016
*Robertson, B., Kaplan, B., Atallah, H., Higgins, M., Lewitt, M. J., & Ander, D. S. (2010). The use of simulation and a modified TeamSTEPPS curriculum for medical and nursing student team training. Simulation in Healthcare, 5, 332–337. https://doi.org/10.1097/SIH.0b013e3181f008ad
*Robertson, B., Schumacher, L., Gosman, G., Kanfer, R., Kelley, M., & DeVita, M. (2009). Simulation-based crisis team training for multidisciplinary obstetric providers. Simulation in Healthcare, 4, 77–83. https://doi.org/10.1097/SIH.0b013e31819171cd
*Saks, A. M. (1993, August). Moderating and mediating effects of self-efficacy for the relationship between training and newcomer adjustment. Paper presented at the Academy of Management, Montreal, Quebec.
Salas, E., & Cannon-Bowers, J. A. (1997). Training for a rapidly changing workplace. In M. A. Quinones & A. Ehrenstein (Eds.), Applications of psychological research (pp. 249–279). Washington, DC: American Psychological Association.
Salas, E., & Cannon-Bowers, J. A. (2000). The anatomy of team training. In S. Tobias & J. D. Fletcher (Eds.), Training and retraining: A handbook for business, industry, government, and the military (pp. 312–335). New York, NY: Macmillan Reference USA.
Salas, E., & Cannon-Bowers, J. A. (2001). The science of training: A decade of progress. Annual Review of Psychology, 52, 471–499. https://doi.org/10.1146/annurev.psych.52.1.471
Salas, E., DiazGranados, D., Klein, C., Burke, C. S., Stagl, K. C., Goodwin, G. F., & Halpin, S. M. (2008). Does team training improve team performance? A meta-analysis. Human Factors, 50, 903–933. https://doi.org/10.1518/001872008X375009
Salas, E., Klein, C., King, H., Salisbury, M., Augenstein, J. S., Birnbach, D. J., . . . Upshaw, C. (2008). Debriefing medical teams: 12 evidence-based best practices and tips. Joint Commission Journal on Quality and Patient Safety, 34, 518–527.
Salas, E., Nichols, D. R., & Driskell, J. E. (2007). Testing three team training strategies in intact teams: A meta-analysis. Small Group Research, 38, 471–488. https://doi.org/10.1177/1046496407304332
Salas, E., Tannenbaum, S. I., Kraiger, K., & Smith-Jentsch, K. A. (2012). The science of training and development in organizations: What matters in practice. Psychological Science in the Public Interest, 13, 74–101. https://doi.org/10.1177/1529100612436661
*Sawyer, T., Laubach, V. A., Hudak, J., Yamamura, K., & Pocrnich, A. (2013). Improvements in teamwork during neonatal resuscitation after interprofessional TeamSTEPPS training. Neonatal Network, 32, 26–33. https://doi.org/10.1891/0730-0832.32.1.26
*Sax, H. C., Browne, P., Mayewski, R. J., Panzer, R. J., Hittner, K. C., Burke, R. L., & Coletta, S. (2009). Can aviation-based team training elicit sustainable behavioral change? Archives of Surgery, 144, 1133–1137. https://doi.org/10.1001/archsurg.2009.207
Schmidt, R. A., & Bjork, R. A. (1992). New conceptualizations of practice: Common principles in three paradigms suggest new concepts for training. Psychological Science, 3, 207–217. https://doi.org/10.1111/j.1467-9280.1992.tb00029.x
*Sculli, G. L., Fore, A. M., West, P., Neily, J., Mills, P. D., & Paull, D. E. (2013). Nursing crew resource management: A follow-up report from the Veterans Health Administration. The Journal of Nursing Administration, 43, 122–126. https://doi.org/10.1097/NNA.0b013e318283dafa
*Severin, D. (1952). The predictability of various kinds of criteria. Personnel Psychology, 5, 93–104. https://doi.org/10.1111/j.1744-6570.1952.tb01002.x
Shadish, W. R. (1996). Meta-analysis and the exploration of causal mediating processes: A primer of examples, methods, and issues. Psychological Methods, 1, 47–65. https://doi.org/10.1037/1082-989X.1.1.47
*Shah, S. H., Heitmann, D., Mangolds, V., Zgurzynski, P., & Bird, S. B. (2014). Evaluating the implementation of an interprofessional TeamSTEPPS curriculum for medical students using high fidelity simulation. Western Journal of Emergency Medicine: Integrating Emergency Care with Population Health, 15, 102.
*Shapiro, M. J., Morey, J. C., Small, S. D., Langford, V., Kaylor, C. J., Jagminas, L., . . . Jay, G. D. (2004). Simulation based teamwork training for emergency department staff: Does it improve clinical team performance when added to an existing didactic teamwork curriculum? Quality & Safety in Healthcare, 13, 417–421. https://doi.org/10.1136/qshc.2003.005447
*Shea-Lewis, A. (2009). Teamwork: Crew resource management in a community hospital. Journal for Healthcare Quality, 31, 14–18. https://doi.org/10.1111/j.1945-1474.2009.00042.x
Shekelle, P. G., Pronovost, P. J., Wachter, R. M., McDonald, K. M., Schoelles, K., Dy, S. M., . . . Walshe, K. (2013). The top patient safety strategies that can be encouraged for adoption now. Annals of Internal Medicine, 158, 365–368. https://doi.org/10.7326/0003-4819-158-5-201303051-00001
*Sherwood, G., Frush, K., & Hollar, D. (2008, July). Measuring the effects of interdisciplinary team training with multi-site medical and nursing students. Paper presented at the 19th International Nursing Research Congress, Singapore.
*Siassakos, D., Hasafa, Z., Sibanda, T., Fox, R., Donald, F., Winter, C., & Draycott, T. (2009). Retrospective cohort study of diagnosis-delivery interval with umbilical cord prolapse: The effect of team training. British Journal of Obstetrics and Gynaecology, 116, 1089–1096. https://doi.org/10.1111/j.1471-0528.2009.02179.x
*Sigalet, E. L. (2012). The design, integration and assessment of a simulation-based team training curriculum delivered to groups of medical, nursing and respiratory therapist students (Doctoral Dissertation). Retrieved from ProQuest Dissertations Publishing.
*Sigalet, E. L., Donnon, T. L., & Grant, V. (2015). Insight into team competence in medical, nursing and respiratory therapy students. Journal of Interprofessional Care, 29, 62–67. https://doi.org/10.3109/13561820.2014.940416
*Sijstermans, R., Jaspers, M. W., Bloemendaal, P. M., & Schoonderwaldt, E. M. (2007). Training inter-physician communication using the Dynamic Patient Simulator. International Journal of Medical Informatics, 76, 336–343. https://doi.org/10.1016/j.ijmedinf.2007.01.007
Sir, M. Y., Dundar, B., Barker Steege, L. M., & Pasupathy, K. S. (2015). Nurse-patient assignment models considering patient acuity metrics and nurses’ perceived workload. Journal of Biomedical Informatics, 55, 237–248. https://doi.org/10.1016/j.jbi.2015.04.005
Sitzmann, T., Brown, K. G., Casper, W. J., Ely, K., & Zimmerman, R. D. (2008). A review and meta-analysis of the nomological network of trainee reactions. Journal of Applied Psychology, 93, 280–295. https://doi.org/10.1037/0021-9010.93.2.280
*Smith, K. A. (1994). Narrowing the gap between performance and potential: The effects of team climate on the transfer of assertiveness training (Unpublished doctoral dissertation). University of South Florida, Tampa, FL.
*Smith, P. E. (1976). Management modeling training to improve morale and customer satisfaction. Personnel Psychology, 29, 351–359. https://doi.org/10.1111/j.1744-6570.1976.tb00419.x
Smither, J. W., London, M., & Reilly, R. R. (2005). Does performance improve following multisource feedback? A theoretical model, meta-analysis, and review of empirical findings. Personnel Psychology, 58, 33–66. https://doi.org/10.1111/j.1744-6570.2005.514_1.x
Smith-Jentsch, K. A., Cannon-Bowers, J. A., Tannenbaum, S. I., & Salas, E. (2008). Guided team self-correction: Impacts on team mental models, processes, and effectiveness. Small Group Research, 39, 303–327. https://doi.org/10.1177/1046496408317794
*Sonesh, S. C., Gregory, M. E., Hughes, A. M., Feitosa, J., Benishek, L. E., Verhoeven, D., . . . Salas, E. (2015). Team training in obstetrics: A multi-level evaluation. Families, Systems, & Health, 33, 250–261. https://doi.org/10.1037/fsh0000148
*Spiva, L., Robertson, B., Delk, M. L., Patrick, S., Kimrey, M. M., Green, B., & Gallagher, E. (2014). Effectiveness of team training on fall prevention. Journal of Nursing Care Quality, 29, 164–173. https://doi.org/10.1097/NCQ.0b013e3182a98247
*Stead, K., Kumar, S., Schultz, T. J., Tiver, S., Pirone, C. J., Adams, R. J., & Wareham, C. A. (2009). Teams communicating through STEPPS. The Medical Journal of Australia, 190, S128–S132.
*Steinemann, S., Berg, B., Skinner, A., DiTulio, A., Anzelon, K., Terada, K., . . . Speck, C. (2011). In situ, multidisciplinary, simulation-based teamwork training improves early trauma care. Journal of Surgical Education, 68, 472–477. https://doi.org/10.1016/j.jsurg.2011.05.009
Stevens, M. J., & Campion, M. A. (1994). The knowledge, skill, and ability requirements for teamwork: Implications for human resource management. Journal of Management, 20, 503–530. https://doi.org/10.1177/014920639402000210
*Stroud, P. V. (1959). Evaluating a human relations training program. Personnel, 36, 52–60.
Sundstrom, E., De Meuse, K. P., & Futrell, D. (1990). Work teams: Applications and effectiveness. American Psychologist, 45, 120–133. https://doi.org/10.1037/0003-066X.45.2.120
*Suva, D., Haller, G., Lübbeke, A., & Hoffmeyer, P. (2012). Differential impact of a crew resource management program according to professional specialty. American Journal of Medical Quality, 27, 313–320. https://doi.org/10.1177/1062860611423805
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12, 257–285. https://doi.org/10.1207/s15516709cog1202_4
*Tannenbaum, S. I., Mathieu, J. E., Salas, E., & Cannon-Bowers, J. A. (1991). Meeting trainees’ expectations: The influence of training fulfillment on the development of commitment, self-efficacy, and motivation. Journal of Applied Psychology, 76, 759–769. https://doi.org/10.1037/0021-9010.76.6.759
Taplin, S. H., Weaver, S., Chollette, V., Marks, L. B., Jacobs, A., Schiff, G., . . . Salas, E. (2015). Teams and teamwork during a cancer diagnosis: Interdependency within and between teams. Journal of Oncology Practice, 11, 231–238. https://doi.org/10.1200/JOP.2014.003376
*Tapson, V. F., Karcher, R. B., & Weeks, R. (2011). Crew resource management and VTE prophylaxis in surgery: A quality improvement initiative. American Journal of Medical Quality, 26, 423–432. https://doi.org/10.1177/1062860611404694
*Taylor, C. R., Hepworth, J. T., Buerhaus, P. I., Dittus, R., & Speroff, T. (2007). Effect of crew resource management on diabetes care and patient outcomes in an inner-city primary care clinic. Quality & Safety in Healthcare, 16, 244–247. https://doi.org/10.1136/qshc.2006.019042
Taylor, P. J., Russ-Eft, D. F., & Chan, D. W. L. (2005). A meta-analytic review of behavior modeling training. Journal of Applied Psychology, 90, 692–709. https://doi.org/10.1037/0021-9010.90.4.692
*Tena-Nelson, R., Santos, K., Weingast, E., Amrhein, S., Ouslander, J., & Boockvar, K. (2012). Reducing potentially preventable hospital transfers: Results from a thirty nursing home collaborative. Journal of the American Medical Directors Association, 13, 651–656. https://doi.org/10.1016/j.jamda.2012.06.011
Tharenou, P., Saks, A. M., & Moore, C. (2007). A review and critique of research on training and organizational-level outcomes. Human Resource Management Review, 17, 251–273. https://doi.org/10.1016/j.hrmr.2007.07.004
*Thayer, P. W., Antoinetti, J. A., & Guest, T. A. (1958). Product knowledge and performance: A study of life insurance agents. Personnel Psychology, 11, 411–418. https://doi.org/10.1111/j.1744-6570.1958.tb00029.x
*Theilen, U., Leonard, P., Jones, P., Ardill, R., Weitz, J., Agrawal, D., & Simpson, D. (2013). Regular in situ simulation training of paediatric medical emergency team improves hospital response to deteriorating patients. Resuscitation, 84, 218–222. https://doi.org/10.1016/j.resuscitation.2012.06.027
The Joint Commission. (2014). National patient safety goals effective January 1, 2014: Hospital accreditation program. Retrieved from http://www.jointcommission.org/assets/1/6/HAP_NPSG_Chapter_2014.pdf
Thorndike, E. L., & Woodworth, R. S. (1901). The influence of improvement in one mental function upon the efficiency of other functions. Psychological Review, 8, 247–261. https://doi.org/10.1037/h0074898
*Tofil, N. M., Morris, J. L., Peterson, D. T., Watts, P., Epps, C., Harrington, K. F., . . . White, M. L. (2014). Interprofessional simulation training improves knowledge and teamwork in nursing and medical students during internal medicine clerkship. Journal of Hospital Medicine, 9, 189–192. https://doi.org/10.1002/jhm.2126
Tomkins, S. S. (1984). Affect theory. In K. R. Scherer & P. Ekman (Eds.), Approaches to emotion (pp. 163–195). New York, NY: Taylor & Francis.
*Tracey, J. B., Tannenbaum, S. I., & Kavanagh, M. J. (1995). Applying trained skills on the job: The importance of the work environment. Journal of Applied Psychology, 80, 239–252. https://doi.org/10.1037/0021-9010.80.2.239
*Treadwell, J., Binder, B., Symes, L., & Krepper, R. (2015). Delivering team training to medical home staff to impact perceptions of collaboration. Professional Case Management, 20, 81–88. https://doi.org/10.1097/NCM.0000000000000066
*Truijens, S., Banga, F., Oei, G., & van Runnard Heimel, P. (2014). Transmural multi-professional simulation-based obstetric team training and its effect on quality of care as perceived by women who recently gave birth. Simulation in Healthcare, 9, 435. https://doi.org/10.1097/01.SIH.0000459334.81049.1d
*Truijens, S., Fransen, A., & Oei, G. (2013). Changes in satisfaction with team functioning after multi-professional simulation-based medical team training. Simulation in Healthcare, 8, 599. https://doi.org/10.1097/01.SIH.0000441685.72933.a9
*Tschannen, D., McClish, D., Aebersold, M., & Rohde, J. M. (2015). Targeted communication intervention using nursing crew resource management principles. Journal of Nursing Care Quality, 30, 7–11. https://doi.org/10.1097/NCQ.0000000000000073
*Tziner, A., & Falbe, C. M. (1993). Training-related variables, gender and training outcomes: A field investigation. International Journal of Psychology, 28, 203–221. https://doi.org/10.1080/00207599308247185
*van Eerde, W., Simon Tang, K. C., & Talbot, G. (2008). The mediating role of training utility in the relationship between training needs assessment and organizational effectiveness. The International Journal of Human Resource Management, 19, 63–73. https://doi.org/10.1080/09585190701763917
*van Schaik, S. M., Plant, J., Diane, S., Tsang, L., & O’Sullivan, P. (2011). Interprofessional team training in pediatric resuscitation: A low-cost, in situ simulation program that enhances self-efficacy among participants. Clinical Pediatrics, 50, 807–815. https://doi.org/10.1177/0009922811405518
*Velada, R., & Caetano, A. (2007). Training transfer: The mediating role of perception of learning. Journal of European Industrial Training, 31, 283–296. https://doi.org/10.1108/03090590710746441
*Vertino, K. A. (2014). Evaluation of a TeamSTEPPS© initiative on staff attitudes toward teamwork. The Journal of Nursing Administration, 44, 97–102. https://doi.org/10.1097/NNA.0000000000000032
Viswesvaran, C., & Ones, D. S. (1995). Theory testing: Combining psychometric meta-analysis and structural equations modeling. Personnel Psychology, 48, 865–885. https://doi.org/10.1111/j.1744-6570.1995.tb01784.x
Vroom, V. H. (1964). Work and motivation. New York, NY: Wiley.
*Wadsworth, N. S. (2006). The effects of an interdisciplinary team training intervention on trainee attitudes toward teamwork (Doctoral dissertation). Case Western Reserve University (3240542).
*Walker, E. M., Mwaria, M., Coppola, N., & Chen, C. (2014). Improving the replication success of evidence-based interventions: Why a preimplementation phase matters. Journal of Adolescent Health, 54, S24–S28. https://doi.org/10.1016/j.jadohealth.2013.11.028
*Wallin, C. J., Kalman, S., Sandelin, A., Färnert, M. L., Dahlstrand, U., & Jylli, L. (2015). Creating an environment for patient safety and teamwork training in the operating theatre: A quasi-experimental study. Medical Teacher, 37, 267–276. https://doi.org/10.3109/0142159X.2014.947927
*Warr, P., Allan, C., & Birdi, K. (1999). Predicting three levels of training outcome. Journal of Occupational and Organizational Psychology, 72, 351–375. https://doi.org/10.1348/096317999166725
*Warr, P., & Bunce, D. (1995). Trainee characteristics and the outcomes of open learning. Personnel Psychology, 48, 347–375. https://doi.org/10.1111/j.1744-6570.1995.tb01761.x
*Watts, B. V., Percarpio, K., West, P., & Mills, P. D. (2010). Use of the Safety Attitudes Questionnaire as a measure in patient safety improvement. Journal of Patient Safety, 6, 206–209. https://doi.org/10.1097/PTS.0b013e3181fbbe86
Weaver, S. J., Dy, S. M., & Rosen, M. A. (2014). Team-training in healthcare: A narrative synthesis of the literature. British Medical Journal Quality & Safety, 23, 359–372. https://doi.org/10.1136/bmjqs-2013-001848
Weaver, S. J., Lyons, R., DiazGranados, D., Rosen, M. A., Salas, E., Oglesby, J., . . . King, H. B. (2010). The anatomy of healthcare team training and the state of practice: A critical review. Academic Medicine, 85, 1746–1760. https://doi.org/10.1097/ACM.0b013e3181f2e907
*Weaver, S. J., Rosen, M. A., DiazGranados, D., Lazzara, E. H., Lyons, R., Salas, E., . . . King, H. B. (2010). Does teamwork improve performance in the operating room? A multilevel evaluation. Joint Commission Journal on Quality and Patient Safety, 36, 133–142.
Weiss, H. M. (1990). Learning theory and industrial and organizational psychology. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial and organizational psychology (2nd ed., Vol. 1, pp. 171–221). Palo Alto, CA: Consulting Psychologists Press.
*Weitzman, D. O., Fineberg, M. L., Gade, P. A., & Compton, G. L. (1979). Proficiency maintenance and assessment in an instrument flight simulator. Human Factors, 21, 701–710.
*Wenzel, R. (2014). Pathways to training transfer: Proactive transfer behaviour and hope at work. Academy of Management Proceedings, 2014, 17456.
*Werner, J. M., O’Leary-Kelly, A. M., Baldwin, T. T., & Wexley, K. N. (1994). Augmenting behavior-modeling training: Testing the effects of pre- and post-training interventions. Human Resource Development Quarterly, 5, 169–183. https://doi.org/10.1002/hrdq.3920050207
*Wexley, K. N., & Baldwin, T. T. (1986). Post-training strategies for facilitating positive transfer: An empirical exploration. Academy of Management Journal, 29, 503–520. https://doi.org/10.2307/256221
Whitener, E. M. (1990). Confusion of confidence intervals and credibility intervals in meta-analysis. Journal of Applied Psychology, 75, 315–321. https://doi.org/10.1037/0021-9010.75.3.315
Wildman, J. L., Thayer, A. L., Rosen, M. A., Salas, E., Mathieu, J. E., & Rayne, S. R. (2012). Task types and team-level attributes: Synthesis of team classification literature. Human Resource Development Review, 11, 97–129. https://doi.org/10.1177/1534484311417561
*Wolf, F. A., Way, L. W., & Stewart, L. (2010). The efficacy of medical team training: Improved team performance and decreased operating room delays: A detailed analysis of 4863 cases. Annals of Surgery, 252, 477–483.
Woods, R. (2009). Industry output and employment projections to 2018. Monthly Labor Review, 132, 52–81.
Wright, P. M., McCormick, B., Sherman, W. S., & McMahan, G. C. (1999). The role of human resource practices in petro-chemical refinery performance. International Journal of Human Resource Management, 10, 551–571. https://doi.org/10.1080/095851999340260
Wright, P. M., & McMahan, G. C. (1992). Theoretical perspectives for strategic human resource management. Journal of Management, 18, 295–320. https://doi.org/10.1177/014920639201800205
*Youngblood, P., Harter, P. M., Srivastava, S., Moffett, S., Heinrichs, W. L., & Dev, P. (2008). Design, development, and evaluation of an online virtual emergency department for training trauma teams. Simulation in Healthcare, 3, 146–153. https://doi.org/10.1097/SIH.0b013e31817bedf7
Zapp, L. (2001). Use of multiple teaching strategies in the staff development setting. Journal for Nurses in Staff Development, 17, 206–212. https://doi.org/10.1097/00124645-200107000-00011
Zou, G. Y. (2007). Toward using confidence intervals to compare correlations. Psychological Methods, 12, 399–413. https://doi.org/10.1037/1082-989X.12.4.399
Abstract
As the nature of work becomes more complex, teams have become necessary to ensure effective functioning within organizations. The healthcare industry is no exception. As such, the prevalence of training interventions designed to optimize teamwork in this industry has increased substantially over the last 10 years (Weaver, Dy, & Rosen, 2014). Using Kirkpatrick’s (1956, 1996) training evaluation framework, we conducted a meta-analytic examination of healthcare team training to quantify its effectiveness and understand the conditions under which it is most successful. First, results demonstrate that healthcare team training improves each of Kirkpatrick’s criteria (reactions, learning, transfer, results; d = .37 to .89). Second, findings indicate that healthcare team training is largely robust to trainee composition, training strategy, and characteristics of the work environment, with the only exception being the reduced effectiveness of team training programs that involve feedback. Third, we proposed and found empirical support for a sequential model of healthcare team training in which training enhances learning, learning facilitates transfer, and transfer in turn improves results. We find support for this sequential model in the healthcare industry (i.e., the current meta-analysis) and in training across all industries (i.e., using meta-analytic estimates from Arthur, Bennett, Edens, & Bell, 2003), suggesting the sequential benefits of training are not unique to medical teams. Ultimately, this meta-analysis supports the expanded use of team training and points toward recommendations for optimizing its effectiveness within healthcare settings.
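To make the pooling step behind effect sizes such as those above (d = .37 to .89) concrete, the short Python sketch below illustrates a generic inverse-variance random-effects aggregation (the DerSimonian-Laird estimator). The study-level effect sizes and sampling variances shown are hypothetical placeholders; this is a standard textbook procedure offered only for illustration, not the authors' data and not necessarily the exact estimation approach used in this meta-analysis.

# Illustrative only: generic DerSimonian-Laird random-effects pooling of
# standardized mean differences (d). All numbers below are hypothetical.
import numpy as np

d = np.array([0.25, 0.40, 0.55, 0.70, 0.90])   # hypothetical study-level d values
v = np.array([0.04, 0.03, 0.05, 0.02, 0.06])   # hypothetical sampling variances

w = 1.0 / v                                    # fixed-effect (inverse-variance) weights
d_fixed = np.sum(w * d) / np.sum(w)

# Heterogeneity: Q statistic and between-study variance tau^2
Q = np.sum(w * (d - d_fixed) ** 2)
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(d) - 1)) / C)

# Random-effects weights add tau^2 to each study's sampling variance
w_re = 1.0 / (v + tau2)
d_re = np.sum(w_re * d) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

print(f"Pooled d = {d_re:.2f}, 95% CI [{d_re - 1.96 * se_re:.2f}, {d_re + 1.96 * se_re:.2f}]")

A random-effects model is shown because it allows the true training effect to vary across studies, which is the usual assumption when aggregating heterogeneous training evaluations.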