“A debriefer must be neutral” and other debriefing myths: a systemic inquiry-based qualitative study of taken-for-granted beliefs about clinical post-event debriefing

Abstract

Background

The goal of this study was to identify taken-for-granted beliefs and assumptions about the use, costs, and facilitation of post-event debriefing. These myths prevent the widespread uptake of post-event debriefing in clinical units, and thereby the identification of the process, teamwork, and latent safety threats that lead to medical error. By naming these false barriers and assumptions, the authors believe that clinical event debriefing can be implemented more broadly.

Methods

We interviewed an international sample of 37 clinicians, educators, scholars, researchers, and healthcare administrators from hospitals, universities, and healthcare organizations in Western Europe and the USA, who had a broad range of debriefing experience. We adopted a systemic-constructivist approach that aimed at exploring in-depth assumptions about debriefing beyond obvious constraints such as time and logistics, and that focused on interpersonal relationships within organizations. Using circular questions, we intended to uncover new and tacit knowledge about barriers to and facilitators of regular clinical debriefings. All interviews were transcribed and analyzed following a comprehensive process of inductive open coding.

Results

In total, 1508.62 min of interviews (25 h, 9 min, and 2 s) were analyzed, and 1591 answers were categorized. Many implicit debriefing theories reflected current scientific evidence, particularly with respect to debriefing value and topics, the complexity and difficulty of facilitation, the importance of structuring the debriefing and engaging in reflective practice to advance debriefing skills. We also identified four debriefing myths which may prevent post-event debriefing from being implemented in clinical units.

Conclusion

The debriefing myths include (1) debriefing only when disaster strikes, (2) debriefing is a luxury, (3) senior clinicians should determine debriefing content, and (4) debriefers must be neutral and nonjudgmental. These myths offer valuable insights into why current debriefing practices are ad hoc and not embedded into daily unit practices. They may help ignite renewed momentum for the implementation of post-event debriefing in clinical settings.

Introduction

Post-event debriefing is held in clinical settings among healthcare providers and is an educational, team learning, and patient safety intervention [1,2,3,4]. It is a guided learning conversation among participants that aims to explore and understand the relationships among events, actions, thought and feeling processes, and performance outcomes of a clinical situation [5,6,7,8,9,10]. Debriefing, also labelled after-action review [11, 12], is based on mutual reflection on clinical practice. Although debriefing is a core part of simulation-based training and clinical rehearsal, its use for learning from real events in the clinical environment is also well documented [10, 13]. It is a low-cost learning opportunity designed for multiple forms of healthcare teams [1, 2, 14, 15], which makes it particularly suited for managing effective and safe teamwork under uncertain, complex, and risky conditions, such as the COVID-19 pandemic [16, 17].

Empirical studies have demonstrated the effectiveness of learning-oriented debriefing [8, 14, 18,19,20]. Yet despite these obvious potential benefits, debriefing is still underused in the clinical environment [1, 21,22,23,24]. One contributing factor is the perceived difficulty of facilitating debriefing conversations [22, 25,26,27,28,29]. Although debriefing is best facilitated by trained debriefers, literature, courses, and videos are freely available on numerous approaches for how to structure debriefings, create a psychologically safe and engaging setting, use co-debriefing, and manage difficult debriefing situations [5, 23, 30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45]. Another contributing factor is logistical barriers such as high workload, interprofessional scheduling issues, social distancing, or lack of interest [22, 23, 46]. These logistical barriers can be addressed through the judicious scheduling of post-event debriefings. Even using no-go criteria, which are standard best practice for in situ simulation, may be useful [47].

Organization science may point to another set of barriers—and enablers—of debriefing: taken-for-granted beliefs about the use, costs, and benefits of debriefing. Also called lay or implicit theories, these taken-for-granted beliefs allow individuals to make a priori predictions, comparable to scientific theories [48]. Yet, in contrast to scientific theories, “lay theories need not be objective, testable, or true. Lay theories may be adopted to serve the self and to justify the state of affairs” (p. 18) [49]. They reflect organizational paradoxes, such as hospital management directing hospital staff to conduct debriefings while simultaneously triggering a culture of anxiety, time pressure, and control, thus preventing open communication [50]. As such, implicit debriefing theories may include myths which prevent healthcare personnel from engaging in debriefings. This is problematic because it severely impedes well-intentioned initiatives to establish regular team debriefings as low-cost learning opportunities.

The goal of this study was to identify implicit debriefing theories with a particular focus on debriefing myths—beliefs and misconceptions contradicting empirical evidence. Myths and implicit theories have important functions such as understanding, simplification, and self- and group-protection. They also determine individuals’ and teams’ actions [49]. We interviewed an international sample of clinicians, educators, scholars, healthcare administrators, and researchers who had a broad range of debriefing expertise, and adopted a systemic-constructivist approach that aimed at exploring in-depth assumptions about debriefing beyond obvious constraints such as time and logistics, and that focused on interpersonal relationships within organizations. In particular, using circular questions, we intended to uncover new and tacit knowledge about barriers to and facilitators of conducting regular clinical debriefings. Circular questions are based on social constructivism and on circular assumptions about an issue; they aim at exploring recurrent patterns and processes, generating information, fostering perspective taking, “fluidizing” problems, and putting actions into relational contexts [51,52,53,54,55,56,57]. They explore interactions with respect to differences in behavior rather than personality traits, ranking and classification, change in the relationship before and after an event, and differences with respect to hypothetical conditions [44, 55, 57].

In a variety of instances, debriefings are the only opportunity for learning. For example, during the COVID-19 pandemic, additional healthcare providers with various levels of training joined intensive care personnel to care for very sick patients in healthcare facilities globally. Teams had to form and function on the spot with little time for formal training. Post-event debriefings held at the end of each shift or, if feasible, during the shift as well can support such ad hoc teams during the pandemic and allow for lessons learned and process improvement before members disband and join other teams the following day [17, 58, 59]. In this study, we analyze implicit theories of post-event debriefings. A post-event debriefing is defined as any structured post-event discussion with the purpose of learning; its labels may vary across professions and organizations [11, 60]. Importantly, these debriefings are not to be confused with stress debriefings, a psychological intervention to prevent or treat traumatic experiences [61, 62]. Addressing the gaps in our current understanding of what facilitates effective debriefing is important for developing and targeting debriefing faculty development efforts for clinical faculty [63,64,65]. It will also contribute to implementing and maintaining a culture of debriefing and open conversations in healthcare organizations. Actionable knowledge on how to achieve in-depth reflection in debriefing may help mitigate cognitive load during debriefing [28], enhance debriefing skills and debriefing quality, and thus contribute to safer patient care. When the learnings from clinical debriefings have a clear pathway to be actioned, visible quality improvement follows, which may in turn inspire clinicians to debrief again to see yet more improvement.

Method

This study was ruled exempt by the local ethics committee.

Setting

Interviews were conducted within the scope of a research project investigating how structured debriefings can provide a suitable learning infrastructure for acute care teams in healthcare. Interviews were conducted by the first and last authors (JS and MK) with backgrounds in organizational psychology and simulation-based team training. All experts participated voluntarily in this study. Interviews took place during regular working hours and were usually conducted at the participants’ workplace. Participants did not receive any credits or benefits for their participation. In total, 1508.62 min (25 h, 9 min, 2 s) of interviews were recorded.

Participants

Of the 37 participants, 10 (27%) were nurses, 17 (46%) physicians, 6 (16%) scholars, and 4 (11%) education specialists and administrators from hospitals, universities, and healthcare organizations in Switzerland, Germany, and the USA. We deliberately approached experts who differed in both the quality and quantity of their debriefing experience. Some clinicians among our study participants had experience with debriefings in both clinical and simulation-based training settings. Other participants had—by the very nature of their occupation as administrators or scholars—not participated in numerous debriefings themselves but had, through research, counselling, or implementation, developed valuable and unique expertise on debriefings. We intended the sample to represent the diversity of healthcare professions who have reflected on how to initiate, conduct, and implement post-event, learning-oriented debriefings, and these participants brought varying and overlapping degrees of experience from multiple perspectives. As such, our recruiting strategy was threefold: First, we recruited clinicians based on our knowledge that they had valuable experience in conducting, participating in, or observing post-event debriefings in hospitals. Second, we recruited interview participants (clinical and non-clinical) who were renowned experts in the fields of team learning, team debriefing, and/or team reflexivity—they had not necessarily participated frequently in debriefings themselves. Third, we included healthcare administrators, some of whom had previous careers as clinicians. This recruiting strategy allowed us to capture a broad, multi-pronged perspective.

Data collection instrument

For the purpose of this study, a semi-structured interview guide with questions related to challenges and success criteria for debriefings in the clinical setting was designed. The data collection instrument was developed based on research in organizational behavior (with a particular focus on difficult conversations) [66,67,68,69,70], debriefings in healthcare [5, 26, 30, 38, 40, 63,64,65, 71,72,73,74,75], and circular questioning [44, 55]. Content areas were (1) experiences with debriefings in clinical and training settings; (2) characteristics of debriefings with respect to participants, place, duration, frequency, and organizational routines; (3) mental models with respect to the effectiveness of debriefings and the differences between debriefings and other kinds of conversations; (4) leadership in debriefings; (5) psychological safety; and (6) double-loop learning. Interviews were structured along these content areas, and targeted question types were used. For assessing participants’ homogeneity vs. heterogeneity with respect to certain attitudes, we used scaling questions, a question type derived from sociometry [76, 77] that prompts interviewees to quantify their experience (e.g., “On a scale from 1 to 10, how high do you consider the extent of discussing ‘unpopular topics’ in debriefings – 1 = only popular topics discussed; 10 = only unpopular topics discussed?”). For exploring statements in more detail, we applied a variety of circular questions, i.e., questions based on circular assumptions about relationships and interactions [44, 55] (e.g., “You mentioned earlier that debriefings are typically run by XY. How do you explain that?”). For desired yes/no answers, closed-ended questions were used (e.g., “Do you think there should be rules for how to conduct debriefings?”); for desired narrative answers, open-ended questions were used (e.g., “Which rules do you think there should be?”).

Data collection

At the beginning of each interview, we thanked the participants for taking the time for the interview and informed them about the study objectives and the anticipated interview length of about 30 min. One interview was conducted in English; the other interviews were conducted in German. The primary language of most participants was German. Consent for recording and transcription was obtained verbally. All participants were informed that the authors would not publish any answers that would reveal personal details of the interviewees. As we conducted interviews rather than conversations, we explained in advance that we would not engage in an in-depth discussion and would not comment on participants’ statements. We invited the participants to share anything beyond what we addressed in our questions that they considered relevant. We also asked for permission to record the interview. The recorded interviews did not contain any personal information about the participants. Data collection was anonymous and confidential; assignment of participants’ names to their interview data was not possible.

Data analysis

Formation of categories followed an inductive open coding process structured along the interview questions (Table 1) [80]. Depending on the selected data analysis method, answers can be excluded (e.g., arguments that do not fit the topic would not be analyzed). We decided to use all answers (N = 1591) and gradually formed data categories related to our assumptions and the concepts of the relevant literature. Answers were sorted step by step into the categories; if they did not fit into existing categories, new categories were developed inductively. After a portion of the data had been processed, the chosen categories were reviewed and examined for redundant or overlapping categories. After reviewing the coding scheme, the final version was established and a coding manual was written before answers were sorted into the final categories and relative frequencies for each category were determined. Although all participants were asked the full set of questions, they were free not to answer some of them; some questions were answered by only three participants. Due to the large amount of data, we decided that a question needed to be answered by at least one third of the participants (n = 12) to be categorized. As a final step, debriefing myths were extracted across categories.

Table 1 Data analysis process
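To make the tallying step described above concrete, the following is a minimal sketch of how coded answers could be aggregated into relative frequencies per question while applying the one-third inclusion threshold (n = 12). It is not the authors’ actual analysis pipeline; the record layout, identifiers, and category labels are hypothetical and chosen only for illustration.

```python
from collections import Counter, defaultdict

# Hypothetical coded data: one record per categorized answer, stored as
# (participant_id, question_id, category). The identifiers and category labels
# below are illustrative only and do not reproduce the study's coding manual.
coded_answers = [
    ("P01", "Q_taboo_topics", "mistakes_errors_deviations"),
    ("P02", "Q_taboo_topics", "hierarchy"),
    ("P03", "Q_taboo_topics", "mistakes_errors_deviations"),
    # ... remaining coded answers (N = 1591 in the study) ...
]

MIN_RESPONDENTS = 12  # at least one third of the 37 participants


def relative_frequencies(answers, min_respondents=MIN_RESPONDENTS):
    """Tally per-question category frequencies, skipping sparsely answered questions."""
    categories_by_question = defaultdict(list)
    respondents_by_question = defaultdict(set)
    for participant, question, category in answers:
        categories_by_question[question].append(category)
        respondents_by_question[question].add(participant)

    frequencies = {}
    for question, categories in categories_by_question.items():
        if len(respondents_by_question[question]) < min_respondents:
            continue  # answered by too few participants; not categorized further
        counts = Counter(categories)
        total = sum(counts.values())
        frequencies[question] = {
            category: round(100 * n / total, 1) for category, n in counts.items()
        }
    return frequencies


# With the three toy records above, the threshold is lowered so the example prints output.
print(relative_frequencies(coded_answers, min_respondents=1))
```

Note that this sketch covers only the final counting step; in the study itself, categories were additionally revised and extended inductively whenever answers did not fit the existing scheme.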

Results

As expected, reported practical debriefing experiences varied: Twenty-seven participants reported only some practical experience with post-event debriefings (i.e., conducting or participating in them). These participants shared their views from a clinical (i.e., many information-sharing meetings, but no explicitly established debriefing routine yet), scholarly (i.e., studying team reflexivity), and/or administrative (e.g., overseeing patient safety initiatives) perspective. Others reported significant experience with more than 100 debriefings (see Supplementary Table 1, Additional file 1).

In what follows, we first briefly report findings on debriefing characteristics and beliefs about debriefing value; results of the respective coding are included in the supplemental digital content (see Supplementary Table 2, Additional file 2 & Supplementary Table 3, Additional file 3). Second, we focus on findings with respect to beliefs about what to talk about and taboo topics in debriefing (Tables 2 and 3). Third, we describe reported beliefs about debriefing facilitation (Table 4 and Supplementary Table 4, Additional file 4) and about learning environments (see Supplementary Table 5, Additional file 5). The percentage of each answer is indicated in parentheses. We discuss implicit debriefing theories that qualify as debriefing myths in the subsequent discussion section.

Table 2 Beliefs about debriefing topics and who gets to decide on them
Table 3 Beliefs about taboo topics in debriefing
Table 4 Beliefs about facilitating debriefing

Characteristics of debriefings with respect to participants, place, duration, frequency, and organizational routines

Reported characteristics of debriefings differed vastly (see Supplementary Table 2, Additional file 2). Most commonly, participants described the conduct of various non-classical debriefing formats (39.3%; e.g., team meetings, case reviews), some described debriefings following critical events (33.9%), and others reported a lack of debriefings (14.3%). Frequency (e.g., several times a year (16.1%), rarely (12.9%), daily (12.9%)), duration (e.g., a few minutes (22.2%), half an hour (30.6%), three quarters of an hour (5.6%)), and participants (e.g., everybody involved (21.6%), physicians and nurses (18.9%), only nurses (18.9%)) differed as well. Who initiated the debriefing or meeting (e.g., attending physician (25%), most senior person (17.5%), nurses (17.5%)) and who led it (e.g., attending physician (28.6%), most senior person (25%), no one (14.3%)) also differed, with a tendency toward seniority. Most debriefings seemed to follow a specific structure (77.3%) rather than no structure (18.2%) and took place at various locations (e.g., private setting without disruptions (31.4%), anywhere (14.3%), break room (14.3%)).

Beliefs about value of debriefing

Participants saw value in conducting debriefings for a wide range of people (e.g., debriefing participants, 26.8%), including patients and their relatives (24.4%) and staff (19.5%) (see Supplementary Table 3, Additional file 3). Reported debriefing benefits included, among others, higher quality and fewer mistakes (40%), higher work satisfaction (28.9%), and more open communication (20%). Management and staff members with a negative attitude (34.6%) were perceived to benefit the least from debriefings, followed by people working in management (15.4%) and people who were not involved (15.4%). Saving time by not conducting debriefings was associated with less learning (38.5%). To support the use of debriefings, culture change (42%), tool availability (32%), and improved logistics (16%) were most frequently mentioned.

Beliefs about what to talk about and taboo topics in debriefings

Technical and medical issues (30.2%), teamwork (20.8%), and critical events and mistakes (18.9%) were the most frequently mentioned topics that people reported actually talking about in their debriefings (Table 2). When asked what should be discussed in debriefings, similar topics were mentioned (teamwork (29.3%), critical events and mistakes (13.9%)), with the frequent addition of emotions and perceptions (18.5%). The exploration of successful performance episodes was not mentioned. Interestingly, mistakes, errors, and deviations (32.9%) were also the most frequently mentioned taboo topic for debriefings, closely followed by hierarchy (20.6%) and personal issues (19.2%) (Table 3). Participants seemed divided with respect to the extent to which people discuss unpopular topics in debriefings and the respective timing; 47.6% answered that these topics tend to be addressed at the end of the debriefing. Either external people (25%), or trained and experienced (19.6%) or open-minded (17.7%) staff members, were considered best placed to address potentially unpopular topics.

The people initiating and facilitating the debriefing (31.2%), senior and more experienced staff members (27.9%), and physicians (21.3%) were described as having (and as those who ought to have) the most influence on what is discussed in debriefings, whereas people with certain personality characteristics (e.g., quiet, introverted members) were described as having the least influence (42.9%), followed by less powerful members (20%) and nurses (14.3%). Participants explained this mostly through the impact of hierarchy (26.5%), facilitation (26.5%), and experience and expertise (20.2%) (Table 2).

Beliefs about facilitating debriefing

Participants reported a variety of beliefs about facilitating debriefings (Table 4). When asked what they would do if they were able to change how debriefings were typically led, the majority would more clearly define the facilitation role (65.4%). Institutionalizing debriefing by implementing it in a top-down way and making it a deliberate routine was the second most frequent recommendation (26.9%), followed by applying a structure (7.7%). Facilitators should first and foremost apply a structure (28%), be curious (16%) as well as neutral (13.3%), and apply mechanisms for coordinating the conversation, such as setting the stage and previewing (13.3%) and giving space by asking questions and listening (10.7%). Participants reported detailed assumptions about what characterizes a good debriefer, such as being respectful and staying calm (24.65%), being skilled in facilitating conversations (21.5%), and being empathic (13.9%) (see Supplementary Table 4, Additional file 4), as well as actions which she/he should avoid, e.g., judging or taking sides (31.5%), being rude or humiliating learners (18.5%), or assuming to hold the truth (18.5%) (Table 4). The notion of the debriefer’s “neutrality” and avoidance of judgments occurred frequently. The biggest reported barriers to running a debriefing were a lack of debriefing skills (26%), fear that participants might not wish to be debriefed (19.2%), and being emotionally involved or not neutral (17.8%) (Table 4). To develop more confidence in being able to debrief, most participants reported the need for reflective practice (61.1%) rather than training (13.9%) or support (13.9%; e.g., preparation, tools).

Beliefs about the learning environment

Participants generally expressed regard for debriefing rules and structure (97%). They seemed hopeful that psychological safety—“the perception of the consequences of taking interpersonal risks” [69]—could be established for debriefing, particularly through providing structure, transparency, time management, an appropriate setting, and stated objectives (25.4%), providing confidentiality (15.9%), getting support via trained, neutral, and authentic debriefers (12.7%), and more (see Supplementary Table 5, Additional file 5). Participants seemed to prefer discussion rather than unilateral feedback in debriefings. However, they also reported that their colleagues would not have much patience for exploring potential reasons for mistakes in detail. The main reported hurdle to exploring mistakes in detail was repercussions (64.3%).

Discussion

The goal of this study was to identify implicit debriefing theories, i.e., taken-for-granted beliefs about the use, costs, and benefits of post-event debriefing. In contrast to scientific theories, implicit theories need not be objective, testable, or true [49]. They reflect organizational paradoxes [50] and may include myths which prevent healthcare personnel from engaging in debriefings. In what follows, we first summarize the implicit debriefing theories that seem to be in line with recent debriefing science. Second, we highlight debriefing myths, i.e., beliefs and misconceptions that contradict scientific debriefing evidence. Lastly, we point to study limitations.

Implicit debriefing theories in line with scientific evidence

In line with scientific evidence, the value of debriefings for debriefing participants, patients and their relatives, and employees [1, 15, 17, 20] was widely shared among the study participants—a value that might not be easily visible to management. Participants were also mindful of the delicacy of choosing debriefing topics [67]. Likewise, participants seemed aware of the complexity and difficulty of facilitating debriefings [25], the importance of structuring the debriefing conversation [8, 23, 75], and the value of engaging in reflective practice to advance debriefing skills [26, 27]. The importance of organizational support for debriefing was also shared knowledge [74].

Debriefing myths

Based on the coded responses presented in detail above, we have identified four debriefing myths. They are presented in Table 5. In what follows, we discuss why each of these myths is problematic.

Table 5 Debriefing myths

Debriefing myth #1: debrief when disaster strikes

The first debriefing myth we identified through coding describes the assumption that debriefing should particularly—and almost exclusively—follow critical performance episodes and catastrophic events. While many respondents saw the benefit of regular debriefings, the conduct of debriefing seems associated with adverse events, errors, and mistakes—interestingly, with mistakes as the most frequently mentioned taboo topic (Tables 3 and 5). This notion seems common; even the recent WHO guidance for after-action review limits its scope to emergencies [12]. This is problematic for several reasons. First, employees may implicitly anchor debriefing with “something must have happened” or “I did something wrong” [81]. Since organizational culture tends to consider mistakes as something to avoid, employees may experience fear, anxiety, and embarrassment when asked to debrief and engage in face-saving strategies such as withdrawal, obscuring critique, and reluctance to speak up or to discuss mistakes [26, 68, 82,83,84]. This process may vastly limit debriefing effectiveness [41]. Second, psychological research has demonstrated that “bad is stronger than good”: bad events and negative information receive far more processing and impact than good events and positive information [85]. As a consequence, debriefing only when disaster strikes may undermine learning from positive performance episodes and overemphasize mistakes, error, and adversity. Third, recent safety approaches highlight the importance of exploring not only why “things go wrong” (i.e., Safety-I) but also why “things go right” (i.e., Safety-II) [86]. It is therefore recommended not to limit debriefings to learning from failure, but to extend them to learning from success.

Debriefing myth #2: debriefing is a luxury which may not improve team performance

The second debriefing myth identified includes the belief that debriefing is something “extra” (Table 5). This myth is problematic because it may negatively impact the decision about whether to conduct a debriefing when resources are limited. Two factors may contribute to this myth: First, debriefing takes initial extra effort to organize, both explicitly and implicitly [74]. Second, the benefits of debriefings for improved patient care and safety may not be immediately obvious. Importantly, this myth stands in contrast to scientific evidence: almost a decade ago, a meta-analysis showed that learning-oriented debriefings improve performance by 20 to 25% on average [20]—a finding that has been confirmed in a more recent meta-analysis [11]. Team science considers the shared reflexivity that occurs in debriefing a core process of teamwork [15, 87,88,89]. The conduct of debriefing has been associated with more team helping and workload sharing, more speaking up, shorter surgical duration, and a reduced number of adverse events [14, 18]. Comparable to the use of checklists, time-outs, and even breaks, debriefing is an investment worth its effort [90,91,92]. Particularly in countries such as the USA, malpractice claims might be reduced through the impact of debriefing [93, 94]. While the understanding that much of today’s patient care is performed by teams and multi-team systems is becoming more and more established [95,96,97,98], debriefings may not yet be considered a part of that establishment.

Debriefing myth #3: the senior clinician should determine debriefing content

The third debriefing myth describes a dilemma of identifying what to talk about in debriefings: on the one hand, employees of higher seniority and more experience were considered to be those who do and should determine debriefing topics. On the other hand, hierarchy was considered a barrier to speaking up with topics during debriefings, especially by quiet and less powerful team members or by professional groups such as nurses. This dilemma is problematic: facilitating is an advanced skill which may not be automatically acquired with more seniority [25]. Leaving the decision of what to talk about with the powerful may entrench undiscussable issues [74]. Research on organizational silence has demonstrated that healthcare professionals may remain silent in spite of having patient safety concerns [99,100,101,102,103,104,105]. This silence is caused by a complex set of individual, interpersonal, and organizational barriers to speaking up [99, 106,107,108,109,110,111,112,113,114]. Clarifying roles and applying predefined debriefing frameworks that suggest relevant content areas, such as PEARLS [30], guided team self-correction [19], or TeamGAINS [34], may help navigate debriefing content more effectively [115].

Debriefing myth #4: debriefers must be neutral and nonjudgmental

The final debriefing myth we identified includes the belief that debriefing facilitators must be neutral and nonjudgmental. Only three participants (4%) believed that debriefing facilitators should share their point of view. While this myth may reflect the differing perspectives of the participants, it likely reflects the facilitators’ well-described feedback dilemma in debriefing: caring for the personal relationship vs. caring for task performance [26, 116]. Facilitators worry that offering critique may damage relationships and make learners defensive, while “protecting” learners by withholding critique may leave them without learning [26]. As a consequence, facilitators are at risk of withholding their personal—positive, negative, astonished, etc.—reactions, leaving the learners in doubt and impeding shared in-depth reflection for the purpose of learning. This might be particularly the case when debriefing colleagues, peers, or even superiors. This myth is problematic for several reasons: first, without honest feedback and curious inquiry, debriefings are shallow, vague, and ineffective [5, 26]. Second, by not offering any personal views, facilitators may entrench “undiscussable” issues and may miss a chance to demonstrate feedback and difficult-conversation skills [5, 26]. Third, psychological safety may suffer when facilitators are not “real” [41]. Fourth, critical topics may not be discussed at all [117]. Importantly, group decision-making and counselling science do not suggest reversing the myth by offering unfiltered judgement, taking sides, or becoming disrespectful. Refraining from neutrality should not be mistaken for arguing about who is right. On the contrary, sharing thoughts, opinions, and information in groups is a highly complex process in which minor variations in the interaction are associated with major changes in the result [118,119,120,121]. Debriefing facilitators are advised to use their expertise wisely, at appropriate times, and to put it up for discussion [7, 60, 66, 121, 122]. By offering one’s point of view at appropriate times and inquiring about others’, the facilitator serves as an expert mediator and makes the issue “discussable”. Instead of taking sides, facilitators assume multipartiality, even with respect to their own points of view [44]. Since there “is no such thing as nonjudgmental debriefing” [123], holding learners in high regard while remaining curious, and combining sharing one’s own observations and personal reactions with inquiring into the learners’ perspective [5, 124], are recommended alternatives. Helpful strategies are establishing a safe learning environment, applying debriefing frameworks, using co-debriefing, getting peer feedback, and respective faculty development [40,41,42,43, 64, 65, 125].

Limitations

This study has limitations. We investigated a comprehensive yet small sample of experts. We deliberately approached experts with varied backgrounds in debriefing experience. Some clinicians among our study participants had experience with debriefings in both clinical and simulation-based training settings. Other participants had—by the very nature of their occupation as administrators or scholars—not participated in numerous debriefings themselves but had, through research, counselling, or implementation, developed valuable and unique expertise on debriefings. Although participants likely represent the diversity of healthcare professions who have reflected on how to initiate, conduct, and implement post-event debriefings, with varying and overlapping perspectives, this diversity may include various participant biases which may warrant further systemic study. Furthermore, participants’ responses were likely triggered by the nature of the interview questions. Since this study deliberately relied on open questions based on organizational behavior science rather than on narrower vignettes or stimulated recall, we assume that participants might indeed have had a broad range of debriefing situations in mind. Also, some participants seemed to easily navigate their narratives with respect to work “as done” and work “as imagined”; however, this differentiation seemed less feasible for others. Data analysis may have been shaped by our own understanding of the related phenomena. Further research on debriefing strategies such as those embedded in “circle up” [126] may help us understand the efficacy of embedding daily debriefings in clinical units, rather than using debriefing as an ad hoc tool when disaster strikes. For example, understanding the return on investment (ROI) of debriefing by quantifying the fiscal expense of staff time versus the financial gains of up to 25% improved teamwork, shorter surgical duration, and fewer adverse events might shed even more light on debriefing effectiveness.
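As a purely illustrative sketch of the ROI framing suggested above, the following back-of-the-envelope calculation contrasts the staff-time cost of routine debriefings with assumed downstream savings. Every figure (team size, debriefing length, wage, and savings) is a hypothetical assumption chosen for illustration, not a study finding.

```python
# Back-of-the-envelope ROI framing for routine post-event debriefing.
# All figures are hypothetical assumptions for illustration, not study findings.

staff_per_debriefing = 6       # clinicians attending one debriefing (assumption)
debriefing_minutes = 10        # length of one end-of-shift debriefing (assumption)
debriefings_per_year = 250     # roughly one per working day (assumption)
blended_hourly_cost = 60.0     # average staff cost per hour in USD (assumption)

annual_staff_time_cost = (
    staff_per_debriefing * (debriefing_minutes / 60) * debriefings_per_year * blended_hourly_cost
)

# Savings a unit might attribute to fewer adverse events and shorter procedures
# (purely hypothetical figure).
assumed_annual_savings = 150_000.0

roi = (assumed_annual_savings - annual_staff_time_cost) / annual_staff_time_cost
print(f"Annual staff-time cost of debriefings: ${annual_staff_time_cost:,.0f}")
print(f"Assumed annual savings:                ${assumed_annual_savings:,.0f}")
print(f"Illustrative ROI: {roi:.1f}")
```

Such a calculation would only become meaningful once a unit has credible local estimates for its own cost and savings inputs.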

Conclusion

Exploring taken-for-granted beliefs or implicit theories about clinical debriefing can help us understand why powerful quality improvement techniques, such as post-event debriefing, have seen slow uptake in most clinical environments. Despite empirical data demonstrating improved clinical unit performance in several dimensions, for example, fewer errors and better teamwork, these a priori predictions create barriers that prevent clinical leaders from endorsing and using debriefing in a routine way. The four debriefing myths highlighted in this manuscript may assist clinical leaders and quality improvement and patient safety personnel in countering false assumptions raised by debriefing skeptics. As simulation educators, we recognize that high-quality debriefing requires a rigorous approach with skilled personnel. We also recognize that the benefits of improved communication and coordination in healthcare teams, and of teams that learn and grow together through routine clinical debriefings, far outweigh the costs. We hope that the four presented myths will spark controversy and stimulate more empirical debriefing research.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

References

  1. Mullan PC, Kessler DO, Cheng A. Educational opportunities with postevent debriefing. JAMA. 2014;312(22):2333–4.

  2. Hicks CW, Rosen M, Hobson DB, et al. Improving safety and quality of care with enhanced teamwork through operating room briefings. JAMA Surg. 2014;149(8):863–8.

  3. Cheng A, Eppich W, Grant V, et al. Debriefing for technology-enhanced simulation: a systematic review and meta-analysis. Med Educ. 2014;48:657–66.

  4. Mullan PC, Cochrane NH, Chamberlain JM, et al. Accuracy of postresuscitation team debriefings in a pediatric emergency department. Ann Emerg Med. 2017;70(3):311–9.

  5. Rudolph JW, Simon R, Rivard P, et al. Debriefing with good judgment: combining rigorous feedback with genuine inquiry. Anesthesiol Clin. 2007;25:361–76.

  6. Fanning RM, Gaba DM. The role of debriefing in simulation-based learning. Simul Healthc. 2007;2:115–25.

  7. Salas E, Klein C, King H, et al. Debriefing medical teams: 12 evidence-based best practices and tips. Jt Comm J Qual Patient Saf. 2008;34:518–27.

  8. Eddy ER, Tannenbaum SI, Mathieu JE. Helping teams to help themselves: comparing two team-led debriefing methods. Pers Psychol. 2013;66:975–1008.

  9. Dieckmann P, Molin Friis S, Lippert A, et al. The art and science of debriefing in simulation: ideal and practice. Med Teach. 2009;31:e287–e94.

  10. Tavares W, Eppich W, Cheng A, et al. Learning conversations: an analysis of the theoretical roots and their manifestations of feedback and debriefing in medical education. Acad Med. 2020;95(7):1020–5.

  11. Keiser NL, Arthur W. A meta-analysis of the effectiveness of the after-action review (or debrief) and factors that influence its effectiveness. J Appl Psychol. 2020.

  12. World Health Organization. The global practice of after action review: a systematic review of literature. Geneva: World Health Organization; 2019. Contract No.: WHO/WHE/CPI/2019.9

  13. Arnold J, Cashin M, Olutoye OO. Simulation-based clinical rehearsals as a method for improving patient safety. JAMA Surg. 2018;153(12):1143–4.

  14. Vashdi DR, Bamberger PA, Erez M. Can surgical teams ever learn? The role of coordination, complexity, and transitivity in action team learning. Acad Manage J. 2013;56:945–71.

  15. Schmutz JB, Eppich WJ. Promoting learning and patient care through shared reflection: a conceptual framework for team reflexivity in health care. Acad Med. 2017;92(11):1555–63.

  16. Dieckmann P, Torgeirsen K, Qvindesland SA, et al. The use of simulation to prepare and improve responses to infectious disease outbreaks like COVID-19: practical tips and resources from Norway, Denmark, and the UK. Adv Simul (Lond). 2020;5:3.

  17. Tannenbaum SI, Traylor AM, Thomas EJ, et al. Managing teamwork in the face of pandemic: evidence-based tips. BMJ Qual Saf. 2020.

  18. Weiss M, Kolbe M, Grote G, et al. Why didn’t you say something? Using after-event reviews to affect voice behavior and hierarchy beliefs in multi-professional action teams. Eur J Work Organ Psychol. 2017;26(1):66–80.

  19. Smith-Jentsch KA, Cannon-Bowers JA, Tannenbaum S, et al. Guided team self-correction: impacts on team mental models, processes, and effectiveness. Small Group Res. 2008;39:303–29.

  20. Tannenbaum SI, Cerasoli CP. Do team and individual debriefs enhance performance? A meta-analysis. Hum Factors. 2013;55(1):231–45.

  21. Tannenbaum SI, Goldhaber-Fiebert S. Medical team debriefs: simple, powerful, underutilized. In: Salas E, Frush K, editors. Improving patient safety through teamwork and team training. New York: Oxford University Press; 2013. p. 249–56.

  22. Sandhu N, Eppich W, Mikrogianakis A, et al. Postresuscitation debriefing in the pediatric emergency department: a national needs assessment. Can J Emerg Med. 2014;16(5):383–92.

  23. Ahmed M, Sevdalis N, Paige J, et al. Identifying best practice guidelines for debriefing in surgery: a tri-continental study. Am J Surg. 2012;203:523–9.

  24. Arriaga AF, Sweeney RE, Clapp JT, et al. Failure to debrief after critical events in anesthesia is associated with failures in communication during the event. Anesthesiology. 2019;130(6):1039–48.

  25. Cheng A, Eppich W, Kolbe M, et al. A conceptual framework for the development of debriefing skills: a journey of discovery, growth, and maturity. Simul Healthcare. 2020;15(1):55–60.

  26. Rudolph JW, Foldy EG, Robinson T, et al. Helping without harming. The instructor's feedback dilemma in debriefing--a case study. Simul Healthc. 2013;8:304–16.

  27. Kolbe M, Rudolph JW. What’s the headline on your mind right now? How reflection guides simulation-based faculty development in a master class. BMJ Simul Technol Enhanced Learn. 2018;4(3):126–32.

  28. Fraser KL, Meguerdichian MJ, Haws JT, et al. Cognitive load theory for debriefing simulations: implications for faculty development. Adv Simul. 2018;3(1):28.

  29. Sweeney RE, Clapp JT, Arriaga AF, et al. Understanding debriefing: a qualitative study of event reconstruction at an academic medical center. Acad Med. 2020;95(7):1089–97.

  30. Eppich W, Cheng A. Promoting excellence and reflective learning in simulation (PEARLS): development and rationale for a blended approach to health care simulation debriefing. Simul Healthc. 2015;10.

  31. Jaye P, Thomas L, Reedy G. ‘The Diamond’: a structure for simulation debrief. Clin Teach. 2015;12(3):171–5.

  32. Rose S, Cheng A. Charge nurse facilitated clinical debriefing in the emergency department. CJEM. 2018;20(5):781–5.

  33. Kessler DO, Cheng A, Mullan PC. Debriefing in the emergency department after clinical events: a practical guide. Ann Emerg Med. 2015;65(6):690–8.

  34. Kolbe M, Weiss M, Grote G, et al. TeamGAINS: a tool for structured debriefings for simulation-based team trainings. BMJ Qual Saf. 2013;22:541–53.

  35. Mullan PC, Wuestner E, Kerr TD, et al. Implementation of an in situ qualitative debriefing tool for resuscitations. Resuscitation. 2013;84(7):946–51.

  36. Chinnock B, Mullan PC, Zinns LE, et al. Debriefing: an expert panel's how-to guide. Ann Emerg Med. 2017;70(3):320–2.e1.

  37. Zinns LE, Mullan PC, O'Connell KJ, et al. An evaluation of a new debriefing framework: REFLECT. Pediatr Emerg Care. 2020;36(3).

  38. Sawyer T, Eppich W, Brett-Fleegler M, et al. More than one way to debrief: a critical review of healthcare simulation debriefing methods. Simul Healthc. 2016;11.

  39. Gougoulis A, Trawber R, Hird K, et al. ‘Take 10 to talk about it’: use of a scripted, post-event debriefing tool in a neonatal intensive care unit. J Paediatr Child Health. 2020;Advanced online publication.

  40. Rudolph JW, Raemer DB, Simon R. Establishing a safe container for learning in simulation: the role of the pre-simulation briefing. Simul Healthc. 2014;9(6):339–49.

  41. Kolbe M, Eppich W, Rudolph J, et al. Managing psychological safety in debriefings: a dynamic balancing act. BMJ STEL. 2020;6(3):164–71.

  42. Cheng A, Palaganas J, Eppich W, et al. Co-debriefing for simulation-based education: a primer for facilitators. Simul Healthcare. 2015;10(2):69–75.

  43. Grant VJ, Robinson T, Catena H, et al. Difficult debriefing situations: a toolbox for simulation educators. Med Teach. 2018:1–10.

  44. Kolbe M, Marty A, Seelandt J, et al. How to debrief teamwork interactions: using circular questions to explore and change team interaction patterns. Adv Simul. 2016;1(1):29.

  45. Welch-Horan TB, Lemke DS, Bastero P, et al. Feedback, reflection and team learning for COVID-19: development of a novel clinical event debriefing tool. BMJ Simul Technol Enhanced Learn. 2020:bmjstel-2020-000638.

  46. Cheng A, Kolbe M, Grant V, et al. A practical guide to virtual debriefings: communities of inquiry perspective. Adv Simul. 2020;5(1):18.

  47. Bajaj K, Minors A, Walker K, et al. “No-go considerations” for in situ simulation safety. Simul Healthc. 2018;13(3):221–4.

  48. Detert JR, Edmondson A. Implicit voice theories: taken-for-granted rules of self-censorship at work. Acad Manage J. 2011;54(3):461–88.

  49. Levy SR, Chiu C-y, Hong Y-y. Lay theories and intergroup relations. Group Processes Intergroup Relat. 2006;9(1):5–24.

  50. Putnam LL, Fairhurst GT, Banghart S. Contradictions, dialectics, and paradoxes in organizations: a constitutive approach. Acad Manage Ann. 2016;10(1):65–171.

  51. von Schlippe A, Schweitzer J. Lehrbuch der systemischen Therapie und Beratung [textbook of systemic therapy and counselling]. 10th ed. Göttingen: Vandenhoeck & Ruprecht; 2007.

  52. Diorinou M, Tseliou E. Studying circular questioning “in situ”: discourse analysis of a first systemic family therapy session. J Marital Fam Ther. 2014;40(1):106–21.

  53. Penn P. Circular questioning. Fam Process. 1982;21(3):267–80.

  54. Simon FB, Rech-Simon C. Zirkuläres Fragen. Systemische Therapie in Fallbeispielen: Ein Lehrbuch [Circular questioning. Systemic therapy in case examples: a textbook]. Heidelberg: Carl-Auer; 2007.

  55. Tomm K. Interventive interviewing: part III. Intending to ask lineal, circular, strategic, or reflexive questions? Fam Process. 1988;27:1–15.

  56. Kriz WC. A systemic-constructivist approach to the facilitation and debriefing of simulations and games. Simul Gaming. 2010;41:663–80.

  57. Palazzoli Selvini M, Boscolo L, Cecchin G, et al. Hypothesizing--circularity--neutrality: three guidelines for the conductor of the session. Fam Process. 1980;19(1):3–12.

  58. Vashdi DR, Bamberger PA, Erez M, et al. Briefing-debriefing: using a reflexive organizational learning model from the military to enhance the performance of surgical teams. Hum Resour Manag. 2007;46(1):115–42.

  59. Servotte J-C, Welch-Horan TB, Mullan P, et al. Development and implementation of an end-of-shift clinical debriefing method for emergency departments during COVID-19. Adv Simul (Lond). 2020;5(1):32.

  60. Allen JA, Reiter-Palmon R, Crowe J, et al. Debriefs: teams learning from doing in context. Am Psychol. 2018;73:504–16.

  61. Shalev A, Liberzon I, Marmar C. Post-traumatic stress disorder. N Engl J Med. 2017;376(25):2459–69.

  62. Rose S, Bisson J, Churchill R, et al. Psychological debriefing for preventing post traumatic stress disorder (PTSD). Cochrane Database Syst Rev. 2002(2):CD000560.

  63. Cheng A, Grant V, Dieckmann P, et al. Faculty development for simulation programs: five issues for the future of debriefing training. Simul Healthc. 2015;10(4):217–22.

  64. Cheng A, Morse KJ, Rudolph J, et al. Learner-centered debriefing for health care simulation education: lessons for faculty development. Simul Healthc. 2016;11(1):32–40.

  65. Cheng A, Grant V, Huffman J, et al. Coaching the debriefer: peer coaching to improve debriefing quality in simulation programs. Simul Healthcare. 2017;12(5):319–25.

  66. Stone D, Patton B, Heen S. Difficult conversations. New York: Penguin Books; 1999.

  67. Argyris C. Making the undiscussable and its undiscussability discussable. Public Adm Rev. 1980;40:205–13.

  68. Argyris C, Putnam R, McLain SD. Action science: concepts, methods, and skills for research and intervention. San Francisco: Jossey-Bass; 1985.

  69. Edmondson AC, Lei Z. Psychological safety: the history, renaissance, and future of an interpersonal construct. Annu Rev Organ Psychol Organ Behav. 2014;1(1):23–43.

  70. Argyris C. Double-loop learning, teaching, and research. Acad Manag Learn Edu. 2002;1:206–18.

  71. Hull L, Russ S, Ahmed M, et al. Quality of interdisciplinary postsimulation debriefing: 360° evaluation. BMJ Simul Technol Enhanced Learn. 2017;3(1):9–16.

  72. Eppich W, Mullan PC, Brett-Fleegler M, et al. “Let’s talk about it”: translating lessons from healthcare simulation to clinical event debriefings and clinical coaching conversations. Clin Pediatr Emerg Med. 2016;177:200–11.

  73. Sawyer T, Loren D, Halamek LP. Post-event debriefings during neonatal care: why are we not doing them, and how can we start? J Perinatol. 2016;36(6):415–9.

  74. Kolbe M, Grande B, Spahn DR. Briefing and debriefing during simulation-based training and beyond: content, structure, attitude, and setting. Best Pract Res. 2015;29(1):87–96.

  75. Eppich WJ, Hunt EA, Duval-Arnould JM, et al. Structuring feedback and debriefing to achieve mastery learning goals. Acad Med. 2015;90:1501–8.

  76. Strong T, Pyle NR, Sutherland O. Scaling questions: asking and answering them in counselling. Couns Psychol Q. 2009;22(2):171–85.

  77. von Ameln F, Kramer J. Skalierungsfragen und Aktionssoziometrie [Scaling questions and action sociometry]. In: von Ameln F, Kramer J, editors. Organisationen in Bewegung bringen: Handlungsorientierte Methoden für die Personal-, Team- und Organisationsentwicklung [Setting organizations in motion: action-oriented methods for personnel, team, and organizational development]. Berlin, Heidelberg: Springer Berlin Heidelberg; 2016. p. 109–20.

  78. Seelandt JC. Quality Control. In: Brauner E, Boos M, Kolbe M, editors. The Cambridge handbook of group interaction analysis. Cambridge Handbooks in Psychology. Cambridge: Cambridge University Press; 2018. p. 227–44.

  79. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159–74.

  80. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101.

  81. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science. 1974;185(4157):1124–31.

  82. Edmondson A. Psychological safety and learning behavior in work teams. Adm Sci Q. 1999;44:350–83.

  83. Schein EH. How can organizations learn faster? The challenge of entering the green room. Sloan Manage Rev. 1993;34:85–92.

  84. Pian-Smith MCM, Cooper JB. If we don’t learn from our critical events, we’re likely to relive them: debriefing should be the norm. Anesthesiology. 2019;130(6):867–9.

  85. Baumeister RF, Bratslavsky E, Finkenauer C, et al. Bad is stronger than good. Rev Gen Psychol. 2001;5(4):323–70.

  86. Hollnagel E. Safety-I and Safety-II. The past and future of safety management. Farnham: Ashgate; 2014.

  87. Marks MA, Mathieu JE, Zaccaro SJ. A temporally based framework and taxonomy of team processes. Acad Manage Rev. 2001;26:356–76.

  88. Gurtner A, Tschan F, Semmer N-K, et al. Getting groups to develop good strategies: effects of reflexivity interventions on team process, team performance, and shared mental models. Organ Behav Hum Decis Process. 2007;102:127–42.

  89. Hackman JR, Wageman R. A theory of team coaching. Acad Manage Rev. 2005;30:269–87.

  90. Engelmann C, Schneider M, Kirschbaum C, et al. Effects of intraoperative breaks on mental and somatic operator fatigue: a randomized clinical trial. Surg Endosc. 2011;25:1245–50.

  91. Lingard L, Regehr G, Cartmill C, et al. Evaluation of a preoperative team briefing: a new communication routine results in improved clinical practice. BMJ Qual Saf. 2011;20(6):475–82.

  92. Arriaga AF, Bader AM, Wong JM, et al. Simulation-based trial of surgical-crisis checklists. N Engl J Med. 2013;368:246–53.

  93. Davies JM, Posner KL, Lee LA, et al. Liability associated with obstetric anesthesia: a closed claims analysis. Anesthesiology. 2009;110(1):131–9.

  94. Arriaga AF, Gawande AA, Raemer DB, et al. Pilot testing of a model for insurer-driven, large-scale multicenter simulation training for operating room teams. Ann Surg. 2014;259(3):403–10.

  95. DiazGranados D, Dow AW, Perry SJ, et al. Understanding patient care as a multiteam system. In: Shuffler ML, Rico R, Salas E, editors. Pushing the boundaries: multiteam systems in research and practice. Emerald Group Publishing Limited; 2014. p. 95–113.

  96. Taplin SH, Weaver S, Salas E, et al. Reviewing cancer care team effectiveness. J Oncol Pract. 2015;11(3):239–46.

  97. Head SJ, Kaul S, Mack MJ, et al. The rationale for Heart Team decision-making for patients with stable, complex coronary artery disease. Eur Heart J. 2013;34(32):2510–8.

  98. Pronovost P. Teamwork matters. In: Salas E, Tannenbaum SI, Cohen D, Latham G, editors. Developing and enhancing teamwork in organizations: Evidence-based best practices and guidelines. San Francisco: Jossey-Bass; 2013. p. 11–2.

  99. Raemer DB, Kolbe M, Minehart RD, et al. Improving anesthesiologists’ ability to speak up in the operating room: a randomized controlled experiment of a simulation-based intervention and a qualitative analysis of hurdles and enablers. Acad Med. 2016;91(4):530–9.

  100. Pattni N, Arzola C, Malavade A, et al. Challenging authority and speaking up in the operating room environment: a narrative synthesis. Br J Anaesth. 2019;122(2):233–44.

  101. Schwappach D, Gehring K. Trade-offs between voice and silence: a qualitative exploration of oncology staff's decisions to speak up about safety concerns. BMC Health Serv Res. 2014;14(1):303.

  102. Schwappach D, Richard A. Speak up-related climate and its association with healthcare workers’ speaking up and withholding voice behaviours: a cross-sectional survey in Switzerland. BMJ Qual Saf. 2018;27(10):827–35.

  103. Schwappach DLB, Gehring K. Frequency of and predictors for withholding patient safety concerns among oncology staff: a survey study. Eur J Cancer Care. 2015;24(3):395–403.

  104. Okuyama A, Wagner C, Bijnen B. Speaking up for patient safety by hospital-based health care professionals: a literature review. BMC Health Serv Res. 2014;14:61–9.

  105. Bell SK, Roche SD, Mueller A, et al. Speaking up about care concerns in the ICU: patient and family experiences, attitudes and perceived barriers. BMJ Qual Saf. 2018;27(11):928–36.

  106. Detert JR, Burris ER. Leadership behavior and employee voice: Is the door really open? Acad Manage J. 2007;50:869–84.

  107. Kish-Gephart JJ, Detert JR, Treviño LK, et al. Silenced by fear: the nature, sources, and consequences of fear at work. Res Organ Behav. 2009;29:163–93.

  108. Tangirala S, Kamdar D, Venkataramani V, et al. Doing right versus getting ahead: the effects of duty and achievement orientations on employees’ voice. J Appl Psychol. 2013;98(6):1040–50.

  109. Beament T, Mercer S. Speak up! Barriers to challenging erroneous decisions of seniors in anaesthesia. Anaesthesia. 2016;71(11):1332–40.

  110. Kobayashi H, Pian-Smith M, Sato M, et al. A cross-cultural survey of residents’ perceived barriers in questioning/challenging authority. Qual Saf Health Care. 2006;15:277–83.

  111. Pattni N, Bould MD, Hayter MA, et al. Gender, power and leadership: the effect of a superior's gender on respiratory therapists’ ability to challenge leadership during a life-threatening emergency. Br J Anaesth. 2017;119(4):697–702.

  112. Garden AL, Weller JM. Speaking up: Does anaesthetist gender influence teamwork and collaboration? Br J Anaesth. 2017;119(4):571–2.

  113. Green B, Oeppen R, Smith D, et al. Challenging hierarchy in healthcare teams–ways to flatten gradients to improve teamwork and patient care. Br J Oral Maxillofac Surg. 2017;55(5):449–53.

  114. Edrees HH, Ismail MNM, Kelly B, et al. Examining influences on speaking up among critical care healthcare providers in the United Arab Emirates. Int J Qual Health Care. 2017;29(7):948–60.

  115. Hale SJ, Parker MJ, Cupido C, et al. Applications of postresuscitation debriefing frameworks in emergency settings: a systematic review. AEM Educ Train. 2020;4(3):223–30.

  116. Scott K. Radical candor: how to be a kick-ass boss without losing your humanity. New York: St. Martin's Press; 2017.

  117. Stasser G, Titus W. Pooling of unshared information in group decision making: biased information sampling during discussion. J Pers Soc Psychol. 1985;48:1467–78.

  118. Kerr NL, Tindale RS. Group performance and decision making. Annu Rev Psychol. 2004;55(1):623–55.

  119. Larson JRJ, Foster-Fishman PG, Franz TM. Leadership style and the discussion of shared and unshared information in decision-making groups. Pers Soc Psychol Bull. 1998;24:482–95.

  120. Mesmer-Magnus JR, DeChurch LA. Information sharing and team performance: a meta-analysis. J Appl Psychol. 2009;94:535–46.

  121. Mojzisch A, Schulz-Hardt S. Knowing others’ preferences degrades the quality of group decisions. J Pers Soc Psychol. 2010;98(5):794–808.

  122. Stewart DD, Stasser G. Expert role assignment and information sampling during collective recall and decision making. J Pers Soc Psychol. 1995;69(4):619–28.

  123. Rudolph JW, Simon R, Dufresne RL, et al. There's no such thing as “nonjudgmental” debriefing: a theory and method for debriefing with good judgment. Simul Healthc. 2006;1(1):49–55.

  124. Rudolph JW, Simon FB, Raemer DB, et al. Debriefing as formative assessment: closing performance gaps in medical education. Acad Emerg Med. 2008;15:1010–6.

  125. Peterson DT, Watts PI, Epps CA, et al. Simulation faculty development: a tiered approach. Simul Healthc. 2017;12(4):254–9.

  126. Rock LK, Rudolph JW, Fey MK, et al. “Circle Up”: workflow adaptation and psychological support via briefing, debriefing, and peer support. NEJM Catalyst. 2020; online first.

Acknowledgements

The authors thank Fanny Honegger for help in designing the data collection instruments, Sarah Kriech for transcribing the interviews, and Gabriel Friedli for preparing the transcribed interviews for coding. We thank the interviewees for their time and willingness to share their points of view.

Funding

This research was funded by a grant from the Swiss National Science Foundation (Grant No. 100014_152822).

Author information

Authors and Affiliations

Authors

Contributions

JCS was involved in conceptualizing the study, conducted most of the interviews, and contributed to data analysis and to writing the initial draft. KW participated in conceptualizing the study, interpreting the findings, and writing the initial draft. MK was involved in conceptualizing the study, participated in data collection and analysis, and was the major contributor to writing the initial draft. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Michaela Kolbe.

Ethics declarations

Ethics approval and consent to participate

Ethical approval was waived by the Kantonale Ethikkommission Zurich (January 27, 2014; KEK-ZH-Nr. 2013-0592).

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Supplementary Table 1.

Absolute frequencies of study participants' experiences with conducting, participating in, and observing debriefings.

Additional file 2: Supplementary Table 2.

Characteristics of debriefings with respect to participants, place, duration, frequency, and organizational routines.

Additional file 3: Supplementary Table 3.

Beliefs about value of debriefing.

Additional file 4: Supplementary Table 4.

Characteristics of a “good debriefer”.

Additional file 5: Supplementary Table 5.

Beliefs about the learning environment of debriefings.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Seelandt, J.C., Walker, K. & Kolbe, M. “A debriefer must be neutral” and other debriefing myths: a systemic inquiry-based qualitative study of taken-for-granted beliefs about clinical post-event debriefing. Adv Simul 6, 7 (2021). https://doi.org/10.1186/s41077-021-00161-5

Keywords