The faces of failure: understanding students’ affective experiences with failure in introductory chemistry laboratory learning activities

Shauna Schechtel and Amanda Bongers *
Department of Chemistry, Queen's University, 90 Bader Lane, Kingston, Ontario K7L 3N6, Canada. E-mail: amanda.bongers@queensu.ca

Received 31st July 2025, Accepted 10th October 2025

First published on 17th October 2025


Abstract

Failure is recognized as valuable to learning in classroom and research contexts. While interventions incorporating failure into learning have been explored in science education, the affective experience of failure is less understood. From an affective experience lens, failure is stigmatized and not accounted for in learning and assessment. This study explores introductory chemistry students’ affective experiences with failure through qualitative semi-structured interviews framed from an interpretivist lens. Students shared that failure is overwhelming, shapes their beliefs, is not accounted for in course design, and is defined by learning and assessment outcomes. Asking students to fail as a part of their learning is much more nuanced than previously discussed interventions where failure is part of the design. This study explores the idea that not all failures are created equal and provides insight into laboratory activities and assessments that ask students to fail. Paying attention to students’ experiences can change an educator's mindset and offer pathways to creating learning environments that reduce judgment, allow instructors to share their own failures, and offer feedback that helps students move forward with their failures.


1 Introduction

Undergraduate teaching laboratories are modelled after authentic research experiences in chemistry, and a part of these authentic experiences are failed experiments (Gravelle and Fisher, 2012). Experiencing failure is touted as an unwritten informal learning outcome of teaching laboratories, and some laboratory activities have been purposely designed to incorporate and expose students to failure (Heller et al., 2020; Limeri et al., 2020). While these experiences are valuable, educators should not ignore how failure is perceived in greater chemistry research culture and society, where “success depends upon the evaluations of the likelihood of concrete results coming out of the research project” (Sismondo, 2010, p. 138). The general public perceives science as successful because science predominantly shares success stories, such as the creation of technology and industrial innovation (Firestein, 2016; Rudolph, 2019, 2023, 2024). In establishing science in public education in the late 1800s, the utility of scientific knowledge for industrialization and farming was sold as a key success.

Students bring this understanding of success through science into our courses (Chambers, 1983). Greater society, chemistry research, and the education system together situate a student's experience of failure within a course. Since context plays a role in shaping failure, not all failures are created equal (Feigenbaum, 2021).

This study explores students’ affective experiences with failure in the introductory chemistry laboratory. Affect as embodied and performative experience is situated by what students learn about chemistry, such as what counts as success and who counts as a chemistry practitioner (Ahmed, 2004; Hodson, 2014; El Halwany and Adams, 2025). Through understanding these affective experiences, we can describe how failure can be unproductive in some learning contexts for students. With laboratories and classrooms that try to promote failure, these stories help create a more nuanced discussion around the benefits and consequences of asking students to fail in their learning. In this study, we explore students’ situated affective experiences with failure by asking: what is it like for a student to experience failure in an introductory chemistry laboratory?

2 Literature review

2.1 Affective experiences of failure

Educators recognize that failure during the learning process is valuable (Fink et al., 2020; Heller et al., 2020; Whittle et al., 2020; Santos and Mooring, 2022). Failure is seen as an opportunity to use the feedback of others to extend a student's learning to new contexts and problems (Feigenbaum, 2021). In their research, practicing chemists recognize the value of failure (Firestein, 2016). During seminars or conferences, how many times have you heard the story where failed results led to a breakthrough (Green et al., 2018)? Failure is valuable in chemistry research because it is framed as a step along the path to discovery (Firestein, 2016; Green et al., 2018). Much of the research on failure focuses on how it is a tool for learning content and disciplinary skills, rather than exploring the affective experience of failure (Bruck and Towns, 2009; Hunnicutt et al., 2015; Choi, 2021; Satusky et al., 2022; Partanen, 2023). The affective experience of failure is therefore not well understood in chemistry contexts. This literature review frames failure as a situated affective experience that can be generative, transformative, or stigmatizing. These theories narrow our understanding of the word experience in the research question and help us interpret failure as a situated affective experience. Students can experience failure in infinite ways, and the affective experience matters even without any clear connection to content knowledge and disciplinary skills (Ahmed, 2004; El Halwany and Adams, 2025).

2.2 Failure as a generative experience

Within higher education, failure has been viewed as generative. Under this framing, the affective experience of failure is iterative, messy, and involves getting things wrong (Feigenbaum, 2021). In chemistry education, generative failure is the affective experience educators generally want for students (Fink et al., 2020; Heller et al., 2020). Generative failure allows students to learn something from the affective experience of failing. The challenge of generative failure in the curriculum is that it does not account for the practical context and the systems that shape students’ affective experiences of failure (Whittle et al., 2020; Feigenbaum, 2021). As Dearden (1967, p. 105) describes, in practice, “leaving things open to discovery also necessarily leaves open the opportunity for not discovering them”. Practically, failure is not built into the logistics of laboratory activities (Hodson, 1996; Shepherd et al., 2020). While some laboratory learning accounts for failure, the assessment then punishes it; students can make mistakes only if the summative outcome is that they have explainable data for their reports (DeKorver and Towns, 2016; Galloway et al., 2016). Del Carlo and Bodner (2004) discuss students’ perceptions of needing “good data” and how sharing data after the experiment gave them their only feeling of control over their grades. This shows how the system frames failure as a valuable experience without accounting for practical limitations like time restrictions and requirements for learning and assessment (Feigenbaum, 2021). Specifically, “students are in an institutional structure for which grades remain the primary proof of learning” (Nunes et al., 2022, p. 34). Failure is thus contextualized by the system's restrictions on learning and assessment in courses.

2.3 Failure as a transformative experience

Transformative learning is another affective experience of failure that educators may desire for their students (Jarvis, 1987). The transformative experience is initiated by a disorienting dilemma, which leads to critical reflection (Mezirow, 1978; Kitchenham, 2008). An example of a disorienting experience is when “the learner encounters a problem that they cannot resolve through their current understanding” (Kitchenham, 2008, p. 122). This means the learner must adopt a new assumption to understand the problem. Failure is a disorienting experience, so it can be viewed through a lens of transformative learning. As educators, our role in transformative learning is to help facilitate that disorienting experience. With transformative learning theory, there is the possibility for the experience to be helpful or harmful. Just because a student has a disorienting dilemma, it does not mean that they will adopt the new assumption and learn what we intend. Essentially, Jarvis (1987, p. 165) describes “that not every experience results in learning because [a student] brings a unique constellation of previous experiences to each social situation.” It is not as simple as asking students to fail; it is recognizing that not all failure is the same for every student (Feigenbaum, 2021). Failure is not the same because experiences are situated in the social and cultural context of the student (Mezirow, 1978; Jarvis, 1987). In some cases, the affective experience of failure can be alienating and drive students away from learning (Jarvis, 1987). In the context of laboratory work, the idea that not every failed experiment is a meaningful affective experience makes sense. A failed experiment might not be worth analyzing because limited meaning can be drawn from the results, so the practical answer is to try again. When asking students to experience failure in the laboratory, some failures support learning, while others can be detrimental or have no impact (Jarvis, 1987).
A critique of transformative learning in practice is that it does not account for how failure feels in the moment. Specifically, it does not acknowledge that a disorienting experience like failure is stressful, and how students can feel guilt and shame in the process (Kitchenham, 2008). The affective experience is not the same for every student, and struggles requiring persistence are not neutral (Robertson et al., 2025). The affective aspect is better understood through viewing failure as a stigmatized experience.

2.4 Failure as a stigmatized experience

Failure can also be framed as stigmatized and shameful, meaning the affective experience is often hidden (Feigenbaum, 2021). If affective experiences of failure are shared, it is done in retrospect, when there is a resolution and the person sharing is in a position of power and authority in their career (Whittle et al., 2020). Examples include a tenured professor explaining how they failed an exam or showcasing a CV of failures (Stefan, 2010). These conversations about failure are important in shaping the affective experience; however, the types of experiences shared highlight the privilege of who can fail (Feigenbaum, 2021). Students, who do not have the same security in their position, fear the shame of failure and the judgment placed on the individual student for failing (Harrowell et al., 2018; Shepherd et al., 2020; Whittle et al., 2020). The judgment that students fear is rooted in the idea that failure indicates they are inferior in potential and intelligence; in other words, that failure is not part of the learning process (Feigenbaum, 2021). Students experience this judgment coming from the expectations of family, society, higher education, educators, and their peers (Rop, 1999; Sismondo, 2010; Shepherd et al., 2020; Nunes et al., 2022). When failure is shameful, students can feel alone in their affective experiences (Harrowell et al., 2018; Whittle et al., 2020). A result of failure being hidden away is that it becomes an overwhelming affective experience, which puts students in a vulnerable position (Shepherd et al., 2020; Whittle et al., 2020). The stigmatized framing continues to thrive because the system refuses to acknowledge how it feels to experience failure.

2.5 Affective experiences are situated in a context

Failure has logistical, social, and affective components that make it different for each student (Sismondo, 2010; Harrowell et al., 2018; Shepherd et al., 2020; Choi, 2021). The same failure that is productive for one student is detrimental to others due to practical restrictions (Jarvis, 1987; Feigenbaum, 2021). A common critique from educators is that students seem grade-focused rather than learning-focused (DeKorver and Towns, 2015; Galloway et al., 2016). Yet this focus on grades is understandable and valid when grades determine whether some students meet the standards to receive the scholarships and financial aid they need to afford post-secondary education (Feigenbaum, 2021), meaning they cannot afford to take a course where they may struggle to get a high grade. In this way, the contexts of society and our higher education system creep into conversations on failure, meaning that not all students have the same privilege to struggle in a course (Feigenbaum, 2021).

2.6 Affective experiences in chemistry education

As failure is a situated affective experience, the practical context and stigmatization by experts contribute to students’ pervasive fear of failure. Chemistry students’ fear of failure has been explored indirectly through affective experiences in laboratories (Del Carlo and Bodner, 2004; DeKorver and Towns, 2015, 2016; Galloway et al., 2016; Burrows et al., 2021). For example, the chemistry laboratory anxiety instrument (CLAI) explored how factors like time, working with chemicals, and data collection shaped a student's anxiety in the laboratory (Bowen, 1999). Much of this work has focused on improving students’ mindset, including work on self-efficacy, attitudes, motivation, and growth or fixed mindsets (Kahveci and Orgill, 2015; Fink et al., 2020; Santos and Mooring, 2022). Currently, chemistry educators’ understanding of failure is still rooted in the role a student's mindset plays in their experiences (Galloway and Bretz, 2015a, 2015b; Flaherty, 2020; Heller et al., 2020). The challenge with these approaches is that they frame experiences of failure as an individual problem rather than a systemic problem in higher education and society (Shepherd et al., 2020; Feigenbaum, 2021; Nunes et al., 2022). Specifically, a student's mindset is defined as the problem, without acknowledging that failure is not accounted for in our learning and assessment (Flaherty, 2020). If failure were an accepted outcome, students would be able to succeed on the report without data; instead, they are provided with a dataset from their instructor if they make a mistake or if laboratory equipment fails (Del Carlo and Bodner, 2004). Students’ experiences with failure revolve around the boundary of acceptable and unacceptable failures. Students have shared that they were scared of losing points when collecting data, felt nervous about making mistakes, and felt helpless when their results were imperfect (DeKorver and Towns, 2015, 2016; Galloway et al., 2016).
Students fear failure because the context does not allow for failure, and they can feel overwhelmed when it happens (Flaherty, 2022).

3 Research design

3.1 Worldview (paradigm) and frameworks

The worldview and conceptual and theoretical frameworks for a qualitative study frame the researcher's thought process and inform their decision-making (Denzin and Lincoln, 2018). The research in this study is situated within an interpretivist worldview, with the goal of “understanding how [participants] interpret their experience, how [participants] construct their worlds and what meaning [participants] attribute to the experience” (Merriam, 2009, p. 5). Social, historical, and political contexts shape how the experience is interpreted by both the participant and the researcher, leading to multiple meanings in the findings to answer the research question (Merriam, 2009). This goal is grounded in the theoretical framework of affect theory and our methodology (Varpio et al., 2020). Affect theory (Ahmed, 2004) ensures that situated affective experiences are prioritized because understanding affect is the research goal. To situate our research question with affect theory, we treat disciplinary skills and chemistry content knowledge as immaterial. Instead, we care about understanding the affective experience based on the meaning the student participant constructs (Merriam, 2009). This means setting aside our own pedagogical content knowledge, which can otherwise interfere with and overshadow students’ lived experiences (Merriam, 2009). Our conceptual framework of student voice recognizes that a student's contextual experiences matter and must be listened to (Fielding, 2004; McLeod, 2011). The student voice framework encourages reflexivity and provides additional support for prioritizing affect in our study.

3.2 Methodology

Methodology helps us understand what we, as the researchers, define as data (Merriam, 2009; Denzin and Lincoln, 2018). Our methodology situates lived experiences in chemistry culture through a phenomenological approach to ethnography (Keesing, 1974). Thus, our data are lived experiences situated in the disciplinary values and norms (culture) of chemistry. Ethnography is defined as writing down culture (Ingold, 2017), with culture being an individual's “theory of the code being followed, the game being played, in the society into which [a person was] born” (Keesing, 1974, p. 16). The culture of chemistry situates a student's experiences in what is transmitted to them from the chemistry educator. What is transmitted are the socially constructed values of chemistry in the form of what is taught, how it is taught, who is taught, why it is taught, and what is valued (Hammond and Brandt, 2004; Brandt and Carlone, 2012). Within the discussion of this paper, the findings will be framed through an ethnographic lens. This study prioritizes students’ constructed meaning of their own experience (Merriam, 2009). To understand these lived experiences, we looked to van Manen's (1997) hermeneutical phenomenology, which holds that lived experiences have a temporal and linguistic structure. The temporal structure is the assumption that we cannot understand an experience in the present moment, meaning we need to reflect after the experience to understand its essence. The linguistic structure is what creates the relationship between experiences and the data generated in an interview. Specifically, “we are able to recall and reflect on experiences” because meaning “may be understood as a linguistic construction, a description of a phenomenon” (van Manen, 1997, pp. 38–39).
The data generated is a thick description where the researcher explains “the feelings, the mood”, “specific events”, “how the body feels” and “senses” rather than being “concerned with the factual accuracy of an account” (van Manen, 1997, pp. 64–65). Unique to research about experiences is the idea that the researchers also have lived experience of the phenomenon being explored (Denzin and Lincoln, 2018). Bracketing is important within our research design as it recognizes that a researcher's own experiences of the phenomenon “are also the possible experiences of others” (van Manen, 1997, p. 54), but they are set aside so that student participants’ meaning is prioritized. In this work, we bracket anything outside our research design and methodology during the analysis, including our expertise in chemistry content, skills, and teaching practices. We do not explain away a student's experience by showing why it was productive for learning content or disciplinary skills in chemistry. This approach allows us to actively understand the students’ affective experiences and maintain phenomenological focus (Merriam, 2009).

3.3 Methods: interviews informed by observation

Data were generated through 1-hour semi-structured virtual interviews (Kvale and Brinkmann, 2009). The focus of the interview questions was informed by participant observations of students’ experiences during the introductory laboratory and by the overarching research question. The interview guide and details on the participant observations are provided in the supporting information. Each interview was a unique moment for data generation, as a different conversation is constructed between the participant and the researcher (Kvale and Brinkmann, 2009). Thus, we work under the assumption that these questions do not elicit one specific answer; rather, they allow participants to describe their experience (Kvale and Brinkmann, 2009). Follow-up questions were generated in the moment, focusing on a student participant's experiences and using the student's words to build rapport through listening (Lareau, 2021). To build rapport with the students and inform the interview guide, participant observations were conducted for 8 months prior to the interviews, under ethics approval (Latour and Woolgar, 1986; Buxton, 2001; Lareau, 2021; Zhang, 2022). Participant observations, as described in our prior work (Schechtel and Bongers, 2024), created a space for our students’ expertise (i.e., lived experiences) to be central to our interview questions and follow-ups (Lareau, 2021). While the details and analysis of these participant observations were part of a larger study outside the scope of this paper, a handful are included in this manuscript with the notation (Date of Observation, Experiment Number, Notebook page number), following the approach of other studies (Buxton, 2001; Brandt and Carlone, 2012). In alignment with our methodology, these written observations provide examples of students’ affective experiences as seen in practice (Taussig, 2011).

3.4 Analysis and interpretations

In qualitative research, data analysis is not seen as a separate step in the research process, but rather as “occurring throughout the entire [iterative] process” (Lareau, 2021, p. 195). The decisions in the analysis were broadly inspired by reflexive thematic analysis (Braun and Clarke, 2019). Specifically, reflexive thematic analysis is guided by the researcher's thought process as informed by research design assumptions. Reflexive thematic analysis is not a rigid series of steps to follow or either-or choices, such as inductive or deductive analysis (Braun and Clarke, 2019). Our main analysis assumptions are that failure is an affective experience (Jarvis, 1987; Whittle et al., 2020; Feigenbaum, 2021), students have experiences that must be listened to (Fielding, 2004; McLeod, 2011) and our goal is to understand students’ affective experiences (Ahmed, 2004). The first author immersed themself in the data through multiple listens as they transcribed the audio verbatim. Throughout the transcription process, notes were added to the transcript and memos to preserve the affective qualities of the audio as it was translated to a textual representation. The transcribed interviews were imported into NVivo 14 and organized for the next stage. In the second read, initial codes were developed, and memos were written (Creswell and Creswell, 2013). The initial codes were framed through the lens of the research question and design. Code refinement was completed for internal homogeneity (Braun and Clarke, 2019). The refinement allowed the researcher to see if all the quotes in a particular code aligned or if they needed an alternative code. The themes were generated from the refined codes. Throughout the analysis process, memos and annotations were used to document the analysis and guide the researcher's decisions. 
Trustworthiness (see SI) was approached through positionality in the research design (Bowen, 1999), multiple peer debriefings throughout the analysis (Parker et al., 2023), and triangulation between the participant observations and the interviews (Lincoln, 1995; Morse, 2015).

3.5 Participants and context

The inclusion criterion for our exploratory study was students who had completed introductory chemistry (1300 students) during a specific academic year at a mid-sized Canadian institution. The course was a year-long (September–April) introductory chemistry course involving weekly (1.5 hour) traditional lab experiments with lab reports. Failure was not an explicit learning outcome of our laboratory course or assessment. Additionally, students were not directly graded on the accuracy of their results in their lab report or any other course assessment. Recruitment began after the course ended, and 15 students consented to participate in the interviews. Participants chose their pseudonyms or were assigned one, based on individual preference. As part of the interview, each student was asked about their prior laboratory experience; these experiences were diverse and also shaped by the COVID-19 pandemic (SI). These differences in experience were not analyzed, as the research question focuses solely on affective experience.

3.6 Delimitations

While our participants have a diverse range of laboratory experiences, this study does not intend to generalize to the experiences of introductory chemistry students at the institution studied or in the broader community (Lincoln, 1995; Lincoln et al., 2011). Sample size, reproducibility, and generalizability are all outside the scope of qualitative research; an entirely different study design, with different research questions, epistemologies, and methodologies, would be needed to achieve those goals (Lincoln, 1995; Lincoln et al., 2011; Denzin and Lincoln, 2018). Qualitative research is situated within the researcher's positionality (Jacobson and Mustafa, 2019; Secules et al., 2021). The first author's lived experience with chemistry laboratory learning activities motivated her to design a study that prioritized affective experiences, recognizing that student experiences should be listened to and respecting qualitative research traditions in education.

4 Findings

Three overarching themes about students’ experiences emerged: what failure feels like (Theme 1), what it means to fail (Theme 2), and how the learning environment shapes failure (Theme 3). What failure feels like is about experiences of failure being overwhelming, shameful, and never-ending in the laboratory. The second theme centered on a student's definition of failure and how that definition shapes their actions. The final theme focuses on how the learning environment situates experiences of failure through practical considerations such as space, equipment, and time. Across these themes, students provided solutions, which are woven into the findings and further fleshed out in the implications.

4.1 Theme 1: what failure feels like

The experience of failure for a student is more than a slight discomfort; it is overwhelming, containing many emotions and decisions to be navigated. Failure is overwhelming in a student's experience as they feel judged by others, and the laboratory constantly exposes them to weekly experiences of failing.
4.1.i Overwhelming. Our students experienced overpowering emotions when they failed, which consumed their focus during and after the laboratory. The strong, overwhelming emotions also spread across the laboratory, shaping other students’ experiences. The experience of mistakes, “[took] concentration away from where it was actually needed” [Allen], was “more frustrating than stressful” [Abigail], and was “stress and a lot of heartache” [Erica]. An aspect of the experience these students described was the strong negative emotions they felt, such as stress and frustration. Erica expected that sometimes their “experiments [would] fail” or would “not go [as] planned,” but the experience was still difficult.

Erica: “We barely had any results [for the Prussian blue experiment]. It was, it was essentially a failed experiment. And I remember that so clearly. ‘Cause I – I was so disappointed because we followed those steps to the T. We re-read. I printed out the lab steps, so we weren’t dependent on the computers.”

There was disappointment when the experiment failed because there was an expectation that the experiment would succeed if they prepared correctly. Similarly, in the same experiment, a group of students “asked why their experiment did not work as their blueprints were parchment yellow and other students’ were royal blue. The TA said they did not know why, and the students’ faces sank with disappointment” (October 26th, Fourth Experiment, pg. 6). When the experiment fails, the disappointment comes from feeling isolated as other students succeed around them. Taylor felt their failures were “not as important because I felt like it was just me” [Taylor]. In addition to failure being an isolating experience, it was an energy felt around the room. Aisha recalled “showing the panic on [their face], while [their lab partner] would be internally panicking.” The panic Aisha expressed was noticed by other students completing the experiment, including Allen:

[During the experiment I] saw other people, like, really freak out and lose their nerves when they made an essentialist mistake. Other people were really losing hope and thinking that they could not get the experiment done.

Other students in the lab also feel the strong emotions of the student who made the mistake. “Even a TA felt that disappointment overtook them when they could not help students complete the experiment” (October 26th, Fourth Experiment, pg. 6). The experience of failure creates strong emotions felt in the laboratory.

4.1.ii Judgment. Our students felt they were being judged for their failures, which led to strong, negative emotions when they made mistakes.

Student: “Can I come back at a later time to complete the experiment alone? I made too many errors today.”

TA: “What are the errors?”

Student: “That does not matter. I still made too many errors.”

TA: “Tell me what the errors are, and I can tell you if we need to repeat the experiment.”

Student: “I know it is my fault for not coming to the lab prepared. The vortex on our stir plate is smaller than the groups beside us, and we did not time how long it took for the water to boil.”

TA: “Your experiment is not messed up at all. You can finish it today, no problem.”

Student: [Sighs with relief]

(September 19th, First Experiment, pg. 8)

In this moment, the student was already worried about failure and hesitant to share it with their TA during the first experiment of the course. Part of this hesitation centers on the difference in expertise between students, TAs, and instructors. When asking questions, students expressed that they “felt really stupid asking what is PV = nRT” [Sage], believing “that [their question] was the stupidest thing they could have said” … “I should have known it was a glass stir rod” (September 19th, First Experiment, pg. 3). These interactions show an expectation that students should already know a piece of content knowledge, and that students blame themselves for not knowing. When asked to reflect on how to improve the laboratories, students shared that “[they] don’t think it was a problem with the course, [they] think it is probably (next word said barely audible) themselves” [Julia] or that “[they] don’t have enough chemistry knowledge to be commenting on [that]” [Sage]. Students also blamed themselves during the experiment, with their judgment rooted in expertise. The judgment was not something that students merely perceived; it was a part of the course experience. TAs would tell a student, “like, oh, you’re doing this completely wrong” [Abigail]. Within the lab, Erica described:

Erica: “We need better facilitators. We need people who actually want to be there and who understand the experiment start to finish. And, like, who get it. And can explain it, in a calm and clear way. Instead of making me feel like an idiot! (Heavy sigh) Because I don’t understand something. ‘Cause there were—there were two, um, leads, uh, within my lab. One was a little more approachable. The other, absolutely not. Nu-uh. I felt like I would look at them and I would turn to stone. It was really awful. I like—I felt like I was being watched constantly.”

The judgment Erica experienced was not only rooted in expertise, but was also constant and overwhelming. Specifically, they shared that asking questions was seen as a burden to the instructor. Collin shared a similar experience when they asked questions about their lab reports on the discussion board.

Collin: “When the prof, see, [the question] (pause) because they are masters of what like, they are masters of the uh, of the science behind it and they know it. It's just this automatically—like, no matter what you’re asking—they’re going to know the answer and they’re just going to think you’re a little special, you know? Special in a dumb way.”

As there is an expertise imbalance when it comes to asking questions, students are judged for not having expertise, yet they cannot build it because only certain questions are treated as valid. Valid questions are determined by the expertise of the person answering the question. In these experiences, students share an expectation of being judged for their mistakes, and accounts of being judged by those with more expertise than themselves.

4.1.iii Inescapable. Failure exposed our students to strong negative emotions, and repeated exposure shaped their approach to future experiences. It was an experience that our students could not escape during the laboratory or the assessment. Allen experienced “obsessing with the lab reports, [by] thinking about them day and night” [Allen]. Part of this obsession was “[the automatic grading software that] would always mark it as like wrong even though I still rounded it [and] had the right number” [Vanessa]. With the automatic grading system only providing marks for the correct answer with the correct rounding, it perpetuated the idea that the student's answer needed to be perfect to count as learning. Even within the experiment, Collin shared that they needed to follow the procedure perfectly.

Collin: “Use approximately this amount and the [instructors] don’t tell you like give or take how much [you need]. You aim for you know 0.8[g], but sometimes you get 0.9[g] and then that's not approximated enough, you know? But because they said approximately, you’re like okay 0.9[g] that must be good enough… (faintly) it was never good enough, (normal speaking voice) it was never good (laughs softly).”

No matter what these students did, they expected that they would fail. There was no approach that they could take that would not fail. “Every time I went in [the lab there] was a pervasive feeling [of] oh what's going to go wrong today” [Erica]. For the lab reports, Allen shared, “not knowing what to do or if you got [the calculation] wrong or like if the system is wrong. Because [the automatic grading software] had (clears throat) a history of not functioning properly, you would like you would doubt it” [Allen]. Part of the never-ending failure is that the students were unsure if they had made a mistake or if the instructing team had made a mistake when setting up the assessment or lab design. These students were obsessed with their mistakes, partly due to the teaching team's poor communication and lack of accountability. Some students regulated their emotions from constantly failing by actively choosing not to care about the failure. After three quarters of the experiments had been completed, Kate's partner was “just like whatever, I’m stressed, but whatever [be]cause nothing [is] changing [my] grade” [Kate]. There is a contradiction here: the student says they do not care, yet their stress shows that they do. Allen expressed a similar sentiment in their advice to future students. “It is still 2% of your grade and just one uncertainty question [does] not matter that much in the long run” [Allen]. Despite the advice, Allen had previously felt obsessed over their lab reports. Our students struggled to adopt a carefree mindset around failure, despite it being recommended as advice by other students. Additionally, both students highlighted that they could only set a failure aside because the mistake would not substantially impact their grades. This suggests that there is a certain threshold students need to reach before they can set aside their experiences of failure. Not caring about failure was not only a belief students attempted to adopt; it was also an approach to the experiments.

Kathleen: “Sometimes I made a mistake [and] maybe I have to redo the whole lab, and that made me feel like scared. My lab partner and I were like oh… should we just like go for it? Should we like not care, [when] the TA would be like you have to restart, [even] if [we] were almost finished the lab.”

When the first author asked Kathleen to describe any occasions that they started an experiment over, they explained that “[they] never start over [and] always move forward” [Kathleen]. From Kathleen's experience, they actively avoided failure by ignoring the mistakes they made in the laboratory. Choosing to ignore mistakes aligns with other students' experiences of constant exposure to failure in the laboratory and on the lab report. The constant exposure to strong negative emotions and the judgment of others could motivate a student to give up. No matter what these students did, it was never enough to meet the standards of the experiment or assessment.

4.2 Theme 2: what does it mean to fail?

Understanding the criteria for perfection in learning and assessment is rooted in a student's constructed definition of failure. As we discussed above, a student's failure is stigmatized by educators, meaning that to understand how failure is defined, we need an alternative approach (Harrowell et al., 2018; Feigenbaum, 2021). A student's definition of success allows us to understand what failure is (Blaisdell, 2004; Nairn et al., 2005). Our students’ definitions of failure are rooted in the outcome of the experiments and the report. Additionally, students began to believe that failure is controllable and that some failures are acceptable, based on the definition rooted in summative outcomes.
4.2.i Students’ definition of failure. The definition of failure shared by our students was rooted in whether the experimental outcome was achieved, meaning the experimental results. This question was posed by a student during an experiment where data was shared:

“Which group has the best data?” (October 5th, Second Experiment, pg. 3)

This question stayed with us throughout our research study. Initially, we did not know what the student meant. The word best is relative; we might say we performed better or worse compared to something or someone. In conversations with students following the above interaction, they shared the standards they compared against. Success was “a large yield like 140%” [Kate], “a perfect 100 point something percent yield” [Sage], “getting every single step done” [Justin], and “an outcome [that] matched what our predictions were” [Justin]. Specifically, predicted outcomes meant when the experiment “was exactly or very similar to the demo [experiment]” [Melissa]. Students felt successful in the lab when they had results from the experiment, and those results matched the desired outcomes, such as a yield from a synthesis. In addition to the synthesis, success was clear-cut for students to assess in titration experiments.

Nicole: “Cause I got like three very good titrations in that one and I it was very satisfying. And I think like titration is the type of lab where like, (Pause) if you do it perfectly it's you can come out of the lab like feeling really good about yourself (Pause) because honestly it's like hit or miss to like the other labs it is more like yeah it could be good it could be better but titration is like you either get it or you don’t.”

Like the synthesis experiment, titration has results that are macroscopic observations, such as color changes. Success was easier for students to define as the laboratory observations were macroscopic, and they could compare their results to other groups. Students also described how they used comparison in their own experiences. A failed experiment can then be attributed to “not seeing the colour change that they were supposed to” [Aisha] or “our graph looked wonky” [Kate]. Melissa built on this idea by describing failure as “if what you’re looking at [in your experiment] is a lot different from the person next to you or everybody around you” [Melissa]. A student's observable results were used to define their success by comparison with others. Not every experiment the students encountered had macroscopic results as the outcome. In “some other labs where you are just getting numbers you don’t really know un how accurate those numbers are until you’re actually doing the lab report” [Nicole]. When Collin assessed their data, they “didn’t know where [they had] messed up because it's all so finicky. If [they] messed up, it would only show up at the end (light sigh on the word end), and then [they are] sitting there just like uhhh” [Collin]. With the results framing the definition of failure, in situations that relied less on macroscopic outcomes, students shared that it was harder to recognize a failed experiment until the data was analyzed well after the experimental session. For our students, the definition of failure is when they do not collect the correct experimental data or obtain the right results.

In addition to the experimental results, there were outcomes outside the experiment that our students used to define failure, namely the assessment. Students described success on the report as “when I actually understood what was being taught [in the course]” [Aisha], “when [their] lab report was perfectly written” [Sage], or “understanding every step of the lab and the material” [Allen]. Success, in this case, depended on the metrics for assessing what counts as perfect and understanding. While we do not have specific metrics for each student, “understanding” and the “quality of the writing” are assessed by the teaching assistant who provides students with a grade. “I don’t remember the grades I got for all of them, but I think I did pretty good overall” [Julia]. The evidence was the summative grade assigned on the lab reports, and the metric for defining failure for the assessment is the outcome. Instead of the outcome being the result of the experiment, it is now the outcome of the lab report. While students did not share specific grades, they knew whether they understood the content or wrote well based on the grades they obtained.

4.2.ii Defining acceptable failures. Some students would downplay the failures they experienced as successful if they achieved the outcomes. The downplaying allowed students to distance themselves from how failure feels in the moment. This was a strategy Taylor learned to adopt in the face of finishing only a quarter of the experiments in the course.

Taylor: “I mean, in my case yes because all the other weeks. I was successful in the lab report, but the lab itself felt like, (pause), it was just kind of the same experience over and over again. Of just the stress, of I won’t be able to finish.”

Interviewer: “Yeah. Was that stress at any point like overwhelming or was it manageable or [pause] somewhere in between?”

Taylor: (interjects as interviewer is pausing) “I think it was. I think it was manageable because I knew that, no matter what happened in the lab itself, as long as I’d be able to have the knowledge to get the lab reports done I would be fine.”

Taylor contradicts themself: they first acknowledge that not finishing is stressful, but then brush it off. A successful lab report allows them to downplay the failure of leaving the experiment without results, as that is the only metric that impacts Taylor's grade. Layla's approach was similar.

Layla: “Um I feel like since we already did [titration in high school] it was kinda just like doing it just to do it. Like titrations are there like fun but only after you mess up three times and you kind of have to just do it to get done and leave (soft laugh).”

Together, these students’ experiences show that which mistakes count as acceptable within the course aligns with their definition of failure. Layla's comment shows how mistakes could be fun if you still leave the laboratory with the correct data. Allowable mistakes cannot affect the results needed to write the lab report or the quality of the report. As these mistakes are allowed within our context, students can downplay the feelings of failure when the end of the story results in success in the experiment or the report.

4.2.iii Controlling failure. In our context, students believed that failure was controllable. During the experiment, failure could be controlled by “following the instructions to [a] T” and “writing down observations to [a] T” [Melissa], avoiding glassware that was “not 100% clean” [Abigail], and watching out for the “many human and nonhuman errors that can happen in the lab” [Abigail]. Here, controlling failure was about knowing what leads to deviation, such as not following instructions, improper observation, and human error. What students wanted to control was the human element of the experiment, meaning that failure was experienced as their fault. Abigail explained why she cannot expect clean glassware for the experiment, saying “The people using them before us the day before probably don’t do the best job cleaning it” [Abigail]. Not only do students judge themselves for these failures, but they also blame the imperfections of others. Erica's experiences build on the connection between perfectly following the procedure and experimental success.

Erica: “And I’d look at my lab partner and be like, are we doing this right? Like, I’m reading this… and this is how I’m interpreting it? Is that how you’re interpreting it too? And we would still have things go wrong. Or even talking to people across from us. Other groups. We would look at each other and be like, so is this the step we’re on here? Is this what you guys did for, for this? Like having to double-check because it was so unclear. And because of those unclear instructions um we just didn’t have successful experiments. And I’m quite certain we’re not alone in that.”

Based on Erica's experience, if the instructions were clear enough to follow, there was an expectation that the experiment would and should be successful. Student collaboration was an approach used to control failure, specifically with lab reports. Students worked together to “make sure [they] are all on the same page” [Justin], make sure “[they] didn’t make any silly mistakes” [Justin], and “helped [them] to figure out stuff as [they] were discussing the problems with the lab” [Allen]. The support extended beyond report writing, where those with “lab on Monday [would] tell their friends [to] be careful when you do this [step]” [Taylor] and warn “that the [experiment] took a notoriously long time” [Collin]. In summary, as failure is often judged, students sought to manage failure by following the instructions perfectly, removing human error during the experiment, and collaborating on reports.

4.3 Theme 3: what about the learning environment shapes failure?

Our students described how the lab environment, specifically time restrictions, the lab space, and the equipment used, shaped their affective experiences of failure.
4.3.i A new and intimidating space for learning. As the laboratory was a new learning environment for students, the new space and new experiments were a source of anxiety and shaped students’ experiences with failure. Justin described “[feeling] anxious cause [the laboratory is] a new space so your um making more mistakes”. “In particular, [the experiments at] the beginning of the year felt quite rushed because it was so like unfamiliar” [Justin]. One specific source of anxiety in the laboratory space was the chemicals students worked with.

Sage: “[It was] honestly very scary [because I thought] there was going to be something wrong like an explosion or [maybe I might] die if [the chemical] touched [my] skin (laughs). Like [I was] very, very scared, [even though] a lot of the mistakes [I] made were actually more of the technical, like on the computer things where [I] had forgot to input data or forgot to click stop. So now [our] slope looked like a curve.”

Taylor shared this fear, developing from their “first time working with acids and bases and learning that they can be harmful [which is] really scary” [Taylor]. Additionally, “hearing the warnings of like this thing can burn your skin, [led them to] automatically start playing through the worst possible scenarios in [their] head” [Taylor]. Working with chemicals was a fear for these students because it was a new experience, and they internalized the safety warnings. “In addition to the safety concerns, there was anxiety around using the chemical names when navigating the experiment” (October 26th, Fourth Experiment, pg. 7).

4.3.ii Equipment issues added stress about failure. What makes equipment failure problematic is not that it happens, but how it shapes the student's experience under the practical restrictions of the teaching laboratory. In our context, using computers as equipment created an additional task that students needed to complete on top of their experiment during the session. Students described “getting the software going with like the probe and stuff was hard enough” [Sage] because it “would like glitch a lot” [Julia] and “would randomly shut down” [Erica]. Erica shared that equipment failure “was an added stress because I wasn’t sure if even [the data collection software] was gonna work. If the computer would randomly shut down. Or if I can even access my account” [Erica]. If the computer failed, students were expected “in the middle of the lab to stop and like sign in on another computer like right behind us [and] we kinda had to like rush at some point” [Layla]. Students were still expected to continue or restart the experiment and, ideally, have their own data by the end of the lab period. Our students often struggled to finish experiments each week.

Taylor: “it felt like it was something different every lab […] One week the hotplate wouldn’t work, another week we’d just kind of be moving slow, another week we wouldn’t be 100% sure what to do (pause). It just felt like different things each time.”

Erica similarly echoed that it was a “mixed bag [of mistakes] every time going into the lab” [Erica]. The equipment created additional tasks and stressors that were not accounted for in the time allotted for the experiment.

4.3.iii The drawbacks of time restrictions. Part of what contributed to the overwhelm of failure was the uncertainty of the consequences. Aisha and their lab partner were “panicking a bit, because [they] knew [they] were short on time” [Aisha]. Being short on time was “honestly kind of stressful, [as] the main thing that would go through [their] head [was they] probably do not have time to redo a big section” [Melissa]. For one student, not finishing was a worst-case scenario thought running through their head, but for another, it was their weekly reality. Taylor and their lab partner “were excited when [we] finished the [experiment for] the first time” as they only “finished like four or five labs in total” out of 16 weeks of experiments. In Taylor's experience, “it was just kind of the same experience over and over again of just the stress, of I won’t be able to finish [the experiment]” [Taylor].

Time restrictions meant that some students had nothing tangible to take away at the end of the laboratory, which was a major part of their experience of failure. A student would spend the time and do the work but walk away with other groups’ data, not their own.

Justin: “Sometimes the [spectrometer] did not calibrate correctly, probably my mistake, and it would just mess up the entire experiment. That would just be such a bummer that you put in you know like an hour of work for nothing (laughs).”

Melissa asked us to “imagine putting that much effort into creating the solution and then it is just all gone, that's—that's so sad” [Melissa]. After Taylor's experiment failed, they “realized that [I] just, I just wasted half an hour doing the wrong thing and I need to go back and start the lab from the beginning, which means I’m guaranteed not to finish the lab again” [Taylor]. Time and effort were wasted when students made mistakes, contributing strong emotions to their experiences of these failures. In the eyes of some students, mistakes only waste time and energy because they are not assessed.

5 Discussion

5.1 Situating the findings in chemistry culture

The three themes that emerged from students’ experiences were the feelings, definitions, and context of failure. Failure in the moment was described as overwhelming, stigmatizing, and never-ending. Students’ definitions of failure used experimental and lab report success as metrics, which made some failures acceptable and others not. The situated context, such as the equipment, time allotted, and being in a new learning environment, also shaped students’ experience of failure. We come to understand that generative failure and transformative experiences that lead to learning are not accommodated in our laboratory context. The failure students experience is stigmatized due to the neglect of the affective component by educators in our context (Feigenbaum, 2021), and the disorienting dilemmas do not lead to students undertaking a new approach (Kitchenham, 2008). Through our ethnographic lens, we situate students’ experiences of failure in chemistry culture in this discussion. Specifically, we explore the values transmitted to students about success, failure, affect, and practicality (Hammond and Brandt, 2004; Brandt and Carlone, 2012). We discuss how experiencing judgment transforms a student's beliefs about failure, how failure is stigmatized in laboratory settings, how failure is vulnerable, and how failure is not accounted for in learning environments.

5.2 Experiences of failure transform into student beliefs

Our students experienced strong negative emotions like frustration, stress, paranoia, and disappointment. These strong emotions emerged from students’ experiences of judgment or perceiving judgment. Specifically, Erica described being judged for their expertise when asking an instructor questions about the experiment or content knowledge. This judgment, rooted in expertise, has previously been seen in ethnographic studies on scientific research. In the study quoted below, when a new researcher arrived in the research laboratory and asked questions, it was observed that:

“The greater the ignorance of a newcomer, the deeper the informant was required to delve into layers of implicit knowledge, and farther into the past. Beyond a certain point, persistent questioning by the newcomer about “things that everybody knew” was regarded as socially inept.” (Latour and Woolgar, 1986, p. 77)

Like the experiences of our students, new researchers’ questions were also judged for their expertise, indicating that some questions were acceptable, and others were socially inept (Latour and Woolgar, 1986). Beyond what they felt, our students would brace themselves for judgment as if they expected it. Questions they asked during the experiment were prefaced with phrases like “this might be a dumb question” (February 1st, Tenth Experiment, pg. 4) or “sorry for all the questions” (November 2nd, Fifth Experiment, pg. 6). Essentially, the students were in a catch-22: judged if they asked and judged if they did not. When our students share experiences of being judged by educators, it can be understood through a deficit framing that inherently fosters judgment. When chemistry practitioners or educators judge students’ expertise through a deficit lens, they classify a student's questions as a problem or issue to solve (Sismondo, 2010, p. 174). Through this lens, if the student had the required expertise, they would not ask any questions (Firestein, 2016). Additionally, it “assumes that science trumps all other knowledge traditions, ignoring claims to knowledge that come out of non-science traditions” (Sismondo, 2010, p. 177). From their experiences of judgment related to failure, our students began to recognize that only certain questions were valued and that their lack of expertise was a problem, not an experience that could be built from.
To protect against this judgment, students adopted beliefs of blame and responsibility for any failure during the laboratory. Essentially, by blaming themselves, students felt recognized by their teaching assistant as belonging in science.

5.3 Failure is stigmatized in research culture

Failure is a challenging experience for students because it is stigmatized in science contexts. Experiences of how failure feels in the moment are not often openly discussed in research or science classrooms (Whittle et al., 2020). Instead, failures are hidden away until they can be repackaged and reframed by scientists as success stories (Firestein, 2016; Green et al., 2018). Rarely is a failure not repackaged into a success story, because research failure “becomes synonymous with the failure of the person” (Harrowell et al., 2018, p. 232). When failure in research is framed as an individual's fault, it stigmatizes our view of failure (Feigenbaum, 2021). This judgment makes it unlikely, if not impossible, for the failure to be generative or transformative in a way that supports students’ learning (Jarvis, 1987; Robertson et al., 2025). With failure being a characteristic of the individual (i.e., the student), there is a possibility that failure transforms into a fixed mindset. Among the student experiences shared, Taylor reframed their failures during the experiment as a nonissue, since they did well on the lab report. Similarly, Layla talked about how it was fun to mess up the titration, so long as they left the lab with the correct data. Despite not having done chemistry research, students see how failure is represented by the teaching team and how scientific research impacts society, an idea explored in related literature: “There's no problem that seemingly can’t be solved by channeling more and more students into the science education pipeline” (Rudolph, 2019, p. 1). Science is ‘propped up’ in society because it is sold on its successes. For example, after the Second World War, the American government proclaimed that science had won the war (Rudolph, 2014, 2019). Within our society, any challenge is seen as solvable by science, such as economic downturns, international crises, and everyday problems (Dewey, 1910; Rudolph, 2019, 2024).
As science built its authority and respect in society through its successes, students may expect that science and failure do not mix. If failure is not shared unless it can be reframed as a success story, the shame of failing can degrade a student's sense of belonging in science (Green et al., 2018).

5.4 Failure is contextual

When experiments fail, students are frustrated and surprised because the experimental procedure is seen as a set of rules instead of a procedure to be interpreted and decisions to be made (Sismondo, 2010, p. 143). Yet, as chemistry educators, we rarely share how we make decisions when designing the laboratory activity or completing the experiment, i.e., why “[A chemist] blocks out some features and emphasizes others” (Sismondo, 2010, p. 108). This information provides important context: the decisions, knowledge, resources, goals, and lived experiences that led to data collection and the development of a piece of scientific knowledge (Latour and Woolgar, 1986). Aside from labs not practically allowing for many mistakes, we see how students lack the context to interpret allowable mistakes, much less learn from them. Our students believed that if they could follow the procedure exactly and control for human error, they could avoid failure. This is because they experienced that the majority of the 40 student groups in the laboratory at one time could arrive at the expected outcome. The knowledge and data that students collect becomes decontextualized, which reinforces the idea that the experimental procedure is a set of rules rather than guidelines that need interpretation. If we do not support students in learning the decision-making that goes into completing a procedure, they will struggle to determine which decisions are mistakes (Hodson, 2014; Firestein, 2016). Only by considering the practical and logistical aspects of the experimental procedure can generative and transformative experiences of failure become possible.

5.5 Failure is a vulnerable experience

For some students in our study, their failure did not turn into a success story or an allowable failure within the course. The stories of failure our students shared often focused on the overwhelming emotions in the moment failure happens. The “lived experiences of failure [are] deeply affectual and often deeply vulnerable” (Whittle et al., 2020, p. 1). Collin, Allen, and Erica described how the feelings of failure were inescapable in the course. Specifically, they had a mentality framed as what would go wrong today, and nothing they did was ever enough. These students were trapped in these feelings because failure was inescapable in the experiments and lab reports. In our study, some students chose to avoid or ignore failures to deal with the intense emotions. These students explained the ways their experiences with failure were not productive and were detrimental to their mental health. Specifically, Kathleen and their lab partner would ignore when the experiment failed and finish it even if their results were impacted. The overwhelming emotions surrounding failure are reflected in defining failure as a transformative experience. “Any learning might be meaningful, but they can be meaningless” (Jarvis, 1987, p. 69). Meaningless experiences can alienate the learner and prevent growth in future experiences (Jarvis, 1987; Shepherd et al., 2020). Within our context, the sheer amount of failure students experienced shaped how they approached future experiences during the experiment, such as controlling for or avoiding failure. For example, students tried to follow the procedure perfectly to prevent failure. The affective experience of failing was overwhelming, leading students to take different approaches to reduce their feelings of failure. Given these findings, it is not surprising that students are mainly concerned about completing the work and getting out of the lab as quickly as possible (DeKorver and Towns, 2015, 2016).

5.6 Failure is not practical and has consequences

Our study, and others, show how the laboratory learning environment does not adequately allow students to fail without consequences. Allowable failure for a student accounts for the consequences, such as payment to re-take courses, the security of their position, and the emotional burden (Whittle et al., 2020; Feigenbaum, 2021). As failure is different for every student, we cannot assume that it is either productive or a consequence that a student can bear. Chemistry laboratory learning activities are often designed without accounting for how failure is shaped by the course logistics, such as time restrictions and equipment breakdown (Millar, 2004; Hodson, 2014). In our context, there was an expectation that all students would be able to complete the experiment and obtain suitable data. However, students experienced equipment failure, struggled to troubleshoot, and moved stations to finish. Similarly, other studies—even from decades ago—have described how “even if the [students] ‘do everything right’, the waywardness of shoddy and poorly maintained school apparatus may lead them astray” (Hodson, 1996, p. 118). No matter how well designed a laboratory learning activity is, students can still fail to obtain data, just like in research contexts. Unlike research contexts, teaching laboratories are time-restricted (1.5 hours for our students), not allowing students time for troubleshooting or repeating an experiment to obtain better results (Del Carlo and Bodner, 2004; Galloway et al., 2016). Yet, students are often still required to explain their failed experiments in lab reports. Students in laboratory learning activities cannot be expected to respond to failure in the same ways as in a research context; the logistical considerations, such as time restrictions, support, the ability to repeat experiments, and the option to discard unexplainable data, are vastly different (Buxton, 2001; Hodson, 2014).
Students have expressed concerns about being penalized in their grades for mistakes in the laboratory (DeKorver and Towns, 2015; Galloway et al., 2016; Vaughan et al., 2025). If we want students to learn something from the experience of failing, it needs to be an acceptable outcome in the learning activities and assessment of the laboratory (Millar, 2004; Hodson, 2014).

6 Implications for practice

6.1 Communities of learning without judgment

This study showed the impact of TAs and lab coordinators judging students’ failures during the laboratory. To reform training for teaching assistants, we need to first understand where students are comfortable asking for feedback. Students shared that they would “always ask [my lab partner] for their help” [Aisha], or would approach “[another CHEM 112 student] they recognized [when they] were in the dining hall [and] desperately needed help” [Sage]. Part of this community development is unique to the institution studied, as all first-year students live on campus. The difference between asking the TA in the course and asking another student was that the students did not judge each other. In sharing their struggles, students “knew that if [they] were to ask [a] question, no matter how stupid there was definitely someone else who also felt like they didn’t know what was going on, so it never felt like [they] were gonna get judged” [Collin]. The TAs and instructors, who were a key source of judgment, were absent from these spaces (Latour and Woolgar, 1986; Sismondo, 2010; Rudolph, 2023). Rather than adding more collaboration within our laboratory spaces, such as group work or a discussion board, what students are asking for is spaces where their failures are accepted and normalized (Shepherd et al., 2020; Whittle et al., 2020; Nunes et al., 2022). The way that we, as educators, respond to our students’ questions shapes how they experience setbacks and can transform the learning environment (Feigenbaum, 2021). Training for teaching assistants needs to focus on responding to students’ questions with understanding and encouragement rather than judgment (Denial, 2024).

6.2 Creating space and time for feedback

Laboratory learning activities prioritize collecting data but rarely support students with feedback on their analysis and interpretation. Studies on argument-driven inquiry, laboratory learning activity design, and the construction of scientific knowledge have emphasized the value of conversation and feedback in science (Latour and Woolgar, 1986; Walker et al., 2011; Walker and Sampson, 2013). If laboratory learning activities are not designed for feedback, and students are scared to ask questions, when do students get feedback and support for their learning? Allen “had to ask 10 different friends to know like how to do” the analysis. Similarly, Julia “would like talk with their lab partner if they didn’t know like how to do a certain part [of the lab report].” The challenge with students receiving feedback only from other students is that the TAs are ultimately the ones with the expertise and the expectations for what is required in data analysis and writing. If analysis, interpretation, and scientific writing are skills we want students to develop, laboratory learning activities need to be redesigned to teach students these skills (Obenland et al., 2014; Denial, 2024). Beyond curricular changes, TAs need to be trained in how to give students feedback on learning activities and assessments. Without proper feedback, students can be caught in a never-ending cycle of failure, as we saw in our study.

6.3 Feedback to learn from failure

Failure is not just about having the experience, but also about having the time and support to reflect on the experience so that it becomes meaningful moving forward (Del Carlo and Bodner, 2004). Specific feedback, or clarification on whether they were on the right track, was essential for helping students reflect on their learning (Nicol and Macfarlane-Dick, 2006; Jørgensen et al., 2023). Taylor explained how their teaching assistant (TA) “saw that people were struggling with, I think it was citations and introductions, so they sent out an email about it to everyone.” Additionally, Collin appreciated that “sometimes the TAs would be nice and would like give you [a] nudge in the right direction [which was] lovely when they did that” (appreciative/happy tone). The greater challenge for designing these learning activities arises when time constraints or TA mindsets prevent clear and specific feedback from reaching the students (Fairweather, 2005; Shortlidge and Eddy, 2018). Our students often described their TA as neither an approachable nor an accessible source of feedback. Erica shared that approaching a particular TA with a question was like looking Medusa in the eyes and turning into a statue. Layla's TA “didn’t have access to [their] email in the first [semester],” meaning that Layla “had no idea what the hell I was doing.” Even though this support is “built-in” to teaching (i.e., students can ask TAs questions during the laboratory), that does not mean it is practical or achievable for all students. Work needs to be done to advocate for hours allocated in TA contracts for student feedback. More broadly, there needs to be a culture among TAs and faculty that values teaching undergraduate students rather than seeing it as a burden or a chore (Fairweather, 2005; Shortlidge and Eddy, 2018).

6.4 Responsibility and owning failure

As chemistry educators, we need to support students in seeing chemistry data and knowledge as contextual and subjective. If we want students to feel comfortable with and embrace failure, the teaching team (instructors, TAs, and coordinators) needs to take responsibility for and admit our own failures (Feigenbaum, 2021). Erica was often panicked because the experimental procedure and report template contradicted each other. The contradiction “left us scrambling (panic in voice) [and] paranoid [that values were] hidden somewhere in the report template” [Erica]. The course expected Erica to sort out the inconsistencies and punished her with grade deductions. Our failure to acknowledge and fairly rectify mistakes when they happen within the course breaks our students’ trust, which can have consequences for future learning (McLeod, 2011; Feigenbaum, 2021). Additionally, if TAs or instructors are never seen making mistakes, this only reinforces the deficit framing that failure is about lacking expertise or experience (Shepherd et al., 2020; Feigenbaum, 2021). This lack of responsibility helps uphold the narrative that science is rooted in success.

7 Conclusions

In science education, affective experiences such as failure have been explored to understand how emotions impact the acquisition of content knowledge and disciplinary skills (El Halwany and Adams, 2025). In other cases, certain affective experiences are justified as integral to scientific inquiry without questioning how students might feel in the moment (Robertson et al., 2025; El Halwany and Adams, 2025). This paper takes an alternative approach, treating affective experiences as worth understanding in their own right. By prioritizing affect, we can tease out how our learning activity design, practical restrictions, assessment, and interactions shape students’ experiences of failure. Students’ experiences of failure were stigmatizing and transformed into beliefs about themselves and their belonging in science culture. In this way, the experiences could be seen as transformative, just not in the way educators intend. From the literature and our experience as chemistry educators, we know that failure is not all bad; sometimes, it leads to a transformative learning experience (Fink et al., 2020; Heller et al., 2020; Santos and Mooring, 2022). While much work has focused on changing students’ mindsets or providing more opportunities for formative feedback, these approaches still promote the idea that failure is an individual problem, not a systemic one (Kahveci and Orgill, 2015; Flaherty, 2020). Systems and structures within chemistry, higher education, and broader society work to define failure (Rudolph, 2019; Feigenbaum, 2021; Nunes et al., 2022). Despite the value of failure in learning, educators should question how we ask students to fail and which students are allowed to fail within the system. If we want students to value failure, we must create a learning environment where it is allowable and supported (Nunes et al., 2022). Teaching laboratories designed this way would make failure practical and communal, keeping everyone, including educators, accountable.
While failure is a part of research and laboratory learning activities, we need to recognize that not all failures are created equal.

Ethical considerations

This study was approved by our institutional General Research Ethics Board. Participant observations (GREB# GCHEM-01423) were conducted anonymously and involved detailed reflexive note-taking after laboratory sessions (i.e., no audio/video recording). Classrooms, including teaching laboratories, are not public spaces, and students have a reasonable expectation of privacy; the students were given notice of the anonymous observations being conducted through their learning management platform. Participants were recruited for interviews (GREB# GCHEM-00320) after the course was complete and grades finalized to mitigate the risk of power imbalances between the researchers and student participants. All interview participants gave informed consent and had the option to decline or withdraw from participation in the study at any time. To maintain the interview participants’ privacy, minimal identifying information was collected (e.g., name and email address but no demographic information), all data (e.g., audio files, transcripts) were de-identified, and participants had the option to choose their pseudonyms.

Author contributions

S. S.: conceptualization, methodology, investigation, writing – original draft. A. B.: conceptualization, supervision, resources, validation, writing – review and editing (CRediT Contributor Roles Taxonomy).

Conflicts of interest

There are no conflicts to declare.

Data availability

Data collected from participants, including participant observations and interview transcripts, are not available for confidentiality reasons.

Supplementary information (SI) is available. See DOI: https://doi.org/10.1039/d5rp00297d.

Acknowledgements

We would like to thank Brian Gilbert for his many conversations around qualitative research design and failure, which informed the first author's development throughout the entire study. Additionally, we would like to thank our participants for their time and stories, as this research would not be possible without them. S. S. was supported by funding from an Ontario Graduate Scholarship.

References

  1. Ahmed S., (2004), The Cultural Politics of Emotion, 2nd edn, Routledge.
  2. Blaisdell B., (2004), Beyond binaries: the use of metaphor to rearticulate success in school reform, J. Thought, (4), 75–88.
  3. Bowen C. W., (1999), Development and score validation of a chemistry laboratory anxiety instrument (CLAI) for college chemistry students, Educ. Psychol. Meas., 59(1), 171–185.
  4. Brandt C. B. and Carlone H., (2012), Ethnographies of science education: situated practices of science learning for social/political transformation, Ethnography Educ., 7(2), 143–150 DOI:10.1080/17457823.2012.693690.
  5. Braun V. and Clarke V., (2019), Reflecting on reflexive thematic analysis, Qual. Res. Sport, Exercise Health, 11(4), 589–597 DOI:10.1080/2159676X.2019.1628806.
  6. Bruck L. B. and Towns M. H., (2009), Preparing students to benefit from inquiry-based activities in the chemistry laboratory: guidelines and suggestions, J. Chem. Educ., 86(7), 820 DOI:10.1021/ed086p820.
  7. Burrows N. L., Ouellet J., Joji J. and Man J., (2021), Alternative Assessment to Lab Reports: A Phenomenology Study of Undergraduate Biochemistry Students’ Perceptions of Interview Assessment, J. Chem. Educ., 98(5), 1518–1528 DOI:10.1021/acs.jchemed.1c00150.
  8. Buxton C. A., (2001), Modeling science teaching on science practice? Painting a more accurate picture through an ethnographic lab study, J. Res. Sci. Teach., 38(4), 387–407 DOI:10.1002/tea.1011.
  9. Chambers D. W., (1983), Stereotypic images of the scientist: the draw-a-scientist test, Sci. Educ., 67(2), 255–265 DOI:10.1002/sce.3730670213.
  10. Choi B., (2021), I’m Afraid of not succeeding in learning: introducing an instrument to measure higher education students’ fear of failure in learning, Stud. High. Educ., 46(11), 2107–2121 DOI:10.1080/03075079.2020.1712691.
  11. Creswell J. W. and Creswell J. W., (2013), Qualitative inquiry and research design: choosing among five approaches, 3rd edn, SAGE Publishing.
  12. Dearden R. F., (1967), Instruction and Learning by Discovery, The Concept of Education (International Library of the Philosophy of Education Volume 17), 1st edn, Routledge, pp. 93–107.
  13. DeKorver B. K. and Towns M. H., (2015), General chemistry students’ goals for chemistry laboratory coursework, J. Chem. Educ., 92(12), 2031–2037 DOI:10.1021/acs.jchemed.5b00463.
  14. DeKorver B. K. and Towns M. H., (2016), Upper-level undergraduate chemistry students’ goals for their laboratory coursework, J. Res. Sci. Teach., 53(8), 1198–1215 DOI:10.1002/tea.21326.
  15. Del Carlo D. I. and Bodner G. M., (2004), Students’ perceptions of academic dishonesty in the chemistry classroom laboratory, J. Res. Sci. Teach., 41(1), 47–64 DOI:10.1002/tea.10124.
  16. Denial C. J., (2024), A pedagogy of kindness, University of Oklahoma Press.
  17. Denzin N. K. and Lincoln Y. S., (2018), The SAGE Handbook of Qualitative Research, 5th edn, SAGE Publishing.
  18. Dewey J., (1910), Science as subject-matter and as method, Sci. Educ., 4(1995), 391–398.
  19. El Halwany S. and Adams J. D., (2025), Affective Politics of Belonging to STEM: Some Conceptual and Methodological Considerations, Sci. Educ., sce.21951 DOI:10.1002/sce.21951.
  20. Fairweather J. S., (2005), Beyond the Rhetoric: Trends in the Relative Value of Teaching and Research in Faculty Salaries, J. High. Educ., 76(4), 401–422.
  21. Feigenbaum P., (2021), Telling Students it's O.K. to Fail, but Showing Them it Isn’t: Dissonant Paradigms of Failure in Higher Education, Teach. Learn. Inquiry, 9(1), 13–26 DOI:10.20343/teachlearninqu.9.1.3.
  22. Fielding M., (2004), Transformative approaches to student voice: theoretical underpinnings, recalcitrant realities. Br. Educ. Res. J., 30(2), 295–311 DOI:10.1080/0141192042000195236.
  23. Fink A., Frey R. F. and Solomon E. D., (2020), Belonging in general chemistry predicts first-year undergraduates’ performance and attrition, Chem. Educ. Res. Pract., 21(4), 1042–1062 10.1039/D0RP00053A.
  24. Firestein S., (2016), Failure: Why is science so successful, Oxford University Press.
  25. Flaherty A. A., (2020), A review of affective chemistry education research and its implications for future research, Chem. Educ. Res. Pract., 21(3), 698–713 10.1039/C9RP00200F.
  26. Flaherty A. A., (2022), The Chemistry Teaching Laboratory: A Sensory Overload Vortex for Students and Instructors? J. Chem. Educ., 99(4), 1775–1777 DOI:10.1021/acs.jchemed.2c00032.
  27. Galloway K. R. and Bretz S. L., (2015a), Development of an Assessment Tool To Measure Students’ Meaningful Learning in the Undergraduate Chemistry Laboratory, J. Chem. Educ., 92(7), 1149–1158 DOI:10.1021/ed500881y.
  28. Galloway K. R. and Bretz S. L., (2015b), Measuring Meaningful Learning in the Undergraduate General Chemistry and Organic Chemistry Laboratories: A Longitudinal Study, J. Chem. Educ., 92(12), 2019–2030 DOI:10.1021/acs.jchemed.5b00754.
  29. Galloway K. R., Malakpa Z. and Bretz S. L., (2016), Investigating affective experiences in the undergraduate chemistry laboratory: Students’ perceptions of control and responsibility, J. Chem. Educ., 93(2), 227–238 DOI:10.1021/acs.jchemed.5b00737.
  30. Gravelle S. and Fisher M. A., (2012), Signature Pedagogies in Chemistry, in Chick N., Haynie A. and Gurung R. A. R. (ed.), Exploring More Signature Pedagogies: Approaches to Teaching Disciplinary Habits of Mind, Stylus Publishing, pp. 112–128.
  31. Green S. J., Grorud-Colvert K. and Mannix H., (2018), Uniting science and stories: perspectives on the value of storytelling for communicating science, FACETS, 3(1), 164–173 DOI:10.1139/facets-2016-0079.
  32. Hammond L. and Brandt C., (2004), Science and cultural process: defining an anthropological approach to Science Education, Stud. Sci. Educ., 40(1), 1–47 DOI:10.1080/03057260408560202.
  33. Harrowell E., Davies T. and Disney T., (2018), Making Space for Failure in Geographic Research, Prof. Geographer, 70(2), 230–238 DOI:10.1080/00330124.2017.1347799.
  34. Heller S. T., Duncan A. P., Moy C. L. and Kirk S. R., (2020), The Value of Failure: A Student-Driven Course-Based Research Experience in an Undergraduate Organic Chemistry Lab Inspired by an Unexpected Result, J. Chem. Educ., 97(10), 3609–3616 DOI:10.1021/acs.jchemed.0c00829.
  35. Hodson D., (1996), Laboratory work as scientific method: three decades of confusion and distortion, J. Curriculum Stud., 28(2), 115–135 DOI:10.1080/0022027980280201.
  36. Hodson D., (2014), Learning Science, Learning about Science, Doing Science: different goals demand different learning methods, Int. J. Sci. Educ., 36(15), 2534–2553 DOI:10.1080/09500693.2014.899722.
  37. Hunnicutt S. S., Grushow A. and Whitnell R., (2015), Guided-inquiry experiments for physical chemistry: the POGIL-PCL model, J. Chem. Educ., 92(2), 262–268 DOI:10.1021/ed5003916.
  38. Ingold T., (2017), Anthropology contra ethnography, HAU: J. Ethnographic Theory, 7(1), 21–26 DOI:10.14318/hau7.1.005.
  39. Jacobson D. and Mustafa N., (2019), Social Identity Map: A Reflexivity Tool for Practicing Explicit Positionality in Critical Qualitative Research, Int. J. Qual. Methods, 18, 1609406919870075 DOI:10.1177/1609406919870075.
  40. Jarvis P., (1987), Meaningful and Meaningless Experience: Towards an Analysis of Learning From Life. Adult Educ. Quart., 37(3), 164–172 DOI:10.1177/0001848187037003004.
  41. Jørgensen J. T., Gammelgaard B. and Christiansen F. V., (2023), Teacher Intentions vs Student Perception of Feedback on Laboratory Reports, J. Chem. Educ., 100(10), 3764–3773 DOI:10.1021/acs.jchemed.2c01148.
  42. Kahveci M. and Orgill M., (2015), Affective Dimensions in Chemistry Education, Springer eBooks DOI:10.1007/978-3-662-45085-7.
  43. Keesing R. M., (1974), Cultural anthropology: A contemporary perspective, New York: Holt, Rinehart and Winston.
  44. Kitchenham A., (2008), The Evolution of John Mezirow's Transformative Learning Theory, J. Transf. Educ., 6(2), 104–123 DOI:10.1177/1541344608322678.
  45. Kvale S. and Brinkmann S., (2009), Constructing an interview, InterViews: Learning the Craft of Qualitative Research Interviewing, SAGE Publishing, pp. 123–141.
  46. Lareau A., (2021), Listening to people: a practical guide to interviewing, participant observations, data analysis and writing it all up, University of Chicago Press.
  47. Latour B. and Woolgar S., (1986), Laboratory Life: The Social Construction of Scientific Facts, 2nd edn, Princeton University Press.
  48. Limeri L. B., Carter N. T., Choe J., Harper H. G., Martin H. R., Benton A. and Dolan E. L., (2020), Growing a growth mindset: characterizing how and why undergraduate students’ mindsets change, Int. J. STEM Educ., 7(1), 35 DOI:10.1186/s40594-020-00227-2.
  49. Lincoln Y. S., (1995), Emerging Criteria for Quality in Qualitative and Interpretive Research, Qualitative Inquiry, 1(3), 275–289 DOI:10.1177/107780049500100301.
  50. Lincoln Y. S., Lynham S. A. and Guba E. G., (2011), Paradigmatic controversies, contradictions and emerging confluences, revisited, The SAGE Handbook of Qualitative Research, 4th edn, SAGE Publishing, pp. 97–128.
  51. McLeod J., (2011), Student voice and the politics of listening in higher education, Crit. Stud. Educ., 52(2), 179–189 DOI:10.1080/17508487.2011.572830.
  52. Merriam S. B., (2009), What is qualitative research? Qualitative Research: a Guide to Design and Implementation, 2nd edn, John Wiley & Sons, pp. 3–19.
  53. Mezirow J., (1978), Education for perspective transformation: Women's re-entry programs in community colleges, New York: Center for Adult Education, Teachers College, Columbia University.
  54. Millar R., (2004), The role of practical work in the teaching and learning of science, Washington, DC: National Academy of Science.
  55. Morse J. M., (2015), Critical Analysis of Strategies for Determining Rigor in Qualitative Inquiry, Qual. Health Res., 25(9), 1212–1222 DOI:10.1177/1049732315588501.
  56. Nairn K., Munro J. and Smith A. B., (2005), A counter-narrative of a ‘failed’ interview, Qual. Res., 5(2), 221–244 DOI:10.1177/1468794105050836.
  57. Nicol D. J. and Macfarlane-Dick D., (2006), Formative assessment and self-regulated learning: a model and seven principles of good feedback practice, Stud. High. Educ., 31(2), 199–218 DOI:10.1080/03075070600572090.
  58. Nunes K., Du S., Philip R., Mourad M. M., Mansoor Z., Laliberté N. and Rawle F., (2022), Science students’ perspectives on how to decrease the stigma of failure, FEBS Open Bio, 12(1), 24–37 DOI:10.1002/2211-5463.13345.
  59. Obenland C. A., Kincaid K. and Hutchinson J. S., (2014), A General Chemistry Laboratory Course Designed for Student Discussion, J. Chem. Educ., 91(9), 1446–1450 DOI:10.1021/ed400773j.
  60. Parker A., Noronha E. and Bongers A., (2023), Beyond the Deficit Model: Organic Chemistry Educators’ Beliefs and Practices about Teaching Green and Sustainable Chemistry, J. Chem. Educ., 100(5), 1728–1738 DOI:10.1021/acs.jchemed.2c00780.
  61. Partanen L. J., (2023), A Guided Inquiry Learning Design for a Large-Scale Chemical Thermodynamics Laboratory Module, J. Chem. Educ., 100(1), 118–124 DOI:10.1021/acs.jchemed.2c00387.
  62. Robertson A., Vélez V. N., Huynh T. and Tali Hairston W., (2025), Exposing and Challenging “Grit” in Physics Education, Sci. Educ., sce.21961 DOI:10.1002/sce.21961.
  63. Rop C. J., (1999), Student perspectives on success in high school chemistry, J. Res. Sci. Teach., 36(2), 221–237 DOI:10.1002/(SICI)1098-2736(199902)36:2%3C221::AID-TEA7%3E3.0.CO;2-C.
  64. Rudolph J. L., (2014), Dewey's “Science as Method” a Century Later: Reviving Science Education for Civic Ends, Am. Educ. Res. J., 51(6), 1056–1083 DOI:10.3102/0002831214554277.
  65. Rudolph J. L., (2019), How we teach science: What's changed and why it matters, Harvard University Press.
  66. Rudolph J. L., (2023), Why We Teach Science: and Why We Should, Oxford University Press.
  67. Rudolph J. L., (2024), Scientific literacy: Its real origin story and functional role in American education, J. Res. Sci. Teach., 61(3), 519–532 DOI:10.1002/tea.21890.
  68. Santos D. L. and Mooring S. R., (2022), Characterizing Mindset-Related Challenges in Undergraduate Chemistry Courses, J. Chem. Educ., 99(8), 2853–2863 DOI:10.1021/acs.jchemed.2c00270.
  69. Satusky M. J., Wilkins H., Hutson B., Nasiri M., King D. E., Erie D. A. and Freeman Jr. T. C., (2022), CUREing Biochemistry Lab Monotony, J. Chem. Educ., 99(12), 3888–3898 DOI:10.1021/acs.jchemed.2c00357.
  70. Schechtel S. and Bongers A., (2024), Representing chemistry culture: ethnography's methodological potential in chemistry education research and practice, Chem. Educ. Res. Pract., 25(3), 584–593 10.1039/D3RP00272A.
  71. Secules S., McCall C., Mejia J. A., Beebe C., Masters A. S., Sánchez-Peña M. L. and Svyantek M., (2021), Positionality practices and dimensions of impact on equity research: a collaborative inquiry and call to the community, J. Eng. Educ., 110(1), 19–43 DOI:10.1002/jee.20377.
  72. Shepherd L., Gauld R., Cristancho S. M. and Chahine S., (2020), Journey into uncertainty: medical students’ experiences and perceptions of failure, Med. Educ., 54(9), 843–850 DOI:10.1111/medu.14133.
  73. Shortlidge E. E. and Eddy S. L., (2018), The trade-off between graduate student research and teaching: a myth? PLoS One, 13(6), e0199576 DOI:10.1371/journal.pone.0199576.
  74. Sismondo S., (2010), An introduction to science and technology studies, 2nd edn, Blackwell Publishing.
  75. Stefan M., (2010), A CV of Failures, Nature, 468, 467.
  76. Taussig M., (2011), I swear I saw this: Drawings in fieldwork notebooks, namely my own, The University of Chicago Press.
  77. van Manen M., (1997), Researching lived experience: Human science for an action sensitive pedagogy, 2nd edn, Routledge DOI:10.4324/9781315421056.
  78. Varpio L., Paradis E., Uijtdehaage S. and Young M., (2020), The Distinctions Between Theory, Theoretical Framework, and Conceptual Framework, Acad. Med., 95(7), 989 DOI:10.1097/ACM.0000000000003075.
  79. Vaughan E. B., Tummuru S. and Barbera J., (2025), Investigating students’ expectations and engagement in general and organic chemistry laboratory courses, Chem. Educ. Res. Pract., 26(1), 271–288 10.1039/D4RP00277F.
  80. Walker J. P. and Sampson V., (2013), Learning to Argue and Arguing to Learn: Argument-Driven Inquiry as a Way to Help Undergraduate Chemistry Students Learn How to Construct Arguments and Engage in Argumentation During a Laboratory Course, J. Res. Sci. Teach., 50(5), 561–596 DOI:10.1002/tea.21082.
  81. Walker J. P., Sampson V. and Zimmerman C. O., (2011), Argument-driven inquiry: an introduction to a new instructional model for use in undergraduate chemistry labs, J. Chem. Educ., 88(8), 1048–1056 DOI:10.1021/ed100622h.
  82. Whittle R., Brewster L., Medd W., Simmons H., Young R. and Graham E., (2020), The ‘present-tense’ experience of failure in the university: reflections from an action research project, Emotion, Space Soc., 37, 100719 DOI:10.1016/j.emospa.2020.100719.
  83. Zhang Y., (2022), The production of laboratory scientists: negotiating membership and (re)producing culture, Front. Educ., 7, 1–15 DOI:10.3389/feduc.2022.1000905.

This journal is © The Royal Society of Chemistry 2026