Reetta Kyynäräinen,*a Lars-Erik Malmberg,b Elisa Vilhunen,c Mikko-Jussi Laakso d and Veli-Matti Vesterinen a
a Department of Chemistry, University of Turku, Turku, Finland. E-mail: reetta.kyynarainen@utu.fi
b Department of Education, University of Oxford, Oxford, UK
c Department of Education, University of Helsinki, Helsinki, Finland
d Turku Research Institute for Learning Analytics, University of Turku, Turku, Finland
First published on 10th September 2025
This study investigates the role of mistakes and affective experiences during online pre-lab activities in predicting students’ situational engagement (conceptualized here as a simultaneous experience of interest, skill, and challenge, i.e. optimal learning moments) in subsequent laboratory sessions in an undergraduate chemistry laboratory course (n = 256). The data collection followed an ecological momentary assessment design. We specified multilevel structural equation models (MSEMs), including two- and three-level structural equation models, to examine how mistakes impacted students’ situational engagement during pre-lab activities and subsequent laboratory sessions. The findings indicate that mistakes in pre-lab tasks were associated with lower perceived skill and higher experience of challenge during that task, but did not predict students’ interest, skill, challenge, or situational engagement in the subsequent laboratory session. Autoregressive effects from pre-lab activities on students’ situational engagement during lab sessions were observed across all elements of engagement, while skill and challenge during pre-lab activities also predicted higher interest in the subsequent laboratory session. Based on our findings, we propose that while mistakes in the pre-lab activities do not play a significant role in predicting students’ engagement upon entering the laboratory, affective experiences during the pre-lab activities do, and that laboratory engagement could be enhanced by providing students with sufficiently challenging pre-lab activities.
In particular, disciplines characterized by trial and error, such as chemistry, foster environments where mistakes inevitably occur (Schmid et al., 2022; Agustian et al., 2024; Kyynäräinen et al., 2024). Nonetheless, research on students’ perceptions of and affective responses to making mistakes remains scarce, particularly across varying contexts and learning environments, such as chemistry laboratory courses (cf. Steuer et al., 2025). Understanding students’ reactions to their errors is crucial for supporting the learning that stems from these integral events. Beyond this, understanding the previously overlooked lagged effects of making mistakes can help educators intervene early and provide additional support to students in need (cf. Kyynäräinen et al., 2024).
In addition, the affective domain in general has often been overlooked in chemistry laboratory-based studies (see e.g. Galloway et al., 2016; Agustian and Seery, 2017; Agustian, 2022). Previous research suggests that it is the primary driver of undergraduate students’ learning goals in chemistry laboratory courses (DeKorver and Towns, 2015) and that students’ expectations determine their perceived and enacted experiences (Kirschner et al., 1993; Galloway and Bretz, 2015). Based on these findings, students’ prior affective experiences could significantly predict their goal-setting and learning processes in the chemistry laboratory. This might arise from students’ attempts at affective forecasting, that is, trying to foresee future emotional experiences based on one's expectations of success or failure (Wilson and Gilbert, 2003; Pilin, 2021). Affective forecasting might plausibly lead to assimilation, where students shift their actual emotional experiences to match their forecast, increasing the significance of anticipated future success or failure (Wilson and Gilbert, 2003).
To expand knowledge of how students’ prior experiences influence their engagement during laboratory instruction, we focus on the role of pre-lab activities and their affective outcomes and investigate how students’ engagement and mistakes in pre-lab activities predict their engagement in subsequent laboratory sessions. We go beyond previous studies by investigating the lagged effects of students’ experiences during pre-lab exercises on their affective experiences in the laboratory. This study deepens the understanding of students’ affective responses to making mistakes in a new context and provides insight into the role of pre-lab activities in promoting high situational engagement during subsequent laboratory instruction.
The research data were collected using an ecological momentary assessment (EMA) design, in which brief questionnaires measure students’ in situ experiences repeatedly during or immediately after learning tasks in authentic learning settings (see Sinatra et al., 2015). In this study, students responded to a total of 20 EMA questionnaires while completing the digital pre-lab activities and during the laboratory sessions. We view mistakes as situation-specific instances that influence learning engagement (cf. Kyynäräinen et al., 2024). Considering the dynamic nature of engagement (e.g. Greene, 2015; Schneider et al., 2016; Schmidt et al., 2018; Upadyaya et al., 2021; Heikkinen et al., 2025; Tang et al., 2025), which the EMA design is well suited to capture, we use situational engagement, also referred to as optimal learning moments (OLMs; Schneider et al., 2016), as a theoretical lens to study undergraduate students’ momentary experiences over a chemistry laboratory course. According to this conceptualization, students experience high situational engagement when high situational interest, high perceived skill, and sufficiently high task difficulty coincide; that is, when they are challenged just the right amount. The EMA questionnaires contained items on these optimal learning elements: interest, skill, and challenge. The design enables us to capture how situational engagement and its elements change over time and which factors, such as making mistakes or prior engagement, might contribute to these changes.
Chemistry laboratory courses typically consist of preparative pre-lab activities followed by an instructional laboratory experiment. Most traditionally, pre-lab activities are intended to introduce chemical concepts through, for instance, a pre-laboratory lecture, quizzes, or discussion (Agustian and Seery, 2017). Other possible objectives of pre-lab activities are introducing laboratory techniques with, for example, simulations, technique videos, or safety information, as well as preparing students for the affective aspects of laboratory work by, e.g., increasing their confidence or motivation or reducing anxiety (Agustian and Seery, 2017; Chu, 2017; Spagnoli et al., 2017; Rayment et al., 2023). Pre-lab activities are beneficial for learning as they provide opportunities to stimulate and increase students’ knowledge of the key chemical concepts prior to the laboratory experiment itself, facilitating deeper engagement with the content (Chu, 2017; Seery et al., 2024). According to Seery and colleagues (2024), these activities can also help students identify the intended goals of the laboratory experiment, promoting constructive alignment and allowing them to engage in a meaningful way.
Based on previous research, students’ situational engagement during laboratory instruction is highly responsive to external or structural triggers, particularly the provision of choice, for example in framing one's own work (Schmidt et al., 2018). In addition, Galloway and Bretz (2015) found that for most students, the learning experience in the laboratory is intensely framed by their expectations: what they anticipate thinking and feeling becomes an enacted reality. Previous research also suggests that prior engagement can significantly predict contemporary engagement (Wylie and Hodgen, 2012). Considering these notions, students’ antecedent experiences during the pre-lab activities might impact their level of engagement in the laboratory (cf. Agustian and Seery, 2017).
The optimal learning theory is based on the concept of flow (Csikszentmihalyi, 2000; Schmidt, 2010) and situational interest (Hidi and Renninger, 2006; Krapp and Prenzel, 2011). Csikszentmihalyi (2000) defines flow as a situation-specific instance so deeply engaging that human needs are momentarily suspended and time loses its temporal boundaries, characterized by above-average levels of challenge and skill. Situational interest, on the other hand, is described as a spontaneous motivating psychological state of a short duration, triggered typically by external features and sustained through meaningfulness of tasks and/or personal involvement (Hidi and Renninger, 2006). Situational interest is perceived as an important precondition for learning, as it enables the student to focus on the tasks at hand and motivates them to put in effort (Schraw and Lehman, 2001; Hidi and Renninger, 2006). Thus, optimal learning theory links situational interest to the higher-than-average levels of skill and challenge during flow (Schneider et al., 2016; Tang et al., 2025).
OLMs are assumed to promote learning (Schneider et al., 2016; Tang et al., 2025), although the link between experiencing OLMs, i.e. high situational engagement, and heightened learning outcomes has not been empirically confirmed. However, previous research indicates that the relationship between situational engagement and learning could be mediated by students’ self-regulatory processes (cf. Heikkinen et al., 2025). Namely, experiencing high situational engagement can promote students’ self-regulatory processes and the use of more efficient learning strategies (e.g. Lee et al., 2014; Schneider et al., 2016), which, in turn, are associated with heightened learning outcomes. Beyond influencing students’ immediate learning outcomes, situational engagement can also shape broader affective factors, such as experiences of concentration, enjoyment, efficacy, and success (Schneider et al., 2016; Tang et al., 2025). Hidi and Renninger (2006) describe how individual interest can be developed and deepened through situational interest. Similarly, situational engagement could develop into sustained engagement in students (cf. Hidi and Renninger, 2006).
According to previous findings, students’ momentary, task-specific perceptions of utility and attainment values are reflected in their situational engagement, i.e. OLMs (Schmidt et al., 2018; Eccles and Wigfield, 2020; Salmela-Aro et al., 2021; Tang et al., 2025). Previous research also indicates that the type of instructional activity affects students’ situational engagement (Schmidt et al., 2018; Inkinen et al., 2019; Inkinen et al., 2020; Vilhunen et al., 2025). Inkinen and colleagues (2019) conducted a study in Finland and the USA, which revealed differences in the average proportion of time secondary school science students experienced OLMs during lab work (21.7% in Finland and 15.1% in the USA), calculating (38.1% in Finland and 18.1% in the USA), and computer work (25.2% in Finland and 14.9% in the USA). Compared to other types of activities, Finnish students were twice as likely to experience high situational engagement when they were calculating. Pre-lab activities often contain calculations, suggesting they can be an engaging experience for students (Inkinen et al., 2019). Despite the growing body of research on students’ affect in the laboratory, little is known about the lagged effects of students’ engagement in pre-lab activities on the subsequent laboratory sessions (Agustian and Seery, 2017).
Error-centred learning interventions (Chu and Leighton, 2019; Lee, 2020), such as productive failure activities, in which students’ misconceptions are used as prompts for conceptual change, can enhance cognitive learning (Loibl and Leuders, 2019; Vilhunen et al., 2023). These interventions promote adaptive, action-related responses to mistakes, which have been associated with further acquisition of knowledge, enhancing students’ learning from mistakes (Spear et al., 2024). Although detecting and discussing individuals’ mistakes in depth can be effective in promoting students’ learning, it can be challenging to implement in real classroom or laboratory settings alongside other instructional activities (Narciss and Alemdag, 2025). An alternative approach is erroneous examples or solutions presented by a teacher, allowing students to detect the mistake, followed by a classroom discussion of the mistake at hand (Safadi and Yerushalmi, 2014). This approach was also found to elicit more of students’ naïve ideas and to support learning transfer, especially for students with lower prior knowledge.
Nevertheless, previous research (Schmid et al., 2025; Tulis and Dresel, 2025) suggests that brief interventions aimed at improving students’ error beliefs and learning seem to be of limited effectiveness, as internalizing the (meta-)cognitive and (meta-)affective strategies for dealing with mistakes can take time (Narciss and Alemdag, 2025). Instead, for example, encouraging dialogue about overcoming mistakes, accompanied by emotion recognition and concrete action plans, has been found to reduce students’ fear of failure (Peterson et al., 2025). Although error beliefs seem to be rather stable, in situ responses to mistakes appear more malleable (Soncini et al., 2025).
Accordingly, students’ affective responses to mistakes have been found to differ between, for example, situations where the mistake is resolved with an instructor and those where it is resolved with one's peers (Kyynäräinen et al., 2024). In general, integrative emotion regulation (Sharabi and Roth, 2025), as well as positive framing of errors and the absence of framing mistakes negatively or as deleterious (Leighton et al., 2018; Käfer et al., 2019; Tulis and Dresel, 2025), might also contribute to students’ adaptive coping with mistakes, plausibly promoting learning. This could mean encouraging students not to fear challenge or frustration, but to be attentive to them and to recognize them not as ego-threatening but as excellent indicators of learning possibilities, i.e. meta-affective learning (DeKorver and Towns, 2015; Radoff et al., 2019; Thomas et al., 2022; Sharabi and Roth, 2025). Meta-affective learning could promote behavioural shifts, that is, actions towards learning from challenges or mistakes. For example, a student might fail a pre-lab activity and experience disappointment or frustration, but this could prompt them to recognize the need to revise before entering the laboratory and to view the laboratory experiment as a good opportunity to fill in their learning gaps (cf. Chu, 2017).
The research questions of this study are:
1. To what extent is making mistakes associated with students’ situational engagement in the pre-lab activities?
2. To what extent do students’ situational engagement and making mistakes in the pre-lab activities predict their situational engagement during the subsequent laboratory session?
3. How does students’ situational engagement change within a laboratory session and across the laboratory course?
As a preparatory task before each lab session, students were asked to complete mandatory learning exercises on an online learning platform. The pre-lab tasks were to be completed before entering the laboratory, but no earlier than one week in advance. Completing the tasks took a little under an hour. After completing the exercise and a brief EMA questionnaire, students automatically received feedback indicating which tasks they had answered correctly or partially correctly, that is, whether they had made any mistakes. These pre-lab tasks were embedded into the overall laboratory learning process (cf. Agustian and Seery, 2017), as they were related to the topic of each subsequent laboratory session. The aim was to familiarize students with the concepts and the laboratory instructions of the upcoming experiment. The preparative exercises included, for instance, calculations, questions about the key analysis methods or procedures, and simulations related to the topic of the laboratory work.
In the laboratory, students performed the weekly 3- to 4-hour experiments in pairs, led by the same teacher each week. Students were required to keep a laboratory diary containing all the calculations needed to complete the experiment. After completing the experiment, students presented their lab diary to the teacher and discussed the work. Each student began the course by carrying out experiment 1; the rest of the experiments were performed in one of three alternative orders (1, 2, 3, 5, 7, 4, 6; 1, 3, 2, 4, 6, 5, 7; or 1, 2, 5, 3, 4, 6, 7).
In total, 256 students took part in the study (n = 155 in 2023 and n = 101 in 2024). The participants were mostly first-year students (n = 188, 73.4%), with some second-year (n = 33, 12.9%) and third-year-or-above students (n = 35, 13.7%). Females were overrepresented in this study: 185 (72.3%) of the participants identified as female and 65 (25.4%) as male. The participants majored in biochemistry (n = 78, 30.9%), biotechnology (n = 62, 24.2%), chemistry (n = 47, 18.4%), physics (n = 20, 7.8%), biology (n = 11, 4.3%), mathematics (n = 8, 3.1%), geology (n = 8, 3.1%), materials technology (n = 6, 2.3%), and other subjects (n = 16, 6.25%).
We collected the EMA data from laboratory experiments 4 (liquid–liquid extraction), 5 (pH-titration), 6 (complexometric titration), and 7 (buffer solutions). During these four laboratory sessions, students responded to a total of 13 EMA questionnaires, with 3 or 4 questionnaires administered within each session. The links to the EMA questionnaires were embedded in the work instructions, and students responded at predetermined points of the laboratory experiment using their phones. Thus, the data collection followed students’ progress in the experiments, allowing us to study different phases of the work with the respective EMA questionnaires. This also ensured that responding to the EMA questionnaires interfered with students’ workflow and engagement as little as possible. EMA questionnaires were administered approximately an hour apart, and the last EMA questionnaire of each experiment was completed after finalizing the experiment, but before discussing it with the teacher.
Altogether, we received 1586 situational responses from the laboratory sessions, with a compliance rate of 47.7%; the mean number of EMA questionnaire responses per person was 7.36 (SD = 3.56, range = [1, 13]). In addition, we received 1006 situational responses from the four preparative learning exercises preceding these four laboratory sessions. The EMA questionnaire items appeared automatically after the students had completed the pre-lab learning exercises. Therefore, the compliance rate for these EMA questionnaires was considerably higher than in the lab, namely 98.2%. The EMA questionnaires were quick to respond to, taking only approximately 1 to 3 minutes.
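These compliance rates are consistent with dividing the number of received responses by the maximum possible number of responses, assuming all 256 students could have answered every prompt:

$$\frac{1586}{256 \times 13} \approx 47.7\% \ \text{(laboratory)}, \qquad \frac{1006}{256 \times 4} \approx 98.2\% \ \text{(pre-lab)}.$$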
All EMA questionnaires contained the same single-item indicators of situational interest, perceived skill, and task difficulty (Schneider et al., 2016), originally adapted from Csikszentmihalyi and Schneider's (2000) and Shernoff and colleagues’ (2003) studies. In general, previous research suggests that single-item indicators offer a psychometrically sound alternative to longer scales for measuring motivational-affective constructs, particularly in contexts with repeated measures, such as EMA designs (see Hoeppner et al., 2011; Gogol et al., 2014; Fisher et al., 2016). Although the internal consistency of the measures cannot be assessed with only one item, there is evidence for the within-student stability of self-competence items (cf. skill) across time (Moneta et al., 2014), as well as significant cross-study consistency for the OLM items used in this study across various contexts and age groups, including undergraduate students in chemistry laboratory courses (see e.g. Schneider et al., 2016; Upadyaya et al., 2021; Atabek-Yigit and Senoz, 2023). The questionnaires were implemented in Finnish, and the phrasing aligned with previous research (e.g. Schneider et al., 2016; Inkinen et al., 2019; Inkinen et al., 2020; Tang et al., 2025), including the following items: ‘Were you interested in what you were doing?’, ‘Did you feel skilled at what you were doing?’, and ‘Did you feel challenged by what you were doing?’. Participants reported their level of agreement on a 5-point Likert scale with response categories ranging from ‘1 = Not at all’ to ‘5 = Very much’.
Across all measurement points in the data, the means and standard deviations of the optimal learning elements were as follows: interest (M = 3.43, SD = 0.95), skill (M = 3.20, SD = 1.09), and challenge (M = 2.84, SD = 1.12). Accordingly, optimal learning moments (Schneider et al., 2016), representing high situational engagement, were coded as 1 when interest, skill, and challenge were simultaneously above the sample average (4 or 5 for interest and skill, and 3, 4, or 5 for challenge), and 0 in all other cases. In addition, we coded progress within the session to range from 0 (pre-lab activities) to 1 (final EMA questionnaire during the session). Similarly, progress across the entire course ranged from 0 (before the first experiment) to 1 (the final experiment).
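As a minimal illustration of these coding rules (a sketch only, not the authors' analysis script; the data frame and column names are hypothetical):

```python
import pandas as pd

# Toy long-format EMA data: one row per situational response.
ema = pd.DataFrame({
    "interest":  [5, 3, 4, 2],
    "skill":     [4, 4, 5, 3],
    "challenge": [3, 5, 2, 4],
    "prompt":    [0, 1, 2, 3],   # order of the EMA prompt within one session
})

# OLM = 1 when interest and skill are above their means (i.e. 4 or 5) and
# challenge is above its mean (i.e. 3, 4, or 5); 0 in all other cases.
ema["olm"] = (
    (ema["interest"] >= 4) & (ema["skill"] >= 4) & (ema["challenge"] >= 3)
).astype(int)

# Progress within the session, scaled from 0 (first prompt) to 1 (last prompt).
ema["progress_session"] = ema["prompt"] / ema["prompt"].max()

print(ema)
```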
The statistical analysis methods were selected considering the hierarchical structure of the data: situational responses are nested within lessons, which are in turn nested within individuals. That is, each student (level 3, individual level) not only participated in multiple lessons (level 2, lesson level) but also responded to multiple EMA questionnaires within each lesson (level 1, situational level). Therefore, a three-level hierarchy is present in the data, and our modelling approach included three-level MSEMs. In these models, individuals and lessons were treated as clusters.
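Under this hierarchy, the intraclass correlations (ICCs) reported below express the share of the total variance of a given element located at the lesson and individual levels:

$$\mathrm{ICC}_{\text{lesson}} = \frac{\sigma^{2}_{\text{lesson}}}{\sigma^{2}_{\text{situation}} + \sigma^{2}_{\text{lesson}} + \sigma^{2}_{\text{individual}}}, \qquad \mathrm{ICC}_{\text{individual}} = \frac{\sigma^{2}_{\text{individual}}}{\sigma^{2}_{\text{situation}} + \sigma^{2}_{\text{lesson}} + \sigma^{2}_{\text{individual}}}.$$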
The ICCs of interest, skill, challenge, and OLMs indicated that a three-level approach was reasonable. Generally, a multilevel approach is suggested when ICCs are above 0.10 (Irimata and Wilson, 2018). Here, the lesson-level ICCs were 0.21 for interest, 0.11 for skill, 0.07 for challenge, and 0.10 for OLMs, and the individual-level ICCs were 0.31 for interest, 0.19 for skill, 0.12 for challenge, and 0.14 for OLMs. This indicates that there is significant variation in the variables explained by both the lesson and the individual, but most of the variance was found at the situational level.
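As an illustrative sketch only (the reported models were estimated in Mplus, as described below), a comparable three-level variance decomposition for a single element could be obtained in Python; the file and column names here are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Long-format EMA data: one row per situational response, with identifiers
# for the student (level 3) and the lesson (level 2).
ema = pd.read_csv("ema_long.csv")

# Random intercept for students (groups) plus a variance component for
# lessons nested within students; the residual is the situational level.
model = smf.mixedlm(
    "interest ~ 1",
    data=ema,
    groups="student_id",
    re_formula="1",
    vc_formula={"lesson": "0 + C(lesson_id)"},
)
fit = model.fit()

var_individual = float(fit.cov_re.iloc[0, 0])   # individual-level variance
var_lesson = float(fit.vcomp[0])                # lesson-level variance
var_situation = float(fit.scale)                # situational (residual) variance
total = var_individual + var_lesson + var_situation

print(f"ICC (lesson):     {var_lesson / total:.2f}")
print(f"ICC (individual): {var_individual / total:.2f}")
```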
Situational-level predictors included progress within a session and progress across the entire course. Lesson-level predictors included students’ experiences during pre-lab activities, their mistakes in these activities, and the topic of the experiment at hand. At the individual level, the optimal learning elements were allowed to correlate with one another; however, no individual-level predictors were included in the models. Results concerning individual-level effects on situational engagement can be found in our previous article (Kyynäräinen et al., 2024).
Four multilevel models were created to answer the research questions in turn. Model 1 (see Fig. 1) and Model 2 (see Fig. 2) aimed to answer RQ1. They were simple two-level regression models, including only the individual and lesson levels (ICCs for OLMs = 0.14, interest = 0.32, skill = 0.14, and challenge = 0.14). This was due to the lack of variance at the situational level, as these models focused on students’ experiences during pre-lab activities only, with a single EMA questionnaire per pre-lab activity. Model 1 was a two-level, logistic regression-based structural equation model evaluating the relationship between making mistakes and OLMs at the lesson level, including the associations between the experiment at hand (i.e. laboratory session covariates) and the probability of making mistakes. The individual-level means and variances of experiencing OLMs were also included in the model. Model 2 extended Model 1 by investigating the elements of optimal learning (interest, skill, and challenge) separately.
Model 3 (see Fig. 3) and Model 4 (see Fig. 4) were designed to address RQ2 and RQ3. They were regression-based three-level MSEMs, resembling cross-lagged panel models, first investigating OLMs as a whole (Model 3) and then their elements separately (Model 4). In these models, we investigated the lagged effects of situational engagement and mistakes in pre-lab activities on students’ situational engagement during the entire subsequent laboratory session, rather than only at the first measurement point, as is typically done in cross-lagged panel models.
In more detail, Model 3 (see Fig. 3) examined the relationships between progress within the session and OLMs at the situational level, and, at the lesson level, between progress across the course, the experiment (i.e. laboratory session covariates), OLMs and mistakes during pre-lab activities, and OLMs during laboratory sessions. The means and variances of OLMs were also included at the individual level. Model 4 (see Fig. 4) was similar to Model 3, but instead of OLMs, it included interest, skill, and challenge as the dependent variables, and the correlations between these were considered at the individual level.
The analyses were conducted using Mplus version 8.11 (Muthén and Muthén, 1998–2017). Maximum likelihood estimation with robust standard errors (MLR) was applied, and the missing data, which were missing completely at random (Little's MCAR test, χ2 = 5.31, p = 0.505), were handled with the full information maximum likelihood (FIML) method. As none of the models included latent variables and the residuals were allowed to covary freely, the models were saturated, with zero degrees of freedom. This resulted in perfect, non-informative model fit indices. As such, model fit statistics (including RMSEA, CFI, TLI, and SRMR) are not reported, as they do not provide meaningful information about model adequacy in this context.
| | n | Compliance rate (%) | M | S.D. |
|---|---|---|---|---|
| Interest in pre-lab activity | 990 | 96.7 | 3.11 | 0.96 |
| Skill in pre-lab activity | 991 | 96.8 | 2.71 | 1.02 |
| Challenge in pre-lab activity | 989 | 96.6 | 3.43 | 0.95 |
| OLM in pre-lab activity | 984 | 96.1 | 0.06 | 0.24 |
| Mistake in pre-lab activity | 1006 | 98.2 | 0.639 | 0.48 |
| Interest in lab | 1611 | 48.5 | 3.62 | 0.88 |
| Skill in lab | 1613 | 48.5 | 3.51 | 1.02 |
| Challenge in lab | 1613 | 48.3 | 2.48 | 1.06 |
| OLM in lab | 1609 | 48.4 | 0.12 | 0.32 |
| | Situational level | | | Lesson level | | | Individual level | | |
|---|---|---|---|---|---|---|---|---|---|
| | Interest | Skill | Challenge | Interest | Skill | Challenge | Interest | Skill | Challenge |
| | r | r | r | r | r | r | r | r | r |
| Note: * = p < 0.05, ** = p < 0.01, *** = p < 0.001. | | | | | | | | | |
| Interest | — | — | — | — | — | — | — | — | — |
| Skill | 0.14*** | — | — | 0.06** | — | — | 0.14*** | — | — |
| Challenge | −0.12*** | −0.45*** | — | −0.01 | −0.01 | — | −0.02 | −0.06* | — |
| | Interest in pre-lab activities | | | Skill in pre-lab activities | | | Challenge in pre-lab activities | | | Mistakes in pre-lab activities | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | β | S.E. | p | β | S.E. | p | β | S.E. | p | β | S.E. | p |
| Note: presented are estimate values (β), standard errors (S.E.), and p-values (p). Statistically significant results (p < 0.05) are highlighted in bold. | | | | | | | | | | | | |
| Mistakes in pre-lab activities | −0.06 | 0.06 | 0.282 | −0.19 | 0.06 | 0.003 | 0.12 | 0.06 | 0.042 | — | — | — |
| Reference: pH-titration | ||||||||||||
| Liquid–liquid extraction | −0.51 | 0.08 | <0.001 | 0.33 | 0.09 | <0.001 | 0.08 | 0.08 | 0.302 | 0.27 | 0.04 | <0.001 |
| Buffer solutions | 0.27 | 0.06 | <0.001 | −0.48 | 0.07 | <0.001 | 0.48 | 0.07 | <0.001 | 0.05 | 0.04 | 0.147 |
| Complexometric titration | 0.10 | 0.07 | 0.137 | 0.25 | 0.07 | 0.001 | −0.47 | 0.07 | <0.001 | −0.01 | 0.04 | 0.732 |
| Reference: liquid–liquid extraction | ||||||||||||
| Buffer solutions | 0.24 | 0.07 | 0.001 | −0.81 | 0.09 | <0.001 | 0.40 | 0.07 | <0.001 | −0.21 | 0.04 | <0.001 |
| Complexometric titration | 0.61 | 0.08 | <0.001 | −0.08 | 0.09 | 0.417 | −0.55 | 0.08 | <0.001 | −0.28 | 0.04 | <0.001 |
| Reference: buffer solutions | ||||||||||||
| Complexometric titration | 0.36 | 0.06 | <0.001 | 0.73 | 0.07 | <0.001 | −0.95 | 0.07 | <0.001 | −0.07 | 0.04 | 0.117 |
| | OLM in the subsequent laboratory session | | |
|---|---|---|---|
| | β | S.E. | p |
| Note: presented are estimate values (β), standard errors (S.E.), and p-values (p). Statistically significant results (p < 0.05) are highlighted in bold. | | | |
| OLM in pre-lab activities | 0.12 | 0.05 | 0.028 |
| Mistakes in pre-lab activities | 0.01 | 0.02 | 0.540 |
| Progress across the course | −0.10 | 0.07 | 0.160 |
| Reference: pH-titration | |||
| Liquid–liquid extraction | 0.01 | 0.03 | 0.984 |
| Buffer solutions | −0.01 | 0.03 | 0.839 |
| Complexometric titration | −0.00 | 0.03 | 0.968 |
| Reference: liquid–liquid extraction | |||
| Buffer solutions | −0.01 | 0.04 | 0.893 |
| Complexometric titration | 0.00 | 0.03 | 0.988 |
| Reference: buffer solutions | |||
| Complexometric titration | 0.00 | 0.02 | 0.839 |
For a more comprehensive analysis, the elements of engagement were studied separately (see Fig. 4). The results (see Table 5) suggest that students’ interest, skill, and challenge during pre-lab activities play a significant role in predicting their situational engagement, encompassing interest, skill, and challenge, in the subsequent laboratory session. Interest in pre-lab activities predicted higher interest in the lab, while feeling skilled in the pre-lab activities predicted not only higher skill but also higher interest in the lab. Similarly, perceived challenge in the pre-lab activities predicted higher challenge in the lab, as well as higher interest.
| | Interest in the subsequent laboratory session | | | Skill in the subsequent laboratory session | | | Challenge in the subsequent laboratory session | | |
|---|---|---|---|---|---|---|---|---|---|
| | β | S.E. | p | β | S.E. | p | β | S.E. | p |
| Note: presented are estimate values (β), standard errors (S.E.), and p-values (p). Statistically significant results (p < 0.05) are highlighted in bold. | | | | | | | | | |
| Interest in pre-lab activities | 0.20 | 0.04 | <0.001 | 0.01 | 0.03 | 0.885 | 0.00 | 0.03 | 1.000 |
| Skill in pre-lab activities | 0.07 | 0.04 | 0.040 | 0.20 | 0.04 | <0.001 | −0.01 | 0.03 | 0.738 |
| Challenge in pre-lab activities | 0.08 | 0.04 | 0.041 | 0.03 | 0.04 | 0.373 | 0.10 | 0.04 | 0.010 |
| Mistakes in pre-lab activities | 0.03 | 0.05 | 0.642 | 0.04 | 0.06 | 0.500 | −0.01 | 0.05 | 0.842 |
| Progress across the course | −0.00 | 0.14 | 0.980 | 0.37 | 0.15 | 0.015 | −0.47 | 0.18 | 0.010 |
| Reference: pH-titration | |||||||||
| Liquid–liquid extraction | 0.16 | 0.06 | 0.010 | 0.10 | 0.07 | 0.158 | −0.37 | 0.08 | <0.001 |
| Buffer solutions | −0.23 | 0.07 | 0.001 | −0.17 | 0.08 | 0.033 | 0.06 | 0.07 | 0.366 |
| Complexometric titration | 0.09 | 0.07 | 0.169 | 0.13 | 0.07 | 0.050 | −0.33 | 0.07 | <0.001 |
| Reference: liquid–liquid extraction |
| Buffer solutions | −0.38 | 0.07 | <0.001 | −0.26 | 0.08 | 0.001 | 0.42 | 0.09 | <0.001 |
| Complexometric titration | −0.06 | 0.07 | 0.382 | 0.03 | 0.07 | 0.649 | 0.02 | 0.10 | 0.883 |
| Reference: buffer solutions | |||||||||
| Complexometric titration | 0.32 | 0.06 | <0.001 | 0.29 | 0.07 | <0.001 | −0.41 | 0.07 | <0.001 |
Based on the results, students’ mistakes in the pre-lab activities did not seem to carry over to the laboratory session, as they did not predict any of the elements of OLMs, i.e. interest, skill, or challenge, in the subsequent laboratory session.
The model also implies that the topic of the laboratory experiment played a crucial role in predicting the levels of students’ interest, skill, and challenge. Students seemed to be the most interested during the pH-titration experiment, the most skilled during the complexometric titration experiment, and the most challenged by the buffer solution experiment.
In addition, in this setting students did not know for certain whether they had made mistakes in the pre-lab tasks before responding to the EMA questionnaires, as they only saw their scores after completing the questionnaire. Thus, students’ perceptions of their interest, skill, and challenge in the pre-lab activities were mostly based on their own reflections on their success (cf. Chu, 2017). Having received failure feedback, compared with being uncertain of one's success, might also alter students’ emotional experiences (cf. Tulis and Ainley, 2011; Chu, 2017; Soncini et al., 2025; Tulis and Dresel, 2025), and therefore students’ perceptions might still have shifted after filling in the EMA questionnaire. Previous research suggests that students’ adaptive error reactions and affective-motivational responses could be supported with digital learning environments that give encouraging feedback after mistakes (Narciss and Alemdag, 2025; Soncini et al., 2025; Tulis and Dresel, 2025), mitigating the plausible maladaptive reactions to receiving failure feedback.
In addition, the model results indicate that the experiment at hand had significant effects on the levels of students’ interest, skill, and challenge, supporting the role of laboratory experiments in addressing the affective domain (cf. Agustian and Seery, 2017), as well as on their probability of making mistakes. Students made significantly more mistakes in the preparative learning exercises concerning the pH-titration experiment. These exercises incorporated, for instance, pH calculations, which are often perceived as difficult (see e.g. Sheppard, 2006; Tümay, 2016). However, based on the data, students perceived the learning exercises concerning buffer solutions as even more difficult, yet this did not result in a significantly higher probability of making mistakes in those exercises. Therefore, it seems that a higher challenge level alone does not determine the probability of erring; factors such as students’ misconceptions or the type of task could also have an impact (cf. Tulis, 2013; Käfer et al., 2019; Loibl and Leuders, 2019).
In addition, factors such as positive error beliefs, i.e. seeing errors as valuable for learning (Schmid et al., 2025; Tulis and Dresel, 2025), resilience, i.e. the ability to bounce back (Smith et al., 2008; McMillan and Moore, 2020), or mastery orientation, i.e. the drive to grow through challenge (Tulis and Ainley, 2011; Steuer et al., 2013; Tulis et al., 2018), could moderate these lagged effects of mistakes. Students’ reactions to mistakes could also be associated with their ability to regulate meta-affective learning (cf. Radoff et al., 2019; Sharabi and Roth, 2025). Students with high meta-affective skills (see Thomas et al., 2022) could be more likely to perceive mistakes in pre-lab activities as indications of excellent learning opportunities and to look forward to the laboratory experiments as a chance to fill in their learning gaps.
Previous research also indicates that people tend to overestimate the duration of their emotional reactions to setbacks and challenges (Wilson and Gilbert, 2003). Therefore, if students were asked to anticipate the impact of previous mistakes on forthcoming experiences, they might forecast more intense and longer-lasting effects than actually occur. Thus, it may be less surprising than it first appears that our results do not support long-term lagged effects of mistakes.
While mistakes did not demonstrate any lagged effects in the laboratory, students’ affective experiences did. This finding adds to the previous understanding that pre-lab activities play a significant role in shaping students’ affective experiences (Cann, 2016; Galloway et al., 2016; Agustian and Seery, 2017) and aligns with the results of DeKorver and Towns (2015), suggesting that the affective learning domain is a significant driver of students’ learning experiences in the laboratory. Our empirical results reveal that engagement predicted engagement, interest predicted interest, skill predicted skill, and challenge predicted challenge: expectations based on previous experiences predicted enacted prospective experiences (cf. Wylie and Hodgen, 2012; Galloway and Bretz, 2015).
In addition to these autoregressive effects, experiences of both skill and challenge during pre-lab activities predicted higher interest in the subsequent laboratory session. This indicates that situational interest in the laboratory could also be triggered by prior experiences of skill and challenge in the pre-lab activities (cf. Hidi and Renninger, 2006). It emphasises the importance of pre-lab tasks that are sufficiently challenging yet still preserve students’ sense of competence, so that the laboratory experience becomes meaningful; having to exert oneself at an adequate level can promote affective learning (Schneider et al., 2016).
Here, students were less interested, significantly less skilled, and more challenged at the end of the laboratory experiment, but also more likely to be situationally engaged. This suggests that, although students’ levels of situational interest and perceived skill decreased, they remained high enough for them to experience OLMs. The result aligns with the previous understanding that too low a challenge level is typically the barrier to experiencing OLMs in the laboratory (cf. Inkinen et al., 2019; Atabek-Yigit and Senoz, 2023; Kyynäräinen et al., 2024). Therefore, an increase in the challenge level at the end of the laboratory session seems to contribute to a higher level of engagement.
Finally, the results reveal that towards the end of the laboratory course, students perceived themselves as more skilled and the activities as less challenging. This suggests that students’ skills and knowledge had improved during the laboratory course, plausibly leading them to perceive themselves as more competent.
Another significant consideration is that the EMA questionnaire responses in the pre-lab activities do not perfectly align with the lab session EMA responses, as there was only one EMA questionnaire per pre-lab activity, in contrast to several questionnaires per lab session. Therefore, students may have considered the pre-lab more holistically when responding to the EMA questionnaire. However, as the laboratory EMAs were administered approximately one hour apart, students had completed several lab activities within that timeframe, presumably reflecting the antecedent phase rather holistically again.
Additionally, another factor that should be considered is that students’ experiences of interest, skill, and challenge in the laboratory might encompass very different things from those in pre-lab tasks. We suggest that several factors, such as the task type, online versus in-person learning environment, and working independently versus with one's peers, might have an impact. Here, for example, students completed the pre-lab exercises, containing, for example, simulations and calculations, in an online learning environment individually, whereas the laboratory experiments, containing, for instance, measuring chemicals, performing chemical analyses, and calculating the results, were carried out in person, working with a pair. This does not affect the interpretation of our results as we treat interest, skill, and challenge in the pre-lab activities and during the laboratory sessions as separate variables, but additional research on the impact of these factors could provide a deeper understanding.
We identified students’ mistakes in the pre-lab activities solely on the basis of their automatically assessed scores. Therefore, some mistakes may have remained unobserved if a student received a perfect score despite making a mistake, for example, by guessing. However, in these cases, students might not have realized their mistake before entering the lab, which would have shaped their expectations for the work accordingly. We acknowledge that the variable “mistakes in pre-lab activities” is not a perfect measure of students’ mistakes, and it is in many ways similar to students’ academic performance in the pre-lab tasks. Nevertheless, in this study, each pre-lab activity was scored with a maximum of 21 points, even though the number of subtasks in the pre-labs varied between 4 and 7. Accordingly, some subtasks carried more weight than others, and the maximum number of mistakes varied between pre-labs. Thus, in this context, we regard the “mistakes in pre-lab activities” variable as more standardized and interpretable than, for instance, the pre-lab task score reflecting performance in the pre-lab activities. Finally, students performed the experiments in one of three optional orders. On the one hand, this made it more difficult to follow students’ learning based on acquired skills or knowledge of chemical concepts. On the other hand, this randomization decreased noise in the over-time effects of progress across the course.
Finally, future research is needed to examine which factors contribute to how students overcome their mistakes. We suggest that person-level factors, such as resilience, meta-affective skills, error beliefs, and mastery orientation, as well as environmental factors, such as error climate, learning environment, and teacher support, may play key roles. Furthermore, we propose that students’ reactions to making mistakes can vary between digital environments and in-person contexts, and these differences in the dynamics of encountering mistakes across contexts should be investigated further (cf. Soncini et al., 2025).
Instead, our results (RQ2) suggest that when students feel challenged by the pre-lab activities, they express higher interest in the experiment. Perhaps, if the pre-lab activities feel too easy, students may feel they have little new learning to look forward to, leading to decreased interest and engagement. Therefore, educators should pay attention to how challenging the pre-lab activities are, ensuring they include sufficiently difficult tasks to promote interest and engagement. However, feeling skilled in the pre-lab activities also predicted higher interest in the lab. Thus, fostering a sense of competence and confidence through pre-lab activities can likewise trigger students’ interest in the laboratory.
Based on the results (RQ3), students’ skill level increased while the perceived challenge decreased over the laboratory course, indicating that students improved across the course. This highlights the need to provide students with activities of an adequate challenge level, so that they operate within their zone of proximal development (Vygotsky, 1978). Thus, we propose that a steady increase in the challenge level of learning tasks might be called for to promote students’ continued growth during laboratory courses in an optimal way.