Salawat Lateefa,ᵃ Emmanuel Echeverri-Jimenezᵇ and Morgan Balabanoff*ᵃ
ᵃ Department of Chemistry, University of Louisville, Louisville, KY, USA. E-mail: morgan.balabanoff@louisville.edu
ᵇ Department of Chemistry, Universidad de Puerto Rico en Cayey, Cayey, Puerto Rico
First published on 19th August 2025
Engaging in the practice of modeling is one of the core skills identified in the Next Generation Science Standards for K-12 Science Education. Drawing on fifty years of chemical education research on students’ comprehension of atomic theory and the nature of matter, we argue that students’ capacity to model abstract chemical phenomena is important for acquiring deeper conceptual understanding. In this project, we explored how undergraduate general chemistry students’ knowledge of what counts as a scientific model is structured across six modeling dimensions, and how that knowledge interplays with their interpretation of different atomic models. Analysis of semi-structured interviews shows that students possess relatively unsophisticated and unstable knowledge of the nature of scientific models. However, we observed improvement over the course of the interview once their ideas were situated in a context. Interestingly, students also invoked different ideas to justify the most accurate way of representing the atom, falling back on their perceptions of what constitutes a good scientific model. These results have implications for supporting student engagement in the practice of modeling in general chemistry, specifically regarding when external feedback would be useful for helping learners integrate their content knowledge with their modeling knowledge.
To dive deeper into students’ knowledge of scientific models and the challenges it presents, it is important to first define models as explanations about the empirical world that serve as communication, explanation, and prediction tools (Bailer-Jones, 2003). A variety of scientific models can represent the same natural phenomenon, each with its own applications, strengths, and limitations (Koedinger and Anderson, 1990; Grosslight et al., 1991; Kozma and Russell, 1997; Van Der Valk et al., 2007; Stieff and Raje, 2010). Although different aspects of a phenomenon might be better described by different models (Bailer-Jones, 2003), students often struggle to identify when multiple models convey different perspectives of the same phenomenon and to select the model best suited to their needs in a specific context (Lazenby et al., 2020).
The practice of modeling is ubiquitous in STEM to explain, predict, and make sense of empirical observations, as well as to propose possible solutions to design problems (National Research Council Committee, 2012). Modeling is useful for representing relations between variables, investigating complex systems, providing conceptual frameworks, generating accurate predictions, and constructing explanations (Odenbaugh, 2005), and can even serve inventive purposes (Halloun, 2007). The content of a model can be conveyed using multiple representations (e.g. text, diagrams, mathematical expressions) (Bailer-Jones, 2003). In this sense, representations can become models if their purpose goes beyond communication. However, both students (Grosslight et al., 1991) and teachers (Justi and Gilbert, 2003) often conflate models and representations and fail to consider modeling a scientific practice.
From a modeling perspective, the purpose of scientific research is to produce consistent and predictive relationships between observations and justifications (i.e. models) (Gilbert, 1991). Important skills such as argumentation and investigation emerge from and are supported by modeling competence (Manz et al., 2020). Therefore, supporting the practice of modeling in instruction allows students to understand that scientific knowledge is constructed rather than absolute and can change as new evidence emerges (Gilbert, 1991), and to learn about the nature of science (Harrison and Treagust, 2000; Passmore et al., 2017). However, some authors dispute the correlation between developing explicit modeling knowledge and acquiring conceptual understanding (Justi, 2003; Louca and Zacharia, 2012; Nicolaou and Constantinou, 2014). Although the practice of modeling is of interdisciplinary interest, it is not always transferable between the sciences and other disciplines (Abel and Conant, 2011). Thus, the characterization of students’ modeling knowledge should always be framed within specific contexts (Park and Light, 2009). Nevertheless, by engaging in context-rich modeling tasks, learners might improve their understanding of the process of modeling while still retaining a basic understanding of what models are (Crawford and Cullin, 2005).
Students’ competence in the practice of modeling can be defined in two ways (Gouvea and Passmore, 2017). On one hand, operative modeling knowledge pertains to the use of models as explanatory and predictive tools (i.e., thinking with models). On the other, epistemic modeling knowledge involves identifying the assumptions, limitations, and characteristics of a model (i.e., thinking about models) (Gouvea and Passmore, 2017; Lazenby et al., 2020). Given the widespread presence of students’ alternative conceptions about the atom, and considering that traditional curricula focus more frequently on promoting students’ ability to think with models (Passmore et al., 2017), we hypothesize that the missing piece hindering conceptions of the nature of the atom and the behavior of subatomic particles is the second dimension of modeling knowledge: thinking about models.
Table 1 Levels of sophistication across the six epistemic modeling dimensions

Dimension | Limited understanding (level 0) | Pre-scientific understanding (level 1) | Emerging-scientific understanding (level 2) | Scientific understanding (level 3)
---|---|---|---|---
Model definition (MD)ᵃᵇᶜᶠᵍʰ | Students don’t have a clear definition of what a model is; for example, models are simply charts or tables. | Models are toys, copies, or scaled versions of reality that represent nature as accurately as possible. | Models are not exact replicas of reality and are constructed for the purpose of representing reality rather than ideas. | Models are constructed in the service of developing and testing ideas, rather than copying reality, and undergo evaluation and refinement.
Modeling purpose (MPu)ᵃᵇᶜᵈᵉᶠᵍʰⁱ | Models are used as teaching tools, as aids in making explanations to someone else about a phenomenon. | A model is something to help users make visualizations while thinking about phenomena. Models help users formulate their own explanations. | Users make a clear distinction between models and their targets. Models are used in place of their targets for simplicity, safety, accessibility, or other reasons. | Models are research tools to analyze or obtain information about a target that cannot be observed directly. Models allow users to derive hypotheses that produce new information about the target.
Model changeability (MC)ᵃᵇᵈᵉᶠᵍʰⁱʲ | Models cannot be changed; there are no reasons for altering models. | Model revision occurs if the model was wrong, to correct its errors. A model is changed when new discoveries are made. | Model revision occurs when new evidence does not align with the behavior described by the current model. | Models are temporary in nature and evolve through iterative processes when their behavior is not in agreement with observations. Changing a model is not only possible but inevitable.
Model multiplicity (MM)ᵃᵇᵈʰʲ | Multiple models seldom coexist; different models are the result of different learning modalities. | Various views of the same phenomenon can exist. Multiple models exist because different people have different opinions. | Multiple models can emphasize different aspects of nature, omitting others to provide greater clarity. Multiple models may coexist because the same phenomenon affords the creation of different models. | Several competing models coexist for the same phenomenon depending on the question to be answered. Two or more models can represent the same phenomenon, but in practice there may be a reason to select one over the others.
Modeling process (MPr)ᵇᵈᶠʲ | | Models are built by connecting ideas about a phenomenon through trial and error. | The relationships built by a model depend on the way it was designed to behave like the target. | A model is developed through an iterative process in which empirical data may lead to a revision of the model. Models are always being refined to better explain or predict the target.
Model evaluation (ME)ᵇᵈᶠʰⁱʲ | Models are always true; they don’t need to be tested. | All models have equal value, should be understandable, and are evaluated against reality by the scientific community. | Models are validated by comparing the behavior of the model with the behavior of nature. | The usefulness of a model depends on the purpose it was designed for. Some models are better than others due to their validity and accuracy. Models are verified by comparing their predictions with real-world observations.

ᵃ Grosslight et al., 1991. ᵇ Justi and Gilbert, 2003. ᶜ Treagust et al., 2004. ᵈ Crawford and Cullin, 2005. ᵉ Schwarz et al., 2009. ᶠ Sins et al., 2009. ᵍ Oh and Oh, 2011. ʰ Krell et al., 2012. ⁱ Grünkorn et al., 2014. ʲ Lazenby et al., 2020.
Table 1 shows the six dimensions considered as aspects of epistemic modeling knowledge in this work, separated by levels of sophistication. It is important to highlight that these levels are not equidistant across all dimensions. For example, it may be easier for a student to progress from level 1 to level 2 in the “purpose of models” dimension than from level 2 to level 3 in the “model definition” dimension (Krell et al., 2012). The sophistication levels for each epistemic modeling dimension are labeled limited understanding, pre-scientific understanding, emerging-scientific understanding, and scientific understanding (Crawford and Cullin, 2005). Complementarily, levels 1 and 2 have been referred to as the “descriptive nature of models,” while level 3 corresponds to the “predictive nature of models” (Treagust et al., 2004).
Many researchers have explored the ways in which students, from middle school to university level and across different science disciplines, as well as experts, approach models and modeling in relation to various dimensions of metamodeling knowledge. Most students and teachers possess a thorough understanding of the descriptive nature of models while struggling to recognize models as research tools for making sense of the world (Krell and Krüger, 2017). Using a model-of-modeling framework, Justi and Gilbert (2003) studied teachers’ considerations about the instruction of the nature of modeling. They identified that most instructors did not consider it important to mention the scope and limitations of models when introducing the practice to students. This finding is corroborated by Crawford and Cullin’s observation that teachers often have fragmented knowledge of the purpose and nature of scientific models (Crawford and Cullin, 2005).
Although the epistemic modeling knowledge dimensions are well described and their levels of sophistication delineated, it is important to highlight the context-dependence of the practice of modeling. For example, Krell and colleagues demonstrated that students’ responses to a decontextualized task differ from their contextualized responses (Krell et al., 2012). They argue that students might intuitively have a more sophisticated understanding of certain models than of others (Krell et al., 2012). In other words, students might elicit different levels of understanding of models and modeling depending on the task they are given. In a study ranking 7th to 10th grade students’ understanding of models and modeling, Krell and colleagues found that students’ epistemic modeling knowledge is more sophisticated in chemistry and physics tasks than in biology tasks (Krell et al., 2015). At the university level, STEM majors often present a more sophisticated understanding of models than non-STEM students (Krell and Krüger, 2017). This context-dependence holds even across contexts within the same discipline (Grünkorn et al., 2014), and there is a need to characterize how this knowledge is situated in different contexts in order to define a competence that extends beyond a single domain. Based on this, we set out to expand the scope of our study to characterize how epistemic knowledge is structured both within and outside the context of atomic models, and how it interplays with operative knowledge of atomic structure models.
According to the knowledge-in-pieces (KiP) framework, knowledge can be modeled across the naïve, novice, and, finally, conceptually competent stages (Fig. 1). Students’ knowledge at the naïve and novice stages is unstable but has some productive elements that are modified and combined in complex ways to constitute the final structure of expert knowledge (diSessa, 1988; diSessa and Sherin, 1998; Balabanoff et al., 2020; Rodriguez et al., 2020). The KiP framework conforms to the constructivist view that knowledge can be constructed by pulling elements from different sources. The naïve stage is characterized by context-dependency, temporal instability, and knowledge fragmentation. Here, knowledge is contextually situated, neither fixed nor stable, and students rely heavily on their intuitions to make sense of scientific phenomena while remaining unaware of those intuitions (diSessa, 1993; Hammer, 1996).
Fig. 1 Illustration of knowledge structure from the naïve stage to a coordination class, adapted from diSessa (1988).
Comparatively, the novice stage is a developing, coordinated stage in which students have started recognizing relationships within and across contexts and can extract and draw inferences, though not yet on a large scope. It is characterized by an initial, partially successful effort to form connections across concepts (diSessa, 2002), something not seen in the naïve stage of knowledge construction. The conceptually competent stage is the advanced level, where students have a comprehensive and robust understanding of constructs and concepts. This level is characterized by the ability to either incorporate prior knowledge into new conceptual understanding or dismiss it (Barth-Cohen and Wittmann, 2017). One of the distinct qualities of students at this stage of knowledge construction is their ability to activate relevant knowledge resources from a pool and apply them across a range of appropriate contexts; this is termed “span.” The other is “alignment,” which entails students’ ability to recognize when information from different sources is the same and can be applied in the same context. Both specifications must be met for students to make the complex inferences that characterize the conceptually competent stage.
1. How is students’ epistemic modeling knowledge structured?
2. How does students’ epistemic modeling knowledge change with context?
3. How does students’ epistemic modeling knowledge intersect with their interpretations of atomic structure models?
The first part aimed to explore students’ epistemic understanding of scientific models. We explored the six epistemic modeling dimensions explained above (Table 1), asking questions such as ‘What do you think scientific models are?’ and ‘How do you think models are evaluated?’ By investigating these areas, we aimed to facilitate a deeper exploration of students’ underlying cognitive frameworks and thought processes surrounding scientific modeling.
In the second part, we prompted our participants to draw their own versions of atoms and reflect on their reasoning for including certain features and omitting others. The third part had two subsections. First, we introduced four atomic structure models to our participants and prompted them to describe the features of these models (Fig. 2): the Electron Probability model, the Electron Cloud model, the Solid Sphere model, and the Bohr model. These models were not presented in order of complexity or historical development, similar to the approach taken in prior work. Students were prompted to use these models to explain several chemical concepts (location of the nucleus, location of electrons, electron movement, nuclear attraction, probability of finding electrons, and energy quantization) to a hypothetical friend. We then prompted them to choose the most accurate way of representing the atom. With this, we wanted to see how their perceptions of what models are (Part 1) are reflected in their interpretations of atomic models (Part 3), especially since they have the requisite knowledge of the atom and of the chemical concepts we explored (e.g., energy quantization). Finally, we reintroduced the epistemic dimensions outlined in the first part of the interview to understand how this knowledge is situated in the presence of a context.
This manuscript reports our findings from the first and third parts of the interview, in which we explored students’ epistemic modeling knowledge as well as their operative reasoning about the “most accurate” model of the atom.
Students’ mental image of an atom was determined from the criteria they used to conceptualize an ideal model out of the list of atomic models we introduced to them. We adapted the characteristics of scientific models described by Pluta et al. (2011) and White et al. (2011) to infer their conceptual knowledge. We identified criteria such as parsimony, which relates to the simplicity and specificity of models; the ability to convey the intended information, a communicative element; how closely the model represents its target; familiarity; and reliance on the model’s acceptance by the community. Some of these criteria appear deeply rooted in intuition, the naïve level of diSessa’s (1988) knowledge framework. Others are characterized by the ability to activate necessary cognitive elements alongside those mentioned above. Making claims about whether this knowledge is transferable is beyond the scope of this study, especially given the lack of common ground on what models and modeling mean in interdisciplinary conversations. Rather, the goal of this analysis is to characterize how students’ epistemic modeling knowledge changes with and without context, to better understand the role of context in this science practice.
Fig. 3 summarizes students’ levels of understanding of model definition with associated frequencies. Among our participants, six students demonstrated a consistent level of understanding throughout the interview, with five of them exhibiting relatively low levels. All other students exhibited inconsistencies in model definition, moving among levels with varying numbers of occurrences and no consistent direction. Delores, for example, showed consistently fragmented knowledge, with one instance of not being able to explain what models are, two occurrences of seeing models as accurate representations of reality, one occurrence of models as representations of reality that need not be accurate, and one instance of models representing ideas.
Donna: Usually a visual representation of what's specifically needed to be shown, like a concept or a theory.
Here, Donna was explicitly asked to explain what a scientific model meant to her. She responded by explaining that models need to represent target phenomena, showing specifically what we intend to show. This is a moderate level of understanding, where a student states that models represent reality. However, her understanding of the nature of models seemed to change as the interview progressed. When asked how a scientific model is developed, Donna explained:
Donna: I think, probably by visually representing what we're supposed to be, or the way we're supposed to be understanding it. So, orbits look like circles because we think of orbits as circular things. But electrons don't orbit in a circle. So, when we think about an orbit, it would look like a circle on a model, but we know that it's not.
It is interesting to see that Donna recognized the idea-representing nature of models. She understands that models are developed to represent ideas rather than reality, as implicitly stated in her explanation of orbit representation in the excerpt above: “orbits look like circles because we think of orbits as circular things”. By extracting relevant cues from the way we represent an atom, Donna’s notion that models represent ideas acknowledges the complexities that come with modeling as a practice. This is a typical response attributed to model definition level 3, where students understand that models are constructed and developed to test ideas. However, we later saw a shift in Donna’s understanding of what scientific models are, as she emphasized the need for models to accurately represent the target phenomenon. Contrasting her previous notion of modeling ideas, she stated:
Donna: So, if you're talking about, maybe different models, they should reflect what they're trying to say accurately.
Here, Donna is moving towards a view where models are expected not only to represent conceptual ideas but also to maintain a degree of accuracy in reflecting the target. She is implying that there is a standard of accuracy a model should meet: the expectation to closely align with or reflect the target. This contrast between representing ideas and striving for accurate representation highlights the inconsistency in Donna’s understanding of models. Her fluctuating views illustrate a broader trend among students, reflecting an overall lack of coherence and integration in their conceptual framework of scientific modeling.
We actively sought out facets across all epistemic dimensions conveyed by students in their responses. Part of our emphasis was on identifying instances where students expressed ideas about an epistemic dimension other than the one being asked about at that moment. This was done to provide a precise depiction of how their epistemic knowledge evolved throughout the interview.
Bonnie: By understanding the way that people understand things, learn, and how they look at things.
Bonnie stated that models are developed by understanding how people see things and how they use models. This corresponds to a low level of model evaluation in Table 1, showing a novice-like understanding [level 1]. Students in this category think models need to be understandable and are evaluated against reality by authorities based on how accurately they depict it. If a model differs from the target phenomenon, people might use it incorrectly, which warrants revision. As stated in Table 1, model evaluation level 2 reflects a moderate understanding that models are validated against nature, not solely through community opinions. Consequently, responses that show expert-like thinking about model evaluation cite the ability of models to make accurate future predictions.
Ashley: Well, there are different ways of conceptualizing [a phenomenon]… [For instance] how you can consider light, as a particle or as a wave, like there are different ways to represent a laser: as a particle or like as a wave.
Ashley recognized the roles of empirical data and of the modeler’s interpretive framework in the process of developing models, which can warrant having multiple models. She also accessed relevant prior knowledge to show alignment across the nature of light and atomic theory. We saw this in her explanation of the “different ways of conceptualizing [a phenomenon]” as she cited the wave-particle duality of light.
A majority of students (n = 11) showed a moderate level of understanding by stating that the same data and observations can give rise to multiple models when we emphasize different aspects of the same phenomenon for greater clarity [level 2]. Fewer students (n = 4) attributed model multiplicity to variations in learning modalities, indicating a naïve-level understanding [level 0], or to differences in educational levels and modelers’ opinions, indicating a novice-level understanding [level 1].
As seen in Fig. 4 above, students either maintained their level of understanding (x = 0), improved (x = +1, +2), or regressed (x = −1, −2) across the epistemic dimensions over the course of the interview. Improvements can be of one level (from 1 to 2, or 2 to 3), denoted by +1 in the figure, or two levels (from 1 to 3), denoted by +2. Students can also regress one level (from 3 to 2, or 2 to 1), denoted by −1, or two levels (from 3 to 1), denoted by −2; however, none of the participants in our study demonstrated a two-level regression. For example, in model multiplicity (MM), three students retained a level 2 understanding at the end of the interview (maintained: x = 0), three students improved from level 1 to level 2 (one-level improvement: x = +1), and one student regressed from level 2 to level 1 (regression: x = −1). We recorded the highest improvement in model definition, followed by modeling purpose. This shows that students were able to extract the necessary inferences from the context given to them to advance their epistemic knowledge of scientific models. In light of previous work on the effect of context on epistemic modeling knowledge, one might argue that the improvement we observed arose because participants were now situating their ideas in a context and could retrieve the knowledge necessary to build a more complex framework, advancing their epistemic modeling understanding.
Josh: A lot of trial and error, probably a lot of kind of just testing them and seeing if they actually react the way you want them to react or react the way the real process works.
However, Josh’s idea of the modeling process improved towards the end of the interview; he exhibited a moderate level of understanding of the modeling process by citing the roles of research and experimentation. Now, it is not just about “connecting dots” randomly, as he expressed earlier; it is a set of determined actions:
Josh: Okay, I think it takes people wanting to understand more about, like, what makes up the world around us. And it took a lot of like problem solving, to try to figure out how to better you know, like, test what is around us and try to find out and understand the world around us through a mean, a probably took several people kind of brainstorming and working together to develop different tests and experiments to like, see how different things interact.
Josh has now recognized the role of data [MPr level 2] in the process of developing scientific models, a step beyond his initial idea of “trial and error,” but he did not say anything about how the data could become models, which would be needed to acknowledge the influence of the modeler’s interpretive framework [MPu level 3]. Nevertheless, he extracted a justifiable inference from the scenario of multiple models presented to him.
Sam’s understanding of model evaluation progressed significantly throughout the interview. Initially, her viewpoint centered on the notion that models are assessed based on how easily people can interact with and comprehend them. This perspective aligns with a low-level understanding [level 1] on our construct map, where the evaluation criteria focus primarily on subjective perceptions of clarity and usability, without considering scientific criteria or methodologies. This response is typical of reliance solely on intuitive ideas. Sam stated:
Interviewer: What would be like the things to check for in a good model?
Sam: Okay, I would say [a good model would] have different modes or ways of people being able to interact with it.
However, as the interview unfolded, Sam's perspective on model evaluation evolved. She expressed a more nuanced understanding, stating that models should be evaluated by comparing their behavior with that of the target phenomenon. She stated:
Interviewer: Which one of these models do you prefer?
Sam: Definitely A [electron cloud], because in my way of thinking, it is a cloud, I could be wrong. And if I learned that that was wrong, I would probably change my picture. But from my understanding right now, I would say that atoms are clouds, little clouds that condense so much together that they make a physical thing. So, I would say this one represents most of what a cloud would be.
This viewpoint represents a moderate level of understanding [level 2] on our construct map. It acknowledges the importance of assessing models based on their ability to accurately represent real-world phenomena, rather than solely relying on subjective opinions or ease of interaction. In her preference for the electron cloud model, Sam illustrated her evolving understanding of model evaluation by activating necessary prior knowledge. She explained her preference by likening atoms to clouds, demonstrating her consideration of how well the model aligns with her conceptualization of the phenomenon. While not explicitly stating scientific criteria for evaluation, her reasoning suggests a deeper appreciation for the correspondence between models and reality.
Overall, Sam's progression from a low-level to a more intermediate understanding of model evaluation highlights her growing recognition of the importance of aligning models with empirical observations and scientific principles. This evolution reflects a deeper engagement with the epistemic aspects of model evaluation and signifies her increasing proficiency in assessing the validity and utility of scientific models.
Students’ levels of understanding of the purpose of scientific models were relatively lower than those for the other dimensions of epistemic modeling knowledge. Students’ ideas of why models are needed and used spanned from the naïve to the moderate levels, more widely than in the other epistemic dimensions we investigated. The naïve-level understanding in our construct map, coded as level 0, describes models merely as teaching and learning tools, an intuitive view that shows limited knowledge of what models are used for. Level 1 is characterized by seeing models as a way for users to formulate their own explanations. Models are still seen as learning tools, as in level 0, but the distinction is that students understand they can develop models for themselves to aid their understanding rather than waiting for a higher authority to develop them. Level 2 is an emerging scientific perspective; it is a large step forward in sophistication and depends on the explicit recognition of a distinction between models and their targets. The scientific perspective [level 3] on the purpose of models is reached when students recognize the need for models as scientific tools to observe abstract phenomena.
We characterized epistemic knowledge in the context-general portion of this study based on the frequent co-occurrence of codes, while responses justifying the choice of the most accurate model were categorized into two groups: extrinsic and intrinsic. The extrinsic group comprises students who based their justification of the most accurate way of representing the atom on human preferences, such as what they were told by teachers or accessibility. The intrinsic group comprises those who activated chemistry content knowledge and drew on the features of the models to explain which model looked most accurate to them.
i. Using extrinsic factors such as relying on the opinions of people in authority (like teachers) and familiarity to identify the most accurate model of the atom.
Out of our sample (n = 13), three students based their choice of the most accurate model on extrinsic criteria: two used what they were told by their teachers, and the other cited familiarity. A typical response invoking ontological status (i.e., acceptance as true by the community) is: “I don't know because I've heard all my professors and teachers saying it's not accurate. So, I know, it's probably not accurate.”
Ashley used familiarity as a preference to select the most accurate model. She stated: “Probably Bohr model (Fig. 2), because that's the most I'm used to seeing.”
The distinguishing feature of students in this group, compared with the rest of our participants, is their limited knowledge of the utility of models. There was no intermingling of ideas about the purposes of scientific models; they simply stated that instructors’ perspectives and familiarity play a significant role in something attaining the status of a model. As in the naïve stage of the KiP framework, students here did not demonstrate activation of appropriate and relevant cognitive resources.
ii. Using intrinsic factors such as the features of the models and activating the necessary conceptual knowledge resources to identify the most accurate model of the atom.
We categorized student responses based on their proficiency in using model features and chemistry knowledge into four groups: high feature usage with high chemistry knowledge, low feature usage with high chemistry knowledge, low feature usage with low chemistry knowledge, and high feature usage with low chemistry knowledge (Fig. 5). In the present study, no responses fell into the last group (high feature usage with low chemistry knowledge).
We know that electrons in atoms are constantly in motion, as described by quantum mechanics. Even though the justification stated in this response is wrong, it shows us that students have potentially productive resources that could be built upon to attain a higher level of understanding. Interestingly, students in this category had the highest levels of understanding across the epistemic dimensions we investigated, ranging from knowing that models represent not just reality but also ideas, to recognizing the iterative nature of the modeling process, to seeing the utility of models as spanning from teaching tools to visualization tools for formulating explanations [Table 1].
“If I had to say what I think an atom looks like, in my head, it would be this one (electron cloud). Like, it's not a solid thing, it's more of an empty space, you know, because the first thing they tell you is, an atom is like 90% empty space.”
From this response, it is evident that Donna has a somewhat solid grasp of the probability notion, and her choice of the most accurate model went beyond the classical view. However, her reading of the model features reveals an implicit misinterpretation of the electron probability model: she pictures the dots as electrons rather than as probabilities. Checking her epistemic ideas across the different dimensions, her most frequent level was 2, an emerging scientific understanding of scientific models.
“Probably D. That is because it's easier to like, visualize the electrons as compared to the nucleus.”
Consequently, we saw some overlaps in their epistemic notions of an ideal scientific model. The distinguishing feature of their responses is the view that models are built by connecting ideas about a phenomenon through trial and error and are evaluated by how well people comprehend and use them. This is denoted as a pre-scientific understanding in our construct map, which aligns with their low feature usage and low chemistry knowledge in the operative-level analysis.
Students in the three categories above fall within the novice stage of the resource framework because they were able to fall back on prior knowledge, with different levels of sophistication. Because we lack empirical data across contexts, we cannot make claims about span and alignment.
Furthermore, we observed notable progress in students’ epistemic knowledge across all explored dimensions over the course of the study. This is clearly depicted in their responses after the short modeling activity we introduced. It shows that students can advance in their understanding by making concrete connections within concepts; however, we have no evidence of transferability because this was explored in the single context of atomic theory. Despite initially exhibiting relatively low understanding, participants demonstrated improvement by the interview’s conclusion, albeit to differing extents and on various epistemic dimensions (Fig. 4). The observed increase across epistemic dimensions supports the idea that engaging students in a short modeling task can improve their epistemologies of scientific models. This complements previous work by Barowy and Roberts (1999), whose study participants learned how to engage in science activity through modeling practices.
Finally, students’ conceptions of an ideal scientific model spanned a range of qualities. Students who based their judgment of atomic model accuracy on interpreting concrete features present in the models also called upon the necessary prior knowledge of scientific theories and facts surrounding atomic theory. This knowledge was used in two ways: to recognize the limitations of other models (HUHC, high feature usage with high chemistry knowledge) or in the misinterpretation of model features (LUHC and LULC, low feature usage with high or low chemistry knowledge). Interestingly, these groups of students possessed moderate to high levels of epistemic knowledge. Other students cited utility, clarity, ease of use, and reliance on authority for approval (e.g., being told by a teacher) in their evaluations of accuracy. This group generally showed difficulties in extracting information from the models presented to them and in using prior knowledge, relying instead on intuitive reasoning such as “greater clarity or ease of use is better.” Similar findings were reported by Lazenby et al. (2020): epistemic knowledge can only develop through explicit instruction that prompts students to think about the criteria scientists use to evaluate models.
Overall, these findings reveal the complexities of students' epistemic knowledge and its influence on their interpretations of atomic structure models, underscoring the importance of fostering a more nuanced understanding of models in science education.
Students’ epistemic knowledge is usually less sophisticated than their content knowledge, which is often built by interconnecting bits of information gained through various experiences, including but not limited to classroom learning. Students encounter challenges in navigating the concept of scientific models because they lack sufficient epistemologies of the nature of models. While we observed some students relying heavily on the observable features of models, others focused more on external factors such as the utility of models. Therefore, instructors need to develop deeper opportunities for students to engage in modeling practices, targeted toward developing the epistemic dimension of modeling knowledge: engaging students in modeling scientific situations to produce their own models and prompting them to reflect on those experiences. Because developing competence in modeling phenomena may take a great deal of effort and time, learning environments should scaffold students’ epistemic knowledge when explicit instruction is not possible. This could be done by building low-stakes class activities targeted at developing modeling skills, like the one we incorporated in this study.
Additionally, the findings from this study open room for researchers to explore how transferable modeling knowledge can be developed. Since inquiry is the cornerstone of science and is central to science education reforms in the United States, there is a need to help students develop modeling competence that is not framed within just one context. This goes a long way toward preparing them for life beyond the classroom.
We also acknowledge our assumption that students’ responses to the open-ended question on the most accurate way of representing the atom were the truest representation of their perceptions of scientific models. Since there is no evidence of the transferability of epistemic modeling knowledge across contexts and disciplines, further studies should explore these areas in order to have a sense of how to better support our students.
Lastly, studies have identified the effect of individual differences in students’ formal reasoning, but this is beyond the scope of our study. Therefore, it is possible that the ideas elicited by our participants are not representative of all students.
The findings of this investigation show that despite being introduced to the evolution of atomic theory and engaging in the use of atomic structure models to solve problems, as in traditional classroom setups, students still possess relatively unsophisticated knowledge of scientific models. This is expected given that modeling knowledge has two dimensions (thinking with and thinking about models), and instruction frequently engages students in only one (thinking with). Overall, the findings reveal the challenges students face in comprehending the different models of the atom, which were developed to foster a deeper understanding of its abstract nature. They also highlight the need to develop students’ modeling knowledge from an epistemic perspective (thinking about models) to prepare them for success in subsequent chemistry concepts and across STEM disciplines.
The supplementary information file includes the interview protocol. See DOI: https://doi.org/10.1039/d4rp00360h