Marc N. Muniz,*a Cassidy Crickmore,a Joshua Kirscha and Jordan P. Beck*b
aWestern Washington University, Bellingham, Washington 98225, USA. E-mail: Marc.muniz@wwu.edu
bConcordia University Wisconsin, Mequon, Wisconsin 53097, USA. E-mail: Jordan.beck@cuw.edu
First published on 24th April 2018
Chemical processes can be fully explained only by employing quantum mechanical models. These models are abstract and require navigation of a variety of cognitively taxing representations. Published research about how students use quantum mechanical models at the upper-division level is sparse. Through a mixed-methods study involving think-aloud interviews, a novel rating task, and an existing concept inventory, our work aims to fill this gap in the literature and begin the process of characterizing learning of quantum chemistry in upper-division courses. The major findings are that upper-division students tend to conflate models and model components. Students, unlike experts, focus on surface features. Our data indicates two specific surface features: lexical features and a “complex equals better” heuristic. Finally, there is no correlation in our data between a student's facility with navigating models and their conceptual understanding of quantum chemistry as a whole. We analyze the data through the lens of a framework which enables us to cast model conflation as a problem of ontology.
Stefani and Tsaparlis conducted a phenomenographic study of second-year undergraduate students’ understanding of basic quantum chemical models and concepts (Stefani and Tsaparlis, 2009). The researchers categorized student responses to an interview protocol, which covered content ranging from atomic orbitals and the Schrödinger equation to simple molecular orbital theory. They found that even students who performed the strongest on the interview expressed ideas about quantum chemical constructs (e.g. hybrid orbitals) that were not scientifically normative. Further, they explicitly identified fragmented knowledge as a barrier to students’ development of scientifically normative ideas about the theoretical underpinnings of atomic, molecular, and hybrid orbitals. Furthermore, Tsaparlis used three years’ worth of students’ final exam data from an upper-division quantum chemistry course to illustrate that students struggle to construct their understanding across a wide variety of content including, but not limited to, orbitals (atomic and molecular), the orbital approximation, and a wide array of useful mathematical and symbolic-level representations (e.g. operators, atomic and molecular term symbols) (Tsaparlis, 1997). It is important to note that the data Tsaparlis analyzed in this work focused exclusively on those students who had passed the course. This supports the notion that even “high-performing” students encounter significant challenges when developing understanding of quantum chemical phenomena.
In the physics education literature, Singh identified three major ways in which upper-division undergraduate physics students struggled to “discriminate between related concepts” in a wide variety of contexts (including, but not limited to, quantum measurements and time dependence of states) (Singh, 2001). This involves the tendency of information to “run together” in such a manner that various systems and models cannot be appropriately deconvolved in students’ minds. More recently, Singh, in collaboration with Marshman, found that upper-division quantum physics students experienced challenges recognizing that the constructs of probability distribution and expectation value are separate but related (Marshman and Singh, 2017). At the same time, many students did not recognize that equivalent statements of the expectation value (one in Dirac notation, and one in traditional integral notation) are, indeed, equivalent.
To summarize, results from existing work on student ideas in quantum chemistry at the secondary and undergraduate levels as well as in quantum physics at the upper-division undergraduate level indicate that students have a tendency to make inappropriate associations between model or system components, or do not have a means of distilling useful information from prompts within quantum contexts. Prior work also suggests that students focus on surface-level features (e.g. “nucleus” in biology should mean the same thing as “nucleus” in chemistry) to draw conclusions about the systems under study. The work also indicates that students have strongly-held conceptions about the behavior of electrons and atoms, and that these deterministic or hybrid-type ideas are stable and difficult to change.
While more work has been done to characterize advanced students’ knowledge in quantum physics, comparatively little has been carried out to investigate how students navigate and use quantum chemical models at the upper-division level (with the work of Tsaparlis being a notable exception). This is unsurprising, given that the latest report from the U.S. National Academy of Sciences on discipline-based education research explicitly states: “…DBER on upper division and graduate courses is currently relatively limited…” (Council, 2012). Furthermore, the report states that “The majority of (studies on students engaging in scientific practices, such as modeling) involve only students majoring in the biological sciences, and it is much more common for these studies to take place in lower division courses than upper-division courses.” Our work aims to fill this crucial gap in the literature and begin the process of characterizing students’ learning of quantum chemistry in upper-division courses.
Models and the scientific practice of modeling play a crucial role in enabling learners to construct and appropriately use their knowledge to predict and explain quantum chemical phenomena. For this reason, we have chosen a model- and modeling-centered approach to our investigations. To guide this work, we posed the following research questions (RQs):
RQ1 How do upper-division undergraduate physical chemistry students develop and apply quantum mechanical models in the context of the hydrogen atom?
RQ2 How do upper-division undergraduate physical chemistry students select quantum mechanical models and model components to explain various quantum chemical phenomena? How does this compare to experts’ approach(es)?
RQ3 Are students’ abilities to select appropriate models or model components correlated with the strength of their conceptual understanding of quantum chemistry as a whole?
RQ2 was formulated based on the results from data collected to address RQ1. Likewise, RQ3 was formulated while data pertinent to RQ2 was being interpreted. Thus, while we will briefly describe the methodology insofar as it applies to the entire study, the results and discussion will be presented as a narrative that follows the progression of the development of RQs 1, 2, and 3.
Tool 1 Think-aloud interviews (Bowen, 1994) in which participants were asked to construct a model of the hydrogen atom and use the model to address typical questions encountered in a quantum chemistry course (addresses RQ1).
Tool 2 A quantum chemical model rating task that characterizes whether and how students choose appropriate models, or model components, to solve a variety of quantum chemical problems (addresses RQ2, partly addresses RQ3).
Tool 3 A concept inventory, the Quantum Chemistry Concept Inventory (QCCI) (Dick-Perez et al., 2016), to determine students’ conceptual understanding of quantum chemistry in multiple contexts (partly addresses RQ3).
Complete versions of the think-aloud interview protocol and rating task are provided in Appendices 1 and 2. For both Tool 1 and Tool 2, students were provided with written prompts and a resource sheet. They were asked to read the prompts aloud and to respond both verbally and with written answers.
Participants were drawn from upper-division physical chemistry courses at a large, public regional comprehensive university in the Pacific Northwest (LRU), a small private liberal arts university in the Midwestern United States (SLA), and a large, public research university in the Midwestern United States (R1U) during the 2015–2016 and 2016–2017 academic years. Table 1 provides a summary of the number of participants from each institution. Some student participants completed more than one of these three tasks. A full grid delineating the number of unique students who completed each task is given in Appendix 3.
| Institution | H-Atom protocol | Rating task with (without) interview | QCCI |
|---|---|---|---|
| LRU | 4 | 10 (8) | 13 |
| SLA | 6 | 4 (0) | 8 |
| R1U | 3 | 4 (0) | 12 |
Four experts were also recruited to engage in a think-aloud interview centered on the rating task. For the purpose of this study, experts are defined as individuals with a PhD in chemistry or closely related field and with at least two terms of experience teaching upper-division quantum chemistry. All participants provided informed consent and all research procedures were in compliance with the IRB standards set forth at each institution. For this report, students have been randomly assigned pseudonyms and gender pronouns.
1. Model construction: participants were prompted to draw upon a resource sheet in order to build a model of the hydrogen atom. They were prompted to consider how potential and kinetic energy were taken into account in their model.
2. Model application: participants were prompted to use their model to help explain several figures and diagrams.
3. Model extension: participants were simply asked to state what, if anything, they would change in order to model a helium atom as opposed to a hydrogen atom.
The research team (principally the two corresponding authors, with the help of the two upper-division undergraduates listed as authors on this manuscript) examined the transcripts of student responses for emergent patterns and themes, using qualitative analysis software (Dedoose and Atlas.ti) as well as the comment feature of Microsoft Word, and constructed memos and, eventually, preliminary codes. This approach is traditionally used in qualitative research, particularly with methods whose principal aim is to categorize thematic elements in the data. Thus, the qualitative component of our data analysis is best described as a form of thematic analysis (Braun and Clarke, 2006).
After multiple meetings and discussions among the researchers, it was determined that three general codes could be constructed from, and applied to, the existing data:
• Physical misinterpretation: occurred when a participant provided an improper explanation for a phenomenon, model, or model component itself.
• Reluctance to mathematize: occurred when a participant avoided, or expressed discomfort with, mathematical formulations of the model despite the prompt's request for them.
• Conflation: occurred when models or model components were conflated with one another or invoked in improper circumstances. There are two sub-codes associated with the parent code of “Conflation.”
∘ Explicit: the language used by the participant suggests that they are aware of their struggles in the context under consideration.
∘ Implicit: the language used by the participant suggests that they are unaware of their struggles in the context under consideration.
Application of these codes to the data set from academic year 2015–2016 revealed a number of problematic conceptions among students about the hydrogen atom system as well as a tendency to conflate various models and model components with one another or invoke them under improper circumstances. As these results are similar to those reported in the physics literature (McKagan et al., 2008; Baily and Finkelstein, 2010; Savall-Alemany et al., 2016), only a few, short representative quotes are provided below to demonstrate how these themes appeared in our data.
Adrian (in response to the prompt about accounting for kinetic and potential energy in the model): “Um I didn’t really draw them (potential and kinetic energy) into my model, um, however the electron orbital could be taken as, um, the path where the potential energy is kinda leading…it's keeping the electron in its orbital and it's not straying too far from the path that it normally takes.”
Adrian's statements imply that the electron follows a particular path around the proton (i.e. it orbits the proton). This indicates that Adrian has yet to move beyond a deterministic interpretation of the electron insofar as it behaves in relation to the positively charged nucleus.
Of the depictions displayed in Fig. 2, Kris gives the most complete response. Kris included the time-independent Schrödinger equation and separated out the potential energy and kinetic energy operators, panel A. Jamie, panel B, also included the Schrödinger equation, but mislabeled the kinetic and potential energy portions. Both Jodie, panel C, and Jessie, panel D, attempted to provide an expression for the potential energy operator. Jodie included the V(r) expression only after prompting from the interviewer.
Given the importance of mathematics in physical chemistry and that the prompt asked students to use mathematical formulae, it is interesting that nine of the thirteen students did not include any mathematical formulations. This is evidence that
• students do not consider mathematical formulae to be models or model components, and/or
• students are so uncomfortable with the mathematics of quantum chemistry that they choose to avoid it.
Our data provides evidence that both could be true. For the former, consider how Eddie responds to the first prompt:
Eddie: “It's kind of a hard question to answer cause when you draw a diagram, like kind of what you want to draw is like these two discrete particles with this like interaction where one orbits the other one… So it's, I guess that's hard to draw… Is it like in a wave function that has to like meet back up with itself at the end or like what… And then you how do you draw that? It's hard to pick one. I guess you just have to pick a model that satisfies the aspect of the system that you're trying to talk about at the time.”
Students’ reluctance to consider mathematical formulae to be models or model components has been observed at the general chemistry level (Becker et al., 2017; Brandriet et al., 2018). To our knowledge, this is the first direct evidence that the same reluctance occurs at the upper-division level in chemistry.
Our data also provide ample evidence for the second possibility: students who do not understand the mathematics simply avoid it when they can. For example, a student should be able to look at the terms in the Hamiltonian operator for the hydrogen atom and clearly state that the kinetic and potential energy operators are represented by the Laplacian and Coulombic terms, respectively. They should also understand that, once appropriate solutions to the differential equations that arise from the Schrödinger equation for this system have been found, the wavefunctions (eigenfunctions) will be functions associated with the energies (eigenvalues) of particular states, but should not be conflated with the observable (here, energy) itself. A prime example of a participant who conflates these constructs follows:
Jodie (in response to the prompt about accounting for kinetic and potential energy in the model): “Potential and kinetic energy. I think those terms are going to be in the operator for the wavefunction of an electron.”
Interviewer: “When you say in the operator for the wavefunction, what do you mean?”
Jodie: “I guess the set of parameters of the wavefunction is used to measure the energy of the wavefunction. So the kinetic energy and potential energy are energy. So the functions for those are used to find the whole energy of the function, maybe the electron.”
Note that Jodie clarifies her statement about the “operator for the wavefunction” by alluding to the “set of parameters of the wavefunction.” The implication is that the parameters of the function that describes the state are used as a means to measure the observable (in this case, energy). Operating on a particular wavefunction with the Hamiltonian operator does indeed yield the total energy of the state, and that wavefunction does contain parameters; nevertheless, it is clear that the participant has great difficulty focusing explicitly on the information contained within the Hamiltonian operator. It is this operator that serves as the platform in quantum mechanics for building the kinetic and potential energy terms.
Interviewer (in response to Jodie's statement ending in “…electron.”): “So is there a way to take that into account? In building up the model?”
Jodie: “In a model…I don’t know how to make a picture out of that. There was that thing with the box. I’m bad at pictures.”
Interviewer: “Does it necessarily have to be just a picture? Or…”
Jodie: “Well, it could be a visualization. I guess it could be the graph of particle in a box kind of thing. I still don’t really get that kind of graph anyway.”
In addition to Jodie's implication that a model must be a “picture” or a “visualization” (rather than recognizing that such pictures or visualizations are likely a single model component, accompanied by other model components such as mathematical expressions), she invokes a model (particle in a box) that is not directly relevant to the system under consideration. It is clear that Jodie is conflating the particle in a box model with the hydrogen atom model. In this segment, Jodie does not indicate that she is consciously aware of this conflation. We have coded such instances as “implicit.” The implicit conflation code was one of the most robust themes that emerged from the data.
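For reference, the operator at issue in these exchanges is the hydrogen-atom Hamiltonian, in which the Laplacian term carries the kinetic energy and the Coulomb term carries the potential energy, and which returns the state energy upon operating on an eigenfunction:

```latex
\hat{H} = -\frac{\hbar^2}{2\mu}\nabla^2 - \frac{e^2}{4\pi\varepsilon_0\, r},
\qquad \hat{H}\,\psi_{n\ell m} = E_n\,\psi_{n\ell m}
```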
In addition to “implicit conflation,” there were several instances where students were aware of and able to verbalize their difficulty consolidating and appropriately applying several models or model components. Such statements were coded as “explicit”. One representative example is given below.
Avery: “Everything has been clouded up by so many other things. Um, I think this has to do with the energy and the Rydberg.”
Interviewer: “So when you say it's been clouded, what do you mean?”
Avery: “I mean like it's been so long since we talked about this emission spectrum thing that I don’t remember what all it's signifying.”
The full rating task and resource sheet are provided in Appendix 2. In short, participants were instructed to consider eight prompts drawn, or adapted, from two widely-used and prominent physical chemistry texts (McQuarrie and Simon, 1997; Atkins and De Paula, 2009). Rather than solve each problem, they were instructed to rate four quantum mechanical models or model components on a four-point Likert scale ranging from 1 (very unhelpful) to 4 (very helpful) in terms of how helpful they would be for solving the problem. Each model or model component was to be considered independently. So, for example, on any given prompt each of the four choices could be rated a 1 if the participant believed each choice to be very unhelpful in addressing the prompt.
The rating task was administered to 18 student participants as a think-aloud interview during the 2016–2017 academic year. Additionally, some participants at the LRU completed the rating task without going through the interview process. All responses were transcribed and coded. While the research team analyzed responses to all of the prompts, only three of these eight questions will be discussed here as they provide the richest data in terms of model conflation. These three prompts and models to rate are provided in Table 2.
| Item number | Prompt | Models/model components to rate |
|---|---|---|
| 1 | Atoms in a chemical bond vibrate around the equilibrium bond length. Show that the average displacement from the equilibrium bond length of a chemical bond equals zero and is independent of vibrational energy. | a. Spherical harmonics; b. Rigid rotator model; c. Harmonic oscillator model; d. Morse potential energy curve |
| 3 | The “spacings” between lines in a pure microwave spectrum of HBr are not equidistant. Why? | a. Rigid rotator model; b. Anharmonic oscillator model; c. Centrifugal distortion term; d. Spin selection rules |
| 6 | Prompt description: Given the length of a linear, conjugated molecule (hexatriene), predict the HOMO to LUMO transition energy. | a. Particle in a box model; b. Particle on a ring model; c. Harmonic oscillator model; d. Free particle model |
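For context, a standard textbook treatment of Item 6 models the six π electrons of hexatriene with particle-in-a-box energy levels; filling the levels pairwise puts the HOMO at n = 3 and the LUMO at n = 4, so the transition energy is

```latex
E_n = \frac{n^2 h^2}{8 m_e L^2}, \qquad
\Delta E = E_4 - E_3 = \frac{\left(4^2 - 3^2\right) h^2}{8 m_e L^2} = \frac{7 h^2}{8 m_e L^2}
```

where $L$ is the given molecular length.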
Four physical chemistry experts—all of whom have multiple years of experience teaching upper-division quantum chemistry—engaged in think-aloud interviews with the rating task. The experts’ ratings, coupled with their interview transcripts, served as a baseline against which to compare the ratings and statements of student participants. Therefore, the expert interview transcripts were analyzed for general themes. As expected, the experts gave fluid, scientifically normative responses that were very much focused on deep, structural features of the knowledge domain which are recorded in Table 3.
| Item number (topic) | Expert responses |
|---|---|
| 1 (Vibration) | Harmonic oscillator is helpful; spherical harmonics are unhelpful; clearly articulated the physical systems of motion involved in the models/model components |
| 3 (Microwave spectroscopy) | Focused on rotational motion; described the physical system (rotating diatomic) |
| 6 (Hexatriene) | Described the physical system (linear, conjugated molecule) |
It is well documented in the literature that experts approach problems and organize knowledge in ways that are quite different from novices (Chi et al., 1981; Daley, 1999; Smith et al., 2013; Irby et al., 2016). The results presented here certainly adhere to the literature in these areas. They also support the notion that learners, as novices, seldom organize their knowledge in a domain based on deep, structural characteristics of that domain.
The rating task think-aloud interviews produced a wealth of data that not only uncovered the model conflation we sought to “tease out” across various content in the quantum chemistry curriculum, but also illustrated a tendency of students to consider superficial aspects of models and model components. Below, we present the results along the lines of both conflation and fixation on surface features.
The codes that resulted from, and were applied to, this data are:
• Conflation: occurs when models or model components are conflated with one another or invoked in improper circumstances. This code was typically applied when there was a departure from the expert majority response with respect to a particular model or model component on a given item. For this analysis, the actual ratings were dichotomized, i.e. ratings of 1 and 2 were collapsed into a single category called “unhelpful” and ratings of 3 or 4 were collapsed into a single category called “helpful”. In addition to divergence from the expert position described above, there must be evidence in the students’ statements that the rating is not simply due to “not knowing.” Since explicit statements of conflation did not arise within the data collected for this task, there was no need to re-invoke the child codes (explicit vs. implicit) in this analysis.
• Fixation on surface feature: this theme came about due to the tendency of student participants to focus on superficial aspects of models and model components as opposed to their underlying predictive/explanatory power or relevance to the problem. This theme emerged from the data in two general ways, and led to the development of the following child codes:
∘ Lexical: participant focused on a surface feature that was due to the language used to describe the model or model component. The language was interpreted in such a way that it is superficial and unrelated to the utility of the model or model component to address the problem.
∘ Complex = better: participant automatically assumed that a model that was either introduced later in the sequence, or that shared certain structural similarities with a more appropriate model, was better suited to solving the problem under consideration.
A prime example of conflation is provided by Addison in response to Item 1 (vibration):
Addison: “Okay, so. Spherical harmonics describe the motion for a chemical, a chemical bond vibrating around the equilibrium bond length; that will be the, um, part of the wavefunction that describes the motion going back and forth, so I ranked it ‘four,’ very helpful.”
Here, Addison has conflated the rigid rotator wavefunctions (spherical harmonics) with those of the harmonic oscillator.
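For context, the standard route to Item 1 uses the harmonic oscillator eigenfunctions, whose symmetry about the equilibrium position makes the integrand of the average displacement odd, so the integral vanishes for every vibrational state:

```latex
\langle x \rangle = \int_{-\infty}^{\infty} \psi_v^{*}(x)\, x\, \psi_v(x)\, \mathrm{d}x = 0
\quad \text{for all } v
```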
Ryan, in response to Item 3 (microwave spectroscopy) constructs a response that conflates molecular rotation and vibration:
Ryan: “A better model would be the anharmonic model because it does illustrate that just the spectrum, the lines of the spectrum going from lower to higher energy are, they get closer together. It's not constant. And, I guess the centrifugal distortion sort of explains that, it's the correction for the anharmonicity.”
Note, here, that the conflation of rotation and vibration is with respect to the corrections to ideal motion. Ryan believes that the unequal spacing in the pure microwave (rotational) spectrum can be accounted for by an anharmonic model (vibrational) and then goes on to express centrifugal distortion as a “correction for the anharmonicity.”
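For context, the non-equidistant spacing in Item 3 is a purely rotational effect: adding a centrifugal distortion constant $D$ to the rigid-rotator term values shifts the transition frequencies so that successive line spacings change with increasing $J$, with no appeal to vibrational anharmonicity:

```latex
\tilde{F}(J) = B\,J(J+1) - D\,[J(J+1)]^2, \qquad
\tilde{\nu}_{J \to J+1} = 2B(J+1) - 4D(J+1)^3
```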
As a final representative example of conflation, consider Brooklyn's description of the model(s) that would be most appropriate to address Item 6 (hexatriene):
Brooklyn: “…I said that both the particle-on-a-ring and the harmonic oscillator would be equally helpful because, in reality, it's kind of a combination between the two. And the particle in a box. I mean, the particle in a box is kind of like a particle on a ring sort of, but less accurate.”
The most salient feature of this response, at first glance, is perhaps the combination of a model for internal motion (harmonic oscillator) with a simple model of electronic behavior (particle-on-a-ring). On further reading, however, one discovers another interesting statement in the excerpt: one suggesting that Brooklyn's choice of particle-on-a-ring over particle-in-a-box is based on the interpretation that the former is more complex than the latter. Preferring one model because it seems more complex than another, irrespective of its appropriateness to the system under consideration, is arguably a reliance on a type of surface feature: the student's own interpretation of the relative complexity of the models leads them to virtually ignore the appropriateness of the model with respect to symmetry, or even to the entity or type of motion under consideration.
This revelation naturally leads us into a discussion about surface features, which we have characterized as either related to a simplistic interpretation of the language associated with the model or model component (lexical) or, as in the case of Brooklyn above, the interpretation of the models or model components in such a way that increasing complexity is always better (complex = better).
The reliance of novice practitioners on surface features is well reported in the literature (Chi et al., 1981; Smith et al., 2013; Irby et al., 2016) and not particularly surprising in this context. What is informative and interesting are the two classes of surface features that were overwhelmingly present in our data. These two classes, lexical and complex = better, are briefly discussed below.
Robin: “I put, uh “a” (spherical harmonics) and “c” (harmonic oscillator) both as very helpful for solving the problem. Um, because if you’re dealing with bond vibration, you’re going to be looking at uh, um, you’re going to be looking at harmonics.”
The term “harmonics” is applied by the participant far too broadly: he believes that simply because “harmonics” appears in the name of the model or model component, it must be useful for solving this particular problem.
While the incidence of surface feature (lexical) code was quite high in the first item, it was also discovered in other contexts. For example, consider Sam's response to the item about the microwave spectrum of HBr.
Sam: “Probably like it's not harmonic so if it was perfectly harmonic I think it would have the same distance… Centrifugal distortion term I think that's just like a correction to it being anharmonic… To explain it I think it's mostly because its anharmonic is why they are not equal.”
Sam focuses on harmonicity vs. anharmonicity to address this prompt. Presumably because he learned that energy levels in a harmonic oscillator are evenly spaced, Sam may have created a heuristic that “harmonic means equal” and “anharmonic means unequal,” with disregard for the property being addressed. Sam approaches this question by focusing on a lexical cue, the equal spacing of spectral lines, and then applies the heuristic.
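Sam's heuristic does have a kernel of truth for vibrational energy levels, where the harmonic oscillator predicts even spacing:

```latex
E_v = \hbar\omega\left(v + \tfrac{1}{2}\right), \qquad E_{v+1} - E_v = \hbar\omega
```

The prompt, however, concerns line spacings in a pure rotational (microwave) spectrum, to which this vibrational property does not transfer.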
One example of a student relying on this surface feature was given in Brooklyn's response to item six of the rating task (about the transition energy of hexatriene), discussed above. Brooklyn refers to the particle-on-a-ring model as more accurate than particle in a box and, therefore, suggests that it is more appropriate for solving the problem. This is a prime example of a participant focusing on a “complex = better” type of surface feature.
As another example, consider Gwen's response to Item 3 on the rating task (about the microwave spectrum of HBr):
Gwen: “For choice A (rigid rotator model) I will give it a four. It is very helpful. Um um so so for choice B (anharmonic oscillator model) I will also rank it very helpful because ah according to the question HBr the spectrum is supposed to be equidistance from the original rotator model umm like the equation of it but actually it's not. So there must be something like to the model itself might not be exactly precise so there are some anharmonic terms which can also be added in to describe it. So we need to know that…”
Gwen correctly states that rigid rotators have spectral features with equidistant spacings. Then, she reasons that since the actual spectrum is stated to have non-equidistant spacings, a more complex model is required. Finally, she concludes that anharmonic terms should be added to account for the original model being imprecise. It is the final conclusion which illustrates the surface-level feature of complex = better. Much of Gwen's response is normative, but she eventually falls back on the idea that the more complex anharmonic oscillator model should be able to account for the microwave spectrum of HBr.
To investigate RQ3 (are students’ abilities to select appropriate models or model components correlated with the strength of their conceptual understanding of quantum chemistry as a whole?) we administered the Quantum Chemistry Concept Inventory (QCCI)—an instrument for measuring conceptual understanding of upper-division quantum chemistry students (Dick-Perez et al., 2016). The QCCI is a 13-item multiple-choice diagnostic assessment tool, which Dick-Perez et al. administered to 140 quantum chemistry students in the United States and Canada as a post-test (i.e. post instruction). Each multiple-choice question has only one correct, best answer. The question topics include approximations, bonding, the harmonic oscillator, and spectroscopy, among others. Given that the QCCI is a multiple-choice, and therefore quantitative, instrument, we decided to compare the degree to which student participants deviated from the expert responses on the model rating task to their overall scores on the QCCI.
One way to view the model rating task data simply, in aggregate, is to dichotomize the results of both the student and expert participants (ratings of 1 and 2 were grouped into a general “unhelpful” category, while ratings of 3 and 4 were grouped into a general “helpful” category). For the purposes of this study, the scientifically normative response (helpful or unhelpful) for each model or model component was the response given by the majority of the four experts and is referred to as the “expert response”. There were three instances where there was no consensus among the experts (i.e. the expert positions were split, with two answering helpful and two answering unhelpful); in those cases, no “expert response” was established.
Fig. 3 represents dichotomized responses for the student participants (novices).
The plots in Fig. 3 give a quantitative overview of responses to items, and certain problematic novice responses detailed in the qualitative results of the previous section can be readily observed. For example, a relatively high number of student participants characterized spherical harmonics as “helpful” in the first problem, and significant numbers characterized the anharmonic oscillator and harmonic oscillator models as helpful in items 3 and 6, respectively.
The dichotomized representation of the data provides a relatively simple means of comparing, across the board, how student participants performed on the model rating task. We quantified this performance as a “discrepancy” score, calculated as follows:
• The novice and expert responses, in dichotomized form, were placed side by side in a spreadsheet.
• If the responses matched (i.e. the novice response matched the expert response), a score of “0” was assigned, indicating that there was no discrepancy between novice and expert on that particular model/model component for that item.
• If the responses did not match, a score of “−1” was assigned, indicating that there was a discrepancy between novice and expert on that particular model/model component for that item.
• The scores, 0's and −1's, were summed for each student participant across all models/model components and all items on the rating task.
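The scoring procedure above can be expressed compactly. The sketch below is illustrative only (the function names and toy data are invented, not the authors' actual analysis script):

```python
# Sketch of the discrepancy scoring: ratings of 1-2 dichotomize to
# "unhelpful", 3-4 to "helpful"; each novice-expert mismatch adds -1.

def dichotomize(rating):
    """Map a 1-4 rating onto the 'unhelpful'/'helpful' dichotomy."""
    return "helpful" if rating >= 3 else "unhelpful"

def discrepancy_score(novice_ratings, expert_responses):
    """Sum of -1's over items where the dichotomized novice rating
    differs from the expert consensus; items with no expert consensus
    (represented here as None) are skipped."""
    score = 0
    for rating, expert in zip(novice_ratings, expert_responses):
        if expert is None:  # no expert consensus: item excluded
            continue
        if dichotomize(rating) != expert:
            score -= 1
    return score

# Toy example: three items, one mismatch, so the score is -1.
print(discrepancy_score([4, 1, 2], ["helpful", "helpful", "unhelpful"]))
```

A participant who disagreed with the experts on every consensus item would score −29, the floor noted below.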
These discrepancies provide us with a simple, yet descriptive, representation of how far from the expert position, as a whole, participants were in their assignment of “helpful” or “unhelpful” ratings to the models and model components.
Table 4 provides descriptive statistics of the model rating task discrepancies and the QCCI scores.
| | Rating task discrepancy | QCCI score |
| --- | --- | --- |
| Average | −8.7 | 7.8 |
| Standard deviation | 3.2 | 2.7 |
The range of possible discrepancy values was 0 (perfect agreement with expert positions) to −29 (opposite of expert position on each part). The actual range of student responses was −3 to −15 with an average of −8.7. To provide a reference point, the expert responses were coded for discrepancies against the majority position in the same manner as the student responses. The four experts each had a discrepancy of either −1 or −2 with an average discrepancy of −1.8.
The student discrepancy data can be analyzed at a finer grain by categorizing the types of discrepancies. There was an expert response for 29 models on the rating task; of those, 11 were classified as helpful and 18 as unhelpful. Thus, assuming that students are equally likely to disagree with experts on helpful or unhelpful models, we would expect approximately 38% (11/29) of student discrepancies to be of the type where the student answered that the model was unhelpful when the experts rated it as helpful. Conversely, approximately 62% (18/29) would be expected to be of the other type: students answered helpful and experts answered unhelpful. The student data bears out this simple assumption: 35% of all discrepancies were of the former type and 65% of the latter. Without more rigorous statistical analysis it isn’t possible to make a definitive statement, but students do appear to be equally likely to disagree with experts in the helpful and unhelpful directions. This was initially surprising to the research team, as, anecdotally, it seemed as if students were very eager to rate a model as being helpful. This researcher bias was likely due to a strong focus on a few key models that were intentionally included as unhelpful models.
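The expected split follows from simple proportions; a minimal check of the arithmetic:

```python
# Expected split of discrepancy types, assuming students are equally
# likely to disagree with experts on any given consensus model.
helpful_models = 11
unhelpful_models = 18
total = helpful_models + unhelpful_models  # 29 models with an expert consensus

expected_miss_helpful = helpful_models / total      # students say unhelpful, experts helpful
expected_miss_unhelpful = unhelpful_models / total  # students say helpful, experts unhelpful

print(f"{expected_miss_helpful:.0%} expected vs 35% observed")
print(f"{expected_miss_unhelpful:.0%} expected vs 65% observed")
```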
The discrepancies, taken alone, do not provide us much further insight with respect to our third research question, which aims to determine if there is a relationship between students’ conceptual knowledge in the domain of quantum chemistry and their abilities to choose appropriate models or model components to address problems in this domain. For this reason, we collected student responses to the Quantum Chemistry Concept Inventory (QCCI)—which, as stated earlier, is an instrument that assesses students’ conceptual understanding of quantum chemistry.
At first glance, the mean QCCI score of our student participant population (cf. Table 4) is slightly above the 6.9 ± 2.7 post-term score reported by Dick-Perez et al. in their original paper about the QCCI (Dick-Perez et al., 2016). However, the difference between the two means is not statistically significant (two-tailed t-test, p = 0.16). Thus, the performance of our cohort of students on this particular task compares reasonably well to that of the broader sample of Dick-Perez and coworkers.
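The mechanics of this comparison can be sketched from the summary statistics alone. Our cohort's size is not restated in this section, so n1 = 26 below is a placeholder assumption (with the actual enrolment the paper reports p = 0.16); the reference figures (n = 140, mean 6.9, sd 2.7) are from Dick-Perez et al. (2016):

```python
import math

def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    """Pooled two-sample t statistic from summary statistics."""
    sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (mean1 - mean2) / se

# n1 = 26 is an illustrative assumption, NOT the reported cohort size.
t = pooled_t(7.8, 2.7, 26, 6.9, 2.7, 140)

# With well over 100 degrees of freedom the t distribution is close to
# normal, so a normal-based two-tailed p is an adequate approximation.
p_approx = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
print(f"t = {t:.2f}, approximate two-tailed p = {p_approx:.2f}")
```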
These statistics provide a simple overview of the data and lead us, tentatively, to conclude that, despite moderately strong conceptual knowledge of quantum chemistry, students struggle to a great extent when it comes to choosing appropriate models or model components to approach quantum chemical problems. Fig. 4 illustrates the lack of correlation between rating task and QCCI scores.
Fig. 4 QCCI vs. rating task discrepancy scores. Some of the student participants quoted in the manuscript are indicated to give a sense of the sampling of representative quotes.
Note, in Fig. 4, that there is no correlation between the rating task discrepancies and the QCCI scores (r2 = 0.02). Also, the range of rating task discrepancies for a given QCCI score is quite large. For instance, four participants attained a near-perfect (12 out of 13) score on the QCCI. Among these participants, however, the discrepancies on the rating task ranged from −6 (near the best) down to −13 (near the worst). These results support the notion that strong conceptual knowledge within a particular domain does not necessarily imply a robust understanding of how to choose and apply appropriate models to solve a problem.
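The r2 value is a straightforward computation; below is a minimal sketch, with invented (discrepancy, QCCI) pairs standing in for the actual study data:

```python
# Coefficient of determination (r^2) for paired scores, computed from
# first principles (equivalent to the square of Pearson's r).
def r_squared(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return (sxy * sxy) / (sxx * syy)

# Hypothetical pairs illustrating a weak correlation; the actual study
# data gave r^2 = 0.02.
discrepancies = [-6, -13, -8, -11, -9, -7]
qcci = [12, 12, 7, 8, 5, 6]
print(f"r^2 = {r_squared(discrepancies, qcci):.2f}")
```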
While the work of Chi et al. (Chi et al., 1981; Slotta et al., 1995) focuses on how students categorize knowledge about basic classical physics, we can readily extend the explanatory power of their perspective to our work. That is, we wish to make the claim that the model conflation we observe in our data can be explained as a miscategorization of models within the quantum chemical context.
In order to facilitate the use of this framework, we present a basic diagram that lays out part of the knowledge structure of quantum chemistry at the undergraduate level in Fig. 5.
Fig. 5 Hierarchical diagram of the domain of quantum chemistry, as it is typically taught at the undergraduate level. Select models and model components have been inserted for illustrative purposes.
The information contained within Fig. 5 presents the domain of quantum chemistry as an area of study that centers on the application of quantum mechanics to atoms and molecules. It also provides the broad categories into which models (and their components) can, ultimately, be placed and provides some relevant models and model components.
Upon careful examination of Fig. 5, one can begin to notice where students can potentially experience difficulties with the appropriate categorization of models or model components and, thus, their scientifically normative application. To further illustrate this, we will briefly recap some of the findings from the H-atom protocol and the model rating task think-aloud interviews with this perspective.
With the H-atom protocol, participants at times spoke quite candidly about their difficulty categorizing and organizing the models and model components. We presented representative quotes that were coded as “explicit conflation” and illustrated participants “grasping at straws” (i.e. invoking the model they could best remember which, in one prominent case, was the particle-in-a-box model). These findings fit neatly into an interpretation that is based on miscategorization of models and model components. In the case of the H-atom protocol, this manifests itself in a manner that renders the participants unable to choose, or select, the appropriate models and model components to successfully predict and explain the behavior of the hydrogen atom.
In the case of the model rating task, which was explicitly created to investigate whether and how participants choose appropriate models and model components, the applicability of the framework of Chi et al. and Slotta et al. to the data is even more apparent. In response to Item 1 (vibration), a sizable portion (42.3%) of participants chose spherical harmonics as an appropriate model component, ostensibly due to the presence of the term “harmonics.” In addition to being a prime example of fixation on a lexical surface feature, this is a clear indication of miscategorization. While the participants would be right to focus on simple harmonic behavior, their failure to attend to the full model component, “spherical harmonics,” causes them to mistakenly ascribe utility to this model component as a tool to help predict and explain molecular vibration. The participants have categorized their knowledge based on this lexical surface feature (the term “harmonics”) rather than on the appropriateness of the model component for the type of motion under consideration.
In Item 3 (microwave spectroscopy), responses tended to be problematic with respect to conflation of vibrational and rotational motion. In fact, a rather large majority (76%) of participants rated the anharmonic oscillator as helpful (see Fig. 3). Participants, in general, recognized that some sort of perturbation to an ideal system was necessary to account for the uneven spacing. However, they did not make a connection between the type of motion under consideration and, therefore, the appropriate model or model component to explain the uneven spacing. This can readily be cast as a miscategorization in the following manner: the type of internal motion was initially assumed to be vibrational rather than rotational, and the anharmonic oscillator model was conveniently invoked in place of the centrifugal distortion term. The type of motion under consideration has been miscategorized altogether.
For Item 6 (hexatriene) on the rating task, nearly a third of student participants (31%) rated the harmonic oscillator as “helpful.” This constitutes a clear miscategorization within the ontological framework of basic quantum chemistry: the system that is under consideration is a linear conjugated polyene; thus, a one-dimensional particle-in-a-box model would be appropriate to describe the pi electronic structure of the molecule. Instead, this fraction of student participants assigned weight to a model that is specifically designed to predict and explain vibrational motion of molecules (in the quantum chemistry context). Thus, this is a clear example of invoking a model whose target physical system is fundamentally different from the system under study.
Our findings are, ultimately, consistent with what Slotta, Chi and coworkers have found in the context of basic physics both with respect to the problem of ontology and the novice-expert divide in knowledge organization. Novices tend to miscategorize models and model components insofar as how they predict and explain the physicochemical system or phenomenon under study. Novices tend to rely on, and approach problems based on, surface features, contributing to the difficulties in coupling models/model components with a given phenomenon. Experts, meanwhile, are able to almost immediately identify the type of phenomenon under study, categorize it, and assign high utility to models and/or model components that are appropriate for the system at hand.
1. Our sample size is relatively small.
2. Psychometric analyses have not been conducted on the model rating task in a manner similar to the QCCI.
With regard to the first limitation, it is worth noting that upper-division undergraduate courses in chemistry do not have population sizes on the same order of magnitude as lower-division (e.g. general chemistry or organic chemistry) courses. Despite this, we have collected a relatively large amount of rich qualitative data and have reported several emergent themes, and an interpretation of these themes, in the sections above. The themes were remarkably consistent across multiple institutions, which is an indication of the reliability of our approach. Furthermore, the majority of our quantitative data (e.g. frequencies of unhelpful vs. helpful responses to each model or model component in the model rating task) was reported to support the qualitative results based on the think-aloud interviews. The exception to this was in our correlation of the rating task and the QCCI instrument, which naturally leads to our second limitation.
The model rating task was principally designed to gain deeper qualitative insight into how students assign priority to various models and model components in the context of quantum chemistry. Descriptive values were compiled and used to supplement the qualitative analysis. At the same time, we calculated a total score in the form of a total discrepancy from expert consensus, and we used these values to determine whether this deviation correlates with student participant scores on the QCCI. Despite the disparity between the psychometric work performed on the QCCI and the lack thereof for the model rating task, we propose that the wealth of qualitative data we have collected about the model rating task supports the face validity and, to some extent, the content validity of its results. That is, it serves as an adequate means of assessing students’ abilities to choose models or model components that are appropriate for physicochemical systems and phenomena.
1. Develop protocols for other physicochemical systems (e.g. vibrating or rotating molecules) and conduct think-aloud interviews to investigate how students engage in the modeling process in these contexts.
2. Use a much larger sample size and psychometrics to rigorously validate, and improve, the model rating task.
We expect that these two approaches will contribute further to our small, but growing, literature base for student learning in upper-division chemistry environments. We are currently collecting data on a harmonic oscillator protocol, which is constructed in a similar fashion to the H-atom protocol from the current work.
a. Using your resource sheet, use sketches, diagrams, and mathematical formulae to construct a model of the interaction between the electron and proton in the hydrogen atom.
b. Should the electron fall into the nucleus? Why or why not? Justify whether or not this has been taken into account in your model.
c. How do potential and kinetic energy play a role in your model (if at all)? Explain.
d. Address the question: “How far away from the nucleus is the electron in the hydrogen atom?” Draw upon your model to support your response.
e. Using your model, explain the following emission spectrum obtained for a sample of hydrogen:
f. Kwame and Jean, upon observing the data presented above, disagree about the nature of the specific lines observed in the hydrogen emission spectrum. Kwame suggests that the individual lines arise from the emission of a photon coinciding with an electron transitioning from a higher energy state to a lower energy state. Jean argues that the lines occur due to the emission of an electron as the atom moves from a higher energy state to a lower energy state. Whose response do you most agree with? Use what you have developed in parts a and b to support your response.
g. Kenya and Jose all (sic) examine the following images representing the 2p orbital of the hydrogen-like atom.
• Jose argues that both images represent the electron occupying an orbital. The image on the left shows exactly the shape of the orbital as a “container” of the electron. The image on the right shows how the electron bounces around in that container and how often it makes impact with the outer boundaries of the orbital.
• Kenya disagrees and suggests that the orbital is the square of a wave function that describes the behavior of the electron, and represents a probability density. The image on the left shows an isosurface of the probability density function, and contains some percentage of the likelihood of finding the electron. The image on the right is a dot density diagram, in which the density of the dots represents the likelihood of finding the electron.
Whose response do you most agree with? Use your model to support your response, as well as the images the three students looked at to make their claims.
h. What (if anything) about your model would need to change if you were to add additional nucleons and an electron to make a helium atom? Explain your answer.
i. The plots on the left and right are the 1s orbital probability density and radial probability distribution functions, respectively, vs. the distance of the electron from the nucleus (r). Use your model and the information contained within both plots to explain the physical significance of the Bohr radius (a0 = 5.29 × 10−11 m or 0.529 Å).
Useful physical constants and additional information:
ħ = 1.055 × 10−34 m2 kg s−1
c = 3.0 × 108 m s−1
RH = 1.097 × 107 m−1
a0 = 5.29 × 10−11 m or 0.529 Å
1. Very unhelpful for solving the problem
2. Unhelpful for solving the problem
3. Helpful for solving the problem
4. Very helpful for solving the problem
1. Atoms in a chemical bond vibrate around the equilibrium bond length. Show that the average displacement from the equilibrium bond length of a chemical bond equals zero and is independent of vibrational energy.
a. Spherical harmonics _____
b. Rigid rotator model _____
c. Harmonic oscillator model _____
d. Morse potential energy curve _____
2. Calculate the most probable radius for an electron in the 2s state of a He+ ion.
a. Hydrogen atom model _____
b. Associated Laguerre polynomials _____
c. Variational principle _____
d. Slater determinant _____
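The answer to item 2 can be checked numerically. The sketch below (not part of the original instrument) assumes the standard hydrogenic 2s radial function, R2,0(r) ∝ (2 − Zr/a0)e−Zr/2a0, and locates the outer maximum of the radial probability distribution by a simple grid search in units of a0:

```python
import math

Z = 2  # He+ is hydrogen-like with nuclear charge 2

def p_2s(r):
    """Radial probability distribution r^2 * R_{2,0}(r)^2 (unnormalized),
    with r measured in units of the Bohr radius a0."""
    rho = Z * r
    return r * r * (2 - rho) ** 2 * math.exp(-rho)

# Grid search over 0.001 a0 ... 10 a0; the outer maximum is the global one.
rs = [i * 0.001 for i in range(1, 10001)]
r_mp = max(rs, key=p_2s)               # most probable radius, units of a0
print(f"r_mp = {r_mp:.3f} a0")          # analytic result: (3 + sqrt(5))/Z a0
print(f"r_mp = {r_mp * 0.529:.2f} angstrom")  # a0 = 0.529 A (resource sheet)
```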
3. The “spacings” between lines in a pure microwave spectrum of HBr are not equidistant. Why?
a. Rigid rotator model _____
b. Anharmonic oscillator model _____
c. Centrifugal distortion term _____
d. Spin selection rules _____
4. Calculate the Hückel pi-electron energies of benzene.
a. Slater determinant _____
b. Secular determinant _____
c. Harmonic oscillator model _____
d. LUMO _____
5. Consider the following plot of the molecular potential energy curve for the H2+ molecular ion, along with the Coulomb (J) and exchange (K) contributions.
Which of the contributions (J or K) is responsible for the stability of the chemical bond?
a. Morse potential energy curve _____
b. Radial probability distribution functions _____
c. Linear variational principle _____
d. Overlap integrals _____
6. The length of hexatriene can be estimated to be 867 pm. Show that the first electronic transition is predicted to occur at 2.8 × 104 cm−1.
a. Particle in a box model _____
b. Particle on a ring model _____
c. Harmonic oscillator model _____
d. Free particle model _____
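The value stated in item 6 can be verified with a short one-dimensional particle-in-a-box calculation (a sketch, not part of the original instrument): hexatriene's six pi electrons fill n = 1 through 3, so the first transition is n = 3 → 4.

```python
# Particle-in-a-box check of item 6, using standard constant values.
h = 6.626e-34     # Planck constant, J s
m_e = 9.109e-31   # electron mass, kg
c = 3.0e10        # speed of light, cm/s (for wavenumbers in cm^-1)
L = 867e-12       # box length, m (estimated length of hexatriene)

# E_n = n^2 h^2 / (8 m L^2); first transition is HOMO (n=3) -> LUMO (n=4).
delta_E = (4**2 - 3**2) * h**2 / (8 * m_e * L**2)  # J
wavenumber = delta_E / (h * c)                      # cm^-1
print(f"{wavenumber:.1e} cm^-1")                    # close to 2.8e4 cm^-1
```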
7. In a sentence or two explain why the 2s orbital in this figure has a light ring near its center whereas the 1s orbital does not.
Figure caption: representations of the 1s and 2s hydrogenic atomic orbitals in terms of their electron densities (as represented by the density of shading).
a. Radial distribution functions: _____
b. Hydrogenic wavefunction: _____
c. Particle in a box: _____
d. Harmonic oscillator energies: _____
8. Can the position and total angular momentum of any electron be measured simultaneously to arbitrary precision?
a. Born interpretation of the wavefunction _____
b. Normalization condition _____
c. Heisenberg uncertainty principle _____
d. Free particle model _____
Note: Items 1 and 6, and the image for item 7 are adapted from Atkins (Atkins and De Paula, 2009). The image from item 5 is adapted from McQuarrie and Simon (McQuarrie and Simon, 1997).
Anharmonic oscillator model: V(l) = D(1 − e−β(l−l0))2, (Image of Morse potential energy curve, from figure 5.6 on p. 165 of McQuarrie and Simon, 1997, was inserted here.)
Born interpretation of the wavefunction: p(q) = ψ(q)* ψ(q), where q is a set of coordinates
Centrifugal distortion term: Erot(J) = BJ(J + 1) − DJ2(J + 1)2, where B is the rotational constant and D is the centrifugal distortion constant
Harmonic oscillator model: V(l) = (1/2)k(l − l0)2, (Image of harmonic oscillator potential energy curve with v = 0 through v = 4 energy levels superimposed, from figure 5.7 on p. 167 of McQuarrie and Simon, 1997, was inserted here.)
Heisenberg uncertainty principle: ΔxΔp ≥ ħ/2
Hydrogenic wavefunction: ψn,l,m(r,θ,ϕ) = Rn,l(r)Yl,m(θ,ϕ)
LUMO: Lowest Unoccupied Molecular Orbital
Morse potential energy curve: (Image of Morse potential energy curve, from figure 5.6 on p. 165 of McQuarrie and Simon, 1997, was inserted here.)
Particle in a box model: En = n2h2/8ma2, n = 1, 2, …, (Image of 1-D PIB potential, from figure 3.1 on p. 80 of McQuarrie and Simon, 1997, was inserted here.)
Radial distribution functions: Pr(r) = r2Rn,l2(r)
Rigid rotator model: Erot(J) = BJ(J + 1), (Image of simple, 2-body rigid rotator schematic from figure 5.9 on p. 174 of McQuarrie and Simon, 1997, was inserted here.)
Spherical harmonics: Yl,m(θ,ϕ)
Spin selection rules: ΔS = 0, where S is the total spin angular momentum.
| | H only | RI only | RNI only | Q only | H & Q | RI & Q | RNI & Q | R (any) & Q |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LRU | 4 | 2 | 3 | 5 | 0 | 8 | 5 | 13 |
| SLA | 6 | 0 | 0 | 4 | 0 | 4 | 0 | 4 |
| R1U | 0 | 0 | 0 | 5 | 3 | 4 | 0 | 4 |
| Total | 10 | 2 | 3 | 14 | 3 | 16 | 5 | 21 |
This journal is © The Royal Society of Chemistry 2018