S. J. R. Hansen,*a B. Hu,a D. Riedlova,a R. M. Kelly,b S. Akaygun c and A. Villalta-Cerdas d
aDepartment of Chemistry, Columbia University, New York, NY 10027, USA. E-mail: sjh2115@columbia.edu
bChemistry Department, San José State University, San José, CA 95192, USA
cDepartment of Mathematics and Science Education, Bogazici University, Istanbul 34342, Turkey
dDepartment of Chemistry, Sam Houston State University, Huntsville, TX 77340, USA
First published on 3rd July 2019
This mixed methods study uses eye tracking and qualitative analysis to investigate the impact of animation variation and visual feedback on the critique of submicroscopic representations of experimental phenomena. Undergraduate general chemistry students first viewed an experimental video of a precipitation or oxidation–reduction (redox) reaction. Next, they viewed the corresponding animations and were shown a visualization of where they had looked. Critique of the chemically relevant features in the animations and viewing patterns were monitored using participant-generated drawings, verbal responses, graphic organizers, and eye tracking. Viewing and critique of chemically relevant features were found to increase after engaging with structured animations and visual feedback. Findings from this study support the use of structured variations and visual feedback in developing critical consumers of visual information, empowering students to describe and develop their understanding of chemical phenomena and to become more purposeful visual consumers.
“In academic discourse knowing why a wrong answer is wrong can be just as important as knowing why the right answer is right.” (Henderson, 2015)
However, there is increasing evidence that contrasting incorrect and correct examples can support learning outcomes. In an example from mathematics education, viewers saw incorrect features based on misconceptions, which allowed them to contrast substantive differences between the examples shown (Große and Renkl, 2007). In a study of decimal magnitude, Durkin and Rittle-Johnson (2012) found incorrect examples helped students focus on the concepts and differentiate between key features, resulting in more discussion of the concept being studied.
Henderson et al. (2015) found that the majority of students can engage in counter-critique (selecting a wrong image and explaining why it is wrong), arguing that critique needs to play a central role in science education. By making learning visible and providing students with justifications for correct ideas, we also want them to know why incorrect ideas are wrong. Students presented with alternative ideas gain an opportunity to respond to counter-arguments and rebut those alternative ideas. Henderson argues that this shift towards a more student-centered learning environment can increase motivation and self-regulated learning, and more accurately portray the practice of science. Argumentation and critique provide integral epistemic activities needed for the construction of new knowledge (Billing, 1996). Throughout this study, participants were never told which animation was most accurate, nor were they told how they should look. A key aspect of this study is allowing participants space to decide (and hopefully question) what they believe and what they should view.
Instructional approaches that provide opportunities for scientific argumentation are needed; Faize et al. (2017) suggest asking students to provide reasoning for their claims to justify their understanding of the concept. Critical evaluation of scientific claims helps develop critical consumers of science and technology information (NRC, 2012), and extending argumentation and critique to visual representations offers an opportunity to develop critical consumers of visual information. Rickey and Stacy (2000) argue that more research is needed on how to employ metacognitive monitoring effectively in lecture situations. With animations playing a critical role in learning science and chemistry (Kelly and Jones, 2008), investigating the impact of incorrect chemistry animations holds the potential to greatly benefit students. Previous studies by our research team investigated how students link experimental evidence to submicroscopic phenomena and revise their drawings after watching animations with structured variations (animations-in-variation). Critique was found to be a powerful teaching tool, giving students an opportunity to recall prior knowledge, evaluate evidence, and develop a desire to know the correct answer (Kelly et al., 2017).
The animations used in this study are designed for students to engage with the key features of varied animations, providing opportunities for them to link experimental observations with molecular phenomena. Previous studies with these animations found students view more features than they mention (Kelly and Hansen, 2017); therefore, eye tracking is a critical tool for identifying the animation features students attend to visually. Eye tracking has been used in a variety of chemistry education research studies and has been found to complement other forms of data collection (VandenPlas et al., 2018).
This study was designed to encourage participant engagement with the animations through the lens of higher-order learning, specifically describing, explaining, arguing, and critiquing (Ohlsson, 1995). Participants were asked to explain their understanding of the phenomena, provide an argument to support their rating of feature accuracy, and critique the varied features as well as their own viewing pattern. A mixed methods approach is key when studying argumentation because the qualitative analysis provides insight into the quantitative mechanisms (Schwarz et al., 2003). For this study, the quantitative mechanisms being investigated are critique of animation features and viewing feedback.
In variation theory, critical features are the features students become aware of and notice while ignoring others. Contrast “allows the learner to create meaning for an object or feature by defining it against things that are different from it.” Separation of features allows for focus, varying one feature while holding the others constant (Bussey et al., 2013). These critical features are objects that distinguish different ways of thinking (Runesson and Mok, 2004), and the animations-in-variation were designed with critical features based on common student misconceptions (Guo et al., 2012; Kelly and Hansen, 2017; Kelly et al., 2017) while stylistic features (color, size of atoms/ions, scale) were held constant. In this study, we call these varied features chemically relevant features; they were identified in previous studies of student misconceptions and build on prior work from this group (Kelly, 2014, 2017; Kelly et al., 2017). Researcher Kelly developed these animations-in-variation as groups of animations that pair a more scientifically accurate animation with two animations representing common misconceptions about the reaction phenomena. The features varied between the animations are referred to as structured variations or chemically relevant features in this paper.
• Independent variables:
∘ Structured variations (animations-in-variation).
∘ Visual feedback (eye-tracking feedback).
• Dependent variables:
∘ Visual attention (from eye tracking).
∘ Critique of animation (from written and verbal responses).
The study took place in a clinical setting with ambient light and minimal distractions. All visual stimuli were displayed on a laptop connected to a SensoMotoric Instruments (SMI) REDn 60 Hz eye tracker. Participant responses were recorded using the iPad application Vittle, which simultaneously records audio and written responses as well as any erasing and changes made. Data collection and analysis were conducted using the SMI iView X screen-based mounting set-up on an integrated monitor. The SMI software BeGaze 3.7 and Experiment Center 3.7 were used for data collection and analysis. All data were collected using a binocular setting at the default settings, with participants positioned between 60 and 80 cm from the monitor. A five-point calibration was employed, and X/Y eye deviation was accepted at 0.20° and below (or a note was added by the researcher indicating calibration deviation above this threshold). All data were collected under a pseudonym, and no link between individual participants and their pseudonyms was maintained.
Eye tracking can provide insight into how decisions are made (Ryan et al., 2017; Khedher et al., 2017), and it is well established in the study of how individuals engage with visual stimuli (Holmqvist et al., 2015). More recently, eye tracking has become an established tool in chemistry education research (VandenPlas et al., 2018). The eye-tracking system allows the participant to sit comfortably and move their head freely. To help set the participant at ease, interviews were conducted by an undergraduate student researcher who had been trained by and worked closely with researcher Hansen. Although participants knew they were being eye-tracked, awareness of being eye-tracked has been found to influence viewing only of less socially acceptable objects, none of which were included in this study (Risko and Kingstone, 2011).
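The per-species viewing measures reported later (percent dwell time on each animation feature) can be derived from exported fixation data. The sketch below is a minimal illustration, assuming a generic CSV of fixations with x/y coordinates and durations, and hand-defined rectangular areas of interest (AOIs); the column names, AOI boundaries, and file layout are hypothetical and do not represent the actual SMI BeGaze export format.

```python
import csv
from collections import defaultdict

# Hypothetical rectangular AOIs (x_min, y_min, x_max, y_max) in screen pixels,
# one per chemically relevant species in a single animation frame.
AOIS = {
    "sodium":   (100, 200, 300, 400),
    "chloride": (350, 200, 550, 400),
    "nitrate":  (600, 200, 800, 400),
}

def dwell_percentages(fixation_csv):
    """Return the percent of total fixation duration spent in each AOI."""
    totals = defaultdict(float)  # seconds of fixation per AOI
    grand_total = 0.0
    with open(fixation_csv, newline="") as f:
        for row in csv.DictReader(f):  # assumed columns: x, y, duration_ms
            x, y = float(row["x"]), float(row["y"])
            dur = float(row["duration_ms"]) / 1000.0
            grand_total += dur
            for name, (x0, y0, x1, y1) in AOIS.items():
                if x0 <= x <= x1 and y0 <= y <= y1:
                    totals[name] += dur
                    break
    if grand_total == 0:
        return {name: 0.0 for name in AOIS}
    return {name: 100.0 * totals[name] / grand_total for name in AOIS}

# Example (hypothetical file name):
# print(dwell_percentages("participant_36_animation_B.csv"))
```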
One major theme in the transcripts was a lack of understanding of conductivity, as participants struggled to understand what the conductivity meter measures and the role of free ions in solution. Throughout the pilot study and subsequent phases, participants were never told which animation was most accurate, nor were they told how they should look. A key aspect of this study is allowing participants space to decide (and hopefully question) what they believe and what they should view.
Fig. 1 Overview of the experiment with associated instruments: an experimental video, viewing structured animations, and visual feedback.
Between each part, participants were asked to document their understanding of the chemistry phenomena through drawings, completing graphic organizers, and/or responding to structured questions. After calibrating the eye tracker, participants viewed a macro-scale experimental video detailing a redox reaction between solid copper and aqueous silver nitrate. After viewing the experimental video, participants were asked to draw and then describe what was happening at the submicroscopic level during the experiment. The open-ended drawing prompt, provided on an iPad, offered insight into the participant's understanding of the particulate-level phenomena (Nyachwaya et al., 2011). Three animations-in-variation were then shown: two that varied chemically relevant features based on previously studied misconceptions and a third, scientifically accurate animation. While viewing a static composite image of the animations, participants completed an accurate/inaccurate grid rating each animation against the experimental evidence from the macro-scale video. A short break allowed an opportunity to edit the initial drawing; participants then critiqued their eye-tracking pattern and viewed the static composite image a final time. To increase the opportunities for reflection, the redox trials included a second accurate/inaccurate grid rating each animation.
Each reaction was depicted in a separate drawing prompt. The drawing prompt for solution 3 (Fig. 3) showed the formation of a precipitate while the drawing prompt for solution 5 showed no visible reaction after the solutions were mixed. Each experiment included conditions where a reaction occurred and at least one condition where no reaction occurred.
Solid copper(II) nitrate is added to one beaker and solid silver nitrate to another; the third beaker contains only water. The conductivity meter returns, and the copper and silver solutions are shown to register higher conductivity readings than the water alone. After conductivity readings are taken from the initial solutions, the video shows the addition of copper wire to each of the three beakers. After 13 minutes shown at increased speed, the wire in the silver solution has visible deposition while the wires in the other two solutions remain unchanged (Fig. 4). The wire is removed from each solution, final conductivity readings are taken, and the video zooms in on the wire with deposited metal, showing a silver/black solid that can be brushed off. Only the copper wire reacting with silver nitrate was featured in the animations-in-variation. The redox drawing prompt included a start and end image of the reaction depicted in the experimental video.
During the experimental video for solution 5, the mixing of these two solutions did not result in the formation of a precipitate. Unmixed solutions at the start are shown in two separate spaces divided by a bar; during the animation the bar lifts, allowing the solutions to mix. Animation A is the most scientifically accurate representation, showing free ions at the start and end of the mixing. Animation B inaccurately depicts the starting aqueous ions as paired up: copper(II) nitrate is shown with two nitrates paired with a copper 2+ ion, and the aqueous sodium chloride is shown as a free-floating solid lattice, two sodium and two chloride ions all lumped together. After mixing, this animation shows no change at all to the depiction of the species in solution. Animation C begins similarly to animation B (all species are paired up); after mixing, free-floating molecular solids are formed and then dissociate, resulting in free ions at the end of the animation. Because this animation has a middle state, there is an additional frame for animation C.
These features link to the experimental evidence in the macro-scale video shown before the animations. Specifically, the conductivity evidence can be explained by free ions in aqueous solution, the color change of the silver solution can be explained by the presence of free copper 2+ ions, and the deposition of metal onto the wire can be explained by the formation of solid silver. Animation C (the bottom images) is the most scientifically accurate, while animations A and B are less scientifically accurate in their depiction of the ions in solution and the solid deposited. Animation A is less chemically accurate in that the aqueous species are depicted as molecular in nature. Animation B also shows aqueous molecular species and adds the inaccuracy of nitrate depositing as a solid along with silver atoms. After watching the animations, participants were given an accurate/inaccurate grid (Fig. 7) and asked to select the most accurate animation. The precipitation reaction tool focuses participants' critique of animation accuracy on four aspects: the species ratios, free ions, the representation of the solid, and whether or not a reaction occurred. The redox grid focuses on: deposition of silver solid, solution color change, and conductivity evidence.
Fig. 7 The accurate/inaccurate grid for the precipitation reactions on top with the grid for the redox reaction on the bottom. Each was phrased to minimize the reliance on chemistry vocabulary.
| Reaction | Feature compared | t | df | Sig. (2-tailed) | Mean difference |
|---|---|---|---|---|---|
| Solution 3 | Nitrate AB | 2.50 | 10 | 0.021 | 17.8 |
| Solution 3 | Sodium BC | 7.42 | 11 | <0.00001 | 39.1 |
| Solution 5 | Sodium AB | 1.12 | 9 | 0.28 | 5.6 |
| Solution 5 | Chloride BC | 1.63 | 9 | 0.12 | 7.4 |
| Redox | Nitrate AB | 1.61 | 13 | 0.12 | 8.5 |
| Redox | Water BC | 4.57 | 13 | 0.00011 | 19.5 |
| Solution 3 | Inaccurate | 3.38 | 9 | 0.0033 | 17.7 |
| Solution 5 | Inaccurate | 0.67 | 9 | 0.51 | 4.43 |
| Redox | Inaccurate | 4.31 | 11 | 0.00028 | 18.01 |
Because no species are chemically equivalent, we feel the most meaningful comparison is the change in viewing of that particular species rather than a comparison between species. Each species plays a unique role in each redox/precipitation reaction. Additionally, the species in the redox animations are not on the screen for the same amount of time, nor do they occupy equal space on the screen. This is why multiple t-tests were selected as the method of analysis: there are only two groups being compared at any time (the before viewing and the after viewing). Statistically significant changes in average viewing were observed for the gain in viewing of nitrate (gain = 17.8) between animations A and B; t(10) = 2.50, p = 0.021. Viewing of the sodium ions between animations B and C in solution 3 also increased significantly (gain = 39.1); t(11) = 7.42, p < 0.00001. This change in viewing occurs after the sodium inaccurately pairs with nitrate in animation B and then accurately remains a free ion in animation C. The variation in sodium behavior resulted in a significant increase in viewing of this species.
In the redox animations, statistically significant changes were observed between animations B and C, with increased viewing of water molecules (gain = 19.5); t(13) = 4.57, p = 0.00011. Animation C is the most scientifically accurate redox animation, with water molecules interacting with the free ions in solution, a feature that was varied between animations B and C. When viewing the static images, a statistically significant change was observed for the composite image of the solution 3 precipitation reaction (gain = 17.7); t(9) = 3.38, p = 0.0033. A significant gain was also observed after visual feedback for the inaccurate animations in the redox reaction (gain = 18.01); t(11) = 4.31, p = 0.00028.
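As an illustration of the before/after comparisons reported above, the sketch below runs a paired-samples t-test on per-participant dwell percentages for one species across two animations. It is a minimal example under the assumption that each comparison pairs a participant's viewing before and after the variation; the dwell values, variable names, and sample size here are hypothetical and are not the study's data.

```python
from scipy import stats

# Hypothetical percent dwell time on the nitrate ions for the same
# participants while viewing animation A and then animation B.
dwell_animation_A = [12.1, 8.4, 15.0, 10.2, 9.7, 14.3, 11.8, 7.9, 13.5, 10.0, 12.7]
dwell_animation_B = [30.2, 22.5, 35.1, 28.0, 25.4, 33.7, 29.9, 24.1, 31.8, 27.5, 30.6]

# Paired-samples t-test: did viewing of this species change between the two animations?
res = stats.ttest_rel(dwell_animation_B, dwell_animation_A)
mean_gain = sum(b - a for a, b in zip(dwell_animation_A, dwell_animation_B)) / len(dwell_animation_A)

print(f"t({len(dwell_animation_A) - 1}) = {res.statistic:.2f}, "
      f"p = {res.pvalue:.5f}, mean gain = {mean_gain:.1f}")
```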
Showing participants animations-in-variation draws their attention to the varied features in the animation. Because these features were selected based on misconceptions, this instructional approach causes participants to engage visually with a representation of the misconception. When participants increased their viewing of the static images of the inaccurate animations after feedback, they again engaged visually with representations of misconceptions while selecting the most accurate animation.
This visual engagement is a vital step in getting participants to critique inaccurate animation features and evaluate animation features with regards to experimental evidence.
“It shows I didn’t look much in the middle area of C, I thought I did but maybe I guess I would have looked there now.” Participant 36 reflection on eye-tracking feedback.
Throughout this study participants told us they stopped looking once they found an animation that matched their understanding of the phenomena. An increase in viewing the inaccurate animation frames suggests participants are looking more closely at animations they may have dismissed. They are becoming more critical of the features represented within each animation-in-variation and looking at the differences, not just stopping once the most accurate animation is selected.
“I missed some molecules on the far right” Participant 55 reflection on eye-tracking feedback.
“I remembered chloride ions were usually larger…It's interesting to see where you’re cutting corners as far as your focus, focusing on one part of the larger picture and thinking you’re getting the entire picture.” Participant 32's reflection when viewing eye-tracking feedback.
Participant 34 viewed the chloride more in the initial animation, then shifted viewing to the sodium and nitrate, and finally returned to the chloride in the final animation (Fig. 12). Similar to participant 32, participant 34 recorded minimal fixation time on the cations.
“I saw patterns – tended to concentrate on the top portion; I didn’t actually look at the entire thing which made me rethink how much I was actually looking at…” Participant 34's reflection when viewing eye-tracking feedback.
| Chemical reaction | A | B | C | Average A/I grid score |
|---|---|---|---|---|
| Precipitation solution 3 (n = 12) | 8* | 3 | 1 | 8.8 ± 2.5 |
| Precipitation solution 5 (n = 12) | 10* | 0 | 2 | 7.4 ± 2.0 |
| Redox after variations (n = 14) | 6 | 3 | 5* | 5.0 ± 1.8 |
| Redox after visual feedback (n = 14) | 5 | 2 | 7* | 5.6 ± 2.0 |

*The most scientifically accurate animation is marked with an asterisk.
After viewing visual feedback of the animations-in-variation, two participants shifted their selection to the correct animation. To better understand the reasoning for each selection, the accurate/inaccurate grids were scored for correct ratings. This additional analysis increased the dimensions being considered to 12 for the precipitation animations and 9 for the redox animation, as each accurate/inaccurate rating was scored individually. A slight gain was observed for the redox reaction scores after critiquing visual feedback. Participants in the redox experiment scenario used the accurate/inaccurate grid before and after viewing visual feedback (Table 2); the number of correctly marked answers increased from 5.0 ± 1.8 to 5.6 ± 2.0. Although this gain in the selection of the accurate animation and in the accurate/inaccurate score is not significant, the increase suggests participants are not disadvantaged by viewing visual feedback, and additional studies with visual feedback are warranted.
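As a concrete illustration of this cell-by-cell scoring, the sketch below awards one point for each accurate/inaccurate rating that matches an answer key (3 animations × 4 aspects = 12 cells for precipitation, 3 × 3 = 9 for redox). The aspect names and the key shown are illustrative assumptions, not the study's actual scoring key.

```python
# Hypothetical example key: each (animation, aspect) cell is True if "accurate"
# is the correct rating for that cell. This is not the study's actual key.
EXAMPLE_KEY = {
    ("A", "free ions"): True,
    ("A", "reaction occurred"): True,
    ("B", "free ions"): False,
    ("B", "reaction occurred"): False,
    ("C", "free ions"): False,
    ("C", "reaction occurred"): True,
}

def score_grid(participant_ratings, key):
    """Award one point for each grid cell whose rating matches the key."""
    return sum(1 for cell, rating in participant_ratings.items() if key.get(cell) == rating)

# Example: a participant who rated every cell "accurate" (True).
ratings = {cell: True for cell in EXAMPLE_KEY}
print(score_grid(ratings, EXAMPLE_KEY))  # 3 of the 6 example cells match the key
```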
The greatest gain in correct accurate/inaccurate rating was seen with animation C of the redox animations (Fig. 13). The number of participants correctly identifying the scientifically accurate animation increased slightly from 5 to 7 (Table 2).
Fig. 13 Gain in accurate/inaccurate rating of animation features on the A/I grid for redox animations before and after visual feedback.
In the case of participant 29, the drawings (Fig. 15) show the before and after images for solution 3. This participant initially drew a reaction of molecular aqueous species, with all species as molecular pairs at the end. After watching the animations-in-variation, both participants changed the initial aqueous species to free ions but left the final spectator ions bound in molecular pairs. In the bottom set of images for participant 29, the participant this time changed the molecular aqueous species to free ions at both the start and end of the mixing. The bottom four images of Fig. 15 are the start and end drawings by participant 36; after seeing the animations-in-variation they changed their drawings to show nitrate as polyatomic and shifted the silver chloride precipitate to a solid lattice at the bottom of the solution.
Similar to the precipitation experiment, only two redox participants of the 14 chose to edit their drawings after watching the animations-in-variation (Fig. 16).
Although only a few participants in each group (five of the 26 total participants) chose to edit their drawings, participants who did not edit their drawings critiqued incorrect aspects of the drawings that were varied in the animations. This suggests that these participants may have changed their representation of the incorrect features after watching the animations-in-variation if the interview had required a new drawing instead of offering an opportunity to edit the previous drawing. The correctly edited drawings suggest participants became more critical of their representations after viewing the animations-in-variation. Thus, shifting the protocol to require a new drawing may lead to additional corrections beyond those observed from voluntary editing.
“I got to compare the different possibilities… I was able to think about and reflect on the outcome of the actual experiment.” Participant 28 reflecting on their study participation.
Critiquing the chemically relevant features of the animations resulted in more accurate drawings with regard to the chemically relevant aspects of the animations-in-variation. Animations are useful tools, but they are inherently flawed in that they are representations. Being able to critique representations of chemical phenomena in light of experimental evidence moves our students towards becoming more critical consumers of visual information. The following quote exemplifies the impact of the animations-in-variation on students' metacognition and self-reflection in light of the depiction of a submicroscopic chemical phenomenon.
“I guess the most helpful part was from the animations, that I got to compare the different possibilities and that allowed me to… from the different possibilities I was able to compare them and think about how they would reflect like in the, how the… how they reflect in the outcome of the actual experiment.” Participant 28
During the viewing of the static images, participants often shifted their viewing back and forth between the computer screen displaying the image and the iPad showing the accurate/inaccurate grid. This look-away behavior affected eye-tracking reliability for the viewing of static images; future studies should be designed to minimize this effect by dedicating separate time to the task of viewing versus completing the grid, or by shifting the grid to the eye-tracking computer screen. Shifting the accurate/inaccurate grid to the eye-tracking screen would have the added benefit of linking specific feature critique (ratings of accurate/inaccurate) to specific viewing areas on the stimuli.
Changes in viewing after visual feedback were analyzed using composite images without visual feedback. Because the visual feedback obscured only the areas previously viewed, participants were able to see the areas not previously viewed. This presentation was chosen to encourage participants' viewing of new areas of the stimuli but limits the data analysis to the two viewings of the static images. This approach allows all areas in the image equal viewing time during data collection but omits analysis of the areas viewed during visual feedback; the assumption is that participants view all unobscured areas during the visual feedback. Additional trials are needed to analyze the areas viewed during visual feedback or to present the visual feedback without obscuring previously viewed areas.
To facilitate eye-tracking data collection, participants were interviewed individually. Although clinical studies allow controlled investigations into participant behavior, the goal of this research is to develop instructional tools to be used in classroom settings. This disconnect between the clinical and naturalistic environments is a limitation of the study. Current iterations of this research are shifting to include multiple participants and opportunities to discuss their critique of the animations and feedback.
The use of animations-in-variation and visual feedback holds the potential to increase participant critique and visual attention. Using these instructional strategies in individual, small group, and classroom settings holds promise for increasing student-centered learning and metacognitive monitoring in chemistry classrooms.
By critiquing animations and viewing patterns, students have the opportunity to articulate and consider their thinking, and at the same time engage in authentic scientific reasoning. Additionally, instructors can gain insight into how their students construct knowledge when making links between submicroscopic representations and experimental evidence of chemical phenomena. If all models are wrong, chemistry students need opportunities to evaluate the aspects that are useful and to become critical of the visual information they consume.
Footnote
† Electronic supplementary information (ESI) available. See DOI: 10.1039/c9rp00015a