Myriam S. Carle, Rebecca Visser and Alison B. Flynn*
Department of Chemistry & Biomolecular Sciences, University of Ottawa, Ottawa, Ontario, Canada. E-mail: Alison.flynn@uOttawa.ca
First published on 31st January 2020
We developed an online learning module called “Organic Mechanisms: Mastering the Arrows” to help students learn part of organic chemistry's language—the electron-pushing formalism. The module guides students to learn and practice the electron-pushing formalism using a combination of interactive videos, questions with instant feedback, and metacognitive skill-building opportunities. This module is part of http://OrgChem101.com, an open educational resource (OER) that houses a series of learning modules. To evaluate the mechanism module's effects on students’ learning and experiences, we offered a workshop during which undergraduate students used the module. We investigated their learning gains via a pre-test and post-test format and their experiences using a survey. Analysis of responses revealed significant learning gains between the pre- and post-test, especially with questions that asked students to draw the products of a reaction. After using the learning tool, students used more analysis strategies, such as mapping, attempted more questions, and made fewer errors. The students reported positive experiences and a belief that the module would help them in their organic chemistry courses. Previous work also identified greater metacognitive skills after using the module, related to the module's intended learning outcomes. Herein, we describe the module, evaluation study, findings, and implications for research and practice.
To support student mastery of and fluency in the EPF, we created the “Organic Mechanisms: Mastering the Arrows” module in http://OrgChem101.com (Flynn and Visser, 2018). The module addresses four learning outcomes (LOs, Fig. 1): (1) draw the electron-pushing arrows, given the starting materials and products of the elementary steps, (2) draw the products of a reaction step, given the starting materials and electron-pushing arrows, (3) draw the transition state structure for a reaction step, and (4) draw the reverse reaction mechanism, given the elementary steps in the forward direction (Flynn and Ogilvie, 2015). These learning outcomes are designed to help students gain fluency that they can leverage when they are learning more advanced concepts of reactivity.
The organic reaction mechanisms module begins with a “Get started” section that has a self-assessment as part of metacognitive skill-building (Brown et al., 2014), a pre-test, a comparison of those two assessments with a prompt that asks students to decide what to do next for their learning, and an introductory video. Each learning outcome section involves interactive videos and activities with feedback. The module finishes with a “Wrap-up” section that has a self-assessment, post-test, and summary. All the sections are aligned with the module's intended learning outcomes, and the module can be used in any curriculum type, including traditional and transformed (Flynn and Ogilvie, 2015).
The module introduces strategies for analyzing questions, such as: expanding or redrawing the structure, mapping the atoms and electrons involved in the reaction step, and building a model. Expanding the structure includes drawing all non-bonding electrons (lone pairs) on heteroatoms involved in the reaction steps. Expanding the structure also helps avoid errors related to implicit atoms and electrons, including avoiding pentavalent carbon atoms. Mapping involves keeping track of electrons and atoms from the starting materials to the products, which usually involves numbering or lettering atoms and electrons, but which can take other forms, such as circling atoms, writing geometrical shapes, or highlighting atoms or electrons. Mapping is particularly useful to determine the bonds broken and formed during a reaction step and can help students avoid mistakes such as missing atoms or misplaced atoms. Building a model can be used to examine the molecule in various conformations, including the reactive one, plus drawing products that have more complex stereochemical information. More details about the module are available in the module itself and in an earlier publication (Visser and Flynn, 2018).
The module was designed based on existing literature, including a study that analyzed thousands of isolated EPF exam questions. That study found that students who were taught and assessed on those learning outcomes drew few reversed arrows (e.g., atom to electrons) and few pentavalent atoms, and scored higher on Draw the arrows (LO1) questions than on Draw the products (LO2) questions; lower scores were correlated with questions involving implicit atoms and electrons, intramolecular reaction steps, and reactants drawn in conformations differing from the reactive one (Flynn and Featherstone, 2017). A follow-up study used an interview format to investigate students’ meaning making when analyzing EPF questions, finding that all participants analyzed electron movement and leveraged their prior knowledge while approaching these questions, and that the most successful students used mapping strategies (Galloway et al., 2017). Participants relied on charge as a cue to identify areas of reactivity, and some used stepwise approaches that resulted in non-chemically feasible intermediates; the latter approach may have simply been a problem-solving strategy to reduce cognitive load or may represent how participants visualized the reactions occurring. Expanding and mapping strategies have also been correlated with successful problem solving in organic synthesis (Bodé and Flynn, 2016).
IPT guided the design of the module: information is presented to students in a meaningful manner by linking new concepts to what learners have previously learned and to future required (tested) skills. The module is interactive and engages the learner by pausing the videos and allowing students to build their own answers. The module also contains practice questions with feedback, allowing students to practice the skills learned and build on their knowledge. Metacognitive skill-building activities and prompts (e.g., compare your skill rating with your pre-test score: how will you manage your studying accordingly?) provide opportunities for learners to become better at identifying what they know and don’t know, as well as at planning their learning time accordingly (National Research Council, 2000).
RQ1: What are students’ learning gains after using the module?
RQ2: What effect does the module have on students’ strategies when solving EPF-related questions?
RQ3: What effect does the module have on students’ errors when solving EPF-related questions?
For the cohort II study described herein, Organic Chemistry II students enrolled in the 2018 fall term were invited to participate in a workshop held during a regularly held tutorial session (i.e., recitation, discussion group). The researchers made an announcement during a class period and the professor teaching the course posted a recruitment text on the class’ online page. Workshop attendees provided informed consent to participate in the study and could elect to participate in the workshop without having their data used for the study; 103 of 172 attendees consented to have their data used for research purposes; 330 students were enrolled in the course in total.
The cohort III study consisted of Organic Chemistry II students enrolled in the Fall 2019 term, during their second tutorial of the term. Two tutorial sections were used and were separated into two groups: (1) an intervention group that followed the same procedure as the cohort II study, and (2) a control group whose regular teaching assistant (TA) gave a lesson on acid–base chemistry instead of using the mechanisms module. The control tutorial consisted of instruction in organic acid–base chemistry and determining the relative strengths of acids and bases. The TA taught the students how to differentiate between weak and strong acids and bases; the students then completed questions in small groups, and the TA went over the questions with them. This specific tutorial and lesson were chosen because they are unrelated to the electron-pushing formalism and therefore served as a suitable control. As in the cohort II study, participants were asked to provide informed consent, and any student was welcome to attend without their data being used for research; twenty-four and forty students provided consent in the intervention and control groups, respectively.
The pre-test and post-test were identical to ensure that the tests were of equal difficulty (Fig. 4). Questions 1–4 were aligned with LO1 (Draw the arrows); Questions 5–8 were aligned with LO2 (Draw the products). While all four learning outcomes are believed to be important, due to time constraints we focused on the learning outcomes that we think are most essential to students’ later success in analyzing mechanisms. During the workshop, students used the module and could ask questions of the facilitators who were present. The intended learning outcomes that would be tested were shared with the students, and they were encouraged to focus on only those learning outcomes during the session. Students worked individually, in pairs, or in small groups, according to their preference.
Some arrows, such as the small arrow in Question 8, were only worth one point since the arrow represents the breaking of a single (e.g., C–Br) bond and no new bonds are formed. Similarly, the arrow for the collapse of the tetrahedral intermediate in Questions 6 and 7 was only worth one point because no bonds are being broken and one C–O π-bond is being formed.
For RQ2, we analyzed the following strategies for all questions: mapping, expanding the structure, redrawing the structure, and drawing non-bonding electrons. This analysis was done in the Cohort II study. We coded each strategy as being absent or present (even if it was not properly used). Mapping was used whenever a participant would mark (with a number, letter, shape, highlighter) part of a molecule to help situate it in the product. Expanding the structure was coded when implicit atoms and electrons/bonds were drawn. Re-drawing the structure implied that the student had re-drawn either the product or the starting material. Drawing the non-bonding electrons was coded when students drew the electrons on heteroatoms.
For RQ3, the errors on the tests were coded and analyzed for the Cohort II study (the codebook is available in Appendix 2). For the Draw the arrows learning outcome, the following codes were used for each Draw the arrows question: correct, reversed, wrong, from atom/charge, vague, missing, extra, or did not attempt (Flynn and Featherstone, 2017). A correct arrow demonstrated the correct electron flow. A reversed arrow started from the electron-deficient site and pointed toward the electron-rich site. A wrong arrow started or ended at the wrong site on the molecule. An arrow coded from atom/charge demonstrated the correct electron flow from electron rich to electron poor but did not start from electrons; rather, the arrow started from a charge or an atom. Missing arrow was coded when a required arrow was absent, while extra arrow was coded if there were too many arrows present. Finally, if a student did not draw a single arrow, the did not attempt code was used.
For Draw the products questions, the errors were coded as in previous work (Flynn and Featherstone, 2017). For example, a formal charge error represented an instance with an extra, missing, or incorrect formal charge. A placement error was coded when the product was not correctly connected; a double bond or an atom was misplaced. Transplanting electrons represented a situation when electrons in a bond or on an atom were relocated to a new atom instead of bonding two atoms together.
Fig. 5 Post-test versus pre-test scores: overall (left, blue circles), for LO1: Draw the arrows (middle, red triangles), for LO2: Draw the products (right, yellow squares). N = 103. Cohort II.
| Independent variable | Pre-test mean (%) | Pre-test SD (%) | Post-test mean (%) | Post-test SD (%) | t(102) | p | Cohen's d |
|---|---|---|---|---|---|---|---|
| Overall score | 59.5 | 22.7 | 72.3 | 17.6 | 8.597 | <0.001 | 0.729 |
| Scores from LO1: Draw the arrows | 58.3 | 26.1 | 66.7 | 26.9 | −3.992 | <0.001 | 0.315 |
| Scores from LO2: Draw the products | 60.5 | 31.1 | 77.0 | 19.1 | −5.806 | <0.001 | 0.863 |
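The paired-samples t statistics and Cohen's d values reported above can be recomputed from matched pre/post scores. The sketch below uses only the Python standard library and hypothetical scores (not the study data); for paired samples, Cohen's d is taken as the mean difference divided by the standard deviation of the differences.

```python
import math
from statistics import mean, stdev

def paired_t_and_d(pre, post):
    """Paired-samples t statistic and Cohen's d for matched scores.

    t = mean(diff) / (sd(diff) / sqrt(n)), with df = n - 1;
    d = mean(diff) / sd(diff), where diff is post - pre per student.
    """
    diffs = [b - a for a, b in zip(pre, post)]
    sd = stdev(diffs)                                # sample SD of the differences
    t = mean(diffs) / (sd / math.sqrt(len(diffs)))   # paired-samples t
    d = mean(diffs) / sd                             # Cohen's d (paired)
    return t, d

# Hypothetical percentages for five students (not the study data)
pre = [50, 60, 70, 55, 65]
post = [60, 72, 75, 65, 80]
t, d = paired_t_and_d(pre, post)
```

With real data, the p value would then be read from the t distribution with n − 1 degrees of freedom.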
Some of the students obtained a score of zero in some of the questions and their answers were examined further for validity. We found that these scores were all valid since they resulted from one of the following three scenarios: (1) some students focused on questions related to only one of the LOs, and thereby obtained a score of zero on the other LO; (2) the students answered the questions incorrectly; or (3) the students wrote a partial (and incorrect) answer on the questions. Since all of these students had answered at least one of the questions on the worksheet, their scores were included in the data analysis.
Scores on Questions 4 and 8 had the largest improvements (Fig. 6). Moreover, only 63% of participants attempted Question 4 and 57% attempted Question 8 on the pre-test, while 82% and 84% attempted those questions on the post-test, respectively.
Normalized learning gains were also calculated and analyzed to account for the high scores of certain participants and a potential ceiling effect in Questions 3 and 6. Normalized learning gains account for the variance in the pre-test scores, the gain in score that is possible for each participant, and ceiling effects; they were calculated using eqn (1). The normalized learning gain calculation accounts for the fact that lower scores have more room for improvement and higher scores have less. A normalized learning gain of 1.0 indicates a perfect score on the post-test. A normalized learning gain of 0.0 indicates no improvement from the pre-test to the post-test, while a negative value shows deterioration of the score.
normalized learning gain = (post-test score (%) − pre-test score (%)) / (100 − pre-test score (%)) (1)
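Eqn (1) can be implemented directly; the sketch below assumes scores are given as percentages and is illustrative rather than the study's analysis code.

```python
def normalized_learning_gain(pre: float, post: float) -> float:
    """Normalized learning gain, eqn (1): g = (post - pre) / (100 - pre).

    Scores are percentages: g = 1.0 is a perfect post-test, g = 0.0 is
    no change, and a negative g indicates a deteriorated score.
    """
    if pre == 100:
        raise ValueError("undefined for a perfect pre-test score")
    return (post - pre) / (100 - pre)
```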
The normalized learning gains revealed improvements on the post-test from the pre-test as a whole (Fig. 7). The biggest learning gains occurred in Question 8, which had a median normalized learning gain of 0.336. Questions 3 and 6 did not have high normalized learning gains since participants performed well on the pre-test, with means of 73% and 72%, respectively.
| Group | Independent variable | Pre-test mean (%) | Pre-test SD (%) | Post-test mean (%) | Post-test SD (%) | U | p | r |
|---|---|---|---|---|---|---|---|---|
| Intervention (N = 24) | Overall scores | 54.4 | 19.9 | 78.3 | 14.7 | 107.0 | <0.001 | 0.548 |
| | Scores on LO1: Draw the arrows | 63.1 | 25.8 | 70.8 | 24.4 | 242.5 | 0.345 | — |
| | Scores on LO2: Draw the products | 47.2 | 33.8 | 84.5 | 11.8 | 79.5 | <0.001 | 0.136 |
| Control (N = 40) | Overall scores | 60.0 | 24.0 | 62.5 | 20.3 | 776.0 | 0.817 | — |
| | Scores on LO1: Draw the arrows | 60.6 | 26.7 | 57.0 | 28.2 | 854.5 | 0.602 | — |
| | Scores on LO2: Draw the products | 59.4 | 29.0 | 67.1 | 22.6 | 719.0 | 0.434 | — |
Fig. 8 Normalized learning gains for the intervention (N = 24) and control (N = 40) groups. Cohort III.
We found a significant difference between the control and intervention groups in the overall normalized learning gains (Table 3), the LO1 (Draw the arrows) normalized learning gains, and the LO2 (Draw the products) normalized learning gains. The large effect sizes show that the intervention had a strong effect on students’ learning gains compared with the control group. In summary, students in two different cohorts (cohort II and cohort III) achieved learning gains using the mechanism module over a short period of time.
| Independent variable | Intervention (N = 24) mean | Intervention (N = 24) SD | Control (N = 40) mean | Control (N = 40) SD | U | p | r |
|---|---|---|---|---|---|---|---|
| Overall normalized LG | −0.076 | 0.391 | −0.111 | 0.569 | 167.0 | <0.001 | 0.548 |
| Normalized LG on LO1: Draw the arrows | −0.250 | 0.254 | −0.302 | 0.932 | 1102.0 | <0.001 | 0.320 |
| Normalized LG on LO2: Draw the products | 0.525 | 0.450 | −0.196 | 1.212 | 213.5 | <0.001 | 0.462 |
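The Mann–Whitney U statistics above come from comparing the two groups' gain distributions. Below is a minimal, tie-aware implementation (a sketch, not the study's code); it returns the smaller of the two U values, the convention used with critical-value tables.

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U with average ranks for ties.

    Returns min(U1, U2), the convention used with critical-value
    tables (SciPy's mannwhitneyu reports U1 instead).
    """
    combined = sorted(a + b)
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        # tied values share the average of ranks i+1 .. j
        for v in combined[i:j]:
            ranks[v] = (i + 1 + j) / 2
        i = j
    r1 = sum(ranks[x] for x in a)           # rank sum of the first sample
    u1 = r1 - len(a) * (len(a) + 1) / 2
    u2 = len(a) * len(b) - u1
    return min(u1, u2)
```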
Fig. 9 Mapping instances found on the pre-test (orange, left) and post-test (blue, right) for each question. N = 103. Cohort II. |
Mapping was demonstrated significantly more often on the post-test (48%) than on the pre-test (20%), χ2(1) = 23.361, p < 0.001, although still in relatively few questions overall (Fig. 9). The module dedicates considerable time to this strategy and contains many mapping questions for students to practice. Draw the products questions (Q5 to Q8) had the biggest increase in the use of mapping (example in Fig. 10). Question 6 was done very well (average of 77% on the pre-test and 87% on the post-test), but very few students chose to map. We hypothesize that in those cases, participants chose not to map because they could visualize the answer or mapped in their heads. Question 8 had the largest increase in mapping (from 11 to 38 instances) as well as the largest learning gains (47% to 77%). Perhaps in this question, participants found value in mapping. A key decision point for students will be when to use such strategies, much like how one needs to decide when heuristics are appropriate and when deeper analytical thinking is needed (Talanquer, 2014, 2017).
Students who mapped in the pre-test (N = 18) had a higher mean score (70.1) than the participants who did not (N = 84, M = 57.0), t(25) = 2.383, p = 0.034. However, on the post-test both students who mapped (N = 49, M = 73.3) and did not map (N = 53, M = 71.5) obtained similar results, t(100) = 0.513, p = 0.609.
To analyze the effect of mapping on the students’ learning gains, the participants were separated into four groups based on whether or not they mapped (Fig. 11). Group 1 includes the sixteen participants who mapped on both the pre-test and the post-test. Group 2 includes the two participants who mapped on the pre-test and did not map on the post-test. Group 3 includes the 33 participants who did not map on the pre-test but mapped on the post-test. Group 4 includes the 51 participants who did not map on the pre-test or post-test.
Fig. 11 Test scores, grouped according to whether mapping was observed in the pre-test and post-test. Cohort II. |
The participants in Group 3, who used mapping on the post-test but not on the pre-test, showed learning gains of 13%, which is not statistically different from the average of 11% for all students.
A one-way ANOVA was done between Groups 1, 3, and 4 to determine the effect of the groups on normalized learning gains. Group 2 was excluded due to the low number of participants. There was no significant effect for the three groups [F(2,97) = 0.941, p = 0.394]. Participants who changed from not-mapping (pre-test) to mapping (post-test) had similar learning gains as the other groups. This analysis accounts for the overall score of the worksheet and not the specific questions related to mapping. Another limitation of the analysis is that the worksheets were not coded on whether the participants mapped properly or not, but simply whether they mapped at all. The increase in the use of the mapping strategy is nevertheless promising.
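The one-way ANOVA compares mean normalized learning gains across the mapping groups; its F statistic is the between-group mean square divided by the within-group mean square. The sketch below is illustrative, not the study's analysis code.

```python
def one_way_f(groups):
    """One-way ANOVA F statistic: MS_between / MS_within, on F(k - 1, n - k)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

An F near zero (groups with similar means) corresponds to the non-significant result reported here; the p value would be read from the F distribution with (k − 1, n − k) degrees of freedom.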
The most common type of error was drawing an arrow from an atom or charge (Fig. 13). These arrows demonstrated the correct direction of electron movement; however, because the arrow did not begin at electrons, we did not score it as correct. The module emphasizes that bonds are created by the movement of electrons, not atoms or charges; therefore, curved (EPF) arrows should start from electrons (Fig. 14). In later years, we anticipate that students would adopt the common convention among organic chemists of drawing EPF arrows from atoms or charges. Question 2 had the highest incidence of this type of error; in that question, three of the six arrows had to start from electrons on a heteroatom, meaning that the students had to draw the electrons first (i.e., an extra step). In contrast, drawing arrows from atoms or charges was rarely observed in Question 1, which involved a hydride transfer.
Fig. 14 Common errors found in answers for questions associated with LO2: Draw the products. N = 103. Cohort II. |
Questions associated with LO2 (Draw the products) had common errors consistent with previous work (Fig. 14) (Flynn and Featherstone, 2017). Missing or extra bonds were prevalent in Question 5 (Fig. 15). Questions tended to have specific types of errors; for example, answers to Question 7 were frequently missing a methyl group. Transplanting electrons was only observed in Question 8 (example in Appendix 2, Table 7).
Fig. 15 Example of a missing atom error in a Draw the products question (LO2), specifically Question 5. |
For all the errors in the Draw the products questions, the “long” arrow seemed to be most challenging to interpret, while shorter arrows were interpreted correctly more often. For example, in Question 5, only 39% of students on the pre-test successfully made the bond from the long arrow, as opposed to 65% for the bottom left arrow and 62% for the bottom right arrow. Comparing the arrow interpretations to each other showed significant difference between the long arrow and each of the shorter arrows, χ2 = 23.59, p < 0.001 and χ2 = 22.88, p < 0.001, respectively. There was no significant difference in interpretation between the two shorter arrows, χ2 = 0.16, p = 0.344. Similar results were found in the post-test, as well as Questions 7 and 8. These long arrows represent situations in which conformational changes are needed for the molecule to go from the conformation depicted in the starting material to the reactive conformation, which likely posed visualization and mental rotation challenges for students. Analogous difficulty was found between the same reactants shown in different conformations (Flynn and Featherstone, 2017). These results will need to be explored in more depth.
The learning gains of the cohort II study could be associated with time on task; analogous class time could have similar effects, as we found when evaluating the Nomenclature101 module in the same learning tool (Bodé et al., 2016). Students’ learning was measured over a short time period, so we do not make claims about the enduring nature of students’ abilities or their ability to transfer their skills to new situations. As with any learning, we expect that practice and use in context are essential for meaningful and enduring connections to be made with other areas of chemistry. Students’ learning was only studied with respect to the first two learning outcomes of the module, although we hypothesize that the learning effects will be similar for intended learning outcomes three (draw the transition state of a reaction step, given the starting materials and product of that step) and four (draw the mechanism of the reverse reaction, given the forward mechanism). Because the pre-test and post-test were identical, students may have remembered their answers from the pre-test. We think this limitation is unlikely to matter because the pre-test was collected immediately after it was completed, and the questions were of sufficient difficulty and quantity to mitigate a memorization effect. Moreover, even if students remembered their answers, a repeated individual response would tend to produce the same answer (e.g., an incorrect answer would still be incorrect) if the intervention had no effect.
Another limitation of the study is that we did not collect data on which parts of the module students finished or what they worked on during the workshop. The Classroom Observation Protocol for Undergraduate STEM courses (COPUS) was used to record what the students were generally doing during the workshop and showed that students were working either by themselves or in groups. However, we do not have data on exactly what task they were doing, and some students may have been working off-task or may not have completed the module.
With either type of module use, summative assessment questions (e.g., midterm and exam questions) should include aligned questions to demonstrate the value of the activities and module to the students as well as assess progress toward the intended learning outcomes.
The module's approach and question types address students’ skill in interpreting the electron-pushing formalism, part of organic chemistry's language, but do not directly teach or assess concepts or the reasons for the reactivity observed. In uOttawa's curriculum, those concepts and reasons are addressed in other areas and question types, building off students’ skills with the EPF (Flynn and Ogilvie, 2015; Raycroft and Flynn, 2017; Bodé et al., 2019). The module could also be used in a traditional curriculum to help students learn EPF skills and practice on new questions focused solely on the EPF. The learning module also teaches strategies that are correlated with greater success on organic synthesis problems (Bodé and Flynn, 2016). The increase in the usage of mapping highlights the usefulness of the learning module in modelling these strategies for students.
The effect of this module could also be studied for students currently in a traditional curriculum. Future research could investigate the effects of gaining fluency in organic chemistry's symbolism on students’ abilities to learn new concepts in organic chemistry that require using organic chemistry's symbolism. Future research could also investigate the existence and extent of benefits for a variety of learners (e.g., gender differences, effects of technological proficiency on learning).
The cohort I study consisted of four parts (Fig. 3): (1) a pre-test, (2) time allotted for the students to use the learning module, (3) a post-test, and (4) a survey asking for their opinions about the module. The workshop focused on the first two learning outcomes (Draw arrows and Draw products).
Participants used very few strategies, such as mapping or expanding, although the strategies are taught in the module and have been correlated with successful problem solving in organic synthesis questions. Every participant answered Question 1 correctly on both the pre-test and the post-test. Because of this ceiling effect, the researchers replaced that question with a Draw the arrows question in the cohort II study.
All the participants completed a survey about their experiences. They replied positively when asked about the module's effect. For example, when answering the question “Do you think your use of the organic reaction mechanisms module will have an impact on your success in your course? If so, how?”, Participant 1 wrote that “It is very good for learning the basics”. Participants 3, 5, 6 and 7 all mentioned that the practice questions and instant feedback were useful and the main reason they liked the module. Participant 9 stated that the module was good “to teach me everything I was behind on” and Participant 2 mentioned that the module was useful to “Make sure I understand the basics”. Table 4 shows the common themes recurring in the students’ answers to the survey.
| Best features | Worst features |
|---|---|
| Quick response to answers | Occasional errors in the answers^a |
| Module interface was easy | Video question sometimes hid key information |
| Good explanations and descriptive videos | Draw the arrow tool was hard to use |
| Self-paced | Mapping questions were hard to see^a |

^a The developers have resolved these issues.
A few technical difficulties were reported, such as occasional errors in the feedback, problems logging in, and difficulty with the drawing ability of the learning module. These issues have been addressed by the developers and were not significant barriers to the students’ experiences or learning.
For LO2: Draw the products questions, one point was awarded for interpreting each aspect of the electron-pushing arrow: breaking and making a bond. For example, in Question 5, shown in Fig. 17, arrow A was worth two points: one point for breaking the π bond between carbons 1 and 6 and one point for making the bond between carbons 5 and 6. Each response was coded accordingly. All of the other Draw the products questions were coded in this way (Fig. 18–20).
| Code | LO1: Draw the arrows | LO2: Draw the products |
|---|---|---|
| Attempted | Must have minimum 1 arrow | Must have any structures |
| Drew non-bonding electrons | Must have at least 1 non-bonding electron pair | Must have at least 1 non-bonding electron pair on the product |
| Expanded the structure | Explicitly drew proton(s) and/or wrote out a C for carbon | Explicitly drew proton(s) and/or wrote out a C for carbon |
| Re-drew the structure | Re-drew all or part of the structure | Re-drew all or part of the structure |
| Mapping | Any attempt to identify atoms in both the SM and product | Any attempt to identify atoms in both the SM and product |
For LO1, each arrow was assigned one of the codes outlined in Table 6. The codes were adapted from Flynn and Featherstone's work (2017).
For LO2 (Draw the products), several errors were coded (Table 7). The errors were again adapted from previous work (Flynn and Featherstone, 2017).
This journal is © The Royal Society of Chemistry 2020