A. Kat Cooper* and M. T. Oliver-Hoyo
Department of Chemistry, North Carolina State University, Raleigh, North Carolina 27695, USA. E-mail: akcooper@ncsu.edu
First published on 12th July 2016
Argument construction is a valuable ability for explaining scientific phenomena, and introducing argumentation skills as part of a curriculum can greatly enhance student understanding by promoting self-reflection on the topic under investigation. This article uses argument construction as a technique to support an activity designed to improve student understanding of noncovalent interactions. Two theoretical argumentation frameworks were used and compared in order to determine which was more effective for use in the developed activity.
A suggested pedagogical response to this deficiency is the use of argument-based instruction, which requires that students compare possible solutions and constantly refine their understanding (Bricker and Bell, 2008; Berland and Reiser, 2009; Berland and McNeill, 2010; Sampson and Walker, 2012; Juntunen and Aksela, 2014). When presenting evidence in the argumentation process, students are more likely to examine their understanding of the topic and identify which pieces of information are incongruent with the rest (Berland and McNeill, 2010; Berland and Lee, 2012). In addition, by requiring that students consider supportive evidence, problematic conceptions can be uncovered that would not appear with traditional instructional methods (Zohar and Nemet, 2002; Mercer et al., 2004; Asterhan and Schwarz, 2007; Sampson and Clark, 2009).
Studies on argumentation practices have shown that university-level students struggle to construct valid evidential support in response to questions on scientific phenomena (Kelly et al., 1998; Horng et al., 2013; Kaya, 2013; Lin, 2013; Yu and Yore, 2013; Almudi and Ceberio, 2015; Moon et al., 2016). For example, Takao et al. (2002) investigated arguments constructed by students in an oceanography course and found a need for explicit instruction on argumentation practices to improve students' applications of reasoning skills in response to scientific problems. Such needs were also found by Erduran and Villaman (2009) with electrical engineering students and by Becker et al. (2013) with physical chemistry students. The positive effects of argumentation practices in the study of chemical equilibrium were shown by Aydeniz and Dogan (2016). In a comparison of university science students with secondary science teachers and scientists, Abi-El-Mona and Abd-El-Khalick (2011) found that students, along with teachers, struggled to provide evidence to justify constructed claims, while this skill was more prevalent in the responses from scientists, showing the significance of introducing this skill earlier in a student's career. These studies on argumentation emphasize the continued need for university-level students to receive instruction on how to construct effective justification to support claims about scientific phenomena.
Argumentation practices have also been adopted to study student construction of scientific explanations (Sandoval, 2003; McNeill et al., 2006; Walker and Sampson, 2013). While the construction of an argument and the construction of an explanation have several similarities, many have drawn a distinction between the two (Sandoval and Millwood, 2005; Berland and Reiser, 2009; Gray and Kang, 2014). Sandoval and Millwood (2005) have summarized this distinction by stating, “Explanations are a central artifact of science, and their construction and evaluation entail core scientific principles on argumentation.” The difficulties that students exhibit in the construction of scientific arguments extend to their construction of scientific explanations (Berland and Reiser, 2009; Walker and Sampson, 2013; Gray and Kang, 2014). It is therefore essential to draw on these argumentation practices to strengthen students' ability to provide evidential reasoning and, in turn, to construct strong scientific explanations.
Knight and McNeill (2015) outline two aspects of argumentation that can be used in the evaluation of students' scientific explanations. The first examines the argumentation process by evaluating the oral, collaborative argument that a group of students constructs together. The second examines constructed arguments by evaluating independent written arguments that individual students record. Walker and Sampson (2013) expand upon these definitions to investigate more specifically scientific argumentation, which they define as a process that “requires individuals to analyze and evaluate data and then rationalize its use as evidence and a rationale.” We therefore operationalize these terms as follows: scientific argumentation refers to the collaborative discourse that students participate in, and scientific argument construction refers to the individual arguments that students construct and translate into a written product.
While scientific argumentation and the construction of scientific explanations can strengthen a student's understanding of scientific phenomena, the skills necessary to construct a high-quality argument or explanation are not inherently recognized by students as applicable to scientific arguments (Lin, 2013; Iordanou and Constantinou, 2015). Therefore, in adapting scientific argumentation for use in instructional materials, it is essential to consider how best to incorporate argumentation language to minimize the intrinsic difficulty for the students. Walker and Sampson (2013) and Walker et al. (2012) describe the implementation of a new model, denoted argument-driven inquiry, in chemistry laboratory courses to investigate the benefits of using argumentation practices in a laboratory setting, and show that student scientific writing improves with the use of argument-driven inquiry tasks. A number of frameworks are available for use in designing materials based on the implementation of argument or scientific explanation construction (Sampson and Clark, 2008). Since few studies have investigated the advantages of each framework outside of the studies developed by the authors to test them, it can be beneficial to consider possible frameworks and identify the one most effective for the development of the instructional resources at hand.
In the adaptation of argumentation to instructional materials, it is necessary to consider the collection and analysis of the data, as robust instructional materials stand a better chance of fulfilling learning outcomes. Studies of student-constructed arguments in scientific settings have analyzed both argumentation and constructed arguments (Cavagnetto, 2010; Buck et al., 2014; Choi et al., 2015; Knight and McNeill, 2015). Scientific argument construction guides students to consider evidential support for a constructed claim (McNeill, 2011; Walker and Sampson, 2013; Knight and McNeill, 2015). The resulting argument can give insight into the student's understanding of the phenomenon under investigation. Overall, this process serves to improve the strength of a student's critical evaluation and to provide a means of assessing their deeper understanding (Sandoval and Millwood, 2005; McNeill et al., 2006; Erduran and Jimenez-Aleixandre, 2012; Erduran et al., 2015).
Supporting critical evaluation skills is essential for concepts that serve as core principles of a scientific field (Driver et al., 1998; Newton et al., 1999; Osborne, 2010). Implementation of argumentation strategies has been shown to support stronger understanding for students (Walker et al., 2012; Walker and Sampson, 2013; Grooms et al., 2014). One area of concern for chemistry educators is the concept of noncovalent interactions (Hapkiewicz, 1991; Nicoll, 2001; Coll and Taylor, 2002; Ünal et al., 2006). Numerous studies have found prevalent misconceptions about noncovalent interactions that persist in student understanding from the general chemistry level through upper-division courses such as biochemistry (Mulford and Robinson, 2002; Schmidt et al., 2009; Villafañe et al., 2011; Cooper et al., 2015). Improving student understanding of a topic that generates misconceptions can be difficult and requires methods that encourage learners to reflect on their own understanding and correct problematic conceptions (McNeill et al., 2006; Kingir et al., 2012; Cooper et al., 2015; Heng et al., 2015). The process of argument construction may encourage the necessary reflection in students to promote a more in-depth mental model.
Due to the documented educational benefits argumentation offers, scientific explanation construction based on tenets of scientific argument construction became a component in the development of instructional materials designed to investigate and support student understanding of noncovalent interactions. The first activity developed is based on noncovalent interactions in small molecules and incorporates the framework by McNeill et al. (2006). Reflecting upon the results from the argumentation component of the activity, this component was redesigned using the framework developed by Sandoval (2003). The two versions of the same activity were used to analyze the quality of the constructed arguments and determine which framework best supported our learning objectives. Our process was not a study to debate the relevance or limitations of each framework, but to identify which framework would provide better guidance during the development of our instructional intervention. The process described within illustrates the relevance of choosing appropriate frameworks, in hopes of assisting instructors who are developing instructional materials with an argumentation component.
Adaptations of the Toulmin (1958) framework have been developed with two primary goals in mind: the simplification of the number of elements evaluated in constructed arguments and the inclusion of a measurement of scientific accuracy (Sandoval, 2003; Erduran et al., 2004; McNeill et al., 2006; Sampson and Clark, 2008; Shemwell et al., 2015). The framework developed by McNeill et al. (2006) identified three elements to evaluate in a constructed argument: claim, evidence, and reasoning. Condensing the number of elements from five to three leads to several advantages. The first is that the language used in the prompts for argument construction can be broken into sections to simplify the process for students. McNeill et al. (2006) were able to show the effectiveness of this support in improving the quality of arguments constructed by middle school students investigating molecular properties. The students were first instructed on how to effectively design an argument using this strategy. The students who were in the argumentation treatment group performed better on higher level reasoning questions indicating an improvement in their comprehensive understanding.
Second, lowering the number of elements simplifies the evaluation of the constructed arguments. The scheme developed by Toulmin (1958) can cause difficulties in analysis because of the similarities terms share, such as distinguishing between a warrant and a backing. Also, rebuttals and qualifiers can only be identified when a group argument is examined as a whole, precluding the ability to assign an individual score to each student. By devising a scientific explanation prompt that reorganizes the Toulmin argumentation coding scheme into three distinct sections, the structure identified by the McNeill et al. (2006) framework can be assessed for each individual, and each section can be scored independently using a rubric.
In the construction of scientific arguments, a measure of validity lends more weight to the determination of the strength of the argument. Whereas the Toulmin (1958) model supports the evaluation of argument construction in any field, in scientific argumentation there is often a limited number of scientifically valid claims that can be constructed from the relevant evidence (Sandoval, 2003; Sampson and Clark, 2008). The Sandoval (2003) framework takes scientific accuracy into account in the evaluation of the argumentation elements. The elements were also simplified from those identified by Toulmin (1958) to focus on three parts: causal elements, articulations, and warrants. The causal elements are identified by the investigator as elements crucial to the construction of a claim in response to the question the learner is presented with. Support for a causal element is considered an articulation, and if the articulation is further supported by scientifically valid evidence then it is considered warranted. Warrants are credited only if the evidence is valid, which allows the number of articulations to be compared against the learner's ability to warrant those articulations with valid scientific evidence.
As summarized by Erduran and Jimenez-Aleixandre (2012), the nature of the analysis of argumentation can vary between argumentation frameworks. Several analysis schemes have focused on the identification of justifications and evidence that are present within a given argument to analyze argument quality (Zohar and Nemet, 2002; Erduran et al., 2004). Argument assessment has also been directly related to discipline-specific understanding (Kelly and Takao, 2002; Takao and Kelly, 2003; Sandoval and Millwood, 2005). Additionally, some approaches have focused on modifying the coding scheme developed by Toulmin (1958) to generate new coding schemes that are easier to navigate (Erduran et al., 2004; McNeill et al., 2006). The large number of approaches that can be taken in the evaluation of constructed scientific arguments again emphasizes the importance of selecting a theoretical framework that is best matched to the desired outcome of the instructional materials being developed.
The design of our activity was originally guided by the framework developed by McNeill et al. (2006) for scientific explanation using argumentation. The activity questions leading up to the scientific explanation construction section were designed to support students in their investigation of the physical models in relation to differences in molecule boiling points. Following the guidelines from the framework, the prompt was divided into three sections: claim, evidence, and reasoning. These guidelines are intended to provide strong support for the students by breaking the argument construction into more manageable parts. Therefore, we selected this framework for our material development with the goal of supporting the students in their argument construction. Each section of the prompt was explicitly defined. However, students expressed difficulty in understanding what the prompts were asking of them, even though on the surface the prompts were clear and specific. Distinguishing what would be considered a claim, evidence, or reasoning was not intuitive for students, and short explanations of each component were not enough to make these terms operational for them. Since argument construction was used as a vehicle for content validation and our goal was not to sharpen scientific argumentation skills per se, this alerted us to the need for a different framework to lessen the confusion and frustration the students experienced in this portion of the activity. We therefore sought a framework that focused less on the support of intensive argumentation processes and more on the promotion of student evidence-based explanations.
In the design of version II of the activity, we selected the framework developed by Sandoval (2003), which uses a simplified format for argument construction based on a prompt for the construction of a scientific explanation. The Sandoval (2003) framework also places an emphasis on the use of guiding questions that support students in their reasoning before introducing a succinct explanation construction prompt. Version II of our activity used this guidance to redesign the questions aimed at supporting the students' investigations of the physical models by asking them to select an appropriate answer and then provide their reason for selecting this choice. The scientific explanation construction prompt was simplified and designed to guide student attention to the various elements investigated in the activity that could support their scientific argument construction while lessening the need for the translation of argumentation skills. Table 1 shows the structure of the prompts used in both versions of the activity, which exemplifies the format of questions used in the remainder of the activity. In comparing the two versions, it is worth noting that both versions focused student attention on the same fundamental chemistry concepts but guided students at different levels of specificity.
| | Version I | Version II |
|---|---|---|
| Questions | Compare the Lewis structures and electrostatic potential maps of methanethiol and methanol. (a) What is the biggest difference you observe in the Lewis structures? (b) How is this difference reflected in the electrostatic potential maps? (c) How do these differences relate to the difference in boiling points of the two compounds? (d) Choose two other molecules from the list that show a similar relationship. | Comparing the Lewis structures of methanol and methanethiol, methanol and methanethiol are different compounds because ____________. |
| Argument prompt: claim | Using your answer to the question above and the observations you have made about the electron distributions of the molecules and the effect of heat on the system, make a claim that states how enthalpy and entropy affect the boiling point of a molecule. Answer in one sentence. | Construct an explanation for the difference in the boiling points of methanol and methanethiol. Include as many supporting details as you can from the other parts of the activity. Include aspects from the Lewis structures, electrostatic potential maps, interaction strengths, and the entropy and enthalpy changes in the systems. |
| Argument prompt: evidence | Use drawings to answer the following question and show evidential support for the claim that you are making. Draw a representation of a sample of molecules at 30 °C for methanethiol and methanol. Be sure to include aspects of what you learned from both the electrostatic potential maps and the physical models. Describe how your drawing for the sample differs from what you observed before adding heat in the thermodynamics portion of this activity in terms of enthalpy and entropy. | |
| Argument prompt: reasoning | Describe what you have drawn for your evidence and how this supports your claim for each of the following characteristics: charge separation, molecule size, strength of non-covalent interactions. | |
Two frameworks were used in the creation of the two versions of the activity to analyze arguments based primarily on the types of justifications and evidence incorporated into the arguments, similar to the analyses in the studies by Zohar and Nemet (2002) and Erduran et al. (2004), as categorized by Erduran and Jimenez-Aleixandre (2012). In order to meaningfully compare the difference in the results from the two versions, we sought to include an analysis methodology that was independent of those designed for the evaluation of constructed scientific arguments. We selected the Knowledge Integration in Science framework by Liu et al. (2008), a validated assessment framework designed to evaluate student understanding based on the type and number of connections the student makes between related areas of knowledge. This allowed us to measure the strength of student understanding in a more inclusive manner based on the written arguments the students constructed.
In addition, both versions were scored using the Knowledge Integration in Science framework developed by Liu et al. (2008) to compare argument quality across the two versions independent of the supportive framework. Scores were input into 2 × 2 contingency tables using VassarStats online statistics calculator to compare score distributions for demographic categories and between the two versions of the activity (VassarStats, http://vassarstats.net/tab2x2.html). To construct the tables, each demographic characteristic was divided into two categories as shown in Table 2 and further grouped according to scores obtained.
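The same comparison can be reproduced offline. The sketch below is a minimal, illustrative implementation of the two-sided Fisher exact test for a 2 × 2 table (one of the statistics the VassarStats page reports); the cell counts shown are invented for illustration and are not the study's data:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins whose probability does not exceed that of the observed table.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):
        # Probability of a table with x in the top-left cell, margins fixed.
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo = max(0, row1 - (n - col1))  # smallest feasible top-left cell
    hi = min(row1, col1)            # largest feasible top-left cell
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)

# Hypothetical counts: rows = activity version I/II, columns = number of
# participants scoring below / at-or-above a chosen rubric cutoff.
p = fisher_exact_2x2(9, 6, 4, 9)
print(f"p = {p:.3f}")
```

For larger workloads, `scipy.stats.fisher_exact` computes the same statistic.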
| | 0 | 1 | 2 |
|---|---|---|---|
| Claim | Blank or completely inaccurate | Only enthalpy or entropy accurate, not both | Complete claim of enthalpy and entropy relations |
| Evidence: drawing | No separation difference between methanethiol and methanol molecules | Separation difference between methanethiol and methanol molecules but molecule organization is not accounted for or is inaccurate | Separation difference between methanethiol and methanol molecules and molecule organization accurate |
| Evidence: table | One or more incorrect answers | Indicates that enthalpy and entropy both increase | Enthalpy is related to interactions and entropy is related to movement |
| Reasoning | Two or more incorrect answers or no relation to boiling points | Two or more correct but no relation to drawings | Two or more correct and related to drawings |
Claims were scored as a 0, 1, or 2 based on the accuracy and detail of the claim. The prompt asked students to include both entropy and enthalpy effects in their claim. Fig. 2 shows both the results of the claim scores and sample responses for each possible score. Only S2G1 constructed a claim with a score of 2 that correctly accounted for both entropy and enthalpy in relation to molecule boiling points. A score of 0 was assigned to four of the students, indicating that several students struggled to present a claim that was even partially correct. Of the 10 students who received a score of 1, three correctly incorporated enthalpy, as shown in the response by S3G3, while seven correctly described an entropy relation. This suggests that the students were better able to relate boiling point to the change in molecular movement than to changes in the number of interactions between molecules.
The highest scores were obtained for the drawing portion of the evidence section. The scores obtained by the students and sample responses for each score are shown in Fig. 3. The quality of the drawn evidence was scored based on the presence or absence of two characteristics: differential molecular spacing between the two samples and correct orientation of the methanol molecules.
Orientation of the molecules was scored as correct only for drawings that correctly showed the separation of the molecules. This distinction can be observed in the responses given by S1G1 and S3G6 where both drawings show a differentiation in the spacing of the molecules, but only S3G6 included information on the electron density of the interacting regions as shown with their use of color.
The majority of the students were able to correctly and clearly indicate the spacing difference between gaseous methanethiol and liquid methanol, with fourteen students receiving a score of 1 or 2. Molecular orientation was accurately shown by eight of the students who correctly represented molecule spacing and therefore received a score of 2. S3G6 depicted both of these traits by showing a larger spacing for the methanethiol molecules and using color to represent regions of differing electron density interacting in the methanol molecules. Only one of the students was unable to accurately represent the separation of the molecules, as shown in the response by S1G6, who used only a single molecule in each representation. This suggests that most of the students who were able to differentiate between the two phases also identified molecule electron densities as important evidence in supporting their claim about molecule boiling points.
For the construction of evidence in the table, students' answers were analyzed to determine if they were able to connect entropy with molecule movement and enthalpy with interactions, as discussed in the other portions of the activity. Fig. 4 shows the scores for the students for the table portion of the evidence section and sample responses for each score. The prompts used in this section were intended to guide student thought to the relationships they investigated in the previous section of the activity. However, even though the prompts were explicitly deconstructed to provide students with a straightforward template for supporting evidence, the results we obtained consisted of responses with single words or short phrases. None of the students correctly related both entropy and enthalpy to the concepts of movement and interactions they had investigated in the activity, and only four attempted to relate enthalpy to interactions between molecules. These stilted responses indicate that students were treating the argument construction section as a series of questions to answer instead of following the prompts to create a cohesive discussion of noncovalent interactions. This supported the need to address how these topics were presented in the activity and was a consideration in the rewording of the question prompts for version II of the activity.
The final section of the argument construction portion of version I that was scored was the reasoning. Three molecular characteristics that were predicted to be influential in the evidence section were included in the reasoning prompt: charge separation, molecule size, and strength of non-covalent interactions. Scoring was based on the accuracy of two or more of the statements and whether the descriptions were tied to the rest of the argument or independent of it. The scores obtained for the reasoning section and sample responses from students are shown in Fig. 5. Only two of the students related the terms correctly to either their drawings or the molecules under discussion, indicating that the majority of the students were unable to use the reasoning portion of the activity as part of a whole, cogent argument. This reinforced the findings from the responses in the evidence table, which indicated that students were not constructing a connected argument but were instead treating each response as a question to answer independently of the other prompts and of the remainder of the activity.
An independent rater scored a portion of the activities for interrater reliability data. The rater scored five of the arguments constructed by participants from group one with 85% agreement for the scores from all four categories of the argument components. No further discussion was undertaken.
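Percent agreement of this kind is simply the fraction of paired scores on which the two raters match. A minimal sketch, with fabricated score lists (not the study's data) chosen only to illustrate an 85% rate over five arguments × four rubric categories:

```python
def percent_agreement(rater1, rater2):
    """Share (as a percentage) of positions where two raters agree."""
    if len(rater1) != len(rater2):
        raise ValueError("score lists must be the same length")
    matches = sum(a == b for a, b in zip(rater1, rater2))
    return 100 * matches / len(rater1)

# 5 arguments x 4 rubric categories = 20 paired scores (made-up values).
primary     = [2, 1, 1, 0, 2, 1, 2, 2, 1, 0, 1, 2, 0, 1, 2, 1, 1, 2, 0, 2]
independent = [2, 1, 1, 0, 2, 1, 2, 1, 1, 0, 1, 2, 0, 1, 2, 1, 0, 2, 0, 1]
print(f"{percent_agreement(primary, independent):.0f}% agreement")  # 85% agreement
```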
The use of the framework by McNeill et al. (2006) constitutes a portion of a multi-faceted activity on non-covalent interactions. However, the student responses clearly showed difficulties in the construction of quality arguments. In the context of the activity, the argument construction prompts seemed to offer limited benefits to the students, who often expressed confusion. The selected framework was originally shown to be effective after explicit instruction and practice and was tested with a group of middle school students. Since our participants were given minimal instruction on argumentation, it was unsurprising that the argument construction prompts caused difficulties for the students. Another area for improvement involved how to better evaluate the scientific accuracy of the constructed arguments. These considerations led to the use of an alternative framework for the argument construction component and the development of a second version of the activity.
The constructed arguments were scored based on three characteristics: causal elements, articulations, and warrants. Causal elements were identified based upon examination of the constructed arguments and the structure of the activity. Four causal elements were identified as commonly occurring and relevant to the activity: enthalpy, entropy, number of interactions, and electron density. An articulation was credited if an acceptable articulation of the difference in the causal element property was given. A point for a warrant was assigned if a scientifically accurate reasoning was provided for the selected causal element. Scores were determined based upon the number of causal elements, articulations, and warrants present in each constructed argument.
Each of the arguments constructed by the students was first scored using the rubric shown in Table 4. The results from the scoring of the students' arguments are shown in Fig. 6 along with examples of an articulation and a valid warrant for each causal element. Only one of the students included all four articulated causal elements, and an additional five students included three of the four. Arguments including only one or two causal elements were constructed by seven of the students, indicating that these students were either limited by time or did not consider all four elements necessary to describe the difference in boiling points between the two molecules. This indicates that, although specific causal elements were not included in the prompt, students were able to recognize the need to include multiple elements in the construction of their argument.
Causal elements | Articulations | Warrants |
---|---|---|
Strength of noncovalent interactions | Description of types of interactions | Relation to difference in atomic composition |
Electron density | Difference in densities between the two molecules | Relation to atomic composition or electronegativities of atoms present |
Entropy | Entropy change is greater for methanethiol than methanol | Relation to spacing of molecules |
Enthalpy | Enthalpy change is greater for methanethiol than methanol | Relation to number or strength of interactions present |
Fig. 6 Scores from version II of the activity, designed using the Sandoval (2003) framework.
While all of the students incorporated at least one causal element with an appropriate articulation, a smaller proportion of the articulations were presented with accurate warrants. Of the 31 included articulations, only 14 were scientifically warranted. Twelve of the students accurately warranted only one of their included articulations. The remaining two students included two valid warrants each, and none of the students included more than two. The inclusion of articulations and warrants indicates that the students were forming connections in the construction of their arguments and found this reasoning valuable in support of the claim they were making.
Another analysis conducted with version II was the causal coherence score which gives a measure of the cohesiveness of constructed arguments (Sandoval, 2003). This score is determined by mapping the different elements that the student uses to construct their argument. If these ideas are connected, then this connection is indicated on the map between the two elements. The longest path of connected elements in this network is divided by the total number of elements to determine a causal coherence score between 0 and 1, with a low score indicating a very disjointed argument and a score of 1 indicating a completely cohesive argument.
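As an illustration of this calculation, the sketch below builds the idea map as an undirected graph and divides the longest chain of connected elements by the element count. Treating the "longest path" as the longest simple path in the graph is our reading of the description, and the element names are hypothetical:

```python
def causal_coherence(elements, links):
    """Causal coherence: longest chain of connected elements divided by
    the total number of elements (score between 0 and 1)."""
    graph = {e: set() for e in elements}
    for a, b in links:
        graph[a].add(b)
        graph[b].add(a)

    def longest_from(node, visited):
        # Depth-first search over simple paths; fine for small argument maps.
        best = len(visited)
        for nxt in graph[node] - visited:
            best = max(best, longest_from(nxt, visited | {nxt}))
        return best

    longest = max((longest_from(e, {e}) for e in elements), default=0)
    return longest / len(elements)

# A student chains enthalpy -> interactions -> electron density but
# mentions entropy without connecting it: 3 of 4 elements in the chain.
elems = ["enthalpy", "interactions", "electron density", "entropy"]
links = [("enthalpy", "interactions"), ("interactions", "electron density")]
print(causal_coherence(elems, links))  # 0.75
```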
An independent rater scored a portion of the activities for interrater reliability data. The rater scored four of the arguments constructed by participants from group five with 78% agreement for the number of articulations and warrants assigned to each participant. After discussion of the assigned scores, agreement rose to 100%.
The causal coherence scores were calculated for each argument constructed by the thirteen participants who completed version II. Examples of the maps constructed from participants' arguments as well as the calculated scores are shown in Fig. 7. The scores ranged between 0.33 and 1.0, with six of the students receiving a score of 1.0. Only one of the students scored below 0.5, indicating that most of the participants were able to construct highly cohesive arguments. The high coherence scores reflect the students' ability to connect the ideas that they included in their arguments.
Fig. 7 Causal coherence scores calculated for participants in version II and three examples of coherence networks constructed from sample arguments.
The majority of the arguments constructed by the students fall into the category of highly cohesive arguments even though much simpler prompts were used in version II. The simplification of the argument structure led to an assessment scheme that clearly showed both the quality of a constructed argument and the scientific validity of the included statements. The calculated scores indicate that the Sandoval (2003) framework was an effective structure for the design of the activity and can give insights into student understanding.
In order to directly compare across all completed activities from both versions, we needed a way to measure these connections independent of the activity structure. For this purpose, we used the Knowledge Integration in Science framework developed by Liu et al. (2008). This validated framework investigates the connections that students make between different pieces of knowledge, which are indicative of a deeper understanding, and scores knowledge integration by assessing the number of links formed by the student. Fig. 8 shows the rubric developed by Liu et al. (2008) for scoring knowledge integration. Scores of 0 and 1 are assigned to students who give no answer or an off-task answer, respectively. If no links are formed between any scientific ideas, or only nonnormative links are present, a score of 2 (no link) is assigned. If scientifically normative ideas are present but no connections between them are identified, this is classified as a partial link; if only partial links are present, a score of 3 is assigned. Finally, two scientifically normative ideas that are effectively linked constitute a full link. A score of 4 is assigned for one full link and a score of 5 for multiple full links.
Fig. 8 Scoring rubric adapted from Liu et al. (2008) for Knowledge Integration in Science framework.
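The rubric logic above can be expressed as a small scoring function. The input features (answered, on task, counts of normative ideas and full links) are a hypothetical coding of a student response, not the authors' instrument:

```python
def kis_score(answered, on_task, normative_ideas, full_links):
    """Assign a Knowledge Integration in Science score per the
    rubric adapted from Liu et al. (2008).

    answered, on_task: bools describing the response
    normative_ideas: count of scientifically normative ideas present
    full_links: count of effective links between two normative ideas
    """
    if not answered:
        return 0   # blank response
    if not on_task:
        return 1   # off-task response
    if full_links >= 2:
        return 5   # multiple full links
    if full_links == 1:
        return 4   # one full link
    if normative_ideas >= 1:
        return 3   # partial link(s) only
    return 2       # no link, or nonnormative links only
```

The ordering of the checks mirrors the rubric: full links take precedence over partial links, which take precedence over the no-link category.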
The Knowledge Integration in Science framework (Liu et al., 2008) was used to evaluate the arguments constructed by all participants; the results are shown in Fig. 9. No participant in either group was off task or left a blank response, so no scores of 0 or 1 were assigned. A significant difference between the two groups was found in the number and types of links that participants made: none of the students who completed version I of the activity constructed full links within their arguments, while seven of the thirteen participants who completed version II constructed one or more full links.
Fig. 9 Calculated Knowledge Integration in Science (Liu et al., 2008) scores from versions I and II. Scores rated as follows: 0 = blank; 1 = off task; 2 = no link; 3 = partial link; 4 = one full link; 5 = multiple full links.
An independent rater scored a portion of the activities for interrater reliability data. The rater scored five of the arguments constructed by participants from group one and four of the arguments constructed by participants in group five. The scores were divided into low and high categories to match the assignment used in the statistical analysis. Agreement was 100% across the KIS score categories, so no further discussion was needed.
We used the collected demographic data to investigate other possible contributing factors to the differences in scores that we observed. Each demographic characteristic was assigned two categories as shown in Table 5. Participants in each category were then classified as low scoring if they received a score of 2 or 3, or high scoring if they received a score of 4 or 5; these classes separated students who did not construct a full link from students who constructed one or more full links. The classes were then used to construct 2 × 2 contingency tables, and Fisher's exact test was used to determine a p value for the difference between low and high scoring students in each table. A table was also constructed to compare low and high scoring students between the two versions of the activity. As shown in Table 5, none of the demographic characteristics produced a statistically significant difference in the scoring categories. However, when comparing the score distributions between the two versions of the activity, we found a statistically significant difference with a p value of 0.0014. These findings directly support the conclusion that students using the second version of the activity constructed higher quality arguments and that this result was independent of demographic differences among the participants.
Table 5 Comparison of low and high scoring participants by demographic category and activity version

| Category | Comparison | Low scoring | High scoring |
|---|---|---|---|
| Major^a (p = 1) | Science | 15 | 5 |
| | Engineering | 5 | 2 |
| Gender (p = 0.64) | Female | 15 | 6 |
| | Male | 6 | 1 |
| GPA^b (p = 1) | <3.0 | 4 | 1 |
| | ≥3.0 | 16 | 6 |
| Grade (p = 0.67) | A/B | 9 | 4 |
| | C or lower | 12 | 3 |
| Version (p = 0.0014) | 1 | 15 | 0 |
| | 2 | 6 | 7 |

^a Total n = 37, one English major not included. ^b Total n = 37, one GPA not reported.
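The version comparison above can be reproduced with a minimal, standard-library-only sketch of the two-sided Fisher's exact test (equivalently, `scipy.stats.fisher_exact` could be used):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test p value for the table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins that are no more probable than the observed table.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, row1)

    def prob(x):
        # P(cell (1,1) == x) under fixed margins (hypergeometric)
        return comb(col1, x) * comb(n - col1, row1 - x) / denom

    p_obs = prob(a)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    # small tolerance guards against floating-point ties
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# Version comparison from Table 5: version 1 (15 low, 0 high)
# vs. version 2 (6 low, 7 high)
p = fisher_exact_2x2(15, 0, 6, 7)
print(round(p, 4))  # 0.0014, matching the reported value
```

The gender table (15, 6 vs. 6, 1) likewise reproduces the reported p = 0.64.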
Comparison using the Knowledge Integration in Science (Liu et al., 2008) scoring rubric allowed for a direct assessment of differences in the quality of constructed arguments independent of the argumentation framework. Our findings showed that only students who completed version II of the activity constructed one or more full links, indicating that these students built higher quality arguments and demonstrated stronger evaluation skills. Even with a prompt that simply emphasized the construction of a supported explanation, students included stronger supportive evidence in their arguments. Version II was therefore determined to be more effective for scientific explanation construction and will serve as a template in the development of the series of activities focused on noncovalent interactions.
A second limitation involved the recruitment method for volunteers. Although groups were comparable and included both low and high performing students, it is possible that a bias was introduced by recruiting students with the incentive of extra credit.
Another consideration for teaching is the time required for appropriate assessment. The Sandoval (2003) framework simplifies both the prompt scheme and the analysis: we found the process of identifying causal elements, articulations, and warrants to be more objective and considerably quicker. In addition, by employing an independent framework for scoring, we were able to compare the constructed arguments across two different versions of the same activity. These considerations support the need to carefully examine at least some results before the final selection of a framework for the design of instructional materials.
This journal is © The Royal Society of Chemistry 2016