Exploring the effect of a technology-supported science writing heuristic approach on pre-service science teachers’ written argumentation, representation, and reasoning
Received 2nd January 2025, Accepted 4th August 2025
First published on 12th August 2025
Abstract
This study examined the utilization and correlation of written argumentation, multiple representations, and reasoning of pre-service science teachers (PSTs) in three learning environments (normal laboratory, virtual laboratory, and mixed laboratory) based on the Science Writing Heuristic (SWH) approach. An embedded single case design was employed, with a sample consisting of 20 first-year PSTs. Data consisted of 262 SWH laboratory reports prepared by the PSTs in a Chemistry 1 course over one semester. The SWH laboratory reports were analyzed using rubrics in three dimensions: argument quality, multiple representations, and reasoning. Friedman, Wilcoxon, and Spearman correlation tests were used to analyze the holistic rubric scores. The results revealed that PSTs' use of argument, multiple levels of representation, and reasoning developed in parallel, especially in the virtual and mixed learning environments. There was a significant positive correlation between argument, representation, and reasoning in each learning environment. Compared to the virtual and normal lab environments, there was a statistically significant difference in favor of the mixed lab in terms of argument, multiple representations, and reasoning. Additionally, incorporating writing-to-learn activities and small-group and whole-class discussions is recommended to enhance learning.
Introduction
Recent national science curricula emphasize involving students in conducting inquiry and developing basic digital competencies in science and technology (NRC, 2012; MONE, 2018; Hand et al., 2021). Students are expected to engage in reasoning processes within inquiry, use opposing arguments to support ideas through different lines of reasoning, and utilize their language skills (reading, writing, speaking, and listening). In addition, students are expected to explain science concepts using multiple representations (formulas, models, graphs, and tables) (NRC, 2012; MONE, 2018). In science education, and especially in chemistry, multiple levels of representation (macroscopic, microscopic, symbolic, and algebraic) help students understand the nature of science/chemistry and develop conceptual understanding. Studies show that students use multiple representations in a more interconnected way and achieve more meaningful learning when supporting their claims and evidence in argument-based inquiry environments (Yaman, 2019, 2020). In addition, studies have highlighted the need for technology-supported applications to help students make more connections between multiple representations in chemistry (Erduran and Papuçcu-Akış, 2023). To offer this opportunity to students, teachers are expected to act as guides in the classroom and to have knowledge and skills related to argument, reasoning, and representations. Therefore, there is a need to provide opportunities for pre-service science teachers (PSTs) to develop their argument, reasoning, and representation skills in teacher training programs, because science teachers tend to teach the way they themselves were taught during their university years (Suh, 2016; Yaman and Hand, 2025). To prepare teachers to teach argument, reasoning, and representation skills, they need to develop these skills during their own education.
However, studies show that the number of learning environments that offer these opportunities to PSTs is limited.
Recent studies on the Science Writing Heuristic (SWH) approach have shown that it can provide PSTs with opportunities to engage in argument, reasoning, and representations in an argument-based inquiry environment. Studies show that in learning environments where the SWH approach is used, writing activities serve as a learning tool and contribute to the development of students' scientific reasoning skills (McDermott and Hand, 2010; Hand, 2017; Hand et al., 2021). Previous studies comparing the SWH approach to the normal lab have shown that the arguments, multiple representations, and reasoning of PSTs in argument-based writing increase at greater rates over time (Yaman, 2018; Yaman and Hand, 2022, 2024a, b). Moreover, the results show that the development of argument and representation are parallel to each other. In a similar vein, in a virtual lab using the SWH approach, research highlights that students' argumentative writing and representations increased over time (Yaman, 2019). More recently, research has highlighted that PSTs preferred the mixed lab for engaging in representation in their oral argumentation (Yaman and Hand, 2024a, b). Further research highlights that most SWH studies examining students' use of multiple representations were conducted in normal laboratory environments (Yaman, 2018, 2019; Yaman and Hand, 2022). There are limited studies in which students engage in the SWH approach within virtual laboratory environments or a mixed laboratory where virtual and normal laboratories are used together (Yaman and Hand, 2024a, b). Outside of these few studies, little is known about how PSTs develop their arguments, representations, and reasoning in different SWH environments, including the normal lab, virtual lab, and mixed lab, as reflected in their written argumentation.
This study focused on how PSTs' written argument quality, use of multiple representations in chemistry (macroscopic, microscopic, symbolic, and algebraic), and reasoning differ in three SWH-framed learning environments (normal laboratory, virtual laboratory, and mixed laboratory). Specifically, we investigated whether there were any differences in terms of argument, multiple representation, and reasoning usage between the three learning environments and whether there was any relationship between argument, multiple representation, and reasoning in each of these environments. For this purpose, interactive simulations and the virtual chemistry laboratory were carried out in accordance with the SWH approach and integrated into the Chemistry 1 course. The following questions guided the study.
1. Is there any difference between argument, representation and reasoning scores in (a) normal lab, (b) virtual lab, and (c) mixed lab learning environments?
2. What is the correlation between argument, multiple representation, and reasoning skills within each learning environment in which PSTs participate?
3. Is there a significant difference in (a) argument quality, (b) multiple representation, and (c) reasoning skills between the learning environments (normal lab, virtual lab, mixed lab) in which PSTs participate?
Literature review
Science writing heuristic (SWH) approach
Argument-based inquiry activities have an important place in science education. One form of argument-based inquiry is the SWH approach, which combines interactive laboratory activities with intra- and inter-group discussions in student-centered classroom environments. In this context, students discuss the data, observations, claims, and evidence they obtained in the laboratory. Laboratory courses taught with the SWH approach are more effective in increasing students' learning and success than courses taught with traditional laboratory formats (Burke et al., 2005; Hand and Choi, 2010; Hand et al., 2021).
In the SWH approach, students construct knowledge by asking questions, making claims, and supporting these claims with evidence. Learning environments designed in accordance with the SWH approach support the development of students' language skills (listening, speaking, reading, and writing). For example, students improve speaking and listening skills as they discuss their research questions, methods, observations, claims, and evidence in small groups and whole class settings. They improve their writing and reading skills with the SWH student reports, a writing-to-learn activity that requires them to research and discuss their results in the reading and reflection section (Yaman et al., 2019). The SWH approach provides students with opportunities for discussion, reasoning, and negotiation (Chen et al., 2016), all of which are important in improving an individual's ability to use scientific language and concepts (Wellington and Osborne, 2001). Within the SWH approach, students create questions for the research, justify why they chose these questions, and discuss which designs best address their questions. Reasoning is carried out throughout the process as students question every step they take and all information they encounter in the SWH learning environment (Yaman et al., 2019).
Researchers have examined the effects of the SWH approach on skills such as academic success, argument quality, scientific process skills, communication skills, metacognitive knowledge, and problem-solving skills in a variety of contexts, but rarely across learning environments (normal lab, virtual lab, mixed lab). In normal laboratory environments, the SWH approach has been found to increase the participants' scientific process skills (Er and Kırındı, 2020), support their learning processes (Hand et al., 2017), and improve science success and scientific writing (Tekindur, 2022) and communication and problem-solving skills (Kutru and Hasançebi, 2024). In a virtual laboratory environment, activities designed in accordance with the SWH approach increased the participants' multiple representation skills (Yaman, 2019). However, there are few studies using virtual laboratories in the SWH approach (Yaman, 2019). Research has shown that the mixed laboratory environment was more effective in developing students' multiple representation skills compared to laboratory-only and virtual-laboratory-only environments (Yaman and Hand, 2024a, b). However, there are few studies examining the effects of all three learning environments designed in accordance with the SWH approach on multiple representation, reasoning, and argument quality.
Multiple representations in chemistry
Multiple representations in chemistry education are an important tool in developing students' conceptual understanding, and the inclusion of these in teaching processes contributes to students' development of a deeper and broader understanding (Permatasari et al., 2022). The use of multiple representation levels (macroscopic, microscopic, symbolic, and algebraic) plays an important role in students' understanding of chemistry (Gabel, 1999; Yaman and Hand, 2022). The macroscopic level includes observable and concrete events experienced in daily life or in a laboratory environment (Treagust et al., 2003; Chandrasegaran et al., 2008). The microscopic level covers the movements and numbers of atoms, molecules, and electrons, and the atomic structure of matter and bonding theory (Hinton and Nakhleh, 1999; Chandrasegaran et al., 2008). The symbolic level includes equations, formulas, diagrams, molecular level drawings, and models. Finally, the algebraic level covers mathematical operations related to graphics and formulas (Nakhleh and Krajcik, 1994; Talanquer, 2011).
Studies have shown that students should be given opportunities to use multiple representations (Hinton and Nakhleh, 1999) and to write and speak concurrently when criticizing and creating arguments because it helps them develop higher order skills (Chen et al., 2016). When students are provided with argument-based inquiry (e.g. SWH approach) in a normal laboratory environment, studies have shown that students’ written multiple representations and written arguments are improved (Hand and Choi, 2010; Yaman, 2020). In addition, research has indicated that multiple representations used in written argumentation allow students to understand the concepts related to the subject in more depth (Yaman, 2020). However, Allred and Bretz (2019) found that students had difficulty connecting various representations describing the molecular level and understanding abstract mathematical expressions. Chandrasegaran et al. (2008) suggest that technology (such as simulation and visual tools) should be used more and integrated into lessons to help students use multiple representations more effectively.
Studies show that virtual environments are more effective than real environments in helping students connect multiple representations (Barrett et al., 2015). However, the current recommendation is that virtual laboratory environments be used as a complementary tool rather than a replacement for traditional chemistry teaching methods, and that such virtual environments be designed in parallel with the course content, allowing students to test their theoretical knowledge (Davenport et al., 2018). For example, Wang et al. (2022) showed that students had difficulties in associating macroscopic observations (such as color change in a reaction) with molecular level interactions (such as electron transfer), but they were able to visualize and understand abstract relationships better thanks to the technologies integrated into the course. In a study where a hybrid model (normal laboratory and online module) was used, findings show that the model provided students with the opportunity to learn concepts in practice, while online modules contributed to their understanding of abstract representations (Irby et al., 2018). Research indicates that PSTs preferred the mixed laboratory environment in terms of the use of representations, and the mixed lab was beneficial in encouraging the development of representational competency (Yaman and Hand, 2024a, b).
In this study, we aimed to examine the PSTs' written SWH laboratory reports within the scope of the Chemistry I course in terms of multiple representation use in three different learning environments (normal laboratory, virtual laboratory, and mixed laboratory).
Scientific reasoning
Scientific reasoning skills have a very important place among higher-level thinking skills (Dökme, 2019). Students' processes of questioning the information they have learned during science experiments, establishing hypotheses about the experiment, determining variables, testing their hypotheses, and trying to reach a conclusion accordingly require reasoning skills (Dökme, 2019). Studies show that writing, in addition to being used as a communication tool, is used to promote scientific reasoning skills (McDermott and Hand, 2010; Hand et al., 2018). For example, in SWH learning environments, which provide students writing-to-learn activities, research has highlighted that students improve their reasoning skills (Yaman and Hand, 2024a, b) as they structure their questions, claims, and evidence. Each component of the SWH approach requires reasoning (Hand et al., 2017).
In science education, there has been a shift from using Toulmin's analytical argument reasoning structure to utilizing Walton's argument schemes (e.g., argument from sign, argument from example, analogy, and evidence to hypothesis) (Duschl and Osborne, 2002). Nussbaum (2021) and Rapanta and Christodoulou (2022) state that, because argument-based inquiry approaches are focused on dialogical interactions between participants in the process of generating arguments, Walton's argument schemes are much more appropriate. Walton's argument schemes provide students with the meaning of argument structures and support logical consistency (Erduran and Jiménez-Aleixandre, 2007) through the implementation of a Premise, Justification, and Conclusion cycle (Walton, 2016; Yaman and Hand, 2024a, b). This cycle aligns with the Questions, Claims, and Evidence structure of the SWH approach (Washburn and Cavagnetto, 2013). Osborne et al. (2004) argue that utilizing Walton's argument schemes allows errors to be detected and enables students to reach more robust scientific thinking structures. In this study, we used the same 12 of Walton's argument schemes used by Yaman and Hand (2024a, b) because these argument schemes are the most commonly used in science education. Utilizing these schemes enabled us to examine students' reasoning development in a chemistry course in three different learning environments (normal laboratory, virtual laboratory, mixed laboratory) across one semester.
Method
A case study was used as the qualitative research method in this study because it provides a deep perspective on events (Yin, 2014). Specifically, an embedded single case design was used since it allowed in-depth examination of the research questions with qualitative data and allowed comparisons across data sets. There may be more than one unit of analysis within a single case (Yin, 2014; Yıldırım and Şimşek, 2021). For this study, the case was the development and correlation of written arguments, representations, and reasoning of 20 PSTs enrolled in a Chemistry 1 course over one semester. The three units of analysis were the normal laboratory, virtual laboratory, and mixed laboratory learning environments, whose results were compared with each other by quantifying the qualitative data using statistical analysis.
Participants
The participants of the study consisted of 20 PSTs, 6 males and 14 females, aged 18–22, studying in the first year of the Science Teaching program of a state university in the Central Anatolia region of Türkiye. The PSTs were placed at the university according to the scores they received in the fields of physics, chemistry, biology and mathematics in the central examination system. This study was carried out in Chemistry I, a compulsory course. Necessary ethical permissions for the study were obtained from the university's ethics committee, and 20 PSTs participated voluntarily in the study.
SWH-laboratory environments and topics
In this study, the PSTs engaged in 14 SWH-framed activities over one semester. Each SWH activity lasted four lesson hours and utilized one of three different laboratory environments: normal laboratory, virtual laboratory, and mixed laboratory, in which virtual and normal laboratories are used together. As shown in Table 1, some experiments were carried out in a normal laboratory framework, while others were carried out in the virtual chemistry laboratory developed by Tatli and Ayas (2013) and with interactive simulations (PhET) developed by the University of Colorado (Moore et al., 2014). In the normal lab, the PSTs collected their data using equipment (e.g., test tubes, beakers, or an analytical balance) found in the lab. No technology (e.g., PhET interactive simulations) was included, and the PSTs conducted only hands-on experiments using the SWH approach in the laboratory. In the virtual lab, the PSTs conducted only technology-based experiments (e.g., PhET interactive simulations or the virtual chemistry laboratory) using the SWH approach in class. In the mixed lab, the PSTs used both laboratory experiments and technology-based experiments while performing the SWH approach in the laboratory.
Table 1 SWH experiment name and learning environment conducted in chemistry I courses
Learning environment | NEa | W&ENb | Normal laboratory: experiment name | PhET simulation: experiment name | Virtual laboratory: experiment name
Normal laboratory | 3 | W2E2 | Separation of mixtures | |
 | | W3E5 | Fundamental laws of chemistry | |
 | | W4E4 | Density of substances | |
Virtual laboratory | 5 | W1E1 | | | States of matter
 | | W5E6 | | | Subatomic particles and isotopes
 | | W6E7 | | | Creating a molecule
 | | W7E8 | | | Balancing chemical equations
 | | W8E9 | | Limiting component |
Mixed laboratory | 6 | W2E3 | Chemical and physical changes | | Chemical and physical changes
 | | W9E10 | Solutions | Solutions |
 | | W10E11 | Acids and bases | Acids and bases |
 | | W11E12 | Dissolution–precipitation reactions | | Dissolution–precipitation reactions
 | | W12E13 | Redox reactions | | Redox reactions
 | | W13E14 | Gases | Gases |
a NE: number of experiments. b W&EN: weeks and experiment number.
As seen in Table 1, three experiments were conducted in the normal lab, five experiments were implemented in the virtual lab, and six experiments were done in the mixed lab. There was no fixed sequence in implementing the laboratory environments. For example, in the first, fifth, sixth, seventh, and eighth weeks, the virtual lab was implemented; in the second, ninth, tenth, eleventh, twelfth, and thirteenth weeks, the mixed lab was implemented; while in the second, third, and fourth weeks, the normal lab was implemented. The topics were aligned with the concepts that are supposed to be taught each week based on the higher education council document for preservice science teacher training programs in Türkiye (HEC, 2018). For this study, if a topic (e.g., Fundamental Laws of Chemistry) had only lab experiments and no technology-based experiments, then the PSTs engaged in SWH activities in the normal lab. If a topic (e.g., Subatomic Particles and Isotopes) had only technology-based experiments, then the PSTs conducted the virtual lab. If a topic (e.g., acids and bases) had both opportunities, the PSTs engaged in the mixed lab and carried out SWH activities using both hands-on experiments and simulations or virtual laboratories.
There were 14 different chemistry topics in the study. The topics consisted of laboratory activities carried out as a part of a Chemistry I course lasting 14 weeks over one semester. The PSTs met once a week, and the course was taught using the SWH approach for 4 hours per week. The Chemistry I course can be considered to have two major components, labs and lectures; however, there was no separation of lab and lectures in this 4-hour block. Instead, there was "just in time" instruction, that is, direct information provided when the PSTs did not fully understand a topic. As shown in Table 1, the topics included observing states of matter, separating mixtures, noting chemical and physical changes, identifying density, basic laws of chemistry, subatomic particles and isotopes, molecule formation, balancing chemical equations, determining the limiting component, calculating solutions, observing acids and bases, dissolution, precipitation reactions, redox reactions, and gases.
Process
At the beginning of the semester, the PSTs were introduced to the SWH approach and its student template. There were seven components in the student template of the SWH approach: (1) beginning questions (what are my questions?), (2) method (what did I do?), (3) observation (what can I see?), (4) claim (what can I claim?), (5) evidence (how do I know? Why am I making these claims?), (6) reading (how do my ideas compare with others'?), and (7) reflection (how have my ideas changed?). The purpose of the beginning question was to provide PSTs opportunities to ask questions that they want to investigate. The method component required PSTs to design an approach to find answers for the questions that they outlined. The observation component was centered on collecting and recording data. The claims and evidence components were focused on encouraging PSTs to construct knowledge claims based on evidence. The purposes of the reading and reflection components were to provide PSTs with opportunities to compare their claims to disciplinary accepted knowledge and to explain how their ideas had changed based on what they did and read. Another purpose of the claims and evidence and the reading and reflection components was to provide external evaluation as PSTs critiqued and constructed arguments. Using these components, the PSTs engaged in pre-lab, during-lab, and post-lab activities in the SWH approach, as shown in Table 2. In all three environments, the PSTs were not given any instruction or feedback on how to use, develop, or embed argument, representation, and reasoning in their SWH laboratory reports. Rather than using the traditional audience of the instructor, the PSTs were required to prepare their written reports for new incoming students who are not familiar with the topic. This followed writing-to-learn approaches to promote student learning of concepts (Graham et al., 2020).
Table 2 The SWH student template and expectations from pre-service science teachers (PSTs)

Before class
SWH student template: 1. Beginning questions – What are my questions? (individual); 2. Method – What did I do? (individual)
Expectations. PSTs,
(a) Draw a pre-concept map related to the subject in the experiment.
(b) Prepare beginning questions individually.
(c) Research the method, i.e. how to answer the beginning questions.
(d) Determine the precautions to be taken for the experiment to be conducted in the laboratory.

During class
1. Beginning questions (group) – What are my questions?
(a) PSTs come together with their small groups, discuss the beginning questions they have prepared individually, and determine the group beginning questions. Then, they write the questions they have determined for the research on the board.
(b) The whole class discusses the research questions written on the board by the small groups. Then the PSTs decide which questions will be examined.
2. Method (group) – What did I do?
(c) The whole class discusses the method to be followed in order to test the class questions.
3. Data/observations – What did I observe? PSTs,
(a) Prepare a group and class table containing dependent and independent variables.
(b) Divide into groups to conduct their experiments.
(c) Take notes on data and observations during the experiment.
(d) Look for patterns and abnormalities in the data.
(e) Repeat the experiment if there are abnormalities in the data obtained as a result of the experiment.
(f) Create graphs from the data obtained after the experiment and interpret these graphs.
4. Claim – Based on my data and observations, what can I claim in response to my beginning questions?
(a) PSTs, considering the data they obtained together with their small groups, create claims that can answer the beginning questions.
5. Evidence – Why do I make these claims? What are the grounds for my claims?
(b) Small groups justify the claims they created and create their evidence.
(c) Small groups write the data they obtained as a result of the experiment and the claims they prepared on the board.
(d) Group spokespeople chosen from among the small groups come to the board one by one and share with their classmates what the beginning questions were, how they followed a path to find answers to these questions, and what kind of data they obtained from the experiments they conducted. After obtaining the data, they explain how they created the claim and support it with evidence.

After class
6. Reading – How do my results compare to other studies? Do the studies I have conducted support or refute my claims?
7. Reflection – How did my ideas change before and after the experiment? PSTs,
(a) Individually research the information they learned in the laboratory from at least three different sources (textbooks, magazines, websites, articles, etc.).
(b) Express how the research they conducted with the help of sources supports or refutes their claims.
(c) Express in the reflection section whether their ideas changed after the results they obtained before and after the experiment and after the research.
(d) Draw the post-concept map.
Data collection tools
In this study, the SWH student template (the SWH lab reports) was used to collect data related to PSTs’ written argument, representation and reasoning. Specifically, 262 SWH lab reports, ranging from 3–19 pages in length, were collected over one semester and analyzed in terms of written argument, representation and reasoning.
Data analysis
This study was a continuation of research on first-year undergraduate chemistry labs focusing on PSTs' development and utilization of argument, reasoning, and representations (Yaman and Hand, 2022, 2024a, b, 2025). Each of these studies has utilized the same lab course, topics, and scoring rubrics as a means to build a rich understanding of PSTs' use, and thus the analytical frame for this study was a continuation of this research. In this study, when scoring PSTs' argument, representation, and reasoning, we used a 0-5-10-15 scoring system. This system is mathematically equivalent to the traditional 0-1-2-3 scale through linear transformation principles. Both systems maintain identical ordinal relationships, as derived scores are obtained by applying a mathematical transformation to raw scores. As established by Stevens (1946) in his seminal work on measurement scales, ordinal scales preserve their ranking properties when monotonic increasing functions (y = kx, k > 0) are applied to the data. The transformation from 0-1-2-3 to 0-5-10-15 follows the function y = 5x (where k = 5), thereby maintaining the ordinal structure of the measurement.
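A minimal sketch (with hypothetical rubric scores, not the study's data) of why rank-based analyses are unaffected by this rescaling: applying y = 5x to every raw score leaves the rank ordering of the scores unchanged.

```python
# Hypothetical rubric scores on the raw 0-1-2-3 scale.
raw = [0, 1, 2, 3, 1, 2]
# The same scores after the monotonic transformation y = 5x.
scaled = [5 * x for x in raw]

def ranks(xs):
    """Return the rank position of each element (stable for ties)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

# Identical ordinal structure: rank-based tests see the same data.
assert ranks(raw) == ranks(scaled)
```

Because Friedman, Wilcoxon, and Spearman tests operate on ranks, the two scales yield identical test statistics.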
Analysis of written arguments
The written arguments of PSTs were analyzed in two stages: analytic and holistic (Choi, 2008; Choi et al., 2013; Yaman, 2018, 2019). In the analytic analysis, the beginning questions, claims, evidence, and reading and reflection components in the SWH student template were examined to understand whether the quality of arguments differed by component. In the holistic analysis stage, the focus was on the strength of the arguments and the relationship between the components, regardless of which part of the SWH they were in. In both stages of the analysis, scoring was done on a 0–15 scale. When a PST's written argument was very weak, weak, moderate, or strong, a score of 0, 5, 10, or 15 was given, respectively. Appendix 1 contains the criteria and scoring scale showing how analytic and holistic arguments were scored. Appendix 2 shows PST4's written argument scores using the analytic and holistic frameworks for Experiment 14.
In the second stage of the analysis, the holistic argument scores obtained from the reports of the PSTs were analyzed using SPSS. The development of PSTs' written arguments over time was analyzed using a Friedman test, a non-parametric statistical test for repeated measures (Smalheiser, 2017). This test determines whether the mean scores of two or more related measurement sets differ significantly from each other. In this study, we used a Friedman test because the assumptions of normality and homogeneity of variances were not met. Since the Friedman test does not include a post hoc multiple comparison procedure, Wilcoxon signed-rank tests were conducted post hoc to determine whether there was a significant difference between the scores of each pair of related measurement sets (Büyüköztürk, 2020).
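As an illustration only, the omnibus-then-post-hoc procedure described above can be sketched in Python with SciPy. The scores below are hypothetical holistic rubric values on the 0-5-10-15 scale for eight imaginary PSTs, not the study's data.

```python
# Sketch of a Friedman test across three related conditions, followed by
# pairwise Wilcoxon signed-rank post hoc comparisons (hypothetical data).
from scipy.stats import friedmanchisquare, wilcoxon

normal_lab = [5, 10, 10, 5, 10, 15, 5, 10]
virtual_lab = [10, 10, 15, 10, 15, 15, 10, 10]
mixed_lab = [15, 15, 15, 10, 15, 15, 15, 15]

# Omnibus test over the three repeated measures.
stat, p = friedmanchisquare(normal_lab, virtual_lab, mixed_lab)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")

# Post hoc pairwise comparisons, typically run only when the
# omnibus test is significant.
if p < 0.05:
    pairs = [("normal vs virtual", normal_lab, virtual_lab),
             ("normal vs mixed", normal_lab, mixed_lab),
             ("virtual vs mixed", virtual_lab, mixed_lab)]
    for label, a, b in pairs:
        w, pw = wilcoxon(a, b)
        print(f"{label}: W = {w:.1f}, p = {pw:.4f}")
```

In practice a Bonferroni-type correction is often applied to the post hoc p-values to control the family-wise error rate across the three comparisons.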
Analysis of multiple representations
The SWH student lab reports prepared by the PSTs were analyzed in three stages: (1) the type of representations used; (2) the interconnection of these representations; and (3) statistical analysis. In the first stage of the analysis, which involved examining the types of representations, two criteria were used: (1) the number of different representation levels used in the text and (2) the number of representations used to explain the same topic or concept. When the PSTs used only one of the representation levels—macroscopic (MAS), microscopic (MIS), symbolic (SYM), or algebraic (ALG)—to explain the topic in their reports, it was referred to as a single representation. When two or three different representation levels were used together, it was referred to as a two-connected or three-connected representation. When all four representation levels were used together, it was referred to as a four-connected representation. Table 3 explains some types of representations used by PSTs with examples from laboratory reports. Appendix 3 provides detailed information on other types of representations used in student reports.
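The connectedness coding above can be sketched as a small classification rule. The function name and labels here are our own illustrative choices, not the study's coding software: a coded text segment is categorized by how many of the four representation levels (MAS, MIS, SYM, ALG) it uses together.

```python
# Illustrative sketch: map the set of representation levels used in a
# coded segment to the connectedness category described in the text.
def classify(levels):
    """levels: a set of level codes drawn from MAS, MIS, SYM, ALG."""
    n = len(set(levels) & {"MAS", "MIS", "SYM", "ALG"})
    return {1: "single", 2: "two-connected",
            3: "three-connected", 4: "four-connected"}.get(n, "none")

print(classify({"MAS"}))                       # -> single
print(classify({"MAS", "SYM"}))                # -> two-connected
print(classify({"MAS", "MIS", "SYM", "ALG"}))  # -> four-connected
```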
Table 3 Multiple representation types, explanations, and examples of PSTs’ reports prepared in three different learning environments
Type of multiple representations |
Multiple representations |
Explanation |
Examples of science teacher candidates’ written arguments in three different learning environments |
Single representation |
Macroscopic level (MAS) |
If the statements written in the SWH report include observable and concrete events experienced in daily life or in a laboratory setting, and if the bulk properties of matter are explained, they are examined under the macroscopic level category. |
* To distinguish between water and ethyl alcohol, we tried burning them separately. When we brought the lighter close, we observed that the ethyl alcohol burned while the water did not (Normal Laboratory). |
* Substances are affected by pressure and temperature. When we increase the pressure, we see that the temperature also increases (Virtual Laboratory). |
* We observed that to increase the concentration of a solution, we can add solute and evaporate, and to decrease the concentration, we need to add solvent (Mixed Laboratory). |
Microscopic level (MIS) |
Statements written in the SWH report are examined at the microscopic level if they include the numbers of atoms and molecules, the atomic structure of matter, or bonding theory. |
* Atoms of different elements are different from each other in every aspect (Normal Laboratory). |
* The mass number varies according to the subatomic particles, protons and neutrons. When we have 6 protons and 7 neutrons, the mass number becomes 13, and there is no change in the ion (Virtual Laboratory). |
* Bonds are broken and new bonds are formed again, the internal structure of the molecule changes. The internal bond changes (Mixed Laboratory). |
Symbolic level (SYM) |
If the statements written in the SWH report include equations, diagrams and molecular level drawings, they are examined in the symbolic level category. |
* Density d = m/v |
Volume of cylinder = πr²h (Normal Laboratory). |
* 4NH3 + 3O2 → N2 + 6H2O The hydrogen and oxygen numbers were equal, but the nitrogen numbers were not. To equalize nitrogen, let's set the N2 coefficient to 2 and equalize (Virtual Laboratory). |
*Zn + HCl → ZnCl2 + H2 (Mixed Laboratory) |
Algebraic level (ALG) |
If the expressions written in the SWH report contain mathematical operations related to graphics and formulas, they are examined in the algebraic level category. |
* 9.104/10 = 0.9104 g cm⁻³ (Normal Laboratory). |
* Oxygen-16: 99.757/100 = 0.99757
0.99757 × 15.99491 = 15.95664 (Virtual Laboratory). |
* pH = −log[1 × 10⁻²], pH = 2 (Mixed Laboratory). |
|
Two-connected representations |
Macroscopic –symbolic level (MAS-SYM) |
The statements include representations at both macroscopic and symbolic levels. |
*It was observed that the Fe–S (SYM) solids placed in the test tube reacted after they started to be heated (MAS). (Normal Laboratory) |
*SO2 + 2H2 → S + 2H2O (SYM) While balancing the reaction, it was observed that the coefficients in front of the elements or compounds were balanced in the smallest way (MAS). (Virtual Laboratory) |
* HCl–Zn (SYM) In the experiment, we put some Hydrochloric acid into our test tube. Then we place a piece of zinc solid into the test tube containing hydrochloric acid. We observe that zinc corrodes and releases a gas into the air (MAS). When the experiment is completed, we observe that the Zinc solid is completely gone (if there is enough HCl) |
Zn + 2HCl → ZnCl2 + H2 (SYM) (Mixed Laboratory). |
Three-connected representations |
Macroscopic–microscopic–symbolic level (MAS-MIS-SYM) |
If the PSTs’ statements written in the SWH report include representations at both macroscopic, microscopic and symbolic levels, it is examined in this category. |
* In our experiments, we reacted a certain amount of Fe and a certain amount of S, and we saw that the mass of the product formed after the experiment was equal to the total mass of Fe and S. After we put the iron powder and sulfur powder into the reaction, we see that the Iron(III) Sulfur compound emerges. |
Fe + S → FeS (SYM) As we see in the equation (MAS), the atomic number (MIS) is preserved. (Normal Laboratory). |
* We showed the compound CH4(methane)(SYM), whose formula is given, with a ball-and-stick model to see its bonds, and a space-filling model (MAS) to clearly see the shape and size of the molecule (MIS). (Virtual Laboratory). |
* In the NaCl–H2O (SYM) experiment, we poured water into the beaker and added NaCl (SYM) salt into it. Then, when we heated the beaker, the water in it started to evaporate. We observed that when all the water molecules evaporated, NaCl solid remained at the bottom (MAS). We evaporated the water, which removed the intermolecular bonds (MIS) of the NaCl solid, and it precipitated as a solid at the bottom of the beaker. (Mixed Laboratory). |
Four-connected representation |
Macroscopic–microscopic–symbolic–algebraic level (MAS-MIS-SYM-ALG) |
If the PSTs’ statements written in the SWH report include representations at the macroscopic, microscopic, symbolic and algebraic levels, it is examined in this category. |
* No sample (Normal Laboratory). |
* When we opened the lithium element in the virtual laboratory, we saw the (SYM) symbol in the lower yellow box. In the upper left part, we observe that there are 3 protons, 2 neutrons and 2 electrons (MIS). We add protons and neutrons to find the mass number. When we add 3 protons and 2 neutrons, we find that the mass number is 5. Since 5 is written on the upper left, the mass number is written on the upper left. The 3 written at the bottom left is the number of protons. We find +1 written in the upper right corner by adding the number of electrons and protons. Since the electron is (−) negatively charged and the proton is (+) positively charged, (+3) + (−2) = 1 (ALG). Then in the upper right part is the ionic charge. (Virtual Laboratory) |
* In the experiments in this section, we examined the properties of gases. For this, we looked at the diffusion rate of gases (MAS) in the experiment we conducted in the laboratory. |
 |
V(HCl)/V(NH3) = 38/55 = 0.6909 (SYM-ALG)
In this experiment, we observed that the diffusion rate of gases is inversely proportional to the square root of the molecular (MIS) weight. (Mixed Laboratory) |
In the second stage of the analysis, four categories were used to analyze the interconnectedness of the multiple representation levels: no connection, weak connection, moderate connection, and strong connection. When the representations used in each component of the SWH approach had no connection with the text, a score of 0 was given. When multiple representations were interconnected but weakly, moderately, or strongly related to the text, a score of 5, 10, or 15 was given, respectively (Yaman, 2019, 2020; Yaman and Hand, 2022). For example, if a PST used a total of 30 different representations in a laboratory report, 15 of which were categorized as moderately connected, 10 as weakly connected, and 5 as strongly connected, the PST's interconnectedness was scored as moderately connected and 10 points were awarded, because the majority of the written representations fell in that category. In the third stage of the analysis, the holistic scores assigned by the researchers in the second stage were analyzed using the Friedman and Wilcoxon tests.
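The majority-category scoring rule in the worked example above can be expressed as a short sketch; the category labels and function name are ours, not the authors' instrument.

```python
from collections import Counter

# Rubric scores for the interconnectedness categories (from the text)
CATEGORY_SCORE = {"none": 0, "weak": 5, "moderate": 10, "strong": 15}

def interconnectedness_score(categories):
    """Score a report by its most frequent connection category."""
    majority_category, _ = Counter(categories).most_common(1)[0]
    return CATEGORY_SCORE[majority_category]

# The text's example: 30 representations, 15 moderate + 10 weak + 5 strong
report = ["moderate"] * 15 + ["weak"] * 10 + ["strong"] * 5
score = interconnectedness_score(report)  # moderate is the majority -> 10
```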
Scientific reasoning analysis
Walton's argumentation schemes (Walton et al., 2008) were used for the analysis of PSTs’ scientific reasoning. The reasoning analysis was carried out in three stages. In the first stage, 12 schemes selected from Walton's argumentation schemes were coded. The selected 12 schemes were (1) argument from sign, (2) argument from example, (3) argument from commitment, (4) argument from verbal classification, (5) evidence to hypothesis, (6) knowledge state, (7) argument against the person, (8) expert opinion, (9) cause relationship, (10) effect cause, (11) consequences, and (12) analogy. In Table 4, three of the 12 schemes are explained with examples taken from the PSTs’ laboratory reports, the rest of them can be seen in Appendix 4.
Table 4 Reasoning analysis table, explanations and examples of written arguments of PSTs
Walton's argument scheme |
Explanation |
Examples from written arguments of PSTs |
Argument from sign |
A particular observation, data, or event is taken as evidence of the existence of a feature (Walton et al., 2008). Expressions such as “look at this” and “it shows” in the written arguments of prospective teachers are examined in this category (Duschl, 2007). |
* We can notice that the ratios are almost the same from the two different weightings we made, which shows us that there is a certain ratio between the elements. (Normal Laboratory) |
* First, when we remove 1 neutron, Lithium-6 is also in balance, which shows that it has an isotope. When we remove neutrons for the second time, Lithium-5 is not in balance, which shows that it has an isotope. (Virtual Laboratory) |
* As a result of the reaction of Na and H2O, a different substance (NaOH) is released, and the formation of new bonds shows that this reaction is a chemical reaction. (Mixed Laboratory) |
|
Evidence to hypothesis |
If A is true, B will be observed. B has been observed to be true in a particular case. Therefore, A is true (Walton et al., 2008). Pre-service teachers use “if” followed by “I think”, “it seems like”, “if only”, “then it would be”, etc. in their written arguments. Expressions like this are examined in this category (Duschl, 2007). |
* If we had not attached the balloon during the experiment, gas would have been released and the mass of the products would have decreased. (Normal Laboratory) |
* When we increase the number of neutrons, if the atom is in balance, we find the isotope. (Virtual Laboratory) |
* If we add some HCl to the magnesium solid, corrosion and gas release are observed in the magnesium. (Mixed Laboratory) |
|
Correlation to cause |
The advocate infers a causal connection between two events from a premise that describes a correlation between them (Walton et al., 2008). A positive or negative correlation/relationship is examined. |
* When we increase the volume of different substances, their mass also increases. (Normal Laboratory) |
* When the temperature of a solid substance is increased, its energy increases. (Virtual Laboratory) |
* We increase and decrease the volume in the container and realize that when the volume decreases, the pressure increases, and when the volume increases, the pressure decreases. From here we understand that pressure and volume are inversely proportional. (Mixed Laboratory) |
In the second stage of the analysis, the quality of reasoning was categorized as very weak, weak, moderate, and strong. When PSTs used a total of 0–2 different reasoning schemes, it was classified as no consistency (very weak) and given a score of 0. When PSTs used 3–5 different reasoning schemes, it was considered weak and given a score of 5. When PSTs used a total of 6 to 8 different reasoning schemes, it was considered moderate and given a score of 10. Finally, when the PSTs used a total of 9–12 different reasoning schemes, it was considered strong consistency and given a score of 15 (Yaman and Hand, 2024a, b, 2025). The scores obtained were then analyzed using the Friedman test and the Wilcoxon tests.
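The band scoring described above maps the count of distinct Walton schemes in a report to a rubric score; a minimal sketch (the function name is ours):

```python
def reasoning_score(n_distinct_schemes):
    """Map the number of distinct Walton schemes (0-12) to the rubric score."""
    if n_distinct_schemes <= 2:
        return 0    # very weak (no consistency)
    if n_distinct_schemes <= 5:
        return 5    # weak
    if n_distinct_schemes <= 8:
        return 10   # moderate
    return 15       # strong (9-12 schemes)
```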
Statistics used in the study
In this study, we used the Kruskal–Wallis and Mann–Whitney U tests. The Kruskal–Wallis test was used to determine whether the PSTs' scores differed depending on the learning environment. If a significant (p < 0.05) difference was obtained in the Kruskal–Wallis test, the Mann–Whitney U test was used to identify between which environments the significance lay (Büyüköztürk, 2020). In this context, we investigated whether the written arguments, multiple representations, and reasoning of the PSTs differed between the courses where technology was integrated and the courses conducted in normal laboratories. The argument, multiple representation, and reasoning scores were considered separately, and for each we examined whether there was a difference between the learning environments (virtual lab, normal lab, mixed lab). The dependent variables were the written argument, multiple representation, and reasoning scores of the pre-service teachers, and the independent variable was the learning environment (normal lab, virtual lab, mixed lab).
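The omnibus comparison across the three environments can be sketched with SciPy's Kruskal–Wallis test; the scores below are hypothetical, not the study's data.

```python
from scipy.stats import kruskal

# Hypothetical mean rubric scores (0-15) per PST in each environment;
# the real analysis used the 20 PSTs' scores per environment.
normal_lab = [5, 5, 10, 5, 10, 5, 10, 5]
virtual_lab = [5, 10, 5, 5, 10, 5, 10, 5]
mixed_lab = [10, 15, 10, 15, 10, 15, 10, 15]

# Omnibus test: do the three independent groups differ?
h_stat, p = kruskal(normal_lab, virtual_lab, mixed_lab)
```

A significant result here would then be followed up with pairwise Mann–Whitney U tests, as the text describes.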
In cases where the Kruskal–Wallis test result was significant, it was necessary to examine whether there was a difference between the binary groups. In this context, the Mann–Whitney U test was applied to the binary combinations of the groups to identify the source of the difference. This required a Bonferroni-corrected alpha (significance level) for the Mann–Whitney U test: the number of pairwise comparisons was computed as n·(n − 1)/2 (n: number of groups), and the original alpha value of 0.05 was divided by this number. In our study, with three groups, the number of comparisons was 3·(3 − 1)/2 = 3, so the new significance value was p = 0.05/3 ≈ 0.017.
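The Bonferroni correction above is a one-line computation; a minimal sketch (the function name is ours):

```python
def bonferroni_alpha(alpha, n_groups):
    """Alpha corrected for all pairwise comparisons among n_groups."""
    n_comparisons = n_groups * (n_groups - 1) // 2
    return alpha / n_comparisons

# Three learning environments -> 3 pairwise comparisons -> 0.05 / 3
corrected = bonferroni_alpha(0.05, 3)  # about 0.017
```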
In analyses using the Friedman test and the Wilcoxon signed-rank test, a correlation coefficient was used as the measure of effect size, calculated by dividing the Z statistic obtained in the tests by the square root of the total sample size. In the Friedman test, which involved three or more related measurements, effect sizes were examined through pairwise comparisons when the analysis results were significant, using the Z statistic from the Wilcoxon signed-rank test. Because each pairwise comparison involved two related measurements from every participant, the sample size (n) in the Wilcoxon test was twice the number of participants (2·n) (Büyüköztürk, 2020). A correlation value of 0.10 indicated a small effect, 0.30 a medium effect, and 0.50 a large effect (Büyüköztürk, 2020).
In the Kruskal Wallis H-Test with three or more groups, effect size was examined by making pairwise comparisons when the analysis results were significant. Correlation was used for effect size, and this correlation was calculated by dividing the Z test statistic by the square root of the total sample size. A correlation value for effect size of 0.10 indicated a small effect, 0.30 indicated a medium effect, and 0.50 indicated a large effect (Büyüköztürk, 2020).
To ensure the reliability of the analyses, 60 SWH reports from the total of 262 lab reports across 14 laboratory activities were independently analyzed by an additional expert, a faculty member in the field of chemistry education, in terms of argument quality, multiple representations, and reasoning. Cohen's Kappa statistic was used to determine the consistency between the two raters; it quantifies the level of agreement between their ratings beyond chance and can range from −1 to 1, with values closer to 1 indicating stronger agreement (Landis and Koch, 1977). The argument quality scores of the two raters showed statistically significant, perfect agreement (k = 1.000, p < 0.01, p = 0.000); the multiple representation scores showed statistically significant, good agreement (k = 0.704, p < 0.01, p = 0.000); and the reasoning scores showed statistically significant, good agreement (k = 0.779, p < 0.01, p = 0.000).
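Cohen's kappa for two raters can be computed directly from the observed and chance-expected agreement; a self-contained sketch with illustrative ratings (not the study's data):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater1)
    # Proportion of items on which the raters agree
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Agreement expected by chance from each rater's marginal distribution
    counts1, counts2 = Counter(rater1), Counter(rater2)
    expected = sum(counts1[c] * counts2[c] for c in counts1) / (n * n)
    return (observed - expected) / (1 - expected)

# Illustrative rubric scores (0/5/10/15) assigned by two raters
rater_a = [0, 5, 10, 15, 10, 5]
rater_b = [0, 5, 10, 15, 10, 10]
kappa = cohens_kappa(rater_a, rater_b)
```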
Findings
Assertion 1. There were different patterns of utilizing argument, multiple representations and reasoning within each environment
Assertion 1 emerged from the analysis of the first question that investigated if there was any significant difference in PSTs’ argumentation, multiple representation and reasoning scores across the learning environments they participated in. To answer this question, the PSTs’ argument, representation and reasoning scores in normal (Table 5), virtual (Table 6) and mixed lab (Table 7) were analyzed using Friedman and Wilcoxon Tests.
Table 5 Friedman and Wilcoxon test results for PSTs' argument, multiple representation and reasoning scores in a normal laboratory environment
Normal lab. | Experiments | N | Rank average | sd | x² | p | Significant difference^a
Argument | Experiment 2 | 20 | 2.25 | 2 | 7.400 | 0.025 | E5–E2
Argument | Experiment 4 | 20 | 2.03 | | | |
Argument | Experiment 5 | 20 | 1.73 | | | |
Multiple representations | Experiment 2 | 20 | 2.05 | 2 | 3.769 | 0.152 | —
Multiple representations | Experiment 4 | 20 | 1.68 | | | |
Multiple representations | Experiment 5 | 20 | 2.28 | | | |
Reasoning | Experiment 2 | 20 | 1.95 | 2 | 0.143 | 0.931 | —
Reasoning | Experiment 4 | 20 | 2.03 | | | |
Reasoning | Experiment 5 | 20 | 2.03 | | | |
^a Bold indicates the experiment in which differences mostly appeared in favor.
Table 6 Friedman test results for pre-service science teachers' argument, multiple representations and reasoning scores in the virtual laboratory environment
Virtual lab. | Experiments | N | Rank average | sd | x² | p
Argument | Experiment 1 | 20 | 2.13 | 4 | 25.588 | 0.000
Argument | Experiment 6 | 20 | 3.80 | | |
Argument | Experiment 7 | 20 | 3.55 | | |
Argument | Experiment 8 | 20 | 2.83 | | |
Argument | Experiment 9 | 20 | 2.70 | | |
Multiple representations | Experiment 1 | 20 | 1.60 | 4 | 48.191 | 0.000
Multiple representations | Experiment 6 | 20 | 4.75 | | |
Multiple representations | Experiment 7 | 20 | 3.63 | | |
Multiple representations | Experiment 8 | 20 | 2.30 | | |
Multiple representations | Experiment 9 | 20 | 2.73 | | |
Reasoning | Experiment 1 | 20 | 2.53 | 4 | 17.522 | 0.002
Reasoning | Experiment 6 | 20 | 3.68 | | |
Reasoning | Experiment 7 | 20 | 3.25 | | |
Reasoning | Experiment 8 | 20 | 3.30 | | |
Reasoning | Experiment 9 | 20 | 2.25 | | |
Table 7 Wilcoxon test results for pre-service science teachers' argument, multiple representations and reasoning scores in the virtual laboratory environment
Virtual lab. | Pairwise comparisons | Z | p | Effect size | Significant difference^a
Argument | Exp1–Exp6 | −3.260 | 0.001 | 0.51 | E1–E6
Argument | Exp1–Exp7 | −2.968 | 0.003 | 0.46 | E1–E7
Argument | Exp6–Exp8 | −2.138 | 0.033 | 0.33 | E6–E8
Argument | Exp6–Exp9 | −3.000 | 0.003 | 0.47 | E6–E9
Argument | Exp7–Exp8 | −2.121 | 0.034 | 0.33 | E7–E8
Argument | Exp7–Exp9 | −2.646 | 0.008 | 0.41 | E7–E9
Multiple representations | Exp1–Exp6 | −3.920 | 0.000 | 0.61 | E1–E6
Multiple representations | Exp1–Exp7 | −3.341 | 0.001 | 0.52 | E1–E7
Multiple representations | Exp1–Exp8 | −2.119 | 0.034 | 0.33 | E1–E8
Multiple representations | Exp1–Exp9 | −2.727 | 0.006 | 0.43 | E1–E9
Multiple representations | Exp6–Exp7 | −3.436 | 0.001 | 0.54 | E6–E7
Multiple representations | Exp6–Exp8 | −3.883 | 0.000 | 0.61 | E6–E8
Multiple representations | Exp6–Exp9 | −3.885 | 0.000 | 0.61 | E6–E9
Multiple representations | Exp7–Exp8 | −3.510 | 0.000 | 0.55 | E7–E8
Multiple representations | Exp7–Exp9 | −2.763 | 0.006 | 0.43 | E7–E9
Reasoning | Exp1–Exp6 | −2.517 | 0.012 | 0.39 | E1–E6
Reasoning | Exp1–Exp7 | −1.994 | 0.046 | 0.31 | E1–E7
Reasoning | Exp1–Exp8 | −1.999 | 0.046 | 0.31 | E1–E8
Reasoning | Exp6–Exp9 | −2.828 | 0.005 | 0.44 | E6–E9
Reasoning | Exp7–Exp9 | −2.324 | 0.020 | 0.36 | E7–E9
Reasoning | Exp8–Exp9 | −2.496 | 0.013 | 0.39 | E8–E9
^a Bold indicates the experiments in which differences mostly appeared in favor.
The Friedman test was run to determine whether the mean argument, representation, and reasoning scores of the set of three related measurements (Experiments 2, 4 and 5) in the normal laboratory, the set of five related measurements (Experiments 1, 6, 7, 8 and 9) in the virtual laboratory, and the set of six related measurements (Experiments 3, 10, 11, 12, 13 and 14) in the mixed lab differed significantly from each other. Since the Friedman test does not provide post hoc multiple comparisons, Wilcoxon signed-rank tests were conducted post hoc to determine whether there was a significant difference between the scores of two related measurement sets. The experiments for each laboratory setting were completed in different weeks (e.g., experiments for the virtual lab environment were completed during weeks 1, 6, 7, 8 and 9).
Assertion 1a: In the normal lab environment, although there was a significant difference in the PSTs’ argument scores, there were no significant differences in PSTs’ multiple representations and reasoning scores
The Friedman test was conducted for repeated measures to determine whether there was a difference between the argument quality, multiple representations and reasoning scores across experiments in the normal laboratory environment. This test revealed a statistically significant difference in PSTs’ argument scores in the normal laboratory environment (x2 = 7.400, p < 0.05, p = 0.025). The Wilcoxon Test was applied to determine which experiment or experiments this difference was related to. This revealed a significant difference between the argument quality scores of the PSTs in Experiment 2 and Experiment 5 (z = −2.646, p < 0.05, p = 0.008, effect size = 0.41) in favor of Experiment 2. There was no significant difference between the quality of argument scores in Experiment 2 and Experiment 4 (z = −1.134, p > 0.05, p = 0.257), and Experiment 5 and Experiment 4 (z = −1.633, p > 0.05, p = 0.102).
Assertion 1b: In virtual laboratory environment, Friedman tests indicated that there were statistically significant differences in the PSTs’ argument, multiple representations, and reasoning scores
The Friedman test results indicated that there was a statistically significant difference in PSTs' argument (x² = 25.588, p < 0.01, p = 0.000), multiple representations (x² = 48.191, p < 0.01, p = 0.000), and reasoning (x² = 17.522, p < 0.01, p = 0.002) scores, as seen in Table 6.
The Wilcoxon test was applied to determine which experiment or experiments this difference was related to. As shown in Table 7, the Wilcoxon analysis showed a significant difference between PSTs' argument, multiple representation, and reasoning scores in Experiment 6 and those in Experiments 1 and 9, in favor of Experiment 6. In a similar vein, there was a significant difference between PSTs' argument, multiple representation, and reasoning scores in Experiment 7 and those in Experiments 1 and 9, in favor of Experiment 7. There was a significant difference between PSTs' quality of argument and representations in Experiments 6 and 7 compared with Experiment 8, in favor of Experiments 6 and 7. Moreover, there was a significant difference between the multiple representation and reasoning scores in Experiment 1 and Experiment 8, in favor of Experiment 8. These results highlight that there were similar patterns in the significant differences of PSTs' argument, multiple levels of representation, and reasoning scores in the virtual laboratory environment. As seen in Table 7, most of the significant differences appeared in the same experiments for written argument, representation, and reasoning (e.g., Experiments 1 and 6, Experiments 1 and 7). In particular, the middle two experiments (Experiments 6 and 7) had the strongest impact in terms of argument, representation, and reasoning scores. Moreover, the experiments in the virtual laboratory environment had a medium or large effect size on the PSTs' quality of argument, representation, and reasoning, as shown in Table 7.
Assertion 1c: In the mixed laboratory environment, Friedman tests indicated that there were statistically significant differences in the PSTs’ argument, multiple representations and reasoning scores
The results of the Friedman test in Table 8 showed that there was a statistically significant difference in PSTs' argument (x² = 17.844, p < 0.01, p = 0.003), multiple representations (x² = 34.742, p < 0.01, p = 0.000), and reasoning scores (x² = 13.056, p < 0.05, p = 0.023) in the mixed laboratory environment.
Table 8 Friedman test results for pre-service science teachers' argument, multiple representations, and reasoning scores in a mixed laboratory environment
Mix lab. | Experiments | N | Rank average | sd | x² | p
Argument | Experiment 3 | 20 | 3.18 | 5 | 17.844 | 0.003
Argument | Experiment 10 | 20 | 2.73 | | |
Argument | Experiment 11 | 20 | 3.33 | | |
Argument | Experiment 12 | 20 | 3.93 | | |
Argument | Experiment 13 | 20 | 3.63 | | |
Argument | Experiment 14 | 20 | 4.22 | | |
Multiple representations | Experiment 3 | 20 | 2.78 | 5 | 34.742 | 0.000
Multiple representations | Experiment 10 | 20 | 2.17 | | |
Multiple representations | Experiment 11 | 20 | 4.78 | | |
Multiple representations | Experiment 12 | 20 | 4.78 | | |
Multiple representations | Experiment 13 | 20 | 2.80 | | |
Multiple representations | Experiment 14 | 20 | 3.70 | | |
Reasoning | Experiment 3 | 20 | 3.38 | 5 | 13.056 | 0.023
Reasoning | Experiment 10 | 20 | 3.30 | | |
Reasoning | Experiment 11 | 20 | 4.40 | | |
Reasoning | Experiment 12 | 20 | 3.13 | | |
Reasoning | Experiment 13 | 20 | 3.28 | | |
Reasoning | Experiment 14 | 20 | 3.53 | | |
As shown in Table 9, the Wilcoxon analysis results showed that there was a significant difference between the PSTs' argument quality and multiple representation scores in Experiment 14 and those in Experiments 3 and 10, in favor of Experiment 14. The Wilcoxon test results also highlighted a significant difference between the PSTs' multiple representation and reasoning scores in Experiment 11 and those in Experiments 3, 10, 13, and 14, in favor of Experiment 11, and between Experiment 10 and Experiment 12 in favor of Experiment 12. These results indicated that there were similar patterns in the significant differences of PSTs' argument, multiple levels of representation, and reasoning scores in the mixed laboratory environment. As seen in Table 9, most of the significant differences occurred in the same experiments for written argument, representation, and reasoning (e.g., Experiments 3 and 11, Experiments 10 and 12). Moreover, the later experiments appear to have been significantly beneficial compared to the early activities. As seen in Table 9, the experiments in the mixed laboratory environment had a medium or large effect size on the PSTs' quality of argument, representation, and reasoning.
Table 9 Wilcoxon test results for pre-service science teachers' argument, multiple representations, and reasoning scores in a mixed laboratory environment
Mixed lab | Pairwise comparisons | Z | p | Effect size | Significant difference^a
Argument | Exp3–Exp14 | −2.646 | 0.008 | 0.41 | E3–E14
Argument | Exp10–Exp12 | −2.530 | 0.011 | 0.40 | E10–E12
Argument | Exp10–Exp13 | −2.121 | 0.034 | 0.33 | E10–E13
Argument | Exp10–Exp14 | −3.162 | 0.002 | 0.49 | E10–E14
Multiple representations | Exp3–Exp11 | −3.435 | 0.001 | 0.54 | E3–E11
Multiple representations | Exp3–Exp12 | −2.950 | 0.003 | 0.46 | E3–E12
Multiple representations | Exp3–Exp14 | −2.633 | 0.008 | 0.41 | E3–E14
Multiple representations | Exp10–Exp11 | −3.659 | 0.000 | 0.57 | E10–E11
Multiple representations | Exp10–Exp12 | −3.379 | 0.001 | 0.53 | E10–E12
Multiple representations | Exp10–Exp14 | −2.969 | 0.003 | 0.46 | E10–E14
Multiple representations | Exp11–Exp13 | −3.137 | 0.002 | 0.49 | E11–E13
Multiple representations | Exp11–Exp14 | −2.334 | 0.020 | 0.36 | E11–E14
Multiple representations | Exp12–Exp13 | −3.418 | 0.001 | 0.54 | E12–E13
Multiple representations | Exp12–Exp14 | −2.988 | 0.003 | 0.47 | E12–E14
Multiple representations | Exp13–Exp14 | −2.035 | 0.042 | 0.32 | E13–E14
Reasoning | Exp3–Exp11 | −2.111 | 0.035 | 0.33 | E3–E11
Reasoning | Exp10–Exp11 | −2.530 | 0.011 | 0.40 | E11–E10
Reasoning | Exp11–Exp12 | −3.000 | 0.003 | 0.47 | E11–E12
Reasoning | Exp11–Exp13 | −2.828 | 0.005 | 0.44 | E13–E11
^a Bold indicates the experiments in which differences mostly appeared in favor.
Assertion 2. There was a positive significant relationship between argument and representation, argument and reasoning, and multiple representations and reasoning in each learning environment
Assertion 2 was made in response to the second research question, which asked what the relationship was between argument, multiple representation, and reasoning skills within each learning environment in which the PSTs participated. Multiple Spearman correlation tests were conducted to examine whether there were any relationships between the argument, multiple representation, and reasoning skills of PSTs in the three learning environments. The results showed a positive, high-level correlation between argument and representation scores (r = 0.789, p < 0.01, p = 0.000) and between argument and reasoning scores (r = 0.818, p < 0.01, p = 0.000) in the normal laboratory environment. Moreover, there was a positive, medium-level significant relationship (r = 0.634, p < 0.01, p = 0.003) between reasoning and representation in the normal laboratory environment. In the virtual laboratory environment, there was a strong, positive correlation between argument and representation (r = 0.754, p < 0.01, p = 0.000), argument and reasoning scores (r = 0.730, p < 0.01, p = 0.000), and representation and reasoning scores (r = 0.719, p < 0.01, p = 0.000). In the mixed laboratory environment, the results showed a positive, high-level correlation between argument and representation (r = 0.880, p < 0.01, p = 0.000) and between representation and reasoning scores (r = 0.729, p < 0.01, p = 0.000). There was also a moderate positive relationship between argument and reasoning scores (r = 0.683, p < 0.01, p = 0.001). These results indicated that in each learning environment there was a positive, medium- or high-level correlation between argument and multiple representations, between argument and reasoning, and between representations and reasoning.
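A Spearman correlation of this kind can be sketched with SciPy; the paired scores below are hypothetical rubric values for one environment, not the study's data.

```python
from scipy.stats import spearmanr

# Hypothetical paired rubric scores (0-15) for argument and representation
# within one environment; the study correlated the PSTs' actual scores.
argument = [5, 10, 10, 15, 5, 10, 15, 5, 10, 15]
representation = [5, 10, 5, 15, 0, 10, 15, 5, 10, 10]

# Rank-based correlation: robust to the ordinal 0/5/10/15 scale
rho, p = spearmanr(argument, representation)
```

Spearman's rho is appropriate here because the rubric scores are ordinal rather than interval data.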
Assertion 3. There was a statistically significant difference in favor of mixed lab in terms of argument, multiple representations, and reasoning
Assertion 3 emerged from the analysis of the third research question, which investigated whether there was any significant difference in PSTs' argument, representation, and reasoning between the learning environments (normal lab, virtual lab, mixed lab). To answer this question, the Kruskal–Wallis H-test and the Mann–Whitney U test were run. The Kruskal–Wallis H-test results for PSTs' average written argument, representation, and reasoning scores across learning environments are shown in Table 10. The analysis showed that PSTs' written argument (x²(sd = 2, n = 20) = 7.471, p < 0.05, p = 0.024), representation (x²(sd = 2, n = 20) = 22.743, p < 0.01, p = 0.000), and reasoning (x²(sd = 2, n = 20) = 17.529, p < 0.01, p = 0.000) average scores differed significantly according to the learning environment (normal laboratory, virtual laboratory, mixed laboratory).
Table 10 Kruskal Wallis H-test result of argument, representation and reasoning score means according to learning environment
 | Learning environments | N | Rank average | sd | x² | p | Significant difference^a
Argument | Normal Laboratory (NL) | 20 | 31.40 | 2 | 7.471 | 0.024 | ML-VL
Argument | Virtual Laboratory (VL) | 20 | 22.58 | | | |
Argument | Mixed Laboratory (ML) | 20 | 37.53 | | | |
Multiple representations | Normal Laboratory (NL) | 20 | 20.43 | 2 | 22.743 | 0.000 | ML-NL
Multiple representations | Virtual Laboratory (VL) | 20 | 25.68 | | | | ML-VL
Multiple representations | Mixed Laboratory (ML) | 20 | 45.40 | | | |
Reasoning | Normal Laboratory (NL) | 20 | 26.25 | 2 | 17.529 | 0.000 | ML-NL
Reasoning | Virtual Laboratory (VL) | 20 | 21.75 | | | | ML-VL
Reasoning | Mixed Laboratory (ML) | 20 | 43.50 | | | |
^a Bold indicates the laboratory in which differences mostly appeared in favor.
Assertion 3a: In argument quality, there was a significant difference between virtual and mixed lab environments in favor of the mixed lab environment. Nevertheless, there were no significant differences between argument scores in the virtual and normal labs and mixed and normal labs
Since the Kruskal–Wallis H-test result was significant, it was necessary to examine whether there were differences between the pairs of groups using the Mann–Whitney U test. As indicated earlier, using the Bonferroni correction, the new significance value in our study was 0.017. According to the results of the Mann–Whitney U test, there was no significant difference between the average argument scores in the normal laboratory and virtual laboratory environments (U = 139.50, p > 0.017; p = 0.099) or in the normal laboratory and mixed laboratory environments (U = 157.50, p > 0.017; p = 0.246). However, there was a significant difference between the argument quality in the virtual laboratory and mixed laboratory environments in favor of the mixed lab (U = 102.00, p < 0.017, p = 0.008). The rank averages indicated that argument quality in the mixed laboratory environment (rank average = 25.40) was higher than in the virtual laboratory environment (rank average = 15.60). The mixed laboratory environment had a medium-level effect (effect size = 0.420) on the PSTs' argument quality.
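A pairwise comparison of this kind can be sketched with SciPy's Mann–Whitney U test, evaluated against the Bonferroni-corrected alpha of 0.017 used in the study; the scores below are hypothetical, not the study's data.

```python
from scipy.stats import mannwhitneyu

# Hypothetical mean argument scores (0-15 rubric) per PST in two
# independent environment groups
virtual_lab = [5, 5, 10, 5, 10, 5, 10, 10, 5, 5]
mixed_lab = [10, 15, 10, 10, 15, 10, 15, 10, 10, 15]

# Two-sided pairwise comparison, judged at the corrected alpha
u_stat, p = mannwhitneyu(virtual_lab, mixed_lab, alternative="two-sided")
significant = p < 0.017
```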
Assertion 3b: In representation quality, there was a significant difference between virtual and mixed labs and normal and mixed labs in favor of the mixed lab environments. However, there was no significant difference between multiple representation scores in the virtual and normal lab environments
Considering the multiple representations scores, the Mann Whitney U Test highlighted that there was a significant difference between the normal laboratory and mixed laboratory environments in favor of the mixed lab (U = 42.000, p < 0.017; p = 0.000). Considering the rank averages, the multiple representation scores in the mixed laboratory environment (rank average = 28.40) were higher than those in the normal laboratory environment (rank average = 12.60). This finding showed that the mixed laboratory environment was highly effective (effect size = 0.675) for PSTs’ use of multiple representations. There was also a significant difference between the multiple representation scores in the virtual laboratory environment and the mixed laboratory environment in favor of the mixed lab (U = 60.000, p < 0.017; p = 0.000). The multiple representation scores in the mixed laboratory environment (rank average = 27.50) were higher than those in the virtual laboratory environment (rank average = 13.50). Thus, the mixed laboratory environment was moderately effective (effect size = 0.598) for PSTs’ utilization of multiple representations.
Assertion 3c: In reasoning quality, there was a significant difference between virtual and mixed labs and normal and mixed labs in favor of the mixed lab environments. Nonetheless, there was no significant difference between reasoning scores in the virtual and normal lab environments
Regarding reasoning scores, according to the Mann Whitney U Test results, there was a significant difference between the normal laboratory and mixed laboratory environments in favor of the mixed lab (U = 98.00, p < 0.017; p = 0.005). Considering the rank averages, the reasoning scores in the mixed laboratory environment (rank average = 25.60) were higher than the reasoning scores in the normal laboratory environment (rank average = 15.40). This finding showed that the mixed laboratory environment was effective at a medium level (effect size = 0.444) for PSTs’ use of reasoning. There was also a significant difference between the reasoning scores in the virtual laboratory environment and the mixed laboratory environment in favor of the mixed lab (U = 42.00, p < 0.017; p = 0.000). When the rank averages were considered, the reasoning scores in the mixed laboratory environment (rank average = 28.40) were higher than the reasoning scores in the virtual laboratory environment (rank average = 12.60). This finding showed that the mixed laboratory environment was highly effective (effect size = 0.681) in promoting PSTs’ use of reasoning. Taken as a whole, while there was no statistically significant difference between PSTs’ argument, representation, and reasoning scores in the normal and virtual labs, there was a statistically significant difference in favor of the mixed lab between the virtual and mixed labs for all three dimensions, and between the normal and mixed labs for representation and reasoning.
Discussion and implications
This study investigated the utilization and correlation of PSTs’ written argument, representation, and reasoning in normal, virtual, and mixed lab activities over one semester. Moreover, the study aimed to determine if there was any difference in these learning environments in terms of argument, representations, and reasoning. The results highlighted that the utilization of PSTs’ argument, representation, and reasoning were parallel to each other especially in virtual and mixed learning environments. There was a significant positive correlation between argument, representation and reasoning in each learning environment. Moreover, there was a significant difference between virtual lab and mixed lab in favor of mixed lab in the quality of PSTs’ argument, representation and reasoning scores, and there was a significant difference between normal lab and mixed lab in favor of mixed lab in the quality of representation and reasoning. Interestingly, there was no significant difference between normal lab and virtual lab in the quality of PSTs’ argument, representation and reasoning scores. These findings may be attributed to the type of learning environment and in part to writing to a different audience other than the teacher.
Previous studies using the SWH approach show that pre-service science teachers’ utilization of arguments, multiple representations, and reasoning in argument-based writing activities increases over time (Yaman and Hand, 2022, 2024a, b). In this study, there was a significant difference in the argument, multiple representation, and reasoning utilization of PSTs in the mixed laboratory environment compared to the other two learning environments (normal laboratory, virtual laboratory). Specifically, the Mann Whitney U Test results revealed that the mixed laboratory environment had a medium-level effect on the PSTs’ argument utilization and a high-level effect on their utilization of multiple representations and reasoning. While building on our previous studies, one of the important results obtained in this study is that the utilization of these skills depended on the learning environment, with the mixed lab environment showing an advantage in producing more connected representations and reasoning.
We argue that utilizing the mixed laboratory in the SWH learning environment allowed PSTs to benefit from aspects of both the normal and virtual lab environments. The dialogical aspect of the normal lab enabled students to collectively create more research questions, create claims by observing at both macroscopic and microscopic levels, and justify these claims. This dialogical work is about providing students with opportunities to put ideas forward, negotiate with each other, and arrive at class-generated questions and claims. From the virtual lab, the ability to quickly repeat an experiment provided a richer set of data for students to utilize. In the mixed laboratory learning environment, PSTs designed experiments by physically experiencing the process in the normal laboratory environment, observing phenomena at the macroscopic level (color change, gas release, etc.), and conducted more experiments with the virtual laboratory environment than in the normal laboratory environment alone. For example, in the mixed lab environment, when the PSTs investigated whether a solution was acidic or basic, they used a pH meter, pH paper, and electrical conductivity; in the normal lab, they only used pH paper to test the solutions. Additionally, in the mixed lab environment, PSTs made observations at the microscopic level (molecules, atoms, bonds, etc.) that the naked eye cannot see.
While the experiments that PSTs can do in a normal laboratory environment are limited by time, they can run multiple experiments in a short time in a virtual environment. For example, in the normal laboratory, the PSTs were expected to complete all experiments within class hours. However, when the PSTs used a virtual lab, such as PhET simulations, they were able to run experiments outside classroom hours since PhET simulations can be conducted online. While we argue for the importance of physically attending laboratory courses to learn how to weigh a substance, closely observe the substances used in the laboratory, and perform the operations themselves, utilizing virtual labs enables repeated experiments to confirm claims and evidence. The mixed lab environment thus combines the physical understanding of lab work (normal lab) with the concept of repeated measurements (virtual lab).
Interestingly, the impact of these different learning environment conditions is noted in relation to the utilization of competencies. Our previous work (Yaman and Hand, 2024a, b) has shown that oral reasoning in the form of dialogue impacts written argument. Even though dialogue was part of the virtual lab, the benefits of dialogue within the bounds of non-virtual lab activities appear to promote richer development of arguments. We did not complete a discourse analysis of each lab session, and thus we cannot confirm whether dialogue centered on physical labs is richer than that on virtual labs, but we would suggest that the physical lab is the connection between the normal and mixed labs. The role of dialogue appears to be critical.
This study revealed no difference between the normal and virtual labs in terms of the three components of argument, representation, and reasoning. To explain this finding, we argue that each type of lab has an incompleteness not found in the mixed lab. In the normal lab, the emphasis is on the macroscopic representation level, as this is the most easily accessible. However, in the virtual lab, the emphasis is on the microscopic level because of the capabilities of that technology. We would argue that these two environments – normal and virtual – do not fully engage the range of representational modes necessary to understand the concepts under study. The advantage of the mixed lab environment is that it enables students to engage with the macroscopic mode through the physical aspects of the lab and with the microscopic level through the virtual component of the lab. Therefore, we argue that the normal laboratory and virtual laboratory environments may have a complementary effect on each other. For example, while students make observations at the macroscopic level in the normal laboratory environment, they can see the movements of molecules and how atomic bonds change in the virtual environment; that is, they can make observations at the microscopic level. They can support the reasons for their macroscopic-level observations in the normal laboratory environment with their microscopic-level observations in the virtual laboratory environment. Thus, the two environments appear to have a supportive effect on each other. Technologies have made these abstract, micro-level representations more accessible and helped students concretize concepts; as such, virtual laboratory environments need to be used as a complementary tool rather than a replacement for traditional chemistry teaching methods (Davenport et al., 2018).
In terms of the parallel utilization of these three elements of argument, representations, and reasoning, previous research by Chen et al. (2016) indicates that the promotion of writing and dialogue is essential. In each of these different SWH learning environments, PSTs interpreted the data they collected, established a strong relationship between claims and evidence, and created strong written arguments using multiple representations as evidence to support their claims. Providing PSTs with concurrent writing and talking opportunities in each learning environment may have allowed their argument, multiple representation, and reasoning skills to develop over time and to correlate strongly and positively with each other. The work of Klein (1999) and Galbraith (2015) highlights this epistemic value of writing as a tool to promote learning and reasoning.
This study continues the work on showing how argument, reasoning and representation are utilized in a parallel manner. The correlation analysis does show that there is a relationship between these three areas; however, the mixed lab combining both physical engagement with lab and virtual engagement with the microscopic appears to be the strongest in promoting the use of all three. This would suggest that while there has been a push for the use of virtual labs, there is a need to ensure that there is a mix of both physical engagement with materials linked to using virtual experiences.
Limitations
The study has two major limitations: the number of experiments and the potential carry-over effect. Firstly, the study was conducted with a different number of experiments in each environment. However, because the averages were considered during analysis, this might not have caused a statistically significant difference in the results. Secondly, the PSTs’ performance in one learning environment may benefit or inhibit performance when participating in a different learning environment. Relatedly, the PSTs’ performance on one experiment or the order in which the experiments were done may benefit or inhibit performance on subsequent experiments. Even though there was not a predetermined order for the three learning environments, generally, normal labs were implemented in earlier experiments (Experiments 2, 4 and 5) in the semester. Virtual laboratories were implemented later (Experiments 1, 6, 7, 8 and 9) in the semester. Most of the experiments in the mixed laboratories (Experiments 3, 10, 11, 12, 13 and 14) were implemented at the end of the semester. To avoid carry-over effects, counterbalancing strategies (e.g., all possible orders, Latin Squares, and block randomization), which test different participants in different orders, could have been implemented (Price et al., 2017). However, it may not have been practical to test different orders (such as all possible orders or Latin Squares) for learning environments in each week for one topic because there were not enough experiments to run for each condition for each topic (e.g., six different orders for the all possible orders strategy or three different orders for the Latin Squares strategy). In this study, all PSTs engaged in the same environment each week and all environments occurred in the sequence before any of them were repeated (e.g., first week virtual lab, second week normal lab, third week mixed lab), and in other weeks, there was not a particular order in utilizing the environments (Price et al., 2017). 
Therefore, the block randomization strategy might have been more appropriate for counterbalancing in this study. We argue that regardless of the sequence of the environments, the PSTs had equal opportunities for developing argument, representations, and reasoning because they were provided rich dialogue for each week.
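The block-randomization strategy described above can be illustrated with a short sketch; the condition labels and function name are placeholders, not part of the study's materials:

```python
import random

LAB_CONDITIONS = ["normal", "virtual", "mixed"]  # placeholder condition labels

def block_randomized_order(n_blocks, seed=None):
    """Build a schedule in which every condition appears exactly once per block
    (i.e., once before any condition repeats), with a fresh random order per block."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_blocks):
        block = LAB_CONDITIONS[:]   # copy, so the master list is never mutated
        rng.shuffle(block)
        schedule.extend(block)
    return schedule
```

For example, `block_randomized_order(4)` yields a 12-week schedule in which each lab type occurs once in every 3-week block, which would avoid the late-semester clustering of mixed labs noted above.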
In the study, we used the Bonferroni correction, which is widely used due to its simplicity and easy applicability, to find the alpha (significance) level for the Mann Whitney U Test. We could also have applied this correction to the Friedman and Wilcoxon test results. However, as the number of tests increases (i.e., the number of experiments compared and, accordingly, the number of pairwise comparisons made), the alpha value obtained with the Bonferroni correction becomes lower. For example, in the mixed lab, applying the Bonferroni correction for 6 experiments required 15 different pairwise comparisons (the pairwise combinations of 6); the correction formula, “0.05/15 = 0.003”, then gave an alpha significance value of 0.003. In that case, the probability of not finding a significant value in any pairwise comparison increases, and truly significant differences may be overlooked (Type II error). For this reason, the Bonferroni correction is considered a conservative approach, that is, one that “minimizes Type I error while increasing Type II error,” especially when many hypothesis tests are performed. Therefore, the results should be interpreted cautiously.
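The arithmetic above generalizes directly: the corrected alpha is the family-wise alpha divided by the number of pairwise comparisons, which is the number of 2-element combinations of the groups. A minimal sketch (function name is ours, not the study's):

```python
from math import comb

def bonferroni_alpha(n_groups, family_alpha=0.05):
    """Per-test alpha when correcting for all pairwise comparisons among n_groups."""
    n_tests = comb(n_groups, 2)  # number of distinct pairs
    return family_alpha / n_tests

print(comb(6, 2))                     # → 15 pairwise comparisons for 6 experiments
print(round(bonferroni_alpha(6), 4))  # → 0.0033, the ~0.003 used in the mixed lab
print(round(bonferroni_alpha(3), 3))  # → 0.017, for the three lab environments
```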
Conclusion
This study examined the utilization and correlation of written argumentation, multiple representations, and reasoning of pre-service science teachers (PSTs) in three learning environments (normal laboratory, virtual laboratory, and mixed laboratory) based on the Science Writing Heuristic (SWH) approach. Friedman tests indicated that while there were only significant differences in PSTs’ argument scores in the normal lab, there were statistically significant differences in the PSTs’ argument, representations, and reasoning scores in virtual and mixed laboratory environments. Multiple Spearman Correlation tests highlighted that in each learning environment there were positive significant relationships between argument, multiple representations, and reasoning. The Kruskal Wallis H-Test and the Mann Whitney U tests showed that there was a statistically significant difference in favor of the mixed lab in terms of argument, multiple representations, and reasoning.
In this study, in all learning environments, the PSTs used worksheets and electronic devices in the laboratory to keep a record of what occurred in order to complete the laboratory report at home, which was due within four or five days following the lab. During this time, the PSTs might have had opportunities to access technology even if they completed normal lab sessions. For example, while writing the reading and reflection sections, the PSTs might have used technology to access written information since they were required to search at least three different sources to refute or support their claim. We argue that utilizing technology for the purpose of gathering written information might not have had any statistical effect on PSTs’ argument, representations, and reasoning scores. This is because in the virtual and mixed lab environments, the PSTs gathered their data using PhET simulations and virtual laboratories to construct and critique their claims in the laboratories, whereas in the reading and reflection sections, the PSTs were required to use written information (which did not include YouTube videos or simulations).
While recognizing that the PSTs completed their written reports at home, and there were variations in how much technology was used outside of the laboratory environment, the results highlighted that a mixed laboratory environment appeared to benefit PSTs more than the normal or virtual SWH lab environment. Incorporating multiple opportunities to utilize PhET simulations and virtual laboratories to construct and critique their claims appeared to help PSTs generate richer arguments, incorporate more multiple representations, and utilize stronger reasoning in constructing their written arguments. We believe that this resulted in greater learning occurring from these lab environments.
Conflicts of interest
There are no conflicts to declare.
Data availability
The excerpts from participants were provided in the published article and in the SI. The full data have not been made publicly available due to ethical confidentiality requirements and to maintain anonymity.
Appendix 1: Scoring Analytical and Holistic Arguments. Appendix 2: PST 4’s written excerpt related to Gasses in Experiment 14. Appendix 3: Types of Multiple Representations, Descriptions, and Examples from Student Reports in Three Different Learning Environments. Appendix 4: Reasoning Analysis Table, Explanations and Examples from Written Arguments of PSTs. See DOI: https://doi.org/10.1039/d5rp00002e
Acknowledgements
This study was funded by The Scientific and Technological Research Council of Türkiye (TUBITAK) (Grant Number: 123K709). Moreover, the study includes part of the first author's master's thesis.
References
- Allred Z. D. R., and Bretz S. L., (2019), University chemistry students’ interpretations of multiple representations of the helium atom, Chem. Educ. Res. Pract., 20(2), 358–368.
- Barrett T. J., Stull A. T., Hsu T. M. and Hegarty M., (2015), Constrained interactivity for relating multiple representations in science: When virtual is better than real, Comput. Educ., 81, 69–81.
- Burke K. A., Greenbowe T. J. and Hand B. M., (2005), Excerpts from the process of using inquiry and the science writing heuristic, Iowa State University. Retrieved May 30, 2010.
- Büyüköztürk Ş., (2020), Handbook of Data Analysis for the Social Sciences: Statistics, Research Design, SPSS Applications, and Interpretation, 28th edn, Pegem Akademi Publications, ISBN: 978-975-6802-74-8.
- Chandrasegaran A. L., Treagust D. F. and Mocerino M., (2008), An Evaluation of a Teaching Intervention to Promote Students’ Ability to Use Multiple Levels of Representation When Describing and Explaining Chemical Reactions, Res. Sci. Educ., 38(2), 237–248.
- Chen Y. C., Hand B. and Park S., (2016), Examining Elementary Students’ Development of Oral and Written Argumentation Practices Through Argument-Based Inquiry, Sci. Educ., 25(3–4), 277–320.
- Choi A., (2008), A study of student written argument using the Science Writing Heuristic approach in inquiry-based freshman general chemistry laboratory classes, (PhD dissertation), University of Iowa.
- Choi A., Hand B. and Greenbowe T., (2013), Students’ Written Arguments in General Chemistry Laboratory Investigations, Res. Sci. Educ., 43, 1763–1783.
- Davenport J. L., Rafferty A. N. and Yaron D. J., (2018), Whether and how authentic contexts using a virtual chemistry lab support learning, J. Chem. Educ., 95(8), 1250–1259.
- Dökme İ., (2019), An Overview of Scientific Reasoning Skills. The Art of Thinking with Scientific Reasoning Skills, Ankara: Anı Publishing, pp. 1–12.
- Duschl R. A., (2007), Quality argumentation and epistemic criteria, in Erduran S. and Jimenez-Aleixandre M. (ed.), Argumentation in science education: Perspectives from classroom-based research, Springer, pp. 159–175.
- Duschl R. and Osborne J., (2002), Supporting and promoting argumentation discourse, Stud. Sci. Educ., 38(1), 39–72.
- Er S. and Kırındı T., (2020), The Impact of the Argumentation Method Based Science Course on Students’ Science Process Skills, Gazi J. Educ. Sci., 6(3), 317–343 DOI:10.30855/gjes.2020.06.03.004.
- Erduran S. and Jiménez-Aleixandre M. P., (2007), Argumentation in science education: Perspectives from classroom-based research, Springer Science & Business Media.
- Erduran S. and Papuçcu-Akış A., (2023), Chemistry Education Research: Recent Trends and the Onset of the Pandemic, Handbook Res. Sci. Educ., 3, 35 DOI:10.4324/9780367855758.
- Gabel D., (1999), Improving teaching and learning through chemistry education research: a look to the future, J. Chem. Educ., 76(4), 548–554.
- Galbraith D., (2015), Conditions for writing to learn, J. Writing Res., 7(1), 215–226.
- Graham S., Kiuhara S. A. and MacKay M., (2020), The effects of writing on learning in science, social studies, and mathematics: a meta-analysis, Rev. Educ. Res., 90(2), 179–226.
- Hand B., (2017), Exploring the role of writing in science: a 25-year journey, Lit. Learn., 25(3), 16–23.
- Hand B., Chen Y. C. and Suh J. K., (2021), Does a knowledge generation approach to learning benefit students? A systematic review of research on the science writing heuristic approach, Educ. Psychol. Rev., 33(2), 535–577 DOI:10.1007/s10648-020-09550-0.
- Hand B. and Choi A., (2010), Examining the impact of student use of multiple modal representations in constructing arguments on organic chemistry laboratory classes, Res. Sci. Educ., 40(1), 29–44.
- Hand B., Norton-Meier L. and Jang J. Y., (2017), Examining the Impact of an Argument-Based Inquiry on the Development of Students’ Learning in International Contexts, More Voices from the Classroom, Rotterdam: Sense Publishers, pp. 1–9.
- Hand B., Shelley M. C., Laugerman M., Fostvedt L. and Therrien W., (2018), Improving critical thinking growth for disadvantaged groups within elementary school science: a randomized controlled trial using the Science Writing Heuristic approach, Sci. Educ., 102(4), 693–710.
- Higher Education Council (HEC), (2018), Science Teaching Undergraduate Program, HEC, https://www.yok.gov.tr/Documents/Kurumsal/egitim_ogretim_dairesi/Yeni-Ogretmen-Yetistirme-Lisans-Programlari/Fen_Bilgisi_Ogretmenligi_Lisans_programi.pdf.
- Hinton M. E. and Nakhleh M. B., (1999), Students’ Microscopic, Macroscopic, and Symbolic Representations of Chemical Reactions, Chem. Educ., 4, 158–167.
- Irby S. M., Borda E. J. and Haupt J., (2018), Effects of implementing a hybrid wet lab and online module lab curriculum into a general chemistry course: impacts on student performance and engagement with the chemistry triplet, J. Chem. Educ., 95(2), 224–232.
- Klein P. D., (1999), Reopening inquiry into cognitive processes in writing-to-learn, Educ. Psychol. Rev., 11, 203–270.
- Kutru Ç. and Hasançebi F., (2024), The Effect of Argumentation-Based Science Learning-Supported STEM Education on 7th Grade Students' Communication, Scientific Creativity, Problem-Solving Skills, and Critical Thinking Disposition, Buca Faculty Educ. J., 59, 139–175.
- Landis J. R. and Koch G. G., (1977), The measurement of observer agreement for categorical data. Biometrics, 159–174 DOI:10.2307/2529310.
- McDermott M. A. and Hand B., (2010), A secondary reanalysis of student perceptions of non-traditional writing tasks over a ten-year period, J. Res. Sci. Teach., 47(5), 518–539.
- Ministry of National Education, (2018), Science Curriculum Middle (School 8th Grade), Ankara.
- Moore E. B., Chamberlain J. M., Parson R. and Perkins K. K., (2014), PhET interactive simulations: transformative tools for teaching chemistry, J. Chem. Educ., 91(8), 1191–1197.
- Nakhleh M. and Krajcik J. S., (1994), Influence of Levels of Information as Presented by Different Technologies on Students’ Understanding of Acid, Base, and pH Concepts, J. Res. Sci. Teach., 31, 1077–1096.
- National Research Council, (2012), A framework for K-12 science education: practices, crosscutting concepts, and core ideas, Washington: The National Academy of the Sciences.
- Nussbaum E. M., (2021), Critical integrative argumentation: toward complexity in students’ thinking, Educ. Psychol., 56(1), 1–17.
- Osborne J., Erduran S. and Simon S., (2004), Enhancing the quality of argumentation in school science, J. Res. Sci. Teach., 41(10), 994–1020.
- Permatasari M. B., Rahayu S. and Dasna I. W., (2022), Chemistry Learning Using Multiple Representations: A Systematic Literature Review, J. Sci. Learn., 5(2), 334–341.
- Price P. C., Jhangiani R., Chiang C., Leighton D. C. and Cuttler C., (2017), Experimental Design, Research Methods in Psychology, 3rd American edn, ch. 5.2, pp. 83–88.
- Rapanta C. and Christodoulou A., (2022), Walton's types of argumentation dialogues as classroom discourse sequences, Learn., Culture Soc. Interaction, 36, 100352.
- Smalheiser N. R., (2017), Chapter 12–Nonparametric Tests, in Data Literacy, Academic Press, pp. 157–167.
- Stevens S. S., (1946), On the theory of scales of measurement, Science, 103(2684), 677–680.
- Suh J. K., (2016), Examining teacher epistemic orientations toward teaching science (EOTS) and its relationship to instructional practices in science (Unpublished doctoral dissertation), IA, USA: The University of Iowa.
- Talanquer V., (2011), Macro, Submicro, and Symbolic: The Many Faces of the Chemistry Triplet, Int. J. Sci. Educ., 33(2), 179–195.
- Tatli Z. and Ayas A., (2013), Effect of a virtual chemistry laboratory on students' achievement, Educ. Technol. Soc., 16(1), 159–170.
- Tekindur A., (2022), Effect of argument-based inquiry approach on fourth grade students’ science achievement, inquiry and scientific writing skills, Doctoral Thesis, Ankara: Hacettepe University, Institute of Educational Sciences.
- Treagust D. F., Chittleborough G. and Mamiala T. L., (2003), The Role of Submicroscopic and Symbolic Representations in Chemical Explanations, Int. J. Sci. Educ., 25, 1353–1368.
- Walton D., (2016), Argument evaluation and evidence, vol. 23, Springer.
- Walton D., Reed C. and Macagno F., (2008), Argumentation schemes, Cambridge University Press.
- Wang L., Hodges G. and Lee J., (2022), Connecting macroscopic, molecular, and symbolic representations with immersive technologies in high school chemistry: the case of redox reactions, Educ. Sci., 12(7), 428.
- Washburn E. and Cavagnetto A., (2013), Using argument as a tool for integrating science and literacy, Reading Teacher, 67(2), 127–136.
- Wellington J. and Osborne J., (2001), Language and literacy in science education, UK: McGraw-Hill Education.
- Yaman F., (2018), Effects of the science writing heuristic approach on the quality of Prospective science teachers’ argumentative writing and their understanding of scientific argumentation, Int. J. Sci. Math. Educ., 16(3), 421–442.
- Yaman F., (2019), Investigation of Multiple Levels of Representations in Students Written Argument using Virtual Chemistry Laboratory, Elementary Educ. Online, 18(1), 207–207.
- Yaman F., (2020), Pre-service science teachers’ development and use of multiple levels of representation and written arguments in general chemistry laboratory courses, Res. Sci. Educ., 50(6), 2331–2362.
- Yaman F., Çıkmaz A., Şahin E. and Hand B., (2019), The SWH approach from theory to practice: application example in chemistry laboratories, J. Faculty Educ. Trakya, 9(2), 260–286.
- Yaman F. and Hand B., (2022), Examining pre-service science teachers’ development and utilization of written and oral argument and representation resources in an argument-based inquiry environment, Chem. Educ. Res. Pract., 23, 948–968 10.1039/d2rp00152g.
- Yaman F. and Hand B., (2024a), Examining the link between oral and written reasoning within a generative learning environment: the impact of the Science Writing Heuristic approach, Int. J. Sci. Educ., 46(8), 750–772 DOI:10.1080/09500693.2023.2256460.
- Yaman F. and Hand B., (2024b), Exploring Conditions for Utilizing Representations in Chemistry in an Argument-Based Inquiry Environment: Laboratory Only, Technology Only, or a Combination of Laboratory and Technology, J. Chem. Educ., 101(6), 2231–2243 DOI:10.1021/acs.jchemed.3c01136.
- Yaman F. and Hand B., (2025), A Longitudinal Study Examining the Role of Generative Laboratory Environments in the Utilization of Argument, Representation, and Reasoning, J. Res. Sci. Teach., 1–31 DOI:10.1002/tea.70011.
- Yıldırım A. and Şimşek H., (2021), Sosyal Bilimlerde Nitel Araştırma Yöntemleri [Qualitative Research Methods in the Social Sciences], 12th edn, Ankara: Seçkin Yayıncılık, ISBN: 978-975-02-6982-0.
- Yin R. K., (2014), Case Study Research Design and Methods, 5th edn, Thousand Oaks. CA: Sage, p. 282.
This journal is © The Royal Society of Chemistry 2025