Beatriz Crujeiras-Pérez* and Pablo Brocos
Departamento de Didácticas Aplicadas, Facultade de Ciencias da Educación, Universidade de Santiago de Compostela, Spain. E-mail: beatriz.crujeiras@usc.es; pablo.brocos@usc.es
First published on 23rd November 2020
This study addresses the use of epistemic criteria related to the scientific practice of inquiry in the context of environmental chemistry. In particular, it analyses the types of criteria used by pre-service teachers when assessing the adequacy of several scientific procedures for identifying microplastics in beach sand, as well as the ways in which the participants make use of said criteria. The participants were 22 pre-service primary teachers, divided into small groups of 3–4, who were given the task of assessing the scientific quality of three different procedures before selecting the one they considered to be the best option. The data collected include audio recordings of the participants' small-group conversations and their written comments. The data analysis is framed in qualitative content analysis: the participants' conversations were transcribed and coded using the ATLAS.ti software. The coding frameworks used to address each research question were developed by taking into consideration both the literature and the collected data. The main results indicate different patterns in the types of criteria used in the participants' assessments, as well as different uses of criteria within each small group. These results may have been influenced by the participants' limited knowledge of both scientific inquiry and chemistry.
The relevance of this study emanates from the 2015 PISA report, the most recent report to focus on science as the major domain assessed (OECD, 2016b). The results of this report indicate that, of the three assessed scientific competencies, evaluating and designing investigations yielded the lowest score in 55% of the evaluated countries (OECD, 2016b). Furthermore, numerous issues regarding the implementation of scientific inquiry in primary and secondary education, as well as in teacher education, have been identified in the literature. One of the main problems that students encounter is the difficulty in designing complete and well-defined procedures (Zimmerman, 2000; Crujeiras-Pérez and Jiménez-Aleixandre, 2017; García-Carmona et al., 2017). For teachers, the main issues are associated with a lack of experience and training in carrying out scientific investigations (Capps et al., 2012), as well as a lack of trust in their own ability to implement inquiry-oriented strategies in the classroom setting (Rikmanis et al., 2012). These difficulties, along with the PISA assessment results, highlight the need, firstly, to devise tasks aimed at promoting students' competences in the evaluation and design of scientific inquiries and, secondly, to develop teaching strategies that encourage instructors to appropriately implement inquiry approaches in their classrooms. This potential course of action is in line with the educational implications emphasized in numerous studies about inquiry (e.g. Biggers and Forbes, 2012; Crujeiras-Pérez and Jiménez-Aleixandre, 2017; García-Carmona et al., 2017). However, we believe that if participants are not engaged with the nature of scientific inquiry, that is to say with the characteristics of the scientific processes through which scientific knowledge is constructed and justified (Schwartz et al., 2008), this perspective may be insufficient to achieve a substantial improvement in learning about scientific inquiry. This assertion is grounded in empirical studies that have determined that students do not acquire learning about scientific inquiry merely by doing science (e.g. Bell et al., 2003; Metz, 2004; Sandoval and Morrison, 2003), and it is also backed by science education literature reviews (Lederman et al., 2013; Lederman et al., 2014). Therefore, we propose that additional aspects directly involved in the construction of scientific knowledge, such as the epistemic knowledge associated with scientific inquiry, be taken into account, and this constitutes the focus of this study.
Within the framework of Science Education, epistemic knowledge is understood as the comprehension of the role of the specific constructs that are involved in the production of knowledge and of the essential characteristics of the knowledge-building processes (Duschl, 2008).
Addressing epistemic knowledge is relevant for science education since a higher level of epistemic knowledge allows for more productive learning about scientific practices and contents (Sandoval, 2005; Elby et al., 2016). Epistemic knowledge has been examined from a range of perspectives: disciplinary, personal and social (Kelly et al., 2012). This study focuses on the disciplinary perspective, which, according to these authors, includes knowledge about the nature of the evidence, criteria for selecting scientific theories, and the role of theoretical dependence in scientific methodologies, among other aspects.
We agree with Sandoval's (2014) consideration that epistemic knowledge is manifested through actions and reasoning and, as such, should be evaluated within practice. This is also in line with Kelly's (2008) proposal that science learning should be evaluated through the study of the participants' practices in situ. Therefore, in this study the participants' epistemic knowledge was examined by analysing their engagement in an inquiry task, placing particular emphasis on the epistemic criteria used when assessing several procedures for identifying microplastics in beach sand.
Epistemic criteria are the standards that are used by scientists in order to evaluate the validity and accuracy of scientific products (Pluta et al., 2011). According to these authors, making epistemic criteria explicit in the classroom setting is a good way of making the cognitive practices of science visible, for example, the awareness that criteria should be applied contextually rather than rigidly.
Several studies in the literature address students' use of criteria in scientific practices such as argumentation (Hogan and Maglienti, 2001), modelling (Pluta et al., 2011) and inquiry (Samarapungavan et al., 2006). However, we have yet to identify any studies in which the specific criteria for assessing a scientific procedure are identified in a given context. For this study, we have used the evaluation of the adequacy of several scientific procedures as the basis for learning both about the design itself (Pérez-Vidal and Crujeiras-Pérez, 2019) and about the epistemic knowledge associated with inquiry or, in other words, about the nature of scientific inquiry (NOSI).
Addressing NOSI in science lessons is important as it not only allows students to perform better investigations, but also enables them to participate in science-related discourses and make decisions as scientifically literate citizens (Flick and Lederman, 2006). In fact, as Schwartz et al. (2012) pointed out, learners may be able to conduct a scientific investigation, but it is important that they also understand the nature of the practices involved in order to be able to evaluate the validity of claims and understand how scientific knowledge is developed and accepted.
Several epistemic aspects aligned with NOSI frameworks have been examined in empirical studies, for example, considering systematicity as a feature of the scientific methodology that leads to valued forms of knowledge (Sandoval and Reiser, 2004), considering hypotheses as being subject to falsification (Constantinou and Papadouris, 2004), acknowledging the role of evidence in the justification of scientific conclusions (Kittleson, 2011), identifying patterns in data sets (Sandoval et al., 2000), and understanding that conclusions should cohere with the range of available evidence (Falk and Yarden, 2009).
In order to ensure the adequate teaching and learning of NOSI, it is fundamental that teachers are familiar with these epistemic aspects, and, as such, these must be addressed in teacher training programs. However, this sort of training is not common in science education instructional programs (Lederman et al., 2013). Given the lack of training on this aspect, teachers may not be aware of its relevance in science instruction. In this regard, Strippel and Sommer (2015) explored the manner in which chemistry teachers addressed NOSI issues in their chemistry laboratory lessons, observing that these were not a primary goal for them.
It is worth mentioning that there is a debate as to how NOS and NOSI issues should be addressed in science education, namely whether this should be done in general terms or specifically for each discipline. In this sense, we agree with the idea of addressing the particular nature of chemical knowledge, as proposed by Erduran (2001), Erduran and Scerri (2003) and Vesterinen and Aksela (2009). In this study, we have addressed NOSI aspects from a general perspective, focusing on how pre-service primary teachers apply epistemic criteria in their assessment of a scientific investigation. This constitutes the first step in their NOSI training, giving them the opportunity to become acquainted with the general epistemic aspects that must be considered in order to be able to plan and carry out adequate scientific investigations. Further instruction should include specific aspects associated with each scientific discipline.
The research questions that guided the investigation are:
(1) What epistemic criteria do pre-service primary teachers use for selecting the most appropriate procedure for identifying microplastics in beach sand?
(2) How do participants use the epistemic criteria to select the most appropriate procedure?
The participants were 22 pre-service primary teachers (PSTs) from a public university located in the northwest of Spain, who were enrolled in the last science education subject included in their degree syllabus. The PSTs worked in six small groups of 3 to 4 participants (GA, GB, GC, GE, GG and GH).
It is important to note that in the primary education syllabus in Spain, science is taught as a single subject, with no differentiation between specific scientific disciplines such as chemistry, physics or biology. The science curriculum is organised around five blocks of content, namely: (1) introduction to scientific activity, (2) human beings and health, (3) living beings, (4) matter and energy, and (5) technology, apparatus and machines. This curriculum is used as a framework for organising the instructional approaches addressed in science teacher education.
The aforementioned subject addressed a specific topic about the disciplinary aspects associated with the epistemic knowledge involved in the development of the scientific practices of inquiry, argumentation and modelling. The task analysed in this study focused on inquiry and formed part of a more comprehensive project in which these three scientific practices were addressed. The task was framed in environmental chemistry: the accumulation of microplastics is a current environmental issue that should be addressed in chemistry lessons in order to raise PSTs' awareness of the use and management of plastics. This topic relates to the curricular content of waste production, pollution and environmental impact included in the Spanish science curriculum for primary education. Moreover, the context of identifying microplastics in beach sand relates to specific chemistry contents addressed in primary science lessons all over the world, such as mixtures and solutions and the separation of mixtures. In this study, we explored the potential of this context for engaging PSTs in inquiry and NOSI as well as for learning chemistry.
The task began with a brief oral presentation about the environmental threat posed by the presence of microplastics in marine and coastal environments. The PSTs were then asked to carry out an investigation to ascertain whether these compounds were found on the regional beaches, highlighting the need to develop a procedure that was both scientifically adequate and reliable. For this purpose, three alternative procedures (A, B and C) were presented to the participants (see Appendix), and they were required to choose the procedure that they considered to be the most reliable from a scientific point of view and to justify their choice.
The highest quality procedure (B) was designed by the authors, drawing on a standardized method developed following a review of procedures for the extraction and determination of microplastics in beach sand by means of density separation using a saline solution (Besley et al., 2017). This adaptation, which was adjusted to the classroom context and to the time available, was previously tested and performed by the authors. The alternative procedures, A and C, did not differ from B in terms of their overall rationale or extraction mechanism; however, they offered less detail, systematicity, accuracy, reliability and replicability. For instance, procedure C does not include two samples for each kind of sand (reliability), and procedure A does not specify the quantity of sand to be analysed (detail). It should be noted that the aim of the task was to select the best procedure for performing the investigation and to justify this selection in terms of these epistemic criteria, rather than mastering the different chemical procedures, which would go beyond the objectives included in the science curriculum for primary education.
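For readers less familiar with the underlying chemistry, the extraction mechanism shared by the three procedures can be summarised as follows (the densities below are typical handbook values given for illustration; they are not reported in the procedures themselves): a particle is recovered by flotation in the supernatant when its density is lower than that of the saturated salt solution,

$$\rho_{\text{particle}} < \rho_{\text{NaCl(sat)}} \approx 1.2\ \mathrm{g\ cm^{-3}},$$

so common microplastics such as polypropylene (≈0.90 g cm⁻³) and polyethylene (≈0.92–0.97 g cm⁻³) float, whereas quartz sand grains (≈2.65 g cm⁻³) sink.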
The task included other steps that, for the purpose of this study, it was not necessary to analyse; these addressed the modification of the discarded procedures in order to increase their scientific quality, the selection of materials, the data collected once the selected procedure was put into practice, and the conclusions drawn and their reliability.
For the analysis, the PSTs' conversations were transcribed and coded using the ATLAS.ti software. In order to address the first research question, the transcriptions were examined according to the coding frame presented in Table 1. The categories for this coding frame were fundamentally developed in a concept-driven way, taking into consideration the aspects that an appropriate scientific procedure must meet. These categories, which correspond to the epistemic criteria, drew from the philosophy of science, in particular Bunge's (1968) characteristics of science, such as being accurate, testable and systematic, and from the understandings of inquiry included in science education policy documents (NRC, 1996; 2000), or, in other words, the understandings of the nature of scientific inquiry (NOSI) that should be developed by K-12 students. For instance, one of these understandings is that "the methods and procedures that scientists use to obtain evidence must be clearly reported in order to enhance opportunities for further investigation", which involves the "detail" and "systematicity" criteria, given that a scientific procedure must be detailed and well organised in order for others to be able to reproduce it. For the purpose of this analysis, the data were segmented into episodes, which were divided by thematic criteria.
Table 1 Coding frame for the epistemic criteria

| Epistemic criteria | Description | Terms used by participants |
|---|---|---|
| Accuracy | Referring to the inclusion of specific data, such as the quantities to be measured for each magnitude | Suitable, greater precision, more specific data |
| Systematicity | Referring to the order in which the steps were to be performed | Sequenced |
| Detail | Referring to the need for each step to be included and described | Complete, detailed, more exhaustive, specific, concrete |
| Reliability | Referring to the use of two samples of each beach sand | Effective, representative |
| Replicability | Considering the need for the procedure to be replicated by others | So that the person who reads it understands it, so that the procedure can be repeated |
With regard to the second research question, the transcriptions were examined in terms of the use that the PSTs made of the epistemic criteria when selecting the procedure. This analysis was essentially inductive; it was conducted by examining each turn of speech in the transcripts and resulted in the construction of the following five data-driven categories: (1) referring to explicit criteria and justifying them; (2) referring to explicit criteria, but providing inadequate justifications; (3) referring to explicit criteria, but providing no justification; (4) referring to the criteria implicitly; and (5) revealing misconceptions in the use of criteria.
It is worth mentioning that the PSTs were familiar with all of the epistemic criteria examined in this paper because these criteria had been addressed in the two 90-minute sessions that took place prior to them performing the analysed task.
We must also note that the analysis for the two research questions was performed independently by the two authors through several cycles until an 85% agreement was attained in the coding. This percentage is considered to be more than acceptable for ensuring reliability in qualitative content analysis (Julien, 2008).
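As an illustration of how such an agreement figure can be computed, the following is a minimal sketch in Python; the function name and the episode codes are hypothetical and do not reflect the authors' actual workflow:

```python
from typing import Sequence

def percent_agreement(codes_a: Sequence[str], codes_b: Sequence[str]) -> float:
    """Proportion of episodes to which both coders assigned the same category."""
    if len(codes_a) != len(codes_b):
        raise ValueError("Both coders must code the same set of episodes")
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

# Hypothetical codes assigned by each author to the same four episodes
coder_1 = ["Detail", "Reliability", "Accuracy", "Detail"]
coder_2 = ["Detail", "Reliability", "Detail", "Detail"]
print(f"Agreement: {percent_agreement(coder_1, coder_2):.0%}")  # Agreement: 75%
```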
Likewise, although this research required participation in a laboratory activity, the PSTs were not exposed to any significant risk, because the task was adapted to enable it to be performed in a school laboratory and as such it did not involve any potential hazards such as the use of strong acids, high temperatures, or dangerous materials and/or procedures. The samples of beach sand were collected and processed by the authors, who also prepared the saturated NaCl solution.
Table 2 Frequency of use of the epistemic criteria (number of episodes in which each group appeals to each criterion)

| Epistemic criteria | GA | GB | GC | GE | GG | GH | Total |
|---|---|---|---|---|---|---|---|
| Accuracy | 4 | 0 | 3 | 2 | 2 | 1 | 12 |
| Systematicity | 2 | 0 | 0 | 0 | 0 | 0 | 2 |
| Detail | 3 | 1 | 4 | 4 | 4 | 0 | 16 |
| Reliability | 2 | 5 | 2 | 4 | 1 | 2 | 16 |
| Replicability | 2 | 0 | 0 | 0 | 2 | 0 | 4 |
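As a sketch of how the frequencies in Table 2 can be tallied once the coded episodes are exported, consider the following Python fragment; the data structure is hypothetical and does not correspond to the actual ATLAS.ti export format:

```python
from collections import Counter

# Hypothetical export of coded episodes as (group, criterion) pairs
episodes = [
    ("GA", "Accuracy"), ("GA", "Accuracy"), ("GB", "Reliability"),
    ("GC", "Detail"), ("GE", "Reliability"), ("GG", "Detail"),
]

# Cell values of Table 2: number of episodes per (group, criterion)
cells = Counter(episodes)
for (group, criterion), n in sorted(cells.items()):
    print(f"{group}  {criterion:<13} {n}")

# Row totals per criterion, as in the "Total" column
totals = Counter(criterion for _, criterion in episodes)
print(dict(totals))
```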
In terms of frequency, the two criteria most used by the PSTs were detail and reliability, with a total frequency of 16 episodes each. The accuracy criterion was also widely used, appearing in 12 episodes, whereas the other two, systematicity and replicability, were mentioned less often, being identified in just two and four episodes respectively.
One example of the criteria considered is reproduced in the following excerpt from GA's discourse:
| Turn | Transcription | Criteria |
|---|---|---|
| 80 | Aroa: It gives like… a good explanation, no, but a more detailed one about… about what needs to be done, or something like that. | Detail |
| 81 | Alicia: An explanation… | |
| 82 | Aroa: More complete and detailed. | |
| 83 | Alicia: [Writing] Complete… about what needs to be done. | |
| 84 | Ángela: And yeah… also makes reference to… [slip of the tongue] to the grams, too. | |
| 85 | Aroa: [simultaneously] No, instead of "about what needs to be done", about the steps to follow. Well, or like that, whatever, don't change it. | Systematicity |
| 86 | Alicia: That way we can put it step by step. | |
| […] | | |
| 90 | Alicia: Specifying… the quantities… | Accuracy |
| 91 | Aroa: Specific [quantities]. | |
| 92 | Alicia: And the timing… | |
| 93 | Aroa: Specific [timing]. | |
| 94 | Alicia: Specific? I already wrote specifying… quantities and timing… | |
| 95 | Ángela: That must be used. | |
| […] | | |
| 101 | Alicia: I'd put something like that now as it's more reliable… | Reliability |
| 102 | Aroa: [simultaneously] Or something like… so that… so that the person who reads it really understands… because that's what she [the teacher] told us. For example, the recipe, right? If they tell you, you have to make an omelette with eggs and potatoes and whatever, and in fact, the person who reads it, they don't know… I mean… | Replicability |
| 103 | Alicia: What they add first. | |
| 104 | Aroa: Yeah. | |
In this excerpt, the PSTs considered all of the criteria in their intervention. In turns 80–84 they examined whether the procedures included a complete and detailed explanation of what to do, which was coded under the category of detail. Then, in turns 84–85, they continued their assessment by examining whether or not the procedure contained all of the required stages and whether it could be followed step by step, which was an indicator of systematicity. After that, in turns 90–95, the participants considered the need to specify the quantities of the variables to be measured, which was coded as accuracy. Next, in turn 101, they explicitly mentioned the reliability of the procedure, and in turns 102–104 they addressed its replicability, suggesting that other people would need to understand the information in order to be able to replicate the procedure, meaning that it should include all of the specific steps and elements so that anyone could reproduce it under the same conditions.
We then examined the order in which the different groups used the criteria in order to identify existing patterns. To do so, we represented the timeline encompassing the assessment of the procedures and the selection of the most appropriate one (see Fig. 1).
Fig. 1 Participants' use of criteria across time. Legend: A = accuracy, S = systematicity, D = detail, Rl = reliability, Rp = replicability.
The sequence of criteria used by the small groups provides useful information on the importance that the PSTs placed on some criteria compared to others, and it also made it possible to identify patterns in the PSTs' behaviours, as described below. Patterns are important in research, as they allow findings to be transferred to other contexts. Moreover, this is also useful as it gives us an idea of how the PSTs would conduct the investigation in terms of epistemic criteria if they were required to do so.
As represented in Fig. 1, the use of criteria differed considerably across the groups. Although there was no clear overall pattern, some coincidences among the groups can be highlighted. Three of the six groups (GB, GC and GE) started the assessment by examining the level of detail (D) in the procedures. Their decision to do so may have been due to prior training on planning and carrying out scientific investigations, in which all of the aspects that a quality plan should meet were addressed. This training particularly emphasized the need for all of the steps to be followed in the investigation to be explained clearly, given that this aspect is widely addressed in the literature as one of the difficulties associated with students' inadequate performances in inquiry settings.
Moreover, there were two groups (GB and GH) in which the use of reliability (Rl) prevailed, although they only considered two criteria in their assessments. This may indicate that these groups regarded reliability as the most important criterion for the assessment. Finally, it is worth mentioning that whenever the replicability criterion (Rp) was mentioned, it always appeared at the end of the assessment process, which may suggest that the groups did not find it as relevant as the other criteria.
The fact that a given criterion was addressed on several occasions by the same group might indicate that the PSTs had different opinions or knowledge about it. It is worth noting, however, that the fact that a criterion was discussed on multiple occasions does not imply that they demonstrated better knowledge about it. Therefore, although the detail and reliability criteria were the most frequently used, this does not necessarily mean that the participants’ discourse revealed a more sophisticated knowledge about said criteria. In order to address this question, it is necessary to examine the use that participants made of said criteria, which is analysed in the next section.
Table 3 PSTs' use of the epistemic criteria when selecting the procedure

| Actions | Small group (number of episodes per criterion) | N total (%) |
|---|---|---|
| a. Referring to explicit criteria and justifying them | GE (1), GA (2), GB (5), GG (1), GC (1), GH (1), GE (2) | 13 (26) |
| b. Referring to explicit criteria, but providing inadequate justifications | GG (1), GC (1), GG (1) | 3 (6) |
| c. Referring to explicit criteria, but providing no justification | GC (1), GA (1), GA (1), GG (1), GG (3), GB (1), GE (2) | 10 (20) |
| d. Referring to the criteria implicitly | GA (4), GA (2), GA (1), GG (1), GA (2), GE (1), GC (3), GE (2), GG (1), GH (1), GH (1) | 19 (38) |
| e. Revealing misconceptions in the use of criteria | GC (2), GC (1), GE (2) | 5 (10) |
In general, the most frequent action was the implicit use of epistemic criteria for selecting the most appropriate procedure, which represented 38% of the episodes. However, the most sophisticated level, corresponding to the participants' use of explicit criteria together with a justification for said usage, accounted for 26% of the episodes, which can be considered an acceptable result given the difficulties associated with this operation that have been identified in the literature. It is worth noting that 10% of the episodes involved the misuse of criteria, mistakenly attributing the meaning of one criterion to another.
The use of criteria differed considerably among the small groups, both in terms of the types of criteria employed and the actions performed with them. We will now discuss each category with examples from the PSTs' interventions.
These results show that the groups tended to discuss only one criterion in an explicit and justified manner, except for GE, which considered two of them. On the other hand, the fact that the majority of the episodes corresponded to the use of reliability and detail might suggest that the PSTs were more familiar with these criteria.
The following excerpt describes the ways in which GB made explicit use of the criterion reliability and justified it:
23 Bárbara: This procedure…tests two samples and their results may therefore be more reliable. If one [one of the tests] goes wrong, they always have another opportunity to check, or the results of the two samples can be compared afterwards so that it [the experiment] is more… you know? To check if the results are really the same and that they are reliable.
In this excerpt, Bárbara was explicitly referring to reliability, which she mentioned at the beginning of the intervention. She justified the idea of using two samples of sand, explaining that in this way it would be possible to compare results. Therefore, she associated the use of more than one sample with the reliability of the results.
The following excerpt reproduces the way in which GG used the accuracy criterion, justifying it inadequately:
165 Gemma: No, [procedure B] is more accurate because the other [procedure A] does not filter anything.
In this intervention, Gemma was comparing procedures A and B (see Appendix) using the term accuracy; however, she provided an inadequate justification for it, as her reasoning was based on aspects related to the level of detail of the steps included in the procedure, rather than to their accuracy. In particular, she considered that filtering the supernatant liquid before observing the sample through the stereomicroscope would result in a more accurate procedure than simply removing the microplastics before observing the sample through the stereomicroscope.
The following excerpt reproduces the way in which GG made use of the detail criterion without justification:
182 Gemma: I prefer [procedure] B because it is more specific.
In this excerpt, the participant selected procedure B as the best option, basing her decision on the level of detail that it presented. She referred to this criterion using the term specific, which was correct; however, she did not justify why this procedure was more specific than the other two.
Five of the six small groups performed this action, although with different frequencies and in relation to different criteria. In general, the groups made use of just one or two criteria, with the exception of GA, which used four of the five (accuracy, systematicity, detail and replicability).
The following excerpt reproduces an example of the way in which GC referred implicitly to the reliability criterion:
73 Cintia: What we can write is that… that…I mean, it runs tests, but it even runs two tests for each kind [of sample]. It does not just run one [test] like the others [procedures A and C].
74 Cruz: OK.
75 Celso: Yeah, it is what I…
76 Cintia: Exactly. No, no, but not only that, it is like it tests it twice in case one of the tests went wrong, and all that, you know?
In this excerpt, the participants were looking for aspects of procedure B that they could use to justify selecting it over the other two. In turn 73, Cintia referred to the number of samples tested per type of sand, given that this procedure involved using two samples per type of sand. Moreover, in turn 76, the same participant, Cintia, interpreted the use of two samples as a way of having more data to compare. It can be considered that she contemplated the reliability criterion in both interventions; however, she did not explicitly refer to it.
Five episodes were coded under this category and these were identified in two of the six small groups (GC and GE) and were related to two criteria, accuracy and reliability.
GC mistook both the accuracy and reliability criteria for the detail criterion, whereas GE used reliability when referring to effectiveness.
The following extract reproduces an example of how GC mistook accuracy for detail:
47 Cruz: [Procedure B] is more accurate, it contains more steps…
In this example, Cruz was comparing procedure B with the other two. She considered procedure B to be more accurate than the others, however her justification was incorrect as she associated accuracy with the inclusion of a greater number of steps, which corresponded to the detail criterion, rather than to accuracy.
In general, the participants made use of five epistemic criteria related to inquiry (accuracy, systematicity, detail, reliability and replicability). However, the use of each criterion differed among the groups, with detail and reliability being the most frequently used. This finding is quite relevant in terms of the PSTs' engagement in planning scientific investigations, since it is widely agreed that participants usually propose ill-defined or incomplete plans (Zimmerman, 2000; Krajcik et al., 1998; Crujeiras-Pérez and Jiménez-Aleixandre, 2017). Therefore, the strategy of considering the epistemic criteria involved in a scientific procedure might contribute to improving the PSTs' performances when planning a scientific investigation, an issue highlighted in the literature as one of the main difficulties encountered by students.
Although the PSTs considered several criteria for selecting the best procedure, only one group made use of all of them. In light of these results, we believe that holding a general discussion with all of the groups before making the final decision might improve the use of criteria. This aligns with Pluta et al.'s (2011) proposal that a whole class discussion about criteria could promote better group and individual learning.
The patterns identified in the PSTs' performances over time provide relevant information that enables us to interpret the use that they made of the epistemic criteria involved in planning and carrying out scientific investigations. Moreover, although qualitative results are not generalisable, these patterns suggest the performances that could be expected in other learning contexts when participants have to select a scientific procedure or even design one. Beyond facilitating visualisation, identifying sequences in research data is relevant when transferring findings to educational settings. In this case, identifying the sequences that the PSTs followed when assessing inquiry procedures has implications for the design of learning tasks and for teaching interventions, showing for instance that the criterion of systematicity needs to be emphasized more in lessons, especially to prevent students with naïve views of NOSI from approaching laboratory inquiry through trial and error instead of designing and carrying out systematic investigations (Sandoval and Reiser, 2004).
Regarding the second research question, which addresses the manner in which PSTs use the epistemic criteria for selecting the most appropriate procedure, a low percentage of the PSTs were able to use explicit criteria and propose an adequate justification for their usage, whereas most participants used the criteria implicitly. These findings suggest that the number of sessions dedicated to addressing the epistemic criteria associated with scientific practices before the task was performed was not sufficient to fully engage the PSTs in the use of epistemic knowledge. It is also important to note that the participants were not informed about the specific criteria that they needed to apply, because we sought to investigate how they activate their epistemic knowledge about inquiry in a specific context, as well as to observe their comprehension of the relevance of this knowledge for meaningful engagement in inquiry practices, as recommended in the literature about learning through scientific practices (e.g. Duschl, 2008; Berland et al., 2016).
In fact, we consider that, rather than merely providing the PSTs with a list of predefined criteria that are to be used, it is only possible to meaningfully address the epistemic criteria involved in inquiry through the participants' reflective immersion in inquiry practices, such as the selection of an adequate procedure to investigate an issue, as proposed in this study.
Although the results show that the PSTs’ use of epistemic criteria was not optimal, the strategy of assessing different procedures and selecting which one was the best in order to conduct an adequate scientific investigation might be a promising resource for promoting the effective engagement of participants in inquiry. Furthermore, the epistemic criteria that are used for the assessment can be understood as epistemic tools that support the process of planning an investigation, since epistemic tools are characterised as physical, symbolic, and/or discursive artefacts that facilitate the construction of knowledge and support knowledge building (Kelly and Cunningham, 2019).
However, in light of the results, we consider that the PSTs may require further training from the instructors regarding the use of criteria, or the establishment of specific classroom norms, such as the need to justify each decision or criterion within the groups. Although the need for justification was included in the task handout and the PSTs were familiar with the importance of this operation in argumentation practices, they may not have considered it necessary for inquiry. This aspect also highlights the need for scientific practices to be addressed from an integrated perspective, given that all three are interconnected (Osborne, 2014; Jiménez-Aleixandre and Crujeiras-Pérez, 2017).
To conclude, another aspect that may have influenced the PSTs' performances was their epistemic beliefs, that is to say, their beliefs about the nature of knowledge and knowing (Hofer and Pintrich, 1997). Numerous studies have highlighted the relationship between these beliefs and students' learning (Mason et al., 2013; Getahum et al., 2016; Lin and Chang, 2018); however, further research addressing the relationship between such beliefs and the use of epistemic criteria may allow for the development of innovative approaches that promote participants' adequate use of epistemic criteria when engaging in scientific practices.
In order to obtain reliable results, we must follow a procedure that is as scientifically adequate as possible. Below you will find three examples of methods that have been used by three scientific groups when performing this investigation. You have to decide which one is the most adequate if you want to guarantee the reliability of your results.
– Which procedure should you use in order to obtain the most representative data for the investigation? Why?