Peng He,*ab Changlong Zheng*a and Tingting Li bcd
aInstitute of Chemical Education, Northeast Normal University, P. R. China. E-mail: hep905@nenu.edu.cn; zhengcl@nenu.edu.cn
bCREATE for STEM Institute, Michigan State University, USA
cFaculty of Education, Northeast Normal University, P. R. China
dDepartment of Counseling, Educational Psychology, and Special Education, Michigan State University, USA
First published on 12th January 2021
This study aims to develop and validate a new instrument for measuring chemistry teachers’ perceptions of Pedagogical Content Knowledge for teaching Chemistry Core Competencies (PCK_CCC) in the context of new Chinese chemistry curriculum reform. The five constructs and the initial 17 items in the new instrument were contextualized by the PCK pentagon model (Park S. and Oliver J. S., (2008), J. Res. Sci. Teach., 45(7), 812–834.) with the notions of the Senior High School Chemistry Curriculum Standards (Ministry of Education, P. R. China, 2017). 210 chemistry teachers from a University-Government-School initiative voluntarily participated in this study. The findings from item analysis, confirmatory factor analysis and correlation analysis provide sufficient empirical evidence to support the convergent and discriminant validity of the instrument. The concurrent validity of the instrument was confirmed by testing mean differences among teacher demographic groups. The high Cronbach's coefficient alpha values show good internal consistency reliability of the instrument. Integrating the evidence from theory and data, we documented a valid and reliable PCK_CCC instrument with five constructs consisting of 16 items. This study provides a thorough process for developing and validating instruments that address teacher perceptions of their PCK in a particular subject domain. The valid and reliable PCK_CCC instrument would be beneficial for teacher education researchers and teacher professional programs.
In particular, pedagogical content knowledge (PCK) (Shulman, 1986) has been proposed as an essential aspect of teachers’ knowledge (Alonzo et al., 2012) and is widely considered crucial for a teacher's ability to create high-quality instruction. PCK and its components are useful for researchers studying teacher knowledge and practice and play a significant role in defining capable and competent teachers (Toh et al., 2006; van Driel and Berry, 2012; Kulgemeyer and Riese, 2018). Thus, it is essential to assess the development of pre-service and in-service teachers’ PCK. Teacher perception arises from the teacher's set of attitudes, beliefs, and knowledge about his/her personal characteristics and attributes (Shavelson et al., 1976; Ghazvini, 2011). As such, teacher perception of PCK is defined as teachers’ self-evaluation of their PCK competencies in a particular teaching context. A large body of research has revealed that teacher perception can be considered a significant indicator of a teacher's PCK (Kuntze et al., 2013; Kavanoz et al., 2015; Großschedl et al., 2019) and could serve as an alternative means of measuring teacher PCK (MaKinster et al., 2010). Moreover, research has shown that teachers’ actual and perceived knowledge develop similarly after a workshop designed to increase their teaching skills (Barton-Arwood et al., 2005). Teacher perception is also a mediator that plays an integral part in teachers’ professional development and can offer a useful lens for researching it (Kuntze et al., 2013). Research also underscores the importance of understanding teachers’ perceptions of the knowledge needed for teaching in order to support their PCK development (Nilsson, 2008; Nilsson and van Driel, 2011). In this sense, exploring teacher perceptions of their PCK can provide insight into how to support teachers’ self-evaluation of their PCK for teaching chemistry in the context of the 2017 SHSCCS.
Nevertheless, few studies have investigated chemistry teachers’ perceptions of their PCK for teaching CCCs. Aligned with the 2017 SHSCCS reform, this study investigates in-service and pre-service chemistry teachers’ perceptions of their PCK in a University-Government-School (UGS) professional learning community (Lv et al., 2016). Our goal is to develop and validate an instrument for determining chemistry teachers’ perceptions of their PCK for teaching CCCs. Such an instrument would help teacher educators monitor the development of teacher professional learning. It could also strengthen pre-service and in-service teachers’ self-awareness by helping them identify their weaknesses in teaching the competency-based curriculum.
Regarding the construct of PCK, numerous studies investigated the components of PCK (e.g., Tamir, 1988; Grossman, 1990; Magnusson et al., 1999). Drawing from those previous studies, Park and Oliver (2008b) identified the five components of PCK and organized them into a pentagon diagram of PCK. According to Park and Oliver (2008b), the components of PCK from different conceptualizations can be mainly categorized as orientation toward teaching science, knowledge of students’ understanding in science, knowledge of science curriculum, knowledge of instructional strategies and representations, and knowledge of assessment of science learning. The knowledge of subject matter and the knowledge of pedagogy are placed outside of PCK as a distinct knowledge base for teaching (e.g., Park and Oliver, 2008a). Reconciling the consensus model of PCK (Carlson and Daehler, 2019) with extant PCK models, Park (2019) claims that the pentagon model with five components can be the conceptual and analytic framework for capturing PCK and integrations among those components. For the integration of the components of PCK, Park and Chen (2012) employed their pentagon PCK model to explore the integration of PCK and found that the relationship between knowledge of student understanding and knowledge of instructional strategies and representations was central in the integration, and knowledge of assessment of science learning was more often connected with the knowledge of student understanding and knowledge of instructional strategies and representations. Aydin and Boz (2013) found that the components of the knowledge of learner and instructional strategy are central among the integrations. Using CoRes and PaP-eRs as analytic tools, Padilla and her colleagues (2008) portrayed the integration among PCK components and found two types of integrations. There is synchronization between knowledge of goals and knowledge of instructional strategy. 
In addition, the knowledge of assessment and the knowledge of learners are integrated with each other.
Teacher demographic variables such as gender, age, teaching experience, and professional stage are considered main factors for understanding teachers’ development of PCK. Koh and her colleagues (2014) examined how demographic variables such as gender, age, teaching experience, and teaching level affect pre-service teachers’ perceptions of PCK, and found that teachers’ age and gender are not related to their perceptions of PCK, whereas teaching experience significantly impacts them. Several studies compared pre-service and in-service teachers and found that in-service teachers have higher PCK than pre-service teachers across subject contexts (Kleickmann et al., 2013; Liu et al., 2015; Meschede et al., 2017). Moreover, the difference in professional development between teachers in urban and rural areas should also be considered (Soebari and Aldridge, 2016). Using qualitative and quantitative methods, Soebari and Aldridge (2016) found that disparities between teachers in urban and rural schools exist. They also pointed out that teaching practices in rural schools continued to be teacher-centered, primarily because of the limited facilities and resources available (Wahyudi and Treagust, 2004). Given the findings from previous studies, the impact of teachers’ demographic variables on their perceptions of PCK should also be investigated in this study to understand their PCK development toward CCC teaching.
Given that validity is a comprehensive construct (American Educational Research Association [AERA], the American Psychological Association [APA], and the National Council on Measurement in Education [NCME], 2014), Trochim and Donnelly (2006) framed different aspects of construct validity consisting of translation validity and criterion-related validity. Face and content validity are the two types of translation validity that are usually examined in the development process. Criterion-related validity comprises four types, namely convergent, discriminant, concurrent, and predictive validity, which are usually examined in the measurement process. We presented face and content validity in the development stage of the instrument. In this study, we particularly conducted and reported the validity evidence in the measurement process of the instrument. The research questions (RQs) are addressed as follows:
RQ1: What is the empirical evidence supporting the convergent and discriminant validity and reliability of the instrument?
RQ2: What is the empirical evidence supporting the concurrent validity of the instrument?
According to the 2017 SHSCCS, CCCs are designed mainly to align with the nature of chemistry knowledge and practice, and the intrinsic values and unique merits of chemistry. Five main components of chemistry core competencies are categorized as: Macroscopic Identification and Microscopic Analysis (MIMA), Changes and Equilibrium (CE), Evidence-based Reasoning and Modeling (ERM), Scientific Inquiry and Innovation (SII), and Scientific Attitude and Social Responsibility (SASR). The first two components represent the big ideas of chemistry such as structure and properties of matter, chemical reactions, and interaction and energy. The third and fourth components can be attributed to scientific practices, which are evidence-based reasoning, modeling, argumentation, investigation, questioning, explanation, communication, and innovation. The component of scientific attitude and social responsibility is treated as the attitude and unique values of chemistry. Students can make correct personal decisions on socio-scientific issues and actively solve real-world problems related to chemistry. In particular, all statements within the five main components are illustrated as students can apply the knowledge of big ideas, practices, and attitudes and values of chemistry to make decisions on socio-scientific issues or to solve real-world problems.
As a collaboration scheme, the University-Government-School (UGS) initiative for practicum placements provides a professional learning community (PLC) for both in-service and pre-service teachers in Mainland China. Before starting their school practicum, pre-service teachers at the university have already finished courses related to subject content, pedagogical methods, and teaching practice. At the beginning of the fall semester, these pre-service teachers are assigned to affiliated schools to begin their two-month on-site educational practice. During the practicum, they routinely observe classroom teaching, assist their supervising teachers, participate in managing student activities, and finally teach formal lessons in front of students. In-service teachers from the practicum schools serve as supervising teachers, mentoring pre-service teachers to develop their professional ability. Typically, an in-service teacher supervises one or two pre-service teachers. As part of the UGS initiative, in-service teachers benefit from university and local government support in the form of facilities and educational resources. In turn, the university provides various professional development (PD) programs for in-service teachers from practicum placements, such as PD training workshops, research-based collaborations, and PLC-based classroom preparation and practice. As one of the main PD programs, a ten-day face-to-face PD training program is held for in-service teachers on campus at the end of the school practicum. During the training period, pre-service teachers help their mentors manage and teach classes to keep up with the routine teaching schedule.
This study established empirical evidence for content and face validity in the development stage; we further confirmed convergent and discriminant validity, and concurrent validity in the pilot and field test stage.
As such, we constructed an initial version of the instrument involving 17 items to cover all constructs and elements of PCK mentioned in the previous literature. As Gogol and colleagues (2014) suggested, each subscale consists of at least three items to ensure reliability, a practice also found in many studies of instrument development in science education (e.g., Schmidt et al., 2009; Zheng et al., 2014; Hayes et al., 2016). All items were presented on a five-point Likert scale (1 = strongly disagree to 5 = strongly agree). A response of “5 strongly agree” indicates that the teacher considers himself or herself to possess sufficient PCK for teaching CCCs, whereas a response of “1 strongly disagree” indicates that the teacher does not consider himself or herself able to teach chemistry core competencies well. The sample items in each subscale can be seen in Table 2. Based on the context of CCCs, the five subscales are described as follows.
• Knowledge of orientation to teaching and learning chemistry (KOTLC). Following Park and Oliver's (2008b) description, this refers to teachers’ beliefs about the purposes and goals of teaching chemistry for developing students’ CCCs, especially CCC5: Scientific Attitude and Social Responsibility. CCC5 emphasizes that the purpose of learning chemistry is no longer only about understanding chemistry content; it shifts toward understanding the nature of chemistry, the intersection between chemistry and society, and daily-life decision-making using chemistry knowledge. Correspondingly, chemistry teachers should know the nature of chemistry and the purpose and value of learning it. Thus, three items were designed in the KOTLC subscale.
• Knowledge of chemistry curriculum (KCC). This subscale mainly focuses on teachers’ knowledge of the curriculum materials available for teaching topic-specific content to help students attain their CCCs (Grossman, 1990). For instance, when teaching “Chemical Reaction and Energy Transfer”, an experienced teacher should identify the main CCCs of the topic, which at the current level can be “CCC2: Changes and Equilibrium” and “CCC3: Evidence-based Reasoning and Modeling”, and needs to know the complete requirements of CCC2 and CCC3. To this end, chemistry teachers should know well the vertical and horizontal requirements of CCCs for a content-specific topic in the chemistry curriculum standards. The KCC subscale consisted of three items.
• Knowledge of instructional strategies for teaching chemistry (KISTC). Subject-specific strategies and topic-specific strategies (Magnusson et al., 1999) were adopted in this subscale to explore chemistry teachers’ perceptions of their instructional strategies for teaching core competence-based chemistry. In the 2017 SHSCCS, several instructional strategies for a given topic are provided as suggestions for teachers’ reference. For teaching “Atomic Structure and Periodic Table”, the main CCCs can be identified as “CCC1: Macroscopic Identification and Microscopic Analysis” and “CCC4: Scientific Inquiry and Innovation”. To develop students’ CCCs, the 2017 SHSCCS provides instructional strategies such as “investigation and group discussion about the trends of elements in the third period” for CCC1 and “design and creation of your own periodic table” for CCC4. Thus, to address the main CCCs of a given topic, teachers should know how to select instructional strategies properly. Accordingly, three items were included in the KISTC subscale.
• Knowledge of assessment of chemistry learning (KACL). This subscale explores teachers’ perceptions of their knowledge on diagnostic, formative, and summative assessments toward students’ development of CCCs. The 2017 SHSCCS suggests that all types of assessments align with the standards of the development levels of CCCs. Moreover, teachers should select the types of assessments properly for assessing different CCCs. Performance assessments of classroom activities can be employed to assess students’ practice-based CCCs (CCC3 and CCC4); paper–pencil tests are favored methods for assessing students’ core ideas-based CCCs (CCC1 and CCC2); moreover, students’ learning portfolio is regarded as a comprehensive tool for understanding the development of students’ CCCs. The KACL subscale consists of three items.
• Knowledge of students’ understanding of chemistry (KSUC). This subscale refers to teachers’ perceptions of their knowledge on students’ conceptions of particular topics, learning difficulties, motivation, interest, learning style, and developmental level (Park and Oliver, 2008b). After targeting the main CCCs for a particular topic, teachers should know students’ current levels of the CCCs, the potential difficulties for students’ development of the CCCs, and how to engage students in classroom activities. Those aspects of students’ learning are vital to explain students’ developmental traits of CCCs. Therefore, five items were designed in the KSUC subscale.
Regarding the pre-service teachers from this normal university, the undergraduates had already finished their compulsory courses in chemistry education theory and micro-teaching practice but did not have in situ classroom teaching experience in secondary schools. In comparison, all graduate students were pursuing master's degrees in chemistry education and had in situ classroom teaching practice in secondary schools during their two-year graduate programs. All in-service chemistry teachers were practice mentors of pre-service chemistry teachers from the same university. As a benefit of serving as practice mentors, these in-service chemistry teachers can attend a series of professional development training programs provided by the university. As part of the UGS initiative for practicum placements, local educational governments and schools have signed commitments that allow university researchers to collect data for research purposes. No further local procedures were required in this study. All teachers voluntarily participated in the study and gave permission for their data to be used.
To meet the purpose of this study, we collected data in two periods. We distributed a paper version of the questionnaire to in-service teachers who attended our PD training programs; in-service teachers in the UGS practicum placements who did not attend the PD training were not included in this study. To collect pre-service teachers’ responses, we designed and distributed an online version of the questionnaire. Even so, a few pre-service teachers declined the invitation because they were job hunting. In the pilot stage, 107 chemistry teachers were involved in validating and revising the initial survey. In the next stage, 103 more teachers were added to form the final data set of 210 chemistry teachers. The demographics of the participants can be seen in detail in Table 1. In particular, 81.4% (n = 171) were female teachers and 18.6% were male teachers, aged 24 to 55. 60% of the teachers (n = 126) had no more than 5 years of teaching experience, 17.6% (n = 37) had 6 to 15 years of teaching experience, and the other 22.4% (n = 47) had at least 16 years of teaching experience. Of the 210 chemistry teachers, 57.1% (n = 120) were pre-service teachers from a normal university, while 42.9% (n = 90) were in-service teachers from regional PD training programs in mainland China. Among the in-service teachers, 44 came from urban schools and the others (n = 46) came from suburban and rural areas.
Subscales | Number of items | Sample items |
---|---|---|
Knowledge of Orientation to Teaching and Learning Chemistry (KOTLC) | 3 | KOTLC1: I believe the purpose of learning chemistry is to assist students’ view of the natural world through the perspective of chemistry. |
Knowledge of Chemistry Curriculum (KCC) | 3 | KCC1: I can organize curriculum materials properly to help students gain specific chemistry core competencies. |
Knowledge of Instructional Strategies for Teaching Chemistry (KISTC) | 3 | KISTC1: I can select topic-specific representation strategies properly to help students form chemistry core competencies. |
Knowledge of Assessment of Chemistry Learning (KACL) | 3 | KACL1: I can evaluate students’ chemistry learning through the perspective of chemistry core competencies. |
Knowledge of Students’ Understanding in Chemistry (KSUC) | 5 | KSUC1: I know well about students’ misconceptions of chemistry knowledge and the previous level of chemistry core competencies. |
 | Pilot test | | Field test | |
---|---|---|---|---|
 | Frequency | Percent (%) | Frequency | Percent (%) |
Note: teachers with zero teaching years represent pre-service chemistry teachers who did not have any long-term career teaching experience; teachers from the University area in the school-region category represent pre-service teachers who came from normal universities. | ||||
Gender | ||||
Male | 17 | 15.9 | 39 | 18.6 |
Female | 90 | 84.1 | 171 | 81.4 |
Total | 107 | 100 | 210 | 100 |
Age | ||||
<26 | 42 | 39.3 | 117 | 55.7 |
26–35 | 17 | 15.9 | 27 | 12.9 |
36–45 | 36 | 33.6 | 50 | 23.8 |
>45 | 12 | 11.2 | 16 | 7.6 |
Total | 107 | 100 | 210 | 100 |
Teaching years | ||||
0–5 | 48 | 44.9 | 126 | 60 |
6–15 | 24 | 22.4 | 37 | 17.6 |
>15 | 35 | 32.7 | 47 | 22.4 |
Total | 107 | 100 | 210 | 100 |
PD stage | ||||
In-service | 45 | 42.1 | 90 | 42.9 |
Pre-service | 62 | 57.9 | 120 | 57.1 |
Total | 107 | 100 | 210 | 100 |
School region | ||||
Urban area | 21 | 19.6 | 44 | 21.0 |
Suburban and rural area | 41 | 38.3 | 46 | 21.9 |
University area | 45 | 42.1 | 120 | 57.1 |
Total | 107 | 100 | 210 | 100 |
Item analysis is a fundamental task for testing the extent to which success on an item corresponds to success on the whole test (Teo, 2010; Chiang and Liu, 2014). Item convergent validity was examined by calculating point-biserial correlations between each item score and the total score (Ferketich, 1991), with an acceptable range of 0.3 to 0.7. Item discriminant validity was examined by testing the mean difference between extreme groups: an independent-sample t-test between the upper and lower groups (the upper and lower 27 percent of total scores) was used to examine the acceptability of item discrimination. The 27 percent criterion is used because “this value will maximize differences in normal distributions while providing enough cases for analysis” (Wiersma and Jurs, 1990, p. 145). Items with no significant differences between the upper 27 percent and lower 27 percent are considered to have poor discrimination (Turkan and Liu, 2012; Chiang and Liu, 2014). Convergent and discriminant validity are also confirmed if higher correlations are found between items within a particular construct and lower correlations between items across constructs (Trochim and Donnelly, 2006). Items that failed to meet the criteria above were excluded from the subsequent analysis.
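The item-level statistics described above can be sketched as follows. This is an illustrative implementation on synthetic Likert responses; the variable names and simulated data are ours, not the study's.

```python
# Item analysis sketch: corrected item-total correlations and 27% extreme
# groups (Wiersma and Jurs, 1990), on synthetic 210 x 17 Likert data.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(210, 17)).astype(float)  # responses 1..5

def corrected_item_total(scores):
    """Correlation of each item with the total of the remaining items."""
    total = scores.sum(axis=1)
    return np.array([np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]
                     for j in range(scores.shape[1])])

def extreme_groups(scores, frac=0.27):
    """Split respondents into upper/lower 27% groups on the total score."""
    total = scores.sum(axis=1)
    k = int(round(frac * len(total)))
    order = np.argsort(total)
    return scores[order[-k:]], scores[order[:k]]  # (upper, lower)

r = corrected_item_total(scores)          # compare each value against 0.3-0.7
upper, lower = extreme_groups(scores)     # feed these to an independent t-test
```

An item whose corrected item-total correlation falls outside 0.3–0.7, or whose upper/lower group means do not differ significantly, would be flagged for removal.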
Confirmatory factor analysis (CFA) is a particular application of structural equation modeling (SEM). As a SEM-based method, CFA enables the estimation and analysis of latent variables using SEM's mathematical principles and statistical procedures. In contrast to exploratory factor analysis (EFA), CFA aims to confirm a particular pattern of relationships predicted from theory or previous analytic results (Kline, 2015; DeVellis, 2016). This study constructed a five-factor model based on the five notable PCK constructs in the previous literature (Magnusson et al., 1999; Park and Oliver, 2008a, 2008b; Park and Chen, 2012). CFA was employed to provide a statistical criterion for evaluating how well the real data fit the five-factor model. Moreover, the CFA procedure facilitates the evaluation of convergent and discriminant validity, reliability, and the quality of individual items. We utilized multiple types of indices to evaluate the overall goodness of fit of the five-factor model, including absolute fit indices, relative fit indices, and parsimony fit indices (Schreiber et al., 2006). All fit indices and their cut-off criteria are summarized in Table 3. After item analysis and the CFA test, we finalized the instrument with its constructs and associated items. Correlations between subscale scores and the total score were then computed to further confirm the convergent and discriminant validity of the instrument (Trochim and Donnelly, 2006). Cronbach's alpha values for each subscale and the total instrument were reported as evidence of the internal consistency reliability of the instrument.
To answer RQ2, we conducted independent t-tests and ANOVA tests to examine differences among demographic groups and thereby establish the concurrent validity of the instrument (e.g., Velayutham et al., 2011). Concurrent validity examines whether each construct can distinguish between groups that should be theoretically distinguishable (Trochim and Donnelly, 2006). Mean differences in each subscale score and the total score were tested between professional levels, teaching-experience groups, and school-region groups.
Version 22.0 of the Statistical Package for the Social Sciences (SPSS) was adopted for item analysis and descriptive and inferential statistical analysis. The AMOS 22.0 software (Arbuckle, 2013) was used to run the CFA test.
The overall goodness-of-fit tests for the initial and final models are presented in Table 3. According to the baseline criteria for those indices (Schreiber et al., 2006), some critical fit indices for the initial model did not meet the criteria. The root mean square error of approximation (RMSEA) value was greater than 0.08, indicating an unsatisfactory fit between the initial model and the data. Also, the values of some absolute fit indices (GFI, AGFI) and relative fit indices (NFI, RFI, TLI) were lower than 0.90, even though those values [0.82–0.88] were close to the baseline. These results indicated that the initial model should be revised. We therefore examined the Modification Indices (MIs) from the AMOS output, since large MI values flag parameters whose modification would improve the initial model. The MIs represent the possible reductions in the Chi-square value of the overall model when specific parameters (e.g., residual covariance parameters) are added or an individual item is removed (Chiang and Liu, 2014). We found that the covariances between the residual (e6) and other residuals (e11, e9, and e3) were higher than 10, indicating that the item KSUC3 might not be a clear indicator in the models (Hu and Bentler, 1999; Schreiber et al., 2006). We dropped the item KSUC3 and constructed a revised model as the final model, consisting of five factors with 16 items. We reran the CFA analysis using the same data set and found that all indices improved considerably; in particular, the RMSEA value (0.078) was smaller than 0.08. In the initial model, the Chi-square value equaled 268.40 (df = 109); after modification, the Chi-square value decreased significantly to 214.62 (df = 94). The Chi-square/df value (2.28) was smaller than 3. The GFI (0.89) and AGFI (0.84) were acceptable for absolute fit indices, and the SRMR and RMSEA indices were good.
For relative fit indices, three of them (IFI, TLI, and CFI) were good enough (larger than 0.90), whereas the other two (NFI and RFI) were just acceptable (0.87 and 0.83). All three parsimony fit indices (PGFI, PNFI and PCFI) were good (larger than 0.50). More detailed information is shown in Table 3. In all, it shows that the majority of the indices present acceptable or good goodness-of-fit in the final model.
Overall goodness-of-fit indexes | Criteria | Initial model | Final model | Evaluation results |
---|---|---|---|---|
Notes: df, degrees of freedom; GFI, goodness-of-fit index; AGFI, adjusted goodness-of-fit index; SRMR, standardized root mean square residual; RMSEA, root mean square error of approximation; NFI, normed fit index; RFI, relative fit index; IFI, incremental fit index; TLI, Tucker–Lewis index; CFI, comparative fit index; PGFI, parsimony goodness-of-fit index; PNFI, parsimony normed fit index; PCFI, parsimony comparative fit index. | ||||
Absolute fit indices | ||||
Chi-square (χ2) | p > 0.05 | 268.40 | 214.62 | Poor |
df | — | 109 | 94 | — |
GFI | ≥0.90 | 0.87 | 0.89 | Acceptable |
AGFI | ≥0.90 | 0.82 | 0.84 | Acceptable |
SRMR | ≤0.08 | 0.036 | 0.035 | Good |
RMSEA | ≤0.08 | 0.084 [0.071, 0.096] | 0.078 [0.065, 0.092] | Good |
Relative fit indices | ||||
NFI | ≥0.90 | 0.86 | 0.87 | Acceptable |
RFI | ≥0.90 | 0.82 | 0.83 | Acceptable |
IFI | ≥0.90 | 0.91 | 0.92 | Good |
TLI | ≥0.90 | 0.88 | 0.90 | Good |
CFI | ≥0.90 | 0.91 | 0.92 | Good |
Parsimony fit indices | ||||
PGFI | ≥0.50 | 0.62 | 0.62 | Good |
PNFI | ≥0.50 | 0.69 | 0.68 | Good |
PCFI | ≥0.50 | 0.73 | 0.72 | Good |
χ2/df | ≤3 | 2.46 | 2.28 | Good |
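For readers who wish to check the arithmetic, the RMSEA and Chi-square/df values reported for the final model can be recomputed from the Chi-square statistics in Table 3, assuming the standard RMSEA formula with n = 210:

```python
# Recomputing two fit indices for the final model from Table 3
# (chi-square = 214.62, df = 94, n = 210); standard RMSEA formula assumed.
from math import sqrt

def rmsea(chi2, df, n):
    # RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1)))
    return sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

chi2, df, n = 214.62, 94, 210
print(round(rmsea(chi2, df, n), 3))  # prints 0.078, as reported for the final model
print(round(chi2 / df, 2))           # prints 2.28, below the cut-off of 3
```

The same formula applied to the initial model (268.40, df = 109) reproduces the reported 0.084, confirming the decision to revise the model.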
Factors | Items | Standardized factor loading | Variance explained | CR | AVE |
---|---|---|---|---|---|
Note: CR represents composite reliability; AVE represents average variance extracted. | |||||
KACL | KACL1 | 0.57 | 0.33 | 0.78 | 0.55 |
KACL2 | 0.79 | 0.62 | |||
KACL3 | 0.84 | 0.70 | |||
KSUC | KSUC1 | 0.75 | 0.57 | 0.79 | 0.48 |
KSUC2 | 0.58 | 0.34 | |||
KSUC4 | 0.73 | 0.53 | |||
KSUC5 | 0.70 | 0.48 | |||
KCC | KCC1 | 0.72 | 0.51 | 0.85 | 0.65 |
KCC2 | 0.83 | 0.69 | |||
KCC3 | 0.86 | 0.73 | |||
KISTC | KISTC1 | 0.85 | 0.72 | 0.84 | 0.64 |
KISTC2 | 0.88 | 0.78 | |||
KISTC3 | 0.63 | 0.40 | |||
KOTLC | KOTLC1 | 0.73 | 0.54 | 0.74 | 0.49 |
KOTLC2 | 0.81 | 0.66 | |||
KOTLC3 | 0.52 | 0.27 |
Table 4 presents the summary of the CFA analysis of the final model. The standardized factor loadings of all items were greater than 0.45, revealing that the constructs showed good convergent validity (Hair et al., 2010). We calculated the average variance extracted (AVE) values for each construct to examine whether the items contribute more than the errors to the factors (Fornell and Larcker, 1981). The cut-off value for the AVE is 0.50. We found that the AVEs of three factors (KACL, KCC and KISTC) were higher than 0.50, and those of the KSUC (0.48) and KOTLC (0.49) factors were very close to the baseline. The composite reliability (CR) values of all factors ranged between 0.74 and 0.85, exceeding the criterion value of 0.70 (Hair et al., 2010).
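As a check on the arithmetic, the CR and AVE values in Table 4 can be recomputed from the standardized loadings using the formulas of Fornell and Larcker (1981), shown here for the KACL factor:

```python
# Composite reliability (CR) and average variance extracted (AVE) from
# standardized loadings, per Fornell and Larcker (1981).
def composite_reliability(loadings):
    s = sum(loadings)
    errors = sum(1 - l ** 2 for l in loadings)  # residual variances
    return s ** 2 / (s ** 2 + errors)

def average_variance_extracted(loadings):
    return sum(l ** 2 for l in loadings) / len(loadings)

kacl = [0.57, 0.79, 0.84]  # KACL loadings from Table 4
print(round(composite_reliability(kacl), 2))       # prints 0.78, as in Table 4
print(round(average_variance_extracted(kacl), 2))  # prints 0.55, as in Table 4
```

Substituting the loadings of any other factor in Table 4 reproduces its CR and AVE in the same way.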
Subscale | Item no. | Cronbach alpha | Correlations | ||||
---|---|---|---|---|---|---|---|
KACL | KSUC | KCC | KISTC | KOTLC | |||
Note: * p < 0.05, ** p < 0.01, *** p < 0.001. |||||||
KACL | 3 | 0.77 | 1 | ||||
KSUC | 4 | 0.79 | 0.62** | 1 | |||
KCC | 3 | 0.84 | 0.52** | 0.58** | 1 | ||
KISTC | 3 | 0.83 | 0.46** | 0.54** | 0.63** | 1 | |
KOTLC | 3 | 0.72 | 0.28** | 0.44** | 0.35** | 0.31** | 1 |
Total | 16 | 0.90 | 0.76** | 0.83** | 0.82** | 0.77** | 0.62** |
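The Cronbach's alpha reliabilities in the table above follow the standard variance-decomposition formula; a minimal sketch on synthetic Likert responses (the actual teacher data are not reproduced here):

```python
# Cronbach's alpha on synthetic item responses generated from a shared trait,
# illustrating the internal-consistency computation used for each subscale.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of Likert responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total score
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(size=(210, 1))                   # shared latent "trait"
noise = rng.normal(scale=0.8, size=(210, 3))
items = np.clip(np.rint(3 + trait + noise), 1, 5)   # 3-item subscale, 1..5
alpha = cronbach_alpha(items)
```

Because the three simulated items share a common trait, alpha comes out well above zero; on uncorrelated items it would be near zero.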
In sum, we developed and finalized the PCK_CCC instrument consisting of five constructs with 16 items. The above results provide sufficient evidence to support the convergent and discriminant validity and the reliability of the final instrument for measuring chemistry teachers’ perceptions of their PCK for teaching CCCs.
Subscale | Item no. | Counts | Mean | Min | Max | SD |
---|---|---|---|---|---|---|
KACL | 3 | 210 | 3.81 | 1.33 | 5.00 | 0.60 |
KSUC | 4 | 210 | 3.87 | 2.25 | 5.00 | 0.57 |
KCC | 3 | 210 | 3.61 | 1.33 | 5.00 | 0.66 |
KISTC | 3 | 210 | 3.78 | 2.00 | 5.00 | 0.58 |
KOTLC | 3 | 210 | 4.26 | 2.67 | 5.00 | 0.57 |
Total | 16 | 210 | 3.87 | 2.38 | 5.00 | 0.46 |
Subscale | Mean and standard deviation (SD) | t value | Cohen's d | |
---|---|---|---|---|
In-service (n = 90) | Pre-service (n = 120) | |||
Note: *p < 0.05, **p < 0.01, ***p < 0.001. | ||||
KACL | 3.92 (0.60) | 3.73 (0.59) | 2.380* | 0.32 |
KSUC | 4.06 (0.58) | 3.73 (0.52) | 4.329*** | 0.60 |
KCC | 3.79 (0.68) | 3.48 (0.62) | 3.340** | 0.48 |
KISTC | 3.84 (0.57) | 3.74 (0.59) | 1.243 | 0.17 |
KOTLC | 4.42 (0.54) | 4.13 (0.57) | 3.783*** | 0.52 |
Total | 4.01 (0.48) | 3.76 (0.41) | 3.982*** | 0.56 |
Subscale | Mean and standard deviation (SD) | F value | Post hoc | ||
---|---|---|---|---|---|
(1) Year 0–5 (N = 126) | (2) Year 6–15 (N = 37) | (3) Year 16 + (N = 47) | |||
Note: *p < 0.05, **p < 0.01, ***p < 0.001. | |||||
KACL | 3.74 (0.60) | 3.87 (0.64) | 3.97 (0.55) | 2.091 | |
KSUC | 3.75 (0.52) | 4.01 (0.64) | 4.08 (0.55) | 7.673** | (3) > (1) |
(2) > (1) | |||||
KCC | 3.49 (0.62) | 3.68 (0.68) | 3.88 (0.69) | 6.253** | (3) > (1) |
KISTC | 3.74 (0.58) | 3.76 (0.57) | 3.90 (0.59) | 1.327 | |
KOTLC | 4.14 (0.57) | 4.41 (0.59) | 4.45 (0.48) | 7.178** | (3) > (1) |
(2) > (1) | |||||
Total | 3.77 (0.41) | 3.95 (0.52) | 4.06 (0.45) | 7.856** | (3) > (1) |
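The F ratios for these group comparisons can likewise be reconstructed from the group means, SDs, and sample sizes (a sketch; the function name is ours, and the minor deviation from the reported 7.673 for KSUC comes from rounding in the table):

```python
def oneway_f_from_summaries(groups):
    """One-way ANOVA F ratio computed from (mean, sd, n) summaries per group."""
    total_n = sum(n for _, _, n in groups)
    grand = sum(m * n for m, _, n in groups) / total_n
    ss_between = sum(n * (m - grand) ** 2 for m, _, n in groups)
    ss_within = sum((n - 1) * sd ** 2 for _, sd, n in groups)
    ms_between = ss_between / (len(groups) - 1)
    ms_within = ss_within / (total_n - len(groups))
    return ms_between / ms_within

# KSUC across teaching-year groups: 0-5, 6-15, 16+
f = oneway_f_from_summaries([(3.75, 0.52, 126), (4.01, 0.64, 37), (4.08, 0.55, 47)])
print(round(f, 2))  # approximately the reported 7.673
```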
Subscale | Mean and standard deviation (SD) | F value | Post hoc | ||
---|---|---|---|---|---|
(1) University (N = 120) | (2) Suburban and rural (N = 46) | (3) Urban (N = 44) | |||
Note: *p < 0.05, **p < 0.01, ***p < 0.001. | |||||
KACL | 3.72 (0.59) | 3.77 (0.63) | 4.09 (0.53) | 6.286** | (3) > (1) |
(3) > (2) | |||||
KSUC | 3.73 (0.52) | 3.79 (0.49) | 4.34 (0.54) | 23.171*** | (3) > (1) |
(3) > (2) | |||||
KCC | 3.48 (0.62) | 3.80 (0.70) | 3.77 (0.66) | 5.568** | (3) > (1) |
(2) > (1) | |||||
KISTC | 3.74 (0.59) | 3.83 (0.56) | 3.84 (0.60) | 0.771 | |
KOTLC | 4.13 (0.57) | 4.27 (0.50) | 4.58 (0.53) | 11.152*** | (3) > (1) |
(3) > (2) | |||||
Total | 3.76 (0.41) | 3.89 (0.49) | 4.12 (0.45) | 11.398*** | (3) > (1) |
(3) > (2) |
In the development stage, the content validity of the instrument was established by carefully examining the theoretical foundation of its constructs (Magnusson et al., 1999; Park and Oliver, 2008a) and by aligning the item statements with the notion of CCCs (Ministry of Education, P. R. China, 2017). In line with previous instrument-development studies (e.g., Velayutham et al., 2011; Zheng et al., 2014), a review panel with diverse expertise guaranteed the face validity of the instrument. As Park and Suh (2019) claimed, the PCK pentagon model can serve as an analytical tool for capturing and displaying the abstract and complex constructs of PCK, so it is suitable for this study to adopt the pentagon model to frame the constructs of the instrument and the attributes of the items in each construct. Reflecting on the framework of the 2019 RCM (Hume et al. (ed.), 2019), this study investigated chemistry teachers’ perceptions of collective PCK in the subject domain of teaching chemistry core competencies. Teacher perception of PCK for teaching CCCs should be treated as collective PCK because this knowledge has been articulated and is shared among a group of professionals (Carlson and Daehler, 2019). In this study, that group includes the lead writer of the 2017 SHSCCS and an expert panel of chemistry educators, expert teachers, pre-service teachers, and in-service teachers, who worked together to contribute to the combined professional knowledge base for teaching a given subject matter (CCCs).
Regarding RQ1, we employed item analysis, CFA, and correlation analysis to establish the convergent and discriminant validity of the new instrument in the pilot and field test stages. Results from these analyses provided sufficient evidence at both the item and construct levels to support the overall structure of the instrument and the quality of individual items and constructs. The overall goodness-of-fit indices from the CFA served as further empirical support for the utility of the PCK pentagon model. Notably, item KSUC3 was deleted in the course of the CFA to achieve a better model fit. Research indicates that teacher perception of student motivation is multi-dimensional and therefore hard to capture with a single item (Martin, 2006; Hardre et al., 2008). Thus, item KSUC3 was not included in the final instrument. From the correlation analysis, we confirmed that the five subscales were closely related to each other, consistent with previous studies of the interactions between PCK components (Park and Oliver, 2008b; Park and Chen, 2012).
However, one concern that may threaten the discriminant validity of the new instrument should be noted. On theoretical grounds, the factors should be moderately correlated (Field, 2009); correlation coefficients above 0.80 imply conceptual overlap and point towards poor discriminant validity (Brown, 2006). The correlations between KCC and KISTC (0.63) and between KACL and KSUC (0.62) were strong even though they met the discriminant validity requirements. There is a debate about the connection between knowledge of the science curriculum and other PCK components. Our findings are consistent with previous studies (Arzi and White, 2007; Cohen and Yarden, 2009). However, other research found that knowledge of the science curriculum had the most limited connection with the other components (Park and Chen, 2012). Park and Chen (2012) proposed a plausible explanation: the teachers in their study held a narrow view of the curriculum as a collection of topics. In this study, the instrument addressed teaching CCCs in general rather than in a topic-specific manner. Unlike in some other countries, Chinese science textbooks are designed to align with the curriculum standards, so teachers in China can readily connect their knowledge of curriculum and textbooks to their knowledge of instructional strategies. In other words, under the current Chinese curriculum reform, teachers tend to draw on the classroom activities and suggested lesson sequences in science textbooks, which ties their selection of instructional strategies closely to the curriculum. Likewise, the knowledge of assessment of chemistry learning (KACL) was tightly integrated with the knowledge of students’ understanding of chemistry (KSUC). As Park and Chen (2012) suggested, this connection indicates that when chemistry teachers considered assessment, they were likely to align it with student learning.
Regarding RQ2, the concurrent validity of the instrument was determined by differentiating teachers’ perceptions of their PCK for teaching CCCs between demographic groups. In-service teachers scored significantly higher than pre-service teachers, congruent with the literature (e.g., Aydeniz and Kirbulut, 2014; Meschede et al., 2017). Understandably, pre-service teachers have less teaching experience than in-service teachers and thus can hardly handle the knowledge of learners and of instruction for teaching chemistry well. Given the core courses in pre-service teacher education programs, however, they had learned the knowledge of orientation to teaching and learning chemistry as in-service teachers had, especially regarding the new chemistry curriculum reform.
The differences among teaching-year groups show that teachers with more than 16 years of teaching experience are more likely to accommodate core competency-based classroom teaching than teachers with 0–5 or 6–15 years of experience. We speculate on several reasons for this observation. First, teachers with more experience are more likely to be lead teachers in their schools, with more opportunities to attend local and national professional development programs. Second, the more teaching experience teachers have, the more confident they are about their PCK (Koh et al., 2014; Saltan and Arslan, 2017). These findings may interest chemistry education and teacher education researchers: professional development programs for facilitating teachers’ implementation of CCCs should be designed and conducted with different levels of teaching experience in mind.
The differences among school-region groups indicate that teachers from urban areas perceive their PCK for teaching CCCs significantly more highly than teachers from suburban and rural areas and from universities. Potential reasons are supported by previous studies of Chinese teacher PD programs: teachers from suburban or rural areas have fewer opportunities to attend professional development programs than urban teachers (Sargent and Hannum, 2009); they face a time dilemma in implementing innovative teaching (Wang, 2011); and a gap in teacher quality and teacher development exists between rural and urban areas (Peng et al., 2014). The competence-based chemistry curriculum standards were designed to promote all students’ development of CCCs in China, and equity in facilitating teachers’ professional learning is vital to equity for all students. Thus, national and local educational departments should adopt policies that provide more opportunities for teachers from suburban and rural areas.
The findings of this study also contribute new insight into testing the refined consensus model of PCK (Carlson and Daehler, 2019), especially for discipline-specific collective PCK (e.g., a chemistry core competency-based curriculum) in a particular learning context (e.g., the Chinese educational background). The findings can be used to test the RCM by expanding its scope to teaching a competency-based chemistry curriculum. Future studies of personal and enacted PCK at the grain size of topic-specific and concept-specific domains could further test the RCM.
Because the instrument captures the perspective of teacher perception, it should be supplemented and corroborated by other data sources, such as classroom observation, in-depth interviews, and paper–pencil tests. Chemistry teachers’ knowledge and enacted practice of PCK for teaching CCCs in both the personal and enacted realms (Carlson and Daehler, 2019; Chan et al., 2019) could then be described and interpreted comprehensively using multiple measures, and the components and integration of teachers’ knowledge and skills of PCK for teaching CCCs could be further examined. In so doing, teachers’ perceptions, knowledge, and skills would provide a holistic view of their PCK for teaching core competency-based chemistry.
Constructs and aspectsa | Annotations and adaptions for teaching CCCsb | The statements of itemsc |
---|---|---|
Note: aThe constructs and aspects of PCK were adapted directly from the Pentagon model in Park and Oliver (2008a, 2008b). bThe “Annotations from literature” serve as elaborations of individual PCK aspects; the “Adaptions from the 2017 SHSCCS” provide evidence for contextualizing the individual PCK aspects in the notion of the 2017 SHSCCS; the 2017 SHSCCS is short for the Senior High School Chemistry Curriculum Standards (Ministry of Education, P. R. China, 2017). cThe statements of 17 items were designed for the initial instrument; except item *KSUC3, all other 16 items were included in the final instrument. | ||
Subscale 1: Knowledge of Orientation to Teaching and Learning Chemistry (KOTLC) | ||
– Beliefs about purposes of learning science (Grossman 1990; Park and Oliver, 2008a) | Adaption from the 2017 SHSCCS: “Chemistry is playing an increasingly important role in promoting the sustainable development of human civilization and is the core force that reveals the mysteries of elements to life.” (p. 1) | KOTLC1: I believe the purpose of learning chemistry is to assist students’ view of the natural world through the perspective of chemistry. |
– Decision making in teaching (Grossman 1990; Park and Oliver, 2008a) | Adaption from the 2017 SHSCCS: “Chemistry core ideas are essential scientific literacies for all high school students, which prompt students’ life-span learning and development.” (p. 1) | KOTLC2: I believe the purpose of teaching chemistry is to help students better understand chemistry core ideas and their value for personal development. |
– Beliefs about the nature of science (Grossman 1990; Park and Oliver, 2008a) | Adaption from the 2017 SHSCCS: “The essence of chemistry discipline is to know and create chemical substances.” (p. 1) | KOTLC3: I believe the nature of chemistry is to know chemical substances, and further to create chemical substances. |
Subscale 2: Knowledge of Chemistry Curriculum (KCC) | ||
– Curricular materials (Grossman 1990; Park and Oliver, 2008a; Aydin et al., 2014) | Adaption from the guideline in the 2017 SHSCCS: “The organization and structure of chemistry curriculum should be designed aligned with the 2017 SHSCCS for prompting students development of chemistry core competencies.” (p. 82) | KCC1: I can organize curriculum materials properly to help students gain specific chemistry core competencies. |
– Vertical curriculum (Grossman 1990; Park and Oliver, 2008a; Aydin et al., 2014) | Annotations from literature: Organization of curriculum contents according to the sequence and continuity of learning within a given knowledge domain or subject over time. (Grossman, 1990). | KCC2: I can handle well the progression of one specific chemistry core competency among different grade levels. |
Adaption from the 2017 SHSCCS: For instance, one specific chemistry core competency (e.g., MIMA) is articulated across different grade levels. | ||
– Horizontal curriculum (Grossman 1990; Park and Oliver, 2008a; Aydin et al., 2014) | Annotations from literature: The scope and integration of curricular contents across different topics within a particular grade level. (Grossman, 1990). | KCC3: I can handle well the coherence of different specific chemistry core competencies in a certain grade level. |
Adaption from the 2017 SHSCCS: For instance, several chemistry core competencies (e.g., MIMA and ERM) are integrated in the same compulsory course. | ||
Subscale 3: Knowledge of Assessment of Chemistry Learning (KACL) | ||
– Dimensions of science learning to assess (Champagne, 1989; Tamir, 1988; Magnusson et al., 1999; Park and Oliver, 2008a) | Annotations from literature: “This category refers to teachers’ knowledge of the aspects of students’ learning that are important to assess within a particular unit of study…the dimensions upon which teacher knowledge in this category is based are those of scientific literacy.” (Magnusson et al., 1999) | KACL1: I can evaluate students’ chemistry learning through the perspective of chemistry core competencies. |
Adaption from the 2017 SHSCCS: states the “Performance Requirements” of chemistry core competencies for each topic of chemistry learning. | ||
– Methods of assessing science learning (Tamir, 1988; Magnusson et al., 1999; Park and Oliver, 2008a) | Adaption from the 2017 SHSCCS: Summative assessments (NRC, 2006; 2014) such as activity performance rubrics, paper–pencil tests and portfolios are recommended for assessing students’ chemistry academic achievement in the 2017 SHSCCS. (p. 74). | KACL2: I can employ different ways to diagnose and assess students’ chemistry academic achievement. |
Adaption from the 2017 SHSCCS: Formative assessments (NRC, 2006; 2014) such as classroom questioning and feedback, homework and quiz are recommended for assessing students’ chemistry core competencies (p. 75). | KACL3: I can employ different ways to diagnose and assess students’ chemistry core competencies. | |
Subscale 4: Knowledge of Instructional Strategies for Teaching Chemistry (KISTC) | ||
– Topic-specific strategies: representations (Magnusson et al., 1999; Park and Oliver, 2008a) | Annotations from literature: Topic-specific strategies of representations can be analogies, models, illustrations and examples. (Magnusson et al., 1999; Aydin et al., 2014). | KISTC1: I can select topic-specific representation strategies properly to help students form chemistry core competencies. |
Adaption from the 2017 SHSCCS: provides the section of “Instructional Strategies” for each topic. | ||
– Topic-specific strategies: activities (Magnusson et al., 1999; Park and Oliver, 2008a) | Annotations from literature: Topic-specific strategies of activities can be simulations, demonstrations, and experiments (Magnusson et al., 1999; Aydin et al., 2014). | KISTC2: I can select topic-specific activities properly to help students form chemistry core competencies. |
Adaption from 2017 SHSCCS: provides the section of “Learning Activities” for each topic. | ||
– Subject-specific strategies (Magnusson et al., 1999; Park and Oliver, 2008a) | Annotations from literature: Conceptual change (Tobin, Tippins and Gallard, 1994; Duit and Treagust, 2003), 5E learning cycle (Bybee and Landes, 1990; Duran and Duran, 2004) and project-based learning (PBL) (Blumenfeld et al., 1991; Krajcik and Shin, 2014) are popularly used and examined as efficient teaching methods for promoting students’ science learning. | KISTC3: I can select subject-specific teaching modes (e.g., conceptual change, 5E, PBL) properly to help students form chemistry core competencies. |
Adaption from the 2017 SHSCCS: encourages teachers to enact the new standards by using a diversity of teaching methods (p. 76). | ||
Subscale 5: Knowledge of Students’ Understanding in Chemistry (KSUC) | ||
– Misconceptions (Magnusson et al., 1999; Park and Oliver, 2008a) | Annotations from literature: “This category consists of teachers’ knowledge and beliefs about prerequisite knowledge for learning specific scientific knowledge.” (Magnusson et al., 1999, p. 104). “… students encounter when learning science involves topic areas in which their prior knowledge is contrary to the targeted scientific concepts. Knowledge of this type is typically referred to as misconceptions.” (Magnusson et al., 1999, p. 105). | KSUC1: I know well about students’ misconceptions of chemistry knowledge and their prior level of chemistry core competencies. |
Adaption from the 2017 SHSCCS: students’ misconceptions of chemistry knowledge and their prerequisite knowledge for the later topic should be used to promote students’ chemistry core competencies. | |
– Learning difficulties (Magnusson et al., 1999; Park and Oliver, 2008a) | Annotations from literature: “This category refers to teachers’ knowledge of the science concepts or topics that students find difficult to learn.” (Magnusson et al., 1999; p. 105). | KSUC2: I know well about the student learning difficulty of the current chemistry topics. |
Adaption from the 2017 SHSCCS: To promote the development of students’ chemistry core competencies, the learning difficulty of chemistry concepts must be considered. | |
– Motivation and Interest (Park and Oliver, 2008a) | Annotations from literature: Park and Oliver (2008a) emphasized teacher knowledge of students’ motivation and interests in the component of Knowledge of Students’ Understanding in Science. | * KSUC3: I know well about student motivation and interest in chemistry learning. |
Adaption from the 2017 SHSCCS: underlines the importance of students’ motivation and interest in chemistry learning by engaging them in making sense of real-world phenomena or problem solving. (p. 73–74). | |
– Development levels (Magnusson et al., 1999; Park and Oliver, 2008a) | Annotations from literature: “This category consists of … as well as their understanding of variations in students’ approaches to learning as they relate to the development of knowledge within specific topic areas.” (Magnusson et al., 1999; p. 104). | KSUC4: I know well about student development stages of chemistry core competencies. |
Adaption from the 2017 SHSCCS: outlines the development stages (Level 1–4) of chemistry core competencies in “Appendix 1: The Levels of Chemistry Core Competencies”. (p. 89–92). | ||
– Learning style (Magnusson et al., 1999; Park and Oliver, 2008a) | Annotations from literature: “Teachers’ knowledge of variations in approaches to learning includes knowing how students of differing developmental or ability levels or different learning styles may vary in their approaches to learning as they relate to developing specific understandings.” (Magnusson et al., 1999; p. 104). | KSUC5: I know well about students’ learning styles and abilities in chemistry learning. |
Adaption from the 2017 SHSCCS: emphasizes the importance of students’ psychological cognitive development in promoting the development of their chemistry core competencies in the teacher enactment and curriculum design guidelines. (p. 82).
This journal is © The Royal Society of Chemistry 2021 |