Development and validation of an instrument for measuring Chinese chemistry teachers’ perceptions of pedagogical content knowledge for teaching chemistry core competencies

Peng He *ab, Changlong Zheng *a and Tingting Li bcd
aInstitute of Chemical Education, Northeast Normal University, P. R. China. E-mail: hep905@nenu.edu.cn; zhengcl@nenu.edu.cn
bCREATE for STEM Institute, Michigan State University, USA
cFaculty of Education, Northeast Normal University, P. R. China
dDepartment of Counseling, Educational Psychology, and Special Education, Michigan State University, USA

Received 16th December 2019, Accepted 8th January 2021

First published on 12th January 2021


Abstract

This study aims to develop and validate a new instrument for measuring chemistry teachers’ perceptions of Pedagogical Content Knowledge for teaching Chemistry Core Competencies (PCK_CCC) in the context of new Chinese chemistry curriculum reform. The five constructs and the initial 17 items in the new instrument were contextualized by the PCK pentagon model (Park S. and Oliver J. S., (2008), J. Res. Sci. Teach., 45(7), 812–834.) with the notions of the Senior High School Chemistry Curriculum Standards (Ministry of Education, P. R. China, 2017). 210 chemistry teachers from a University-Government-School initiative voluntarily participated in this study. The findings from item analysis, confirmatory factor analysis and correlation analysis provide sufficient empirical evidence to support the convergent and discriminant validity of the instrument. The concurrent validity of the instrument was confirmed by testing mean differences among teacher demographic groups. The high Cronbach's coefficient alpha values show good internal consistency reliability of the instrument. Integrating the evidence from theory and data, we documented a valid and reliable PCK_CCC instrument with five constructs consisting of 16 items. This study provides a thorough process for developing and validating instruments that address teacher perceptions of their PCK in a particular subject domain. The valid and reliable PCK_CCC instrument would be beneficial for teacher education researchers and teacher professional programs.


Introduction

Knowledge is growing exponentially as technology continually transforms the way we live and work (NRC, 2011). This transformation demands new work practices and new competencies, often referred to as “21st century skills” (Bybee and Fuchs, 2006; Trilling and Fadel, 2009; Binkley et al., 2012; Griffin and Care, 2014). In response, international organizations and institutions, such as the Organization for Economic Cooperation and Development (OECD), have established a variety of core competency frameworks (OECD, 2005; European Parliament, 2007; Ananiadou and Claro, 2009; Partnership for 21st-century skills, 2009). These frameworks have significantly shaped K-12 school curriculum reform in many countries. To keep pace with this international trend, the central government of China has released several milestone documents (The State Council of P. R. China, 2006a, 2006b; Ministry of Science and Technology and Publicity Department, 2016) that take the notion of core competencies as a focal point in its curriculum reform (Wei, 2019). Based on this notion, the 2017 Senior High School Chemistry Curriculum Standards were released (hereafter the 2017 SHSCCS). The fundamental purpose of the 2017 SHSCCS is to cultivate students’ core competencies in secondary school chemistry, called Chemistry Core Competencies (CCCs). With the release of the 2017 SHSCCS, how to teach CCCs has become a major challenge for both in-service and pre-service chemistry teachers in China. Numerous studies have shown that teacher professional knowledge positively affects instructional quality and student learning (Hill et al., 2005; Baumert et al., 2010; Park et al., 2011; Heller et al., 2012; Keller et al., 2017).

In particular, pedagogical content knowledge (PCK) (Shulman, 1986) has been proposed as an essential aspect of teachers’ knowledge (Alonzo et al., 2012) and is widely considered crucial for a teacher's ability to create high-quality instruction. PCK and its components are useful for researchers studying teacher knowledge and practice and play a significant role in defining capable and competent teachers (Toh et al., 2006; van Driel and Berry, 2012; Kulgemeyer and Riese, 2018). Thus, it is essential to assess the development of pre-service and in-service teachers’ PCK. Teacher perception arises from the teacher's set of attitudes, beliefs, and knowledge about his/her personal characteristics and attributes (Shavelson et al., 1976; Ghazvini, 2011). As such, teacher perception of PCK is defined as a teacher's self-evaluation of his or her PCK competencies in a particular teaching context. A large body of research has revealed that teacher perception can be considered a significant indicator of a teacher's PCK (Kuntze et al., 2013; Kavanoz et al., 2015; Großschedl et al., 2019) and an alternative way to measure it (MaKinster et al., 2010). Moreover, research has shown that teachers’ actual and perceived knowledge develop similarly after a workshop designed to increase teaching skills (Barton-Arwood et al., 2005). Teacher perception is also a mediator that plays an integral part in teachers’ professional development and can provide a useful lens for researching it (Kuntze et al., 2013). Research also underscores the importance of understanding teachers’ perceptions of the knowledge needed for teaching in order to support their PCK development (Nilsson, 2008; Nilsson and van Driel, 2011). In this sense, exploring teacher perceptions of their PCK can provide insight into how to support teachers’ self-evaluation of their PCK for teaching chemistry in the context of the 2017 SHSCCS.

Nevertheless, few studies have investigated chemistry teachers’ perceptions of their PCK for teaching CCCs. In the context of the 2017 SHSCCS reform, this study investigates in-service and pre-service chemistry teachers’ perceptions of their PCK in a University-Government-School (UGS) professional learning community (Lv et al., 2016). Our goal is to develop and validate an instrument for determining chemistry teachers’ perceptions of their PCK for teaching CCCs. Such an instrument would help teacher educators monitor the development of teacher professional learning. It could also raise pre-service and in-service teachers’ self-awareness by helping them identify their weaknesses in teaching the competency-based curriculum.

PCK definition and constructs

PCK was first proposed by Shulman (1986) and has been described as a type of knowledge that “represents the blending of content and pedagogy into an understanding of how particular topics, problems, or issues are organized, represented, and adapted to the diverse interests and abilities of learners, and presented for instruction” (Shulman, 1987, p. 8). Based on Shulman's description of PCK, researchers from different areas have conducted numerous studies on developing models of PCK (e.g., Park and Oliver, 2008a; Carlson and Daehler, 2019) and on the relationships between PCK and other knowledge bases (e.g., Jüttner et al., 2013; Kulgemeyer and Riese, 2018), classroom practice (e.g., Barendsen and Henze, 2019; Gess-Newsome et al., 2019), and student outcomes (e.g., Gess-Newsome et al., 2019). To address the divergence in definitions, models, and data collection methods across PCK studies, the 2012 PCK Summit proposed a consensus model of teacher professional knowledge and skills, including PCK and its influences on classroom practice and student outcomes (Berry et al. (ed.), 2015). In the consensus model, researchers agreed to define personal PCK in the context of teaching a specific topic (Gess-Newsome, 2015). This consensus model was tested and further refined at the 2016 second PCK Summit (Hume et al. (ed.), 2019). Addressing the limitations of the 2012 consensus model, the Refined Consensus Model (RCM) distinguishes three realms of PCK (collective, personal, and enacted PCK) and also addresses the grain sizes of science PCK (e.g., discipline, topic, concept) (Carlson and Daehler, 2019). These consensus models (Gess-Newsome, 2015; Carlson and Daehler, 2019) clearly illustrate teacher professional knowledge bases and the layers of PCK within a learning context; however, they do not particularly address the components of PCK.

Regarding the constructs of PCK, numerous studies have investigated its components (e.g., Tamir, 1988; Grossman, 1990; Magnusson et al., 1999). Drawing on those studies, Park and Oliver (2008b) identified five components of PCK and organized them into a pentagon diagram. According to Park and Oliver (2008b), the components of PCK from different conceptualizations can be mainly categorized as orientation toward teaching science, knowledge of students’ understanding in science, knowledge of science curriculum, knowledge of instructional strategies and representations, and knowledge of assessment of science learning. Knowledge of subject matter and knowledge of pedagogy are placed outside of PCK as distinct knowledge bases for teaching (e.g., Park and Oliver, 2008a). Reconciling the consensus model of PCK (Carlson and Daehler, 2019) with extant PCK models, Park (2019) claims that the pentagon model with five components can serve as the conceptual and analytic framework for capturing PCK and the integrations among those components. Regarding this integration, Park and Chen (2012) employed their pentagon PCK model and found that the relationship between knowledge of student understanding and knowledge of instructional strategies and representations was central, and that knowledge of assessment of science learning was most often connected with these two components. Aydin and Boz (2013) likewise found that knowledge of learners and knowledge of instructional strategies are central among the integrations. Using CoRes and PaP-eRs as analytic tools, Padilla and her colleagues (2008) portrayed the integration among PCK components and found two types of integration: synchronization between knowledge of goals and knowledge of instructional strategies, and mutual integration between knowledge of assessment and knowledge of learners.

Measurement of teacher perceptions of PCK

To investigate teachers’ knowledge, skills, and perceptions of their PCK, various approaches, such as rubrics for interviews, artifacts and classroom observations, tests, and questionnaires, are employed depending on the realm and grain size of PCK under study (Carlson and Daehler, 2019). Two measures, semi-structured interviews and questionnaires, are mainly applied for assessing teachers’ perceptions of PCK. For instance, Khakbaz (2016) analysed semi-structured interview data to uncover university mathematics teachers’ perceptions of their PCK. Questionnaires with Likert-type scales have been developed and used to examine teachers’ perceptions of PCK and other associated constructs (e.g., Chai et al., 2011; Koh et al., 2013; Großschedl et al., 2019). However, few previous studies examine chemistry teachers’ perceptions of their PCK, especially in the context of core competency-based curriculum reform.

Teacher demographic variables such as gender, age, teaching experience, and professional stage are considered main factors for understanding teachers’ development of PCK. Koh and her colleagues (2014) examined how demographic variables such as gender, age, teaching experience, and teaching level affect pre-service teachers’ perceptions of PCK, and found that age and gender are not related to these perceptions, whereas teaching experience has a significant impact. Several studies comparing pre-service and in-service teachers found that in-service teachers have a higher level of PCK in different subject contexts (Kleickmann et al., 2013; Liu et al., 2015; Meschede et al., 2017). Moreover, the difference in professional development between teachers in urban and rural areas should also be considered (Soebari and Aldridge, 2016). Using qualitative and quantitative methods, Soebari and Aldridge (2016) found that disparities between teachers in urban and rural schools exist. They also pointed out that teaching practices in rural schools continued to be teacher-centered, primarily because of the limited facilities and resources available (Wahyudi and Treagust, 2004). Given these findings, the demographic variables affecting teachers’ perceptions of PCK in this study should also be investigated to understand their PCK development toward CCC teaching.

Research purposes and questions

This study aims to develop and validate an instrument for measuring chemistry teachers’ perceptions of PCK for teaching high school CCCs in the context of China's new curriculum reform. The instrument was developed under the framework of the PCK pentagon model (Park and Oliver, 2008b) and further validated based on the framework of construct validity (Trochim and Donnelly, 2006).

Given that validity is a comprehensive construct (American Educational Research Association [AERA], American Psychological Association [APA], and National Council on Measurement in Education [NCME], 2014), Trochim and Donnelly (2006) framed construct validity as consisting of translation validity and criterion-related validity. Face and content validity are the two types of translation validity usually examined in the development process. Four types of criterion-related validity, namely convergent and discriminant validity and concurrent and predictive validity, are usually examined in the measurement process. We present face and content validity in the development stage of the instrument. In this study, we particularly conducted and report the validity evidence from the measurement process. The research questions (RQs) are as follows:

RQ1: What is the empirical evidence supporting the convergent and discriminant validity and reliability of the instrument?

RQ2: What is the empirical evidence supporting the concurrent validity of the instrument?

The background of the current study

Chemistry core competencies

In the last round of science curriculum reform (Ministry of Education, P. R. China, 2003), three-dimensional curriculum objectives, Knowledge and Skills, Processes and Methods, and Attitudes and Values, were offered to promote students’ development of scientific literacy. However, the three-dimensional curriculum objectives failed to afford teachers insight into chemistry teaching since they were applied in a general manner to all school subjects (e.g., Mathematics, History, and Literacy). Fifteen years of implementing the 2003 chemistry curriculum showed that the three-dimensional goal system could neither reflect the nature of chemistry nor represent students’ performance in their development of scientific literacy (Fang and Xu, 2018). Moving beyond the three-dimensional objective system, the 2017 SHSCCS proposed a five-component objective system that describes the essential characteristics and critical abilities students should obtain through learning chemistry to ensure their lifelong development and social development (Ministry of Education, P. R. China, 2017).

According to the 2017 SHSCCS, CCCs are designed mainly to align with the nature of chemistry knowledge and practice and with the intrinsic values and unique merits of chemistry. The five main components of chemistry core competencies are: Macroscopic Identification and Microscopic Analysis (MIMA), Changes and Equilibrium (CE), Evidence-based Reasoning and Modeling (ERM), Scientific Inquiry and Innovation (SII), and Scientific Attitude and Social Responsibility (SASR). The first two components represent the big ideas of chemistry, such as structure and properties of matter, chemical reactions, and interaction and energy. The third and fourth components can be attributed to scientific practices: evidence-based reasoning, modeling, argumentation, investigation, questioning, explanation, communication, and innovation. The fifth component, scientific attitude and social responsibility, reflects the attitudes and unique values of chemistry, through which students can make informed personal decisions on socio-scientific issues and actively solve real-world problems related to chemistry. In particular, all statements within the five main components are framed in terms of students applying the knowledge of big ideas, practices, and attitudes and values of chemistry to make decisions on socio-scientific issues or to solve real-world problems.

Pre-service and in-service teacher professional programs

Numerous studies (e.g., Ryder and Banner, 2011; Fensham, 2013; Law, 2014; Yao and Guo, 2018) have shown that transforming the policy curriculum and programmatic curriculum into the classroom curriculum is not easy in China or in other countries. Many factors affect curriculum transformation, such as a lack of qualified teachers, the low status of science courses, and insufficient support for science education research (Yao and Guo, 2018). Among those factors, the shortage of qualified teachers is treated as a tremendous challenge for implementing a new curriculum (Campbell and Hu, 2010; Ye et al., 2019). In 2018, the Central Committee of the Communist Party of China and the State Council of P. R. China (2018) issued the Opinions on Comprehensively Deepening the Reform of Teacher Cohort Construction in the New Era. The Opinions encourage more first-class comprehensive universities to establish colleges of teacher education; strengthen support for normal universities and teacher education colleges to educate pre-service teachers through the Free-tuition Policy Teacher Education Programs (Ministry of Education, P. R. China, 2007) and the Integrated Undergraduate-Graduate Excellent Teacher Development Programs (Ministry of Education, P. R. China, 2014); and continue the Secondary and Primary In-service Teachers National Training Programs (Ministry of Education, P. R. China, 2010). The Opinions further encourage higher education institutions to forge partnerships with schools and local education agencies to strengthen both in-service and pre-service teachers’ professional development.

As a collaboration scheme, the University-Government-School (UGS) initiative for practicum placements provides a professional learning community (PLC) for both in-service and pre-service teachers in Mainland China. Before starting their school practicum, pre-service teachers at the university have already finished courses in subject content, pedagogical methods, and teaching practice. At the beginning of the fall semester, these pre-service teachers are assigned to affiliated schools to start their two-month on-site educational practice. During the practicum, they routinely observe classroom teaching, assist their supervising teachers, participate in managing student activities, and finally teach formal lessons themselves. In-service teachers from the practicum schools serve as supervising teachers, mentoring pre-service teachers to develop their professional ability; typically, an in-service teacher supervises one or two pre-service teachers. As part of the UGS initiative, in-service teachers benefit from facility and educational resource support provided by the university and local government. In turn, the university provides various professional development (PD) programs for in-service teachers from the practicum placements, such as PD training workshops, research-based collaborations, and PLC-based classroom preparation and practice. As one of the main PD programs, a ten-day face-to-face PD training program is held on campus for in-service teachers at the end of the school practicum. During this training period, pre-service teachers help their mentors manage and teach their classes to maintain routine teaching progress.

Methodology

Quantitative and qualitative research methods were used to develop and validate a new instrument for measuring Chinese chemistry teachers’ perceptions of their PCK for teaching CCCs (hereafter the PCK_CCC instrument) in the context of Chinese new curriculum reform. As validity is the most fundamental consideration in developing and evaluating instruments (AERA, APA, and NCME, 2014), we collected different kinds of empirical evidence to support the validity of the PCK_CCC instrument based on Trochim and Donnelly's (2006) framework. Accordingly, content validity assesses the degree to which the constructs are theoretically well defined and inclusive, whereas face validity evaluates whether the items of a construct reflect the theoretical constructs. Criterion-related validity provides relational conclusions that are expected based on theory. As two subtypes of construct validity, convergent and discriminant validity always work together: the former examines whether items of a construct are highly correlated with each other, while the latter examines whether items from different constructs are not highly correlated with each other (Trochim and Donnelly, 2006; Velayutham et al., 2011). Both concurrent and predictive validity concern the theoretically expected relationship between test scores and criterion scores. Concurrent validity relates test scores to criterion measures (e.g., demographic groups) collected at the same time, whereas predictive validity relates them to criterion measures collected at a later time (Velayutham et al., 2011; AERA, APA, and NCME, 2014).

This study established empirical evidence for content and face validity in the development stage; we further confirmed convergent and discriminant validity, and concurrent validity in the pilot and field test stage.
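The convergent and discriminant logic can be illustrated numerically: items written for the same construct should correlate highly with one another, while items from different constructs should correlate only weakly. The sketch below simulates responses for two three-item constructs and compares the average within-construct and between-construct item correlations. The data, factor structure, and loadings are invented for illustration and are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 210  # sample size matching the study

# Simulate two 3-item constructs: items within a construct share a latent factor.
latent_a = rng.normal(size=n)
latent_b = rng.normal(size=n)
construct_a = np.column_stack([latent_a + 0.5 * rng.normal(size=n) for _ in range(3)])
construct_b = np.column_stack([latent_b + 0.5 * rng.normal(size=n) for _ in range(3)])

items = np.hstack([construct_a, construct_b])
corr = np.corrcoef(items, rowvar=False)  # 6 x 6 inter-item correlation matrix

# Convergent evidence: items of the same construct correlate highly.
within_a = corr[:3, :3][np.triu_indices(3, k=1)].mean()
# Discriminant evidence: items of different constructs correlate weakly.
between = corr[:3, 3:].mean()
assert within_a > between
```

In real validation work these patterns are tested formally, e.g., by confirmatory factor analysis as reported in this study, rather than by inspecting raw correlations alone.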

Instrumentation

The framework of the PCK_CCC instrument. This study developed a new instrument to measure chemistry teachers’ perceptions of their PCK for teaching CCCs. Based on previous studies (Magnusson et al., 1999; Park and Oliver, 2008a, 2008b; Park and Chen, 2012), the constructs of the instrument comprised five subscales: knowledge of orientation to teaching and learning chemistry (KOTLC), knowledge of chemistry curriculum (KCC), knowledge of instructional strategies for teaching chemistry (KISTC), knowledge of assessment of chemistry learning (KACL), and knowledge of students’ understanding in chemistry (KSUC).
Item selection and creation. The item statements were mainly adapted from descriptions of the PCK components (Magnusson et al., 1999; Park and Oliver, 2008a, 2008b). Appendix 1 presents the constructs and elements of PCK, together with how the PCK elements were adapted for teaching CCCs and how the item statements appeared in the initial instrument. In particular, we emphasized the adaptation of all items by linking the descriptions of PCK elements from previous studies with the corresponding elaborations of the 2017 SHSCCS. In the item construction process, we drew on the descriptions of PCK elements in previous literature. An expert panel consisting of two chemistry education researchers and two teacher education researchers scrutinized the 2017 SHSCCS carefully to identify the main features of teaching CCCs corresponding to the PCK components and their elements. The panel members have both theoretical and practical expertise in teaching chemistry under core competency-based chemistry curriculum reform. One panel member served as a lead writer of the 2017 SHSCCS and was in charge of the section on Implementation Suggestions, covering curriculum development, classroom teaching, assessment, and teacher professional learning. All items in the initial instrument were created through an iterative process across the expert panel meetings.

As such, we constructed an initial version of the instrument involving 17 items to cover all constructs and elements of PCK mentioned in the previous literature. As Gogol and colleagues (2014) suggested, each subscale should consist of at least three items to ensure reliability, a practice also found in many instrument-development studies in science education (e.g., Schmidt et al., 2009; Zheng et al., 2014; Hayes et al., 2016). All items were presented on a five-point Likert scale (1 = strongly disagree to 5 = strongly agree). A response of “5 = strongly agree” indicates that the teacher considers himself or herself to possess sufficient PCK for teaching CCCs, whereas a response of “1 = strongly disagree” indicates that the teacher does not consider himself or herself able to teach chemistry core competencies well. Sample items for each subscale are shown in Table 2. Based on the context of CCCs, the five subscales are described as follows.
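The internal consistency of each multi-item subscale can be summarized with Cronbach's coefficient alpha. The sketch below implements the standard formula for a respondents-by-items score matrix; the three-item, five-point response matrix is invented for illustration and is not data from this study:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative five-point Likert responses for a three-item subscale (not real data).
scores = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
    [3, 2, 3],
])
alpha = cronbach_alpha(scores)  # 0.91 for these illustrative scores
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, which is the criterion behind the reliability evidence reported for the PCK_CCC subscales.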

• Knowledge of orientation to teaching and learning chemistry (KOTLC). Following Park and Oliver's (2008b) description, this refers to teachers’ beliefs about the purposes and goals of teaching chemistry for developing students’ CCCs, especially CCC5: Scientific Attitude and Social Responsibility. CCC5 emphasizes that the purpose of learning chemistry is no longer only to understand chemistry content; it extends to understanding the nature of chemistry, the intersection between chemistry and society, and everyday decision-making using chemistry knowledge. Correspondingly, chemistry teachers should know the nature of chemistry, the purpose of learning chemistry, and the value of learning chemistry. Thus, three items were designed for the KOTLC subscale.

• Knowledge of chemistry curriculum (KCC). This subscale mainly focuses on teachers’ knowledge about the curriculum materials available for teaching topic-specific content to help students obtain their CCCs (Grossman, 1990). For instance, when teaching “Chemical Reaction and Energy Transfer”, an experienced teacher should identify the main CCCs of the topic, which can be “CCC2: Changes and Equilibrium” and “CCC3: Evidence-based Reasoning and Modeling” at the current level, and should know the complete requirements of CCC2 and CCC3. To this end, chemistry teachers should know the vertical and horizontal requirements of the CCCs for a content-specific topic in the chemistry curriculum standards. The KCC subscale consisted of three items.

• Knowledge of instructional strategies for teaching chemistry (KISTC). Subject-specific and topic-specific strategies (Magnusson et al., 1999) were adopted in this subscale to explore chemistry teachers’ perceptions of their instructional strategies for teaching core competency-based chemistry. In the 2017 SHSCCS, several instructional strategies for a given topic are provided as suggestions for teachers’ reference. For teaching “Atomic Structure and Periodic Table”, the main CCCs can be identified as “CCC1: Macroscopic Identification and Microscopic Analysis” and “CCC4: Scientific Inquiry and Innovation”. To develop students’ CCCs, the 2017 SHSCCS provides instructional strategies such as “investigation and group discussion about the trends of elements in the third period” for CCC1 and “design and creation of your own periodic table” for CCC4. Thus, to meet the main CCCs of a given topic, teachers should know how to select instructional strategies properly. Accordingly, three items were included in the KISTC subscale.

• Knowledge of assessment of chemistry learning (KACL). This subscale explores teachers’ perceptions of their knowledge of diagnostic, formative, and summative assessment of students’ development of CCCs. The 2017 SHSCCS suggests that all types of assessment align with the standards for the development levels of CCCs. Moreover, teachers should select assessment types appropriate to the different CCCs: performance assessments of classroom activities can be employed to assess students’ practice-based CCCs (CCC3 and CCC4); paper–pencil tests are favored for assessing students’ core idea-based CCCs (CCC1 and CCC2); and students’ learning portfolios are regarded as a comprehensive tool for understanding the development of students’ CCCs. The KACL subscale consists of three items.

• Knowledge of students’ understanding of chemistry (KSUC). This subscale refers to teachers’ perceptions of their knowledge of students’ conceptions of particular topics, learning difficulties, motivation, interest, learning styles, and developmental levels (Park and Oliver, 2008b). After targeting the main CCCs for a particular topic, teachers should know students’ current levels of those CCCs, the potential difficulties in students’ development of them, and how to engage students in classroom activities. These aspects of students’ learning are vital for explaining students’ developmental traits in the CCCs. Therefore, five items were designed for the KSUC subscale.

Content and face validity. In the development process, robust theoretical PCK constructs and items from previous literature, together with the 2017 SHSCCS annotations, provide evidence to support the content validity of the instrument. After creating the initial instrument, we recruited external reviewers to review and provide feedback on the appropriateness of the constructs and items. The initial instrument was given to two chemistry education experts to examine whether the items belonged to the intended subscales. Three pre-service and three in-service chemistry teachers were asked to respond to the initial instrument individually to check that the items were not misunderstood. Their comments and suggestions were used to revise the instrument, providing evidence to support its face validity.

Participants

The participants were pre-service chemistry teachers at a normal university and in-service chemistry teachers involved in the UGS initiative for practicum placements in the northeast of Mainland China. In the 2017 and 2018 UGS practicum, 180 pre-service chemistry teachers (60 graduate and 120 undergraduate students from the College of Chemistry) enrolled in the practicum, and 140 in-service chemistry teachers from 18 schools served as their practice mentors. These schools are public secondary schools in 14 cities across four provinces (Heilongjiang, Jilin, Liaoning, and Inner Mongolia) in the northeast of China.

Regarding the pre-service teachers from this normal university, the undergraduates had already finished their compulsory courses in chemistry education theory and micro-teaching practice but did not have in situ classroom teaching experience in secondary schools. In comparison, all graduate students were pursuing master's degrees in chemistry education and had in situ classroom teaching practice in secondary schools during their two-year graduate programs. All in-service chemistry teachers were practice mentors of pre-service chemistry teachers from the same university. As a benefit of mentoring, these in-service teachers can attend a series of professional development training programs provided by the university. As part of the UGS initiative for practicum placements, local educational governments and schools signed commitments that allowed university researchers to collect data for research purposes; no further local procedures were required. All teachers voluntarily participated in the study and gave permission to use their performance data.

To meet the purpose of this study, we collected data in two periods. We distributed a paper version of the questionnaire to in-service teachers who attended our PD training programs; in-service teachers in the UGS practicum placements who did not attend our PD training were not included in this study. To collect pre-service teachers’ responses, we designed and distributed an online version of the questionnaire. Even so, a few pre-service teachers declined the invitation because of job hunting. In the pilot stage, 107 chemistry teachers were involved in validating and revising the initial survey. In the next stage, 103 more teachers were added to form the final data set of 210 chemistry teachers. The demographics of the participants are detailed in Table 2. In particular, 81.4% (n = 171) were female and 18.6% (n = 39) were male; the teachers ranged in age from 24 to 55. 60% (n = 126) had no more than 5 years of teaching experience, 17.6% (n = 37) had 6 to 15 years, and the remaining 22.4% (n = 47) had at least 16 years. Of the 210 chemistry teachers, 57.1% (n = 120) were pre-service teachers from a normal university, while 42.9% (n = 90) were in-service teachers from regional PD training programs in mainland China. Among the in-service teachers, 44 came from urban schools, and the others (n = 46) came from suburban and rural areas.

Table 1 Sample items in the five subscales
Subscales Number of items Sample items
Knowledge of Orientation to Teaching and Learning Chemistry (KOTLC) 3 KOTLC1: I believe the purpose of learning chemistry is to assist students’ view of the natural world through the perspective of chemistry.
Knowledge of Chemistry Curriculum (KCC) 3 KCC1: I can organize curriculum materials properly to help students gain specific chemistry core competencies.
Knowledge of Instructional Strategies for Teaching Chemistry (KISTC) 3 KISTC1: I can select topic-specific representation strategies properly to help students form chemistry core competencies.
Knowledge of Assessment of Chemistry Learning (KACL) 3 KACL1: I can evaluate students’ chemistry learning through the perspective of chemistry core competencies.
Knowledge of Students’ Understanding in Chemistry (KSUC) 5 KSUC1: I know well about students’ misconceptions of chemistry knowledge and the previous level of chemistry core competencies.


Table 2 Demographics of chemistry teachers in the pilot and field test
Pilot test Field test
Frequency Percent (%) Frequency Percent (%)
Note: teachers with zero teaching years are pre-service chemistry teachers without any long-term career teaching experience; teachers in the “University area” school region are pre-service teachers from normal universities.
Gender
Male 17 15.9 39 18.6
Female 90 84.1 171 81.4
Total 107 100 210 100
Age
<26 42 39.3 117 55.7
26–35 17 15.9 27 12.9
36–45 36 33.6 50 23.8
>45 12 11.2 16 7.6
Total 107 100 210 100
Teaching years
0–5 48 44.9 126 60
6–15 24 22.4 37 17.6
>15 35 32.7 47 22.4
Total 107 100 210 100
PD stage
In-service 45 42.1 90 42.9
Pre-service 62 57.9 120 57.1
Total 107 100 210 100
School region
Urban area 21 19.6 44 21.0
Suburban and rural area 41 38.3 46 21.9
University area 45 42.1 120 57.1
Total 107 100 210 100


Statistical analysis

To answer RQ1, we conducted item analysis, confirmatory factor analysis (CFA), and correlation analysis to provide evidence for the convergent and discriminant validity of the PCK_CCC instrument, and we used Cronbach's alpha coefficients to assess its internal consistency reliability.

Item analysis is a fundamental task for testing the extent to which success on an item corresponds to success on the whole test (Teo, 2010; Chiang and Liu, 2014). Item convergent validity was examined by calculating point-biserial correlations between each item score and the total score (Ferketich, 1991), with an acceptable range of 0.3 to 0.7. Item discriminant validity was examined by testing the mean difference between extreme groups: an independent-sample t-test between the upper and lower groups (the top and bottom 27 percent of total scores) was used to examine the acceptability of item discrimination. 27 percent is used because “this value will maximize differences in normal distributions while providing enough cases for analysis” (Wiersma and Jurs, 1990, p. 145). Items with no significant difference between the upper and lower 27 percent groups are considered to have poor discrimination (Turkan and Liu, 2012; Chiang and Liu, 2014). Convergent and discriminant validity are also supported when correlations between items within a construct are higher than correlations between items across constructs (Trochim and Donnelly, 2006). Items that failed to meet these criteria were excluded from the subsequent analysis.
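The item-analysis procedure above can be sketched in a few lines of stdlib Python. The response data below are hypothetical; a production analysis would also correct the item-total correlation by removing the item from the total, and would attach a p-value to the t statistic.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def item_total_correlation(scores, item):
    """Correlate one item's scores with each respondent's total score."""
    totals = [sum(row) for row in scores]
    column = [row[item] for row in scores]
    return pearson(column, totals)

def extreme_group_t(scores, item, frac=0.27):
    """Welch t statistic comparing the upper and lower `frac` of
    respondents (ranked by total score) on one item."""
    ranked = sorted(scores, key=sum)
    k = max(2, int(round(frac * len(scores))))
    lo = [row[item] for row in ranked[:k]]
    hi = [row[item] for row in ranked[-k:]]
    def mean_var(xs):
        m = sum(xs) / len(xs)
        v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
        return m, v
    (mh, vh), (ml, vl) = mean_var(hi), mean_var(lo)
    return (mh - ml) / math.sqrt(vh / len(hi) + vl / len(lo))
```

Items whose item-total correlation falls outside 0.3–0.7, or whose extreme-group t statistic is non-significant, would be flagged for removal.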

Confirmatory factor analysis (CFA) is a particular application of structural equation modeling (SEM). As a SEM-based method, CFA enables the estimation and analysis of latent variables underpinned by SEM mathematical principles and statistical procedures. In contrast to exploratory factor analysis (EFA), CFA aims to confirm a particular pattern of relationships predicted from theory or previous analytic results (Kline, 2015; DeVellis, 2016). This study constructed a five-factor model based on the five notable PCK constructs in the previous literature (Magnusson et al., 1999; Park and Oliver, 2008a, 2008b; Park and Chen, 2012). CFA was employed to provide statistical criteria for evaluating how well the real data fit the five-factor model. In addition, the CFA procedure facilitates the evaluation of convergent and discriminant validity, reliability, and individual item quality. We utilized multiple types of indices to evaluate the overall goodness of fit of the five-factor model, including absolute fit indices, relative fit indices, and parsimony fit indices (Schreiber et al., 2006). All fit indices and their cut-off criteria are summarized in Table 3. After item analysis and the CFA test, we finalized the instrument with its constructs and associated items. Correlations between subscale scores and the total score were then computed to further confirm the convergent and discriminant validity of the instrument (Trochim and Donnelly, 2006). Cronbach's alpha values for each subscale and the total instrument were reported as the internal consistency reliability of the instrument.
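Although the model in this study was fitted in AMOS, the same five-factor measurement structure can be written in lavaan-style syntax (used by R's lavaan and Python's semopy). This is an illustrative sketch only, with item names following Table 1; KSUC3 is the item later dropped during model modification.

```python
# Five-factor CFA measurement model in lavaan-style syntax.
# Each "=~" line defines one latent factor and its indicator items.
PCK_CCC_MODEL = """
KOTLC =~ KOTLC1 + KOTLC2 + KOTLC3
KCC =~ KCC1 + KCC2 + KCC3
KISTC =~ KISTC1 + KISTC2 + KISTC3
KACL =~ KACL1 + KACL2 + KACL3
KSUC =~ KSUC1 + KSUC2 + KSUC3 + KSUC4 + KSUC5
"""

# Extract the factor names from the specification string.
factors = [line.split("=~")[0].strip()
           for line in PCK_CCC_MODEL.strip().splitlines()]
```

In semopy, this string would be passed to `Model(...)` and fitted to the item-level data; AMOS expresses the same structure graphically.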

To answer RQ2, we conducted independent t-tests and ANOVA tests to examine differences among demographic groups as evidence of the concurrent validity of the instrument (e.g., Velayutham et al., 2011). Concurrent validity examines whether each construct can distinguish between groups that theory predicts should differ (Trochim and Donnelly, 2006). Mean differences in each subscale score and the total score were examined across professional levels, teaching experience groups, and school regions.

Version 22.0 of the Statistical Package for the Social Sciences (SPSS) was used for item analysis and for descriptive and inferential statistics. The AMOS 22.0 software (Arbuckle, 2013) was used to run the CFA.

Results

Validity evidence from the pilot test

In the pilot stage, we conducted item analysis and assessed the internal consistency reliability of the initial instrument using the data from 107 chemistry teachers. For item analysis, there were significant differences between the upper 27 percent and lower 27 percent groups on every item (t = [5.13–8.36], p < 0.001), indicating that the items have good discrimination. The correlation coefficients between each item score and the total score ranged from 0.35 to 0.77, indicating that all 17 items contribute significantly to the whole instrument (Liang and Tsai, 2008; Lavigne and Vallerand, 2010). Also, the correlation coefficients between items within the same construct were higher than those between items in different constructs. For instance, the correlation coefficients among the three KOTLC items ranged from 0.39 to 0.64, much higher than the correlations between those items and items in other constructs (0.05–0.31). We calculated the internal consistency reliability for each subscale and the whole instrument. As Taber's (2018, p. 1278) review of articles in notable science education journals shows, a Cronbach's alpha greater than 0.70 is the usual cut-off for reporting research instruments in science education. The Cronbach's alpha values for the subscales ranged from 0.74 to 0.85, and the reliability coefficient of the whole instrument was 0.91. These results suggested that the initial 17-item instrument had good convergent and discriminant validity and reliability. Given that, we decided to use the current instrument to collect more data for the field test.

Validity evidence from the field test

In the field test, we used data from 210 chemistry teachers to provide further empirical evidence for validating the instrument. The sample size was more than ten times the number of items, which meets the sample-size criteria for SEM-based statistics such as CFA (Hair et al., 2010; Kline, 2015). The results of item analysis, CFA, correlations, and reliability are presented in the following sections.
Item analysis. We first calculated the mean differences between the upper 27% and lower 27% groups with independent t-tests and found significant differences between them on every item (t = [7.37, 13.19], p < 0.001). This further shows that all items discriminate well between teachers with different perceptions of PCK for teaching CCCs. The correlations between item scores and the total score ranged from 0.45 to 0.71, a pattern similar to the pilot results. Likewise, the correlations between items within the same construct remained higher than the correlations with items in other constructs. Thus, the convergent and discriminant validity of the instrument was confirmed at the item level.
CFA. Following the theoretical foundation of the PCK constructs, we established an initial five-factor model with the original 17 items (see Fig. 1). The CFA was run on the data from the 210 teachers using AMOS 22.0.
Fig. 1 Final CFA model.

The overall goodness-of-fit tests for the initial and final models are presented in Table 3. According to the index baseline criteria (Schreiber et al., 2006), some critical fit indices for the initial model did not meet the criteria. The root mean square error of approximation (RMSEA) value was greater than 0.08, indicating an unsatisfactory fit between the initial model and the data. Also, the values of some absolute fit indices (GFI, AGFI) and relative fit indices (NFI, RFI, TLI) were lower than 0.90, even though those values [0.82–0.88] were close to the baseline. These results indicated that the initial model should be revised. We therefore examined the Modification Indices (MIs) from the AMOS output, where large MI values flag parameters of concern for modifying the initial model. An MI represents the possible reduction in the overall model's Chi-square value when a specific parameter (e.g., a residual covariance) is added or an individual item is removed (Chiang and Liu, 2014). We found that the MIs for the covariances between residual e6 and other residuals (e11, e9, and e3) were higher than 10, indicating that the item KSUC3 might not be a clear indicator in the model (Hu and Bentler, 1999; Schreiber et al., 2006). We dropped the item KSUC3 and constructed the revised, final model consisting of five factors with 16 items. We then reran the CFA on the same data set and found that all indices improved considerably; in particular, the RMSEA value (0.078) was smaller than 0.08. In the initial model, the Chi-square value was 268.40 (df = 109); after modification, the Chi-square value decreased significantly to 214.62 (df = 94). The Chi-square/df value (2.28) was smaller than 3. The GFI (0.89) and AGFI (0.84) were acceptable among the absolute fit indices, and the SRMR and RMSEA were good.
Among the relative fit indices, three (IFI, TLI, and CFI) were good (larger than 0.90), whereas the other two (NFI and RFI) were acceptable (0.87 and 0.83). All three parsimony fit indices (PGFI, PNFI and PCFI) were good (larger than 0.50). More detailed information is shown in Table 3. Overall, the majority of the indices show acceptable or good fit for the final model.
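The RMSEA and χ²/df values reported here follow directly from the chi-square statistics. A quick check, using the standard point-estimate formula RMSEA = √(max(χ² − df, 0)/(df·(N − 1))) with N = 210:

```python
import math

def rmsea(chi2, df, n):
    """Point estimate of the root mean square error of approximation."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Chi-square values and degrees of freedom as reported in Table 3 (N = 210).
initial = rmsea(268.40, 109, 210)   # ≈ 0.084, the initial-model RMSEA
final = rmsea(214.62, 94, 210)      # ≈ 0.078, the final-model RMSEA
chi2_df = 214.62 / 94               # ≈ 2.28, the final χ²/df ratio
```

Both RMSEA values and the χ²/df ratio reproduce the entries in Table 3 (the table's confidence intervals require the non-central chi-square distribution and are not computed here).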

Table 3 Summary of the goodness-of-fit indexes
Overall goodness-of-fit indexes Criteria Initial model Final model Evaluation results
Notes: df, degrees of freedom; GFI, goodness-of-fit index; AGFI, adjusted goodness-of-fit index; SRMR, standardized root mean square residual; RMSEA, root mean square error of approximation; NFI, normed fit index; RFI, relative fit index; IFI, incremental fit index; TLI, Tucker–Lewis index; CFI, comparative fit index; PGFI, parsimony goodness-of-fit index; PNFI, parsimony normed fit index; PCFI, parsimony comparative fit index.
Absolute fit indices
Chi-square (χ2) p > 0.05 268.40 214.62 Poor
df 109 94
GFI ≥0.90 0.87 0.89 Acceptable
AGFI ≥0.90 0.82 0.84 Acceptable
SRMR ≤0.08 0.036 0.035 Good
RMSEA ≤0.08 0.084 [0.071, 0.096] 0.078 [0.065–0.092] Good
Relative fit indices
NFI ≥0.90 0.86 0.87 Acceptable
RFI ≥0.90 0.82 0.83 Acceptable
IFI ≥0.90 0.91 0.92 Good
TLI ≥0.90 0.88 0.90 Good
CFI ≥0.90 0.91 0.92 Good
Parsimony fit indices
PGFI ≥0.50 0.62 0.62 Good
PNFI ≥0.50 0.69 0.68 Good
PCFI ≥0.50 0.73 0.72 Good
χ 2/df ≤3 2.46 2.28 Good


Table 4 Summary of confirmatory factor analysis on the final model
Factors Items Standardized factor loading Variance explained CR AVE
Note: CR represents composite reliability; AVE represents average variance extracted.
KACL KACL1 0.57 0.33 0.78 0.55
KACL2 0.79 0.62
KACL3 0.84 0.70
KSUC KSUC1 0.75 0.57 0.79 0.48
KSUC2 0.58 0.34
KSUC4 0.73 0.53
KSUC5 0.70 0.48
KCC KCC1 0.72 0.51 0.85 0.65
KCC2 0.83 0.69
KCC3 0.86 0.73
KISTC KISTC1 0.85 0.72 0.84 0.64
KISTC2 0.88 0.78
KISTC3 0.63 0.40
KOTLC KOTLC1 0.73 0.54 0.74 0.49
KOTLC2 0.81 0.66
KOTLC3 0.52 0.27


Table 4 presents the summary of the CFA on the final model. The standardized factor loadings of all items were greater than 0.45, indicating that the constructs showed good convergent validity (Hair et al., 2010). We calculated the average variance extracted (AVE) for each construct to examine whether the items contribute more than the errors to the factors (Fornell and Larcker, 1981); the recommended cut-off for AVE is 0.50. The AVEs of three factors (KACL, KCC and KISTC) were higher than 0.50, and those of the KSUC (0.48) and KOTLC (0.49) factors were very close to the baseline. The composite reliability (CR) values of all factors ranged between 0.74 and 0.85, exceeding the criterion value of 0.70 (Hair et al., 2010).
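The AVE and CR values can be reproduced from the standardized loadings in Table 4 with the standard Fornell–Larcker formulas; for example, for the KACL factor:

```python
def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    return sum(l * l for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each item's error variance is 1 - loading^2 (uncorrelated errors)."""
    s = sum(loadings) ** 2
    err = sum(1 - l * l for l in loadings)
    return s / (s + err)

kacl = [0.57, 0.79, 0.84]              # KACL loadings from Table 4
kacl_ave = ave(kacl)                   # ≈ 0.55, matching Table 4
kacl_cr = composite_reliability(kacl)  # ≈ 0.78, matching Table 4
```

The small discrepancies for other factors (e.g., a computed KOTLC CR of 0.73 versus the reported 0.74) arise from rounding the loadings to two decimal places.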

Correlations. After the CFA, we confirmed the final instrument as a five-factor structure consisting of 16 items. Correlations among the five subscales and the whole instrument were computed to provide further evidence for the convergent and discriminant validity of the instrument; the results are shown in Table 5. The correlations between each subscale and the whole instrument ranged from 0.62 to 0.83, indicating that each subscale contributed a significant part of the whole instrument. The correlations among the subscales ranged from 0.28 to 0.63, indicating that they are distinct yet related; in other words, the five PCK components are interdependent. Among these relationships, the correlation between KACL and KSUC (0.62) was higher than the correlations of these two components with the others (0.28–0.58). The strong connection between KACL and KSUC is supported by previous studies (Henze et al., 2008; Park and Oliver, 2008a; Park and Chen, 2012). Similarly, the strong connection between KCC and KISTC can be seen in their high correlation (0.63). The required school science curriculum was “the single most powerful determinant of teacher knowledge, serving as both its organizer and source” (Arzi and White, 2007), and teachers’ lack of curricular knowledge of a topic limits their instructional strategies (Cohen and Yarden, 2009).
Table 5 The reliability and correlations among the total and the five subscales
Subscale Item no. Cronbach alpha Correlations
KACL KSUC KCC KISTC KOTLC
Note: * <0.05, ** < 0.01, *** <0.001.
KACL 3 0.77 1
KSUC 4 0.79 0.62** 1
KCC 3 0.84 0.52** 0.58** 1
KISTC 3 0.83 0.46** 0.54** 0.63** 1
KOTLC 3 0.72 0.28** 0.44** 0.35** 0.31** 1
Total 16 0.90 0.76** 0.83** 0.82** 0.77** 0.62**


Reliability. Table 5 presents the internal consistency reliability of the five subscales and the whole instrument. The Cronbach's coefficient alphas for KACL, KSUC, and KOTLC ranged from 0.72 to 0.79, which is acceptable, whereas the values of the other two subscales (KCC and KISTC) were higher than 0.80, indicating good reliability. The overall reliability of the instrument was 0.90, indicating that the final 16-item instrument is reliable for measuring teachers’ perceptions of their PCK for teaching CCCs.
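For reference, Cronbach's alpha can be computed from a respondents-by-items score matrix with the classic formula α = k/(k − 1)·(1 − Σ item variances / total-score variance). This is an illustrative stdlib-only sketch, not the SPSS procedure used in the study:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondent rows (one score per item)."""
    k = len(scores[0])              # number of items
    def var(xs):                    # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

Perfectly correlated items yield α = 1; uncorrelated items drive α toward 0, which is why α above 0.70 is taken as evidence of internal consistency.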

In sum, we developed the final PCK_CCC instrument consisting of five constructs with 16 items. The above results provide sufficient evidence to support the convergent and discriminant validity and the reliability of the final instrument for measuring chemistry teachers’ perceptions of their PCK for teaching CCCs.

Concurrent validity

In addition to the construct validity of the final PCK_CCC instrument, we examined mean differences among teacher demographic groups to provide evidence for the concurrent validity of the instrument. The descriptive statistics and the mean differences by teacher gender, PD stage, teaching experience, and school region are presented below.
Descriptive statistics. Table 6 presents the descriptive statistics of teachers’ scores on the five subscales and the whole instrument. All subscale mean scores were higher than 3, indicating that the chemistry teachers in the UGS initiative were confident about teaching chemistry core competencies. The mean score on KOTLC (4.26) was higher than on the other four subscales, whereas the KCC score was relatively lower than the others. These results show that chemistry teachers were more confident in their knowledge of the orientation to teaching and learning chemistry in the context of CCCs than in their knowledge of the CCC-based curriculum, instructional strategies, and assessments. One interpretation is that teachers tend to change their orientation at the beginning of a curriculum reform but are slow to adjust their instruction and assessment practices in their classrooms. The high mean of the total score (3.87) indicates that the chemistry teachers who participated in this study were confident about their PCK for teaching CCCs.
Table 6 Descriptive statistics
Subscale Item no. Counts Mean Min Max SD
KACL 3 210 3.81 1.33 5.00 0.60
KSUC 4 210 3.87 2.25 5.00 0.57
KCC 3 210 3.61 1.33 5.00 0.66
KISTC 3 210 3.78 2.00 5.00 0.58
KOTLC 3 210 4.26 2.67 5.00 0.57
Total 16 210 3.87 2.38 5.00 0.46


Gender differences. First, we provide evidence to support the concurrent validity of the PCK_CCC instrument by reporting the mean differences between teacher genders. Consistent with previous studies (e.g., Koh et al., 2014; Kavanoz et al., 2015), we assumed the instrument should not distinguish teacher perceptions by gender. Independent-sample t-tests were conducted to examine the difference between female and male teachers’ perceptions of their PCK for teaching CCCs. The results showed no significant difference between them in the subscale scores (t = [−0.30, 1.46], p > 0.05) or the total score (t = 1.00, p > 0.05), indicating no gender bias in the PCK_CCC instrument.
In-service and pre-service differences. We further examined the concurrent validity of the instrument with evidence from the mean differences between in-service and pre-service teachers. Previous research has recognized differences between in-service and pre-service teachers in their belief structures, knowledge, and teaching experience (Damnjanovic, 1999; Spaulding, 2007). Based on this notion, we assumed that our instrument could distinguish the two groups’ perceptions of their PCK for teaching CCCs. Results indicated significant differences across four subscales (KACL, KSUC, KCC, and KOTLC) (see Table 7), with effect sizes ranging from 0.32 to 0.60; in-service teachers were more confident than pre-service teachers in those aspects of their PCK. Interestingly, there was no significant difference between them on the KISTC subscale (t = 1.243, p > 0.05): both in-service and pre-service teachers were confident in their knowledge of using subject-specific and topic-specific strategies (Magnusson et al., 1999). Overall, in-service teachers were more confident than pre-service teachers about their PCK for teaching CCCs (t = 3.982, p < 0.001), even though both groups reported confidence.
Table 7 Summary of mean differences between in-service and pre-service teachers
Subscale Mean and standard deviation (SD) t value Cohen's d
In-service (n = 90) Pre-service (n = 120)
Note: * <0.05, ** < 0.01, *** <0.001.
KACL 3.92 (0.60) 3.73 (0.59) 2.380* 0.32
KSUC 4.06 (0.58) 3.73 (0.52) 4.329*** 0.60
KCC 3.79 (0.68) 3.48 (0.62) 3.340** 0.48
KISTC 3.84 (0.57) 3.74 (0.59) 1.243 0.17
KOTLC 4.42 (0.54) 4.13 (0.57) 3.783*** 0.52
Total 4.01 (0.48) 3.76 (0.41) 3.982*** 0.56
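The effect sizes in Table 7 can be reproduced from the reported means and standard deviations using Cohen's d with a pooled standard deviation; for example, for the KACL row:

```python
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d for two independent groups with a pooled standard deviation."""
    pooled = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                       / (n1 + n2 - 2))
    return (m1 - m2) / pooled

# KACL row of Table 7: in-service (n = 90) vs pre-service (n = 120).
d_kacl = cohens_d(3.92, 0.60, 90, 3.73, 0.59, 120)   # ≈ 0.32
```

The same function reproduces the other rows (e.g., KSUC gives d ≈ 0.60) from their tabulated means and SDs.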


Groups of teaching years differences. Our instrument was also expected to distinguish teachers’ PCK for teaching CCCs across groups with different years of teaching experience (e.g., Koh et al., 2014; Saltan and Arslan, 2017). We compared the mean differences among three teaching-experience groups (Years 0–5, Years 6–15 and Years 16+) (e.g., Cheramie et al., 2007; Chval et al., 2008; Dong et al., 2015), employing one-way ANOVAs to examine differences among the groups and Tukey's test for post hoc analysis to provide deeper insights into the pairwise patterns. Significant differences among the three groups existed in three subscales (KSUC, KCC, and KOTLC), but not in the other two (KACL and KISTC). Table 8 shows the detailed results of the ANOVA analysis. Tukey's test showed that significant differences mainly existed between the Year 16+ and Year 0–5 teachers and between the Year 6–15 and Year 0–5 teachers. These results indicate that teachers with more teaching experience have more confidence in those three aspects of their PCK (KSUC, KCC, and KOTLC); however, these patterns do not hold for the KACL and KISTC aspects. Overall, teachers with 16+ years of teaching experience were much more confident about their PCK for teaching CCCs than teachers with 0–5 years, but not than those with 6–15 years.
Table 8 Summary of mean differences among three teacher groups by teaching years
Subscale Mean and standard deviation (SD) F value Post hoc
(1) Year 0–5 (N = 126) (2) Year 6–15 (N = 37) (3) Year 16 + (N = 47)
Note: * <0.05, ** < 0.01, *** <0.001.
KACL 3.74 (0.60) 3.87 (0.64) 3.97 (0.55) 2.091
KSUC 3.75 (0.52) 4.01 (0.64) 4.08 (0.55) 7.673** (3) > (1)
(2) > (1)
KCC 3.49 (0.62) 3.68 (0.68) 3.88 (0.69) 6.253** (3) > (1)
KISTC 3.74 (0.58) 3.76 (0.57) 3.90 (0.59) 1.327
KOTLC 4.14 (0.57) 4.41 (0.59) 4.45 (0.48) 7.178** (3) > (1)
(2) > (1)
Total 3.77 (0.41) 3.95 (0.52) 4.06 (0.45) 7.856** (3) > (1)


Groups of school region differences. Given the large disparities between urban and rural schools in China (Hao, 2013; Zhao, 2015), we intended the instrument to capture differences in teachers’ perceptions of their PCK for teaching CCCs by school region. Mean differences were compared among teachers from three school regions: university, suburban and rural, and urban areas (e.g., Chval et al., 2008; Dong et al., 2015). One-way ANOVAs showed significant differences among these groups in the KACL, KSUC, KCC, and KOTLC subscales, but not in the KISTC subscale. Post hoc analysis showed that teachers from urban schools were more confident than those from suburban and rural schools and those from the university, especially in the KACL, KSUC and KOTLC subscales. Teachers from suburban and rural schools were more confident than university-based pre-service teachers only on the KCC subscale, with no significant difference on the other subscales. The total scores likewise showed that teachers from urban schools were more confident in their PCK for teaching CCCs than teachers from the other regions. The detailed results are shown in Table 9.
Table 9 Summary of mean differences among the groups of school regions
Subscale Mean and standard deviation (SD) F value Post hoc
(1) University (N = 120) (2) Suburban and rural (N = 46) (3) Urban (N = 44)
Note: * <0.05, **< 0.01, *** <0.001.
KACL 3.72 (0.59) 3.77 (0.63) 4.09 (0.53) 6.286** (3) > (1)
(3) > (2)
KSUC 3.73 (0.52) 3.79 (0.49) 4.34 (0.54) 23.171*** (3) > (1)
(3) > (2)
KCC 3.48 (0.62) 3.80 (0.70) 3.77 (0.66) 5.568** (3) > (1)
(2) > (1)
KISTC 3.74 (0.59) 3.83 (0.56) 3.84 (0.60) 0.771
KOTLC 4.13 (0.57) 4.27 (0.50) 4.58 (0.53) 11.152*** (3) > (1)
(3) > (2)
Total 3.76 (0.41) 3.89 (0.49) 4.12 (0.45) 11.398*** (3) > (1)
(3) > (2)


Discussion and conclusions

This study developed and validated a new instrument for measuring chemistry teachers’ perceptions of their PCK for teaching CCCs in the context of the new Chinese curriculum standards. Multiple data sources were analyzed to provide sufficient evidence for validating the new instrument, underpinned by Trochim and Donnelly's (2006) framework of construct validity. On that basis, we constructed a valid and reliable instrument with a five-factor structure consisting of 16 items.

In the development stage, the content validity of the instrument was confirmed by carefully examining the theoretical foundation of its constructs (Magnusson et al., 1999; Park and Oliver, 2008a) and by aligning the item statements with the notion of CCCs (Ministry of Education, P. R. China, 2017). In line with previous studies of instrument development (e.g., Velayutham et al., 2011; Zheng et al., 2014), a review panel with diverse expertise safeguarded the face validity of the instrument. As Park and Suh (2019) claimed, the PCK pentagon model can be utilized as an analytical tool for capturing and displaying the abstract and complex constructs of PCK; it was therefore suitable for this study to adopt the pentagon model to frame the constructs of the instrument and the attributes of the items in each construct. Reflecting on the framework of the 2019 RCM (Hume et al. (ed.), 2019), this study has investigated chemistry teachers’ perceptions of collective PCK in the subject domain of teaching chemistry core competencies. Teacher perception of PCK for teaching CCCs should be treated as collective PCK because it has been articulated and is shared among a group of professionals (Carlson and Daehler, 2019). In this study, that group of professionals includes the lead writer of the 2017 SHSCCS and an expert panel of chemistry educators, expert teachers, pre-service teachers, and in-service teachers, who worked together to contribute to the combined professional knowledge base for teaching a given subject matter (CCCs).

Regarding RQ1, we employed item analysis, CFA, and correlation analysis to establish the convergent and discriminant validity of the new instrument in the pilot and field test stages, respectively. Results from those analyses provided sufficient evidence at both the item level and the construct level to support the entire structure of the instrument and the quality of individual items and constructs. The overall goodness-of-fit indices from the CFA serve as empirical evidence that further supports the utility of the PCK pentagon model. Notably, item KSUC3 was deleted in the CFA process to reach a better model fit. Research indicates that teacher perception of student motivation is multi-dimensional, making it difficult to measure with a single item (Martin, 2006; Hardre et al., 2008); thus, item KSUC3 was not included in the final instrument. From the correlation analysis, we confirmed that the five subscales were closely related to each other, consistent with previous studies of the interactions between PCK components (Park and Oliver, 2008b; Park and Chen, 2012).

However, one concern that may threaten the discriminant validity of the new instrument should be noted. On theoretical grounds, there should be a moderately strong correlation between the factors (Field, 2009), but correlation coefficients above 0.80 imply overlapping concepts and point towards poor discriminant validity (Brown, 2006). The correlations between KCC and KISTC (0.63) and between KACL and KSUC (0.62) were strong even though they met the discriminant validity requirements. There is a debate about the connection between knowledge of the science curriculum and other PCK components. Our findings are consistent with some previous studies (Arzi and White, 2007; Cohen and Yarden, 2009). However, other research found that knowledge of the science curriculum had the most limited connection with other components (Park and Chen, 2012). Park and Chen (2012) proposed a plausible explanation: the teachers in their study had a narrow view of the curriculum as a collection of topics. In this study, the instrument examined teachers’ perceptions of teaching CCCs generally, not in a topic-specific manner. Moreover, unlike in many other countries, Chinese science textbooks are designed to align with the curriculum standards, so teachers in China can easily connect their knowledge of the curriculum and textbooks to their knowledge of instructional strategies. In other words, under the Chinese curriculum reform, teachers tend to use the classroom activities and suggested lesson sequences in science textbooks, which ties their selection of instructional strategies closely to the curriculum. Likewise, knowledge of assessment of chemistry learning (KACL) was tightly integrated with knowledge of students’ understanding of chemistry (KSUC). As Park and Chen (2012) suggested, this connection indicates that when chemistry teachers consider assessment, they are likely to align the assessment with student learning.

Regarding RQ2, the concurrent validity of the instrument was determined by differentiating teachers’ perceptions of their PCK for teaching CCCs between demographic groups. Within these groups, in-service teachers scored significantly higher than pre-service teachers, congruent with the literature (e.g., Aydeniz and Kirbulut, 2014; Meschede et al., 2017). Understandably, pre-service teachers have less teaching experience than in-service teachers, so they handle knowledge of learners and of instruction for teaching chemistry less well. Through the core courses in pre-service teacher education programs, however, they had learned knowledge of the orientation to teaching and learning chemistry comparable to that of in-service teachers, especially regarding the new chemistry curriculum reform.

The differences among teaching-experience groups showed that teachers with more than 16 years of teaching experience are more likely to accommodate core competency-based classroom teaching than teachers with 0–5 or 6–15 years of teaching experience. We speculate on several reasons for this observation. First, teachers with more teaching experience are more likely to be focal teachers in their schools, with more opportunities to attend local and national professional development programs. Second, the more teaching experience teachers have, the more confidence they have in their PCK (Koh et al., 2014; Saltan and Arslan, 2017). These findings may be of interest to chemistry education and teacher education researchers: professional development programs for facilitating teachers’ implementation of CCCs should be designed and conducted to account for differences in teaching experience.

Regarding the differences among school-region groups, teachers from urban areas reported significantly higher perceptions of PCK for teaching CCCs than teachers from suburban, rural, and university areas. Potential reasons are supported by previous studies on Chinese teacher PD programs: teachers from suburban or rural areas have fewer opportunities to attend professional development programs than urban teachers (Sargent and Hannum, 2009); they face time constraints in implementing innovative teaching (Wang, 2011); and a gap in teacher quality and teacher development exists between rural and urban areas (Peng et al., 2014). The competence-based chemistry curriculum standards were designed to promote all students’ development of CCCs in China, and equity in facilitating teachers’ professional learning is vital to equity for all students. Thus, national and local educational departments should adopt policies that provide more opportunities for teachers from suburban and rural areas.

The contribution of this study

This study contributes to the professional learning research community by developing a valid and reliable instrument for measuring teachers’ self-reports of their PCK in a given subject matter. The developed instrument can be a pre-post tool to demonstrate outcomes of PD training programs related to the 2017 SHSCCS. It can be used to check the progress of professional development programs in implementing CCCs, and it also provides empirical evidence for curriculum developers and policymakers. We further validated the constructs of PCK by analyzing teacher survey data and found that the five components (Park and Oliver, 2008b) were suitable for constructing teachers’ perceptions of their PCK for teaching CCCs. The constructs of the instrument and the relationships between PCK components can be used as empirical evidence for further supporting the PCK pentagon model.

The findings of this study also offer new insight into testing the refined consensus model (RCM) of PCK (Carlson and Daehler, 2019), especially for discipline-specific collective PCK (e.g., a chemistry core competency-based curriculum) in a particular learning context (e.g., the Chinese educational background). The findings can be used to test the RCM by expanding its scope to the teaching of a competency-based chemistry curriculum. Future studies of personal and enacted PCK at the grain size of topic-specific and concept-specific domains would further test the RCM.

The limitations and future study

The limitations of this study should be noted. Firstly, some items in the instrument may be interpreted by teachers in their own ways. Several items contain terminology that is common in educational research but not in teachers’ daily language, such as topic-specific activities (item KISTC1), the coherence of different specific chemistry core competencies (item KCC3), and the nature of chemistry (item KOTLC3). This might introduce a bias, as pre-service teachers are more familiar with such terminology, having encountered it in methods courses during their teacher education programs. Those items should be further examined and explained to ensure that both pre-service and in-service teachers understand them clearly and explicitly. Secondly, although the items were derived from previous studies, all of them were contextualized within the subject of chemistry in the 2017 SHSCCS; this contextualization integrated the features of the chemistry subject and the uniqueness of CCCs. Even though we recruited an expert panel to create the items, potential bias might have existed owing to the involvement of a 2017 SHSCCS designer. Given that, transferring the instrument to other subjects should be considered carefully. Thirdly, the participants were conveniently sampled from a UGS initiative for practicum placements associated with a normal university in northeast China. The instrument validated in this study may therefore not be suitable for other parts of China, especially regions with different economic and educational contexts. In conclusion, the development and validation of the instrument were conducted under the new Chinese curriculum reform, and the findings are unlikely to be generalizable; however, the instrument can be localized to fit other subjects both in China and in other countries.

The instrument, grounded in teacher perception, has to be supplemented and corroborated by other data sources, such as classroom observation, in-depth interviews, and paper–pencil tests. Using multiple measures, chemistry teachers’ knowledge and enactment of PCK for teaching CCCs in both the personal and enacted realms (Carlson and Daehler, 2019; Chan et al., 2019) could be described and interpreted comprehensively, and the components and integration of teachers’ knowledge and skills of PCK for teaching CCCs could be further examined. In so doing, teachers’ perception, knowledge, and skills of their PCK would together provide a holistic view of their PCK for teaching core competency-based chemistry.

Conflicts of interest

There are no conflicts to declare.

Appendix

Constructs and aspectsa Annotations and adaptions for teaching CCCsb The statements of itemsc
Note: aThe constructs and aspects of PCK were adapted directly from the Pentagon model in Park and Oliver (2008a, 2008b). bThe “Annotations from literature” serve as elaborations of the individual PCK aspects; the “Adaptions from the 2017 SHSCCS” provide evidence for contextualizing the individual PCK aspects in the notion of the 2017 SHSCCS; 2017 SHSCCS is short for the Senior High School Chemistry Curriculum Standards (Ministry of Education, P. R. China, 2017). cThe 17 item statements were designed for the initial instrument; except item *KSUC3, all other 16 items were included in the final instrument.
Subscale 1: Knowledge of Orientation to Teaching and Learning Chemistry (KOTLC)
– Beliefs about purposes of learning science (Grossman 1990; Park and Oliver, 2008a) Adaption from the 2017 SHSCCS: “Chemistry is playing an increasingly important role in promoting the sustainable development of human civilization and is the core force that reveals the mysteries of elements to life.” (p. 1) KOTLC1: I believe the purpose of learning chemistry is to assist students’ view of the natural world through the perspective of chemistry.
– Decision making in teaching (Grossman 1990; Park and Oliver, 2008a) Adaption from the 2017 SHSCCS: “Chemistry core ideas are essential scientific literacies for all high school students, which prompt students’ life-span learning and development.” (p. 1) KOTLC2: I believe the purpose of teaching chemistry is to help students’ better understanding of chemistry core ideas and its values for personal development.
– Beliefs about the nature of science (Grossman 1990; Park and Oliver, 2008a) Adaption from the 2017 SHSCCS: “The essence of chemistry discipline is to know and create chemical substances.” (p. 1) KOTLC3: I believe the nature of chemistry is to know chemical substances, and further to create chemical substances.
Subscale 2: Knowledge of Chemistry Curriculum (KCC)
– Curricular materials (Grossman 1990; Park and Oliver, 2008a; Aydin et al., 2014) Adaption from the guideline in the 2017 SHSCCS: “The organization and structure of chemistry curriculum should be designed aligned with the 2017 SHSCCS for prompting students development of chemistry core competencies.” (p. 82) KCC1: I can organize curriculum materials properly to help students gain specific chemistry core competencies.
– Vertical curriculum (Grossman 1990; Park and Oliver, 2008a; Aydin et al., 2014) Annotations from literature: Organization of curriculum contents according to the sequence and continuity of learning within a given knowledge domain or subject over time. (Grossman, 1990). KCC2: I can handle well the progression of one specific chemistry core competency among different grade levels.
Adaption from the 2017 SHSCCS: For instance, one specific chemistry core competency (e.g., MIMA) is articulated across different grade levels.
– Horizontal curriculum (Grossman 1990; Park and Oliver, 2008a; Aydin et al., 2014) Annotations from literature: The scope and integration of curricular contents across different topics within a particular grade level. (Grossman, 1990). KCC3: I can handle well the coherence of different specific chemistry core competencies in a certain grade level.
Adaption from the 2017 SHSCCS: For instance, several chemistry core competencies (e.g., MIMA and ERM) are integrated in the same compulsory course.
Subscale 3: Knowledge of Assessment of Chemistry Learning (KACL)
– Dimensions of science learning to assess (Champagne, 1989; Tamir, 1988; Magnusson et al., 1999; Park and Oliver, 2008a) Annotations from literature: “This category refers to teachers’ knowledge of the aspects of students’ learning that are important to assess within a particular unit of study…the dimensions upon which teacher knowledge in this category is based are those of scientific literacy.” (Magnusson et al., 1999) KACL1: I can evaluate students’ chemistry learning through the perspective of chemistry core competencies.
Adaption from the 2017 SHSCCS: states the “Performance Requirements” of chemistry core competencies for each topic of chemistry learning.
– Methods of assessing science learning (Tamir, 1988; Magnusson et al., 1999; Park and Oliver, 2008a) Adaption from the 2017 SHSCCS: Summative assessments (NRC, 2006; 2014) such as activity performance rubrics, paper–pencil tests and portfolio are recommended for assessing students’ chemistry academic achievement in the 2017 SHSCCS. (p. 74). KACL2: I can employ different ways to diagnose and assess students’ chemistry academic achievement.
Adaption from the 2017 SHSCCS: Formative assessments (NRC, 2006; 2014) such as classroom questioning and feedback, homework and quiz are recommended for assessing students’ chemistry core competencies (p. 75). KACL3: I can employ different ways to diagnose and assess students’ chemistry core competencies.
Subscale 4: Knowledge of Instructional Strategies for Teaching Chemistry (KISTC)
– Topic-specific strategies: representations (Magnusson et al., 1999; Park and Oliver, 2008a) Annotations from literature: Topic-specific strategies of representations can be analogies, models, illustrations and examples. (Magnusson et al., 1999; Aydin et al., 2014). KISTC1: I can select topic-specific representation strategies properly to help students form chemistry core competencies.
Adaption from the 2017 SHSCCS: provides the section of “Instructional Strategies” for each topic.
– Topic-specific strategies: activities (Magnusson et al., 1999; Park and Oliver, 2008a) Annotations from literature: Topic-specific strategies of activities can be simulations, demonstrations, and experiments (Magnusson et al., 1999; Aydin et al., 2014). KISTC2: I can select topic-specific activities properly to help students form chemistry core competencies.
Adaption from 2017 SHSCCS: provides the section of “Learning Activities” for each topic.
– Subject-specific strategies (Magnusson et al., 1999; Park and Oliver, 2008a) Annotations from literature: Conceptual change (Tobin, Tippins and Gallard, 1994; Duit and Treagust, 2003), the 5E learning cycle (Bybee and Landes, 1990; Duran and Duran, 2004) and project-based learning (PBL) (Blumenfeld et al., 1991; Krajcik and Shin, 2014) are popularly used and examined as efficient teaching methods for promoting students’ science learning. KISTC3: I can select subject-specific teaching modes (e.g., conceptual change, 5E, PBL) properly to help students form chemistry core competencies.
Adaption from the 2017 SHSCCS: encourages teachers to enact the new standards by using a diversity of teaching methods (p. 76).
Subscale 5: Knowledge of Students’ Understanding in Chemistry (KSUC)
– Misconceptions (Magnusson et al., 1999; Park and Oliver, 2008a) Annotations from literature: “This category consists of teachers’ knowledge and beliefs about prerequisite knowledge for learning specific scientific knowledge.” (Magnusson et al., 1999, p. 104). “… students encounter when learning science involves topic areas in which their prior knowledge is contrary to the targeted scientific concepts. Knowledge of this type is typically referred to as misconceptions.” (Magnusson et al., 1999, p. 105). KSUC1: I know well about students’ misconceptions of chemistry knowledge and the previous level of chemistry core competencies.
Adaption from the 2017 SHSCCS: students’ misconceptions of chemistry knowledge and their prerequisite knowledge for a later topic should be used to promote students’ chemistry core competencies.
– Learning difficulties (Magnusson et al., 1999; Park and Oliver, 2008a) Annotations from literature: “This category refers to teachers’ knowledge of the science concepts or topics that students find difficult to learn.” (Magnusson et al., 1999; p. 105). KSUC2: I know well about the student learning difficulty of the current chemistry topics.
Adaption from the 2017 SHSCCS: To promote the development of students’ chemistry core competencies, the learning difficulty of chemistry concepts must be considered.
– Motivation and Interest (Park and Oliver, 2008a) Annotations from literature: Park and Oliver (2008a) emphasized teacher knowledge of students’ motivation and interests in the component of Knowledge of Students’ Understanding in Science. * KSUC3: I know well about student motivation and interest in chemistry learning.
Adaption from the 2017 SHSCCS: underlines the importance of students’ motivation and interest in chemistry learning by engaging them in making sense of real-world phenomena or in problem solving. (pp. 73–74).
– Development levels (Magnusson et al., 1999; Park and Oliver, 2008a) Annotations from literature: “This category consists of … as well as their understanding of variations in students’ approaches to learning as they relate to the development of knowledge within specific topic areas.” (Magnusson et al., 1999; p. 104). KSUC4: I know well about student development stages of chemistry core competencies.
Adaption from the 2017 SHSCCS: outlines the development stages (Level 1–4) of chemistry core competencies in “Appendix 1: The Levels of Chemistry Core Competencies”. (p. 89–92).
– Learning style (Magnusson et al., 1999; Park and Oliver, 2008a) Annotations from literature: “Teachers’ knowledge of variations in approaches to learning includes knowing how students of differing developmental or ability levels or different learning styles may vary in their approaches to learning as they relate to developing specific understandings.” (Magnusson et al., 1999; p. 104). KSUC5: I know well about the student learning style and the ability of chemistry learning.
Adaption from the 2017 SHSCCS: emphasizes the importance of students’ psychological cognitive development in promoting their chemistry core competencies in the teacher enactment and curriculum design guidelines. (p. 82).

Acknowledgements

This paper was based on work supported by several grants, including the National Education Sciences Planning Program (General Category) under Grant No. BHA170131, titled “Research on Promoting Teachers’ Competency-based Classroom Teaching Ability”, and Grants No. NENU-421/131005031 and No. NENU-17XQ008. Any opinions, findings, and conclusions or recommendations expressed in the materials are those of the authors and do not necessarily reflect the views of the funding agencies. The authors would also like to thank the editor and reviewers for their insightful suggestions on the research described in this paper.

References

  1. Alonzo A. C., Kobarg M. and Seidel T., (2012), Pedagogical content knowledge as reflected in teacher-student interactions: Analysis of two video cases, J. Res. Sci. Teach., 49(10), 1211–1239.
  2. American Educational Research Association, American Psychological Association and National Council on Measurement in Education, (2014), Standards for educational and psychological testing, Washington, DC: American Educational Research Association.
  3. Ananiadou K. and Claro M., (2009), “21st Century Skills and Competences for New Millennium Learners in OECD Countries”, OECD Education Working Papers, No. 41, OECD Publishing. DOI:10.1787/218525261154.
  4. Arbuckle J. L., (2013), IBM SPSS Amos 22 user's guide, Amos Development Corporation, SPSS Inc.
  5. Arzi H. and White R., (2007), Change in teachers’ knowledge of subject matter: A 17-year longitudinal study, Sci. Educ., 92(2), 221–251.
  6. Aydeniz M. and Kirbulut Z. D., (2014), Exploring challenges of assessing pre-service science teachers’ pedagogical content knowledge (PCK), Asia Pac. J. Teach. Educ., 42(2), 147–166.
  7. Aydin S. and Boz Y., (2013), The nature of integration among PCK components: A case study of two experienced chemistry teachers, Chem. Educ. Res. Pract., 14(4), 615–624.
  8. Aydin S., Friedrichsen M., Boz Y. and Hanuscin L., (2014), Examination of the topic-specific nature of pedagogical content knowledge in teaching electrochemical cells and nuclear reactions, Chem. Educ. Res. Pract., 15(4), 658–674.
  9. Barendsen E. and Henze I., (2019), Relating teacher PCK and teacher practice using classroom observation, Res. Sci. Educ., 49(5), 1141–1175.
  10. Barton-Arwood S., Morrow L., Lane K. and Jolivette K., (2005), Project IMPROVE: improving teachers’ ability to address students’ social needs, Educ. Treat. Children, 28(4): 430–443.
  11. Baumert J., Kunter M., Blum W., Brunner M., Voss T., Jordan A., … and Tsai Y. M., (2010), Teachers’ mathematical knowledge, cognitive activation in the classroom, and student progress, Am. Educ. Res. J., 47(1), 133–180.
  12. Berry A., Friedrichsen P. and Loughran J., (2015), Re-examining pedagogical content knowledge in science education, Routledge.
  13. Binkley M., Erstad O., Herman J., Raizen S., Ripley M., Miller-Ricci M. and Rumble M., (2012), Defining twenty-first century skills. In assessment and teaching of 21st century skills, Dordrecht: Springer, pp. 17–66.
  14. Blumenfeld P. C., Soloway E., Marx R. W., Krajcik J. S., Guzdial M. and Palincsar A., (1991), Motivating project-based learning: Sustaining the doing, supporting the learning, Educ. Psychol., 26(3–4), 369–398.
  15. Brown T. A., (2006), Confirmatory factor analysis for applied research, New York: Guilford Press.
  16. Bybee R. W. and Landes N. M., (1990), Science for life and living: An elementary school science program from biological sciences curriculum study, Am. Biol. Teach., 52(2), 92–98.
  17. Bybee R. W. and Fuchs B., (2006), Preparing the 21st century workforce: A new reform in science and technology education, J. Res. Sci. Teach., 43(4), 349–352.
  18. Campbell A. and Hu X., (2010), Professional experience reform in China: Key issues and challenges, Asia Pac. J. Teach. Educ., 38(3), 235–248.
  19. Carlson J. and Daehler K. R., (2019), The refined consensus model of pedagogical content knowledge in science education, in Hume A., Cooper R. and Borowski A. (ed.), Repositioning pedagogical content knowledge in teachers’ knowledge for teaching science, Singapore: Springer Singapore, pp. 77–92.
  20. Champagne A., (1989), Scientific literacy: A concept in search of a definition, in Champagne, Lovitts and Calinger (ed.), This year in school science 1989: Scientific literacy, Washington, D.C.: American Association for the Advancement of Science.
  21. Chai C. S., Koh J. H. L., Tsai C. C. and Tan L. L. W., (2011), Modeling primary school pre-service teachers’ technological pedagogical content knowledge (TPACK) for meaningful learning with information and communication technology (ICT), Comput. Educ., 57(1), 1184–1193.
  22. Chan K. K. H., Rollnick M. and Gess-Newsome, J., (2019), A grand rubric for measuring science teachers’ pedagogical content knowledge, Repositioning Pedagogical Content Knowledge in Teachers’ Knowledge for Teaching Science, Singapore: Springer, pp. 251–269.
  23. Cheramie G., Goodman B., Santos V. and Webb E., (2007), Teacher perceptions of psychological reports submitted for emotional disturbance eligibility, J. Educ. Human Dev., 1(2), 1–8.
  24. Chiang W. W. and Liu C. J., (2014), Scale of academic emotion in science education: Development and validation, Int. J. Sci. Educ., 36(6), 908–928.
  25. Chval K., Abell S., Pareja E., Musikul K. and Ritzka G., (2008), Science and mathematics teachers’ experiences, needs, and expectations regarding professional development, Eurasia J. Math. Sci. Technol. Educ., 4(1), 31–43.
  26. Cohen R. and Yarden A., (2009), Experienced junior-high-school teachers’ PCK in light of a curriculum change: ‘The cell is to be studied longitudinally’, Res. Sci. Educ., 39, 131–155.
  27. Damnjanovic A., (1999), Attitudinal differences between preservice and inservice teachers toward inquiry-based teaching, Sch. Sci. Math., 99(2), 71–76.
  28. DeVellis R. F., (2016), Scale development: Theory and applications, Sage Publications, vol. 26.
  29. Dong Y., Chai C. S., Sang G. Y., Koh J. H. L. and Tsai C. C., (2015), Exploring the profiles and interplays of pre-service and in-service teachers’ technological pedagogical content knowledge (TPACK) in China, J. Educ. Technol. Soc., 18(1), 158–169.
  30. Duit R. and Treagust D. F., (2003), Conceptual change: A powerful framework for improving science teaching and learning, Int. J. Sci. Educ., 25(6), 671–688.
  31. Duran L. B. and Duran E., (2004), The 5E instructional model: A learning cycle approach for inquiry-based science teaching, Sci. Educ. Rev., 3(2), 49–58.
  32. European Parliament, (2007), Key Competences for Lifelong Learning: A European Reference Framework. Annex of a Recommendation of the European Parliament and of the Council of 18 December 2006 on key competences for lifelong learning, Official Journal of the European Union, 30.12.2006/L394. Available online at: http://ec.europa.eu/dgs/education_culture/publ/pdf/ll-learning/keycomp_en.pdf, accessed 18 January 2010.
  33. Expert Committee on Basic Education Curriculum and Textbooks of Ministry of Education, P. R. China, (2018), in Fang Y. and Xu D., (ed), Interpretation of the Chemistry curriculum standards for senior high school (2017 Version), Higher Education Press.
  34. Fensham P. J., (2013), The science curriculum: the decline of expertise and the rise of bureaucratise, J. Curric. Stud., 45(2), 152–168.
  35. Ferketich S., (1991), Focus on psychometrics: aspects of item analysis, Res. Nurs. Health, 14(2), 165–168.
  36. Field A., (2009), Discovering statistics using SPSS, London: Sage.
  37. Fornell C. and Larcker D., (1981), Evaluating structural equation models with unobservable variables and measurement error, J. Mark Res., 18, 39–50.
  38. Gess-Newsome J., (2015), A model of teacher professional knowledge and skill including PCK: results of the thinking from the PCK summit, in Berry A., Friedrichsen P. J. and Loughran J. J., (ed.), Re-examining pedagogical content knowledge in science education, New York, NY: Routledge, pp. 28–42.
  39. Gess-Newsome J., Taylor J. A., Carlson J., Gardner A. L., Wilson C. D. and Stuhlsatz M. A., (2019), Teacher pedagogical content knowledge, practice, and student achievement, Int. J. Sci. Educ., 41(7), 944–963.
  40. Ghazvini S. D., (2011), Relationships between academic self-concept and academic performance in high school students, Procedia. Soc. Behav. Sci., 15, 1034–1039.
  41. Gogol K., Brunner M., Goetz T., Martin R., Ugen S., Keller U., Fischbach A. and Preckel F., (2014), “My questionnaire is too long!” The assessments of motivational-affective constructs with three-item and single-item measures, Contemp. Educ. Psychol., 39(3), 188–205.
  42. Griffin P. and Care E. (ed.), (2014), Assessment and teaching of 21st century skills: Methods and approaches, Springer.
  43. Großschedl J., Welter V. and Harms U., (2019), A new instrument for measuring pre-service biology teachers’ pedagogical content knowledge: The PCK-IBI, J. Res. Sci. Teach., 56(4), 402–439.
  44. Grossman P. L., (1990), The making of a teacher: Teacher knowledge and teacher education. Teachers College Press, Teachers College, Columbia University.
  45. Hair J. F., Black W. C., Babin B. J. and Anderson R. E., (2010), Multivariate data analysis, (7th edn), New York: Prentice Hall.
  46. Hardre P. L., Davis K. A. and Sullivan D. W., (2008), Measuring teacher perceptions of the “how” and “why” of student motivation, Educ. Res. Eval., 14(2), 155–179.
  47. Hao Z., (2013), Curriculum Reform in Rural Areas in Mainland China, Curriculum Innovations in Changing Societies, Rotterdam: Sense Publishers, pp. 519–531.
  48. Hayes N., Lee S., DiStefano R., O'Connor D. and Seitz C., (2016), Measuring science instructional practice: A survey tool for the age of NGSS, J. Sci. Teach. Educ., 27(2), 137–164.
  49. Heller J. I., Daehler K. R., Wong N., Shinohara M. and Miratrix L. W., (2012), Differential effects of three professional development models on teacher knowledge and student achievement in elementary science, J. Res. Sci. Teach., 49(3), 333–362.
  50. Henze I., van Driel J. H. and Verloop N., (2008), Development of experienced science teachers’ pedagogical content knowledge of models of the solar system and the universe, Int. J. Sci. Educ., 30(10), 1321–1342. DOI:10.1080/09500690802187017.
  51. Hill H. C., Rowan B. and Ball D. L., (2005), Effects of teachers’ mathematical knowledge for teaching on student achievement, Am. Educ. Res. J., 42(2), 371–406.
  52. Hu L. T. and Bentler P. M., (1999), Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives, Struct. Equ. Modeling, 6(1), 1–55.
  53. Hume A., Cooper R. and Borowski A., (ed.), (2019), Repositioning pedagogical content knowledge in teachers’ knowledge for teaching science, Springer.
  54. Jüttner M., Boone W., Park S. and Neuhaus B. J., (2013), Development and use of a test instrument to measure biology teachers’ content knowledge (CK) and pedagogical content knowledge (PCK), Educ. Assess. Eval. Acc., 25(1), 45–67.
  55. Kavanoz S., Yüksel H. G. and Özcan E., (2015), Pre-service teachers’ self-efficacy perceptions on web pedagogical content knowledge, Comput. Educ., 85, 94–101.
  56. Keller M. M., Neumann K. and Fischer H. E., (2017), The impact of physics teachers’ pedagogical content knowledge and motivation on students’ achievement and interest, J. Res. Sci. Teach., 54(5), 586–614.
  57. Khakbaz A., (2016), Mathematics university teachers’ perception of pedagogical content knowledge (PCK), Int. J. Math. Educ. Sci. Technol., 47(2), 185–196.
  58. Kleickmann T., Richter D., Kunter M., Elsner J., Besser M., Krauss S., et al., (2013), Teachers’ content knowledge and pedagogical content knowledge: The role of structural differences in teacher education, J. Teach. Educ., 64(1), 90–106.
  59. Kline R. B., (2015), Principles and practice of structural equation modeling, 4th edn, Guilford publications.
  60. Koh J. H. L., Chai C. S. and Tsai C. C., (2013), Examining practicing teachers’ perceptions of technological pedagogical content knowledge (TPACK) pathways: A structural equation modeling approach, Instr. Sci., 41(4), 793–809.
  61. Koh J. H. L., Chai C. S. and Tsai C. C., (2014), Demographic factors, TPACK constructs, and teachers’ perceptions of constructivist-oriented TPACK, J. Educ. Technol. Soc., 17(1), 185–196.
  62. Krajcik J. S. and Shin N., (2014), Project-based learning, in Sawyer R. K. (ed.), The Cambridge handbook of learning sciences, 2nd edn, New York: Cambridge.
  63. Kuntze S., Siller H. S. and Vogl C., (2013), Teachers’ self-perceptions of their pedagogical content knowledge related to modelling–an empirical study with Austrian teachers, Teaching mathematical modelling: Connecting to research and practice, Dordrecht: Springer, pp. 317–326.
  64. Kulgemeyer C. and Riese J., (2018), From professional knowledge to professional performance: The impact of CK and PCK on teaching quality in explaining situations, J. Res. Sci. Teach., 55(10), 1393–1418.
  65. Lavigne G. L. and Vallerand R. J., (2010), The dynamic processes of influence between contextual and situational motivation: A test of the hierarchical model in a science education setting, J. Appl. Soc. Psychol., 40(9), 2343–2359.
  66. Law W. W., (2014), Understanding China's curriculum reform for the 21st century, J. Curric. Stud., 46(3), 332–360.
  67. Liang J. C. and Tsai C. C., (2008), Internet self-efficacy and preferences toward constructivist Inter- net-based learning environments: A study of pre-school teachers in Taiwan, J. Educ. Technol. Soc., 11(1), 226–237.
  68. Liu S. H., Tsai H. C. and Huang, Y. T., (2015), Collaborative professional development of mentor teachers and pre-service teachers in relation to technology integration, J. Educ. Technol. Soc., 18(3), 161–172.
  69. Lv L., Wang F., Ma Y., Clarke A. and Collins J., (2016), Exploring Chinese teachers’ commitment to being a cooperating teacher in a university-government-school initiative for rural practicum placements, Asia Pac. J. Educ., 36, 34–55.
  70. Magnusson S., Krajcik J. and Borko H., (1999), “Nature, sources, and development of pedagogical content knowledge for science teaching”, in Gess-Newsome J. and Lederman N.G. (ed.), Examining pedagogical content knowledge: PCK and science education, Dordrecht, The Netherlands: Kluwer, pp. 95–132.
  71. MaKinster J. G., Boone W. J. and Trautmann N. M., (2010), Development of an instrument to assess science teachers’ perceived technological pedagogical content knowledge, paper presented at the Annual International Conference of the National Association for Research in Science Teaching, Philadelphia, PA. Accessed March: http://www.narst.org/annualconference/NARST2010_abstracts.pdf.
  72. Martin A. J., (2006), The relationship between teachers’ perceptions of student motivation and engagement and teachers’ enjoyment of and confidence in teaching, Asia Pacific J. Teach. Educ., 34(1), 73–93.
  73. Meschede N., Fiebranz A., Möller K. and Steffensky M., (2017), Teachers’ professional vision, pedagogical content knowledge and beliefs: On its relation and differences between pre-service and in-service teachers, Teach. Teach. Educ., 66, 158–170.
  74. Ministry of Education, P. R. China, (2003), Notice about the issuance of ‘Course Plan for Senior High School (Trial Draft)’ and Subjects Curriculum Standards for Senior High School (Trial Draft)”. Retrieved from http://old.moe.gov.cn//publicfiles/business/htmlfiles/moe/s8001/201404/167349.html.
  75. Ministry of Education, P. R. China, (2007), Notification of normal universities affiliated to the Ministry of Education in enrolling free tuition teacher education undergraduates. http://www.moe.gov.cn/srcsite/A15/moe_776/s3258/200705/t20070518_79885.html.
  76. Ministry of Education, P. R. China, (2010), Notification of the Ministry of Education and the Ministry of Finance on implementing secondary and primary in-service teachers national training programs. Retrieved from http://www.moe.gov.cn/srcsite/A10/s7034/201006/t20100630_146071.html.
  77. Ministry of Education, P. R. China, (2014), Opinions of the Ministry of Education on implementing excellent teacher professional development programs. Retrieved from http://www.moe.gov.cn/srcsite/A10/s7011/201408/t20140819_174307.html.
  78. Ministry of Education, P. R. China, (2017), Chemistry curriculum standards for senior high school, Beijing: People's Education Press.
  79. Ministry of Science and Technology and Publicity Department, CCCPC, (2016), Notice of the Ministry of Science and Technology and the Publicity Department, CCCPC on Issuing ‘The Chinese Citizens Science Quality Benchmark’. Retrieved from http://www.gov.cn/gongbao/content/2016/content_5103155.htm.
  80. National Research Council, (2006), Systems for State Science Assessment, Washington, DC: The National Academies Press DOI:10.17226/11312.
  81. National Research Council, (2011), Assessing 21st century skills: Summary of a workshop, National Academies Press.
  82. National Research Council, (2014), Developing assessments for the next generation science standards, Washington, DC: The National Academies Press DOI:10.17226/18409.
  83. Nilsson P., (2008), Teaching for understanding: the complex nature of pedagogical content knowledge in pre-service education, Int. J. Sci. Educ., 30(10), 1281–1299.
  84. Nilsson P. and van Driel J., (2011), How will we understand what we teach? Primary student teachers’ perceptions of their subject matter knowledge and attitudes towards physics, Res. Sci. Educ., 41, 541–560.
  85. OECD, (2005), The definition and selection of key competencies: Executive summary. Retrieved from http://www.deseco.admin.ch/2005.dskcexecutivesummary.pdf.
  86. Padilla K., Ponce-de-León A. M., Rembado F. M. and Garritz A., (2008), Undergraduate professors’ pedagogical content knowledge: The case of ‘amount of substance’, Int. J. Sci. Educ., 30(10), 1389–1404.
  87. Park S., (2019), Reconciliation Between the Refined Consensus Model of PCK and Extant PCK Models for Advancing PCK Research in Science, Repositioning Pedagogical Content Knowledge in Teachers’ Knowledge for Teaching Science, Singapore: Springer, pp. 117–128.
  88. Park S. and Chen Y. C., (2012), Mapping out the integration of the components of pedagogical content knowledge (PCK): Examples from high school biology classrooms, J. Res. Sci. Teach., 49(7), 922–941.
  89. Park S. and Oliver J. S., (2008a), Revisiting the conceptualization of pedagogical content knowledge (PCK): PCK as a conceptual tool to understand teachers as professionals, Res. Sci. Educ., 38(3), 261–284.
  90. Park S. and Oliver J. S., (2008b), National Board Certification (NBC) as a catalyst for teachers’ learning about teaching: The effects of the NBC process on candidate teachers’ PCK development, J. Res. Sci. Teach., 45(7), 812–834.
  91. Park S. and Suh J. K., (2019), The PCK map approach to capturing the complexity of enacted PCK (ePCK) and pedagogical reasoning in science teaching, Repositioning pedagogical content knowledge in teachers’ knowledge for teaching science, Singapore: Springer, pp. 185–197.
  92. Park S., Jang J. Y., Chen Y. C. and Jung J., (2011), Is pedagogical content knowledge (PCK) necessary for reformed science teaching?: Evidence from an empirical study, Res. Sci. Educ., 41(2), 245–260.
  93. Partnership for 21st Century Skills, (2009), Framework for 21st century learning, Retrieved from http://www.p21.org/documents/P21_Framework.pdf.
  94. Peng W. J., McNess E., Thomas S., Wu X. R., Zhang C., Li J. Z. and Tian H. S., (2014), Emerging perceptions of teacher quality and teacher development in China, Int. J. Educ. Dev., 34, 77–89.
  95. Ryder J. and Banner I., (2011), Multiple aims in the development of a major reform of the national curriculum for science in England, Int. J. Sci. Educ., 33(5), 709–725.
  96. Saltan F. and Arslan K., (2017), A comparison of in-service and pre-service teachers’ technological pedagogical content knowledge self-confidence, Cogent Educ., 4(1), 1311501.
  97. Sargent T. C. and Hannum E., (2009), Doing more with less: Teacher professional learning communities in resource-constrained primary schools in rural China, J. Teach. Educ., 60(3), 258–276.
  98. Schmidt D. A., Baran E., Thompson A. D., Mishra P., Koehler M. J. and Shin T. S., (2009), Technological pedagogical content knowledge (TPACK): The development and validation of an assessment instrument for preservice teachers, J. Res. Technol. Educ., 42(2), 123–149.
  99. Schreiber J. B., Nora A., Stage F. K., Barlow E. A. and King J., (2006), Reporting structural equation modeling and confirmatory factor analysis results: A review, J. Educ. Res., 99(6), 323–338.
  100. Shavelson R. J., Hubner J. J. and Stanton G. C., (1976), Self-concept: Validation of construct interpretations, Rev. Educ. Res., 46(3), 407–441.
  101. Shulman L. S., (1986), Those who understand: Knowledge growth in teaching, Educ. Res., 15(2), 4–14.
  102. Shulman L., (1987), Knowledge and teaching: Foundations of the new reform, Harv. Educ. Rev., 57(1), 1–23.
  103. Soebari T. and Aldridge J. M., (2016), Investigating the differential effectiveness of a teacher professional development programme for rural and urban classrooms in Indonesia, Teach. Dev., 20(5), 701–722.
  104. Spaulding W., (2007), Comparison of preservice and in-service teachers’ attitudes and perceived abilities toward integrating technology into the classroom, PhD thesis, The University of Memphis. Retrieved from https://www.learntechlib.org/p/127253/.
  105. Taber K. S., (2018), The use of Cronbach's alpha when developing and reporting research instruments in science education, Res. Sci. Educ., 48(6), 1273–1296.
  106. Tamir P., (1988), Subject matter and related pedagogical knowledge in teacher education, Teach. Teach. Educ., 4(2), 99–110.
  107. Teo T., (2010), Examining the influence of subjective norm and facilitating conditions on the intention to use technology among pre-service teachers: A structural equation modeling of an extended technology acceptance model, Asia Pacific Educ. Rev., 11(2), 253–262 DOI:10.1007/s12564-009-9066-4.
  108. The State Council of P. R. China, (2006a), The outline of the national mid- and long-term plan for scientific and technological development (2006–2020). Retrieved from http://www.gov.cn/jrzg/2006-02/09/content_183787.htm.
  109. The State Council of P. R. China, (2006b), The outline of action plan for scientific literacy for all (2006-2010-2020). Retrieved from http://www.gov.cn/jrzg/2006-03/20/content_231610.htm.
  110. Tobin K., Tippins D. and Gallard A. J., (1994), Research on instructional strategies for teaching science, in Gabel D. L., (ed.), Handbook of Research on Science Teaching and Learning, New York: Macmillan Publishing Company.
  111. Toh K. A., Ho B. T., Riley J. P. and Hoh Y. K., (2006), Meeting the highly qualified teachers challenge, Educ. Res. Policy Pract., 5(3), 187–194.
  112. Trilling B. and Fadel C., (2009), 21st century skills: Learning for life in our times, John Wiley and Sons.
  113. Trochim W. M. and Donnelly J. P., (2006), The research methods knowledge base, 3rd edn, Cincinnati, OH: Atomic Dog.
  114. Turkan S. and Liu O. L., (2012), Differential performance by English language learners on an inquiry-based science assessment, Int. J. Sci. Educ., 34(15), 2343–2369 DOI:10.1080/09500693.2012.705046.
  115. van Driel J. H. and Berry A., (2012), Teacher professional development focusing on pedagogical content knowledge, Educ. Res., 41(1), 26–28.
  116. Velayutham S., Aldridge J. and Fraser B., (2011), Development and validation of an instrument to measure students’ motivation and self-regulation in science learning, Int. J. Sci. Educ., 33(15), 2159–2179.
  117. Wang D., (2011), The dilemma of time: Student-centered teaching in the rural classroom in China, Teach. Teach. Educ., 27(1), 157–164.
  118. Wahyudi W. and Treagust D., (2004), An Investigation of Science Teaching Practices in Indonesian Rural Secondary Schools, Res. Sci. Educ., 34(4), 455–474.
  119. Wei B., (2019), Reconstructing a School Chemistry Curriculum in the Era of Core Competencies: A Case from China, J. Chem. Educ., 96(7), 1359–1366.
  120. Wiersma W. and Jurs S. G., (1990), Educational measurement and testing, 2nd edn, Boston, MA: Allyn and Bacon.
  121. Yao J. X. and Guo Y. Y., (2018), Core competences and scientific literacy: the recent reform of the school science curriculum in China, Int. J. Sci. Educ., 40(15), 1913–1933.
  122. Ye J., Zhu X. and Lo L. N., (2019), Reform of teacher education in China: a survey of policies for systemic change, Teach. Teach., 25(7), 1–25.
  123. Zhao X., (2015), Competition and compassion in Chinese secondary education, New York, NY, USA: Palgrave Macmillan.
  124. Zheng C., Fu L. and He P., (2014), Development of an instrument for assessing the effectiveness of chemistry classroom teaching, J. Sci. Educ. Technol., 23(2), 267–279.

This journal is © The Royal Society of Chemistry 2021