Partial least squares structural equation modeling of chemistry attitude in introductory college chemistry

James Ross *, Leslie Nuñez and Chinh Chu Lai
Department of Chemistry, East Los Angeles College, Monterey Park, California 91754, USA. E-mail: rossj2@elac.edu

Received 3rd December 2017, Accepted 10th July 2018

First published on 10th July 2018


Abstract

Students’ decisions to enter or persist in STEM courses are linked with their affective domain. The influence of factors impacting students’ affective domain in introductory college chemistry classes, such as attitude, is often overlooked by instructors, who instead focus on students’ mathematical abilities as sole predictors of academic achievement. The current academic barrier to enrollment in introductory college chemistry classes is typically a passing grade in a mathematics prerequisite class. However, mathematical ability is only a piece of the puzzle in predicting preparedness for college chemistry. Herein, students’ attitude toward the subject of chemistry was measured using the original Attitude toward the Subject of Chemistry Inventory (ASCI). Partial least squares structural equation modeling (PLS-SEM) was used to chart and monitor the development of students’ attitude toward the subject of chemistry during an introductory college chemistry course. Results from PLS-SEM support a 3-factor (intellectual accessibility, emotional satisfaction, and interest and utility) structure, which could signal the distinct cognitive, affective, and behavioral components of attitude, according to its theoretical tripartite framework. Evidence of a low-involvement hierarchy of attitude effect is also presented herein. This study provides a pathway for instructors to identify at-risk students, exhibiting low affective characteristics, early in a course so that academic interventions are feasible. The results presented here have implications for the design and implementation of teaching strategies geared toward optimizing student achievement in introductory college chemistry.


Introduction

There are numerous studies identifying factors responsible for students’ achievement in science (Steinkamp and Maehr, 1983; Turner and Lindsay, 2003; Seery, 2009; Çakici et al., 2011). A significant positive correlation exists between mathematical ability and achievement in chemistry (Bunce and Hutchinson, 1993; Spencer, 1996; Wagner et al., 2002; Ewing et al., 2005; Lewis and Lewis, 2007). For example, Spencer (1996) studied the relationships between mathematical SAT scores and grades earned by students in eight consecutive years of first and second semester general chemistry courses. The results showed that students who earned higher scores on the SAT tended to earn higher grades in both chemistry courses. This link between mathematics and chemistry is well established. Consequently, mathematics classes are invariably used as prerequisite courses for chemistry. Scores on SAT and ACT subject tests in mathematics are often used by education researchers as standardized measures of mathematical ability. However, there are many non-traditional students, predominantly served by community colleges, for whom participation in the SAT or ACT subject tests in mathematics is not the norm. These institutions and their students are left seeking alternative metrics of likely success in chemistry.

Assessment of students’ logical thinking ability has been shown to mimic the correlation between mathematical ability and chemistry achievement. Bunce and Hutchinson (1993) identified the Group Assessment of Logical Thinking (GALT) test as a predictor of academic success in college chemistry. They found that scores on the GALT test can be used as an alternative to mathematical SAT scores to predict success in chemistry because both the SAT and GALT measure students’ ability to do mathematical manipulations. For students who are non-science majors, GALT scores may even serve as a better predictor of success than mathematical SAT scores because non-science major chemistry courses often include less mathematical manipulation than a chemistry course designed for science majors. Lewis and Lewis (2007) employed the Test of Logical Thinking (TOLT) over a three year period to identify at-risk individuals among a population of over 3000 students entering general chemistry. The TOLT was able to correctly predict at-risk students with 70.5% accuracy compared with 72.5% accuracy for the SAT and was shown to satisfactorily predict at-risk students for whom SAT scores are unavailable.

Tripartite attitudinal model

Attitude is a tripartite composite structure consisting of affective (A), behavioral (B), and cognitive (C) components – the ABC model of attitude (Rosenberg and Hovland, 1960; Wagner and Sherwood, 1969; Aiken, 1980). Eagly and Chaiken (1993) define attitude as, “a psychological tendency that is expressed by evaluating a particular entity with some degree of favor or disfavor.” Critical to this definition of attitude is the evaluation process, which treats the affective, behavioral and cognitive components of attitude as concomitants. This definition attests that an attitude does not exist prior to a person's evaluative response to an attitudinal object on the affective, behavioral and cognitive levels. The affective component concerns students’ feelings and emotions about an attitude object (e.g. I love chemistry). The behavioral component represents a student's learned predisposition toward the attitude object (e.g. I read chemistry books for fun). The cognitive component of attitude reflects students’ beliefs about the attitude object stemming from prior knowledge (e.g. Learning about chemistry will help me to enter a career in medicine).

Student attitudes toward science

Chemistry educators study factors impacting the meaningful learning of chemistry and seek ways to maximize and catalyze the learning happening inside and outside of the classroom (Richter-Egger et al., 2010; Cook et al., 2013; Galloway and Bretz, 2015; Chan and Bauer, 2016). The importance of students’ attitude toward learning mathematics and how it impacts the students’ cognitive domain in science is well known (McLeod, 1992). While controlling for the cognitive aspect of attitude, research has shown the positive role played by the affective component of students’ attitude toward the conceptual development of science (Nieswandt, 2007; Lewis et al., 2009).

Hierarchy of attitude effects

The ABC model of attitude consists of three distinct orders of influence that portray how invested the individual is in the attitudinal object (Lavidge and Steiner, 1961; Ray, 1973; Solomon, 1997; Leone et al., 1999). For example, the high-involvement or standard learning hierarchy proceeds in the order CAB and is followed when an individual is highly invested in decision making regarding the attitudinal object. The experiential learning hierarchy follows the sequence ABC and occurs when an individual's attitudinal response is dictated by an initial emotional reaction. Finally, the low-involvement hierarchy proceeds in the order CBA and is manifest in an individual who develops feelings regarding a decision only after it has been made (Krugman, 1965).

It is possible not to observe each distinct component of attitude during attitude research, and hence not to observe any hierarchy of attitude effect. For example, the three components of attitude can converge into one component when the inter-component correlations are very high. This can occur where reporting of attitudes is exclusively verbal, and where the attitudinal object is not physically present during the reporting of attitude (Breckler, 1984). It is also possible that the components of attitude are completely independent of each other (Zajonc, 1980). Since a hierarchy of attitude effect specifies the causal direction between the three sub-components of attitude, all three sub-components must be independent of each other and must be measurable in the data to enable an analysis of a hierarchy of attitude effect. Extant attitude theories like the theory of reasoned action (TRA) and the theory of planned behavior (TPB) consider affect, behavior and cognition to be somewhat positively correlated, whether or not correlations are observed in the data (Fishbein and Ajzen, 1975; Ajzen, 1991).

The CAB hierarchy is considered high-involvement due to its order of influence and its behavioral culmination, which testifies that the individual places a high personal value on the attitudinal object in question. An individual following the CAB hierarchy regarding an attitudinal object invests time and effort researching and accumulating personal knowledge about the attitudinal object (cognition). This research is used to formulate feelings toward the attitudinal object (affect), which are then translated into either a covert behavioral decision (behavioral intention) or an overt behavioral action (behavior) (Lavidge and Steiner, 1961; Fishbein and Ajzen, 1975; Ray, 1982). Since behavior can be considered the least reversible and hence most committal component of the tripartite structure of attitude, the placement of behavior (or behavioral intention) as the culminating component of attitude solidifies the CAB hierarchy as high-involvement to the operating individual. Students exhibiting the CAB hierarchy would be the ideal outcome since it would indicate that they associate a high value with chemistry in their overall education. It would follow that these students will probably be committed to meaningful learning and will be engaged in their chemistry class.

The ABC hierarchy could be considered to cater to an individual's need for self-gratification with regards to an attitudinal object. The antecedence of affect in the ABC hierarchy testifies to the importance of an emotional reward for the individual regarding the attitudinal object (Solomon, 1997). It is unlikely that the ABC hierarchy would be displayed by students in an education setting where chemistry is the attitudinal object. On the contrary, the ABC hierarchy would be more likely to surface where the attitudinal object was a personal gift for the individual, such as a new computer or a new car.

The CBA hierarchy is low-involvement due to the removal of behavior (or behavioral intention) as the culminating act of the attitude structure and its replacement with the affective component. The CBA hierarchy signifies that the individual does not place a high personal value on the attitudinal object in question (Krugman, 1965; Ray, 1982). An individual following the CBA hierarchy when considering an attitudinal object initially invests time conducting the appropriate research and acquiring relevant knowledge regarding the attitudinal object. However, the individual then prematurely makes a covert behavioral decision or overtly acts in an arguably ill-advised manner. It is only upon later reflection on the premature behavioral impulse that the individual begins to develop feelings about the behavior toward the attitudinal object. Students’ adoption of the CBA hierarchy would be considered the least desirable outcome and should spark an ethical need for a remedial academic attitudinal intervention. It could follow that these students will probably be unengaged with meaningful learning or perhaps simply do not see the need or benefit to learn chemistry.

Students’ affective domain in chemistry

Despite many studies emphasizing the positive correlation between students’ affective domain and their learning, science education typically stresses the cognitive aspect of learning and does not address the affective domain (Paavola et al., 2004). For instance, student learning outcomes (SLO) typically follow from Bloom's work on students’ cognitive domain and focus solely on gaining knowledge without including the significance of fostering positive attitudes toward the material being learned (Bloom et al., 1956). SLO statements seldom consider Bloom's work on students’ affective domain (Krathwohl et al., 1964). Without greater emphasis on students’ affective domain in SLO statements, efforts to entice students toward STEM majors may come too late to be significant, arriving only once students are already deciding on their college majors. Students’ attitude toward STEM education not only influences their decision to pursue higher STEM education but has been reported to impact students’ decision making regarding STEM education as early as elementary school (DeWitt et al., 2014).

Evaluation of students’ affective domain to garner insights into their learning in college chemistry classes is an underused but potentially profitable educational strategy (Chan and Bauer, 2014; Kahveci and Orgill, 2015). Numerous studies have probed the different aspects of students’ affective domain, such as attitudes (Cukrowska et al., 1999; Osborne et al., 2003; Salta and Tzougraki, 2004; Grove and Bretz, 2007; Barbera et al., 2008; Cheung, 2009; Heredia and Lewis, 2012; Else-Quest et al., 2013), self-concept (Wilkins, 2004; Bauer, 2005; Lewis et al., 2009; Nielsen and Yezierski, 2015), and self-efficacy (Bandura, 1997; Kan and Akbas, 2006; Uzuntiryaki and Aydin, 2009; Ferrell and Barbera, 2015; Vishnumolakala et al., 2017). In general, a positive affective domain leads to higher academic achievement in science than a negative affective domain.

Measuring students’ attitude toward chemistry

A relatively recent and popular instrument for studying students’ affective domain is the Attitude toward the Subject of Chemistry Inventory, ASCI (Bauer, 2008; Xu and Lewis, 2011; Brandriet et al., 2013; Brown et al., 2014; Kahveci, 2015; Vishnumolakala et al., 2017). Students’ attitude toward the subject of chemistry is a metric that can be reliably measured, validated, and used to predict achievement in chemistry, thereby affording a temporal window for implementing academic interventions (Osborne et al., 2003; Cook et al., 2013; Vishnumolakala et al., 2017). However, the results from research of students’ attitude toward chemistry are relatively unknown by the larger chemistry community and are therefore an untapped resource in need of development and more widespread dissemination (Berg, 2005).

A potential distraction from the dissemination of research findings concerning students’ attitude toward chemistry is the internal inconsistency within the literature, arguably owing to overreaching conclusions made from research using blunt instruments that fail to adequately disaggregate the composite attitude metric (Adams and Wieman, 2011). For example, student attitude is sometimes erroneously considered as a single construct, and its tripartite composite structure (ABC) is ignored. There is a call for standardized rigor in educational research that uses survey instruments, especially the uniform confirmation of the reliability and validity of data collected by instruments prior to the drawing of conclusions (Arjoon et al., 2013).

Students at two year and four year colleges possibly maintain different attitudes toward chemistry, and there is no logical reason to think that their attitudinal measurements should be the same. Institutional funding, location, entrance policies and reputation are all potential influencers of attitude (Malcom and Feder, 2016). Indeed, the results from an educational research instrument administered at one four year institution do not necessarily predict the outcomes when the instrument is used in another four year institution. For example, Xu and Lewis (2011) found that the ASCI instrument in their hands and with their students performed differently when compared with Bauer's original work (2008).

A present gap in the literature is the lack of research looking at students’ attitude toward chemistry in two year institutions. Perhaps this is due to concerns about the small class sizes often prevailing in two year institutions and the problems of performing structural equation modeling on small sample sizes; other reasons for the lack of attention at the two year level could of course prevail. Another current gap in the literature concerning attitudes toward chemistry is the lack of research considering the entire complement of the tripartite attitude model. Studies have looked at cognition and affect without behavior (or behavioral intention) and have justified the absence of behavior (or behavioral intention) on grounds of a conflict with course grade that was also measured (Xu and Lewis, 2011). By not including the full complement of cognition, affect and behavior (covert or overt) in research of students’ attitude toward chemistry, researchers have thus far been unable to investigate the hierarchy of attitude effects. Furthermore, the absence of the behavioral component from these attitude studies could introduce a psychometric bias to the results obtained.

Modeling students’ attitude toward chemistry

Covariance-based structural equation modeling (CB-SEM) is the most common method of confirmatory factor analysis (CFA) in science education research. CFA permits confirmation of a proposed factorial interpretation of a survey instrument (Greenbaum and Dedrick, 1998; Kember and Leung, 1998). Performing CFA is a logical next step following exploratory factor analysis (EFA), and although used much less frequently to date, researchers are starting to provide CFA data with their analyses (Xu and Lewis, 2011; Brandriet et al., 2013; Xu et al., 2013; Ferrell and Barbera, 2015; Ferrell et al., 2016; Villafañe and Lewis, 2016; Vishnumolakala et al., 2016; Liu et al., 2017).

CFA provides a variety of fit indices that are used to critique a proposed structural equation model. For example, small, non-significant chi-square (χ2) values, comparative fit index (CFI) scores greater than 0.95, and standardized root-mean-square residual (SRMR) scores less than 0.08 are all considered to be markers of good-fitting structural models of data (Hu and Bentler, 1999). Guidelines for the implementation of CB-SEM in educational research support the relevance of reporting these fit indices (Schreiber et al., 2006; Schreiber, 2008; Kline, 2015).

CFA has become one of the most commonly used statistical procedures in applied research because it is well equipped to address the types of questions that researchers often ask (Brown, 2014). CFA permits the testing of competing structural models of data, helps researchers navigate toward increasingly parsimonious interpretations of results, and has been used to produce nomological networks of variables affecting the learning of science, albeit cautiously (Blalock, 1986).

Structural equation modeling

Partial least squares structural equation modeling (PLS-SEM) is prediction focused and models path coefficients and construct loadings using a sequence of least squares regressions and weighted sums during the convergent sequencing of a multi-stage algorithm (Hair et al., 2011). Notably, the operational objective of PLS-SEM is contrary to that of CB-SEM. Whereas CB-SEM is a multivariate data analysis algorithmic tool that uses maximum likelihood approaches to reproduce the observed covariance matrix underpinning a specific theoretical model (Rigdon, 1998), PLS-SEM is a multistage data analysis algorithmic tool that aims to maximize the explained variance between latent variable constructs in a hypothesized theoretical model (Lohmöller, 1989).
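To make the multi-stage algorithm concrete, the following sketch implements the core loop of weighted sums and least squares regressions for a toy two-construct reflective model with synthetic data. It is a minimal illustration of the general PLS idea under stated assumptions (Mode A outer estimation, centroid inner weighting), not the XLSTAT routine used later in this study, and all names in it are illustrative.

# Minimal, illustrative PLS-SEM core (Mode A outer estimation, centroid inner
# weighting) on synthetic data: two reflective constructs, three indicators
# each, and a single structural path construct1 -> construct2. A sketch of the
# iterative "weighted sums + least squares" idea, not the XLSTAT implementation.
import numpy as np

rng = np.random.default_rng(0)
n = 200
xi1 = rng.normal(size=n)                                  # latent score, construct 1
xi2 = 0.6 * xi1 + rng.normal(scale=0.8, size=n)           # latent score, construct 2
X1 = np.column_stack([0.8 * xi1 + rng.normal(scale=0.6, size=n) for _ in range(3)])
X2 = np.column_stack([0.8 * xi2 + rng.normal(scale=0.6, size=n) for _ in range(3)])

def standardize(a):
    return (a - a.mean(axis=0)) / a.std(axis=0)

blocks = [standardize(X1), standardize(X2)]               # indicator blocks
adjacency = np.array([[0, 1], [1, 0]])                    # inner (structural) links
weights = [np.ones(b.shape[1]) for b in blocks]           # initial outer weights

for _ in range(100):
    # Outer step: construct scores are weighted sums (composites) of indicators.
    scores = np.column_stack([standardize(b @ w) for b, w in zip(blocks, weights)])
    # Inner step (centroid scheme): proxy = sign-weighted sum of adjacent scores.
    corr = np.corrcoef(scores, rowvar=False)
    inner = scores @ (np.sign(corr) * adjacency)
    # Outer update (Mode A): weights = correlations of indicators with the proxy.
    new_weights = [b.T @ standardize(inner[:, j]) / n for j, b in enumerate(blocks)]
    if all(np.allclose(w, nw, atol=1e-6) for w, nw in zip(weights, new_weights)):
        weights = new_weights
        break
    weights = new_weights

scores = np.column_stack([standardize(b @ w) for b, w in zip(blocks, weights)])
beta = np.corrcoef(scores[:, 0], scores[:, 1])[0, 1]      # single-predictor path
loadings = [np.corrcoef(np.column_stack([b, scores[:, [j]]]), rowvar=False)[:-1, -1]
            for j, b in enumerate(blocks)]
print("path coefficient (construct1 -> construct2):", round(beta, 3))
print("outer loadings:", [np.round(l, 3) for l in loadings])

Because the indicators carry measurement noise, the recovered path coefficient is somewhat attenuated relative to the 0.6 structural relationship used to generate the data.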

In PLS-SEM, latent constructs are considered as either exogenous or endogenous during the evaluation of structural models, as is the case in CB-SEM. Exogenous constructs are latent variables that do not have any structural path input from other latent constructs, whereas endogenous constructs are target latent variables that are explained by other latent constructs in the structural model (Hair et al., 2016). In the reflective measurement mode, where arrows point from the latent constructs to their items, each item carries an error term; in the formative measurement mode, where arrows point from the items to the latent constructs, item-level error terms are not modeled. In PLS-SEM such error terms play a diminished role because of the inherent composite formulation of the constructs. PLS-SEM softens the assumption in CB-SEM of a common factor causation to explain all covariation between sets of indicators. Instead, PLS-SEM represents constructs with proxies, which are weighted composites of indicator variables (Rigdon, 2012).

Contrary to CB-SEM, PLS-SEM has no accepted global measure of goodness-of-fit index (Cudeck, 1989; Hu and Bentler, 1999; Hair et al., 2011; Henseler and Sarstedt, 2013). Therefore, the concept of “fit index” does not translate in the same way to PLS-SEM as it does to CB-SEM (Chin, 2010). Key criteria for assessing structural models with PLS-SEM include the significance of path coefficients, β, the size of R2 values for endogenous constructs, the f2 and q2 effect sizes, and the predictive relevance values, Q2 (Stone, 1974; Geisser, 1975). The predictive relevance of endogenous latent constructs in PLS-SEM, as registered by their Q2 values, is obtained following the blindfolding procedure. Blindfolding systematically deletes data points at a given omission distance (D) between 5 and 12 and then predicts their value using the remaining data. For example, setting D equal to 5 would cycle the blindfolding procedure five times until every data point has been omitted and predicted. Differences between omitted data and predicted data are used to calculate Q2 values (Hair et al., 2016). This is why PLS-SEM is said to be predictive in nature.
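The omission-and-prediction cycle behind Q2 can be illustrated with a deliberately simplified sketch: one endogenous indicator is blindfolded with omission distance D, re-predicted from an exogenous construct score, and compared against a mean-only benchmark. PLS-SEM software applies the same cycle block-wise using the full structural model, so the code below (with made-up data and names) shows only the mechanics of the calculation.

# Schematic of the blindfolding procedure behind Q2 (cross-validated redundancy).
# Simplified to one endogenous indicator predicted from one exogenous construct
# score; treat this as an illustration of the mechanics, not of XLSTAT's exact
# computation.
import numpy as np

rng = np.random.default_rng(1)
n, D = 85, 5                                  # sample size and omission distance
x = rng.normal(size=n)                        # exogenous construct score
y = 0.7 * x + rng.normal(scale=0.7, size=n)   # endogenous indicator

sse = 0.0                                     # squared errors of model predictions
sso = 0.0                                     # squared errors of a mean-only benchmark
for offset in range(D):                       # D rounds: each case omitted exactly once
    omit = np.arange(offset, n, D)
    keep = np.setdiff1d(np.arange(n), omit)
    slope, intercept = np.polyfit(x[keep], y[keep], 1)   # refit on retained data
    pred = slope * x[omit] + intercept                   # predict the omitted cases
    sse += np.sum((y[omit] - pred) ** 2)
    sso += np.sum((y[omit] - y[keep].mean()) ** 2)

q2 = 1 - sse / sso                            # Q2 > 0: predictive relevance
print("Q2 =", round(q2, 3))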

Whereas neither PLS-SEM nor CB-SEM is generally considered to be superior to the other, under certain circumstances PLS-SEM can often furnish more usable and hence useful results compared with the results from CB-SEM from comparable input data (Hair et al., 2011). For example, PLS-SEM is insensitive to the skewness and kurtosis of its input data and can return robust and dependable model results from data lacking a normal distribution (non-parametric). Regardless, researchers often report skewness and kurtosis results even when using PLS-SEM. In contrast, CB-SEM requires normally distributed data to produce dependable model results. PLS-SEM can accommodate single-item latent constructs whereas this is not possible in CB-SEM. PLS-SEM has greater statistical power compared to CB-SEM, meaning that PLS-SEM is more likely to correctly identify significant structural relationships than CB-SEM. Furthermore, PLS-SEM can handle greater model complexity compared to CB-SEM (Hair et al., 2016). PLS-SEM is the method of choice for exploratory research analyses that aim to extend existing structural theoretical models and provide more robust statistical results with smaller sample sizes, rendering PLS-SEM suited to structural equation modeling of students’ attitude toward chemistry in two year institutions (Wold, 1982; Reinartz et al., 2009; Hair et al., 2014).

Present study

Our objective in this study was to profile the affective domain of our introductory chemistry students. Owing to the possibility of misinterpretation of survey items, and potential loss of attitudinal information (vide infra), the original ASCI instrument was chosen to meet our research objective rather than the more commonly adopted ASCIv2 instrument (Xu and Lewis, 2011). PLS-SEM was adopted for the structural modeling of our students’ attitude data due to its preferential applicability to smaller sample sizes and known theoretical models.

Expecting factor loadings from an attitude measuring instrument such as ASCI, developed in a four year institution housing traditional students, to predict the attitudinal outcomes of minority and non-traditional students surveyed using the same instrument in two year colleges is not obviously reasonable. Presuming that positive attitudes toward chemistry correlate with achievement in chemistry, and with an ultimate view to augment student achievement in introductory chemistry in our institution, the following research questions were considered:

(1) Does the original ASCI instrument, developed and administered in traditional four year institutions serving predominantly non-minority students, reveal any meaningful attitude constructs when administered to the predominantly minority students at an urban two year college?

(2) Can PLS-SEM offer a meaningful interpretation of the students’ attitude data, in line with the tripartite attitude model?

(3) Can path analysis using PLS-SEM reveal a hierarchy of attitude effect in our students’ data?

Methods

The instrument

Students’ attitude toward the subject of chemistry was measured using the original ASCI survey instrument developed by Bauer (2008). The ASCI survey is a semantic differential instrument consisting of 20 items grouped into 5 subscales and requires participants to choose between two polar adjectives on a 7-point scale. The instrument garners the participant's attitude toward the object chemistry. In Bauer's hands, the ASCI instrument resolved the 20 items into 3 latent constructs, namely interest and utility, anxiety, and intellectual accessibility. The remaining 2 subscales, fear and emotional satisfaction, were labelled as “items”. The original ASCI instrument was developed and shown to furnish reliable and valid student attitude data at different four year institutions in the US and elsewhere and with different student populations (Brandriet et al., 2011; Brandriet et al., 2013; Brown et al., 2014; Kahveci, 2015).

Participants

The ASCI instrument was administered in paper format to 118 students enrolled across 6 separate Introduction to General Chemistry classes with 6 different instructors during the first and fifth week of the five week Winter 2016 intersession at a large two year institution in southern California. During laboratory time the instructor administering the survey packet announced to the students the reason for requesting student participation in completing the survey questions and the broad goals of the research. The instructor in each class verbally confirmed that student participation was voluntary and was being requested, not demanded, and that students’ identities would be kept confidential. These activities were filed with and approved by the Office of Institutional Effectiveness and Advancement.

The number of usable completed surveys from weeks 1 and 5 was 98 and 85, respectively. To be considered usable for week 1, surveys had to be completed by the same student twice during week 1. Twenty surveys were disregarded from week 1 because the students did not complete the survey on two separate occasions. To be considered usable for week 5, surveys had to be completed by students who had previously taken the survey during week 1. A further 13 surveys were lost by week 5 through student attrition. Student participants in week 1 surveys were 50.0% female, 50.0% male, and 76.5%, 20.4% and 3.1% were classified as Hispanic/Latino, Asian/Pacific Islander, and other, respectively. Students taking week 5 surveys were 52.9% female, 47.1% male, and 63.5%, 29.4% and 7.1% were classified as Hispanic/Latino, Asian/Pacific Islander, and other, respectively.

Data collection and analysis

Pencil-and-paper copies of the ASCI survey were distributed to each student during the first week of class and were used throughout this study. The ASCI survey was administered a second time, unannounced during the first week. This was done to measure the test-retest reliability of the surveys with our students and the initial temporal stability of students’ responses. A third survey was given during the last week of class to observe any changes in the students’ attitude between the beginning and the end of the class.

All students’ survey data was manually input into Excel spreadsheets (Bauer, 2008). EFA of the internal structure of the ASCI instrument in our hands was performed with XLSTAT 2017.5. The original literature reports used principal components analysis with varimax rotation (Bauer, 2008; Xu and Lewis, 2011). More recent literature reports used principal axis factoring and direct oblimin rotation (Brown, 2014; Kahveci, 2015). Both rotation methods were evaluated, and since only trivial differences were observed between the varimax and oblimin rotations, the former was used in this study.

In deciding how many factors to extract from ASCI, the eigenvalue-greater-than-one rule was adopted, along with inspection of the scree plot, and careful consideration of the interpretability of results against the ABC model of attitude (Rosenberg and Hovland, 1960). Predictive structural modeling (PLS-SEM) of students’ data was performed using XLSTAT 2017.5. The statistical significance of structural paths was established using 5000 bootstrap samples. The predictive relevance of each construct was obtained from the blindfolding procedure (Hair et al., 2016).
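As an aside, the eigenvalue-greater-than-one (Kaiser) rule and the scree inspection referred to above amount to examining the eigenvalues of the item correlation matrix; the short sketch below demonstrates this on synthetic data with two underlying factors and is not the XLSTAT output itself.

# Illustration of the Kaiser (eigenvalue > 1) rule and scree values used to
# decide how many factors to extract; generic sketch on simulated responses.
import numpy as np

rng = np.random.default_rng(2)
n, items_per_factor = 85, 4
f1, f2 = rng.normal(size=(2, n))
X = np.column_stack(
    [0.8 * f1 + rng.normal(scale=0.6, size=n) for _ in range(items_per_factor)] +
    [0.8 * f2 + rng.normal(scale=0.6, size=n) for _ in range(items_per_factor)]
)

R = np.corrcoef(X, rowvar=False)               # item correlation matrix
eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]
n_factors = int(np.sum(eigenvalues > 1))       # Kaiser criterion
explained = eigenvalues / eigenvalues.sum()

print("eigenvalues:", np.round(eigenvalues, 2))          # scree: look for the elbow
print("factors with eigenvalue > 1:", n_factors)
print("variance explained by retained factors:", round(explained[:n_factors].sum(), 2))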

Results and discussion

Evaluation of ASCI data

ASCI survey responses were only considered complete for students who took the ASCI survey twice during the first week of class, to ensure stable student responses. In light of the ASCI instrument performing somewhat differently in different institutions, we first assessed the reliability of the ASCI survey results obtained from our introductory chemistry students using subscale internal consistency (Cronbach's alpha) and test–retest consistency (Bauer, 2008; Brown et al., 2014; Kahveci, 2015).

Analysis of item scores showed good distribution normality. Skewness and kurtosis values were within ±2, except for item 2 surveyed during week 1 (Appendix 1). As shown in Appendix 2, Cronbach's alpha values computed from our data for the ASCI constructs defined in the original study (excluding fear, which was only a single-item construct) were at or above the satisfactory threshold value of 0.70, ranging from 0.70 to 0.78, and resemble literature values (Davidshofer and Murphy, 2005; Sijtsma, 2009). Cronbach's alpha values ranging from 0.78 to 0.83 were originally reported by Bauer (2008), while values ranging from 0.71 to 0.82 were reported by Xu and Lewis (2011). Test–retest reliability correlations for the ASCI survey ranged from 0.55 to 0.72 and resemble the correlation values originally reported by Bauer (2008), which ranged from 0.64 to 0.74 (Appendix 2). This signals that students’ attitude responses during the first week of class were adequately stable.
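For readers who wish to reproduce these reliability checks on their own data, the sketch below shows the standard Cronbach's alpha computation for one subscale and a test–retest correlation of subscale scores; the item responses here are simulated, not the ASCI data.

# Generic computation of the two reliability checks reported above: Cronbach's
# alpha for one subscale and the test-retest correlation of subscale scores.
import numpy as np

def cronbach_alpha(items):
    """items: (n_students, k_items) array of scores for one subscale."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(3)
n = 98
true_attitude = rng.normal(size=n)
week1_items = np.column_stack(
    [4 + true_attitude + rng.normal(scale=0.8, size=n) for _ in range(3)])
week1_retest_items = week1_items + rng.normal(scale=0.6, size=week1_items.shape)

alpha = cronbach_alpha(week1_items)
retest_r = np.corrcoef(week1_items.mean(axis=1),
                       week1_retest_items.mean(axis=1))[0, 1]
print("Cronbach's alpha:", round(alpha, 2))     # >= 0.70 considered satisfactory
print("test-retest r:", round(retest_r, 2))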

The suitability of survey results for factor analysis is often determined prior to performing EFA. The Kaiser–Meyer–Olkin (KMO) test of sampling adequacy measures the proportion of variance among survey items that might be common variance: KMO = (∑rij2)/[(∑rij2) + (∑uij2)], summed over i ≠ j, where rij are the correlation coefficients and uij are the partial correlation coefficients. When the partial correlations are small (∑uij2 is minimized), most of the variance is shared and the data are suitable for FA; the statistic takes a value between 0 and 1, with values >0.70 indicating an acceptable KMO value for FA and structure detection (Kaiser, 1974). The KMO measure of sampling adequacy for the 20-item ASCI administered during weeks 1 and 5 was 0.801 and 0.856, respectively, signaling the suitability of our data for FA.
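The KMO statistic can be computed directly from the item correlation matrix, with the partial correlations obtained from its inverse. The sketch below (simulated responses; the reported 0.801 and 0.856 values came from XLSTAT) illustrates the formula given above.

# KMO from the formula above: squared correlations versus squared partial
# correlations, the latter taken from the inverse of the correlation matrix.
import numpy as np

def kmo(X):
    R = np.corrcoef(X, rowvar=False)            # correlation matrix, r_ij
    R_inv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(R_inv), np.diag(R_inv)))
    partial = -R_inv / d                        # partial correlations, u_ij
    off = ~np.eye(R.shape[0], dtype=bool)       # i != j only
    r2 = np.sum(R[off] ** 2)
    u2 = np.sum(partial[off] ** 2)
    return r2 / (r2 + u2)

# Example with simulated item responses (values > 0.70 support factorability).
rng = np.random.default_rng(4)
common = rng.normal(size=(85, 1))
X = 0.8 * common + rng.normal(scale=0.6, size=(85, 6))
print("KMO =", round(kmo(X), 2))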

Bartlett's test of sphericity addresses the hypothesis that the correlation matrix is an identity matrix. If this hypothesis cannot be rejected (p > 0.05), the items are essentially uncorrelated and further analysis for detecting structure is pointless. Bartlett's test of sphericity for our data was significant (week 1: χ2(209) = 1069.297, p < 0.0001; week 5: χ2(209) = 1143.849, p < 0.0001), allowing rejection of the null hypothesis and signifying that our students’ attitude data is correlated. Together, the KMO and Bartlett's test results imply that the ASCI data from our students is suitable for factor analysis and that factor analysis of the student data is feasible.
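For completeness, a sketch of the standard Bartlett statistic is given below; the data are again simulated rather than the ASCI responses, and the degrees of freedom follow the usual p(p − 1)/2 form for p items.

# Bartlett's test of sphericity: does the item correlation matrix differ from an
# identity matrix? Sketch of the standard statistic on simulated responses.
import numpy as np
from scipy import stats

def bartlett_sphericity(X):
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return chi2, df, stats.chi2.sf(chi2, df)

rng = np.random.default_rng(5)
common = rng.normal(size=(85, 1))
X = 0.8 * common + rng.normal(scale=0.6, size=(85, 6))
chi2, df, p_value = bartlett_sphericity(X)
print(f"chi2({df}) = {chi2:.1f}, p = {p_value:.4g}")   # p < 0.05: proceed with FA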

Factor analysis

To make a comparison between our EFA data from ASCI and the work of other groups, EFA data from week 5 was used since we were comparing it to week 4 data recorded by Xu and Lewis (2011) and to end of semester data recorded by Bauer (2008). EFA of ASCI data from week 5 was run with 4 factors extracted to mimic and allow comparison with the EFA of other authors (Bauer, 2008; Xu and Lewis, 2011). However, our EFA revealed only 2 factors that satisfied the eigenvalue criterion (Yong and Pearce, 2013). This analysis was confirmed by inspection of the scree plot, which leveled off at the third component. These two factors explain 55% of the variance in the data. Referencing the original construct labels from Bauer's work (2008), one of the factors we identified was a combination of interest and utility, anxiety, and emotional satisfaction, while the other factor was intellectual accessibility (Appendix 3). The common loading of anxiety measuring items on other factors has been observed by others (Xu and Lewis, 2011; Brown et al., 2014). Since anxiety is not a construct of the ABC attitude model under investigation here it was not further considered part of the mixed emotional satisfaction and interest and utility construct. Our decision not to consider anxiety items in further analyses is further supported by Xu and Lewis’ work (2011) wherein they concluded that the anxiety and emotional satisfaction subscales are strongly correlated and therefore redundant. We observed a similar correlation between the anxiety and emotional satisfaction subscales (r = −0.701, p < 0.00001, N = 85).

In our hands, the intellectual accessibility construct loaded for the most part onto a single factor; this construct seems to be captured in its entirety, without overlapping other constructs, across a variety of students in different institutions (Bauer, 2008; Xu and Lewis, 2011). In general, however, the ASCI instrument performed differently with our students compared with Bauer's students, but performed similarly to Xu and Lewis’ students and Brown's students (Xu and Lewis, 2011; Brown et al., 2014). For example, our data revealed factors containing a mixture of items from Bauer's interest and utility, emotional satisfaction and anxiety constructs, as reported in Brown's and Xu and Lewis’ work, whereas this was not observed by Bauer; instead, anxiety was a cleanly resolved factor (Bauer, 2008). We speculate that the cultural and ethnic differences between our students and the students originally used to validate the instrument contribute to the handling subtleties of the ASCI instrument (Xu et al., 2015). The alternative factor, values, which was a construct molded by Brown et al. (2014) from a combination of items belonging to the intellectual accessibility, interest and utility, anxiety, and emotional satisfaction constructs, was also not observed as an independent construct in our students’ data.

The unexpected overlap and common factor loading of items belonging to the interest and utility and emotional satisfaction constructs was previously observed in Bauer's original work, by Xu and Lewis, and by Brown et al., and could indicate that all our students’ interest and utility measurements are being forged by emotional sentiments (Bauer, 2008; Xu and Lewis, 2011; Brown et al., 2014). To our knowledge, these authors did not pursue the possibility that overlapping factor loadings and significant correlations between factors could enable the interpretation of a hierarchy of attitude effect (Beatty and Kahle, 1988; Riketta, 2008).

The coupling of emotional satisfaction items with the interest and utility items could indicate the presence of a composite latent structure. As far as we can tell, Bauer did not consider this specific possibility in his original work (2008) and instead concluded only that emotional satisfaction was not a totally independent measure. Xu and Lewis (2011), in their work to modify the ASCI and produce a shorter and potentially psychometrically superior survey, ASCIv2, dismissed the interest and utility factor, claiming that it is undesirable for one subscale to have more than one concept. However, an alternative possibility, not considered by Xu and Lewis, is that Bauer's original label (interest and utility) could be replaced by a label invoking just one concept, such as behavioral intention. Since the original construct interest and utility has already been demonstrated by others to produce reliable and valid data, all that would be needed to justify its relabeling as behavioral intention is evidence that its items could be interpreted as a behavioral intention construct. Some authors contend that the behavioral component of attitude is not easily captured with accuracy with a self-reporting instrument, like ASCI (Xu and Lewis, 2011). Perhaps this is why Bauer chose not to attempt to measure behavior or behavioral intention using the ASCI instrument.

According to Ajzen's theory of planned behavior (TPB), behavioral intentions capture the motivational influencers of a behavior (Ajzen, 1985). The decision to represent behavioral intention by the items in Bauer's interest and utility construct follows from the realization that Bauer used motivational descriptors as the semantic opposites in this construct. For example, identifying an attitudinal object as “worthwhile” or “beneficial” would reasonably be expected to result in a motivational incentive to consider a behavioral action (Ajzen, 1991). Additional support for the decision to represent behavioral intention by the items in the interest and utility construct comes from Eccles et al.'s expectancy-value model of achievement motivation, which links an individual's behavioral intention to subjective task value. The four motivational components of subjective task value outlined in Eccles et al.'s model are importance, cost, and, notably, interest and usefulness (Eccles et al., 2007).

PLS-SEM

Descriptive statistics. To disaggregate the two overlapping latent constructs found in EFA, namely interest and utility and emotional satisfaction, and to investigate the hierarchy of attitude effects in our students’ data, PLS-SEM analyses were performed on the assumption that our data would align with the tripartite attitudinal model (Rosenberg and Hovland, 1960). The potential for our students to interpret emotional satisfaction items like “comfortable” and interest and utility items like “good” in a similar vein is realized when considering the vernacular saying, “feels good.” Since meaningful application of PLS-SEM must be guided by theory and not merely patterns in data (Kline, 2015), only items loading onto constructs in the tripartite attitudinal model were considered further.

Following EFA of the ASCI survey results, intellectual accessibility items 1, 4 and 5 were assigned to cognition, emotional satisfaction items 7, 11 and 14 were considered representative of affect, and interest and utility items 2, 3, 6, 12 and 15 were assigned to behavioral intention. Intellectual accessibility items 9 and 10 were dropped from further analysis due to cautionary loading concerns found by others (Xu and Lewis, 2011; Xu et al., 2015). Emotional satisfaction item 17 was dropped from further analysis due to loading concerns from our EFA. Removal of one or two items representing a latent construct is not expected to significantly affect the analysis of the structural equation model since the construct is still represented by the remaining items (Hair et al., 2016).

Average scores for the constituent items considered for the tripartite structural model ranged from 3.388 to 5.816, and from 3.129 to 5.682, for weeks 1 and 5, respectively (1–7 range, 4 midpoint). Higher scores indicate more positive attitudes (Table 1). The highest and lowest scores were found for ASCI items 2 and 4 in both week 1 and week 5 surveys, indicating that students find chemistry beneficial and complicated during the course. ASCI items 1, 4, and 5 scored below the midpoint value of 4, suggesting that students find chemistry intellectually inaccessible. ASCI items 7, 11, and 14 scored above 4 in value, suggesting that students find chemistry emotionally satisfying. ASCI items 2, 3, 6, 12, and 15 scored above 4 in value (except item 3 in week 1) and were the highest scores, suggesting that students find chemistry interesting and useful above all else. It is encouraging that our students value the usefulness of the subject of chemistry; this bodes well for efforts to increase the number of STEM majors at our institution.

Table 1 Descriptive statistics of select ASCI data used in model analysis
Construct | Item number | Item (semantic poles) | Mean (SD)a | Mean (SD)b
Cognition | 1c | Hard–Easy | 3.388 (1.352) | 3.624 (1.543)
Cognition | 4 | Complicated–Simple | 3.388 (1.469) | 3.129 (1.454)
Cognition | 5 | Confusing–Clear | 3.878 (1.594) | 3.612 (1.423)
Behavioral intention | 2 | Worthless–Beneficial | 5.816 (1.409) | 5.682 (1.071)
Behavioral intention | 3c | Boring–Exciting | 4.571 (1.741) | 5.000 (1.447)
Behavioral intention | 6c | Bad–Good | 5.337 (1.631) | 5.224 (1.199)
Behavioral intention | 12c | Dull–Interesting | 5.602 (1.504) | 5.612 (1.273)
Behavioral intention | 15c | Useless–Worthwhile | 5.429 (1.392) | 5.118 (1.418)
Affect | 7c | Frustrating–Satisfying | 4.214 (1.508) | 4.529 (1.615)
Affect | 11c | Unpleasant–Pleasant | 4.490 (1.203) | 4.729 (1.409)
Affect | 14c | Uncomfortable–Comfortable | 4.235 (1.208) | 4.376 (1.431)
a Week 1 (N = 98). b Week 5 (N = 85). c Item reversed here to aid interpretation; negatively stated item is reversed before averaging.


Brandriet et al. (2013) have shown that superior structural models of students’ attitude are found later in a course, rather than at the beginning, when attitudes have had a chance to establish and settle. Although this finding is to be expected and can be of use, it could be argued that surveying students’ attitude toward chemistry near the end of a course offers no time for meaningful intervention, if that is the reason for measuring students’ attitude in the first place (Abdullah et al., 2009; Shaw, 2012; Koponen et al., 2012; Vishnumolakala et al., 2017). However, in pursuit of the best model, we compared structural models from data taken at the beginning and end of the course.

Structural modeling. PLS-SEM analysis was performed on students’ data from weeks 1 and 5 to evaluate the hierarchy of attitude effects (Fig. 1). Large ovals are latent variables; the endogenous latent variables are accompanied by their adjusted R2 values. Standardized path coefficients (β) are shown on the arrows connecting latent variables (*p < 0.05; **p < 0.001). The rectangles represent the manifest variables. The 3-factor structures considered in Fig. 1 could signal the distinct cognitive, affective, and behavioral components of attitude, aligning with the theoretical tripartite framework (Rosenberg and Hovland, 1960).
Fig. 1 PLS-SEM structural models from select ASCI data recorded during week 1 ((1) CAB hierarchy; (3) ABC hierarchy; (5) CBA hierarchy; N = 98) and week 5 ((2) CAB hierarchy; (4) ABC hierarchy; (6) CBA hierarchy; N = 85). *p < 0.05; **p < 0.001.
Reliability. Evaluation of any data model involves internal consistency reliability analysis of the outer (measurement) model, i.e. whether the manifest variables adequately represent their latent constructs. Composite reliability ranged from 0.860 to 0.915 for the three latent variables being considered, whereas Cronbach's alpha values ranged from 0.749 to 0.879 (Table 2). Since all measures of internal consistency reliability were greater than the minimum recommended value of 0.70, these results demonstrate that the relationships between latent and manifest variables under consideration are reliable and meaningful, and the items are sufficient indicators of their latent constructs. With PLS-SEM, composite reliability tends to overestimate internal consistency reliability whereas Cronbach's alpha, a more conservative measurement, tends to underestimate a construct's reliability. Manifest variable reliability scores (loading squared) were all greater than the 0.50 recommended minimum value (item 14 was the exception, with a reliability score of 0.493 – see Table 3).
Table 2 Internal consistency reliability: composite reliability and Cronbach's alphaa
Latent variable Internal consistency reliability
Composite reliability Cronbach's alpha
a Week 5 (N = 85).
Cognition 0.890 0.815
Behavioral intention 0.915 0.879
Affect 0.860 0.749
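For reference, the composite reliability values in Table 2 follow from the standardized outer loadings reported in Table 3; the sketch below reproduces the affect value to within rounding of the published loadings.

# Composite reliability from standardized outer loadings:
# rho_c = (sum of loadings)^2 / [(sum of loadings)^2 + sum(1 - loading^2)].
# Worked with the affect loadings from Table 3; small differences from the
# reported 0.860 reflect rounding of the published loadings.
import numpy as np

def composite_reliability(loadings):
    loadings = np.asarray(loadings)
    captured = loadings.sum() ** 2              # variance captured by the construct
    error = np.sum(1 - loadings ** 2)           # indicator error variances
    return captured / (captured + error)

affect_loadings = [0.896, 0.838, 0.702]         # items 7, 11, 14 (Table 3)
print("composite reliability (affect):",
      round(composite_reliability(affect_loadings), 3))   # approximately 0.86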


Table 3 Outer model loadings and cross-loadings: convergent validity analysisa
Item Latent variable
Cognition Behavioral intention Affect
a Week 5 (N = 85). Loadings are in bold. b Negatively stated.
1b 0.911 0.260 0.545
4 0.776 −0.028 0.256
5 0.816 0.164 0.403
2 0.144 0.726 0.527
3b 0.201 0.864 0.689
6b 0.088 0.821 0.601
12b 0.227 0.834 0.700
15b 0.200 0.848 0.638
7b 0.452 0.691 0.896
11b 0.471 0.653 0.838
14b 0.386 0.556 0.702


Validity. Table 3 shows that loadings for the manifest variables (items) being considered ranged from 0.702 to 0.911, exceeding the 0.70 recommended minimum value. All latent variable loadings were greater than their cross-loadings. The average variance extracted (AVE) for each latent variable being considered was greater than the minimum threshold value of 0.50 (Hair et al., 2011). AVE values ranged from 0.666 to 0.699 (see Table 4). These results indicate convergent validity for each latent variable construct under consideration.
Table 4 PLS-SEM discriminant validity: Fornell–Larcker criteriona
Latent variable | Cognition | Behavioral intention | Affect | AVE
Cognition | 1.000 | | | 0.699
Behavioral intention | 0.049 | 1.000 | | 0.673
Affect | 0.284 | 0.606 | 1.000 | 0.666
a Week 5 (N = 85). Off-diagonal entries are squared correlations between the latent variables.


The Fornell–Larcker criterion (FLC) for discriminant validity was inspected (Table 4). The FLC is used to address and prevent multicollinearity issues. The FLC is a common way to evaluate the degree of shared variance between latent constructs in a structural model. The FLC is adhered to when the average variance extracted (AVE) value of each latent construct exceeds the constructs’ highest squared correlation with other latent constructs (Fornell and Larcker, 1981). All AVE values were higher than the squared correlations with other latent variables and ranged from 0.666 to 0.699. Together, these results demonstrate that discriminant validity was achieved for the latent constructs under consideration.
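The AVE and Fornell–Larcker checks can be reproduced from the published loadings (Table 3) and the squared inter-construct correlations (Table 4); the sketch below does so for all three constructs.

# AVE from the Table 3 loadings and the Fornell-Larcker check against the
# squared inter-construct correlations of Table 4. The affect AVE of 0.666
# reproduces the published value and exceeds affect's largest squared
# correlation with another construct (0.606), so the criterion is met.
import numpy as np

def ave(loadings):
    loadings = np.asarray(loadings)
    return float(np.mean(loadings ** 2))

loadings = {
    "cognition": [0.911, 0.776, 0.816],
    "behavioral intention": [0.726, 0.864, 0.821, 0.834, 0.848],
    "affect": [0.896, 0.838, 0.702],
}
squared_correlations = {                        # off-diagonal entries of Table 4
    ("cognition", "behavioral intention"): 0.049,
    ("cognition", "affect"): 0.284,
    ("behavioral intention", "affect"): 0.606,
}

for name, lam in loadings.items():
    largest_r2 = max(v for k, v in squared_correlations.items() if name in k)
    print(f"{name}: AVE = {ave(lam):.3f}, "
          f"largest shared variance = {largest_r2:.3f}, "
          f"Fornell-Larcker met: {ave(lam) > largest_r2}")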

Predictive relevance. A sound PLS structural model should have relatively high adjusted R2 values. Adjusted R2 values represent the ability of the sample data to predict the endogenous latent variables of the structural model. Adjusted R2 values for endogenous latent variables of 0.75, 0.50, or 0.25 in structural PLS-SEM models are considered substantial, moderate, or weak, respectively. As anticipated, all hierarchy models were superior in week 5 compared with week 1, as indicated by larger adjusted R2 values for endogenous latent constructs in week 5 structural models (Brandriet et al., 2013). Thus, only week 5 models were considered when evaluating the best structural model. Although PLS-SEM has no accepted global measure of model fit, measures of in-sample predictive relevance (adjusted R2 and corresponding effect size f2) and out-of-sample predictive relevance (Q2 and corresponding effect size q2) have previously been used to evaluate PLS-SEM structural models and are recommended in the literature (Sharma and Kim, 2012; Sarstedt et al., 2014; Hair et al., 2016). These predictive metrics were applied in this study to analyze the hierarchy effect in structural models 2, 4 and 6 (Fig. 1).

Hierarchy analysis

The largest (0.741) and the smallest (0.041) adjusted R2 values were found in model 6 (low-involvement CBA hierarchy) for latent constructs affect and behavioral intention, respectively. This implies that approximately 74% (substantial) of the affect construct is predicted from a combination of cognition and behavioral intention constructs. Most of this predictive ability came from behavioral intention, which recorded a large effect size (f2 = (adj R2_included − adj R2_excluded)/(1 − adj R2_included) = 1.792) (Hair et al., 2016). However, only 4% (probably negligible) of the behavioral intention construct is predicted from the cognition construct (Table 5). This assertion was confirmed by a small f2 effect size of only 0.051. Whereas the f2 effect size of cognition on affect was large (0.536) in model 6, the f2 effect size of behavioral intention on affect was much larger (1.792). This indicates that behavioral intention is more important than cognition for the in-sample prediction of affect. An additional model (model 7) was constructed by removing the path connecting cognition and behavioral intention (Fig. 2). This more parsimonious low-involvement hierarchy model preserved the adjusted R2 value of the affect construct (0.738) and maintained its large f2 effect size (1.830) observed in model 6. Model 7 could be a better representative model of a low-involvement CBA hierarchy effect compared with model 6.
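The f2 (and analogous q2) effect size is simply the change in explained variance when one predictor construct is dropped, scaled by the unexplained variance of the full model. In the sketch below, the excluded-model value of 0.277 is back-solved from model 6's reported numbers purely for illustration; it is not a quantity reported in Table 5.

# f2 (or q2) effect size: change in adjusted R2 (or Q2) when a predictor
# construct is excluded, scaled by the unexplained variance of the full model.
# Values of 0.02, 0.15 and 0.35 mark small, medium and large effects.
def effect_size(included, excluded):
    """f2 when fed adjusted R2 values; q2 when fed Q2 values."""
    return (included - excluded) / (1 - included)

# Model 6: adjusted R2 of affect is 0.741 with behavioral intention included;
# an f2 of ~1.79 implies the R2 would fall to roughly 0.277 without it
# (0.277 is back-solved here for illustration, not a reported value).
print(round(effect_size(0.741, 0.277), 2))   # ~1.79, a large effect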
Table 5 In-sample predictive relevance of structural modelsa
Model | Construct | Adjusted R2 | f2 on affect | f2 on behavioral intention | f2 on cognition
2 | Affect | 0.284 | | 1.792 |
2 | Behavioral intention | 0.655 | | |
2 | Cognition | | 0.397 | 0.155 |
4 | Affect | | | 1.540 | 0.536
4 | Behavioral intention | 0.606 | | | 0.155
4 | Cognition | 0.373 | | |
6 | Affect | 0.741 | | |
6 | Behavioral intention | 0.041 | 1.792 | |
6 | Cognition | | 0.536 | 0.051 |
7 | Affect | 0.738 | | |
7 | Behavioral intention | | 1.830 | |
7 | Cognition | | 0.527 | |
8 | Affect | | | 1.635 |
8 | Behavioral intention | 0.648 | | |
8 | Cognition | | | 0.126 |
a Week 5 (N = 85). Large adjusted R2 values suggest that a model has in-sample predictive relevance for an endogenous construct. f2 is the effect size of adjusted R2 and is listed under the endogenous construct it acts on. Values for f2 of 0.02, 0.15 and 0.35 are viewed as small, medium and large effects, respectively.



Fig. 2 Alternative PLS-SEM structural models from ASCI data recorded during week 5 ((7) CBA hierarchy; (8) CAB hierarchy; N = 85). *p < 0.05; **p < 0.001.

The high-involvement CAB hierarchy effect is represented in models 2 and 8, and the ABC hierarchy effect in model 4. The high-involvement hierarchy model 2 shows that approximately 66% of students’ behavioral intention is predicted by a combination of their cognition and affect (Fig. 1). The largest f2 effect size in model 2 is between affect and behavioral intention (1.792), as was the case in model 6 (Table 5). Model 8 was also constructed, whereby the path connecting cognition and affect was eliminated (Fig. 2). This more parsimonious high-involvement hierarchy model maintained most of the adjusted R2 value of the behavioral intention construct (0.648) and the large f2 effect size (1.635) found in model 2 (Table 5). Model 8 could be a better representative model of a high-involvement CAB hierarchy effect compared with model 2. Inspection of the models presented in Fig. 2 shows that cognition and behavioral intention explain the variance in affect (74%, model 7) better than cognition and affect can explain the variance in behavioral intention (65%, model 8).

In the ABC hierarchy model 4 (Fig. 1), only 37% of the terminal attitude construct, cognition, is predicted from the combination of students’ affect and behavioral intention. On theory grounds, it is unlikely that the ABC hierarchy model is an appropriate model for our students’ attitude toward the subject of chemistry data, and its reduced in-sample predictive qualities (adjusted R2) support this assertion (Table 5).

Bootstrapping was used to assess the significance of the path coefficients (β) in each model (Table 6). Although most path coefficients are significant at the p < 0.05 level, the paths between cognition and behavioral intention in models 6 and 8 both exhibit a 95% confidence interval (CI) range which includes zero. Paths with a 95% CI range which includes zero should be rejected even with p < 0.05 (Hair et al., 2016). This finding supports the notion that the 4% in-sample prediction of behavioral intention by cognition in model 6 is probably meaningless and supports model 7 as a better representative (than model 6) of the low-involvement (CBA) hierarchy effect. This finding also raises the concern that the predictive path connecting cognition and behavioral intention in high-involvement hierarchy model 2 is likely a pattern in the data and could be meaningless (Table 6) (Kline, 2015). However, an alternative possibility is that the path connecting cognition and behavioral intention in model 2 is providing a suppressor effect to the mediation role of affect between cognition and behavioral intention, as evidenced by the negative β value (−0.272) (Cheung and Lau, 2008).
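The bootstrap logic behind Table 6 can be sketched as follows: resample cases with replacement, re-estimate the path coefficient on each resample, and read off the percentile 95% CI. The code below is a simplified, single-path illustration on synthetic construct scores; XLSTAT re-runs the full PLS algorithm on each of the 5000 resamples.

# Schematic bootstrap significance check for a structural path: resample cases
# with replacement, re-estimate the path, and inspect the percentile 95% CI.
import numpy as np

rng = np.random.default_rng(6)
n = 85
cognition = rng.normal(size=n)
affect = 0.38 * cognition + rng.normal(scale=0.9, size=n)

def standardized_beta(x, y):
    return np.corrcoef(x, y)[0, 1]          # single-predictor standardized slope

boot = []
for _ in range(5000):
    idx = rng.integers(0, n, size=n)        # resample cases with replacement
    boot.append(standardized_beta(cognition[idx], affect[idx]))

low, high = np.percentile(boot, [2.5, 97.5])
print(f"beta = {standardized_beta(cognition, affect):.3f}, "
      f"95% CI [{low:.3f}; {high:.3f}]")
# A path whose CI includes zero is rejected even if its p-value is < 0.05.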

Table 6 Standardized path coefficient values (β) and their significancea
Model Pathb β 95% CI t-Value p-Value
a Critical t-values for a two-tailed test: <1.96 (p > 0.05), 1.96 (p = 0.05), >2.58 (p < 0.001). Paths with a 95% confidence interval (CI) including zero should be rejected even with p < 0.05. b A, affect; BI, behavioral intention; C, cognition.
2 C → A 0.533 [0.309; 0.667] 5.743 0.000
C → BI −0.272 [−0.448; −0.101] −3.571 <0.001
A → BI 0.924 [0.795; 1.045] 12.122 0.000
4 A → C 0.918 [0.655; 1.167] 6.630 0.000
BI → C −0.495 [−0.785; −0.205] −3.571 <0.001
A → BI 0.779 [0.676; 0.861] 11.305 <0.001
6 C → A 0.380 [0.250; 0.500] 6.630 0.000
C → BI 0.220 [−0.076; 0.419] 2.059 0.043
BI → A 0.695 [0.591; 0.794] 12.122 0.000
7 C → A 0.377 [0.256; 0.504] 6.576 0.000
BI → A 0.703 [0.605; 0.785] 12.251 0.000
8 C → BI −0.256 [−0.399; 0.070] −3.212 0.002
A → BI 0.922 [0.695; 1.034] 11.579 0.000


Cohen's path analysis was performed to analyze the causal direction between the sub-components of attitude (Cohen et al., 1993). Cohen's path analysis reasons that estimated path coefficients should be as close as possible to the actual path coefficients. This means that total squared errors (TSE) between estimated and actual path coefficients should be minimized. Changes in path direction alter the estimated path coefficients but not the actual path coefficients. Therefore, path connections and their directions are critical for calculating estimated paths in proposed structural models (Sun and Zhang, 2006). A causal direction is supported if it leads to a decrease in TSE. In order to conduct Cohen's path analysis, one important criterion is that there must be a path from every variable to the dependent variable; models 2 and 6 meet this criterion and were subsequently used to compare the hierarchy of attitude effects.

Estimated and actual path coefficients were read directly from the PLS-SEM data from models 2 and 6 in XLSTAT (Table 7). TSE values were 0.0441 and 0.0225 for models 2 and 6, respectively. Therefore, by reversing the path from affect → behavioral intention (model 2) to behavioral intention → affect (model 6), the TSE is changed by (0.0225 − 0.0441)/0.0441 = −49.0%. The negative sign indicates that the TSE is reduced and hence model 6 is an improvement over model 2. Whether or not the apparent improvement of model 6 over model 2 is statistically significant should be gauged by calculating Cohen's d effect size ((TSEmodel 6 − TSEmodel 2)/s, where s is the pooled standard deviation of the TSE values). It is not possible to calculate Cohen's d effect size for our data since our structural models only contain 3 latent constructs and hence we cannot calculate the required pooled standard deviation of the TSE values, s (Cohen, 1988). A fourth latent construct would be necessary to allow calculation of the pooled standard deviation. Nevertheless, the apparent 49.0% reduction in the TSE achieved by transitioning from model 2 to model 6 supports the low-involvement (CBA) hierarchy of attitude effect.

Table 7 Cohen's path analysis of models 2 and 6a
Model | Direct path | Indirect path | Estimated | Actual | TSE
2 | C → B | C → A → B | 0.220 | 0.430 | 0.0441
6 | C → A | C → B → A | 0.533 | 0.683 | 0.0225
a Week 5 (N = 85).
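The TSE comparison in Table 7 reduces, for a three-construct model, to a single squared difference per model; the arithmetic below reproduces the reported values and the −49.0% change.

# Total squared error (TSE) between estimated and actual path coefficients from
# Table 7; with three constructs there is one estimated path per model, so the
# "total" reduces to a single squared difference.
def tse(estimated, actual):
    return (actual - estimated) ** 2

tse_model2 = tse(0.220, 0.430)     # model 2: direct path C -> B (Table 7)
tse_model6 = tse(0.533, 0.683)     # model 6: direct path C -> A (Table 7)
change = (tse_model6 - tse_model2) / tse_model2
print(round(tse_model2, 4), round(tse_model6, 4), f"{change:.1%}")
# 0.0441  0.0225  -49.0%  -> model 6 (CBA) improves on model 2 (CAB)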


The goal of predictive modeling using PLS-SEM is to produce models with high levels of predictive power (large R2, f2, Q2 and q2 values) that are grounded in theory (Sarstedt et al., 2014). Blindfolding was performed to obtain a cross-validated redundancy (Q2) for the endogenous constructs (Hair et al., 2016). Q2 values greater than 0 suggest that a model has predictive relevance for an endogenous construct, whereas Q2 values below 0 indicate a lack of predictive relevance. Lower R2, f2, Q2 and q2 values for latent constructs in the ABC hierarchy model 4 support the assertion that this hierarchy effect is an unlikely theoretical fit to our students’ attitude toward the subject of chemistry (vide supra). However, it is theoretically reasonable that our students’ data is exhibiting either a high-involvement CAB hierarchy or a low-involvement CBA hierarchy.

Effect size is defined by Kelley and Preacher (2012) as “…a quantitative reflection of the magnitude of some phenomenon that is used for the purpose of addressing a question of interest.” The effect size, q2 = (Q2_included − Q2_excluded)/(1 − Q2_included), was computed. Values for q2 of 0.02, 0.15, and 0.35 are viewed as small, medium, and large effects, respectively (Hair et al., 2016). Out of the two parsimonious representative models (models 7 and 8), the highest predictive q2 effect size is observed in low-involvement hierarchy model 7 for behavioral intention on affect (0.621, large). A medium predictive q2 effect size (0.179) is observed in model 7 for cognition on affect (Table 8).

Table 8 Out-of-sample predictive relevance of structural models (week 5, N = 85)
Model Construct Q² q² (sub-columns give the effect on: Affect, Behavioral intention, Cognition)
Q² > 0 suggests that a model has out-of-sample predictive relevance for an endogenous construct; Q² < 0 indicates a lack of out-of-sample predictive relevance for an endogenous construct. q² is the effect size of Q². Values for q² of 0.02, 0.15 and 0.35 are viewed as small, medium and large effects, respectively.
2 Affect 0.189 0.707
Behavioral intention 0.444
Cognition 0.067
4 Affect 0.296
Behavioral intention 0.408 0.102
Cognition 0.266
6 Affect 0.495
Behavioral intention 0.033 0.628
Cognition 0.184
7 Affect 0.494
Behavioral intention 0.621
Cognition 0.179
8 Affect 0.407
Behavioral intention 0.437
Cognition 0.053


Our students’ data thus exhibit a greater predictive link between behavioral intention and affect than between cognition and affect, indicating that behavioral intention is driving affect to a greater extent than cognition is. In high-involvement hierarchy model 8, a large yet lower predictive q² effect size (0.407) is found for affect on behavioral intention, and a small predictive q² effect size (0.053) is found for cognition on behavioral intention. The q² effect size of affect on behavioral intention increases markedly, from 0.407 to 0.707, when the path between cognition and behavioral intention is reconnected and model 2 is restored (Table 8). Overall, the high-involvement CAB hierarchy is best represented by model 2, while the low-involvement hierarchy is best represented by model 7. Having considered all of the evidence described above, we believe that the findings support low-involvement hierarchy model 7 as better able to predict our students’ attitude data than the high-involvement hierarchy models. Furthermore, the behavioral intention construct contained the highest scoring items in the ASCI survey (Table 1), as would be expected from a low-involvement hierarchy of attitude effect (Calder, 1979).

Evidence can be sought to support our assertions about the causal relationships between latent constructs in our students’ data, but it is very difficult to prove causality without a priori considerations built into the survey instrument, and no such considerations are present in the ASCI instrument. We therefore state cautiously that evidence from our analysis of our students’ attitude data, together with theoretical considerations, points to a low-involvement (CBA) hierarchy of attitude effect, while acknowledging that our assertions fall short of definitive proof of this hierarchy effect.

The work presented herein suggests that our students first make decisions based on what they know, or believe they know, about the subject of chemistry, and then consider their future behavioral intentions. Finally, the cognitive and behavioral intention components of our students’ attitude structure produce feelings (the affective component) and remain concomitants of our students’ feelings toward the subject of chemistry.

If accurate, these preliminary low-involvement (CBA) hierarchy results suggest that our students are not as invested as they could be in making decisions about introductory chemistry, as evidenced by the displacement of behavioral intention as the culminating component of the attitude structure and its replacement by the affective component. If true, and not merely a reflection of the survey results, this hierarchy of attitude effect finding provides an opportunity to develop metacognitive and attitudinal interventions that address attitude mindsets at the beginning of an introductory chemistry course.

Conclusions and implications for practice

Meaningful insights into our students’ attitude toward the subject of chemistry have been obtained following application of Bauer's semantic differential ASCI instrument. Our study shows for the first time that PLS-SEM can be successfully employed as a complementary modeling process to the more commonly used CB-SEM when investigating the ASCI survey. EFA resolved only two latent constructs (Appendix 3), and a possible reason could be an underlying integrated hierarchy of attitude effect. Investigation of a possible hierarchy of attitude effect in our students’ data using PLS-SEM allowed a representation of our students’ data that both fit the data and was aligned with attitude theory. Inclusion of a third latent construct adhered more closely to the tripartite theory of attitude composition and was not merely the result of pursuing patterns in our data. We call this third latent construct behavioral intention and consider it representative of our students’ covert behavior, or their intention to act (Rosenberg and Hovland, 1960; Ajzen, 1985; Ajzen, 1991; Eccles et al., 2007).

This work suggests that the original ASCI instrument is suited to the investigation of a hierarchy of attitude effect when attitude itself, rather than its link to achievement, is the focus. The hierarchy of attitude effect offers a potential lens for understanding students’ attitude toward chemistry and for monitoring academic interventions that might impact students’ attitude.

The results from this PLS-SEM analysis of students’ attitude toward the subject of chemistry chart a path for chemistry educators to investigate the hierarchy of attitude effect displayed within a given cohort at an institution and to pinpoint which aspects of students’ attitude are important. For example, faced with a low-involvement (CBA) hierarchy of attitude effect, the remedial goal should be to reverse the hierarchical placement of behavioral intention and affect, thereby transitioning from the low-involvement (CBA) to the high-involvement (CAB) hierarchy. It is possible that students’ attitude toward chemistry can be measured and redirected in a more positive direction within a semester if necessary (Abdullah et al., 2009; Koponen et al., 2012; Vishnumolakala et al., 2017). Engaging students in metacognitive discussions about the different hierarchies of attitude effect would also likely be a useful component of remedial dialogue between students and instructors (Cook et al., 2013). For example, instructors could give a short classroom presentation on the hierarchies of attitude effect, explaining what they are and what they imply about learning in the classroom. Time could then be made available for students to discuss the implications of each hierarchy for learning chemistry. It would be important to allow students to think about the hierarchies of attitude effect and their relationship to academic performance.

Sole use of the shortened ASCIv2 instrument eliminates the possibility of capturing the hierarchy of attitude effect, since ASCIv2 has the capacity to monitor only two of the three constituents of attitude (Xu and Lewis, 2011); measurement of all three constituents is required to evaluate the hierarchy of attitude effect. While behavioral intention was the most positive component of attitude for our students, the structural data presented herein showed that affect was the component of attitude most strongly predicted by the other two latent constructs (Fig. 1 and 2). We believe that our students’ data was best modeled by the low-involvement (CBA) hierarchy of attitude effect (model 7). Together with previous studies, this work answers a call in the literature to further demonstrate that the ASCI instrument is a robust and valuable tool that can provide tailored insights into the attitudes of a diverse student cohort, insights that might be used to augment the meaningful learning of chemistry.

Following on from this work, we plan to use PLS-SEM analysis of the ASCI instrument to evaluate the impact of learning interventions on the hierarchy of attitude effects in future cohorts of introductory chemistry students at our institution. A shift in students’ attitude from a low-involvement (CBA) hierarchy to a high-involvement (CAB) hierarchy of effect would be desirable and would signal a successful learning intervention. Another avenue for future work is to combine the results of students’ attitude toward chemistry with other affective domain markers (Bauer, 2005; Ferrell and Barbera, 2015). A vertical study of students’ affective domain during the transition from introductory chemistry to general chemistry to organic chemistry, together with a study of students’ sentiment toward electronic textbooks, would also benefit future planning and curriculum design at our institution; this work is currently underway.

Limitations of study

This study has several limitations that should be acknowledged. First, the attitudinal object (the subject of chemistry) was not, and could not be, physically present; it was instead an abstract notion in the minds of the students. In such cases, wherein the attitudinal object is not physically present and the tripartite structure of attitude is analyzed solely with a self-report survey instrument (ASCI), correlations between affect, behavior and cognition tend to be overestimated (Greenwald, 1982; Breckler, 1984). Therefore, the means of measurement could itself influence the results and, if so, would be reflected in the hierarchy of attitude effect findings reported here.

Second, no structural model is perfect, and so we have to accept and work with the best models available to us (MacCallum, 2003). While we have made every effort to present and discuss the best models of our data, grounded in sensible attitudinal theory rather than merely in patterns in the data, the structural models presented here are not perfect, and conclusions drawn from them are likewise imperfect. For example, the tripartite structure of attitude does not exist in isolation, and we have not considered extraneous interactions with our proposed hierarchy of attitude effect model (Breckler, 1984; Bhattacherjee and Sanford, 2006). Furthermore, this study adopted a PLS path analysis approach to investigate a possible hierarchy of attitude effect and, in doing so, forwent the opportunity to measure the structural model fit offered by CB-SEM. We encourage others with larger sample sizes to examine evidence of a hierarchy of attitude effect using a covariance-based approach; this is more feasible with data from large 4 year institutions, where enrollments are large enough to allow CB-SEM analysis.

Third, the participants in this study were mainly Hispanic/Latino students, which could limit the generalizability of the results to an institution with a different student make-up. Although we typically observe that the Asian/Pacific Islander students in our chemistry classes outperform the majority Hispanic/Latino students, we did not investigate the structural model of each group separately because the sample sizes needed to do so were insufficient; instead, our data incorporate attitudinal data from all students. We acknowledge that the smaller sample of typically higher performing Asian/Pacific Islander students in our study could exhibit a different structural model (or hierarchy effect) if analyzed in isolation. Furthermore, we did not account for the possibility that some students were taking introductory chemistry for the first time while others were repeating the class. Instead, we made the broad assumption that all students were encountering college-level introductory chemistry for the first time and were developing an attitude toward the subject of chemistry from a level starting point rather than from prior experience.

Fourth, this study was performed during an intensive 5 week Winter intersession rather than during a regular 15 week semester. The literature shows that students’ attitudes settle with time and exposure to an attitudinal object (Brandriet et al., 2013); therefore, superior structural models can be forged from data surveyed later in a course. It cannot be assumed that an attitude toward the subject of chemistry surveyed at the end of a 5 week exposure would be identical to an attitude surveyed at the end of a 15 week exposure to introductory chemistry. We do not claim that results from a 5 week course can be extrapolated to a regular 15 week semester, and we caution against doing so; re-measuring students’ attitude during a full 15 week semester would be necessary if such a comparison is of interest.

Despite the aforementioned limitations, we believe that the work presented here will be of value to the broader chemistry education community. The link between students’ cognitive and affective domains during science classes, and the responsibility afforded to science educators to cater to both, is particularly relevant in today's science classrooms. PLS-SEM extends to chemistry educators with modest class sizes the opportunity to explore the unique affective domain of students within a cohort at any type of institution. PLS-SEM, coupled with the potential of the hierarchy of attitude effect to function as a lens through which to analyze the impact of instruction on students’ attitude, offers the possibility of monitoring the impact of learning interventions on an increasingly diverse student demographic.

Conflicts of interest

There are no conflicts to declare.

Appendices

Appendix 1

Table 9 Descriptive statistics of ASCI data
Item number Item (word pair) Week 1 (N = 98): Mean (SD), Skew, Kurt Week 5 (N = 85): Mean (SD), Skew, Kurt
a Negatively stated item is reversed before averaging (a computational sketch follows the table).
1a Easy Hard 3.388 (1.352) 0.051 0.055 3.624 (1.543) 0.021 −0.615
2 Worthless Beneficial 5.816 (1.409) −1.538 2.341 5.682 (1.071) −0.460 −0.560
3a Exciting Boring 4.571 (1.741) −0.332 −0.775 5.000 (1.447) −0.265 −0.535
4 Complicated Simple 3.388 (1.469) 0.355 0.014 3.129 (1.454) 0.341 −0.304
5 Confusing Clear 3.878 (1.594) 0.236 −0.640 3.612 (1.423) 0.161 −0.000
6a Good Bad 5.337 (1.631) −0.867 0.085 5.224 (1.199) 0.022 −1.168
7a Satisfying Frustrating 4.214 (1.508) 0.068 −0.495 4.529 (1.615) −0.184 −0.536
8a Scary Fun 3.602 (1.470) −0.016 −0.264 3.671 (1.592) 0.234 −0.591
9a Comprehensible Incomprehensible 4.571 (1.193) −0.080 0.300 4.800 (1.317) −0.326 0.333
10 Challenging Not challenging 2.847 (1.515) 0.882 0.476 2.718 (1.477) 0.912 0.669
11a Pleasant Unpleasant 4.490 (1.203) −0.012 0.083 4.729 (1.409) −0.627 0.716
12a Interesting Dull 5.602 (1.504) −1.109 0.668 5.612 (1.273) −1.071 1.356
13a Disgusting Attractive 3.082 (1.329) 0.224 0.212 3.165 (1.174) −0.102 −0.522
14a Comfortable Uncomfortable 4.235 (1.208) 0.467 0.624 4.376 (1.431) −0.070 −0.104
15a Worthwhile Useless 5.429 (1.392) −0.949 0.913 5.118 (1.418) −0.392 −0.695
16a Work Play 4.990 (1.576) −0.579 0.110 4.965 (1.607) −0.541 −0.077
17 Chaotic Organized 4.837 (1.337) 0.094 −0.644 4.729 (1.606) −0.427 −0.265
18 Safe Dangerous 4.061 (1.442) 0.038 0.110 3.871 (1.494) −0.058 −0.141
19a Tense Relaxed 4.469 (1.408) −0.320 0.001 4.412 (1.606) −0.137 −0.316
20a Insecure Secure 3.663 (1.218) −0.128 0.804 3.612 (1.372) 0.029 0.090
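As referenced in the table footnote, negatively stated items are reverse-coded on the 7-point scale before the statistics are computed. The sketch below illustrates one way to reproduce the quantities in Table 9 from a raw response matrix; the data are hypothetical, and the use of scipy's default skew and excess-kurtosis estimators is an assumption rather than a description of the original analysis.

```python
import numpy as np
from scipy import stats

# Hypothetical raw ASCI responses: rows = students, columns = the 20 items,
# each scored 1-7 on the semantic differential scale.
rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(85, 20)).astype(float)

# Items flagged "a" in Table 9 are negatively stated and reverse-coded (1-7 scale).
NEGATIVE_ITEMS = (1, 3, 6, 7, 8, 9, 11, 12, 13, 14, 15, 16, 19, 20)
for item in NEGATIVE_ITEMS:
    responses[:, item - 1] = 8.0 - responses[:, item - 1]

# Descriptive statistics per item (scipy defaults; the paper may use other estimators).
for item in range(1, 21):
    x = responses[:, item - 1]
    print(f"Item {item:2d}: mean={x.mean():.3f}  SD={x.std(ddof=1):.3f}  "
          f"skew={stats.skew(x):.3f}  kurt={stats.kurtosis(x):.3f}")
```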


Appendix 2

Table 10 Internal consistency (Cronbach's alpha) and test–retest reliability (correlation) for ASCI data from week 1
Latent variable (item numbers) Cronbach's alpha: Our data (N = 98), Lit.a, Lit.b (N = 405) Correlation: Our data (N = 98), Lit.a, Lit.b (N = 10)
a Bauer, 2008. b Xu and Lewis, 2011. (A computational sketch of Cronbach's alpha follows the table.)
Interest & utility (2,3,6,12,15) 0.76 0.83 0.82 0.55 0.74 0.91
Anxiety (8,13,16,19,20) 0.72 0.77 0.71 0.68 0.64 0.96
Intellectual accessibility (1,4,5,9,10) 0.70 0.78 0.79 0.60 0.71 0.96
Emotional satisfaction (7,11,14,17) 0.78 0.79 0.74 0.72 0.72 0.96
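Cronbach's alpha for each latent variable in Table 10 is a simple function of the item variances and the variance of the summed score. The sketch below applies the standard formula to a hypothetical item-score matrix; the item grouping follows Table 10, but the data and helper names are illustrative only.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_students x k_items) score matrix:
    alpha = k/(k - 1) * (1 - sum of item variances / variance of the total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical scores for the five "interest & utility" items (2, 3, 6, 12, 15):
# a shared trait plus item-level noise, rounded onto the 1-7 scale.
rng = np.random.default_rng(1)
trait = rng.normal(5.0, 1.0, size=(98, 1))
scores = np.clip(np.round(trait + rng.normal(0.0, 0.8, size=(98, 5))), 1, 7)
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```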


Appendix 3

Table 11 Item loadings from EFA of ASCI data from week 5 versus literature values
Item Our data (week 5): F1, F2, F3, F4 Bauer (2008): F1, F2, F3, F4 Xu and Lewis (2011): F1, F2, F3, F4
EFA was performed in XLSTAT 2017.5 using literature parameters, including principal components analysis with varimax rotation; four factors were extracted for comparison with the literature. Expected item loadings are in bold; item loadings onto unexpected factors are in italics (a computational sketch of the extraction and rotation follows the table). a Negatively stated.
Interest and utility
15a Worthwhile Useless 0.83 0.16 0.01 0.26 0.85 0.01 −0.06 −0.11 0.75 −0.22 0.01 0.06
2 Worthless Beneficial 0.74 −0.11 −0.05 −0.10 0.79 −0.10 0.03 −0.04 0.68 0.21 0.05 0.04
6a Good Bad 0.82 −0.16 −0.12 −0.04 0.71 0.05 −0.20 −0.04 0.68 −0.20 −0.16 0.13
12a Interesting Dull 0.80 0.02 0.10 0.16 0.67 0.32 0.02 −0.15 0.80 −0.17 −0.13 −0.01
3a Exciting Boring 0.82 −0.12 −0.00 0.11 0.58 0.38 −0.05 −0.09 0.69 0.08 −0.24 −0.08
Anxiety
19a Tense Relaxed −0.14 0.53 −0.58 −0.12 −0.14 0.75 0.32 0.02 −0.17 0.55 0.45 0.31
16a Work Play 0.12 0.56 −0.52 0.33 0.06 0.74 0.23 −0.15 −0.14 0.23 0.36 0.71
8a Scary Fun −0.61 0.13 −0.42 −0.25 −0.35 0.60 0.18 0.16 −0.38 0.26 0.53 −0.15
20a Insecure Secure −0.24 0.74 −0.07 −0.39 −0.34 0.53 0.23 0.29 −0.20 0.64 0.37 −0.17
13a Disgusting Attractive −0.70 0.18 0.11 −0.08 −0.42 0.53 −0.01 0.11 −0.55 0.44 0.08 0.16
Intellectual accessibility
4 Complicated Simple −0.06 −0.29 0.83 −0.05 −0.03 −0.13 0.80 −0.13 −0.05 0.17 0.72 0.05
5 Confusing Clear 0.17 −0.30 0.73 0.03 −0.24 −0.33 0.75 0.06 −0.10 0.04 0.78 −0.06
1a Easy Hard 0.27 −0.04 0.77 0.21 0.13 0.18 0.73 −0.34 0.19 −0.24 0.70 −0.07
10 Challenging Unchallenging −0.17 0.15 0.83 0.10 0.29 −0.36 0.54 −0.01 0.08 0.09 0.69 0.36
9a Comprehensible Incomprehensible 0.50 −0.37 0.36 0.24 0.38 −0.03 0.52 −0.41 0.49 −0.12 0.52 0.20
Fear
18 Safe Dangerous −0.06 0.15 −0.07 0.88 0.03 0.05 −0.05 0.85 0.09 −0.29 −0.18 0.68
Emotional satisfaction
11a Pleasant Unpleasant 0.70 −0.14 0.32 0.19 0.50 0.44 −0.35 −0.27 0.60 −0.13 −0.50 −0.10
14a Comfortable Uncomfortable 0.48 −0.14 0.27 0.49 0.48 0.43 −0.35 −0.28 0.46 −0.42 −0.48 0.03
17 Chaotic Unchaotic 0.43 −0.49 0.21 0.10 0.44 −0.34 0.32 −0.15 0.25 0.73 0.08 −0.07
7a Satisfying Unsatisfying 0.74 −0.21 0.29 0.01 0.41 0.30 −0.46 −0.28 0.49 −0.07 −0.67 0.03
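As referenced in the footnote to Table 11, the loadings were obtained by principal components extraction followed by varimax rotation with four factors retained. The sketch below is a minimal NumPy re-implementation of that procedure under the assumption of the standard orthogonal varimax criterion; the response matrix is hypothetical and the resulting loadings are not those reported in Table 11.

```python
import numpy as np

def pca_loadings(data, n_factors=4):
    """Principal-component loadings from the correlation matrix:
    eigenvectors scaled by the square roots of their eigenvalues."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1][:n_factors]          # largest components first
    return eigvecs[:, order] * np.sqrt(eigvals[order])

def varimax(loadings, max_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a (p x k) loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    criterion = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        target = rotated**3 - rotated @ np.diag((rotated**2).sum(axis=0)) / p
        u, s, vt = np.linalg.svd(loadings.T @ target)
        rotation = u @ vt
        if s.sum() < criterion * (1 + tol):                # negligible improvement: converged
            break
        criterion = s.sum()
    return loadings @ rotation

# Hypothetical 85 x 20 response matrix standing in for the week 5 ASCI data.
rng = np.random.default_rng(2)
responses = rng.normal(4.0, 1.5, size=(85, 20))
rotated_loadings = varimax(pca_loadings(responses, n_factors=4))
print(np.round(rotated_loadings, 2))   # rows = items, columns = F1-F4
```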


Acknowledgements

We are grateful to the numerous chemistry professors and students who enabled this research study.

References

  1. Abdullah M., Mohamed N. and Ismail Z. H., (2009), The effect of an individualized laboratory approach through microscale chemistry experimentation on students’ understanding of chemistry concepts, motivation, and attitudes, Chem. Educ. Res. Pract., 10, 53–61.
  2. Adams W. K. and Wieman C. E., (2011), Development and validation of instruments to measure learning of expert-like thinking, Int. J. Sci. Educ., 33(9), 1289–1312.
  3. Aiken L. R., (1980), Attitude measurement and research, New Dir. Test. Meas., 7, 1–24.
  4. Ajzen I., (1985), From intentions to actions: a theory of planned behavior, in Kuhl J. and Beckmann J., (ed.), Action-control: from cognition to behavior, Heidelberg: Springer, pp. 11–39.
  5. Ajzen I., (1991), The theory of planned behavior, Organ. Behav. Hum. Decis. Process., 50(2), 179–211.
  6. Arjoon J. A., Xu X. and Lewis J. E., (2013), Understanding the state of the art for measurement in chemistry education research: examining the psychometric evidence, J. Chem. Educ., 90(5), 536–545.
  7. Bandura A., (1997), Self-efficacy: the exercise of control, New York: W. H. Freeman and Company.
  8. Barbera J., Adams W. K., Wieman C. E. and Perkins K. K., (2008), Modifying and validating the Colorado Learning Attitudes about Science Survey for use in chemistry, J. Chem. Educ., 85(10), 1435–1439.
  9. Bauer C. F., (2005), Beyond “student attitudes”: Chemistry Self-Concept Inventory for assessment of the affective component of student learning, J. Chem. Educ., 82(12), 1864–1870.
  10. Bauer C. F., (2008), Attitude towards chemistry: a semantic differential instrument for assessing curriculum impacts, J. Chem. Educ., 85(10), 1440–1445.
  11. Beatty S. E. and Kahle L. R., (1988), Alternative hierarchies of the attitude-behavior relationship: the impact of brand commitment and habit, J. Acad. Market. Sci., 16(2), 1–10.
  12. Berg C. A. R., (2005), Factors related to observed attitude change toward learning chemistry among university students, Chem. Educ. Res. Pract., 6(1), 1–18.
  13. Bhattacherjee A. and Sanford C., (2006), Influence processes for information technology acceptance: an elaboration likelihood model, Manag. Inf. Syst. Q., 30(4), 805–825.
  14. Blalock Jr. H. M., (1986), Multiple causation, indirect measurement and generalizability in the social sciences, Synthese, 68, 13–36.
  15. Bloom B. S., Engelhart M. D., Hill W. H. and Furst E. J., (1956), Taxonomy of educational objectives. Handbook I: Cognitive domain, New York: David McKay Company, Inc.
  16. Brandriet A. R., Xu X., Bretz S. L. and Lewis J. E., (2011), Diagnosing changes in attitude in first year college chemistry students with a shortened version of Bauer's semantic differential, Chem. Educ. Res. Pract., 12, 271–278.
  17. Brandriet A. R., Ward R. M. and Bretz S. L., (2013), Modeling meaningful learning in chemistry using structural equation modeling, Chem. Educ. Res. Pract., 14, 421–430.
  18. Breckler S. J., (1984), Empirical validation of affect, behavior, and cognition as distinct components of attitude, J. Pers. Soc. Psychol., 47(6), 1191–1205.
  19. Brown S. J., Sharma B. N., Wakeling L., Naiker M., Chandra S., Gopalan R. D. and Bilimoria V. B., (2014), Quantifying attitude to chemistry in students at the University of the South Pacific, Chem. Educ. Res. Pract., 15, 184–191.
  20. Brown T. A., (2014), Confirmatory factor analysis for applied research, 2nd edn, New York: The Guildford Press.
  21. Bunce D. M. and Hutchinson K. D., (1993), The use of the GALT (Group Assessment of Logical Thinking) as a predictor of academic success in college chemistry, J. Chem. Educ., 70(3), 183–187.
  22. Çakici Y., Aricak O. T. and Ilgaz G., (2011), Can ‘attitudes toward biology course’ and ‘learning strategies’ simultaneously predict achievement in biology? Egit. Arast., 11(45), 31–48.
  23. Calder B. J., (1979), When attitudes follow behavior – a self-perception/dissonance interpretation of low involvement, in Maloney J. C. and Silverman B. (ed.), Attitude research plays for high stakes, Chicago: American Marketing Association.
  24. Chan J. Y. K. and Bauer C. F., (2014), Identifying at-risk students in general chemistry via cluster analysis of affective characteristics, J. Chem. Educ., 91(9), 1417–1425.
  25. Chan J. Y. K. and Bauer C. F., (2016), Learning and studying strategies used by general chemistry students with different affective characteristics, Chem. Educ. Res. Pract., 17, 675–684.
  26. Cheung D., (2009), Students’ attitudes toward chemistry lessons: the interaction effect between grade level and gender, Res. Sci. Educ., 39, 75–91.
  27. Cheung G. W. and Lau R. S., (2008), Testing mediation and suppression effects of latent variables: bootstrapping with structural equation models, Org. Res. Meth., 11(2), 296–325.
  28. Chin W. W., (2010), How to write up and report PLS analyses, in Vinzi V. E., Chin W. W., Henseler J. and Wang H., (ed.), Handbook of partial least squares: concepts, methods and applications, New York: Springer Handbooks of Computational Statistics, pp. 655–690.
  29. Cohen J., (1988), Statistical power analysis for the behavioral sciences, Hillsdale, NJ: Lawrence Erlbaum.
  30. Cohen P. R., Carlsson A., Ballesteros L. and Amant R. S., (1993), Automating path analysis for building causal models from data: First results and open problems, Eleventh National Conference on Artificial Intelligence, Washington DC.
  31. Cook E., Kennedy E. and McGuire S. Y., (2013), Effect of teaching metacognitive learning strategies on performance in general chemistry courses, J. Chem. Educ., 90(8), 961–967.
  32. Cudeck R., (1989), Analysis of correlation matrices using covariance structure models, Psychol. Bull., 105(2), 317–327.
  33. Cukrowska E., Staskun M. G. and Schoeman H. S., (1999), Attitudes towards chemistry and their relationship to student achievement in introductory chemistry courses, S. Afr. Tydskr. Chem., 52(1), 8–14.
  34. Davidshofer C. O. and Murphy K. R., (2005), Psychological testing: principles and testing, 6th edn, Upper Sadler River, NJ: Pearson.
  35. DeWitt J., Archer L. and Osborne J., (2014), Science-related aspirations across the primary-secondary divide: evidence from two surveys in England, Int. J. Sci. Educ., 36(10), 1609–1629.
  36. Eagly A. H. and Chaiken S., (1993), The psychology of attitudes, Fort Worth, TX: Harcourt Brace Jovanovich.
  37. Eccles J. S., Adler T. F., Futterman R., Goff S. B., Kaczala C. M., Meece J. L. and Midgley C., (2007), Expectancies, values, and academic behaviors, in Spence J. T., (ed.), Achievement and achievement motivation, San Francisco, CA: W. H. Freeman, pp. 75–146.
  38. Else-Quest N. M., Mineo C. C. and Higgins A., (2013), Math and science attitudes and achievement at the intersection of gender and ethnicity, Psychol. Women Q., 37(3), 293–309.
  39. Ewing M., Huff K., Andrews M. and King K., (2005), Assessing the reliability of skills measured by the SAT® (Office of Research and Analysis, Trans.), The College Board.
  40. Ferrell B. and Barbera J., (2015), Analysis of students’ self-efficacy, interest, and effort beliefs in general chemistry, Chem. Educ. Res. Pract., 16, 318–337.
  41. Ferrell B., Phillips M. M. and Barbera J., (2016), Connecting achievement motivation to performance in general chemistry, Chem. Educ. Res. Pract., 17, 1054–1066.
  42. Fishbein M. and Ajzen I., (1975), Belief, attitude, intention and behavior, Reading, MA: Addison-Wesley.
  43. Fornell C. and Larcker D. F., (1981), Evaluating structural equation models with unobservable variables and measurement error, J. Mark. Res., 18(1), 39–50.
  44. Galloway K. R. and Bretz S. L., (2015), Development of an assessment tool to measure students’ meaningful learning in the undergraduate chemistry laboratory, J. Chem. Educ., 92(7), 1149–1158.
  45. Geisser S., (1975), The predictive sample reuse method with applications, J. Am. Stat. Assoc., 70(350), 320–328.
  46. Greenbaum P. E. and Dedrick R. F., (1998), Hierarchical confirmatory factor analysis of the child behavior checklist/4–18, Psychol. Assess., 10(2), 149–155.
  47. Greenwald A. G., (1982), Is anyone in charge? Personalysis versus the principle of personal unity, in Suis J., (ed.), Psychological perspectives on the self, Hillsdale, NJ: Erlbaum, vol. 1, pp. 151–181.
  48. Grove N. and Bretz S. L., (2007), CHEMX: an instrument to assess students’ cognitive expectations for learning chemistry, J. Chem. Educ., 84(9), 1524–1929.
  49. Hair Jr. J. F., Ringle C. M. and Sarstedt M., (2011), PLS-SEM: indeed a silver bullet, J. Mark. Theory Pract., 19(2), 139–151.
  50. Hair Jr. J. F., Sarstedt M., Hopkins L. and Kuppelwieser V. G., (2014), Partial least squares structural equation modeling (PLS-SEM): an emerging tool in business research, Eur. Bus. Rev., 26(2), 106–121.
  51. Hair Jr. J. F., Hult, G. T. M., Ringle C. M. and Sarstedt M., (2016), A primer on partial least squares structural equation modeling (PLS-SEM), 2nd edn, Los Angeles: Sage Publications, Inc.
  52. Henseler J. and Sarstedt M., (2013), Goodness-of-fit indices for partial least squares path modeling, Comput. Stat., 28, 565–580.
  53. Heredia K. and Lewis J. E., (2012), A psychometric evaluation of the Colorado Learning Attitudes about Science Survey for use in chemistry, J. Chem. Educ., 89(4), 436–441.
  54. Hu L. and Bentler P. M., (1999), Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives, Struct. Equ. Modeling, 6(1), 1–55.
  55. Kahveci A., (2015), Assessing high school students’ attitudes toward chemistry with a shortened semantic differential, Chem. Educ. Res. Pract., 16, 283–292.
  56. Kahveci M. and Orgill M. (ed.), (2015), Affective dimensions in chemistry education, Dordrecht: Springer.
  57. Kaiser H. F., (1974), Analysis of factorial simplicity, Psychometrika, 39(1), 31–36.
  58. Kan A. and Akbas A., (2006), Affective factors that influence chemistry achievement (attitude and self-efficacy) and the power of these factors to predict chemistry achievement–I, J. Turk. Sci. Educ., 3(1), 76–85.
  59. Kelley K. and Preacher K. J., (2012), On effect size, Psychol. Methods, 17(2), 137–152.
  60. Kember D. and Leung D. Y. P., (1998), The dimensionality of approaches to learning: an investigation with confirmatory factor analysis on the structure of the SPQ and LPQ, Educ. Psychol., 68(3), 395–407.
  61. Kline R. B., (2015), Principles and practice of structural equation modeling, 4th edn, New York: The Guildford Press.
  62. Koponen J., Pyörälä E. and Isotalus P., (2012), Comparing three experiential learning methods and their effect on medical students’ attitudes to learning communication skills, Med. Teach., 34, e198–e207.
  63. Krathwohl D. R., Bloom B. S. and Masia B. B., (1964), Taxonomy of educational objectives, in Handbook II: Affective domain, New York: David McKay Company, Inc.
  64. Krugman H. E., (1965), The impact of television advertising: learning without involvement, Public Opin. Q., 29, 349–356.
  65. Lavidge R. J., and Steiner G. A., (1961), A model for predictive measurement of advertising effectiveness, J. Mark., 25(4), 59–62.
  66. Leone L., Perugini M. and Ercolani A. P., (1999), A comparison of three models of attitude-behavior relationships in the studying behavior domain, Eur. J. Soc. Psychol., 29, 161–189.
  67. Lewis S. E. and Lewis J. E., (2007), Predicting at-risk students in general chemistry: comparing formal thought to a general achievement measure, Chem. Educ. Res. Pract., 8(1), 32–51.
  68. Lewis S. E., Shaw J. L., Heitz J. O. and Webster G. H., (2009), Attitude counts: self-concept and success in general chemistry, J. Chem. Educ., 86(6), 744–749.
  69. Liu Y., Ferrell B., Barbera J. and Lewis J. E., (2017), Development and evaluation of a chemistry-specific version of the academic motivation scale (AMS-Chemistry), Chem. Educ. Res. Pract., 18, 191–213.
  70. Lohmöller J.-B., (1989), Latent variable path modeling with partial least squares, Heidelberg, Germany: Physica.
  71. MacCallum R. C., (2003), Working with imperfect models, Multivariate Behav. Res., 38(1), 113–139.
  72. Malcom S. and Feder M. (ed.), (2016), Barriers and opportunities for 2 year and 4 year STEM degrees, Washington, DC: The National Academies Press.
  73. McLeod D. B., (1992), Research on affect in mathematics education: a reconceptualization, in Grouws D. A. (ed.), Handbook of research on mathematics teaching and learning, New York: Macmillan Publishing Company, pp. 575–596.
  74. Nielsen S. E. and Yezierski E., (2015), Exploring the structure and function of the Chemistry Self-Concept Inventory with high school chemistry students, J. Chem. Educ., 92(11), 1782–1789.
  75. Nieswandt M., (2007), Student affect and conceptual understanding in learning chemistry, J. Res. Sci. Teach., 44(7), 908–937.
  76. Osborne J., Simon S. and Collins S., (2003), Attitudes towards science: a review of the literature and its implications, Int. J. Sci. Educ., 25(9), 1049–1079.
  77. Paavola S., Lipponen L. and Hakkarainen K., (2004), Models of innovative knowledge communities and three metaphors of learning, Rev. Educ. Res., 74(4), 557–576.
  78. Ray M. L., (1973), Marketing communications and the hierarchy-of-effects, in Clark P. (ed.), New models for mass communication research, Beverley Hills, CA: Sage Publications, pp. 147–176.
  79. Ray M. L., (1982), Advertising and communication management, Englewood Cliffs, NJ: Prentice-Hall.
  80. Reinartz W., Haenlein M. and Henseler J., (2009), An empirical comparison of the efficacy of covariance-based and variance-based SEM, Int. J. Res. Mark., 26(4), 332–344.
  81. Richter-Egger D. L., Hagen J. P., Laquer F. C., Grandgenett N. F. and Shuster R. D., (2010), Improving student attitudes about science by integrating research into the introductory chemistry laboratory: interdisciplinary drinking water analysis, J. Chem. Educ., 87(8), 862–868.
  82. Rigdon E. E., (1998), Structural equation modeling, in Modern methods for business research, Marcoulides G. A. (ed.), Mahwah, NJ: Lawrence Erlbaum, pp. 251–294.
  83. Rigdon E. E., (2012), Rethinking partial least squares path modeling: in praise of simple methods, Long Range Plann., 45(5–6), 341–358.
  84. Riketta M., (2008), The causal relation between job attitudes and performance: a meta-analysis of panel studies, J. Appl. Psychol., 93(2), 472–481.
  85. Rosenberg M. J. and Hovland C. I., (1960), in Hovland C. I. and Rosenberg M. J. (ed.), Attitude organization and change: an analysis of consistency among attitude components, New Haven, CT: Yale University Press.
  86. Salta K. and Tzougraki C., (2004), Attitudes toward chemistry among 11th grade students in high schools in Greece, Sci. Educ., 88, 535–547.
  87. Sarstedt M., Ringle C M., Henseler J. and Hair J. F., (2014), On the emancipation of PLS-SEM: a commentary on Rigdon (2012), Long Range Plann., 47(3), 154–160.
  88. Schreiber J. B., Nora A., Stage F. K., Barlow E. A. and King J., (2006), Reporting structural equation modeling and confirmatory factor analysis results: a review, J. Educ. Res., 99(6), 323–337.
  89. Schreiber J. B., (2008), Core reporting practices in structural equation modeling, Res. Soc. Adm. Pharm., 4, 83–97.
  90. Seery M. K., (2009), The role of prior knowledge and student aptitude in undergraduate performance in chemistry: a correlation-prediction study, Chem. Educ. Res. Pract., 10, 227–232.
  91. Sharma P. N. and Kim K. H., (2012), Model selection in information systems research using partial least squares based structural equation modeling, in Proceedings of the International Conference on Information Systems, Orlando, Florida.
  92. Shaw J. A., (2012), Using small group debates to actively engage students in an introductory microbiology course, J. Microbiol. Biol. Educ., 13(2), 155–160.
  93. Sijtsma K., (2009), On the use, the misuse, and the very limited usefulness of Cronbach's alpha, Psychometrika, 74(1), 107–120.
  94. Solomon M., (1997), Consumer behavior: Buying, having and being, 3rd edn, Englewood Cliffs, New Jersey: Prentice-Hall.
  95. Spencer H. E., (1996), Mathematical SAT test scores and college chemistry grades, J. Chem. Educ., 73(12), 1150–1153.
  96. Steinkamp M. W. and Maehr M. L., (1983), Affect, ability, and science achievement: a quantitative synthesis of correlational research, Rev. Educ. Res., 53(3), 369–396.
  97. Stone M., (1974), Cross validatory choice and assessment of statistical predictions, J. Royal Stat. Soc., 36(2), 111–147.
  98. Sun H. and Zhang P., (2006), Causal relationships between perceived enjoyment and perceived ease of use: an alternative approach, J. Assoc. Inf. Syst., 7(9), 618–645.
  99. Turner R. C. and Lindsay H. A., (2003), Gender differences in cognitive and noncognitive factors related to achievement in organic chemistry, J. Chem. Educ., 80(5), 563–568.
  100. Uzuntiryaki E. and Aydin Y. C., (2009), Development and validation of chemistry self-efficacy scale for college students, Res. Sci. Educ., 39(4), 539–551.
  101. Villafañe S. M. and Lewis J. E., (2016), Exploring a measure of science attitude for different groups of students enrolled in introductory college chemistry, Chem. Educ. Res. Pract., 17, 731–742.
  102. Vishnumolakala V. R., Southam D. C., Treagust D. F. and Mocerino M., (2016), Latent constructs of the students’ assessment of their learning gains instrument following instruction in stereochemistry, Chem. Educ. Res. Pract., 17, 309–319.
  103. Vishnumolakala V. R., Southam D. C., Treagust D. F., Mocerino M. and Qureshi S., (2017), Students’ attitudes, self-efficacy and experiences in a modified process-oriented guided inquiry learning undergraduate chemistry classroom, Chem. Educ. Res. Pract., 18, 340–352.
  104. Wagner E. P., Sasser H. and DiBiase W. J., (2002), Predicting students at risk in general chemistry using pre-semester assessments and demographic information, J. Chem. Educ., 79(6), 749–755.
  105. Wagner R. V. and Sherwood J. J., (1969), The study of attitude change, Brooks/Cole Publishing Company.
  106. Wilkins J. L. M., (2004), Mathematics and science self-concept: An international investigation, J. Exp. Educ., 72(4), 331–346.
  107. Wold H., (1982), Soft modeling: the basic design and some extensions, in Jöreskog K. G. and Wold H. (ed.), Systems under indirect observations: causality, structure, prediction. Part II, Amsterdam: North-Holland, pp. 1–54.
  108. Xu X. and Lewis J. E., (2011), Refinement of a chemistry attitude measure for college students, J. Chem. Educ., 88(5), 561–568.
  109. Xu X., Villafane S. M. and Lewis J. E., (2013), College students’ attitudes toward chemistry, conceptual knowledge and achievement: structural equation model analysis, Chem. Educ. Res. Pract., 14, 188–200.
  110. Xu X., Alhooshani K., Southam D. and Lewis J. E., (2015), in Kahveci M. and Orgill M. (ed.), Affective Dimensions in Chemistry Education, Berlin Heidelberg: Springer, pp. 177–194.
  111. Yong A. G. and Pearce S., (2013), A beginner's guide to factor analysis: focusing on exploratory factor analysis, Tutorials in Quantitative Methods for Psychology, 9(2), 79–94.
  112. Zajonc R. B., (1980), Feeling and thinking: preferences need no inferences, Am. Psychol., 35(2), 151–175.
