Evaluation of the influence of wording changes and course type on motivation instrument functioning in chemistry
Abstract
Increased understanding of the importance of the affective domain in chemistry education research has led to the development and adaptation of instruments to measure chemistry-specific affective traits, including motivation. Many of these instruments are adapted from other fields by using the word ‘chemistry’ in place of other disciplines or more general ‘science’ wording. Psychometric evidence is then provided for the functioning of the newly adapted instrument. When an instrument is adapted from general to discipline-specific language (e.g. replacing ‘science’ with ‘chemistry’), an opportunity exists to compare the functioning of the original instrument with that of the adapted instrument in the same context. This information is important for understanding which types of modifications may have small or large impacts on instrument functioning and in which contexts these modifications may have more or less influence. In this study, data were collected from the online administration of scales from two science motivation instruments in chemistry courses for science majors and for non-science majors. Participants in each course were randomly assigned to view either the science version or the chemistry version of the items. Response patterns indicated that students respond differently to different wordings of the items, with generally more favorable responses to the science wording. Confirmatory factor analysis was used to investigate the internal structure of each instrument; however, acceptable data-model fit was not obtained under any administration condition. Additionally, no discernible pattern could be detected regarding which conditions showed better data-model fit. These results suggest that even seemingly small changes to item wording and administration context can affect instrument functioning, especially if the change in wording affects the construct measured by the instrument. This research further supports the need to provide psychometric evidence of instrument functioning each time an instrument is used and before any comparisons are made of responses to different versions of an instrument.