Investigating student and staff perceptions of students' experiences in teaching laboratories through the lens of meaningful learning

Stephen R. George-Williams,* Dimitri Karis, Angela L. Ziebell, Russell R. A. Kitson, Paolo Coppo, Siegbert Schmid, Christopher D. Thompson and Tina L. Overton
School of Chemistry, Monash University, Victoria, 3800, Australia. E-mail: stephen.george@monash.edu

Received 21st July 2018, Accepted 21st September 2018

First published on 21st September 2018


Abstract

How students behave and learn in the teaching laboratory is a topic of great interest in chemical education, partly because of the need to justify the considerable expense of teaching laboratories. Much effort has been put into investigating how students think, feel and physically act in these unique learning environments. One such attempt was made through the generation and utilisation of the Meaningful Learning in the Laboratory Instrument (MLLI). This 30-question survey utilised Novak's theory of Meaningful Learning to investigate the affective, cognitive and psychomotor domains of the student learning experience. To date, this survey has been used to great effect to measure how students' perceptions of their own feelings and actions change over the course of a semester. This study reports the use of a modified MLLI survey to probe how the expectations of students change over their undergraduate degree. To increase the generalisability of the outcomes of the study, data was gathered from four universities: three in Australia (Monash University, the University of New South Wales and the University of Sydney) and one in the UK (the University of Warwick). Students were found to start their university careers with very positive expectations of their teaching laboratory experiences, and their outlook became somewhat more negative with each year of enrolment. A further modified MLLI survey was presented to teaching associates and academic staff. Teaching staff were shown to hold far more negative expectations of students' feelings and actions, with academic staff more likely to believe that students do not undertake many of the actions associated with positive meaningful learning. Overall, this study highlights the large gap between the expectations of teaching staff and students which, if left unaddressed, will likely continue to cause great frustration for both groups.


Introduction

In almost every institution that teaches chemistry throughout the world, one would expect to encounter a teaching laboratory that complements the lectures and tutorials delivered to the students. Whilst it is commonly believed that these teaching laboratories aid student learning, scant meaningful evidence has been obtained to support such a claim (Hofstein and Lunetta, 1982; Hofstein and Lunetta, 2004). Indeed, there have been arguments over the need for teaching laboratories (Hawkes, 2004; Morton, 2005; Sacks, 2005; Stephens, 2005) with concerns raised that not all students continue in chemistry and therefore do not require the practical skills developed in laboratories. Furthermore, the laboratory teaching experiences themselves are often criticised for being too expository or recipe-based (Letton, 1987; Hodson, 1990), i.e. they rely heavily on laboratory manuals and cause students to simply follow a procedure (Domin, 2007). Overall, there is a need to investigate the value of the learning undertaken in teaching laboratories.

Galloway and Bretz (2015a) sought to meet this need through the generation of the Meaningful Learning in the Laboratory Instrument (MLLI). This 31-item survey consisted of a range of questions that were generated through the lens of Joseph Novak's Theory of Meaningful Learning and Human Constructivism (Novak, 1998). This theory focuses on the concept that true learning requires the overlap of the affective, cognitive and psychomotor domains of the students' thoughts and actions. Whilst many surveys exist in the literature with a focus on teaching laboratories, these tend to focus on just the cognitive domain (Grove and Bretz, 2007) or just the affective domain (Bauer, 2005, 2008; Xu and Lewis, 2011). The MLLI survey was the first survey to ‘focus solely on learning in the laboratory and to expressly operationalize a theory of learning’ (Galloway and Bretz, 2015a).

The original use of this survey focused on the changes in the students’ expectations in relation to teaching laboratories after one semester, in either first year general chemistry or first year organic chemistry. It was generally found that the expectations of the students were not being met by the experiments that they undertook (e.g. students felt that they were not thinking about what the molecules are doing). A second study was undertaken at a national level (Galloway and Bretz, 2015b), including 15 different institutions and the responses of 3853 students. With this data, the researchers were able to support the supposition that the mismatch between student expectations and experiences was a widely observed issue.

The aforementioned surveys primarily focused on the perspectives of students. The responses of teaching staff have also been investigated in the literature either through interviews (Bruck et al., 2010; Bretz et al., 2013) or the Faculty Goals Survey (Bruck and Towns, 2013; Bretz et al., 2016). It is worth noting, however, that these investigations tended to have a wide focus on the overall aims or goals of teaching laboratories rather than on the specific actions or feelings of students as raised by the MLLI survey. These studies indicated that academic staff tended to focus more on cognitive or psychomotor goals compared to affective ones.

Whilst these previous studies highlight the large amount of work already undertaken in this field, the results are difficult to compare to one another due to the different means of measurement and underlying focus. Additionally, the responses of students have tended to be sourced from first year cohorts and the viewpoint of teaching associates or laboratory demonstrators is currently missing from the literature.

To address these issues, this study utilised a modified MLLI survey to investigate the perceptions of both students and teaching staff of how students will act and feel during teaching laboratories. The study also investigated the responses of students in upper year levels in order to measure the longitudinal impact of multiple chemistry courses. Finally, data was collected at four different institutions across two different education systems, from both students and teaching staff, in order to increase the generalisability of any conclusions drawn and to allow staff views of student experiences to be compared with those of the students themselves.

Method

The aim of this study was to compare students' and teaching staff's perceptions of the cognitive, psychomotor and affective expectations of students during teaching laboratories. This was investigated using a quantitative analysis of responses to either a paper-based or online survey.

Data collection

Undergraduate students and teaching associates (sometimes referred to as laboratory demonstrators) were asked to answer the modified MLLI survey (Galloway and Bretz, 2015a) in paper format. The scale was modified from the original electronic slider (0–100%) into a five point Likert scale (strongly disagree, disagree, neutral, agree and strongly agree) at Monash University, the University of New South Wales (UNSW) and the University of Warwick, and into a 1–10 scale (i.e. on a scale of 1–10, how much do you agree with the statement) at the University of Sydney. The modified survey also included some general demographic questions: age and gender, course choice and domestic or international enrolment for students, or the amount of teaching or industry experience for teaching associates. Academic staff were asked to complete another modified survey, either in paper format or through an online Google form. Teaching associates and academics were approached via email. All participants were informed that the survey was voluntary and would not affect either their academic standing or employment.

At Monash University, all students enrolled in chemistry courses, at any year level, were given the opportunity to complete the survey. In total, two first year courses, three second year courses and four third year courses were surveyed. There were a total of around 1700 students enrolled in these courses in mid-2016. Teaching associates were asked to complete the survey at the end of a compulsory training session in early 2017. There were approximately 120 teaching associates present who taught across all year levels.

At UNSW, the students were asked to complete the survey at the end of their first teaching laboratory in semester 1, 2017. Unlike at Monash University, access to all chemistry courses was not readily available, so only three second year and two third year courses were surveyed. The number of enrolled students was similar to Monash University, but only around 1350 received the survey.

The responses for the University of Sydney were collected during a typical laboratory session early in semester 1, 2017. Students were asked to complete the survey before the commencement of their experiments for the day. The number of enrolled students was similar to Monash University.

Responses from students at the University of Warwick were obtained during three separate events where a lunch was provided, scheduled on days when students were undertaking laboratory exercises early in their academic year (late 2016). Third year students were not surveyed until May 2017 due to scheduling commitments. There were approximately 490 students enrolled in the first three year levels at the University of Warwick at the time of data collection.

Monash University, the University of Sydney and the University of New South Wales represent three of the eight Australian universities known as the ‘Group of Eight’. These universities routinely perform very well on international teaching and research ranking platforms. The University of Warwick is a member of the ‘Russell Group’ in the United Kingdom. It is a highly prestigious university that also performs well on international teaching and research ranking platforms.

Research theoretical framework

The primary theoretical framework of this work is Constructivism, which postulates that learning is an active process that builds upon the prior experiences of the learner (Leal Filho and Pace, 2016). Hence, in the case of a respondent's perceptions of how students act and feel during teaching laboratory experiments, it is postulated that their responses will be a direct result of their experiences prior to answering the closed questions.

Data analysis

The surveys completed by 3202 undergraduate students, 143 teaching associates and 102 academic staff were transcribed into Excel after recoding (e.g. Strongly Disagree = 1, Disagree = 2, Neutral = 3, Agree = 4 and Strongly Agree = 5). Data from the University of Sydney was originally on a 1–10 scale, which was then recoded (1–2 became 1, 3–4 became 2, 5–6 became 3, 7–8 became 4 and 9–10 became 5) in order to allow for common methods of analysis.

To ensure the questions within the survey held a reasonable degree of internal consistency, Cronbach's alpha was calculated in SPSS for all student responses and found to be 0.756. As this value was above the commonly cited threshold of 0.7 (Nunnally and Bernstein, 1994), and is likely an underestimate given the known behaviour of Cronbach's alpha on ordinal data (Gadermann et al., 2012), the internal consistency of the survey was considered reasonable.

In order to determine the significance of any differences between teaching staff and students, universities, year levels or demographics, the coded data was analysed as frequencies (e.g. the number of respondents selecting agree) in SPSS. Using the overall data, an omnibus test was first performed in the form of an F-test, with effect sizes measured through a calculation of eta-squared (η2). For example, the responses from 16–18 year olds to all questions were directly compared to how all students (of any age) responded. Further comparisons of how the 16–18 year olds answered individual questions were only performed if the F-test showed no significant difference or exhibited an effect size <0.04. Hence, this test ensured that any differences noted later on were more likely due to how the cohort responded to a specific question rather than how they responded to every question. Comparison of group responses to individual questions was achieved using a Pearson's chi-squared test, checking that differences held to a Holm–Bonferroni corrected confidence interval (i.e. p ≤ 0.05 became p ≤ 0.002 for the first nine questions compared). Cramér's V (ϕC) was also calculated in order to measure the effect sizes of any differences (df = 4; small, 0.05–0.15; medium, 0.15–0.25; large, >0.25) (Sheskin, 2003).

Limitations

The first limitation of this study is the change in scale as compared to the original electronic slider (0–100%) of the MLLI survey. This change complicates direct comparison between the results generated in this study and those previously reported. The change was made to allow for a paper format rather than an electronic version, in order to increase the number of responses received from students and teaching staff. As such, the transcribed data was treated differently to the original work, i.e. a question-by-question analysis with a Pearson's chi-squared test. It is worth noting that a factor analysis did not reveal factors that aligned with the original ones reported by Galloway and Bretz (2015a) (affective, cognitive and affective/cognitive). This is possibly because the students at these universities did not perceive which items relate to the cognitive, psychomotor or affective domains, i.e. the students potentially exhibit weak meaningful connections to the questions being raised. Therefore, direct comparison between the factors raised in the original MLLI work and this study is not achievable.

The second limitation of this work is the variation in the percentage of respondents from any given group of students, which ranged from 22% to 80% of the various cohorts surveyed. The approach of Barlett et al. (2001) can be used to evaluate whether each data set is representative of the respective cohort. However, the application of these minimum sample sizes is complicated, as this study uses ordinal data rather than the continuous or categorical data considered in the literature. If one utilises the lower acceptable sample sizes for continuous data with an alpha of 0.05 (i.e. a 5% error), there are sufficient responses in eight of the twelve subgroups (e.g. first year students at Monash University) to statistically represent those subgroups. The four that fail to meet these minimum requirements (UNSW, third year students; the University of Sydney, second and third year students; the University of Warwick, second year students) tend to fall short by only 10–15 responses. As such, whilst these data sets may not be fully representative of their respective cohorts, their use as comparison data likely mitigates this issue.

The number of responses from teaching associates at Monash University (111) is considered statistically representative of that group. However, the number of responses at the University of Warwick (32) is significantly fewer than required for statistical representation. Therefore, the data from the teaching associates at the University of Warwick is unlikely to be truly representative. The responses from academic staff were collected at a range of institutions in Australia and the United Kingdom and are, therefore, non-representative of any given institution or country.

The third limitation of this work is the change in scale utilised in the data collected from the University of Sydney (1–10 rather than strongly disagree to strongly agree). This was simply due to multiple research groups gathering data individually prior to later collaboration. This could potentially result in erroneous conclusions being drawn from that data. However, as the data were being used as a comparison data set, this effect was believed to have been minimal.

The fourth limitation is the statistical test used to compare the data sets. The F-test used to measure overall variation (an omnibus test) compares the mean values of multiple data sets, which is usually considered poor practice for ordinal data. However, measuring the same data through a Pearson's chi-squared test (which required strict control of sample sizes) generally revealed either no significant differences or differences with little to no practical relevance (i.e. ϕC < 0.1). Furthermore, the Pearson's chi-squared test is strictly a test for categorical data (Pearson, 1900), whereas the data produced by this survey are ordinal. Another common statistical test, the Mann–Whitney U, can be used for ordinal data (Nachar, 2008); however, it cannot be used to compare data sets that do not have the same overall shape (e.g. a bimodal response compared to a unimodal response). To avoid this issue, the Pearson's chi-squared test was used. It is possible that the choice of statistical test affected the results, but a comparison of the outcomes of applying both tests to a single data set (male-identifying first year students vs. female-identifying first year students at Monash University) showed minimal differences between them.

Finally, as this survey was sourced from the literature with no changes made to the parent statements (other than changing the focus for teaching staff), it was believed that the validity and reliability measures undertaken by the original authors of the survey would suffice in this case. As such, no further attempts were made to measure or ensure the validity or reliability of this survey.

Results and discussion

The cohorts of students who responded to the survey and their demographic data are shown in Table 1.
Table 1 The demographic breakdown of the student cohorts by university, year level, gender identity, age and enrolment. Values are given as 1st year/2nd year/3rd year

Monash University (N): 965/210/159
Gender (%): M 46/51/60; F 52/47/37; Other 2/2/2
Age (%): 16–18 46/85/1; 19–21 49/6/71; 22+ 5/9/28
Enrolment (%): Domestic 92/92/90; International 8/8/10

UNSW (N): 419/238/55
Gender (%): M 52/43/51; F 48/57/45; Other 0/0/4
Age (%): 16–18 65/12/0; 19–21 29/79/80; 22+ 6/9/20
Enrolment (%): Domestic 88/75/85; International 12/25/15

USyd (N): 728/68/77
Gender (%): M 40/59/47; F 58/40/53; Other 2/1/0
Age (%): 16–18 79/51/6; 19–21 16/41/81; 22+ 5/7/13

University of Warwick (N): 148/58/77
Gender (%): M 45/49/54; F 55/51/46; Other 0/0/0
Age (%): 16–18 51/0/0; 19–21 48/100/70; 22+ 1/0/30
Enrolment (%): Domestic 90/95/90; International 10/5/10


Generally, there were approximately equal numbers of male-identifying and female-identifying students. Of the students enrolled, approximately 10% were international students. The majority of the students were aged 16–18 in their first year of study, with older students enrolled in higher year levels as expected. As this data matched the demographic enrolment of the individual cohorts, the data appears to be representative of the student bodies, at least in terms of gender, age and enrolment.

Demographic effects

The survey was first used to investigate demographic effects on the largest data set available – the Monash University first year cohort. When enrolment and gender were considered, only one question showed significantly different responses. Furthermore, Cramér's V indicated only a small effect size for that question, i.e. whilst the difference was statistically significant, its magnitude was quite small. Investigating the effect of age resulted in no questions showing significant differences. Hence, gender, age and enrolment were not considered significant sources of variation in this study.

Overall, demographic categories did not appear to have particularly notable effects on the responses. Hence, demographics will not be raised again and are considered an unlikely source of any changes noted.

First year students

Comparisons of the responses of the first year students from three of the institutions were made. Each institution's results were compared with those from Monash University, as this was the largest data set. The data from the University of Sydney was excluded from this analysis, as any discrepancies in the responses could not be deconvoluted from the change in scale (a 1–10 scale versus strongly disagree–strongly agree).

Before a direct comparison could be made, the large difference in sample size between Monash University first year students (965) and UNSW first year students (419) needed to be taken into account. Hence, only every second result from Monash University was utilised in the analysis (463 in total). This method of analysis has been shown to be valid and accounts for the sensitivity of the Pearson's chi-squared test to large variations in sample size (Barlett et al., 2001). Only two questions showed a significant difference between student responses (p ≤ 0.0005), and both exhibited only a medium effect size (ϕC = 0.182 and 0.159). Overall, these results indicate a minimal difference between the responses of first year Monash University and UNSW students. As these universities are both high ranking international universities within the same education system, this lack of difference is not surprising.

To compare the results of Monash University and the University of Warwick, the sample sizes (965 and 148 respectively) were again accounted for. Therefore, every fifth response from Monash University was used for the analysis (193 in total). Again, only two questions showed a significant difference (p < 0.0005), with both exhibiting a large effect size (ϕc = 0.253 and 0.275). These results are shown in Table 2.

Table 2 Questions that showed a significant difference when comparing the responses of Monash University first year students (MUFYS) with first year students enrolled at the University of Warwick (UWFYS), as measured by the Pearson's chi-squared test. Due to the 95% confidence interval chosen, only changes above 5% are noted
Question showing significant difference | >5% decrease in response (MUFYS compared to UWFYS) | >5% increase in response (MUFYS compared to UWFYS) | p value | ϕC
To be confused about how the instruments work. | Neutral | Agree | <0.0005 | 0.275
To be excited to do chemistry. | Strongly agree | Neutral | <0.0005 | 0.253


These results appear to show that the first year cohort at the University of Warwick had very similar expectations to those at Monash University or UNSW. The only notable differences were that students at the University of Warwick may be more likely to expect excitement, or to be confused by instrumentation. This is potentially a result of the UK university system, where students choose a single discipline (e.g. chemistry), compared to the Australian system, where students undertake a more general education.

Higher year students

The responses from the first year students were compared to the second and third year students at each of the four institutions. In order to compare the overall effect of each laboratory program, changes in the responses to specific questions that resulted in at least a medium effect size between the first year and the third year data are shown in Table 3. It is worth noting that subsets of the data were again required in order to ensure more comparable sample sizes.
Table 3 Questions that showed a significant difference when comparing the responses of first year students (FY) with third year students (TY) as measured by the Pearson's chi-squared test. Due to the 95% confidence interval chosen, only changes above 5% are noted
Question showing significant difference | >5% decrease in response (TY compared to FY) | >5% increase in response (TY compared to FY) | p value | ϕC
Monash University
To make decisions about what data to collect. | Agree or strongly agree | Neutral or disagree | <0.0005 | 0.278
To use my observations to understand the behaviour of atoms and molecules. | Strongly agree | Neutral, agree or disagree | <0.0005 | 0.263
To think about what the molecules are doing. | Strongly agree | Neutral or agree | 0.002 | 0.219
University of Sydney
To be frustrated. | Strongly disagree or disagree | Agree | <0.0005 | 0.353
To be confident when using equipment. | Strongly agree | Neutral or disagree | 0.001 | 0.319
The University of Warwick
To be excited to do chemistry. | Strongly agree | Neutral | 0.001 | 0.277
To worry about finishing on time. | Disagree or neutral | Agree or strongly agree | 0.002 | 0.267
To make decisions about what data to collect. | Disagree or neutral | Strongly agree | 0.002 | 0.264


The results from the UNSW students showed no changes when comparing first year to third year students. This could be the result of either a non-impactful laboratory program or a program that met student expectations, but it is worth noting that only a small number of third year students responded to the survey (n = 58). Hence, it is possible that this result simply underestimates the effect at UNSW. At Monash University, the University of Sydney and the University of Warwick, Table 3 indicates almost no effect of the laboratory programs on the expectations of the students, as only two to three questions showed a significant difference at any given university.

It is worth noting that the second year data indicates a combination of gradual and erratic changes (e.g. inconsistent changes between year levels and even bimodal data). However, most questions changed in an overall gradual manner, and such changes are therefore more likely an artefact of factors that change with time (e.g. experience or maturation). An example from the University of Sydney is shown in Fig. 1.


Fig. 1 The percentage of students selecting a given response at the University of Sydney to the statement ‘I expect to be frustrated’.

Of the questions that do show significant change, it would appear that they represent students either experiencing fatigue with the teaching laboratory experience or becoming more realistic in their expectations, having lost their original assumptions over time. There was only one positive change noted, at the University of Warwick, where students became more likely to expect to ‘make decisions about what data to collect’. This may be because a larger portion of the third year teaching laboratories at the University of Warwick are open-ended and project-based (as compared to the Australian universities), likely explaining the students’ increasing expectations around making decisions. It is also worth noting that the questions that showed a significant change are not consistent between the universities. Hence, the MLLI survey is able to differentiate between the effects of different laboratory programs on student expectations. Overall, this investigation would appear to show that student expectations are not being significantly changed by their experiences.

However, to the extent that change did occur, students generally harboured more negative expectations as they proceeded further into their undergraduate experiences. Whether this is due to more challenging higher year experiences, poorly designed laboratory programs, student maturation or student fatigue is difficult to discern at this point, and the change in student perception is likely a combination of these factors. That being said, some items, such as the level of student decision making, can be directly impacted by the design of the laboratory program. These results are consistent with the original MLLI results, which showed that many students held expectations that were not being met by their current teaching laboratories (Galloway and Bretz, 2015a).

Teaching associates

As teaching associates can have a significant impact on the engagement/learning of the students, their beliefs about the behaviours of students were also considered. Hence, teaching associates were asked to answer a modified form of the survey with the parent statement changed from ‘I expect to…’ to ‘I think the students will…’. Table 4 shows the demographic breakdown of the teaching associates. Note that there were too few responses from the UNSW and the University of Sydney teaching associates to be considered representative, hence these data sets are not shown.
Table 4 The demographic breakdown of the teaching associates who responded to the survey at Monash University and the University of Warwick

Monash University (N = 111)
Gender (%): M 54; F 45; Other 1
Age (%): 19–21 9; 22–24 61; 25+ 34
Teaching experience (%): <one year 44; ≥one year 56
Industry experience (%): <one year 69; ≥one year 31
Occupation (%): Postgraduate student 75; Other 25

University of Warwick (N = 32)
Gender (%): M 57; F 40; Other 3
Age (%): 22–24 50; 25+ 50
Teaching experience (%): <one year 30; ≥one year 70
Industry experience (%): <one year 78; ≥one year 22
Occupation (%): Postgraduate student 100


Consideration of demographics via the Pearson's chi-squared test revealed no significant differences within any demographic group (gender, age or teaching/industry experience). This may be because the demographics of the respondents had no effect on their responses, but it is also possible that there were simply not enough responses from teaching associates to measure these effects.

A comparison was then made between the teaching associates and the students. As third year students have had the most laboratory experience, it was decided to compare the teaching associates to these students (as the two groups were likely to be most closely aligned). Consequently, the responses of 111 Monash University teaching associates were compared to all 159 Monash University third year student responses.

This comparison is the only example in this study where the F-test indicated that teaching associates responded in a significantly different manner to students overall (p < 0.005, η2 = 0.06). However, it is worth noting that the effect size, whilst technically moderate (i.e. 0.04 < η2 < 0.36), sits at the lower end of that range and likely has little practical significance.

Of the 30 questions in the survey, 21 showed a significant difference with a large effect size. The questions that yielded these differences are shown in Table 5, which highlights several important differences.

Table 5 Questions that showed a significant difference when comparing the responses of third year students with teaching associates at Monash University, as measured by the Pearson's chi-squared test. Due to the 95% confidence interval chosen, only changes above 5% are noted. NB: the final p value shown (0.003) is strictly above the threshold due to the Holm–Bonferroni method used instead of just a Bonferroni method
Question showing significant difference | >5% decrease in response (teaching associates compared to students) | >5% increase in response (teaching associates compared to students) | p value | ϕC
Be confused about how the instruments work. | Strongly disagree or disagree | Neutral | <0.0005 | 0.584
Be nervous about making mistakes. | Strongly disagree or disagree | Agree or strongly agree | <0.0005 | 0.568
Be confused about the underlying concepts. | Strongly disagree or disagree | Agree or strongly agree | <0.0005 | 0.568
Focus on procedures, not concepts. | Disagree | Agree or strongly agree | <0.0005 | 0.528
Be confident when using equipment. | Agree or strongly agree | Neutral or disagree | <0.0005 | 0.524
Feel intimidated. | Strongly disagree or disagree | Neutral, agree or strongly agree | <0.0005 | 0.520
Feel unsure about the purpose of the procedures. | Strongly disagree or disagree | Agree or strongly agree | <0.0005 | 0.516
Be frustrated. | Strongly disagree or disagree | Agree or strongly agree | <0.0005 | 0.512
Think about what the molecules are doing. | Strongly agree or agree | Disagree or neutral | <0.0005 | 0.510
Be nervous when handling chemicals. | Strongly disagree or disagree | Agree or strongly agree | <0.0005 | 0.509
Feel disorganized. | Strongly disagree or disagree | Agree | <0.0005 | 0.500
Worry about finishing on time. | Strongly disagree or disagree | Agree or strongly agree | <0.0005 | 0.483
Be confused about what their data means. | Disagree or neutral | Agree or strongly agree | <0.0005 | 0.470
Worry about getting good data. | Strongly disagree or disagree | Agree or strongly agree | <0.0005 | 0.460
Interpret their data beyond only doing calculations. | Agree or strongly agree | Neutral | <0.0005 | 0.432
Use their observations to understand the behaviour of atoms and molecules. | Agree | Neutral or disagree | <0.0005 | 0.418
Worry about the quality of their data. | Strongly disagree or disagree | Agree or strongly agree | <0.0005 | 0.385
Be excited to do chemistry. | Agree or strongly agree | Neutral | <0.0005 | 0.343
Consider if their data makes sense. | Strongly agree | Disagree or neutral | 0.001 | 0.296
Develop confidence in the laboratory. | Strongly agree | Neutral | 0.001 | 0.285
Learn critical thinking skills. | Strongly agree | Neutral or disagree | 0.003 | 0.270


Firstly, the effect sizes are much larger than those previously noted in this study, with 16 questions showing effect sizes above 0.4. This alone indicates a large variation in opinion between the teaching associates and the students. Secondly, many of the questions (13) saw drastic shifts from the students' responses of strongly disagree/disagree to the teaching associates' responses of agree/strongly agree. These questions were generally items that would have a negative impact on students' meaningful learning (e.g. to be frustrated or to feel disorganised). This implies that teaching associates were far more likely to believe that students would experience negative emotions and undertake actions that would hinder their meaningful learning. An example shift is shown in Fig. 2.


Fig. 2 The percentage of third year students or teaching associates selecting a given response to the statement ‘I expect to/I think the students will: feel unsure about the purpose of the procedures’.

Teaching associates also tended not to agree with many items (eight in total) that were positive for the students' meaningful learning. Questions that focused on student confidence, interpretation of data or even just general excitement were met with more neutral responses. Overall, the results from the Monash University teaching associates indicated that teaching staff held far more negative views of the experiences that they expected students to have during teaching laboratories. Furthermore, as this comparison was made with the more negative third year cohort, this issue would likely be even more prominent for first year classes. It is also worth noting that even though a large number of the teaching associates were relatively inexperienced postgraduate students, these differences were already beginning to surface in their expectations of students during teaching laboratories.

The uniformity of the teaching associate responses across multiple institutions was also considered. As sufficient responses were only obtained from Monash University and the University of Warwick (111 and 32, respectively), only this comparison was made. No questions were found to be answered in a significantly different manner.

With no differences noted, it would appear that the teaching associates held relatively similar views at the two universities, potentially implying that these results may be somewhat generalisable to other universities.

These comparisons highlight the need to ensure that the teaching associates are adequately trained in order to either deal with potential pitfalls for the students or to recognise their own potential negativity. These results also highlight a large mismatch between student and staff expectations which, left unaccounted for, could lead to greater frustration for both students and staff. Lastly, these results indicate that whilst teaching staff can vary greatly between institutions, their overall viewpoints may be somewhat similar with regards to their perceptions of students’ experiences during teaching laboratories.

Academic staff

Academic members of staff are often responsible for training teaching associates and regularly interact with students directly during teaching laboratories (although often to a lesser degree than teaching associates). Therefore, their perceptions of what students would think and do during teaching experiences were also explored. The demographics of the 102 academics who responded to the survey are shown in Table 6.
Table 6 The demographic breakdown of the 102 academic staff who responded to the survey
Gender: Male 74%; Female 22%; Rather not say 4%
Teaching experience: 1–3 years 11%; 4–6 years 17%; 7+ years 71%
Industry experience: 0 years 77%; 1–2 years 14%; 3–4 years 0%; 5+ years 7%
Professional title: Professor 21%; Associate Professor 21%; Senior Lecturer 15%; Lecturer 24%; Other 19%


Generally speaking, most of the academic respondents identified as male, had been teaching for a significant length of time, had limited industrial experience and held a range of professional titles. No demographic analysis could be undertaken, as the numbers within many of the demographic sub-groups were too small for meaningful statistical analysis. It is important to note that the academic respondents were employed at a range of institutions across Australia and the UK; consequently, the responses from the academics were considered as a single group. The responses of the academic staff were directly compared with those of the teaching associates, and seven of the 30 items were responded to in a significantly different manner (p < 0.0005 to p = 0.002). Six differences exhibited a large effect size (ϕC = 0.263–0.377) whilst one had a medium effect size (ϕC = 0.248) (Table 7).

Table 7 Questions that showed a significant difference when comparing the responses of teaching associates with academic staff, as measured by the Pearson's chi-squared test. Due to the 95% confidence interval chosen, only changes above 5% are noted
Question showing significant difference | >5% decrease in response (academics compared to teaching associates) | >5% increase in response (academics compared to teaching associates) | p value | ϕC
Make decisions about what data to collect. | Agree or strongly agree | Disagree | <0.0005 | 0.377
Consider if their data makes sense. | Agree or strongly agree | Neutral, disagree or strongly disagree | <0.0005 | 0.304
Focus on procedures, not concepts. | Neutral | Strongly agree | <0.0005 | 0.298
Learn critical thinking skills. | Strongly agree | Neutral or agree | <0.0005 | 0.289
Be confused about how the instruments work. | Strongly agree | Neutral | 0.001 | 0.263
Learn chemistry that will be useful in their life. | Agree | Neutral or disagree | 0.002 | 0.263
Learn problem solving skills. | Strongly agree | Neutral or agree | 0.002 | 0.248


An example of a particularly large effect size, where academic staff, students and teaching associates from all universities answered significantly differently, is shown in Fig. 3. Generally speaking, academic staff were less sure than the teaching associates that students would engage in aspects of meaningful learning, such as making decisions, considering if their data made sense or learning critical thinking skills. Furthermore, the academics maintained the teaching associates' belief that the students would encounter experiences negative to their meaningful learning (as indicated by a lack of significant differences on those prompts).


Fig. 3 The percentage of academics, teaching associates and students selecting a given response to the statement ‘I expect to/I think the students will: make decisions about what data to collect’.

Hence, this data implies that the responses of the academic staff contrasted even more strongly with those of the students. No direct comparisons between the responses of academic staff and students were made due to issues of varied year levels, sample sizes, scales used and institutions. As before, this data implies that there is a very large gap between student expectations and the expectations of the teaching staff.

Whilst it is possible that many of these differences are due to student naiveté and teaching staff experience (or simply a large disconnect between academic and student viewpoints), this cannot explain every variation. For example, students expect to make decisions whilst academics believe that they will not. This lack of belief in student inquiry is more likely due to a simple lack of inquiry-based experiences, which could be rectified through an increase in such activities. Hence, these items should be individually probed at any institution that endeavours to enhance the student experience by better matching the laboratory program with the expectations held by the students, where possible.

Conclusions

This work highlights the significant gap between the expectations of students and teaching staff with regards to the experiences of students in undergraduate teaching laboratories, as seen through the lens of Novak's meaningful learning. Through the use of the MLLI survey, data concerning the expectations of students at three Australian universities (UNSW, the University of Sydney and Monash University) and one UK university (the University of Warwick) was collected and analysed through the use of Pearson's chi-squared test. The survey was delivered in paper format whilst the students were either completing (or about to undertake) an experiment, during a safety induction or at a lunch provided for them.

In total, 3202 students responded to the survey across all four institutions. In general, students tended to start their university careers with positive expectations of teaching laboratories. This was noted through students agreeing with the statements on the MLLI survey that aided their meaningful learning (e.g. I expect to be excited). Students also tended to select neutral, disagree or strongly disagree for statements that would have a negative impact on their meaningful learning (e.g. I expect to be frustrated). As students progressed through the laboratory program at any of the four universities, their responses generally did not change. However, a few minor changes appeared to indicate a small shift to a more negative outlook (albeit with a large number of the changes only showing a medium effect size). Each laboratory program elicited slightly different changes, indicating that the students' expectations were likely shifting because of the experiences provided by the institutions rather than maturity or some other factor that changed with time.

The responses of 143 teaching associates from Monash University and the University of Warwick were analysed. Very few differences were found between the responses of teaching associates at the two institutions, implying a degree of generalisability to these results. When compared to the students, a large number of questions were answered in a significantly different manner. Overall, teaching associates were far more likely to think that students would undertake actions that would hinder their meaningful learning (e.g. I think that the students will be confused about how the instruments work). Furthermore, teaching associates were also more likely to select a neutral or disagree/strongly disagree response to many of the positive items on the survey (e.g. I think the students will learn critical thinking skills). It would appear that the teaching associates held a far more negative, or perhaps pragmatic, view of the students during teaching laboratories. Further investigations (such as interviews or focus groups) would be required to probe the exact nature of this difference.

The viewpoints of academic staff were also sought. In total, 102 academic responses were collected; these were very similar to those of the teaching associates (i.e. they also tended to think that students would undertake actions that would negatively affect their meaningful learning). However, when considering the positive items on the survey, academic staff were even more likely to select neutral, disagree or strongly disagree for a range of statements such as ‘I think the students will consider if their data makes sense’. It would appear that academic staff held an even more negative, or more pragmatic, view of the students during an undergraduate teaching laboratory.

Overall, students appeared to be generally optimistic about their laboratory experiences, an optimism that was underestimated by both the teaching associates and the academic staff. It is important to recognise these differences of opinion in order to better manage the learning experience and expectations of both students and teaching staff. This is particularly important where mismatches occur on specific items that are potentially avoidable (such as decision making or concerns around timely completion of teaching laboratories).

Conflicts of interest

There are no conflicts to declare.

Appendix

The questions asked of the students and teaching staff are shown below in Table 8.
Table 8 Questions asked on the MLLI survey. Note that students were asked ‘I expect to…’ whereas teaching staff were asked ‘I think the students will…’
When performing experiments in a chemistry laboratory course, I expect/I think the students will…
1. Learn chemistry that will be useful in my life.
2. Worry about finishing on time.
3. Make decisions about what data to collect.
4. Feel unsure about the purpose of the procedures.
5. Experience moments of insight.
6. Be confused about how the instruments work.
7. Learn critical thinking skills.
8. Be excited to do chemistry.
9. Be nervous about making mistakes.
10. Consider if my data makes sense.
11. Think about what the molecules are doing.
12. Feel disorganized.
13. Develop confidence in the laboratory.
14. Worry about getting good data.
15. Find the procedures simple to do.
16. Be confused about the underlying concepts.
17. “get stuck” but keep trying.
18. Be nervous when handling chemicals.
19. Think about chemistry I already know.
20. Worry about the quality of my data.
21. Be frustrated.
22. Interpret my data beyond only doing calculations.
23. Please select both agree and disagree for this question.
24. Focus on procedures, not concepts.
25. Use my observations to understand the behaviour of atoms and molecules.
26. Make mistakes and try again.
27. Be intrigued by the instruments.
28. Feel intimidated.
29. Be confused about what my data mean.
30. Be confident when using equipment.
31. Learn problem solving skills.


Acknowledgements

The authors would like to acknowledge and thank, first and foremost, the students, teaching associates and academics who participated in this work, whose honest feedback will result in a far stronger learning experience for undergraduates at the participating institutions. The authors would also like to acknowledge the technical staff at Monash University, the University of New South Wales, the University of Sydney and the University of Warwick, without whom this work would not have been possible. The authors acknowledge Monash University for funding and hosting the Transforming Laboratory Learning program. The authors also acknowledge the Monash–Warwick Alliance seed fund for additional funding. Ethics approval was obtained from the Monash University Human Ethics Research Committee (application number 2016000584) and the University of Sydney Human Ethics Research Committee (application number 2016/902). Work at the University of Warwick and the University of New South Wales was performed under the Monash University ethics application.

References

  1. Barlett J. E., Kotrlik J. W. and Higgins C. C., (2001), Organizational research: determining appropriate sample size in survey research, Inf. Technol., Learn., Perform. J., 19(1), 43.
  2. Bauer C. F., (2005), Beyond “Student Attitudes”: Chemistry Self-Concept Inventory for Assessment of the Affective Component of Student Learning, J. Chem. Educ., 82(12), 1864,  DOI:10.1021/ed082p1864.
  3. Bauer C. F., (2008), Attitude toward Chemistry: A Semantic Differential Instrument for Assessing Curriculum Impacts, J. Chem. Educ., 85(10), 1440,  DOI:10.1021/ed085p1440.
  4. Bretz S. L., Fay M., Bruck L. B. and Towns M. H., (2013), What Faculty Interviews Reveal about Meaningful Learning in the Undergraduate Chemistry Laboratory, J. Chem. Educ., 90(3), 281–288,  DOI:10.1021/ed300384r.
  5. Bretz S. L., Galloway K. R., Orzel J. and Gross E., (2016), Faculty Goals, Inquiry, and Meaningful Learning in the Undergraduate Chemistry Laboratory, in Technology and Assessment Strategies for Improving Student Learning in Chemistry, American Chemical Society, vol. 1235, pp. 101–115.
  6. Bruck A. D. and Towns M., (2013), Development, Implementation, and Analysis of a National Survey of Faculty Goals for Undergraduate Chemistry Laboratory, J. Chem. Educ., 90(6), 685–693,  DOI:10.1021/ed300371n.
  7. Bruck L. B., Towns M. and Bretz S. L., (2010), Faculty Perspectives of Undergraduate Chemistry Laboratory: Goals and Obstacles to Success, J. Chem. Educ., 87(12), 1416–1424,  DOI:10.1021/ed900002d.
  8. Domin D. S., (2007), Students' perceptions of when conceptual development occurs during laboratory instruction, Chem. Educ. Res. Pract., 8(2), 140–152,  10.1039/B6RP90027E.
  9. Gadermann A. M., Guhn M. and Zumbo B. D., (2012), Estimating ordinal reliability for Likert-type and ordinal item response data: a conceptual, empirical, and practical guide, Pract. Assess., Res. Eval., 17(3), 1–13.
  10. Galloway K. R. and Bretz S. L., (2015a), Development of an Assessment Tool To Measure Students’ Meaningful Learning in the Undergraduate Chemistry Laboratory, J. Chem. Educ., 92(7), 1149–1158,  DOI:10.1021/ed500881y.
  11. Galloway K. R. and Bretz S. L., (2015b), Measuring Meaningful Learning in the Undergraduate Chemistry Laboratory: A National, Cross-Sectional Study, J. Chem. Educ., 92(12), 2006–2018,  DOI:10.1021/acs.jchemed.5b00538.
  12. Grove N. and Bretz S. L., (2007), CHEMX: An Instrument To Assess Students' Cognitive Expectations for Learning Chemistry, J. Chem. Educ., 84(9), 1524,  DOI:10.1021/ed084p1524.
  13. Hawkes S. J., (2004), Chemistry Is Not a Laboratory Science, J. Chem. Educ., 81(9), 1257,  DOI:10.1021/ed081p1257.
  14. Hodson D., (1990), A critical look at practical work in school science, Sch. Sci. Rev., 70(256), 33–40.
  15. Hofstein A. and Lunetta V. N., (1982), The Role of the Laboratory in Science Teaching: Neglected Aspects of Research, Rev. Educ. Res., 52(2), 201–217,  DOI:10.3102/00346543052002201.
  16. Hofstein A. and Lunetta V. N., (2004), The laboratory in science education: foundations for the twenty-first century, Sci. Educ., 88(1), 28–54.
  17. Leal Filho W. and Pace P., (2016), Teaching Education for Sustainable Development at University Level, Springer.
  18. Letton K. M., (1987), A study of the factors influencing the efficiency of learning in an undergraduate chemistry laboratory, MPhil thesis, Glasgow, Scotland: Jordanhill College of Education.
  19. Morton S. D., (2005), Response to “Chemistry Is Not a Laboratory Science”, J. Chem. Educ., 82(7), 997,  DOI:10.1021/ed082p997.1.
  20. Nachar N., (2008), The Mann–Whitney U: a test for assessing whether two independent samples come from the same distribution, Tutor. Quant. Methods Psychol., 4(1), 13–20.
  21. Novak J. D., (1998), Learning, creating, and using knowledge, Mahwah, NJ: Erlbaum.
  22. Nunnally J. C. and Bernstein I. H., (1994), Psychometric theory, New York: McGraw-Hill.
  23. Pearson K., (1900), On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 50(302), 157–175,  DOI:10.1080/14786440009463897.
  24. Sacks L. J., (2005), Reaction to “Chemistry Is Not a Laboratory Science”, J. Chem. Educ., 82(7), 997,  DOI:10.1021/ed082p997.2.
  25. Sheskin D. J., (2003), Handbook of parametric and nonparametric statistical procedures, CRC Press.
  26. Stephens C. E., (2005), Taking Issue with “Chemistry Is Not a Laboratory Science”, J. Chem. Educ., 82(7), 998,  DOI:10.1021/ed082p998.1.
  27. Xu X. and Lewis J. E., (2011), Refinement of a Chemistry Attitude Measure for College Students. J. Chem. Educ., 88(5), 561–568,  DOI:10.1021/ed900071q.
