A study of approaches to solving open-ended problems in chemistry

Tina Overton*, Nicholas Potter and Christopher Leng
University of Hull, Cottingham Road, Hull, HU6 7RX, UK. E-mail: t.l.overton@hull.ac.uk

Received 18th February 2013, Accepted 31st May 2013

First published on 7th June 2013


Abstract

This paper describes the outcomes of a qualitative investigation into the range of different approaches that students use to solve open-ended, context-rich problems. The study involved a small cohort of students individually solving open-ended, context-rich problems using a think aloud protocol. The problems required the students to develop a strategy, to identify data required and to make estimations. The problems did not lead to a single correct answer but rather a range of acceptable answers. Analysis of the transcripts and recordings of the sessions resulted in a limited number of categories of approaches to solving the problems. Applying these categories to individual students showed that there were three different types of problem-solver, described here as novice, expert and transitional. The results provide insight that will help tutors change how they, and subsequent students, approach problem solving.


Introduction

We have previously reported how the three cognitive variables, working memory, M capacity and field dependence, affect students' achievement in solving open-ended, context-rich problems (Overton and Potter, 2011). The results of that study showed a difference between the cognitive variables required for success in traditional algorithmic problems and open-ended problems, with field dependence crucial for solving open-ended problems. Overall degree scores correlated strongly with algorithmic problem-solving scores and much more weakly with scores on open-ended problem solving. Interestingly, the context-rich open-ended problems significantly shifted students' attitudes, making them more positively disposed towards problem solving. This qualitative study investigates the different ways in which students approach such problems, providing insight into the distinct strategies used.

Problem solving has been defined as ‘what you do, when you don't know what to do’ (Wheatley, 1984). Anderson (2005) described problem solving as ‘any goal-directed sequence of cognitive operations’. The types of problems set in examinations or assessments in higher education chemistry are largely algorithmic (Bennett, 2008; Pappa and Tsaparlis, 2011). Algorithmic problems use mainly lower order cognitive skills (LOCS), whereas more open-ended problems call upon higher order cognitive skills (HOCS) (Zoller and Pushkin, 2007). Problems requiring the application of HOCS demand more than just knowledge to reach a solution. Knowledge or known theory must be applied in unfamiliar contexts using such skills as analysis, connection making, synthesis and critical thinking. Bodner (1986) suggested that the difference between an exercise and a problem is the result of differences in the level of familiarity with similar tasks the individual brings to a given task. Funke (2010) describes problem solving as an activity that requires more than routine action and thinking and states that complex cognition is required to solve complex problems. Funke views complex problem solving as involving intransparency, dynamics (underlying psychological forces, such as motivation and emotion), or polytely (multiple simultaneous goals).

More than 50 years ago, Polya proposed a model of problem solving that consists of four steps or stages (Polya, 1945): understand the problem, devise a plan, carry out the plan, look back. Bodner and Domin (1991) report a much less structured approach to solving problems and identify many more steps. Their model consists of the following steps: read the problem, read the problem again, write down what you hope is the relevant information, draw a picture, make a list, or write an equation or formula to help you begin to understand the problem, try something, try something else, see where this gets you, read the problem again, try something else, see where this gets you, test intermediate results, read the problem again, write down ‘an’ answer, test the answer to see if it makes sense. This is obviously a more iterative and reflective approach than that proposed by Polya, and the difference between them may reflect the difference between expert and novice problem solvers.

Developing problem solving skills has been the subject of much research in science education (Zoller et al., 1999; Dori and Hameiri, 2003). Gabel and Bunce (1994) proposed that the factors affecting students' success in problem solving are threefold: the type of problem and its underlying concepts; the learner characteristics (including cognitive styles and knowledge base); and learning environment factors such as individual or group activity. Bunce attempted to enhance chemistry students' skill in problem solving through problem categorization. Students were taught how to solve problems using the explicit method of problem solving (EMPS) (Bunce and Heikkinen, 1986). The EMPS involves encoding the information given in a problem, relating this to what is already known, and planning a solution. This study showed that training in categorization skills and the EMPS can lead to higher achievement in problem solving. However, there was no link between understanding of the chemical concepts involved and problem-solving ability.

Johnstone and El-Banna (1986) stressed the importance of working memory capacity in learning science and Reid (2009) suggested that problems that place a high load on working memory may be a better measure of a student's working memory capacity than of problem solving ability. Tsaparlis (2005) investigated the effect of several cognitive variables on students' ability to solve open-ended problems in physical chemistry. The results showed a strong correlation between M-capacity and disembedding ability and problem solving scores. In a study by St Clair-Thompson et al. (2012) it was found that the best predictors of algorithmic and open-ended problem solving abilities were tests that measured different elements of working memory. Performance on algorithmic problems was predicted by the counting recall test for working memory, which utilises lower order cognitive skills. Performance on open-ended problems was predicted by the figural intersection test and the backwards digit recall test, which measure mental capacity and utilise higher order cognitive skills. These results demonstrated that different cognitive resources are required for solving algorithmic and open-ended problems. In a study of online problem solving, Bühner et al. (2008) found that the working memory component of intelligence is the relevant aspect in predicting problem-solving ability. The nature of the task, its verbal, figural, or numerical content, might play an important role in explaining this relationship. When a task requires information to be stored and processed as well as integrated with elements held in memory, the aspects of intelligence that are related to working memory become most influential.

Kapa (2007) studied the effect of providing metacognitive support mechanisms (MSM) to students during their problem solving activities. MSMs are verbal prompts such as “What are you asked to find?” and “What is given in the problem?” The study showed that these prompts were effective in helping students transfer from solving structured, closed problems to open-ended problems. Cooper et al. (2009) studied the influence of students' metacognitive behaviours on their ability to solve ill-defined problems in chemistry. Using an automated online instrument that collected and analysed students' actions, a significant correlation was found between the use of metacognitive strategies and the complexity of the problems students could handle effectively. Scherer and Tiemann (2012) investigated the relationship between problem solving ability and metacognitive strategy knowledge within an online environment. Their subjects were solving analytical, or closed, problems, in which all the required data are provided, and complex problems, in which all required information is not provided. The study showed substantial correlation between strategy knowledge and analytical problem solving, but not with complex problem solving. The authors report that the ability to solve analytical problems can be distinguished from the ability to solve complex problems and that fostering problem-solving skills has to move beyond the development of strategy knowledge. Chen and Wu (2012) also found that metacognitive strategies did not lead to higher performance with complex problems in online environments. In a study of chemistry students' ability to solve quantitative, closed problems, Taasoobshirazi and Glynn (2009) found that their ability to develop a successful strategy was influenced by the students' conceptualization of the problem and their self-efficacy, or belief in their own ability to succeed.

Walsh et al. (2007) carried out a phenomenographic study of problem solving approaches used by physics undergraduates. They identified a scientific approach, which includes qualitative analysis of the situation, planning and execution of a solution, and evaluation of the solution; a structured plug-and-chug approach, which includes qualitative analysis based on formulas; an unstructured plug-and-chug approach, which focuses on the required variables and lacks evaluation; a memory-based approach, which uses analysis of previous examples and lacks evaluation; and no clear approach. The students who adopted a scientific approach were able to adopt the plug-and-chug approach when tackling less complex problems, but plug-and-chug students were unable to adopt the scientific approach.

Thus, much of the published evidence in the area of problem solving in chemistry has focussed on how students tackle algorithmic or structured problems, or on what factors affect their performance in more open-ended problems. Our focus in this study is on the approaches that students adopt when faced with open-ended or complex problems in chemistry.

Methodology

The study involved individual students solving three open-ended, context-rich problems using a think aloud protocol. Think aloud protocols involve participants verbalising their thoughts whilst completing a task and have a long tradition in chemistry education research (Bowen, 1994; Cheung, 2009). The researcher may take notes or record the session in order to capture the evidence. The participant is encouraged to verbalise their thoughts but the researcher must take care to remain impartial and objective. The method enables the process of completing a task to be observed and analysed, rather than just the product of the task. There are several factors which affect the validity of think aloud techniques. The number of people being simultaneously studied can affect validity because of the possibility of social interactions and the influence of others on a participant's thoughts and actions. The degree of researcher intervention can have a great influence on the quality of the data gathered. Asking a simple question such as ‘Why do you think that?’ can invoke metacognition, causing the participant to reflect on the activity and thus modify outcomes. Participants may exhibit atypical behaviour and responses if they are in an uncomfortable environment or if they are trying to second-guess the motivations of the researcher. How participants report verbally can influence the data gathered; if participants report back at the end of a problem solving activity they can post-rationalise their approach and appear more coherent and strategic than they actually were. There are also task-specific variables which influence the data gathered. These include items such as calculators, paper, pens and textbooks.

The think aloud method was chosen for this study to enable analysis of the process of solving problems, rather than focussing on the problem solutions or scores, that is, approach rather than ability. The study took place through one to one sessions with the researcher. Students were encouraged to verbalise their thoughts and to write as much as possible whilst solving the problems. Intervention from the researcher was kept to a minimum and care was taken not to provide help with the problems themselves. The sessions were audio recorded and all problem solving scripts retained for later analysis.

The problems all used a real-life context, did not provide the students with all the data required to solve them, and did not lead to a single correct answer. Although the chemistry content would be familiar to most students entering higher education, it became clear during the sessions that the style of the problem was unfamiliar to them. Students were not allowed to use calculators or to look up any information that they thought they required. This constraint was applied in order to study whether students were able to make sensible estimations and carry out approximate or order of magnitude calculations. Therefore, in order to solve the problems students would have to produce a strategy, make sensible estimations and approximations, carry out simple ‘back of the envelope’ calculations and think qualitatively about the problem before generating any data on which algorithmic operations could be applied.

Johnstone (1993) has previously categorised problems according to whether the strategy required is familiar to the students, whether they are provided with all required data, and whether there is a clear goal (Table 1). Algorithmic problems are defined as type 1, as all required data are given, the method is well known and there is a single correct answer. At the other extreme of this categorisation are type 8 problems, in which data are incomplete, the strategy is unfamiliar and has to be developed, and the goal is ill-defined or there is no single correct answer. The problems used in this study are described as type 8 as, even though there may be some element of algorithmic calculation involved, students cannot use a well-known routine and have to develop their own strategy, the data are not all given in the problem and they have to make estimations in order to generate their own data, and there is not a single correct answer to any of the problems. According to Funke (2010) these would be classed as complex problems.

Table 1 Types of problems (Johnstone, 1993)
Type | Data | Methods | Outcomes | Skills
1 | Given | Familiar | Given | Recall of algorithms
2 | Given | Unfamiliar | Given | Looking for parallels to known methods
3 | Incomplete | Familiar | Given | Analysis of problem to decide what further data are required
4 | Incomplete | Unfamiliar | Given | Weighing up possible methods and deciding on data required
5 | Given | Familiar | Open | Decision making about appropriate goals. Exploration of knowledge networks.
6 | Given | Unfamiliar | Open | Decisions about goals and choices of appropriate methods. Exploration of knowledge and technique networks.
7 | Incomplete | Familiar | Open | Once goals have been specified, the data are seen to be incomplete.
8 | Incomplete | Unfamiliar | Open | Suggestion of goals, methods, consequent need for additional data. All of the above skills.
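
The structure of this classification is easy to see if each column is treated as a binary attribute: data given or incomplete, method familiar or unfamiliar, outcome given or open, giving 2 × 2 × 2 = 8 types. The short sketch below is purely illustrative (the function name and numerical encoding are ours, not Johnstone's) and simply reproduces the numbering of Table 1.

```python
# Illustrative encoding of Johnstone's (1993) problem types (Table 1).
# The numbering scheme below is our reading of the table, not part of the original source.

def johnstone_type(data_incomplete: bool, method_unfamiliar: bool, outcome_open: bool) -> int:
    """Map the three binary attributes of Table 1 onto the type number 1-8."""
    return 1 + 2 * data_incomplete + 1 * method_unfamiliar + 4 * outcome_open

# A standard algorithmic exercise: data given, familiar method, single answer.
print(johnstone_type(False, False, False))   # -> 1
# The problems used in this study: incomplete data, unfamiliar strategy, open outcome.
print(johnstone_type(True, True, True))      # -> 8
```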


The problems used have been used in previous studies (Overton and Potter, 2011) and are given below.

Problem 1: You've been on ‘Who Wants To Be a Millionaire’ and won £64 000. You decide to treat yourself and some friends to a holiday in America. The flight from Heathrow to New York is 7 hours. To provide breathable air on an aircraft, recirculation cells containing KO2 are used. Potassium superoxide reacts with the exhaled carbon dioxide as follows:

4KO2 + 2CO2 → 2K2CO3 + 3O2

K2CO3 + CO2 + H2O → 2KHCO3

What mass of KO2 would be needed on a Boeing 747 for this flight?
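
By way of illustration, the sketch below works through one possible ‘back of the envelope’ route to Problem 1. The passenger count, the CO2 exhalation rate and the decision to use only the first reaction are our own illustrative assumptions; they are not given in the problem, and a range of answers would be acceptable.

```python
# One possible order-of-magnitude estimate for Problem 1.
# All of the input values are illustrative assumptions, not data from the problem.

passengers = 400                     # rough capacity of a Boeing 747
flight_hours = 7
co2_g_per_person_per_hour = 40.0     # ~1 kg of CO2 exhaled per person per day

M_CO2 = 44.0                         # molar mass of CO2 (g/mol)
M_KO2 = 71.1                         # molar mass of KO2 (g/mol)

total_co2_g = passengers * flight_hours * co2_g_per_person_per_hour
moles_co2 = total_co2_g / M_CO2

# Using only the first reaction, 4 KO2 + 2 CO2 -> 2 K2CO3 + 3 O2,
# two moles of KO2 are consumed per mole of CO2. The second reaction would
# absorb further CO2 and lower this figure, so this is an upper-bound estimate.
moles_ko2 = 2 * moles_co2
mass_ko2_kg = moles_ko2 * M_KO2 / 1000

print(f"CO2 exhaled during the flight: ~{total_co2_g / 1000:.0f} kg")
print(f"KO2 required (upper bound):   ~{mass_ko2_kg:.0f} kg")
```

Different but equally defensible assumptions would give answers from tens to hundreds of kilograms, which is exactly the spread of acceptable answers the open-ended format is designed to allow.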

Problem 2: On November 13 2005 an explosion at a chemical plant in Jilin City in north eastern China released 100 tonnes of benzene and nitrobenzene into the Songhua River. Two weeks later, nearly 700 miles downstream, the spill flowed into the Amur river flowing through the Russian city of Khabarovsk. The Chinese and Russian authorities used activated carbon in water treatment facilities to stop the contaminants getting into the municipal water supplies. Quantities of carbon were also dumped in the river. Below is a table giving the specifications of activated carbon typically used for water treatment.

Grade | Filtracarb FY5 | Filtracarb CC65/1240
Type | Granular | Granular
Surface area BET N2 (m2 g−1) | 1150 | 1050
CTC (%) | 55 | 65
Bulk density (g cm−3) | 0.49 | 0.45
Hardness (%) | 99 | 90
Iodine no. (mg g−1) | 1100 | 1050

What mass of activated carbon would be required to completely eliminate the pollutants from the water in the affected areas?
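
A sketch of one possible rough treatment of Problem 2 is given below. The key assumption, that one gram of activated carbon can adsorb at most of the order of one gram of organic pollutant (guided by the iodine numbers in the table), is ours; it yields a lower bound rather than a definitive answer.

```python
# A rough lower-bound estimate for Problem 2.
# The adsorption capacity below is an illustrative assumption guided by the
# iodine numbers (~1000-1100 mg adsorbed per g of carbon) quoted in the table.

spill_tonnes = 100                    # benzene and nitrobenzene released (from the problem)
spill_g = spill_tonnes * 1_000_000    # 1 tonne = 10^6 g

capacity_g_per_g = 1.0                # optimistic: ~1 g of pollutant per g of carbon

min_carbon_g = spill_g / capacity_g_per_g
print(f"Activated carbon needed, at best: ~{min_carbon_g / 1_000_000:.0f} tonnes")

# In a flowing, dilute river the usable capacity would be far lower, so a
# realistic answer is some large multiple of this ~100 tonne lower bound.
```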

Problem 3: Many commercial hair-restorers claim to stimulate hair growth. If human hair is composed mainly of the protein α-keratin, estimate the rate of incorporation of amino acid units per follicle per second.

The student participants were volunteers drawn from years 1, 2 and 3 of full-time undergraduate chemistry programmes. They covered a range of abilities, genders and ethnic backgrounds. The sessions were not time limited but all students solved three problems within one hour. The interviews were conducted over two academic years, with 15 students involved in year 1 and 12 in year 2. The study in year 1 led to a limited number of approaches being identified, so the second study was carried out in order to verify those results.

Analysis

Clement (2000) states that there are two ways to interpret think aloud studies or interviews. Generative studies aim to identify observational categories or elements of a theoretical model by intensive interpretation of relatively large sections of the transcript. Convergent studies are confirmatory rather than exploratory in nature and apply predefined codes in order to obtain information about the frequency of certain observations. A generative study can provide a sound basis for the design of a convergent study. This current study was generative, as the aims were to explore the approaches that undergraduates take when solving open-ended problems and to produce observational categories.

The students' scripts were analysed along with the audio recordings of the interviews by each researcher in order to identify the approaches used as each student solved each problem. Qualitatively distinct approaches were identified for each solution and each recording. Several different approaches could be identified for each student, as each individual segment of activity was analysed separately. This process was carried out independently by each researcher (Armstrong et al., 1997) and a high degree of consensus was found between the themes and approaches identified by each, thus providing confidence in the categorisation. The individual interpretations of the protocols were compared and discussed and a common set of observational categories emerged from that process. This initial set of categories was refined by collapsing together categories of a similar nature.

Each identified category was given an appropriate code to allow further analysis of each participant's data. For categories that described an ability and its corresponding inability, the code was given the notation of a single letter and a ‘+’ or ‘−’ sign; any other codes were given a two-letter notation. For example, the ability to make estimations and approximations was given the code E+ whereas an inability to make estimations and approximations was given the code E−. The categories identified from the analysis of the students' scripts and recordings are shown in Table 2. Once the initial categories had been identified and coded, the individual student transcripts and recordings were analysed and these codes were applied.

Table 2 Initial categories of description
Category | Code
Makes estimations, approximations, generates data | E+
Unable to make estimations, approximations or generate data | E−
Understands problem, what needs to know or do | P+
Can't get started, can't identify what needs to know or do | P−
Logical approach and reasoning | L+
Not logical, gaps in reasoning | L−
Makes sensible assumptions | A+
Guesses | A−
Evaluates answer, aware of limitations | EV
Seeks algorithmic approach | AL
Distracted by context of problem | CO
Lack of knowledge a barrier | KN


Results

The results of the coding of individual students are shown in Table 3. The data in the table are organised with the codes representing those approaches that are considered to be supportive of positive outcomes in open-ended problem solving placed at the top of the table. These are approaches that could lead to the development of a feasible solution and include: an ability to estimate and generate data (E+), an ability to identify the problem (P+), use of logical approach and reasoning (L+), an ability to make reasonable and sensible assumptions (A+), and evaluation of the problem solution (EV). Those approaches that are considered to be unsupportive of open-ended problem solving are listed in the bottom half of the table. These approaches hinder the process of reaching a solution to the problem and include: an inability to estimate (E−), an inability to identify the problem (P−), a lack of logical approach and reasoning (L−), an inability to make reasonable or sensible assumptions (A−), seeking an algorithmic approach (AL), being distracted by the context of the question (CO) and lacking sufficient knowledge to progress the problem (KN).
Table 3 Participant results


A shaded cell in the table indicates that this approach was observed for that participant whilst solving any one of the problems. The supportive, or positive, approaches have been shaded in light grey, and the unsupportive, or negative, approaches in dark grey. Table 4 presents the results for the individual students re-ordered in such a way that students who exhibited predominantly negative approaches and those who exhibited predominantly positive approaches are grouped together. Those that remain used a variety of approaches across the three problems. On viewing this presentation of the data it became clear that the students could be organised into three broad groupings based on the combination of approaches they used: those that use predominantly positive approaches, like students 1, 11, 12, and 13; those that use predominantly negative approaches, like students 6, 8, 10 and 15; and those that use a mixture of all approaches depending upon the problem, such as students 2, 3, 5 and 7.

Table 4 Participants' results grouped by common approach
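
A minimal sketch of the grouping rule just described is given below; the participant records and the simple all-positive/all-negative threshold are our own illustration of the idea, not the procedure actually used (the grouping in Table 4 was made by inspection of the coded data).

```python
# Illustrative sketch of the grouping used to order Table 4.
# The threshold (all positive vs. all negative codes) and the example records
# are assumptions for illustration; the actual grouping was done by inspection.

POSITIVE = {"E+", "P+", "L+", "A+", "EV"}               # supportive approaches (Table 2)
NEGATIVE = {"E-", "P-", "L-", "A-", "AL", "CO", "KN"}   # unsupportive approaches (Table 2)

def classify(codes: set) -> str:
    """Label a participant by the balance of supportive vs unsupportive approaches."""
    pos = len(codes & POSITIVE)
    neg = len(codes & NEGATIVE)
    if pos and not neg:
        return "predominantly positive (expert-like)"
    if neg and not pos:
        return "predominantly negative (novice-like)"
    return "mixture of approaches (transitional)"

# Hypothetical participants:
print(classify({"E+", "P+", "L+", "EV"}))       # predominantly positive
print(classify({"E-", "P-", "A-", "CO"}))       # predominantly negative
print(classify({"P+", "EV", "E-", "AL"}))       # mixture of approaches
```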


The students who adopted mostly negative approaches were typically unable to identify what the problem was asking them to do and were unable to make estimations or to generate their own data. This approach is typified by student 11 in the extract from their interview:

Student: One follicle per second…I can't even answer that, I don't know where to start. I don't how long a protein takes…I don't even understand it

Researcher: Well what do you think it means?

Student: I know you want the number of amino acid units going into a follicle; an answer in unit per second. From this….no idea how to get those numbers. I have no idea where to get any numbers from.

Student 12 also demonstrates negative approaches and guesses at an answer:

Student: I don't understand the question.

Researcher: What do you think it means?

Student: I think it's about the rate of hair growing. It must be rate…, nothing comes to mind.

Researcher: What do you think you need to know?

Student: The speed that hair grows…or I think how many amino acids get into hair.

Researcher: That's what you have to work out.

Student: That's a funny question. Something like 5 or 6 amino acids a second…yeah, that's my answer.

Students who adopted mostly positive approaches could typically clarify the aims of the problem and were comfortable identifying what data were needed and making estimations. They were also likely to explore different strategies and evaluate their answers. This is illustrated by this extract from the interview with student 6:

Student: The basic knowledge I have is 12 amino acids make up polypeptide. So I need to know the rate of amino acid production I suppose …have to be average wouldn't it? Unless I work backwards from a bald patch and work out how fast hair grew, how long from start to finish and work backwards. I'd need a designated site to measure and need to know how many follicles in that. or I could do it on how fast hair grows which is directly proportional to the rate of amino acid production. Yeah, that's how I'd go about it. Go for one hair. How fast does that grow? A length, a time and composition. I say 2 days to grow 2 mm and need certain number of amino acids a to produce that. So.

Researcher: So what are you thinking now?

Student: I'm thinking about estimating the volume of the hair and an amino acid and then I can use that with the volume of hair. So if I say an amino acid is diameter 10 nm and that hair is 0.1 mm thick. So I can find how many can fit in that volume of hair.

So, I've worked out the volume of an amino acid molecule and divided volume of hair which gives me this answers. That seems a lot to me. I'll do that again.

Student 8 also demonstrated these positive approaches, developed an effective strategy very quickly and was very comfortable making estimations.

Student: So I guess hair grows around 1 cm a month…so that's 0.03 cm a day…and 3.85 × 10−7 cm a second. So if hair has a diameter of, say, 0.1 mm that's 0.01 cm…and I can find the volume of hair per second.

Researcher: What are you doing now?

Student: I'm mapping out spherical amino acid units. If I know the width of a unit I can work out the volume and then can divide into the volume of hair to find the answer. An amino acid is, let's say, 10 nm, that's 10−8m, ah, 10−6m, no it must be smaller, it's 10−10cm…
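
Student 8's volume-based route can be turned into a rough calculation. The sketch below follows the same reasoning; the growth rate and hair diameter are the student's own guesses, and the amino acid size is our illustrative value (~1 nm, smaller than the student's initial 10 nm guess).

```python
import math

# A rough reconstruction of student 8's volume-based estimate for Problem 3.
# The numbers below are illustrative guesses, not measured values.

growth_cm_per_s = 1.0 / (30 * 24 * 3600)   # ~1 cm of hair growth per month
hair_diameter_cm = 0.01                    # ~0.1 mm
hair_radius_cm = hair_diameter_cm / 2

# Volume of new hair produced per follicle per second (thin cylinder of new growth).
hair_volume_per_s = math.pi * hair_radius_cm ** 2 * growth_cm_per_s

# Treat each amino acid residue as a sphere of roughly 1 nm diameter.
aa_diameter_cm = 1e-7
aa_volume = (4 / 3) * math.pi * (aa_diameter_cm / 2) ** 3

residues_per_s = hair_volume_per_s / aa_volume
print(f"~{residues_per_s:.0e} amino acid residues per follicle per second")
```

Estimates of this kind land in the region of 10^10 to 10^11 residues per follicle per second; as with the other problems, the point is the route taken rather than a single correct figure.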

Students who adopted a mixture of approaches demonstrated behaviours typical of both the groups described above and their approach varied with the problem. They seemed to be more influenced by the context of the problems, as illustrated by this extract from the interview of student 7:

Student: Follicle per second? I need the formula of keratin. It says it in the question. So do you want a rate equation? Or a number?

Researcher: Well, how would you work that out?

Student: I need to know what k looks like. Is it an amino acid? But just trying to work out the way to go.

Researcher: What have you written there?

Student: A rate equation and so I want the concentration of the product… So suppose I could guess how much amino acid there is, how much hair restorer you could put on your head at one time…guestimate how much keratin…could shave my head and work it out. What is concentration of keratin in my hair? Half a kilo? A follicle? How do you think of these questions? Well, it's supposed to stay in your hair and make it grow not to come out, so it must be a low rate.

Student 5 typified how some students became distracted by their own prior knowledge and the context of the problem.

Student: How do you monitor that? You'd have to take a hair out each second. The best way to monitor amino acids is by mass spec…keratin is a helix, a regular helix, it's hollow too, I've done this before…amino acid per follicle per second?…it will have to be a really rough estimate unless..this is awkward because if you pull out hairs they'll all be different lengths, they won't all grow at the same rate. How can we monitor hair growth? If you get someone with hair and cut it off to the skin and then let it grow and, measure the rate of growth…then pluck out one hair and analyse it by mass spec because it's really sensitive.

The focus of this study was an exploration of the approaches students used when tackling problems of this type rather than problem solving ability. Therefore, the problem solutions were not formally marked. However, it was clear that only those students exhibiting predominantly positive approaches produced satisfactory solutions to one or more of the problems. Those using negative approaches rarely made progress towards a solution and the performance of those using a mixture of approaches varied from student to student and with the individual problems.

Discussion

On viewing Table 4 it is clear that students fall into three broad categories when solving open-ended problems: those that adopt largely positive strategies, those that adopt largely negative strategies and those that utilise a wide range of both positive and negative approaches.

Those students who adopt the positive approaches could be described as taking a scientific approach; they can understand a problem, are logical, make estimations, can handle a lack of data and evaluate their solution. This description corresponds to one of the types of problem solver described in Walsh et al.'s (2007) study of solving algorithmic problems in physics.

Those students who adopt negative or unhelpful approaches to solving open-ended problems take an unscientific approach; they are unable to define for themselves what the problem is about and find dealing with a lack of data or a lack of relevant background knowledge particularly difficult to cope with. These students never evaluate their solution, are often distracted by the context of the problem and seek an algorithmic solution. This class of students corresponds to the ‘no clear approach’ group in Walsh et al.'s study of physics students.

Perhaps the most interesting group of students is the third group. These students use a wide range of approaches depending on the problem encountered, and often within a single problem. Whether they could identify the problem and handle missing data seemed to depend on the individual context. These students usually evaluated their solutions but they also sought algorithmic approaches. That students seek algorithmic routes through these problems is not surprising, given that most of their previous experience of problem solving has been of the algorithmic, structured type (Bennett, 2008; Pappa and Tsaparlis, 2011).

Bodner and Domin (1991) and Cartrette and Bodner (2010) describe students who take an unstructured approach to problem solving as novices and those who take a structured or scientific approach as experts. It would seem reasonable to assume that the students in this study who exhibited all positive approaches are indeed functioning as expert problem solvers. Those who adopted only unhelpful approaches are clearly novice problem solvers, as they find solving these complex problems challenging and have little success. Those who use a wide variety of approaches depending on context may be in a transition phase from novice to expert problem solvers. The three types of problem solvers did not correlate with age or year of study, so this transition, if indeed it is a transition, must be related to intellectual development, practice or cognitive factors.

Implications for teaching

These findings have implications for teaching and learning. Problem-based pedagogies are increasingly popular in undergraduate chemistry education (Belt et al., 2002; Hick and Bevsek, 2012; Tosun and Taskesenligli, 2013). Problem-based learning utilises open-ended problems with real-life contexts, and students have to develop a strategy, find missing information and data, and seldom use an algorithmic approach. Our study indicates that, within a student cohort, there will be a significant proportion of students who are unable to identify the details of a problem, who have great difficulty dealing with a lack of data, who will be distracted by the context and by their own lack of knowledge, and who will be unable to evaluate their progress. These students will require additional support and facilitation if they are to succeed in problem-based activities. The use of group work may overcome, or mask, these issues somewhat if randomised groups contain students from all three of our identified groupings.

Conclusions

A think aloud protocol has been used to identify approaches to open-ended problem solving in undergraduate chemistry students. Students fall into three categories: those who adopt a scientific approach and use approaches supportive of solving the problems, those who use only negative and unhelpful approaches, and a group who use all the observed strategies within and between problems. These groupings may be described as expert, novice and transitional.

There are, of course, limitations to this study. This was a small investigation of 27 students solving just three problems. The sample size is small from which to generalise, and the individual problems selected could have influenced the outcome of the study. As with any qualitative study, the role of the investigator can influence the data collected and all interpretive methodologies are subjective to some extent. The sample will be extended in order to further validate these findings. The issue of whether our three types of students constitute novice, expert and transitional problem solvers will be investigated by identification of the characteristics of experts and novices. Longitudinal studies of the development of approaches to solving complex, open-ended problems will be carried out in order to ascertain whether these approaches are fixed for individuals or whether, with intervention, they can be accelerated to the expert/scientific stage.

References

  1. Anderson J. R., (2005), Cognitive psychology and its implications, 6th edn, New York: Worth.
  2. Armstrong D., Gosling A., Weinman J. and Marteau T., (1997), The place of inter-rater reliability in qualitative research: an empirical study, Sociology, 31(3), 597–602.
  3. Belt S. T., Evans E. H., McCreedy T., Overton T. L., Summerfield, S., (2002), A Problem Based Learning Approach to Analytical and Applied Chemistry, Univ. Chem. Educ., 6(2), 65–72.
  4. Bennett S. W., (2008), Problem solving: can anybody do it? Chem. Educ. Res. Pract., 9, 60–64.
  5. Bodner G., (1986), Constructivism: a theory of knowledge, J. Chem. Educ., 63, 873–878.
  6. Bodner G. and Domin D., (1991), Towards a Unifying Theory of Problem Solving: A View from Chemistry, in Smith M. (ed.), Towards a unified theory of problem solving: views from the content domain, Hillsdale, NJ: Lawrence Erlbaum Associates.
  7. Bowen C. W., (1994), Think-aloud methods in chemistry education: understanding student thinking, J. Chem. Educ., 71(3), 184–190.
  8. Bühner M., Kröner S. and Ziegler M., (2008), Working memory, visual–spatial-intelligence and their relationship to problem-solving, Intelligence, 36, 672–680.
  9. Bunce D. M. and Heikkinen H., (1986), The effects of an explicit problem solving approach on mathematical chemistry achievement, J. Res. Sci. Teach., 23(1), 11–20.
  10. Cartrette D. P. and Bodner G. M., (2010), Non-mathematical problem solving in organic chemistry, J. Res. Sci. Teach., 47(6), 643–660.
  11. Chen C. H. and Wu I., (2012), The interplay between cognitive and motivational variables in a supportive online learning system for secondary physical education, Comput. Educ., 58(1), 542–550.
  12. Cheung D., (2009), Using think-aloud protocols to investigate secondary school chemistry teachers' misconceptions about chemical equilibrium, Chem. Educ. Res. Pract., 10(2), 97–108.
  13. Clement J., (2000), Analysis of clinical interviews: foundation and model viability, in Kelly A. E. and Lesh R. (ed.), NJ: Lawrence Erlbaum Associates, pp. 547–589.
  14. Cooper M. M., Sandi-Urena S. and Stevens R., (2009), Reliable multi method assessment of metacognition use in chemistry problem solving, Chem. Educ. Res. Pract., 9(1), 18–24.
  15. Dori Y. J. and Hameiri M., (2003), Multidimensional analysis system for quantitative chemistry problems – symbol, macro, micro and process aspects, J. Res. Sci. Teach., 40(3), 278–302.
  16. Funke J., (2010), Complex problem solving: a case for complex cognition? Cognit. Process., 11(2), 133–142.
  17. Gabel D. and Bunce D. M., (1994), Research on chemistry problem solving, in Gabel D. (ed.), Handbook of Research on Teaching and Learning Science, New York: MacMillan.
  18. Hick R. W. and Bevsek H. M., (2012), Utilizing problem-based learning in qualitative analysis lab experiments, J. Chem. Educ., 89(2), 254-257.
  19. Johnstone A. H., (1993), Introduction, in Wood C. and Sleet R. (ed.), Creative Problem Solving in Chemistry, London: The Royal Society of Chemistry.
  20. Johnstone A. H. and El-Banna H., (1986), Capacities demands and processes: a predictive model for science education, Educ. Chem., 23, 80–84.
  21. Kapa E., (2007), Transfer from structured to open-ended problem solving in a computerized metacognitive environment, Learn. Instruct., 17(6), 688–707.
  22. Overton T. L. and Potter N. M., (2011), Investigating students' success in solving and attitudes towards context-rich open-ended problems in chemistry, Chem. Educ. Res. Pract., 12, 294–302.
  23. Pappa E. T. and Tsaparlis G., (2011), Evaluation of questions in general chemistry textbooks according to the form of the questions and the question-answer relationship (QAR): the case of intra- and intermolecular chemical bonding, Chem. Educ. Res. Pract., 12, 262–270.
  24. Polya G., (1945), How to Solve it: A New Aspect of Mathematical Method, Princeton, NJ: Princeton University Press.
  25. Reid N., (2009), Working memory and science education: conclusions and Implications, Res. Sci. Technol. Educ., 27(2), 245–250.
  26. St Clair-Thompson H., Overton T. and Bugler M., (2012), Mental capacity and working memory in chemistry: algorithmic versus open-ended problem solving, Chem. Educ. Res. Pract., 13(4), 484–489.
  27. Scherer R. and Tiemann R., (2012), Factors of problem-solving competency in a virtual chemistry environment: the role of metacognitive knowledge about strategies, Comput. Educ., 59(4), 1199–1214.
  28. Taasoobshirazi G. and Glynn S.M., (2009), College students solving chemistry problems: a theoretical model of expertise, J. Res. Sci. Teach., 46(10), 1070–1089.
  29. Tosun C. and Taskesenligli Y., (2013), The effect of problem-based learning on undergraduate students' learning about solutions and their physical properties and scientific processing skills, Chem. Educ. Res. Pract., 14, 36.
  30. Tsaparlis G., (2005), Non-algorithmic quantitative problem solving in university physical chemistry: a correlation study of the role of selective cognitive variables, Res. Sci. Technol. Educ., 23, 125–148.
  31. Walsh L. N., Howard R. G. and Bowe B., (2007), Phenomenographic study of students' problem solving approaches in physics, Phys. Rev. Spec. Top. Phys. Educ. Res., 3(2), 1–12.
  32. Wheatley, G. H., (1984), MEPS Technical Report, Mathematics and Science Centre, Purdue University.
  33. Zoller U., Fastow M., Lubezky A. and Tsaparlis G., (1999), College students' self-assessment in chemistry examinations requiring higher and lower-order cognitive skills: an action-oriented research, J. Chem. Educ., 76, 112–113.
  34. Zoller U. and Pushkin D., (2007), Matching Higher-Order Cognitive Skills (HOCS) promotion goals with problem-based laboratory practice in a freshman organic chemistry course, Chem. Educ. Res. Pract., 8(2), 153–171.

This journal is © The Royal Society of Chemistry 2013