
Metacognition Learning (2008) 3:189–206. DOI 10.1007/s11409-008-9026-0

Multi-method assessment of metacognitive skills in elementary school children: how you test is what you get
Annemie Desoete

Received: 14 September 2006 / Accepted: 16 June 2008 / Published online: 8 July 2008
© Springer Science + Business Media, LLC 2008

Abstract Third grade elementary school children solved tests on mathematical reasoning and numerical facility. Metacognitive skillfulness was assessed through think-aloud protocols, prospective and retrospective child ratings, teacher questionnaires, calibration measures and EPA2000. In our dataset metacognition has a lot in common with intelligence, but planning measured with teacher ratings plays a role above and beyond IQ. Moreover, we found that the skills are generally related, but that it is more appropriate to assess them separately. In addition, the results show the value of an experienced teacher as a measure of metacognitive planning skills. Our dataset suggests convergent validity for prospective and retrospective child ratings, but no significant relationship with the other metacognitive measures. Metacognitive skillfulness combined with intelligence accounts for between 52.9% and 76.5% of the variance in mathematics performance. The choice of diagnostic instruments highly determines the predicted percentage. Consequences for the assessment of metacognitive skills are discussed.

Keywords: Metacognition · Assessment · Teacher ratings · Think aloud protocols · Prospective questionnaire · Retrospective questionnaire · Child

Introduction

This study is devoted to the multi-method assessment of metacognition in elementary school children. The relationship between mathematical problem solving and metacognitive skills is described in young children. The study investigates whether prospective (e.g., Vermunt 1996), retrospective (e.g., Artelt 2000), on-line (e.g., Artzt and Armour-Thomas 1992) and combined (e.g., Sperling et al. 2002) techniques or teacher ratings can be used for elementary school children.

A. Desoete (*) Department of Experimental Clinical and Health Psychology, Ghent University, Henri Dunantlaan 2, 9000 Ghent, Belgium. e-mail: anne.desoete@Ugent.be
A. Desoete, Arteveldehogeschool, Ghent, Belgium

Metacognitive knowledge and skills

Since Flavell introduced the concept of metacognition in 1976, most authors agree that the construct can be differentiated into a knowledge and a skills component (Lucangeli et al. 1995). Metacognitive knowledge can be described as the knowledge, awareness, and deeper understanding of one's own cognitive processes and products (Flavell 1976). Metacognitive skills can be seen as the voluntary control people have over their own cognitive processes (Brown 1987). Metacognition was found to be instrumental in mathematics tasks that are challenging but do not overtax the capacity and skills of children, and in relatively new skills that are being acquired (e.g., Carr and Jessup 1995). A substantial amount of data has been accumulated on four metacognitive skills important for mathematics: prediction, planning, monitoring and evaluation skills (e.g., Lucangeli et al. 1998).

The four metacognitive skills investigated

Prediction

One of the metacognitive skills is prediction. Prediction can be described as the skill enabling children to think about the learning objectives, proper learning characteristics and the available time. Moreover, children estimate or predict the difficulty of a task and use that prediction metacognitively to regulate their engagement in relation to outcome and efficacy expectations. The ability to predict enables children to foresee task difficulties in the classroom and makes them work slowly on difficult tasks and more quickly on easier tasks. In addition, prediction makes children relate certain problems to other problems, develop intuition about the prerequisites required for doing a task, and distinguish between apparent and real difficulties in mathematical problem solving (Lucangeli et al. 1998).
In mathematics, prediction refers to activities aimed at differentiating difficult exercises (e.g., 126 : 5 = _) from easy ones (e.g., 126 − 5 = _), in order to be able to concentrate on and persist more in the high-effort tasks.

Planning

Planning skills make children think in advance of how, when, and why to act in order to obtain their purpose through a sequence of subgoals leading to the main problem goal. In a classroom context, planning involves analyzing exercises (e.g., it is a division exercise in a number-problem format), retrieving relevant domain-specific knowledge and skills (e.g., how to do divisions) and sequencing problem-solving steps (e.g., division of the hundreds, tens and units in mental mathematics).

Monitoring

Monitoring skills can be described as the self-regulated control of the cognitive skills used during the actual performance, in order to identify problems and to modify plans.


Monitoring is related in a classroom context to questions such as 'Am I following my plan?', 'Is this plan working?', 'Should I use paper and pencil to solve the division?' and so on. Proficient students are assumed to select appropriate skills and adjust behavior to changing task demands, making use of their awareness of previous knowledge and selecting appropriate study behavior (Montague 1998).

Evaluation (and calibration)

A last metacognitive skill, the evaluation skill, can be defined as the reflections that take place after an event has transpired (Brown 1987), whereby children look at what they did and whether or not this led to a desired result. Specifically, children reflect on the outcome and the understanding of the problem, the appropriateness of the plan, the execution of the solution method, as well as the adequacy of the answer within the context of the problem (Vermeer 1997). Evaluation makes children in the classroom evaluate their performance, compare their task performance with that of other people, and use the final result in locating errors in the solution process (Lucangeli et al. 1998). Calibration can be defined in terms of whether the evaluated value corresponds with the occurrence of that value on the criterion test (e.g., Lin and Zabrucky 1998). A global comparison is made of whether the evaluation after the task corresponds to the actual performance on the task.

Metacognition and intelligence

The relationship between metacognition, intelligence and mathematics remains very unclear (Desoete 2007; Desoete et al. 2006). Veenman and his colleagues compared three models of the relationship between mathematics learning outcome, intelligence and metacognition (Veenman et al. 2004, 2005). The first model is the intelligence model, in which metacognitive skills are considered manifestations of intelligence and part of the cognitive repertoire responsible for mathematics learning outcome (Sternberg 2001).
A second model is the independency model, in which intelligence and metacognition are considered two independent factors in the prediction of learning outcome (Swanson 1990). A third model is a combined model, in which intelligence and metacognition have a lot in common but metacognitive skills are more relevant than intelligence in predicting learning outcome in initial and complex learning situations (Demetriou et al. 1992). To see the true contribution of metacognition to mathematical problem solving, IQ should be used as a covariate.

Metacognitive assessment

Nowadays, many diagnostic tools are designed to assess metacognition. Most of those tools assess metacognition prospectively or retrospectively relative to specific arithmetical performances. In prospective methods, such as self-report questionnaires and hypothetical interviews, students have to indicate on a Likert-type scale to what extent a statement (e.g., 'I ask myself questions to make sure I know the material I have been studying') is representative of their behaviour (e.g., Elshout-Mohr et al. 2003; Vermunt 1996). Retrospective techniques, both questionnaires and interviews, have also been applied to assess metacognition (e.g., Artelt 2000).


An obvious problem with retrospective assessment questionnaires is the risk of memory distortions due to the time lag between the actual performance of problem solving and the verbal reports afterwards. In addition to prospective and retrospective techniques, concurrent assessment, such as think-aloud protocols, can take place. In thinking-aloud protocol analysis participants are instructed to merely verbalize their thoughts during task performance. Despite all the emphasis on metacognition, several problems emerge in the assessment of metacognition, making study outcomes difficult to compare (e.g., Artzt and Armour-Thomas 1992; Huet and Marine 1998; Pressley 2000). On the one hand, Veenman and his colleagues seem sceptical and point to the lack of accuracy and the limited explained variance of learning outcomes of prospective and retrospective assessment methods, such as self-report questionnaires. On the other hand, concurrent assessment techniques, such as think-aloud protocols, were found to be accurate but time-consuming techniques to assess metacognitive skills. Multi-method techniques seem indicated to get a good picture of metacognitive skills (Veenman 2005; Veenman et al. 2006). Because multi-method assessment is extensively time-consuming, this paper investigates whether a teacher questionnaire can be used to get a good picture of metacognitive skills in elementary school children. Although some researchers question the trustworthiness of teacher questionnaire data, reviews indicate that teachers' judgments can serve as worthy assessments of students' achievement-related behaviors when triangulated with data gathered by other protocols (Winne and Perry 2000). Furthermore, teachers' perception of students' use of skills was found to be an important predictor of academic performance in children with learning disabilities (Meltzer et al. 1998).
Aims of the present study

The short overview clearly shows that research concerning the assessment of metacognition is scant. Research comparing types of measures of metacognitive skillfulness in children younger than 12 years of age is relatively limited, although there are studies showing that children can be strategic from an early age (Perry 1998; Whitebread et al. 2005). In the current study we narrow our research to three major aims. First, the present study aims to add some insight into the value of teacher ratings of metacognitive skills of elementary school children younger than 11–12 years. It is hypothesized that the teacher questionnaire will correlate with the other assessment techniques. Therefore we focus on the teacher ratings of metacognitive skills in young children. The second aim of this study is to investigate whether concurrent, prospective and retrospective child questionnaires differ in the assessment of metacognitive skills in young children. It is hypothesized that the prospective and retrospective off-line questionnaires will correlate more highly with each other than off-line with on-line or combined assessment techniques. Third, the present study aims to add to the body of knowledge concerning the relationship between on-line and off-line metacognitive skills and mathematical problem solving in young children. According to previous literature, the prediction of mathematics by means of on-line techniques will be better than the prediction by off-line (prospective and retrospective) techniques. To investigate the role of metacognition above and beyond intelligence, all analyses will be run with IQ as a covariate.


Method

Participants

Since it is time-consuming to assess children with different metacognitive instruments, we started this study with a cross-sectional design in one school in a large urban town in Flanders. For the implications of the sample size and the typicality of the teacher we refer to the discussion section. Studies with a longitudinal design and a larger sample are currently being prepared. In this study the school was purposefully selected out of ten schools because of the long experience of the teacher and the verbal fluency of the children. Teacher consent was requested and given. The female teacher had been teaching third graders for 20 years in the same school. Twenty children in their third year of this elementary school participated in this study. Participants were 13 girls and seven boys. The ethnic breakdown of the sample was 25% Asian, 10% African and 65% Caucasian, native Dutch-speaking children. All participants were fluent Dutch speakers without histories of extreme hyperactivity, sensory impairment, brain damage, a chronic medical condition, insufficient instruction or serious emotional or behavioural disturbance, who had been following regular elementary education in Flanders for more than 2 years. Informed consent from a parent of each participant was obtained before starting this study. On the WISC-III, the mean full-scale IQ of these children was 101.03 (SD=7.90), the mean verbal IQ was 102.60 (SD=8.33) and the mean performance IQ was 101.63 (SD=12.24). At the time of testing, the participants had a mean age of 99.59 months (SD=3.27 months).

Measures

Teacher ratings, different mathematics tests and metacognitive tests were used in this study (see Table 1).

Mathematics tests

Initial mathematics can be seen as a broad domain of various computational skills. Dowker (2005) differentiated between two domains: mathematical reasoning and numerical facility. The Kortrijk Arithmetic Test (KRT-R) is often used to measure mathematical reasoning.
The Arithmetic Number Facts Test (TTR) is often used to measure numerical facility in Belgium.
Table 1 Assessment instruments compared

                    KRT-R  TTR  Prospective test  Retrospective test  CDR  Teacher Rating  Thinking Aloud  EPA2000
Mathematics         X      X                                          X                                    X
Prediction                      X                 X                        X               X               X
Planning                        X                 X                        X               X
Monitoring                      X                 X                        X               X
Evaluation                      X                 X                   (X)  X               X               X
Calibration                                                           X


The Kortrijk Arithmetic Test Revision (Kortrijkse Rekentest Revision, KRT-R; Baudonck et al. 2006) is a Belgian test of arithmetic reasoning which requires children to solve mental arithmetic (e.g., 129+878=_) and number knowledge tasks (e.g., add three tens to 61 and you get _). The psychometric value of the KRT-R has been demonstrated on a sample of 3,246 Dutch-speaking children from grades 1 to 6. In this study the standardized total percentile based on Flemish norms was used. On the KRT-R the children in this study achieved a standardized mean percentile score of 39.37 (SD=5.59). This means that the study was conducted in a relatively low-achieving school. The Arithmetic Number Facts Test (Tempo Test Rekenen, TTR; de Vos 1992) is a numerical facility test which requires children to solve as many number fact problems as possible within 5 minutes (e.g., 5 × 9 = _). Its psychometric value has been demonstrated for Flanders on a sample of 10,059 children (Ghesquière and Ruijssenaars 1994). On the TTR, the children in this study achieved a standardized mean percentile score of 55.76 (SD=31.99). This means that the children performed at an average level on numerical fact retrieval.

Metacognitive tests

Metacognition can be assessed with off-line (prospective and retrospective), on-line and combined techniques. In this study the Prospective Assessment of Children (PAC or prospective test) and the Retrospective Assessment of Children (RAC or retrospective test) were used as off-line ratings for children. Participants solved the Cognitive Developmental Arithmetics Test (CDR) as an off-line calibration rating for children. Teacher ratings were used as off-line ratings for teachers. Thinking-aloud protocol analysis (TAP or think-aloud protocols) was used as an on-line technique. The Evaluation and Prediction Assessment (EPA2000) was used as a combined (prospective and retrospective) assessment.
All metacognitive and mathematics instruments were tested in previous studies in order to determine their usefulness for this age group and their sensitivity in measuring individual differences (e.g., Desoete et al. 2001, 2006). Children were videotaped while solving word problems and completing the prospective test, retrospective test, think-aloud protocols, EPA2000, CDR, KRT-R and TTR. The teacher was interviewed after the completion of the teacher rating. Analyses showed that children, teachers and observers/coders could handle the instruments well. Children reviewed the videotape of their performance and were asked why they performed that way and what they thought while performing the task. The given answers all referred to the constructs in question.

Off-line techniques

The Prospective Assessment of Children, a child questionnaire adapted from the MSA (Desoete et al. 2001) for this research line, is a 25-item rating scale questionnaire for children on metacognitive prediction, planning, monitoring and evaluation skills. Before solving any mathematical problem, children have to indicate on a seven-point Likert-type scale to what extent a statement (e.g., 'I control exercises I make') is representative of their behaviour during mathematical problem solving (1=never, 7=always). Metacognition was prospectively assessed one day before the real experiment. In this study Cronbach's alpha for the scale was 0.81. For the subscales Cronbach's alphas were 0.74 (ten items), 0.55 (four items), 0.75 (nine items) and 0.70 (two items) for prediction, planning, monitoring and evaluation respectively. The Retrospective Assessment of Children is the same 25-item rating scale questionnaire for children on metacognitive prediction, planning, monitoring and evaluation skills. Children have to indicate on a seven-point Likert-type scale to what extent a statement


(e.g., 'I controlled exercises I made') was representative of their mathematical problem solving behaviour on the past task (1=never, 7=always). In this study Cronbach's alpha for the total scale was 0.89 (25 items). For the subscales Cronbach's alphas were 0.77 (ten items), 0.52 (four items), 0.81 (nine items) and 0.64 (two items) for prediction, planning, monitoring and evaluation respectively. The Cognitive Developmental Arithmetics Test (CDR; Desoete and Roeyers 2006) is a 90-item test developed for the assessment of cognitive and calibration skills of young children. The number of correct answers is the mathematics performance score (e.g., 60/90 on the test). In addition, children have to gauge their confidence in the correctness of the given answers (e.g., 'I think I will obtain 70/90 on this test'). The score children attribute to their work (e.g., here 70) is the macro-evaluation or performance calibration score. The absolute difference between the mathematics performance (e.g., here 60) and the macro-evaluation score (e.g., here 70) is the calibration score (e.g., here 10). Larger differences (ten is more than eight) indicate poorer calibration, regardless of whether they are positive (e.g., +8) or negative (e.g., −10). Psychometric value has been demonstrated on a sample of 483 Dutch-speaking children in Flanders. To examine the psychometric characteristics of the developed metacognitive parameter, students were observed and videotaped during, and interviewed after, the test. In addition, Guttman's split-half and Spearman–Brown coefficients were 0.70 and 0.72 respectively. Furthermore, all variables were normally distributed and test–retest correlations were 0.85 (p <0.0005) in a previous study (Desoete and Roeyers 2006).
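The calibration score described above amounts to a single absolute difference. A minimal sketch (the function name and the worked numbers are illustrative, not part of the CDR itself):

```python
def calibration_score(performance: int, macro_evaluation: int) -> int:
    """Absolute difference between the actual mathematics score and the
    child's macro-evaluation (self-rated score). Larger values indicate
    poorer calibration, regardless of over- or underestimation."""
    return abs(performance - macro_evaluation)

# Worked example from the text: a child scores 60/90 but rates the work 70/90.
print(calibration_score(60, 70))  # -> 10

# Sign does not matter: a miss of ten is worse than a miss of eight,
# whether the child over- or underestimates.
print(calibration_score(68, 60))  # -> 8
```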
Children in this study achieved a standardized mean percentile score of 52.40/80 (SD=10.04) on the CDR. The Teacher Rating, which was created for this research line, is a 20-item rating scale teacher questionnaire on metacognitive prediction (seven items), planning (four items), monitoring (six items) and evaluation (three items) skills (e.g., 'the child never (1)/always (7) knows in advance whether an exercise will be easy or difficult'). Furthermore, teachers scored the mathematical and reading performances as well as the intelligence of the children (e.g., very low compared to peers (1)/very good compared to peers (7)). The teacher questionnaire was tested in previous studies in order to determine its construct validity (Desoete et al. 2001). Test–retest correlations of 0.81 (p <0.01) and inter-rater reliabilities varying between 0.99 and 1.00 (p <0.01) were found. In this study a Cronbach's alpha of 0.98 was found for the test score (20 items). For the teacher rating subscores Cronbach's alphas were 0.97, 0.89, 0.91 and 0.90 for prediction, planning, monitoring and evaluation respectively.

On-line technique

Thinking-aloud protocol analysis was applied during three word problem solving tasks. Children were merely instructed to verbalize their thoughts during task performance. Whenever children fell silent, the assessor urged them to keep thinking aloud, prompting them to verbalize throughout the whole problem solving process. Metacognitive ratings were assessed by the psychologist concurrently with the participants' ongoing process of solving the math problems. The psychologist practiced this rating procedure in advance on several other participants, not included in the sample, until she felt confident that an adequate level of rating fluency was reached. Based on a review of the metacognition literature for this age group, metacognitive activities were inventoried by three psychology students and by the main researcher.
Only the activities on which all four persons agreed regarding the concept and operationalisation were included in the study. All protocols


were transcribed verbatim and analyzed according to this metacognitive coding scheme for the presence of these 40 activities, derived from grounded analysis in previous studies in this age group (e.g., Desoete et al. 2001, 2002): 11 prediction items (reading the problem oriented on comprehension, underlining important words, selecting relevant information, reading the task again to comprehend better, making a drawing, putting information together, writing down what was asked for, writing down what is already known, reflecting, estimating the possible outcome, other behaviour pointing in the direction of prediction), six planning items (selecting relevant data, selecting the calculation needed, selecting relevant steps, selecting relevant materials, taking time designing an action plan, other behaviour that points in the direction of planning), 17 monitoring items (adhering to the plan, correct calculation, making correct use of units and tens, systematic activities, making notes relating to the problem, orderly note-taking of problem solving steps, not forgetting steps, orderly sequencing of steps, acting according to plan, monitoring the problem-solving process, checking the calculation, checking the answer against the estimated outcome, taking note of the precise answer, checking the results, referring to the problem statement in the answer, reflecting on a clear, exact and precise answer, other behaviour that points in the direction of monitoring) and six evaluation items (summarizing the answer and reflecting on the answer, reflecting on what went well and how the tasks were solved, drawing a conclusion referring to the task, relating to future problems, relating to other problems, other behaviour that points in the direction of evaluation). The psychologist and two colleagues checked the ratings afterwards by replaying the tapes, which led to hardly any modifications of the scores. All scores were also independently coded and controlled by the author.
One prediction item was deleted from further analyses because there was not enough variance in the children's scores on this parameter. Areas of non-agreement (between the psychologist, the two colleagues and the author) were discussed with reference to the definitions of the skills and were resolved through mutual consent. For each problem a metacognition score was calculated on the 39 remaining activities, resulting in three scores, averaged later. Finally, a total metacognition score was calculated over the three problems (with a Cronbach's alpha of 0.89). In line with Veenman and Spaans (2005), a zero was given if an activity was absent, whereas a score of 1 was given if the activity was present. It was also possible to give half a point if an activity was initiated but not completed. A Cronbach's alpha of 0.88 was found for the total protocol analyses. For the subscales Cronbach's alphas were 0.79, 0.60, 0.82 and 0.75 for prediction, planning, monitoring and evaluation respectively.

Combined technique

The Evaluation and Prediction Assessment (EPA2000; De Clercq et al. 2000) is a computerized procedure for assessing mathematics, prediction and evaluation. In the measurement of prediction skillfulness, children were asked to look at exercises without solving them and to predict, on a four-point rating scale, whether they would be successful on the task. After solving the different mathematical problem-solving tasks, children had to evaluate their performance on the same four-point rating scale. Children could give four ratings (1 = absolutely sure I am wrong, 2 = sure I am wrong, 3 = sure I am correct, 4 = absolutely sure I am correct). Metacognitive predictions or evaluations were awarded two points whenever the extreme ratings corresponded to the child's actual performance on the task (predicting or evaluating 1 and doing the exercise wrong, or rating 4 and doing the exercise correctly). The moderate ratings 2 or 3 received 1 point whenever they corresponded.
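The scoring rule just described can be sketched as follows. This is an interpretation, not EPA2000 source code: it assumes the extreme ratings (1 and 4) earn two points when they match the outcome and the moderate ratings (2 and 3) earn one point; the function name is illustrative.

```python
def epa_item_score(rating: int, correct: bool) -> int:
    """Score one EPA2000 prediction or evaluation item.

    Ratings: 1 = absolutely sure I am wrong, 2 = sure I am wrong,
             3 = sure I am correct,          4 = absolutely sure I am correct.
    Assumed rule: 2 points for an extreme rating matching the outcome,
    1 point for a moderate rating matching it, 0 otherwise.
    """
    if (rating == 1 and not correct) or (rating == 4 and correct):
        return 2
    if (rating == 2 and not correct) or (rating == 3 and correct):
        return 1
    return 0

# "Absolutely sure I am correct" plus a correct answer earns the full 2 points;
# a confident misjudgment earns nothing.
print(epa_item_score(4, True))   # -> 2
print(epa_item_score(3, True))   # -> 1
print(epa_item_score(1, True))   # -> 0
```

Under this reading, 80 items scored this way yield the maximum of 160 points per scale reported for prediction and evaluation.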
Other answers did not gain any points, as they are considered to represent a lack of off-line metacognitive skillfulness. The three scores (prediction, mathematics and evaluation) were unrelated. For


instance, in theory a child could obtain a maximum score for prediction, a zero score for mathematics and a medium score for evaluation. To be sure that children were able to make consistent decisions between being 'sure I am wrong' and 'absolutely sure I am wrong', or conversely 'sure I am right' and 'absolutely sure I am right', children reviewed the videotape of their performance and were asked why they chose 'sure' or 'very sure'. They all could answer convincingly. They were capable of drawing the fine line between 'sure' and 'very sure', and being correct while 'very sure' referred to a higher level of prediction skillfulness. The psychometric validity and reliability has been demonstrated on a sample of 550 Dutch-speaking third graders (Desoete et al. 2002). Cronbach's alphas of 0.92, 0.91, and 0.89 were found for mathematics, prediction (with a maximum score of 160 points), and evaluation (with a maximum score of 160 points) respectively. In this study children's mathematics scores on EPA2000 were 59.00/80 (SD=7.98). The average prediction score was 110.95/160 (SD=16.87), whereas the average evaluation score was 115.63/160 (SD=14.73).

Procedure

All subjects were assessed individually, in a quiet room outside the classroom setting, where they completed the KRT-R (Baudonck et al. 2006), TTR (de Vos 1992) and CDR (Desoete and Roeyers 2006), on two different days, for about 2 h in total. The regular teacher completed the teacher survey in the same period. To prevent an unintended learning effect with the repeated metacognitive measures, which would lead to correlations influenced by this effect, a counterbalanced design was used. Children got no training in how to provide verbalizations. The prospective and retrospective tasks were performed at the beginning and at the end of a mathematics test (KRT-R or TTR). All participants were instructed to think aloud while solving the three math problems.
The psychologist only urged them to continue thinking aloud whenever they fell silent, with the standard prompt 'Please keep on thinking aloud'. No help or feedback whatsoever was given by the psychologist. During the mathematics tasks, participants were not provided with a calculator, but were given blank sheets of paper for making notes. Sheets for note taking were removed during the KRT-R and EPA2000. The examiner, a psychologist, received practical and theoretical training in the assessment and interpretation of mathematics and metacognition. The training took place 2 weeks before the start of the assessment. For every instrument, the instructions given to the children in relation to their think-alouds and the scoring rules were explained. In order to guarantee the reliability of the assessment, the psychologist had to test one child and score the protocol in advance. This protocol was analyzed and corrected by the main researcher of the study. The test protocol was not included in the analyses of this study. Systematic, ongoing supervision and training was provided during the assessment of the first five children. The intelligence of all subjects was assessed 3 months after the metacognitive and mathematics tasks.

Results

Relationship between the teacher rating and other skillfulness measures

To investigate the relationship between the teacher ratings and the assessment by other instruments (research hypothesis 1) and to establish the relationship between the prospective, retrospective and on-line skill measures (research hypothesis 2), Pearson correlations were computed between the measures (see Table 2).
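As a reminder of what each cell in such a correlation table computes, a minimal sketch with made-up paired scores (the numbers are illustrative, not the study's data):

```python
import numpy as np

# Illustrative only: paired scores for ten children on two hypothetical
# metacognitive measures.
teacher_rating = np.array([3, 5, 4, 6, 2, 5, 7, 4, 3, 6], dtype=float)
think_aloud    = np.array([2, 4, 5, 6, 3, 4, 6, 5, 2, 7], dtype=float)

# Pearson r: the covariance of the two score vectors divided by the
# product of their standard deviations.
r = np.corrcoef(teacher_rating, think_aloud)[0, 1]
print(round(r, 3))  # -> 0.821
```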


Table 2 Correlations among the metacognitive skills between the instruments

The table reports Pearson correlations between each instrument's skill scores (rows: teacher ratings, prospective and retrospective child ratings, calibration, think-aloud protocols and EPA2000, each for prediction, planning, monitoring and evaluation where applicable) and the measures in the columns (intelligence, off-line prospective, off-line retrospective, off-line calibration, on-line concurrent and combined EPA2000). Cells for skills not measured by an instrument are empty.

Teacher teacher rating, Th think aloud, Prosp. prospective, Retrosp. retrospective, pred. prediction, eval. evaluation

*p <.09, **p <.01


In the left-hand column all types of ratings are included. The top row gives the type of assessment compared with the type of rating presented in the column. For example, row 2 of Table 2 reveals that teacher ratings of prediction (TR prediction) correlated 0.19 with prediction prospectively assessed. Teacher ratings of prediction also correlated 0.19 with prediction retrospectively assessed. Moreover, teacher ratings of prediction correlated 0.15 with prediction concurrently assessed with think-aloud protocols and 0.53 with prediction assessed with EPA2000. Row 3 of Table 2 reveals that teacher ratings of planning (TR planning) correlated 0.20 with planning prospectively assessed. Teacher ratings of planning also correlated 0.34 with planning retrospectively assessed. Moreover, teacher ratings of planning correlated 0.41 with planning concurrently assessed with think-aloud protocols; there was no measure of planning included in EPA2000. Teacher ratings of prediction skills correlated significantly with EPA2000 prediction skills. Teacher ratings of planning and monitoring skills correlated significantly with the results of the think-aloud protocols. Teacher ratings of evaluation skills correlated significantly with calibration, think-aloud protocols and EPA2000. The correlations between prospective and retrospective assessment techniques were, for prediction, planning, monitoring and evaluation, r =0.732 (p =0.0005), r =0.668 (p =0.001), r =0.778 (p =0.0005) and r =0.442 (p =0.051) respectively. Think-aloud protocols for prediction did not correlate significantly with the other prediction measures. Planning and monitoring assessed with think-aloud protocols correlated with the teacher's opinion of pupils' planning and monitoring skills (r =0.41, p <0.01 and r =0.41, p <0.01 respectively), but not with the prospective or retrospective child questionnaire results on these skills.
Evaluation assessed with think aloud protocols correlated significantly with the teacher rating (r = 0.39, p < 0.01), the prospective child questionnaire (r = 0.55, p < 0.01) and the results on EPA2000 (r = 0.42, p < 0.01).

Importance of prediction, planning, monitoring and evaluation skills for mathematical problem solving

To answer research question 3 and so establish to what extent the metacognitive skills were associated with mathematics performance in third grade, a principal components analysis and several regression analyses were conducted. Given the high intercorrelations between the mathematics subtest scores (KRT-R and TTR r = 0.448, p = 0.005; KRT-R and CDR r = 0.551, p = 0.01; KRT-R and EPA2000 r = 0.611, p = 0.007; CDR and TTR r = 0.174, p = 0.462; CDR and EPA2000 r = 0.310, p = 0.196; TTR and EPA2000 r = 0.210, p = 0.387), the internal structure of the mathematical data was first analyzed with a principal components analysis, to account for all the variance. This analysis was carried out to develop a small set of components that empirically summarize the correlations among the variables. Four components were needed to account for all the variance in our dataset. This initial number of four could be reduced to one, retaining enough variance for an adequate fit without losing parsimony. The one-component solution was based on two criteria. First, only one component had an eigenvalue higher than 1 (Kaiser criterion); components 2, 3 and 4 had eigenvalues of 0.850, 0.702 and 0.269 respectively and were less important from a variance perspective. Second, the one-component solution accounted for 54.464% of the common variance, with an eigenvalue of 2.179. Since all variables were normally distributed and met the assumptions for multiple regression, regression analyses were conducted in the sample to evaluate how well the


metacognitive skills measured by different assessment techniques predicted the mathematics component in grade 3. Due to limited power the analyses were not combined in one analysis. The metacognitive skills and intelligence were included simultaneously as predictor variables.

Off-line measure: teacher ratings

The linear combination of intelligence and of prediction, planning, monitoring and evaluation measured off-line by teacher ratings was significantly related to the mathematics component, F(5, 19) = 13.385, p = 0.007. Adjusted R² was 0.765 (see Table 3). Especially intelligence and teacher ratings of planning contributed to the variance in mathematics performances of third grade children.

Off-line measures: prospective and retrospective child questionnaires

A multiple regression analysis pointed out that the linear combination of intelligence and the metacognitive skills measured by prospective child ratings was also significantly related to mathematics performances, F(5, 19) = 6.092, p = 0.003. Adjusted R² was 0.573. Only intelligence contributed in the expected way to the variance in mathematics performance of third grade children (see Table 4). A third multiple regression analysis pointed out that the linear combination of intelligence and the metacognitive skills measured by retrospective child ratings was also significantly related to mathematics performances, F(5, 19) = 6.901, p = 0.002. Adjusted R² was 0.608. Again only intelligence contributed in the expected way to the variance in mathematics performance of third grade children.

On-line measures

To establish to what extent the mathematics performance was associated with metacognitive skills measured by think aloud protocols, a regression analysis was performed with the mathematical component as outcome variable and intelligence and the sub scores simultaneously as predictor variables. Adjusted R² was 0.529, F(5, 19) = 5.259, p = 0.006.
Intelligence was a significant predictor (B = 0.093, β = 0.682, t = 3.545, p = 0.003) in the expected way. There was a trend indicating that monitoring assessed with think aloud protocols (B = 0.248, β = 0.569, t = 1.800, p = 0.093) contributed to the variance in mathematics learning. Prediction (B = −0.206, β = −0.356, t = −1.207, p = 0.247), planning
Table 3 Prediction of mathematics component by teacher ratings

Mathematics component     B        β        t        p
Constant                  9.066             5.731    0.000
Intelligence              0.077    0.565    4.399    0.001*
Prediction                0.040    0.151    0.521    0.610
Planning                  0.180    1.638    3.902    0.002*
Monitoring                0.091    0.412    0.689    0.502
Evaluation                0.188    0.975    1.608    0.130

B = unstandardised coefficient, β = standardised coefficient
*p ≤ 0.05

Table 4 Prediction of mathematics component from prospective and retrospective questionnaires

                   Prospective questionnaire            Retrospective questionnaire
                   B       β       t       p            B       β       t       p
Constant           5.950           1.969   0.069        5.584           0.531   0.127
Intelligence       0.087   0.639   3.320   0.005*       0.072   0.531   2.465   0.027*
Prediction         0.018   0.186   0.843   0.414        0.002   0.028   0.090   0.930
Planning           0.078   0.303   1.866   0.083        0.072   0.373   2.058   0.056
Monitoring         0.005   0.046   0.237   0.816        0.001   0.013   0.066   0.949
Evaluation         0.067   0.159   0.921   0.373        0.040   0.121   0.534   0.602

B = unstandardised coefficient, β = standardised coefficient
*p ≤ 0.05
(B = −0.186, β = −0.217, t = −1.047, p = 0.313) and evaluation (B = 0.414, β = 0.149, t = 0.917, p = 0.375) assessed by think aloud protocols were no significant predictors of the mathematics performances of third grade children.

Combined measures

To investigate to what extent the mathematics component score was associated with intelligence and with the metacognitive prediction and evaluation skills of the EPA2000, a regression analysis was conducted with the mathematical component as outcome variable and intelligence as well as the EPA2000 metacognitive sub scores simultaneously as predictor variables. Adjusted R² was 0.601, F(3, 19) = 10.545, p = 0.000. Intelligence (B = 0.081, β = 0.592, t = 3.788, p = 0.002) contributed, whereas prediction (B = 0.027, β = 0.442, t = 1.449, p = 0.167) and evaluation (B = −0.004, β = −0.061, t = −0.202, p = 0.843) had no significant additional predictive value for the mathematics component score.

Correlations among metacognitive skills

The correlations between the metacognitive skills within the divergent instruments were computed (see Table 5). Positive and significant relations were found between most skills, except between evaluation and prediction or planning when measured prospectively or assessed with think aloud protocols. The correlations highly depended on the technique used to assess metacognitive skills. There was a high intercorrelation amongst the teacher rating and EPA2000 sub scores.

Differences between below-average, average and above-average performers

Average ratings across the metacognitive subskills were computed. A MANCOVA was performed on these metacognitive subskills and mathematics-performance groups (based on the component matrix) in order to test whether below-average, average and above-average participants differed on metacognitive skills. Preliminary comparisons revealed that the children in the three conditions did not differ significantly in TIQ, F(2, 17) = 3.061, p = 0.073. However, comparisons revealed

Table 5 Correlations among the metacognitive skills within the instruments

                                 Planning   Monitoring   Evaluation
Off-line   Prospective test
             Prediction          0.12       0.58*        0.09
             Planning            /          0.12         0.29
             Monitoring          /          /            0.26
           Retrospective test
             Prediction          0.51*      0.69*        0.67*
             Planning            /          0.34         0.48
             Monitoring          /          /            0.53*
           Teacher Rating
             Prediction          0.90*      0.92*        0.91*
             Planning            /          0.95*        0.96*
             Monitoring          /          /            0.98*
On-line    Thinking aloud
             Prediction          0.55*      0.84*        0.08
             Planning            /          0.62*        0.02
             Monitoring          /          /            0.13
Combined   EPA2000
             Prediction                                  0.89*

Thinking aloud = think aloud protocol, Prospective test = prospective assessment of children by questionnaire, Retrospective test = retrospective assessment of children by questionnaire
*p < 0.01

that above-average participants outperformed (although not significantly) the two other groups on full-scale intelligence. The MANCOVA with summed EPA2000 results, teacher ratings, calibration, prospective scale, retrospective scale and think aloud scale as dependent variables and intelligence as covariate revealed, perhaps due to very limited power, no significant differences at the multivariate level for performance group, F(12, 22) = 1.575, p = 0.172, power = 0.63, and no significant effect of intelligence, F(6, 11) = 1.086, p = 0.427, power = 0.27. For M and SD we refer to Table 6. The performance group was predicted for 40.3% by EPA2000, for 29.9% by the teacher ratings, for 51% by the retrospective questionnaire, for 36.4% by the prospective questionnaire, for 32.9% by the think aloud protocols and for 15.1% by the calibration results.
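The adjusted R² values reported throughout these analyses follow the standard shrinkage correction for the number of predictors k and sample size n; the residual degrees of freedom reported (e.g., F(5, 19)) imply n = 25 for the five-predictor models. The sketch below shows that correction, together with the standard identity recovering R² from a model F statistic. The numbers are illustrative only, not the study's data.

```python
def r2_from_f(f_stat, k, df_res):
    """Recover R^2 from a model F statistic with k predictors
    and df_res residual degrees of freedom."""
    return (k * f_stat) / (k * f_stat + df_res)

def adjusted_r2(r2, n, k):
    """Shrink R^2 for the number of predictors k and sample size n."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Illustrative five-predictor model with n = 25 cases (df_res = 19),
# mirroring the design of the analyses above but not their data.
r2 = r2_from_f(6.0, k=5, df_res=19)
print(round(r2, 3), round(adjusted_r2(r2, n=25, k=5), 3))
```

With a small sample and five predictors the shrinkage is substantial, which is why the adjusted rather than the raw R² is the appropriate figure to compare across the models above.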

Table 6 Results of children with different mathematic skills in grade 3

                               Below-average       Average             Above-average
                               M (SD; N = 7)       M (SD; N = 8)       M (SD; N = 5)       F(2, 17)
Intelligence                   98.29 (6.94)        99.37 (4.78)        107.40 (8.96)       3.061
Mathematic skills                                                                          F(2, 16)
TTR (min 1–max 100)            61.75 (13.123)b     65.71 (11.071)      82.20 (17.370)a     3.072*
KRT-R (min 1–max 100)          28.14 (13.246)b     38.62 (16.817)      67.80 (26.003)a     6.584**
CDR (min 0–max 90)             47.75 (9.346)b      51.85 (9.702)       60.60 (7.536)a      4.516**
EPA math. (min 0–max 80)       54.12 (4.120)b      58.42 (7.457)       67.00 (6.892)a      4.148**
Metacognitive skills                                                                       F(2, 16)
EPA2000 (min 0–max 320)        214.14 (16.201)b    217.62 (26.494)     258.40 (30.30)a     3.598**
Prosp. test (min 0–max 175)    134.43 (13.636)a    107.13 (12.259)     104.45 (14.449)b    3.057*
Retrosp. test (min 0–max 175)  139.14 (14.847)a    111.37 (21.043)     99.60 (26.178)b     5.556**
Teacher Rat. (min 0–max 140)   73.428 (17.558)c    83.125 (24.014)b    101.20 (18.619)a    2.278*
Calibration (min 0–max 90)     10.25 (9.180)       8.00 (6.164)        1.80 (3.492)        0.948
Think aloud (min 0–max 40)     14.86 (3.184)b      13.62 (5.289)b      17.60 (5.727)a      2.616*

a, b, c: different indexes refer to significant between-group differences with a significance level of 0.05. TTR numerical facility, KRT-R mathematical reasoning, CDR mathematics test also used for calibration, EPA2000 mathematics test also used for prediction and evaluation, Prosp. test prospective child rating, Retrosp. test retrospective child rating, Teacher Rat. teacher rating, Think aloud think aloud protocol
*p ≤ 0.10; **p ≤ 0.05

Discussion

Since Flavell introduced the concept 30 years ago, different methods to assess metacognition have been used (Tobias and Everson 2002). This study is devoted to the multi-method assessment of metacognition in elementary school children. It investigates whether prospective, retrospective, on-line and combined (EPA2000) techniques can be used with elementary school children. We also focused on teacher ratings, to investigate whether such a questionnaire can add value in the assessment of the metacognitive skills of young children. Overall, the results clearly confirm the value of the ratings of an experienced teacher as actual measures of metacognitive skills in elementary school children. As predicted, the teacher ratings of prediction skills correlated positively with the combined assessment by EPA2000 but not with the child questionnaires. As expected, the teacher questionnaire on evaluation skills also correlated positively with the concurrent and combined assessment techniques. Moreover, teacher ratings correlated significantly with prospective child questionnaires and with measured intelligence. Since in elementary school children a combination of prediction and evaluation skills explains a substantial amount of variance in mathematics (Desoete et al. 2001), this is a relevant finding. Furthermore, especially the rating of an experienced teacher on planning was associated with mathematics performances.

Another relevant issue concerns the relationship between assessment techniques of metacognitive skills. As expected, our dataset suggests convergent validity for the prospective and retrospective ratings of prediction, planning, monitoring and evaluation skills. In our study there seemed to be a fairly consistent mindset that was not much influenced by students' actual performances, resulting in similar ratings before and after mathematical problem solving. This is in line with previous findings (Desoete et al. 2003) that metacognitive skills need to be taught explicitly in order to develop; they cannot be assumed to develop from freely experiencing mathematics. In addition, consistent with Veenman's literature review, no significant correlations were demonstrated between the child questionnaires and the other techniques.

The next research question focused on the extent to which metacognitive skills were associated with mathematics performance in third grade. In our dataset especially intelligence predicted mathematics. Most targeted metacognitive variables did not predict scores on the mathematics component after IQ was covaried out. The study revealed that the choice of diagnostic instruments highly determined the predicted percentage. Concerning the usefulness of the skills for mathematical performances, especially planning measured by teacher questionnaires could be regarded as the best estimate of variance. Our findings might suggest that metacognitive prediction and evaluation skills add nothing to the prediction of performance above that accounted for by IQ in almost every measure and test.
These results are in line with the intelligence model in which metacognitive skills are considered as manifestations of intelligence. However, in


line with the combined model, metacognitive planning measured with teacher ratings played a role above and beyond IQ. Our dataset certainly seems to indicate that in studies on metacognition related to mathematical problem solving, IQ should be used as a covariate. Moreover, our dataset revealed very high intercorrelations between the prediction, planning, monitoring and evaluation skills rated by the teachers and between the prediction and evaluation skills assessed by EPA2000. In the think aloud protocols a high relationship was found between monitoring on the one hand and prediction and planning on the other. We found that the skills are generally related, but that it is more appropriate to assess them separately. The evaluation skill seems relatively independent in prospective child ratings and think aloud protocols. There was a positive correlation between evaluation assessed prospectively, retrospectively, concurrently, with EPA2000 and with a calibration approach.

These results should be interpreted with care, since there are some limitations to the present study. First, it should be acknowledged that the sample size is a limitation. Obviously sample size is not a problem for the significant correlations or regressions. However, when analyses had insufficient power and were not significant, a risk of type 2 or β errors (concluding from the cohort that there were no differences although in reality there were differences in the population) cannot be excluded. Additional research with larger groups of children is indicated; such studies are currently being planned. Second, the results of this study should be interpreted with care since the analyses are based on a single teacher. A less experienced teacher, a teacher who teaches different grades, or a teacher who has not been in the same school for 20 years might lead to other results.
Her ability, expertise, familiarity and knowledge related to the students and their metacognition might be placed at the expert end of the continuum that runs from novice to expert. Additional research is needed with teachers who are less experienced and less perceptive in observing and rating their students. It might also be interesting to examine how other teachers can reach such a level of performance; such research is currently being planned. In addition, metacognitive skills may be age-dependent and still maturing. Finally, the number of other possible causes of low mathematical functioning (language problems, hyperactivity, sensory impairment, brain damage, a chronic medical condition, insufficient instruction, serious emotional or behavioural disturbance) was restricted to a minimum in this study. These restrictions, causing a limitation in the random sampling, have to be noted as limitations of this research. Additional research should focus on these factors.

Reflecting on the results of the present study, there is evidence that how you test is what you get. Teacher questionnaires seem to give additional valuable information on the planning skills of third grade children. We suggest that researchers who are interested in metacognitive skillfulness in young children use multi-method designs, including teacher questionnaires.

References
Artelt, C. (2000). Wie prädiktiv sind retrospektive Selbstberichte über den Gebrauch von Lernstrategien für strategisches Lernen? [How predictive are retrospective self-reports on the use of learning strategies for strategic learning?] German Journal of Educational Psychology, 14, 72–84.
Artzt, A. F., & Armour-Thomas, E. (1992). Development of a cognitive-metacognitive framework for protocol analysis of mathematical problem solving in small groups. Cognition and Instruction, 9, 137–175. doi:10.1207/s1532690xci0902_3.
Baudonck, M., Debusschere, A., Dewulf, B., Samyn, F., Vercaemst, V., & Desoete, A. (2006). De Kortrijkse Rekentest Revision KRT-R. [The Kortrijk Arithmetic Test Revision KRT-R]. Kortrijk: CAR Overleie.
Brown, A. (1987). Metacognition, executive control, self-regulation, and other more mysterious mechanisms. In F. E. Weinert, & R. H. Kluwe (Eds.), Metacognition, motivation and understanding (pp. 65–116). Hillsdale, NJ: Erlbaum.


Carr, M., & Jessup, D. L. (1995). Cognitive and metacognitive predictors of arithmetics strategy use. Learning and Individual Differences, 7, 235–247. doi:10.1016/1041-6080(95)90012-8.
De Clercq, A., Desoete, A., & Roeyers, H. (2000). EPA2000: A multilingual, programmable computer assessment of off-line metacognition in children with mathematical-learning disabilities. Behavior Research Methods, Instruments, & Computers, 32, 304–311.
Demetriou, A., Gustafsson, J. E., Efklides, A., & Platsidou, M. (1992). Structural systems in developing cognition, science, and education. In A. Demetriou, M. Shayer, & A. Efklides (Eds.), Neo-Piagetian theories of cognitive development: Implications and applications for education (pp. 79–103). London: Routledge.
Desoete, A. (2007). Evaluating and improving the mathematics teaching-learning process through metacognition? Electronic Journal of Research in Educational Psychology, 5(3), 705–730.
Desoete, A., & Roeyers, H. (2006). Cognitieve Deelvaardigheden Rekenen (CDR). Rekentests voor 1ste, 2de en 3de graad. [Cognitive subskills of arithmetic (CDR). Arithmetic tests for the 1st, 2nd and 3rd grade.] Herenthals: Vlaamse Vereniging voor Logopedisten (VVL).
Desoete, A., Roeyers, H., & Buysse, A. (2001). Metacognition and mathematical problem solving in grade 3. Journal of Learning Disabilities, 34, 435–449. doi:10.1177/002221940103400505.
Desoete, A., Roeyers, H., & De Clercq, A. (2002). The measurement of individual metacognitive differences in mathematical problem solving. In M. Valcke, D. Gombeir, & W. C. Smith (Eds.), Learning styles: Reliability & validity. Proceedings of the 7th Annual ELSIN Conference, Ghent University, Belgium (pp. 93–102). Gent: Academia Press Scientific Publishers.
Desoete, A., Roeyers, H., & De Clercq, A. (2003). Can off-line metacognition enhance mathematical problem solving? Journal of Educational Psychology, 95(1), 188–200. doi:10.1037/0022-0663.95.1.188.
Desoete, A., Roeyers, H., & Huylebroeck, A. (2006). Metacognitive skills in Belgian third grade children (age 8 to 9) with and without mathematical learning disabilities. Metacognition and Learning, 1(2), 119–135. doi:10.1007/s11409-006-8152-9.
De Vos, T. (1992). Tempo-Test-Rekenen. [Tempo test arithmetic.] Nijmegen: Berkhout.
Dowker, A. (2005). Individual differences in arithmetic: Implications for psychology, neuroscience and education. Hove, UK: Psychology Press.
Elshout-Mohr, M., Meijer, J., van Daalen-Kapteijns, M., & Meeus, W. (2003). A self-report inventory for metacognition related to academic tasks. Paper presented at the 10th Conference of the European Association for Research on Learning and Instruction (EARLI), Padova, Italy, 26–30 August 2003.
Flavell, J. H. (1976). Metacognitive aspects of problem solving. In L. B. Resnick (Ed.), The nature of intelligence (pp. 231–235). Hillsdale, NJ: Erlbaum.
Ghesquière, P., & Ruijssenaars, A. (1994). Vlaamse normen voor studietoetsen Rekenen en technisch lezen lager onderwijs. [Flemish norms for school tests on arithmetic and technical reading in elementary education.] Leuven: K.U.L.-C.S.B.O.
Huet, N., & Marine, C. (1998). Assessment of metacognition II: Concurrent measures. L'Année Psychologique, 98, 727–742.
Lin, L. M., & Zabrucky, K. M. (1998). Calibration of comprehension: Research and implications for education and instruction. Contemporary Educational Psychology, 23, 345–391. doi:10.1006/ceps.1998.0972.
Lucangeli, D., Cornoldi, C., & Tellarini, M. (1998). Metacognition and learning disabilities in mathematics. In T. E. Scruggs, & M. A. Mastropieri (Eds.), Advances in learning and behavioral disabilities (pp. 219–285). Greenwich: JAI.
Lucangeli, D., Galderisi, D., & Cornoldi, C. (1995). Specific and general transfer effects following metamemory training. Learning Disabilities Research & Practice, 10, 11–21.
Marine, C., & Huet, N. (1998). Assessment of metacognition I: Independent measures. L'Année Psychologique, 98, 711–726. doi:10.3406/psy.1998.28566.
Meltzer, L., Roditi, B., Houser, R. F., & Perlman, M. (1998). Perceptions of academic strategies and competence in students with learning disabilities. Journal of Learning Disabilities, 31, 437–451.
Montague, M. (1998). Research on metacognition in special education. In T. E. Scruggs, & M. A. Mastropieri (Eds.), Advances in learning and behavioural disabilities (p. 151). San Diego: Academic.
Perry, N. E. (1998). Young children's self-regulated learning and contexts that support it. Journal of Educational Psychology, 90, 715–729. doi:10.1037/0022-0663.90.4.715.
Pressley, M. (2000). Development of grounded theories of complex cognitive processing: Exhaustive within- and between-study analyses of thinking-aloud data. In G. Schraw, & J. C. Impara (Eds.), Issues in the measurement of metacognition (pp. 262–296). Lincoln, NE: Buros Institute of Mental Measurements.
Sperling, R. A., Howard, B. C., Miller, L. A., & Murphy, C. (2002). Measures of children's knowledge and regulation of cognition. Contemporary Educational Psychology, 27, 51–79. doi:10.1006/ceps.2001.1091.
Sternberg, R. J. (2001). Metacognition, abilities, and developing expertise: What makes an expert student? In H. Hartman (Ed.), Metacognition in learning and instruction (pp. 247–260). Dordrecht: Kluwer Academic Press.


Swanson, H. L. (1990). Influence of metacognitive knowledge and aptitude on problem solving. Journal of Educational Psychology, 82, 306–314.
Tobias, S., & Everson, H. T. (2002). Knowing what you know and what you don't: Further research on metacognitive monitoring. College Board Research Report 2002-3. New York: College Entrance Examination Board.
Veenman, M. V. J. (2005). The assessment of metacognitive skills: What can be learned from multi-method designs? In C. Artelt, & B. Moschner (Eds.), Lernstrategien und Metakognition: Implikationen für Forschung und Praxis (pp. 77–99). Münster: Waxmann.
Veenman, M. V. J., Kok, R., & Blöte, A. W. (2005). The relation between intellectual and metacognitive skills in early adolescence. Instructional Science, 33, 193–211. doi:10.1007/s11251-004-2274-8.
Veenman, M. V. J., & Spaans, M. A. (2005). Relation between intellectual and metacognitive skills: Age and task differences. Learning and Individual Differences, 15, 159–176. doi:10.1016/j.lindif.2004.12.001.
Veenman, M. V. J., Van Hout-Wolters, B. H. A. M., & Afflerbach, P. (2006). Metacognition and learning: Conceptual and methodological considerations. Metacognition and Learning, 1, 3–14. doi:10.1007/s11409-006-6893-0.
Veenman, M. V. J., Wilhelm, P., & Beishuizen, J. J. (2004). The relation between intellectual and metacognitive skills from a developmental perspective. Learning and Instruction, 14, 89–109. doi:10.1016/j.learninstruc.2003.10.004.
Vermeer, H. (1997). Sixth-grade students' mathematical problem-solving behaviour: Motivational variables and gender differences. Leiden: UFB, Leiden University.
Vermunt, J. D. H. M. (1996). Metacognitive, cognitive and affective aspects of learning styles and strategies: A phenomenographic analysis. Higher Education, 31, 25–50. doi:10.1007/BF00129106.
Whitebread, D., Coltman, P., Anderson, H., Mehta, S., & Pasternak, D. P. (2005). Metacognition in young children: Evidence from a naturalistic study of 3–5 year olds.
Paper presented at the 11th Biennial European Association for Research on Learning and Instruction (EARLI) Conference, Nicosia, Cyprus (August).
Winne, P. H., & Perry, N. E. (2000). Measuring self-regulated learning. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 531–566). San Diego: Academic.
