
Formative assessment in discussion tasks
Daniel Parsons

Open discussion tasks that involve problem-solving and the sharing of ideas are not amenable to the elicitation of target language forms. By systematically introducing formative assessment into the task, it is possible to improve students' elaboration of language related episodes (LREs). In this article, I describe and evaluate the implementation of a resource based on the principles of formative assessment. Through the use of extracts focusing on students' LREs, I show how formative assessment can be embedded into discussion tasks. The extracts reveal that students are capable of engaging in formative assessment, but may require more training in order to focus on specific language forms.

Introduction

Eliciting target language features in open discussion tasks between peers is notoriously difficult for teachers. Even after language input during a pre-task, there is no guarantee that students will utilize the language in the ensuing discussions (Ellis 2012: 224–7). Furthermore, during the discussion tasks, the kind of interactive feedback necessary for acquisition (such as clarification requests) is not common among peers (ibid.: 214–18). Post-tasks have been identified as a crucial stage for reflection, consolidation, and planning task repetition (Skehan 2014: 8). However, as Extract 1 illustrates, on-task performances can often seem lacking in language production (the data used are my own, and reflect the properties identified by Seedhouse 2004).

Extract 1
1 S1: One million yen.
2 S2: One million yen!? It's too expensive!
3 S1: Fifty thousand.
4 S2: Hmm
5 S3: Fifty?
6 S1: Ah- fifteen thousand ten.
7 S2: Ten thousand yen.
8 S3: Ten thousand?
9 S2: Yes.

10 S1: Okay.
11 S3: Okay.
12 S2: Okay.

ELT Journal Volume 71/1 January 2017; doi:10.1093/elt/ccw043
© The Author 2016. Published by Oxford University Press; all rights reserved.
Advance Access publication May 24, 2016
Taken from a mixed proficiency class of first-year undergraduates at
a Japanese university, Extract 1 involved students deciding the level
of fines for public displays of antisocial behaviour (see Appendix for
transcription conventions used in all extracts). The students compared
a range of antisocial behaviours and ranked them according to the
level of fine imposed for each. Linguistically, they were expected to
provide argumentation and reasoning for their suggestions. Extract
1 shows students engaged in what Seedhouse (ibid.) characterized as 'minimalization' and 'indexicality' in task-based learning. In other words, the amount of language produced is minimized and is bound to context, which in turn leaves little room for the elaboration of language form (Seedhouse ibid.: 125–6).
The students in Extract 1 are focused on task completion. However,
turns 3–6 show students engaged in a language related episode (LRE).
LREs have been defined as sequences within a dialogue in which the
speakers specifically address problematic features of the language they
are producing (Jackson 2001). For learners, this could mean, for example,
negotiating a correct preposition, correcting their own pronunciation,
or simply finding an appropriate word to use. Learners who scrutinize
language features in more detail during an LRE have been found to use
correct forms later (Storch 2008). The LRE in turns 3–6 in Extract 1 is not
elaborated since it lacks any discussion about the language.
This article uses a model of formative assessment to demonstrate how
students can be encouraged to elaborate their LREs during discussion
tasks. After introducing formative assessment, the article describes a
resource based on Ellis's (2006: 34) idea of 'time out' from discussions.
An analysis of student talk during the use of this resource provides some
insights into the efficacy of formative assessment.

Formative assessment

Formative assessment in education

Cizek (2010: 38) contrasts formative assessment with summative assessment. In summative assessment, he includes measures of student achievement at the end of a unit of study for such purposes as classifying students' proficiencies and awarding grades. Formative assessment, on the other hand, is a process of gathering evidence of students' strengths and weaknesses to plan instruction, to help students develop autonomy, and to build skills in self-evaluation. Cizek (ibid.: 24) defines any formative assessment activity as one which provides information to the teacher or learners and which can be used to modify teaching and improve learning.
Wiliam (2011: 33–7) outlines some benefits associated with formative
assessment in a review of early studies which integrated assessment
and instruction. He describes a project in which teachers observed and
reflected on videos of students engaged in problem-solving activities. The
teachers observed the strategies used by students when solving problems,
and considered how students might solve other problems. These teachers were encouraged to adapt their practices with this evidence in mind. Their
students were later found to have improved their problem-solving skills
as a result of the teacher activity. In contrast, the students of teachers who
had not observed the videos had made no improvements. Wiliam (ibid.:
36) cites further evidence, from a review of 250 studies, that formative
assessment embedded into classroom instruction can lead to a doubling of
student achievement.
Given that formative assessment has been shown to have a significant
impact on student achievement, it is reasonable to assume that it could
also be applied to discussion tasks in language teaching. In the remainder
of this article, I will argue that formative assessment can be mirrored
through LREs in discussion tasks.

LREs and formative assessment

LREs have been described in terms of noticing gaps in linguistic knowledge, being aware of what type of gap is noticed (metalinguistic awareness), experimenting with new forms to test hypotheses about accuracy, and the uptake of newly negotiated forms for integration into the learners' interlanguage (Ellis 2012: 161–81). This description comes from
a cognitive-interactionist view of L2 learning, which posits that interaction
can provide learners with evidence to stimulate their thinking about
language form and update their language knowledge. A sociocultural
theory of language learning also supports this approach to LREs through
the idea that reformulations, recasts, and corrective feedback from peers
can offer scaffolding, a distribution of cognitive resources, and a reduction
in the complexity of the language learning activity (ibid.: 184–90). In
fact, these processes are also mirrored in a theoretical description of the
characteristics of formative assessment.
McMillan (2010: 43) describes 11 characteristics of formative assessment. These include 'evidence of student learning', 'student self-assessment', 'feedback', and the 'success' of an activity. Such
characteristics can be interpreted in terms of LREs. Given that LREs
are a source of language development in discussion tasks, they thus
provide evidence of engagement with language and opportunities for
learning. Likewise, self-assessment occurs as part of LREs, since students
must be conscious of their linguistic shortcomings during discussions.
Furthermore, feedback is a natural part of discussion tasks since peers,
and sometimes the teacher, can provide alternative linguistic forms, which
are LREs. Finally, students make judgements about the accuracy of any
linguistic forms they decide upon. In other words, they evaluate the success
of an LRE. The characteristics of formative assessment are therefore
mirrored by LREs; LREs are a rich arena for formative assessment.
However, as illustrated in Extract 1, the success of an LRE is only as
effective as the quality of the talk in which students engage. Thus, the
success of formative assessment lies in its implementation. In fact,
Nicol and Macfarlane-Dick (2006) emphasize the teacher's support of learner self-regulation through clarifying what good performance is, encouraging teacher and peer dialogue, and delivering high quality feedback. Peer dialogue in discussion tasks, meanwhile, involves listening, explaining, questioning, speculating, and hypothesizing, and formative peer assessment is 'an arrangement for learners to consider and specify the level, value or quality of a product or performance of other equal status learners' (Topping 2010: 62).
Topping's model of formative peer assessment (ibid.: 63–6) introduces
five categories which impact on the effectiveness of peer dialogue. The
first category is organization and engagement, examples of which include
the time spent elaborating on a language problem, or the grouping of
peers in terms of dominant and passive personalities. The second is
scaffolding, which involves diagnosis of language-related problems by
students, and modelling of language forms. Thirdly, cognitive conflict
involves students noticing the gap in their language knowledge. The
fourth category is communication: peers listening, explaining, and
summarizing knowledge. The final category is affect, which means
students overcoming anxiety and taking ownership of learning for
themselves. In Topping's model, these five categories are considered to
be essential in extending declarative knowledge, procedural skill, and
automaticity. They can also promote the development of metacognitive
strategies for regulation of learning, and the attribution of learning
success to one's own efforts, as described by McMillan (op.cit.).
These ideas formed the basis of a pedagogical resource for students that
prompts 'time out' from a discussion activity, and aims to guide them
towards effective peer dialogue during LREs. The resource was called Pit
Stop. It was introduced to students I worked with through the metaphor
that when a racing car needs maintenance during a race, it drives into
a pit stop, has some work done, and then returns to the race. Similarly,
students can take time out from a discussion, work on their language,
and then return to their discussion. Table 1 outlines the procedure for Pit
Stop, and shows how each step in the procedure is related to the processes
inherent in LREs and Topping's factors affecting the effectiveness of
formative peer assessment.

Questions about Pit Stop

The design of Pit Stop as a formative assessment resource that fosters effective peer dialogue during LREs leads to three questions.
1 Does Pit Stop mediate elaboration of LREs?
2 In what ways do students solve language problems in elaborate LREs?
3 Do students show evidence of effective formative peer assessment as described in Table 1?

Design and implementation of Pit Stop

I followed Topping's (ibid.) advice to prepare materials that can aid the formative assessment process by creating a Pit Stop board. Each group of students was given one board. Five squares were drawn on the board and contained metalinguistic categories that paralleled the types of LREs (mechanical, lexical, and morphological) identified in the literature (Philp, Adams, and Iwashita 2014: 268). The categories were 'Pronunciation I can't say', 'A spelling I don't know', 'A word I don't know', 'Grammar I can't remember', and 'A sentence I can't say'.
Table 1  Combining LREs and formative assessment through the Pit Stop procedure

Pit Stop procedure | LREs | Formative assessment
Take time out from the discussion when experiencing language difficulties | Noticing | Organization and engagement: interaction style; Affect: self-disclosure; Cognitive conflict
Understand the language difficulty | Metalinguistic awareness | Scaffolding: diagnosis
Get help from your discussion group members | Production of LRE | Communication: listen, explain, hypothesize
Work together to solve the problem | Hypothesis testing, recast, reformulation, correction | Affect: accountability, ownership, motivation; Scaffolding: modelling, monitoring, diagnosis, correction, information modulation; Communication: clarify, speculate, hypothesize, summarize, rehearse
Apply solution to the discussion | Uptake | Affect: ownership

During discussions, students indicated the problematic metalinguistic category on the Pit Stop board. Since students were at high-beginner and low-intermediate levels, I allowed them to switch from English to Japanese
when engaging in LREs. This decision was based on the sociocultural
view that comprehensible verbal action can mediate learning (Lantolf
and Poehner 2014: 65–7).
The students took a few weeks to get used to Pit Stop. I had to intervene
a number of times to remind some students that it was not permission
to continue their discussions in L1. Pit Stop was implemented from the
ninth lesson of a 15-lesson semester, and example dialogues were provided
to explain each step in the process. The students were encouraged to
follow the steps outlined in Table 1. First, upon noticing a language
problem, they should call for Pit Stop. Second, they should identify the
type of language problem. Third, they should explain the problem to
their peers. Fourth, the group should work together to solve the language
problem. Finally, they should exit Pit Stop and continue the discussion by
applying the solution.
Their discussions were recorded in language laboratories. Recording was
a regular feature of the classroom from the first lesson onwards. The
purpose of recording was for student self-reflection, and students were
given post-tasks that involved critically examining their own discussions.
I used these recordings to transcribe the Pit Stop episodes.
One class of 20 students agreed to participate in this study. The students
were split into six groups of three or four. For the discussion task
focused on in this study, each member of a group represented a country.
Students had to argue for investment in certain technologies, which
they believed would solve certain problems faced by their country. Each
group was required to agree on a final list of four technologies in order
of importance. Input sessions for key vocabulary, themes, and data about their respective countries were given in earlier lessons. The discussion
took place in the final lesson of the semester and the students were
given 20 minutes to complete the discussion. In total, 120 minutes of
discussions were recorded across all student groups, yielding 33 minutes
and 18 seconds of Pit Stop episodes, with an average episode length of
1 minute and 7 seconds. The shortest Pit Stop episode was 16 seconds,
the longest was 2 minutes and 10 seconds. There were a total of 39 Pit
Stop episodes. A Pit Stop episode also included the moments leading up
to a decision to enter Pit Stop (for example turns in which the students
hesitated over language form) and the moments after it finished (for
example uptake of the language form discussed during the LRE).
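The timing figures above imply the proportion of total discussion time given over to Pit Stop, which the argument later draws on. The arithmetic can be sketched in a few lines of Python (an illustrative check only, using the figures reported above):

```python
# Illustrative arithmetic: share of discussion time spent in Pit Stop,
# computed from the totals reported in the text.
total_discussion_s = 120 * 60   # 120 minutes of recorded discussion
pit_stop_s = 33 * 60 + 18       # 33 minutes 18 seconds of Pit Stop episodes

proportion = pit_stop_s / total_discussion_s
print(f"{pit_stop_s} s of Pit Stop talk, "
      f"{proportion:.1%} of total discussion time")
```

On these figures, roughly a quarter of all discussion time was spent inside Pit Stop episodes.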

Analysis

Pit Stop episodes were very diverse, but four characteristics stood out as possibly mutually exclusive: those that showed high levels of support from peers in English; those in which students exhibited high levels of anxiety and uncertainty; those involving corrective feedback by peers during the initiating stages; and those in which students failed to communicate their own thoughts effectively.
Extract 2 shows a Pit Stop sequence between a low-intermediate level
student (S2) and two high-beginner students (S1, S3). This extract is
notable for its reflection of the categories of formative peer assessment.
First, we see verbal expressions of cognitive conflict, such as in turn 1,
followed by S1 diagnosing his own language problem in turn 5. Second,
the students work together to break down a complex argument into parts
between turns 6 and 23. This teamwork consists of monitoring and regulating learning under Topping's category of organization. Topping's category of communication is also observed as S1 imitates phrases in turns 9, 16, 18, 20, and 22. From a cognitive-interactionist perspective, S2 and S3 have helped extend S1's declarative knowledge of key vocabulary,
and procedural skill in combining phrases. This leads to automaticity for
S1 in turn 28. Turn 28 acts as a final performance for S1 after an explicit
evaluation of performance readiness in turn 24. From a sociocultural
perspective, S2 and S3 are seen to have scaffolded this final performance.

Extract 2
1 S1: So in the past err
2 S2: Use pit stop?
3 S1: Yes
4 S2: Okay okay
5 S1: In the past, rate of food production was small, er no big, but now
6 S2: Err in the in the err maybe in the past
7 S3: Food producing
8 S2: Food producing percentage is
9 S1: Food produce
10 S2: Food produce err producing percentage is err
11 S3: Big?
12 S2: Err mu- much
13 S3: Bigger is okay
14 S2: Bigger
15 S2: But in recently (0.7) recently years k- recently years err
16 S1: [recently years
17 S3: It is decreasing year by year to
18 S1: [decreasing
19 S2: Hm
20 S1: Decreasing okay okay okay
21 S2: [okay okay so
22 S1: [year by year
23 S3: Year by year
24 S1: Okay lets go
25 S2: No problem no problem S1?
26 S1: Okay
27 S2: Okay
28 S1: Err food produce percentage is bigger other country so
big big big food percentage food percentage but recently
years it is decreasing year by year so I am very sad so to
change to get genetic engineering we can good food so
we can amount of food.
While this extract is notable for the students' extensive use of English to solve the language problem, closer examination reveals that the students did not elaborate on the accuracy of each phrase. For example, there was no elaboration over the choice between 'food produce' and 'food producing' between turns 8 and 10. This sequence shows the same level
of detail as the LRE in Extract 1. Furthermore, there was no elaboration
on the syntax required to join the phrases, such as the use of 'than' with
comparative adjectives. It can be argued, therefore, that the students
focused more on the construction of the argument and less on the syntax
and morphology of its parts.
In Extract 3, S1 had just made an argument that neuroscience was an
important investment, when her partners made a counter-argument, and
she had to concede that nuclear energy was more important. In order
to have her idea accepted, she needed to construct an argument that
was different from the one which had been rebutted. However, she was
unprepared for this. By turn 3, she initiates Pit Stop, with a diagnosis
in turn 4. She provides a detailed explanation of her problem. By turn
17, S3 is able to clarify what is new about S1's argument, which acts as monitoring of learning. Topping's category of affect is seen to play an important role in this sequence, as evidenced by a negative self-evaluation in turn 7, an apology in turn 5 from S1's classmate for putting her in the difficult position of having to improvise a new argument, and echoes of moral support in turns 9, 11, 12, 14, and 17. Topping (op.cit.: 65) explains that 'a sense of loyalty can help to keep the [students] motivated and on task'. Moral support is probably aiding S1's regulation of her problem-solving strategy.
Having verbalized and gained support for her idea, S1 begins to rehearse
her English from turn 19, while still receiving moral support from
her classmates in turns 20, 22, and 23. Turns 22 and 23 also act as an
evaluation of success. Crucially, her classmates offer no specific English language support, and form-focused self-assessment is internal to S1, as evidenced in the self-correction of 'investment for of neuroscience' in turn 21 to 'investment of neuroscience' in turn 25. This sequence suggests that, by empathizing with one another, students can successfully scaffold each other's participation. This contrasts with Extract 2, where scaffolding was achieved through peer modelling. Both types of scaffolding might serve as different means of regulating learning (Topping op.cit.: 65) and could depend on the learners' personality differences (Philp, Adams, and Iwashita op.cit.: 95–101).

Extract 3
1 S1: In my country two- two thirds of dementia is so is so many
2 S1: (laughter)
3 S1: I can't say it just a minute
4 S1: A sentence I can't say
5 S2: Wow sorry for this
6 S3: Ha ha ha ha
7 S1: [this is terrible I can't do even half
8 S1: The number of people sick with Alzheimers is two thirds
9 S2: Hmmm
10 S1: So its two thirds with Alzheimers
11 S2: [wow thats a lot
12 S3: [hmm
13 S1: There are so many
14 S2: Mm
15 S1: So by showing this number I want to say that there are so many that we have to solve this problem
16 S1: hhh I must err
17 S3: Oh saying about solving is new
18 S1: Yeah right
19 S1: I must
20 S2: [must
21 S1: Invest more invest I must more advance hh investment for of neuroscience
22 S2: Maybe
23 S3: Okay okay
24 S1: Ah okay lets go
25 S1: Because so many there are many Alzheimers disease,
disease is serious problem, we must advance investment
of neuroscience.
Up to now, the assumption has been that the speaker initiates Pit Stop.
However, when listeners do not understand, they, too, can elicit Pit
Stop, as shown in Extract 4. S1 is making her argument, but in turn 4
she makes a pronunciation error. S3 questions this and elicits Pit Stop
in turn 7. A translation in turn 9 allows S3 to offer corrective feedback
in turn 10. This is followed by uptake of the correct pronunciation
by all the students. S3 adds a supportive comment in turn 18. While
this comment was directed to S2, who was making a note of the Pit
Stop episode, it could contribute to S1's retention of the new and
correct pronunciation. A distinctive feature of this Pit Stop episode is
that the students take a different route from the suggested sequence
through Pit Stop. In particular, the language difficulty is verbalized
before taking time out in Pit Stop. Entering Pit Stop in this way
provides a socially safe way for students to correct a peer by sharing
responsibility for modelling, clarifying, and explaining. In other words,
even though the Pit Stop procedure was designed to guide students
towards formative peer assessment, when the students are 'consciously aware of what is happening in their learning interaction' (Topping op.cit.: 65), they can regulate their learning differently from the guide
provided for them.

Extract 4
1 S1: Okay I think we should invest in nuclear energy first
2 S2: [ohh
3 S1: then we then we can provide electricity cheaply
4 S1: and by providing electricity cheaply we can (0.7) get stable energy (pronunciation of 'sta' in 'stable' is like 'sta' in 'statue')
5 S3: Stable energy (a question) what what is
6 S2: [what is
7 S1: Pit Sto::::::p
8 S2/S3: [Sto::::::::p
9 S1: Err stable
10 S3: Ah stable (pronunciation rhymes with table)
11 S2: [Stable stable
12 S1: Ah stable (pronunciation rhymes with table) my pronunciation
was (0.4) okay okay
13 S2: [stable (correct pronunciation)
14 S1: Stable energy
15 S2: We can provide stable energy
16 S3: [can provide okay
17 S2: and its nuclear energy right okay okay
18 S3: Insert the ai vowel sound there
19 S2: (slowly) insert the ai vowel sound there
20 S2: Okay okay
21 S3: Okay
22 S1: Okay
23 S2: Finished
Extract 5 shows an example from a group of low-proficiency students in
the class. S2 diagnosed her problem but did not contextualize it precisely.
She was trying to translate a word with a variety of meanings. A lack of
context resulted in uncertainty over which meaning was intended. This
led to long silences while the students searched dictionaries and supplied
possible solutions. Her group members resorted to guessing and she
internally evaluated whether or not the suggestions were applicable. S2's
classmates do not question her about the intended meaning and so they
are unable to provide support such as modelling or correction. In contrast
to the previous extracts, the students here lack a joint understanding, do not verbalize their regulation of learning, and offer no affective or cognitive scaffolding. These absences prevent declarative knowledge and procedural skill from being extended. As a result, S2's final performance was probably not as advanced as it could have been.

Extract 5
1 S1: Pit Stop?
2 S2: Err pit stop word
3 S1: What?
4 S2: Err like throw out err get rid of
5 S1: Like to treat?
6 S2: Yes yes
7 S1: Right treatment err like that spent nuclear fuel
8 S2: (3) just a minute (searches dictionary)
9 S1: (3) or dissolve
10 S2: Mmm oh maybe that. I don't know this is weird.
11 (7 seconds searching)
12 S1: Discard?
13 S3: Discard maybe?
14 S2: Discard
15 S1: Finished? Pit stop
16 S1: Okay
17 S2: Nuclear energy er is bad for our health so (2) nuclear energy try to discard in America.
In sum, Pit Stop provided a way for students to overcome language problems through formative peer assessment. However, Topping's categories of formative assessment, which bring about monitoring and regulation of learning, are seen in relation to the task demands of constructing arguments rather than a focus on linguistic accuracy. Final performances can be scaffolded through modelling or affect. However, when these categories are not apparent, declarative knowledge, procedural skill, and scaffolded performances are also not apparent.

Conclusion

This article reported on an attempt to introduce a pedagogic resource, Pit Stop, to foster elaborate LREs in discussion tasks. Pit Stop aims to mediate effective peer dialogue towards improved focus on form. The implementation of this procedure occurred in a classroom in which the use of L1 and L2 was tightly controlled, and a focus on accuracy was encouraged.
The first research question asked whether Pit Stop mediated elaborate LREs. Based on the amount of time spent on interactional feedback during Pit Stop as a proportion of the total discussion time, it can be argued that Pit Stop led to vast improvements over examples such as Extract 1. The time spent on interactional feedback also contrasts with studies reviewed by Ellis (op.cit.: 214–18), which report very small amounts of time.
The second question asked about the ways that students solved language
problems. First, the students tended to focus their LREs on the output
of appropriate arguments to meet the demands of the discussion task.
When students did focus on form, there were occasions when affective or
proficiency factors constrained a full diagnosis of the language problem.
Second, students could make their language problems salient, and they were 'activated as instructional resources for each other' (Wiliam op.cit.: 46). In this way, the problem identified by Seedhouse (op.cit.: 128–30), that learning does not necessarily take place as a result of a task's work plan, was ameliorated.
The final question asked whether students show evidence of effective
formative peer assessment. The Pit Stop procedure can be considered
effective when students diagnose and clarify their language problem,
when peers provide either affective or cognitive scaffolding, and when students believe they will be successful in their subsequent performances.
Only Extract 5 failed to show evidence of effective formative assessment.
It is possible that the lack of a clear diagnosis constrained the monitoring
and regulation of learning. Philp, Adams, and Iwashita (op.cit.) observe
that lower-proficiency pairings, such as with the group in Extract 5, can
play a constraining role in peer interaction.
If Pit Stop is to serve as a tool for formative assessment during LREs, students probably require more training to focus on form effectively. This can involve clarifying what focus on form is and using teacher feedback to improve peer interaction, as suggested by Nicol and Macfarlane-Dick (op.cit.: 205).
Students could also benefit from post-task work, and, thanks to the way
Pit Stop elicits language-related problems, the teacher can also target
specific features for follow up. For example, the students in Extract 2
could be encouraged to analyse the grammatical errors in the final stages
of the discussion. This kind of approach constitutes what Topping (op.cit.:
70) would regard as teacher feedback on the quality of peer assessment.
In order for the LREs to become formative at the level of form focus
instead of task focus, it seems that following up Pit Stop performances
with teacher input and feedback should be an integral part of an overall
programme of formative assessment.
Final version received March 2016

References
Andrade, H. L. and G. J. Cizek (eds.). 2010. Handbook of Formative Assessment. New York, NY: Routledge.
Cizek, G. 2010. 'An introduction to formative assessment: history, characteristics, and challenges' in H. L. Andrade and G. J. Cizek (eds.).
Ellis, R. 2006. 'The methodology of task-based teaching'. Asian EFL Journal 8/3: 19–45.
Ellis, R. 2012. Language Teaching Research and Language Pedagogy. Malden, MA: Wiley-Blackwell.
Jackson, D. O. 2001. 'Language-related episodes'. ELT Journal 55/3: 298–9.
Lantolf, J. P. and M. E. Poehner. 2014. Sociocultural Theory and the Pedagogical Imperative in L2 Education: Vygotskian Praxis and the Research/Practice Divide. New York, NY: Routledge.
McMillan, J. H. 2010. 'The practical implications of educational aims and contexts for formative assessment' in H. L. Andrade and G. J. Cizek (eds.).
Nicol, D. J. and D. Macfarlane-Dick. 2006. 'Formative assessment and self-regulated learning: a model and seven principles of good feedback practice'. Studies in Higher Education 31/2: 199–218.
Philp, J., R. Adams, and N. Iwashita. 2014. Peer Interaction and Second Language Learning. New York, NY: Routledge.
Seedhouse, P. 2004. The Interactional Architecture of the Language Classroom: A Conversation Analysis Perspective. Malden, MA: Blackwell Publishing.
Skehan, P. 2014. 'The context for researching a processing perspective on task performance' in P. Skehan (ed.). Processing Perspectives on Task Performance. Amsterdam: John Benjamins Publishing Company.
Storch, N. 2008. 'Metatalk in pair work activity: level of engagement and implications for language development'. Language Awareness 17/2: 95–114.
Topping, K. 2010. 'Peers as a source of formative assessment' in H. L. Andrade and G. J. Cizek (eds.).
Wiliam, D. 2011. Embedded Formative Assessment. Bloomington, IN: Solution Tree Press.

The author
Daniel Parsons has been teaching in Japan for ten years. He started out as an assistant language teacher on the JET Programme. He now teaches university EFL and EAP classes, is researching teaching methods which apply formative assessment, and is interested in how technology can be applied in this area. He currently works as an instructor at Kwansei Gakuin University in the School of Science and Technology.
Email: daniel1124@kwansei.ac.jp



Appendix 1
Transcription conventions used in all extracts

Italics  original Japanese, translated by the author
(0.7)    a 0.7 second pause
[        speech overlaps with previous turn
-        sound is cut
:        sound is extended
