
Article Critique #1

Reference
Grant, M.M., Ross, S.M., Wang, W., & Potter, A. (2005). Computers on wheels (COWS): An
alternative to each one has one. British Journal of Educational Technology, 36(6), 1017-1034.

Article Summary
The article focuses on the implementation of the COWS (computers on wheels) program
in a school referred to as Green River Elementary. The program provided carts of laptops
to be shared among four fifth-grade classes. The budget did not allow for every student to have
a laptop, though each of the four teachers had one. The project was meant to determine the impact of "changing
the ways students learn and teachers instruct in a technology-enhanced learning environment"
(p. 1019). This was all meant to encourage student partnership, increase communication among the
student body, parents, teachers, and administrators, and increase the use of technology in the
classroom.
An evaluation was conducted using three different instruments to collect data from
prearranged, hour-long observations to determine the impact of technology use in the
classroom for both students and teachers. Part of the evaluation's objective was to find out whether the
use of computers in the classroom was effective for instruction; how, and to what extent,
teachers integrated technology into their lessons; what methods teachers used to encourage
higher-order and student-centered learning; and, finally, how the laptop program
influenced teachers' attitudes toward the use of technology. These questions were answered
through the laptop program itself, the observers' evaluations, the participation of the
teachers and students, the assessments and questionnaires, and lastly the data collected during the
evaluations.

Article Critique
When school staff members hear the words "program evaluation," fear often comes to mind.
Yet evaluations are necessary for schools because evaluating is one of
the best ways to avoid wasting time and money. In reviewing the evaluation process of the
COWS program, there were a few things that could have been done better or differently to
provide a clearer picture for the evaluators.
Initially, there was no funding to train the teachers to integrate technology into their
lessons. Since this evaluation involved only four teachers, perhaps it would have
been a good idea to send one teacher to training and have that teacher train the other three.
This training could have made a difference in the way teachers
implemented technology in their classes, which could have altered some of the results of the
evaluation.
Another concern is the fact that all of these evaluations were scheduled. How could there
be a true picture of day-to-day use of technology in the classrooms? Perhaps some of the sessions
could have been random or unplanned and weighed in as well to get a real sense of what was going
on. On the days the teachers were not evaluated, there should have been evidence of
student work showing that technology was used in those class sessions. Teachers could also have
discussed the evaluation they experienced with teachers not yet evaluated, almost as if
giving them a heads-up on the expectations.

There were three different instruments used to score and collect data during the
evaluation. In the SOM (School Observation Measure), 24 instructional strategies were
observed, which is too many for a teacher with no training who is also expected to keep
students' interest and attention. In the SCU (Survey of Computer Use), part of the data recorded
included how old the computers were and whether or not students had access to the Internet. If
the evaluation was done on a day when the system was not working because of, say, a freak accident,
should that be counted against the teacher? And if the computers are too old because the school's
budget doesn't allow for new ones, should that even be a factor? Isn't some technology
better than no technology? The final instrument used was the RSCA (Rubric for Student-Centered
Activities), which appeared to be the most organized of the instruments used during the
evaluation process.
It was a great idea to get the teachers' perspectives using two surveys and a technology
skills assessment, which allowed teachers to rate their own attitudes and skills. There were
also focus groups with all four teachers and eight to ten students, conducted in the fall of 2003 and again
in May 2004. It was not clear whether the second round of focus groups involved different teachers
and students.
The results of the evaluation were difficult to compare to other evaluations because
few had been conducted before this one. Also, because the study was done in a single school
rather than in several schools within the county, it is difficult to know how it would
have concluded with a larger population.

Article Critique #2
Reference
Lowther, D., Ross, S., & Morrison, G. (2003). When each one has one: The influences on
teaching strategies and student achievement of using laptops in the classroom.
Educational Technology Research and Development, 51(3), 23-44.

Article Summary
In the 2003 article "When Each One Has One: The Influences on Teaching Strategies
and Student Achievement of Using Laptops in the Classroom," Lowther, Ross, and
Morrison seek to address "the degree to which school laptop programs can
influence students' educational experiences and learning" (p. 25). The research issue being
addressed "examined the educational effects of providing students with 24-hour access to laptop
computers" (p. 23). This pilot program used a control group of students and a laptop group of
students to make its determinations. Drawing on both quantitative and qualitative data, the study
showed significant advantages for the students who used the laptops consistently.
During the course of this one-year pilot program, a group of fifth-, sixth-, and
seventh-grade students had 24-hour access to laptop computers. These students were paired
with teachers who had received computer integration training using the iNtegrating Technology for
inQuiry (NTeQ) model, and each student was given a laptop that could be taken
home. A second group of students, the control group, was paired with teachers
who did not have the training, and each of those teachers' classrooms had five computers for
students to share. Both groups were exposed to the same curriculum during the course of the
program. Although there were some positive outcomes for the students who had their own
laptops with 24-hour access and the ability to take them home, and some differences between
them and the control group, the control group was not lacking anything substantial.

Article Critique
In the beginning, the authors included information from other similar studies that could
be used as references. This provided background information for the
Crossover School District that could be valuable in conducting its study. Having the ability
to make comparisons could give insight into what to try and what to leave alone.
Providing the teachers with the technology training was a great idea. Having the
technological knowledge to work with students is always a plus; it helps in this fast-paced
world and in keeping up with our youth. For the students, participation was
voluntary; however, charging students $50 a month to use a laptop while in the program
was not the best option for students who wanted to volunteer but couldn't afford to. Therefore,
I do not agree with the reasoning mentioned in the article, that "it might be assumed that more capable
or motivated students relative to the control sample would participate," because there were
probably several students who wanted to participate but could not for financial reasons. Perhaps
the fee was meant to deter certain students because grant funding was limited and the number of
classrooms able to participate in the program was limited as well.
The observation instrument used, the School Observation Measure (SOM), measured
classroom events and activities in 10-12 randomly selected classrooms for 15 minutes each. The
observers had no connection to what they were evaluating, nor did they know the purpose
of the study. That lack of connection may have caused them to miss important information
as the study progressed. There were also prescheduled observations, consisting of four 15-minute
segments summarized on one SOM form. Prescheduled observations have a better chance
of hitting all 24 strategies because teachers have time to plan ahead. The Survey of Computer Use
(SCU) centered on the students' use of and ability with the computers. The areas examined
were computer capacity and currency, configuration, student computer ability, and student
activities while using the computer. A five-point rubric was used by paired, trained
observers in 42 targeted observations, and the paired observers gave identical SCU responses on 86% of the items.
For the writing assessment, samples were randomly selected and blindly
scored on a four-point rubric, resulting in four different scores per assessment. This left no
room for influence from one evaluator to the next.
When the students, teachers, and parents completed surveys, not everyone participated;
respondents were randomly selected. I think the results would have been more accurate if everyone
had participated in the end-of-program surveys and interviews. When questioning the parents, two
questions should have appeared in the survey: first, how do you feel the laptop program worked
for your child, and second, did you see areas with opportunity for improvement?
The pilot program revealed some necessary changes for future studies: including an analysis
of preprogram achievement scores of laptop and control students, drawing participant samples only from
those laptop and control teachers who took part in the same NTeQ training, and using dependent
measures of student performance in both problem-solving and writing skills. When the next
laptop program takes place, the control group and the laptop group should have the same access
to the Internet, and the number of participants in each evaluation, survey, and interview
should be the same.

Article Critique #3

Reference
Paterson, W., Henry, J., O'Quin, K., Ceprano, M., & Blue, E. (2003). Investigating the
effectiveness of an integrated learning system on early emergent readers. Reading Research
Quarterly, 38(2), 172-207.

Article Summary
In the 2003 article "Investigating the Effectiveness of an Integrated Learning System on
Early Emergent Readers," Paterson, Henry, O'Quin, Ceprano, and Blue used teacher variables
along with observational, survey, and interview data to determine whether an
Integrated Learning System (ILS), the Waterford Early Reading Program from the Waterford
Institute, would help eight classrooms of kindergarten and first-grade students from a large
urban district in New York State perform more effectively than eight similar non-Waterford
classrooms. This yearlong study drew on a great deal of prior information as resources, and it
included a brief history of integrated learning systems, what they mean, and what their
purpose is. The authors included research on others' experiences with ILSs, which
was useful background information for conducting this study. That information came from
studies conducted in the 1990s, which is when "innovations in graphics, animation and sound
allowed software to appeal to a largely untapped audience-preschool and kindergarten children"
(p. 176). There was a great deal of quantitative and qualitative data included in this study, which
resulted in "no compelling reason to purchase integrated learning systems for enhancing early
literacy development and reading achievement in particular" (p. 204).
Article Critique
It is always wise to do research and collect references from other similar studies when
conducting your own study, especially when the timeframe for the study is lengthy. It can help
reduce obstacles and challenges by suggesting ways to avoid them or give insight into ideas that
can help your study be more successful. This study had more references than expected, to the
point where they could have been overwhelming to the reader. Providing the
definition and a brief history of ILS was a creative way to start the article and intrigue the reader.
The teacher training for the Waterford program was important to ensure the teachers
knew how to use it in the classroom. The full-day training, along with follow-up
trainings and online tutorials, was offered by the program to provide the teachers extra support.
A hotline number was provided to the teachers for technical support, but what if a teacher
was in the middle of a lesson? Was he or she expected to stop teaching to call the
hotline?
The article also gave a breakdown of literacy, explaining its definition, best practices for
teachers, and the stages young children pass through as they use language for communication
and move from "understanding and manipulating oral language to
understanding and manipulating written language" (p. 181).
