
Overview of Responsive Evaluation

What is Responsive Evaluation?


Responsive evaluation is an approach that trades some precision in measurement for greater usefulness of the findings to people in and around the programme. It is the opposite of pre-ordinate evaluation approaches, where the focus is on goals, objectives, standards and research-type reports. It grew out of dissatisfaction with goal-oriented evaluation, where the emphasis was on the degree to which intended goals had been realised. According to Stake (1975), responsive evaluation is less reliant on formal communication and more reliant on natural communication. He argues that responsive evaluation is based on observation and reaction, and hence is not a new concept.

Responsive evaluation attempts to widen data collection in evaluations beyond planned outcomes to include the processes and antecedents that produced those outcomes. It thus started not as a methodology but as a data inventory with a comprehensive data collection matrix. Responsive evaluation inevitably widens the evaluator's scope of work and hinges on value pluralism, as opposed to a single comparison of intended and observed outcomes. It places emphasis on processes, background and judgements, seeking consensus where it exists but not pressing for it where it does not. Abma and Stake (n.d.) compared programmes to works of art, concluding that neither a work of art nor a programme has a single true value, yet both have value. This is a foundational premise of responsive evaluation, which emphasises value pluralism: there are several values, all of them important and equally truthful, yet still in conflict with one another. Evaluators should thus appreciate that the values attached to a programme differ across people and purposes. Whatever consensus in value there is, the evaluator should seek to discover it.

The purposes of evaluation include documenting events, detecting institutional vitality, placing blame for trouble and taking corrective measures. Stake (1975) argues that each of these purposes is related directly or indirectly to the values of a programme and may be a legitimate purpose for a particular evaluation study. Each of them requires different data, hence the emphasis on widening the scope of evaluation during responsive evaluation and on engaging various project stakeholders. He argued that data collection needs to be broadened to include data of all thirteen kinds, giving an array of legitimate data from which the evaluator can select content to consolidate. He advocates robust data gathering and analysis about context, inputs, processes and products, asserting that the truth emerges only when opposing forces submit their evidence for cross-examination before a jury. If the different value perspectives of the people involved are captured in reporting the successes and failures of the programme, then responsive evaluation has been done.

Key considerations for responsive evaluations

Stake (1975) makes an interesting observation about the role of stakeholders in responsive evaluation, arguing that he does not see responsive evaluation as a cooperative effort. To him, the inquiry belongs to the evaluator, who should use their professional talent and discipline to carry the evaluation out. The stakeholders' role is limited to providing vicarious experience and reconstructions of quality, which the evaluator incorporates. He sees the evaluator as wearing many hats, calling him or her a judge, a discovery-learning teacher and a facilitator, but not a change agent.

Responsive evaluation puts the needs of the different stakeholders involved in a programme at the heart of the evaluation. Stake (1975) emphasised that every programme has different constituents, with sometimes similar values and sometimes divergent expectations of the programme and its processes. He argues that it is very important for every evaluator to understand these stakeholders, their language and their interests, and to be able to illustrate them to the constituents. The degree of stakeholder participation in the responsive evaluation process is therefore an important but still contentious issue. He advocates what he calls incorporating them into the evaluator role, where the evaluator sees stakeholders as co-evaluators, not just consumers or recipients of the programme evaluation report or findings. He further argues that an evaluator has to use his or her discipline and professional talent to carry out the inquiry, with stakeholders giving him or her a vicarious experience and a reconstruction of quality. In his view, responsive evaluation does not see the inquiry as a cooperative effort, although the stakeholders have a role. Substantial time must be invested in learning and understanding the information needs and concerns of the various programme stakeholders. In addition, evaluators must mix their expert knowledge with feelings, emotions and preferences that may not be logical or may lack justification.

To guide responsive evaluation, Stake suggested the following steps:

- Make a plan for observations and negotiations: this entails developing data collection tools, planning for data collection, and negotiating what stakeholders expect from the evaluation process.
- Observe the programme: it is very important for the evaluator to see the programme. This can be done by visiting locations where activities were or are being implemented and by holding discussions with the various stakeholders who played roles in the process.
- Prepare brief narratives, portrayals, product displays, graphs, etc.: this is when the evaluator prepares the products of the evaluation based on the main findings.
- Find out the value to the audiences and gather expressions of worth from various individuals whose points of view differ. Remember that in responsive evaluation consensus is welcomed where it exists, but it should not be forced where it does not, because responsive evaluation appreciates value pluralism.
- Check the quality of the records: validate what has been collected and seek clarification on possible areas of contradiction. If the findings are accurate, the evaluator can get validation of the evaluation findings from programme personnel and other experts, even as other stakeholders react to the relevance of the findings.

Stake argues that most of the above processes occur informally and in iteration; it is thus valuable to keep a record of actions and reactions.

Instead of focusing on objectives and hypotheses during evaluations, Stake argues that the focus should be on issues. He further postulated that the term 'issue' denotes complexity, immediacy and valuing, and he calls for the identification of problems, issues and potential issues. The issues form the basis for discussions with clients and audiences, and hence the basis for developing a data collection plan. Observations and interviews during responsive evaluation should be framed around the highlighted issues. Responsive evaluation allows the evaluator to respond to emerging issues as well as preconceived issues, making data collection and reporting more robust.

Stake (1975) argues that there is a thin line between judgemental and descriptive approaches to evaluation. He argues that his perception has evolved to the point of seeing the judgemental act as part of the descriptive and observational acts, with evaluators acting more descriptively at one moment and more judgementally at another. New perceptions and interpretations of what is observed emerge from time to time, hence the swing between the judgemental, the descriptive and the observational. It is thus important to understand this phenomenon and not to see responsive evaluation as only one of the three; it is a combination of all three acts, each reflected to a different degree.

Evaluators tend to over-rely on preconceived ideas of success. It is important to understand the purpose of the evaluation, to pay attention to what is happening in the programme, and only then to choose value criteria and questions. Evaluators must not fail to highlight both the best and the worst of the programme. Stated objectives and data collection tools must not divert attention away from the things of interest to the stakeholders.

Another important aspect of responsive evaluation is how to reflect vicarious experience in the evaluation process; Stake advocates the use of stories and portrayals of people, places and events because of their lifelikeness and concreteness. He argues that there is a need to portray complexity, including the holistic impression, the mood and the mystery of the experience. If programme staff or communities were uncertain about something, the audience of the report should also feel that uncertainty; more ambiguity rather than less may therefore be needed in reports if vicarious experiences are to be portrayed accurately.

Evaluators normally treat programmes as having measurable outcomes. Responsive evaluation emphasises that not all programme outcomes can be measured: for some, the payoffs may be diffuse, some may have delayed outcomes, and still others lie beyond the scrutiny of evaluators. Programme activities may have intrinsic value beyond measurable or tangible outcomes. Often, some activities are done because they are the right things to do, not because they have measurable outcomes. It is therefore false to presume that only measurable outcomes testify to the worth of a programme.

Responsive evaluation makes an interesting argument with regard to the centralisation of evaluation power and authority. It advocates shifting power from a central figure (the evaluator) to stakeholders through holistic procedures of observation, judgement and interpretation. Stake argues that, unlike formal evaluations, responsive evaluation seeks not to work more for the power that exists than for the powers that should exist. In other words, he advocates decentralising evaluation power to stakeholders, with populist and localist approaches, in the hope that this would yield more evidence of the power that should exist as a result of the evaluation and the project implementation process.

The behaviour of a responsive evaluator differs from that of a pre-ordinate evaluator. A responsive evaluator allocates a large share of the evaluation's time and resources to observing the programme (30%), gathering judgements (15%) and preparing instruments (15%), while a pre-ordinate evaluator prioritises preparing instruments (30%), processing formal data (25%) and preparing formal reports (20%). Observation and feedback remain important functions from the start to the end of the responsive evaluation process, but they do not follow any chronological order. Many of the evaluation processes occur simultaneously, with the evaluator returning to each event many times before the evaluation ends. Stake used the analogy of a clock that spins clockwise, counter-clockwise and cross-clockwise to illustrate the interconnectedness of the various processes during responsive evaluation.

Challenges with responsive evaluation


Responsive evaluation is perceived to be subjective, as it does not focus on stated programme goals, objectives and intended outcomes but rather on what happened and how it happened. It gathers data on the background and contexts, which are key in defining success or failure. These data are mainly collected through informal communication, so the process can be very subjective. Stake argues that subjectivity can be reduced by replication and by operational definition of ambiguous terms, even while relying heavily on the insights of personal observation.

Stake argues that responsive evaluation is poorly suited to formal contracts, in the sense that evaluations based on formal contracts would need formalised data collection tools and formalised means of communication. In formal contracting, a project's intended goals and objectives become deliverables, hence the push to measure their achievement or non-achievement. Responsive evaluation is premised on informal communication and resembles goal-free, seemingly aimless evaluation, which tends not to sit well with most evaluators.

Scholars also argue that responsive evaluation can raise embarrassing questions, mainly because of its informal approach to data collection. Evaluators may start by asking basic questions to understand the context, background and processes of the programme. This in itself can lead to embarrassment, as stakeholders and evaluators sometimes feel they are being asked about the obvious. Later in the process, questions become more focused, and clarity is again needed on all ambiguous terms in the project. This too contributes to the embarrassment people can feel about responsive evaluation.

Conclusion
The reviewed articles offer very interesting insights on responsive evaluation, but it remains somewhat unclear whether the approach is applicable to all types of projects and data collection methods. It is unclear whether responsive evaluation can be applied to advocacy projects, where results chains may only be implicitly defined, in the same way as to nutrition, food security or health projects with explicit results chains. It is also not clear how responsive evaluation is affected by context and by factors such as culture, religion and socio-economic issues that shape interaction between evaluator and stakeholders. How would responsive evaluation be applied to emergency projects compared with long-term development projects? Would it be applied in the same way in Asia and in Africa? These are critical questions that remain to be answered or explored.

References
1. Abma, T. A. and Stake, R. E. (n.d.). Stake's Responsive Evaluation: Core Ideas and Evolution.
2. Stake, R. E. (1975). Program Evaluation, Particularly Responsive Evaluation. Center for Instructional Research and Curriculum Evaluation, University of Illinois at Urbana-Champaign.
