
COMPLEXITY - Compilation of posts from my blog www.developblog.org

Thursday, 25 November 2010

Simple Words about Complexity

In a helpful comment on my recent post "Donor Playgrounds and Unknowable Outcomes", my
friend Hélène complains about jargon. Why do we use fancy words? One reason is that such
elaborate terminology (= fancy words) is sometimes more accurate than simpler language - but only if
everyone involved understands the words in the same way. Fancy words also convey the
impression that you know exactly what you are talking about. And finally, fancy words make harsh
truths sound elegant and not too painful - especially if the reader or listener does not
understand what you mean. But then, what's the point in saying anything if it is not understood?

Since I believe that some of the more complicated texts I have posted in recent weeks are really
important and I enjoy writing exercises, I shall summarise the texts in plain terms, for people who are
not fond of jargon. Here's the first instalment: a crude plain-speak version of my post Donor
Playgrounds and Unknowable Outcomes. Attention: I don't mean to offend anyone!

Adam Fforde and Katrin Seidel, two knowledgeable people who have spent a long time working in
and researching Cambodia, gave great presentations in Berlin earlier this week. Put plainly, what
they said could be boiled down to this:

People who pay for and carry out development projects often pretend they know what they are
doing, because they hold university degrees and they have done the same kinds of things in other
places.

Usually, they are convinced that what they do will improve other people's lives, and they believe that
they know exactly what changes are needed to improve other people's lives and what must be done
to make these changes happen.

The reality is that many people who say they are experts in development have no idea of what life is
really like in the places where they work and how people's lives change. Often, they don't even
bother asking ordinary people about their ideas and what they would like to do to improve their own
lives.

People who say they are development experts tend to contradict each other and make mistakes all
the time. Those mistakes can have terrible consequences for other people.

To make matters worse, many "experts" don't pay attention to what happens around them.
Sometimes, their work may mess up other people's lives, stir up quarrels and worse. But ordinary
people who suffer from the "experts'" work have nowhere to go if they want to complain, or stop
that "expert" work altogether.

Now, to end this terrible situation, more people who pay for and who carry out development work
should start to listen to local people and learn about their lives. They must know that "expert" work
is very dangerous and must be watched carefully by everyone who works with "experts". If
something goes wrong, it has to be stopped.
Wednesday, 24 November 2010

Donor Playgrounds and Unknowable Outcomes

"Donor Playground Cambodia" is the title of a highly commendable paper Adam Fforde and Katrin
Seidel have contributed to a conference on development policy, Thinking Ahead, organised by the
Heinrich Boell Foundation in Berlin. A core theme of their paper: "the tensions created by the belief
that development is both a known product of interventions guided by predictive knowledge, and the
sense that, really, the future is unknowable".

Cambodia is presented as a case study - it is a "donor playground" in that more than half of its
national budget is still funded through development assistance, and that the country has been a field
for massive "experimental donor interventions" since the early 1990s. It appears that the huge body
of experience built over almost two decades has spawned only patchy and inconsistent knowledge
on "what works"; the authors note "a plethora of statements about cause and effect that are
inconsistent and ignore each other". Government officials, donors and NGOs seem to hold highly
conflicting assumptions about each other's work; local processes that exist independently from
donor agendas appear to receive too little attention. In some fields, the country has recorded
enormous progress, e.g. in agricultural productivity, "despite" the absence of policy requirements as
defined by donors. Massive donor investment in land registration has been accompanied by a
dramatic rise in land conflicts, but donors have apparently missed the opportunity to examine their
policies more carefully - or learn about "traditional" types of land tenure in Cambodia before
introducing new systems.

The case study illustrates three central theses pointed out in Fforde's short conference presentation.
They could be summarised as follows:

Development policy is likely to be counterproductive if it relies on linear, cause-to-effect thinking
which assumes that the effects of development interventions are straightforward and predictable.
Such "lock-frame" notions (I borrow the term from R. Hummelbrunner - see my earlier post on
Beyond Logframe) may exclude genuine, effective engagement with and among those who are
supposed to "benefit" from development policies.

Development "experts" do not intervene in socio-culturally or politically "neutral" contexts. "What


works here" will not necessarily "work there": there is no empirical evidence to prove that "best
practice" can be transposed from one context (e.g. country, "community") to another. The
widespread idea that universal experts' recipes can be applied anywhere makes it hard to gain a
specific understanding of specific contexts. It also clouds the politics that guide decisions on
development interventions.

The debate on aid effectiveness tends to frame problems of human rights and power asymmetries
in terms of effective delivery of aid programmes. Political oversight of development interventions is
limited and lacks complaint mechanisms that would allow "target populations" to claim their rights
and hold development actors to account.

A first step to overcome these problems would be to shed the assumption of "knowability", and plan
with "adaptive reference frames" that are built on contributions from wide, diverse groups of
stakeholders, among whom the "target populations" play a central role. "Beneficiaries" should not
only contribute their knowledge (which must be acknowledged to start with), but also get the
opportunity to complain about development projects that affect them.

The full Fforde/ Seidel paper is available as part of the conference materials - find it here:
http://www.boell.de/downloads/20101119_Cambodia_Playground_Study.pdf . A cold kept me away
from the actual conference here in Berlin (still on-going as of today), but I could follow the
presentation on my computer screen via livestream - a wonderful technology!

Thursday, 12 August 2010

Beyond Logframe

The Japanese Ministry of Foreign Affairs has published an excellent booklet titled Beyond Logframe:
Using Systems Concepts in Evaluation, which you can download in English from its web-site:
http://www.fasid.or.jp/shuppan/hokokusho/pdf/h21-3.pdf. I am particularly enthusiastic about the
first article by Richard Hummelbrunner, Beyond Logframe: Critique, Variations and Alternatives. Since
I experienced difficulties downloading the document from the Japanese site, I summarise some main
points below. If you'd like to have a copy of the full article via e-mail, please let me know - for those
who have my ordinary e-mail address, use that one, for those who don't, try micraab(at)web.de and
be prepared to wait for a few days.

As pointed out by Hummelbrunner, the Logical Framework Approach (LFA) was initially designed as a
military planning approach, for a context of strong central control and sharply defined goals. USAID
adapted the LFA to the development context in the late 1960s. Since then, it has been adopted
throughout the world and has attained a near-monopoly in development planning and monitoring.
The LFA has its merits, as it helps to conceptualise interventions in a uniform,
formalised way. Logical frameworks ("logframes"), if well-designed, can provide a convenient, simple
overview of the main features of an intervention.

The problem is, "[the LFA] reflects a management style which demands precisely structured and
quantifiable objectives ("management by objectives"), assuming that the actors dispose of all
relevant information and operate in rather stable environments. The focus is on the delivery of
activities and outputs, and on the achievement of intended effects through intended routes." In
development, you are likely to deal with multiple stakeholders who hold different opinions and
interests, in fluctuating situations which may require more flexibility than the LFA can accommodate.

Hummelbrunner introduces the terms "logic-less frame", "lack-frame" and "lock-frame" to illustrate
the dilemmas of applying a simple tool to a complex reality. "Logic-less frames" are invented to
justify an existing project design, e.g. when institutional donor funding is sought for an on-going
activity. The "lack-frame" over-simplifies, omitting vital aspects of a project as it ties activities and
objectives into a linear cause-to-effect chain. Finally, the "lock-frame" blocks learning and adaptation
to new opportunities and risks. Power imbalances between unequal "partners" in development
programmes exacerbate these difficulties.

Hummelbrunner presents his observations from a real-life case, an EU programme in Eastern Europe:
"tunnel vision" dissociating projects from their contexts, "mechanistic" styles of implementation
which focus on conformity to pre-established plans, and lack of involvement of local actors, local
assets and expertise are listed as common problems associated with excessive reliance on the LFA.
In monitoring and evaluation, the LFA is of limited value, as "tunnel vision" and the over-emphasis on
compliance with pre-defined plans fail to capture the complexity of development processes. "People
operate with a much higher level of complexity than can possibly be included in a logframe, so the
neat logic does not work in reality."

Hummelbrunner shows how variations of the LFA can palliate some of these deficiencies: Project
Cycle Management (PCM) offers more opportunities for participation and adjustment - provided its
LFA elements are not over-emphasised; Social Network Analysis (SNA) captures complex
relationships that linear stage models such as the LFA cannot do justice to; and Outcome Mapping
(OM) focuses attention on changes in behaviours, relationships, actions and activities of individuals
and groups, as seen from different perspectives and taking into account the contributions of other
actors.

Systemic Project Management offers a total departure from the LFA, picturing development work as
intervening in a constellation of mutually interlocking systems and sub-systems with their own
processes, which are more or less directly related to producing (or inhibiting) the desired outputs and
outcomes. In contrast to the centralised control system underlying the LFA, Hummelbrunner
proposes three levels of systemic management: normative management, i.e. policy that defines the
vision guiding the project; strategic management by supervisory entities, e.g. steering committees,
which decide on the effective use of resources ("to do the right things"); and operative level
management ("to do things right") carried out through decentralised coordination teams or service
providers. No pyramidal hierarchy of command and control is needed; systemic project management
relies on the self-organising capacity of the sub-systems.

All these different options deserve to be explored more systematically - as the author concludes: "A
frame exists for a more 'logical' use of logframe - provided this frame is noticed and used."

Saturday, 12 June 2010

Monitoring - It's the process that matters

Monitoring is about gathering information that helps us to function effectively, and about verifying
whether we do the right things in the right ways. It is a natural part of human life - for example, every
morning, I monitor the weather to determine what to wear so as to stay fresh and dry throughout
the day. Monitoring makes me more effective in my life.

Monitoring is also supposed to make international development more effective. The trouble is that
many people seem to think you can enhance the effectiveness of virtually anything by producing
tables with figures on them, preferably SMART ones. Opinions differ, even within a single agency, as
to whether it's the objectives, the indicators, the assumptions, the results or something in-between
that has to be SMART (specific, measurable, achievable, realistic and time-bound/ timely). The
consensus appears to be: whatever we talk about, it better be SMART, and SMART is when there are
figures attached.

In response to this trend, larger NGOs have designated specific people whose main task it is to
produce SMART figures to impress donors. One large and much admired agricultural organisation I
have worked with employed a small crowd of good-looking young women much appreciated in
donor meetings and fluent in several languages. They wrote proposals and reports to the donors, in
English, French, Spanish and even Italian. They set the numbers of farmers to be trained, of seedlings
to be planted and of acres to be watered. The rest of the organisation, as a senior member once
confessed to me, continued to do their work, to the best of their knowledge and skills, and in blissful
ignorance of those figures. The report-writing team produced its reports to the donors - never to be
translated into the local language - and estimated the attainment of the targets at, say, 71% for the
farmers, 103% for the seedlings and maybe 94% for the acres. When donors visited, the people
who did the actual work in the fields picked the sites they considered particularly successful or
interesting and took the donors there. I was among the donors and found those visits far more
enlightening and convincing than the figures in the reports. What the visits tended to leave unclear
was what had not worked well, and why. In that respect, I guess I was hardly more ignorant than the
report-writing staff. But the people who did the actual work knew very well what they were doing,
and could see the differences between successful and less successful parts of their project. They
were in touch with farmers, discussing what worked out nicely and what the difficulties were, and
they looked around to spot any interesting opportunity, or some risk that needed to be reckoned
with. That was real monitoring. The tables were fake.

I believe donors can be convinced to pay more attention to real monitoring. One key step is to
recognise that personal accounts are a legitimate way to find out about facts, and that figures can
only give a fragmentary and distorted image of reality. Numbers are useful for budgeting, accounting,
physics, engineering and other activities that require maths. Numbers can give an idea of the size or
scope of something - but they don't tell us what exactly that "something" is. A classical example: a
support centre for survivors of violence receives 40 clients' visits in February and 20 in March. Does
that mean that February is a more dangerous month than March? Or does it mean that the centre's
activities have caused a 50% drop in violence in the community in one month's time? Does it mean
that 20 out of the 40 February clients have been killed, leaving only 20 for March? You can spend
weeks interpreting such figures, and twist them either way to match your argument.

But the support centre in this example could also organise monthly staff meetings where everyone
shares good and bad experience from the past month. Such meetings are SMART if they focus on
specific topics and are run at regular intervals. They don't necessarily produce figures, but they
generate knowledge and disseminate it throughout the organisation. Regular exchanges between the
women and men who actually do the work are an excellent way to ensure an organisation keeps
learning and adapting itself to new challenges.

I know a human rights organisation which has played a consistently important role in its society over
decades. I believe that a main factor for the organisation's success is its system of monthly day-long
staff conferences, which maintain a constant flow of internal learning. In those meetings, people do
not fill in charts of benchmarks and indicators. They monitor, in a qualitative way that is adapted to
the complexity of reality, what happens within and around their organisation. Such regular,
appropriate monitoring processes keep engaging all members of the organisation in reviewing their
work, thus enhancing the organisation's effectiveness. Reports to donors that are based on
knowledge distilled through such a process give a precise, comprehensive and eminently legible idea
of an organisation's work. Of course you may still need to include figures - but only those that
matter, and only as one part of a bigger picture.

Tuesday, 4 May 2010


Output Outcome Impact Blues

A glorious and instructive song for evaluators, available from the very respectable International
Development Research Centre (IDRC), so it can't be wrong. Click here to enjoy it:
http://www.idrc.ca/uploads/user-S/10960530301karaoke.swf

Sunday, 15 November 2009

A rant against shenmeyisi

Any professional discipline comes with jargon. Jargon can make communication more precise and
straightforward. For example, when you drop the word "AK47", most people interested in weapons
will know precisely what you're talking about and there is no need to spend any further time on
explaining what you mean. Unfortunately, jargon can also make communication less effective,
especially with concepts that are open to multiple, divergent interpretations. Or when people don't
bother translating jargon (which can indeed be difficult when you don't know exactly what you want
or need to say, to start with...). That is a serious and widespread problem in the development sector.

Just a few minutes ago I started reading a comprehensive evaluation report, nearly 70 pages written
in French in a country where French is an official language, as well as other languages that most
Europeans do not understand. Right in the middle of the executive summary, the very key and
probably only pages most users of the report will read, I come across this sentence: "la planification et
le suivi portent plutôt sur les 'outputs' et pas assez sur les 'outcome'". Quoi? Translated into English,
this sentence would run: "Planning and monitoring focus on 'produits' rather than 'effet direct'." Eh?

There are official glossaries of evaluation terminology anyone can download from the internet (e.g.
the OECD glossary, available in many languages)! Why force linguistic barbarisms on busy
people? In many countries I have worked in, many "ordinary" people are fluent in at least one
European and several "local" languages - do they have to learn English development jargon as well? If
we want people to understand what we say, wouldn't it be more straightforward to use one of the
languages they understand? How can we expect anybody to obtain any practical benefit from the
"learning" and "capacity building" we pretend to offer if we can't express ourselves clearly? Do we
really believe that the acquisition and indiscriminate use of funky vocabulary is a sign of superior
knowledge? *sigh*

PS for those among us who don't speak Chinese: "shenmeyisi" in Mandarin Chinese means "what
does it mean?" There is absolutely no need to have this term in the title line, but as you may have
noticed, it creates slight confusion and thus potentially demonstrates the intellectual superiority of
the author and jargon user - or does it? ;-)

Tuesday, 8 September 2009

Monitoring without SMART objectives

Just returned from an energising workshop on monitoring the We Can (End All Violence against
Women) Campaign in East and Central Africa. The campaign objectives - causing a shift in social
attitudes, getting people to take a visible stand against VAW (violence against women), building and
strengthening popular movements and alliances against VAW - are not really what project planners
call SMART (specific, measurable, achievable, realistic, time-bound). Does that mean that no
meaningful monitoring can take place?

Brainstorming through the WHY, WHAT and HOW of monitoring the campaign, some 25 participants
from Kenya, Tanzania and the Democratic Republic of Congo have concluded that there is plenty of
meaningful monitoring to be done, even without baselines or control groups. People are not
inanimate objects; you don't need a thermometer or scales to determine whether they have
changed: you can simply ask them and observe what they're doing! When you ask, make sure you
remember WHY you ask - the purpose of your monitoring - and WHAT it is that you are looking for.

The WHAT can be quite abstract, e.g. an intangible concept like "greater awareness", "campaign
ownership". When you come to the HOW - the ways in which you use indicators, i.e. pointers that
show what kind of change has happened - then you must be concrete. An indicator for commitment?
Check whether people show up for meetings. An indicator for attitude change? Ask people to tell you
precisely what has changed in their thinking and actions. For campaign visibility? Record in which
places you see the logo, or find out whether preachers and other opinion leaders mention the
campaign messages...

Of course, using these indicators is not as straightforward as applying a tape measure to assess the
length of an object. A tape measure shows one dimension. Social change has millions of dimensions.
By making deliberate decisions as to WHAT aspects of change we look for, WHY and HOW, by
combining different kinds of indicators (e.g. qualitative and quantitative), different methods of data
gathering (e.g. interviews and direct observation), different view-points (e.g. testimonies from
campaign participants and from external observers) and by recording our data systematically, we can
obtain reasonably reliable information on how we are doing with a campaign and what outcomes it
produces.

Wednesday, 19 August 2009

Milestones - or millstones around your neck?

People in development agencies like milestones - not the real life objects, but those imaginary
markers which indicate that a project is progressing as planned - or not. Craftspeople, engineers,
cooks and other people who work with tangible objects know what milestones they need to pass on
the way to the finished product. Sometimes traditional ceremonies accompany the passing of such
milestones - for example, in Germany people have a ceremony when they complete the roof
structure of a house.

How do you set milestones for non-tangible processes, as we keep encountering them in social
development and campaigning? Think of the example of the We Can End all Violence Against Women
Campaign: When you try to prompt hundreds, maybe thousands and ultimately millions of people to
reflect on their own attitudes and take action to end violence against women (VAW), you are likely to
get thousands, ultimately millions of different reactions and responses. Human societies are
hypercomplex systems; anything that happens triggers a wave of repercussions, only a few of which
can be reliably predicted. Where are you going to look for your milestones?

The makers of a campaign design it with an implicit or explicit theory in mind as to what immediate
reactions the campaign will trigger among its participants and addressees, and what actions may flow
from these first reactions. We can represent the expected or hoped-for chain reaction as a flow
chart, as a causal pathway, as a logical framework connecting inputs + outputs + outcomes +
impact... but we must remember that these causal connections are only imaginary products of the
theory behind the campaign design. A key part of monitoring innovative work is to test the
hypotheses or assumptions that underlie the project or campaign design. In innovative work, we need
to find out what happens to our theories in real life, whether actions and reactions follow our
theoretical causal connections or whether they take different paths.

Hence, milestones based on theoretical knowledge and hopes as to what could happen may turn into
millstones around our necks, weighing down our thinking and preventing us from capturing the rich
information we need to test our assumptions. No milestones in uncharted territory: you better set
the milestones on your return, after you have completed your journey, recalling what happened
where. And then travel again, to verify whether the same milestones reappear at each journey...

Tuesday, 21 July 2009

Technical Solutions to Social Problems?

"If only I had teeth down there" was the title of a public discussion organised by Terre des Femmes, a
Berlin-based group specialised in women's rights (on Monday, 20 July). Sonnet Ehlers, the South African
inventor of Rape-aXe, presented the prototype of her "anti-rape condom" for women. Rape-aXe does
not prevent rape - it only makes penetration extremely short and painful for the perpetrator: rigid
barbs drill into his skin and stay there. The device cannot be removed from the perpetrator's body
without medical assistance. For a graphic explanation, consult the official site. Ms. Ehlers says Rape-
aXe buys time - while the perpetrator recovers from the shock of finding his parts trapped inside the
"condom", his victim may get away.

The overwhelmingly female attendance was divided over the benefits and risks associated with Rape-
aXe, which is expected to hit the world market in October 2009. It may be a powerful deterrent. But
what if, as an undesired outcome, attacks on young girls, less likely to carry the device than adult
women, increase? Ms. Ehlers says she works on a protective device for children as well. Isn't it child
abuse, objects a participant, when you insert devices into a girl's body - even if they are meant to
protect her from assault? And who can afford these devices, anyway, if they must be replaced every
24 hours?

Ms. Ehlers's answers to the many questions are stern, curt, sometimes polemic and often evasive,
especially when it comes to psychological and social issues. As evidence for effectiveness, she
mentions discussions held in prisons with "many rapists" and in unspecified townships.

Technical solutions to social problems? I'm unconvinced. But the device is stunning indeed.
Tuesday, 30 June 2009

A plea for quality in M&E

Like the previous entry, this post is - remotely - inspired by Philipp Mayring's handbook on qualitative
content analysis (in German, 10th edition 2008). Mayring is a professor of psychology - now don't
run away! Psychology can teach us a lot about assessing fuzzy development processes.

As shown in the post below, any scientific analysis rests on qualitative steps which determine what is
important, how the "what" should be measured, and how the measurements should be interpreted.
These steps are taken by researchers, i.e. by ordinary mortals. There is no absolute truth (leaving
aside religious beliefs) - there are only theories. Even theories that come with figures are just
theories, to be confirmed or refuted in subsequent rounds of research.

When researchers do not explain what assumptions and decisions underlie their data-gathering and
analysis, they are easily challenged. A recent posting on Duncan Green's blog describes how Oxford
economics professor Paul Collier, in his latest book Wars, Guns and Votes, mixes and matches
statistics to produce amazing guesses. They are just guesses: to his credit, Collier admits that. But a
sentence of the type "an annual expenditure of $100m on peacekeepers reduces the cumulative ten-
year risk of reversion to conflict very substantially from about 38% to 17%" does suggest a direct
cause-to-effect connection - while it's just a wild guess, a bold simplification of extremely complex
realities, a provocative entry point for a discussion.

"Hard" figures that cannot be verified by a close exam of the ways in which they have been collected,
selected and correlated are just numbers; they don't prove anything. Unfortunately, monitoring and
evaluation in development programmes, meant not just to provoke discussion, but to verify progress
and draw learning for the people involved, is often influenced by "naive" number worship. Where
"what can be counted" receives more attention than "what matters most and how can we best find
out", we may end up with "placebo" indicators. "Placebo" because they may make us feel better and
they may placate donors for a while, but they don't cure ignorance.

Friday, 26 June 2009

Quantitative is qualitative, too...

Yesterday I read an advertisement for a body lotion scientifically proven* to improve the skin of 80% of
lotion users. The *footnote explained that, in a trial bringing together twenty women, 80% (i.e. sixteen
of them) stated the lotion made their skin feel smoother. Does that sound scientific enough? In any case, it illustrates
how you turn qualitative judgements (respondents' reported feelings) into "hard" figures, a
procedure which as such is not "manipulative" but established scientific practice.

As a matter of fact, qualitative judgements are at the roots of all quantitative measurement, even in
natural sciences: you need to decide what exactly you want to measure and determine how to
establish a scale before you can start counting. When Swedish astronomer Anders Celsius developed
his temperature scale (1742), he based it on the qualitative observation that water boiled when
heated and froze when cooled. Then he established a scale which put the boiling point at 0 and
the freezing point at 100 (no typing mistake - the original Celsius scale is the reverse of what you find
on today's thermometers). He decided that the intervals on the scale would be equal. There could
have been other solutions: for example, the Richter magnitude scale, used to measure the strength
of earthquakes, is logarithmic: an earthquake of magnitude 5 produces ground shaking about ten
times larger than one of magnitude 4.
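
As a rough worked illustration (using the standard textbook definition of local magnitude, which the
post does not spell out; the symbols A and A_0 simply stand for the measured and the reference wave
amplitudes):

\[
M = \log_{10}\frac{A}{A_0}
\quad\Rightarrow\quad
\frac{A_{M=5}}{A_{M=4}} = 10^{\,5-4} = 10,
\]

so each additional point on the scale corresponds to roughly ten times the shaking amplitude - and to
roughly thirty times the energy released, since energy grows even faster than amplitude.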

Qualitative analysis establishes concepts, categories and instruments for measuring. Only after all
this qualitative work is done, you can take your quantitative measurement: you look at the
thermometer and you read 28°C. If you're a big fan of quantitative indicators, enjoy this moment,
because it is brief, as the analysis that follows will be qualitative: is 28°C hot, warm, mild, cool or a bit
chilly?

Monday, 25 May 2009

Organisations in systems theory

Fritz B. Simon, psychiatrist-turned-management consultant (sic!), is a prolific author of highly legible
books bringing together systems theory and real-life practice. His Introduction to Organisational
Systems Theory (my - slightly inaccurate - translation of the German title, Einführung in die
systemische Organisationstheorie) is delightful to read, but so condensed that any attempt to
summarise it would fail. I just want to share a couple of highlights...

A salient feature of organisations is that they can do one thing and its opposite at the same time.
Think of an oil company which causes horrid environmental degradation in, say, Nigeria, while
funding wildlife protection schemes in some other part of the planet. Such apparent inconsistency is
not necessarily a symptom of dysfunction. On the contrary, it is one of the reasons why organisations
come into being: when you're on your own, you can only do one thing in one place at a time; when
you're an organised group, you can be in many places and do many things at the same time; and if it
serves your purpose to do things that contradict each other, then that's possible, too.

And what is an organisation's first purpose? Here we come to autopoiesis, a term coined by
neurobiologist H. Maturana to designate the process by which all living beings continuously create
themselves and keep themselves alive. Applied to organisations, autopoiesis means that the main
purpose of an organisation is its self-perpetuation, regardless of stated missions and aims.
Organisations are living beings, and living beings care about their own survival. This is a tough pill
to swallow for development agencies, who like to say that their job is to put themselves out of
business. But it may explain some of the apparent inconsistencies in our behaviour...

Monday, 29 December 2008

The learning contract

In a recent conversation with Charles Shamas, humanitarian law expert with a background in
cognitive sciences, we discussed connections between “law” and cognitive development. We believe
that the success of a development programme depends to a great extent on cognitive development
processes, i.e. the acquisition of new perceptions and knowledge systems, i.e. learning, among the
actors involved. In a nutshell: if you want to gain fresh insights and skills, or support others in their
learning processes, you must leave the safe territory of familiar knowledge so as to accommodate
new, previously unknown mental objects. “Old” certainties may be shattered by new discoveries.
“Can you live with ambiguity?” is a key question for job interviews in development NGOs. People
deal differently with change. Individual life histories shape people’s ability to accommodate, to
welcome the sense of destabilisation that comes with cognitive change. Situational factors play a
role, too: experimentation feels less risky in a friendly, stable environment than in an oppressive or
very unpredictable one.

And this is where the contract comes in. In an ideal learning situation, the learner and the mentor
conclude an implicit contract, under which they both commit resources - most importantly, their
time and efforts - to the learning process. The community of learning thus created forms a safe
environment, where ambiguity and discomfort can be tolerated or even welcomed as corollaries to
the learning process. A simple illustration: imagine the discomfort of slipping on dog droppings (a
real risk in Berlin) and falling painfully on your back. You’ll be angry, upset, worried about possible
injury and quite distracted from your original plans. In different circumstances, when I decided to
take up snowboarding, I spent entire afternoons carrying the board up a slope, sliding a few metres,
falling on my back, getting up, falling again, trying again, and falling once more… I fell twenty times
per hour and was covered with bruises at the end of the day. But I hardly noticed any discomfort,
because I had concluded an implicit learning contract with myself (and an absent friend who would take
me to real mountains a month later), under which falling on my back was a perfectly normal,
acceptable risk in the pursuit of the new, more dynamic surfer’s equilibrium. Under a learning
contract, the learner trusts the mentor to support her efforts and to learn with her; the mentor
trusts the learner to expend effort in pursuit of her task.

In development, we conclude contracts, too. Explicitly, such contracts focus on amounts of money
disbursed by a donor and results to be delivered by a grantee. Implicitly, they are about sharing
efforts in a development endeavour and bestowing trust upon each other, so as to create a safe
environment for accommodation, for setting aside old certainties so as to welcome new knowledge
without getting distracted or discouraged by difficulties. Arguably, this aspect of the development
partnership deserves to be recognised and cultivated.