
HOW APPROPRIATE ARE LOCAL AUTHORITY

SUSTAINABILITY INDICATORS?

by

Bernadette Cass

A thesis presented in part-fulfilment of the degree of


Master of Science in accordance with the
regulations of the University of East Anglia

School of Environmental Sciences


University of East Anglia
University Plain
Norwich
NR4 7TJ

September 2008

2008

This copy of the dissertation has been supplied on condition that anyone who consults it is
understood to recognise that its copyright rests with the author and that no quotation from the
dissertation, nor any information derived therefrom, may be published without the author's
prior written consent. Moreover, it is supplied on the understanding that it represents an
internal University document and that neither the University nor the author is responsible for
the factual or interpretative correctness of the dissertation.

Printed on recycled paper

ABSTRACT
Sustainability, sustainable development and sustainability indicators are defined, and
the importance of operationalising these definitions with regard to sustainability
appraisal is discussed. To improve local sustainability, Local Authorities in England
have each chosen a set of sustainability indicators to be used in sustainability
appraisal, in order to assess all options under consideration when choosing the
preferred option for their local development framework.

This dissertation explores a novel criteria-based method for deciding the
appropriateness of local sustainability indicators used in sustainability appraisals.
Over a medium period of time, the method exhibited stability for the author when
applied to a reference set of indicators. The two extremes of the Defra classification of
rural and urban were used to ascertain whether, in England, there are differences in
the appropriateness of sustainability indicators chosen by Major Urban and Rural 80
Local Authorities in three regions.

To group the sustainability indicators into the three pillars of economic, environmental
and social sustainability, a classification framework was applied. A survey of Local
Authority officers and consultants involved in choosing sustainability indicators for the
three regions augmented the appropriateness method. Using these methods, an
attempt was made to suggest areas of improvement that would increase
appropriateness and possibly lower the number of local sustainability indicators used,
in order to make the communication of changes towards or away from sustainability
more transparent and manageable for all stakeholders who use the Local Authority
indicators.


Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Abbreviations and Acronyms
Acknowledgements

CHAPTER ONE: INTRODUCTION
1.1 Introduction
1.2 Outline to Subsequent Chapters

CHAPTER TWO: SUSTAINABILITY AND THE ENGLISH PLANNING SYSTEM
2.1 Sustainability and Sustainable Development
2.1.1 Sustainability or Sustainable Development?
2.1.2 Weak and Strong Sustainability
2.1.3 Is Sustainability Achievable?
2.2 The Planning System in England
2.2.1 Planning in England
2.2.2 Sustainability Appraisal
2.2.3 Sustainability Indicators
2.3 How Many Sustainability Indicators Should Local Authorities Use?
2.4 Classification of Sustainability Indicators
2.5 Classification of Rural and Urban Sustainability Challenges
2.5.1 Definitions of Rural and Urban
2.5.2 Different Sustainability Challenges in Rural and Urban Areas
2.6 Objectives

CHAPTER THREE: METHODOLOGY
3.1 Choosing Local Authorities
3.2 Definition of a Sustainability Indicator
3.3 Appropriateness of Indicators
3.3.1 Choosing Criteria
3.3.2 Method for Using Criteria-based Assessment
3.4 Subjectivity, Compatibility and Reliability
3.4.1 Subjectivity of the Author
3.4.2 Subjectivity of the Sources Chosen to Produce the Criteria Method
3.4.3 Compatibility Matrix of Criteria
3.4.4 Trial Run of the Method
3.4.5 Reference Set of Indicators
3.4.6 Reliability of Appropriateness Method Using the Reference Set Indicator
3.4.7 The Use of Weightings
3.5 Highest Scoring Individual Indicators - Top Ten Ranking
3.6 Number of Sustainability Indicators Used by Each Local Authority
3.7 Classifying Sustainability Indicators
3.8 Survey
3.9 Summary

CHAPTER FOUR: RESULTS AND ANALYSIS
4.1 Criteria-based Appropriateness of Sustainability Indicators
4.1.1 Individual Indicators
4.1.2 Top Ten Ranking
4.1.3 Criteria
4.1.4 Super Criteria
4.1.5 Conclusion for the Appropriateness Method
4.2 Methods Used to Augment the Appropriateness Method
4.2.1 Defining Sustainability Indicators
4.2.2 Number of Indicators
4.2.3 Classification of Indicators
4.2.4 Survey Information
4.2.5 Conclusion From the Methods Used to Augment the Appropriateness Method
4.3 Summary

CHAPTER FIVE: CONCLUSION, EVALUATION AND RECOMMENDATIONS
5.1 Conclusion and Evaluation
5.2 Recommendations for Local Authorities
5.3 Recommendations for Research

REFERENCES

APPENDICES
Appendix 1 Scoring Appropriateness of Sustainability Indicators
Appendix 2 Local Authorities Used in this Study
Appendix 3 Appropriateness Scores, Numbers of Indicators and Classification of Sustainability Indicators
Appendix 4 Justification of Classification of Indicators for all Local Authorities

List of Tables

Table 2.1  Areas for Consideration in the Development of Sustainable Indicators
Table 3.1  Criteria and Super Criteria Used to Assess Appropriateness of SIs
Table 3.2  The Criterion Scoring Method for Reliability
Table 3.3  Compatibility Matrix of the Thirteen Criteria and Three Super Criteria
Table 3.4  LAs Used in Trial Run of Criteria-based Appropriateness Method
Table 3.5  Trial Run Scoring of Appropriateness: Some Results from Dacorum
Table 3.6  The Reference Indicator Set: Total Number of Indicators in Each Classification
Table 3.7  Differences in Total and Percentage Scoring for Dacorum LA

List of Figures

Figure 1.1  Back Calculating Through the Cause-Effect Chain of Climate Change
Figure 2.1  Sustainability Appraisal in LDFs
Figure 2.2  Rural Designations Where English Residents Live (2003)
Figure 2.3  The Heterogeneous Nature of MU and R80 Classifications
Figure 2.4  Estimates of End User CO2 Emissions for 2005 in England, Using Defra's Area Classification
Figure 3.1  Decision Tree Method for Choosing Regions and Local Authorities
Figure 3.2  Lead Authors and Organisations Used to Establish Criteria to Assess Appropriateness
Figure 4.1  Stakeholders Involved in Choosing SIs in 2008
Figure 4.2  Groups and Organisations Involved in Local Agenda 21 (1998)
Figure 4.3  Sustainability Indicator Definitions Used by Local Authority Officers and Found in Scoping Reports
Figure 4.4  Comparison of the Classification of Copeland Local Authority with the Average of the 38 Local Authorities

ABBREVIATIONS AND ACRONYMS

ACCA  Association of Chartered Certified Accountants
A Levels  Advanced Levels
AONB  Area of Outstanding Natural Beauty
BBC  British Broadcasting Corporation
BREEAM  Building Research Establishment Environmental Assessment Method
BSA  British Sociological Association
BTO  British Trust for Ornithology
CAT  Centre for Alternative Technology
CIAT  Chartered Institute of Architectural Technologists
CRC  Commission for Rural Communities
CSD  Commission on Sustainable Development
DCLG  Department for Communities and Local Government
Defra  Department for Environment, Food and Rural Affairs
Ec  Economic Sustainability Indicator
EE  East of England
EISs  Environmental Impact Statements
EMA  Environmental Management and Auditing
EMS  Environmental Management System
En  Environmental Sustainability Indicator
EU  European Union
FOE  Friends of the Earth
GCSE  General Certificate of Secondary Education
GDP  Gross Domestic Product
GHG  Greenhouse Gas
HNC  Higher National Certificate
HND  Higher National Diploma
ICAEW  Institute of Chartered Accountants in England and Wales
ICT  Information and Communications Technology
IEMA  Institute of Environmental Management and Assessment
IMD  Index of Multiple Deprivation
IOW  Isle of Wight
LA  Local Authority
LADs  Local Authority Districts
LDF  Local Development Framework
LGBT  Lesbian, Gay, Bisexual and Transgender
MU  Major Urban
NGOs  Non-Governmental Organisations
NW  North West
ODPM  Office of the Deputy Prime Minister
ONS  Office for National Statistics
OS  Ordnance Survey
PCT  Primary Care Trust
PPM  Programme and Project Management
PPS  Planning Policy Statement
R80  Rural 80
RCEP  Royal Commission on Environmental Pollution
RSC  Royal Society of Chemistry
RSPB  Royal Society for the Protection of Birds
RTPI  Royal Town Planning Institute
SA  Sustainability Appraisal
SCS  Sustainable Communities Strategy
SE  South East
SEA  Strategic Environmental Assessment
SD  Sustainable Development
SI  Sustainability Indicator
SIs  Sustainability Indicators
SMEs  Small and Medium-Sized Enterprises
So  Social Sustainability Indicator
SS  Strong Sustainability
UK  United Kingdom
UKCIP  United Kingdom Climate Impacts Programme
UNECE  United Nations Economic Commission for Europe
WCED  World Commission on Environment and Development
WS  Weak Sustainability
WWF  World Wildlife Fund

ACKNOWLEDGEMENTS

My thanks to my wonderful husband, Colin, who travelled to Norwich from Somerset
every weekend for a year and had to endure some 152 outings and visit around 365
pubs (and one wine bar).

To Alan Bond, my supervisor, for divulging insider knowledge to demonstrate my
method could be valid and also for patiently showing me how to improve. Finally, my
thanks to Bill Sturges for advising me on how to make this look good.

CHAPTER ONE: INTRODUCTION

1.1 Introduction

The limited resources left in the world for the 6,720,701,504 humans (US Census
Bureau, 2008) currently using the planet's resources, and the opportunities for future
generations to be able to use similar resources, are some of the most significant issues
facing the world today. It is not just the resources that we use, but also the direct and
indirect effects caused by the use of these resources, that need to be considered. For
example, anthropogenic warming of the Earth caused by greenhouse gases (GHG)
(IPCC, 2007:3), such as the carbon dioxide produced from the combustion of fossil
fuels, is not a simple cause-and-effect situation (see Figure 1.1) and has long-term,
wide-reaching consequences that go beyond simply using up all the oil, gas and coal.

This dissertation aims to assess one method currently used by local authorities in
England, sustainability appraisal, which claims to lead towards stronger
sustainability.

Source: Gupta et al. (2006).

Figure 1.1: Back Calculating Through the Cause-Effect Chain of Climate Change
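The logic of back-calculation through such a chain can be illustrated with a deliberately simplified numerical sketch: start from a maximum acceptable effect (a temperature rise that stakeholders might agree on) and invert each link in the chain to recover an allowable cause (an emissions budget). Every function name and coefficient below is an invented placeholder for illustration only; none is taken from Gupta et al. (2006) or the IPCC.

```python
# Toy back-calculation through a cause-effect chain:
# emissions -> atmospheric concentration -> temperature rise.
# All coefficients are illustrative placeholders, not real climate science.

def temperature_rise(concentration_ppm, sensitivity=0.01):
    """Forward link: warming (deg C) as a linear function of concentration above a 280 ppm baseline."""
    return sensitivity * (concentration_ppm - 280.0)

def max_concentration(max_rise_c, sensitivity=0.01):
    """Inverted link: the highest concentration consistent with an acceptable rise."""
    return 280.0 + max_rise_c / sensitivity

def max_cumulative_emissions(max_conc_ppm, current_ppm=380.0, gt_per_ppm=7.8):
    """Inverted link: the remaining emissions budget (Gt CO2-equivalent)."""
    return (max_conc_ppm - current_ppm) * gt_per_ppm

acceptable_rise = 2.0                            # deg C: the end-effect threshold debated by stakeholders
conc_limit = max_concentration(acceptable_rise)  # roughly 480 ppm with these toy numbers
budget = max_cumulative_emissions(conc_limit)    # roughly 780 Gt with these toy numbers
print(f"concentration limit: {conc_limit:.0f} ppm, emissions budget: {budget:.0f} Gt")
```

The value of the back-calculated form, as Gupta et al. (2006) found, is that stakeholders can debate the acceptability of the end effect rather than the technical intermediate quantities.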

Different people and organisations have widely differing views on how to manage
these resources. The north and south of England have polarised views, and it is
arguably the north of England that currently exhibits a movement towards sustainable
development, while the south of England appears concerned primarily with
development. This dissertation concentrates on the views from the north of England,
with a particular focus on sustainability within the English local authorities.

1.2 Outline to Subsequent Chapters

This first chapter provides an introduction to the motivation behind this research.
Chapter Two frames the case for developing an appropriate method for sustainability
indicators. The third chapter then outlines the methods used to determine the
appropriateness of sustainability indicators, the results of which are discussed in
Chapter Four. Conclusions then follow in Chapter Five, outlining how well the overall
aims and objectives were addressed. A set of appendices provides additional specific
details of the methodology not considered appropriate to the main text.

CHAPTER TWO: SUSTAINABILITY AND THE ENGLISH PLANNING SYSTEM

2.1 Sustainability and Sustainable Development

Firstly, we need to consider the appropriate definition of sustainability and sustainable
development, along with considering weak or strong sustainability and whether this is
really achievable.

2.1.1 Sustainability or Sustainable Development?

The wide variety of definitions for sustainability and sustainable development in
academia, English government and organisations can lead to drawbacks in comparing
research and initiatives within this field. A divergence of views (Hopwood et al., 2005;
Counsell et al., 2006) and the availability of various information sources (Paris et al.,
2003; Glavic et al., 2007) have led to a confusion of definitions (Glavic et al., 2007).

Academically, the terms sustainability and development are clearly defined to be
conceptually distinct, whereas activist, political and legal definitions have different
meanings (McNeill, 2000). Academics and NGOs are more prone to use the term
sustainability in similar contexts, while government and private sector organisations
have tended to adopt the term sustainable development (Robinson, 2004). Tools are
needed to measure sustainability (Fraser et al., 2005), especially new tools that
transcend current conflicting views (Robinson, 2004). However, one size does not fit
all, and meaningful comparisons need to be possible (Phillips and Bridges, 2005;
Roberts, 2006); definitions therefore become important. This much-quoted definition
is from Our Common Future:

Humanity has the ability to make development sustainable, to
ensure that it meets the needs of the present without compromising
the ability of future generations to meet their own needs (WCED,
1987:8).

This definition is often the starting point from which researchers begin to define
sustainability (Holland, 1997; Jackson, 2007; Morse, 2008). However, this definition
reflects a managerial approach and is therefore more attractive to government and
business than a more radical definition (Robinson, 2004), although intragenerational
and intergenerational equity does appear as the cornerstone of this WCED definition.

The terms sustainable development and sustainability have often been used
interchangeably and a wealth of definitions are available (Defra, 2005a; Dartford
Borough Council, 2006; Moles et al., 2007; Zidansek, 2007; Wigan Council, 2007;
Morse, 2008), including the definition by Dahl (2007) that sustainability is "the
capacity of any system or process to maintain itself indefinitely". This definition
alludes to intragenerational and intergenerational equity, but does not refer to the
quality of the level maintained (i.e. using a quantified baseline). The following
definition is from the Draft UK Sustainable Communities Bill 2007: "By local
sustainability we mean policies that work towards the long term well being of any
given area. That means promoting economic needs..." (House of Commons, 2007).
Unlike the WCED and Dahl, the House of Commons introduces both policy and
temporal range, being long-term, and the omission of a stated baseline. Robinson
(2003) defines sustainability as being related to values and fundamental changes in
individual attitudes towards nature (value changes) and sustainable development as
being orientated towards efficiency gains and improvements in technology (technical
fixes), with their ultimate goals being rather different. Of note is that a number of local,
regional and national government and associated organisations, in England, have
chosen not to define sustainability or sustainable development (ODPM, 2005; Scott
Wilson Business Consultancy, 2005; CRC, 2008; DCLG, 2008) within their
sustainability documents.

Robinson (2003) concedes that it may be worth leaving the definition of sustainable
development open, using the diplomat's method of constructive ambiguity, and letting
definitions emerge from attempts at implementing sustainable development. Moldan
and Dahl (2007) hold the same opinion for the definition of sustainability. Because of
this lack of consensus, omitting a definition may be a viable alternative
to including a definition that is potentially unusable at the time.

2.1.2 Weak and Strong Sustainability

Academics further divide sustainability; this can then be applied to other organisations
and institutions. Researchers (Turner, 1993; Holland, 1997; Neumayer, 2003; Karlsson
et al., 2007) suggest that sustainability can be measured in degrees of sustainability,
termed weak and strong sustainability. Two-part (Holland, 1997; Karlsson
et al., 2007), three-part (Neumayer, 2003) and four-part (Turner, 1993) classifications have been
suggested. These classifications are based upon what economists term natural capital,
which summarises the multiple and various services of nature benefiting human beings,
from natural resources to environmental amenities (Neumayer, 2007). Weak
sustainability (WS) is built upon the unlimited substitutability of natural capital
(Neumayer, 2003), whereas strong sustainability (SS) is more difficult to define
(Holland, 1997).

Turner's (1993:9-15) four-part classification includes very weak sustainability, weak
sustainability, strong sustainability and very strong sustainability, with the latter
suggested by Turner as being impossible to achieve. Neumayer's (2003) three-part
classification also includes WS and suggests two interpretations of SS as available in
the literature. In one interpretation, SS is the paradigm that calls for preserving natural
capital itself in value terms. In the second interpretation, SS is not defined in value
terms but calls for the preservation of those forms of natural capital that are regarded as
non-substitutable (the so-called critical natural capital) (Neumayer, 2003:24-25).

Official UK conceptualisations of sustainability adopt a weak interpretation. The
attraction for politicians and policy makers is that it offers scope for claiming the
adoption of a given stance without entailing the sacrifice of living standards (Jackson,
2007). However, SS requires more radical changes and is therefore more difficult to
achieve and less attractive (Holland, 1997), requiring a sea change in thinking
(Glasson et al., 2005).

From the classifications available for sustainability, different actors choose definitions
from the weak or strong sustainability viewpoint. Ekins et al. (2003) suggest that the
important point is that, starting from a SS assumption of non-substitutability in general,
it is possible to shift to a WS position where that is shown to be appropriate. In this
dissertation, WS is taken to mean the same as SD, and SS will be defined as Neumayer
(2003) has concluded, unless stated otherwise; Ekins' stance is used where applicable.

2.1.3 Is Sustainability Achievable?

It is suggested that a discussion of sustainability that only refers to definitions is
pointless without an understanding of how a definition is operationalised (Ozkaynak et
al., 2004). Operationalisation includes three main elements: people, processes and
outcomes (Oakland, 2000). Therefore, the three main factors that can determine the
outcome (i.e. the degree of sustainability) are the definition applied, the people
involved and the process used.

What shapes people's pro-environmental behaviour is complex. Increased knowledge
and awareness, or greater affluence, in most cases does not lead to pro-environmental
behaviour (Kollmuss et al., 2002). For a process to emulate a sustainability definition,
the pro-environmental stance (or not) of the person leading and managing the project is
therefore important. Primarily, leaders and managers should have current
pro-environmental behaviour patterns, rather than just a high level of environmental
education (Kollmuss et al., 2002), as one of the variables contributing to achieving
project outcomes. The processes that leaders develop should show that economic
systems are underpinned by ecological systems and not vice versa (Holland, 1997).
Sustainability appraisal (SA) is an example of how a definition can be operationalised,
in this case within the English planning system.

2.2 The Planning System in England

The planning system in England, sustainability appraisal and indicators now need to be
considered in more detail.

2.2.1 Planning in England

A major culture shift in the English planning system has redefined the nature and
purpose of planning from land use to spatial planning (Wong et al., 2006). England
has been carrying out SA on development plans since 1992 and this SA has been
relatively effective at integrating environmental and sustainability considerations into
plans (Therivel et al., 2002). The 2004 Planning and Compulsory Purchase Act
requires planning authorities in England and Wales to undertake SAs of, amongst other
things, Local Development Frameworks; these SAs are also intended to fulfil
Strategic Environmental Assessment (SEA) requirements (Jackson, 2007). Sustainable
development is noted as being key to the reformed planning system (ODPM, 2005),
with SA as an integral part of the planning system (Defra, 2005a).

2.2.2 Sustainability Appraisal

An academic definition of SA is that:

SA is committed to positive overall contributions to a more
desirable and durable future, through the identification of Best
Options (not just acceptable undertakings), and it is designed to
achieve multiple reinforcing gains (rather than mere avoidance of
problems and mitigation of adverse effects) (Gibson, 2006).

This definition states exactly what type of best option should be considered; it is not
linked to policy or economic considerations. The UK Revised PPS12 (2008) defines SA
as "an appraisal of the economic, social and environmental sustainability of the plan"
(DCLG, 2008). Within this policy, SA is linked to the Sustainable Communities
Strategy (SCS), where the current emphasis on sustainable development now lies.
However, the Local Authorities (LAs) and consultancies used in this dissertation have
used the older version of PPS12, in which the definition is slightly different (as the
current SCS was not in place then). Nevertheless, all government definitions are based
on WS principles, due to the inclusion of economic values and the omission of a
definition of what best option really means (ODPM, 2005; DCLG, 2006).

Local authorities and the consultants that they employ define SA in two ways. Firstly,
there are those who mention SA contributing towards emulating SD (DCLG,
2006), such as Copeland Borough Council (2005), Elmbridge Borough Council (2005)
and West Oxfordshire District Council (2008) (Gardner et al., 2006). Secondly, there
are those who additionally incorporate an intergenerational view into their definition,
such as Liverpool City Council (2005) and Breckland Council (2008) (Costaras et al., 2006).

None of the SA definitions in this section introduce the non-substitutability of
natural capital; therefore all err towards degrees of WS. The operationalisation of the
LA definitions for SA all function within the same process framework, set out in Figure
2.1. This process operates in parallel with the DPD process.

For the operationalisation of SA in LAs, the similarities are the definitions of SA, the
process which LAs use for SA, and the statutory stakeholders from whom they must
invite comment. Differences occur in the subject specialisms of the LA officers and
their consultants, and in the LAs' choice of non-statutory stakeholders consulted for SA.

Gibson (2006) states that one should establish the SA contribution as the main test of
proposed purpose, option, design and practice. The processes must put application of
these sustainability-based criteria at the centre of decision-making, not as one advisory
contribution among many (Gibson, 2006). In UK policy, sustainable development is
only promoted by using SA. LAs are not required to justify national planning policy
when conducting SAs (for example, by appraising alternatives to national policy), even
if the non-policy alternative turns out to be the best option (ODPM, 2005); therefore,
Gibson's stance is not followed. Recent research within English regions has obtained
the view that SA is a weak science with subjective outcomes, where the big issues are
sidestepped (Counsell et al., 2006).

[Figure 2.1 here: a flow diagram, plotted against a timeline in years, showing the LDF process (pre-production, production, examination, and adoption) running in parallel with the SA process (evidence gathering and scoping/baseline; developing and appraising options; preparing the SA report; choosing the preferred option; consulting on the LDF and SA reports; submission and examination of the LDF; monitoring the significant effects).]

Source: abridged from ODPM (2005:38).

Figure 2.1: Sustainability Appraisal in LDFs

Overall, SA is defined weakly in terms of sustainability by the UK government,
English LAs and associated consultancies. The operationalisation of these definitions
produces a weak degree of sustainability, as stronger sustainability (based on at least
multiple reinforcing gains and a best option based on sustainability alone) is not the
central focus of a decision to choose a plan for an LDF.

2.2.3 Sustainability Indicators

The process of using SA to establish the best option for English LDFs employs the
objectives, targets and sustainability indicators approach (ODPM, 2005). Sustainability
indicators (SIs) are derived from the objectives chosen, as one approach to gauging
progress towards SD is to use SIs (Bell et al., 2001). England is now on its third
generation of SIs, developed since the first UK SD summit, which was instigated after
the 1992 Rio Earth Summit (Hall, 2007). In addition to SIs having numerous
definitions, there are alternative methods for choosing SIs, and also various techniques
to determine if SIs are appropriate.

Definitions vary depending on the actors who define SIs. Most academic and
government definitions contain an element of measurement (Astleithner et al., 2004;
ODPM, 2005) and a change over either time, space or both (Smith et al., 2001;
Astleithner et al., 2004; ODPM, 2005; Gasparatos et al., 2007). Cartwright et al.'s
(2000) survey of LAs showed that the majority of respondents (51%) indicated that the
primary role of SIs was to help monitor progress towards SD, with raising awareness
and educating people acknowledged as key issues. Writing about the UK SIs, Hall
(2007) suggests that the principal role of indicators is communication, particularly to
the public and to ministers who do not need a lot of detail. Some researchers consider
that SIs can be framed in terms of degrees of SS (Holland, 1997; Bastianoni et al.,
2005) and WS (Holland, 1997), while Wackernagel et al. (2005) state that the
ecological footprint indicator tracks core requirements from SS and identifies priority
areas for WS. However, other researchers do not agree with this (Ayres, 2000) and
consider ecological footprints, due to methodological flaws, not to have any value for
policy evaluation or planning purposes (Neumayer, 2003:197).

The importance of community stakeholder involvement in the development of SIs has
been specifically identified by some researchers (Bond et al., 1998; Cartwright, 2000;
Bell et al., 2001; Astleithner et al., 2004). However, the drawbacks are also represented
in the literature (Astleithner et al., 2004; Fraser et al., 2005; Doak et al., 2005; Reed et
al., 2005), and Morse (2008) states that participation in general has received remarkably
little attention within the SD literature compared to the development literature.

Two methods for choosing SIs have been proposed by Reed (2005): the reductionist
framework (which is expert-led) and the bottom-up participatory philosophy, which
focuses on the importance of understanding local context (also known as the conversation
paradigm) (Bell et al., 2001). According to Gasparatos et al. (2007), SA has so far
relied on reductionist methodologies and tools. However, Bond et al.'s (1998) survey
of UK LA stakeholder involvement in Agenda 21 showed that, while it was clear
that there had been community involvement, the extent of the involvement was
unclear. A participatory integrated method has been utilised (Gupta et al., 2006), using
experts and a variety of stakeholders, not only to get stakeholders to identify indicators
but also to identify thresholds of acceptable and unacceptable risks (for dangerous
climate change). A key tool in communicating difficult concepts to stakeholders was by
using back calculation of cause and effect (related to climate change) (see Figure 1.1).
The researchers considered the production of a simple usable visual communication
method of their results as an essential part of this process, even though this was
considered inappropriate by experts (Gupta et al., 2006). This method agrees with other
researchers that communities need to be thinking through and deciding the kind of
future that they want to create (Robinson, 2004).

Different criteria are used for deciding the appropriateness of SIs, an example being
Donnelly et al. (2007) who propose criteria for the selection of four environmental
indicator types (biodiversity, water, climate and air) used in SEA, which is now part of
SA in England. Donnelly et al. (2007) consider it important to set criteria before a final
list of indicators is agreed upon, to ensure the most pertinent environmental issues for
SEA are properly addressed, yet other researchers use criteria both pre- and
post-decision-making (Lin et al., 2007) to decide appropriateness.

There is no single method that is easily repeatable from the point of view of LAs. From
the literature, there is no clear agreement on a set of criteria or measurement of
appropriateness of local SIs, but there are areas that many researchers agree need
further consideration. Table 2.1 shows some areas for further consideration in the
development of SIs. However, there is seldom a perfect SI, so the design generally
involves some methodological trade-offs between technical feasibility, societal usability
and systemic consistency (Moldan et al., 2007).

Disaggregation of data (a, h, i, j)
Small scales needed (g)
Averages shade issues (l)
Innovation needed (b, c, d)
Current data is not being acted on (k)
Relationships are not straightforward (linking cause and effect) (f)
Indirect effects need consideration (climate, health and economy) (e)

Sources: (a) Coombes et al. (2004); (b) Robinson (2004); (c) Beveridge et al. (2005); (d) Defra (2005a);
(e) Bosello et al. (2006); (f) Doran et al. (2006); (g) Weich et al. (2006); (h) Hajat et al. (2007); (i) Lin et
al. (2007); (j) Warren (2007); (k) Hanratty et al. (2008); (l) Spilanis et al. (2008).

Table 2.1: Areas for Consideration in the Development of Sustainable Indicators

In conclusion, SIs have been defined by researchers who have suggested that some SIs
measure degrees of sustainability, although this is contested by other researchers.
Choosing SIs with their thresholds of acceptability in a participatory integrated way
may be a clearer way to communicate with stakeholders concerning the degree of
sustainability available from the best plan chosen for the LDF, via the process of SA.
The gaps in knowledge to decide on the appropriateness of SIs need to be considered in
future criteria-based appropriateness assessments.
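The general shape of such a criteria-based appropriateness assessment can be sketched briefly in code. The criteria names, the 0-2 scoring scale and the example indicators below are hypothetical placeholders chosen for illustration; the thirteen criteria actually used in this dissertation, and their scoring scheme, are set out in Chapter Three and Appendix 1.

```python
# Sketch of a criteria-based appropriateness assessment.
# Each indicator is scored 0 (not met), 1 (partly met) or 2 (met) against
# every criterion; totals are expressed as a percentage of the maximum
# possible score. All criteria and scores here are invented examples.

CRITERIA = ["measurable", "data available", "relevant to objectives",
            "understandable to stakeholders", "sensitive to change"]

def appropriateness(scores):
    """Percentage appropriateness for one indicator's per-criterion scores."""
    return 100.0 * sum(scores.values()) / (2 * len(scores))

indicators = {
    "CO2 emissions per capita": dict(zip(CRITERIA, [2, 2, 2, 1, 2])),
    "Number of biodiversity action plans": dict(zip(CRITERIA, [1, 2, 1, 1, 0])),
}

# Rank indicators from most to least appropriate.
ranked = sorted(indicators, key=lambda name: appropriateness(indicators[name]), reverse=True)
for name in ranked:
    print(f"{name}: {appropriateness(indicators[name]):.0f}%")
```

Expressing each total as a percentage of the maximum possible score allows indicator sets of different sizes to be compared, which is one way to support a ranking such as the Top Ten discussed in Chapters Three and Four.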


2.3 How Many Sustainability Indicators Should Local Authorities Use?

There are numerous sets of SIs, each with widely varying numbers (Daniels, 2007),
with sets of over 100 indicators being common (Rydin et al., 2003). Two examples of
different researchers using a lower number of SIs are Fraser et al. (2005) who used 55
SIs in their Guernsey study and Spilanis et al. (2008) who used a total of 37 SIs in their
Greek Islands study. Hall (2007) identified a total of 5000 SIs when developing the first
generation of UK SIs and, from these, ideally wanted to reduce them to a set of fifty.
This reduction was intended to make the set more manageable and
understandable. Reed et al. (2006) indicate that stakeholder involvement can lead to a
large number of potential indicators and, when establishing the first generation of UK
SIs, Hall came away from a day's stakeholder involvement with a potential set of 400
SIs (Hall, 2007) rather than the 50 that was planned. Using their integrated method of
choosing indicators, involving both experts and stakeholders, Gupta et al. (2006)
developed just 27 indicators of climate change.
Bossel's (2001) systems-based approach for deriving comprehensive indicator sets
requires exactly 42 indicators. It turns the focus from an uncertain ad hoc search and
bargaining process to a much more systematic procedure with clear goals to find
indicators that represent all the important aspects of viability, sustainability and
performance (Reed et al., 2005). Currently, the UK's third generation set of SIs
numbers 68 (Defra, 2007). No exact numbers of SIs are suggested for LA SAs; the only
guidance is that the number of SIs needs to be manageable and developed with input
from relevant stakeholders (ODPM, 2005). However, LAs need to consider that
decision-makers and the public rapidly lose interest if presented with more than just a
few indicators (Moldan et al., 2007). Not only the number but also the types of SIs
chosen by LAs will be considered next.

2.4 Classification of Sustainability Indicators

Sustainability indicators may be easier to understand and interpret when assembled in
some conceptual framework, perhaps with a hierarchical arrangement of sub-domains.
The three pillars (economic, environmental and social) are one such framework, used in
Agenda 21 and, in a survey by Cartwright et al. (2000), 81% of LA respondents chose

this option - but many others are possible (Moldan and Dahl, 2007). A fourth pillar
(institutional indicators) was included in the system of sustainability indicators adopted
by the United Nations Commission on Sustainable Development (CSD) (Moldan et al.,
2007; Zidansek, 2007) and institutional indicators are currently used by some
researchers (Sheate et al., 2008). However, the institutional dimension is often
subsumed into the social dimension (Spangenberg, 2007).

Most indicator sets have assembled indicators for each of the three pillars whilst
neglecting the links between them. Interlinkage indicators are also called decoupling
indicators, and a number of the UK government strategy indicators take the form of
decoupling indicators. Decoupling is defined as 'how successful we are in breaking the
link between economic growth and environmental damage' (Defra, 2005a). From the
classification systems available, the three pillars idea has been used by academics from
many disciplines in their research (Holland, 1997; Ekins et al., 2003; Rydin et al.,
2003; Astleithner et al., 2004; Lehtonen, 2004; Counsell et al., 2006; Glavic et al.,
2007; Huby et al., 2007; Niemeijer et al., 2008; Sutherland et al., 2008).

The three pillars classification system will be used in this research, as the widespread
use of this approach and the advantage of ease of communication are considered to
outweigh the disadvantages of lack of linkage and decoupling, and subsumption of the
institutional dimension. This research will be compared to a previously used method by
Bond et al. (1998), who present a fully referenced classification of the three pillars
system which will be adapted to classify LA SIs in 2008.

2.5 Classification of Rural and Urban Sustainability Challenges

2.5.1 Definitions of Rural and Urban

Definitions of rural and urban vary both within and between countries, and within
organisations worldwide a number of classification systems exist to divide rural and
urban (OECD, 1994; Reschovsky et al., 2002; Chomitz et al., 2005; Buckwell, 2006;
Fotso, 2006; Gallego, 2006; Weich et al., 2006; Huby et al., 2007; Vickers et al., 2007;
Zonneveld et al., 2007), as different users have different needs (Champion et al., 2006).

However, there are unclear and contradictory usages of the term rural within England
(Haynes et al., 2000; Baird et al., 2006; Keirstead et al., 2007; Manthorpe et al., 2008).
Sometimes a definition is not obvious in an academic paper (Ulubaşoğlu et al., 2007)
and some academics consider that in cultural, social and economic terms the notion of
rurality in a country such as the United Kingdom is outdated (Champion and
Shepherd, 2006). In 2004, the Office for National Statistics (ONS) published a new
definition of rural areas covering England and Wales, launched alongside Defra's Rural
Strategy (Champion and Shepherd, 2006). Defra considers the new definition of rural
areas to offer a distinctly different, potentially more useful and more transparent
approach to identifying LA districts (LADs), making it possible for policy makers,
researchers and others to interpret their results against a known set of benchmarks
within the classification (Defra, 2005).

England's 354 unitary authorities and LADs have been allocated to one of six main
types. Three (176 LADs in all) are overwhelmingly urban in nature and are called
Major Urban, Large Urban and Other Urban (large market town). The rural types,
of which there are 178, are called Significant Rural (rural town), Rural 50 and
Rural 80, according to the proportion of people in rural settlements. Thus, Rural 80
LADs have between 80 and 100% of the people in rural settlements and Rural 50
LADs have more than 50%, while Significant Rural have more than the national
average of 26% (Champion and Shepherd, 2006). The percentage of residents in each
of the six types (2003 data) is illustrated in Figure 2.2.

From this, it can be established that overall 63.5% of England's residents live in urban
areas, while 36.5% live in a rural area (2003 statistics). Each LA is classified into one
of the six designations. However, each LA can be further sub-classified (using the six
designations) and, as can be seen in Figure 2.3, both Rural 80 (R80) and Major Urban
(MU) are not homogeneous classifications. R80 can contain urban classifications and
MU can contain rural classifications. However, this classification system is more
transparent than others that have been used previously in England and research findings
can be discussed with this classification in mind.
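The settlement-percentage thresholds above can be sketched as a simple classification rule. This is illustrative only: Defra's actual methodology distinguishes the three urban sub-types and draws on more than a single percentage, so the function below (its name and its collapsed 'Urban' category) is an assumption made for illustration.

```python
def rural_designation(pct_rural: float) -> str:
    """Illustrative mapping from the percentage of a district's residents
    living in rural settlements to a Defra-style designation.
    The three urban types are collapsed into a single 'Urban' label."""
    if pct_rural >= 80:
        return "Rural 80"           # 80-100% of residents in rural settlements
    if pct_rural > 50:
        return "Rural 50"           # more than 50%
    if pct_rural > 26:
        return "Significant Rural"  # above the national average of 26%
    return "Urban"                  # Major Urban, Large Urban or Other Urban

print(rural_designation(85.0))  # Rural 80
print(rural_designation(55.0))  # Rural 50
```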


Source: derived from Champion et al. (2006).

Figure 2.2: Rural Designations Where English Residents Live (2003)

Source: Defra (2004).

Figure 2.3: The Heterogeneous Nature of MU and R80 Classifications


As there is some transparency in this classification system (Defra, 2005), as it can be
applied to all LAs in England, and as a large amount of administrative and other data is
available only at this level (Champion et al., 2006), this classification system was used
in this dissertation.

2.5.2 Different Sustainability Challenges in Rural and Urban Areas

A number of organisations now use Defra's classification of rural and urban areas in
their more recently produced documents and research (CRC, 2008; RCEP, 2008). In
England, researchers and organisations have suggested that different priorities are
needed for urban and rural areas in order to lead towards sustainability (Doran et al.,
2006; Champion et al., 2006; CRC, 2008; RCEP, 2008).

Carbon dioxide (CO2) emissions will be considered for both urban and rural areas,
bearing in mind that the UK has the goal of a 60% cut in CO2 emissions by 2050
(RCEP, 2008). The greatest new driver of public policy for rural communities, as for
the nation as a whole, is climate change. Climate change poses particular challenges for
rural communities, both in terms of the sustainability of people's car-reliant lifestyles
and in the way landscapes and biodiversity will adapt (CRC, 2008).

The density and infrastructure of urban areas helps make them more efficient in terms
of per capita energy consumption and emissions, which are lower in many of the UK's
major cities than the national average (RCEP, 2008). CRC (2008) agrees, as shown in
Figure 2.4, and states that this is because people living in rural
areas carry out much more of their travelling by car. In all three rural classifications,
transport has a significantly larger (approximately twice the value) CO2 emission value
than all three urban areas.


Source: adapted from CRC (2008:144).

Figure 2.4: Estimates of End User CO2 Emissions for 2005 in England,
Using Defras Area Classification

Reducing the CO2 concentration is a local as well as global sustainability issue. This
may need to be achieved by different methods for rural and urban areas, so
the significant differences need to be studied. In this example, car transport is a
significant aspect in rural areas. Indicators to monitor car usage in rural areas are
needed to increase sustainability. As previously stated (Defra 2005a), indicators do not
always stand alone - they can link together. Currently, there are several sustainability
issues linked to the lack of public transport in rural areas, two of which are illustrated
below:

1. Increasing rurality is associated with the greater pace of growth in the number
of the oldest people (Buckwell, 2006; Champion et al., 2006), and the need to
travel disproportionately affects these people in rural areas, particularly for
those without their own cars (Baird and Wright, 2006).
2. With regards to adult literacy and numeracy, lack of transport, access and
childcare are major barriers to learning in rural areas (Atkin et al., 2005).


RCEP (2008) recommends that before development plans are approved, the
government should publish a clear assessment of the transport infrastructure needs for
all proposed housing growth, how they will be funded and the environmental and
health impacts of meeting those needs. This should be accompanied by a clear plan for
phasing in the necessary supporting infrastructure, ensuring that this new transport
provision is environmentally sustainable.

Overall, the 36.5% of English people who live in rural areas produced twice as much
carbon dioxide per person from personal transport use in 2005 as the 63.5% of
people who live in urban areas. This one piece of evidence indicates that there are real
differences between rural and urban sustainability. Rural and urban areas can face the
same issues (such as climate change) but have different scales of a problem, so they
may need different solutions to achieve the same level of sustainability. Conducting SA
to choose the best option and the most appropriate choice of sustainability indicators
when choosing an LDF may help address this rural/urban difference to achieve a higher
level of sustainability within all LDFs.

The remainder of this dissertation examines how appropriate the choices of SIs are
when choosing the best option LDF.

2.6 Objectives

From the classifications available for sustainability, different actors will choose
different definitions from the WS or SS viewpoint. Recent research within English
Regions has shown that SA is a weak science with subjective outcomes, where the big
issues are sidestepped (Counsell et al., 2006); although some authors do have the
opinion that SIs can measure degrees of sustainability and are therefore a useful tool.
Guidance exists for SA (ODPM, 2005); however, there is a gap in this guidance on how
to reliably assess the appropriateness of SIs. In order to fill this gap, the overall aim of
this project is to develop and apply a reliable criterion-based assessment that LAs can
use to assess the appropriateness of SIs used when selecting the best option for an
LDF. This project aim will be addressed through three specific objectives:


1. To critically appraise sustainability literature and guidelines to enable the
development of a reliable criterion-based assessment of appropriateness for SIs
used in LA SA of LDFs.
2. To apply the criterion-based assessment of appropriateness to LA SIs from three
regions, to enable an evaluation of similarities and differences of
appropriateness between rural and urban LAs, both within and between those
regions.
3. To examine other methods which augment the appropriateness method, by
applying a classification framework to indicators, investigating numbers of
indicators, and surveying LA officers and consultants about the choosing of
indicators.


CHAPTER THREE: METHODOLOGY

3.1 Choosing Local Authorities

To research the similarities and differences in appropriateness of SIs in rural and urban
areas of England, the two extremes of rurality, Major Urban (MU) and Rural 80 (R80)
were used. There are nine relevant regions in England and, for this study, three regions
were chosen using the method shown in Figure 3.1.

Figure 3.1: Decision Tree Method for Choosing Regions and Local Authorities


3.2 Definition of a Sustainability Indicator

To locate the sustainability indicator definition used by individual LAs, the relevant LA
document containing the indicators was first consulted. If the indicators were found in a
separate appendix, then the scoping report was consulted. Using this method, it was
seen that 'sustainability' and 'sustainable development' were considered to have the
same meaning, with many government documents using these phrases interchangeably.
How LA officers and consultants define SIs is considered in the survey.

3.3 Appropriateness of Indicators

3.3.1 Choosing Criteria

The author followed the suggestion by Goodwin (2006) regarding the four key criteria
involved in the use of documentary sources of qualitative data. Therefore, the
information sources used for this criterion-based method were checked by the author
for authenticity, were credibly recorded, representative of the literature search carried
out, and finally the author decided whether the source could be used in the literal sense
or not. A choice was made to use authors such as Lin et al. (2007), whose research
contains evaluative criteria on gender equality and health for suites of indicators:
although they do not explicitly name their indicators as SIs, their work nevertheless
contributes towards the criteria-based method in a credible manner - in this case, for
disaggregation of data. Authors such as Niemeijer and de Groot (2008) were also
included as they met the definition that their environmental indicators became SIs with
the addition of a time, limit or target (Rickard et al., 2007:75).

From a literature search of 121 sources, 27 were chosen and used, with sources being
equally weighted to show transparency in method (Moles et al., 2007).

The results from the above process were then used to construct the criteria. Quotes
from each source were divided into groups according to their subject matter and
thirteen criteria groups formed. A short title was given to each criterion to reflect the
subject matter of the material it contained. Each criterion was divided into two or three
main areas, reflecting the complexity of the data in that group and a hierarchical

scoring system created, attaching a score of 0-3 for each level of hierarchy to reflect the
level of complexity in achieving that score. Finally, commonalities were established
between the thirteen criteria and three super criteria groups were formed. The full
criteria and scoring system is contained in Appendix 1. The criteria and super criteria
chosen are shown in Table 3.1.

Super Criteria     Criteria

Credibility        Leads to strong sustainability; Academic credibility;
                   Addressing uncertainty; Environmental receptors addressed

Measurement        Locality; Disaggregation of data; Measurability; Reliability

Local Authority    Relevant to plan; Actionable; Stakeholder involvement;
                   Funding/cost; Easily communicated

Table 3.1: Criteria and Super Criteria Used to Assess Appropriateness of SIs

3.3.2 Method for Using Criteria-based Assessment

The following method was used for all thirteen criteria. All data obtained was easily
found within the main body of the LA scoping report or appendices. Each criterion was
scored as zero, one, two, or three. An example of how the criterion of reliability was
scored is presented in Table 3.2.

Each criterion has the possibility of a zero level score; this was scored where no
information pertaining to this criterion could be found.

To score one, evidence to support at least one of the statements, or partial
statements, at that level was accessed. A score of one had to be achieved before
proceeding to decide if a score of two was possible.

To score two, evidence to support at least one of the statements, or partial
statements, to score two had to be accessed. The scores of one and two had to be
achieved before deciding if a score of three was achievable.

To score three, evidence to support at least one of the statements, or partial
statements, to score three had to be found.
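The sequential nature of this scoring (no level counts unless every level below it has been achieved) can be sketched in code. This is a minimal illustration only: the evidence checks were qualitative judgements made by the author from the scoping reports, so the boolean inputs and the function name below are hypothetical.

```python
def score_criterion(evidence: dict[int, bool]) -> int:
    """Hierarchical 0-3 scoring of one criterion for one indicator.
    evidence[n] is True when at least one statement (or partial
    statement) at level n was supported; a level only counts if
    every lower level has already been achieved."""
    score = 0
    for level in (1, 2, 3):
        if evidence.get(level, False):
            score = level   # level achieved; consider the next one
        else:
            break           # a level cannot be skipped
    return score

# Evidence at levels 1 and 3 but not 2 still scores only 1,
# because two must be achieved before three is considered.
print(score_criterion({1: True, 2: False, 3: True}))  # 1
```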

Criterion: Reliability

Score   Evidence Base for Score

0       No reliability.

1       Repeatable and able to assess trends over space (Niemeijer and de Groot,
        2008) and time (Reed et al., 2006), with the right spatial and temporal
        scales (Sustainable Seattle, 2005; Niemeijer and de Groot, 2008).

2       Sensitive and responds in a predictable manner to changes and stresses
        (Reed et al., 2006; Niemeijer and de Groot, 2008).

3       Repeatable and reproducible in different contexts, allowing unambiguous
        interpretation (Niemeijer and de Groot, 2008).

Table 3.2: The Criterion Scoring Method for Reliability

3.4 Subjectivity, Compatibility and Reliability

To address different areas of subjectivity, compatibility and reliability in this method,
seven areas were evaluated:

1. Subjectivity of the author.
2. Subjectivity of the sources used to produce this method.
3. Compatibility of the chosen criteria.
4. Trial run of the method.
5. Reference set of indicators.
6. Reliability of the criteria-based method.
7. Use of weightings.

These areas will now be considered in more detail.

3.4.1 Subjectivity of the Author

This criterion-based assessment of sustainability indicators was created and used by
one person. Three main author-based areas of subjectivity were considered:

1. Skills subjectivity of the author
2. Knowledge subjectivity of the author
3. Single researcher versus team approach

The author's skill base and knowledge base were assessed as to how subjective they
would be towards scoring the thirteen criteria. Consideration was given as to how to
scale the author's subjectivity. Two options of matrix-rating scales were considered:
Likert and semantic differential scales (SurveyMonkey, 2008a). However, neither of
these matrix-rating scales was considered suitable. Finding the correct wording for
subjectivity for five points on the Likert scale, or for a balanced or unbalanced
semantic scale, did not work with ease when trialled. Whatever score the author gave
herself was unlikely to be objective, so the opposing ends of the semantic scale,
subjective and objective, were not suitable. Therefore, a two-point system of 'weakly
subjective' and 'strongly subjective' was used. A mid-point was avoided, as the author
was marking herself and needed to be forced away from a neutral response. 'Strongly
subjective' indicated that either the author's skills or knowledge needed upgrading to
reach the level of 'weakly subjective', which was nearer to objective, but not
considered objective.

In relation to the chosen classification of indicators into the three pillars of
sustainability (economic, environmental and social), the following judgement of the
author's knowledge and skills appertaining to these three areas was made. The author's
knowledge and skills across these three pillars were weakly subjective for environmental
indicators, weakly subjective for social indicators and strongly subjective for economic
indicators.

The environmental impact statement (EIS) review package for assessing EISs uses a
criteria-based method involving 92 criteria in eight sections, compared to the author's
13 criteria in three sections (super criteria). Subjectivity in the EIS review package is
reduced, as the EIS is assessed by two independent reviewers on the basis of a double
blind approach. Here each reviewer assesses the EIS against the criteria and the
reviewers then compare results and agree grades (Glasson et al., 2005:395-407). It was
not possible to use this approach in this study, although such an approach would have
been most suitable for this method.

The overall criteria that needed to be examined more closely in the analysis of the data
were addressing uncertainty, locality and funding/cost (for which the author was
strongly subjective in both skills and knowledge) and scores for economic indicators
also needed to be scrutinised.

3.4.2 Subjectivity of the Sources Chosen to Produce the Criteria Method

Once the criteria method had been created, the provenance of the data used was
partially analysed to assess whether the subject of the lead author or government
document showed any preference towards one subject specialism. The appropriateness
criteria were created from 27 sources (see Appendix 1).

Lead authors from an environmental or scientific subject area (48%) dominated the
sources used (see Figure 3.2). Economists formed the lowest percentage used,
comprising just 8%. This should be considered when analysing appropriateness;
however, most of the sources used had more than one lead author and their associated
subject specialisms are not considered further here.


Figure 3.2: Lead Authors and Organisations Used to Establish Criteria to Assess Appropriateness

Justification for choosing criteria and their hierarchical scoring

The thirteen criteria were then examined individually, as below.

(1) Leads to strong sustainability:

The definition of SS used is that of Neumayer (2003). The hierarchical scale created
loosely represents the range between Neumayer's two researched interpretations of SS.
A score of one was given when natural capital was preserved in value terms, ranging to
a score of three when the preservation of natural capital was regarded as
non-substitutable (with no value attached). Examples of the applied science needed to
measure this criterion were also assessed in the scoring. Professional judgement was
used by the author when a different applied measurement was stated in the indicator
but was absent from this criterion's wording.


(2) Academic credibility:

This criterion assesses the credibility or believability of the data used (Bryman,
2004:30) and how an academic audience drawn from all disciplines involved with SIs
would assess this. A hierarchy has been created, starting with the believability of
accuracy and absence of bias in measurement scoring one, with wider parameters of the
individual measurement scoring two, then substantiation with suitable comparison to,
for example, national measurements, scoring three. Substantiation in research terms
refers to measurement validity (Bryman, 2004:72). It should be noted that this
substantiation comparison does not apply to the measurement of rankings, such as the
Index of Multiple Deprivation (IMD) because, as stated in the DCLG (2007) guidance
to using IMD:

The IMD 2007 is a relative measure of deprivation and therefore it
cannot be used to determine how much more deprived one area is
than another (DCLG, 2007).

Therivel and Ross (2007) concur with this statement, therefore it was concluded that
rankings cannot be used in the substantiation of a measurement.

Low scores in academic credibility can be obtained by popular SIs. An example of a
popular indicator used by many LAs is 'population of wild birds' (as, for example, used
by Three Rivers) (Wooderson et al., 2006).
indicator for biodiversity (Moldan et al., 2007:1). Work by Tratalos et al. (2007)
indicates a definite relationship between the abundance of bird species and household
density. The high density of housing suggested in current guidelines will result in a
lower overall avian abundance (Biggs et al., 2007). So temporal comparisons of bird
populations, for example as found in LA annual monitoring reports, can only be made
if the housing density of an LA exhibits no change. More importantly, species richness
is a very insensitive indicator of biodiversity loss, and species richness does not
distinguish between native and introduced species (Biggs et al., 2007:254). Gregory et
al. (2003), from the RSPB and the BTO, suggest using UK birds as indicators of
biodiversity when disaggregated by habitat; however, the ecosystem approach to
biodiversity indicators is more widely espoused (Munda, 2006; Biggs et al., 2007;

RCEP, 2008; Tasser et al., 2008). The challenge is for LAs to find a complementary set
of biodiversity indicators for their area, based on the type that they need for their policy
and not based on the type of abundant information available (Biggs et al., 2007), as in
the case of populations of birds, which the author assesses as having no academic
credibility.

(3) Addressing uncertainty:

Uncertainty in the context of this criterion is taken to mean unknown or open to
question. The intention is that this criterion is assessed in a positive way to show that an
LA is not playing safe and just measuring the indicators they feel comfortable with, but
are taking change into account (e.g. the effects of climate change). A score of three,
given when one is uncertain about the level, takes the measurement of uncertainty
outside the norm of system variability: it can mean that the indicator is not tried and tested in a
situation, or that events outside the current range of system variability can occur to give
the measure of uncertainty (e.g. the cost of oil suddenly rising). Variability occurs in all
three pillars of sustainability: economic (Oakland, 2000; House of Commons, 2008),
environmental (Defra, 2007; Tasser et al., 2008) and social (Lin et al., 2007).
Therefore, this criterion has the possibility of scoring highly with all types of
indicators. Variability and uncertainty need to be transparent in this assessment, hence
the addition of this criterion.

(4) Environmental receptors addressed:

The definition used for this criterion for named environmental receptors is taken from
both Annex 1 of the 2001 EU SEA directive (The European Parliament and the Council
of the European Union, 2001) and ODPM 2005 SA guidelines (ODPM, 2005), as SEA
is incorporated into SA in England. The hierarchy in this criterion is formed on the
basis that the interrelationship between receptors is of greater importance than just
naming one receptor. The synergistic effect of interrelationships at higher ecological
levels, such as ecosystems, is not usually found within LA boundaries (ODPM, 2005;
Reed et al., 2005). This is a challenging area in which to score three, as LAs would
need to consider the availability of extra resources associated with measuring the
transboundary effects, and also persuade policy makers of the importance of
transboundary effects on the LA.

(5) Locality:
Locality is based on the definition of local being appropriate to the LA boundaries,
but not necessarily within the boundaries and can include any transboundary effects in
or out of the political boundaries. To assist with defining locality, the author used
physical maps of England, Ordnance Survey (OS) maps of some areas and the Excel
table of definitions of the six rurality components (Defra, 2004) within the LA (as
illustrated in Figure 2.3). However, there are some potential barriers such as the poor
quality of databases, especially at a local level, which are a potential threat to the
quality of the related indicators (Moldan et al., 2007:10).

The overall marking scheme for appropriateness ranges from 0 to 3, but in reality the
score of three is not available for locality as insufficient hierarchical information was
obtained from the literature research. However, this criterion was kept as it was
considered important by three groups of researchers. A scale of zero to two was used,
yet in practice a score of two was infrequent due to the information not being easily
available in sufficient detail in the scoping report. A maximum score of two in this
criterion gives less internal reliability.

(6) Disaggregation of data:

When undertaking the literature search for this dissertation, it became obvious that
many authors from various disciplines (Coombes et al., 2004; Doran et al., 2006; Biggs
et al., 2007; Hajat et al., 2007; Huby et al., 2007; Lin et al., 2007; Warren, 2007;
Manthorpe et al., 2008) see disaggregation of data as an essential tool of their analysis,
but that the knowledge they gained was often lost when politicians or policy makers
averaged data or grouped data together in an inappropriate manner. One example is the
SI of homelessness, which is treated in LA SI tables as an integrated figure. Reports
differ in regard to the contribution to homelessness made by ex-service personnel.
Strachan (2003) reported that 25% of Britain's homeless are ex-service personnel,
whereas the BBC reported in 2008 that the proportion of homeless people who are
ex-service personnel has gone up from 5% to 6%. Although these are temporally
different reports with different statistics,
it can be seen that a proportion of the homeless in England are ex-service personnel and
the local occurrence of this issue could be assessed for different plans if the SI of
homelessness is disaggregated appropriately.

The hierarchical scoring for this criterion is based firstly on the appropriate amount of
disaggregated data, for scores of zero to two, and then for a score of three the use of
current academic research describing areas where different groups of people can be
disadvantaged. Collecting evidence for this criterion was limited to the information
found in the indicator table used. The reason for this was that all who read the
document should be easily able to access indicators that they feel relevant to them and
see how improvements, via different plans and targets, are proposed.

(7) Measurability:
This refers to how the values in an indicator are measured and the extent to which it
measures reality (Bauler et al., 2007:56). Like models, indicators can reflect reality
only imperfectly; however, even within the measurable the quality of indicators is
determined largely by the way reality is changed into measures and data, be they
qualitative or quantitative. The quality of indicators inevitably depends on the
underlying data that are used to compose them (Moldan et al., 2007:9). The hierarchy
chosen for this criterion at first glance appears to have a subjective view biased towards
science and economics rather than social science. From bottom to top, the hierarchy
runs from qualitative measurement to quantitative measurement that can be adjusted
to reflect individual situations. To balance the ways different disciplines measure in
qualitative and quantitative terms, a number of ways of scoring, by choosing different
statements within each score, have been made available to give all three pillars of
sustainability an equal chance of achieving all scores.

(8) Reliability:
This criterion has the title of reliability (rather than reproducibility or repeatability) to
enable both the general meaning of the word to be conveyed and the statistical
interpretation of this word to be considered in the discussion. The hierarchical scoring


was based on the three prominent factors involved when considering reliability:
stability, internal reliability and inter-observer consistency (Bryman, 2004:71). More
detail on this theory can be found in section 3.4.6 of this dissertation.

(9) Relevant to plan:


SIs generally are intended to target ongoing political processes, yet they often are
developed with surprising political naiveté (Moldan et al., 2007). This criterion looks
at one aspect, the set of targets from each LA, and how many suitable targets have been
used. The suitability of a target is defined as being either a visual target (up/down
arrows, smiley faces) or a written target. Alternatively, if an LA clearly stated that a
target was partially constructed, this was considered as suitable as a visual or written
target. However, if an LA had a column labelled targets, but the information
contained within the column appertaining to targets for individual indicators was either
partially or wholly lacking information about local targets, then this was deemed
unsuitable. When an LA left some or all of their target boxes blank, with no
information for future insertion of a target, these were also deemed unsuitable. Some
LAs used the 'not applicable' stance a number of times without justification, and for
the purposes of this research such data was deemed unsuitable; although the author
considers that this point requires further deliberation in future work. The data collected
was then given a percentage total of possible targets (with the total possible targets
crudely assigned as the total number of indicators). A three tier systematic percentage
marking scheme was created to score this criterion. The LA was given a single score of
0, 1, 2 or 3 and this was assigned to all indicators in that LA. The author recognises that
the method used to score this criterion needs to be improved, to link each indicator with
an individual target, in order to show if the average is masking important information.
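The percentage-to-score banding described above can be sketched in a few lines. The thesis does not state the exact cut-offs of its three-tier percentage marking scheme, so the band boundaries below (34% and 67%) are illustrative assumptions only:

```python
def relevant_to_plan_score(suitable_targets: int, total_indicators: int) -> int:
    """Band an LA's percentage of suitable targets into a single 0-3 score.

    The total number of possible targets is crudely taken as the total
    number of indicators, as in the method described above. The band
    boundaries used here are assumptions, not the actual cut-offs.
    """
    pct = 100 * suitable_targets / total_indicators
    if pct == 0:
        return 0
    if pct < 34:
        return 1
    if pct < 67:
        return 2
    return 3

# The single LA-wide score is then assigned to every indicator in that LA.
print(relevant_to_plan_score(45, 59))  # -> 3 under the assumed bands
```

As noted above, a refinement would link each indicator to its own target rather than averaging across the whole LA set.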

(10) Actionable:
Actionable is an important criterion for LAs as they are accountable to the people who
live there and need to be seen to be doing or actioning targets developed from SA
objectives and indicators which lead to more sustainable local living. However, it
would be easy for LAs to choose SIs with targets that have quick fixes in order to
persuade policy makers and the public that sustainability is achievable (Astleithner et

32

al., 2004). The harder longer-term actions that could lead to stronger sustainability,
often outside the temporal frame of the LDF, can be less visibly actionable so less
attractive to use. Therefore, there is conflict that needs to be addressed between what is
sustainable for the community and the political nature and timeframe of the LDF.
Crucially, effective action is much less common than cheerful visions and passionate
endorsement (Gibson, 2006), so an indicator scoring three in actionable does not
ensure that effective action occurs. In this criterion the hierarchical scoring goes from
WS, the easiest indicators to action (scoring one), through to SS where greater
knowledge and resources are needed to action indicators (scoring three).

(11) Stakeholder involvement:


Two ways of participatory stakeholder involvement in choosing SIs are firstly
involving the public, and secondly involving users to increase the efficiency of the
decision-making process (audience targeted) (Bauler et al., 2007:62). Statutory
stakeholders for SA are asked for information (ODPM, 2005), but Therivel et al.
(2006) state this is not always provided and, in the case of SEA, almost two in three
reports did not receive any response, so it should be kept in mind that an invitation to
consult about indicators is different to who has actually contributed towards choosing
SIs. Differentiating these two points is not easy from evidence currently available in
most scoping reports. Another barrier to obtaining evidence is that when LAs
demonstrate that they consult widely, such as with Wigan using the Wigan World
Summit with 250 consultees (WMBC, 2007:4), they then neglect to provide any easily
accessible evidence of the detail of the consultation. Some information can be found in
additional documents, such as appendices, if they are appropriately labelled and easily
accessible (an example being Elmbridge Borough Council, Appendix 5: Amendments
to Draft Scoping Report following Consultation) (EBC, 2005). The best practice to
enable the author to ascertain the input of consultees was found where LAs had
documents containing the consultation responses with the exact wording given by the
consultee and also the LA's response to the consultee, such as Spelthorne (SBC,
2007:201-245). The hierarchy in this criterion is based upon both number and range of
stakeholders. The score of three using the second method of audience targeted
participation (Bauler et al., 2007) was used as the preferred of the two participation

33

methods, in agreement with the views of Niemeijer and de Groot (2008) and Hall
(2007).

(12) Funding and cost:


LAs have limited budgets to spend on indicator development and monitoring, therefore
their decision to choose the most appropriate indicators will, to some extent, be based
on cost. There are a number of databases that are free for LAs to use, such as the Local
Authority Area Ecological Footprint (Audit Commission, 2005), but other data sets
involve yearly subscription costs. The hardest areas to fund are the development of
local indicators that require a baseline to be established, or indicators that have high
costs involved in collecting the data (Biggs et al., 2007). If the LA has chosen a large
number of SIs, this will mean that decision-making (concerning what to fund or which
data to use for an indicator) could be more complex with finite funding available. This
is the criterion that has the most potential incompatibilities with the other twelve
criteria (see Table 3.3).

(13) Easily communicated:


Communicability is the extent to which indicators are understood and the effectiveness
with which they convey their purpose and meaning to the target audience (Bauler et al.,
2007:57). An ideal indicator would be one that communicates for a specific purpose to
a range of audiences. However, Bauler et al. (2007:63) consider this may not be
achievable, given the diversity of stakeholders. Nevertheless, the UK Quality of Life
Barometer has been described as 'the single most important development in
communicating sustainable development' by Anne Power, the UK Sustainable
Development Commissioner (Hall, 2007:301), and editions of Sustainable
Development Indicators in your Pocket have proved to be very popular and been
applauded by a variety of stakeholders (ibid.:302). Their popularity is due to a high
percentage of the information being communicated simply and visually (traffic light
evaluation method, charts and graphs), which links to the fact that 60% of all people
prefer a way of learning that is visual (Gardner et al., 2003). The hierarchy used
considers local community concerns to be important (Holland, 1997) for a score of

34

one. The top score is given when the media starts to use SIs to communicate and, at a
higher level, analyse the changes over a period of time.

3.4.3 Compatibility Matrix of Criteria

A compatibility matrix, drawing on the table of appropriateness in Appendix 1, was
used to assess the level of compatibility between the thirteen criteria. This serves to highlight potential conflicts
between pairs of criteria; Table 3.3 indicates where conflicts may arise. The results
show 28 potentially incompatible criterion pairings and, out of these, 26 are located
within the LA super criterion, one in measurement and one in credibility. The
criterion of funding/cost is potentially incompatible with all twelve other criteria. The compatible
pairings are more evenly spread amongst the three super criteria, and the most
uncertain pairings occur within the credibility super criterion. It should be
noted that the criterion of funding/cost is also an area in which the author exhibits
strong subjectivity.
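One way to represent such a matrix and count incompatibilities per criterion is sketched below. The pairs listed are a hypothetical fragment for illustration only, not the full content of Table 3.3:

```python
# Hypothetical fragment of the compatibility matrix: unordered pairs of
# criteria flagged as potentially incompatible (illustrative only).
incompatible_pairs = {
    frozenset(p) for p in [
        ("Funding/cost", "Easily Communicated"),
        ("Funding/cost", "Measurability"),
        ("Funding/cost", "Actionable"),
        ("Academic credibility", "Addressing uncertainty"),
    ]
}

def incompatibility_count(criterion: str) -> int:
    """Number of potentially incompatible pairings involving one criterion."""
    return sum(criterion in pair for pair in incompatible_pairs)

print(incompatibility_count("Funding/cost"))  # -> 3 in this fragment
```

Storing pairs as frozensets makes the relation symmetric by construction, matching the unordered nature of a compatibility matrix.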

35

Table 3.3: Compatibility Matrix of the Thirteen Criteria and Three Super Criteria

[Matrix cross-tabulating the thirteen criteria (Leads to Strong Sustainability;
Academic credibility; Addressing uncertainty; Environmental receptors addressed;
Locality; Disaggregation of Data; Measurability; Reliability; Relevant to Plan;
Actionable; Stakeholder involvement; Funding/cost; Easily Communicated), grouped
under the three super criteria of Credibility, Measurement and Local Authority.
Key to compatibility: Potentially incompatible; Uncertain; Compatible; No Links.]

36

3.4.4 Trial Run of the Method

To establish if this method exhibited a wide range of values of SI appropriateness and
was also reliable, a trial run was completed using four LAs, as shown in Table 3.4.

Local Authority     Region        Defra classification   Number of indicators
Breckland           East England  Rural 80               59
Dacorum             East England  Major Urban            143
West Oxfordshire    South East    Rural 80               59
Mole Valley         South East    Major Urban            120

Table 3.4: LAs Used in Trial Run of the Criteria-based Appropriateness Method

From trialling the four LAs, containing a total of 381 SIs, it was concluded that:

- A wide range of values was observed for computations, such as individual indicator
  totals ranging from 0 to 30 out of a possible 38; this could allow appropriateness
  to be established.

- Three criteria were identified that needed their scoring method adapted: Locality,
  Stakeholder Involvement and Relevant to Plan (see Appendix 1 for the final scoring
  system).

- A method to ensure reliability of scoring needed to be created, as there was
  variation between similar indicators in different LAs. For example, 'Percentage of
  affordable housing provided' scored between 12 and 17.

See Table 3.5 for a trial run for appropriateness.

37

[Extract showing, for three Dacorum indicators, scores for each of the thirteen
criteria, the super criteria totals (Credibility /12, Measurement /11, Local
Authority /15) and the Indicator Total /38: 'Populations of wild birds' totalled 11,
'Area of semi-natural habitat lost to development' totalled 13 and 'Area of new
semi-natural habitat created' totalled 13.]

Table 3.5: Trial Run Scoring of Appropriateness: Some Results from Dacorum

3.4.5 Reference Set of Indicators

A reference indicator set was created, which was built up throughout the scoring of the
38 sets of indicators, containing 2,970 individual indicators. For ease of use, the
indicators were grouped into 18 separate pages of an Excel spreadsheet as the set was built up. Table 3.6
illustrates the grouping. Indicator scores from the reference indicator set were not
directly transferred to the LA score sheet, even if the indicator had exactly the same
wording. Firstly, four criteria were scored individually: locality, disaggregation of data,
relevant to plan and stakeholder involvement; all of which gave unique scores for each
LA. As such, the same indicator may have different scores in different LAs (range of
difference 0-11). Similarly worded indicators were subjectively judged as to whether
they had the same score as the reference indicator or were added as a unique indicator
to the reference indicator set. The total number of SIs in the reference set at the end of
the scoring period was 325.

38

[Table listing the 18 reference indicator classification categories (Access and
Community; Air and Climate; Benefits and Work; Biodiversity; Business; Crime and
Fire; Education; Energy; Health; Heritage; Housing; Materials; Pollution;
Recreation; Soil and Land; Transport; Waste; Water), with the number of reference
indicators in each category.]

Table 3.6: The Reference Indicator Set - Total Number of Indicators in Each Classification

3.4.6 Reliability of Appropriateness Method Using the Reference Indicator Set

The appropriateness method needed to be checked for reliability over the time the data
analysis was carried out. Dacorum was chosen for this reliability test as it was the first
LA to be scored after the trial run was carried out and the reference indicator set was
created, and it was also temporally appropriate. It contained 143 SIs, which was at the
top end of the range of numbers of SIs per LA (the range was 24 to 151). See Table 3.7 for
details.

Three prominent factors are involved when considering reliability: stability, internal
reliability and inter-observer consistency. Stability refers to administering a measure to
a group and then readministering it. If there is stability, then there will be little variation
over time in the results obtained. If it is a long span of time (a year or more), external
variables can change (Bryman, 2004:71).

39

[Table comparing, criterion by criterion and super criterion by super criterion, the
Dacorum scores recorded on 21.06.08 and 27.07.08, together with the percentage
change from June to July. The two overall indicator totals were 2,399 and 2,458, a
difference of 2%.]

Table 3.7: Differences in Total and Percentage Scoring for Dacorum LA

In this test for stability, the time of 37 days was deemed to be a reasonable time span
and appropriate for the measurement being used. However, a consideration of the
actual change or perception of the change (for example, in the UK economy over this
period of time) could be a variable that influences the author's viewpoint when scoring
economic indicators. The results show that the change over 37 days between the two
Dacorum total scores was 2%. Two individual criterion outliers were academic
credibility and disaggregation of data; however, the super criteria within which these
sit, credibility (3%) and measurement (4%), remained at acceptable levels of
repeatability.
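The stability figure quoted above is a simple percentage change between two scoring sessions of the same LA; as a sketch:

```python
def stability_change(first_total: int, second_total: int) -> float:
    """Percentage change between two scoring sessions of the same LA,
    used as a simple test-retest measure of stability."""
    return 100 * (second_total - first_total) / first_total

# Dacorum's two indicator totals, scored 37 days apart (Table 3.7)
print(round(stability_change(2399, 2458)))  # -> 2, i.e. a 2% change
```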

Internal reliability looks at whether the indicators that make up a scale are consistent
(Bryman, 2004:71). In relation to this method, it should be considered whether each
criterion's scoring of 1-3 is consistent between criteria (i.e. a score of two in the
criterion addressing uncertainty should equal a score of two in the criterion locality).
To address internal reliability, a statistical method could be applied to the data, either a

40


split-half method or Cronbach's alpha. This was not undertaken, and the author notes
this omission for future research.
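For such future work, internal reliability could be estimated from the indicator-by-criterion score matrix. A minimal sketch of Cronbach's alpha, using fabricated example scores (not data from this study), might look like:

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of indicator rows.

    scores: list of rows; each row holds one indicator's score on each
    of the k criteria (items).
    """
    k = len(scores[0])
    # sample variance of each criterion (column) across indicators
    item_vars = [variance(col) for col in zip(*scores)]
    # sample variance of the indicator totals
    total_var = variance(sum(row) for row in scores)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Fabricated scores: five indicators on four criteria, each scored 0-3
demo = [
    [3, 2, 3, 2],
    [1, 1, 0, 1],
    [2, 2, 2, 3],
    [0, 1, 1, 0],
    [3, 3, 2, 3],
]
print(round(cronbach_alpha(demo), 2))  # -> 0.92
```

Values of alpha near 1 would indicate that the thirteen criteria scale consistently; low values would flag the kind of inconsistency discussed above.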

The third factor involved with reliability, inter-observer consistency, concerns
inconsistency between judgements made by more than one observer. However, this
was not relevant within this research, as only one observer was used.

In conclusion, this method has stability, when using a reference set of indicators and
when measured by one researcher over a medium period of time (months) using one
LA. Events that influence the degree of consistency were considered, but not calculated
within the measurement of stability. The method has the possibility to be tested further
to assess whether other researchers could use it and obtain stability and inter-observer
consistency. Internal reliability should be calculated.

3.4.7 The Use of Weightings

Glasson et al. (2005:145) and Moles et al. (2007) indicate that weighting seeks to
identify the relative importance of criteria. The research by Malkina-Pykh and
Malkina-Pykh (2007) on quality of life indicators considered the main approach used
to derive weightings, expert opinion, as a means to determine the list of criteria and
their significance, which is in agreement with the methods used by Moles et al. (2007)
and Astleithner et al. (2004). Authors agree that panels of experts should decide
weightings (Astleithner et al., 2004; Glasson et al., 2005; Moles et al., 2007), but
weighting systems generate considerable debate (Glasson et al., 2005). In this study,
the author decided to give all criteria equal weighting, so that no judgements were
made as to which criterion was more or less important, and so as to make aggregation
as transparent as possible (Moles et al., 2007). As the author has strong subjectivity on
three of the thirteen criteria, and was not part of an expert team, adding weighting as an
additional variable that could alter the internal reliability of the method was considered
unsuitable.
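Equal weighting makes aggregation a plain sum of criterion scores; the sketch below shows how unequal expert-derived weights could later be slotted in (the scores shown are fabricated for illustration):

```python
def weighted_total(criterion_scores, weights=None):
    """Aggregate one indicator's criterion scores into a single total.

    weights=None gives every criterion equal weight, as in this study;
    a panel of experts could instead supply one weight per criterion.
    """
    if weights is None:
        weights = [1] * len(criterion_scores)
    return sum(w * s for w, s in zip(weights, criterion_scores))

# Fabricated scores for the thirteen criteria of one indicator
scores = [3, 2, 1, 2, 0, 3, 2, 1, 2, 3, 1, 2, 3]
print(weighted_total(scores))  # -> 25, a plain sum under equal weighting
```

Keeping the weights explicit makes the aggregation transparent, which is the property the equal-weighting decision above is intended to preserve.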

41

3.5 Highest Scoring Individual Indicators - Top Ten Ranking

The frequency of different groups of SIs within each LA was established using the
reference set of indicators. SIs in each LA were ranked by individual indicator score,
from highest to lowest. The top ten, including equal tenth, were then grouped using the
reference set of indicators. The frequency of each group was then measured within the
top ten and compared to similarly categorised rurality LAs. The stability of the
groupings was good, as the reference indicator set proved to be repeatable over time
when scoring for appropriateness.
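The ranking-and-grouping step, including ties at tenth place, can be sketched as below; the indicator names and groups are fabricated for illustration:

```python
from collections import Counter

def top_ten_groups(indicator_scores, groups):
    """Count reference-set groups among an LA's top-ten indicators.

    indicator_scores maps indicator wording to its total score out of 38;
    groups maps the same wording to its reference-set category.
    Indicators tying with the tenth-ranked score are included, as in the
    method described above.
    """
    ranked = sorted(indicator_scores.items(), key=lambda kv: kv[1], reverse=True)
    cutoff = ranked[min(9, len(ranked) - 1)][1]  # tenth (or last) score
    return Counter(groups[name] for name, score in ranked if score >= cutoff)

# Fabricated example with fewer than ten indicators
scores = {"wild birds": 30, "river quality": 28,
          "affordable homes": 28, "bus use": 25}
groups = {"wild birds": "Biodiversity", "river quality": "Water",
          "affordable homes": "Housing", "bus use": "Transport"}
print(top_ten_groups(scores, groups))
```

The resulting group frequencies can then be compared between LAs of the same rurality classification.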

3.6 Numbers of Sustainability Indicators Used by Each Local Authority

SIs were copied word for word from the 38 LAs. A subjective decision was made when
an LA separated one indicator (from a published set) into two or more parts as to
whether to count it as one or more indicators. This happened on only four occasions, as
LAs usually copied indicators from known indicator sets as a whole indicator. When an
indicator was used twice or more in an LA indicator set, it was used only once for all
methods used in this research.
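The rule that a repeated indicator is counted only once per LA set can be sketched as follows; normalising case and spacing before comparison is an assumption of the sketch, not a stated part of the method:

```python
def dedupe_indicators(indicators):
    """Keep each indicator wording only once, preserving first occurrence,
    mirroring the rule that a repeated indicator counts once per LA set."""
    seen = set()
    unique = []
    for wording in indicators:
        key = " ".join(wording.lower().split())  # normalise case and spacing
        if key not in seen:
            seen.add(key)
            unique.append(wording)
    return unique

la_set = ["Populations of wild birds", "River quality",
          "populations of wild birds", "River quality"]
print(len(dedupe_indicators(la_set)))  # -> 2
```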

3.7 Classifying Sustainability Indicators

The method of Bond et al. (1998) was used as the basis to classify the 2,970 SIs.
Decisions were made when classifying a number of indicators when they were not
obviously within one of the economic, environmental or social categories. Appendix 4
illustrates the justifications for each indicator that did not fit straightforwardly into one
category.

3.8 Survey

A short survey was undertaken to augment the data retrieved from the literature search,
web-based documents from LAs and results from the appropriateness method. To
enhance the Environmental Management System (EMS) of this dissertation, the survey
42

sent to 38 LAs and the six consultants working for LAs was carried out electronically
(Yun et al., 2006). The electronic method chosen was SurveyMonkey software, which
proved to be easy to use, good quality, flexible and professional looking. In order to
achieve the clearest questions and highest response rate, a selection of ideas were used
from Cohen et al. (2000), Bryman (2004) and SurveyMonkey (2008a; 2008b). To
ensure the person involved in choosing SIs was the person answering the survey, the
LA officers and consultants were individually contacted by telephone prior to sending
out the survey.

3.9 Summary

This chapter has described the methods developed to assess the appropriateness of SIs
and justified each area of the method produced. The methods, with justifications, that
augment the appropriateness method have also been described.

The next chapter discusses the results obtained when the developed methods were
applied to SIs from 38 LAs and whether any differences or similarities were seen that
were dependent on the rurality designation of the LA.

43

CHAPTER FOUR: RESULTS AND ANALYSIS


The results chapter is divided into two interconnected parts: the first deals with aspects
associated with the criteria-based appropriateness method and the second with the other
methods used to augment the results obtained from the appropriateness method.
Conclusions from these two sections are then integrated in the summary (section 4.1.5).
A table of results containing appropriateness scores for criteria and super criteria,
numbers of indicators and classification of SIs can be found in Appendix 3.

4.1 Criteria-based Appropriateness of Sustainability Indicators

4.1.1 Individual Indicators

To establish whether an SI was appropriate, a mark out of 38 was given. The author
assumed that the higher the score, the more appropriate the indicator, similar to that
seen in the EIS review package (Glasson et al., 2005:395-396). However, Donnelly et
al. (2007) would disagree, considering that an environmental SI can be appropriate if it
meets just one of their criteria. The reference indicator set used gives this
appropriateness method stability and the scores for individual indicators were
explained by the differences in scoring of four of the thirteen criteria: stakeholder
involvement, relevant to plan, disaggregation and locality, which give a range of
0-11 marks that are applicable only to individual LAs. An example is Woking, which
has the highest score at both ends of the range of scores within one LA (from 14
minimum to 34 maximum). These high scores are due to a high starting score of seven
marks from three of the four individual criteria scores unique to Woking. Therefore, the
difference in scores for individual indicators with the same wording has no relationship
to the region or rurality of the LA. However, similar indicators can be compared
between different LAs and relative appropriateness can be assessed. A cut-off point of
appropriate versus inappropriate could have been set, but as there was no internal
reliability within the thirteen criteria, it would have been unsuitable to do this until this
method had been amended to take account of this. However, it could be suitable to look
at the type of individual indicators that score the highest marks and were therefore
measured by this method as being the most appropriate within each LA.
44

4.1.2 Top Ten Ranking

Environmental sustainability indicators were found most frequently in the top ten of the
38 LAs, while economic and social indicators were both found disproportionately less often.
Water was in 37/38 of LAs, air and climate was in 33/38, biodiversity was 33/38
and energy was 24/38. Business was found the least (at 1/38) and benefits and
work, education, heritage, materials, and recreation all scored 2/38 in the top
ten. The frequency with which the top four types of indicator occurred within the top
ten of LA indicator sets was not reliant upon the rurality designation of the LA.

No consistent pattern between the percentage of each of the three pillars and the top ten
scores within individual LAs was established. Take, for example, Uttlesford, where SIs
are 14% Economic (Ec), 28% Environmental (En) and 59% Social (So) (compared to
the average of 21% Ec, 47% En and 32% So). However, for Uttlesford, its top ten were
40% transport, 20% waste, 10% biodiversity and 10% air and climate. Therefore
80% were En SIs, compared to a score of 28% in total using the three pillars
classification. Top ten En SIs were higher scoring in all four criteria in the super
criterion of credibility, the difference being as many as 12 marks. This could be due
to a number of factors: the large proportion of lead authors who are in scientific or
environment fields (see section 3.4.2), especially in the scoring hierarchies of the
credibility super criterion; the weak subjectivity of the author in three out of four of the
credibility criteria, and a longer tradition of seeing environmental indicators as SIs and
more credible data being available. No link was found between the number of
indicators and top ten scores.

When there was an anomalous result for top ten groups, this result was investigated
further. An example is given below. The case was noted of health indicators being in
the top ten of 7/9 NW MU LAs, with St Helens having 25% of their top ten as health
indicators. Statistics for 2001-2005 show that 1,300 people more than the national
average died from cancer in the NW, with 60% of these excess deaths being from lung
cancer caused by smoking (Lemon et al., 2007). All nine LAs in this sample from the
NW had indicators to cover this. However, Trafford had four indicators to reflect this,
45

including 'The number of smokers who had set a quit date and had successfully quit at
four week follow up (based on self-report) with NHS stop smoking services', which
scored 20, and also an indicator disaggregated into specific wards, 'The difference in
all age, all cause mortality (per 100,000 population); between the top (Clifford,
Bucklow-St. Martins, Urmston and Gorsehill) and bottom (Hale Barns, Hale Central,
Brooklands and Timperley) quintile wards in Trafford', which scored 26. The second
indicator clearly had a sound database to work from and a suitable level of
disaggregation, whereas quitting smoking for four weeks is self reported and does not
have any long-term follow up, which is crucial as after initially successful quit
attempts, many people return to smoking within a year, reducing the public health
benefits of investment in smoking cessation (Lancaster et al., 2006). St Helens also has
a specific health issue in that the mortality rate for males from heart disease (2005) is
significantly higher than in the rest of England and Wales (Halton and St Helens PCT,
2007).

Soil and land in Suffolk Coastal (27%) and Mid Suffolk (35%) LAs were anomalous
results, where there appeared to be a conflict between the high numbers of new housing
needed and local geological SSSIs being preserved (Mid Suffolk District Council,
2008; Natural England, 2008).

Environmental sustainability indicators were found most frequently in the top ten of the
38 LAs, but this was not reliant on the rurality designation of the LA. No consistent
pattern between the percentage of each of the three pillars and the top ten scores within
individual LAs was established, nor was a link found between the number of indicators
and top ten scores. When there was an anomalous result, it could be attributed to an
MU or R80 designation within a region, but this was not consistent between regions.

4.1.3 Criteria

The top three appropriateness marks for all LAs (Combined MU and R80) were for
easily communicated (83%), measurability (67%) and actionable (61%). Two of
the three criteria were found in the super criterion Local Authority; measurability was
found in the super criterion measurement. This result is backed up by Bauler et al.
46

(2007), whose five criteria for methodological strength of indicators include
measurability, communicability and feasibility. Methodological strength was high
in LAs from using this method and appropriateness was not dominated by any of the
four criteria that were unique to each LA (see section 4.1.1) and were scored differently
in each LA, such as disaggregation (7%) and relevant to plan (16%).

All of the top three criteria were potentially incompatible with funding/cost (see
Table 3.3), but still scored highly. This suggests there may be weightings given by LAs
when choosing SIs (not necessarily knowingly) to the criteria the author had chosen. If
weighting was to be considered more formally, it would require the panel of experts
approach as suggested by Astleithner et al. (2004), Glasson et al. (2005) and Moles et
al. (2007) and for LAs to become transparent as to their method. There is also the
possibility that the higher scores of these criteria could be linked to their ease of
operationalisation within SA, so indicators that exhibit these three functions appeal to
LA officers and consultants who have time constraints (ODPM, 2005) and will be able
to please policy makers more easily with indicators that score highly in these three areas.

The lowest three criteria marks occurred for disaggregation (7%), relevant to plan
(16%) and academic credibility (35%). With regard to the very low score for
disaggregation, this may have been because LAs weight communication (83%) so
highly that they fear disaggregation will make an indicator complicated to understand
and so mask its ease of communication. However, in doing this they lose the chance of
communicating the detail of how they will improve the area of SD that the indicator
covers and, by this omission, may find that averages shade local issues (Spilanis et al.,
2008). Bauler et al. (2007) also allude to disaggregation and suggest that with regard to
their criterion, the purpose of an SI, including the appropriateness of scale, is
important.

The criterion relevant to plan (16%) often had a score of zero, as no targets were
present in the table containing indicators; a number of these reports were dated from
2005 and 2006 and still did not have available easily accessible current (2008)
information on targets, other than those in the Annual Monitoring Reports. However,

47

policy has a lifespan and different indicators may be appropriate for different time
spans within the policy (Rickard et al., 2007), so having some targets at the start of a
plan is appropriate. Therefore, all LAs should have a score for this criterion - however,
this was not so, and may perhaps show either some political naiveté (Moldan et al.,
2007) on the part of those who carried out the SA or a flaw in the method for scoring this
criterion (see section 3.3.2).

Credibility or believability of the data used was low at 35% and three main reasons
for this have been considered. Firstly, there was the potential for incompatibility
between this and the addressing uncertainty criterion and these two can be mutually
exclusive with regard to scoring if an indicator was under development at that time or
new to the LA. However, in practice it was possible to achieve higher scores for both, as
there was some crossover in the two hierarchies which was open to interpretation.
Secondly, the abundance of data can make it tempting to use; however, as in the case
of the indicators associated with bird populations, simply having a lot
of data available does not make an indicator academically credible. A third area that
lowered this score was that of surveys, such as the British Crime Survey which has
consistently found perceived risks to exceed actual risks of victimisation (Tilley, 2005),
but which has been consistently used by LAs. The possible incompatibility with
addressing uncertainty will continue to lower this score, but the choice of credible
data is something LAs can rectify in order to increase their score in this criterion.

The remaining seven criteria all scored between 43% and 55%; addressing
uncertainty, locality and funding/cost, the author's three most subjective areas, fall
within this group. However, they scored neither highly nor lowly, so any subjectivity
most probably took a neutral stance. There was no discernible difference in
criteria scores between MU and R80 LAs.

The average appropriateness score in LAs for stakeholder involvement was 45%, which
ties in with the survey where 53% of respondents consulted stated that they had
contacted one or more non-statutory stakeholder. 21/30 groups of non-statutory
stakeholders were consulted between 5% and 32% of the time (see Figures 4.1 and 4.2 to

48

compare between the results arising from this work and the earlier work reported by
Bond et al. (1998)). No direct comparison was found as to involvement in choosing
SIs, but compared with Bond et al.'s (1998) survey of stakeholder involvement in
Local Agenda 21, there appears to have been a considerably smaller percentage of stakeholders
consulted by the sample in this study. Overall, there was some non-statutory
involvement of stakeholders, but the extent of the involvement could only be seen
when LAs chose to give their responses to stakeholder involvement in their scoping
reports and associated documents (for example, see SBC, 2007).

This lack of non-statutory stakeholder involvement ties in with the ODPM (2005)
guidance on SA, which does not specifically ask for stakeholder involvement in
choosing indicators, but is in disagreement with a number of academic sources
(Holland, 1997; Cartwright, 2000; Bell et al., 2004; Fraser et al., 2005) who consider
stakeholder involvement a priority. It is notable that SMEs were not consulted at all,
although the partnership between business and sustainability is considered important by
a number of authors (Beveridge et al., 2005; Defra, 2005a; Willis et al., 2007). There
was no discernible difference between MU and R80 LAs with regard to stakeholder
involvement. Congleton and West Oxfordshire consulted the most groups, at 15 each.
The result for Congleton is discussed further in section 4.2.3. Figures 4.1 and 4.2
provide a comparison of stakeholder involvement in choosing SIs for the years 2008
and 1998 respectively.

The top three scoring criteria indicate that the SIs used by LAs have methodological strength, despite incompatibilities with funding/cost, which may reveal hidden weightings applied by LAs. The low score for disaggregation may obscure local issues. Two of the three criteria for which the author's scoring is most subjective appear to have had a neutral effect, although in practice there was some crossover in scoring between academic credibility and addressing uncertainty (strong subjectivity). There appears to have been less stakeholder involvement in 2008 than in 1998, but the extent is not transparent. The mediocre overall score of most LAs could be improved in a number of ways: choosing indicators that use credible data, disaggregating indicator data, involving more non-statutory stakeholders and deciding some targets early in the plan process. There was no discernible difference in individual criteria scores between MU and R80 LAs.

Source: Author's current survey data.

Figure 4.1: Stakeholders Involved in Choosing Sustainability Indicators in 2008


Source: Bond et al. (1998).

Figure 4.2: Groups and Organisations Involved in Local Agenda 21 (1998)

4.1.4 Super Criteria

Of the three super criteria, local authority had the top score at 51%, with credibility and measurement equal at 44%. There was no discernible difference between MU and R80 LAs. The super criterion with the largest range was local authority, with SE MU Woking highest at 80% and NW MU Bury lowest at 38%. Overall, the super criteria measure of appropriateness, using averages, hid the detail found in the individual criterion scores.
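The averaging step that produces a super criterion score can be sketched as follows. This is an illustrative reconstruction only: the criterion names are taken from this chapter, but their grouping under super criteria is assumed, and the individual scores are invented (chosen so that the two means echo the 51% and 44% reported above).

```python
# Illustrative sketch: averaging individual criterion percentages into a super
# criterion score hides the spread of the underlying criteria. The groupings
# and scores below are invented for demonstration, not the study's data.
criterion_scores = {
    "local authority": {"locality": 62, "funding/cost": 48, "stakeholder involvement": 43},
    "credibility": {"academic credibility": 55, "addressing uncertainty": 33},
}

for super_criterion, scores in criterion_scores.items():
    values = list(scores.values())
    mean = sum(values) / len(values)    # the reported super criterion score
    spread = max(values) - min(values)  # the detail the average conceals
    print(f"{super_criterion}: mean {mean:.0f}%, range {spread} percentage points")
```

Reporting only the means (51% and 44% in this toy example) conceals ranges of 19 and 22 percentage points, which is precisely the loss of detail noted above.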

4.1.5 Conclusion for the Appropriateness Method

Using the reference set of indicators gave this appropriateness method stability, and the top three scoring criteria indicate that LA choice of indicator exhibits methodological strength. The super criteria measure of appropriateness obscures issues which can be made transparent by viewing the results for individual criteria. This transparency identifies areas where scores can be increased and inconsistencies between the scoring of two criteria, and it also shows that environmental indicators occur frequently when using the top ten method. Overall, there was no discernible difference in appropriateness between MU and R80 designations, although anomalous results occurred where there were issues in an individual LA, or in an MU or R80 designation within a region.

4.2 Methods Used to Augment the Appropriateness Method

Four subsidiary methods were used to collect data to augment the appropriateness
method: defining SIs, number of indicators, classification of indicators and information
from the survey.

4.2.1 Defining Sustainability Indicators

Definitions of SIs are considered here in order to examine connections between the definition of SIs and their operationalisation within SA (Ozkaynak et al., 2004). The results reveal that 45% of the LA documents viewed contained definitions (reducing to 26% if those which contained no definitions are included), and that 42% of the respondents to the survey chose 'progress towards sustainability' as the main basis for defining an SI. This compares with Cartwright (2000), where 64% of LA officers chose 'progress towards sustainable development' as the main function. The results can be seen in Figure 4.3.


Figure 4.3: Sustainability Indicator Definitions Used by Local Authority Officers and Found in Scoping Reports


The choice of definition may also be affected by the LA officers' and consultants' highest level of education, subject specialism and experience in SA, and by their individual pro-environmental behaviour (Kollmuss et al., 2002), all of which are considered below. The respondents were highly qualified: 37% had Bachelor's degrees and 63% had Master's degrees as their highest qualification. The subject specialism of 79% of respondents was town planning, with 63% belonging to the RTPI. Bond et al. (1998) surveyed LA officers involved in LA21, finding that 53% worked in environmental departments, environmental health departments or environmental units, or had an emphasis (at least in the job title) concerning the environment, which could link with the large percentage of LA officers in the survey by Cartwright (2000) who chose the SI definition 'leading towards sustainability'.

As the most important factor is not the amount of environmental knowledge but pro-environmental behaviour (Kollmuss et al., 2002), respondents' pro-environmental behaviour was considered by ascertaining their membership of environmental, professional and voluntary organisations, as well as their chosen charities. However, there was insufficient data to propose any link (or lack of any link) between pro-environmental behaviour and choice of definition.

There was no discernible difference between definitions chosen by LA officers in MU and R80 LAs.

4.2.2 Number of Indicators

Overall, the average number of indicators across all the LAs was 78, with sets prepared by consultancies (10 of the 38) averaging 91 indicators and those prepared by LAs themselves (28 of the 38) averaging 74. The regional averages were 84 for both the EE and the SE, and 69 for the NW. There was a considerable difference between the averages for the EE MU (134) and EE R80 (67), although it should be noted that the EE MU appraisals had all been undertaken by the same consultants. The NW MU and R80 averages were identical, at 69, and the SE MU (85) and R80 (83) averages were similar.


4.2.3 Classification of Indicators

Overall, within the 38 LAs, the average classification was 21% of indicators economic, 32% social and 47% environmental. Where there was an anomalous result for the classification of an LA's indicators, it was examined in more detail. Two examples are discussed: firstly the case of Copeland, and secondly those of Bury and Wealden.
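The tally behind these percentages can be sketched as below. This is a toy illustration only: the keyword mapping is a hypothetical stand-in, not the Bond et al. (1998) framework actually used, and the sample indicators are invented.

```python
# Hypothetical sketch of the three-pillar tally: each indicator in a set is
# assigned one pillar (here via a toy keyword map, not the Bond et al. (1998)
# framework) and the percentage share of each pillar is then reported.
PILLAR_KEYWORDS = {
    "economic": ("employment", "business", "income"),
    "social": ("health", "crime", "housing"),
    "environmental": ("biodiversity", "water", "emissions"),
}

def classify(indicator: str) -> str:
    """Return the first pillar whose keywords match the indicator text."""
    text = indicator.lower()
    for pillar, keywords in PILLAR_KEYWORDS.items():
        if any(word in text for word in keywords):
            return pillar
    return "unclassified"

def pillar_shares(indicators: list[str]) -> dict[str, float]:
    """Percentage of the indicator set falling into each pillar."""
    counts: dict[str, int] = {}
    for indicator in indicators:
        pillar = classify(indicator)
        counts[pillar] = counts.get(pillar, 0) + 1
    return {p: 100 * n / len(indicators) for p, n in counts.items()}

sample = ["Employment rate", "Recorded crime", "River water quality", "CO2 emissions"]
print(pillar_shares(sample))
# → {'economic': 25.0, 'social': 25.0, 'environmental': 50.0}
```

In the study itself the assignment was done by hand against the adapted Bond et al. (1998) framework; the point of the sketch is simply that each indicator receives exactly one pillar before the shares are computed.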

Copeland is a NW R80 LA with an obviously different classification profile from the average of the 38 LAs (see Figure 4.4). The higher proportion of environmental indicators may arise from Copeland's large area of outstanding landscape, or because it houses one of the largest nuclear engineering sites in the world, the Sellafield nuclear facility (Copeland Borough Council, 2008). Cumbria County Council has also pledged to discuss with district authorities the implications of having an underground nuclear waste repository in the county; both Allerdale and Copeland councils have made a formal expression of interest in hosting such a repository (The Whitehaven News, 2008).

Figure 4.4: Comparison of the Classification of Copeland Local Authority with the Average of the 38 Local Authorities

The second example of anomalous results was that of NW MU Bury and SE R80 Wealden, both of which had reasonably even scores across the three groups, which was rare within this sample. However, there is currently no advice in the ODPM SA guidance (ODPM, 2005) on the proportions of SIs.

Figure 4.5: Comparison of Bury, Wealden and the Average Percentage of Indicators in the 38 Local Authorities

4.2.4 Survey Information

One area of the survey was not incorporated with the rest of the results: the LAs' and consultants' knowledge of policies and standards within their organisations. 68% of respondents did not know whether they had such policies and standards. Only the consultancies had environmental standards; 21% of respondents had an IT security policy and 21% had an environmental policy.

4.2.5 Conclusion from the Methods Used to Augment the Appropriateness Method

Within the LAs there was little evidence of leadership exhibiting pro-environmental behaviour through the introduction of environmental policies or EMSs. This very weak sustainability was backed up by the evidence found when defining indicators, as this result was slightly in favour of English LAs' operationalisation of a definition of SI leading to WS rather than SS. Consultancies used, on average, 23% more indicators than LAs. Hall's (2007) suggestion of 50 indicators is not followed in practice by the majority of LAs. Using the three pillars classification framework adapted from Bond et al. (1998), most LAs had around half their SIs classified as environmental. However, there were exceptions, and further research is needed to ascertain why some LAs had a different ratio across the three pillars of sustainability.

4.3 Summary

The principal findings from this research are that none of the methods used demonstrated any discernible difference between MU and R80 LAs, but that differences between individual LAs, and within a rural designation in a region, can be determined by using the appropriateness method (which has good stability) and the classification method. On the positive side, indicators chosen by LAs exhibit methodological strength (Bauler et al., 2007), and the chosen criteria gave transparency to the results, which enabled improvements to be recommended. Improvements are needed in pro-environmental leadership within LAs, in the involvement of key stakeholders in choosing SIs and in LA scores for six of the thirteen criteria, and an inconsistency in the appropriateness method needs to be removed. Operationalisation of SA will lead to even stronger sustainability if LAs have clear definitions for all the terms and processes used by all actors involved (Ozkaynak et al., 2004).


CHAPTER FIVE: CONCLUSION, EVALUATION AND RECOMMENDATIONS

5.1 Conclusion and Evaluation

The overall drive and aim behind this project has been to develop a method that LAs can use to assess the appropriateness of the SIs used in their SAs of LDFs. The method needs to be practical, in that it should not be time-intensive, should be reliable and should be capable of being carried out by available LA personnel.

The method created for assessing appropriateness was developed successfully from an extensive literature search. It provides a range of scores from 0 to 34 out of a total of 38 for each of the 13 criteria and, most importantly, it exhibits stability over a medium period of time. Two improvements are still needed: firstly, to ascertain internal reliability (Bryman, 2004:71), after which it would be suitable to trial whether a cut-off point for pass or fail is possible, as in the EIS Review Package (Oxford Brookes University Impacts Assessment Unit, 2005); secondly, an inconsistency in scoring between academic credibility and addressing uncertainty needs to be removed (see section 4.1.3).

This appropriateness method has been applied to 38 LAs from three regions and has not distinguished major differences in appropriateness between MU and R80 LAs. However, this may not be possible, as some academics consider that in cultural, social and economic terms the notion of rurality in a country such as the UK is outdated (Champion and Shepherd, 2006). An improvement to the application of this method would be to assess sets of indicators using two independent reviewers on the basis of a double-blind approach, as in the EIS Review Package (Oxford Brookes University Impacts Assessment Unit, 2005). An enhancement could be to have one reviewer with knowledge of the LA and one without (for example, an LA officer from another region). This would encourage challenging discussion as to appropriateness and ensure inter-observer consistency (Bryman, 2004:71).


Four other methods have been used to augment the appropriateness method. The framework for classifying indicators (Bond et al., 1998) works well, despite the new indicators now used by LAs, because of its original reliability, having been constructed from credible academic sources. The data extracted from the survey were most useful for examining the definition of SIs and stakeholder involvement. The author suggests trialling two of these four methods as additional criteria within the appropriateness assessment: namely, the number of indicators and the classification of indicators. This would bring the total number of criteria to 15. Although the number of indicators does not appear to affect the appropriateness of the indicator set (see section 4.2.2), the usability of more than 50 indicators by the different actors involved must be taken into account (Moldan et al., 2007).

This research alludes to LAs' operationalisation of definitions within SA leading to WS or SD, which is also the opinion of others (Counsell and Haughton, 2006; Gibson, 2006; Jackson, 2007). However, there are a number of ways in which LAs could easily improve this (see the recommendations below).

Overall, the author considers that the aim and objectives have been achieved, and that with the changes instigated this appropriateness method should be trialled with LA officers to discover their views on the method's usefulness.

5.2 Recommendations for Local Authorities

Easy wins:
1. To use disaggregated data to describe local variation within an indicator, so that
issues can be targeted and micro-managed to achieve the greatest sustainability
and to be clear in the table of indicators how the data is disaggregated.
2. To introduce a number of targets before a plan is approved, not just when it is
being implemented, as different targets are appropriate at different stages of the
plan.
3. LAs should be careful to choose indicators that have academically credible data,
rather than those with an abundance of data but little academic credibility.


Harder to do:
1. Decide on a method for stakeholder involvement that not only helps decide
indicators but also gives the stakeholders autonomy to decide acceptable and
unacceptable limits for the indicators. This would help with target setting.
2. Be decisive about the numbers of indicators chosen. If there are more than 50,
justify the extra resources needed to measure and manage them, and ensure that
details are given to all who want to see and use them.
3. LAs should invest time in developing EMSs and their continual improvement.
This way they will be leaders in the field and other organisations will respect
their pro-environmental behaviour.
4. Make sure local SMEs are involved at all possible stages of the SA.
Sustainability cannot be achieved unless LAs, NGOs and businesses work
together.

5.3 Recommendations for Research


1. This method of appropriateness, when amended, should be taken into the field
and trialled with LAs from different regions and degrees of rurality. The method
can be evaluated and changes made accordingly. LAs can then assess their SA
indicators using a robust method, which may help lower the number of
indicators that they choose for the final set, as they are able to justify their
decisions to stakeholders and policy makers. The method should be available
on-line at no cost.
2. A similar method to appropriateness has been set up for indicator classification.
This needs a larger trial to establish inter-observer consistency.
3. More research is needed to establish the best ways to achieve pro-environmental
behaviour within organisations.


REFERENCES
Adams, J. & White, M. (2006). Removing the health domain from the Index of
Multiple Deprivation 2004 - effect on measured inequalities in census measure of
health, Journal of Public Health, 28(4), 379-383.
Astleithner, F., Hamedinger, A., Holman, N. & Rydin, Y. (2004). Institutions and indicators: the discourse about indicators in the context of sustainability, Journal of Housing and the Built Environment, 19, 7-24.
Atkin, C., Rose, A. & Shier, R. (2005). Provision of and Learner Engagement with
Adult Literacy, Numeracy and ESOL in Rural England. A Comparative Case Study.
London: Institute of Education, University of London.
Audit Commission. (2005, August). Local Quality of Life Indicators: Supporting Local
Communities to Become Sustainable. Retrieved August 15th, 2008, from Audit
Commission: www.audit-commission.gov.uk
Ayres, R.U. (2000). Commentary on the utility of the ecological footprint concept,
Ecological Economics, 32(3), 347-9.
Baird, G. A. & Wright, N. (2006). Poor access to care: rural health deprivation?,
British Journal of General Practice, August, 567-568.
Barr, S. & Gilg, A.W. (2007). A conceptual framework for understanding and analysing attitudes towards environmental behaviour, Swedish Society for Anthropology and Geography, 89B(4), 361-379.


Bastianoni, S., Nielsen, S.N., Marchettini, N. & Jorgensen, S.E. (2005). Use of
thermodynamic functions for expressing some relevant aspects of sustainability,
International Journal of Energy Research, 29, 53-64.


Bauler, T., Douglas, I., Daniels, P., Demkine, V., Eisenmenger, N., Grosskurth, J., et
al. (2007). Identifying methodological challenges. In: T. Hak, B. Moldan & A.L.
Dahl (eds.), Sustainability Indicators: A Scientific Assessment, Island Press, p.57.
BBC (2008). Homeless ex-service people, The Today Programme, 10 July, London.
Bell, S. & Morse, S. (2001). Breaking through the glass ceiling: who really cares
about sustainability indicators?, Local Environment, 6(3), 291-309.
Bell, S. & Morse, S. (2004). Experience with sustainability indicators and stakeholder
participation: a case study relating to a 'blue plan project' in Malta, Sustainable
Development, 12, 1-14.
Beveridge, R. & Guy, S. (2005). The rise of the eco-preneur and the messy world of
environmental innovation, Local Environment, 10(6), 665-676.
Biggs, R. O., Scholes, R.J., ten Brink, B.J. & Vackar, D. (2007). Biodiversity
indicators. In: T. Hak, B. Moldan & A.L. Dahl (eds.), Sustainability Indicators: A
Scientific Assessment, Scientific Committee on Problems of the Environment (SCOPE),
pp.249-270.
Bond, A.J., Mortimer, K.J. & Cherry, J. (1998). Policy and practice: the focus of
Local Agenda 21 in the United Kingdom, Journal of Environmental Planning and
Management, 41(6), 767-776.
Bosello, F., Roson, R. & Tol, R.S. (2006). Economy-wide estimates of the implications of climate change: human health, Ecological Economics, 58, 579-591.
Bossel, H. (2001). Assessing viability and sustainability: a systems-based approach for
deriving comprehensive indicator sets, Conservation Ecology, 5(12).


Breckland Council (2008). Sustainability Appraisal Scoping Report - Site Specific Policies and Proposals: Thetford Area Action Plan and Snetterton Heath Area Action Plan. Retrieved May 28th, 2008, from Breckland Council: http://www.breckland.gov.uk/brecklandsascopingreport08-2.pdf
Bryman, A. (2004). Social Research Methods, (2nd edn.), Oxford: Oxford University
Press.
Buckwell, A. (2006). Rural development in the EU, Economía Agraria y Recursos Naturales, 6(12), 93-120.
Cartwright, L.E. (2000). Selecting local sustainable development indicators: does consensus exist in their choice and purpose?, Planning Practice & Research, 15, 65-78.
Champion, T. & Shepherd, J. (2006). Demographic Change in Rural England, Rural
Evidence Research Centre.
Chomitz, K.M., Buys, P. & Thomas, T.S. (2005). Quantifying the Rural-urban Gradient in Latin America and the Caribbean, World Bank.
Clark, D., Southern, R. & Beer, J. (2007). Rural governance, community empowerment and the new institutionalism: A case study of the Isle of Wight, Journal of Rural Studies, 23, 254-266.
Cohen, L., Manion, L. & Morrison, K. (2000). Research Methods in Education (5th
edn.), London and New York: Routledge/Falmer.
Coombes, M. & Raybould, S. (2004). Finding work in 2001: urban-rural contrasts across England in employment rates and local job availability, Area, 36(2), 202-222.


Copeland Borough Council (2005). Copeland Local Plan 2001-2016 Sustainability Appraisal, April. Retrieved 18 June 2008, from Copeland Borough Council: http://www.copelandbc.gov.uk/ms/www/local-plan/PDF/sustainability-assessment/S-A_Introduction-Background.pdf
Copeland Borough Council (2008). Copeland Fast Facts, September. Retrieved 6 August 2008, from Copeland Borough Council: http://www.copelandbc.gov.uk/main.asp?page=2891
Costaras, N. & Thomas, E. (2006). Sustainability Appraisal of the Local Development Framework: Scoping Report, June. Retrieved 22 June 2008, from Uttlesford District Council: http://www.uttlesford.gov.uk/Planning/local+plans+and+local+development+framework/saframeworkv7.pdf
Counsell, D. & Haughton, G. (2006). Sustainable development in regional planning:
the search for new tools and renewed legitimacy, Geoforum, 37, 921-931.
CRC (2008). The State of the Countryside 2008, July. Retrieved July 2008, from
Commission for Rural Communities: www.ruralcommunities.gov.uk
Dahl, A. (1997). The big picture: comprehensive approaches, Part One: Introduction.
In: B. Moldan, S. Billharz & R. Matravers (eds.), Sustainability Indicators: A Report on
the Project on Indicators of Sustainable Development, SCOPE 58, Chichester, UK:
Wiley, pp.69-83.
Daniels, P.L. (2007). Annex: menu of selected sustainable development indicators.
In: T. Hak, B. Moldan & A. L. Dahl (eds.), Sustainability Indicators: A Scientific
Assessment, Island Press, pp.369-387.
Darnall, N. & Sides, S.R. (2008). Assessing the performance of voluntary
environmental programs: does certification matter?, Policy Studies Journal, 36(1).


Dartford Borough Council (2006). Dartford's Core Strategy - Preferred Policies Approaches Document, July. Retrieved 8 December 2007, from http://www.dartford.gov.uk/planningpolicy/CoreStrategy.htm
DCLG (2006). Planning Policy Statement 25, December. Retrieved 23 April 2008,
from Department of Communities and Local Government.
DCLG (2007a). Index of Multiple Deprivation. Retrieved 9 June 2008, from DCLG: http://www.communities.gov.uk/communities/neighbourhoodrenewal/deprivation/deprivation07/
DCLG (2007b). Using the Indices of Multiple Deprivation. Retrieved 9 June 2008, from DCLG: http://www.communities.gov.uk/communities/neighbourhoodrenewal/deprivation/deprivation07/
DCLG (2008a). Eco-towns: Living a Greener Future, April. Retrieved 19 June 2008, from IEMA: http://www.iema.net/readingroom/show/18185/c260
DCLG (2008b). Planning Policy Statement 12: Local Spatial Planning, 4 June. Retrieved 6 August 2008, from Department of Communities and Local Government: http://www.communities.gov.uk/documents/planningandbuilding/pdf/pps12lsp.pdf
DCLG (2008c). Regional Planning, 12 May. Retrieved 7 August 2008, from Government Office for the East of England: http://www.gos.gov.uk/goee/docs/193657/193668/Regional_Spacial_Strategy/EE_Plan1.pdf
Defra (2004). Rural Definition and Local Authority Classification (Excel Spreadsheet). Retrieved February 2008, from Defra: http://www.defra.gov.uk/rural/ruralstats/ruraldefinition.htm
Defra (2005a). Securing the Future - UK Government Sustainable Development
Strategy, March. Retrieved February 2008, from Defra: http://www.sustainabledevelopment.gov.uk/publications/uk-strategy/index.htm


Defra (2005b). Defra Classification of Local Authority, July. Retrieved February 2008,
from DEFRA: www.defra.gov.uk
Defra (2007). Sustainable Development Indicators in Your Pocket 2007, London, England: Defra.
Dietz, S. & Neumayer, E. (2007). Weak and strong sustainability in the SEEA:
concepts and measurement, Ecological Economics, 61, 617-626.
Doak, J. & Parker, G. (2005). Networked space? The challenge of meaningful
participation and the new spatial planning in England, Planning, Practice and
Research, 20(1), 23-40.
Donnelly, A., Jones, M., O'Mahony, T. & Byrne, G. (2007). Selecting environmental
indicators for use in strategic environmental assessment, Environmental Impact
Assessment Review, 27, 161-175.
Donnelly, A., Salamin, N. & Jones, M.B. (2006). Changes in tree phenology: an indicator of spring warming in Ireland?, Biology and Environment: Proceedings of the Royal Irish Academy, March.
Doran, T., Drever, F. & Whitehead, M. (2006). Health underachievement and
overachievement in English local authorities, Journal of Epidemiology and
Community Health, 60, 686-693.
EBC (2005). July. Retrieved 13 June 2008, from Elmbridge Borough Council:
http://www.elmbridge.gov.uk/search/search.asp
Ekins, P., Simon, S., Deutsch, L., Folke, C. & de Groot, R. (2003). A framework for
the practical application of the concepts of critical natural capital and strong
sustainability, Ecological Economics, 44, 165-185.
Elmbridge Borough Council (2005). Final Scoping Report for the Sustainability
Appraisal of the Core Strategy Development Plan Document, July. Retrieved 18 June
2008, from Elmbridge Borough Council: http://www.elmbridge.gov.uk

Enticott, G. & Entwistle, T. (2007). The spaces of modernisation: outcomes, indicators and the local government modernisation agenda, Geoforum, 38, 999-1011.
Fotso, J.-C. (2006). Child health inequities in developing countries: differences across
urban and rural areas, International Journal for Equity in Health, 5(9).
Fraser, E.D., Dougill, A.J., Mabee, W.E., Reed, M. & McAlpine, P. (2005). Bottom
up and top down: Analysis of participatory processes for sustainability indicator
identification as a pathway to community empowerment and sustainable environmental
management, Journal of Environmental Management.
Gallego, F. (2006). Mapping Rural/Urban Areas from Population Density Grids.
Institute for Environment and Sustainability, JRC, Ispra (Italy).
Gardner, R., Laeger, S. & Wooderson, J. (2006). Watford Development Plan
Documents Strategic Environmental Assessment and Sustainability Appraisal Scoping
Report, February. Retrieved from Watford Borough Council.
Gasparatos, A., El-Haram, M. & Horner, M. (2007). A critical review of reductionist
approaches for assessing the progress towards sustainability, Environmental Impact
Assessment Review.
Gibson, R.B. (2006). Sustainability assessment: basic components of a practical
approach, Impact Assessment and Project Appraisal, 24(3), 170-182.
Glasson, J., Therivel, R. & Chadwick, A. (2005). Introduction to Environmental Impact
Assessment, 3rd edn, Oxford: Routledge.
Glavic, P. & Lukman, R. (2007). Review of sustainability terms and their definitions,
Journal of Cleaner Production, 15, 1875-1885.
Goodwin, M. (2006). Constructing and interpreting qualitative data. In: A. Bond (ed.), Your Master's Thesis: How to Plan, Draft and Revise, Abergele: Baskerville Press, pp.29-49.

Gregory, R.D., Noble, D., Field, R., Marchant, J., Raven, M. & Gibbons, D.W. (2003).
Using birds as indicators of biodiversity, Ornis Hungarica.
Gupta, J. & van Asselt, H. (2006). Helping operationalise Article 2: A
transdisciplinary methodological tool for evaluating when climate change is
dangerous, Global Environmental Change, 16, 83-94.
Hajat, S., Kovats, R.S. & Lachowycz, K. (2007). Heat-related and cold-related deaths
in England and Wales: who is at risk?, Occupational and Environmental Medicine, 64,
93-100.
Hall, S. (2007). The development of UK sustainable development indicators: making
indicators work. In: T. Hak, B. Moldan & A.L. Dahl (eds.), Sustainability Indicators:
A Scientific Assessment, 1st edn., Scientific Committee on Problems of the Environment
(SCOPE), pp.293-307.
Halton and St Helens PCT (2007). Heart Disease. Retrieved 2 Sept 2008, from Halton and St Helens PCT: http://www.haltonandsthelenspct.nhs.uk/library/documents/heartdisease.pdf
Hanratty, B., Drever, F., Jacoby, A. & Whitehead, M. (2008). Retirement age caregivers and deprivation of area of residence in England and Wales, European Journal of Ageing, 4, 35-43.
Haynes, R. & Gale, S. (2000). Deprivation and poor health in rural areas: inequalities
hidden by averages, Health & Place, 275-285.
HHA (2007). Lifetime Homes: 21st Century Living. Retrieved February 2008, from
Lifetimehomes: www.lifetimehome.org.uk
Holden, M. (2008). Social learning in planning: Seattle's sustainable development codebooks, Progress in Planning, 69, 1-40.


Holland, L. (1997). The role of expert working parties in the successful design and
implementation of sustainability indicators, European Environment, 7, 39-45.
Hopwood, B., Mellor, M. & O'Brien, G. (2005). Sustainable development: mapping
different approaches, Sustainable Development, 13, 38-52.
House of Commons (2007). The Sustainable Communities Bill: Bill 17 of 2006/07, 15 January. Retrieved May 2008.
House of Commons (2008). Economic Indicators, May 2008 Research Paper 08/43.
London: House of Commons.
House of Commons: Environmental Audit Committee (2008a). Making Government
Operations More Sustainable: A Progress Report, 1 July. Retrieved August 2008, from
House of Commons.
House of Commons: Environmental Audit Committee (2008b). Eighth Report of Session 2007-08, 12 July. Retrieved August 2008, from House of Commons.
Huby, M., Owen, A. & Cinderby, S. (2007). Reconciling socio-economic and
environmental data in a GIS context: an example from rural England, Applied
Geography, 27, 1-13.
IFF Research Ltd. (2008). The Annual Survey of Small Businesses' Opinions 2006/7:
Summary Report of Findings Among UK SME Employers, IFF Research Ltd.
IPCC (2007). Summary for policy makers. In: Climate Change 2007: The Physical
Science Basis, Cambridge University Press.
Jackson, T. (2007). Mainstreaming sustainability in local economic development
practice, Local Economy, 22(1), 12-26.


Karlsson, S., Dahl, A.L., Biggs, R.O., ten Brink, B.J., Gutierrez-Espeleta, E., Hj. Hasan, M.N., et al. (2007). Meeting conceptual challenges. In: T. Hak, B. Moldan & A.L. Dahl (eds.), Sustainability Indicators, 1st edn., London: Scientific Committee on Problems of the Environment (SCOPE), pp.31-32.
Keirstead, J. & Leach, M. (2007). Bridging the gaps between theory and practice: a service niche approach to urban sustainability indicators, Sustainable Development.
Kollmuss, A. & Agyeman, J. (2002). Mind the gap: why do people act
environmentally and what are the barriers to pro-environmental behavior?,
Environmental Education Research, 8(3), 241-260.
Lamond, J. & Proverbs, D. (2006). Does the price impact of flooding fade away?,
Structural Survey, 24(5), 363-377.
Lancaster, T., Hajek, P., Stead, L.F., West, R. & Jarvis, M.J. (2006). Prevention of relapse after quitting smoking, Archives of Internal Medicine, 166, 828-835.
Lehtonen, M. (2004). The environmental-social interface of sustainable development: capabilities, social capital, institutions, Ecological Economics, 49, 199-214.
Lemon, D., Flatt, G., Shack, L., Ellison, T. & Moran, T. (2007). Excess Cancer Mortality and Incidence by PCT in the North West, 2001-2005, December. Retrieved September 2nd, 2008, from North West Cancer Intelligence Service: http://www.christie.nhs.uk/press/2007/docs/NWCancerIntelligenceReport_171207.pdf
Lin, V., Gruszin, S., Ellickson, C., Glover, J., Silburn, K., Wilson, G., et al. (2007). Comparative evaluation of indicators for gender equity and health, International Journal of Public Health, 52, S19-S26.
Liverpool City Council (2005). Core Strategy Development Plan Document
Sustainability Appraisal Scoping Report, April. Retrieved 18 May 2008, from
Liverpool City Council: http://www.liverpool.gov.uk/Images/tcm21-35170.pdf


Malkina-Pykh, I.G. & Malkina-Pykh, Y.A. (2007). Quality-of-life indicators at different scales: theoretical background, Ecological Indicators.
Manthorpe, J., Iliffe, S., Clough, R., Cornes, M., Bright, L., Moriaty, J., et al. (2008). Elderly people's perspectives on health and well-being in rural communities in England: findings from the evaluation of the National Service Framework for Older People, Health and Social Care in the Community.
McNeill, D. (2000). The concept of sustainable development. In: D. McNeill, K. Lee,
H. Alan & D. McNeill (eds.), Global Sustainable Development in the 21st Century.
Edinburgh: Edinburgh University Press.
Mid Suffolk District Council (2008). Proposals Maps, 26 August. Retrieved September
2008, from Mid Suffolk District: http://www.midsuffolk.gov.uk/cgi-bin/MsmGo.
exe?grab_id=502&page_id=1713664&query=sssi&hiword=SSSIS+sssi+
Moldan, B. & Dahl, A.L. (2007). Challenges to sustainability indicators. In: T. Hak,
B. Moldan & A.L. Dahl (eds.), Sustainability Indicators: A Scientific Assessment, 1st
edn., London: Scientific Committee on the Problems of the Environment (SCOPE).
Moles, R., Foley, W., Morrissey, J. & O'Regan, B. (2007). Practical appraisal of
sustainable development - methodologies for sustainability measurement at settlement
level, Environmental Impact Assessment Review.
Monfreda, C., Wackernagel, M. & Deumling, D. (2004). Establishing national natural
capital accounts based on detailed ecological footprint and biological capacity
assessments, Land Use Policy, 21, 231-246.
Morse, S. (2008). Post-sustainable development, Sustainable Development.
Munda, G. (2006). Social multi-criteria evaluation for urban sustainability policies,
Land Use Policy, 23, 86-94.


Natural England (2008). East of England. Retrieved September 2008, from Natural
England: http://www.naturalengland.org.uk/sone/default.htm
Neumayer, E. (2003). Weak versus Strong Sustainability, 2nd edn., Cheltenham:
Edward Elgar.
Neumayer, E. (2007). A missed opportunity: the Stern Review on climate change fails
to tackle the issue of non-substitutable loss of natural capital, Global Environmental
Change, 17, 297-301.
Niemeijer, D. & de Groot, R.S. (2008). A conceptual framework for selecting
environmental indicator sets, Ecological Indicators, 8, 14-25.
Noss, R.F. (1990). Indicators for monitoring biodiversity: a hierarchical approach,
Conservation Biology, 4, 355-64.
Oakland, J.S. (2000). Statistical Process Control, 4th edn., Oxford: Butterworth-
Heinemann.
ODPM (2005). Sustainability Appraisal of Regional Spatial Strategies and Local
Development Documents. Retrieved 8 December 2007, from
http://www.communities.gov.uk/embedded_object.asp?id=1161346
ODPM (2006). A Practical Guide to the Strategic Environmental Assessment Directive,
September. Retrieved April 2008, from www.odpm.gov.uk
OECD (1994). Creating Rural Indicators, Paris.
Oxford Brookes University Impacts Assessment Unit (2005). Environmental impact
statement review package. In: J. Glasson, R. Therivel & A. Chadwick, Introduction to
Environmental Impact Assessment, Oxford: Routledge, pp.395-407.
Ozkaynak, B., Devine, P. & Rigby, D. (2004). Operationalising strong sustainability:
definitions, methodologies and outcomes, Environmental Values, 13, 279-303.

Parris, T.M. & Kates, R.W. (2003). Characterising and measuring sustainable
development, Annual Review of Environment and Resources, 28, 559-86.
Phillips, R. & Bridges, S. (2005). Integrating community indicators with economic
development planning. In: R. Phillips (ed.), Community Indicators Measuring
Systems, Aldershot, UK: Ashgate.
Pitt, M. (2007). The Pitt Review, December. Retrieved 23 March 2008, from Cabinet
Office: www.cabinetoffice.gov.uk/thepittreview
RCEP (2008). The Urban Environment. Retrieved 25 July 2008, from Royal
Commission On Environmental Pollution.
Reed, M.S., Fraser, E.D. & Dougill, A.J. (2006). An adaptive learning process for
developing and applying sustainability indicators with local communities, Ecological
Economics, 59, 406-418.
Reed, M., Fraser, E.D., Morse, S. & Dougill, A.J. (2005). Integrating methods for
developing sustainability indicators to facilitate learning and action, Ecology and
Society, 10(1).
Reschovsky, J.D. & Staiti, A.B. (2002). Access and quality: does rural America lag
behind?, Health Affairs, 24(4), 1128-1139.
Rickard, L., Jesinghaus, J., Amann, C., Glaser, G., Hall, S., Chealtle, M., et al. (2007).
Ensuring policy relevance. In: T. Hak, B. Moldan & A.L. Dahl (eds.), Sustainability
Indicators: A Scientific Assessment, Island Press, pp.65-79.
Roberts, P. (2006). Evaluating regional sustainable development: approaches, methods
and the politics of analysis, Journal of Environmental Planning and Management,
49(4), 515-532.
Robinson, J. (2004). Squaring the circle? Some thoughts on the idea of sustainable
development, Ecological Economics, 48, 369-384.

Rydin, Y., Holman, N. & Wolff, E. (2003). Local sustainability indicators, Local
Environment, 8(6), 581-589.
SBC (2007). Sustainability Appraisal Report of the Spelthorne Development Plan
Strategy and Policies Preferred Options and Proposals Preferred Options DPDs -
Appendices, April. Retrieved 20 June 2008, from Spelthorne Borough Council:
http://www.spelthorne.gov.uk/sustainability_appraisal_appendices_april_2007.pdf
Scott Wilson Business Consultancy (2005). Local Development Framework, July.
Retrieved 7 August 2008, from St Helens Council: http://sthelens.gov.uk/SITEMAN/
publications/31/LDF10sustainabilityappraisalscopingreport.pdf
Sheate, W.R., do Partidario, M.R., Byron, H., Bina, O. & Dagg, S. (2008).
Sustainability assessment of future scenarios: methodology and application to
mountain areas of Europe, Environmental Management, 41, 282-299.
Smith, R. (2008). Green housing policy must be more radical says Rynd Smith,
Guardian, 30 July, 4.
Smith, S.P. & Sheate, W.R. (2001). Sustainability appraisal of English regional plans:
incorporating the requirements of the EU strategic environmental assessment
directive, Impact Assessment and Project Appraisal, 19(4), 263-276.
Spangenberg, J.H. (2007). The institutional dimension of sustainable development.
In: T. Hak, B. Moldan & A.L. Dahl (eds.), Sustainability Indicators: A Scientific
Assessment, Scientific Committee on Problems of the Environment, pp.107-124.
Spilanis, I., Kizos, T., Koulouri, M., Kondyli, J., Vakoufaris, H. & Gatsis, I. (2008).
Monitoring sustainability in insular areas, Ecological Indicators.
Strachan, H. (2003). The civil-military 'gap' in Britain, Journal of Strategic Studies,
26(2), 43-63.


SurveyMonkey (2008a). SurveyMonkey User Manual. Retrieved 14 May 2008, from
SurveyMonkey: www.surveymonkey.com
SurveyMonkey (2008b). Smart Survey Design. Retrieved 14 May 2008, from
SurveyMonkey: www.surveymonkey.com
Sustainable Seattle (2005). Regional Indicators. Retrieved 20 February 2008, from
Sustainable Seattle: http://www.sustainableseattle.org/Programs/RegionalIndicators/
IndCriteria/view?searchterm=indicators
Sutherland, W.J. et al. (2008). Future novel threats and opportunities facing UK
biodiversity identified by horizon scanning, Journal of Applied Ecology, 45, 821-833.
Tasser, E., Sternbach, E. & Tappeiner, U. (2008). Biodiversity indicators for
sustainability monitoring at municipality level: an example of implementation in an
alpine region, Ecological Indicators, 8, 204-223.
The European Parliament and the Council of the European Union (2001). Directive
2001/42/EC of the European Parliament and of the Council of 27 June 2001 on the
assessment of the effects of certain plans and programmes on the environment,
Official Journal L 197, 0030-0037.
The Whitehaven News (2008). 3 July. Retrieved September 2008, from The
Whitehaven News: http://www.whitehaven-news.co.uk/se/1.134870
Therivel, R. & Minas, P. (2002). Ensuring effective sustainability appraisal, Impact
Assessment and Project Appraisal, 20(2), 81-91.
Therivel, R. & Ross, B. (2007). Cumulative effects assessment: does scale matter?,
Environmental Impact Assessment Review, 27, 365-385.
Therivel, R. & Walsh, F. (2006). The strategic environmental assessment directive in
the UK: 1 year onwards, Environmental Impact Assessment Review, 26(7), 663-675.


Thompson, B. (2007). Green retail: retailer strategies for surviving the sustainability
storm, Journal of Retail and Leisure Property, 6(4), 281-286.
Tilley, N. (2005). Crime reduction: a quarter century review, Public Money &
Management, October, 267-274.
Tratalos, J., Fuller, R.A., Evans, K.L., Davies, R.G., Newson, S.E., Greenwood, J.J., et
al. (2007). Bird densities are associated with household densities, Global Change
Biology, 13, 1685-1695.
Turner, R.K. (1993). Sustainability: principles and practice. In: R.K. Turner (ed.),
Sustainable Environmental Economics and Management, London: Belhaven Press.
Ulubasoglu, M.A. & Cardak, B.A. (2007). International comparisons of rural-urban
educational attainment: data and determinants, European Economic Review, 51,
1828-1857.
UNECE (1991). Policies and Systems of Environmental Impact Assessment, Geneva:
United Nations.
US Census Bureau (2008). World POPClock Projection. Retrieved 1 September 2008,
from US Census Bureau: http://www.census.gov/ipc/www/popclockworld.html
Vickers, D. & Rees, P. (2007). Creating the UK national statistics 2001 output area
classification, J. R. Statist. Soc. A, Part 2, 170, 379-403.
Wackernagel, M., Monfreda, C., Moran, D., Wermer, P., Goldfinger, S. & Deumling,
D. (2005). National Footprint and Biocapacity Accounts 2005: The Underlying
Calculation Method, Oakland: Global Footprint Network.
Warren, M. (2007). The digital vicious cycle: links between social disadvantage and
digital exclusion in rural areas, Telecommunications Policy, 31, 374-388.


WCED (1987). Our Common Future, Oxford, New York: Oxford University Press.
Weich, S., Twigg, L. & Lewis, G. (2006). Rural/non-rural differences in rates of
common mental disorders in Britain, British Journal of Psychiatry, 188, 51-57.
West Oxfordshire District Council (2008). West Oxfordshire Local Development
Framework Sustainability Appraisal Scoping Report, February. Retrieved 29 May
2008, from West Oxfordshire District Council: http://www.westoxon.gov.uk/files/
download/5169-2445.pdf
Wigan Council (2007). Scoping Report for the Sustainability Appraisal of Wigan Local
Development Framework, September. Retrieved 7 August 2008, from Wigan Council:
http://www.wigan.gov.uk/NR/rdonlyres/2A44F7A5-ADD8-4FA6-8F95A3ECB8A1CADD/0/SAScopingReport891kb.pdf
Willis, R., Webb, M. & Wilsdon, J. (2007). The Disrupters: Lessons for Low-Carbon
Innovation From the New Wave of Environmental Pioneers, NESTA.
WMBC (2007). Retrieved 18 June 2008, from Wigan Metropolitan Borough Council:
http://www.wigan.gov.uk/NR/rdonlyres/2A44F7A5-ADD8-4FA6-8F95A3ECB8A1CADD/0/SAScopingReport891kb.pdf
Wong, C., Baker, M. & Kidd, S. (2006). Monitoring spatial strategies: the case of
local development documents in England, Environment and Planning C: Government
and Policy, 24, 533-552.
Wooderson, J., Gardner, R. & Laeger, S. (2006). Draft Scoping Report for the Strategic
Environmental Assessment and Sustainability Appraisal for the Emerging Three Rivers
Development Plan Documents, February. Retrieved 18 June 2008, from Three Rivers
District Council: http://www.threerivers.gov.uk/Default.aspx/Web/SustainabilityAppraisal
Woodger, M. (2008). Where Can I Find EFDC SA Sustainability Indicators?


Yun, G.W. & Trumbo, C.W. (2006). Comparative response to a survey executed by
post, e-mail, and web form, Journal of Computer Mediated Communications, 6(2).
Zidansek, A. (2007). Sustainable development and happiness in nations, Energy, 32,
891-897.
Zonneveld, W. & Stead, D. (2007). European territorial cooperation and the concept
of urban-rural relationships, Planning, Practice & Research, 22(3), 439-453.

Documents containing Sustainability Indicators from LAs used in this research
EAST OF ENGLAND
MAJOR URBAN
Dacorum:
http://www.dacorum.gov.uk/pdf/Dacorum%20Consultation%20Scoping%20Report%2
006-02-23b%20FINAL.PDF

Three Rivers:
http://www.threerivers.gov.uk/Default.aspx/Web/SustainabilityAppraisal

Watford:
http://www.watford.gov.uk/ccm/content/planning-and-development/sustainabilityappraisal-sa-and-strategic-environmental-assessment-sea-scoping-report-march2006.en;jsessionid=154F8883BBD7B6B882FDFFDA5A5FD79A


RURAL 80
Breckland:
http://www.breckland.gov.uk/2a_sustainability.pdf
accessed May 28th 2008

Fenland:
http://www.fenland.gov.uk/ccm/content/development-policy/ldf/sustainable-appraisalscoping-report.en
accessed on 22nd June 2008

Huntingdonshire:
http://www.huntsdc.gov.uk/NR/rdonlyres/33B33A0B-661B-4B4B-8B44F9E70B4473B9/0/scoping_report_revision_sept_2007.pdf
accessed on 30th June 2008

Mid Suffolk:
http://www.midsuffolk.gov.uk/NR/rdonlyres/A8795816-7D79-4DC7-902883256CB4F5C2/0/DraftAAPScopingReportApril2008.pdf
accessed on 30th June 2008

North Norfolk:
http://www.northnorfolk.org/ldf/documents/Core_Strategy_Sustainability_Appraisal_
WEB_VERSION.pdf
accessed on 30th June 2008

South Cambridgeshire:
http://www.scambs.gov.uk/documents/retrieve.htm?pk_document=3611
accessed on 22nd June 2008


South Norfolk:
http://www.eastspace.net/gndp/documents/SA_SCOPING_REPORT__FINAL_VERSION_-_ADOPTED_DEC_2007.pdf
accessed May 28th 2008

Suffolk Coastal:
http://www.suffolkcoastal.gov.uk/NR/rdonlyres/FEC5C8F8-1354-4611-9F071E400CB47C22/0/SAScopingJune06.pdf
accessed on 22nd June 2008

Uttlesford:
http://www.uttlesford.gov.uk/Planning/local+plans+and+local+development+framewor
k/saframeworkv7.pdf
accessed on 22nd June 2008

NORTH WEST
MAJOR URBAN
Bury:
http://www.bury.gov.uk/NR/rdonlyres/40528837-4D39-4290-AA353473F2EA34F6/0/SATaskA2BaselineInformation2007.pdf
accessed on 22nd June 2008

Liverpool:
http://www.liverpool.gov.uk/Images/tcm21-35170.pdf
accessed on 22nd June 2008

Oldham:
http://www.oldham.gov.uk/ldf-sustainability-appraisal-scoping-report-web.pdf
accessed on 22nd June 2008


Salford:
http://www.salford.gov.uk/appendix4a.pdf
accessed on 22nd June 2008

Sefton:
http://www.sefton.gov.uk/default.aspx?page=5866
accessed on 22nd June 2008

St Helens:
http://sthelens.gov.uk/SITEMAN/publications/31/LDF10sustainabilityappraisalscoping
report.pdf
accessed on 22nd June 2008

Stockport:
http://s1.stockport.gov.uk/council/corestrategy/chapter_393.html
accessed on 22nd June 2008

Trafford:
http://www.trafford.gov.uk/cme/live/dynamic/DocMan2Document.asp?document_id=5
9458AE9-1B9E-43B4-9B47-5B253A3D96D2
accessed 18th June 2007

Wigan:
http://www.wigan.gov.uk/NR/rdonlyres/2A44F7A5-ADD8-4FA6-8F95A3ECB8A1CADD/0/SAScopingReport891kb.pdf
accessed 18th June 2007

RURAL 80
Allerdale:
http://www.allerdale.gov.uk/downloads/page1001/Core%20Strategy%20Scoping%20R
eport_1%20amendment.pdf
accessed 18th June 2007


Congleton:
http://www.congleton.gov.uk/pool/1/310720060253.pdf
accessed 22nd June 2007

Copeland:
http://www.copelandbc.gov.uk/ms/www/local-plan/PDF/sustainabilityassessment/Appendix-6_Formulation-Sustainability-Indicators.pdf
accessed 22nd June 2007

Eden:
http://www.eden.gov.uk/pdf/pp-Final-Core-Strategy-Scoping-Report.pdf
accessed 18th June 2007

Ribble Valley:
http://www.ribblevalley.gov.uk/downloads/LOCAL_DEVELOPMENT_SCHEME_AD
OPTED_MASTER_version_2007.pdf
accessed 18th June 2007

South Lakeland:
http://www.southlakeland.gov.uk/downloads/page1901/SA_Scoping_Report_280606.p
df
accessed 18th June 2007

SOUTH EAST
MAJOR URBAN
Dartford:
http://www.dartford.gov.uk/planningpolicy/DBCGBC_SAofLDFsScopingReportMar0
51_000.pdf
accessed 6th June 2008


Elmbridge:
http://www.elmbridge.gov.uk/search/search.asp
accessed 30th June 2008

Gravesham:
http://www.dartford.gov.uk/planningpolicy/DBCGBC_SAofLDFsScopingReportMar0
51_000.pdf
accessed 6th June 2008

Gravesham and Dartford used the same consultants and have the same indicators.

Mole Valley:
http://www.molevalley.gov.uk/media/pdf/i/c/Scoping_Report_-_July_2005.pdf
accessed 6th June 2008

Spelthorne:
http://www.spelthorne.gov.uk/sustainability_appraisal_appendices_april_2007.pdf
accessed 20th June 2008

Woking:
http://www.woking.gov.uk/planning/policy/ldf/fsar.pdf
accessed 20th June 2008

RURAL 80
Isle of Wight:
http://www.iwight.com/living_here/planning/images/2ScopingReport.pdf
accessed 20th June 2008

Mid Sussex:
http://www.midsussex.gov.uk/Nimoi/sites/msdcpublic/resources/Final%20in%20pdf%
20format.pdf
accessed 20th June 2008


South Oxfordshire:
http://www.southoxon.gov.uk/ccm/content/planning/policy/sustainability-appraisal.en
accessed 20th June 2008

Wealden:
http://www.wealden.gov.uk/Planning_and_Building_Control/Local_Plan/Sustainability
_appraisal_SEA.pdf
accessed 20th June 2008

West Oxfordshire:
http://www.westoxon.gov.uk/files/download/5169-2445.pdf
accessed 29th May 2008


APPENDICES


Appendix 1: Scoring Appropriateness of Sustainability Indicators

Each criterion is scored from 0 to 3; the evidence base for each score is given beneath the criterion name.

Leads to Stronger Sustainability
0 Does not lead to stronger sustainability.
1 Is receptor based (Holland, 1997) and also has one or more of the following: framed in terms of physical units, possibly relating to carrying capacity or non-substitutability of capital (Holland, 1997); is stock and/or distributional (Gasparatos et al., 2007); exceeds environmental quality standards (The European Parliament and the Council of the European Union, 2001); measures the effect on areas or landscapes which have a recognised national, community or international protection status (The European Parliament and the Council of the European Union, 2001; Niemeijer & de Groot, 2008).
2 Measuring (part of) the cumulative (The European Parliament and the Council of the European Union, 2001), symbiotic (Sustainable Seattle, 2005) and transboundary nature of the effects (The European Parliament and the Council of the European Union, 2001). Covers a range of good receptors. Possibly measures local smaller sites of industrialisation (Holland, 1997), their customers and suppliers (Darnall & Sides, 2008).
3 Integrating all 3 strands of sustainability (Holland, 1997): intragenerational, intergenerational (Sustainable Seattle, 2005) and inter-species (Holland, 1997). Localised development of indicators, including horizon scanning to identify trends and indicators of emerging innovations (Government, 2005). Using excellent practice in determining climate change indicators (e.g. biodiversity ones that are sensitive to anthropogenic change) (Tasser et al., 2008), as they have substantial effects on health, social and economic indicators (Allman et al., 2004; Bosello et al., 2006).

Academic Credibility
0 No academic credibility.
1 Accurate and bias free (Reed et al., 2006); measures what it is designed to measure (Sustainable Seattle, 2005; Niemeijer & de Groot, 2008).
2 Academically robust (Reed et al., 2006; Niemeijer & de Groot, 2008) and able to show the probability, duration, frequency and reversibility of the effects (European Parliament and the Council of the European Union, 2001).
3 Substantiated; shows method of verification and where results can be compared with progress elsewhere (Roberts, 2006; Niemeijer & de Groot, 2008).

Addressing Uncertainty
0 Does not address uncertainty.
1 Representative of system variability (Reed et al., 2006).
2 Able to have the possibility of measuring unseen effects (e.g. by proxy measurements) (Roberts, 2006).
3 Uncertain about the level (which means we can really gain something from studying it) (Niemeijer & de Groot, 2008).

Environmental Receptors Addressed
0 Environmental receptors not addressed.
1 Named receptor from Annex 1(f) of the SEA Directive (European Parliament and the Council of the European Union, 2001), or from ODPM (2005). To focus on identifying the total effect of both direct and indirect effects on receptors. Receptors may include natural resources (e.g. air, water, soil), sections of the population (e.g. people living in particular areas or vulnerable members of the community) or ecosystems and species (e.g. heathland).
2 Interrelationship between receptors (Niemeijer & de Groot, 2008).
3 Bearing on a fundamental process or widespread change (Niemeijer & de Groot, 2008). Considered in relation to the nature and extent of the receptors, such as ecosystems and communities (Global Reporting Initiative, 2007), rather than administrative boundaries (ODPM, 2005).

Locality
0 Not collected locally.
1 Relevant to the local system/environment (Reed et al., 2006; Donnelly et al., 2006). Collected at as local a level as possible (Fraser et al., 2005).
2 Collected in conjunction with a local organisation (e.g. university).
3 No level 3 available.

Disaggregation of Data
0 Data not disaggregated.
1 Some demographic detail, e.g. age (Coombes & Raybould, 2004; Hajat et al., 2007; Manthorpe et al., 2008), race/ethnicity, income/socio-economic status, education level (Sustainable Seattle, 2005), deprivation (Doran et al., 2006), problem areas, gender (Lin et al., 2007), sexual orientation, homestead type (nursing home, private house, traveller) as appropriate; access to IT (Warren, 2007), rural versus urban (Reschovsky & Staiti, 2002). Does not use relative effects (e.g. deprivation in Essex to deprivation in East London) (Therivel & Ross, 2007); compares like with like only.
2 Richer in demographic detail. May be disaggregated to ward level (ODPM, 2005) or below. Does not make assumptions about disaggregation (e.g. rural ageing is not homogeneous) (Manthorpe et al., 2008).
3 Based on current research (e.g. that community services are powerful determinants of health) (Doran et al., 2006); the relationship between material deprivation and ill-health is strong but not straightforward (Doran et al., 2006). Collects new disaggregated data for indicators, e.g. precise quantification of the scale of the digital divide (Warren, 2007), local SMEs supplying LAs (IFF Research Ltd., 2008), projected population statistics (NSO, 2007; Shaw, 2007).

Measurability
0 Not easily measured.
1 Easily measurable (Reed et al., 2006; Donnelly et al., 2007) in qualitative terms (Niemeijer & de Groot, 2008) using existing data (Reed et al., 2006), collected and analysed through established manageable methods (Sustainable Seattle, 2005), with meaningful units of measurement (Roberts, 2006).
2 Easily measured in quantitative terms (Niemeijer & de Groot, 2008), using an existing historical record of comparative data (Niemeijer & de Groot, 2008); capable of being updated regularly (Donnelly et al., 2007).
3 Units of measurement can be adjusted to reflect individual situations (Roberts, 2006) and data can be collected to reflect the area and the magnitude of the effect (European Parliament and the Council of the European Union, 2001).

Reliability
0 No reliability.
1 Repeatable and able to assess trends over space (Niemeijer & de Groot, 2008) and time (Reed et al., 2006), with the right spatial and temporal scales (Sustainable Seattle, 2005; Niemeijer & de Groot, 2008).
2 Sensitive and responds in a predictable manner to changes and stresses (Reed et al., 2006; Niemeijer & de Groot, 2008).
3 Repeatable and reproducible in different contexts (Niemeijer & de Groot, 2008), allowing unambiguous interpretation (Niemeijer & de Groot, 2008).

Relevant to Plan
0 Not relevant to plan.
1 Must link to quantitative and qualitative targets in plan (Niemeijer & de Groot, 2008): 1%-33% of targets.
2 Must link to quantitative and qualitative targets in plan (Niemeijer & de Groot, 2008): 34%-67% of targets. Could be able to relate to opportunities for plan change, in LA, businesses, non-profit organisations, institutions and individuals (Sustainable Seattle, 2005).
3 Must link to quantitative and qualitative targets in plan (Niemeijer & de Groot, 2008): 68%-100% of targets. Could be time bound. Sensitive to changes within policy timeframes (Niemeijer & de Groot, 2008). Can identify conflict with plan objectives in order that alternatives may be explored (Donnelly et al., 2007).

Actionable
0 Not easily actionable.
1 Does not require excessive data collection skills (Reed et al., 2006; Niemeijer & de Groot, 2008). Well-established links with specific management practice or intervention. Has a target, baseline or threshold against which to measure it/take action (Reed et al., 2006; Niemeijer & de Groot, 2008).
2 Measures conditions or activities that can be changed in a positive direction by local actions (Sustainable Seattle, 2005). Provides early warning of detrimental change (Reed et al., 2006; Donnelly et al., 2007; Niemeijer & de Groot, 2008).
3 Shared resources across LAs (Scott Wilson, Levett-Therivel Sustainability Consultants, Treweek Environmental Consultants and Land Use Consultants, 2006), e.g. for local biodiversity indicators such as vascular plant numbers (Tasser et al., 2008).

Stakeholder Involvement
0 Mandatory groups consulted only.
1 Indicators developed with some stakeholders; notable stakeholders and/or minorities left out (ODPM, 2005).
2 Indicators developed with a good range of stakeholders, identifying areas most at risk of damage (Donnelly et al., 2007) and what is important to stakeholders (Reed et al., 2006).
3 Wide consultation of stakeholders, in type and number of each stakeholder type, in choosing the indicator. User driven to be relevant to the target audience (Niemeijer & de Groot, 2008).

Funding/Cost
0 Not easy to fund, monitor or report.
1 Has potential for funding. Cost effective to measure (Sustainable Seattle, 2005; Reed et al., 2006; Niemeijer & de Groot, 2008).
2 Benefits of information should outweigh the costs of usage (Niemeijer & de Groot, 2008).
3 Helps optimise the number of indicators (Donnelly et al., 2007).

Easily Communicated
0 Not easily communicated.
1 Community assets (Reed et al., 2006) and concerns are reflected in the data and analyses of each indicator (Sustainable Seattle, 2005).
2 Ability to communicate information to a level appropriate for making policy decisions (Reed et al., 2006) and to the general public (Donnelly et al., 2007; Niemeijer & de Groot, 2008).
3 Attractive to local media. The press publicises them and uses them to monitor and analyse (Sustainable Seattle, 2005).

Appendix 2: Local Authorities Used in this Study

Local Authorities by Defra area classification (marked LAs were not used in this research; see key below):

EE Rural 80 LAs: *Babergh; Breckland; *East Cambridgeshire; Fenland; Forest Heath; Huntingdonshire; *Maldon; Mid Bedfordshire; Mid Suffolk; North Norfolk; South Cambridgeshire; South Norfolk; Suffolk Coastal; Uttlesford.

NW Rural 80 LAs: Allerdale; Congleton; Copeland; Eden; Ribble Valley; South Lakeland.

SE Rural 80 LAs: Chichester; Isle of Wight; Mid Sussex; South Oxfordshire; Wealden; West Oxfordshire.

EE Major Urban LAs: *Broxbourne; Dacorum; Epping Forest; Three Rivers; Watford.

NW Major Urban LAs: *Bolton; Bury; *Knowsley; Liverpool; *Manchester; Oldham; *Rochdale; Salford; Sefton; St. Helens; Stockport; #Tameside; Trafford; Wigan.

SE Major Urban LAs: Dartford; Elmbridge; Epsom and Ewell; Gravesham; Mole Valley; *Runnymede; Spelthorne; Woking.

Reasons for an LA not being used in this research:
* Unable to locate
Located too late to process
# Document locked
Had to withdraw under advice from DCLG
SA not completed yet (Woodger, 2008)


Appendix 3: Appropriateness Scores, Numbers of Indicators and Classification of Sustainability Indicators

Columns (as given in the original table): Local Authority; Region; Rural Designation (Defra, 2004); Leads to Strong Sustainability; Academic Credibility; Addressing Uncertainty; Environmental Receptors Addressed; Credibility Total/12; Locality; Disaggregation of Data; Measurability; Reproducibility; Relevant to Plan; Actionable; Stakeholder Involvement; Funding/cost; Easily Communicated; Measurement Total/11; Local Authority Total/15; Indicator Number in LA; Indicator Total/38; Economic Classification; Environmental Classification; Social Classification.

Dacorum | EE | MU | 47 28 65 49 47 47 60 48 38 58 33 52 83 45 44 143 | 10% 63% 27%
Three Rivers | EE | MU | 47 27 66 49 47 47 61 48 39 58 33 52 83 45 44 140 | 11% 64% 26%
Watford | EE | MU | 41 33 49 51 43 46 61 50 40 63 33 33 87 47 44 118 | 11% 61% 28%
Breckland | EE | R80 | 41 29 37 53 40 47 12 64 51 43 67 58 33 53 73 54 46 59 | 25% 54% 20%
Fenland | EE | R80 | 42 38 51 52 46 48 69 58 44 61 33 53 83 46 46 43 | 19% 44% 37%
S Cambs | EE | R80 | 44 38 50 52 46 45 67 57 42 59 50 85 39 42 40 | 20% 45% 35%
S Norfolk | EE | R80 | 48 39 45 52 46 50 71 57 46 67 63 67 52 80 65 54 42 | 31% 50% 19%
Suffolk Coastal | EE | R80 | 43 43 52 47 46 46 72 59 45 67 54 87 42 44 143 | 22% 44% 34%
Uttlesford | EE | R80 | 35 39 43 45 41 45 74 61 45 33 67 57 86 49 45 58 | 14% 28% 59%
Huntingdonshire | EE | R80 | 39 28 50 46 41 42 67 53 41 67 65 33 54 85 61 49 24 | 13% 42% 46%
Mid Suffolk | EE | R80 | 40 36 48 44 42 44 68 56 42 62 67 51 83 53 46 137 | 24% 41% 35%

North Norfolk | EE | R80 | 43 | 36 46 49 43 48 67 57 45 63 52 82 40 43 57 | 21% 51% 28%
Bury | NW | MU | 36 35 46 43 40 46 63 55 42 56 48 85 38 40 103 | 33% 31% 36%
Liverpool | NW | MU | 41 36 49 48 43 45 67 55 42 33 56 100 48 81 64 51 47 | 23% 40% 36%
Oldham | NW | MU | 39 33 43 45 40 48 70 60 45 63 33 50 87 47 44 61 | 16% 46% 38%
Salford | NW | MU | 44 39 52 50 46 43 15 71 57 47 56 67 49 85 51 48 35 | 23% 37% 40%
Sefton | NW | MU | 38 43 41 51 43 47 10 68 56 45 59 33 49 83 45 44 32 | 28% 44% 28%
St Helens | NW | MU | 43 32 50 45 42 44 69 54 43 61 67 48 86 54 47 82 | 28% 30% 42%
Stockport | NW | MU | 40 37 62 55 48 42 59 51 39 56 33 54 78 45 44 37 | 19% 46% 35%
Trafford | NW | MU | 48 30 46 48 43 46 67 51 43 67 63 67 47 84 65 52 125 | 18% 39% 43%
Wigan | NW | MU | 45 30 51 50 44 47 10 68 56 45 63 33 54 89 48 46 103 | 23% 40% 37%
S Lakeland | NW | R80 | 44 36 55 51 47 46 65 55 42 65 67 54 86 55 49 50 | 20% 46% 34%
Ribble Valley | NW | R80 | 40 31 47 48 41 44 10 69 55 45 67 64 67 54 84 67 52 87 | 23% 36% 41%
Eden | NW | R80 | 41 42 46 49 45 47 75 61 48 62 67 54 87 54 49 78 | 27% 31% 42%

Copeland | NW | R80 | 57 49 39 49 | 48 47 69 58 45 62 33 52 84 47 47 55 | 22% 62% 16%
Congleton | NW | R80 | 43 46 50 49 47 43 74 63 47 64 33 53 85 47 47 68 | 19% 43% 38%
Allerdale | NW | R80 | 47 32 55 51 46 47 65 59 44 64 67 50 81 52 48 78 | 13% 49% 38%
Mole Valley | SE | MU | 36 29 45 37 37 44 10 60 45 39 50 33 46 69 40 41 120 | 21% 55% 24%
Dartford | SE | MU | 42 35 52 52 46 46 71 59 46 67 33 56 89 49 47 49 | 20% 59% 20%
Gravesham | SE | MU | 42 35 52 52 46 46 71 59 46 67 33 56 89 49 47 49 | 20% 59% 20%
Elmbridge | SE | MU | 41 34 51 53 45 45 65 49 41 55 67 51 75 49 45 120 | 23% 52% 26%
Spelthorne | SE | MU | 40 34 43 51 42 45 66 53 43 60 100 50 77 58 48 75 | 23% 53% 24%
Woking | SE | MU | 44 35 52 50 45 46 11 68 54 44 100 63 100 52 87 80 59 95 | 21% 44% 35%
Isle of Wight | SE | R80 | 53 35 54 53 49 49 68 56 45 62 100 52 83 59 52 151 | 22% 53% 25%
W Oxon | SE | R80 | 41 38 41 57 44 45 68 56 44 60 33 48 73 47 45 59 | 24% 54% 22%
Mid Sussex | SE | R80 | 46 35 49 49 45 47 68 60 45 62 67 52 80 52 48 62 | 21% 50% 29%
S Oxon | SE | R80 | 42 38 47 47 44 42 15 70 58 46 33 67 52 88 48 46 55 | 20% 45% 35%

Wealden

averages ->

Local Authority

67
54
33
47
80
56

Environmental Classification

31%

42
39%

52
30%

63
90

7
48

47

Economic Classification

32%

42

Indicator Number in LA

47%

Local Authority Total/15

21%

Easily Communicated

78

Funding/cost

47

Stakeholder Involvement

51

Actionable

83

Relevant to Plan

51

Measurement Total/11

45

Reproducibility

61

Measurability

16

Disaggregation of Data

44

Locality

55

Credibility Total/12

67

Environmental Receptors Addressed

6.5

Addressing uncertainty

49

46

Academic Credibility

49

44

Leads to Strong Sustainability

30

49

Rural Designation (Defra, 2004)

40

49

R80
35

SE
43

94

Region

Social Classification

Indicator Total/38
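The group totals in the column headings (Credibility out of 12, Measurement out of 11, Local Authority out of 15) sum to the overall Indicator Total out of 38, which suggests the overall score is a weighted combination of the three criterion groups. A minimal sketch of that aggregation, assuming each group score is expressed as a percentage and weighted by its number of criteria (the weighting scheme and the function name are assumptions for illustration, not taken from the tables):

```python
# Hypothetical aggregation: combine three criterion-group scores (each a
# percentage) into an overall "Indicator Total/38" percentage, weighting
# each group by its number of criteria (12 + 11 + 15 = 38).
# The weighting scheme is an assumption.
GROUP_WEIGHTS = {"credibility": 12, "measurement": 11, "local_authority": 15}

def indicator_total(scores):
    """scores: group name -> percentage score for that criterion group."""
    weighted = sum(scores[group] * w for group, w in GROUP_WEIGHTS.items())
    return weighted / sum(GROUP_WEIGHTS.values())

# Groups scoring 50%, 60% and 40% combine to roughly 48.9% overall.
example = {"credibility": 50, "measurement": 60, "local_authority": 40}
print(round(indicator_total(example), 1))  # 48.9
```

Under this assumption a group containing more criteria (e.g. the 15 Local Authority criteria) pulls the overall percentage proportionally harder than a smaller group.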

Appendix 4: Justification of Classification of Indicators for all Local Authorities

Columns: Indicator (main points); Nearest indicator in the Bond et al. (1998) classification; Justification of classification; Classification.

Indicator (main points): EMS adoption; Women in senior positions*; Number of ICT schemes implemented; Access to local services; Net migration of people; Number of farmers markets and farm shops*; Regional energy consumption compared with population and GDP*; The number of planning applications granted contrary to the advice of the Environment Agency on either flood defence grounds or water quality; Education (any age/educational establishment or workplace); Disabled access (buildings, outside areas, transport); Play areas; Recreational facilities/health; Development in risk of flood area; Perception of situation of individuals and community (in surveys); Settlement size and proportion; Average weekly hours worked; Number of unauthorised Gypsy and Traveller sites and numbers of caravans on them.

Nearest indicators in the Bond et al. (1998) classification: Direct economic impacts for dependants; Foster development of environmental industry; Employment; Deprivation; Life expectancy; Indirect impacts on local and regional economy; Local business reduces impact on the environment; Transport; Insulation; Resources (energy); EIA; Resources/Biodiversity; Energy; Water; Educational facilities; Neighbourhood liveability; Land and housing values; Activity pattern: recreation, land use, transportation; Individual and family level impacts; Individual and family level impacts/Health/Conflict; Social psychological.

Justifications of classification:
- Economic, as it has no guaranteed positive effect on the environment (Darnall & Sides, 2008).
- RCEP (2008:34) links deprivation with life expectancy, as do Doran et al. (2006).
- Disadvantages innovation in rural businesses (Warren, 2007).
- Age-specific (18-25) migration flows out of rural areas (Champion & Shepherd, 2006).
- Less transport, therefore back-calculating (Gupta & van Asselt, 2006).
- Back-calculating through the cause-effect chain of climate change (Gupta & van Asselt, 2006) (two indicators).
- EIA is defined as "an assessment of the impact of a planned activity on the environment" (UNECE, 1991).
- First part of indicator is energy.
- Not social, as housing is not mentioned.
- Only one reference to education in Bond et al.'s (1998) classification (i.e. educational facilities).
- By 2050, half of England's population will be over 45 (Champion & Shepherd, 2006), and access for older and very old age groups will have the same needs as disabled access.
- Community services, such as recreational facilities, are powerful determinants of health (Doran et al., 2006).
- Risk to life is low compared to RTAs or heart disease (over the same period of time), so not economic (life expectancy), as illustrated by: "The floods during June and July 2007 were a wake-up call. The three months from May to July were the wettest since records began and the events that followed have been linked to the deaths of 13 people" (Pitt, 2007:3).
- For example, perception of crime is not the same as the crime rate (Tilley, 2005).
- Sustainability is related to settlement size and characteristics (Moles et al., 2007).
- Health impacts on the individual; relationship impacts on families.
- Conflict within the travelling community, and between the travelling community and the settled community.

Classifications assigned: Economic (6), Environmental (5), Social (8).

Altogether, the totals for indicators needing justification in each classification category are Economic (20), Environmental (15) and Social (39). A total of 74 sustainability indicators are in this indicator reference set; a sample of 19 is shown in this appendix.
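The category totals above determine how the 74-indicator reference set splits across the three pillars; a minimal sketch of that arithmetic (rounding to whole percentages, as the score tables do):

```python
# Split of the 74-indicator reference set across the three pillars,
# rounded to whole percentages (counts taken from the appendix text).
counts = {"Economic": 20, "Environmental": 15, "Social": 39}
total = sum(counts.values())  # 74 indicators in the reference set

shares = {pillar: round(100 * n / total) for pillar, n in counts.items()}
print(total, shares)  # 74 {'Economic': 27, 'Environmental': 20, 'Social': 53}
```

So just over half of the reference-set indicators fall under the social pillar, with the remainder split roughly 27%/20% between economic and environmental.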

