SUSTAINABILITY INDICATORS?
by
Bernadette Cass
September 2008
This copy of the dissertation has been supplied on condition that anyone who consults it is
understood to recognise that its copyright rests with the author and that no quotation from the
dissertation, nor any information derived therefrom, may be published without the author's
prior written consent. Moreover, it is supplied on the understanding that it represents an
internal University document and that neither the University nor the author is responsible for
the factual or interpretative correctness of the dissertation.
ABSTRACT
Sustainability, sustainable development and sustainability indicators have been
defined, and the importance of operationalising these definitions with regard to
sustainability appraisal discussed. To improve local sustainability, Local Authorities in
England have each chosen a set of sustainability indicators to be used in sustainability
appraisal, in order to assess all options under consideration when choosing the
preferred option for their local development framework.

To group the sustainability indicators into the three pillars of economic, environmental
and social sustainability, a classification framework was applied. A survey of Local
Authority officers and consultants involved in choosing sustainability indicators for the
three regions augmented the appropriateness method. Using these methods, an
attempt was made to suggest areas of improvement that would increase
appropriateness and possibly lower the number of local sustainability indicators used,
in order to make the communication of changes towards or away from sustainability
more transparent and manageable for all stakeholders who use the Local Authority
indicators.
Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
1.1 Introduction
1.2
2.1
2.1.1
2.1.2
2.1.3 Is Sustainability Achievable?
2.2
2.2.1 Planning in England
2.2.2 Sustainability Appraisal
2.2.3 Sustainability Indicators
2.3
2.4
2.5
2.5.1
2.5.2
2.6 Objectives
3.1
3.2
3.3 Appropriateness of Indicators
3.3.1 Choosing Criteria
3.3.2
3.4
3.4.1
3.4.2
3.4.3
3.4.4
3.4.5
3.4.6
3.4.7 Set Indicator
3.5
3.6
3.7
3.8 Survey
3.9 Summary
4.1
4.1.1 Individual Indicators
4.1.2
4.1.3 Criteria
4.1.4 Super Criteria
4.1.5
4.2
4.2.1
4.2.2 Number of Indicators
4.2.3 Classification of Indicators
4.2.4 Survey Information
4.2.5
4.3 Summary
5.1
5.2
5.3
REFERENCES
APPENDICES
Appendix 1
Appendix 2
Appendix 3
Appendix 4
List of Tables

Table 2.1
Table 3.1
Table 3.2
Table 3.3
Table 3.4
Table 3.5
Table 3.6
Table 3.7 … Each Classification
vi
List of Figures

Figure 1.1 Back Calculating Through the Cause-Effect Chain of Climate Change
Figure 2.1
Figure 2.2
Figure 2.3
Figure 2.4 Estimates of End User CO2 Emissions for 2005 in England, Using Defra's Area Classification
Figure 3.1 Decision Tree Method for Choosing Regions and Local Authorities
Figure 3.2
Figure 4.1
Figure 4.2
Figure 4.3
Figure 4.4
ABBREVIATIONS

A Levels - Advanced Levels
AONB - Area of Outstanding Natural Beauty
BBC - British Broadcasting Corporation
BREEAM - Building Research Establishment Environmental Assessment Method
BSA
BTO - British Trust for Ornithology
CAT
CIAT
CRC - Commission for Rural Communities
CSD - Commission on Sustainable Development
DCLG - Department for Communities and Local Government
Defra - Department for Environment, Food and Rural Affairs
Ec
EE - East of England
EISs
EMA
EMS - Environmental Management System
En
EU - European Union
FOE - Friends of the Earth
GCSE - General Certificate of Secondary Education
GDP - Gross Domestic Product
GHG - Greenhouse Gas
HNC - Higher National Certificate
HND - Higher National Diploma
ICAEW - Institute of Chartered Accountants in England and Wales
ICT - Information and Communication Technology
IEMA - Institute of Environmental Management and Assessment
IMD - Index of Multiple Deprivation
IOW - Isle of Wight
LA - Local Authority
LADs - Local Authority Districts
LDF - Local Development Framework
LGBT - Lesbian, Gay, Bisexual and Transgender
MU - Major Urban
NGOs - Non-Governmental Organisations
NW - North West
ODPM - Office of the Deputy Prime Minister
ONS - Office for National Statistics
OS - Ordnance Survey
PCT - Primary Care Trust
PPM
PPS - Planning Policy Statement
R80 - Rural 80
RCEP - Royal Commission on Environmental Pollution
RSC
RSPB - Royal Society for the Protection of Birds
RTPI - Royal Town Planning Institute
SA - Sustainability Appraisal
SCS - Sustainable Communities Strategy
SD - Sustainable Development
SE - South East
SEA - Strategic Environmental Assessment
SI - Sustainability Indicator
SIs - Sustainability Indicators
SMEs - Small and Medium-sized Enterprises
So
SS - Strong Sustainability
UK - United Kingdom
UKCIP - UK Climate Impacts Programme
UNECE - United Nations Economic Commission for Europe
WCED - World Commission on Environment and Development
WS - Weak Sustainability
WWF - World Wide Fund for Nature
ACKNOWLEDGEMENTS
Introduction
The limited resources left in the world for the 6,720,701,504 humans (US Census
Bureau, 2008) currently using the planet's resources, and the opportunities for future
generations to be able to use similar resources, are some of the significant issues facing
the world today. It is not just the resources that we use, but also the direct and indirect
effects caused by the use of these resources that need to be considered. For
example, anthropogenic warming of the Earth caused by greenhouse gases (GHG)
(IPCC, 2007:3), such as the carbon dioxide produced from the combustion of fossil
fuels, is not a simple cause-and-effect situation (see Figure 1.1) and has long-term, wide-reaching consequences that go well beyond using up all the oil, gas and coal.
This dissertation aims to assess one method currently used by local authorities in
England, sustainability appraisal, which claims to lead towards stronger
sustainability.
Figure 1.1: Back Calculating Through the Cause-Effect Chain of Climate Change
Different people and organisations have widely differing views on how to manage
these resources. The north and south of England have polarised views: it is primarily
the north of England that currently shows a movement towards sustainable
development, while the south of England seems concerned mainly with development. This
dissertation concentrates on the views from the north of England, with a particular
focus on sustainability within the English local authorities.
1.2
This first chapter provides an introduction to the motivation behind this research.
Chapter Two frames the case for developing an appropriate method for sustainability
indicators. The third chapter then outlines the methods used to determine the
appropriateness of sustainability indicators, the results of which are discussed in
Chapter Four. Conclusions then follow in Chapter Five, outlining how well the overall
aims and objectives were addressed. A set of appendices provides additional specific
details of the methodology not considered appropriate to the main text.
2.1.1
The WCED definition of sustainable development, 'development that meets the needs
of the present without compromising the ability of future generations to meet their own
needs' (WCED, 1987), is often the starting point where researchers begin to define
sustainability (Holland, 1997; Jackson, 2007; Morse, 2008). However, this definition
reflects a managerial approach and is therefore more attractive to government and
business than a more radical definition (Robinson, 2004), although intragenerational
and intergenerational equity does appear as the cornerstone of this WCED definition.
The terms sustainable development and sustainability have often been used
interchangeably and a wealth of definitions is available (Defra, 2005a; Dartford
Borough Council, 2006; Moles et al., 2007; Zidansek, 2007; Wigan Council, 2007;
Morse, 2008), including the definition by Dahl (2007) that sustainability is 'the
capacity of any system or process to maintain itself indefinitely'. This definition
alludes to intragenerational and intergenerational equity, but does not refer to the
quality of the level maintained (i.e. using a quantified baseline). The following
definition is from the Draft UK Sustainable Communities Bill 2007: 'By local
sustainability we mean policies that work towards the long term well being of any
given area. That means promoting economic needs...' (House of Commons, 2007).
Unlike the WCED and Dahl, the House of Commons introduces both policy and
economics into the definition. The commonalities in all three definitions are the
temporal range, being long-term, and the omission of a stated baseline. Robinson
(2003) defines sustainability as being related to values and fundamental changes in
individual attitudes towards nature (value changes) and sustainable development as
being orientated towards efficiency gains and improvements in technology (technical
fixes), with their ultimate goals being rather different. Of note is that a number of local,
regional and national government and associated organisations, in England, have
chosen not to define sustainability or sustainable development (ODPM, 2005; Scott
Wilson Business Consultancy, 2005; CRC, 2008; DCLG, 2008) within their
sustainability documents.
Robinson (2003) concedes that it may be worth leaving the definition of sustainable
development open, using the diplomat's method of 'constructive ambiguity', and having
definitions emerge from attempts at implementing sustainable development. Moldan
and Dahl (2007) hold the same opinion for the definition of sustainability.
2.1.2
Academics further divide sustainability; this can then be applied to other organisations
and institutions. Researchers (Turner, 1993; Holland, 1997; Neumayer, 2003; Karlsson
et al., 2007) suggest that sustainability can be measured in degrees of sustainability,
termed weak and strong sustainability. Two-part (Holland, 1997; Karlsson et al., 2007),
three-part (Neumayer, 2003) and four-part (Turner, 1993) classifications have been
suggested. These classifications are based upon what economists term natural capital,
which 'summarises the multiple and various services of nature benefiting human beings,
from natural resources to environmental amenities' (Neumayer, 2007). Weak
sustainability (WS) is built upon the unlimited substitutability of natural capital
(Neumayer, 2003), whereas strong sustainability (SS) is more difficult to define
(Holland, 1997).
Turner's (1993:9-15) four-part classification includes very weak sustainability, weak
sustainability, strong sustainability and very strong sustainability, with the latter
suggested by Turner as being impossible to achieve. Neumayer's (2003) three-part
classification also includes WS and suggests two interpretations of SS as available in
the literature. In one interpretation, SS is the paradigm that calls for preserving natural
capital itself in value terms. In the second interpretation, SS is not defined in value
terms but calls for the preservation of those forms of natural capital that are regarded as
non-substitutable (the so-called critical natural capital) (Neumayer, 2003:24-25).
From the classifications available for sustainability, different actors choose definitions
from the weak or strong sustainability viewpoint. Ekins et al. (2003) suggest that the
important point is that, starting from a SS assumption of non-substitutability in general,
it is possible to shift to a WS position where that is shown to be appropriate. In this
dissertation, WS is taken to mean the same as SD, and SS will be defined as Neumayer
(2003) has concluded, unless stated otherwise; Ekins' stance is used where applicable.
2.1.3
Is Sustainability Achievable?
2.2
The planning system in England, sustainability appraisal and indicators now need to be
considered in more detail.
2.2.1
Planning in England
A major culture shift in the English planning system has redefined the nature and
purpose of planning from land use to spatial planning (Wong et al., 2006). England
has been carrying out SA on development plans since 1992, and this SA has been
relatively effective at integrating environmental and sustainability considerations into
plans (Therivel et al., 2002). The 2004 Planning and Compulsory Purchase Act
requires planning authorities in England and Wales, amongst other things, to undertake
SAs of Local Development Frameworks; these SAs are also intended to fulfil
Strategic Environmental Assessment (SEA) requirements (Jackson, 2007). Sustainable
development is noted as being key to the reformed planning system (ODPM, 2005),
with SA as an integral part of the planning system (Defra, 2005a).
2.2.2
Sustainability Appraisal
This definition states exactly what type of best option should be considered. It is not
linked to policy or economic considerations. The UK Revised PPS12 (2008) defines SA
as 'an appraisal of the economic, social and environmental sustainability of the plan'
(DCLG, 2008). Within this policy, SA is linked to the Sustainable Communities
Strategy (SCS), where the current emphasis on sustainable development now lies.
However, the Local Authorities (LAs) and consultancies used in this dissertation have
used the older version of PPS12, in which the definition is slightly different (as the current
SCS was not in place then). Nevertheless, all government definitions are based on WS
principles, due to the inclusion of economic values and the omission of a definition of
what 'best option' really means (ODPM, 2005; DCLG, 2006).
Local authorities and the consultants that they employ define SA in two ways. Firstly,
there are those who mention SA contributing towards the achievement of SD (DCLG,
2006), such as Copeland Borough Council (2005), Elmbridge Borough Council (2005)
and West Oxfordshire District Council (2008) (Gardner et al., 2006). Secondly, there are
those who additionally incorporate an intergenerational view into their definition, such as
Liverpool City Council (2005) and Breckland Council (2008) (Costaras et al., 2006).
In the operationalisation of SA in LAs, the similarities are the definitions of SA, the
process which LAs use for SA, and the statutory stakeholders from whom they must
invite comment. Differences occur in the subject specialisms of the LA officers and
their consultants, and in the LAs' choice of non-statutory stakeholders consulted for SA.
Gibson (2006) states that one should establish the SA contribution as the main test of
proposed purpose, option, design and practice: 'The processes must put application of
these sustainability-based criteria at the centre of decision-making, not as one advisory
contribution among many' (Gibson, 2006). In UK policy, sustainable development is
promoted only by using SA. LAs are not required to justify national planning policy
when conducting SAs (for example, by appraising alternatives to national policy), even
if the non-policy alternative turns out to be the best option (ODPM, 2005); therefore,
Gibson's stance is not followed. Recent research within English regions has obtained
the view that SA is 'a weak science' with subjective outcomes, where the big issues are
sidestepped (Counsell et al., 2006).
[Figure: the LDF process and the parallel SA process over time. LDF process: evidence gathering; preparing options; choosing the preferred option; consulting on the LDF and SA reports; submission of the LDF; examination of the LDF report; adoption and monitoring. SA process: scoping/baseline; developing options; preparing the SA report; monitoring the significant effects. Timeline phases (years 0 to 2+): pre-production, production, examination, adoption.]
2.2.3
Sustainability Indicators
The process of using SA to establish the best option for English LDFs employs the
objectives, targets and sustainability indicators approach (ODPM, 2005). Sustainability
indicators (SIs) are derived from the objectives chosen, as one approach to gauging
progress towards SD is to use SIs (Bell et al., 2001). England is now on its third
generation of SIs, developed since the first UK SD summit, which was instigated after
the 1992 Rio Earth Summit (Hall, 2007). In addition to SIs having numerous
definitions, there are alternative methods for choosing SIs, and also various techniques
to determine if SIs are appropriate.
Definitions vary depending on the actors who define SIs. Most academic and
government definitions contain an element of measurement (Astleithner et al., 2004;
ODPM, 2005) and a change over time, space or both (Smith et al., 2001;
Astleithner et al., 2004; ODPM, 2005; Gasparatos et al., 2007). Cartwright et al.'s
(2000) survey of LAs showed that the majority of respondents (51%) indicated that the
primary role of SIs was to help monitor progress towards SD, with raising awareness
and educating people acknowledged as key issues. Writing about the UK SIs, Hall
(2007) suggests that the principal role of indicators is communication, particularly to
the public and to ministers who do not need a lot of detail. Some researchers consider
that SIs can be framed in terms of degrees of SS (Holland, 1997; Bastianoni et al.,
2005) and WS (Holland, 1997), while Wackernagel et al. (2005) state that the
ecological footprint indicator tracks core requirements of SS and identifies priority
areas for WS. However, other researchers do not agree with this (Ayres, 2000) and
consider ecological footprints, due to methodological flaws, not to have any value for
policy evaluation or planning purposes (Neumayer, 2003:197).
Two methods for choosing SIs have been proposed by Reed (2005): the reductionist
framework (which is expert led) and the bottom-up participatory philosophy, which
focusses on the importance of understanding local context (also known as the conversation
paradigm) (Bell et al., 2001). According to Gasparatos et al. (2007), SA so far has
relied on reductionist methodologies and tools. However, Bond et al.'s (1998) survey of
UK LA stakeholders' involvement in Agenda 21 showed that it was clear
that there had been community involvement, but the extent of the involvement was
unclear. A participatory integrated method has been utilised (Gupta et al., 2006), using
experts and a variety of stakeholders, not only to get stakeholders to identify indicators
but also to identify thresholds of acceptable and unacceptable risks (for dangerous
climate change). A key tool in communicating difficult concepts to stakeholders was
back calculation of cause and effect (related to climate change) (see Figure 1.1).
The researchers considered the production of a simple, usable, visual communication
method for their results to be an essential part of this process, even though this was
considered inappropriate by experts (Gupta et al., 2006). This method agrees with other
researchers' view that communities need to be thinking through and deciding the kind of
future that they want to create (Robinson, 2004).
Different criteria are used for deciding the appropriateness of SIs; for example,
Donnelly et al. (2007) propose criteria for the selection of four environmental
indicator types (biodiversity, water, climate and air) used in SEA, which is now part of
SA in England. Donnelly et al. (2007) consider it important to set criteria before a final
list of indicators is agreed upon, to ensure the most pertinent environmental issues for
SEA are properly addressed, yet other researchers use criteria both pre and post
decision-making (Lin et al., 2007) to decide appropriateness.
There is no single method that is easily repeatable from the point of view of LAs. From
the literature, there is no clear agreement on a set of criteria or measurement of
appropriateness of local SIs, but there are areas that many researchers agree need
further consideration. Table 2.1 shows some areas for further consideration in the
development of SIs. However, there is seldom a perfect SI, so the design generally
involves some methodological tradeoffs between technical feasibility, societal usability
and systemic consistency (Moldan et al., 2007).
Table 2.1: Areas for Further Consideration in the Development of SIs

Area for further consideration | Sources
Disaggregation of data | a, h, i, j
Small scales needed | g
Averages shade issues | l
Innovation needed | b, c, d
Current data is not being acted on | k
Relationships are not straightforward (linking cause and effect) | f
Indirect effects need consideration (climate, health and economy) | e

Sources: (a) Coombes et al. (2004); (b) Robinson (2004); (c) Beveridge et al. (2005); (d) Defra (2005a); (e) Bosello et al. (2006); (f) Doran et al. (2006); (g) Weich et al. (2006); (h) Hajat et al. (2007); (i) Lin et al. (2007); (j) Warren (2007); (k) Hanratty et al. (2008); (l) Spilanis et al. (2008).
In conclusion, SIs have been defined by researchers who have suggested that some SIs
measure degrees of sustainability, although this is contested by other researchers.
Choosing SIs with their thresholds of acceptability in a participatory integrated way
may be a clearer way to communicate with stakeholders concerning the degree of
sustainability available from the best plan chosen for the LDF, via the process of SA.
The gaps in knowledge to decide on the appropriateness of SIs need to be considered in
future criteria-based appropriateness assessments.
2.3
There are numerous sets of SIs, each with widely varying numbers (Daniels, 2007),
with sets of over 100 indicators being common (Rydin et al., 2003). Two examples of
different researchers using a lower number of SIs are Fraser et al. (2005) who used 55
SIs in their Guernsey study and Spilanis et al. (2008) who used a total of 37 SIs in their
Greek Islands study. Hall (2007) identified a total of 5000 SIs when developing the first
generation of UK SIs and, from these, ideally wanted to reduce them to a set of fifty.
This reduction in numbers was so the set might be more manageable and
understandable. Reed et al. (2006) indicate that stakeholder involvement can lead to a
large number of potential indicators and, when establishing the first generation of UK
SIs, Hall came away from a day's stakeholder involvement with a potential set of 400
SIs (Hall, 2007) rather than the 50 that were planned. Using their integrated method of
choosing indicators, Gupta et al. (2006) developed just 27 indicators of climate change
after using their considered method of both expert and stakeholder involvement.
Bossel's (2001) systems-based approach for deriving comprehensive indicator sets
requires exactly 42 indicators. It 'turns the focus from an uncertain ad hoc search and
bargaining process to a much more systematic procedure with clear goals to find
indicators that represent all the important aspects of viability, sustainability and
performance' (Reed et al., 2005). Currently, the UK's third generation set of SIs
numbers 68 (Defra, 2007). No exact numbers of SIs are suggested for LA SAs; the only
guidance is that the number of SIs needs to be manageable and developed with input
from relevant stakeholders (ODPM, 2005). However, LAs need to consider that
decision-makers and the public rapidly lose interest if presented with more than just a
few indicators (Moldan et al., 2007). Not only the number but also the types of SIs
chosen by LAs will be considered next.
2.4
this option - but many others are possible (Moldan and Dahl, 2007). A fourth pillar
(institutional indicators) was included in the system of sustainability indicators adopted
by the United Nations Commission on Sustainable Development (CSD) (Moldan et al.,
2007; Zidansek, 2007) and institutional indicators are currently used by some
researchers (Sheate et al., 2008). However, the institutional dimension is often
subsumed into the social dimension (Spangenberg, 2007).
Most indicator sets have assembled indicators for each of the three pillars whilst
neglecting the links between them. Interlinkage indicators are also called decoupling
indicators, and a number of the UK government strategy indicators take the form of
decoupling indicators. Decoupling is defined as 'how successful we are in breaking the
link between economic growth and environmental damage' (Defra, 2005a). From the
classification systems available, the three pillars idea has been used by academics from
many disciplines in their research (Holland, 1997; Ekins et al., 2003; Rydin et al.,
2003; Astleithner et al., 2004; Lehtonen, 2004; Counsell et al., 2006; Glavic et al.,
2007; Huby et al., 2007; Niemeijer et al., 2008; Sutherland et al., 2008).
The three pillars classification system will be used in this research, as the widespread
use of this approach and the advantage of ease of communication are considered to
outweigh the disadvantages of lack of linkage and decoupling, and the subsumption of the
institutional dimension. This research will be compared to a previously used method by
Bond et al. (1998), who present a fully referenced classification of the three pillars
system, which will be adapted to classify LA SIs in 2008.
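The three pillars classification can be sketched as a simple mapping from each indicator to one pillar, from which pillar counts follow directly. This is an illustration only: the indicator names below are hypothetical, not drawn from any Local Authority set or from the Bond et al. (1998) scheme itself.

```python
# Illustrative sketch of a three-pillars classification. The indicator
# names are hypothetical placeholders, not taken from any actual LA set.
PILLARS = ("economic", "environmental", "social")

# Each indicator is assigned to exactly one pillar.
classification = {
    "new business registrations": "economic",
    "river water quality": "environmental",
    "access to key services": "social",
}

def count_by_pillar(classified):
    """Tally how many indicators fall under each of the three pillars."""
    counts = {pillar: 0 for pillar in PILLARS}
    for pillar in classified.values():
        counts[pillar] += 1
    return counts

print(count_by_pillar(classification))
# {'economic': 1, 'environmental': 1, 'social': 1}
```

In practice the interesting output of such a tally is the balance between pillars across an LA's whole indicator set, which is what the classification exercise in this research examines.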
2.5
2.5.1
Definitions of rural and urban vary both within and between countries, and within
organisations worldwide a number of classification systems exist to divide rural and
urban (OECD, 1994; Reschovsky et al., 2002; Chomitz et al., 2005; Buckwell, 2006;
Fotso, 2006; Gallego, 2006; Weich et al., 2006; Huby et al., 2007; Vickers et al., 2007;
Zonneveld et al., 2007), as different users have different needs (Champion et al., 2006).
14
However, there are unclear and contradictory usages of the term rural within England
(Haynes et al., 2000; Baird et al., 2006; Keirstead et al., 2007; Manthorpe et al., 2008).
Sometimes a definition is not obvious in an academic paper (Ulubaşoğlu et al., 2007),
and some academics consider that, in cultural, social and economic terms, the notion of
rurality in a country such as the United Kingdom is outdated (Champion and
Shepherd, 2006). In 2004, the Office for National Statistics (ONS) published a new
definition of rural areas covering England and Wales, launched alongside Defra's Rural
Strategy (Champion and Shepherd, 2006). Defra considers the new definition of rural
areas to offer a distinctly different, potentially more useful and more transparent
approach to identifying LA districts (LADs), making it possible for policy makers,
researchers and others to interpret their results against a known set of benchmarks
within the classification (Defra, 2005).
England's 354 unitary authorities and LADs have been allocated to one of six main
types. Three of these (176 LADs in all) are overwhelmingly urban in nature and are called
Major Urban, Large Urban and Other Urban (large market town). The rural types,
of which there are 178, are called Significant Rural (rural town), Rural 50 and
Rural 80, according to the proportion of people in rural settlements. Thus, Rural 80
LADs have between 80 and 100% of their people in rural settlements and Rural 50
LADs have more than 50%, while Significant Rural LADs have more than the national
average of 26% (Champion and Shepherd, 2006). The percentage of residents in each
of the six types (2003 data) is illustrated in Figure 2.2.
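The percentage thresholds quoted above can be sketched as a simple classification rule. This is a simplification under stated assumptions: the official Defra method also distinguishes the three urban types (Major Urban, Large Urban, Other Urban) using settlement and morphology data, which cannot be separated on the rural percentage alone, so they are collapsed to a single "Urban" label here.

```python
def settlement_type(pct_rural):
    """Classify a district by the percentage of its residents living in
    rural settlements, following the thresholds above (Champion and
    Shepherd, 2006): Rural 80 at 80-100%, Rural 50 above 50%, and
    Significant Rural above the 26% national average. A sketch only;
    the three urban types are collapsed into one label."""
    if not 0 <= pct_rural <= 100:
        raise ValueError("percentage must lie between 0 and 100")
    if pct_rural >= 80:
        return "Rural 80"
    if pct_rural > 50:
        return "Rural 50"
    if pct_rural > 26:  # above the 26% national average
        return "Significant Rural"
    return "Urban"  # Major Urban / Large Urban / Other Urban

print(settlement_type(85))  # Rural 80
print(settlement_type(40))  # Significant Rural
```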
From this, it can be established that overall 63.5% of England's residents live in urban
areas, while 36.5% live in rural areas (2003 statistics). Each LA is classified into one
of the six designations. However, each LA can be further sub-classified (using the six
designations) and, as can be seen in Figure 2.3, neither Rural 80 (R80) nor Major Urban
(MU) is a homogeneous classification: R80 can contain urban classifications and
MU can contain rural classifications. Nevertheless, this classification system is more
transparent than others that have been used previously in England, and research findings
can be discussed with this classification in mind.
As there is some transparency in this classification system (Defra, 2005) and the
classification system can be applied to all LAs in England, and also as a large amount
of administrative and other data is available only at this level (Champion et al., 2006),
this classification system was used in this dissertation.
2.5.2
A number of organisations now use Defras classification of rural and urban areas in
their more recently produced documents and research (CRC, 2008; RCEP, 2008). In
England, researchers and organisations have suggested that different priorities are
needed for urban and rural areas in order to lead towards sustainability (Doran et al.,
2006; Champion et al., 2006; CRC, 2008; RCEP, 2008).
Carbon dioxide (CO2) emissions will be considered for both urban and rural areas,
bearing in mind that the UK has the goal of a 60% cut in CO2 emissions by 2050
(RCEP, 2008). 'The greatest new driver of public policy for rural communities, as for
the nation as a whole, is climate change. Climate change poses particular challenges for
rural communities, both in terms of the sustainability of people's car-reliant lifestyles
and in the way landscapes and biodiversity will adapt' (CRC, 2008).
The density and infrastructure of urban areas help make them more efficient in terms
of per capita energy consumption and emissions, which are lower in many of the UK's
major cities than the national average (RCEP, 2008). CRC (2008) agrees with this point,
as shown in Figure 2.4, and states that this is because people living in rural
areas carry out much more of their travelling by car. In all three rural classifications,
transport has a significantly larger (approximately twice the value) CO2 emission value
than in all three urban classifications.
Figure 2.4: Estimates of End User CO2 Emissions for 2005 in England,
Using Defra's Area Classification
Reducing the CO2 concentration is a local as well as a global sustainability issue. This
may need to be achieved by different methods for rural and urban areas, so the
significant differences need to be studied. In this example, car transport is a
significant aspect in rural areas, and indicators to monitor car usage in rural areas are
needed to increase sustainability. As previously stated (Defra, 2005a), indicators do not
always stand alone - they can link together. Currently, there are several sustainability
issues linked to the lack of public transport in rural areas, two of which are illustrated
below:
1. Increasing rurality is associated with the greater pace of growth in the number
of the oldest people (Buckwell, 2006; Champion et al., 2006), and the need to
travel disproportionately affects these people in rural areas, particularly for
those without their own cars (Baird and Wright, 2006).
2. With regards to adult literacy and numeracy, lack of transport, access and
childcare are major barriers to learning in rural areas (Atkin et al., 2005).
RCEP (2008) recommends that, before development plans are approved, the
government should publish 'a clear assessment of the transport infrastructure needs for
all proposed housing growth, how they will be funded and the environmental and
health impacts of meeting those needs. This should be accompanied by a clear plan for
phasing in the necessary supporting infrastructure, ensuring that this new transport
provision is environmentally sustainable'.
Overall, the 36.5% of English people who live in rural areas produced twice as much
carbon dioxide per person from personal transport use in 2005 as the 63.5% of
people who live in urban areas. This one piece of evidence indicates that there are real
differences between rural and urban sustainability. Rural and urban areas can face the
same issues (such as climate change) at different scales, so they
may need different solutions to achieve the same level of sustainability. Conducting SA
to choose the best option, with the most appropriate choice of sustainability indicators,
when choosing an LDF may help address this rural/urban difference and achieve a higher
level of sustainability within all LDFs.
The remainder of this dissertation examines how appropriate the choices of SIs are
when choosing the best option LDF.
2.6
Objectives
From the classifications available for sustainability, different actors will choose
different definitions from the WS or SS viewpoint. Recent research within English
regions has shown that SA is 'a weak science' with subjective outcomes, where the big
issues are sidestepped (Counsell et al., 2006), although some authors are of the
opinion that SIs can measure degrees of sustainability and are therefore a useful tool.
Guidance exists for SA (ODPM, 2005); however, there is a gap in this guidance on how
to reliably assess the appropriateness of SIs. In order to fill this gap, the overall aim of
this project is to develop and apply a reliable criterion-based assessment that LAs can
use to assess the appropriateness of the SIs used when selecting the best option for an
LDF. This project aim will be addressed through three specific objectives:
3.1
To research the similarities and differences in the appropriateness of SIs in rural and urban
areas of England, the two extremes of rurality, Major Urban (MU) and Rural 80 (R80),
were used. There are nine relevant regions in England and, for this study, three regions
were chosen using the method shown in Figure 3.1.
Figure 3.1: Decision Tree Method for Choosing Regions and Local Authorities
3.2
To locate the sustainability indicator definition used by individual LAs, the relevant LA
document containing the indicators was first consulted. If the indicators were found in a
separate appendix, then the scoping report was consulted. Using this method, it was
seen that 'sustainability' and 'sustainable development' were considered to have the
same meaning, with many government documents using these phrases interchangeably.
How LA officers and consultants define SIs is considered in the survey.
3.3
Appropriateness of Indicators
3.3.1
Choosing Criteria
The author followed the suggestion by Goodwin (2006) regarding the four key criteria
involved in the use of documentary sources of qualitative data. Therefore, the
information sources used for this criterion-based method were checked by the author
for authenticity, were credibly recorded, representative of the literature search carried
out, and finally the author decided whether the source could be used in the literal sense
or not. A choice was made to use authors such as Lin et al. (2007), whose research
contains evaluative criteria on gender equality and health for suites of indicators
without explicitly naming those indicators as SIs, but whose work nevertheless
contributes towards the criteria-based method in a credible manner - in this
case, for disaggregation of data. Authors such as Niemeijer and de Groot (2008) were also
included as they met the definition that their environmental indicators became SIs with
the addition of time, limit or target (Rickard et al., 2007:75).
From a literature search of 121 sources, 27 were chosen and used, with sources being
equally weighted to show transparency in method (Moles et al., 2007).
The results from the above process were then used to construct the criteria. Quotes
from each source were divided into groups according to their subject matter and
thirteen criteria groups formed. A short title was given to each criterion to reflect the
subject matter of the material it contained. Each criterion was divided into two or three
main areas, reflecting the complexity of the data in that group and a hierarchical
scoring system created, attaching a score of 0-3 for each level of hierarchy to reflect the
level of complexity in achieving that score. Finally, commonalities were established
between the thirteen criteria and three super criteria groups were formed. The full
criteria and scoring system is contained in Appendix 1. The criteria and super criteria
chosen are shown in Table 3.1.
Super Criteria     Criteria
Credibility        Leads to strong sustainability; Academic credibility;
                   Addressing uncertainty; Environmental receptors addressed
Measurement        Locality; Disaggregation of data; Measurability; Reliability
Local Authority    Relevant to plan; Actionable; Stakeholder involvement;
                   Funding/cost; Easily communicated
Table 3.1: Criteria and Super Criteria Used to Assess Appropriateness of SIs
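The hierarchical scoring described above can be illustrated with a short sketch. This is not the author's actual worksheet: the criterion groupings follow Table 3.1, but the function, variable names and example scores are invented for illustration.

```python
# Illustrative sketch (not the author's actual scoring sheet): each of the
# thirteen criteria is scored 0-3 (locality effectively 0-2), and scores are
# summed into the three super criteria and an overall mark out of 38.
SUPER_CRITERIA = {
    "Credibility": ["Leads to strong sustainability", "Academic credibility",
                    "Addressing uncertainty", "Environmental receptors addressed"],
    "Measurement": ["Locality", "Disaggregation of data",
                    "Measurability", "Reliability"],
    "Local Authority": ["Relevant to plan", "Actionable",
                        "Stakeholder involvement", "Funding/cost",
                        "Easily communicated"],
}

def appropriateness(scores: dict) -> dict:
    """Sum per-criterion scores into super-criteria totals and a /38 total."""
    totals = {sc: sum(scores.get(c, 0) for c in criteria)
              for sc, criteria in SUPER_CRITERIA.items()}
    totals["Indicator Total"] = sum(totals.values())
    return totals

# Invented scores for a single hypothetical indicator:
example = {"Academic credibility": 2, "Measurability": 3, "Reliability": 2,
           "Actionable": 1, "Easily communicated": 3}
print(appropriateness(example))
```

A criterion for which no information is found in the scoping report simply contributes zero, matching the zero-level score described in section 3.3.2.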
3.3.2
The following method was used for all thirteen criteria. All data obtained was easily
found within the main body of the LA scoping report or appendices. Each criterion was
scored as zero, one, two, or three. An example of how the criterion of reliability was
scored is presented in Table 3.2.
Each criterion has the possibility of a zero level score; this was scored where no
information pertaining to this criterion could be found.
Table 3.2: Example of the Hierarchical Scoring for the Reliability Criterion
(scores range from 0, no reliability, to 3)
3.4
3.4.1
The author's skill base and knowledge base were assessed as to how subjective they
would be towards scoring the thirteen criteria. Consideration was given as to how to
scale the author's subjectivity. Two options of matrix-rating scales were considered:
Likert and semantic differential scales (SurveyMonkey, 2008a). However, neither of
these matrix-rating scales was considered suitable. Finding the correct wording for
subjectivity for five points on the Likert scale, or for a balanced or unbalanced
semantic scale, did not work with ease when trialled. Whatever the author self-scored
was unlikely to be objective, so the opposing ends of the semantic scale, subjective
and objective, were not suitable. Therefore, a two-point system of weakly subjective
and strongly subjective was used. A mid-point was avoided, as the author was marking
herself and needed to be forced away from a neutral response. Strongly subjective
indicated that either the author's skills or knowledge needed upgrading to reach the
level of weakly subjective, which was nearer to objective, but not considered objective.
Overall, the author self-scored as weakly subjective for environmental
indicators, weakly subjective for social indicators and strongly subjective for
economic indicators.
The environmental impact statement (EIS) review package for assessing EISs uses a
criteria-based method involving 92 criteria in eight sections, compared to the author's
13 criteria in three sections (super criteria). Subjectivity in the EIS review package is
reduced, as the EIS is assessed by two independent reviewers on the basis of a double
blind approach. Here each reviewer assesses the EIS against the criteria and the
reviewers then compare results and agree grades (Glasson et al., 2005:395-407). It was
not possible to use this approach in this study, although such an approach would have
been most suitable for this method.
The overall criteria that needed to be examined more closely in the analysis of the data
were addressing uncertainty, locality and funding/cost (for which the author was
strongly subjective in both skills and knowledge) and scores for economic indicators
also needed to be scrutinised.
3.4.2
Once the criteria method had been created, the provenance of the data used was
partially analysed to assess whether the subject of the lead author or government
document showed any preference to one subject specialism. The appropriateness
criteria were created from 27 sources (see Appendix 1).
Lead authors from an environmental or scientific subject area (48%) dominated the
sources used (see Figure 3.2). Economists formed the lowest percentage used,
comprising just 8%. This should be considered when analysing appropriateness;
however, most of the sources used had more than one author and their associated
subject specialisms are not considered further here.
Therivel and Ross (2007) concur with this statement, therefore it was concluded that
rankings cannot be used in the substantiation of a measurement.
RCEP, 2008; Tasser et al., 2008). The challenge is for LAs to find a complementary set
of biodiversity indicators for their area, based on the type that they need for their policy
and not based on the type of abundant information available (Biggs et al., 2007), as in
the case of populations of birds, which the author assesses as having no academic
credibility.
(5) Locality:
Locality is based on the definition of local being appropriate to the LA boundaries,
but not necessarily within the boundaries and can include any transboundary effects in
or out of the political boundaries. To assist with defining locality, the author used
physical maps of England, Ordnance Survey (OS) maps of some areas and the Excel
table of definitions of the six rurality components (Defra, 2004) within the LA (as
illustrated in Figure 2.3). However, there are some potential barriers such as the poor
quality of databases, especially at a local level, which are a potential threat to the
quality of the related indicators (Moldan et al., 2007:10).
The overall marking scheme for appropriateness ranges from 0-3, but in reality the
score of three is not available for locality, as insufficient hierarchical information was
obtained from the literature research. However, this criterion was kept as it was
considered important by three groups of researchers. A scale of zero to two was used,
yet in practice a score of two was infrequent due to the information not being easily
available in sufficient detail in the scoping report. A maximum score of two in this
criterion gives less internal reliability.
from 5% to 6%. Although these are temporally different reports with different statistics,
it can be seen that a proportion of the homeless in England are ex-service personnel and
the local occurrence of this issue could be assessed for different plans if the SI of
homelessness is disaggregated appropriately.
The hierarchy scoring for this criterion is based firstly on the appropriate amount of
disaggregated data, for scores of zero to two, and then for a score of three the use of
current academic research describing areas where different groups of people can be
disadvantaged. Collecting evidence for this criterion was limited to the information
found in the indicator table used. The reason for this was that all who read the
document should easily be able to access the indicators that they feel are relevant to
them and see how improvements, via different plans and targets, are proposed.
(7) Measurability:
This refers to how the values in an indicator are measured and the extent to which it
measures reality (Bauler et al., 2007:56). Like models, indicators can reflect reality
only imperfectly; however, even within the measurable the quality of indicators is
determined largely by the way reality is changed into measures and data, be they
qualitative or quantitative. The quality of indicators inevitably depends on the
underlying data that are used to compose them (Moldan et al., 2007:9). The hierarchy
chosen for this criterion at first glance appears to have a subjective view biased towards
science and economics rather than social science: from bottom to top, the hierarchy
goes from qualitative measures to quantitative measures that can be adjusted
to reflect individual situations. To balance the ways different disciplines measure in
qualitative and quantitative terms, a number of ways of scoring, by choosing different
statements within each score, have been made available to give all three pillars of
sustainability an equal chance of achieving all scores.
(8) Reliability:
This criterion has the title of reliability (rather than reproducibility or repeatability) to
enable both the general meaning of the word to be conveyed and the statistical
interpretation of this word to be considered in the discussion. The hierarchical scoring
was based on the three prominent factors involved when considering reliability:
stability, internal reliability and inter-observer consistency (Bryman, 2004:71). More
detail on this theory can be found in section 3.4.6 of this dissertation.
(10) Actionable:
Actionable is an important criterion for LAs, as they are accountable to the people who
live there and need to be seen to be actioning targets, developed from SA
objectives and indicators, which lead to more sustainable local living. However, it
would be easy for LAs to choose SIs with targets that have quick fixes in order to
persuade policy makers and the public that sustainability is achievable (Astleithner et
al., 2004). The harder longer-term actions that could lead to stronger sustainability,
often outside the temporal frame of the LDF, can be less visibly actionable so less
attractive to use. Therefore, there is conflict that needs to be addressed between what is
sustainable for the community and the political nature and timeframe of the LDF.
Crucially, effective action is much less common than cheerful visions and passionate
endorsement (Gibson, 2006), so an indicator scoring three in actionable does not
ensure that effective action occurs. In this criterion the hierarchical scoring goes from
WS, the easiest indicators to action (scoring one), through to SS where greater
knowledge and resources are needed to action indicators (scoring three).
methods, in agreement with the views of Niemeijer and de Groot (2008) and Hall
(2007).
Commissioner (Hall, 2007:301), and editions of Sustainable
Development Indicators in your Pocket have proved to be very popular and been
applauded by a variety of stakeholders (ibid.:302). Their popularity is due to a high
percentage of the information being communicated simply and visually (traffic light
evaluation method, charts and graphs), which links to the fact that 60% of all people
prefer a way of learning that is visual (Gardner et al., 2003). The hierarchy used
considers local community concerns being important (Holland, 1997) for a score of
one. The top score is given when the media starts to use SIs to communicate and, at a
higher level, analyse the changes over a period of time.
3.4.3
A compatibility matrix, built using the table of appropriateness in Appendix 1, was
used to assess the level of compatibility between the thirteen criteria. This serves to
highlight potential conflicts between pairs of criteria; Table 3.3 indicates where
conflicts may arise. The results show 28 potentially incompatible criterion pairings
and, out of these, 26 are located within the Local Authority super criterion, one in
measurement and one in credibility. The criterion of funding/cost was potentially
incompatible with all 12 of the other criteria. The compatible
pairings are more evenly spread amongst the three super criteria, and for the uncertain
pairings the most uncertainty occurs in the credibility super criteria. It should be
noted that the criterion of funding/cost is also an area in which the author exhibits
strong subjectivity.
Table 3.3: Compatibility Matrix of the Thirteen Criteria (rows and columns grouped
by the super criteria credibility, measurement and Local Authority)
Key to compatibility: potentially incompatible; uncertain; compatible; no links.
3.4.4
Local Authority      Region         Defra classification   Number of indicators
Breckland            East England   Rural 80               59
Dacorum              East England   Major Urban            143
West Oxfordshire     South East     Rural 80               59
Mole Valley          South East     Major Urban            120
Table 3.4: Local Authorities Used in the Trial Run
From trialling the four LAs containing a total of 381 SIs, it was concluded that:
A wide range of values was observed for computations, such as scores for
individual indicators of 0-30 out of a total of 38 - this could allow
appropriateness to be established.
Three criteria were identified that needed their scoring method adapted:
Locality, Stakeholder Involvement and Relevant to Plan (see Appendix 1 for
final scoring system).
Table 3.5: Trial Run Scoring of Appropriateness: Some Results from Dacorum
(Score sheet rows run through the individual criteria - academic credibility,
addressing uncertainty, locality, disaggregation of data, measurability, reliability,
relevant to plan, actionable, stakeholder involvement, funding/cost, easily
communicated - with running totals of Credibility Total/12, Measurement Total/11 and
Indicator Total/38; an example Dacorum indicator is 'Area of semi-natural habitat
lost to development'.)
3.4.5
A reference indicator set was created and built up throughout the scoring of the
38 sets of indicators, which contained 2,970 individual indicators. For ease of use,
the set was grouped across 18 separate pages of an Excel spreadsheet as it was built
up. Table 3.6
illustrates the grouping. Indicator scores from the reference indicator set were not
directly transferred to the LA score sheet, even if the indicator had exactly the same
wording. Firstly, four criteria were scored individually: locality, disaggregation of data,
relevant to plan and stakeholder involvement; all of which gave unique scores for each
LA. As such, the same indicator may have different scores in different LAs (range of
difference 0-11). Similarly worded indicators were subjectively judged as to whether
they had the same score as the reference indicator or were added as a unique indicator
to the reference indicator set. The total number of SIs in the reference set at the end of
the scoring period was 325.
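One reading of this procedure is that nine of the thirteen criteria keep the score recorded in the reference set, while the four LA-specific criteria (locality, disaggregation of data, relevant to plan and stakeholder involvement) are rescored for each authority. The following is a sketch under that assumption; the function, criterion names and scores are invented for illustration.

```python
# Sketch (an assumed reading, not the author's spreadsheet): the four criteria
# below are rescored per Local Authority; the rest transfer from the reference set.
LOCAL_CRITERIA = ["Locality", "Disaggregation of data",
                  "Relevant to plan", "Stakeholder involvement"]

def la_score(reference_scores: dict, local_scores: dict) -> int:
    """Combine shared reference-set scores with the four LA-specific criteria."""
    shared = sum(v for k, v in reference_scores.items()
                 if k not in LOCAL_CRITERIA)
    return shared + sum(local_scores.get(c, 0) for c in LOCAL_CRITERIA)

# Invented reference scores for one indicator, rescored for a hypothetical LA:
ref = {"Academic credibility": 2, "Measurability": 3, "Reliability": 2}
print(la_score(ref, {"Locality": 2, "Relevant to plan": 3}))  # 7 + 5 = 12
```

Because the four local criteria can contribute anywhere from 0 to 11 marks, the same reference indicator legitimately receives different totals in different LAs, which matches the 0-11 range of difference reported above.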
Reference Indicator        Number of Reference
Classification Category    Indicators in Category
Biodiversity               30
Business                   16
Education                  26
Energy                     41
Health                     16
Heritage                   16
Housing                    18
Materials                  12
Pollution                  27
Recreation                 10
Transport                  14
Waste                      29
Water                      11
Table 3.6: Reference Indicator Classification Categories
3.4.6
The appropriateness method needed to be checked for reliability over the time the data
analysis was carried out. Dacorum was chosen for this reliability test as it was the first
LA to be scored after the trial run was carried out and the reference indicator set was
created, and it was also temporally appropriate. It contained 143 SIs, which was at the
top end of the range of number of SIs per LA (the range was 24-151). See Table 3.7 for
details.
Three prominent factors are involved when considering reliability: stability, internal
reliability and inter-observer consistency. Stability refers to administering a measure to
a group and then readministering it. If there is stability, then there will be little variation
over time in the results obtained. If it is a long span of time (a year or more), external
variables can change (Bryman, 2004:71).
Table 3.7: Stability Test for Dacorum - Criterion Score Totals on 21.06.08 and
27.07.08 (overall Indicator Totals of 2,399 and 2,458 respectively, a change of 2%)
In this test for stability, the time of 37 days was deemed to be a reasonable time span
and appropriate for the measurement being used. However, a consideration of the
actual change or perception of the change (for example, in the UK economy over this
period of time) could be a variable that influences the author's viewpoint when scoring
economic indicators. The results show that the change over 37 days between the two
Dacorum total scores was 2%. Two individual criteria outliers were academic
credibility and disaggregation of data. However, the super criteria credibility (3%)
and measurement (4%), in which these two criteria sit, are still at acceptable levels of
repeatability.
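The stability check reduces to a percentage-change calculation between the two scoring runs. A minimal sketch follows, using the two Dacorum totals reported in Table 3.7; the function itself is invented for illustration.

```python
def percent_change(before: float, after: float) -> float:
    """Percentage change between two scoring runs of the same indicator set."""
    return (after - before) / before * 100

# Dacorum Indicator Totals from the two runs (21.06.08 and 27.07.08):
june, july = 2399, 2458
print(f"{percent_change(june, july):.0f}% change")  # prints "2% change"
```

The same function applied per criterion would flag the outliers noted above, since a criterion whose total moves much more than the overall 2% warrants scrutiny.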
Internal reliability looks at whether the indicators that make up a scale are consistent
(Bryman, 2004:71). In relation to this method, it should be considered whether each
criterion's scoring of 1-3 is consistent between criteria (i.e. a score of two in the
criterion addressing uncertainty should equal a score of two in the criterion locality).
To address internal reliability, a statistical method could be applied to the data, either a
split-half method or Cronbach's alpha. This was not undertaken and the author notes
this omission for future research.
The third factor involved with reliability, inter-observer consistency, concerns the
lack of consistency between judgements made by more than one observer. However,
this could not be assessed within this research, as only one observer was used.
In conclusion, this method has stability, when using a reference set of indicators and
when measured by one researcher over a medium period of time (months) using one
LA. Events that influence the degree of consistency were considered, but not calculated
within the measurement of stability. The method has the possibility to be tested further
to assess whether other researchers could use it and obtain stability and inter-observer
consistency. Internal reliability should be calculated.
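Had internal reliability been calculated, Cronbach's alpha is the standard statistic for it. The following is a self-contained sketch of the textbook formula; the scores shown are invented and are not drawn from the dissertation's data.

```python
def cronbach_alpha(items):
    """Cronbach's alpha: items is a list of score lists, one per criterion,
    each holding one score per indicator (all lists the same length)."""
    k = len(items)
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = sum(var(it) for it in items)
    totals = [sum(col) for col in zip(*items)]  # per-indicator total scores
    return k / (k - 1) * (1 - item_vars / var(totals))

# Invented scores for three criteria across five indicators:
a = cronbach_alpha([[2, 3, 1, 2, 3], [2, 2, 1, 2, 3], [3, 3, 1, 2, 2]])
print(round(a, 2))  # 0.84
```

Values approaching 1 would indicate that the thirteen criteria scale consistently; a low value would confirm the internal-reliability concern raised above.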
3.4.7
Glasson et al. (2005:145) and Moles et al. (2007) indicate that weighting seeks to
identify the relative importance of criteria. The research by Malkina-Pykh and
Malkina-Pykh (2007) on quality-of-life indicators used the main approach for deriving
weightings, expert opinion, to determine the list of criteria and their significance,
in agreement with the methods used by Moles et al. (2007)
and Astleithner et al. (2004). Authors agree that panels of experts should decide
weightings (Astleithner et al., 2004; Glasson et al., 2005; Moles et al., 2007), but
weighting systems generate considerable debate (Glasson et al., 2005). In this study,
the author decided to give all criteria equal weighting, so that no judgements were
made as to which criterion was more or less important, and so as to make aggregation
as transparent as possible (Moles et al., 2007). As the author has strong subjectivity on
three of the thirteen criteria, and was not part of an expert team, adding weighting as a
further variable affecting the internal reliability of the method was considered
unsuitable.
3.5
The frequency of different groups of SIs within each LA was established using the
reference set of indicators. SIs in each LA were ranked by individual indicator score,
from highest to lowest. The top ten, including equal tenth, were then grouped using the
reference set of indicators. The frequency of each group was then measured within the
top ten and compared to similarly categorised rurality LAs. The stability of the
groupings was good, as the reference indicator set proved to be repeatable over time
when scoring for appropriateness.
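The 'top ten, including equal tenth' rule can be made precise: every indicator whose score ties the tenth-highest is retained before group frequencies are counted. A sketch follows; the group names and scores are invented for illustration.

```python
from collections import Counter

def top_ten_groups(indicators):
    """indicators: list of (group, score) pairs. Returns the frequency of each
    group among the top ten scores, keeping any indicator tied with tenth place."""
    ranked = sorted(indicators, key=lambda gs: gs[1], reverse=True)
    if len(ranked) > 10:
        cutoff = ranked[9][1]              # score of the tenth indicator
        ranked = [gs for gs in ranked if gs[1] >= cutoff]
    return Counter(g for g, _ in ranked)

sample = [("water", 30), ("transport", 28), ("waste", 28), ("energy", 27),
          ("biodiversity", 27), ("health", 26), ("housing", 25), ("air", 25),
          ("education", 24), ("heritage", 23), ("business", 23), ("soil", 20)]
print(top_ten_groups(sample))  # "business" stays in: it ties tenth on 23
```

These per-LA frequency counts can then be compared across LAs with the same rurality classification, as described above.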
3.6
SIs were copied word for word from the 38 LAs. A subjective decision was made when
an LA separated one indicator (from a published set) into two or more parts as to
whether to count it as one or more indicators. This happened on only four occasions, as
LAs usually copied indicators from known indicator sets as a whole indicator. When an
indicator was used twice or more in an LA indicator set, it was used only once for all
methods used in this research.
3.7
The method of Bond et al. (1998) was used as the basis to classify the 2,970 SIs.
Decisions were made when classifying a number of indicators when they were not
obviously within one of the economic, environmental or social categories. Appendix 4
illustrates the justifications for each indicator that did not fit straightforwardly into one
category.
3.8
Survey
A short survey was undertaken to augment the data retrieved from the literature search,
web-based documents from LAs and results from the appropriateness method. To
enhance the Environmental Management System (EMS) of this dissertation, the survey
sent to the 38 LAs and the six consultants working for LAs was carried out electronically
(Yun et al., 2006). The electronic method chosen was SurveyMonkey software, which
proved to be easy to use, good quality, flexible and professional looking. In order to
achieve the clearest questions and highest response rate, a selection of ideas were used
from Cohen et al. (2000), Bryman (2004) and SurveyMonkey (2008a; 2008b). To
ensure the person involved in choosing SIs was the person answering the survey, the
LA officers and consultants were individually contacted by telephone prior to sending
out the survey.
3.9
Summary
This chapter has described the methods developed to assess the appropriateness of SIs
and justified each area of the method produced. The methods, with justifications, that
augment the appropriateness method have also been described.
The next chapter discusses the results obtained when the developed methods were
applied to SIs from 38 LAs and whether any differences or similarities were seen which
were dependent on the ruralness designation of the LA.
4.1
4.1.1
Individual Indicators
To establish whether an SI was appropriate, a mark out of 38 was given. The author
assumed that the higher the score then the more appropriate the indicator, similar to that
seen in the EIS review package (Glasson et al., 2005:395-396). However, Donnelly et
al. (2007) would disagree, considering that an environmental SI can be appropriate if it
meets just one of their criteria. The reference indicator set used gives this
appropriateness method stability and the scores for individual indicators were
explained by the differences in scoring of four of the thirteen criteria: stakeholder
involvement, relevant to plan, disaggregation and locality, which give a range of
0-11 marks that are applicable only to individual LAs. An example is Woking, which
has the highest score at both ends of the range of scores within one LA (from 14
minimum to 34 maximum). These high scores are due to a high starting score of seven
marks from three of the four individual criteria scores unique to Woking. Therefore, the
difference in scores for individual indicators with the same wording has no relationship
to the region or rurality of the LA. However, similar indicators can be compared
between different LAs and relative appropriateness can be assessed. A cut-off point of
appropriate versus inappropriate could have been set but, as internal reliability within
the thirteen criteria had not been established, it would have been unsuitable to do this
until the method had been amended to take account of this. However, it could be suitable to look
at the type of individual indicators that score the highest marks and were therefore
measured by this method as being the most appropriate within each LA.
4.1.2
Environmental sustainability indicators were found most frequently in the top ten of the
38 LAs, while economic and social indicators were correspondingly under-represented.
'Water' appeared in the top ten in 37/38 LAs, 'air and climate' in 33/38,
'biodiversity' in 33/38 and 'energy' in 24/38. 'Business' was found the least (1/38),
while 'benefits and work', 'education', 'heritage', 'materials' and 'recreation' each
appeared in only 2/38 top tens. The frequency with which the top four types of
indicator occurred within the top ten of LA indicator sets was not reliant upon the
rurality designation of the LA.
No consistent pattern between the percentage of each of the three pillars and the top ten
scores within individual LAs was established. Take, for example, Uttlesford, where SIs
are 14% Economic (Ec), 28% Environmental (En) and 59% Social (So) (compared to
the average of 21% Ec, 47% En and 32% So). However, for Uttlesford, its top ten were
40% transport, 20% waste, 10% biodiversity and 10% air and climate. Therefore
80% were En SIs, compared to a score of 28% in total using the three pillars
classification. Top ten En SIs were higher scoring in all four criteria in the super
criterion of credibility, the difference being as many as 12 marks. This could be due
to a number of factors: the large proportion of lead authors who are in scientific or
environment fields (see section 3.4.2) especially in the credibility super criterion
scoring of hierarchies, the weak subjectivity of the author in three out of four of the
credibility criteria, and a longer tradition of seeing environmental indicators as SIs and
more credible data being available. No link was found between the number of
indicators and top ten scores.
When there was an anomalous result for top ten groups, this result was investigated
further. An example is given below. The case was noted of health indicators being in
the top ten of 7/9 NW MU LAs, with St Helens having 25% of their top ten as health
indicators. Statistics for 2001-2005 show that 1,300 more people than the national
average died from cancer in the NW, with 60% of these excess deaths being from lung
cancer caused by smoking (Lemon et al., 2007). All nine LAs in this sample from the
NW had indicators to cover this. However, Trafford had four indicators to reflect this,
including 'The number of smokers who had set a quit date and had successfully quit at
four week follow up (based on self-report) with NHS stop smoking services', which
scored 20, and also an indicator disaggregated into specific wards, 'The difference in
all age, all cause mortality (per 100,000 population) between the top (Clifford,
Bucklow-St. Martins, Urmston and Gorsehill) and bottom (Hale Barns, Hale Central,
Brooklands and Timperley) quintile wards in Trafford', which scored 26. The second
indicator clearly had a sound database to work from and a suitable level of
disaggregation, whereas quitting smoking for four weeks is self reported and does not
have any long-term follow up, which is crucial as after initially successful quit
attempts, many people return to smoking within a year, reducing the public health
benefits of investment in smoking cessation (Lancaster et al., 2006). St Helens also has
a specific health issue in that the mortality rate for males from heart disease (2005) is
significantly higher than in the rest of England and Wales (Halton and St Helens PCT,
2007).
Soil and land indicators in Suffolk Coastal (27%) and Mid Suffolk (35%) LAs were anomalous
results, where there appeared to be a conflict between the high numbers of new housing
needed and local geological SSSIs being preserved (Mid Suffolk District Council,
2008; Natural England, 2008).
Environmental sustainability indicators were found most frequently in the top ten of the
38 LAs, but this was not reliant on the rurality designation of the LA. No consistent
pattern between the percentage of each of the three pillars and the top ten scores within
individual LAs was established, nor was a link found between the number of indicators
and top ten scores. When there was an anomalous result, it could be attributed to an
MU or R80 designation within a region, but this was not consistent between regions.
4.1.3
Criteria
The top three appropriateness marks for all LAs (Combined MU and R80) were for
easily communicated (83%), measurability (67%) and actionable (61%). Two of
the three criteria were found in the super criterion Local Authority; measurability was
found in the super criterion measurement. This result is backed up by Bauler et al. (2007).
All of the top three criteria were potentially incompatible with funding/cost (see
Table 3.3), but still scored highly. This suggests there may be weightings given by LAs
when choosing SIs (not necessarily knowingly) to the criteria the author had chosen. If
weighting was to be considered more formally, it would require the panel of experts
approach as suggested by Astleithner et al. (2004), Glasson et al. (2005) and Moles et
al. (2007) and for LAs to become transparent as to their method. There is also the
possibility that the higher scores of these criteria could be linked to their ease of
operationalisation within SA, so indicators that exhibit these three functions appeal to
LA officers and consultants who have time constraints (ODPM, 2005) and will be able
to please policy makers more easily with indicators that score highly in these three areas.
The lowest three criteria marks occurred for disaggregation (7%), relevant to plan
(16%) and academic credibility (35%). With regard to the very low score for
disaggregation, this may be because LAs weight ease of communication (83%) so
highly that they fear disaggregation would make the indicator complicated to
understand. However, in doing this they lose the chance of communicating the detail
of how they will improve the area of SD that the indicator covers and, by this
omission, may find that averages shade local issues (Spilanis et al., 2008). Bauler
et al. (2007) also allude to disaggregation and suggest that, with regard to
their criterion, the purpose of an SI, including the appropriateness of scale, is
important.
The criterion relevant to plan (16%) often had a score of zero, as no targets were
present in the table containing indicators; a number of these reports were dated 2005
and 2006 and still did not have easily accessible current (2008) information on
targets, other than those in the Annual Monitoring Reports. However,
policy has a lifespan and different indicators may be appropriate for different time
spans within the policy (Rickard et al., 2007), so having some targets at the start of a
plan is appropriate. Therefore, all LAs should have a score for this criterion - however,
this was not so, and this may perhaps show either some political naiveté (Moldan et al.,
2007) on behalf of those who carried out the SA or a flaw in the method for scoring this
criterion (see section 3.3.2).
Credibility or believability of the data used was low at 35% and three main reasons
for this have been considered. Firstly, there was the potential for incompatibility
between this and the addressing uncertainty criterion: the two can be mutually
exclusive with regard to scoring if an indicator was under development at the time or
new to the LA. However, in practice it was possible to achieve higher scores for both,
as there was some crossover in the two hierarchies which was open to interpretation.
Secondly, an abundance of data can make that data tempting to use, but, as in the case
of the indicators associated with bird populations, simply having a lot of data
available does not make an indicator academically credible. A third area that
lowered this score was that of surveys, such as the British Crime Survey which has
consistently found perceived risks to exceed actual risks of victimisation (Tilley, 2005),
but which has been consistently used by LAs. However, the possible incompatibility
with addressing uncertainty will continue to lower this score, but the choice of credible
data can be rectified by LAs to increase their score in this criterion.
The remaining seven criteria all scored between 43% and 55%. Addressing uncertainty, locality and funding/cost, the author's three most subjective areas, are in this group. However, they scored neither highly nor poorly, so any subjectivity most probably took a neutral stance. There was no discernible difference in criteria scores between MU and R80 LAs.
The average appropriateness score in LAs for stakeholder involvement was 45%, which ties in with the survey, where 53% of respondents stated that they had contacted one or more non-statutory stakeholders. 21 of the 30 groups of non-statutory stakeholders were consulted between 5% and 32% of the time (see Figures 4.1 and 4.2 to compare the results arising from this work with the earlier work reported by Bond et al. (1998)). No direct comparison was found for involvement in choosing SIs, but compared with Bond et al.'s (1998) survey of stakeholder involvement in LA 21, a considerably smaller percentage of stakeholders appears to have been consulted by the sample in this study. Overall, there was some non-statutory stakeholder involvement, but its extent could only be seen when LAs chose to give their responses to stakeholder involvement in their scoping reports and associated documents (for example, see SBC, 2007).
This lack of non-statutory stakeholder involvement ties in with the ODPM (2005)
guidance on SA, which does not specifically ask for stakeholder involvement in
choosing indicators, but is in disagreement with a number of academic sources
(Holland, 1997; Cartwright, 2000; Bell et al., 2004; Fraser et al., 2005) who consider
stakeholder involvement a priority. It is notable that SMEs were not consulted at all,
although the partnership between business and sustainability is considered important by
a number of authors (Beveridge et al., 2005; Defra, 2005a; Willis et al., 2007). There
was no discernible difference between MU and R80 LAs with regard to stakeholder
involvement. Congleton and West Oxfordshire consulted the most groups, at 15 each.
The result for Congleton is discussed further in section 4.2.3. Figures 4.1 and 4.2
provide a comparison of stakeholder involvement in choosing SIs for the years 2008
and 1998 respectively.
The top three scoring criteria indicate that SIs in LAs have methodological strength, despite incompatibilities with funding/cost, which may reveal hidden weightings given by LAs. The low score in disaggregation may shade local issues. Two of the three criteria in which the author has strong subjectivity appeared to have had a neutral effect on scoring, yet in practice there was some crossover in scoring between academic credibility and addressing uncertainty (strong subjectivity). There appears to be less stakeholder involvement in 2008 than in 1998, but its extent is not transparent. The mediocre overall score of most LAs may be improved by a number of methods: choosing indicators that use credible data, disaggregating indicator data, involving more non-statutory stakeholders and deciding some targets early in the plan.
4.1.4 Super Criteria
Of the three super criteria, local authority had the top score of 51%, with credibility
and measurement both equal at 44%. There was no discernible difference between
MU and R80 LAs. The super criterion with the largest range was local authority, with
SE MU Woking at 80% and the lowest being NW MU Bury at 38%. Overall, the super
criteria measure of appropriateness using averages hid the detail found in the
individual criterion scores.
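The way in which averaging into a super criterion hides detail can be illustrated with a minimal sketch. The criterion names and scores below are invented for illustration only; they are not figures from this study.

```python
# Illustrative only: hypothetical criterion scores (percentages) for two LAs.
# Averaging individual criteria into a single super criterion hides the spread.
la_a = {"credible data": 44, "relevant to plan": 44, "disaggregation": 44}
la_b = {"credible data": 80, "relevant to plan": 44, "disaggregation": 8}

def super_criterion(scores):
    """Average the individual criterion scores into one figure."""
    return sum(scores.values()) / len(scores)

print(super_criterion(la_a))  # 44.0
print(super_criterion(la_b))  # 44.0 -- identical average, very different detail
print(max(la_b.values()) - min(la_b.values()))  # the range of 72 is invisible above
```

Both hypothetical LAs produce the same super-criterion score, yet only the individual criterion scores reveal that the second LA performs very well on one criterion and very poorly on another.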
4.1.5
Using the reference set of indicators gave this appropriateness method stability, and the top three scoring criteria indicate that this technique exhibits methodological strength in LA choice of indicator. The super criteria measure of appropriateness shades issues which can be made transparent by viewing the results for individual criteria. This transparency identifies areas where scores can be increased and inconsistencies between the scoring of two criteria, and it also shows that environmental indicators occur frequently when using the top ten method. Overall, there was no discernible difference in appropriateness between MU and R80 designations, although anomalous results occurred where there were issues in an individual LA, or in an MU or R80 designation within a region.
4.2
Four subsidiary methods were used to collect data to augment the appropriateness
method: defining SIs, number of indicators, classification of indicators and information
from the survey.
4.2.1
Definitions of SIs are considered here in order to examine connections between the definition of SIs and their operationalisation within SA (Ozkaynak et al., 2004). The results reveal that 45% of the LA documents viewed contained definitions (reducing to 26% if those which contained no definitions are included), and 42% of the respondents to the survey chose progress towards sustainability as the main basis for defining an SI. This compares with Cartwright (2000), where 64% of LA officers chose progress towards sustainable development as the main function. The results can be seen in Figure 4.3.
Figure 4.3: Sustainability Indicator Definitions Used by Local Authority Officers and Found in Scoping Reports
The choice of definition may also be affected by the LA officers' and consultants' highest level of education, subject specialism and experience in SA, and by their individual pro-environmental behaviour (Kollmuss et al., 2002), all of which are considered below. The respondents were highly qualified: 37% held Bachelor's degrees and 63% held Master's degrees as their highest qualification. The subject specialism of 79% of respondents was town planning, with 63% belonging to the RTPI. Bond et al. (1998) surveyed LA officers involved in LA 21, finding that 53% worked in environmental departments, environmental health departments or environmental units, or had an emphasis (at least in the title) concerning the environment, which could link with the large percentage of LA officers in the survey by Cartwright (2000) who chose the SI definition leading towards sustainability.
As the most important factor is not the amount of environmental knowledge but pro-environmental behaviour (Kollmuss et al., 2002), respondents' pro-environmental behaviour was considered by ascertaining membership of environmental, professional and voluntary organisations, as well as their chosen charity for donations. However, there was insufficient data to propose any link (or lack of any link) between pro-environmental behaviour and choice of definition.
4.2.2 Number of Indicators
Overall, the average number of indicators across all the LAs was 78, with consultancies (10/38) averaging 91 indicators and LAs themselves (28/38) averaging 74. The regional averages were 84 for both the EE and the SE, and 69 for the NW. There was a considerable difference between the averages for the EE MU (134) and the EE R80 (67), although it should be noted that the EE MU appraisals had been undertaken by the same consultants. The NW MU and R80 averages were identical, at 69, and the SE MU (85) and R80 (83) averages were similar.
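As a quick consistency check on the figures above (a sketch, not part of the study's method), the overall average follows from weighting the two group averages by their sizes, and the 23% difference quoted later in this chapter follows from the same two averages:

```python
# Sketch: checking the reported averages for internal consistency.
# 10 of the 38 indicator sets came from consultancies (average 91 indicators);
# the remaining 28 came from LAs themselves (average 74 indicators).
consultancy_sets, consultancy_avg = 10, 91
la_sets, la_avg = 28, 74

# Weighted overall average across all 38 sets.
overall = (consultancy_sets * consultancy_avg + la_sets * la_avg) / 38
print(round(overall))  # 78, matching the reported overall average

# Relative difference between consultancy and LA averages.
extra = (consultancy_avg - la_avg) / la_avg
print(round(extra * 100))  # 23, i.e. consultancies used ~23% more indicators
```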
4.2.3 Classification of Indicators
Overall, within the 38 LAs, the average classification was 21% of indicators economic, 32% social and 47% environmental. Where there was an anomalous result for the classification of an LA's indicators, it was examined in more detail. Two examples are discussed: firstly the case of Copeland, and secondly those of Bury and Wealden.
The second example of anomalous results was that of NW MU Bury and SE R80 Wealden, where both LAs had reasonably even scores across all three groups, which was rare within this sample. However, there is currently no advice in the ODPM SA Guidance (ODPM, 2005) on the proportions of SIs.
4.2.4 Survey Information
One area of the survey was not incorporated with the rest of the results: the LAs' and consultants' knowledge of policies and standards within their organisation. 68% of respondents did not know whether they had these policies and standards, and only the consultancies had environmental standards. 21% of respondents had an IT security policy and 21% had an environmental policy.
4.2.5
Within the LAs there was little evidence of leadership exhibiting pro-environmental behaviour through introducing environmental policies or EMSs. This very weak sustainability was backed up by the evidence found when defining indicators, as this result was slightly in favour of English LAs' operationalisation of a definition of SI leading to WS rather than SS. Consultancies used, on average, 23% more indicators than LAs, and Hall's (2007) suggestion of 50 indicators is not followed in practice by the majority of LAs. Using the three pillars classification framework adapted from Bond et al. (1998), most LAs had around half their SIs classified as environmental. However, there were exceptions to this, and further research is needed to ascertain why some LAs had a different ratio for the three pillars of sustainability.
4.3 Summary
The principal findings from this research are that none of the methods used demonstrated any discernible difference between MU and R80 LAs, but that differences between individual LAs, and within a rural designation in a region, can be determined using the appropriateness method (which has good stability) and the classification method. On the positive side, the indicators chosen by LAs exhibit methodological strength (Bauler et al., 2007) and the chosen criteria gave transparency to the results, which enabled improvements to be recommended. Improvements are needed in pro-environmental leadership within LAs, in the involvement of key stakeholders in choosing SIs and in LA scores for six of the thirteen criteria, and an inconsistency in the appropriateness method needs to be removed. Operationalisation of SA will lead to even stronger sustainability if LAs have clear definitions for all the terms and processes used by all actors involved (Ozkaynak et al., 2004).
The overall drive and aim behind this project has been to develop a method that LAs can use to assess the appropriateness of the SIs used in their SAs of LDFs. The method needs to be practical: it should not demand excessive time, should be reliable and should be one that available LA personnel can carry out.
The method created for assessing appropriateness was developed successfully from an extensive literature search. It provides a range of scores from 0-34 out of a possible 38 for each of the 13 criteria and, most importantly, it exhibits stability over a medium period of time. Two improvements are still needed: firstly, to ascertain internal reliability (Bryman, 2004:71), after which it would be worth trialling a cut-off point for pass or fail, as in the EIS Review Package (Oxford Brookes University Impacts Assessment Unit, 2005); secondly, to remove an inconsistency in scoring between academic credibility and addressing uncertainty (see section 4.1.3).
This appropriateness method has been applied to 38 LAs from three regions and has not distinguished major differences in appropriateness between MU and R80 LAs. However, such a distinction may not be possible, as some academics consider that in cultural, social and economic terms the notion of rurality in a country such as the UK is outdated (Champion and Shepherd, 2006). An improvement to the application of this method would be to assess sets of indicators using two independent reviewers on a double-blind basis, as in the EIS Review Package (Oxford Brookes University Impacts Assessment Unit, 2005). An enhancement could be to have one reviewer with knowledge of the LA and one without (for example, an LA officer from another region). This would encourage challenging discussion as to appropriateness and ensure inter-observer consistency (Bryman, 2004:71).
Four other methods have been used to augment the appropriateness method. The framework for classifying indicators (Bond et al., 1998) works well, despite the new indicators used by LAs, because of its original reliability, having been constructed from credible academic sources. The data extracted from the survey were most useful for examining the definition of SIs and stakeholder involvement. The author suggests trialling two of these four methods, namely the number of indicators and the classification of indicators, as additional criteria within the appropriateness assessment, which would bring the total number of criteria to 15. Although the number of indicators does not appear to affect the appropriateness of the indicator set (see section 4.2.2), the usability of more than 50 indicators by the different actors involved must be taken into account (Moldan et al., 2007).
Overall, the author considers that the aim and objectives have been achieved and that, with the changes instigated, this appropriateness method should be trialled with LA officers to discover their views on the method's usefulness.
5.2
Easy wins:
1. To use disaggregated data to describe local variation within an indicator, so that issues can be targeted and micro-managed to achieve the greatest sustainability, and to be clear in the table of indicators about how the data is disaggregated.
2. To introduce a number of targets before a plan is approved, not just when it is
being implemented, as different targets are appropriate at different stages of the
plan.
3. LAs should be careful to choose indicators that have academically credible data,
rather than those with an abundance of data but little academic credibility.
Harder to do:
1. Decide on a method for stakeholder involvement that not only helps decide
indicators but also gives the stakeholders autonomy to decide acceptable and
unacceptable limits for the indicators. This would help with target setting.
2. Be decisive about the numbers of indicators chosen. If there are more than 50,
justify the extra resources needed to measure and manage them, and ensure that
details are given to all who want to see and use them.
3. LAs should invest time in developing EMSs and their continual improvement.
This way they will be leaders in the field and other organisations will respect
their pro-environmental behaviour.
4. Make sure local SMEs are involved at all possible stages of the SA.
Sustainability cannot be achieved unless LAs, NGOs and businesses work
together.
5.3
REFERENCES
Adams, J. & White, M. (2006). Removing the health domain from the Index of
Multiple Deprivation 2004 - effect on measured inequalities in census measure of
health, Journal of Public Health, 28(4), 379-383.
Astleithner, F., Hamedinger, A., Holman, N. & Rydin, Y. (2004). Institutions and indicators: the discourse about indicators in the context of sustainability, Journal of Housing and the Built Environment, 19, 7-24.
Atkin, C., Rose, A. & Shier, R. (2005). Provision of and Learner Engagement with
Adult Literacy, Numeracy and ESOL in Rural England. A Comparative Case Study.
London: Institute of Education, University of London.
Audit Commission. (2005, August). Local Quality of Life Indicators: Supporting Local
Communities to Become Sustainable. Retrieved August 15th, 2008, from Audit
Commission: www.audit-commission.gov.uk
Ayres, R.U. (2000). Commentary on the utility of the ecological footprint concept,
Ecological Economics, 32(3), 347-9.
Baird, G. A. & Wright, N. (2006). Poor access to care: rural health deprivation?,
British Journal of General Practice, August, 567-568.
Barr, S. & Gilg, A.W. (2007). A conceptual framework for understanding and analysing attitudes towards environmental behaviour, Swedish Society for
Bauler, T., Douglas, I., Daniels, P., Demkine, V., Eisenmenger, N., Grosskurth, J., et
al. (2007). Identifying methodological challenges. In: T. Hak, B. Moldan & A.L.
Dahl (eds.), Sustainability Indicators: A Scientific Assessment, Island Press, p.57.
BBC (2008). Homeless ex-service people, The Today Programme, 10 July, London.
Bell, S. & Morse, S. (2001). Breaking through the glass ceiling: who really cares
about sustainability indicators?, Local Environment, 6(3), 291-309.
Bell, S. & Morse, S. (2004). Experience with sustainability indicators and stakeholder
participation: a case study relating to a 'blue plan project' in Malta, Sustainable
Development, 12, 1-14.
Beveridge, R. & Guy, S. (2005). The rise of the eco-preneur and the messy world of
environmental innovation, Local Environment, 10(6), 665-676.
Biggs, R. O., Scholes, R.J., ten Brink, B.J. & Vackar, D. (2007). Biodiversity
indicators. In: T. Hak, B. Moldan & A.L. Dahl (eds.), Sustainability Indicators: A
Scientific Assessment, Scientific Committee on Problems of the Environment (SCOPE),
pp.249-270.
Bond, A.J., Mortimer, K.J. & Cherry, J. (1998). Policy and practice: the focus of
Local Agenda 21 in the United Kingdom, Journal of Environmental Planning and
Management, 41(6), 767-776.
Bosello, F., Roson, R. & Tol, R.S. (2006). Economy-wide estimates of the
implications of climate change: human health, Ecological Economics, 58, 579- 591.
Bossel, H. (2001). Assessing viability and sustainability: a systems-based approach for
deriving comprehensive indicator sets, Conservation Ecology, 5(12).
Retrieved May 28th, 2008, from Breckland Council: http://www.breckland.gov.uk/brecklandsascopingreport08-2.pdf
Bryman, A. (2004). Social Research Methods, (2nd edn.), Oxford: Oxford University
Press.
Buckwell, A. (2006). Rural development in the EU, Economía Agraria y Recursos Naturales, 6(12), 93-120.
Cartwright, L.E. (2000). Selecting local sustainable development indicators: does consensus exist in their choice and purpose?, Planning Practice & Research, 15, 65-78.
Champion, T. & Shepherd, J. (2006). Demographic Change in Rural England, Rural
Evidence Research Centre.
Chomitz, K.M., Buys, P. & Thomas, T.S. (2005). Quantifying the Rural-urban Gradient in Latin America and the Caribbean, World Bank.
Clark, D., Southern, R. & Beer, J. (2007). Rural governance, community empowerment and the new institutionalism: a case study of the Isle of Wight, Journal of Rural Studies, 23, 254-266.
Cohen, L., Manion, L. & Morrison, K. (2000). Research Methods in Education (5th
edn.), London and New York: Routledge/Falmer.
Coombes, M. & Raybould, S. (2004). Finding work in 2001: urban-rural contrasts across England in employment rates and local job availability, Area, 36(2), 202-222.
sustainability-assessment/S-A_Introduction-Background.pdf
Copeland Borough Council (2008). Copeland Fast Facts, September. Retrieved 6
August 2008, from Copeland Borough Council: http://www.copelandbc.gov.uk/
main.asp?page=2891
Costaras, N. & Thomas, E. (2006). Sustainability Appraisal of the Local Development
Framework: Scoping Report, June. Retrieved 22 June 2008, from Uttlesford District
Council: http://www.uttlesford.gov.uk/Planning/local+plans+and+local+development+
framework/saframeworkv7.pdf
Counsell, D. & Haughton, G. (2006). Sustainable development in regional planning:
the search for new tools and renewed legitimacy, Geoforum, 37, 921-931.
CRC (2008). The State of the Countryside 2008, July. Retrieved July 2008, from
Commission for Rural Communities: www.ruralcommunities.gov.uk
Dahl, A. (1997). The big picture: comprehensive approaches, Part One: Introduction.
In: B. Moldan, S. Billharz & R. Matravers (eds.), Sustainability Indicators: A Report on
the Project on Indicators of Sustainable Development, SCOPE 58, Chichester, UK:
Wiley, pp.69-83.
Daniels, P.L. (2007). Annex: menu of selected sustainable development indicators.
In: T. Hak, B. Moldan & A. L. Dahl (eds.), Sustainability Indicators: A Scientific
Assessment, Island Press, pp.369-387.
Darnall, N. & Sides, S.R. (2008). Assessing the performance of voluntary
environmental programs: does certification matter?, Policy Studies Journal, 36(1).
DCLG: http://www.communities.gov.uk/communities/neighbourhoodrenewal/deprivation/deprivation07/
DCLG (2008a). Eco-towns: Living a Greener Future, April. Retrieved 19 June 2008, from IEMA: http://www.iema.net/readingroom/show/18185/c260
DCLG (2008b). Planning Policy Statement 12: Local Spatial Planning, 4 June. Retrieved 6 August 2008, from Department of Communities and Local Government: http://www.communities.gov.uk/documents/planningandbuilding/pdf/pps12lsp.pdf
DCLG (2008c). Regional Planning, 12 May. Retrieved 7 August 2008, from Government Office for the East of England: http://www.gos.gov.uk/goee/docs/193657/193668/Regional_Spacial_Strategy/EE_Plan1.pdf
Defra (2004). Rural Definition and Local Authority Classification (Excel Spreadsheet). Retrieved February 2008, from Defra: http://www.defra.gov.uk/rural/ruralstats/ruraldefinition.htm
Defra (2005a). Securing the Future: UK Government Sustainable Development Strategy, March. Retrieved February 2008, from Defra: http://www.sustainable-development.gov.uk/publications/uk-strategy/index.htm
Defra (2005b). Defra Classification of Local Authority, July. Retrieved February 2008,
from DEFRA: www.defra.gov.uk
Defra (2007). Sustainable Development Indicators in Your Pocket 2007, London, England: Defra.
Dietz, S. & Neumayer, E. (2007). Weak and strong sustainability in the SEEA:
concepts and measurement, Ecological Economics, 61, 617-626.
Doak, J. & Parker, G. (2005). Networked space? The challenge of meaningful
participation and the new spatial planning in England, Planning, Practice and
Research, 20(1), 23-40.
Donnelly, A., Jones, M., O'Mahony, T. & Byrne, G. (2007). Selecting environmental
indicators for use in strategic environmental assessment, Environmental Impact
Assessment Review, 27, 161-175.
Donnelly, A., Salamin, N. & Jones, M.B. (2006). Changes in tree phenology: an indicator of spring warming in Ireland?, Biology and Environment: Proceedings of the Royal Irish Academy, March.
Doran, T., Drever, F. & Whitehead, M. (2006). Health underachievement and
overachievement in English local authorities, Journal of Epidemiology and
Community Health, 60, 686-693.
EBC (2005). July. Retrieved 13 June 2008, from Elmbridge Borough Council:
http://www.elmbridge.gov.uk/search/search.asp
Ekins, P., Simon, S., Deutsch, L., Folke, C. & de Groot, R. (2003). A framework for
the practical application of the concepts of critical natural capital and strong
sustainability, Ecological Economics, 44, 165-185.
Elmbridge Borough Council (2005). Final Scoping Report for the Sustainability
Appraisal of the Core Strategy Development Plan Document, July. Retrieved 18 June
2008, from Elmbridge Borough Council: http://www.elmbridge.gov.uk
Gregory, R.D., Noble, D., Field, R., Marchant, J., Raven, M. & Gibbons, D.W. (2003).
Using birds as indicators of biodiversity, Ornis Hungarica.
Gupta, J. & van Asselt, H. (2006). Helping operationalise Article 2: A
transdisciplinary methodological tool for evaluating when climate change is
dangerous, Global Environmental Change, 16, 83-94.
Hajat, S., Kovats, R.S. & Lachowycz, K. (2007). Heat-related and cold-related deaths
in England and Wales: who is at risk?, Occupational and Environmental Medicine, 64,
93-100.
Hall, S. (2007). The development of UK sustainable development indicators: making
indicators work. In: T. Hak, B. Moldan & A.L. Dahl (eds.), Sustainability Indicators:
A Scientific Assessment, 1st edn., Scientific Committee on Problems of the Environment
(SCOPE), pp.293-307.
Halton and St Helens PCT (2007). Heart Disease. Retrieved 2 Sept 2008, from Halton and St Helens PCT: http://www.haltonandsthelenspct.nhs.uk/library/documents/heartdisease.pdf
Hanratty, B., Drever, F., Jacoby, A. & Whitehead, M. (2008). Retirement age
caregivers and deprivation of area of residence in England and Wales, Eur J Ageing,
4, 35-43.
Haynes, R. & Gale, S. (2000). Deprivation and poor health in rural areas: inequalities
hidden by averages, Health & Place, 275-285.
HHA (2007). Lifetime Homes: 21st Century Living. Retrieved February 2008, from
Lifetimehomes: www.lifetimehome.org.uk
Holden, M. (2008). Social learning in planning: Seattle's sustainable development codebooks, Progress in Planning, 69, 1-40.
Holland, L. (1997). The role of expert working parties in the successful design and
implementation of sustainability indicators, European Environment, 7, 39-45.
Hopwood, B., Mellor, M. & O'Brien, G. (2005). Sustainable development: mapping
different approaches, Sustainable Development, 13, 38-52.
House of Commons (2007). The Sustainable Communities Bill: Bill 17 of 2006/07, 15 January. Retrieved May 2008.
House of Commons (2008). Economic Indicators, May 2008 Research Paper 08/43.
London: House of Commons.
House of Commons: Environmental Audit Committee (2008a). Making Government
Operations More Sustainable: A Progress Report, 1 July. Retrieved August 2008, from
House of Commons.
House of Commons: Environmental Audit Committee (2008b). Eighth Report of Session 2007-08, 12 July. Retrieved August 2008, from House of Commons.
Huby, M., Owen, A. & Cinderby, S. (2007). Reconciling socio-economic and
environmental data in a GIS context: an example from rural England, Applied
Geography, 27, 1-13.
IFF Research Ltd. (2008). The Annual Survey of Small Businesses' Opinions 2006/7:
Summary Report of Findings Among UK SME Employers, IFF Research Ltd.
IPCC (2007). Summary for policy makers. In: Climate Change 2007: The Physical
Science Basis, Cambridge University Press.
Jackson, T. (2007). Mainstreaming sustainability in local economic development
practice, Local Economy, 22(1), 12-26.
Karlsson, S., Dahl, A.L., Biggs, R.O., ten Brink, B.J., Gutierrez-Espeleta, E., Hj. Hasan, M.N., et al. (2007). Meeting conceptual challenges. In: T. Hak, B. Moldan & A.L. Dahl (eds.), Sustainability Indicators, 1st edn., London: Scientific Committee on Problems of the Environment (SCOPE), pp.31-32.
Keirstead, J. & Leach, M. (2007). Bridging the gaps between theory and practice: a service niche approach to urban sustainability indicators, Sustainable Development.
Kollmuss, A. & Agyeman, J. (2002). Mind the gap: why do people act
environmentally and what are the barriers to pro-environmental behavior?,
Environmental Education Research, 8(3), 241-260.
Lamond, J. & Proverbs, D. (2006). Does the price impact of flooding fade away?,
Structural Survey, 24(5), 363-377.
Lancaster, T., Hajek, P., Stead, L.F., West, R. & Jarvis, M.J. (2006). Prevention of
relapse after quitting smoking, Arch Intern Med, 166, 828-835.
Lehtonen, M. (2004). The environmentalsocial interface of sustainable development:
capabilities, social capital, institutions, Ecological Economics, 49, 199-214.
Lemon, D., Flatt, G., Shack, L., Ellison, T. & Moran, T. (2007). Excess Cancer Mortality and Incidence by PCT in the North West, 2001-2005, December. Retrieved September 2nd, 2008, from North West Cancer Intelligence Service: http://www.christie.nhs.uk/press/2007/docs/NWCancerIntelligenceReport_171207.pdf
Lin, V., Gruszin, S., Ellickson, C., Glover, J., Silburn, K., Wilson, G., et al. (2007). Comparative evaluation of indicators for gender equity and health, International Journal of Public Health, 52, S19-S26.
Liverpool City Council (2005). Core Strategy Development Plan Document
Sustainability Appraisal Scoping Report, April. Retrieved 18 May 2008, from
Liverpool City Council: http://www.liverpool.gov.uk/Images/tcm21-35170.pdf
Natural England (2008). East of England. Retrieved September 2008, from Natural
England: http://www.naturalengland.org.uk/sone/default.htm
Neumayer, E. (2003). Weak versus Strong Sustainability, 2nd edn., Cheltenham:
Edward Elgar.
Neumayer, E. (2007). A missed opportunity: the Stern Review on climate change fails
to tackle the issue of non-substitutable loss of natural capital, Global Environmental
Change, 17, 297-301.
Niemeijer, D. & de Groot, R.S. (2008). A conceptual framework for selecting
environmental indicator sets, Ecological Indicators, 8, 14-25.
Noss, R.F. (1990). Indicators for monitoring biodiversity: a hierarchical approach, Conservation Biology, 4, 355-364.
Oakland, J.S. (2000). Statistical Process Control, 4th edn., Oxford: Butterworth-Heinemann.
ODPM (2005). Sustainability Appraisal of Regional Spatial Strategies and Local Development Documents. Retrieved December 8th, 2007, from http://www.communities.gov.uk/embedded_object.asp?id=1161346
ODPM (2006). A Practical Guide to the Strategic Environmental Assessment Directive, September. Retrieved April 2008, from www.odpm.gov.uk
OECD (1994). Creating Rural Indicators, Paris: OECD.
Oxford Brookes University Impacts Assessment Unit (2005). Environmental impact
statement review package. In: J. Glasson, R. Therivel & A. Chadwick, Introduction to
Environmental Impact Assessment, Oxford: Routledge, pp.395-407.
Ozkaynak, B., Devine, P. & Rigby, D. (2004). Operationalising strong sustainability: definitions, methodologies and outcomes, Environmental Values, 13, 279-303.
Parris, T.M. & Kates, R.W. (2003). Characterising and measuring sustainable development, Annual Review of Environment and Resources, 28, 559-586.
Phillips, R. & Bridges, S. (2005). Integrating community indicators with economic
development planning. In: R. Phillips (ed.), Community Indicators Measuring
Systems, Aldershot, UK: Ashgate.
Pitt, M. (2007). The Pitt Review, December. Retrieved 23 March 2008, from Cabinet
Office: www.cabinetoffice.gov.uk/thepittreview
RCEP (2008). The Urban Environment. Retrieved 25 July 2008, from Royal
Commission On Environmental Pollution.
Reed, M.S., Fraser, E.D. & Dougill, A.J. (2006). An adaptive learning process for
developing and applying sustainability indicators with local communities, Ecological
Economics, 59, 406-418.
Reed, M., Fraser, E.D., Morse, S. & Dougill, A.J. (2005). Integrating methods for
developing sustainability indicators to facilitate learning and action, Ecology and
Society, 10(1).
Reschovsky, J.D. & Staiti, A.B. (2002). Access and quality: does rural America lag
behind?, Health Affairs, 24(4), 1128-1139.
Rickard, L., Jesinghaus, J., Amann, C., Glaser, G., Hall, S., Chealtle, M., et al. (2007).
Ensuring policy relevance. In: T. Hak, B. Moldan & A.L. Dahl (eds.), Sustainability
Indicators: A Scientific Assessment, Island Press, pp.65-79.
Roberts, P. (2006). Evaluating regional sustainable development: approaches, methods
and the politics of analysis, Journal of Environmental Planning and Management,
49(4), 515-532.
Robinson, J. (2004). Squaring the circle? Some thoughts on the idea of sustainable
development, Ecological Economics, 48, 369-384.
Rydin, Y., Holman, N. & Wolff, E. (2003). Local sustainability indicators, Local
Environment, 8(6), 581-589.
SBC (2007). Sustainability Appraisal Report of the Spelthorne Development Plan: Strategy and Policies Preferred Options and Proposals Preferred Options DPDs, Appendices, April. Retrieved 20 June 2008, from Spelthorne Borough Council: http://www.spelthorne.gov.uk/sustainability_appraisal_appendices_april_2007.pdf
Scott Wilson Business Consultancy (2005). Local Development Framework, July.
Retrieved 7 August 2008, from St Helens Council: http://sthelens.gov.uk/SITEMAN/
publications/31/LDF10sustainabilityappraisalscopingreport.pdf
Sheate, W.R., do Partidario, M.R., Byron, H., Bina, O. & Dagg, S. (2008).
Sustainability assessment of future scenarios: methodology and application to
mountain areas of Europe, Environmental Management, 41, 282-299.
Smith, R. (2008). Green housing policy must be more radical says Rynd Smith,
Guardian, 30 July, 4.
Smith, S.P. & Sheate, W.R. (2001). Sustainability appraisal of English regional plans:
incorporating the requirements of the EU strategic environmental assessment
directive, Impact Assessment and Project Appraisal, 19(4), 263-276.
Spangenberg, J.H. (2007). The institutional dimension of sustainable development. In: T. Hak, B. Moldan & A.L. Dahl (eds.), Sustainability Indicators: A Scientific Assessment, Scientific Committee on Problems of the Environment, pp.107-124.
Spilanis, I., Kizos, T., Koulouri, M., Kondyli, J., Vakoufaris, H. & Gatsis, I. (2008).
Monitoring sustainability in insular areas, Ecological Indicators.
Strachan, H. (2003). The civil-military 'gap' in Britain, Journal of Strategic Studies,
26(2), 43-63.
SurveyMonkey (2008a). SurveyMonkey User Manual. Retrieved 14 May 2008, from
SurveyMonkey: www.surveymonkey.com
SurveyMonkey (2008b). Smart Survey Design. Retrieved 14 May 2008, from
SurveyMonkey: www.surveymonkey.com
Sustainable Seattle (2005). Regional Indicators. Retrieved 20 February 2008, from
Sustainable Seattle: http://www.sustainableseattle.org/Programs/RegionalIndicators/
IndCriteria/view?searchterm=indicators
Sutherland, W.J. et al. (2008). Future novel threats and opportunities facing UK
biodiversity identified by horizon scanning, Journal of Applied Ecology, 45, 821-833.
Tasser, E., Sternbach, E. & Tappeiner, U. (2008). Biodiversity indicators for
sustainability monitoring at municipality level: an example of implementation in an
alpine region, Ecological Indicators, 8, 204-223.
The European Parliament and the Council of the European Union (2001). Directive
2001/42/EC of the European Parliament and of the Council of 27 June 2001 on the
assessment of the effects of certain plans and programmes on the environment,
Official Journal L 197, 0030-0037.
The Whitehaven News (2008). 3 July. Retrieved September 2008, from The
Whitehaven News: http://www.whitehaven-news.co.uk/se/1.134870
Therivel, R. & Minas, P. (2002). Ensuring effective sustainability appraisal, Impact
Assessment and Project Appraisal, 20(2), 81-91.
Therivel, R. & Ross, B. (2007). Cumulative effects assessment: does scale matter?,
Environmental Impact Assessment Review, 27, 365-385.
Therivel, R. & Walsh, F. (2006). The strategic environmental assessment directive in
the UK: 1 year onwards, Environmental Impact Assessment Review, 26(7), 663-675.
Thompson, B. (2007). Green retail: retailer strategies for surviving the sustainability
storm, Journal of Retail and Leisure Property, 6(4), 281-286.
Tilley, N. (2005). Crime reduction: a quarter century review, Public Money &
Management, October, 267-274.
Tratalos, J., Fuller, R.A., Evans, K.L., Davies, R.G., Newson, S.E., Greenwood, J.J., et
al. (2007). Bird densities are associated with household densities, Global Change
Biology, 13, 1685-1695.
Turner, R.K. (1993). Sustainability: principles and practice. In: R.K. Turner (ed.),
Sustainable Environmental Economics and Management, London: Belhaven Press.
Ulubaşoğlu, M.A. & Cardak, B.A. (2007). International comparisons of rural-urban
educational attainment: data and determinants, European Economic Review, 51,
1828-1857.
UNECE (1991). Policies and Systems of Environmental Impact Assessment, Geneva:
United Nations.
US Census Bureau (2008). World POPClock Projection. Retrieved 1 September 2008,
from US Census Bureau: http://www.census.gov/ipc/www/popclockworld.html
Vickers, D. & Rees, P. (2007). Creating the UK national statistics 2001 output area
classification, Journal of the Royal Statistical Society: Series A, 170(2), 379-403.
Wackernagel, M., Monfreda, C., Moran, D., Wermer, P., Goldfinger, S. & Deumling,
D. (2005). National Footprint and Biocapacity Accounts 2005: The Underlying
Calculation Method, Oakland: Global Footprint Network.
Warren, M. (2007). The digital vicious cycle: links between social disadvantage and
digital exclusion in rural areas, Telecommunications Policy, 31, 374-388.
WCED (1987). Our Common Future, Oxford, New York: Oxford University Press.
Weich, S., Twigg, L. & Lewis, G. (2006). Rural/non-rural differences in rates of
common mental disorders in Britain, British Journal of Psychiatry, 188, 51-57.
West Oxfordshire District Council (2008). West Oxfordshire Local Development
Framework Sustainability Appraisal Scoping Report, February. Retrieved 29 May
2008, from West Oxfordshire District Council: http://www.westoxon.gov.uk/files/
download/5169-2445.pdf
Wigan Council (2007). Scoping Report for the Sustainability Appraisal of the Wigan
Local Development Framework, September. Retrieved 7 August 2008, from Wigan
Council: http://www.wigan.gov.uk/NR/rdonlyres/2A44F7A5-ADD8-4FA6-8F95-A3ECB8A1CADD/0/SAScopingReport891kb.pdf
Willis, R., Webb, M. & Wilsdon, J. (2007). The Disrupters: Lessons for Low-Carbon
Innovation From the New Wave of Environmental Pioneers, NESTA.
WMBC (2007). Retrieved 18 June 2008, from Wigan Metropolitan Borough Council:
http://www.wigan.gov.uk/NR/rdonlyres/2A44F7A5-ADD8-4FA6-8F95-A3ECB8A1CADD/0/SAScopingReport891kb.pdf
Wong, C., Baker, M. & Kidd, S. (2006). Monitoring spatial strategies: the case of
local development documents in England, Environment and Planning C: Government
and Policy, 24, 533-552.
Wooderson, J., Gardner, R. & Laeger, S. (2006). Draft Scoping Report for the Strategic
Environmental Assessment and Sustainability Appraisal for the Emerging Three Rivers
Development Plan Documents, February. Retrieved 18 June 2008, from Three Rivers
District Council: http://www.threerivers.gov.uk/Default.aspx/Web/SustainabilityAppraisal
Woodger, M. (2008). Where Can I Find EFDC SA Sustainability Indicators?
Yun, G.W. & Trumbo, C.W. (2006). Comparative response to a survey executed by
post, e-mail, and web form, Journal of Computer-Mediated Communication, 6(2).
Zidansek, A. (2007). Sustainable development and happiness in nations, Energy, 32,
891-897.
Zonneveld, W. & Stead, D. (2007). European territorial cooperation and the concept
of urban-rural relationships, Planning, Practice & Research, 22(3), 439-453.
EAST OF ENGLAND
MAJOR URBAN
Three Rivers:
http://www.threerivers.gov.uk/Default.aspx/Web/SustainabilityAppraisal
Watford:
http://www.watford.gov.uk/ccm/content/planning-and-development/sustainabilityappraisal-sa-and-strategic-environmental-assessment-sea-scoping-report-march2006.en;jsessionid=154F8883BBD7B6B882FDFFDA5A5FD79A
RURAL 80
Breckland:
http://www.breckland.gov.uk/2a_sustainability.pdf
accessed 28 May 2008
Fenland:
http://www.fenland.gov.uk/ccm/content/development-policy/ldf/sustainable-appraisal-scoping-report.en
accessed 22 June 2008
Huntingdonshire:
http://www.huntsdc.gov.uk/NR/rdonlyres/33B33A0B-661B-4B4B-8B44-F9E70B4473B9/0/scoping_report_revision_sept_2007.pdf
accessed 30 June 2008
Mid Suffolk:
http://www.midsuffolk.gov.uk/NR/rdonlyres/A8795816-7D79-4DC7-9028-83256CB4F5C2/0/DraftAAPScopingReportApril2008.pdf
accessed 30 June 2008
North Norfolk:
http://www.northnorfolk.org/ldf/documents/Core_Strategy_Sustainability_Appraisal_WEB_VERSION.pdf
accessed 30 June 2008
South Cambridgeshire:
http://www.scambs.gov.uk/documents/retrieve.htm?pk_document=3611
accessed 22 June 2008
South Norfolk:
http://www.eastspace.net/gndp/documents/SA_SCOPING_REPORT_-_FINAL_VERSION_-_ADOPTED_DEC_2007.pdf
accessed 28 May 2008
Suffolk Coastal:
http://www.suffolkcoastal.gov.uk/NR/rdonlyres/FEC5C8F8-1354-4611-9F07-1E400CB47C22/0/SAScopingJune06.pdf
accessed 22 June 2008
Uttlesford:
http://www.uttlesford.gov.uk/Planning/local+plans+and+local+development+framework/saframeworkv7.pdf
accessed 22 June 2008
NORTH WEST
MAJOR URBAN
Bury:
http://www.bury.gov.uk/NR/rdonlyres/40528837-4D39-4290-AA35-3473F2EA34F6/0/SATaskA2BaselineInformation2007.pdf
accessed 22 June 2008
Liverpool:
http://www.liverpool.gov.uk/Images/tcm21-35170.pdf
accessed 22 June 2008
Oldham:
http://www.oldham.gov.uk/ldf-sustainability-appraisal-scoping-report-web.pdf
accessed 22 June 2008
Salford:
http://www.salford.gov.uk/appendix4a.pdf
accessed 22 June 2008
Sefton:
http://www.sefton.gov.uk/default.aspx?page=5866
accessed 22 June 2008
St Helens:
http://sthelens.gov.uk/SITEMAN/publications/31/LDF10sustainabilityappraisalscopingreport.pdf
accessed 22 June 2008
Stockport:
http://s1.stockport.gov.uk/council/corestrategy/chapter_393.html
accessed 22 June 2008
Trafford:
http://www.trafford.gov.uk/cme/live/dynamic/DocMan2Document.asp?document_id=59458AE9-1B9E-43B4-9B47-5B253A3D96D2
accessed 18 June 2007
Wigan:
http://www.wigan.gov.uk/NR/rdonlyres/2A44F7A5-ADD8-4FA6-8F95-A3ECB8A1CADD/0/SAScopingReport891kb.pdf
accessed 18 June 2007
RURAL 80
Allerdale:
http://www.allerdale.gov.uk/downloads/page1001/Core%20Strategy%20Scoping%20Report_1%20amendment.pdf
accessed 18 June 2007
Congleton:
http://www.congleton.gov.uk/pool/1/310720060253.pdf
accessed 22 June 2007
Copeland:
http://www.copelandbc.gov.uk/ms/www/local-plan/PDF/sustainability-assessment/Appendix-6_Formulation-Sustainability-Indicators.pdf
accessed 22 June 2007
Eden:
http://www.eden.gov.uk/pdf/pp-Final-Core-Strategy-Scoping-Report.pdf
accessed 18 June 2007
Ribble Valley:
http://www.ribblevalley.gov.uk/downloads/LOCAL_DEVELOPMENT_SCHEME_ADOPTED_MASTER_version_2007.pdf
accessed 18 June 2007
South Lakeland:
http://www.southlakeland.gov.uk/downloads/page1901/SA_Scoping_Report_280606.pdf
accessed 18 June 2007
SOUTH EAST
MAJOR URBAN
Dartford:
http://www.dartford.gov.uk/planningpolicy/DBCGBC_SAofLDFsScopingReportMar051_000.pdf
accessed 6 June 2008
Elmbridge:
http://www.elmbridge.gov.uk/search/search.asp
accessed 30 June 2008
Gravesham:
http://www.dartford.gov.uk/planningpolicy/DBCGBC_SAofLDFsScopingReportMar051_000.pdf
accessed 6 June 2008
Gravesham and Dartford have the same consultants and the same indicators.
Mole Valley:
http://www.molevalley.gov.uk/media/pdf/i/c/Scoping_Report_-_July_2005.pdf
accessed 6 June 2008
Spelthorne:
http://www.spelthorne.gov.uk/sustainability_appraisal_appendices_april_2007.pdf
accessed 20 June 2008
Woking:
http://www.woking.gov.uk/planning/policy/ldf/fsar.pdf
accessed 20 June 2008
RURAL 80
Isle of Wight:
http://www.iwight.com/living_here/planning/images/2ScopingReport.pdf
accessed 20 June 2008
Mid Sussex:
http://www.midsussex.gov.uk/Nimoi/sites/msdcpublic/resources/Final%20in%20pdf%20format.pdf
accessed 20 June 2008
South Oxfordshire:
http://www.southoxon.gov.uk/ccm/content/planning/policy/sustainability-appraisal.en
accessed 20 June 2008
Wealden:
http://www.wealden.gov.uk/Planning_and_Building_Control/Local_Plan/Sustainability_appraisal_SEA.pdf
accessed 20 June 2008
West Oxfordshire:
http://www.westoxon.gov.uk/files/download/5169-2445.pdf
accessed 29 May 2008
APPENDICES
Environmental Receptors Addressed
Score 0: Does not lead to SS.
Score 1: Is receptor based (Holland, 1997) and also has one or more of the following: framed in terms of physical units, possibly relating to carrying capacity or non-substitutability of capital (Holland, 1997); is stock and/or distributional (Gasparatos et al., 2007); exceeds environmental quality standards (The European Parliament and the Council of the European Union, 2001); measures the effect on areas or landscapes which have a recognised national, community or international protection status (The European Parliament and the Council of the European Union, 2001; Niemeijer & de Groot, 2008).
Score 2: Measures (part of) the cumulative (The European Parliament and the Council of the European Union, 2001), symbiotic (Sustainable Seattle, 2005) and transboundary nature of the effects (The European Parliament and the Council of the European Union, 2001). Covers a range of good receptors. Possibly measures local smaller sites of industrialisation (Holland, 1997) and their customers and suppliers (Darnall & Sides, 2008).
Score 3: Integrates all three strands of sustainability (Holland, 1997): intragenerational, intergenerational (Sustainable Seattle, 2005) and inter-species (Holland, 1997). Localised development of indicators, including horizon scanning to identify trends and indicators of emerging innovations (Government, 2005). Uses excellent practice in determining climate change indicators (e.g. biodiversity indicators sensitive to anthropogenic change) (Tasser et al., 2008), as these have substantial effects on health, social and economic indicators (Allman et al., 2004; Bosello et al., 2006).

Academic Credibility
Score 0: No academic credibility.
Score 1: Accurate and bias-free (Reed et al., 2006); measures what it is designed to measure (Sustainable Seattle, 2005; Niemeijer & de Groot, 2008).
Score 2: Academically robust (Reed et al., 2006; Niemeijer & de Groot, 2008) and able to show the probability, duration, frequency and reversibility of the effects (The European Parliament and the Council of the European Union, 2001).
Score 3: Substantiated; shows the method of verification and where results can be compared with progress elsewhere (Roberts, 2006; Niemeijer & de Groot, 2008).
Criteria continued: Addressing Uncertainty; Environmental Receptors Addressed; Locality; Disaggregation of Data; Measurability; Reliability; Relevant to Plan; Actionable.

Disaggregation of Data
Score 3: Based on current research (e.g. that community services are powerful determinants of health) (Doran et al., 2006); the relationship between material deprivation and ill-health is strong but not straightforward (Doran et al., 2006). Collects new disaggregated data for indicators, e.g. precise quantification of the scale of the digital divide (Warren, 2007), local SMEs supplying LAs (IFF Research Ltd., 2008) and projected population statistics (NSO, 2007; Shaw, 2007).

Measurability
Score 0: Not easily measured.
Score 1: Easily measurable (Reed et al., 2006; Donnelly et al., 2007) in qualitative terms (Niemeijer & de Groot, 2008), using existing data (Reed et al., 2006), collected and analysed through established, manageable methods (Sustainable Seattle, 2005), with meaningful units of measurement (Roberts, 2006).
Score 2: Easily measured in quantitative terms (Niemeijer & de Groot, 2008), using an existing historical record of comparative data (Niemeijer & de Groot, 2008); capable of being updated regularly (Donnelly et al., 2007).
Score 3: Units of measurement can be adjusted to reflect individual situations (Roberts, 2006), and data can be collected to reflect the area and the magnitude of the effect (The European Parliament and the Council of the European Union, 2001).

Reliability
Score 0: No reliability.
Score 1: Repeatable and able to assess trends over space (Niemeijer & de Groot, 2008) and time (Reed et al., 2006), at the right spatial and temporal scales (Sustainable Seattle, 2005; Niemeijer & de Groot, 2008).
Score 2: Sensitive, responding in a predictable manner to changes and stresses (Reed et al., 2006; Niemeijer & de Groot, 2008).
Score 3: Repeatable and reproducible in different contexts, allowing unambiguous interpretation (Niemeijer & de Groot, 2008).

Relevant to Plan
Score 0: Not relevant to the plan.
Score 1: Links to quantitative and qualitative targets in the plan (Niemeijer & de Groot, 2008): 1%-33% of targets.
Score 2: Links to 34%-67% of targets (Niemeijer & de Groot, 2008). Could relate to opportunities for plan change among LAs, businesses, non-profit organisations, institutions and individuals (Sustainable Seattle, 2005).
Score 3: Links to 68%-100% of targets (Niemeijer & de Groot, 2008). Could be time-bound; sensitive to changes within policy timeframes (Niemeijer & de Groot, 2008). Can identify conflict with plan objectives so that alternatives may be explored (Donnelly et al., 2007).

Actionable
Score 0: Not easily actionable.
Score 1: Does not require excessive data collection skills (Reed et al., 2006; Niemeijer & de Groot, 2008). Has well-established links with a specific management practice or intervention, and a target, baseline or threshold against which to measure it and take action (Reed et al., 2006; Niemeijer & de Groot, 2008).
Actionable (continued)
Score 2: Measures conditions or activities that can be changed in a positive direction by local actions (Sustainable Seattle, 2005). Provides early warning of detrimental change (Reed et al., 2006; Donnelly et al., 2007; Niemeijer & de Groot, 2008).
Score 3: Shared resources across LAs (Scott Wilson, Levett-Therivel Sustainability Consultants, Treweek Environmental Consultants and Land Use Consultants, 2006), e.g. for local biodiversity indicators such as vascular plant numbers (Tasser et al., 2008).

Stakeholder Involvement
Score 0: Mandatory groups consulted only.
Score 1: Indicators developed with some stakeholders; notable stakeholders and/or minorities left out (ODPM, 2005).
Score 2: Indicators developed with a good range of stakeholders, identifying the areas most at risk of damage (Donnelly et al., 2007) and what is important to stakeholders (Reed et al., 2006).
Score 3: Wide consultation of stakeholders, in both type and number, in choosing the indicator. User-driven, so as to be relevant to the target audience (Niemeijer & de Groot, 2008).

Funding/Cost
Score 0: Not easy to fund, monitor or report.
Score 1: Has potential for funding; cost-effective to measure (Sustainable Seattle, 2005; Reed et al., 2006; Niemeijer & de Groot, 2008).
Score 2: The benefits of the information outweigh the costs of usage (Niemeijer & de Groot, 2008).
Score 3: Helps optimise the number of indicators (Donnelly et al., 2007).

Easily Communicated
Score 0: Not easily communicated.
Score 1: Community assets (Reed et al., 2006) and concerns are reflected in the data and analyses of each indicator (Sustainable Seattle, 2005).
Score 2: Able to communicate information at a level appropriate for making policy decisions (Reed et al., 2006) and to the general public (Donnelly et al., 2007; Niemeijer & de Groot, 2008).
Score 3: Attractive to local media; the press publicises the indicators and uses them to monitor and analyse (Sustainable Seattle, 2005).
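As an illustration only, the 0-3 scoring scheme above can be tallied programmatically. This sketch is not from the dissertation: the criterion keys, the function name and the assumption that the Credibility Total/12 sums four credibility-related criteria (4 x 3 = 12) are all mine.

```python
# Hypothetical sketch: tally an indicator's appropriateness under the
# 0-3 per-criterion scoring scheme. The choice of which four criteria
# feed the credibility total is an assumption, not the author's method.

CREDIBILITY_CRITERIA = ("academic_credibility", "addressing_uncertainty",
                        "environmental_receptors", "locality")

def score_indicator(scores):
    """Sum per-criterion scores (each 0-3) into overall totals."""
    for name, value in scores.items():
        if not 0 <= value <= 3:
            raise ValueError(f"{name}: each criterion is scored 0-3")
    credibility = sum(v for k, v in scores.items() if k in CREDIBILITY_CRITERIA)
    return {"credibility_total": credibility,
            "overall_total": sum(scores.values())}
```

For example, an indicator scoring 3, 2, 1 and 2 on the four credibility-related criteria and 3 on measurability would receive a credibility total of 8 and an overall total of 11.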
Local Authorities, by region and classification:

EE Rural 80 LAs: *Babergh; Breckland; *East Cambridgeshire; Fenland; Forest Heath; Huntingdonshire; *Maldon; Mid Bedfordshire; Mid Suffolk; North Norfolk; South Cambridgeshire; South Norfolk; Suffolk Coastal; Uttlesford
EE Major Urban LAs: *Broxbourne; Dacorum; Epping Forest; Three Rivers; Watford
NW Rural 80 LAs: Allerdale; Congleton; Copeland; Eden; Ribble Valley; South Lakeland
NW Major Urban LAs: *Bolton; Bury; *Knowsley; Liverpool; *Manchester; Oldham; *Rochdale; Salford; Sefton; St. Helens; Stockport; #Tameside; Trafford; Wigan
SE Rural 80 LAs: Chichester; Isle of Wight; Mid Sussex; South Oxfordshire; Wealden; West Oxfordshire
SE Major Urban LAs: Dartford; Elmbridge; Epsom and Ewell; Gravesham; Mole Valley; *Runnymede; Spelthorne; Woking
Appropriateness scores per Local Authority. The source table's columns are: Academic Credibility; Addressing Uncertainty; Credibility Total/12; Locality; Disaggregation of Data; Measurability; Reproducibility; Relevant to Plan; Actionable; Stakeholder Involvement; Funding/Cost; Easily Communicated; Measurement Total/11; Indicator Total/38; Indicator Number in LA. Values are reproduced in extraction order; the final three figures in each row are the Economic, Environmental and Social Classification percentages.

(unidentified LA): 47, 28, 65, 49, 47, 47, 60, 48, 38, 58, 33, 52, 83, 45, 44, 143; Economic 10%, Environmental 63%, Social 27%
Three Rivers (EE, MU): 47, 27, 66, 49, 47, 47, 61, 48, 39, 58, 33, 52, 83, 45, 44, 140; Economic 11%, Environmental 64%, Social 26%
Watford (EE, MU): 41, 33, 49, 51, 43, 46, 61, 50, 40, 63, 33, 33, 87, 47, 44, 118; Economic 11%, Environmental 61%, Social 28%
Breckland (EE, R80): 41, 29, 37, 53, 40, 47, 12, 64, 51, 43, 67, 58, 33, 53, 73, 54, 46, 59; Economic 25%, Environmental 54%, Social 20%
Fenland (EE, R80): 42, 38, 51, 52, 46, 48, 69, 58, 44, 61, 33, 53, 83, 46, 46, 43; Economic 19%, Environmental 44%, Social 37%
S Cambs (EE, R80): 44, 38, 50, 52, 46, 45, 67, 57, 42, 59, 50, 85, 39, 42, 40; Economic 20%, Environmental 45%, Social 35%
S Norfolk (EE, R80): 48, 39, 45, 52, 46, 50, 71, 57, 46, 67, 63, 67, 52, 80, 65, 54, 42; Economic 31%, Environmental 50%, Social 19%
Suffolk Coastal (EE, R80): 43, 43, 52, 47, 46, 46, 72, 59, 45, 67, 54, 87, 42, 44, 143; Economic 22%, Environmental 44%, Social 34%
Uttlesford (EE, R80): 35, 39, 43, 45, 41, 45, 74, 61, 45, 33, 67, 57, 86, 49, 45, 58; Economic 14%, Environmental 28%, Social 59%
Huntingdonshire (EE, R80): 39, 28, 50, 46, 41, 42, 67, 53, 41, 67, 65, 33, 54, 85, 61, 49, 24; Economic 13%, Environmental 42%, Social 46%
Mid Suffolk (EE, R80): 40, 36, 48, 44, 42, 44, 68, 56, 42, 62, 67, 51, 83, 53, 46, 137; Economic 24%, Environmental 41%, Social 35%
Appropriateness scores per Local Authority (continued); columns and layout as in the preceding table.

Dacorum (EE, MU): 36, 46, 49, 43, 48, 67, 57, 45, 63, 52, 82, 40, 43, 57; Economic 21%, Environmental 51%, Social 28%
Bury (NW, MU): 36, 35, 46, 43, 40, 46, 63, 55, 42, 56, 48, 85, 38, 40, 103; Economic 33%, Environmental 31%, Social 36%
Liverpool (NW, MU): 41, 36, 49, 48, 43, 45, 67, 55, 42, 33, 56, 100, 48, 81, 64, 51, 47; Economic 23%, Environmental 40%, Social 36%
Oldham (NW, MU): 39, 33, 43, 45, 40, 48, 70, 60, 45, 63, 33, 50, 87, 47, 44, 61; Economic 16%, Environmental 46%, Social 38%
Salford (NW, MU): 44, 39, 52, 50, 46, 43, 15, 71, 57, 47, 56, 67, 49, 85, 51, 48, 35; Economic 23%, Environmental 37%, Social 40%
Sefton (NW, MU): 38, 43, 41, 51, 43, 47, 10, 68, 56, 45, 59, 33, 49, 83, 45, 44, 32; Economic 28%, Environmental 44%, Social 28%
St Helens (NW, MU): 43, 32, 50, 45, 42, 44, 69, 54, 43, 61, 67, 48, 86, 54, 47, 82; Economic 28%, Environmental 30%, Social 42%
Stockport (NW, MU): 40, 37, 62, 55, 48, 42, 59, 51, 39, 56, 33, 54, 78, 45, 44, 37; Economic 19%, Environmental 46%, Social 35%
Trafford (NW, MU): 48, 30, 46, 48, 43, 46, 67, 51, 43, 67, 63, 67, 47, 84, 65, 52, 125; Economic 18%, Environmental 39%, Social 43%
Wigan (NW, MU): 45, 30, 51, 50, 44, 47, 10, 68, 56, 45, 63, 33, 54, 89, 48, 46, 103; Economic 23%, Environmental 40%, Social 37%
S Lakeland (NW, R80): 44, 36, 55, 51, 47, 46, 65, 55, 42, 65, 67, 54, 86, 55, 49, 50; Economic 20%, Environmental 46%, Social 34%
Ribble Valley (NW, R80): 40, 31, 47, 48, 41, 44, 10, 69, 55, 45, 67, 64, 67, 54, 84, 67, 52, 87; Economic 23%, Environmental 36%, Social 41%
Eden (NW, R80): 41, 42, 46, 49, 45, 47, 75, 61, 48, 62, 67, 54, 87, 54, 49, 78; Economic 27%, Environmental 31%, Social 42%
Appropriateness scores per Local Authority (continued); columns and layout as in the preceding table.

North Norfolk (EE, R80): 48, 47, 69, 58, 45, 62, 33, 52, 84, 47, 47, 55; Economic 22%, Environmental 62%, Social 16%
Congleton (NW, R80): 43, 46, 50, 49, 47, 43, 74, 63, 47, 64, 33, 53, 85, 47, 47, 68; Economic 19%, Environmental 43%, Social 38%
Allerdale (NW, R80): 47, 32, 55, 51, 46, 47, 65, 59, 44, 64, 67, 50, 81, 52, 48, 78; Economic 13%, Environmental 49%, Social 38%
Mole Valley (SE, MU): 36, 29, 45, 37, 37, 44, 10, 60, 45, 39, 50, 33, 46, 69, 40, 41, 120; Economic 21%, Environmental 55%, Social 24%
Dartford (SE, MU): 42, 35, 52, 52, 46, 46, 71, 59, 46, 67, 33, 56, 89, 49, 47, 49; Economic 20%, Environmental 59%, Social 20%
Gravesham (SE, MU): 42, 35, 52, 52, 46, 46, 71, 59, 46, 67, 33, 56, 89, 49, 47, 49; Economic 20%, Environmental 59%, Social 20%
Elmbridge (SE, MU): 41, 34, 51, 53, 45, 45, 65, 49, 41, 55, 67, 51, 75, 49, 45, 120; Economic 23%, Environmental 52%, Social 26%
Spelthorne (SE, MU): 40, 34, 43, 51, 42, 45, 66, 53, 43, 60, 100, 50, 77, 58, 48, 75; Economic 23%, Environmental 53%, Social 24%
Woking (SE, MU): 44, 35, 52, 50, 45, 46, 11, 68, 54, 44, 100, 63, 100, 52, 87, 80, 59, 95; Economic 21%, Environmental 44%, Social 35%
Isle of Wight (SE, R80): 53, 35, 54, 53, 49, 49, 68, 56, 45, 62, 100, 52, 83, 59, 52, 151; Economic 22%, Environmental 53%, Social 25%
W Oxon (SE, R80): 41, 38, 41, 57, 44, 45, 68, 56, 44, 60, 33, 48, 73, 47, 45, 59; Economic 24%, Environmental 54%, Social 22%
Mid Sussex (SE, R80): 46, 35, 49, 49, 45, 47, 68, 60, 45, 62, 67, 52, 80, 52, 48, 62; Economic 21%, Environmental 50%, Social 29%
S Oxon (SE, R80): 42, 38, 47, 47, 44, 42, 15, 70, 58, 46, 33, 67, 52, 88, 48, 46, 55; Economic 20%, Environmental 45%, Social 35%
Appropriateness scores (continued): Copeland (NW, R80), Wealden (SE, R80) and the column-averages row. Values as extracted (column order scrambled in the source): 57, 49, 39, 49, 67, 54, 33, 47, 80, 56, 31%, 42, 39%, 52, 30%, 63, 90, 7, 48, 47, 32%, 42, 47%, 21%, 78, 47, 51, 83, 51, 45, 61, 16, 44, 55, 67, 6.5, 49, 46, 49, 44, 30, 49, 40, 49, 35, 43.
Indicator and classification (sample; the Justification of Classification column survives only in fragments):

EMS adoption: Economic
Foster development of environmental industry: Economic
Employment: Economic
Deprivation: Economic
Life expectancy: Economic
Insulation: Economic
Resources (energy): Environmental
EIA: Environmental
Resources/Biodiversity: Environmental
Energy: Environmental
Water: Environmental (justification fragment: "water quality")
Education (any age/educational establishment or workplace): Social
Disabled access (buildings, outside areas, transport): Social
Educational facilities: Social
Neighbourhood liveability: Social
Play areas: Social
Recreational facilities/health: Social
Social psychological: Social

Altogether the totals for indicators that need justification in each classification category are Economic (20), Environmental (15) and Social (39). A total of 74 sustainability indicators are in this indicator reference set; a sample of 19 is shown in this appendix.
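The quoted counts translate into three-pillar shares with simple arithmetic; the short sketch below (variable names are mine) makes the percentages explicit.

```python
# Share of the 74 reference-set indicators falling in each pillar,
# using the counts quoted above (20 economic, 15 environmental, 39 social).
counts = {"Economic": 20, "Environmental": 15, "Social": 39}
total = sum(counts.values())  # 74
shares = {pillar: round(100 * n / total) for pillar, n in counts.items()}
# Roughly 27% economic, 20% environmental and 53% social.
```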