
© 2014 by H. Scott Matthews, Chris T. Hendrickson and Deanna H. Matthews

Cover design by Mireille Mobley, 2015

Suggested Citation:

H. Scott Matthews, Chris T. Hendrickson, and Deanna H. Matthews, Life Cycle Assessment:
Quantitative Approaches for Decisions that Matter, 2014. Open access textbook, retrieved from
https://www.lcatextbook.com/.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International
License, with its associated liability limitations and lack of warranty.

In short, you are free to:

Share — copy and redistribute the material in this textbook in any medium or format

Adapt — remix, transform, and build upon the material for any purpose, even commercially.

Under the following terms:

Attribution — You must give appropriate credit, provide a link to the license, and indicate if
changes were made. You may do so in any reasonable manner, but not in any way that suggests
the licensor endorses you or your use.

ShareAlike — If you remix, transform, or build upon the material, you must distribute your
contributions under the same license as the original.

Beyond the Creative Commons license, we encourage anyone to build on our material – and to
please let us know if you do, so that your work too can be included on the lcatextbook.com site!

Microsoft, Encarta, MSN, and Windows are either registered trademarks or trademarks of
Microsoft Corporation in the United States and/or other countries.

MATLAB is a registered trademark of The MathWorks, Inc. in the United States and/or other
countries.

Life Cycle Assessment: Quantitative Approaches for Decisions That Matter – lcatextbook.com

Dedication

To Lester Lave and Francis McMichael

Who taught us to work on problems that matter


Table of Contents
Dedication.......................................................................................................................3
Table of Contents ............................................................................................................4
Preface ............................................................................................................................8
Chapter 1 : Life Cycle and Systems Thinking .................................................................11
Learning Objectives for the Chapter ..................................................................................... 11
Overview of Life Cycles ......................................................................................................... 11
A Brief History of Engineering and The Environment ............................................................ 13
Life Cycle Thinking ................................................................................................................ 15
Systems Thinking in the Life Cycle......................................................................................... 18
A History of Life Cycle Thinking and Life Cycle Assessment ................................................... 19
Decisions Made Without Life Cycle Thinking......................................................................... 21
Inputs and Outputs of Interest in Life Cycle Models ............................................................. 23
From Inputs and Outputs to Impacts .................................................................................... 25
The Role of Design Choices ................................................................................................... 28
What Life Cycle Thinking and Life Cycle Assessment Is Not................................................... 29
Chapter Summary ................................................................................................................. 29
Chapter 2 : Quantitative and Qualitative Methods Supporting Life Cycle Assessment.34
Learning Objectives for the Chapter ..................................................................................... 34
Basic Qualitative and Quantitative Skills............................................................................... 34
Working with Data Sources................................................................................................... 35
Accuracy vs. Precision ........................................................................................................... 39
Uncertainty and Variability ................................................................................................... 40
Management of Significant Figures....................................................................................... 41
Ranges .................................................................................................................................. 43
Units and Unit Conversions................................................................................................... 46
Energy-Specific Considerations ............................................................................................. 47
Use of Emissions or Resource Use Factors ............................................................................ 49
Estimations vs. Calculations .................................................................................................. 50
Attributes of Good Assumptions........................................................................................... 54
Validating your Estimates ..................................................................................................... 56
Building Quantitative Models ............................................................................................... 58
A Three-step method for Quantitative and Qualitative Assessment ..................................... 60
Chapter Summary ................................................................................................................. 61
Chapter 3 : Life Cycle Cost Analysis ..............................................................................64
Learning Objectives for the Chapter ..................................................................................... 64
Life Cycle Cost Analysis in the Engineering Domain .............................................................. 64
Discounting Future Values to the Present ............................................................................. 67
Life Cycle Cost Analysis for Public Projects............................................................................ 70

Deterministic and Probabilistic LCCA .................................................................... 72
Chapter Summary................................................................................................................. 75
Chapter 4 : The ISO LCA Standard – Goal and Scope .................................................... 80
Learning Objectives for the Chapter ..................................................................................... 80
Introduction to Standards .................................................................................................... 80
The Life Cycle Assessment Standard ..................................................................................... 82
ISO LCA Study Design Parameters ........................................................................................ 84
Chapter Summary................................................................................................................. 96
Chapter 5 : Data Acquisition and Management for Life Cycle Inventory Analysis ..... 100
Learning Objectives for the Chapter ................................................................................... 100
ISO Life Cycle Inventory Analysis ........................................................................................ 101
Life Cycle Interpretation ..................................................................................................... 113
Identifying and Using Life Cycle Data Sources .................................................................... 114
LCI Data Module Metadata and Formats ............................................................................ 125
Referencing Secondary Data............................................................................................... 129
Additional Considerations about Secondary Data and Metadata ....................................... 131
Chapter Summary............................................................................................................... 135
Advanced Material for Chapter 5 ....................................................................................... 139
Section 1 - Accessing Data via the US LCA Digital Commons............................................... 139
Section 2 – Accessing LCI Data Modules in SimaPro ........................................................... 144
Section 3 – Accessing LCI Data Modules in openLCA .......................................................... 150
Chapter 6 : Analyzing Multiple Output Processes and Multifunctional Product Systems
................................................................................................................................... 162
Learning Objectives for the Chapter ................................................................................... 162
Allocation of Flows for Processes with Multiple Products .................................................. 164
An Example of Allocation of Process Flows in US LCI Database .......................................... 168
Chapter Summary............................................................................................................... 186
Further Reading.................................................................................................................. 187
Chapter 7 : Uncertainty and Variability in LCA ........................................................... 192
Learning Objectives for the Chapter ................................................................................... 192
Methods to Address Uncertainty and Variability ............................................................... 202
Quantitative Methods to Address Uncertainty and Variability........................................... 206
Ranges ................................................................................................................................ 206
Sensitivity Analysis ............................................................................................................. 214
Chapter Summary............................................................................................................... 217
Chapter 8 : LCA Screening via Economic Input-Output Models ................................. 222
Learning Objectives for the Chapter ................................................................................... 222
Input-Output Tables and Models........................................................................................ 222
Input–Output Models Applied to Life Cycle Assessment .................................................... 230
Introduction to the EIO-LCA Input-Output LCA Model........................................................ 234

EIO-LCA Example: Automobile Manufacturing.................................................... 238
Beyond Cradle to Gate Analyses with IO-LCA Models ......................................................... 243
Chapter Summary ............................................................................................................... 245
Advanced Material for Chapter 8 - Overview ...................................................................... 250
Section 1 - Linear Algebra Derivation of Leontief (Input-Output) Model Equations ............ 250
Section 2 – Commodities, Industries, and the Make-Use Framework of EIO Methods ....... 252
Section 3 – Further Detail on Prices in IO-LCA Models ........................................................ 255
Section 4 – Mapping Examples from Industry Classified Sectors to EIO Model Sectors....... 259
Section 5 – Modeling Effects of Multiple Final Demand Entries .......................................... 264
Section 6 – Spreadsheet and MATLAB Methods for Using EIO Models ............................... 268
Section 7 – IO-LCA-based Uncertainty Analysis: Example with Ranges ............................... 280
Chapter 9 : Advanced Life Cycle Models .................................................................... 299
Learning Objectives for the Chapter ................................................................................... 299
Process Matrix Based Approach to LCA............................................................................... 299
Connection Between Process- and IO-Based Matrix Formulations ..................................... 304
Extending process matrix methods to post-production stages ........................................... 313
Categories of Hybrid LCA Models ........................................................................................ 316
Chapter Summary ............................................................................................................... 322
Advanced Material for Chapter 9 – Section 1 – Process Matrix Models in MATLAB............ 328
Advanced Material for Chapter 9 – Section 2 – Process Matrix Models in SimaPro ............ 330
Advanced Material for Chapter 9 – Section 3 – Process Matrix Models in openLCA ........... 334
Advanced Material for Chapter 9 – Section 4 – Allocation in Process Matrix Models ......... 338
Advanced Material for Chapter 9 – Section 5 – Hybrid EIO-LCA Models ............................. 342
Advanced Material for Chapter 9 – Section 6 – Connections in Process Matrix Models:
Comparison of US LCI and EIO-LCA...................................................................................... 346
Advanced Material for Chapter 9 – Section 7 – Uncertainty in Process Matrix Models: Case
Study of US LCI .................................................................................................................... 360
Chapter 10 : Life Cycle Impact Assessment ................................................................ 366
Learning Objectives for the Chapter ................................................................................... 366
Why Impact Assessment? ................................................................................................... 366
Overview of Impacts and Impact Assessment ..................................................................... 367
ISO Life Cycle Impact Assessment ....................................................................................... 372
Chapter Summary ............................................................................................................... 394
Advanced Material for Chapter 10 – Section 2 – Process Matrix Models in SimaPro for
Impact Assessment ............................................................................................................. 403
Advanced Material for Chapter 10 – Section 3 – Uncertainty from Choice of Impact
Assessment Method ........................................................................................................... 413
Chapter 11 : Placeholder ............................................................................................. 440
Deterministic and Probabilistic LCCA – keep in chapter 3 or here? ..................................... 449
Chapter 12 : Advanced Hybrid Hotspot and Path Analysis ......................................... 463

Learning Objectives for the Chapter ................................................................... 463
Results of Aggregated LCA Methods................................................................................... 463
A Disaggregated Two-Unit Example ................................................................................... 466
Structural Path and Network Analysis ................................................................................ 467
Web-based Tool for SPA ..................................................................................................... 475
Chapter Summary............................................................................................................... 486
Advanced Material for Chapter 12 – Section 1 - MATLAB Code for SPA ............................. 489


Preface
After finishing our book on Economic Input Output Life Cycle Assessment (EIO-LCA)
(Hendrickson, Lave and Matthews 2006) with help from our colleagues Arpad Horvath, Satish
Joshi, Fran McMichael, Heather MacLean, Gyorgyi Cicas, Deanna Matthews and Joule
Bergerson, we assumed that would be our final word. We did not imagine writing another
book on the topic. The 2006 book successfully demonstrated the EIO-LCA approach and
demonstrated various applications. At Carnegie Mellon University (CMU), we had a
sustainability sequence of four half-semester courses in our graduate program in Civil and
Environmental Engineering. Only one of those courses was on environmental life cycle
assessment (LCA), and over the course of a seven-week term there was only so much material
that could be covered. Also, the fact that LCA follows an established process set by the
International Organization for Standardization (ISO) (and other similar bodies) made it hard to
justify writing a book that teaches you how to use an existing recipe. Imagine writing a
cookbook that intends to teach you how to read other cookbooks!

But after using the book for a few years, we realized how much other material was needed and
how the book had only limited value as a textbook (which was not even the intent of the book
in the first place). Our half-semester graduate LCA course grew to a full semester. We
supplemented readings from our book with many other resources – to the point that as of a
few years ago we were only assigning a few of the original book chapters. So while this book
was not really planned, the preparations for it have been happening for the last five years.

Another driving force is that LCA has changed since 2006. From our observations as
educators, researchers, practitioners, and peer reviewers in the LCA community, there are
trends that concern us. One of these trends is that practitioners depend too much on LCA
software features – simply pressing buttons in existing tools and reporting the results –
without fully understanding the implications. In particular, many
practitioners accept calculations without considering the large amount of underlying
uncertainty in the numbers. These observations are especially concerning as LCA (as the title
of the book implies) is increasingly being used to support ‘big decisions’ rather than simple
decisions such as whether to use paper or plastic bags (we actually favor cloth bags).

And thus we have prepared this free e-book to help educate you about LCA. Let us clearly
note that this book should be a supplement to, not a substitute for, acquiring, reading, and
learning the established LCA standards. This book is intended to be a companion to an
organized tour of those standards, not a replacement. In addition, we have organized chapters
in a consistent way so that it can be used for undergraduate or graduate audiences. For many
of the chapters, there are sections at the end of each chapter that we expect an undergraduate
course may skip but that a graduate course may dive into quite deeply. We use the book in this
serial format in our own undergrad and graduate LCA courses at CMU.


This book (like its predecessor) is about life cycle thinking and acquiring and generating the
information to make sound decisions to improve environmental quality and sustainability.
How can we design products, choose materials and processes, and decide what to do at the
end of a product’s life in ways that produce fewer environmental discharges and use less
material and energy?

We also should add that pursuing environmental improvement is only one of many social
objectives. In realistic design situations, costs and social impacts are also important to consider.
This book focuses on environmental impacts, although life cycle costs are discussed in Chapter
3. Readers are encouraged to also seek out material on life cycle costs and social impacts. A
good starting point is our free, online book on Civil Infrastructure Planning, Investment and Pricing
(Hendrickson and Matthews, 2013).

We expect that users of this book are generally knowledgeable about environmental and
energy issues, are comfortable with probability, statistics, and building small quantitative
models, and are willing to learn new methods that will help organize broad thoughts about
how products, processes, and systems can be assessed.

In summary, we consider this a ‘take two’ of our original purpose – to have a unified resource
for use in our own courses. This book’s significantly expanded scope benefits from our
collective 40 years of experience in LCA. We overview the ISO LCA Framework, but spend
most of the time and space discussing the needs and practices associated with assembling,
modeling, and analyzing the data that will support assessments.

We thank our colleagues Xiaoju (Julie) Chen, Gwen DiPietro, Rachel Hoesly, and Francis
McMichael from CMU, Vikas Khanna and Melissa Bilec at the University of Pittsburgh, Bo
Weidema from Aalborg University, and Joyce Cooper of the University of Washington for
their many thoughts, comments, and contributions to make this book project a success. Special
thanks to Michael M. Whiston, Cate Fox-Lent, and Scott O’Dowd who provided substantial
proofreading assistance for drafts. We also thank dozens of students and colleagues for many
interactions, questions and inspirations over the years.

We hope that our experiences, as represented here in this free e-book, will make you a more
informed and educated teacher and practitioner of LCA and allow you to learn it and apply it
right the first time - as you are introduced to the topic.

H. Scott Matthews
Deanna H. Matthews
Chris T. Hendrickson
August 2015


References
Hendrickson, Chris T., Lester B. Lave, and H. Scott Matthews. Environmental life cycle
assessment of goods and services: An input-output approach. RFF Press, 2006.

Hendrickson, Chris T., and H. Scott Matthews. Civil Infrastructure Planning, Investment and
Pricing, 2013. http://faculty.ce.cmu.edu/textbooks/cspbook/ (accessed August 2015).


Chapter 1 : Life Cycle and Systems Thinking


In this chapter, we introduce the concept of “thinking” about life cycles. Whether or not you
become a practitioner of LCA, this skill of broadly considering the implications of a product
or system is useful. We first provide definitions of life cycles and a short history of LCA as it
has grown and developed over the past decades, then give some examples where applying
life cycle thinking (rather than completing full-blown LCAs) demonstrates how analyses can
lead (or have already led) to poor decisions. The goal is to learn how to think
about problems from a system-wide perspective.

Learning Objectives for the Chapter


At the end of this chapter, you should be able to:

1. State (a) the concept of a life cycle and (b) its various stages as related to assessment
of products.

2. Illustrate the complexity of life cycles for even simple products.

3. Explain why environmental problems, like physical products, (a) are complex and (b)
require broad thinking and boundaries that include all stages of the life cycle.

4. Describe what kinds of outcomes we might expect if we fail to use life cycle thinking.

Overview of Life Cycles


We first learn about life cycles at a young age – the butterfly’s genesis from egg to larva
(caterpillar) to chrysalis to butterfly; the path of water from precipitation into bodies of water,
then evaporation or transpiration back into the air. Frogs, tomatoes in the garden, seasons
throughout the year – all life cycles we know or experience in our own life cycle. Each
individual stage along the cycle is given a distinct term to distinguish it from the others, yet
each stage flows seamlessly into the next often with no clear breaks. The common theme is a
continuous stepwise path, one stage morphing into the next, where after some time period we
are back to the initial starting point. A dictionary definition of life cycle might be “a series of
stages or changes in the life of an organism”. Here we consider this definition for products,
physical processes, or systems.

While we often are taught or consider life cycles as existing in the natural world, we can just
as easily apply the concept to manmade products or constructs: aluminum’s journey from
beverage can to recycle bin back to beverage can; a cellphone we use for our 2-year contract
period then hold onto (because it must have some value!) before donating to a good cause
where (we presume) it is used again before…being recycled? … being thrown away? The same
common theme – a continuous stepwise path, one stage morphing into the next, where after
some time we are (or may be) back to the initial starting point. It is these kinds of life cycles
for manmade products and systems that are the focus of this book.

As the domain of sustainable management has taken root, increasingly stakeholders describe
the need for decision making that considers the “life cycle”. But what does that mean? Where
does that desire and intent come from?

The entire life cycle for a manmade product goes from obtaining everything needed to make
the product, through manufacturing it, using it, and then deciding what to do with it once it is
no longer being used. Returning to the natural life cycles described above this means going
from the birth of the product to its death. As such, this kind of view is often called a “cradle
to grave” view of a product, where the cradle represents the birthplace of the product and the
grave represents what happens to it when we are done with it – often to be thrown into a
landfill. Some life cycles may focus on the process of making the product (up to the point of
leaving the factory) and have a “cradle to gate” view, where the word gate refers to the factory
gate. If we have a fairly progressive view, we might think about alternatives to a “grave”. That
might mean recycling of some sort, or taking back the product and using it again. Building on
this alternative terminology, proponents have also referred to the complete recycling of
products as going from “cradle to cradle”.
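The effect of choosing one of these boundaries over another can be sketched with a toy quantitative model. In the Python sketch below, the stage names and the emission numbers (kg CO2-eq) are invented purely for illustration and are not taken from any real inventory:

```python
# Hypothetical life cycle stages with made-up emission totals (kg CO2-eq),
# used only to illustrate how the system boundary changes a result.
stages = {
    "raw material extraction": 2.0,  # the "cradle"
    "manufacturing": 5.0,            # ends at the factory "gate"
    "use": 10.0,
    "end of life": 1.0,              # the "grave"
}

def total_emissions(stage_names):
    """Sum emissions over the stages inside a chosen system boundary."""
    return sum(stages[name] for name in stage_names)

cradle_to_gate = total_emissions(["raw material extraction", "manufacturing"])
cradle_to_grave = total_emissions(stages)  # every stage, cradle through grave

print(cradle_to_gate)   # 7.0
print(cradle_to_grave)  # 18.0
```

Note how the cradle-to-gate boundary omits the use stage, which dominates this made-up example – one reason the choice of boundary matters for the decisions discussed throughout the book.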

Consider some initial product life cycle views:

• A piece of fruit is grown on a farm which uses water and perhaps various fertilizers
and equipment to bring it to market. There it is sold to either a food service business
or an individual consumer. While much of it is hopefully eaten, some of it will not be
edible and the remainder will be disposed of as food waste – either as compost or in
the trash.

• A tuxedo is sewn together at a factory and then distributed and sold. It is purchased
either for personal use (perhaps only being used once or twice a year), or for the
purposes of renting it out for profit to people who need it only once, and maybe
cannot justify the cost of buying one. The rental tuxedo will be rented several times a
month, and after each rental it is cleaned and prepared for the next rental. Eventually
the tuxedo will either be too worn to use, or the owner will grow out of it. At that
point it is likely donated or thrown away.

• A car is put together from components at a factory. It is then delivered to a dealer,


purchased by a consumer, and driven for a number of years. At some point the owner
decides to get rid of the car – perhaps selling it to another driver who uses it for several
years. Eventually its owner finds no sufficient value for it, and it will likely be shredded
into small pieces and useful metals reclaimed.

• A computer is assembled from components manufactured across the world (all of


which are shipped to an assembly line). It is bought and plugged in by the owner,
consuming electricity for several years before becoming obsolete. At the end of its
useful life it might be sold for a fraction of its purchase price, or may be donated to a
party that still finds value in it, or it may be stored under a desk for several years. Like
the car example above, though, eventually the owner will find no sufficient value for
it and want to get rid of it.

We can already start to think about some implications of these basic life cycles. Using fuels
and electricity generates pollution. Applying fertilizers results in runoff and stream
contamination. Washing a tuxedo releases chemicals into wastewater systems that need to be
removed. Making semiconductor chips consumes large amounts of water and uses hazardous
chemicals. Finally, putting items in landfills minimizes our opportunity to continue extracting
usefulness from those value-added items, takes up land that we cannot then use for other
purposes, and, if the items contain hazardous components, leaks may eventually contaminate
the environment.

This is a modern view of a product. We have not always been so broad and comprehensive in
thinking about such things. In the next few sections we briefly talk about the related history
of this kind of thinking, and also give some sobering examples of decisions and products that
were made (or promoted) that had not fully considered the life cycle.

A Brief History of Engineering and The Environment


Before we further motivate life cycle thinking, let’s briefly talk about the history of industrial
production, environmental engineering, science, and management as it applies to managing
the impacts of products. While engineers and others have been creating production or
manufacturing processes for products for centuries, nearly all of the production systems we
have created in that time are “linear”, i.e., we need to keep feeding the system with input at
one end to create output at the other. We design such linear processes independently of
whether we will have long-lasting supplies of the needed inputs, and certainly have not made
contingencies for how to change the process should we begin to run out of those resources.
We also have thought quite linearly in terms of how well the natural environment could deal
with any potential wastes from the production systems we have designed.

It is worth realizing that environmental engineering (i.e., the integration of science to improve
our natural environment) is a fairly young discipline. While there is evidence of ancient
civilizations making interesting and innovative solutions to dealing with wastes, the

Life Cycle Assessment: Quantitative Approaches for Decisions That Matter – lcatextbook.com
14 Chapter 1: Life Cycle and Systems Thinking

establishment of a formal environmental engineering profession did not really occur until
around 1900. Initially, what we now call environmental engineering grew out of the need to
better manage urban wastes, and thus most of the activity was originally referred to as “sanitary
engineering”. Such activities involved diversion of waste streams to distant sinks to avoid local
health problems, such as sewer systems (Tarr 1996). Eventually, end of pipe treatment
emerged. By end of pipe, we mean that the engineering problem was focused on what to do
with the waste of a system (e.g., a factory or a social waste collection system) after it has already
been produced. Releases of wastes and undesirable outputs to the environment are also called
emissions. Another historical way of dealing with environmental problems has been through
remediation. Remediation occurs after the pollution has already occurred, and may involve
cleaning up a toxic waste dump, dredging a river to remove long-buried contaminants that
were dumped there via an effluent pipe, or converting contaminated former industrial sites
(brownfields) into new developments. The remediation activities may occur soon after or even
decades after the initial pollution occurred.

An alternative paradigm was promoted in the 1980s, referred to as pollution prevention (P2,
or cleaner production). It is probably obvious that the whole point of this alternative
paradigm was to make stakeholders realize that it is costly and late in the process to wait until
the end of the pipe to manage wastes. If we were to think about the inevitable waste earlier in
the process chain, we could create a system that produces less (or ideally, no) waste.

A newer paradigm is to promote sustainability. Achieving sustainability refers to the broader
balancing of social, economic, and environmental aspects within the planet’s ability to provide.
The United Nations’ Brundtland Commission (1987) suggested “sustainable development is
development that meets the needs of the present without compromising the ability of future
generations to meet their own needs”.

Almost all people in developed nations share the goals of improving environmental quality
and making sure that future generations have sufficient resources. Unfortunately, consumers,
business leaders, and government officials do not have the information required to make
informed decisions. We need to develop tools that tell these decision makers the life cycle
implications of their choices in selecting materials, products, or energy sources. These
decisions are complicated: they depend on the environmental and sustainability aspects of all
products and services that contribute to making, operating, and disposing of those materials,
products, or energy sources. They also depend on being able to think non-linearly about our
production systems and envision the possibilities of resource scarcity or a lack of resilience in
the natural environment. Accomplishing these goals requires life cycle thinking, or thinking
about environmental problems from a systems perspective.

Nowadays all of these activities are part of what we refer to as environmental engineering.
Despite trends towards pollution prevention and sustainability, basic challenges remain to
design better end of pipe systems even in the developed world where pollution prevention is
well known but is deemed as too expensive for particular processes (or where all cost-effective
P2 solutions have already been implemented). But the general goal of the field is to reduce
pollution in our natural environment, and a primary objective is to encourage broader thinking
and problem solving that goes back before the end of the pipe and prevents pollution
generation. Practically, we will not achieve a pollution-free world in our lifetimes. But we can
help get there by thinking about environmental problems in a life cycle context, and ideally
identify solutions that focus on stages earlier in the life cycle than the point where the waste
pipe interfaces with our natural environment.

Life Cycle Thinking


Now that we have introduced the idea of a life cycle, and motivated why thinking about
products as systems or life cycles is important, we can dive deeper into the ways this kind of
thinking is defined and how it has evolved. Much of this development has come in the
engineering and science communities, and thus the views and representations of life cycles are
fairly technical. That said, given the typically focused and detailed views of scientists and
engineers, you will see that the way these systems are studied is quite broad.

A conceptual view of the stages of such life cycles is in Figure 1-1. Beginning with the linear
path along the top, we first extract raw materials from the ground, such as ores or petroleum.
Second, these are processed, transformed or combined to make basic material or substance
building blocks, such as metals, plastics or fuels. These materials are combined to manufacture
a product such as an automobile. These final products are then shipped (though not shown in the figure) by
some mode of transport to warehouses and/or stores to be purchased and used by other
manufacturers or consumers. During a product’s use phase it may be used to make life easier,
provide services, or make other products, and this stage may require use of additional energy
or other resources (e.g., water). When the product is no longer needed, it enters its “end of
life” which means managing its disposition, possibly treating it as waste.


Figure 1-1: Overview of a Physical Product Life Cycle (OTA, 1992)

As Figure 1-1 also shows, at the end of life phase there are alternatives to treating a product
as waste. The common path (linear path across the top) is for items to be thrown away, a
process that involves collection in trucks and putting the item as waste in a landfill. However,
the bottom row of lines and arrows connect the end of life phase back to previous stages of
the typical life cycle through alternative disposition pathways. Over the course of a life cycle,
products, energy and materials may change form but will not disappear. Reuse takes the
product as is (or with very minor effort) and returns it to the use phase, such as a rented tuxedo.
Remanufacturing returns the product to the manufacturing stage, which may mean partially
disassembling the product but then re-assembling it into a new final product to be delivered,
such as a power tool or a photocopier. Finally, recycling involves taking a product back to its
raw materials, which can then be processed into any of a number of other products, such as
aluminum beverage cans or cardboard boxes. This bottom row also reminds us that despite
the colloquial use of the word “recycling” in society, recycling has a very distinct definition, as
noted above. Other disposition options have their own terms. An Internet search would turn
up hundreds more pictures of life cycles, but for our introductory purposes these will suffice.
Once we discuss the actual ISO LCA Framework in Chapter 4 we will see the standard figures
and some additional useful ones.

If you are from an engineering background, you might be asking where the other traditional
product stages fit into the product life cycle described above. In engineering, the typical
product life cycle starts with initiation of an idea, as well as research and design iterations that
lead to multiple prototypes, and eventually, mass production. One could classify all such
activities as research and development (or R&D) that would come to the left of all activities
(or perhaps in parallel with some activities such as material extraction) in Figure 1-1. We could
imagine a reverse flow arrow for “Re-design” going along the bottom of Figure 1-1 to
represent product failures or iterations. While not represented in the figure above, all of these
R&D-like activities are relevant stages in the life cycle. As we will see, though, when analyzing
life cycles for environmental impact, these stages are typically ignored.


Simple and Complex Life Cycles


Before we go further in our discussion of life cycles, it is useful to pause and think about all
of the components of something with a very simple life cycle, like a paper clip. Get a blank
sheet of paper, and write “paper clip” in a corner of the sheet. If we think very simply about
its life cycle (e.g., using Figure 1-1 as a guide), we can work backwards from the paper clip we
are used to. To get its shape, it is coiled with machinery. We can write “coiling” and draw an
arrow from the words “coiling” to “paper clip”. Before coiling it is just a straight wiry piece
of steel. Steel is made from iron and carbon. We can write “steel” and draw an arrow to
“coiling”. Iron ore and the carbon source both need to be extracted from the ground. All of
these components and pieces are shipped between factories by truck, rail, or other modes of
transportation. Any or all of these stages of the life cycle could be added to the diagram.

Putting all these materials and processes into a diagram is not so simple. Even that description
above for a paper clip was very terse. If we think a little more, we realize that all of those stages
have life cycles of their own. For example, the machinery that coils the steel wire into a paper
clip must be manufactured (its use phase is making the paper clip!). The metal and other parts
needed to make the machine also must be processed and extracted. The same goes for all of
the transportation vehicles and the infrastructure they travel on and the factories to make iron
and steel, etc. Figure 1-2 shows what the diagram might look like at this point.

Figure 1-2: Exploded View Diagram of Production of Paper Clip

This chain goes back, almost infinitely, and the sheet of paper is quickly filled with words and
arrows. Even a product as simple as a paper clip has a complex life cycle. Thus a product that
we consider to be “complex” (for example a car) has a ridiculously complex life cycle! Now
that we can appreciate the complexity of all life cycles, you can begin to understand why our
thought processes and models need to be sufficiently complex to incorporate them.
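The backward-chaining exercise above can be sketched as a small directed graph. The sketch below is only illustrative: the stage names are simplifications of the paper clip chain just described, not a complete inventory.

```python
# A minimal sketch of the paper clip life cycle as a directed graph:
# each product maps to the upstream stages that feed it. Stage names
# are illustrative placeholders, not a complete inventory.
upstream = {
    "paper clip": ["coiling"],
    "coiling": ["steel wire", "coiling machinery"],
    "steel wire": ["steel"],
    "steel": ["iron ore", "carbon source", "transport"],
    "coiling machinery": ["steel", "transport"],
    "transport": ["fuel", "vehicles"],
}

def stages(product, seen=None):
    """Walk backwards through the supply chain, collecting every stage once."""
    if seen is None:
        seen = set()
    for stage in upstream.get(product, []):
        if stage not in seen:
            seen.add(stage)
            stages(stage, seen)
    return seen

print(sorted(stages("paper clip")))
```

Even this toy graph of six entries expands to nine upstream stages, and every stage we add (the machinery's factory, the vehicles' manufacture, and so on) multiplies the count, which is exactly why the sheet of paper fills so quickly.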

Without going into all of the required detail, but to impress upon you the scale of an LCA
for a more complex product, consider that a complete LCA of an automobile would require
careful energy and materials balances for all the stages of the life cycle:

1. the facilities extracting the ores, coal, and other energy sources;

2. the vehicles, ships, pipelines, and other infrastructure that transport the raw
materials, processed materials, and subcomponents along the supply chain to
manufacture the consumer product, and that transport the products to the
consumer: iron ore ships, trucks carrying steel, engines going to an automobile
assembly plant, trucks carrying the cars to dealers, trucks transporting gasoline,
lubricating oil, and tires to service stations;

3. the factories that make each of the components that go into a car, including
replacement parts, and the car itself;

4. the refineries and electricity generation facilities that provide energy for making
and using the car; and

5. the factories that handle the vehicle at the end of its life: battery recycling,
shredding, landfills for shredder waste.

Each of these tasks requires energy and materials. Reducing these requirements saves energy
and reduces environmental discharges along the entire supply chain. Often a new material
requires more energy to produce, but promises energy savings or easier recycling later.
Evaluating whether a new material helps improve environmental quality and sustainability
requires an examination of the entire life cycles of the alternatives. To make informed
decisions, consumers, companies, and government agencies must know the implications of
their choices for environmental quality and sustainability. Having good intentions is not
sufficient when a seemingly attractive choice, such as a battery-powered car, can wind up
harming what the manufacturer and regulator were trying to protect. This book provides some
of the tools that allow manufacturers and consumers to make the right choices.

Systems Thinking in the Life Cycle


All of this discussion of increasingly larger scales of problems requires us to be more explicit
in discussing an issue of critical importance in LCA studies that relates to system boundaries.
Of course a system is just a collection or set of interconnected parts, and the boundary is the
subset of the overall system that we care to focus on. Our chosen system boundary helps to
shape and define what the appropriate parts are that we should study. Above we suggested
that the entire life cycle boundary goes from cradle to grave or cradle to cradle. Either choice
means that we will have a very large system boundary, and maintaining that boundary (as we
will see later) may require a significant amount of effort to complete a study. Due to this effort
requirement, or because of different interests, we may instead choose a smaller system
boundary. If we are a manufacturer, perhaps our focus is only the cradle to gate impacts. If
so, our boundary would include only the stages up to manufacturing. It is also possible that
the boundary of our interest lies only in our factory. Constraining the system boundary in
these ways means we no longer have a life cycle study.

Life cycle thinking is not restricted to manufactured products. Services, systems, and even
entire urban areas can be better understood via life cycle thinking. Services are particularly
interesting: because no physical good is being created, such activities (e.g., consulting or
banking) are typically considered as having very low impacts, but in reality the same types of
effects are uncovered across the life cycle through the service sector’s dependence on
fuels and electricity. Entire systems (e.g., a roadway network or the electric power grid) can be
considered as well, from building all of their equipment and components to thinking about their
design and disposition. At an even higher level, the life cycle of cities includes the life cycles
of all of the resources consumed by residents of the city, not just the activities they do within
the city’s borders.

Finally, life cycle thinking is often useful when making comparisons, such as paper vs. plastic
bags or cups, cloth vs. disposable diapers, or retail shopping vs. e-commerce. The relevant
issues to deal with in such comparisons would be whether one option is more useful than
another, whether they are equal, whether they have similar production processes, etc. In fact
as we will see some of the great classic comparisons that have been done in the life cycle
analysis domain were very simple comparisons.

A History of Life Cycle Thinking and Life Cycle Assessment


We will discuss the formal methods that apply life cycle thinking to real questions in future
chapters (called life cycle analysis or assessment). In a life cycle analysis or assessment, the total
and comparative impacts of the life cycle stages are considered, with or without quantification
of those impacts. But to start, let us talk about some of the original studies that inspire the
field of life cycle thinking (before we even knew there was a field for such things).

Most people attribute the first life cycle assessment (LCA) to Coca-Cola in 1969. At the time,
Coca-Cola sold its product to consumers in individual glass bottles. Coca-Cola was trying to
determine whether to use glass or plastic containers to deliver their beverage product, and
wanted to formally support a decision given the tradeoffs between the two materials. Glass is
a natural material, but Coca-Cola suggested switching to plastic bottles. They reasoned that
this switch would be desirable for the ability to produce plastics in their own facilities, the
lower weight of plastic to reduce shipping costs, and the recyclability of plastic versus glass at
the time. No specific form of this study has been publicly released but we can envision the
considerations that would have been made.

More recently, in the early 1990s, there were various groups of researchers debating the
question of “Paper or plastic?” This simple question, which you might get at the grocery store
checkout counter or coffee shop, turned into relatively complex exchanges of ideas and results.
We may think we know the correct answer is “paper,” because it is a “natural”
product rather than some chemical based material like plastic. We can feel self-satisfied, even
if the bag gets wet and tears, spilling our purchases on the ground because we made the natural
and environmentally friendly decision. But even these simple questions can, and should, be
answered by data and analysis, rather than just a feeling that the natural product is better. The
ensuing analysis ignited a major controversy over how to decide which product is better for
the environment, beginning with an analysis of paper versus polystyrene cups (Hocking 1991).
Hocking’s initial study was focused on energy use and estimated that one glass cup used and
rewashed 15 times required the same amount of energy as manufacturing 15 paper cups. He
also estimated break-even use values for ceramic and plastic cups. The article generated
many criticisms and spawned many follow-up studies (too many to list here). In the end,
though, what was clear at the time of these studies was that there was no single agreed-upon
answer to the simple question of “paper vs. plastic”. Even now, any study using the best data
and methods available today will still conclude with an answer along the lines of “it depends”.
This is a sobering outcome for a discipline (life cycle thinking) trying to gain traction in the
scientific community.
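Hocking-style break-even comparisons reduce to simple algebra: a reusable cup pays off after n uses when its manufacturing energy plus n washes equals the energy of n fresh disposables. A minimal sketch (the energy values below are placeholders for illustration, not Hocking’s data):

```python
def break_even_uses(e_reusable_mfg, e_wash, e_disposable):
    """Number of uses n at which a reusable cup's total energy equals
    that of using a fresh disposable cup each time:
        e_reusable_mfg + n * e_wash = n * e_disposable
    Returns infinity if each wash uses as much energy as a disposable."""
    if e_wash >= e_disposable:
        return float("inf")  # the reusable cup never breaks even
    return e_reusable_mfg / (e_disposable - e_wash)

# Illustrative numbers only (MJ per cup made, per wash, per disposable):
n = break_even_uses(e_reusable_mfg=5.5, e_wash=0.18, e_disposable=0.55)
print(f"break-even after about {n:.0f} uses")
```

Note how sensitive the answer is to the washing assumption: as the per-wash energy approaches the disposable cup’s energy, the break-even count grows without bound, which is one reason different studies of the same question reached different conclusions.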

Beyond these studies, other early analyses surprised people, since they found that paper bags,
paper cups (or even ceramic cups), and cloth diapers were not obviously superior to their
maligned alternatives (i.e., plastic bags, styrofoam cups and disposable diapers) in terms of
using less energy and materials, producing less waste, or even disposal at the end of life.

• Paper for bags requires cutting trees and transporting them to a paper mill, both of
which use a good deal of energy. Paper-making results in air emissions and water
discharges of chlorine and biological waste. After use, the bag goes to a landfill where
it gradually decays, releasing methane.

• A paper hot-drink cup generally has a plastic coating to keep the hot liquid from
dissolving the cup. The plastic coating introduces the same problems as the foam
plastic cup. The plastic is made from petroleum with relatively small environmental
discharges. Perhaps most surprising, washing a single ceramic cup by hand uses a good
deal of hot water and soap, resulting in discharges of waste water that has to be treated
and the expenditure of a substantial amount of fuel to heat the water, although washing
the cup in a fully loaded dish washer uses less soap and hot water per cup.


• The amount of hot water and electricity required to wash and dry cloth diapers is
substantial. If water is scarce or sewage is not treated, washing cloth diapers is likely
to cause more pollution than depositing disposable diapers in a landfill. The best
option depends on the issue of water availability (washing uses much more water) and
heating the water.

In short, it is not obvious which product is more environmentally benign and more sustainable.
Such results are counterintuitive, but they reinforce the importance of life cycle thinking.

The analyses found that the environmental implications of choosing paper versus plastic were
more similar than people initially thought. Which is better depends on how bad one thinks
water pollution is compared to air pollution compared to using a nonrenewable resource.
Perhaps most revealing was the contrast between plants and processes to make paper versus
plastic. The best plant-process for making paper cups was much better than the worst plant-
process; the same was true for plastic cups. Similarly, the way in which the cups were disposed
of made a great deal of difference. Perhaps the most important lesson for consumers was not
whether to choose one material over another, but rather to insist that the material chosen be
made in an environmentally friendly plant.

The original analyses showed that myriad processes are used to produce a material or product,
and so the analyst has to specify the materials, design, and processes in great detail. This led
to another problem: in a dynamic economy, materials, designs, and processes are continually
changing in response to factor prices, innovation, regulations, and consumer preferences. For
example, in a life cycle assessment of a U.S.-manufactured automobile done in the mid-1990s,
the design and materials had changed significantly by the time the analysis was completed years
later. Still another problem is that performing a careful material and energy balance for a
process is time-consuming and expensive. The number of processes that are practical to
analyze is limited. Indeed, the rapid change in designs, materials, and processes together with
the expense of analyzing each one means that it is impractical and inadvisable to attempt to
characterize a product in great detail. The various dependencies, rationales, and assumptions
used all make a great deal of difference in the studies mentioned above (for which we have
provided no real detail yet). LCA has a formal and structured way of doing the analysis, which
we will begin to discuss in Chapter 4.

Decisions Made Without Life Cycle Thinking


Hopefully you are already convinced that life cycle thinking is the appropriate way of thinking
about problems. But this understanding is certainly not universal, and there are various
examples of not taking a life cycle view that led to poor (albeit well intentioned) decisions
being made.


A useful example is the consideration of electric vehicles in the early 1990s. At the time,
California and other states were interested in encouraging the adoption of vehicles with no
tailpipe emissions in an effort to reduce emissions in urban areas and to gain the associated air
quality benefits. Policymakers at the time had a specific term for such vehicles – “zero
emissions vehicles (ZEVs)”. The thought was that getting a small but significant chunk of the
passenger vehicle fleet to have zero emissions could yield big benefits. Regulations at the time
sought to get 2% of new vehicles sold to be ZEVs by 1998. In parallel, manufacturers such as
General Motors had been designing and developing the EV-1 and similar cars to meet the
mandated demand for the vehicles (see Figure 1-3).

Figure 1-3: General Motors’ EV-1 (Source: motorstown.com)

So why did we refer to this case as one about life cycles? The electric vehicles to be produced
at the time were much different than
the electric vehicles of today that include hybrids and plug-in hybrids. These initial cars were
rechargeable, but the batteries were lead-acid batteries – basically large versions of the starting
and ignition batteries we use in all cars (by large, we mean the batteries were 1,100 pounds!).
Let us go back to Figure 1-1 and use life cycle thinking to briefly consider such a system. How
would the cars be recharged? They would run on electricity, which even in a progressive state
like California leads to various emissions of air pollutants. Similarly, the batteries would have
large masses of lead that would need to be processed efficiently. Lead must be extracted,
smelted, and processed before it can be used in batteries and then, old lead-acid batteries are
often collected and recycled. None of these processes are 100% efficient, despite industry
claims at the time to the contrary. Would these vehicles be produced in factories with
no pollution? It is hard to consider that these vehicles would really have “zero emissions” –
but then again, zero is a very small number! There would be increased emissions in the life
cycle of these electric vehicles – the question was whether those increases would fully offset
the potential gains of reduced tailpipe emissions.

Aside from the perils of considering anything as having zero emissions, various parties began
to question whether these vehicles would in fact have any positive improvement on air quality
in California, and further, given the need for more electricity and lead, whether one could even
consider them as beneficial. In a study published by Lave et al. (to whom this book is
dedicated) in Science in 1995, the authors built a simple but effective model of the life cycle of
these vehicles that estimated that generating the electricity to charge the batteries would result
in greater emissions of nitrogen oxide pollution than gasoline-powered cars. Eventually,
California backed off of its mandate for ZEVs, partly because of such studies, and
policymakers learned important lessons about considering whole life cycles as well as casual
use of the number zero. The policymakers had been so focused on the problem of reducing
tailpipe emissions that they had overlooked the back-end impacts from lead and increased
electricity generation.

It is fair to say this was one of the first instances of life cycle thinking being used to change a
“big decision”. The lesson again is that life cycle thinking is needed to make informed decisions
about environmental impacts and sustainability. Being prepared to use life cycle thinking and
analysis to support big decisions is the focus of this book.

A more recent example of life cycle thinking in big decisions is the case of compact fluorescent
lamps (CFLs), which were heavily promoted as energy efficient alternatives to incandescent
bulbs. While CFLs use significantly less electricity than traditional bulbs in providing the same
amount of light (and thus cost less in the use phase), their disposal represented a problem due
to the presence of a small amount of mercury in the lamps (about 4 mg per bulb). This amount
of mercury is not generally a problem for normal, intact, use of the lamps (and is less mercury
than would be emitted from electric power plants to power incandescent bulbs). However,
broken CFLs could pose a hazard to users due to mercury vapor – and the DOE Energy Star
guide to CFLs has somewhat frightening recommendations about evacuating rooms, using
sealed containers, and staying out of the room for several hours. None of this information was
good news for consumers weighing a choice between incandescent and CFL lighting.
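The claim that an intact CFL is responsible for less mercury overall can be checked with rough numbers. In the sketch below, only the ~4 mg of contained mercury comes from the text; the grid emission rate, bulb wattages, and lifetime are assumed values for illustration, and the conclusion flips easily on a cleaner grid:

```python
# Rough comparison of mercury pathways for one lighting choice.
# Only HG_IN_CFL_MG comes from the text; everything else is assumed.
HG_IN_CFL_MG = 4.0          # mercury sealed inside one CFL (from the text)
GRID_HG_MG_PER_KWH = 0.012  # assumed average power-plant mercury emission rate
LIFETIME_HOURS = 8000       # assumed CFL rated lifetime

def grid_mercury_mg(watts, hours):
    """Mercury emitted by power plants to supply a bulb's electricity."""
    kwh = watts / 1000 * hours
    return kwh * GRID_HG_MG_PER_KWH

# Compare over one CFL lifetime of light output:
incandescent = grid_mercury_mg(60, LIFETIME_HOURS)        # 60 W bulb(s)
cfl = grid_mercury_mg(14, LIFETIME_HOURS) + HG_IN_CFL_MG  # 14 W + contained Hg
print(f"incandescent: {incandescent:.2f} mg, CFL: {cfl:.2f} mg")
```

With these assumed values the CFL edges out the incandescent even when its contained mercury is charged against it, but the margin is small enough that the grid mix and disposal practices, not the bulb alone, decide the answer.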

The examples and discussion above hopefully reveal that you can think about life cycles
quantitatively or qualitatively, meaning with or without numbers (more on that in Chapter 2).

Inputs and Outputs of Interest in Life Cycle Models


Above we have suggested that there is a need to think about products, services, and other
processes as systems by considering the life cycle. We have also mentioned some popular
examples of the kinds of life cycle thinking studies that have been done. It is also worth
discussing the types of effects across a life cycle that we might be interested in tracking or
accounting for.

By ‘effects’ we mean what happens as a result of a product being manufactured, or a service
being provided, etc. There are likely economic costs incurred, for example by paying for the
parts and labor needed for assembly. There are interesting and relevant issues to consider
when focused purely on economic factors, and Chapter 3 discusses this type of thinking.

In many cases, the ‘effects’ of producing or using a product mean consuming energy in some
way. Likewise, there may be emissions of pollution to the air, water, or land. There are many
such effects that one might be interested in studying, and more importantly, in being able to
detect and measure. Thus, we can already create a list of potential effects that one might be
concerned about in a life cycle study. In terms of effects associated with inputs to life cycle
systems, we could be concerned about:

• Use of energy inputs, including electricity, as well as solid, liquid, and gaseous fuels.

• Use of resources as inputs, such as ores, fertilizers, and water.

Note that our concern with energy and resource use as inputs may be in terms of the quantities
of resources used and/or the extent to which the use of these resources depletes the existing
stock of that resource (i.e., are we consuming a significant share of the available resource?).
We may also be concerned with whether the energy or resources being consumed are
renewable or non-renewable.

In terms of effects associated with outputs of life cycle systems, we could be concerned about:

• The product created as a result of an activity, such as electricity from a power plant.

• Emissions of air pollution, for example conventional air emissions such as sulfur
dioxide, nitrogen oxides, and carbon monoxide.

• Emissions of greenhouse gases, such as carbon dioxide, methane, and nitrous oxide.

• Emissions to fresh or seawater, including solid waste, chemical discharges, toxics, and
thermal discharges (warming).

• Other emissions of hazardous or toxic wastes to air, land, water, or recycling facilities.

In short, there is no shortage of energy, environmental, and other effects that we may care
about and which may be estimated as part of a study. As we will see later in the book, we may
have interest in many effects but only be able to get quality data for a handful of them. We
can choose to include any effect for which we think we can get data over as many of the parts
of the life cycle as possible. One could envision annotating the paper clip life cycle diagram
created above with colored bars representing activities in the life cycle we anticipate have
significant inputs or outputs associated with them. For example, activities that we expect to
consume significant quantities of water could have a blue box drawn around them or to have
a blue square icon placed next to them. Activities we expect to release significant quantities of
air pollutants could have black boxes or icons. Activities we expect to create a large amount
of solid waste could be annotated with brown. While simplistic (and not informed by any data)
such diagrams can be useful in terms of helping us to look broadly at our life cycle of interest
and to see where in the life cycle we anticipate the problems to occur.

Life Cycle Assessment: Quantitative Approaches for Decisions That Matter – lcatextbook.com
Chapter 1: Life Cycle and Systems Thinking 25

Aside from simply keeping track of (accounting for) all of these effects across the life cycle, a
typical reason for using life cycle thinking is not just to measure but to prioritize. Another name
for this activity is hot spot analysis, in which we look at all of the effects and decide
which of the life cycle stages contributes most to the total (i.e., where the “hot spots” appear). Our
colored box or icon annotation above could be viewed as a hot spot analysis, albeit a crude one,
since it is not yet informed by actual data.

For most cars, the greatest energy use happens during the use phase. Cars in the United States
are typically driven more than 120,000 miles over their useful lives. Even fairly fuel-efficient
cars will use more energy during use than at any other stage of their life cycle. Likewise, hot
spots for toxic air emissions from cars may appear in the use phase as well as in the refining of
petroleum into gasoline. These examples illustrate why we use life cycle thinking: as we have
shown above, our intuition alone is not sufficient for assessing where effects occur, and only by
actually collecting data and estimating the effects can we effectively identify hot spots. This
use of life cycle thinking to support hot spot analysis helps us identify where we need to focus
our attention and efforts to improve our engineering designs. If done in advance, it can have
a significant benefit. If done too late, it can lead to designs such as large lead-acid battery
vehicles.
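
To make the idea concrete, here is a minimal sketch (in Python) of a hot spot tally for a
vehicle's life cycle energy use. All of the stage energy figures below are invented placeholders
for illustration, not real vehicle data:

```python
# Hypothetical stage-level energy inventory for one vehicle (MJ).
# These numbers are illustrative only, not measured data.
energy_mj_by_stage = {
    "material extraction": 30_000,
    "manufacturing": 40_000,
    "use (fuel production and combustion)": 700_000,
    "end of life": 5_000,
}

total = sum(energy_mj_by_stage.values())

# Rank stages by contribution; the top entry is the hot spot.
for stage, mj in sorted(energy_mj_by_stage.items(), key=lambda kv: -kv[1]):
    print(f"{stage}: {mj:,} MJ ({100 * mj / total:.0f}% of total)")

hot_spot = max(energy_mj_by_stage, key=energy_mj_by_stage.get)
print("Hot spot:", hot_spot)
```

With real data in place of the placeholders, the same tally identifies which stage deserves
design attention first.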

Similarly, if we create a plan to generate numerical values representing several of these life
cycle effects, we will eventually have to make decisions about how to compare them or
prioritize them. Such a decision process will be complicated by needing to compare releases
of the same type of pollution across various media (air, water, or land) and also by needing to
compare releases of one pollutant against another, comparing pollution and energy, etc. While
complicated, the process of making all of these judgments and choices helps produce
a study that we can use to support our decision process. Chapter 12 overviews the types of
methods used to support such assessments.

From Inputs and Outputs to Impacts


It is appropriate early on in this textbook to briefly discuss the kinds of uses, emissions, and
releases discussed above in connection with the types of environmental or resource use
problems they create. The new concept in this section is the idea of an environmental
impact. Unlike the underlying inputs and outputs of interest such as resource use or
emissions, an environmental impact exists when the underlying flows cause an environmental
problem. One can think of the old phrase “if a tree falls in the forest but no one is there to
hear it, does it make a sound?” This is similar to the connection between environmental
releases and environmental impacts. It is possible that a release of a specific type and quantity
of pollutant into the environment could have little or no impact. But if the release is of
sufficient quantity, or occurs in a location near flora or fauna (especially humans), it is likely
that there will be measurable environmental impact(s). Generally, our concerns are motivated


by the impacts but are indicated by the uses or releases because most of us cannot directly
estimate the impacts. In other words, we often look at the quantities of inputs and outputs as
a proxy for the impacts themselves that need to be estimated separately.

This brief section is not a substitute for a more rigorous introduction to such environmental
management issues, and should be supplemented with external work or reading if this is not
an area of your expertise. One could easily spend a whole semester learning about these
underlying connections before attempting to become an expert in life cycle thinking.

Example Indicators for Impacts that Inspire Life Cycle Thinking


In this section, we present introductory descriptions of several prominent environmental
impacts considered in LCA studies as exemplars and discuss how various indicators can guide
us to the actual environmental problems created. If interested, there are more detailed
summaries available elsewhere from agencies, such as the US Environmental Protection
Agency, US Geological Survey, the Department of Energy, and we will circle back to
discussing them in Chapter 12.

Impact: Fossil fuel depletion – Use of energy sources like fossil fuels is generally an easy way
to measure activity because energy costs us to acquire, and there are billing records and energy
meters available to give specific quantities. Beyond the basic issue of using energy, much of
our energy use comes from unsustainable sources such as fossil fuels that are finite in supply.
We might care simply about the finiteness of the energy resource availability as a reason to
track energy use across the life cycle. As mentioned above, we might seek to separately classify
our use of renewable and non-renewable energy. We might also care about whether a life cycle
system at scale could consume significant amounts of the available resources. If so, the use of
energy by our life cycle could be quite significant. In the context of our descriptions above,
some quantity of fossil energy use (e.g., in BTU or MJ) may be an indicator for the impact of
fossil fuel depletion. Of course, all of the energy extraction, conversion, and combustion
processes may lead to other types of environmental impacts (like those detailed below).

Impact: Global Warming / Climate Change – Most people know that there is considerable
evidence suggesting that manmade emissions of greenhouse gases (GHGs) lead to global
warming or climate change. The majority of such GHG emissions come from burning fossil
fuels. While we might already be concerned with the use of energy (above), caring more
specifically about how our choices of energy sources may affect climate change is an additional
impact to consider. Carbon dioxide (CO2) is the most prominent greenhouse gas, but there
are other GHGs that are emitted from human activities that also lead to warming of the
atmosphere such as methane (CH4) and nitrous oxide (N2O). These latter GHGs have far
greater warming effects per unit mass than carbon dioxide and are emitted from sources such as oil
and gas infrastructure and agricultural processes. GHGs are inherently global


pollutants, as increasing concentrations of them in the atmosphere lead to impacts all over the
planet, not just in the region or specific local area where they are emitted. These impacts may
eventually manifest as increases in sea levels, migration of biotic zones, changes in local
temperatures, etc. Our concern about climate change may be rooted in a desire to assess which
component or stage of our product or process has the highest carbon footprint, and thus all
else equal, the biggest contributor to climate change. The GHG emissions are indicators of
the impacts of global warming and climate change.
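
The aggregation implied here (weighting each gas by its warming effect relative to CO2) can
be sketched in a few lines. The 100-year global warming potential values below are the IPCC
Fifth Assessment Report figures commonly used in carbon footprinting; the emission
quantities themselves are hypothetical:

```python
# 100-year global warming potentials (IPCC AR5, without climate-carbon
# feedbacks): kg CO2-equivalent per kg of gas emitted.
GWP_100 = {"CO2": 1, "CH4": 28, "N2O": 265}

# Hypothetical life cycle emissions inventory (kg).
emissions_kg = {"CO2": 1000.0, "CH4": 5.0, "N2O": 0.5}

co2e_kg = sum(mass * GWP_100[gas] for gas, mass in emissions_kg.items())
print(f"Carbon footprint: {co2e_kg} kg CO2e")
```

Even small masses of CH4 and N2O contribute noticeably once weighted (here, 140 and
132.5 kg CO2e, respectively).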

Impact: Ozone Depletion – In the early 1970s, scientists discovered that human use of certain
substances, notably chlorofluorocarbons (CFCs), reduces the quantity of ozone (O3) in the
stratosphere, and that these substances persist in the atmosphere for 50-100 years. This phenomenon is
often tracked and referred to as “holes in the ozone layer”. The ozone layer, amongst other
services, keeps ultraviolet rays from reaching the ground, preserving plant and ocean life and
avoiding impacts such as skin cancers. The Montreal Protocol called for a phase out of
chemicals that deplete the ozone layer, but not all countries ratified it, not all relevant
substances were included, and not all uses were phased out. Consequently, while emissions of
many of these substances have been dramatically reduced in the past 30 years, they have not
been eliminated, and given the 50-100 year lifetime, ozone depletion remains an impact of
concern. Thus, releases of the various ozone-depleting substances can be indicators of
potential continued impacts of ozone depletion. Note that there is also “ground level” ozone
that is created by interactions of local pollutants and helps to create smog, which, when
breathed in, can affect human health. This is an entirely different but important potential
environmental impact related to ozone.

Impact: Acid Rain – Releases of various chemicals or chemical compounds lead to increased
levels of acidity in a local or regional environment. This acidity penetrates the water cycle and
can eventually move into clouds and rain droplets. In the developed world the key linkage was
between emissions of sulfur dioxide (SO2) and acidity of freshwater systems. One of the
original points of concern was emissions of sulfur dioxide by coal-fired power plants because
they were large single sources and also because they could be fairly easily regulated. Emissions
of these pollutants are an indicator of the potential impacts of more acidic environments, such
as damage to plants and aquatic life. While in this introduction we have only listed acid rain as
an impact, acid rain is part of a family of environmental impacts related to acidification, which
we will discuss in more detail later. In short, other non-sulfur compounds like nitrogen oxides
can also lead to acidification of waterways, and systems other than freshwater can be affected.
Acidification of water also occurs through the uptake of carbon dioxide and is of increasing
concern in the oceans, where it affects coral reefs and thus entire ocean ecosystems.

There are various other environmental impacts that have been considered in LCA studies,
such as those associated with eutrophication, human health, and eco-toxicity, but we will save
discussion of them for later in the text. These initial examples, though, should demonstrate
that there are a wide variety of local and global, small and large scale, and scientifically relevant


indicators that exist to help us to assess the many potential environmental impacts of products
and systems.

The Role of Design Choices


The principles of LCA can help to build frameworks that allow us to consider the implications
of making design (or re-design) decisions and to track the expected outcomes across the life
cycle of the product. For example, deciding whether to make a car out of aluminum or steel
involves a complicated series of analyses:

• Would the two materials provide the same level of functionality? Would structural
strength or safety be compromised with either material? Lighter vehicles have been
found to be less safe in crashes, although improved design and new automation
technology might remove this difference (NRC 2002, Anderson 2014). A significant
drop in safety for the lighter vehicles could outweigh the energy savings, depending
on the values of the decision maker.

• Are there any implications for disposal and reuse of the materials? At present, about
60% of the mass of old cars is recycled or reused. Moreover, motor vehicles are among
the most frequently recycled of all products since recycling is usually profitable; both
aluminum and steel are recycled and reused from automobiles (Boon et al. 2000). It
takes much less energy to recycle aluminum than to refine it from ore. The advantage
for recycling steel is smaller.

• What is the relative cost of the two materials, both for production and over the lifetime
of the vehicle? An aluminum vehicle would cost more to build, but be lighter than a
comparable steel vehicle, saving some gasoline expenses over the lifetime of the
vehicle. Do the gasoline savings exceed the greater cost of manufacturing? Of energy?
Of environmental quality?
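
The third bullet is a simple breakeven question that can be framed numerically. In the sketch
below, every input is an assumed placeholder (except the 120,000-mile lifetime mentioned
earlier in the chapter); the point is the structure of the comparison, not the particular answer:

```python
# Assumed inputs for an aluminum-vs-steel body breakeven sketch.
extra_manufacturing_cost = 1500.0  # $ premium for aluminum body (assumed)
fuel_saved_per_100_miles = 0.15    # gallons saved per 100 miles (assumed)
gasoline_price = 3.50              # $/gallon (assumed)
lifetime_miles = 120_000           # typical US vehicle lifetime (from the text)

lifetime_fuel_savings = (lifetime_miles / 100) * fuel_saved_per_100_miles * gasoline_price
print(f"Lifetime fuel savings: ${lifetime_fuel_savings:,.0f}")
print("Fuel savings alone exceed the cost premium:",
      lifetime_fuel_savings > extra_manufacturing_cost)
```

A fuller comparison would also monetize (or separately report) the energy and environmental
differences raised in the bullet.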

In this example, steel, aluminum, copper, glass, rubber, and plastics are the materials, while
electricity, natural gas, and petroleum are the energy that go into making, using, and disposing
of a car. The vehicle runs on gasoline, but also needs lubricating oil and replacement parts
such as tires, filters, and brake linings. At the end of its life, the typical American car is
shredded; the metals are recycled, and the shredder waste (plastic, glass, and rubber) goes to a
landfill.


What Life Cycle Thinking and Life Cycle Assessment Is Not


The purpose of this chapter has been to motivate life cycle thinking, and why it should be
chosen to ensure broadly scoped analysis of issues with potential environmental impacts – i.e.,
we have been introducing “what life cycle thinking is”. We end the chapter by briefly
summarizing what life cycle thinking (and, by extension, life cycle assessment) is not able to
achieve.

First, life cycle thinking will not ensure a path to sustainability. If anything, thinking more
broadly about environmental problems has the potential side effect of making environmental
problems seem even more complex. At the least it will typically lead to greater estimates of
environmental impact as compared to studies with more limited scopes. But life cycle thinking
can be a useful analytical and decision support tool for those interested in promoting and
achieving sustainability.

Second, life cycle thinking is not a panacea, a magic pill or remedy that solves all of society’s
problems. It is merely a way of structuring or organizing the relevant parts of a life cycle and
helping to track performance. Addressing the economic, environmental, and social issues in
the context of sustainability can be done without using LCA. To reduce energy and
environmental impacts associated with product or process life cycles, we must want to take
action on the findings of our studies. By taking action we decide to improve upon the current
impacts of a product and make changes to the design, manufacture, or use of the current
systems so that future impacts are reduced.

LCA is not a single model solution to our complex energy and environmental problems. It is
not a substitute for risk analysis, environmental impact assessment, environmental
management, benefit-cost analysis, etc. All of these related methods have been developed over
many years and may still be worth bringing to the table to help solve these problems. LCA
can in most cases interact with these alternative methods to help make decisions.

Chapter Summary
Life cycle assessment (LCA) is a framework for viewing products and systems from the cradle
to the grave. The key benefit of adopting such a perspective is the creation of a “systems
thinking” view that is broadly encompassing and can be analyzed with existing methods. When
a life cycle perspective has not been used, unexpected environmental impacts have occurred,
some of which might have been anticipated with a broader view.

As we will see in the chapters to come, even though there is a standard for applying life cycle
thinking to problem solving, it is not a simple recipe. There are many study design choices,
variations, and other variables in the system. One person may apply life cycle thinking in one


way, and another in a completely different way. We cannot expect then that simply using life
cycle thinking will lead to a single right answer that we can all agree on.

References for this Chapter


Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., Oluwatola, O. Autonomous
Vehicle Technology: A Guide for Policymakers, Santa Monica, CA: RAND Corporation, RR-
443-RC, 2014.

Boon, Jane E., Jacqueline A. Isaacs, and Surendra M. Gupta, “Economic Impact of
Aluminum-Intensive Vehicles on the U.S. Automotive Recycling Infrastructure”, Journal of
Industrial Ecology 4(2), pp. 117–134, 2000.

Hocking, Martin B. “Paper versus polystyrene: a complex choice.” Science 251.4993 (1991):
504-505.

Lave, Lester, Hendrickson, Chris, and McMichael, Francis, “Environmental implications of
electric cars”, Science, Volume 268, Issue 5213, pp. 993-995, 1995.

Mihelcic, James R., et al. “Sustainability science and engineering: The emergence of a new
metadiscipline.” Environmental Science and Technology 37.23 (2003): 5314-5324.

Tarr, Joel, The Search for the Ultimate Sink, University of Akron Press, 1996.

United Nations General Assembly (1987) Report of the World Commission on Environment and
Development: Our Common Future. Transmitted to the General Assembly as an Annex to
document A/42/427 - Development and International Co-operation: Environment.

United States Office of Technology Assessment (OTA), “Green Products by Design: Choices
for a Cleaner Environment”, OTA-E-541, 1992.

End of Chapter Questions

Objective 1: State (a) the concept of a life cycle and (b) its various stages as related
to assessment of products.

1. Describe the major activities in each of the five life cycle stages of Figure 1 for a soft drink
beverage container of your choice. Describe also the activities needed to support reuse,
remanufacturing, and recycling activities for the container chosen.


Objective 2: Illustrate the complexity of life cycles for even simple products.

2. Draw by hand or with software a diagram of a life cycle for a simple product (other than
a paper clip as shown in-chapter), with words representing the various activities in the life
cycle needed to make the product, and arrows representing connections between the
activities. Annotate the diagram with colors or shading to represent hot spots for two
inputs or outputs that you believe are relevant for decisions associated with the product.

3. Do the same exercise as in Question 2, but for a school or university, which is providing
a service not making a physical product.

Objective 3: Explain why environmental problems, like physical products, (a) are
complex and (b) require broad thinking and boundaries that include all stages of the
life cycle.

4. Power plants (especially fossil-fuel based coal and gas-fired units) are frequently mentioned
sources of environmental problems. List three specific types of outputs to the
environment resulting from these fossil plants. Which other parts of the life cycle of
producing electricity from fossil plants also contribute to these problems?

Objective 4: Describe what kinds of outcomes we might expect if we fail to use life
cycle thinking.

5. Across the life cycle of a laptop computer, discuss which life cycle stages might contribute
to the environmental impact categories discussed in the chapter (global warming, ozone
depletion, and acid rain). Are there other classes of environmental impact you can envision
for this product?

Synthesis of Objectives

6. Suppose that a particular truck requires diesel fuel to transport freight (that is, moving tons
of freight over some distance). In the process, carbon dioxide is emitted from the truck.

a. In the terminology of life cycle thinking presented in this chapter, what does the
diesel fuel represent?

b. What do the freight movement and carbon dioxide represent?


c. What stage of the truck life cycle is being presented in this problem so far? What
other truck life cycle stages might be important to consider?

d. In considering the environmental impacts of trucks, would it be advisable to expand


our system of thinking to include providing roadways? Why or why not?

Dana Fradon, The New Yorker May 17, 1976
Credit: Dana Fradon/The New Yorker Collection/The Cartoon Bank

Chapter 2: Quantitative and Qualitative Methods Supporting Life Cycle Assessment
In this chapter, we introduce basic quantitative skills needed to perform successful work in
LCA. The material is intended to build good habits in critically thinking about, assessing, and
documenting your work in the field of LCA (or, for that matter, any type of systems analysis
problem). First we describe good habits with respect to data acquisition and documentation.
Next we describe skills in building and estimating simple models. These skills are not restricted
to use in LCA and should be broadly useful for business, engineering, and policy modeling
tasks. As this book is intended to be used across a wide set of disciplines and levels of
education, we write as if aimed at undergraduates who may not be familiar with many of these
concepts. This chapter may thus be a quick review for many students. Regardless, improving
basic skills will make your LCA work even more effective.

Learning Objectives for the Chapter


At the end of this chapter, you should be able to:

1. Apply appropriate skills for qualitative and quantitative analysis.

2. Document values and data sources in support of research methods.

3. Improve your ability to perform back of the envelope estimation methods.

4. Approach any quantitative question by means of describing the method, providing the
answer, and describing what is relevant or interesting about the answer.

Basic Qualitative and Quantitative Skills


To be proficient in any type of systems analysis, you need to have sharp analytical skills
associated with your ability to do research, and much of this chapter is similar to what one
might learn in a research methods course. While the skills presented here are generally useful
(and hopefully will serve you well outside of the domain of LCA) we use examples relevant to
LCA to emphasize and motivate their purpose.

Much of LCA involves doing “good research” and communicating the results clearly. That is
why so many people with graduate degrees are able to learn LCA quickly – because they already
have the base of skills needed to be successful, and just need to learn the new domain
knowledge. Amongst the most important skills are those associated with your quantitative and
qualitative abilities.


Quantitative skills are those associated with your ability to create numerical manipulations
and results, i.e., using and applying math and statistics. Qualitative skills are those related to
your ability to think and write about your work beyond numbers, and to describe the relevance
of your results. While this textbook is more heavily geared towards improving your
quantitative skills, there are many examples and places of emphasis throughout the text that
are intended to develop your qualitative skills. You will need to be proficient at both to
successfully appreciate and perform LCA work.

Identifying your own weaknesses in these two areas now can help you improve them while
you are also learning new material relevant to the domain. Your quantitative skills are relatively
easy to assess – e.g., if you can correctly answer a technical or numerical question by applying
an equation or building a model, you can “pass the test” for that quantitative skill. Qualitative
skills are not as easy to evaluate and so must be assessed in different ways, e.g., your ability to
synthesize or summarize results or see the big picture could be assessed by using a rubric that
captures the degree to which you put your findings into context.

In the remainder of this chapter, we’ll first review some of the key quantitative types of skills
that are important (and which are at the core of life cycle studies) and then discuss how to mix
qualitative and quantitative skills to produce quality LCA work. One of the most important
skills is identifying appropriate data to use in support of analyses.

Working with Data Sources


Most data are quantitative, i.e., you are provided a spreadsheet of numerical values for some
process or activity and you manipulate the data in some quantitative way (e.g., by finding an
average, sorting it, etc.). But data can also be qualitative – you may have a description of a
process that discusses how a machine assembles inputs, or you may generally know that a
machine is relatively old (without knowing an exact date of manufacture). Being able to work
with both types of data is useful when performing LCA.

As we seek to build a framework for building quantitative models, inevitably one of the
challenges will be to find data (and in LCA, finding appropriate data will be a recurring
challenge). But more generally we need to build skills in acquiring and documenting the data
we find. As we undertake this task, it is important to understand the difference between
primary and secondary sources. A primary source of data comes directly from the entity
collecting the data and/or analyzing it to find a result. It is thus generally a definitive source
of information, which is why you want to find it. A secondary source is one that cites or
reuses the information from the primary source. Such sources may use the information in
different ways inconsistent with the primary source’s stated goals and intentions, and may
incorporate biases. It is thus good practice to seek the primary source of the information and
not merely a source that makes use of it. Finding (and reading, if necessary) the primary source


also allows you to gain appreciation for the full context that reported the result. This context
may include the sponsor of the study, any time or data constraints, and perhaps caveats on
when or how the result should be used.

In today’s Internet search-enabled world, secondary sources are far more prominent. Search
engines are optimized to find frequently linked-to and repeated sources, not necessarily primary
sources. As an example, the total annual emissions of greenhouse gases in the US are prepared
in a study and reported every year by the US Environmental Protection Agency (EPA). The
EPA spends a substantial amount of time - with the assistance of government contractors -
each year refining the methods and estimates of emissions to be reported. Given their official
capacity and the work done, the reporting of this annual estimate (i.e., ‘the number’) is a
primary source. This number, which is always for a prior period and is therefore a few years
old, gets noticed and reported on by hundreds of journalists and media outlets, and thousands
of web pages or links are created as a result. A web search for “annual US GHG emissions”
turns up millions of hits. The top few may be links to the latest EPA report or the website
that links to the report. The web search may also point to archived EPA reports of historical
emissions published in previous years. But there is only a single primary source for each year’s
emissions estimate – the original study by EPA.

The vast majority of the web search results lead to studies ‘re-reporting’ the original published
EPA value. It is possible that the primary source is not even in the top 10 of the ordered
websites of a web search. This phenomenon is important because when looking for data
sources, it is easy to find secondary sources, but there is often a bit of additional work needed
to track backwards to find and cite the primary source. It is the primary source that one should
use in any model building and documentation efforts (even if you found it via finding a
secondary source first). A primary source of data is typically from a credible source, and citing
“US EPA” instead of “USA Today” certainly improves the credibility of your work.
Backtracking to find these primary sources can be tricky because often newspaper articles will
simply write “EPA today reported that the 2011 emissions of greenhouse gases in the United
States were 7 billion metric tons” without giving full references within the article. Blogs, on the
other hand, tend to be slightly more academic in nature and may cite sources or link to websites
(and of course they still might link to a secondary source). If your secondary sources do not
link to the EPA report directly, you need to do some additional searching to try to find the
primary source. It will help your search that you know the numerical value that will be found
in the primary source (but of course you should confirm that the secondary source used the
correct and most up to date value). With some practice you will become adept at quickly
locating primary sources.

The relevant contextual information that may appear in the official EPA source includes things
like how the estimate was created, what year it is for, what the year-over-year change was, and
which activities were included. All of that contextual information is important. A more
frequently reported estimate of US GHG emissions (only a few months old when reported)
comes from the US Department of Energy, but only includes fossil fuel combustion activities,


which are far easier to track because power plants annually report their fuel use to the
Department. If you were looking for a total inventory of US greenhouse gas emissions, the
EPA source is the definitive source.

After finding appropriate data, it is essential to reference the source adequately. It is assumed
that you are generally familiar with the basics of creating footnote or endnote references or
bibliographical references to be used in a report. You can see short bibliographical reference
lists at the end of each of the chapters of this textbook. Primary data sources should be
completely referenced, just as if you were excerpting something from a book. That means you
need to give the full bibliographic reference as well as point to the place inside the source
where you found the data. That might be the page number if you borrow something from the
middle of a report, or a specific Table or Figure within a government report. For example, if
you needed data about the electricity consumption per square foot for a commercial building,
the US Department of Energy’s Energy Information Administration 2003 Commercial
Buildings Energy Consumption Survey (CBECS) suggests the answer is 14.1 kWh/square foot
(for non-mall buildings). The summary reports for this survey are hundreds of pages in length.
The specific value of 14.1 kWh/sf is found on page 1 of Table C14. By referencing this
source specifically, you allow others to reproduce your study quickly. You also are allowing
others (who may stumble upon your own work when looking for something else) to use your
work as a secondary source. The full primary source reference for the CBECS data point could
look like this:

US Dept. of Energy, 2003 Commercial Buildings Energy Consumption Survey (CBECS), Table C14.
“Electricity Consumption and Expenditure Intensities for Non-Mall Buildings, 2003”, 2006,
http://www.eia.gov/consumption/commercial/data/2003/pdf/c14.pdf, last accessed July 5, 2013.
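
As a quick illustration of putting such a well-referenced data point to work, the intensity cited
above can be scaled by floor area. The building size below is hypothetical:

```python
# Electricity intensity from the CBECS citation above (non-mall buildings).
intensity_kwh_per_sqft_year = 14.1

# Hypothetical 50,000 square foot commercial building.
floor_area_sqft = 50_000

annual_electricity_kwh = intensity_kwh_per_sqft_year * floor_area_sqft
print(f"Estimated annual electricity use: {annual_electricity_kwh:,.0f} kWh")
```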

What is unfortunately common is to see very loose or abbreviated referencing of data sources,
such as “DOE CBECS”. Such casual referencing is problematic for many reasons. The DOE
has done at least four CBECS surveys, roughly four years apart, since 1992, for which they
have made the results available online. If one finds a single data point on the Energy
Information Administration’s website and uses it in a study, that data point might come from
any of these four surveys, which span 20 years of time, from any of the thousands of pages of
data summaries. With only a reference to “CBECS”, a reader would have no way of knowing how
recent, relevant, or useful your data point is.

Beyond the examples above, one might be interested in the population of a country, the
average salary of workers, or other fundamental data. You are likely (and encouraged) to find
and report multiple primary sources. These multiple sources could come from independent
agencies or groups who sought to find answers to the same or very similar questions. A rule
of thumb is to seek and report results from at least three such sources if possible. In the best
case, the primary sources yield the same (or nearly equal) data. In reality, they will likely disagree
to a small or large extent. There may be very easy explanations for why they differ, such as
using different assumptions or methods. By noting and representing that you have found
multiple data points, and summarizing reasons for the differences, you gain the ability to judge
whether to simply use an assumption based on the three sources, or need to use a range or an
average. The practice of seeking multiple sources will sometimes even uncover errors in
original studies or data reports, or at the least make you realize that a primary source you
found is not appropriate to use in your own work given differences in how its result was derived.

“When we look up a number in more than one place, we may get several different
answers, and then we have to exercise care. The moral is not that several answers are
worse than one, but that whatever answer we get from one source might be different if
we got it from another source. Thus we should consider both the definition of the
number and its unreliability.” -- Mosteller (1977)

If you end up with several values, it may be useful to summarize them in a table. If you had
been trying to find the total US greenhouse gas emissions as above, you might summarize it
like in Figure 2-1. Additional rows could be added for other primary or secondary sources. A
benefit of organizing these summary tables is that it allows the audience to better understand
your underlying data sources as well as potential issues with applying them.
Value (million metric tons CO2) | Source | Type of Source | Comments
6,702 | US EPA, Inventory Of U.S. Greenhouse Gas Emissions And Sinks: 1990-2011 | Primary | Value is for 2011.
5,471 | US DOE, U.S. Energy-Related Carbon Dioxide Emissions, 2011 | Primary | Value is for 2011. Only counts energy-related emissions.
6,702.3 | Environmental News Network, US Greenhouse Gas Emissions are Down, April 21, 2013 | Secondary | Specifically references EPA.

Figure 2-1: Summary of Sources for US Greenhouse Emissions

A final note about seeking data sources pertains to the use of statistical abstracts. Such
references exist for many countries, states and organizations like universities. These abstracts
are valuable reference materials that are loaded with many types of summary data. They are
typically organized by sections or chapter of related data. For example, the Statistical Abstract
of the United States (2011) has sections on agriculture, manufacturing, energy, and transportation
(all of which are potentially relevant for LCA studies). Each of the sections contains a series
of data tables. The Agriculture section has, amongst other interesting facts, data on the number
of farms and area of cropland planted for many types of crops. The Table (Number 823) of
farms and cropland has a footnote showing the primary source of the data, in this case the
2007 Census of Agriculture. Such abstracts may also have other footnotes that need to be
considered when using them as a source, such as noting the units of presentation (e.g., dollar
values in millions), or the boundaries considered.

This example is intended to reinforce two important facts of using statistical abstracts. First,
it is important to realize that while generally statistical abstracts may be a convenient “go to”
reference source, they are not a primary source. The best practice is to use statistical abstracts
as links to primary sources – and then go read the primary source. Re-publication of data
sometimes leads to errors, or omissions of important footnotes like units or assumptions used.
Second, despite the year in the title of the abstract, it is generally not true that all the data
within is from that year. Generally, though, any data contained within is the most recent available.
Abstracts for states and other organizations are organized in similar ways and with similar
source referencing. Finally, it is worth noting that in the age of Google, statistical abstracts are
no longer the valuable key reference sources that they once were. Nonetheless, they are still a
great first “one stop” place to look for information, especially if doing research in a library
with an actual book.

Accuracy vs. Precision


We seek primary sources (and multiple primary sources) because we want to get credible values
to use. Depending on the kind of model we are building, we may simply need a reasonable
estimate, or we may need a value as exact as possible. This raises the issue of whether we are
seeking accuracy or precision in our search for sources and/or our model building efforts.
While the words accuracy and precision are perhaps synonyms to lay audiences, the “accuracy
versus precision” dialogue is a long-standing one in science. We are often asked to state clearly
what we are seeking – accuracy or precision (or both) – in our system of measurement.

The accuracy of a measurement system is the degree to which measurements made are close
to the actual value (of course, as measured by some always correct system or entity). The
precision of a measurement system is the degree to which repeated measurements give the
same results. Precision is thus also referred to as repeatability or reproducibility.

In addition to physical measurement systems, these features are relevant to computational
methods on data, such as statistical transformations, Microsoft® Excel®1 models, etc. Figure
2-2 summarizes the concepts of accuracy and precision within the context of aiming at a target,
but could be analogously used to consider our measurements of a value.

1 Microsoft and Excel are registered trademarks of Microsoft Corporation. In the rest of the book, just “Microsoft Excel” will be used.


[Figure: a 2×2 grid of target diagrams illustrating the four combinations of accurate/inaccurate and precise/imprecise]

Figure 2-2: Comparison of accuracy and precision. Source: NOAA 2012

Systems can thus be accurate but not precise, precise but not accurate, or neither, or both.
Systems are considered valid when they are both accurate and precise. With respect to our
CBECS example above, the survey used could provide an inaccurate (but precise) result if mall
and non-mall buildings are included in an estimate of retail building effects. It could produce
an imprecise (but accurate) result if samples from different geospatial regions do not align with
the actual geospatial mix of buildings. Performing mathematical or statistical operations (e.g.,
averages) on imprecise values may not lead to a value that is credible to use in your work.
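One way to make these definitions concrete is to compute the bias (a measure of inaccuracy) and the spread (a measure of imprecision) of repeated measurements. The short Python sketch below uses invented readings against an assumed true value of 10.0; the numbers and variable names are ours, purely for illustration:

```python
import statistics

# Hypothetical repeated measurements of a quantity whose true value is 10.0.
true_value = 10.0
accurate_imprecise = [9.2, 10.9, 9.5, 10.4, 10.0]    # centered, but scattered
precise_inaccurate = [11.1, 11.2, 11.1, 11.2, 11.1]  # tight, but offset

for name, readings in [("accurate/imprecise", accurate_imprecise),
                       ("precise/inaccurate", precise_inaccurate)]:
    bias = statistics.mean(readings) - true_value   # low |bias| = accurate
    spread = statistics.stdev(readings)             # low spread = precise
    print(f"{name}: bias = {bias:+.2f}, spread = {spread:.2f}")
```

The first set averages out near the true value (accurate) but varies widely (imprecise); the second repeats almost exactly (precise) but is consistently offset (inaccurate).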

When a measurement system is popular and needs to be known to be accurate and precise,
typically a standard is made for all parties to agree upon how to test and formalize the features
of the system (e.g., how to perform the test many times and assess the results). Standards will
be discussed more in Chapter 4.

Uncertainty and Variability


As we seek to find multiple sources for our data needs, inevitably we will come across
situations where the data do not agree to the extent that we would hope. This will lead us to
situations of dealing with uncertainty and variability of our data. While the ways in which we
work with and model uncertain and variable data are similar, we first separately define each
condition. These simple definitions will be used here, with further detail in later chapters as
needed. Variability exists because of heterogeneity or diversity of a system. It may be, for
example, that the energy used to manufacture an item differs between the morning and
afternoon shift in a factory. Uncertainty exists because we either are unable to precisely
measure a value, or lack full information, or are ignorant of some state. It is possible that if we
did additional research or improved our measurement methods, we could reduce the
uncertainty and narrow in on a most likely outcome or value. Variability, on the other hand, is
not likely to be reducible – it may exist purely due to natural or other factors outside of our
control.

Management of Significant Figures


Beyond thinking that we have created a way of accurately and precisely measuring a quantity,
we also want to ensure that we appropriately represent the result of our measurement. Many
of us learned of the importance of managing the use of significant figures (or digits) in
middle school. Two important lessons learned that merit mention in this context relate to
leading and trailing zeros and reporting the results of mathematical operations. Remember
that trailing zeros are always significant and indicate the level of precision of the measurement.
Leading zeros (after a decimal point), however, are not significant. This means that a value like
0.00037 still has only two significant digits because scientific notation would refer to it as
3.7E-04, and the first component of the notation (3.7) represents all of the significant digits. Also
take care not to introduce extra digits in the process of adding, subtracting, multiplying, or
dividing significant figures. That means, for example, not perpetuating a result from a
calculator or spreadsheet that multiplies two 2-digit numbers and reporting 4 digits. The
management of significant digits means reporting only 2 digits from such a result, even if it
means rounding off to achieve the second digit.

Recall that the basis for such directives is that our measurement devices are calibrated to a
fixed number of digits. A graduated cylinder used to measure liquids in a laboratory usually
shows values in 1 ml increments (e.g., 10, 11, or 12 ml). We then attempt to estimate the level
of the liquid to the nearest 10th of an increment. As an example, when measuring a liquid we
would report values like 10.2 ml – with three significant figures - which expresses our
subjective view that the height of the liquid is approximately 2/10ths of the way between the
10 and 11 ml lines. Given our faith in the measurement system, we are quite sure of the first
2 digits to the left of the decimal point (e.g., 10), and less sure of the digit to the right of the
decimal point as it is our own estimate given the uncertainty of the measurement device, and
thus is the least significant figure.


When counting significant figures, think about scientific notation.


• All nonzero digits are significant
• Zeroes between nonzero digits are significant
• Trailing zeroes that are also to the right of a decimal point in a number are
significant

Digits do not increase with calculations.


• When adding and subtracting, the result is rounded off to have the same
number of decimal places as the measurement with the least decimal
places.
• When multiplying and dividing, the result is rounded off to have the same
number of significant figures as in the component with the least number of
significant figures.
Figure 2-3: Summary of Rules of Thumb for Managing Significant Figures

Inevitably, our raw measurements will be used in additional calculations. For example our
graduated cylinder observation of volume can then be used to find mass, molarity, etc. If those
subsequent calculations are presented with five significant figures (since that’s what the
calculator output reads), such results overstate the accuracy of the calculations based on the
original data, and by implication understate their uncertainty. Figure 2-3 summarizes rules for
managing significant figures. We will circle back to discussing data acquisition in the context
of life cycle assessment in a later chapter.

Going back to our CBECS example, the published average electricity use of 14.1 kWh/square
foot is a ratio with three significant figures. That published value represents an average of
many buildings included in the survey. The buildings would give a wide range of electricity
consumption values in the numerator. However, only three significant figures are reported, likely
because some relatively small buildings yielded values with only three significant figures. If not
concerned about managing significant figures, DOE could have reported a value of 14.1234
kWh/sf. This result would have led to negligible modeling errors, but would have added
extraneous digits for no reason.

One of the main motivations for managing the number of significant digits is in considering
how to present model results of an LCA. As many LCAs are done in support of a comparison
of two alternatives, an inevitable task is comparing the quantitative results of the two. For such
a comparison to be valid, it is important not to report more significant figures in the result
than were present in the initial measured values. A common output of an LCA, given the need
to maintain assumptions between the modeling of various alternatives, is that the alternatives
would have very similar effects across at least one metric. Consider a hypothetical result where
the energy use of Alternative A is found to be 7.56 kWh and that of Alternative B is 7.57 kWh.
Would one really expect a decision maker to prioritize one over the other because of a 0.01
kWh reduction in energy use, which is a 0.1% difference, or a savings worth less than 0.1 cents
at current US electricity prices? Aside from the fact that it is a trivial amount, it is likely outside
of the range of measurement available.

In LCA, we do not have the same ‘measurement device’ issues used to motivate a middle
school introduction to significant digits. Instead, the challenge lies in understanding the
uncertainty of the ‘measurement process’ or the ‘method’ used to generate the numerical
values needed for a study. So while we do not worry about the number of digits on a graduated
cylinder, we need to consider that the methods are uncertain.

In the absence of further guidance, what would we recommend? Returning to our discussion
above, an LCA practitioner should seek to minimize the number of significant digits. We generally
recommend reporting no more than 2 or 3 digits (if for no other reason than consideration of
uncertainty). In the example of the previous paragraph that would mean comparing two
alternatives with identical energy use – i.e., 7.6 kWh. The comparison would thus have the
appropriate outcome – that the alternatives are equivalent.

Ranges
If you are able to find multiple primary sources, it is typically more useful to fully represent all
information you have than to simply choose a single point as a representation. If you use a
single value, you are making a conscious statement that one particular value is the most correct
and the others are irrelevant. In reality, you may have more than one value being potentially
correct or useful, e.g., because you found multiple credible primary sources. By using ranges,
you can represent multiple data points, or a small set or subset of data. While individual data
points are represented by a single number (e.g., 5), a range is created by encapsulating your
multiple data points, and may be represented with parentheses, such as (0,5) or (0-5). A range
represented as such could mean “a number somewhere from 0 to 5”.

The values used as the limits of a range may be created with various methods. Often used
parameters of ranges are the minimum and maximum values of a dataset. In an energy
technology domain, you might want to represent a range of efficiency values of an electricity
generation technology, such as (30%, 50%).

If you have a large amount of data, then it might be more suitable to use the 5th and 95th percentile
values as your stated range. While this may sound like an underhanded way of ignoring data, it
can be appropriate to represent the underlying data if you believe some of the values are not
representative or are overly extreme. Using the same technology efficiency example, you may
find data on efficiencies of all coal-fired or gas-fired power plants in the US, and decide that
the lowest efficiency values (in the 10-20% range) are far outside of the usual practice because
they represent the efficiencies of plants that are used very infrequently or are using extremely
out of date technology. There could be similarly problematic values at the high end of the full
range of data if the efficiency for a newer plant has been estimated by the manufacturer, but the
plant has not been in service long enough to measure the true efficiency. Using these percentile
limits in the ranges can help to slightly constrain the potential values in the data.

Ranges can be used to represent upper and lower bounds. Bounding analysis is useful when you
do not actually have data but have a firm (perhaps even qualitative) belief that a value is unlikely
to be beyond a certain quantity. A bounding analysis of energy technology might lead you to
conclude that given other technologies, it is unlikely that an efficiency value could be less than
20% or greater than 90%. Using a range in this way constrains your data to values that you
feel are the most realistic or representative.

Finally, ranges can be used to represent best or worst case scenarios. The limit values chosen for
the stated ranges are thus subjectively chosen although perhaps by building on some range
limits derived from some of the other methods above. For example, you might decide that a
“best case value” for efficiency is 100% and “worst case” value is 0% (despite potentially being
unrealistic). Best and worst case limits are typically most useful when modeling economic
parameters, e.g., representing the highest salary you might need to pay a worker or the lowest
interest rate you might be able to get for a bank loan. Best and worst cases, by their nature,
are themselves unlikely. It is not very probable that all of your worst parameters will occur,
just as it is improbable that all best parameters will occur. Thus you might consider the best-
worst ranges as a type of bounding analysis.

Another way of implementing a range is by using statistical information from the data, such as
the variance, standard error, or standard deviation. You may recall from past statistics courses
that the variance is the average of the squared differences from the mean, and the standard
deviation (how much you expect one of the data points to be different from the mean) is the
square root of the variance. The standard error (the “precision of the average”, or how much
you might expect a mean of a subsample to be different from the mean of the entire sample)
is the standard deviation divided by the square root of the number of samples of data. Either
of these values if available can be used to construct confidence intervals to give some sense of
the range of the underlying data. A related statistical metric is the relative standard error (RSE),
which is defined as the standard error divided by the mean and multiplied by 100, which gives
a percentage-like range variable. Another way to think about the RSE is as a metric
representing the standard error relative to the mean estimate on a scale from zero to 100. As
the RSE increases, we would tend to believe our mean estimate is less precise when referring
to the true value in the population being studied. Of course when found in this way, the range
will be symmetric around the mean.
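These statistics are straightforward to compute. The Python sketch below uses a small invented set of efficiency values to show the relationships between the standard deviation, standard error, and RSE as defined above:

```python
import math
import statistics

# Hypothetical efficiency values (as fractions) from several primary sources.
samples = [0.32, 0.34, 0.33, 0.36, 0.35]

mean = statistics.mean(samples)
stdev = statistics.stdev(samples)            # sample standard deviation
std_error = stdev / math.sqrt(len(samples))  # standard error of the mean
rse = 100 * std_error / mean                 # relative standard error (%)

print(f"mean = {mean:.3f}, stdev = {stdev:.3f}, "
      f"SE = {std_error:.4f}, RSE = {rse:.1f}%")
```

With these invented values, the mean is 0.34 and the RSE works out to about 2.1%, suggesting a fairly precise mean estimate.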


A 95-percent confidence range is calculated for a given survey (mean) estimate from the RSE
via a three-step process. First, divide the RSE by 100 and multiply by the mean presented by
the survey to get the standard error. Second, multiply the standard error by 1.96 to generate
the confidence error (recall from statistics that the value 1.96 comes from the shape and
structure of an arbitrarily assumed normal distribution and its 0.975 quantile). Finally, add and
subtract the confidence error to the survey estimate from the second step to create the 95%
confidence range. Note that a 95% confidence range is not the same as a 5th-95th percentile
range. A 95% confidence range represents the middle 95% of a normal distribution, or a 2.5th-
97.5th percentile range, leaving only 2.5% of the distribution at the top and bottom. A 5th- 95th
percentile range leaves 5% on the top and bottom.

Example 2-1:

Question: Develop a 95% confidence interval for the 2003 CBECS estimate of US
commercial building electricity consumption per square foot (14.1 kWh/sf) given the RSE (3.2).

Answer: Given the RSE definition provided above, the standard error is (3.2/100)*14.1 =
0.45 kWh/square foot, and the confidence error is 0.88 kWh/square foot. Thus, the 95%
confidence interval would be 14.1 ± 0.88 kWh/square foot. Note that this range seems to
contradict the 25th-75th percentile range of 3.6-17.1 provided directly by the survey (it is a much
tighter distribution around the mean of 14.1). However, the confidence interval represents
something different – how confident we should be that the average electricity use of all of the
buildings surveyed (as if we re-did the survey multiple times) would be approximately 14.1 – not
the underlying range of actual electricity use of the buildings! If you are
making a model that needs to represent the range of electricity use, the provided 25th-75th
percentile values are likely much more useful.

Source: US Dept. of Energy, 2003 Commercial Buildings Energy Consumption Survey (CBECS),
RSE Tables for Consumption and Expenditures for Non-Mall Buildings,
http://www.eia.gov/consumption/commercial/data/2003/pdf/c1rse-c38rse.pdf, page 94.
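The three-step calculation in Example 2-1 can be scripted so the arithmetic is reproducible. This Python sketch follows the steps above (the 1.96 multiplier assumes a normal distribution, as noted in the text):

```python
# Reproduce Example 2-1: a 95% confidence interval from a mean and its RSE.
mean_kwh_sf = 14.1   # CBECS mean electricity intensity (kWh/square foot)
rse = 3.2            # relative standard error, in percent

std_error = (rse / 100) * mean_kwh_sf   # step 1: back out the standard error
conf_error = 1.96 * std_error           # step 2: normal 0.975 quantile
low = mean_kwh_sf - conf_error          # step 3: build the interval
high = mean_kwh_sf + conf_error

print(f"SE = {std_error:.2f}, 95% CI = ({low:.1f}, {high:.1f}) kWh/square foot")
```

This reproduces the 14.1 ± 0.88 kWh/square foot interval from the example.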

A main benefit of using ranges instead of single point estimates is that the range boundaries
can be used throughout a model. For example one can propagate the minimum values of
ranges through all calculations to ensure a minimum potential result, or the maximum values
to get a maximum potential result. One word of caution when using ranges as suggested above
is to maintain the qualitative sense of the range boundaries. If you are envisioning a best-worst
kind of model, then the “minimum” value chosen in your range boundary should consistently
represent the worst case possible. This is important because you may have a parameter in your
model that is very high but represents a worst case, for example, a loss factor from a
production process. In a best-worst range type of model, you want to have all of your best
and worst values ordered in this way so that your final output range represents the worst and
best case outputs given all of the worst possible variable values, and all possible best values.
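A minimal Python sketch of this kind of best/worst propagation, using invented efficiency and loss-factor ranges, shows why the endpoints must be paired deliberately rather than blindly combining minima with minima:

```python
# Hypothetical two-parameter model: delivered = fuel * efficiency * (1 - loss).
eff_range = (0.30, 0.50)    # generation efficiency range (fraction)
loss_range = (0.05, 0.15)   # downstream loss factor range (fraction)
fuel_energy = 100.0         # units of fuel energy input

# The best case pairs the HIGHEST efficiency with the LOWEST loss;
# the worst case pairs the LOWEST efficiency with the HIGHEST loss.
best = fuel_energy * max(eff_range) * (1 - min(loss_range))
worst = fuel_energy * min(eff_range) * (1 - max(loss_range))

print(f"delivered energy range: ({worst:.1f}, {best:.1f})")  # (25.5, 47.5)
```

Here the high end of the loss-factor range is a "worst" value, so it is paired with the worst-case efficiency, keeping the qualitative sense of the range boundaries consistent.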


Units and Unit Conversions


In quantitative analysis, it is critical to maintain awareness of the unit of analysis. That might
mean noting grams or kilograms, short tons or metric tons (a.k.a. tonnes). While conversions
can be simple, such as multiplying or dividing by 1000 in SI units, this is an area where many
errors occur, especially when done manually. It is easy to make errors by not thinking out the
impacts and accidentally multiply instead of divide, or vice versa. Thus a good practice is to
ask yourself whether the resulting conversion makes sense. This is also known as applying a
reasonableness test, or a sanity check. Some refer to it as a “sniff test”, suggesting that you
might be able to check whether the number smells right. To convert from kilograms to grams,
we multiply by 1000 - the result should be bigger because we should have many more grams
than we do kilograms. If we accidentally divide by 1000 (an error the authors themselves have
made many times in the rush of getting a quick answer) the number gets smaller and the sniff
test would tell us it must be an error.

In the context of finding sources for data, simple changes of unit scales, such as grams to
kilograms, don’t require extensive referencing. When performing simple unit conversions like
this, it is typical that instead of seeking external data sources you would simply document the
step used (e.g., you would state that you “converted to kilograms”).

There are however more complex unit conversions that change the entire basis of comparison
(not just kg to g). If you are changing more than just the scale, such as switching from British
Thermal Units (BTU) to megajoules (MJ), this is referred to as performing physical or energy
unit conversions. A unit conversion factor is just a mathematical relation between the same
underlying phenomena but with different measurement scales, such as English and SI (metric)
units. For example you may find a data source expressing emissions in pounds but need to
report it in kilograms (or metric tons). This type of conversion does not require much
documentation either, e.g., you could write that you “assumed 2.2 pounds per kilogram”. Such
conversions still need to be done and managed correctly. In 1999, NASA famously lost the
Mars Climate Orbiter after a nine-month mission when navigation engineers gave commands
in metric units to the spacecraft, whose software engineers had programmed to it to operate
with English units, causing the vehicle to overshoot the planet.

If you do not know the conversion factors needed, then you will need to search for sources
of your conversion factors using the same methods discussed above. If you were to do a search
for unit conversions with the many tools and handbooks available, you will certainly find
slightly different values in various sources, although most of these differences are simply due
to rounding off or reducing digits. One source may say 2.2 pounds per kg, another 2.20462,
and yet another 2.205. Practically speaking, any of these unit conversions will lead to essentially
the same result (they would differ by at most 0.2%); in the big picture they are all the same
number, i.e., 2.2. The existence of multiple conversion factors is exactly the reason to state
the one you used. Without stating the actual conversion factor used, someone else may not be
able to reproduce your study results (or may assume an alternative unit conversion factor and
not understand why your results are different). Given the scientific and engineering basis of
unit conversion factors, you do not typically need to cite specific ‘sources’ for them, just the
numbers used.

As you build your models, your calculations will become increasingly complex. You can
double-check your calculations by tracing your units. As a simple example, assume you have
towboat transit time data for a stretch of river between two locks. You know the transit time
in minutes (140), and the distance between locks in miles (6.1). Equation 2-1 shows how to
calculate the towboat speed in kilometers/hour, which could later allow you to calculate power
load and emissions rates. Getting the speed units wrong, despite being a trivial conversion,
could have disastrous effects on your overall model results. Tracing the units confirms that
you have used all of the necessary conversion factors, and used them appropriately and in the
right order.
x km/hr = (6.1 miles between locks / 140 minutes transit time) × (1 kilometer / 0.621 miles) × (60 minutes / 1 hour) = 4.2 km/hr   (2-1)

We end by briefly discussing the need to manage units in calculations. Note that when solving
Equation 2-1, a calculator would report the speed as 4.2098 km/hr, a level of accuracy that
would be impossible to achieve (and silly to present). The reason to document the units is so
that when we are using them in calculations that we do the mathematical operations correctly,
i.e., adding kg to kg, not kg to g. The ‘Hillsville’ graphic from The New Yorker (presented at the
beginning of this chapter) is another reminder of the perils of adding mismatched units.
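The same unit tracing can be carried into a script by keeping units in the variable names, so each conversion can be sanity-checked step by step. Here is an illustrative Python sketch of Equation 2-1 (the variable names are ours; 1.609 km per mile is the rounded conversion factor we chose to document):

```python
MIN_PER_HOUR = 60
KM_PER_MILE = 1.609   # documented conversion factor (rounded)

transit_minutes = 140   # towboat transit time between locks
distance_miles = 6.1    # distance between the two locks

speed_mph = distance_miles / (transit_minutes / MIN_PER_HOUR)  # miles per hour
speed_km_hr = speed_mph * KM_PER_MILE                          # kilometers per hour

print(f"{speed_km_hr:.1f} km/hr")  # 4.2 km/hr, to two significant figures
```

Naming each variable with its unit (`speed_mph`, `speed_km_hr`) makes it much harder to multiply where you should divide, and the final print rounds away the false precision of the raw calculator result.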

Energy-Specific Considerations
An important underlying concept for energy conversion processes is efficiency, which
measures how much of the input energy can be converted to output energy (represented as
output/input). You may recall from a physics course that energy is a measure of the amount
of work done or generated, with units such as joules, BTU, or kilowatt-hours. On the other
hand, power is the rate at which energy is used or transformed, with units such as watts or
joules/second. Electricity generation processes often are assessed in terms of their efficiency.
Thus a power plant that outputs 50 units of energy from 100 units of fuel input is 50%
efficient. But moving between different energy sources with different units, e.g., fuel in BTU
and electricity in kWh, can be more complicated than it appears.

One should not treat efficiency as a unit conversion factor. While engineering reference
manuals give the unit conversion factor ‘1 kWh = 3,413 BTU’, supplying 3,413 BTU of coal
in a steam turbine will not produce 1 kWh of electricity. No energy system is 100% efficient.
This is because of the many intermediate processes in the plant. An important thermodynamic
concept specific to the modeling of fuel use pertains to the heating value of the fuel, which
refers to the energy released from combusting the fuel2, with units such as kJ/kg or BTU/lb.

The performance of a thermodynamic process that converts a fuel input into a quantity of electricity is
often represented as a heat rate. In describing the conversion of fossil energy from a fuel in a power
plant, the heat rate for a typical coal–fired plant may be 10,000 BTU (of coal input) to generate
1 kWh (electricity output). The reason that the power plant heat rate is not the same as the
unit conversion factor 1 kWh = 3,413 BTU is because converting coal to electricity requires
burning the coal, then using the produced heat to turn water into steam, and then using the
pressurized steam to spin a turbine which is connected to a generator. Losses exist throughout
all of these steps, and thus, far more than the 3,413 BTU of fuel is needed to generate 1 kWh
of electricity. The overall efficiency of this power plant is 1 kWh per 10,000 BTU, or, given
the unit conversion factor for a kilowatt-hour, 3,413 BTU / 10,000 BTU = 34%. Natural gas
plants can have efficiencies of about 50%. Solar PV cells, though they burn no fuel, convert
only about 10% of incident sunlight to electricity. It may be surprising to learn that in the 21st century we
rely on such inefficient methods to make our electricity! The point of this example is that a
power plant’s efficiency is not a unit conversion factor. Unit conversion factors are
mathematical conversions. Power plant efficiencies are thermodynamic metrics. Be sure to
keep these concepts separate. For all of the reasons mentioned above, it is important to
document all conversion factors, rates, or efficiencies used, as others might make different
assumptions.
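The relationship between a heat rate and an efficiency is simple enough to write out in code. The sketch below (illustrative Python; the function name is our own) uses the 3,413 BTU per kWh unit conversion factor and the 10,000 BTU/kWh coal plant heat rate from the text:

```python
# Convert a power plant heat rate (BTU of fuel input per kWh of
# electricity output) into a thermodynamic efficiency.
BTU_PER_KWH = 3413  # unit conversion factor: 1 kWh = 3,413 BTU

def efficiency_from_heat_rate(heat_rate_btu_per_kwh):
    """Fraction of input fuel energy converted to electricity."""
    return BTU_PER_KWH / heat_rate_btu_per_kwh

# Typical coal plant from the text: 10,000 BTU of coal per kWh generated
print(round(efficiency_from_heat_rate(10_000), 3))  # -> 0.341
```

Note that the conversion factor appears in the code only to translate between units; the efficiency itself is a property of the plant, not a mathematical constant.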

The quantity of energy used locally for a specific task is typically referred to as site energy,
such as the electricity we use for recharging a laptop or mobile phone. However, site uses of
energy typically lead to an even greater use of energy elsewhere, such as at a power plant. As
noted above, the energy conversion performance of a coal-fired power plant, as well as losses
from the power grid, means that for every 3 units of energy in the coal burned at a plant we
can use only about 1 unit of energy at our electrical outlet. That amount of original energy
needed, such as at a power plant, is referred to as primary or source energy.
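The site-to-source relationship can be sketched the same way. In the illustrative Python below, the 34% plant efficiency is the coal figure from the text, while the 6% grid loss is an assumed, typical transmission and distribution loss:

```python
def source_energy(site_energy, plant_efficiency=0.34, grid_loss=0.06):
    """Primary (source) energy needed to deliver site_energy at the outlet.

    plant_efficiency is the coal-plant figure from the text; the 6%
    grid loss is an assumed transmission and distribution loss."""
    return site_energy / (plant_efficiency * (1 - grid_loss))

# Delivering 1 unit of site energy takes roughly 3 units of coal energy
print(round(source_energy(1.0), 1))  # -> 3.1
```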

As a final note, ‘converting’ between energy and power is not appropriate for LCA analyses,
but is often done to provide examples or benchmarks for lay audiences. For example, 300 kWh
of electricity may be described as the amount needed to run a 30-watt light bulb for 10,000
hours (a bit more than a full year of continuous use).

2 Of particular importance is which heating value is used. The difference between the lower heating value (LHV) and the higher heating
value (HHV) is whether the energy used to vaporize liquid water in the combustion process is included or not. While the difference
between HHV and LHV is typically only about 10%, one can argue that the HHV is a more inclusive metric, consistent with the system and life
cycle perspectives of LCA. Regardless, this is another example of why all relevant assumptions need to be explicit in energy analysis.


Use of Emissions or Resource Use Factors


Many production processes have releases to the environment, such as the various types of
pollutants mentioned in Chapter 1. For many analyses, an emissions factor is needed to
represent the units of emissions released as a function of some level of activity. We will discuss
specific data sources for emissions factors in later chapters, but most emissions factors can be
found using the same type of methods needed to find primary data sources or unit
conversions. Emissions factors may be sourced from government databases or reports (e.g.,
the US EPA’s AP-42 database) or technical specifications of a piece of equipment and as such
should be explicitly cited if used. Given the potential for discrepancies in emissions factors,
you should look for multiple sources of emissions factors and represent them with a range of
values.

Beyond finding sources, knowledge of existing physical quantities and chemical processes can
be used to find emissions factors. Equation 2-2 can be used to generate a CO2 emissions factor
for a combusted fuel based on its carbon content (as found by laboratory experiments) and an
assumed oxidation rate of carbon (the percent of carbon that is converted into CO2 during
combustion):

CO2 emissions from burning fuel (kg / MMBTU) =

Carbon Content Coefficient (kg C / MMBTU) * Fraction Oxidized * (44/12) (2-2)

where the 44/12 parameter in Equation 2-2 is the ratio of the molecular weight of CO2 to the
molecular weight of carbon, and MMBTU stands for million BTU.

If we were doing a preliminary analysis and only needed an approximate emissions factor, we
could assume the fraction oxidized is 1 (100% or complete oxidation). In reality, the fraction
oxidized could be closer to 0.9 than 1 for some fuels. For an example of coal with a carbon
content of 25 kg C per MMBTU, and assuming perfect oxidation, the emissions factor would
be 92 kg CO2 / MMBTU.
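Equation 2-2 and the coal example can be checked directly with a short script (illustrative Python; the function and variable names are our own):

```python
CO2_TO_C_RATIO = 44 / 12  # molecular weight of CO2 over molecular weight of C

def co2_emissions_factor(carbon_content_kg_c_per_mmbtu, fraction_oxidized=1.0):
    """Equation 2-2: kg CO2 per MMBTU of fuel burned."""
    return carbon_content_kg_c_per_mmbtu * fraction_oxidized * CO2_TO_C_RATIO

print(round(co2_emissions_factor(25)))       # coal, complete oxidation -> 92
print(round(co2_emissions_factor(25, 0.9)))  # with 90% oxidation instead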

Various emissions factors can be developed through similar methods by knowing the content
of particular elements (such as sulfur for SO2); however, other emissions factors are a function
of the choice of combustion and emissions control technologies used (such as for nitrogen
oxide or particulate matter emissions).

In LCA, we will also encounter resource use factors, such as material input factors, that are used
and developed in similar ways as emissions factors. The main difference is that resource use
factors are expressed per unit of input rather than output.


Estimations vs. Calculations


“It is the mark of an instructed mind to rest satisfied with the degree of precision which
the nature of the subject permits and not to seek an exactness where only an
approximation of the truth is possible.” - Aristotle

“God created the world in 4004 BC on the 23rd of October.” – Archbishop James Ussher
of Ireland, The Annals of the Old Testament, in 1650 AD

“.. at nine o’clock in the morning.” –John Lightfoot of Cambridge, in 1644 AD

Most courses and textbooks teach you how to apply known equations and methods to derive
answers that are exact and consistent (and selfishly, easy to grade). These generally are activities
oriented towards teaching calculation methods. Similarly, methods as described above can
assist in finding and documenting data needed to support calculations. A simple example of a
calculation method is applying a conversion factor (e.g., pounds to kilograms). More complex
calculation methods may involve solving for distance traveled given an equation relating
distance to velocity and time. As solving LCA problems seldom requires you to learn a
completely new calculation method, we presume you have had sufficient exposure to doing
calculations.

But what if all else fails and you cannot find a primary source or a needed unit conversion?
What if we are unable to locate an appropriate calculation method? An alternative method
must be found that assists in finding a quantitative answer, and which preserves a scientific
method, but is flexible enough to be useful without all needed data or equations. Such an
alternative could involve conducting a survey of experts or non-experts, or guessing the
answer. It is this idea of “guessing” the answer that is the topic of this section. Here we assume
that there is a time-critical aspect to the situation, and that you require a rough guess in lieu
of investing substantially more time looking for a source, conducting a complete survey, etc.

Estimation methods use a mix of qualitative and quantitative approaches to yield a “ball-
park”, or “back of the envelope”, or order of magnitude assessment of an answer. These are
not to be confused with the types of estimation done in statistical analyses that are purely
quantitative in nature (e.g., estimating parameters of a regression equation). With estimation
methods, we seek an approximately right answer that is adequate for our purpose – thus the
concept that we are merely looking for an order of magnitude result, or one that we could do
in the limited space of an envelope. The quotations at the beginning of this section are given
here to represent the spectrum of the exact versus approximate methods being contrasted.

Estimation methods are sometimes referred to as educated guessing or opportunistic problem
solving. As you will see, the intent is to create educated guesses that do not sound like guesses.
The references at the end of this chapter from Koomey, Harte (both focused on
environmental issues), Weinstein and Adam, and Mahajan are popular book-length resources
and are highly recommended reading if you find this topic interesting.

Estimation methods succeed by using a structured approach of creating and documenting
assumptions relevant to the question rather than simply plugging known values into an
equation. In this context, you need to adjust your expectations (and those of your audience)
to reflect the fact that you are not seeking a calculated value. You may be simply trying to
correctly represent the sign and/or the order of magnitude of the result. “Getting the sign
right” sounds simple, but can still be surprisingly difficult. Approximating the order of magnitude
means generating a value where only one significant figure is needed and the “power of 10”
behind it gives a sense of how large or small it is (i.e., is the value in the millions or billions?).

If you come from a “hard science” discipline such as chemistry or physics, the thought of
generating an answer without an equation may sound like blasphemy. But recall the premise
of estimation methods – that you do not have access to, are unable to acquire, or unfamiliar
with the data and equations needed for a calculated result. We are not suggesting you need to
use estimations to find the force of gravity, the number of molecules per mole, etc. Many
students may have encountered these methods in the form of classroom exercises known as
“Fermi Problems”. Furthermore, such estimation challenges are being used more and more
frequently as on-the-spot job interview questions for those entering technical fields.

While the mainstream references mentioned above give many examples of applying estimation
methods, other references are useful for learning the underlying methods. Mosteller (1977)
lists several building block-type methods that can be used and intermixed to assist in
performing estimation. You are likely familiar with many or all of them, but may not have
considered their value in improving your estimation skills:

• Rules of thumb – Even a relative novice has various numbers and knowledge in hand
that can help to estimate other values. For example, if performing a financial analysis
it is useful to know the “rule of 72” that estimates how long an invested amount takes to
double in value. Likewise, you may know of various multipliers used in a domain to account
for waste, excess, or other issues (e.g., contingency or fudge factors). The popular
Moore’s Law for increases in integrated circuit densities over time is an example. Any
of these can be a useful contributor to a good estimation. Also realize that one person’s
rule of thumb may be another’s conversion factor.

• Proxies or similarity – Proxy values in estimation are values we know in place of one
we do not know. Of course the needed assumption is that the two values are expected
to be similar. If we are trying to estimate the total BTU of energy contained in a barrel
of diesel fuel, but only had memorized data for gasoline, we could use the BTU/gallon
of gasoline as a proxy for diesel fuel (in reality the values are quite close, as might be
expected since they are both refined petroleum products). Beyond just straight
substitution of values via proxy, we can use similarity methods to reuse datasets from
other purposes to help us. For example if we wanted to know estimates of leakage
rates for natural gas pipelines in the US, we might use available data from Canada
which has similar technologies and environmental protection policies.

• Small linear models – Even if we do not have a known equation to apply to an
estimation, we can create small linear models to help us. If we seek the total emissions
of a facility over the course of a year, we could use a small linear model (e.g., of the
form y = mx + b) that estimates such a value (y) by multiplying emissions per workday
(m) by number of work days (x). In a sense we are creating shortcut equations for our
needs.

Of course, these small linear models could be even more complicated, for example by
having the output of one equation feed into another. In the example above, we could
have a separate linear model to first estimate emissions per day (perhaps by multiplying
fuel use by some factor). Another way of using such models is to incorporate growth
rates, e.g., by having b as some guess of a value in a previous year, and mx the product
of an estimated growth rate and the number of years since then.

• Factoring – Factoring is similar to the small linear models mentioned above, except
in purely multiplicative terms. Factoring seeks to mimic a chain of unitized conversions
(e.g., in writing out all of the unitized numerators and denominators for converting
from days in a year to seconds in a year, which looks similar to Equation 2-1). As
above, the goal here is to estimate the individual terms and then multiply them together
to get the right value with the right units. The factors in the equation used may be
comprised of constants, probabilities, or separately modeled values.

• Bounding – Upper and lower bounds were discussed in the context of creating ranges
for analysis purposes, but can also be used in estimations. Here, we can use bounds to
help set the appropriate order of magnitude for a portion of the analysis and then use
some sort of scaling or adjustment factor to generate a reasonable answer. For example
if we were trying to estimate how much electricity we could generate via solar PV
panels, using the entire land mass of the world would give us an upper bound of
production. We could then scale down such a number by a guess at the fraction of
land that is highly urbanized or otherwise not fit for installation.

• Segmentation or decomposition – In this type of analysis, we break a single part
into multiple distinct subparts, separately estimate a value for each subpart, and
then report a total. If we were trying to estimate fossil-based carbon
dioxide emissions for the US, we could estimate carbon dioxide emissions separately
for fossil fueled power plants, transportation, and other industries. Each of these
subparts may require its own unique estimation method (e.g., a guess at kg of CO2 per
kWh, per vehicle mile traveled, etc.) that are added together to yield the original
unknown total emissions of CO2.

• Triangulation – Using triangulation means that we experiment in parallel with
multiple methods to estimate the same value, and then assess whether to use one of
the resulting values or to generate an average or other combination of them.
Triangulation is especially useful when you are quite uncertain of what you are
estimating, or when the methods you are otherwise choosing have many guesses in
them. You can then control whether to be satisfied with one of your results, or to use
a range. Of course if your various parallel estimates are quite similar you could just
choose a consensus value.
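These building blocks are easy to combine in a few lines of code. The sketch below (illustrative Python) uses factoring to estimate annual facility emissions, in the spirit of the small linear model described above; every constant is an assumed, invented value of the kind you would document and justify:

```python
# Factoring: a chain of assumed per-unit factors multiplied together.
fuel_liters_per_day = 500    # diesel burned per workday (assumed)
kg_co2_per_liter = 2.7       # rough diesel CO2 emissions factor (assumed)
workdays_per_year = 250      # ~5 days/week, 50 weeks/year (assumed)

annual_kg_co2 = fuel_liters_per_day * kg_co2_per_liter * workdays_per_year
print(f"{annual_kg_co2:,.0f} kg CO2 per year")  # a few hundred thousand kg
```

Writing each factor with its units in a comment mimics the chain of unitized conversions that factoring is meant to reproduce, and makes each guess easy to revise.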

While Mosteller summarized these specific building blocks, you should not feel limited by
them. Various other kinds of mathematical functions, convolutions, and principles could be
brought to bear to aid in your estimation efforts. Beyond these building blocks, you should try
to create ranges (since you are estimating unknown quantities) by assuming ranges of constants
in your methods or by using ranges created from triangulation. Do not assume that you can
never “look up” a value needed within the scope of your estimation. There may be some
underlying factor that could greatly help you find the unknown value you seek, such as the
population of a country, the total quantity of energy used, etc. You can use these to help you
reach your goal, but be sure to cite your sources for them. It might be useful to avoid using
these reference source values while you are first learning how to do estimation, and then
incorporate them when you are more experienced.

As expressed by several of the building block descriptions, a key part of good estimations is
using a “divide and conquer” method. This means you recursively decompose a high-level
unknown value as an expression of multiple unknown values and estimate each of them
separately. A final recommendation is to be creative and consider “outside
the box” approaches that leverage personal knowledge or experience. That may mean using
special rules of thumb or values that you already know, or attempting methods that you have
good experience in already. Now that we have reviewed the building blocks, Example 2-2
shows how to apply them in order to create a simple estimate.

Example 2-2: Estimating US petroleum consumption per day for transportation

Question: Given that the total miles driven for all vehicles in the US is about 3 trillion miles
per year, how many gallons of petroleum are used per day in the US for transportation?

Answer: If we assume an average fuel economy figure of about 20 miles per gallon we can
estimate that 150 billion gallons (3 trillion miles / 20 miles per gallon) of fuel are consumed per
year. That is about 400 million gallons per day.
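The arithmetic behind this estimate can be scripted in a few lines (illustrative Python, using only the figures given in the example):

```python
miles_per_year = 3e12     # total US vehicle miles traveled (given)
miles_per_gallon = 20     # assumed average fuel economy

gallons_per_year = miles_per_year / miles_per_gallon  # 150 billion gallons
gallons_per_day = gallons_per_year / 365
print(f"{gallons_per_day:.2e} gallons per day")  # -> 4.11e+08
```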


You might also develop estimations to serve a specific purpose of explaining a result to be
presented to a general audience. In these cases, you might want to find a useful visual or mental
reference that the audience has, and place a result in that context. Example 2-3 shows how
you might explain a concentration of 1 ppb (1 part per billion).

Example 2-3: Envisioning a one part per billion concentration

Question: How many golf balls would it take to encircle the Earth?

Answer: Assume that the diameter of a golf ball is approximately 1.5 inches, and that the
circumference of the Earth is about 25,000 miles (roughly 10x the distance to travel coast to coast
in the United States). We can convert 25,000 miles to 1.6 x 10^9 inches. Thus there would be 1.6 x
10^9 inches / 1.5 inches, or ~1 billion golf balls encircling the Earth.

Thus, if trying to explain the magnitude of a 1 part per billion (ppb) concentration, picture
one red golf ball among the 1 billion white balls lined up along the equator!
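The golf ball arithmetic is easy to reproduce (illustrative Python; the 1.5 inch diameter and 25,000 mile circumference are the rough figures from the example):

```python
earth_circumference_miles = 25_000
inches_per_mile = 5280 * 12      # 63,360 inches per mile
ball_diameter_inches = 1.5

circumference_inches = earth_circumference_miles * inches_per_mile  # ~1.6e9
balls = circumference_inches / ball_diameter_inches
print(f"{balls:.2e} golf balls")  # -> 1.06e+09, about 1 billion
```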

Acknowledgment to the “Guesstimation” book (Weinstein and Adam) for motivating this example.

Attributes of Good Assumptions


One of the key benefits of becoming proficient in estimation is that your skills in documenting
the assumptions of your methods will improve. As application of estimation methods requires
you to make explicit assumptions about the process used to arrive at your answer, it is worth
discussing the attributes of good assumptions. You may have the impression that making
assumptions is a bad thing. However, most research has at its core a structured set of
assumptions that serve to refine and direct the underlying method. Your assumptions may
refer specifically to the answer you are trying to find, as well as the measurement technologies
used, the method, or the analysis. You might think of your assumptions as setting the “ground
rules” or listing the relevant information that is believed to be true. You should make and
write assumptions with the following attributes.

1. Clarify and Simplify - First, realize that the whole point of making an assumption is
to help clarify the analysis (or at least to rule out special cases or complications).
Assumptions ideally also serve to refine and simplify your analysis. It is not useful to
have an assumption that makes things harder either for your analysis or for the
audience to follow your process. For example, if you were trying to estimate the
number of power plants in the US, you might first assume that you are only
considering power plants greater than 50 MW in capacity. Or you might assume that
you are only considering facilities that generate and sell electricity (which would ignore
power plants used by companies to make their own power). By making these
assumptions, you are ruling out a potentially significant number of facilities (leading to
an undercount of the actual), but you have laid out this fact explicitly at the beginning
as opposed to doing it without mention.

It is possible that an assumption may be required in order to make any estimate at all.
For example, you might need to assume that you are only estimating fossil-based
power plants, because you have no idea of the capacities, scale, or processes used in
making renewable electricity.

2. Correct, credible and feasible - If it is not obviously true (i.e., you are not stating
something that is a well known fact), your audience should read an assumption and
feel that it is valid - even if hard to believe or agree with. For example, you should not
assume a conversion factor inconsistent with reality, such as there being only four days
in a week or 20 hours in a day.

3. Not a shortcut - While assumptions help to narrow down and refine the space in
which you are seeking an answer, they should not serve to merely carve out an overly
simple path towards a trivial solution. Your audience should not be left with the
impression that you ran out of time or interest in finding the answer and that you
substituted a good analysis with a convenient analysis. For example, you might assume
that you were only counting privately owned power plants. This is a narrowing of the
boundaries of the problem, but does not sound like you are purposely trying to make
the problem trivial.

4. Unbiased – Your assumptions should not be tied to some unrelated factor. For
example, in estimating the number of power plants you do not want to rely on a
geospatial representation associated with the number of facilities that make ethanol,
which are highly concentrated in areas where crops like corn grow.

Beyond listing them, it is good practice to explicitly write a justification for your assumptions.
In the power plant example above, the justification for why you will only count relatively large
(> 50 MW) facilities might be “because you believe that the number of plants with smaller
capacities is minimal given the demands of the power grid”. Since you’re looking for an order
of magnitude estimate, neglecting part of the solution space should have no practical effect.
In the case of assuming only privately owned facilities, the justification might simply specify
that you are not estimating all plants, just those that are privately owned. In Example 2-2, the
20 miles per gallon assumed fuel economy is appropriate for passenger vehicles, but not for
the trucks and buses that are also pervasive on the roads. In that example, it would be useful to state and
justify an assumption explicitly, such as “Assuming that most of the miles traveled are in
passenger vehicles, which have a fuel economy of 20 miles per gallon, …”

Writing out the thought process behind your assumptions helps to develop your professional
writing style, and it helps your audience to more comfortably follow and appreciate the analysis
you have done. Furthermore, by becoming proficient at writing up the assumptions and
process used to support back of the envelope calculations, you become generally proficient at
documenting your methods. Hopefully you will leverage these writing skills in other tasks.

If instead you do not state all of your assumptions, readers are left to figure them out
themselves, or to create their own assumptions based on your incomplete documentation.
Needless to say, either of those options raises the possibility that they will make bad
assumptions about your work.

Validating your Estimates


When you have to estimate a quantity, it is important that you attempt to ensure that the value
you have estimated makes sense (see the discussion earlier in this chapter about reasonableness
tests). Even though you have estimated a quantity that you were unable to find a good citation
for originally, you should still be able to validate it by comparing it to other similar values.

As a learning experience, you might try to estimate a quantity whose true value is known and
available, but that you do not already know yourself (e.g., the number of power plants in the
US, or a value that you could look up in a statistical abstract). Doing so
helps you to hone your skills with little risk, meaning that you can try various methods and
directly observe which assumptions help you arrive at values closest to the ‘real answer’ and
track the percentage error in each of your attempts before looking at the real answer. The
goals in doing so are explicitly to learn from doing many estimates of various quantities (not
just 5 attempts at the same unknown value) and to increasingly understand why your estimates
differ from the real answers. You may not be making good assumptions, or you might be
systematically always guessing too high or too low. It is not hard to become proficient after
you have tried to estimate 5-10 different values on your own. When doing so, try to apply all
of the building block methods proposed by Mosteller. Example 2-4 shows an example of a
validation of an estimate found earlier in the chapter.


Example 2-4: Validating Result found in Example 2-2

In Example 2-2, we quickly estimated that the transportation sector consumes 400 million
gallons per day of petroleum.

The US Energy Information Administration reports that about 7 billion barrels of crude oil and
other petroleum products were consumed in 2011. About 1 billion barrels equivalent was for
natural gas liquids not generally used in transportation. That means about 17 million barrels per
day (about 700 million gallons per day at 42 gallons per barrel) was consumed. That is
nearly twice as high as our estimate in Example 2-2, but still in the same order of magnitude.

Let’s think more about the reasons why we were off by a factor of two. First off, we attempted an
estimate in one paragraph with two assumptions. The share of passenger vehicles in total miles
driven is not 100%, and heavy trucks represent 10% of the miles traveled and about one-fourth
of fuel consumed (because their fuel economies are approximately 5 mpg, not 20). Considering
these deviations, our original estimate, while simplistic, was useful.
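The validation arithmetic can also be scripted. The sketch below (illustrative Python) uses the exact 42-gallon definition of a US oil barrel, so its daily figure will differ slightly from any estimate made with a rounded per-barrel conversion:

```python
barrels_per_year = 7e9 - 1e9  # total petroleum minus natural gas liquids (EIA, 2011)
gallons_per_barrel = 42       # a US oil barrel is defined as 42 gallons
quick_estimate = 4e8          # gallons/day from Example 2-2

gallons_per_day = barrels_per_year / 365 * gallons_per_barrel
print(f"{gallons_per_day:.1e} gallons/day, "
      f"{gallons_per_day / quick_estimate:.1f}x the quick estimate")
```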

Sources: US DOE, EIA, Annual Petroleum and Other Liquids Consumption Data
http://www.eia.gov/dnav/pet/pet_cons_psup_dc_nus_mbbl_a.htm

US Department of Transportation, Highway Statistics 2011, Table VM-1.

Beyond validation of your own estimates, you might also want to do a reasonableness test on
someone else’s value. You will often find numbers presented in newspapers or magazines as
well as scholarly journals that you are curious about or that fail a quick sniff test. You can use the
same estimation methods to validate those numbers. Just because something is published does
not necessarily mean it has been extensively error-checked. Mistakes happen all the time and
errata are sometimes (but not always) published to acknowledge them. Example 2-5 shows a
validation of values published in mainstream media pertaining to EPA’s proposed 2010 smog
standard.


Example 2-5: Validating a comparative metric used in a policy discussion

Question: Validate the number of tennis balls in the following CBS News excerpt (2010)
pertaining to the details of EPA’s proposed 2010 smog standard.

“The EPA proposal presents a range for the allowable concentration of ground-level ozone, the
main ingredient in smog, from 60 parts per billion to 70 parts per billion. That’s equivalent to
60 to 70 tennis balls in an Olympic-sized swimming pool full of a billion tennis balls.”
Answer: Suppose your sniff test fails because you realize a billion tennis balls is a very large
number of balls for this pool. A back of the envelope estimate suggests the approximate size of
an Olympic pool is 50 m x 25 m x 2 m = 2,500 cubic meters. Similarly, assume each tennis ball occupies
a cube 2.5 inches (70 mm, or 0.07 m) on a side, giving a volume of 0.00034 m^3. Such a pool
holds only about 7 million tennis balls, more than two orders of magnitude less than the 1 billion
suggested in the excerpt. Of course, we could further refine our assumptions, for example by
making the pool uniformly deeper, or by noting that adjacent tennis balls fill in some of the
voids of their cubes when stacked, but none of these would fully account for the more than
hundredfold difference.

You cannot put a billion tennis balls in an Olympic-sized pool, thus the intended reference point
for the lay audience was erroneous. It is likely an informal reference from the original EPA Fact
Sheet was copied badly in the news article (e.g., “60-70 balls in a pool full of balls”).
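The pool arithmetic in this example is a two-line calculation (illustrative Python, using the same rough dimensions):

```python
pool_volume_m3 = 50 * 25 * 2   # Olympic pool: 2,500 cubic meters
ball_cube_m3 = 0.07 ** 3       # tennis ball bounding cube, 70 mm on a side

balls_in_pool = pool_volume_m3 / ball_cube_m3
print(f"{balls_in_pool:.1e} tennis balls")  # ~7e6, far short of 1 billion
```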

Thanks to Costa Samaras of Carnegie Mellon University for this example.

Now that we have built important general foundations for working with and manipulating
data, we turn our attention to several concepts more specific to LCA.

Building Quantitative Models


Given all of the principles above, you should now be prepared to build the types of models
needed for robust life cycle thinking. These models have inputs and outputs. The inputs are
the various parameters, variables, assumptions, etc., and the output is the result of the equation
or manipulation performed on the inputs. In a typical model, we have a single set of input
values and a single output value. If we have ranges, we might have various sets of inputs and
multiple output values. Beyond these typical models there are other types of models we might
choose to build that are less straightforward.


A breakeven analysis solves for the input value that produces a prescribed effect on a model's
output. A classic example (and the origin of the name) is a profit or loss model whose default
inputs suggest that profits will be negative (i.e., the result is less than $0). A relevant
breakeven analysis would find the input value (e.g., the price of electricity or the number of
units sold) needed to reach a $0 or positive result: what you need to 'break even', or make a
profit. This is simply back-solving for the input required to meet a specified condition on the
result. Breakeven analyses need not involve monetary values, and need not be set against zero.
Using the example of Equation 2-1, you could back-solve for the transit time for a towboat moving
at a speed of 5 km/hr. While the math is generally easy for such analyses, common software like
Microsoft Excel has a built-in tool (Goal Seek) to automate them. Goal Seek is quite general in
that it can solve for a breakeven value across a fairly complicated spreadsheet of calculations.
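The back-solving that Goal Seek automates can be sketched in a few lines of code. The snippet below uses simple bisection to find the input at which a model's output crosses a target value; the profit model is purely hypothetical and not from the text:

```python
def breakeven(model, target, lo, hi, tol=1e-6):
    """Find the input x in [lo, hi] where model(x) equals target, by bisection.

    Assumes model is monotonic on [lo, hi] and that the target is bracketed.
    """
    f_lo = model(lo) - target
    while hi - lo > tol:
        mid = (lo + hi) / 2
        f_mid = model(mid) - target
        if (f_lo < 0) == (f_mid < 0):
            lo, f_lo = mid, f_mid   # breakeven lies in the upper half
        else:
            hi = mid                # breakeven lies in the lower half
    return (lo + hi) / 2

def profit(units):
    # hypothetical model: $1.50 of profit per unit sold against a $150 fixed cost
    return 1.50 * units - 150

units_needed = breakeven(profit, target=0, lo=0, hi=1000)  # about 100 units to break even
```

Like Goal Seek, this treats the model as a black box, so the same function works whether the model is one equation or a chain of intermediate calculations.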

The final quantitative skill in this chapter assesses how robust results are to changes in the
parameters or inputs of your model. In a sensitivity analysis, you vary model inputs
individually and assess the degree to which changing each input has a meaningful effect on the
results (it is called a sensitivity analysis because you are seeing how 'sensitive' the output is
to changes in the inputs). By meaningful, you are assessing, for example, whether the sign of the
result changes from positive to negative, or whether its magnitude changes significantly, e.g., by
an order of magnitude. If small changes in input values have a big effect on the output, you would
say that your output is sensitive; if even large changes in the inputs have only a modest effect,
the output is not sensitive. Wherever such effects occur across the range of inputs tested, your
qualitative analysis should document those outcomes. Note that a sensitivity analysis changes
each of your inputs independently (i.e., changing one while holding all other inputs constant).
You perform a sensitivity analysis on all inputs separately and report when you identify that the
output is sensitive to a given input. Again referring to the towboat example (Equation 2-1), we
could model how the speed varies as the time in transit varies over a range of 20 minutes to more
than 4 hours. Figure 2-4 shows the result of entering values for transit time in increments of 20
minutes into Equation 2-1. It suggests that the speed is not very sensitive at large transit
times, but changes significantly at small transit times. We will show more examples of breakeven
and sensitivity analyses in Chapter 3.

Figure 2-4: Sensitivity of Towboat Speed to Transit Time
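A one-at-a-time sensitivity analysis like the one behind Figure 2-4 can be sketched as follows. Equation 2-1 is not reproduced here, so the snippet assumes the towboat's speed is simply a fixed trip distance divided by transit time; the 100 km distance is an illustrative placeholder, not a value from the text:

```python
# One-at-a-time sensitivity: vary transit time while holding everything else constant.
ASSUMED_DISTANCE_KM = 100  # illustrative assumption, not from Equation 2-1

def speed_km_per_hr(transit_time_hr):
    return ASSUMED_DISTANCE_KM / transit_time_hr

# Transit times from 20 minutes to a bit over 4 hours, in 20-minute steps
times_hr = [n / 3.0 for n in range(1, 14)]
for t in times_hr:
    print(f"{t:5.2f} hr -> {speed_km_per_hr(t):7.1f} km/hr")
# At short transit times small changes swing the speed widely; at long
# transit times the speed barely moves (the flat tail of Figure 2-4).
```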


A Three-Step Method for Quantitative and Qualitative Assessment


We conclude the chapter with suggestions on how to qualitatively and quantitatively answer
questions. LCA is about life cycle assessment. While we have not yet demonstrated the method
itself, it is important to develop assessment skills. If you are doing quantitative work (as you
will need to do to successfully complete an LCA), a general guideline is that you should think
of each task as having three parts:

(1) A description of the method used to complete the task,

(2) The result or output (quantitative or qualitative) of the task, and

(3) A critical assessment, validation, or thought related to the result.

The amount of time and text you devote to documenting each of these three steps varies with the
expectations and complexity of the task (and perhaps with constraints on the size of a study).

In step one, you should describe any assumptions, data sources found, equations needed, or
other information required to answer the question. In step two, you state the result, including
units if necessary. In step three, you somehow comment on, validate, or otherwise reflect on
the answer you found. This is an important step because it allows you to both check your work
(see the example about unit conversions above) and to convince the reader that you have not
only done good work but have also spent some time thinking about the implication of the
result. For example, a simple unit conversion might be documented with the three-step
method as follows:

“Inputs of plastic were converted from pounds to kilograms (2.2 lbs. per 1 kg), yielding 100 kg
of inputs. This value represents 20% of the mass flows into the system.”

Each of the three expected steps is documented in those 2 sentences: the method (a basic unit
conversion), the result (100 kg), and an assessment (20% of the total flows). If this were part
of an assignment, you could envision the instructor deciding on how to give credit for each
part of the question, e.g., 3 points for the method, 2 points for the result, and 2 points for the
assessment. Such a rubric would emphasize the necessity of doing each part, and could also
formalize the expectations of working in this manner and forming strong model building
habits. For many types of problem solving—especially those related to LCA, where many
answers are possible depending on how you go about modeling the problem—the emphasis
may be on parts 1 and 3, relatively de-emphasizing the result found in part 2. In other domains,
such as in a mathematics course, the result (part 2) may be the only significant part in terms
of how you are assessed. Regardless, you probably still used a method (and may have briefly
shown it by writing an equation and applying it), and hopefully tried to quickly check your
result to ensure it passed a reasonableness test, even if you did not in detail write about each
of those steps.

A way of remembering the importance of this three-step process is that your answer should
never simply be a graph or a number. There is always a need to discuss the method you used
to create it, as well as some reflection on the value. Regardless of the grading implications and
distributions, hopefully you can see how this three-step process always exists – it is just a
matter of translating the question or task presented to determine how much effort to make in
each part, and how much documentation to provide as an answer. You will find that
performing LCA amounts to assembling many small building-block calculations and mini-models
into an overall model. If you have not documented how you came up with these building-block
results, it will be difficult for a reader to follow your work and to see how the overall
result was achieved.

Chapter Summary
In LCA, any study will be composed of a collection of many of the techniques above. You’ll
be piecing together emissions factors and small, assumption-based estimates, generating new
estimates, and summarizing your results.

People entering a field of science or engineering frequently cite being more comfortable with
numbers or equations than with ‘writing’ as one reason for their choice of career.
Perhaps unfortunately, communicating your method, process, and results via writing is an
especially important skill in conducting life cycle assessment. However, the LCA framework
can provide a strong foundation for technical practitioners to organize their writing and
practice their communication skills.

References for this Chapter


CBS News, “Reversing Bush, EPA Toughens Smog Rules”, via Internet,
http://www.cbsnews.com/news/reversing-bush-epa-toughens-smog-rules/, last accessed
July 20, 2014.

Harte, John, Consider a Spherical Cow: A Course in Environmental Problem Solving,
University Science Books, 1988.

Koomey, Jonathan, Turning Numbers into Knowledge, Analytics Press, 2008.

Mahajan, Sanjoy, Street-Fighting Mathematics: The Art of Educated Guessing and
Opportunistic Problem Solving, MIT Press, 2010.

Mosteller, Frederick, “Assessing Unknown Numbers: Order of Magnitude Estimation”, in
Statistics and Public Policy, William Fairley and Frederick Mosteller, editors,
Addison-Wesley, 1977.

NOAA 2012, Surveying: Accuracy vs. Precision, via Internet,
http://celebrating200years.noaa.gov/magazine/tct/tct_side1.html

U.S. Census Bureau, Statistical Abstract of the United States: 2012 (131st Edition),
Washington, DC, 2011; available at http://www.census.gov/compendia/statab/

Weinstein, Lawrence, and Adams, John A., Guesstimation: Solving the World’s Problems on
the Back of a Cocktail Napkin, Princeton University Press, 2008.


End of Chapter Questions

Objective 1: Apply appropriate skills for qualitative and quantitative analysis.

1. Find the fraction of the population that lives in cities versus rural areas in the US, or in
your home state. Validate your findings as possible.

Objective 2: Document values and data sources in support of research methods.

2. Find and reference three primary sources for the amount of energy used in residences in
the United States. Validate your findings as possible.

Objective 3: Improve your ability to perform back of the envelope estimation methods.

3. Estimate the number of hairs on your head.

4. Estimate the number of swimming pools in Los Angeles.

5. Estimate the total weight of the population in your home state.

Objective 4: Approach any quantitative question by means of describing the method, providing the
answer, and describing what is relevant or interesting about the answer.

6. In terms of area, how much pizza is eaten by people in your country in one day? Give
answer in terms of the number of football fields that would be covered.

7. How many gallons of beer are consumed each day in your country? Give answer in terms
of how high it would fill a professional football stadium.

8. How many refrigerators are bought by people in your country each day? If stacked end-
to-end, how long would it take to walk around them?


Chapter 3: Life Cycle Cost Analysis

In this chapter, we begin our discussion of life cycle analytical methods with an overview of the
long-standing domain of life cycle cost analysis (LCCA). It is assumed that the reader already
understands the concepts of costs and benefits; if not, a good resource is our companion e-book
on civil systems planning (Hendrickson and Matthews 2013). The methods and concepts from this
domain form the core of the energy and environmental life cycle assessment that we will introduce
in Chapter 4. We describe the ideas of first cost and recurring costs, as well as methods to
express all of the costs over the life of a product or project in common monetary units.

Learning Objectives for the Chapter


At the end of this chapter, you should be able to:

1. Describe the types of costs that are included in a life cycle cost analysis.

2. Assess the difference between one-time (first) costs and recurring costs.

3. Select a product or project amongst alternatives based on life cycle cost.

4. Convert current and future monetary values into common monetary units.

Life Cycle Cost Analysis in the Engineering Domain


Material, labor, and other input costs have been critical in the analysis of engineered systems
for centuries. Studies of costs are important for understanding and making decisions about
product design, since a successful design must ultimately turn a profit. Separate from cost is
the concept of benefit, which is the value you would receive from an activity such as
using a product. Many of the cost models used to support engineering decisions have been
relatively simple, for example, summing all input costs and ensuring they are less than the
funds budgeted for a project.

Engineers have been estimating the total cost of operating a product or system over its life,
also known as the life cycle cost, for decades. Life cycle cost analysis (LCCA) has been
used to estimate and track lifetime costs of bridges, highways, other structures, and
manufactured goods because important and costly decisions need to be made for efficient
management of social resources used by these structures and goods. Early design and
construction decisions are often affected by future maintenance costs. Given this history,
LCCA has most often been used for decision support on fairly large-scale projects. It has also,
however, been applied to individual products. LCCA is often performed as a planning tool
early in a project but is also done during a project’s lifetime. We focus on LCCA because its
economic focus across the various life cycle phases is very similar to the frameworks we will
need to build for our energy and environmental life cycle models. If you can understand the
framework, and follow the quantitative inputs and models used, you will better be able to
understand LCA.

The project that is already being undertaken or is already in place is typically referred to as the
status quo. Key to the foundation of LCCA is a set of alternative designs (or alternatives)
to be considered, which may vary significantly or only slightly from one another. These
alternatives may have been created specifically in an attempt to reduce costs, or may simply be
alternatives deviating from an existing project design along non-cost criteria. The decision rule
for life cycle costing is to choose the lowest cost option.

With respect to the various costs that may be incurred across the life cycle, first (or initial)
cost refers to costs incurred at the beginning of a project. First cost generally refers only to
the expense of constructing or manufacturing as opposed to any overhead costs associated
with designing a product or project – it is the ‘shovel in the ground’ or factory costs. While
design and other overhead costs may be routinely ignored in cost analyses, and in LCA, they
are real costs that are within the life cycle. Future costs refer to costs incurred after
construction/manufacture is complete, typically months to years later. Recurring
costs are those that recur with some frequency (e.g., annually) during the life of a project.
In terms of accounting and organization, these costs are often built into a timeline, with first
costs represented as occurring in ‘year 0’ and future/recurring costs mapped to subsequent
years in the future. The sum of all of these costs is the total cost. The status quo will often
involve using investments that have already been made. The original costs of these investments
are termed sunk costs and should not be included in estimation of the new life cycle cost of
alternatives from the present period.

Beyond civil engineering, LCCA also appears in concepts such as whole-life cost or total
cost of ownership. Total cost of ownership (TCO) is used in the information technology
industry to capture the costs of purchasing, maintaining, and housing a hardware-software
system, training its users, and keeping it current. TCO analyses have been popular for
comparisons between proprietary software and open source alternatives (e.g., Microsoft Office
vs. OpenOffice) as well as for operating systems (Mac vs. Windows). However, not all decisions
are made on the basis of the minimum TCO: despite many TCO studies showing lower costs, neither
Mac nor OpenOffice has substantial market share. Before discussing LCCA in the context of some
fairly complex settings, let us first introduce a very simple and straightforward example that
we will revisit throughout the book.

E-resource: In the Chapter 3 folder is a spreadsheet showing calculations for most of the
Example problems in this chapter.


Example 3-1: Consider a family that drinks soda. Soda is a drink consisting of carbonated water,
flavoring and (usually) sweetener. The family’s usual way of drinking it is buying 2-liter bottles
of soda from a store at a price of $1.50 each. An alternative is to make soda on demand with a
home soda machine. The machine carbonates a 1-liter bottle of water, and the user adds a small
amount of separately purchased flavor syrup to produce a 1-liter bottle of flavored soda. An
advantage of a home soda machine is that it can be easily stored, and use of flavor bottles removes
the need to purchase and store soda bottles (which are mostly water) in advance. Soda machines
cost $75 and come with several 1-liter empty bottles and a carbonation canister for 60 liters of
water. Flavor syrup bottles cost $5 and make 50 8-ounce servings (12 liters) of flavored soda.
Additional carbonation canisters cost $30.

Question: If the family drinks 2 liters of soda per week (104 liters, or 52 bottles, per year),
compare the costs of 2-liter soda bottles with the purchase of a soda maker and flavor bottles
over the first year.

Answer: The cost of soda from a store is $1.50 * 1 = $1.50 per week, or $78 per year. Note
that this cost excludes any cost of gasoline or time required for shopping. For the soda machine
option, we need a soda machine ($75) and sufficient flavor syrup bottles to make 104 liters of
soda (about 9 bottles or $45), and would use the entire first (free) carbonation canister and most
of a second ($30). Thus the soda machine cost for the first year is $75 + $45 + $30 = $150.
This cost also excludes any cost of water, gasoline, or time (as well as unused syrup or
carbonation). Over a one-year period, the life cycle cost or total cost of ownership for a soda
machine is almost double that of store-bought bottles. The soda machine provides additional
benefit for those who dislike routine shopping or have a high value of their time, which we noted
has not been included.
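The arithmetic in Example 3-1 is easy to encode, which also makes it simple to re-run under different assumptions. This sketch uses only the chapter's numbers and, like the example, ignores water, gasoline, and time:

```python
import math

liters_per_year = 2 * 52   # 2 liters per week

# Option 1: store-bought 2-liter bottles at $1.50 each
store_cost = (liters_per_year / 2) * 1.50               # $78 per year

# Option 2: soda machine ($75, includes one 60 L carbonation canister)
syrup_bottles = math.ceil(liters_per_year / 12)          # each $5 bottle flavors 12 L
extra_canisters = math.ceil((liters_per_year - 60) / 60)  # $30 each after the free one
machine_cost = 75 + 5 * syrup_bottles + 30 * extra_canisters   # $150 first year

print(store_cost, machine_cost)   # 78.0 150
```

Changing `liters_per_year` immediately shows how the comparison shifts for heavier or lighter soda drinkers.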

We can use the methods of Chapter 2 to find breakeven values for the soda machine.

Example 3-2: Find the breakeven price of soda bottles in Example 3-1 compared to buying a
soda machine over one year, without considering discounting.

Answer: The breakeven price is the price at which the total costs of the two options are
exactly equal. Buying soda bottles is $72 per year less expensive than the machine
($150 - $78). You could either divide $72 by 52 bottles (i.e., $1.38 per bottle) and add it to
the current $1.50 price, giving about $2.88 per bottle, or explicitly solve for the price per
bottle using the equation $150 = 52 bottles * p, where p is the price per bottle. Either way, we
find that, at about $2.88 per bottle, purchased soda will cost the same as homemade soda over a
one-year period.

Beyond breakeven values, another metric to assess the feasibility of a project is to consider its
payback period, which suggests the time until an investment pays for itself in terms of future
savings. A simple formula for payback period (PP) is defined in Equation 3-1:


PP = First Cost Investment / Annual Recurring Cost Savings (3-1)

Example 3-3: Assuming the cost of carbonation and syrup for making soda at home is instead
$73 per year, find the payback period for the soda machine in Example 3-1.

Answer: From Example 3-1, the first cost investment of the machine is $75. Since the cost
of 2-liter bottles is $78 per year and the home soda costs are $73 per year, the annual savings
are ($78 - $73) = $5, so the payback period is $75/$5 = 15 years for the soda machine to be worthwhile. In
reality there are other costs to making soda at home that were not considered. However,
convenience or other factors may nonetheless make the soda machine an attractive investment.
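Equation 3-1 and Example 3-3 reduce to a one-line calculation; the function below is a sketch of the simple (undiscounted) payback period:

```python
def simple_payback_years(first_cost, annual_savings):
    """Simple payback period (Equation 3-1): years to recoup a first cost."""
    if annual_savings <= 0:
        raise ValueError("investment never pays back without positive savings")
    return first_cost / annual_savings

# Example 3-3: $75 machine, ($78 - $73) = $5 per year in savings
print(simple_payback_years(75, 78 - 73))   # 15.0 years
```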

Some businesses use strict constraints on payback periods to help make financial decisions,
such as requiring projects to pay for themselves within 2 years of making the investment. Such
constraints allow an analyst to solve for necessary savings for an investment in order to know
whether it is likely to be approved by management. LCCA and similar methods are increasingly
being used to consider the value of energy efficiency improvements in the residential and
commercial building sectors, such as by improving insulation.

Example 3-4: Consider a building owner interested in reducing the heating bill by paying a
contractor to install blown-in insulation in exterior walls. If the contractor’s bid is for $10,000,
and the company has a 2-year payback rule for investments, how much savings is needed per
month for the project to be approved?

Answer: Using Equation 3-1, the annual cost savings would need to be $5,000. Over 2
years, the monthly savings would need to be about $420.

Discounting Future Values to the Present


While a full discussion of the need to convert future values to present values as a common
monetary unit is beyond the scope of this chapter, this activity is shown to ensure that the
time value of money is represented. There are many resources available to better understand
the theory of such adjustments, including Au and Au’s Engineering Economics book (1992).

In short, though, just like other values that are increased or decreased over time due to growth
or decay rates, financial values can and should be adjusted if some values are in current
(today’s) dollars and values in the future are given in then-current values. If that is the case,
there is a simple method to adjust these values, as shown in Equation 3-2:


F = P (1 + r)^n  ⇔  P = F (1 + r)^(-n) (3-2)

where P represents a value in present (today’s) dollars, F represents a value in future dollars, r
is the percentage rate used to discount from future to present dollars, and n is the number of
years between present and future. Equation 3-2 can be used to convert any future value into
a present value. Equation 3-2 is usually used with constant dollars, which have been adjusted
for the effects of inflation (not shown in this chapter). The multipliers (1 + r)^n and
(1 + r)^(-n) in Equation 3-2 are referred to as the future and present discounting factors,
respectively. Thus, if we plug r = 5%, n = 1, and a future value (F) of $100 into Equation 3-2,
the present discounting factor is 1/1.05 = 0.952, which means the future value is discounted by
about 4.8% (to $95.24).
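Equation 3-2 translates directly into code. This short sketch reproduces the r = 5%, n = 1 illustration above:

```python
def discount_factor(r, n):
    """Present discounting factor (1 + r)^-n from Equation 3-2."""
    return (1 + r) ** -n

def present_value(future_value, r, n):
    """Discount a future value to the present: P = F (1 + r)^-n."""
    return future_value * discount_factor(r, n)

print(round(discount_factor(0.05, 1), 3))      # 0.952
print(round(present_value(100, 0.05, 1), 2))   # 95.24
```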

Example 3-5: What is the present total cost over 5 years of soda made at home using the costs
from Example 3-1 (ignoring unused syrup and carbonation) at a discount rate of 5%?

Answer: The table below summarizes the approximated costs for each of the five years.

Category       Year 0   Year 1   Year 2   Year 3   Year 4   Year 5
Soda Machine   $75      0        0        0        0        0
Flavor         -        $45      $45      $45      $45      $45
Carbonators    -        $30      $60      $60      $60      $60
Total          $75      $75      $105     $105     $105     $105

The soda machine is bought at the beginning of Year 1 (a.k.a. Year 0) and costs $75. It does not
need to be discounted as it is already in present dollars. Flavor bottles cost $45 in each of
Years 1 through 5. The first carbonator is free, but the second costs $30 in Year 1. Two are
needed ($60) in every subsequent year. Thus, with each yearly term rounded to the nearest dollar
(using present discounting factors of 0.952, 0.907, 0.864, 0.823, and 0.784), the present cost is:

Present Value of Cost = $75 + $75/1.05 + $105/1.05^2 + $105/1.05^3 + $105/1.05^4 + $105/1.05^5

= $75 + $71 + $95 + $91 + $86 + $82 = about $500
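The present-value sum in Example 3-5 can be verified with a short loop over the cost timeline, using the approximated yearly totals from the table:

```python
costs = [75, 75, 105, 105, 105, 105]   # Year 0 through Year 5 totals
r = 0.05

# Discount each year's cost back to the present and sum (Equation 3-2)
present_cost = sum(c / (1 + r) ** t for t, c in enumerate(costs))
print(round(present_cost))   # 501, i.e., "about $500"
```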

Now that we have a slightly more rigorous way of dealing with costs over time, let us consider
the advanced Example 3-6 comparing life cycle costs of two different new cars.


Example 3-6: Consider a new car purchase decision for someone deciding between a small
sedan or a small hybrid electric vehicle. A key part of such a comparison is to assume that the
buyer is considering otherwise equivalent vehicles in terms of features, size, and specifications.
Given this constraint we compare a 2013 Toyota Corolla with a 2013 Toyota Prius. While the
engines are different, the seating capacity and other functional characteristics are quite similar.

Question: What are the total costs over 5 years for the two cars assuming 15,000 miles driven
per year?

Answer: Edmunds.com (2013) has a “True Cost to Own” calculator tool that makes this
comparison trivial. Note that the site assumes the equivalent of financing the car, and the values
are not discounted. Selecting just those vehicles and entering a zip code gives values that should
look approximately like the values listed below. Even driving 15,000 miles per year, the Prius
would be about $5,000 more expensive over five years than just buying a small, fuel-efficient
Corolla.

2013 Toyota Corolla


Year 1 Year 2 Year 3 Year 4 Year 5 Total
Depreciation $2,766 $1,558 $1,370 $1,215 $1,090 $7,999
Taxes & Fees $1,075 $36 $36 $36 $36 $1,219
Financing $572 $455 $333 $205 $74 $1,639
Fuel $1,892 $1,949 $2,007 $2,068 $2,130 $10,046
Insurance $2,288 $2,368 $2,451 $2,537 $2,626 $12,270
Maintenance $39 $410 $361 $798 $1,041 $2,649
Repairs $0 $0 $89 $215 $314 $618
Tax Credit $0 $0
True Cost to Own $8,632 $6,776 $6,647 $7,074 $7,311 $36,440

2013 Toyota Prius


Year 1 Year 2 Year 3 Year 4 Year 5 Total
Depreciation $7,035 $2,820 $2,481 $2,200 $1,973 $16,509
Taxes & Fees $1,919 $36 $36 $36 $36 $2,063
Financing $1,045 $830 $608 $375 $134 $2,992
Fuel $1,098 $1,131 $1,165 $1,200 $1,236 $5,830
Insurance $1,920 $1,987 $2,057 $2,129 $2,203 $10,296
Maintenance $39 $423 $381 $786 $1,784 $3,413
Repairs $0 $0 $89 $215 $314 $618
Tax Credit $0 $0
True Cost to Own $13,056 $7,227 $6,817 $6,941 $7,680 $41,721


The additional consideration of the time value of money suggests that we should have a more
robust method to consider the payback period of investment, since future savings amounts
have incrementally less value to us. The discounted payback period (DPP) is found by
comparing the first cost of investment against the cumulative discounted cash flows of a
project. Note that the measure expressed in Equation 3-1 is usually referred to as a simple
payback period since it does not include the effects from the time value of money.

Example 3-7: If the insulation investment in Example 3-4 has a yearly savings of exactly
$5,000 as needed for the simple payback period rule, and the discount rate is 10%, find the
discounted payback period, assuming the savings occur at the end of each year.

Answer: In this case, the discounted value at the end of year 1 is $4,545 and the value at
the end of year 2 is $4,132 (for a cumulative sum of $8,677). Given the first cost of $10,000, the
project would not pay for itself as of the end of year 2. The discounted savings for year 3 is $3,757,
and the cumulative savings is $12,434. The table below summarizes the savings for each of the
three years.

Category                        Year 0   Year 1   Year 2   Year 3
Energy Savings                  -        $5,000   $5,000   $5,000
Discounted Energy Savings       -        $4,545   $4,132   $3,757
Cumulative Discounted Savings   -        $4,545   $8,677   $12,434

It is clear the discounted payback period is between years 2 and 3. We can approximate the exact
payback time by determining the fraction of time left needed to recoup the remaining part of the
$10,000 after year 2, or $1,323/$3,757 = 0.35 years. Thus, the overall discounted payback period
is 2.35 years. It is no surprise that the discounted payback period is higher than the simple
payback period of 2 years, given that the future savings are worth less to us.
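The year-by-year bookkeeping in Example 3-7 generalizes to a small function. This sketch accumulates discounted savings until the first cost is recovered, interpolating within the final year as the example does:

```python
def discounted_payback_years(first_cost, annual_savings, r, max_years=100):
    """Discounted payback period: the (interpolated) year when cumulative
    discounted savings first reach the first cost."""
    cumulative = 0.0
    for year in range(1, max_years + 1):
        saved = annual_savings / (1 + r) ** year   # discounted savings this year
        if cumulative + saved >= first_cost:
            # fraction of this year needed to recoup the remainder
            return (year - 1) + (first_cost - cumulative) / saved
        cumulative += saved
    return float("inf")   # never pays back within max_years

# Example 3-7: $10,000 insulation, $5,000/year savings, 10% discount rate
print(round(discounted_payback_years(10_000, 5_000, 0.10), 2))   # 2.35
```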

Life Cycle Cost Analysis for Public Projects


The examples above were all centered on life cycle costing for personal or individual decisions.
However, as introduced, LCCA is generally applied to public projects such as buildings or
infrastructure. The life cycle stages of infrastructure systems are similar to those we discussed
in Chapter 1. They also rely on resource extraction and assembly, although infrastructure is
generally constructed rather than manufactured. The use phase is occupation or use by the
public, and also involves maintenance, repair, or rehabilitation activities. The end-of-life
phase is when the infrastructure is demolished, either because it is no longer needed or because
it is being replaced.

LCCA is a useful tool to help assess how various decisions will affect cost. For example, a
particular design may be adjusted, resulting in increased initial cost, but as a means to reduce
planned maintenance costs. The design change could take the form of a planned increase in
the expected time until rehabilitation, reduction in the actual expenditure at time of
maintenance, or by changing the cost structure.

LCCA also has a fairly large scope of stakeholder costs to include, accounting for both owner
costs and user costs over the whole life cycle, as shown in Equation 3-3.

Life Cycle Costs (LCC) = Σ(t=0 to n) Owner Cost_t + Σ(t=0 to n) User Cost_t (3-3)

Owner costs are those incurred by the party responsible for the product or project, while user
costs are incurred by the stakeholders who make use of it. For example, a state or local
department of transportation may own a highway, but local citizens will be the users. The
owner costs are straightforward to consider – they are the cost of planning and building the
highway. The user costs might include the value of drivers’ time spent waiting in traffic (and
thus, we have incentive to choose options which would minimize this cost). User costs may
be quite substantial, even a multiple of or an order of magnitude higher than the owner costs.
Figure 3-1 organizes the various types of life cycle costs in rows and columns and shows example
costs for a highway project. For products purchased by private parties, owner and user costs
fall into the same category and do not need to be distinguished.

Category   First (Year 0)        Recurring (Year 1)     Recurring (Year 2)     Recurring (Year n)

Owner      Design,               Financing,             Financing,             Financing,
           Construction          Maintenance            Maintenance            Rehabilitation

User       -                     Vehicle Use, Tolls,    Vehicle Use, Tolls,    Vehicle Use, Tolls,
                                 Cost of Time Driving   Cost of Time Driving   Cost of Time Driving

Figure 3-1: Example Life Cycle Cost Table for Highway Project

LCCA focuses only on costs. While differences in costs between two alternatives may be
considered ‘benefits’, true benefit measures are not used. In the end, LCCA generally seeks to
find the least costly project alternative over the life cycle considering both owner and user
costs. But since agency decision makers are responsible for the project over the long run, they
could be biased towards selecting projects with minimum owner life cycle costs regardless of
user costs since user costs are not part of the agency budget. This stakeholder difference will
also manifest itself when we discuss LCA later in the book, as there may be limited benefit
to a company in lowering its product's environmental impact if the consumer is the
one who will benefit (e.g., if the product costs more to produce, and perhaps
reduces profits, but uses less electricity in the use phase).
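This stakeholder split can be made concrete with a small sketch (all cost values here are hypothetical, invented for illustration): an agency that minimizes only its own costs can prefer a different alternative than one that minimizes combined owner and user costs.

```python
# Hypothetical life cycle costs in $ millions; values invented for illustration.
alternatives = {
    "A": {"owner": 35.0, "user": 20.0},
    "B": {"owner": 30.0, "user": 32.0},
}

# Rank once by owner cost alone, once by combined owner + user cost
cheapest_for_owner = min(alternatives, key=lambda k: alternatives[k]["owner"])
cheapest_overall = min(alternatives, key=lambda k: sum(alternatives[k].values()))

print(cheapest_for_owner)  # B (lowest agency cost)
print(cheapest_overall)    # A (lowest total societal cost)
```

With these numbers, an agency watching only its own budget would pick Alternative B, while accounting for user costs flips the choice to Alternative A.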

Life Cycle Assessment: Quantitative Approaches for Decisions That Matter – lcatextbook.com
72 Chapter 3: Life Cycle Cost Analysis

Various government agencies suggest and expect LCCA practices to be part of the standard
toolbox for engineers, planners, and decision makers. The US Federal Highway
Administration (FHWA) has promoted LCCA since 1991 and the US Department of
Transportation (DOT) created a Life Cycle Cost Analysis Primer (2002) to formalize their
intentions to have engineers and planners use the tool in their practice. In this document, they
describe the following steps in LCCA:

1. Establish design alternatives, including status quo

2. Determine activity timing

3. Estimate costs (agency and user)

4. Compute life cycle costs (LCC)

5. Analyze the results

While we have described most of these steps already, we emphasize them to demonstrate that
LCCA does not end with simply determining the life cycle costs of the various alternatives. It
is a multi-step process and it ends with an expected conclusion and analysis, building upon the
three-part system we introduced in Chapter 2. Such an analysis may reveal that the LCC of
one of the alternatives is merely 1% less than the next best alternative, or that it is 50% less.
It might also indicate that there is too much uncertainty in the results to make any conclusion.
In the end, the analyst’s result may not be the one chosen by the decision maker due to other
factors such as budgets, politics, or different assessments of the relative worth of the various
cost categories. Regardless, the act of analyzing the results is a critical component of any
analytical framework.

Deterministic and Probabilistic LCCA


Our examples so far, as well as many LCCAs (and LCAs, as we will see later), are
deterministic. That means they are based on single, fixed values of assumptions and
parameters; more importantly, it implies there is no risk or uncertainty that the result
might differ. Of course, it is very rare that any big decision we might want to make lacks
risk or uncertainty. Probabilistic or stochastic models are built based on some expected
uncertainty, variability, or chance.

Let us first consider a hypothetical example of a deterministic LCCA as done in DOT (2002).
Figure 3-2 shows two project alternatives (A and B) over a 35-year timeline. Included in the
timeline are cost estimates for the life cycle stages of initial construction, rehabilitation, and
end of use. An important difference between the two alternatives is that Alternative B has
more work zones, which are shorter in duration but cause inconvenience for users,


leading to higher user costs as valued by their productive time lost. Following the five-step
method outlined above, DOT showed these values:

Figure 3-2: Deterministic LCCA for Construction Project (Source: DOT, 2002)

Without discounting, we could scan the data and see that Alternative A has fewer periods of
disruption and fairly compact project costs in three time periods. Alternative B’s cost structure
(for both agency and user costs) is distributed across the analysis period of 35 years. Given the
time value of money, however, it is not obvious which might be preferred.

At a 4% rate, the discounting factors using Equation 3-2 for years 12, 20, 28, and 35 are 0.6246,
0.4564, 0.3335, and 0.2534, respectively. Thus, for Alternative A the discounted life cycle
agency costs would be $31.9 million and user costs would be $22.8 million. For Alternative B,
they would be $28.3 million and $30.0 million, respectively. As DOT (2002) noted in their
analysis, “Alternative A has the lowest combined agency and user costs, whereas Alternative
B has the lowest initial construction and total agency costs. Based on this information alone,
the decision-maker could lean toward either Alternative A (based on overall cost) or


Alternative B (due to its lower initial and total agency costs). However, more analysis might
prove beneficial. For instance, Alternative B might be revised to see if user costs could be
reduced through improved traffic management during construction and rehabilitation.”
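The 4% discount factors quoted above follow directly from Equation 3-2, where each factor is 1/(1 + r)^t. A quick arithmetic check (Python used here only for illustration):

```python
# Present value factors at a 4% discount rate for the DOT example years
rate = 0.04
factors = {year: 1 / (1 + rate) ** year for year in (12, 20, 28, 35)}

for year, factor in factors.items():
    print(year, round(factor, 4))
# Prints 0.6246, 0.4564, 0.3335, and 0.2534, matching the text. Multiplying
# each future cost by its factor and summing gives the discounted life cycle
# cost for an alternative.
```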

Even though this was a hypothetical example created to demonstrate LCCA to the civil
engineering audience, presumably you are already wondering how robust these numbers are
to other factors and assumptions. DOT also noted, “Sensitivity analysis could be performed
based on discount rates or key assumptions concerning construction and rehabilitation costs.
Finally, probabilistic analysis could help to capture the effects of uncertainty in estimates of
timing or magnitude of costs developed for either alternative.”

While engineers have been collecting data on their products for as long as they have been
designing products, the types of data required to complete an LCCA are generally quite
different from the data usually collected. LCCA can require planners to have estimates of future
construction or rehabilitation costs, potentially a decade or more from the time of
construction. These are obviously uncertain values (further suggesting the need for
probabilistic methods).

For big decisions like that in the DOT example, one would want to consider the ranges of
uncertainty possible to ensure against a poor decision. Building on DOT’s recommendation,
we could consider various values of users’ time, the lengths of time of work zone closures, etc.
If we had ranges of plausible values instead of simple deterministic values, that too could be
useful. Construction costs and work zone closure times, for example, are rarely much below
estimates (due to contracting issues) but in large projects have the potential to go significantly
higher. Thus, an asymmetric range of input values may be relevant for a model.

We could also use probability distributions to represent the various cost and other assumptions
in our models. By doing this, and using tools like Monte Carlo simulation, we could create
output distributions of expected life cycle cost for use in LCCA studies. We could then
simulate costs of the alternatives, and choose the preferred alternative based on combinations
of factors such as the lowest mean value of cost and the lowest standard deviation of cost.
Finally, probabilistic methods support the ability to quantitatively assess the likelihood that a
particular value might be achieved. That means you might be able to assess how likely each
alternative is to be greater than zero, or how likely it is that the cost of Alternative A is less
than Alternative B. It is by exploiting such probabilistic modeling that we will be able to gain
confidence that our analysis and recommendations are robust to various measures of risk and
uncertainty, and hopefully, support the right decisions. We will revisit these concepts in
Chapter 7 after we have learned a bit more about LCA models.
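As a minimal sketch of the probabilistic approach (all distributions and dollar amounts below are hypothetical assumptions, not values from the DOT example), a Monte Carlo simulation draws each uncertain cost from a distribution, here an asymmetric triangular one reflecting that costs rarely come in much below estimates but can run far above them, and then summarizes the resulting distribution of life cycle cost:

```python
import random

random.seed(42)  # fixed seed so runs are reproducible

def simulate_lcc():
    # Asymmetric triangular draws: (low, high, mode), in $ millions.
    construction = random.triangular(28, 45, 30)  # rarely below estimate, can run high
    user_costs = random.triangular(20, 40, 25)    # discounted user costs
    return construction + user_costs

samples = [simulate_lcc() for _ in range(10_000)]

mean_lcc = sum(samples) / len(samples)
# A probability-style statement a deterministic LCCA cannot make:
share_under_60 = sum(s < 60 for s in samples) / len(samples)

print(round(mean_lcc, 1))       # expected life cycle cost, $M
print(round(share_under_60, 2)) # chance the LCC comes in under $60M
```

The same machinery extends to comparing alternatives: simulate both, then report how often Alternative A beats Alternative B rather than a single point estimate.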


Chapter Summary
As introduced in Chapter 1, sustainability involves social, economic, and environmental
factors. We can track cost over the life cycle of products or projects and use it as a basis for
making decisions regarding comparative economic performance. There are various methods
and applications to perform life cycle cost analysis (LCCA) in support of decisions for basic
products, choices, and for infrastructure systems. Depending on the complexity of the project,
we may want to adjust for the time value of money by using discounting methods that
normalize all economic flows as if they occurred in the present. A benefit of using such
methods is that they allow incorporation of costs by both the owner as well as other users.
Beyond deterministic methods, LCCA can support probabilistic methods to ensure we can
make robust decisions that incorporate risk and uncertainty.

Now that you have been exposed to the basics of LCCA, you can appreciate how building on
the straightforward idea of considering costs over the life cycle can broaden the scope involved
in life cycle modeling. As we move forward in this textbook to issues associated with energy
or environmental life cycle assessment, concepts of life cycle cost analysis should remain a
useful part of LCA studies.

References for this Chapter


Hendrickson, Chris T. and H. Scott Matthews, Civil Infrastructure Planning, Investment and
Pricing. http://cspbook.ce.cmu.edu/ (accessed July 2013).

Au, Tung and Thomas P. Au, Engineering Economics for Capital Investment Analysis, 2nd
edition, Prentice-Hall, 1992. Available at http://engeconbook.ce.cmu.edu.

Edmunds.com, website, www.edmunds.com (accessed January 2, 2013).

US Department of Transportation Office of Asset Management, “Life-Cycle Cost Analysis
Primer”, FHWA-IF-02-047, 2002. Available at
http://www.fhwa.dot.gov/infrastructure/asstmgmt/lcca.cfm


End of Chapter Questions

Objective 1: Describe the types of costs that are included in a life cycle cost analysis.

1. Generate a life cycle cost summary table (using Figure 3-1 as a template) for the following:

a. A privately purchased computer

b. A public airport

c. A sports arena or stadium

Objective 2: Assess the difference between one-time (first) costs and recurring costs.

2. Building on Example 3-1, find the total cost of buying 2-liter bottles of soda over a 5-year
period, with and without discounting at a rate of 5%.

3. What would be the costs of the two soda options for the case described in Example 3-1 if
you spent 3 minutes per shopping trip buying soda, and that time spent had a cost of $20
per hour? What is the cost of time per hour that would make the two soda options equal
in cost? Separately, what is the amount of time spent per shopping trip needed to make
the two soda options equal in cost?

4. Building on question 3, what would be the costs of the two soda options if you also
considered the cost of shopping for the soda machine, syrup bottles, and carbonation
canisters (each of which took 3 minutes of time and cost was still $20/hour)?

5. Building on Example 3-1, but where you must drive 5 miles round trip to the store in a
vehicle that gets 25 miles per gallon (at a gasoline price of $3.50 per gallon) in order to buy
the soda machine, flavor bottles, carbonation canisters, or two-liter bottles every time you
want to drink soda, what are the total costs in the first year? What are the total discounted
costs over 5 years at a 5% rate? Discuss qualitatively how your model results might change
if you were buying other items on your shopping trips.

6. Combine the original Example 3-1 model, as well as the additional data and assumptions
from Questions 2 through 5 above. Calculate total life cycle costs for one year, and over 5
years, for each option and create a visual to summarize your results. Which alternative
should be chosen over 5 years, buying soda from a store or buying a soda machine? Which
should be chosen over 10 years? Are there any other costs our model should include?


Objective 3: Select a product or project amongst alternatives based on life cycle cost.

7. What are the total costs to own for the two vehicles in Example 3-6 with a 5% discount
rate (assuming all listed costs occur at the end of each year)? Which vehicle would you
choose? Does your decision ever change if the discount rate varies from 0 to 20%?

8. A household is considering purchasing a washing machine and has narrowed the
choice to two alternatives. Machine 1 is a standard top-loading unit with a purchase
cost of $500. This machine uses 40 gallons of water and 2 kilowatt-hours of electricity
per load (assuming an electric water heater). The household would do roughly 8 loads
of laundry per week with this machine. Machine 2 is a front-loading unit; it costs
$1,000, but it can wash double the amount of clothes per load, and each load uses half
the water and electricity. Assume that electricity costs 8 cents/kWh and water is $2 per
1,000 gals.

a. Generate a life cycle cost summary table for the two washing machines

b. Develop a life cycle cost comparison of the two machines over a 10-year
period without discounting. Which machine should be chosen if considering
only cost?

c. In which year are the cumulative costs approximately the same?

d. Which would you choose over a 10-year period with a 3% discount rate?

9. A recent and continuing concern of automobile manufacturers is to improve fuel
economy. One of the easiest ways to accomplish this is to make cars lighter. To do
this, vehicle manufacturers have substituted specially strengthened—but lighter—
aluminum for steel (they have also experimented with carbon fibers). Unfortunately,
processed aluminum is more expensive than steel - about $3,000 per ton instead of
$750 per ton for steel. Aluminum-intensive vehicles (AIVs) are expected to weigh less
by replacing half of the steel in the car with higher-strength aluminum on a 1 ton of
steel to 0.8 ton of aluminum basis. This is expected to reduce fuel use 20%.

Assume:

• Current cars can travel 25 miles per gallon of gasoline and gasoline costs $3.50
per gallon

• Current [steel] cars cost $20,000 to produce, of which $1,000 is currently for steel
and $250 for aluminum


• AIVs are equivalent to current cars except for substitution of lighter aluminum
for steel

• All cars are driven 100,000 miles

• All tons are short tons (2,000 pounds)

a) Of current cars and AIVs, which is cheaper over the life cycle (given only the
information above)? Develop a useful visual aid to compare life cycle costs across
steel vehicles and AIVs.

b) How sensitive would our cost estimates for steel, aluminum, and gas have to be to
reverse your opinion on which car was cheaper over the life cycle?

c) Do your answers above give you enough information to determine whether we
should produce AIVs? What other issues might be important?

Objective 4: Convert current and future monetary values into common monetary
units.

10. How sensitive is the decision in Example 3-6 to the Corolla’s annual cost of fuel (without
discounting)? Create a graphic to show your result.

11. How sensitive (quantitatively and qualitatively) is the choice of washing machines over 10
years to the discount rate, price of electricity, and price of water?

12. Using the values from Question 6, what is the discounted payback period of the soda
machine (with discount rate 5%)? What about for the values in Question 2?


Photo of nuclear electricity generation facility in France prominently showing its
certification to the ISO 14001 Environmental Management Standard.

Photo credit: By Pierre-alain Dorange (Own work) [CC-BY-SA-3.0
(http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons
http://upload.wikimedia.org/wikipedia/commons/0/08/Centrale_Nucl%C3%A9aire_du_Blayais.jpg

Note: Some of the content of this chapter was created via a grant from the NIST
Standards Services Curricula Development Cooperative Agreement Program
(#70NANB15H338). The opinions stated are those of the authors and not of the
National Institute of Standards and Technology.


Chapter 4 : The ISO LCA Standard – Goal and Scope

We have discussed many of the skills necessary to complete an LCA. Now we present the
framework for planning and organizing such a study. In this chapter, we introduce and discuss
standards (specifically environmental standards), and supplement information found in the
official ISO Standard for LCA. We only summarize and expand on the most critical
components, thus this chapter is not intended to be a substitute for reading and studying the
entire ISO Standard (likely more than once to gain sufficient understanding). The rationale for
studying the ISO Standard is to build a solid foundation on which to understand the specific
terminology used in the LCA community and to learn directly from the Standard what is
required in an LCA, and what is optional. We use excerpts and examples from completed LCA
studies to highlight key considerations since examples are generally lacking in the Standard. As
such, the purpose of this chapter is not to re-define the terminology used but to help you
understand what the terms mean from a practical perspective.

Learning Objectives for the Chapter


At the end of this chapter, you should be able to:

1. Describe types of standards, and the entities and processes that create them.

2. Describe the four major phases of the ISO LCA Standard

3. List all of the ISO LCA Standard study design parameters (SDPs)

4. Review SDPs given for an LCA study and assess both their appropriateness and
potential challenges in using them

5. Generate specific SDPs for an LCA study of your choosing

Introduction to Standards
Before we specifically discuss the LCA Standard, we review standards in general. Standards
are an agreed way of doing something and may be created to make a specific activity or process
consistent, or to be done using common guidelines or methods. They might also be created
to generally level the playing field in a particular market by ensuring that everyone produces
or operates the same way, such as with the same management systems. Standards are made
for a variety of reasons, and exist at many levels, from local (e.g., building codes) up to global
levels. They are often backed by significant research effort. Figure 4-1 shows a summary of
the kinds of standards produced.


Type of Standard            Description                                      Example Application

Specifications              A prescriptive set of absolute requirements      Product Safety

Codes of practice           Recommendations of practices                     Construction

Methods                     A prescriptive way of measuring or testing       Materials testing

Terminology or Vocabulary   A set of terms and definitions                   Conformity

Product or Process          A set of qualities or requirements to ensure     Medical devices
                            effective function or level of service

Figure 4-1: Types of Standards and Example Applications (BSI 2017)

There are many entities around the world that work on developing and promoting the use of
standards. These entities may represent groups of producers, consumers, retailers, industry
associations, or regulators. ASTM International has been developing standards for specific
tests and materials for more than 100 years. In the civil engineering and construction sector,
there are standards for concrete; for example, a project request for proposals could require
that all material used meet ‘ASTM C94 concrete’. This means that any concrete used in the
project must meet the testing standard defined in ASTM C94, developed by ASTM. ISO (the
International Organization for Standardization – the name derives from the Greek isos,
meaning ‘same’ or ‘equal’) creates standards geared towards safety, quality, and management,
and various companies and entities around the world follow these standards.
Professional societies like the Institute for Electrical and Electronics Engineers (IEEE) have
affiliated standards associations that create standards related to information and
communication technology devices, such as the IEEE networking standards for wireless
networking (802.11). National entities, such as the US National Institute for Standards and
Technology (NIST) or others around the world, initiate or support work on standards of
national interest, and also perform conformity assessment: activities that ensure
adherence to standards. Without standards, different companies could make products with
limited interoperability, or use materials with unknown or highly uncertain performance.

The ISO standard development process has the following components: it (1) responds to a
market need; (2) is based on expert opinion; (3) is developed by a multi-stakeholder team; and
(4) is ratified by consensus. The actual standard is drafted, edited, and revised by a technical
committee of global experts based on comments until consensus (75% agreement) is reached
(ISO 2012). Standards developed by other organizations follow a similar process flow but may
vary in terms of ratification requirements, time in review, etc.

Two prominent examples of global management standards are the ISO 9000 and ISO 14000
families. The ISO 9000 family developed quality management systems for organizations to


improve their consistency. The ISO 9000 standards were first formalized in 1987 via
participation by industry associations, manufacturers, and national entities, and have been
updated several times (most recently in 2008). This family of standards has led to significant
improvements around the world in managing the production of high-value manufactured
products and services by innovative processes and tracking of quality indicators. In the 1990s,
entities around the world began to work on an analogous set of standards to promote high-
value environmental management systems through improved management frameworks and
indicators (ISO 14000), with the first version in 1996 and the most recent in 2015. ISO 14000
is more than just a management framework. Specific standards under the ISO 14000 umbrella
provide standards for environmental auditing and labeling, environmental communication,
and greenhouse gas reporting. These two examples illustrate how multiple groups
come together to create a standard, and how standards can evolve over time.

The Life Cycle Assessment Standard


There are various frameworks for performing life cycle assessment (LCA), but the primary and
globally accepted approach follows the ISO LCA Standard (composed primarily
of two related standards, 14040:2006 and 14044:2006), which we assume you have
accessed and read separately. We will refer to both underlying standards as the ISO Standard.
The notation “14040:2006” means that the ISO LCA Standard is in the “ISO 14000” family
of standards. The version current as of the time of writing this book was most recently updated
in 2006. The first version of the ISO LCA Standard was published in 1997.

One thing that you may now realize is that many of the foundational life cycle analyses
mentioned in Chapter 1 (e.g., by Hocking, Lave, etc.) were completed before the LCA
Standard was formalized. That does not mean they were not legitimate life cycle studies – it
just means that today these could not be referred to as ISO compliant, i.e., that the study
conforms to the LCA Standard as published. While it may seem trivial, compliance with the
many ISO standards is typically a goal of an entity looking for global acceptance and
recognition. This is not just in the LCA domain – firms in the automotive supply chain seek
“ISO 9000 compliance” to prove they have quality programs in place at their companies that
meet the standard set by ISO, so they are able to do business in that very large global market.

It should be obvious why an LCA standard is desirable. Without a formal set of requirements
and/or guidelines, anyone could do an LCA according to her own views of how a study should
be done and what methods would be appropriate to use. In the end, 10 different parties could
each perform an LCA on the same product and generate 10 different answers. The LCA
Standard helps to normalize these efforts. However, as we will see below, its rules and
guidelines are not overly restrictive. Simply having 10 parties conforming to the Standard does
not guarantee you would not still generate 10 different answers! One could alternatively argue


that in a field like LCA, a diversity of thoughts and approaches is desirable, and thus, that
having a prescriptive standard stifles development of methods or findings.

As you have read separately, the ISO LCA Standard formalizes the quantitative modeling and
accounting needed to implement life cycle thinking to support decisions. ISO 14040:2006 is the
current ‘principles and framework’ of the Standard, written for a managerial audience,
while ISO 14044:2006 gives the ‘requirements and guidelines’ for a practitioner. Given that
you have already read the Standard (and have its glossaries of defined terms to help guide
you), you are already familiar with the basic ideas of inputs, outputs, and flows.

At a high level, Figure 4-2 summarizes the ISO LCA Standard’s 4 phases: goal and scope
definition, inventory analysis, impact assessment, and interpretation. The goal and scope are
statements of intent for your study, and part of what we will refer to as the study design
parameters (discussed below). They explicitly note the reason why you are doing the study,
as well as the study reach. In the inventory analysis phase, you collect and document the data
needed (e.g., energy use and emissions of greenhouse gases) to meet the stated goal and scope.
In the impact assessment phase you transition from tracking simple inventory results like
greenhouse gas emissions to impacts such as climate change. Finally, the interpretation phase
looks at the results of your study, puts them into perspective, and may recommend
improvements or other changes to reduce the impacts.

Figure 4-2: Overview of ISO LCA Framework (Source: ISO 14040:2006)


It is important to recognize that all of the double arrows mean that the four phases are iterative,
i.e., you might adjust the goal and scope after trying to collect inventory data and realizing
there are challenges in doing so. You may get to the interpretation phase and realize the data
collected does not help answer the questions you wanted and then revise the earlier parts. You
may get unexpected results that make reaching a conclusion difficult, and need to add
additional impact assessments. Thus, none of the phases are truly complete until the entire
study is complete. From experience, every study you do will be modified as you go through it.
This is not a sign of weakness or failure; it is the prescribed way of improving the study as you
learn more about the product system in question.

As ISO mentions, it is common for a study following the Standard to exclude an impact
assessment phase, but such a study is called a life cycle inventory (LCI). That is, its final
results are only the accounting-like exercise of quantifying total inputs and outputs without
any consideration of impact. You could interpret this to mean that impact assessment is not a
required component; more correctly, it is required of an LCA study but not of an LCI. That
said, we will generally use the phrase “LCA” to refer either to an LCA or an LCI, as is common
in the field. It is also worth noting that LCA stands for life cycle assessment, not life cycle analysis,
which recognizes that an assessment is typically required for a comparative study to be useful.
Finally, it may be impossible to produce an objective LCA, one where no value judgments
have been made.

The right hand side of Figure 4-2 gives examples of how LCA might be used. The first two,
for product improvement and strategic planning, are common. In this book we focus more
on “big decisions” and refer to activities such as informing public policy (e.g., what types of
incentives might make paper recycling more efficient?) and assessing marketing claims. In
these domains the basis of the study might be in comparing between similar products or
technologies.

In the rest of this chapter, we focus on the goal and scope phases of LCA. Subsequent chapters
discuss the inventory, interpretation, and impact assessment phases in greater detail.

ISO LCA Study Design Parameters


As noted above, ISO requires a series of parameters to be qualitatively and quantitatively
described for an LCA study, which in this text we refer to as the study design parameters
(SDPs), listed in Figure 4-3. SDP is a phrase we created to help teach the topic; it does not
appear in the ISO Standard. We discuss some of the relevant components in this chapter, and
save discussion of others for later in the book. In this section we provide added detail and
discussion about the underlying needs of each of these parameters and discuss hypothetical
parameter statements and values in terms of their ISO conformance.


Goal

Scope Items:
    Product System
    System Boundary
    Functional Unit
    Inventory Inputs and Outputs
    LCIA Methods Used

Figure 4-3: Study Design Parameters (SDPs) of ISO LCA Framework

Think of the SDPs as a summary of the most important organizing aspects of an LCA. The
SDPs are a subset of the required elements in an LCA study, but are generally the most
critical considerations and thus those that at a glance would tell you nearly everything you
needed to know about what the study did and did not seek to do. Thus, these are items that
need to be chosen and documented very well so there is no confusion. In documenting each
in your studies, you should specifically use the keywords represented in the Standard (e.g.,
“the goal of this study is”, “the functional unit is”, etc.). Expanding on what is written in the
ISO LCA Standard, we discuss each of the items in the SDP below.

SDP 1. Goal

The goal of an LCA must be clearly stated. ISO requires that the goal statement include
unambiguous statements about four items: (1) the intended application, (2) the reasons for
carrying out the study, (3) the audience, and (4) whether the results will be used in comparative
assertions released publicly. An easy way to think about the goal statement of an LCA report
is that it must fully answer two questions: “who might care about this and in what context?”
(for points #3 and #4) and “why we did it and what will we do with it?” (#2 and #1). As
noted above, the main components of an LCA are iterative. Thus, it is possible you start an
LCA study with a goal, and by going through the effort needed to complete it, the goal is
changed because more or less is possible than originally planned.

Below are excerpts of the goal statement from an LCA study comparing artificial and natural
Christmas trees bought in the US3 (PE Americas 2010).

“The findings of the study are intended to be used as a basis for educated external
communication and marketing aimed at the American Christmas tree consumer.”

“The goal of this LCA is to understand the environmental impacts of both the most
common artificial Christmas tree and the most common natural Christmas tree, and to
analyze how their environmental impacts compare.”

³ In the interest of full disclosure, one of the authors of this book (HSM) was a paid reviewer of this study.

Life Cycle Assessment: Quantitative Approaches for Decisions That Matter – lcatextbook.com
86 Chapter 4: The ISO LCA Standard – Goal and Scope

“This comparative study is expected to be released to the public by the ACTA to refute
myths and misconceptions about the relative difference in environmental impact by real
and artificial trees.”

From these sentences, we can understand all four of the ISO-required items of the goal statement.
The intended application is external marketing. The reasons are to refute misconceptions. The
audience is American tree consumers. Finally, the study was noted to be planned for public
release (which was a vague statement at the time - but it was released and is available on the
web). We will discuss further implications of public studies later.

The examples above together constitute a good goal statement. It should be clear that skipping
any of the four required items, or trying to streamline the goal for readability, could lead to an
inappropriate goal statement. For example, the sentence "This study seeks to find the energy
use of a power plant" is clear and simple but addresses only one of the four required elements
of a goal. It also never uses the word "goal", which could be perceived as stating no goal.

Beyond the stated goals, we could consider what is not written in the goals. From the above
statements, there would be no obvious use of the study by a retailer, e.g., to decide whether to
stock one kind of tree over another. It is useful to consider what a reader or reviewer of the
study would think when considering your goal statement. A reviewer would be sensitive to
biases and conflicts, as well as creative use of assumptions in the SDP that might favor one
alternative over others. Likewise, they may be sensitive to the types of conclusions that may
arise from your study given your chosen goals. You want to write so as to avoid such
interpretations.

One of the primary reasons that analysts seek to use LCA is to make a comparative assertion,
which is when you compare multiple products or systems, such as two different types of
packaging, to be able to conclude and state (specifically, to make a claim) that one is better
than the other (has lower impacts). As noted above, the ISO LCA Standard requires that such
an intention be noted in the goal statement.

Scope

Although ISO simply lists “goal and scope”, a goal statement is just a few sentences while the
scope may be several pages. The study scope is not a single statement but a collection of
qualitative and quantitative information denoting what is included in the study, and key
parameters that describe how it is done. Most of the SDPs are part of the scope. There are 12
separate elements listed in ISO’s scope requirements, but our focus is on five of them that are
part of the SDPs: the product system studied, the functional unit(s), system boundaries, and
the inventory and/or impact assessments to be tracked. The other seven are important (and
required for ISO compliance) but are covered sufficiently either in the ISO Standard or
elsewhere in this book.

While these five individual scope SDPs are discussed separately below, they are highly
dependent on each other and thus difficult to define separately. We acknowledge that this
interdependency of terminology often confuses readers, as every definition of one of
the scope SDPs contains another SDP term. However, a clear understanding of these terms is
crucial to the development of a rigorous study and we recommend you read the following
section, along with the ISO Standard, multiple times until you are comfortable with the
distinctions.

SDP 2. Functional Unit

While we list only the functional unit as an SDP, the ISO Standard requires a discussion of
the function of the product system as well. A product system (as defined in ISO 14040:2006
and expanded upon below) is a collection of processes that provide a certain function. The
function represents the performance characteristics of the product system, or in layman’s
terms, “what does it do?” A power plant is a product system that has a function of generating
electricity. The function of a Christmas tree product system is presumably to provide
Christmas joy and celebrate a holiday. The function of a restroom hand dryer is drying hands.
The function of a light bulb is providing light. In short, describing the function is pretty
straightforward, but is done to clarify any possible confusion or assumptions that one might
make from otherwise only discussing the product system itself.

The functional unit, on the other hand, must be a clearly and quantitatively defined measure
relating the function to the inputs and outputs to be studied. Unfortunately, that is all the
description the ISO Standard provides. This ambiguity is partly the reason why the expressed
functional units of studies are often inappropriate. A functional unit should quantify the
function in a way that makes it possible to relate it to the relevant inputs and outputs (imagine
a ratio representation). As discussed in Chapter 1, inputs are items like energy or resource use,
and outputs are items like emissions or waste produced. You thus need a functional unit that
bridges the function and the inputs or outputs. Your functional unit should explicitly state units
(as discussed in Chapter 2) and the results of your study will be normalized by your functional
unit.

Building on the examples above, a functional unit for a coal-fired power plant might be “one
kilowatt-hour of electricity produced”. Then, an input of coal could be described as “kilograms
of coal per one kilowatt-hour of electricity produced (kg coal/kWh)” and a possible output
could be stated as “kilograms of carbon dioxide emitted per kilowatt-hour of electricity
produced (kg CO2/kWh).” For a Christmas tree the functional unit might be “one holiday
season” because while one family may leave a tree up for a month and another family for only
a week, both trees fulfill the function of providing Christmas joy for the holiday season. For a
hand dryer it might be “one pair of hands dried”. For a light bulb it might be “providing 100
lumens of light for one hour (a.k.a. 100 lumen-hours)". All of these are appropriate because
they discuss the function quantitatively and can be linked to study results. Figure 4-4
summarizes the bridge between function, functional units, and possible LCI results for the
four product systems discussed. While not explicitly tied to a function, you could also have a
study where your functional unit is "per widget produced", which would encompass the
cradle-to-gate system of making a product.

Product System | Function | Functional Unit | Example LCI Results
Power Plant | Generating electricity | 1 kWh of electricity generated | kg CO2 per kWh
Christmas Tree | Providing holiday joy | 1 undecorated tree over 1 holiday season | MJ energy per undecorated tree per holiday season
Hand Dryer | Drying hands | 1 pair of hands dried in a restroom facility | MJ energy per pair of hands dried in restroom
Light Bulb | Providing light | 100 lumens of light for 1 hour (100 lumen-hrs) | g Mercury per 100 lumen-hrs

Figure 4-4: Linkages between Function, Functional Unit, and Example LCI Results
for hypothetical LCA studies
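The "bridge" in Figure 4-4 is, computationally, just a normalization: total inventory flows are divided by the total quantity of the functional unit delivered. A minimal sketch of this calculation, using invented numbers rather than data from any real plant, might look like:

```python
# Normalizing inventory totals by a functional unit (all numbers are invented).
# A power plant's annual flows are divided by the kWh generated that year,
# expressing results "per 1 kWh of electricity generated".

annual_generation_kwh = 3.5e9          # hypothetical plant output, kWh/year
annual_inventory = {
    "coal_in_kg": 1.6e9,               # input: coal consumed, kg/year
    "co2_out_kg": 3.3e9,               # output: CO2 emitted, kg/year
}

# Results normalized to the functional unit (per kWh)
per_functional_unit = {
    flow: total / annual_generation_kwh
    for flow, total in annual_inventory.items()
}

for flow, value in per_functional_unit.items():
    print(f"{flow}: {value:.3f} per kWh generated")
```

Whatever the product system, this division step is why a functional unit without a stated quantity and unit leaves study results un-normalized.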

The functional unit should as far as possible relate to the functions of the product rather than
to the physical product. For example, “seating support for one person working at a computer
for one year” is preferable to “one computer workstation chair”, “freezing capacity of 200 dm3
at -18°C” is preferable to “one 200 dm3 refrigerator”, and “annual lighting of a work area of
10 square meters with 30 lux” is preferable to “bulbs providing 30,000 lumen for one year”.
In this way, it is ensured that all obligatory properties - as well as the duration of the product
performance - are addressed. ISO 14049:2002 (Section 3) provides additional suggestions.

Now that we have provided some explicit discussion of functional units, we discuss common
problems with statements of functional units in studies. One common functional unit problem
is expressing the function non-quantitatively or without units. Often, suggested functional
units sound more like a function description, e.g., for a power plant “the functional unit is
generating electricity”. This cannot be a viable functional unit because it is not quantitative
and also because no unit was stated. Note that the units do not need to be SI-type units. The
unit can be a unique unit relevant only for a particular product system, as in “1 pair of hands
dried”.

Another common problem in defining a study’s functional unit is confusing it with the inputs
and outputs to be studied. For example, “tons of CO2” may be what you intend to use in your
inventory analysis, but is not an appropriate functional unit because it is not measuring the
function, it is measuring the greenhouse gas emission outputs of the product system. Likewise,
it is not appropriate to have a functional unit of "kg CO2 per kWh" because the CO2 emissions,
while a relevant output, have nothing to do with the expression of the function. Further, since
results will be normalized to the functional unit, reported greenhouse gas emissions in such a
study would come out as "kg CO2 per (kg CO2 per kWh)", which makes no sense.
Thus, product system inputs and outputs do not belong in a functional unit definition.

For LCA studies that involve comparisons of product systems, choices of functional units are
especially important because the functional unit of the study needs to be unique and consistent
across the alternatives. For example, an LCA comparing fuels needs to compare functionally
equivalent units. It would be misleading to compare a gallon of ethanol and a gallon of gasoline
(i.e., a functional unit of gallon of fuel), because the energy content of the fuels is quite
different (gasoline is about 115,000 BTU/gallon while ethanol (E100) is about 75,000
BTU/gallon). In terms of function or utility, you could drive much further with a gallon of
gasoline than with ethanol. You could convert to gallons of gasoline equivalent (GGE) or
perhaps use a functional unit based on energy content (such as BTU) of fuel, or based on
driving 1 mile. Likewise, if comparing coal and natural gas to make electricity, an appropriate
functional unit would be per kWh or MWh, not per MJ of fuel, which ignores the differences
in the energy content of these fuels and in the efficiencies of coal-fired and NG-fired boilers.
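The gallon-to-GGE conversion mentioned above is simple arithmetic. A short sketch using the approximate energy contents quoted in this paragraph (the function name is ours, not from any standard library) is:

```python
# Converting a fuel volume to gallons of gasoline equivalent (GGE), using the
# approximate energy contents cited in the text.
GASOLINE_BTU_PER_GAL = 115_000
ETHANOL_BTU_PER_GAL = 75_000   # E100

def gallons_gasoline_equivalent(gallons: float, btu_per_gal: float) -> float:
    """Express a fuel volume on a common energy basis (GGE)."""
    return gallons * btu_per_gal / GASOLINE_BTU_PER_GAL

# One gallon of ethanol carries only about 65% of the energy in a gallon of
# gasoline, so a "per gallon" functional unit would bias the comparison.
print(gallons_gasoline_equivalent(1.0, ETHANOL_BTU_PER_GAL))  # roughly 0.65 GGE
```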

In their detailed LCA guidance, Weidema et al. (2004) stress that for comparative studies the
functional unit must be defined in terms of the obligatory product properties required by the customers
in the market on which the product is sold. The obligatory product properties are those that the
product must have in order to be at all considered as a relevant alternative by the customers.
More details on market segmentation can be found in Weidema et al. (2004).

Using an inappropriate function or functional unit can lead to substantial wasted effort if a study
is later reviewed and found to be faulty. If you were to use functional units that, for
example, had no actual units, you would create results that were not normalized to anything.
Having to go back and correct that after a study is done is effectively an entirely new study.

SDPs 3 and 4. Product System and System Boundary

Before discussing an ISO LCA product system, we first discuss products, which can be any
kind of good or service. This could mean a physical object like a component part, or an
intangible product such as software. Processes, similarly, are activities that transform inputs to outputs.
mentioned, an ISO LCA product system is the definition of the relevant processes and flows
related to the chosen product life cycle that lead to one or more functions. Even virtual
products like software (or cloud services) have many processes needed to create them.
Products are outputs of such systems, and a product flow represents the connection of a
product between product systems (where it may be an output of one and an input of another).
For example, the product output of a lumber mill process—wood planks—may be an input
to a furniture manufacturing process. Similarly, petroleum is the product output of an oil
extraction process and may be an input into a refinery process that has product outputs like
gasoline and diesel fuels.

A product system is comprised of various subcomponents as defined below, but generally
consists of various processes and flows. The system boundary notes which subset of the
overall collection of processes and flows of the product system is part of the study, in
accordance with the stated study goals.

While not required, a diagram is crucial in helping the audience appreciate the complexity of
the product system and its defined system boundary. The diagram is created by the study
authors (although it may be generated by the LCA software used in the study). This diagram
should identify the major processes in the system and then explicitly note the system boundary
chosen, ideally with a named box “system boundary” around the processes included in the
study. Alternatively, some color-coded representation could be used to identify the processes
and flows contained within the boundary. Even with a great product system diagram, the study
should still discuss in detailed text the various processes and flows. Figure 4-5 shows the
generic product system and system boundary example provided in ISO 14040:2006. If your
study encompasses or compares multiple products, then you have to define several product
systems.

Figure 4-5: ISO 14040 Product System and System Boundary example

There are a few key components of a product system diagram (also called a process flow
diagram). Boxes in these diagrams represent various forms of processes, and arrows represent
flows, similar to what might be seen in a mass balance or materials flow analysis. Boxes (or
dashed lines) may represent system boundaries. At the highest level of generality (as in Figure
4-5) the representation of a product system may be such that the process boxes depicted
correspond to entire aggregated life cycle stages (raw materials, production, use, etc.) as
discussed in Chapter 1. In reality each of these aggregated stages may be comprised of many
more processes, as we discuss below.

It is worth discussing the art of setting a system boundary in more detail. Doing an LCA that
includes data on every specific process in the product system of a complicated product is
impossible. An automobile has roughly 30,000 components. Tens of thousands of specific
processes are involved in mining the ores, making the ships, trucks, and railcars used to
transport the materials, refining the materials, making the components, and assembling the
vehicle. However, a reasonably aggregated LCA could be done of the product system that
incorporates all relevant aspects but sacrifices some level of process-specific detail.

LCA models are able to capture direct and indirect effects of systems. In general, direct
effects are those that happen directly as a result of activities in the process in question.
Indirect effects are those that happen as a result of the activities, but outside of the process
in question. For example, steel making requires iron ore and oxygen directly, but also
electricity, environmental consulting, natural gas exploration, production, and pipelines, real
estate services, and lawyers. Directly or indirectly, making cars involves the entire economy,
and getting specific mass and energy flows for the entire economy is impossible.

Since conducting a complete LCA is impossible, what can we do? As we will see below, the
ISO Standard provides for ways of simplifying our analyses so as not to require us to track
every possible flow. But we still need to make key decisions (e.g., about stages to include) that
can eventually lead to model simplifications. Focusing on the product itself while ignoring all
other parts of the life cycle would lead to inaccurate and biased results, as shown in the
example of the battery-powered car in Chapter 1.

An LCA of a generic American passenger car was undertaken by the three major US
automobile manufacturers, a.k.a. the 'Big Three', in the mid-1990s. This study looked
carefully at the processes for extracting ore and petroleum and making steel, aluminum, and
plastic for use in vehicles. It also looked carefully at making the major components of a car
and assembling the vehicle. Given the complexity described above, the study was forced to
compromise by selecting a few steel mills and plastics plants as ‘representative’ of all plants.
Similarly, only a few component and assembly plants were analyzed. Whether the selected
facilities were really representative of all plants cannot be known. Finally, many aspects of a
vehicle were not studied, such as much of the transportation of materials and fuels and ‘minor’
components. Nonetheless, the study was two years in duration (with more than 10 person-years
of effort) and is estimated to have cost millions of dollars.

Thus, system boundaries need to be justified. Beyond the visual display and description of the
boundary used in the study, the author should also explain choices and factors that led to the
boundary as finally chosen and used. By justifying these choices, you allow the audience to better appreciate
some of the challenges faced and tradeoffs made in the study. Other justifications for system
boundary choices may include statements about a process being assumed or found to have
negligible impact, or in the case of a comparative study, that identical processes existed in both
product systems and thus would not affect the comparison. As mentioned above, significant
effort looking for data could fail, and data for a specific process may be unavailable. Proxy
data from a similar alternative process could be used instead.

Process Flows
Product systems have elementary flows into and out of them. As defined by ISO 14044,
elementary flows are “material or energy entering the system being studied that has been
drawn from the environment without previous human transformation, or material or energy
leaving the system being studied that is released into the environment without subsequent
human transformation.” Translating, that means pure flows that need no other process to
represent them on the input or output side of the model.

For the sake of discussion, assume that Figure 4-5 is the product system and boundary diagram
for a mobile phone. The figure shows that the product system for the mobile phone as defined
with its rectangular boundary has flows related to input products and elementary flows. The
input product (on the left side of the figure) is associated with another product system and is
outside of the system boundary. Likewise, on the right side of the figure, the mobile phone
“product flow” is an input to another system. As an example, the left side of the figure product
flow could represent that the mobile phone comes with paper instructions printed by a third
party (but which are assumed to not be part of the study) and on the right side could be noting
that the mobile phone as a device can be used in wireless voice and data network systems (the
life cycles of such equipment also being outside the scope of the study). That’s not to say that
no use of phones is modeled, as Figure 4-5 has a “use phase” process box inside the boundary,
but which may only refer to recharging of the device. The study author may have chosen the
boundary as such because they are the phone manufacturer and can only directly control the
processes and flows within the described boundary. As long as their goal and scope elements
are otherwise consistent with the boundary, there are no problems. However, if, for example,
the study goal or scope motivated the idea of using phones to make Internet based purchases
for home delivery, then the current system boundary may need to be modified to consider
impacts in those other systems, for example, by including the product system box on the right.

Figure 4-5 might be viewed as implying that the elementary flows are not part of the study
since they are outside of the system boundary. This is incorrect, however, because these
elementary flows while not part of the system are the inputs and outputs of interest that may
have motivated the study, such as energy inputs or greenhouse gas emission outputs. They are
represented this way because they enter or leave the system directly via the environment rather than the technosphere. In short,
they are in the study but outside of the system.

Product system diagrams may be hierarchical. The high level diagram (e.g., Figure 4-5) may
have detailed sub-diagrams and explanations to describe how other lower-level processes
interact. These hierarchies can span multiple levels of aggregation. At the lowest such level, a
unit process is the smallest element considered in the analysis for which input and output
data are quantified. Figure 4-6 shows a generic interacting series of three unit processes that
may be a subcomponent of a product system.

Figure 4-6: Unit Processes (Source: ISO 14040:2006)

Figure 4-7 gives an example of how one might detail the high level “Waste Treatment” process
from Figure 4-5 in the manner of Figure 4-6, where the unit processes are one of the three
basic steps of collecting, disassembling, and sorting of e-waste. Additional unit processes (not
shown) could exist for disposition of outputs.

Figure 4-7: Process Diagram for E-waste treatment

It is at the unit process level, then, that inputs and outputs actually interact with the product
system. While already defined in Chapter 1, ISO specifically considers them as follows. Inputs
are “product, material or energy flows that enter a unit process” and may include raw materials,
intermediate products and co-products. Outputs are “products, material or energy flows that
leave a unit process” and may include raw materials, intermediate products, products, and
releases, e.g., emissions and waste. Raw materials are "primary or secondary material that is
used to produce a product" and waste is "substances or objects to be disposed of." Intermediate
products flow between unit processes (such as cumulatively assembled components).
Co-products are two or more products of the same process or system.
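These definitions suggest a simple data model: a product system as a set of unit processes with quantified inputs and outputs, linked by intermediate product flows. The sketch below is purely illustrative (the process names, flow names, and quantities are all invented), and real LCA software handles far larger networks, often with matrix methods:

```python
# A toy product system: two unit processes linked by an intermediate product
# flow (wood planks), each with a quantified elementary output (CO2).
# All names and numbers are invented for illustration.

unit_processes = {
    "lumber_mill": {
        "output": ("wood_plank_kg", 1.0),
        "elementary_out": {"co2_kg": 0.05},   # per kg of planks produced
    },
    "furniture_plant": {
        "output": ("chair_unit", 1.0),
        "inputs": {"wood_plank_kg": 12.0},    # intermediate product flow
        "elementary_out": {"co2_kg": 2.0},    # per chair assembled
    },
}

def co2_per_chair(processes: dict) -> float:
    """Accumulate the CO2 elementary flow across both unit processes."""
    plant = processes["furniture_plant"]
    mill = processes["lumber_mill"]
    upstream = plant["inputs"]["wood_plank_kg"] * mill["elementary_out"]["co2_kg"]
    return upstream + plant["elementary_out"]["co2_kg"]

print(co2_per_chair(unit_processes))  # 12 * 0.05 + 2.0 = 2.6 kg CO2 per chair
```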

The overall inputs and outputs to be measured by the study should be elementary flows. This
is why “electricity” is not typically viewed as an input, i.e., it has not been drawn from the
environment without transformation. Electricity represents coal, natural gas, sunlight, or water
that has been transformed by generation processes. “MJ of energy” on the other hand could
represent untransformed energy inputs.

In the Christmas tree LCA mentioned above, which compares artificial and natural trees, the
following text was used (in addition to a diagram): “For the artificial tree the system boundary
includes: (1) cradle-to-gate material environmental impacts; (2) the production of the artificial
tree with tree stand in China; (3) transportation of the tree and stand to a US retailer, and
subsequently a customer’s home; and (4) disposal of the tree and all packaging.”

SDP 5. Inventory Inputs and Outputs

The definition of your study needs to explicitly note the inputs and/or outputs you will be
focusing on in your analysis. That is because your analysis does not need to consider the
universe of all potential inputs and outputs. It could consider only inputs (e.g., an energy use
footprint), only outputs (e.g., a carbon emissions footprint), or both. The input and output
specification part of the scope is not explicitly defined in the ISO Standard. It is presumably
intended to be encompassed by the full product system diagram with labeled input and output
flows. Following the example above, your mobile phone study could choose to track inputs
of water, energy, or both, but needs to specify them. By explicitly noting which inputs and/or
outputs you will focus on, it helps the audience better understand why you might have chosen
the selected system boundary, product system, functional unit, etc. If you fail to explicitly note
which quantified inputs and outputs you will consider in your study (or, for example, draw a
generic product system diagram with only the words “inputs” and “outputs”) then the
audience is left to consider or assume for themselves which are appropriate for your system,
which could be different than your intended or actual inputs and outputs. Chapter 5 discusses
the inventory analysis component of LCA in more detail.

SDP 6. Impact Assessment

ISO 14040 requires you to explicitly list “the impact categories selected and methodology of
impact assessment, and subsequent interpretation to be used”. While we save more detailed
discussion of impact assessment for Chapter 12, we offer some brief discussion and examples
here so as to help motivate how and why your choice of impact assessment could affect your
other SDP choices.

As we discussed in Chapter 1, there is a big difference between an inventory (accounting) of
inputs and outputs and the types of impacts they can have. While we may track input and
output use of energy and/or greenhouse gas emissions, the impacts of these activities across
our life cycle could be resource depletion, global warming, or climate change. In impact
assessment we focus on the latter issues. Doing so will require us to use other methods that
have been developed in conjunction with LCA to help assess impacts. Specifically, there are
impact assessment methods to consider cumulative energy demand (CED) and to assess the
global warming potential (GWP) of emissions of various greenhouse gases. If we chose to
consider these impacts in our study, then we explicitly state them and the underlying methods
in the SDP. Again, the point of doing so explicitly is to ensure that at a glance a reader can
appreciate decisions that you have made up front before having to see all of your study results.
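To make the inventory-versus-impact distinction concrete, the sketch below applies 100-year global warming potential factors to a small, hypothetical greenhouse gas inventory. The factor values are the IPCC AR5 100-year GWPs; other assessment reports publish different values, which is exactly why the chosen impact method must be stated explicitly in the scope.

```python
# Characterizing a greenhouse gas inventory into a single impact score
# (kg CO2-equivalent) using 100-year GWP factors (IPCC AR5 values).
GWP100_AR5 = {"co2": 1, "ch4": 28, "n2o": 265}

def co2_equivalent(inventory_kg: dict) -> float:
    """Sum GWP-weighted emissions, in kg CO2e."""
    return sum(GWP100_AR5[gas] * kg for gas, kg in inventory_kg.items())

# Hypothetical inventory per functional unit:
print(co2_equivalent({"co2": 100.0, "ch4": 0.5, "n2o": 0.1}))
# 100*1 + 0.5*28 + 0.1*265 = 140.5 kg CO2e
```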

There are other required elements for the goal and scope, as noted above, but the SDPs are
the most important and time consuming. They are the scope elements that need to be most
carefully worded and evaluated.

A Final Word On Comparative Assertions And Public Studies

Comparative studies can only be done if the life cycle models created for each compared
product use the same study design parameters, such as the same goal and scope, functional
unit, and system boundary. The ISO Standard in various places emphasizes what needs to be
done if you are going to make comparative assertions. By making such assertions you are
saying that applying the ISO Standard has allowed you to make the claim. For example, ISO
requires that for comparative assertions, the study must be an LCA and not simply an LCI,
and that a special sensitivity analysis is done. The additional rules related to when you intend
to make comparative assertions are in place both to ensure high quality work and to protect
the credibility of the Standard. If several high visibility studies were done without all of these
special considerations, and the results were deemed to be suspicious, the Standard itself might
be vulnerable to criticism.

Similarly, ISO requires an LCA to be peer reviewed if the comparative results are intended for
public release. This means that a team of experts (typically three) needs to review the study,
write a report of its merits, and assess whether it is compliant with the ISO Standard (i.e.,
whether all of the goal, scope, etc., elements have been done in accordance with what is written
in the Standard). A vast majority of completed LCAs are not seen by the public, and therefore
have not been peer-reviewed. That does not mean they are not ISO compliant, just that they
have not been reviewed as such and designated as compliant.

E-resource: On the www.lcatextbook.com website, in the Chapter 4 folder, is a
spreadsheet listing publicly available LCA studies from around the world for many
different products. Amongst other aspects, this spreadsheet shows whether studies were peer
reviewed (which is interesting because they have all been “released to the public” but not all
have been peer reviewed). PDF files of most of the studies listed are also available. The icon
to the left will be used in the remainder of the book to designate resources available on the
textbook website. Readers are urged to read one or more of these public studies that are of
interest to them as a means of becoming familiar with LCA studies.

Chapter Summary
The ISO LCA Standard is an internationally recognized framework for performing life cycle
assessment, and has been developed and revised over time to guide practitioners towards
making high-quality LCA studies. Any LCA practitioner should first read and know the
requirements of the Standard. This chapter has focused on a subset of the Standard, namely
the so-called study design parameters (SDPs) which comprise the main high-level variables for
a study. When presented properly, SDPs allow the audience to quickly appreciate the goals
and scope of the study. The chapter focused on practical examples of SDPs from actual studies
and seeks to demonstrate the importance of the bridge between product systems and their
functional units and LCI results. When the integrity of this bridge is maintained, and common
mistakes avoided, high-quality results can be expected.

References for this Chapter

British Standards Institute (BSI), http://www.bsigroup.com/en-GB/standards/Information-about-standards/different-types-of-standards/, last accessed January 15, 2017.

ISO 2013, http://www.iso.org/iso/home/standards_development.htm, last accessed February 1, 2013.

PE Americas, "Comparative Life Cycle Assessment of an Artificial Christmas Tree and a Natural Christmas Tree", November 2010. http://www.christmastreeassociation.org/pdf/ACTA%20Christmas%20Tree%20LCA%20Final%20Report%20November%202010.pdf

United States Environmental Protection Agency, Life Cycle Assessment: Principles and Practice, EPA/600/R-06/060, May 2006.

Weidema, Bo, Wenzel, Henrik, Petersen, Claus, and Hansen, Klaus, "The Product, Functional Unit and Reference Flows in LCA", Environmental News, 70, 2004. http://lca-net.com/publications/show/product-functional-unit-reference-flows-lca/

Life Cycle Assessment: Quantitative Approaches for Decisions That Matter – lcatextbook.com
Chapter 4: The ISO LCA Standard – Goal and Scope 97

End of Chapter Questions

Objective 1: Describe standards and the steps in creating them

1. Discuss where the LCA Standard fits within the ISO 14000 family of environmental
management standards and the process used to formalize it.

Objective 2: Describe the four major phases of the ISO LCA Standard

2. Name and describe in a few sentences each of the four major phases found in the ISO
LCA standard.

Objective 3: List all of the ISO LCA Standard study design parameters (SDPs)

3. Compile a list of the SDPs listed in the chapter above. For each SDP, explain in a sentence
or two how it relates to the major phases and overall LCA.

Objective 4: Review SDPs given for an LCA study and assess their appropriateness
and anticipate potential challenges in using them

4. Consider the following examples of goal statements for three different hypothetical LCA
studies. Answer the questions (a-b) for each goal statement below.

• “The goal of this study is to find the energy use of making ice cream.”

• “The goal of this study is to produce an LCA for internal purposes.”

• “This study seeks to do a life cycle assessment of a computer to be used for future
design efforts.”

a. Briefly discuss the ISO compliance of the stated goal as written.

b. Propose revisions if needed for the hypothetical goal statement to meet ISO
requirements.

5. Read one of the LCA studies found by using the E-resource link at the end of the chapter.
Summarize the study design parameters of the chosen study, and discuss any discrepancies
or problems found, and how they could be improved.

Objective 5: Generate specific SDPs for an LCA study of your choosing

6. Draw a product system diagram for a paper clip labeling inputs, outputs, intermediate
flows, etc., as in Figure 4-5.

7. Draw a product system diagram for the purchase of an airplane ticket via an electronic
commerce website, labeling inputs, outputs, intermediate flows, etc., as in Figure 4-5.

8. For a hypothetical LCA topic of your choosing, describe how the inventory and impact
sections would be compiled. Discuss the difference between impact and inventory with
reference to examples from your hypothetical LCA.

9. Consider the examples of study design parameters (SDPs) for four hypothetical LCA
studies in the table below.

Fill in the rest of the entries to complete the table. Also, correct any existing entries that
appear wrong. Be sure that the SDPs bridge the various elements of the study using
appropriate values and units.

Product System | Function | Functional Unit | LCI Results
Printed book | | Collect 100 pages of printed text |
Portable flash memory drive | Storing electronic content | | energy per gigabyte
E-book reader | | | CO2 emissions per reader bought
Automobile | | 1 mile driven |

Chapter 5: Data Acquisition and Management for Life Cycle Inventory Analysis

Now that the most important elements of the LCA Standard are understood, we can begin to
think about the work needed to get data for a study. In this chapter, we introduce the inventory
analysis phase of the LCA Standard, and how, within that phase, to acquire and use data for
an LCA or an LCI study. We also build our first life cycle model, and interpret its results. As
data collection, management, and modeling are typically the most time-consuming
components of an LCA, understanding how to work with source data is a critical skill. We
build on concepts from Chapter 2 in terms of referencing and quantitative modeling.
Improving your qualitative and quantitative skills for data management will enhance your
ability to perform great LCAs. While sequentially this chapter is part of the content on process-
based life cycle assessment, much of the discussion is relevant to LCA studies in general.

Learning Objectives for the Chapter


At the end of this chapter, you should be able to:

1. Recognize how challenges in data collection may lead to changes in study design
parameters (SDPs), and vice versa

2. Map information from LCI data modules into a unit process framework

3. Explain the difference between primary and secondary data, and when each might be
appropriate in a study

4. Document the use of primary and secondary data in a study

5. Create and assess data quality requirements for a study

6. Extract data and metadata from LCI data modules and use them in support of a
product system analysis

7. Generate an inventory result from LCI data sources

8. Perform an interpretation analysis on LCI results

ISO Life Cycle Inventory Analysis


After reviewing the ISO LCA Standard and its terminology in Chapter 4, you should be able
to envision the level and type of effort needed to perform an inventory analysis of a chosen
product system. Every study using the ISO Standard has an inventory analysis phase, but as
discussed above, many studies end at this phase and are called LCI studies. Those that continue
on to impact assessment are LCAs. That does not mean that LCI studies have better inventory
analyses than LCAs; in fact, LCAs may require more comprehensive inventory analyses to
support the necessary impact assessment.

Figure 5-1, developed by the US EPA, highlights the types of high-level inputs and outputs
that we might care to track in our inventory analysis. As originally mentioned in Chapter 1, we
may be concerned with accounting for material, energy, or other resource inputs, and product,
intermediate, co-product, or release outputs. Recall that based on how you define your goal,
scope, and system boundary, you may be concerned with all or some of the inputs and outputs
defined in Figure 5-1.

Figure 5-1: Overview of Life Cycle Assessment (Source: US EPA 1993)

Inventory analysis follows a straightforward, repeating workflow that involves the following
steps (taken from ISO 14044:2006), performed as needed until the inventory analysis
matches the then-current goal and scope:

• Preparation for data collection based on goal and scope
• Data Collection
• Data Validation (do this even if reusing someone else’s data)
• Data Allocation (if needed)
• Relating Data to the Unit Process
• Relating Data to the Functional Unit
• Data Aggregation
As the inventory analysis process is iterated, the system boundary and/or goal and scope may
be changed (recall the two-way arrows in Figure 4-1). The procedure starts as simply as needed
and grows more complex as additional processes and flows are added. Each of the inventory
analysis steps is discussed in more detail below, with brief examples for discussion. Several
more detailed examples are shown later in the chapter.

Step 1 - Preparation for data collection based on goal and scope

The goal and scope definition guides which data need to be collected (noting that the goal and
scope may change iteratively during the course of your study and thus may cause additional
data collection effort or previously collected data to be discarded). A key consideration is the
product system diagram and the chosen system boundary. The boundary shows which
processes are in the study and which are not. For every unit process in the system boundary,
you will need to describe the unit process and collect quantitative data representing its
transformation of inputs to outputs. For the most fundamental unit processes that interface
at the system boundary, you will need to ensure that the inputs and outputs are those
elementary flows that pass through the system boundary. For other unit processes (which may
not be connected to those elementary flow inputs and outputs) you will need to ensure they
are connected to each other through non-elementary flows such as intermediate products or
co-products.

When planning your data collection activities, keep in mind that you are trying to represent as
many flows as possible in the unit process shown in Figure 5-2. Choosing which flows to place
at the top, bottom, left, or right of such a diagram is not relevant. The only relevant part is
ensuring inputs flow into and outputs flow out of the unit process box. You want to
quantitatively represent all inputs, either from nature or from the technosphere (defined as the
human altered environment, thus flows like products from other processes). By covering all
natural and human-affected inputs, you have covered all possible inputs. You want to
quantitatively represent outputs, either as products, wastes, emissions, or other releases. Inputs
from nature include resources from the ground, from water, or air (e.g., carbon dioxide to be
sequestered). Outputs to nature will be in the form of emissions or releases to compartments in
the ground, air, or water. Outputs may also be classified as direct human uptake for food
products, medicines, etc.

Figure 5-2: Generalized Unit Process Diagram
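For readers who prefer code to diagrams, the generalized unit process of Figure 5-2 can be sketched as a small data structure. This is purely illustrative: the class and field names below are our own invention, not part of the ISO Standard, and the example flows use the injection molding values that appear later in Figure 5-4.

```python
from dataclasses import dataclass, field

@dataclass
class Flow:
    name: str      # e.g., "water"
    amount: float  # quantity per unit of reference output
    unit: str      # e.g., "gallons"
    kind: str      # inputs: "nature" or "technosphere";
                   # outputs: "product", "waste", or "emission"

@dataclass
class UnitProcess:
    name: str
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

    def reference_output(self):
        """The product flow to which all other flows are scaled."""
        return next(f for f in self.outputs if f.kind == "product")

# A toy injection molding process, normalized to 1 pound of molded part
molding = UnitProcess(
    name="injection molding",
    inputs=[Flow("virgin resin", 1.034, "pounds", "technosphere"),
            Flow("water", 0.08, "gallons", "nature")],
    outputs=[Flow("plastic part", 1.0, "pounds", "product"),
             Flow("solid waste to landfill", 0.016, "pounds", "waste")],
)
print(molding.reference_output().name)  # plastic part
```

Keeping inputs and outputs as explicit lists of typed flows mirrors the requirement that every input be traced either to nature or to the technosphere, and every output to a product, waste, or release.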

As a tangible example, imagine a product system like the mobile phone example in Chapter 4
where we have decided that the study should track water use as an input. Any of the unit
processes within the system boundary that directly uses water will need a unit process
representation with a quantity of water as an input and some quantitative measure of output
of the process. For mobile phones, such processes that use water as a direct input from nature
may include plastic production, energy production, and semiconductor manufacturing. Other
unit processes within the boundary may not directly consume water, but may tie to each other
through flows of plastic parts or energy. They themselves will not have water inputs, but by
connecting them all together, in the end, the water use of those relevant sectors will still be
represented. The final overall accounting of inventory inputs and/or outputs across the life
cycle within the system boundary is called a life cycle inventory result (or LCI result).

The unit process focus of LCA drives the need for data to quantitatively describe the
processes. If data is unavailable or inaccessible, then the product system, system boundary,
or goal may need to be modified. Data may be available but found not to fit the study. For
example, an initial system boundary may include a waste management phase, but months of
effort could fail to find relevant disposition data for a specific product of the process. In this
case, the system boundary may need to be adjusted (made smaller) and other SDPs edited to
represent this lack of data in the study. On the other hand, data that is assumed to not be
available at first may later be found, which would allow an expansion of the system boundary.
In general, system boundaries are made smaller, not larger, over the course of a study.

Step 2 - Data Collection

For each process within the system boundary, ISO requires you to “measure, calculate, or
estimate” data to quantitatively represent the process in your product system model. In LCA,
the ‘gold standard’ is to collect your own data for the specific processes needed, called primary
data collection. This means directly measuring the inputs and outputs of a process on-site for
the specific machinery use or transformation that occurs. For example, if you required primary
data for energy use of a process in an automobile assembly line that fastens a component on
to the vehicle with a screw, you might attach an electricity meter to the piece of machinery
that attaches the screw. If you were trying to determine the quantity of fuel or material used
in an injection molding process, you could measure those quantities as they enter the machine.
If you were trying to determine the quantity of emissions you could place a sensor near the
exhaust stack.

If you collect data with methods like this, intended to inventory (i.e., count and categorize)
per-unit use of inputs or outputs, you need to use statistical sampling and other methods to
ensure you generate statistically sound results. That means not simply attaching the electricity
meter one time, or measuring fuel use or emissions during one production cycle (one unit
produced). You should repeat the same measurement multiple times, and perhaps on multiple
pieces of identical equipment, to ensure that you have a reasonable representation of the
process and to guard against the possibility that you happened to sample a production cycle
that was overly efficient or inefficient with respect to the inputs and outputs. The ISO
Standard gives no specific guidance or rules for how to conduct repeated samples or the
number of samples to find, but general statistical principles can be used for these purposes.
Your data collection summary should then report the mean, median, standard deviation, and
other statistical properties of your measurements. In your inventory analysis you can then
choose whether to use the mean, median, or a percentile range of values.
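As an illustration, the summary statistics for a set of repeated measurements can be computed with Python's standard library; the kWh readings below are invented for the sketch.

```python
import statistics

# Hypothetical electricity readings (kWh per production cycle) taken by
# attaching a meter to the same fastening machine on repeated cycles
readings = [0.52, 0.48, 0.55, 0.50, 0.47, 0.53, 0.49, 0.51]

mean = statistics.mean(readings)      # about 0.506 kWh
median = statistics.median(readings)  # about 0.505 kWh
stdev = statistics.stdev(readings)    # sample standard deviation

print(f"mean={mean:.3f}, median={median:.3f}, stdev={stdev:.3f} kWh")
```

A small standard deviation relative to the mean suggests the sampled cycles are representative; a large one is a signal to sample more cycles or more machines before settling on a value.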

Note that many primary data collection activities cannot be completed as described above. It
may not be possible to gain access to the input lines of a machine to measure input use on a
per-item processed basis. You thus may need to collect data over the course of time and then
use total production during that time to normalize the unit process inventory. For the
examples in the previous paragraph, you might collect electricity use for a piece of machinery
over a month and then divide by the total number of vehicles that were assembled. Or you
may track the total amount of fuel and material used as input to the molding machine over the
course of a year. In either case, you would end up with an averaged set of inputs and/or
outputs as a function of the product(s) of the unit process. The same general principles
discussed above apply here with respect to finding multiple samples. In this case you could
find several monthly values or several yearly values to find an average, median, or range.
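This time-averaged approach reduces to a simple normalization; the monthly totals below are invented numbers for the assembly machinery example.

```python
# Hypothetical monthly totals for one piece of assembly machinery
monthly_kwh = [3100, 2900, 3250, 3000]       # electricity use per month
monthly_vehicles = [1550, 1450, 1600, 1520]  # vehicles assembled per month

# Per-vehicle electricity for each month, then a representative average
per_vehicle = [kwh / n for kwh, n in zip(monthly_kwh, monthly_vehicles)]
average = sum(per_vehicle) / len(per_vehicle)

print([round(v, 3) for v in per_vehicle])  # [2.0, 2.0, 2.031, 1.974]
print(round(average, 3))                   # 2.001
```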

ISO 14044:2006 (Annex A) gives examples of ‘data collection sheets’ to support your primary
data collection activities. ISO 14049:2012 (Section 5) also provides suggestions on organizing
data. The examples are provided to ensure, among other things, that you are recording
quantities and units, dates and locations of record keeping, and descriptions of sampling done.
The most likely scenario is that you will create electronic data collection sheets by recording
all information in a spreadsheet. This is a fair choice because, from our perspective, Microsoft
Excel is the most widely used software tool in support of LCA. Even practitioners using
other advanced LCA software packages still typically use Microsoft Excel for data
management, intermediate analysis, and graphing.

Collecting primary data can be difficult or impossible if you do not own all the equipment or
do not have direct access to it either due to geospatial or organizational barriers. This is often
the case for an LCA consultant who may be tasked with performing a study for a client but
who is given no special privileges or access to company facilities. Further, you may need to
collect data from processes that are deemed proprietary or confidential by the owner. This is
possible in the case of a comparative analysis with some long-established industry practice
versus a new technology being proposed by your client or employer. In these cases, the
underlying data collection sheets may be confidential. Your analysis may in these cases only
“internally use” the average data points without publicly stating the quantities found in any
subsequent reports. If the study is making comparative assertions, then it may be necessary to
grant to third-party reviewers (who have signed non-disclosure agreements) access to the data
collection sheets to appreciate the quality of the data and to assess the inventory analysis done
while maintaining overall confidentiality.

Standards for Process Evaluation and Measurement

Standards can be used to clarify the process of collecting data useful for LCA. First are the ISO
14000 family of standards, including the specifics listed in Section 4.3.2 of ISO 14044. You may
find, however, that the top-down approach provided in ISO 14044 does not provide sufficient
guidance on how to collect the detailed data needed for quantitative LCA analyses.

There are emerging bottom-up methods, such as ASTM E3012-16, that provide additional
granularity to the process of data collection in support of LCA studies. ASTM E3012-16 (part of
the E60.13 standard family focused on “characterizing industry manufacturing processes for
sustainability-related decisions”) outlines a method to characterize and measure processes to
support meaningful sustainability analyses. Characterization in this standard identifies unit
manufacturing processes (UMPs), as well as their key performance indicators and boundaries. It
identifies attributes like its inputs, outputs, and transformation functions. These should sound
very similar to the LCA Standard’s terminology from Chapter 4.

By following the measurement guidelines of the standard, the process inputs and outputs can be
more efficiently inventoried, and tracked over time to show progress towards sustainable
manufacturing goals.

Beyond issues of access, while primary data is considered the ‘gold standard’, there are various
reasons why the result may not be as good as expected in the context of an LCA study. First,
the data is only as good as the measurement device (see accuracy and precision discussion in
Chapter 2). Second, if you are not able to measure it yourself then you outsource the
measurement, verification, and validation to someone else and trust them to do exactly as you
require. Various problems may occur, including issues with translation (e.g., when measuring
quantities for foreign-owned or contracted production) or not finding contacts with sufficient
technical expertise to assist you. Third, you must collect data on every input and output of the
process relevant to your study. If you are using only an electric meter to measure a process
that also emits various volatile organic compounds, your collected data will be incomplete with
respect to the full litany of inputs and outputs of the process. Your inventory for that process
would undercount any other inputs or outputs. This is important because if other processes
in your system boundary track volatile organics (or other inputs and outputs) your primary
data will undercount the LCI results.

The alternative to primary data collection is to use secondary data (the “calculating and
estimating” referenced above). Broadly defined, secondary data comes from life cycle
databases, literature sources (e.g., from searches of results in published papers), and other past
work. It is possible you will find data closely, but not exactly, matching the required unit
process. Typical tradeoffs to accessibility are that the secondary data identified is for a different
country, a slightly different process, or averaged across similar machinery. That does not mean
you cannot use it – you just need to carefully document the differences between the process
data you are using and the specific process needed in your study. While the label ‘secondary’
may suggest inferiority, in some cases secondary data may be of comparable or even higher
quality than primary data. Secondary data can typically be found because it has been published
by the original author who generated it as primary data for their own study (and thus is typically
of good quality). In short, one analyst’s primary data may be another’s secondary data. Again,
the ‘secondary’ designation is simply recognition that it is being ‘reused’ from a previously
existing source and not collected new in your own study. Many credible and peer reviewed
studies are constructed mostly or entirely of secondary data. More detail on identifying and
using secondary data sources like LCI databases is below.

For secondary data, details about the secondary source (including a full reference), the
timestamp of the data record, and when you accessed it should be given. You must
quantitatively maintain the correct units for the inputs and outputs of the unit process. While
not required, it is convenient to make tables that neatly summarize all of this information.

Regardless of whether your data for a particular process comes from a primary or secondary
source, the ISO Standard requires you to document the data collection process, give details on
when data have been collected, and other information about data quality. Data quality
requirements (DQRs) are required scope items that we did not discuss in Chapter 4 as part
of the SDP, but characterize the fundamental expectations of data that you will use in your
study. As specified by ISO 14044:2006, these include statements about your intentions with
respect to age of data, geospatial reach, completeness, sources, etc. Data quality indicators
are summary metrics used to assess the data quality requirements.

For example, you may have a data quality requirement that says that all data will be primary,
or at least secondary but from peer-reviewed sources. For each unit process, you can have a
data quality indicator noting whether it is primary or secondary, and whether it has been peer-
reviewed. Likewise, you may have a DQR that says all data will be from the same geospatial
region (e.g., a particular country like the US or a whole region like North America). It is
convenient to summarize the DQRs in a standardized tabular form. The first two columns of
Figure 5-3 show a hypothetical DQR table partly based on text from the 2010 Christmas tree
study mentioned previously. The final column represents how the requirements might be
indicated as a summary in a completed study. The indicated values are generally aligned with
the requirements (as they should be!).

Data Quality Category | Requirement | Data Quality Indicator
Temporal | Data within 10 years of study | Artificial trees: 2009 data; Natural trees: 2002-2009 data
Geospatial | Data matches local production | Artificial trees: China; Natural trees: US
Technological | Most common production process basis | All processes used in study are representative of most common practices

Figure 5-3: Sample Data Quality Requirements (DQR) Table

Beyond using primary or secondary data, you might need to estimate the parameters for some
or all of the input and outputs of a unit process using methods as introduced in Chapter 2.
Your estimates may be based on data for similar unit processes (but which you deem to be
too dissimilar to use directly), simple transformations based on rules of thumb, or triangulated
averages of several unit processes. From a third-party perspective, estimated data is perceived
as lower quality than primary or secondary sources. However when those sources cannot be
found, estimating may be the only viable alternative.

Example 5-1: Estimating energy use for a service

Question: Consider that you are trying to generate a unit process associated with an internal
corporate design function as part of the life cycle “overhead” of a particular product and, given
the scope of your study, need to estimate its electricity input. Your company is located entirely
in one building. There is no obvious output unit for such a process, so you could define it to be per
1 product designed, per 1 square foot of design space, etc., as convenient for your study.

Answer: You could estimate the input electricity use for a design office over the course of a
year and then try to normalize the output. If you only had annual electricity use for the entire
building (10,000 kWh), and no special knowledge about the energy intensity of any particular
part of the building as subdivided into different functions like sales and design, you could find
the ratio of the total design space in square feet (2,000 sf) as compared to the total square feet of
the building (50,000 sf), and use that ratio (2/50) to scale down the total consumption to an
amount used for design over the course of a year (400 kWh). If your output was per product, you
could then further normalize the electricity used for the design space by the unique number of
products designed by the staff in that space during the year.

You could add consideration of non-electricity use of energy (e.g., for heating or cooling) with
a similar method. Note that such ancillary support services like design, research and
development, etc., generally have been found to have negligible impacts, and thus many studies
exclude these services from their system boundaries.
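The arithmetic of Example 5-1 can be written out directly; the product count in the last step is a hypothetical value we add for illustration.

```python
building_kwh = 10_000  # annual electricity use for the whole building (kWh)
design_sf = 2_000      # design office floor area (square feet)
building_sf = 50_000   # total building floor area (square feet)

# Scale total consumption by the design office's share of floor area
design_kwh = building_kwh * design_sf / building_sf
print(design_kwh)  # 400.0 kWh per year for the design function

# Hypothetical: if 20 unique products were designed that year,
# normalize further to a per-product basis
per_product_kwh = design_kwh / 20
print(per_product_kwh)  # 20.0 kWh per product designed
```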

Step 3 - Data Validation

Chapter 2 provided some general guidance on validating research results. With respect to
validating LCI data, you generally need to consider the quantitative methods used and ensure
that the resulting inventories meet your stated DQRs. Data validation should be done after
data is collected but before you move on to the actual inventory modeling activities of your
LCA.

As an example of validation, it may be useful to validate energy or mass balances of your
processes. Using the injection molding process example from Step 2, one would expect the
total input mass of material to be greater than (but approximately equal to) the output
mass of molded plastic. You can ensure that the total mass input of plastic resin, fuels, etc., is
roughly comparable to the mass of molded plastic (subject to reasonable losses). If the
balances are deemed uneven, you can assess whether the measured process is merely inefficient
or whether there is a problem in your data collection, and thus resample.
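Such a balance check is easy to automate. The sketch below is ours, and the 5% loss tolerance is an assumed threshold, not anything specified by ISO.

```python
def check_mass_balance(input_mass, output_mass, max_loss_fraction=0.05):
    """Return True if outputs do not exceed inputs and losses are within tolerance."""
    if output_mass > input_mass:
        return False  # more mass out than in signals a data collection error
    loss_fraction = (input_mass - output_mass) / input_mass
    return loss_fraction <= max_loss_fraction

# Injection molding: 1.034 lb resin in, 1.0 lb part + 0.016 lb waste out
print(check_mass_balance(1.034, 1.0 + 0.016))  # True (about 1.7% loss)
print(check_mass_balance(1.034, 0.80))         # False (about 23% loss: resample)
```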

You can use available secondary data to validate primary data collection. If you have chosen
to collect your own data for a process that is similar to processes for which there is already
secondary data available, you can quantitatively compare your measured results with the
published data. Again, if there are significant differences then you will need to determine the
source of the discrepancy. You can validate secondary data that you have chosen to use against
other sources in similar ways.
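A simple percent-difference comparison supports this kind of cross-check; the energy intensities below are invented numbers.

```python
def percent_difference(primary, secondary):
    """Signed difference of a primary value relative to a secondary value, in percent."""
    return 100 * (primary - secondary) / secondary

# Hypothetical energy intensities (BTU per pound of molded part)
measured_2012 = 7560   # primary data collected for this study
published_2005 = 8400  # secondary data from the literature

print(percent_difference(measured_2012, published_2005))  # -10.0
```

What counts as a "significant" difference is a judgment call that should be stated explicitly in the study, as the sample validation text below illustrates.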

The results of validation efforts can be included in the main text of your report or in an
included Appendix, depending on the level of detail and explanation needed. If you collected
primary data and compared it to similar data from the same industry, the following text might
be included to show this:

“Collected data from the year 2012 on the technology-specific process used in this study
was compared to secondary data on the similar injection molding process from 2005
(Reference). The mean of collected data was about 10% lower than the secondary data.
This difference is not significant, and so the collected data is used as the basis for the
process in the study.”

If validation suggests the differences are more substantial, that does not automatically mean
that the data is invalid. It is possible that there are no good similar data sources to compare
against, or that the technology has changed substantially. That too could be noted in the study,
such as:

“Collected data from the year 2012 on the technology-specific process used in this study
was compared to secondary data on the similar injection molding process from 2005
(Reference). The mean of collected data was about 50% lower than the secondary data.
This difference is large and significant, but is attributed to the significant improvements
in the industry since 2005, and so the collected data is still chosen as the basis for the
process in the study.”

As noted above, the validation step is where you re-assess whether the quantitatively sound
data you want to use also is within the scope of your DQRs. Many studies state DQRs to use
all primary data at the outset, but subsequently realize it is not possible. Likewise, studies may
not be able to find sufficient geospatially focused data. In both cases, the DQRs would need
to be iteratively adjusted as the study continues. This constant refining of the initial goal and
scope may sound like “cheating”, but the DQRs serve as a single, convenient summary of the
study’s data quality goals. They allow a reader to quickly get a sense of how
relevant the study results are given the final DQRs. While not required, you can state initial
goal DQRs alongside final DQRs upon completion of the study.

Step 4 - Data Allocation (if needed)

Allocation will be discussed more in Chapter 6, but in short, allocation is the quantitative
process done by the study analyst to assign specific quantities of inputs and outputs to the
various products of a process based on some mathematical relation between the products. For
example, you may have a process that produces multiple outputs, such as a petroleum refinery
process that produces gasoline, diesel, and other fuels and oils. Refineries use a significant
amount of energy. Allocation is needed to quantitatively connect the energy input to each of
the refined products. Without specified allocation procedures, the connections between those
inputs and the various products could be done haphazardly. The ISO Standard suggests that
the method you use to perform the allocation should be based on underlying physical
relationships (such as the share of mass or energy in the products) when possible. For example,
if your product of interest is gasoline, you will need to determine how much of the total
refinery energy was used to make the gasoline. For a mass allocation, you could calculate it by
using the ratio of the mass of the gasoline produced to the total mass of all of the products.
You may have to further research the energetics of the process to determine what allocation
method is most appropriate.

If physical relationships are not possible, then other methods such as economic allocation
(e.g., by eventual sale price) could be used. ISO also says that you should consistently choose
allocation methods as much as possible across your product system, meaning that you should
try not to use a mass-based allocation most of the time and an energy-based allocation some
of the time. This is because the mixing of allocation methods could be viewed by your audience
or reviewers as a way of artificially biasing results by picking allocations that would lead to
desired results. Allocation is conceptually similar to the design-space electricity example in
Example 5-1. Most allocations are just linear transformations of effects.

Life Cycle Assessment: Quantitative Approaches for Decisions That Matter – lcatextbook.com
110 Chapter 5: Data Acquisition and Management for Life Cycle Inventory Analysis

When performing allocation, the most important considerations are to fully document the
allocation method chosen (including underlying allocation factors) and to ensure that total
inputs and outputs are equal to the sum of the allocated inputs and outputs. It is possible that
none of your unit processes have multiple products, and thus you do not need to perform
allocation. You might also be able to avoid allocation entirely, as we will see later.

Step 5 - Relating Data to the Unit Process

In this step you scale the various collected data into a representation of the output of the unit
process. Regardless of how you have defined the study overall, this step requires you to collect
all of the inputs and outputs as needed for 1 unit output from that process. The first two
columns of Figure 5-4 adapt process data from Franklin Associates (2011) on behalf of the
American Chemistry Council (ACC) for injection molding processes in the US. The data was
collected via a survey of select ACC member companies that have injection molding
operations.

Flow                                         Raw LCI       LCI Flow per    LCI Flow per
                                                           pound of part   functional unit
                                                                           need (0.13 pound)

Output of Plastic Part (pounds)              1,000         1               0.13
Input of Virgin Resin (pounds)               1,034         1.034           0.134
Input of Water (gallons)                     80            0.08            0.01
Input of Energy (BTU)                        8.4 million   8,400           1,100
Output of solid waste to landfill (pounds)   16            0.016           0.002

Figure 5-4: Excerpted LCI Data for Injection Molded Plastic Parts
Source: Franklin Associates (2011)

In terms of relating the data to the injection molding unit process, note that the data has
already been scaled to the basis of 1,000 pounds of plastic injection molded part. It may
separately be desirable to scale it on the basis of per 1 pound (or 1 kg) of molded part, which
could be done easily by dividing the provided results by 1,000 (or by about 454, since 1,000
pounds is roughly 454 kg), as shown in
column three. These results also need to be validated, for example, by using mass balances.
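The rescaling and a simple validation check can be sketched as follows, using the Figure 5-4 values; the per-pound figures are just the raw values divided by the 1,000-pound basis:

```python
# Figure 5-4 raw LCI data, reported per 1,000 pounds of molded part
# (abridged from Franklin Associates 2011 as in the text).
raw_per_1000_lb = {
    "plastic part out (lb)": 1000,
    "virgin resin in (lb)": 1034,
    "water in (gal)": 80,
    "energy in (BTU)": 8_400_000,
    "solid waste out (lb)": 16,
}

# Rescale to a per-pound basis by dividing by the 1,000 lb reference output.
per_lb = {flow: qty / 1000 for flow, qty in raw_per_1000_lb.items()}
print(per_lb["virgin resin in (lb)"])    # 1.034

# One validation check: input mass should roughly cover the output masses.
mass_in = per_lb["virgin resin in (lb)"]
mass_out = per_lb["plastic part out (lb)"] + per_lb["solid waste out (lb)"]
print(round(mass_in - mass_out, 3))      # 0.018 lb not accounted for
```

A small residual like the 0.018 lb here would prompt a question back to the data source (e.g., unreported losses or emissions) rather than being silently ignored.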

Step 6 - Relating Data to the Functional Unit

The reason why this step is included in the ISO LCA Standard is to remind you that you are
doing an overall study on the basis of 1 functional unit of product output, not simply of 1 unit


of all processes involved. Either during the data collection phase, or in subsequent analysis,
you will need to scale the data collected so that the relative amount of product or intermediate
output of the unit process is related to the amount needed per functional unit. Eventually, all
of your unit process flows will need to be scaled to a per-functional unit basis. If all unit
processes have been modified as such, then finding the total LCI results per functional unit is
a trivial procedure.

Building on the injection molding example, the reason you may be interested in an injection
molding unit process is because you are studying the life cycle of milk with the functional unit
of ‘1 gallon of milk’, which is assumed to require milk, molded plastic packaging, and adhesive
labels. You would collect data for each of the components of the packaged milk. If the empty
plastic milk container has a mass of 2 ounces (0.13 pounds), then the excerpted data above
could be used by scaling all of the ‘per pound’ values by 0.13, as shown in column four. This
result also needs to be validated.
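Continuing the milk-container example, scaling the per-pound flows to the 0.13-pound functional unit is a single multiplication (values taken from column three of Figure 5-4):

```python
# Per-pound flows from column three of Figure 5-4.
per_lb = {
    "virgin resin (lb)": 1.034,
    "water (gal)": 0.08,
    "energy (BTU)": 8400,
    "solid waste (lb)": 0.016,
}

CONTAINER_MASS_LB = 0.13    # one empty milk container (2 ounces)

# Scale each flow to the amount needed per functional unit (1 gallon of milk).
per_functional_unit = {f: q * CONTAINER_MASS_LB for f, q in per_lb.items()}
print(round(per_functional_unit["energy (BTU)"], 1))   # 1092.0 (about 1,100 BTU,
                                                       # as in column four)
```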

Step 7 - Data Aggregation

In this step, all unit process data in the product system diagram are combined into a single
result for the modeled life cycle of the system. What this typically means is summing all
quantities of all inputs and outputs into a single total result on a functional unit basis.

Aggregation occurs at multiple levels. Figure 4-4 showed the various life cycle stages within
the view of the product system diagram. A first level of aggregation may add all inputs and
outputs under each of the categories of raw material acquisition, use, etc. A second level of
aggregation may occur across all of these stages into a final total life cycle estimate of inputs
and outputs per functional unit. Aggregated results are often reported in a table showing total
inputs and outputs on per-process, or per stage, values, and then a sum for the entire product
system. Example 5-2 shows aggregated results for a published study on wool from sheep in
New Zealand. The purpose of such tables is to emphasize category level results, such as that
half of the life cycle energy use occurs on farm. Results could also be graphed.

Example 5-2: Aggregation Table for Published LCA on Energy to Make Wool
(Source: The AgriBusiness Group, 2006)

Life Cycle Stage     Energy Use (GJ per tonne wool)
On Farm              22.6
Processing           21.7
Transportation        1.5
Total                45.7
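As a minimal sketch, aggregating the stage-level results of Example 5-2 is just a summation, after which category shares can be computed. (Note that the stage values sum to 45.8; the published total of 45.7 reflects rounding in the source.)

```python
# Stage-level energy results from Example 5-2 (GJ per tonne wool).
stages = {"On Farm": 22.6, "Processing": 21.7, "Transportation": 1.5}

total = sum(stages.values())
print(round(total, 1))                 # 45.8

# Category shares support interpretation statements like "about half on farm".
shares = {stage: value / total for stage, value in stages.items()}
print(round(shares["On Farm"], 2))     # 0.49
```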


Beyond such tables, product system diagrams may be annotated with values for different levels
of aggregation by adding quantities per functional unit. Example 5-3 shows a diagram for a
published study on life cycle effects of bottled water and other beverage systems performed
for Nestle Waters. Such values can then be aggregated into summary results.

Example 5-3: Aggregation Diagram for Bottled Water (Source: Quantis, 2010)

We have above implied that aggregation of results occurs over a relatively small number of
subcomponents. However, a product system diagram may be decomposed into multiple sets
of tens or hundreds of constituent pieces that need to be aggregated. If all values for these
subcomponents are on a functional unit basis, the summation is not difficult, but the
bookkeeping of quantities per subcomponent remains an issue. If the underlying
subcomponent values are not consistently on a per functional unit basis, units of analysis
should be double checked to ensure they can be reliably aggregated.


Life Cycle Interpretation


Because some studies include only an inventory (LCI), we discuss Interpretation, the final phase
for both LCAs and LCIs, now. For studies (LCAs) that also include an impact assessment,
the assessment procedures will be discussed in Chapter 10. There is little detail
provided in the ISO Standard on what must be done in this phase, but interpretation is similar
to the last step of the “three step” method introduced in Chapter 2. The interpretation phase
refers to studying the results of the goal and scope, inventory analysis, and impact assessment,
in order to make conclusions and recommendations that can be reported. As shown in Figure
4-1, interpretation is iterative with the three other phases. As this chapter is focused on
inventory analysis, much of the discussion and examples provided relate to interpreting
inventory results, but the same types of interpretation can be done with impact assessment
results (to be discussed in Chapter 10).

A typical first task in interpretation is to study your results to determine whether conclusions
can be made based on the inventory results that are consistent with the goal and scope. One
of the most common and important interpretation tasks involves discussing which life cycle
stage leads to the largest share of LCI results. A high-level summary helps to set the stage for
subsequent analyses. For example, an LCA of a vehicle will likely show that the use phase
(driving the car) is the largest energy user, as compared to manufacturing and recycling. An
interpretation task could involve creating a tabular or graphical summary showing the energy
use contributions for each of the stages. The interpretation of results from the study
summarized in Example 5-2 could note that energy use on farms is about equal to that in the
processing stage, and that transportation energy use appears negligible.

Part of your goal statement may have been to do a comparison between two types of products
and assess whether the life cycle energy use of one is significantly less than the other. If your
inventory results for the two products are nearly identical (say only 1% different) then it may
be difficult to scientifically conclude that one is better than the other given the various
uncertainties involved. Such an interpretation result could cause you to directly state that no
appreciable difference exists, or it may cause you to change the system boundary in a way that
ends up making them significantly different.

A key part of interpretation is performing relevant sensitivity analyses on your results. The
ISO Standard does not require specific sensitivity analysis scenarios as part of interpretation,
but some consideration of how alternative parameters for inputs, outputs, and methods used
(e.g., allocation) would affect the final results is necessary. As discussed in Chapter 2, a main
purpose of sensitivity analysis is to help assess whether a qualitative conclusion is affected by
quantitative changes in the parameters of the study. For example, if your general qualitative
conclusion is that product A uses significantly less energy than product B, the sensitivity
analysis may test whether different quantitative assumptions related to A or B lead to results
where energy use of A is roughly equal to B, or where A is greater than B. Either of the latter
two outcomes is qualitatively different from the initial conclusion, and it would be important


for the sensitivity results to be stated so that it is clear there is a variable that, if credibly
changed by a specified amount, has the potential to alter the study conclusions.
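This kind of qualitative-conclusion check can be sketched in code. All of the numbers below are hypothetical and simply illustrate a conclusion flipping at the high end of one parameter's credible range:

```python
# Hypothetical comparison: product A's life cycle energy has a fixed
# production component plus an uncertain use-phase component (all in MJ).
def energy_A(use_phase_mj):
    return 120 + use_phase_mj

ENERGY_B = 300   # assumed fixed life cycle result for product B

for use_phase in (100, 150, 200):      # low / base / high scenarios
    verdict = "A < B" if energy_A(use_phase) < ENERGY_B else "A >= B"
    print(use_phase, energy_A(use_phase), verdict)
# prints:
# 100 220 A < B
# 150 270 A < B
# 200 320 A >= B   <- the qualitative conclusion flips at the high scenario
```

Because the conclusion flips within a credible range, the use-phase assumption would need to be flagged explicitly in the study conclusions.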

While on the subject of assessing comparative differences, it is becoming common for
practitioners in LCA to use a ‘20% rule’ when testing for significant differences. The 20% rule
means that two LCI results, such as those for two competing products, must differ by more
than 20% to be deemed significantly different, and thus for one to be declared lower than the
other. While there is no rigorous quantitative framework behind the choice of 20% specifically,
this heuristic is common because it roughly reflects the fact that all data used in such studies
is inherently uncertain; by requiring differences of more than 20%, relatively small differences
are deemed too small to be noted in study conclusions. We will talk more about modeling and
assessing uncertainties in Chapter 7.
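A minimal implementation of the 20% heuristic might look like the following. Computing the difference relative to the larger result is one reasonable choice; the rule itself does not specify a denominator:

```python
def significantly_different(a, b, threshold=0.20):
    """Heuristic '20% rule': call two LCI results different only if they
    differ by more than the threshold (here, relative to the larger value)."""
    return abs(a - b) / max(a, b) > threshold

print(significantly_different(100, 99))   # False: a 1% gap is inconclusive
print(significantly_different(100, 70))   # True: a 30% gap can be reported
```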

Interpretation can also serve as an additional check on the goal and scope parameters. This is
where you could assess whether a system boundary is appropriate. As an example, while the
ISO Standard encourages full life cycle stage coverage within system boundaries, it does not
require that every LCA encompass all stages. One could try to defend the validity of a life cycle
study of an automobile that focused only on manufacturing, or only on the use stage. The
results of the interpretation phase could then internally weigh in on whether such a decision
was appropriate given the study goal. If a (qualified) conclusion can be drawn, the study could
be left as-is; if not, a broader system boundary could be chosen, with or without preliminary
LCI results.

Regardless, the real purpose of interpretation is to improve the quality of your study, especially
the quality of the written conclusions and recommendations that arise from your quantitative
work. As with other quantitative analysis methods, you will need to also improve your
qualitative skills, including documentation, to ensure that your interpretation efforts are
worthwhile.

Identifying and Using Life Cycle Data Sources


In support of modeling the inputs and outputs associated with unit processes, you will need a
substantial amount of data. Even studies of simple product systems may require data on 10
different unit processes. While this may sound like a small amount of effort, as you will see
below, the task of finding, documenting, manipulating, validating and using life cycle data is
time consuming. The text above gave a fair amount of additional detail related to developing
your own primary data via collection and sampling efforts. This section is related to the
identification and use of secondary data.


One prominent source of secondary data is the thousands of peer-reviewed journal papers
done over time by the scientific community, also known as literature sources. Some of these
papers have been explicitly written to be a source of secondary data, while authors of other
papers developed useful data in the course of research (potentially on another topic) and made
the process-level details available as part of the paper or in its supporting information.
Sometimes the study authors are not just teams of researchers, but industry associations or
trade groups (e.g., those trying to disseminate the environmental benefits of their products).
Around the world, industry groups like Plastics Europe, the American Chemistry Council, and
the Portland Cement Association have sponsored projects to make process-based data
available via publications. It is common to see study authors citing literature sources, and doing
so requires you to simply use a standard referencing format like you would for any source.
Unfortunately, data from such sources is typically not available in electronic form, and thus
there is potential for data entry or transcription errors as you try to make use of the
published data. It is due to issues like these that literature sources constitute a relatively small
share of secondary data used in LCA studies.

There is a substantial amount of secondary data available to support LCAs in various life cycle
databases. These databases are the main source of convenient and easy to access secondary
data. Some of the data represented in these databases are from the literature sources mentioned
above. Since the first studies mentioned in Chapter 1, various databases comprised of life cycle
inventory data have been developed. The original databases were sold by Ecobilan and others
in the mid-1990s. Nowadays the most popular and rigorously constructed database is from
ecoinvent, developed by teams of researchers in Switzerland and available either by paying
directly for access to their data website or by an add-on fee to popular LCA system tools such
as SimaPro and GaBi (which in turn have their own databases). None of these databases are
free, and a license must be obtained to use them. On the other hand, there are a variety of
globally available and publicly accessible (free) life cycle databases. In the US, LCI data from
the National Renewable Energy Laboratory (NREL)’s LCI database and the USDA’s LCA
Digital Commons are popular and free.4 Figure 5-5 summarizes the major free and paid life
cycle databases (of secondary data) in the world that provide data at the unit process level for
use in life cycle studies. Beyond the individual databases, there is also an “LCA Data Search
Engine,” managed by the United Nations Environmental Programme (UNEP), that can assist
in finding available free and commercial unit process data (LCA-DATA 2013). All of the
databases have their own user’s guides that you should familiarize yourself with before
searching or using the data in your own studies.

4Data from the NREL US LCI Database has been transferred over to the USDA LCA Digital Commons as of 2012. Both datasets can
now be accessed from that single web database.


Database                    Approximate Cost                Processes   Notes

ecoinvent                   2,500 Euros ($3,000 USD)        4,000+      Has data from around the world, but the
                                                                       majority is from Europe. Available directly,
                                                                       or embedded within LCA software.

US LCI Database             Free (companies, agencies       1,000+      US focused. Now hosted by USDA LCA
                            pay to publish data)                        Digital Commons.

USDA LCA Digital Commons    Free (manufacturers and         30,000+     Focused on agricultural products and
                            agencies pay to publish data)               processes. Geospatially specific unit
                                                                       processes for specific US states.

ELCD                        Free                            300+        Relatively few processes, spread across
                                                                       various sectors. Additional data being
                                                                       added rapidly.

BEES                        Free                                        Focused on building and construction
                                                                       materials.

GaBi                        $3,000 USD                      5,000+      Database made by PE International. Global,
                                                                       but heavily focused on European data.

Figure 5-5: Summary of Data Availability for Free and Licensed LCA Databases
(Sources provided at end of chapter)

These databases can be very comprehensive, with each containing data on hundreds to
thousands of unique processes, with each process comprised of details for potentially
hundreds of input or output flows. Collecting the various details of inputs and outputs for a
particular unit process (which we refer to as an LCI data module but which are referred to
as ‘datasets’ or ‘processes’ by various sources) requires a substantial amount of time and effort.
This embedded level of effort for unit process data is important because, even though a
database module is a secondary data source, creating a superior set of primary data for a study
might require you to collect data for 100 or more input and output flows for the process. Of course
your study may have a significantly smaller scope that includes only 5 flows, and thus your
data collection activities would only need to measure those. The databases do highlight an
ongoing conundrum in the LCA community – the naïve stated preference for primary data
when substantial high-quality secondary data is pervasive. Another benefit of these databases
is that subsets of the data modules are created and maintained consistently, thus a common
set of assumptions or methods would be associated with hundreds of processes. This is yet
another difference from primary data, which could have a set of ad-hoc assumptions used in its
creation.


In the rest of this chapter we consider an LCI of the CO2 emitted to generate 1 kWh of coal-
fired electricity in the United States. Our system boundary for this example (as in Figure 5-6)
has only three unit processes: mining coal, transporting it by rail, and burning it at a power
plant. The refinery process that produces diesel fuel, an input for rail, is outside of our
boundary, but the effects of using diesel as a fuel are included.

Figure 5-6: Product System Diagram for Coal-Fired Electricity LCI Example

To achieve our goal of the CO2 emitted per kWh, we will need to find process-level data for
coal mining, rail transportation, and electricity generation. In the end, we will combine the
results from these three unit processes into a single estimate of total CO2 per kWh. This way
of performing process-level LCA is called the process flow diagram approach.


A note about the limitations of the process flow diagram approach

The system boundary for the coal-fired electricity example used in this chapter has only three unit
processes: mining coal, transporting it by rail, and burning it at a power plant. We assume for now, beyond
the fact that this is an academic example, that such a tight boundary is warranted because these processes
are known to be significant parts of the supply chain of making coal-fired power with respect to CO2
emissions. But by using such a limited number of processes in a product system as complex as electricity
generation, we are ignoring processes that may lead to significant impacts (even if not associated with
CO2).

We will discuss the use of matrix-based screening methods to help us set and assess the effect of system
boundaries in Chapters 8 and 9, and will generally find that significant underestimation (error) occurs as
a result of having truncated (cut off) the scope of product systems to include only a few processes. In
systems where high-quality process data exist for entire systems, the matrix-based methods will yield
far superior results to those possible with a process flow diagram approach. As such, the example
provided throughout this chapter is perhaps most useful in ensuring that you are able to see how life cycle
data can fit together into a model, and to explicitly follow the calculation chain for a process model.

We will focus on the US LCI database (2013) in support of this relatively simple example. This
database has a built-in search feature such that typing in a process name or browsing amongst
categories will show a list of available LCI data modules (see the Advanced Material at the end
of this chapter for brief tutorials on using the LCA Digital Commons website, that hosts the
US LCI data, as well as other databases and tools). Searching for “electricity” yields a list of
hundreds of processes, including the following LCI data modules. Note that names of
processes from databases will be shown in italic font.

• Electricity, diesel, at power plant
• Electricity, lignite coal, at power plant
• Electricity, natural gas, at power plant
• Electricity, anthracite coal, at power plant
• Electricity, bituminous coal, at power plant
The nomenclature used may be confusing, but is somewhat consistent across databases. The
constituents of the module name can be deciphered as representing (1) the product, (2) the
primary input, and (3) the boundary of the analysis. In each of the cases above, the unit process
is for making electricity. The inputs are various types of fuels. Finally, the boundary is such
that it represents electricity leaving the power plant (as opposed to at the grid, or at a point of
use like a building). Once you know this nomenclature, it is easier to browse the databases to
find what you are looking for specifically.


Given the above choices, we want to use one of the three coal-fueled electricity generation
unit processes in our example. Lignite and anthracite represent small shares of the generation
mix, so we choose bituminous coal as the most likely representative process and use the last
data module in the list above (alternatively, we could develop a weighted-average process
across the three types that would be useful). Using similar brief search methods in the US
NREL website we would find the following unit processes as relevant for the other two pieces
of our system:

• Bituminous coal, at mine
• Transport, train, diesel powered

These two processes represent mining of bituminous coal and the transportation of generic
product by diesel-powered train.

Figure 5-7 shows an abridged excerpt of the US NREL LCI data module for Electricity, bituminous
coal, at power plant. The entire data module is available publicly.5 Within the US NREL LCI
database website, such data is found by browsing or searching for the process name and then
viewing the “Exchanges”. These data modules give valuable information about the specific
process chosen as well as other processes they are linked to. While here we discuss viewing
the data on the website, it can also be downloaded to a Microsoft Excel spreadsheet or as
XML.

It is noted that this is an abridged view of the LCI data module. The complete LCI data module
consists of quantitative data for 7 inputs and about 60 outputs. For the sake of the example in
this section, we assume the abridged inventory and ignore the rest of the details. Most of the
data modules in databases have far more inputs and outputs than in this abridged module; it
is not uncommon to find data modules with hundreds of outputs (e.g., for emissions of
combustion processes). If you have a narrow scope that focuses on a few air emission outputs,
many of the other outputs can be ignored in your analysis. However if you plan to do life cycle
impact assessment, the data in the hundreds of inputs and/or outputs may be useful in the
impact assessment. If your study seeks to do a broad impact assessment, collecting your own
primary data can be problematic as your impact assessment will all but require you to broadly
consider the potential flows of your process. If you focus instead on just a few flows you deem
to be important, then the eventual impact assessment could underestimate the impacts of your
process. This is yet another danger of primary data collection (undercounting flows).

5Data from the NREL US LCI database in this chapter are as of July 20, 2014. Values may change in revisions to the database that cannot
be expressed here.


Flow                               Category          Type             Unit   Amount     Comment

Inputs
bituminous coal, at mine           root/flows        ProductFlow      kg     4.42e-01
transport, train, diesel powered   root/flows        ProductFlow      t*km   4.61e-01   Transport from mine
                                                                                        to power plant

Outputs
electricity, bituminous coal, at   root/flows        ProductFlow      kWh    1.00
power plant
carbon dioxide, fossil             air/unspecified   ElementaryFlow   kg     9.94e-01

Figure 5-7: Abridged LCI data module from US NREL LCI Database for bituminous coal-fired
electricity generation. Output for functional unit italicized. (Source: US LCI Database 2012)

Figure 5-7 is organized into sections of data for inputs and outputs. At the top, we see the
abridged input flows into the process for generating electric power via bituminous coal.
Recalling the discussion of direct and indirect effects from Chapter 4, the direct inputs listed
are bituminous coal and train transport. The direct outputs listed are fossil CO2 emissions
(which is what results when you burn a fossil fuel) and electricity. Before discussing all of the
inputs and outputs, we briefly focus on the output section to identify a critical component of
the data module – the electricity output is listed as a product flow, with units of 1 kWh. Every
LCI process will have one or more outputs, and potentially have one or more product flows
as outputs, but this module has only one. That means that the functional unit basis for this
unit process is per (1) kWh of electricity. All other inputs and outputs in Figure 5-7,
representing the US NREL LCI data module for Electricity, bituminous coal, at power plant, are
presented as normalized per 1 kWh. You could think of this module as providing energy
intensities or emissions factors per kWh. Thinking back to the discussion above on data
collection, it is unlikely that the study done to generate this LCI data module actually measured
the inputs and outputs needed to make just 1 kWh of electricity at a power plant – it is too
small a value. In reality, it is likely that the inputs and outputs were measured over the course
of a month or year, and then normalized by the total electricity generation in kWh to find
these normalized values. It is the same process you would do if you were making the LCI data
module yourself. We will discuss how to see the assumptions and boundaries for the data
modules later in this chapter.
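The normalization just described can be sketched as follows. The annual totals below are hypothetical, reverse-engineered so the results match the module's 0.442 kg coal and 0.994 kg CO2 per kWh; an actual study would start from measured plant totals:

```python
# Hypothetical annual plant totals, normalized by annual generation
# to produce per-kWh values like those in the LCI data module.
annual_totals = {
    "coal in (kg)": 1.768e9,
    "CO2 out (kg)": 3.976e9,
}
annual_generation_kwh = 4.0e9

per_kwh = {flow: qty / annual_generation_kwh for flow, qty in annual_totals.items()}
print(round(per_kwh["coal in (kg)"], 3))   # 0.442
print(round(per_kwh["CO2 out (kg)"], 3))   # 0.994
```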

We now consider the abridged data module in more detail. In Figure 5-7, each of the input
flows are a product flow from another process (namely, the product of bituminous coal mining
and the product of train transportation). The unit basis assumption for those inputs is also
given – kg for the coal and ton-kilometers (t*km) for the transportation. A ton-kilometer is a


compound unit (like a kilowatt-hour) that expresses the movement of 1 ton of material over
the distance of 1 kilometer. Both are common compound units in practice. Finally, the amount of input required
is presented in scientific notation and can be translated into 0.442 kg of coal and 0.46 ton-km
of train transport. Likewise, the output CO2 emissions to air are estimated at 0.994 kg. All of
these quantities are normalized on a per-kWh generated basis. The comment column in Figure
5-7 (and which appears in many data modules) gives brief but important notes about specific
inputs and outputs. For example, the input of train transportation is specified as being a
potential transportation route from mine to power plant, which reminds us that the unit
process for generating electricity from coal is already linked to a requirement of a train from
the mine.6

Now that we have seen our first example of a secondary source LCI data module, Figure 5-8
presents a graphical representation of the abridged unit process similar to the generic diagram
of Figure 5-2. The direct inputs, which are product flows from other man-made processes, are
on the left side as inputs from the technosphere. The abridged unit process has no direct
inputs from nature. The direct CO2 emissions are at the top. The output product, and
functional unit basis of the process, of electricity is shown on the right. All quantitative values
are representative of the functional unit basis of the unit process.

Figure 5-8: Unit Process Diagram for abridged electricity generation unit process

Returning to our example LCA problem, we now have our first needed data point, that the
direct CO2 emissions are 0.994 kg / kWh generated. Given that we have only three unit
processes in our simple product system, we can work backwards from this initial point to get
estimated CO2 emissions values from mining and train transport. Again using the NREL LCI
database, Figure 5-9 shows abridged data for the data module bituminous coal, at mine. The

6 The unabridged version of the module has several other averaged transport inputs in ton-km, such as truck, barge, etc. Overall, the
module gives a “weighted average” transport input to get the coal from the mine to the power plant. Since we are only using the abridged
(and unedited) version, we will otherwise undercount the upstream CO2 emissions from delivering coal since we are skipping the weighted
effects from those other modes.


output and functional unit is 1 kg of bituminous coal as it leaves the mine. Two important
inputs are diesel fuel needed to run equipment, and coal. It may seem odd to see coal listed as
an input into a coal mining process, but note it is listed as a resource and as an elementary
flow. As discussed in Chapter 4, elementary flows are flows that have not been transformed
by humans. Coal trapped in the earth for millions of years certainly qualifies as an elementary
flow by that definition! Further, it reminds us that there is an elementary flow input within
our system boundary, not just many product flows. This particular resource is also specified
as being of a certain quality, i.e., with energy content of about 25 MJ per kg. Finally, we can
see from a mass balance perspective that there is some amount of loss in the process, i.e., that
every 1.24 kg of coal in the ground leads to only 1 kg of coal leaving the mine.

Flow                                      Category         Type            Unit  Amount   Comment

Inputs
  Coal, bituminous, 24.8 MJ per kg        resource/ground  ElementaryFlow  kg    1.24
  Diesel, combusted in industrial boiler  root/flows       ProductFlow     l     8.8e-03

Outputs
  Bituminous coal, at mine                root/flows       ProductFlow     kg    1.00

Figure 5-9: Abridged LCI data module from US NREL LCI Database for bituminous coal mining.
Output for functional unit italicized. (Source: US LCI Database 2012)

Figure 5-10 shows the abridged NREL LCI data module for rail transport (transport, train, diesel
powered). The output / functional unit of the process is 1 ton-km of rail transportation service
provided. Providing that service requires 0.00648 liters of diesel fuel and emits .0189 kg of
CO2, both per ton-km.


Flow                                Category         Type            Unit  Amount    Comment

Inputs
  Diesel, at refinery               root/flows       ProductFlow     l     6.48e-03

Outputs
  Carbon dioxide, fossil            air/unspecified  ElementaryFlow  kg    1.89e-02
  transport, train, diesel powered  root/flows       ProductFlow     t*km  1

Figure 5-10: Abridged LCI data module from US NREL LCI Database for rail transportation. Output
for functional unit italicized. (Source: US LCI Database 2012)

To then find the total CO2 emissions across these three processes, we can work backwards
from the initial process. We already know there are 0.994 kg/kWh of CO2 emissions at the
power plant. But we also need to mine the coal and deliver it by train for each final kWh of
electricity. The emissions for those activities are easy to associate, since Figure 5-7 provides us
with the needed connecting units to estimate the emissions per kWh. Namely, that 0.442 kg
of coal needs to be mined and 0.461 ton-km of rail transport needs to be used per kWh of
electricity generated. We can then just use those unit bases to estimate the CO2 emissions from
those previous processes. Figure 5-9 does not list direct CO2 emissions from coal mining,
although it does list an input of diesel used in a boiler.7 If we want to assume that we are only
considering direct emissions from each process, we can assume the CO2 emissions from coal
mining to be zero,8 or we could expand our boundary and acquire the LCI data module for
the diesel, combusted in industrial boiler process. Our discussion below follows the assumption that
direct emissions are zero.

Figure 5-10 notes that there are 0.0189 kg of CO2 emissions per ton-km of rail transported.
Equation 5-1 summarizes how to calculate CO2 emissions per kWh for our simplistic product
system. Other than managing the compound units, it is a simple solution: about 1 kg CO2 per
kWh. If we were interpreting this result, we would note that the combustion of coal at the
power plant is about 99% of the total emissions.
kg CO2/kWh = 0.994 kg CO2/kWh + (0.442 kg coal/kWh × 0 kg CO2/kg coal)
                              + (0.461 ton-km/kWh × 0.0189 kg CO2/ton-km)
           = 0.994 kg CO2/kWh + 0.0087 kg CO2/kWh
           = 1.003 kg CO2/kWh                                             (5-1)

7 This particular input of “diesel, combusted in industrial boiler” may not be what you would expect to find in an LCI data module, since it is a
description of how an input of diesel is used. Such flows are fairly common though.
8 Also, the unabridged LCI data modules list emissions of methane to air, which could have been converted to equivalent CO2 emissions.
Doing so would change the result above by about 10%. Considering all GHG emissions together will be discussed in Chapter 10.
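The chain calculation in Equation 5-1 is simple enough to sketch in a few lines of code, using the per-kWh values pulled from the three unit processes above:

```python
# Working backwards through the three unit processes, per kWh of
# electricity generated; values are from the abridged US LCI modules.
plant_direct = 0.994   # kg CO2 per kWh, direct emissions at the power plant
coal_needed = 0.442    # kg of coal mined per kWh
mine_factor = 0.0      # kg CO2 per kg coal (direct mining emissions assumed zero)
rail_needed = 0.461    # ton-km of rail transport per kWh
rail_factor = 0.0189   # kg CO2 per ton-km of rail transport

total = plant_direct + coal_needed * mine_factor + rail_needed * rail_factor
print(f"{total:.3f} kg CO2 per kWh")  # 1.003 kg CO2 per kWh
```

The power plant term dominates the sum, consistent with the roughly 99% share noted in the interpretation.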


The estimated CO2 emissions for coal-fired electricity of 1 kg / kWh was obtained relatively
easily, requiring only three steps and queries to a single database (US NREL LCI). As always
one of our first questions should be “is it right?” We can attempt to validate this value by
looking at external references. Whitaker et al. (2012) reviewed 100 LCA studies of coal-fired
electricity generation and found the median value to be 1 kg of CO2 per kWh; thus we should
have reasonable faith that the simple model we built leads to a useful result. Of course we can
add other processes to our system boundary (such as other potential transportation modes)
but we would not appreciably change our simple result of 1 kg/kWh. Note that anecdotally
experts often refer to the emissions from coal-fired power plants to be 2 pounds per kWh,
which is a one significant digit equivalent to our 1 kg/kWh result.

Process flow diagram-based life cycle models are constructed in this way. For each unit
process within the system boundary, data (primary or secondary) is gathered and flows
between unit processes are modeled. Since you must find data for each process, such methods
are often referred to as ‘bottom up’ studies because you are building them up from nothing,
as you might construct a building on empty land.

Beyond validating LCI results, you should also try to validate the values found in any unit
process you decide to use, even if sourced from a well-known database. That is because errors
can and do exist in these databases. It is easy to accidentally put a decimal in the wrong place
when creating a digital database. As an example, the US NREL LCI database had an error in
the CO2 emissions of its air transportation process, of 53 kg per 1000 ton-km (0.053 kg per
ton-km) for several years before it was fixed. This error was brought to their attention because
observant users noted that this value was less than the per-ton-km emissions for truck
transportation, which went against common sense. Major releases of popular databases are
also imperfect. It is common for errors to be found and fixed, but this may happen months
after licenses have been purchased, or worse, after studies have been completed. These are
additional reasons why, even when a database is considered high quality, you need to validate your data sources.
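A check like the one those observant users performed can be automated. The sketch below flags an air-freight factor that falls at or below a truck factor; the truck value here is purely illustrative, assumed for this example rather than taken from any database:

```python
def air_factor_plausible(air_kg_per_tonkm, truck_kg_per_tonkm):
    """Air freight emits far more CO2 per ton-km than trucking, so an
    air factor at or below the truck factor signals a likely data error."""
    return air_kg_per_tonkm > truck_kg_per_tonkm

erroneous_air = 53 / 1000   # the mistaken database value: 0.053 kg CO2 per ton-km
truck = 0.1                 # illustrative truck factor (assumed for this sketch)

print(air_factor_plausible(erroneous_air, truck))  # False -> flag for review
```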

The discussion above focused on the US NREL LCI Database, which contains only process
data for US-based production; other databases raise additional considerations for both data
access and metadata. As noted in Figure 5-5, the ecoinvent database is far
more geospatially diverse. While generally focused on Europe, data can be found in ecoinvent
for other regions of the world as well. This fact creates a new challenge in interpreting available
process data modules, namely, determining the country of production basis assumption for
the data. While examining the metadata can be useful, ecoinvent and other databases typically
summarize the country used within the process naming convention. For example, a process
you might find within ecoinvent might be called electricity, hard coal, at power plant, DE, where
the first part is the process name formatted similar to the NREL database, and at the end is
an abbreviated term for the country or region to which that process is representative. Figure
5-11 summarizes some of the popular abbreviations used for country basis within ecoinvent
and other databases.


Country or Region   Abbreviation    Country or Region                Abbreviation

Norway              NO              Japan                            JP
Australia           AU              Canada                           CA
India               IN              Global                           GLO
China               CN              Europe                           RER
Germany             DE              Africa                           RAF
United States       US              Asia                             RAS
Netherlands         NL              Russian Federation               RU
Hong Kong           HK              Latin America and the Caribbean  RLA
France              FR              North America                    RNA
United Kingdom      GB              Middle East                      RME

Figure 5-11: Summary of abbreviations for countries and regions in ecoinvent
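Because databases embed the geography in the process name, a model-building script can pull it back out. A minimal sketch, assuming names follow the "process name, CODE" convention described above:

```python
# Region codes from Figure 5-11; ecoinvent-style process names append the
# geography after the final comma, e.g.
# "electricity, hard coal, at power plant, DE" -> "DE".
REGION_CODES = {
    "NO", "AU", "IN", "CN", "DE", "US", "NL", "HK", "FR", "GB",
    "JP", "CA", "GLO", "RER", "RAF", "RAS", "RU", "RLA", "RNA", "RME",
}

def region_of(process_name):
    """Return the trailing region code of a process name, or None."""
    tail = process_name.rsplit(",", 1)[-1].strip()
    return tail if tail in REGION_CODES else None

print(region_of("electricity, hard coal, at power plant, DE"))  # DE
print(region_of("bituminous coal, at mine"))                    # None
```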

Ecoinvent has substantially more available metadata for its data modules, including primary
sources, representative years, and names of individuals who audited the datasets. While
ecoinvent data are not free, the metadata is freely accessible via the database website. Thus,
you could do a substantial amount of background work verifying that ecoinvent has the data
you want before deciding to purchase a license.

A particular feature of ecoinvent data is its availability at either the unit process or system
process level. Viewing and using ecoinvent system processes is like using already rolled-up
information (and computations would be faster), while using unit processes will be more
computationally intensive. This will be discussed more in Chapter 9.

LCI Data Module Metadata and Formats


Our example using actual LCI data modules from the US NREL LCI database jumped straight
into extracting and looking at the quantitative data. However, all LCI data modules provide
some level of metadata, which is information regarding how the data was collected, how the
modules were constructed, etc. Metadata is also referred to as “data about data”.

The metadata that we care about for our unit processes are elements such as the year the data
was collected, where it was collected, whether the values are single measurements or averages,
and whether it was peer reviewed. To understand metadata more, we can look at the metadata
available for the processes we used above. The US NREL LCI Database has three different
metadata categories as well as the Exchanges information shown above. Figure 5-12 shows
metadata from the Activity metadata portion of the US NREL LCI database for the Electricity,
bituminous coal, at power plant process used above. This metadata notes that the process falls into
the Utilities subcategory (used for browsing on the website) and that it has not yet been fully


validated. It applies to the US, and thus it is most appropriate for use in studies looking to
estimate impacts of coal-fired electricity generation done within the United States. Note that
this does not mean that you can only use it for that geographical region. A process like coal-
fired generation is quite similar around the world, although factors such as pollution controls
may differ greatly by region. However, since carbon capture is basically non-existent, this
process might still be quite useful for estimating CO2 emissions from coal-fired generation in
other regions.

The metadata field for “infrastructure process” notes whether the process includes estimated
infrastructure effects. For example, one could imagine two parallel unit processes for electricity
generation, where one includes estimated flows from needing to build the power plant and
one does not (such as the one referenced above). In general, infrastructure processes are fairly
rare, and most LCA study scopes exclude consideration of infrastructure for simplicity.

Name Electricity, bituminous coal, at power plant

Category Utilities - Fossil Fuel Electric Power Generation

Description Important note: although most of the data in the US LCI database
has undergone some sort of review, the database as a whole has
not yet undergone a formal validation process. Please email
comments to lci@nrel.gov.

Location US

Geography Comment United States

Infrastructure Process False

Quantitative Reference Electricity, bituminous coal, at power plant


Figure 5-12: Activity metadata for Electricity, bituminous coal, at power plant process

Figure 5-13 shows the Modeling metadata for the coal-fired generation unit process. There is
no metadata provided for the first nine fields of this category, but there are ten references
provided to show the source data used to make the unit process. While a specific “data year”
is not dictated by the metadata, by looking at the underlying data sources, the source data came
from the period 1998-2003. Thus, the unit process data would be most useful for analyses
done with other data from that time period. If we wanted to use this process data for a more
recent year, we would either have to look for an LCI data module that was newer, or verify
that the technologies have not changed much since the 1998-2003 period.


LCI Method, Modelling constants, Data completeness, Data selection,
Data treatment, Sampling procedure, Data collection period,
Reviewer, Other evaluation: No metadata provided for these categories

Sources U.S. EPA 1998 Emis. Factor AP-42 Section 1.1, Bituminus and Subbituminus
Utility Combustion

U.S. Energy Information Administration 2000 Electric Power Annual 2000

Energy Information Administration 2000 Cost and Quality of Fuels for Electric
Utility Plants 2000

Energy Information Administration 2000 Electric Power Annual 2000

U.S. EPA 1998 Study of Haz Air Pol Emis from Elec Utility Steam Gen Units V1
EPA-453/R-98-004a

U.S. EPA 1999 EPA 530-R-99-010

unspecified 2002 Code of Federal Regulations. Title 40, Part 423

Energy Information Administration 9999 Annual Steam-Electric Plant Operation and Design Data

Franklin Associates 2003 Data Details for Bituminous Utility Combustion


Figure 5-13: Modeling metadata for Electricity, bituminous coal, at power plant process (abridged)

Finally, Figure 5-14 shows the Administrative metadata for the Electricity, bituminous coal, at power
plant process. There are no explicitly defined intended applications (or suggested restrictions
on such applications), suggesting that it is broadly useful in studies. The data are not
copyrighted, are publicly available, and were generated by Franklin Associates, a subsidiary of
ERG, one of the most respected life cycle consulting businesses in the US. The “Data
Generator” is a significant piece of information. You may opt to use or not use a data source
based on who created it. A reputable firm has a high level of credibility. A listed individual
with no obvious affiliation or reputation might be less credible. Finally, the metadata notes
that it was created and last updated in October 2011, meaning that perhaps it was last checked
for errors on this date, not that the data is confirmed to still be valid for the technology as of
this date.


Intended Applications

Copyright false

Restrictions All information can be accessed by everybody.

Data Owner

Data Generator Franklin Associates

Data Documentor Franklin Associates

Project

Version

Created 2011-10-24

Last Update 2011-10-24


Figure 5-14: Administrative metadata for Electricity, bituminous coal, at power plant process

Our metadata examples have focused on the publicly available US LCI Database, but other
databases like ELCD and ecoinvent use similar metadata formats. These other databases
typically have more substantive detail, in terms of additional fields and more consistent entries.
Since these other data sources are not public, we have not used examples here.

You should browse through the available metadata for any of the databases that you have
access to, so that you can better appreciate the kinds of records that exist within various
metadata fields. Remember that the reason for appreciating the value of the metadata is to help
you with deciding which secondary data sources to use, and how compatible they are with
your intended goal and scope.


The EcoSpold Data Format

The wide availability of secondary data sources has led to a need to standardize the data structures for
storing and sharing LCI data. As with many facets of LCA, there is a standard (in this case, formatting
instructions) for exchanging information in LCI data modules, known as EcoSpold. The EcoSpold format,
developed by providers of LCI databases like ecoinvent, is a structured way of exchanging LCI data, where
details such as flows and unit values are classified for each process. It is based on XML, a general-purpose
markup language similar to HTML, which is used in creating web pages (Meinshausen 2016).

There is no requirement that LCA databases or tools use the EcoSpold format, but most databases and
software use this format. The beauty of format standards is that their existence is transparent to the
practitioner (i.e., you do not need to know or understand it to use its data), similar to how you do not need
to know the details of the MP3 audio file format to use devices that play the files.

EcoSpold thus creates ‘containers’ of information for an LCI, such as which elementary flows exist for a
process, what their physical flow values (and units) are, which other processes in the database are
connected via product flows, and various administrative information as discussed in the metadata
examples above. Should you be interested in seeing what such files look like, the Advanced Material for
this chapter shows how to download EcoSpold XML files from the US LCI database.
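To give a flavor of what such a container looks like, the sketch below parses a small XML snippet with Python's standard library. The tag and attribute names are simplified placeholders chosen for illustration, not the actual EcoSpold schema:

```python
import xml.etree.ElementTree as ET

# Simplified, illustrative structure only -- not the real EcoSpold schema.
sample = """
<process name="bituminous coal, at mine">
  <exchange group="input"  flow="Coal, bituminous, 24.8 MJ per kg" unit="kg" amount="1.24"/>
  <exchange group="input"  flow="Diesel, combusted in industrial boiler" unit="l" amount="8.8e-03"/>
  <exchange group="output" flow="Bituminous coal, at mine" unit="kg" amount="1.00"/>
</process>
"""

root = ET.fromstring(sample)
for ex in root.findall("exchange"):
    print(ex.get("group"), ex.get("flow"), ex.get("amount"), ex.get("unit"))
```

LCA software performs essentially this kind of parsing for you, which is why the format can remain invisible to practitioners.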

Referencing Secondary Data


When you use secondary data as part of your study it must be appropriately referenced, as
with any other source. Referencing data sources was first mentioned in Chapter 2, but here we
discuss several important additions for referencing data from LCA databases. As an example,
the US NREL LCI database explicitly suggests the following referencing style for use of its
data modules:

When referencing the USLCI Database, please use the following format: U.S. Life Cycle
Inventory Database. (2012). National Renewable Energy Laboratory, 2012. Accessed
November 19, 2012: https://www.lcacommons.gov/nrel/search

However, this is only the minimum referencing you should provide for process data. First of all,
you cannot simply reference the database. You need to ensure that the specific unit process
from which you have used data is clear to the reader, for example if they would like to validate
your work. That means you need to explicitly reference the name of the process (either
obviously in the text or in the reference section). In the US NREL database and other sources,
there may be hundreds of LCI data modules for electricity. Thus, the danger is that in the
report you loosely reference data for coal-fired electricity generation as being from “the NREL
database”, but do not provide enough detail for the reader to know which electricity process
was used. Unfortunately, this is a common occurrence in LCA reports. This situation can be
avoided by explicitly noting the name of the process used in the reference, such as:


U.S. Life Cycle Inventory Database. Electricity, bituminous coal, at power plant unit process
(2012). National Renewable Energy Laboratory, 2012. Accessed Nov. 19, 2012:
https://www.lcacommons.gov/nrel/search

A generic reference to the database, as given at the top of this section, may be acceptable if
the report separately lists all of the specific processes used in the study, such as in an inventory
data source table listing all of the processes used.

You will likely use multiple unit processes from the same database. You can either create
additional references like the one above for each process, or use a combined reference that
lists all processes as part of the reference, such as:

U.S. Life Cycle Inventory Database. Electricity, bituminous coal, at power plant; bituminous coal,
at mine; transport, train, diesel powered unit processes (2012). National Renewable Energy
Laboratory, 2012. Accessed Nov. 19, 2012: https://www.lcacommons.gov/nrel/search

The greater the number of similar processes, the greater the need to specify which specific
data module you used in your analysis. This becomes especially important if you are using LCI
data modules from several databases.

A final note about referencing is that LCA databases are generally not primary sources; they
are secondary sources. Ideally, references would credit the original author, not the database
owner who is just providing access. If the LCI data module is taken wholesale from another
source (i.e., if a single source were listed in the metadata), it may make sense to also reference
the primary source, or to add the primary source to the database reference. In this case the
reference might look like one of the following:

RPPG Of The American Chemistry Council 2011. Life Cycle Inventory Of Plastic
Fabrication Processes: Injection Molding And Thermoforming.
http://plastics.americanchemistry.com/Education-Resources/Publications/LCI-of-
Plastic-Fabrication-Processes-Injection-Molding-and-Thermoforming.pdf. via U.S. Life
Cycle Inventory Database. Injection molding, rigid polypropylene part, at plant unit process
(2012). National Renewable Energy Laboratory, 2012. Accessed November 19, 2012:
https://www.lcacommons.gov/nrel/search

U.S. Life Cycle Inventory Database. Injection molding, rigid polypropylene part, at plant unit
process (2012). National Renewable Energy Laboratory, 2012. Accessed November 19,
2012: https://www.lcacommons.gov/nrel/search (Primary source: RPPG Of The
American Chemistry Council 2011. Life Cycle Inventory Of Plastic Fabrication Processes:
Injection Molding And Thermoforming.
http://plastics.americanchemistry.com/Education-Resources/Publications/LCI-of-
Plastic-Fabrication-Processes-Injection-Molding-and-Thermoforming.pdf)


As noted in Chapter 2, ideally you would identify multiple data sources (i.e., multiple LCI data
modules) for a given task. This is especially useful when using secondary data because you are
not collecting data from your own controlled processes. Since the data is secondary, it is likely
that there are slight differences in assumptions or boundaries than what you would have used
if collecting primary data. By using multiple sources, and finding averages and/or standard
deviations, you could build a more robust quantitative model of the LCI results. We will
discuss such uncertainty analysis for inventories in later chapters.
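As a sketch of that idea, suppose several secondary sources report slightly different CO2 factors for the same process (the values below are illustrative, not drawn from actual studies):

```python
from statistics import mean, stdev

# Hypothetical CO2 factors (kg per kWh) for the same process, as reported
# by several secondary sources; illustrative values only.
factors = [0.994, 1.05, 0.93, 1.10]

avg = mean(factors)
spread = stdev(factors)  # sample standard deviation across sources
print(f"mean = {avg:.2f} kg CO2/kWh, sample std dev = {spread:.2f}")
```

Reporting the mean alongside the spread gives readers a sense of how much the choice of data source could move the result.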

Additional Considerations about Secondary Data and Metadata


Given the types and classes of data we are likely to find in life cycle studies, we introduce in
this subsection a few more considerations to ensure you are finding and using appropriate
types of data to match the needs of your study. These considerations are in support of the
data quality requirements. This issue will be revisited in Chapter 7 pertaining to uncertainty.

Temporal Issues

In creating temporal data quality requirements, you will set a target year (or years) for data
used in your study. For example, you might have a DQR of “2005 data” or “data from 2005-
2007” or “data within 5 years of today”. After setting target year(s) you then must do your
best to find and use data that most closely matches the target. It is likely that you will not be
able to match all data with the target year(s). When setting and evaluating temporal DQRs, the
following issues need to be understood.

You may need to do some additional work to guarantee you know the basis year of the data
you find, but this is time well spent to ensure compatibility of the models you will build. You
will need to distinguish between the year of data collection and year of publication. In our
CBECS example in Chapter 2, the data were collected in the year 2003 but the study was not
published by DOE until December 2006 (or, almost 2007). It is easy to accidentally consider
the data as being for 2006 because the publication year is shown throughout the reports. But
the data were representative of the year 2003. If your temporal DQR was set at “2005”, you
might still be able to justify using the 2003 CBECS data, but would need to assess whether the
electricity intensity of buildings likely changed significantly between 2003 and 2005. The same
types of issues arise when using sources such as US EPA’s AP-42 data, which are compilations
of (generally old) previously estimated emissions factors. Other aspects of your DQRs may
further help decide the appropriateness of data newer or older than your target year.

The same is true of dates given in the metadata of LCI data modules. You don’t care about
when you accessed the database, or when it was published in the database. You care about the
primary source’s years of analysis. Figure 5-13 showed metadata on the coal-fired electricity


generation process where the underlying data was from 1998-2003, and which was put in the
US LCI database in 2011. An appropriate “timestamp” for this process would be 1998-2003.
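One way to operationalize a temporal DQR is to measure how far a module's data collection period lies from your target year. A minimal sketch (the scoring rule here is our own illustration, not part of any standard):

```python
def years_off(target, start, end):
    """Distance in years from the target year to a [start, end]
    data collection period; 0 means the target falls inside it."""
    if start <= target <= end:
        return 0
    return min(abs(target - start), abs(target - end))

# The coal-fired generation module discussed above (data from 1998-2003):
print(years_off(2005, 1998, 2003))  # 2
print(years_off(2000, 1998, 2003))  # 0
```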

Geospatial Issues

You must try to ensure that you are using data with the right geographical and spatial scope
to fit your needs. If you are doing a study where you want to consider the emissions associated
with producing an amount of electricity, then you will find many potential data sources to use.
The EIA has data that can give you the average emissions factors for electricity generation
across the US. E-GRID (a DOE-EPA partnership) can give you emissions factors at fairly
local levels, reflecting the types of power generation used within a given region. The question
is the context of your study. Are you doing a study that inevitably deals with national average
electricity? Then the EIA data is likely suitable. Or are you doing a study that needs to know
the impact of electricity from a particular factory’s production? In that case you likely want a
fairly local data source, e.g., from E-GRID. An alternative is to leverage the idea of ranges,
presented in Chapter 2, to represent the whole realm of possible values for electricity
generation, including various local or regional averages all the way up to the national average.

Measurement vs. Accounting Standards


Now that various LCI data issues have been introduced, we consider issues associated with
LCA as an accounting framework - as opposed to a measurement standard. One of the core
challenges in performing the accounting task of an LCI is that the input and output flows for
a product system are generally not measured and quantified in the same way as in other
scientific fields. The following example discusses typical ways in which scientific
measurements are acquired and used.

Consider the case of vehicle engines, where scientific instrumentation is available to measure
fuel use, e.g., by using an ultrasonic flow meter. This device applies the science of fluid
dynamics and acoustics to quantitatively measure fluid flow through a vessel by averaging
differences in transit time between pulses of sound waves. The flow meter result is a value
with several significant figures and has an uncertainty range given in the machine’s technical
specifications (up to ±1%). That means the true value may be up to 1% higher or lower than the
measured value, but the meter would not be able to detect this difference. Most scientists would
be satisfied with this level of uncertainty, but if less uncertainty is desired, more expensive or
alternative instruments are needed.

Now consider a potential application of such instrumentation, where we seek to quantify the
amount of fuel used in a vehicle driven over a distance of 1 km. To do this, a standard could
be developed for repeatedly measuring the fuel flow over the same 1 km. Each measurement
would represent an independent attempt. In the end, the test procedure could dictate that the


test result is the average (mean) of 10 measurements with an ultrasonic flow meter, and would
provide a scientific value for fuel use per 1 km. For multiple vehicles, the test procedure could
be used and the average fuel use compared to decide which vehicle had the lowest. Test
protocols like this exist to support federal fuel efficiency standards and policies.
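The averaging protocol described above amounts to a few lines of code. The readings below are illustrative, not actual test data:

```python
from statistics import mean

# Ten independent flow-meter runs over the same 1 km (liters per km);
# the reported test result is their mean.
readings = [0.082, 0.081, 0.083, 0.080, 0.082,
            0.081, 0.083, 0.082, 0.080, 0.081]

result = mean(readings)
print(f"fuel use: {result:.4f} L/km over {len(readings)} runs")  # ~0.0815 L/km
```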

In LCA, on the other hand, ideal measurement devices do not exist - in many cases, primary
data for key underlying processes do not even exist. An array of techniques may be used to
generate the data for LCI databases or modules. For a hypothetical fruit delivery truck, a study
may use average load and fuel economy assumptions to derive an estimate of the fuel needed
to deliver a certain amount of product over a certain distance. The result is an estimate – not
a measurement – of diesel use (e.g., 5 liters) on the basis of a convenient functional unit (e.g.,
per one-way trip). By applying an emissions factor of approximately 20 pounds CO2 per gallon,
CO2 emissions (about 12 kg per trip) would be estimated. When looking at the metadata for process
data modules, it may be difficult to know whether any input or output flows were measured
as opposed to estimated, but if the provided sources are all reports (such as in the metadata
examples above), then it is likely that flows were estimated using the estimation methods
described in Chapter 2. For various types of studies, this level of effort and data quality may
be sufficient. However, estimated results for a broad category like highway trucks from
available data are not of the same quality as attaching a measurement device to a specific truck’s
fuel system to determine the exact amount of diesel used, or measuring the emissions rate
through a truck’s tailpipe. Thus, the uncertainty associated with estimation methods (such as
those presented in Chapter 2) is greater than that associated with measurement devices (which
typically have uncertainty of about ±1%).
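The unit handling in the delivery-truck example can be checked with a quick conversion (5 liters of diesel at roughly 20 pounds of CO2 per gallon):

```python
LITERS_PER_GALLON = 3.785
LBS_PER_KG = 2.205

fuel_liters = 5.0          # estimated diesel use per one-way trip
factor_lb_per_gal = 20.0   # approximate CO2 emissions factor

co2_kg = fuel_liters / LITERS_PER_GALLON * factor_lb_per_gal / LBS_PER_KG
print(f"~{co2_kg:.0f} kg CO2 per trip")  # ~12 kg CO2 per trip
```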

In general, all LCA data carries appreciable uncertainty (due to the uncertain nature of the
estimation methods used) that should be considered for data sources and model results. Given that the
complexity of product systems in LCAs could include tens, hundreds, or more processes, there
could be substantial uncertainty associated with the final result. The presence of this
uncertainty also means that results of LCA studies may have a different level of quantitative
rigor than other scientific studies and models based on measurements and application of
equations.

While we may not be able to ‘measure’ flows in LCA easily, we can begin to consider what we
might do if we had several separate flow values, drawn from different sources and techniques,
and wanted to analyze the average, standard deviation, etc., of those values.

Before developing the uncertain nature of LCA further, consider the emerging development
of a breakthrough product that rapidly provides measurements relevant to LCA practitioners.


Emerging Development: The Carbon Metrometer

Consider a new scientific device that can measure the embodied fossil CO2 emissions of a
manufactured product to four significant figures. The user opens the lid of this device and puts
in a computer, or a smartphone, and after a few minutes it returns a precise measure of the CO2
needed to produce all of its subcomponents, raw material and energy inputs, and transportation
of all of these components through the global supply chain until received by the customer.
Imagine that additional features could quantify specific greenhouse gases emitted (e.g., methane
and non-fossil CO2), as well as the emissions from use phase and disposal.

This would certainly be an amazing device, and even with some modest measurement error
rates, it would render the LCA Standard and all of its uncertainties obsolete.

Of course, you probably realize that such a device does not exist, and further, it is impossible
to create. It is impossible to measure embodied CO2, as there is insufficient residual carbon
left in the product to link that to the carbon required during manufacturing. Likewise it is not
possible to know a product's journey through the global supply chain by analyzing just the
final product in its current location.

(Image source: chileflora.com)

Inevitably, the challenge faced in the LCA world is that various stakeholders assume that such
a device not only exists (or, at least, the underlying science needed to create such a device
exists), but that it can be regularly used to inform a range of questions. Some of these
stakeholders think LCA is such a device. In the absence of a metrometer or other measurement
methods, we can use LCA to model values like embodied fossil CO2 in a product. These models
will likely have more uncertainty than a typical scientific measurement device.

Uninformed stakeholders and critics of LCA may unrealistically expect quantitative results with
“measurement device quality” from our studies. LCAs done without sufficient consideration of
uncertainty and variability (e.g., by providing only point estimate results) do a disservice to the
LCA world, and opportunities are lost to educate the various stakeholders about the practical
reality of LCA as an accounting method.

This lack of true measurement tools and the thought example serve as important considerations
regarding how rigorously one can compare results across multiple product systems, and how an
audience may interpret your results.


The metrometer discussion hopefully inspires reflection on the feasibility of meeting an LCA
study's goals, and highlights the limitations of an accounting standard. Results
comparable in quality to those available from scientific measurement systems are impossible.
An appropriate second-best goal is to seek results that are sufficiently robust to overcome
known sources of uncertainty and variability, which (again) is a goal of this chapter.

Sadly, in the field of LCA there are many practitioners who actively or passively ignore the
effects of uncertainty or variability in their studies. They treat all model inputs as single values
and generate only a single result. The prospect of uncertainty or variability is lost in their
model, and typically then that means those effects are lost on the reader of the study. How
can we support a big decision (e.g., paper vs. plastic?) if there is much uncertainty in the data
but we have completely ignored it? We are likely to end up supporting poor decisions if we
do so. We devote Chapter 7 to methods for structuring and overcoming uncertainty in LCA
models.
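To see concretely why a single point estimate can mislead, consider the minimal Monte Carlo sketch below (Chapter 7 treats these methods properly). Both the flow values and their ranges are entirely hypothetical: a point-estimate model returns one number, while sampling from assumed input ranges reveals the spread of plausible results.

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical per-kWh CO2 flows with assumed uncertainty ranges
# (all values invented for illustration, not taken from any database).
def draw_one_result():
    plant = random.uniform(0.90, 1.00)     # kg CO2/kWh, combustion at plant
    upstream = random.uniform(0.01, 0.03)  # kg CO2/kWh, mining + transport
    return plant + upstream

# A point-estimate model collapses each input to a single value...
point_estimate = 0.95 + 0.02

# ...while Monte Carlo sampling shows the spread of plausible results.
samples = sorted(draw_one_result() for _ in range(10_000))
low, high = samples[250], samples[9750]    # roughly the central 95%

print(f"point estimate: {point_estimate:.3f} kg CO2/kWh")
print(f"~95% interval:  {low:.3f} to {high:.3f} kg CO2/kWh")
```

A reader given only the point estimate has no way to judge whether a competing product's result is meaningfully different or simply falls inside the same interval.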

Chapter Summary
Typically, the most time-consuming aspect of an LCA (or LCI) study is the data
collection and management phase. While the LCA Standard encourages practitioners to collect
primary data for the product systems being studied, in practice secondary data from prior
published studies and databases is typically used. Using secondary data requires being
knowledgeable and cognizant of issues relating to the sources of the data, and also requires
accurate referencing. Data quality requirements help to manage the expectations of the study
team, as well as external audiences, pertaining to the goals of your data management efforts.
Effective LCI data management methods lead to excellent and well-received studies.

References for this Chapter


BEES LCA Tool, website, http://ws680.nist.gov/Bees/Default.aspx, last accessed August 12,
2013.

ecoinvent website, www.ecoinvent.ch, last accessed August 12, 2013.

ELCD LCA Database, website, http://lca.jrc.ec.europa.eu/lcainfohub/, last accessed August 12, 2013.

Environmental Protection Agency. 1993. Life Cycle Assessment: Inventory Guidelines and
Principles. EPA/600/R-92/245. Office of Research and Development. Cincinnati, OH, USA.


Franklin Associates. 2011. Final Report: Life Cycle Inventory of Plastic Fabrication Processes:
Injection Molding and Thermoforming. Via
http://plastics.americanchemistry.com/Education-Resources/Publications/LCI-of-Plastic-Fabrication-Processes-Injection-Molding-and-Thermoforming.pdf

Gabi Software, website, http://www.gabi-software.com/, last accessed August 12, 2013.

LCA-DATA, UNEP, website, http://lca-data.org:8080/lcasearch, last accessed August 12, 2013.

I. Meinshausen, P. Müller-Beilschmidt, and T. Viere, “The EcoSpold 2 format—why a new
format?”, The International Journal of Life Cycle Assessment, September 2016, Volume 21, Issue 9,
pp. 1231–1235.

Quantis, “Environmental Life Cycle Assessment of Drinking Water Alternatives and
Consumer Beverage Consumption in North America”, LCA Study completed for Nestle
Waters North America, 2010,
http://www.beveragelcafootprint.com/wp-content/uploads/2010/PDF/Report_NWNA_Final_2010Feb04.pdf,
last accessed September 9, 2013.

The Agribusiness Group, “Life Cycle Assessment: New Zealand Merino Industry Merino
Wool Total Energy Use and Carbon Dioxide Emissions”, 2006,
http://www.agrilink.co.nz/Portals/Agrilink/Files/LCA_NZ_Merino_Wool.pdf, last
accessed September 1, 2013.

US NREL LCI Database, website, http://www.nrel.gov/lci/, last accessed August 12, 2013.

U.S. Life Cycle Inventory Database. Electricity, bituminous coal, at power plant, bituminous coal, at
mine, and transport, train, diesel powered unit processes (2012). National Renewable Energy
Laboratory, 2012. Accessed August 15, 2013: https://www.lcacommons.gov/nrel/search

USDA LCA Digital Commons, website, http://www.lcacommons.gov, last accessed August 12, 2013.

Whitaker, Michael, Heath, Garvin A., O’Donoughue, Patrick, and Vorum, Martin, “Life Cycle
Greenhouse Gas Emissions of Coal-Fired Electricity Generation: Systematic Review and
Harmonization”, Journal of Industrial Ecology, 2012. DOI: 10.1111/j.1530-9290.2012.00465.x


End of Chapter Questions

Objective 1. Recognize how challenges in data collection may lead to changes in study design parameters (SDPs), and vice versa

1. Using the US NREL LCI Database (from the USDA Digital Commons) or another LCI
database, search or browse amongst the available categories. For each of the following
broadly defined processes in the list below, discuss how many different LCI data modules
are available and qualitatively discuss what different assumptions have been used to
generate the data modules.

a. Refining of petroleum

b. Generating electricity from fossil fuel

c. Truck transportation

Objective 2. Map information from LCI data modules into a unit process framework
AND

Objective 7. Generate an inventory result from LCI data sources

2. Build on the example shown in Equation 5-1 by including the diesel, combusted in industrial
boiler process (referenced as an input in the bituminous coal mining process) within the system
boundary. What is your revised estimate of CO2 emissions per kWh? How different is
your updated estimate?

3. Build on the example shown in Equation 5-1 by including within the system boundary
refining of the diesel used in the coal mining and rail transportation processes. Assume you
have LCI data that there are 2.5 E-04 kg fossil CO2 emissions per liter of diesel fuel refined.
How different is your updated estimate?

Objective 3. Explain the difference between primary and secondary data, and when
each might be appropriate in a study

4. Explain the difference between primary and secondary data. Provide an example of when
each would be appropriate for a study.

Objective 4. Document the use of primary and secondary data in a study

5. The data identified in part 1c above would be secondary data if you were to use it in a
study. If you instead wanted primary data for a study on trucking, discuss what methods
you might use in order to get the data.


Objective 5. Create and assess data quality requirements for a study

6. If you had data quality requirements stating that you wanted data that was national (US)
in scope, and from within 5 years of today, how many of the LCI data modules from
Question 1 would be available? Which others might still be relevant? Justify your answer.

Objective 6. Extract data and metadata from LCI data modules and use them in
support of a product system analysis

7. Using an LCI database available to you, search for one LCI data module in each of the
following broad categories - energy, agriculture, and transportation. For each of the three,
do the following:

a. List the name of the process.

b. Identify the functional unit.

c. Draw a unit process diagram.

d. Try to do a brief validation of the data reported.

e. Comment briefly on an example LCA study that this process might be appropriate
for, and one where it would not be appropriate.

f. Show how to appropriately reference the LCI data module in a study.

Objective 8. Perform an interpretation analysis on LCI results

8. Write an interpretation analysis for the various results expressed in Figure 5-5, and your
results from Questions 3 and 4 above.


Advanced Material for Chapter 5


The advanced material in this chapter will demonstrate how to find and access LCI data
modules from various popular databases and software tools, and how to use the data to build
simple models like the main model presented in this chapter related to coal-fired electricity.

Not all databases and software tools are discussed; however, access methods are generally very
similar across tools. For consistency, we will demonstrate how to find the same process data
as used in this chapter so that you can learn about the different options and selections needed
to find equivalent data and metadata across tools. Specifically, we will demonstrate how to
find data from the US LCI database by using the LCA Digital Commons Website, SimaPro (a
commercial LCA tool) and openLCA (a free LCA tool).

The databases and tools use different terminology, categories, etc., to organize LCI data, but
can all lead to the same data. Seeing how each of the tools categorizes and refers to the data is
an important concept to understand.

Section 1 – Accessing Data via the US LCA Digital Commons


The LCA Digital Commons is a free, US government-sponsored and hosted web-based data
resource. Given that all of its data are publicly available, it is a popular choice for practitioners.
Thus, it is also a great resource for learning about what LCI data looks like, how to access it,
and how to build models.

The main purpose of the Digital Commons is to act as a resource for US Department of
Agriculture (USDA) agricultural data and, as a result, accessing the home page (at
https://www.lcacommons.gov/discovery) will filter access to those datasets. However, the
US LCI database previously hosted by NREL (at http://www.nrel.gov/lci/), and mentioned
extensively in this chapter, is also hosted via the Digital Commons website (at
https://www.lcacommons.gov/nrel/search). Given its comprehensiveness, most of the
discussion in this book is related to use of the NREL data. The examples provided below are
for accessing the NREL data source, which has slightly different metadata and contents than
the USDA data but a similar method for searching and viewing.

The LCI data modules on the Digital Commons website can be accessed via searching or
browsing. Brief overviews are provided for both options, followed by how to view and
download selected modules. Before following the tutorial below, you should consider
registering for an account on the Digital Commons website (you will need separate accounts
for the USDA and NREL data). While an account is not required to view all of the data, it is
required if you wish to download the data. You can copy and paste the data from a web
browser instead of downloading, but this sometimes leads to formatting errors.


Browsing for LCI Data Modules on the Digital Commons (NREL)

Figure 5-15 shows the NREL Digital Commons home page, where the left hand side shows
how the data modules are organized, including dataset type (elementary flows or unit
processes), high-level categories (like transportation and utilities), and year of data.9

Figure 5-15: Excerpt of LCA Digital Commons Website Home Page

Clicking on the + icon next to the categories generally reveals one or more additional sub-
categories. For example, under the Utilities category there are fossil-fired and other generation
types. Clicking on any of the dataset type, category/subcategory or year checkboxes will filter
the overall data available. The “order by” box will sort the resulting modules. Filtering by
(checking) Unit processes and the Fossil fuel electric power generation category under Utilities, and
ordering by description will display a subset of LCI data modules, as shown in Figure 5-16. A
resulting process module can be selected (see below for how to do this and download the
data).

9 The examples of the NREL US LCI Database in this section are as of July 2014, and may change in the future.


Figure 5-16: Abridged View of LCA Digital Commons Browsing Example Results

Searching for an LCI data module via keyword

The homepage has a search feature, and entering a keyword such as electricity and pressing the
Go button on the right hand side, as shown in Figure 5-17, will return a list of data modules
within the NREL LCI database that have that word in the title or category, as shown in Figure
5-18.

Figure 5-17: Keyword search entry on homepage of NREL engine of LCA Digital Commons Website

Figure 5-18: Abridged Results of electricity keyword search

Figure 5-18 indicates that the search engine returns more than 100 LCI data modules (records)
that may be relevant to “electricity”. Some were returned because electricity is in the name of
the process and others because they are in the Electric power distribution data category. When


searching, you can order results by relevance, description, or year. Once a set of search results
is obtained, results can be narrowed by filtering via the options on the left side of the screen.
For example, you could choose a subset of years to be included in the search results, which
can help ensure you use fairly recent instead of old data. You can also filter based on the LCI
data categories available, in this case by clicking on the + icon next to the high-level category
for Utilities, which brings up all of the subcategories under utilities. Figure 5-19 shows the
result of a keyword search for ‘electricity’, ordered by relevance, and filtered by the Utilities
subcategory of Fossil fuel electric power generation and by data for year 2003. The fifth search result
listed is the same one mentioned in the chapter that forms the basis of the process flow
diagram example.

Figure 5-19: Abridged Results of electricity keyword search, ordered and filtered

Selecting and viewing an LCI data module

When you have searched or browsed for a module and selected it by clicking on it, the module
detail summary is displayed, as in Figure 5-20.


Figure 5-20: Details for Electricity, bituminous coal, at power plant process on LCA digital commons

The default result is a view of the Activity tab, which was shown in Figure 5-12. The
information available under the Modeling and Administrative tabs was presented in Figure 5-13
and Figure 5-14. Finally, an abridged view of the information available on the Exchanges tab
was also shown in Figure 5-7. Not previously mentioned is that the module can be downloaded
by first clicking on the shopping cart icon in the top right (adjacent to the “Next item” tag).
This adds it to your download cart. Once you have identified all of the data you are interested
in, you can view your current cart (menu option shown in Figure 5-21) and request them all
to be downloaded (Figure 5-22).

Figure 5-21: Selection of Current Cart Download Option on LCA Digital Commons


Figure 5-22: Cart Download Screen on LCA Digital Commons

After clicking download, you will be sent a link via the e-mail address in your account registration.
As noted, the format will be an EcoSpold XML file. For novices, viewing XML files can be
cumbersome, especially if just trying to look at flow information. While less convenient, the
download menu (All LCI datasets submenu) will allow you to receive a link to a ZIP file
archive containing all of the NREL modules in Microsoft Excel spreadsheet format (or you
can receive all of the modules as EcoSpold XML files). You can also download a list of all of
the flows and processes used across the entire set of about 600 modules.
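If you prefer to work with the EcoSpold XML files directly rather than the spreadsheets, a small script can pull out the flow information. The sketch below uses Python's standard xml library on a tiny EcoSpold 1-style snippet with invented values; real downloaded files are much larger and declare an XML namespace, so the plain tag names used here may need namespace handling to match your file.

```python
import xml.etree.ElementTree as ET

# Tiny EcoSpold 1-style snippet with invented flow values, for illustration
# only. Verify element and attribute names against your own downloaded file.
SAMPLE = """\
<ecoSpold>
  <dataset>
    <flowData>
      <exchange name="Bituminous coal, at mine" meanValue="0.446" unit="kg">
        <inputGroup>5</inputGroup>
      </exchange>
      <exchange name="Carbon dioxide, fossil" meanValue="0.97" unit="kg">
        <outputGroup>4</outputGroup>
      </exchange>
    </flowData>
  </dataset>
</ecoSpold>
"""

def read_exchanges(xml_text):
    """Return (name, mean value, unit, direction) for each exchange element."""
    root = ET.fromstring(xml_text)
    rows = []
    for ex in root.iter("exchange"):
        # An inputGroup child marks an input flow; otherwise treat as output.
        direction = "input" if ex.find("inputGroup") is not None else "output"
        rows.append((ex.get("name"), float(ex.get("meanValue")),
                     ex.get("unit"), direction))
    return rows

for name, value, unit, direction in read_exchanges(SAMPLE):
    print(f"{direction:<6} {value:8.3f} {unit:<4} {name}")
```

A loop like this is a convenient way to dump all flows of a module into a spreadsheet-ready table.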

A spreadsheet of all flows and unit processes in the US LCI database (and their categories)
is on the http://www.lcatextbook.com website in the Chapter 5 folder.

When uncompressed, the Electricity, bituminous coal, at power plant module file has four
worksheets, providing the same information as seen in the tabs of the Digital
Commons/NREL website above. The benefit of the spreadsheet file, though, is the ability to
copy and paste the values into a model you may be building. We will discuss building
spreadsheet models with such data in Section 4 of this advanced material.

Section 2 – Accessing LCI Data Modules in SimaPro


As mentioned in the chapter, SimaPro is a popular commercial software program specifically
aimed at building quantitative LCA models. Its value lies both in these model-building support
activities and in its ability to access various datasets from within the program.
Commercial installations of SimaPro cost thousands of dollars, but users may choose to
include commercial databases (e.g., ecoinvent) in the purchase price. Regardless of which
databases are chosen, SimaPro has the ability to use various other free datasets (e.g., US NREL,
ELCD, etc.). This tutorial assumes that such databases have already been installed and will
demonstrate how to find the same US NREL-based LCI data as in Section 1.


This tutorial also does not describe any of the initial steps needed to purchase a license for or
install SimaPro on your Windows computer or server. It will only briefly mention the login
and database selection steps, which are otherwise well covered in the SimaPro guides provided
with the software.

Note that SimaPro refers to the overall modeling environment of data available as a “database”
and individual LCI data sources (e.g., ecoinvent) as “libraries”. After starting SimaPro,
selecting the database (typically called “Professional”), and opening or creating a new project
of your choice, you will be presented with the screen in Figure 5-23. On the left side of the
screen are various options used in creating an LCA in the tool. By default the “processes” view
is selected, showing the names and hierarchy of all processes in the currently selected libraries
of the database. This list shows thousands of processes (and many of those will be from the
ecoinvent database given its large size).

Figure 5-23: Default View of Processes in Libraries When Starting SimaPro

You can narrow the processes displayed by clicking on “Libraries” on the left hand side menu,
which will display Figure 5-24. Here you can select a subset of the available libraries for use in
browsing (or searching) for process data. You can choose “Deselect all” and then to follow
along with this tutorial, click just the “US LCI” database library in order to access only the US
NREL LCI data.


Figure 5-24: List of Various Available Libraries in SimaPro

If you then click the “Processes” option on the left hand side, you return to the original screen
but now SimaPro filters and shows only processes from the selected libraries, as in Figure
5-25. Many of the previously displayed processes are no longer displayed.

Figure 5-25: View of Processes and Data Hierarchy for US-LCI Library in SimaPro


Now that you have prepared SimaPro to look for the processes in a specific database library,
you can browse or search for data.

Browsing for LCI Data Modules in SimaPro

Looking more closely at Figure 5-25, the middle pane of the window shows the categorized
hierarchy of data modules (similar to the expandable hierarchy list in the Digital Commons
tool). However, these are not the same categories used on the NREL LCA Digital Commons
website. Instead, they are the standard categories used in SimaPro for processes in any library.
Clicking on the + icon next to any of the categories will expand it and show its subcategories.
To find the Electricity, bituminous coal process, expand the Energy category then expand
Electricity by fuel, then expand coal, resulting in a screen like Figure 5-26. Several of the other
coal-fired electricity processes mentioned in the chapter would also be visible.

Figure 5-26: Processes Shown by Expanding Hierarchy of Coal-Sourced Electricity in SimaPro

The bottom pane shows some of the metadata detail for the selected process. By browsing
throughout the categories (and collapsing or expanding as needed) and reading the metadata
you can find a suitable process for your model. The tutorial will demonstrate how to view or
download such data after briefly describing how to search for the same process.

Searching for a process in SimaPro


Once libraries have been specified as noted above, clicking on the magnifying glass icon in the
toolbar brings up the search interface as shown in Figure 5-27. You enter your search term in
the top box, and then choose from several search options. If you are just looking for process
data (as in this tutorial), restrict the search scope to only the libraries you have currently
chosen (i.e., via the interface in Figure 5-24) rather than all libraries. This will also make your
search return results more quickly. Note that the default search only looks in the names of
processes, not in the metadata (the “all fields” option changes this behavior).

Figure 5-27: Search Interface in SimaPro

Figure 5-28 shows the result of a narrowed search on the word “electricity” in the name of
processes only in “Current project and libraries” and sorted by the results column “Name”.
Since we have already selected only the US LCI database in libraries, the results will not include
those from ecoinvent, etc. One of the results is the same Electricity, bituminous coal, at power plant
process previously discussed.


Figure 5-28: Results of Modified Search for Electricity in SimaPro

By clicking “Go to” in the upper right corner of the search results box, SimaPro “goes to” the
same place in the drill-down hierarchy as shown in Figure 5-26.

Viewing process data in SimaPro

To view process data, choose a process by clicking on it (e.g., as in Figure 5-26) and then click
the View button on the right hand side. This returns the process data and metadata overview
shown in Figure 5-29. Similar to the Digital Commons website, the default screen shows high-
level summary information for the process. Full information is found in the documentation
and system description tabs.

Figure 5-29: Process Data and Metadata Overview in SimaPro


Clicking on the input-output tab displays the flow data in Figure 5-30, which for this process
is now quite familiar. If you need to download this data, you can do so by choosing “Export”
in the File menu, and choosing to export as a Microsoft Excel file.

Figure 5-30: View of Process Flow Data (Inputs and Outputs) in SimaPro

Section 3 – Accessing LCI Data Modules in openLCA


openLCA is a free LCA modeling environment (available at http://www.openlca.org/)
available for Windows, Mac, and Linux operating systems. While installation and configuration
can be quite complicated (and is not detailed here), various datasets are available. The tutorial
assumes you have access to a working openLCA installation with the US LCI database, and
discusses how to find the same US NREL-based LCI data as in Section 1.

After launching openLCA and connecting to your data source you should see a list of all of
your databases, as shown in Figure 5-31. If you do not see the search and navigation tabs, you
can add them via the Window menu -> Show views option. If you have
installed the US LCI database, it should be one of the options available.


Figure 5-31: List of Data Connections in openLCA

Browsing for process data in openLCA

Clicking on the triangle to the left of the folder allows you to open it and see the standard
hierarchy of information for all data sources in openLCA, as in Figure 5-32. This is where
you can see the process data, types of flows, and units.

Figure 5-32: Hierarchical Organization of Information for openLCA Databases


If you double click on the “Processes” folder it will display the same sub-hierarchy of
processes (not shown here) that we saw in the NREL/Digital Commons website in Section 1.
All of the data for unit processes are contained under that folder. If you click on the “Utilities”
subcategory folder, then the “Fossil Fuel Electric Power Generation” folder, you will see the
Electricity, bituminous coal, at power plant process seen above, as shown in Figure 5-33. Several of the
other coal-fired electricity processes mentioned in the chapter would also be visible.

Figure 5-33: Expanded View of Electricity Processes in Fossil Fuel Generation Category

Searching for a process in openLCA

Instead of using the Navigation tab, a search for process data can be done using the Search
tab. Clicking on the search tab brings up the search interface, as shown in Figure 5-34.


Figure 5-34: Default Search Interface in openLCA

In the first search option, you may search in all databases or narrow the scope of your search
to only a single database (e.g., to the US-LCI database). In the second option, you may search
all object types, or narrow the scope of your search to just “Processes”, etc. Finally, you can
enter a search term, such as “electricity”. If you choose to search for “electricity” only in your
US LCI database (note you may have named it something different), and only in processes,
and click search you will be presented with the results as in Figure 5-35. Note that these results
have been manually scrolled down to show the same Electricity, bituminous coal, at power plant
process previously identified.


Figure 5-35: Search Results for Electricity in US-LCI Database in OpenLCA

Unlike the other tools, there is no quick and easy way to skim metadata to confirm which
process you want to use.

Viewing process data in openLCA

To view process data, choose a process by double-clicking on it from either the browse or
search interface. This opens a new pane of the openLCA environment and returns the process
data and metadata overview, as shown in Figure 5-36. Similar to the Digital Commons website,
the default screen shows high-level summary information for the process (not all of the
information is shown in the figure). Additional information is available in the Inputs/Outputs,
Administrative information, and other tabs at the bottom of this pane.


Figure 5-36: Process Data and Metadata Overview in openLCA

Clicking on the Inputs/Outputs tab displays the flow data in Figure 5-37, which for this
process is now quite familiar.

Figure 5-37: View of Process Flow Data (Inputs and Outputs) in openLCA

If you need to download this data, you can do so by choosing “Export” in the File menu, but
you cannot export it as a Microsoft Excel file.


Section 4 – Spreadsheet-based Process Flow Diagram Models


Now that process data has been identified, quantitative process flow diagram-based LCI
models can be built. Amongst the many tools to build such models, Microsoft Excel is one of
the most popular. Excel has many built-in features that are useful for organizing LCI data and
calculating results, and is already familiar to most computer users.

To make these examples easy to follow, we repeat the core example from this chapter (and
shown in Figure 5-6) involving the production of coal-fired electricity via three unit processes
in the US LCI database. The US LCI database is used since it is freely available and indicative
of many other databases (e.g., ELCD). To replicate the structure of the core model from
Chapter 5, we need to manage our process data in support of our process flow diagram. The
following steps illustrate the quantitative structure behind a process-flow diagram based LCI
model.

1) Find all required process data

In the first few sections of the advanced material for this chapter, we showed how to find the
required process data from the US LCI database via several different tools. Using similar
browse and search methods, you can find the LCI data for the other two processes so that
you have found US LCI data for these three core processes:

• Electricity, bituminous coal, at power plant
• Bituminous coal, at mine
• Transport, train, diesel powered
Depending on which tool you used to find the US LCI process data, it may be easy to export
the input and output flows for the functional unit of each process into Excel. If not, you may
need to either copy/paste, or manually enter, the data. Recall that accessing the US LCI data
directly from the LCA Digital Commons can yield Microsoft Excel spreadsheet files.

2) Organize the data into separate worksheets

A single Microsoft Excel spreadsheet file can contain many underlying worksheets, as shown
in the tabs at the bottom of the spreadsheet window. For each of the downloaded or exported
data modules, copy / paste the input/output flows into a separate Microsoft Excel worksheet.
If you downloaded the US LCI process data directly from the lcacommons.gov website, the
input/output flow information is on the “X-Exchange” worksheet of the downloaded file (the
US LCI data in other sources would be formatted in a similar way). The Transport, train, diesel
powered process has 1 input and 9 outputs (including the product output), as shown in Figure
5-38.


Figure 5-38: Display of Extracted flows for Transport, train, diesel powered process from US LCI

3) Create a separate “Model” worksheet in the Microsoft Excel file

This Model worksheet will serve as the primary workspace to keep track of the relevant flows
for the process flow diagram. This sheet uses cell formulas to reference the flows on the other
worksheets that you created from the process LCI datasets.

Beyond just referencing the flows in the other worksheets, the Model worksheet must scale
the functional unit-based results as needed based on the process flow diagram. For example,
in Equation 5-1, results were combined for 1 kWh of electricity from bituminous coal, 0.46
ton-km of train transportation, and 0.44 kg of coal mining. Since the process LCI data
modules are generally normalized on a basis of a functional unit of 1, we need to multiply
these LCI results by 1, 0.46, or 0.44.

Basic LCI Spreadsheet Example

In this example, a basic cell formula is created on the Model worksheet to add the output flows
of CO2 from the three separate process worksheets. We first make a summary output result
cell for each of the three processes where we multiply the CO2 emissions value from each
worksheet (e.g., the rounded value 0.019 in cell G8 of Figure 5-38) by the functional unit scale
factor listed above. Then we find the sum of CO2 emissions across the three processes by
typing = into an empty cell and then successively clicking on the three scaled process emissions
values.

The Chapter 5 folder has a “Simple and Complex LCI Models from US LCI” spreadsheet
file following the example as shown in the Chapter (which only tracked emissions of fossil
CO2). Figure 5-39 shows an excerpt of the “Simple Model” worksheet in the file. The same
result as shown in the chapter (not rounded off) is visible in cell E8, with the cell formula
=B8+C8+D8.
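The cell logic of the Simple Model worksheet can be sketched in a few lines of Python. The scale factors (1, 0.46, 0.44) come from Equation 5-1; except for the rounded 0.019 kg/ton-km rail value mentioned above, the CO2 intensities below are hypothetical placeholders rather than US LCI data, so only the structure of the calculation should be taken literally.

```python
# Sketch of the Simple Model worksheet logic. Scale factors come from
# Equation 5-1; except the rounded 0.019 kg/ton-km rail value, the CO2
# intensities are hypothetical placeholders, not US LCI data.
scale_factors = {"electricity": 1.0,      # kWh
                 "rail_transport": 0.46,  # ton-km
                 "coal_mining": 0.44}     # kg coal

co2_per_unit = {"electricity": 1.0,       # kg CO2/kWh (hypothetical)
                "rail_transport": 0.019,  # kg CO2/ton-km (Figure 5-38)
                "coal_mining": 0.005}     # kg CO2/kg coal (hypothetical)

# Equivalent of the cell formula =B8+C8+D8 after scaling each worksheet:
# each scaled process result is summed into one total.
total_co2 = sum(scale_factors[p] * co2_per_unit[p] for p in scale_factors)
```

Each dictionary entry plays the role of one process worksheet; the final sum is the single summary cell of the Model worksheet.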


Figure 5-39: Simple Spreadsheet-Based Process LCI Model

This simple LCI model involves so little effort that using a spreadsheet is perhaps overkill.
Tracking only CO2 emissions means that we only have to add three scaled values, which could
be accomplished by hand or on a calculator. However, this spreadsheet motivates the possibility
that a slightly more complex spreadsheet could be created to track all flows, not just emissions
of CO2.

Complex LCI Spreadsheet Example

Beyond the assumptions made in the simple model above, in LCA we often are concerned
with many (or all) potential flows through our product system. Using the same underlying
worksheets from the simple spreadsheet example, we can track flows of all of the outputs
listed in the various process LCI data modules (or across all potential environmental flows).
This not only allows us a more complete representation of flows, but better prepares us for
next steps such as impact assessment.

In this complex example, we use the same three underlying input/output flow worksheets, but
our Model worksheet more comprehensively organizes and calculates all tracked flows from
within a dataset. Instead of creating cell formulas to sum flows for each output (e.g., CO2) by
clicking on individual cells in other worksheets, we can use some of Excel’s other built-in
functions to pull data from all listed flows of the unit processes into the summary Model
worksheet. An example file is provided, but the remaining text in this section describes in a
bit more detail how to use Excel’s SUMPRODUCT function for this task.

The SUMPRODUCT function in Microsoft Excel, named as such because it finds the sum of
a series of multiplied values, is typically used as a built-in way of finding a weighted average.
Each component of the function is multiplied together. For example, instead of the method
shown in the Simple LCI spreadsheet above, we could have copied the CO2 emissions values
from the three underlying worksheets into the row of cells B8 through D8, and then used the
function =SUMPRODUCT(B4:D4*B8:D8) to generate the same result.

The “Simple and Complex LCI Models” file has a worksheet “Simple Model (with
SUMPRODUCT)” showing this example in cell E8, yielding the same result as above.

However, the SUMPRODUCT function can be more generally useful because of how Excel
handles TRUE and FALSE values and the fact that the “terms” of SUMPRODUCT are
multiplied together. In Excel, TRUE is treated as 1 and FALSE as 0 (they are Booleans). So if
the “terms” of a SUMPRODUCT evaluate to 1 or 0, the function yields a result only when all
expressions are TRUE and returns 0 otherwise. This achieves the mathematical equivalent of
if-then statements across a range of cells.

The magic of this SUMPRODUCT function for our LCI purposes is that if we have a master
list of all possible flows, compartments, and sub-compartments, we can find whether flow
values exist for any or all of them. On the US LCI Digital Commons website, a text file can
be downloaded with all of the nearly 3,000 unique compartment flows present in the US LCI
database. This master list of flows can be pasted into a Model worksheet and then used to
“look up” whether numerical quantities exist for any of them.

A representative cell value in the complex Model worksheet, which has similar cell formulas
in the 3,000 rows of unique flows, looks like this (where cells A9, B9, and C9 are the flow,
compartment, and subcompartment values we are trying to match in the process data):

=E$4*SUMPRODUCT((Electricity_Bitum_Coal_Short!$A$14:$A$65=A9)*(Electricity_Bitum_Coal_Short!$C$14:$C$65=B9)*(Electricity_Bitum_Coal_Short!$D$14:$D$65=C9)*Electricity_Bitum_Coal_Short!$G$14:$G$65)

This cell formula multiplies the functional unit scale factor in cell E4 by the SUMPRODUCT
value of:

• whether the flow name, compartment, and subcompartment in the unit flows for the
coal-fired electricity process match each item in the master list of flows, and

• if the flow/compartment/subcompartment values match, the inventory value for the
matched flow.

Within the SUMPRODUCT, if the flow/compartment/subcompartment in the unit process
data doesn’t match the flow/compartment/subcompartment on the row of the Model
worksheet, the Boolean values are all 0’s and the result is 0. If they all match, the Boolean
results are 1, and the final part of the SUMPRODUCT expression (the actual flow quantity)
is returned.
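The matching logic can be sketched outside of Excel as well. The following Python sketch mimics the SUMPRODUCT Boolean trick: a quantity survives the multiplication only when flow, compartment, and subcompartment all match a master-list row. The process flow data shown are hypothetical illustrations, not actual US LCI values.

```python
# Python sketch of the SUMPRODUCT matching trick: booleans act as 0/1,
# so a quantity survives only when flow, compartment, and subcompartment
# all match. The process flow data below are hypothetical illustrations.
process_flows = [
    # (flow name, compartment, subcompartment, quantity)
    ("Carbon dioxide, fossil", "air", "unspecified", 1.02),
    ("Methane", "air", "unspecified", 0.0012),
]
master_list = [  # rows of the Model worksheet's master flow list
    ("Carbon dioxide, fossil", "air", "unspecified"),
    ("Sulfur dioxide", "air", "unspecified"),
]
scale = 1.0  # functional unit scale factor (cell E4 in the worksheet)

def lookup(flow, compartment, sub):
    # Each equality test is 0 or 1; the product is nonzero only on a
    # full match, just like the Excel formula's Boolean terms.
    return sum((f == flow) * (c == compartment) * (s == sub) * q
               for f, c, s, q in process_flows)

results = {row: scale * lookup(*row) for row in master_list}
# The CO2 row matches and returns 1.02; the SO2 row returns 0.
```

Scaling this dictionary comprehension up to a list of roughly 3,000 flow/compartment/subcompartment rows is the programmatic equivalent of the complex Model worksheet.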


Figure 5-40: Complex Spreadsheet-Based Process LCI Model

The Chapter 5 folder on the textbook website has spreadsheets with all of the flows and
processes in the US LCI database, as downloaded from the LCA Digital Commons website.

The ‘Simple and Complex LCI Models’ file has a worksheet ‘Complex Model’ that shows
how to use the SUMPRODUCT function to track all 3,000 flows present in the US LCI
database (from the flow file above). Of course the results are generally zero for each flow due
to data gaps, but this example model expresses how to broadly track all possible flows. You
should be able to follow how this spreadsheet was made and, if needed, add additional
processes to this spreadsheet model.

Note: In the US LCI database, there are various processes referred to as “Dummy” or
“CUTOFF” processes. This is just a notation from the database creators that these are known
flows, but they do not connect with any upstream processes within the database. For example,
a “CUTOFF” or “Dummy” sludge process mentioned in an inventory just means there is
sludge that needs to be managed, but that the database has no sludge treatment process to
assess the impacts. It is like a flag saying there is activity, but there will be no quantitative
estimates of the effects of that process.

Homework Questions for this Section

1. Answer Question 2 from the end of Chapter 5 by using the ‘Simple and Complex LCI
Models’ spreadsheet introduced in this section.


Photo Credit: © Chris Goldberg, 2009, via Creative Commons license (CC BY-NC 2.0)


Chapter 6: Analyzing Multiple Output Processes and Multifunctional Product Systems
In Chapter 5, we showed the relatively simple steps of building a process flow diagram-based
LCA model where there was only one product in the system. However, product systems in
LCA studies may have multiple products, with each product providing multiple functions.
Analyzing these systems introduces new complexities, and this chapter demonstrates various
methods (referenced in the Standard) for overcoming or addressing these challenges. The
methods described herein modify either the systems studied or the input and output flow
values so that the multifunction systems can be quantitatively assessed.

Learning Objectives for the Chapter


At the end of this chapter, you should be able to:

1. Discuss the challenges presented by processes and systems with multiple products
and functions.

2. Perform allocation of flows to co-products from unallocated data.

3. Estimate inventory flows from a product system that has avoided allocation via
system expansion.

4. List and discuss potential consequences of a product system in an economy, and
the ways in which those consequences could be modeled via LCA.

Multifunction Processes and Systems

Many processes and product systems are simple enough that they have only a single product
output that provides a single function. However, even when tightly scoped, there are also many
processes and systems that will have multiple products that each provide their own function.
A good example is a petroleum refinery that has outputs of gasoline, diesel, and other products.
LCA studies typically have function and functional unit definitions related to the life cycle
effects of only one product. As such, a method is needed to connect input and output flow
data with a desired functional unit when the underlying data cover multiple products. The
method chosen can have a significant effect on the results, and the choice of method remains
a subject of much debate.

Building on the example figures and discussion in Chapters 4 and 5, Figure 6-1 shows a generic
view of a unit process with multiple product outputs, each providing its own function.

In this case, there are three Products, A, B, and C, associated with functions 1, 2, and 3,
respectively.

Figure 6-1: Generic Illustration of a Unit Process with Multiple Products or Co-Products

Co-products exist when a process has more than one product output – which is a fairly
common outcome, given the complexity of many industrial processes. If the goal of our study
is to assess the effects associated with Product A, which provides Function 1, we need to find
a way to model the provision of Products B and C, which provide Functions 2 and 3. In the
context of a particular study, typically the product of primary interest (i.e., Product A above)
is referred to as the product, and any other products (i.e., Products B and C above) as co-
products, but this is not a standard terminology. We distinguish between processes with joint
production (with fixed proportions between the co-products), and those with combined
production (with independently variable proportions between the co-products). For example,
various mining processes lead to relatively fixed proportions of outputs in terms of ores and
other substances, such as zinc extraction, which leads to cadmium, mercury, and other co-
products in what can be considered fixed amounts. An example of combined production could
be a refinery, which converts crude into various output mixes.

The Standard suggests two ways of approaching this problem: either by partitioning the
process so that a set of quantitative connections are derived between the inputs and outputs
and the various products (known as allocation), or by changing the way in which we have
defined our system so that we can clearly show just the effects associated with Product A and
its associated function (known as system expansion). While system expansion is the preferred
method, we discuss allocation first because it is simpler to understand and also helps to frame
the broader discussion.


Allocation of Flows for Processes with Multiple Products


For a unit process, the goal of allocation is to assign a portion of each input and non-product
(e.g., emission) output to each of the various products, such that the sum of all product shares
equals the total input and output flows for the process. Allocation is also referred to as
partitioning.

In Chapter 5, we accessed the US LCI database information and directly used all of the data
without modification in our model for the bituminous coal-fired electricity process flow
diagram. We even used some of the information in the process data to decide the multipliers
needed in using our other process data sources. The reason we would directly use all of the
data is because the Electricity, bituminous coal, at power plant process listed only one product:
electricity (see Figure 5-7).

For other LCA models, we may have to manipulate the data in some way to make it fit the
needs of our study. However, one could envision an alternative process where aside from
generating electricity, the process also produced heat (e.g., a combined heat and power, or
CHP system). Such a process has multiple products, heat and power, each of which has a
different function. Furthermore, we might want to derive a mathematical means of associating
a relevant portion of the quantified inputs and outputs to each of the products (i.e., to know
how much pollution we associate to each product of the system). This association is called
allocation.

The data associated with processes or systems having multiple product outputs may be
organized in several ways. Their most raw form (i.e., as collected) will be an inventory of
unallocated inputs and outputs, and relative quantities of co-product outputs. These
unallocated flows represent a high level view of the process, representing all flows as measured
but without concern for how those flows may connect to specific co-products. An example
would be process data for an entire refinery that tracks all inputs (crude oil, energy, etc.) and
quantifies all outputs (e.g., diesel, gasoline, etc.). Alternatively, process data may consist of
already allocated flow estimates of inputs and outputs for each co-product. For instance, the
refinery process data would contain estimates of crude oil and energy inputs used for each unit
of gasoline, diesel, etc.

In allocation, the key concern is determining the appropriate mathematical relationship to
transform the unallocated flows into allocated flows. The Standard gives directions on
allocation that prescribe an order of preference yet remain somewhat vague in application.
First, ISO says that, if possible, allocation should be avoided, which we will assume has been
deemed not possible. One way
of avoiding allocation (to be discussed later) is to subdivide the process into smaller sub-
processes. But, for the sake of discussion, if allocation cannot be avoided, then the Standard
says that the inputs and outputs of the process should be partitioned between its products
based on a physical relationship. It states that the physical relationship “should reflect the way in
which the inputs and outputs are changed by quantitative changes in the products or functions
delivered by the system.” Further, if the physical relationship alone is not sufficient to perform
the allocation, then other relationships should be used, such as economic factors. Commonly
used allocation methods include mass-based, energy-based, and economic-based methods.
Not all systems can be allocated in these ways, as some products have no mass, products may
differ in the utility of their energy (e.g., heat versus electricity), and economic features like
market value may fluctuate widely.

It is important to observe that the Standard does not prescribe which allocation method to
use, just a preference for a method based on physical relationship. It does not say to always
use a mass-basis or an energy-basis for allocation, or to never use an economic basis. The only
specifications provided pertain to reuse and recycling, where the Standard gives an ordering
of preference for allocation methods, specifically, physical properties (e.g., mass), economic
values, and number of uses. Practitioners often use this same ordering for allocating processes
other than reuse and recycling, which may be a useful heuristic, but it is not prescribed by the
Standard.

As with other work, choices and methods behind allocation should be justified and
documented. In addition, the Standard requires that when several possible allocations seem
reasonable, sensitivity analysis should be performed to show the relative effects of the methods
on the results (see section at the end of the chapter). For example, we might compare the
results of choosing a mass-based versus an energy-based allocation method.

The following example does not use a traditional production process with various co-products,
but it will help to motivate and explain allocation. In this example, consider a truck
transporting different fruits and vegetables. The truck is driven from a farm to a market, as in
the photo at the beginning of this chapter. For this one-way trip, the truck consumes 5 liters
of diesel fuel, and it emits various pollutants (but these are not yet quantified in this example).
If apples, watermelons, and lettuce were the only three kinds of produce delivered, the
collected data might show that the truck delivered produce in your measured trip with the
values shown in Figure 6-2. ‘Per type’ values are the per piece values multiplied by the number
of pieces for each type of fruit or vegetable.
Items        Pieces   Mass per Piece   Mass per Type   Market Value per Piece   Market Value per Type
Apples          100   0.2 kg           20 kg           $0.40                    $40
Watermelon       25   2 kg             50 kg           $4                       $100
Lettuce          50   0.4 kg           20 kg           $1                       $50
Total           175   -                90 kg           -                        $190

Figure 6-2: Summary Information of Fruits and Vegetables Delivered (per piece and per type)

If we focus on determining how to allocate the diesel fuel, our LCA-supported question
becomes, “how much of the diesel fuel use is associated with each fruit and vegetable?” To
answer this, we need to do an allocation, which requires only simple math. The allocation
process involves determining the allocation factor, or the quantitative share of flows to be
associated with each unit of product, and then multiplying the unallocated flow (in this case,
5 liters of diesel fuel) by these allocation factors to find the allocated flows. Before we show
the general equation for doing this, we continue with the produce delivery truck example. We
use significant digits casually here to help follow the methods.

If the allocation method chosen was based on the number of items in the truck, then from
Figure 6-2 there are 175 total items, so the allocation factor per item is 1/175. Each item of
produce in the truck, regardless of type, would be allocated (1/175 items)*(1 item)*(5 liters)
of diesel, or 0.029 liters. The value (1/175 items)*(1 item) is the allocation factor, 5 liters is
the unallocated flow, and 0.029 liters is the allocated flow. Alternatively, in a mass-based
allocation, the total mass transported was 90 kg, so the allocation factor per kg is 1/90, and
each apple would be allocated 0.2 kg*(1/90 kg)*5 liters, or 0.011 liters of diesel. Finally, the
total market value of all produce on the truck is $190, so the allocation factor per dollar is
1/$190, and each apple would be allocated $0.40*(1/$190)*5 liters, or 0.01 liters of diesel.
The allocation factors and allocated flows of diesel fuel for each fruit and vegetable are shown
in Figure 6-3. The results show that the allocated diesel fuel is quite sensitive to the allocation
method for apples and watermelon: it varies for apples from 0.01 to 0.029 liters (a factor of
3) and for watermelon from 0.029 to 0.11 liters (a factor of 4). The allocated flow of diesel
for lettuce is much less sensitive, varying from 0.022 to 0.029 liters (only about 30%).
             Item-based                      Mass-based                        Economic-based
             Allocation       Allocated     Allocation         Allocated      Allocation        Allocated
             factor           flow (liters) factor             flow (liters)  factor            flow (liters)
Apples       1 item * 1/175   0.029         0.2 kg * 1/90 kg   0.011          $0.40 * 1/$190    0.011
Watermelon   1 item * 1/175   0.029         2 kg * 1/90 kg     0.11           $4 * 1/$190       0.11
Lettuce      1 item * 1/175   0.029         0.4 kg * 1/90 kg   0.022          $1 * 1/$190       0.026

Figure 6-3: Allocation Factors and Allocated Flows of Diesel Fuel per Type of Produce

To validate that our math is correct, we check that the sum of the allocated flows equals the
unallocated value (5 liters). For allocation by items, 0.029 l/item * 175 items = 5.075 liters. By
mass, the check is 0.011*100 + 0.11*25 + 0.022*50 = 4.95 liters. For price, the check is
0.01*100 + 0.11*25 + 0.026*50 = 1+2.75+1.3 = 5.05 liters. The allocations appear correct,
and the slight discrepancies from 5 liters are due to rounding.
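The three allocations and their validity checks can also be reproduced programmatically. The Python sketch below uses the exact (unrounded) quantities from Figure 6-2, so each method's allocated flows sum back to exactly 5 liters, without the rounding discrepancies noted above.

```python
# Sketch of the three allocation methods for the produce delivery truck,
# using the exact quantities from Figure 6-2 (no rounding).
produce = {  # item: (pieces, kg per piece, $ per piece)
    "apples": (100, 0.2, 0.40),
    "watermelon": (25, 2.0, 4.00),
    "lettuce": (50, 0.4, 1.00),
}
diesel = 5.0  # liters of unallocated flow

totals = {
    "items": sum(n for n, kg, usd in produce.values()),        # 175 pieces
    "mass": sum(n * kg for n, kg, usd in produce.values()),    # 90 kg
    "value": sum(n * usd for n, kg, usd in produce.values()),  # $190
}

# Allocated liters of diesel per piece under each method
per_piece = {
    name: {
        "items": diesel * 1 / totals["items"],
        "mass": diesel * kg / totals["mass"],
        "value": diesel * usd / totals["value"],
    }
    for name, (n, kg, usd) in produce.items()
}

# Validity check: allocated flows summed over all pieces recover 5 liters
for method in ("items", "mass", "value"):
    recovered = sum(n * per_piece[name][method]
                    for name, (n, kg, usd) in produce.items())
    assert abs(recovered - diesel) < 1e-9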

The estimates from Figure 6-3 could be used to support a cradle to consumer LCI of energy
use for bringing fruit to market. If you had process data on energy use for producing (growing)
an apple, for instance, you could expand the scope of your study by adding one of the allocated
flows, i.e., 0.029, 0.011, or 0.01 liters of diesel fuel for transport. As noted above, a key concern
is the choice of allocation method (or methods) in support of such a study. While the Standard
says the first priority is to use a physical relationship-based factor, the larger issue is whether
any of the allocation methods would individually lead to a different result. If in the apple LCI,
for instance, you chose the economic allocation over the item-based allocation, you would be
choosing a factor that represents 3 times less estimated transport energy. If the energy used to
grow the apple was otherwise comparable in magnitude to the energy required for transport,
then the choice of transportation allocation method could have a significant effect on the
overall result. In this case, the choice of allocation may be construed as biasing the overall
answer. Since the Standard suggests using sensitivity analyses, the best option may be to show
the cradle to consumer results using all three types of allocation.

The same math could be used to allocate other flows if available, such as data on an unallocated
output flow of 10 kg of CO2 emissions from the truck. Since the allocation factors represent
flow shares of individual pieces of fruit, we use the same allocation factors as we did for diesel
fuel to allocate the CO2 emissions. For any type of produce, the item-based allocation flow
would be 1/175th of the 10 kg of CO2, or 0.057 kg of CO2. A mass-based allocation for each
apple would distribute 0.2 kg/90 kg * 10 kg = 0.022 kg of CO2. Of course, all of the allocated
flows of CO2 would have a value exactly double those of diesel (since there are 10 kg versus 5
liters of unallocated flow). The relative sensitivities of the various allocation choices would be
the same.

Note that in the delivery truck example, it was implicitly assumed that all of the produce were
sold at market - we expected to get $190 in revenue. Aside from being a convenient
assumption, it also implies that the truck would return back to the farm with no produce. One
could argue that this empty return trip (referred to as backhaul in the transportation industry)
requires additional consumption of fuel and generates additional air emissions that should be
allocated to the produce sold at market. Given the weight of the fruit compared to the total
weight of the truck, it’s likely the backhaul consumed a similar amount of fuel, and thus, adding
the backhaul process might double the allocated flows of diesel for delivery in an updated
cradle to consumer LCA. For larger trucks or ocean freighters, an empty backhaul may
consume significantly less fuel. Regardless, these considerations represent potential additions
to the system boundary compared to the delivery alone.

The delivery truck example is not just an example chosen to simplify the discussion of
allocation. Indeed, similar process data would have to be allocated to support different LCIs
and LCA studies. For example, a study on the LCA of making purchases online versus in retail
stores might allocate the energy required for driving a UPS or FedEx delivery truck amongst
the packages delivered that day. It is not obvious which of the allocation methods is best, nor
is it obvious how the allocated results might change with the change of the method. The mass
of the boxed products is potentially a bigger factor in how much fuel is used, and the variation
in the value of the boxes is likely much higher (especially on a per unit mass basis!) than in our
simple produce delivery truck example.

Now that the general quantitative methods of allocation have been discussed, Equation 6-1
represents a general allocation equation, recalling that unit processes generally have multiple
unallocated flows (in this case, indexed by i). Every allocated flow can be associated with each
of the n co-products (indexed by j) using the product of the unallocated flow and the allocation
factor for each co-product. The fraction on the right hand side of Equation 6-1 is the
previously defined allocation factor, which divides the unit parameter w_j of co-product j
(e.g., mass per unit) for the chosen allocation method by the sum, over all n co-products
(indexed by k), of the number of units m_k times the unit parameter w_k:

$$\text{Allocated flow}_{i,j} = \text{Unallocated flow}_i \times \frac{w_j}{\sum_{k=1}^{n} m_k w_k} \qquad (6\text{-}1)$$

Applying this equation to the truck delivery example, the unallocated flow of diesel fuel is 5
liters, the mass-based allocation factor for apples is the mass per apple divided by the sum of
mass of all of the produce in the truck, or 0.2 kg / (100 items*0.2 kg/item + 25 items*2
kg/item + 50 items*0.4 kg/item) = 0.00222, so the allocated flow per apple is 0.011 liters.
Using these values in Equation 6-1 generates all of the results in Figure 6-3.
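As a sketch, Equation 6-1 translates directly into a short function. The helper name allocated_flow is our own, and nothing here is prescribed by the Standard; it simply encodes the allocation-factor fraction.

```python
# A general sketch of Equation 6-1. The function name allocated_flow is
# ours; w is the list of per-unit parameters w_k (e.g., kg per piece) and
# m is the list of unit counts m_k for the n co-products.
def allocated_flow(unallocated, w, m, j):
    """Allocate an unallocated flow to co-product j per Equation 6-1."""
    return unallocated * w[j] / sum(mk * wk for mk, wk in zip(m, w))

# Mass-based allocation for the truck example (apples are index 0):
w = [0.2, 2.0, 0.4]  # kg per piece: apples, watermelon, lettuce
m = [100, 25, 50]    # number of pieces
apple_share = allocated_flow(5.0, w, m, 0)  # liters of diesel per apple
```

Swapping in market values for w reproduces the economic allocation column, and setting every w_k to 1 reproduces the item-based allocation.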

While this transportation example is useful for explaining allocations, the equation and general
method are useful in deriving allocations for other unit processes. In a subsequent section, we
discuss the ways in which allocated flows are implemented and documented in existing LCI
data modules by looking at actual data modules.

Allocation methods within the scope of a study should be as consistent as possible, and
comparative studies should use the same allocation methods (e.g., allocating all flows on a
mass basis) for each system or process. Due to challenges associated with data acquisition, this
may prove difficult, so at least analogous processes should be allocated in the same way (e.g.,
all refining processes on a mass basis). Regardless, all allocation methods and deviations from
common allocation assumptions should be documented.

The ‘Allocation – Fruit Truck’ spreadsheet shows how to use Equation 6-1 to derive the values
shown in Figure 6-3.

An Example of Allocation of Process Flows in US LCI Database


Existing LCI databases have information on many unit processes with co-product outputs.
When used with search engines and LCA software, the unallocated and allocated flows may
be found, and comparing them helps to understand the link between the unallocated and
allocated data.

The public US LCI database provides various process data modules with information already
available on products and co-products, and thus offers excellent examples for learning about
allocation. Two prime examples are the two refinery processes in the US LCI database,
named Petroleum refining, at refinery and Crude oil, in refinery.10 While created by the same
10 Other quite accessible examples in the US LCI database include agricultural and forestry products.

Chapter 6: Analyzing Multiple Output Processes and Multifunctional Product Systems 169

organization (Franklin Associates), these two unit processes provide different LCIs for a
refinery. If you search for “refinery” or “refining” in the Digital Commons/NREL website
(as demonstrated in Chapter 5), various unit processes and allocated co-products are returned,
including:

• Petroleum refining, at refinery
• Diesel, at refinery (Petroleum refining, at refinery)
• Gasoline, at refinery (Petroleum refining, at refinery)
• Crude oil, in refinery
• Diesel, at refinery (Crude oil, in refinery)
• Gasoline, at refinery (Crude oil, in refinery)

The co-products in the returned result can be identified because the name of the co-product
as well as the name of the unallocated refinery process model (in parentheses) is given. The
connection between these two types of data is discussed in more detail below.

Following the format of Chapter 5, Figure 6-4 shows the data available in the US LCI unit
process Petroleum refining, at refinery. The table has been abridged by removing various
elementary flow inputs to save space and reduced to 3 significant figures. The ‘category’ and
‘comment’ fields have also been removed. This crude oil refining process shows nine italicized
product flows representing various fuels and related refinery outputs, e.g., diesel fuel, bitumen
and refinery gas. The last row also shows a functional unit basis for the unit process, i.e., per
1 kg of petroleum refining, which is just a bookkeeping reference entry and does not represent
an additional product. Unlike other process data modules we discovered in Chapter 5, the
outputs of this crude oil process are not all in a singular unit such as 1 gallon or 1 kg. Instead,
the actual product flow quantities are 0.252 liters of diesel, 0.57 liters of gasoline, etc. The
reason for this difference should be clear – there are multiple products! It is not possible to
have a single set of raw inputs and outputs all be normalized to ‘1 unit of product’ when more
than one product exists. This further motivates the notion that the refinery inputs and output
flows would need to be allocated.


Flow Flow Type Unit Amount


Inputs
Electricity, at grid, US, 2008 ProductFlow kWh 0.143
Natural gas, combusted in industrial boiler ProductFlow m3 0.011
Residual fuel oil, combusted in industrial boiler ProductFlow L 0.027
Liquefied petroleum gas, combusted in industrial boiler ProductFlow L 0.001
Transport, barge, diesel powered ProductFlow tkm 0.000
Transport, barge, residual fuel oil powered ProductFlow tkm 0.001
Transport, ocean freighter, diesel powered ProductFlow tkm 0.490
Transport, ocean freighter, residual fuel oil powered ProductFlow tkm 4.409
Transport, pipeline, unspecified petroleum products ProductFlow tkm 0.652
Crude oil, extracted ProductFlow kg 1.018
Dummy_Disposal, solid waste, unspecified, to sanitary landfill ProductFlow kg 0.006
Outputs
Benzene ElementaryFlow kg 1.08E-06
Carbon dioxide, fossil ElementaryFlow kg 2.51E-04
Carbon monoxide ElementaryFlow kg 4.24E-04
Methane, chlorotrifluoro-, CFC-13 ElementaryFlow kg 2.18E-08
Methane, fossil ElementaryFlow kg 3.70E-05
Methane, tetrachloro-, CFC-10 ElementaryFlow kg 1.36E-09
Particulates, < 10 um ElementaryFlow kg 3.15E-05
Particulates, < 2.5 um ElementaryFlow kg 2.31E-05
SO2 ElementaryFlow kg 2.47E-04
Diesel, at refinery ProductFlow L 0.252
Liquefied petroleum gas, at refinery ProductFlow L 0.049
Gasoline, at refinery ProductFlow L 0.57
Residual fuel oil, at refinery ProductFlow L 0.052
Bitumen, at refinery ProductFlow kg 0.037
Kerosene, at refinery ProductFlow L 0.112
Petroleum coke, at refinery ProductFlow kg 0.060
Refinery gas, at refinery ProductFlow m3 0.061
Petroleum refining coproduct, unspecified, at refinery ProductFlow kg 0.051
Petroleum refining, at refinery ProductFlow kg 1
Figure 6-4: US LCI Database Module for Petroleum refining, at refinery

The comment fields in the Petroleum refining, at refinery process, which were removed in Figure
6-4, are excerpted in Figure 6-5. They contain notes related to how the input and output flows
could be allocated to the co-products, and how the creators of this data module derived the
converted allocated values for the specific co-products on a ‘per unit of product’ basis. The
result of this procedure is the various co-product based LCI modules listed above.


Product Comment
Diesel, at refinery Mass (0.2188 kg/kg output) used for allocation.
Liquefied petroleum gas, at refinery Mass (0.0266 kg/kg output) used for allocation.
Gasoline, at refinery Mass (0.4213 kg/kg output) used for allocation.
Residual fuel oil, at refinery Mass (0.0489 kg/kg output) used for allocation.
Bitumen, at refinery Mass (0.0372 kg/kg output) used for allocation.
Kerosene, at refinery Mass (0.0910 kg/kg output) used for allocation.
Petroleum coke, at refinery Mass (0.0596 kg/kg output) used for allocation.
Refinery gas, at refinery Mass (0.0451 kg/kg output) used for allocation.
Petroleum refining co-product, at refinery Mass (0.0515 kg/kg output) used for allocation.
Figure 6-5: Comments Related to Allocation for Co-Products Of Petroleum refining, at refinery

The available US LCI database Microsoft Excel spreadsheet for the Petroleum refining, at
refinery process module shows these allocation factors in the X-Exchange worksheet, in
columns to the right of the comment fields.

The summary product flow of “per kg of petroleum refining” and the comment fields should
clarify that allocation is based on the physical relationship of mass of the various products.
For example, the first row of Figure 6-5 shows the data needed to create an allocation factor
for the diesel co-product. The comment in Row 1 says that 0.2188 kg diesel is produced per
kg refinery output, or in other words, 21.88% of the refinery product represented in the data
for this unit process becomes diesel on a mass basis. The value 21.88% is thus the allocation
factor for diesel. Likewise, 2.66% by mass becomes LPG, and 42.13% becomes gasoline. The
sum of all of the mass fractions provided is 1 kg, or 100% of the mass of total refinery output.
With these values, you could use the information in Figure 6-5 to transform the unallocated
inputs and outputs into allocated flows for your desired co-product.
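This transformation is a one-line multiplication per flow. The sketch below applies the diesel mass fraction from Figure 6-5 to a subset of the unallocated inputs of Figure 6-4 (all values per 1 kg of total refinery output):

```python
# Applying the mass-based allocation factors of Figure 6-5 to a subset of the
# unallocated refinery inputs in Figure 6-4 (all values per 1 kg of output).
inputs = {
    "electricity_kWh": 0.143,
    "natural_gas_m3": 0.011,
    "crude_oil_kg": 1.018,
}
alloc_factor = {"diesel": 0.2188, "lpg": 0.0266, "gasoline": 0.4213}

# Share of each input allocated to the diesel co-product (still per 1 kg of
# total refinery output, not yet per unit of diesel)
diesel_inputs = {flow: amount * alloc_factor["diesel"]
                 for flow, amount in inputs.items()}

print(round(diesel_inputs["crude_oil_kg"], 4))  # 0.2227 kg crude for diesel
```

The result is still expressed per kg of total refinery output; the next step in the text normalizes it to 1 unit of co-product.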

Management of different units and conversions can be a complicating factor in transforming
from unallocated to allocated values in data modules. Figure 6-4 shows all of the inputs and
outputs connected to the refinery on a basis of a functional unit of 1 kg of refined petroleum
product (which we noted was just a book-keeping entry). Note that the co-products have
multiple units: liters, cubic meters, and kilograms. But the product flow of diesel given in the
unit process is 0.252 liter, rather than 1 liter or 1 kg. We are likely interested in an
allocated flow per 1 unit of co-product, which means our previous allocation equation needs
an additional term, as in the generalized Equation 6-2.
$$\text{Allocated flow}_{i,j} = \frac{\text{Unallocated flow}_i}{\text{unallocated flow units}} \times \frac{w_j}{\sum_{k=1}^{n} m_k w_k} \times \frac{\text{unallocated flow units}}{\text{co-product}_j \text{ units}} \qquad (6\text{-}2)$$

Since the allocation factor for diesel in the refinery on a mass basis is 0.2188 (21.88%), that
amount of each of the inputs of the refinery process would be associated with producing 0.252
liters of diesel. Using Equation 6-2, the allocated flow of crude oil (row 10 of Figure 6-4)
needed to produce 1 liter of diesel fuel is:


$$\text{Allocated flow}_{\text{crude, diesel}} = \frac{1.018 \text{ kg crude oil}}{1 \text{ kg refinery output}} \times 0.2188 \times \frac{1 \text{ kg refinery output}}{0.252 \text{ liters diesel}} = 0.884 \, \frac{\text{kg crude oil}}{\text{liters diesel}}$$

Other comments in the US LCI data module (not shown here) note that the assumed density
of diesel is 1.153 liter/kg, so this allocated result means 0.884 kg crude oil is needed to produce
1 liter /(1.153 liter/kg) = 0.867 kg of diesel. This mass-based ratio is comparable to the 1.018
kg crude oil needed to produce the overall 1 kg of refinery product.11 While the 1.153 liter/kg
value may or may not be consistent with a unit conversion factor you might find on your own,
it is important to use the same values as the study used; otherwise you may derive odd results,
such as requiring less than 1 kg of crude to produce 1 kg of diesel.

Similarly, the amount of crude oil needed to make 1 liter of gasoline would be
$$\frac{1.018 \text{ kg crude oil}}{1 \text{ kg refinery output}} \times 0.4213 \times \frac{1 \text{ kg refinery output}}{0.57 \text{ liters gasoline}} = 0.752 \, \frac{\text{kg crude oil}}{\text{liters gasoline}}$$

or, using the US LCI module’s assumed density of 1.353 liter/kg, we need 0.752 kg of crude
oil to produce 1/1.353 = 0.739 kg of gasoline. The same allocation factors are used to
transform the other unallocated flows into allocated flows (e.g., the many other inputs and
outputs listed in Figure 6-4) per unit of fuel.
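Equation 6-2 can be wrapped in a small helper function. The sketch below reproduces the diesel and gasoline results just derived, using only the values from Figures 6-4 and 6-5:

```python
def allocated_flow_per_unit(unallocated_flow, alloc_factor, coproduct_quantity):
    """Equation 6-2: allocate a flow (given per 1 kg of refinery output) to a
    co-product and normalize it to 1 unit (liter, kg, or m3) of that co-product."""
    return unallocated_flow * alloc_factor / coproduct_quantity

crude_in = 1.018  # kg crude oil per 1 kg of refinery output (Figure 6-4)

diesel = allocated_flow_per_unit(crude_in, 0.2188, 0.252)   # kg crude per liter
gasoline = allocated_flow_per_unit(crude_in, 0.4213, 0.57)  # kg crude per liter

print(round(diesel, 3), round(gasoline, 3))  # 0.884 0.752
```

The same function applies to every other input and output flow in Figure 6-4; only the flow amount changes.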

Some of the results above may be unintuitive – i.e., that you started with a process making
0.252 liters of diesel, but ended up needing 0.884 kg of crude petroleum to produce 1 liter of
diesel. Or, if the refinery has multiple products, why is more than 1 unit of crude oil (rather
than just a fraction of 1) needed to produce a unit of refined fuel?

Figure 6-6 helps resolve these potential sources of confusion by showing how much
crude oil is needed to produce varying quantities and units of the nine co-products. This
information is based on the input flow of crude oil into the unit process (1.018 kg crude / kg
refinery product), the allocation factors from Figure 6-5, and the unit conversions provided in
the US LCI data module. The results show the allocated mass of crude oil in two different
forms. First, the ‘per given process units’ column calculates the flows based on those of the
original unit process from Figure 6-4 (e.g., per 0.252 liters of diesel) and with the allocation
factors (rounded to three significant digits, e.g., 0.219 instead of 0.2188) from Figure 6-5.
Second, the ‘per normalized process units’ column calculates the flows based on the varying
unitized flow of each product (e.g., per whole 1 liter of diesel instead of per 0.252 liters). Recall
that the latter column has also been adjusted by the 1.018 kg of crude needed overall to make
1 kg of total refined products!

11 This could also be represented as adding yet another unit conversion at the end of Equation 6-2, from liters to kg of diesel.


                                                 Allocated Crude oil (kg) per
Co-product (units given in US LCI module)        Given Process Units     Normalized Process Units
Diesel (liters)                                  0.219 / 0.252 l         0.884 / l
Liquefied petroleum gas (liters)                 0.027 / 0.049 l         0.552 / l
Gasoline (liters)                                0.421 / 0.57 l          0.752 / l
Residual fuel oil (liters)                       0.049 / 0.052 l         0.962 / l
Bitumen (kg)                                     0.037 / 0.037 kg        1.018 / kg
Kerosene (liters)                                0.091 / 0.112 l         0.824 / l
Petroleum coke (kg)                              0.060 / 0.060 kg        1.018 / kg
Refinery gas (m3)                                0.045 / 0.061 m3        0.751 / m3
Petroleum refining coproduct, unspec. (kg)       0.051 / 0.051 kg        1.018 / kg
Figure 6-6: Comparison of Allocated Quantities of Crude Oil for Nine Co-products of Petroleum
refining, at refinery Process, for Various Co-product Units (l = liter).

The first two result columns of Figure 6-6 summarize results using the methods just
demonstrated for gasoline and diesel. The main difference between them is whether the unit
basis of the co-product is the fractional unit value given in the process data (Figure 6-4), or
whether it has been represented per 1 unit of co-product basis. There were three different
units of product presented (liter, kg, and m3), which may otherwise distort a consistent view
of the effects of allocation. Figure 6-7 shows an interesting final result, allocated crude oil per
mass of co-product.

Co-product (units given in US LCI module)        Allocated Crude oil (kg) per kg product
Diesel (liters)                                  1.018
Liquefied petroleum gas (liters)                 1.016
Gasoline (liters)                                1.018
Residual fuel oil (liters)                       1.019
Bitumen (kg)                                     1.018
Kerosene (liters)                                1.018
Petroleum coke (kg)                              1.018
Refinery gas (m3)                                1.019
Petroleum refining coproduct, unspec. (kg)       1.018
Figure 6-7: Comparison of Allocated Quantities of Crude Oil for Nine Co-products of Petroleum
refining, at refinery Process, per kg of co-product

The results of Figure 6-7 may be surprising, as all of the co-products have the same
requirement of crude oil per kg of product (about 1.018, the original unit process flow). If the
whole point of allocation was to assign flows to the different products, why does it appear that
they all have the same allocated value? In this case, it is because a mass-based allocation was
used (from Figure 6-5), so the crude oil per kg of product is constant. Since we were looking
at an energy system where the default units were liters or m3, this underlying fact was
disguised. This outcome is pervasive in LCA, though. If, for instance, we had performed an
economic-based allocation, the column ‘crude oil per dollar of product’ would have constant
values (the same effect would have been seen in Figure 6-3 – the mass-based allocation factor
was 0.056 liters per kg for all produce). The economically allocated results would also not be
mass balanced, i.e., some outputs would use less mass of crude oil than what is in the mass of
the product.
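The constancy in Figure 6-7 can be checked numerically: dividing the allocated crude oil per native unit by the product mass per native unit cancels the allocation factor, returning (within rounding) the original 1.018 kg crude per kg of refinery output. A sketch for the two fuels whose densities are quoted from the US LCI module comments:

```python
# Under a mass-based allocation, crude oil per kg of any co-product collapses
# back to the unallocated input (1.018 kg crude / kg refinery output). Checked
# here for the two fuels whose densities the module comments provide.
crude_per_liter = {"diesel": 0.884, "gasoline": 0.752}  # from Figure 6-6
liters_per_kg = {"diesel": 1.153, "gasoline": 1.353}    # module assumptions

crude_per_kg = {}
for fuel in crude_per_liter:
    kg_per_liter = 1 / liters_per_kg[fuel]              # product mass per liter
    crude_per_kg[fuel] = crude_per_liter[fuel] / kg_per_liter

print({f: round(v, 2) for f, v in crude_per_kg.items()})  # both about 1.02
```

The small deviations from 1.018 are rounding in the three-significant-figure values of Figure 6-6, not real differences.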

Regardless, you can follow the use of allocated input and output flows for a co-product based
on data modules in the US LCI model by exploring their data and metadata in openLCA,
SimaPro, etc., using the same methods shown in the Advanced Material for Chapter 5.

Avoiding Allocation
When allocation was introduced, we noted that the Standard says the main goal
should be to avoid it. More specifically, the Standard says that allocation should be avoided
in one of two ways, each of which aims to make more direct connections between
the input and output flows of a process and its products, so as to remove the need for
allocating those flows.

The first recommended alternative to allocation is to sub-divide or disaggregate the unit
process into a sufficient number of smaller sub-processes, until none of these processes have
multiple products. Thus the link can be made between inputs and outputs and a single product
for each process (and thus the system). While this solution is very attractive, it requires being
able to collect additional data for all of the new sub-processes. It may also be a puzzling
suggestion because the Standard elsewhere defines a unit process as “the smallest element in
the inventory analysis for which input and output data are quantified.” If a unit process can
be broken into sub-processes, then it was not the smallest possible element in the first place.
However, this reminds the analyst to create processes as distinct and small in boundary as
possible given access to data, so as to avoid allocation issues. In this case, it means trying to
collect data at sufficient resolution as to be able to have processes and systems with singular
product outputs.

Figure 6-8 illustrates a simple case of disaggregation, where a process similar to Figure 6-1
with multiple product outputs is subdivided into multiple processes (in this case, 1 and 2), with
distinct Products A and B. In reality, the disaggregated system may need to have more than 2
processes, and some processes may have only intermediate flows leading to an eventual
product output. Generally, creating process models at a lower level (alternatively called a
higher level of resolution) is no different in kind from creating a higher-level process model;
it simply requires more effort and data. In short, the goal of this ‘divide and conquer’ style
approach is to ensure the result is a set of unit processes with single outputs and no co-products.


Figure 6-8: Disaggregation into Sub-processes

There are other ways of avoiding allocation without disaggregating processes, such as by
drawing different explicit boundaries around the system so that co-products are not included.
For example, an overall oil refinery process has hundreds of processes and many products. If
only a particular product is of interest, then it may be possible to draw a smaller and more
explicit boundary around the inputs, outputs, and processes in a refinery needed just for that
product (and that has no connection to other products). Allocation may thus be avoided, since
it is possible to vary the outputs individually, so that you can model the inputs and outputs
resulting from an additional unit of output of one of the products without an increase in any
of the other products.

The second alternative to allocation recommended in the Standard is system expansion, or,
“expanding the product system to include the additional functions related to the co-products.”
The Standard offers little detail on this method, so practitioners and researchers have
developed various interpretations and demonstrations of system expansion, such as Weidema
(2000). In short, in system expansion we think more broadly about the product systems and
how they are defined. We leverage the facts that systems with multiple product outputs are
typically multifunctional, and that LCA requires definition and comparison of systems on a
functional unit basis. System expansion adds production of outputs to product systems so that
they can be compared on the basis of having equivalent function.

Going back to the earlier example of a system producing heat and electric power, each of these
products provides a different function – the ability to provide warmth and the ability to
provide power. In a hypothetical analysis based only on a functional unit of electricity,
comparing a CHP system with a process producing only electricity would be unequal. Figure
6-9 generalizes the unequal comparison of the two different systems, one with a single product
and function and one with two products and functions. Expansions would be similar for more
than two products.

Closing the loop on the earlier distinction of joint and combined production, we note that in
combined production, subdivision according to physical causality is always possible, while in
joint production, system expansion is always possible and allocation can be used as a proxy in
some specific cases.

Figure 6-9: Initial Comparison of Multifunctional Product Systems (Source: Tillman 1994)

Note that while the two systems provide the identical Function 1 (providing electricity in our
example), the technological process behind providing that function is not generally assumed
to be identical (one is Product A and the other is Product C). If the functional unit of a study
is, for instance, ‘moving a truck 1 km’, the product to support that function might be gasoline
or diesel fuel. Likewise, the product could be the same, but the technology behind making it
different. For example, the product could be electricity, but generated using either renewable
or non-renewable technology.

In a comparative LCA, consideration of functions is important. Product systems with a
different number of functions can be analyzed by adding functions and products across
systems as needed until the systems are equal (i.e., by ‘expanding the system boundary’ for
systems that do not have enough function outputs). Considering Function 1 to be providing
power and Function 2 to be providing heat, for instance, system expansion allows the product
systems in Figure 6-9 to be compared by adding processes representing the production of heat
to the system (making Product B). Figure 6-10 shows the result of such a system expansion.


Figure 6-10: System Expansion by Adding Processes (Source: Tillman 1994)

The system expansion focuses on ensuring that various systems provide the same functions.
However, the additional functions may be provided via various products, and thus a variety of
alternative technologies or production processes. When considering how to model additional
function in the expanded process, it may be done by modeling the identical product as in the
multifunction products (e.g., natural-gas fired electricity), or with an alternative product
and/or technology providing the same function (e.g., gasoline or diesel, renewable or non-
renewable electricity). Of course, using different production technologies in system expansion
may lead to significantly different results, but the reality of markets and practice may support
this case. Alternative technology assumptions for the expanded system should be considered
in a sensitivity analysis.

The procedure for identification of alternative technologies is identical to identification of the
technologies for any other input to the system; it is not particular to alternative technologies.
For example, in expanding a system, electricity produced as a co-product by burning lignin in
a biofuel production plant may alternatively be produced by US average electricity. Likewise,
solar cells may be a poor choice to expand a system in a comparison with a fossil-based process
that produces electricity.

The general example presented in Figure 6-10 is straightforward, and system expansion is not
necessarily more complex than disaggregation – in fact, it can often be quite simple. One
should not view either disaggregation or system expansion as the preferred alternative to
allocation, and likewise not generalize one or the other as being more difficult. It should be
noted that disaggregation requires that each product in real life can be produced alone, thus
reflecting variable proportions of the output, implying that a physical description between
inputs and outputs can be established. More importantly, if system expansion is applied to
such a situation, the result will default to the same result as with disaggregation.

In a particular study, performing disaggregation could be time and/or data prohibitive and
thus system expansion is the only alternative to allocation. On the other hand, system
expansion may be hard to motivate or justify given challenges in identifying alternative
technologies for the expanded system, which leads to spending time and effort in
disaggregating the processes. The refinery example discussed earlier is a useful example. The
high-level refinery process data module shown is highly aggregated (and as a result it has many
product outputs). Far more detailed process models of refineries have been developed and
could be used if needed for a study in lieu of using the allocated refinery module presented.
While this may take substantial effort, it avoids the need for the system expansion approach
that would require creating larger comparative product systems with added production for
many more refinery outputs.

It is hopefully intuitive that the system expansion example in Figure 6-10, which ‘adds to
Process 2’, would lead to the same relative result as ‘subtracting’ (i.e., crediting) the
same process data from the results of Process 1. This equivalent method is shown in Figure
6-11, and it is referred to as the substitution or avoided burden approach. This is still
considered system expansion because the boundary was expanded for one of the systems (but
the numerical results are credited instead of added).

Figure 6-11: System Expansion of a Multifunction Process via Subtraction (Source: Tillman 1994)

Now that the methods of system expansion have been introduced, additional clarification is
needed to know how to organize the expansion analysis. The way in which a system is
expanded in multifunction systems requires knowledge of the determining and dependent
product outputs. The determining product (or reference product) is the product for which
a change in demand affects the production volume, while the dependent product (or by-
product) is one for which a change in demand does not affect the production volume. In
other words, the determining product constrains the production volume of the dependent co-
products (Weidema 2000).

When the co-product of interest is the determining product, it is assigned all inventory flows
in the multiple output system, and the flows from the displaced product are subtracted. When
the co-product of interest is the dependent product, then the multi-output process is not
needed, and a process representing the additionally needed supply of that product is included
instead (see the soybean example in Schmidt (2010)).

The result of either allocation or system expansion methods may cause other issues in your
study, since modifying the original model may mean your study scope changes (and potentially
your goal as well). In the heat and power example, a study originally seeking to compare only
the effects of ‘generating 1 MJ of heat’ in two different systems (i.e., Product A vs. Product C)
would need to have its study design parameters adjusted to consider systems providing power
and an equivalent amount of heat (via Product B), e.g., ‘generating 1 kWh of electricity and
producing 100 MJ of heat’. An exception is when using the avoided burden system expansion,
since this always keeps the original functional unit intact.

For the heat and power example, the US LCI database has various data to support a system
expansion effort. Figure 6-12 shows abridged US LCI data for CHP.

Flow Type Unit Amount


Inputs
Wood, NE-NC hardwood ProductFlow kg 4E-02
Natural gas, combusted in industrial boiler ProductFlow m3 0.004
Outputs
Carbon dioxide, fossil air/unspecified g 8
Carbon dioxide, biogenic air/unspecified g 70
Heat, onsite boiler, hardwood mill average, NE-NC ProductFlow MJ 1
Electricity, onsite boiler, hardwood mill average, NE-NC ProductFlow kWh 5.0E-05
Figure 6-12: Process Data for Hardwood Combined Heat and Power (CHP)
(abridged, adapted from various US LCI database modules)

The US LCI database also has process data for heat generation, as in Figure 6-13.

Flow Type Unit Amount


Inputs
Natural gas, processed, at plant ProductFlow m3 1
Outputs
Carbon dioxide, fossil air/unspecified kg 2
Heat, natural gas, in boiler ProductFlow MJ 30
Figure 6-13: Process Data for Generating Heat (from US LCI Heat, natural gas, in boiler)

Biogenic emissions occur as a result of burning or decomposing bio-based products. The
‘biogenic’ CO2 emissions in the CHP process data arise from burning the wood, and the fossil
CO2 emissions come from burning the gas. The carbon in the wood comes during its growth
cycle via natural uptake of carbon (in this case through photosynthesis). In lieu of crediting
the natural product for this same uptake of carbon, which would lead to a net of zero
emissions, the biogenic emissions of CO2 in a full life cycle may be considered to be neutral.
As such, biogenic carbon emissions are accounted for but generally tracked separate from
fossil or other sources. Example 6-1 demonstrates how to use the US LCI data for a system
expansion involving CHP.

Example 6-1: Avoid allocation of the direct CO2 emissions from a hardwood mill combined heat
and power (CHP) process.

Answer: As motivated above, a direct comparison is problematic: the CHP process has two
outputs providing two functions, while a heat-only process such as the boiler in Figure 6-13
has one output providing one function. This is a good case for using system expansion. In the
CHP process, heat is the determining co-product (because demand for heat drives the use of
the process, while electricity could instead be carried over long distances). Thus, electricity is
the dependent co-product. We need to consider an alternative process for producing electricity
to subtract from the existing CHP process, i.e., avoiding the alternate production of electricity,
as in Figure 6-11. One alternative is electricity
produced by burning natural gas, using abridged US LCI data in Figure 6-14.

Flow Type Unit Amount


Inputs
Natural gas, processed, at plant ProductFlow m3 0.3
Outputs
Carbon dioxide, fossil air/unspecified kg 0.6
Electricity, natural gas, at power plant ProductFlow kWh 1
Figure 6-14: Data for Gas-Fired Electricity Generation (from US LCI Electricity, natural gas, at power plant)

Our goal is to compare the two heat-producing systems based on a common functional unit of
‘producing 1 MJ of heat’. Tracking the CO2 emissions, the CHP process emits 78 g of CO2 for the
1 MJ heat and 5E-05 kWh electricity produced in the baseline, of which 70 g are biogenic
emissions and 8 g are fossil-based. We see that generating 5E-05 kWh of electricity via natural
gas would emit 0.6 kg CO2 / kWh * 5E-05 kWh = 0.03 g CO2. However, all of the CO2 in this
expanded system is fossil-based. Thus, the total CO2 emissions from producing 1 MJ of heat are
78 g – 0.03 g = 77.97 g, of which 70 g are biogenic (unchanged) and 7.97 g are fossil CO2.

Comparing to the heat only process (Figure 6-13), which emitted 2 kg/30 MJ = 67 g fossil
CO2/MJ, we see that the systems are roughly comparable in total CO2 emissions (78 vs. 67 g), but
the CHP unit using wood input has about 90% less fossil CO2 emissions (8 g versus 67 g).
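The avoided-burden arithmetic of Example 6-1 can be checked in a few lines of Python, using only the values given in Figures 6-12 through 6-14:

```python
# Avoided-burden system expansion for Example 6-1.
# Functional unit: 1 MJ of heat (with 5e-5 kWh of electricity as co-product).
chp_fossil_g = 8.0      # g fossil CO2 per (1 MJ heat + 5e-5 kWh electricity)
chp_biogenic_g = 70.0   # g biogenic CO2 for the same output
electricity_kwh = 5e-5  # electricity co-product per functional unit
gas_electricity_g_per_kwh = 600.0  # 0.6 kg CO2/kWh (Figure 6-14)

# Credit the CHP process for the avoided gas-fired electricity generation
credit_g = gas_electricity_g_per_kwh * electricity_kwh   # 0.03 g
fossil_g = chp_fossil_g - credit_g                       # 7.97 g
total_g = fossil_g + chp_biogenic_g                      # 77.97 g

# Heat-only comparator (Figure 6-13): 2 kg CO2 per 30 MJ of heat
heat_only_g = 2000.0 / 30                                # about 67 g/MJ
print(round(fossil_g, 2), round(total_g, 2), round(heat_only_g))
```

The final comparison in the example follows directly: about 8 g of fossil CO2 for the credited CHP system versus about 67 g for the gas boiler.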

If, for the sake of discussion, an energy-based allocation was done instead in Example 6-1,
then the CHP system produces 1 MJ of heat and 0.00018 MJ of electricity (at 3.6 MJ / kWh)
for a total of 1.00018 MJ. Thus the fossil CO2 would have been allocated 99.98% (1
MJ/1.00018 MJ) to the 1 MJ heat and 0.02% (0.00018 MJ/1.00018 MJ) to the 5E-05 kWh of
electricity produced. Comparing the allocated CHP emissions for heat to the heat generation
process gives ~8 g fossil CO2 versus 67 g CO2, or about 90% less, similar to the result above
and not qualitatively different.

An incorrect approach

For the sake of illustration, we also show what would happen if the CHP system were
incorrectly expanded in the opposite way, i.e., with electricity as the determining product and
heat the dependent product. In this incorrect case, we would use alternative process data for
producing heat to add to a standalone gas-fired electricity process. Using the hardwood mill
boiler as the baseline, our goal would be to compare the two systems based on a common
functional unit of ‘producing 1 MJ of heat and 5E-05 kWh of electricity’ (or, alternatively, to
the scaled functional unit ‘producing 1 kWh of electricity and 20,000 MJ of heat’).

Tracking the CO2 emissions, the CHP process emits 78 grams of CO2 for the 1 MJ heat/5E-
05 kWh electricity functional unit, of which 70 g are biogenic emissions and 8 g are fossil-
based. For the system expanding electricity and heat processes, generating 5E-05 kWh of
electricity would emit 0.6 kg CO2 / kWh * 5E-05 kWh = 0.03 g CO2, and producing 1 MJ of
heat would emit (2 kg/30 MJ) * 1 MJ = 67 g CO2. However, all of the CO2 in this expanded
system is fossil-based. As before, the systems are roughly comparable in total CO2 emissions, but
the CHP unit using wood input has 59 g, or about 90%, less fossil CO2 emissions (8 g versus
67.03 g, the extra digits shown to make clear that the total includes both parts).

Instead of adding the heat process to the electricity process, we could credit the CHP process
for avoided production of heat, as in Figure 6-11. The 2 kg/30 MJ = 67 g of fossil CO2 from
Figure 6-13 would be subtracted from the CHP system. In this case, it means the CHP system
has 8 g – 67 g = -59 g of fossil CO2 emissions per 5E-05 kWh of electricity! The relative
difference between the systems is the same (59 g less fossil CO2). Note how this incorrect
approach leads to a substantially different consideration of the CO2 in the system.

Putting aside the fact that the approach above is incorrect, note that the ‘negative emissions’
result of such expansion methods has caused some stakeholders to suggest not using the
subtraction method, but again, the relative difference is the important result, regardless of the
sign convention used. A negative result is an odd outcome, especially as compared to
allocation, which only leads to positive flows. But, the Standard suggests avoiding allocation.
The negative result itself is not an indicator of an incorrect application of system expansion –
various appropriately done system expansions lead to the view that impacts are comparatively
less than in the unexpanded system.

The CHP example motivates the question of which alternate production process is used in
the expansion, and additionally whether the alternative production process chosen is
representative of the average, some best or worst case, or merely based on the only data point
available. The choice should be sufficiently documented, and follow ISO 14049, which
suggests (in clause 6.4):

“The supplementary processes to be added to the systems must be those that would actually
be involved when switching between the analyzed systems. To identify this, it is necessary
to know:

• whether the production volume of the studied product systems fluctuate in time ..,
or the production volume is constant,

• (...) whether (...) the inputs are delivered through an open market, in which case it
is also necessary to know:

  o whether any of the processes or technologies supplying the market are
  constrained (in which case they are not applicable, since their output will
  not change in spite of changes in demand),

  o which of the unconstrained suppliers/technologies has the highest or
  lowest production costs and consequently is the marginal supplier/
  technology when the demand for the supplementary product is generally
  decreasing or increasing, respectively.”

Further discussion on using average or other processes follows in the next section.

Expanding Systems in the Broader Context


So far, discussions and examples of the additional production added via system expansion
to make the various product systems equivalent have addressed what, on average, is an
appropriate addition. This is because, so far, only an attributional or descriptive
approach has been discussed for LCA studies. Attributional LCAs seek to determine the
effects now, or in the past, which inevitably means that our concerns are restricted to average
effects. However, emerging practice and need in LCA often seeks to consider the
consequences of product systems or changes to them. In consequential LCA studies,
marginal, instead of average, effects are considered (Finnveden et al. 2009). Marginal effects
are those effects that happen ‘at the margin’, and in economics refer to effects associated with
the next additional unit of production. Furthermore, consequential analyses seek to determine
what would change or need to change given the influence of changing product systems on
markets. Using the CHP example, heat is currently, on average, produced by burning average
domestic natural gas. But on the margin, it is likely that such gas may be shale gas from
unconventional wells. In the future, perhaps some other alternative fuel would be the marginal
source. Certain resources, products, and flows may be quite scarce, and a significant demand
for one resource in a product system could, on the margin, lead to an alternative that is radically
different. Note that the average and marginal technologies used in an analysis could be
identical – they do not need to be different – but when different, the effects associated with
them could be substantially different.

Consequential studies often use separate economic models to aid in assessing changes. For
example, increased exploration and production of unconventional natural gas (e.g., shale gas)
has been stated through attributional LCA studies as leading to 50% reductions in the carbon
intensity of electricity generation, since natural gas generation, which emits about 50% less
CO2 per kWh, would on average replace coal-fired generation. Some of these studies were done
when coal was the source of 50% of our electricity generation. But on the margin, increasing
the supply of low-priced natural gas has the prospect of replacing not just coal-fired generation
but also electricity from nuclear power plants, which have very low carbon intensities. Using
such marginal assumptions, and an economically driven electricity dispatch model, Venkatesh
(2012) suggests that a consequence of cheap natural gas on regional electricity CO2 emissions
could be reductions of only 7-15%, far lower than the 50% expected on average.

Another consequential effect seen in LCA studies relates to how land is used and managed.
Historically, studies related to activities such as biofuel production modeled only the effects
of plowing and managing existing cropland for corn or other crops (known as direct land use).
That means that the studies assumed that production would, on average, occur in places where
the same, or similar, crop was already being grown, and thus, the impacts of continuing to do
so are relatively modest. Recent studies (Searchinger 2008, Fargione 2008) highlighted the fact
that increased use of land for crops used to make bio-based fuels in one part of the world can
lead to conversion of other types of land, e.g., forests, into cropland in other parts of the world
(a phenomenon called indirect land use change). In such cases, the carbon emissions and
other effects of converting land into cropland are far higher. This consequence of increasing
use of cropland for biofuels has been quantitatively estimated and added to the other LCA
effects, and leads to results substantially different than those that do not consider this effect.

In LCA, considering the market-based effects of a substitution typically leads to a discussion
of displacement of products. Displacement occurs when the production of a co-product of
a system displaces (offsets) production of another product in the market. The quantitative
effect of displacement is that the flows from what would occur when producing the alternative
product are ‘credited’ to the main product system because it is assumed that the displacement
results in less production of the alternative product. A traditional example of displacement is
for a system where electricity is produced as a co-product. In such situations, usually the
electricity co-product is assumed to displace alternative production of electricity, typically
assumed to be grid electricity. The effect of displacement is thus crediting (subtracting from)
the inventory of the product system with the inventory of an equivalent amount of grid
electricity. More practically, a co-product will often displace a different product. An often-
stated displacement assumption in the biofuels domain is that the ethanol co-product dried
distillers grains with solubles (DDGS) displaces animal feed. This is because DDGS are
assumed to be a viable alternative food for livestock, and so the inventory flows of producing
a functionally equivalent amount of animal feed are credited to the product system. DDGS
typically displaces only about 60% of animal feeds on a mass basis given differences in protein
content (meaning that it displaces animal feed one-to-one on a protein basis). Production of
heat may not be functionally equivalent either, and a displacement ratio may be needed to
make a relevant comparison in those cases. Both of these are examples of the displacement
ratio (or displacement efficiency) that would be expected, and it may be greater than, less
than, or equal to one based on cost, quality, or other factors.
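A displacement credit with a ratio other than one can be sketched as follows. The 0.6 mass-based ratio for DDGS follows the text; the 2.0 MJ/kg energy intensity of the displaced feed is a made-up placeholder, not a value from the chapter:

```python
# Hedged sketch: scale the co-product credit by a displacement ratio,
# i.e., how much of the displaced product one unit of co-product replaces.

def displacement_credit(coproduct_kg, ratio, flow_per_kg_displaced):
    """Inventory credit (to subtract from the product system).

    ratio: kg of displaced product replaced per kg of co-product
    flow_per_kg_displaced: inventory flow per kg of the displaced product
    """
    return coproduct_kg * ratio * flow_per_kg_displaced

# 1 kg of DDGS displacing animal feed at a 0.6 mass-based ratio; the
# 2.0 MJ/kg for producing the displaced feed is illustrative only.
credit_mj = displacement_credit(coproduct_kg=1.0, ratio=0.6,
                                flow_per_kg_displaced=2.0)
print(credit_mj)  # 1.2 MJ credited against the product system
```

A ratio of 1.0 recovers the simple one-to-one substitution assumed in the grid electricity example.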

Displacement is a consequence of production and market availability of a co-product;
however, even LCAs that do not incorporate complex economic market models can consider
effects from displacement. Consideration of the price and quantity differences resulting from
displacement would be an appropriate addition for a consequential LCA.

These “ripple-through” market considerations are at the heart of marginal analysis and
economics, and thus consequential LCA. Whether a study is attributional or consequential in
nature should be specified along with other parameters of a study. Of course, a study could
be interested in both average and marginal effects, so as to consider the relative implications
of the introduction of a product system to the market.

Additional sources on consequential LCA and differences from attributional LCA are
provided in the references section at the end of this chapter.

Comparative Analysis of Allocation and System Expansion


While the Standard clearly states that allocation should be avoided in favor of system
expansion, as consistent with other LCA study issues, a primary concern is whether a
difference in study goals has a significant effect on the study results. In this case, we are
concerned as to whether the qualitative and quantitative conclusions change based on our
study goal leading to use of an allocation method, and/or whether we perform system
expansion instead of allocation. The study results should explicitly note whether the use of
allocation, system expansion, etc., has a large effect on the results (i.e., whether different
goals would have led to quantitatively and qualitatively different results).

To perform a comparative analysis of allocation methods, the results are found using each of
the alternative allocations. It is common to see a table or graph comparing the effect at the
process or whole product system level. Figure 6-15 shows a graphical comparison of the three
different allocation methods for the three different fruits in the delivery truck example at the
beginning of the chapter.


[Bar chart: allocated flow in liters (0 to 0.12) for Apples, Watermelon, and Lettuce under Item, Mass, and Economic allocation]

Figure 6-15: Comparison of Allocation Methods

As discussed earlier, but perhaps made more visually clear above, the allocation method used
has a fairly substantial effect on how much fuel is allocated to each fruit or vegetable. The
item-based method allocates the same amount to each item, while the mass- and economic-based
methods allocate far more to watermelon than to the other products. In such cases, the choice of
allocation has a significant effect on the overall results, and thus, the implications should be
noted in the study.
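A comparison like Figure 6-15 can be generated from any set of allocation bases with a short script. The function below is generic; the item counts, masses, and prices are placeholders, not the actual values from the fruit truck example:

```python
# Partition one unallocated flow across co-products in proportion to a
# chosen basis (items, mass, or economic value). Placeholder data only.

def allocate(flow, basis):
    """Split `flow` across products in proportion to the `basis` values."""
    total = sum(basis.values())
    return {product: flow * share / total for product, share in basis.items()}

count = {"Apples": 10, "Watermelon": 2, "Lettuce": 5}       # items carried
mass = {"Apples": 2.0, "Watermelon": 8.0, "Lettuce": 1.5}   # kg
value = {"Apples": 2.5, "Watermelon": 8.0, "Lettuce": 4.0}  # $

diesel_l = 0.5  # liters of the unallocated flow (also a placeholder)
for name, basis in [("item", count), ("mass", mass), ("economic", value)]:
    print(name, {p: round(v, 3) for p, v in allocate(diesel_l, basis).items()})
```

Whatever the basis, the allocated amounts sum back to the original flow, which is a useful sanity check on any allocation table.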

Beyond the example above, other life cycle studies have performed similar comparisons.
Jaramillo et al (2009) studied the life cycle carbon emissions of injecting CO2 from power
plants underground to enhance oil recovery operations. The study showed the intermediate
carbon emission factors for each of the products (oil and electricity) depending on various
allocation and/or system expansion study goals. Figure 6-16 shows the various emission
factors for electricity. Again, such an analysis suggests that the allocation method and/or the
assumptions made in support of system expansion significantly affect the results.


Figure 6-16: Comparative Analysis of Allocation Methods and System Expansion for Electricity
Related Emissions from Enhanced Oil Recovery (Source: Jaramillo et al 2009)

These examples demonstrate ways in which studies can be documented in order to support
allocation or system expansion, and that the choices made can lead to results that are
significantly different.

Chapter Summary
Managing the data and assumptions associated with multifunctional systems is a pervasive
challenge in LCA. While allocation is an often-used method, the Standard recommends
avoiding it in favor of system expansion. Allocation is a quantitative exercise that partitions
flows across products based on established relationships between them. System expansion
avoids allocation by expanding the boundaries of analysis to include alternate production and
functions. As with other elements of LCA, choices in performing either method need to be
sufficiently documented, and, where relevant, supported by sensitivity analysis. Effectively
managing multifunctional systems ensures high quality studies that can be more readily
reviewed and compared to other studies.


References for this Chapter


Fargione, J., Hill, J., Tilman, D., Polasky, S., Hawthorne, P., 2008. Land Clearing and the
Biofuel Carbon Debt. Science 319, 1235.

Finnveden G, Hauschild MZ, Ekvall T, Guinée J, Heijungs R, Hellweg S, Koehler A,
Pennington D, and Suh S, Recent developments in Life Cycle Assessment, Journal of
Environmental Management, 2009, 91(1), pp. 1-21.

Goldberg, Chris, license - https://creativecommons.org/licenses/by-nc/2.0/legalcode

Schmidt, J.H., “Challenges Relating to Data and System Delimitation in Life Cycle
Assessments of Food Products”, in Environmental Assessment and Management in the Food
Industry: Life Cycle Assessment and Related Approaches, Woodhead Publishing, 2010.

Searchinger, T., Heimlich, R., Houghton, R.A., Dong, F., Elobeid, A., Fabiosa, J., Tokgoz, S.,
Hayes, D., Yu, T.H., 2008. Use of US Croplands for Biofuels Increases Greenhouse Gases
Through Emissions from Land-Use Change. Science 319, 1238.

Sheehan, J.; Camobreco, V.; Duffield, J.; Graboski, M.; Shapouri, H. Life Cycle Inventory of
Biodiesel and Petroleum Diesel for Use in an Urban Bus, NREL/SR-580-24089, National Renewable
Energy Laboratory: Golden, Colorado, 1998. http://nrel.gov/docs/legosti/fy98/24089.pdf

Tillman, Anne-Marie, Ekvall, Tomas, Baumann, Henrikke, and Rydberg, Tomas, Choice of
system boundaries in life cycle assessment, Journal of Cleaner Production, Vol. 2, Issue 1,
pp. 21-29, 1994.

Venkatesh, Aranya; Jaramillo, Paulina; Griffin, W.; Matthews, H. S., “Implications of changing
natural gas prices in the United States electricity sector for SO2, NOX and life cycle GHG
emissions”, Environmental Research Letters, 7, 034018, 2012

Weidema, Bo, Avoiding Co-Product Allocation in Life-Cycle Assessment, Journal of Industrial
Ecology, Volume 4, Issue 3, pp. 11–33, July 2000.

Further Reading
Wang, M.; Lee, H.; Molburg, J. Allocation of energy use in petroleum refineries to petroleum
products. The International Journal of Life Cycle Assessment 2004, 9, 34–44.


End of Chapter Questions

Objective 1. Discuss the challenges presented by processes and systems with
multiple products and functions.

1. Beyond those discussed in the chapter, provide an example of a process that creates
multiple products and/or multiple functions. Clearly distinguish the products and
functions and note which product(s) help provide which function(s). What would be the
challenges in comparing your chosen process to others that may also have multiple
products or functions?

Objective 2. Perform allocation of flows to co-products from unallocated data.

2. Using the fruit truck allocation example from the chapter as a basis, derive the allocated
flows for each of the following unallocated flows not shown in the inventory for the fruit
delivery truck process for each of the three allocation methods (per item, by mass, and by
economic value):

• 1 kg maintenance oil (input to the process)

• 0.5 kg carbon monoxide to the air (output of the process)

• 0.1 kg particulate matter less than 10 microns in diameter (output of the process)

3. Redo Figure 6-3, including allocated flows of 5 liters diesel, if the market values of apples,
watermelon, and lettuce are $0.25, $1, and $2, on a per-item basis.

Objective 3. Estimate inventory flows from a product system that has avoided
allocation via system expansion.

4. Figure 6-17 shows an abridged portion of the unit process for processing soybeans to
make soy oil, which can be used to make alternative fuels. The table has been normalized
per unit of soy oil. For ease of use, all separate energy sources have been aggregated into
a single “Energy” flow. In each question below you will seek to find the amount of energy
associated with producing soy oil and/or soybean meal using allocation and system
expansion methods.

Note that this question uses a mix of English and metric units, which is a common issue
to deal with in LCA.


Flow (unit)      Value

Inputs
Soybean (lb)     5.7
Energy (MJ)      6

Outputs (lb)
Soybean oil      1
Soybean meal     4.48

Figure 6-17: Soybean Processing (Source: Sheehan et al 1998)

a) Allocate the input energy flow based on a mass basis.

b) Assuming market values for soybean oil and meal of $1.05/kg and $0.29/kg,
respectively, allocate the input energy flow on a market value basis.

c) Assuming energy contents for soybean oil and meal of 37.2 MJ/kg and 16.2 MJ/kg,
respectively, allocate the input energy flow on an energy basis.

Figure 6-18 and 6-19 provide alternative production data for soybean processing
outputs. Use the data as needed to answer questions d and e via system expansion.
Where relevant, consider the case where soybean meal substitutes for two kinds of
feed on a protein equivalent basis, and soybean meal has a protein content of 48%.
Assume oils are equivalent as substitutes.
Feed     Protein Content   Energy Use (MJ/kg)
Barley   12%               2.4
Corn     9%                2.1

Figure 6-18: Protein Content and Energy Use for Barley, Corn, and Soybean as Feed

Oil            Energy Use (MJ/kg)
Palm oil       8
Rapeseed oil   18

Figure 6-19: Life Cycle Energy Use for Producing Palm and Rapeseed Oil

d) Currently, soybean meal is the determining product. What is the energy use of the
product in the system?

e) In the future, market conditions may change such that soybean oil is the determining
product. In that case, what is the energy use of the product in the system?

f) Discuss your various results and comment on which of the methods seem most
appropriate.


Objective 4. List and discuss potential consequences of a product system in an
economy, and the ways in which those consequences could be modeled via LCA

5. Qualitatively discuss a potential consequence that might arise in each of the following
situations, and what might need to be done to consider those consequences in an LCA study.

a) Corn is used to make biofuels

b) Growth in e-commerce purchases leads to larger delivery fleets


Chapter 7: Uncertainty and Variability in LCA

“A decision made without taking uncertainty into account is barely worth calling a
decision.” Wilson (1985).

Every value we model, measure or estimate is uncertain. Prior chapters provided brief
introductions to the concepts of uncertainty and variability. This chapter discusses these
concepts more substantively related to data and modeling for LCI and LCA studies.12

We also discuss the implications of uncertainty and variability in terms of interpreting study
results. These issues are perhaps most critical when performing comparative assessments
where our qualitative conclusions may be dependent upon the quantitative strength of our
data and results. The overall goal is to be able to express relevant effects of uncertainty and
variability in LCA, either in qualitative or quantitative form. Chapter 11 will build on this
chapter and present advanced methods for modeling uncertainty.

Learning Objectives for the Chapter


At the end of this chapter, you should be able to:

1. Describe the various sources and types of uncertainty and variability for data, model
inputs, and methods

2. Describe and apply qualitative, semi-quantitative, and quantitative methods to address
uncertainty in LCA

3. Apply a sensitivity analysis to help frame uncertainty in an LCA model

4. Interpret uncertain results to support study conclusions

Why Uncertainty Matters


In Chapter 2, variability was generally defined as related to the diversity and heterogeneity of
systems, and uncertainty as resulting from a lack of information or an inability to measure.
But, having seen several examples of data and methods, we can now define these terms in the
specific context of life cycle inventories and assessment.

12 Thanks to Jeremy Gregory, Randy Kirchain, and Elsa Olivetti of MIT, Gwen DiPietro of CMU, and Pascal Lesage of CIRAIG for their
critical insights into organizing the content of this chapter. Also, much of the conceptual treatment of uncertainty in this chapter is based
on Morgan and Henrion (1992).


In LCA, uncertainty, also called epistemic uncertainty, exists as a result of using inputs or
methods that imperfectly capture the characteristics of the product system. The data itself
could be unavailable, or of questionable quality. The methods may similarly be imperfect. Note
that in many domains, uncertainties are also referred to as sources of error, which (while an
imperfect descriptor) implies that with effort we could generate a result with less error, i.e.,
more certainty. Variability, also called aleatory uncertainty, in LCA occurs as a result of
randomness in the data (or potentially the methods). Variable results lead to different values
of data, even from technically identical processes.

As noted in Chapter 2, uncertainty can generally be reduced by performing additional research,
while variability cannot be similarly reduced. These statements remain true in the context of
LCA, but we can seek ways to either better manage or represent uncertainty and variability in
our work.

In LCA studies, and in most examples so far in this book, a sequence of assumptions and data
inputs into a model lead to results that can be expressed either in table or figure form. Such
results are typically the total LCI (or LCIA) results expressed as point estimate (deterministic)
values. Such single point results fail to convey the inherent uncertainty in LCA data and
modeling.

Two general rationales for performing LCA have been suggested previously: (1) hot spot
identification, or (2) supporting a comparison of multiple product systems. In the case of hot
spots, the general goal is to conclude where the highest LCI values exist across the system,
and thus which are most worthy of our attention for impact assessment (and potentially
making design changes to mitigate them). The hot spot test (or algorithm) is to find all distinct
activities, and sort or rank them based on their contribution to the overall LCI in the total
system. Those with the highest LCI values or ranks are the biggest contributors meriting our
interest. In the case of a comparison (as shown in Figure 7-1), the goal is to conclude which
alternative, e.g., A or B, has the lower LCI value, and thus, is preferable. So far, the test for
making such a conclusion has been a simple ‘less than’ comparison, i.e., if ‘A is less than B’
for a particular LCI flow, then A meets the test and is concluded to have better performance
than B. These two prototypical LCA cases will be revisited throughout this chapter.
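The hot spot test described above is simple enough to express directly. A minimal sketch, in which the activity names and LCI values are invented purely for illustration:

```python
# Rank activities by their contribution to the total inventory; the
# largest contributors are the hot spots meriting attention.

def hot_spots(contributions):
    """Return (activity, value, share of total), sorted largest first."""
    total = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return [(name, value, value / total) for name, value in ranked]

# Illustrative LCI contributions, e.g., kg CO2e per functional unit
lci = {"transport": 12.0, "electricity": 45.0,
       "packaging": 3.0, "raw materials": 40.0}

for name, value, share in hot_spots(lci):
    print(f"{name:14s} {value:6.1f}  {share:6.1%}")
```

Note that in this deterministic form, the ranking flips on any difference, however tiny, which is exactly the weakness this chapter addresses.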

The two rationales discussed thus far have not accounted for uncertainty and variability. That
is, it hasn’t mattered whether a comparison or ranking was only different in the third or fourth
significant digit. If one activity or alternative differs by 0.001% compared to another, then it
is ranked higher or lower in the hotspot ranking, or in the comparison. Whether that difference
is significantly different has not yet been addressed.

The ISO Standard specifies several points related to uncertainty. ISO 14040 notes that
amongst the key features of an LCA study, “LCA addresses potential environmental impacts;
LCA does not predict absolute or precise environmental impacts due to:


• the relative expression of potential environmental impacts to a reference unit,

• the integration of environmental data over space and time,

• the inherent uncertainty in modelling of environmental impacts, and

• the fact that some possible environmental impacts are clearly future impacts”

While not discussed in Chapters 4 or 5, ISO 14044 suggests that “data quality requirements
should address the uncertainty of the information, e.g., data, models, and assumptions” and
that uncertainty analysis and sensitivity analysis shall be done for comparative studies intended
for public release.

Figure 7-1: Deterministic Comparison of LCI Results for Two Options

In the broader sense of using LCA to support comparative decisions, such basic methods can
lead to challenging situations because, with the data and methods available to LCA, nothing
can be known with certainty. It is difficult to identify a relatively larger or smaller value with
confidence. Uncertainty and variability affect LCA studies in many ways, and the associated
issues should be recognized and addressed when using data, creating models, and interpreting
results. When considering uncertainty and variability, the results will generally no longer be
point estimates. Thus, the simple ‘A is less than B’ test must be strengthened to interpret
results and make robust conclusions.

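One common way to strengthen the simple comparison is to propagate uncertainty by Monte Carlo simulation and report the probability that A is lower than B. A minimal sketch, assuming (purely for illustration) normally distributed LCI results with made-up means and standard deviations:

```python
# Monte Carlo comparison of two uncertain LCI results: draw from each
# option's distribution many times and count how often A comes out lower.
import random

random.seed(42)  # reproducible draws

def prob_a_less_than_b(mean_a, sd_a, mean_b, sd_b, n=100_000):
    """Fraction of paired draws in which option A's result is lower."""
    wins = sum(1 for _ in range(n)
               if random.gauss(mean_a, sd_a) < random.gauss(mean_b, sd_b))
    return wins / n

# Overlapping distributions: a deterministic comparison of the means
# would declare A the clear winner, yet A is lower in only about two
# of every three draws.
p = prob_a_less_than_b(mean_a=95, sd_a=10, mean_b=100, sd_b=10)
print(f"P(A < B) = {p:.2f}")
```

Chapter 11 develops this kind of simulation more fully; the point here is simply that a point-estimate winner can be far from a certain one.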
The types of decisions where LCA is being used to provide information have grown in
importance in the last 25 years. Chapter 1 introduced some of the landmark studies in the
field. Notable amongst these were those associated with the ‘paper versus plastic’ debate of
the 1990s. These debates raged in terms of trying to promote paper or plastic as the material
of choice for items such as cups and shopping bags. The general answer back then (and for
many years since) in relation to a question of “which is better for the environment, paper or
plastic?” has been a resounding “it depends,” i.e., the results have been inconclusive. Similar
“it depends” conclusions have arisen in comparisons of cloth and disposable diapers and
traditional versus online retail shopping.


But even the question of paper vs. plastic, while important from a scale perspective given
the massive quantities of each material used in the technosphere, pales in
comparison to some of the more important and timely decisions for which LCA has been
recently sought to offer advice. These decisions have been related to issues like biofuels (i.e.,
gasoline vs. ethanol), alternative vehicles (internal combustion engines vs. battery and hybrid
electrics), and others whose motivations lie beyond merely the environmental issues. These
latter examples are policy questions for which society needs answers so as to more effectively
decide how to allocate billions of dollars of resources to stimulate investments that we believe
will have far-reaching benefits. At these levels, the importance is far greater than just
promoting an eco-friendly drinking cup, and these are the decisions that matter, as motivated in
the title of this book. Unfortunately, some LCA studies for policy decisions (and many others)
have still produced “it depends” conclusions.

An important source of uncertainty in current LCA studies of bioresources

One of the most recent and relevant big issues that could have been better informed
through more attention to uncertainty relates to indirect land use change (ILUC) as
related to biofuels. While a complex issue, simply put, growing crops for food or fuel
requires land. Converting land from one use to another, such as from forests to farm,
leads to significant greenhouse gas emissions from biomass decomposition, disturbance
of soils and other activities. Finally and most importantly, a likely result of promoting
biofuels is an increase in price of the crops, which incentivizes increased production and
conversion of land. It is thus possible that more GHG emissions result in a biofuel system
than without it. Various studies have considered ILUC emissions and the results vary.
Critically, there is no way to measure ILUC directly, and there is thus no objective way
to tell which estimate (if any) is correct. In modeling biofuel systems, various
researchers, including the US EPA, have conservatively considered ILUC effects, leading
to conclusions that biofuels can reduce GHG emissions. However, using some of the
higher estimates of ILUC effects on GHG emissions would reverse this conclusion. Thus,
the conclusion in this case ‘depends’ highly on which ILUC values are considered, and
the use of any one of the ILUC estimates creates a very narrow (and perhaps misleading)
conclusion. Clearly, uncertainty matters.

LCA conclusions 'depend' on many things, typically related directly to uncertainties in the
results, e.g., because the quantitative results were close or could have swung in a different
direction under a different assumption. In comparative studies, many definitive 'clear winner'
results should more fairly have been deemed inconclusive after accounting for uncertainty. In a field
that seeks to perform an accounting of impacts, inconclusive results can be seen as a failure of the
method. Put bluntly, the fact that the answer presented in LCA studies continues to be "it
depends" in so many cases has led some to suggest that the domain of LCA is either unable

Life Cycle Assessment: Quantitative Approaches for Decisions That Matter – lcatextbook.com
196 Chapter 7: Uncertainty and Variability in LCA

to answer significant questions, or that when faced with such questions, the methods of LCA
are not powerful enough to help inform these important decisions.

In reality, it is rarely a failure of the LCA Standard when studies are not able to produce
conclusive answers. The underlying failure is more often a lack of substantive and quantitative
effort in the data and methods.

Thus, the main goal of this chapter is to better understand how we can leverage existing
practices and methods to best inform decisions that matter – whether that decision is one you
care about personally, or one your country needs to know before investing large sums of
money. We seek more robust methods where we can feel comfortable with our stated
conclusions regarding the performance of product systems, whether for identifying hot spots
or comparing systems. Doing so will require that we reopen our introductions of uncertainty
and variability from earlier in the book, and also look at the wealth of available data so that
our results can be informed by ‘all of the data’ rather than just data from a single known source
that we choose for our study. We seek methods to show specifically what our conclusions
might depend on, and how to interpret uncertain results.

A guiding framework for this chapter with respect to uncertainty is to define it, characterize
it, and to conduct an uncertainty analysis, via qualitative or quantitative approaches. These
methods range from qualitative identification of uncertainty and variability, to quantitative
methods like sensitivity and uncertainty analysis and use of ranges, as well as (later in the book)
probabilistic and statistical methods.

Sources of Uncertainty and Variability Relevant to LCA


Figure 7-2 shows a generic representation of a model (not just for LCA) and its three main
components. Data and inputs are used in methods, and the combination of data, inputs, and
methods produce results (a.k.a., outputs). All three of these components can be uncertain or
variable, and the forms of uncertainty can be similar across the three. Data and inputs have a
dashed line coming into the model boundary as they typically come from sources outside the
model.

Figure 7-2: Generic Model Flow Diagram


Note: In the remainder of this chapter, 'uncertainty' will be used as shorthand pertaining to
both uncertainty and variability (which, don't forget, is a type of uncertainty). Some issues
specific to variability will be noted.

Uncertainty and variability are always relevant in LCA studies, but uncertainty and variability
are perhaps most important when building comparative models (especially those that will lead
to comparative assertions). Heijungs and Huijbregts (2004), Williams et al. (2009), and Lloyd
and Ries (2007) provide excellent descriptions of the various uncertainties relevant to LCA.
These sources inspired the generalized types of uncertainty summarized below. Most of these
types of uncertainty are inherent in any LCA study, whether using process-based or other
methods. The next few sections describe the uncertainty types shown in Figure 7-3 in more
detail, as organized by the three model components stated above.

Uncertainty Type | Source of Uncertainty | Potential Remedies
Data | Errors or imperfections in sources for model inputs | Additional collection of data, e.g., from multiple sources or measurements; treatment of values as ranges instead of point estimates.
Cutoff | Choices in modeled product system boundaries | Expansion of boundaries to include additional effects; use of methods that include comprehensive boundaries to assess degree of cutoff errors; also hybrid LCA methods (see Chapter 9).
Aggregation | Similar higher- or lower-level process data being used as a proxy for desired process | Disaggregation of data to lower levels; hybrid LCA methods.
Geospatial | Variations in the locations of processes as related to (potentially uncertain) data | Incorporating location-specific data for key processes or known hot spots.
Temporal | Technological progress not being fully represented in (potentially old) data | Performing time series analysis and forecasting.
Model or Method | Due to structural or conceptual aspects of chosen methods | Comparison of results from multiple methods.

Figure 7-3: Categorization of Sources of Uncertainty in LCA (Modified from Williams et al 2009)

Data or Input Uncertainty


Our discussion begins with uncertainty related to the first model component, data and inputs.
As discussed in previous chapters, and earlier in this chapter, LCA data sources can vary widely
in quality and applicability for a particular study. Inputs (i.e., to models) can be based on data
or may be known parameters or assumptions. There are several relevant general definitions
regarding data uncertainty described below, all of which inherently consider that a particular
underlying ‘value’ of a quantity used is uncertain. As such, data/input uncertainty is perhaps
the easiest to comprehend.

Measurement uncertainty generally refers to the case where a ‘ground truth’ or perfect
measurement is possible using a particular technology, and measurement using an alternative
technology will lead to differing degrees of imperfect results. This is analogous to the
graduated cylinder example in Chapter 2 – if it were possible to produce a cylinder with finer
gradation lines on it, we would expect to be able to produce measurements that were less
uncertain. But measurement uncertainty may be more than just determining the appropriate
number of significant figures to report. In LCA terms, this could refer to the previous
description of LCA as an accounting method vs. measurement method, or, the lack of a carbon
metrometer, as discussed in Chapter 5. However, some LCA data is based on measured data,
and these data sources are still uncertain due to the use of imperfect measurement
technologies. Thus, reported flows of emissions or releases, or quantities of energy and
resource use, should be considered as having various levels of uncertainty. Such levels may or
may not be quantified.

Parameter uncertainty arises when the parameters (i.e., inputs) used in a model are uncertain.
Typically, all parameters in a model have some degree of uncertainty, except for physical
constants. In LCA, the parameters can be the various data sources for LCI, but also unit
conversions, assumptions about number of uses over the life cycle, etc. A few specific sources
of parameter uncertainty are:

Survey issues: Uncertainties in survey-based source data can result from sampling, reporting,
and other issues. Data sources in government reports (which are used to populate process
data modules and other LCA datasets) often come from surveys of individual firms or
facilities. The parameter derived from the survey could be transport distance or mode for
getting fuel to make electricity. Surveys are not sent to (and responses are not required of)
all relevant firms or facilities in an industry – statistical sampling methods are used. These
sampling methods may lead to unrepresentative sets of facilities being surveyed. In addition, survey
questions can be misinterpreted, and thus data can be incorrectly provided. The result is
that surveys may report incorrect values which, if used, lead to inaccurate models. Minimizing
survey errors depends upon the actions of survey designers, the firms surveyed, and the data
compilers in the government; thus, LCA practitioners generally cannot reduce this uncertainty.

Incomplete and missing data: As noted in Chapter 5, process data modules often have
information for a limited number of flows. This means that there potentially exist flows
that are not tracked. Public data sources, which may be the original source for LCI data,
are limited by their accuracy and completeness of coverage. For example, US EPA’s Toxic
Release Inventory is not collected for some industrial sectors or for plants below a specified
size or threshold level of releases. As a result, estimates of toxic emissions from processes
(the parameter of interest) tend to be underestimated. The largest problem with missing
data comes from data that are not collected. Correcting data sources for incomplete or
missing data could require a level of effort that can’t be afforded or justified for a given
LCA study.

Quantity conversions: As discussed in Chapter 2, physical unit-based flow data often arise
from conversions, such as transforming fuel expenditure data via assumed unit prices. Even
unit price and expense data taken from the same year could vary on a daily basis. In addition,
price assumptions for aggregated products could vary at a sub-product level. As an example,
the input flow of ‘gasoline use per kg’ for a plastic process could be uncertain based on
using an annual average industrial fuel price (or even average petrochemical production)
from a DOE report.
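As an entirely hypothetical sketch of this kind of quantity-conversion uncertainty, the snippet below converts an assumed fuel expenditure into a physical flow via an assumed unit price, and shows how treating the price as a range rather than a point estimate propagates into a range for the resulting flow. All names and numbers here are invented for illustration.

```python
# Hypothetical sketch: converting an expenditure into a physical flow via an
# assumed unit price. All numbers here are invented for illustration only.

def expense_to_quantity(expense, price_per_unit):
    """Convert an expenditure into a physical quantity (e.g., liters of fuel)."""
    return expense / price_per_unit

expense = 1000.0  # assumed annual fuel expense for the process (USD)

# A single annual-average price yields a single, falsely precise flow value:
point = expense_to_quantity(expense, price_per_unit=0.90)

# Acknowledging that the price varied during the year yields a range instead
# (note the inversion: a higher price implies a lower physical quantity):
low = expense_to_quantity(expense, price_per_unit=1.10)
high = expense_to_quantity(expense, price_per_unit=0.75)

print(f"point estimate: {point:.0f} units; range: {low:.0f} to {high:.0f} units")
```

The same pattern applies to any derived parameter: carrying the input range through the conversion, rather than a single assumed value, preserves the uncertainty for later analysis.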

Geospatial uncertainty refers to variations in natural or technical processes in different
places around the world. This could refer to ways in which a particular process is done in
different countries or regions (compared to others), different degrees of biogenic effects, etc.
As such, there could be geospatial differences in impacts from releasing particular pollutants
to the environment (e.g., higher damages from emitting particulates in urban vs. rural areas).

Temporal uncertainty refers to changes in processes or assumptions over time. For example,
consider modeling a current product using data from the year 2000. One would expect some
change in the efficiency of the process over such a long period. Finding data that matches the
current year may be difficult unless a substantial amount of primary data is obtained.

Old data: Databases can have data that is generally quite old. This could be problematic
because there may be technological change in sectors or processes, as new production
techniques are introduced, and these changes would not be represented in old data;
replacing human labor with machines is a typical example. There may be changes in the
demand for certain inputs, resulting in capacity constraints and changes in the production
mix. Newer products may be invented and introduced. Relative price changes may occur
which lead manufacturers to change their production process.

In considering how temporal factors could affect data and models, consider the age of data in
the US LCI database, started in the mid-2000s. Looking at the search function in the database’s
website, you can find a distribution of the ‘basis year’ of all of the posted data modules. This
is a date that is not visible within the metadata, but is available in downloaded data modules
and is summarized by the web server. Figure 7-4 shows a graph of the distribution of the basis
years. In short, there is a substantial amount of relatively old data, and a substantial amount of
data where this basis year is not recorded (value given as ‘9999’). Half of the 200 data modules
updated in 2010 are from an update to the freight transportation datasets. The presence of all
of this ‘old data’ suggests that process data is not always going to be new, even if the date you
download the data is 2018! All of these factors could be important when considering the
suitability of data in a particular database.


Figure 7-4: Frequency Distribution of Data Basis Years in US LCI Database (as of August 15, 2013)
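One simple, programmatic response to this temporal issue is to screen a database's modules by their basis year before relying on them. The sketch below is a toy illustration: the module names and years are invented, and the value 9999 mirrors the database's marker for an unrecorded basis year.

```python
# Toy screening of data modules by basis year. Names and years are invented;
# 9999 mirrors the US LCI convention for an unrecorded basis year.

MISSING = 9999

def flag_old_modules(basis_years, current_year=2018, max_age=10):
    """Return a dict of modules whose data is older than max_age or unrecorded."""
    flagged = {}
    for module, year in basis_years.items():
        if year == MISSING:
            flagged[module] = "basis year not recorded"
        elif current_year - year > max_age:
            flagged[module] = f"data is {current_year - year} years old"
    return flagged

modules = {
    "Electricity, bituminous coal, at power plant": 2000,
    "Hypothetical freight transport module": 2010,
    "Hypothetical module, basis year unknown": MISSING,
}

for name, reason in flag_old_modules(modules).items():
    print(f"{name}: {reason}")
```

A screen like this does not fix old data, but it makes the temporal data quality issue explicit rather than hidden in metadata.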

Forecasting: LCA analysts compound the problems of change over time by extrapolating into
the future. LCA users are often interested in impacts in the future, after the introduction
of new designs and products. Models based on older data may or may not be useful in
support of these forecasted effects. Undoubtedly, such models introduce uncertainty.

Model or Method Uncertainty


Beyond issues associated with uncertainty in data and inputs, the choice and type of methods,
as well as the underlying activities, lead to uncertainty. Compared to data uncertainty, which
generally relates to different quantitative values, uncertainty about model or method is harder
to think about.

Fundamental to this discussion about models is the premise that “all models are wrong; the
practical question is how wrong do they have to be to not be useful” (Box & Draper, 1987).

Process-based model uncertainties: While process-based models are often viewed as
preferable given their connection to specific technological descriptions and data, process
models still have uncertainties. Process models are only as good as the process data available
therein, both in terms of the resolution (number of processes in the model) as well as the
number of input and output flows available. Process-based models are also typically linear.

Software tool uncertainty: While most major LCA software tools use advanced methods and
thus largely depend on matrix multiplication of the same underlying data (as we will see in
coming chapters), different tools using the same LCI data may generate different results. This
is because there may be internal differences in how parameters, assumptions, etc., are used.
Software tools may also have graphical user interfaces that obscure differences in the
underlying data, leading to differences in what we actually model.

Database uncertainty: The way in which LCI databases collect, record, interpret, or present
data may be different. A default scope assumption in one database may be very different than
another (e.g., one might include transportation of a product while another may not). While the
underlying issue may be related to the models, when implemented, this difference may lead to
parameter uncertainty!

Allocation uncertainty: As noted in Chapter 6, a study may choose a particular allocation
method (either simply assuming a single method, or as the result of exploring various
allocation methods and choosing the one with the most benign impact on the results). An
appropriate way of dealing with allocation uncertainty is performing a sensitivity analysis.

This subsection should make clear that methods and tools, not just data and inputs, can
contribute significant uncertainty to LCA results. Generally, the best means to compensate for
method uncertainty is to use multiple methods and compare results.

Uncertainties in Results
There is no unique type of result uncertainty. Results are uncertain because of the accumulated
effect of uncertain data, inputs, assumptions, and methods. However, uncertainties in results
are the most visible and tangible expressions of uncertainty to both the practitioner and the
audience. LCA study results are typically the most featured component of a study, with
prominent graphics and references in the LCA summary. Much of the focus in this chapter on
managing uncertainty in LCA is centered on interpreting and displaying the effects of
uncertainty on study results.

Most LCA studies are completed in part with computers. However, computers know nothing
inherently about uncertainty and will display results to as many significant figures as allowed,
giving a false sense of accuracy or precision. This is risky when the results are provided to
users who are generally unaware of uncertainty, often leaving an impression of greater
certainty than is appropriate, along with summary tables or graphics that reinforce that impression.
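One simple countermeasure is to control the number of significant figures before results are displayed. The helper below is a generic sketch, not tied to any particular LCA tool, that rounds a computed value to a chosen number of significant figures.

```python
# Rounding a computed result to a limited number of significant figures, so
# that displayed precision does not overstate the accuracy of the model.

def to_sig_figs(x, n=2):
    """Round x to n significant figures using general-format string rounding."""
    return float(f"{x:.{n}g}")

raw_result = 123.456789  # e.g., kg CO2e as computed internally by software

print(raw_result)              # full precision implies unwarranted accuracy
print(to_sig_figs(raw_result)) # 2 significant figures: 120.0
```

Applying such rounding only at the reporting stage keeps the full precision available for intermediate calculations while presenting results at a precision the data quality can plausibly support.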


Uncertainty in results may not be obvious from its underlying components; it can be greater
or smaller than the sum of its parts. As a result, we need methods to characterize such
uncertainty. The final column of Figure 7-3 suggests typical remedies from Williams et al
(2009) to deal with the various types of uncertainties in LCA.

The following section further describes various qualitative and quantitative approaches to
addressing uncertainty across the uncertainty types described previously. A guiding
framework for this chapter, and a good basis for any analyst, is to define uncertainty,
characterize uncertainty, and conduct uncertainty analysis.

Methods to Address Uncertainty and Variability


As discussed throughout this chapter, our primary reason for caring about uncertainty is that
we need to be aware of how uncertainties affect our results, and more specifically, whether
they affect the conclusions arising out of our interpretation of the results. Two popular goals
for LCA studies are 1) to identify the ‘hot spots’ of a single product system in support of
improvements, and 2) to compare alternate technologies, processes, or approaches. These two
prominent types of studies are also critically connected to considerations of uncertainty and
are the focus of the discussions below. We want to ensure our interpretation of hot spots is
focused on the appropriate components, or that we are able to confidently assess which of
multiple systems we expect to have the lowest impact. If uncertainty is substantial, a particular
process could be wrongly tagged as a hot spot (or not tagged when it should be). Likewise, in
comparative studies, we care about our model’s robustness; that is, we seek the ability to
conclude whether A is better than B (given the uncertainty).

Figure 7-5 summarizes alternative approaches, ranging from qualitative to quantitative, that
are used in studies and are described in more detail below. These approaches may be the
specific remedies needed to address the uncertainties mentioned in Figure 7-3.


Type | Uncertainty Assessment Approach | Brief Description
Qualitative | Discussion of sources | Textual summary (without quantification) of uncertainties
Semi-quantitative | Significance heuristics | Use of pre-set thresholds in comparing alternatives
Semi-quantitative | Pedigree matrix | Subjective assessment of data quality to generate parametric uncertainty factors
Quantitative | Sensitivity analysis | Use of different values of parameters to see effect on results
Quantitative | Uncertainty propagation via ranges | Tracking and use of various data and inputs to see ranges of results possible
Quantitative | Uncertainty propagation via distributions | Use of probability distributions to generate probabilistic results (not just ranges)

Figure 7-5: Summary of Qualitative and Quantitative Approaches to Managing Uncertainty

Qualitative and Semi-Quantitative Methods


As defined in Chapter 2, qualitative methods are those that qualify, rather than quantify,
results. Qualitative methods, in general, do not use numerical values as part of the uncertainty
analysis but instead focus on discussion and textual representations of uncertainty. Semi-
quantitative methods are partially quantitative – in the case of uncertainty
assessment, they may use indirect or proxy measures rather than direct quantification of
uncertainty. These methods might not incorporate uncertainty into the model results.

Qualitative: Discussion of Sources of Uncertainty Relevant to a Study

As mentioned in Chapter 5, an LCA study should discuss data quality issues, including aspects
of uncertainty and variability of data. An initial example of a qualitative assessment of
uncertainty in an LCA study would be to describe in words the expected effect of the various
kinds of uncertainty. This could include separate discussions pertaining to the various data
quality categories.

Interpretation: A hypothetical qualitative assessment that could be provided in a study is shown
in Figure 7-6, which builds on the concept of data quality indicators from Chapter 5.


Data Quality Indicator | Uncertainty Assessment by Category
Reliability | All data for key processes are based on measurements (primary data), so uncertainty is deemed to be relatively low for this category.
Completeness | Various processes include only effects of direct production, leading to some cutoff uncertainty.
Temporal | All data is from the ecoinvent database, which generally has recent data, leading to relatively low uncertainty.
Geographical | All data sources were available from the same country of relevance to the study (USA), so uncertainty is deemed to be low for this category.
Further technological correlation | All processes relevant to this study are traditional and long-studied processes, thus there is little uncertainty with respect to representativeness of data.

Figure 7-6: Example Qualitative Assessment of Uncertainty via Data Quality Categories

While a mere description cannot give specific quantitative support to uncertainty assessment,
it is useful in ensuring that the reader is aware that the study was done with knowledge of the
stated uncertainties, as opposed to being ignorant of them. Such text, as described in Chapter
5, would be the minimum expected in an LCA study. You may be wondering whether it is
possible to form a quantitative measure of uncertainty from the various data quality categories
and how uncertain they are – such a method is described in Chapter 11.

Semi-Quantitative: Significance Heuristics

Now that we have come this far in our discussions about uncertainty in LCA, hopefully you
realize that a simple deterministic 'A is less than B' test can be short-sighted as a
comparison criterion. Many LCA consulting firms that perform LCA work under contract
for sponsors have adopted semi-quantitative approaches to support comparisons of results. A
heuristic is a simple problem-solving approach that is deemed sufficient, often referred to
as a 'rule of thumb'. Significance heuristics are pre-defined thresholds of uncertainty such that
comparative results are presumed equal unless surpassing the threshold. In other words, in
lieu of a study being able to fully assess the effect of uncertainty, the authors note that they
will require results to be different by a preset level before concluding that one is better. Note
that ‘pre-defined’ is important, in that the thresholds need to be set before data is collected so
authors cannot manipulate conclusions (i.e., later change the threshold to be a lower value).

In advance of conducting the study, authors create internally consistent definitions of
'significance' in the context of comparing alternatives. These rules of thumb are rooted in the
types of significance testing performed in statistical analyses (e.g., confidence intervals) but
which are generally not usable given the small number of data points used in such studies.
Thresholds adopted in practice, such as for energy and carbon emissions, are at least 20%,
with even higher percentages for other LCI categories. This means that LCI values to be
compared for Alternatives A and B would need to be at least 20% different to consider the
difference as meaningful or significant. Importantly, comparative results less than the 20%
threshold would be considered inconclusive, as the alternatives would be deemed equivalent.
Comparative differences surpassing 20% support a conclusion of one alternative being less
than another. Note that there is not a solid science behind the choice of a threshold of 20%,
versus 15% or 30%. This choice is fairly subjective, and it will be revisited in a later section.
Setting such a threshold is a useful practice, and is in line with the goals we have in this chapter
of making sure that we are sufficiently aware of the uncertainties in our work when preparing
our results.

As stated in Chapter 2, management of significant digits is a related issue. While good practice
involves reporting only a small number of digits – generally 2 or 3 at most, even if we are
skeptical that the data quality can support them – using too few digits will simultaneously
constrain the identification of differences for comparisons. Consider, for example, that LCI
results for a specific inventory flow are found to be 0.6 units and 1.4 units. The underlying
difference is more than 50%. However, if the results are rounded to the nearest whole unit,
each would be reported as 1 unit, implying that the results are equivalent. In such cases it may
be more useful to present 2 significant digits to maintain the underlying data but still be careful
in assessing the difference qualitatively, such as by noting that the results are still quite similar.

An additional aspect of significance heuristics involves systems where flows can be net positive
or negative (for example, systems whose biogenic carbon accounting includes carbon sinks that
reduce net carbon). In these cases, significance heuristics may explicitly state that comparative
differences in sign are to be concluded as significant. Thus, if alternative A has a positive
flow of GHGs, and B has a negative flow of GHGs, the negative flow would be concluded as
being significantly lower. Once again, though, the relative difference should still be assessed
via a comparative difference – would you really want to conclude that a system with -0.001 kg
CO2 is lower than a system with +0.001 kg CO2? The general 20% difference threshold may
still be useful in such cases!
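The heuristic described above can be sketched as a small comparison function. This is an illustrative implementation, not a standard: both the 20% relative threshold and the optional absolute floor (which guards against over-interpreting tiny opposite-sign flows like the ±0.001 kg CO2 example) are subjective choices that would be fixed before a study begins.

```python
# Illustrative significance heuristic for comparing two LCI results. The 20%
# relative threshold and the absolute floor are pre-set, subjective choices.

def compare_with_heuristic(a, b, rel_threshold=0.20, abs_floor=0.0):
    """Return 'A lower', 'B lower', or 'inconclusive' for LCI values a and b."""
    if abs(a - b) <= abs_floor:
        return "inconclusive"  # absolute difference too small to matter
    denom = max(abs(a), abs(b))
    if denom == 0 or abs(a - b) / denom < rel_threshold:
        return "inconclusive"  # relative difference within the threshold
    return "A lower" if a < b else "B lower"

print(compare_with_heuristic(80, 100))   # 20% apart: meets the threshold
print(compare_with_heuristic(95, 100))   # only 5% apart: inconclusive
# Opposite signs but tiny magnitudes; an absolute floor keeps this inconclusive:
print(compare_with_heuristic(-0.001, 0.001, abs_floor=0.01))
```

Measuring the relative difference against the larger magnitude is one of several defensible conventions; the important point is that the rule is declared before the results are computed.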

Interpretation: In visually implementing a 20% significance heuristic, Figure 7-7 shows a
hypothetical summary of results for three products, and a horizontal 20% reduction line from
alternative B. Here, A is below the 20% threshold and can be concluded to be lower than B.
However, alternative C does not meet the threshold, so the comparison is inconclusive. Given
that significance heuristics are subjectively set based on perceptions of uncertainty or lack of
data quality, an inconclusive result translates to: “the uncertainty or variability in the underlying
model or data is too great so as to prevent a clear conclusion regarding which is better.”


Figure 7-7: Graphical Illustration of 20% Difference as Significance Heuristic

The use of such heuristics is only needed when other quantitative assessment methods of
uncertainty are not possible, i.e., quantitative approaches (as described below) are preferable.

Generally, the qualitative and semi-quantitative methods are not sufficiently robust for the
quantitative support we need or want when using LCA to assess big decisions, or decisions
that matter. While the approaches above can help convey considerations of uncertainty, fully
quantitative methods are needed to adequately represent the uncertainty in studies.

Quantitative Methods to Address Uncertainty and Variability


In each of the approaches discussed in this section, attention is given to developing visual aids
that serve to express the quantified uncertainty, so that the audience is better able to quickly
appreciate it. As before, the rationale for applying quantitative methods is to answer the
qualitative goal of determining how robust the results are, and determining whether
conclusions or interpretation change when uncertainty is considered.

Ranges
Figure 7-8 shows the general flow for a model with ranges, which is otherwise similar to Figure
7-2 except that the inputs and results are expressed with ranges, here shown with a
box-and-whisker-plot-like representation.


Figure 7-8: General Flow for Model with Parameter Ranges

When introduced in Chapter 2, ranges were suggested as a simple way of representing multiple
estimates or sources instead of reporting only a single value. Ranges can also be used in the
LCA context to express values from different sources or data modules. In cases where multiple
data points or value exist, ranges can be useful. They can pertain to inputs or outputs of LCA
models.

Note that results of using alternative data sources are often presented as different rows or
columns of results in a table. However, a graphical solution involves use of uncertainty bars
(or, in the terminology of software like Microsoft Excel, 'error bars'). The term 'error bars' is
problematic, however, because it suggests the range of values is due to errors rather than to
variability or uncertainty; in practice we rarely have a good sense of the actual underlying
measurement error in our data.

Interpretation: With ranges, an analyst should not be comfortable making a conclusion about
the LCI values of A and B if any part of the error bars overlap. A comparison of two
alternatives A and B may look generally like Figure 7-9. In this example, the top-end range of
A seems to overlap the low-end range of B, thus the analyst should specify this outcome and
avoid making a firm conclusion pertaining to the comparison.

Figure 7-9: Comparison of LCI Results with Ranges
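This overlap rule can be sketched directly in code. The estimates below are invented; in practice each list would hold LCI values for the same flow drawn from different data sources or modules.

```python
# Sketch of a range-based comparison: each alternative is represented by
# several estimates (invented here), and a firm conclusion is drawn only if
# the resulting ranges do not overlap.

def value_range(estimates):
    """Return the (min, max) range spanned by a list of estimates."""
    return min(estimates), max(estimates)

def ranges_overlap(r1, r2):
    """True if two (min, max) ranges share any values."""
    return r1[0] <= r2[1] and r2[0] <= r1[1]

a_estimates = [10.2, 11.5, 12.8]  # e.g., three data sources for alternative A
b_estimates = [12.1, 13.6, 14.9]  # e.g., three data sources for alternative B

a_range, b_range = value_range(a_estimates), value_range(b_estimates)

if ranges_overlap(a_range, b_range):
    print("Ranges overlap: avoid a firm comparative conclusion")
else:
    print("Ranges are disjoint: the lower alternative can be identified")
```

Here the top of A's range (12.8) exceeds the bottom of B's (12.1), so the comparison would be reported as inconclusive, mirroring the situation shown in Figure 7-9.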


Process Flow Diagram-based Example with Ranges


The discussion above presumed multiple independently available inventories for the same
processes, so that you could build a more robust quantitative model of the LCI results. The
use of multiple sources could quantitatively represent the effects of different assumptions and
boundaries, helping to show when they matter.

In reality, we are often making more generic assessments of 'coal-fired electricity' that don't
specify the particular fuel, transportation mode, or generation technology. In this section, we
return to Chapter 5’s main process flow diagram example for coal-fired electricity (repeated
Figure 5-6 below).

Figure 5-6 (repeated): Product System Diagram for Coal-Fired Electricity LCI Example

However, we now consider that our study was interested in a more generic consideration of
the generation of coal-fired electricity. That means that our focus isn’t exclusively on
bituminous coal-fired electricity, and includes other types of coal as well. While there are many
possibilities, the US LCI database has information on only three types of coal to be used in
power plants: Electricity, anthracite coal, at power plant, Electricity, lignite coal, at power plant, and, of
course, Electricity, bituminous coal, at power plant. If we want to consider all three types of coal-
fired generation in our study, then we will want to generate inventories for all three and to
somehow represent the results. A great way to do this is to use ranges.

As a refresher of the model in Chapter 5, the study boundary included process data from
mining the coal, transporting it, and burning it at a coal-fired power plant. Figure 7-10 shows
an initial high-level comparison of the three coal-fired electricity generation processes from


the US LCI database, similar to the scope of Figure 5-6.13 The summary includes several
elementary flows as well as the main physical inputs needed for the associated upstream mining
and transport processes, but shows only the effects from the generation processes.

Before proceeding, we stipulate that this example is imperfect. While the US LCI database
provides data on these three types of coal for producing electricity, anthracite coal is not
typically a primary fuel source for such an application. If anything, anthracite might be used
to spike the energy content in an existing bituminous or lignite plant, but it would not typically
be used as the main fuel due to high cost. Nonetheless, to demonstrate ways in which ranges
can be generated from LCI data, we believe the comparison is useful.

Flow (unit / kWh electricity generated) | Anthracite coal | Bituminous coal | Lignite coal

Outputs
Dinitrogen monoxide (kg / kWh) | 6.34 E-06 | 2.42 E-05 | 2.5 E-05
Sulfur oxides (kg / kWh) | 2.09 E-02 | N/A | 4.33 E-03
Carbon dioxide, fossil (kg / kWh) | 1.011 | 0.994 | 1.088
Methane, fossil (kg / kWh) | 7.12 E-06 | 8.31 E-06 | 1.56 E-05

Inputs
Coal (kg / kWh) | 0.356 | 0.442 | 0.782
Barge transport (ton-km / kWh) | N/A | 5.59 E-02 | N/A
Rail transport (ton-km / kWh) | N/A | 0.461 | 8.06 E-04
Truck transport (ton-km / kWh) | 9.17 E-02 | 2.99 E-03 | 8.6 E-03

Figure 7-10: Excerpted Inputs and Outputs for Coal-Fired Electricity Generation (Source: US LCI)

This initial summary already previews some of the relevant uncertainties associated with a
more generic view of coal-fired electricity generation in the US. While the CO2 emissions per
kWh for generating electricity are relatively certain, varying by less than 10% (from 0.994 to
1.088 kg/kWh), the methane emission factors, coal heat rate, and use of transportation
services vary more widely – by up to an order of magnitude!

13In the original bituminous coal-fired electricity example in Chapter 5, only transport of coal by train was included since it was the largest
in ton-km. In this updated analysis, all transport modes used to get coal to the power plant in each process are included.


And, don’t forget, the three US LCI unit processes for coal-fired generation are based on a
series of rigid assumptions, and the LCI of power generation would of course vary at the plant
level – we just do not have sufficient information yet to represent that variability. For example,
the unit process for anthracite coal generation assumes that all transport from mine to power
plant is done via truck, with no use of rail or barge modes, which generally have lower
emissions intensities than truck transportation. The other coal generation processes also
assume static mixes of transportation modes. Finally, note that ‘N/A’ values in the table
indicate that no information was provided in the inventory; they do not necessarily mean zero
emissions of a species for the process. The ‘N/A’ values could simply be data gaps, and likely
are in this example.

Likewise, Figure 7-11 shows the analogous flows for the three types of coal mining processes
in US LCI that connect with the electricity generation processes.14 Again, these are only the
effects from the mines and not from any upstream processes. As a range, we could summarize
the methane emissions from mining as 1-4 E-03 kg per kg of coal mined. It is notable that the
US LCI database has no information regarding several of the flows identified above at any of
the three types of mines, including fossil CO2 emissions, despite the mix of fossil fuels
consumed by the equipment needed to extract and move coal in the mine and on site. This is
clearly a data gap!

Flow (kg / kg of coal mined) | Anthracite coal | Bituminous coal | Lignite coal
Dinitrogen monoxide | N/A | N/A | N/A
Sulfur oxides | N/A | N/A | N/A
Carbon dioxide, fossil | N/A | N/A | N/A
Methane, fossil | 1.59 E-03 | 3.99 E-03 | 1.13 E-03

Figure 7-11: Outputs of Coal Mining Data Modules in US LCI (abridged)

Finally, Figure 7-12 summarizes the inventory flows for the barge, truck and train transport
processes,15 again with no upstream processes included. Note several processes were added
for completeness even though not explicitly stated as inputs in the electricity generation unit
processes of US LCI.

14The processes are: Anthracite coal, at mine, Bituminous coal, at mine, and Lignite coal, at surface mine. Note also that the methane flow for
anthracite coal is generically listed as ‘methane’ and not ‘methane, fossil’ but is assumed to be fossil in this table.

15 The processes are: Transport, barge, average fuel mix, Transport, barge, diesel powered, Transport, barge, residual fuel oil powered, Transport, combination
truck, diesel powered, and Transport, train, diesel powered. Note average combination truck transport process assumes 100% diesel fueled, so the
average truck process is unnecessary. The average barge mix process assumes a weighted average of 22% diesel and 78% residual-fueled
barge transport.


Flow (kg / ton-km) | Barge (diesel) | Barge (residual) | Barge (average) | Train (diesel) | Comb. truck (diesel)
Dinitrogen monoxide | N/A | 6.97 E-07 | 5.4 E-07 | 4.75 E-07 | 1.99 E-06
Sulfur oxides | 6.2 E-06 | 9.03 E-05 | 7.2 E-05 | 4.19 E-06 | 1.76 E-05
Carbon dioxide, fossil | 2.81 E-02 | 2.87 E-02 | 2.86 E-02 | 1.89 E-02 | 7.99 E-02
Methane, fossil | 6.49 E-07 | 6.43 E-07 | 6.44 E-07 | 9.05 E-07 | 1.29 E-06

Figure 7-12: Outputs of Transportation Data Modules in US LCI (abridged)

Here, again, it is clear there is significant variability across the transport processes, although
inventory values exist for nearly all of the four flows we have been tracking. Sulfur oxides span
more than an order of magnitude in intensity per ton-km across modes, while the other flows
vary by factors of roughly two to four.

Similar to our model in Chapter 5, we need to scale the results by the functional unit so that
we produce results across all three types of coal on a ‘per kWh’ basis. These scaling factors
come from the ‘Inputs’ section of Figure 7-10, and are summarized in the top section of Figure
7-13. For example, the process input for coal mining for anthracite coal is 0.356 kg/kWh,
from the anthracite column in Figure 7-10. The bottom half of Figure 7-13 summarizes the
four emissions outputs we have been tracking, and shows specific detail on how the fossil CO2
emissions would be calculated from data shown in the previous figures (the results for the
other outputs would be found similarly). Admittedly, our earlier guidance on significant figures
is slightly abused here to ensure you are able to replicate the results.

We again see that there is a relatively small range of results across the three electricity
generation types for carbon dioxide (only about 8% from highest to lowest), but differences
of a factor of four for dinitrogen monoxide, a factor of three for methane, and more than three
orders of magnitude for sulfur oxides. For these three flows, these are substantial variations
when considering the effects of coal-fired electricity more generally.


Flow | Anthracite | Bituminous | Lignite

Process Input Summary (and relevant unit comparison to kWh)
Coal-fired electricity generation (kWh / kWh) | 1 | 1 | 1
Coal mining (kg / kWh) | 0.356 | 0.442 | 0.782
Barge transport (ton-km / kWh) | N/A | 5.59 E-02 | N/A
Train transport (ton-km / kWh) | N/A | 0.461 | 8.06 E-04
Truck transport (ton-km / kWh) | 9.17 E-02 | 2.99 E-03 | 8.6 E-03

Calculated Outputs (kg per kWh), with fossil CO2 details shown
Carbon dioxide, fossil:
  Anthracite: 1.011 + (0.356 * 0) + (9.17 E-02 * 7.99 E-02) = 1.018
  Bituminous: 0.994 + (0.442 * 0) + (5.59 E-02 * 2.86 E-02) + (0.461 * 1.89 E-02) + (2.99 E-03 * 7.99 E-02) = 1.005
  Lignite: 1.088 + (0.782 * 0) + (8.06 E-04 * 1.89 E-02) + (8.6 E-03 * 7.99 E-02) = 1.089

Dinitrogen monoxide | 6.5 E-06 | 2.5 E-05 | 2.5 E-05
Methane, fossil16 | 5.8 E-04 | 1.8 E-03 | 9 E-04
Sulfur oxides | 0.021 | 6.0 E-06 | 4.3 E-03

Figure 7-13: Variation in Outputs of Coal-Fired Electricity in US LCI (abridged)
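The fossil CO2 arithmetic in Figure 7-13 can be reproduced in a few lines. A sketch (intensities transcribed from Figures 7-10 and 7-12; as noted earlier, the mining step contributes no fossil CO2 in US LCI):

```python
# kg fossil CO2 per ton-km of each transport mode (Figure 7-12).
co2_per_tkm = {"barge": 2.86e-2, "train": 1.89e-2, "truck": 7.99e-2}

# Per-kWh direct generation emissions and transport demands (Figure 7-10).
plants = {
    "anthracite": {"generation": 1.011, "truck": 9.17e-2},
    "bituminous": {"generation": 0.994, "barge": 5.59e-2,
                   "train": 0.461, "truck": 2.99e-3},
    "lignite":    {"generation": 1.088, "train": 8.06e-4, "truck": 8.6e-3},
}

totals = {}
for name, data in plants.items():
    transport = sum(tkm * co2_per_tkm[mode]
                    for mode, tkm in data.items() if mode != "generation")
    totals[name] = round(data["generation"] + transport, 3)
# totals -> {'anthracite': 1.018, 'bituminous': 1.005, 'lignite': 1.089}
```

Generation dominates in every case; the transport legs add at most about 1% to the per-kWh total.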

In the analysis so far, we have focused on only four specific airborne emissions. However, we
can leverage the spreadsheet models developed in Chapter 5 to show the variations in flows
for all of the nearly 3,000 inventory flows in the US LCI model.

E-resource: The file ‘Complex Models from US LCI with Uncertainty’ in the Chapter 7
folder demonstrates how to adapt the cell formulas and worksheets introduced in the
‘Simple and Complex’ spreadsheets of Chapter 5 for the more broadly scoped coal-fired
electricity example discussed above. It adds a chart comparing the total inventories of each
flow via error bars. This file can be modified as a template for other comparisons.

16 Again, these are the fossil methane and unspecified methane emissions.


Figure 7-14 is an example of the uncertainty chart provided by this new spreadsheet model.
It summarizes ranges of results for all 60 non-zero flows in the US LCI database for the three
types of coal-fired electricity generation. Note that the graph is on a log-scale (meaning each
unit on the y-axis is a difference of an order of magnitude, not a single unit).

Interpretation: The relatively large error bars visible on this graph highlight about 20 flows where
the three models differ in estimated results per flow by one or more orders of magnitude. In
the graph, the x-axis represents the ‘flow numbers’ that are ordered alphabetically in the model
(adding text labels of each flow name makes the graph unreadable). The range results of the
four flows tracked through this chapter are annotated on the graph, with each label to the left
of the graphed range. The fossil CO2 result, US LCI flow #505, is consistent with our earlier
discussion of its negligible variation in results (the error bars are effectively invisible on a log-
scale graph), while the other three have obviously large variability (recall the US LCI database
has separate flows for methane, which were aggregated in our results). Hopefully, these results
strongly reinforce the motivations in this chapter about appreciating the effects of uncertainty
and variability when making conclusions from LCA data and models.

[Figure 7-14 chart: log-scale ranges for all 60 non-zero flows, with annotated ranges for Carbon dioxide (fossil), Methane, Methane (fossil), Sulfur oxides, and Dinitrogen monoxide.]
Figure 7-14: Graphical Summary of Variations in Three Coal-Fired Electricity Models from US LCI


Sensitivity Analysis
As stated in Chapter 2, sensitivity analysis is a means of assessing the effect on model outputs
(results) from a specified change in a single input variable. Chapter 4 noted that the LCA
Standard explicitly calls for sensitivity analysis. It is thus a key concept for uncertainty
assessment in LCA. Sensitivity analysis is a quantitative way of determining to what extent
results change when an input is varied. It is typically done one factor at a time, i.e., by changing
one input or assumption while holding all others constant. This allows a specific description of
how much variation in results would be expected from changing just that one input.17

Sensitivity analysis is often done in software models, such as spreadsheet or MATLAB
environments, given the availability of free or commercial code that streamlines the sensitivity
analysis procedure. LCA models may or may not be implemented in software; thus, the
workflow for completing a sensitivity analysis in this context may be unclear. Beyond just input
data, a sensitivity analysis can also be done for different assumptions, such as a parameter or
rate used in the study. These assumptions could also relate to choices among alternative
allocation or system expansion schemes, assumptions about the use or availability of renewable
electricity in a process, etc.

Note that most sensitivity analysis software tools have default sensitivity ranges, e.g., to
consider + or - 50% changes in inputs. While defaults are convenient, ranges for sensitivity
analyses should always be justified. For example, for a percentage-based model input, the full
range is 0 to 100%, but parts of this range may be infeasible and thus not relevant.

Beyond LCA models, sensitivity analyses are widely used (and automated by software) to
check the sensitivity of model outputs to all modeled input variables. Software such as
DecisionTools Suite has Microsoft Excel plug-ins to automate sensitivity analyses in
spreadsheet-based models. Unfortunately, the software tools for LCA do not make it easy for
students to practice uncertainty assessment. The course license version of SimaPro does not
include uncertainty analysis features, although a research version of the program does.
OpenLCA, however, includes uncertainty assessment tools in all versions.

Case Study: Effects of Shipping Distance Assumptions

As an illustration of a sensitivity analysis that builds on the examples above, we consider the
importance and effects of shipping the coal from mine to power plant. Before doing so, it is
useful to consider the initial assumption of shipping distances inherent in the US LCI
inventories. For example, the anthracite coal-fired electricity process notes 0.092 ton-km of
truck transport of coal per kWh of electricity. Considering that it also notes 0.356 kg (3.56 E-
4 tons) of coal is needed per kWh, then that means the average shipping distance by truck of

17 Other methods, such as multi-way sensitivity analysis and simulation, can show the effects of changing more than one variable at a time.


that coal is 0.092 ton-km / 3.56 E-4 tons, or about 250 km (or about 150 miles). Likewise, the
bituminous coal power plant is shipping its coal 125 km (5.59 E-02/4.42 E-4) by barge, 7 km
(2.99 E-03/4.42 E-4) by truck, and 1,050 km (0.461 /4.42 E-4) by train. Lignite is shipping
its coal, on average, 10 km (8.6 E-03/7.82 E-4) by truck, and 1 km (8.06 E-04/7.82 E-4) by
train.
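These implied distances are simply the transport demand (ton-km/kWh) divided by the coal demand (tons/kWh). A quick check of the arithmetic:

```python
# Tons of coal per kWh (converted from the kg values in Figure 7-10).
coal_tons = {"anthracite": 3.56e-4, "bituminous": 4.42e-4, "lignite": 7.82e-4}

# Transport demands in ton-km per kWh (Figure 7-10).
tkm = {
    ("anthracite", "truck"): 9.17e-2,
    ("bituminous", "barge"): 5.59e-2,
    ("bituminous", "truck"): 2.99e-3,
    ("bituminous", "train"): 0.461,
    ("lignite", "truck"):    8.6e-3,
    ("lignite", "train"):    8.06e-4,
}

# Implied average shipping distance (km) = ton-km demand / tons of coal.
distance_km = {key: value / coal_tons[key[0]] for key, value in tkm.items()}
# e.g., anthracite truck ~258 km; bituminous train ~1,043 km; lignite train ~1 km
```

The small differences from the rounded distances quoted in the text come only from significant-figure choices in the inputs.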

The shipping distance results for lignite may seem suspicious, as they imply that lignite power
plants are all sited extremely close to the lignite mines, and further that the most inefficient
mode (truck) is used to deliver the coal. However, lignite plants, unlike plants burning other
types of coal, are often sited very close to known deposits (so-called minemouth plants), so the
relatively small distances are in line with reality (see lignite.com for maps and data on lignite
mines and power plants). Real-world examples can be found where lignite power plants are
farther from the mines, such as the Beulah Mine and the R.M. Heskett generating station in
North Dakota, where lignite is shipped about 70 miles (110 km) by train.

A sensitivity analysis can consider the effects on the lignite-fired electricity LCI results of
varying the initial assumption of a 10 km truck / 1 km train shipment. Four scenarios are
created: (1) the base (average) case that uses the built-in assumptions of the US LCI data, (2)
a minemouth power plant that is assumed to be located next to the mine and thus has no
transportation inputs, (3) a nearby plant that ships lignite 110 km by train, and (4) a remote
plant that ships lignite 1,000 km by train. Figure 7-15 summarizes the values, analogous to
Figure 7-13, mostly with updated intensity factors for the transportation modes. Note that the
train demand for the nearby case is found by scaling the 0.782 kg of coal needed per kWh
(7.82 E-04 tons) by the 110 km distance, i.e., 7.82 E-04 * 110 = 0.086 ton-km / kWh. Similarly,
the remote case is found by scaling by 1,000 km: 7.82 E-04 * 1,000 = 0.782 ton-km / kWh.


 | Minemouth (min) | Base (avg) | Nearby | Remote (max)

Process Input Summary (and relevant unit comparison to kWh)
Distance/mode (km) | 0 | 10 truck, 1 rail | 110 rail | 1,000 rail
Coal mining (kg / kWh) | 0.782 | 0.782 | 0.782 | 0.782
Train transport (ton-km / kWh) | 0 | 8.06 E-04 | 0.086 | 0.782
Truck transport (ton-km / kWh) | 0 | 8.6 E-03 | 0 | 0

Calculated Outputs (kg per kWh), with fossil CO2 details shown
Carbon dioxide, fossil:
  Minemouth: 1.088 + (0.782 * 0) = 1.088
  Base: 1.088 + (0.782 * 0) + (8.06 E-04 * 1.89 E-02) + (8.6 E-03 * 7.99 E-02) = 1.089
  Nearby: 1.088 + (0.782 * 0) + (0.086 * 1.89 E-02) = 1.09
  Remote: 1.088 + (0.782 * 0) + (0.782 * 1.89 E-02) = 1.1

Dinitrogen monoxide | 2.5 E-05 | 2.5 E-05 | 2.5 E-05 | 2.5 E-05
Methane, fossil18 | 9 E-04 | 9 E-04 | 9 E-04 | 9 E-04
Sulfur oxides | 4.3 E-03 | 4.3 E-03 | 4.3 E-03 | 4.3 E-03

Figure 7-15: Variation in LCI Outputs of Lignite-Fired Electricity For Four Cases (abridged)

Interpretation: The results show that a 100x increase in the shipping distance has almost no
effect on any of the life cycle emissions that have been the focus of the chapter so far. That’s
not to say that there haven’t been 100x increases in the relative emissions from shipping – just
that compared to the combustion of the lignite at the power plant, they are negligible.
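Three of the four scenarios ship only by rail, so their life cycle fossil CO2 totals reduce to a one-line function of rail distance. A sketch (the base case's small truck leg is omitted here; it contributes less than 0.001 kg CO2/kWh):

```python
COAL_TONS_PER_KWH = 7.82e-4   # 0.782 kg of lignite per kWh, expressed in tons
CO2_GENERATION = 1.088        # kg fossil CO2 per kWh at the lignite plant
CO2_TRAIN = 1.89e-2           # kg fossil CO2 per ton-km of train transport

def lignite_co2(rail_km):
    """Life cycle fossil CO2 (kg/kWh) for a lignite plant served only by rail."""
    return CO2_GENERATION + COAL_TONS_PER_KWH * rail_km * CO2_TRAIN

scenarios = {"minemouth": 0, "nearby": 110, "remote": 1000}
results = {name: round(lignite_co2(km), 3) for name, km in scenarios.items()}
# results -> {'minemouth': 1.088, 'nearby': 1.09, 'remote': 1.103}
```

Even the 1,000 km remote case raises the total by only about 1.4% over the minemouth case, consistent with the interpretation above.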

While shipping distance was chosen as the subject of the sensitivity analysis presented, other
relevant sensitivity analyses could be done with factors such as the size of the system boundary
(i.e., including more upstream processes), different LCI database values, etc.

18 Again, these are the fossil methane and unspecified methane emissions.


Chapter Summary
The practice of Life Cycle Assessment inevitably involves considerable uncertainty. This
uncertainty can be reduced with careful analysis of the underlying data and production
processes. Many practitioners continue to ignore the effects of uncertainty or variability in
their studies, considering all model inputs as single values and generating only a single result.
Such efforts have limited usefulness in informing others or supporting important decisions,
which can lead to poor (or at least, uninformed) decisions. In this chapter we demonstrated
multiple ways in which you can qualitatively or quantitatively consider uncertainty so as to
produce robust life cycle results. The use of significance heuristics, ranges, sensitivity analysis,
and related tools can help to create better LCA studies that support more robust results.

References for this Chapter

Box, G. E. P.; Draper, N. R. (1987), Empirical Model-Building and Response Surfaces, John
Wiley & Sons.
Heijungs, R. and Huijbregts, M., A review of approaches to treat uncertainty in LCA,
Proceedings of the IEMSS conference, Osnabruck 2004.

Lloyd, S. M., and Ries, R., Characterizing, Propagating, and Analyzing Uncertainty in Life-
Cycle Assessment: A Survey of Quantitative Approaches, Journal of Industrial Ecology, 11
(1), 161-179, 2007.
Morgan, M.G., and Henrion, M., Uncertainty: A Guide to Dealing with Uncertainty in
Quantitative Risk and Policy Analysis, Cambridge University Press, 1992.
Williams, E., Weber, C., and Hawkins, T., Hybrid Framework for Managing Uncertainty in
Life Cycle Inventories, Journal of Industrial Ecology, Vol. 13, Issue 6, pp. 928-944, 2009.
Wilson, R., E. Crouch, and L. Zeise, “Uncertainty in Risk Assessment”, Banbury Report 19:
Risk Quantitation and Regulatory Policy, Cold Spring Harbor Laboratory, 1985.


End of Chapter Questions

Objective 1. Describe the various sources and types of uncertainty and variability for
data, model inputs, and methods

Note: your instructor may suggest an alternate process to consider for the following questions
other than the one specified.

1. Using the LCI data and metadata for the US LCI process Electricity, bituminous coal, at power
plant (e.g., as used in Chapter 5), describe in words the relevant sources and types of
uncertainty that would be relevant to consider if using the process in an LCA study.

Objective 2. Describe and apply qualitative, semi-quantitative, and quantitative methods to
address uncertainty in LCA

Objective 4. Interpret uncertain results to support study conclusions

2. Considering the same US LCI process as in question 1, produce a qualitative uncertainty
assessment using the LCI data quality categories summarized in Figure 7-6. Provide several
relevant sentences for each of the five categories.

3. Considering the US LCI processes Electricity, lignite coal, at power plant, and Electricity,
bituminous coal, at power plant, perform a semi-quantitative uncertainty assessment of the
following flows. For both of the compared flows, what kind of conclusion would you be
comfortable making given the data? Write a specific one to two sentence interpretation
that would be appropriate for inclusion in a report that is comparing the flows.

a. Use a significance threshold of 20% on the flow of fossil CO2 to air

b. Use a significance threshold of 30% on the flow of chromium to air

Objective 3. Apply sensitivity analysis to help frame uncertainty in an LCA model

4. Using the lignite-fired electricity model from the chapter (e.g., as shown in Figure 7-15),
find the rail shipping distance needed such that a remote plant would have fossil CO2
emissions that were 20% different than a minemouth plant. What about if it is shipped
mostly by barge instead?


Chapter 8: LCA Screening via Economic Input-Output Models 221

Classroom Exercise To Motivate Input-Output and Matrix Based Methods

The next two chapters motivate quantitatively-driven methods to support life cycle studies. Both
input-output and process matrix methods rely on linear algebra and matrix-based methods to
assist computational efforts. Armed with the introduction to life cycles and process flow diagram
approaches, the reader is prepared to learn about these matrix-based methods.

A team of researchers at Carnegie Mellon University has created a game-like simulation to assist
with this effort. It involves small groups simulating the production of four goods, each of which
has a small number of input and output flows. However, the four goods have interdependent
flows. A key learning objective of the simulation is to realize how interdependent flows lead to
process flow diagrams which are dependent on each other, and how addressing that dependency
requires additional demand and estimation of effects upstream. Through the exercise, the
underpinnings of the matrix approach are revealed. In the end, using matrices to solve such
problems is shown to be much more straightforward, and avoids various potential math errors.

This exercise has been designed and tested with audiences ranging from middle school students
to corporate executives, all of whom were in the process of learning about LCA. Ideal group
size is about 4 persons, although slightly smaller or larger groups work well. We strongly
suggest that this exercise be done, ideally in a classroom or small group setting,
before proceeding with subsequent chapters. Post-assessments have shown a significant
increase in understanding of these topics amongst those exposed to this simulation.

The whole exercise, including the introduction and motivation, as well as blank copies of the
various purchase order and tracking forms, was previously published (Hawkins 2009) and has
been made freely available through the generous support of the Journal of Industrial Ecology. A
direct link is available via the textbook website, under E-resources for Chapter 8.


Chapter 8: LCA Screening via Economic Input-Output Models
This chapter introduces the construction and use of economic input-output based LCA models
and the powerful screening capabilities they provide. As described in Chapter 5,
process-based LCA models are ‘bottom up’ in type, and are defined by the scope laid out by
the analyst. Economic input-output LCA models can be thought of as being a ‘top-down’ type
because they give a holistic estimate of resources needed for a product across the economy.
We recommend our separately published classroom simulation on input-output LCA models
(Hawkins 2009). This simulation exercise has been developed explicitly to make these kinds
of models understandable and to help you appreciate their strengths and limitations. We will
also describe the mathematical structure of the EIO-LCA model. Those intending to use
economic input-output tables in their own LCA studies are highly encouraged to read the
Advanced Material at the end of this chapter.

Learning Objectives for the Chapter


At the end of this chapter, you should be able to:

1. Describe how economic sector data are organized into input-output tables

2. Compute direct, indirect, and total effects using an input-output model

3. Assess how an input-output model might be used as a screening tool.

Input-Output Tables and Models

In the 1930s, economist Wassily Leontief developed an economic input–output table of the
United States economy and a system of equations to use them in models (Leontief, 1986). His
model represented the various inputs required to produce a unit of output in each economic
sector based on surveyed census data of purchases and sales of industries. By assembling a
table describing all of the major economic sectors, he was able to trace all of the economic
purchases needed to produce outputs in each sector, all the way back to the beginning when
raw materials are extracted. The result was a comprehensive model of the U.S. economy. For
this work, Leontief received the Nobel Prize in Economics in 1973.

Sidebar: What are economic sectors? Sectors are groups of companies with similar products.
Given the model, there may be a single sector for all manufactured goods, or many separate
sectors, for everything from electricity (large economic output) to tortillas (relatively small
output). There are various national and global systems for categorizing sectors, leading to
differences in the number of sectors used and reported in the IO tables.


An economic input–output (EIO, or just IO) table divides an entire economy into distinct
economic sectors. The tables can represent total sales from one sector to others, purchases
from one sector, or the amount of purchases from one sector to produce a dollar of output.

Input-output models were popular in the mid-20th century for high-level economic planning
purposes. They were used so that governments could better understand the requirements of,
and plan for, activities like war planning, procurement, effects of disarmament, or economic
requirements for building infrastructure such as roads. As will be seen below, economic input-
output models are vital for developing national economic accounts, and can also be used for
environmental life cycle assessment.

Vectors, matrices, and notation

A vector is a one-dimensional set of values (called elements) referenced by an index. If a vector
is named X then X1 is the first element, X2 is the second element, …, and Xn is the last element.
If implemented in a spreadsheet, a vector could be arranged as elements in a row or in a column.
In this book, we use upper case italicized letters to represent vectors. Individual elements (a
row/column entry) are upper case, italic, and denoted by an index.

A matrix is a two-dimensional array of values referenced by both a row and column index. (The
plural of matrix is matrices.) In a spreadsheet, a matrix would have rows and columns. One of
the most popular matrices is an identity matrix (I), which has the number one for all elements
on the diagonal (where the row and column indices are equal), and zeroes in all other cells. We
use upper case bold letters to represent matrices. A two-dimensional identity matrix is defined
as having these elements:

𝐈 = [1 0; 0 1], where the elements are written row by row and a semicolon separates the rows.
Note in equations multiplying vectors and matrices together, the multiplication sign is omitted.
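These definitions correspond directly to array types in numerical software. As a brief illustration in Python with NumPy (our choice of tool here; the numeric values are arbitrary, not data from the text):

```python
import numpy as np

# A vector: a one-dimensional set of values referenced by an index.
# (NumPy indexes from 0, so X[0] is the element this book writes as X1.)
X = np.array([1000.0, 2000.0])

# A matrix: a two-dimensional array referenced by a row and a column index.
Z = np.array([[150.0, 500.0],
              [200.0, 100.0]])

# The identity matrix I: ones on the diagonal, zeroes everywhere else.
I = np.identity(2)

# Multiplying any matrix by I leaves it unchanged (I Z = Z),
# mirroring the convention that the multiplication sign is omitted.
print(I @ Z)
```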

Figure 8-1 shows the structure of an IO transactions table. Each entry Zij represents the
input to column sector j’s production process from row sector i. The final column, total output
of each sector X, has n elements, each the sum across the Zij inputs from the other sectors
(a.k.a. the intermediate output, Oi) plus the output supplied by the sector directly to the final
demand Yi of consumers. To help explain these different components, the intermediate
outputs O are being sold to other producers to make other goods, while final demand is sales
to users of the product as is. For example, a tire manufacturer might sell some of its output as
intermediate product to an automobile producer for new cars and as final product to
consumers as replacement tires. For each of the n columns of the transactions table, the
column sum (intermediate inputs from other sectors plus value added) represents the total amount of inputs X to each sector. The


two X values for each sector are equal. A transactions table is like a spreadsheet of all
purchases, thus values for a large economy could be in billions or trillions of currency units.

Output from              Input to sectors              Intermediate   Final      Total
sectors                  1     2     3    ...   n      output O       demand Y   output X
1                        Z11   Z12   Z13  ...   Z1n    O1             Y1         X1
2                        Z21   Z22   Z23  ...   Z2n    O2             Y2         X2
3                        Z31   Z32   Z33  ...   Z3n    O3             Y3         X3
n                        Zn1   Zn2   Zn3  ...   Znn    On             Yn         Xn
Intermediate input I     I1    I2    I3   ...   In
Value added V            V1    V2    V3   ...   Vn                               GDP
Total output X           X1    X2    X3   ...   Xn
Figure 8-1: Example Structure of an Economic Input–Output Transactions Table

Notes: Matrix entries Zij are the input to economic sector j from sector i. Total (row) output for each sector i,
Xi, is the sum of intermediate outputs used by other sectors, Oi, and final demand by consumers. Total
(column) output can also be defined as the sum of intermediate input purchases and value added.
Intermediate input, I, in an IO table is the sum of the inputs coming from other sectors (and
is distinct from the identity matrix I). Value Added, V, is defined by economists as the increase
in value as a result of a process. In an IO model, it is the increase associated with taking inputs
valued in total as I and making output valued X. While value added serves a purpose in
ensuring consistency between total inputs and total output, it includes some real aspects of
industrial production such as direct labor expenses, profits and taxes. While not shown here,
some IO frameworks have a sector representing household activities.

The typical process of making a transactions table involves acquiring data on transactions of
an economy through periodic surveys of companies in the various sectors to assess how much
economic output they are producing, and which other companies (and from which sectors)
they buy from. As you might imagine, these data collection activities can be very time and
resource intensive. The methods involve only a sampling of companies surveyed rather than
surveying every company. Like methods used for counting population, additional statistical
analyses are done to check the results and to ensure representative totals have been estimated.
A fundamental concern thus relates to deciding how many sectors to divide the economy into.
With fairly little effort one could develop a very coarse model of an economy, e.g., with 10
sectors where agriculture, mining, manufacturing, etc. represent various very aggregated
sectors of activity. But such models have very low resolution and answer only a limited number
of questions. Thus, all parties have incentive to invest resources in generating tables with
higher numbers of sectors so that more detailed analyses are possible.


Since the effort needed to make a table is measured in person-years, often the highest
resolution tables (greater number of sectors) of an economy are not made annually. The fact
that economic production does not change quickly, i.e., the production recipe for the sectors
does not change much from year to year, further supports only periodic need for detailed
tables. There are typically annual and benchmark tables, where lower resolution (fewer
sectors) tables are made annually and higher resolution (more sectors) benchmark tables are
made less frequently, such as every 5 years as done in the U.S.

Gross Domestic Product (GDP) is an indicator of the output of an economy, measured by the sum of final demands or value added across sectors. Consider the alternative: if final demands and intermediate outputs were both counted as components of the output of the economy, then much of that output would be “double counted”. Extending the example above, such an economic measure would count both the value of production of intermediate tires in new cars as well as the value of the new cars that came with those tires. Such an outcome would be undesirable, thus only final demands are counted.

Economic input-output models are developed for all major countries, usually by government agencies such as the US Department of Commerce’s Bureau of Economic Analysis (BEA). Their primary use is to help develop national accounts used to estimate economic data such as the Gross Domestic Product (GDP). Most nations routinely develop input–output models with 50-100 sectors, although few are as detailed as the current benchmark year 2002 428-sector model of the United States. (Given the processing time required, the IO tables reporting 2002 values were released in 2007. Tables with data collected in 2007 were published in 2013.) A recurring criticism of using EIO-based models is that they rely on relatively old data due to these lags in release of the economic data. However, as shown in Figure 5-14, available process data tends to be fairly old as well. Not all IO tables are made by government employees. In various developing countries, where expertise in government agencies may not exist to do such work, these same activities could be done by other parties like academic researchers in the home or a foreign country.

For calculation purposes, it is helpful to normalize the IO table to represent the proportional
input from each sector for a single dollar of output. This table is calculated by dividing each
Zij entry by the total (column) output of that sector, Xj. We denote the resulting table - with all
entries between zero and one - as matrix A showing the requirements of other sectors directly
required to produce a dollar of output for each sector. When done in this way, A is called the
direct requirements table (or matrix). It is called “direct” because these purchases happen
at the highest level of decision making – i.e., the direct purchases needed to produce
automobiles are windshields, tires, and engines.

Example 8-1 illustrates the transformation of an IO table into its corresponding A matrix. An
economic input–output model is linear, so the effects of a $1,000 purchase from a sector will
be exactly ten times greater than the effects of a $100 purchase from the same sector.


Example 8-1: In this example, we will use the methods defined above to develop an A matrix.

Assume a transactions table for a simple 2-sector economy (values in billions):

        1      2      Y      X
1     150    500    350   1000
2     200    100   1700   2000
V     650   1400
X    1000   2000
Question: What is the direct requirements matrix (A) for this economy?

Answer: We use the Zij/Xj formulation as described above. For example, the 150 and 200
values in column 1 are divided by 1000 (the X value in column 1) and the 500 and 100 are
normalized by 2000. Thus:

𝐀 = [0.15 0.25; 0.20 0.05]
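The normalization in Example 8-1 can be reproduced in a few lines of code. A sketch in Python with NumPy (one alternative to the spreadsheet approach; the variable names are ours):

```python
import numpy as np

# Transactions table entries from Example 8-1 (billions of dollars)
Z = np.array([[150.0, 500.0],
              [200.0, 100.0]])
X = np.array([1000.0, 2000.0])   # total output of sectors 1 and 2

# Direct requirements: divide each column j of Z by total output X[j].
# NumPy broadcasting performs the division column-wise: A[i,j] = Z[i,j]/X[j]
A = Z / X
print(A)
# [[0.15 0.25]
#  [0.2  0.05]]
```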

The A matrix (and a Leontief model in general) thus represents a series of “production
recipes” for all of the sectors in the economy. A production recipe is just like a recipe for
cooking food, where you are told all of the ingredients needed to prepare a meal and in which
numerical amounts. Soon after Leontief won the Nobel Prize, he was quoted as saying “When
you make bread, you need eggs, flour, and milk. And if you want more bread, you must use
more eggs. There are cooking recipes for all the industries in the economy.”

In A matrix terms, since all values in a column are normalized by the total sector output, each
of the coefficients in the production recipe is fractional. As a hypothetical and simple example,
imagine a small economy with only two sectors, like in Example 8-1, which are for electricity
generation and coal mining. The production recipe for making $1 worth of electricity would
involve purchasing a fraction of a dollar from the coal mining sector (as well as some
electricity). Likewise the production recipe for making $1 worth of coal would involve
purchasing some electricity and coal. This interdependence between sectors is common, and
a critical reason why EIO models are so useful in representing systems.

A key benefit of using EIO models is not the organization of the economy into tabular form.
It is that the direct requirements table can be used to trace out everything needed in the
manufacture of a product going back to the very beginning of the life cycle of that product.
One can envision this by considering what direct purchases are needed, then what purchases
are needed to produce those direct purchases, and continuing back through the purchasing
levels to the initial raw materials obtained through mining or farming (n levels back). If
considering the manufacture of windows, a window manufacturing column may show
purchases of glass and wood (or metal) framing pieces. The glass manufacturing column would


show purchases of sand and other minerals, and the wood framing sector would show
purchases of forestry products. If given enough time, one could piece together the total
requirements by iteratively looking up the A matrix chain for such information.

Algebraically, any desired output Y is simply a vector of the various sectors’ final demands. Thus, the
total purchases, X, needed to generate output Y can be calculated as:

X = [I + A + A×A + A×A×A + …] Y = IY + AY + A2Y + A3Y + … (8-1)

where X is the vector (or list) of required production, I is the identity matrix, A is the direct
requirements matrix (with rows representing the required inputs from all other sectors to make
a unit of output for that row’s sector) and Y is the vector of desired output. For example, this
model might be applied to represent the various requirements for producing electricity
purchased by residences. In Equation 8-1, the summed terms represent the production of the
desired output itself, electricity (IY), as well as contributions from the first tier suppliers, e.g.,
coal or natural gas (AY), the second tier suppliers, e.g., equipment used at coal mines (A2Y),
etc. In input-output terminology we refer generally to the IY and AY terms (i.e., [I + A]Y) as
the direct purchases (because those are everything related directly to the decisions made by
the operators of the final production facility) and all other A2Y, A3Y, etc., terms as indirect
purchases (since those production decisions are made beyond the direct operators). The sum
of the direct and indirect purchases is the total purchases. Note that the terminology used in
IO models may differ from that of other modeling domains (and as introduced in Chapter 4)
but is emphasized for consistency. In other domains, direct purchases may only refer to IY.

IO models that estimate direct and indirect purchases use Equation 8-1 to combine the various
production recipes across the supply chain into a total supply chain. That is, a final demand of $20,000
into the automobile manufacturing sector will determine all of the direct ingredients (in dollars)
needed to produce the car. One of these direct requirements may be a $2,500 engine. Therefore, the
IO model also then (in the A2Y term) estimates the ingredients needed to produce the $2,500 engine. In
the end, the thousands of overall ingredients needed to produce the car are all included in the total
purchases estimate. And all are aggregated into the relevant sectors, even if they occur at different tiers
of the supply chain (i.e., purchases of any direct or indirect electricity are all added into a single sectoral
total for purchases of electricity).

How are economic input-output and process based methods similar? Think of each of the sectors of the
IO model as a process. In each sector’s process, the “inputs” are the economic inputs (as purchased)
from all of the other sectors and the “output” is the product of the sector. An IO model is thus a linear
system of individual economic process models.

Since IO models by default represent flows across the entire economy, they can be classified as a ‘top down’


approach. Such methods give high level perspectives that can subsequently be decomposed
into pieces.

All of the linear algebra or matrix math needed is easily done in Microsoft Excel for small
models and there are many resources available on linear algebra as well as Excel matrix arrays
and functions should you decide to use them. In the Advanced Material at the end of this
chapter (Section 6), we describe how to do these operations in Excel and MATLAB®,19 a
popular scientific analysis tool used by many academics and researchers that specializes in
matrix based computation.

For those of you familiar with the mathematics of infinite series (or matrix math), you will
recognize that the series in Equation 8-1 can be replaced by [I–A]–1, where the –1 indicates the
multiplicative matrix inverse, by analogy to the sum of an infinite geometric series. Thus,
Equation 8-1 can be simplified to Equation 8-2 (see Advanced Material Section 1 at the end
of this chapter for more detail):

X = [I – A]–1 Y (8-2)

An important observation can be made from the use of Equations 8-1 and 8-2 above:
since summing all direct and indirect purchases results in all of the required production needed,
[I – A]–1 is called the total requirements table (or matrix).
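The equivalence of Equations 8-1 and 8-2 is easy to verify numerically. A sketch in Python with NumPy (using the two-sector A matrix from Example 8-1) compares a truncated version of the series to the matrix inverse:

```python
import numpy as np

A = np.array([[0.15, 0.25],
              [0.20, 0.05]])     # direct requirements from Example 8-1
I = np.identity(2)

# Equation 8-2: the total requirements matrix
total_req = np.linalg.inv(I - A)

# Equation 8-1: partial sums of I + A + A^2 + A^3 + ...
partial = np.zeros((2, 2))
term = I.copy()
for _ in range(10):
    partial = partial + term
    term = term @ A              # next power of A

# After only ten terms the series already nearly matches the inverse
print(np.max(np.abs(partial - total_req)))
```

The terms shrink quickly because each tier of the supply chain purchases only fractions of a dollar per dollar of output, which is why the infinite series converges to the inverse.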

Using the model in Equations 8-1 or 8-2, we can estimate all of the economic outputs required
throughout an economy to produce a specified set of products or services. The total of these
outputs is often called the supply chain for the product or service, where the chain is the
sequence of suppliers. For example, an iron ore mine supplies a blast furnace to make steel, a
steel mill supplies a fabricator that in turn ships their product to a motor vehicle assembly
plant. To make an automobile with its 20,000–30,000 components, numerous chains of this
sort are required. An input–output model includes all such chains within the linear model in
Equation 8-1.

In the Advanced Material at the end of this chapter (Section 2), we further describe the
underlying data sources for the production recipes and transactions tables discussed here. With
their whole economy or whole supply chain type, IO models provide very large scopes that
can overcome some of the limitations observed with process flow diagram-based models in
Chapter 5. In the next chapter, we will see how to apply similar top-down matrix methods to
process data.

19 MATLAB is a registered trademark of The MathWorks, Inc., and will be referred to as MATLAB in the remainder of the book.


Example 8-2: Building on Example 8-1 we calculate direct and total requirement vectors.

Question: Find the direct and total requirements of a $100 billion final demand into each of
the two sectors separately.

Answer: Using the direct requirements matrix (A) found above, we know that:

𝐀 = [0.15 0.25; 0.20 0.05], 𝐈 = [1 0; 0 1], Y1 = [100; 0], Y2 = [0; 100]
By definition, the direct requirements are [I+A]Y, and the total requirements are [I-A]-1Y. Using
Excel or a similar tool, we can find the (rounded off) inverse matrix, which is:
[𝐈 − 𝐀]–1 = [1.254 0.330; 0.264 1.122]
Thus, the direct requirements for Y1 and Y2 are:
[I+A]Y1 = [115; 20] and [I+A]Y2 = [25; 105],
meaning that a $100 billion demand from sector 1 requires $115 billion in purchases from sector 1
and $20 billion in purchases from sector 2. Similarly, the total requirements are:

[I–A]–1Y1 = [125.4; 26.4] and [I–A]–1Y2 = [33.0; 112.2], considering significant figures.
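The same results can be reproduced programmatically. A sketch in Python with NumPy (Excel’s MINVERSE and MMULT functions would serve the same role):

```python
import numpy as np

A = np.array([[0.15, 0.25],
              [0.20, 0.05]])
I = np.identity(2)
Y1 = np.array([100.0, 0.0])      # $100 billion final demand into sector 1
Y2 = np.array([0.0, 100.0])      # $100 billion final demand into sector 2

direct1 = (I + A) @ Y1           # direct requirements: [115, 20]
direct2 = (I + A) @ Y2           # [25, 105]

L = np.linalg.inv(I - A)         # total requirements matrix
total1 = L @ Y1                  # roughly [125.4, 26.4]
total2 = L @ Y2                  # roughly [33.0, 112.2]
print(direct1, total1)
```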

The supply chain perspective gives a basis for considering effects that happen before or after
a product is manufactured. We typically refer to points and decisions made before a product
is manufactured as upstream and those made after a product is manufactured as
downstream. Building on the previous example, from the perspective of the blast furnace,
the iron ore mine is upstream and the vehicle assembly plant is downstream. The upstream
and downstream terminology also can apply to process-based models.

Compared to process-based methods, IO methods take a more aggregate view of the sectors
producing the goods and services in the U.S. economy. IO models are quick and efficient but
are not perfect. Some of their key assumptions and limitations are listed below, and lead to
various uncertainties, which are discussed below and in the Advanced Material.

Sectors represent average production. All production facilities in the country that make products and
provide services are aggregated into a fixed number of sectors (in the US economy models
discussed in this book, approximately 400 sectors). Similar production facilities are all assigned
by definition into the same sector, and the model assumes identical production in all facilities
of these sectors. In short, no facility in a sector produces any differently than any other in the
model (even if in fact this is not true). This is the so-called ‘average production’ assumption.


You can get a sense of why this is true by referring back to Figure 8-1, which simply aggregates
all transactions from all of the facilities into the various Z values, then normalizes by the total
output of the entire sector, creating average A values.

Input-output models are completely linear. That is, if 10% more output from a particular factory is
needed, each of the inputs will have to increase exactly 10%. This of course is not generally
true, as there could be economies of scale that allow use of inputs to increase less than 10%.
However, this assumption is also common in process-based models.

Manufacturing impacts only. Given the data sources available and used, IO models generally
estimate total expenditures only up to the point of manufacture; that is, they do not estimate
downstream effects from product use (e.g., consideration of the gasoline needed to run the
consumer’s car) or end-of-life (e.g., disposal costs and impacts). We will describe below ways
to use IO models to estimate impacts beyond the manufacturing phase.

Capital investments excluded. Capital inputs to manufacturing are not included in most IO tables.
In the US, such transactions are available in a supplemental transactions table and could be
added. Exclusion of capital investments is also a typical assumption for process-based models.

Domestic production. An IO model for a single economy is limited to estimating effects within
that country. Despite the fact that many inputs are likely sourced (imported) from other
countries in today’s global economy, imported inputs are assumed to be produced in the same
way as in the model’s home country. For certain sectors, this may present a problem because
there is so little production done within the home country that the data and/or environmental
flows represented are not robust, but the model will still treat that production as if done wholly
within the home country and with the associated domestic impacts. Models that move beyond
this assumption are possible but beyond the scope of this chapter.

Circularity is inherent and incorporated into the model. In the previous chapters, we noted that all
interdependent systems have circularities such as steel needed to make steel, etc., and that this
complicated the ability to build process models. IO models embrace the existence of
circularity, and the effects are included within the basic model and matrix inversion.

Input–Output Models Applied to Life Cycle Assessment


Now that the underlying economic input-output models have been introduced, we can discuss
how they are applied to support decisions about LCA. By appending data on energy,
environmental, and other flows to the input-output table, non-economic impacts can be
predicted. The resultant models are referred to as environmentally extended input-output
models (EEIO).


Here we differentiate between using economic input-output methods generally to support
LCA as IO-LCA and the specific method as implemented with our colleagues in the Green
Design Institute of Carnegie Mellon University as EIO-LCA (see below). This differentiation
seeks to emphasize general IO-LCA methods and practices, as well as specific data sources
and assumptions for the EIO-LCA model. Within the US, there are various tables and
resources related to IO-LCA. Aside from EIO-LCA, there are the CEDA database and
OpenIO. There are other similar IO-LCA models outside the US.

The advantage of the process-based approach as described in Chapters 4 and 5 is that it can
answer as detailed a question as desired concerning the materials and energy balances of each
facility studied, assuming that adequate resources exist to collect and analyze data on the
relevant flows. In practice, getting these balances is sufficiently difficult that they rarely are
able to answer detailed questions. The disadvantage of the process-based approach is that the
required expense and time means that generally a tight boundary must be drawn that excludes
many of the facilities relevant to activities within the overall system.

The advantage of the IO-LCA approach is that a specific boundary decision is not required,
because by default the boundary is the entire economy of production-related effects, including
all the material and energy inputs. Another major advantage is that it is quick and inexpensive.
Results can be generated in seconds at no cost other than the time involved. Note though that
with respect to modeling in support of an ISO compliant LCA, IO-LCA methods are generally
most useful as a screening tool rather than as the core model needed to answer the necessary
goals of the LCA task. We introduce IO-LCA methods so that the overall LCA task can be
improved by gaining an appreciation for where the greatest systemwide impacts occur. Such
knowledge can inform choices of scope, boundaries, and data sources for process based
models. They can also be used to help validate results from process-based methods, as the
more comprehensive IO-LCA boundaries will generally lead to higher estimates of impacts
(upper bounds), which can be used to assess whether the process-based results seem
reasonable.

The IO-LCA approach has a major disadvantage: it uses aggregate and average data for a
sector rather than detailed data for a specific process. For example, an IO-LCA model will
yield results for the average production from, say, the sector iron and steel mills, rather than from
producing particular steel alloys required for an automobile (which one could find in process
data). As another example, the US IO table does not distinguish between generating electricity
using a 50-year-old coal plant and using a new combined-cycle gas turbine. The former emits
much more pollution per kWh than the latter. A process model could compare the different
processes to the degree desired. The process models can be specific to particular materials or
processes, rather than the output from a sector of the economy. Even with more than 400
sectors available, analysts would often like to disaggregate IO-LCA models, such as dividing
the plastics sector into production of different types of plastic. Process models can also handle
nonlinear effects.


While we focus on the use of IO-LCA in this chapter, we also describe ‘hybrid’ models in
which IO-LCA and process models are combined to exploit the advantages of both (see
Chapter 9). In hybrid models, the results from production of a chemical might be from a
process model, while the effects of the inputs to the process might be assessed with IO-LCA.
With a hybrid model, the reliance on process models can vary from slight (such as one model)
to very extensive (IO-LCA might be used only for a single input such as electricity).

IO-LCA (or EEIO) models work by following the flow chart shown in Figure 8-2. Economic
activity generates environmental impacts. Production of steel, for example, generates slag, air
pollution, and wastewater, and consumes energy that results in greenhouse gas emissions. These
environmental impacts can be assumed to be linear in their magnitude and can also be
described as vectors.

1. Estimate final demand (Y)
2. Assess direct and indirect economic requirements (X)
3. Assess overall environmental or energy impacts per sector (E)
4. Sum sector-level impacts for overall impacts

Figure 8-2: Flow chart for IO-LCA models

We have already described above the process needed to complete the first two steps. In this
section, we describe the last two steps that allow use of the models for LCA screening
purposes. In our discussion, we use ‘dollars’ as a currency given our own biases, but IO-LCA
models can and have been derived around the world in many currencies. Our use of dollars is
meant to merely provide a consistent terminology for expression of economic values.

Once economic output for each sector (X) is known, a vector of total environmental effects
(i.e., the sum of direct and indirect environmental effects) for each sector can be obtained by
multiplying the output by the environmental impact per dollar of output:

E = RX = R [I – A]–1 Y (8-3)

where E is the vector of environmental burdens (such as toxic emissions or electricity use for
each production sector), and R is a matrix with diagonal elements representing the impact per


dollar of output for each sector for a particular energy or environmental burden (Lave 1995,
Hendrickson 1998, Leontief 1970). A variety of environmental burdens may be included in
this calculation. For example, from estimates of resource inputs (electricity, fuels, ores, and
fertilizers), we can further estimate multiple environmental outputs (toxic emissions by media,
hazardous waste generation and management, conventional air pollutant emissions, global
warming potential, and ozone depleting substances). We find direct and indirect
environmental burdens by multiplying the R matrix by the direct and indirect purchases. As
before, the total environmental burdens are the sum of direct and indirect.
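The calculation in Equation 8-3 can be wrapped in a small reusable routine. A sketch in Python with NumPy (the function name and all numeric values are hypothetical illustrations, not data from the text):

```python
import numpy as np

def io_lca_total_burden(A, r, Y):
    """Equation 8-3: E = R [I - A]^-1 Y, with R diagonal."""
    I = np.identity(len(Y))
    X = np.linalg.inv(I - A) @ Y   # total economic requirements per sector
    E = r * X                      # diagonal R => element-wise multiply
    return E, E.sum()

# Hypothetical two-sector inputs (illustrative values only)
A = np.array([[0.15, 0.25],
              [0.20, 0.05]])
r = np.array([30.0, 8.0])          # burden per unit of each sector's output
Y = np.array([100.0, 0.0])
E, overall = io_lca_total_burden(A, r, Y)
print(E, overall)
```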

While the matrix math is trivial, the data needs are worth discussing. The R matrix has units
of burdens per dollar of output (e.g., kg CO2/$). Multiplying R by a vector (X) with unit of
dollars for each sector will yield E, with units of burdens by sector (e.g., kg CO2), since the
dollar units cancel out of the final impact tally. Deriving R (in units of burdens per dollar) merits
additional discussion; understanding the process and its limitations is critical to understanding
how and why IO-LCA models can be used to support LCA tasks.

IO-LCA model results are better understood by example. Like the purely economic IO models
on which they are built, IO-LCA models can estimate environmental burdens across the
supply chain. Revisiting our example of a final demand of $20,000 of automobiles, if our R
matrix was for emissions, then the E = RX vector would represent total emissions. Included
in these estimated emissions would be not only the emissions from the automobile factory,
but also the emissions from the tire factory, the rubber factory, and all other upstream
processes (including transportation) that supported production of that $20,000 car. All of this
is possible since the simple assumption of the R matrix is that it contains emissions per dollar
for each sector, and the X vector has already estimated the necessary economic outputs for all
of the sectors. As mentioned in the beginning of this chapter, in addition to the inputs from
each sector, the output, X, includes value added such as labor or profits. As a result, emissions
will be indirectly associated with these activities as well.

Example 8-3: Build on methods above to estimate direct and total environmental burdens by sector.

Question: What are the direct and total emissions of waste for the inputs specified in Example 8-2?
Assume emissions of waste per billion dollars of output of sector 1 are 50 g and sector 2 are 5 g.

Answer: The direct emissions are E = R [I+A] Y and total emissions are E = RX = R[I – A]–1 Y.
Example 8-2 derived [I+A]Y and [I – A]–1Y for each of the two sectors. As given above, 𝐑 = [50 0; 0 5]. For
Y1 and Y2 the direct emissions are:

[(50 × 115) + (0 × 20); (0 × 115) + (5 × 20)] = [5750; 100] and [1250; 525].
Thus, the sum of direct emissions for Y1 are 5,850 g (5.9 kg) and for Y2 are 1,775 g (1.8 kg). Similarly the total
emissions are 6403 g (6.4 kg) and 2211 g (2.2 kg), respectively, with consideration for significant figures. The
direct emissions, in general, are a fairly large share of the total emissions in both cases.
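The matrix arithmetic in this example can be verified in a few lines. The [I + A] Y vectors below are the direct requirements implied by Example 8-3 (in $billion):

```python
R = [[50.0, 0.0],
     [0.0, 5.0]]   # g of waste per $billion of output, sectors 1 and 2

def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

# Direct requirements [I + A]Y in $billion, as implied by Example 8-3
direct_Y1 = [115.0, 20.0]
direct_Y2 = [25.0, 105.0]

E1 = matvec(R, direct_Y1)   # [5750.0, 100.0]
E2 = matvec(R, direct_Y2)   # [1250.0, 525.0]
print(sum(E1), sum(E2))     # 5850.0 1775.0
```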

Life Cycle Assessment: Quantitative Approaches for Decisions That Matter – lcatextbook.com
234 Chapter 8: LCA Screening via Economic Input-Output Models

Producing an R matrix for a burden, e.g., sulfur dioxide emissions, requires a comprehensive
data source of such emissions on a sector level. The sector classification of relevance is that
of the associated IO model. As mentioned above, IO models typically follow existing
classification schemes (e.g., the US 2002 benchmark model generally follows the NAICS
classification system used throughout North America). Thus, a data source of total sulfur dioxide emissions broken down by NAICS sector is required. Such a data source is ideally already
available and in the US, can be obtained from the Environmental Protection Agency.
However, some work may need to be done to translate, convert, or re-classify existing data
into a NAICS or IO sector basis to use in an IO-LCA model (see the Advanced Material for
this Chapter, Section 4). Once total sulfur dioxide emissions in an economy in a given year are
found (ideally the data source would provide emissions in the same year as the IO table used)
and allocated to the various IO sectors, each of the sector emissions values is divided by the
total output (Xi) of the sector. The result is the R matrix of burdens per dollar of output.
Typically, IO models are used at the scale of “millions of dollars” (as opposed to dollars or thousands of dollars), so the normalization factor Xi would be scaled to millions (i.e., instead of dividing by $120 million, one would divide by 120). This same process would be repeated
for all burdens of interest in the IO-LCA model building process. Example 8-4 provides
insight into generating R matrix values for the electricity sector.

Example 8-4: In this example, we show how to derive an R matrix value for a particular sector.

Question: What is the R matrix value for the electric power sector for sulfur dioxide emissions in
2002 (in short tons per million dollars)?

Answer: EPA data for 2002 suggest that total sulfur dioxide emissions from power generation
were 10.7 million short tons. In 2002, the sector output of the power generation and supply sector was $250
billion. Thus the R matrix value for the power generation sector would be (10.7 million short tons) / ($250
billion) = 42.8 short tons per million dollars.
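The unit handling in this calculation is worth making explicit, since mixing scales (tons versus million tons, dollars versus millions of dollars) is a common source of error when building R matrices. A sketch using the Example 8-4 numbers:

```python
# Example 8-4 inputs
so2_short_tons = 10.7e6        # total SO2 from power generation, 2002
sector_output_dollars = 250e9  # power generation sector output, 2002

# Normalize the denominator to millions of dollars before dividing
output_millions = sector_output_dollars / 1e6   # 250,000 $million

r_value = so2_short_tons / output_millions
print(r_value)   # 42.8 short tons of SO2 per $million of output
```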

Uncertainties in IO-LCA Models


Similar to Chapter 7, we revisit several categories of uncertainty for IO-LCA models.

Temporal uncertainty: IO models generally lag a bit more than process-based data sources, as it
takes considerable time to assemble and report the data required for national input–output
tables, and then further to assemble data for the various environmental impacts (i.e., R
matrices). That said, production processes and IO tables have considerable consistency over
time (see Advanced Material for this Chapter - Section 8 for more details).

Linear production: the environmental impact vectors generally use average impacts per dollar of
output across the entire sector, even though the effects of a production change might not be
incremental or marginal. For example, increasing demand for a new product might be


produced in a plant with advanced energy efficiency or pollution control equipment or with
brand new technology. If production functions are changing rapidly, are discontinuous, or are
not marginal, then the linear approximations will be relatively poor. Reducing errors of this
kind requires more effort on the part of the LCA practitioner. A simple approach would be to
alter the parameters of the IO-LCA model to reflect the user’s beliefs about their actual values.
Thus, estimates of marginal changes in R vectors may be substituted for the average values
provided in the standard model.
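One way to implement that substitution, assuming a model stored as arrays, is simply to overwrite the relevant entry of R before recomputing emissions; the sector values below are illustrative, not from any real model:

```python
import numpy as np

# Toy two-sector model (all values illustrative)
A = np.array([[0.10, 0.20],
              [0.05, 0.15]])
R_average = np.diag([50.0, 5.0])   # average burdens per dollar
Y = np.array([100.0, 0.0])

X = np.linalg.solve(np.eye(2) - A, Y)
E_average = (R_average @ X).sum()

# Suppose the marginal production technology in sector 1 is cleaner,
# emitting 30 g/$ rather than the sector average of 50 g/$
R_marginal = R_average.copy()
R_marginal[0, 0] = 30.0
E_marginal = (R_marginal @ X).sum()

print(E_average, E_marginal)   # the marginal estimate is lower here
```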

Foreign production: Process or IO-based models may assume that manufacture of inputs occurs
in the same geographic region as the manufacture of the product. For example, IO-LCA
models of a country represent typical production across the supply chain within each sector
of the home country, even though some products and services might be produced outside the
country and imported. The impacts of production in other regions may be the same, or
substantially different. This variation could be the result of different levels of environmental
regulation or protection. For example, leather shoes imported from China were probably
produced with different processes, chemicals, and environmental discharges than leather shoes
produced in the US.

Aggregation uncertainty: Aggregation issues arise primarily in IO-based models and occur because
of heterogeneous producers and products within any one input–output sector. However,
similar issues may occur in process-based models, such as those that aggregate various
production processes into a single average method, e.g., electricity ‘grid’ processes representing
a weighted average of underlying processes.

It is normal to want more detail than is available from a particular model’s sectors or processes,
but such desires need to be balanced against the level of data available. Even the hundreds of
sectors of large IO models do not give sufficiently detailed information on many particular
products. For example, we might seek data on lead-acid or nickel-metal hydride batteries, but
the model may only have two (primary and secondary) battery sectors. Likewise, a single
battery production sector may contain tiny hearing aid batteries as well as massive cells used
to protect against electricity blackouts in buildings. $1 million spent on hearing aid batteries
will use quite different materials and create different environmental discharges than $1 million
spent on huge batteries. But all production within an IO sector is assumed to be uniform
(identical), and the economic and environmental effect values in the model are effectively
averaged across all products within it.

As one indication of the degree of uncertainty, Lenzen (2000) estimates that the average total
relative standard error of input–output coefficients is about 85%. However, because numerous
individual errors in input–output coefficients cancel out in the process of calculating economic
requirements and environmental impacts, the overall relative standard errors of economic
requirements are only about 10–20%.


Introduction to the EIO-LCA Input-Output LCA Model


In this section, we provide specific information about the EIO-LCA model and specific
illustrations to aid in your understanding of how such models are built. EIO-LCA was
developed by researchers at the Green Design Institute at Carnegie Mellon University and is
available both as an Internet web tool and as an underlying MATLAB model environment at
http://www.eiolca.net/. Online tutorials and screencasts are available on how to use the web
model and are not repeated here. For EIO-LCA MATLAB information, see the Advanced
Material for this Chapter, Section 6. EIO-LCA is available free for non-commercial use on the
Internet (although even corporate users are able to generate results to get a sense of how it
works). Process model databases and software sold by consulting companies can be quite
expensive, generally costing on the order of thousands of dollars, as shown in Chapter 5. Other
IO-LCA models likely use very similar processes, but if you intend to use them, you should
read their documentation to ensure similarity and limitations.

The calculations required for Equations 8-1 through 8-3 are well within the capabilities of
personal computers and servers. The result is a quick and inexpensive way to trace out the
supply chain impacts of any purchase. The EIO-LCA website is able to assist in generating
IO-LCA estimates for various countries and with various levels of detail. In this chapter, we
will use the 428-sector 2002 US economy model to explore a variety of design and purchase
decisions. The EIO-LCA model traces the various economic transactions, resource
requirements, and environmental emissions required to provide a particular product or service.
The model captures all the various manufacturing, transportation, mining, and related
requirements to produce it. For example, it is possible to trace out the upstream implications
of purchasing $50,000 of reinforcing steel and $100,000 of concrete for a kilometer of roadway
pavement. Environmental impacts of these purchases can be estimated using EIO-LCA.
Converting such values into the relevant benchmark model year dollar values is discussed in
Section 3 of the Advanced Material.

We discuss the various data sources for the model, give an example application of the software,
provide a numerical example of the input–output calculations, and provide some sample
problems.

The data in the EIO-LCA software is derived from a variety of public datasets and assembled
for the various commodity sectors. For the most part, the data is self-reported and is subject
to measurement error and reporting requirement gaps. For example, automotive repair shops
do not have to report to the Toxics Release Inventory. The level of quality assurance of the
public data used varies. The major datasets include:

§ Direct and Total Input–Output Tables: The EIO-LCA website provides models for the US,
Germany, Spain, Canada, and China. US models available include those for the
benchmark years 1992, 1997, and 2002. Several of those years have multiple levels of
sector detail available. The 428-sector 2002 industry by commodity input–output (IO)
matrix of the US economy, as developed by the U.S. Department of Commerce Bureau
of Economic Analysis, is the default model. Economic impacts are computed from the
IO matrix and the user-input change in final demand. While the remaining data sets
below are generally available for multiple country-year models, the specific details
provided are for the 428-sector 2002 model.

R matrices:

§ Energy use for the 428 sectors comes from a number of sources. Energy use of
manufacturing sectors (roughly 270 of 428) is developed from the Manufacturing
Energy Consumption Survey (MECS), while energy use for mining sectors is calculated
from the 2002 Economic Census (USCB 1997). Service sector electricity use is estimated
using the IO table purchases and average electricity prices for these sectors.

§ Conventional pollutant emissions are from the US Environmental Protection Agency,
primarily the National Emissions Inventory (NEI) and onroad/nonroad data sources.

§ Greenhouse gas emissions are calculated by applying emissions factors to fuel use for fossil-
based emissions and allocating top-down estimates of agricultural, chemical process,
waste management, and other practices that generate non-fossil carbon emissions to
economic sectors.

§ Toxic releases and emissions are derived from EPA’s 2002 Toxics Release Inventory (TRI).

§ Hazardous waste data, specifically Resource Conservation and Recovery Act (RCRA) Subtitle
C hazardous waste generation, management, and shipment, were derived from EPA’s
National Biennial RCRA Hazardous Waste Report.

§ Water withdrawals come from various sources, as published in Blackhurst (2010).

§ Transportation use tracking flow of products through different freight modes comes from
multiple data sources, as published in Nealer (201x).

§ Land use estimates come from various sources, as published in Costello (2011).

Detailed information on how the underlying data sources are used to generate the R matrices
in EIO-LCA are available on the EIO-LCA website (at http://www.eiolca.net/docs/).

The EIO-LCA website follows the same workflow as the generic IO model shown in Figure
8-1. As a user, all you need to do is to enter a single value of final demand, select a sector that
must produce that final demand, and choose whether you want to see economic (X) or energy-
environmental results (R). All of the matrix math, data management, etc., is done by a web
server and results are shown in tabular form within seconds. In the basic EIO-LCA web


model, you can only enter a final demand for a single sector (i.e., you can only enter a value
for a single Yi and all other elements of Y are assumed to be zero). However, in general, IO
models can be run with multiple (even many) Yi final demand entries. It is possible to build a
custom model in the EIO-LCA model (see Advanced Material Section 5) where simultaneous
purchases can be made from multiple sectors; however, such a model also has limitations on
its meaningfulness. If you have not used EIO-LCA before, there are various tutorials,
screencasts, and other resources available on the website.

EIO-LCA Example: Automobile Manufacturing


As a specific demonstration of the utility of IO-LCA models (specifically EIO-LCA), this
section examines the manufacture of automobiles. As defined by the US Department of
Commerce, the automobile manufacturing sector for the US 2002 benchmark EIO model is
composed of the following NAICS sector:

336111 Automobile Manufacturing

This U.S. industry comprises establishments primarily engaged in one or more of the
following manufacturing activities:

* complete automobiles (i.e., body and chassis or unibody) or

* automobile chassis only.

Note that the EIO-LCA server shows the information above when browsing or searching for
sectors in which to place final demand. For example, choosing the related light truck manufacturing
sector would produce similar, but not identical, results to those below.

We can trace the supply chain for the production of $1 million of automobiles in 2002 using
EIO-LCA. This production of $1 million would represent the effects of making roughly 50
automobiles (given an approximate average producer price of $20,000 each in the year 2002). Figure 8-3
shows the total and direct (including percentage direct) economic contributions of the largest
20 supply sectors within the supply chain for making automobiles in the US.

First, the economic results are considered. From Figure 8-3, a $1 million final demand of
automobiles requires total economic activity in the supply chain of $2.71 million. In the total
economic output column are the elements of X. EIO-LCA also sums across all of X to present
the total. Results for the other 408 sectors are available on the website but are not shown here.


Sector | Total Economic ($ million) | Direct Economic ($ million) | Direct (%)

Total for all sectors 2.71 1.74 64.2

Automobile manufacturing 0.849 0.849 100.0

Motor vehicle parts manufacturing 0.506 0.446 88.1

Light truck and utility vehicle manufacturing 0.150 0.150 99.9

Wholesale trade 0.124 0.057 46.1

Management of companies and enterprises 0.108 0.033 30.9

Iron and steel mills 0.038 0.000 1.60

Semiconductor and related device manufacturing 0.026 0.014 54.7

Truck transportation 0.025 0.009 34.2

Other plastics product manufacturing 0.021 0.010 48.5

Power generation and supply 0.020 0.002 10.7

Real estate 0.020 0.001 5.97

Turned product and screw, nut, and bolt manufacturing 0.017 0.005 30.1

Ferrous metal foundries 0.015 0.000 1.68

Nonferrous foundries 0.015 0.000 2.14

Glass product manufacturing made of purchased glass 0.015 0.012 84.4

Other engine equipment manufacturing 0.015 0.012 79.8

Machine shops 0.014 0.002 15.9

Oil and gas extraction 0.013 0.000 0.085

Monetary authorities and depository credit intermediation 0.013 0.000 2.18

Lessors of nonfinancial intangible assets 0.013 0.000 3.65


Figure 8-3: Supply chain economic transactions for production of $1 million of automobiles
in US, $2002. Top 20 sectors. Results sorted by total economic output.


As discussed above, the change in GDP as a result of this economic activity would be only $1
million, since GDP measures only changes in final output, not of all purchases of intermediate
goods (i.e., not $2,710,000). The largest activity is in the automobile manufacturing sector itself:
$849,000. This includes purchases by the company that assembles vehicles from other
companies within the automobile manufacturing industry, like those that make steering
wheels, interior lighting systems, and seats.

The economic value of the supply chain is also shown in Figure 8-3. Direct purchases (from
the IO perspective, i.e., I+A) are $1.74 million, including the $1 million of final demand. Not
surprisingly, direct purchases are dominated by vehicle and parts manufacturing sectors. The
direct percentage compares the direct purchases for each sector to the total purchases across
the supply chain. Sectors with small direct purchase percentages (or, alternatively, large indirect
percentages) have most of their production feeding the indirect supply chain of automobiles
rather than the automobile assembly factories directly. Many of the top 20 sectors have more
than 50% of their total output as direct inputs into making automobiles (e.g., semiconductor
manufacturing and glass manufacturing sectors). Others primarily supply the other suppliers (e.g.,
iron and steel mills, power generation and supply).

EIO-LCA also allows you to generate estimates of energy and environmental effects, using
the data sources identified above. Using the same final demand of $1 million, Figure 8-4 shows
the energy use across the supply chain for producing automobiles for the top 10 energy-
consuming sectors (results available but not shown for the other 418 sectors). We remind you
that IO models are linear. Any analysis you do for $1 million of automobiles can be linearly
scaled down per vehicle.

EIO-LCA estimates total supply chain energy use of 8.33 TJ per $1 million of automobiles
manufactured (or 167 GJ per vehicle assuming 50 vehicles produced at an average cost of
$20,000). About 25% of that energy use (2.19 TJ) comes from energy needed in the electricity
(power generation and supply) sector, and about 15% from iron and steel mills. Most of the coal used
in the supply chain is for generating power. Natural gas use is fairly evenly split among the top
sectors. Similarly, most of the petroleum used is in the various transportation sectors, not all
of which are shown in the top 10 list of Figure 8-4. Notice that the top sectors in terms of
economic output are not closely associated with the top energy-consuming sectors! IO models
generally show that energy-intensive sectors are an important part of the energy supply
chain for every sector, but they are not always the sectors with the largest economic input.
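The per-vehicle scaling described above is a single division; using the totals and price assumption stated in the text:

```python
total_energy_TJ = 8.33                # supply chain energy per $1 million of autos
vehicles_per_million = 1e6 / 20_000   # 50 vehicles at a $20,000 average price

energy_per_vehicle_GJ = total_energy_TJ * 1_000 / vehicles_per_million
print(round(energy_per_vehicle_GJ))   # ~167 GJ per vehicle
```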

Note that specific fuels are not shown in Figure 8-4. Underlying data sources provide
information on consumption of diesel, gasoline, and other fuels that are aggregated into a
single estimate of “Petroleum” use.


Sector | Total Energy (TJ) | Coal (TJ) | NatGas (TJ) | Petrol (TJ) | Bio/Waste (TJ) | Non-Fossil Elec (TJ)

Total for all sectors 8.33 2.56 2.63 1.29 0.435 1.41

Power generation and supply 2.19 1.60 0.467 0.078 0 0.051

Iron and steel mills 1.25 0.743 0.341 0.012 0.005 0.151

Motor vehicle parts manufacturing 0.460 0.005 0.190 0.014 0.024 0.228

Automobile Manufacturing 0.381 0.004 0.190 0.013 0.040 0.133

Truck transportation 0.327 0 0 0.324 0 0.003

Other basic organic chemical manufacturing 0.259 0.032 0.099 0.036 0.078 0.014

Petroleum refineries 0.187 0.000 0.050 0.121 0.009 0.007

Alumina refining and primary aluminum production 0.172 0 0.046 0.001 0.004 0.120

Plastics material and resin manufacturing 0.169 0.007 0.088 0.037 0.018 0.019

Paperboard Mills 0.161 0.015 0.033 0.007 0.095 0.011


Figure 8-4: Supply chain energy requirements for production of $1 million of automobiles in 2002,
results for top 10 energy consuming sectors, sorted by total energy.

IO-LCA models must carefully manage fuel and energy data. Fuel use is tracked only in the
sector that directly uses it. Many sectors consume electricity, but only the power generation
sector consumes the coal, natural gas, petroleum, and biomaterial needed for generation. Also,
note that “non-fossil” electricity consumption is estimated in Figure 8-4. While facilities within
sectors are assumed to consume average electricity (generated from a mix of fossil and non-
fossil sources), if the model tracked total energy use of fossil and non-fossil sourced electricity
in TJ, and also tracked the coal and/or natural gas used to generate it, the model would “double
count” the energy in the fossil fuel and the electricity. Thus, we only track an average amount
of non-fossil electricity (which does not depend on fossil fuels to generate it), avoiding the
double counting of energy. To derive the non-fossil share, Department of Energy data on
percent non-fossil electricity generation in 2002 (31%) is multiplied by the amount of
electricity consumed by each sector.
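That allocation is a single multiplication per sector; the sector electricity values below are placeholders, not EIO-LCA data:

```python
non_fossil_share_2002 = 0.31   # DOE: 31% of 2002 US generation was non-fossil

# Hypothetical electricity consumption by sector, in TJ
electricity_TJ = {"sector A": 0.50, "sector B": 0.20}

# Only the non-fossil share is booked as sector energy use, since the
# fossil share is already counted as coal/gas at the power plants
non_fossil_elec_TJ = {s: tj * non_fossil_share_2002
                      for s, tj in electricity_TJ.items()}
print(non_fossil_elec_TJ)
```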

The case study below helps to describe how an IO-LCA based screening assessment can be
used in a corporate setting.

Case Study: Bio-based Feedstocks in the Paint and Coatings Sector


A US company was considering acting on customer requests to provide an alternative
product composed of bio-based feedstocks. These customers were looking to reduce
the “carbon footprint” of their products and their fossil fuel consumption, and
had read studies on the net carbon efficiency of bio-based as opposed to petrochemical-
based feedstocks. The question was whether such a conversion might be a beneficial
substitution for the producer and its customers.

An excerpted screening analysis using EIO-LCA for $100,000 of final demand in the
Paint and coatings sector demonstrated that the current mix of petroleum-based
feedstocks – across the entire production chain of making paints and coatings – is a
fairly small part of the total purchases as well as only about 5% (6 of 107 tons) of the
carbon emissions. On the other hand, supply chain-wide purchases of electricity are
comparable in economic value ($2k versus $5k) but constitute nearly a quarter (25 of 107
tons) of the total carbon-equivalent emissions. This screening analysis suggests that the
switch to bio-based feedstocks would likely have only a modest effect on the burden of the product.
In addition, it suggests that a corporate push for more renewable electricity in its
supply chain could have substantial benefits. It is this latter strategy that we
recommended to our corporate partner.

Figure 8-5: EIO-LCA economic and CO2 emissions results for the ‘Paint and Coatings’ Sector

Sector | Total ($ thousand) | CO2 equivalents (tons)
Total across all 428 sectors 266 107
Paint and coatings 100 3
Materials and resins 13 5
Organic chemicals 12 5
Wholesale Trade 10 1
Management of companies 10 <1
Dyes and pigments 8 17
Petroleum refineries 5 6
Truck transportation 5 8
Electricity 2 25


Beyond Cradle to Gate Analyses with IO-LCA Models


So far, IO tables and models have been represented as capturing the entire upstream supply
chain up to the point of manufacture, economically referred to as a producer price basis.
This means the context of final demand as well as the structure of the production recipes was
from the perspective of the producer at their place of business. In other words, the relevant
input into the model would be as if the producer was merely trying to recover the costs they
incurred. Thus, the perspective (or boundary) of producer-basis models is ‘cradle to gate’ –
the effects estimated by the model end at the point of production. The appropriate final
demand input is as measured or ‘seen’ by the producer.

IO models can also be created on a purchaser price basis. In such models, additional stages
beyond production are internalized into the production recipe for each sector. For physical
goods, typical activities internalized include transportation of product from the producer as
well as wholesale and retail operations (so-called margin activities). These models have a
‘cradle to consumer’ boundary. The relevant dollar input into a purchaser price model is the
price that a consumer (buyer) would expect to pay, which is generally easier to determine or
derive than a producer price. As a simple example, an automobile may have a $20,000 producer
price basis, but after transportation and dealer overhead are included may have a purchaser
price of $25,000. In such a case, the ‘production recipe’ of the $25,000 car in a purchaser price
model might have $22,000 of the recipe be associated with automobile manufacturing, $1,000
for transportation (e.g., by truck and rail) and $2,000 for retail overhead. If we were able to
perfectly separate the pieces, the purchaser price model would estimate the same effects as a
$22,000 input into a producer price model, plus the additional effects from the $3,000 of
other activities. Additional impacts that might be
estimated via the wholesale and retail margins are electricity use by computers at the store or
emissions from climate-controlled warehouses. On the other hand, if we entered the same
value ($22,000 or $25,000) into both models, this correspondence would not occur since the
models are linear and the recipes would not be fully inclusive. Additional detail about price
systems in producer and purchaser models is available in the Advanced Material (Section 3) at
the end of this chapter.
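One way to picture the unbundling of a purchaser price is as a small dictionary of producer-price final demands; the split below repeats the text's illustrative $25,000 automobile:

```python
purchaser_price = 25_000   # what the consumer pays for the automobile

# Illustrative unbundling into producer-price final demands (from the text)
recipe = {
    "automobile manufacturing": 22_000,
    "transportation (truck and rail)": 1_000,
    "retail and dealer overhead": 2_000,
}

# The production and margin pieces must sum back to the purchaser price
assert sum(recipe.values()) == purchaser_price

# Each piece becomes its own final demand entry in a producer-price model
for sector, y_i in recipe.items():
    print(sector, y_i)
```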

Beyond producer and purchaser basis models, IO-LCA models can be used to estimate effects
of even broader life cycles. In the example above, we consider what an EIO model would
estimate in terms of effects for a cradle to gate and cradle to consumer boundary for an
automobile. With those boundaries, the various requirements of using the vehicle (e.g.,
purchasing gasoline, insurance, maintenance, etc.) and managing it at end of life would not be
included in the model results.

However, these broader considerations can be approximated with a slightly more complicated
IO-LCA model final demand input. Instead of entering only a single element of final demand,
multiple Yi elements can be chosen. Alternatively, the model could be run multiple times and
the individual results aggregated to a single final result.


If we build on problem 8 from Chapter 3, which compared the life cycle effects associated
with two washing machines, the life cycle effects might include various elements of final
demand (entered as a series of elements in Y – see Advanced Material Section 5 - or run
consecutively through a model as separate individual elements of final demand):

• An input of final demand into the Household laundry equipment manufacturing sector

• A final demand input for a lifetime of water used (at an assumed $/gallon cost) from
the Water, sewage, and other systems sector

• A final demand input for a lifetime of electricity used (at an assumed $/kWh cost)
from the Power generation and supply sector

A full analysis would include differences pertaining to end-of-life disposal, although the
differences are likely to be relatively small for two washing machines.

IO model frameworks produce results as shown in Figure 8-3 and Figure 8-4. The effects from all
facilities within a sector at many levels of the supply chain are “rolled up” into these single value
results. For example, from Figure 8-4, the 1.6 TJ of coal used in the power generation sector comes
from many individual power plants, some (about 10%) directly, but mostly indirectly. These rolled
up results do not allow us to see energy use at specific tiers of the supply chain, or at particular
facilities. From such frameworks, the best analysis possible is a comparison of direct and indirect
effects. Advanced methods, such as structural path analysis (Chapter 12), allow one to drill down
into specific layers of the supply chain to find specific pathways of connections between
requirements.

What do these EIO-LCA results demonstrate?

Keeping in mind that IO-LCA models are a screening tool, they can help us to make and justify
our LCA project design decisions. If we were doing an LCA of the energy use of an automobile,
our IO-LCA results suggest that the boundary of the manufacturing processes should include
electricity, semiconductors, trade, and chemicals needed. Many of the other processes could be
ignored with little impact on the results.

As discussed in Chapters 2 and 5, referencing of data sources and models is critical in LCA. If
using an IO-LCA model in your study, you should be sure to note:

• the name and location of the model,

• its country of focus,

• EIO table year, and


• whether it is a producer or purchaser basis model.

Somewhere in your study, you must also clearly state the input value of final demand, the name
of any sectors chosen for analysis, and R matrix datasets chosen. For example, the EIO-LCA
model suggests the following citation be used:
Carnegie Mellon University Green Design Institute. (2013) Economic Input-Output Life
Cycle Assessment (EIO-LCA) US 2002 (428 sectors) Producer model [Internet], Available from:
<http://www.eiolca.net/> [Accessed 25 Aug, 2013].
In addition, you could separately provide in the LCA study document a table of final demands,
detailed sectors, and impact categories.

So far we have only motivated economic input-output models. However, an IO framework
can be applied to any type of unit, for example, physical flows. If desired, one could derive a
linear system of equations that instead represented the mass quantities needed across an
economy to support production. Such models could also be built with multiple or mixed units.
All of the same matrix techniques can be used to estimate direct and total requirements (see
Chapter 9).

Overall, tracing the supply chain requirements for production has yielded surprises that have
raised questions about sole reliance on process-based LCA. The most important suppliers in
one dimension (e.g., economic dependence) often are not the most important in another (e.g.,
energy use). Figure 8-4 showed that some of the largest energy users in the supply chain do
not even appear among the top 25 economic supply sectors. A system-wide view is critical in
assessing life cycle effects. That said, IO-LCA models provide quick but coarse and average
estimates of LCI results, and cannot substitute for detailed process-based analysis. IO-LCA
methods can help you draw boundaries, assess which processes are important, and can help
validate process-based results. Given the very short time required to generate results, there is
little reason not to consult an IO-LCA model in support of an LCA study when setting the
SDPs. Your screening analysis could identify whether the choice of sector was critical or not,
and also generalize whether placing various processes within the system boundary is critical or
not.

Chapter Summary
We have shown several examples that highlight both the ease and utility, along with the
complications, of exploring the entire supply chain via IO and IO-LCA models. Process-based
models were shown to specifically estimate detailed mass and/or energy balances for specific
activities relevant to the life cycle of a product and to link many of these data sources to yield
a ‘bottom up’ model. The process-based method is also generally expensive and time-
consuming. The resource intensity can lead to project design decisions that narrow the


boundaries around the problem, causing many supply chain aspects to be ignored. IO-LCA
methods, on the other hand, have a top-down system boundary of the entire supply chain up
to the point of production by default. The benefits of economy-wide comprehensiveness in
IO-LCA models are traded off against the reality that the models are built upon average values
for sectors and environmental burdens. As such, the utility of IO-LCA models is primarily as
a screening tool rather than as a true alternative to a process-based model. For those wishing
to read further into the theory and practice of economic input-output models, we can
recommend two sources: Miller (2009) and Hendrickson (2006).

References for this Chapter


Blackhurst, Michael, Hendrickson, Chris, Sels I Vidal, Jordi, “Direct and Indirect Water
Withdrawals for U.S. Industrial Sectors”, Environmental Science & Technology, 2010, Vol. 44, No.
6, pp. 2126–2130, DOI: 10.1021/es903147k

Costello, Christine, Griffin, W. Michael, Matthews, H. Scott, and Weber, Christopher,
"Inventory Development and Input-Output Model of U.S. Land Use: Relating Land in
Production to Consumption", Environmental Science & Technology, 2011, Vol. 45, pp. 4937–4943,
DOI: 10.1021/es104245j

Nealer, Rachael, Weber, Christopher L., Hendrickson, Chris, and Matthews, H. Scott, “Modal
freight transport required for production of US goods and services”, Transportation Research
Part E: Logistics and Transportation Review, Volume 47, Issue 4, July 2011, Pages 474-489.
DOI: 10.1016/j.tre.2010.11.015

Hawkins, Troy R., and Matthews, Deanna H., 2009. A Classroom Simulation to Teach
Economic Input-Output Life Cycle Assessment. Journal Of Industrial Ecology 13, no. 4 (August):
622-637. doi:10.1111/j.1530-9290.2009.00148.x

Hendrickson, Chris., Arpad Horvath, Satish Joshi, and Lester Lave. Introduction to the Use
of Economic Input–Output Models for Environmental Life Cycle Assessment. Environmental
Science and Technology, 32(7): 184A–191A, 1998.

Hendrickson, Chris T., Lave, Lester B., and Matthews, H. Scott, “Environmental Life Cycle
Assessment of Goods and Services: An Input-Output Approach”, RFF Press, April 2006.

Lave, L., E. Cobas-Flores, C. Hendrickson, and F. McMichael, Generalizing Life Cycle
Analysis: Using Input–Output Analysis to Estimate Economy-Wide Discharges. Environmental
Science & Technology, 29(9): 420A–426A, 1995.

Lenzen, M. Errors in Conventional and Input Output-based Life Cycle Inventories, Journal of
Industrial Ecology, 4(4):127 – 148, 2000. DOI: 10.1162/10881980052541981


Leontief, W., Environmental Repercussions and the Economic Structure: An Input–Output
Approach. Review of Economics and Statistics, 1970.

Miller, Ronald E. and Blair, Peter D., Input-Output Analysis: Foundations and Extensions, 2nd
edition. Cambridge University Press, 2009.

NAICS 2013 United States Census Bureau. 2013. ‘North American Industrial Classification
System, http://www.census.gov/eos/www/naics/ (accessed July 10, 2013).


End of Chapter Questions

Objective 1. Describe how economic sector data are organized into input-output
tables

Objective 2. Compute direct, indirect, and total effects using an input-output model

1. Use this transactions table (in millions of currency units) to answer the following
questions.

1 2 Y X
1 450 200 350 1000
2 100 600 1500 2200
V 450 1400
X 1000 2200

a. Describe in words what the highlighted values in the table represent.

b. Generate the direct requirements matrix

c. Generate the total requirements matrix

d. For a final demand of $50 million in sector 1, find the direct and total
requirements.

2. Reconsider the washing machine homework question from Chapter 3 using the 2002 EIO-
LCA purchaser price model. Assume a 10-year lifetime of each machine without discounting.
Ignore potential impacts from a disposal phase.

a. Use the $500 and $1,000 purchaser prices as inputs into the EIO-LCA Household laundry
equipment manufacturing sector to estimate the total energy consumption and fossil CO2
emissions to manufacture the two machines. Compare direct and indirect effects. What
do the results using these inputs suggest?

b. Use the assumptions about water use to estimate the use-phase energy and fossil CO2
emissions via input to the Water, sewage, and other systems sector.

c. Use the assumptions about electricity use to estimate the use-phase energy and fossil
CO2 emissions via input to the Power generation and supply sector.

d. Create a table summarizing the results above and find total energy and fossil CO2
emissions for the two machines. Determine the percent of energy and fossil CO2


emissions associated with manufacturing and use phases. What are some caveats you
might want to note if presenting these results given your use of an IO-LCA model?

Objective 3. Assess how an input-output model might be used as a screening tool.

3. You want to do a screening assessment of the energy needed to manufacture two different
types of plain white cotton t-shirts, one from a discount store costing $5 and another from a
specialty clothing store costing $15. What would the results of using an IO-LCA model’s
apparel sector suggest about the differences in their energy use for manufacture? What
other differences are likely in their manufacturing energy use?

4. Consider the case of a university looking to better manage its greenhouse gas emissions.

a. What are the fossil CO2 emissions associated with $1 million of College, university, and
junior colleges in 2002 using the EIO-LCA 2002 benchmark producer price model?
What percent of the emissions from each of the top 10 sectors are direct? What are
the overall direct emissions? What do these results tell you about the kinds of decisions
or activities that might be best managed by an average university in hopes of reducing
fossil CO2 emissions?

b. How could you adjust the average College, university, and junior colleges sector fossil CO2
results from (a) to represent a university with a $200 million annual budget and that is
considering the purchase of 10% of its electricity from wind power? As a
simplification, assume that no fossil CO2 emissions are associated with wind power
generation and the amount of wind power used in the estimate of part (a) is zero.

c. How could you adjust the average College, university, and junior colleges sector fossil CO2
results from (a) to represent a university with a $200 million annual budget and that is
considering the purchase of new equipment to lead to an overall reduction of 20% in
fossil CO2 emissions on site compared to the average university?

d. Comment on the overall feasibility and effectiveness of the university’s choice to


pursue green power or equipment replacement to achieve GHG emissions targets.


Advanced Material for Chapter 8 - Overview


As with the Advanced Material elsewhere in this book, these sections contain additional detail
about the methods and principles discussed in the chapter. They have been moved to the back
of the chapter because knowing about them is not vital to understanding the chapter content,
but may be necessary if you intend to more substantively use those methods. It is generally
expected that an undergraduate course (or casual learner of LCA) would focus on
the main chapters, and a graduate course (or advanced practitioner) would incorporate
elements from the advanced material.

In the advanced material sections of this chapter, you will find more in depth discussion of
the theoretical framework of economic input-output models, how price systems change, how
the vectors and matrices of IO-LCA models (with specific examples from EIO-LCA) have
been constructed, advanced features of the EIO-LCA website, and using software tools to
develop IO-LCA models.

Section 1 - Linear Algebra Derivation of Leontief (Input-Output) Model Equations
In the chapter, we showed the format of the transactions table and a general derivation of the
round by round purchases and how they become the Leontief inverse equation. In this section,
more detail is provided about the system of linear equations that drive IO models. If you will
be doing matrix computations in your work using IO-LCA models, it is important to
understand the equations in this section. We repeat Figure 8-1 here.

                         Input to sectors             Intermediate   Final      Total
Output from sectors      1     2     3    ...  n      output O       demand Y   output X
1                        Z11   Z12   Z13  ...  Z1n    O1             Y1         X1
2                        Z21   Z22   Z23  ...  Z2n    O2             Y2         X2
3                        Z31   Z32   Z33  ...  Z3n    O3             Y3         X3
n                        Zn1   Zn2   Zn3  ...  Znn    On             Yn         Xn
Intermediate input I     I1    I2    I3   ...  In
Value added V            V1    V2    V3   ...  Vn                               GDP
Total output X           X1    X2    X3   ...  Xn
Figure 8-1 (repeated). Example Structure of an Economic Input–Output Transactions Table

Notes: Matrix entries Zij are the input to economic sector j from sector i. Total (row) output for each sector i,
Xi, is the sum of intermediate outputs used by other sectors, Oi, and final demand by consumers.


In a typical IO model, output across the rows of the transactions table (Figure 8-1), typically
commodity output, can be represented by the sum of each row’s values. Thus, for each of the
n commodities indexed by i, output Xi is:

Xi = Zi1 + Zi2 + … + Zin + Yi (8-5)

However, I-O models are typically generalized instead by representing inter-industry flows
between sectors as a percentage of sectoral output. This flow is represented by dividing the
economically-valued (transaction) flow from sector i to sector j by the total output of sector
j. Namely,

Aij = Zij / Xj (8-6)

In such a system, the Aij term is a unitless technical (or input–output) coefficient. For
example, if a flow of $250 of goods goes from sector 3 to sector 4 (Z34), and the total output
of sector 4 (X4) is $5,000, then A34 = 0.05. This says that 5 cents worth of inputs from sector
3 is in every dollar’s worth of output from sector 4. As a substitution, we can also see from
Equation 8-6 that Zij = Aij Xj. This form is more common since the system of linear
equations corresponding to Equation 8-5 is typically represented as

Xi = Ai1 X1 + Ai2 X2 + … + Ain Xn + Yi. (8-7)

It is straightforward to notice that each Xi term on the left has a corresponding term on the
right of Equation 8-7. Thus all X terms are typically moved to the left hand side of the
equation and the whole system of equations written as:

(1 – A11)X1 – A12 X2 – … –A1n Xn = Y1

–A21X1 + (1 – A22)X2 – … –A2n Xn = Y2

–Ai1X1 –Ai2X2 – … + (1 – Aii)Xi –… –Ain Xn = Yi (8-8)

–An1X1 –An2X2 – … + (1 – Ann)Xn = Yn

If we let the matrix A contain all of the technical coefficient Aij terms, vector X all the
output Xi terms, and vector Y the final demand Yi terms, then equation system 8-8 can be
written more compactly as Equation 8-9:


X – AX = Y

[I – A] X = Y (8-9)

where I is the n×n identity matrix. This representation takes advantage of the fact that only
diagonal entries in the system are (1–Aii) terms, and all others are (–Aij) terms. Finally, we
typically want to calculate the total output, X, of the economy for various exogenous final
demands Y, taken as an input to the system. We can take the inverse of [I – A] and multiply
it on the left of each side of Equation 8-9 to yield the familiar solution

X = [I – A]–1 Y (8-10)

where [I – A]–1 is the Leontief inverse matrix or, more simply, the Leontief matrix. As
discussed in the main part of the chapter, the creation of this inverse matrix transforms the
direct requirements matrix into a total requirements matrix. The total requirements matrix
mathematically represents all tiers or levels of upstream purchases (instead of just direct
purchases) associated with an input of final demand.
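The derivation above is easy to verify numerically. The sketch below (Python with NumPy, used here purely for illustration; the two-sector transactions data are taken from End of Chapter Question 1) builds the technical coefficient matrix A from a transactions table, forms the Leontief inverse, and confirms that it reproduces total output from final demand.

```python
import numpy as np

# Two-sector transactions table (millions of currency units),
# matching End of Chapter Question 1
Z = np.array([[450.0, 200.0],   # inter-industry flows Z_ij
              [100.0, 600.0]])
X = np.array([1000.0, 2200.0])  # total output by sector
Y = np.array([350.0, 1500.0])   # final demand by sector

# Direct requirements (Equation 8-6): A_ij = Z_ij / X_j
A = Z / X  # broadcasting divides each column j by X_j

# Total requirements (Equation 8-10): the Leontief inverse [I - A]^-1
L = np.linalg.inv(np.eye(2) - A)

# Consistency check: X = [I - A]^-1 Y recovers the table's total output
print(L @ Y)  # approximately [1000. 2200.]
```

For larger systems, a linear solve such as np.linalg.solve(np.eye(n) - A, Y) is numerically preferable to computing the explicit inverse.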

Section 2 – Commodities, Industries, and the Make-Use Framework of EIO Methods
The Leontief model described above is very general, discussing outputs only in terms of the
sectors to which they apply. For many readers and users, this is sufficient differentiation. In reality,
IO tables and models are generally commodity by industry, where commodity production
sectors i are in the rows and industry sectors j are in the columns. The distinction between
commodities and industries is subtle but important. The traditional definition of a commodity
is a basic good that is produced widely and identically, for example, white rice. In this traditional
definition, all companies and facilities make identical, non-distinct products. Industries, on the
other hand, combine various commodity inputs and make a new product. This traditional view
of commodities is obsolete as these “commodity sectors” in modern IO tables categorize such
complex and distinct products as computers and other electronics. The terminology is the only
constant.

The simplified tables shown above (e.g., in Figure 8-1) are generally derived from “make and
use” tables. A make table organizes economic data related to which industry sectors make
which commodities, while a use table organizes economic data related to which industries
use which commodities. While a full mathematical description of converting from make-use
tables to transactions tables is beyond the scope of this chapter (and better left for the teacher
or student to implement), the make-use framework forms the foundation of the commodity-
by-industry transactions table introduced in the chapter. The matrix math involved in


transforming make-use tables into transactions tables internalizes and allocates the multiple
productions and uses of commodities into a unified set of production recipes. Matrix math
also allows us to generate different formats of transactions tables from the original make-use
format, e.g., an industry-industry or commodity-commodity table format.

The columns of a make table show the distribution of industries producing a commodity,
while the rows show the distribution of commodities produced by an industry. If you read
across the values of a row in the make table, you see all of the commodity outputs that each
sector makes. The make table might reveal that in fact several sectors are responsible for
“making” a particular commodity, e.g., a steel facility and a power plant may both produce
electricity. Figure 8A-1 shows an excerpt of actual data from the 2002 Make Table of the US
economy.20 It shows that the vast majority of farm commodities are produced by the farm
industry ($197 billion), and that billions of dollars of forestry commodities are produced in
both the farm and forestry sectors.

Industries/           Farms     Forestry,     Oil and      Mining,      Support       Utilities
Commodities                     fishing,      gas          except oil   activities
                                and related   extraction   and gas      for mining
                                activities

Farms                 197,334     3,306          ...          ...          ...            ...
Forestry, fishing,
and related                19    38,924          ...          ...          ...            ...
activities
Oil and gas
extraction                ...       ...       91,968          133        1,301            ...
Mining, except
oil and gas               ...       ...            1       47,270          163            ...
Support activities
for mining                ...       ...           33           86       32,074            ...
Utilities                 ...       ...           33          ...          ...        316,527


Figure 8A-1: Excerpted 2002 Make Table of the US economy ($millions)

Use Tables follow the “production recipe” style mentioned earlier in the chapter. If you read
down the values of a column in the Use Table, you see all of the economic inputs needed from
other sectors, i.e., you see how much the industry sector uses from the commodity sectors. The

20The Make and Use Tables in Figures 8A-1 and 8A-2 are excerpted from aggregated tables with about 80 sectors of resolution, not the
428 sectors in the benchmark tables. Less than 10 commodity and industry sectors are excerpted, so 70 other columns of data are not
shown.


rows show where the commodity outputs of sectors are used. Figure 8A-2 shows excerpted
but actual data from the 2002 Use Table of the US economy. It shows that the utilities
industry, which includes power generation, uses billions of dollars of oil, gas, and mined (coal)
commodities. It also shows that a large share of the production of wood products ($18 billion)
were used by the construction industry in 2002.

Commodities/            Oil and gas   Mining, except
Industries              extraction    oil and gas      Utilities   Construction

Farms                           ...               0            0          1,098
Oil and gas extraction       25,171               3       45,170            ...
Mining, except
oil and gas                     487           8,028        4,503          8,206
Support activities
for mining                    5,354           2,378          ...              6
Utilities                     3,608           3,981          108          2,895
Construction                 13,957               2        3,952            733
Wood products                     0               9           31         18,573
Nonmetallic mineral
products                        314             615           52         36,028
Figure 8A-2: Excerpted 2002 Use Table of the US economy ($millions)

Make and Use Tables often exist with classifications of “before and after redefinitions”. The
figures above are both before redefinitions. While the various methods of redefinition vary by
the agency creating the tables, typically the process of redefinition involves carefully remapping
secondary activities within established sectors to other sectors. As an example, the hotel
industry typically has restaurants and laundry services on site, which are represented by
separate sectors in tables. As the data available supports it, activities within the hotel industry
are re-mapped into those other sectors (e.g., food purchases are switched from the hotel
industry to the restaurants industry). This affects both the make and use tables, and the
industry outputs are different between the versions of the tables with and without
redefinitions. In the end, some sectors’ production recipes are basically unchanged by
redefinitions, while others are substantially changed. Since such redefinitions lead to better-
linked representations of the activities that could lead to energy and environmental impacts,
they are typically the basis of IO-LCA models.


Section 3 – Further Detail on Prices in IO-LCA Models


Adjusting Values to Match Basis Year of EIO Models

As represented in Figure 8-2, one of the critical inputs to an EIO model is an increment of
final demand to be studied. The appropriate “unit” of this final demand is a currency-valued
input the same year as that of the model. If using a 2002 US EIO model, then a final demand
in 2002 dollars is needed. If you are using the model to assess the impacts of automobile
production in 2013, then you need to find a method of adjusting from 2013 to 2002 dollars
for the final demand, since it is likely that prices in sectors have changed significantly since the
year of the model. However, since the intention is to perform a screening-level analysis, you
can exploit the fact that production recipes (technologies) do not change quickly, and assume
that the only relevant difference between a 2013 and 2002 vehicle is the price (producer or
purchaser).

For such conversions, the appropriate type of tool is an economic price index or GDP deflator
for a particular sector. These are generally available from national economic agencies (in the
US they are provided by the BEA, the same agency that creates the input-output tables, which
leads to consistent comparisons). Note that an overall national price index or GDP deflator
may be the only such conversion factor available. In this case, it can be used but the adjustment
should be clearly documented as using this national average rather than a sector-specific value.

A full discussion of price indices and deflators is beyond the scope of this book, but are
typically represented as values where there is a “base year” with an index value of 100, and
values for years before and after the base year. Such values could be, e.g., 98 and 102, which
if before and after the base year would suggest annual price changes of about 2% per year. It
is the percentage equivalent values of the index values that are useful when using indexes to
adjust values from present day back to the basis year of EIO models. Note that the base year
of the index does not need to match the year of the EIO model; as long as you can use the
index values to adjust dollar values back and forth, you can adjust current values back to the
appropriate final demand value for the right year (or for any other year you might care about).
Equation 8A-1 shows how you can convert values from one basis year to another using a price
index (or a GDP deflator represented with a base=100 format):
Value_A / Value_B = Price Index_A / Price Index_B (8A-1)

For example, assume the average retail price of automobiles in 2011 is known to be $30,000,
and we want to find the corresponding retail price to be used as the final demand in a 2002
US EIO purchaser basis model so that we could try to estimate the effects of manufacturing
a single automobile in 2011. The BEA provides spreadsheets of various economic time-series
estimates, such as Gross Output, including price indices, by sector.21 For example, price index

21 As of 2014 the file, named GDPbyInd_GO_NAICS_1998-2011.xls, can be found at http://www.bea.gov/industry/gdpbyind_data.htm


values for the Automobile manufacturing sector (#336111), of 98.75 for 2002 and 99.4 for 2011,
are shown in Figure 8A-3.

Year Index Value


1998 100.645
1999 100.451
2000 101.488
2001 100.902
2002 98.750
2003 98.748
2004 100.286
2005 100.000
2006 97.168
2007 96.280
2008 98.185
2009 99.918
2010 98.653
2011 99.400
Figure 8A-3: Price Index Values for Automobile Manufacturing Sector, 1998-2011 (Source: BEA)

Thus, the converted 2002 value can be found by applying equation 8A-1, as shown in Equation
8A-2:
Value_2002 / Value_2011 = Price Index_2002 / Price Index_2011, so Value_2002 = $30,000 × (98.75 / 99.4) (8A-2)

In this case, the adjusted value for 2002 is $29,800. It may be surprising that the price level has
been almost unchanged over those ten years! One could instead use the negligible (less than
1%) price level change as the basis of an assumption to ignore the need for adjustment and
just use $30,000 directly as the input final demand into the model.
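Equation 8A-1 amounts to scaling a value by the ratio of index values. The short sketch below makes the automobile example concrete (Python for illustration; the adjust function name is ours, and the index values come from Figure 8A-3).

```python
# Price index values for the Automobile manufacturing sector (from Figure 8A-3)
index = {2002: 98.750, 2011: 99.400}

def adjust(value, from_year, to_year, index):
    """Apply Equation 8A-1: Value_to = Value_from * (Index_to / Index_from)."""
    return value * index[to_year] / index[from_year]

# Adjust a $30,000 retail price in 2011 back to 2002 dollars
price_2002 = adjust(30_000, from_year=2011, to_year=2002, index=index)
print(round(price_2002))  # 29804, i.e., roughly $29,800 in 2002 dollars
```

Any other pair of years in Figure 8A-3 can be converted the same way, since the base year of the index need not match the year of the EIO model.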


Differences Between Producer And Purchaser Models

It is important to understand the multiple ways in which price bases can be defined in input-
output systems. While we primarily discussed and assumed the producer basis in Chapter 8,
there are other ways as well, as defined by the UN System of National Accounts (UN 2009):

Basic prices are the amount received by a producer from a purchaser for a good or service,
minus any taxes, and plus any subsidies (this is referred to as net taxes). This basis typically
excludes transport charges that are separately invoiced by the producer. You might more
simply consider basic prices as the raw value of a product before taxes or subsidies are
considered. Producer prices are the amount received by a producer from a purchaser, plus
any taxes and minus any subsidies. Producer prices are equivalent to the sum of basic prices
and net taxes. Finally, purchaser prices are the amount paid by the purchaser and include the
cost of delivery (e.g., transportation costs) as well as additional amounts paid to wholesale and
retail entities to make it available for sale. These transportation and wholesale/retail
components are referred to as margins.

Example: Basic, producer and purchaser prices

To illustrate these different ways of describing prices, consider this example from
Statistics New Zealand (2012). Generic currency units of $ are used.

Item Amount
Basic price 12
+ Taxes on product, except sales tax or VAT 1
- Subsidy on products 5
= Producer’s price 8
+ Sales tax or VAT 2
+ Transport charges and trade margins paid by purchaser 3
= Purchaser’s price 13
Note: VAT = Value Added Tax (used in many parts of the developed world)

In this example, the seller is actually able to retain $12 for the product (basic price).
The sales transaction takes place at $8 (producer’s price). The seller gets an
additional $4 from the subsidy, less the tax. The purchaser has to pay $13 to take
possession of the good (purchaser’s price), with $5 going to non-deductible taxes and
transport charges and trade margins.


With respect to LCA, as we discussed, a producer basis is a cradle to gate scope, while a
purchaser basis adds in transport and wholesale/retail operations (assuming pickup at store)
and is thus cradle to consumer in scope. In some cases, the purchaser and producer price bases
are approximately the same. However, if transportation costs or retail markups needed to bring
the product to market are significant, there will be a difference.

Figure 8A-4 gives total sectoral output values in producer price and purchaser price bases in
the 1997 US benchmark accounts. Note that these values are not provided to help ‘convert’
values between the models as done above with price indexes, but instead, to demonstrate why
the prices and model results are different. Service sectors like barbershops have identical
producer and purchaser prices, because the service is produced at the point of purchase. On
the other hand, the purchaser price of furniture is roughly split 50-50 between manufacture
and the wholesale/retail margin activities needed to market the product.
                                Transportation   Wholesale and    Purchaser   Producer/Purchaser
Item            Producer Price  Cost             Retail Trade     Price       Price Ratio (%)
Shoes           $18,333         $179             $21,748          $40,259     45
Barber shops    $31,246         —                —                $31,246     100
Furniture       $28,078         $235             $27,648          $55,960     50

Figure 8A-4: Differences in Producer and Purchaser Prices (Millions of 1997 Dollars in Sector Output)

When a purchaser price model is used, a dollar of input of final demand to a single sector is
distributed into these shares of the various underlying sectors (like creating a final demand
vector with multiple entries instead of just a single value for the production sector). These data
are often represented on a normalized (percentage) basis instead of using total values. For
example, $1,000 of final demand of furniture in a purchaser price model will have a final
demand of about $500 to the furniture manufacturing sector, a small amount for
transportation, and about $500 to the wholesale and retail trade sectors together. This will be
discussed in more detail in Advanced Material Section 6. Depending on the energy or
environmental intensity of the various sectors, the results for a given final demand using a
producer versus purchaser model may be higher, lower, or about the same. For example, truck
transportation (one of the margin sectors included in a purchaser basis model) is fairly carbon
intensive. If the purchased product has significant transportation requirements, the purchaser
model might have higher emissions than the producer model.
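The splitting of a purchaser-basis final demand into producer and margin components can be sketched as follows (Python for illustration; the shares are derived from the furniture row of Figure 8A-4, and the sector labels and variable names are ours).

```python
# Furniture row of Figure 8A-4 (millions of 1997 dollars of sector output)
components = {
    "Furniture manufacturing": 28_078,     # producer price
    "Transportation": 235,                 # transport margin
    "Wholesale and retail trade": 27_648,  # trade margin
}
purchaser_total = sum(components.values())  # approximately the purchaser price output

# Distribute $1,000 of purchaser-basis final demand proportionally
demand = 1_000
allocation = {name: demand * value / purchaser_total
              for name, value in components.items()}
for name, dollars in allocation.items():
    print(f"{name}: ${dollars:,.0f}")
```

This reproduces the roughly 50-50 split between manufacturing and the trade margins described above, with only a few dollars going to transportation.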

References

Statistics New Zealand, “Introduction to Price Indices”, online course materials,
http://unstats.un.org/unsd/EconStatKB/KnowledgebaseArticle10351.aspx, posted June 14,
2012. Last accessed January 10, 2014.

United Nations, System of National Accounts, 2008. New York, 2009.


Section 4 – Mapping Examples from Industry Classified Sectors to EIO Model Sectors
The organization of data for IO-LCA models is a substantial exercise requiring quality
checking and assurance processes. Economic matrices (e.g., an A matrix) are typically provided
directly by agencies and at worst typically only require minimal conversion or preparation for
use in EIO models. R matrices, on the other hand, require significant effort. In this section,
we focus on explaining how the various different industry classification methods map to each
other in support of making these matrices.

In the US, the primary classification scheme for industries (and the businesses within them) is
the North American Industry Classification System (NAICS). While the US government has
officially decreed that all industry data collection efforts shall use NAICS, some data sources
have not yet completely converted to this system. NAICS is a hierarchical classification system
with values ranging from 2 to 6 digits. Sectors are broadly categorized by the first two digits,
and then sub-classified by appending additional digits. For example, manufacturing sectors
start with the first two digits 31-33. Three digit sector numbers (e.g., 311, 312, … , up to 339)
further classify manufacturing into activities like food manufacturing and miscellaneous
manufacturing. The three-digit sector values can be similarly broken into more specific
manufacturing categories described with four-digit sector numbers (e.g., 3111, 3112,
etc.). Six-digit sectors are the most detailed (and least aggregated) classifications of
activity in the economy. For example, the Automobile manufacturing sector discussed at various
times in this chapter is classified hierarchically in the NAICS system as follows:

NAICS 33 Manufacturing (note 31-33 are all classified in the same way)

NAICS 336 Transportation equipment manufacturing

NAICS 3361 Motor vehicle manufacturing

NAICS 33611 Automobile and light truck manufacturing

NAICS 336111 Automobile Manufacturing

There are of course many other complementary manufacturing subsectors throughout that
hierarchy that are not shown. The full official US Census Bureau NAICS classification is
available on the Internet (at http://www.census.gov/eos/www/naics/).
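Because NAICS is hierarchical by digit prefix, walking up the classification tree is simply a matter of taking prefixes of the code. A minimal sketch (Python for illustration; the function is ours, not part of any official tool):

```python
def naics_hierarchy(code):
    """Return the 2-digit through full-length prefixes of a NAICS code, broadest first."""
    return [code[:n] for n in range(2, len(code) + 1)]

# The Automobile manufacturing example from the text
print(naics_hierarchy("336111"))
# ['33', '336', '3361', '33611', '336111']
```

Note one caveat from the text: the 2-digit level is not always a single code, since manufacturing spans the range 31-33.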

While the US government (via the BEA) also creates the input-output tables, it does not
simply define the sectors of the input-output table to correspond precisely to 6-digit NAICS
industries or commodities. As mentioned in the chapter, they balance available resources
against the need to produce a sufficiently detailed input-output table. Thus, of the 428 sectors
in the 2002 US input-output model, relatively few correspond directly to 6-digit NAICS codes

Life Cycle Assessment: Quantitative Approaches for Decisions That Matter – lcatextbook.com
260 Chapter 8: LCA Screening via Economic Input-Output Models

(though these are mostly in the manufacturing sectors), many IO sectors map to 5-digit
NAICS codes, and a significant number map to 3- and 4-digit level NAICS codes.

Beyond the mapping of IO sectors to n-digit NAICS level, many IO sectors are not simple
one-to-one mappings, meaning the IO sectors represent aggregations of multiple underlying
NAICS codes. Figure 8A-5 shows a summary of how NAICS codes map into the first set of
sectors of the 2002 US IO detailed model from BEA. The left-hand column shows a subset of
the hierarchical classifications of IO sectors.

I-O Industry Code and Title Related 2002 NAICS Codes


11 AGRICULTURE, FORESTRY, FISHING AND
HUNTING
1110 Crop production
1111A0 Oilseed farming 11111-2
1111B0 Grain farming 11113-6, 11119
111200 Vegetable and melon farming 1112
1113A0 Fruit farming 11131-2, 111331-4, 111336*, 111339
111335 Tree nut farming 111335, 111336*
111400 Greenhouse, nursery, and floriculture production 1114
111910 Tobacco farming 11191
111920 Cotton farming 11192
1119A0 Sugarcane and sugar beet farming 11193, 111991
1119B0 All other crop farming 11194, 111992, 111998
Figure 8A-5: Correspondence of Crop Production NAICS and IO Sectors, 2002 US Benchmark Model
(Source: Appendix A)

In the right-hand column are the NAICS-level sectors that are mapped into each of the detailed
IO sectors. For example, two 5-digit NAICS sectors (11111 and 11112) map into the Oilseed
farming sector. A single 4-digit sector (1112) maps into the Vegetable and melon farming sector.
Various 5 and 6-digit level NAICS sectors map into the Fruit farming IO sector. The asterisk
next to 111336 notes that output from that sector is not 1:1 mapped into a single sector. As
you can see, some of NAICS 111336’s output is mapped into the Tree nut farming sector below
it. Fortunately, the names of the IO sectors tend to be very similar or identical to the NAICS
sector names (not shown above but available on the Census NAICS URL above), so following
the mapping process is a bit easier.

While the discussion above is motivated by how the IO transactions tables are created, it is
also critical to understand because of how the classifications and mappings affect the creation
of R matrices. Since each value of an R matrix is in units of effects per currency unit of output
for a sector, we need to ensure that data on energy and environmental effects for a sector have
been correctly mapped into IO sectors. In other words, we need to ensure that the R matrix
values (numerators and denominators) have been derived correctly.


Reconsider Example 8-4 from the chapter. If instead of trying to find SO2 emissions for the
power generation sector, imagine you were deriving an R matrix of fuel use by sector. As
defined in Figure 8A-5, the fuel use of the Oilseed farming IO sector (1111A0) would be found
by finding data on the fuel use of NAICS sectors 11111 and 11112, and adding them together.
Finally, this sum would be normalized by the output of the oilseed farming sector (from the
Use Table) and the result would be the entry of the R matrix for that sector. As another
example, the R matrix value for the Greenhouse, nursery, and floriculture production (111400) sector
requires data on fuel use from just one 4-digit NAICS sector, 1114.
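
The derivation just described can be sketched in a few lines of Python. The mapping follows Figure 8A-5, but the fuel-use and output values below are hypothetical placeholders, not real MECS or BEA data:

```python
# Hedged sketch of deriving one R-matrix entry for the Oilseed farming IO
# sector (1111A0). The NAICS-to-IO mapping is from Figure 8A-5; the
# numbers are made up for illustration only.
naics_to_io = {"11111": "1111A0", "11112": "1111A0"}
fuel_use_tj = {"11111": 4.0, "11112": 2.5}     # hypothetical fuel use, TJ
io_output_musd = {"1111A0": 13.0}              # hypothetical sector output, $M

# Sum the NAICS-level data mapped into the IO sector (numerator), then
# normalize by the sector's output (denominator).
numerator = sum(v for k, v in fuel_use_tj.items() if naics_to_io[k] == "1111A0")
r_entry = numerator / io_output_musd["1111A0"]  # TJ per $M of output
print(r_entry)  # 0.5
```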

The mapping process for building R matrices seems simple, and conceptually it is. However,
data at the required level of aggregation (4-, 5-, or 6-digit) are often unavailable. When you
have data at one aggregation level but need to use them at another, assumptions must be
made and documented.

If you have more detailed data but need more aggregated data, the process is generally simple:
you can aggregate (sum) 6-digit NAICS data up to the 5-digit level. However, when the data
exist only at an aggregate (e.g., 3- or 4-digit NAICS) level, you need some way of allocating
the aggregate data across the more disaggregated 4-, 5-, or 6-digit sectors.
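
Aggregating upward really is that simple, because NAICS parents are digit prefixes. A short Python sketch with hypothetical values:

```python
# Sketch of rolling detailed NAICS data up one level: summing 6-digit
# values into their 5-digit parent is just a prefix-grouped sum.
# The values are hypothetical, not real survey data.
from collections import defaultdict

six_digit = {"311611": 10.0, "311612": 4.0, "311613": 2.0, "311615": 7.0}
five_digit = defaultdict(float)
for code, value in six_digit.items():
    five_digit[code[:5]] += value   # group by the 5-digit prefix

print(five_digit["31161"])  # 23.0
```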

The following example shows the challenges present in mapping available data to the
corresponding IO sectors. It represents actual data available for use in the 2002 US benchmark
IO model, at the 428 sector level. Figure 8A-6 shows an excerpt of the NAICS to IO mapping
for the 29 Food manufacturing sectors. For these 29 sectors, the required level of aggregated
NAICS data ranges from the 4 to 6-digit levels. Creating the R matrix for every type of fuel
would require at least 29 different energy values that would then be divided by sectoral output.
Figure 8A-7 shows available data on energy use from food manufacturing sectors from the
US Department of Energy (MECS 2002).


I-O Industry Code and Title Related 2002 NAICS Codes
31 MANUFACTURING
3110 Food manufacturing
311111 Dog and cat food manufacturing 311111
311119 Other animal food manufacturing 311119
311210 Flour milling and malt manufacturing 31121
311221 Wet corn milling 311221
31122A Soybean and other oilseed processing 311222-3
311225 Fats and oils refining and blending 311225
311230 Breakfast cereal manufacturing 311230
31131A Sugar cane mills and refining 311311-2
311313 Beet sugar manufacturing 311313
311320 Chocolate and confectionery manufacturing from cacao beans 31132
311330 Confectionery manufacturing from purchased chocolate 31133
311340 Nonchocolate confectionery manufacturing 31134
311410 Frozen food manufacturing 31141
311420 Fruit and vegetable canning, pickling, and drying 31142
31151A Fluid milk and butter manufacturing 311511-2
311513 Cheese manufacturing 311513
311514 Dry, condensed, and evaporated dairy product manufacturing 311514
311520 Ice cream and frozen dessert manufacturing 311520
31161A Animal (except poultry) slaughtering, rendering, and processing 311611-3
311615 Poultry processing 311615
311700 Seafood product preparation and packaging 3117
311810 Bread and bakery product manufacturing 31181
311820 Cookie, cracker, and pasta manufacturing 31182
311830 Tortilla manufacturing 31183
311910 Snack food manufacturing 31191
311920 Coffee and tea manufacturing 31192
311930 Flavoring syrup and concentrate manufacturing 31193
311940 Seasoning and dressing manufacturing 31194
311990 All other food manufacturing 31199
Figure 8A-6: Correspondence of Food Manufacturing NAICS and IO Sectors,
2002 US Benchmark Model (Source: Appendix A)


NAICS Code   Sector Name                     Total   Net Electricity   Residual Fuel   Distillate Fuel   Natural Gas
311          Food                            1,116   230               13              19                575
311221       Wet Corn Milling                217     23                *               *                 61
31131        Sugar                           111     2                 2               1                 22
311421       Fruit and Vegetable Canning     47      7                 1               1                 36
Figure 8A-7: NAICS Level Fuel Use Data From Manufacturing Energy Consumption Survey,
units: trillion BTU (Source: MECS 2002)

As you can see, the immediate challenge is that results for only four sectors (rows) are
available from MECS, the best available data source. One of these is the energy use of the
entire 3-digit NAICS Food manufacturing sector (311). Estimates of energy
use are provided for only three more detailed food manufacturing sectors: Wet corn milling
(311221), Sugar (31131), and Fruit and vegetable canning (311421). The reason why only these
sectors were estimated is not given, but presumably again reflects a balance of data quality,
budget resources, and resulting resolution. Regardless, the MECS data provided for 311221
maps perfectly into IO sector 311221. The MECS data for 31131 needs to be split into values
for IO sectors 31131A and 311313. The MECS data for 311421 can be put into IO sector
311420 (but may be missing data for sectors 311422, etc.). So the 5- and 6-digit data from
MECS can, at best, provide specifically mapped data for 4 of the 29 sectors. For the
remaining 25 sectors, we need a method to allocate the total energy use data reported at the
3-digit NAICS level for 311, shown in the first row of Figure 8A-7. It is not as simple as
allocating those sector 311 values directly, because the values provided (e.g., 1,116 trillion
BTU of total energy use) already include the energy use of the three detailed sectors below it
in the table.
Thus, the energy of the other 25 sectors to be allocated is the difference between the values
provided in the NAICS 311 row and the three detailed rows. In this example, total energy use
of the 25 other sectors is 1,116 – 217 – 111 – 47, or 741 trillion BTU (with similarly calculated
values for the other fuels).
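
The residual arithmetic above can be expressed as a quick check (values in trillion BTU, from the total-energy column of Figure 8A-7):

```python
# The residual to be allocated across the 25 remaining food sectors is the
# 3-digit total minus the three detailed sectors reported by MECS.
total_311 = 1116
detailed = {"311221": 217, "31131": 111, "311421": 47}
residual = total_311 - sum(detailed.values())
print(residual)  # 741
```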

In EIO-LCA, the allocation method used to distribute 741 trillion BTU of energy use into the
other 25 food manufacturing sectors is to use the dollar amounts from the 2002 Use Table as
weighted-average proxies for consumption of each energy source, which assumes that each
sector within a sub-industry is paying the same price per unit of energy. For the case of the
1997 and 2002 Benchmark IO models for the US, more complete documentation of how
various effects have been derived is available in the EIO-LCA documentation
(http://www.eiolca.net/docs). Other IO-LCA models may make different assumptions to
allocate the available data.
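
A hedged sketch of this dollar-proxy allocation in Python; the Use Table dollar amounts below are hypothetical, and only three of the 25 sectors are shown:

```python
# Sketch of the proportional allocation described above: the residual
# energy is split across the remaining sectors in proportion to their
# Use Table dollar amounts, which assumes each sector pays the same
# price per unit of energy. Dollar values are hypothetical.
residual_tbtu = 741
use_table_musd = {"311111": 9000.0, "311119": 21000.0, "311230": 7000.0}
total_musd = sum(use_table_musd.values())
allocation = {s: residual_tbtu * m / total_musd
              for s, m in use_table_musd.items()}

# The shares sum back to the residual, and scale with each sector's dollars.
print(round(sum(allocation.values()), 6))  # 741.0
```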

Hopefully, this discussion of the inconsistent aggregation and organization of data for
IO-based sectoral analysis helps you appreciate the complexity of creating models that are, in
the end, simple to use!


Section 5 – Modeling Effects of Multiple Final Demand Entries


In this section we briefly demonstrate an advanced feature of the EIO-LCA website which
allows a user to estimate the effects of multiple final demand entries into the same model. This
is one of two custom advanced models available (the other is described in the Advanced
Material of Chapter 9).

From the ‘Use the Model’ page of EIO-LCA, select the ‘Create Custom Model’ tab in the top
center of the page (as shown in the screenshot below). On the resulting page, select the
‘custom product’ link.

The custom product tool allows you to do an expanded input-output analysis of a more
complex production process. Instead of looking only at the effects of demand from a single
sector, you can consider the effects of increased production in several sectors simultaneously
(i.e., as described in the chapter, multiple final demand entries at once).

There are two primary applications of this tool in terms of types of products to model. Of
course, either of these options still has the same advantages and disadvantages of any IO-LCA
model in terms of aggregation, representative production, etc.

• Hypothetical products - You can consider the implications of a product that is not
currently represented in the input-output model, e.g., a ‘recipe’ of the requirements needed
to build an electronic book reader. These products are similar to laptop computers, but
not part of any existing sector production (as of the 2002 model used).

• Improved analysis of existing products - You can perform a more specific analysis of an
existing product. For example, instead of just looking at $20,000 of production from the
vehicle production sector, you could put together a recipe of similar but different items
needed in the supply chain of a hybrid vehicle. This may more closely approximate your
desired product.

To use the custom product builder, the first step is to choose the EIO-LCA ‘model year’ to
be used (e.g., 1997 or 2002) and click the ‘Change Model’ button, as shown in the screenshot
below.


The second step uses the same two-level sector selection interface as used in the basic EIO-
LCA model to iteratively add the sectors required to produce your good or service (i.e., you
are selecting the various entries of the final demand vector Y for your model). For each
separately required sector (final demand component), you choose the sector, enter a final
demand in millions of dollars, and then click the ‘Add this Sector’ button. The third and final
step is done when all components of final demand in your custom model have been entered,
and you click the ‘Build It’ button at the bottom of the screen.

The screenshot below shows what the website would look like when using the 2002 EIO-LCA
Producer price model for a custom-built model to approximate LED lamp production,
including the following three sectoral components of final demand into the model:

• $1 million of Semiconductor and related device manufacturing for the LED wafer production.

• $0.2 million of Lighting fixture manufacturing for the fixture components of the bulb.

• $0.1 million of Other pressed and blown glass and glassware manufacturing for the glass
encasing for the bulb.


As above, the last step (#3) would be to click the ‘Build It’ button, which would return the
default EIO-LCA economic result summary shown below. From this screen, you can click the
‘Change Inputs’ button to instead show results for energy, greenhouse gases, etc., across the
economic supply chain.

In the parlance of Chapter 8, this example of the custom builder tool to estimate the effects
of LED lamp production is using a final demand as shown below.

Y = [1; 0.2; 0.1]  (a column vector, in $ millions)
The results generated by the custom tool in this case are identical to those that would be
obtained from separately running the 2002 EIO-LCA Producer model three times (once for
each of the sectoral final demand inputs), saving the results to a spreadsheet, and then
summing the results. The custom tool is merely a way of saving this added manual effort and
doing it all at once.
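
This equivalence is a direct consequence of the linearity of the Leontief model, and can be checked numerically. The 3x3 A matrix below is made up purely for illustration (it is not EIO-LCA data):

```python
# NumPy check of the linearity claim above: running one combined
# final-demand vector equals summing three separate single-sector runs.
import numpy as np

A = np.array([[0.10, 0.20, 0.00],
              [0.00, 0.10, 0.10],
              [0.20, 0.00, 0.10]])          # hypothetical direct requirements
L = np.linalg.inv(np.eye(3) - A)            # Leontief inverse [I - A]^-1

y = np.array([1.0, 0.2, 0.1])               # combined final demand, $M
parts = [np.array([1.0, 0.0, 0.0]),
         np.array([0.0, 0.2, 0.0]),
         np.array([0.0, 0.0, 0.1])]

combined = L @ y
summed = sum(L @ p for p in parts)
print(np.allclose(combined, summed))  # True
```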


If you need additional help with the custom builder tool, see the online EIO-LCA screencasts
(at http://www.eiolca.net/tutorial/ScreencastTutorial/screencasts.html).

End of Section Questions

1. Redo question #2 (the washing machine comparison) from the End of Chapter questions,
but using the custom model tool described in this section. Again, reflect on the caveats of
using the EIO-LCA model for this comparison, and also comment on whether the custom
tool introduces any particular new issues in interpreting the results.


Section 6 – Spreadsheet and MATLAB Methods for Using EIO Models


In this section we overview the specific use of Microsoft Excel and MATLAB in support of
linear algebra / matrix math manipulations of IO models. This section is not intended to be
an introduction to using these two software tools, or as an introduction to linear algebra.

Modeling Using Microsoft Excel Software

Despite its cost, Microsoft Excel is ubiquitous spreadsheet software, already installed on
many computers. Other spreadsheet programs (such as those from OpenOffice) use very
similar methods to those described here. For relatively small projects, spreadsheets can be very
useful in organizing LCA data and in assisting with matrix calculations.

In Excel, elements of vectors and matrices are easy to enter by hand (for small matrices) and
also by pasting or importing data from other sources. A 1 row by 5 column or 5 row by 5
column area of a spreadsheet can be generated quickly. Returning to Example 8-2, the A and
I matrices, and vectors Y1 and Y2 could be entered into Excel as in Figure 8A-8.

Figure 8A-8: Data Entry in Microsoft Excel for Example 8-2

However, such entries, despite looking like a matrix, would not be treated as such in Excel.
All cells in Excel are by default treated individually. To be recognized as a vector or matrix,
Excel requires you to create arrays. This can be achieved in one of two ways. The most
convenient way of making re-usable arrays in Excel is to highlight the entire area of the matrix
(e.g., the 1 by 5 or 5 by 5 series of cells created above) and to use the built-in naming feature
of Excel. For example, we can highlight the cell range B2:C3 and then move the cursor to the
small box between the “Home” ribbon bar and cell A1 and type in “A”, to designate this set
of cells as the A matrix, as shown in Figure 8A-9. The same can be done for I, Y1, and Y2,
however, note that you cannot use “Y1” and “Y2” as Excel names because there are already
Excel cells with those names (in column Y of the spreadsheet) – you must instead name them
something like “Y_1” and “Y_2”. These names of specific cells or groups of cells help you to
create more complex cell formulas as they act like aliases or shorthand notations that refer to
the underlying cell ranges. In practice, instead of having to enter the cell range (e.g., B2:C3)
and potentially making typos in formulas, you can instead just use the name you have assigned.
This is useful with matrix math because it is easier to ensure you are multiplying the correct
vectors and matrices by using their names instead of cell ranges.


Figure 8A-9: Named A Matrix in Microsoft Excel for Example 8-2

Once you have made names for your data ranges, you can use built-in Excel matrix math
functions like multiplication and inversion. Addition and subtraction of vectors or matrices of
the same dimensions (m x n) can be done with regular + and – operators. However, you need
to help Excel realize that you are making an array and set aside space for it to be created based
on knowing its dimensions. To find I+A as in Example 8-2, you need to first select an unused
cell range in your spreadsheet that is 2 x 2 (and optionally name it, e.g., IplusA, and press
enter), then type the equal sign (=), enter the formula (I+A), and then press CTRL-SHIFT-ENTER.
This multi-step process tells Excel that you want the results of the matrix operation
I+A to be entered into your selected cell range, to add the previously named references I and
A, and to generate the result with array formulas (thus the CTRL-SHIFT-ENTER at the end).
The screenshots in Figures 8A-10 and 8A-11 show the intermediate and final steps (before
and after typing CTRL-SHIFT-ENTER) of this process in Excel. Note that after CTRL-
SHIFT-ENTER has been typed, Excel modifies the cell formula such that curly brackets are
placed around the formula (not shown), denoting the use of an array function as applied to a
cell in the named range.

Figure 8A-10: Entering Array Formula for Selected Area in Microsoft Excel for Example 8-2

As shown in the last Advanced Material Section, multiple Yi entries of final demand can also
be modeled, e.g., a final demand Y3 could have $100 inputs into both sectors (not shown here).


Figure 8A-11: Result of Array Formula in Microsoft Excel for Example 8-2

Multiplication and inversion of matrices uses the same multi-step process, but use the built-in
functions MMULT and MINVERSE. You can use the MMULT and MINVERSE functions
by typing them into the formula bar or by using the Excel “Insert->Function” dialog box
helper. As with the example shown above, as long as you first select the cell range of the
expected result (with the appropriate m x n dimensions), enter the formula, and press
CTRL-SHIFT-ENTER at the end, you will get the right results. You will see an error (or a result in
only one cell) if you skip one of the steps. While a bit cumbersome, using array functions in
Excel is straightforward and very useful for small vectors and matrices. Figure 8A-12 shows a
screenshot where [I-A]-1, [I-A]-1 Y1, and [I+A] Y1 have been created.

E-resource: A Microsoft Excel file solving Examples 8-1 through 8-3 is posted to the
textbook website.

Figure 8A-12: Result of Matrix Math in Microsoft Excel for Example 8-2

Note that you can perform vector and matrix math without using the Excel name feature. In
this case, you would just continue using regular cell references (e.g., B2:C3 for the A matrix in
the screenshot above). All of the remaining instructions are the same.


Brief MATLAB Tutorial For IO-LCA Modeling

This short primer on using MathWorks MATLAB is no substitute for a more complete lesson
or lecture on the topic but will help you get up to speed quickly. It presumes you have
MATLAB installed on a local computer with the standard set of toolboxes (no special ones
required). MATLAB, unlike Microsoft Excel, is a high-end computation and programming
environment that is often used when working with large datasets and matrices. It is typically
available in academic and other research environments.

When MATLAB is run, the screen is split into various customizable windows. Generally
though, these windows show:

• the files within the current directory path,

• the command window interface for entering and viewing results of analysis,

• a workspace that shows a listing of all variables, vectors, and matrices defined in the
current session, and

• a history of commands entered during the current session.

In this tutorial, we focus on the command line interface and the workspace. Despite the brevity
of the discussion included here, one could learn enough about MATLAB in an hour to
replicate all of the Excel work above.

MATLAB has many built-in commands and, given its scientific computing focus, is designed
to operate on very large (thousands of rows and columns) matrices. Some of the most useful
commands and operators for use with EIO models in MATLAB are shown in Figure 8A-13.
Many commands have an (x,y) notation, where x refers to rows and y refers to columns.
Others operate on whole matrices.

Working with EIO matrices in MATLAB involves defining matrices and using built-in
operators much the same way as was done in the Excel examples above. Matrices are defined
by choosing an unused name in the workspace and setting it equal to some other matrix or
the result of an operation involving commands on existing matrices. MATLAB commands are
entered at the command line prompt ( >> ) and executed by pressing ENTER, or placed all
in a text file (called an ‘.m file’) and run as a script. If commands are entered without a
semicolon at the end, then the results of each command are displayed on the screen in the
command window when ENTER is pressed. If the semicolon is added before pressing
ENTER, then the command is executed, but the results are not shown in the command
window. One could look in the workspace window to see the results.

Command Description of Result

zeros(x,y) creates a matrix of zeros of size (x,y). This is also useful to “clear out” an existing matrix.

ones(x,y) same as zeros, but creates a matrix of all ones.

eye(x) creates an identity matrix of size (x,x). Note the command is not I(x), a common
confusion.

inv(X) returns the matrix inverse of X.

diag(X) returns a diagonalized matrix from a vector X, i.e., where the elements of the vector are the
diagonal entries of the matrix (like the identity matrix).

sum(X) returns an array with the sum of each column of the input matrix. If X is a vector then the
command returns the sum of the column.

size(X) tells you the size of a matrix, returning (number of rows, number of columns). This is
useful if you want to verify the row and column sizes of a matrix before performing a
matrix operation.

A’ performs a matrix transpose on A, inverting the row and column indices of all elements of
the matrix.

A*B multiplies matrices A and B in left-to-right order with the usual linear algebra rules.

A.*B element-wise multiplication instead of matrix multiplication, i.e., A11 is multiplied by B11
and the result placed in element (1,1) of the new matrix (A and B must be the same size).

[A,B] concatenates A and B horizontally.

[A;B] concatenates A and B vertically.

clear all empties out the workspace and removes all vectors, matrices, etc. Like a reset.
Figure 8A-13: Summary of MATLAB Commands Relevant to EIO Modeling

In this section, courier font is used to show commands typed into, or results returned
from, MATLAB. For example, the following commands, entered consecutively, would “clear
out” a matrix named “test_identity” and then populate its values as a 2x2 identity matrix:

>> test_identity=zeros(2,2)

>> test_identity=eye(2)

and the results consecutively displayed would be:


test_identity =
0 0
0 0
test_identity =
1 0
0 1
The format of matrices displayed in MATLAB’s command window is just as one would write
them in row and column format. Matrices are populated with values by either importing data
(not discussed here) or by entering values in rows and columns, where columns are separated
by a space and rows by a semicolon. For example, the following command would create a 2x2
identity matrix:

identity_2 = [1 0; 0 1]

which would return the following result in the command window:

identity_2 =
1 0
0 1
The workspace window has a list of all vectors or matrices created in the session. All are listed,
and for small matrices, individual values are shown. For larger matrices, only dimensions (m x
n) are shown. Display of the dimensions is useful to ensure that you do not try to perform
operations on matrices with the wrong number of rows and columns. Double clicking on a
vector or matrix in the workspace opens a new window with a tabbed spreadsheet-like view
of its elements (called the Variable Editor). It is far easier to diagnose problems in this editor
window than in scrolling through the results in the command window, which can be
overwhelming to read with many rows and columns.

As discussed above, commands can be run from a text file containing a list of commands.
Code is written into such files and saved to a filename with an .m extension. To run .m files,
you navigate within the current directory path window until your .m file is visible. Then in the
command window, you type in the name of the .m file (without the .m extension) and hit
ENTER. MATLAB then treats the entire list of commands in the file as a script and runs it
sequentially. Depending on your needs, you may or may not need semicolons at the end of
lines (but usually you will include semicolons so the command window does not become
cluttered as results speed by in the background). Any commands without semicolons will have
their results shown in the command window. If semicolons are always included, the results
can be viewed via the workspace.


As a demonstration, one possible sequence of commands to complete Example 8-1 (either


entered line by line or run as an entire .m file) is:

Z=[150 500; 200 100];


X=[1000 2000; 1000 2000];
A=Z./X;
A command sequence for Example 8-2 is (assuming commands above are already done):

y1=[100; 0];
y2= [0; 100];
direct=eye(2)+A;
L=inv(eye(2)-A);
directreq1=direct*y1;
directreq2=direct*y2;
totalreq1=L*y1;
totalreq2=L*y2;
where the final 4 commands create the direct and total requirements for Y1 and Y2. A
command sequence for Example 8-3 is (assuming commands above are already done):

R=[50 5];
R_diag=diag(R);
E_direct_Y1 = R_diag*directreq1;
E_direct_Y2 = R_diag*directreq2;
E_total_Y1=R_diag*totalreq1;
E_total_Y2=R_diag*totalreq2;
E_sum_Y1=sum(E_total_Y1);
E_sum_Y2=sum(E_total_Y2);
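
For readers working outside MATLAB, the same session can be transliterated to Python with NumPy (same numbers as Examples 8-1 through 8-3, shown here only for Y1):

```python
# NumPy transliteration of the MATLAB session above.
import numpy as np

Z = np.array([[150.0, 500.0], [200.0, 100.0]])
X = np.array([[1000.0, 2000.0], [1000.0, 2000.0]])
A = Z / X                                  # element-wise, like MATLAB's Z./X

L = np.linalg.inv(np.eye(2) - A)           # total requirements [I - A]^-1
y1 = np.array([100.0, 0.0])
totalreq1 = L @ y1

R_diag = np.diag([50.0, 5.0])              # like MATLAB's diag(R)
E_total_y1 = R_diag @ totalreq1
E_sum_y1 = E_total_y1.sum()                # like MATLAB's sum(E_total_Y1)
```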

EIO-LCA in MATLAB

The EIO-LCA model in a MATLAB environment is available as a free download from the
website (www.eiolca.net). The full 1997 model in MATLAB is available directly for
download, and a version of the 2002 model excluding energy and GHG data is available
directly for download. The 2002 MATLAB model with energy and GHG data is available


for free for non-commercial use via a clickable license agreement on the www.eiolca.net
home page (teachers are encouraged to acquire this license and the MATLAB file for local
distribution but to make non-commercial license terms clear to students).

Within the downloaded material for each model are .mat files with the vectors and matrices
needed to replicate the results available on the www.eiolca.net website, and MATLAB code
to work with producer and purchaser models. MATLAB .m files named EIOLCA97.m and
EIOLCA02.m are scripts for the 1997 and 2002 models, respectively, to generate results
similar to what is available on the website.

For example, running the EIOLCA97.m file in the 1997 MATLAB model will successively
ask whether you want to use the producer or purchaser model, which vector (economic,
GHG, etc.) to display, and how many sectors of results (e.g., all 491 or just the top 10).
Before running this script, you need to enter a final demand into one or more of the 491
sectors in the SectorNumbers.xls spreadsheet file. Results will be saved into a file called
EIOLCAout.xls in the 1997 MATLAB workspace directory. Note: to run the
EIOLCA97.m script file, you must be running MATLAB directly in Windows or via
Windows emulation software (e.g., Boot Camp or Parallels on a Mac) since it uses
Microsoft Excel read and write routines only available on Windows. The vectors and
matrices in the 1997 model though are accessible to MATLAB on any platform. Due
to these limitations, and the age of the data in the 1997 model, this section focuses on
the 2002 MATLAB model (but similar examples and matrices exist in the 1997
model).

Running the EIOLCA02.m file in the 2002 MATLAB model files will successively ask
whether you want to use the producer (industry by commodity basis default) or purchaser
model, the name of the vector variable that contains your final demand (which you will need
to set before running the .m file), and what you would like to name the output file. Note the
2002 MATLAB model can be run on any MATLAB platform (not just Windows).

Before running this script, you need to create and enter a final demand into one or more of
the 428 sectors. The following MATLAB session shows how to use the EIOLCA02.m script
to model $1 million of final demand into the Oilseed farming sector. All lines beginning with
>> show user commands (as noted above, the user also needs to choose between the
producer and purchaser models, and to give names for the final demand vector and for a
.txt output file). Before running this code, you will need to change
the current MATLAB directory to point to where you have unzipped the MATLAB code.

>> y=zeros(428,1);
>> y(1,1)=1;
>> EIOLCA02

276 Chapter 8: LCA Screening via Economic Input-Output Models

Welcome to EIO-LCA
This model can be run in 2002 $million producer or
purchaser prices.
For producer prices, select 1. For retail (purchaser)
prices, select 2.
Producer or Purchaser prices? 1

Name of the 428 x 1 final demand vector y


Output file name? (include a “.txt”)
Filename xout.txt
Total production input is: 1$M2002, producer prices

The resulting xout.txt file shows the total supply chain results across all sectors for $1
million of final demand in all data vectors available in the MATLAB environment (which
would match those on the website), all in one place. This file can be imported as a text file
into Microsoft Excel with semicolon delimiters for more readable output and for easier
comparison to the results on the website. An excerpt of rows and columns from this file is
shown in Figure 8A-12 (sorted by sector number and rounded off):
Sector                                    | Total econ, $M | Total Energy, TJ | Fossil CO2 Emissions, mt CO2e
Total, All Sectors                        | 2.1            | 16.1             | 944
1111A0 Oilseed farming                    | 1.1            | 8.4              | 476
1111B0 Grain farming                      | 0.0            | 0.2              | 13
111200 Vegetable and melon farming        | 0.0            | 0.0              | 0.06
111335 Tree nut farming                   | 0.0            | 0.0              | 0.023
1113A0 Fruit farming                      | 0.0            | 0.0              | 0.11
111400 Greenhouse and nursery production  | 0.0            | 0.0              | 0.36
111910 Tobacco farming                    | 0.0            | 0.0              | 0.14
111920 Cotton farming                     | 0.0            | 0.2              | 11
1119A0 Sugarcane and sugar beet farming   | 0.0            | 0.0              | 0.06
1119B0 All other crop farming             | 0.0            | 0.0              | 0.75
Figure 8A-12: First 10 Sectors of Output from EIOLCA02.m script for $1M of Oilseed farming
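If you would rather parse xout.txt with a script than import it into Excel, the semicolon-delimited layout can be read directly. The sketch below uses Python's standard csv module on a small hypothetical excerpt (column names and values taken from Figure 8A-12); the actual file contains one column per data vector in the model.

```python
import csv
import io

# Hypothetical excerpt of a semicolon-delimited EIOLCA02.m output file;
# the real xout.txt has many more rows and one column per data vector.
excerpt = (
    "Sector;Total econ, $M;Total Energy, TJ;Fossil CO2 Emissions, mt CO2e\n"
    "Total, All Sectors;2.1;16.1;944\n"
    "1111A0 Oilseed farming;1.1;8.4;476\n"
)

rows = list(csv.reader(io.StringIO(excerpt), delimiter=";"))
header, data = rows[0], rows[1:]

# Convert the numeric columns for further analysis
totals = {r[0]: [float(v) for v in r[1:]] for r in data}
print(totals["1111A0 Oilseed farming"])  # -> [1.1, 8.4, 476.0]
```

Note that the semicolon delimiter means sector names containing commas (such as "Total, All Sectors") parse cleanly.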

The script .m files have much useful information in them, should you care to follow the
code. For example, you can see the specific matrix math combinations used to generate the


producer and purchaser models and their direct and total requirements matrices used in
EIO-LCA (summarized in Figure 8A-13). The Readme files within the EIO-LCA 1997 and
2002 ZIP files give additional detail.
Element names and descriptions:

A02ic: Direct requirements matrices; this particular one is in industry-by-commodity format (there are similarly-named matrices for industry by industry, etc.)

L02ic: Total requirements matrices (others similarly named), using the Leontief inverse.

EIOsecs, EIOsecnames: Lists of IO sector numbers and names of sectors in the model.

envect02: List of effects available to be output, starting with direct and total economic effects, then energy (including fuel-specific), then greenhouse gases (specified by gas), water, conventional air pollutants, hazardous waste, transportation by mode, toxic releases, then TRACI impact assessment.

EIvect: The energy and greenhouse gas emission R matrix values, including totals and values specific to fuels and gases. There are similar matrices for water flows, etc.

W: A modified make matrix (see the section on make and use tables). This matrix is used to ensure that all sectors that actually produce an input are considered for overall national average production. Any final demand input is 'split' into all sectors that produce it. For example, many sectors make electricity, not just the power generation sector.

purchtransmat: Matrix that distributes purchaser-valued final demand into the production, transportation, and wholesale-retail margin inputs.

Figure 8A-13: Summary of Vectors and Matrices
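For intuition about how a direct requirements matrix (such as A02ic) relates to its total requirements matrix (such as L02ic), the Leontief inverse can be computed by hand for a tiny case. The following Python sketch uses a hypothetical 2-sector A matrix, not actual EIO-LCA data, and the closed-form 2x2 inverse:

```python
# Hypothetical 2-sector direct requirements matrix A (not EIO-LCA data)
A = [[0.2, 0.3],
     [0.1, 0.4]]

# Total requirements L = (I - A)^-1, via the closed-form 2x2 inverse
IA = [[1 - A[0][0], -A[0][1]],
      [-A[1][0], 1 - A[1][1]]]
det = IA[0][0] * IA[1][1] - IA[0][1] * IA[1][0]
L = [[IA[1][1] / det, -IA[0][1] / det],
     [-IA[1][0] / det, IA[0][0] / det]]

# Total output x = L * y for $1 of final demand in sector 1
y = [1.0, 0.0]
x = [L[i][0] * y[0] + L[i][1] * y[1] for i in range(2)]
print(x)  # roughly [1.33, 0.22]: $1 of demand needs $1.33 of sector 1 output
```

Every entry of L is at least as large as the corresponding entry of A (plus the identity on the diagonal), since total requirements include the direct requirements plus all indirect ones.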

E-resource: A Microsoft Excel file with the components of purchaser-valued output (as
used in purchtransmat) for the 2002 benchmark input-output table of the US economy
is posted in the Chapter 8 folder (filename is Purchaser Producer Cost Map 2002.xls). Figure
8A-14 summarizes three sectors' values in the file. For example, it shows how to convert a final
demand of $1 in purchaser-valued output of Automobile manufacturing into a more complex final
demand of $0.88 of Automobile manufacturing, $0.10 in Wholesale trade, etc.
Sector Name                                   | Producer Price | Wholesale trade | Air   | Rail | Water | Truck | Pipeline | Retail trade | Purchaser Price
Automobile Manufacturing                      | 0.88           | 0.1             | 0.002 | 0.00 | 0     | 0.01  | 0        | 0.01         | 1
Light Truck and Utility Vehicle Manufacturing | 0.69           | 0.08            | 0.00  | 0.01 | 0     | 0.01  | 0        | 0.2          | 1
Heavy Duty Truck Manufacturing                | 0.83           | 0.02            | 0.00  | 0.01 | 0     | 0.01  | 0        | 0.14         | 1
(Air, Rail, Water, Truck, and Pipeline are the transportation margin sectors.)
Figure 8A-14: Example of Purchaser Model Margin Coefficients
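Applying the margin coefficients is straightforward: a purchaser-valued final demand is split into producer-valued demands on the production, transportation, and margin sectors, as purchtransmat does internally. Below is a Python sketch using the Automobile manufacturing coefficients from Figure 8A-14 (sector names abbreviated; this mirrors, but does not use, the actual matrix):

```python
# Margin coefficients for $1 of purchaser-valued Automobile manufacturing
# demand, taken from Figure 8A-14 (they sum to ~1 because of rounding)
margins = {
    "Automobile manufacturing": 0.88,
    "Wholesale trade": 0.10,
    "Air transportation": 0.002,
    "Truck transportation": 0.01,
    "Retail trade": 0.01,
}

purchaser_demand = 1_000_000  # $1M of purchaser-valued final demand

# Producer-valued final demand (a dict standing in for the 428x1 vector)
producer_demand = {sector: share * purchaser_demand
                   for sector, share in margins.items()}
print(producer_demand["Automobile manufacturing"])  # -> 880000.0
```

The resulting multi-sector demand vector is then run through the producer-price model as usual.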

Instead of using the provided .m script files, the MATLAB workspaces for 1997 and 2002 can
be used on any MATLAB platform to do tailored modeling using the various vectors and
matrices. For example, you may want to generate the total fossil CO2 emissions for $1 million
of oilseed farming in the same EIO-LCA 2002 model. Fossil CO2 emissions are in the matrix


EIvect, in row 8 (rows 1-6 are the various energy vector values and rows 7-12 are the various
GHG emission vector values). The MATLAB code needed is as follows:

>> clear all


>> load EIO02.mat
>> y=zeros(428,1);
>> y(1,1)=1;
>> x=L02ic*y;
>> E=EIvect(8,:)*x
which returns:

E = 944.2073
The same value appears in the first row of the last column of Figure 8A-12.22

Likewise, you might be interested in generating the total fossil CO2 emissions across the
supply chain for $1 million into each of the 428 sectors:

>> allsects=EIvect(8,:)*L02ic;
which returns a 1x428 vector containing the requested 428 values (where the data in column
1 is the same as above for oilseed farming). This simple one-line MATLAB instruction
works because the 1x428 row vector chosen from EIvect (total fossil CO2 emissions factors
per $million for each of 428 sectors) is multiplied by the column entries in the total
requirements matrix for each of the sectors, and the result is the same as finding the total
GHG emissions across the supply chain as if done one at a time. The first four values in this
vector (rounded) are:

[944.2 1123.1 739.8 756.4],


representing the total fossil CO2 emissions for $1 million of final demand into each of the first
4 (of 428) sectors in EIO-LCA. Much more is possible with the available economic and
environmental/energy flow matrices than can be done on the website or with the included
script file. For example, you could do a similar analysis as above for the purchaser-based
model to find the results of $1 million in all sectors.
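For readers working outside MATLAB, the same row-vector-times-matrix idea can be sketched in Python with hypothetical values (a 3-sector emissions row standing in for EIvect(8,:) and a 3x3 matrix standing in for L02ic):

```python
# Hypothetical per-$M direct emission factors (like one row of EIvect)
r = [10.0, 5.0, 2.0]
# Hypothetical 3x3 total requirements matrix (like L02ic)
L = [[1.2, 0.1, 0.0],
     [0.3, 1.1, 0.2],
     [0.1, 0.0, 1.05]]

# allsects = r * L: total supply-chain emissions per $M of final
# demand in each sector, computed for all sectors at once
allsects = [sum(r[i] * L[i][j] for i in range(3)) for j in range(3)]

# Equivalent one-sector-at-a-time check for sector 0: E = r * (L * y)
y = [1.0, 0.0, 0.0]
x = [sum(L[i][j] * y[j] for j in range(3)) for i in range(3)]
E = sum(r[i] * x[i] for i in range(3))
print(allsects[0], E)  # the two calculations agree
```

This is why the single MATLAB line EIvect(8,:)*L02ic yields the same answers as repeating the sector-by-sector calculation 428 times.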

22 Technically, the EIO-LCA MATLAB code uses the W matrix to find direct effects in these equations. See the .m file.


End of Section Questions

E-resource: A Microsoft Excel file with the aggregated 15-sector 2002 benchmark input-
output table of the US economy is posted to the textbook website. It contains the A
matrix and the sector outputs for the 15 sectors.

1. Use the US 2002 benchmark 15-sector spreadsheet to answer the following questions.

a. Find the direct supply chain purchases ($M) from all sectors for a $100 million purchase
in Sector 1 (Agriculture, forestry, fishing, and hunting).

b. Find the second level supply chain (first level indirect, or Tier 2) purchases ($M) from all
sectors for a $100 million purchase in Sector 1.

c. Find the total supply chain purchases ($M) from all 15 sectors for a $100 million increase
in Sector 1.

d. What percent of the total purchases in each sector are direct? What percent of the total
purchases in each sector are second-tier?
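The mechanics behind Question 1 can be sketched with a hypothetical 2-sector economy (this illustrates the tier calculations only, not the answer, which requires the 15-sector spreadsheet):

```python
# Mechanics of tiered supply chain purchases with a hypothetical A matrix:
# direct (tier 1) = A*y, tier 2 = A*(A*y), total = (I - A)^-1 * y
A = [[0.1, 0.2],
     [0.3, 0.1]]
y = [100.0, 0.0]  # $100M of final demand in sector 1

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

direct = matvec(A, y)      # tier-1 (direct) purchases
tier2 = matvec(A, direct)  # tier-2 purchases

# Total output via the power series L*y = y + A*y + A^2*y + ...
total = y[:]
term = y[:]
for _ in range(200):  # converges since the column sums of A are < 1
    term = matvec(A, term)
    total = [t + s for t, s in zip(total, term)]
print(direct, tier2, total)
```

Note the power series converges to the same result as inverting (I - A); the total here includes the original final demand itself, so subtract y if only supply chain purchases are wanted.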

2. Answer the questions below using the EIO-LCA 2002 MATLAB environment, using 2002
producer and purchaser priced industry by commodity matrix models.

Hint: see the EIOLCA02.m code and observe differences in the producer and purchaser
models. No deductions for inelegant solutions, but be sure to show equations or code used.
You will be able to verify you have done it right because the eiolca.net website for the 2002
US model will show the same emissions for a given sector.

a. Find the fossil CO2e emissions resulting from $1 million of producer-valued final demand
for every sector in the model. Which ten sectors, sorted by total fossil CO2e emissions,
result in the highest total tons of CO2e as a result of this economic activity?

b. Repeat part (a) for purchaser-valued final demand (still submitting only the top 10). Show
results broken out by production, transportation and margin sectors.

c. Make a visual comparing the producer and purchaser-valued results for your top 10 fossil
CO2-emitting sectors per purchaser $million (i.e., you should have the top 10 from part (a)
matched with the results from part (b) even if they are outside of that top 10). Discuss the
differences, and why they are greater or less than each other.


Section 7 – IO-LCA-based Uncertainty Analysis: Example with Ranges23
Input-output models are generally used as screening tools, for example to identify hot spots
within a product system network for subsequent process-based analysis, or to focus primary
data collection efforts. Traditionally the screening is done on a magnitude basis, meaning that
the point of the screening is to identify the likely biggest hot spots. However, by considering
available data, uncertainty-based screening is also possible.

Chapter 8 (including its Advanced Material sections) described how publicly available data are
used to generate the R matrices of environmental flows per unit of output. In short, a single
'best' data source is identified for each of the sectors in the model for each needed flow. The
overall IO-LCA model is thus composed of many 'best guess' deterministic data points, with
no representation of uncertainty. Most IO-LCA models are deterministic in nature.

Chen et al. (2018) considered the availability of various data sources on energy use for each of
the sectors in the 428 sector 2002 US EIO-LCA model, as well as various combinations of
different assumptions (e.g., different assumptions on unit prices of fuel) that could be used to
generate alternative values for R matrices. They also introduce a method for expressing and
visualizing the resulting parameter uncertainty in matrix-based models. The results provide
additional insight to understand how a screening tool can be used to identify both hot spots
as well as uncertainties that could be better understood with additional effort to improve LCA
results. Instead of the commonly used simulation method, this study develops a new parameter
uncertainty estimation and visualization method focusing on the uncertainty across multiple
data sources for each inventory sector. The inventory flow of energy consumption was chosen
due to its high data availability, and because it is fundamental to estimating other inventory
categories such as greenhouse gas emissions. In addition, energy consumption is one of the
most important categories to stakeholders, due to financial and global scarcity concerns:
industrial stakeholders want to understand their fuel expenses each year and make decisions
based on the available information, so the uncertainty of the result is important for making
robust decisions. The general
method used for energy consumption in 2002 EIO-LCA model could be applied to other
inventory categories, and other matrix-based LCA models. See Advanced Material Section 3
for more detail on the methods used.
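The core of the range approach can be sketched with hypothetical numbers: each sector has several candidate energy intensities from different sources, and bounding results are formed from the extremes. This Python sketch simplifies the paper's method by treating each sector's source choice independently:

```python
# Hypothetical candidate energy intensities (TJ/$M) for three sectors,
# each value drawn from a different data source/assumption (not real data)
candidates = {
    "sector A": [8.0, 10.0, 14.0],  # default assumed to be the first entry
    "sector B": [3.0, 3.5],
    "sector C": [1.0, 1.2, 0.8],
}
x = {"sector A": 1.5, "sector B": 0.4, "sector C": 2.0}  # outputs, $M

# Default, lower-bound, and upper-bound totals across the supply chain
default = sum(vals[0] * x[s] for s, vals in candidates.items())
low = sum(min(vals) * x[s] for s, vals in candidates.items())
high = sum(max(vals) * x[s] for s, vals in candidates.items())
print(default, low, high)
```

The actual study constructs entire minimum and maximum R matrices and propagates them through the total requirements matrix; this sketch only shows the bounding idea.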

From the data sources and with various assumptions, multiple values were calculated for each
sector. For the energy consumption category in the 2002 EIO-LCA model, data from 6
publicly available sources from government agencies were used:

• U.S. Bureau of Economic Analysis (US BEA)

23 This section is excerpted / modified from the published paper Chen (2018). Reference provided at end of section.


• U.S. Census Bureau (Census)

• U.S. Department of Energy, Energy Information Administration (US EIA)

• U.S. Environmental Protection Agency (US EPA)

• U.S. Department of Agriculture (USDA)

• National Transportation Research Center (NTRC).

Figure 8-6 shows a high-level summary of the different assumptions and data sources used in
the estimation of energy use (USDA 2002; Census 2007; MECS 2002; NTRC 2014; EIA 2004).
Each distinct color represents an energy/$ result (i.e., a different R matrix value for a sector)
calculated from different data sources and/or assumptions.

Figure 8-6: Summary of values from different assumptions and data sources

EIO-LCA Uncertainty results for single sectors

Sector No. 120, Petrochemical manufacturing, is used to demonstrate the uncertainty range results.
Figure 8-7 shows the results for total energy consumption as well as separated by fuel. The
consumption values are based on $1 million of final demand for sector 120. Since IO-LCA
models are linear, the relative uncertainty for a different final demand would be the same. In
this case, the sector's total energy consumption varies from 21 TJ to 62 TJ, a range of -50%
to +40% compared to the default value (42 TJ). The graph shows that the uncertainties of
natural gas and petroleum usage are the major contributors to the large discrepancy in total
energy consumption (and they are also estimated to be the highest fuel inputs in the total).


The causes of the uncertainties are also visible in Figure 8-7. The red marks are the values
calculated from an R matrix with values from the Use Table; all of the red marks are larger
than the value calculated from the default R matrix (black asterisk). This indicates that the
values in the Use Table are a major contributor to the larger values in the ranges. In addition,
the large discrepancy in total energy consumption is caused by the large range in the
petroleum usage category, and that large range is itself caused by the Use Table. In contrast,
the results calculated from Census data are close to the maximum value only when maximum
values are chosen as substitutes (blue circles in the figure), and likewise for the minimum and
default values. This suggests that only a small proportion of the data are available from the
Census; the majority of the data are substituted, so the results are similar to those calculated
from the upper bound, lower bound, or default values.

Figure 8-7: Bounding results for sector No. 120, based on total energy consumption and separate fuel
consumptions. All consumption values are based on $1 million of output of sector No. 120.
NG = natural gas, Petrol = petroleum products, N-F-Elec = non-fossil fuel electricity.

As a hotspot analysis tool, IO models like the EIO-LCA model deterministically list the
highest sectors across the supply chain for an industry. Figure 8-8 shows the ranges of the top
15 energy-intensive sectors for $1 million of sector 120 output, sorted by the values in the
current deterministic R matrix used in EIO-LCA. As can be seen, sectors 120 (Petrochemical
manufacturing), 115 (Petroleum refineries), and 126 (Other basic organic chemical manufacturing) are the
top three energy-intensive sectors, contributing an estimated 64% of the total energy consumption.
These three sectors are also major contributors to the uncertainty of total energy consumption;
however, the rankings of the top five energy-intensive sectors can change when uncertainties are


considered. The red symbols for the top 3 sectors show that the Use Table is the data source
for the high end of the ranges. As these sectors belong to Petroleum Product Manufacturing
category, it indicates that the Use Table provides greater energy consumption values for some
petroleum products. Similar conclusions can be made for values from CBECS: CBECS
provides relatively smaller values for petroleum products.

Figure 8-8: Bounding results for sector No. 120, based on total energy consumption. Numerical labels
represent the 428 sectors. The lower part of the graph shows the top 15 energy-intensive sectors for
$1 million of output of sector No. 120.

While Figure 8-8 suggests sectors with high default values also have high uncertainty for Sector
120, that is not a general rule – uncertainty varies across many dimensions. Figure 8-9 shows
the uncertainty ranges of all 428 sectors (x axis) in all 12 R matrices, based on each sector’s
percentage change compared to its default value. Note that the positive side of the x-axis is
transformed to a log scale for legibility. Results suggest that, in general, the sectoral uncertainty
for direct and indirect energy consumption in the R matrix is approximately ±50% overall. In
some extreme cases, the values reach more than 40 times the default value. For example, the
value for Petroleum lubricating oil and grease manufacturing (No. 118) varies from 3 to 110
TJ/M$, and Hospitals (No. 338) varies from 1 to 33 TJ/M$. This outcome demonstrates how
different data sources and assumptions can produce a very large range for some sectors, a
range that current deterministic models do not express.


Figure 8-9: Bounding results of the R matrices for all 428 sectors, based on percentage changes
compared with the default value; a base-10 log scale is used for positive values.

This uncertainty range analysis suggests that the uncertainty in the R matrix can be large. Given
the large uncertainties, using only a single data source for a sector in the R matrix (as typically
done in IO-LCA models) ignores important variation in the system.


Figure 8-10: Bounding results of the B vectors for all 428 sectors, based on percentage changes
compared with the default value

Uncertainties of the B matrix

Figure 8-10 shows the uncertainty ranges of the B matrices, based on the energy consumption
(TJ) per million dollars of output (from the R matrix ranges above) for each of the 428 industry
sectors. The results were calculated based on the total economic output in 2002 for each
sector. The overall uncertainty in total energy consumption for a sector varies from about
-40% to +40%; however, in the most extreme cases, the result is approximately 5 times larger
than the default value. The percentage changes are smaller than those in the R matrix,
especially for the extreme cases; given the interconnectedness of supply chains, this confirms
the cancellation effect discussed in Lenzen (2001): in IO-LCA models, the corresponding
relative errors in the R matrix cancel out in the final results. It also shows the combined effect
of uncertainty in matrix-based models: the percentage changes in the B matrix are less
scattered than the changes in the R matrix.
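The cancellation effect can be demonstrated with a small numeric sketch: independent relative errors in individual intensity values partly offset one another when aggregated through the supply chain. All values below are illustrative:

```python
# Sketch of the cancellation effect: independent relative errors in
# individual R entries partly cancel when aggregated across a supply
# chain. Values are illustrative, not from the model.
r_default = [10.0, 5.0, 2.0, 8.0]     # per-$M energy intensities
x = [1.0, 2.0, 0.5, 1.0]              # supply-chain outputs ($M)
rel_err = [0.30, -0.25, 0.20, -0.30]  # one draw of mixed-sign errors

r_perturbed = [r * (1 + e) for r, e in zip(r_default, rel_err)]
total_default = sum(r * xi for r, xi in zip(r_default, x))
total_perturbed = sum(r * xi for r, xi in zip(r_perturbed, x))

agg_rel_err = total_perturbed / total_default - 1
max_entry_err = max(abs(e) for e in rel_err)
# In this draw, the aggregate relative error is much smaller in
# magnitude than the largest entry-level error
print(agg_rel_err, max_entry_err)
```

When errors all share a sign (as with the Use Table feedstock issue discussed below), they reinforce rather than cancel, which is why systematic source differences matter more than random noise.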

These different patterns of uncertainty are caused by discrepancies among values from
different data sources. Investigating the uncertainties of the different data sources gives a
clearer picture of where the uncertainties occur. Using this information, sectors that have
large uncertainties can be identified, and adjusted after further investigation, in order to
reduce the uncertainty in the model.


Uncertainty values re-evaluation

The existence of large discrepancies for some of the sectors is one of the novel results found
using the uncertainty range method. Tracking the reasons for the discrepancies can
provide decision makers with a clearer sense of how or when to use the various data given
the uncertainties. An example is the values shown in Figure 8-8: larger values were calculated
from the Use Table (red) while smaller values were calculated from CBECS (light blue). A
possible explanation is the limitation of the data source. Values provided by the Use Table
were based on the purchase of fuels, rather than energy consumed in production phases. Thus,
these purchases could include fuel used as both energy and feedstocks. Petroleum product
manufacturing industries have a considerable amount of inputs from petroleum refineries as
feedstocks; however, without additional detail, all the purchase values were assumed to be for
energy consumed in production. Thus, the results from the Use Table may be larger due to
misinterpretation; the consequential uncertainty can be reduced when better-documented data
are provided (we note that MECS separates fuel and feedstock usage but is more aggregated).
A similar conclusion can be made for the results calculated from CBECS: smaller values were
calculated because only energy consumptions related to buildings such as natural gas for heat
and electricity were counted.

A more extreme case is the uncertainty in the Coal mining sector. The Coal mining sector uses
data from the Census and the Use Table, and more than 5 data points were gathered for each fuel
type. In the lower and upper bound evaluation, the total energy consumption for the Coal
mining sector in the R matrix varies from 1 to 58 TJ/M$, with a default value of 4 TJ/M$. The
value for the lower bound reflects the total coal consumption for the Coal mining sector
reported by the Census, while the value for the upper bound was estimated via the Use Table.
Comparing the evaluation from the Use Table with other data sources, a possible reason for
the discrepancy could be that coal purchases between coal companies for beneficiation
services were counted as an energy source in the Use Table.
Creators of IO-LCA databases may use such data sources without adjustment or
accommodation, and thus users of the data may not be aware of the reasons that lead to such
large deviations. In this case, if the result is used with caution, the decision impact of
uncertainty can be reduced.

The two examples show that even reliable data sources can lead to large outliers, giving
significantly different results that lead to large uncertainty. As discussed, the large discrepancy
between reliable data sources are typically caused by different assumptions used in reporting
the data. Using other empirical estimation methods such as the pedigree matrix approach, the
uncertainty of the Coal mining sector would be estimated at less than 40% regardless of which
data source is used, as they are all reliable first-hand data sources. The unreasonably large energy
consumption value due to the misinterpretation of data would be ignored.


Decision making and screening under uncertainty


The uncertainty ranges estimated for the EIO-LCA model can improve the utility of this
source of information to assist with LCA decision making. Adding uncertainty allows for
screening tools that ‘screen’ based on both the magnitude of effect as well as uncertainty,
which allows a two-dimensional hotspot analysis. The first dimension relates to which are the
likely most important effects, and the second is how relatively uncertain they are. Either or
both could be indicators of where additional data (e.g., primary data or process-specific data)
could be useful in subsequent effort. Three case studies based on previously published LCA
work are shown to demonstrate how the information in the uncertainty range approach can
be used. Case study 1 shows how the uncertainty result can help robust decision-making in
LCA studies. Case study 2 shows that decisions can be totally overturned when the uncertainty
result is considered. Case study 3 shows how rankings provided by the hotspot analysis in
LCA tools are changed considering the uncertainty and how decisions can be affected.

Case Study 1 – Energy consumption of plastic cup and paper cup

In the early days of motivating LCA for use in personal decisions, such as the paper vs. plastic
debate (Hocking 1991), Lave and colleagues compared the toxic releases and energy use
(electricity only) of plastic and paper cups using a 1987 EIO-LCA model (Lave 1995). Using
price assumptions, they estimated a plastic cup consumed almost 50% less electricity (4,400
kWh vs. 8,600 kWh) than a paper cup. Using our uncertainty range results, total energy
consumption, not just electricity, for a plastic cup and a paper cup varies from 0.3 TJ to 0.7
TJ, and from 0.3 TJ to 0.4 TJ, respectively. When the uncertainty of the comparison is
considered, the original conclusion changes. Instead of plastic cup production clearly using
50% less energy, the overlapping ranges of possible energy use values suggest the two cups
could use about the same amount of energy, or the paper cup could even use less. While the
lack of a simple conclusion on which of the two materials is better may seem like a step
backward, it more appropriately represents the challenge of the decision. The uncertainty may
be reduced by finding primary or process-specific data.
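One simple way to operationalize this kind of comparison is an interval-overlap test: if the two ranges overlap, the screening result does not support a robust winner. A sketch using the cup ranges quoted above (in TJ):

```python
def ranges_overlap(lo1, hi1, lo2, hi2):
    """True if the intervals [lo1, hi1] and [lo2, hi2] share any values."""
    return max(lo1, lo2) <= min(hi1, hi2)

# Energy use ranges from the text, in TJ
plastic = (0.3, 0.7)
paper = (0.3, 0.4)

print(ranges_overlap(*plastic, *paper))  # -> True: no robust winner
```

A True result is the signal that additional (e.g., primary or process-specific) data are needed before declaring one alternative better.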

Case Study 2 – Energy consumption comparison between asphalt and concrete pavement

In an LCA study, decisions can change significantly when the uncertainty of the LCA tools
is considered. To illustrate, an LCA study comparing the energy consumption of asphalt and
concrete pavement is re-evaluated based on the new uncertainty results. Hendrickson and
colleagues determined the energy consumption for one km of asphalt and concrete pavement
using the 1992 EIO-LCA model (Hendrickson, Lave, and Matthews 2010). The values were
updated for the 2002 EIO-LCA model. From the 2002 EIO-LCA model's deterministic
default values, the direct and indirect energy consumption for one km of asphalt and cement
concrete pavement were 4 TJ and 5 TJ, respectively, favoring the choice of asphalt on an
energy basis.


[Figure 8-11 is a bar chart comparing direct and indirect energy consumption (TJ) for 1 km of
concrete and asphalt pavement, with bars for Total, Coal, NG, Petrol, Biomass, and Non-Fossil
Electricity, and error bars showing the bounds.]

Figure 8-11: Results of direct and indirect energy consumption for 1 km concrete and
asphalt pavement; error bars indicate the upper and lower bound values

Considering the uncertainty, as shown in Figure 8-11 with error bars, the energy
consumption values for asphalt and concrete pavement have large ranges. The default values
show that one functional unit of concrete pavement on average consumes 1 TJ more energy
than asphalt pavement, mostly because of the larger coal consumption in producing concrete.
However, if the uncertainty results are considered, asphalt pavement could be nearly 5 times
more energy intense, overturning the simple deterministic conclusion. The large range in
total energy use for asphalt pavement is associated with the uncertainty of petroleum usage.
As mentioned previously, the Use Table provides larger petroleum purchase values for
petroleum product manufacturing industries due to feedstock use. The sector used for
evaluating asphalt pavement, Asphalt paving mixture and block manufacturing, is associated with
petroleum products, so the values provided by the Use Table have the same issue of including
feedstock as fuel consumption. The uncertainty can be reduced if better-documented data
are provided; however, with available information, no firm conclusion can be made regarding
the superiority of the two pavements.

Case Study 3 – Energy saving from structural steel reuse

Input-output based LCA models (and matrix-based LCA models in general) are often used as
hotspot screening tools that give decision makers quick and simple relative results. Rankings
of sectors by environmental impact help determine what and where to look for potential
impacts. Yeung and colleagues completed a hotspot analysis of
steel reuse (Yeung, Walbridge, and Haas 2015), applying rankings from EIO-LCA to help in


evaluating the impacts of reused steel. Given the purpose of the study, the rankings of energy
use for the steel and cement manufacturing sectors were crucial to the analysis.

If only deterministic results from EIO-LCA are considered, the iron and steel mills and
cement manufacturing sectors rank 3rd and 7th, respectively, in energy consumption. If the
uncertainty is considered, the rankings change, as shown in Figure 8-12. Using the upper
bound of the uncertainty range, the iron and steel mills sector changes slightly, but the
cement manufacturing sector is no longer in the top 10. If the hotspot analysis is used to
determine what to include in a detailed analysis, then an analysis based only on the default
values would miss the impacts of the paper mills and lighting fixture manufacturing sectors
that appear at the upper bound. Although these sectors may turn out to have limited impact
on the final results, the analyst considering all possibilities has a better picture and can
investigate their potential impacts.
Rank | Lower | Default | Upper
1 | Nonresidential manufacturing structures | Nonresidential manufacturing structures | Nonresidential manufacturing structures
2 | Power generation and supply | Power generation and supply | Power generation and supply
3 | Petroleum refineries | Iron and steel mills | Petroleum refineries
4 | Iron and steel mills | Petroleum refineries | Iron and steel mills
5 | Oil and gas extraction | Oil and gas extraction | Oil and gas extraction
6 | Cement manufacturing | Truck transportation | Clay and non-clay refractory manufacturing
7 | Truck transportation | Cement manufacturing | Paperboard Mills
8 | Other basic organic chemical manufacturing | Clay and non-clay refractory manufacturing | Other basic organic chemical manufacturing
9 | Clay and non-clay refractory manufacturing | Other basic organic chemical manufacturing | Paper mills
10 | Petroleum lubricating oil and grease manufacturing | Paperboard Mills | Lighting fixture manufacturing
Figure 8-12: Top 10 Energy Consumption Sectors for $1 Million of Output of the
Nonresidential manufacturing structures sector
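The ranking shift illustrated in Figure 8-12 amounts to re-sorting sectors under alternative values and comparing the resulting top-N sets. A Python sketch with hypothetical energy values (not the study's data):

```python
# Hypothetical default and upper-bound energy values (TJ) for a few
# sectors, to show how a hotspot ranking can change between bounds
default = {"Iron and steel mills": 9.0, "Cement manufacturing": 5.0,
           "Paper mills": 2.0, "Truck transportation": 4.0}
upper = {"Iron and steel mills": 12.0, "Cement manufacturing": 5.5,
         "Paper mills": 8.0, "Truck transportation": 6.0}

def top(values, n):
    """Sector names sorted by descending value, truncated to the top n."""
    return [s for s, _ in sorted(values.items(), key=lambda kv: -kv[1])][:n]

top_default, top_upper = top(default, 3), top(upper, 3)
missed = set(top_upper) - set(top_default)
print(missed)  # sectors a default-only screen would not flag
```

In this toy example, a default-only screen would drop a sector that the upper-bound ranking flags, mirroring the paper mills and lighting fixture cases discussed above.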

The method used in this study can be applied to other matrix-based models in future work,
including uncertainty-based screening. In addition, the current method could be improved by
considering all of the data points and their impacts on the model rather than only the
maximum and minimum R matrices. The data points could be defined with discrete
distributions and simulations could be conducted, providing results with distributions based
on real-life data.

Reference for this Section


Xiaoju Chen, W. Michael Griffin, H. Scott Matthews, “Representing and visualizing data
uncertainty in input-output life cycle assessment models”, Resources, Conservation, and
Recycling, Volume 137, October 2018, pp. 316-325. DOI:
https://doi.org/10.1016/j.resconrec.2018.06.011

290 Chapter 8: LCA Screening via Economic Input-Output Models

Advanced Material for Chapter 8 – Section 8


Uncertainty in Leontief Input–Output Equations: Some Numerical Examples


In this Advanced Material section, issues associated with the propagation of uncertainty through matrix-based methods are demonstrated by perturbing values in an IO transactions matrix. Such perturbations are useful when considering the effect of structured uncertainty ranges in a product system, to assess whether changes have significant 'ripple through' effects. In general, this method shows that making small changes in these matrices leads to varying levels of effects.


While the national economy is dynamic, there is considerable consistency over time. For example, an electricity generation plant lasts 30 to 50 years, and variations from year to year are small. Input–output coefficients are relatively stable over time. Carter (1970) calculated the total intermediate output in the economy needed to satisfy 1961 final demand using several different U.S. national input–output tables. The results varied by only 3.8% over the preceding 22 years:

Using 1939 Coefficients: $324,288

Using 1947 Coefficients: $336,296

Using 1958 Coefficients: $336,941

Actual 1961 Output: $334,160

Environmental discharges change more rapidly. Table 4-1 shows several impacts for the generation of $1 million of electricity as calculated by EIO-LCA using the 1992 and 1997 benchmark models. The economic transactions expected in the supply chain are comparable for the two periods, with only a 3% difference, even though the sector definitions changed over time. The 1997 benchmark separated out corporate headquarters operations into
its own economic sector; energy use and greenhouse gas emissions each declined from 1992
to 1997 by about 30%, suggesting that the sector and its supply chain became somewhat more
efficient and cleaner over time. However, the generation of hazardous wastes and the emission
of toxic materials increased. Both of these effects may be due to changes in reporting
requirements rather than changes in the performance of different sectors. In particular, the
electric utility industry was not required to report toxic releases in 1992. Users of the EIO-LCA model can use the different model dates to assess these types of changes over time for sectors of interest.

[Table 4-1] – INSERT from EIOLCA Book?


Tables of direct requirements and total requirements are often reported to six significant
figures. It is said that these tables allow one to calculate to a single dollar the effect on a sector
of a one million dollar demand. It should go without saying that we do not believe that we
have information that permits us to do this kind of arithmetic. Little advice is given to the
novice EIO analyst about how many significant figures merit attention. We have advised our
colleagues and students to be careful when going beyond two significant figures, and in this
book we restrict virtually all of our impact estimates to two significant digits. This paper
explores some ways to investigate systematically the uncertainty in Leontief input–output
analysis.

We can judge the results of using the input–output estimates by considering the effects of
errors or uncertainties in the requirements matrix on the solution of the Leontief system. Here,
we will generally assume that we know the elements of the final demand vector Y without
error. Uncertainty in the values of the elements of the total output vector X will result from
uncertainty in the elements of the requirements matrix A. Uncertainties in X will result from
the propagation of errors through the nonlinear operation of calculating the Leontief inverse
matrix.

The Hawkins-Simon conditions require that all elements in the A matrix are nonnegative and less than one, and at least one column sum in the A matrix must be less than 1. The determinant of the [I – A] matrix must be greater than zero for a solution to exist. For empirical requirements matrices, such as those reported by the Bureau of Economic Analysis (BEA) for the United States, values of the determinant of A are very small (on the order of 10^-12). Values of the determinant of the associated Leontief matrices are greater than 1.

We will use the U.S. 1998 BEA 9×9 tables for the direct requirements matrix A and the calculated Leontief inverse matrix to illustrate the meaning of the sector elements (see Table IV-1). The two tables are given below. Of the 81 terms in A, five are zero. The numbers in
any column of the A matrix represent the amount, or direct requirement, that is purchased in
cents from the sector-named rows to produce one dollar of output from the sector-named
column. For example, for a one dollar output from the construction sector, 29.8 cents of input
is required from the manufacturing sector. The column sum for the construction sector is 53.8
cents for all nine input sectors; this says that 1.00 – 0.538 = 46.2 cents is the value added by
the construction sector for one dollar of output. The values in the [I-A]-1 matrix are the total requirements, or the sum of the direct and the indirect requirements. Hence, from [I-A]-1 we see that a one dollar direct demand from the construction sector requires a total of 52.4 cents to be purchased from the manufacturing sector to cover both the direct and indirect requirements. The sum of the [I-A]-1 values in the construction column shows that one dollar of demand for construction results in $2.09 of economic transactions for the whole economy.

[Table IV-1] - INSERT
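These column-sum calculations are easy to reproduce. The sketch below uses a small hypothetical two-sector direct requirements matrix (illustrative values only, not the published 1998 table) to compute value added per dollar of output and the total-transaction (output) multipliers:

```python
import numpy as np

# Hypothetical 2-sector direct requirements matrix (illustrative values
# only; NOT the 1998 BEA data). Each column holds purchases per $1 of output.
A = np.array([[0.15, 0.298],
              [0.05, 0.10]])

# Value added per $1 of output = 1 - column sum of A
value_added = 1 - A.sum(axis=0)

# Total requirements (Leontief inverse); its column sums are the total
# economic transactions per $1 of final demand (the output multipliers)
L = np.linalg.inv(np.eye(2) - A)
multipliers = L.sum(axis=0)

print(value_added)   # value added per dollar of each sector's output
print(multipliers)   # total transactions per dollar of final demand
```

With the book's actual 9×9 table substituted for this toy A, the construction column would reproduce the 46.2 cents of value added and the $2.09 multiplier discussed above.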

Deterministic Changes in the [I–A] and [I-A]-1 matrices

Sherman and Morrison (1950) provide a simple algebraic method for calculating the
adjustment of the inverse matrix corresponding to a change in one element in a given matrix.
Their method shows how the change in a single element in A will result in the change for all
the elements of the Leontief inverse, and that there is a limit to changing an element in [A]
that requires that the change does not lead to A becoming singular. We use the 1998 9×9 A
and Leontief inverse matrices to demonstrate the numerical effect of changing the value in A
on the Leontief inverse matrix. We do not use the Sherman-Morrison equation for our
calculation, but instead use the functional term for calculating the inverse matrix in the
spreadsheet program Excel.
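Although we rely on spreadsheet re-inversion, the Sherman-Morrison update itself is easy to verify numerically. The sketch below (using a hypothetical 3×3 requirements matrix, not the 1998 BEA table) checks that the rank-one update of the Leontief inverse matches a direct re-inversion after a single element of A is changed:

```python
import numpy as np

# Hypothetical 3x3 direct requirements matrix (illustrative values only)
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.10, 0.30],
              [0.05, 0.10, 0.10]])
M = np.eye(3) - A
Minv = np.linalg.inv(M)            # the Leontief inverse

# Increase A[i, j] by delta; then M = I - A changes by -delta * e_i e_j^T
i, j = 1, 2
delta = 0.25 * A[i, j]
u = -delta * np.eye(3)[:, [i]]     # column vector
v = np.eye(3)[:, [j]]              # column vector

# Sherman-Morrison update of the inverse for the rank-1 change u v^T
Minv_new = Minv - (Minv @ u @ v.T @ Minv) / (1.0 + (v.T @ Minv @ u).item())

# Check against a direct re-inversion of the perturbed system
A2 = A.copy()
A2[i, j] += delta
assert np.allclose(Minv_new, np.linalg.inv(np.eye(3) - A2))
```

Note that every cell of `Minv_new` differs from `Minv`, which is exactly the "all elements change" behavior illustrated in the examples that follow.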

Example 1. The effect of changing one element in the A matrix on the elements in the Leontief
matrix.

If element A4,3 is increased by 25%, we want to know the magnitude of the changes in the Leontief inverse matrix. The original A4,3 = 0.298 in the 1998 table and the increased value is A4,3 = 1.25 × 0.298 = 0.3725. All other elements in A are unchanged. All calculated elements in the new Leontief matrix show some change, but the amount of change varies. For an increase of 25% in A4,3, the corresponding (4,3) element of [I – A]-1 increases by nearly the same amount. The construction column and the manufacturing row elements all change by the same percentage. In the Leontief matrix system, the sum of a column is called the backward linkage and the sum of a row is called the forward linkage. As a result of the 25% increase in the construction sector's direct requirement from the manufacturing sector, each backward linkage element of the construction sector increases by 0.17%. There is a similar increase in each element in the forward linkage of the manufacturing sector. Other sectors change by different amounts. In percentage terms, the changes for the new Leontief matrix compared to the original 1998 matrix are shown in Table IV-2.

[Table IV-2]- INSERT


Example 2. The effect of a small change in a single cell of A on the Leontief matrix.

The value of A4,3 is rounded from three decimal places to two, from 0.298 to 0.30. The relative change in the new Leontief matrix is small for all cells, and no cell has a positive change (see Table IV-3). The largest change, –0.6%, is in the (4,3) element of [I-A]-1.

[Table IV-3]- INSERT

Example 3. The effect of rounding all cells of A from three decimal places to two.

The changes in the new Leontief matrix are both positive and negative, and are larger than for the single-cell rounding change illustrated in the previous example (see Table IV-4). The (4,3) element of [I–A]-1 changes by –1.7% when all cells in A are rounded. Rounding all the cells in A to two decimal places results in large changes in many cells of the Leontief matrix. The largest negative change is –71.1%, in the (7,1) element, and the largest positive change is 54.1%, in the (9,2) element.

[Table IV-4]- INSERT
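The rounding experiment above can be reproduced in outline. The sketch below uses a hypothetical 3×3 matrix (the study itself used the 1998 9×9 table) and reports the cell-by-cell percent change in the Leontief inverse when every cell of A is rounded to two decimal places:

```python
import numpy as np

# Hypothetical 3x3 direct requirements matrix to three decimal places
# (illustrative values; NOT the 1998 9x9 BEA table used in the text)
A = np.array([[0.104, 0.218, 0.046],
              [0.153, 0.098, 0.298],
              [0.047, 0.112, 0.095]])
L = np.linalg.inv(np.eye(3) - A)

# Round every cell of A to two decimal places and recompute the inverse
A_round = np.round(A, 2)
L_round = np.linalg.inv(np.eye(3) - A_round)

# Cell-by-cell percent change in the Leontief inverse
pct_change = 100 * (L_round - L) / L
print(np.round(pct_change, 2))
print("largest absolute change: %.2f%%" % np.abs(pct_change).max())
```

Even for this small system, the changes are of both signs and vary by cell, mirroring the pattern reported for the 9×9 case.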

Modeling Changes in the [I–A] and Leontief matrices with Probabilistic Methods

The literature dealing with uncertainty in the Leontief equations is not extensive. The earliest work that we have found dealing with probabilistic errors is the PhD thesis of R.E. Quandt (1956). Quandt's (1958, 1959) analysis of probabilistic errors in the Leontief system
was limited by the computing facilities of the late 1950s. His numerical experiments were
confined to small (3×3) matrices. He developed equations to calculate expected values for
means and variances of the Leontief matrix based on estimates of these parameters for the [A]
matrix.

Quandt investigated changes in the Leontief equations by examining them in this form:

x = [I – A – E]-1 y (IV-3)


Quandt specified conditions on his errors E: each element [Aij + Eij] > 0, and for each column j the sum of the elements [Aij + Eij] must be less than 1; that is, the uncertain A elements satisfy the Hawkins-Simon conditions. His work examined eleven discrete distributions for E (eight centered at the origin and two skewed about the origin). The probabilities of these errors were also modeled discretely, with choices of uniform, symmetric, and asymmetric distributions.

For each distribution Quandt selected a sample of 100 3×3 matrices. Each sample set represented about 0.5% of the total population of 3^9 = 19,683 matrices. From this set of experiments he calculated the variance and the third and fourth moments of the error distributions and the resulting vector x. Quandt used a constant demand vector y for all his experiments. The mean values of x were little changed from the deterministic values, and the variance of A had little effect on the mean values of x.

Quandt concluded the following:

1. The skewness of the errors in A is transmitted to the skewness of the errors in the x vector.

2. The lognormal distribution provides a fairly adequate description of the distribution of the x vector elements, irrespective of the distribution of the errors in A.

3. One can use the approximate lognormal distribution to establish confidence limits for the elements in the solution x.

West (1986) performed a stochastic analysis of the Leontief equations with the assumption that elements in A could be represented by continuous normal distributions. He presents equations for calculating the moments of the elements in A. West's work is critically examined by ten Raa and Steel (1994), who point out shortcomings in his choice of normality for the A elements (mainly that elements in A cannot be less than zero) and suggest instead a beta distribution, limited to the interval between 0 and 1, to keep the elements in A positive.

Some Numerical Experiments with Stochastic Input A + E matrices

The examples presented in this section are constructed using Microsoft Excel spreadsheets
and @Risk software. They illustrate the ease with which we can study the effects of changes in the form of elements of A on the Leontief matrix and on some of the multipliers and linkages commonly used in Leontief analysis. Numerical simulation is easy, and results from more than
a thousand iterations are obtained quickly. Still, the critical issues are the formulation of good
questions and the interpretation of the results of numerical experiments. As we have pointed
out before, the lack of a detailed empirical database to support our assumptions about the
statistical properties of the elements in the direct requirements A matrix is the most important
limitation of this analysis.

For each of our numerical experiments, we compare the properties of the input A + E direct
requirements matrix and the output [I – A – E]-1 total requirements matrix, where E is an
introduced perturbation. The results of stochastic simulations for the [I – A –E] inverses are
compared to the deterministic calculation of [I – A] inverse. We report some representative
results for four scenarios.

1) In each scenario, the means of A are the 1998 values reported to three decimal
places.

2) Four types of input distributions are examined: a uniform distribution, two skewed triangular distributions (one positively and one negatively skewed), and a symmetric triangular distribution.

An Excel spreadsheet is constructed with the 1998 9×9 matrix A, and used to calculate the
Leontief inverse. @Risk uses the Excel spreadsheet as a basis for defining input and output
cells. The chosen probabilistic distribution functions can be selected from a menu in @Risk;
the number of iterations can be set, or one may let the program automatically choose the
number of iterations to reach closure. We expect changes from the results of the deterministic
calculation of the Leontief matrix from A; Simonovits (1975) showed that

Exp[(I – A)-1] > (I – Exp[A])-1 (IV-4)

where Exp is the expected value operator. We use the software to numerically simulate the
values of the Leontief matrix given an assumed distribution for A.
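Readers without @Risk can run the same kind of simulation in a few lines of NumPy. The sketch below (again with a hypothetical 3×3 A, not the 9×9 table) samples each entry of A from a symmetric triangular distribution with a low of zero, a mode at the nominal value, and a high of twice the mode, and compares the simulated mean of the Leontief inverse against the deterministic inverse, in the spirit of the Simonovits inequality:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3x3 A (illustrative values only); each entry is modeled as
# a symmetric triangular distribution: low = 0, mode = nominal, high = 2*mode
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.10, 0.30],
              [0.05, 0.10, 0.10]])

n = 5000
samples = np.empty((n, 3, 3))
for k in range(n):
    Ak = rng.triangular(np.zeros_like(A), A, 2 * A)  # one random draw of A
    samples[k] = np.linalg.inv(np.eye(3) - Ak)       # its Leontief inverse

mean_L = samples.mean(axis=0)                # simulated E[(I - A)^-1]
det_L = np.linalg.inv(np.eye(3) - A)         # deterministic (I - E[A])^-1

# Simonovits (1975): the simulated mean should sit at or above the
# deterministic inverse, element by element
print(np.round(mean_L - det_L, 4))
```

For this choice of A every sampled matrix satisfies the Hawkins-Simon conditions (all column sums of 2A are below 1), so the inverse always exists.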

Table IV-5 shows the results of our simulations for the manufacturing:construction element in A, namely A4,3. Each of the 81 values in A is iterated over 1,000 times for each simulation. Here only the distribution of values for one cell, the manufacturing:construction intersection, is reported.

[Table IV-5]- INSERT


The numerical simulations show that the mean value of the (4,3) element of [I-A]-1 for the symmetric uniform distribution is identical to the deterministic value for this sector pair of manufacturing:construction. The mean values for the two skewed distributions are both lower than the deterministic value, as is the mean value for the symmetric triangular distribution. The coefficient of variation (COV: the ratio of the standard deviation to the mean) of the (4,3) element of [I-A]-1 is smaller than the COV of A4,3 for all simulations except the uniform input distribution. Consistent with Quandt's conjecture, the skewness of the (4,3) element of [I-A]-1 increases in every case except the uniform distribution input. Additional work remains to show the patterns for the entire distribution of cells in A.

Energy Analysis with Stochastic Leontief Equations

The following table is representative of the 9×9 U.S. economy with nearly $15 trillion of total
transactions and the total value added, or GDP, of more than $8 trillion. We have included a
column called the percentage value added for this economy. If we think in terms of $100
million of value added, or final demand, for the U.S. economy, we can also think of this
demand in disaggregated sector demands of nearly $18 million for manufactured products,
more than $5 million of construction, etc.

For this energy analysis, we show three columns of energy data in mixed units, energy per $
million of total sector output. Hence, the manufactured products sector uses nearly 3.5 TJ of
total energy per $ million of output, 0.24 million kWh of electricity per $ million of output,
and 0.44 TJ of coal per $ million of output.

[Table IV-6]

The total energy use and the direct energy use for each sector are given by:

r = [R diagonal] x = [R diagonal] [I – A]-1 y (IV-5)

and

r direct = [R diagonal] [I + A] y (IV-6)


where r is a vector of energy use by sector, [R diagonal] is a matrix with diagonal cells equal
to the energy use per dollar of sector output and off-diagonal terms equal to zero, and A is the
requirements matrix.
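Equations IV-5 and IV-6 translate directly into matrix code. The fragment below uses hypothetical three-sector values for A, the energy intensities in [R diagonal], and the demand y (illustrative numbers, not the actual data of Table IV-6):

```python
import numpy as np

# Hypothetical 3-sector illustration of equations (IV-5) and (IV-6)
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.10, 0.30],
              [0.05, 0.10, 0.10]])
R = np.diag([3.5, 0.8, 1.2])      # energy use, TJ per $ million of output
y = np.array([10.0, 25.0, 15.0])  # final demand, $ million

# (IV-5): total energy use by sector, through the full supply chain
r_total = R @ np.linalg.inv(np.eye(3) - A) @ y

# (IV-6): direct energy use (the demanded sectors plus their first-tier inputs)
r_direct = R @ (np.eye(3) + A) @ y

print(r_total, r_direct)
```

Because (I – A)^-1 = I + A + A² + ... for a productive A, each element of r direct is necessarily no larger than the corresponding element of total r.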

Example 1. For this example, we use Excel and @Risk to build a model to calculate, in physical units, the uncertainty in both the direct and the total energy use for $100 million of final demand for the U.S. economy. The demand is distributed among the nine sectors in proportion to the distribution of value added in the economy. We assume we know the demand vector with certainty, and that we know the physical energy use with certainty. All uncertainty for this example is in A.

Assume that the entries in A may be represented by a symmetric triangular distribution with a low limit of zero, a mode equal to the three decimal place value reported by BEA, and a high limit of two times the mode. The mean of this distribution is equal to the mode. This is equivalent to saying that the coefficient of variation is constant for all entries, with a value of 0.41. Previously, we presented the results of simulations for this triangular distribution on the Leontief matrix. In this example, we examine the distributions of r direct and the total r.
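The claimed coefficient of variation can be checked quickly: for a triangular distribution on (0, 2m) with mode m, the mean is m and the standard deviation is m/√6, so the COV is 1/√6 ≈ 0.41 for any m. A short numerical confirmation:

```python
import numpy as np

# Symmetric triangular distribution: low = 0, mode = m, high = 2m.
# Analytically, sd = m / sqrt(6), so COV = 1/sqrt(6) ~ 0.41 regardless of m.
m = 0.298  # any positive mode value works; 0.298 echoes the A4,3 example
rng = np.random.default_rng(1)
x = rng.triangular(0.0, m, 2 * m, size=200_000)

cov = x.std() / x.mean()
print(round(cov, 3))   # approximately 0.41
```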

@Risk performed 5000 iterations to calculate the mean value and the standard deviation of
the energy use for each of the nine sectors for a $100 million increment of GDP proportionally
distributed across the economy. The uncertainty in the energy output for each sector is shown
by the standard deviation and the COV. The sum of the total energy use for the whole
economy is 730 TJ and the direct energy use is 563 TJ. The sector values for r direct are
smaller than the total r values, and the r values have more uncertainty than the r direct values
as shown by the COVs. The COVs for all sectors are smaller than the constant COV of 0.41
assumed for A. Direct energy use is lowest as a percentage of the total energy use for the
agricultural products and minerals sectors. For all other sectors, the direct energy use is more
than 70% of the total energy use.

[Table IV-7]

Summary

Uncertain values in the cells of the requirements matrix generate uncertain values in the cells of the total requirements, or Leontief, matrix. Three cases have been studied. Two deterministic cases are presented: in one case only a single value in A is modified, and in the second case all the values in A are changed. For a set of probabilistic examples we used Excel and @Risk to calculate the Leontief matrix for a uniform and three triangular distributions of A as input. The simulations show small effects on the mean values of the Leontief matrix and larger changes in the second and third moments.

An example of an energy analysis for a 9×9 sector model of the U.S. economy shows the
effect of uncertainty from A on the total energy use r and the direct energy use r direct for
each sector.


Chapter 9: Advanced Life Cycle Models


In this chapter, we define alternative approaches for LCA using advanced methods such as
process matrices and hybrid analysis. Process matrices organize process-specific data into
linear systems of equations that can be solved with matrix algebra, and represent a significant
improvement over process flow diagram-based approaches. Hybrid LCA models, those that
combine process and input-output based methods, offer ways to leverage the advantages of
the two methods while minimizing disadvantages. Three approaches to hybrid LCA modeling
are presented, with the common goal of combining types of LCA models to yield improved
results. The approaches vary in their theoretical basis, the ways in which the submodels are
combined, and how they have been used and tested.

Learning Objectives for the Chapter


At the end of this chapter, you should be able to:

1. Define, build, and use a process matrix LCI model from available process flow data.

2. Describe the advantages of a process matrix model as compared to a process flow


diagram based model and an input-output based model.

3. Describe the various advantages and disadvantages of process-based and IO-based


LCA models.

4. Classify the various types of hybrid models for LCA, and understand how they
combine advantages and disadvantages of process and IO-based LCA models.

5. Suggest an appropriate category of hybrid model to use for a given analysis, including
the types of data and process-IO model interaction needed.

Process Matrix Based Approach to LCA


In Chapters 5 and 8 we introduced process-based and IO-based methods as two approaches to performing life cycle assessment. The bottom-up process method presented in Chapter 5 (referred to as the process flow diagram approach) is a fairly limited application of the process method. It requires considerable time to iteratively find each needed set of process data and to follow the connections between processes. We found results by summing the effects from each included process in the diagram in a bottom-up manner. On the other hand, the IO-LCA methods presented a distinct benefit in delivering quick and easy top-down results by exploiting matrix math to invert and solve the entire upstream chain.


We can merge the concepts introduced in input-output analysis with the data from a process
flow diagram approach to create linear systems of equations that represent a comprehensive
set of process models known as a process matrix.

Now that you have seen both process and IO methods, you might have already considered a
process matrix-based model. Conceptually, a process matrix model incorporates all available
process data (whether explicitly part of the process flow diagram or not) into the system. The
process matrix approach yields results similar to what would be expected if we added more
and more processes to the process flow diagram. However, as we will see, the process matrix
approach is able to improve upon the bottom up process diagram approach, as it can model
the interconnections of all processes, and as in IO methods, will be able to fully consider the
environmental flows of all upstream interconnections. The process matrix approach has the
potential to produce some of the large boundary benefits of an IO model system but with data
from explicit (rather than average) processes.

Before going further, we use the linear algebra introduced in Chapter 8 to re-define process
data and models. Figure 9-1 shows a hypothetical system with two processes, one that makes
fuel and one that makes electricity.24 This example is similar to the main example of Chapter
5 that discussed making electricity from coal.

Figure 9-1: Process Diagrams for Two-Process System

Focusing on the purely technical flow perspective, process 1 takes the raw input of 50 liters of
crude oil, and process 2 takes an input of 2 liters of fuel. Likewise, the output flow arrows
show production of 20 liters of fuel and 10 kWh of electricity, respectively, in the two
processes (the emissions shown in the figure will be discussed later). In this scenario, the
functional units are the outputs, 20 liters of fuel and 10 kWh of electricity, since all of the
process data corresponds to those normalized values. Without any analysis we know that fuel
(process 1’s output) must be produced to produce electricity (process 2’s output).

As with any linear system, but especially for the types of analysis of interest in LCA, we need to consider alternative amounts of outputs needed, and thus create a general way of scaling our production higher or lower than the functional unit values above. That is, we do not merely have to be constrained to produce 20 liters of fuel or 10 kWh of electricity. Once a scaling factor is established, the output for any input, or the input for any output, can be found.

24 Thanks to Vikas Khanna of the University of Pittsburgh for this example system.

Within our process system, we initially consider only flows of product outputs through the
processes, e.g., fuel and electricity, not elementary flows. Thus, for now, we ignore necessary
crude oil input and the various emissions (again, we will consider these later). In such a linear
system we define a scaling factor vector X with values for each of the two processes, X1 and
X2, and the total net production across the system for each of the two outputs, Y1 and Y2.
Here, Y1 is the total net amount of fuel produced, in liters. Y2 is the total net amount of
electricity produced, in kWh.

We can define a sign convention for inputs and outputs such that positive values are for
outputs and negative values are for inputs (i.e., product output that is input to other processes
in the system). Given this framework and notation, we define the following linear system of
equations which act as a series of physical balances given our unit process data, with unit
production of fuel and electricity in rows via the two processes:

20 X1 - 2 X2 = Y1

0 X1 +10 X2 = Y2 (9-1)

where the first equation mathematically defines that the total amount of fuel produced is 20
liters for every scaled unit process 1, net of 2 liters needed for every scaled unit produced in
unit process 2. Likewise, the second equation defines that the total amount of electricity
produced is zero per scaled unit of process 1 and 10 kWh per scaled unit process 2. To scale
our functional unit-based processes (up or down), we would insert values for X1 and X2. In
general, these values could be fractions or multiples of the unit. If X1 = 1 and X2 = 1, we would
generate the identical outputs in the processes shown in Figure 9-1. If we wanted to make
twice as much fuel, then X1 = 2, which, for example would require 100 liters of crude oil. If
we wanted to make twice as much electricity as in the unit process equation (20 kWh), then
X2 = 2, requiring 4 liters of fuel input.

Similar to what was shown in Chapter 8 (and its Appendices), we use the generalized matrix
notation AX = Y to describe the system of Equations 9-1, as demonstrated in Heijungs (1994)
and Suh (2005). Now the matrix A, which in the process matrix domain is called the
technology matrix, represents the technical coefficients of the processes linking the product
systems together. In the system describing Figure 9-1 above,

A = | 20  -2 |
    |  0  10 |


Note the structure of the matrix. The functional units, representing the measured outputs of
each of the processes, are along the diagonal of A. The use of outputs from other processes
within the system are the off-diagonal entries, e.g., -2 shows the fuel (output 1) used in the
process for making electricity (output 2).

To solve for the required scaling factor to produce a certain net final production in the system,
the linear system AX = Y is solved as in Chapter 8, by rearranging the linear system equation
and finding the inverse of A:

AX = Y  ⇔  X = A-1Y (9-2)

In this example, the inverse of A is:

A-1 = | 0.05  0.01 |
      | 0     0.10 |

Thus, if we want to produce Y2 = 1,000 kWh of electricity, we can use Equation 9-2 to determine what the total production in the system needs to be. In this case it is:

X = A-1Y = | 0.05  0.01 | |    0 | = |  10 |
           | 0     0.10 | | 1000 |   | 100 |

which says that to make 1,000 kWh of electricity in our system of two processes, then from a
purely technological standpoint, we would need to make 10 times the unit process 1 of fuel
production (200 liters total) and we would need to scale the electricity generation process 2 by
a factor of 100. Within the system, of course, we would be making 200 liters of fuel, all of
which would be consumed as the necessary (sole) input into making 1,000 kWh of electricity.
Figure 9-2 shows this scaled up sequence of processes, including the dotted line ‘connection’
of the two processes. The processes are defined identically as those in Figure 9-1, but with all
values scaled by 10 for process 1 and by 100 for process 2.

Figure 9-2: Scaled-Up and Connected Two-Process System
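The scaling calculation above is a one-line linear solve. A sketch in Python/NumPy (used here purely as an illustration; any matrix tool works), using the values from Figure 9-1:

```python
import numpy as np

# Technology matrix from Figure 9-1: process outputs on the diagonal,
# product inputs to other processes as negative off-diagonal entries
A = np.array([[20.0, -2.0],   # fuel (liters): 20 out, 2 used by process 2
              [ 0.0, 10.0]])  # electricity (kWh): 10 out

Y = np.array([0.0, 1000.0])   # desired net output: 1,000 kWh of electricity

# Scaling factors X = A^-1 Y (np.linalg.solve avoids forming the inverse)
X = np.linalg.solve(A, Y)
print(X)   # scale process 1 by 10 and process 2 by 100
```

Swapping in a different Y immediately gives the scaling factors for any other net production target.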


We could also use the AX = Y notation for the linear system if we instead wanted to determine the total net output given a set of scaling factors. For example, if X = [10 ; 100] (our result from the example above) then:

AX = Y = | 20  -2 | |  10 | = |    0 |
         |  0  10 | | 100 |   | 1000 |

So if we want to make 10 times the unit production of process 1 (200 liters of fuel), and 100 times the unit production of process 2 (1,000 kWh of electricity), net production is only Y2 = 1,000 kWh of electricity, since all of the 200 liters of fuel produced in process 1 are consumed in process 2 to make electricity, resulting in a net of Y1 = 0 liters of fuel. This result (i.e., Y = [0 ; 1000]) is the same as used in the previous example.

So far, we have motivated the purely technological aspects of the simple two-process system. However, Figure 9-1 gives us additional information on the resource use and emissions performance of the two processes. We create an environmental matrix, B, analogous to Chapter 8's R matrix, to represent the direct per-functional unit resource use and emissions factors. The B matrix has a conceptually identical basis of flows per functional unit as in Chapter 8, except that instead of consistently being flows per million dollars, the units are flows per functional unit, which vary across the processes (e.g., per kg, per MJ, etc.).

E = BX = BA-1Y (9-3)

Again we use a sign convention where negative values are inputs and positive values are outputs. From Figure 9-1, process 1 uses 50 liters of crude oil as an input, and emits 2 kg SO2 and 10 kg CO2. Process 2 has no raw inputs (only the product input of fuel already represented in the A matrix), and emits 0.1 kg of SO2 and 1 kg of CO2, respectively. Thus the environmental matrix B for our system, where the rows represent the flows of crude oil, SO2, and CO2, and the columns represent the two processes, can be represented as:

B = | -50  0   |
    |   2  0.1 |
    |  10  1   |
Of course, the linear system behind B reminds us of the connection of inputs, outputs, and
emissions shown in Figure 9-1:

Ecrude = −50 X1 ; ESO2 = 2 X1 + 0.1 X2 ; ECO2 = 10 X1 + X2

Put another way, the elements of BA⁻¹ in Equation 9-3 represent total resource and emissions factors across the process matrix system, analogous to the total economy-wide emissions factors R[I - A]⁻¹ of an IO system (noting that the A matrices differ between the two systems).

Building on our example from above, we can estimate the environmental effects of producing the required amount of outputs by multiplying the BA⁻¹ matrix by our previously found vector Y = [0 ; 1,000]. The resulting E is:

Life Cycle Assessment: Quantitative Approaches for Decisions That Matter – lcatextbook.com
304 Chapter 9: Advanced Life Cycle Models

E = [-500 ; 30 ; 200]
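To make the arithmetic concrete, here is a minimal sketch in plain Python. This is our own illustration, not code from the book's spreadsheets; the small `solve` helper performs Gauss-Jordan elimination so the example needs no external library.

```python
# Sketch of the two-process system (Equations 9-1 and 9-3):
# solve AX = Y for the scaling factors X, then apply E = BX.

def solve(a, y):
    """Solve a x = y for a small dense square matrix via Gauss-Jordan."""
    n = len(a)
    m = [row[:] + [rhs] for row, rhs in zip(a, y)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))  # partial pivot
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

A = [[20, -2],   # process 1 (fuel): 20 L out per unit; 2 L used per unit of process 2
     [0, 10]]    # process 2 (electricity): 10 kWh out per unit
B = [[-50, 0],   # crude oil input (negative = input)
     [2, 0.1],   # SO2 emitted
     [10, 1]]    # CO2 emitted
Y = [0, 1000]    # net demand: 0 L fuel, 1,000 kWh electricity

X = solve(A, Y)                                       # process scaling factors
E = [sum(b * x for b, x in zip(row, X)) for row in B]
print(X)                          # [10.0, 100.0]
print([round(v, 6) for v in E])   # [-500.0, 30.0, 200.0]
```

The same computation is a one-liner with a linear algebra library; the explicit elimination is written out only to keep the sketch self-contained.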
While we have motivated this initial example with an intentionally small (two process) model,
it is easy to envision how adding additional processes would change the system. If we added a
third process, which potentially had interconnections to the original two processes, we would
merely be adding another dimension to the problem. The A matrix would be 3x3, and the X
and Y vectors would have an additional row as well. If we added no environmental flows, B
would also be 3x3. If we added flows (e.g., another fuel input or emission) then there would
be additional rows. The linear algebra does not become significantly more difficult. Depending
on the necessary scope, your system may end up with 5, 10, or 50 processes.

We generally use integer scaling factors and achieve integer results in the chapter examples. However, the linear algebra method can produce any real-valued inputs and outputs. Note that some processes may in fact only be able to use integer inputs (or only produce integer levels of output), in which case your results would need to be rounded to a nearby integer.

E-resource: “Chapter 9 Excel Matrix Math” shows all of the examples in this chapter in
a Microsoft Excel spreadsheet.

Connection Between Process- and IO-Based Matrix Formulations


The most important aspect of the process matrix approach to recognize is the similarity to how we solved EIO models. The matrix math (AX = Y ⇔ X = A⁻¹Y) is identical, differing only in that the role played by the 'A matrix' in the EIO notation is here played by I - A. If you look at the elements of the
technology matrix A in the process matrix domain, and think through its composition, a more
distinct connection becomes clear. As noted above, the diagonal entries of the technology A
matrix summarize the functional units of the processes collected within the system. If we were
to think only about the inputs into the process system, and/or collect an A matrix consisting
only of the values of our own technology matrix from available process data, the matrix may
not have any of those functional unit values – it would just contain data on the required inputs
from all of the processes in the system. We would have no need to specify a particular sign
convention for inputs, so we could include them as positive values. For the system of
equations 9-1, the adjusted A matrix with this perspective would be:

A* = [0, 2 ; 0, 0]

which would summarize the case where process 1 had no technological inputs from other
parts of the system (i.e., no input of fuel or electricity) and where process 2 had a requirement
of 2 liters of fuel. If we wanted to make productive use of this different process, we would
need to add in the functional unit basis of the system (otherwise we would have no way of
knowing how many units of output can be created from the listed inputs). In doing that, we
would need to create a diagonalized matrix containing the functional unit values of each of the
processes, which in this case is:

W = [20, 0 ; 0, 10]

and we would combine the information in these two matrices before inverting the result. W is
a matrix of positive values of the process outputs while A* is made of positive values for the
process inputs. The net flows are found as outputs minus inputs:

W + (-A*) = W - A*

and our modified matrix math system would be:

[W - A*]X = Y ⇔ X = [W - A*]⁻¹Y

Of course, combining W and A* in this way gives exactly the original A process matrix, which
is then inverted to find the same results as above.
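A quick sketch confirms the recombination: subtracting the positive input matrix A* from the diagonal output matrix W reproduces the signed technology matrix A of Equations 9-1 (illustrative Python with the example's matrices hard-coded).

```python
# Sketch: W (functional-unit outputs on the diagonal) minus A* (input
# requirements stored as positive values) recovers the technology matrix A.
W = [[20, 0], [0, 10]]
A_star = [[0, 2], [0, 0]]
A = [[w - a for w, a in zip(w_row, a_row)] for w_row, a_row in zip(W, A_star)]
print(A)  # [[20, -2], [0, 10]], the original process technology matrix
```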

The key part to understand is that this is exactly what is done in IO systems, but since the
system is in general normalized by a constant unit of currency (e.g., “millions of dollars”), all
of the functional units are already “1”, and thus the identity matrix I is what is needed as the
general W matrix above. Nonetheless, this exposition should help to reinforce the similarity
in derivation of the process based and IO-based systems.

Linear Systems from Process Databases


The simple two-process system above assumed values for inputs and outputs. But we may also
build up our linear system with data from processes in available databases. We could envision
the process flow diagram from Chapter 5, where we made electricity from burning bituminous
coal that had been mined and delivered to the power plant by train. This example used actual
data from the US-LCI database. In Chapter 5, we already saw how to build simple LCI models
from this process flow diagram. We could find the same answers by building a linear system.
Using the same notation as in our two-process example above, we could define the linear
system in the system of Equations 9-4. This system uses the US LCI database values from
Chapter 5 (Figure 5-7 through Figure 5-10), where bituminous-fired electricity generation is
process 1, bituminous coal mining is process 2, and rail transport is process 3. Recall from
Figure 5-7 that the values we were most concerned about were that a functional unit of 1 kWh
of bituminous coal-fired electricity required inputs of 0.442 kg of bituminous coal and 0.461
ton-km of diesel-fueled rail transport. Those values are prominent in the linear system.


1 X1 + 0 X2 + 0 X3 = Y1

-0.442 X1 + 1 X2 + 0 X3 = Y2

-0.461 X1 + 0 X2 + 1 X3 = Y3 (9-4)

For this system,


A = [1, 0, 0 ; -0.442, 1, 0 ; -0.461, 0, 1]

So in order to produce 1 kWh of electricity, we would need to produce the following in each
of the three processes:
X = A⁻¹Y = [1, 0, 0 ; 0.442, 1, 0 ; 0.461, 0, 1] [1 ; 0 ; 0] = [1 ; 0.442 ; 0.461]

which of course matches the values found in Chapter 5 (and which are obvious given the
absence of inputs of electricity, coal, or rail in two of the three processes). Considering only
fossil CO2 emissions,

𝐁 = [0.994 0 0.0189]

so, using E = BX, E ≈ 1.003 kg CO2, the same result reached in Chapter 5 (Equation 5-1).
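Because this A is lower triangular with ones on the diagonal, the solution can be sketched with simple forward substitution. This is an illustrative script using the coefficients quoted from Chapter 5, not an official US LCI calculation.

```python
# Sketch of the three-process system of Equations 9-4: forward substitution
# solves AX = Y row by row since A is lower triangular with a unit diagonal.
A = [
    [1, 0, 0],       # electricity generation (kWh)
    [-0.442, 1, 0],  # coal mining: 0.442 kg coal input per kWh
    [-0.461, 0, 1],  # rail transport: 0.461 tkm input per kWh
]
Y = [1, 0, 0]        # final demand: 1 kWh of electricity
X = []
for i in range(3):   # X_i = Y_i minus contributions of already-solved rows
    X.append(Y[i] - sum(A[i][j] * X[j] for j in range(i)))
B = [0.994, 0, 0.0189]  # kg fossil CO2 per functional unit of each process
E = sum(b * x for b, x in zip(B, X))
print(X)             # [1, 0.442, 0.461]
print(round(E, 3))   # 1.003 kg CO2 per kWh
```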

This example shows us that we could build linear system models based on process data, and
could include as many processes as we have time or resources to consider. Of course we can
use software tools like Microsoft Excel or MATLAB to manage the data and matrix math.
From the example in Chapter 5, we could expand the boundary to add data from additional
processes like refining petroleum, so as to capture effects of diesel fuel inputs. As we add
processes (and flows) we are just adding rows and columns to the linear system above. Beyond
adding rows and columns as we expand our boundary, we also generally add technical
coefficients to the A matrix that were not previously present (e.g., if we had data showing use
of electricity by the mine). We would thus be adding upstream effects that would likely not
have been modeled in a simple process flow diagram approach. The three-process example
above does not shed light on this potential, because there are no upstream coefficients that
are in the system that were not in our process flow diagram example in Chapter 5. There are
questions at the end of this chapter that assess such changes.

If building linear system models from the bottom up, we would eventually decide that we were
unlikely to add significantly more information by adding data from additional processes or
flows. The dimensions of the technology matrix A of our linear system would be equal to the
number of processes included in our boundary, and the dimensions of the environmental
matrix B would be the number of processes and the number of flows. In short, we would have
a matrix representation of our process flow diagram model, but with interconnected upstream
flows.

However, if we have access to electronic versions of all the process LCI modules from
databases, we can use them to build large process matrix models. Since databases like US LCI
and ELCD are publicly available, matrices and spreadsheet models can be built that by default
encompass data for all of the many interlinked processes and flows provided in the databases.
Many researchers and software tools incorporate external databases with the process matrix
approach (e.g., SimaPro). In the rest of this section, we explore construction and use of these
comprehensive process matrix models to facilitate rapid construction of LCI models. While the ecoinvent database itself is not publicly available, its licensees can download or build complete matrices representing all of its processes and flows.

Chapter 5 discussed the availability of unit process and system process level data in the
ecoinvent database and software tools. System processes are aggregated views of processes
with relatively little detail, and no connections to the other unit processes. Using them is like
using a snapshot of the process (i.e., where the matrix math has already been done and saved).
Using ecoinvent unit processes allows the full connection to all upstream unit processes, and
calculations involving them will ‘redo’ the matrix math.

While the US LCI database as accessed on the LCA Digital Commons website does not
directly provide A and B matrices, the matrices can be either built by hand using the
downloadable spreadsheets (see Advanced Material for Chapter 5), or by exporting the entire
matrix representation of the US LCI database from SimaPro (choose the library after
launching the software, then choose ‘Export Matrix’ option from the ‘File’ menu). The US
LCI database, as exported from SimaPro as of 2014, provides LCI data for 746 products and
949 flows. These 746 products are the outputs of the various processes available in US LCI.

Given that the US LCI database has information on 746 products, we could form an A matrix
of coefficients analogous to the linear system above with 746 rows and 746 columns, where
the elements of the matrix are the values represented as inputs of other product processes
within the US LCI database to produce the functional unit of another product’s process. For
example, the coefficients of our three-process US LCI example above would be amongst the
cells of the 746 x 746 matrix. Of course, the A matrix will be very sparse (i.e., have many blank
or 0 elements) since many processes are only technologically connected to a few other
processes. This matrix would be similar to the IO direct requirements matrix considered in
Chapter 8. Likewise, B can be made from the non-product input and output flows of the US
LCI database, resulting in a 949 x 746 matrix. This matrix again will be quite sparse since the
number of flows listed in a process is generally on the order of 10 to 20.
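One way to sketch such a sparse matrix is to store only the nonzero coefficients, for example as a dict of dicts keyed by process name. The train and coal coefficients below come from the earlier example; the diesel entry is a hypothetical coefficient added purely for illustration.

```python
# Sketch: a sparse store for a large process technology matrix, keeping only
# nonzero off-diagonal coefficients (diagonal functional units are implicit).
n = 746  # number of products/processes in the exported US LCI matrix
A_sparse = {  # column (producing process) -> {input process: coefficient}
    "Electricity, bituminous coal, at power plant/US": {
        "Bituminous coal, at mine/US": -0.442,
        "Transport, train, diesel powered/US": -0.461,
    },
    "Transport, train, diesel powered/US": {
        "Diesel, at refinery/US": -0.01,  # hypothetical coefficient
    },
}
stored = sum(len(inputs) for inputs in A_sparse.values())
print(stored, "nonzero entries stored vs.", n * n, "cells in the full matrix")
```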


For small-scale LCA projects, a Microsoft Excel-based process matrix model could suffice
and provide the same quantitative results, yet will not provide the graphical and other
supporting interpretation afforded by other LCA software tools. The process matrix models
will, in general, represent more technological activity than bottom-up process flow diagrams
because of the addition of many upstream matrix coefficients, and thus will generally estimate
higher environmental flows.

E-resources: The Chapter 9 folder shows two forms of the US LCI database represented
as a complete matrix of 746 processes in Excel. The main (larger file size) spreadsheet shows
all of the intermediate steps of building the matrices, including the raw exported data from
SimaPro, matrices normalized by functional unit, transposed, inverted, etc., on separate
worksheets. It also has a ‘Model Inputs and Tech Results’ worksheet where you enter
production values for processes and see the ‘direct’ (just the top-level technical requirements
for the product chosen – as would be shown in the LCI module of the chosen process), total
(X) using the inverted technology matrix, and E results. You can look at this spreadsheet and
its cell formulas, especially those involving array formulas, to see how such models can be
built from databases. Amongst the many features of this larger spreadsheet are that the
coefficients of the A and B matrices can be modified, and changes would ripple through the
model. A second spreadsheet (filename appended with “smaller”) is the same model, but with
just the final resulting matrices, without intermediate matrix math steps. It has the same
functionality, but is significantly smaller in size. It may be more appropriate to use for building
models that will not modify any of the A or B matrix coefficients.

These two spreadsheets work by entering a desired output demand Y for a process into the blue column of the 'Model' worksheet to estimate the direct and total technical requirements X and the environmental effects E, shown in yellow cells. Values entered must be pertinent to the listed functional unit of the product (i.e., check whether energy products have units of MJ or kWh). While the resulting values may be intimidating (746 x 949 entries), all possible detail is visible, which is not possible in other tools. The spreadsheet formats 'significant' results in red, in this case those greater than 0.000001, since imperfect matrix inversion leaves many very small values (less than 10⁻¹⁶) throughout the model, which can be treated as negligible.

Example: Estimating the life cycle fossil CO2 emissions of bituminous coal-fired
electricity using a process matrix model of the US LCI database.

We can estimate the total fossil CO2 emissions of making coal-fired electricity in the US using
the Microsoft Excel US LCI e-resource spreadsheets. The first step is determining the
appropriate model unit process and input value to use. The same product used in other US
LCI examples, Electricity, bituminous coal, at power plant/US (process number 416 of the 746 US LCI processes when sorted alphabetically), is chosen and an input value of 3.6 MJ (equal to 1 kWh)
is used. Figure 9-3 shows the ‘direct’ technical flows from this input, corresponding to the
seven direct inputs needed for this product in the US LCI data module for this process. Values
can be extracted from the US LCI spreadsheet model by copy/pasting the columns with the
process names (column A) and direct outputs (column E).

Flows prepended with “Dummy” in the US LCI database were not discussed in Chapter 5.25
In short, these are known technical flows for a process, but for which no LCI data are included
within the system boundary of the US LCI model (i.e., they are not connected to other
upstream flows in the matrix). They thus act only as tracking entries in the model.

Product Unit Flow


Transport, train, diesel powered/US tkm 0.461
Bituminous coal, at mine/US kg 0.442
Transport, barge, average fuel mix/US tkm 0.056
Dummy_Disposal, solid waste, unspecified, to unspecified treatment/US kg 0.044
Dummy_Disposal, ash and flue gas desulfurization sludge, to unspecified reuse/US kg 0.014
Transport, combination truck, diesel powered/US tkm 0.003
Dummy_Transport, pipeline, coal slurry/US tkm 0.002
Figure 9-3: Direct technological flows from 3.6 MJ (1 kWh) of Electricity, bituminous coal, at power
plant in US LCI Process Matrix

The values of X and E for all processes and flows are also shown in the spreadsheet. Figure
9-4 shows the elements of X with physical flows greater than 0.001 in magnitude. There are
18 more products upstream of electricity than in the ‘direct’ needs. While, as expected, the
process matrix values for the same physical products are larger in Figure 9-4 than in Figure
9-3, the differences are generally small. We can also see that the additional amount of
bituminous coal-fired electricity needed across the upstream chain within the process matrix
model is small (0.037 MJ).

The Air, carbon dioxide, fossil, column (number 231 out of 949 flows) shows the estimate of total
emissions of fossil CO2 across the entire process matrix, 1.0334 kg CO2 (with apologies for
the abuse of significant figures). While larger, this result is only marginally more than the result
from the process flow diagram approach. This is not surprising, though, because it is well
known that the main contributor of CO2 emissions in fossil electricity generation is the
combustion of fuels at the power plant, which was included in the process flow diagram. If
we were to choose a different product for analysis, we may see substantially higher
environmental flows as a result of having the greater boundary within the process matrix.

Note that the US LCI data provides information on various other carbon dioxide flows which
have not been included in our scope. There are two “Raw” flows of CO2 (as inputs), as well
as 4 other air emissions. The only other notable one of these is the Air, carbon dioxide, biogenic
flow, which represents non-fossil emissions, such as from biomass management.

25 These flows are now referred to as “CUTOFF” processes, not “dummy”. The text and spreadsheet examples still say “Dummy”.


Process    Functional Unit    Output Value (X)
Electricity, bituminous coal, at power plant/US MJ 3.637
Transport, train, diesel powered/US tkm 0.466
Bituminous coal, at mine/US kg 0.447
Dummy_Disposal, solid waste, unspecified, to underground deposit/US kg 0.105
Electricity, at grid, US/US MJ 0.068
Transport, barge, average fuel mix/US tkm 0.057
Dummy_Disposal, solid waste, unspecified, to unspecified treatment/US kg 0.044
Transport, barge, residual fuel oil powered/US tkm 0.044
Transport, ocean freighter, average fuel mix/US tkm 0.037
Transport, ocean freighter, residual fuel oil powered/US tkm 0.033
Electricity, nuclear, at power plant/US MJ 0.015
Dummy_Disposal, ash and flue gas desulfurization sludge, to unspecified reuse/US kg 0.014
Transport, barge, diesel powered/US tkm 0.012
Electricity, natural gas, at power plant/US MJ 0.012
Crude oil, at production/RNA kg 0.008
Dummy_Transport, pipeline, unspecified/US tkm 0.007
Dummy_Electricity, hydropower, at power plant, unspecified/US MJ 0.005
Transport, ocean freighter, diesel powered/US tkm 0.004
Transport, combination truck, diesel powered/US tkm 0.003
Dummy_Transport, pipeline, coal slurry/US tkm 0.002
Electricity, residual fuel oil, at power plant/US MJ 0.002
Electricity, lignite coal, at power plant/US MJ 0.002
Natural gas, at extraction site/US m3 0.001
Natural gas, processed, at plant/US m3 0.001
Electricity, biomass, at power plant/US MJ 0.001
Figure 9-4: Total technological flows (X) from 3.6 MJ (1 kWh) of Electricity, bituminous coal, at power plant in
US LCI Process Matrix, abridged

Figure 9-5 shows the top products that emit CO2 in the upstream process chain of bituminous
coal-fired electricity. As previously discussed, the combustion of coal at the power plant results
in 97% of the total estimated emissions. The emissions from rail transport by train are another
1%. Thus, our original process flow diagram model from Chapter 5, which we motivated as a
simple example, ended up representing 98% of the CO2 emissions from coal-fired electricity
found in the more complex process matrix model.


Process    Emissions (kg)    Percent of Total
Total    1.033
Electricity, bituminous coal, at power plant/US 1.004 97.2%
Diesel, combusted in industrial boiler/US 0.011 1.0%
Transport, train, diesel powered/US 0.009 0.9%
Electricity, natural gas, at power plant/US 0.002 0.2%
Residual fuel oil, combusted in industrial boiler/US 0.002 0.2%
Transport, barge, residual fuel oil powered/US 0.001 0.1%
Natural gas, combusted in industrial boiler/US 0.001 0.1%
Gasoline, combusted in equipment/US 0.001 0.1%
Electricity, lignite coal, at power plant/US 0.001 0.1%
Transport, ocean freighter, residual fuel oil powered/US 0.001 0.1%
Bituminous coal, combusted in industrial boiler/US 0.001 0.0%
Figure 9-5: Top products contributing to emissions of fossil CO2 for 1 kWh of bituminous coal-fired
electricity. Those representing more than 1% are bolded.

One of the aspects of the ISO Standard that we did not discuss in previous chapters is the use
of cut-off criteria, which define a threshold of relevance to be included or excluded in a study.
For example, the cut-off may say to only include individual components that are 1% or more
of the total emissions. The Standard motivates the use of cut-off criteria for mass, energy, and
environmental significance, which would mean that we could define cut-off values for these
aspects for which a process is included within the boundary (and can be excluded if not). As
an example, if we set a cut-off criterion of 1% for environmental significance, and our only
inventory concern was for fossil CO2 to air, then we could choose to only consider the first
two processes listed in Figure 9-5, which comprise 98.2% of the total (or the first three, capturing 99.1%, if we round conservatively). Likewise, with a cut-off criterion of 0.1%, we
would only need to consider the first 10. Neither of these cut-off criteria significantly affects
our overall estimate of CO2 emissions.
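As a sketch, applying such a cut-off criterion is a simple filter over the contribution shares; the values below are transcribed from the top four rows of Figure 9-5.

```python
# Sketch: applying a 1% environmental-significance cut-off to the fossil CO2
# contributions of Figure 9-5 (shares in percent of the 1.033 kg total).
shares = {
    "Electricity, bituminous coal, at power plant/US": 97.2,
    "Diesel, combusted in industrial boiler/US": 1.0,
    "Transport, train, diesel powered/US": 0.9,
    "Electricity, natural gas, at power plant/US": 0.2,
}
cutoff = 1.0  # percent
kept = {name: pct for name, pct in shares.items() if pct >= cutoff}
print(len(kept), "processes retained, covering",
      round(sum(kept.values()), 1), "% of emissions")
```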

Cut-off criteria could be similarly applied for other effects. Mass-based cut-off criteria could
be evaluated by considering the mass of subcomponents of a product. In all cases, the cut-off
criteria apply to the effects of the whole system boundary, not to each product or process in
the system, so if the product system were fairly large, it is possible that many initially scoped
parts of the system could be excluded based on the cut-off criteria. For example, if the system
studied is the life cycle of an automobile, and the scope is fossil CO2 emissions (which would
likely be dominated by burning fuels in the use phase), then electricity use in production of
the vehicle may not be large enough to matter. In that case, all the results in the example above
would be excluded.

The flip side of the cut-off criteria decision is truncation error. LCI models are inevitably truncated when arbitrary or small boundaries are used. Thus, studies
showing the effects of truncation can compare the results from within a selected boundary
(e.g., a process flow diagram approach) as compared to those with a complete (e.g., process
matrix or IO system) boundary. In the example above, the truncation error is very small as
long as the ‘at power plant’ effects are within the boundary, but such errors can sometimes be
substantial as analysts define boundaries based on what they think are the important
components without knowledge of which are the most important. In the end, the process
matrix approach is yet another valuable screening tool, albeit one with substantial process-
based detail, to be used in setting analysis boundaries.

Truncation error in LCA models

The process flow diagram approach is particularly vulnerable to missing data resulting
from choices of analytical boundaries, also called truncation error. Lenzen (2000) reports
that truncation errors on the boundaries will vary with the type of product or process
considered, but can be on the order of 50% (i.e., 50% of the effects may be truncated). To
make process-oriented LCA possible at all, truncation of some sort is essential. From your
observations of the components of process matrix models, you may have noticed that they
are often dominated by manufacturing and energy production processes, with limited or
no data associated with support services. This matters less for studies with narrow scopes that exclude such ancillary services; however, such studies are then necessarily limited in value.

As you have seen in this chapter, the process matrix approach provides an innovative way to
use our process data models. The results from using the matrix approach will generally be
larger and more comprehensive compared to the simpler process diagram approach, in the
same way as when we used IO models. This is because the process flow diagram approach is
inherently limited by what is included in the boundary. To some extent, a process flow diagram
approach assumes by default that everything excluded from the diagram doesn’t matter. But
as we have seen with the process matrix (and IO) approaches, it is difficult to determine what
matters until you have considered these larger boundaries. We should generally expect IO
models to estimate the largest amount of flows, as they comprise the whole economy within
the boundary, including overhead and service sectors, which are rarely included in LCI
databases. It is for this reason that IO models are often used to pre-screen the needed effort
for a more comprehensive process-based analysis.

Note though that process matrix approaches based solely on information from databases may
not be significantly better than a process flow diagram model. Databases are often populated
with average data (like IO models) and if assessing a system with processes significantly
different than the average, the model will not yield relevant results. In those cases, it may be
beneficial to build a parallel process matrix that replaces values in the process matrix from the
database with updated information like primary data collected for the study.


Extending process matrix methods to post-production stages


In our examples above, the boundaries have only included the cradle to gate (including
upstream) effects. We can extend our linear systems methods to include downstream effects
as well, such as use and end-of-life management, since again this will merely involve adding
rows and columns, as well as additional coefficients, to the matrices.

In this case, let's build on the example from Figure 9-1, but with alternative descriptions of the processes involved. Assume that we want to make a lamp, a product requiring fuel and electricity to manufacture, and that it will be disposed of in a landfill by a truck. We had previously referred to fuel production as process 1, and electricity production as process 2. Now we add lamp production as process 3, and disposal as process 4. Specifically, let's assume that producing each lamp requires 10 liters of fuel and 300 kWh of electricity (and emits 5 kg CO2), and finally that disposing of the lamp at the end of its life consumes 2 liters of fuel (and emits 1 kg CO2).

Our linear system can be written as in system 9-5:

20 X1 - 2 X2 - 10 X3 - 2 X4 = Y1

0 X1 + 10 X2 - 300 X3 + 0 X4 = Y2

0 X1 + 0 X2 + 1 X3 + 0 X4 = Y3

0 X1 + 0 X2 + 0 X3 + 1 X4 = Y4 (9-5)

where

A = [20, -2, -10, -2 ; 0, 10, -300, 0 ; 0, 0, 1, 0 ; 0, 0, 0, 1]

In this case, when we specify a net output demand (Y) of one produced lamp and one disposed lamp, i.e., Y = [0 ; 0 ; 1 ; 1], the total production is:

X = A⁻¹Y = [0.05, 0.01, 3.5, 0.1 ; 0, 0.1, 30, 0 ; 0, 0, 1, 0 ; 0, 0, 0, 1] [0 ; 0 ; 1 ; 1] = [3.6 ; 30 ; 1 ; 1]

Thus, in order to produce 1 lamp (the output of the lamp unit process), 3.6 unit runs of the fuel process and 30 unit runs of the electricity process are also needed. If we only model CO2 emissions, then

𝐁 = [10 1 5 1]
and E = 72 kg of CO2. While the system has few interrelationships, it may not be as easy to validate this result as before. But if we think through our production, we need to make 72 liters of fuel (12 liters from producing and disposing of the lamp, and 60 liters from making electricity), 300 kWh of electricity (all for making the lamp), and run the lamp production and disposal processes once each. The total emissions of 72 kg comprise 36 kg of CO2 from fuel production, 30 kg from electricity production, 5 kg from lamp production, and 1 kg from disposal. Thus, our results make sense.
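The lamp system can be sketched the same way. Since this A is upper triangular, back substitution solves it directly; the values are the lamp example's assumed data.

```python
# Sketch of the four-process lamp system (Equation 9-5): back substitution
# solves AX = Y from the bottom row up, then E = BX gives total CO2.
A = [
    [20, -2, -10, -2],  # fuel production (liters)
    [0, 10, -300, 0],   # electricity production (kWh)
    [0, 0, 1, 0],       # lamp production
    [0, 0, 0, 1],       # lamp disposal
]
Y = [0, 0, 1, 1]        # demand: one lamp produced and one disposed
n = len(A)
X = [0.0] * n
for i in reversed(range(n)):  # back substitution
    X[i] = (Y[i] - sum(A[i][j] * X[j] for j in range(i + 1, n))) / A[i][i]
B = [10, 1, 5, 1]             # kg CO2 per unit run of each process
E = sum(b * x for b, x in zip(B, X))
print(X)  # [3.6, 30.0, 1.0, 1.0]
print(E)  # 72.0
```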

While not shown here, we could add another row and column to represent use of the lamp.

Advantages and Disadvantages of Process and IO Life Cycle Methods


Before considering other advanced LCA methods, we summarize the strengths and
weaknesses of process and IO methods. Both IO-LCA and process modeling have their
advantages and drawbacks. In principle, they can yield quite different results. If so, the analyst
would need to make a closer examination to determine the reasons for the differences and
which one or combination of the two gives the best estimate. Figure 9-6 compares the
strengths and weaknesses of the two types of LCA models.


Advantages of Process Models:
- Detailed process-specific analyses
- Specific product comparisons
- Process improvements, weak point analyses
- Future product development assessments

Disadvantages of Process Models:
- System boundary setting subjective
- Tend to be time intensive and costly
- New process design difficult
- Use of proprietary data
- Cannot be replicated if confidential data are used
- Uncertainty in data

Advantages of Input-Output Models:
- Economy-wide, comprehensive assessments (direct and indirect environmental effects included)
- System LCA: industries, products, services, national economy
- Sensitivity analyses, scenario planning
- Publicly available data, reproducible results
- Future product development assessments
- Information on every commodity in the economy

Disadvantages of Input-Output Models:
- Some product assessments contain aggregate data
- Process assessments difficult
- Difficulty in linking dollar values to physical units
- Economic and environmental data may reflect past practices
- Imports treated as U.S. products
- Difficult to apply to an open economy (with substantial non-comparable imports)
- Non-U.S. data availability a problem
- Uncertainty in data
Figure 9-6: Advantages and Disadvantages of Process and IO-Based Approaches

The main advantage of a process model is its ability to examine, in whatever detail is desired,
the inputs and discharges for a particular process or product. The main disadvantage is that
gathering the necessary data for each unit process can be time consuming and expensive. In
addition, process models require ongoing comparison of tradeoffs to ensure that sufficient
process detail is provided while realizing that many types of relevant processes may not have
available data. Even though process matrix methods are quick and comprehensive, their
boundaries still do not include all relevant activities. Process models improve and extend the
possibilities for analysis, but we often cannot rely wholly on process models.

The main advantage of an IO approach is its comprehensiveness: it will by default include all production-based activities within the economy. Its main disadvantage is its aggregated and
average nature, where entire sectors are modeled as having equal impact (e.g., no
differentiation between types of electricity). IO-based models simplify our modeling effort
and avoid errors arising from the necessary truncation or boundary definition for the network
of process models. An IO model’s operation at this aggregate level fails to provide the detailed
information required for some analyses.

Categories of Hybrid LCA Models


A natural goal is thus to develop hybrid LCA methods that combine the best features of
process and IO-based approaches. In general, hybrid approaches use either a process-based
or IO model as the core model, but use elements of the other approach to extend the utility
of the overall model.

While Bullard et al. (1978) were perhaps the first to discuss such hybrid methods, Suh et al.
(2004) categorize the types of hybrid models in LCA as follows: tiered, input-output based, and
integrated hybrid analysis.

In a tiered hybrid analysis, specific process data are used to model several key components
of the product system (such as direct and downstream effects like use phase and end of life),
while input-output analysis is used for the remaining components. If the process and IO
components of a tiered hybrid analysis are not linked, the hybrid total can be found by
summing the LCI results of the process and IO components without further adjustment
(though any identified double-counted results should be deducted). The point or boundary at
which the system switches from process to IO-based methods is arbitrary, but is often driven
by the resources and data available. This boundary should thus be selected carefully to reduce
model errors. Note that unlike many of the other methods discussed in this book, there are no
standard rules for performing the various types of hybrid analysis.

In tiered hybrid models, the input–output matrix and its coefficients are generally not
modified. Thus, analysis can be performed rapidly, allowing integration with design procedures
and consideration of a wide range of alternatives. Process models can be introduced wherever
greater detail is needed or the IO model is inadequate. For example, process models may be
used to estimate environmental impacts from imported goods or specialized production.

The most basic type of tiered hybrid model is one where the process and IO components are
not explicitly linked other than by the functional unit. An example of such a tiered hybrid
model could be in estimating the life cycle of a consumer product. In this case, one could use
an IO-based method, e.g., EIO-LCA, to estimate the production of the product (and
depending on which kind of IO model is used, the scope of this could be cradle to gate or cradle
to consumer). Process-based methods could then be used to consider the use phase and
disposal of the product.


Example 9-1: Tiered separable hybrid LCA of a Washing Machine

To estimate the flows of a washing machine over its life cycle, we could assume that an EIO-LCA
purchaser basis model is able to estimate the effects from cradle to the point at which the consumer
purchases the appliance. Likewise, we could assume that process data can be used to estimate the
effects of powering the washing machine over its lifetime. We use the data shown in Chapter 3 for
“Washing Machine 1”.

IO component: Assuming that the purchaser price of a new washing machine is $500, we could
estimate the fossil CO2 emissions from cradle to consumer via the 2002 US purchaser price basis
model in EIO-LCA (Household laundry equipment manufacturing sector) as 0.2 tons.

Process components: Using the US-LCI process matrix Excel spreadsheet, we can consider the
production and upstream effects of 10 years' worth (8,320 kWh) of electricity. Since the functional
unit is MJ, we convert by multiplying by 3.6 (roughly 30,000 MJ), and the resulting fossil CO2
emissions are 6,140 kg (6.1 tons).

Note: The US-LCI process matrix has no data on water production and landfilling, so these stages
are excluded from this example.

Total: 6.3 tons fossil CO2.
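Because the IO and process components of this separable model do not overlap, the hybrid total is a simple sum. A minimal sketch of the bookkeeping (in Python here, rather than the Excel tools used in the chapter; the numeric inputs are the ones from the example above):

```python
# Tiered (separable) hybrid total: the IO and process components do not
# overlap, so the hybrid result is simply the sum of the two parts.

io_co2_kg = 200         # EIO-LCA cradle-to-consumer result for a $500 machine (0.2 t)
use_kwh = 8320          # 10 years of use-phase electricity
use_mj = use_kwh * 3.6  # US-LCI electricity functional unit is MJ (~30,000 MJ)
process_co2_kg = 6140   # US-LCI process matrix result for the use-phase electricity

total_co2_t = (io_co2_kg + process_co2_kg) / 1000
print(round(use_mj), round(total_co2_t, 1))  # ~29,952 MJ; 6.3 t fossil CO2
```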

The description of tiered hybrid methods noted the possibility that the process and IO
components could both be estimating some of the same parts of the product system diagram.
In these cases, common elements should be dealt with by subtracting out results found in the
other part of the model. In Williams (2004), the author performed a hybrid LCA of a
desktop computer. The three main subcomponents of the hybrid model are shown in Figure
9-7, where most of the major pieces of a desktop computer system were modeled via process-
based methods, capital equipment and feedstocks were modeled with IO methods, and the
net “remaining value” of the computer systems not otherwise considered in the two other
pieces were also then modeled with IO methods.

The overall result from Williams (2004) is that the total production energy for a desktop
computer system was 6,400 MJ: 3,140 MJ from the process-sum components, 1,100 MJ from
the additive IO component, and 2,130 MJ from the remaining value. Common elements were
subtracted out and are not represented in the values listed above.


Figure 9-7: Subcomponents of Hybrid LCA of Desktop Computer (Source: Williams et al 2004)

Others have used tiered hybrid methods to consider the effects of goods and services produced
in other countries when using IO models, which would otherwise assume that the impacts of
foreign production are equal to those of domestic production.

In an input-output based hybrid analysis, sectors of an IO model are disaggregated into
multiple sectors based on available process data. A frequently mentioned early discussion of
such a method is Joshi (2000), where an IO sector was disaggregated to model steel and plastic
fuel tanks for vehicles. In this type of hybrid model, the process-level data allow us to modify
values in the rows or columns of existing sectoral matrices by allocating their values between
an existing and a disaggregated sector.

In Chapter 8, we discussed various aggregated sectors in the US input-output tables, such as
Power generation and supply, where all electricity generation types, as well as transmission and
distribution activities, reside in a single sector. If one could collect sufficient data, this single
sector could first be disaggregated into generation, transmission, and distribution sectors, the
generation sector then further disaggregated into fossil and non-fossil, and then perhaps
into specific generation types like coal, gas, or wind. Another example of disaggregation that
could be accomplished with sufficient process data is the Oil and gas extraction sector (which
could be disaggregated into oil extraction and gas extraction). Any of these would be possible
with sufficient data, but the effort is worthwhile only if the resulting model would be better
than what process-based methods alone could achieve.

Of course when disaggregating, all relevant matrices need to be disaggregated and updated,
and to use the disaggregated results in a model, the A matrix and R matrix values need to be


adjusted based on the process data. Since the A and R matrices are already normalized per
unit of currency, it is usually easier to modify and disaggregate make, use, or transaction
matrices for economic values (and then re-normalize them to A) and to disaggregate matrices
with total un-normalized flows to subsequently make R matrices.

Let us consider building a hybrid LCI model based on Example 8-3 in Chapter 8. There, a
two-sector economy was defined as follows (values in $ billions):

         1       2       Y       X
1      150     500     350    1000
2      200     100    1700    2000
V      650    1400
X     1000    2000
Assume that sector 1 is energy and sector 2 is manufacturing, and that we have process data
(not shown) to disaggregate sector 1 into sectors for fuel production (1a) and electricity (1b).
The data tell us that most of the $150 billion purchased by sector 1 from itself is for fuel to
make electricity, and how the value added and final demand are split between the fuel and
electricity subsectors. We can verify that the X, V, and Y values for sector 1 in the original
example equal the sums of the corresponding values across subsectors 1a and 1b in the
revised IO table.

            1a: Fuel   1b: Elec   2: Manuf       Y       X
1a: Fuel          15        100        300     110     525
1b: Elec          10         25        200     240     475
2: Manuf         100        100        100    1700    2000
V                400        250       1400
X                525        475       2000
The direct requirements matrix for the updated system (rounded to 2 digits) is:

    [ 0.03  0.21  0.15 ]
A = [ 0.02  0.05  0.10 ]
    [ 0.19  0.21  0.05 ]
Finding values for the disaggregated R requires more thought. Emissions of waste per $billion
were 50 g in the original (aggregated) sector 1 and 5 g for sector 2. Thus, the total emissions
of sector 1 were originally (50 g/$billion)($1,000 billion) = 50,000 g. If our available process
data suggest that emissions are 20% from fuel extraction and 80% from electricity, then these
are 10,000 g and 40,000 g, respectively. Given disaggregated sectoral outputs of fuel and
electricity of $525 and $475 billion, the waste factors for sectors 1a and 1b are 19 g and 84 g
per $billion, respectively (sector 2's factor of 5 g is unchanged). The disaggregated R matrix is:

    [ 19   0   0 ]
R = [  0  84   0 ]
    [  0   0   5 ]


Following the same analysis as in Example 8-3 ($100 billion of final demand into each of the
sectors in turn), the total waste generated across the economy is 2.5, 9.9, and 2.0 kg for each
of the (now 3) sectors, respectively. The new emissions for the disaggregated energy sectors
are both fairly different from the original aggregated sector 1's emissions of 6.4 kg. The
emissions from sector 2 are slightly less (2 kg compared to the previous 2.2 kg), since the
revised A matrix from our hybrid analysis splits the purchases of energy by sector 2 differently,
with relatively less dependence on the more polluting electricity sector.
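This disaggregated calculation can be verified in a few lines of matrix algebra. The sketch below (Python/NumPy here, rather than the spreadsheet tools used elsewhere in the book) builds A and R from the unrounded table values and reproduces the 2.5, 9.9, and 2.0 kg totals:

```python
import numpy as np

# Transactions (Z) and total outputs (x) from the disaggregated table,
# in $billions; rows/columns are 1a: Fuel, 1b: Electricity, 2: Manufacturing
Z = np.array([[ 15, 100, 300],
              [ 10,  25, 200],
              [100, 100, 100]], dtype=float)
x = np.array([525, 475, 2000], dtype=float)
A = Z / x                                  # normalize each column by its output

# Waste factors in g per $billion: 10,000 g over $525B (fuel),
# 40,000 g over $475B (electricity), and 5 g/$billion (manufacturing)
R = np.diag([10_000 / 525, 40_000 / 475, 5.0])

L = np.linalg.inv(np.eye(3) - A)           # Leontief inverse
totals_kg = []
for j in range(3):                         # $100B final demand in each sector
    y = np.zeros(3)
    y[j] = 100
    waste_g = R @ (L @ y)                  # waste by emitting sector, in grams
    totals_kg.append(round(waste_g.sum() / 1000, 1))
print(totals_kg)  # [2.5, 9.9, 2.0], matching the text
```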

This example is intended to demonstrate the method by which one could mathematically
disaggregate an A matrix with process data. Note that complete process data is not required,
and that even limited process data for some of the transactions, coupled with assumptions on
how to adjust other values, can still lead to interesting and relevant hybrid models. For
example, if disaggregating an electricity sector into generation, transmission, and distribution,
purchases of various services by the three disaggregated sectors may not be available.
Assumptions that purchases of services are equal (i.e., divide the original sector's value by 3),
or proportional to the outputs of the disaggregated sectors (i.e., distribute the original value
by weighted factors), are both reasonable. Given the ultimate purpose of estimating
environmental impact, it is unlikely that any of these choices on how to redistribute the effects
of a service sector would have a significant effect on the final results of the hybrid model.
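Either allocation assumption is mechanical to apply. A small sketch (the dollar values below are hypothetical, chosen only to illustrate the two rules):

```python
# Two ways to split an aggregate service purchase among disaggregated
# subsectors (all numbers hypothetical, for illustration only)
services_total = 300.0                       # $billion of services bought by the original sector
subsector_outputs = [500.0, 300.0, 200.0]    # outputs of the three new subsectors

# Assumption 1: equal split across the subsectors
equal_split = [services_total / len(subsector_outputs)] * len(subsector_outputs)

# Assumption 2: split proportional to subsector outputs
total_output = sum(subsector_outputs)
proportional_split = [services_total * o / total_output for o in subsector_outputs]

print(equal_split)         # [100.0, 100.0, 100.0]
print(proportional_split)  # [150.0, 90.0, 60.0]
```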

In the final category are integrated hybrid models, where there is a technology matrix that
represents physical flows between processes and an economic IO matrix representing
monetary flows between sectors. The general rationale for wanting to build an integrated
hybrid model is that IO data may be comprehensive but slightly dated, or too aggregated to
be used by itself. Use of process data can attempt to overcome both of these shortcomings.
Both the process and IO matrices in this kind of model use make-use frameworks (see
Advanced Material for Chapter 8), and are linked via flows at the border of both systems.
Unlike the tiered or IO-based approaches above, these models are called integrated because the
process-level data is fully incorporated into the IO model. Double counting is avoided by
subtracting process-based commodity flows out of the IO framework. Integrated models
require substantially more effort than the other two types of hybrid models because of the
need to manage multiple unit systems (physical and monetary) as well as the need to avoid
double counting of flows through subtraction. They may also require sufficiently detailed
estimates of physical commodity prices across sectors. In general, the goal of an integrated
hybrid model is to form a complete linear system of equations, physical and monetary, that
comprehensively describe the system of interest.

The general structure of such a model, analogous to Equation 8-5, is:


x1^m = z11^m + z12^m + ... + z1j^m + ... + z1n^m + y1^m   (mass)
x2^m = z21^m + z22^m + ... + z2j^m + ... + z2n^m + y2^m   (mass)
x3^$ = z31^$ + z32^$ + ... + z3j^$ + ... + z3n^$ + y3^$   (dollar)
  :
xn^$ = zn1^$ + zn2^$ + ... + znj^$ + ... + znn^$ + yn^$   (dollar)

which leads to an A matrix with mixed units, partitioned into four submatrices: one
representing purely physical flows, one representing purely monetary flows, and two
connecting matrices for the interacting flows from physical to monetary sectors and from
monetary to physical sectors. The models can be "run" with inputs (Y) of physical and/or
monetary flows, and the outputs are then likewise physical and monetary.
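To make the partitioned structure concrete, here is a toy sketch (all coefficients hypothetical, chosen only to show the mechanics): a single physical sector measured in kg and a single monetary sector measured in dollars, assembled into one mixed-unit A matrix and solved as usual.

```python
import numpy as np

# Toy mixed-unit A matrix (all coefficients hypothetical): one physical
# sector measured in kg and one monetary sector measured in dollars.
App = np.array([[0.05]])   # kg of physical input per kg of physical output
Apm = np.array([[0.02]])   # kg of physical input per $ of monetary output
Amp = np.array([[0.10]])   # $ of monetary input per kg of physical output
Amm = np.array([[0.20]])   # $ of monetary input per $ of monetary output

A = np.block([[App, Apm],
              [Amp, Amm]])

# Final demand: 100 kg of the physical commodity and $50 from the monetary sector
y = np.array([100.0, 50.0])
x = np.linalg.solve(np.eye(2) - A, y)   # x[0] is in kg, x[1] is in $
print(x.round(2))
```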

Hawkins et al. (2007) built an integrated hybrid model to comprehensively consider the
implications of lead and cadmium flows in the US. Data on material (physical) flows were
from USGS process-based data. The economic model used was the US 1997 12-sector input-
output model (the most aggregated version of the benchmark model).

E-resource: The Microsoft Excel spreadsheet for the Hawkins (2007) lead and cadmium
models is available on the course website as a direct demonstration of the data needs and
calculation methods for an integrated hybrid model (see the paper for additional model
details).

From the Hawkins model, Figure 9-8 shows direct and indirect physical and monetary flows
from an input of final demand of $10 million into the manufacturing sector in 1997. Since the
focus of this model is on lead, the left side of the graph summarizes flows (kg) through the
physical lead-relevant sectors of the mixed-unit model while the right side of the graph
summarizes monetary flows ($ millions) through the economic model. While the
manufacturing sector is highly aggregated in this model, it shows the significant flows of lead
through various physical sectors needed in the manufacturing sector. While a more
disaggregated IO model may provide higher resolution insights into the specific flows of lead
through manufacturing sectors, even this highly aggregated model required a person-year of
effort to complete, and pushed the limits of the available USGS physical flow data. Given the
specific data needs of these models, a more disaggregated model is likely not feasible with the
process data currently available.


Figure 9-8: Physical Lead and Monetary Output Required for $10 Million Final Demand of
Manufacturing Sector Output, 1997 (Source: Hawkins et al. 2007). Output directly required is
represented by the hashed areas, while dotted gray regions depict the indirect output.

Chapter Summary
Mathematics allows us to organize generic process data into matrices that can be used to create
process matrix models. These process matrix models share some of the desirable features of
input-output based models such as fast computation and larger boundaries while preserving
desirable process-specific detail. Process matrix models are generally implemented with
existing LCI databases, which only model representative or average processes. However, such
models can be adapted with data for specific processes as needed. As LCA needs evolve,
process and IO models can be combined in various ways resulting in hybrid LCA models that
leverage advantages of the two model types while overcoming some of the disadvantages.
Hybrid LCA models vary with respect to the amount of resources and data needed, integration,
and model development involved. All hybrid models will generally yield more useful results


than a single model alone. Now that we have introduced all of the core quantitative models
behind LCA, we can take the next step: impact assessment.

Note that the E-resources provided with this book do not provide spreadsheet forms
of the ecoinvent database, as it is a commercial product. However, if you have a license
for ecoinvent directly from the website, you can request the files needed to construct a process
matrix. If you have an ecoinvent sublicense through purchase of SimaPro, you can use the
“Export Matrix” option mentioned above to create your own Microsoft Excel-based
ecoinvent process matrix. Note that the dimensions of the A matrix for ecoinvent 2.0 will be
roughly 4,000 x 4,000, and the spreadsheet files will quickly become large (the B matrix
dimensions will be 1,600 x 4,000). If trying to use ecoinvent data in a process matrix form, it
is better to use MATLAB or other robust tools given issues in working with matrices of that
size in Excel (see Advanced Material at the end of this chapter).

References for this Chapter


Bullard C. W., Penner P. S., Pilati D.A., “Net energy analysis: handbook for combining process
and input-output analysis”, Resources and Energy, 1978, Vol. 1, pp. 267-313.

Hawkins, T., Hendrickson, C., Higgins, C., Matthews, H. and Suh, S., “A Mixed-Unit Input-
Output Model for Environmental Life-Cycle Assessment and Material Flow Analysis”,
Environmental Science and Technology, Vol. 41, No. 3, pp. 1024 - 1031, 2007.

Heijungs, R., “A generic method for the identification of options for cleaner products”,
Ecological Economics, 1994, Vol. 10, pp. 69-81.

Joshi, S., “Product environmental life-cycle assessment using input-output techniques”, The
Journal of Industrial Ecology, 2000, 3 (2, 3), pp. 95-120.

Suh, S., and Huppes, G., "Methods for Life Cycle Inventory of a Product", Journal of Cleaner
Production, 2005, Vol. 13, No. 7, pp. 687-697.

Suh, S., Lenzen, M., Treloar, G., Hondo, H., Horvath, A., Huppes, G., Jolliet, O., Klann, U.,
Krewitt, W., Moriguchi, Y., Munksgaard, J., and Norris, G., “System Boundary Selection in
Life-Cycle Inventories Using Hybrid Approaches”, Environmental Science and Technology, 2004,
38 (3), pp 657–664.

Williams, E., "Energy Intensity of Computer Manufacturing: Hybrid Assessment Combining
Process and Economic Input-Output Methods", Environmental Science and Technology, 2004,
38 (22), pp. 6166-6174.


End of Chapter Questions

Objective 1. Define, build, and use a process matrix LCI model from available
process flow data.

Objective 2. Describe the advantages of a process matrix model as compared to a process flow
diagram based model and an input-output based model.

1. Modify the two-process example from the Chapter (Eq. 9-1), and estimate E, if process 1
also requires 1 kWh of electricity as an input.

2. In the three-process linear system (Eq. 9-4), the coal mining process had no electricity input.
Full inputs to the US LCI database for Bituminous coal, at mine are shown in the table below.
Assume that the electricity used in the coal mining process is from bituminous coal-fired (not
grid) electricity. How different are X and E from the original example in the chapter that
produced 1 kWh of electricity? If you instead update the system to use grid electricity how
would you expect your E result to change and why?

Input                                                               Unit   Amount
Bituminous coal, combusted in industrial boiler                     kg     0.00043
Diesel, combusted in industrial boiler                              L      0.0088
Electricity, at grid, US, 2000                                      kWh    0.039
Gasoline, combusted in equipment                                    L      0.00084
Natural gas, combusted in industrial boiler                         m3     0.00016
Residual fuel oil, combusted in industrial boiler                   L      0.00087
Dummy, Disposal, solid waste, unspecified, to underground deposit   kg     0.24

3. Further expand the three-process system adjusted in Question 2 by adding the Bituminous
coal, combusted in industrial boiler process (and its input at the mine) from US LCI. The table
below shows an abridged list of relevant inputs and outputs for that process for a product
flow of 1 kg of combusted coal in a boiler. Assume other flows are outside of your scope.
What is your updated X and E across the new four-process system for producing 1 kWh?

Input                      Unit   Amount
Bituminous coal, at mine   kg     1

Output
Carbon dioxide, fossil     kg     2.63

4. The US LCI database contains many unit process models relevant to energy systems
modeling in the United States. Many of these models are physical flow models as opposed to
the economic models we have been discussing lately. In this question, we will create a
streamlined ‘physical flow’ process matrix model of energy production that incorporates the
production from several processes in the US LCI database (from a previous version, pre-SI
units). This problem will test whether you understand process matrix model theory.


We will streamline by only focusing on distillate oil, electricity, gasoline, natural gas, residual
oil, and coal. All other inputs will be ignored. Summarized from the US LCI database are the
following streamlined production functions:

0.375 pounds coal + 0.436 gallons of distillate oil + 9.61 kWh electricity + 0.0321 gallons
gasoline + 3.72 cubic feet natural gas + 0.161 gallons residual oil = 1000 pounds of coal

0.01 gallons distillate + 1.2 kWh electricity + 0.004 gallons gasoline + 49.6 cubic feet of natural
gas + 0.005 gallons residual oil = 1000 cubic feet of natural gas

13.7 kWh electricity + 32.1 ft3 natural gas + 0.589 gals residual oil = 30 gals. of distillate oil

26.4 kWh electricity + 61.7 ft3 natural gas + 1.13 gals residual oil = 65 gallons of gasoline

3.07 kWh electricity + 7.18 ft3 natural gas + 0.132 gals residual oil = 6.3 gallons residual oil

0.75 pounds of coal + 2.56 ft3 natural gas + 0.003 gals of residual fuel oil = 1 kWh of
electricity26

a) Using the production equations above, make a process matrix model to estimate the total
amount of energy (in BTU) needed to produce:

1) 5,000 kWh of electricity (equivalent for one US citizen in a year)

2) 230 gallons of gasoline (rough average per household driving for a year)

Report your A matrix in order of: coal, gas, distillate, gasoline, residual fuel, and electricity.
For consistency purposes, please only use the conversion factors given at the end of
this question.

b) Validate your results by comparing to data in the EIO-LCA 2002 Benchmark model (the
default model). Explain the differences you find.

Conversion Factors:

Distillate Oil: 138,700 BTU/gallon
Electricity: 3,412 BTU/kWh
Gasoline: 125,000 BTU/gallon
Natural Gas: 1,000 BTU/ft3
Residual: 150,000 BTU/gallon
Crude Oil: 18,600 BTU/pound

26 Note that some renewable and other sources that generate electricity in the US have been excluded from this
streamlined model; since coal + natural gas + residual oil make up only 71% of the US electricity fuel mix, each of the
NREL LCI numbers was scaled up by 100/71 to obtain the production equations above.


Coal: 12,000 BTU/pound
Distillate: 18,600 BTU/pound
Gasoline: 18,900 BTU/pound
Residual: 17,800 BTU/pound
Objective 2. Describe the advantages of a process matrix model as compared to a process flow
diagram based model and an input-output based model.

Objective 3. Describe the various advantages and disadvantages of process-based and IO-based
LCA models.

Use the E-resource of the 746 x 949 US LCI process matrix model for Questions 5-8. Be
careful not to over-write results of the files (perhaps do not save any of your changes).

5. What final demand of electricity (in $) is needed in the 2002 EIO-LCA producer price model
to yield total fossil CO2 results equivalent to those in the US LCI process matrix Excel
spreadsheet for Electricity, at grid, US? What final demand is needed to yield equivalent total
SO2 air emissions? Discuss how these results provide insight into the challenges of using each
type of model, and why each model type is limited.

6. Compare fossil CO2 sectoral contribution results of the 2002 EIO-LCA producer price
model Power generation and supply sector with the process contribution results of the US LCI
Electricity, at grid US process. Describe the differences in results of these two models, and how
your choice of model would affect results of an LCA study.

Objective 4. Classify various types of hybrid models for LCA, and understand how
they combine advantages and disadvantages of process and IO-based LCA models.

Objective 5. Suggest an appropriate category of hybrid model to use for a given analysis,
including the types of data and process-IO model interaction needed.

7. Estimate the fossil CO2 emissions of 1kg alumina, at plant/US using the US-LCI 746x949
process matrix model. How does your estimate of fossil CO2 emissions change if you apply a
cut-off criterion of 1%, 5%, and 10%? Given these findings and continuing to only be
concerned with fossil CO2, what might be an appropriate cut-off criterion here? What
conditions might apply to your answer?

[Use the full, not ‘smaller’ US LCI Excel model E-resource for Question 8]

8. Similar to question 2, update the US LCI process matrix spreadsheet so that the electricity
input used for Bituminous coal, at mine is Electricity, bituminous coal, at power plant (not Electricity, at
grid, US). Now for 1 kWh of Electricity, bituminous coal, at power plant, how different are X and
ECO2 from those in Figures 9-3 through 9-5 and from Question 2? Note: recall the US LCI
spreadsheet models electricity flow in MJ not kWh (3.6 MJ = 1 kWh)!




Advanced Material for Chapter 9 – Section 1 – Process Matrix Models in MATLAB
In this section, we build on the material presented in the chapter about process matrix models,
which have already been demonstrated in Microsoft Excel, by showing how to implement
them in MATLAB.

One of the benefits of using MATLAB is that it is well suited to performing a series of
consecutive matrix operations quickly and in real time, without needing to save versions of
normalized or inverted matrices for later use (as is required for the Excel version of the
model introduced in the chapter, leading to its large file size).

E-resource: In the online supplemental material for Chapter 9 is a ZIP file containing
matrix files and an .m file for the US LCI database (746 processes, 949 flows) that can be used
in MATLAB. The A and B matrices are identical to those used in the Excel US LCI model.

A subset of the code in USLCI_process_matrix_code.m is shown below; it is the MATLAB
implementation of the same US LCI process matrix model as in the US LCI process matrix
spreadsheets discussed in the chapter. Since the models were developed with the same
parameters exported from SimaPro and use the same algorithm, the results are identical.

% matrices assumed to be in workspace (USLCI.mat):
% USLCI_Atech_raw - technology matrix from exported matrix (A)
% env_factors - environmental coefficients in exported matrix (B)
% funct_units - row of functional units from exported matrix
clear all
load('USLCI.mat')

% make a "repeated matrix" with funct_units down the columns
funct_units_mat = repmat(funct_units, 746, 1);
% normalize the A matrix by functional units
norm_Atech_raw = USLCI_Atech_raw ./ funct_units_mat;
L = inv(eye(746) - norm_Atech_raw);  % this is the [I-A] inverse matrix
y = zeros(746, 1);
funct_units_env = repmat(funct_units, 949, 1);  % same as above, but with 949 rows
env_factors_norm = env_factors ./ funct_units_env;
co2fossil = env_factors_norm(231, :);  % row vector for the fossil CO2 flow

% as an example, enter a value into the y vector to be run through the model;
% the default example is 1 kWh into the bituminous coal-fired electricity
% process, which is process number 416 of 746
y(416, 1) = 3.6;  % functional unit basis is MJ; 3.6 MJ = 1 kWh
out = L * y;  % analogous to x = [I-A]^-1 * y
co2out = co2fossil * out;
co2outcols = diag(co2fossil) * out;
sum(co2outcols)  % running this script prints the sum of fossil CO2
                 % emissions throughout the upstream process matrix
This example code considers the effects of 1 kWh of Electricity, bituminous coal, at power plant, as
done in the chapter. As the comments in the code show, a value of 3.6 MJ is entered into the
vector y in row 416, the row index for the coal-fired electricity process in the US LCI model.

If we run the .m code either by double clicking it in the MATLAB window, or selecting it and
choosing the “Run” menu option, the result is 1.0334, which matches the Microsoft Excel
version of the model.

Like the US LCI Excel model, the intermediate and final matrix results can be viewed. While
all numerical results are visible, they are less user-friendly than in Excel, where the lists of
process names can be kept alongside the rows and columns; in MATLAB, you need to track
or memorize row and column indices separately. However, the MATLAB files consume
considerably less space because all intermediate matrices can be generated quickly while
running the code.
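The same algorithm ports readily to other environments. As a sketch, here is the core calculation in Python/NumPy (the function name and the tiny two-process system below are our own illustrative inventions, not part of the US LCI distribution):

```python
import numpy as np

def process_matrix_lci(A_raw, funct_units, B_raw, y):
    """Normalize raw technology (A_raw) and environmental (B_raw) matrices
    by their functional units, then solve x = (I - A)^-1 y and return
    (x, e): the total process outputs and total environmental flows."""
    A = A_raw / funct_units            # divide each column by its functional unit
    B = B_raw / funct_units
    n = A.shape[0]
    x = np.linalg.solve(np.eye(n) - A, y)
    return x, B @ x

# Toy two-process system (made-up numbers): producing 1 kWh of electricity
# consumes 0.5 kg of coal; mining 1 kg of coal consumes 0.02 kWh of electricity.
A_raw = np.array([[0.0, 0.02],   # kWh of electricity input to each process
                  [0.5, 0.0]])   # kg of coal input to each process
funct_units = np.array([1.0, 1.0])
B_raw = np.array([[1.0, 0.1]])   # kg fossil CO2 per functional unit (made up)

x, e = process_matrix_lci(A_raw, funct_units, B_raw, np.array([1.0, 0.0]))
print(x.round(4), e.round(4))
```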

End of Section Questions

Use the US LCI MATLAB environment E-resource (and writing or editing MATLAB code
as needed) to answer these questions.

1. Estimate the fossil CO2 emissions of 1 kg Alumina, at plant/US using the US-LCI 746x949
process matrix model.

2. Make changes to the process matrix so that the electricity input used for Bituminous coal, at
mine is Electricity, bituminous coal, at power plant (not Electricity, at grid, US). Now for 1 kWh of
Electricity, bituminous coal, at power plant, how different are X and ECO2 from those in Figures 9-3
through 9-5? Note: recall the US LCI data models electricity flow in MJ not kWh (3.6 MJ =
1 kWh)!


Advanced Material for Chapter 9 – Section 2 – Process Matrix Models in SimaPro

In the Advanced Material for Chapter 5, demonstrations were provided on how to find
process data in SimaPro. Here we show how the process matrix LCI results of a particular
process can be viewed in SimaPro – this is not the same as building a model in SimaPro, but
is an important first step. Recall that SimaPro uses the same process matrix-based approach
as shown for Microsoft Excel in the chapter.

Using the same steps as shown in Chapter 5, find and select the US LCI database process for
Electricity, bituminous coal, at power plant. Click the analyze button (shown highlighted by the
cursor in Figure 9-9 below).

Figure 9-9: Analyze Feature Used in SimaPro to View Data

The resulting window allows you to set some analysis options, as shown in Figure 9-10.

Figure 9-10: New Calculation Setup Window in SimaPro


If needed, change the amount from 1 to 3.6 MJ in the calculation setup window. Click the
Calculate button. In the resulting window (shown in Figure 9-11), click the “Process
contribution” tab. This shows the total technical flows from other products / processes
needed to make the chosen (3.6 MJ) amount of electricity. Note the default units checkbox
ensures the normally used units (e.g., kg) are displayed, otherwise SimaPro will try to maintain
3 digits and move to a higher or lower order unit (e.g., grams).

Figure 9-11: Results of Process Contribution Analysis for Process in SimaPro

You will see that these values are the same as those presented by the Microsoft Excel
spreadsheets for the US LCI database (shown in Figure 9-4), but SimaPro tends to maintain
three significant digits, so the results may be slightly different due to rounding. Clicking on the
inventory tab of the results window shows the E matrix results for all tracked flows (Figure
9-12). Substances, compartments, and units are presented. While not all are shown, the results
from the US LCI process matrix Excel spreadsheet would match those here. Note that
SimaPro shows the same value for fossil CO2 (1.03 kg/kWh).

Figure 9-12: Inventory Results in SimaPro

The final part to explore is the Network tab view. This tool creates a visualization of flows for
the entire network of connected processes (as summarized in the process contribution tab).
By default, SimaPro will truncate the Network display so as to reasonably draw the network
system without showing all flows. Figure 9-13 shows a default network diagram for fossil CO2,
with an assumed cut-off of about 0.09%. This cut-off can be increased or decreased to see
more or less of the network.

There will be more discussion of how network analysis works in Chapter 12.


Figure 9-13: SimaPro Network view of process outputs (excerpt)

Further discussion of modeling with SimaPro is in a later chapter, but the analysis of LCI and
LCA results uses this same analyze feature.


Advanced Material for Chapter 9 – Section 3 – Process Matrix Models in openLCA

As with SimaPro, openLCA also uses a process matrix approach behind the scenes of the tool.
Again, the focus of this section is on how to view LCI results.

To do this, start openLCA as described previously, and click on the “Product systems” folder
under the US LCI data folder. Choose the “Create a new product system” option. You may
name it whatever you like, and optionally give it a description. In the reference process
window, drill down through Utilities and then Fossil Fuel Electric Power Generation to find
the Electricity, bituminous coal, at power plant process. Keep both options at the bottom of the
window selected (as shown in Figure 9-14).

Figure 9-14: Creation of a New Product System in openLCA

You may then choose the calculate button at the top of the window (the green arrow with x+y
written on it) to do the analysis of the process, as shown in Figure 9-15.


Figure 9-15: Product System Information in openLCA

In the calculation properties dialog box, choose the “Analysis” calculation type, then click the
Calculate button as shown in Figure 9-16.

Figure 9-16: Calculation Properties Window in openLCA


The resulting window (not shown) has various tabs to display tables and graphs of the process
matrix-calculated upstream results for your product system (your selected process in this case).
If you enable it in the openLCA preferences, you can also download a spreadsheet export of
analysis results here. The “LCI – Total” tab summarizes the inputs and outputs from the
process matrix calculation as shown in Figure 9-17. Again, these are very similar to the results
from the US LCI Excel spreadsheet or SimaPro. For our usual observation of the CO2 results,
openLCA seems to aggregate all carbon dioxide air emissions into a single value (the US LCI
database tracks 4 separate air emissions of CO2, including biogenic emissions).

Figure 9-17: LCI - Total Results for Product System in openLCA

The “Process results” tab shows the additional detail of the direct inputs and outputs from
the chosen process as well as the total upstream, as shown in Figure 9-18.


Figure 9-18: Process results view in openLCA


Advanced Material for Chapter 9 – Section 4 – Allocation in Process Matrix Models

Chapter 6 had shown the relevant issues associated with multifunctional product systems, such
as the need to solve them through allocation or system expansion methods. However, Chapter
6 showed how to use such methods in the context of simple process flow diagram-based
models. With our new linear systems models introduced in this chapter, we have a better
structure in place to consider alternative methods and model designs. Figure 9-1 gave the
example of a two-product system - fuel and electricity - developed in a linear system of
equations (matrix) format. In this section, we demonstrate how the allocation and system
expansion techniques can be implemented in a process matrix system.

Figure 9-19 shows a modified two-product system, where the second unit process is
multifunctional, in that it produces heat and electricity through cogeneration.

Figure 9-19: Process Diagrams for Two-Process System with Heat

For such a system, we now have a third product flow, heat, and the A matrix would be

A = [ 20  −2 ]
    [  0  10 ]
    [  0  18 ]

but this is an over-determined system of equations, and A cannot be inverted. Clearly we
cannot use this system description to find a solution.
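This failure is easy to confirm numerically. In the Python/NumPy sketch below (illustrative only; the chapter's own examples use Excel and MATLAB), attempting to invert the 3x2 matrix raises an error, because inversion is only defined for square matrices:

```python
import numpy as np

# Two processes but three product flows: A is 3x2, not square
A = np.array([[20.0, -2.0],
              [0.0, 10.0],
              [0.0, 18.0]])

try:
    np.linalg.inv(A)  # matrix inversion requires a square matrix
    invertible = True
except np.linalg.LinAlgError:
    invertible = False

print(invertible)  # False: the over-determined system cannot be inverted
```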

As discussed in Chapter 6, it is possible that the additional production of 18 MJ of heat avoids
the production of 18 MJ of heat from some standalone heat generation process. The method
in which the multifunctional problem is solved using this intuition is referred to as the
substitution method. In this method, we assume there is a standalone unit process (as shown
in Figure 9-20) for the avoided production of the function heat.


Figure 9-20: Process Diagram for Standalone Heat Process

If we again assume that heat is the third unit process in the system of equations, then we have
a new A matrix for the system of equations:

A = [ 20  −2   −5 ]
    [  0  10    0 ]
    [  0  18   90 ]

so the total production across the system for making 1,000 kWh of electricity is:

X = A⁻¹Y = [ 0.05   0.005   0.0028 ] [    0 ]   [   5 ]
           [ 0      0.1     0      ] [ 1000 ] = [ 100 ]
           [ 0     −0.02    0.011  ] [    0 ]   [ −20 ]

Compared to the X result from this chapter ([10; 100]), we see that the scale factor of
production from process 1, fuel, is half – 5 versus 10 – and that the scale factor of process 2,
electricity, is identical (i.e., 100, which produces 1,000 kWh). The significance of the negative
value (-20) for process 3 is a reminder of the substitution effect, or as discussed in this chapter,
the ‘credit’ as a result of the provision of heat as a multifunctional output of process 2. In our
case, given a scaling factor of 100 for electricity (which produces 1,000 kWh) we co-produce
1,800 MJ of heat, which is equal to a substitution of a scale factor of 20 for process 3 (i.e., 20
times 90 MJ).

These results should be expected. In the substitution method, we should expect that our scale
factor from the multifunctional process (P2) will be the same. Since the substituted process
(P3) has fuel (P1) as an input, the production of fuel should decrease. Finally, we should expect
a negative scale factor for the standalone unit process that makes heat.

The updated B matrix representing flows of crude oil, SO2, and CO2 is shown by:

B = [ −50   0     0 ]
    [   2   0.1   0 ]
    [  10   1     3 ]

and E is found using the same method as in the chapter:


E = BX = [ −250 ]
         [   20 ]
         [   90 ]
Here, the quantity of crude oil input needed is half as much as before (which makes sense
given the halved scale factor for process 1), and the emissions of SO2 are about one-third
lower. The CO2 emissions decrease by more than half (90 versus 200 kg), mostly because we
have ‘credited’ our electricity generation process for the substitution of separately produced
heat (−60 kg) but also because we only produced half as much fuel (−50 kg).

These are the same results we would have achieved by doing the same problem without a
linear system or matrices, but the matrix arrangement helps to organize the answer. In
addition, as we have noted throughout Chapter 9, the process matrix method connects
upstream technology processes, so that circularity or other interconnected technological flows
(and thus environmental flows) are considered. Thus the matrix method would ensure these
effects were considered within the boundary of the system, and would not have to be
separately accounted for by hand.
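The substitution calculation above is easy to reproduce in code. The following Python/NumPy sketch (the chapter's own examples use Excel and MATLAB) uses the same A, B, and Y values from this section:

```python
import numpy as np

# Substitution method: heat is a third unit process (values from this section)
A = np.array([[20.0, -2.0, -5.0],   # fuel: output 20, inputs to P2 and P3
              [0.0, 10.0, 0.0],     # electricity from P2
              [0.0, 18.0, 90.0]])   # heat: co-product of P2, main product of P3
B = np.array([[-50.0, 0.0, 0.0],    # crude oil (an input, hence negative)
              [2.0, 0.1, 0.0],      # SO2
              [10.0, 1.0, 3.0]])    # CO2
Y = np.array([0.0, 1000.0, 0.0])    # final demand: 1,000 kWh of electricity

X = np.linalg.solve(A, Y)           # scale factors: [5, 100, -20]
E = B @ X                           # environmental flows: [-250, 20, 90]
print(X, E)
```

Using np.linalg.solve avoids forming the inverse explicitly and is generally more numerically stable, though the result matches the inverse-based calculation shown above.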

E-resource: Examples in this Advanced Material Section are also included in the ‘Chapter
9 Excel Matrix Math’ spreadsheet.

Matrix-Based Allocation

As discussed in the chapter, it is these types of results that lead to the desire to allocate flows.
While partitioning and system expansion are preferred, allocation is still sometimes needed.
From a matrix perspective, allocation is straightforward.

In a system such as the one shown in Figure 9-19, where there are only 2 product outputs in
a multifunctional unit process (P2), we only need a single variable (e.g., λ) to represent the
effect of allocation. Since the sum of allocation factors must equal 1, the allocation factors for
the two products are λ and (1−λ)27. In such a system, we would consider the original processes
P1 and P2, and separate P2 into a process P2′ that produces electricity and P3′ that produces
heat. The input and output flows from the original P2 are allocated (by λ) across the revised
processes P2′ and P3′.

For the modified three unit process system of equations, the A and B matrices are generally:

A′ = [ 20   −2λ   −2(1−λ) ]            [ −50   0λ     0(1−λ)   ]
     [  0   10      0     ]   and B′ = [   2   0.1λ   0.1(1−λ) ]
     [  0    0     18     ]            [  10   1λ     1(1−λ)   ]

27The approach is independent of the type of allocation done (mass, energy, volume, etc.). For unit processes with more than two
products, the approach is similar, but additional variables are needed. Allocation factors must still sum to 1.


Considering, for example, the specific case of λ = 0.7,

A′ = [ 20   −1.4   −0.6 ]            [ −50   0      0    ]
     [  0   10      0   ]   and B′ = [   2   0.07   0.03 ]
     [  0    0     18   ]            [  10   0.7    0.3  ]

Using the same final demand of 1,000 kWh of electricity,

X = A′⁻¹Y = [ 0.05   0.007   0.002 ] [    0 ]   [   7 ]                    [ −350 ]
            [ 0      0.1     0     ] [ 1000 ] = [ 100 ]   and   E = B′X = [   21 ]
            [ 0      0       0.056 ] [    0 ]   [   0 ]                    [  140 ]

Similar to what was noted above, the scale factor for electricity is unchanged from our previous
examples, which is expected. The scale factor for fuel production is decreased, which makes
sense since the mono-functional P2’ now needs less fuel input. Finally, the scale factor for
mono-functional heat process P3’ is zero, which is not a surprise since it is not needed to
produce electricity and is disconnected from the rest of the system. In this particular case (for
generating electricity), it could have been left out of consideration. On the other hand, if we
ran the model for a final demand of heat, it would be relevant.
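The allocated system can be checked the same way. Here is a Python/NumPy sketch (again illustrative; the chapter uses Excel and MATLAB) parameterized by the allocation factor, using λ = 0.7 as above:

```python
import numpy as np

lam = 0.7  # allocation factor for electricity from the multifunctional process

# P2's inputs and emissions are split by lam between P2' (electricity) and P3' (heat)
A = np.array([[20.0, -2.0 * lam, -2.0 * (1 - lam)],
              [0.0, 10.0, 0.0],
              [0.0, 0.0, 18.0]])
B = np.array([[-50.0, 0.0 * lam, 0.0 * (1 - lam)],
              [2.0, 0.1 * lam, 0.1 * (1 - lam)],
              [10.0, 1.0 * lam, 1.0 * (1 - lam)]])
Y = np.array([0.0, 1000.0, 0.0])   # final demand: 1,000 kWh of electricity

X = np.linalg.solve(A, Y)   # [7, 100, 0]: P3' is not needed for electricity
E = B @ X                   # [-350, 21, 140]
print(X, E)
```

Changing lam and rerunning shows directly how the choice of allocation factor shifts the burdens between the two products.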

This section has shown how we can use our newly developed process matrix systems of
equations to more efficiently solve the multifunctional system challenges motivated in Chapter
6.


Advanced Material for Chapter 9 – Section 5 – Hybrid EIO-LCA Models

In this section we briefly demonstrate an advanced feature of the EIO-LCA website which
allows a user to estimate the effects of adjusted purchase requirements for an existing sector.
This is one of two custom advanced models available (the other was described in the Advanced
Material of Chapter 8).

From the ‘Use the Model’ page of EIO-LCA, select the ‘Create Custom Model’ tab in the top
center of the page (as shown in the screenshot below). On the resulting page, select the ‘hybrid
product’ link.

The hybrid product tool allows you to perform a hybrid life cycle assessment (LCA) for a
product. The term ‘hybrid’ here indicates that you will be able to create your own theoretical
sector through which you will be able to model a different production process based on input-
output data from the EIO-LCA model and process LCA data to which you may have access.

This feature will allow you to look beyond the aggregation of existing EIO-LCA sectors and
towards a more fine-tuned analysis of various custom products, including early development,
hypothetical, or existing products that may not be adequately modeled with the existing
aggregation of the input-output sector in question (i.e., an electric car, hydrogen fuel cell, or
specific existing product).

This tool works by allowing you to modify the default input purchase requirements of an
existing IO sector in order to simulate production of the related production process (as
opposed to the custom builder described in Chapter 8 which allows for multiple final demand
entries). In other words, this tool provides different cells for the A matrix, as opposed to
additional entries for the Y vector.

To use the custom product builder, step one is to choose the EIO-LCA ‘model year’ to be
used (e.g., 1997 or 2002, producer or purchaser basis) and click the ‘Change Model’ button, as
shown in the screenshot below.


In step two, you choose an existing sector for which you assume production is similar (in
order to build your adjusted model from it). The sector selection interface works similarly to
other EIO-LCA models already discussed. Once the sector has been chosen, you enter a
relative amount of final demand on which to base the model. As with other models shown, it
may be relevant to choose a higher level of demand than you are interested in modeling, and
then scale results back down since IO models are linear (see below). Once the amount of
economic activity has been entered, click the “Select this Sector” button. This generates a web
page where each of the default level of purchases from the other sectors in the model are
shown on the basis of the input economic activity, and each of the sectoral purchase inputs
can be modified (or left as is). The screenshot below shows an abridged page resulting from
$300,000 ($0.3 million) of Automobile manufacturing in the 2002 EIO-LCA producer model
(many of the 428 rows in the middle of the resulting page have been deleted from the
screenshot). This level of scaling helps to provide data in a unit that is feasible to consider,
and approximately shows the requirements of making ten $30,000 average vehicles in the US
in 2002.


This excerpt shows that for $300,000 of default purchases in the Automobile manufacturing
sector, amongst many other purchases, there are about $430 of production needed from the
two battery sectors, and about $152,000 of motor vehicle parts. Instead of producing a default
average vehicle using the 2002 data, we could, for example, consider the case of producing a
hybrid gas-electric vehicle. Assume that we had information from a separate process-based
source or inventory on the relative amounts of batteries versus engines in hybrid cars as
compared to typical internal combustion engine vehicles.

For ten hybrid vehicles (as noted above given the $300,000 economic input), the vehicles
would likely need more batteries and relatively less in terms of engines (which are in the motor
vehicle parts sector). If we wanted to keep the $300,000 basis, we could then shift some of the
$152,000 of purchases (which include engines) into the storage battery sector (which is where
rechargeable batteries are categorized). For example, we could split it in half, entering adjusted
values of $76,000 for the Storage battery manufacturing sector and $76,000 for the Motor
vehicle parts manufacturing sector.
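The arithmetic of such a shift can be sketched as follows. This Python example uses entirely hypothetical supply-chain energy intensities (the EIO-LCA model's real coefficients are not reproduced here) to show why moving purchases from the parts sector to the battery sector changes the supply-chain total:

```python
import numpy as np

# Hypothetical supply-chain energy intensities, TJ per $million of purchases
intensity = np.array([5.0, 8.0, 30.0])   # [assembly, vehicle parts, storage batteries]

# Default first-tier purchases for $0.3 million of automobiles, in $million
default = np.array([0.3, 0.152, 0.00043])

# Hybrid vehicle: shift half of the parts purchases into the battery sector
hybrid = default.copy()
shift = default[1] / 2
hybrid[1] -= shift
hybrid[2] += shift

default_energy = intensity @ default
hybrid_energy = intensity @ hybrid
print(default_energy, hybrid_energy)   # batteries are more energy-intensive here
```

Because the hypothetical battery sector is more energy-intensive per dollar than the parts sector, the hybrid configuration's total rises, consistent in direction with the 8.3 versus 9.2 TJ/$million comparison discussed below.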

The final step, once all needed purchase components have been modified, is to click the
‘Continue’ button at the bottom of the screen, which leads to the usual display of results.


The screenshot below shows the results, and first notes the basis used ($300,000 for the
automobile production sector), then the adjustments made – i.e., nearly $76,000 more to the
battery sector (net of what was already being purchased) and about $76,000 less in
engines/parts.

Comparing these results to those of Figure 8-4, which showed total supply chain energy per $1
million of automobiles (8.3 TJ/$1 million), this hybrid combination would consume more
energy (about 9.2 TJ/$1 million).

If you need additional help with the custom builder tool, see the online EIO-LCA screencasts
(at http://www.eiolca.net/tutorial/ScreencastTutorial/screencasts.html).


Advanced Material for Chapter 9 – Section 6 – Connections in Process Matrix Models: Comparison of US LCI and EIO-LCA

Definitions of connections and processes

Chapter 8 discussed input-output models, which by default have a substantial scope, including
effects across the entire economy, while we have generally shown process-based methods (and
by extension, process matrix-based methods) as being more limited. In the first part of this
section, we quantitatively show the difference in ‘connections’ that exist in the two types of
models using a specific comparison of the EIO-LCA model and the US LCI database (as a
matrix model).

As the LCI for each process only provides direct inputs or outputs, the indirect inputs or
outputs calculated from the matrix-based LCA method rely on the interconnections between
processes in the system under study. These interconnections are links between processes based
on their respective inventories. In the matrices, a link between two processes is represented
by a non-zero value. In recent years, the interconnections in the LCI database have become
interesting to LCA practitioners. Using graphical methods, Kuczenski (2015) estimated the
interconnections between processes in several LCI databases; this study concluded that all the
databases under consideration shared similar internal structure. Network analysis methods
have also been used in input-output models to determine the key processes in the system
(Singh and Bakshi 2011; Kagawa et al. 2013; Nuss et al. 2016). These studies generally focused
on determining the strongly connected processes in the system with the purpose of
categorizing processes in the databases, and identifying important processes in particular
industries. However, the question of how the database interconnections affect the results from
matrix-based LCA method remains unanswered.

Substantial connections are crucial to evaluating direct and indirect environmental effects. The
coverage of connections determines whether the indirect environmental effects can be fully
captured. This study demonstrates a method to evaluate the effects from the interconnections
in matrix-based methods. The US LCI database is used as a case study. The latest version of
the US LCI database provides 1,060 unit processes. These unit processes have inputs or
outputs from 1,466 elementary flows and other processes. In this study, first, the processes
from the US LCI database were mapped on to the A and B matrices. Then, the inputs and
outputs for each process were evaluated and quantified to demonstrate the coverage of
connections in the database. Last, we compared the connections with the input-output table
and the results from the EIO-LCA (Economic Input-Output Life Cycle Assessment) model
to identify possible missing connections for some processes. In addition, the uncertainties
resulting from using alternative utilities for each process were analyzed.

Matrices mapped from unit processes in the US LCI database


When I conducted the analysis, the US LCI database at that time provided 701 unit processes.
The inventories of these unit processes could be obtained individually from the US LCI
website compiled by the National Renewable Energy Laboratory (NREL 2012). To apply the
inventories in a matrix-based LCA model, I downloaded the inventory data for all 701 unit
processes and used MATLAB to map their inventories into the A and B matrices. Following
the terminology from above, the results from the mapping showed n=1,471 process flows
and m=2,701 elementary flows in the matrices. I observed some potentially problematic
processes in the matrix mapping and results. These problematic processes affect the
consistency of the inventories in the matrices and could result in inaccurate interpretations in
the matrix-based LCA model. Therefore, before the interconnections in the matrices were
evaluated, I screened potentially problematic processes according to three categories: cut-off
processes, the system boundary, and the flow types.
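The mapping step can be sketched as follows. The exchange lists and flow names below are hypothetical simplifications (the actual study parsed the NREL inventory files in MATLAB), but the bookkeeping is the same: process flows go into the A matrix, elementary flows into the B matrix:

```python
import numpy as np

# Hypothetical exchange lists: (flow name, amount, is_elementary_flow)
processes = {
    "electricity": [("electricity", 1.0, False), ("coal", -0.5, False),
                    ("Carbon dioxide, fossil", 1.0, True)],
    "coal": [("coal", 1.0, False), ("electricity", -0.02, False),
             ("Methane", 0.003, True)],
}

proc_index = {name: j for j, name in enumerate(processes)}
elem_names = sorted({flow for exchanges in processes.values()
                     for flow, _, is_elem in exchanges if is_elem})
elem_index = {name: i for i, name in enumerate(elem_names)}

A = np.zeros((len(proc_index), len(proc_index)))   # technology (process) matrix
Bm = np.zeros((len(elem_index), len(proc_index)))  # elementary-flow matrix

for name, exchanges in processes.items():
    j = proc_index[name]                 # each process is one column
    for flow, amount, is_elem in exchanges:
        if is_elem:
            Bm[elem_index[flow], j] += amount
        else:
            A[proc_index[flow], j] += amount

print(A)  # outputs on the diagonal (positive), inputs off-diagonal (negative)
```

Once populated this way, the matrices can be used directly in the X = A⁻¹Y and E = BX calculations shown earlier in the chapter.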

Cut-off processes

In US LCI, “cut-off” processes, previously labeled as “Dummy” processes, are input products
whose inventories are not yet provided in the current version of the database. For example,
the inventory information of one secondary steel production process (“CUTOFF Steel,
secondary, at plant”) is not yet included in the database. Currently, 525 or 35% of the total
number of processes in US LCI database are cut-off processes. When the database is applied
to a matrix-based LCA model, the presence of cut-off processes, as inputs to a process under
study, results in the neglect of upstream effects. This is because the cut-off processes do not
contribute to the indirect effects.
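The effect of a cut-off process is visible directly in matrix form. In this hypothetical Python/NumPy sketch, the cut-off column has an output but no inputs and no emissions, so it contributes nothing to the upstream totals:

```python
import numpy as np

# Process 1 (column 0) needs 0.3 units of a cut-off product per unit of output;
# the cut-off process (column 1) has an output of 1 but no inputs of its own
A = np.array([[1.0, 0.0],
              [-0.3, 1.0]])
B = np.array([[2.0, 0.0]])   # fossil CO2 per unit output; zero for the cut-off

X = np.linalg.solve(A, np.array([1.0, 0.0]))
E = B @ X
print(X, E)  # X = [1, 0.3]: the cut-off product is demanded, but adds no CO2
```

With real data, the 0.3 units demanded of the cut-off product would carry their own upstream inputs and emissions, which is exactly what is lost when a cut-off process appears in the supply chain.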

Cradle-to-gate and gate-to-gate processes

Based on the inclusiveness of the processes’ system boundaries, data in the US LCI database
is associated with two different types of unit processes: “gate-to-gate” and “cradle-to-gate”.
Gate-to-gate processes are unit processes for the production only; the word “gate” indicates
the entry or exit threshold of a factory. Cradle-to-gate processes, in contrast, also include
upstream inputs and outputs; the word “cradle” indicates the origin of a product.

Let us consider Polyethylene terephthalate (PET) as an example product to illustrate how the
two types of processes are listed in the US LCI database. The cradle-to-gate and gate-to-gate
inventories for producing PET were provided in an LCA study (Franklin Associates 2010),
and are shown in Figure 9-21. For PET, the cradle-to-gate process and gate-to-gate process
differ in their material usage and transportation inputs. The four material inputs for the gate-to-gate
process are products from other factories that are incorporated into the production of PET.
The cradle-to-gate PET process instead has material inputs of crude oil, natural gas, and
oxygen; these are the raw production materials for the four inputs in the gate-to-gate process.
There are transportation inputs in the cradle-to-gate process because, compared to the gate-to-gate
process, there are more product exchanges between factories or industrial sites.


In the US LCI database, the published inventories in Figure 9-21 were modified to produce
two different unit processes, as shown in Figure 9-22.
Figure 9-21: Inputs in the gate-to-gate and cradle-to-gate processes for PET production from Franklin
Associates’ original LCA study. The two processes differ in material usage and inputs.

Gate-to-gate PET inventory, for 1000 kg PET, provided by Franklin Associates study:

Material
  Paraxylene                   521 kg
  Ethylene glycol              322 kg
  Acetic acid                  37.2 kg
  Methanol                     35.2 kg
  Water consumption            0.537 m3
Energy use
  Electricity (grid)           558 kWh
  Electricity (cogeneration)   9.59 m3
  Natural gas                  98.1 m3
  Bit./Sbit. Coal              18.9 kg
  Distillate oil               12.8 liter
  Residual oil                 26.8 liter

Cradle-to-gate PET inventory, for 1000 kg PET, provided by Franklin Associates study:

Material
  Crude oil                    568 kg
  Natural gas                  215 kg
  Oxygen                       223 kg
Energy use
  Electricity (grid)           882 kWh
  Electricity (cogeneration)   12.1 m3
  Natural gas                  352 m3
  Bit./Sbit. Coal              35.9 kg
  Distillate oil               13.8 liter
  Residual oil                 80.1 liter
Transportation
  Combination truck            28.5 tkm
  Rail                         1633 tkm
  Barge                        139 tkm
  Ocean freighter              2717 tkm
  Pipeline-natural gas         382 tkm
  Pipeline-petroleum           395 tkm


Figure 9-22: Inputs in the gate-to-gate and cradle-to-gate (CTG) processes for PET production
provided by US LCI. Italics used to mark differences from Figure 9-21. Inputs for the CTG process are
broken into upstream raw elementary flows, as opposed to inputs as process flows from original study.

Gate-to-gate PET inventory, for 1000 kg PET, provided by the US LCI database:

Material
  Paraxylene, at plant                                                   521 kg
  CUTOFF Ethylene glycol, at plant                                       322 kg
  Acetic acid, at plant                                                  37.2 kg
  Methanol, at plant                                                     35.2 kg
  Water, process, unspecified natural origin/m3                          0.244 m3
Energy use
  Electricity, at grid, US, 2008                                         558 kWh
  Electricity, at cogen, for natural gas turbine                         9.59 m3
  Natural gas, combusted in industrial boiler                            98.1 m3
  Bituminous coal, combusted in industrial boiler                        18.9 kg
  Diesel, combusted in industrial equipment                              12.8 liter
  Residual fuel oil, combusted in industrial boiler                      26.8 liter
Transportation
  Transport, barge, diesel powered                                       0.655 tkm
  Transport, barge, residual fuel oil powered                            2.18 tkm
  Transport, pipeline, natural gas                                       0.0876 tkm
  Transport, train, diesel powered                                       1610 tkm
Disposal
  CUTOFF Disposal, solid waste, unspecified, to municipal incineration   0.31 kg
  CUTOFF Disposal, solid waste, unspecified, to sanitary landfill        4.19 kg
  CUTOFF Disposal, solid waste, unspecified, to waste-to-energy          0.59 kg
  CUTOFF Disposal, solid waste, process, to municipal incineration       1.03 kg

Cradle-to-gate PET inventory, for 1000 kg PET, provided by the US LCI database:

Material
  Coal, lignite, in ground                                               25.3 kg
  Coal, unprocessed bituminous, in ground                                326 kg
  Gas, natural, in ground                                                222 kg
  Oil, crude, in ground                                                  587 kg
  Oxygen, in air                                                         243 kg
  Uranium oxide, 332 GJ per kg, in ore                                   6.46 g
  Water, process, unspecified natural origin/m3                          12.1 m3
Energy use
  Gas, natural, in ground                                                504 m3
  Oil, crude, in ground                                                  151 kg
  Energy, from biomass                                                   5.37 MJ
  Energy, from hydro power                                               24.8 MJ
  Energy, geothermal                                                     13 MJ
  Energy, kinetic (in wind), converted                                   12.9 MJ
  Energy, solar                                                          0.547 MJ
  Energy, unspecified                                                    17.4 MJ
Disposal
  CUTOFF Disposal, solid waste, unspecified, to municipal incineration   128 kg
  CUTOFF Disposal, solid waste, unspecified, to sanitary landfill        31.8 kg
  CUTOFF Disposal, solid waste, unspecified, to waste-to-energy          0.595 kg

The US LCI database shows that the gate-to-gate processes link to four other process flows
in the model as inputs, and the inputs are identical to the inventory provided in the original
LCA study. On the other hand, cradle-to-gate processes in the US LCI database link to other
elementary flows as inputs, and do not have process flows.
further decompositions, such as those derived from Figure 9-21, of the inputs’ raw materials
represented as only elementary flows. For example, “Electricity (grid)” is decomposed into
upstream inputs such as “energy from solar”, and “energy from geothermal”. This
decomposition requires additional assumptions on electricity generation methods. The grid
electricity used in the production is a mix of electricity generated from different types of power
plants. The power plants themselves use different energy sources, such as solar power, to
generate electricity. Thus, in this example, the resulting cradle-to-gate PET inventory is a mix
of unusual energy inputs. In general, this type of decomposition allows the cradle-to-gate
process to have only elementary flows as inputs.

The effects calculated from gate-to-gate processes can be separated into direct and indirect
effects for each upstream input. However, the necessary assumptions as well as the
decomposition methods are not documented in the US LCI database. As a result, users
are unable to fully interpret the cradle-to-gate inventory. In addition, the decomposed
inventory in cradle-to-gate processes impedes the process-level separation of environmental
effects (i.e., noting where in the upstream chain they occur).

The cradle-to-gate processes and gate-to-gate processes have different system boundaries. The
two types of processes should be separated when mapped into matrices to avoid potential
scenario uncertainty in matrix-based LCA models. Hence, in this analysis I distinguish the
cradle-to-gate processes from the unit processes in the US LCI database.

The cradle-to-gate processes can provide total environmental effects without needing to use
the matrix-based method, making them seemingly more convenient to use than the gate-to-
gate processes. However, a disadvantage of the cradle-to-gate processes is that they fail to
provide effects on different stages of the production. For example, the cradle-to-gate PET
process provided in Figure 9-22 indicates that the total (direct and indirect) fossil CO2
emissions, for producing 1 kg of PET, is 2.419 kg. I cannot know the amount of fossil CO2
emissions from different inputs of the production (and thus cannot evaluate sources of
differences if there were alternative production processes). In comparison, by running the
gate-to-gate PET process in the matrix-based model, I calculate the direct and indirect
emissions from the upstream processes (the ten upstream processes with the highest emissions
are listed in Figure 9-23). Gate-to-gate processes can take full advantage of the matrix-based
method by separately listing the emissions from different stages. For the PET example,

Life Cycle Assessment: Quantitative Approaches for Decisions That Matter – lcatextbook.com
Chapter 9: Advanced Life Cycle Models 351

the users can tell that the onsite fossil CO2 emissions (excluding emissions from burning fossil
fuel) were 72.4 g, only 6% of the total emissions, while the most intensive emissions were
from burning fossil fuel in the production or upstream energy use. As seen in this example,
gate-to-gate processes allow LCA practitioners to efficiently interpret the LCA results. In
contrast, cradle-to-gate processes fail to deliver an equivalent breakdown of results.

Another limitation of the undocumented cradle-to-gate processes is that, when applied to the
matrix-based method, it is impossible to improve the inventory by reducing the uncertainty in
each input. For the PET example, the total fossil CO2 emissions from the cradle-to-gate and
gate-to-gate processes differ by 1.242 kg, with the cradle-to-gate process (not surprisingly) having
the larger emission value. As the emissions by input from the cradle-to-gate process are not
available, it is impossible to identify what produced this discrepancy. Alternatively, in the case of
gate-to-gate processes, improving the data quality for an input also reduces the uncertainty
in the total emission. For example, the largest part of fossil CO2 emissions is from burning
natural gas in the production; therefore, when this input is switched to another fuel, or the data
quality of emissions from natural gas combustion is improved, the total fossil CO2
emissions can be updated accordingly. This update would not be possible for cradle-to-gate
processes, because their emission sources cannot be tracked.

Figure 9-23: Top ten fossil CO2 emission processes for producing 1 kg of Polyethylene terephthalate,
resin, at plant (PET), calculated from the gate-to-gate PET process.

Process name                                             Fossil CO2 emissions (g)
‘Natural gas, combusted in industrial boiler’                              508.18
‘Residual fuel oil, combusted in industrial boiler’                        140.02
‘Electricity, natural gas, at power plant’                                  95.72
‘Bituminous coal, combusted in industrial boiler’                           94.78
‘Polyethylene terephthalate, resin, at plant’                               72.40
‘Electricity, bituminous coal, at power plant’                              44.94
‘Transport, ocean freighter, residual fuel oil powered’                     40.34
‘Diesel, combusted in industrial equipment’                                 37.34
‘Transport, train, diesel powered’                                          32.04
‘Methanol, at plant’                                                        29.28
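
The gate-to-gate contribution breakdown illustrated in Figure 9-23 follows the standard matrix-based calculation: solve (I − A)x = f for the total output vector, then scale each process's direct emission factor by its output. A minimal sketch in Python, using a made-up 3-process system (the coefficients and emission factors are illustrative, not actual US LCI data):

```python
import numpy as np

# Toy 3-process technosphere (illustrative numbers, NOT actual US LCI data).
# Column j lists the inputs that 1 unit of process j draws from each row i.
A = np.array([
    [0.0,  0.0,  0.0],   # process 0: PET resin (the product of interest)
    [0.3,  0.0,  0.1],   # process 1: natural gas combustion
    [0.2,  0.05, 0.0],   # process 2: grid electricity
])

b = np.array([0.072, 0.5, 0.4])        # direct fossil CO2 per unit output (kg)
f = np.array([1.0, 0.0, 0.0])          # final demand: 1 kg of PET

x = np.linalg.solve(np.eye(3) - A, f)  # total output required of every process
per_process = b * x                    # emissions attributed to each process
total = per_process.sum()              # total (direct + indirect) emissions

order = np.argsort(per_process)[::-1]  # rank contributors, largest first
for i in order:
    print(i, round(per_process[i], 4))
```

Sorting the per-process emission vector in this way reproduces the kind of top-contributor table shown in Figure 9-23; the onsite (direct) emission of the product process is simply its own entry in that vector.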

As mentioned before, a gate-to-gate process should include all possible inputs within the
technosphere. This would maximize the interconnections and include all possible indirect
effects. For example, when mapping the inventory from a coal power plant onto the US LCI
database, one should make sure that all of the associated intermediate processes have been
included. Coal power plants use coal as their energy input, thus a process flow
that represents coal combusted for energy should be chosen as one of the inputs. Choosing
“coal as a raw material” as an input would result in ignoring potentially large emissions from


the coal combustion process. Note that some cradle-to-gate processes in the US LCI database
have already included all inputs in the elementary flows, thus the cradle-to-gate processes do
not have the same problem.

In this study, 474 raw material input elementary flows were used to identify the processes that
had skipped intermediate process flows. First, I distinguished the processes with any of these
raw material flows as an input. Then, the distinguished processes were individually evaluated
to identify the processes that had used these raw material elementary flows directly without
the intermediate process.

Skipping intermediate processes can result in neglecting upstream environmental effects.
Such processes fail to achieve the full set of connections needed for matrix-based LCA studies.
Hence there is a need to resolve the inconsistencies in flow types in order to improve the
database. The next section will discuss additional connection issues found in the US LCI database.

Connections in the US LCI database

The results show that 791 out of 1471 process flows have no upstream connections; this
number includes 525 cut-off processes (Figure 9-24). The absence of upstream connections
neglects potential indirect effects, as discussed previously. Similarly, 503 out of the 1471
process flows had no downstream connections (Figure 9-25). The cut-off processes and non-
cut-off processes were separately identified. The lack of upstream connections in cut-off
processes is due to inventory unavailability, not zero inputs. The matrix-based models (and
databases) are less useful when the inventories include cut-off processes; when the cut-off
processes are replaced by other processes with full inventories, they will have non-zero
upstream connections. It is important to be aware of cut-off processes because they are used
as inputs in other processes’ inventories (but again, have no upstream inventory). Figure 9-25
shows that almost all cut-off processes have more than one downstream connection; in fact,
28 of them have more than 10 downstream connections each – meaning more than 10 processes
apiece would benefit if those cut-off processes had real inventories. These cut-off processes may hide total
environmental effects. Therefore, when new inventory data are incorporated in the US LCI
database, the cut-off processes’ inventories should be prioritized to maximize the utility and
completeness of the database, and avoid potentially missing total environmental effects.

The results also show that the global average (upstream and downstream, across all processes) was 4
connections. In general, processes had more downstream connections than upstream
connections; 69 processes had more than 20 downstream connections, and one process (“Iron,
sand casted”) had 251 connections. In comparison, no process had more than 51 upstream
connections. Individual processes did not necessarily have more downstream than upstream
connections.
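
The connection counts used throughout this section can be read directly off the technology matrix: a nonzero entry A[i, j] means process j uses process i as an input, so column sums of the nonzero pattern give upstream connections and row sums give downstream connections. A sketch with a toy 4-process matrix (the actual US LCI matrix is 1471 × 1471):

```python
import numpy as np

# Toy technology matrix; A[i, j] != 0 means process j draws input from process i.
A = np.array([
    [0, 0, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
])

nonzero = A != 0
upstream = nonzero.sum(axis=0)    # inputs per process (count per column)
downstream = nonzero.sum(axis=1)  # uses of each process as an input (per row)

no_upstream = np.where(upstream == 0)[0]   # e.g., candidate cut-off processes
no_connection = np.where((upstream == 0) & (downstream == 0))[0]

print(upstream, downstream, no_upstream, no_connection)
```

Histograms of the `upstream` and `downstream` vectors are what Figures 9-24 and 9-25 display; `no_connection` corresponds to the fully isolated processes discussed below.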


Figure 9-24: Number of upstream connections for each process in the US LCI database. The
inset provides greater detail for those processes with 10 or fewer process inputs. Values on the
x-axis indicate the range of connections into which each process falls. For example, the third
bar on the main graph shows that there are 272 processes that have from 6 to 10
upstream connections. Results are shown separately for cut-off (blue) and non-cut-off
processes (orange). The bar at the zero value is the number of processes with no upstream
connections. All cut-off processes have zero upstream connections, due to inventory
unavailability.


Figure 9-25: Number of downstream connections for each process in the US LCI database.
The inset shows processes with 10 or fewer downstream connections. Results are shown
separately for cut-off (blue) and non-cut-off processes (orange). Values on the x-axis
indicate the range of connections into which each process falls. For example, the third bar
on the bottom graph shows that there are 19 processes that have connections ranging from
11 to 20; 9 of these are cut-off processes. The bar at the zero value is the
number of processes with no downstream connections.

Figure 9-26 lists the non-cutoff processes with more than 20 downstream connections. A
number of these processes represent very similar alternative processes, often with minor
differences. For example, “Electricity, at grid, US, 2000” (No. 15) and “Electricity, at grid,
US, 2008” (No. 11) each represent an overall US average grid mix of electricity. Here, the
connections are due to various electricity generation processes (such as coal, solar, and nuclear
power plants), and differ only by the year. If there are multiple versions for a given type of
electricity process, only one should be chosen as the input. Figure 9-27 includes the numbers
of downstream connections for some similar processes, such as the two electricity examples


discussed above. I aggregated the connections of these similar processes and listed them by
category. The results show that energy and transportation accounted for the majority of
processes with large numbers of downstream connections. These processes are similar and
are often used as inputs by other processes. When one of these processes is used as an input,
it is possible to consider other similar processes as alternatives. Considering these possible
alternatives can potentially improve the inventories and the LCA results. This is the subject of
the second topic in this chapter.
Figure 9-26: Processes with more than 20 downstream connections in the US LCI database.

No.  Process name                                               Number of downstream connections
1    Transport, train, diesel powered                                                        249
2    Transport, pipeline, unspecified petroleum products                                     191
3    Transport, barge, average fuel mix                                                      176
4    Natural gas, combusted in industrial boiler                                             130
5    Transport, ocean freighter, average fuel mix                                            129
6    Diesel, at refinery                                                                     128
7    Transport, combination truck, diesel powered                                            102
8    Transport, combination truck, average fuel mix                                           93
9    Diesel, combusted in industrial equipment                                                89
10   Gasoline, combusted in equipment                                                         83
11   Electricity, at grid, US, 2008                                                           71
12   Electricity, residual fuel oil, at power plant                                           66
13   Liquefied petroleum gas, combusted in industrial boiler                                  63
14   Gasoline, at refinery                                                                    55
15   Electricity, at grid, US, 2000                                                           54
16   Diesel, combusted in industrial boiler                                                   45
17   Electricity, at grid, Eastern US, 2000                                                   42
18   Residual fuel oil, combusted in industrial boiler                                        42
19   Transport, pipeline, natural gas                                                         31
20   Natural gas, combusted in industrial equipment                                           30
21   Transport, barge, diesel powered                                                         26
22   Electricity, at grid, Western US, 2000                                                   25
23   Quicklime, at plant                                                                      25
24   Bituminous coal, combusted in industrial boiler                                          24
25   Transport, barge, residual fuel oil powered                                              23


Figure 9-27: Processes with more than 20 downstream connections in the US LCI database, sorted by
three industrial categories

Category                                Process name                          Number of downstream connections
Fossil fuel and electricity as energy   Electricity, at grid                                             192
                                        Natural gas, combusted                                           160
                                        Diesel, combusted                                                 89
                                        Gasoline, combusted                                               83
                                        Liquefied petroleum gas, combusted                                63
                                        Residual fuel oil, combusted                                      42
                                        Bituminous coal, combusted                                        24
Transportation                          Transport, train                                                 249
                                        Transport, pipeline                                              222
                                        Transport, barge                                                 225
                                        Transport, combination truck                                     195
                                        Transport, ocean freighter                                       129
Materials                               Diesel, at refinery                                              128
                                        Gasoline, at refinery                                             55
                                        Quicklime, at plant                                               25

Figure 9-28 shows the different types of connections for the 1471 processes in the database.
Also included in Figure 9-28 is the analysis of energy and transportation inputs for the
processes that have at least one upstream or downstream connection. The results show that
342 (233+284-194+19) processes (23% of the total) have at least one upstream or downstream
connection. 248 (342-94) out of these 342 processes have either transportation or fuel as input.
If transportation is considered as an important input to decide whether a given process’
inventory is fully established, there are 76 (61+9+6) processes (5% of the total) that have both
energy and transportation as inputs. On the other hand, more than 50% of the total processes
have no upstream connections; this percentage includes the cut-off processes. I also note that
more than 34% of the total processes do not have downstream connections.

The results also show that 165 processes in the database have neither upstream nor
downstream connections. Most of these processes were by-products from the production of
other products, thus their inputs were not considered. For example, “Steam, at uncoated
freesheet mill” is a by-product of “uncoated freesheet”. For this production, I assumed that
allocation was not necessary. Thus, the number of inputs for this by-product was zero;
consequently, the number of upstream connections was also zero. Additionally, this
by-product was not used as an input in any other processes’ inventories, resulting in zero
downstream connections.

Figure 9-28: Different types of connections among the 1471 processes in the US LCI
database. The chart on the right shows the different types of connections shared by the 1471
processes; the figure on the left shows which processes have fuel (green circle), electricity
(brown circle), or transportation (blue circle) as inputs, among the processes that have one or
more upstream or downstream connections.

Comparisons of the interconnections between the US LCI database and the EIO-LCA
model

The technology matrix (A) in the current US LCI database is structured by mapping the
physical inventory data from all processes; the processes span a wide range of
industries in the US. The matrix is ordered alphabetically. However, the analyses above show that there
are processes without upstream or downstream connections, which suggests a lack of
interconnections in the technosphere. To better understand the interconnections in the
database, the 2002 US input-output table (IO table) is used as a reference to identify potentially
missed connections in the US LCI database.

The interconnections in the (A) matrix from the US LCI database were compared with the
interconnections in the IO table; both represent exchanges in the technosphere. The
interconnections in the US LCI database were then referenced with the hotspots of exchanges
between industries identified in the IO table. In the IO table, the exchanges between sectors
are based on direct requirements in dollar values. In this way, all purchases between industries
can be translated into product exchanges. These exchanges can be good references for
interconnections. In the US LCI database, the exchanges in the technosphere are based on


individual products or processes in physical unit operations. In the IO table, the exchanges
are based on industry sectors with purchase values in dollars. To compare the two models, I
categorized the processes in the US LCI database into industrial divisions via the ISIC code.
The industry-classified US LCI processes were aggregated and fitted into a new technology
matrix that can be compared more readily with the IO table.
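
One way to perform the collapse from process level to category level described above is with a 0/1 membership matrix S, where S[c, p] = 1 if process p belongs to category c; two categories are then connected if any of their member processes exchange flows. The category labels below are hypothetical placeholders, not the actual ISIC mapping:

```python
import numpy as np

# Hypothetical membership matrix: 4 processes mapped to 2 categories
# (in the actual study, membership comes from ISIC codes).
S = np.array([
    [1, 1, 0, 0],   # category 0: e.g., energy
    [0, 0, 1, 1],   # category 1: e.g., transportation
])

A = np.array([      # toy process-level technology matrix
    [0, 0, 1, 0],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
])

# Category c1 supplies category c2 if any member of c1 supplies any member of c2.
A_cat = (S @ (A != 0) @ S.T) > 0
print(A_cat.astype(int))
```

The resulting category-level connection matrix has the same shape as an aggregated IO table, which makes side-by-side comparison of connection patterns feasible.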

Number of interconnections in the IO table

Let us first consider the number of connections from the detailed level industries in the IO
table; this should provide a general idea of how different the connections are at this detailed
level.

I used the same method provided in the section above to evaluate the number of upstream
and downstream connections of the sectors in the IO table. In the detailed level IO table,
there are 428 industrial sectors; 97 of these are service and government sectors that are not
part of the US LCI database. I focus on the remaining 331 agricultural, industrial, and
transportation sectors, because these categories are also present in the US LCI database, and
thus their connections can be compared.

Results show that the average number of connections in the IO table is 277 for both upstream
and downstream connections. I note that connection values less than $50,000 were not
provided in the table (rounded down to $0), thus the actual number of the connections could
be larger. Figure 9-29 shows the histograms of downstream and upstream connections for the
331 sectors. The majority of sectors had more than 250 upstream inputs, and no sector had fewer
than 150 inputs in the IO table. On the other hand, 57 (5+15+18+6+13) sectors had fewer than
200 downstream connections, among which 5 had zero downstream connections (hunting
and construction sectors).


Figure 9-29: Numbers of connections for agriculture, industrial and transportation sectors
(331 in total) in the IO table (service sectors not considered). The left histogram shows the
numbers of downstream connections; the right histogram shows the numbers of upstream
connections.

While the IO table benefits from aggregation of sectors, the results indicate that the IO table
differs from the US LCI database in both the number and the pattern of connections. The number of
connections in the IO table is significantly larger (277 on average) than in
the US LCI database (4 on average); the US LCI average is smaller despite its larger
total number of processes. The IO table also differs in the pattern of connections. For instance,
the majority of the sectors had upstream inputs, and only a few sectors were not used as inputs
to other sectors. The number of sectors with downstream connections in the IO table was higher
than the number of processes with upstream connections in the US LCI database. This indicates
that unlike the processes in the US LCI database, the IO table includes mostly foreground sectors
(products but not contributors).


Advanced Material for Chapter 9 – Section 7 – Uncertainty in Process Matrix Models: Case Study of US LCI28

Scenario uncertainty estimation in the US LCI database

In the advanced material of Chapter 8, a range method was used to analyze the parameter
uncertainty in IO matrix-based LCA models. In this chapter, I introduce a method to estimate
the scenario uncertainty in process matrix models. Scenario uncertainty reflects the uncertainty
in the results caused by different choices in the LCA studies. In traditional LCA studies,
different choices have to be made in selecting allocation methods and drawing the system
boundaries (Huijbregts et al. 2003). On the other hand, when using matrix-based LCA models,
the scenario uncertainty can be considered by simultaneously choosing different inventories.

In the current US LCI database, various processes have overly specific inputs. For example,
“Transport, combination truck, diesel powered”, is the only truck transportation input for the
“Lime, agricultural, corn production” process. This suggests that only diesel powered
combination trucks are used in the production of lime, excluding the possibility of using
gasoline or other powered trucks. These overly specific processes can cause scenario
uncertainty in traditional LCA studies, because the processes fail to provide other possible
choices as inputs. Typically, this is also a problem in the matrix-based LCA models where the
processes are connected, and potentially contribute to each other’s environmental effects.
Moreover, in matrix-based LCA models, the results only show the aggregated effects from
each process. Without a thorough knowledge of all inventories in the database, users may be
unaware of the existence of overly specific processes. Thus, the scenario
uncertainties caused by these processes are likely to be ignored. In this study, I introduce
methods to consider scenario uncertainties in the matrix-based LCA model, using the US LCI
database as a case study.

Scenario uncertainty in US LCI processes

For the scenario uncertainty evaluation, I created a range of alternative scenarios while treating
each environmental effect separately. Each possible scenario is built by replacing an input to a
process with a similar alternative input. Later in this section, I provide
more details about the similarity criteria. The range of alternative scenarios represents the
scenario uncertainty. I first calculate and visualize the uncertainty due to the alternative inputs
for each process. Then, for the purpose of understanding the overall ranges of the
environmental effects, I combine the inventory information within the same industrial
categories. Fossil CO2 emissions are used as an example to demonstrate the results.

28 This section is excerpted from a chapter of Xiaoju (Julie) Chen’s dissertation, entitled “Uncertainty Estimation in Matrix-based Life Cycle
Assessment Models”.


When deriving scenario uncertainties, I used two separate methods to group similar inputs
and calculate total environmental effects. The first method was designed for utility or
transportation inputs; the second method was used for all industries in general.

In the first method, for a process, its specified upstream connections from electricity, truck,
or fossil fuel were replaced by all possible alternatives in the database; new results were
calculated based on these alternatives and used to form a range. When there were multiple
electricity, fossil fuel, or truck transportation inputs for a process in the original inventory, the
values of the common category inputs were summed for a single input value. For example,
the “Lime, agricultural, corn production” process mentioned above had three upstream truck
transportation inputs (112 ton-km combination truck (diesel powered), 14.6 ton-km single unit
truck (diesel powered), and 3.43 ton-km single unit truck (gasoline powered)) in the original
inventory. Thus, the total truck transportation was the sum of the three, or 130.03 ton-km.
Then, I fit the new inventory into a new A matrix (A_new). In A_new, the 130.03 ton-km truck
transportation input for the lime process was iteratively replaced, as a scenario, by one of the
115 truck transportation processes in the database. Replacing the truck transportation input
by all available alternatives resulted in 115 different technology matrices (A_new_i, i =
1, 2, ..., 115). While all other values in the inventory remained unchanged, the difference
between these 115 A_new_i matrices was only due to the different truck transportation processes
used for the lime process. I calculated 115 different g vectors (the g vector has one value, fossil
CO2 emissions, in this study) from the 115 different A_new_i matrices; the range generated was the final
result for producing 1 kg of lime in the US LCI database.
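
The replacement procedure can be sketched as a loop over candidate inputs: zero out the summed transportation input, reassign the same amount to one alternative process, and re-solve the system for each scenario. All matrix sizes, indices, and coefficients below are toy values of my own, not the actual US LCI data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = np.zeros((n, n))
A[1, 0] = 130.03e-3       # summed truck-transport input to process 0 (toy units)
A[4, 0] = 0.8             # some other input to the process of interest
b = rng.uniform(0.0, 1.0, n)   # toy fossil CO2 emission factors per unit output
f = np.zeros(n)
f[0] = 1.0                # functional unit: 1 unit of process 0

truck_alternatives = [1, 2, 3]   # rows holding candidate truck processes
results = []
for alt in truck_alternatives:
    A_new = A.copy()
    amount = A[1, 0]             # keep the summed input amount fixed
    A_new[1, 0] = 0.0
    A_new[alt, 0] = amount       # scenario: a different truck process supplies it
    x = np.linalg.solve(np.eye(n) - A_new, f)
    results.append(b @ x)        # total fossil CO2 under this scenario

lo, hi = min(results), max(results)   # the scenario-uncertainty range
```

The first alternative here reproduces the default scenario (it reassigns the input to the same row), so the range always brackets the default result.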

The overall uncertainties across all US LCI processes caused by choosing different electricity
inputs are shown in Figure 9-30. For each process, I calculated the fossil CO2 emissions based
on 99 different scenarios. In each scenario, the process’ electricity input was assumed to be
one of the 99 electricity generation processes in the US LCI database. The percentage values
in Figure 9-30 were calculated as the differences between each alternative and the
default scenario, which was assumed to be the emission calculated from using “Electricity, grid,
US 2008” as the input. Note that cut-off processes had zero total CO2 emissions.

The minimum and maximum values from all of these scenarios were used to calculate the
range uncertainty for each process. Without including outliers, the results show that the
averages (across all US LCI processes) of the minimum and maximum scenario uncertainties are
-31% and 7%, respectively. If the outliers are included, the maximum average uncertainty
increases to 31%. In all cases, the outliers can be traced to using “Electricity, onsite boiler,
hardwood mill, average, NE-NC” as the electricity input. The reason why this particular
electricity input causes outliers may be inferred from its nature. Onsite boilers are not as
efficient as power plants in generating electricity; they likely incur higher CO2 emissions. I
can thus assume that it is sensible to exclude “Electricity, onsite boiler, hardwood mill, average,
NE-NC” from the general scenario uncertainty estimation.


Figure 9-30: Overall scenario uncertainty from using alternative electricity inputs. The default
electricity input for each process was assumed to be “Electricity, grid, US, 2008”. Left: fossil
CO2 emissions, as percentage difference from the default, for 1401 gate-to-gate processes.
Right: same as the left panel, but without outliers caused by using “Electricity, onsite boiler,
hardwood mill, average, NE-NC”. Emissions for each process were calculated using 99
different scenarios.
In the US LCI database, the default scenario for fossil CO2 emissions used “Electricity, at grid,
US, 2000” as the electricity input for the “Iron, sand casted” process. I chose this process as an
example because, across all US LCI processes, it has the largest number of inputs. The default
scenario resulted in 1120 kg of fossil CO2 emissions. Figure 9-31 shows the fossil CO2
emissions for 1 metric ton of the “Iron, sand casted” process, resulting from all potential
scenarios. The top graph shows that the total CO2 emissions vary between 240 and 1770 kg.
However, there was again an outlier at 19 metric tons (not shown), also caused by using
“Electricity, onsite boiler, hardwood mill, average, NE-NC”.

Similar to the parameter uncertainty results shown in the Advanced Material of Chapter 8, I
listed the top 10 processes contributing to the total emissions of the “Iron, sand casted” process. In


the bottom graph, the rows show the CO2 emission results from different product
contributors, sorted by their maximum value in descending order. For the first and
third processes, the maximum results are quite separated from the rest. Consequently, for these
processes, if the maximum values are excluded, the uncertainties are considerably reduced. On
the other hand, the uncertainty in the second row cannot be considerably reduced if the
maximum value is excluded, meaning the uncertainty resulting from different scenarios is large.
Thus, this process (bituminous coal electricity) is also the most important contributor to the
uncertainty in producing sand casted iron. In contrast, some processes only contribute
to the total emissions, not the uncertainty. For example, the “Bituminous coal, combusted in
industrial boiler” process has a consistent emission value, indicating that it contributes
the same amount of emissions regardless of the scenario chosen.

Figure 9-31: Fossil CO2 emissions for one metric ton of “Iron, sand casted”. The top graph
shows the total emissions (direct and indirect). The bottom graph shows the results from the
top ten product contributors under 99 alternative scenarios (markers).


Because the “Iron, sand casted” process excluded transportation as an input, I chose another
example to show the uncertainty due to using alternative transportation inputs. Figure 9-32 shows
the total fossil CO2 emissions for producing one metric ton of “Benzene, at plant”. The default
scenario used the processes specified in the current US LCI database as inputs. The markers
indicate the results from alternative scenarios in four electricity and transportation industries (99
electricity and 151 transportation scenarios); the number of scenarios differs in each industry.
As in Figure 9-31, I excluded the outlier results from “Electricity, onsite boiler, hardwood mill,
average, NE-NC”. The results from different scenarios ranged from 540 to 567 kg in truck
transportation, and from 530 to 563 kg in electricity; in both industries, the scenario uncertainties
are similar. Jointly, truck transportation and electricity form a range (530 to 567 kg) that is only
-2 to 5% different from the result in the default scenario (540.6 kg). The range of results from
alternative scenarios in barge and pipeline transportation was much narrower; however, these
industries only had a few alternative scenarios (fewer than three each).

Figure 9-32: Total fossil CO2 emissions, in kg, for one metric ton of “Benzene, at plant”, using
alternative scenarios in the electricity and transportation industries.

Conclusion

I used two methods to evaluate the scenario uncertainties in the US LCI database. The first
method focused on the uncertainties due to choices of alternative electricity and transportation
in the inventory. The second method evaluated the scenario uncertainties caused by choosing
different processes in each industrial category. The results from the first method indicate that
scenario uncertainties are generally within -30% and 10% of the default choice, and may
show some outliers. The second method yields larger scenario uncertainties: by choosing
different processes within the same industrial category, the CO2 emissions for 1 kg of industrial
product can vary between 0 and 100 metric tons. Both methods show that the scenario


uncertainties in matrix-based LCA models can be large. The results calculated based on the
current US LCI database are only preliminary as the connections between processes are rather
sparse. However, this method can provide better results on future database versions, which
ideally will include more well-connected inventories.


Chapter 10: Life Cycle Impact Assessment


In this chapter, we complete the discussion of the major phases of the LCA Standard by
defining and describing life cycle impact assessment (LCIA). This is the part of the standard
where we translate the inventory results already created into new information related to the
impacts of those flows, in order to help to assess their significance. These impacts may be on
ecosystems, humans, or resources. As with the previous discussions about quantitative
methods, life cycle impact assessment involves applying a series of factors to inventory results
to generate impact estimates. While many impact assessment models exist, we begin by
assessing some of the more common and simpler impact categories, such as those used for
energy use and greenhouse gases, and then move on to more comprehensive LCIA methods
used around the world. As always, our focus is on understanding the quantitative
fundamentals associated with these efforts.

Learning Objectives for the Chapter


At the end of this chapter, you should be able to:

1. Describe various impact categories of interest in LCA and the ways in which those
impacts can be informed by inventory flow information.

2. Describe in words the cause-effect chain linking inventory flows to impacts and
damages for various examples.

3. List and describe the various mandatory and optional elements of life cycle impact
assessment.

4. Select and justify LCIA methods for a study, and perform a classification and
characterization analysis using the cumulative energy demand (CED) and/or climate
change (IPCC) methods for a given set of inventory flows.

Why Impact Assessment?


To help motivate the general need to pursue LCIA, we create a hypothetical set of LCI results
that we will revisit throughout the chapter. These LCI results for two alternative product
systems, A and B, may have been generated either as part of a prior study intended to only be
an LCI (as opposed to an LCA), or as the LCI results to be subsequently used in an LCA. Due
to either data constraints, or explicitly chosen statements in the goal and scope of the study,
only a few flows have been tracked. As shown in Figure 10-1, a life cycle interpretation analysis
of these results based only on the inventory would be challenging. Option A has more fossil
CO2 emissions (5 kg) and use of crude oil (10 kg), but fewer emissions of SO2 (2 kg), than
Option B (2 kg, 8 kg, and 5 kg, respectively). The tradeoff (how much of one flow we would need
to trade for less of another flow) alone is not enough to support an interpretation or a decision
in favor of A versus B. We would need more information related to, for example,
how much we cared about or valued the various flows, or how much damage might occur in
the environment if the various flows occurred.
Flow | Compartment | Units | Option A | Option B
Carbon dioxide, fossil | air | kg | 5 | 2
Sulfur dioxide | air | kg | 2 | 5
Crude oil | | kg | 10 | 8
Figure 10-1: Hypothetical Study LCI Results

The ideal case, of course, for the interpretation of LCI results is vector dominance, where
the flows of one option are lower than those of the other across all inventoried flows.
In such a case, we would always prefer the option with lower flows. In reality, vector
dominance in LCI results is rare, even with a small number of inventoried flows. As inventory
flows are added in the scope (i.e., more rows in Figure 10-1), the likelihood of vector
dominance nears zero, because more tradeoffs in flows are likely to appear across options. It
is the existence of tradeoffs, and the typical comparative use of LCA across multiple product
systems, that makes us seek an improved method to allow us to choose between alternatives
in LCA studies, and for that we need to use impact assessment.
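The dominance check described above can be sketched in a few lines of code. This is a minimal illustration using the hypothetical flows of Figure 10-1; the function name and data layout are our own, not part of any LCA tool.

```python
# Minimal check for vector dominance between two LCI result vectors.
# Flow names and amounts follow the hypothetical study in Figure 10-1.

def dominates(a, b):
    """True if option `a` is no worse than `b` for every flow and
    strictly better for at least one flow."""
    assert a.keys() == b.keys(), "options must inventory the same flows"
    return all(a[f] <= b[f] for f in a) and any(a[f] < b[f] for f in a)

option_a = {"Carbon dioxide, fossil": 5, "Sulfur dioxide": 2, "Crude oil": 10}
option_b = {"Carbon dioxide, fossil": 2, "Sulfur dioxide": 5, "Crude oil": 8}

# Neither option dominates the other: a tradeoff remains, which is
# exactly the situation where impact assessment is needed.
print(dominates(option_a, option_b))  # False
print(dominates(option_b, option_a))  # False
```

With more inventoried flows (more rows), a `True` result from either call becomes increasingly unlikely, which is the point made in the text.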

Overview of Impacts and Impact Assessment


In Chapter 1, we motivated the idea of thinking about impacts of product systems. We showed
that we might have concern for various impacts, and seek indicators to help us to understand
how to measure and assess those impacts. For example, we described how we might measure
our concern for fossil fuel depletion by tracking coal and natural gas use (in MJ or BTU).
Similarly, we described how we might measure our concern for climate change in terms of
greenhouse gas emissions (in kg or tons). These indicators, which led to our LCI results, were
intended to be short-term placeholders on our path to considering the eventual impacts. In
this chapter, we take the next steps needed to achieve this goal.

The idea of impact assessment is not new. Scientists have been performing impact assessments
for decades. Entire career domains exist for those interested in environmental impact
assessment, risk assessment, performance benchmarking, etc. A key difference between life
cycle impact assessment and other frameworks is its link to a particular functional unit (and of
course the entire life cycle as a boundary), which focuses our attention on impacts as a function
of that specific normalized quantity. Typically, risk or environmental impact assessments are
for entire projects or products, such as the environmental impact of a highway expansion or
a new commercial development. That said, the methods we will use in life cycle impact
assessment (LCIA) have in general been informed and derived from activities in these other
domains. Impact assessment is about being able to consider the actual effects on humans,
ecosystems, and resources, instead of merely tracking quantities like tons of emissions or
gallons of fuel consumed as a result of production.


This chapter cannot fully describe the various methods needed to perform LCIA. Our focus
is on explaining what the ISO LCA Standard requires in terms of LCIA, and on the qualitative
and quantitative skills needed to document and complete this phase. Before discussing the
mandatory and optional elements for LCIA in the Standard, we reintroduce the notion that
there are various impacts that one might be concerned about, and discuss in limited detail how
we might frame our concerns for such impacts in an LCA study. There are impact assessment
categories for energy use and climate change, as we will see later in the chapter. But before we
get to the formal definitions of those methods, we will reuse energy and climate examples along
the way, as these two concepts are likely already familiar to you. Many of the other impact
categories and methods available are much more complex, and we will save all discussion of
those for later.

Figure 10-2 summarizes the different classes of issues of concern, called impact categories,
which are commonly used in LCA studies. Also included is the scale of impact (e.g., local or
global), and the typical kinds of LCI data results that can be used as inputs into methods
created to quantitatively assess these impacts. This list of impact categories is not intended to
be exhaustive in terms of listing all potential impact categories for which an individual or a
party might have concern, or in terms of the potentially connected LCI results.

Indicators of impacts exist, as first discussed in Chapter 1. However, ‘indicator’ is just a generic
word that refers to a signal. As most of you know, scientific consensus suggests greenhouse
gas emissions are indicators of global warming. We will see that impact categories (like global
warming) have their own specifically defined indicators.


Impact Category | Scale | Examples of LCI Data (i.e., classification)
Global Warming | Global | Carbon Dioxide (CO2), Nitrous Oxide (N2O), Methane (CH4), Chlorofluorocarbons (CFCs), Hydrochlorofluorocarbons (HCFCs), Methyl Bromide (CH3Br)
Stratospheric Ozone Depletion | Global | Chlorofluorocarbons (CFCs), Hydrochlorofluorocarbons (HCFCs), Halons, Methyl Bromide (CH3Br)
Acidification | Regional, Local | Sulfur Oxides (SOx), Nitrogen Oxides (NOx), Hydrochloric Acid (HCl), Hydrofluoric Acid (HF), Ammonia (NH4)
Eutrophication | Local | Phosphate (PO4), Nitrogen Oxide (NO), Nitrogen Dioxide (NO2), Nitrates, Ammonia (NH4)
Photochemical Smog | Local | Non-methane hydrocarbon (NMHC)
Terrestrial Toxicity | Local | Toxic chemicals with a reported lethal concentration to rodents
Aquatic Toxicity | Local | Toxic chemicals with a reported lethal concentration to fish
Human Health | Global, Regional, Local | Total releases to air, water, and soil
Resource Depletion | Global, Regional, Local | Quantity of minerals used, Quantity of fossil fuels used
Land Use | Global, Regional, Local | Quantity disposed of in a landfill or other land modifications
Water Use | Regional, Local | Water used or consumed
Figure 10-2: Summary of Impact Categories (US EPA 2006)

Impact Assessment Models for LCA


Figure 10-2 introduced various individual impact categories, but most of the attention and
examples so far have related to climate change and energy. While these continue to be the
most popular impact categories of interest in LCA (partly due to the relatively small amount
of uncertainty regarding their application and thus the large degree of scientific consensus on
their use), more comprehensive models of impacts exist that encompass multiple impact
categories and have been incorporated into LCA studies and software tools. Some of the most
frequently used LCIA methods are summarized and mapped to their available characterization
models in Figure 10-3. Some of these may already be familiar to those that have reviewed
existing studies.

As Figure 10-3 shows, some LCIA methods are focused on a single category, e.g., cumulative
energy demand (CED), while others broadly encompass all of the listed categories. Note that
only the TRACI method is US-focused, with the remainder being mostly Europe-focused.


[Figure 10-3 is a matrix of LCIA methods (rows: CED, CML2002, Eco-indicator 99, EDIP 2003/EDIP97, EPS 2000, Impact 2002+, IPCC, LIME, LUCAS, MEEuP, ReCiPe, Swiss Ecoscarcity 07, TRACI, and USEtox) against impact categories (columns: climate change, ozone depletion, ozone formation, respiratory inorganics, ionising radiation, human toxicity, ecotoxicity, acidification, terrestrial eutrophication, aquatic eutrophication, land use, and resource consumption). CED and IPCC each cover a single category (resource consumption and climate change, respectively); USEtox covers only human toxicity and ecotoxicity; the remaining methods cover most or all of the listed categories.]

Figure 10-3: Summary of Impact Categories (Characterization Models) Available
in Popular LCIA Methods (modified from ILCD 2010)

We will not be forced to choose a single impact category of concern. A study may set its study
design parameters to include several, all, or none of the impact categories from the list in
Figure 10-2, and thus may use one or more of the LCIA methods in Figure 10-3, with varying
comprehensiveness. Using a diverse set of impact categories could allow us to make relevant
comparisons across inventory flows so that we could credibly assess whether we should prefer
a product system that releases 3 kg less of CO2 to air or one that releases 3 kg less SO2 to air
(as in Figure 10-1). If our concerns are cross-media, such that some of our releases are to air
and some are to water or soil, the challenge is even greater because we then need to balance
concern for impacts in both ecosystems. Being able to take this next step in our LCA studies
beyond merely providing LCI results is significant. The degree of difficulty and effort needed
to successfully complete LCIA precludes some authors from even attempting it (which, as
discussed above, is a big driver for why so many studies end at the LCI phase). As LCIA is
perhaps most useful in support of comparative LCAs, it will not typically be very interesting
or useful to know the LCIA results of a single product system.


Beyond using multiple impact categories, multiple LCIA methods are often used in studies to
assess whether different approaches agree on the severity of the chosen impacts. Of course,
this is only useful when the LCIA methods use different underlying characterization models.
The outputs of LCIA methods will be discussed below.

In order to understand impact assessment, and thus LCIA, it is important to understand how
LCI results may eventually connect to impacts. Figure 10-4 shows the cause-effect chain (also
referred to as the environmental mechanism) for an emission category. Similar chains exist for
impact categories like resource depletion and land use. At the top are emissions, sometimes
referred to as stressors, so called because they are triggers for potential impacts (the ‘causes’
in the cause-effect chain). While drawn as single chains in the figure, there may be various
stressors all leading to the same potential impacts or damages. Likewise, the same emissions
may be the first link in the chain for multiple effects (not shown).

Emissions → Concentrations → Impacts (midpoints) → Damages (endpoints)

Figure 10-4: General Cause-Effect Chain for Environmental Impacts (Adapted from Finnveden 1992)

Next in the chain are concentrations, which in the case of air emissions are the resulting
contribution of increased emissions with respect to the rest of the natural and manmade
molecules in the atmosphere. A relatively small emission would have a negligible effect on
concentrations, while a large emission may have a noticeable effect on concentrations. In the
case of climate change impacts, increased emissions of greenhouse gases lead to increased
concentrations of greenhouse gases in the atmosphere.

As concentrations are changed in the environment, we would expect to see intermediate
impacts. For the case of climate change, increased concentrations of greenhouse gases are
expected to lead to increased warming (actually, radiative forcing). Emissions of conventional
pollutants lead to increased concentrations in the local atmosphere. These
intermediate points of the chain are also called midpoints, which are quantifiable effects that
can be linked back to the original emissions, but are not fully indicative of the eventual effects
in the chain.


Finally, damages arise from the impacts. These damages are also referred to as endpoints,
since they are the final part of the chain and represent the inevitable ending point with respect
to the original stressors. These damages or endpoints are the ‘effects’ in the cause-effect chain.
For global warming (or climate change), the damages/endpoints of concern may be
destruction of coral reefs, rising sea levels, etc. For conventional pollutants, endpoints may be
human health effects due to increased exposure to concentrations, like increases in asthma
cases or hospital admissions. For ozone depletion, we may be concerned with increases in
human cancer rates due to increased UV radiation. Note that LCIA will not quantify these
specific damages (i.e., it will not give an estimate of the number of coral reefs destroyed or
height of sea level change), but it will provide other useful and relevant information that could
subsequently allow us to consider them. An example is that human health damages may be
represented by disability-adjusted life years (DALYs), which are an estimate of cumulative
years of human life lost due to pollution and other stressors. Ecosystem damages may be
represented as potentially disappeared fraction (PDF) of species lost for a particular area
and time.

Fortunately, as we will learn below, the science behind impact assessment, while continuing to
be developed, is available for us to use without needing to build it ourselves. But using the
relevant methods still requires substantial understanding of how these methods work. Getting
to the idea of an endpoint is hard, and again, that is partly why people stop at the inventory
stage.

Along the way, we have seen how LCAs can yield potentially large lists of inventory results.
These are generally lists of inputs needed (e.g., fuels used) and outputs created (e.g., GHG
emissions) by our product systems. The prospect of impact assessment may create an
intimidating thought of “how will we pull together a coherent view of impact given this large
list of effects?” However, in reality, impact assessment methods are created exactly to deal
with using large inventories as inputs. Impact assessment methods convert the detailed
inventory information into estimates of associated impacts.

ISO Life Cycle Impact Assessment


In Chapter 4, we began our summary discussion of the ISO Standard. Figure 10-5 repeats the
original Figure 4-1 which overviews the major phases of the Standard. As we have already
discussed, the various phases are all iterative. We remind you that the text in this chapter is
not intended to replace a careful read of the ISO LCA Standard documents specific to LCIA,
as here we only summarize the information, link it to previously discussed material, and show
examples.


Figure 10-5: Overview of ISO LCA Framework (Source: ISO 14040:2006)

In life cycle impact assessment (LCIA), we associate our LCI results with various impact
categories and then apply other factors to these categorized results to give us information
about the relevant impacts of our results. We also then iteratively return to the life cycle
interpretation phase so that we can add to our interpretations made when only the LCI was
complete. LCIA also connects iteratively back to the LCI phase, so that if the LCIA results do
not help us in expected ways, we can refine the inventory analysis to try to improve our study.
While not shown as a direct connection in Figure 10-5, we may also iteratively decide to adjust
the study design parameters (i.e., goal and scope) if we interpret that our impact assessment
results are unable to meet our objectives.

As we will see below, some elements of LCIA may be subjective (i.e., influenced by our own
value judgments). As stated in the Standard, it is important to be as transparent as possible
about assumptions and intentions when documenting LCIA work so as to be clear about these
subjective qualities. Figure 10-6 shows the various steps in LCIA, which includes several
mandatory and several optional elements, each of which is discussed below. The steps in LCIA
are commonly referred to by the shorthand name in parentheses in the figure.


Figure 10-6: Overview of Elements of LCIA Phase (Source: ISO 14040:2006)

Mandatory Elements of LCIA


Selection

The first mandatory element of LCIA is the selection of impact categories, their indicators,
and the characterization models and LCIA methods to be used. In practice, this element also
involves sufficiently documenting the rationale behind these choices, which need to be
consistent with the stated goal and scope of the study. While we will discuss more of the
various possible impact categories later in this chapter (as well as indicators and
characterization models), we know from previous discussions that climate change is an impact
category. If we wanted to include climate change as one of our study’s impact categories, then
we should justify why climate change is a relevant impact category given our choice of study
design parameters (goal and scope) and/or given the product system itself – i.e., is it a product
considered to be a major potential cause of climate change?

The ISO Standard requires that the impact assessment performed encompass “a
comprehensive set of environmental issues” so that the study is not narrowly focused, for
example, on one particular hand-picked impact that might be chosen since it can easily show
low impacts. Thus, our LCIA should use multiple impact categories. Our justification should
include text for all of our chosen categories. A study's justification for the selection of impact
categories should not rest solely on the author's own personal wishes; it should also consider
those of the organization funding the study or responsible for the
product system. For example, if the organization requesting the study has long-term goals of
mitigating climate change in their actions, that would be an appropriate justification for
choosing climate change as an impact category when assessing their products.

The LCIA methods selected should be relevant to the geographical area of the study. An LCA
on a product manufactured in a US factory would not be well-served by using an LCIA method
primarily developed in, and intended to be applied in, Europe. However, the majority of LCIA
models have been created only for the US and Europe, and thus, it can be challenging to select
a model if considering a product system in Asia. In such cases, it may make sense to use
multiple models outside of the relevant geographic region to consider ranges of results and to
try to generalize findings.

This step should also document and reference the studies on impacts used, i.e., the specific
scientific studies used to assess impacts of greenhouse gases. Of course, the vast majority of
LCA studies will be using well-established LCIA methods. Beyond this initial LCIA element
for justification of choices, the remaining mandatory elements involve the organization and
application of indicator model values to your previously generated inventory results.

Classification

Classification is the first quantitative element of LCIA, where the various inventory results
are organized such that they map into the frameworks of the relevant impact categories
chosen for the study. Classification involves copying your inventory items into a
number of different piles, where each pile is associated with one of the impact categories used
by the selected LCIA methods.

Consider again the hypothetical inventory from Figure 10-1. If a study has selected climate
change as an impact category, then the carbon dioxide, fossil inventory flow would be classified
into that pile since it is a greenhouse gas (and the other two flows would not). If you chose an
impact category for energy, then the crude oil inventory flow would be classified there (and the
other two would not). If you chose no other impact categories, then the sulfur dioxide flow
would not be classified anywhere, and would have no effect on the impact assessment.

To be able to perform classification, each LCIA method must have a list of inventory flows
connected to that impact. As discussed in Chapter 5, LCI results can have hundreds or
thousands of flows. Thus, the list of relevant connected flows for LCIA methods can likewise
be substantial (hundreds or thousands of interconnections). Classification has no quantitative
effect on the inventory flows other than arranging and creating piles. It is possible that the
classified list of inventory flows relevant to a chosen LCIA method have different underlying
units (e.g., kg, g, etc.). These differences will be managed in subsequent elements of the LCIA.

Amongst the most widely used impact categories are those for climate change and energy use.
Two specific underlying methods to support these are the Intergovernmental Panel on
Climate Change (IPCC) 100-year global warming potential method and the cumulative energy
demand (CED) method. We describe each below and use them to illustrate the mechanics of
the various LCIA elements through examples. Since the IPCC and CED methods have all of
the mandatory elements, they qualify as LCIA methods, but they are fairly simplistic and
singularly focused compared to some of the more advanced LCIA methods shown in Figure
10-3. Studies that only consider energy and global warming impacts are sometimes viewed as
being narrowly focused with respect to impact assessment, especially since energy and climate
results tend to be very similar.

Figure 10-7 provides an abridged list of substance names and chemical formulas (generally
greenhouse gases) that are classified in the IPCC method introduced above. Thus, any of the
substances in this list that are in an LCI would be copied into the pile of classified substances
to be used in assessing the impacts of climate change.
Name    Chemical Formula
Carbon dioxide CO2
Methane CH4
Nitrous oxide N2O
CFC-11 CCl3F
CFC-12 CCl2F2
CFC-13 CClF3
CFC-113 CCl2FCClF2
CFC-114 CClF2CClF2
CFC-115 CClF2CF3
Halon-1301 CBrF3
Halon-1211 CBrClF2
Halon-2402 CBrF2CBrF2
Carbon tetrachloride CCl4
Methyl bromide CH3Br
Methyl chloroform CH3CCl3
HCFC-22 CHClF2
HCFC-123 CHCl2CF3
HCFC-124 CHClFCF3
HCFC-141b CH3CCl2F
HCFC-142b CH3CClF2
HCFC-225ca CHCl2CF2CF3
HCFC-225cb CHClFCF2CClF2
Figure 10-7: (Abridged) List of Substances Classified into IPCC (2007) LCIA Method

Likewise, Figure 10-8 provides an example list of energy flows that would be classified into
the CED method. Note that the CED method further sub-classifies renewable and non-
renewable energy, as well as particular subcategories of energy (e.g., fossil or solar). Also note
that the listings in Figure 10-8 are not specific to known flows in any of the databases. One
database might have a flow for a particular kind of coal or wood that is named something
different in another database.
Category | Subcategory | Included Energy Sources
Non-renewable resources | fossil | hard coal, lignite, crude oil, natural gas, coal mining off-gas, peat
Non-renewable resources | nuclear | uranium
Non-renewable resources | primary forest | wood and biomass from primary forests
Renewable resources | biomass | wood, food products, biomass from agriculture, e.g. straw
Renewable resources | wind | wind energy
Renewable resources | solar | solar energy (used for heat & electricity)
Renewable resources | geothermal | geothermal energy (shallow: 100-300 m)
Renewable resources | water | run-of-river hydro power, reservoir hydro power
Figure 10-8: Energy Sources Classified into Cumulative Energy Demand (CED) LCIA Method
(Source: Hischier 2010)
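The sub-classification in Figure 10-8 can be sketched as a simple roll-up, assuming all energy flows have already been converted to MJ. The flow amounts and the dictionary layout here are hypothetical; the category/subcategory grouping follows the figure.

```python
# Sketch: rolling classified energy flows (already in MJ) up into the
# CED categories of Figure 10-8. Amounts below are hypothetical.

SUBCATEGORY = {  # energy source -> (category, subcategory)
    "hard coal": ("non-renewable", "fossil"),
    "crude oil": ("non-renewable", "fossil"),
    "natural gas": ("non-renewable", "fossil"),
    "uranium": ("non-renewable", "nuclear"),
    "wind energy": ("renewable", "wind"),
    "solar energy": ("renewable", "solar"),
}

def cumulative_energy_demand(flows_mj):
    """Sum MJ per (category, subcategory) pair."""
    totals = {}
    for source, mj in flows_mj.items():
        key = SUBCATEGORY[source]
        totals[key] = totals.get(key, 0.0) + mj
    return totals

demand = cumulative_energy_demand(
    {"hard coal": 120.0, "natural gas": 80.0, "wind energy": 15.0}
)
print(demand[("non-renewable", "fossil")])  # 200.0
print(demand[("renewable", "wind")])        # 15.0
```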

If classification is done manually (which is rare), then various quality control problems could
occur in creating the piles of classified inventory flows. For example, you would need to look
at each of your inventory results and then check every LCIA method’s list of classified
substances to see whether it should be put into that pile, and to put it into the correct pile. It
would be easy to make errors in such a process, either by not noticing that certain inventory
flows are classified into a method, or by classifying the wrong flows (e.g., those in an adjacent
row).

In practice, most classification is done via the use of software tools and/or matrix
manipulation (see Advanced Material at end of this chapter). Even so, making the classification
process work efficiently is not easy. There are also potential problems associated with the
computerized classification process (Hischier 2010). First, inventory flows reported in
databases (or from primary data collection) may be named inconsistently with scientific
practice, and cause mismatches or inability to match with LCIA methods. For example, one
source may list CO2 and another carbon dioxide. Behind the scenes of the software tools,
many ‘matches’ are done by using CAS numbers to avoid such problems. CAS Numbers give
unique identities to specific chemicals (e.g., formaldehyde is 50-00-0). Beyond naming and
matching problems, an LCIA method may have many listed flows that should be classified
under it, but the inventory done may be so streamlined that none of the classified flows have
been estimated. Conversely, a relatively substantial LCI may have no flows that can be
classified into any of the selected LCIA methods. In short, the connection between flows in
an LCI and the classification list of flows in the LCIA method is not one-to-one. Of course,
should problems like these be identified during the study, then changes should be made to the
study’s goal, scope, or inventory results to ensure that relevant flows are identified so as to be
able to make use of the selected LCIA method (or, of course, the LCIA method should be
adjusted).
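The CAS-based matching described above can be sketched as follows. The synonym table and the method's flow list are illustrative stubs of our own; only the formaldehyde CAS number (50-00-0) appears in the text, while the carbon dioxide (124-38-9) and methane (74-82-8) numbers are standard CAS registry numbers.

```python
# Sketch: matching LCI flow names to an LCIA method's classified flows
# via CAS numbers, so that naming differences ("CO2" vs "carbon
# dioxide") do not cause missed classifications. Real tools ship much
# larger synonym tables than this stub.

NAME_TO_CAS = {
    "co2": "124-38-9",
    "carbon dioxide": "124-38-9",
    "carbon dioxide, fossil": "124-38-9",
    "formaldehyde": "50-00-0",
}

def to_cas(flow_name):
    """Normalize a flow name and look up its CAS number, if known."""
    return NAME_TO_CAS.get(flow_name.strip().lower())

# An LCIA method's classified flows, keyed by CAS number.
METHOD_FLOWS = {"124-38-9", "74-82-8"}  # carbon dioxide, methane

def matches_method(flow_name):
    cas = to_cas(flow_name)
    return cas is not None and cas in METHOD_FLOWS

print(matches_method("CO2"))             # True
print(matches_method("carbon dioxide"))  # True
print(matches_method("Sulfur dioxide"))  # False (not in the stub table)
```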


This potential disconnect between available and quantified inventoried flows and the
connections to the inputs of LCIA methods is critical to understand. Since most LCIA
methods have a large list of classifiable LCI flows, it is critical that inventory efforts are
sufficiently robust so as to make full use of the methods. Thus, inventories must track a
sufficient number of flows needed for the classification step of the chosen method. A
significant risk is posed when doing primary data collection of a new process. Imagine the case
where the study author has chosen a climate change method for LCIA. If only CO2 is
inventoried as part of the boundary set in the data collection effort, then the potential climate
change effects due to non-CO2 GHGs (which are more potent) cannot be considered in the
LCIA. For example, if another round of data collection could measure emissions of methane
or other GHGs, it could have a substantial effect on the results. One could use IO-LCA
screening as a guide to help explicitly screen for inventory flows that should be measured or
verified to be zero when using a particular method.

It is possible that you could have chosen an impact category (or categories) such that none of
your quantified inventory flows are classified into the pile for that category, giving you a zero
impact result. While unlikely, this again would be a situation where you would want to iterate
back to the inventory stage and either redouble data collection efforts, or iterate all the way
back to goal and scope to change the parameters of the study.

Inventory flows can be classified into multiple impact category piles. For example, various
types of air emissions may be classified into a climate change impact category, an acidification
impact category, and others. In these cases, the entire flow is classified into each pile (not
assigned to only one of the piles, and not having its flows allocated across the impact piles).
Figure 10-9 shows the classification results for the hypothetical inventory example (Figure
10-1) for the IPCC and CED methods. Once the classification is completed, the LCIA
proceeds to the next required step, characterization.
Classification: Climate Change Impact Category (IPCC)
Flow | Compartment | Units | Option A | Option B
Carbon dioxide, fossil | air | kg | 5 | 2

Classification: Energy Impact Category (CED)
Flow | Compartment | Units | Option A | Option B
Crude oil | | kg | 10 | 8

Figure 10-9: Classification of Hypothetical Inventory

Characterization

The characterization element of LCIA quantitatively transforms each set of classified
inventory flows via characterization factors (also called equivalency factors) to create
impact category indicators relevant to resources, ecosystems, and human health. The
purpose of characterization is to apply scientific knowledge of relative impacts such that all
classified flows for an impact can be converted into common units for comparison. The
characterization methods are pre-existing scientific studies that are leveraged in order to
create the common units. For example, in the climate change impact example we have been
using in this chapter, the characterization method is from IPCC (2013). This IPCC method is
well known for creating the global warming potential equivalency values for greenhouse gases,
where CO2 is by definition given a value of 1 and all other greenhouse gases have a factor in
equivalent kg of CO2, also abbreviated as CO2-equiv or CO2e. Similar to other methods, this
creates (in effect) a weighting factor adjustment for greenhouse gases. Furthermore, since all
characterized values are in equivalent kg of CO2, the values can be aggregated and reported in
the common unit of an impact category indicator. The IPCC report actually provides several
sets of characterization factors, for different time horizons of greenhouse gases in the
atmosphere. The factors typically used in LCA and other studies are the IPCC 100-year time
horizon values, but values for other numbers of years are also available. Figure 10-10 shows
the characterization (equivalency) factors for greenhouse gases in the IPCC Fifth Assessment
Report (AR5) method for 20-year and 100-year time horizons (2013). The values suggest that
1 kg of methane has the warming potential of 28 kg of carbon dioxide in a 100 year timeframe
(or 84 kg of carbon dioxide in a 20 year timeframe). Any classified greenhouse gases or other
substances appearing in the list of characterized flows would then be multiplied by the ‘kg
CO2e/kg of substance’ factors to create the characterized value for each inventory flow. Note
that in the IPCC method, biogenic CO2 has a characterization factor of zero, since it is
assumed that biogenic carbon emissions release carbon that was previously removed from
the atmosphere and stored biogenically.


Name                   Chemical Formula   Characterization Factor (kg CO2-eq / kg of substance)
                                          20 years    100 years
Carbon dioxide         CO2                1           1
Methane                CH4                84          28
Methane, fossil        CH4                85          30
Nitrous oxide          N2O                264         265
CFC-11                 CCl3F              6,900       4,660
CFC-12                 CCl2F2             10,800      10,200
CFC-13                 CClF3              10,900      13,900
CFC-113                CCl2FCClF2         6,490       5,820
CFC-114                CClF2CClF2         7,710       8,590
CFC-115                CClF2CF3           5,860       7,670
Halon-1301             CBrF3              7,800       6,290
Halon-1211             CBrClF2            4,590       1,750
Halon-2402             CBrF2CBrF2         3,440       1,470
Carbon tetrachloride   CCl4               3,480       1,730
Methyl bromide         CH3Br              9           2
Methyl chloroform      CH3CCl3            578         160
HCFC-22                CHClF2             5,280       1,760
HCFC-123               CHCl2CF3           292         79
HCFC-124               CHClFCF3           1,870       527
HCFC-141b              CH3CCl2F           2,550       782
HCFC-142b              CH3CClF2           5,020       1,980
HCFC-225ca             CHCl2CF2CF3        469         127
HCFC-225cb             CHClFCF2CClF2      1,860       525

Figure 10-10: IPCC AR5 (2013) 20-year and 100-year Characterization Factors (abridged)

Equation 10-1 shows how to convert from inventory units to an impact category's
characterized units given a characterization factor (abbreviated char. factor).

Characterized flow = flow (inventory unit) × char. factor (characterized units / inventory unit)   (10-1)
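Equation 10-1 can be sketched in a few lines of code. The factors below are the IPCC AR5 100-year values from Figure 10-10; the inventory amounts are hypothetical, for illustration only.

```python
# Sketch of Equation 10-1: multiply each classified inventory flow (kg) by its
# characterization factor (kg CO2-eq/kg), then sum into the category indicator.
GWP_100 = {"Carbon dioxide, fossil": 1, "Methane": 28, "Nitrous oxide": 265}

inventory = {"Carbon dioxide, fossil": 5.0, "Methane": 0.1}  # kg, hypothetical amounts

characterized = {flow: kg * GWP_100[flow] for flow, kg in inventory.items()}
indicator = sum(characterized.values())  # kg CO2-eq: 5*1 + 0.1*28 = 7.8
```

Because every characterized value is in the same unit (kg CO2-eq), the final sum is a valid impact category indicator.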

While similar in application, the Cumulative Energy Demand (CED) method introduced
above has multiple subcategories for which inventory flows are classified, and thus an
additional level of characterization factors by subcategory. The characterization factors
transform original physical units of energy from each source into overall MJ-equivalent
category indicator values. Category indicators for CED are reported by subcategory (e.g.,
fossil, nuclear, solar, or wind), aggregated into the categories (e.g., non-renewable and
renewable), and then aggregated into a total (cumulative) energy demand. Figure 10-11 shows
CED characterization values used in the ecoinvent model.


CED characterization factors (MJ-equivalent per unit)

Source                            Unit   Category        Subcategory        Factor
Coal, brown                       kg     Non-renewable   fossil             9.90
Coal, hard                        kg     Non-renewable   fossil             19.10
Natural gas                       Nm3    Non-renewable   fossil             38.29
Crude oil                         kg     Non-renewable   fossil             45.80
Peat                              kg     Non-renewable   fossil             9.90
Uranium                           kg     Non-renewable   nuclear            560,000
Energy, biomass, primary forest   MJ     Non-renewable   primary forest     1
Energy, in biomass                MJ     Renewable       biomass            1
Energy, wind (kinetic)            MJ     Renewable       wind (kinetic)     1
Energy, solar                     MJ     Renewable       solar              1
Energy, geothermal                MJ     Renewable       geothermal         1
Energy, hydropower (potential)    MJ     Renewable       water              1

Figure 10-11: Cumulative Energy Demand Values Used in the Ecoinvent Model
(Abridged from Hischier 2010). Nm3 means normal cubic meter (normal temperature and pressure)

Models may internally change how inventory flows map to these categories. For example,
ecoinvent maps hard and soft wood uses into the energy, biomass categories shown above.
Likewise, energy values are often pre-converted and adjusted to appropriate categories when
creating the processes in LCI databases (as in the kinetic and potential energy values in Figure
10-11). Due to differences in naming inventory flows in different systems, CED
characterization factors often have to be tailored for different frameworks (i.e., CED values
used for the US LCI database may be different than those above). All of these issues make
comparisons of CED results across different databases and software tools problematic.
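The CED bookkeeping can be sketched directly from the factors in Figure 10-11. The inventory amounts here are hypothetical; each flow's MJ-eq result rolls up by subcategory, then by category, and then into a cumulative total.

```python
# Sketch of CED characterization: each flow maps to a (category, subcategory)
# pair with an MJ-eq factor (Figure 10-11); results aggregate upward.
CED_FACTORS = {
    "Crude oil":     ("non-renewable", "fossil", 45.8),  # MJ-eq per kg
    "Coal, hard":    ("non-renewable", "fossil", 19.1),  # MJ-eq per kg
    "Energy, solar": ("renewable",     "solar",   1.0),  # MJ-eq per MJ
}
inventory = {"Crude oil": 10.0, "Energy, solar": 2.0}  # hypothetical amounts

by_subcategory, by_category = {}, {}
for flow, amount in inventory.items():
    category, sub, factor = CED_FACTORS[flow]
    by_subcategory[(category, sub)] = by_subcategory.get((category, sub), 0.0) + amount * factor
    by_category[category] = by_category.get(category, 0.0) + amount * factor

cumulative = sum(by_category.values())  # total CED in MJ-eq
```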

While not discussed in this book, the science behind the development of characterization
factors for use in LCIA methods is an extremely time-consuming and comprehensive research
task. Activities involved include identifying impact pathways, relating flows to each other, and
ultimately deriving the equivalency factors. Such research takes many person-years of effort,
yet the convenience of the resulting equivalency factors, as shown above, may make the
underlying level of rigor appear to be small.


Category Indicator Results or LCIA Results

The summary of all category indicator values used is referred to as the LCIA profile. Using
the IPCC (2013, 100 year) and CED factors above, Figure 10-12 shows the LCIA profile
associated with Figure 10-9. Note that the CED values are the product of the raw values for
kg of crude oil (10 and 8 for Options A and B, respectively) with the CED characterization
factor for crude oil of 45.8 MJ-eq/kg. The CED method does not add renewable and non-
renewable impacts. CO2 values appear the same since the characterization factor is 1 unit CO2-
equivalent per unit of CO2. An example that considers the SO2 emissions in the original
example is in the end of chapter questions.
Characterization: Climate Change (IPCC 2013, 100 year)

Indicator                 Units           Option A   Option B
Equivalent CO2 releases   kg CO2 equiv.   5          2

Characterization: Energy (CED)

Indicator                 Units           Option A   Option B
Non-renewable fossil      MJ-eq.          458        366
Non-renewable nuclear     MJ-eq.          0          0
Non-renewable forest      MJ-eq.          0          0
Non-renewable total       MJ-eq.          458        366
Renewable total           MJ-eq.          0          0

Figure 10-12: LCIA Profile of Hypothetical Example

Characterization represents the last of the initial mandatory elements in LCIA, as the
remaining elements are optional, and many LCA studies skip all optional elements. If it were
to be the final step, one could next interpret the results in Figure 10-12. Given that only energy
and greenhouse gas related impacts were chosen for the study, and that these impacts tend to
be highly correlated, it is not a surprise to see that the characterized LCIA results suggest the
same result, i.e., that Option B has lower impacts than Option A. The interpretation of course
should still highlight the fact that this result occurs because of the chosen impact assessment
methods. If other impact categories were selected, different answers could result, including
tradeoffs between impacts.

There is a final step, evaluation and reporting, which is not officially in the Standard but is
described after the optional elements below. This step should be done after characterization
regardless of whether the optional elements are pursued.

The TRACI LCIA Model


The US Environmental Protection Agency (EPA) created the Tool for the Reduction and
Assessment of Chemical and Other Environmental Impacts (TRACI) LCIA tool in 2002. It
has been updated several times, and is generally calibrated to support assessment of impacts
in the US and North America. It is a fairly comprehensive midpoint-oriented LCIA method.
The TRACI characterization factors are publicly available, and have been integrated into many
tools. Even without access to such tools, it is straightforward to use the TRACI data in LCA
studies.


E-resources: The Chapter 10 folder has the user manual for TRACI version 2.1, as well
as an Excel spreadsheet of the characterization factors used in TRACI 2.1. It is instructive
to inspect the spreadsheet to appreciate how comprehensive the connections are between
chemicals and the various impact categories (e.g., acidification, global warming, etc.).

E-resources: The Chapter 10 folder has an expanded version of the US LCI process
matrix spreadsheet model introduced in Chapter 9 (with 746 processes and 949
environmental flows tracked). Like the model demonstrated earlier, it has a ‘Model Inputs and
Tech Results’ worksheet where you enter production values for processes and see the direct,
total (X), and E results. However, it adds LCIA results for the various TRACI 2.1 impact
categories and their characterization factors (from the E-resource above). The various cell
formulas in this spreadsheet, especially those involving array formulas and developer controls,
demonstrate how such models can be built from the databases.

This spreadsheet again works by entering a desired input of process Y into the blue column
of the ‘Model Inputs and Tech Results’ worksheet. To see results for various impacts, choose
one of the TRACI-categorized impacts in the top left corner of the ‘Model Inputs and Tech
Results’ sheet after entering an input value for a process (or multiple processes) in the ‘Input
Value’ column. The total sum of all characterized impacts is shown in the top left corner,
and the totals by specific flow (e.g., a specific emission) are shown in the yellow row to the
right. Characterized flows for each process are shown in the characterized columns. The
spreadsheet formats ‘significant’ results in red. The ‘Detailed Summary’ worksheet shows
which of the specific inventory flows contribute to the total selected impact category (e.g., the
specific greenhouse gas emissions leading to global warming). This worksheet also shows
which of the 746 processes most contribute to the total selected impact category, and also
shows rankings by process and inventory flow, such as the specific processes releasing the
most fossil CO2 or methane for global warming.
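The spreadsheet's mechanics follow the process matrix math introduced in Chapter 9, extended with a characterization step. A minimal sketch, using hypothetical 2-process data (not the actual US LCI values): total activity x solves (I - A)x = y, the inventory is g = Bx, and the characterized result is h = Qg, where Q holds characterization factors.

```python
import numpy as np

# Hypothetical 2-process system for illustration only.
A = np.array([[0.0, 0.2],   # inter-process requirements (technology matrix)
              [0.1, 0.0]])
B = np.array([[1.0, 0.5],   # kg of each environmental flow per unit of process output
              [0.2, 0.0]])
Q = np.array([1.0, 28.0])   # characterization factors (e.g., kg CO2-eq per kg of flow)

y = np.array([1.0, 0.0])               # final demand: 1 unit of process 1
x = np.linalg.solve(np.eye(2) - A, y)  # total process outputs across the supply chain
g = B @ x                              # life cycle inventory flows
h = Q @ g                              # characterized category indicator (kg CO2-eq)
```

The spreadsheet performs the same chain of operations with array formulas over the full 746-process technology matrix.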

Example: Estimating the life cycle global warming impacts of 1 kWh of bituminous
coal-fired electricity using the process matrix model of the US LCI database with
impact assessment.

We can estimate the total global warming impacts of coal-fired electricity production in the
US using the US LCI Excel spreadsheet LCIA model described above. As in Chapter 9, the
input to the Electricity, bituminous coal, at power plant/US process (number 416 of the 746 US LCI
processes) is 3.6 MJ (equal to 1 kWh). If the ‘Global warming air’ impact category is
chosen from the selection box, Figure 10-13 shows the top inventory flows contributing to
global warming in the overall process chain of bituminous coal-fired electricity using the
TRACI method – these results are pasted from the ‘Detailed summary’ sheet. Emissions of
fossil CO2 (across all processes) account for 96% of total estimated global warming impacts
measured in kg CO2-equivalents. Methane emissions (again across all processes) are another
4%, and all other gases are negligible.


Flow                      Emissions (kg CO2-e)   Percent of Total
Total                     1.08
Carbon dioxide, fossil    1.033                  95.7%
Methane                   0.046                  4.2%
Methane, fossil           0.0003                 0.03%
Carbon dioxide, in air    0.0003                 0.02%
Dinitrogen monoxide       0.0001                 0.01%

Figure 10-13: Top flows contributing to global warming for 1 kWh of bituminous coal-fired electricity
(US LCI)
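The ranking in such a summary is a simple contribution analysis, which can be sketched using the Figure 10-13 values:

```python
# Sketch of a contribution analysis: rank characterized flows by their share
# of the category total (values from Figure 10-13).
flows = {
    "Carbon dioxide, fossil": 1.033,
    "Methane": 0.046,
    "Methane, fossil": 0.0003,
    "Carbon dioxide, in air": 0.0003,
    "Dinitrogen monoxide": 0.0001,
}
total = sum(flows.values())
ranked = sorted(flows.items(), key=lambda item: item[1], reverse=True)
shares = [(name, kg, 100 * kg / total) for name, kg in ranked]  # percent of total
```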

Similarly, Figure 10-14 shows the top processes contributing to global warming in the overall
chain of bituminous coal-fired electricity using the TRACI method – also from the ‘Detailed
summary’ sheet. Emissions from the bituminous coal-fired electricity generation process
(across all greenhouse gases) account for 93% of total estimated global warming impacts
measured in kg CO2-equivalents. Global warming impacts from coal mining are another 4%,
and from combustion of diesel about 1%. All other processes have global warming impacts less than 1%, and
none outside the top 10 are more than 0.1%.
Process                                                Emissions (kg CO2-e)   Percent of Total
Total                                                  1.08
Electricity, bituminous coal, at power plant/US        1.004                  93.0%
Bituminous coal, at mine/US                            0.045                  4.1%
Diesel, combusted in industrial boiler/US              0.011                  1.0%
Transport, train, diesel powered/US                    0.009                  0.82%
Electricity, natural gas, at power plant/US            0.002                  0.18%
Residual fuel oil, combusted in industrial boiler/US   0.002                  0.17%
Transport, barge, residual fuel oil powered/US         0.001                  0.12%
Natural gas, combusted in industrial boiler/US         0.001                  0.09%
Gasoline, combusted in equipment/US                    0.001                  0.07%
Crude oil, at production/RNA                           0.001                  0.07%

Figure 10-14: Top processes contributing to global warming for 1 kWh of bituminous coal-fired
electricity

As noted in the interpretations of the previous two tables, each of those summaries was
aggregated across all gases or across all processes. It is not clear from such summaries which specific
greenhouse gas emissions in which specific processes contribute the most impact.
Figure 10-15 shows an example of such a summary from the ‘Detailed Summary’ sheet. The
quantitative impact result column of this figure may appear to be identical to those in Figure
10-14, but note there are small differences not apparent due to rounding. This is because each
of the combinations of process-inventory flow in the Top 10 presented below represents the
majority, but not all, of the emissions for the various processes – see the ‘Detailed Summary’
tab for specific values.

This level of detail is critical when interpreting LCA results and considering potential
improvements. By seeing more specific places in the system where emissions occur, better
decisions and changes can be made. Note though that this is not the most detailed drilldown
of impact possible. Chapter 12 shows how to further separate impacts across the supply chain
via path analysis.

The Advanced Material – Section 2 – of this chapter shows how to generate similar results in
SimaPro. Again, the creation and availability of the Excel spreadsheet version is meant to help
you learn in a more accessible way how the underlying matrix math and database operations
work to organize LCI and LCIA data and methods.
Process                                                Flow                          Emissions (kg CO2-e)   Percent of Total
Total                                                                                1.08
Electricity, bituminous coal, at power plant/US        Carbon dioxide, fossil/ Air   1.004                  93.0%
Bituminous coal, at mine/US                            Methane/ Air                  0.045                  4.1%
Diesel, combusted in industrial boiler/US              Carbon dioxide, fossil/ Air   0.011                  1.0%
Transport, train, diesel powered/US                    Carbon dioxide, fossil/ Air   0.009                  0.82%
Electricity, natural gas, at power plant/US            Carbon dioxide, fossil/ Air   0.002                  0.18%
Residual fuel oil, combusted in industrial boiler/US   Carbon dioxide, fossil/ Air   0.002                  0.17%
Transport, barge, residual fuel oil powered/US         Carbon dioxide, fossil/ Air   0.001                  0.12%
Natural gas, combusted in industrial boiler/US         Carbon dioxide, fossil/ Air   0.001                  0.09%
Gasoline, combusted in equipment/US                    Carbon dioxide, fossil/ Air   0.001                  0.07%
Crude oil, at production/RNA                           Methane/ Air                  0.001                  0.07%

Figure 10-15: Top specific inventory flows from specific processes contributing
to global warming for 1 kWh of bituminous coal-fired electricity

Optional Elements of LCIA


The remaining text in this chapter discusses the optional elements of LCIA. Note that each of
the elements below is independently optional, i.e., one could extend the characterized result
by doing none, some, or all, of these optional elements. The underlying concepts of the
optional elements are simpler than in the mandatory elements, and thus the explanations are
more concise. Part of the reason that they are optional is that they build on the relatively
objective results from the mandatory elements and may introduce subjective components
(even if not perceived as subjective to the study authors) into the LCA. They also modify the
‘pure’ results that end with characterization, which rely on known and established scientific
factors used throughout the community. It is in passing over the threshold between the mandatory
and optional elements that two parties provided with the same characterized LCIA results
could subsequently generate different LCIA results. Beyond the mere subjectivity issues,
pursuing the optional steps can lead to results that are hard to validate or compare against in
other LCA studies. Because of these issues, as noted above, many studies end the LCIA phase
of the study at characterization.

Normalization
Normalization of LCIA results involves transforming them by a selected reference value.
A separate reference value is chosen for each impact category. The rationale of
normalization is to provide temporal and spatial perspective or context to LCIA results and
also to help to validate results. The Standard provides suggestions on useful reference values,
such as dividing by total (or total per-capita) known indicators in a region or system, total
consumption of resources, etc. The Standard does not, however, specify reference values to
be used.

Another useful normalization factor is an LCIA indicator result for one of the alternatives
studied (or the indicator value from a previously completed study of a similar product system)
as a baseline. The chosen reference value might be the largest or smallest of the results. In this
type of normalization, a key benefit is the creation of a ratio-like normalized indicator for
which alternatives can be compared. For example, any normalized result greater than 1 has
higher impact than the baseline, and any less than 1 has lower impact.
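A sketch of this baseline style of normalization, using the characterized climate results from the running example:

```python
# Sketch of baseline normalization: divide each alternative's characterized
# result by a chosen baseline so that values greater than 1 mean higher impact.
indicators = {"Option A": 5.0, "Option B": 2.0}  # kg CO2-eq (Figure 10-12)
baseline = indicators["Option B"]                # smallest result chosen as baseline

relative = {name: value / baseline for name, value in indicators.items()}
# Option A -> 2.5 (2.5x the baseline impact); Option B -> 1.0 (the baseline itself)
```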

A potential downside of normalization is that the vast majority of product systems studied will
have negligible impacts compared to the total or even the per-capita values to be used as the
reference value. Thus, normalized values tend to be extremely small and thus their effect can
be viewed as irrelevant or negligible. This can be partially addressed by choosing smaller but
related reference values; for instance, instead of using the total annual impact or resource
consumption, one may choose a daily value. This problem can also be addressed by scaling up
the functional unit to represent a higher level of production so that the normalized values
are larger. As an example, in a study considering the life cycle of gasoline, the functional unit
could be 100 billion gallons per year (the approximate annual US consumption) instead of 1
gallon.

Various LCA communities around the world have invested time and research effort in the
development and dissemination of normalization databases and factors to be used in support
of LCA studies. Such efforts are extremely valuable as they serve to provide a common set of
factors that can be cited and used broadly in studies of impacts in the relevant country. It also
removes the need for practitioners to independently create their own normalization values,
which can cause problems in comparing results across studies. In the US, total and per-capita
normalization factors have been published for use with the TRACI 2.1 LCIA model for the
reference year 2008 (Ryberg et al. 2014) as shown in Figure 10-16. Interpreting the global
warming normalization factors suggests that total US GHG emissions in 2008 were 7.4E+12
kg, or about 7 billion tons. Likewise, per-capita emissions are about 24 tons.
US Normalization Factors, Reference Year 2008

Impact category                              Annual (impact per year)   Per-capita (impact per person-year)
Ecotoxicity-non-metals (CTUe)                2.3 E+10                   7.6 E+01
Ecotoxicity-metals (CTUe)                    3.3 E+12                   1.1 E+04
Carcinogens-non-metals (CTUcanc.)            1.7 E+03                   5.5 E-06
Carcinogens-metals (CTUcanc.)                1.4 E+04                   4.5 E-05
Non-carcinogens-non-metals (CTUnon-canc.)    1.1 E+04                   3.7 E-05
Non-carcinogens-metals (CTUnon-canc.)        3.1 E+05                   1.0 E-03
Global warming (kg CO2 eq)                   7.4 E+12                   2.4 E+04
Ozone depletion (kg CFC-11 eq)               4.9 E+07                   1.6 E-01
Acidification (kg SO2 eq)                    2.8 E+10                   9.1 E+01
Eutrophication (kg N eq)                     6.6 E+09                   2.2 E+01
Photochemical ozone formation (kg O3 eq)     4.2 E+11                   1.4 E+03
Respiratory effects (kg PM2.5 eq)            7.4 E+09                   2.4 E+01
Fossil fuel depletion (MJ surplus)           5.3 E+12                   1.7 E+04

Figure 10-16: Summary of Normalization Factors for US, 2008 (Source: Ryberg et al. 2014)

While these values were created as being specific to the year 2008, various practitioners have
continued to use them as is for the past decade. The assumed population in deriving the per-
capita estimates was 304 million, so one could update the per-capita factors with the current
population if desired while retaining the 2008 baselines. Note that CTU in Figure 10-16 refer
generally to comparative toxic units, which have more specific underlying definitions not
shown in the table.

Figure 10-17 shows normalized results for the LCIA profile shown in Figure 10-12, found by
dividing the equivalent CO2 releases by the total factor of 7.4×10^12 and by the per-capita value
of 2.4×10^4. The units of the results can be interpreted as shares of the total annual US releases
of GHG emissions, or as shares of an individual’s emissions. Not surprisingly, the impacts of
5 kg of GHG emissions are negligible on a total and per-capita basis.

The US normalization factors above were developed for the TRACI model, not the CED
model, so with respect to energy, they could not be applied to our LCIA profile. If results had
been obtained from CED, they might be approximately normalized by the fossil fuel depletion
values of 5.3×10^12 and 1.7×10^4, respectively.
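The arithmetic behind this normalization can be sketched directly from the Figure 10-16 reference values and the characterized results of the running example:

```python
# Sketch of normalization by the US 2008 reference values (Figure 10-16).
ANNUAL_US = 7.4e12   # kg CO2-eq emitted per year in the US
PER_CAPITA = 2.4e4   # kg CO2-eq per person-year

options = {"Option A": 5.0, "Option B": 2.0}  # characterized kg CO2-eq

normalized = {
    name: {"annual share": kg / ANNUAL_US, "per-capita share": kg / PER_CAPITA}
    for name, kg in options.items()
}
```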


Normalization: Climate Change (IPCC 2013, Ryberg 2014)

                          Annual                  Per-capita
Indicator                 Option A    Option B    Option A    Option B
Equivalent CO2 releases   6.8 E-13    2.7 E-13    2.1 E-04    8.3 E-05

Figure 10-17: Normalized LCIA Profile

Given the potential for comparability issues, it is often useful to develop multiple
normalization factors, and to perform sensitivity analyses on the normalization results.

While normalization factors for the US were discussed above, they also exist for Canada (also
from Ryberg 2014) and many European countries (Sleeswijk 2008). Such annual, country-level
factors are the most commonly used. However, normalization factors at the global level for
several impact categories are also available (Sleeswijk 2008).

Grouping
Grouping of LCIA results is achieved by reorganizing LCIA results to meet objectives stated
in the goal and scope. Grouping is accomplished by sorting and/or ranking the characterized
or normalized LCIA results (since normalization is an optional step, it may or may not have
been done). The Standard allows sorting of the results along dimensions of the values, the
spatial scales, etc. Ranking, on the other hand, is done by creating a hierarchy, such as a
subjectively-defined impact priority hierarchy of high-medium-low, to place the impacts into
context with each other. As it relies on value choices, the grouping, sorting, or ranking is
subjective and could be inconsistent with what other authors might choose. Since it involves
an assessment of how to prioritize impacts, grouping should be done carefully, but should also
acknowledge that other parties might create different rankings based on different priorities
and ranked impacts.

If a study includes only one or two preselected impacts, then the results of grouping may not
be apparent. However, if more than a handful of impacts have been selected, then grouping
them together for reporting and presentation can help to guide the reader through the results.
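The sorting and ranking described above can be sketched with hypothetical characterized results and a subjectively assigned high/medium/low hierarchy:

```python
# Sketch of grouping: sort characterized results by magnitude and attach a
# subjective priority ranking (all values and priorities here are hypothetical).
results = {"Global warming": 7.8, "Acidification": 0.3, "Eutrophication": 0.05}
priority = {"Global warming": "high", "Acidification": "medium", "Eutrophication": "low"}

sorted_impacts = sorted(results, key=results.get, reverse=True)
grouped = [(name, results[name], priority[name]) for name in sorted_impacts]
```

A different analyst could assign a different priority hierarchy, which is exactly the subjectivity the Standard cautions about.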

Weighting
Weighting of LCIA results is potentially the most subjective of the optional elements. In
weighting, a set of factors is developed, one for each of the chosen impact categories, and
the results are multiplied by the weighting factors to create a set of weighted impacts. In
short, this is done to generate a single overall score (or a small set of scores) for a larger set of
LCIA results that would otherwise be represented in different underlying impact units.


Weighting factors may be derived with stakeholder involvement. As with grouping, the
practice of weighting depends on value choices and is subjective and could lead to different
results for different types of stakeholders, or different authors or parties. The weights chosen
in the study may be different than what might otherwise be chosen by the reader. Regardless,
the method used to generate the weighting factors, and the weighting factors themselves,
need to be documented or referenced. The Standard notes that LCIA results should be shown
with and without weighting factors applied, so that the underlying non-weighted results and
the effects of the weighting method are clear.

Beyond subjectivity concerns, it is possible that a consideration in doing the study was a
particular set of potential impacts, such as local emissions of hazardous substances at a factory.
In such a case, weighting such impacts greater than other impacts can be deemed credible and
also fit well within the goal and scope considerations. It also means that a separate study of
the same product system but with a different production location (or a different set of weights)
could lead to a different perceived impact.

The issue of impact categories selected in an LCIA raises the issue of implicit weighting.
While the Standard recommends that the effects of value choices and assumptions be
minimized in impact category selection, by choosing a particular set of impacts to study, a
practitioner necessarily is also choosing impacts to ignore. This choice is thus applying a
stronger weighting to selected impacts and a zero weighting to others, yet the choice may be
unfairly biased by preconceived notions about which impacts are most relevant. Likewise,
implicit weighting is associated with choices of normalization methods (and the decision to
normalize). As such, many experts argue that instead of avoiding the weighting step of LCIA,
it is ethically and scientifically better to explicitly state weights (and to use a broad set of impact
categories) instead of allowing implicit biases to affect results. It may be impossible to
completely eliminate the biases and implicit weighting effects in the interpretation of results
of a study.

Beyond such concerns, one of the primary ways in which weighting has been accomplished to
date in LCIA is by studying and applying different cultural theories, representing different
philosophical views and sets of value choices. By aggregating the values of many types of
stakeholders into a minimal number of cultural perspectives, it is possible to consider the
weightings expected of actors in such cultural contexts. Likewise, with a relatively small
number of cultural perspectives, the results of weighting through these different perspectives
can be considered (this is an effective scenario-based sensitivity analysis).

Many European LCIA methods use three packaged cultural perspectives to represent
alternative philosophies of the world. Each perspective represents values and a set of choices
on issues like the relevant time horizon for concern and expectations that technology
development can avoid damages that occur in the future. The three perspectives are:


• Individualist represents short-term (relatively selfish) interests, undisputed impacts,
and optimism that technology can avoid many problems in the future. It thus relies on
time horizons for characterization factors that are short (e.g., 20-year instead of
100-year GWP).

• Hierarchist represents a consensus view, based on commonly held policy principles
on timing and technology. It is often considered to be the default model. It relies on
medium time horizons (e.g., 100-year GWP).

• Egalitarian represents long-term interests based on the widespread view of
precautionary principle thinking, and has appreciation for impacts that might be
recognized but not as fully established. It relies on the longest time horizons (e.g.,
500-year GWP).
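The time-horizon choice is easy to see numerically: characterizing the same 1 kg of methane under the Individualist (20-year) and Hierarchist (100-year) factors from Figure 10-10 gives quite different indicators. (AR5's 500-year factor is not listed in Figure 10-10, so the Egalitarian case is omitted from this sketch.)

```python
# Sketch: perspective-dependent characterization of 1 kg of methane using the
# 20-year (Individualist) and 100-year (Hierarchist) GWPs from Figure 10-10.
methane_gwp = {"Individualist (20-yr)": 84, "Hierarchist (100-yr)": 28}
methane_kg = 1.0

impacts = {p: methane_kg * gwp for p, gwp in methane_gwp.items()}  # kg CO2-eq
```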

Sources of Weighting Factors

While the concept of weighting is generally familiar to most readers, the most important issue
beyond the contentiousness and subjectivity of weighting is how the weights are determined
(i.e., where the numbers come from). Continuing with the interdisciplinary nature of life cycle
assessment, there are interesting social science methods used in determining LCIA weights
that then connect to the technically derived LCI data. The most popular approaches for
determining weights are summarized below, adapted from the excellent review of Huppes
(2011). All methods translate qualitatively or quantitatively perceived preferences into weights.

• Panel Methods are approaches using groups of people, interested parties, or experts
representing stakeholders, to provide weighting factors. While the underlying
protocols and techniques are beyond the scope of the chapter, in general the methods
involve asking the participants to specify the weightings. An example exercise could
involve participants allocating 100 total points across various impact categories to
express their weights. Shortcomings of such approaches include ensuring that participants
have sufficient current (and unbiased) knowledge across enough of the impacts to provide
credible weights, as well as efficiently aggregating the results of individual participants
into the overall panel weighting.

• Distance to Target approaches consider the relative degree to which current impacts
have met politically or scientifically set targets. For a policy-derived example, if a
national target exists to reduce GHG emissions by 20%, and the target for energy use
reduction is only 10%, it would motivate a higher (perhaps 2x) weight for global
warming. A similar scientifically derived example could use goals of the scientific
community to achieve a CO2 concentration of 350 ppm in the atmosphere. If we were
to deviate more from that target a larger weight would be justified. In either case,
having credible quantitative targets that are accessible and updated is needed.


• Monetization approaches use economically valued damages from impacts or economic
costs of preventing or mitigating such damages. If scientific literature exists that is able
to provide estimates of damage in units like dollars or Euros for all impact categories,
then these damages could be used to derive a complete set of weighting factors. For
example, monetized damages per kilogram of particulate matter emissions are generally
orders of magnitude higher than those per kilogram of carbon emissions, which would
lead to a higher weight on particulate-associated impacts. Another specific monetization
approach is willingness to pay (WTP), which derives weights based on how much
stakeholders would be willing to pay to avoid associated damages.
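The distance-to-target logic from the example above (a 20% GHG reduction target versus a 10% energy target motivating roughly a 2x weight) can be sketched with hypothetical indicator levels:

```python
# Sketch of distance-to-target weighting: the required fractional reduction from
# current levels to the target serves as the (unnormalized) weight.
# All values are hypothetical.
current = {"GHG": 100.0, "Energy": 100.0}  # current indicator levels
target  = {"GHG": 80.0,  "Energy": 90.0}   # policy targets: -20% and -10%

weights = {k: (current[k] - target[k]) / current[k] for k in current}
# weights["GHG"] = 0.20 is twice weights["Energy"] = 0.10
```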

Huppes (2011) provided a summary of well known methods and the type of weighting used,
adapted here in Figure 10-18.

Panel                  Distance to Target   Monetization
BEES                   Ecopoints            Damage-based:
Ecoindicator 99        EDIP                   ReCiPe-CE
Ecological Footprint                        WTP-based:
ReCiPe-Nogepa                                 ReCiPe-CML
                                              ExternE / NEEDS

Figure 10-18: Classification of Weighting Used in LCIA Methods

Some of the social science needed to produce weighting factors involves methods for
producing valid consensus: that is, the science of translating the preferences and values of
the various participants into a final set of weights.

Example Weighting Factors

The ReCiPe LCIA method (2014) provides the weighting factors shown in Figure 10-19 and
generally recommends using the average weights. Note that these weights apply at the
endpoint level, not the midpoint level, and thus they do not map to the specific sets of
impact categories used in the examples above (e.g., acidification). Endpoint-style LCIA
methods first convert from midpoints to endpoints, then apply the weighting factors.

Perspective     Ecosystems   Human health   Resources   Total
Average         400          400            200         1000
Individualist   250          550            200         1000
Hierarchist     400          300            300         1000
Egalitarian     500          300            200         1000
Figure 10-19: Weighting Factors Used in ReCiPe and Other LCIA Methods

Finally, weighting values are inherently tied to a particular underlying unit (e.g., kg of
CO2e). This should be obvious: if results were instead provided in a different unit, one
would expect the respective weights to change. For example, if CO2e results were
summarized in metric tons instead of kg, the numerical results would shrink by a factor of
1,000, and the weight applied to CO2e impacts would need to increase by the same factor to
compensate.
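This unit dependence can be verified with a short sketch in Python, using arbitrary illustrative values: a weighted contribution is unchanged by a kg-to-tonne conversion only if the weight is rescaled by the same factor of 1,000:

```python
# A weighted contribution is weight x amount, so the numerical weight is
# tied to the unit the amount is reported in. All values are arbitrary.
emission_kg = 5000.0       # emission expressed in kg CO2e
weight_per_kg = 0.5        # illustrative weight tied to kg-based results

emission_t = emission_kg / 1000.0      # same emission in metric tons
weight_per_t = weight_per_kg * 1000.0  # weight rescaled for the new unit

score_kg = weight_per_kg * emission_kg
score_t = weight_per_t * emission_t
print(score_kg, score_t)  # 2500.0 2500.0: the weighted result is unchanged
```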

E-resource: The folder for Chapter 10 shows the spreadsheet of characterization and
normalization factors for the ReCiPe 1.08 LCIA Method, including the worksheets that
convert from midpoints to endpoints and weightings by cultural perspective. Following the
logic through these spreadsheets helps to understand how various classified physical inventory
flows lead to weighted endpoint values.

An alternative to the cultural theory approach for generating weighting scores is the use
of panel studies or focus groups. In these settings, facilitators interact with groups of
stakeholders to determine collective weights for the relevant impacts. For example, a
company interested in the impacts of its products could hold an internal panel session
where environmental management staff and other corporate managers determine the impact
categories of focus and derive the relevant weighting factors. More inclusive panels held
by a company could include customers, regulators, residents near production facilities, or
the general public.

Figure 10-20 shows the weighting factors available in the US National Institute of Standards
and Technology (NIST)’s Building for Environmental and Economic Sustainability (BEES)
LCIA tool (Gloria 2007). These factors were developed through a facilitated panel study of
nineteen volunteers, including producers, users of BEES, and LCA experts, with the
explicit goal of developing weighting factors in support of environmentally preferable
purchasing. While these weighting factors are of course still subjective, they are a useful
benchmark for what experts might suggest. The table also shows what the weights would be
if all categories were considered equal.

Impact Category             Panel   Equal
Global Warming              29.3    8.3
Fossil Fuel Depletion        9.7    8.3
Criteria Air Pollutants      8.9    8.3
Water Intake                 7.8    8.3
Human Health Cancerous       7.6    8.3*
Human Health Noncancerous    5.3
Ecological Toxicity          7.5    8.3
Eutrophication               6.2    8.3
Habitat Alteration           6.1    8.3
Smog                         3.5    8.3
Indoor Air Quality           3.3    8.3
Acidification                3.0    8.3
Ozone Depletion              2.1    8.3
Figure 10-20: Weighting Factors for US NIST’s BEES LCIA Model (Gloria 2007)
* Note: In the BEES model, Human Health is a single category with a weight of 13; the
underlying cancerous and noncancerous panel weights are shown separately above.
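The mechanics of applying weights such as those in Figure 10-20 can be sketched in a few lines of Python. The panel weights below are excerpted from the figure, but the normalized scores for the two options are hypothetical values invented for illustration:

```python
# Weighted single score: sum over impact categories of
# (panel weight x normalized score). Weights are excerpted from
# Figure 10-20; the normalized scores are hypothetical.
panel_weights = {"Global Warming": 29.3, "Fossil Fuel Depletion": 9.7}

normalized = {
    "Option A": {"Global Warming": 2.0e-7, "Fossil Fuel Depletion": 5.0e-7},
    "Option B": {"Global Warming": 4.0e-7, "Fossil Fuel Depletion": 1.0e-7},
}

def single_score(scores, weights):
    """Combine normalized category scores into one weighted number."""
    return sum(weights[cat] * value for cat, value in scores.items())

for option, scores in normalized.items():
    print(option, single_score(scores, panel_weights))
# With these invented scores, Option A comes out lower (better) despite its
# higher fossil fuel depletion, because global warming carries roughly three
# times the panel weight.
```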

Evaluation and Reporting

While not listed as an element in the Standard, a final step within LCIA is to evaluate and
report on the results of the various elements. It is important that intermediate LCIA profile
results from the individual mandatory (and optional) elements be shown. This prevents a
study from, for example, presenting only final results that have been normalized, grouped,
and/or weighted while omitting the underlying characterized results. Showing the ‘pure’
impact assessment results also gives a study greater utility, since it can then be compared
with a larger number of other studies.

Iterate Back to LCA Interpretation Step

This chapter focused on the impact assessment stage of the LCA Standard, but as noted in
Figure 10-5, this stage is iteratively connected to the life cycle interpretation stage. As
preliminary LCIA results become available, preliminary interpretation must be done to ensure
that the results are consistent with the goal and scope. If necessary, changes to the goal and
scope need to be made, which may require additional work (or re-work) in the inventory
analysis stage.

Uncertainty in LCIA
While not explicitly shown or discussed in the examples above, there are various uncertainties
associated with LCIA. The Advanced Material at the end of this chapter (Section 3) discusses
the following LCIA uncertainties in more detail.

Choice of a particular impact assessment method: The analyst’s choice of a particular LCIA method
in the goal and scope step of an LCA introduces uncertainty. This uncertainty becomes
apparent when the results of similar LCIA methods are compared against the chosen method
(e.g., TRACI vs. IPCC for global warming impacts), a comparison not typically done in LCA.
It also arises in the choice of a 20- versus 100-year global warming method.

Quality of data modules and databases: Depending on the rigor used to create the life cycle
inventories of processes, the number of connected technical processes (i.e., A values)
and the number of flows to and from the technosphere (i.e., R values) may or may not be
comprehensive, as discussed in previous chapters. If they are not, there may not be
sufficient flows to classify into the chosen impact categories, and the resulting LCIA
profiles may not be representative.

Parameter values of characterization factors: These values are set by the creators of the LCIA
methods, but despite being based on similar literature, the methods do not always use the
same values. For example, one global warming method may use values from IPCC AR4 and
another from AR5. Using different characterization factors will of course lead to different results.

Finally, as discussed in the previous section, there are uncertainties associated with the optional
(and sometimes subjective) steps of grouping, normalization, or weighting.

Chapter Summary
As first introduced in Chapter 4, life cycle impact assessment (LCIA) is the final quantitative
phase of LCA. LCIA transforms the basic inventory flows created in the inventory phase of
the LCA so that conclusions can be drawn about the expected impacts of these flows for
product systems. While climate change and cumulative energy demand tend to dominate LCA
studies, other impact categories of broad interest have characterization models that are
scientifically credible and available for use. Despite the availability of these credible
models and tools, many LCA studies continue to focus only on generating inventory results,
or at most use only climate and energy impact models.

Now that we have reviewed all of the important phases of LCA, in the next few chapters we
focus on ways in which we can create robust analyses that will serve our intended goals of
building quantitatively sound and rigorous methods.

References for this Chapter


Bare, Jane, Gloria, Thomas, and Norris, Gregory, “Development of the Method and U.S.
Normalization Database for Life Cycle Impact Assessment and Sustainability Metrics”,
Environmental Science and Technology, 2006, Vol. 40, pp. 5108-5115.

Ryberg, Morten, Vieira, Marisa D. M., Zgola, Melissa, Bare, Jane, and Rosenbaum, Ralph K.,
“Updated US and Canadian normalization factors for TRACI 2.1”, Clean Technologies and
Environmental Policy, 2014, Vol. 16, No. 2, pp. 329-339.

Finnveden, G., Andersson-Sköld, Y., Samuelsson, M-O., Zetterberg, L., Lindfors, L-G.
“Classification (impact analysis) in connection with life cycle assessments—a preliminary
study.” In Product Life Cycle Assessment—Principles and Methodology, Nord 1992:9, Nordic Council
of Ministers, Copenhagen. 1992.

Gloria, Thomas P., Lippiatt, Barbara C., and Cooper, Jennifer, “Life Cycle Impact Assessment
Weights to Support Environmentally Preferable Purchasing in the United States,”
Environmental Science & Technology, 2007 41 (21), 7551-7557.

Huppes, Gjalt and van Oers, Lauran, “Background Review of Existing Weighting Approaches in
Life Cycle Impact Assessment (LCIA)”, JRC Scientific and Technical Reports, 2011.

Hischier, Roland and Weidema, Bo (Editors), “Implementation of Life Cycle Impact
Assessment Methods Data v2.2 (2010)”, ecoinvent report No. 3, St. Gallen, July 2010.

“ILCD Handbook: Analysis of existing Environmental Impact Assessment methodologies
for use in Life Cycle Assessment”, First edition, European Union, 2010.

IPCC Fourth Assessment Report: Climate Change 2007. Available at www.ipcc.ch, last
accessed October 30, 2013.

IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the
Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-
K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley
(eds.)]. Cambridge University Press, Cambridge, UK and New York, NY, USA.

“Life Cycle Assessment: Principles And Practice”, United States Environmental Protection
Agency, EPA/600/R-06/060, May 2006.

ReCiPe model website, http://www.lcia-recipe.net/characterisation-and-normalisation-factors,
last accessed September 26, 2014.

Wegener Sleeswijk, A, Van Oers, LFCM, Guinée, JB, Struijs, J, Huijbregts, MAJ.
Normalisation in product life cycle assessment: An LCA of the global and European economic
systems in the year 2000. Science of the Total Environment, 2008, 390 (1): 227-240.

End of Chapter Questions

Objective 1. Describe various impact categories of interest in LCA and the ways in
which those impacts can be informed by inventory flow information.

Objective 2. Describe in words the cause-effect chain linking inventory flows to
impacts and damages for various examples.

1. If a study sponsor cared about the following list of impacts, suggest impact categories and
LCIA methods to be used in a study in the US. Discuss the types of inventory flows you would
include in the scope of the study to be able to address them, and how those inventory flows
lead to the impacts.

• Impacts on fish
• Holes in the ozone layer
• Hypoxia in coastal areas
• Peak oil
• Pediatric cancer

2. Draft a paragraph justifying the inclusion of two of the impacts from Question 1 that could
be used in the goal and scope section of a study report.

Objective 3. List and describe the various mandatory and optional elements of life
cycle impact assessment.

Objective 4. Select and justify LCIA methods for a study, and perform a classification
and characterization analysis using the cumulative energy demand (CED) and/or
climate change (IPCC) methods for a given set of inventory flows.

3. Given LCI results in Figure 10-21, use the CED and IPCC (2013, 100-year) methods to
characterize the LCI results and compare the two options. Which would you recommend as
having better performance? Does your answer change if you use IPCC (20-year) instead?
Flow Compartment Units Option A Option B
Carbon dioxide, fossil air kg 5 2
Energy, geothermal MJ 100 80
Figure 10-21: Hypothetical Study LCI Results

4. Characterize the LCI results in Figure 10-22 using the IPCC AR5 (2013) and CED LCIA
method characterization factors provided in the chapter. Compare the results for Option A
and Option B. Discuss how the study could be improved based on your analysis.

Flow Units Option A Option B


Coal, hard kg 4 2
Energy, geothermal MJ 50 80
Figure 10-22: Hypothetical Study LCI Results

5. Figure 10-23 shows excerpted characterization factors used in the TRACI model. Use these
values to update the LCIA profile for the hypothetical flows shown in Figure 10-1 and Figure
10-12. Compare the characterized results for Options A and B.
Flow             Acidification Air       Human Health - Criteria Air
                 (kg H+ moles eq/kg)     (kg PM10 eq/kg)
Sulfur dioxide   50.8                    0.167
Figure 10-23: Excerpted Characterization Factors for Sulfur Dioxide

6. Consider a project that reduces on-site methane emissions with the LCI results in Figure
10-24. Compare the options using the IPCC (2013) 20-year and 100-year characterization
factors.
Flow Compartment Units Option A Option B
Carbon dioxide, fossil air kg 30 300
Methane air kg 5 1
Figure 10-24: Comparative Methane Emission Reduction Project LCI Results

7. In this question you will characterize, normalize, and apply weighting to the set of inventory
values shown below.
Flow Compartment Units Option A Option B
Carbon dioxide, fossil air kg 5 2
Energy, geothermal MJ 100 80
Coal, hard kg 4 2
Crude oil kg 10 8

a. Characterize the flows in the table above using the BEES impact assessment method, which
uses IPCC for global warming, and the natural resource depletion (in units of MJ surplus)
method with values excerpted in the table below.
Flow Factor Unit
Coal, hard 0.49 MJ surplus/kg
Crude oil 6.12 MJ surplus/kg
Gas, natural 7.8 MJ surplus/kg

b. Normalize the characterized results from part (a) with the BEES per-capita factors below.
Note: Observe that BEES uses g, not kg, for global warming and ozone depletion.

Impact                       Factor      Units
Global warming               2.56 E+07   g CO2e
Natural resource depletion   3.53 E+04   MJ
Ozone depletion              340         g CFC-11 eq

c. With your normalized results, generate weighted results using the table of equal and
stakeholder panel BEES weighting factors below.
Impact Category                        Stakeholder Panel   Equal
Global Warming 29.21 7.69
Natural Resource Depletion 9.67 7.69
Human Health Criteria Air Pollutants 8.87 7.69
Water Intake 7.78 7.69
Human Health Cancerous 7.58 7.69
Human Health Noncancerous 5.28 7.69
Ecotoxicity 7.48 7.69
Eutrophication 6.18 7.69
Habitat Alteration 6.08 7.69
Smog 3.49 7.69
Indoor Air Quality 3.29 7.69
Acidification 2.99 7.69
Ozone Depletion 2.09 7.69

d. Considering your unweighted results as well as the weighted results, compare the impacts
of Options A and B. Make a recommendation of A or B if possible. For each weighted
comparison, quantify the change needed in each of the four inventory flows, taken
separately, to make the other option preferable.

For the following question, use the US LCI model Excel spreadsheet that includes LCIA (E-
resource introduced in the chapter).

8. Consider the impacts of producing 1 kg of Benzene, at plant.

a. For each of the TRACI LCIA methods below, report three tables: (1) the top 10
contributing processes, similar to Figure 9-5 (which focused only on fossil CO2); (2) the
top 10 underlying flows leading to impacts; and (3) the top 10 specific inventory flows
through specific processes leading to impacts. Write a brief summary of the results for
each table.

• Global warming (air)

• Smog (air)

b. Discuss the various processes and inventory flows identified in the summary impact
assessment results, and how similarities and differences among them affect interpretation
of the results and potential improvements.

c. If you have access to SimaPro, compare your numerical results above to those presented in
SimaPro for the same process and LCIA method, and discuss reasons for differences.

Advanced Material for Chapter 10 – Section 1 – LCIA with Matrix Math

As discussed in the chapter, research and software tools generally implement LCIA via
matrix-based methods. This section demonstrates how such matrices can be developed and
used, drawing on the examples from this chapter.

First, assume that we want to perform LCIA on the hypothetical LCI data from Figure
10-1, using the abridged classification tables for energy (CED) and greenhouse gases (IPCC)
shown in Figure 10-7 and Figure 10-8. Recall the results from Figure 10-1:
Flow Compartment Units Option A Option B
Carbon dioxide, fossil air kg 5 2
Sulfur dioxide air kg 2 5
Crude oil kg 10 8
Figure 10-1 (repeated): Hypothetical Study LCI Results

Using our notation from Chapter 9, the inventory flows from Figure 10-1 can be represented
as two E vectors:29

E_A = [5, 2, −10]^T and E_B = [2, 5, −8]^T
We could create a simple classification table mapping inventory flows relevant to global
warming (using IPCC) in row 1 and energy (CED) in row 2 as in Figure 10-25:

        Carbon dioxide, fossil   Sulfur dioxide   Crude oil
IPCC    1                        0                0
CED     0                        0                1
Figure 10-25: Classification Table for Figure 10-1 Results

The table elements in row 1 indicate that, of the three tracked flows, only fossil carbon
dioxide is a greenhouse gas (and thus is classified in IPCC), while row 2 indicates that
only crude oil is an energy source relevant for CED. Note that this table considers only
the three inventory flows in this example. In reality, a classification table in a
matrix-based tool will be far larger. For example, in the US LCI TRACI model presented in
the chapter, a classification table would have 949 columns (one for each of the
environmental flows in the model) and one row for each of the various impacts considered
(about 20).

29Note that while we have maintained the negative notation for inputs here, in general LCIA methods will report all input and output
effects as positive.

In true matrix terms, we would define C (there is no standard notation for this matrix) from
the classification table above, with classified inventory flows relevant to global
warming (using IPCC) in row 1 and energy (CED) in row 2:

C = [ 1  0  0 ]
    [ 0  0  1 ]
The matrix-based method for classifying the original inventory flows is the product CE,
where:

C E_A = [5, −10]^T and C E_B = [2, −8]^T
These results match those of Figure 10-9 in the chapter, where only the fossil CO2 was
classified into the global warming method, and only the crude oil into the CED method.
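The same classification computation is easy to reproduce in code. The sketch below uses plain Python (no LCA software) with a small helper for matrix-vector multiplication:

```python
# Classification of inventory vectors via matrix multiplication.
# Rows of C are impact methods (IPCC, CED); columns are the inventory
# flows (fossil CO2, SO2, crude oil), matching Figure 10-25.
C = [
    [1, 0, 0],  # IPCC row: only fossil CO2 is a greenhouse gas
    [0, 0, 1],  # CED row: only crude oil is an energy source
]
E_A = [5, 2, -10]  # Option A inventory
E_B = [2, 5, -8]   # Option B inventory

def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

print(matvec(C, E_A))  # [5, -10]
print(matvec(C, E_B))  # [2, -8]
```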

Matrix-based tools may do both characterization and classification in a single matrix
multiplication step. Instead of performing classification alone by entering 1s in the C
matrix as above, the 1s can be replaced with the relevant characterization factors. Using
the same example from the chapter, and the characterization factors for fossil CO2 and
crude oil in IPCC and CED respectively, a matrix C* that both classifies and characterizes
might look like this:

C* = [ 1  0  0    ]
     [ 0  0  45.8 ]

Row 1 may appear unchanged, but recall that the characterization factor for fossil CO2
is 1 (kg CO2-equiv/kg). The matrix-based method for classifying and characterizing the
original inventory flows is the product C*E, where:

C* E_A = [5, −458]^T and C* E_B = [2, −366]^T
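The combined step can be reproduced the same way; this sketch swaps the characterization factors into the matrix and repeats the multiplication (plain Python, no LCA software):

```python
# Classification and characterization in one step: the 1s of the C matrix
# are replaced with characterization factors (1 kg CO2e/kg for fossil CO2;
# 45.8 MJ/kg for crude oil).
C_star = [
    [1.0, 0.0, 0.0],   # IPCC row (GWP of fossil CO2 = 1)
    [0.0, 0.0, 45.8],  # CED row (energy content of crude oil)
]
E_A = [5, 2, -10]
E_B = [2, 5, -8]

def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

print(matvec(C_star, E_A))  # approximately [5.0, -458.0]
print(matvec(C_star, E_B))  # approximately [2.0, -366.4] (rounded to -366 above)
```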
More complex classification or characterization matrices may be desirable, for example to
maintain all inventory flow rows so that zero results appear when inventory flows are not
classified into a method or when characterized flows are zero.

Advanced Material for Chapter 10 – Section 2 – Process Matrix Models in SimaPro for Impact Assessment
In the Advanced Material for Chapters 5 and 9, demonstrations were provided on how to find
process data in SimaPro and to use it to estimate input or output flows such as energy or fossil
CO2. Here we show how to generate LCIA results rooted in the process matrix results of a
particular process in SimaPro. Again, this is not the same as building a model in SimaPro, but
is an important core activity. Recall that SimaPro uses the same process matrix-based approach
as shown via Microsoft Excel in the chapter – thus we demonstrate how to generate results
for the same example, and expect our results to mirror those presented.

Note that unlike the chapter, the screenshot figures shown are presented ‘in line’ with text descriptions, and are
not given numbered figure references.

Using the same steps as shown in Chapter 5, find and select the US LCI database process for
Electricity, bituminous coal, at power plant. Note that if you had previously de-selected other
libraries, you should re-select all of them (as they contain information on the LCIA methods
to use that is not by default part of the US LCI library)! As before, click the analyze button
(shown with cursor on it).

The resulting ‘New calculation setup’ window (not shown) is the same as shown in Chapter 9.
As shown previously, change the ‘Amount’ from ‘1’ to ‘3.6’ MJ in the window, then double
click the box for ‘Method’.

In the resulting window, click the ‘Method’ text in the left hand side to expand the view. It
shows all of the various impact assessment methods available, categorized by type (e.g., those
developed for North America that are comprehensive as well as those that relate to a single
issue like energy or global warming). To continue our example from the chapter, we click on
‘Single issue’ under the Method, and in the resulting list to the right, choose the ‘IPCC 2007
GWP 100a’ method – which uses the IPCC 100-year GWP factors previously discussed (a
brief description of any selected method is shown in the bottom of the screen). To finish, we
click the ‘Select’ button in the top right corner, which returns us to the previous dialog box,
where we click ‘Continue’ in the bottom right.

The resulting window shows the aggregated LCIA result using the ‘IPCC 2007 GWP 100a’
method, confirming results previously shown in Figure 10-13 that for 1 kWh (3.6 MJ) of
bituminous coal-fired electricity, there are a total of 1.08 kg CO2 equivalent emissions. The
method chosen is listed at the bottom of the window, as is the LCIA step. Note that if we had
been using a more complex LCIA method, the aggregate results for all underlying impacts
would be shown here (e.g., also including eutrophication, etc.).

However, as shown in the chapter (and to support documentation and interpretation needs of
a study), we should be interested in the relative contributions of these impacts across both the
network of processes, as well as across the various constituent greenhouse gas flows. In other
words, we may want to be able to generate tabular results as provided in our US LCI Microsoft
Excel spreadsheet model, and summarized in Figure 10-13 through Figure 10-15 in the
chapter. To get those details, we need to navigate through the various tabs and options shown
in the screenshot above.

To see impact assessment results disaggregated by flow (in this case, by individual greenhouse
gas), click the ‘Inventory’ tab, and then click the ‘Indicator’ box and change it from ‘Amount’
to ‘Characterization’ – this multiplies the underlying flows by their characterization factors (in
this case by their 100-year 2007 GWP values). The ‘IPCC 2007 GWP 100a’ Category should
appear below it, as well as the list of (unsorted) flows with their characterized values.

If you double click on the right-most column (the name of the process being analyzed), and
scroll up, you will see the sorted list of impacts broken out by greenhouse gas. These values
shown match those in Figure 10-13.

To see impact assessment results disaggregated by processes in the network, click the ‘Process
contribution’ tab, and again click the ‘Indicator’ box and change it from ‘Amount’ to
‘Characterization’. The ‘IPCC 2007 GWP 100a’ Category should appear below it. Again,
double click the last column to view a sorted list of processes in the network with their
characterized values, which aligns with Figure 10-14 in the chapter.

The final part to explore is again the ‘Network’ tab view. This tool creates a visualization of
flows for the entire network of connected processes (as aggregated in the ‘Impact Assessment’
and ‘Process contribution’ tabs shown above). By default, SimaPro truncates the Network
display so as to reasonably draw the network system without showing all flows. For example
the diagram below shows a default network diagram for 100-year GWP with an assumed cut-
off of about 0.1%. This cut-off can be increased or decreased to see more or less of the
network. Note this is the closest (yet still not quite comparable) result available from SimaPro
that provides the detail provided in Figure 10-15 of the chapter.

In addition to the basic ‘Single issue’ impact modeling shown above, SimaPro is also able to
use more complex multi-impact methods. To use one of these methods, you choose one of
the other hierarchical categories. For example, to use TRACI you click on the ‘Methods’ text
in the left hand side of the method selection dialog box, then choose ‘North American’ (the
geographical region relevant to TRACI), and then in the list to the right choose ‘TRACI 2.1’
(the same method and characterization factors shown in the chapter). When using such
methods, you are also able to optionally choose a set of normalization factors for the impacts.
The screenshot below shows the choice of the TRACI 2.1 North American method as well as
the ‘US 2008’ normalization factors.

As before, click the ‘Select’ button, and in the resulting previous ‘Calculation setup’ click
‘Continue’. The result screen is similar as shown before, but since TRACI 2.1 has various
underlying impact categories beyond its use of the IPCC 2007 GWP method, all of the
aggregated results are shown (note that the ‘Global warming’ impact is the same as above).

If you compare these to the results from the US LCI Microsoft Excel LCIA spreadsheet model
shown in the chapter, you will see they provide comparable results. This default display is just
for the characterized results. Similarly, to normalize them with the US 2008 factors selected
on the previous screen (note it uses the per-capita factors, i.e., those in the right hand column
of Figure 10-16), click the ‘Normalization’ button (located right below the ‘Impact assessment’
tab). The result of 4.46 E-5 shown comes from dividing the 1.08 value shown in the previous
‘Characterization’ screen by the global warming normalization factor (2.4 E4) of Figure 10-16
(they differ slightly due to rounding).
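This normalization arithmetic is simple to verify directly, as the short sketch below shows (values as reported above; the small difference from SimaPro's 4.46 E-5 reflects rounding of the inputs):

```python
# Normalization divides the characterized result by a normalization factor.
characterized_gwp = 1.08  # kg CO2e per kWh of bituminous coal electricity
norm_factor = 2.4e4       # US 2008 per-capita GWP factor from Figure 10-16

normalized = characterized_gwp / norm_factor
print(normalized)  # about 4.5e-05; SimaPro reports 4.46e-05 with unrounded inputs
```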

As motivated in the chapter, there are other impact assessment methods available with
supplemental information that supports the other optional LCIA steps. The final example
shown in this section demonstrates how SimaPro supports damage assessment, weighting, and
single scoring, using the Impact2002+ method. This is done by going back to the ‘New
calculation setup’ window and (using the same electricity process, and same input of 3.6 MJ
used above) changing the ‘Method’ to ‘Impact2002+’ v 2.11 and the default (also
Impact2002+) normalization and weighting factors.

The following screenshot shows the default view of characterized impacts for this process and
method, highlighting the carcinogenic impacts. Note that Impact2002+ encompasses more
impact categories (15) than TRACI (10). The various more advanced comparison options are
shown as buttons across the top of the window. Beyond those already shown above, there are
also LCIA views for Damage Assessment, Weighting, and Single Score.

Clicking on the next button, ‘Damage Assessment’, shows the unitized estimates of damage
into the four high-level damage categories that the 15 impact categories map into. Both DALY
and PDF (introduced in the chapter) are damage estimates in this method. Clicking on the
‘Normalization’ button normalizes these values with the default Impact2002+ factors (results
not shown).

Clicking on the ‘Weighting’ button uses the weighting factors selected in the Calculation setup
window that align with Impact2002+, which multiplies the normalized values by the weighting
factors, resulting in ‘Points’ (abbreviation Pt), where larger point values mean higher impacts.
In this example, the climate change impacts are expected to have the highest relative impacts.

The ‘Single score’ button in this instance would look the same as the weighted results (the
‘Total’ row is the single score result). Similar to the previous (IPCC) example above, the results
can be disaggregated with ‘Process contribution’ results using this new indicator (Single score)
and sorted. The top 7 and total impacts (based on single score ‘points’) are shown.

Given the various discussions throughout the textbook, it is probably not surprising that the
single largest process contributing to impact points is the bituminous coal combustion process
that makes the electricity. The next largest process (mining) is roughly an order of magnitude
smaller in impacts. This result can be shown visually by clicking the icon that looks like a bar
graph next to the default (selected) icon that looks like a spreadsheet table.

This graph clearly shows the pronounced decline in importance across the various process
contributions, with the top three spanning two orders of magnitude in impact. Few of the
remaining processes even register on the y-axis. Note that while this was the only
graphical result shown, almost all of the tabular results above can be displayed as graphs
by clicking the bar graph button.


Advanced Material for Chapter 10 – Section 3 – Uncertainty from Choice of Impact Assessment Method30
The comparison of different LCIA methods can be traced to the 1990s. Baumann and Rydberg (1994) compared three early LCIA methods: ecological scarcity (ECO),
environmental theme (ET), and environmental priority strategies in product design (EPS). The
basis for comparison was the difference in the characterization factors. The main difference
between ECO, ET, and EPS was in the estimation of the characterization factors with respect
to different goals and scopes. LCA practitioners should choose different LCIA methods
according to the purpose of their studies. Pennington et al. (2004) systematically reviewed the
differences between multiple impact assessment methods regarding their models and
methodologies. Twelve impact categories, such as ecotoxicological effects, were separately
reviewed. The available LCIA methods for each category were listed for LCA practitioners to
make decisions in selecting the impact assessment method for particular LCA studies. Both
Baumann and Rydberg, and Pennington emphasized the importance of choosing the
appropriate impact assessment method according to the goal and scope of each individual
study. However, these pioneering studies focused mostly on qualitative differences, and did
not elaborate a quantitative comparison between LCIA methods.

More recently, some LCA studies have considered the variability of results caused by choosing
different impact methods. Bovea and Gallardo (2006) and Dreyer et al. (2003) used three
methods (CML, Eco-indicator 95, and EDIP) to estimate the impact results for particular
materials. Dreyer et al. (2003) examined the differences in impact results due to
characterization factor values by calculating the contribution of impacts from each substance.
Bovea and Gallardo (2006) compared the impact results of three different plastic materials.
Their study showed that for the same plastic product, the differences between the minimum and maximum values in the LCIA results varied from 0% to more than 80,000%; the higher
values occurred in the photochemical oxidation category. Additionally, Owsianiak et al. (2014)
compared the results of plastic materials from three impact assessment methods (ILCD 2009,
ReCiPe 2008, and IMPACT 2002+), and concluded that the variability in the LCIA results
was between 5% and more than 1,000,000%. Similar comparisons based on particular
materials or products can be found in other LCA studies: Martinez et al. 2015; Cavalett et al.
2013; Xue and Landis 2010; Brent and Hietkamp 2003; de Vries and de Boer 2010. All of these
studies agree that different impact methods may lead to great deviations in the range of LCIA
results for particular products. LCA is supposed to be for decision support and strategic
thinking. The presence of different results means there is potential for different decisions.
However, some issues have not been addressed yet. First, all of the existing comparisons used
only a few impact methods. Systematic quantitative comparisons across more than three
impact methods were not addressed. Second, though these studies provided variances from using different impact assessment methods, they did not quantify the causes of variability. The
variances could be caused by the differences in the coverages of substances or by discrepancies in the characterization values. Third, the existing studies only used specific LCA case studies. It would be of interest to study the variability in the impacts of many different products using an LCI dataset with inventories for many processes.

30 This section is excerpted from a chapter of Xiaoju (Julie) Chen’s dissertation, entitled “Uncertainty Estimation in Matrix-based Life Cycle Assessment Models”.

LCA practitioners often select one impact assessment method and at least one impact
category. For example, the TRACI method, developed by the US EPA, is a common choice for LCA studies of products produced in the US (Bare et al. 2003). Within TRACI, the global
warming (GW) category is often selected, as it is the impact category for the evaluation of
climate change impacts. LCA software includes features that allow easy and fast selections
from these methods and categories to provide fast impact results. For example, SimaPro
software (version 8.3) contains information on characterization factors for 50 impact
assessment methods. The users can select one impact method from the methods provided for
LCIA results. SimaPro outputs the total impact results for the selected method; however, it
fails to show the differences in the LCIA methods as well as the possible different results when
other methods are chosen.

Here we show a comprehensive comparison of uncertainties arising from the selection and
use of different LCIA methods. Processes from the US LCI database are used to demonstrate
the results. Our work considers three key sources of variability in LCIA results: 1) differences
in the environmental effects from the US LCI process inventories, 2) differences in coverage
of substances in the methods, and 3) differences in the characterization factor values for all
commonly used impact assessment methods.

Summary of impact categories and substances

While there are many LCIA methods available, we use a practitioner’s perspective in this
analysis. We obtained characteristics of the 50 impact methods and categories provided in the
SimaPro software (version 8.3). The numbers of substances for the 19 most widely used impact categories in 45 selected methods are listed in Figure 10-26. The 5 remaining methods only provided one or a few impact categories (such as two cumulative energy demand methods) and are not listed in the table.

The results show that on average, each method provides approximately 10 impact categories,
with a few exceptions that cover only one or two. These exceptions are customized methods
that were developed for a specific purpose. For example, IPCC methods only provide
information on GW categories. Some categories are used in most of the methods, such as
global warming and acidification; while others are given in only a few methods, such as abiotic
depletion. For each category, different methods provide connections to significantly different
numbers of characterized substances. For example, the number of substances for the Freshwater Ecotoxicity category varies from 46 in CML 1992 to 22,706 in TRACI 2.1.


Figure 10-26: Numbers of characterized substances in major impact categories (columns) for 45
impact assessment methods (rows), as embedded in SimaPro (version 8.3). Highlighted rows with red
and Italic font are the methods chosen for further demonstration.

Some of the methods are simply different versions of the same method, e.g., there are three different TRACI methods: TRACI, TRACI 2, and TRACI 2.1. As a result, we only focus on
distinct methods using their latest versions. Five impact categories (Global warming (GW),
Acidification, Eutrophication, Ozone depletion, and Ecotoxicity) were chosen based on their


widespread use, acceptance, and variation, for three reasons. First, these categories were
available in most of the impact assessment methods. This was not the case for every category,
for instance, “Smog” was available only in a few methods. Second, GW, Acidification,
Eutrophication and Ozone depletion had relatively widely accepted coverages of substances.
Third, the number of characterized substances varied significantly in these five categories:
from as few as 18 substances in Acidification to as many as 30,514 substances in Ecotoxicity.
Overall, the selected methods robustly represent the coverages of substances in all other
categories. Under this selection of example categories, we only focus on comparing the
methods that include these five categories (red italics in Figure 10-26).

For the 18 chosen methods, we summarized the complete list of all the specific substances for
each of the five impact categories, i.e., whether they were included in one or all methods. The
GW category is shown here as an example.

Figure 10-27 shows an abridged summary of the characterization factor values for only GW
substances generalized from 14 impact methods provided in SimaPro software (generally the
red rows from Figure 10-26, but “USEtox” methods were not included as they did not have
GW as one of the impact categories). As shown in the results, the coverage of substances (i.e.,
number of elementary flows with characterization factors) varied across methods. Most of the
methods had around 100 GW substances, while a few methods covered fewer than 50. Provided
with the same impact units (kg CO2 eq.), the characterization factor values themselves also
differed across methods. For some widely acknowledged greenhouse gas substances, such as
“carbon dioxide”, all methods agree on a value of 1 kg CO2 eq. (this is not a surprise, because carbon dioxide is assumed to be the numeraire, or reference flow, against which all other GHGs are measured). However, some substances’ characterization factor values differ by more than 100%; for example, the value for “Methane, fossil” varies from 7.6 (ReCiPe (I)) to 72 kg CO2 eq.
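To see how such factor differences propagate into results, recall that a category total is each inventoried emission multiplied by its characterization factor, summed. In this sketch the inventory amounts are invented; only the two methane factors (7.6 and 72 kg CO2 eq.) come from the range quoted above:

```python
# GW impact = sum over substances of (emission amount * characterization factor)
inventory = {"Carbon dioxide, fossil": 1.0,   # kg, illustrative amounts
             "Methane, fossil": 0.05}

# Two hypothetical methods: both use CO2 = 1 (the reference flow), but
# disagree on fossil methane (7.6 vs. 72 kg CO2 eq., per the text)
cf_method_a = {"Carbon dioxide, fossil": 1.0, "Methane, fossil": 7.6}
cf_method_b = {"Carbon dioxide, fossil": 1.0, "Methane, fossil": 72.0}

def gw_total(inv, cf):
    # A substance missing from a method contributes nothing (a coverage gap)
    return sum(amount * cf.get(sub, 0.0) for sub, amount in inv.items())

a = gw_total(inventory, cf_method_a)  # 1.0 + 0.05 * 7.6  = 1.38 kg CO2 eq.
b = gw_total(inventory, cf_method_b)  # 1.0 + 0.05 * 72.0 = 4.60 kg CO2 eq.
```

Even with identical inventories, the two methods report GW totals differing by more than a factor of three, entirely from the characterization factor values.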


Figure 10-27: Characterization factor values (in kg CO2 eq.) for all global warming (GW) substances generalized from 14 widely used impact methods

The observed high variability in the GW characterization factors is somewhat unexpected because, compared to other impact categories, the GW category is considered a relatively well-developed category. The estimation method for values in the GW category was assumed to be
derived from one research agency, the Intergovernmental Panel on Climate Change (IPCC),
and accepted by most of the methods. Thus, the coverages of substances were expected to be
identical or at least similar between different methods. However, the results show that some
methods have much smaller numbers of substances, such as BEES, CML2001, and Eco-indicator (which we stipulate are relatively older, but are still widely available and sometimes
used via LCA software tools). No method includes all substances listed in Figure 10-27. The
ILCD method, for example, includes six substances that are not included in any other
methods; yet it does not include “Propane, perfluorocyclo”, a substance that is included in
most of the other methods.

Apart from the differences in the coverages of substances, the characterization factors also
vary significantly across methods. This variability is beyond the variability expected from using
different specific time intervals. For example, there are six different values for Chloroform across the 14 methods, while there are only three time intervals for the GW category.


Variances in the impact assessment results in the US LCI database

Differences in the coverages from substances, and differences in the characterization factors
are two sources for the variability in the LCIA results. A third source of variability is from
differences in inventories that are connected to LCIA methods. The number of included
substances in the inventory affects the LCIA result. This section evaluates the effects from the
coverages of these GW substances in life cycle inventories.

The 2701 elementary flows in the US LCI database include potential substances in each impact
category. The purpose of this section is to identify these substances by matching their
information with the elementary flows and substances provided in each category. The
information includes the names of the materials or emissions in the elementary flows, as well
as two other restrictive elements: environmental compartments such as air, water, and soil, and impact regions such as lake and river. As an example, Figure 10-28 shows all elementary flows for Mercury in the US LCI database, specified by compartment and impact region. Matlab software was used to match the information in the elementary flows with the substances in each category, by name and these other restrictive elements.

In some cases, the substances provided in impact categories are not specified by impact regions
and compartments. Current practice in LCIA analysis is to apply the substances and their
characterization factors to all possible regions and compartments in the elementary flows. For
instance, in the Ecoindicator 95 method, “Mercury emissions to air” is not regionally specified
in the “heavy metals” impact category. Current practice applies the characterization factor to
all five mercury flows to air in Figure 10-28. Hence, when substances are not specified by impact regions, we identify elementary flows with the same name and compartment. The same
characterization factor value was used for all the matching elementary flows. On the other
hand, a few cases show that the impact regions from some substances are not included in the
elementary flows of the US LCI database. For example, one of the impact regions for Mercury
to soil is “forestry” (ReCiPe Midpoint method). As shown in Figure 10-28 Mercury does not
have forestry as one of its impact regions, which means that the substance of “Mercury,
forestry, soil” does not match any elementary flow in the US LCI database. We assume that the exclusion of such regionally specific substances does not affect the impact results, because if a substance is not included in any process’s inventory, the process has no impact from that substance.
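The matching rules just described (the study implemented them in Matlab) can be sketched as follows; the flow records are a small invented subset of Figure 10-28:

```python
# Sketch of flow-to-substance matching. Each elementary flow is recorded
# as (name, impact region, compartment); these records are illustrative.
flows = [("Mercury", "unspecified", "air"),
         ("Mercury", "high population density", "air"),
         ("Mercury", "ocean", "water")]

def match(flows, name, compartment, region=None):
    """Return flows matching a characterized substance.

    If the method does not specify an impact region (region=None), the
    factor is applied to every flow with the same name and compartment."""
    return [f for f in flows
            if f[0] == name and f[2] == compartment
            and (region is None or f[1] == region)]

# 'Mercury emissions to air', not regionally specified: both air flows match
air_matches = match(flows, "Mercury", "air")
# 'Mercury, forestry, soil' matches nothing: zero impact from that substance
forestry = match(flows, "Mercury", "soil", region="forestry")
```

The empty match for the forestry case is exactly the situation discussed above: a regionally specific substance with no corresponding elementary flow contributes nothing to the result.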


Figure 10-28: Elementary flows for Mercury with different impact regions and compartments in the US
LCI database

Flow No. in USLCI   Name      Impact region              Compartment   Unit
1398                Mercury   unspecified                air           kg
1399                Mercury   unspecified                water         kg
1400                Mercury   industrial                 soil          kg
1401                Mercury   ocean                      water         kg
1402                Mercury   agricultural               soil          kg
1403                Mercury   high population density    air           kg
1404                Mercury   low population density     air           kg
1405                Mercury   stratosphere               air           kg
1406                Mercury   ground-                    water         kg
1407                Mercury   ground-, long-term         water         kg
1408                Mercury   lake                       water         kg
1409                Mercury   river                      water         kg
1410                Mercury   unspecified                soil          kg
1411                Mercury   low. pop.                  air           kg

Figure 10-29 shows the GW substances that have matched elementary flows in the US LCI
database. 37 out of the 110 GW substances shown in the figure were identified as elementary
flows. These GW substances were not regionally specified; they applied to all the region-specific substances. Thus, 94 US LCI elementary flows specified by impact regions were
identified as matches. Some GW substances that have large characterization factor values were
not included in the US LCI database, such as Nitrogen fluoride.


Figure 10-29: Identified GW substances in the US LCI database and their characterization factors in
different methods.

The number of substances for the five chosen impact categories are shown in Figure 10-30.
The first row shows the numbers of substances summarized from all impact methods for each
category. The second row represents the numbers of chemicals in each category that are
included in the elementary flows. Finally, the last row shows the number of elementary flows
that have the same name as the substances in each category. For each impact category, only part of the substances were identified as elementary flows. The US LCI database covers 2701 elementary flows; however, it fails to cover most of the characterized impact substances,
which can lead to neglecting some environmental impacts. The identified substances revealed
that three categories (Acidification, Eutrophication and Ecotoxicity) have different
characterization units from different methods. The difference in the units partially (but not
completely) explains the differences in the characterization factor values. For example, the
acidification category from eight methods uses the same impact unit (kg SO2 eq.), but the
characterization factor values are quite different.


Figure 10-30: Summary of the number of substances and matching elementary flows in five impact
categories for the US LCI database.

                                                         GW    Acidification   Eutrophication   Ozone Depletion   Ecotoxicity
Number of substances from all methods                    110   19              93               108               30,514
Matched chemicals (compartment specified)                35    18              36               19                830
Matched elementary flows considering all impact regions  94    48              71               39                830

Variances in the impact assessment results based on individual processes

Above we identified the substances from all US LCI elementary flows at the level of the whole
database. In this section, the substances are identified at the level of individual US LCI
processes. For simplicity, the matrix method is applied. The values in the B matrix were used
to identify whether a process had certain substances in the direct emission. Each row in the B
matrix represents an elementary flow, which is a direct input or output for the processes in
the columns. The elementary flows that were identified as substances were obtained from the
B matrix. For each column, non-zero values in the rows represent direct emissions from
identified substances in each process. Thus, identifying the non-zero values in the substance
rows efficiently leads to the substances from each process. The zero values were assumed to
have zero effects, thus resulting in no corresponding impacts. All substances summarized from
the 14 different impact methods were used for the identification.
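A minimal sketch of this screening step, with an invented three-flow, three-process B matrix:

```python
import numpy as np

# Toy B matrix: rows = elementary flows, columns = processes.
# Entries are direct emissions per unit of process output (illustrative).
B = np.array([[1.0, 0.0, 0.5],    # flow 0: a GW substance (e.g., CO2)
              [0.0, 0.0, 0.02],   # flow 1: a GW substance (e.g., CH4)
              [0.3, 0.0, 0.0]])   # flow 2: not a GW substance

gw_rows = [0, 1]  # rows identified as GW substances for some method

# Non-zero entries in the substance rows are each process's direct
# GW substances; count them per process (i.e., per column)
counts = (B[gw_rows, :] != 0).sum(axis=0)
# counts -> [1, 0, 2]: the middle process has no GW substance at all
```

A process whose column has all zeros in the substance rows, like the middle process here, would report zero direct GW impact regardless of the characterization factors used.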

Figure 10-31 shows the ranges in the number of substances for gate-to-gate processes. These ranges are sorted, by maximum value, from the process with the largest number of substances (left) to the smallest (right). The two values for each process are the maximum and
minimum number of substances summarized from 14 impact assessment methods. For
example, in Figure 10-31, process number one had a range of identified substances between
11 and 15; this range resulted from using different impact methods. The “cut-off” processes
do not have any output component, thus they were not shown in Figure 10-31. Cradle-to-gate
processes were also excluded in Figure 10-31, because their emissions are not exclusively direct
emissions. Figure 10-31 also omits processes without GW substances. The results show that
the discrepancies between the maximum and minimum numbers can be large. Only 314 (36%) of the 876 gate-to-gate processes had more than one GW substance, and 523 processes do not include any GW substances in their inventories. As shown in Figure 10-31, the processes with the largest numbers of GW substances are fuel combustion processes.

There were some instances in which the number of substances was small or even zero. This
lack of substances induces a small or zero value for the direct GW impact results in most of
the processes (i.e., it would report that the process has no global warming impact). The reason
for the lack of substances can be traced to the inventories of the processes. Most processes
do not include GW substances in their inventories. It is possible that the processes do not have GW emissions. For example, the “electricity, US, grid mix” process is a mix of electricity
generated by different types of fuels. In this case, generating grid electricity does not involve
any real production (no emission is produced from the process). However, these special cases
only apply to a small part of the processes, not to all the 523 processes. GW substance
emissions may also be excluded from the inventory when they are outside the system boundary
of the LCA studies. This exclusion can result in ignoring impact values in the results. When
the system boundary is set, the inclusion/exclusion of certain emissions is often based on
benchmarks referred to as cutoff criteria. The cutoff criteria specify a boundary using mass,
energy or economic value, rather than impact results. Thus, small emissions from substances
are more susceptible to being excluded from the inventory. We discuss cutoff issues more in the next section.

Figure 10-31: Number of Global warming (GW) substances in gate-to-gate processes from US LCI.
Processes sorted from largest number of substances (left) to smallest (right). The two values for each
process are the max and min number of substances in 14 popular impact assessment methods (see
Figure 10-27). “Cut-off” processes in US LCI are not shown, as they do not have an output component. Also omitted are the processes without GW substances after No. 360. The first 10
processes’ names are shown in the table.

So far, we have identified the direct emission substances from each process in the US LCI
database. Now we focus on the indirect/upstream impacts from each process. The final ranges
of impact results were computed from their corresponding characterization factors
(generalized from all popular impact methods). The differences observed in the impact results
were due to the amount of variance in the characterization factors, and the number of
substances covered in each method.

In the direct analysis, we found 548 processes with no GW substances (from any GW
methods) in their direct emissions (i.e., in the B matrix). In the indirect analysis, 280 out of these 548 processes still do not include any GW substances. As an example, Figure 10-32 summarizes the occurrence of GW substances for the processes with the ten highest numbers of GW substances in their direct inventories (i.e., the first ten processes listed in Figure 10-31).
The non-zero substances corresponding to direct and indirect emissions are displayed
separately and highlighted in orange. The results indicate that 1) not surprisingly, more
substances are found in the total emissions; 2) some substances are widely included, such as
fossil carbon dioxide.
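The direct-versus-total distinction follows the standard matrix-based model used here: the scaling vector s solves A s = f, and the total (direct plus indirect) inventory is g = B s. A two-process toy example with invented numbers:

```python
import numpy as np

# Technology matrix A: columns are processes; negative entries are inputs.
# Process 2 consumes 0.5 unit of process 1's product per unit of output.
A = np.array([[1.0, -0.5],
              [0.0,  1.0]])
# Emission matrix B: CO2 emitted directly only by process 2,
# CH4 emitted directly only by process 1 (illustrative values)
B = np.array([[0.0, 2.0],    # CO2
              [0.1, 0.0]])   # CH4

f = np.array([0.0, 1.0])     # final demand: one unit of process 2

direct = B[:, 1] * f[1]              # process 2's direct emissions only
total = B @ np.linalg.solve(A, f)    # direct + indirect (upstream)

# Process 2 has no direct CH4, but its upstream demand on process 1
# brings 0.05 unit of CH4 into the total inventory.
```

This is the mechanism behind the observation above: a process with no direct GW substances can still carry GW substances (and impacts) in its total inventory through its upstream inputs.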


Figure 10-32: Non-zero values for the US LCI processes that have the largest numbers of global
warming (GW) substances. The marked cells in the tables represent non-zero values (the non-zero
values are highlighted in orange).

Ten example processes with only one direct GW substance in the B matrix columns are shown
in Figure 10-33. Perhaps contrary to expectation, when a process has only one GW substance, it is not necessarily “carbon dioxide”, but can also be “dinitrogen monoxide”. As expected,
the numbers of substances in the indirect effects for the same process are larger, further
indicating that indirect emissions are important to the impact results.

Figure 10-32 and Figure 10-33 highlight the substances included in the process inventories.
Though the total effects cover more substances, 50% of substances are still excluded from the
inventories. These excluded substances are included in the US LCI boundary, but only a few processes (mostly the cradle-to-gate processes) considered these substances in their inventory.
Excluding substances can result in missing impact values; this is an issue in LCI databases.
Addressing this issue can improve the quality of databases and facilitate better LCA results.


Figure 10-33: Non-zero values for the US LCI processes that have only one global warming (GW)
substance. The highlighted cells in the tables represent non-zero values. The process number
corresponds to the x-axis in Figure 10-31.

Next, we calculated the total impact assessment results per functional unit for each of the 876
gate-to-gate processes in the US LCI database. All 14 methods were used in the estimation.
Again, Matlab software was used to match substances. Among the 876 gate-to-gate processes,
608 have non-zero GW impact results which are graphed below. Of these 608 processes,
Figure 10-34 shows results for the 50 processes with the largest total GW impact values per
functional unit. The total impact values for each process were calculated based on 14 different
GW methods, as shown in the legend. Each row in the figure represents the 14 total emission values from producing one functional unit of a process. The displayed processes were sorted by the
maximum GW impact value calculated from all methods. Figure 10-35 shows the results for the remaining 558 non-zero-impact processes. The processes with the 10 largest total GW impact values are completely different from the processes with the 10 largest numbers of GW substances in Figure 10-32, for two reasons. First, the 10 largest numbers of GW substances shown in Figure 10-32 are based on the direct emissions for each process, while the 10 largest GW impact values in Figure 10-34 are based on total emissions. Second, the GW impact values are
affected by the amounts of substances emitted as well as the characterization factor values for the substances, so the results are expected to differ from a mere count of the numbers of substances for a process.

In general, the GW results are similar across different methods. Among the 608 processes,
517 have less than 5% absolute departure from the average value (i.e., the difference between
the maximum or minimum value and the average is less than 5%). Some big outliers from two
impact methods are observed. No. 17 to No. 23 on the x-axis, for example, have larger values
calculated from the Greenhouse gas protocol method (light blue squares). On the other hand,
processes No. 305 to No. 310 have larger values from the CML 2001 method.
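The 5% agreement screen used above can be expressed as the largest departure of the extreme values from their mean, relative to the mean. A sketch with invented per-method results for two hypothetical processes:

```python
# Maximum relative departure of the extremes from the mean across methods
def max_departure(values):
    mean = sum(values) / len(values)
    return max(abs(max(values) - mean), abs(min(values) - mean)) / mean

results_close = [1.00, 1.02, 0.99, 1.01]    # hypothetical kg CO2 eq. results
results_outlier = [1.00, 1.02, 0.99, 1.60]  # one method disagrees strongly

close_agrees = max_departure(results_close) < 0.05      # within 5% of mean
outlier_agrees = max_departure(results_outlier) < 0.05  # fails the screen
```

The second process would be flagged as an outlier case, like those driven by the Greenhouse gas protocol or CML 2001 methods in the results above.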


Figure 10-34: GW values in kg CO2 eq per functional unit, from 14 impact methods, marked by the
different symbols described in the legend. Processes No.1 to No.10 are shown on the left, and
processes No. 11 to No. 50 are shown on the right. The processes are sorted by total impact results including direct and indirect impacts. The first 10 processes’ names, industrial categories, functional
units, and original index numbers from the US LCI database are listed in the table.


Figure 10-35: GW values in kg CO2 eq per functional unit from 14 impact methods for processes No.
50 to No. 620. The processes are sorted by the total impact results including direct and indirect
impacts. Other processes with zero GW impact from any of the methods are not shown.

The processes provided in Figure 10-34 and Figure 10-35 were sorted to visualize the
uncertainties in all processes; however, the functional units of these processes are different,
making comparisons of uncertainties between processes impossible. In addition, the processes
from the same industry could have similar impacts. Thus, we also grouped the processes from
the same industries together to enhance the interpretation of results. Figure 10-36, Figure
10-37, and Figure 10-38 provide GW impact results for three categories: transportation,
electricity, and plastic material. The results show that processes within the same category
generally have similar impact results; possibly due to the same functional unit used in each
category. The results also show that within the three categories, the maximum and minimum
impact values are generally from the same impact methods. IPCC 2007 20a often provides the
largest GW impact, while ReCiPe Midpoint (I) results in the minimum impact value. These
two methods cover the same numbers of substances, thus, the differences in the impact values
were due to the differences in the characterization factor values. In some cases, such as the results for plastic materials in Figure 10-38, there are clearly three sets of GW impacts. These are the results from the three GW time intervals: short, medium, and long term. The differences in the characterization factors are the sources of the variance between these three sets.


Figure 10-36: GW impact values in kg CO2 eq per ton-km for 190 transportation processes in the US
LCI database.

Figure 10-37: GW impact values in kg CO2 eq per kWh for 98 electricity processes in the US LCI
database.


Figure 10-38: GW impact values in kg CO2 eq per kg for 23 plastic material processes in the US
database

The results above were high-level summaries across all processes in US LCI to demonstrate
the variability in results from various GW methods. Showing the detailed results calculated
from LCIA methods can help LCA practitioners to understand the uncertainties caused by selecting different impact methods. This is most easily understood when considering a single process, since choosing an impact method is the type of decision a practitioner makes when putting together a study. As such, Figure 10-39 shows the GW impacts calculated from 14 different methods
for an example process, “Crude palm kernel oil, at plant”. The process was chosen because, compared to most processes, it has a larger range of GW impact values. Among the 14 methods, two (CML2001 and GHG Protocol) differ in the substances they cover.

Figure 10-39: Total GW impacts for the “Crude palm kernel oil, at plant” process calculated from 14
different methods. Two of the 14 methods listed in the figure have different covered substances (The
markers are the same as in Figure 10-34).


Thus, the discrepancies among the remaining 12 methods are caused entirely by differences in their characterization factors. As shown in Figure 10-40, the larger value from the GHG Protocol method is due to its characterization factor for "Carbon dioxide, biogenic", a substance not included in any of the other methods. On the other hand, the CML2001 method includes two substances ("carbon monoxide" and "carbon monoxide, fossil") that are generally excluded from other methods; by excluded, we mean that the chemicals do not appear in the method's list of substances. However, because the impact values from these two substances are small, the total impact result from the CML2001 method falls within the range of the other methods. The CML2001 method also covers three fewer substances than most other methods, two of which have large characterization factor values (more than 10,000 kg CO2-eq). Because the total inventoried emissions of those substances are quite small (about 1E-7 kg), their GW impact contribution is negligible, so the total GW impact calculated from the CML2001 method is not the smallest among all methods. It can be concluded that for this process, the discrepancies in the total impact results are caused by differences in: 1) the total (direct + indirect) emission values for the characterized substances; 2) the coverage of substances in each method; and 3) the characterization factor values of the substances. From the example in Figure 10-39 and the results from all US LCI processes, we can generally conclude that outliers in the impact results are often caused by the inclusion of certain unusual substances in a particular method, while the ranges in the impact results can be attributed to differences in the characterization factors.

Figure 10-40: GW substances for the “Crude palm kernel oil, at plant” process, and impacts for the
substances calculated from 14 impact methods. The cells with larger values are highlighted with
darker background.
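The mechanics behind these discrepancies can be sketched in a few lines of code. The inventory values and characterization factors below are hypothetical (they are not the actual "Crude palm kernel oil, at plant" data or real method factors); they only illustrate how coverage and factor differences propagate into characterized totals:

```python
# Hypothetical inventory for one process (kg emitted per functional unit).
inventory = {
    "Carbon dioxide, fossil": 1.0,
    "Methane": 0.01,
    "Carbon dioxide, biogenic": 0.5,
    "Carbon monoxide": 1e-7,
}

# Hypothetical GW characterization factors (kg CO2-eq per kg) for three
# stylized methods differing in substance coverage, as discussed in the text.
methods = {
    "Typical method": {"Carbon dioxide, fossil": 1, "Methane": 25},
    # GHG-Protocol-like: also characterizes biogenic CO2
    "Biogenic-inclusive": {"Carbon dioxide, fossil": 1, "Methane": 25,
                           "Carbon dioxide, biogenic": 1},
    # CML-like: also characterizes CO, but its inventoried flow is tiny
    "CO-inclusive": {"Carbon dioxide, fossil": 1, "Methane": 25,
                     "Carbon monoxide": 2},
}

def gw_impact(inventory, factors):
    """Characterized impact: sum of flow mass times factor over covered substances."""
    return sum(mass * factors.get(substance, 0.0)
               for substance, mass in inventory.items())

for name, factors in methods.items():
    print(f"{name}: {gw_impact(inventory, factors):.4f} kg CO2-eq")
```

The biogenic-inclusive method stands out as an outlier (its total jumps by the full mass of biogenic CO2), while the CO-inclusive method's extra coverage changes the total negligibly, mirroring the GHG Protocol and CML2001 behavior described above.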

It would be useful to incorporate visualizations, like those above, as readily available tools in
LCA software. These visualizations improve the understanding of the LCIA results by
showing the main sources of discrepancies. We believe that these visualizations would enable
LCIA users to make more robust decisions.

(END OF CHAPTER MATERIAL – THE NEXT FEW PAGES ARE WORK IN PROGRESS)

We motivate an example based on the two US LCI-based processes Electricity, bituminous coal, at power plant and Electricity, natural gas, at power plant from the US LCI process matrix spreadsheet LCIA tool of Chapter 10. Using the Global warming, air LCIA method in that spreadsheet tool, producing 1 kWh of electricity from each of the two processes yields the results shown in Figure 10-41 (similar to those already shown in Chapter 10).

Flow                                             Emissions (kg CO2-e)    Percent of Total
Total for bituminous coal-fired electricity 1.08
Carbon dioxide, fossil 1.033 95.7%
Methane 0.046 4.2%
Methane, fossil 0.0003 0.03%
Carbon dioxide, biogenic 0.0003 0.03%
Carbon dioxide, in air 0.0003 0.02%
Dinitrogen monoxide 0.0001 0.01%
Total for natural gas-fired electricity 0.72
Carbon dioxide, fossil 0.634 87.96%
Methane 0.072 9.98%
Methane, fossil 0.011 1.56%
Dinitrogen monoxide 0.0033 0.45%
Carbon dioxide, in air 0.0002 0.03%
Carbon dioxide, biogenic 0.0002 0.03%
Figure 10-41: Top flows contributing to global warming for 1 kWh of coal and gas electricity (US LCI),
using IPCC AR4 100-year GWP values

These TRACI results use the IPCC AR4 100-year global warming potential values, including 25 kg CO2e/kg CH4 for methane. If we were to consider the sensitivity of our results to the IPCC GWPs (holding all of the technological flows and underlying emissions factors constant), we could consider various changes: (1) simple sensitivity ranges around the default GWPs used (e.g., + or - 20%); (2) updating the GWP values to the IPCC fifth report (AR5) values shown in Chapter 10; and/or (3) using the 20-year GWPs instead. Additional sensitivities not associated with the GWP values could assess the effect of different heat rates or emissions factors.
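The three cases below can be sketched with a short calculation. The elementary flow masses are approximate values back-calculated from the characterized totals in Figure 10-41 (e.g., 0.046 kg CO2-e of methane at a GWP of 25 implies roughly 0.00184 kg CH4; the two methane flows are combined, and the tiny biogenic and in-air CO2 flows are omitted), and only the methane GWP is varied:

```python
# Approximate elementary flow masses (kg per kWh), back-calculated from the
# characterized values in Figure 10-41; methane and methane, fossil combined.
coal = {"co2_fossil": 1.033, "ch4": 0.001852, "n2o": 3.4e-7}
gas  = {"co2_fossil": 0.634, "ch4": 0.003320, "n2o": 1.11e-5}

N2O_GWP = 298  # AR4 100-year value, held constant in all cases

def gw_total(flows, ch4_gwp):
    """kg CO2-eq per kWh for a given methane GWP (CO2 factor is 1 by definition)."""
    return flows["co2_fossil"] + flows["ch4"] * ch4_gwp + flows["n2o"] * N2O_GWP

for label, ch4_gwp in [("AR4 100-yr (base)", 25),
                       ("Base -20%", 20), ("Base +20%", 30),
                       ("AR5 100-yr", 34), ("AR5 20-yr", 86)]:
    c, g = gw_total(coal, ch4_gwp), gw_total(gas, ch4_gwp)
    print(f"{label}: coal {c:.2f}, gas {g:.2f}, gas is {100 * (c - g) / c:.1f}% lower")
```

The printed totals reproduce Figures 10-41 through 10-44 to two decimal places, including the narrowing of the coal versus gas gap as the methane GWP rises.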

Case 1 – Sensitivity of methane AR4 values

In an initial case, we consider only the changes in estimated emissions from varying the methane GWP value from its base of 25 kg CO2e/kg CH4 by + or – 20%, as shown in Figure 10-42 (of course, only the methane values change relative to Figure 10-41). This range of change is negligible and would not alter our comparative result that natural gas is significantly less GHG intensive than coal. The methane parameter alone would have to change by substantially more than 20% to make a difference.

Flow                                             Emissions (kg CO2-e)
                                                 -20% Base      Base      +20% Base
Total for bituminous coal-fired electricity 1.07 1.08 1.09
Carbon dioxide, fossil 1.033 1.033 1.033
Methane 0.036 0.046 0.055
Methane, fossil 0.0003 0.0003 0.0003
Carbon dioxide, biogenic 0.0003 0.0003 0.0003
Carbon dioxide, in air 0.0003 0.0003 0.0003
Dinitrogen monoxide 0.0001 0.0001 0.0001
Total for natural gas-fired electricity 0.70 0.72 0.74
Carbon dioxide, fossil 0.634 0.634 0.634
Methane 0.058 0.072 0.086
Methane, fossil 0.009 0.011 0.013
Dinitrogen monoxide 0.0033 0.0033 0.0033
Carbon dioxide, in air 0.0002 0.0002 0.0002
Carbon dioxide, biogenic 0.0002 0.0002 0.0002
Figure 10-42: Top flows contributing to global warming for 1 kWh of coal and gas electricity (US LCI),
given 20% sensitivity in AR4 methane values

Case 2 – Updating methane GWP values to AR5

The GWP value for CO2 is always 1 (by definition). In AR5, the 100-year GWP for methane was increased to 34.31 Figure 10-43 shows the updated results for the higher methane value in both the coal and gas alternatives. The gap between natural gas and coal has closed a bit, although the higher methane GWP in AR5 also increases the emissions of the coal-fired alternative, given the methane emissions from coal beds, etc.

31This can be done by manually editing the GWP factor for methane in the ‘TRACI 2.1’ sheet of the US LCI database LCIA-smaller
spreadsheet available in the course resources for Chapter 10. It is cell E96.

Flow                                             Emissions (kg CO2-e)    Percent of Total
Total for bituminous coal-fired electricity 1.1
Carbon dioxide, fossil 1.033 94.3%
Methane 0.062 5.7%
Methane, fossil 0.0004 0.04%
Carbon dioxide, biogenic 0.0003 0.02%
Carbon dioxide, in air 0.0003 0.02%
Dinitrogen monoxide 0.0001 0.01%
Total for natural gas-fired electricity 0.75
Carbon dioxide, fossil 0.634 84.5%
Methane 0.1 13.0%
Methane, fossil 0.015 2.03%
Dinitrogen monoxide 0.0033 0.43%
Carbon dioxide, in air 0.0002 0.03%
Carbon dioxide, biogenic 0.0002 0.03%
Figure 10-43: Top flows contributing to global warming for 1 kWh of coal and gas electricity (US LCI),
using IPCC AR5 100-year GWP values

Case 3: Considering 20-year methane GWP values

While the typical analysis uses the 100-year GWP values, if the 20-year GWP values in AR5 are instead used (now 86 for methane, more than 3 times higher than the base case AR4 100-year value!), the coal versus gas results are as shown in Figure 10-44. The coal vs. gas-fired electricity comparison that began in Figure 10-41 with a fairly straightforward comparative result of gas being 33% lower than coal in terms of global warming now looks far less compelling: only about 22% lower.

Flow                                             Emissions (kg CO2-e)    Percent of Total
Total for bituminous coal-fired electricity 1.19
Carbon dioxide, fossil 1.033 86.7%
Methane 0.157 13.2%
Methane, fossil 0.001 0.09%
Carbon dioxide, biogenic 0.0003 0.02%
Carbon dioxide, in air 0.0003 0.02%
Dinitrogen monoxide 0.0001 0.01%
Total for natural gas-fired electricity 0.92
Carbon dioxide, fossil 0.634 68.6%
Methane 0.247 26.8%
Methane, fossil 0.039 4.18%
Dinitrogen monoxide 0.0033 0.35%
Carbon dioxide, in air 0.0002 0.02%
Carbon dioxide, biogenic 0.0002 0.02%
Figure 10-44: Top flows contributing to global warming for 1 kWh of coal and gas electricity (US LCI),
using IPCC AR5 20-year GWP values

Figure 10-45 shows this sensitivity, expressed as a range, as the methane GWP is varied across the IPCC reports.

Figure 10-45: Visual Summary of Sensitivity of Coal vs. Gas Comparison as a Range

Probability and Statistics Knowledge Needed For Chapter 11

Chapter 11 discusses several qualitative and quantitative aspects of uncertainty that are relevant
to any kind of modeling, particularly LCA models.

While the core discussion in this chapter consists of senior undergraduate level material (the Advanced Material section goes beyond), we assume knowledge of statistics such as ranges, confidence intervals, means, medians, and standard deviations. Some of this terminology was mentioned in Chapter 2, but readers not familiar or comfortable with these concepts from a quantitative (problem-solving) standpoint should review them before reading Chapter 11.

Some good web-based overviews are available at:

• stattrek.com

• khanacademy.org

Chapter 11: Uncertainty in LCA

IO-based model uncertainties: While various IO-specific model issues were mentioned in Chapter 8, those leading to relevant uncertainties are summarized here. Recall that IO models are inherently linear in production (e.g., the effects of producing $1,000 from a sector are exactly 10 times those of producing $100 from the same sector), implying that there are no capacity constraints or scale economies. Likewise, the environmental impact vectors generally use average impacts per dollar of output across the entire sector, even though the effects of a production change may be better represented by incremental or marginal values. For example, increased demand for a new product might be met by a plant with advanced energy efficiency or pollution control equipment, or with brand new technology. If production functions are changing rapidly, are discontinuous, or are not marginal, then the linear approximations will be relatively poor. Reducing errors of this kind requires more effort on the part of the LCA practitioner. A simple approach is to alter the parameters of the IO-LCA model to reflect the user's beliefs about their actual values. Thus, estimates of marginal changes in R vectors may be substituted for the average values provided in the standard model.
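The linearity assumption, and the simple fix of substituting user-supplied marginal intensities, can be sketched as follows. The A matrix, R vector, and dollar amounts are invented for illustration:

```python
import numpy as np

# Illustrative 2-sector IO model: A = direct requirements ($ per $ of output),
# R = sector environmental intensities (e.g., kg CO2-eq per $ of output).
A = np.array([[0.15, 0.25],
              [0.10, 0.05]])
R = np.array([0.8, 0.3])            # average intensities
L = np.linalg.inv(np.eye(2) - A)    # Leontief inverse (I - A)^-1

def impact(final_demand, r=R):
    """Total supply-chain impact: r . (I - A)^-1 . f"""
    return float(r @ L @ final_demand)

f100 = np.array([100.0, 0.0])
# Linearity: $1,000 of demand yields exactly 10x the impact of $100.
assert np.isclose(impact(10 * f100), 10 * impact(f100))

# A user who believes sector 1's *marginal* production is cleaner than the
# sector average can substitute an adjusted R vector:
R_marginal = np.array([0.5, 0.3])
print(impact(f100), impact(f100, R_marginal))
```

The same total requirements vector is reused in both calls; only the intensity vector changes, which is exactly the substitution described in the text.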

As one indication of the degree of uncertainty, Lenzen (2000) estimates that the average total
relative standard error of input–output coefficients is about 85%. However, because numerous
individual errors in input–output coefficients cancel out in the process of calculating economic
requirements and environmental impacts, the overall relative standard errors of economic
requirements are only about 10–20%.
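This cancellation effect can be illustrated with a small Monte Carlo sketch. The matrix size, coefficient values, and error magnitude below are invented; the point is only that independent coefficient errors largely average out in the Leontief totals:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10
A = np.full((n, n), 0.05)   # illustrative IO coefficients (column sums = 0.5)
f = np.ones(n)              # final demand
sigma = 0.3                 # ~30% multiplicative error on each coefficient

totals = []
for _ in range(2000):
    # Perturb every coefficient independently with lognormal error
    A_err = A * rng.lognormal(mean=0.0, sigma=sigma, size=(n, n))
    x = np.linalg.solve(np.eye(n) - A_err, f)   # total requirements
    totals.append(x.sum())

totals = np.asarray(totals)
rel_error_out = totals.std() / totals.mean()
print(f"coefficient rel. error ~{sigma:.0%}, output rel. error ~{rel_error_out:.1%}")
```

Even with roughly 30% error on every individual coefficient, the spread of the total requirements is only a few percent, consistent with Lenzen's observation that aggregate errors are far smaller than coefficient errors.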

Process-based model uncertainties: While process-based models are often viewed as preferable given their connection to specific technological descriptions and data, process models also have uncertainties. As with IO models, they are only as good as the process data available within the model, both in terms of resolution (the number of processes in the model) and the number of input and output flows available. They are also typically linear.

Foreign production: Process or IO-based models may assume
that manufacture of inputs occurs in the same geographic region as the manufacture of the
product. For example, IO-LCA models of a country represent typical production across
the supply chain within each sector of the home country, even though some products and
services might be produced outside the country and imported. The impacts of production

in other regions may be the same, or substantially different. This variation could be the
result of different levels of environmental regulation or protection. For example, leather
shoes imported from China were probably produced with different processes, chemicals,
and environmental discharges than leather shoes produced in the US.

Cutoff uncertainty: As originally mentioned in Chapter 9, the process flow diagram approach
is particularly vulnerable to missing data resulting from choices of analytical boundaries, also
called truncation error. Lenzen (2000) reports that truncation errors on the boundaries will
vary with the type of product or process considered, but can be on the order of 50% (i.e., 50%
of the effects may be truncated). To make process-oriented LCA possible at all, truncation of
some sort is essential. From your observations of the components of process matrix models,
you may have noticed that they are often dominated by manufacturing and energy production
processes, with limited or no data associated with support services. This becomes less important for studies with narrow scopes that exclude such ancillary services; however, such studies are necessarily more limited in value.

Aggregation uncertainty: Aggregation issues arise primarily in IO-based models and occur
because of heterogeneous producers and products within any one input–output sector.
However, similar issues may occur in process-based models, such as those that aggregate
various production processes into a single average method, e.g., electricity ‘grid’ processes
representing a weighted average of underlying processes.

It is normal to want more detail than is available from a particular model’s sectors or processes,
but such desires need to be balanced against the level of data available. Even the hundreds of
sectors of large and detailed IO models do not give us detailed information on many particular
products or processes. For example, we might seek data on lead-acid or nickel-metal hydride
batteries, but the model may only have primary and secondary battery sectors. Likewise, a
single battery production sector may contain tiny hearing aid batteries as well as massive cells
used to protect against electricity blackouts in residential or commercial buildings. $1 million
spent on hearing aid batteries will use quite different materials and create different
environmental discharges than $1 million spent on huge batteries. But all production within
an IO sector is assumed to be uniform (identical), and the economic and environmental effect
values in the model are effectively averaged across all products within it.

Software tool uncertainty: While most major LCA software tools use matrix-based methods, and thus largely depend on matrix multiplication of the same underlying data, they may generate different results. This is because of internal differences in how parameters, assumptions, etc., are applied. Software tools may also have graphical user interfaces that obscure differences in the underlying data, leading to differences in what is actually modeled.

Impact assessment method uncertainty: Chapter 10 outlined the various components of LCIA. There are various uncertainties associated with LCIA, such as in the choice or use of one impact assessment method versus another, and the use or choice of particular
characterization factors (such as a 20-year versus 100-year GWP value, or an AR4 vs. AR5 value), and effects from grouping, normalization, or weighting.

Methods and tools, not just data and inputs, can thus contribute significant uncertainty to LCA results. Generally, the best means to compensate for method uncertainty is to use multiple methods and compare results.

Semi-Quantitative: Uncertainty Via Pedigree Matrix Using Data Quality Indicators

Another motivation for uncertainty assessment arises when trying to use available data in broader applications than the context in which the original data were developed. As an example, recall the truck transportation process introduced in Chapter 6 with the flow of 10 kg fossil CO2 per one-way trip. Its context was delivery of fruit and vegetables from farm to market in a presumably small truck. This estimate could have been formalized as a generic public LCI process dataset for delivery by truck. Subsequent practitioners may require an estimate of the transportation-oriented GHG emissions of delivering an item in a product system by truck. Given no other data, this public fruit delivery dataset could be used. If the subject and scope of the new study are the same, then the LCI values could be used without modification. However, it is likely that the product being shipped, the age and size of the truck, etc., would differ. In such cases, a method is needed to derive a representative LCI value.

The pedigree matrix approach for quantitatively representing uncertainty (which we will call
PMA), originally developed by Weidema et al (1996), leverages data quality indicators (DQI,
first introduced in Chapter 5) to derive an aggregate quantitative representation of uncertainty
in LCI data to be reused in a subsequent model. This approach creates a mathematical
relationship that can be used to consider ranges or distributions of raw LCI data for improved
decision making associated with data reuse. A complete discussion of the approach is not
possible in this chapter, but the key highlights are provided. The inspiration for PMA was the
observation that an available LCI dataset, while true for a specific process or context, may
have more limited applicability when applied to other contexts. The PMA approach is the
quantitative basis behind the uncertainty assessment in software tools such as SimaPro (note
that the uncertainty assessment features of SimaPro are not available in the general
education/course version and are only available in advanced research versions).

The 1996 version of PMA has been revisited and updated. The most recent PMA, currently
used for ecoinvent v3, considers 5 categories of DQIs, and gives numerical scores (1-5) for
each of the categories for a particular LCI dataset, as applied to another product system. PMA
is classified in this semi-quantitative section of uncertainty assessment because while the
mathematical techniques are quantitative, the DQI assessment is otherwise subjective. PMA
also provides a rubric to aid in determining the appropriate numerical scores to assign for each
category, as shown in Figure 11-1.

Scores range from 1 (best) to 5 (default/worst):

Reliability:
1 = Verified data based on measurements
2 = Verified data partly based on assumptions, or non-verified data based on measurements
3 = Non-verified data partly based on qualified estimates
4 = Qualified estimate (e.g., by industrial expert)
5 = Non-qualified estimate

Completeness:
1 = Representative data from all sites relevant for the market considered, over an adequate period to even out normal fluctuations
2 = Representative data from >50% of the sites relevant for the market considered, over an adequate period to even out normal fluctuations
3 = Representative data from only some sites (<<50%) relevant for the market considered, or >50% of sites but from shorter periods
4 = Representative data from only one site relevant for the market considered, or some sites but from shorter periods
5 = Representativeness unknown, or data from a small number of sites and from shorter periods

Temporal Correlation:
1 = Less than 3 years of difference to the time period of the dataset
2 = Less than 6 years of difference to the time period of the dataset
3 = Less than 10 years of difference to the time period of the dataset
4 = Less than 15 years of difference to the time period of the dataset
5 = Age of data unknown, or more than 15 years of difference to the time period of the dataset

Geographical Correlation:
1 = Data from area under study
2 = Average data from larger area in which the area under study is included
3 = Data from area with similar production conditions
4 = Data from area with slightly similar production conditions
5 = Data from unknown or distinctly different area (North America instead of Middle East, OECD-Europe instead of Russia)

Further Technological Correlation:
1 = Data from enterprises, processes and materials under study
2 = Data from processes and materials under study (i.e., identical technology) but from different enterprises
3 = Data from processes and materials under study but from different technology
4 = Data on related processes or materials
5 = Data on related processes on laboratory scale or from different technology

Figure 11-1: Pedigree matrix used in ecoinvent v3 (Source: Weidema 2013)

In the approach, assessed DQI Scores of 1 are the best, indicating that the original LCI data
very closely matches the needs of the product system being studied (i.e. the one in which the
analyst is considering ‘reuse’ of the data), resulting in relatively low uncertainty around the
reused values. DQI scores of 5, on the other hand, are the worst, indicating that the LCI data may be a poor match to the needs of the product system being studied. A benefit of
the pedigree matrix approach is that the technique is simple to understand and easy to apply
for practitioners with limited backgrounds in quantitative uncertainty assessment. A helpful
reminder is Figure 7-4, which showed the age distribution of the various data in the US LCI
database, which may help motivate why increased uncertainty might be associated with older
data (related to assessed temporal scores). The following three steps comprise the pedigree
matrix approach (PMA) introduced above.

PMA Step 1: Qualitative Assessment of LCI Data

An LCI data module can be assessed using the DQI table above by looking at the metadata
and other sources and determining how well it corresponds to the rubric. For example, a
dataset that is from currently collected primary data would receive a ‘1’ score for temporal
correlation. Once all 5 categories (rows) have been assessed with the rubric, the overall
pedigree matrix score can be represented compactly as (2,3,1,4,2) for the scores in order as
they appear in the matrix. While this score is simple to represent, a more detailed justification
of the 5 values assessed should be provided.

PMA Step 2: Parameterization of Assessment Scores

As shown below, PMA uses parameters in an equation to derive total uncertainty, based on
the scores from Step 1. Specific parameters for each data quality category have been provided
by the PMA authors to be used in assessing uncertainty32, as shown in Figure 11-2.
Score                               1      2      3      4      5
Reliability                         1      1.54   1.61   1.69   N/A
Completeness                        1      1.03   1.04   1.08   N/A
Temporal Correlation                1      1.03   1.10   1.19   1.29
Geographical Correlation            1      1.04   1.08   1.11   N/A
Further Technological Correlation   1      1.18   1.65   2.08   2.80
Figure 11-2: Data Quality Indicator Uncertainty Factors (Source: Ciroth, 2013)

Another core insight incorporated into PMA is that the relative uncertainty of different
inventory flows varies. It is likely intuitive that the inventory flow per functional unit for a
given generic type of process (like electricity or transportation) is more certain for energy or
GHG emissions than it is for other air pollutants or releases of toxics. Energy use is very firmly
tied to the process, and GHG emissions are tied to fossil fuel combustion. However, quantities
of conventional pollutants or toxic releases could depend on specific fuel formulations,
specific engine optimization scenarios, pollution control technologies, etc. The pedigree
matrix approach deals with this by also suggesting basic uncertainty factors (Ub) for various
categories of inventory flows, as summarized in Figure 11-3.

The basic uncertainty factors are combined with the data quality factors to create a total
representation of uncertainty.

32 The 2013 updated version of the pedigree matrix and parameters results in significantly greater uncertainty than previous versions. As such, the original and updated approaches cannot be used interchangeably.

Input / output group                                              C      P      A

Demand of:
Thermal energy, electricity, semi-finished products,
working material, waste treatment services 1.05 1.05 1.05
Transport services (tkm) 2 2 2
Infrastructure 3 3 3
Resources:
Primary energy carriers, metals, salts 1.05 1.05 1.05
Land use, occupation 1.5 1.5 1.5
Land use, transformation 2 2 2
Pollutants emitted to air:
CO2 1.05 1.05
SO2 1.05
NMVOC total 1.5
NOX, N2O 1.5 1.4
CH4, NH3 1.5 1.2
Individual hydrocarbons 1.5 2
PM>10 1.5 1.5
PM10 2 2
PM2.5 3.00 3
Polycyclic aromatic hydrocarbons (PAH) 3.00
CO, heavy metals 5.00
Inorganic emissions, others 1.5
Radionuclides (e.g. Radon-222) 3
Pollutants emitted to water:
BOD, COD, DOC, TOC, inorganic compounds (NH4, PO4, NO3,
Cl, Na, etc.) 1.5
Individual hydrocarbons, PAH 3
Heavy metals 5 1.8
Pesticides 1.5
NO3, PO4 1.5
Pollutants emitted to soil:
Oil, hydrocarbon total 1.5
Heavy metals 1.5 1.5
Pesticides 1.2

Figure 11-3: Basic uncertainty factors for categories of inventory flows
(C = combustion emissions, P = process emissions, A = agricultural emissions)

The basic uncertainty factors could also be used to inspire relevant values for significance
heuristics mentioned above, e.g., the heuristic for energy or GHGs is 10% given the basic
uncertainty factor of 1.05.

PMA Step 3: Calculation of geometric mean for uncertainty

PMA calculates a total uncertainty (UT), equal to the square of the geometric standard deviation across the 5 DQI parameters and the basic uncertainty factor, using Equation 11-1:

UT = σg² = exp( √[ (ln Ub)² + Σi (ln Ui)² ] )     (11-1)

With this calculated value UT, an existing deterministic LCI dataset can be transformed into one representing uncertainty. Typically, the pedigree matrix approach assumes that instead of being deterministic, the data are represented by a lognormal distribution with parameters (µg, σg), where µg is the deterministic LCI value and the geometric standard deviation σg (not σg²) comes from Equation 11-1.

For those uninterested in the probability distribution treatment of uncertainty, simple ranges can be estimated. As noted in Weidema (2013), in a lognormal distribution, 68% of the data lies in the range µg/σg to µg·σg, 95% of the data lies in the interval µg/σg² to µg·σg², and 99.7% of the data lies in the interval µg/σg³ to µg·σg³. Thus a 95% confidence interval for a lognormal distribution is the range [µg/σg², µg·σg²].

Example 11-1: Assume that the transportation data example from earlier in the chapter is
assessed with the pedigree matrix approach as (2,3,1,4,2). Given the original deterministic value
of 10 kg fossil CO2, what lognormal distribution parameters represent the data, and what would
the range of values be for a 95% confidence interval around the LCI value?

Answer: Since the LCI inventory flow of interest is CO2, the basic uncertainty factor (Ub) is 1.05 (from Figure 11-3). (2,3,1,4,2) are the five data quality scores, with Ui values coming from Figure 11-2. Using these values in Equation 11-1 yields:

UT = σg² = exp( √[ (ln 1.05)² + (ln 1.54)² + (ln 1.04)² + (ln 1)² + (ln 1.11)² + (ln 1.18)² ] )

so UT = σg² = 1.61. Since UT is the square of the geometric standard deviation, the standard deviation parameter of the lognormal distribution is the square root of 1.61, or 1.27. The other parameter of the distribution is the raw value of 10 (kg CO2/unit).

Using the formula above, the 95% confidence interval is [µg/σg², µg·σg²] = [10/1.61, 10×1.61] = [6.2, 16.1], meaning we would have very high confidence that the flow of CO2 lies in this range, roughly -40% to +60% of the original 10 kg.
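The calculation in Example 11-1 can be sketched directly from Equation 11-1. The factor tables are transcribed from Figure 11-2 (DQI factors) and Figure 11-3 (the basic uncertainty factor for CO2):

```python
import math

# DQI uncertainty factors from Figure 11-2, indexed by score 1-5 (None = N/A)
FACTORS = {
    "reliability":   [1, 1.54, 1.61, 1.69, None],
    "completeness":  [1, 1.03, 1.04, 1.08, None],
    "temporal":      [1, 1.03, 1.10, 1.19, 1.29],
    "geographical":  [1, 1.04, 1.08, 1.11, None],
    "technological": [1, 1.18, 1.65, 2.08, 2.80],
}
CATEGORIES = ["reliability", "completeness", "temporal",
              "geographical", "technological"]

def total_uncertainty(scores, u_basic):
    """Equation 11-1: UT = sigma_g^2 = exp(sqrt((ln Ub)^2 + sum (ln Ui)^2))."""
    u = [FACTORS[cat][s - 1] for cat, s in zip(CATEGORIES, scores)]
    ssq = math.log(u_basic) ** 2 + sum(math.log(ui) ** 2 for ui in u)
    return math.exp(math.sqrt(ssq))

# Example 11-1: CO2 flow (Ub = 1.05 from Figure 11-3), DQI scores (2,3,1,4,2)
u_t = total_uncertainty((2, 3, 1, 4, 2), u_basic=1.05)
mu = 10.0                       # deterministic LCI value, kg CO2
lo, hi = mu / u_t, mu * u_t     # 95% CI: [mu / sigma_g^2, mu * sigma_g^2]
print(f"UT = {u_t:.2f}, 95% CI = [{lo:.1f}, {hi:.1f}] kg CO2")
```

Running this reproduces the UT of 1.61 and the [6.2, 16.1] interval derived in the example.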

Limitations of the Pedigree Matrix Approach

Critics of the pedigree approach have noted that despite its broad and simple applicability, it
sets arbitrary bounds on uncertainty based on the DQI scores and provides only implicit
treatment of uncertainty. It does not use real data and effectively assumes the inventory data
in the original process is certain. As such, using it as a basis for more advanced uncertainty
assessment (such as simulation, shown below) creates the impression that the uncertainty
assessment is more substantive than it really is. The method would yield the exact same
representation of uncertainty for two processes with the same data quality (even if there was
more variability for one of them). For practitioners with access to multiple data sources and/or
the ability to perform direct uncertainty assessment, the pedigree matrix approach may be
unnecessary.

Software packages like SimaPro implement the PMA method for ecoinvent endogenously. For
example, in advanced versions of SimaPro that include uncertainty assessment, each of the
ecoinvent processes contains a data quality rating (i.e., the series of 5 values).

However, these ratings may or may not be relevant when considering the re-use of an
ecoinvent LCI dataset for a specific process for a particular study (you might want to otherwise
modify the 5 values for your own purposes).

Deterministic and Probabilistic LCCA


Our examples so far, as well as many LCCAs (and LCAs, as we will see later), are deterministic. That means they are based on single, fixed values of assumptions and parameters; more importantly, it implies that there is no risk or uncertainty that the result might be different. Of course, it is very rare that any big decision we might want to make lacks risk or uncertainty. Probabilistic or stochastic models, in contrast, are built to represent expected uncertainty, variability, or chance.

Let us first consider a hypothetical example of a deterministic LCCA as done in DOT (2002).
The example considers two project alternatives (A and B) over a 35-year timeline. Included in
the timeline are cost estimates for the life cycle stages of initial construction, rehabilitation,
and end of use. An important difference between the two alternatives is that Alternative B has
more work zones, which have shorter durations but cause inconvenience for users, leading to
higher user costs as valued by their lost productive time. Following the five-step method
outlined above, DOT showed these values:

Without discounting, we could scan the data and see that Alternative A has fewer periods of
disruption and fairly compact project costs in three time periods. Alternative B’s cost structure
(for both agency and user costs) is distributed across the analysis period of 35 years. Given the
time value of money, however, it is not obvious which might be preferred.

At a 4% rate, the discounting factors using Equation 3-1 for years 12, 20, 28, and 35 are 0.6246,
0.4564, 0.3335, and 0.2534, respectively. Thus for Alternative A the discounted life cycle
agency costs would be $31.9 million and user costs would be $22.8 million. For Alternative B
they would be $28.3 million and $30.0 million, respectively. As DOT (2002) noted in their
analysis, “Alternative A has the lowest combined agency and user costs, whereas Alternative
B has the lowest initial construction and total agency costs. Based on this information alone,
the decision-maker could lean toward either Alternative A (based on overall cost) or
Alternative B (due to its lower initial and total agency costs). However, more analysis might
prove beneficial. For instance, Alternative B might be revised to see if user costs could be
reduced through improved traffic management during construction and rehabilitation.”
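As a quick check, the discount factors just quoted can be reproduced from Equation 3-1 in a few lines (a minimal sketch; the 4% rate and the years are taken from the example above):

```python
def discount_factor(rate, year):
    """Present-value factor from Equation 3-1: 1 / (1 + r)^t."""
    return 1 / (1 + rate) ** year

for year in (12, 20, 28, 35):
    print(year, round(discount_factor(0.04, year), 4))
```

This reproduces the factors 0.6246, 0.4564, 0.3335, and 0.2534 used above; applying them to each alternative's cash flows yields the discounted agency and user costs reported in the text.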

For big decisions like that in the DOT example, one would want to consider the ranges of
possible uncertainty to guard against a poor decision. Building on DOT's recommendation,
we could consider various values of users' time, the lengths of work zone closures, etc.
If we had ranges of plausible values instead of single deterministic values, that too could be
useful. Construction costs and work zone closure times, for example, rarely come in much
below estimates (due to contracting issues) but in large projects have the potential to go
significantly higher. Thus, an asymmetric range of input values may be relevant for a model.


We could also use probability distributions to represent the various cost and other assumptions
in our models. By doing this, and using tools like Monte Carlo simulation, we could create
output distributions of expected life cycle cost for use in LCCA studies.

The use of such methods to aid in uncertainty assessment is discussed in the Advanced
Material at the end of this chapter.

Process Matrix-based Example with Ranges


While the examples above demonstrate how to use ranges, they have been limited to process
flow diagrams as opposed to process matrix approaches, which were shown to be more
comprehensive in Chapter 9.

References

Ciroth, A., Muller, S., Weidema, B. P., and Lesage, P. "Empirically based uncertainty factors
for the pedigree matrix in ecoinvent." International Journal of Life Cycle Assessment 21 (2016):
1338–1348. doi:10.1007/s11367-013-0670-5

Weidema, B. P., and Wesnæs, M. S. "Data quality management for life cycle inventories—an
example of using data quality indicators." Journal of Cleaner Production 4.3 (1996): 167–174.

Weidema, B. P., Bauer, C., Hischier, R., Mutel, C., Nemecek, T., Reinhard, J., Vadenbo, C. O.,
and Wernet, G. Overview and methodology: Data quality guideline for the ecoinvent database
version 3. ecoinvent report no. 1 (v3), St. Gallen, 2013.


Advanced Material for Chapter 11: Probabilistic Methods and Simulation


Finally, in this section, we discuss the most complex approach to managing uncertainty:
using data from various sources to generate probability distributions for inputs, and then
using spreadsheets or other techniques to propagate these distributions through the model to
generate results that also have probability distributions.

If we are trying to convince a stakeholder that the results of our LCA are "good enough to
support their decision," then this final step is to represent the available data as probability
distributions instead of point estimates or ranges. By doing this, we explicitly aim to create
models whose output can support a probabilistic assessment of the magnitude of a hot spot,
or of the percentage likelihood that Product A has less impact than Product B.
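For instance, the likelihood that Product A has less impact than Product B can be estimated directly from simulated draws. The sketch below assumes two hypothetical lognormal impact distributions; the parameters are invented for illustration, not taken from any real inventory.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical impact distributions (e.g., kg CO2e per functional unit);
# the medians and spreads are illustrative placeholders.
product_a = rng.lognormal(mean=np.log(10.0), sigma=0.25, size=n)
product_b = rng.lognormal(mean=np.log(11.0), sigma=0.30, size=n)

# Fraction of draws in which Product A has the lower impact
p_a_lower = np.mean(product_a < product_b)
print(f"P(A < B) = {p_a_lower:.2f}")
```

Even though Product A's median impact is about 10% lower here, the overlap of the two distributions means A is preferred in only roughly 60% of draws, which is exactly the kind of result a point-estimate comparison hides.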

Uncertainty analysis is especially important in two situations when comparing LCA results to
a baseline:

1. When the uncertainty is very large and spans an order of magnitude, as in many biofuel
studies (e.g., Mullins 2010). Uncertainty becomes important when comparing LCA results to
other baselines, and one needs a good understanding of what drives the underlying
uncertainties and variabilities, and whether these can be reduced through technical
improvements or policy decisions.

2. When the uncertainty is less significant, but the difference between life cycle results and a
baseline is smaller still, as in comparisons of fossil fuels. The difference between CNG and
gasoline life cycle emissions for transport is small on average (~5%), but this is not a robust
result because the difference is not statistically significant, so it is not a good idea to make
decisions based on such comparisons.

The most commonly used approach is Monte Carlo simulation to include key uncertainties in
input parameters to the LCA model. This method can be extended to decision-support
frameworks as well. For example, if an optimization model includes an environmental life
cycle metric as an objective or constraint, these uncertainties can still be incorporated to
identify robust 'optimal' solutions wherever possible, i.e., solutions that work across all or
most of the scenarios that characterize uncertainty in the LCA 'space.'

Two distinct concerns arise in practice: first, the importance of considering uncertainty and
variability in LCA and the ideal ways to conduct and communicate that analysis; and second,
the data availability issues a practitioner will likely face in reality, and what can be done
other than ignoring uncertainty altogether. For the second concern, specific questions to
address include:

1. Are the default distributions in LCI databases like ecoinvent intended to capture variability
or uncertainty? How valid, useful, and reliable are they?

2. If you only have a single point value for each measurement, is there any rule of thumb for
estimating distributions? For example, should one always use a normal or lognormal
distribution with the mean at the point value and a standard deviation at some fixed fraction
of it (hypothetically, 0.5 for fuel use, 0.2 for mining and extraction, 0.35 for agricultural
processes)? Does this kind of guidance exist?

3. What if you have direct measurements from one process (that your company owns) but
must rely on literature or an LCI database for the rest of the life cycle? Do you assume point
values for all of the latter processes but do some analysis using the variability in your own
processes? Can you combine ecoinvent default distributions with directly measured
distributions?

4. What if you find multiple point values in the literature for a process, but they are for
slightly different processes? Could these be used to develop a range for a uniform
distribution for a Monte Carlo analysis, or are they better used as end points of a range, with
the whole LCI built using one and then the other? Is this a better estimate of uncertainty or
of variability?

5. Alternatively, is it better to do no formal uncertainty analysis, but instead identify some
categories of stakeholders and run what-if scenarios (what if distances doubled, what if the
two major processes were half as efficient) to see under what conditions certain types of
impacts increase substantially, become the majority contributor to a single-score impact, or
cause one product to become preferred in a comparative LCA?

Essentially, if you do not have good data for all processes, is there anything you can do to
get a valid estimate of variability or uncertainty instead of simply not doing it?

Figure 11-4: General Diagram for Model with Probabilistic Inputs and Results

To numerically estimate the uncertainty of environmental impacts, Monte Carlo simulation is
usually employed. Several steps are involved:

1. The underlying distributions, correlations, and distribution parameters are estimated for
each input–output coefficient, environmental impact vector, and required sector output.
Correlations refer to the interaction of the uncertainty for the various coefficients.

2. Random draws are made for each of the coefficients in the EIO-LCA model.

3. The environmental impacts are calculated based on the random draws.

4. Steps 2 and 3 are repeated numerous times. Each repetition represents another observation
of a realized environmental impact. Eventually, the distribution of environmental impacts can
be reasonably characterized.

Steps 2–4 in this process rely heavily upon modern information technology, as they involve
considerable computation. Still, the computations are easily accomplished on modern personal
computers. Appendix IV illustrates some simulations of this type on a small model for
interested readers.
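The four steps above can be sketched on a tiny model. All coefficient values below are invented for illustration; a real application would estimate them from data, which is exactly the Step 1 difficulty.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 3-sector example (illustrative values, not from any real model).
A = np.array([[0.05, 0.10, 0.00],   # direct requirements matrix
              [0.20, 0.02, 0.10],
              [0.01, 0.15, 0.05]])
E = np.array([0.8, 0.3, 1.2])       # emissions per unit output of each sector
y = np.array([1.0, 0.0, 0.0])       # final demand: one unit of sector 1 output

def simulate(n=10_000, gsd=1.2):
    """Monte Carlo: perturb A and E with lognormal factors (geometric
    std. dev. gsd), solve the Leontief system, and collect total impacts."""
    results = np.empty(n)
    sigma = np.log(gsd)
    for i in range(n):
        A_i = A * rng.lognormal(0.0, sigma, A.shape)  # step 2: random draws
        E_i = E * rng.lognormal(0.0, sigma, E.shape)
        x = np.linalg.solve(np.eye(3) - A_i, y)       # step 3: total output
        results[i] = E_i @ x                          # realized total impact
    return results                                    # step 4: repeat n times

res = simulate()
print(f"mean = {res.mean():.3f}, 95% interval = "
      f"[{np.percentile(res, 2.5):.3f}, {np.percentile(res, 97.5):.3f}]")
```

The resulting distribution, rather than a single number, is what supports statements about the likelihood of one outcome versus another.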

Step 1 is the main difficulty in applying this method. We simply do not have good information
about distributions, correlations, and the associated parameters for input–output matrices or
environmental impact vectors. Over time, one hopes that research results will accumulate in
this area.

A simpler approach to the analysis of uncertainty in comparing two alternatives is to conduct
a sensitivity analysis; that is, to consider what level some parameter would have to attain in
order to change the preference for one alternative.
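Using the discounted DOT values from earlier in the chapter, a one-parameter breakeven calculation shows how far Alternative B's user costs would have to fall before B matched Alternative A on combined cost:

```python
# Discounted life cycle costs ($ million) from the DOT example above.
agency_a, user_a = 31.9, 22.8
agency_b, user_b = 28.3, 30.0

total_a = agency_a + user_a   # combined cost of Alternative A: 54.7

# Breakeven: the fraction f of Alternative B's user costs at which
# the two alternatives have equal total cost.
f = (total_a - agency_b) / user_b
print(f"B's user costs would need to fall to {f:.0%} of the estimate")
```

That is, a roughly 12% reduction in B's user costs (for example, through the improved traffic management DOT suggested) would make the two alternatives equivalent on a total-cost basis.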

Simulation results can also be summarized as the probability of drawing an incorrect
conclusion about the sign of a result (positive or negative) or about a comparison between
alternatives (Mullins 2010).

Finally, the field appears to be moving toward more detailed models based on stock datasets
built up over time. These will reduce some sources of variability within LCA results and
facilitate the calculation of endpoint metrics that practitioners tend not to use very often at
present, such as disability-adjusted life years (DALYs).



The following case study is adapted, with permission, from text prepared by Scott Matthews
and Indy Burke for the NRC Sustainability committee report for EPA.

Case Study: Biofuels and Gulf of Mexico Hypoxia

1. Brief statement of the issue or problem


The promotion and adoption of higher levels of biofuels attracted significant attention with respect to their
sustainability as compared to the incumbent petroleum-based fuels. The relevant issues crossed economic,
social, and environmental pillars. Economic issues included aspects such as the relative net economic benefits
to consumers when consuming biofuels, the differences in location of production of the fuel (i.e., domestic
versus imported), moderation of oil prices, and job creation. Social issues included potential reductions in
poverty levels and the equity of using land and crops for fuel instead of food. Environmental issues included
the relative energy and emissions performance of the various fuels, potential effects for hypoxia in the Gulf of
Mexico, and land use. In the end, EPA and other agencies promoted various biofuels as viable alternatives to
petroleum-based fuels.

2. Background
Biofuels have existed as alternatives for decades, including early work on vegetable oil-based diesel and corn-
based ethanol; however, production levels remained modest.
Debates on the US Energy Independence and Security Act (EISA) of 2007, which proposed large targets in the
billions of gallons for various biofuels, were initiated because of the various tradeoffs related to aspects in the
various pillars listed above. Specific concerns were raised about the energy balance of ethanol production, the
possibility of increased hypoxia in the Gulf of Mexico from fertilizer runoff into the Mississippi River basin,
and the relative carbon emissions of the fuels.
Through EISA and the Energy Policy Act of 2005, EPA was given the authority, through OTAQ, to set
regulations in support of a national Renewable Fuel Standard (RFS). Amongst the support activities needed
under RFS, EPA had to set the baseline GHG emissions of petroleum-based gasoline as well as alternative
corn- and cellulosic-based ethanol, to ensure that the alternatives would meet 20% and 60% reductions in
GHG emissions, respectively, compared to gasoline.
In EPA’s supporting documentation of the EISA/RFS standard they provided estimates of the life cycle
carbon emissions of petroleum-based gasoline as well as for bio-based feedstocks. In support of the
requirements of the standard, they provided deterministic values for the carbon emissions of each fuel such
that biofuels had 20% lower emissions than gasoline.
An important issue tackled in the RIA for RFS2 was the modeling of emissions from so-called “indirect land
use change” (ILUC), which considers broader effects of agricultural soil disruption around the world as a result
of local decisions about how to use land. For example, if diverting corn to the fuel market (instead of food) in
the US would be expected to lead to increased demand and use for corn cropland in other parts of the world,
that land would be disrupted by agricultural processes, leading to higher GHG emissions, attributed back to the
source decision. Consideration of ILUC is uncertain but generally has large effects on estimates of GHG
emissions.


(More precisely, the models assume constant demand with a decreased supply; this induces corn production
elsewhere, which can either displace current crops, including pasture, or bring non-agricultural land into
production.)
Nutrient loading in the Mississippi River Basin and subsequent hypoxia of the Gulf of Mexico represents a
critical social and environmental issue that EPA confronts. EPA has led the interagency Mississippi River Gulf
of Mexico Watershed Nutrient Task Force, which developed the science and created action plans for reducing
nutrient loading in support of goals for reducing the hypoxic zone in the Gulf to 5,000 square kilometers by
2015. This would be accomplished by influencing nutrient loading sources like agricultural land, urban
stormwater and septic systems, soil erosion, and industrial/municipal sources. Each of the states involved
conducted a scientific assessment and developed a strategy for nutrient reduction. Many of the states engaged
stakeholders who have been working on these issues in the development of their strategies.

3. Applicable tools and approaches


Managing the interests of the various parties requires approaches to combine stakeholder concerns.
Benefit-cost analysis methods were used to consider net benefits including differences in prices of fuels and
vehicle fuel economy. Sufficiently appreciating the complexity of the carbon emissions of biofuels as a
replacement for petroleum-based fuels requires life cycle assessment. Considering the uncertainty of life cycle
carbon emissions, as well as estimating the downstream impacts for hypoxia in the Gulf, requires risk
assessment methods.

4. How have tools been used by EPA / how have outcomes reflected analytic results
The regulatory impact assessment (RIA) supporting RFS was arguably the largest investment of time and effort
by the US government to date involving incorporation of life cycle assessment (LCA) into public policy. EPA’s
effort found various life cycle GHG emissions values for the various fuels in existing literature (including
relatively large ranges of values), but EPA inevitably “chose” a series of life cycle emissions factors for the
fuels. These were all deterministic, fixed-point values, and were those that would form the basis of future
decisions on whether fuels met the standard. For example, the values of 93 grams of CO2-equivalent
emissions per MJ (g/MJ) and 75 g/MJ were determined for the baseline of gasoline and corn-based ethanol,
respectively. The corn ethanol value thus barely meets the 20% reduction called for in EISA.
Subsequent studies since the original RIA have added official EPA baseline values for various other fuels (e.g.,
grain sorghum for ethanol). Of course, the issue of ILUC has a dramatic effect on the carbon emissions of
biofuels, changing a roughly 60% reduction for corn ethanol into only 20%.

5. How has uncertainty been treated


The resulting LCA values provided by EPA were deterministic “point estimate” results, which is a typical
outcome in such studies. While there was underlying variability and uncertainty in the available data, only a
single result was published. These results were not explicitly expressed as “means of a probabilistic
distribution” or otherwise mentioned as probabilistically based. Emerging work in LCA, though, more
robustly considers uncertainty and variability in system results.
In using the EPA "Green Book," LCA and risk analysis would not necessarily be linked. In assessing
the impacts of RFS2, neither tool alone would be sufficient to provide the framework needed to understand
the implications with respect to GHG emissions of various fuel systems.

6. How have tradeoffs been evaluated


Many tradeoffs exist in the consideration of alternatives to petroleum-based fuels. While many different
impacts were considered in the RIA for RFS (life cycle GHG emissions; criteria, air toxic, ozone, and PM
emissions; economics; impacts on costs; and water use), no explicit tradeoff analysis was performed.

7. What tools do other involved agencies (in Europe and elsewhere) use?

The California Air Resources Board (CARB) in support of their own low-carbon fuel standard (LCFS) as well
as other states and countries performed similar LCA modeling of lifecycle GHG emissions for fuels. None
reported full probabilistic results with respect to carbon emissions.

8. How could tools have been used / what tools are needed?
Not just from a sustainability perspective, but also from a policy analysis perspective, it is important to consider
more than just single, deterministic “point values”, as many components of the system have various possible
resulting emissions rather than a single value. By using ranges and probability distributions representing
potential values, a simulation can be performed to better assess the likely comparative performance of multiple
fuels.
In the end, such robust analysis methods could better support an assessment of the performance of a
renewable fuels policy. Stated in other words, given the probability distributions representing ranges of
performance, a risk analysis could assess the likelihood that a fuel could meet the policy target of promoting a
20% reduction from the baseline of petroleum-based fuel.
Figure 1 summarizes estimated probability distributions of the carbon intensity across the life cycle of various
transportation fuels. As noted above, LCAs often only use or report a single value from such a distribution
(e.g., the mean), but may not have done the underlying analysis to create the distribution and only considered a
single value. Given the “baseline” of gasoline emissions to be compared, Figure 1 shows that various sources
of biofuels may end up with emissions close to gasoline in terms of carbon intensity, but also shows that mean
values may differ by the amounts required in the RFS.


FIGURE 1: Probability distributions of estimated carbon intensity of various petroleum-based fuels and
biofuels. For biofuels, two modeled cases are presented: full life cycle and life cycle without ILUC.
Source: Kocoloski (2013)
Figure 2 uses the full distribution of carbon intensities for all fuels compared to gasoline in order to represent
the chance that particular sources of biofuels (e.g., points on the distribution in Figure 1) may not meet the
reduction standard set forth in the RIA. The resulting display of the “risk of policy failure” shows the effect of
the uncertainty inherent in the data, given the relatively large uncertainties in the emissions. Since the RIAs
inevitably focused on use of the mean values, such analysis was not done in the RIA.
RIAs are a great place to incorporate sustainability into decision making, as they provide comprehensive
opportunities to think broadly.


FIGURE 2: Probability that biofuel emissions are below those of gasoline (at 0%) or are below some
policy target. EISA targets are a 20% reduction for corn fuels and a 60% reduction for cellulosic fuels
(shown with vertical lines). Two modeled cases are presented: full life cycle and life cycle without ILUC.
Source: Mullins (2010)

(Note that Figure 2 was not generated from the data in Figure 1; the underlying datasets differ, and the
gasoline value in Kocoloski (2013) was updated to include gasoline from oil sands, although the resulting
difference is negligible.)

Likewise, changes in crop choice and nutrient management practices in support of increased biofuel production
targets could have been modeled using risk assessment methods to assess likely changes in the size of the
hypoxic zone, as done in Costello (2009) and shown in Figure 3. Such modeling could suggest whether any
reasonable alternatives could have a significant effect on hypoxia.


[Figure 3 image: paired bar chart of nitrate emissions (1000 mt) and areal extent of hypoxia (km2) for
2022 scenarios (corn/stover, corn/switchgrass, stover/switchgrass, switchgrass, no fuel) under "No Buffer"
and "50% Buffer" cases, with the 5,000 km2 hypoxic areal extent goal marked.]

Figure 3: Nitrate output within the MARB (colored bars, left-hand y-axis) and mean areal extent of hypoxia
in the NGOM with "No Buffer" and "50% Buffer" (grey-scale bars, right-hand y-axis). Nitrate output
columns represent mean values and the 80% credible intervals from simulation modeling. The horizontal
dashed line represents the 5,000 km2 goal set for 2015. Source: Costello (2009).

Additional “LCA for policy” applications could be expected in the future, for example by broader
considerations of CO2 limits for energy generation units that consider upstream methane emissions for natural
gas-fired power plants. They could follow similar expanded methods as described above.

9. What considerations could be added by a sustainability approach that were not previously included?
While EPA’s strategies have engaged some of the sustainability tools (e.g. stakeholder engagement), there may
be additional opportunities for application of sustainability approaches across the hierarchy of scales that could
result in practices and policies to improve water quality. For instance, cost-benefit analysis of nutrient
management approaches for individual farms or watershed aggregations of farms could provide important
information for outreach that might influence conservation strategies. Ecosystem services valuation that
addresses the trade-offs between land management practices and water quality improvements at local to
regional and entire continental scales could potentially lead to the development of markets to provide
incentives for both non-point and point source reductions.

Recommendation: Represent LCA values and results as distributions, not point estimates, and/or explicitly
discuss whether output values are means of a distribution.

References

Costello, C., Griffin, W. M., Landis, A., and Matthews, H. S. "Impact of Biofuel Crop Production on the
Formation of Hypoxia in the Gulf of Mexico." Environmental Science & Technology 43.20 (2009):
7985–7991. doi:10.1021/es9011433

Kocoloski, M., Mullins, K. A., Venkatesh, A., and Griffin, W. M. "Addressing uncertainty in life-cycle
carbon intensity in a national low-carbon fuel standard." Energy Policy 56 (2013): 41–50.

Mullins, K. A., Griffin, W. M., and Matthews, H. S. "Policy Implications of Uncertainty in Modeled
Life-Cycle Greenhouse Gas Emissions of Biofuels." Environmental Science & Technology, 2010.


Chapter 12: Advanced Hybrid Hotspot and Path Analysis


In this chapter, we describe advanced LCA methods that consider all potential paths of a
modeled system as separate entities, instead of summarizing aggregated results. For process
matrix-based methods, these methods are often referred to as network analyses, and for IO-
based methods, as structural path analyses. These methods are helpful in support of screening
level analyses, as well as in helping to identify specific processes and sources of interest. In
addition, we discuss hybrid methods that allow us to consider changes to the parameters used
in the default network or structural path analysis in order to estimate the effects of alternative
designs, use of processes, or other assumptions.

Learning Objectives for the Chapter


At the end of this chapter, you should be able to:

1. Explain the limitations of aggregated LCA results in terms of creating detailed
assessments of product systems.

2. Express and describe interdependent systems as a hierarchical tree with nodes and
paths through a tree.

3. Describe how path analysis methods provide disaggregated LCA estimates of nodes,
paths, and trees.

4. Interpret the results of a path analysis to support improved LCA decision making.

5. Explain and apply the path-exchange method to update SPA results for a scenario of
interest, and describe how the results can support changes in design or procurement.

Results of Aggregated LCA Methods


While the analytical methods described in the previous chapters are useful in terms of
providing results in LCA studies, these results have generally been aggregated. By aggregated
results, we mean those that have been ‘rolled up’ in a way so as to ignore additional detail that
may be available within the system.

In process-based methods, aggregated results express totals across all processes of the same
name that are modeled within the system boundary (or within the boundary of the process
matrix). For example, Figure 9-5 (revisited here as Figure 12-1) showed aggregated fossil CO2
emissions to air across the entire inverted US LCI process matrix system by process in the
production of bituminous coal-fired electricity.


Process                                                    Emissions (kg)   Percent of Total
Total                                                      1.033
Electricity, bituminous coal, at power plant/US            1.004            97.2%
Diesel, combusted in industrial boiler/US                  0.011            1.0%
Transport, train, diesel powered/US                        0.009            0.9%
Electricity, natural gas, at power plant/US                0.002            0.2%
Residual fuel oil, combusted in industrial boiler/US       0.002            0.2%
Transport, barge, residual fuel oil powered/US             0.001            0.1%
Natural gas, combusted in industrial boiler/US             0.001            0.1%
Gasoline, combusted in equipment/US                        0.001            0.1%
Electricity, lignite coal, at power plant/US               0.001            0.1%
Transport, ocean freighter, residual fuel oil powered/US   0.001            0.1%
Bituminous coal, combusted in industrial boiler/US         0.001            0.0%

Figure 12-1: Top products contributing to emissions of fossil CO2 for 1 kWh of bituminous coal-fired
electricity. Those representing more than 1% are bolded.

The total emissions across all processes are 1.033 kg per kWh of electricity, with the top
processes contributing 1.004 kg CO2 to that value from producing coal-fired electricity and
0.011 kg CO2 from producing diesel fuel that is combusted in an industrial boiler. These two
values generally include all use of coal-fired electricity and diesel in boilers through the system,
not just direct use.

Similarly, in IO-based methods, the aggregated results from applying the Leontief equation in
an IO model are totals across all of the economic sectors within the model. Figure 8-5
(revisited here as Figure 12-2) showed total and sectoral economic activity and CO2 emissions
for producing $100,000 of output in the Paint and coatings sector. The total CO2 emissions
across the supply chain are 107 tons, with the top sectoral sources being electricity (25 tons)
and dyes and pigments (17 tons).


Sector                           Total ($ thousand)   CO2 equivalents (tons)
Total across all 428 sectors     266                  107
Paint and coatings               100                  3
Materials and resins             13                   5
Organic chemicals                12                   5
Wholesale Trade                  10                   1
Management of companies          10                   <1
Dyes and pigments                8                    17
Petroleum refineries             5                    6
Truck transportation             5                    8
Electricity                      2                    25

Figure 12-2: EIO-LCA economic and CO2 emissions results for the ‘Paint and Coatings’ Sector

While these results may seem detailed, and to some extent they are, they fail to represent
additional detail that exists within the models – i.e., the specific processes and sectors at the
various tiers that lead to these emissions. In the process-based results, multiple processes may
contribute to the CO2 emissions from coal-fired electricity or diesel fuel, but the aggregated
totals provided do not show which of these underlying processes lead to the most emissions.
It may be important to know whether most of the emissions are from final production or from
a specific upstream product or process. Instead, only the aggregated totals across all instances
of production of coal-fired electricity and diesel are shown.

The IO-based results are also aggregated, e.g., across all sectors throughout the economy
where electricity or dyes are produced. There are likely thousands of points in the supply
chain where electricity is used that lead to the provided estimate of emissions, and we may be
interested in knowing which of them lead to the most. Knowing that a handful of specific facilities within
the supply chain purchase the most electricity and thus lead to the most emissions may be an
important finding. But the aggregated results cannot show this — they can at most provide
estimates of direct and indirect effects. Supply chain management could be greatly informed
by better information at the upstream facility level, and knowledge of impacts of specific
facilities could augment IO-based screening (as will be seen later in the chapter).

Aggregated results are neither problematic nor inappropriate to use. In many studies,
aggregated results may be sufficient. Previous chapters have shown that acquiring the
aggregate results can be done quickly. Disaggregating into additional detail will require more
time and effort. “Unrolling” aggregated results from either process-based or IO-based
methods into the underlying flows (i.e., for a specific individual connected process or sector)
is the goal of this chapter.


A Disaggregated Two-Unit Example


Before demonstrating the methods to provide the additional detail sought, consider a simple
two ‘unit’ example, as shown in Figure 12-3. We use ‘unit’ to generally refer to processes or
sectors that may exist in process-based or IO-based systems. The circles labeled 1 and 2
represent activities of the two units of the system. This is equivalent to a process matrix model
with only 2 processes, or an IO model with only 2 sectors. The directional arrow at the top of
Figure 12-3(a) shows that unit 2 requires a certain amount of output from unit 1 (A12), and the
circular arrow on the right represents the amount of output required from itself (A22). Unit 1
requires output from unit 2 and itself. These four flows (represented as directional arrows)
would be the elements of the A matrix in this two-unit example.

Figure 12-3: Overview of Hierarchical Supply Chain for Two-Unit Economy. (a) represents the flows
between the two sectors as would be represented in a 2 x 2 A matrix. (b) represents the flows as an
interdependent two-sector economy. Adapted from Peters 2006.

Figure 12-3(b) is an alternative representation of the economy, expressed as a hierarchical


tree of the supply chain. At the top is the final demand vector Y. Producing that final demand
will require some amount of production in the two sectors (there are two separate values Y1
and Y2, potentially zero valued). Using the nomenclature introduced in Chapter 8, this initial
demand for production is classified as Tier 0 in the hierarchy. Producing the Tier 0 output in
each unit may require some amount of production from units 1 and 2 (as expressed in Figure
12-3(a)) in Tier 1. Likewise, the Tier 1 production requires amounts of product from units 1
and 2, etc. The arrows at the bottom of Figure 12-3(b) remind us that this tiered production
graph would continue on and on beyond what is shown. The hierarchical tree continues to
double in complexity since there are two units in the economy, and at each tier the relative


amount of product needed from the other unit is found by the relationships in Figure 12-3(a),
a.k.a., the direct requirements matrix A.
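This tier-by-tier expansion is easy to compute. The Python sketch below uses the two-sector direct requirements matrix from the Chapter 8 example (revisited later in this chapter) and shows how the output required at each tier shrinks quickly, while the cumulative sum across tiers approaches the aggregated Leontief total of roughly $152 billion.

```python
# Tier-by-tier production for a two-unit economy. The A matrix is the
# illustrative two-sector example from Chapter 8 (revisited later in this
# chapter); Y is $100 billion of final demand in unit 1 only.
A = [[0.15, 0.25],
     [0.20, 0.05]]
Y = [100.0, 0.0]

def matvec(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

tier_output = Y[:]   # Tier 0 output equals the final demand itself
total = Y[:]         # running sum of output across all tiers
for tier in range(1, 10):
    tier_output = matvec(A, tier_output)   # Tier t output = A^t * Y
    total = [t + x for t, x in zip(total, tier_output)]
    print(f"Tier {tier}: unit 1 = {tier_output[0]:.3f}, unit 2 = {tier_output[1]:.3f}")

print(f"Cumulative output across tiers: ${sum(total):.1f} billion")
```

Each pass through the loop corresponds to moving one tier down the tree in Figure 12-3(b); truncating the loop at a fixed tier is exactly the kind of cutoff that SPA software applies.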

Even in the abridged Figure 12-3(b), we see how the supply chain looks like a tree, and how
at each tier the flows are branches and the circles are nodes - or apples on the tree. Going
further down the tree to subsequent tiers, we see more and more apples. This is a far more
comprehensive view of the product-system. If we generally extended Figure 12-3 to many
more levels, we could represent the entire supply chain. Of course quantitatively many of the
apples would have zero values, because either there was no interlinked demand or because the
needed flow was zero. But if we added up all of the individual apples’ values, we would get the
same aggregated results we get from using the inverted process or IO-based model, which
otherwise sums all effects for each of the two units. In IO terms, summing all of the node 1’s
would give us the IO result for Sector 1 and summing all of the node 2’s would yield the
rolled-up total for Sector 2 (as in the Tabular results in Chapter 8).

Continuing with the simple two-unit system example, imagine that unit 2 represents electricity
production. If we were considering the impacts of electricity across our supply chain, then
every time there was a non-zero value for output in unit 2, we could find the amount of
electricity needed (and/or use the R matrix values to estimate greenhouse gas or other
emissions effects of that production) for each of the apples on the tree. Continuing to go
down Figure 12-3(b) would allow us to estimate electricity use for the entire supply chain. The
sum of all of them would, as described above, provide the aggregated result available for the
entire electricity sector using the Leontief IO equation. There is great benefit, though, in being
able to separately assess the various amounts of electricity of each of the apples on the tree.
The largest apples (in terms of values of effects like emissions) could be very important. It is
likely that there are “electricity apples” in lower tiers that are bigger than some in higher tiers.

Let’s now extend our vision of the system beyond two units. While not drawn here, we know
from earlier discussion that the 2002 US Benchmark IO Model has 428 sectors. The equivalent
Figure 12-3 for this model would have a final demand Y with 428 potential values. Tier 0
would have 428 potential node values (apples), Tier 1 would have 428 squared, etc. The total
number of apples quickly reaches into the billions. That means if we wanted to separately estimate the
economic or emissions values for all of the apples, we would need to do a substantial amount
of work.
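The growth in the number of apples can be checked with a few lines of Python:

```python
# Potential node ('apple') counts per tier for a 428-sector model:
# Tier 0 has 428 nodes, Tier 1 has 428^2, and Tier t has 428^(t+1).
sectors = 428
counts = [sectors ** (tier + 1) for tier in range(4)]
for tier, n in enumerate(counts):
    print(f"Tier {tier}: {n:,} potential nodes")
print(f"Total through Tier 3: {sum(counts):,}")  # already tens of billions
```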

Structural Path and Network Analysis


The goal of an advanced hotspot analysis is to identify the ‘large apples’ in our
hierarchical tree. In reality, we want to know more than just the size of the apples; we
also want to know the branches of the tree that lead to them. In a more traditional
hierarchical tree (or graph theory) discussion, the collection of the branches from Y down to
a specific node (apple) would be referred to as a path. In a 428-sector economy, a path could


represent, for example, the electricity consumed by the engine manufacturer of a final demand
automobile, or the production of rubber in tires needed by a new automobile. The path (as
expressed in the directional arrows of Figure 12-3) “leads” from top to bottom, or from final
demand down to a specific apple. It could be expressed from bottom to top as well – the
important aspect of the path is the chain of nodes between the final demand and the final
node. Depending on our goals, we may be interested in the relevance of all paths, or a specific
path (e.g., the path represented by Y-1-2-1 in Figure 12-3).

A structural path analysis (SPA) is a method that helps to explain the embedded effects in
a system studied by an input-output model.33 It was originally described for economic models
(see Crama 1984 and Defourny 1984) and later proposed for use in LCA by Treloar (1997).
The result of an SPA is generally a list of paths that describe how energy or environmental
effects are connected in the upstream supply chain. An SPA will comprise a variety of paths
with varying lengths, e.g., there may be paths of length 1 that go only from Y to a node in Tier
1, or paths of length 2 that go from Y down to Tier 2. Similarly, an impact at ‘path length 0’ is
associated with the final production of the demanded good. When visualized, SPAs are usually
written as hierarchical trees with the product at the top of the tree. Network analysis, which
is typically done for process matrix methods, is analogous to SPA and was shown as a
visualization feature of SimaPro in Chapter 9.

The value in performing SPA is in identifying the most significant effects embodied (perhaps
deeply) within the supply chain of a product. Whereas a typical aggregated IO-LCA, for
example, could estimate the total effect of all electricity consumed across the supply chain,
SPA can show the specific sites where electricity is used (e.g., electricity used by particular
suppliers or nodes).

To help understand the mechanics behind SPA, we explore further the two-sector economic
system used throughout Chapter 8, where the units of R were waste in grams per $billion:

A = [ 0.15  0.25 ]        R = [ 50   0 ]
    [ 0.20  0.05 ]            [  0   5 ]

In our analysis of that system, we estimated that for Y1= $100 billion (a final demand of $100
billion in sector 1) the total, or aggregated, requirements were ~$152 billion and the total waste
emissions were 6.4 kg. But what if we wanted to know which particular interconnected paths
led to the largest economic flows and/or waste? SPA works by looking for all of the effects
from individual nodes and then creating paths connecting them to the top of the tree.
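As a quick check of these aggregated figures, the sketch below (pure Python, with a hand-coded 2x2 matrix inverse) reproduces both the ~$152 billion of total requirements and the ~6.4 kg of waste:

```python
# Check the aggregated totals quoted above for the Chapter 8 two-sector
# system: ~$152 billion of total requirements and ~6.4 kg of waste for
# Y1 = $100 billion.
A = [[0.15, 0.25],
     [0.20, 0.05]]
R = [50.0, 5.0]    # waste coefficients, grams per $billion of output
Y = [100.0, 0.0]   # final demand, $billion

# (I - A) and its 2x2 inverse, the Leontief inverse L = (I - A)^-1
a, b = 1 - A[0][0], -A[0][1]
c, d = -A[1][0], 1 - A[1][1]
det = a * d - b * c
L = [[d / det, -b / det],
     [-c / det, a / det]]

X = [L[0][0] * Y[0] + L[0][1] * Y[1],   # total output by unit, $billion
     L[1][0] * Y[0] + L[1][1] * Y[1]]
waste_g = R[0] * X[0] + R[1] * X[1]

print(f"Total requirements: ${sum(X):.1f} billion")
print(f"Total waste: {waste_g / 1000:.2f} kg")
```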
Returning to the Leontief system expansion Equation 8-1,

X = [I + A + A×A + A×A×A + … ] Y = IY + AY + A²Y + A³Y + …   (8-1)

33 This kind of analysis is also referred to as a structural decomposition, but for consistency will be referred to as SPA in this chapter.


The economic-only contribution of any particular node in the hierarchical tree can be
decomposed from the elements on the right hand side of Equation 8-1. The node with path
length 0, a.k.a., from the IY part of Equation 8-1, has a site value of Y1 or $100 billion. The
value of the leftmost node 1 in Tier 1 of Figure 12-3, from unit 1 to unit 1, is represented by
the product Y1A11 = $100 billion*0.15 = $15 billion. Figure 12-4 is a modified form of Figure
12-3(b) with the path to a specific node highlighted.

Figure 12-4: Hierarchical Supply Chain Emphasizing Path to Specific Tier 2 Node

The value of the highlighted node in Figure 12-4, which goes from unit 1 in Tier 0 to unit 2
in Tier 1 to unit 1 in Tier 2, is represented by the product Y1A21A12 = $100 billion*0.2*0.25 =
$5 billion. Generally, the economic values of nodes in the supply chain are represented by:

Economic node value = Yk Ajk Aij …   (12-1)

where i, j, and k represent various industries in the system, and the subscripts follow the path
back to the top as noted in the description above (e.g., A21 is the value representing a path
going from a node for sector 2 to a node for sector 1). Figure 12-5 shows the economic values
of various nodes in Figure 12-3 using Equation 12-1. Note that the node values are not found
by applying the general A² and A³ matrices as implied from Equation 8-1. The economic path
values are found by using the relevant economic coefficients of the A matrix of each step in
the path from top to bottom. Since in our example, there is only a final demand in unit 1, we
are only considering nodes and paths in the tree under unit 1.
Node                        Equation    Economic Value ($billions)
Y1 in Tier 0                Y1          $100
Leftmost 1 in Tier 1        Y1A11       $15
Rightmost 2 in Tier 1       Y1A21       $20
Leftmost 1 in Tier 2        Y1A11A11    $2.25
Highlighted Node in Tier 2  Y1A21A12    $5
Rightmost 2 in Tier 2       Y1A21A22    $1

Figure 12-5: Economic Node Values for Two-by-Two System (from Figure 12-3)
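Equation 12-1 is simple to implement. In the Python sketch below (the function name and path encoding are ours, for illustration), a path is written as a tuple of 0-indexed unit numbers from Tier 0 down to the node; the values reproduce the entries of Figure 12-5:

```python
# Economic node values per Equation 12-1. A path is a tuple of 0-indexed
# unit numbers from Tier 0 down to the node.
A = [[0.15, 0.25],
     [0.20, 0.05]]
Y = [100.0, 0.0]   # $billion of final demand

def node_value(path):
    """Economic value ($billion) of the node at the end of `path`."""
    value = Y[path[0]]                 # start with final demand Y_k
    for upper, lower in zip(path, path[1:]):
        value *= A[lower][upper]       # A[i][j]: requirement of i per $ of j
    return value

print(node_value((0,)))       # Y1 in Tier 0:            $100
print(node_value((0, 0)))     # leftmost 1 in Tier 1:    Y1*A11 = $15
print(node_value((0, 1)))     # rightmost 2 in Tier 1:   Y1*A21 = $20
print(node_value((0, 1, 0)))  # highlighted Tier 2 node: Y1*A21*A12 = $5
```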

As motivated above, we might care about the nodes that generate the most waste. These are
easy to find because we know the R matrix values of 50 and 5 (units of grams/$billion). If the


R matrix values are put into a vector R (i.e., [50 5]) we can multiply Equation 12-1 (or the
values in Figure 12-5) by the R values:

Effect node value = Ri Yk Ajk Aij …   (12-2)

where Ri is the R coefficient of the unit at the node itself (e.g., R2 for a node in unit 2, as in Figure 12-6).

Using Equation 12-2, the waste from direct production is 50 g per $billion × $100 billion = 5,000 g
= 5 kg. Likewise, Figure 12-6 shows the waste output values for the same nodes of our
two-by-two system.
Node                    Equation      Waste Value (grams)
Y1 in Tier 0            R1Y1          5000
Leftmost 1 in Tier 1    R1Y1A11       750
Rightmost 2 in Tier 1   R2Y1A21       100
Leftmost 1 in Tier 2    R1Y1A11A11    112.5
Rightmost 2 in Tier 2   R2Y1A21A22    5

Figure 12-6: Waste Node Values for Two-by-Two System (as referenced in Figure 12-3)
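The same path products, scaled by the R coefficient of the node's own unit, reproduce the waste values in Figure 12-6. A Python sketch (illustrative; the book's companion code is MATLAB):

```python
# Waste node values per Equation 12-2: each economic node value is scaled
# by the R coefficient of the node's own unit. Reproduces Figure 12-6.
A = [[0.15, 0.25],
     [0.20, 0.05]]
R = [50.0, 5.0]    # grams of waste per $billion of output, by unit
Y = [100.0, 0.0]

def waste_value(path):
    """Waste (grams) emitted at the node at the end of `path`."""
    value = Y[path[0]]
    for upper, lower in zip(path, path[1:]):
        value *= A[lower][upper]
    return R[path[-1]] * value     # R of the node's own unit

print(waste_value((0,)))       # R1*Y1        = 5000 g
print(waste_value((0, 0)))     # R1*Y1*A11    = 750 g
print(waste_value((0, 1)))     # R2*Y1*A21    = 100 g
print(waste_value((0, 0, 0)))  # R1*Y1*A11^2  = 112.5 g
```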

These generalized equations help to illustrate the mathematics that are foundational to
performing SPA. We can use these equations to find the most important nodes for a variety
of effects, so long as we have the needed data. One additional part of SPA that we will explore
later is that SPA routines typically also provide ‘LCI’ values for each node, i.e., the total
(cumulative) economic and/or environmental effects under a node. The LCI value at the top
of a tree is equal to the aggregated total found using the Leontief inverse. More generally, a
node's LCI value aggregates all nodes underneath it (including the node itself).
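One way to compute a node's LCI value, consistent with these definitions, is to multiply the node's economic value by the total-intensity vector R(I - A)^-1 evaluated for the node's unit. A Python sketch for the two-unit example (our derivation, not from the book's code):

```python
# A node's LCI value (the node plus everything beneath it) equals the
# node's economic value times the total intensity m = R (I - A)^-1 of the
# node's unit.
A = [[0.15, 0.25],
     [0.20, 0.05]]
R = [50.0, 5.0]

# 2x2 inverse of (I - A)
a, b = 1 - A[0][0], -A[0][1]
c, d = -A[1][0], 1 - A[1][1]
det = a * d - b * c

# total intensity (grams per $billion of delivered output), by unit
m = [(R[0] * d - R[1] * c) / det,
     (-R[0] * b + R[1] * a) / det]

top_lci = m[0] * 100   # LCI at the top of the tree (Y1 = $100 billion)
node_lci = m[0] * 5    # LCI under the highlighted Tier 2 node ($5 billion)
print(f"LCI at top of tree: {top_lci:.0f} g")   # matches the 6.4 kg total
print(f"LCI at highlighted Tier 2 node: {node_lci:.0f} g")
```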

Before presenting the typical outputs of an SPA, we reconsider the typical level of detail
provided in a standard aggregated IO-LCA analysis. Figure 12-7 shows the aggregated results
via EIO-LCA to estimate the various greenhouse gas emissions of producing $1 million of
soft drinks (from the Soft drink and ice manufacturing sector).


All values in tons CO2 equivalent emissions.

Sector                                                   Total   Fossil CO2   Process CO2   CH4     N2O    Other
Total for all sectors                                    940     721          48.8          75      63.2   32.6
Power generation and supply                              314     309          0             0.85    1.92   1.99
Wet corn milling                                         57.7    57.7         0             0       0      0
Alumina refining and primary aluminum production         55      12.5         19.5          0       0      23
Grain farming                                            46      6.78         0             3.76    35.5   0
Oil and gas extraction                                   37      10.4         6.79          19.8    0      0
Soft drink and ice manufacturing                         35.7    35.7         0             0       0      0
Other basic organic chemical manufacturing               33.8    30.3         0             0       3.47   0
Truck transportation                                     33      33           0             0       0      0
Aluminum product manufacturing from purchased aluminum   25.4    25.4         0             0       0      0
Iron and steel mills                                     21.1    7.95         13            0.128   0      0

Figure 12-7: Aggregated EIO-LCA Results for $1 Million of Soft drink and ice manufacturing,
2002 EIO-LCA Producer Price Model

From Figure 12-7, the total CO2-equivalent emissions from producing $1 million of soft drinks
are 940 tons. Of that total, 721 tons are from fossil-based sources (such as burning fuels), and
of those fossil CO2 emissions, the largest sectoral contributor is from electricity use. As
previously noted, these aggregated values are not very helpful in understanding where the ‘hot
spots’ are in the system. We might be interested in knowing which of the various factories
across the supply chain of making soft drinks lead to the most electricity use (and thus the
most CO2 emissions). To answer such questions, we need to use SPA.

MATLAB code is available (see Advanced Material at end of this Chapter) to perform a
complete, economy-wide SPA on a particular input-output system. This code takes as inputs
a final demand Y, a direct requirements matrix A, an R matrix, and a series of truncation
criteria, and produces a sorted list of the paths (ordered by their path through the system) with
the highest impact. The truncation criteria ensure that the path analysis performed is not
infinitely complex.
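While the companion code described here is MATLAB, the core algorithm can be sketched in a few lines of Python: recursively walk the hierarchical tree, record each node's site and LCI values, and prune any branch whose LCI value falls below the truncation threshold. The function and variable names below are illustrative (not the book's code), and the threshold and tier limit are arbitrary demo values.

```python
# Minimal structural path analysis with truncation for the two-unit
# example (illustrative Python; the book's companion code is MATLAB).
A = [[0.15, 0.25],
     [0.20, 0.05]]
R = [50.0, 5.0]    # grams of waste per $billion of output
Y = [100.0, 0.0]   # final demand, $billion

# total intensity m = R (I - A)^-1, used for node LCI values
a, b = 1 - A[0][0], -A[0][1]
c, d = -A[1][0], 1 - A[1][1]
det = a * d - b * c
m = [(R[0] * d - R[1] * c) / det,
     (-R[0] * b + R[1] * a) / det]

def spa(threshold=1.0, max_tier=4):
    """Return (path, site value, LCI value) tuples sorted by site value."""
    results = []
    def visit(path, value, tier):
        lci = m[path[-1]] * value          # node plus everything beneath it
        if lci < threshold or tier > max_tier:
            return                         # truncate this branch
        results.append((path, R[path[-1]] * value, lci))
        for nxt in range(len(A)):          # branch to every unit below
            visit(path + (nxt,), value * A[nxt][path[-1]], tier + 1)
    for k in range(len(A)):
        if Y[k] > 0:
            visit((k,), Y[k], 0)
    return sorted(results, key=lambda r: -r[1])

for path, site, lci in spa()[:5]:
    print(path, f"site = {site:.1f} g", f"LCI = {lci:.1f} g")
```

Because the total intensities satisfy m = R + mA, a child node's LCI value can never exceed its parent's, so pruning a branch whose LCI is below the threshold safely skips everything beneath it.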

Figure 12-8 shows abridged SPA results, only for the top 15 paths of the 2002 benchmark US
IO model, using the MATLAB code estimating the total greenhouse gas emissions of $1
million of soda (from the Soft drink and ice manufacturing sector) and a truncation parameter of
0.01% of the total tons CO2e in the SPA for a path’s LCI value.34

34 These results were found using the MATLAB code provided as of the time of writing, and may differ from those found using the most
current data. The values presented herein should be considered as assumed values for the discussion and diagrams in this chapter.


Length  GWP @ Site  LCI @ Site  Path (S1 → S2 → S3 → S4, from the demanded sector down to the node)
1       79.6        83.6        70 SDM → 31 Power generation and supply
1       52.1        119.4       70 SDM → 44 Wet corn milling
0       36.0        940.4       70 SDM
2       32.4        48.1        70 SDM → 174 Aluminum product manufacturing from purchased aluminum → 173 Alumina refining and primary aluminum production
2       32.0        33.6        70 SDM → 148 Plastics bottle manufacturing → 31 Power generation and supply
2       29.8        42.9        70 SDM → 44 Wet corn milling → 2 Grain farming
1       20.0        166.0       70 SDM → 174 Aluminum product manufacturing from purchased aluminum
1       15.2        22.8        70 SDM → 11 Milk Production
1       15.0        21.7        70 SDM → 324 Truck transportation
2       14.7        15.5        70 SDM → 174 Aluminum product manufacturing from purchased aluminum → 31 Power generation and supply
3       13.2        13.9        70 SDM → 174 Aluminum product manufacturing from purchased aluminum → 173 Alumina refining and primary aluminum production → 31 Power generation and supply
2       11.6        44.5        70 SDM → 148 Plastics bottle manufacturing → 127 Plastics material and resin manufacturing
2       11.0        11.6        70 SDM → 44 Wet corn milling → 31 Power generation and supply
3       9.3         9.7         70 SDM → 174 Aluminum product manufacturing from purchased aluminum → 172 Secondary smelting and alloying of aluminum → 31 Power generation and supply
2       8.8         17.1        70 SDM → 107 Paperboard container manufacturing → 106 Paperboard Mills

Figure 12-8: Example SPA of Total Greenhouse Gas Emissions for $1 Million of Soft drink and ice
manufacturing Sector. GWP Site and LCI Values in tons CO2e. SDM = Soft Drink Manufacturing

Full results for $1 million of Soft drink and ice manufacturing have about 1,100 paths and are
available in the web-based resources for Chapter 12 (as SPA_SoftDrinks_GHG_Tmax4.xls). SPA
spreadsheets for several other sectors using the same parameters are also posted there. It is
worth browsing them to see how the same parameters lead to vastly different numbers of
paths (e.g., tens versus thousands) and thus how concentrated the hot spots are.


Figure 12-8 is sorted by the results for the nodes or ‘apples on the tree’, which in this case are
the GHG emissions from any particular apple in the economic system. This is column 2, noted
as the ‘GWP site’, referring to the greenhouse gas emissions of each apple (emissions at the
facility represented by the node). The next column shows the total LCI value at the node (i.e.,
the emissions of the apple and all of the branches and apples beneath it). The remaining
columns describe the path from Y down to the node for these top GHG emissions paths. All
values in S1 are for sector 70, Soft drink and ice manufacturing (SDM). Sector numbers and names
for S2-S4 are also listed for paths of length higher than zero.

Across all of the many apples, the first row of Figure 12-8 shows that the discrete activity in
the supply chain of making soda with the highest GHG emissions comes from the path for
producing the electricity purchased to make soda. It is in Tier 1 (path has length 1 from final
demand Y) and the site emissions are 79.6 tons CO2e. The entire LCI (including the site) for
this path is 83.6 tons CO2e. That means there are about 4 tons CO2e more GHG emissions
further down that chain (e.g., from mining or transporting the coal needed to make electricity).
The second largest apple is from the path for wet corn milling needed to produce soda (also
path length 1). The third biggest discrete source of GHG emissions would be the emissions
at the soda manufacturing facility itself (path length 0, about 36 tons CO2e). The LCI value
for this row estimates that there are about 940 tons CO2e below this node. Since it is path 0,
this LCI value comprises all of the other apples in the SPA, and so it equals the rolled-up
aggregated results for $1 million of soft drinks from Figure 12-7. Each SPA has only one
path length 0 entry, and its site value is commonly not the largest value in the sorted SPA.
The final row we highlight is the eleventh largest, which is one of only two paths of length 3
in the abridged (top 15) SPA summary. It is the electricity that goes into alumina refining
that is needed to make aluminum! The sum across the abridged top 15 GWP site values is only
380 tons CO2e, which is about 40% of the total 940 tons CO2e. Summing site values ensures
we do not double count emissions represented in the LCI values.

It is common that the percent contributions of paths will diminish quickly. With an overall
LCI for $1 million of soda of approximately 940 tons CO2e, by the 15th highest path, the
contribution to the total is already less than 1% (8.8 / 940). This further reinforces our
discussion from Chapter 8, which showed that aggregated results also diminish quickly. Thus,
an SPA sorted with the top 15 or 20 paths will typically yield most of the “hot spots” for
discrete activities with the highest impact in the supply chain.

An unabridged SPA for an activity in a 428-sector economy would be massive, with billions
of rows representing all of the apples in the hierarchical tree. But, if we had an unabridged
SPA with all site values for all paths, the sum of all of the GWP site values would equal 940
tons. However, there are many nodes with zero or very small values, many of them deep into
the tiers, and these nodes can be ignored, reducing the size of the SPA. While the complete
method to create the SPA excerpted in Figure 12-8 is beyond the scope of the main chapter


(but is discussed in the Advanced Material for this Chapter, Section 1), it was generated by
truncating the SPA with a cutoff of 0.01% of total GHG emissions and
following paths only down to Tier 4 (i.e., including up to nodes with path lengths of 3). The
cutoff means that the LCI value at a node had to be greater than 0.01% × 940, or 0.094 tons
CO2e, for the node to be included in the results. Even this truncated SPA, however,
returns about 1,100 paths (i.e., imagine 1,100 rows in Figure 12-8). Regardless, using SPA
software with truncation parameters will under-represent the total aggregate results because
of the excluded paths. Not shown here (but visible in the posted spreadsheet) is that the sum
of all site values in the resulting 1,100-path SPA represents only about 75% of the
entire 940 tons CO2e in the supply chain. Thus, 25% of the total greenhouse gas emissions
come from many, many very small paths whose site and LCI values were not large enough to
surpass the threshold or whose path lengths were greater than 3.

One caveat is that despite the more disaggregated results that SPA provides, it still does not
necessarily give facility-level information. The estimate provided in Figure 12-8 for truck
transportation is for all contracts and purchases of truck transportation by the soft drink
manufacturer, which are likely from many different individual trucking companies.

The format of Figure 12-8 arises from the output used by the MATLAB code available to
generate SPA results from an economic system. The general way of describing paths is
beginning to end (bottom to top in a hierarchical tree), e.g., the first one in Figure 12-8 would
be written as ‘Power generation and supply > Soft drink and ice manufacturing’.

Figure 12-9: Hierarchical Tree View of Structural Path Analysis for Soft Drink and Ice Manufacturing

The PowerPoint template used to make Figure 12-9 is in the Chapter 12 folder.


Despite providing significant detail, SPA results expressed in tabular form can be difficult to
interpret. Figure 12-9 shows a truncated graphical representation of most of the SPA results
shown in Figure 12-8 (several of the values come from the underlying SPA, not shown). The
values in the blue rectangles represent the sector names of the activities and the GHG
emissions at the site, similar to Figure 12-3(b), but streamlined to only include a subset of them
at each tier. The numerical values above the rectangles are the GHG LCI values from Figure
12-8. For example, at the very top of the hierarchy is the rectangle representing the initial (Tier
0) effects of Soft drink and ice manufacturing, which Figure 12-8 says has a site value of 36 tons
CO2e and LCI emissions rounded off to 940 tons. Likewise, the Tier 1 site emissions of Wet corn
milling are 52 tons, with an LCI of 119 tons CO2e.

As promised, SPA, unlike aggregate methods, shows a far richer view of where flows occur in
the product system. By using SPA, we could improve our understanding of our product, or
the design of our own LCA study. For example, in the soft drink example above, we could
ensure that key processes such as wet corn milling and other high emissions nodes are within
our system boundary. If such nodes were excluded, we would be ignoring significant sources
of emissions.

Web-based Tool for SPA


Visual structural path analyses similar to those shown above can be generated online via the
EIO-LCA SPA tool (accessed at http://www.eiolca.net/abhoopat/componentviz/ ).

The SPA tool has four elements, which display results of the 2002 benchmark model:

1) A search or browse interface to find the input-output sector you want to analyze

2) A pull down menu for selecting the effect you want to analyze (e.g., energy, carbon
emissions, water use, etc.)

3) A sector hierarchy with categories of products you want to browse amongst to choose
a sector to analyze

4) A structural path analysis graphic displaying the results (shown after the three above
are chosen)

A key component of the structural path visualization is the ability to ‘cut off’ the many small
paths in the supply chain (e.g., all paths with impacts less than 0.1%) in order to more
effectively and efficiently focus on visualizing the results of the larger paths for decision
making. The SPA tool allows the user to select any of the 428 IO sectors in the 2002 model,
and then visualize the hierarchy or chain of effects that lead to significant impacts. Figure
12-10 shows the initial screen displayed when using the SPA tool to select a sector.


Figure 12-10: Home Screen of the Online SPA Tool

Once a sector is chosen in the online interface, either by searching (i.e., by starting to type it
in words and then selecting from a set of auto-completed options) or browsing (i.e., using the
categorical drill down + symbols), the SPA display begins. Figure 12-11 shows the initial SPA
screen for energy use (by default) associated with Electronic computer manufacturing.35 The user
can click the ‘Change Metric’ button to instead display SPA results for greenhouse gas
emissions or other indicators of interest.

35 Note: For consistency, this web tool example will be updated in a future revision to show the same soft drink example described above.


Figure 12-11: Initial SPA Interface Screen for Electronic Computer Manufacturing
(Showing Energy, by default)

The tool also provides in-line help when the cursor is moved over screen elements. For
example, if the metric is changed to ‘Greenhouse Gases’ and the cursor is hovered over any
of the elements in the top row (the top 5 sources), Figure 12-12 shows how the tool
summarizes why those values are shown, i.e., they are the top sectors across the supply chain
of producing computers that emit greenhouse gases.

Figure 12-12: SPA Interface for Greenhouse Gas Emissions, with Help Tip Displayed

Likewise, Figure 12-13 shows how moving the cursor over elements in the first row of the
structural pathway display will spell out the acronyms of the sectors chosen (which are
abbreviated to fit on the display). It also explains the concept of depth introduced above,
which relates to how deep in the supply chain the path is being displayed. At the first level (or
depth) of the visualization, all of the top-level activities that go into final assembly of a
computer are represented as boxes in the large horizontal rectangle of activities. On the left
hand side of this top level is always the sector chosen (in this case, computers), and then sorted
to the right of that choice are the top sectors that result in emissions associated with the highest


level of the supply chain. These include computer storage devices, semiconductors (shown
below), etc.

Figure 12-13: SPA Interface Showing Detail of Elements in First Tier of Display

Each of the boxes in the lower (pathway) portion of the SPA tool shows the respective
percentage of effects from that sector in the overall path. Of all of the highest level processes
in the path of a computer, 17% of the emissions come from computer storage device
manufacturing (CSDM), 16% from semiconductors (S&RDM), 13% from printed
circuit assembly (PCAAM), etc. But each of those activities also has its own upstream
supply chain.

The red bar at the bottom of each of those grey process boxes denotes that the sector has further
upstream processes that may contribute effects in the overall SPA. By clicking on any of these
boxes in the top level, the visualization drills down to all of the activities associated with that
specific pathway. Figure 12-14 shows the SPA visual that would result from choosing
computers, and then subsequently choosing the semiconductor manufacturing process at the
first level.


Figure 12-14: SPA Interface with Second Tier Detail Shown

In this case, the largest emitting activity in the upstream supply chain of semiconductors is
power generation, showing 20% of the relevant upstream emissions from that process. All
other basic inorganic chemicals (AOBICM) would be next highest at 14%. Selecting the power
generation box at this level would again drill down to the next level of the SPA, resulting in
the display in Figure 12-15. At the third path level, the result is that almost all of the emissions
come from the generation of electric power itself, with a few smaller upstream processes like
coal mines, etc., to the right.

Figure 12-15: SPA Interface with Third Tier Detail Shown

Finally, the SPA visual can connect the top impacts with the results shown in the SPA display
at the bottom. By moving the mouse over any of the top 5 sectors on the top of the screen,


the SPA will highlight in the same color all of the processes in your selected structural path
that are associated with that sector. In Figure 12-16, all power generation boxes are shown.
The point of this feature is to visually reinforce the importance of these top 5 sectors.

Figure 12-16: SPA Interface with Top Sources Highlighted as Nodes in Levels

Note that the example drill-down of the SPA shown above is just one of thousands of
combinations that could be explored for any particular top-level sector. For example, instead of
choosing semiconductors in the first row, the analysis could have followed computer
storage devices, and so on. The resulting visualizations would be different and are not shown here.

The discussion and demonstration above hopefully provide further motivation for why one
might be interested in path-specific results in LCA. In the next section, methods are presented
that incorporate additional data to update the structural path results available from IO models.
These methods represent the most detailed hybrid methods available to support detailed
questions at the level of specific nodes in the supply chain.

The Structural Path Exchange Method


While the results of an SPA may be inherently tied to sectoral average data and methods, the
path and node-specific information may be useful when considering the effects of alternative
design or procurement decisions on the relative impacts of product systems. For example, we
may use generic IO-based SPA results to develop a baseline for a product system, and replace
various baseline results with our own data or assumptions as they relate to alternative designs
or purchases.

At the simplest level, path exchange (PXC) is an advanced hybrid LCA method conceived
by Treloar (1997) and summarized theoretically in Lenzen (2009) that ‘exchanges’ path values
from a baseline SPA, e.g., from a national level model, with data related to alternate processes, i.e.,
that differ from those modeled by average and aggregate IO data. The alternate data may be
from primary data, supplier information, assumptions, or locally-available data on specific
paths. The values exchanged may be only for a specific node, an entire subtree of the node, or
more. The main purpose of path exchange is to create an improved estimate of effects across
a network / supply chain, with the alternate exchanged data exploiting the comprehensiveness
of the SPA. The PXC method targets specific nodes in the supply chain. Baboulet and Lenzen (2010)
provide an excellent practical demonstration of path exchange in support of decision making
and policy setting for a university seeking to reduce its carbon footprint.

The steps of path exchange can thus be summarized as:

(1) Perform a general national IO-based SPA for a sector and effect of interest to develop
a baseline estimate.

(2) Identify paths where alternate process data would be used (e.g., paths with relatively high
values in the baseline, or where process values are significantly different than averages),
and where data is available to replace the baseline path values in the SPA. For each of
these exchanged paths, do the following steps:

• Develop a quantitative connection between the alternative process data and
the nature of the relationship of the chosen paths, including potential unit
change differences (e.g., mass to dollars).

• Normalize available process data to replace information in the default path.

• Update the path value.

(3) Re-calculate the SPA results with path exchanges and compare the new results to the
baseline SPA.
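Under the simplifying assumption that exchanged paths do not overlap (so their LCI changes can simply be summed), the steps above reduce to a small worksheet recalculation. The following Python sketch is illustrative only; the function and field names are not part of any SPA package, and the numbers are hypothetical.

```python
# Sketch of the PXC recalculation step (step 3). Each exchange replaces a
# path's baseline LCI contribution with an alternate value (tons CO2e).
# Assumes exchanged paths do not overlap (no subtree double-counting).

def apply_path_exchanges(baseline_total, exchanges):
    """Return the new system total after applying each path exchange."""
    total = baseline_total
    for ex in exchanges:
        total += ex["new_lci"] - ex["old_lci"]
    return total

# Hypothetical worksheet: one exchange replaces a path whose baseline LCI
# value is 80 tons with 4 tons (e.g., site emissions eliminated, upstream
# emissions retained).
worksheet = [{"path": "Power generation > Product",
              "old_lci": 80.0, "new_lci": 4.0}]
new_total = apply_path_exchanges(940.0, worksheet)
print(new_total)  # 864.0
```

Comparing the recalculated total to the baseline then gives the net effect of the exchanges, as in step (3).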

As a motivating example, consider trying to estimate the carbon footprint of a formulation of
soda where renewable electricity has been purchased in key places of the supply chain (instead
of using national average grid electricity everywhere). You would (1) run a baseline SPA on
the soda manufacturing sector, (2) look for nodes in the SPA where electricity is used and has
large impact, and use alternate data on renewables, derive alternate path values, and (3) change
the path values and recalculate and compare to the baseline SPA to see the overall effect of
the green power. Example 12-1 shows a brief example to inspire how PXC works before we
dive into more details and scenarios.

Example 12-1: Use the path exchange method to estimate the GHG emissions reductions of using
renewable electricity on site for soft drink and ice manufacturing (SDM) in the US.

Answer: Consider that an estimate is needed for the total GHG emissions of a physical amount of
soda that, when converted to dollars, is $1 million (maybe this is one month’s worth of physical production
from a facility). Further, the facility making the soda buys wind power.

The results from Figure 12-8 can be used as the baseline since they were generated for $1 million of soda
manufacturing. Figure 12-17 shows an abridged version of Figure 12-8 with the top three paths, and excludes
several unused columns, sorted by site CO2e emissions. Recall that the path in the third row (path length 0)
shows that the site emissions of the soda manufacturing facility are 36 tons CO2e and the total LCI (for the
whole supply chain below it, including the site) is 940 tons CO2e. Row 1 shows that the path for electricity
directly used by the soda factory (path length 1) represents 79.6 tons of CO2e at the site (83.6 tons
considering the whole supply chain below this node).

Baseline SPA Results (left columns) | Path Exchanges (right columns)

Length | GWP Site (tons) | LCI @ Site (tons) | Path Description                  | Exch’d Site (tons) | Exch’d LCI (tons) | Reasons     | Share Exch’d
1      | 79.6            | 83.6              | Power generation and supply > SDM | 0                  | 4.0               | Green Power | -100% site
1      | 52.1            | 119.4             | Wet corn milling > SDM            |                    |                   |             |
0      | 36.0            | 940.4             | SDM                               |                    |                   |             |
Total  |                 | 940.4             |                                   |                    | 860.8             |             |

Figure 12-17: Abridged PXC Worksheet (Top Three Site Values only) for $1 million of Soda Manufacturing

The right hand side of Figure 12-17 shows a worksheet for annotating path exchanges. If we assume that
the green power purchased by the soda factory has no direct greenhouse gas emissions, we note that 100%
of site emissions would be reduced, and record a path-exchanged value of 0 for the site CO2e emissions.
The upstream (LCI) portion of the renewable energy system may or may not have 0 emissions. The existing
upstream LCI value of 4 tons CO2e is for average US electricity, involving a weighted average for the
upstream emissions of extracting and delivering fuel to power plants, equipment manufacture, etc. If we did
not have specific information on the generation type and/or upstream CO2e value for our green power, we
could choose to maintain the 4 tons CO2e LCI value from the baseline SPA. Of course, if we did have
specific information, we could justify an alternate value, like an assumption for 0 in the LCI category as well.

If we made no other changes to the baseline SPA results, our path-exchanged total system would be 860.8
(940.4 – 79.6) tons CO2e emissions – extra significant figures are shown to make the math clear. This is
a fairly significant effect for only one path exchange – an 8% reduction from the baseline SPA results.

The basic path exchange in Example 12-1 also lays the foundation for the general PXC
method. PXC does not manipulate the underlying A or R matrices of the IO model used for
the SPA, and thus does not make broad and consistent changes to the entire economic system.
Following Example 12-1, if PXC changed the R matrix for GHG emissions of electricity (in
this case, made them 0), then all purchases of electricity by all facilities in the entire economy
would be exchanged. Such a change of course would overestimate the effect of the decision
by a single facility. Instead, PXC adjusts specific uses of A or R matrix values for nodes of a
specific path (e.g., those used in Equation 12-2). This is a benefit of SPA and path exchanges
– we can target very specific places in the network.

Equation 12-2 helps to motivate that there are only two general kinds of exchanges – to the
transaction coefficients (e.g., Aij) or intensity coefficients (Ri) underlying the SPA results used
to generate the site and LCI values for specific paths. Transaction coefficient-based exchanges
are those rooted in a change in the level of purchases made. If we remember what the values
in an IO Use Table look like that eventually become elements of an A matrix, then we can
consider that the ‘production recipe’ for a particular path can be presented as a value in units
like cents per dollar. In the drill-down generated by an SPA, we might be able to assess that
the economic value of a particular node is 10 cents per dollar. If we make a decision to change
consumption of this particular node in a future design or decision, then we would edit the
underlying 10 cents per dollar value to something more or less than 10. Buying 50% less would
change this transaction coefficient to 5 cents per dollar.
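Per Equation 12-2, a path node's value is a product of an intensity coefficient, the chain of transaction coefficients along the path, and final demand, so scaling one transaction coefficient scales the node value proportionally. A minimal Python sketch with hypothetical numbers (the function name is illustrative):

```python
# Node value = intensity coefficient x transaction coefficients x final
# demand (the structure of Equation 12-2). All numbers are hypothetical.

def node_site_value(r_i, a_coeffs, y):
    """Site value of one path node from its underlying coefficients."""
    value = r_i * y
    for a in a_coeffs:
        value *= a
    return value

# A 'production recipe' of 10 cents per dollar, exchanged to 5 cents per
# dollar (buying 50% less), halves the node value.
baseline = node_site_value(r_i=50.0, a_coeffs=[0.10], y=100.0)
exchanged = node_site_value(r_i=50.0, a_coeffs=[0.05], y=100.0)
print(round(baseline, 2), round(exchanged, 2))  # 500.0 250.0
```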

On the other hand, changing intensity coefficients is done to represent different decisions or
opportunities where the degree of effect is different. The waste example at the beginning of
the chapter had intensities of 50 and 5 grams per $billion. Again, a path exchange could
increase or decrease these values.

Finally, an exchange can involve both transaction and intensity changes. Regardless of the type
of exchange, and depending on the depth of the path you are trying to exchange, you may
need to perform significant conversions so that you can determine the appropriate coefficients
to use in the exchange. This could take the form of estimation problems (see Chapter 2),
dealing with several physical to monetary (or vice versa) unit conversions, or other issues.

In the end, what you will be exchanging is the path value from the baseline to the exchanged
value (e.g., from 79.6 to 0 in Figure 12-17). You may be able to determine the appropriate
exchanged path value without describing all of the transaction or intensity conversions
(Example 12-1 exemplifies this in showing the exchange to 0).

Building on the prior examples in the chapter about soda manufacturing, Example 12-2 shows
how to use SPA to consider the effects of reducing the amount of corn syrup used in soda, in
support of a more natural product.

Example 12-2: Use the path exchange method to estimate the GHG emissions reductions of using 50%
less corn syrup on site for $1 million of soft drink and ice manufacturing (SDM) in the US.

Answer: Corn syrup (e.g., high fructose corn syrup) is one of the primary ingredients of soda and is
the product of wet corn milling processes. The results from Figure 12-8 can again be used as the SPA
baseline. Figure 12-18 shows our PXC worksheet that includes separate columns to track transaction or
intensity coefficient changes. If we assume that the second row of the table represents all of our direct
purchases of corn syrup, then the values we choose to exchange will fully represent the effect. Behind the
scenes, this would be equivalent to reducing our Aij cell value by 50%. A 50% reduction would reduce the
GHG site and LCI values by 50%. Of course this would have the equivalent effect as finding an alternative
corn syrup supplier with 50% less site and LCI emissions.

Baseline SPA Results (left columns) | Path Exchanges (right columns)

Length | GWP Site (tons) | LCI @ Site (tons) | Path Description                  | Exch’d Site (tons) | Exch’d LCI (tons) | Reasons      | Trans Share Exch’d | Intensity Share Exch’d
1      | 79.6            | 83.6              | Power generation and supply > SDM |                    |                   |              |                    |
1      | 52.1            | 119.4             | Wet corn milling > SDM            | 26.0               | 59.7              | Reduce syrup | -50% site, LCI     |
0      | 36.0            | 940.4             | SDM                               |                    |                   |              |                    |
Total  |                 | 940.4             |                                   |                    | 880.7             |              |                    |

Figure 12-18: Abridged PXC Worksheet (Top Three Site Values Only) for $1 million of Soda Manufacturing

If we made no other changes to the baseline SPA results, our path-exchanged total system would be 880.7
(940.4 – 59.7) tons CO2e emissions – extra significant figures are shown to make the math clear. This is
a fairly significant effect for only one path exchange – a 6% reduction from the baseline SPA results.

The same result occurs if the syrup is purchased from a supplier able to make the same amount of syrup
with 50% lower emissions. The exchange would instead be entered in the intensity share exchange column.
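The arithmetic behind Example 12-2 can be checked directly: a 50% transaction exchange scales both the site and LCI values of the wet corn milling path. A Python sketch using the worksheet figures:

```python
# Check of Example 12-2: a 50% transaction exchange on the
# 'Wet corn milling > SDM' path halves its site and LCI values.
baseline_total = 940.4               # tons CO2e for $1 million of soda
corn_site, corn_lci = 52.1, 119.4    # baseline path values (Figure 12-8)

share = 0.5                          # buy 50% less corn syrup
new_site = corn_site * (1 - share)   # 26.05 tons (26.0 in the worksheet)
new_lci = corn_lci * (1 - share)     # 59.7 tons

new_total = baseline_total - (corn_lci - new_lci)
print(round(new_total, 1))           # 880.7
```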

Example 12-2 shows a GHG reduction comparable to shifting our soda factory to 100% green
power. An important part of a decision of pursuing one or the other alternative would be the
relative costs (not included here). This further demonstrates why SPA and PXC are such
powerful tools – the ability to do these kinds of ‘what if’ analyses to compare alternative
strategies to reduce impact.

When performing PXC, it is important to be careful in tracking the site and LCI values. While
the examples above show both site and LCI values for the abridged baseline SPA for soda,
recall that the full SPA has about 1,100 paths, including separate row entries for nodes
upstream of some of the nodes with large LCI values. Tracking and managing effects in
upstream nodes may be more difficult than these examples imply. In Examples 12-1 and 12-2,
this was done by showing the resulting exchanged values for site and LCI in the same row.

It may be easier to track site and LCI changes separately. For example, Figure 12-8 shows the
top 15 paths of the soda SPA. Row 2 shows the path ‘Wet corn milling > SDM’, and Row 6
shows the path ‘Grain farming > Wet corn milling > SDM’. The LCI value for row 2 is 119 tons,
while the site value for row 6 (which falls under the tree of the node in row 2) is 30 tons. Row
6 represents a significant share of row 2’s LCI value. When Example 12-2 reduced the
purchase of syrup by 50%, we also reduced the LCI value by the same amount, which makes
sense given the transactional nature of the choice. However, there may be other path
exchanges where we want to independently adjust these connected site and LCI values (i.e.,
separately edit site values in the PXC worksheet for rows 2 and 6). Example 12-3 shows how
to represent multiple, offsetting exchanges via PXC.

Example 12-3: Use the path exchange method to estimate the GHG emissions reductions of shifting 50%
of direct truck delivery of soda to rail.

Answer: Results from Figure 12-8 can again be used as the SPA baseline. Figure 12-19 shows our
PXC worksheet for path length 0 (the entire LCI of system), and the paths of length 1 for truck and rail
transportation (the latter not previously shown in Figure 12-8 but available in the supplemental resources).

To reduce 50% of direct deliveries by truck, we exchange a value in Row 2 of the worksheet. This 50%
transactional reduction would reduce the site and LCI values, or about 10.9 tons total. The offsetting increase
in rail may not be simple, as the baseline amount of soda shipped by rail is not given, and the underlying
physical units are not known (e.g., tons/$). Physical or monetary unit factors for the two transport modes
are needed to adjust the rail value. If we assume that truck and rail emit 0.17 kg and 0.1 kg of CO2 per ton-
mile, respectively, then the original 15 tons of CO2 from delivery by truck (row 2) equates to (15,000 kg /
0.17 kg CO2 per ton-mile), or about 88,200 ton-miles. A 50% diversion is 44,100 ton-miles, which at 0.1 kg CO2
per ton-mile of rail adds 4.4 tons CO2 to what is already shipped by rail in the baseline. Relative to
the SPA site baseline of 2.5 tons (row 3), this is an increase by a factor of 2.76 (i.e., +176%), which we
could apply to both the site and LCI values for rail.

Baseline SPA Results (left columns) | Path Exchanges (right columns)

Length | GWP Site (tons) | LCI @ Site (tons) | Path Description           | Exch’d Site (tons) | Exch’d LCI (tons) | Reasons              | Trans Share Exch’d | Intensity Share Exch’d
0      | 36.0            | 940.4             | SDM                        |                    |                   |                      |                    |
1      | 15.0            | 21.7              | Truck Transportation > SDM | 7.5                | 10.9              | Divert truck to rail | -50% site, LCI     |
1      | 2.5             | 3.2               | Rail Transportation > SDM  | 6.9                | 8.8               | Divert truck to rail | +176% site, LCI    |
Total  |                 | 940.4             |                            |                    | 935.1             |                      |                    |

Figure 12-19: Abridged PXC Worksheet for $1 million of Soda Manufacturing

If we made no other exchanges, our path-exchanged total system would have 935.1 tons CO2e emissions –
a fairly insignificant effect for what is likely a large amount of logistical planning. From an economic
perspective, though, it is likely much cheaper.
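The unit conversions in Example 12-3 can be verified step by step; the emission factors below are the ones assumed in the example. A Python sketch:

```python
# Check of Example 12-3: divert 50% of truck deliveries of soda to rail.
EF_TRUCK, EF_RAIL = 0.17, 0.10   # kg CO2 per ton-mile (assumed in example)

truck_site = 15.0                # tons CO2, direct truck delivery (row 2)
ton_miles = truck_site * 1000 / EF_TRUCK  # ~88,200 ton-miles by truck
diverted = 0.5 * ton_miles                # ~44,100 ton-miles moved to rail
rail_added = diverted * EF_RAIL / 1000    # ~4.4 tons CO2 added to rail

rail_site = 2.5                  # baseline rail site value (row 3)
increase = rail_added / rail_site         # ~1.76, i.e., +176%
print(round(increase, 2))        # 1.76

# New total: truck LCI falls by half (-10.85 tons); rail LCI of 3.2 tons
# rises by 3.2 x increase (+5.65 tons)
new_total = 940.4 - 21.7 * 0.5 + 3.2 * increase
print(round(new_total, 1))  # 935.2 (935.1 using the worksheet's rounded entries)
```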

Software and code exist to help with PXC activities, for example the University of Sydney’s
BottomLine tool. Such tools provide detailed interfaces showing path summaries, transaction and
intensity coefficients, etc., to be edited for path exchanges. Without such software, PXC must
be done with exchange worksheets (e.g., in Microsoft Excel) as in Figure 12-17
through Figure 12-19.

Exchanges will sometimes involve more than one path. A substitution in a design may involve
a reduction of transaction or intensity from one path and an increase in another. For example,
if our company elected not to use direct truck transportation for soda, we could not reasonably
deliver our product, and could not capture the full effects of such an exchange by zeroing out
truck transportation. We would need to increase the use of some other mode of transportation
(e.g., rail, which was not shown in Figure 12-8).

While this chapter has focused on IO-based structural path analysis, network analysis of
process matrix models is analogous. The same matrix definitions and techniques are used, and
the main inputs needed are the raw matrices used for the process model. See the Advanced
Material for additional help on network analysis of process matrices.

Chapter Summary
Structural Path Analysis (SPA) is a rigorous quantitative method that provides a way to
disaggregate IO-LCA results to provide insights that are otherwise not possible. These
disaggregated results can be very useful in terms of helping to set our study design parameters
to ensure a high quality result. Path exchange is a hybrid method that allows replacement of
results from specific paths in an SPA based on available monetary or physical data. These
advanced hot spot analysis methods provide significant power, but remain critically dependent
on our data sources.

References for this Chapter


Baboulet, O., and Lenzen, M. Evaluating the environmental performance of a university,
Journal of Cleaner Production, Volume 18, Issue 12, August 2010, Pages 1134–1141. DOI:
http://dx.doi.org/10.1016/j.jclepro.2010.04.006

Crama, Y.; Defourny, J.; Gazon, J. Structural decomposition of multipliers in input-output or
social accounting matrix analysis. Econ. Appl. 1984, 37 (1), 215–222.

Defourny, J.; Thorbecke, E. Structural path analysis and multiplier decomposition within a
social accounting matrix framework. Econ. J. 1984, 94 (373), 111–136.

Lenzen, M. and R. Crawford, The Path Exchange Method for Hybrid LCA, Environ. Sci.
Technol. 2009, 43, 8251–8256.

Peters, G. P. & Hertwich, E. G. Structural analysis of international trade: Environmental
impacts of Norway. Economic Systems Research, 2006, 18, 155–181.

Treloar, G. Extracting embodied energy paths from input-output tables: towards an input-output-based
hybrid energy analysis method. Economic Systems Research, 1997, 9 (4), 375–391.

End of Chapter Questions

For questions 1-4, use the Microsoft Excel file ‘SPA_Automobiles_1million_GHG.xls’ posted in
the Homework files folder for Chapter 12 to answer the questions. This file shows the results of
a baseline SPA for $1 million of Automobile manufacturing in the 2002 EIO-LCA producer model
with respect to total GHG emissions (units of tons CO2e).

Objective 1. Explain the limitations of aggregated LCA results in terms of creating detailed
assessments of product systems.

Objective 2. Express and describe interdependent systems as a hierarchical tree with nodes
and paths through a tree.

1. Draw a hierarchical tree (either by hand or by modifying the posted PowerPoint template
for soft drinks) for the top 10 paths that is similar in layout to Figure 12-9.

2. Using all of the path analysis results, find the percent of the total emissions in the system
specified by the path analysis, and describe in words what the path analysis results tell you
about the GHG hot spots in the supply chain for automobiles.

Objective 3. Describe how structural path analysis methods provide disaggregated LCA
estimates of nodes, paths, and trees.

Objective 4. Interpret the results of a structural path analysis to support improved LCA
decision making.

Objective 5. Explain and apply the path-exchange method to update SPA results for
a scenario of interest, and describe how the results can support changes in design or
procurement.

3. Use the path exchange method to estimate the net CO2e effects of each of the following
adjustments to the 2002 baseline SPA. Without doing a cost analysis, discuss the relative
feasibility of each of the alternatives.

a. Use 100% renewable electricity at automobile assembly factory

b. Use 100% renewable electricity at all factories producing motor vehicle parts

c. Reduce use of carbon-based fuels by 50% at the automobile assembly factory
(assume all SPA node site GHG emissions are from use of fuels)

d. Substitute aluminum (top path 17) for steel (top path 1) in 50% of all motor vehicle
parts. Assume $17,000 of steel and $1,000 of aluminum per $million in parts
currently, and that prices are $450 per ton of steel and $1,000 per ton of aluminum.
Aluminum can substitute for steel at 80% rate.

4. Discuss the limitations of using SPA and path exchanges to model the life cycle of an
automobile.

Advanced Material for Chapter 12 – Section 1 - MATLAB Code for SPA


The MATLAB code used to generate structural path analysis (SPA) results throughout
Chapter 12 is available in the Web-based resources for Chapter 12 on the lcatextbook.com
website (SPAEIO.zip). The core code was originally developed by Glen Peters and is provided
with his permission. Use of alternative SPA tools or code could lead to different path analysis
results than those presented in the chapter. To use the code, unzip the file into a local directory.

The specific .m file in SPAEIO.zip that is used to generate the results for the waste example
in Chapter 12 is called RunSPAChap12Waste.m, and uses the code below to generate the
values for Figure 12-5 and Figure 12-6:
clear all

F = [1 1];        % for econ SPA paths ('1' values just return L matrix)
%F = [50 5];      % for waste paths
A = [0.15 0.25; 0.2 0.05];

filename = 'chap12example';

% code to make default sector names if needed (comment out if not)
[rows, cols] = size(A);
sectornames = cell(rows,1);
for i = 1:rows
    sectornames{i} = ['Sector' num2str(i)];
end

L = inv([eye(2)-A]);

F_total = F*L;
y = zeros(2,1);
y(1,1) = 100;     % The $100 billion of final demand

percent = 0.01;   % 'cut-off' of upstream LCI (as % of total emissions)
T_max = 4;        % Max tiers to search

% this prints the T_max, percent, etc. params in the file
% change to 0 or comment it out if not needed
thresh_banner = 1;

% this last command runs a function in another .m file in the zip file
% parameters of the function are the data matrices and threshold parameters
SPAEIO02(F, A, y, F_total, T_max, percent, filename, sectornames, ...
    thresh_banner);

The rest of the .m files provided in the ZIP folder build the hierarchical tree, sort it, traverse
it across the various paths, and return results, printing only those that meet the threshold
criteria (e.g., with T_max = 4, no paths of length 4 or deeper are output). The other .m files
should not generally need to be modified.36 To use the code, edit the matrices and
parameters in RunSPAChap12Waste.m as needed, and then run it in MATLAB. It will generate a CSV
text file (named chap12example here) with the economic path results below; the
intermediate calculations are summarized in Figure 12-5. You may want to check the math for
some of the paths below to ensure you see which of the nodes they correspond to.
Paths = 15, T_max = 4, percent = 0.01000, Total Effects = 1.518152e+02
1: 0:100.0000:151.8152 : 1 ; Sector1
2: 1:20.0000:29.0429 : 1 ; Sector1 : 2 ; Sector2
3: 1:15.0000:22.7723 : 1 ; Sector1 : 1 ; Sector1
4: 2:5.0000:7.5908 : 1 ; Sector1 : 2 ; Sector2 : 1 ; Sector1
5: 2:3.0000:4.3564 : 1 ; Sector1 : 1 ; Sector1 : 2 ; Sector2
6: 2:2.2500:3.4158 : 1 ; Sector1 : 1 ; Sector1 : 1 ; Sector1
7: 3:1.0000:1.4521 : 1 ; Sector1 : 2 ; Sector2 : 1 ; Sector1 : 2 ; Sector2
8: 2:1.0000:1.4521 : 1 ; Sector1 : 2 ; Sector2 : 2 ; Sector2
9: 3:0.7500:1.1386 : 1 ; Sector1 : 2 ; Sector2 : 1 ; Sector1 : 1 ; Sector1
10: 3:0.7500:1.1386 : 1 ; Sector1 : 1 ; Sector1 : 2 ; Sector2 : 1 ; Sector1
11: 3:0.4500:0.6535 : 1 ; Sector1 : 1 ; Sector1 : 1 ; Sector1 : 2 ; Sector2
12: 3:0.3375:0.5124 : 1 ; Sector1 : 1 ; Sector1 : 1 ; Sector1 : 1 ; Sector1
13: 3:0.2500:0.3795 : 1 ; Sector1 : 2 ; Sector2 : 2 ; Sector2 : 1 ; Sector1
14: 3:0.1500:0.2178 : 1 ; Sector1 : 1 ; Sector1 : 2 ; Sector2 : 2 ; Sector2
15: 3:0.0500:0.0726 : 1 ; Sector1 : 2 ; Sector2 : 2 ; Sector2 : 2 ; Sector2

The format of this output is as follows: the first row displays all of the threshold parameters,
the total number of paths given the thresholds, and the total effects – in this case, economic
results in billions. The rows below show results for each path (sorted by the site effect value):
the path number (here 1-15), the path length, the site and LCI effects (here, $billions), and then
the ordered path (e.g., path #1 is the top-level purchases from sector 1 of the final demand, and
path #15 is the purchases in the path Sector 2 > Sector 2 > Sector 2 > Sector 1). The extraneous
digits in the site and LCI values come from the SPA code (which, by default, outputs 4 post-decimal
digits). The CSV text file results generated by MATLAB can be imported into Microsoft Excel
by opening the text file in Excel and using the import wizard with colons and semicolons
provided as delimiters (a colon needs to be typed into the ‘other’ field).
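As an alternative to the Excel import wizard, the output can be parsed programmatically. A Python sketch (the function name is illustrative, and it assumes each path appears on a single line in exactly the colon/semicolon layout shown above):

```python
# Parse one line of SPA output in the format:
#   "pathnum: length:site:LCI : idx ; Name : idx ; Name : ..."
def parse_spa_line(line):
    fields = [f.strip() for f in line.split(":")]
    path_num = int(fields[0])
    length = int(fields[1])
    site, lci = float(fields[2]), float(fields[3])
    # remaining fields are "index ; sector name" pairs along the path
    nodes = [(int(idx), name.strip())
             for idx, name in (f.split(";") for f in fields[4:])]
    return path_num, length, site, lci, nodes

row = "2: 1:20.0000:29.0429 : 1 ; Sector1 : 2 ; Sector2"
print(parse_spa_line(row))
# (2, 1, 20.0, 29.0429, [(1, 'Sector1'), (2, 'Sector2')])
```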

36 Print_sorted_EIO2.m optionally displays the threshold criteria in the output file, printing of the sector names,
and the number of significant digits to display. These could all be edited if desired.

The economic SPA results represent 99% of all economic effects throughout the supply chain
in only 15 paths. Since the variable percent is 0.01, the SPA code searches for paths up to
T_max where the LCI values are greater than 0.01% of $151.8 billion, or $0.0152 billion. If a
16th path had been identified, it was ignored because its LCI value was less than that amount
(but path 15, at $0.0726 billion, was not).
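The banner total and the cut-off value can be reproduced from the two-sector matrices defined in RunSPAChap12Waste.m; a numpy sketch:

```python
import numpy as np

# Reproduce the SPA banner total and cut-off for the two-sector example.
A = np.array([[0.15, 0.25], [0.20, 0.05]])
L = np.linalg.inv(np.eye(2) - A)     # Leontief inverse
y = np.array([100.0, 0.0])           # $100 billion final demand on Sector 1

econ_total = np.ones(2) @ L @ y      # F = [1 1] just sums the L terms
print(round(float(econ_total), 4))   # 151.8152

cutoff = 0.0001 * econ_total         # percent = 0.01 means 0.01% of total
print(round(float(cutoff), 4))       # 0.0152
```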

If you comment out the second line of code in RunSPAChap12Waste.m (F = [1 1];) and
un-comment the third line (F = [50 5];) and re-run the .m file, it will instead return the
waste path results (as summarized in Figure 12-6):
Paths = 15, T_max = 4, percent = 0.01000, Total Effects = 6.402640e+03
1: 0:5000.0000:6402.6403 : 1 ; Sector1
2: 1:750.0000:960.3960 : 1 ; Sector1 : 1 ; Sector1
3: 2:250.0000:320.1320 : 1 ; Sector1 : 2 ; Sector2 : 1 ; Sector1
4: 1:100.0000:442.2442 : 1 ; Sector1 : 2 ; Sector2
5: 2:112.5000:144.0594 : 1 ; Sector1 : 1 ; Sector1 : 1 ; Sector1
6: 2:15.0000:66.3366 : 1 ; Sector1 : 1 ; Sector1 : 2 ; Sector2
7: 3:37.5000:48.0198 : 1 ; Sector1 : 1 ; Sector1 : 2 ; Sector2 : 1 ; Sector1
8: 3:37.5000:48.0198 : 1 ; Sector1 : 2 ; Sector2 : 1 ; Sector1 : 1 ; Sector1
9: 3:16.8750:21.6089 : 1 ; Sector1 : 1 ; Sector1 : 1 ; Sector1 : 1 ; Sector1
10: 3:5.0000:22.1122 : 1 ; Sector1 : 2 ; Sector2 : 1 ; Sector1 : 2 ; Sector2
11: 2:5.0000:22.1122 : 1 ; Sector1 : 2 ; Sector2 : 2 ; Sector2
12: 3:12.5000:16.0066 : 1 ; Sector1 : 2 ; Sector2 : 2 ; Sector2 : 1 ; Sector1
13: 3:2.2500:9.9505 : 1 ; Sector1 : 1 ; Sector1 : 1 ; Sector1 : 2 ; Sector2
14: 3:0.7500:3.3168 : 1 ; Sector1 : 1 ; Sector1 : 2 ; Sector2 : 2 ; Sector2
15: 3:0.2500:1.1056 : 1 ; Sector1 : 2 ; Sector2 : 2 ; Sector2 : 2 ; Sector2

Note that the path numbers in the economic and waste path results are different, as they are
sorted based on total economic and waste effects, respectively. Path #1 in both happens to be
the path of length 0. But Paths #2-15 do not refer to the same paths. Economic path #2
(Sector 2 > Sector 1) corresponds to waste path #4.
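The correspondence, and the node values behind it, can be checked from the underlying coefficients per Equations 12-1 and 12-2; a numpy sketch using the matrices from the script:

```python
import numpy as np

# Verify the node values for the path Sector 2 > Sector 1
# (economic path #2 and waste path #4 in the outputs above).
A = np.array([[0.15, 0.25], [0.20, 0.05]])
L = np.linalg.inv(np.eye(2) - A)
y1 = 100.0                           # $100 billion final demand on Sector 1

econ_site = A[1, 0] * y1             # Sector 1's purchases from Sector 2
print(round(float(econ_site), 4))    # 20.0

F_waste = np.array([50.0, 5.0])      # waste intensities, g per $billion
waste_site = F_waste[1] * econ_site          # intensity times node purchase
waste_lci = (F_waste @ L)[1] * econ_site     # whole subtree below the node
print(round(float(waste_site), 4), round(float(waste_lci), 4))  # 100.0 442.2442
```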

Connecting back to the chapter discussion on the coefficients to be changed in the path
exchange method, the economic and waste path results above provide the node values, which
are products of the underlying coefficients (Equations 12-1 and 12-2). The path results do not
show the various individual coefficients. For example, the node value for economic path #3
is $15 billion (second row of Figure 12-5) and the effect node value for the corresponding
waste path #2 is 750 g (second row of Figure 12-6).

Using the SPA Code for Other Models


In terms of edits to the code shown in RunSPAChap12Waste.m, the y vector and/or the F, A,
and L matrix assignments can be modified. For example, to use the SPA code in conjunction
with the 2002 US benchmark EIO-LCA model (described in the Advanced Material for
Chapter 8 - Section 5), a load command can be added to load the .mat file containing the
EIO-LCA F, A, and L matrices, and the lines of code defining F, A, and L can then be edited to
point to specific matrices in that model. RunEIO02SPA.m, also included in the SPAEIO.zip file,
does a path analysis of sector #70, Soft drink and ice manufacturing, in the 2002 EIO-LCA
producer model (several lines differ from RunSPAChap12Waste.m, including the load, F, A, L,
filename, sectornames, and y assignments):
clear all
load('Web-030613/EIO02.mat')   % relative path to 2002 EIOLCA .mat file

F = EIvect(7,:);  % energy & GHG results matrix, row 7 is total GHGs
A = A02ic;        % from the industry by commodity matrix
filename = 'softdrinks_1million';

% sector names in the external .mat file
sectornames = EIOsecnames;

L = L02ic;        % industry by commodity total reqs

F_total = F*L;
y = zeros(428,1);
y(70,1) = 1;      % sector 70 is soft drink mfg (soda), $1 million

percent = 0.01;   % 'cut-off' of upstream LCI (% of total effects)
T_max = 4;        % Max tiers to search

% this prints the T_max, percent, etc. params in the file
% change to 0 or comment it out if not needed
thresh_banner = 1;

% this last command runs two other .m files in the zip file
% the parameters on the right hand side are the threshold parameters
SPAEIO02(F, A, y, F_total, T_max, percent, filename, sectornames, ...
    thresh_banner);

The load command looks for the path to the EIO02.mat file, which in the code above is in a
directory within the same directory as the .m file. You would need to edit this path to point to
where you put the file. The next few lines of code set the inputs to various components of the
EIO02.mat file. The F vector is set to row 7 of the matrix EIvect in the EIO02.mat file,
which, as stated in Chapter 8, contains all of the energy and GHG multipliers for the R matrix
(row 7 holds the total GHG emission factors across the 428 sectors). A points to A02ic (the
2002 industry by commodity direct requirements matrix), L points to the already inverted
L02ic, sectornames is set to the vector of sector names (EIOsecnames), and y has a 1 in row 70
and 0’s in all other 427 rows. The RunEIO02SPA.m code is run the same way as the
RunSPAChap12Waste.m code, and yields the following excerpted results (only the first 10
paths are shown, which were used to make Figure 12-8):
Paths = 1095, T_max = 4, percent = 0.01000, Total Effects = 9.403932e+02
1: 1:79.6233:83.5978: 70 ; SDM : 31 ; Power generation and supply
2: 1:52.0585:119.4311: 70 ; SDM : 44 ; Wet corn milling
3: 0:36.0017:940.3932: 70 ; SDM
4: 2:32.4451:48.0935: 70 ; SDM : 174 ; Aluminum product manufacturing from purchased aluminum : 173 ; Alumina refining and primary aluminum production
5: 2:32.0096:33.6074: 70 ; SDM : 148 ; Plastics bottle manufacturing : 31 ; Power generation and supply
6: 2:29.7694:42.8620: 70 ; SDM : 44 ; Wet corn milling : 2 ; Grain farming
7: 1:19.9944:166.0256: 70 ; SDM : 174 ; Aluminum product manufacturing from purchased aluminum
8: 1:15.1703:22.8200: 70 ; SDM : 11 ; Milk Production
9: 1:14.9953:21.7487: 70 ; SDM : 324 ; Truck transportation
10: 2:14.7441:15.4801: 70 ; SDM : 174 ; Aluminum product manufacturing from purchased aluminum : 31 ; Power generation and supply
Since the format was discussed above, we note only that the first line of results shows that the
total LCI for $1 million of soft drink manufacturing is 940.4 tons CO2e (as shown in Figure
12-7 or Figure 12-8). The remaining 10 rows show the path-specific results, which were
reformatted and rounded to one decimal digit for Figure 12-8. Soft drink and ice manufacturing
has again been abbreviated SDM to conserve space. Summing all of the site values in the 1,095
paths would give a value of 684 tons CO2e, which is 73% of the total 940.4 tons CO2e. As
discussed earlier, this is an expected outcome when using threshold parameters to limit the
runtime of the code and the number of paths produced. Increasing T_max and/or reducing
the percent parameter in the SPA code will always increase the number of paths and the total
site emissions in the paths, and consequently the percentage coverage of the SPA compared
to the total will increase.
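To make the relationship between the enumerated paths and the total concrete, the sketch below restates the SPA idea in Python for an invented 2-sector economy. This is not the SPAEIO02 implementation; the matrices, cutoff handling, and output format are all simplified, but it shows why the sum of enumerated site values falls short of the full F*L*y total:

```python
import numpy as np

# Toy 2-sector economy, loosely in the spirit of the chapter's example
# (these numbers are invented, not the chapter's actual values).
A = np.array([[0.15, 0.25],
              [0.20, 0.05]])
F = np.array([3.0, 8.0])         # direct effect per $ of output
L = np.linalg.inv(np.eye(2) - A)
y = np.array([1.0, 0.0])         # $1 of final demand for sector 0

total = float(F @ L @ y)         # total LCI (all paths, all tiers)

T_max = 4                        # maximum tier depth to search
percent = 0.01                   # cutoff, as % of total effects
cutoff = (percent / 100.0) * total

paths = []                       # (tier, site_value, chain_of_sectors)

def expand(sector, dollars, tier, chain):
    """Enumerate supply-chain paths: 'dollars' is the output of 'sector'
    required along this particular path; its site effect is F[sector]*dollars.
    (A real implementation would also prune subtrees whose maximum possible
    contribution falls below the cutoff.)"""
    site = F[sector] * dollars
    if site >= cutoff:
        paths.append((tier, site, chain))
    if tier < T_max:
        for supplier in range(len(F)):
            expand(supplier, A[supplier, sector] * dollars, tier + 1,
                   chain + [supplier])

for i in range(len(F)):
    if y[i] > 0:
        expand(i, y[i], 0, [i])

covered = sum(site for _, site, _ in paths)
print(f"{len(paths)} paths cover {covered:.2f} of {total:.2f} "
      f"({100 * covered / total:.0f}%)")
```

Raising T_max or lowering percent admits more of the excluded tail of small and deep paths, so the covered share creeps toward (but never reaches) 100% of the total.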

Homework Questions for this Section


1. Using the provided RunSPAChap12Waste.m file for the 2-sector economy example from
the chapter, fill in the table below for the sum of all site waste effects across paths as T_max
ranges from 2 to 5 and as percent ranges across 0.01, 0.1, and 1. Determine what % of total
waste (found above as 6,403g) is represented in each cell of the table. Describe in words what
the results in the table tell you about the waste of this 2-sector economy.

                              T_max
percent          2          3          4          5
0.01                                6,384 g
0.1
1

2. Perform the same analysis as in question 1, but using the RunEIO02SPA.m file and the Soft
drink and ice manufacturing sector, and reporting the sum of all site CO2e emissions across paths
per million dollars of final demand. Determine what % of total emissions (found above as
940.4 t) is represented in each cell of the table. Describe in words what the results in the table
tell you about the CO2e emissions for this sector.

                              T_max
percent          2          3          4          5
0.01                                  684 t
0.1
1
3. Modify the provided RunEIO02SPA.m code so that it uses the 2002 commodity-by-
commodity A and L matrices (keeping all other values the same, e.g., percent=0.01 and
T_max=4). How different are the GHG results as compared to the 684 tons CO2e obtained
with the industry by commodity values? Discuss the differences in results.

4. Use the 2002 EIO-LCA producer model to create an expanded SPA for $1 million of
automobiles that includes lifetime gasoline purchases with respect to total GHG emissions.
Assume year 2002 cars cost $25,000, have a fuel economy of 25 mpg, and are driven 100,000
miles. Assume the 2002 gasoline price was $1.30 per gallon. Discuss how this SPA differs
from the SPA for $1 million of automobile manufacturing only. Also discuss the limitations
of this model for the life cycle emissions of gasoline-powered vehicles. (Hint: all of the
MATLAB SPA code discussed so far has entered just a single value into y.)

5. Modify the provided MATLAB code to use the US LCI process matrix (from Chapter 9)
to generate an SPA for 1 kg of Alumina, at plant/US for fossil CO2, with parameters
percent=0.01 and T_max=4. Report the sum of all site fossil CO2 emissions across paths for
the 1 kg input. Determine what % of total emissions is represented. Describe in words what
the results tell you about the CO2 emissions of this process.
