Explanation and Integration in Mind and Brain Science



Explanation and Integration in Mind and Brain Science

edited by
David M. Kaplan


Great Clarendon Street, Oxford, ox2 6dp,
United Kingdom
Oxford University Press is a department of the University of Oxford.
It furthers the University’s objective of excellence in research, scholarship,
and education by publishing worldwide. Oxford is a registered trade mark of
Oxford University Press in the UK and in certain other countries
© the several contributors 2017
The moral rights of the authors have been asserted
First Edition published in 2017
Impression: 1
All rights reserved. No part of this publication may be reproduced, stored in
a retrieval system, or transmitted, in any form or by any means, without the
prior permission in writing of Oxford University Press, or as expressly permitted
by law, by licence or under terms agreed with the appropriate reprographics
rights organization. Enquiries concerning reproduction outside the scope of the
above should be sent to the Rights Department, Oxford University Press, at the
address above
You must not circulate this work in any other form
and you must impose this same condition on any acquirer
Published in the United States of America by Oxford University Press
198 Madison Avenue, New York, NY 10016, United States of America
British Library Cataloguing in Publication Data
Data available
Library of Congress Control Number: 2017956292
ISBN 978–0–19–968550–9
Printed and bound by
CPI Group (UK) Ltd, Croydon, CR0 4YY
Links to third party websites are provided by Oxford in good faith and
for information only. Oxford disclaims any responsibility for the materials
contained in any third party website referenced in this work.

Contents

List of Figures vii
List of Contributors ix

1. Integrating Mind and Brain Science: A Field Guide 1
David M. Kaplan
2. Neuroscience, Psychology, Reduction, and Functional Analysis 29
Martin Roth and Robert Cummins
3. The Explanatory Autonomy of Cognitive Models 44
Daniel A. Weiskopf
4. Explanation in Neurobiology: An Interventionist Perspective 70
James Woodward
5. The Whole Story: Explanatory Autonomy and Convergent Evolution 101
Michael Strevens
6. Brains and Beliefs: On the Scientific Integration of Folk Psychology 119
Dominic Murphy
7. Function-Theoretic Explanation and the Search for Neural Mechanisms 145
Frances Egan
8. Neural Computation, Multiple Realizability, and the Prospects
for Mechanistic Explanation 164
David M. Kaplan
9. Marr’s Computational Level and Delineating Phenomena 190
Oron Shagrir and William Bechtel
10. Multiple Realization, Autonomy, and Integration 215
Kenneth Aizawa
11. A Unified Mechanistic Account of Teleological Functions
for Psychology and Neuroscience 236
Corey J. Maley and Gualtiero Piccinini

Index 257

List of Figures

2.1 Reduction hierarchy 31
3.1 Baddeley’s model of working memory 48
4.1 The Hodgkin–Huxley model 95
5.1 Golden mole; marsupial mole 107
7.1 An adder 151
7.2 A state-space portrait for the eye-position memory network 153
8.1 Example of cross-orientation suppression in V1 neurons 170
8.2 Sound localization in birds and mammals 174
8.3 Neural computation of interaural time differences (ITDs)
in birds and mammals 176
9.1 Marr’s portrayal of the ambiguity in matching elements
to determine the depth of an object 198
9.2 Edge detection 203
10.1 Signal ambiguity with a single type of cone 230
10.2 Signal disambiguation in a system with three types of cone 230

List of Contributors

Kenneth Aizawa, Rutgers University
William Bechtel, University of California, San Diego
Robert Cummins, University of Illinois, Urbana-Champaign
Frances Egan, Rutgers University
David M. Kaplan, Macquarie University
Corey J. Maley, University of Kansas
Dominic Murphy, University of Sydney
Gualtiero Piccinini, University of Missouri, St Louis
Martin Roth, Drake University
Oron Shagrir, Hebrew University of Jerusalem
Michael Strevens, New York University
Daniel A. Weiskopf, Georgia State University
James Woodward, University of Pittsburgh

1
Integrating Mind and Brain Science
A Field Guide

David M. Kaplan

1. Introduction
The relationship between neuroscience and psychological science has long been a topic of
discussion among philosophers and scientists alike, and there is growing appreciation that
understanding this complex relationship is of fundamental importance to achieving progress across these
scientific domains. Is the relationship between them one of complete autonomy or
independence—like two great ships passing in the night? Or is the relationship a
reductive one of total dependence—where one is subordinate to the other? Or perhaps
the correct picture is one of mutually beneficial interaction and integration—lying
somewhere in between these two extremes? One primary strategy for addressing this
issue, and one that occupies center stage in this volume, involves understanding
the nature of explanation in these different domains. Representative questions taken
up by various chapters in this volume include: Are the explanatory patterns employed
across these domains similar or different in kind? If their explanatory frameworks do
in fact differ, to what extent do they inform and constrain each other? And finally, how
should answers to these and other related questions shape our thinking about the pros-
pects for integrating mind and brain science?
Several decades ago, during the heyday of computational cognitive psychology, the
prevailing view was that the sciences of the mind and brain enjoy a considerable degree
of independence or autonomy from one another—with respect to their theories, their
research methods, and the phenomena they elect to investigate (e.g., Fodor 1974; Johnson-
Laird 1983; Lachman et al. 1979; Newell and Simon 1972; Pylyshyn 1984; Simon 1979). In
an expression of the mainstream perspective in the field at the time, the psychologist
Philip Johnson-Laird proposes that “[t]he mind can be studied independently from
the brain. Psychology (the study of the programs) can be pursued independently
from neurophysiology (the study of the machine code)” (Johnson-Laird 1983, 9).

In the intervening decades, the doctrine of disciplinary autonomy has fallen on hard
times. Today, it is far from being the universally held or even dominant view. In fact,
given the emergence of cognitive neuroscience as a new scientific field formed precisely
at the interface between these disciplines, one might reasonably wonder whether the
consensus has now shifted in exactly the opposite direction—towards a view of complete
disciplinary integration and interdependence rather than autonomy. In the inaugural
issue of the Journal of Cognitive Neuroscience, then editor Michael Gazzaniga writes:
In the past 10 years, there have been many developments in sciences concerned with the study of
mind. Perhaps the most noteworthy is the gradual realization that the sub-disciplines committed
to the effort such as cognitive science, neuroscience, computer science and philosophy should not
exist alone and that each has much to gain by interacting. Those cognitive scientists interested in
a deeper understanding of how the human mind works now believe that it is maximally fruitful
to propose models of cognitive processes that can be assessed in neurobiologic terms. Likewise, it
is no longer useful for neuroscientists to propose brain mechanisms underlying psychological
processes without actually coming to grips with the complexities of psychological processes
involved in any particular mental capacity being examined.  (Gazzaniga 1989, 2)

From the outset, contributors to the cognitive neuroscience movement have explicitly
recognized the interdisciplinary and integrative nature of the field, which is unified
by the common goal of trying to decipher how the mind and brain work (Boone and
Piccinini 2016; Churchland and Sejnowski 1988). Despite the rapidly growing influ-
ence of cognitive neuroscience and cognate fields such as computational neuroscience,
some researchers continue to maintain that neuroscience is largely or completely
irrelevant to understanding cognition (e.g., Fodor 1997; Gallistel and King 2009). Others
maintain that psychology is (or ought to be) a tightly integrated part of the broader
scientific enterprise to discover and elucidate the multi-level mechanisms underlying
mind and cognition (e.g., Boone and Piccinini 2016; Piccinini and Craver  2011).
Hence, the debate over an autonomous psychology remains incompletely settled.
The objective of this chapter is to provide a field guide to some of the key issues that
have shaped and continue to influence the debate about explanation and integration
across the mind and brain sciences. Along the way, many of the central proposals
defended in the individual chapters will be introduced and important similarities and
differences between them will be highlighted. Since questions on this topic have a long
track record of philosophical and scientific engagement, providing some of the broader
historical and theoretical context will facilitate a deeper appreciation of the contribu-
tions each individual chapter makes to these important and ongoing debates.

2. Autonomy and Distinctness: Some Provisional Definitions
It is frequently claimed that psychology is autonomous and distinct from neuroscience
and other lower-level sciences. But what exactly do these terms mean? Before proceeding
it will prove useful to have working definitions of these key concepts, which recur
throughout this introductory chapter as well as the volume more generally.
First, consider the notion of autonomy. Generally speaking, autonomy implies
independence from external influence, control, or constraint. The Tibet Autonomous
Region is, at least according to the Chinese Government, appropriately so called
because it is free of direct external control from Beijing. Autonomous robotic
vehicles are appropriately so called because they are capable of sensing and navigating
in their environments without reliance on direct human input or control. In a similar
manner, scientific disciplines may also be autonomous from one another. Following
Piccinini and Craver (2011), we might provisionally define one discipline as being
autonomous from another when at least one of the following conditions
is satisfied:
(a) they can independently select which phenomenon to investigate,
(b) they can independently select which methods to use,
(c) they can independently select which theoretical vocabulary to apply,
(d) the laws/theories/explanations from one discipline are not reducible to the
laws/theories/explanations of the other discipline, or
(e) evidence from one discipline does not exert any direct constraints on the laws/
theories/explanations of the other discipline.
Importantly, this characterization of autonomy is flexible and admits of degrees.
A scientific discipline can in principle completely or partially satisfy one or more of
these conditions (a–e), and consequently can be completely or partially autonomous
with respect to another discipline. At one extreme, a discipline may only incompletely
or partially satisfy a single condition, comprising a minimal form of autonomy. At
the other extreme, a discipline may completely satisfy all conditions, instantiating a
maximal form of autonomy (at least with respect to identified conditions a–e).
The notion of distinctness is closely related, but logically weaker. Disciplines exhibit
distinctness if they investigate different kinds of phenomena, use different kinds of
methods, or construct different kinds of explanations. The last of these is most relevant
in the context of the present volume. As we will see, the thesis of the explanatory dis-
tinctness of neuroscience and psychology—roughly, that they employ characteristically
different kinds of explanation—is a key premise in a number of recent arguments for
the autonomy of psychology.
It is important to distinguish between autonomy and distinctness because one can
obtain without the other. Generally speaking, distinctness is a necessary but insufficient
condition for autonomy (for additional discussion, see Piccinini and Craver 2011).
Without distinctness there is clearly no scope for autonomy. If two disciplines investi-
gate the same phenomena, in an important sense, they cannot independently select which
phenomenon to investigate. They are instead constrained or bound to investigate the
same phenomena. Similarly, if two disciplines employ the same methods or theoretical
vocabularies, in an important sense, they cannot independently select which methods
or theoretical vocabularies to use. They are bound to use the same across the disciplines.
Although distinctness is required for autonomy, it does not entail it. Two or more
things can be distinct yet be mutually dependent or interdependent. Consider a simple
example. The Earth is distinct from the Sun, yet these systems influence one another in
a multitude of ways (e.g., gravitationally and thermodynamically). They are distinct,
but not autonomous in any interesting sense of the word. Similarly, a scientific field or
discipline may have its own distinct laws, principles, and theories, yet these may turn
out to be reducible to or evidentially constrained by those of another discipline. Even
though distinctness does not entail autonomy, as will be discussed shortly, they are
often endorsed as a package deal.

3. Reduction or Autonomy? A Debate Oscillating between Two Extremes
Philosophers weighing in on this topic have tended to focus on the prospects of either
(a) achieving integration or unification of psychology and neuroscience via theory
reduction, or (b) securing the autonomy of psychology and establishing in principle
its  irreducibility to neuroscience via multiple realizability. Despite its historical
prevalence, one obvious problem with this way of carrying out the debate is that it
assumes a binary opposition between two extreme positions—either psychological
science reduces to or is completely autonomous from neuroscience. According to the
traditional picture, the proposed relationship between psychology and neuroscience is
either one of complete dependence (reduction) or complete independence (auton-
omy). There is no stable middle ground. Many recent contributors to the debate reject
this strong binary assumption and instead recognize that there is a continuum of
plausible positions lying in between these two extremes. These intermediate positions
involve some kind of relationship of partial dependence or partial constraint. A major
objective of this volume is to focus attention on some of these “middle ground” posi-
tions that have been staked out in the debate and highlight their associated prospects
and problems. Before considering these intermediates, however, it will be useful to
take a closer look at the extremes.

3.1  Theory reduction


Many of the dominant ideas concerning the relationship between the mind and brain
sciences have emerged from traditional philosophical perspectives concerning
explanation and reduction. No view is more influential in this regard than the cover-
ing law account of explanation. According to the covering law account, explaining
some event or phenomenon involves showing how to derive it in a logical argument
(Hempel and Oppenheim 1948). More specifically, a scientific explanation should be
expressible as a logical argument in which the explanandum-phenomenon (that
which is being explained) appears as the conclusion of the argument and the explanans
(that which does the explaining) appears as the premise set, which includes statements
characterizing the relevant empirical conditions under which the phenomenon
obtains (initial conditions) and at least one general law required for the derivation of
the explanandum. According to the view, good scientific explanations are those in
which the explanans provides strong or conclusive evidential grounds for expecting
the explanandum-phenomenon to occur (Hempel 1965).
In its most general formulation, the covering law account is intended to apply
uniformly to the explanation of spatiotemporally restricted events such as the explo-
sion of the Space Shuttle Challenger, as well as the explanation of general regularities
or laws such as the explanation of Kepler’s laws of planetary motion in terms of more
basic laws of Newtonian mechanics. A derivation of one or more sets of laws (com-
prising a theory) from another set of laws (comprising another theory) is known as
an intertheoretic reduction. According to the covering law account, intertheoretic
reduction comprises a special case of deductive-nomological explanation.
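To fix ideas, a covering law explanation of a singular event can be set out schematically, in the same spirit as the reduction schema given below (a textbook illustration rather than an example drawn from this volume):

Law: all metals expand when heated
Initial conditions: this copper rail is made of metal and was heated
_______________________________
∴ Explanandum: this copper rail expanded

The explanans (the law together with the initial conditions) deductively entails the explanandum, and it is this entailment that, on Hempel’s view, confers explanatory force.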
Nagel (1961) developed these ideas into an explicit model of theory reduction, pro-
posing that a theory (or law) from a higher-level science such as psychology can be
reduced to, and thereby explained by, a theory (or law) from a lower-level science
such as neuroscience or biology just in case (a suitably axiomatized version of) the
higher-level theory can be logically derived from (a suitably axiomatized version of)
the lower-level theory. Since the terminology employed in both the reduced and
reducing theories will invariably differ in some way, so-called bridge principles or
rules of correspondence are required to establish links between the terms of the two
theories. For example, a bridge principle might connect terms from thermodynamics
such as “heat” with those of statistical mechanics such as “mean molecular energy.”
Finally, because the reduced theory will typically only apply over a restricted part
of the domain of the reducing theory or at certain limits, boundary conditions that set
the appropriate range for the reduction are often required in order for the derivation
to be successful. The theory reduction model can be represented schematically as
follows (Bechtel 2008, 131):
Lower-level laws (in the basic, reducing science)
Bridge principles
Boundary conditions
_______________________________
∴ Higher-level laws (in the secondary, reduced science).
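As a rough illustration of how the schema is meant to work (a standard textbook case, simplified here rather than drawn from Oppenheim and Putnam), consider the reduction of the ideal gas law to the kinetic theory of gases:

Lower-level laws: kinetic theory, which yields PV = (2/3)N⟨E⟩ for N molecules with mean translational kinetic energy ⟨E⟩
Bridge principle: temperature corresponds to mean molecular kinetic energy, ⟨E⟩ = (3/2)kT
Boundary conditions: a dilute gas of many, effectively point-like, elastically colliding molecules
_______________________________
∴ Higher-level law: the ideal gas law PV = NkT, a law of the reduced science (thermodynamics).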
Oppenheim and Putnam (1958) famously argue that the Logical Positivists’ grand
vision of scientific unification can finally be achieved, at least in principle, by reveal-
ing the derivability relationships between the theories of the sciences. They start by
assuming that each scientific discipline occupies a different level within a single
­global hierarchy. The Oppenheim–Putnam framework then involves an iterated
sequence of reductions (so-called micro-reductions) starting with the reduction of
some higher-level theory to the next lowest-level theory, which in turn is reduced to
the next lowest-level theory, and so on, until the level of fundamental physical theory
is eventually reached. As Fodor succinctly puts it: “all true theories in the special sci-
ences should reduce to physical theories in the long run” (1974, 97). Oppenheim and
Putnam’s general framework entails a specific conception of how psychology will
eventually reduce to neuroscience and beyond:

It is not absurd to suppose that psychological laws will eventually be explained in terms of the
behavior of individual neurons in the brain; that the behavior of individual cells—including
neurons—may eventually be explained in terms of their biochemical constitution; and that the
behavior of molecules—including the macro-molecules that make up living cells—may even-
tually be explained in terms of atomic physics. If this is achieved, then psychological laws will
have, in principle, been reduced to laws of atomic physics.  (Oppenheim and Putnam 1958, 7)

Although many philosophers once held out high hopes for reductive successes of this
kind, few are so optimistic today. The theory reduction account faces challenges along
several fronts including those raised about its adequacy as a general account of the
relations between the sciences and as a specific account of the relation between neuro-
science and psychology. Its descriptive adequacy as a general account of reduction in
science has been called into question as it has proved exceedingly difficult to locate real
examples that satisfy the account even in domains thought to be paradigmatic such as
physics (e.g., Sklar 1967). Other general issues concern its oversimplified or otherwise
inaccurate portrayal of the relationships between the various sciences including the
relationships between the theories, concepts, and explanandum phenomena of those
sciences (e.g., Bickle 1998; Churchland 1989; Feyerabend 1962; Schaffner 1967, 1969;
Wimsatt 2007). Yet, it is the specific challenges that stand out as most relevant for
present purposes.
One primary reason for heightened skepticism about theory reduction as an
adequate account of the specific relationship between neuroscience and psychology is
the conspicuous absence of laws or lawlike generalizations in these sciences. This is what
Rosenberg (2001), in the context of biology, aptly refers to as the “nomological vac-
uum.” Since unification is supposed to be achieved by deriving the laws of psychology
from the laws of neuroscience (or some other lower-level science such as biophysics),
clearly a precondition for such unification is the availability of laws at both the level of
reduced and reducing theories. If the theoretical knowledge of a given discipline can-
not be specified in terms of a set of laws (an assumption that mechanists and others
reject), there is simply no scope for unification along these lines. Yet, despite decades of
effort to identify genuine lawful generalizations in psychology or neuroscience of the
sort one finds in other scientific disciplines such as physics, few if any candidate laws
have been revealed.
In their chapter, Martin Roth and Robert Cummins echo similar criticisms about the
“nomic conception of science” at the heart of the covering law framework. As Cummins
puts it in his earlier and highly influential work on the nature of psychological
explanation: “Forcing psychological explanation into the subsumptivist [covering
law] mold made it continuous with the rest of science only at the price of making it
appear trivial or senseless” (Cummins 1983, 27). In their chapter, Roth and Cummins
identify one source of confusion underwriting views that attribute an explanatory
role to laws in psychological science. Building on previous work by Cummins (2000),
they indicate how the term “law” in psychology is often (confusingly) used by
researchers working in the field to refer to effects (i.e., robust patterns or regularities),
which are the targets of explanation rather than explanations in themselves. For
example, Fitts’ law describes but does not explain the widely observed tradeoff
between speed and accuracy in skilled human motor behavior. The Weber–Fechner
law describes but does not explain how the just-noticeable difference between two
stimuli is proportional to the magnitude of the stimuli. Nevertheless, someone
might be tempted to try to read off the nomological character of psychological science
(and the explanatory role of psychological laws) from the mere appearance of the
word “law” in these instances. Yet these nominal laws, which simply describe effects
or phenomena to be explained, do not satisfy any of the standardly accepted criteria
for lawhood such as being exceptionless, having wide scope, etc., and are thus poorly
suited to play the required role in covering law explanation and theory reduction.
Roth and Cummins instead maintain that psychological laws are better understood
as capturing the explananda for psychological science rather than the explanans,
and argue that, appearances notwithstanding, psychological explanations do not
involve subsumption under laws. Their efforts to expose how the nomic character
of psychology is largely illusory place additional pressure on efforts to recruit the
covering law framework to shed light on the nature of psychological explanation
and reduction.
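For concreteness, the two effects just mentioned are standardly written as follows (textbook formulations, not taken from Roth and Cummins’s chapter). Fitts’ law: MT = a + b log2(2D/W), where MT is movement time, D is the distance to the target, W is the target width, and a and b are empirically fitted constants. The Weber–Fechner law: ΔI/I = k, where ΔI is the just-noticeable difference for a stimulus of magnitude I and k is a constant (the Weber fraction); in Fechner’s formulation, perceived intensity grows as S = c log(I/I0). Each equation compactly describes a robust pattern in the data, which is precisely why, on Roth and Cummins’s view, it names an explanandum rather than supplying an explanation.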
Another reason many participants in this debate see intertheoretic reduction as
a problematic way to achieve unification among the scientific disciplines is that
successful reduction renders the laws and theories of the higher-level (reduced) science
expendable in principle. Since all of the laws and all observational consequences of the
higher-level (reduced) theory can be derived directly from information contained in
the lower-level theory, the resulting picture is one in which the higher-level sciences
in principle provide no distinctive, non-redundant explanatory contribution over and
above that made by the lower-level science. As Fodor puts it, reductionism has “the
curious consequence that the more special sciences succeed, the more they ought to
disappear” (1974, 97). In practice, however, higher-level sciences might retain their
usefulness temporarily until the lower-level sciences become theoretically mature
enough to support the reductions on their own, or they might play heuristic roles such
as revealing the regularities or phenomena that the lower-level sciences seek to explain.
Hence, even hard-core reductionists such as John Bickle can admit that “psychological
causal explanations still play important heuristic roles in generating and testing
neurobiological hypotheses” (author’s emphasis; Bickle 2003, 178). But this picture
will nevertheless appear deeply unsatisfying to those who seek to secure a long-term
explanatory role for psychological science. For these and other reasons, using theory
reduction as the basis for an account of the relationship between psychology and
neuroscience has appeared unpromising to many.
The traditional theory reduction framework thus offers one potential strategy for
unifying or integrating psychological and brain science, but one that is fraught with
problems. The well-known considerations rehearsed above indicate that the pros-
pects for achieving unification by reduction are either extremely dim due to the lack
of explanatory laws in psychology and neuroscience, or else reduction can succeed
but in doing so would impose unbearably heavy costs by rendering psychology
explanatorily inert and obsolete. Neither of these paths appears particularly promis-
ing. This has consequently inspired a search for alternative ways of characterizing
the relationship between the sciences of the mind and the brain that do not bottom
out in theory reduction, including those that manage to secure some degree of
autonomy for psychology.
Before moving on, it is worth pausing briefly to describe another reductionist
account—importantly distinct from the theory reduction account—that has received
considerable attention in recent decades. This is the “ruthless reductionism” account
advocated primarily by John Bickle (2003, 2006). Bickle’s account rejects a number of
core assumptions of the theory reduction view including that laws are central to reduc-
tion, and that successful reductions of the concepts and kinds posited by higher-level
theories to those of some basic lower-level theory proceed via a sequence of step-wise
reductions. According to ruthless reductionism, reductions can instead be “direct” (i.e.,
without any intermediate steps) such as the “reductions of psychological concepts and
kinds to molecular-biological mechanisms and pathways” (Bickle 2006, 412). Bickle
argues that researchers in lower-level neuroscience such as cellular and molecular
neuroscience accomplish these direct reductions by experimentally intervening at the
cellular or molecular level and producing detectable effects at the level of the phenom-
enon to be explained (the behavioral or psychological level). Accordingly, there is a
path for reduction that skips over any intermediary levels.
Despite its role in the broader debate, ruthless reductionism exhibits many similar
problems to traditional theory reduction accounts. In particular, it treats higher-level
explanations in psychology (and even higher-level fields within neuroscience includ-
ing cognitive neuroscience and systems neuroscience) as expendable in principle,
and therefore fails to secure a permanent role for explanations developed at these
higher levels. It therefore fails to exemplify the type of “middle ground” integrative
views about the relationship between psychology and neuroscience emphasized in
this volume.
3.2  Autonomy and multiple realizability
Another traditional response that philosophers have given is to argue that psychology
exhibits a similar kind of autonomy with respect to “lower-level” sciences such as
neuroscience in the sense that their theories or explanations are unconstrained by
evidence about neural implementation.

Many early defenses of the autonomy of psychology and other higher-level sciences
involved appeals to multiple realizability in order to deny the possibility of reducing
the theories or laws of the higher-level science to those of the lower-level science
(e.g., Fodor 1974, 1997; Putnam 1975). These views emerged as direct responses to the
traditional theory reduction model and its negative implications for the independent
status of psychology and the rest of the special sciences.
Recall that according to the classical theory reduction model, successful intertheo-
retic reduction requires a specification of appropriate bridge principles and boundary
conditions (Nagel 1961). Bridge principles establish systematic mappings or identities
between the terms of the two theories, and are essential for the reduction to go through.
Anti-reductionists therefore naturally gravitate towards these bridge principles in their
attacks, claiming that bridge principles will generally be unavailable given that the
events picked out by special science predicates or terms (e.g., functionally defined terms
such as “money” or “pain”) will be “wildly disjunctive” from the perspective of lower-
level sciences such as physics (Fodor 1974, 103). In other words, the enterprise to build
bridge principles connecting the vocabularies or predicates of the higher- and lower-
level sciences in an orderly, one-to-one manner breaks down because higher-level phe-
nomena are often multiply realized by heterogeneous sets of lower-level realizers. Put
somewhat differently, multiple realizability entails that the predicates of some higher-
level science will cross-classify phenomena picked out by predicates from a lower-level
science. The one-to-many mapping from the psychological to the neurobiological (or
physical) implied by multiple realizability renders the bridge principle building enter-
prise at the heart of the theory reduction model a non-starter. Since the establishment of
bridge principles is a necessary condition for classical intertheoretic reduction, multiple
realizability directly implies the irreducibility and autonomy of psychology.
This line of argument has connections to functionalist and computationalist views
in the philosophy of mind, which also depend on a notion of multiple realizability.
According to one influential version of computationalism, cognition is identified with
digital computation over symbolic representations (Newell and Simon 1976; Anderson
1996; Johnson-Laird  1983; Pylyshyn  1984). Proponents of computationalism have
long maintained that psychology can accomplish its explanatory objectives without
reliance on evidence from neuroscience about underlying neural mechanisms.
Multiple realizability is taken to justify a theoretically principled neglect of neurosci-
entific data based on the alleged close analogy between psychological processes and
running software (e.g., executing programs) on a digital computer, and the multiple
realizability of the former on the latter. According to the analogy, the brain merely pro-
vides the particular hardware on which the cognitive programs (e.g., software) happen
to run, but the same software could in principle be implemented in indefinitely many
other hardware platforms. For this reason, the brain is deemed a mere implementation
of the software. If the goal is to understand the functional organization of the
software—the computations being performed—determining the hardware details is a
relatively unimportant step.

If psychological capacities are akin to the functional capacities of computer software
in that they can be implemented in diverse physical substrates or hardware, then, in an
important sense, they are distinct from and irreducible to the neural mechanisms that
happen to realize them. For parallel reasons, psychological explanations making refer-
ence to psychological properties are likewise thought to be autonomous and distinct
from neurobiological explanations citing the neural properties that realize them.
Although this line of argument held sway in philosophy for some time, multiple
realizability-based arguments for the explanatory autonomy of psychology have been
vigorously challenged in recent decades. For example, critics maintain that the evi-
dence for multiple realization is substantially weaker than has been traditionally
assumed (Bickle  2003,  2006; Bechtel and Mundale  1999; Churchland  2005; Polger
2004, 2009; Shapiro 2000) or that the thesis of multiple realizability is consistent with
reductionism (Richardson  1979; Sober  1999), and so psychological explanations
either reduce to or ought to be replaced by neurobiological explanations.
In his chapter, Kenneth Aizawa enters into this debate and argues that multiple
realization is alive and well in the sciences of the mind and brain, albeit in a more
restricted form than many proponents of autonomy have previously endorsed.
Focusing on examples from vision science, he argues that when one attends to actual
scientific practice it becomes clear how evidence for different underlying neural
mechanisms (lower-level realizer properties) for a given psychological capacity (higher-
level realized properties) is not always handled in identical ways. Sometimes this
information is used to support multiple realizability claims. Other times it is not. More
specifically, Aizawa makes the case that scientists do not always adopt an “eliminate-
and-split” strategy according to which differences in the realizer properties result in
the elimination of the putative multiply realized higher-level property in favor of
two (or more) distinct higher-level psychological properties corresponding to the
different realizers. The role of the “eliminate-and-split” strategy has been the subject
of much philosophical discussion since Fodor (1974) first explicitly identified it as
a theoretical possibility:

[W]e could, if we liked, require the taxonomies of the special sciences to correspond to the
taxonomy of physics [or neuroscience] by insisting upon distinctions between the natural
kinds postulated by the former wherever they turn out to correspond to distinct natural kinds
in the latter.  (Fodor 1974, 112)

If neuroscientists always applied this taxonomic strategy, multiple realizability would


be ruled out in principle since differences in how the realizer properties are taxono-
mized would always reflect differences in how the realized properties are taxonomized.
Clearly, this would undercut the prospects for an autonomous psychology. Aizawa
aims to show that this is not always the case; sometimes the higher-level taxonomy is
resilient in the face of discovered differences in lower-level realizers. Aizawa defends
the view that how discoveries about different lower-level realizers are treated depends
on specific features of the higher-level theory. In particular, sometimes higher-level
psychological kinds are theorized in such a way as to permit a degree of individual
variation in underlying mechanisms; other times they are not. It is only in the latter
case that higher-level psychological kinds are eliminated and split in light of evidence
about different underlying mechanisms. In cases of the former, higher-level kinds are
retained in spite of such evidence.
Aizawa thus offers a more nuanced account of the role of multiple realizability
considerations in contemporary mind and brain science, and aims to show how a
piecemeal or partial but nonetheless genuine form of autonomy of higher-level
psychological kinds may be secured. This is not the sweeping autonomy that Fodor
envisioned, where the structural taxonomy of neuroscience never interacts with, informs,
or otherwise constrains the functional taxonomy of psychology. Neither is it a
wholesale form of reduction where the higher-level kinds are slavishly dictated by the
taxonomic divisions established by the lower-level science. Instead, sometimes (but
not always) higher-level kinds are retained in spite of such divisions.

4.  Functional and Computational Explanation


A somewhat different strategy for establishing the autonomy of psychology, which
does not directly rely on appeals to multiple realizability, involves identifying the dis-
tinctive kind (or kinds) of explanation constructed and used across these different
disciplines. The key idea here is that the discipline of psychology has its own explana-
tory patterns, which do not simply mimic those of another more fundamental
discipline and are not reducible to them. According to the general line of argument,
although the prevalent form of explanation in the neurosciences and other biological
sciences is mechanistic explanation (Bechtel 2008; Bechtel and Richardson 1993/2010;
Craver 2007; Machamer et al. 2000), the dominant form of explanation in psychology
is functional or computational explanation. Critically, the latter are not to be assimi-
lated to the former; they are distinct kinds of explanation.

4.1  Functional explanation


It is widely assumed that the primary (although not exclusive) explananda in psychology
are sensory, motor, and cognitive capacities such as object recognition or working
memory (e.g., Von Eckardt  1995); and that psychologists explain these capacities
by  providing a functional analysis (e.g., Cummins 1975,  1983; Fodor  1965, 1968).
Cummins defines functional analysis as follows: “Functional analysis consists in
analysing a disposition into a number of less problematic dispositions such that [the]
programmed manifestation of these analyzing dispositions amounts to a manifestation
of the analysed disposition” (Cummins 1983, 28). The central idea is that functional
analysis involves decomposing or breaking down a target capacity (or disposition) of a
system into a set of simpler sub-capacities and specifying how these are organized to
yield the capacity to be explained. Traditionally, functional analysis has been thought
to provide a distinct form of explanation from mechanistic explanation, the dominant
mode of explanation employed in many lower-level sciences including neuroscience
(Cummins 1975, 1983; Fodor 1965, 1968). Call this the DISTINCTNESS thesis. As a
reminder, mechanistic explanations describe the organized assemblies of component
parts and activities responsible for maintaining, producing, or underlying the phe-
nomenon of interest (Bechtel 2008; Bechtel and Richardson 1993/2010; Craver 2007;
Machamer et al. 2000). Cummins expresses his commitment to DISTINCTNESS in
the following passages:

Form-function correlation is certainly absent in many cases, however, and it is therefore


important to keep functional analysis and componential [mechanistic] analysis conceptually
distinct.  (1983, 29)
Since we do this sort of analysis [functional analysis] without reference to an instantiating system,
the analysis is evidently not an analysis of the instantiating system.  (Cummins 1983, 29)

Fodor similarly embraces DISTINCTNESS when he states: “[V]is-à-vis explanations


of behavior, neurological theories specify mechanisms and psychological theories
do  not” (1965, 177). Although DISTINCTNESS does not entail the autonomy of
psychology from neuroscience (call this the AUTONOMY thesis), often these
are  defended together. Thus, Cummins embraces AUTONOMY when he claims:
“[T]his analysis [functional analysis] seems to put no constraints at all on [a given
system’s] componential analysis” (Cummins 1983, 30). Taken together, these claims
about DISTINCTNESS and AUTONOMY form what has been called the received
view about psychological explanation (Barrett 2014; Piccinini and Craver 2011).
In their chapter in this volume, Roth and Cummins further refine the influential
position first developed by Cummins (1983). They argue that a proper understanding of
functional analysis permits us to see how it provides a distinct and autonomous kind of
explanation that cannot be assimilated to that of mechanistic explanation, but which
nevertheless bears an evidential or confirmational relation to the description of under-
lying mechanisms. As an illustrative example, they describe a functional analysis of the
capacity to multiply numbers given in terms of the partial products algorithm. The step-
by-step algorithm specification provided by the functional analysis reveals little to no
information about the implementing mechanism, yet they argue that the analysis pro-
vided by the algorithm provides a complete explanation for the capacity in question.
According to the view Roth and Cummins defend, the functional analysis permits an
understanding of why any system possessing the capacity for computing the algorithm
ipso facto exhibits the specific regularities or patterns that constitute the phenomenon
to be explained. And this, they argue, is all that is being requested of the explanation.
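To make the example concrete, here is a minimal sketch of the partial products algorithm in Python (my illustration, not code from Roth and Cummins’s chapter); the decomposition into single-digit multiplication, shifting by place value, and addition is the kind of analysis into simpler sub-capacities that Cummins describes:

def partial_products(x, y):
    # Analyze the capacity to multiply (non-negative integers) into simpler
    # capacities: multiply x by each digit of y, shift the result by that
    # digit's place value, and sum the resulting partial products.
    partials = []
    for place, digit in enumerate(reversed(str(y))):
        partials.append(x * int(digit) * 10 ** place)
    return sum(partials)

# Any system that realizes these steps thereby multiplies:
# partial_products(128, 46) returns 5888, i.e., 128 * 46.

The analysis is silent on whether the steps are carried out by neurons, logic gates, or pencil and paper, which is why, on Roth and Cummins’s view, it can explain the capacity without describing an implementing mechanism.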
Roth and Cummins acknowledge that information about lower-level implementa-
tion details can deepen our understanding of the systems whose capacities are targeted
by functional analyses. But they nevertheless stress that, strictly speaking, this infor-
mation should neither be interpreted as a proper part of the explanation nor as a
requirement on adequate functional explanation. In their words, “having a fuller
understanding of a system, in this sense, is not the same thing as having a more
complete explanation of the [capacity] targeted for functional analysis” (Roth and
Cummins, this volume, 37). Roth and Cummins instead suggest that there is a crucial
distinction between explaining a capacity via functional analysis (what they call
horizontal explanation) and explaining how a functional analysis is implemented
(what they call vertical explanation)—a distinction which, in their opinion, has
been repeatedly elided or conflated in the literature. In other words, functional
explanation is not mechanistic explanation.
While evidence from neuroscience is relevant to determining which functional ana-
lysis is correct, they argue that the specific role that details about underlying neural
mechanisms play is one of confirmation, not explanation. As they put it, “bringing such
knowledge to bear in this instance would be an exercise in confirming a proposed ana-
lysis, not explaining a capacity” (Roth and Cummins, this volume, 39). Their discussion
provides an important clarification of the original, highly influential position first
articulated by Cummins (1983). The chapter also raises the stakes in the current debate,
since it stands diametrically opposed to recent attempts by proponents of the mechan-
istic perspective to identify functional analyses as elliptical or incomplete mechanistic
explanations (Piccinini and Craver 2011). This view will be taken up in detail below.
Along similar lines, in his chapter, Daniel Weiskopf argues that psychological models
can be explanatorily adequate in the sense that they satisfy standardly accepted norms
of good explanation such as providing the ability to answer a range of counterfactual
questions regarding the target phenomenon and the ability to manipulate and control
the target phenomenon, without necessarily being mechanistic. A cognitive model
(sometimes also referred to as a “box-and-arrow” model; see Datteri and Laudisa 2014)
involves a set of functionally interacting components each of which is characterized
on  the basis of its functional profile (and typically couched in representational or
information-processing terms). According to Weiskopf, cognitive models can provide
perfectly legitimate causal explanations of psychological capacities by describing the
way information is represented and processed. Although these models evidently
describe real causal structure, they do not embody determinate commitments about
the neural mechanisms or structures underlying these capacities. They do not provide
a specifiable decomposition of the target system into spatially localizable physical
parts, and critically, these mechanistic details do not need to be filled in for the model
to be endowed with explanatory force. Consequently, on Weiskopf’s view, psychological
explanation is fundamentally different in kind to mechanistic explanation.

4.2  Computational explanation


Along closely related lines, others have argued that computational explanations of
psychological capacities are different in character from the mechanistic explanations
found in neuroscience and other life sciences. Sejnowski, Koch, and Churchland (1988)
express one primary motivation for this view:
Mechanical and causal explanations of chemical and electrical signals in the brain are different
from computational explanations. The chief difference is that a computational explanation
refers to the information content in the physical signals and how they are used to accomplish
a task.  (Sejnowski et al. 1988, 1300)

According to this perspective, computational explanations differ because they appeal


to information-processing or representational or semantic notions, and this somehow
makes them incompatible with the mechanistic framework. Recent advocates of the
mechanistic approach counter that computational explanation can be readily under-
stood as a species of mechanistic explanation despite having distinctive features
(Bechtel 2008; Kaplan 2011; Piccinini 2007; Boone and Piccinini 2016).
Others have attempted to draw a stark boundary between computational and
mechanistic explanations by arguing that computational explanations are abstract or
mathematical in a way that prevents their integration into the mechanistic framework
(e.g., Chirimuuta 2014). On Chirimuuta’s view, computational explanations—even
those constructed in computational neuroscience—embody a “distinct explanatory
style” which “cannot be assimilated into the mechanistic framework” because they
“indicate a mathematical operation—a computation—not a biological mechanism”
(2014, 124). Since these explanations are claimed to be highly abstract—focusing
on  the high-level computations being performed—they are supposed to enjoy a
considerable degree of autonomy from low-level details about underlying neural
mechanisms. This is a version of the multiple realizability claim encountered above.
In his contribution to this volume, David Kaplan argues that this kind of claim
rests on persistent confusions about multiple realizability and its implications for
mechanistic explanation. Specifically, he argues against the lessons that Chirimuuta
and others wish to draw from recent modeling work involving so-called canonical
neural computations—standard computational modules that apply the same funda-
mental operations across multiple brain areas. Because these neural computations
can rely on diverse circuits and mechanisms, modeling the underlying mechanisms
is  argued to be of limited explanatory value. They take this work as evidence that
computational neuroscientists sometimes employ a distinctive explanatory scheme
from that of mechanistic explanation. Kaplan offers reasons for thinking this conclusion
is unjustified, and addresses why multiple realization does not always limit the pros-
pects for mechanistic explanation.
In her contribution to the volume, Frances Egan argues for a position on the same
side of the debate as Chirimuuta and others who seek to defend the distinctness and
autonomy of computational explanation. Egan argues that a common type of explanation
in computational cognitive science is what she terms function-theoretic explanation.
Building on ideas from her earlier work (Egan 1995, 2010), she contends that this type
of explanation involves articulating how a given system computes some mathematic-
ally well-defined function and how performing this computation contributes to the
target capacity in question. For example, Marr famously proposed that the early visual
system performs edge detection by computing the zero-crossing of second-derivative
filtered versions of the retinal inputs (i.e., the Laplacian of a Gaussian; ∇²G*I)—a well-
defined mathematical function. This is a paradigmatic function-theoretic explanation—
because it provides a mathematically precise specification of what the early visual
system does and an adequate explanation of how it does it. According to Egan, function-
theoretic characterizations can possess explanatory import even in the absence of
details about how such computations are implemented in neural systems. Consequently,
they are not properly interpreted as mechanistic explanations.
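To make the function-theoretic idea concrete, the mathematical function at issue can be written out explicitly (a standard statement of the Marr–Hildreth operator, not Egan’s own notation). The image I is convolved with the Laplacian of a two-dimensional Gaussian,

G(x, y) = (1/2πσ²) exp(−(x² + y²)/2σ²),   ∇²G = ∂²G/∂x² + ∂²G/∂y²,

and edges are marked at the zero-crossings of (∇²G * I)(x, y), the points where the filtered image changes sign. A function-theoretic explanation, in Egan’s sense, specifies this function and shows why computing it subserves edge detection; it does not say how retinal or cortical circuits carry out the convolution.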
Views about autonomous computational explanation are often backed up by
appeals to David Marr’s influential tri-level computational framework. According to
Marr (1982), there are three distinct levels of analysis that apply to all information-
processing systems ranging from digital computers to the brain: the computational
level (a specification of what function is being computed and why it is computed),
the  algorithmic level (a specification of the representations and computational
transformations defined over those representations), and the implementation level
(a specification of how the other levels are physically realized). Marr’s discussion of
the relationships between these levels appears to reinforce the idea of an autonomous
level of computational explanation. First, he repeatedly emphasizes the relative importance
of the computational level:

[I]t is the top level, the level of computational theory, which is critically important from
the  information-processing point of view. The reason for this is that the nature of the
computations . . . depends more upon the computational problems to be solved than upon
the particular hardware in which their solutions are implemented.  (Marr 1982, 27)

This privileging of the computational level, coupled with the fact that his preferred
methodology is top-down, moving from the computational level to the algorithmic,
and ultimately, to implementation, has fostered the idea of an autonomous level of
computational explanation. Second, in some passages, Marr appears to claim that there
are either no direct constraints between levels or that the operative constraints are rela-
tively weak and only flow downward from the computational level—claims that are
clearly at odds with the mechanistic view. For instance, he states that: “since the three
levels are only rather loosely related, some phenomena may be explained at only one or
two of them” (1982, 25). If the levels of analysis were unconstrained by one another
in this manner, this could certainly be used to draw a conclusion about an explanatorily
autonomous level.
Nevertheless, there are numerous places where Marr sounds much more mechanis-
tic in his tone (for further discussion, see Kaplan 2011). Although his general compu-
tational framework clearly emphasizes how one and the same computation might in
principle be performed by a wide array of distinct algorithms and implemented in a
broad range of physical systems, when the focus is on explaining a particular cogni-
tive capacity such as human vision, Marr appears to strongly reject the claim that any
computationally adequate algorithm (i.e., one that has the same input–output profile
or computes the same function) can provide an equally appropriate explanation of
how the computation is performed in that particular system. After outlining their
computational hypothesis for the extraction of zero-crossings in early vision, Marr
quickly shifts gears to determine “whether the human visual system implements these
algorithms or something close to them” (Marr and Hildreth 1980, 205; see also Marr
et al. 1979; Marr 1982). The broader context for this passage suggests that Marr did not
view this as a secondary task, to be undertaken after the complete and fully autono-
mous computational explanation is given. Instead, Marr appears to be sensitive to the
critical explanatory role played by information about neural implementation. On this
interpretation, Marr’s view is much more closely aligned with the mainstream of con-
temporary computational neuroscience. Interestingly, Tomaso Poggio, one of Marr’s
principal collaborators and a highly accomplished computational neuroscientist in
his  own right, recently espoused a view that similarly emphasizes the importance
of elaborating the various connections and constraints operative between different
levels of analysis. He argues that real progress in computational neuroscience will only
be achieved if we attend to the connections between levels (Poggio 2010).
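The notion of a computationally adequate algorithm invoked above can be illustrated with a toy example (my illustration, not Marr’s): two procedures can compute the very same function while constituting competing algorithmic-level hypotheses about a particular system.

def sum_to_n_iterative(n):
    # One algorithm: accumulate 1 + 2 + ... + n a step at a time.
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def sum_to_n_closed_form(n):
    # A different algorithm computing the very same function, via n(n + 1)/2.
    return n * (n + 1) // 2

# Identical input-output profiles (both return 5050 for n = 100), yet at most
# one of them may describe how a given system actually arrives at its answer.

This is the sense in which computational adequacy alone leaves open which algorithm, and which implementation, a particular system such as the human visual system actually employs.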
In their contributed chapter, Oron Shagrir and William Bechtel shed further light
on the nature of computational explanation and its status vis-à-vis the mechanistic
approach. Like many seeking to understand computational explanation, they too
engage with Marr’s foundational work on the topic. They focus their attention on
what they view as an underappreciated feature of Marr’s (1982) account of the com-
putational level of analysis. Marr defines the computational level as the “level of
what the device does and why” (1982, 22). The role of the what-aspect is relatively
straightforward, involving a specification of what computation is being performed
(or what mathematical function is being computed). The role of the why-aspect is
different—it specifies how the specific computations being performed are adequate
to the information-processing task.
According to Shagrir and Bechtel, many interpreters of Marr have provided an
incomplete analysis of the computational level because they have neglected the
why-aspect. Part of the reason for this neglect is that Marr never provides a detailed
and systematic account of this aspect of the computational level. In their chapter,
Shagrir and Bechtel offer a plausible reconstruction of Marr’s views concerning the
computational level. They maintain that the why-aspect characterizes why a particular
computation is the one the system in fact needs to perform, given the structure of the
physical environment in which it is embedded (i.e., the target domain). Marr (1982)
calls these constraints imposed by the physical environment “physical constraints,”
and implies that any visual system worth its salt must be capable of preserving certain
structural relations present in the target domain (i.e., must be “designed” to reflect
these physical constraints). However, Marr’s original discussion raises more questions
than it provides answers. It is here that Shagrir and Bechtel make real headway. They
argue that the why-aspect of the computational analysis provides a characterization of
the structure-preserving mapping relation between the computed function and the target
domain. It thus serves to relate the physical constraints to the computed function—and
in doing so, it demonstrates the appropriateness of the computed function for the
information-processing task at hand. This, according to Shagrir and Bechtel, is why
the early visual system computes the Laplacian of a Gaussian as opposed to performing
multiplication or exponentiation or factorization.
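
To make the what-aspect concrete, the computation at issue can be set out in a few lines. The sketch below is a minimal illustration in Python (using NumPy and SciPy); the function name, the choice of sigma, and the neighbour-comparison scheme are assumptions made for the example rather than details taken from Marr and Hildreth. It filters an image with a Laplacian of a Gaussian and marks the zero-crossings of the response.

    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def marr_hildreth_edges(image, sigma=2.0):
        # What is computed: convolve the image with the Laplacian of a
        # Gaussian, then read edge locations off as the zero-crossings
        # of the filtered response.
        response = gaussian_laplace(image.astype(float), sigma=sigma)
        edges = np.zeros(response.shape, dtype=bool)
        # Mark a pixel when the sign of the response flips between it and
        # its neighbour along either image axis.
        edges[:-1, :] |= np.signbit(response[:-1, :]) != np.signbit(response[1:, :])
        edges[:, :-1] |= np.signbit(response[:, :-1]) != np.signbit(response[:, 1:])
        return edges

Nothing in such a specification says whether, or how, retinal and cortical circuitry implements the filtering; that is exactly the further implementation question discussed above.
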
Shagrir and Bechtel also make the case that the computational level of analysis pro-
vides indispensable information for the construction of mechanistic explanations in
so far as it specifies the target phenomenon to be explained in precise quantitative or
mathematical terms. They argue that delineating scientific phenomena in general is
an essential and highly non-trivial scientific task, and it is a specific prerequisite for
building mechanistic explanations. Hence, another one of Marr’s great insights was to
highlight the importance of having a clear and precise specification of the computational
phenomenon in order to develop an explanation.

5.  Mechanistic Explanation


Advocates of the mechanistic approach to explanation have articulated a vision of
disciplinary integration that neither bottoms out in classical theory reduction nor
attempts to undermine arguments for the autonomy of psychology by challenging
multiple realizability claims (Bechtel 2007, 2008; Piccinini and Craver 2011). According
to many defenders of the mechanistic perspective, the traditional framing of the debate
imposes a false choice between reduction and autonomy because it implies that these
are mutually exclusive options. Bechtel, for example, maintains that the key to resolving
this debate is understanding how the mechanistic framework enables a “rapprochement
between reductionism and the independence of investigations focused on higher levels
of organization” (Bechtel 2008, 158).

5.1  Modest reductionism afforded by the mechanistic approach


According to Bechtel (2007, 2008), the kinds of reduction achieved through mech-
anistic explanations, in contrast to those posited by the traditional theory reduction
model, are fully compatible with a robust notion of autonomy for psychology and
other special sciences. He states:
Within the mechanistic framework one does not have to reject reduction in order to allow for
the independence of the higher-level sciences. The decomposition required by mechanistic
explanation is reductionist, but the recognition that parts and operations must be organized
into an appropriate whole provides a robust sense of a higher level of organization.
(Bechtel 2008, 130)

Mechanistic explanations are reductionist in the specific sense that they seek to
explain the overall pattern of activity or phenomenon generated by the mechanism as
a whole by appealing to lower-level component parts and their activities. Yet despite
this reductionist character, it is claimed to be (more) palatable to anti-reductionists
because mechanistic explanations involve a non-trivial form of autonomy in so far as
the higher-level (spatial and temporal) organization of the components in a target
mechanism is often essential to producing the phenomenon to be explained.
According to Bechtel, “[m]odes of organization are not determined by the compo-
nents but are imposed on them” (Bechtel 2007, 192). Furthermore, successful
mechanistic explanations sometimes go beyond describing the local mechanism and
its underlying components because they appeal to conditions of the broader system
or environment in which the mechanism is embedded and without which they could
not perform their functions (Bechtel 2008). This too has been argued to secure the
autonomy of higher levels of organization and explanation that do not directly
depend on multiple realizability.
Relatedly, the mechanistic framework embodies a distinctive account of levels of
organization in mechanisms, which in turn affords a more modest view of reduction
than the traditional theory reduction model. Whereas the traditional approach
assumes that higher-level theories can be reduced in succession to increasingly lower
levels until some fundamental level is reached, which in turn grounds all the higher
levels, the mechanistic approach rejects this global account of reduction. Although
mechanistic explanations are reductionist in the sense that they appeal to lower-level
parts and their operations to explain some higher-level behavior of the mechanism,
the reductions supported have a local character since there is no single fundamental
level that globally grounds all higher levels of mechanisms. In stark contrast to
traditional approaches that construe levels as global strata spanning the natural world
(Oppenheim and Putnam 1958), levels of organization in mechanisms are local in the
sense that they are defined relative to a given mechanism (Bechtel 2008; Craver 2007).
In a particular mechanistic context, two arbitrary elements are deemed to reside at the
same mechanistic level only if they are components in the same mechanism, and they
occupy a higher or lower level depending on how they figure into a componential or
part-whole relation within a mechanism. Critically, questions concerning whether
components of a given mechanism (or the mechanism as a whole) reside at a higher,
lower, or the same level as entities outside the mechanism are not well defined (Bechtel
2008; Craver 2007).

5.2  Functional analysis as elliptical mechanistic explanation


Along somewhat different lines, Piccinini and Craver (2011) maintain that the
mechanistic perspective encourages a rethinking of the received view of psychological
explanation as a kind of functional analysis or functional explanation (e.g., Cummins,
1975, 1983, 2000; Fodor 1968), a rethinking that eliminates all commitments to autonomy.
Piccinini and Craver (2011) reject the received view and instead argue that functional
and mechanistic explanations are neither distinct nor autonomous from one another
precisely because functional analysis, when properly constrained, provides a kind of
mechanistic explanation—partial or elliptical mechanistic explanation.1

1  Arguably, a precursor of this view was articulated and defended much earlier by Bechtel and Richardson (1993/2010). In that work, they repeatedly emphasize how both functional and structural decompositions of a target system (decomposition and localization, respectively) must be incorporated to produce adequate mechanistic explanations. Decomposition “allows the subdivision of the explanatory task so that the task becomes manageable and the system intelligible” and “assumes that one activity of a whole system is the product of a set of subordinate functions performed in the system” (Bechtel and Richardson 2010, 23). In addition, the decomposed sub-capacities must also be assigned to structural components of the underlying mechanism. In other words, they must be localized. Localization involves the “identification of the different activities proposed in a task decomposition with the behavior or capacities of specific components” (Bechtel and Richardson 2010, 24). Therefore, according to their view, identifying either the functional or structural properties of a system alone will fail to yield an adequate mechanistic explanation. Instead, mechanistic explanation requires both a functional and structural analysis of the target system. These are complementary, not independent or competing endeavors.

Mechanistic explanations, which are prevalent throughout the biological sciences including
neuroscience, involve the identification of the mechanism responsible for maintaining,
producing, or underlying the phenomenon of interest (Bechtel  2008; Bechtel and
Richardson 1993, 2010; Craver 2007; Craver and Darden 2013; Machamer et al. 2000).
Piccinini and Craver maintain that this shift in perspective will open up a pathway for
“building a unified science of cognition” (2011, 284). Their main claim is as follows:
The core idea is that functional analyses are sketches of mechanisms, in which some structural
aspects of a mechanistic explanation are omitted. Once the missing aspects are filled in, a functional
analysis turns into a full-blown mechanistic explanation. By this process, functional analyses are
seamlessly integrated with multilevel mechanistic explanations.  (Piccinini and Craver 2011, 284)

According to Piccinini and Craver (2011), a functional analysis is a mechanism sketch
in which the capacity to be explained is decomposed into sub-capacities, yet most if
not all of the information about the underlying structural components or parts is
omitted. According to the mechanistic perspective they endorse, structural informa-
tion provides an essential source of constraints on functional analyses. It must be
incorporated if a given analysis is to count as revealing the causal organization of the
system and in turn explanatory. As they put it:
If the connection between analyzing tasks and components is severed completely, then there is
no clear sense in which the analyzing sub-capacities are aspects of the actual causal structure
of the system as opposed to arbitrary partitions of the system’s capacities or merely possible
causal structures.  (Piccinini and Craver 2011, 293)

Once the missing structural information about the components underlying each
identified sub-capacity is filled in, the mechanism sketch is transformed into a (more
complete) mechanistic explanation.
The proposed picture involves a rejection of both DISTINCTNESS and AUTONOMY.
Since functional analysis is conceived as a kind of mechanistic explanation—elliptical
mechanistic explanation—it cannot be distinct from mechanistic explanation. Because
distinctness is a necessary condition for autonomy, the view also entails a rejection
of AUTONOMY. Beyond this, the view also embodies a positive account of the inter-
action between the explanatory frameworks of psychology and neuroscience. The
identification of sub-capacities in a functional analysis is argued to place very real and
direct constraints on which components can engage in those capacities. In particular,
the analysis generates, at a minimum, the expectation that for each identified sub-
capacity there will be a corresponding structure or set of structures that implements
the capacity. This is what is supposed to help to ensure that the proposed functional
decomposition goes beyond providing a merely possible partitioning of the system,
and succeeds in revealing its real causal structure. On this mechanistic view, the
explanatory projects of psychology and neuroscience coincide and are deeply inter-
twined because both provide complementary and mutually constraining descriptions
of different aspects of the same multi-level mechanisms. One describes function, the
other describes underlying structure.
In their contribution to the volume, Corey Maley and Gualtiero Piccinini aim to
provide a suitable foundation for functional ascriptions at the heart of the mechanistic
enterprise. Mechanistic explanations involve the identification of underlying compo-
nent parts and attributions of specific functions performed by those components. Yet
surprisingly little work has been done to investigate what underwrites these functional
ascriptions in a mechanistic context (for a notable exception, see Craver 2001, 2013).
Having an account of functions in hand would, for example, allow one to distinguish
cases that justify the ascription of particular functions from those that do not. Maley
and Piccinini contend that understanding how functions are ascribed to neural and
cognitive mechanisms and their parts is critical for a fully adequate account of multi-level
mechanistic explanation.
They reject standard etiological accounts of function, which face many well-known
criticisms including that the selective or evolutionary histories proposed to ground
functional attributions are often exceedingly difficult if not impossible to discover and
so routinely remain unknown. Relatedly, it is frequently objected that functions are
often plausibly attributed in the absence of historical information about a system. They
also reject causal role accounts, which successfully avoid the discovery problem by
grounding function in a system’s current causal powers, but nevertheless face a different
set of challenges. It is widely argued that causal accounts involve an overly permissive
concept of function, which makes it difficult to define a counterpart notion of
malfunction and, relatedly, to distinguish how things ought to work (their proper
functions) from how they in fact work. For these reasons, Maley and Piccinini instead
develop a teleological account of function according to which a function is a stable
contribution to a current objective goal of a biological organism (e.g., survival or
inclusive fitness). They maintain that a primary advantage of their
account is that, like standard causal accounts, functions are grounded in current causal
powers. However, unlike standard accounts, theirs is claimed to be more restrictive,
such that a distinction between function and malfunction can be drawn.
The mechanistic perspective thus appears to offer a number of promising routes to
achieving explanatory integration or unification of mind and brain science, while at
the same time, undermining the historically influential view of autonomous psycho-
logical explanation. But, like the philosophical views canvassed above, it too faces
obstacles. One primary objection is that in treating functional analyses in psychology
as elliptical mechanistic explanations to be filled in by neuroscience, the prospects for a
sufficiently weighty or substantive form of autonomy for higher-level psychological
explanation become rather bleak. A number of challenges along these lines are raised
in contributions to this volume.

6.  High-Level Causal Explanation


Recent philosophical work on mechanistic explanation is often interpreted as having
undesirable imperialistic tendencies. In his contribution, James Woodward argues
against the claim recently attributed to some proponents of mechanism that only
mechanistic models in neuroscience and psychology explain. In particular, he seeks to
combat the view that models which include more mechanistic detail will always be
explanatorily superior to those that include less detail. This more-details-the-better
view has been attributed to Kaplan and Craver (2011), among others. Woodward
instead maintains that many successful explanatory models across both neuroscience
and psychology often purposefully abstract away from all manner of lower-level
implementation (e.g., neurobiological or molecular) details in order to highlight just
those factors that make a difference to whether or not the target phenomenon occurs
(so-called difference makers). Woodward claims that such models can and often do
provide perfectly legitimate explanations, and that resources from the interventionist
account of causal explanation can illuminate their explanatory status.
According to the interventionist approach, explanatory models permit the answer-
ing of what Woodward (2003) calls what-if-things-had-been-different questions. They
identify conditions that, had they been otherwise, would “make a difference” to the
target phenomenon to be explained. This includes conditions under which the target
phenomenon would not have occurred, would have occurred at a different rate, etc.
This requirement is important because it implies that successful explanations will pick
out just those conditions or factors that are explanatorily or causally relevant to the
phenomenon to be explained (i.e., the difference makers). The notion of causal or
explanatory relevance (or difference making) is in turn cashed out in terms of inter-
ventions. Roughly, X causes (or is causally relevant to) Y just in case, given some set of
background circumstances, it is possible to change Y (or the probability distribution
of Y) by intervening on X. The notion of intervention is here understood in a technical
sense common in the philosophical and statistical literature (e.g., Spirtes et al. 2000;
Woodward  2003). The idea is that a causal relationship can be inferred between
X and Y when the intervention is “surgical,” i.e., when the intervention on X changes
the value of Y “directly” and does not involve changing the values of other possibly
confounding variables that could in turn change the value of Y (for further discussion,
see  Woodward  2003). Interventions are therefore naturally thought of as idealized
(perfectly controlled and non-confounded) versions of real experimental manipulations
routinely performed in the lab.
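
A toy simulation may help make this technical sense of intervention vivid. The sketch below (Python with NumPy) uses a three-variable linear structural model whose variable names and coefficients are invented for illustration and are not drawn from Woodward’s chapter; it contrasts the confounded observational association between X and Y with the effect recovered when X is set by a surgical intervention that severs its dependence on a common cause C.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(n=200_000, do_x=None):
        # Toy structural model: C -> X, C -> Y, and X -> Y (coefficient 0.5).
        # Passing do_x performs a "surgical" intervention: X is set directly,
        # severing its usual dependence on the confounder C, while the rest
        # of the model is left untouched.
        c = rng.normal(size=n)
        if do_x is None:
            x = 0.8 * c + rng.normal(size=n)
        else:
            x = np.full(n, float(do_x))
        y = 0.5 * x + 1.2 * c + rng.normal(size=n)
        return x, y

    # Observational association: inflated by the common cause C (about 1.1).
    x_obs, y_obs = simulate()
    print(np.cov(x_obs, y_obs)[0, 1] / np.var(x_obs))

    # Difference in Y under do(X=1) versus do(X=0): recovers the direct
    # causal contribution of X (about 0.5).
    _, y0 = simulate(do_x=0.0)
    _, y1 = simulate(do_x=1.0)
    print(y1.mean() - y0.mean())

The contrast between the two printed quantities is the point of the definition: what an idealized intervention isolates is precisely the difference-making contribution of X to Y, uncontaminated by the confound.
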
The interventionist approach stands to legitimize higher-level explanations in two
ways. First, it opens up the possibility that sometimes relatively abstract, higher-level
explanations can provide better explanations than more detailed, lower-level ones.
This is because the lower-level ones might include irrelevant or inessential details
whose variations or changes make no difference to the target phenomenon and thus
serve to obscure the difference-making factors. Second, this particular way of thinking
about causal relationships opens the door to higher-level causal explanations since
higher-level factors such as attentional load, memory capacity, or general psychological
state can in principle serve equally well as the targets of such interventions as lower-
level neurobiological or molecular factors. Hence, the interventionist framework holds
promise to illuminate the causal and explanatory relevance of high-level factors, and
in doing so legitimize high-level, relatively abstract explanations found throughout
psychology and neuroscience.2

2  Woodward (2008, 2010) explores similar questions in the contexts of psychiatry and biology.

In his chapter, Woodward focuses on relatively abstract neurobiological models
such as conductance models and even the Hodgkin-Huxley model of the action poten-
tial, whose explanatory credentials have been subject to intense debate in the recent
philosophical literature (e.g., Bogen 2005, 2008; Craver 2006, 2007, 2008; Kaplan 2011;
Levy 2014; Levy and Bechtel 2013; Schaffner 2008; Weber 2008). Woodward’s general
conclusion here is that the interventionist framework can be used to illuminate how
models in neurobiology and psychology that abstract away from certain lower-level
implementational details can nonetheless be explanatory. If successful, this secures a
kind of partial autonomy of higher-level explanations and models from lower-level
mechanistic details.
Woodward argues that higher-level psychological models need not be seen as auto-
matically competing with lower-level neurobiological models. Whether the higher- or
lower-level model is most appropriate, or provides a superior explanation, depends on
the phenomenon one is trying to explain. Sometimes lower-level details about neural
implementation will be causally relevant and so must be incorporated into the model if
it is to be explanatorily adequate. Other times such details will be irrelevant to (make
no difference for) the phenomenon of interest, and so can be safely ignored in one’s
model without affecting its explanatory power. Woodward finds that modeling
practices in psychology and neuroscience are often exquisitely sensitive to the goal of
trying to include just enough detail to account for what one is trying to explain but no
more. This message dovetails nicely with views commonly expressed by computational
modelers who are continually trying to find the appropriate balance of detail and
abstraction in their models so as to best account for the phenomenon of interest. For
example, the computational neuroscientist Trappenberg (2010) asserts that “[m]odels
are intended to simplify experimental data, and thereby to identify which details of the
biology are essential to explain particular aspects of a system” (2010, 6). He is triangu-
lating on the idea that simpler, relatively abstract models can often provide superior
explanations in so far as they include only the “essential” details. Woodward aims to
provide principled reasons for the neglect of lower-level mechanistic details when
attempting to build explanatory models in mind and brain science. In particular, he
argues that such details may be safely ignored precisely when they are causally and
explanatorily irrelevant—they make no difference—to the phenomena under investi-
gation. In these cases, higher-level explanations are not subject to constraint from facts
about these lower-level details. The explanatory autonomy of psychology, according to
this view, can be seen as stemming from the causal irrelevance of lower-level details
about neural implementation. Variation in neural details sometimes makes no differ-
ence for the phenomenon under consideration, and so those details can be abstracted away
from without explanatory repercussions.
In his contribution, Michael Strevens takes up similar themes. Like Woodward, he
too seeks to shed light on higher-level causal explanations in sciences like biology,
economics, and psychology, which seem to possess explanatory force despite the
fact that they abstract away from—place “black boxes” around—many of the lower-level
mechanistic or implementational details. Although Strevens recognizes the intuitive
pull of the idea that these models provide adequate explanations, he is cautious about
embracing it.
Strevens carefully considers the challenges posed by convergent evolution for
detail-oriented modeling approaches including the mechanistic approach. Because
convergent evolution generates functional kinds that are instantiated by radically dif-
ferent physical realizations, modeling the underlying mechanisms is supposed to be of
limited explanatory value. In such cases, more abstract or less detailed models appear
to provide better (e.g., more unifying) explanations than those bogged down in the
mechanistic details. Even worse, mechanistic explanation may seem entirely out of
reach in such cases. For example, while there may well be some interesting high-level
or abstract explanatory models or generalizations about wings, which are thought to
have evolved independently approximately forty times throughout history, a demand
that their explanation satisfy the strictures of the mechanistic approach may go entirely
unfulfilled since the mechanistic details vary considerably across these instances.
(There are important parallels between these issues and those discussed in the chapter
by Kaplan in this volume.)
Strevens recognizes the intuitive force behind this type of (multiple realizability-
based) argument for the autonomy and explanatory superiority of abstract, higher-
level explanations. He nonetheless maintains that sometimes models in which lower-level
details are omitted or black-boxed can mistakenly be deemed explanatorily adequate
and complete because of a subtle and unrecognized shift in the target phenomenon
to be explained. Specifically, Strevens argues that there is a tendency to conflate the
difference between explaining the common instantiation of the same functional kind
(e.g., wing) by several different (types of) entities versus the instantiation of the func-
tional kind by a single entity (e.g., the avian wing). According to Strevens, not only are
these fundamentally different explananda, but they also require different explanations
with varying amounts of mechanistic detail. Explanations of the former may be highly
abstract (suppressing or “black-boxing” many or most of the underlying mechanistic
details) in order to highlight the common factor (or set of factors) causally relevant
to the outcome across the different instances. But critically, the set of factors cited
in such explanations is argued to fall well short of exhausting the complete set of
factors relevant to any individual instantiation (e.g., the avian wing or the insect
wing), and so explanations of the latter will typically require considerably more
mechanistic detail.
Strevens suggests that some multiple realizability-based arguments for the explana-
tory autonomy of higher-level sciences including psychology similarly exploit this
slippage in order to conclude that abstract explanations are superior to detailed ones.
And while he agrees that models with more detail are not always better, he disagrees
that models with less detail are always better. Instead, Strevens, like Woodward, main-
tains that the appropriate level of detail depends sensitively on the phenomena one
wants to explain.
In his contribution to the volume, Dominic Murphy addresses the role of folk
psychology and its relation to the sciences of the mind and the brain. Is folk psychological
explanation sui generis and therefore distinct and autonomous from scientific
psychology and neuroscience? Or is it continuous with scientific approaches to the
mind and brain, and therefore a potential candidate for integration? Folk psychology
refers to the commonsense conceptual framework that all normally socialized humans
use to understand, predict, explain, and control the behavior of other humans and higher
non-human animals (Churchland 1998). Murphy identifies and explores three broad
perspectives on folk psychology—integration, autonomy, and elimination. According
to the integrationist perspective, folk psychology defines the phenomena that the cog-
nitive and brain sciences seek to investigate and explain, and thus plays a permanent
albeit limited role in scientific inquiry. According to the autonomist perspective, folk
psychology comprises a perfectly legitimate explanatory framework but one that is
fundamentally different in character and therefore incompatible or incommensurable
with the explanatory frameworks of cognitive science and neuroscience. Whereas the
explanatory framework of folk psychology operates at the level of people and their
sensations, beliefs, desires, and intentions (the personal level), the explanatory frame-
works of cognitive science and neuroscience operate at the sub-personal level of the
information-processing and brain mechanisms underlying these personal-level
activities. According to the autonomist, folk psychology comprises a fully autonomous
and self-contained domain of personal-level explanation that is neither confirmed
nor refuted by empirical evidence from mind and brain science. According to the
eliminativist perspective, folk psychology is a massively false theory that should be
replaced in favor of another more predictively and empirically adequate scientific
theory, presumably from neuroscience.
After dismissing the autonomist perspective, Murphy focuses on exposing the
limitations of the integrationist perspective. According to the integrationist, the job
description for folk psychology is to specify the explananda for scientific psychology
and cognitive neuroscience, and, critically, the integrationist holds that it has done
reasonably well at completing this job. Murphy rejects the latter claim and argues that,
since the taxonomic divisions of folk psychology have been laid down independently of
constraints from evidence about neural implementation, they fail to limn the contours of the mind.
Consequently, folk psychology cannot play the role integrationists envision for it.
Instead, the explananda for the cognitive and brain sciences have themselves been sub-
ject to rather heavy revision in the light of information about the workings and struc-
ture of the brain. Hence, Murphy argues, we are left in the position of endorsing
eliminativism as the only scientifically viable option. Nevertheless, Murphy embraces
a less radical form of eliminativism than many others because he thinks folk psychology
will be retained as a central part of the “manifest image” in light of its continuing
practical, heuristic, and social roles.

7. Conclusion
Understanding the multi-faceted relationship between neuroscience and psychological
science is vital to achieving progress across these scientific domains. Elucidating
the  nature of explanation in these sciences provides one highly fruitful avenue for
exploring these issues. Are the explanatory patterns employed across these domains
similar or different in kind? To what extent do they inform and constrain each other?
Or are they autonomous? Questions of this sort concerning explanation, and concerning how
the answers shape our thinking about the prospects for integrating mind and brain science,
occupy center stage in this volume.
On the one hand, the emergence of cognitive neuroscience suggests that the integra-
tion of mind and brain science is already largely upon us or is an inevitable future out-
come. Moreover, the growing dominance of the mechanistic approach to explanation
further reinforces a picture of unity and integration between explanatory frameworks.
And yet, on the other hand, there nevertheless appear to be strong reasons for think-
ing that a psychological science will, over the long term, retain some partial degree of
explanatory autonomy. Although a final resolution continues to elude us, the chapters
contained in this volume succeed in pushing this important debate forward.

References
Anderson, J. R. (1996). The architecture of cognition. Mahwah, NJ: Lawrence Erlbaum
Associates.
Barrett, D. (2014). Functional analysis and mechanistic explanation. Synthese, 191(12), 2695–714.
Bechtel, W. (2007). Reducing psychology while maintaining its autonomy via mechanistic
explanation. In M. Schouten and H. Looren de Jong (Eds.), The Matter of the Mind:
Philosophical Essays on Psychology, Neuroscience and Reduction. Oxford: Basil Blackwell,
pp. 172–98.
Bechtel, W. (2008). Mental mechanisms: Philosophical perspectives on cognitive neuroscience.
New York: Taylor and Francis.
Bechtel, W. and Mundale, J. (1999). Multiple realizability revisited: Linking cognitive and
neural states. Philosophy of Science, 66(2), 175–207.
Bechtel, W. and Richardson, R. C. (1993). Discovering complexity: Decomposition and localization
as scientific research strategies. Princeton, NJ: Princeton University Press.
Bechtel, W. and Richardson, R. C. (2010). Discovering complexity: Decomposition and localization
as strategies in scientific research. Cambridge, MA: MIT Press.
Bickle, J. (1998). Psychoneural reduction: The new wave. Cambridge, MA: MIT Press.
Bickle, J. (2003). Philosophy and neuroscience: A ruthlessly reductionist account. Dordrecht: Kluwer.
Bickle, J. (2006). Reducing mind to molecular pathways: Explicating the reductionism implicit
in current cellular and molecular neuroscience. Synthese, 151, 411–34.
Bogen, J. (2005). Regularities and causality: Generalizations and causal explanations. Studies in
History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and
Biomedical Sciences, 36(2), 397–420.
Bogen, J. (2008). The Hodgkin-Huxley equations and the concrete model: Comments on
Craver, Schaffner, and Weber. Philosophy of Science, 75(5), 1034–46.
Boone, W. and Piccinini, G. (2016). The cognitive neuroscience revolution. Synthese, 193(5),
1509–34.
Chirimuuta, M. (2014). Minimal models and canonical neural computations: The distinctness
of computational explanation in neuroscience. Synthese, 191(2), 127–53.
Churchland, P. M. (1989). A neurocomputational perspective: The nature of mind and the
structure of science. Cambridge, MA: MIT Press.
Churchland, P. M. (1998). Folk Psychology. In P. M. Churchland and P. S. Churchland (Eds.),
On the contrary: Critical essays, 1987-1997. Cambridge, MA: MIT Press, pp. 3–15.
Churchland, P. M. (2005). Functionalism at forty: A critical retrospective. Journal of Philosophy,
33–50.
Churchland, P. S. and Sejnowski, T. J. (1988). Perspectives on cognitive neuroscience. Science,
242(4879), 741–5.
Craver, C. F. (2001). Role functions, mechanisms, and hierarchy. Philosophy of Science, 53–74.
Craver, C. F. (2006). When mechanistic models explain. Synthese, 153(3), 355–76.
Craver, C. F. (2007). Explaining the brain: Mechanisms and the mosaic unity of neuroscience.
New York: Oxford University Press.
Craver, C. F. (2008). Physical law and mechanistic explanation in the Hodgkin and Huxley
model of the action potential. Philosophy of Science, 75(5), 1022–33.
Craver, C. F. (2013). Functions and mechanisms: A perspectivalist view. In P. Huneman (Ed.),
Functions: Selection and mechanisms. New York: Springer, pp. 133–58.
Craver, C. F. and Darden, L. (2013). In search of mechanisms: Discoveries across the life sciences.
Chicago: University of Chicago Press.
Cummins, R. C. (1975). Functional Analysis. Journal of Philosophy, 72, 741–64.
Cummins, R. (1983). The nature of psychological explanation. Cambridge, MA: MIT Press.
Cummins, R. (2000). How does it work? “versus” What are the laws? Two conceptions of
psychological explanation. In F. Keil and R.A. Wilson (Eds.), Explanation and Cognition.
Cambridge, MA: MIT Press, pp. 117–45.
Datteri, E. and Laudisa, F. (2014). Box-and-arrow explanations need not be more abstract
than neuroscientific mechanism descriptions. Frontiers in Psychology, 5, 464.
Egan, F. (1995). Computation and content. Philosophical Review, 104(2), 181–203.
Egan, F. (2010). Computational models: A modest role for content. Studies in History and
Philosophy of Science Part A, 41(3), 253–59.
Feyerabend, P. K. (1962). Explanation, reduction and empiricism. Minneapolis: University of
Minnesota Press.
Fodor, J. (1965). Explanations in psychology. In M. Black (Ed.), Philosophy in America. Ithaca:
Cornell University Press, pp. 161–79.
Fodor, J. (1968). Psychological explanation: An introduction to the philosophy of psychology.
New York: Random House.
Fodor, J. (1974). Special sciences (or: the disunity of science as a working hypothesis). Synthese,
28(2), 97–115.
Fodor, J. (1997). Special sciences: Still autonomous after all these years. Noûs, 31(s11),
149–63.
Gallistel, C. R. and King, A. P. (2009). Memory and the computational brain: Why cognitive
science will transform neuroscience. Chichester: Wiley-Blackwell.
Gazzaniga, M. S. (1989). Editor’s note. Journal of Cognitive Neuroscience, 1(1), 2.
Hempel, C. (1965). Aspects of scientific explanation and other essays in the philosophy of science.
New York: Free Press.
Hempel, C. G. and Oppenheim, P. (1948). Studies in the logic of explanation. Philosophy of
Science, 15(2), 135–75.
Johnson-Laird, P. N. (1983). Mental models: Towards a cognitive science of language, inference,
and consciousness. Cambridge, MA: Harvard University Press.
Kaplan, D. M. (2011). Explanation and description in computational neuroscience. Synthese,
183(3), 339–73.
Kaplan, D. M. and Craver, C. F. (2011). The explanatory force of dynamical and mathemat-
ical models in neuroscience: A mechanistic perspective. Philosophy of Science, 78(4),
601–27.
Lachman, R., Lachman, J. L., and Butterfield, E. C. (1979). Cognitive psychology and information
processing: An introduction. Hillsdale, NJ: Lawrence Erlbaum Associates.
Levy, A. (2014). What was Hodgkin and Huxley’s achievement? British Journal for the
Philosophy of Science, 65(3), 469–92.
Levy, A. and Bechtel, W. (2013). Abstraction and the organization of mechanisms. Philosophy
of Science, 80(2), 241–61.
Machamer, P., Darden, L., and Craver, C. F. (2000). Thinking about mechanisms. Philosophy of
Science, 67(1), 1–25.
Marr, D. (1982). Vision: A computational approach. San Francisco, CA: Freeman and Co.
Marr, D. and Hildreth, E. (1980). Theory of edge detection. Proceedings of the Royal Society of
London B: Biological Sciences, 207(1167), 187–217.
Marr, D., Ullman, S., and Poggio, T. (1979). Bandpass channels, zero-crossings, and early visual
information processing. Journal of the Optical Society of America, 69(6), 914–16.
Nagel, E. (1961). The structure of science: Problems in the logic of scientific explanation.
Indianapolis: Hackett Publishing.
Newell, A. and Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ:
Prentice-Hall.
Newell, A. and Simon, H. A. (1976). Computer science as empirical inquiry: Symbols and
search. Communications of the ACM, 19(3), 113–26.
Oppenheim, P. and Putnam, H. (1958). Unity of science as a working hypothesis. Minnesota
Studies in the Philosophy of Science, 2, 3–36.
Piccinini, G. (2007). Computing mechanisms. Philosophy of Science, 74(4), 501–26.
Piccinini, G. and Craver, C. (2011). Integrating psychology and neuroscience: Functional analyses
as mechanism sketches. Synthese, 183(3), 283–311.
Poggio, T. (2010). Afterword. In Vision: A computational investigation into the human represen-
tation and processing of visual information. Cambridge, MA: MIT Press.
Polger, T. (2004). Natural minds. Cambridge, MA: MIT Press.
Polger, T. W. (2009). Evaluating the evidence for multiple realization. Synthese, 167(3), 457–72.
Putnam, H. (1975). Philosophy and our mental life. In H. Putnam (Ed.), Mind, language and reality:
Philosophical papers volume 2. Cambridge, MA: Harvard University Press, pp. 291–303.
Pylyshyn, Z. W. (1984). Computation and cognition: Towards a foundation for cognitive science.
Cambridge, MA: MIT Press.
Richardson, R. C. (1979). Functionalism and reductionism. Philosophy of Science, 533–58.
Rosenberg, A. (2001). How is biological explanation possible? British Journal for the Philosophy
of Science, 52(4), 735–60.
Schaffner, K. F. (1967). Approaches to reduction. Philosophy of Science, 137–47.
Schaffner, K. F. (1969). The Watson-Crick model and reductionism. British Journal for the
Philosophy of Science, 20(4), 325–48.
Schaffner, K. F. (2008). Theories, models, and equations in biology: The heuristic search for
emergent simplifications in neurobiology. Philosophy of Science, 75(5), 1008–21.
Sejnowski, T. J., Koch, C., and Churchland, P. S. (1988). Computational neuroscience. Science,
241(4871), 1299–306.
Shapiro, L. A. (2000). Multiple realizations. Journal of Philosophy, 97(12), 635–54.
Simon, H. A. (1979). Information processing models of cognition. Annual Review of Psychology,
30(1), 363–96.
Sklar, L. (1967). Types of inter-theoretic reduction. British Journal for the Philosophy of Science,
18(2), 109–24.
Sober, E. (1999). The multiple realizability argument against reductionism. Philosophy of
Science, 66(4), 542–64.
Spirtes, P., Glymour, C., and Scheines, R. (2000). Causation, Prediction, and Search. 2nd ed.
Cambridge, MA: MIT Press.
Trappenberg, T. (2010). Fundamentals of computational neuroscience. Oxford: Oxford
University Press.
Von Eckardt, B. (1995). What is cognitive science? Cambridge, MA: MIT Press.
Weber, M. (2008). Causes without mechanisms: Experimental regularities, physical laws, and
neuroscientific explanation. Philosophy of Science, 75(5), 995–1007.
Wimsatt, W. C. (2007). Re-engineering philosophy for limited beings: Piecewise approximations
to reality. Cambridge, MA: Harvard University Press.
Woodward, J. (2003). Making things happen: A theory of causal explanation. Oxford: Oxford
University Press.
Woodward, J. (2008). Cause and explanation in psychiatry. In K.S. Kendler and J. Parnas (Eds.),
Philosophical Issues in Psychiatry: Explanation, Phenomenology, and Nosology. Baltimore,
MD: Johns Hopkins University Press, pp. 132–84.
Woodward, J. (2010). Causation in biology: Stability, specificity, and the choice of levels of
explanation. Biology and Philosophy, 25(3), 287–318.

2
Neuroscience, Psychology,
Reduction, and Functional Analysis
Martin Roth and Robert Cummins

This is a volume about the relation of psychology to neuroscience, and in particular,
about whether psychological states and processes can be reduced to neurological ones.
The pressure for reduction in science (and a consequent conflation of explanatory
adequacy and evidential adequacy) is an artifact of what we call the nomic conception
of science (NCS): the idea that the content of science is a collection of laws, and that
scientific explanation is subsumption under these laws. NCS, in effect, identifies
explanation with reduction: no reduction, no explanation.
Psychological states appear to be irreducible because they are widely thought to be
functional states: states defined by their role in a containing system. The reduction of
psychological states to their neural implementations therefore appears to be blocked
by the multiple realizability of functional states: door stops need not be made of rubber
to function as door stops, and psychological states need not be implemented in car-
bon-based neural tissue as opposed to, say, silicon-based circuitry, to qualify
as psychological states. If psychological states cannot be reduced to neural states,
if psychology is an autonomous science, then the mind appears to be sui generis, and
psychology appears disconnected from the rest of science.
But functional analysis, we claim, is ubiquitous in the sciences at every level. If
explanation by functional analysis undermines reduction, then reduction is under-
mined everywhere, in physics and chemistry as well as in economics, psychology,
and biology. In this respect, then, there is nothing special about the mind: function-
analytical explanations exhibit explanatory autonomy wherever they are found, and
they are found everywhere in science, engineering, and everyday life.

1.  The Nomic Conception of Science


We set the stage for our position by showing how the pressure for reduction in science,
and the conflation of explanatory adequacy and evidential adequacy, is an artifact of
what we call the nomic conception of science: the idea that the content of science is a
collection of laws, together with the deductive-nomological model of explanation.
NCS, in effect, identifies explanation with reduction, thus making no room for the
explanatory autonomy of function-analytical explanations.
NCS has two easily recognizable and internally related components:
(1) The content of a science is captured by its laws. The various sciences are indi-
viduated by their theories, and a theory is a collection of laws expressed in a
proprietary vocabulary. Thus, ultimately, the content of a science is captured by
a set of laws.1
(2) Scientific explanation is subsumption under law. According to the deductive-
nomological (D-N) account of explanation (Hempel and Oppenheim, 1948),
the explananda of science are events and laws. Explanation of an event is
accomplished by deducing its occurrence from laws and initial conditions.
Explanation of a law is accomplished by deducing it (or a close approximation)
from more basic laws and boundary conditions.
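
In the familiar Hempel–Oppenheim schema (a standard textbook rendering rather than anything quoted from this chapter), both cases share a single deductive shape:

\[
L_1, \ldots, L_n,\; C_1, \ldots, C_m \;\vdash\; E
\]

where the L_i are laws, the C_j are statements of initial or boundary conditions, and E is either the description of the event or the (approximation of the) less basic law to be explained.
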
From the perspective of NCS, the sciences form a reduction hierarchy to the extent
that the laws of one science can be deduced from the laws of another science.2 For
example, if the laws of chemistry can be deduced from the laws of physics, then chem-
istry reduces to physics, and physics is “below” chemistry in the hierarchy (Figure 2.1).3
Reduction is just D-N applied between sciences, and for those in the grip of NCS, the
goal of unifying science became the goal of providing between-science deductions
(Oppenheim and Putnam, 1958).

Figure 2.1  Reduction hierarchy: from top to bottom, Economics, Psychology, Biology, Chemistry, Physics, with reduction running downward toward physics.

A notable feature of D-N is that, given its deductive structure, the explanations it
generates are transitive. For example, if there are laws L1 and L2 such that L2 subsumes
L1, and if L1 subsumes event E, then L2 subsumes event E. If we accept the NCS account
of reduction sketched above, it follows that if there is some event that is subsumed by
the laws of psychology but is not subsumed by the laws of neuroscience, then psychology
does not reduce to neuroscience. When we put the matter this way, it should be
clear why the conventional wisdom is that functionalism in psychology blocks the
reduction of psychology to neuroscience (Kim, 1993; Fodor, 1998).4 According to
functionalism, psychology discloses nomic connections between functional states. If
functional states are multiply realized (e.g., if some instantiations of functional states
are not instantiations of biological states), then the laws of psychology subsume events
that are not subsumed by the laws of neuroscience, and so the reduction of psychology
to neuroscience fails. Similarly, if nomically distinct biological states can realize the
same psychological state then, once again, the laws of psychology will fail to reduce
to  the laws of neuroscience. Psychology, in this case, will be autonomous from
neuroscience.

1  This view has been with us since Newton, and the way many of the major debates in twentieth-century philosophy of science were framed—debates over scientific confirmation and explanation, for example—make little sense without it (e.g., Nagel, 1961; Carnap, 1966; Hempel, 1966).
2  Early formulations required bridge principles for the derivation (Nagel, 1949; 1961). Churchland (1979) provided a formulation that required only that the reducing theory generate a reasonable image of the reduced theory.
3  There are some evident difficulties: Where does geology go? Is astronomy part of physics? Is sociology on the same “level” as economics? Is developmental psychology more or less basic than cognitive psychology? What is the relationship between evolutionary biology and zoology (or any other part of biology)? We propose to leave these (embarrassing) questions aside for the time being. A fuller treatment of these issues can be found in Causey (1977) and Wimsatt (2006).
4  Of course, unlike Fodor, Kim challenges the conventional wisdom.

If we take seriously the idea that laws of psychology subsume events not subsumed
by laws of neuroscience, then there is something wrong with the hierarchy presented
in Figure 2.1. The problem isn’t merely that the hierarchy fails to be a reduction hierarchy,
for that hierarchy is committed to the idea that if the laws of a science subsume some
event, then laws of the sciences below also subsume that event. The aforementioned
considerations of multiple realization purport to show that laws of psychology do
not stand in this relation to laws of biology, however, so the argument from multiple
realization would require us to abandon the idea that biology is “below” psychology.
Nonetheless, accepting the argument from multiple realization does not require that
we abandon the spirit of the hierarchy. Reductions require type–type identities, but
functionalists can opt for the weaker claim that each token of a psychological state type
is identical to a token of some physical state type or other. This would preserve the idea
that any event subsumed by the laws of psychology is subsumed by some physical law
or other, so failure of reduction would be compatible with the claim that physics is
more basic than psychology (Fodor, 1974).
However, if the explanatory contribution of an unreduced, “higher-level” law is simply
the set of events it subsumes, it is not obvious that embracing token–token identities
(without type–type identities) helps the defender of autonomy. After all, if physical
laws subsume any event that unreduced laws of psychology subsume, it seems to follow
that physical laws can explain any event explained by unreduced laws of psychology.
But if that is so, then unreduced psychological laws are gratuitous. Thus the challenge
to the defender of autonomy is to show that non-reduced, higher-level laws come with
explanatory benefits not enjoyed by lower-level laws. Of course, defenders of auton-
omy do think that higher-level laws enjoy such benefits: the explanations provided by
higher-level laws are more general than the explanations provided by lower-level laws.5
Even if generality is not the only explanatory virtue, the defender of autonomy seems
to be on firm ground in claiming that it is a virtue reserved for explanations that invoke
higher-level laws.
Following Sober (1999), we can think of the choice between higher- and lower-level
explanations as involving a trade-off between breadth and depth. Here is how Sober
puts the point:
The goal of finding “relevant” laws cuts both ways. Macro-generalizations may be laws, but
there also may be laws that relate micro-realizations to each other, and laws that relate micro-
to macro- as well. Although “if P then Q” is more general than “if Ai then Bi,” the virtue of the
micro-generalization is that it provides more details. Science aims for depth as well as
breadth . . . Returning to Putnam’s example, let us imagine that we face two peg-plus-board sys-
tems of the type that he describes. If we opt for the macro-explanation of why, in each case, the
peg goes through one hole but not the other, we will have provided a unified explanation. We
will have explained similar effects by describing similar causes. However, if we choose a micro-
explanation, it is almost inevitable that we will describe the two systems as being physically
different, and thus our explanation will be disunified. We will have explained the similar effects
by tracing them back to different types of cause. Putnam uses the terms “general” and “invari-
ant” to extol the advantages of macro-explanation, but he might just as well have used the term
“unified” instead.  (pp. 550–1)6

Here we find an interesting twist: While multiple realization blocks the kind of unity
that between-science deductions were supposed to provide (reduction), it turns out
that higher-level laws provide a kind of unity—unity-as-generality—that depends on
multiple realization. On this view, the explanatory virtue of higher-level laws depends
on actual multiple realization, as opposed to merely possible multiple realization. If
the states picked out by higher-level laws are not multiply realized, then the generality
­provided by higher-level laws would be no greater than the generality provided by
lower-level laws. And because explanations given in terms of lower-level laws provide
depth (in Sober’s sense, i.e., provide micro-details), lower-level explanations would
always emerge as superior: equivalent breadth and more depth. This is why reductive
explanations are so appealing in the first place.

5  See Pylyshyn (1978) for an early expression of this idea.
6  Sober understands “unified explanation” in the sense of Kitcher (1989). According to Kitcher, “Science advances our understanding of nature by showing us how to derive descriptions of many phenomena, using the same pattern of derivation again and again, and in demonstrating this, it teaches us how to reduce the number of facts we have to accept as ultimate” (p. 423). Macro-explanations unify in that macro-explanations allow us to reduce the number of patterns used to explain phenomena.

While the above story about reduction, multiple realization, and autonomy may
be intuitively compelling, we think it is deeply flawed. As we will argue in Section 3, the
autonomy of functional explanation does not depend on actual multiple realization,
and while functional explanations may in fact exhibit the kind of generality that is
thought to apply to higher-level laws, the primary virtue of functional explanation
is not its generality. To set the stage for our argument, consider the following passage
from Putnam (1975), as well as Sober’s response to it. Putnam writes:
Even if it were not physically possible to realize human psychology in a creature made of
anything but the usual protoplasm, DNA, etc., it would still not be correct to say that psychological
states are identical with their physical realizations. For, as will be argued below, such an
identification has no explanatory value in psychology.  (p. 293)

Sober calls Putnam’s remark “curious” (1999, p. 549) and notes the following:
If we take Putnam’s remark seriously, we must conclude that he thinks that the virtue of
higher-level explanations does not reside in their greater generality. If a higher-level predicate
(P) has just one possible physical realization (A), then P and A apply to exactly the same
objects. Putnam presumably would say that citing A in an explanation provides extraneous
information, whereas citing P does not. It is unclear how this concept of explanatory relevance
might be explicated.  (p. 549)

Viewed through the lens of NCS, Putnam’s remark is curious. Because NCS must iden-
tify any explanatory bonus yielded by psychological laws with breadth, NCS weds the
autonomy of psychology to actual multiple realization. Yet Putnam appears to be
claiming that psychological explanations would be autonomous even if psychological
states were not actually multiply realized. We think Putnam is right, but it is impossible
to see why he is right from an NCS perspective. When we replace NCS with something
more descriptively accurate, however, we find that the kind of explanatory autonomy
Putnam was gesturing at is ubiquitous in the sciences. Moreover, the fact that the
autonomy in question is ubiquitous is important. The autonomy of psychology suggests
that there is something sui generis about the mind. An autonomy that lives in every
science, at all levels, raises no such specters.

2.  The Inadequacy of NCS


When pressed, it is hard to come up with the laws of economics, psychology, biology, or
chemistry as these are envisaged by NCS: a small set of principles whose applications
constitute the business of the field. There are some examples, of course: the law of
supply and demand, the law of effect, the Hardy-Weinberg law, Dalton’s law of multiple
proportions. But the positivist dream of an autonomous axiomatic presentation of
these sciences is pretty clearly a pipe dream. The would-be reductionist, therefore,
must assume that this is simply a reflection of the relatively undeveloped state of the
super-physical sciences, an assumption that effectively legislates how these sciences
should be structured without bothering with how they actually are structured.7 If this
legislation could be made to stick—if we were to refuse to count psychology as science,

or a “mature science,” until it had an expression as a set of laws (a regrettably widespread
tendency for decades)—NCS would be made true by fiat, and a major requirement of
the reductionist program would, in turn, be guaranteed by that fiat. There would be
considerable collateral damage as a result, of course: no actual science would count as
science. Since truth by fiat should not be attractive to those sympathetic to the scientific
enterprise, a revision of NCS would seem to be in order.

7  This is not to say that accounts of science should aim for descriptive adequacy only; the point, rather, is that we should be suspicious of accounts of science that rule out what is pre-theoretically taken to be obviously good science.

We can go some distance along this path by distinguishing laws and effects. In
science, when a law is thought of as an explanandum, it is called an “effect” (Cummins,
2000). Einstein received his Nobel Prize, not for his work on relativity, but for his
explanation of the photoelectric effect. In psychology, laws are almost always con-
ceived of, and generally called, effects, though they are sometimes called laws as well.
We have the Garcia effect (Garcia and Koelling, 1966), the spacing effect (Madigan,
1969), the McGurk effect (MacDonald and McGurk, 1978), as well as the Law of Effect
(Thorndike, 1905) and Emmert’s Law (Emmert, 1881). Each of these is a fairly well-
confirmed law or regularity (or set of them). But no one thinks that the McGurk effect
explains the data it subsumes; i.e., no one not already in the grip of the deductive-
nomological model would suppose that one could explain why someone hears the
consonant that a speaker’s mouth appears to make by appeal to the McGurk effect.
That just is the McGurk effect. To distinguish the sense of “effect” that applies to events
from the sense of “effect” that applies to laws or regularities, we will call the latter
“R-effects” (short for “regularity-effects”).8 Science sometimes focuses on explaining
individual events—e.g., the extinction of the dinosaurs is explained by appeal to the
global dust cloud that formed when a large meteorite struck Earth. But, more often,
and more fundamentally, it is focused on the explanation of R-effects. Contrary to the
deductive-nomological account of explanation, R-effects themselves do not explain
the states and events they subsume. They simply specify regularities. An R-effect is an
explanandum, not an explanans.9
Treating the “higher” or “special” sciences as collections of R-effects frees one from
the burden of finding a unified, subsumptive nomic structure for every candidate
science, and this makes room for the idea that it might be sensible to ask, e.g., whether
chemistry reduces to physics, since it underwrites the idea that chemical R-effects
might be subsumed under the laws of physics without assuming that chemistry can be
organized as a unifying set of laws. It leaves unsettled, however, the question of how
we are to think, for example, of the relationship of biological R-effects to chemical
R-effects. The unity of science, as usually conceived, presupposes the unity of the

8
  We omit the prefixes in contexts in which the sense is obvious, as in “photo-electric effect” and “the
accident was an effect of the blowout.”
9
  We agree that one can explain why Wilbur thought the sound he heard was the one Oscar’s mouth
appeared to be making by appeal to the McGurk effect, and that this is straightforward subsumption under
causal law. But this is explanatory only if you do not know about the McGurk effect. In that case, you now
understand that what you witnessed is not an anomaly, but what is to be expected.

individual sciences. The conception now under consideration requires only the unity
of (basic) physics. But this conception seems to vastly understate the organization and
internal complexity of the special sciences, while at the same time making the “principles”
of the science immediately below the reductive target look like poor candidates for a
reductive base, since those principles themselves appear as a mere bundle of R-effects
when viewed from below.
There are two conclusions we want to draw from all this. First, NCS is not descriptive
fact, but an ideal generated by an outmoded conception of scientific theories as the
deductive closure of a small set of sentences, and conceived explicitly to partner with
some version of the deductive-nomological model of explanation and its offspring.
Second, the hierarchy of the sciences is also not descriptive fact, but an ideal generated
by a combination of NCS and the corresponding flavor of reductive aspirations
engendered by the dream of unified science. We need a conception of the sciences that
does justice to their internal organization and to their “continuity” (or lack thereof)
with the rest of science as a whole. NCS and the reductionist hierarchy that goes with it
was an attempt to deliver these goods, but that conception is now widely recognized
as deeply flawed.
We need to abandon NCS. But how should we think about explanation, reduction,
autonomy, and hierarchy once we do so? The answer depends, to a large extent anyway,
on a proper understanding of functional analysis.

3.  Functional Analysis


When we identify something functionally—a mousetrap, a gene, a legislature—we
identify it in terms of what it does. Many biological terms have both a functional and
an anatomical sense: an artificial heart is a heart by function but is not an anatomical
heart, and cognitive neuroscience was conceived when “brain” became a functional
term as well as an anatomical one. Functional analysis is the attempt to explain
the properties of complex systems—especially their characteristic R-effects—by the
analysis of a systemic property into organized interaction among other simpler
systemic properties or properties of component subsystems (Cummins, 1975, 1983).
This explanation-by-analysis is functional analysis because it identifies analyzing
properties in terms of what they do or contribute, rather than in terms of their intrinsic
constitutions. For example, a circuit diagram describes or specifies a circuit in a way
that abstracts away from how the components, including the “wires,” are actually
made. The strategy of explaining R-effects—the properties of complex systems—by
functional analysis is ubiquitous in science and engineering, and by no means special
to psychology.10

10
  In addition to Einstein’s analysis of the photoelectric effect, we find Mendel’s analysis of inheritance
patterns in plants, Helmholtz’s analysis of Emmert’s Law, and Newell and Simon’s analysis of problem solving (to name just a few historical examples). In engineering, design specification is almost always functional analysis (e.g., circuit diagrams in electrical engineering).

From the point of view of functional analysis, functional properties are dispositional
properties, and the dispositional properties of a complex system are explained by
exhibiting their manifestations as the disciplined manifestation of dispositions that
are components of the target disposition, or by the disciplined interaction of the dispositions of the system’s component parts. The explanatory targets of this sort of analysis
typically are not points in state space (particular events) or trajectories through it
(particular sequences of events). Rather, the usual aim of this kind of analysis is to
appeal to a system’s design in order to explain why one finds the trajectories one does
and not others. The design provides a model of the state space and constrains the
possible paths through it, thereby explaining the system’s characteristic R-effects.
More generally, the explanandum of a functional analysis is a dispositional property,
and the strategy is to understand the dispositional properties of a complex system by
exhibiting the abstract functional design of that system—to show, in short, that a system
with a certain design is bound to have the (typically dispositional) property in question.
Designs can do this because functional terms pick out the causal powers that are relevant
to the capacity being analyzed. Functional terms are in this sense causal relevance
filters: by selecting from the myriad causal consequences of a system’s states, processes,
or mechanisms those that are relevant to the target R-effect, functional characterization
makes the contributions of those states, processes, or mechanisms transparent. It is
precisely this transparency that enables us to understand why anything that possesses
these states, processes, or mechanisms is bound to have the R-effect in question. Without
this filtering, we are simply left with a welter of noisy detail with no indication of what
is relevant and what is a mere by-product of this or that implementation.11
Causal relevance filtering is, therefore, just abstraction from the implementation
details that are irrelevant to the achievement of the targeted R-effect. Implementations
that differ in those details but retain the design will thus all exhibit the targeted R-effect.
In this way, the possibility of multiple realization is an inevitable consequence of causal
relevance filtering, and so it should come as no surprise to find that functional analyses
subsume causal paths that have heterogeneous implementations.
It would, however, be a mistake to wed the explanatory power of functional analysis
to assumptions about actual multiple realization, for even if there is only one nomo-
logically possible way to implement a design, giving implementation details that go
beyond what is specified by an analysis adds nothing to the explanation provided by
the design. For example, suppose there is just one nomologically possible way to imple-
ment a doorstop—say, by being a particular configuration of rubber. In this case, it
would be plausible to hold that being a doorstop—the type—is identical to being a
particular configuration of rubber—the type. Because type–type identities give you

11
  Roth and Cummins (2014). The claim here isn’t an evidential one; e.g., it isn’t a claim about how we
discover causal structure in a system. The latter is important to confirming functional analyses, but it is
important not to conflate confirmation with explanation. We come back to this point below.

property reductions, being a doorstop would thus reduce to being a particular configuration of rubber. But a functional analysis that specifies something as a doorstop would still be autonomous, in the following sense. Being a particular configuration of rubber comes with any number of causal powers. One of those powers is
stopping doors, and in the context of the imagined functional analysis, stopping doors
is the only causal power of this particular configuration of rubber that matters to
having the target R-effect. If we replace “doorstop” with “rubber configured thus and
so” in our analysis, we won’t lose anything as far as the causation goes. However, we
will lose the transparency functional analysis affords unless we specify explicitly that
stopping doors is the relevant causal power. But then the explanation is tantamount to the explanation given in terms of “doorstop”; i.e., the explanation does not give us
anything beyond what is provided by the functional analysis itself.
If we focus on the causal explanation of events and assume type–type identity, then
framing explanations in terms of “doorstop” is guaranteed to give you nothing beyond
what framing explanations in terms of “rubber configured thus and so” gives you, and
this is why it has been generally assumed that reduction is incompatible with auton-
omy. From the perspective of functional analysis, by contrast, autonomy can live with
reduction. Design explanations are autonomous in the sense that they do not require
“completion” by annexing implementation details; e.g., in the case imagined above, it is
irrelevant to explaining the target R-effect whether a specific doorstop is a particular
configuration of rubber. But design explanations are also autonomous in the sense that
adding implementation details would undermine the transparency provided by causal
relevance filtering and thereby obviate the understanding provided by the design.
A doorstop may be a particular configuration of rubber, but replacing “doorstop” with
“rubber configured thus and so” masks the information needed to understand why a
system has the target R-effect. We think this is the lesson of Putnam’s passage, but the
lesson is lost if we try to understand it as a lesson about causal explanations of events.

4.  Horizontal vs. Vertical Explanation


We are sympathetic to the thought that complete knowledge of implementation details
would contribute to a fuller understanding of those systems whose R-effects are tar-
geted by functional analysis. Indeed, such details are necessary for understanding how
a system manages to have the very causal powers that are picked out by functional
analysis. But having a fuller understanding of a system, in this sense, is not the same
thing as having a more complete explanation of the R-effects targeted for functional
analysis. For example, when we analyze the capacity to multiply numbers in terms of a
partial products algorithm, the specification of the algorithm tells us nothing about the
states, processes, or mechanisms of a system that implements the algorithm (except in
the trivial sense that the states, processes, or mechanisms of any system that imple-
ments the algorithm are sufficient for implementing it). However, as far as explaining
the capacity goes—what we might call the “multiplication effect”—the analysis provided
by the algorithm is complete, i.e., the analysis allows us to understand why any system
that has the capacity for computing the algorithm ipso facto exhibits the multiplication
effect. Generalizing, because details about how a design is implemented add nothing
to the analysis, such details are irrelevant to the explanation of an R-effect. If you claim
that the presence of a doorstop explains the fact that the door is open, you need to find
some doorstop or other interacting with the door. Having found this, however, the fact
that it is rubber rather than wood adds nothing to the explanation itself.
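Returning to the multiplication example, here is a minimal sketch (in Python, with illustrative names of our own choosing) of what a partial products analysis might look like. The analysis cites only sub-capacities—single-digit products, place-value shifts, and addition—and how they are organized; it says nothing about whether the system exercising them is made of neurons, relays, or silicon, which is the sense in which it is complete as an explanation of the multiplication effect.

```python
def partial_products(x, y):
    """Multiply two non-negative integers by forming partial products:
    the product of every pair of digits, weighted by place value, then summed.
    Single-digit products, shifting, and addition are the analyzing
    sub-capacities; their implementation is left entirely open."""
    xs = [int(d) for d in reversed(str(x))]   # digits of x, least significant first
    ys = [int(d) for d in reversed(str(y))]   # digits of y
    partials = [dx * dy * 10 ** (i + j)       # one partial product per digit pair
                for i, dx in enumerate(xs)
                for j, dy in enumerate(ys)]
    return sum(partials)                      # combine by addition

assert partial_products(37, 24) == 888        # exhibits the "multiplication effect"
```

Any system that computes in this way, whatever it is made of, is bound to exhibit the multiplication effect, which is all the horizontal explanation requires.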
The perspective we have outlined here suggests that we replace Sober’s distinction
between breadth and depth—a distinction that really only makes sense within NCS—
with a distinction between horizontal and vertical explanation. Horizontal explanations
explain R-effects by appeal to a design or functional analysis. They answer the question:
Why does S exhibit R-effect E? Vertical explanations specify implementations. They
answer the question: How is design D realized in S? Neither type of explanation is sub-
sumption under law. And neither is in the business of explaining individual events. The
explananda are, rather, R-effects (horizontal) and designs (vertical).12
We think the tendency to conflate explaining an R-effect via functional analysis with
explaining how a functional analysis is implemented has led to a misunderstanding con-
cerning the relationship between functional analysis and mechanistic explanation.
Following Bechtel and Abrahamsen (2006), a mechanism “is a structure performing a
function in virtue of its component parts, component operations, and their organiza-
tion. The orchestrated functioning of the mechanism is responsible for one or more
phenomena” (p. 162). As we see it, the goal of discovering and specifying mechanisms
is often or largely undertaken to explain how the analyzing capacities specified by
a  functional analysis are implemented in some system. In this way, the horizontal
explanations provided by functional analysis and the vertical explanations provided
by specifying mechanisms complement each other.
This view of the relationship is not without its challengers, however. For example,
Piccinini and Craver (2011) argue that functional analyses are “mechanism sketches”:
functional analyses and the design explanations they provide are “incomplete” until
filled out with implementation details, and in that way, the explanations provided by
functional analysis are not autonomous. However, we think their argument involves a
misidentification of the relevant explanatory targets of functional analysis—R-effects—
and a correlative conflation of explanation and confirmation. We’ll take these up in turn.
Piccinini and Craver write that, “Descriptions of mechanisms . . . can be more or less
complete. Incomplete models—with gaps, question-marks, filler-terms, or hand-
waving boxes and arrows—are mechanism sketches. Mechanism sketches are incom-
plete because they leave out crucial details about how the mechanism works” (p. 292).
Sketches being what they are, we have no quarrel with the claim that mechanism

12
  Note that vertical explanation tells us how a design is implemented in some system, not why a system
implements the design(s) it does. The latter belongs to what Tinbergen (1958) called ultimate explanation.
An ultimate explanation answers the question “Why does S have D?” by specifying a developmental, learning,
and/or evolutionary history.

sketches are incomplete, and insofar as mechanistic explanations explain by showing how a mechanism works, we agree that filling in the missing details of a mechanism
sketch can lead to a more complete mechanistic explanation. The crucial issue here,
however, is whether functional analyses should be viewed as mechanism sketches.
To motivate the claim that functional analyses are mechanism sketches, we have to
assume that abstraction from implementation detail inevitably leaves out something
crucial to the analytical explanation of a target R-effect, something that implementation
details would provide. But as we’ve already argued, the opposite is in fact true; adding
implementation details obfuscates the understanding provided by functional analysis.
Instead of favoring the autonomy of functional analysis, however, Piccinini and
Craver think that abstraction from implementation actually works against claims of
autonomy. They write:
Autonomist psychology—the search for functional analysis without direct constraints from
neural structures—usually goes hand in hand with the assumption that each psychological
capacity has a unique functional decomposition (which in turn may have multiple realizers).
But there is evidence that . . . several functional decompositions may all be correct across different
species, different members of the same species, and even different time-slices of an individual
organism. Yet the typical outcome of autonomist psychology is a single functional analysis
of a given capacity. Even assuming for the sake of the argument that autonomist psychology
stumbles on one among the correct functional analyses, autonomist psychology is bound to
miss the other functional analyses that are also correct. The way around this problem is to let
functional analysis be constrained by neural structures—that is, to abandon autonomist
­psychology in favor of integrating psychology and neuroscience.  (p. 285)

We think this argument clearly conflates explanatory autonomy with confirmational


autonomy. If a capacity admits of more than one analysis, merely providing an analysis
will, of course, leave open the question of whether the analysis provided correctly
describes how a system manages to have the capacity in question (assuming it does
have the capacity). Knowledge of neural structures is undoubtedly relevant to settling
the question of which analysis is correct, but bringing such knowledge to bear in this
instance would be an exercise in confirming a proposed analysis, not explaining a
capacity.13 Suppose there are two possible analyses, A and B, for some capacity C, and

13
  We take this to be a clarification of the view expressed in Cummins (1983). He writes: “Functional
analysis of a capacity C of a system S must eventually terminate in dispositions whose instantiations are
explicable via analysis of S. Failing this, we have no reason to suppose we have analyzed C as it is instanti-
ated in S. If S and/or its components do not have the analyzing capacities, then the analysis cannot help us
to explain the instantiation of C in S” (p. 31). Finding an implementation is a condition of adequacy of a
proposed functional analysis, but the condition is evidential: which functional analysis is instantiated in S?
No sane person would deny the importance of this question to psychology, nor would any sane (non-
dualist) person deny that neuroscience is relevant to answering this question. Complicating matters is the
fact that distinguishing explanation from confirmation is not enough; we also need to distinguish horizontal
and vertical explanations. A distinction along these lines is also at work in Cummins (1983): “Ultimately,
of course, a complete property theory for a dispositional property must exhibit the details of the target
property’s instantiation in the system (or system type) that has it. Analysis of the disposition (or any other
property) is only a first step; instantiation is the second” (p. 31).

the neurological data suggest that analysis A is implemented in system S. The explanation
of capacity C in S is provided by A, not by the neural structures evidence about which
confirms A.14
Arguably, nothing enjoys confirmational autonomy from anything else. Confirmation, as Fodor (1983) pointed out, appears to be isotropic and Quinian.15 As such,
neuroscience that makes well-confirmed psychological R-effects impossible or
unlikely needs revision as much as a design hypothesis in psychology that appears to
have no plausible neural implementation. We thus agree with Piccinini and Craver’s
claim that “psychologists ought to let knowledge of neural mechanisms constrain their
hypotheses just like neuroscientists ought to let knowledge of psychological functions
constrain theirs” (p. 285). This is not an argument against the explanatory autonomy
of psychology; it is rather a consequence.
Defending the autonomy of functional analysis is not the same thing as defending
the autonomy of psychology. Functional analysis is an explanatory strategy, not a
­scientific discipline, and when we are careful to distinguish horizontal and vertical
explanations, and distinguish confirmation and explanation, the autonomy of func-
tional analysis emerges as unproblematic.
It also emerges as ubiquitous, for the function-implementation relation iterates:
what is function at one level of abstraction is implementation when viewed from a
higher level of abstraction, and what is implementation at one level of abstraction is
function viewed from a lower level of abstraction (Lycan, 1987). Thus, a resistor is any-
thing in a circuit across which there is a drop in potential. This might be a semiconductor,
or a motor or a light bulb: each of these might implement the resistance function. But
we have just identified the implementations themselves in familiar functional terms,
and there are many ways to implement semiconductors, light bulbs, and motors. It is
thus no wonder that functional analysis is ubiquitous in the sciences, including
neuroscience.
We want to draw three conclusions from this:
(1) Because the function-implementation relation iterates, we can think of the
relation as generating (a potentially large number of) function-implementation
hierarchies. For example, with respect to any functional analysis, there may be
a number of branching paths “down” the hierarchy. So, too, there may be a number
of branching paths “up” the hierarchy. Imagine a tangled graph of function-
implementation hierarchies, crisscrossing in various ways.16
14
  It is of course possible that in some other system S*, or in S at different times, B is implemented, in
which case it would be B, not A, that explains C.
15
  By isotropic, Fodor means “that the facts relevant to the confirmation of a scientific hypothesis may be
drawn from anywhere in the field of previously established empirical (or, of course, demonstrative) truths.
Crudely: everything that the scientist knows is, in principle, relevant to determining what else he ought to
believe” (p. 105). By Quinian, Fodor means “that the degree of confirmation assigned to any given hypothesis
is sensitive to properties of the entire belief system; as it were, the shape of our whole science bears on the
epistemic status of each scientific hypothesis” (p. 107).
16
  If we add ultimate explanations (note 12) to this, it is clear you will never get explanation mapped on
two dimensions.

(2) We find autonomous explanations wherever we find a functional analysis of an R-effect, so we can expect to find autonomous explanations up and down
function-implementation hierarchies. But to repeat: such autonomy should
not be confused with confirmational autonomy. Nor should it be confused with
the claim that functional analyses cannot themselves be explained. They can,
and this is precisely what vertical explanations do. But this does not undermine
the autonomy of functional analysis, for the explananda of vertical explanations
are designs—not the R-effects designs explain.
(3) When we travel “up” or “down” function-implementation hierarchies, we do
not find principled borders between the sciences. Churchland and Sejnowski
(1990) provide a nice expression of this point:
. . . the idea that there is essentially one single implementational level is an oversimplification.
Depending on the fineness of grain, research techniques reveal structural organization at many
strata: the biochemical level; then the levels of the membrane, the single cell, and the circuit; and
perhaps yet other levels such as brain subsystems, brain systems, brain maps, and the whole ner-
vous system. But notice that at each structurally specified stratum we can raise the functional
question: What does it contribute to the wider, functional business of the brain?  (p. 369)

While this passage may suggest that the functional-implementation hierarchy described rests nicely within neuroscience, Churchland and Sejnowski go on to show
why this isn’t the case:
. . . without the constraints from psychology, ethology, and linguistics to specify more exactly
the parameters of the large-scale capacities of nervous systems, our conception of the functions
for which we need explanation will be so wooly and tangled as to effectively smother progress.
(p. 370)

Just so. When we look at any given research program that seeks to understand how a
natural system works, we find the work spreading out among disciplines like a slime
mold. Psychological and neurological R-effects exist, but these are not distinctions of
kind that have any serious significance for psychology and neuroscience. There is just a
lot of analysis to be done to explain effects, along with the attendant job of discovering
how designs are implemented. If we try to draw disciplinary lines on the function-
implementation graph—east–west or north–south—we are not going to get anything
like a straight line, and anything we do get will have little import for the business of
science as such. We have to consult guides that work in other buildings to travel far
either north–south or east–west in our quest for understanding. This is division of
labor and interest and method, not of subject matter per se.

5.  Conclusion
How should we view the relationship of psychology to neuroscience? In this chapter,
we have attempted to defuse this issue by putting it in a larger context. We should aban-
don this question, we argue, in favor of a host of questions about R-effects, designs, and
implementations. Psychology and neuroscience do not exist as they were envisioned by NCS. We need to leave behind the picture that underlies the old questions about
scientific reduction—Does biology reduce to chemistry? Does psychology reduce
to neuroscience?—for these questions presuppose that one can somehow arrive, in
principle, at an independent and self-contained formulation of chemistry or physics
or biology or psychology, and then ask how they might be related. But we cannot.
Instead, we should acknowledge that the unity of science is not the same as the unity
of theory. Unity of theory across the whole scientific landscape is unlikely in the
extreme: different problems require different representational resources and different
assumptions, and this makes an insistence on theoretical unification more of a hin-
drance than a help. The unity of science is rather a philosophical unity: a unity of
outlook and reliance on observation and experimentation and the kind of objectivity
embodied in the requirement of replication and observer independence. From this
perspective, explanatory autonomy, as opposed to confirmational isolation, should not
be regarded as “anti-physicalist.” Autonomous design explanations of psychological
R-effects do not open the door to spooky mind stuff or empirically unconstrained
theorizing any more than a multiply realizable circuit diagram explaining an electronic
R-effect (e.g., signal amplification) opens the door to nineteenth-century views of
electricity as a vital force.17

References
Bechtel, W., and Abrahamsen, A. (2006). “Phenomena and Mechanisms: Putting the Symbolic,
Connectionist, and Dynamical Systems Debate in Broader Perspective,” in R. Stainton (ed.),
Contemporary Debates in Cognitive Science. Blackwell: Malden, MA.
Carnap, R. (1966). Philosophical Foundations of Physics: An Introduction to the Philosophy of
Science. Basic Books: New York.
Causey, R. (1977). Unity of Science. D. Reidel Publishing: Dordrecht.
Churchland, P. (1979). Scientific Realism and the Plasticity of Mind. Cambridge University Press:
New York.
Churchland, P., and Sejnowski, T. (1990). “Neural Representation and Neural Computation.”
Philosophical Perspectives 4: 343–82.
Cummins, R. (1975). “Functional Analysis.” Journal of Philosophy 72 (20): 741–65.
Cummins, R. (1983). The Nature of Psychological Explanation. MIT Press: Cambridge, MA.
Cummins, R. (2000). “ ‘How Does It Work?’ versus ‘What Are the Laws?’: Two Conceptions of
Psychological Explanation,” in F. Keil and R. Wilson (eds), Explanation and Cognition. MIT
Press: Cambridge, MA.
Emmert, E. (1881). “Grössenverhältnisse der Nachbilder.” Klinische Monatsblätter für
Augenheilkunde 19: 443–50.
Fodor, J. (1974). “Special Sciences.” Synthese 28: 77–115.

17
  We wish to thank Denise Dellarosa Cummins and Ian Harmon for reading and commenting on
­several drafts of this chapter.

Fodor, J. (1983). The Modularity of Mind. MIT Press: Cambridge, MA.


Fodor, J. (1998). “Special Sciences: Still Autonomous after All These Years,” in In Critical
Condition. MIT Press: Cambridge, MA.
Garcia, J., and Koelling, R. (1966). “The Relation of Cue to Consequence in Avoidance Learning.”
Psychonomic Science 4: 123–4.
Hempel, C. (1966). Philosophy of Natural Science. Prentice Hall: Upper Saddle River, NJ.
Hempel, C., and Oppenheim, P. (1948). “Studies in the Logic of Explanation.” Philosophy of
Science 15: 135–75.
Kim, J. (1993). “Multiple Realization and the Metaphysics of Reduction,” in Supervenience and
Mind. Cambridge University Press: New York.
Kitcher, P. (1989). “Explanatory Unification and the Causal Structure of the World,” in P. Kitcher
and W. Salmon (eds), Scientific Explanation. University of Minnesota Press: Minneapolis.
Lycan, W. (1987). Consciousness. MIT Press: Cambridge, MA.
MacDonald, J., and McGurk, H. (1978). “Visual Influences on Speech Perception Processes.”
Perception and Psychophysics 24: 253–7.
Madigan, S. (1969). “Intraserial Repetition and Coding Processes in Free Recall.” Journal of
Verbal Learning and Verbal Behavior 8: 828–35.
Nagel, E. (1949). “The Meaning of Reduction in the Natural Sciences,” in R.C. Stauffer (ed.),
Science and Civilization. University of Wisconsin Press: Madison.
Nagel, E. (1961). The Structure of Science. Harcourt Brace and Co.: New York.
Oppenheim, P., and Putnam, H. (1958). “The Unity of Science as a Working Hypothesis,”
in  H.  Feigl et al. (eds), Minnesota Studies in the Philosophy of Science, Vol. 2. Minnesota
University Press: Minneapolis.
Piccinini, G., and Craver, C. (2011). “Integrating Psychology and Neuroscience: Functional
Analyses as Mechanism Sketches.” Synthese 183 (3): 283–311.
Putnam, H. (1975). “Philosophy and Our Mental Life,” in Mind, Language, and Reality, Vol. 2.
Cambridge University Press: New York.
Pylyshyn, Z. (1978). “Computational Models and Empirical Constraints.” Behavioral and Brain
Sciences 1 (1): 91–9.
Roth, M., and Cummins, R. (2014). “Two Tales of Functional Explanation.” Philosophical
Psychology 27 (6): 773–88.
Sober, E. (1999). “The Multiple Realizability Argument against Reductionism.” Philosophy of
Science 66 (4): 542–64.
Thorndike, E. (1905). The Elements of Psychology. A.G. Seiler: New York.
Tinbergen, N. (1958). The Curious Naturalists. University of Massachusetts Press: Amherst, MA.
Wimsatt, W. (2006). “Reductionism and Its Heuristics: Making Methodological Reductionism
Honest.” Synthese 151: 445–75.

3
The Explanatory Autonomy of Cognitive Models
Daniel A. Weiskopf

1.  Many Models, One World


The mind/brain, like any other complex system, can be modeled in a variety of ways.1
Some of these involve ignoring or abstracting from most of its structure: for the pur-
pose of understanding overall glucose metabolism in the body, we can neglect the
brain’s intricate internal organization and treat it simply as a suitably discretized
homogeneous mass having certain energy demands (Gaohua & Kumura, 2009). Other
projects demand more fine-grained modeling schemes, as when we are trying to map
cortical white-matter density and connectivity (Johansen-Berg & Rushworth, 2009),
or the distribution of various neurotransmitter receptor sites (Zilles & Amunts, 2009).
Here, the system’s detailed structural and dynamical properties matter, although not
necessarily the same ones in every context. A single system may admit of many possible
simplifying idealizations, and how we model a system—which of its components
and properties we choose to represent, and how much detail we incorporate into that
representation—is fundamentally a pragmatic choice.
When we have multiple models of a single target system, we face the problem of how
to integrate these models into one coherent picture. We wish to understand these
models not merely as singular glimpses, but as parts of a unified view of the system.
This problem arises in a variety of domains, from atmospheric and climate modeling
(Parker, 2006) to understanding the division of labor among social insects (Mitchell,
2002) to modeling the structure of the atomic nucleus (Morrison, 2011). Concerns
about integration arise whenever we are uncertain as to how two or more representa-
tions of the same system fit together in a way that gives us insight into the system’s real
organization. The problem is complicated by the fact that often models that are

1
  Here I am borrowing Chomsky’s (2000, p. 9) hybrid term ‘mind/brain’ to denote the brain considered
as a system instantiating both a complex neural and cognitive organization. It is meant to encompass
both of these aspects, the biological and the psychological (as well as any other relevant types of causal
organization).

individually well validated in terms of their ability to explain a range of phenomena will represent one and the same target system as having substantially different, or even
seemingly contradictory, properties.2
There are a number of available strategies for integrating models in ways that resolve
these tensions. We may be able to show one model to be an approximation to another,
such as when we sharpen a model of the simple pendulum by incorporating facts about
air resistance, friction, and the mass of the string. We may depict one model as an
embedded component or sub-model of the other. Or we may be able to show that
models apply to physically distinct aspects of the same system, so that they never actu-
ally represent the very same parts of the system in contradictory ways. This practice is
standard in fluid dynamics, which treats fluids as lacking viscosity in regions where
ideal fluid treatments are appropriate, and as having viscosity elsewhere, such as near
walls or other boundaries.
The construction and testing of psychological models has a long history in the
cognitive and behavioral sciences, and thanks to an impressive array of electrophysio-
logical and imaging technologies we can also construct sophisticated models of
the structure and dynamics of neural systems, both during the execution of tasks
and in their resting or ‘default’ state. This gives rise to what may be regarded as one
modern form of the mind-body problem: how are these two types of models related?
Specifically, how are they to be integrated to provide a complete understanding of
the mind/brain system as a whole? And what strategies are available if they cannot
be neatly integrated?
The problem of integrating psychological and neuroscientific models is especially
challenging, since these models are derived from different theoretical frameworks, use
distinct explanatory primitives, are responsible for different experimental phenomena,
and are tested and validated using different methods. Recently, some philosophers of
science (e.g., Piccinini & Craver, 2011) have claimed that the framework of mechanis-
tic explanation provides a solution to the problem of unification. They suggest that if
we view psychological models as mechanistic, they can be smoothly integrated with
the typical multilevel explanatory constructs of neuroscience. Here I wish to challenge
this extension of the mechanistic program. While mechanistic explanation is a distinctive and important strategy in a number of scientific domains, not every attempt to
capture the behavior of complex systems in terms of their causal structure should be
seen as mechanistic (Woodward, 2013).
Mechanistic explanations, I suggest, are one member of the class of causal explanations,
specifically the wider class of componential causal explanations (Clark, 1997, pp. 104–5).
Many psychological explanations also fall within this class, but they differ in important
respects from mechanistic explanations understood more narrowly. Despite these
differences, psychological explanations are capable of fitting or capturing real aspects
of the causal structure of the world, just as much as mechanistic explanations are.

2
  See, for example, Morrison’s (2011) discussion of inconsistent models of the atomic nucleus.

Thus a defense of the legitimacy of model-based psychological explanation is at the same time a defense of the reality of the cognitive structures that these models map onto.

2.  Cognitive Models


Psychology, like any other science, pursues multiple goals in parallel. Observational
and experimentally oriented studies often aim simply to produce and refine our
characterizations of new phenomena to be explained. In other frames of mind,
psychologists aim to generate theories and models that explain these phenomena. One
common explanatory strategy involves producing cognitive models. Generally, a cognitive
model takes a cognitive system as its intended target, and the structures that it contains
purport to characterize this system in a way that captures its cognitive functioning.
Such a model uses a proprietary set of modeling resources to explain some aspect of
the system’s functioning, whether normal or pathological.
These models typically describe systems in terms of the representations, processes
and operations, and resources that they employ. These psychological entities constitute
the basic explanatory toolkit of cognitive modeling. Representations include symbols
(perceptual, conceptual, and otherwise), images and icons, units and weights, state
vectors, and so on. Processes and operations are various ways of combining and
transforming these representations such as comparison, concatenation, and deletion.
Resources include parts of the architecture, including memory buffers, information
channels, attentional filters, and process schedulers, all of which govern how and when
processing can take place.3
These models can take a number of different forms, depending on the kind of
format that they are presented in:4

2.1  Verbal descriptions


Words may be sufficient to specify some particularly simple models, or to specify
models in terms of their rough qualitative features. As an example, take the levels
of  processing framework in memory modeling (Craik & Lockhart, 1972; Craik &
Tulving, 1975; Cermak & Craik, 1979). This model makes two assumptions: (1) that
novel stimuli are interpreted in terms of a fixed order of processing that operates over a
hierarchy of features, starting with their superficial perceptual characteristics and
leading to more conceptual or semantically elaborated characteristics; (2) that depth

3
  Some have argued that psychological processes should not be understood in representational terms
(van Gelder,  1995; Chemero,  2009). Models developed within a non-representational framework will
accordingly use a different toolkit of basic explanatory constructs. Cognitive models themselves are defined
in terms of their explanatory targets, not whether they use representational states as their primitives.
4
  For extensive discussion of further types of models and model-construction procedures, see Busemeyer
& Diederich (2009), Gray (2011), Lewandowsky & Farrell (2007), and Shiffrin (2010). A taxonomy similar
to the one proposed here occurs in Jacobs & Grainger (1994).

of processing, as defined in terms of movement through this fixed hierarchy, predicts degree of memory encoding, so that the more deeply and elaborately a stimulus is
processed, the more likely it is to be recalled later. Although these two assumptions
are schematic and require much more filling in, they outline a framework that is already
sufficient to guide experimentation and generate determinate predictions about recall
and recognition performance—they predict, for instance, that manipulating the con-
ditions of memory encoding so that only perceptual features are processed should
result in poorer recall.

2.2  Mathematical formalism


Mathematical equations and related formalisms, e.g., geometric and state-space
models, have a number of applications in modeling cognition. Dynamical systems
models provide one paradigmatic example. These models typically represent cognitive
states as points or regions in a low-dimensional state space and cognitive processes as
trajectories through that space. The governing equations determine the trajectory that
the system takes through the space under various parametric regimes.
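As a toy illustration of this kind of model (not any particular published model; the equation, parameter names, and values are invented for the example), the sketch below integrates a one-dimensional bistable system and records the trajectory the state traces through its space; changing the parameters changes which trajectories and attractors the system can exhibit.

```python
def trajectory(x0, a=1.0, inp=0.0, dt=0.01, steps=1000):
    """Integrate dx/dt = a*x - x**3 + inp with forward Euler steps.
    The state x is a point in a one-dimensional state space; the list
    returned is the trajectory traced under this parametric regime."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + dt * (a * x - x ** 3 + inp))
    return xs

# Two nearby starting points settle into different attractors (near +1 and -1),
# the sort of qualitative behavior such models are sometimes used to capture.
print(trajectory(0.1)[-1], trajectory(-0.1)[-1])
```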
Equations may also be used to specify the form cognitive processes take. For
instance, Amos Tversky’s (1977) Contrast Rule specifies that the similarity of two
objects belonging to different categories (a and b) is a weighted function of their com-
mon attributes minus their distinctive attributes:

sim(a, b) = αf(a ∩ b) − βf(a − b) − γf(b − a).

The form of the equation itself carries implications about category representation,
since it requires that categories a and b be associated with distinct sets of separable
features whose intersection and differences can be taken. It is also possible to interpret
the equation as specifying the causal process of computing similarities, in which three
distinct comparison operations are carried out and then subtracted to yield an overall
similarity evaluation. Support for this causal interpretation might be provided by
studies that vary the common and distinctive features possessed by two categories and
track the effects of these manipulations on ratings of overall similarity. One way to
support a causal interpretation of an equation is to use it to design manipulations that
have systematic effects such as these.
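A minimal computational rendering of the Contrast Rule may make the point vivid (the feature sets, weights, and salience measure below are invented purely for illustration): once categories are represented as sets of separable features, the rule is directly computable, and manipulating common versus distinctive features changes the output in just the way the causal interpretation predicts.

```python
def contrast_similarity(a, b, alpha=1.0, beta=0.5, gamma=0.5, f=len):
    """Tversky's Contrast Rule: a weighted combination of the measure f applied
    to the common features (a & b) and to the two sets of distinctive features."""
    return alpha * f(a & b) - beta * f(a - b) - gamma * f(b - a)

robin   = {"flies", "sings", "lays eggs", "small"}
sparrow = {"flies", "sings", "lays eggs", "small", "drab"}
penguin = {"swims", "lays eggs", "large"}

print(contrast_similarity(robin, sparrow))   # many common, few distinctive features
print(contrast_similarity(robin, penguin))   # few common, many distinctive features
```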

2.3  Diagrams and graphics


There are many varieties of graphical models, but the most common are so-called
‘boxological’ models. The main components of these models are boxes, which stand
for distinct functional elements, and arrows, which stand for relationships of control
or informational exchange. A cognitive architecture can be described at one level of
functional analysis by a directed graph made of such elements.
Boxological models are employed in many domains. In the early so-called ‘modal
model,’ human memory was represented as having three separate stores (sensory
buffers, short-term memory, and long-term memory), with a determinate order of processing and a set of associated control processes for orchestrating rehearsal
(Atkinson & Shiffrin, 1968). In later models, the construct of working memory takes
center stage; these models posit three different core components: the central executive,
visuospatial sketchpad, and phonological loop (Baddeley & Hitch, 1974). In more recent
iterations, they introduce more structures such as the episodic buffer and episodic
long-term memory. This organization is illustrated in Figure 3.1 (after Baddeley, 2000).
[Figure 3.1 appears here: a box-and-arrow diagram in which the central executive is connected to the visuospatial sketchpad, the episodic buffer, and the phonological loop, which are in turn linked to visual semantics, episodic LTM, and language.]
Figure 3.1  Baddeley’s model of working memory. Based on Baddeley (2000), 418.
With each successive development, new functional components are added and old ones
are divided into more finely specified subsystems.5 This pattern is familiar from other
domains such as the study of reading performance, which has centered on developing
models that differ in how they functionally decompose the system underlying normal
fluent reading (Coltheart, Curtis, Atkins, & Haller, 1993).
Diagrams often serve as the basis for hybrid models which make use of a host of rep-
resentational tools (visual, verbal, mathematical) to describe cognitive systems. In all
boxological models, cognitive subcomponents and their interactions are depicted as part
of a directed graph, and the simplest of these models depict only this much structure. The
functions of boxes, as well as the connections among them, may be specified by verbal

5
  This model in particular is discussed at greater length in Section 6.

labels or mathematical formulae. Importantly, these need not be regarded as black boxes
whose inner workings are opaque: greater detail about how exactly each box carries
out its function can be given by an associated description of the representations and
processes that the box uses in carrying out its internal operations, and each box may be
recursively decomposed into further subsystems. Boxological models offer numerous
open ‘slots’ where further refinements may naturally be incorporated. The process of
decomposition continues until there is no longer anything useful to say, cognitively
speaking, about the operations of these components.
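One way to make explicit what such a model commits us to is to record it as a bare directed graph. The sketch below is only illustrative (the data structure and helper functions are ours; the labels follow Figure 3.1, and the edge list is a simplification): it captures functional components and their lines of control or informational exchange, which is all the boxological model itself asserts—nothing about physical realization is represented.

```python
# Boxes are functional components; edges are relationships of control or
# informational exchange. The edge list is illustrative, not exhaustive.
working_memory_model = {
    "central executive":      ["visuospatial sketchpad", "episodic buffer", "phonological loop"],
    "visuospatial sketchpad": ["visual semantics"],
    "episodic buffer":        ["episodic LTM"],
    "phonological loop":      ["language"],
    "visual semantics":       [],
    "episodic LTM":           [],
    "language":               [],
}

def components(model):
    """The functional units the model posits."""
    return set(model)

def connections(model):
    """The directed links over which control or information flows."""
    return [(src, dst) for src, targets in model.items() for dst in targets]
```

Refining the model—subdividing a box or adding a new one—simply extends the graph, which is how the successive versions of the working memory model described above grew.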

2.4  Computational models or simulations


Computer simulations are often used to model cognitive processes because computer
programs offer a concrete way to extract determinate predictions from cognitive
­models, and because models embodied in computer programs can be directly com-
pared in terms of their ability to account for the same data (Winsberg, 2010). As
McClelland puts it: “The essential purpose of [computational] cognitive modeling is to
allow investigation of the implications of ideas, beyond the limits of human thinking.
Models allow the exploration of the implications of ideas that cannot be fully explored
by thought alone” (2009, p. 16). Typical examples of computer simulations of cognitive
processes include some large-scale cognitive architectures such as Soar (Newell, 1990)
and ACT-R (Anderson, Bothell, Byrne, Douglass, Lebiere, & Qin, 2004), as well as
neural network models (Rogers & McClelland, 2004).
While mathematical models are often used as the source for computational
models, the two belong to distinct types, since a set of mathematical equations can be
manipulated or solved using many different computer programs implemented on
many types of hardware architecture. In principle, however, any type of model can be
used to construct a computer simulation, as long as its operations are capable of being
described in a sufficiently precise way for us to write a program that executes them.
Embodying open-ended models in programs often forces us to make a number of
highly specific decisions about how the model functions, what values its parameters
take, and so on, whereas other models are designed and constructed from the very
start as programs.
It is important in considering computational simulations to distinguish features
of the program from the features of the model that lie behind the program. So, for
example, simulating object recognition with a computational routine written in LISP
does not in any way commit us to thinking of the visual system itself as computing
using such primitive functions as CAR, CDR, etc. These aspects of the programming
­language are not intended to be interpreted in terms of characteristics of the modeled
system. Which of these aspects are supposed to be projected onto the cognitive system
is a matter requiring careful interpretation.6 ACT-R assumes that psychological

6
  Thus see, for example, the discussion in Cooper & Shallice (1995) of the distinction between Soar as a
psychological theory and Soar as an implemented program.

operations consist of the application of production rules (which the program simulates),
but not that they involve the execution of lines of compiled C code, and neural network
models assume that cognition involves passing activation in parallel through a network
of simple units, despite the fact that this activity is almost always simulated on an
underlying serial computational architecture. Turning a model into a program is
something of an art, and not every aspect of the resulting program should be inter-
preted either as part of the model that inspired it or as part of the target system itself.
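To illustrate the distinction, here is a toy production-system cycle (this is not ACT-R’s actual architecture or notation; the rules and working-memory symbols are invented for the example). The model-level commitment is only that behavior arises from matching and firing condition–action rules over the contents of working memory; the Python machinery used to simulate this—loops, sets, function calls—belongs to the program, not to the cognitive model.

```python
# Each rule pairs a condition on working memory (a set of symbols) with an
# action that returns an updated working memory.
def add_with_carry_matches(wm):   return "goal:add" in wm and "carry" in wm
def add_with_carry_fire(wm):      return (wm - {"carry"}) | {"partial-sum"}

def add_finished_matches(wm):     return "goal:add" in wm and "carry" not in wm
def add_finished_fire(wm):        return wm | {"done"}

rules = [(add_with_carry_matches, add_with_carry_fire),
         (add_finished_matches, add_finished_fire)]

def cycle(wm):
    """One recognize-act cycle: fire the first rule whose condition matches."""
    for matches, fire in rules:
        if matches(wm):
            return fire(wm)
    return wm

wm = {"goal:add", "carry"}
wm = cycle(wm)   # the first rule fires, consuming the carry
wm = cycle(wm)   # the second rule fires, adding "done"
```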
This brief discussion serves to illustrate several points. First, cognitive models come
in many varieties, and any discussion of their strengths and weaknesses needs to be
sensitive to this diversity. Second, these models are selective simplifications: they
typically aim to capture the performance of some relatively restricted aspect or sub-
system of the total cognitive system, and to do so in terms of relatively few variables or
factors.7 Verbal descriptions, for instance, are obvious simplifications of cognitive pro-
cessing, and mathematical models often aim for compactness of expression rather
than capturing everything about a system’s performance. And third, these models
typically individuate their components in a way that is neutral with respect to the
underlying physical structure of the system that realizes them. Although the system’s
physical structure and the organization of cognitive models do constrain one another,
cognitive models themselves are physically non-committal. This last point will be
especially important in the forthcoming sections, which aim to distinguish cognitive
models from mechanistic models.

3.  The Mechanist’s Challenge


Cognitive modeling provides a rich set of resources for representing and explaining
the performance of cognitive systems. At the same time, some philosophers of science
have argued that the characteristic mode of explanation in the life sciences and the
sciences of complex systems more generally is mechanistic. Mechanistic explanations
take as their targets the capacities (functions and behaviors) of particular systems, and
they try to explain these capacities by breaking the system down into its component
parts, enumerating their activities, operations, and interactions, and describing how
they are organized (Bechtel, 2008; Bechtel & Abrahamsen, 2005; Craver, 2007; Glennan,
2002). A mechanistic model is one that represents a system in terms of this sort of
componential analysis. Such models, when they are accurate, display the system’s
mechanistic organization and thus make intelligible how the dynamic activities of
the components can produce the target phenomena (Kaplan & Craver, 2010).

7
  The main exceptions here are neural network models, which tend to be composed of hundreds or
thousands of independent units, and even more weights connecting them. Recent models contain as many
as 2.5 million units representing neurons, networks, or regions (Eliasmith et al., 2012). In light of this, net-
work models are distinguished by the fact that their performance tends to be impenetrable to casual
inspection, thus giving rise to a host of analytic techniques (e.g., cluster analysis) to uncover the salient
operations that explain their behavior.

It is indisputable that many successful explanations, particularly in neuroscience and biology, take the form of giving mechanistic models for systems. Canonical examples
from neurophysiology include our best understanding of how action potentials are
produced in neurons by the movement of ions across various transmembrane voltage-
gated channels, and the processes by which action potentials can induce neurotrans-
mitter release at synaptic junctions. Explaining these phenomena involves giving a
detailed account of the physical organization of the components of the cell membrane
and their functional profiles, the active intracellular elements that package neurotransmitters for release, the movements of various messenger molecules to key regions in
the synapse, and so on. When these components are spatiotemporally integrated in
just the right way and given the appropriate initiating stimulus, they will produce the
phenomena associated with neural spiking and transmitter release. Mechanisms
explain phenomena because they are the causes of those phenomena (or important
parts of their causes). These explanations rank among the greatest modeling successes
in cellular neurophysiology, and similar accounts can be given for other neural and
biological phenomena at a number of spatial and temporal scales (for historical back-
ground, see Shepherd, 2010).
Supposing that many neural systems can be modeled mechanistically, the question
arises: how are the cognitive models produced by psychologists related to the various
multilevel mechanistic models produced by the neurosciences? An answer tradition-
ally offered by functionalist philosophy of mind says that the domain of psychology is
at least partially autonomous from the underlying physical details of implementation
(Fodor, 1974). Autonomy can be understood in a number of ways, but in the present
context I intend it to cover both taxonomic and explanatory autonomy.
To say that psychology has taxonomic autonomy is to say that the range of entities,
states, and processes that psychology posits as part of its basic modeling toolkit, and
the kinds of structure that these models incorporate, are at most constrained only
loosely by the way other sciences may model the mind/brain, and in particular by the
details of physical implementation.8 What appears as an entity or process in a cognitive
model need not appear as such in any other model of the same system. Consequently,
the structure that cognitive models impose on the system may differ sharply from the
structure other models do. Hence cognitive models are allowed to carve up the world
in a way that aligns first and foremost with the ontological and theoretical commit-
ments of psychology.
To say that psychology has explanatory autonomy is to say that cognitive models
are sufficient by themselves to give adequate explanations of various psychological

8
  There is nothing unique about psychology in this; every field involves setting out its phenomena
and the set of concepts and entities it will use in explaining them. Here I follow Darden & Maull (1977) in
taking fields to be defined by packages consisting of core problems, phenomena related to those problems,
explanatory factors and goals related to solving such problems, specific techniques and methods for solving
them, a proprietary vocabulary, and various concepts, laws, and theories that may be brought to bear
on them.

phenomena. For example, in the domain of memory, there is a host of robust phenomena,
including interactions between encoding and retrieval conditions, the specificity with
which prior learning transfers to new tasks, rates of relearning, interference and serial
position effects in recall, and so on. Cognitive models of memory attempt to capture some
or all of these phenomena by positing underlying memory stores, types of representations
and encoding schemes, and control processes. Spelling these out in sufficient detail
describes the causal structure of the cognitive system and thereby explains how the
phenomena are produced by the interactions among representations, processes, and
resources. The autonomy thesis says that these phenomena can be given a wholly
adequate explanation in terms of some cognitive model. That isn’t to say that there
might not be other possible explanations of the phenomena as well—autonomy does
not imply uniqueness. It does imply that psychological modeling practices can stand
on their own, however, and are not incomplete in principle.
Neither taxonomic nor explanatory autonomy requires that there is a privileged
evidential base for the construction of cognitive models or theories. These models
may be confirmed or disconfirmed by appeal to potentially any piece of evidence
(introspective, behavioral, neurophysiological, clinical, etc.).9 And neither implies that
cognitive models cannot be integrated with other models to produce interlevel models.
Cognitive neuropsychology, for example, is a distinctive field that explicitly aims to
link psychological function with neural structure in just this way. Autonomy says
only that cognitive models are capable by themselves of meeting any standards of
taxonomic legitimacy and explanatory adequacy.
In a recent paper, Piccinini & Craver (2011; henceforth P&C) argue that integrating
psychology with neuroscience will involve denying at least explanatory autonomy, and
perhaps taxonomic autonomy as well. They argue for two related claims about the
relationship between psychological and neuroscientific explanation.
Common Type Claim:  Psychological and neuroscientific explanations belong to a
common type: both are mechanistic explanations. As P&C put it, “[f]unctional
analysis cannot be autonomous from mechanistic explanation because the former is
just an elliptical form of the latter” (p. 290). Consequently, these explanations take
the same general form, and are subject to the same explanatory norms.
Sketch Claim:  Explanations in terms of psychological mechanisms are sketches of
more completely filled in neuroscientific mechanistic explanations. And more
generally, “functional analyses are sketches of mechanisms, in which some structural
aspects of a mechanistic explanation are omitted” (P&C, p. 284). This claim presents a
picture of how models are integrated that is considerably stronger than the mere claim
that psychological explanations will ultimately (somehow) be cashed out in terms of
neuroscientific mechanisms, or that the psychological is realized by the neural.

9  For an argument that cognitive theories have not been, and perhaps cannot be, either supported
or undermined by neuroimaging data, see Coltheart (2006); for a response, see Roskies (2009).

The two claims are connected, insofar as the common type claim states that functional
explanation in psychology is mechanistic, and the sketch claim says that qua
mechanistic explanations they are incomplete. Summarizing their position, P&C say:
“Psychological explanations are not distinct from neuroscientific ones; each describes
aspects of the same multilevel mechanisms” (p. 288).
I will argue that both of these claims are false. The truth of the common type
claim turns on how we interpret the scope of mechanistic explanations. If they are
understood in a relatively conservative way, the claim fails, whereas liberalizing the
conception of mechanistic explanation empties it of any distinctive content. The sketch
claim is also false, since the relationship between psychological and neuroscientific
explanations is not, in general, one in which the neuroscientific explanations involve
filling in more ‘missing details’ or unpacking black boxes and filler terms present in
psychological models. Psychology is not simply delivering approximate or idealized
versions of neuroscientific explanations.

4.  Against Psychological Mechanisms


In their influential discussion of mechanisms, Bechtel & Richardson (1993) traced the
historical development of heuristics employed in the mechanistic analysis of complex
systems. Chief among these are the twin heuristics of decomposition and localization.
Decomposition is a form of functional analysis. It involves taking the overall function
of a system and breaking it down into various simpler subfunctions whose processes
and interactions jointly account for the overall system-level behavior. Localization
involves mapping the component functions produced by a candidate decomposition
onto relatively circumscribed component parts of the system and their activities. The
joint application of strategies of decomposition and localization is central to many
canonical examples of mechanistic explanation.
However, application of these strategies depends on the model in question being
one that makes determinate claims about the localization of components in the
first  place. Not all models that display componential organization need to do this.
Cognitive models, in particular, are not committed to any particular spatial organiza-
tion of their components.10 Verbally described processing models and mathematical
models are most obviously neutral on this point, but even diagrammatic models are
typically compatible with many possible spatial or geometric configurations of the
physical structures that realize their functional components. This is true not only when
they are viewed as systems-level decompositions, but even more so when we begin to
unpack the various processing stages that each subsystem implements. Even if a par-
ticular functional subsystem can be localized, it is highly unlikely that each distinct
inferential stage or representational transformation that it undergoes can be.

10  See Weiskopf (2011a, pp. 332–4) for further argument on this point.

This fact about cognitive models is often obscured by their similarity to mechanistic
models, particularly when both are presented in visual or diagrammatic form. Visual
representations of mechanisms often use space in order to represent space.11 In
these, “diagrams exhibit spatial relations and structural features of the entities in the
mechanism” (Machamer, Darden, & Craver, 2000, p. 8; see also Bechtel & Abrahamsen,
2005, p. 428). Thus in a cross-sectional diagram of the synaptic terminal of an axon, the
shape of the perimeter is roughly the shape of an idealized or ‘average’ terminal, the
placement of intracellular structures reflects their proximity, the width of the synaptic
gap is scaled to represent the relative distance between the neurons, etc. Size, scale, and
location also matter in other mechanistic models, such as those describing how voltage-
gated ion channels embedded in cell membranes open and close. Here the particular
spatial configuration of the molecular components of the channels is essential to their
correct operation and, importantly, this organization is reflected in their standard
depiction. The same points can be made about exploded view diagrams and the
zoomed-in side views used to display mereological relationships, such as how entities
of different sizes may be ‘nested’ within each other.
In diagrammatic cognitive models such as Baddeley’s working memory model
(see again Figure 3.1), spatial relations in the representation itself need not map onto
those in the target system. The length of arrows connecting boxes is irrelevant; all that
matters is their directional connectivity, weight, function, etc. Similarly, the boxes
themselves are represented by arbitrary shapes, whereas the particular shapes of entities
in mechanistic models matter a great deal to their function. The same indifference to
these characteristics is iterated at the levels of representations and processes as well;
notoriously, symbolic objects and their formal properties need not resemble neural
structures. Therefore, many of the structures posited in cognitive models lack the
characteristic properties of mechanistic entities, which “often must be appropriately
located, structured, and oriented” (Machamer et al., 2000, p. 3).12
Support for the possibility of models that display this sort of spatial neutrality goes
back to Herbert Simon’s pioneering work on complex systems (Simon, 1996). Simon’s
notion of a complex system can be understood in at least two different ways. One way
sees hierarchies mereologically, in terms of size and spatial containment relations, so
that a system is decomposed into subsystems that are literally physically parts of it.
Putting mereological hierarchies at the center of the notion of a complex system leads
naturally to the mechanist conception, since mechanistic levels themselves are partially
specified in these terms, and the spatial boundaries of a mechanism are drawn around
all and only the entities that account for its performance.

11  This claim strikes me as clearly true of the paradigmatic mechanistic models discussed in the
literature, certainly those that are drawn from cell biology and neurophysiology. Mechanisms may be described
in other ways, including non-diagrammatic ones, but as we have seen this is also true of cognitive models,
and it is the resemblance between these diagrammatic representations that encourages confusion between
the two.
12  Further, they later say: “Traditionally one identifies and individuates entities in terms of their properties
and spatiotemporal location. Activities, likewise, may be identified and individuated by their spatiotemporal
location” (p. 5). This repeated emphasis suggests strongly that spatial organization is central to the notion
of mechanism that is prevalent in these early conceptions.

An alternative, however, is to define hierarchies in terms of the interactional
strength of various components rather than their spatial relations (Haugeland, 1998).
Distinguishing social from physical and biological hierarchies, Simon writes: “we
propose to identify social hierarchies not by observing who lives close to whom but
by observing who interacts with whom” (1996, p. 187). On this view, the boundaries
of systems are determined by elements that are maximally coupled with one another,
i.e., capable of frequent reliable dynamical interactions involving the flow of infor-
mation and control. These are then assembled into larger elements and systems, which
are bound together by further interactional relations, all the way to the top level
of organization.
As Simon notes, spatial and interactional hierarchies are often related, but this
pairing is at best contingent. While strongly interacting elements may be spatially
contiguous, they need not be, and spatially proximate elements need not interact with
each other. And since strength of interaction or influence is purely functionally defined,
there is no requirement that an interactional hierarchy have any particular spatial
organization, although of course it must have one of some sort in order to generate its
stable functioning and the effective interactions that define it. What ties an interactional
hierarchy together is the existence and strength of the causal relations among a set of
elements, specifically the causal relations that support and explain the behavior of the
system that is of interest to us.
This point is clear from considering the variety of systems that can manifest these
hierarchies. Simon’s examples of hierarchical complex systems include subsystems of
the economy (those involving the production and consumption of goods), as well as
various social institutions (families, tribes, states, etc.). These plainly bear no necessary
spatial relationships to each other at all. Models in economics and finance offer numer-
ous other instances, as shown by Kuorikoski (2008).13 Consider the role of central
banks in the financial system. As social institutions, central banks have effects on
money markets, auctions, and regulative legislation; and in virtue of playing these roles
they can do such things as selling government securities to commercial banks, setting
the rate at which commercial banks can borrow, and adjusting the commercial banks’
ratio of reserves to loans. All of these interactions affect the overall operation of the
financial system, and as such can potentially be exploited by policymakers in designing
interventions. But understanding how central banks work does not involve asking
localization questions; indeed, for most cases involving component parts such as social
institutions or markets, it is questionable whether localization even makes sense in
principle—‘markets,’ after all, ceased to be exclusively physical spaces long ago.

13  The example which follows is taken and slightly simplified from Kuorikoski; however, while he
points out that the “parts” in an economy are either massively distributed or bear spatial relations to each
other that are essentially inscrutable, he still interprets this case as being mechanistic. The larger point
of his discussion, however, is that there are systems that appear mechanistic but which involve merely
capturing the abstract form of the causal interactions in a system. This dovetails with the moral of the
present section.

So there are many interactional systems that are highly resistant to functional
localization. Cognitive models represent such systems: they are defined in terms of the
functional coupling of their components, but are, considered in themselves, neutral on
issues of spatial organization and the shapes of components. Cognitive models do cap-
ture a certain kind of causal structure. But this causal structure is modeled in ways that
involve abstracting away many or most aspects of the physical, biological, and neural
architecture that support it. These models say nothing about how this causal structure
is implemented in actual underlying components and activities, and their explanatory
force does not turn on such details. Mechanistic models appeal to parts, their activ-
ities, and their structure to explain a system’s capacities. So if it is a requirement on
being mechanistic that a model be committal about spatial or structural facts, these
models will not qualify. The paradigmatic mechanistic models—the ones that fix our
understanding of this otherwise generic metaphysical notion—are those that them-
selves display the relevant spatial and temporal organization of more or less localized
entities.14
It is in this sense that cognitive models are taxonomically autonomous: the functional
divisions they impose on a system may be only loosely related to its underlying
physical organization. An example discussed by P&C is the implementation of the
functional distinction between beliefs and desires (p. 303). There are many possible
ways to ensure that these states have distinct functional roles: one appeals to separate
memory stores and processes, while another allows them to co-mingle in a single store but
gives them different ‘attitude tags’ that assign them their typical functional roles. These
are distinct realization possibilities; however (contra what P&C suggest) the example
seems to illustrate the extremely loose relationship between a functional classification
and its implementation, rather than any form of direct constraint.
Mechanists have responded to the possibility of non-localized complex systems by
broadening their notion of a ‘part.’ Structural components, say P&C, are not necessarily
spatially localizable, single-function, or “stable and unchanging”: “a structural component
might be so distributed and diffuse as to defy tidy structural description, though it no
doubt has one if we had the time, knowledge, and patience to formulate it” (p. 291).
This strategy carries risks, however, since it appears to verge on giving up not just
localization, but any requirement that parts be describable in a way that our modeling
techniques can capture.15 And this, in turn, seems to strip the mechanistic program of
any substantial commitment concerning the distinctive ontology of mechanisms.
Remember, as initially presented the strategy was never intended to apply to all complex
physical systems: there were clear exit points where the heuristics of decomposition and
localization broke down. If we give up localization it is no longer clear whether there
is any such thing as a complex physical system that is not subsumable under the
mechanistic program.16

14  I refer here to the paradigm cases because, as Bechtel and Richardson (1993) point out, there is a range
of cases that gradually loosen these assumptions about spatial and temporal organization.
15  This acceptance of parts that are so non-localized as to elude structural description also sits poorly
with the claim that we should aim for ideally complete models that capture maximal amounts of causally
relevant detail. If these parts cannot be captured by the descriptive resources we have available, these
explanations seem inaccessible to us. So broadening the notion of a mechanism may have costs in terms of
our ability to satisfy mechanistic explanatory norms themselves.

5.  Against Sketches


Even if psychological models are mechanistic in form, it doesn’t follow that they must
be mechanism sketches. For ease of exposition in what follows, I will sometimes con-
cessively talk as if cognitive models are mechanistic models. The real question is why
they can’t be fully adequate explanatory models, not in need of further filling-in using
the various modeling tools of neuroscience.
Mechanistic models are classified according to whether they are sketches, schemata,
or ideally complete models (Craver, 2006, 2007; Machamer et al., 2000). The distinction
is a measure of the representational accuracy of the model: a “sketch is an abstraction
for which bottom out entities and activities cannot (yet) be supplied or which contains
gaps in its stages. The productive continuity from one stage to the next has missing
pieces, black boxes, which we do not yet know how to fill in” (Machamer et al., 2000).
This is not a simple continuum, however, since the notion of accuracy includes separate
uncorrelated factors such as the degree to which a model abstracts away from particular
details or makes use of generic ‘filler’ concepts, the degree to which it includes false
components, the significance of these omissions or inclusions, and so on (Gervais &
Weber, 2013; Weiskopf, 2011a, pp. 316–17). To move from a sketch towards an ideally
complete model is to progressively remove these various omissions, generalities, and
inaccuracies, on the assumption that this will result in greater predictive or explanatory
power, or improved skill at intervening in the system.
In light of this definition, the claim that cognitive models invariably and as a class
are mere sketches is suspicious on its face. It amounts to saying that no cognitive model
can be ideally complete and accurate with respect to how it represents a system’s
psychological structures and properties. We can admit that most, perhaps all, of our
current best cognitive models are sketchy. This is especially true of verbally formulated
models, which often give only the rough qualitative contours of the processes they
represent.

16  A similar point is made in more sweeping fashion by Laura Franklin-Hall (ms.). She argues that
mechanists have not said in a systematic and principled way what sorts of causal relations mechanisms
contain or what counts as a genuine part of a mechanism. Without some way of fleshing out these abstract
ontological categories, the notion of a mechanistic explanation remains in substantial respects a promissory
note. John Campbell (2008) lodges a similar complaint, noting that the term ‘mechanism’ has had little
specific content outside of particular historical periods and disciplines, and that treating the search for
mechanisms as a general goal of scientific inquiry is misguided: “You can, of course, evacuate content from
the notion of ‘mechanism’ and say that although there was not the kind of mechanism they expected, there
was nonetheless some other kind of mechanism at work. And of course there is no point in disputing that,
since the claim lacks any definite meaning” (p. 430).

Mathematical models may sometimes offer precise predictions and ways of
tracking complex relationships among psychological variables, but they are often silent
on systems-level facts about cognitive architecture. Diagrammatic models themselves
omit many details. A box-and-arrow decomposition of a system that gives us a rough
assignment of functions to subsystems may give us no clue about the detailed inner
organization of these boxes: what representational formats they use, what information
they encode, what control processes there are, and so on. And even where we have a
detailed model of a certain subsystem, we often have no notion of how to embed it into
a network of other systems.
So a certain degree of sketchiness is the de facto norm in psychology. Part of this is
due to our ignorance of the correct structure of the cognitive system itself, but part
is due to ordinary idealizations common to all modeling (Weisberg, 2007). The explana-
tory context rarely requires us to put all of these details into our models at once. The
question is whether remedying this sketchiness requires stepping out of the explana-
tory framework of psychology. The argument against autonomy must establish that doing
so is necessary: if psychological explanations can meet the appropriate explanatory
norms on their own, this undermines the claim that they are mere sketches.
We first need to separate two ways in which mechanistic explanations can be
­elaborated on or improved:
Intralevel elaboration:  this involves staying at the same level of the mechanistic
hierarchy, but making a model more detailed and precise, adding relevant compo-
nents and activities, articulating their relations and structure, and so on.
Interlevel elaboration:  this involves descending a level in the mechanistic hierarchy
in order to explain the behavior of the various entities and operations in the system
by appeal to a further set of components and activities.
Each of these is a different way of elaborating on a simple mechanistic model, but
neither undermines the autonomy of cognitive modeling.
Intralevel elaboration requires getting rid of whatever filler terms, black boxes, fictions,
unspecified entities, and generic processes that the initial model incorporated and
replacing them with explicit specifications of the system’s elements. The end result will
be a model that is de-idealized, maximally specific, and wholly veridical. For instance,
a black box might be filled in by giving a description of the precise computation that it
carries out, or the stages of processing involved in its operation; or an abstract arrow
connecting two boxes might be elaborated on by saying something about the information
it carries and in what format.
Psychologists frequently try to do this, aiming not only to distinguish functional
subsystems but also to describe the information they make use of, the nature of their
internal databases and operations, and the formal character of the representations
they manipulate. To claim that these attempts will always stall out at the stage of a
mechanism sketch is effectively to say that cognitive modeling techniques are inherently
inadequate to capture the psychological properties and states of the target system (i.e.,
they cannot satisfy the ideal of representational accuracy).17 This entails, for instance,
that there can be no cognitive model of working memory or object recognition that is
ideally complete and that captures all of the relevant phenomena. But there is no rea-
son to believe this strong claim. An ideally complete cognitive model will still be one
that is couched in the autonomous theoretical vocabulary of psychology.
Interlevel elaboration, by contrast, involves descent to a further level of mechanistic
analysis of a system.18 So we might invoke ribosomes as the site of neuropeptide syn-
thesis in one explanation, perhaps requiring only the information about their origin
within the cell so that we can account for how they are transported to the synapse. For
this purpose we may ignore precisely how they carry this process out, though we can if
we wish change the context and descend to a lower level by treating the ribosome itself
as a new target system and attempting to explain its operations. This might be relevant
if we were trying to account for the rate at which neurons can regenerate depleted
neuropeptides.
Interlevel elaboration is driven by a new set of explanatory demands: a novel set of
phenomena (those associated with the activities of the system’s components) demand
explanations of their own, and so the mechanistic hierarchy gives rise to an associated
“cascade of explanations” (Bechtel & Abrahamsen, 2005, p. 426). Interlevel moves may
also involve shifting from the taxonomy and explanatory toolkit of one field to another,
since shifts to lower levels may involve moving to spatiotemporal scales where different
principles become dominant.
There are explanatory insights to be gained from this sort of descent. However, to
adequately explain a system’s behavior we rarely need to continue this recursive des-
cent through the hierarchy. A psychological capacity may be explained by appeal to a
cognitive model that captures some of the relevant internal causal structure, as
Baddeley’s phonological loop accounts for word-length effects, phonological similar-
ity effects, articulatory suppression, and so on (see Section 6 for more details). What
does not follow is that an explanation of a psychological capacity by appeal to a cognitive
model also requires that we have a further set of lower-level explanations for how all of
the elements of the model are implemented.

17  Bear in mind, again, that cognitive modeling may use information from any kind of study, including
lesion, imaging, and electrophysiological studies, in confirming psychological hypotheses. Whether
a model is accurate or not says nothing about the kind of evidence used to support it. The claim is just
that the resulting cognitive models themselves can never capture the properties of the system with
full accuracy.
18  Following the standard view among mechanistic philosophers, we do not need to assume any ordering
of nature, entities, properties, or disciplines into anything like absolute levels here. All that is needed is a
relative conception. Once we fix a particular analysis of a system, a lower level is defined by the fact that it
invokes a decomposition of some component or operation of the system as initially analyzed.

Explaining one thing in terms of another does not in general require recursive
explanatory pursuit. This is clear when it comes to etiological causal explanations of
events: in saying why a window broke it is often sufficient to cite the proximal cause,
namely its being struck by a stone. It is unnecessary to trace every causal factor that was
involved all the way back to the Big Bang. Similarly, componential causal explanations
of this kind typically terminate at levels far above the fundamental.19 If this were not so,
explanations in all non-fundamental sciences, including neuroscience itself, would be
just as sketchy and incomplete as those in psychology. Neurobiological systems are
composed of a staggering array of nested mechanisms. In explaining a particular phe-
nomenon we ignore most of these, however, descending only low enough to uncover
the immediate structures that causally explain things at the grain of detail required. To
insist that these are mere sketches insofar as they fail to capture the most fundamental
mechanistic dependencies of the system is to place the bar for a fully adequate model
far beyond our reach.20
Descent down the mechanistic hierarchy, then, is constrained by the fact that
explanations stop at the boundaries delimited by the interests and vocabulary of
particular fields of inquiry, which contain a set of ‘bottom-out’ activities that they take
as explanatory primitives. At this point, “explanation comes to an end, and description
of lower-level mechanisms would be irrelevant” (Machamer et al., 2000, p. 13). This
anti-fundamentalist attitude is part of what distinguishes the program of multilevel
integration from a classical reductive perspective committed to pursuing explanation
in terms of ultimate or fundamental structures.
Mechanists might seek a middle ground position here, saying that while we should
avoid fundamentalism, we should equally avoid stopping at the level of cognitive
models. P&C suggest that “the search for mechanistic details is crucial to the process
of sorting correct from incorrect functional explanations” (2011, p. 306). As noted
earlier, however, the evidence for a cognitive model may come from anywhere, includ-
ing from neuroscience. That does not compromise its explanatory autonomy. How we
confirm an explanation is one thing, whether it is autonomous is another. Further, they
say: “To accept as an explanation something that need not correspond with how the
system is in fact implemented at lower levels is to accept that the explanations simply
end at that point” (2011, p. 307). But autonomous explanations should not be confused
with ultimate explanations. Psychological models can be sufficient for capturing the
target phenomena without being themselves inexplicable. The point is merely that
explaining how these models are implemented is a separate task from explaining how
to capture the original phenomena in cognitive terms.

19  Salmon (1984) distinguishes between etiological and constitutive causal explanations of phenomena.
Etiological explanations account for a phenomenon’s existence and properties in terms of the ‘causal story’
leading up to its occurrence. Such explanations are historical. Constitutive explanations “account for a given
phenomenon by providing a causal analysis of the phenomenon itself” (p. 297); his example is explaining
the pressure a gas exerts on its container in terms of the momentum exchanged by its component
molecules and the walls. Salmon’s example of constitutive explanation has two possible readings. On one,
the phenomenon of a gas having certain pressure is identified with a certain pattern of momentum exchange
by its molecules. On the other, a phenomenon displayed by a system is causally explained by the behaviors
of its components. My term “componential” is meant to have the second reading.
20  Many aggregative idealizations fare far better as explanations than would anything pitched at a lower
level or a finer grain of detail. The premise of continuum mechanics is that these idealized representations
can be explanatorily effective over a broad domain, precisely because large collections of individual atoms
or molecules effectively have the causal powers of continuous substances under the appropriate conditions.
This treatment not only captures their causal organization, it does so more efficiently than would a finer-grained
representation of the system.

So neither of these two ways in which models can be sharpened suggests any
principled limitation on how accurate cognitive models can be. They may be enriched
so as to better capture the psychological capacities that are their target phenomena, or
they may be integrated with lower-level implementation details. This may provide
information about how the psychological and neurobiological aspects of the mind/
brain fit together, and this in turn may improve our overall understanding of the
system. Seeing how these models fit together and achieving multilevel integration is a
genuine epistemic achievement, but we should not take the fact that we can increase
our understanding of the total system through this kind of integration to show anything
inherently defective or incomplete in the original cognitive model itself.

6.  Autonomy and Realism


Stepping back, the larger question posed by the autonomy of cognitive modeling has to
do with whether these models are giving us insight into real structures and processes
happening in the mind/brain. P&C’s argument poses a dilemma for cognitive modelers:
either psychological explanations are mechanistic, or they aren’t. If they aren’t, then
the states, entities, and processes in psychology do not map onto real components of
the mind/brain. In this case, cognitive models cannot be offering causal explanations
at all, since the only way to achieve a real causal explanation is to pick out the underlying
constituents of a mechanism and track how their interactions produce the phe-
nomena. If they are, on the other hand, they can only be regarded as mechanism
sketches: incomplete or partial accounts of the underlying organization of the system.
Integration would then take the form of fleshing out this partial sketch of the brain’s
mechanisms given by psychology, with the aim of producing a fuller and more
adequate model as we descend down the mechanistic hierarchy to the neural level.
Consider the first possibility. So far I have been arguing that cognitive models are
often non-mechanistic in form. Despite this, they still have explanatory force. Their
status as explanations derives from the fact that they are able to capture facts about the
causal structure of a system. The cognitive states, processes, resources, and other com-
ponents that they represent are capable of interacting to produce the psychological
phenomena that lie within the domain of the model. These facts about causal structure
may be verbally described, captured in sets of equations, or schematized in diagrams.
The causal patterns themselves are what is important, not the mode in which they
are represented.
Explanations of how a system possesses and exercises a certain capacity typically
make reference to the presence of some organized structures and processes that
­coordinate in the right way to produce the phenomena that are characteristic of the
target capacity. Sometimes these patterns conform neatly to the stereotypical examples
of mechanisms in neuroscience, biology, and certain branches of engineering. On the
other hand, sometimes they don’t, as in the case of many of the cognitive models
described here. Both kinds of models, mechanistic and non-mechanistic alike, draw
their explanatory force from the same place, namely from the fact that they pick out
causal structures and patterns that produce the relevant functions and capacities. So a
componential but non-mechanistic cognitive model that represents some aspects of real
causal structure in the domain of psychology ought to have just as much explanatory
legitimacy as a mechanistic model does.
Now consider the second case. Even if cognitive models were mechanistic, they
would still be more than mere sketches. They can be fleshed out and made as close to
ideally complete as any other scientific model we know how to construct. It might still
seem that if cognitive models were non-mechanistic that this would somehow under-
mine their reality. For example, P&C argue that task analysis must ultimately be a form
of mechanistic explanation because “[i]f the connection between analyzing tasks and
components is severed completely, then there is no clear sense in which the analyzing
sub-capacities are aspects of the actual causal structure of the system as opposed
to arbitrary partitions of the system’s capacities or merely possible causal structures”
(p. 293). The worry is that without some appeal to realization-level facts, we cannot
distinguish between competing cognitive models, and will have no grounds for saying
that any of them captures the true psychological structure of the target system.21
As a general point, it cannot be that a model captures some causal facts only when it
maps onto a mechanism. All mechanistic explanations come to an end at some point,
beyond which it becomes impossible to continue to find mechanisms to account for
the behavior of a system’s components. The causal capacities of these entities will have
to be explained otherwise than by their mechanistic organization. For example,
consider protein folding, a process which starts with a mostly linear nascent state of a
polypeptide and terminates with a complexly structured geometric shape. There does
not appear to be any mechanism of this process: for many proteins, given the initially
generated polypeptide chain and relatively normal surrounding conditions, folding
takes place automatically, under the constraints of certain basic principles of economy.
The very structure of the chain itself plus this array of physical laws and constraints
shapes the final outcome. This seems to be a case in which complex forms are produced
not by mechanisms but by a combination of structures and the natural forces or
tendencies that govern them.

21  There are obvious echoes here of earlier debates in cognitive science, most prominently the debate
about whether natural language grammars have ‘psychological reality’ or not. In this debate, grammars
were taken to be abstract mathematical objects, and the appeal to mental structures was meant to decide
which of the many possible formally equivalent grammars captures real human linguistic competence.
Analogously, the issue here is whether neural facts can help to decide which cognitive model captures the
real psychological facts.

But in any case, the elements of cognitive models can meet any number of tests for
mapping onto real entities (Weiskopf, 2011a): they have stable properties, they are
robustly detectable using a range of theoretically independent methods, they can be
manipulated and intervened on, and their existence can be demonstrated under
regular, non-pathological conditions. These tests are applicable to representations
(such as prototypes and analog mental images), processes (such as various forms of
similarity matching), and resources (such as a limited capacity working memory or
an attentional filter). The manipulation condition is particularly important, since
psychological experiments often aim to isolate particular cognitive processes and
representations and see what effects changing them has on behavior.
The elements of cognitive models may therefore constitute control variables for the
behavior of the cognitive system (Campbell, 2008, 2009, 2010). In Campbell’s sense,
we have a control variable for a system when: (1) there is a ‘good’ or natural-seeming
function from the variable to the set of possible outcomes; (2) changes in the variable
can make a large difference to the possible outcome; (3) these differences are largely
specific to the particular outcome; and (4) there is a way of systematically manipulat-
ing or changing the value of the variable. These variables are aspects of a system that,
when altered in a smooth fashion, allow us to choose among its various similarly
ordered outcome states. Control variables in this sense are also robustly detectable and
participate in a range of causal processes.22
If we adopt a metaphorical view on which the elements of cognitive models are akin
to the dials, knobs, and levers of a control panel, then control variables are the ones
“intervention on which makes large, specific, and systematic differences to the out-
come in which we are interested, and for which can be specifically changed by actual
physical processes” (Campbell, 2008, p. 433). And if our cognitive architecture has a
sufficiently regular causal organization such that these conditions hold for its compo-
nent elements and processes, then they will constitute control variables, and we may
systematically affect both particular outcomes (thoughts and behaviors) and the
overall functioning of the system by manipulating them.
The existence of cognitive control variables that hover at some remove from the
neural organization of the mind/brain should be no surprise, since complex systems
typically instantiate many different patterns of causal structure simultaneously. Consider
the many ways neurons themselves can be causally cross-classified. As living cells they
have a host of processes that involve genetic regulation of their activities. They also
have mechanisms for producing action potentials and other graded potentials, and
they have net metabolic demands that affect how they contribute to the local BOLD
signal. Further mechanisms are involved in longer-term processes like synaptic and
dendritic plasticity, directing the growth and pruning of these structures with use.
Many of these mechanisms are interlocking and overlapping, but they are nevertheless
different causal patterns co-present within the same system.

22  Elsewhere I have argued that these robust, repeatable features of models that are employed in a wide
range of explanations should be thought of as functional kinds (Weiskopf, 2011b).

To see how cognitive models can provide representations of a system’s psychological
control variables, return for a moment to Baddeley’s recent refinement of his model of
working memory (WM). In its latest version, the model contains four component sys-
tems: the phonological loop, the visuospatial sketchpad, the episodic buffer, and the
central executive (Baddeley, 2000, 2007, 2012; Repovš & Baddeley, 2006).23 Whether
the components of the modeled system constitute control variables depends on
whether there are ways to specifically, systematically, and significantly activate, sup-
press, and modulate the behavior of the components of this system. The Baddeley-
Hitch WM model is a particularly good test case to measure against these criteria, since
it was explicitly developed in a data-driven fashion, meaning that components were
added to the model on the basis of whether they could be experimentally modified in
these ways.
Consider some canonical results bearing on the properties of the phonological loop.
The loop itself is composed of two subsystems: a short-term store and an articulatory
rehearsal process. The former is a limited capacity buffer, while the latter is a control
system that refreshes and sustains items within the store, and also is responsible for
converting visually presented material into a subvocalized phonological code. In a
typical working memory span task, participants must retain an ordered sequence of
items such as a list of six numbers, letters, or words. Item span can be affected by the
phonological similarity between the items, so that “man, cat, map, cab, can” will be
harder to recall than “pit, day, cow, sup, pen.” Semantic similarities among the items,
however, have no effect on how easily they can be recalled (Baddeley, 1966). The fact
that only certain types of confusion can occur in working memory suggests something
about the code that the system uses. Selective modification of the phonological loop
component of WM is possible by manipulating the to-be-remembered stimuli along
specific dimensions of similarity, consistent with the control variable paradigm.
Material manipulations provide one way to influence cognitive processing. But
components of the model can also be isolated using dual-task methods. When partici-
pants are asked to hold in memory a list of heard items while performing a concurrent
articulatory task such as repeating a word, their performance tends to drop precipi-
tously (Baddeley, Thomson, & Buchanan, 1975). This is predicted by the model, since
articulatory processes are involved in maintaining information within the loop’s stor-
age system. Articulatory processes are also a gateway for non-auditory information to
enter the phonological store. So disabling them should prevent visual information
from being recoded in an auditory format. This seems to be the case: the phonological
similarity effect disappears for visually presented items when participants perform an
articulation task during encoding (Baddeley, Lewis, & Vallar, 1984). Manipulating
task demands, then, provides yet another causal lever for affecting the elements of the
modeled system, specifically the articulatory control processes posited as part of the
phonological loop.

23  These are not regarded as the ultimate or final divisions of the system: the visuospatial sketchpad itself
is now thought to fractionate into two subsystems, one for retaining visual images and the other for retaining
spatial coding of information (Klauer & Zhao, 2004). On the role of neuropsychological case studies in
confirming the model, see Vallar & Papagno (2002).

Finally, the phonological loop can be activated in a more or less mandatory way by
certain intrusive or irrelevant sounds. Participants asked to memorize visually pre-
sented digits do poorly when they are concurrently presented with speech sounds in
an unfamiliar language, relative to conditions of silence or hearing white noise (Colle
& Welsh, 1976; Salame & Baddeley, 1982). This effect seems not to be specific to speech
sounds, but to extend to other temporally patterned sounds such as fluctuating
tones (Jones & Macken, 1993). The explanation for this disruption of performance is
that certain irrelevant sounds gain automatic access to the phonological store, over-
writing or interfering with its existing contents. So the phonological loop may have a
mandatory access channel that selects for sound patterns that share abstract qualities
of variability with normal speech.
These three effects (phonological similarity, articulatory suppression, and irrelevant
speech) provide evidence that the phonological loop is a real construct that can be
intervened on and manipulated experimentally. It can be activated (by irrelevant
speech or speech-like sounds), manipulated (by phonologically related materials), and
disrupted or deactivated (by articulatory suppression). These procedures have system-
atic and specific effects on performance in WM tasks that, according to the model,
depend on the relevant properties of this subsystem. In these respects, the phonological
loop as it is modeled here satisfies Campbell’s conditions for being a psychological
control variable.
Of course, none of this is meant as an endorsement of Baddeley and Hitch’s model,
since for present purposes I am less interested in the structure of working memory
itself than I am in what the construction of working memory models can tell us about
cognitive modeling practices in general.24 What this relatively brief summary suggests
is that multicomponent cognitive models can contain functionally characterized
elements that may be manipulated to produce systematic effects on the phenomena in
their domain. While it may be informative to ask how these elements relate to neural
structures and processes, having this knowledge is not necessary for cognitive models
themselves to be explanatory.

7. Conclusion
The challenge to the autonomy of cognitive modeling that I have surveyed has two
parts. Against the idea that cognitive modeling is a form of mechanistic explanation,
I’ve argued that it is a way of capturing the causal organization of a psychological
­system by representing it in terms of abstract relationships among functional compo-
nents. This is a kind of componential causal explanation, but one that has important
differences from mechanistic modeling. Further, cognitive models are capable of
giving explanations of their target phenomena that answer to all of the relevant epistemic
norms and standards, and they achieve this without making essential reference to the
details of those models’ neural implementation. A total understanding of the mind/
brain will involve both perfecting such cognitive models and coordinating them with
neurobiological ones. But this is not in conflict with the autonomist claim that some
explanations of our psychological capacities come to an end within psychology itself.

24  For an alternative to Baddeley’s perspective on working memory, see Postle (2006).

Acknowledgments
Thanks to Eric Winsberg and David M. Kaplan for helpful comments on an earlier draft of this
chapter, and to students and faculty at Georgetown University for a stimulating discussion of
this material.

References
Anderson, J. R., Bothell, D., Byrne, M. D., Douglass, S., Lebiere, C., & Qin, Y. (2004). An
integrated theory of the mind. Psychological Review, 111, 1036–60.
Atkinson, R. C. & Shiffrin, R. M. (1968). Human memory: A proposed system and its control
processes. In K. W. Spence (Ed.), The Psychology of Learning and Motivation, Vol. 2
(pp. 89–195). New York: Academic Press.
Baddeley, A. D. (1966). Short-term memory for word sequences as a function of acoustic,
semantic, and formal similarity. Quarterly Journal of Experimental Psychology, 18, 362–5.
Baddeley, A. D. (2000). The episodic buffer: A new component of working memory? Trends in
Cognitive Sciences, 4, 417–23.
Baddeley, A. D. (2007). Working Memory, Thought and Action. Oxford: Oxford University
Press.
Baddeley, A. D. (2012). Working memory: Theories, models, and controversies. Annual Review
of Psychology, 63, 1–29.
Baddeley, A. D. & Hitch, G. J. (1974). Working memory. In G. H. Bower (Ed.), Recent Advances
in Learning and Motivation, Vol. 8 (pp. 47–89). New York: Academic Press.
Baddeley, A. D., Lewis, V. J., & Vallar, G. (1984). Exploring the articulatory loop. Quarterly
Journal of Experimental Psychology, 36, 233–52.
Baddeley, A. D., Thomson, N., & Buchanan, M. (1975). Word length and the structure of short
term memory. Journal of Verbal Learning and Verbal Behavior, 14, 575–89.
Bechtel, W. (2008). Mental Mechanisms. New York: Routledge.
Bechtel, W. & Abrahamsen, A. (2005). Explanation: A mechanistic alternative. Studies in
History and Philosophy of the Biological and Biomedical Sciences, 36, 421–41.
Bechtel, W. & Richardson, R. C. (1993). Discovering Complexity: Decomposition and Localization
as Strategies in Scientific Research. Princeton, NJ: Princeton University Press.
Busemeyer, J. R. & Diederich, A. (2009). Cognitive Modeling. New York: Sage Publications.
Campbell, J. (2008). Interventionism, control variables, and causation in the qualitative world.
Philosophical Issues, 18, 426–45.
Campbell, J. (2009). Control variables and mental causation. Proceedings of the Aristotelian
Society, 110, 15–30.
Campbell, J. (2010). Independence of variables in mental causation. Philosophical Issues, 20, 64–79.
Cermak, L. S. & Craik, F. I. M. (Eds) (1979). Levels of Processing in Human Memory. Hillsdale,
NJ: Lawrence Erlbaum.
Chemero, A. (2009). Radical Embodied Cognitive Science. Cambridge, MA: MIT Press.
Chomsky, N. (2000). New horizons in the study of language. In New Horizons in the Study of
Language and Mind (pp. 3–18). Cambridge: Cambridge University Press.
Clark, A. (1997). Being There. Cambridge, MA: MIT Press.
Colle, H. A. & Welsh, A. (1976). Acoustic masking in primary memory. Journal of Verbal
Learning and Verbal Behavior, 15, 17–32.
Coltheart, M. (2006). What has functional neuroimaging told us about the mind (so far)?
Cortex, 42, 323–31.
Coltheart, M., Curtis, B., Atkins, P., & Haller, M. (1993). Models of reading aloud: Dual-route
and parallel distributed processing approaches. Psychological Review, 100, 589–608.
Cooper, R. & Shallice, T. (1995). Soar and the case for unified theories of cognition. Cognition,
55, 115–49.
Craik, F. I. M. & Lockhart, R. S. (1972). Levels of processing: A framework for memory research.
Journal of Verbal Learning and Verbal Behavior, 11, 671–84.
Craik, F. I. M. & Tulving, E. (1975). Depth of processing and the retention of words in episodic
memory. Journal of Experimental Psychology: General, 104, 268–94.
Craver, C. F. (2006). When mechanistic models explain. Synthese, 153, 355–76.
Craver, C. F. (2007). Explaining the Brain. Oxford: Oxford University Press.
Darden, L. & Maull, N. (1977). Interfield theories. Philosophy of Science, 44, 43–64.
Eliasmith, C., Stewart, T. C., Choo, X., Bekolay, T., DeWolf, T., Tang, Y., & Rasmussen, D.
(2012). A large-scale model of the functioning brain. Science, 338, 1202–5.
Fodor, J. A. (1974). Special sciences (or: the disunity of science as a working hypothesis).
Synthese, 28, 97–115.
Franklin-Hall, L. (ms.). The emperor’s new mechanisms. <http://laurafranklin-hall.com/franklin-
hall----the-empero.pdf>.
Gaohua, L. & Kimura, H. (2009). A mathematical model of brain glucose homeostasis.
Theoretical Biology and Medical Modelling, 6, 1–24.
Gervais, R. & Weber, E. (2013). Plausibility versus richness in mechanistic models. Philosophical
Psychology, 26, 139–52.
Glennan, S. (2002). Rethinking mechanistic explanation. Philosophy of Science, 69, S342–53.
Gray, W. D. (Ed.) (2011). Integrated Models of Cognitive Systems. Oxford: Oxford University
Press.
Haugeland, J. (1998). Mind embodied and embedded. In Having Thought (pp. 207–37).
Cambridge: Harvard University Press.
Jacobs, A. M. & Grainger, J. (1994). Models of visual word recognition: Sampling the state
of  the art. Journal of Experimental Psychology: Human Perception and Performance, 20,
1311–34.
Johansen-Berg, H. & Rushworth, M. F. S. (2009). Using diffusion imaging to study human
connectional anatomy. Annual Review of Neuroscience, 32, 75–94.
Jones, D. M. & Macken, W. J. (1993). Irrelevant tones produce an irrelevant speech effect:
Implications for phonological coding in working memory. Journal of Experimental Psychology:
Learning, Memory, and Cognition, 19, 369–81.
Kaplan, D. M. & Craver, C. F. (2011). The explanatory force of dynamical and mathematical
models in neuroscience: A mechanistic perspective. Philosophy of Science, 78, 601–27.
Klauer, K. C. & Zhao, Z. (2004). Double dissociations in visual and spatial short-term memory.
Journal of Experimental Psychology: General, 133, 355–81.
Kuorikoski, J. (2008). Two concepts of mechanism: Componential causal system and abstract
form of interaction. International Studies in the Philosophy of Science, 23, 143–60.
Lewandowsky, S. & Farrell, S. (2007). Computational Modeling in Cognition. New York: Sage
Publications.
Machamer, P., Darden, L., & Craver, C. F. (2000). Thinking about mechanisms. Philosophy of
Science, 67, 1–25.
McClelland, J. L. (2009). The place of modeling in cognitive science. Topics in Cognitive Science,
1, 11–38.
Mitchell, S. D. (2002). Integrative pluralism. Biology and Philosophy, 17, 55–70.
Morrison, M. (2011). One phenomenon, many models: Inconsistency and complementarity.
Studies in History and Philosophy of Science, 42, 342–51.
Newell, A. (1990). Unified Theories of Cognition. Cambridge, MA: Harvard University Press.
Parker, W. S. (2006). Understanding pluralism in climate modeling. Foundations of Science, 11,
349–68.
Piccinini, G. & Craver, C. F. (2011). Integrating psychology and neuroscience: Functional
analyses as mechanism sketches. Synthese, 183, 283–311.
Postle, B. R. (2006). Working memory as an emergent property of the mind and brain.
Neuroscience, 139, 23–38.
Repovš, G. & Baddeley, A. D. (2006). The multi-component model of working memory:
Explorations in experimental cognitive psychology. Neuroscience, 139, 5–21.
Rogers, T. T. & McClelland, J. L. (2004). Semantic Cognition: A Parallel Distributed Processing
Approach. Cambridge, MA: MIT Press.
Roskies, A. (2009). Brain-mind and structure-function relationships: A methodological
response to Coltheart. Philosophy of Science, 76, 927–39.
Salame, P. & Baddeley, A. D. (1982). Disruption of short-term memory by unattended speech:
Implications for the structure of working memory. Journal of Verbal Learning and Verbal
Behavior, 21, 150–64.
Salmon, W. (1984). Scientific explanation: Three basic conceptions. PSA: Proceedings of the
Biennial Meeting of the Philosophy of Science Association, 2, 293–305.
Shepherd, G. (2010). Creating Modern Neuroscience: The Revolutionary 1950s. Oxford: Oxford
University Press.
Shiffrin, R. (2010). Perspectives on modeling in cognitive science. Topics in Cognitive Science,
2, 736–50.
Simon, H. (1996). The Sciences of the Artificial, 3rd Ed. Cambridge, MA: MIT Press.
Tversky, A. (1977). Features of similarity. Psychological Review, 84, 327–52.
Vallar, G. & Papagno, C. (2002). Neuropsychological impairments of short-term memory.
In A. D. Baddeley, M. D. Kopelman, & B. A. Wilson (Eds), The Handbook of Memory Disorders,
2nd Ed. (pp. 249–70). Chichester: John Wiley & Sons.
van Gelder, T. (1995). What might cognition be, if not computation? Journal of Philosophy, 92,
345–81.
Weisberg, M. (2007). Three kinds of idealization. Journal of Philosophy, 104, 639–59.
Weiskopf, D. A. (2011a). Models and mechanisms in psychological explanation. Synthese, 183, 313–38.
Weiskopf, D. A. (2011b). The functional unity of special science kinds. British Journal for the
Philosophy of Science, 62, 233–58.
Winsberg, E. (2010). Science in the Age of Computer Simulation. Chicago: University of Chicago
Press.
Woodward, J. (2013). Mechanistic explanation: Its scope and limits. Proceedings of the Aristotelian
Society, 87, 39–65.
Zilles, K. & Amunts, K. (2009). Receptor mapping: Architecture of the human cerebral cortex.
Current Opinion in Neurology, 22, 331–9.

4
Explanation in Neurobiology
An Interventionist Perspective

James Woodward

1. Introduction
Issues about explanation in psychology and neurobiology have received a great deal
of philosophical attention lately. To a significant degree this reflects the impact of dis-
cussions of mechanism and mechanistic explanation in recent philosophy of science.
Several writers (hereafter mechanists), including perhaps most prominently Carl
Craver and David Kaplan (Machamer et al. 2000,  2006; Kaplan and Craver  2011,
Kaplan 2011), have argued that at least in psychology and neuroscience, mechanistic
theories or models are the predominant mode of explanation, with other sorts of the-
ories or models often being merely “descriptive” or “phenomenological” rather than
explanatory.1 Other writers such as Chemero and Silberstein (2008) have disputed
this, arguing that, e.g., dynamical systems models are not mechanistic but nonethe-
less explanatory. This literature raises a number of issues, which I propose to examine
below. First, how should we understand the contrast between explanatory and
descriptive or phenomenological models within the context of neuroscience? What
qualifies a theory or model as “mechanistic” and are there reasons, connected to some
(plausible) general account of explanation, for supposing that only mechanistic
theories explain? Or do plausible general theories of explanation suggest that other
theories besides mechanistic ones explain? In particular, what does a broadly inter-
ventionist account of causation and explanation suggest about this question? If there
are plausible candidates for non-mechanistic forms of explanation in psychology or
neurobiology, what might these look like? What should we think about the explana-
tory status of “higher-level” psychological or neurobiological theories that abstract
away from “lower-level” physiological, neurobiological, or molecular detail and are,
at least in this respect, “non-mechanistic?”
1  David Kaplan has informed me that the intention in Kaplan and Craver (2011) was not to exclude the
possibility that there might be forms of non-mechanistic explanation that were different from the dynami-
cal and other models the authors targeted as non-explanatory. At Kaplan’s suggestion, I have adopted the
formulation in this sentence (mechanism as “the predominant mode of explanation”) to capture this point.
In what follows I will argue for the following conclusions. First, I will suggest that
an interventionist framework like that developed in Woodward (2003) can be used
to distinguish theories and models that are explanatory from those that are merely
descriptive. This framework can also be used to characterize a notion of a mechanis-
tic explanation, according to which mechanistic explanations are those that meet
interventionist criteria for successful explanation and certain additional constraints
as well. However, from an interventionist perspective, although mechanistic theor-
ies have a number of virtues, it is a mistake to think that mechanistic models are the
exclusive or uniquely dominant mode of explanation in neuroscience and psych-
ology. In particular, the idea that models that provide more mechanistically relevant
low-level detail2 are, even ceteris paribus, explanatorily superior to those which do
not is misguided. Instead, my contrasting view, which I take to be supported by the
interventionist account as well as modeling practice in neuroscience, is that many
explanatory models in neurobiology will necessarily abstract away from such detail.
At the same time, however, I think that the mechanists are right, against some of
their dynamicist critics, in holding that explanation is different from prediction
(and from subsumption under a “covering law”) and that some of the dynamical
systems-based models touted in the recent literature are merely descriptive rather than explanatory. This is not, however, because all such dynamical systems models or all models that abstract away from implementation detail are unexplanatory, but rather because more specific features of some models of this sort render them explanatorily unsatisfactory.

2  As Kaplan has observed in correspondence, almost everyone agrees that the addition of true but irrelevant detail does not improve the quality of explanations; the real issue is what counts as “relevant detail” for improving the quality of an explanation. Kaplan (2011) thinks of relevant detail as a “mechanistically relevant detail” (my emphasis):

3M [Kaplan’s and Craver’s requirements on mechanistic explanation—see below] aligns with the highly plausible assumption that the more accurate and detailed the model is for a target system or phenomenon the better it explains that phenomenon, all other things being equal (for a contrasting view, see Batterman 2009). As one incorporates more mechanistically relevant details into the model, for example, by including additional variables to represent additional mechanism components, by changing the relationships between variables to better reflect the causal dependencies among components, or by further adjusting the model parameters to fit more closely what is going on in the target mechanism, one correspondingly improves the quality of the explanation.  (Kaplan 2011, p. 347)

One possible understanding of “relevant detail” is detail about significant difference makers for the explananda we are trying to explain—a detail is “relevant” if variations in that detail (within some suitable range) would “make a difference” for the explananda of interest (although possibly not for other explananda having to do with the behavior of the system at some other level of analysis). This is essentially the picture of explanation I advocate below. I take it, however, that this is probably not what Kaplan (and Craver) have in mind when they speak of mechanistically relevant detail, since they hold, for example, that the addition of information about the molecular details of the opening and closing of individual ion channels would improve the explanatory quality of the original Hodgkin–Huxley model even though (assuming my argument below is correct) this information does not describe difference makers for the explanandum represented by the generation of the action potential. (This molecular information is difference-making information for other explananda.) Similarly, Kaplan differentiates his views from Batterman in the passage quoted above, presumably on the grounds that the information that Batterman thinks plays an explanatory role in, e.g., explanations of critical point behavior in terms of the renormalization group (see below), is not mechanistically relevant detail. So while it would be incorrect to describe Kaplan and Craver as holding that the addition of just any detail improves the quality of explanations, it seems to me that they do have a conception of the sort of detail that improves explanatory quality that contrasts with other possible positions, including my own (and Batterman’s). I’ve tried to do justice to this difference by using the phrase “mechanistically relevant detail” to describe their position.

The remainder of this chapter is organized as follows. Section 2 discusses some ideas from neuroscience on the difference between explanatory and descriptive
models. Sections 3 and 4 relate these ideas to the interventionist account of caus-
ation and explanation I defend elsewhere (Woodward 2003). Section 5 discusses the
idea that different causal or explanatory factors, often operating at different scales,
will be appropriate for different models, depending on what we are trying to explain.
Section 6 illustrates this with some neurobiological examples. Section 7 asks what
makes an explanation distinctively “mechanistic” and argues that, in the light of
previous sections, we should not expect all explanation in neuroscience to be
mechanistic. Section 8 argues that, contrary to what some mechanists have claimed,
abandoning the requirement that all explanation be mechanistic does not lead to
instrumentalism or other similar sins. Section 9 illustrates the ideas in previous
sections by reference to the Hodgkin–Huxley model of the generation of the action
potential. Section 10 concludes the discussion.

2. Explanatory versus Descriptive Models in Neuroscience
Since the contrast between models or theories that explain and those that do not will
be central to what follows, it is useful to begin with some remarks from some neurosci-
entists about how they understand this contrast. Here is a representative quotation
from a recent textbook:

The questions what, how, and why are addressed by descriptive, mechanistic, and interpretive
models, each of which we discuss in the following chapters. Descriptive models summarize
large amounts of experimental data compactly yet accurately, thereby characterizing what
neurons and neural circuits do. These models may be based loosely on biophysical, anatomical,
and physiological findings, but their primary purpose is to describe phenomena, not to explain
them. Mechanistic models, on the other hand, address the question of how nervous systems
operate on the basis of known anatomy, physiology, and circuitry. Such models often form a
bridge between descriptive models couched at different levels. Interpretive models use com-
putational and information-theoretic principles to explore the behavioral and cognitive
significance of various aspects of nervous system function, addressing the question of why
nervous systems operate as they do.  (Dayan and Abbott 2001, xii)

In this passage, portions of which are also cited by Kaplan and Craver (2011), Dayan
and Abbott draw a contrast between descriptive and mechanistic models, and suggest
that the former are not (and by contrast, that the latter presumably are) explanatory.
However, they also introduce, in portions of the above comments not quoted by
Craver and Kaplan, a third category of model—interpretative models—which are also described as explaining (and as answering why questions, as opposed to the how
questions answered by mechanistic models). The apparent implication is that although
mechanistic models explain, other sorts of models that are not mechanistic do so as
well, and both have a role to play in understanding the brain.
Dayan and Abbott go on to say, in remarks to which I will return below, that:
It is often difficult to identify the appropriate level of modeling for a particular problem. A
frequent mistake is to assume that a more detailed model is necessarily superior. Because models
act as bridges between levels of understanding, they must be detailed enough to make contact
with the lower level yet simple enough to provide clear results at the higher level.
(Dayan and Abbott 2001, xii)

These remarks introduce a number of ideas that I discuss below: (1) Neuroscientists
recognize a distinction between explanatory and merely descriptive theories and
models;3 (2) for purposes of explanation, more detail is not always better; (3) different
models may be appropriate at different “levels”4 of understanding or analysis, with it
often being far from obvious which level of modeling is most appropriate for a given
set of phenomena; and (4) It is nonetheless important to be able to relate or connect
models at different levels.
3  One possible response to the use of words like “explanation,” “understanding,” and so on in these passages as well as those from Trappenberg immediately below, is that we should understand these words as mere honorifics, with the labeling of a theory as “explanatory” meaning nothing more than “I like it or regard it as impressive,” rather than anything of any deeper methodological significance. It is not easy, however, to reconcile this suggestion with the care these authors take in contrasting explanatory models with those that are merely descriptive or phenomenological. Another more radical response would be to acknowledge that these authors do mean what they say but claim that they are simply mistaken about what constitutes an explanation in neuroscience, with the correct view being the position advocated by mechanists. I assume, however, that few philosophers would favor such a dismissive response, especially since, as noted below, there are normative accounts of explanation (such as interventionism) which support the quoted ideas. Let me also add that although it is true that one motive for abstraction away from detail is to enhance computational tractability, the passages quoted and many of the examples discussed below make it clear that this is not the only motive: sometimes such abstraction leads to better explanations, where this is not just a matter of improved computational tractability.
4  Talk of “levels” of explanation is ubiquitous in neuroscience, psychology, and philosophy, although many commentators (myself included—see Woodward 2008) also complain about the unclarity of this notion. In order to avoid getting enmeshed in the philosophical literature on this subject, let me just say that the understanding of this notion I will adopt (which I think also fits with the apparent views of the neuroscientists discussed below) is a very deflationary one, according to which level talk is just a way of expressing claims about explanatory or causal relevance and irrelevance: To say that a multiple compartment model of the neuron (see Section 6) is the right level for modeling dendritic currents (or an appropriate model at the level of such currents) is just to say that such a model captures the factors relevant to the explanation of dendritic currents. This gives us only a very local and contextual notion of level and also makes it entirely an empirical, a posteriori issue what level of theorizing is appropriate for understanding a given set of phenomena; it does not carry any suggestion that reality as a whole can be divided into “layers” of levels on the basis of size or compositional relations or that “upper-level” causes (understood compositionally) cannot affect lower-level causes.

A second set of remarks comes from a discussion of computational neuroscience modeling in Trappenberg (2002):
As scientists, we want to find the roots of natural phenomena. The explanations we are seeking
are usually deeper than merely parameterizing experimental data with specific functions. Most
of the models in this book are intended to capture processes that are thought of as being the
basis of the information-processing capabilities of the brain. This includes models of single
neurons, networks of neurons, and specific architectures capturing brain organizations . . .
The current state of neuroscience, often still exploratory in nature, frequently makes it diffi-
cult to find the right level of abstraction to properly investigate hypotheses. Some models in
computational neuroscience have certainly been too abstract to justify claims derived from
them. On the other hand, there is a great danger in keeping too many details that are not essen-
tial for the scientific argument. Models are intended to simplify experimental data, and thereby
to identify which details of the biology are essential to explain particular aspects of a system . . .
What we are looking for, at least in this book, is a better comprehension of brain mechan-
isms on explanatory levels. It is therefore important to learn about the art of abstraction,
making suitable simplifications to a system without abolishing the important features we
want to comprehend.  (Trappenberg 2002, pp. 6-7)

Here, as in the passage quoted from Dayan and Abbott, the notion of finding an
explanatory model is connected to finding the right “level” of “abstraction,” with the
suggestion that this has to do with discovering which features of a system are “essen-
tial” or necessary for the explanation of those phenomena. Elsewhere Trappenberg
connects this to the notion of a “minimal” model—“minimal” in the sense that the
model includes just those features or details which are necessary or required to account
for whatever it is that we are trying to understand and nothing more.5 Trappenberg
writes that “we want the model to be as simple as possible while still capturing the main
aspects of the data that the model should capture” and that “it can be advantageous to
highlight the minimal features necessary to enable certain emergent properties in
[neural] network [models].”

3. An Interventionist Account of Causation and Explanation
How, if at all, might the ideas in these remarks be related to an interventionist account
of causal explanation? I begin with a brief sketch of that account and then attempt to
connect it to some issues about modeling and explanation in neuroscience suggested
by the remarks quoted above. According to the interventional model, causal and caus-
ally explanatory claims are understood as claims about what would happen to the
value of some variable under hypothetical manipulations (interventions6) on other
variables. A causal claim of form X causes Y is true if (i) some interventions that change
the value of X are “possible” and (ii) under those interventions the value of Y would
change. A more specific causal claim (e.g., that X and Y are causally related according to Y = F(X) where F is some specified function) will be true if, under interventions on X, Y responds in the way described by F. For our purposes, we may think of the following as a necessary condition for a structure H to count as a causal explanation of some explanandum E:

H consists of true causal generalizations {Gi} (true according to the criteria just specified) and additional true claims C (often but not always about the values taken by initial and boundary conditions) in the systems for which H holds such that C ∪ {Gi} entails E and alternatives to E would hold according to Gi if alternatives to C were to be realized (e.g., if those initial and boundary conditions were to take different values).

5  For recent discussions of the notion (or perhaps notions) of a minimal model see Chirimuuta (2014) and Batterman and Rice (2014).
6  An intervention is an idealized, non-confounded experimental manipulation. See Woodward (2003).

For example (cf. Woodward 2003), an explanation of why the electromagnetic field due to the presence of a uniform charge distribution along a long straight wire is given by the expression

E = L/(2πε₀r)    (4.1)

(where E is the field intensity, L the charge density along the wire, and r the distance
from the wire) might consist of a derivation of expression (4.1) from Coulomb’s law,
and facts about the geometry of the wire and the charge distribution along it, as well as
information about how the expression describing the field would have been different if
the geometry of the conductor or the charge distribution had been different, where (in
this case) this will involve additional derivations also appealing to Coulomb’s law. In
this way the explanation answers a set of what Woodward (2003) calls what-if-things-
had-been-different-questions, identifying conditions under which alternatives to the
explanandum would have occurred. This requirement that an explanation answer
such questions is meant to capture the intuitive idea that a successful explanation
should identify conditions that are explanatorily or causally relevant to the explanan-
dum: the relevant factors are just those that “make a difference” to the explanandum in
the sense that changes in these factors lead to changes in the explanandum. This
requirement fits naturally with the notion of a minimal model on at least one construal
of this notion: such a model will incorporate all and only those factors which are rele-
vant to an explanandum in the sense described. The requirement also embodies the
characteristic interventionist idea that causally explanatory information is informa-
tion that is in principle exploitable for manipulation and control. It is when this what-if
things-had-been-different condition is satisfied that changing or manipulating the
conditions cited in the explanans will change the explanandum. Finally, we may also
think of this what-if–things-had-been-different condition as an attempt to capture
the  idea that successful explanations exhibit dependency relationships: exhibiting
dependency relations is a matter of exhibiting how the explanandum would have been
different under changes in the factors cited in the explanans.
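To make the what-if-things-had-been-different reading of (4.1) concrete, here is a minimal Python sketch; the particular numerical values for the charge density and the distance are purely illustrative and not drawn from the text. It evaluates (4.1) for a baseline case and then under two hypothetical interventions, one on the charge density and one on the distance from the wire:

    import math

    EPSILON_0 = 8.854e-12  # vacuum permittivity, in farads per meter

    def field_intensity(charge_density, distance):
        # Expression (4.1): E = L / (2 * pi * epsilon_0 * r)
        return charge_density / (2 * math.pi * EPSILON_0 * distance)

    # Baseline case (illustrative values).
    E_actual = field_intensity(charge_density=1e-9, distance=0.5)

    # What-if-things-had-been-different questions: how would E have differed
    # under interventions on the charge density or on the distance?
    E_if_density_doubled = field_intensity(charge_density=2e-9, distance=0.5)
    E_if_twice_as_far = field_intensity(charge_density=1e-9, distance=1.0)

    print(E_actual, E_if_density_doubled, E_if_twice_as_far)

The explanatory work is done not by the single computed value but by the pattern of dependence the function encodes: doubling the charge density doubles E, and doubling the distance halves it.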
Next, a brief aside about non-causal forms of why-explanations—another topic
which I lack the space to discuss in the detail that it deserves. I agree that there are
forms of why-explanation that are not naturally regarded as causal. One way of under-
standing these (and distinguishing them from causal explanations), defended in
passing in Woodward (2003), is to take causal explanations to involve dependency or difference-making relationships (that answer what-if-things-had-been-different
questions) that have to do with what would happen under interventions. Non-causal
forms of why-explanation also answer what-if-things-had-been-different questions
but by citing dependency relations or information about difference makers that does
not have an interventionist interpretation. For example, the universal behavior of many
systems near their critical point depends on certain features of their Hamiltonian but
arguably this is not naturally regarded as a form of causal dependence—cf. note 10.
The trajectory of an object moving along an inertial path depends on the affine struc-
ture of spacetime but again this is not plausibly viewed as a case of causal dependence.
In what follows I will sometimes speak generically of dependency relations, where
this is meant to cover both the possibility that these are causal and the possibility that
they are non-causal.
Many different devices are employed in science to describe dependency relations
between explanans and explanandum, including directed graphs of various sorts with
an arrow from X to Y meaning that Y depends in some way on X. (Such graphs are
widely used in the biological sciences). However, one of the most common (and pre-
cise) such devices involves the use of equations. These can provide interventionist
information (or more generally information about dependency relations) by spelling
out explicitly how changes in the values of one or more variables depend on changes
(including changes due to interventions) in the values of others. In contrast to the
tendency of some mechanists (e.g., Bogen 2005) to downplay the significance of math-
ematical relationships in explanation, the interventionist framework instead sees
mathematical relationships as playing a central role in many explanations, including
many neuroscientific explanations.7 Often they are the best means we have of repre-
senting the dependency relations that are crucial to successful explanation.
In its emphasis on the role played by generalizations, including those taking a math-
ematical form, in explanation and causal analysis, the interventionist account has some
affinities with the DN model. However, in other respects, it is fundamentally different.
In particular, the interventionist account rejects the DN idea that subsumption under a
“covering law” is sufficient for successful explanation; a derivation can provide such
subsumption and yet fail to satisfy interventionist requirements on explanation, as a
number of the examples discussed below illustrate. In addition, although the interven-
tionist account requires information about dependency relations, generalizations and
other sorts of descriptions that fall short of being laws can provide such information,
so the interventionist account does not require laws for explanation. I stress this point
because I want to separate the issue of whether the DN model is an adequate account of
explanation (here I agree with mechanists in rejecting this model) from the issue of whether good explanations, including many in neuroscience, often take a mathematical or derivational form—a claim which I endorse. Interventionism provides a framework that allows for recognition of the role of mathematical structure in explanation without adopting the specific commitments of the DN model.

7  This is certainly not true of all mechanists. Kaplan (2011) is a significant exception and Bechtel (e.g., Bechtel and Abrahamsen 2013) has also emphasized the important role of mathematics in explanation in neuroscience and psychology.
With these basic interventionist ideas in hand, now let me make explicit some
additional features that will be relevant to the discussion below. First, in science we are
usually interested in explaining regularities or recurrent patterns—what Bogen and
Woodward (1988) call phenomena—rather than individual events. For example, we are
usually interested in explaining why the field created by all long straight conductors
with a uniform charge distribution is given by (4.1) rather than explaining why some
particular conductor creates such a field. Or at least we are interested in explaining the
latter only insofar as the explanation we provide will also count as an explanation of
the former. In other words, contrary to what some philosophical discussions of
explanation suggest, it is wrong to think of explanation in science in terms of a “two-
stage” model in which one (i) first explains why some singular explanandum E (e.g.,
that a particular wire produces a certain field) by appealing to some low-level covering
generalization G (e.g., (4.1)) saying that E occurs regularly and then, in a second, inde-
pendent step, (ii) explains why G itself holds via an appeal to some deeper generaliza-
tion (e.g., Coulomb’s law). Usually in scientific practice there is no separate step
conforming to (i).8 Or, to put the point slightly differently, the low-level generalization
(G) is treated as something to be explained—a claim about a phenomenon—rather
than as a potential explainer of anything, despite the fact that many such Gs (including
(4.1)) qualify as “law-like,” on at least some conceptions of scientific law.
Because claims about phenomena describe repeatable patterns they necessarily
abstract away from some of the idiosyncrasies of particular events that fall under
those patterns, providing instead more generic descriptions, often characterized as
“stylized” or “prototypical.” For example, the Hodgkin–Huxley model, described
below, takes as its explanandum the shape of the action potential of an individual
neuron, but this explanandum amounts to a generic representation of important fea-
tures of the action potential rather than a description of any individual action poten-
tial in all of its idiosyncrasy. This in turn has implications for what an explanatory
model of this explanandum should look like—what such a model aims to do is to
describe the factors on which the generic features of this repeatable pattern depend,
rather than to reproduce all of the features of individual instances of the pattern. Put
differently, since individual neurons will differ in many details, what we want is an
account of how all neurons meeting certain general conditions are able to generate
action potentials despite this variation.
This framework may also be used to capture one natural notion of a (merely) “phe-
nomenological” model (but not the only one; see Section 8): one may think of this as
a model or representation that consists just of a generalization playing the role of G above—in other words, a model that merely describes some “phenomenon” understood
as a recurrent pattern. Trappenberg (2002) provides an illustration:9 the tuning curves
of neurons in the LGN (lateral geniculate nucleus) may be described by means of a
class of functions called Gabor functions, which can be fitted to the experimental data
with parameters estimated directly from that data. Trappenberg describes the result-
ing curves as a “phenomenological model” of the response fields in the LGN, adding
that “of course this phenomenological model does not tell us anything about the bio-
physical mechanisms underlying the formation of receptive fields and why the cells
respond in this particular way” (p. 6). The tuning curves describe phenomena in the
sense of Bogen and Woodward; they are generalizations which describe potential
explananda but which are not themselves regarded as furnishing explanations. An
“explanation” in this context would explain why these neurons have the response prop-
erties described by the tuning curves—that is, what these response properties depend
on. Obviously, merely citing the fitted functions does not do this. As this example illus-
trates, the contrast between a merely phenomenological model and an explanatory one
falls naturally out of the interventionist framework, as does the contrast between DN
and interventionist conceptions of explanation. The fitted functions describe and pre-
dict neuronal responses (they show the neuronal responses to particular stimuli “were
to be expected” and do so via subsumption under a “covering” generalization, which
many philosophers are willing to regard as locally “lawlike”), but they do not explain
those responses on the interventionist account of explanation.
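The contrast can be made vivid with a schematic sketch of what such a fitted, purely descriptive model amounts to. The following Python fragment (using NumPy and SciPy; the synthetic data and parameter values are invented for illustration and are not Trappenberg's) fits a one-dimensional Gabor function directly to simulated response data, exactly the kind of curve-fitting exercise that summarizes a receptive field without saying what the response profile depends on:

    import numpy as np
    from scipy.optimize import curve_fit

    def gabor(x, amplitude, center, sigma, frequency, phase):
        # A one-dimensional Gabor function: Gaussian envelope times a sinusoid.
        envelope = np.exp(-((x - center) ** 2) / (2 * sigma ** 2))
        carrier = np.cos(2 * np.pi * frequency * (x - center) + phase)
        return amplitude * envelope * carrier

    # Synthetic "experimental" responses standing in for recorded tuning data.
    positions = np.linspace(-2.0, 2.0, 80)
    rng = np.random.default_rng(0)
    responses = gabor(positions, 1.0, 0.1, 0.6, 0.9, 0.2)
    responses += 0.05 * rng.standard_normal(positions.size)

    # Estimate the Gabor parameters directly from the data: a descriptive
    # (phenomenological) model of the response field, not an explanation of it.
    fitted_params, _ = curve_fit(gabor, positions, responses,
                                 p0=[1.0, 0.0, 0.5, 1.0, 0.0])
    print(fitted_params)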
This idea that explanations are directed at explaining phenomena naturally suggests
a second point. This is that what sorts of factors and generalizations it is appropriate
to cite in an explanans (and in particular, the level of detail that is appropriate)
depends on the explananda E we want to account for, where (remember) this will be
characterization at a certain level of detail or abstractness. In providing an explan-
ation we are looking for just those factors which make a difference to whatever
explananda are our target, and thus it will be at least permissible (and perhaps desir-
able) not to include in our explanans those factors S* which are such that variations or
changes in those factors make no difference for whether E holds. (Of course, as illus-
trated below, an explanans that includes S* may well furnish an explanation of some
other explanandum E* which is related to E—for example by describing the more
detailed behavior of some particular set of instances of E.)10

8  See Woodward (1979) for additional argument in support of this claim.
9  Kaplan (2011) also uses this illustration.
10  There is a very large philosophical literature on abstraction, idealization, and the use of “fictions” in modeling which I will largely ignore for reasons of space. However, a few additional orienting remarks may be useful. First, a number of writers (e.g., Thomson-Jones 2005) distinguish between idealization, understood as the introduction of false or fictional claims into a model, and abstraction, which involves omitting detail, but without introducing falsehoods or misrepresentation. I myself do not believe that thinking about the sorts of examples philosophers have in mind when they talk about “idealization” in terms of categories like “false” and “fictional” is very illuminating, but in any case it is worth emphasizing that the goal of including in one’s model only those features that make a difference to some explanandum need not, in itself, involve the introduction of falsehood or misrepresentation; instead it involves the omission of non-difference-making detail. However, I will also add that I do not think that most of the cases of modeling of upper-level systems discussed below are usefully viewed as involving only the omission of detail present in some lower-level model—i.e. such upper-level models do not just involve abstraction from a lower-level model. Instead, such modeling typically introduces new detail/explanatory features not found in models of lower-level systems—that is, it adds as well as removes. Of course if, like Strevens (2008), one begins with the idea that one has available a fundamental level theory T that somehow represents or contains “all” explanatorily relevant factors at all levels of analysis for a system (a neural “theory of everything”), then models of higher-level behavior will involve only dropping various sorts of detail from T. But actual examples of lower-level models in science are not like T—instead they include detail which is difference making for some much more restricted set of explananda, with the consequence that when we wish to explain other higher-level explananda, we must include additional difference-making factors. To take an example discussed in more detail below, one doesn’t get the Hodgkin–Huxley model for the action potential just by omitting detail from a lower-level multi-compartment model; instead the Hodgkin–Huxley model introduces a great deal of relevant information that is “new” with respect to any actual lower-level model.

A physics example illustrates this point with particular vividness. Consider the
“universal” behavior exhibited by a wide variety of different materials including fluids
of different material composition and magnets near their critical points, with both
being characterized by the same critical exponent b. In the case of fluids, for example,
behavior near the critical point can be characterized in terms of an “order” parameter
S given by the difference in densities between the liquid and vapor forms of the fluid
S = ρ_liq - ρ_vap. As the temperature T of the system approaches the critical temperature
Tc, S is found to depend upon a power of the “reduced” temperature t = (T - Tc)/Tc:

S ~ |t|^b

where b is the critical exponent referred to above. Remarkably, the same value of b
characterizes not just different fluids but also the behavior of magnets in the transition
from ferromagnetic to paramagnetic phases.
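A schematic numerical illustration of this universality (not a model of any actual fluid; the critical temperatures, amplitudes, and the value chosen for b are invented for the example) shows how two materials with quite different critical points can nonetheless share the same power-law exponent:

    import numpy as np

    B_EXPONENT = 0.33  # an illustrative value for the shared critical exponent b

    def order_parameter(T, Tc, amplitude):
        # S ~ |t|^b for T below Tc, with reduced temperature t = (T - Tc) / Tc;
        # S vanishes at and above the critical temperature.
        t = (T - Tc) / Tc
        return np.where(T < Tc, amplitude * np.abs(t) ** B_EXPONENT, 0.0)

    temperatures = np.linspace(300.0, 700.0, 9)
    # Two hypothetical fluids: different critical temperatures and amplitudes
    # (material-dependent details), but the very same exponent b.
    fluid_a = order_parameter(temperatures, Tc=647.0, amplitude=1.0)
    fluid_b = order_parameter(temperatures, Tc=512.0, amplitude=0.4)
    print(fluid_a)
    print(fluid_b)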
Suppose one is interested in explaining why some particular kind of fluid has the crit-
ical point that it does. Since different kinds of fluids have different critical points, the
value of Tc for any particular fluid will indeed depend on microphysical details about its
material composition. However, if one is instead interested in explaining the universal
behavior just described (the phenomenon or generic fact that S ~ |t|^b with fixed b for
many different materials), then (as particularly emphasized by Batterman in a series of
papers—e.g., 2009) information about the differing microphysical details of different
fluids is irrelevant: within the interventionist framework it is non-difference-making
information. That is, this universal behavior does not depend on these microphysical
details since, as we have just noted, variations in those details do not make a difference for
whether this universal behavior occurs. In other words, the universality of this behavior
shows us that its explanation must be found elsewhere than in details about the differences
in material composition of different fluids. In fact, as Batterman argues, the explanation
for universal behavior is provided by renormalization group techniques which in effect
trace the behavior to very generic qualitative features (e.g., certain symmetries) that are
shared by the Hamiltonians governing the interactions occurring in each of the systems,
despite the fact these Hamiltonians differ in detail for each system.11

11  I gloss over a number of important issues here. But to avoid a possible misunderstanding let me say that the similarity between explanation of critical point behavior in terms of the renormalization group and the neurobiological explanations I consider is that in both cases certain behaviors are independent of variations in lower-level details. However, there is also an important difference: in the neurobiological cases, it often seems reasonable to regard the explanations as causal; in the case of the explanation of critical point behavior the explanation is (in my view and also in Batterman’s) not causal. As suggested above, I would be inclined to trace this difference to the fact that in the neurobiological examples the explanatorily relevant factors are possible objects of intervention or manipulation. This is not the case for the renormalization group explanation. In this case, one can still talk of variations making or failing to make a difference, but “making a difference” should not be understood in causal or interventionist terms.

This example provides a concrete illustration of the point made more abstractly by
Abbott and Dayan and by Trappenberg: it is not always correct that adding additional
accurate detail (for example, details about the different Hamiltonians governing the
different systems above) improves the quality of one’s explanation. Instead, this can
detract from the goodness of the explanation if the target explanandum does not
depend on the details in question. Or at the very least, it is not mandatory in construct-
ing an explanation that one provide such detail. Arguably a similar point follows if
the detail in question is “mechanistically relevant detail”—the explanatory import of the
renormalization group’s account of critical point behavior would not be improved by
the provision of such detail.

4.  “Levels” of Explanation and Independence


The general idea of an explanandum “not depending” on “lower-level” or implementa-
tional/compositional/realizational detail deserves more development than I can give
here, but a few additional comments may be helpful in fleshing out the picture I have in
mind. First, when we speak of non-dependence on such detail, what we have in mind is
non-dependence within a certain range of variation of such detail, rather than
complete independence from all facts about realization. For example, in the example
discussed above, the value of the critical exponent b does not depend on variations in
the composition of the fluid being investigated—whether it is water, liquid helium, etc.
This is not to say, however, that “lower-level facts” about such fluids play no role in
determining the value of b. But the facts that are relevant are very generic features of
the Hamiltonians characterizing these particular fluids—features that are common to
a large range of fluids—rather than features that distinguish one fluid from another. To
the extent there are materials that do not meet these generic conditions, the model will
not apply to them. In a similar way, whether a relatively “high-level” neural network
model correctly describes, say, memory recall in some structure in the temporal lobe
may be independent of various facts about the detailed workings of ion channels in the
neurons involved in this structure—“independent” in the sense that the workings of
these channels might have been different, within some range of variation (e.g., having
to do with biologically normal possibilities), consistently with the network structure
behaving in the same way with respect to phenomena having to do with memory recall.
Again, this does not mean that the behavior of the structure will be independent of all
lower-level detail—for example, it certainly matters to the behavior of the network that
the neurons are not made of copper wire or constituted in such a way that they disinte-
grate when connected. Just as with critical point behavior, the idea is that lower-level
facts about neuronal behavior will impose constraints on what is possible in terms of
higher-level behavior, but that these constraints often will be relatively generic in the
sense that a number of different low-level variants will satisfy them. In this respect,
what we have is a picture involving, so to speak, partial or constrained autonomy of the
behavior of upper-level systems from lower-level features of realization, but not com-
plete autonomy or independence.
A second point worth making explicit is this: the picture just sketched requires that
it be possible for a model or theory to explain some explananda having to do with some
aspects of the behavior of a system without the model explaining all such aspects. It is
thus opposed to an alternative picture according to which a theory that explains any
explanandum satisfactorily must be a “theory of everything” that explains all aspects of
the behavior of the system of interest, whatever the scale or level at which this is exhib-
ited. In the neural case, for example, such a theory of everything would appeal to a sin-
gle set of factors or principles that could be used to explain the detailed behavior of
dendritic currents and ion channels in individual neurons, the overall behavior of large
networks of neurons and everything in between. The alternative view which is implicit
in the remarks from Dayan and Abbott and Trappenberg above is that in addition to
being completely computationally intractable such a theory is not necessary to the
extent that behavior at some levels does not depend on causal details at other levels.
Instead, it is acceptable to operate with different models, each appropriate for explaining
explananda at some level but not others. There will be constraint relationships among
these models—they will not be completely independent of each other—but this is dif-
ferent from saying that our goal should be one big ur-model with maximal lower-level
detail encompassing everything.12

12  Two additional points: First, I do not mean to imply that “mechanists” like Kaplan and Craver are committed to such a “theory of everything” view. The point of my remarks above is just to make explicit some of the commitments of the picture I favor. Second, another way of putting matters is that on my view a model can, so to speak, designate a set of target explananda and say, in effect, that it is interested in explaining just these, rather than all behaviors at all scales exhibited by the system of interest. A model M that represents neurons as dimensionless points is, obviously, going to make radically false or no predictions concerning any phenomena P that depend on the fact that neurons are spatially extended, but it is legitimate for M to decline to take on the task of explaining P, if its target is some other set of explananda. In other words, M should be assessed in terms of whether it succeeds in explaining the explananda in its target domain.

5. The Separation of Levels/Scales

The ideas just described would be less interesting and consequential if it were not for another broadly empirical fact. In principle, it is certainly possible that a huge number of different factors might turn out, empirically, to make a difference (and perhaps roughly the “same” difference, if we were able to devise some appropriate measure for this)
to some set of target explananda. It is thus of great interest (and prima facie surprising,
as well as extremely fortunate for modeling purposes) that this is often not the case.
Instead, it often turns out that there is some relatively small number of factors that
make a difference or at least a substantial or non-trivial difference to a target set of
explananda. Or, to express the idea slightly differently, it often turns out that we can
group or segregate sets of explananda in such a way that different sets can be accounted
for by different small sets of difference-making factors. In physics, these sets (of
explananda and their accompanying difference makers) are sometimes described
as “domains” or “regimes” or “protectorates”—the idea being that certain explanatory
factors and not others are “drivers” or represent the “dominant physics” for certain
domains while other explanatory factors are the primary drivers for explananda in
other domains. In physics, the possibility of separating domains and dominant
explanatory factors in this way is often connected to differences in the “scale” (e.g., of
length, time, or energy) at which different factors are dominant or influential. That is,
there often turn out to be factors that are very important to what happens physically at,
say, very short-length scales or at high energies but which we can entirely or largely
ignore at longer-length scales, where instead different factors (or at least factors char-
acterized by different theories) become important. To take a very simple example, if we
wish to understand what happens within an atomic nucleus, the strong and weak
forces, which fall off very rapidly with distance, are major determinants of many pro-
cesses, and gravitational forces, which are very weak, are inconsequential. The oppos-
ite is true if one is interested in understanding the motion of galaxies, where gravity
dominates. A similar point seems to hold for many biological phenomena, including
phenomena involving the brain. Here, too, considerations of scale—both temporal
and length scale—seem to operate in such a way that certain factors are important to
understanding phenomena at some scales and not others, while models appealing to
other factors are relevant at other scales.13 For example, the detailed behavior of ion
channels in a neuron requires modeling at length and temporal scales that are several
orders of magnitude less than is appropriate for models of the behavior of an entire
neuron in generating an action potential. This suggests the possibility of models that
account for the latter without accounting for the former and vice-versa—a possibility
described in more detail immediately below.

13  One generic way in which this can happen is that factors that change very slowly with respect to the explananda of interest can be treated as effectively constant and hence (for some purposes) either ignored or modeled in a very simple way—by means of a single constant parameter. Another possibility is that some factor goes to equilibrium very quickly in comparison with the time scale of the explanandum of interest, in which case it may also be legitimate to treat it as constant.

6. Levels of Modeling in Neurobiology

To illustrate the ideas in Section 5 in more detail, I turn to a recent review paper entitled “Modeling Single-Neuron Dynamics and Computations: A Balance of Detail and Abstraction” (Herz et al. 2006). In this paper, the authors describe five different “levels”
(there’s that word again) of single neuron modeling. At “level one” are “detailed com-
partment models” (in some cases consisting of more than 1,000 compartments14)
which are “morphologically realistic” and “focus on how the spatial structure of a
neuron contributes to its dynamics and function.” The authors add, however, that
“[a]lthough detailed compartmental models can approximate the dynamics of single
neurons quite well, they suffer from several drawbacks. Their high dimensionality
and intricate structure rule out any mathematical understanding of their emergent
properties.” By contrast, “reduced [compartment] models [level two] with only one
or few dendritic compartments overcome these problems and are often sufficient to
understand somatodendritic interactions that govern spiking or bursting.” They add
that “a well-matched task for such [reduced compartment] models is to relate behav-
iorally relevant computations on various time scales to salient features of neural
structure and dynamics,” mentioning in this connection the modeling of binaural
neurons in the auditory brainstem.
Level three comprises “single compartment models” with the Hodgkin-Huxley
model being explicitly cited as an example. Herz et al. write:
Single-compartment models such as the classic Hodgkin-Huxley model neglect the neuron’s
spatial structure and focus entirely on how its various ionic currents contribute to sub-
threshold behavior and spike generation. These models have led to a quantitative understand-
ing of many dynamical phenomena including phasic spiking, bursting, and spike-frequency
adaptation.  (p. 82)

They add that models in this class “explain why, for example, some neurons resemble
integrate-and-fire elements or why the membrane potential of others oscillates in
response to current injections enabling a ‘resonate-and-fire’ behavior,” as well as other
explananda (p. 82).
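To give a flavor of what a highly reduced, single-variable neuron model of this general kind looks like in practice, here is a minimal leaky integrate-and-fire sketch in Python; the parameter values are generic textbook-style numbers chosen for illustration, not taken from Herz et al.:

    import numpy as np

    def simulate_lif(input_current, dt=0.1, tau_m=10.0, v_rest=-65.0,
                     v_reset=-65.0, v_threshold=-50.0, resistance=10.0):
        # Euler integration of a leaky integrate-and-fire neuron.
        # Membrane potential in mV, time in ms, current in nA (illustrative units).
        v = v_rest
        spike_times = []
        for step, i_ext in enumerate(input_current):
            dv = (-(v - v_rest) + resistance * i_ext) * (dt / tau_m)
            v += dv
            if v >= v_threshold:  # threshold crossing: record a spike and reset
                spike_times.append(step * dt)
                v = v_reset
        return spike_times

    # A constant 2 nA current injected for 200 ms produces a regular spike train.
    constant_input = np.full(2000, 2.0)
    print(simulate_lif(constant_input))

The sketch says nothing about ion channels or dendritic geometry; whether that omission matters depends on which explananda (regular spiking versus detailed channel kinetics, say) the model is being asked to account for.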
Cascade models (level four) involving linear filters, non-linear transformations, and
explicit modeling of noise abstract even further from physiological details but “allow
one to capture additional neural characteristics” such as those involved in adaptation
to light intensity and contrast. Finally, “black box models” (level five), which may char-
acterize the behavior of a neuron simply in terms of a probability distribution govern-
ing its input/output relationships, may be most appropriate if we “want to understand
and quantify the signal-processing capabilities of a single neuron without considering
its biophysical machinery. This approach may reveal general principles that explain, for
example, where neurons place their operating points and how they alter their responses
when the input statistics are modified” (p. 83). Models at this level may be used to show, for example, how individual neurons shift their input–output curves in such a way as to achieve efficient coding.

14  “Compartment” refers to the number of sections, represented by distinct sets of variables, into which the neuron is divided for modeling purposes—for example, the Hodgkin–Huxley model is a “single-compartment” model since the modeling is in terms of a single variable, voltage, which characterizes the behavior of the entire neural membrane. A multiple compartment model would have many different voltage variables for different parts of the membrane.
Several features of this discussion are worth particular emphasis. First, and most
obviously, there is explicit countenancing of models at a number of “levels,” where the
notion of level is tied to differences in spatial and temporal scale (a representation of
the neuron as spatially extended, with different potentials in different spatial regions is
required for understanding dendritic currents, but this scale of spatial representation
may not be required for other purposes). Models at each level are explicitly recognized
as being capable of providing “explanations,” “understanding,” and the like, rather than
models at some levels being regarded as merely descriptive or phenomenological in a
way that contrasts with the genuinely “explanatory” models at other (presumably
“lower”) levels. Moreover, these models are seen as complementary rather than in
competition with each other, at least in part because they are seen as aiming at different
sets of explananda. There is no suggestion that we have to choose between modeling at
a very fine-grained, detailed level (e.g., level one) or a more coarse-grained level (e.g.,
levels four or five). Second, it is also recognized that which modeling level is most
appropriate depends on the phenomena one wants to explain and that it is not true that
models with more details (or even more mechanistically relevant details) are always
better, regardless of what one is trying to explain, although for some purposes highly
detailed models are just what are called for.15 For example, if one’s goal is to understand
how the details of the anatomy and spatial structure of an individual neuron influence
its detailed dynamics, a model at level one may be most appropriate. If one wants a
“quantitative understanding” of spike train behavior, a model at a higher level (e.g.,
level three) may be better. This would be better in the sense that the details invoked in a
level one model may be such that they are irrelevant to (make no difference for) this
phenomenon. Again, the goal is taken to be the inclusion of just enough detail to
account for what it is one is trying to explain but not more:
All these [modeling] tasks require a delicate balance between incorporating sufficient details
to account for complex single-cell dynamics and reducing this complexity to the essential
characteristics to make a model tractable. The appropriate level of description depends on
the particular goal of the model. Indeed, finding the best abstraction level is often the key to
success.  (p. 80)

15  Once again, my goal in these remarks is the positive one of highlighting a feature of good explanatory practice in neuroscience. I do not mean to imply that mechanistic approaches are unable to incorporate this feature, but rather to emphasize that they should.

7. Mechanistic Explanation

So far I have discussed “explanation” but have said nothing about distinctively “mechanistic” explanations and how these relate to the ideas just described. Although, for reasons that will emerge below, I don’t think that “mechanistic explanation” is a
notion with sharp boundaries, I fully agree that these are one important variety of
explanation in many areas of biology and neuroscience. Roughly speaking, I see these
as explanations meeting certain specific conditions M (described immediately below)
that lead us to think of them as “mechanistic,” where satisfying M is one way of meet-
ing the general interventionist conditions on explanation. However, I also think that
it is possible for a theory or model to fail to satisfy conditions M and still qualify as
explanatory in virtue of meeting these more general conditions.
At the level of methodology, if not underlying metaphysics, my general picture of
mechanisms and mechanistic explanation is fairly close to that advanced by other
writers, such as Machamer et al. (2000) and Bechtel and Abrahamsen (2005). Consider
a system S that exhibits behavior B—the phenomenon we want to explain. A mechanistic
explanation involves decomposing S into components or parts (“entities” in the par-
lance of Machamer et al. 2000), which exhibit characteristic patterns of causal inter-
action with one another, describable by generalizations Gi (describing “activities”).
Explanation then proceeds by showing how B results from these interactions, in a way
that satisfies the interventionist conditions on causal explanation. This in turn involves
showing how variations or changes in the parts or in the generalizations governing
them would result in alternatives to B, thereby allowing us to see how the behaviors of
the parts and the way in which they interact make a difference for (or are relevant to)
whether B holds. Part of the attraction of explanations that are mechanistic in this
sense is that this information about the parts and their interactions can guide more
fine-grained interventions that might affect behavior B—a point that is spelled out in
detail in Woodward (2002) and Kaplan and Craver (2011).
Explanations having this general character often, and perhaps even typically, satisfy
several other related conditions. One of these, which I have discussed elsewhere
(Woodward 2003), is a modularity condition: modularity requires that the different
causal generalizations Gi describing the causal relations among the parts should at
least to some degree be capable of changing independently of each other. Versions of
modularity are often explicitly or implicitly assumed in the “box (or node) and arrow”
representations that are adopted in many different disciplines for the representation of
mechanisms, with modularity corresponding to the idea that arrows into one node can
be disrupted without disrupting arrows into other nodes. Arguably, satisfaction of a
modularity condition is also required if we are to make sense of the idea that mechanistic
explanation involves decomposition of S into distinct “parts” with distinctive general-
izations characterizing the behavior of parts and the interactions into which they enter.
If the alleged parts can’t be changed or modified (at least in principle) independently of
each other or if no local changes can affect the pattern of interaction of some of the
parts without holistically altering all of the parts and their interactions, then talk of
decomposing the behavior of the system into interactions among its “parts” seems at
best metaphorical. In practice, the most straightforward cases in which ­modularity
conditions are satisfied seem to be those in which a mechanical explanation provides
information about spatio-temporally separate parts and their spatio-temporal relations,
since distinctness of spatio-temporal location is very closely tied to the possibility of
independent modifiability. For example, the spatio-temporal separation of the dif-
ferent classes of ion channels (Na and K channels) in the Hodgkin-Huxley model
discussed in Section 9 is one reason why it is natural to think of that model as involv-
ing a representation of independently modifiable parts that interact to produce the
action potential and thus to think of the Hodgkin–Huxley model as in this respect a
“mechanical” model.16
A second feature possessed by explanations that we most readily regard as mechanis-
tic (or at least a feature that, reasonably enough, philosophers favorable to mechanism
often take to be characteristic of mechanistic explanations) is a kind of sensitivity of
behavior to details (material and organizational) of implementation/realization/com-
position. Consider some ordinary machine (e.g., a clock). For such a machine to func-
tion as it was designed to, its components must be connected up to one another in a
relatively spatio-temporally precise way. Moreover, the details of the behavior of the
parts also matter—we do not expect to be able to replace a gear in a clock with a gear of
a different size or different spacing of teeth and get the same result. Indeed, this is why
we need to invoke such details to explain the behavior of these systems: the details
make a difference for how such systems behave. It is systems of this sort for which
“mechanistic” explanation (or at least the kind of mechanistic explanation that invokes
considerable implementational detail) seems particularly appropriate.17
Putting these requirements together, we get the claim that mechanical explanations
are those that satisfy the interventionist requirements in Section 2, which involve
decomposition into parts (where the notion of part is usually understood spatio-
temporally), and which are appropriate to systems whose behavior is sensitive to
details of material realization and organization. Since satisfaction of this last condi-
tion, in particular, is a matter of degree, we should not expect sharp boundaries
between mechanistic and non-mechanistic forms of explanation, although there will

16  My claim here is that modularity and decomposition into independently changeable parts are con-
ditions that are most readily satisfied when “part” is understood in spatio-temporal terms, but for the
purposes of this chapter, I leave open the question of whether decomposition (and hence mechanistic
explanation) might also be understood in a way that does not require spatio-temporal localizability of
parts. (Bechtel and Richardson (1993) were among the first to talk about this kind of decomposition,
which they called functional decomposition.) Cognitive psychology employs a number of different strat-
egies that seek to decompose overall cognitive processes into distinct cognitive processes, components, or
modules (e.g., Sternberg 2001), but typically without providing information about the spatial location of
those parts, although usually there is appeal to information about temporal relationships. Assessment of
these strategies is beyond the scope of this chapter, although I will say that the strategies require strong
empirical background assumptions and that proposals about decompositions of cognitive processes into
components often face severe under-determination problems in the absence of information about neural
realization (which does provide relevant spatial information). (See also Piccinini and Craver 2011 for a
discussion of closely related issues.)
17  These features of sensitivity to details of organization and composition as characteristic of mechanical
explanation are also emphasized in Levy (forthcoming) and in Levy and Bechtel (2013). Woodward (2008)
also distinguishes systems that are realization sensitive from those that are not, although not in the context
of a discussion of mechanistic explanation.

be clear enough cases. (The absence of such sharp boundaries is itself one reason for
thinking that it is misguided to suppose that only theories meeting mechanistic con-
straints explain—the notion of mechanistic explanation is not sufficiently sharply
bounded to play this sort of demarcational role.)
We have already noted many cases in which, in contrast to the mechanistic possibil-
ities just described, we need to invoke only very limited information about the details of
material realization or spatio-temporal organization to explain aspects of the behavior
of a system. For example, the explanation of universal behavior near critical points in
terms of the renormalization group does not appeal to the details of the composition of
the particular materials involved, for the very good reason that such behavior does not
depend on these details. In part for this reason, it seems unintuitive to describe the
renormalization group explanation as “mechanistic.” Certainly it is not mechanistic in
the sense of that notion employed by writers like Craver. Nonetheless, the renormaliza-
tion group analysis seems explanatory. Previous sections have also noted the existence
of many “higher-level” explanatory neurobiological models and theories that abstract
away from many neural details. To the extent such models are relatively insensitive to
material or organizational details of implementation or to the extent they do not involve
decomposition of the system modeled into distinct parts with characteristic patterns of
interaction, the models will also seem comparatively less mechanistic.
As an additional illustration, consider the very common use of models involving
recurrent networks with auto-associative features to explain phenomena like retrieval
of memories from partial cues. Such models represent neurons (or perhaps even
populations of neurons) as individual nodes, the connections of which form directed
cycles, with every node being connected to every other node in a fully recurrent net-
work. In a separate training phase, the network produces, via a process of Hebbian
learning, an output which resembles (imperfectly) some previously acquired trained
pattern. This output is then fed back into the network, resulting in a pattern that is
closer to the trained pattern. During the retrieval phase, presentation of just part of
the input pattern will lead, via the auto-associative process just described, to more
and more of the learned pattern. The process by which the network settles into a state
corresponding to this previously learned pattern can be understood as involving
movement into an attractor state in an attractive landscape, the shape of which is spe-
cified by the dynamical equations describing the operation of the network. Networks
of this sort have been used to model a number of psychological or neurobiological
processes including the recall of complete memories from partial cues (see, e.g.,
Trappenberg 2002). Processing of this kind is often associated with brain structures
such as the hippocampus. Such models obviously abstract away from many neural
details, and in this respect are relatively non-mechanistic in Craver’s sense.18 On my

18  To the extent that such models explain in terms of generic facts about the structure of attractive land-
scapes and so on, they also involve abstraction away from the details of individual trajectories taken by the
system in reaching some final state. That is, the explanation for why the system ends up in some final state
has to do with, e.g., this being in a basin of attraction for the landscape, with the details of the exact process

view, however, we should not conclude that they are unexplanatory for this reason
alone. Instead, their explanatory status depends on whether they accurately capture
the dependency relations in real neural structures. This depends in turn on whether
the modeled neural structures have the connectivity of a recurrent network, whether
they involve Hebbian associative learning, whether there is empirical support for sep-
arate training and retrieval phases, and so on.19
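As a purely illustrative aside (not part of Woodward’s or Trappenberg’s exposition), the retrieval process just described can be sketched in a few lines of Python. The sketch assumes a standard Hopfield-style formulation: binary (+1/−1) patterns, a Hebbian outer-product learning rule, and iterated updating that drives a degraded cue into the attractor corresponding to the stored pattern. All names, sizes, and parameter values below are illustrative assumptions.

```python
import numpy as np

# Minimal Hopfield-style auto-associative network (illustrative sketch only).
# Patterns are vectors of +1/-1; weights are set by a Hebbian outer-product
# rule; retrieval iterates the update rule until the state stops changing.

rng = np.random.default_rng(0)

def train(patterns):
    """Hebbian learning: W is the sum of outer products, with zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / n

def retrieve(W, cue, steps=20):
    """Iteratively update the state; the network settles into an attractor."""
    s = cue.copy()
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Store one random pattern, then present a degraded cue (30% of units flipped).
pattern = rng.choice([-1, 1], size=100)
W = train(pattern[None, :])
cue = pattern.copy()
flip = rng.choice(100, size=30, replace=False)
cue[flip] *= -1

recalled = retrieve(W, cue)
print("overlap with stored pattern:", np.mean(recalled == pattern))
```

With these (assumed) settings the degraded cue is pulled back onto the stored pattern (overlap 1.0), which is just the settling-into-an-attractor behavior described in the text.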

8.  Mechanism, Predictivism, and Instrumentalism


So far I have not addressed an important set of objections, due to Craver and others, to
the ideas just defended. These objections turn on the claim that if we abandon the idea
that explanation (at least in neuroscience) must be mechanistic, we lose the ability to
make various important distinctions. For example, we lose the distinction between, on
the one hand, purely descriptive or phenomenological models, and, on the other hand,
explanatory models. We also lose the related distinction between the use of models,
construed instrumentally merely for predictive purposes, and their use under realistic
construals to explain. Craver argues, for example, that without the mechanistic con-
straints on explanation that he favors, we will be forced to regard Ptolemaic astronomy
or models that merely postulate correlations as explanatory. Although it should be
obvious from my discussion above that I disagree with many of these claims, I also
think that they raise many interesting issues that are especially in need of discussion
within the interventionist framework, since they often turn on what can be a possible
target of intervention, when a model can be thought of as telling us what would happen
under interventions, and when a model provides information about dependency
relations in the relevant sense. In what follows I explore some of the different ways in
which, from an interventionist perspective, a model may be merely descriptive or
phenomenological rather than explanatory. This will give us a sort of catalog of differ-
ent ways in which models can be explanatorily deficient, but, as we shall also see, a
model can avoid these deficiencies without being mechanical.
(1)  Obviously one straightforward way in which the interventionist requirements
can be violated is that the factors cited in some candidate explanans correspond
to “real” features F in the world, but the model should not be understood as even
attempting to describe how explanandum E responds to interventions on those features
or as describing a dependency relation (in the relevant sense) between F and E. This
will be the case, for example, for models in which the relationship between F and E
is  (and is understood to be) purely correlational rather than causal. For example,

by which the system falls into that state being omitted from the model. This is arguably another respect in
which the system departs from some of the expectations we have about mechanical explanations, since
specific trajectories are often taken to matter for these.
19  For additional relevant discussion concerning a different neural network model (the Zipser–Andersen
gain field model) see Kaplan (2011, section 7).

a  model might represent the correlation between barometer readings B and the
occurrence S of a storm, and this representation may be descriptively accurate and
predictively useful even though the B–S relationship is not causal. The non-causal,
non-explanatory status of such a model follows, within the interventionist framework,
from the fact that the model does not tell us how (or even whether) S will change
under interventions on B or about a dependency relation between B and S. Note that
reaching this judgment does not require acceptance of the idea that only models that
are mechanistic in the sense of Section 7 or that provide lots of implementational
detail explain: a model that abstracts away from such detail can nonetheless describe
relationships that are causal in the interventionist sense (or that are explanatory in the
sense of describing dependency relationships) and a purely correlational model
might include lots of detail about the material composition of the modeled system
and the spatio-temporal organization of its parts.
(2)  A different, and in some respects more interesting, kind of case arises when a
theory or model is interpreted to (or purports to) describe dependency relationships
but these completely fail to track the actual dependency relations operative in the sys-
tem whose behavior the theory purports to explain. Of course models of this sort can
nonetheless be descriptively accurate and predictively useful to the extent that they
correctly represent correlational patterns among variables.
A plausible example of this possibility, discussed by Kaplan and Craver (2011), is
Ptolemaic astronomy. According to this theory (at least in the cartoon version we
consider here) the planets move as they do because they are carried around in their
orbits by revolving crystalline spheres centered on the earth, or by additional crys-
talline spheres (“epicycles”) whose centers move on the geocentric revolving spheres.
It is uncontroversial that nothing like such spheres exists and that the motions of the
planets do not depend on their being carried around on such spheres. There is thus
no legitimate interventionist interpretation of Ptolemaic astronomy as correctly tell-
ing us what would happen to the planetary orbits if interventions were to occur on
such spheres (e.g., by changing their rates of revolution or disrupting them in some
way). Nor does this theory provide other sorts of explanatorily relevant information
about dependency relationships.20 It follows that Ptolemaic astronomy does not
qualify as an explanatory theory within the interventionist framework. It is a purely
phenomenological (or descriptive) theory, although for somewhat different reasons
than the barometer reading/storm “theory” discussed in Section 1.
20  I would thus reject (as a general condition on explanation) condition (a) in Kaplan and Craver’s 3M
requirement, which holds that “[i]n successful explanatory models in cognitive and systems neuroscience
(a) the variables in the model correspond to components, activities, properties, and organizational features
of the target mechanism that produces, maintains, or underlies the phenomenon” (p. 611). I am much more
sympathetic to their second condition (b), when properly interpreted: “(b) the (perhaps mathematical)
dependencies posited among these variables in the model correspond to the (perhaps quantifiable) causal
relations among the components of the target mechanism.” I will add, though, that condition (a) may have
more plausibility when construed more narrowly as a requirement on what it means for an explanation to
be “mechanistic.”

The case of Ptolemaic astronomy seems clear enough but there are many other
examples involving models with “unrealistic” elements that raise subtle and interesting
questions regarding their explanatory status. Although I lack the space for detailed
discussion, my general view is that a model can contain many features that do not
directly correspond to or mirror features of a target system but nonetheless be explana-
tory in virtue of correctly characterizing dependency relations governing that system.
On my view, what matters most for purposes of explanation is that the model correctly
characterizes dependency relations relevant to the explananda we are trying to explain.
That the model may misrepresent other dependency relations relevant to other
explananda that the model does not attempt to explain or that it mischaracterizes in
some respects (or cannot be taken literally in what it says regarding) the entities or
properties standing in those relations often matters much less from the point of view
of explanation. To take a simple example, a network model in which neurons are
represented as interconnected dimensionless points may nonetheless correctly describe
what would happen to the network or how it would behave under various changes in
the inputs delivered to those neurons (so that the model is explanatory with respect to
these explananda), even though it is of course true that neurons are not dimensionless
points and some predictions based on this assumption will be obviously mistaken. As
another illustration, it is arguable that Bohr’s model of the atom had some explanatory
force in virtue of correctly representing the dependency of the emission spectrum for
hydrogen on transitions between electronic energy levels (and the dependency of the
latter on the absorption of photons), even though in other respects the model was
representationally quite inaccurate. For this reason, I do not think that it is correct to
claim that if the model is to provide satisfactory explanations all of the variables in the
model must correspond directly to entities or properties that are present in the target
system.21 Models can successfully convey dependency information in surprisingly
indirect ways that do not require this sort of mirroring or correspondence of individ-
ual elements in the model to elements in the world. I acknowledge that this introduces
a certain vagueness or indeterminacy into assessments of explanatory status (when is a
model so far “off ” in what it claims about the target system that we should regard it as
unexplanatory?), but I believe this to be unavoidable.
(3)  Yet another possibility is that a theory or model might be merely descriptive in
the sense that it describes or summarizes a pattern in some body of data in terms of
variables X, Y, etc., but without any suggestion that these variables are related causally
in the interventionist sense. For example, a model according to which the distribution
21
  To put the point in a slightly different way, whether a model gets the underlying ontology of the target
system right and whether it conveys correct information about dependency relations and the answers to
what-if-things-had-been-different questions are much more independent of one another than many
philosophers suppose. On my view, it is the latter (getting the appropriate relationships rather than the
relata) that matter for explanation. A version of the wave theory of light that conveys correct information
about relationships (including intervention-supporting relationships) involved in reflection, refraction,
diffraction, and so on should be regarded as explanatory even if the theory represents waves themselves as
mechanical displacements in an ether.

of velocities of molecules in a gas is Gaussian is merely descriptive in this sense, as is
a model according to which the receptive fields of neurons can be represented by the
difference between two Gaussians—an example considered in Kaplan and Craver
(2011). A closely related possibility is that the model simply describes some regularly
occurring phenomenon but without telling us anything about the factors on which the
occurrence of that phenomenon depends, as was the case for the “phenomenological”
representation of neural tuning curves discussed in Section 2.
(4)  The model might describe a predictively useful relationship which involves
one or more variables that are not, for logical or conceptual reasons, possible targets
for intervention. An illustration (due to Kaplan 2011) is provided by the Balmer for-
mula which gives the wavelength (λ) of lines in the absorption/emission spectrum of
hydrogen in terms of the relation: λ = B m2/(m2 − 4), where B is a constant and m an
integer greater than two. This relationship is not a causal relationship, at least accord-
ing to the interventionist account, since the notion of intervening to change the value
of m from one integral value to another does not make sense. We cannot interpret the
Balmer formula as telling us what would happen to λ under interventions on the
number m. Nor does this seem to be a case of a dependency relationship of any other
kind relevant to explanation.
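For concreteness, here is a minimal computation of the Balmer wavelengths. The value of B is my own assumption (roughly 364.6 nm); the text above only says that B is a constant.

```python
# Quick check of the Balmer formula lambda = B * m**2 / (m**2 - 4).
B = 364.6  # nm (assumed value of the empirical constant)
for m in (3, 4, 5, 6):
    wavelength = B * m**2 / (m**2 - 4)
    print(f"m = {m}: {wavelength:.1f} nm")
# m = 3: 656.3 nm, m = 4: 486.1 nm, m = 5: 434.0 nm, m = 6: 410.2 nm
```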
(5)  Another possible way in which the interventionist requirements can fail is that
a theory or model can be so unclear or non-committal about how some of the terms
or variables in the theory are to be interpreted (or what features they correspond to in
the world) that we have no conception of what would constitute an intervention on
those features, what would happen under such an intervention, or even what would
be involved in those features varying or being different. (This possibility contrasts
with the case of Ptolemaic astronomy described under (2) since it seems clear in a
general way what crystalline spheres would be were they to exist, and what would be
involved in their varying in diameter and position, and so on.) An extreme case is a
theory which is just a mathematical structure or an entirely uninterpreted set of equa-
tions relating certain variables. To the extent that the theory does not specify at all
what structures or relations in the world are supposed to correspond to the depend-
ency relationships postulated in the theory, then, according to the interventionist
framework, it is not even a candidate for an explanatory theory. (For example, the
Hodgkin–Huxley model, considered simply as a set of equations without any physical
interpretation, is not even a candidate for an explanation.) Another, less extreme pos-
sibility along these lines is that the theory does not contain completely uninterpreted
variables and relationships but instead provides some characterization of these, per-
haps giving them a semantic label or even assigning a number to them, estimated
from other measured quantities, but nonetheless leaves their physical or worldly
interpretation sufficiently underspecified that we lack any clear conception of what
would be involved in intervening on them or what corresponds in the target system
to the dependency relations in which they figure. The “gating” variables fitted by
Hodgkin and Huxley to the expressions describing the voltage and time dependencies
of sodium and potassium channels in their model of the generation of the action
potential had something of this character, as discussed in Section 9.
Another related possibility is represented by the treatment of bimanual coordin-
ation by Haken et al. (1985, the HKB model), which is championed by Chemero and
Silberstein (2008) as an alternative to more standard mechanistic or computational
accounts of psychological and neuroscience explanation. When subjects attempt to
move their left and right index fingers in phase in time with a metronome, their move-
ments are found to be related by

dØ/dt = −a sin Ø − 2b sin 2Ø (4.2)

where Ø is the relative phase angle between the two fingers and b/a reflects the finger
oscillating frequencies. It is readily seen that this equation permits just two stable
outcomes, when either Ø = 0 or Ø = 180 degrees, corresponding to the movement of
fingers either in-phase (parallel, like windshield wipers) or in anti-phase. As b/a
decreases (corresponding to faster finger oscillation), subjects are unable to maintain
the anti-phase movement and switch to the in-phase movement, with this being
regarded as a “phase transition.” This behavior is reflected in the basins of attraction
associated with (4.2); there are two attractors (at Ø = 0 or Ø = 180) when b/a is relatively
large and just one when this ratio is small.
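The bistability just described is easy to verify numerically. The following sketch (not drawn from Haken et al. or from this chapter) checks the stability of the in-phase and anti-phase fixed points of (4.2) by evaluating the sign of the derivative of its right-hand side; the particular values of a and b are illustrative assumptions, and Ø is written phi in the code.

```python
import numpy as np

# Illustrative sketch of the HKB relative-phase dynamics in equation (4.2):
# dphi/dt = -a*sin(phi) - 2*b*sin(2*phi).
# A fixed point phi* is stable when the derivative of the right-hand side,
# f'(phi*) = -a*cos(phi*) - 4*b*cos(2*phi*), is negative.

def f_prime(phi, a, b):
    return -a * np.cos(phi) - 4 * b * np.cos(2 * phi)

def stable_phases(a, b):
    """Return the stable phases (in degrees) among the candidates 0 and 180."""
    stable = []
    for phi_deg in (0.0, 180.0):
        if f_prime(np.radians(phi_deg), a, b) < 0:
            stable.append(phi_deg)
    return stable

# Large b/a: both in-phase (0 deg) and anti-phase (180 deg) are stable.
print("b/a = 1.0 :", stable_phases(a=1.0, b=1.0))   # [0.0, 180.0]
# Small b/a (fast oscillation): only the in-phase pattern survives.
print("b/a = 0.2 :", stable_phases(a=1.0, b=0.2))   # [0.0]
```

With the assumed values, lowering b/a below 1/4 removes the anti-phase attractor, which is the “phase transition” reported in the experiments.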
I agree with Kaplan and Craver (2011) that it is difficult to see this as a causal or as
an explanatory model.22 To begin with, it does not purport to tell us anything about
the neural features on which the described behavior depends—in this respect, it
seems like a non-starter as an example of neuroscientific or psychological explan-
ation and, contrary to what Chemero and Silberstein claim, a dubious candidate for a
replacement for such explanations. Because there is no accompanying neural account
(indeed, as far as the model itself goes, no claim about whether such an account even
exists), it is unclear how, if at all, to interpret the HKB model as a causal or explana-
tory model. As far as the model and the accompanying experimental data go, the
restricted possible states of coupled finger movement and the “phase transition”
might be due to some common neural/nervous system cause, in which case these
aspects of the phenomenon will have more of the character of a correlation among
joint effects than a causal relationship. Indeed, Kelso himself in his 1984 paper pro-
poses that the relation (4.2) may be regarded as “constrain[ing] possible neural
explanations” (p. 93) of the facts about finger movement he describes, which suggests
that (4.2) has more of the status of a potential explanandum for a genuinely explana-
tory theory (or an empirical constraint on such a theory) grounded in more general
22  Although I do not regard the HKB model as a plausible example of an explanatory psychological/
neuroscientific model rooted in dynamic systems theory, I emphasize, as argued above, that in my view it
would be a mistake to suppose that all dynamic systems accounts of brain function in terms of attractor
landscapes and the like are non-explanatory. In addition to the theories of memory retrieval mentioned
above, other plausible candidates for explanatory models involving dynamic systems theory include
accounts of categorization and decision making of the sort described in Rolls and Deco (2010).

features of the brain or nervous system, rather than something which should itself
be regarded as explanatory.23
The cases 1–5 are all cases in which the interventionist requirements for explanation
are not met. Note, however, that none are cases in which a theory or model fails to be
explanatory simply because it fails to provide extensive mechanistic or implementa-
tional detail. Instead, at least from an interventionist perspective, the models under
1–5 fail to be explanatory for other, independent reasons—because they invoke merely
correlational relationships or non-existent or woefully underspecified dependence
relations and so on. In other words, we can explain what is explanatorily defective
about such models in terms of violations of basic interventionist/dependency require-
ments on explanation without invoking the idea that all explanations must be mechanistic.
To the extent that a model avoids the problems described under 1–5 above, and satisfies
the interventionist constraints on explanation, it will count as explanatory even if it
fails to be mechanistic. For example, depending on the details of the case, a recurrent
network model for auto-associative memory may describe genuine dependence
relations in a target system (a brain) in the interventionist sense, rather than just
correlations, and the items related via these dependence relations—neurons, connections
among neurons, and neural activity—may be “real” and possible objects of intervention.
It may also be clear enough what would be involved in intervening on such a structure
(e.g., by changing its input or more dramatically by lesioning it) so the model is not one
in which it is left completely unclear or unspecified as to what in the world corresponds
to relevant variables. Similarly, it may be clear enough what the relationships postulated
in the model imply about what would happen in the target system under various
manipulations or perturbations. On the other hand, the model lacks implementational
or mechanistic detail, thus illustrating the independence of this feature from the kinds
of deficiencies represented by 1–5.

9.  The Hodgkin–Huxley Model


Many of the themes discussed above are illustrated by the Hodgkin–Huxley (hereafter
HH) model, to which I now turn. This has been the subject of a considerable recent
discussion, with some (e.g., Craver 2008 and Bogen 2008) regarding the model as
unexplanatory (or in Craver’s case, at best an explanation sketch) because of its failure
to provide various sorts of mechanistic detail and others (Weber 2008; Levy 2014)
defending the explanatory status of the model. As will be seen, my own assessment
is very close to that of Weber and Levy, and I will draw on both of their discussions
in what follows.

23  I will also add that the motivation for (1) in Haken et al.’s (1985) paper also does not seem to have
much to do with distinctively causal considerations. Instead, (4.2) is motivated by perceived “analogies”
(rooted in “synergetics”) with the behavior of other sorts of physical systems exhibiting phase transitions,
with (1) described as the “simplest” equation (p. 47) of a certain general form subject to certain symmetry
constraints that fits the observed data describing finger movements.

I take the goal of HH’s 1952 paper to be the presentation of a model of the generation
of the action potential in an individual neuron. The experiments HH report were con-
ducted on the giant axon of the squid, although it is assumed that many of the features
of the model apply much more generally. The explanandum of the model is a phenom-
enon or stylized fact (in the sense described in Section 3) having to do with the shape of
the action potential—what Trappenberg calls the “prototypical form of the action
potential” (p. 33). This involves a change in the potential across the neuron’s mem-
brane which follows a characteristic pattern: first rising sharply to a positive value from
the resting potential of the neuron (depolarization) and then decreasing sharply to
below the resting potential, followed by a recovery to the resting potential. The action
potential results from changes in the conductance of the membrane to sodium and
potassium ions, with the rise in potential being due to the opening of Na channels in
the membrane leading to the influx of Na ions and the subsequent fall being due to
the inactivation of the sodium channels approximately 1ms after their opening and the
opening at this point of the potassium channels. These ionic currents are responsible
for the patterns of change in membrane potential. Furthermore, the channels them-
selves are “voltage-gated” with the channel resistances/conductances being influenced
by the membrane potential.
The basic idea of the HH model is that structural features of the neuron responsible
for the action potential may be represented by a circuit diagram with the structure in
Figure 4.1. This is a circuit in parallel with (reading from left to right) a capacitor which
stores charge (the potential across the membrane functions as a capacitor), a channel24
that conducts the sodium current INa, with an associated time and voltage dependent
conductance gNa, a channel that conducts a potassium current IK with time and voltage
dependent conductance gK, and a leakage current Il which is assumed to be time and
voltage independent. The relationships governing these quantities are represented by
HH by means of a set of differential equations. First, the total membrane current I is
written as the sum of the capacitor current and the total ionic current Ii:
I = CM dV/dt + Ii. (This is just a version of Kirchhoff’s law for the conservation of
charge.)
The ionic current in turn is the sum Ii = INa + IK + Il.
These last three currents can be written as INa = gNa (V − VNa), IK = gK (V − VK), and Il = gl
(V − Vl), where VNa, VK, Vl are the equilibrium membrane potentials. These are just ver-
sions of Ohm’s law, with the currents being equal to the products of the conductances
and the difference between the membrane potential and the equilibrium potential. The
ionic conductances in turn are expressed as the product of the maximum conductances
(which I will write as G*Na, etc. for the channels) times “gating” variables n, m, and h:

24  As noted above, the channels which these variables in the HH model describe are really (from a
molecular perspective) aggregates or classes of channels of various types (Na, etc.) rather than individual
ion currents.

[Figure 4.1 is an equivalent-circuit diagram: between the outside and the inside of the membrane, the capacitance CM lies in parallel with three branches carrying the currents INa, IK, and Il, each branch consisting of a resistance (RNa, RK, Rl) in series with a battery (ENa, EK, El).]

Figure 4.1  The Hodgkin–Huxley model. Based on Hodgkin and Huxley (1952), p. 501.

GK = G*K n4

GNa = G*Na m3 h

The underlying picture is that the passage of ions through a channel requires the open-
ing of a number of distinct hypothetical structures or “gates,” with the gating variables
representing the probability that these are open. For example, n represents the prob-
ability that a gate in the potassium channel is open; it is assumed that four distinct gates
must be open for the passage of the potassium current, and also that these gates open
independently, so that n4 is in effect the probability that the potassium channel is open.
G*K n4 thus yields an expression for the active or available conductance as a function of
the maximum conductance. Variables m and h have similar interpretations: the Na
current requires that three gates, each with probability m, be open and that a distinct
gate also be open with probability h. Other equations, not reproduced here, describe
the time derivatives of the gating variables n, etc. as functions of other variables such as
the voltage dependent opening and closing rates of the gates.
Combining these equations yields:

I = CM dV/dt + G*K n4 (V − VK) + G*Na m3 h (V − VNa) + Gl (V − Vl) (4.3)

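Purely as an illustrative aside, the way equation (4.3) and the gating equations jointly generate the action potential can be sketched with a simple forward-Euler simulation. Nothing in the block below is taken from the chapter: the rate functions for m, n, and h and all parameter values are the standard textbook ones (in the modern sign convention, with a resting potential near −65 mV), and they stand in for the expressions HH actually fitted.

```python
import numpy as np

# Minimal forward-Euler integration of the Hodgkin-Huxley equations, as a
# sketch of how equation (4.3) plus the gating equations generate a spike.
# Parameter values and rate functions are standard textbook assumptions,
# not taken from this chapter.

C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3          # uF/cm^2, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4                   # mV

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 50.0                                    # ms
steps = int(T / dt)
V = -65.0                                             # start near rest
m = alpha_m(V) / (alpha_m(V) + beta_m(V))             # steady-state gates
h = alpha_h(V) / (alpha_h(V) + beta_h(V))
n = alpha_n(V) / (alpha_n(V) + beta_n(V))

trace = np.empty(steps)
for i in range(steps):
    t = i * dt
    I_ext = 10.0 if t > 5.0 else 0.0                  # injected current, uA/cm^2
    # Ionic currents: versions of Ohm's law, conductances scaled by the gates.
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K = g_K * n**4 * (V - E_K)
    I_L = g_L * (V - E_L)
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m        # equation (4.3), rearranged
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    trace[i] = V

print("peak membrane potential: %.1f mV" % trace.max())  # roughly +40 mV at the spike peak
```

With these (assumed) settings, the step of injected current elicits the characteristic spike: a sharp depolarization to roughly +40 mV, a fall below the resting potential, and recovery.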
G*Na and G*K are directly measured variables but, by HH’s own account, the gating
variables (and the variables occurring in the differential equations describing how
these change with time) were chosen on the basis that they fit the experimental data
reasonably well and were simple. Lacking information about the details of the molecu-
lar mechanisms governing the operation of the channels, HH in effect settled for
expressions (the quantities m, n, and h, the powers to which these are raised, and the
equations specifying the time course of these) that accurately empirically described
the channel conductances and, although they speculated on possible physical inter-
pretations for these expressions, they did not claim that they had successfully identified
the mechanisms responsible for them. They write “the success of the equations25 is
no evidence in favor of the mechanism of permeability changes [i.e. changes in mem-
brane conductance] that we tentatively had in mind when formulating them” (p. 541).
On the other hand, the passage just quoted is immediately followed by this remark
(also quoted by Weber and by Levy):
The point that we do consider to be established is that fairly simple permeability changes in
response to alterations in membrane potential, of the kind deduced from the voltage clamp
results, are a sufficient explanation of the wide range of phenomena that have been fitted by
solutions of the equations.  (p. 541)

Indeed, their entire 1952 paper is full of language strongly suggesting that they think of
themselves as having provided a causal explanation or a causal account of the action
potential. Their introductory paragraph says that their model “will account for con-
ductance and excitation in quantitative terms” (p. 500) and the first page of their paper
contains language like the following:
Each component of the ionic current is determined by a driving force which may conveniently
be measured as an electrical potential difference and a permeability coefficient.
(p. 500, emphasis added)
The influence of membrane potential on permeability can be summarized by stating: first, that
depolarization causes a transient increase in sodium conductance and a slower but maintained
increase in potassium conductance; secondly, that these changes are graded and that they can
be reversed by repolarizing the membrane.  (p. 500, emphasis added)

They go on to say that: “In order to decide whether these effects are sufficient to account
for complicated phenomena such as the action potential and refractory period, it is
necessary to obtain expressions relating the sodium and potassium conductances to
time and membrane potential” (pp. 500–1, emphasis added). The judgment that the
HH model is explanatory is repeated in many if not most of the papers and texts I con-
sulted that contain explications of the model. For example, in the passage quoted from
Herz et al. in Section 6, the HH model is described as “explaining” and providing “quan-
titative understanding.” McCormack (2003) writes that the experiments and model in
the 1952 paper “explained qualitatively and quantitatively the ionic mechanism by
which the action potential is generated” (p. 145). Koch (1999) writes that “the bio-
physical mechanisms underlying action potential generation in the cell body of
both vertebrates and invertebrates can be understood and modeled by the formalism

25  I follow Weber in interpreting the reference to “the equations” in this passage to the equations HH
propose describing the dependence of the channel conductances on m, n, and h and to the equations
describing the time dependence of the latter, rather than to Equation (4.3).

Hodgkin and Huxley introduced” (p. 144).26 Similarly, Trappenberg (2002, pp. 34ff)
repeatedly characterizes the HH model as describing the “mechanism” (or “minimal
mechanism”) for the generation of the action potential.
I follow both Weber and Levy in holding that the obvious way of reconciling HH’s
various remarks about the explanatory status of their model is to distinguish the
question of whether HH provided (i) an explanation of the generation of the action
potential from the issue of whether they provided (ii) a satisfactory explanation of
the operation of the ion channels and the molecular mechanisms involved in gating.
Both by their own account and judged in the light of subsequent understanding of the
operation of the ion channels, they do not provide (ii). However, as argued in previous
sections, this is consistent with their having provided an explanation of (i) the generation
of the action potential. Put at a very general level, this is because Equation (4.3) and the
associated model identify the factors (or at least many of the factors) on which the
generation of the action potential depends, although it does not successfully identify (or
at least very fully or adequately identify) the factors on which the operation of the ion
channels depends. The possibility of explaining (i) without explaining (ii) can be thought
of as reflection of the general point, made in previous sections in connection with
modeling strategies, that models work at different levels or scales, and a model can
explain some explananda at a particular scale or level (the overall behavior of a neuron
in generating an action potential) without explaining aspects of neural behavior at
other scales or levels (the molecular mechanisms associated with the ion channels).
As Trappenberg suggests, one way of thinking of the HH model is as a kind of min-
imal model of the generation of the action potential. The HH model shows that the
generation of the action potential depends on (or requires at a minimum), among
other things, the existence of at least two voltage gated and time-dependent ion
channels, as well as an additional static or leakage channel and a membrane that is
otherwise sufficiently insulated to act as a capacitor. However, given that such a struc-
ture is present and behaves appropriately, the presence of the specific mechanism by
which the ion channels in the giant squid operate is not required for the generation
of the action potential, as long as some mechanism or other that plays this role is
present. This in effect allows for the separation of explanatory tasks (i) and (ii) in the
manner that I have described.
This assessment of the explanatory status of the HH model also follows from the
interventionist requirements on explanation described in Section 2—a point that is
also developed by Weber (2008). For example, the HH model correctly describes what

26  I should also acknowledge, though, that this remark by Koch is followed shortly by a reference to the
“phenomenological model . . . of the events underlying the generation of the action potential” (pp. 144–5)
postulated by HH, which seems to mix together the claim that the model provides causal information
(“generation”) with a description of it as “phenomenological.” This makes sense if “phenomenological” in
this context just means “lacking lower level mechanistic detail” (which is not taken to imply that the
account is non-causal or non-explanatory). This is perhaps the sense in which classical thermodynamics is
a “phenomenological” theory.

will happen to the total current I under interventions on the transmembrane voltage V
(which can be accomplished experimentally via the voltage clamp device), and under
changes in the maximum sodium and potassium channel conductances, which can be
accomplished by techniques for molecular manipulation of these. Although the HH
model does not correctly describe the molecular mechanisms involved in the operation
of ion channels, it does claim, correctly, that it should be possible to intervene on these
classes of channels independently and to change the individual currents, INa and IK,
independently of each other and independently of the other terms in the equation. The
equation and associated model correctly describe what would happen to the total current
under such interventions. The HH model is thus (at least in this respect) modular and
effects a decomposition of the structure responsible for the membrane current into
components, each of which is governed by generalizations which operate independently
of the generalizations governing the other components. In this sense it seems fairly
natural to characterize the HH model as describing the “mechanism” of the action
potential, as a number of the writers quoted above do.
We may also note that, putting aside the role of the gating terms and the equations
governing them, the HH model does not exhibit any of the pathologies described in
Section 8 which render a model merely descriptive or phenomenological rather than
explanatory. In particular, the HH model does not (i) describe a relationship (between
I and terms like V, INa  . . .) that is purely correlational rather than causal in the inter-
ventionist sense. Moreover, with the partial exception of the gating terms, the relations
among other terms convey information about dependency relations in the target
system. For instance, V, the various currents, the membrane capacitance, and the
sodium and potassium conductances all refer to features of the world that are “real” in
the sense that they can be measured and manipulated and the model correctly
describes how these features are related (via intervention-supporting dependency
relations) to one another in the target system. In these respects, the HH model is very
different from the Ptolemaic model.

10. Conclusion
In this chapter I have attempted to use an interventionist framework to argue that
theories and models in neurobiology that abstract away from lower-level or implemen-
tational detail can nonetheless be explanatory. I have tried to show that this conclusion
does not require that one abandon the distinction between models that are explanatory
and those that are merely descriptive or predictively accurate, but non-explanatory.
Instead, interventionism provides a natural framework for capturing this distinction.
I have also argued that mechanistic models are just one possible form of explanatory
model; they are explanations that meet certain additional conditions that qualify them
as “mechanistic.” Models that are not mechanistic can nonetheless count as explanatory
if they correctly capture dependency relations that support interventions.27

27  Thanks to Mazviita Chirimuuta and David Kaplan for helpful comments on an earlier draft.

References
Batterman, R. (2009). “Idealization and Modeling,” Synthese 169: 427–46.
Batterman, R. and Rice, C. (2014). “Minimal Model Explanation,” Philosophy of Science 81:
349–76.
Bechtel, W. and Abrahamsen, A. (2005). “Explanation: A Mechanist Alternative,” Studies in
History and Philosophy of the Biological and Biomedical Sciences 36: 421–41.
Bechtel, W. and Abrahamsen, A. (2013). “Thinking Dynamically about Biological Mechanisms:
Networks of Coupled Oscillators,” Foundations of Science 18: 707–23.
Bechtel, W. and Richardson, R. (1993). Discovering Complexity: Decomposition and Localization
as Strategies in Scientific Research. Princeton, NJ: Princeton University Press.
Bogen, J. (2005). “Regularities and Causality; Generalizations and Causal Explanations,” Studies
in History and Philosophy of Biological and Biomedical Sciences 36: 397–420.
Bogen, J. (2008). “The Hodgkin-Huxley Equations and the Concrete Model: Comments on
Craver, Schaffner, and Weber,” Philosophy of Science 75(5): 1034–46.
Bogen, J. and Woodward, J. (1988). “Saving the Phenomena,” Philosophical Review 97(3): 303–52.
Chemero, A. and Silberstein, M. (2008). “After the Philosophy of Mind: Replacing Scholasticism
with Science,” Philosophy of Science 75: 1–27.
Chirimuuta, M. (2014). “Minimal Models and Canonical Neural Computations: The Distinctness
of Computational Explanation in Neuroscience,” Synthese 191: 127–53.
Craver, C. F. (2006). “When Mechanistic Models Explain,” Synthese 153: 355–76.
Craver, C. (2008). “Physical Law and Mechanistic Explanation in the Hodgkin and Huxley
Model of the Action Potential,” Philosophy of Science 75: 1022–33.
Dayan, P. and Abbott, L. (2001). Theoretical Neuroscience: Computational and Mathematical
Modeling of Neural Systems. Cambridge, MA: MIT Press.
Haken, H., Kelso, J., and Bunz, H. (1985). “A Theoretical Model of Phase Transitions in Human
Hand Movements,” Biological Cybernetics 51: 347–442.
Herz, A., Gollisch, T., Machens, C., and Jaeger, D. (2006). “Modeling Single-Neuron Dynamics
and Computation: A Balance of Detail and Abstraction,” Science 314: 80–5.
Hodgkin, A. and Huxley, A. (1952). “A Quantitative Description of Membrane Current and Its
Application to Conduction and Excitation in Nerve,” Journal of Physiology 117: 500–44.
Kaplan, D. (2011). “Explanation and Description in Computational Neuroscience,” Synthese
183: 339–73.
Kaplan, D. and Craver, C. (2011). “The Explanatory Force of Dynamical and Mathematical
Models in Neuroscience: A Mechanistic Perspective,” Philosophy of Science 78: 601–27.
Koch, C. (1999). Biophysics of Computation: Information Processing in Single Neurons. New York:
Oxford University Press.
Levy, A. (forthcoming). “Causal Organization and Strategies of Abstraction.”
Levy, A. (2014). “What Was Hodgkin and Huxley’s Achievement?” British Journal for the
Philosophy of Science 65: 469–92.
Levy, A. and Bechtel, W. (2013). “Abstraction and the Organization of Mechanisms,” Philosophy
of Science 80: 241–61.
Machamer, P., Darden, L., and Craver, C. (2000). “Thinking about Mechanisms,” Philosophy of
Science 67: 1–25.
McCormack, D. (2003). “Membrane Potential and Action Potential,” in L. Squire, F. Bloom,
S. McConnell, J. Roberts, N. Spitzer, and M. Zigmond, (eds), Fundamental Neuroscience. San
Diego, CA: Academic Press.
Piccinini, G. and Craver, C. (2011). “Integrating Psychology and Neuroscience: Functional
Analyses as Mechanism Sketches,” Synthese 183: 283–311.
Rolls, E. and Deco, G. (2010). The Noisy Brain: Stochastic Dynamics as a Principle of Brain
Functioning. Oxford: Oxford University Press.
Sternberg, S. (2001). “Separate Modifiability, Mental Modules, and the Use of Pure and
Composite Measures to Reveal Them,” Acta Psychologica 106: 147–246.
Strevens, M. (2008). Depth: An Account of Scientific Explanation. Cambridge, MA: Harvard
University Press.
Thomson-Jones, M. (2005). “Idealization and Abstraction: A Framework,” in M. Thomson-
Jones and N. Cartwright (eds), Idealization XII: Correcting the Model. Amsterdam: Rodopi,
pp. 173–217.
Trappenberg, T. (2002). Fundamentals of Computational Neuroscience. Oxford: Oxford University
Press.
Weber, M. (2008). “Causes without Mechanisms: Experimental Regularities, Physical Laws,
and Neuroscientific Explanation,” Philosophy of Science 75: 995–1007.
Woodward, J. (1979). “Scientific Explanation,” British Journal for the Philosophy of Science 30:
41–67.
Woodward, J. (2002). “What Is a Mechanism? A Counterfactual Account,” Philosophy of Science
69: S366–77.
Woodward, J. (2003). Making Things Happen: A Theory of Causal Explanation. New York:
Oxford University Press.
Woodward, J. (2008). “Comments on John Campbell’s Causation in Psychiatry,” in K. Kendler
and J. Parnas (eds), Philosophical Issues in Psychiatry: Explanation, Phenomenology and
Nosology. Baltimore, MD: Johns Hopkins University Press, pp. 216–35.

5
The Whole Story
Explanatory Autonomy and Convergent
Evolution

Michael Strevens

1.  Explanatory Disintegration


Look wherever you like in the higher-level sciences—in cognitive psychology, or
economics, or anthropology, or even much of biology—and you will find explanatory
models that are entirely unconcerned with lower-level mechanisms. In economics, you
find models of the consequences of economic decision making that have nothing to say
about the psychology of decision; in psychology you find models of decision making
that have nothing to say about the way that psychological processes are implemented in
the cerebral substructure; in neuroscience you may, depending on your corridor, find
quite a bit of cytology or chemistry, but typically no quantum chromodynamics.
This absence of the lower level is one aspect of what is called the explanatory auton-
omy of the high-level sciences. Explanatory autonomy is perhaps itself only one kind of
autonomy, to be set alongside methodological autonomy, metaphysical autonomy,
managerial autonomy, and so on. I focus on explanation in this chapter because it raises
the problem of the integration or the unity of the sciences in principle and in the long
term, the production of explanations being a scientific end and not merely a means.
The autonomy, or disunity, or disintegration of the scientific disciplines and sub-
disciplines poses a prima facie challenge to those of us who believe that we live in a coher-
ent world and that science’s overriding task is to give us a clear picture of that world.
If the subject matter is a unified whole, why is its scientific portrait so fragmentary?
Perhaps the world is not so unified—perhaps it is dappled (Cartwright 1999) or
disordered (Dupré 1993). It might be, for example, that the theories of a completed
cognitive psychology could not be translated into, or otherwise explanatorily related to,
the language of a completed neuroscience. To try to fit the two together would then
be like solving a jigsaw puzzle made up half from one set and half from another set of
differently shaped, differently cut pieces. Or it might be that psychological theories can
be translated into neuro-argot, but that the resulting sentences cannot be derived from
existing neuroscientific theories, either because of some sort of emergence or perhaps
because the theoretically interesting categories of neuroscience cross-classify the
interesting psychological categories.
If any of these visions of disunity is correct, then the present-day autonomy of the
sciences of the mind would be a sign of maturity: anticipating insurmountable barriers
to integration, the sciences have renounced vain pretensions to a seamless theory of
thinking; each consequently pursues its own ends in its own way.
The balkanization of our representations of the world need not, however, imply a
balkanized world. The sum total of being might be the integrated whole imagined by
Plato or Spinoza, yet our windows onto the world might be for some reason manifold
and variously shaped and tinted.
In this chapter, I juxtapose two such reasons. According to the first, the many
windows exist for practical reasons, to better organize the process of uncovering the
structure of a unified world. On this sort of view, explanatory autonomy is a temporary
arrangement: a completed high-level science will pay just as much attention to, will be
just as constrained by, and will derive at least as much of its explanatory richness from
low-level structures such as underlying mechanisms as to, by, and from principles of
high-level organization.
According to the second, the many windows exist because of the nature of explanation
itself: the lower-level facts are irrelevant, explanatorily speaking, to the higher-level
facts. On this sort of view, autonomy in present-day explanatory practice reflects
the  inherent structure of explanatory knowledge. The high-level sciences neglect
low-level mechanisms for principled reasons, and will continue to do so even in
their finished form. They need not, and indeed should not, draw on the lower-level
sciences for their explanatory content, nor need they be constrained by the lower-
level sciences’ explanatory organization of things.
At the heart of the chapter is an argument, based on convergent evolution, to prefer
the second picture to the first. I do not endorse the argument; rather, my aim is to
develop it and to investigate possible responses on behalf of those thinkers who feel the
explanatory pull of underlying mechanisms.

2.  One World; Many Sciences


Autonomous explanatory practices in a unified world: why? Let me describe the two
possible answers to be investigated in this chapter in more detail.
According to the first answer, the relative lack of integration between the higher-
and the lower-level sciences is motivated by the practical benefits of intellectual
specialization.
Suppose, for example, that in order to produce a complete explanation of some
economic phenomenon, we need both an economic story relating the explanandum to
various patterns of decision making and a psychological story that accounts for those
patterns (by relating them to more basic principles of thought, which are to be accounted
for in turn by the topology of neural interchanges, their physical substrate, and
ultimately fundamental-level physics). The economic and the psychological explan-
ations are, in that case, two panels of a larger mural—the big picture that constitutes the
whole story, the full explanation of why that phenomenon occurs.
Each part of the mural draws on particular skills. Some require advanced mathematics,
some the manipulation of temperamental experimental setups. Better, then, to divide
the work among teams each specializing in intellectual labor of that variety—to give the
game theory to the economists, the laws of mental logic to the psychologists, the brain
circuitry to the neuroscientists, and so on.
The economists are ignoring the psychologists, in that case, not because psychology
is irrelevant to their explanatory enterprise, but because the efficient division of labor
requires a certain intellectual compartmentalization: the point is precisely that by not
thinking about the psychology, however relevant it may be to the economic master
narrative, you make yourself a better because more focused game theorist, and so con-
tribute to the narrative a more perfect game-theoretic tableau. It is only one thread
among many in the explanatory action, but by neglecting the thread’s final purpose, you
make a stronger, more flexible, more colorful contribution to the tapestry of knowledge.
By your deliberate neglect of the other strands of the big story, you contribute more
surely, more quickly, more reliably to its telling.
On this view of things, once the explanatory work is done, there is notionally a great
gathering. Each research group, each department, comes bearing its own particular
pieces of explanation, and then as the assembled scientists watch, the parts are assem-
bled into an explanatory entirety. Contemplating the big picture, the whole story, each
researcher at last, for the first time, understands fully the phenomena they have been
studying all their life.
What are these “pieces of explanation”? They are what Hempel (1965, §4.2) called
partial explanations or explanation sketches, that is, legitimate explanations from
which some pieces or details are omitted.[1] The omission takes a particular form in the
explanatory products of the high-level sciences: descriptions of mechanisms are
replaced by black boxes, that is, very roughly, job specifications paired with assertions
that something or other gets the job done. The game-theoretic economist specifies, for
example, that something in people’s heads computes the optimal course of action in
such and such a context, and something else makes the plan a reality, without saying
what these somethings, these underlying mechanisms, are or how they work.
Because a model of the underlying mechanisms is nevertheless necessary for a full
understanding of the economic phenomena, practically inspired black-boxing results
in an explanation that is at best partial, at best a sketch. It leaves an epistemic hole, but is
desirable all the same so that science may enjoy the efficiencies made possible by the
division of labor.

[1] A partial explanation in Hempel's sense is a complete explanation of some aspect, but not every aspect, of the explanandum. A complete explanation of why Mount Vesuvius erupted in 79 CE is therefore a partial explanation of why it erupted in October of that year; it would become a complete explanation of the October eruption were sufficient details added to the explanatory model to entail the October date. An explanation sketch, by contrast, is not a legitimate explanation of anything, but rather an outline or template or even just a fragment of a complete explanation.
There are two components to the view I am here describing, then: first, the thesis that
explanations are incomplete without models of the relevant underlying mechanisms,
and second, the thesis that although black-boxing explanations are explanatorily
unfinished, the most efficient organization of explanatory inquiry will mandate the
production, for purely practical reasons, of precisely such things—with the rider that
there is a further phase in scientific explanation construction in which the various
explanatory parts are woven together to create the whole explanatory story.
Who advocates such a view? That a description of underlying mechanisms com-
pletes an explanation, or to put it another way, that describing such mechanisms
increases a model’s explanatory value, is a postulate popular among those who hope to
integrate psychology and neuroscience. Piccinini and Craver (2011, §7) write that in
the explanatory enterprise “full-blown mechanistic models are to be preferred”; Kaplan
(2011, §2) also favors explanations that “delineate the underlying mechanisms.” For
these writers, cognitive psychologists produce explanatory templates that are to be
filled in, when the time comes, by a mature neuroscience.
I myself think along the same lines. An explanation that black-boxes is leaving out
something explanatorily important; at the same time, black-boxing is for reasons of
efficiency ubiquitous in the high-level sciences—not just in the cognitive sciences but
everywhere, down to and including much of physics (Strevens 2008, §5.4, 2016).
If this is correct, then the fact of explanatory autonomy—the fact that explanatory
inquiry in the sciences is modular, even fragmented—provides no more reason to infer
that the world itself is fragmented, than the modularity of the various parts of a Shenzhen
assembly line provides reason to think that there is no finished product.

* * *
There is another way to reconcile explanatory autonomy with a unified world that
makes no appeal to practical considerations; it will serve as the chief rival to the practical
view in this chapter.
On this second view, the lower-level details ignored by a high-level inquiry are
typically explanatorily irrelevant to the phenomena under investigation. An econo-
mist’s neglect of psychological details, for example, is on this approach due to the
irrelevance of the mind’s decision-making mechanisms to economic phenomena. That
is not to say that nothing about the mind is relevant to economics, but rather to say that
what is relevant is captured by the appropriate black box: it matters that the mind finds
the optimal move in the game, but not how it finds that move. The explanatorily best
economic model will therefore contain a black box asserting the that without describing
the how.
One level down, the story is repeated: the black-boxing of brain-related details by a
cognitive psychologist is, far from an embarrassment, an omission mandated by the
canons of explanatory relevance. In explaining some cognitive capacity, it is highly
relevant that the thinker is using this inferential rule rather than that rule, but irrelevant
how the all-important rule is implemented. Thus, even a completed cognitive psychology
will float atop the seething neural sea.
According to this view, then, the high-level sciences are explanatorily autonomous
from one another and from the lower-level sciences because they attempt to explain
different sets of phenomena and because the standards of explanatory relevance
judge each of these sets of phenomena irrelevant—except in black-box form—to
the explanation of most or all of the others. Among philosophers who think this
way are Franklin-Hall (forthcoming) and, to some extent, Garfinkel (1981). I myself
believe that it has something to offer, though it falls far short of accounting for all
instances of black-boxing in the high-level sciences, since most are simply a matter
of labor division.
A related view attributes autonomy not to a single standard of explanatory
­relevance making different judgments about different classes of explananda, but to
different domains having distinct standards for relevance. Thus, there is not a single
relevance-determining principle that rules the details of psychological mechanisms
irrelevant to economic phenomena and the details of neural mechanisms irrelevant
to psychological phenomena. Rather, the economists have their own, distinctive,
idiosyncratic relevance principle that discriminates against psychology, while the
psychologists have a rule, different from the economists’, that discriminates against
neurons in turn. For dialectical purposes in what follows, the two relevance approaches—
one positing a single standard for relevance and one positing a standard for every
scientific domain—can be lumped together. I take the simpler single-standard version
as my paradigm.

* * *
How to distinguish these two explanations of autonomy? Is autonomy a manifestation
of the division of cognitive labor, or is it legislated by the canons of explanatory rele-
vance? Or neither?
Many well-known examples of explanatory autonomy seem to be accounted for
equally well on either view. Economists, as Fodor (1974) remarked, show a studied
neglect of the finer details of the mechanisms of currency circulation. They have
nothing to say, for example, about the machinery used by automated tellers to dis-
pense banknotes, or about the queuing system inside the bank, though both may play
an important role in bank runs. While this certainly establishes that the high-level
sciences are uninterested in calling the plays molecule by molecule, it does not reveal
the foundation of their disregard. Are they simply leaving the details to the paper
engineers and retail consultants so that they themselves can focus more intently on
the workings of their macroeconomic models, though they recognize that both kinds
of facts are part of the complete explanation of the near-collapse of, say, the Northern
Rock bank? Or do they think that the details are irrelevant, that anything beyond
some simple black boxes would add nothing of explanatory value, would in no way
enhance our understanding, of those economic events of 2007?
The fact that the details are plainly ignored in economics departments is easily
accommodated on either approach, since even if the details of the queuing system were
explanatorily relevant, it would make very good practical sense to organize the study
of macroeconomic models separately from the study of customer service in retail
banking. The truth about autonomy cannot be read off the surface features of scien-
tific practice.
How to make progress, then? A philosopher like me will consult their intuitive
judgments about explanatory relevance. Does it seem that the queuing system is
explanatorily relevant? That the engineering of the ATMs is relevant? Or to take some
more serious cases, does knowledge of the flow of neurotransmitters add to my
understanding of adult human causal reasoning? Does it add to my understanding of
the connection between printing money and inflation?
The answers are a mixed bag. Many low-level details about the implementation of
high-level processes seem clearly to be irrelevant to the explanation of phenomena
brought about by those processes, because as explained below, they make no difference
to the phenomena’s occurrence. But for some low-level details, matters are not so clear.
It would be useful to have an argument pointing one way or the other that did not
hinge on intuitions about relevance. In the next section I present an argument for the
complete irrelevance of the low level that offers as evidence not intuitions, or even
scientific practice, but the very structure of the living world.

3.  Convergent Evolution and the Irrelevance of Mechanism
Golden moles, which comprise the twenty-one species of the family Chrysochloridae,
are small southern African animals that live an almost entirely subterranean life. They
are marvelously well adapted to existence underground, with their tightly packed fur
that slides through sand and soil keeping dirt and water at bay and their short and
powerful legs tipped with claws apparently tailor-made for excavation. They have
eyes that do not see, and tiny earholes that barely hear; touch is their sensory guide to
the world.
Marsupial moles, which comprise the two species of the genus Notoryctes, live a
life not unlike that of the golden moles, and it shows. They have many of the same
adaptations—the fur, the claws, the lack of sight—and perhaps most striking of all,
their overall aspect is remarkably similar to the golden moles (Figure 5.1).
The two taxa are not at all evolutionarily related, however, or at least, they are less
related than any two placental mammals. (Nor are the golden moles at all closely related
to the true moles, which make up most of the family Talpidae—they are, rather, relatives
of the tenrec. I will nevertheless continue to refer to golden moles and marsupial moles
as “moles,” thereby using the term morphologically rather than phylogenetically.) Their
similarities are due to convergent evolution: faced with similar environmental
challenges, natural selection has fabricated similar phenotypes.

Figure 5.1  Golden mole (left); marsupial mole (right)
Source: Golden mole drawn by Clare Abbott, from P. Apps, Smithers' Mammals of Southern Africa: A Field Guide, Struik Nature, 2012. Reprinted with permission from Penguin Random House South Africa. Marsupial mole from F. Beddard, The Cambridge Natural History Volume X: Mammalia, Macmillan, 1902.

There are multitudinous other examples of convergent evolution. Some of the most
conspicuous are cases in which there are marsupial and placental versions of the same
mammalian body plan: the thylacine (Tasmanian tiger) and the placental canids
(wolves, jackals, and so on); the kangaroo and the Patagonian mara; the marsupial
mulgara and the mouse; the marsupial sugar glider and the placental flying squirrel.
There are monotreme, marsupial, and placental anteaters all of which have evolved
claws for tearing open anthills or termite mounds and long sticky tongues for scooping
up their swarming inhabitants. I could continue with examples of convergent evolution
in other vertebrates, other phyla, other kingdoms, or at the molecular level—but let’s
move ahead with the moles.

* * *
In the converging contours of the moles, nature itself seems to have written the answers
to the questions about what matters and what does not matter to the molding of
biological form, telling us what is relevant and what is not in explaining phenotypic
structure. The low-level biological cogs and levers are evidently of very little importance
in deciding the overt physiology of the moles. Placental or marsupial, true mole or tenrec
fellow traveler—it is all, from selection’s perspective, the same. Provided that the physio-
logical substrate satisfies a few broad conditions that might easily be represented by a
black box, the adaptive advantage of the phenotype for underground living is sufficient,
acting alone, to make all moles alike. Let me try to capture this intuitive sense of the
irrelevance of the causal underlayer in the form of a philosophical argument.
Call the various properties shared by the golden and the marsupial moles the talpid
phenotype.[2] Call the mode of living shared by the two kinds of mole, made possible by
the shared features of their environments, the fossorial lifestyle. I want to run the fol-
lowing argument on behalf of the explanatory account of autonomy, that is, the view
that the high-level sciences for the most part ignore underlying mechanisms because
they are objectively explanatorily irrelevant.

[2] Talpa is the Latin for mole; as noted above, the family centered around the true moles is the Talpidae.

1. The complete evolutionary explanation of the talpid phenotype in both golden
moles and marsupial moles is the same, adverting to the adaptive advantages of
the phenotype for creatures living the fossorial life.
2. The underlying mechanisms involved in the evolution of the golden moles’
talpid phenotype are in many cases different from the underlying mechanisms
involved in the evolution of the talpid phenotype of the marsupial moles.
∴ The complete evolutionary explanation of the talpid phenotype in either group
excludes the details of all underlying mechanisms—or else the complete explan-
ations, differing with respect to these mechanisms according to premise (2),
would be non-identical, contradicting premise (1).
If the underlying mechanisms do not appear in the complete explanations of the talpid
phenotypes in golden and marsupial moles, then they are explanatorily irrelevant. An
explanation of the phenotype that described some aspect of the mechanisms would be
making a certain sort of explanatory error.
The argument does not entail that every underlying mechanism is irrelevant to
the explanation of the talpid phenotype; the mechanisms that are shown to be irrele-
vant are only those not shared by golden and marsupial moles. But there is no obvi-
ous reason to think that the shared underlying mechanisms are any more relevant in
principle than the rest, so the argument suggests, without implying, that the com-
plete explanation of the talpid phenotype in golden and marsupial moles is compre-
hensively black-boxing: it contains not a single underlying mechanism. That itself
provides a powerful reason, in the shape of a formidable paradigm, to think that the
high-level sciences’ principled disregard of low-level mechanisms is attributable to
explanatory, and not merely to practical, reasons.
Kitcher’s well-known argument that underlying mechanisms are irrelevant to the
evolution of the male to female sex ratio in humans (Kitcher 1999) can be adapted
along the same lines. The vast majority of large animals have an approximately
one-to-one sex ratio. This pleasingly even proportion is famously accounted for, in
an explanation usually attributed to R. A. Fisher, as follows. The even sex ratio is a
stable and unique evolutionary equilibrium. This is because, in a population with
more females than males, individuals with a propensity to produce more males
than females will have a higher expected number of grandchildren, and vice versa.
Why is that? Your expected number of grandchildren is proportional to your
expected number of children and your children’s expected number of matings.
Since matings require exactly one male and one female, a male’s expected number
of matings will increase, relative to a female's, as the proportion of males in a
population decreases.
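To make the arithmetic behind this Fisherian reasoning explicit, here is a minimal sketch; the symbols $N_m$, $N_f$, and $O$ are introduced purely for illustration and are not part of the original presentation. Suppose the next generation contains $O$ offspring in total, produced by $N_m$ males and $N_f$ females. Every offspring has exactly one father and one mother, so the expected number of offspring per male is $O/N_m$ and per female is $O/N_f$. If $N_f > N_m$, then $O/N_m > O/N_f$: a parent disposed to produce sons thereby expects more grandchildren, and conversely when males predominate. Only at $N_m = N_f$, given equal parental investment in the two sexes, does neither bias pay, which is why the even ratio is the unique stable equilibrium.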
Although the physiological mechanisms and behavioral dispositions underlying
mating, reproduction, and nurturance are different—often wildly different—in the
various animals having a roughly even male to female ratio, it seems permissible,
and even insightful, to say that the explanation of the ratio is the same in all animals
that have it. The complete explanation of the ratio, if that is correct, black-boxes
underlying mechanisms.[3]
(I should note that the version of the explanation given in the previous paragraph is
not by anyone’s standards complete. It is necessary, for example, to add that equal
parental investment in the sexes is a precondition for the equilibrium—if it costs more
to produce a male than a female, then the sex ratio will tend to be skewed toward
females. But these additions will be black-boxing: the reasons for the equality of paren-
tal investment will not be spelled out or, at least, they will not be spelled out at a level of
detail that distinguishes the different organisms to which the Fisherian explanatory
scheme applies.)

4.  Against Underlying Mechanisms


Let me develop the convergent evolution argument against the explanatory relevance
of underlying mechanisms, arguing in favor of each of its two premises in turn.
The first and I think ultimately more contestable premise is that the explanation of
the talpid phenotype is the same in golden moles and marsupial moles.
The talpid phenotype, I remind you, is defined so as to include only those features
shared by golden moles and marsupial moles. There are many differences between the
two taxa, including many differences in the way that the talpid phenotype is realized.
Both kinds of organisms have very dense fur, but the patterns of fur growth are (let’s
say) not identical. To have the talpid phenotype is to have very dense fur, then, but it is
not to have any particular pattern of fur growth. Consequently, an explanation of the
phenotype should account for fur density, but it need not say anything about fur
growth pattern. Indeed, it should not say anything about growth pattern, insofar as the
pattern differs in the two kinds of mole. When explaining the golden moles’ talpid
phenotype, then, you are explaining the instantiation of exactly the same property as
when you are explaining the marsupial moles’ phenotype. The question is: do the two
explanations nevertheless in some way differ?
Here is a simple argument for their not differing in any way. Both the golden moles
and the marsupial moles have the talpid phenotype for exactly the same reasons,
founded in the phenotype’s adaptedness to the fossorial lifestyle—a burrowing, subter-
ranean mode of existence. This is why it is a genuine case of convergent evolution. But
the reasons that a taxon has a phenotype are just the explanation of that phenotype. So
if the reasons for the phenotype are identical in golden and marsupial moles, the
phenotype’s explanation is in both groups identical.

[3] The argument presented here departs from Kitcher's original argument in several ways. First, Kitcher's explanandum concerns humans only. Second, his explanandum is the fact that the ratio slightly favors males (because males are less likely to reach sexual maturity). Third, he compares the high-level explanation that black-boxes all facts of implementation with an ultra-specific explanation that recounts the conception and gestation of every human born over a certain period—a vastly more detailed explanatory story than any low-level evolutionary model seriously considered in this chapter.

Compare: the reason that a male cardinal’s feathers are red is different from the
reason that raven blood is red. So the explanations of redness are different in each
case; the double redness is coincidence rather than convergent evolution. There is
no element of the talpid phenotype, by contrast, that is the same in golden and
marsupial moles only by coincidence. Every aspect of that phenotype has its roots in
the fossorial lifestyle.
On to the second premise of the argument, that the mechanisms underlying the
evolutionary process in golden and marsupial moles are different in at least some
relevant respects.
That the underlying mechanisms are different in some respects can hardly be
denied. Every schoolchild knows that the mechanics of reproduction in marsupials
are different from those in placental mammals.[4] Anyone with some biological sophis-
tication can list many other differences in the causal underpinnings of survival and
reproduction in these and for that matter in almost any two distantly related groups of
organisms: different mating behaviors, different dentition, somewhat different diets,
different numbers of chromosomes, and so on. (The latter two vary even among the
different genera of golden moles.)
This in itself will not convince a proponent of the explanatory value of underlying
mechanisms, however. No one, except perhaps a few extremists, believes that every
aspect of underlying mechanisms is relevant to explanation. Paint a big rock black
and hurl it at a window. The window breaks, but the black pigment, though it contrib-
utes to the weight of the rock and is perhaps the only thing to make direct contact
with the window, does not play a part in explaining the breaking—whereas the rock's
large mass, of course, does. An appealing way to separate explanatory from non-
explanatory properties of low-level mechanisms is a difference-making account of
relevance, according to which the rock’s mass is relevant to the breaking and its paint job is
not because the mass makes a difference to whether or not the window breaks and the
paint makes no difference. But what follows does not turn essentially on any particular
view of relevance.[5]
Here is the dialectical situation. Divide the philosophers with something to say
about explanatory relevance, autonomy, and underlying mechanism into three classes.
First, there are those who hold that in the high-level sciences, underlying mechan-
isms are typically not explanatorily relevant. Complete high-level explanations, on this
view, normally contain black boxes that stand in for all of the physical, or chemical, or
(depending on the science) biological or psychological details. For these thinkers, the
explanatory autonomy of, say, cognitive psychology from neuroscience is accounted for
by the explanatory irrelevance, in psychology, of neural implementation.

[4] Although as mentioned in Section 5, not all marsupials have the eponymous pouch.
[5] For a survey of ways to make sense of difference making, including but not limited to counterfactual approaches, see Strevens (2008).

Then there are those who hold (like me) that some details of underlying mechanisms—
the difference-making details—are relevant in the high-level sciences, but are normally
omitted in order to reap the efficiencies made possible by the division of explanatory
labor. Confronted by the phenomenon of convergent evolution, and the case of the
golden and marsupial moles in particular, they might react in one of two ways: they
might acknowledge that the underlying mechanisms which differ between the two
kinds of mole are on their view explanatorily relevant (this constitutes the second class
of philosophers), or they might not (the third class). The former route means accepting
premise (2) of the convergent evolution argument; a successful defense of the pragmatic
approach to autonomy and black-boxing, at least in the case of evolutionary theory,
then depends on finding some reason to reject premise (1). This strategy will be
considered in Section 5.
The third class of philosophers hold, then, that in general underlying mechanisms are
explanatorily relevant and are ignored by the higher-level sciences only for practical
reasons, but that convergent evolution, or perhaps evolution in general, is an exception.
In particular, philosophers in this class hold that in the case of the moles, the low-level
mechanisms for survival and reproduction that distinguish the golden and marsupial
moles are irrelevant to their talpid phenotype. When you have moles, then, neglect of
lower-level mechanisms is explained by their objective explanatory irrelevance rather
than by practical concerns. In general, however, this is not the rule; in general, under-
lying mechanisms are relevant and are ignored only to make science more efficient.
What are the prospects for the third kind of view? I am not aware of any account of
explanatory relevance, whether based on difference making or not, that will rule out
the relevance of low-level mechanism tout court in evolutionary processes, or even
only in subterranean evolutionary processes, yet that will attribute explanatory weight
to low-level mechanisms in, say, psychology or economics. Consequently, I suspect
that the third view is a case of adhockery in the service of wishful thinking. Maybe I am
wrong; in the remainder of this chapter, however, I put the third view to one side,
assuming that if the practical account of explanatory autonomy is to be defended
against the argument from convergent evolution, it is premise (1) that must go.

5.  On Behalf of Underlying Mechanisms


Newborn marsupial moles migrate to the mother’s pouch, where they suckle in safety
until maturity. Golden moles have no pouch; their young are more developed at birth
and take shelter in a nest on their way to adulthood. The mechanisms underlying
reproduction and the nurturing of young are therefore, in this and many other respects,
different in golden and marsupial moles.
Are mechanisms like these explanatorily relevant in evolutionary theory? On one
side of the question are philosophers like Kitcher who hold that implementational
details of this sort are entirely irrelevant to models of certain high-level evolutionary
processes, such as the evolution of the one-to-one sex ratio in humans and other
animals and (I have suggested) the evolution of the talpid phenotype in the two groups
of moles. On the other side are those like me who hold that some details of implemen-
tation are explanatorily relevant in accounting for even the most abstract and high-level
explananda, and that if they are routinely ignored, it is for practical reasons alone.
Cases of convergent evolution add weight to the anti-detail view: the explanation of
the talpid phenotype is identical in golden and marsupial moles; thus, it cannot include
aspects of their lives that differ, and so it cannot contain details about reproduction and
nurture. Likewise, I have extrapolated, Kitcher would say that the explanation of the
roughly one-to-one male to female ratio is identical in humans, moles, and many other
creatures; the explanations in each case cannot, then, include the sexy details of repro-
duction, insofar as they differ from species to species.
This final section of the chapter will explore a two-part strategy for resisting such
a conclusion, first giving a positive reason to think that the explanation of the talpid
phenotype is slightly different in marsupial moles than in golden moles, and then
giving an explanation for why we mistakenly think that they are the same. Let me
emphasize that my aim in what follows is not to make a positive case for the explanatory
relevance of underlying mechanisms; indeed, the suggested difference between
the two explanations of talpid phenotype is not one of implementation. My tactics
are purely defensive, then: I am trying to undermine the argument from convergent
evolution, rather than to provide an independent, standalone argument for the
relevance of implementation.

* * *
The pouch of marsupial moles faces backwards so that it does not scoop up sand and
soil. Let me suppose for the sake of the argument that this orientation is essential;
without the rear-facing pouch—that is, with a front-facing pouch or no pouch at all—
marsupial moles could not sustain their fossorial lives. The evolution of the moles’
talpid phenotype, then, required that a rear-facing pouch either evolve (if the non-talpid
ancestors lacked one) or that it be retained.
It seems that marsupial pouches have evolved, disappeared, and changed configur-
ation quite frequently during the time they have been around. Among the opossums of
the Americas, for example, the pouch is usually absent or forward-facing, but in the
yapok (Chironectes minimus), which unlike its mostly arboreal relatives forages under-
water, it is backward-facing, presumably for much the same reasons as the marsupial
mole pouch. Plausibly, then—though we are here in speculative territory—the marsu-
pial moles’ backward-facing pouch evolved at the same time as their talpid phenotype.
Let me assume that it is so: the non-talpid ancestors either lacked or had forward-
facing pouches, and in order to attain their fossorial lifestyle, the incipient marsupial
moles had to evolve a backward-facing pouch along with the features they share with
the golden moles.
In that case, it seems to me, the story of the development of the backward-facing
pouch is an essential part of the story of the evolution of the marsupial moles’ talpid
phenotype, although the pouch and its orientation are not themselves a part of the
phenotype. Why?
A proponent of the pouch’s relevance might argue as follows. Relevance is a matter
(so the speaker assumes) of counterfactual difference making: x’s having F is relevant
to its having P just in case, if x had not had F, then it would not have had P. If marsupial
moles had not had backward-facing pouches, they would not have been able to adopt
the fossorial lifestyle and so they would not have evolved the talpid phenotype. The
pouch is therefore explanatorily relevant to the phenotype.
Such an argument is not decisive, however, because a defender of the argument
from convergent evolution can make the following black-boxing reply. It is true
that the marsupial moles would not have evolved the talpid phenotype if they had
not had, or evolved, a reproductive system compatible with the fossorial lifestyle.
But that is the right level to understand the relevance of the reproductive system:
what mattered was that it was compatible with the lifestyle; the further details
describing how it operated underground do not matter. The complete explanation
of the talpid phenotype in marsupial moles should black-box the details, then; it
will specify only the fact of compatibility. The explanation of the phenotype in
golden moles will of course specify precisely the same fact. The compatibility of
the reproductive system is explanatorily essential, then, but it can be captured by a
black box that sits equally easily in a specification of either marsupial mole or
golden mole physiology.
Here is a better argument for the pouch’s relevance: it is invidious to black-box
the reasons for the reproductive system’s adaptedness to the fossorial lifestyle (e.g., a
backward-facing pouch does not snag on dirt) while spelling out the reasons for the
talpid phenotype’s adaptedness to the lifestyle (e.g., dense fur enables the creature to
slide easily past dirt). Since the explanation of the phenotype must describe the latter
facts, it should describe the former facts as well.
My reason for thinking this is a certain explanatory holism about evolutionary
history: the complete explanation of any of the marsupial mole’s adaptations to life
underground, I suggest, is an evolutionary story that relates all the important devel-
opments that made that life possible. These developments together make up a single
evolutionary process; as they co-evolve, so they co-explain each other’s evolution, because
each next step in that evolution depends on the degree of fossorial compatibility so far
attained. Or in other words, the springboard for the next step forward in the evolution
of (say) the fur is in equal part the configurations of fur, claw, and pouch that enable the
mole to dig a little deeper or a little faster.
This argument might, I think, be resisted by endorsing a principle according to
which, when explaining the natural selection of a trait, you ought to black-box every
aspect of the evolutionary process other than the mechanisms that constitute the trait.
That is an extreme response: it would mean, for example, that there is no unified
explanation of the talpid phenotype, but rather only a heavily black-boxed explanatory
model for each component of the phenotype. I doubt that evolutionary biologists
would recognize in this explanatory atomization an accurate representation of their
own scientific practice. But let me not pursue this line of thought here.
If I am right, the complete explanation of the talpid phenotype in marsupial moles
contains an element—the causal history of certain properties of the pouch—that does
not appear in the complete explanation of the same phenotype in golden moles. The
two explanations are not identical, and the argument from convergent evolution there-
fore fails.
So what? After all, the difference between the two explanations is not exactly a matter
of underlying mechanisms: the evolutionary history of the pouch does not lie at a lower
level than the history of the talpid phenotype, and so its inclusion in the complete explan-
ation of the phenotype does not constitute an explanatory descent to a lower level.
True, but it ought to lead you to draw a wider moral nevertheless. Positively, it shows
that the evolution of the talpid phenotype is to be explained in part by specific facts
about specific creatures, and so that the apparently desirable unified explanation of
the phenotype, in the pursuit of which mechanism and many other particularities
are jettisoned, is out of reach. Negatively, it suggests that the explanatory completeness
of the unified, black-boxing explanation is in any case something of an illusion: we ought
never to have thought that the story about the talpid phenotype was substantially the
same in two kinds of animals that are so different deep down.
Let me now try to explain the source of that illusion.

* * *
There is something intuitively right about the claim that the talpid phenotype is
identically explained in both the golden and the marsupial moles. I want to diagnose
the source of that apparent rightness, and to show that it rests on a mistake.
The mistake is to conflate the complete explanation of why two things x and y share a
certain property P, on the one hand, with the complete explanations of why x has P
and why y has P, on the other. As a result of this conflation we infer, from the fact that x’s
and y’s sharing P has a unified complete explanation, that the complete explanation of
x’s having P is identical to the complete explanation of y’s having P.
I will be arguing, then, that the following “distributive principle” for explanation is false:
Distributive principle: If the complete explanation why x and y both have P is M,
then the complete explanation of why x has P is M (and likewise for y).
The sharing of a property by several entities is not only a different explanandum
than the possession, by a single entity, of that same property; it is a different kind of
explanandum.
Suppose that you are asked to explain why the US men’s basketball teams won the
gold medal in the first seven Olympics in which basketball was played (1936–68). You
are to explain, then, why seven separate entities—the seven US teams—shared a certain
property, namely, winning the Olympics.
The natural answer is something like this: basketball was established longer in, and
was more popular in, the US than in any other country; the US in any case had a larger
population than any other competitive country except the USSR; the US had in its
college basketball tournaments a highly effective system for training young players;
and so on.
These are properties that (presumably) have a role to play in explaining each one of
the seven Olympic victories. But they hardly exhaust the factors relevant to the win-
ning of any of the gold medals (or else the Americans would not have lost to Argentina
in the 1950 FIBA championships, when these advantages also applied). Individual
victories are explained by the skills of particular team members, the ability of those
particular team members to work together in particular ways, and so on. As the com-
position of the team changed (because it consisted of college players, it was almost
completely different for each Olympics), so these particular explainers changed.
Should they not be a part of the story?
What the example shows, I think, is that the correct answer to an explanatory ques-
tion about shared properties picks out only the explainers that are common to all of the
relevant entities, that is, the factors that played a role in every one (or perhaps a major-
ity?) of the wins. “What explains why x and y both have P?” is equivalent to “What factors
appear both in the explanation of x’s having P and in the explanation of y’s having P?”
Further evidence for this interpretation is given by cases where there are no sig-
nificant shared factors. Suppose someone asks why, in every US presidential election
from 1952 to 1976, the Republican candidate won just in case an American League
team won the (baseball) World Series. The answer is: there is no explanation, it is just
a coincidence.
But clearly this series of events, like any other, can be explained. Indeed, presidential
elections and major league baseball attract explainers like almost no other fixtures.
You would not have to Google far to find explanations for why Eisenhower won the
election in 1952 and 1956, nor for why the (American League) Yankees won the
World Series in those same years. Although there would be little or nothing in com-
mon between the baseball side and the presidential side of the account, then,
you could give a perfectly good explanation of why things unfolded in the way that
they did.
Why, then, say that the pattern has no explanation? Even if it is pure coincidence,
it is hardly incomprehensible. The answer, as I have suggested, is that the explanatory
request in cases like these is for shared properties in the explanations, that is, for
factors that played a significant role in the causal histories of both the elections and
the games of baseball. Perhaps the most important word here is “pattern”: when asking
for the explanation of a resemblance, a similarity, or some other run of events we
want something that accounts not only for the individual events, but for their form-
ing a pattern. That something will have to have figured over and over in the causal
production of the instances of the pattern; when there is no such factor we say that
the exhibition of the pattern (though not the facts that entail its instantiation) has no
explanation—it is just a coincidence.[6]
If what I have said is correct, then in explaining the similarities between golden and
marsupial moles—thus, in explaining their convergently evolving the talpid phenotype—
you will pick out only elements that play a role in explaining the phenotype’s evolution
in both groups. Thus, you will pick out the functional utility, for a burrower, of dense
fur and spade-like claws, but not the importance of a backward-facing pouch, which is
relevant only to the marsupials.
Incautiously, you might then apply the distributive principle spelled out above, and
conclude that the features you have cited in your explanation of the shared phenotype
also constitute the complete explanation of, first, golden moles having the phenotype,
and, second, marsupial moles having the phenotype. That would lead you straight
to premise (1) of the convergence argument, that the complete explanation of the
phenotype is identical in both groups.
I suggest that this line of thought accounts for the appeal of premise (1). It is a
mistake, however, because the distributive principle is false: a complete explanation of
two entities’ sharing a property is typically an incomplete explanation of each entity’s
instantiation of that property, as it leaves out by design explainers present only in one
strand of the story, and thus aspects of the explanation of the dual instantiation that
are “mere coincidence.”

6. Conclusion
The high-level sciences that black-box most enthusiastically, and whose kinds are
therefore the most promiscuously multiply realizable—economics, belief-desire
psychology (if a science at all), mathematical ecology—can seem to be alarmingly
non-empirical in their content. In their characteristic explanations, what carries you
from explainer to explanandum seems to be mathematical or logical rather than causal
or physical necessity—or to put it another way, the phenomena predicted by these
branches of science are represented as the consequences of theoretical definitions
rather than of causal tendencies.
Is there a science of radically multiply realizable kinds that is plainly empirical, that
identifies in these kinds real, explanatory causal tendencies rather than the logical
black-boxy shadows thereof? Convergent evolution has long seemed to me to provide
a promising testing ground for the idea that science can be radically multiply realizable
but thoroughly empirical, thoroughly causal. It is in this role that I introduced it here,
asking whether the explanation of convergent evolutionary tendencies dispenses with
underlying mechanisms and other specifics, so as to offer a high-level causal unification
of what are, from a low-level perspective, physically and indeed physiologically
very different sorts of things.

[6] These comments amount to an augmentation of the treatment in Strevens (2008, §5.5), where, though I noted the importance of citing similarities in explanations of similarities, I did not consider the possibility that the exhibition of a pattern over a given time period is a sui generis explanandum distinct from the facts that entail its instantiation. The difference between the two is, indeed, rather difficult to pin down, and I will not try to do so here.

Although my own views on explanation suggest that such a unification is impossible
(Strevens 2008), I have ignored those views here and treated the case study in a way that
is as unladen with theory as I can manage.
I have provisionally concluded that even in a paradigm of convergent evolution, the
marsupial and golden moles, there is not a genuinely unified explanation that crosses
lower-level kinds: the explanation of the talpid phenotype in marsupial moles makes
reference to properties that are not shared with the golden moles (and presumably
vice versa).
This is hardly the end of the debate. I think that the argument for the splitting of the
explanation along the boundaries of the lower-level kinds is strong but not decisive.
Even if it is correct it does not follow that underlying mechanisms are typically rele-
vant, or relevant even in this particular case. Radically multiply realizable kinds may
have real causal-explanatory oomph in evolutionary and diverse other processes.[7]
But I am somewhat pessimistic that progress can be made without debating the
virtues and vices of particular philosophical accounts of scientific explanation. If I am
right about the case of convergent evolution, then an impression of explanatory as well
as evolutionary convergence is created by the rules for explaining shared properties;
we will need theories of explanation to uncover and disentangle such intuitions.
Signing off, I leave things much as they were when I began. The high-level sciences
are clearly black-boxing in their explanatory practices; they are autonomized if not
atomized. But whether this segregation reflects the canons of explanatory relevance
or merely a canny division of labor—whether, to return to the topic of this volume,
explanations in cognitive psychology are independent entities existing quite inde-
pendently of the details of neural implementation or whether they are explanatory
sketches or templates awaiting neuroscientific substance—remains to be seen.

[7] For a theory on which black boxes can be causally explanatory, see Franklin-Hall (forthcoming).

References
Cartwright, N. (1999). The Dappled World: A Study of the Boundaries of Science. Cambridge University Press, Cambridge.
Dupré, J. (1993). The Disorder of Things. Harvard University Press, Cambridge, MA.
Fodor, J. A. (1974). Special sciences. Synthese 28: 97–115.
Franklin-Hall, L. R. (Forthcoming). The causal economy account of scientific explanation. In C. K. Waters and J. Woodward (eds), Causation and Explanation in Biology. University of Minnesota Press, Minneapolis.
Garfinkel, A. (1981). Forms of Explanation. Yale University Press, New Haven, CT.
Hempel, C. G. (1965). Aspects of scientific explanation. In Aspects of Scientific Explanation, pp. 331–496. Free Press, New York.
Kaplan, D. M. (2011). Explanation and description in computational neuroscience. Synthese 183: 339–73.
Kitcher, P. (1999). The hegemony of molecular biology. Biology and Philosophy 14: 195–210.
Piccinini, G. and C. Craver. (2011). Integrating psychology and neuroscience: Functional analyses as mechanism sketches. Synthese 183: 283–311.
Strevens, M. (2008). Depth: An Account of Scientific Explanation. Harvard University Press, Cambridge, MA.
Strevens, M. (2016). Special science autonomy and the division of labor. In M. Couch and J. Pfeifer (eds), The Philosophy of Philip Kitcher. Oxford University Press, Oxford.

6
Brains and Beliefs
On the Scientific Integration of Folk Psychology

Dominic Murphy

1. Introduction
It’s not usual to discuss folk psychology in the context of integrating the various cognitive
sciences, but perhaps it should be. After all, there are philosophers who think that
folk psychology is—or should be treated as—a theory that makes clear empirical com-
mitments, and is at least approximately true. On that view, folk psychology just is a kind
of science, and its relations to the rest of the sciences are a proper philosophical topic.
That topic is usually framed as a dispute about the reduction of the laws of folk
psychology to those of physics (Fodor 1997; Loewer 2009). My aim here is different.
I want to discuss the relations between folk psychology and those of the other cognitive
sciences, especially neuroscience: whether or not the sciences of the mind are reducible,
they aren’t going to be reduced just yet. Their interrelations in our current state of
ignorance are worth considering.
In this chapter, I introduce three philosophical perspectives on the role of folk
­psychology in a mature cognitive neuroscience. One is integration (Gerrans 2014), which
affirms that folk psychology plays a decisive role in defining the objects of scientific
inquiry and guiding that inquiry. This view aligns with the one I alluded to above:
it takes seriously the ontological commitments of folk psychology and uses them as the
explananda for cognitive models. We can then ask questions about the relations these
models bear to neuroscience. Folk psychology is not the only source of explananda,
since many psychological effects will be uncovered that are not part of folk psychology’s
scope. Nor is folk psychology going to survive completely unrevised, since some
regimentation is inevitable. However, as that regimentation proceeds, it will do so
largely by expanding the domain covered by the explanatory devices of folk psychology,
namely ‘belief-desire-intention’ (BDI) psychology (Cummins  2000) which informs
not just folk psychology but also ‘a great deal of current developmental, social and
cognitive psychology’ (p. 127). So the integrationist picture is that folk psychology
supplies a set of constructs that can be turned into cognitive-psychological constructs,
which will then serve to define the explananda of a maturing neuroscience of
human behaviour.
The second perspective I’ll term (again following Gerrans) autonomy, the view that
folk psychology deals in personal rather than subpersonal explanations and as such
has aims that are incompatible with science. This version of autonomy should be dis-
tinguished from the view advocated by, e.g., Daniel Weiskopf (Chapter 3, this volume).
Weiskopf thinks that cognitive models are adequate explanations as they stand. They
need no neurobiological or mechanistic vindication. However, Weiskopf thinks of
cognitive models as defining part of a hierarchical structure that includes biological
phenomena. The position I have in mind here denies that the posits of folk psych-
ology bear any relation to any science. We know what folk psychology is in virtue of
our mastery of its concepts, but these do not aspire to play the same game as scientific
concepts. Last, I discuss eliminativism, which argues that folk psychology will be
replaced by a scientific theory of the mind. Since Feyerabend (1963), the customary
picture of elimination has been one in which the generalizations of folk psychology
get replaced by generalizations employing neurophysiological constructs. Following
some other theorists (e.g., Bickle 1992), I will understand eliminativism more broadly
as the view that current folk psychology will be very heavily revised by advances in
the cognitive and brain sciences. We should expect some of its constructs to dis-
appear, others to survive in amended form, and additional constructs to emerge. My
picture is one in which eliminativism should be the preferred scientific option. I will
sketch some familiar reasons for this, but my bet is that integrationism, in so far as it
aims to retain folk psychology as a theoretical construct that serves a heuristic role in
cognitive theorizing, is an unstable position: folk psychology cannot play the role
that integrationists have in mind for it. Any psychology that plays the integrationist
position must be revised heavily enough to count as a successor theory to folk psych-
ology, and that is a vindication of eliminativism from the point of view of scientific
theory-construction.
But that might not mean that we can get rid of folk psychology, since it might continue
as the main idiom in which we talk about each other. We can retain folk psychology if
we give up on its scientific pretensions, and there are probably good practical (and also
political, I shall suggest) reasons for doing that. So I expect folk psychology to survive
as an object of philosophical study, but not as a scientific venture. However, it is possible
that a reimagined conception of human nature will emerge that is not really compat-
ible with folk psychology, and I will end by very briefly pointing to the implications of
that possibility.
One last preliminary: I suspect many theorists will be attracted to aspects of more
than one of these perspectives, perhaps with respect to different mental phenomena,
or they may adhere to a heavily qualified version. Furthermore, one could be placed
differently with respect to different aspects of folk psychology. It is quite possible, for
instance, to deny that beliefs belong in our best ontology, but to think that moods do
belong there, because moods seem to be clearly suitable for modelling and to rest on
widely conserved neural structures (Nettle and Bateson 2012). I want to understand
the commitments of each position in their starkest forms, but few theorists consistently
adopt those stark positions across the board.

2.  Three Perspectives


A common perspective in the philosophy of mind is that the generalizations
of  folk  psychology define the phenomena that cognitive science must explain.
And this comes with a picture of explanation and a metaphysics that justifies that
picture. Psychology is at the top of a hierarchy of levels of explanation that bottoms
out  in molecular neuroscience, or even basic physics. If description precedes
explanation, it is psychology that describes what cognitive science must explain.
The former limns the phenomena at the personal level, and the latter provides
causal-explanatory theories employing subpersonal mechanisms whose operations
are disclosed by neuroscience. I will call this first perspective, following Gerrans
(2014), integration.
The integrative perspective shares the stage with two others that are less popular.
The first argues that psychology is autonomous because it is about persons, whereas
cognitive science is about mechanisms, and only personal explanations can explain
personal phenomena (Bennett and Hacker 2003; Graham 2010; McDowell 1995).
Again, there is a picture of explanation embedded in this perspective. It is that
explanation must be transparent to reason: connections between (say) dopamine
levels and (say) belief revision are just absurd. Such brute connections make no
sense, and cannot explain things. They don’t seem explanatory in the way that a link
between belief revision and exposure to testimony seems explanatory. Gerrans (2014)
calls this the autonomy thesis. It  should be distinguished from the explanatory
autonomy thesis with respect to cognitive models adopted by Weiskopf (Chapter 3,
this volume). Autonomy seems compatible with different views about the details of
folk psychology, but one common theme is that folk psychology tells us how the
mind works: mastery of the concepts and practices of folk psychology tells you all
you need to know about the mind, and does so without positing any inner states such
that science might explain those states (Gauker 2003). As Julia Tanney (2013, 11) puts
it: studying mental phenomena ‘involves looking at the way expressions containing
mental concepts are correctly used and how, in those various uses, they function’.
The job of neuroscience is to show how the brain enables us to behave intelligently,
but not to find things out about the nature of the mental states posited by folk
psychology. The explanations that folk psychology offers us exemplify rational con-
nections between concepts (Graham 2010, 119–20), and these connections cannot
be established empirically.
Finally, we come to a view that says that psychology, especially folk psychology, is
just wrong, and both the description and explanation of the phenomena should be
carried out in the language of the neurosciences. Following much precedent, let’s
call this eliminativism.

2.1  What is eliminativism?


Eliminative arguments have typically moved from the premise that something is
horribly unscientific about folk psychology to the Jacobin conclusion that it should all
be swept away and replaced with more rational structures. But there is no need for
those structures to be non-psychological. We can think of eliminativism as a revisionary
doctrine, not a revolutionary one. A reformulated cognitive psychology can still play a
crucial role in describing and explaining the subject matter of the cognitive sciences. It
is a scientifically reformed psychology that plays the role that folk psychology is often
awarded in the integrationist picture.
At the same time, we need to revise the direction of intellectual authority as well.
What I mean by this is that psychology is seen by integrationists as defining the
explananda of neuroscience (e.g., Cummins 2000), but there are now plenty of examples
of neuroscientific findings that suggest we need a new set of psychological concepts, or
at least a heavily revised set. As an example, consider the relation between the concept
of desire and the empirical distinction between wanting and liking. Berridge and
Robinson (1998) took rats who had become addicted to amphetamine and showed
that they do not exhibit in their behaviour a stronger preference for sugar than non-
addicted rats. Yet they press a bar to deliver a sugar hit four times as often—even
though they do not like the sugar any more, they want it more.
A frequent interpretation of these results is that wanting equals desiring and liking
equals pleasure. But it is unclear whether the folk account of desire admits of such a strong
distinction between pleasure and desire; there is a philosophical tradition of pleasure-
based theories of desire, certainly held by Mill, and arguably by Hobbes and Hume as well.
Strawson (1994, 266), on conceptual grounds, argues that ‘the fundamental and only
essential element in desire is just: wanting (or liking) something’. Strawson thinks that
wanting and liking are interchangeable and the analysis of desire in these terms is obvious
(p. 266) ‘if one reflects dispassionately about it’. If Strawson is correct about our concepts
then the empirical results appear to show that our concepts are mistaken, since liking and
wanting are not the same. Morillo (1990) argues for a pleasure-based analysis of desire on
the grounds that expressions of dopamine in the brain’s reward system constitute both
instances of pleasure and episodes of action-origination, and that is the best fit with our
concept. Schroeder (2004), in contrast, has argued that the empirical results show that
desire is reward-based learning, but that nobody would have come up with that a priori.
In order to gain explanatory power, he suggests: ‘we must be ready to learn things about
desire we did not expect to be true’ (2004, 179).
Schroeder suggests that whether or not a view counts as eliminativist boils down to
whether it is surprising in terms of forcing us to give up settled aspects of our existing
views (2004, 169). This strikes me as a happy way of putting it, since it avoids inconclusive
talk about the theory of reference, and also admits that we can have an eliminativist
view that retains parts of our existing mental ontology. That seems odd in light of the
eliminativist stress on wholesale ontological replacement, but consider: if anything
counts as a scientific revolution it is the replacement of the physics of Ptolemy and
Aristotle with the new science of the sixteenth and seventeenth centuries. Yet the new
science did not tell people there were no planets and no cannonballs; rather it pro-
duced surprising results in astronomy and ballistics that overthrew much pre-existing
belief. This property of being a surprise relative to our existing conceptual structure is
central to the version of eliminativism that I am pushing here.
Memory offers another example. Comparative evidence (Boyer 2010) suggests that
a core component of folk psychology around the world is the idea of memory as a store,
into which copies of experiences are deposited for subsequent recall. But neurological
evidence (Schachter et al. 2007) implies the existence of a core system involving medial
prefrontal regions and areas of medial and lateral cortex (and other areas). This system
is active while remembering the past, but also in imagining the future and simulating a
range of possible courses of action. They ‘suggest that this core brain system functions
adaptively to integrate information about relationships and associations from past
experiences, in order to construct mental simulations about possible future events’. The
evidence for this ‘prospective brain’ is growing, and it suggests that a core part of folk
psychology is just wrong; memory is not a store and it exists as part of a much larger
planning and simulation system, not as a source of information.
The real lesson of eliminativism is not that neuroscience should replace psychology;
it is that psychology is not in charge. Neuroscience can change our understanding of
how psychology works, just as empirical psychology can change our understanding
of how it works. If we think of eliminativism as a doctrine about the replacement of
folk psychology by superior theories, then it is an open question what those new theories
should look like. The successor theory has to be neuroscientific only if you agree that
psychology cannot be conceived of apart from folk psychology.
If it is correct that folk psychology should be replaced, then what should we think
about folk psychology? Many of the virtues claimed for folk psychology are real: it can
play an important heuristic and interpretive role. It certainly does seem, as Fodor
(1987) stressed, to be a powerful predictive instrument. It may not be well suited to
a  scientific context, but lots of commonsense concepts are scientifically unhelpful
without being no use at all.

2.2  Folk psychology and science


If it is not suited to the sciences, though, folk psychology might be better seen as
independent of them, neither confirmed nor refuted by empirical findings. And that
is what the autonomy theorist claims, since mental terms acquire their meanings
during the acquisition of the language rather than from any empirical inquiry
(Mölder 2010, 145). There is another claim that the autonomy theorist may make,
though, that I will not endorse, namely the claim that everyday psychology has some
sort of intellectual authority over, or can act as a constraint on, the science of the mind.
This claim is often phrased as follows: the concepts of folk psychology define the
inquiry, and no scientific results can overturn those concepts because, qua concepts,
they are immune to empirical rebuke. This is a position adopted by philosophers who
think that philosophy of mind is purely a priori, such as Bennett and Hacker (2003).
The idea here is that philosophy tells you about the nature of concepts like memory or
perception. Science investigates what brains do that enables us to see or remember, but
it cannot tell us anything about memory or perception, because the nature of those
processes is fixed by our ordinary terms. This view departs from integration because it
denies that folk psychology can be reformed or regimented by experimental findings.
For example, Bennett and Hacker (2003, 213) insist that Damasio’s theory of emotion
(Damasio 1994) must be wrong because children do not use emotion terms in a way
that fits with the theory. Damasio could say that is because the folk psychology of
emotion is a poor guide to what emotion really is. Bennett and Hacker would just
reject that as conceptually confused, as if one were to insist that the rules of chess are
a poor guide to chess. The normal use of language just tells us about the nature of
emotions (or memory, or other mental phenomena) because it defines the relevant
concepts. This is not, I think, a happy place to end up, although it may be the result of
deep philosophical instincts and not really something that can be settled by debate
on grounds that all parties will accept.
However, there is something intriguing about the idea that folk psychology tells
us how the mind works. I have already insinuated that folk psychology is scientifically
inaccurate, but it does seem right to suggest that our ordinary understanding of
the mind is summed up in folk psychology. And this is very important. Ordinary
conduct, much public policy, childrearing, and a lot else all take for granted the
assumptions of folk psychology, and this makes it a very important subject of study
in its own right.
An account of the relation between philosophy of mind and philosophy of psych-
ology is a consequence of this position. By philosophy of mind I mean the philosophical
investigation of folk psychology, and the exploration of the concepts that are used in
unscientific, everyday description and explanation of human action and states of
mind. By philosophy of psychology I mean philosophy of science as done with respect
to the sciences of the mind, and hence the philosophical investigation of the scientific
description and explanation of human behaviour. On this way of putting things,
philosophy of neuroscience is a branch of philosophy of psychology, and both are parts
of philosophy of science. Philosophy of mind, on the other hand, looks more like a
branch of anthropology: it aims to understand the contours of the conceptual structure
of the mental.
A further upshot of doing things this way is that folk psychology will turn out to
have fewer theoretical commitments than is usually presumed. I do not think that folk
psychology is committed, for example, to representationalism—it is just committed to
beliefs and other mental states without much presumption as to their nature. Cognition
may or may not be representational (see Ramsey 2015 for a persuasive argument that
we should not treat cognition as necessarily representational), but folk psychology,
arguably, is not committed either way. One of the lessons of the past few decades is that philosophers have
been mixing up functionalism, which is an analysis of what mental states are, with
cognitive psychology, which is a scientific program aimed at understanding the basis
of human behaviour. Functionalism might be correct as an articulation of folk
psychology, but the specific models of cognitive psychology go beyond anything in
our ordinary conceptual repertoire. However, it will not do to evacuate folk psychology of
all commitments, and turn it into nothing but a predictive device. If folk psychology
is just about predicting human behaviour, then behaviourism counts as a branch of
folk psychology; it talks of persons and explains their behaviour in terms of condi-
tioning. Folk psychology must have some content, otherwise everything would be
consistent with it. But the autonomist does not think this content is open to revision
by the sciences.
We have three perspectives, then, on the relations between psychology and science:
integration, autonomy, and elimination. To begin with, I’ll assume that the psychology
at stake is folk psychology. This is a reasonable starting point, since the status of folk
psychology is important to all the perspectives I have mentioned. Eliminativism is
aimed at refuting folk psychology; integrationists have argued for its utility as a proto-
scientific psychology (even, in the case of Fodor (1987), that it is governed by laws just
like a proper theory); and autonomy has insisted that folk psychology describes aspects
of human nature that science cannot capture. However, the relation between neurosci-
ence and psychology has to consider the varieties of scientific psychology also, and that
is something I will come to.

3.  Three Questions


I have distinguished three philosophical programmes: integration, autonomy, and
elimination. For more detail about where the three perspectives agree and differ,
it may be helpful to set things up like this: each of them shares an answer with one or another of
the others on two of the three methodological commitments, but no two of them share the same two. These contrasts
can be set up in terms of answers to three questions:
1) Does folk psychology make empirical commitments?
2) Is folk psychology true (or alternatively, predictively and explanatorily powerful)?
3) Does folk psychology define the top level in an explanatory hierarchy?
Integration and elimination both answer yes to (1), and integration also answers yes
to (2) and (3); autonomy says yes to (2) and no to the others. Eliminativism says no to
(2) and (3). So each position shares an answer with one of the others on two of the three questions, but not the same two.
3.1  Does folk psychology make empirical commitments?


First, both eliminativism and integration agree on the ontological commitment of folk
psychology. This commitment means that, as Bruno Mölder (who does not share it)
says (2010, 134) ‘an account of the nature of mental facts must uncover their functional
essence’. It assumes that when we talk about mental states we are, albeit in a very
roundabout way, talking about the brain. The job of mental state talk is to fix reference,
and determining the underlying nature of the state is an empirical matter.
Eliminativists, of course, think that nothing corresponds to the posits of folk
psychology, in the same way that nothing corresponds to the posits of other failed
theories, such as ether theory or phlogiston chemistry. But eliminativists agree with
integrationists that folk psychology is a theory with genuine empirical commit-
ments; one that is in the business of trying to accurately represent the mental world
in such a way that science can go on to specify what really corresponds in nature to the
posits of the theory. If you are an eliminativist, you think that folk psychology is in the
business of explaining and predicting human behaviour, and that science simply does
these jobs better.
Feyerabend (1963) raised questions about our capacity to reduce folk psychology to
physiology, and argued that any successful materialist theory would undermine folk
psychology by showing that there was really nothing mental at all. The most prominent
eliminativists, however, have questioned the scientific credentials of folk psychology on
other grounds. Churchland (1981) thought the relevant science was neuroscience,
whereas Stich (1983) foresaw a future cognitive psychology based on syntactic, but not
semantic, properties of mental states. But both agreed that the cognitive science of the
future would elbow folk psychology off the stage, because it would do a far superior job
of explaining and predicting phenomena of interest, and connect with neighbouring
sciences. Champions of folk psychology stress how it lets us co-ordinate actions and
predict the behaviour of others. The classic statement is arguably Fodor (1987, 2–16).
He argued that making inferences about people’s behaviour, based on their utterances
and other pieces of information, involves filling in a lot of gaps. This is achieved using
information derived from our stock of knowledge about propositional attitudes and
the ensuing inferences look like fallible scientific reasoning. Fodor held that folk
psychology is reminiscent of a scientific theory, in that it consists of generalizations
that specify the behaviour of the unobservable (mental) states that cause the observable
phenomena. Predictive accuracy is taken to be a reason for believing in the approximate
truth of a theory, as when the predictive accuracy of Mendelian genetics was held to be a
reason for believing in the existence of genes. And if you think that, you are also likely to
think that, whatever genes are made of, they are very likely to have the properties that
Mendel’s laws specify. Similarly, the argument for integrationism is that the entities that
feature in the laws of folk psychology, if those laws are more or less predictively accurate,
should have the properties that the laws say they have. What this means is that mental
states cause behaviour and are made true, or fulfilled, by their relation to the world.
This is all familiar territory. Integrationists think that folk psychology tells us what
mental states are, and that neuroscience tells us what they are made of. Eliminativists
think that folk psychology tries to tell us what mental states are, but that cognitive science,
and general philosophical considerations about the nature of progressive scientific
theories, tells us otherwise. In fact, there is nothing in nature with the properties that
folk psychology assigns to mental states. But the critical point here is that integrationism
and elimination agree about the putative scientific standing of folk psychology as a
scientific theory with legitimate empirical commitments.

3.2  Is folk psychology true?


With respect to the second question, consider that autonomists disagree with eliminativ-
ists that folk psychology is false and will give way to science, but unlike integrationists
they do not think of folk psychology as setting out the topmost level of scientific
explanation. For the eliminativist, the question of the relation between folk psychology
and neuroscience is simple to answer: folk psychology is rubbish, and neuroscience
will bury it. One way of putting the eliminativist’s position is this: she accepts that folk
psychology is a theory with empirical commitment, but thinks that the science is likely
to completely overturn it. This contrasts with an integrationist position like that of the
psychofunctionalist, who also accepts that folk psychology makes empirical commit-
ments, but thinks that these refer to real mental activities, which will be given further
elaboration by a scientific psychology. An integrationist thinks that we will discover
the empirical nature of folk psychological posits. Autonomists think that folk psych-
ology is true in the sense (at least) that it gets things right, even if nothing in the world
corresponds to folk posits. Folk psychology can get it right in the sense of making
human behaviour explicable in terms of the relevant shared norms and practices that
we use to understand each other. These are consistent with, but make no commitments
about, whatever ontology the sciences of the mind converge on. Importantly, autonomists
are not eliminativists. Eliminativists think that folk psychology and the sciences of the
mind are in competition, and autonomists do not.
What’s the competition? Well, it is clear that autonomists like Mölder reject the idea
that (2010, 134) ‘an account of the nature of mental facts must uncover their functional
essence’. (That is the discussion we just had.) The job of mental state talk is not to fix
reference (p. 133) or identify empirical posits. For Mölder, the folk specification exhausts
the nature of the mental; folk psychology alone tells us what mental states are. Mölder
calls his view ascriptionism, because it says that all there is to having a mind is being
ascribed one according to the norms of folk psychology. His approach is rooted in the
work of Dennett (1978) and Davidson (1984, 1990), but without their stress on ration-
ality as a guiding assumption of ascription. It is the totality of evidence, rather than
assumptions about rationality, that should guide one’s ascription. The basic point,
though, is common to the entire autonomist tradition. It can be perfectly true, for
example, that the machine you are playing chess against wants to get its queen out early
(Dennett 1978) without there being anything at all in the machine that corresponds
to or realizes the desire to get its queen out early.
So autonomists and integrationists can agree that folk psychology can be true,
but yet still disagree over its nature. An autonomist thinks that its truth depends
on the correct application of folk psychological concepts. An integrationist thinks
that its truth depends on the empirical discovery of facts about the mental states
posited by folk psychology. These mental states make up a distinct level of explanation
that stands as the topmost explanatory level in cognitive science. For both auto-
nomists and eliminativists, folk psychology does not define the topmost level of
the cognitive sciences. The autonomist says this is because they are not trying to
talk about the same subject matter. The eliminativist says that it is because folk
psychology is false.

3.3  Does folk psychology define the top level in an explanatory hierarchy?
Autonomists, like integrationists, think of folk psychology as true, but they disagree
with them about what sort of institution folk psychology is. The autonomist rejects
the idea that folk psychology is in the business of specifying the explananda of the
sciences. All there is to the nature of the mental is what folk psychology says, and
therefore further scientific investigation cannot tell us what the mind is really like.
Mölder says that ‘mental (and other) terms have a meaning that is acquired when the
language is acquired’ (2010, 145).1
The integrationist and eliminativist points of view are quite straightforward on this
score. Integrationists see scientific psychology as a regimentation of folk psychology.
That regimentation, done in intentional terms, then defines the explananda of any
science of the mind. Cognitive theories point ‘toward the way human brains actually
perform cognitive processes’, which are complicated information-processing types
that have folk psychological descriptions at the personal level (Gerrans 2014, 23–4).
The integrationist might disagree over whether reductionism is a plausible strategy
when it comes to the explanatory structure of cognitive psychology (e.g., laws or mech-
anisms), but will probably agree about the viability of a research programme that looks
for the biological realization of those structures. Eliminativists, on the other hand,
expect any scientific elaboration of cognitive phenomena to end up looking not at all
like folk psychology. Our basic folk kinds may not really admit of regimentation, as
opposed to sundering into successor concepts none of which look like good successors
to the original. Or it may turn out that neural evidence leads us to see psychological

1  This, I take it, marks a point of disagreement between fully fledged autonomy theory and the position
of someone like Dennett, who agrees on the heuristic power of folk psychology, but not on its impervious-
ness to empirical correction. Dennett (1978, xx) has claimed to be a functionalist about the posits of folk
psychology that belong in a mature psychology as well as about the novel entities that such a mature psy-
chology would embrace. About other mental items, including the posits of folk psychology that will not
survive the development of psychology, he is an eliminative materialist. For theorists like Mölder (2010),
Bennett and Hacker (2003, 2012), Gauker (2003), or Tanney (2013) this is a misstep, because it puts ordinary
mental terms before the tribunal of empirical vindication, where they do not belong.
processes as sharing basic properties. Alexander et al. (1986) persuaded many scientists
that motor processes and cognitive processes shared some anatomically very similar
cortical-subcortical circuitry. The evidence strongly suggests the existence of a dorso-
lateral prefrontal circuit that is anatomically similar to a number of motor circuits and
is involved in a variety of cognitive tasks. Damage to this circuit (Tekin and Cummings
2002) produces diverse psychiatric symptoms including memory deficits, executive
shortcomings and perseverations. It could be that many different psychological
processes share this circuitry and we carry on modelling them independently of each
other. It could also be that our folk psychological taxonomy of processes involved in
memory, action, planning, and motivation needs to be revised, since they are not as
separable as we have thought.
I have argued that elimination and integration give an affirmative answer to the
question of whether folk psychology has ontological commitments; autonomy and
integration jointly affirm that folk psychology is true; and autonomy and elimination
agree that folk psychology does not define the topmost level in a hierarchy of mental
sciences. I will now briefly argue against integration, and then explore some of the
consequences of setting out the intellectual landscape as I did above.

4.  Folk Psychology and Psychology


Consider a simple argument Gerrans makes on behalf of the integrationist (2014, 21–2).
There must be, says Gerrans, ‘an explanatory relationship between neuroscience and
folk psychology’, because, for example, someone with amnesia will have quite different
experiences and behavioural capacities after a brain injury. The most plausible hypoth-
esis is that she is suffering from memory loss caused by brain trauma. I agree about the
plausibility of the hypothesis, but I don’t agree that it shows the truth of integrationism
if we understand integrationism as a thesis about the relation of neuroscience and folk
psychology. The hypothesis of memory loss attendant on brain injury is only evidence
for integrationism if folk psychology is correct about memory and the neurobiological
theory constructed to explain memory loss involves the biological realizations of the
phenomena posited by folk psychology. Nevertheless, it is entirely possible that the
correct theory of memory looks nothing like the way memory is treated in folk psych-
ology. Gerrans also says that an important step in understanding amnesia and similar
conditions is understanding how ‘the brain encodes information acquired in experi-
ence and then reconstructs representations of that information when subsequently
cued’. Again, this looks like a substantive scientific research programme rather than an
articulation of folk psychology. Is it really part of folk psychology to assume that the
brain encodes perceptual information and retrieves it on cue? The mere existence of
memory and its frailty in the light of brain injury is not a vindication of folk psych-
ology. An eliminativist can perfectly well argue that Gerrans’ picture is consistent with
neuroscience showing folk psychology to be quite wrong about memory. And the
autonomist can argue that the scenario is one in which neuroscience tells us nothing
about the real nature of memory, just that it can be affected by physical injury, which
we all knew anyway.
Gerrans would likely not disagree with this, because his real goal is to show the
power of cognitive theorizing, using our best guesses about ‘the encoding, storage and
reconstruction of representation of life events’ (2014, 23). Gerrans would surely be
happy to agree that the final theory might look like nothing that folk psychology would
recognize. Nonetheless, this is an instructive passage. In arguing for integration,
Gerrans claims that we need to recognize that ‘cognitive theories’ (p. 23) play the role of
‘pointing toward the way human brains actually perform cognitive processes’ and that
this will be achieved by discovering relations of causal relevance among mechanisms
at different levels. However, it is unclear why this should all amount to an integrationist
manifesto: the multi-level picture and the idea that cognitive theories point towards
what we need to explain are compatible with eliminativism. Gerrans disagrees because
he thinks of cognitive processes as belonging to a psychological level. When discussing
Andreasen’s (1999) avowedly reductive and brain-based theory of schizophrenia, Gerrans
points out that she ‘invokes the cognitive properties of neural circuits’ (2014, 17). This
is correct, but Gerrans draws from it the conclusion that Andreasen is not really an
eliminativist but an advocate of bottom-up explanation. The latter claim is correct, but
the former point about Andreasen not really being an eliminativist only follows if we
regard any talk of cognitive properties of the brain as inconsistent with eliminativism.
Should we grant this?
It might not seem to matter. I have suggested that eliminativism is the view that folk
psychology is false and will need at least partial replacement and heavy revision.
Gerrans is seemingly arguing that it is instead the view that the brain has no cognitive
properties at all. I think my way of setting out the intellectual territory is superior.
It keeps the emphasis on the status of folk psychology, where the main action around
eliminativism has been. Gerrans argues that cognitive theories point towards the way
the brain really works, but this is doubly ambiguous: Gerrans moves between folk
psychology and cognitive modelling more generally, and there is a stronger and a weaker
reading of the epistemic role of ‘pointing towards’. On the stronger reading—which
I don’t believe Gerrans buys for a moment—cognitive theories act as a conceptual con-
straint that shows what commonsense requires for something to count as a mental
state. Then we can look to science to see what in fact meets those constraints in the real
world. This is the picture often associated with the Canberra Plan (Jackson 1993, 1998).
However, there is a much weaker reading. Cognitive theorizing could say merely that
science should aim to explain certain phenomena, and folk thought might pick those
phenomena out without thereby constraining what science is supposed to say about
them. As Ramsey (2015) says, you might agree that geology is concerned with what folk
thought counts as mountains without being committed to the view that geology must
be constrained by everyday beliefs about what mountains are really like. If anything
is to count as a posit of folk geology it is that—as everyone agreed for centuries—
a continent doesn’t move. And yet it moves.
Most integrationists would probably pick a position somewhere in between the
weak reading and the Protocols of the Elders of Canberra. The weak view is weak
enough to be compatible with eliminativism, especially since our investigation, even if
begun under the aegis of folk concepts, might end up dispensing with them. I do not
have time for a full-scale offensive on Canberra here, but the position is deeply
unattractive to naturalists, and does not seem to fit how successful science works.
I think the standard integrationist position, at least within cognitive science, is that
folk psychological constructs occupy the top of the hierarchy of levels of explanation,
an issue to which I now turn.

5. Levels
The integrationist picture goes something like this: the concepts of psychology,
especially those of folk psychology, explain behaviour through rendering it intelligible.
These psychological concepts work at the personal level; they talk about what people
do, think, and feel. But human behaviour rests on subpersonal mechanisms that
do things like assign meanings to phonological representations or compute visual
edges. The personal-level capacities can be analysed into other personal-level capacities
(Cummins 2000) and these can then be understood as expressions of a hierarchy of
biological processes.
The relation between the personal and subpersonal, as I just stated it, is part of a
broader picture in which the biological world is composed of levels. There are many
different ways in which levels talk is used (for a field guide, see Craver 2007, ch. 5),
but the conception of levels I am interested in here is what Cummins (2000, 129) calls
‘top-down computationalism’—as he puts it, the idea that the brain is a computer and
the mind is what it does, and this amounts to a version of belief-desire-intention
psychology. The picture of levels usually associated with top-down computationalism
is that of levels of explanation, or as Sterelny (1990, 43) puts it, there are three domains
in psychology, and a level for each. The top level specifies what the system does, one
level down specifies the information-processing means by which it does it, and the
base level specifies the physical realization. Sterelny calls the top level ecological; it
specifies the cognitive capacities we are interested in.
The picture is familiar from Marr’s (1982, 24–5) articulation of three levels of
explanation in cognitive science. There is general agreement that for Marr the inter-
mediate level describes the actual representations and algorithms that perform the
computation that enables some capacity and the lowest level tells us how brain systems
or other material substrate, such as the parts of a machine, can implement the algorithm.
There is much disagreement about how Marr thought the top—computational—level
should be described. Shagrir and Bechtel (Chapter 9, this volume) outline a view about
Marr’s computational level as defining different aspects of the phenomena to be explained.
Sterelny (1990, 45) reads Marr as defining an information-processing problem that the
organism has to solve. Egan (1995) insists that Marr’s computational level is characterized
purely formally, as specifying the function computed by the system, even though this
does not make the process perspicuous in the sense that Sterelny wanted.
The consensus among integrationists is that the topmost level in this threefold
structure should be understood intentionally (Egan (1995) gives copious references)
even if that may not be what Marr intended. As Sterelny notes, the ecological interpret-
ation (which describes what the system is doing) does not have to be limited to the
personal level. It can describe subcomponents within a system; nor need it be expressed
in folk-psychological terms, since it might apply to very simple creatures indeed.
However, the top-down approach that I am suggesting integrationism has adopted is
typically intentional at least in the case of persons. It identifies personal-level variables
at the topmost level, building a cognitive model of those processes, and then looks for
their physical implementation. It is a commitment of the integrationist approach that
these levels must be integrated.
Marr’s three levels are not ontological but different representations of the same
process. Marr was interested in understanding vision in the abstract, as a process that
could be multiply realized in diverse physical systems. But his system naturally lends
itself to an integrationist stance in the philosophy of mind, because it comports so well
with a multi-level explanation of human beings, in which the topmost level defines what
the system does and the subprocesses needed to carry out that task can be parcelled
out among biological components. Vision, like anything else biological, may be com-
prehensible in the abstract but it also has a particular realization in a species. We can
break the overall task down into functional components and assign those component
tasks to structural components of the organism. Breaking a problem down and show-
ing how the bits are solved, and then breaking down those bits and showing how they
are solved, is fundamental to how explanation works.
However, because of the demand that levels be integrated, this picture of explan-
ation needs to be fleshed out with an ontologically committed conception of levels, as
an ontological hierarchy. It was clearly and influentially set out in Oppenheim and
Putnam’s famous (1958) ‘Unity of Science as a Working Hypothesis’. They argued that
in principle, psychological laws could be reduced to statements about neurons, which
could be reduced to claims about biochemistry which could be reduced to atomic
physics, and thus we could have a successful micro-reduction of psychology to physics.
The hope, and bet, is that this reducing theory will be the theory of the very smallest
bits of nature. This conception of explanation as a nested cascade of laws no longer
commands widespread assent in the philosophy of the life sciences, though it remains
powerful in the reductionist debates in the philosophy of mind I mentioned at the
beginning of the chapter. This reductionist programme is being supplemented by a
mechanistic approach in the philosophy of biology and neuroscience. In recent years
philosophers have stressed the way in which explanation in many sciences, above all
the biological and cognitive, depends on finding mechanisms (Bechtel and Richardson
1993; Machamer et al. 2000; Craver 2007). Rather than seeing explanation as a search
for laws, we seek the parts within a system of which the structure and activities explain
the phenomena produced by the system. Philosophers disagree over exactly how to
characterize mechanisms, but it is agreed that mechanisms comprise (i) component
parts that (ii) do things. Strife arises over how to understand the activities of the parts.
Are they also primitive constituents of a mechanism or just activities of the constituent
components (Tabery 2004)? But it is generally agreed that a mechanistic explanation
shows how the parts and their interactions give rise to the phenomenon we want
to explain.
However, even though the mechanistic approach is not reductive in the way that
Oppenheim and Putnam expected, it still talks of relations among levels. Central to
Craver’s account of mechanistic explanation, for instance, are relations of causal rele-
vance between phenomena at an explanation, and relations of constitution between
levels (Craver 2002, 2007). Causal relevance is defined in terms of manipulability and
intervention. Levels of explanation, on this account, are actually descriptions of the
same processes at different levels of resolution. A delusion can be understood in per-
sonal terms as a psychotic episode in the life of an individual that depends on relations
between different psychological processes in different brain systems. These in turn
involve cells whose operations can be studied in terms of the systems that constitute
them, and on down to yet lower mechanistic levels. On this account, explanation in
neuroscience, as in biology more generally, involves describing mechanism(s) at each
level in ways that make apparent the relationships between causally relevant variables
at different levels (Woodward 2010). Mechanisms, for Craver, are divided into compo-
nents, and these components themselves may do things—those component deeds in turn
may receive a mechanistic explanation (2007, 189). This progressive decomposition is
always relative to a behaviour of the overall system which provides the phenomenon to
be explained.
My reading of integrationism (especially in Gerrans  2014) is that it is entirely
compatible with both the nomological reading of natural hierarchies that we see
in  Oppenheim and Putnam, and with a mechanistic hierarchy. The domains of
psychological theory may be explanatory rather than ontological, but the level
of  physical realization in the brain hooks up the explanatory picture with a meta-
physical one. Because the explanatory enterprise aims to bottom out in brain systems,
it uses as explananda the entities of an ontological level. And once we have aligned
the explanatory picture with the metaphysics, we can go on to further decomposition
of brain systems.
The integrationist picture thus aims not just to explain psychological processes at
different levels, but to situate them within a broader picture of the world, by showing
how cognitive processes can be understood, via levels of explanation, as situated in the
hierarchy of levels of ontological composition.
The Oppenheim–Putnam picture is a very powerful and natural portrayal of a vision
of explanation tied to a vision of the world. The world is a mereological hierarchy of
smaller entities nested within bigger ones, and ultimately explaining how things work
involves showing how the higher levels emerge from the lower. As well as expressing
the metaphysics that once dominated modern science and philosophy, the picture also
fits with an idea of explanation as involving showing how things work—taking the bigger
system apart to reveal the workings within it.
So I agree with Weiskopf (Chapter 3, this volume) that if we think of hierarchies
mereologically it leads naturally to the mechanistic perspective, because systems get
decomposed into their physical parts. (Though I think it is consistent with the original
hierarchy of law-governed levels that Oppenheim and Putnam envisaged too.) Weiskopf,
though, follows Herbert Simon (1969) in arguing that we can also construct hierarchies
that are interactive, rather than spatial or compositional. He argues that many systems
cited in social-scientific explanation look like that, and that cognitive models are like
them in being ‘neutral on issues of spatial organization and the shapes of components’
(Chapter 3, this volume). Weiskopf argues that this supports the claim that cognitive
models are autonomous in his sense; their explanatory power does not rest on further
mechanistic support. As he notes, the mechanist’s response is to urge us to see cognitive models
as sketches—in effect, specifications of processes at the ecological level that will
ultimately get a mechanistic explanation. Weiskopf says that this response is prima
facie wrong because it denies that a cognitive model ‘can be ideally complete and accurate
with respect to how it represents a system’s psychological structures and properties’.
I have said that integrationists want to show how cognitive models fit into a broader
picture of the world by supplying their physical realization, and that eliminativists
often want to correct cognitive models using lower-level data. On the face of it,
Weiskopf ’s argument for the autonomy of cognitive models is a problem for both
positions. How might they respond?
To begin with, Weiskopf ’s appeal to social sciences needs tempering, since they can
employ mereological conceptions of levels. At the same time as Oppenheim and Putnam,
Kenneth Waltz (1954) had something very like a picture of levels of explanation—
although he called them ‘images’. Waltz’s problem was explaining why wars start, and
he argued for three images each of which made possible a different explanation. One is
human behaviour—wars start because people are aggressive. A second is the nature of
polities—wars start because of the internal dynamics of states. A third is the nature
of the state system—wars start because of the threats and incentives faced by nations in
a system of actors with no exogenous control. Waltz did not have the metaphysical
preoccupations of a philosopher, but there are part–whole relations among his images:
people constitute states, which in their turn make up the state system. The issue for
Waltz is that we do not know a priori where we will find the load-bearing parts of the
explanation. And, in principle, the properties at any level might be important. This
contrasts with Weiskopf ’s example of a Central Bank. Though it is staffed by human
beings, or at any rate bankers, it is not constituted by them, and it is hard to see how
the different possible compositions of a Central Bank could make a difference to its
monetary authority. On the other hand, the psychology and ideological commitments
of senior officials within the organization might make a difference to the policies that
express that authority.
For Weiskopf ’s argument about the autonomy of cognitive modelling to work, the
models would have to be explanatory irrespective of their physical implementation.
I have elucidated the integrationist picture as one in which the top level of explanation
characterizes the mental process to be explained, abstracted away from the details of its
implementation. But I have also linked the integrationist with a metaphysical picture.
What gets integrated is a cognitive description that borrows key assumptions of folk
psychology, and it gets integrated into a metaphysical picture of the world. What has
become accepted as the classic way to do this is the mechanistic strategy, outlined
above, that descends ultimately from Simon (1969) via Wimsatt (1976) and Bechtel and
Richardson (1993), and emphasizes the strategies of decomposition and localization.
So, a delusion can be understood in personal terms as a psychotic episode in the life
of an individual that depends on relations between different psychological processes.
These can be realized in different brain systems.2 These in their turn involve cells whose
operations can be studied in terms of the systems that constitute them, and on down to
the molecular level. This is an instance of a particular analytic strategy in which the
biologically significant capacities of a whole organism are explained by breaking down
the organism’s biology into a number of ‘systems’—the circulatory system, the digestive
system, the nervous system, and so on—each of which has its characteristic capacities.
These capacities are in turn analysed into the capacities of their component organs
and structures. We can reiterate this systemic concept of functions through levels of
physiology, explaining the workings of the circulatory system, the heart, certain kinds
of tissue, certain kinds of cell, and so on.
The attraction of this picture is that it solves what Cummins (2000) calls Leibniz’s
problem or what Jackson (1998) calls the location problem—it shows how cognitive
abilities can be not just explained but also fitted into our picture of the physical world.
Weiskopf offers a picture of explanation on which the second, metaphysical component
of the integrationist picture is missing. How big a problem is that?
There are two issues here. One is the extent to which we can have confidence in
cognitive models as complete explanations. The second issue is whether autono-
mous cognitive models can fit into the physicalist picture of the world. I have presented
the mechanistic picture as a way to make that fit complete. But I am not saying that an
integrationist needs to be committed to the mechanistic picture of decomposition
and localization within a hierarchy. Integration does, however, need an account of how the
cognitive level fits into the world. Such a picture does not have to be that of levels of
mechanism. However, in some cases, understanding the mind in terms of nested
mechanisms does have a straightforward appeal; in applied sciences we are concerned
very directly with interlevel relations. Psychological processes are vulnerable to brute
causes, and the mechanistic picture makes these interlevel relations apparent. However,

2  They can also be realized in relations to the world. For the sake of simplicity, I have written throughout
in individualistic terms, but in many cases the supervenience base of the psychological will include chunks
of the surrounding environment.
there is reason, as I shall now argue, for wondering if the acceptance of widespread
bottom-up influences threatens to nudge the integrationist in an eliminativist direction.
The top-down approach, and with it the integrationist picture, is only plausible
if  we  can find an abstract characterization of the mind that vindicates at least the
broad outlines of folk psychology. However, the lessons of cognitive science strongly
suggest that understanding the way the human mind works cannot be done independently
of understanding its neurological implementation. An abstract understanding of
mentality (which can be implemented in some physical system) may be attainable,
but  the problem for cognitive science is the empirical one of understanding how
the human mind/brain works. The constraints on this enterprise need to come from
biology, not folk psychology. There is little reason, judging from the history of science,
to be confident that in general folk thought is a good guide to the universe, and no reason
to think that folk psychology is an exception. Without neurobiological constraints
that tell us something about how our brains solve various computational problems,
we might end up with options that make conceptual sense but are in fact biologically
implausible. A purely abstract specification might miss what is really going on.
Consider an old classic, Goodale and Milner’s (1992) dual visual system model that
posits distinct anatomical processing streams for visual information. The first stream
defines the ‘What’ system: it runs ventrally from the primary visual cortex (V1) to the
temporal lobe (areas TEO and TE) and is primarily involved in object recognition.
The second, dorsal ‘Where’ system runs from V1 into the posterior parietal cortex
(PPC), and is involved in processing the object’s location relative to the viewer. Would
you have designed a system like that? There is little reason to think that reflection
on folk psychology, rather than experimental data, would have come up with such a
picture of vision. It is true that the cognitive concepts built into most contemporary
cognitive neuroscience are just inherited from our wider cultural tradition, but this is
starting to change as neuroscientists begin to ask questions about phenomena that
are distinctly their own and remote from traditional psychology and philosophy of
mind (e.g., What is the job of the brain’s default network? What does the dopamine
spike represent?). Maturing sciences typically transform our commonsense conceptual
structures, and we should expect neuroscience to do the same, ultimately making the
truth about human nature as remote from ordinary people as most other scientific truth
(Churchland 1993, 2006). As we learn more about the brain, the old psychological
verities will probably fade away to be replaced by a new scientific vocabulary that will
take decades or centuries for us to come to terms with.
If this is right it suggests two things: first, the psychological level is not independent
of the neurological, at least as an empirical matter. And that is what is important for
science. Second, folk psychology is not vindicated. It is likely a poor guide to how the
mind works. These two considerations are decisive objections against integrationism.
The second is a problem for the autonomy theorist too, but only if she treats folk
psychology as a part of science rather than as an independent structure in our
commonsense repertoire. For further elaboration of these thoughts, I will turn to
the concept of belief, which is central to folk psychology on anybody’s reckoning. Our
repertoire of psychological concepts is not limited to belief and desire, even though
these have dominated philosophical discussion of folk psychology and the related
empirical study of ‘theory of mind’. Folk psychology is much broader and more varied
than that. Most people who read this will have been raised in cultures that explain
people’s behaviour not just as a product of beliefs, desires, and other propositional
attitudes but also in terms of, to take some simple examples, affective states like moods
and emotions, and relatively enduring traits of character such as piety, bravery, intelli-
gence, or sloth. The boundaries of the mental are not easy to discern, but all of these
seem to qualify, and they are usually forgotten when folk psychology is discussed.
(Though not by Churchland (1981) who made the explanatory poverty of the non-
doxastic parts of folk psychology central to his argument.)
However, because belief is so central to the debate, it is worth considering as a case
study of some of the ways in which folk psychology is under stress, and the implications
of this stress.

6. Belief
Some aspects of our psychology don’t matter to us. If experts come and tell you that
you don’t know how your brain parses sentences or responds to pheromones, you
might not be bothered. But other aspects matter a great deal—we all care about our
memories, our emotional life, or the sources of our behaviour, and do not want to be
told that we are systematically wrong about them, especially not if the truth is expressed
in scientific language that is incomprehensible to us. The truth about fermentation
might be hard to grasp, but it does not interfere with your drinking. The truth about
love or belief might be more disquieting. The concept of belief is perhaps less central
to the Western self-conception of humanity than some other parts of our folk psycho-
logical repertoire, but it is worth thinking about in this context because it is not only at
the heart of philosophical elaborations of folk psychology and the philosophy of mind,
but also at the heart of the way philosophers think about central topics in epistemology and many
other areas (like the theory of delusion).
In this section, I will sketch some of these issues, and argue that the concept of belief
is, as some eliminativists have argued, scientifically confused beyond redemption.
I shall suggest that the problems with the concept that make it so unfit for scientific
work are solvable (if at all) through cognitive neuroscience, but also that there are
non-scientific grounds for keeping the concept going. I think the same argument could
be made for many aspects of folk psychology.
Let me first clarify that I am talking about belief, not all kinds of representational
states with content. Nor do I expect the personal-level characterization of human beings
to disappear. Stich (1983) argued not just that cognitive science would do without folk
psychological concepts, but that it would do without any representational states at all.
His argument, in brief, was that representational states of mind—beliefs above all—are
vague and context sensitive. These properties break the rule that properties mentioned
in scientific explanations of behaviour supervene on current physical states of the
organism (see Egan 2009 for a clear review). The question whether science makes use
of representational systems isn’t really open to doubt any longer: many areas of
psychology and neuroscience take for granted the existence of semantically interpretable
internal states. The assumption that inputs and outputs to and from brain components
represent distal features of the world has been part of neuroscience since the nineteenth
century (Glymour 1992). What is open to doubt is whether representation, as used in
the sciences of mind, has the properties that philosophers have found in intentional
content, as presupposed by folk psychology. Although I am not taking a stand on that,
I do want to suggest that the concept of belief will do very little useful explanatory work
in any mature cognitive science. But it might nevertheless be decomposable into a family
of successor notions that can suggest and guide useful neuroscientific hypotheses.
One of Stich’s points is that there are cases in which it is unclear whether the concept
of belief really applies at all. His famous example is that of an elderly woman, Mrs T.,
suffering from severe memory degradation. Mrs T. is able to state that ‘McKinley was
assassinated’ even though she cannot, owing to neurodegenerative disease, say who
McKinley was, whether he is alive or dead, or what assassination might be. So, does she
really believe that McKinley was assassinated? The tragic yet hilarious story of Mrs T.
points to vagueness in our concept of belief, and this example could be multiplied. In
discussing the doxastic theory of delusions, Gerrans (2014, xiii) notes that although
delusions resemble straightforward empirical beliefs in some ways, they also possess
features that make it hard to assimilate them to beliefs, ‘being somehow insulated
from public standards of justification, openness to dialogue and disconfirmation by
obvious counterevidence’. Are these features, missing in delusion, in fact part of our
everyday concept of belief? I think it is hard to tell, but they are certainly part of the
philosopher’s conception of belief, because that conception is tied to the notion of
rational legitimation.
Modern society is built on understanding and manipulating the natural world,
which leads to increased technological growth and concomitant economic prosperity.
The basis of all this is the rational legitimation of hypotheses. Warrant for scientific
theories rests on developing standards of evidential support and justification, and
these standards have increasingly worked their way into the fabric of other forms
of intellectual life (such as the growth of archival research and broadly scientific
standards of confirmation among nineteenth-century historians). These standards have
also, as Weber and others (e.g., Gellner 1990) have emphasized, ramified into political
and social life, so that in modern liberal societies every social arrangement is expected
to submit to the test of rational legitimation. Alongside this insinuation of rational
legitimation into every walk of life there has grown up the philosophical project of
understanding what that legitimation consists in, and the entanglement of the concept
of belief with the epistemic virtues that Gerrans finds so closely connected with it.
Now, it is clear that people can believe something even in the absence of anything
like the epistemic norms that philosophers have connected to belief. Nozick (1993)
imagines a mother who persists in believing in her son’s innocence despite his criminal
conviction for murder. Let’s suppose that her child is guilty and that everyone else
believes this because it is the correct inference to make given the evidence. The mother
has no relevant evidence that others don’t possess. However, the cost to her of admitting
her child’s guilt is too great. She doesn’t put it like that. To do so would admit that she
only believes in his innocence because to concede his guilt would be too painful. She just
doesn’t believe that her son is guilty.
Nozick (1993, 86) argues that the mother in this case is not being irrational by
refusing to believe in her son’s guilt. The disutility of accepting the belief is so great that
it undercuts what Nozick calls the ‘credibility value’ she attaches to the belief that her
son is guilty. Not everyone agrees (although in my experience, mothers tend to), but we
nonetheless feel the force of the idea that it would be normal to let one’s personal stake
in the case outweigh the power of the evidence. Of course it may not be epistemic good
practice to do this, but that’s my point. We can distinguish narrowly epistemic assessment
of a belief as warranted or rational from a descriptive psychological assessment of how
typical human belief fixation actually works. Rationality may be the wrong concept to
use when we judge the murderer’s mother. We might prefer to judge that she is irrational,
but give her a pass anyway; perhaps she is irrational but not unreasonable, or maybe
she is only being human. The crucial point is that we recognize that she has come by
her belief about her son in a way that is thoroughly normal, even though it is not
epistemically proper. Beliefs are often caused by processes that do not justify them;
everyday cognition is notoriously prone to wish-fulfilment, bias, and the influence of
factors like class position, ideology, or loyalty to a research programme. However much
we deplore it, we can make sense of it according to our normal ways of understanding
human nature.
We can make sense of our readiness to accept these epistemically flawed yet fully
human types of belief fixation if we acknowledge that folk psychology contains
(or perhaps, exists alongside of) a folk epistemology that comprises expectations about
how ordinary human beings arrive at their beliefs. Many of these would fail Gerrans’
tests of justification and openness to counterevidence, because people very often form
beliefs in ways that are insulated from the standards embedded in epistemology. If a
belief aims at the truth, it will need to be sensitive to evidence and justification, but
often our beliefs have nothing to do with the truth. We recognize that people can form
beliefs in ways that violate epistemic norms—our understanding of human nature,
I suggest, includes causes of beliefs that do not justify them.
Tamar Gendler has recently (2008) advocated for supplementing our notion of
belief with one of alief. An alief is an automatic state that has some belief-like features,
exerts some control over behaviour and cognition, and is typically in tension with
belief. Hume considers the case of a man hung from a high tower in an iron cage. He
‘cannot forbear trembling’, despite being ‘perfectly secure from falling, by the solidity
of the iron which supports him’ (Hume 1978, 1.3.13). Hume puts this in terms of
general rules (see Serjeantson 2005) learned from experience, with one rule supplied
by imagination—that great height is dangerous—set against another, drawn from
judgement—that iron supports are secure. Rather than judgement and experience, Gendler puts things in terms of alief and belief, which highlights the tension—does the
man in the cage really expect to fall? No. But he can’t help thinking it, or at least
imagining it. Gendler’s idea is reminiscent of a longstanding research programme in
psychology—appraisal theory—which argues that emotions depend on constant evalu-
ations by the organism of the surrounding environment. These may be quite explicitly
cognitive at the highest level, or depend on very basic innate sensory-motor responses
(see, e.g., Scherer 2001, 102–3). If you see a tiger in the bushes, you will run away
because your built-in response to threats will kick in. Is it correct to say that you
run because you believe that you are in danger? If so is the attribution of the belief in
the case of the tiger the same sort of attribution that I make when I say you believe
that the archival evidence suggests that the Royal Navy agreed to the Anglo-German
naval treaty of 1935 in order to retard the growth of the German submarine fleet
rather than to limit the maximum size of German battleships? That judgement does
not depend on innate responses to the environment, but on painstakingly acquired
abilities to sift historical evidence.
I suspect that the real terrain is more complicated than the simple tensions in
Hume’s or Gendler’s accounts; there are probably lots of distinct information-processing
streams in the brain that have some of the stereotypical aspects of belief, but our con-
cept of belief seems to lump together everything from quick and dirty appraisals to
measured responses to empirical evidence. It includes elaborate scientific hypotheses
as well as casual prejudices. This might do for everyday discussion, but it is unlikely
that a good scientific kind can capture every one of the states that we might refer to
when we talk about belief. And it is quite likely that scientific progress will be slowed if we try. Much labour has been expended, for example, on the question of whether delusions are really beliefs (for an excellent review see Bortolotti 2009). I doubt that this can be
settled, although clearly delusions have some belief-like properties. A decision theorist
can assign utilities to outcomes, numbers that represent the degree to which the agent
values them. Should this be seen as an operationalization of desire? Decision theory
clearly bears some relation to the belief-desire structure of folk psychology, but it will
not prosper by asking whether the states it ascribes to agents are really beliefs and
desires or something else.
We may very well need to draw a distinction between ‘bottom-up’ processes that
exert unreflective control, and ‘top-down’ processes that are more deliberative and
effortful. This broad distinction is compatible with lots of more precise scientific
projects, but the concept of belief may well not belong in any of them. That does not
mean that there is no place for autonomy, as I shall say in a moment. But it does suggest
that integration is a non-starter as a charter for cognitive science. The eliminativist
tradition was right to point out the potential revolution that science might work on our
culturally bequeathed folk psychology but wrong to think of the abolition of psychology
rather than its reform. But my bet is that the reform will reject folk psychology just as
thoroughly as eliminativism foresaw.
On the other hand, folk psychology might remain as an autonomous institution for
independent reasons. For many of our ordinary purposes, resolving its ambiguities
and clarifying its imprecisions are just not relevant, any more than understanding
physics and chemistry is needed to cook a meal or catch a ball. Understanding folk psychology as an autonomous part of the manifest image may not be exclusively a philosophical project, but it seems clear that philosophy has a part to play.
However, it might be that eliminativism and autonomy clash in important ways.
They might proffer competing explanations, but even if they do the real problem is
likely to be ethical, or political in a broad sense. Scientific advances have often caused
large-scale reforms of our conception of nature. The hard thing to accept is that it might
have a similar effect on our view of ourselves. That would be an epistemic advance, but the worry is that these new vocabularies will deprive people of their ability to understand themselves by replacing a familiar vocabulary with a remote, scientific one. We will
always need to be able to help people understand what they have become and how they
can improve their lives. The worry is that greater understanding of the mind will make
it harder for us to explain people to themselves.

7. Conclusions
I have tried to sketch three broad tendencies in the philosophy of mind on the relations
between folk psychology and its regimented cognitive image on the one hand, and
neuroscience on the other. The dominant tendency, I have suggested, is integrationism,
which sees folk psychology as sketching a level of explanation. This level can be
made clearer by cognitive theorizing. It can then be understood via a physical level of explanation and fitted into our conception of the world by locating it at an ontological level. The crucial issue separating this perspective from that of the eliminativist, I have
suggested, is the extent to which key constructs of folk psychology can survive amend-
ment in the light of neurological evidence, which threatens to dissolve our existing
concepts and introduce new ones. One way in which this might happen is that all our
folk concepts are supplanted by concepts drawn from neurophysiology or cognitive
science, as the first wave of eliminativism suggested (Feyerabend 1963; Churchland
1981; Stich 1983). But as a later wave understood, there are intermediate positions
(Bickle 1992; Churchland 1993) in which some of the old conception is retained but
reimagined even as new concepts and posits are introduced. The position, then, is not
that psychology will disappear or that we will be left with only subpersonal explanations;
it is that the psychology needed to make cognitive science work in the future will be a
successor theory to the one we have now, and like all successor theories it will involve
a conceptual overhaul that makes some of the old projects and questions simply
impossible to carry on.
As with any intermediate reform, one might wonder if it is simply a version of the old
order—perhaps folk psychology is adaptable enough, or ontologically non-committed
enough, to change in response to the new sciences of the mind/brain without anyone
really noticing any difference?
There are two answers to this which I have foreshadowed, but will reiterate here. The
first is scientific. As bottom-up models of cognition emerge from new findings in
the  neurosciences, we should expect psychology to change radically as abstract
characterizations of human capacities make way for an inventory of new constructs.
This is the heart of the revisionist case against the top-down, belief-desire-intention-
based programme that drives integrationism. This is an empirical issue which will be
decided case by case.
My second answer concerns the fate of folk psychology. Given a sufficiently relaxed
view of its commitments, one might reason that as long as we retain a conception of
persons (and personal-level explanations), we can still speak of a folk psychology.
I think this is wrong. If all folk psychology is for is the prediction of human behaviour,
then behaviourism counts as a branch of folk psychology; it talks of persons but just
reinterprets them as bundles of responses to stimuli. But the real point about behav-
iourism is not that it falsifies the ontological commitments of folk psychology. It is that it
reimagines human beings in ways that make it difficult to understand how most of our
traditional self-conceptions and projects can carry on. And this is where the test and
significance of eliminative materialism will ultimately lie. If the new sciences of the
mind reinterpret human beings too substantially, we will risk losing our grip on what
matters to people. The integrationist perspective, in its various guises, aims to take the existing picture of human beings and fit it into our overall understanding of the world.
The eliminativist perspective expects that fit to be much harder to achieve—both the
confirmation and significance of eliminativism are, in the broadest sense, political. The
philosophical challenges involve both understanding the new sciences of the mind and
developing the resources to make human projects sustainable going forward.

Acknowledgements
Thanks to David Kaplan for helpful comments on an earlier draft. I am also grateful for discus-
sion and feedback from Paul Griffiths, Dan Hutto, and audiences at the Philosophy Department
at the University of Sydney and the Cognitive Ontology Workshop at Macquarie University,
June 2016.

References
Alexander, G. E., DeLong, M. R., and Strick, P. L. 1986. ‘Parallel Organization of Functionally
Segregated Circuits Linking Basal Ganglia and Cortex’. Annual Review of Neuroscience 9:
357–81.
Andreasen, N. C. 1999. ‘A Unitary Model of Schizophrenia: Bleuler’s “Fragmented Phrene” as Schizencephaly’. Archives of General Psychiatry 56: 781–7.
Bechtel, W. and Richardson, R. C. 1993. Discovering Complexity: Decomposition and Localization
as Strategies in Scientific Research. Princeton, NJ: Princeton University Press.
Bennett, M. and Hacker, P. 2003. Philosophical Foundations of Neuroscience. Chichester:
Wiley-Blackwell.
Bennett, M. and Hacker, P. 2012. History of Cognitive Neuroscience. Chichester: Wiley-Blackwell.
Berridge, K. and Robinson, T. 1998. ‘What Is the Role of Dopamine in Reward: Hedonic
Impact, Reward Learning, or Incentive Salience?’ Brain Research Reviews 28: 309–69.
Bickle, J. 1992. ‘Revisionary Physicalism’. Biology and Philosophy 7: 411–30.
Bortolotti, L. 2009. Delusions and Other Irrational Beliefs. Oxford: Oxford University Press.
Boyer, P. 2010. ‘Intuitive Expectations and the Detection of Mental Disorder: A Cognitive
Background to Folk-Psychiatries’. Philosophical Psychology 24(1): 95–118.
Churchland, P. M. 1981. ‘Eliminative Materialism and the Propositional Attitudes’. Journal
of Philosophy 78: 67–90.
Churchland, P. M. 1993. ‘Evaluating Our Self-Conception’. Mind and Language 8: 211–22.
Churchland, P. M. 2006. ‘Into the Brain: Where Philosophy Should Go from Here’. Topoi 25: 29–32.
Craver, C. 2002. ‘Interlevel Experiments and Multilevel Mechanisms in the Neuroscience of Memory’. Philosophy of Science 69(S3): 83–97.
Craver, C. 2007. Explaining the Brain. New York: Oxford University Press.
Cummins, R. 2000. ‘ “How Does It Work?” versus “What Are the Laws?”: Two Conceptions of
Psychological Explanation’. In F. Keil and R. A. Wilson (eds), Explanation and Cognition.
Cambridge, MA: MIT Press.
Damasio, A. 1994. Descartes’ Error: Emotion, Reason, and the Human Brain. New York: Putnam.
Davidson, D. 1984. Inquiries into Truth and Interpretation. Oxford: Oxford University Press.
Davidson, D. 1990. ‘Representation and Interpretation’. In K. Said, W. Newton-Smith, R. Viale,
and K. Wilkes (eds), Modelling the Mind. Oxford: Clarendon Press: 13–26.
Dennett, D. 1978. Brainstorms. Cambridge, MA: MIT Press.
Egan, F. 1995. ‘Computation and Content’. Philosophical Review 104: 181–203.
Egan F. 2009. ‘Is There a Role for Representational Content in Scientific Psychology?’ In
D. Murphy and M. Bishop (eds), Stich and His Critics. Chichester: Wiley-Blackwell: 14–29.
Feyerabend, P. 1963. ‘Mental Events and the Brain’. Journal of Philosophy 60: 295–6.
Fodor, J. 1987. Psychosemantics. Cambridge, MA: MIT Press.
Fodor, J. 1997. ‘Special Sciences: Still Autonomous after All These Years’. Philosophical Perspectives
11: 149–63.
Gauker, C. 2003. Words without Meaning. Cambridge, MA: MIT Press.
Gellner, E. 1990. Plough, Sword and Book: The Structure of Human History. Chicago: University
of Chicago Press.
Gendler, T. S. 2008. ‘Alief and Belief ’. Journal of Philosophy 105: 634–63.
Gerrans, P. 2014. The Measure of Madness. Cambridge, MA: MIT Press.
Glymour, C. 1992. ‘Freud’s Androids’. In J. New (ed.), Cambridge Companion to Freud.
Cambridge: Cambridge University Press.
Goodale, M. A. and Milner, A. D. 1992. ‘Separate Visual Pathways for Perception and Action’.
Trends in Neurosciences 15(1): 20–5.
Graham, G. 2010. The Disordered Mind. New York: Routledge.
Hume, D. 1978. A Treatise of Human Nature, ed. David Fate Norton. Oxford: Clarendon Press.
Jackson, F. 1993. ‘Armchair Metaphysics’. In J. O’Leary-Hawthorne and M. Michael (eds),
Philosophy in Mind. Dordrecht: Kluwer.
Jackson, F. 1998. From Metaphysics to Ethics. Oxford: Clarendon Press.
Loewer, B. 2009. ‘Why Is There Anything Except Physics?’ Synthese 170: 217–33.
Machamer, P., Darden, L., and Craver, C. F. 2000. ‘Thinking about Mechanisms’. Philosophy of Science 67: 1–25.
Marr, D. 1982. Vision: A Computational Investigation into the Human Representation and
Processing of Visual Information. San Francisco: W. H. Freeman and Co.
McDowell, J. 1995. Mind and World. Cambridge, MA: Harvard University Press.
Mölder, B. 2010. Mind Ascribed: An Elaboration and Defence of Interpretivism. Amsterdam:
John Benjamins.
Morillo, C. 1990. ‘The Reward Event and Motivation’. Journal of Philosophy 87: 169–86.
Nettle, D. and Bateson, M. 2012. ‘The Evolutionary Origin of Mood and Its Disorders’. Current
Biology 22: R712–21.
Nozick, R. 1993. The Nature of Rationality. Princeton, NJ: Princeton University Press.
Oppenheim, P. and Putnam, H. 1958. ‘The Unity of Science as a Working Hypothesis’. In
G. Maxwell, H. Feigl, and M. Scriven (eds), Concepts, Theories, and the Mind-Body Problem.
Minneapolis: Minnesota University Press: 3–36.
Ramsey, W. 2015. ‘Must Cognition Be Representational?’. Synthese. DOI: 10.1007/s11229-014-0644-6.
Schacter, D. L., Addis, D. R., and Buckner, R. 2007. ‘Remembering the Past to Imagine the
Future: The Prospective Brain’. Nature Reviews Neuroscience 8: 657–61.
Scherer, K. R. 2001. ‘Appraisal Considered as a Process of Multi-Level Sequential Checking’.
In K. R. Scherer, A. Schorr, and T. Johnstone (eds), Appraisal Processes in Emotion: Theory,
Methods, Research. New York: Oxford University Press: 92–120.
Schroeder, T. 2004. Three Faces of Desire. Oxford: Oxford University Press.
Serjeantson, R. 2005. ‘Hume’s General Rules and the “Chief Business of Philosophers” ’. In
M.  Frasca-Spada and P. J. E. Kail (eds), Impressions of Hume. Oxford: Oxford University
Press: 187–212.
Simon, H. 1969. The Sciences of the Artificial. Cambridge, MA: MIT Press.
Sterelny, K. 1990. The Representational Theory of Mind. Oxford: Blackwell.
Stich, S. 1983. From Folk Psychology to Cognitive Science. Cambridge, MA: MIT Press.
Strawson, G. 1994. Mental Reality. Cambridge, MA: MIT Press.
Tabery, J. 2004. ‘Synthesizing Activities and Interactions in the Concept of a Mechanism’.
Philosophy of Science 71: 1–15.
Tanney, J. 2013. Rules, Reason and Self-Knowledge. Cambridge, MA: Harvard University Press.
Tekin, S. and Cummings, J. L. 2002. ‘Frontal–Subcortical Neuronal Circuits and Clinical
Neuropsychiatry’. Journal of Psychosomatic Research 53: 647–54.
Waltz, K. N. 1954. Man, the State and War: A Theoretical Analysis. New York: Columbia
University Press.
Wimsatt, W. 1976. ‘Reductionism, Levels of Organization and the Mind-Body Problem’. In
G.  Globus, I. Savodnik, and G. Maxwell (eds), Consciousness and the Brain. New York:
Plenum Press: 199–267.
Woodward, J. 2010. ‘Causation in Biology: Stability, Specificity, and the Choice of Levels of
Explanation’. Biology and Philosophy 25: 287–318.
7
Function-Theoretic Explanation and the Search for Neural Mechanisms
Frances Egan

1. Introduction
A common kind of explanation in cognitive neuroscience might be called function-
theoretic: with some target cognitive capacity in view, the theorist hypothesizes that
the system computes a well-defined function (in the mathematical sense) and explains
how computing this function constitutes (in the system’s normal environment) the
exercise of the cognitive capacity. Recently, proponents of the so-called ‘new mechanist’
approach in philosophy of science have argued that a model of a cognitive capacity is
explanatory only to the extent that it reveals the causal structure of the mechanism
underlying the capacity. If they are right, then a cognitive model that resists a transparent
mapping to known neural mechanisms fails to be explanatory. I argue that a function-
theoretic characterization of a cognitive capacity can be genuinely explanatory even
absent an account of how the capacity is realized in neural hardware.

2.  Function-Theoretic Explanation


Marr’s (1982) theory of early vision purports to explain edge detection by positing
the computation of the Laplacian of a Gaussian of the retinal array. The mechanism
takes as input intensity values at points in the image and calculates the rate of intensity
change over the image. In other words, it computes a particular smoothing function.
Ullman (1979) hypothesizes that the visual system recovers the 3D structure of moving
objects by computing a function from three distinct views of four non-coplanar points
to the unique rigid configuration consistent with the points. Shadmehr and Wise’s (2005)
computational account of motor control putatively explains how a subject is able to
grasp an object in view by computing the displacement of the hand from its current
location to the target location, i.e. by computing vector subtraction. In a well-known
example from animal cognition, Gallistel (1990) purports to explain the Tunisian
desert ant’s impressive navigational abilities by appeal to the computation of the dis-
placement vector to its nest from any point along its foraging trajectory. Seung et al.
(1996, 1998, 2000) hypothesize that the brain keeps track of eye movements by deploying
an internal integrator.
These examples illustrate an explanatory strategy that is pervasive in computational
cognitive science. I call the strategy function-theoretic explanation and the mathematical
characterization that is central to it function-theoretic characterization.1 (Henceforth,
I shall abbreviate “function-theoretic” as FT.) Theories employing the strategy explain
a cognitive capacity by appeal to an independently well-understood mathematical
function under which the physical system is subsumed. In other words, what gets
computed, according to these computational models, is the value of a mathematical
function (e.g., addition, vector subtraction, the Laplacian of a Gaussian, a fast Fourier
transform) for certain arguments for which the function is defined. For present pur-
poses we can take functions to be mappings from sets (the arguments of the function)
to sets (its values). A fully specified theory of a cognitive capacity will go on to propose
an algorithm by which the computation of the value of the function(s) is effected, and
describe the neural hardware that implements the computation.2
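To make the strategy concrete, here is a minimal sketch, in Python with NumPy, of three of the mathematical functions just mentioned. The function names, parameter values, and the one-dimensional simplification of the Laplacian-of-Gaussian filter are purely illustrative choices and are not drawn from the cited models.

```python
import numpy as np

# Displacement vector (cf. Shadmehr and Wise's reach model and Gallistel's ant):
# the difference between a target location and the current location.
def displacement(current, target):
    return np.asarray(target) - np.asarray(current)

# Integration (cf. Seung's oculomotor integrator): the running sum of signed
# velocity pulses yields the current, persistent eye position.
def integrate(velocity_pulses, initial_position=0.0):
    return initial_position + np.cumsum(velocity_pulses)

# A one-dimensional analogue of a Laplacian-of-Gaussian filter (cf. Marr):
# smooth an intensity profile with a Gaussian, then take the discrete second
# derivative, which exposes sharp changes in intensity.
def laplacian_of_gaussian_1d(intensities, sigma=2.0):
    xs = np.arange(-3 * int(sigma), 3 * int(sigma) + 1)
    kernel = np.exp(-xs ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    smoothed = np.convolve(intensities, kernel, mode="same")
    return np.diff(smoothed, n=2)
```

Each of these is a well-defined mapping from arguments to values, intelligible independently of vision, navigation, or oculomotor control.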
A function-theoretic description provides a domain-general, environment-neutral
characterization of a mechanism. It prescinds not only from the cognitive capacity that
is the explanatory target of the theory (vision, motor control, etc.), but also from the
environment in which the capacity is normally exercised. In fact, the abstract nature of
the FT characterization—in particular, the fact that as an independently characterized
mathematical object the function can be decoupled from both the environmental
context and the cognitive domain that it subserves—accounts for perhaps the most
significant explanatory virtue of function-theoretic characterization. The mathematical
functions deployed in computational models are typically well understood independ-
ently of their use in such models. Laplacian of Gaussian filters, fast Fourier transforms,
vector subtraction, and so on are standard items in the applied mathematician’s toolbox.
To apply one of these tools to a biological system—to subsume the system under the
mathematical description—makes sense of what might otherwise be a heterogeneous
collection of input–output pairs. (“I see what it’s doing . . . it’s an integrator!”)3 And

1
  This sense of function-theoretic characterization is not to be confused with various notions of
functional explanation in the literature, in particular, with Cummins’ (1975) notion of functional analysis.
However, a functional analysis of a complex system may involve function-theoretic characterization, in the
sense explicated in this chapter.
2
  The FT characterization, the specification of the algorithm, and the neural implementation correspond,
roughly, to Marr’s three levels of description—the computational, algorithmic, and implementation,
respectively. The topmost, computational, level of theory also adverts to general environmental facts
(‘constraints’) essential to the explanation of the cognitive capacity, as discussed below. See Egan (1995) for
elaboration and defense of this account of Marr’s computational level.
3
  Moreover, theorists typically have at their fingertips various algorithms for computing these functions.
Of course, the algorithms are hypotheses that require independent support, but the point is that the theorist
has a ready supply of such hypotheses.
since the FT characterization specifies the function intensionally, typically in terms of an algorithm for computing the function, it provides the basis for predicting the out-
put of the device in a wide range of circumstances that go well beyond the observed
data set.
But, of course, the theorist of cognition must explain how computing the value of the
specified function, in the subject’s normal environment, contributes to the exercise of
the cognitive capacity that is the explanatory target of the theory—for the motor control
mechanism, the capacity to grasp an object in nearby space, for visual mechanisms,
the capacity to see “what is where” (as Marr puts it) in the nearby environment. Only in
some environments would computing the Laplacian of a Gaussian help an organism
to see. In our environment this computation produces a smoothed output that facili-
tates the detection of sharp intensity gradients across the retina, which, when these
intensity gradients co-occur at different scales, correspond to physically significant
boundaries—changes in depth, surface orientation, illumination, or reflectance—in
the scene. Ullman’s structure-from-motion mechanism succeeds in recovering the
3D structure of a moving object by computing the unique rigid configuration consistent
with three distinct views of four non-coplanar points on the object only because, in
our world, most objects are rigid in translation (the rigidity assumption). Thus, to yield
an explanation of the target cognitive capacity, the environment-neutral, domain-
general characterization given by the FT description must be supplemented by
environment-specific facts that explain how computing the value of the specified
mathematical function, in the subject’s normal environment, contributes to the exercise
of the target cognitive capacity.
One way to connect the abstract FT characterization to the target cognitive capacity
is to attribute representational contents that are appropriate to the relevant cognitive
domain. Theorists of vision construe the mechanisms they posit as representing
properties of the light, e.g., light-intensity values, changes in light intensity, and, further
downstream, changes in depth and surface orientation. The inputs and outputs of
the  Laplacian/Gaussian filter represent light intensities and discontinuities of light
intensity, respectively. Theorists of motor control construe the mechanisms they posit
as representing positions of objects in nearby space and changes in joint angles. But the
fact that a mechanism characterized function-theoretically can also be characterized
in terms of representational contents appropriate to the cognitive domain in question
does not obviate the explanatory interest of the more abstract, domain-general,
mathematical characterization that is the focus of this chapter.4
I will have much more to say about function-theoretic explanation as we progress,
but I have said enough to set up the main issue of the chapter. I turn now to the
challenge from the new mechanists.

4
  In Egan (2014) I argue that representational contents are best construed as part of an explanatory gloss
on a computational theory, that they serve a variety of pragmatic purposes but are, strictly speaking, theo-
retically optional. The argument in this chapter does not depend on any particular view of representational
content. (Though see the postscript.)
3.  The New Mechanistic Philosophy


A mechanism is an object that performs some function in virtue of the operations of its
component parts and their organization. Mechanistic explanation is the explanation
of the capacities of a system by reference to the properties and operations of its
component parts and their causal organization.5
Proponents of the new mechanistic philosophy claim that all genuine explanation in
cognitive neuroscience is mechanistic explanation:
The common crux of mechanistic explanation, both in its current form and in forms stretching
back through Descartes to Aristotle, is to reveal the causal structure of a system. Explanatory
models are counted as good or bad to the extent that they capture, even dimly at times, aspects
of that causal structure.   (Piccinini and Craver 2011, 292)
Explanations in computational neuroscience are subject to precisely these same norms [the
norms of mechanistic explanation]. The cost imposed by departing from this view . . . is the
loss of a clear distinction between computational models that genuinely explain how a given
phenomenon is actually produced versus those that merely describe how it might possibly be
produced . . . And it is precisely by adhering to this distinction (along with a distinction between
merely describing or saving a phenomenon and explaining it), that one can identify models in
computational neuroscience possessing explanatory force.  (Kaplan 2011, 346)

Levy (2013) provides a useful gloss on the view that he calls explanatory mechanism:
“to understand a phenomenon one must look under the hood and discern its under-
lying structure” (107). The idea that a cognitive model has explanatory force just to
the extent that it reveals the causal structure of an underlying mechanism is expli-
cated in terms of what Kaplan calls a model-mechanism-mapping (3M) constraint on
explanatory models:
(3M) A model of a target phenomenon explains that phenomenon to the extent that (a) the
variables in the model correspond to identifiable components, activities, and organizational
features of the target mechanism that produces, maintains, or underlies the phenomenon, and
(b) the (perhaps mathematical) dependencies posited among these (perhaps mathematical)
variables in the model correspond to causal relations among the components of the target
mechanism.  (Kaplan 2011, 347; see also Kaplan and Craver 2011, 611)

The 3M Constraint is claimed to distinguish genuine explanations in cognitive neuroscience from mere descriptions and predictive devices (Kaplan 2011, 340). There is no
doubt that many explanatory models in cognitive neuroscience do conform to the new
mechanists’ strictures. But if the 3M Constraint is a necessary condition on genuine
explanation, then many promising cognitive models turn out not to be explanatory.6

5
  For characterization of mechanisms and mechanistic explanation see Bechtel (2008); Bechtel and
Abrahamsen (2005); Bechtel and Richardson (1993); Craver (2006, 2007).
6
  Kaplan intends the 3M Constraint to be both necessary and sufficient for genuine explanation:
“A central tenet of the mechanistic framework is that the model carries explanatory force to the extent
that it reveals aspects of the causal structure of a mechanism, and lacks explanatory force to the extent
In Sections 4 and 5 I argue that FT models often do not fit the mechanists’ account, yet
they can be, and often are, genuinely explanatory.

4.  Function-Theoretic Models and Mechanistic Explanation
Satisfying the 3M Constraint requires a decomposition of a cognitive system into
components. How is this crucial notion to be understood? According to Craver (2007)
“[c]omponents are the entities in a mechanism—what are commonly called ‘parts’ ” (128).
He goes on to characterize the relationship between mechanisms and their components
as follows:
Organization is the inter-level relation between a mechanism as a whole and its components.
Lower-level components are made up into higher-level components by organizing them spatially,
temporally, and actively into something greater than a mere sum of the parts.  (Craver 2007, 189)

It follows that an entity cannot be a component of itself. Moreover, the components (parts) of the system should be distinct structures, or at least, not characterized simply
in functional terms, on pain of trivializing the mechanists’ requirement on genuinely
explanatory accounts in neuroscience.7 As Piccinini and Craver (2011) note, the point
of mechanistic explanation is to reveal the causal structure of a system; this requires
that the components (parts) over which causal transitions are defined be understood
as structures of some sort.
Let’s consider the two parts of the 3M Constraint in turn. Condition (a) requires
that variables in the model correspond to components, activities, and organizational
features of the target neural mechanism. In the case of FT models the variables range
over the arguments and values of a mathematical function. But often, perhaps even
typically, nothing in the function-theoretically characterized system corresponds to
(states of) components of neural mechanisms.8 The Marrian filter computes a function
from intensity values at points in the image to the rate of intensity change over the
image. The Ullman structure-from-motion system calculates the unique rigid structure
compatible with three distinct views of four non-coplanar points. The presumption,
of course, is that these mathematically characterized systems are realized in neural
hardware, but in neither case is the implementing hardware specified at the level of its
component parts and their organization. Shadmehr and Wise (2005) hypothesize that

that it fails to describe this structure” (2011, 347). I am not here challenging the sufficiency claim.
Revealing the causal structure of a mechanism often is explanatory when we wish to understand how a
particular effect occurs. The point is that the constraint does not account for the explanatory force of an
important class of cognitive models.
7
  Milkowski (2013, 55) argues that a mechanistic analysis must “bottom out” in the constitutive level, the
level at which “the structures that realize the computation are described.”
8
  Since the relevant variables in FT models are mathematical, the 3M constraint should be interpreted
as requiring a mapping from values of the variables to states of component parts. This is how I will understand
the constraint.
the motor control system that computes vector subtraction is realized in a network
in the pre-motor cortex, but again, nothing in the FT characterization corresponds
to (states of) components of the network and their organization.9 And Seung et al.’s
(1996, 1998, 2000) model of oculomotor control posits an internal integrator without
specifying any correspondence between variables over which the computation is
defined and (states of) components of implementing neural hardware. I discuss this
example in more detail below.
Let us turn now to condition (b) of the 3M Constraint, which requires that the
dependencies among the variables in the model correspond to causal relations among
components of the target neural mechanism. In the case of FT models the depend-
encies among the variables ranging over the arguments and values of the specified
function are, of course, mathematical. The presumption that systems characterized in
FT terms are realized in neural hardware—as they must be if the FT model is to be true
of the organism—amounts to the idea that there exists a mapping from physical
state-types to the arguments and values of the specified mathematical function, such
that causal state transitions among the physical states are interpreted as mathematical
relations among the arguments and values of the function. A complete computational

9
 I am not denying that computational theorists sometimes attempt to specify correspondences
between variables in their models and (states of) components of neural mechanisms. In explaining forward
kinematics—the computation of target location in body-centered coordinates from information about
eye orientation and retinotopic location of target—Shadmehr and Wise appeal to Zipser and Andersen’s (1988) three-layer neural network model of gain modulation. Some nodes in the model’s input layers represent (correspond to) eye orientation and others retinotopic location of target; output layers represent (correspond to) target location in body-centered coordinates. Zipser and Andersen hypothesized that neurons in area LIP and area 7A in the parietal cortex play the relevant computational roles. It is not implausible, then, to describe input and output layers in the Zipser-Andersen model as component parts of a neural mechanism. Interestingly, the Zipser-Andersen model fails to count as genuinely explanatory by Kaplan’s
lights. He says:
The real limitations on the explanatory force of the Zipser–Andersen model is that it is dif-
ficult if not impossible to effect a mapping between those elements in the model giving rise
to gain-modulated hidden unit activity and the neural components in parietal cortex
underlying gain-modulated responses (arguably, the core requirement imposed by 3M on
explanatory mechanistic models). (Kaplan 2011, 365–6)
Kaplan cites two reasons why the model fails to be genuinely explanatory: First, it is difficult in general
to effect a mapping between neural network models and target neural systems. There is typically only
a loose and imprecise correspondence between network architecture and neural implementation (see, e.g.,
Crick 1989; Smolensky 1988; Kaplan 2011, 366). Secondly, Kaplan notes that there are competing models
of how gain modulation is implemented in the brain, each enjoying some empirical support, and so, he
concludes, the Zipser-Andersen model is just a “how possibly” model and not genuinely explanatory. An interesting question is whether, according to mechanists, the apparent failure of the Zipser-Andersen
model to satisfy the 3M Constraint thereby undermines the explanatory bona fides of the Shadmehr-Wise
function-theoretic model that it is supposed to implement. Presumably it does, since variables in the high-level
characterization do not in fact correspond to components in an explanatory model of neural mechanisms.
A consequence of the mechanist constraint would seem to be that any breakdown or lacunae in the decom-
position (all the way down to basic physical constituents?) would threaten the explanatory credentials of
higher-level theories. According to Kaplan (personal correspondence) the 3M constraint requires only that
some variables are mapped to components, thus allowing for partial or incomplete explanations.
Figure 7.1  An adder: a causal transition taking physical states p1 and p2 to p3, mapped onto the numbers n, m, and n + m.

explanation of a cognitive capacity will specify this mapping. Consider the character-
ization of a device that computes the addition function (Figure 7.1).
A physical system computes the addition function just in case there exists a mapping
from physical state types to numbers, such that physical state types related by a causal
state-transition relation ((p1, p2)→p3) are mapped to numbers n, m, and n+m related
as addends and sums. Whenever the system goes into the physical state specified
under the mapping as n, and then goes into the physical state specified under the
mapping as m, it is caused to go into the physical state specified under the mapping
as n+m.
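The same point can be put in the form of a toy sketch (the voltage values, the 0.5 volt coding step, and the dynamics below are invented purely for illustration and are not meant to model any actual device): a physical system counts as computing addition when its causal state transitions, read through the interpretation mapping, always track sums.

```python
# Toy realization of an adder. "Physical states" here are voltage levels; the
# causal dynamics take two voltages to a third (a stand-in for whatever the
# physics of a real mechanism happens to be).
def causal_transition(p1, p2):
    return p1 + p2                      # the transition (p1, p2) -> p3

# The interpretation mapping from physical state types to numbers: every
# 0.5 volt step is read as one unit.
def interpret(voltage):
    return round(voltage / 0.5)

# The device computes addition just in case, under the mapping, every causal
# transition (p1, p2) -> p3 satisfies interpret(p3) = interpret(p1) + interpret(p2).
p1, p2 = 1.0, 1.5                       # states interpreted as the addends 2 and 3
p3 = causal_transition(p1, p2)
assert interpret(p3) == interpret(p1) + interpret(p2)   # 5 == 2 + 3
```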
It follows that the function-theoretic description provides an abstract characterization
of causal relations among initial and end states of the realizing physical mechanism,
whatever that happens to be. The physical states [p1 . . . pn] that are characterized as the
arguments and values of the function (as addends and sums in the above example)
in the complete computational model may count as components (in the sense expli-
cated above) of the neural mechanism, but there is no reason to assume that they must.
In precisely those cases where condition (a) of the 3M constraint fails to be satisfied—
where the arguments and values of the specified function do not correspond to
(states of) components of the neural mechanism—condition (b) will fail to be satisfied
as well—the dependencies among the variables specified by the mathematical description
will not correspond to causal relations among (states of) components of the target
neural mechanism.
Seung’s (1996, 1998) model of oculomotor memory illustrates this failure. It is worth
examining the case in more detail.
The last forty years has seen a good deal of experimental and theoretical work on
oculomotor control.10 Saccadic eye movements shift the eyes rapidly from one position
in the visual field to another. Between saccades the eyes remain stationary; experi-
mental results show that normal humans can hold their eyes still at arbitrary positions
for twenty or more seconds, even in complete darkness (Becker and Klein 1973; Hess
et al. 1985). The most plausible explanation is that the brain maintains current eye
position after a stimulus has gone by employing a short-term memory of eye positions.
The experimental data support the hypothesis of a multi-stable recurrent network
located in the brainstem that takes as input transient eye velocities and gives as output

10
  For general discussion see Robinson 1989, Glimcher 1999, and Leigh and Zee 2006.
persistent eye positions (see Seung 1996, 1998). It does so by accumulating input pulses, adding or subtracting (depending on the direction of movement) new inputs
from the previous summation. In other words, it performs integration. A second,
“read-out” network reads the current position and stabilizes the eye by controlling
the length–tension relationships of the muscles. Seung (1998) describes the neural
integrator as follows:
In the oculomotor system, the integrator can be regarded as an internal model. The location of
this internal model is known, unlike in other motor systems. As a steady eye position can be
maintained without proprioceptive or visual feedback, the quality of the internal model is
very good. And physiological studies of this internal model indicate that it is a recurrent neural
network with a continuous attractor.  (Seung 1998, 1257)

Encoding eye position in neural activity requires a continuous, analog-grade code, thus motivating the choice of a continuous attractor network.11 Figure 7.2 illustrates
the continuous line attractor dynamics of the network. A new stimulus changes the
state of the network away from a line of fixed points. The network then settles on
a  new point along the attractor line; this point encodes the current eye position.
Line  attractor neural networks are posited to underlie a wide variety of motor
control functions.12
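The following sketch, a linear caricature of a line-attractor integrator (the network size, weight matrix, and input sequence are arbitrary illustrative choices rather than Seung's actual model), may help to fix ideas: one population-level direction of activity has eigenvalue one and so persists, inputs push the state along that direction, and the remembered eye position is carried only by the collective state of the network.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 50

# Recurrent weights with one persistent population-level direction r: activity
# along r has eigenvalue 1 (it is sustained); all other directions decay (0.5).
r = rng.normal(size=n_units)
r /= np.linalg.norm(r)
W = 0.5 * np.eye(n_units) + 0.5 * np.outer(r, r)

def step(state, velocity_pulse=0.0):
    # A velocity pulse nudges the whole population along r; there is no
    # dedicated input unit and no dedicated output unit.
    return W @ state + velocity_pulse * r

state = np.zeros(n_units)
for pulse in [1.0, 0.0, 0.0, -0.4, 0.0, 0.0]:        # signed saccadic commands
    state = step(state, pulse)

# The persistent eye position is carried only by the collective state of the
# network (its projection onto r), not by the activity of any single unit.
eye_position = float(r @ state)                      # 0.6 after the pulses above
```

Nothing in this sketch singles out a dedicated input unit, output unit, or memory unit, a feature that matters for what follows.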
Two features of the neural integrator are of special interest for present purposes.
In the first place, the network has no proprietary input and output units; each unit
is interconnected to all other units and can receive external stimuli (that is, pulse
saccades). Secondly, no single unit or proper subset of total units represents an eye
position; rather, only the total state of the network at a given time is a candidate
for encoding a persistent eye position. Points in the state-space portrait (Figure 7.2)
do not represent the activity of single cells, but rather the collective activity of the
whole network. At any given moment the network “occupies” a point in the portrait,
and it “aspires to” the line attractor. Points along the line attractor are collective states
of the network that represent persistent eye positions.
Let’s consider these two features of the network in light of the mechanists’ 3M con-
straint. The computation effected by the integrator takes as arguments eye-movement
velocities and gives as values persistent eye positions. Condition (a) of the 3M constraint,
recall, requires that the variables in the model “correspond to identifiable components,
activities, and organizational features of the . . . mechanism that . . . underlies the phe-
nomenon” (Kaplan 2011, 347). Arguably, neither variable of the FT model corresponds
to (states of) components of the network that realizes the computation. With respect to
the arguments of the function: as noted above, any of the networks’ units can receive
external stimuli from pulse saccades generating eye velocities. The mechanist may
respond that eye velocities correspond to (are realized by) states of components of

11
  See Seung (1996, 1998). For general discussion of attractor networks see Amit (1989), Eliasmith and
Anderson (2003), and Eliasmith (2005).
12
  Besides Seung (1996) and Seung et al. (2000), see Shadmehr and Wise (2005).
Figure 7.2  A state-space portrait for the eye-position memory network. All trajectories in the
state space of the network flow toward a line attractor (thick line). Each point in the line is a
persistent state that represents a different eye position.
Source: Seung (1996), 13340. Reprinted with permission of the National Academy of Sciences, U.S.A.

the mechanism; they correspond to states of different components for each run of the
computation. But the mechanist cannot take this line for the values of the function.
Persistent eye positions correspond to (are realized by) collective states of the whole
network. On any plausible construal of “component,” collective states of the whole network
do not count as components of the network. And while it is certainly true that the
values of the integration do correspond to (are realized by) “activities” and “organiza-
tional features” of the network, a weakened construal of condition (a) of the 3M constraint
that makes no mention of components and their interactions amounts to nothing
more than the requirement that the model is realized by neural hardware; in other
words, it imposes only the requirement that there exist a mapping of the sort depicted
in Figure 7.1.13

13
  Bechtel and Richardson (1993) discuss neural network models where “classical mechanistic strategies—and in particular, decomposition and localization—fall short” (203). They leave open whether
an “explanatory strategy that abandons localization and decomposition . . . constitutes a properly mechan-
istic approach” (203). I want to leave this issue open too. There is no question that the behavior of the
neural integrator is a function of the interaction of its parts. Thus an account of the operation of the inte-
grator would be mechanistic in some intuitive sense. But the 3M Constraint, which requires a transparent
Turning to condition (b) of the 3M constraint: since condition (a) is not satisfied—the
arguments and values of the function do not correspond to (states of) components
of the neural mechanism—condition (b) is not satisfied either. While the FT model
gives an abstract characterization of causal relations between initial and end states of
the attractor network, the dependencies among the variables specified by the FT
description do not correspond to causal relations among components of the network.
Since attractor networks of the sort that realizes the neural integrator are widespread
in cognitive neuroscience, the 3M constraint is likely to fail for a wide variety of FT
cognitive models. The mechanists’ requirements on genuine explanation would have
the unfortunate consequence of stripping much promising research in cognitive
neuroscience of its explanatory interest.
To summarize the argument in this section: many FT models fail to satisfy the
mechanists’ strictures on genuine explanation. They do so for one of two reasons:
either (1) there is a detailed and well-confirmed account of the neural mechanism
that realizes the computation, but the relation between the FT description and the
realizing mechanism is not of the specific type characterized by the 3M constraint,
viz. a mapping from arguments and values of the computed function to (states
of) components of the mechanism; or (2) the neural mechanism that realizes the computation is presently unknown, though theorists may have some idea of
general features of the mechanism, such as where in the brain it is located. In
the second sort of case there is obviously much more theoretical work to be done.
A  computational model is not complete until the algorithm used to compute
the function and the neural hardware that implements it have been specified. But
there is no reason to think that the neural realization has to be of the specific type
favored by mechanists.
In Section 5 I discuss the specific features of FT models that make them explanatory,
when they are. First, though, a more general point: claims can often be explanatory in
the absence of realizing details.14 That someone deliberately started a fire can be an
explanation of a forest fire. Of course, it is not a complete explanation; for that we
would need to know about the chemical composition of the materials involved in the
incident, the condition of the prevailing winds, and so on. But, as many have noted,
explanation is typically interest-relative;15 sometimes the relevant interests are served
without specifying the realizing details. The special sciences—including the sciences
that purport to explain cognitive capacities—are continuous with ordinary practice in
this respect.

mapping between variables in the FT characterization and components of the realizing network, does not
capture this sense.
14
  Mechanists need not deny that there can be other kinds of explanations in science. For example,
Kaplan (2011, 346, fn. 14) mentions etiological causal explanations, which explain why a phenomenon
occurs by citing an antecedent cause.
15
  See, for example, Putnam (1978), van Fraassen (1980), and Lipton (1991).
5.  The Explanatory Credentials of Function-Theoretic Models
Computational models are proposed to explain our manifest success at some cognitive
task—seeing 3D structure in the scene, understanding speech, grasping objects in nearby
space, and so on. Justifying the FT description requires explaining how computing the
value of the specified function contributes to the exercise of the cognitive capacity in
question. The model is answerable, primarily, to behavioral data about the performance
of the system in its normal environment. Thus, the theorist’s first task is to characterize, in
rough and ready terms, the organism’s competence in the relevant cognitive domain—in
what circumstances it is successful, and in what, perhaps rare, circumstances, it fails.
Often the burden will be carried not by details of the realizing neural mechanism,
about which very little may be known, but by features of the environmental context in
which the mechanism normally operates. As noted above, Ullman’s structure-from-
motion mechanism is able to recover the 3D structure of a moving object by computing
the unique rigid interpretation consistent with three distinct views of four non-coplanar
points only because in our world objects are typically rigid in translation. Appeal to a
general environmental constraint (rigidity in this case) is crucial to the explanation of
the organism’s pattern of success and failure. Very little is known about the neural
mechanism that implements the function (beyond the fact that areas V3/V3A and the
dorsal parieto-occipital junction appear to be implicated).
As I have noted, an FT characterization of a cognitive mechanism resides at the
topmost of Marr’s explanatory levels, the so-called theory of the computation. It provides
a canonical specification of the function computed by the mechanism, hence it answers
a “what-question”: what, precisely, does the device do? But it also takes the first step in
specifying how the system computes the cognitive function that is the explanandum of
the theory: it computes the cognitive function, in its normal environment, by comput-
ing the specified mathematical function.16 However, the FT characterization does not
“reveal the causal structure of the mechanism,” as Kaplan (2011, 352) requires, except
at a very high level of abstraction.
By its very nature an FT characterization is multiply realizable—it subsumes both
natural and artefactual computers. Moreover, much of its explanatory force depends
on the fact that it is abstract. Our grasp of a mathematical characterization—say a
characterization of a system as an adder or an integrator—is independent of any
acquaintance we may have with particular (type or token) physical realizations of the
mathematical description.
The idea that mental or other “high-level” properties are multiply realized has
recently come under attack. Opponents of multiple realizability argue that only the
various structure-specific realizers of putative multiply realized properties count as

16
  So, schematically, a cognitive system S computes x (the cognitive capacity) by computing y (the
mathematical function specified by FT) in context z (the normal environment).
genuine explanatory kinds. The dialectical context of these arguments is an attack on non-reductive materialism. Materialism (reductive or not) isn’t the point here, but if
these arguments succeed in undermining the integrity of multiply realized kinds then
the explanatory bona fides of FT models would be threatened. Deflecting these arguments
will allow me to highlight some important explanatory features of FT models.
Shapiro (2000) poses a dilemma: either the realizing kinds of a putative higher-level
multiply realized property share many causally relevant properties or they do not. If the
realizers share many causally relevant properties, then they are not distinct realizations.
If they do not share many causally relevant properties, then any generalizations that
apply to them will be “numbingly dull” (2000, 649). (Shapiro cites as examples of numb-
ingly dull generalizations that all realizers of mousetraps are used to catch mice, and that
camera eyes and compound eyes both have the function of facilitating sight.) So either
the higher-level kind is just not multiply realized or there is no motivation for subsuming
the various distinct physical kinds under a higher-level (multiply realized) kind.17
FT kinds evade both horns of the dilemma. Shapiro says “multiple realizations truly
count as multiple realizations when they differ in causally relevant properties—when
they make a difference to how they contribute to the capacity under investigation”
(p. 644). Corkscrews that differ only in color contribute in identical ways to removing
corks, but hand calculators and human brains almost certainly differ in relevant causal
powers, for example, in how they contribute to the system’s capacity to compute the
addition function. It is very likely that they employ different algorithms that require
different realizing mechanisms. But the fact that these very different physical systems
both compute the addition function—the fact that we can specify their behavior over a
staggeringly large range of input conditions—is hardly “numbingly dull.” So Shapiro’s
argument fails to show that FT kinds are not genuinely multiply realized.
Klein (2008) argues that there are no cases of genuinely multiply realized kinds in
science. All putative examples either only support generalizations that are projectible
within the restricted-realization kind, or, if they appear to support generalizations that
are projectible across other realization kinds, turn out, on closer examination, to be
non-actual idealizations, and so involve no ontological commitment to the higher-level
kind. Materials science provides an example of the first sort of case: it classifies as
brittle various materials—brittle steel and brittle glass, for example—that otherwise have
very little in common. Of all that we know about brittleness in steel—that brittleness is
proportional to hardness, that steel can be made less brittle by tempering, and so on—
almost nothing applies to brittle glass. Discoveries about one realization-restricted
kind of brittle material are not projectible to other realization-restricted kinds. Klein
goes on to say:
If there are [multiply realizable] kinds, they must be proper scientific kinds. If they are scientific
kinds, then we should be able to project generalizations about them across all instances of that
kind. But there aren’t any such projectable discoveries; it looks like we must therefore abandon

17
  See Kim (1992) for a similar argument.

MR kinds—and not just in metallurgy, but in all of the special sciences, and psychology in
particular.  (Klein 2008, 162)

Klein concludes that scientific ontologies should include only realization-restricted
kinds. The upshot is that FT kinds—which appear to subsume both biological subjects
and artifacts, and hence are not realization-restricted kinds—should be eliminated; at
best they are idealizations that do not literally apply to anything.
I will tackle the second disjunct of the dilemma first. FT models are not idealizations,
in the sense that Klein has in mind. He says:
Idealizing models do not purport to describe the world . . . idealizing models are mere possi-
bilia. Talk about them is false of anything in the actual world. You can’t use them to predict
anything . . . Idealizations are typically used to explain the ceteris paribus laws that cover the
(realization-restricted) kinds of particular special sciences. When scientists talk about the ideal
gas, it is usually in the context of explaining the ceteris paribus laws that cover actual gasses.
The ideally brittle solid is never cited on its own to explain anything.  (Klein 2008, 173)

To be sure, FT characterization does involve idealization. To describe a hand calculator
as computing the addition function is to attribute to it a capacity defined on an infinite
domain. The calculator’s actual capacity is limited by the size of its display. A similar
point applies to any biological system. And artefactual and biological computers are
subject to various sorts of noise. They can fail to compute the specified function when
they overheat or are exposed to harmful substances. Nonetheless, the FT characteriza-
tion is intended to be literally true of the calculator, as is an FT characterization of a
biological system in a computational psychological model. And, as I have noted, FT
characterizations allow the prediction of the device’s behavior across a wide range of
input conditions, viz. those corresponding to the arguments of the specified function.
So they are not idealizations in Klein’s sense.
The argument against the first disjunct of Klein’s dilemma—that empirical discoveries
about one class of realizers do not project to other classes of realizers, and so commitment
to a multiply realized higher-level kind is not scientifically motivated—is somewhat less
direct. It is true, of course, that what we know about the circuitry of the hand calculator
is unlikely to be true of the brain (and vice versa). But the fact that lower-level physical
facts about one class of realizers are not projectible to other classes is beside the point.18
The understanding we gain of the capacities of a system (whether artefactual or biological)
from FT models depends on the abstract character of the capacity attributed, not on
any particular physical realization of that capacity. As I noted above, this explanatory
payoff of FT characterization depends on the fact that the mathematical functions
deployed in computational models—addition, integration, Laplacian of Gaussians,
and so on—are well understood independently of their application in such models, and
independently of our familiarity with computing devices, which of course is a relatively

18   Facts about the behavior of the system, under interpretation, are projectible.

recent development. If this is right then Klein’s case for the elimination of multiply
realized kinds does not apply to FT kinds.
The upshot is that FT kinds are genuinely multiply realized; in fact they may be sui
generis multiply realized kinds. They are not only abstract, but they are also normative.
Theories of cognition are charged with explaining not just behavior, but, more import-
antly, cognitive capacities or competences, and FT models do so by positing further
(mathematical) competences. In attributing a competence to a physical system—to
add, to compute a displacement vector, and so on—FT models support attributions
of correctness and mistakes. Just as the normal functioning of the system—correctly
computing the specified mathematical function—explains the subject’s success at a
cognitive task in its normal environment, so a malfunction explains its occasional
failure. Ingesting too much alcohol can cause neural systems to malfunction in any
number of ways; one effect is that computational mechanisms may not compute
their normal functions. One’s hand overshooting the cup because the motor control
system miscalculated the difference vector is a perfectly good explanation of a motor
control failure.19
This gives the FT characterization a kind of autonomy—the physical description
that specifies the realizing neural mechanism does not allow the reconstruction of the
normative notions of correctness and mistake.20 The FT characterization explains the
cognitive capacity by appeal to another competence not recoverable from the physical/
causal details alone. But though the normativity inherent in the FT description cannot
be accounted for at the level of realizing mechanisms there is nothing mysterious here.
Look again at the adder depicted in Figure 7.1. The bottom span of the figure specifies
the physical state transitions that characterize the normal operation of the mechanism.
When conditions are not normal—for example, when a human subject containing the
neural adder is drunk, or a hand calculator is immersed in water—these physical state
transitions may be disrupted. In other words, the system may be in the physical state(s)
that (under the interpretation imposed by the mapping) realizes the arguments of the
function, but fail to go into the physical state that (under interpretation) realizes the
value of the function. The specification of physical state transitions (the bottom span
of Figure 7.1) does not support attributions of correctness or mistake; the normative
attributions are a consequence of the computational interpretation imposed by the
mapping to the function.
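To make the role of the interpretation mapping concrete, here is a minimal illustrative sketch (the voltage encoding, the transition rule, and the disruption are invented for the example and are not drawn from Figure 7.1 itself). The physical transitions, taken alone, are neither correct nor incorrect; only under the mapping from voltages to numbers does a disrupted run count as a miscalculation.

# Toy illustration: a physical "adder" whose states are voltages, plus an
# interpretation mapping from voltages to numbers. Under the mapping, the
# device's normal state transitions realize the addition function; a
# disrupted transition is a *mistake* only relative to that mapping.

VOLTS_PER_UNIT = 0.5   # interpretation: 0.5 V realizes the number 1

def encode(n: int) -> float:
    """Physical state that (under interpretation) realizes the number n."""
    return n * VOLTS_PER_UNIT

def interpret(voltage: float) -> int:
    """Number realized (under interpretation) by a physical state."""
    return round(voltage / VOLTS_PER_UNIT)

def transition(v1: float, v2: float, disrupted: bool = False) -> float:
    """Normal physical operation combines the input voltages; a disruption
    (overheating, immersion, an intoxicated host ...) perturbs the output."""
    out = v1 + v2
    if disrupted:
        out += 0.7   # just more physics -- not "wrong" until interpreted
    return out

def device_add(m: int, n: int, disrupted: bool = False) -> int:
    return interpret(transition(encode(m), encode(n), disrupted))

print(device_add(3, 4))                  # 7 -- correct, under the interpretation
print(device_add(3, 4, disrupted=True))  # 8 -- a miscalculation, but only relative
                                         #      to the mapping; the bare physics
                                         #      fixes no standard of correctness

The transition function on its own corresponds to the bottom span of Figure 7.1; it is the encode/interpret mapping that licenses describing the second run as a mistake.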
In summary, the fact that FT characterizations are both abstract and normative (in
the above sense) explains how FT models can be genuinely explanatory even absent an
account of their neural realization.

19
  Of course, the hand may have overshot the cup for a variety of other reasons. A spasm in the arm
muscles would be a different kind of malfunction.
20
  This is not to deny that accounts of neural mechanisms may advert to such normative notions as well-
functioning and malfunction. But physical/causal descriptions, even when they advert to functional notions,
do not support attributions of correctness and mistake. They do not allow us to say that the mechanism
miscalculates (or misrepresents) the vector from hand to cup.

6.  Objections and Replies


I conclude by considering some objections.
Objection (1): FT models are not genuinely explanatory; they are what Craver (2006)
and Kaplan (2011) call phenomenological models (p-models). P-models “provide descrip-
tions (often highly abstract, mathematical descriptions) of the phenomena for which
explanations are sought . . . [but they] merely ‘save the phenomena’ to be explained”
(Kaplan 2011, 349).
Reply: FT characterizations are not p-models—they do not just give a mathematical
description of an observed regularity; rather they claim that the device computes a
particular mathematical function and in so doing produces the observed regularity.
This distinction is important. The motion of the planets can be described (mathematically)
by Kepler’s laws, but the planets do not compute Kepler’s laws, in the intended sense.
The explanandum of a computational cognitive theory is a manifest cognitive capacity.
An FT model is a hypothesis about how the system does it, by exercising a mathemat-
ical competence. The solar system has no manifest cognitive capacities that require
appeal to mathematical competence. The objection that computational models, in the
absence of realizing neural details, are just p-models, in other words that they are just
descriptions of behavior, rests on a misconstrual of these models. The models give an
abstract characterization of a mechanism that produces the phenomena by computing
a mathematical function.21
Objection (2): FT models do not describe “the real components, activities, and
organizational features of the mechanism that in fact produces the phenomena” (Craver
2006, 361). They are mere “how-possibly” models, rather than “how-actually” models.
As Kaplan puts it “the cost imposed by departing from [the mechanists’] view . . . is the
loss of a clear distinction between computational models that genuinely explain how a
given phenomenon is actually produced versus those that merely describe how it
might possibly be produced” (2011, 346).
Reply: I doubt that there is a sharp distinction between how-actually and how-possibly
models. Weiskopf (2011) argues, convincingly to my mind, that the distinction is
epistemological. As a model that purports to explain a given phenomenon is more fully
developed and acquires additional empirical support it will typically cross the threshold
from “how-possibly” to “how-actually,” though the latter verdict is always defeasible.
But putting aside what kind of distinction this is, FT models are hypotheses about how
a system in fact exercises a particular cognitive capacity. A well-confirmed account of
the algorithm used to compute the specified function and the neural structure that

21
  Putnam (1988) and Searle (1993) argue that every physical system computes every function. If every
physical system does compute every function, in the sense at work in function-theoretic explanation, then
the distinction between a system being merely describable mathematically and a system actually computing
a mathematical function collapses, and computational models cannot be genuinely explanatory. The Putnam/
Searle arguments have been widely discussed. For recent responses see Chalmers (2012), Egan (2012), and
the other papers in the Journal of Cognitive Science, Vol. 12.

realizes the computation would, of course, increase our confidence that the model
describes how the brain actually does it.
It should also be emphasized that FT characterizations in the first instance are
specifications of the function that the device in fact computes; they are, one might say,
what-explanations. Sometimes the characterization takes the form of a specification
of an algorithm, in other words, an intensional specification of the function computed.
In such cases, the algorithm is offered, not as an account of how possibly the device
computes the function, but of what it computes and how in fact it computes it, as is
evidenced by the fact that theorists would change the hypothesized algorithm were
evidence to become available showing that the device is computing a function other
than the one specified by the algorithm. Initial hypotheses regarding the functions
computed and the algorithms for computing these functions are selected from the
computational theorist’s toolbox, but that fact does not undermine the claim that these
are hypotheses about what the device actually does and how it actually does it. How
else are theorists supposed to develop theories except by using what they know? These
initial hypotheses will often be modified in light of new behavioral data, sometimes to
the point that eventually the device is said to compute a function sufficiently different
from the well-understood function with which the theorist began that it can only be
described in task-specific intensional terms.22
Objection (3): FT characterizations are just mechanism sketches. They derive their
explanatory force in the same way that other mechanistic models do, by specifying the
underlying mechanism. In this case, the specification is only partial.
Reply: FT characterizations are descriptions of cognitive mechanisms. Since they do
not fully specify how a cognitive mechanism works, we might call them “mechanism
sketches.” In any event, construing FT models as mechanism sketches is dispositive
only if a mechanism sketch is explanatory just to the extent that it issues a promissory
note for a decompositional mechanistic analysis of the sort specified by the 3M con-
straint. I have argued that the explanatory credentials of an FT model do not depend
on the existence of a mapping that satisfies the 3M constraint, but rather on the
FT model providing a canonical specification of the function computed by a cognitive
mechanism, a crucial first step in an explanation of how the mechanism enables the
cognitive capacity to be explained. Moreover, the distinctive features of FT models—
their abstract and normative character—are not recoverable from a specification of their
neural realization, even in cases where the specification does satisfy the 3M constraint.
An account of the realizing neural architecture would, of course, increase the probability
that a given FT model is true, but it is not the source of the model’s explanatory force.
If FT models are mechanism sketches then some mechanism sketches derive their
explanatory force non-mechanistically.

22
  The computational model of natural language processing developed by Marcus (1980) is a case in
point: the proposed model is an augmentation of a standard LR(k) parser of the sort that one might
encounter in a graduate-level course in parsing theory. The augmentations are dictated by observed features
of human linguistic competence, and the resulting model can be characterized only intensionally.

Objection (4): You say that FT characterization is autonomous. Isn’t this just an
expression of what Piccinini (2006) has called computational chauvinism, the idea that
as Johnson-Laird (1983, 9) put it: “[t]he mind can be studied independently from the
brain . . . [that] psychology (the study of the programs) can be pursued independently
from neurophysiology (the study of the machine and the machine code).”
Reply: No, it isn’t computational chauvinism. I have explained the sense in which the
FT level is autonomous—it characterizes the physical system in abstract terms, as a
member of a well-understood class of mathematical devices. Moreover, the FT charac-
terization is normative in a particular sense—it supports attributions of correctness
and mistakes, notions not available at the level of realizing neural mechanisms. I am
not claiming that we can fully explain cognition without understanding these neural
mechanisms; in fact, I insist that we cannot. The full explanation requires both an
account of the realizing mechanism (though, as I have argued, the relation between
the FT characterization and the realization may not satisfy the 3M constraint) and,
typically, an account of the environment in which the cognitive capacity is exercised.
The point is rather that claims of the sort “the hand overshot the cup because the system
miscalculated the difference vector” enjoy a sort of explanatory autonomy from the
realization details.

7.  Postscript: Personal and Subpersonal Capacities


My account of FT explanation refers to two kinds of capacities or competences; it is
useful to clarify the relationship between the two. Cognitive capacities that are the
explananda of the cognitive sciences—seeing what is where, understanding speech,
reaching, and pointing—are personal-level capacities. They are achievements of the
organism, things at which we are generally successful. Personal-level cognitive capaci-
ties, manifest in our behavior, are explained by positing subpersonal mechanisms that
have mathematical capacities. It is something of a “category mistake” (as philosophers
used to say) to say that we compute the Laplacian of a Gaussian or integration. Rather,
mechanisms in our brains do this, and by doing so (in normal conditions) they enable
us to see, understand speech, manipulate objects, and so on.
I have argued elsewhere (see Egan 2014) that the main job of representational content
is to connect the subpersonal mechanisms characterized in abstract terms by cognitive
scientific theories with the manifest personal-level capacities that it is the job of these
theories to explain. Marr construes the inputs to the Laplacian/Gaussian filter as repre-
senting light intensities and outputs as representing discontinuities of light intensity.
Shadmehr and Wise construe inputs and outputs of the mechanisms they posit as
representing positions of objects in nearby space and changes in joint angles. In general,
the inputs and outputs of FT mechanisms are characterized not only in abstract terms,
as the arguments and values of the specified mathematical function; they are often
characterized as representing properties or objects relevant to the cognitive capacity to
be explained. Characterizing the subpersonal mechanism in terms congruent with the

way we think about the personal-level capacity that the mechanism subserves allows
us to see how the exercise of the mathematical competence contributes to our success
at these tasks. Representational content is the “connective tissue” linking the subper-
sonal capacities posited in the theory and the manifest personal-level capacities
(the “phenomena”) that the theory attempts to explain.23

References
Amit, D. J. (1989), Modeling Brain Function: The World of Attractor Neural Networks. New York:
Cambridge University Press.
Bechtel, W. (2008), Mental Mechanisms: Philosophical Perspectives on Cognitive Neuroscience.
London: Routledge.
Bechtel, W. and Abrahamsen, A. (2005), “Explanation: A Mechanistic Alternative.” Studies in
History and Philosophy of the Biological and Biomedical Sciences 36: 421–41.
Bechtel, W. and Richardson, R. C. (1993), Discovering Complexity: Decomposition and Localization
as Strategies in Scientific Research. Princeton, NJ: Princeton University Press.
Becker, W. and Klein, H. M. (1973), “Accuracy of Saccadic Eye Movements and Maintenance of
Eccentric Eye Positions in the Dark.” Vision Research 13: 1021–34.
Chalmers, D. (2012), “A Computational Foundation for the Study of Cognition.” Journal of
Cognitive Science 12: 323–57.
Craver, C. (2006), “When Mechanistic Models Explain.” Synthese 153: 355–76.
Craver, C. (2007), Explaining the Brain. Oxford: Oxford University Press.
Crick, F. (1989), “The Recent Excitement about Neural Networks.” Nature 337: 129–32.
Cummins, R. (1975), “Functional Analysis.” Journal of Philosophy 72: 741–65.
Egan, F. (1995), “Computation and Content.” Philosophical Review 104: 181–203.
Egan, F. (2012), “Metaphysics and Computational Cognitive Science: Let’s Not Let the Tail Wag
the Dog.” Journal of Cognitive Science 13: 39–49.
Egan, F. (2014), “How to Think about Mental Content.” Philosophical Studies 170: 115–35.
Eliasmith, C. (2005), “A Unified Approach to Building and Controlling Spiking Attractor
Networks.” Neural Computation 17(6): 1276–314.
Eliasmith, C. and Anderson, C. (2003), Neural Engineering: Computation, Representation, and
Dynamics in Neurobiological Systems. Cambridge, MA: MIT Press.
Gallistel, C. R. (1990), The Organization of Learning. Cambridge, MA: MIT Press.
Glimcher, P. W. (1999), “Oculomotor Control,” in R. A. Wilson and F. C. Keil (eds), The MIT
Encyclopedia of the Cognitive Sciences. Cambridge, MA: MIT Press, 618–20.
Hess, R. F., Baker, C. L., Verhoeve, J. N., Keesey, U. T., and France, T. D. (1985), “The Pattern
Evoked Electroretinogram: Its Variability in Normals and Its Relationship to Amblyopia.”
Investigative Ophthalmology and Visual Science 26: 1610–23.

23
  Thanks to David M. Kaplan, Sydney Keough, Robert Matthews, and Oron Shagrir for helpful
comments on earlier versions of this chapter. Thanks also to participants at the Philosophy and the Brain
workshop at the Institute for Advanced Studies at the Hebrew University of Jerusalem in May 2013, parti-
cipants at the Graduate Student Spring Colloquium on Exploring the Subpersonal: Agency, Cognition, and
Rationality at the University of Michigan, Ann Arbor, March 2014, and the students in my graduate seminar
on psychological explanation at Rutgers in spring 2014.

Johnson-Laird, P. (1983), Mental Models: Towards a Cognitive Science of Language, Inference
and Consciousness. New York: Cambridge University Press.
Kaplan, D. (2011), “Explanation and Description in Computational Neuroscience.” Synthese
183(3): 339–73.
Kaplan, D. and Craver, C. (2011), “The Explanatory Force of Dynamical and Mathematical
Models in Neuroscience: A Mechanistic Perspective.” Philosophy of Science 78(4): 601–27.
Kim, J. (1992), “Multiple Realization and the Metaphysics of Reduction.” Philosophy and
Phenomenological Research 52: 1–26.
Klein, C. (2008), “An Ideal Solution to Disputes about Multiply Realized Kinds.” Philosophical
Studies 140: 161–77.
Leigh, R. J. and Zee, D. S. (2006), The Neurology of Eye Movements (4th edition). New York:
Oxford University Press.
Levy, A. (2013), “Three Kinds of New Mechanism.” Biology and Philosophy 28: 99–114.
Lipton, P. (1991), Inference to the Best Explanation. Oxford: Routledge.
Marcus, M. (1980), A Theory of Syntactic Recognition for Natural Language. Cambridge, MA:
MIT Press.
Marr, D. (1982), Vision. San Francisco: W. H. Freeman.
Milkowski, M. (2013), Explaining the Computational Mind. Cambridge, MA: MIT Press.
Piccinini, G. (2006), “Computational Explanation in Neuroscience.” Synthese 153: 343–53.
Piccinini, G. and Craver, C. (2011), “Integrating Psychology and Neuroscience: Functional
Analyses as Mechanism Sketches.” Synthese 183(3): 283–311.
Putnam, H. (1978), Meaning and the Moral Sciences. London: Routledge.
Putnam, H. (1988), Representation and Reality. Cambridge, MA: MIT Press.
Robinson, D. A. (1989), “Integrating with Neurons.” Annual Review of Neuroscience 12: 33–45.
Searle, J. (1993), The Rediscovery of the Mind. Cambridge, MA: MIT Press.
Seung, S. H. (1996), “How the Brain Keeps the Eyes Still.” Proceedings of the National Academy
of Sciences USA 93: 13339–44.
Seung, S. H. (1998), “Continuous Attractors and Oculomotor Control.” Neural Networks 11:
1253–8.
Seung, S. H., Lee, D. D., Reis, B. Y., and Tank, D. W. (2000), “Stability of the Memory of
Eye Position in a Recurrent Network of Conductance-Based Model Neurons.” Neuron 26:
259–71.
Shadmehr, R. and Wise, S. (2005), The Neurobiology of Reaching and Pointing: A Foundation for
Motor Learning. Cambridge, MA: MIT Press.
Shapiro, L. (2000), “Multiple Realizations.” Journal of Philosophy 97: 635–54.
Smolensky, P. (1988), “On the Proper Treatment of Connectionism.” Behavioral and Brain
Sciences 11: 1–23.
Ullman, S. (1979), The Interpretation of Visual Motion. Cambridge, MA: MIT Press.
van Fraassen, B. C. (1980), The Scientific Image. Oxford: Oxford University Press.
Weiskopf, D. (2011), “Models and Mechanisms in Psychological Explanation.” Synthese 183(3):
313–38.
Zipser, D. and Andersen, R. A. (1988), “A Back-Propagation Programmed Network that
Simulates Response Properties of a Subset of Posterior Parietal Neurons.” Nature 331:
679–88.

8
Neural Computation, Multiple
Realizability, and the Prospects
for Mechanistic Explanation
David M. Kaplan

1. Introduction
Multiple realizability considerations have played a long-standing role in debates about
reduction, explanation, and the autonomy of higher-level sciences including psychology
and functional biology. The concept of multiple realizability (hereafter, MR) was
initially introduced in the philosophy of mind to serve as an argument against mind-
brain identity theories, and to motivate a functionalist view about the mind. Since the
1970s, however, there has been a gradual shift in focus beyond ontological issues raised
by MR to epistemological (i.e., explanatory and methodological) ones including
whether psychological explanations and methods are autonomous from or reduce to
those encountered in lower-level sciences such as neuroscience. Fodor (1974) famously
appeals to MR in an attempt to block the reduction of higher-level theories and/or laws
of the special sciences to those of lower-level sciences. Putnam (1975) invokes MR to
argue for the “autonomous” character of higher-level explanatory generalizations in
psychology. Several decades later, Fodor explicitly foregrounds the explanatory issues
raised by MR, asserting that “the conventional wisdom in the philosophy of mind is
that psychological states are functional and the laws and theories that figure in psycho-
logical explanations are autonomous [i.e., are not reducible to a law or theory of physics]”
(1997, 149). These formulations of the MR argument directly link up with debates in
the philosophy of science concerning explanation and theory reduction in ways that
many early construals do not.1

1
  For further discussion of the distinction between epistemological and ontological construals of
reductionism, and the importance of the former over the latter in scientific contexts, see Ayala (1968) and
Hoyningen-Huene (1989). Although these authors are explicitly concerned with reductionism in biology,
the same kinds of considerations are relevant in the present context. For philosophical discussions of
MR that prioritize explanatory over ontological issues associated with reductionism, see many contributions
from the philosophy of biology (e.g., Kitcher 1984; Rosenberg 2001; Sarkar 1992; Sober 1999; Waters 1990).

Although philosophical excitement about MR and the prospects for the autonomy
of psychology in its traditional guise have faded to some degree in recent years, MR
considerations are once again being invoked in a different light in order to draw con-
clusions about the nature of explanation, and in particular, computational explanation
in neuroscience and psychology. More specifically, it has recently been argued by
several authors (Carandini 2012; Carandini and Heeger 2012; Chirimuuta 2014) that
certain kinds of computations performed in the brain—so-called canonical neural
computations—cannot be explained in mechanistic terms. Briefly, canonical neural
computations (hereafter CNCs) are defined as “standard computational modules that
apply the same fundamental operations in a variety of contexts” (Carandini and Heeger
2012, 51). According to these authors, the reason why CNCs cannot be accommodated
by the framework of mechanistic explanation—which is dominant across the life
sciences—is that these computations are associated with multiple neural circuits or
mechanisms in the brain, which can vary from region to region and from species to
species. In other words, CNCs cannot be analyzed in mechanistic terms because they
are multiply realized. Instead, as Chirimuuta (2014) argues, modeling work involving
CNCs must embody an entirely different, non-mechanistic form of explanation. In
this chapter, I argue against the necessity of this claim by showing how MR fails to
block the development of adequate mechanistic explanations of computational phe-
nomena in neuroscience. In this manner, I show how MR considerations are largely
irrelevant to assessing the quality of mechanistic explanations. Once these confusions
are resolved, it becomes clear that CNC explanations can be properly understood as
mechanistic explanations.
The chapter is organized as follows. In Section 2, I discuss some traditional MR-based
arguments for the explanatory autonomy of psychology and briefly highlight their
well-known limitations. In Section 3, I introduce the concept of canonical neural
computation and outline how MR considerations are supposed to raise problems
for  mechanistic explanations of these phenomena. In Sections 4 and 5, I describe
two different mechanistic explanations in neuroscience that invoke the same neural
computation to show how MR considerations, although operative in these cases, are
irrelevant to assessing the quality of the explanations provided. In Section 6, I show
how persistent confusions over the proper scope of mechanistic explanations serve
to underwrite MR-based challenges to mechanistic explanation of computational
phenomena. Specifically, I elaborate how scope is not an appropriate norm on
mechanistic explanation and therefore even mechanistic models with highly restricted
scope can in principle be explanatorily adequate. This conclusion provides a foothold
for the view that mechanistic models of computational phenomena can provide
legitimate explanations, even in the presence of multiple realizability. In Section 7,
I return to the specific debate concerning the nature of CNC explanations and expose
these same confusions about the relationship between scope and mechanistic explanation.
In Section 8, I summarize the mechanistic perspective on neural computation defended
in this chapter.

2.  Traditional MR-Based Arguments for Explanatory Autonomy

The central idea behind traditional, MR-based arguments for autonomy is that the
phenomena or kinds identified in the theories of a higher-level science such as
psychology can be multiply realized by heterogeneous sets of lower-level realizers, and
so will invariably be cross-classified by, and therefore cannot be explained in terms of,
the  phenomena or kinds invoked in lower-level theories (e.g., Fodor  1974,  1997;
Putnam 1975). In virtue of this, higher-level phenomena or kinds (and the theories
and generalizations that invoke them) are said to enjoy a certain degree of explanatory
autonomy or independence from lower-level phenomena or kinds (and the theories or
generalizations that invoke them). Before proceeding, it is important to distinguish
this kind of autonomy from methodological autonomy. A scientific discipline X is
methodologically autonomous from some lower-level discipline Y if X’s investigative
methods, discovery procedures, and most importantly, its taxonomic categories can
vary independently of Y’s, or vice versa.2 Some advocates of autonomous computational
psychology implicitly embrace methodological autonomy without taking a stance on
explanatory autonomy. For example, the psychologist Philip Johnson-Laird embraces
methodological autonomy when boldly pronouncing, over three decades ago, that “[t]he mind
can be studied independently from the brain. Psychology (the study of the programs) can
be pursued independently from neurophysiology (the study of the machine and the
machine code)” (Johnson-Laird 1983, 9).
It is important to keep these two notions of autonomy distinct because the evidence
against the methodological autonomy of psychology from neuroscience is relatively
strong, whereas the case concerning the explanatory autonomy of psychology remains
far more uncertain. Keeley (2000), for example, argues compellingly that the discovery
and categorization of psychological kinds frequently depends on structural informa-
tion about the brain delivered from the neurosciences. Similarly, Bechtel and Mundale
(1999) make a powerful case that the taxonomic enterprises of psychology and neuro-
science are not independent in the ways implied by the thesis of multiple realizability.
In particular, they show how the taxonomic identification of brain areas in neuroscience
relies intimately on the deployment of functional criteria and prior categorizations
from psychology. They also demonstrate how efforts to decompose psychological
capacities or functions often depends on, and is sometimes refined by, structural infor-
mation about the brain and its organization. The functional and structural taxonomies
of psychology and neuroscience thus appear to reflect the complex interplay and inter-
action between the disciplines. The resulting picture is arguably one of methodological
interdependence, not autonomy. Consequently, even if the ontological thesis that
psychological states are (or can in principle be) multiply realized in different substrates
is true, there is good reason to think this does not support the methodological claim that

2
  For a similar characterization of methodological autonomy, see Feest (2003).

neuroscience lacks usefulness in guiding psychological investigations, and vice versa.


By contrast, the status of the explanatory autonomy of psychology is far less settled.3
In what follows, the focus will exclusively be on this form of autonomy.
As indicated above, many of the hallmark defenses of the explanatory autonomy of
psychology and other higher-level sciences invoke MR considerations in order to deny
the possibility of reducing the theories or laws of the higher-level science to those of the
lower-level science (e.g., Fodor 1974; Putnam 1975).4 Proponents of these views (and
their critics) have generally assumed a traditional conception of theory reduction,
according to which some higher-level theory (or law) is said to be reducible to (and
also explained by) some lower-level theory (or law) just in case the former can be
logically derived from the latter, together with appropriate bridge principles and
boundary conditions (e.g., Nagel 1961; Schaffner 1967). Since the terminology invoked
by the higher- and lower-level theories often differ in appreciable ways, so-called
bridge principles connecting the terms or predicates of the two theories in a systematic
(one-to-one) manner are necessary to ensure the derivation can go through.5 The
classical antireductionist strategy thus involves showing how the bridge principle-
building enterprise breaks down because higher-level phenomena are often multiply
realized by heterogeneous sets of lower-level realizers, implying an unsystematic (many-
to-one) mapping between the terms of the two theories. For example, Fodor, who
famously frames the issue in terms of the challenges MR poses for the logical positivist
goal of the unity of science, states that:
The problem all along has been that there is an open empirical possibility that what corresponds
to the natural kind predicates of a reduced science may be a heterogeneous and unsystematic
disjunction of predicates in the reducing science, and we do not want the unity of science to be
prejudiced by this possibility. Suppose, then, that we allow that bridge statements may be of the
form . . . Sx ⇆ P1x ∨ P2x ∨ . . . ∨ Pnx, where “P1x ∨ P2x ∨ . . . ∨ Pnx” is not a natural kind predicate
in the reducing science. I take it that this is tantamount to allowing that at least some “bridge laws”
may, in fact, not turn out to be laws, since I take it that a necessary condition on a universal
generalization being lawlike is that the predicates which constitute its antecedent and consequent
should pick out natural kinds.  (Fodor 1974, 108)

3
  Before proceeding it is worth preempting a certain mistaken conclusion that this analysis might elicit.
Although I am focusing here on challenges raised about the standard conclusion of the MR argument—the
autonomy of psychology—this does not imply that this is the exclusive or even primary focus in the critical
literature. Many authors have also challenged the MR argument by denying the MR premise itself. Although
MR once held the status as an unquestioned truth, as philosophers started to pay closer attention to empirical
research in neuroscience, the evidence for multiple realization began to appear less clear-cut than had been
initially assumed (e.g., Bechtel and Mundale 1999; Polger 2009; Shapiro 2000, 2008).
4
  According to the traditional model, theory reduction implies that all the higher-level laws and
observational consequences can be derived from information contained in the lower-level theory. Strictly
speaking, then, the higher-level (reduced) theory can make no non-redundant or “autonomous” informational
or explanatory contribution beyond that made by the lower-level (reducing) theory.
5
  Because the lower-level theory will typically only apply over a restricted part of the domain of the
higher-level theory (Nagel 1961) or at certain limits (Glymour 1970; Batterman 2002), boundary conditions
that set the appropriate range for the reduction are also typically required for the derivation to succeed.
For further discussion, see Kaplan (2015).

The major difficulty with this and many other traditional positions staked out on both
sides of the debate over explanatory autonomy is that they all commonly assume the
appropriateness of what is now widely recognized as an outdated and inapplicable
law-based model of theory reduction and explanation (Rosenberg 2001).6 The problem,
now widely recognized among philosophers of the special sciences including biology
and psychology, is the conspicuous absence of lawful generalizations, either at the level
of the reducing theory or the reduced theory (e.g., Machamer et al. 2000; Woodward
2000).7 This is problematic for traditional reductionists because there is no scope for a
reduction without laws in either theory. Interestingly, the absence of laws also raises
difficulties for traditional defenders of autonomous psychological explanation because
all recourse to laws, even at the level of the reduced theory, becomes empty in this
nomological vacuum. For this reason, philosophical interest in these debates has
diminished in recent years.
Despite diminished attention of late, MR considerations are once again being invoked
in a different light in order to draw conclusions about the nature of explanation, and in
particular, computational explanation in neuroscience. Critically, if these arguments
about the distinctness and autonomy of computational explanation in neuroscience
can secure a foothold, then broader conclusions about the autonomy of computational
explanations in psychology are also within reach. In what follows, I will argue that even
when rehabilitated from the outmoded framework of laws, MR considerations are
orthogonal to issues concerning explanation.

3.  Canonical Neural Computation Explanations and Multiple Realizability

Several recent authors (e.g., Carandini 2012; Carandini and Heeger 2012; Chirimuuta
2014) claim that certain kinds of computations performed in the brain—canonical
neural computations—cannot be explained in mechanistic terms because they rely
on a diversity of neural mechanisms and circuits. According to Carandini, a leading
visual neuroscientist, the brain “relies on a core set of standard (canonical) neural
computations: combined and repeated across brain regions and modalities to apply

6
  Explanation and theory reduction are closely linked in the traditional Nagelian model. Just as the D–N
model ties successful explanation to the presence of a deductive or derivability relationship between
explanans and explanandum statements, so too successful reduction is tied to the presence of a deductive
relationship between two theories. Intertheoretic reduction is thus naturally treated as a special case of
deductive-nomological explanation. For further discussion, see Kaplan (2015).
7
  To the extent that the notion of “law” continues to find uses in these scientific disciplines it is perhaps
best characterized in terms of describing the pattern, effect, or regularity to be explained, rather than as
providing the explanation (e.g., Bechtel 2008; Cummins 2000; Wright and Bechtel 2006). For example,
Fitts’ law describes the tradeoff (negative correlation) between speed and accuracy in goal-directed, human
motor behavior. The Weber-Fechner law describes the robust psychological finding that the just-noticeable
difference between two sensory stimuli is proportional to their magnitudes. These well-known generalizations
mathematically describe but arguably do not explain what gives rise to these widespread patterns or
regularities in human behavior and perception.
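For concreteness, standard textbook formulations of the two generalizations just mentioned are as follows (the formulas and symbols are supplied here for illustration only and are not given in the chapter):

% Fitts' law: movement time MT grows with target distance D and shrinks
% with target width W; a and b are empirically fitted constants.
MT = a + b \log_2\left(\frac{2D}{W}\right)

% Weber's law (the descriptive core of the Weber-Fechner relation): the
% just-noticeable difference \Delta I is a constant fraction k of the
% stimulus intensity I.
\frac{\Delta I}{I} = k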

simpler operations to different problems” (Carandini 2012, 508). The key idea is that
these neural computations are repeatedly deployed across different brain areas, different
sensory modalities, and even across different species.8 Prominent examples of CNCs
include linear filtering and divisive normalization. Linear filtering involves a computation
of the weighted sum of synaptic inputs by linear receptive fields. Divisive normalization
involves the activity of one neuron (or neural population) being normalized or adjusted
by the activity of other neurons in the same region. Below, I will focus exclusively on
the example of divisive normalization.
The divisive normalization model was initially introduced to account for certain
response properties of neurons in primary visual cortex (V1) including contrast
saturation and cross-orientation suppression (Heeger  1992; Carandini and Heeger
1994; Carandini et al. 1997), which could not be handled by linear models of orientation
selectivity (e.g., Hubel and Wiesel 1968). Cross-orientation suppression, for example,
describes the phenomenon in which simple cells in V1 exhibit weaker responses to
the superposition of two orientation stimuli than would be predicted by the sum of the
responses to each individual stimulus alone, even when one of the component stimuli
evokes no above-baseline response at all (Figure 8.1).
Models of orientation selectivity such as that of Hubel and Wiesel, in which con-
verging feedforward inputs from the lateral geniculate nucleus (LGN) are combined
by taking a weighted sum, fail to account for this and other non-linear response
properties. The divisive normalization model accommodates these non-linearities by
including not only a term for the feedforward or excitatory inputs (sometimes called
the summative or simply receptive field), but also a term in the divisor reflecting activity
pooled from a large number of nearby neurons (sometimes called the suppressive field)
(Equation (8.1)). The overall neural response (output) thus reflects the ratio of activity
between these two terms, and the essential computational operation being performed
is division. A generalized version of the normalization model is as follows:

\[
R_j = y \, \frac{D_j^{\,n}}{\sigma^{n} + \sum_{k} D_k^{\,n}} \qquad (8.1)
\]

where R_j is the normalized response of neuron j; D_j in the numerator represents the
neuron’s excitatory input; ∑_k D_k represents the sum over a large number of inputs, the
normalization pool. The constants y, σ, and n represent free parameters fit to empirical
data: y scales the overall level of responsiveness, σ prevents undefined values resulting

8
  One assumption not addressed in this chapter is the foundational one concerning whether neural
systems genuinely compute and process information. While most computational neuroscientists assume
that neural systems perform computations (canonical or otherwise), and so go beyond what many other
computational scientists assume about the phenomena they investigate when building computational
models (e.g., computational climate scientists do not generally think that climate systems compute in the
relevant sense), this remains open to challenge. For further discussion of this important issue, see Piccinini
and Shagrir (2014).

[Figure 8.1 near here: panel (a) grating and plaid stimuli arranged by test contrast (%) and mask contrast (%); panel (b) corresponding V1 neuron responses over time (ms).]
Figure 8.1  Example of cross-orientation suppression in V1 neurons. (A) Grating stimuli are
presented in alignment with the neuron’s preferred orientation (top row; test stimuli), orthogonal
to the neuron’s preferred orientation (left column; mask stimuli), or by superimposing the two
(remaining panels; plaid stimuli). (B) Corresponding responses of a typical V1 neuron to the same
stimulus set. Cross-orientation suppression is most evident when non-preferred orientation
grating stimuli are presented at high contrasts (bottom rows). For this neuron, the superposition
of two gratings (e.g., bottom right panel in A), one at the preferred orientation (e.g., top right
panel in A) and one in the null or orthogonal orientation (e.g., bottom left panel in A), evokes
a smaller response than does the preferred orientation grating alone, even though the null
orientation stimulus has no effect on firing as compared to baseline levels.
Source: Freeman et al. 2002, 760. Reprinted with permission from Elsevier.

from division by zero, and n is an exponential term that modulates the values of
individual inputs.
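To see how Equation (8.1) behaves, here is a minimal numerical sketch of divisive normalization (illustrative only: the drive values, pool sizes, and parameter settings below are invented, not taken from the studies cited). Holding the neuron’s own drive fixed while enlarging the normalization pool suppresses its response, and increasing contrast drives the response toward saturation.

# Divisive normalization, Equation (8.1):
#   R_j = y * D_j**n / (sigma**n + sum_k D_k**n)
# Parameter values and inputs below are illustrative only.

def normalized_response(drive_j, pool_drives, y=1.0, sigma=0.1, n=2.0):
    """Response of neuron j given its own excitatory drive and the drives
    of the normalization pool (which includes neuron j itself)."""
    return y * drive_j**n / (sigma**n + sum(d**n for d in pool_drives))

# Response saturation: the denominator grows with stimulus contrast, so the
# response levels off instead of growing linearly with contrast.
for contrast in (0.05, 0.1, 0.2, 0.4, 0.8):
    pool = [contrast] * 10                    # pool activity scales with contrast
    print(contrast, round(normalized_response(contrast, pool), 3))

# Cross-orientation suppression: an orthogonal mask adds nothing to the
# neuron's own (numerator) drive but recruits extra suppressive-pool activity,
# so the plaid (test + mask) response falls below the test-alone response.
test_alone = normalized_response(0.5, [0.5] * 10)
plaid = normalized_response(0.5, [0.5] * 10 + [0.5] * 10)
print(test_alone > plaid)                     # True

Nothing in these few lines speaks to which circuits or biophysical mechanisms implement the division; the same arithmetic fits any system to which the normalization equation is applied.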
The model successfully accounts for response saturation in V1 neurons with
increasing stimulus contrast because terms in both the numerator and denominator
positions scale proportionally with changes in stimulus contrast. Increasingly larger
excitatory responses elicited by increasing contrast are kept neatly in check (are “nor-
malized” in the mathematical sense) because they are being divided by an increasingly
larger number in the denominator reflecting inputs from the entire suppressive field.
The model also predicts cross-orientation suppression effects: responses of a V1 neuron
to an optimally oriented stimulus (e.g., a grating stimulus aligned with the neuron’s
preferred orientation) are suppressed by superimposing a mask stimulus of different
orientation (Figure 8.1). Even a sub-optimal visual stimulus that is ineffective at elicit-
ing a response such as a grating stimulus with an orientation orthogonal to the neuron’s
preferred orientation (the null orientation; Figure 8.1, left column) nonetheless sup-
presses the response to a stimulus that optimally drives the neuron (Figure 8.1, top row)
when the two stimuli are presented simultaneously (Figure 8.1, bottom right corner panel).
The normalization model accounts for this puzzling effect because suppression in
the denominator increases as a function of increasing contrast in both the preferred

and non-preferred orientation stimuli, whereas excitation in the numerator only
increases with increasing contrast in the preferred orientation. While the model’s pre-
dictive successes in V1 are noteworthy, interest in divisive normalization has expanded
considerably because of its apparent ubiquity in nervous systems.
Critically, the normalization equation has since been applied successfully to a
wide range of systems well beyond mammalian V1. It has been used to characterize
responses in the mammalian retina (Carandini and Heeger 2012); sub-cortical
structures including the LGN (Bonin et al. 2005; Mante et al. 2005); higher visual cor-
tical areas including V4 (Reynolds et al. 2000; Reynolds and Heeger 2009), area MT/V5
(Simoncelli and Heeger 1998; Rust et al. 2006), and IT (Zoccolan et al. 2005); and in
non-visual cortical areas including auditory cortex (Rabinowitz et al.  2011). The
model has even been applied to describe certain operating characteristics of the
invertebrate olfactory system (Olsen et al. 2010). Simply stated, the same computa-
tion appears to be performed in multiple neural circuits across a diversity of brain
regions and species. This, we are told, seriously limits the prospects of providing a
suitable mechanistic explanation of this phenomenon. Carandini has the follow-
ing  to say about canonical neural computation and the relevance of underlying
mechanistic details:
Crucially, research in neural computation does not need to rest on an understanding of the
underlying biophysics. Some computations, such as thresholding, are closely related to under-
lying biophysical mechanisms. Others, however, such as divisive normalization, are less likely
to map one-to-one onto a biophysical circuit. These computations depend on multiple circuits
and mechanisms acting in combination, which may vary from region to region and species to
species. In this respect, they resemble a set of instructions in a computer language, which does
not map uniquely onto a specific set of transistors or serve uniquely the needs of a specific software
application.  (Carandini 2012, 508)

Elsewhere, Carandini and Heeger echo the same point: “[T]here seem to be many
circuits and mechanisms underlying normalization and they are not necessarily the
same across species and systems. Consequently, we propose that the answer has to
do with computation, not mechanism” (Carandini and Heeger 2012, 60). Given the
apparent boldness of these claims, it is unsurprising that philosophers have started
to take notice. For example, Mazviita Chirimuuta (2014), seeking to build on consider-
ations of this kind, develops a full-blown argument against mechanistic analyses for
these kinds of neural computations. Canonical neural computation, she says, “creates
difficulties for the mechanist project [because these] computations can have numerous
biophysical realisations, and so straightforward examination of the mechanisms
underlying these computations carries little explanatory weight” (Chirimuuta 2014, 127).
CNC modeling is instead supposed to embody a “distinct explanatory style,” and “[s]uch
explanations cannot be assimilated into the mechanistic framework” (Chirimuuta
2014, 127). The claim here is clear: CNCs cannot be accommodated by the framework
of mechanistic explanation, and must therefore embody a distinctive non-mechanistic

explanatory form, because these computations are associated with multiple neural
circuits or mechanisms in the brain. In other words, CNCs are multiply realized, and
therefore cannot be analyzed in mechanistic terms. Chirimuuta explicitly formulates
the key premise as follows:
[The] most important characteristic of normalization as a CNC is that there is now good
evidence that normalization is implemented by numerous different biophysical mechanisms,
depending on the system in question . . . In other words, normalization is multiply realized.
(Chirimuuta 2014, 138–9)

According to Chirimuuta, the conclusion we are supposed to draw from this
premise—that the same computations are performed across multiple neural circuits,
areas, and species—is that CNCs cannot be characterized properly in mechanistic
terms and instead require a distinctive, non-mechanistic explanatory framework.
Chirimuuta therefore defends what she calls a “distinctness of computation” thesis
about CNCs, according to which “there can be principled reasons for analyzing neural
systems computationally rather than mechanistically” (2014, 139). One clear-cut way
of defending the “distinctness” thesis would be to show that CNC explanations fail to
satisfy various norms on adequate mechanistic explanations. Indeed, this is precisely
what Chirimuuta sets out to do. She argues that CNC explanations “should not be sub-
ject to mechanistic norms of explanation” (2014, 131). In particular, she identifies
Kaplan and Craver’s (2011) model–mechanism–mapping (3M) principle as the central
mechanistic norm from which CNC explanations evidently depart. As a reminder, the
3M principle states:
3M: A mechanistic model of a target phenomenon explains that phenomenon to the extent that
(a) the elements in the model correspond to identifiable components, activities, and organizational
features of the target mechanism that produces, maintains, or underlies the phenomenon, and
(b) the pattern of dependencies posited among these elements in the model correspond to
causal relations among the components of the target mechanism.
(Kaplan 2011, 347; Kaplan and Craver 2011, 611)

The objective of 3M is to consolidate some of the major requirements on explanatory
mechanistic models and highlight the primary dimensions along which mechanistic
models may be assessed. Explanatory mechanistic models reveal aspects of the causal
structure of the mechanism—the component parts, their activities, and organization—
responsible for the phenomenon of interest. Mechanistic models are explanatorily
adequate to the extent that they accurately and completely describe or represent this
structure.9 Chirimuuta claims that CNC models fail to satisfy 3M, and therefore

9
  Accuracy and completeness requirements on mechanistic models are implicit in 3M. It is beyond the
scope of this chapter to address how the norms of accuracy and completeness are central to assessing
mechanistic explanations. For further discussion, see Craver and Darden (2013).

cannot be assimilated into the mechanistic framework. She characterizes the conclusion
of her critical analysis as follows:10
Along the way we have seen that the mechanists’ 3M requirement . . . sit[s] oddly (to put it mildly)
with the new interpretation and status of the normalization model. For Carandini and Heeger
(2012) no longer think that we should interpret the normalization model mechanistically, as
describing components and dynamics of a neural mechanism . . . [A]ny digging down to more
mechanistic detail would simply lead us to miss the presence of CNC’s entirely, because of their
many different realizations.  (Chirimuuta 2014, 140)

Here is one plausible way of summarizing the entire argument:


P1.  If a given model is a mechanistic explanation, then it satisfies 3M.
P2.  If a given phenomenon is multiply realizable, then a model of that phenom-
enon will not satisfy 3M.
P3.  Canonical neural computations such as divisive normalization are multiply
realizable.
P4.  Models of canonical neural computations do not satisfy 3M. (Inference from
P2 and P3)
C.  Models of canonical neural computations are not mechanistic explanations.
(Inference from P1 and P4)
As should be obvious now, P2 is the crucial premise in the argument. P2 identifies
MR as a sufficient condition for a given model failing to satisfy 3M. There are at least
two possible interpretations of P2 that are consistent with the letter of 3M. One way
of interpreting P2 is that it outlines a sufficient condition for a given model to count
as a bad (or inadequate) mechanistic explanation. Another interpretation is that it
provides a sufficient condition for not counting as a mechanistic explanation at
all (good or bad). The latter interpretation seems to be the one that Chirimuuta pre-
fers. In what follows, I will make a case for rejecting (both interpretations of) P2 by
showing how MR considerations are irrelevant to assessing the quality of a mechan-
istic explanation.11 Once this barrier to understanding is removed, it will be clear
how CNC explanations are properly characterized as mechanistic explanations.

10
  It is important to acknowledge that Chirimuuta (2014) also develops a positive proposal centered
on what she terms an informational minimal model (or I-minimal model). Although her positive view
is interesting and merits consideration on its own terms, the focus of the present chapter is on her
negative thesis.
11
  There is another route to undermining P2 on entirely different grounds from those developed in the
present chapter. This strategy involves challenging the basic premise of multiple realization itself. In par-
ticular, following a line of argument first developed by Bechtel and Mundale (1999), one could argue that
the initial plausibility of MR claims (at the heart of the challenge from canonical neural computation) rests
on a spurious mismatch between a coarse-grained functional individuation criterion (for establishing the
sameness of the computation being performed) with an excessively fine-grained structural individuation
criterion (for establishing the difference of underlying brain mechanisms). Although it will not be pursued
here, at least for the case of divisive normalization, the granularity objection appears to have teeth.

4.  Sound Localization: Birds


Specific cases of CNCs are too controversial to be useful for the purposes of illustrating
how MR considerations are irrelevant to assessing the adequacy of mechanistic
explanations. Instead, I want to focus on one of the most successful explanatory
models in all of systems neuroscience, the neural circuit model of auditory sound
localization in the barn owl. For reasons that will become evident shortly, this is a
paradigmatic example of a mechanistic explanation. Nevertheless, the capacity to
localize airborne sounds is implemented in a diversity of neural mechanisms across
different species throughout the animal kingdom. Yet, I will argue, this does not
compromise the explanatory status of the model. Nor does it affect the adequacy of
any other mechanistic explanations for sound localization in different systems or
species. This example helps to show how MR considerations are irrelevant to assessing
mechanistic explanations.
Many species of birds are capable of accurately localizing sounds on the basis of auditory cues alone, for example during flight in complete darkness. These animals
exploit the different arrival times of a sound at the two ears (Figure 8.2). Although
these interaural time differences (ITDs) may only be microseconds apart, birds have
evolved an exquisite strategy to detect and use ITDs to localize sounds. Unraveling
precisely how ITDs are computed in the brain is one of the great success stories of
modern neuroscience.


Figure 8.2  Sound localization in birds (A) and mammals (B). Both rely on computing the
difference between the arrival times at the two ears (ITD) to localize airborne sounds in
the horizontal plane.
Source: Grothe (2003), 2. Adapted with permission from Macmillan Publishers Ltd.

More than fifty years ago, the psychologist Lloyd Jeffress proposed an elegant
computational model for sound localization in the horizontal plane (Jeffress 1948).
The Jeffress model involves three basic elements: (e1) converging time- or phase-locked
excitatory inputs from both ears, (e2) coincidence detectors that respond maximally
when signals arrive from each ear simultaneously, and (e3) an arrangement of delay
lines with systematically varying lengths from each ear so that different coincidence
detectors encode different ITDs (Figure 8.3a).12 Since neural transmission delay time
is directly proportional to an axon’s length, tuning for different ITDs can be achieved
by having axonal “delay lines” of systematically varying lengths project from each ear
onto different individual coincidence detectors. A final, related detail of the model is
that the set of coincidence detectors are topographically organized such that adjacent
coincidence detectors represent adjacent locations in the horizontal plane (Figure 8.3a,
schematically represented by grayscale coding along the sagittal plane). Strikingly,
Jeffress developed the model to account for a body of human psychophysical data on
sound localization, and did so in the absence of information about the nature of the
underlying brain mechanisms.13
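To make the division of labor among e1–e3 concrete, the following minimal sketch (illustrative Python with entirely hypothetical parameter values and an idealized Gaussian tuning function; it is not the published model or any code from the literature) shows how an array of coincidence detectors, each assigned a preferred ITD by its pair of delay lines, yields a place code for sound direction.

```python
# Illustrative sketch of the Jeffress scheme (not published model code):
# phase-locked inputs from each ear (e1) converge on coincidence detectors (e2)
# via axonal "delay lines" of graded length (e3). A detector responds best when
# its internal delays offset the acoustic ITD (cf. Ai + Ni = Ac + Nc, note 12).
import numpy as np

def jeffress_responses(itd_us, best_itds_us, sigma_us=50.0):
    """Idealized (Gaussian) response of each detector to a sound with the given
    ITD; best_itds_us are the preferred ITDs fixed by each detector's delay lines."""
    mismatch = itd_us - best_itds_us          # residual timing difference at each detector
    return np.exp(-(mismatch ** 2) / (2.0 * sigma_us ** 2))

# Delay lines graded so preferred ITDs tile the relevant range (hypothetical values, microseconds).
best_itds = np.linspace(-200.0, 200.0, 9)

responses = jeffress_responses(itd_us=120.0, best_itds_us=best_itds)
print("most active detector encodes ITD =", best_itds[np.argmax(responses)], "microseconds")
```

The final line exhibits the place code Jeffress posited: which detector is most active, rather than how strongly any one detector fires, is what carries the information about sound direction.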
Remarkably, all of the major elements of the Jeffress delay line model (e1–e3) have
now been confirmed in the barn owl (Carr and Konishi 1990; Konishi 2003; Pena
et al. 2001).14 Careful behavioral, anatomical, and physiological investigations have
revealed a neural circuit for computing ITDs involving delay line-based coincidence
detection of signals from the two ears. More specifically, so-called bushy cells in the
left and right nucleus magnocellularis (NM) send time-locked excitatory inputs from
each ear, implementing e1 of the Jeffress model. Neurons in the nucleus laminaris (NL),
the first station of binaural processing in the avian auditory brainstem, are maximally
responsive when ipsilateral and contralateral input signals arrive simultaneously.
In other words, these neurons perform coincidence detection, implementing e2 of
the model. Individual NL neurons tuned to the same characteristic frequency show
­different preferred ITDs in virtue of differences in the axonal conduction delays
from each ear.15 Recall that, since the time delay of neural conduction through an
axon is directly proportional to its length, neural tuning for different ITDs can be

12
  Temporally coincident inputs can be precisely defined in the model as occurring when the sum of
acoustic and neural transmission delays originating from one ear equals that from the other ear: Ai + Ni =
Ac + Nc , where A indicates the auditory input signal, N indicates the neural transmission delay, and sub-
scripts i and c indicate ipsilateral and contralateral, respectively. For further discussion, see Konishi (2003).
13
  As the original Jeffress model involved a set of minimally constrained mechanistic conjectures about
how sound localization might be performed, it constitutes a clear example of a how-possibly mechanistic
model (Craver 2007; Kaplan 2011).
14
  Although the model was originally verified in the owl, extremely similar observations have been
confirmed in most other bird species (Grothe and Pecka 2014).
15
  Frequency tuning is observed in NL and in many other auditory neurons mainly because hearing begins
when the cochlea mechanically filters incoming sounds into separate frequency components. Consequently, all
output signals from the cochlea are already broken down or filtered according to their various frequencies.

[Figure 8.3 appears here: four panels (a)–(d), rows labeled Birds (top) and Mammals (bottom); + marks excitation and – marks inhibition; the x-axes of panels (b) and (d) show ITD (μs) from –500 (right ear leading) to +500 (left ear leading). See caption below.]

Figure 8.3  Neural computation of interaural time differences (ITDs) in birds and mammals.
(A) Jeffress-type computational mechanism observed in the nucleus laminaris (NL) of birds
involving coincidence detection of excitatory inputs from the two ears. (B) Distribution of
different preferred ITDs across a population of narrowly tuned individual neurons in one
hemispheric NL. Shaded area indicates physiologically relevant range of ITDs. (C) Computational
mechanism in mammals involving precisely timed hyperpolarizing inhibition that adjusts the
timing of excitatory inputs to coincidence detector neurons in the medial superior olive (MSO).
(D) For a given frequency band, the ITD tuning of the population of MSO neurons in the left
MSO is the inversion of that of the corresponding population of neurons in the right MSO
Source: Grothe 2003, 4. Reprinted with permission from Macmillan Publishers Ltd.

implemented by axons (“delay lines”) of systematically varying lengths projecting from each ear onto individual NL neurons. And this is exactly what has been found. A matrix of NL neurons receives inputs from axons of systematically varying lengths (and with systematically varying interaural delays) that project along the length of the nucleus, implementing e3 of the Jeffress delay line model (Figure 8.3a).16 Because

16
  To make the critical role played by delay lines and coincidence detection clear, it is helpful to consider
how different arrangements of delays from the two ears give rise to coincident inputs from different locations
in auditory space. For example, axons of equal length projecting from each ear onto a coincidence detector

different NL neurons have different preferred ITDs, a population code is required to represent the entire physiologically relevant range of ITDs (Figure 8.3b),17 and by extension, the entirety of horizontal auditory space.18
Critically, the neural circuit model of sound localization developed by Konishi and
colleagues is a paradigmatic example of a mechanistic explanation in computational
neuroscience. Its explanatory status is beyond reproach within the neuroscience
community, and more importantly it exhibits all the hallmarks of an adequate
mechanistic explanation. In particular, it readily satisfies 3M. There is a clear mapping from elements of the model onto all key aspects of the target mechanism
in the avian auditory system. The parts, activities, and organization identified in the
model are implemented by corresponding structures, activities, and organizational
features in the avian brainstem. NL neurons function as coincidence detectors and
axons from NM neurons serve as delay lines. Different individual coincidence
detector neurons in NL exhibit different response or tuning properties such that
across the population the full range of physiologically relevant ITDs are encoded.
Finally, the temporal and spatial organization depicted in the model is precisely
reproduced in the avian brainstem circuitry. As described in detail above, the tim-
ing of excitatory inputs received by individual NL coincidence detector neurons
reflects the exquisite spatial organization of axons (i.e., component parts) that are
systematically arranged such that their lengths generate neural transmission delays
that precisely offset specific ITDs. This organization is essential to their ITD tuning properties.
One way of elucidating why mechanistic models have explanatory force with respect
to the phenomenon they are used to explain appeals to the same sorts of considerations
that one might plausibly appeal to for other kinds of causal explanations. Along these
lines, adequate mechanistic explanations allow us to answer a range of what-if-things-
had-been-different questions (or w-questions) just as causal explanations do (Kaplan
2011; Kaplan and Craver 2011; Woodward 2005). Purely descriptive models may either
fail to provide answers to w-questions and offer no explanation at all, or answer only

neuron in NL will have equal internal neural conduction delays (Figure 8.3a, central row positions); and
consequently will give rise to coincident inputs only when sounds are emitted from straight ahead and
reach both cochlea at the same time (i.e., when ITD = 0). Because these neurons fire maximally for ITDs
of zero, they are said to be tuned or have a best or preferred ITD of zero. By contrast, an NL neuron
receiving a short axonal projection from the left (ipsilateral) ear and a long axonal projection from the
right (contralateral) ear, will receive coincident inputs and exhibit tuning only for sounds coming from
the right auditory hemifield. This is because signals from the right ear must travel longer compared to
those from the left. Hence, the internal transmission delays precisely offset the difference in arrival times
at the two ears. Conversely, an NL neuron with a short axonal projection from the right ear and a long
axon from the left ear will receive coincident inputs and exhibit tuning for sounds coming from the left
auditory hemifield.
17
  What comprises the physiologically relevant range of ITDs is determined by factors such as overall
head size, and more specifically, the distance between the ears.
18
  It turns out that these neurons are also topographically organized in NL, such that neurons in adjacent
positions in NL code for spatially adjacent locations in contralateral auditory space, thereby implementing
another detail of the Jeffress model (Figure 8.3a).

a very restricted range of w-questions and offer superficial explanations. By contrast, deeper mechanistic explanations afford answers to a broader range of w-questions
concerning interventions on the target mechanism than do more superficial ones. The
model for sound localization in the barn owl is an example of a deeper mechanistic
explanation. The model allows us to answer a multitude of w-questions. For example, it
can answer how the response profile of a given coincidence detector neuron in NL
would change if we intervened to vary the length of either the contralateral or ipsilat-
eral axonal delay line through which it receives its inputs. It can answer how the neural
circuit would perform if excitatory inputs from one of the two ears were completely or
partially eliminated. It can answer the question of which individual coincidence
detector neuron in NL would respond if the ITD value of a sound were artificially set
to x microseconds. And so on. Mechanistic models explain because they deliver
answers to these and other w-questions.
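To see concretely how the model delivers an answer to one of these w-questions, consider a small follow-on sketch (the same idealized Jeffress-style picture and hypothetical numbers as the sketch above): because conduction delay is proportional to axon length, a detector's preferred ITD is fixed by the difference between its two internal delays, so intervening to lengthen the contralateral delay line shifts the detector's tuning in a predictable way.

```python
# Illustrative only: with the ITD of a sound defined here as the contralateral
# minus the ipsilateral acoustic delay, the coincidence condition (note 12)
# implies that a detector's preferred ITD equals its ipsilateral minus its
# contralateral internal conduction delay.
def preferred_itd_us(delay_ipsi_us, delay_contra_us):
    return delay_ipsi_us - delay_contra_us

before = preferred_itd_us(300.0, 300.0)   # equal delay lines: tuned to ITD = 0 (sound straight ahead)
after = preferred_itd_us(300.0, 450.0)    # intervention: lengthen the contralateral line
print(before, "->", after)                # preferred ITD shifts away from the midline
```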

5.  Sound Localization: Mammals


For many years, it was thought that all birds and mammals perform sound localization
by relying on the same Jeffress-type mechanism involving coincidence detection of
excitatory inputs coming from the two ears. However, mounting evidence now suggests
that mammals compute ITDs and thereby localize sounds using a different underlying
mechanism. It is becoming increasingly clear that mammals (unlike birds) do not rely on
a population of neurons with a precise arrangement of axonal delay lines from the two
ears in order to compute ITDs and perform auditory localization (Ashida and Carr 2011;
Brand et al. 2002; Grothe 2003; McAlpine and Grothe 2003; McAlpine et al. 2001; Myoga
et al. 2014).19 Instead, the emerging picture attributes a major role for synaptic inhibition
in the processing of ITDs (Figure 8.3c). More specifically, precisely timed hyperpolariz-
ing inhibition controls the timing of excitatory inputs reaching binaural coincidence
detector neurons in the mammalian auditory brainstem structure known as the medial
superior olive (MSO).20 Inhibition slows the transmission of excitatory inputs to the
MSO in such a way as to precisely offset the difference in arrival time at the two ears aris-
ing from the specific location of a sound source. This altered temporal sensitivity of
binaural neurons in the MSO provides the basis for ITD computation. The fact that exci-
tatory inputs from both ears would reach the MSO without any significant interaural
conduction delays (and thus would always coincide at ITDs of zero) in the absence of
inhibition, clarifies its role in ITD computation. Mammalian sound localization there-
fore reflects the convergence of bilateral excitatory and exquisitely timed inhibitory
inputs onto coincidence detector neurons in the MSO (Figure 8.3c).
19
  The synaptic inhibition model has primarily been described in gerbils and guinea pigs. However, there
is evidence that similar neural mechanisms for ITD computation are at work in other mammals including
cats and possibly even humans. For additional discussion, see Grothe (2003) and Thompson et al. (2006).
20
  The MSO receives bilateral excitatory inputs from so-called spherical bushy cells in both ventral coch-
lear nuclei. For additional discussion of the neural circuit underlying sound localization in mammals, see
Grothe (2003).

The mechanism underlying ITD computation and sound localization in mammals differs from the mechanism observed in birds in several major ways. First, even though
the inhibitory mechanism involves similarly functioning parts (i.e., neurons serving as
coincidence detectors), the functional activities—tuning properties—of coincidence
detector neurons in MSO are fundamentally different from those observed in the avian
auditory system. Specifically, individual MSO neurons do not have different preferred
ITDs distributed across the entire relevant range of ITDs, as is found in birds
(Figure 8.3b). In mammals, all MSO neurons tuned to the same frequency band within
each hemispheric MSO exhibit the same ITD tuning, and horizontal sound location
is read out from the population-averaged firing rate across two broadly tuned spatial
channels—one for each hemispheric MSO (Figure 8.3d). What this means is that a
change in the horizontal position of a sound source will induce a specific pattern
of change in population activity in one hemisphere and a corresponding change of
opposite sign in the population activity in the other hemisphere. For example, a sound
moving away from the midline (where ITD = 0), might serve to increase activity in the
contralateral MSO and correspondingly decrease activity in the ipsilateral MSO,
thereby indicating that the sound source has shifted to a more lateral position. It is
therefore the relative difference between the two hemispheric channels (the relative
activity across the entire population of MSO neurons) that encodes ITD information
and indicates the horizontal location of a sound source. Because individual MSO
neurons within each hemisphere carry similar information, this argues against the
local coding strategy observed in birds in which each individual neuron encodes
information about different ITDs, and instead strongly implies that a population code
is used to represent ITDs (Lesica et al. 2010; McAlpine et al. 2001).
Second, the mammalian mechanism for ITD computation involves different parts
doing different things. As indicated above, in addition to the excitatory projections
originating from the left and right cochlea, MSO neurons also receive bilateral inhibitory
projections from other structures in the auditory system that are highly specialized to
preserve the fine temporal structure of auditory stimuli with high precision.21 These
structures generate temporally accurate patterns of inhibition that precisely control
the timing of excitatory inputs reaching the MSO.
Third, and perhaps most obviously, certain key parts and properties of the mechanism
for computing ITDs in birds are simply missing from the inhibitory mechanism found
in mammals. Specifically, there are no axons serving as delay lines. Axon lengths are
roughly equivalent such that excitatory inputs from both ears would reach the MSO
without any appreciable interaural conduction delay in the absence of inhibition, and
thus always coincide at ITDs of zero. It is only through precise inhibitory control over
excitatory timing that appropriate ITD tuning is achieved.
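The functional signature of the differences just listed can be summarized in one more small sketch (illustrative Python with made-up tuning curves, not a published model): there are no delay lines, each hemispheric MSO population contributes a single broad, mirror-image ITD tuning curve, and horizontal location is read out from the relative activity of the two channels rather than from the identity of a maximally active neuron.

```python
# Illustrative two-channel ("opponent") readout: each hemispheric MSO population
# has one broad sigmoidal ITD tuning curve, the two sides are mirror images, and
# lateral position is signaled by their difference in population-averaged rate.
import numpy as np

def channel_rate(itd_us, slope_sign, width_us=500.0):
    """Idealized population-averaged rate of one hemispheric channel;
    slope_sign = +1 or -1 produces the mirror-image tuning of the two sides."""
    return 1.0 / (1.0 + np.exp(-slope_sign * itd_us / width_us * 4.0))

def lateral_signal(itd_us):
    """Difference between the two channels: its sign indicates the side of the
    source, its magnitude how far the source lies from the midline."""
    return channel_rate(itd_us, +1.0) - channel_rate(itd_us, -1.0)

for itd_us in (-200.0, 0.0, 200.0):   # microseconds; hypothetical test values
    print(itd_us, round(lateral_signal(itd_us), 3))
```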

21
  Glycinergic neurons in the medial nucleus of the trapezoid body provide the main source of
hyperpolarizing inhibition to the mammalian MSO. For further discussion, see Grothe (2003).

Despite these profound differences in underlying mechanisms, the inhibitory control model similarly provides an adequate mechanistic explanation of sound localization
in mammals. Like the delay-line model, it straightforwardly satisfies 3M. The inhibitory
control model describes the parts, activities, and organizational features of the target
mechanism in the mammalian nervous system underlying ITD computation and
sound localization.

6.  Multiple Realizability, Scope, and Mechanistic Explanation

The upshot of the preceding discussion should now be clear. Together, the cases described
above provide a strong counterexample to P2 in Chirimuuta’s argument—the claim
that if a given phenomenon is multiply realizable, then models of that phenomenon
will fail to satisfy 3M. Here we have just seen that the phenomenon of ITD computa-
tion (or sound localization) is multiply realized, and yet both the delay line model for
birds and the inhibitory control model for mammals each individually satisfies the
requirements of 3M. Both provide adequate mechanistic explanations. Indeed, both
are widely regarded as paradigmatic instances of mechanistic explanations. Hence, at
least for these cases, MR appears irrelevant to assessing their explanatory status. This
result should be prima facie puzzling given the strong parallels between these models
and models of CNCs. Before returning to consider CNCs, it will help to explore a bit
further how these argumentative moves play out in the context of models of ITD
computation canvassed above.
Suppose that instead of being satisfied that both the delay line model and the inhibi-
tory control model independently provide adequate mechanistic explanations of sound
localization (because each independently satisfies 3M), an additional requirement is
imposed to the effect that there must be a single, unifying explanation to cover both birds
and mammals since both perform the same computation (i.e., ITD ­computation). To be
clear, what is being demanded here is a wide-scope explanation that unifies or subsumes
all known systems in which a given computation is performed. Although a detailed com-
parison goes well beyond the purview of the current c­ hapter, this requirement reflects
deep similarities with unificationist accounts of explanation (e.g., Kitcher 1981). The
general idea behind unificationist views is that scientific explanation is a matter of
providing a unified account of a range of different phenomena. For ease of reference,
we might therefore call this additional requirement or principle the unification principle
(UP). Because no sufficiently wide-scope or unifying mechanistic explanation will be
forthcoming in the case of ITD computation—the underlying mechanisms in birds and
mammals are known to differ, after all—taking UP on board entails the rejection of the
mechanistic approach in favor of some other non-mechanistic form of explanation. But
is this inference justified? The short answer is “no.”
There are two things wrong with this line of argument. First, it trades on a confusion over
the proper scope of mechanistic explanations, and subsequently, neglects the important

possibility of constructing "local" or narrow-scope mechanistic explanations. Second, it problematically assumes that more unifying models—those with wider scope—will
always be explanatorily superior to those that are less unifying. I will address each in turn.

6.1  Scope and mechanistic explanation


Attempts to impose UP reflect a deep confusion over the proper scope of mechanistic
explanations. Scope concerns how many systems or how many different kinds of
systems there actually are to which a given model or generalization applies. In the
present context, UP implies that models of ITD computation cannot successfully explain
unless they have relatively wide scope—i.e., unless there is a model (mechanistic or
otherwise) that uniformly captures the common computation performed across
multiple neural circuits and taxa. Critically, although it is true that wide-scope
mechanistic explanations will be unavailable if MR obtains, it does not follow that
the mechanistic approach must be abandoned (and some other explanatory scheme
embraced), as is implied by P2 in Chirimuuta’s argument. This is because wide-scope
explanations are not the only type of mechanistic explanation that can be provided for
neural computations. In particular, as the cases of ITD computation canvassed above
clearly indicate, perfectly adequate mechanistic explanations can be constructed that
capture some but not necessarily all instances in which the same ITD computation is
being performed. The delay line model provides an adequate mechanistic explanation
even though its scope only includes the biological taxa of birds but not mammals.
Similarly, the inhibitory control model provides an adequate mechanistic explanation
even though its scope only includes mammals but not birds. These models explain how
a given computation is implemented in a specific system or type of system without
covering all systems in which the computation is implemented. Crucially, they show
how the scope of mechanistic models can be relatively narrow or “local” without this
restricting or compromising their explanatory power. Narrow-scope mechanistic
explanations are perfectly adequate for the range of systems they do in fact cover. Why?
Although scope is a dimension along which mechanistic models can and do vary
(Craver and Darden 2013), it is not an appropriate norm for evaluating mechanistic
explanations. This is implicit in the 3M principle. 3M clarifies how mechanistic models
are to be judged as explanatorily adequate based on the extent to which they accurately
and completely describe the causal structure of the mechanism—the component parts,
their activities, and organization—responsible for the phenomenon of interest (Kaplan
2011; Kaplan and Craver 2011). The quality of the explanation provided does not in
any way depend on how widely the model applies. Consequently, our assessments of
mechanistic explanations should be insensitive to scope.
Despite the fact that adequate mechanistic explanations can and frequently do
describe highly conserved biological or neural mechanisms (e.g., DNA or the selectivity
filter mechanism in potassium channels; Doyle et al. 1998; MacKinnon et al. 1998),
resulting in wide-scope mechanistic explanations, this is not required. Instead, just
like the mechanistic explanations of ITD computation described above, the scope of

explanatory mechanistic models is often highly restricted in biological and neurobiological contexts. Along these lines, Bechtel (2009) maintains that mechanistic accounts
are often “highly particularized” in that they “characterize the mechanism responsible
for a particular phenomenon in a specific biological system” (Bechtel 2009, 762). At the
limit, a mechanistic model might aim to capture only a single case (e.g., the unique
mechanism responsible for producing the specific pattern of cognitive deficits exhibited
by patient H. M.; Annese et al. 2014) or a recurring mechanism found in only one species
(e.g., the exotic mechanism underlying color vision in the mantis shrimp; Thoen
et al. 2014). These “highly particularized” or scope-restricted models can nonetheless
provide perfectly adequate mechanistic explanations—by satisfying 3M—even though
their application may be restricted to extremely narrow domains.
Of course, things could have turned out differently in the cases of ITD computation
surveyed above due to differences in patterns of evolution, drift, or both. Under some
counterfactual evolutionary scenarios, a common circuit design across these taxa
might have resulted instead of the actual divergent mechanisms observed today.
Although these differences would invariably change the scope of our computational
models, such alterations in scope are (or at any rate should be) inconsequential to how
we evaluate these models as mechanistic explanations.

6.2  Unification and explanation


This deflationary view about the relationship between model scope and explanatory
power is by no means unique or idiosyncratic to the mechanistic framework. Although
the view that wide scope is not an important property of explanatory models places the
mechanistic approach at odds with traditional covering law and unificationist accounts
of explanation, interventionist approaches to causal explanation similarly reject scope
as a reliable indicator of explanatory depth or power (Hitchcock and Woodward 2003;
Woodward 2005). According to the interventionist perspective, explanatory depth
reflects the extent to which a given generalization or model provides resources for
answering a greater range of what-if-things-had-been-different questions about the
phenomenon of interest. This in turn is argued to depend on how wide the range of
interventions is under which a given generalization is invariant. One generalization
or model thus provides a better or “deeper” explanation than another because it is
invariant under a wider range of interventions than the other, not because it has wider
scope. Hence, both mechanistic and interventionist perspectives provide arguments
for thinking that scope and explanatory power can and often do vary independently of
one another.
Along similar lines, Sober (1999) rejects the assumption underlying UP, namely,
that relatively abstract, unifying explanations are always better and are to be preferred
over more detailed, disunified explanations. In particular, he contends that there is
“no objective reason” (551) to prefer unified over disunified explanations. According
to unificationists like Kitcher, explanations involving an abundance of “micro-details”
(or mechanistic details) are “less unified” than explanations that abstract from such

detail because the latter and not the former can more easily apply across a wide range
of systems with different underlying realizations. Sober argues that it is the nature of our
explanatory interests that determines which type we will prefer in any given situation.
In some explanatory contexts, highly detailed, narrow-scope explanations may be
better or more appropriate. In other contexts, more abstract, wide-scope explanations
may be preferable. Crucially, there is no automatic preference or requirement for
explanations involving abstraction away from micro-detail, as is implied by UP.

7.  Canonical Neural Computation Explanations Reconsidered

We now have the appropriate resources to reconsider the claim that CNCs cannot
be analyzed in mechanistic terms and therefore require a distinct explanatory
framework. In particular, we can now see how arguments against the mechanistic
analyses of CNCs trade on precisely the same confusion as the one identified above
concerning the proper scope of mechanistic explanations. Specifically, these arguments
start by pointing out that wide-scope mechanistic explanations of CNCs will invari-
ably fail in virtue of the diversity of mechanisms across different neural circuits and
species that implement the computation. From this, together with an assumed premise
embodying something highly similar to UP, a conclusion is drawn to the effect that
CNCs cannot be assimilated into the mechanistic framework (Chirimuuta  2014).
However, for reasons canvassed above, this argumentative strategy assumes that
only wide-scope explanations are appropriate, and it consequently neglects the
possibility of developing narrow-scope mechanistic explanations of computational
phenomena. Yet, as the discussion of ITD computation shows, narrow-scope mech-
anistic explanations can in principle be provided, even in contexts where MR
is operative.
After reviewing evidence that divisive normalization computations are implemented
differently across different systems, Carandini and Heeger (2012) claim that: “it is
unlikely that a single mechanistic explanation will hold across all systems and species:
what seems to be common is not necessarily the biophysical mechanism but rather
the computation” (58). On a strong reading, Carandini and Heeger (2012) appear to
embrace UP. Chirimuuta reinforces this interpretation when she claims that they
“no longer think that we should interpret the normalization model mechanistically,
as describing components and dynamics of a neural mechanism” (Chirimuuta 2014, 140).
She interprets them as embracing the view that unless a wide-scope explanation of
divisive normalization is forthcoming, all manner of mechanistic explanation will be
out of reach. Independently of whether or not this is the correct interpretation of
Carandini and Heeger’s view,22 Chirimuuta explicitly endorses this position as her

22
  Carandini and Heeger’s (2012) claim is compatible with both a weaker and stronger reading, only
the latter of which implies a commitment to UP. According to the weaker reading, although a single

own. She takes the fact that no single mechanistic explanation can be provided that
subsumes all divisive normalization phenomena—i.e., that describes the full range of
circuits and systems that perform the same (divisive normalization) computation—as
evidence that “[s]uch explanations cannot be assimilated into the  mechanistic
framework” (2014, 127). But this assumption of UP is just as problematic in the context
of models of CNCs as it was in the context of models of sound localization.
As was true in the case of ITD computation, scope-restricted explanations of divisive normalization are either currently available or forthcoming.23 For example,
neuroscientists have already developed a robust narrow-scope mechanistic explan-
ation for how divisive normalization computations are performed in the fruit fly
(Drosophila) olfactory system. There is strong evidence that, in this specific system,
normalization is implemented by GABA-mediated inhibition of presynaptic connec-
tions between neurons in the fly antennal lobe (Olsen et al. 2010). Similarly robust
narrow-scope explanations have been developed for divisive normalization in the fly
visual system and the mammalian retina, and the mechanisms thought to be involved
differ in important respects. Although uncertainty remains about the specific mechan-
ism underlying divisive normalization in mammalian primary visual cortex (V1)—the
original phenomenon for which the normalization model was developed—candidate
narrow-scope mechanistic explanations of this phenomenon continue to be the subject
of intense experimental investigation in contemporary neuroscience. One plaus-
ible  narrow-scope explanation posits a form of shunting inhibition—increases in
membrane conductance without the introduction of depolarizing or hyperpolarizing
synaptic currents—as the mechanism underlying divisive normalization in V1
(Chance et al. 2002). Other narrow-scope explanations posit mechanisms of synaptic
depression (Abbott et al.  1997) or intrinsic neural noise (Carandini  2007; Finn
et al. 2007). The fact that we do not at present have a complete mechanistic explanation
for this phenomenon, or that the evidence for one plausible mechanism over another
remains mixed, is inconsequential. What matters is that progress is being made,

mechanistic explanation is unavailable, multiple individual mechanistic explanations may still be available
or forthcoming. On the stronger reading, the fact that no single mechanistic explanation can be given
implies that no mechanistic explanation of any kind is available or forthcoming. Although determining
which reading is more closely aligned with their actual view goes beyond the scope of this chapter, based
on the broader context, they appear to endorse the weaker reading. Specifically, they repeatedly emphasize
how research into canonical neural computations such as divisive normalization involves the discovery
and articulation of underlying mechanisms, stating that “[a] key set of questions concerns the circuits and
mechanisms that result in normalization” and how “[u]nderstanding these circuits and mechanisms is
fundamental” (2012, 60).
23
  For reasons that go beyond the scope of this chapter to address, the underlying mechanisms cited in
these explanations of normalization are highly similar in many respects and bear a family resemblance to
one another. It seems entirely plausible to think that even though there is no single common normalization
mechanism, the fact that all these systems perform a common computational operation nevertheless places
some important constraints on the nature of the underlying mechanisms. Consequently, we might reason-
ably expect a family of highly similar mechanisms (and in turn, mechanistic explanations) whenever a
common computational operation is being performed across systems.

or can in principle be made, towards suitable narrow-scope mechanistic explanations of divisive normalization phenomena and other kinds of CNCs more generally.
And we should have every reason to think that the question of how each of these
neural systems performs computations will ultimately succumb to this kind of
mechanistic analysis.
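For readers who want the shared computation itself in view, here is a minimal sketch of divisive normalization in one standard textbook form (a paraphrase of the kind of equation presented in Carandini and Heeger's reviews; the gain, exponent, and semi-saturation constant below are illustrative values, not figures drawn from this chapter). It is this input-output rule, rather than any particular biophysical implementation such as GABA-mediated presynaptic inhibition or shunting inhibition, that the various systems are said to have in common.

```python
# Illustrative divisive normalization: each unit's driving input is divided by
# the pooled activity of the whole population plus a constant, so active
# neighbors suppress a unit's response (all parameter values are made up).
import numpy as np

def normalize(drives, gain=1.0, sigma=0.5, n=2.0):
    drives = np.asarray(drives, dtype=float)
    pooled = sigma ** n + np.sum(drives ** n)   # normalization pool
    return gain * (drives ** n) / pooled

print(normalize([1.0, 0.2, 0.2]))   # a strongly driven unit among weak neighbors
print(normalize([1.0, 1.0, 1.0]))   # the same drive, suppressed by active neighbors
```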
Assimilating CNCs into the mechanistic framework does not require, as Chirimuuta
and others have argued, the provision of a single mechanistic explanation to cover
all instances in which one and the same computation is performed—a wide-scope
mechanistic explanation—but rather only requires the provision of multiple suitable
narrow-scope explanations. Once recognized, the irrelevance of MR considerations
becomes clear. For narrow-scope mechanistic explanations of computational phenomena,
the objective is to reveal the particular implementing mechanism underlying a given
neural computation in a specific system (or type of system), independently of whether
multiple other possible mechanisms also might have performed the computation, or
whether other actual mechanisms might be performing the same computation in
other neural systems or in other species.

8. Conclusion
As the mechanistic perspective comes to occupy a dominant position in philosophical
thinking about explanation across the biological sciences including all areas of neuro-
science, questions will naturally arise about its scope and limits. Along these lines,
some authors have recently appealed to modeling work involving canonical neural
computations to argue that some explanations in neuroscience fall outside the bounds
of the mechanistic approach. Because these neural computations can rely on diverse
circuits and mechanisms, modeling the underlying mechanisms is supposed to be of
limited explanatory value. At its core, this is a challenge from multiple realizability.
In this chapter, I argue that these anti-mechanistic conclusions about canonical neural
computation explanations are mistaken, and rest upon confusions about the proper
scope of mechanistic explanation and the relevance of multiple realizability consid-
erations. More specifically, I maintain that this confusion stems from a failure to
appreciate how scope is not an appropriate norm on mechanistic explanation and
therefore even mechanistic models with highly restricted scope can in principle be
explanatorily adequate. Once appreciated, it becomes clear how mechanistic models
of computational phenomena can provide legitimate explanations, even in the presence
of multiple realizability.
Although there is a long history of wielding multiple realizability considerations to
successfully challenge dominant philosophical positions including mind-brain type
identity theory and reductionism in biology, throughout this chapter I have argued that
such considerations do not create any insuperable obstacles for adopting a mechanistic
perspective about computational explanation in neuroscience.

References
Abbott, L. F., J. A. Varela, K. Sen, and S. B. Nelson. 1997. “Synaptic Depression and Cortical
Gain Control.” Science 275 (5297): 220–4.
Annese, J., N. M. Schenker-Ahmed, H. Bartsch et al. 2014. “Examination of Patient H. M.’s Brain
Based on Histological Sectioning and Digital 3D Reconstruction.” Nature Communications
5: 3122.
Ashida, Go and Catherine E. Carr. 2011. “Sound Localization: Jeffress and Beyond.” Current
Opinion in Neurobiology 21 (5): 745–51.
Ayala, Francisco J. 1968. “Biology as an Autonomous Science.” American Scientist: 207–21.
Batterman, Robert W. 2002. The Devil in the Details: Asymptotic Reasoning in Explanation,
Reduction, and Emergence. Oxford: Oxford University Press.
Bechtel, William. 2008. Mental Mechanisms: Philosophical Perspectives on Cognitive Neuroscience.
Mahwah, NJ: Lawrence Erlbaum.
Bechtel, William. 2009. “Generalization and Discovery by Assuming Conserved Mechanisms:
Cross Species Research on Circadian Oscillators.” Philosophy of Science 76(5): 762–73.
Bechtel, William and Jennifer Mundale. 1999. “Multiple Realizability Revisited: Linking
Cognitive and Neural States.” Philosophy of Science 66(2): 175–207.
Bonin, Vincent, Valerio Mante, and Matteo Carandini. 2005. “The Suppressive Field of Neurons
in Lateral Geniculate Nucleus.” Journal of Neuroscience 25 (47): 10844–56.
Brand, Antje, Oliver Behrend, Torsten Marquardt, David McAlpine, and Benedikt Grothe. 2002.
“Precise Inhibition Is Essential for Microsecond Interaural Time Difference Coding.” Nature
417 (6888): 543–47.
Carandini, Matteo. 2007. “Melting the Iceberg: Contrast Invariance in Visual Cortex.” Neuron
54 (1): 11–13. doi:10.1016/j.neuron.2007.03.019.
Carandini, Matteo. 2012. “From Circuits to Behavior: A Bridge Too Far?” Nature Neuroscience
15 (4): 507–9. doi:10.1038/nn.3043.
Carandini, Matteo and David J. Heeger. 1994. “Summation and Division by Neurons in Primate
Visual Cortex.” Science 264 (5163): 1333–6.
Carandini, Matteo and David J. Heeger. 2012. “Normalization as a Canonical Neural
Computation.” Nature Reviews Neuroscience 13 (1): 51–62.
Carandini, Matteo, David J. Heeger, and J. Anthony Movshon. 1997. “Linearity and Normalization
in Simple Cells of the Macaque Primary Visual Cortex.” Journal of Neuroscience 17 (21): 8621–44.
Carr, C. E. and M. Konishi. 1990. “A Circuit for Detection of Interaural Time Differences in the
Brain Stem of the Barn Owl.” Journal of Neuroscience 10 (10): 3227–46.
Chance, Frances S., L. F. Abbott, and Alex D. Reyes. 2002. “Gain Modulation from Background
Synaptic Input.” Neuron 35 (4): 773–82.
Chirimuuta, M. 2014. “Minimal Models and Canonical Neural Computations: The Distinctness
of Computational Explanation in Neuroscience.” Synthese 191 (2): 127–53.
Craver, Carl F. 2007. Explaining the Brain. Mechanisms and the Mosaic Unity of Neuroscience.
New York: Oxford University Press.
Craver, Carl F. and Lindley Darden. 2013. In Search of Mechanisms: Discoveries across the Life
Sciences. Chicago: University Of Chicago Press.
Cummins, Robert. 2000. “ ‘How Does It Work?’ versus ‘What are the Laws?’ Two Conceptions
of Psychological Explanation.” In Explanation and Cognition, edited by F. Keil and
Robert A. Wilson, 117–44. Cambridge, MA: MIT Press.

Doyle, D. A., J. Morais Cabral, R. A. Pfuetzner, A. Kuo, J. M. Gulbis, S. L. Cohen, B. T. Chait, and R. MacKinnon. 1998. “The Structure of the Potassium Channel: Molecular Basis of K+ Conduction and Selectivity.” Science 280 (5360): 69–77.
Feest, Uljana. 2003. “Functional Analysis and the Autonomy of Psychology.” Philosophy of
Science 70 (5): 937–48.
Finn, Ian M., Nicholas J. Priebe, and David Ferster. 2007. “The Emergence of Contrast-Invariant
Orientation Tuning in Simple Cells of Cat Visual Cortex.” Neuron 54 (1): 137–52.
doi:10.1016/j.neuron.2007.02.029.
Fodor, Jerry A. 1974. “Special Sciences (or: The Disunity of Science as a Working Hypothesis).”
Synthese 28 (2): 97–115.
Fodor, Jerry A. 1997. “Special Sciences: Still Autonomous after All These Years.” Noûs 31 (s11):
149–63.
Freeman, Tobe C. B., Séverine Durand, Daniel C. Kiper, and Matteo Carandini. 2002.
“Suppression without Inhibition in Visual Cortex.” Neuron 35 (4): 759–71.
Glymour, Clark. 1970. “On Some Patterns of Reduction.” Philosophy of Science 37 (3):
340–53.
Grothe, Benedikt. 2003. “New Roles for Synaptic Inhibition in Sound Localization.” Nature
Reviews Neuroscience 4 (7): 540–50.
Grothe, Benedikt and Michael Pecka. 2014. “The Natural History of Sound Localization in
Mammals: A Story of Neuronal Inhibition.” Frontiers in Neural Circuits 8. <http://www.ncbi.
nlm.nih.gov/pmc/articles/PMC4181121/>.
Heeger, David J. 1992. “Normalization of Cell Responses in Cat Striate Cortex.” Visual
Neuroscience 9 (2): 181–97.
Hitchcock, Christopher and James Woodward. 2003. “Explanatory Generalizations, Part II:
Plumbing Explanatory Depth.” Noûs 37 (2): 181–99.
Hoyningen-Huene, Paul. 1989. “Epistemological Reductionism in Biology: Intuitions,
Explications, and Objections.” In Reductionism and Systems Theory in the Life Sciences, edited by
Paul Hoyningen-Huene and Franz M. Wuketits, 29–44. New York: Springer. <http://link.
springer.com/chapter/10.1007/978-94-009-1003-4_3>.
Hubel, David H. and Torsten N. Wiesel. 1968. “Receptive Fields and Functional Architecture of
Monkey Striate Cortex.” Journal of Physiology 195 (1): 215–43.
Jeffress, Lloyd A. 1948. “A Place Theory of Sound Localization.” Journal of Comparative and
Physiological Psychology 41 (1): 35.
Johnson-Laird, Philip N. 1983. Mental Models: Towards a Cognitive Science of Language,
Inference, and Consciousness. Cambridge, MA: Harvard University Press.
Kaplan, David Michael. 2011. “Explanation and Description in Computational Neuroscience.”
Synthese 183 (3): 1–35.
Kaplan, David Michael. 2015. “Explanation and Levels in Cognitive Neuroscience.” In
Handbook of Neuroethics, edited by Jens Clausen and Neil Levy, 9–29. Dordrecht: Springer.
<http://link.springer.com/referenceworkentry/10.1007/978-94-007-4707-4_4>.
Kaplan, David Michael and Carl F. Craver. 2011. “The Explanatory Force of Dynamical and
Mathematical Models in Neuroscience: A Mechanistic Perspective.” Philosophy of Science
78 (4): 601–27.
Keeley, Brian L. 2000. “Shocking Lessons from Electric Fish: The Theory and Practice of
Multiple Realization.” Philosophy of Science 67(3): 444–65.
Kitcher, Philip. 1981. “Explanatory Unification.” Philosophy of Science 48(4): 507–31.

Kitcher, Philip. 1984. “1953 and All That: A Tale of Two Sciences.” Philosophical Review 93(3):
335–73.
Konishi, Masakazu. 2003. “Coding of Auditory Space.” Annual Review of Neuroscience 26 (1):
31–55.
Lesica, Nicholas A., Andrea Lingner, and Benedikt Grothe. 2010. “Population Coding of
Interaural Time Differences in Gerbils and Barn Owls.” Journal of Neuroscience: The
Official Journal of the Society for Neuroscience 30 (35): 11696–702. doi:10.1523/
JNEUROSCI.0846-10.2010.
Machamer, Peter, Lindley Darden, and Carl F. Craver. 2000. “Thinking about Mechanisms.”
Philosophy of Science 67(1): 1–25.
MacKinnon, R., S. L. Cohen, A. Kuo, A. Lee, and B. T. Chait. 1998. “Structural Conservation in Prokaryotic and Eukaryotic Potassium Channels.” Science 280 (5360): 106–9.
Mante, Valerio, Robert A. Frazor, Vincent Bonin, Wilson S. Geisler, and Matteo Carandini. 2005.
“Independence of Luminance and Contrast in Natural Scenes and in the Early Visual
System.” Nature Neuroscience 8 (12): 1690–7.
McAlpine, David and Benedikt Grothe. 2003. “Sound Localization and Delay Lines: Do
Mammals Fit the Model?” Trends in Neurosciences 26 (7): 347–50.
McAlpine, David, Dan Jiang, and Alan R. Palmer. 2001. “A Neural Code for Low-Frequency
Sound Localization in Mammals.” Nature Neuroscience 4 (4): 396–401.
Myoga, Michael H., Simon Lehnert, Christian Leibold, Felix Felmy, and Benedikt Grothe. 2014.
“Glycinergic Inhibition Tunes Coincidence Detection in the Auditory Brainstem.” Nature
Communications 5: 3790. doi:10.1038/ncomms4790.
Nagel, Ernest. 1961. The Structure of Science: Problems in the Logic of Scientific Explanation.
Vol. 1. New York: Harcourt, Brace, and World.
Olsen, Shawn R., Vikas Bhandawat, and Rachel I. Wilson. 2010. “Divisive Normalization in
Olfactory Population Codes.” Neuron 66 (2): 287–99. doi:10.1016/j.neuron.2010.04.009.
Pena, Jose Luis, Svenja Viete, Kazuo Funabiki, Kourosh Saberi, and Masakazu Konishi. 2001.
“Cochlear and Neural Delays for Coincidence Detection in Owls.” Journal of Neuroscience 21
(23): 9455–9.
Piccinini, Gualtiero and Oron Shagrir. 2014. “Foundations of Computational Neuroscience.”
Current Opinion in Neurobiology 25: 25–30.
Polger, Thomas W. 2009. “Evaluating the Evidence for Multiple Realization.” Synthese 167 (3):
457–72.
Putnam, Hilary. 1975. “Mind, Language and Reality.” In Philosophical Papers, Vol. 2. Cambridge:
Cambridge University Press.
Rabinowitz, Neil C., Ben D. B. Willmore, Jan W. H. Schnupp, and Andrew J. King. 2011.
“Contrast Gain Control in Auditory Cortex.” Neuron 70 (6): 1178–91.
Reynolds, John H. and David J. Heeger. 2009. “The Normalization Model of Attention.” Neuron
61 (2): 168–85.
Reynolds, John H., Tatiana Pasternak, and Robert Desimone. 2000. “Attention Increases
Sensitivity of V4 Neurons.” Neuron 26 (3): 703–14.
Rosenberg, Alex. 2001. “Reductionism in a Historical Science.” Philosophy of Science 68:
135–63.
Rust, Nicole C., Valerio Mante, Eero P. Simoncelli, and J. Anthony Movshon. 2006. “How MT
Cells Analyze the Motion of Visual Patterns.” Nature Neuroscience 9 (11): 1421–31.

Sarkar, Sahotra. 1992. “Models of Reduction and Categories of Reductionism.” Synthese 91 (3):
167–94.
Schaffner, Kenneth F. 1967. “Approaches to Reduction.” Philosophy of Science 34: 137–47.
Shapiro, Lawrence A. 2000. “Multiple Realizations.” Journal of Philosophy 97 (12): 635–54.
Shapiro, Lawrence A. 2008. “How to Test for Multiple Realization.” Philosophy of Science 75 (5):
514–25.
Simoncelli, Eero P. and David J. Heeger. 1998. “A Model of Neuronal Responses in Visual Area
MT.” Vision Research 38 (5): 743–61.
Sober, Elliott. 1999. “The Multiple Realizability Argument against Reductionism.” Philosophy of
Science 66(4): 542–64.
Thoen, H. H., M. J. How, T.-H. Chiou, and J. Marshall. 2014. “A Different Form of Color Vision
in Mantis Shrimp.” Science 343 (6169): 411–13.
Thompson, Sarah K., Katharina von Kriegstein, Adenike Deane-Pratt, Torsten Marquardt,
Ralf  Deichmann, Timothy D. Griffiths, and David McAlpine. 2006. “Representation of
Interaural Time Delay in the Human Auditory Midbrain.” Nature Neuroscience 9 (9):
1096–8. doi:10.1038/nn1755.
Waters, C. Kenneth. 1990. “Why the Anti-Reductionist Consensus Won’t Survive: The Case of
Classical Mendelian Genetics.” In PSA: Proceedings of the Biennial Meeting of the Philosophy
of Science Association, 125–39. <http://www.jstor.org/stable/192698>.
Woodward, J. 2000. “Explanation and Invariance in the Special Sciences.” British Journal for the
Philosophy of Science 51 (2): 197–254.
Woodward, J. 2005. Making Things Happen: A Theory of Causal Explanation. New York: Oxford
University Press.
Wright, Cory D. and William Bechtel. 2006. “Mechanisms and Psychological Explanation.”
Handbook of Philosophy of Psychology and Cognitive Science, 31–79.
Zoccolan, Davide, David D. Cox, and James J. DiCarlo. 2005. “Multiple Object Response
Normalization in Monkey Inferotemporal Cortex.” Journal of Neuroscience 25 (36): 8150–64.

9
Marr’s Computational Level and Delineating Phenomena
Oron Shagrir and William Bechtel

1. Introduction
Bogen and Woodward (1988) convincingly demonstrated that scientific explanations
are directed at phenomena, not data. Phenomena are regular, repeatable types of
events, processes, or states; Bogen and Woodward offer examples of what they mean by
phenomena: “weak neutral currents, the decay of the proton, and chunking and
recency effects in human memory” (p. 306). The new mechanistic philosophers of
­science have embraced Bogen and Woodward’s focus on phenomena, holding that
mechanisms are identified in terms of the phenomena they are to explain (Machamer,
Darden, & Craver, 2000; Glennan, 2002; Bechtel & Abrahamsen, 2005). For the most
part they, following the lead of Bogen and Woodward, have stayed with textbook
accounts of phenomena, offering as examples the action potential, glycolysis, protein
synthesis, and long-term potentiation. The specification of phenomena is generally
treated as unproblematic—the challenge is explaining them. Kauffman (1971) noted
the importance of selecting among the many things organisms do before attempting to
explain how they do so, as that selection will affect the explanation offered. Bechtel and
Richardson (2010 [1993]) drew attention to the fact that often much research must be
done to delineate phenomena and that sometimes in the course of developing a mech-
anistic account scientists end up recognizing that the phenomenon is different than
they initially supposed. For example, research in biochemical genetics began by trying
to account for the role of genes in generating phenotypic traits, but in the course of
their research Beadle and Tatum (1941) recharacterized genes as involved in generating
enzymes. Bechtel and Richardson refer to such revisions in the account of the phe-
nomenon as reconstituting the phenomena. But even they do not develop the fact that
the phenomena for which explanations are sought are typically characterized in a far
more detailed, quantitative fashion, and that saving such quantitative features of
phenomena is often a critical challenge in explanation and an important criterion in
evaluating putative explanations.

Insofar as phenomena are the explananda for mechanistic explanation, it is important to clarify what a phenomenon is. Although measuring a phenomenon quantitatively
is more important than mechanists have recognized, not everything that can be meas-
ured quantitatively is treated as a phenomenon to be explained by a mechanism, even
if it is the effect of a mechanism and plays a role in evaluating proposed accounts of the
mechanism. In the case of the action potential, the change over time of the electrical
charge across the neuron membrane is part of the phenomenon, but the temporary
increase in sodium concentration inside the neuron is not, although both can be char-
acterized quantitatively. Likewise, the phenomenon of long-term potentiation is
characterized by the increased number of action potentials generated by a neuron in
response to a stimulus but not by how much ATP is consumed in the process. Given
the multitude of items that can be measured quantitatively, it is important that we are
able to differentiate those that do and those that do not count as phenomena for which
a mechanism is sought.
We will argue that important insights into the role of phenomena in mechanistic
explanations can be found in Marr’s (1982) characterization of what he called the compu-
tational level. Marr introduces his well-known account of levels to counter what he took
to be a shortcoming in the practice of neuroscience: the preoccupation with the compo-
nents of the visual processing mechanism—the properties of cells and their behavior.
Marr’s objective was not to repudiate the search for mechanism but to recast it in terms of
his tri-level framework of computational, algorithmic, and implementational levels.
Marr contended that “Vision is . . . first and foremost, an information-processing task.”
Delineating this information processing task—the phenomenon—is the job of what
Marr called the computational level. The algorithmic level characterizes the system of
representations that is being used, e.g., decimal versus binary, and the algorithm
employed for transforming representations of inputs into those of outputs. The imple-
mentation level specifies how the representations and algorithm are physically realized.
What is involved in characterizing vision as performing an information-processing
task? Marr associates the computational level with two aspects, the what and the why.
In the introductory, “philosophical outlook” chapter of Vision, Marr says that “the most
abstract is the level of what the device does and why” (p. 22). The job of the what-aspect
is to specify what is computed. The job of the why-aspect is to demonstrate the appro-
priateness and adequacy of what is being computed to the information-processing
task (pp. 24–5). In “Artificial intelligence: A personal view,” Marr states that at the com-
putational level, “the underlying nature of a particular computation is characterized,
and its basis in the physical world is understood. One can think of this part as an
abstract formulation of what is being computed and why” (1977, p. 37).
But what exactly does Marr mean by these what and why aspects of CL? Marr never
provided a systematic and detailed account of his notion of CL; what he does say
about it is often brief and somewhat vague. Instead, Marr provided a set of computa-
tional theories of specific visual tasks. These impressive theories induced an enor-
mous amount of research into computer and biological vision. The conceptual task of
explicating the notion of a computational-level theory was left to philosophers, who provided, in turn, radically different interpretations.
Unfortunately, as we will argue in Section 2, none of these interpretations is adequate
to the distinctive role Marr envisaged for CL. We will review three of the most prominent
accounts in Section 2 and show how each falls short of what Marr seems to have had in
mind. In Section 3 we advance an alternative interpretation that we contend better cap-
tures what Marr saw as the importance of the what and why aspects of CL analysis. CL
theory, as we see it, provides a formal or mathematical account of the task the visual
system performs in the actual physical world in which it functions. Our goal, though, is
not simply to engage in Marr exegesis. Rather, we contend that understanding what
Marr had in mind by analysis of CL is extremely important for providing an adequate
account of the role delineating phenomena plays in science, especially science devoted
to the identification of mechanisms. As we argue in Section 4, the phenomena for which
mechanisms are sought require formal or mathematical characterizations that are
grounded in the context in the world in which the mechanism functions. In Section 5
we will argue that Marr did not go far enough in characterizing phenomena in CL
terms. The tasks mechanisms are to perform are not simply given to scientists, but
typically discovered through empirical (observational or experimental) inquiry.
Moreover, they are frequently revised in the course of developing explanations of them.
Following Marr, we will take visual perception as our primary exemplar. But the
implications of Marr’s approach extend to any phenomena that are appropriately char-
acterized in computational terms, that is, information-processing terms. Marr’s
account was designed for neuroscience and, although some contest it, the computa-
tional metaphor is appropriate for brain function generally. The task for the brain and
the various processes occurring in it is to extract and use information to control the
functioning of an organism. Moreover, the reasons that justify reference to computa-
tion and information processing in the case of the brain apply far more broadly to
control processes in living organisms. Cell signaling systems, for example, process
information to control such activities as the use of different metabolites for fuel, the
repair of DNA, or the synthesis of proteins, and researchers are increasingly employing
information-processing language to characterize these processes (Shapiro, 2011).1 But
the activities thereby regulated—the transformation of energy into ATP, or the synthe-
sis or degradation of proteins—are not appropriately characterized in information-
processing terms. Exploring what insight Marr’s account of CL offers to characterizing
phenomena in those cases goes beyond the scope of this chapter.

1  The concept of information has been employed in many different ways in biology, where it took on special significance after Watson and Crick (1953) used it to characterize the function of the genetic code. Some, inspired by Shannon (1948), have treated information in causal terms (effects carry information about their causes). Others such as Maynard Smith (2000) have defended a teleosemantic notion in which the content of a signal is fixed by natural selection. Yet others have rejected the application of the concept of information to genes as metaphorical (Griffiths, 2001). See Levy (2011) for a valuable discussion that elucidates the roles different accounts of information play in biological theorizing.
2.  Shortcomings of Extant Accounts of Marr's Computational Level

We cannot review all attempts to explicate Marr’s notion of CL, but will focus on
three whose shortcomings are illuminating. According to the first (the “standard”
interpretation), CL characterizes the information-processing task, mainly in inten-
tional terms. According to the second, the aim of CL is to provide a mathematical or
a formal theory, and according to the third, CL provides a sketch of mechanism.

2.1  The “standard” interpretation: specifying an information-processing task


Most interpreters of Marr assume that the role of the computational level is specifying an
information-processing visual or cognitive task: “At the highest level was a specifica-
tion of what task a system was designed to perform: e.g., in the case of vision, to construct
a three-dimensional representation of distal stimuli on the basis of inputs to the retina”
(Horst, 2009). This information-processing task is often described in terms of the contents
of the input and the output representations: “A computational analysis will identify the
information with which the cognitive system has to begin (the input to that system)
and the information with which it needs to end up (the output from that system)”
(Bermúdez, 2005, p. 18). Thus edge-detection is the mapping from representations of light
intensities to representations of physical edges (e.g., object boundaries). Shape-from-
shading is the mapping from representations of shading to representations of shape, and
so on. When put in the context of the what and why aspects, the standard interpretation
apparently associates the what with the mapping of input representations to output rep-
resentations, and the why with the informational (or “intentional”) content of these repre-
sentations. Thus the computational level specifies, for example, that early visual processes
map representations of light intensities to representations of oriented lines (“edges”).
Another claim made by the standard interpretation is that these specified visual
information-processing tasks are the phenomena to be explained. In other words,
the specification of the information-processing task is “the specification of the
explanandum—the cognitive task that we are attempting to explain. Marr calls this
the ‘computational’ level, where the specification is typically an input–output function”
(Ramsey, 2007, p. 41). De facto, most interpreters think that the real explanatory
level is the algorithmic level where it is shown “how the brain performs this repre-
sentational conversion” (p. 41). Ramsey continues: “In this three-tiered framework,
it is the middle, algorithmic level where the CCTC theories attempt to explain the
kinds of processes that account for mentality” (p. 42). In the last sentence Ramsey
mentions classical theories (CCTC), but he adds: “This is the general form of cogni-
tive science explananda, even for non-CCTC accounts like connectionism” (p. 41).
We agree that the phenomena to be explained are visual information-processing
tasks, couched in intentional terms of input and output representations (i.e., edge-
detection, shape-from-shading, and so on). We also think that this specification
itself is often not trivial and requires lengthy scientific investigation. We contend, how-
ever, that this intentional specification is not the job, or at least the main job, of CL. It is
often made, at least to some extent, before we invoke CL at all. Using techniques such
as single-cell recording, neuroscientists had discovered that photoreceptors are sensi-
tive to light reflectance, that information from the retina arrives at V1, and that cells in
V1 are sensitive to oriented lines long before Marr invoked his computational theories.
We see no reason to call a specification of a task in terms of informational content of
the inputs and outputs a “computational theory.” This would trivialize Marr’s notion of
CL-level theory. Indeed, those who hold the standard interpretation refrain from using Marr's label of computational theory. Thus Dennett, who associates Marr's computa-
tional level with his intentional level, says that “this specification was at what he [Marr]
called, misleadingly, the computational level” (Dennett, 1994, p. 681).2 But, of course,
the labeling would be misleading only if the job of computational-level theories is pro-
viding such intentional descriptions of cognitive tasks. We will argue, however, that
the job of CL goes far above and beyond that and that the standard interpretation
misses what makes CL-level analysis distinctive.

2.2  Frances Egan: Providing a mathematical or formal theory


Frances Egan associates Marr’s CL with “the specification of the function computed”
(Egan,  1991, pp. 196–7). She argues that CL provides no more than mathematical
specifications: “The top level should be understood to provide a function-theoretic
characterization,” and “the theory of computation is a mathematical characterization
of the function(s) computed” (Egan, 1995, p. 185). The aim of CL, on this view, is to
specify the input–output mathematical function that the system computes (then the
algorithmic levels specify the algorithm by means of which the system computes this
function, and the implementation level specifies how this algorithm is implemented in
the brain). Thus, for example, the computational theory of early vision provides the
mathematical formula ∇2G*I as the computational description of what the retina does.
As Marr put it: “Take the retina. I have argued that from a computational point of view,
it signals ∇2G*I (the X channels) and its time derivative ∂/∂t(∇2G*I) (the Y channels).
From a computational point of view, this is a precise specification of what the retina
does” (1982, p. 337).3
2  Sterelny (1990, p. 46), Ramsey (2007, p. 41, note 43), and Horst (2009) make similar comments.
3  The term I stands for a two-dimensional array ("the retinal image") of intensity values detected by the photoreceptors (which is the input). This image is convolved (here signified by "*") with a filter ∇2G, where G is a Gaussian and ∇2 is a second-derivative (Laplacian) operator. This operation is arguably performed in the retinal ganglion cells.

Proponents of the standard interpretation might agree with Egan that CL also provides a mathematical description of the computed function. Egan departs from the standard interpretation in two ways. One is her insistence that CL does not provide an intentional, information-processing, characterization of the input–output function. Egan (2010) cites Chomsky, who writes that when Marr talks about "representation," it "is not to be understood relationally, as 'representation of'" (Chomsky, 1995, p. 53).
What is being represented, according to Egan, is immaterial from a computational
point of view:
Qua computational device, it does not matter that input values represent light intensities and
output values the rate of change of light intensity. The computational theory characterizes the
visual filter as a member of a well understood class of mathematical devices that have nothing
essentially to do with the transduction of light.  (Egan, 2010, p. 255)

We invoke the representational content only after the computational-level theory has
accomplished its task of specifying the mathematical function. The cognitive, inten-
tional characterization is what Egan terms a gloss on the mathematical characterization
provided by the computational theory. This intentional characterization “forms a bridge
between the abstract, mathematical characterization that constitutes the explanatory
core of the theory and the intentionally characterized pre-theoretic explananda that
define the theory’s cognitive domain” (pp. 256–7).4
The other departure from the standard interpretation is mentioned in the last sen-
tence cited. According to Egan, CL is a mathematical theory whose aim is explanatory.
What it explains is the intentional, information-processing characterization of the
function that the visual system performs. Thus, Egan agrees with the standard inter-
pretation as to the need for such an intentional, information-processing account. She
contends, however, that this characterization is pre-theoretic and so does not consti-
tute part of the computational theory. The computational theory, which consists
solely of mathematical descriptions, aims to explain these pre-theoretic explananda.
That the early visual system computes the ∇2G*I operations explains how it performs
edge detection. The explanation (presumably) is that the system detects edges by
detecting the zero-crossings generated by the second-derivative filters ∇2G*I (where
Gaussians are used at different scales).
We think that Egan captures very well the way Marr characterizes the what aspect of
CL. The job of this element is to provide a precise specification of what the system does,
and the precise specification of what the retina does is provided by the formula ∇2G*I.
However, Egan downplays the fact that there is another component to CL, namely, the
why aspect. When Marr says “from a computational point of view, this is a precise speci-
fication of what the retina does,” he refers to what the retina does, not to the why. After
characterizing what early visual processes do, Marr (1982) says that “the term edge has a
partly physical meaning—it makes us think of a real physical boundary, for example”
(p. 68). And, he adds, “all we have discussed so far are the zero values of a set of roughly
band-pass second-derivative filters. We have no right to call these edges, or, if we do
have a right, then we must say so and why” (p. 68). So it seems that Marr thinks that CL
has to cover another aspect, beyond providing mathematical characterizations.

4  Egan's main motivation here is avoiding a context-dependent individuation of computational states; see Shagrir (2001) for discussion.
2.3  Piccinini and Craver: CL as a sketch of mechanism


Piccinini and Craver (2011) argue that it is best to conceive Marr’s computational
and algorithmic levels as sketches of mechanism. On the one hand, the two levels are
not levels of mechanisms “because they do not describe component/sub-component
relations” (p. 303). On the other hand, the two levels “constrain the range of compo-
nents that can be in play and are constrained in turn by the available components”
(p. 303). In this sense of constraining, the computational and algorithmic levels
are sketches. They are placeholders for structural components or sub-capacities in a
mechanism. At the beginning of their paper, Piccinini and Craver say that a sketch
of mechanism is a description in which some structural aspects (of the mechan-
ism) are omitted. Once the missing aspects are filled in, the description turns into
“a  full-blown mechanistic explanation”; the sketches themselves can be thus seen
as “elliptical or incomplete mechanistic explanations” (p. 284). They are, in a way, a
guide or a first step towards the structural components that constitute the full-blown
mechanistic explanations.5
We agree with Piccinini and Craver that CL puts constraints on the mechanistic explanation of the phenomenon. This seems to align with Marr's methodological approach (to be discussed below). But we reject their attempt to collapse Marr's three levels into two by closely intertwining the computational and algorithmic levels. Theirs is not unique among philosophical and theoretical approaches to cognitive science in attempting to collapse Marr's levels (see, e.g., Pylyshyn, 1984; Newell, 1980, both of whom collapse Marr's computational and algorithmic levels before adding an additional semantic (Pylyshyn) or knowledge (Newell) level). But this approach is foreign to Marr. If anything, it is the algorithmic and implementational levels that belong together, as both look inside the mechanism to the operations that enable it to compute a function.6 Piccinini and Craver are right to observe that both the computational and algorithmic levels are abstract, in that they omit certain structural aspects of the mechanism (both levels are also abstract in the sense that they provide mathematical or formal descriptions). But Marr is far keener to point to a fundamental difference between the computational and algorithmic levels. The algorithmic level (much like the implementation level) is directed to the inner working of the mechanism, i.e., to causal relations (signified by arrows) between sub-components.7
The computational level looks outside, to identifying the function computed and
relating it to the environment in which the mechanism operates. Marr’s former
student and colleague, Shimon Ullman, puts this point about CL succinctly in his
manuscript on visual motion: “In formulating the computational theory, a major
portion concerns the discovery of the implicit assumptions utilized by the visual system.
Briefly, these are valid assumptions about the environment that are incorporated
into the computation” (Ullman, 1979, pp. 3–4). We will elaborate on this point below.
5  Kaplan (2011) advances a somewhat similar view, arguing that computational models in neuroscience are explanatory to the extent that they are tied to the norms of mechanistic explanations. When referring to Marr, Kaplan argues that "according to Marr, the ultimate adequacy of these computational and algorithmic specifications as explanations of human vision is to be assessed in terms of how well they can be brought into registration with known details from neuroscience about their biological implementation" (p. 343).
6  Thus Marr (1982) writes that "there must exist an additional level of understanding [i.e., CL] at which the character of the information-processing tasks carried out during perception are analyzed and understood in a way that is independent of the particular mechanisms and structures that implement them in our heads" (p. 19), and that "although algorithms and mechanisms are empirically more accessible, it is the top level, the level of computational theory, which is critically important from an information-processing point of view" (p. 27).
7  There are reasons to reject as well Piccinini and Craver's contention that the algorithmic level offers only a sketch of a mechanism. An algorithm can provide a complete account of the operations in a mechanism. In doing so it will not specify the parts of the mechanism, as that is the task of the implementation account, but then the implementation account is also incomplete insofar as it fails to specify the operations the parts perform. Moreover, as Levy and Bechtel (2013) argue, it is often a virtue in explanation to abstract from details of the mechanism to reveal what is actually responsible for the phenomenon of interest.

3.  Recognizing What Is Distinctive about CL


We offer an alternative interpretation of Marr’s CL that keeps equally in focus the
what and why questions associated with it. Accordingly, we emphasize two aspects
of Marr’s CL. One is the quantitative characterization of the phenomena (associated
with the what). The other is the role of contextual or environmental constraints
(associated with the why). To make things more concrete we focus on one specific
information-processing task—the correspondence problem in stereo vision. As we
proceed, we identify respects in which our interpretation agrees with and differs from the
three interpretations above.

3.1  The correspondence problem


There is an angular discrepancy in the position of an object in the two retinal images.
This discrepancy is known as disparity. The disparity is usually larger when the object
is closer to the eyes (as in looking at a finger touching your nose) and smaller when it is
further away. The visual system deploys disparity to compute several features such as
depth. The first step of this process is matching up elements from the visual scene—
that is, finding the two elements, one from the left retinal image and the other from the
right retinal image—that correspond to the same object. The difficulty of the task stems,
among other things, from the ambiguity of elements in the images and the multiple
possibilities of matching elements.
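To give the qualitative point about disparity and distance a quantitative form, consider the simplest textbook idealization (our own gloss, not Marr's derivation): two pinhole eyes with parallel optical axes, separation b, and focal length f, viewing a point at depth Z. The disparity is then

\[
d \;=\; \frac{f\,b}{Z},
\]

so disparity grows as the object comes closer and shrinks as it recedes; converging eyes change the constants but not this inverse dependence on depth.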
Figure 9.1  Marr's portrayal of the ambiguity in matching elements to determine the depth of an object. Source: Marr and Poggio (1976), p. 285. Reprinted with permission from the American Association for the Advancement of Science.

Marr illustrates the ambiguity of elements in Figure 9.1. The four projections in the left eye's view, L1, . . . , L4, can be paired in sixteen possible ways with the four projections, R1, . . . , R4, in the right eye's view, but only four are correct (filled circles). The remaining twelve (open circles) are false targets. The horizontal dashed lines signify the amount of (horizontal) disparity; circles (pairs) that are on the same line have the same disparity. Strikingly, the visual system solves the correspondence problem even
in highly ambiguous scenes.
According to the standard interpretation, characterizing the correspondence
problem provides an intentional characterization of the input–output description
of the task and exhausts the role of the computational level. The computational
level states that the task at hand is the cognitive function whose inputs are elements
from the left and right retinal images (say, edges, bars, and so forth), and its output
is some array of pairs of elements from the left and right images that correspond
to the same worldly feature. With this characterization of CL, the standard inter-
pretation would have researchers turn to the mechanistic levels of algorithms
and implementations for the explanation. This, however, is not Marr's view. His computational level—both its what and why aspects—advances beyond this inten-
tional description.

3.2  Specifying the task in quantitative terms (the what)


Let us start with the what aspect. Marr and Poggio (1976, 1979) provide a quantitative,
mathematical description of the function solving the correspondence problem. This is
a pairing function that satisfies two conditions: (a) Uniqueness: a black dot from one
image can match no more than one black dot from the other image. This constraint
rules out, for example, the function that matches L1 to R1 and also L1 to R2; and (b)
Continuity: disparity varies smoothly almost everywhere. This constraint rules out
functions that match up pairs with very different disparities.
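To make the two conditions concrete, here is a toy sketch (our own construction for illustration, not Marr and Poggio's cooperative algorithm; the positions and numbers are made up) that checks Uniqueness and scores Continuity for candidate pairings of four left and four right projections, represented by one-dimensional positions:

# Toy illustration of the Uniqueness and Continuity conditions on a pairing function.
# Positions are one-dimensional and hypothetical; disparity = left position - right position.

def satisfies_uniqueness(matches):
    """No left element and no right element may appear in more than one pair."""
    lefts = [l for l, r in matches]
    rights = [r for l, r in matches]
    return len(lefts) == len(set(lefts)) and len(rights) == len(set(rights))

def continuity_cost(matches, left_pos, right_pos):
    """Total change in disparity across matches ordered by left position.
    Smoother (more continuous) pairings receive a lower cost."""
    ordered = sorted(matches, key=lambda m: left_pos[m[0]])
    disparities = [left_pos[l] - right_pos[r] for l, r in ordered]
    return sum(abs(d2 - d1) for d1, d2 in zip(disparities, disparities[1:]))

# Hypothetical image positions for the four left and four right projections.
left_pos = {1: 1.0, 2: 2.0, 3: 3.0, 4: 4.0}
right_pos = {1: 1.4, 2: 2.4, 3: 3.4, 4: 4.4}

correct = [(1, 1), (2, 2), (3, 3), (4, 4)]   # constant disparity
crossed = [(1, 3), (2, 1), (3, 2), (4, 4)]   # same elements, wildly varying disparity

assert satisfies_uniqueness(correct) and satisfies_uniqueness(crossed)
print(continuity_cost(correct, left_pos, right_pos))   # 0.0 -- favored by Continuity
print(continuity_cost(crossed, left_pos, right_pos))   # much larger -- penalized

A pairing that violates Uniqueness—say, adding (1, 2) alongside (1, 1)—is rejected outright, while Continuity favors pairings whose disparity varies smoothly; this is how a UC-pairing function is singled out from the sixteen possible matches.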
We see, then, that CL provides more than an intentional description of the phenom-
enon to be explained, i.e., matching elements from the left and right images. CL provides
a quantitative characterization of this matching function: It specifies the (input–output)
mathematical function that the system computes in order to reach matching. CL shows
that the visual system solves the correspondence problem by computing a pairing
function that satisfies the Uniqueness and Continuity constraints (in short: UC-pairing
function). This role of CL is consistent with Egan’s interpretation that highlights the
centrality of a mathematical or formal theory. It is also consistent with Piccinini and
Craver’s claim that CL is a sketch of a mechanism. The computed, mathematical func-
tion constrains the possible algorithms that the system might use, which are just the
algorithms for a UC-pairing function (Marr and Poggio (1979) propose a constraint-
satisfaction attractor neural network). And the computational and algorithmic levels
constrain the possible “full-blown” mechanistic explanations that can be provided.
However, neither Egan, on the one hand, nor Piccinini and Craver, on the other, notice that this quantitative characterization of the task is associated with the what
aspect of CL: What is being computed is a UC-pairing function. This aspect, however,
does not exhaust the role of the computational level. CL is also involved with embedding
this function in the environment of the perceiving subject.

3.3  The role of the environment (the why)


Marr often emphasizes that CL is involved with what he calls physical or natural con-
straints. As his students once put it, CL includes “an analysis of how properties of the
physical world constrain how problems in vision are solved” (Hildreth & Ullman, 1989,
p. 582). These physical constraints are features in the physical environment of the
perceiving individual (Marr, 1982, pp. 22–3); they are not features of the mechanism
described abstractly. To avoid ambiguities with physical features of the inner implement-
ing mechanisms we call them contextual constraints. It should be noted that these
constraints are not the informational contents of the representations, but facts about
the physical environment we happen to live in.
In our case, Marr and Poggio relate the uniqueness and continuity conditions to
contextual, environmental physical features. Uniqueness ("a black dot from one image
can match no more than one black dot from the other image”) is motivated by the spatial
localization constraint, which specifies that “a given point on a physical surface has a
unique position in space at any one time” (Marr & Poggio, 1976, p. 284; see also Marr, 1982,
pp. 112–13). Continuity (“disparity varies smoothly almost everywhere”) is motivated by
the cohesiveness of matter constraint, which says that “matter is cohesive, it is separated
into objects, and the surfaces of objects are generally smooth compared with their
distance from the viewer" (Marr & Poggio 1976, p. 284; see also Marr 1982, pp. 112–13).
What is the role of the contextual constraints in the analysis of vision, and of cogni-
tion more generally? We identify two related but different roles, one methodological
and another explanatory. The methodological role has to do with the discovery of the
computed function. The claim here is that we appeal to physical external factors in
order to discover the mathematical function that is being computed. Thus, for example,
we derive continuity (“contextual constraint”) from the fact that the world around us
consists of objects whose surfaces are by and large smooth; only a small fraction of the
image is composed of features such as object boundaries that result in changes in
depth. Thus overall disparity is mostly continuous. Or, returning to the example of
edge detection, the discovery that early visual processes compute derivation (either of
first or second degree) is made through the observation that in our perceived environ-
ment sharp changes in light reflectance occur along physical edges such as boundaries
of objects. This contextual feature puts substantial constraints on the mathematical
function that is being computed, i.e., that it has to do with some form of derivation.
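The link between sharp reflectance changes and "some form of derivation" can be seen in a minimal one-dimensional sketch (our own illustration with arbitrary numbers, not Marr's two-dimensional theory): a step change in intensity produces a zero-crossing of the second derivative of the Gaussian-smoothed signal at the location of the step.

# Minimal 1-D illustration: a step edge yields a zero-crossing of the second derivative
# of the Gaussian-smoothed intensity signal. Numbers are arbitrary.
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.arange(200)
signal = np.where(x < 100, 10.0, 40.0)     # intensity step ("edge") between x = 99 and x = 100

# Second derivative of the smoothed signal (a 1-D analogue of the second-derivative
# filters discussed in the text).
response = gaussian_filter1d(signal, sigma=4.0, order=2, mode="nearest")

# Zero-crossings: adjacent samples of appreciable magnitude with opposite signs.
significant = np.abs(response) > 1e-3 * np.abs(response).max()
crossings = [i for i in range(len(response) - 1)
             if significant[i] and significant[i + 1]
             and np.sign(response[i]) != np.sign(response[i + 1])]
print(crossings)   # a single location at the step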
The methodological role of the physical constraints is related to a top-down meth-
odology that is often associated with Marr’s framework (that the scientific investiga-
tion should proceed from the top, computational, level, down to the algorithmic and
implementation levels). A central claim of this approach is that it would be practically
impossible to extract the computed mathematical function by abstracting from neural
mechanisms. The way to go is to extract what the system computes from relevant cues
in the physical world that constrain the computed function. The contextual constraints
play a central role in this top-down approach.
The other role of the contextual constraints is explanatory (we note that on p. 22
Marr refers to CL as a “level of explanation”). This explanatory role of constraints is tied
to the why aspect: The contextual constraints play the role of answering the question of
why the computed mathematical function is appropriate for the given information-
processing visual task. Thus consider again the correspondence problem. After char-
acterizing the what (what is being computed is the UC-pairing function), Marr asks
why the UC-pairing function—and not another pairing function—provides matching.
As Marr puts it: “The real question to ask is Why might something like that work? For
the plain fact is that if we look just at the pair of images, there is no reason whatever
why L1 should not match R3; L2 match R1, and even L3 match R1” (1982, p. 112; emphasis
original). Marr is asking why computing UC-pairing, and not any of the other functions, should provide a solution for the correspondence problem. The algorithms and the neural mechanisms that underlie this function cannot answer this question. These mechanisms specify how the system computes the UC-function, but they do not explain why computing this function, and not another function, leads to matching.
Marr explains why the UC-pairing function leads to matching by relating the condi-
tions of uniqueness and continuity to facts about the physical world we happen to live
in. Computing a UC-pairing function leads to matching because the UC-pairing func-
tion corresponds to spatial localization and the cohesiveness of matter in our world.
Imagine a world consisting of objects with spiky surfaces that give rise to a reflection
function that is almost never smooth. This will mean that the disparity between the
images changes almost everywhere. In our example (Figure 9.1), the disparity between
L1 and R1 is very different from the disparity between L2 and R2, and so on. In this
world it might be impossible to find a function that satisfies continuity, and even if there is such a function there is no reason to assume that computing it will lead to matching. Had we lived in such a world, then computing this function would not lead to matching, but, if anything, to something else. Computing a UC-pairing function is
appropriate for matching in our case due to certain contingent facts about the physical
environment in which we are embedded.
The methodological and explanatory roles of the constraints are related, of course.
On the one hand, the contextual constraints explain, at least partly, the fact that the
visual system computes the UC-function and not another function. On the other hand,
Marr’s methodological moral is that we can deploy these constraints in order to dis-
cover that the computed function is one satisfying the conditions of uniqueness and
continuity.

4.  Insights from Marr's CL for Mechanistic Explanation

Having articulated our account of Marr’s CL level that sharply distinguishes it from the
algorithmic and implementational levels and takes seriously his construal of it as involving
both what and why aspects, we can return to mechanistic explanation. As we discussed
above, Piccinini and Craver treated CL as offering a mechanism sketch. On our con-
strual, CL is not providing a sketch of a mechanism but something quite different—it is
characterizing the phenomenon for which a mechanism is sought as explanation.
There is an important role for mechanism sketches in developing mechanistic
explanations, but insofar as the sketch identifies operations in the mechanism it is an
account at Marr’s algorithmic level and insofar as it identifies these operations with
component parts of the mechanism, it is at the implementational level. With respect to
the mechanism, CL only specifies the task the mechanism performs and offers no sug-
gestions as to how it does it. Thus, it characterizes the phenomenon without trying to
explain it mechanistically (although, as we have noted, it does figure in a different type
of explanation, that concerned with why the mechanism is appropriate for the task).
Egan is correct to draw out the fact that CL offers mathematical characterizations of
the task the mechanism is to perform. This is a crucial aspect of the way phenomena
are delineated in scientific inquiries. If they weren’t delineated mathematically, the
quest for mechanistic explanation would often be unmanageably underdetermined.
Many mechanisms can perform in qualitatively the same way, but quantitatively their
performance differs. The challenge is to explain the actual phenomenon characterized
quantitatively. This quantitative detail is also important to researchers as it provides a
major tool for evaluating proposed mechanistic explanations. Of course the mechanis-
tic explanation must also appeal to parts and operations that are known to constitute
the mechanism. Yet, even when this condition is met, researchers find it important to
assess whether the proposed mechanism could account for the quantitative details of
the phenomenon. This is where computational modeling plays an increasingly central
role in neuroscience and biology more generally (Bechtel, 2011).
As Egan correctly observes, the mathematical, quantitative characterization (what
she calls a mathematical or a formal theory) plays an explanatory role with respect to
the pre-theoretic, intentionally characterized, explananda phenomenon. What Egan
disregards, however, is that the mathematical theory has this explanatory role only if
we embed the mechanism in the physical environment of the perceiving individual.
The mathematical operation ∇2G*I is explanatory with respect to the phenomenon of
edge detection only when we relate this mathematical function to the relation that holds between magnitudes existing in the world. As Egan notes, correctly again (!), the informational content of the cells in the retina and in the primary visual cortex has no explanatory role in CL. It is, perhaps, only a gloss on the mathematical character-
ization that the computational theory provides. But this does not entail that there are
no other contextual features that play an explanatory role. Indeed, according to Marr
the relevant contextual features are physical ("contextual") constraints that indicate that
intensity changes in the image result from “surface discontinuities or from reflectance
or illumination boundaries” (Marr & Hildreth, 1980, p. 187). The upshot is that the
formal theory constitutes only one part of the explanation (associated with the what).
“The other half of this level of explanation” (1982, p. 22), as Marr put it, has to do with
the why, namely with why the visual system performs the mathematical operation
∇2G*I, and not, say, exponentiation or factorization when detecting edges.
What makes CL explanatory with respect to edge detection—so that the what and
the why conspire to provide an explanation—is an intriguing question. One proposal is
that the visual system works much like scientific models (for a survey, see Frigg &
Hartmann, 2017). It models its environment by preserving certain relations in the
environment. CL describes this modeling relation, explaining its role in the visual
task.8 This is shown in Figure 9.2 in which the top portion identifies the relation in the
world and the bottom portion the operations occurring on representations in the vis-
ual system. The dotted arrows indicate that the representations in the brain stand in for
features in the world itself. The detection of visual edges (say, zero-crossing segments)
mirrors a pertinent relation in the visual field in the sense that there is an isomorphism
(or some other morphism) between this visual process and the visual field. This mor-
phism is exemplified by the (alleged) fact that the visual system and the visual field
have a shared mathematical description (or structure). On the one hand, the visual
system computes the zero-crossings of second-derivative operations (over the retinal
pixels) to detect edges; this is shown in the bottom span of Figure 9.2. On the other
hand, the reflection function in the visual field changes sharply along physical edges
such as object boundaries. These changes can be described in terms of extreme points
of first-derivatives or zero-crossing of second derivatives.

8  This modeling idea is discussed in some detail by Shagrir (2010a).
Figure 9.2  Edge detection. Early visual processes (bottom span) detect “visual edges” in the
retinal image by computing the zero-crossings of ∇2G*I (see note 3); the second-derivative
operations ∇2G*I are performed by the ganglion and LGN cells. The intensity values encode
(dashed arrow) “light intensities” of the visual field that combine different factors such as the
reflectance of the visible surfaces. The visual edges (e.g., segments of co-located zero-crossings)
encode physical edges such as object boundaries.

Figure 9.2 makes clear how the CL accounts for edge detection: It is important to
compute the function ∇2G*I because that is the relation that holds between magnitudes
existing in the world: a mechanism that computes it will identify edges. This match
between the task and the mechanism shows why the mechanism succeeds. The what
aspect provides a description of the mathematical function that is being computed.
The why aspect employs the contextual constraints in order to show how this function
matches with the environment.
There are debates about whether the matching relation in models is similarity,
isomorphism, partial isomorphism, or homomorphism.9 And, of course, not all mech-
anisms are perfectly adapted to their environments. There is a long tradition of show-
ing that cognitive systems with limited resources employ heuristics that succeed
well enough in the actual world, but which can be expected to fail under specifiable
conditions (Simon, 1996). Our proposal, though, works for heuristics as well as opti-
mal procedures—heuristics work as well as they do because they capture real relations
in the world (between cues and outcomes). The why-aspect of CL accounts does not
require showing that the computation performed is optimal, only that it is grounded
in the world in which the visual system is operating.10
9  Swoyer (1991) talks about isomorphism, but others about partial isomorphism (French & Ladyman, 1999; Da Costa & French, 2003; Bueno & French, 2011), homomorphism (Bartels, 2006), and similarity (Giere, 2004; Weisberg, 2013).
10  Edge detection is by no means an isolated example of this kind of CL explanation. Shagrir (2010b) discusses the case of stereo vision. Another example is Ullman's (1979) structure-from-motion algorithm in which the 3D structure and motion of objects is inferred from the 2D transformations of their projected images. Here the mathematical function computed reflects spatial relations in the target, assuming the constraint of rigidity.

What are the relations between CL explanations and mechanistic explanations? On the one hand, it is important to recognize that the task to be performed is conceptually independent of any mechanism that performs it, including the particular inputs the organism receives or the specific outputs it produces in solving it. While Marr viewed the algorithm he took to be operative in our brains as computing ∇2G*I, computing
that function would still be a task for a perceptual system even if our brains failed to do
so. By actually computing it, our brains solve a problem that is specified by the relation
between light intensities and physical edges occurring in the world, as it is clearly
shown in Figure 9.2.
On the other hand, Marr does not offer CL as an alternative explanation to mech-
anistic explanations, but as a complementary one. Mechanistic explanations describe
the mechanisms by means of which the nervous system changes from one neural state
to another. It describes, for example, how certain activity in the photoreceptors (that
represent light intensities) leads, through the activity of the retinal ganglion cells, to the
activation of cells in V1 (that are sensitive to oriented lines). This mechanistic descrip-
tion is surely an explanation at the level of neural circuitry. But it does not by itself
explain the information-processing task of edge detection (this is perhaps what Marr
means when he says: “The key observation is that neurophysiology and psychophysics
have as their business to describe the behavior of cells or of subjects but not to explain
such behavior” (1982, p. 15)). This mechanistic description does not explain why this
particular neural mechanism has to do with the detection of edges and not, say, with
the detection of color. The CL provides the answer to this question: The mechanism
implements a certain mathematical function (of the zero-crossings of ∇2G*I) and this
function matches the relations in the world, e.g., sharp changes in light intensities that
typically occur along object boundaries. When the CL explanation is in place, the
mechanistic—algorithmic and implementational—descriptions explain how exactly
the visual system computes the mathematical function.
While one might accept our contention that Marr’s CL accounts require turning
to the world to address both the what and why aspects, one might still question
whether there is a similar need to look outside a mechanism to its context in
delineating the phenomenon it explains. Isn’t it sufficient to show that the targeted
mechanism exhibits regular behavior? We offer two responses to this question. First,
as we noted at the beginning, not all regularities that can be stated mathematically
are appropriate targets for explanation. This applies both to naturally occurring ones
and to ones that can be detected experimentally. Looking to the task that needs to be
performed by the mechanism given the structure of the world provides a way of iden-
tifying which regularities require mechanistic explanation. Contrasting examples
illustrate this. Although the heat generated by animals can be quantified, the hun-
dred-year effort to explain animal heat terminated quickly when around 1930 it was
recognized that heat was a waste product, not a source of energy that animals could
use to perform work. The identification that instead, adenosine triphosphate (ATP)
was the molecule in which energy released through metabolism was stored, resulted
in extensive research to explain how, for example, oxidative metabolism could result
in synthesis of three ATPs. As these examples make clear, looking to the environ-
ment is important in mechanistic research in general, but it is especially relevant in
the context of information-processing mechanisms where the task being performed
is an important guide to what operations carry information needed for the mech-
anism to perform its task.
Second, it is the world that both sets the task and determines the resources available
to the mechanism in performing the task. Part of the investigatory strategy researchers
employ in developing mechanistic explanations is to identify these resources and their
utilization within the mechanism. Mechanisms are typically not closed systems but
consist of parts and operations that interact with their environment in generating
phenomena. The visual system is an example. Although Marr and many other vision
researchers focused only on the steps in processing stimuli and not the activities of the
organism that determine what stimuli impact its retina, perceivers are often active—
they move their eyes, heads, or whole bodies in the course of seeing. As they do so, the
projections onto their retina change. Moreover, some of these movements are directed
by the visual system as it actively samples the visual array to procure information
(Ballard, 1991). Since many mechanisms actively engage their environment as they
operate, it is important to capture these interactions in characterizing the phenomenon
itself. Otherwise, researchers end up trying to explain a phenomenon that does not
actually occur and may require resources that are not available in the mechanism. This
concern is what lay behind calls for ecological validity in psychology research by
Brunswik (1943), Gibson (1979), Neisser (1976), and others. (We discuss Gibson and
Marr’s response to Gibson further in the following section.)
In this section we have focused on two important insights that can be gleaned for
the task of delineating phenomena for mechanistic explanation. The first is that
phenomena are typically characterized not just qualitatively, as they typically are in
the accounts of the new mechanistic philosophers of science, but also in quantitative
or formal terms (for recent exceptions, see Bechtel, 2013; Bechtel & Abrahamsen, 2010;
Brigandt, 2013; Kaplan, 2011). In describing the task of edge detection in his CL
account, Marr identified the mathematical function that needed to be computed.
Second, in delineating phenomena researchers often, as Marr did at the CL level,
focus outwards on the context in which the mechanism operates. Among other
things, this allows researchers to identify the resources available to the mechanism in
producing the phenomenon. We will return to show how Marr’s account generalizes
to other phenomena beyond vision in the concluding section, but first point to two
limitations of the account Marr offered.

5.  Delineating Phenomena: Going beyond Marr's Account of CL

As much as Marr emphasized the importance of developing an analysis of CL that
showed both quantitative rigor and addressed the context in the world in which the
visual mechanism operated, it is noteworthy that he did not develop two other aspects
of the CL account that are critical in delineating phenomena—that empirical, even
experimental, research is required to identify the quantitative relations that constitute the phenomena and that characterizations of phenomena are often revised in the
course of developing mechanistic explanations of them.

5.1  Empirical inquiry to delineate phenomena


Despite the attention Marr paid to CL, he pursued CL accounts with an intuitive,
almost armchair approach. Poggio (1981), in articulating and defending Marr’s
approach and bringing out clearly how CL analysis is directed at the world outside the
visual systems, nonetheless also claims: “No high-level specific preunderstanding is
required, but only general knowledge about the physical world. An example of such
general knowledge is that the world is constituted mainly of solid, non-deformable
objects of which only one can occupy a given point in space and time” (p. 259). He also
notes “It is probably fair to say that most physiologists and students of psychophysics
have often approached a specific problem in visual perception with their personal
‘computational’ prejudices about the goal of the system and why it does what it does.”
This almost trivializes the importance of CL analysis. But we contend that Marr did, or
should have, intended something more radical.
We take a cue as to what CL analysis ought to involve from Gibson, of whom Marr
said: “In perception, perhaps the nearest anyone came to the level of computational
theory was Gibson” (1982, p. 29). The basis for this comment is that Gibson more than
most psychologists took seriously the importance of the environment in which per-
ception occurs. Although he adopted the biological term ecological, his principal focus
was on the physical features of the environment (specifically, those physical features
about which information is available in the light). Much of Marr’s discussion of Gibson
is critical, focusing on Gibson’s repudiation of representations and internal processing
(Gibson claimed that vision was direct—we directly see objects in the world by picking
up information available in the light). At the same period as Marr was writing Vision,
Ullman published a detailed criticism of Gibson’s account of direct perception
(Ullman, 1980). We focus, however, on why Marr saw Gibson as the person who came
closest to offering a computational theory.11
11  Gibson would have bristled at being associated with anything called a computational theory and even more at Marr's advocacy of analyzing vision in terms of algorithms. It is possible, however, to view Gibson's arguments for direct perception and his eschewal of internal processing as methodological—as a strategy for focusing on the richness of what he called "information in the light" that was neglected by most psychologists, who jumped too quickly to address how organisms process stimuli that they have designed to probe the visual system, with less attention to how such stimuli reflect the inputs the visual system typically confronts. In this he is allied with Marr's contention of the importance of CL analysis.

What an ecological approach to perception meant for Gibson and many who have subsequently pursued his project is that psychologists should study the perceiving organism in the context of the world in which it functions, considering both what the organism uses vision for and the resources the world provides for performing those tasks. Both require empirical inquiry. Studying perceiving organisms reveals
that they use vision to accomplish biological needs—detect resources and threats in
their environments and safely navigate through them. Often these tasks can be per-
formed by picking up on information in the environment without having to build up
a complete representation of the world (by converting 2D representations into 3D
representations).12

12  Such inquiry has been pursued subsequently by, for example, Turvey (1992).

Gibson referred to what an organism picks up through vision or other senses as
affordances: “The affordances of the environment are what it offers the animal, what it
provides or furnishes, either for good or ill” (Gibson, 1979). In particular, they are pos-
sibilities for action in the world that are relative to the organism and its capacities for
acting. An example he used is that a surface that is nearly horizontal and flat, suffi-
ciently extended, and rigid relative to the weight of the animal, affords support that can
be walked or run on. Moreover, he stressed that these potentials exist regardless of
whether the organism has learned to pick up on them (Gibson was a pioneer in treat-
ing perception as a skill to be learned; see Gibson & Gibson, 1955). This topic became
the focus of Eleanor Gibson’s research (1969). When the objects of vision are other
agents, vision captures emotional information and presents others as entities to engage,
fight, flee from, etc. Gibson maintained that these affordances were not in the organism
but in the world, although they might only be relevant to organisms with particular
capacities for action and so “picked up” by them.
In identifying affordances the perceiver is typically not passive but moves about in
the world, and even when not moving physically, moves its eyes to focus on different
parts of the visual field. As we noted in Section 4, once one recognizes that perceivers
move to acquire information, it is not sufficient to characterize the input they use when
functioning in their environment in terms of retinal images. Rather, it is better to focus
on what Gibson termed the “optic array”—the pattern found in the light that changes
as either the perceiver changes vantage points or objects in the world move. Among
other things, the optic array provides information as to how the perceiver and per-
ceived objects are situated vis-à-vis each other.
Gibson initiated a research program that has provided substantial information
about the information in the optic array. Lee and Reddish (1981), for example, showed
that a parameter τ, easily calculated from the rate of expansion in the optic array,
specifies time to impact even for accelerating agents such as gannets diving into the
ocean. By experimentally manipulating the size of doorways, Warren and Whang
(1987) showed that the optic array carried information about whether a person could
simply walk through or whether they would have to turn sideways. An important
offshoot of Gibson’s research are investigations such as those of Findlay and Gilchrist
(2003) into how agents determine appropriate eye movements (saccades) to secure
useful information.
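For readers unfamiliar with τ, the quantity Lee identified can be stated compactly (this is the standard small-angle, constant-speed simplification; Lee and Reddish's analysis of accelerating divers builds on it): if an approaching surface subtends visual angle θ(t) in the optic array, then

\[
\tau(t) \;=\; \frac{\theta(t)}{\dot{\theta}(t)}
\]

equals the remaining time to contact, so time to impact is available directly from the rate of optical expansion, without knowledge of the object's size, distance, or speed.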
From the perspective of attempts to explain the information processing involved in vision, these inquiries are all CL inquiries. But, contrary to Poggio, they reveal
information about how the visual system is situated in the body and world that was
not part of general knowledge but stemmed from empirical investigations. Although
we have not emphasized it, the results of these inquiries into the world in which
vision operates can be stated in a precise, quantitative manner.

5.2  Reconstituting computational-level accounts


A standard picture of scientific inquiry is that researchers begin with a problem to be
solved such as a phenomenon to be explained and their efforts are then directed at
solving the problem or explaining the phenomenon. But as we are all aware, attempts
at solving a problem often lead to recognition that the problem was somewhat differ-
ent from what it was initially taken to be. Likewise, efforts at explaining a phenomenon
by studying the mechanism can lead scientists to recognize that the phenomenon is
different than they took it to be (Craver, 2007, p. 261). One of the most important
developments in the analysis of vision since Marr has been the discovery that there
are two streams of visual processing beyond V1: the ventral stream projects to areas in
the medial temporal lobe while the dorsal stream projects to areas in the parietal lobe.
In their paper identifying these pathways, Mishkin et al. (1983) characterized them as being involved in determining, respectively, the identity of an object and its location.
Subsequently, Milner and Goodale (1995) offered evidence to support the claim that
the dorsal stream serves to identify possibilities for action. These two streams, how-
ever, are not fully independent as there are connections at several points between
them (van Essen & Gallant, 1994) and, as Findlay and Gilchrist (2003, chapter 1)
discuss, areas such as the frontal eye fields, critical in regulating saccades, receive
inputs from both. These discoveries revealed that there are at least two components
of the phenomenon of vision that were not differentiated prior to research on the
responsible mechanism.
Even the characterization of the object-recognition process on which Marr focused
has been significantly revised in recent years. Although the fact that there are at least as many, and likely many more, recurrent as feed-forward projections through cortex has been known since the pioneering research of Lorente de Nó (1938), there was little
understanding of what function these might serve. References to top-down processing
were frequent, especially in cognitive science, during the period in which Marr was
working, but he was highly skeptical of them since they seemed incompatible with the
fact that we often see what we don’t expect to see. But evidence of the prevalence of
recurrent activity in the brain has continued to grow and recently a number of
researchers have developed accounts that accommodate it (Dayan et al., 1995; Rao &
Ballard, 1999; Llinás, 2001; Hawkins & Blakeslee, 2004; Hohwy et al., 2008; Huang &
Rao, 2011; Clark, 2013). They have recast the phenomenon of vision as starting with
the brain predicting what it will next encounter through its senses and only engaging
in further processing of input information when it contravenes what it predicted.
Through the combination of empirical and conceptual research, the phenomenon of
vision on which Marr focused is being reconstituted.
Marr was right to emphasize both the what and why elements of CL, but he did not
go far enough in exploring how these are to be identified. Empirical investigations con-
ducted at the point at which the mechanism engages its environment are required to
determine what are the stimuli to which the perceiver is responding and, although we
have not addressed it, the uses to which the perceiver puts the information. Moreover,
the CL account is not final when investigation of the mechanism begins but often must be revised in light of what is discovered about the mechanism itself.

6. Conclusion
Our goal in this chapter has been to develop a characterization of CL that is more
adequate to Marr’s insistence that it involves both a what and a why aspect than extant
interpretations. The what aspect requires developing a mathematical description of the
task for vision. The why aspect forces researchers to look to the structures in the world
that the organism engages through its visual system. It shows that the function com-
puted by the visual system is effective because it matches a mathematical relation that
exists in the world (e.g., between light intensities and physical edges). We argued, how-
ever, that Marr did not go far enough in recognizing either that empirical inquiry such
as that which Gibson pursued is often required to identify the task confronted by the
visual system or that the characterization of the task must often be revised as research
on the mechanism proceeds. The CL analysis, so construed, identifies the phenomena
of vision—the visual system processes information provided by light so as to compute
functions that correspond to those realized in the physical world, thereby enabling
organisms to perform their activities.
Following Marr, we have focused on the visual system and thus discussed CL ana-
lyses of visual information available to organisms. But as we indicated at the outset,
this perspective can be extended to other brain systems. The most straightforward
extensions are to other sensory systems and motor systems that compute functions
that relate directly to structures in the environment. Motor systems must compute
commands that enable the body to operate in the environment, including changes in
the environment that result from the execution of the motor processes. It is by looking
to the environment that researchers can identify the function that the motor system
must compute. A nice example involves the oculomotor system that controls eye
movements. One of its tasks (performed by the vestibulo-ocular reflex) is keeping the
visual world stable on the retina when the head is moving. Experimental studies show
that the system converts transient eye-velocity-encoded inputs into persistent eye-
position-encoded outputs. It was thus concluded that the system network is a neural
integrator.13 In this case the researchers infer, from the contextual cue (the “contextual constraint”) that the relation between the encoded velocity and the encoded position is one of integration, to the claim that mathematical integration is what the system computes (Shagrir, 2012).
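Stated schematically (the symbols are ours, chosen only for illustration), the claim is that the output of the network encodes something like

\[ E(t) \;=\; E(t_0) + \int_{t_0}^{t} v(s)\, ds , \]

where v is the transient eye-velocity signal carried by the inputs and E is the persistent eye-position signal carried by the outputs. It is because the output stands to the input as an integral stands to its integrand that the network is described as a neural integrator.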

13
  It is hypothesized that the neural integrator also serves for other eye-movement operations such as
saccadic and pursuit movements (Robinson, 1989; Goldman et al., 2002).

The challenge in characterizing CL analysis is somewhat greater for more central cognitive activities such as episodic memory. Following Ebbinghaus, memory has often
been studied using laboratory tasks such as learning lists of words that are relatively
far removed from those humans typically confront. Inspired by Gibson, researchers
such as Neisser (1982; Neisser & Winograd, 1988) investigated real stimuli and real tasks
(e.g., providing testimony in legal proceedings). One of the upshots of this endeavor
was to demonstrate how reconstructive memory is (a claim that has been pursued by
other researchers as well; see, e.g., Schacter, 1996). What makes it reconstructive is that,
in the process of recall, pieces of information that are retrieved are organized together
in ways that are at least partly responsive to the context in which retrieval is required.
This points to the retrieval context as partly shaping the task of memory recall. It is
much more challenging to characterize memory retrieval in terms of a mathematical
function, and this may be one of the reasons why research on the mechanisms of
episodic memory is less advanced than the research on the mechanisms of vision.
The information-processing perspective applies more generally than just to brain
function. Biological systems often employ systems that control other systems. At the
cellular level, this is carried out chemically through the cell signaling system. In single-
celled organisms, which are the most prevalent life forms on the planet, molecular sys-
tems pick up information about the internal state or conditions in the environment of
the cell and regulate such activities as the synthesis of new proteins. In characterizing
these phenomena, both the what and why aspects of Marr’s CL level are appropriate:
researchers both specify the relationship between the signal picked up and the response
generated mathematically, and relate this to conditions external to the control system.
This outward focus is important, as it is in vision, to specifying which mathematical
relations constitute the phenomena to be explained and the resources available to the
system in generating the phenomena.
We have limited our discussion in this chapter to information-processing contexts.
But we think that Marr’s account of CL provides insights into the tasks confronted in
delineating phenomena and can help fill a lacuna in the accounts the new mechanists
in philosophy of science have offered of the task of delineating phenomena. For
example, mechanistic explanations are also advanced for phenomena such as protein
synthesis and the generation of action potentials that do not themselves serve to pro-
cess information. Developing detailed accounts of the phenomena and the contexts
in which they are performed is also vitally important in those endeavors. Hence,
some of the lessons derived from the CL analysis may extend to these explanations.
However, since these explanations do not involve processing information, the
distinctive why feature of CL analysis which we have emphasized does not apply.
Our contention is that Marr’s valuable insight is that with information-processing
mechanisms, the CL level plays a crucial role in identifying the relation in the world
that the information-processing system must compute in order to succeed. Moreover,
we have argued that without a CL analysis, the quest for mechanism would be
impaired and a crucial part of the explanation would be unavailable.

Acknowledgments
This joint work emerged from our stay, as members of the Computation and the Brain
group at the Jerusalem Institute of Advanced Studies. We are grateful to the other members of
the group for stimulating discussions: Adele Abrahamsen, Frances Egan, Hilla Jacobson, Arnon
Levy, Robert Matthews, and Gualtiero Piccinini. Shagrir's research was supported by the Israel
Science Foundation grant 1509/11.

References
Ballard, D. H. (1991). Animate vision. Artificial Intelligence, 48, 57–86.
Bartels, A. (2006). Defending the structural concept of representation. Theoria-Revista de
Teoria Historia y Fundamentos de la Ciencia, 21, 7–19.
Beadle, G. W. & Tatum, E. L. (1941). Genetic control of biochemical reactions in Neurospora.
Proceedings of the National Academy of Sciences of the USA, 27, 499–506.
Bechtel, W. (2011). Mechanism and biological explanation. Philosophy of Science, 78, 533–57.
Bechtel, W. (2013). Understanding biological mechanisms: Using illustrations from circadian
rhythm research. In K. Kampourakis (Ed.), The philosophy of biology (Vol. 1, pp. 487–510).
Dordrecht: Springer.
Bechtel, W. & Abrahamsen, A. (2005). Explanation: A mechanist alternative. Studies in History
and Philosophy of Biological and Biomedical Sciences, 36, 421–41.
Bechtel, W. & Abrahamsen, A. (2010). Dynamic mechanistic explanation: Computational mod-
eling of circadian rhythms as an exemplar for cognitive science. Studies in History and
Philosophy of Science Part A, 41, 321–33.
Bechtel, W. & Richardson, R. C. (2010 [1993]). Discovering complexity: Decomposition and local-
ization as strategies in scientific research. Cambridge, MA: MIT Press. 1993 edition published
by Princeton University Press.
Bermúdez, J. L. (2005). Philosophy of psychology: A contemporary introduction. New York: Routledge.
Bogen, J. & Woodward, J. (1988). Saving the phenomena. Philosophical Review, 97, 303–52.
Brigandt, I. (2013). Systems biology and the integration of mechanistic explanation and
mathematical explanation. Studies in History and Philosophy of Biological and Biomedical Sciences, 44, 477–92.
Brunswik, E. (1943). Organismic achievement and environmental probability. Psychological
Review, 50, 255–72.
Bueno, O. & French, S. (2011). How theories represent. British Journal for the Philosophy of
Science, 62, 857–94.
Chomsky, N. (1995). Language and nature. Mind, 104, 1–61.
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive
science. Behavioral and Brain Sciences, 36, 181–204.
Craver, C. F. (2007). Explaining the brain: Mechanisms and the mosaic unity of neuroscience.
New York: Oxford University Press.
Da Costa, N. C. A. & French, S. (2003). Science and partial truth: A unitary approach to models
and scientific reasoning. Oxford: Oxford University Press.
Dayan, P., Hinton, G. E., Neal, R. M., & Zemel, R. S. (1995). The Helmholtz machine. Neural
Computation, 7, 889–904.
Dennett, D. C. (1994). Cognitive science as reverse engineering: Several meanings of “top-down” and “bottom-up.” In D. Prawitz, B. Skyrms, & D. Westerstahl (Eds), Logic, methodology and
philosophy of science IX. Amsterdam: Elsevier.
Egan, F. (1991). Must psychology be individualistic? Philosophical Review, 100, 179–203.
Egan, F. (1995). Computation and content. Philosophical Review, 104, 181–203.
Egan, F. (2010). Computational models: A modest role for content. Studies in History and
Philosophy of Science, 41, 253–9.
Findlay, J. M. & Gilchrist, I. D. (2003). Active vision: The psychology of looking and seeing.
Oxford: Oxford University Press.
French, S. & Ladyman, J. (1999). Reinflating the semantic approach. International Studies in the
Philosophy of Science, 13, 103–21.
Frigg, R. & Hartmann, S. (2017). Models in science. In E. N. Zalta (Ed.), The Stanford Encyclopedia
of philosophy: <https://plato.stanford.edu/archives/spr2017/entries/models-science/>.
Gibson, E. J. (1969). Principles of perceptual learning and development. New York: Appleton-
Century-Crofts.
Gibson, J. J. (1979). The ecological approach to visual perception. Boston, MA: Houghton Mifflin.
Gibson, J. J. & Gibson, E. J. (1955). Perceptual learning: Differentiation or enrichment?
Psychological Review, 62, 32–41.
Giere, R. N. (2004). How models are used to represent reality. Philosophy of Science, 71, 742–52.
Glennan, S. (2002). Rethinking mechanistic explanation. Philosophy of Science, 69, S342–53.
Goldman, M. S., Kaneko, C. R., Major, G., Aksay, E., Tank, D. W., & Seung, H. S. (2002). Linear
regression of eye velocity on eye position and head velocity suggests a common oculomotor
neural integrator. Journal of Neurophysiology, 88, 659–65.
Griffiths, P. E. (2001). Genetic information: A metaphor in search of a theory. Philosophy of
Science, 68, 394–412.
Hawkins, J. & Blakeslee, S. (2004). On intelligence. New York: Times Books.
Hildreth, E. C. & Ullman, S. (1989). The computational study of vision. In M. I. Posner (Ed.),
Foundations of cognitive science (pp. 581–630). Cambridge, MA: MIT Press.
Hohwy, J., Roepstorff, A., & Friston, K. (2008). Predictive coding explains binocular rivalry:
An epistemological review. Cognition, 108, 687–701.
Horst, S. W. (2009). The computational theory of mind. In E. N. Zalta (Ed.), The Stanford
Encyclopedia of Philosophy: <https://plato.stanford.edu/archives/win2009/entries/
computational-mind/>.
Huang, Y. & Rao, R. P. N. (2011). Predictive coding. Wiley Interdisciplinary Reviews: Cognitive
Science, 2, 580–93.
Kaplan, D. M. (2011). Explanation and description in computational neuroscience. Synthese,
183, 339–73.
Kauffman, S. A. (1971). Articulation of parts explanation in biology and the rational search for
them. In R. C. Buck & R. S. Cohen (Eds), PSA 1970 (pp. 257–72). Dordrecht: Reidel.
Lee, D. & Reddish, P. E. (1981). Plummeting gannets: A paradigm of ecological optics. Nature,
293, 293–4.
Levy, A. (2011). Information in biology: A fictionalist account. Noûs, 45, 640–57.
Levy, A. & Bechtel, W. (2013). Abstraction and the organization of mechanisms. Philosophy of
Science, 80, 241–61.
Llinás, R. R. (2001). I of the vortex: From neurons to self. Cambridge, MA: MIT Press.
Lorente de Nó, R. (1938). Analysis of the activity of the chains of internuncial neurons. Journal
of Neurophysiology, 1, 207–44.
Machamer, P., Darden, L., & Craver, C. F. (2000). Thinking about mechanisms. Philosophy of
Science, 67, 1–25.
Marr, D. C. (1977). Artificial intelligence: A personal view. Artificial Intelligence, 9, 37–48.
Marr, D. C. (1982). Vision: A computational investigation into the human representation and processing of visual information. San Francisco: Freeman.
Marr, D. C. & Hildreth, E. (1980). Theory of edge detection. Proceedings of the Royal Society of
London. Series B. Biological Sciences, 207, 187–217.
Marr, D. C. & Poggio, T. (1976). Cooperative computation of stereo disparity. Science, 194,
283–7.
Marr, D. C. & Poggio, T. (1979). A computational theory of human stereo vision. Proceedings of
the Royal Society of London. Series B. Biological Sciences, 204, 301–28.
Maynard Smith, J. (2000). The concept of information in biology. Philosophy of Science, 67,
177–94.
Milner, A. D. & Goodale, M. A. (1995). The visual brain in action. Oxford: Oxford University
Press.
Mishkin, M., Ungerleider, L. G., & Macko, K. A. (1983). Object vision and spatial vision: Two
cortical pathways. Trends in Neurosciences, 6, 414–17.
Neisser, U. (1976). Cognition and reality: Principles and implications of cognitive psychology.
San Francisco: W. H. Freeman.
Neisser, U. (1982). Memory observed: Remembering in natural contexts. San Francisco:
W. H. Freeman.
Neisser, U. & Winograd, E. (1988). Remembering reconsidered: Ecological and traditional
approaches to the study of memory. Cambridge: Cambridge University Press.
Newell, A. (1980). Physical symbol systems. Cognitive Science, 4, 135–83.
Piccinini, G. & Craver, C. (2011). Integrating psychology and neuroscience: Functional ana-
lyses as mechanism sketches. Synthese, 183, 283–311.
Poggio, T. (1981). Marr’s computational approach to vision. Trends in Neurosciences, 4, 258–62.
Pylyshyn, Z. W. (1984). Computation and cognition: Toward a foundation for cognitive science.
Cambridge, MA: MIT Press.
Ramsey, W. (2007). Representation reconsidered. Cambridge: Cambridge University Press.
Rao, R. P. N. & Ballard, D. H. (1999). Predictive coding in the visual cortex: A functional
interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2, 79–87.
Robinson, D. A. (1989). Integrating with neurons. Annual Review of Neuroscience, 12, 33–45.
Schacter, D. L. (1996). Searching for memory: The brain, the mind, and the past. New York: Basic
Books.
Shagrir, O. (2001). Content, computation and externalism. Mind, 110, 369–400.
Shagrir, O. (2010a). Brains as analog-model computers. Studies in History and Philosophy of
Science Part A, 41, 271–9.
Shagrir, O. (2010b). Marr on computational-level theories. Philosophy of Science, 77, 477–500.
Shagrir, O. (2012). Structural representations and the brain. British Journal for the Philosophy of
Science, 63, 519–45.
Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal,
27, 379–423, 623–56.
Shapiro, J. A. (2011). Evolution: A view from the 21st century. Upper Saddle River, NJ: FT Press
Science.
Simon, H. A. (1996). The sciences of the artificial (3rd Ed.). Cambridge, MA: MIT Press.
Sterelny, K. (1990). The representational theory of mind: An introduction. Oxford: B. Blackwell.
Swoyer, C. (1991). Structural representation and surrogative reasoning. Synthese, 87, 449–508.
Turvey, M. T. (1992). Ecological foundations of cognition: Invariants of perception and action.
In H. L. Pick, P. van den Broek, & D. C. Knill (Eds), Cognition: Conceptual and methodological
issues (pp. 85–117). Washington, DC: American Psychological Association.
Ullman, S. (1979). The interpretation of visual motion. Cambridge, MA: MIT Press.
Ullman, S. (1980). Against direct perception. Behavioral and Brain Sciences, 3, 373–415.
van Essen, D. C. & Gallant, J. L. (1994). Neural mechanisms of form and motion processing in
the primate visual system. Neuron, 13, 1–10.
Warren, W. H., Jr. & Whang, S. (1987). Visual guidance of walking through apertures:
Body-scaled information for affordances. Journal of Experimental Psychology. Human
Perception and Performance, 13, 371–83.
Watson, J. D. & Crick, F. H. C. (1953). Genetical implications of the structure of deoxyribo-
nucleic acid. Nature, 171, 964–7.
Weisberg, M. (2013). Simulation and similarity: Using models to understand the world.
New York: Oxford University Press.

10
Multiple Realization, Autonomy,
and Integration
Kenneth Aizawa

1. Introduction
One source of skepticism regarding the existence of multiple realization in the sciences
stems from the following line of thought. To begin with, there is a conceptual point
that, at the very least, multiple realization requires that there be one realized property
and a diversity of realizer properties. Second, there is a metaphysical presumption that
differences in realizer properties must make for differences in realized properties.
Third, there is a methodological strategy—what we might call an “eliminate and split
strategy”—according to which scientific theorizing will take into consideration the
differences among realizer properties in such a way that any apparently multiply realized
property will be eliminated in favor of two (or more) uniquely realized properties.1
This principle, in effect, proposes that the taxonomy of realized properties will be
driven by the taxonomy of (sets of) realizing properties.
As plausible as this strategy is to some philosophers, actual scientific theorizing can
be more complicated.2 Scientific theorizing in vision science reveals three strategies that
go beyond what some skeptics have imagined. First, and most simply, vision scientists
sometimes postulate properties within which they will admit individual differences
that are explained by differences in realizers. Second, sometimes vision scientists propose
that two sets of realizers differ in such a way that the differences between them cancel
each other out. This is what one might call “multiple realization by compensatory

1
  Aizawa and Gillett (2011) label this principle the “eliminate and split strategy.” Carl Craver articulates
a “splitting” principle that is closely related to the idea that scientific theorizing will take into consideration
the differences among realizer properties: “if you find that a single cluster of properties is explained by
more than one mechanism, split the cluster into subset clusters, each of which is explained by a single
mechanism” (Craver, 2009, p. 581). Delete “cluster of ” and treat mechanistic explanation of properties as
invoking realization relations, then you will have the principle invoked above. Craver does not, however,
appear to endorse another component of the foregoing argument, namely, that differences in realizer
properties will always lead to differences in realized properties.
2
  For examples of those at least sympathetic to the “eliminate and split” strategy, see Shagrir (1998) and
Craver (2004).

differences.”3 Third, vision scientists sometimes find that small variations in realizers
induce variations in some realized properties but not others. Let F1 be a member of the
set of properties realizing G and a member of the set of properties realizing H. There
are cases in which small variations in F1 can lead to small variations in G, but not in H.
One might say that small variations in F1 are “orthogonal” to H.4
These conclusions have a clear bearing on the autonomy of psychology. Insofar
as there are realization relations between psychological properties and, say, neuro-
biological properties, this can be used to specify a clear sense in which neurobiological
properties are not completely irrelevant to psychological properties. The neurobiological
is relevant to the psychological because the neurobiological realizes the psychological.
So, realization relations preclude one kind of autonomy of psychology from neurobiology.
Nevertheless, multiple realization reveals another form of autonomy. Insofar as the
scientific taxonomy of realized properties does not derive entirely from the scientific
taxonomy of realizer properties, we find that the taxonomy of the realized properties
is autonomous from the taxonomy of the realizer properties. This kind of autonomy
harks back to the idea that the taxonomy of psychology “cross-cuts” the taxonomy of
neurobiology.5
These conclusions also bear on the integration of psychology with neurobiology.
The fact that psychological properties are realized by biological properties reveals that
psychological properties are somehow integrated with biological properties. We should,
therefore, ask how they are so integrated in actual scientific theorizing. The “eliminate
and split strategy” proposes an answer: the psychological is integrated with the neuro-
biological by having the properties of the former track the properties of the latter. The
three strategies described above and revealed in vision science, however, indicate that
there are other answers. There is more than one way in which differences in realizer
properties are “parceled out” among realized properties. Sometimes differences in
realizer properties are lumped indifferently in a single realized property. Sometimes
sets of realizers differ in such a way that there are compensatory differences leading to
the realization of what psychology takes to be one and the same property. Sometimes
differences among lower-level realizers make a difference to one realized property but
not another.
What follows is primarily a fuller articulation of the preceding ideas. This articu-
lation will begin, in Section 2, with a review of the Dimensioned view of realization
and a complementary theory of multiple realization. The review will be brief, leaving
aside details and defense of the views, since they have been described and defended in
greater detail in other publications.6 Sections 3–5 will illustrate each of the three
ways in which vision scientists have integrated differences in realizer properties into
psychological properties. Section 6 considers another type of argument against multiple

3
  For further discussion, see Aizawa (2013).
4
  For further discussion, see Aizawa & Gillett (2011).
5
  See Fodor (1974).
6
  See, for example, Gillett (2002, 2003) and Aizawa & Gillett (2009a, 2009b, 2011).

realization. This argument appeals to what one might call “conjunctive properties,”
putative properties that we typically describe by a conjunction of predicates. The argu-
ment is based on the idea that “conjunctive properties” are less likely to be multiply
realized than are any of the conjunct properties. Section 7 will relate the senses of
“autonomy” and “integration” being used in Sections 3–5 to some of the claims found
in Fodor (1974), and then show how this can be used to avoid “Keeley’s Dilemma”
(Keeley, 2000).

2.  The Dimensioned View of Realization and a Theory of Multiple Realization

The Dimensioned view of realization maintains that realization is a kind of compositional
determination relation wherein properties at one level determine properties at a
higher level (see, for example, Gillett, 2002, 2003).7 More technically, it proposes that:
Property/relation instance(s) F1–Fn realize an instance of a property G, in an
individual s under conditions $, if and only if, under $, F1–Fn together contribute
powers, to s or s’s part(s)/constituent(s), in virtue of which s has powers that are
individuative of an instance of G, but not vice versa.
This can be a daunting formulation, but the core idea is simple: individuals have
properties in virtue of the properties of their parts. Take a simple case. A molecule of
hydrogen fluoride (HF) has an asymmetric charge distribution, a dipole moment,
of 1.82 debye (D) (Nelson, Lide, & Maryott, 1967, p. 11). It has this property in virtue of
properties of the hydrogen and fluoride atoms (their electronegativities) and the way
in which those atoms are bonded together.
This is a theory of realization, but we also need a theory of multiple realization. Roughly
speaking, multiple realization occurs when one set of property instances F1–Fn realizes
an instance of G and another set of property instances F*1–F*m realizes an instance of G
and the properties in the two sets are not identical. One slight refinement is in order,
however, to take account of the fact that a neuronal realization and a biochemical
realization of pain would not constitute a case of multiple realization. To refine the
account, one can add that the two distinct realizers that multiply realize G must be at
the same level.8 The official formulation of multiple realization is, therefore, that
A property G is multiply realized if and only if (i) under condition $, an individual s
has an instance of property G in virtue of the powers contributed by instances of
7
  This theory of realization, thus, has affinities with theories of mechanistic explanation. See, for example,
Bechtel & Richardson (1993), Glennan (1996,  2002), Machamer, Darden, & Craver (2000), and Craver
(2007). It also involves a highly detailed theory of levels articulated in Gillett (unpublished). Because the
theory cannot be presented adequately in the space of even a few pages, the interested reader is encouraged
to obtain a copy of Gillett’s paper.
8
  This is one respect in which multiple realization requires more than that there be one realized property
and a diversity of realizer properties.

properties/relations F1–Fn to s, or s’s constituents, but not vice versa; (ii) under con-
dition $* (which may or may not be identical to $), an individual s* (which may or
may not be identical to s) has an instance of property G in virtue of the powers con-
tributed by instances of properties/relations F*1–F*m of s* or s*’s constituents, but
not vice versa; (iii) F1–Fn ≠ F*1–F*m and (iv), under conditions $ and $*, F1–Fn of s
and F*1–F*m of s* are at the same scientific level of properties.
To illustrate multiple realization we may return to the property of having a dipole
moment of 1.82 D. HF has this property in virtue of the electronegativities of H,
F, and the bond between them, but chlorofluoromethane (CH2ClF) appears to have
the same dipole moment in virtue of the electronegativities of C, H, Cl, and F and
the bonds between them (cf., Nelson et al., 1967, p. 16). This is apparently a case
of multiple realization.9
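A rough indication of why this is possible comes from the textbook characterization of the dipole moment as the first moment of a molecule’s charge distribution (stated here in our own notation):

\[ \vec{\mu} \;=\; \sum_i q_i \, \vec{r}_i , \]

where the q_i are the partial charges and the r_i their positions. Quite different arrangements of atoms, electronegativities, and bond geometries can sum to dipole vectors of the same magnitude, which is just what the comparison of HF and CH2ClF illustrates.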

3.  Multiple Realization with Individual Variation


Consider the property of having normal human color vision.10 In what follows, the
discussion will be limited to human vision, as this will avoid complications stemming
from cross-species comparisons. Additionally, in the context of this discussion, normal
color vision will be understood as a capacity for making normal color discriminations.
It does not include other features of color vision, such as rapidity of response, luminance
sensitivity, trichromacy, etc., that might also be considered normal. In other words,
there is a distinction to be made between the property of having normal color vision,
the property of having normal response time to light, the property of having normal
luminance sensitivity, and the property of having trichromatic vision, among others.
While “normalcy” might be used in a broader way so as to include all these other prop-
erties, this is not how it will be used here. Nor is it the way that it is used in the vision
science literature to be discussed here.
It is sometimes relatively easy to determine that a vision scientist is using “normal
color vision” in the sense just introduced. In the methods section of a journal article,
the scientist will typically report that participants are screened for normal color vision
using methods that are sensitive to color-discrimination capacities, but not these other
properties of color vision. So, for example, one very popular test is the Ishihara, which
uses twenty-four pseudo-isochromatic plates. If a test participant correctly identifies
the numeral in each of the plates, then the participant is judged to have normal color

9
  The qualifier “appears” is needed, since the dipole moments are experimentally determined values.
Thus, it could be that HF and CH2ClF have exactly the same dipole moment or it could be that HF and
CH2ClF have the same dipole moment to within the limits of experimental error. In the latter case, we would
not have an example of multiple realization.
10
  Some philosophers object that the property of having normal color vision involves an element of
scientific convention regarding the scope of what is to be treated as normal, hence that it is not a bona fide
natural property. For those who hold this view, one can rework the relevant arguments of Sections 2–4
mutatis mutandis for the property of having trichromatic vision.

vision.11 Within certain boundary conditions, the Ishihara test does not check for such
properties as trichromacy, rapidity of response, or luminance sensitivity.
Move now from the realized property to the realizer properties. Many distinct
structures in the eye, the lateral geniculate nucleus, areas V1, V2, V4, etc., contribute
properties that realize normal color vision. In addition, in some of these structures,
there are individuals (often called “components” or “entities” in the mechanistic
explanation literature) at the organ level, the tissue level, the cellular level, and the
biochemical level that contribute realizer properties.12 Further, there appears to be a
diversity of realizers in each of these structures and at many distinct levels, yielding
what Aizawa & Gillett (2009a) described as “massive multiple realization” in color
vision. The concerns of the present chapter will, however, be served by focusing on the
multiple realization of the properties of normal color vision by realizer properties
contributed by: 1) the photopigments in the cones, 2) the retina, 3) the crystalline lens, 4)
the macular pigment, and 5) one of the macromolecules involved in phototransduction.
Begin with the cones of the retina. A number of studies have documented the
existence of polymorphisms in the green and red photopigments.13 For the red photo-
pigment, it has been estimated that roughly 44 per cent of the population of European
descent has an amino acid chain that has an alanine at position 180, whereas about
56 per cent of the population of European descent has an amino acid chain with a
serine at position 180. These two variants are often designated Red (ala180) and Red
(ser180), respectively. For the green photopigment, it has been estimated that roughly
94 per cent of the population has an amino acid chain that has an alanine at position
180, whereas about 6 per cent of the population has an amino acid chain with a serine
at position 180.14 These variants are often designated Green (ala180) and Green (ser180),
respectively. In addition, each of these distinct photopigment molecules will have a
distinct light absorption spectrum. So, for example, Merbs & Nathans (1992) report
that the wavelength of maximum absorption for Red (ala180) is 552.4 nm and that for
Red (ser180) is 556.7 nm. These differences in cone opsins lead to corresponding differences
in the photoreceptors that contain them.15

11
  Note that the Ishihara test is not taken to provide an operationalization of the property of having
normal color vision. The idea is not that, by definition, any participant who correctly identifies the numerals
in all the plates has normal color vision. Instead, the test is just a reliable indicator of color-discrimination
capacities. This is evidenced by the fact that many tests of normal color vision are analyzed for their reliability
and accuracy.
12
  See Aizawa & Gillett (2011) for further explanation.
13
  See, for example, Neitz & Neitz (1998), Sjoberg, Neitz, Balding, & Neitz (1998), and Winderickx
et al. (1992).
14
  This composite data is assembled in Sharpe, Stockman, Jägle, & Nathans (1999).
15
  Photopigment molecules have many properties, e.g., mass, charge, density, and absorption spectrum.
Implicit in this example is the view that the light absorption spectrum of a photopigment is “relevant” to
having normal color vision—that the absorption spectrum is among the realizers of normal color vision.
One might, however, think that something like the property of being a (red/green/blue) cone opsin is what
is relevant to having normal color vision, hence that it is a realizer property of normal color vision. This
might be a view encouraged by Couch (2009, p. 509). This view, however, does not appear to be scientifically
correct. The property of being a (red/green/blue) cone opsin is a property that many biochemically distinct molecules have in virtue of their evolutionary history. They are homologous molecules. One reason to think that the property of being a (red/green/blue) opsin molecule is not the relevant one for the realization of normal color vision is the existence of mutational variants of the cone opsins that have distinctly abnormal absorption spectra. These molecules will still be cone opsins understood as homologs, but they will give rise to color-discrimination anomalies or color vision deficiencies. Thus, you can change the property of having a particular absorption spectrum, leaving the property of being a cone opsin unchanged, and thereby change the realized property of having normal color vision. This is just the familiar story about color blindness.

The stage is, therefore, perfectly set for scientists to invoke the “eliminate and split”
strategy with respect to normal color vision. There is a single putative property of having
normal human color vision, but a diversity of realizers in the diversity of absorption
spectra associated with the diversity of red and green cone opsin molecules. (In addition,
the diversity of absorption spectra gives rise to subtle differences in color-discrimination
capacities.) Following the eliminate and split strategy, vision scientists can deny the
existence of normal human color vision and instead postulate four types of color vision
corresponding to the four combinations of photopigments:
Normal color vision with Red (ala180), Green (ala180),
Normal color vision with Red (ala180), Green (ser180),
Normal color vision with Red (ser180), Green (ala180),
Normal color vision with Red (ser180), Green (ser180).
Given this opportunity, however, vision scientists have not availed themselves of it.
Rather than eliminating and splitting the property of normal color vision, vision
scientists treat the variations induced by the differences in realizers as cases of indi-
vidual differences. Normal color vision is a broad category within which there can
be variation. Differences in realizers make for differences in color-discrimination
capacities—they give rise to individual differences in color-discrimination capacities—
but they do not “make a difference” to normal color vision. Individuals can have normal
color vision in the face of individual differences in color-discrimination capacities.
Exactly this analysis is given in the opening passage of a study of the spectral tuning
of  the photopigments: “Human color vision encompasses a range of individual
variations, including . . . subtle variations among individuals whose color vision is
considered to be normal. One of the principal causes of these color vision differences
is  variation in the spectral positioning of the cone photopigments” (Neitz, Neitz,
& Jacobs, 1991, p. 971).
Nor is the variation in the properties of the cone photopigments, hence in the
properties of the cones themselves, the only variation handled in terms of individual
variation. In order to reach the retinal photoreceptors, light must pass through the
crystalline lens and the macular pigment. These pre-retinal structures are not perfectly
transparent, so that they filter the light reaching the retina. Their light-filtering properties
are, thus, among the realizers of normal color vision. Vision scientists are, therefore,
presented with the opportunity to invoke the “eliminate and split” strategy along the
lines of differences in pre-retinal or pre-receptoral properties.16 Vision scientists could have decided to reject the existence of normal color vision and instead postulate a
taxonomy of normal color vision that tracks the various filtering properties of the lens
and macular pigment. Instead, they treat pre-retinal variations in the same way they
treat retinal variations, namely, as sources of individual differences. Here is a portion
of one journal abstract setting out the picture:
There are significant variations in colour matches made by individuals whose colour vision
is classified as normal. Some of this is due to individual differences in preretinal absorption
and photopigment density, but some is also believed to arise because there is variation in the
spectral positioning of the cone pigments among those who have normal colour vision.
(Neitz & Jacobs, 1986, p. 623)

Here is another:
The color matches of normal trichromatic observers show substantial and reliable individual
differences. This implies the population of normal trichromats is not homogeneous, an obser-
vation that leads to the question of how one normal trichromat differs from another. In general,
the physiological mechanisms that contribute to color-matching differences among normal
observers may be classified as either pre-receptoral or receptoral. Pre-receptoral spectrally
selective filtering can significantly affect color matches and therefore can cause individual
differences. The influence of pre-receptoral filtering, however, can be eliminated with well-
known techniques . . . This implies that individual differences among normal trichromats are
due in part to receptoral variation.  (He & Shevell, 1994, p. 367)17

To be clear, vision scientists maintain that all the individuals in their study share
the  property of having normal color vision despite variations in their specific color-
discriminative capacities. The example illustrates multiple realization of this property,
while leaving open the issue of the unique or multiple realization of some other prop-
erties of human vision. What this indicates is that vision scientists allow psychological
properties some autonomy from their realizers by not having the psychological properties
track the differences in realizers. Instead, differences in realizers are integrated into
some of the relatively broad categories of psychology as individual variations.18
Return now to the skeptical line of thought with which this chapter began. The cases of
multiple realization with individual variation reveal two respects in which the skeptical

16
  In fact, the crystalline lens yellows with age. Thus, an individual human being may have a single
instance of the property of having normal color vision while the light-transmitting properties that realize
that vision change over the course of a single lifetime. The crystalline lens, thus, illustrates both inter-
individual multiple realization of normal color vision and intra-individual multiple realization of normal
color vision.
17
  Recall the claim from a footnote at the start of this section that one can rework the arguments of this
section mutatis mutandis for the property of having trichromatic vision. The passage from He and Shevell
supports this contention.
18
  Note that the property of normal color vision may be one that Bechtel & Mundale (1999) would label a
“coarse-grain” property. Nevertheless, the multiple realization of this property by a diversity of photoreceptor
properties is not the product of philosophical gerrymandering of properties. It is the way in which actual
scientific theorizing appears to work.

argument goes wrong. Recall, first, that the argument assumed that differences in
realizer properties must make for differences in realized properties. This is a meta-
physical claim about how different properties in the world are related to each other.19
Some differences in the realizer properties of cone opsins, lenses, and maculae do lead
to differences in some realized properties, such as the capacities for finer color dis-
criminations. These capacities can be detected with sensitive tests such as Rayleigh
matching. Nevertheless, not all properties are like this. In particular, the property of
having normal color vision is not among them. Recall, second, that the skeptical argu-
ment assumed that scientific theorizing will take differences among realizer properties
into account by invoking what we have called an “eliminate and split strategy.” This
second premise is a methodological principle about how scientific theorizing will
handle the apparent discovery of multiple realization: Whenever scientists discover
apparent multiple realization, they will refine their taxonomy of realized properties
to avoid multiple realization. Our example of normal color vision, however, shows
that, at least at times, scientific theorizing will take the differences into account not
by invoking the “eliminate and split” strategy, but by using them to explain individual
differences. Vision science, thus, provides a concrete example that dispels some over-
simplifications latent in the abstract philosophical argument described at the outset.20

4.  Multiple Realization by Compensatory Differences


The Dimensioned theory of realization recognizes that, in many scientific cases, a
single property instance G can be realized by a set of property instances F1–Fn.21 This
“teamwork” of realizers suggests one way that multiple realization can arise, namely, by
“compensatory differences.” The idea is that sets of properties F1–Fn and F*1–F*m may
be such that the differences between F1–Fg and F*1–F*i and between Fh–Fn and F*j–F*m
“counterbalance” each other. Two examples should make this clearer.
Example 1: Light filtering. Consider two humans who make particular color dis-
criminations in response to 450 nm light. As noted in Section 3, the realizers for this
discriminatory capacity include the optical density (or light-filtering properties) of the
crystalline lens and the macular pigment. Clearly there are many distinct combinations
19
  It is a claim that will later come in for further clarification.
20
  In comments on an earlier draft, David Kaplan proposes that this example suggests a weakening of
the eliminate and split principle. It is not that the differences in realizer properties make for differences
in realized properties simpliciter. Instead, roughly speaking, differences in realizer properties make for
differences in realized properties (hence justify splitting, etc.) when those realizer differences lead to functional
differences among the realized properties. This weakening retains the idea that lower-level realizers are
drivers of the taxonomy of realized properties. But, what basis is there for this? The operative methodo-
logical principle appears to be purely at the level of the realized properties: Split “functionally different”
properties. Insofar as a principle like this is at work, we have a kind of methodological autonomy of the
realized properties.
21
  This is a feature that Dimensioned realization shares with extant accounts of mechanistic explanation,
wherein a single phenomenon or property is explained by the joint action of multiple entities. See, for
example, Machamer et al. (2000) and Craver (2007).

of lens pigment optical density and macular pigment optical density that yield the
same total optical density for 450 nm light. For instance, one can achieve the same
total optical density by lower values of lens pigment optical density and suitably higher
values of macular pigment optical density. So, there can be one and the same color-
discriminatory capacity with an infinitude of distinct combinations of lens and macular
pigment properties.
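Because the optical densities of filters in series are, to a first approximation, additive, the point can be put numerically (the figures below are invented purely for illustration and are not measured values):

\[ D_{\mathrm{total}}(450\,\mathrm{nm}) \;\approx\; D_{\mathrm{lens}}(450\,\mathrm{nm}) + D_{\mathrm{macula}}(450\,\mathrm{nm}) , \]

so that, for example, a lens density of 0.3 paired with a macular density of 0.2 and a lens density of 0.1 paired with a macular density of 0.4 filter 450 nm light to the same degree and, other things being equal, sustain the same discriminatory capacity.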
Example 2: The “lens paradox”. Throughout a normal human’s life, the crystalline
lens grows in such a way that it bulges along its central axis. This growth leads to a
decrease in the focal length of the lens, which, in turn, suggests that aging will typically
lead to increasing near-sightedness. Of course, as is well known, aging does not
typically lead to increasing near-sightedness, but to increasing far-sightedness. This
is the so-called “lens paradox.” How can it be that the aging lens typically changes
in shape to favor a decreasing focal length, which should lead to near-sightedness,
when aging humans typically experience far-sightedness, which implies an increasing
focal length?
The most likely resolution of the paradox involves postulating changes in the
refractive indices of the internal components of the lens in such a way as to overcom-
pensate for the changes in the shape of the lens.22 Once we recognize the compensatory
relations between lens shape and the refractive index of the internal components,
we can see how it is possible to multiply realize a given focal length by compensatory
differences in the surface of the lens and in the internal structure of the lens.
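A simplified thin-lens sketch indicates how such compensation is possible (the real crystalline lens has a gradient refractive index and is immersed in aqueous and vitreous media, so this is only a schematic):

\[ \frac{1}{f} \;=\; (n-1)\left(\frac{1}{R_1} - \frac{1}{R_2}\right) . \]

Increased bulging reduces the radii of curvature R1 and R2, which by itself shortens the focal length f; a suitable decrease in the effective refractive index n can offset, or overcompensate for, that change, leaving the focal length unchanged or even lengthened.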
In these two examples, vision scientists do not adopt the “eliminate and split”
strategy. While this is a logically possible option, scientists do not embrace it. They do
not reject the property of having a particular color-discriminatory capacity with respect
to 450 nm light in favor of a set of non-denumerably many properties individuated by
combinations of lens pigment optical densities and macular pigment optical densities.
They do not reject the property of having emmetropic vision in favor of a set of non-
denumerably many distinct properties individuated by combinations of lens shape
and internal refractive indices. Here again, the taxonomy of properties in psychology
does not mirror the taxonomy of the biochemical and physical properties of the lens,
so there is a form of autonomy of the psychological from the biochemical and the
physical. Biochemical and physical properties are integrated into psychology by
producing what psychologists evidently consider to be a single psychological property.
Consider, again, the skeptical argument with which we began. It assumed that multiple
realization requires a diversity of realizer properties and that differences in realizer
properties must make for differences in realized properties. What this reasoning does
not take into consideration is the possibility of the differences in realizers cancelling
each other out.23 This possibility, however, is actualized with properties of the lens and

22
  See, for example, Moffat, Atchison, & Pope (2002).
23
  Again, it appears that the operative principle is not that differences in realizer properties make for
differences in realized properties when those realizer differences lead to functional differences. Instead, the principle is something simpler, such as, split “functionally different” properties. Insofar as a principle like this is at work, we have a kind of methodological autonomy of the realized properties.

macular pigments and with the curvature of the lens and the refractive indices of
its internal components.

5.  Multiple Realization by Orthogonal Realizers


For our third way of integrating realizers with the realized, we need to draw a
distinction between two types of realizers: Parallel realizers of G and orthogonal
realizers of G. A property F is a parallel realizer of G if, and only if, small variations
in the value of F will lead to corresponding changes in the value of G. A property F is
an orthogonal realizer of G if, and only if, small variations in the value of F will not
lead to corresponding changes in the value of G.24 Let me provide an example of each
type of realizer.
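One way to picture the distinction (our gloss, not a formal definition offered in the text) is to treat the realized property G as a function of a given realizer F, holding the other realizers fixed: F is a parallel realizer when, near the actual values,

\[ \frac{\partial G}{\partial F} \neq 0 , \]

and an orthogonal realizer when, near the actual values, \( \partial G / \partial F \approx 0 \).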
Earlier we noted that the color discriminations one makes are realized by, among
other things, the absorption spectra of cones. Take the absorption spectrum of a red
cone. Slight variations in the absorption spectrum will lead to slightly different color
discriminations among individuals with normal color vision. Recall that the property
of having normal color vision does not vary with these differences in realizers; it is
what one might call a “coarse” property. Nevertheless, among those with normal color
vision there remain differences in color-discrimination capacities. The absorption
spectra of the red cones are parallel realizers of color-discrimination capacities.
Moreover, these are the kind of realizers that philosophers seem to have in mind in
their skeptical thinking about multiple realization. Slight variations among realizers,
such as cone opsin absorption spectra, will lead to slight variations in the realized
property, such as color discriminations, so that this will preclude the multiple realization
of a specific color-discrimination capacity.
Orthogonal realizers, however, work differently. To appreciate them, we need to
look to some of the other realizers of normal color vision, such as the properties of
one component of the mechanism of phototransduction. Upon absorption of a photon,
a single photopigment molecule will change conformation from 11-cis-retinal to
all-trans-retinal. After this conformational change, the retinal chromophore is released
into the cytosol, while the opsin fragment remains embedded in the cell membrane in
an activated state. The activated opsin binds to a single transducin molecule located on
the inner surface of the cell membrane. This transducin molecule, in turn, activates a

24
  In truth, it can be problematic to specify what a “small” variation is. So, for example, regarding
parallel realizers, it could be that very, very tiny shifts in the absorption spectrum of the red cone opsin
do not, for one reason or another, make for measurable differences in color-discrimination capacities. In
addition, regarding orthogonal realizers, at some point, sufficiently large changes in, say, the binding
constant of a G protein (see below), will lead to a property that is no longer a realizer property. It is
unclear how this issue can be resolved, so that the account that we now have on the table is, at best,
a rough approximation.

molecule of an enzyme, cGMP phosphodiesterase. There is more to the phototransduction story, but this remainder is inessential to the idea of orthogonal realizers.25
Notice that the properties of the transducin molecule are among the realizers of
normal color vision. The capacity for normal color vision depends on the capacities
of the transducin molecules. Without the properties of the transducin, there would be
no color vision. Next notice that the properties of transducin molecules can vary. They
can vary in their binding affinities to cGMP-phosphodiesterase. But, here is the fact
that is philosophically interesting: small changes in these binding affinities will not
change the color discriminations one makes. This means that the binding constants for
transducin molecules are orthogonal realizers. What this means is that one can have
two individuals who are exactly alike, save for having different transducin binding
constants, but who make exactly the same color discriminations. As with our other
cases of multiple realization, vision scientists have the option of subtyping normal
color vision along the lines of the diversity in the transducin molecules, but they have
not developed a theoretical framework to do so.
At present, variations in the properties of human transducin have not been directly
confirmed.26 Instead, such variation is only to be expected on the grounds of general
considerations of evolution by natural selection. There are likely variations in the DNA
that codes for transducin, hence variations in the amino acid sequences for transducin,
hence variations in the binding constants of transducin. Thus, this particular example
of multiple realization by orthogonal properties is perhaps somewhat speculative. To
shore up the view that there are orthogonal realizers that do not inspire the use of
the “eliminate and split” strategy, we can simply change examples. We can consider the
spatial distribution of the different types of cones in the retina.
Return to the property of having normal color vision.27 Normal color vision requires
the integration of signals from red and green cones, so the spatial distribution of the
red and green cones matters to normal color vision. The relative positions matter, so
they are among the realizers of normal color vision. Recent work has shown, however,
that, among those with normal color vision, red and green cones are not uniformly
distributed (even at a fixed relative position) in the retina.28 In fact, Hofer, Singer, &
Williams (2005) report that, in a sample of five individuals with normal color vision,
the ratio of red to green cones varies from 1.1:1 to 16.5:1.29 In principle, vision scientists
could reject the existence of normal color vision in favor of a multitude of subtypes of
25  For more details, see Aizawa & Gillett (2011) or Kandel, Schwartz, & Jessell (2000).
26  Transducin is a heterotrimeric G protein with alpha, beta, and gamma subunits. As of February 19, 2014, the Uniprot database lists one natural variant of the alpha-1 subunit, three variants of the alpha-2 subunit, and no variants of the beta or gamma subunits. (This lack of natural variants likely constitutes an absence of evidence, rather than evidence of absence. It likely reflects the relative difficulty of collecting samples of transducin compared to collecting samples of, say, alpha hemoglobin.) There are no listings for the values of the relevant properties of the transducin variants.
27  Trichromacy would work just as well here.
28  See, for example, Hofer, Carroll, Neitz, Neitz, & Williams (2005).
29  The property of normal color vision appears to be multiply realized by distinct cone mosaics, but so does a person's perception of unique yellow:
normal color vision wherein each subtype is individuated by reference to a distinct spatial configuration of red and green cones. While one can imagine vision scientists doing this, and while it is a logically possible move for them to make, the literature does not reveal any efforts in this direction. Scientists working in this area do not seem to be attracted to the “eliminate and split” strategy in this case.
As before, this shows that the vision science inventory of properties is autonomous
from the diversity of realizers found in the retina. This much is familiar from earlier
sections of the chapter, but the examples of orthogonal realizers indicate another way
in which differences in realizers are integrated into vision science. Differences in
realizers can be “parceled out” into many different realized properties. Differences
in realizers may not be manifest in one higher-level property, but instead manifest in
another. So, it is likely that the variations in the binding constants of transducin make
for no differences in color discriminations, but they may make for a difference in
response latencies in color discriminations. The example of the spatial distribution of
red and green cones is an excellent example of this. Small differences in the spatial dis-
tribution of red and green cones do not change the color discriminations one makes,
but they do give rise to differences in other features of vision. For example, islands of red-only or green-only cones are predicted to be islands of color blindness, so individuals with more extreme red-to-green cone ratios will have more such islands. Hofer, Singer, & Williams (2005) report that subjects with a preponderance
of either red or green cones are more likely to perceive very small spots of light as
white. This means that there can be multiple realization of the property G of having
normal color vision, while there is diversity in another property H of having a certain
proportion of color-blind regions in one’s visual field. Or take another property, J, the
capacity to make color discriminations in patterns with high spatial frequency. There
can be multiple realization of the capacity G for making normal color discriminations
in the face of variations in the property J. Austin Roorda and David Williams report on
two individuals with just this configuration of properties:
The proportion of L to M cones is strikingly different in two male subjects, each of whom has
normal colour vision. The mosaics of both subjects have large patches in which either M or
L  cones are missing. This arrangement reduces the eye’s ability to recover colour variations
of  high spatial frequency in the environment but may improve the recovery of luminance
variations of high spatial frequency.  (Roorda & Williams, 1999, p. 520)

(Footnote 29, continued.) Unique yellow, the wavelength that appears neither reddish nor greenish and represents the neutral
point of the red–green color mechanism, is thought to be driven mainly by differences in L and M cone
excitation. Several investigators have noted that whereas estimates of L:M cone ratio vary widely, the
wavelength that subjects judge uniquely yellow is nearly constant, varying with a SD of only 2–5 nm
(Pokorny, Smith, & Wesner, 1991; Jordan & Mollon, 1997; Miyahara, Pokorny, Smith, Baron, & Baron,
1998; Brainard et al., 2000; Neitz, Carroll, Yamauchi, Neitz, & Williams, 2002). In agreement with these
studies, measures of unique yellow did not correlate with direct measurements of L:M cone ratio in six
of our subjects (HS, YY, MD, JP, JC, and BS; data not shown). (Hofer, Carroll, Neitz, Neitz, & Williams, 2005,
p. 9674).
The existence of orthogonal realizers also bears on the multiple realization skepticism
with which we began. Recall that this line of thought included the assumption that
differences in realizer properties must make for differences in realized properties.
Yet, the case of the spatial distribution of red and green cones illustrates an ambiguity
in this argument. Different spatial distributions of red and green cones do make for
differences in some realized properties, but not necessarily in every realized property.
Some differences in spatial distributions of cones can change patterns of response
to small spots of light, but they do not change the property of having normal color
vision. This is part of what makes it easier than one might have imagined for there to
be multiple realization.
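The idea of orthogonal realizers can be made concrete with a toy numerical sketch (in Python; the functions, parameter values, and the stand-in for a transducin binding constant are all hypothetical, chosen purely for illustration, and not drawn from the vision-science literature). The realized property G (which color discriminations are made) is computed in a way that is insensitive to small differences in a realizer parameter, while a second realized property H (response latency) does vary with that parameter, so the difference in the realizer is “parceled out” into H rather than G.

# Toy sketch of orthogonal realizers (hypothetical values, not real data).
# Two individuals differ only in a realizer parameter k, a stand-in for a
# transducin binding constant. The realized property G (which color
# discriminations are made) does not change with small differences in k,
# while another realized property H (response latency) does.

def discriminates(pair, k):
    """G: whether two stimuli are told apart. The decision depends on the
    ratio of (scaled) cone signals, so the scale factor k cancels out."""
    (l1, m1), (l2, m2) = pair
    return abs((k * l1) / (k * m1) - (k * l2) / (k * m2)) > 0.1

def latency_ms(k):
    """H: response latency, a property in which differences in k do show up."""
    return 150.0 + 20.0 / k

stimuli = ((0.8, 0.4), (0.5, 0.4))      # two stimuli as (L-cone, M-cone) catches
for k in (0.9, 1.1):                    # two individuals, slightly different k
    print(discriminates(stimuli, k), round(latency_ms(k), 1))
# Output: True 172.2 / True 168.2 -- same G, different H.

Nothing here pretends to model actual phototransduction; the point is only that a single difference in a realizer can leave one realized property untouched while showing up in another.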

6.  What Is Realized?


It has been a presupposition of this chapter that, at the least, multiple realization requires
the sameness of a realized property in the face of a diversity of realizer properties. This
section will draw attention to a family of strategies for blocking multiple realization by raising the bar on what is to count as multiple realization. What these variants have in common is, in one way or another, the postulate that multiple realization is a matter of realizer properties simultaneously realizing the same values for many different properties.
So, the idea is that one does not have multiple realization if distinct sets of property
instances F1–Fn and F*1–F*m realize just a single property G. Instead, for multiple
realization, F1–Fn and F*1–F*m must ultimately simultaneously realize a cluster of prop-
erties, say, G, H, I, and J. The discussion of orthogonal realizers invites consideration
of this view, since it draws attention to these other properties without supposing that
the multiple realization of, say, G requires the simultaneous multiple realization of
these other properties H, I, and J.
Couch (2009) compares the human eye and the fly eye. He argues that the human
eye and the fly eye do not provide multiple realizations of vision, because the eyes differ
in many properties. So, for example, the human eye does not detect ultraviolet light,
whereas the fly eye does. The human eye does not detect the plane of polarized light,
whereas the fly eye does. The human eye accommodates for distance, whereas the fly eye
does not. So, there is no multiple realization of vision.
Craver’s framework of mechanistic explanation might also be taken to be supportive
of the view that, in order to have a case of multiple realization, F1–Fn and F*1–F*m must
simultaneously realize a cluster of properties, say, G, H, I, and J. One might propose
that Craver’s theory of mechanistic explanation—a theory of compositional deter-
mination relations—is a theory of realization.30 Thus, a mechanistic explanation of
some phenomenon would be a realization of that phenomenon and having distinct
mechanistic explanations of one phenomenon would be multiple realizations of that

30  See, for example, Craver (2004, 2009) and Wilson & Craver (2006).
phenomenon. Notice, however, that Craver proposes that phenomena include many
properties. As he puts it,
Phenomena are typically multifaceted. Part of characterizing the action potential phenomenon
involves noting that action potentials are produced under a given range of precipitating condi-
tions (for example, a range of depolarizations in the cell body or axon hillock). But, as Hodgkin
and Huxley’s a–h illustrate, there is much more to be said about the manifestations of an action
potential. It is necessary to describe its rate of rise, its peak magnitude, its rate of decline, its
refractory period, and so on.  (Craver, 2006, p. 368; 2007, p. 125)

One might, then, claim that two phenomena are different if they differ in any of
their individual properties. Notice, therefore, that this leads to a picture of multiple
realization very much like Couch’s, wherein multiple realization apparently involves the
simultaneous realization of many different properties. In point of logic, it is at least as
difficult to find two sets of realizer properties F1–Fn and F*1–F*m that realize all the proper-
ties in a non-singleton set of properties {G1, G2, . . . Gp} than it is to realize a single property
G. Craver’s theory, thus, provides a path rejecting certain potential cases of multiple
realization. To repeat, however, Craver does not propose to use his framework in this way.
Another way of thinking about this issue is by way of what might be called “conjunc-
tive properties,” or properties that are specified by conjunctions of predicates. So, one
might maintain that if normal color discrimination and normal spatial resolution are
two properties of a visual system, then there is also a third property, the property of
having normal color discrimination and normal spatial resolution. In fact, our earlier
specification of what we here mean by “normal color vision” brushed past this issue.
We took the property specified by that term to be a capacity for making normal color
discriminations. We explicitly excluded other features of color vision, such as rapidity
of response, luminance sensitivity, trichromacy, etc., that might also be considered
normal. What we did was direct our attention away from any putative conjunctive
property of normal color vision onto a more specific property. One might, however,
think that this was a misguided redirection.
There are stronger and weaker responses one might give to these putative “property
complexes”—conjunctive properties, Craver’s phenomena, or Couch’s functions. Heil
(2003) and Gillett & Rives (2005), among others, suggest a stronger sort of reply. One
might argue that there simply are no properties corresponding to predicates such as
“has the function of seeing” or “is in pain.” Instead, there are only more specific proper-
ties that the resources of our natural language allow us to group together for one or
another reason. Rather than rehearse any of the argumentation in support of this
strong reply, present purposes will be served by a weaker response. This is that whether
or not there exist “conjunctive properties” or the like, this does not in any obvious way
impugn the apparent multiple realization of the individual properties on which we
have focused. Rather than challenging the existence of the “conjunctive” sort of properties,
we simply bracket their consideration for the present. We still have the multiple
realization of some scientifically important properties.
7.  Some Disambiguation of “Autonomy”: Fodor (1974) and Keeley (2000)
The sense of “autonomy” being invoked here is not, in the first instance, meant to vindicate prior views of the relationship between psychology, on the one hand, and neurobiology or physics, on the other (most notably, Fodor’s). It is, instead, meant to articulate what appear
to be the compositional relations among properties found in the sciences. Nevertheless,
this sense of “autonomy” is not entirely irrelevant to some of the comments from Fodor
(1974). In “Special Sciences (Or: The Disunity of Science as a Working Hypothesis),”
Fodor claimed that “we could, if we liked, require the taxonomies of the special sciences
to correspond to the taxonomy of physics by insisting upon distinctions between
the natural kinds postulated by the former wherever they turn out to correspond to
distinct natural kinds in the latter” (p. 112). This sounds something like the “eliminate
and split” strategy. And, in subsequent years, some philosophers have been willing
to entertain, and perhaps even support, something like this idea. For example, Oron
Shagrir proposes, “to the extent that compensation by brain damaged people recruits
type distinct neural mechanisms, to that extent the relevant psychological states may
be type distinct from those of normals” (Shagrir, 1998, p. 449).31 What the present study
urges is that, in a clear sense of realization, when the science is fairly well developed,
scientists do not always avail themselves of the opportunity to invoke the “eliminate
and split” strategy.
Fodor also claims that “Physics develops the taxonomy of its subject-matter which
best suits its purposes . . . But this is not the only taxonomy which may be required if the
purposes of science in general are to be served” (Fodor, 1974, p. 114). Something very
much like this is evident for normal color vision and trichromacy. The capacity for
making normal color discriminations is important at the level of individual humans,
since certain jobs require being able to discriminate certain colors. The property of
being trichromatic is important in explaining how the human visual system overcomes
a limitation of individual neurons, namely, the so-called “univariance principle.”
According to this principle, “Each visual pigment can only signal the rate at which it is
effectively catching quanta; it cannot also signal the wave-length associated with the
quanta caught” (Naka & Rushton, 1966, p. 538). Suppose a given photoreceptor is twice
as effective in capturing photons of wavelength B as it is in capturing photons of wave-
length A (see Figure 10.1). Such a photoreceptor cannot distinguish between light of
wavelength B of a given intensity and light of wavelength A at twice that intensity. With
a trichromatic system, however, a given frequency of incoming light will, to a greater or
lesser degree, stimulate a unique set of responses from the three types of cones (short
wavelength, medium wavelength, and long wavelength). Colors are, then, coded by

31  Shagrir, of course, qualifies his view with “may.” So, it is unclear to what extent he wishes merely to entertain the eliminate and split strategy, rather than support it. Craver (2004) seems to be implicitly committed to at least the splitting portion of the strategy. For more discussion, see Aizawa & Gillett (2011).
Figure 10.1  Signal ambiguity with a single type of cone [axes: wavelength of light (x) vs. percentage of light absorbed (y); wavelengths A and B marked]

Figure 10.2  Signal disambiguation in a system with three types of cone [axes: wavelength of light (x) vs. percentage of light absorbed (y); curves for S, M, and L cones; wavelength A marked]

the ratio of activity in the three types of cones. So, for example, light of wavelength
A will be coded by high activity in S-cones, moderate activity in M-cones, and slight
activity in L-cones (see Figure 10.2).32 While the property of trichromacy is realized
by, say, the properties of the macromolecules in the phototransduction biochemical
cascade, details regarding the properties of macromolecules are irrelevant to the
understanding of what trichromacy is, why the visual system uses it, and so forth.
The taxonomy that best suits vision science, thus, evidently recognizes a unitary
property of trichromacy, but not an enormous family of properties of trichromacy
embracing all of the variation in cone opsins, all of the variation in lens pigment
and macular pigment optical density, and all the variation in the properties of the

32  For more details, see Blake & Sekuler (2006, pp. 246–50).
molecules involved with phototransduction. Vision science has not developed this
inventory of properties.
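The univariance point, and the way ratio coding across three cone types overcomes it, can also be put as a small numerical sketch (in Python; the sensitivity values are invented for illustration and are not measured cone sensitivities). A single receptor type confounds wavelength with intensity, whereas the pattern of relative activity across the three types is the same at any intensity and so identifies the wavelength.

# Univariance (illustrative numbers only): a photoreceptor signals just its
# quantum-catch rate, i.e. sensitivity * intensity, so one receptor type
# cannot separate wavelength from intensity.

SENS = {"S": {"A": 0.8, "B": 0.1},   # hypothetical sensitivities of the three
        "M": {"A": 0.4, "B": 0.3},   # cone types at two wavelengths, A and B
        "L": {"A": 0.1, "B": 0.2}}

def response(cone, wavelength, intensity):
    return SENS[cone][wavelength] * intensity

# Ambiguity with one receptor type: wavelength B at intensity 8 and
# wavelength A at twice that intensity produce the same L-cone response.
print(round(response("L", "B", 8), 3), round(response("L", "A", 16), 3))   # 1.6 1.6

def ratio_code(wavelength, intensity):
    """Relative activity across S, M, and L cones; the normalization removes
    intensity, so the pattern depends only on wavelength."""
    raw = [response(c, wavelength, intensity) for c in ("S", "M", "L")]
    total = sum(raw)
    return tuple(round(r / total, 3) for r in raw)

print(ratio_code("A", 8), ratio_code("A", 16))   # same pattern at both intensities
print(ratio_code("B", 8))                        # different pattern for wavelength B

As in the text, light of wavelength A yields high relative activity in S-cones, moderate activity in M-cones, and slight activity in L-cones; it is that pattern, not any single cone’s output, that codes the color.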
Once we recognize the kind of autonomy that arises from multiple realization, we
can see how to avoid an apparent challenge to Fodor (1974), namely, “Keeley’s dilemma”
(Keeley, 2000). The dilemma is supposed to arise when we ask, “How are special
science taxonomies developed?” (The “eliminate and split” strategy is, obviously, one
answer to this question.) According to the first horn of the dilemma, “if one claims that
an individual special science alone has the right and responsibility to define its own
taxonomy, then the [argument for autonomy] begins to look suspiciously circular, since
a degree of autonomy is assumed by the argument” (Keeley, 2000, p. 449). According to
the second, “if the taxonomies of special sciences are developed by the special sciences
in interaction with structural sciences, then the strength of the conclusion risks being
significantly undermined. Unified science, in large part, consists in scientists at
different levels of investigation working together to negotiate compatible taxonomies”
(Keeley, 2000, p. 450). Neither horn, however, seems to threaten the kind of autonomy
that arises from the multiple realization of properties.
Consider the first horn. The sense of autonomy we have been investigating is the one
according to which the taxonomy of realized properties is not inherited from the
taxonomy of realizing properties. This is a descriptive claim about how special science
taxonomies are actually developed. We did not try to show that scientific theorizing
displays this sort of autonomy by assuming that it does.33 Moreover, this kind of
argument does not require any normative epistemological claims about the rights
or responsibilities of the science of realized properties. One might take it as a default
assumption that, if scientists theorize in a particular way, then they have the right or
responsibility to theorize in that way. But, in principle, one could also maintain that
actual scientific theorizing works in some way, but that this way is illegitimate. In either
case, the normative dimension is inessential to the present issue of characterizing
actual scientific theorizing.
Consider the second horn of the dilemma, “if the taxonomies of special sciences
are  developed by the special sciences in interaction with structural sciences, then
the strength of the conclusion risks being significantly undermined. Unified science,
in large part, consists in scientists at different levels of investigation working together
to negotiate compatible taxonomies” (Keeley, 2000, p. 450). This horn, as well, turns
out not to be damaging to the present analysis. We simply have to distinguish two
senses of “autonomy.” Two taxonomies might be autonomous in the sense that the
sciences developing them do not interact at all or the taxonomies might be autonomous
in the sense that one is not isomorphic to another. We can grant that sciences are not
autonomous in the sense that they do interact, but that they are autonomous in the
sense that the taxonomy of one is not isomorphic to the taxonomy of another.
33  Perhaps one can read Fodor (1974) in this way as well. Fodor does not, of course, provide a detailed, explicit examination of a particular science; he instead relies on what he takes to be evident to those who are familiar with the special sciences.
In fact, given the distinction between the two senses of “autonomy,” we have a
new way of describing one of the goals of this chapter. It has been to argue that the
taxonomy of vision science and the taxonomy of, say, biochemistry, are not autonomous
in the sense of there being no interaction between vision science and biochemistry
at all. Those disciplines and their respective taxonomies do interact. Differences in
biochemical properties are “parceled out” in a variety of ways among the different
properties studied by vision scientists. Nevertheless, the taxonomy of vision science
and the taxonomy of, say, biochemistry are autonomous in the sense that the one is not
a mirror image of the other. So, in answer to the question, “How are special science
taxonomies developed?” we can say that they are developed by interactions among
various sciences in such a way that each science develops a taxonomy that serves its
particular scientific objectives. The sciences of realized properties are not slaves to the
science(s) of realizer properties.

8. Conclusion
One way to read this chapter is as a partial vindication of some of the claims regarding
multiple realization and autonomy in Fodor (1974). There is a kind of autonomy of
higher-level theories that comes by way of multiple realization. Properties in distinct
theories can be integrated with the properties in other theories by standing in realization
relations. Moreover, these realization relations can relate properties in a diversity of
ways: realizer properties can be invoked to explain individual variation in realized
properties, realizer properties can stand in compensatory relations to each other in
such a way as to realize one and the same realized property, and realizer properties can
stand in parallel or orthogonal relations to realized properties. These conclusions speak directly to this book’s themes of autonomy and integration.
There is, however, another perspective one might take on what has been argued
here. The arguments have relied upon a combination of the Dimensioned theory of
realization and a companion theory of multiple realization, on the one hand, and
some of the scientific research on color vision, on the other. Insofar as this combin-
ation provides clear and compelling analyses of the relations among properties in the
sciences—such as the diversity of ways in which realizer properties can interact with
realized properties—there is some incentive to look more closely at it. The theories
of realization and multiple realization are a part of a more inclusive approach to
compositional relations in the sciences.34 Moreover, the rich body of scientific work
on color vision has only begun to be tapped for insights into these compositional relations. There are, therefore, significant reasons to expect future contributions from this combination of tools.

34  See, for example, Aizawa & Gillett (2009a, 2009b) and Gillett (forthcoming, unpublished).
References
Aizawa, K. (2013). Multiple realization by compensatory differences. European Journal for
Philosophy of Science, 3 (1), 69–86.
Aizawa, K. & Gillett, C. (2009a). Levels, individual variation and massive multiple realization
in  neurobiology. In J. Bickle (Ed.), The Oxford Handbook of Philosophy and Neuroscience
(pp. 539–81). Oxford: Oxford University Press.
Aizawa, K. & Gillett, C. (2009b). The (multiple) realization of psychological and other properties
in the sciences. Mind & Language, 24 (2), 181–208.
Aizawa, K. & Gillett, C. (2011). The autonomy of psychology in the age of neuroscience. In
P. M. Illari, F. Russo, & J. Williamson (Eds), Causality in the Sciences (pp. 202–23). Oxford:
Oxford University Press.
Bechtel, W. & Mundale, J. (1999). Multiple realizability revisited: Linking cognitive and neural
states. Philosophy of Science, 66 (2), 175.
Bechtel, W. & Richardson, R. (1993). Discovering Complexity: Decomposition and Localization
as Strategies in Scientific Research. Princeton, NJ: Princeton University Press.
Blake, R. & Sekuler, R. (2006). Perception (5th Ed.). Boston, MA: McGraw-Hill.
Brainard, D. H., Roorda, A., Yamauchi, Y., Calderone, J. B., Metha, A., Neitz, M., Neitz, J.,
Williams, D., & Jacobs, G. H. (2000). Functional consequences of the relative numbers of
L and M cones. Journal of the Optical Society of America A, 17 (3), 607–14.
Couch, M. B. (2009). Multiple realization in comparative perspective. Biology and Philosophy,
24 (4), 505–19.
Craver, C. (2004). Dissociable realization and kind splitting. Philosophy of Science, 71 (5),
960–71.
Craver, C. (2006). When mechanistic models explain. Synthese, 153 (3), 355–76.
Craver, C. (2007). Explaining the Brain. New York: Oxford University Press.
Craver, C. (2009). Mechanisms and natural kinds. Philosophical Psychology, 22 (5), 575–94.
Fodor, J. (1974). Special sciences (or: the disunity of science as a working hypothesis). Synthese,
28 (2), 97–115.
Gillett, C. (2002). The dimensions of realization: A critique of the Standard view. Analysis,
62 (276), 316–23.
Gillett, C. (2003). The metaphysics of realization, multiple realizability, and the special sciences.
Journal of Philosophy, 100, 591–603.
Gillett, C. (unpublished). Making sense of levels in the sciences.
Gillett, C. & Rives, B. (2005). The non-existence of determinables: Or, a world of absolute
determinates as default hypothesis. Nous, 39 (3), 483–504.
Glennan, S. (1996). Mechanisms and the nature of causation. Erkenntnis, 44 (1), 49–71.
Glennan, S. (2002). Rethinking mechanistic explanation. Philosophy of Science, 69 (S3),
342–53.
He, J. C. & Shevell, S. K. (1994). Individual differences in cone photopigments of normal
trichromats measured by dual Rayleigh-type color matches. Vision Research, 34 (3),
367–76.
Heil, J. (2003). Multiply realized properties. In S. Walter & H.-D. Heckmann (Eds), Physicalism
and Mental Causation: The Metaphysics of Mind and Action (pp. 11–30). Charlottesville, VA:
Imprint Academic.
Hofer, H., Carroll, J., Neitz, J., Neitz, M., & Williams, D. R. (2005). Organization of the human
trichromatic cone mosaic. Journal of Neuroscience, 25 (42), 9669–79.
Hofer, H., Singer, B., & Williams, D. R. (2005). Different sensations from cones with the same
photopigment. Journal of Vision, 5 (5), 444–54.
Jordan, G. & Mollon, J. D. (1997). Unique hues in heterozygotes for protan and deutan deficien-
cies. In C. R. Cavonius (Ed.), Colour Vision Deficiencies XIII (pp. 67–76). Dordrecht: Kluwer
Academic Publishers.
Kandel, E. R., Schwartz, J. H., & Jessell, T. M. (2000). Principles of Neural Science. New York:
McGraw-Hill.
Keeley, B. L. (2000). Shocking lessons from electric fish: The theory and practice of multiple
realization. Philosophy of Science, 67 (3), 444–65.
Machamer, P., Darden, L., & Craver, C. (2000). Thinking about mechanisms. Philosophy of
Science, 67 (1), 1–25.
Merbs, S. L. & Nathans, J. (1992). Absorption spectra of the hybrid pigments responsible
for anomalous color vision. Science, 258 (5081), 464–6.
Miyahara, E., Pokorny, J., Smith, V. C., Baron, R., & Baron, E. (1998). Color vision in two
observers with highly biased LWS/MWS cone ratios. Vision Research, 38 (4), 601–12.
Moffat, B. A., Atchison, D. A., & Pope, J. M. (2002). Age-related changes in refractive index
distribution and power of the human lens as measured by magnetic resonance micro-
imaging in vitro. Vision Research, 42 (13), 1683–93.
Naka, K. I. & Rushton, W. A. H. (1966). S-potentials from colour units in the retina of fish
(Cyprinidae). Journal of Physiology, 185 (3), 536–55.
Neitz, J. & Jacobs, G. H. (1986). Polymorphism of the long-wavelength cone in normal human
colour vision. Nature, 323 (6089), 623–5.
Neitz, M. & Neitz, J. (1998). Molecular genetics and the biological basis of color vision. In
W. G. K. Backhaus, R. Kliegel, & J. S. Werner (Eds), Color Vision: Perspectives from Different
Disciplines (pp. 101–19). Berlin: Walter de Gruyter & Co.
Neitz, M., Neitz, J., & Jacobs, G. H. (1991). Spectral tuning of pigments underlying red-green
color vision. Science, 252 (5008), 971–4.
Neitz, J., Carroll, J., Yamauchi, Y., Neitz, M., & Williams, D. R. (2002). Color perception is medi-
ated by a plastic neural mechanism that is adjustable in adults. Neuron, 35 (4), 783–92.
Nelson, R. D., Lide, D. R., & Maryott, A. A. (1967). Selected values of electric dipole moments
for molecules in the gas phase. DTIC document.
Pokorny, J., Smith, V. C., & Wesner, M. F. (1991). Variability in cone populations and implica-
tions. In A. Valberg & B. B. Lee (Eds), From Pigments to Perception (pp. 23–34). New York:
Springer.
Roorda, A. & Williams, D. R. (1999). The arrangement of the three cone classes in the living
human eye. Nature, 397, 520–2.
Shagrir, O. (1998). Multiple realization, computation and the taxonomy of psychological states.
Synthese, 114, 445–61.
Sharpe, L. T., Stockman, A., Jägle, H., & Nathans, J. (1999). Opsin genes, cone photopigments,
color vision, and color blindness. In K. R. Gegenfurtner & L. T. Sharpe (Eds.), Color Vision:
From Genes to Perception (pp. 3–51). New York: Walter de Gruyter.
Sjoberg, S. A., Neitz, M., Balding, S. D., & Neitz, J. (1998). L-cone pigment genes expressed in
normal colour vision. Vision Research, 38 (21), 3213–19.
Wilson, R. A., & Craver, C. F. (2006). Realization: Metaphysical and Scientific Perspectives.
In P. Thagard (Ed.), Philosophy of Psychology and Cognitive Science: A Volume of the Handbook
of the Philosophy of Science Series (pp. 81–104). Amsterdam: Elsevier.
Winderickx, J., Lindsey, D. T., Sanocki, E., Teller, D. Y., Motulsky, A. G., & Deeb, S. S. (1992).
Polymorphism in red photopigment underlies variation in colour matching. Nature, 356
(6368), 431–3.

11
A Unified Mechanistic Account
of Teleological Functions for
Psychology and Neuroscience
Corey J. Maley and Gualtiero Piccinini

1.  Integrating Psychology and Neuroscience within a Multi-Level Mechanistic Framework
To see how to integrate psychology and neuroscience, begin by considering a classical
psychological explanation: a functional analysis of a cognitive capacity, such as vision,
problem solving, or planning, in terms of appropriate computations performed over
suitable representations by a given cognitive architecture (cf. Cummins 1983; Pylyshyn
1984). Performing those computations is a function of the cognitive architecture, and
the performance of such a function explains the cognitive capacity.
According to the autonomist view of psychological explanation that is still very
popular among philosophers of cognitive science (cf. Weiskopf 2011 and some of
the other chapters in this collection), psychological functional analyses are distinct
and autonomous from neuroscientific explanations. Neuroscientific explanations are
mechanistic—they delve into concrete neural structures and activities. Psychological
functional analyses are said to be distinct and autonomous from neuroscien-
tific mechanistic explanations primarily because the former allegedly abstract
away from the mechanisms; instead of concrete mechanisms, functional analyses
posit functional states and processes that are multiply realizable by different kinds
of structures.
The autonomist view has the virtue of emphasizing that cognitive capacities
have explanations at different levels, that all levels ought to be included in a com-
plete explanation, and that higher levels are multiply realizable by lower levels. But
the autonomist view also has a vice: it separates psychological explanations from
neuroscientific ones in a way that makes it difficult to integrate psychology and
neuroscience and does not do justice to the scientific practices that are sweeping
the field.
Cognitive psychology is turning into cognitive neuroscience. Cognitive neuroscience, in turn, is the search for integrated, multi-level, mechanistic explanations of cognitive capacities (Boone and Piccinini 2016a). Everything that is correct about the autonomist view—that there are many levels of explanation, that they all must be included in a complete explanation, and that higher levels are multiply realizable by lower levels—can be accepted while rejecting the autonomist view itself, namely, the view that
(psychological) functional analyses are distinct and autonomous from (neuroscien-
tific) mechanistic explanations. In a recent separate paper, we offer an account of mul-
tiple realizability within a mechanistic framework without higher-level autonomy
(Piccinini and Maley 2014).
Psychology and neuroscience belong to the same explanatory enterprise. Both
contribute aspects of multi-level mechanistic explanations of cognition and behav-
ior. Insofar as psychology offers functional explanations (or functional analyses),
those are nothing but sketches of mechanisms (Piccinini and Craver 2011). In our
example, the cognitive architectures that are found in organisms are neural structures,
which fulfill their functions by performing computations over representations.
Mechanistic explanations are already multi-level; ascending from one level to the
one above it requires abstracting away from lower-level details (Boone and Piccinini
2016b). Thus, psychological functional analyses are not distinct and autonomous
from neuroscientific mechanistic explanations; on the contrary, they capture some
aspects of one level of a multi-level mechanism. Psychology and neuroscience are
integrated by fitting psychological explanations within the kind of multi-level mechanistic
explanations of cognition and behavior that neuroscience provides. The outcome is a
unified science of cognition.
The above picture raises the foundational question of what, exactly, multi-level
mechanisms and their functions are. One aspect that deserves more attention is that
neurocognitive mechanisms—like other biological mechanisms and like artifacts, but
unlike most non-biological mechanisms—appear to be for something; they appear
to  have teleological functions. We call mechanisms that have teleological functions
functional mechanisms.1 A sound integration of psychology and neuroscience requires
an adequate account of functional mechanisms and their teleological functions.
This chapter sketches an ontologically serious account of functional mechanisms that provides the ontological foundation for the account of biological functions proposed by Garson and Piccinini (2014), generalizes it, and extends it to artifact functions and non-biological organismic functions. By ontologically serious, we mean that it begins with
an independently motivated ontology and, insofar as possible, it grounds a system’s
functions in objective properties of the system or the population to which it belongs, as
opposed to features of the epistemic or explanatory context of function attribution.
Put simply: on our account, functions are an aspect of what a system is, rather than an
aspect of what we may or may not say about that system.

1  Garson (2013) independently introduced the same term for the same reason.
2.  Teleological Functions


It may seem unremarkable that coffeemakers—artifacts designed with a purpose—
are for making coffee. To be sure, they do many things: they generate heat, they
weigh down what they are placed upon, they reflect light, and they make coffee. But
only one of these is a function of a coffeemaker, as indicated by its name. What is
remarkable is that organisms—which are not designed at all—have parts that have
functions in what appears to be the same sense. A stomach also does many things—it
digests, gurgles, and occasionally aches. But a stomach is for digesting, which is to
say that one of its functions is digestion. What a trait or part of an organism is for, as
opposed to the other things it does, is its teleological function. When a trait fails to
perform its teleological function at the appropriate rate in an appropriate situation,
it malfunctions. From now on, we will simply use the term “function” for teleological
function (unless otherwise noted).
As commonplace as the notion of function may be, the correct account is not settled.
While we lack the space to do justice to current accounts, we will briefly indi-
cate why we are unsatisfied with them.2
Etiological approaches have been attractive to many: roughly, what determines the
function of a stomach here and now is the reproductive history of ancestral organisms
whose stomachs did whatever allowed them to survive (examples include Millikan
1989; Neander 1991; Griffiths 1993; Godfrey-Smith 1994; and Schwartz 2002). Thus,
the stomachs in organisms alive today have the function of digesting, and not gurgling,
because it was digesting (and not gurgling) that allowed the ancestors of those
organisms to reproduce.3 According to selectionist accounts—which are similar to
etiological accounts—what determines the function of a trait is the selection process
that causes a trait in a system to be selectively reproduced or retained (Wimsatt 2002;
Garson 2016, chapter 3). With respect to artifacts, a sophisticated etiological approach
maintains that what determines the function of a coffeemaker here and now is the way
past coffeemakers were used; that use, in turn, contributed to the “reproduction” of
coffeemakers (Preston 2013). Etiological (and selectionist) accounts of function may
be useful in certain contexts, but they are inadequate for our purposes for two reasons,
one epistemological and one metaphysical.
The main problem with etiological accounts of biological functions is that the causal
histories that ground functions on these accounts are often unknown (and in many
cases, unknowable), making function attribution difficult or even impossible. While
our ignorance does not preclude the ontological reality of functions, functions are very
often correctly attributed (or so it seems—quite compellingly) in the absence of any

2  Many others have offered objections similar to some of ours. For an excellent review of the vast philosophical literature on biological functions, see Garson (2016).
3  Etiological accounts of function come in both strong and weak versions, depending upon whether the function was selected for (strong) or merely contributed to the organism’s reproduction (weak) (Buller 1998).
knowledge of a system’s causal history. Learning about its causal history can, at best,
show that a function has stayed the same or changed over time: learning about that
causal history does not lead to changing the current attribution of functions to a cur-
rent system.4 Thus, etiological accounts do not do justice to the practices of sciences
that study the functions of a trait or part without regard to evolutionary history (e.g.,
psychology, neuroscience, functional anatomy, physiology, etc.).
Another problem with etiological accounts, which affects their treatment of both
biological and artifact functions, is that they violate what we take to be an important
metaphysical principle concerning causal powers. Consider a real US coin and its per-
fect molecule-for-molecule duplicate. One is genuine, the other a counterfeit, and what
determines which is which is their respective causal histories. Thus, in one sense there
is a real difference between these two entities: they have different kinds of histories.
There may even be a way of characterizing this difference as a physical difference if
we think of objects in a four-dimensionalist way (e.g., Sider 1997). Nevertheless, the
difference between the genuine and the counterfeit cannot result in a difference between
the causal powers of the two. We could not build a vending machine that accepted one
but not the other, and it would be misguided to demand that a physicist or chemist
devise a method for detecting such counterfeits. Similarly, the history of an organism’s
ancestors cannot contribute to the causal powers of that organism’s traits or parts. If a
tiger were to emerge from the swamp following a lightning strike (à la Davidson’s
swamp man), its stomach would have the power, and thus the function, of digesting
even though it had no ancestors whatsoever.
Etiological theorists may reply that considerations about causal powers are question-
begging. For them, attributing a function to a trait is not the same as attributing
current causal powers. Rather, it’s precisely analogous to calling something a genuine
US coin—it says something about its origin and history and thus distinguishes it from
counterfeit coins (mutatis mutandis, from swamp organisms), regardless of any other
similarities in causal powers between genuine and fake coins. But this reply only high-
lights that, insofar as etiological theorists are interested in functions, they are interested
in a different notion of function than we are. We are after functions that are grounded
in the current causal powers of organisms and their environments, and thus can be
discovered by studying those causal powers. We are after the functions discovered and
ascribed by psychologists and neuroscientists, who have access to the current causal
powers of organisms and their environments, not their evolutionary history. In other
words, we are after functions that can be shared among organisms, swamp organisms,
and artifacts, regardless of their exact reproductive or selection histories.

4  If the current function of some trait is unknown, investigation into the history of that trait may help
discover its current function. And if by mistake a trait is attributed a function it does not in fact have, know-
ing the history of that trait may help correct the mistake. But a trait’s causal history does not affect current
function. A physiologist may discover that the stapes has the function of transmitting vibrations to the oval
window of the ear without knowing anything about its causal history. That this same bone once had the
function of, say, supporting part of the jaw does not affect its current function.
To be sure, there are accounts that do not rely on causal histories, but we find them
inadequate for our purposes. Causal role accounts (such as Cummins  1975 and
Craver 2001) reject teleological functions and instead consider functions as causal
contributions to an activity of a complex containing system. As a consequence,
according to causal role accounts, everything or almost everything (of sufficient com-
plexity) ends up having functions, which clashes with the fact that only organisms
and artifacts have functions in the sense that we are interested in, that is, the sense in
which things can malfunction.
One way of fixing this weakness is to appeal to explanatory interests and perspec-
tives (Hardcastle 1999; Craver 2012). From the perspective of the survival of organ-
isms, the function of the heart is to pump blood. From the perspective of diagnosing
heart conditions, the function of the heart is to make thump-thump noises. From the
perspective of the mass of the organism (to speak loosely), the function of the heart is
to contribute a certain amount of that mass. And so on. This perspectivalism makes
functions observer-dependent and hence subjective rather than objective. But
functions seem perfectly objective.
Contrary to perspectivalism, a function of the heart is to pump blood, not to make
noises or to possess a certain mass, and it has this function even if there is no one
around to observe it. Some traits do have multiple functions, but not in virtue of mul-
tiple perspectives. From the very same perspective—the perspective of identifying the
functions of the medulla oblongata—the medulla oblongata has many functions,
including regulating breathing, circulation, and blood pressure, initiating the gag
reflex, and initiating vomiting. The reason we are sometimes interested in the noises
the heart makes—the reason we listen to the heart’s noises at all—is not that the noises
are another function of the heart in addition to pumping blood; rather, the noises are
useful in diagnosing how well the heart performs its function: to wit, pumping blood.
Consider what would happen if a cardiologist discovered that a completely silent heart
nevertheless pumps blood perfectly; the cardiologist would not declare that a new kind
of heart malfunction has been discovered; rather, she would try to figure out how this
heart can perform its function silently. The converse—the perfectly sounding, thump-
thumping heart that pumped blood poorly—would be considered malfunctioning.
In summary, perspectivalism does not do justice to the perspectives we actually
take in the biological sciences. If we could identify non-teleological truthmakers for
teleological claims, we would avoid perspectivalism and deem functions real without
deeming them mysterious. That is our project.
Accounts that ground artifact functions in terms of the intentions of designers and
users (e.g., Houkes and Vermaas 2010) face the problem that intentions are neither
necessary nor sufficient for artifacts to have functions (here, we mean “intention” in
the sense articulated in, for example, Anscombe 1957 and Searle 1983). That intentions
are insufficient to confer genuine functions is illustrated by artifacts such as amulets
and talismans, whose designers and users have all the right intentions and plans for
their proper use, yet lack genuine functions (more on this below). That intentions are
unnecessary to confer functions is illustrated especially well by artifacts created by


non-human animals such as spiders and termites.
Accounts that identify functions with propensities (Bigelow and Pargetter 1987)
cannot account for malfunctioning items, which have a function yet lack the pro-
pensity to perform it. Accounts based on special features of living systems (e.g.,
self-maintenance, self-preservation, reproduction; cf. Albert et al. 1988; Christensen
and Bickhard 2002; McLaughlin 2001; Mossio et al. 2009; Schlosser 1998; Schroeder
2004) are on the right track and we will retain what we take to be right about them.
Goal-contribution accounts (such as Nagel 1977; Adams 1979; Boorse 2002) are
right that functions contribute to a system’s goal(s). That is the core idea behind our
account as well. But traditional goal-contribution accounts maintain one or more of
the following: that a system has functions only if it is goal-directed, that a system is
goal-directed only if it is guided via feedback control, or that a system is goal-directed
only if it represents its goals. The problem is that plenty of things—e.g., doormats—
have functions without being goal-directed, without being guided via feedback
­control, or without representing goals. Thus we need a more inclusive account of
goals and the relation between goals and functions than those offered by traditional
goal-contribution accounts.
Unlike the account we are about to propose, previous accounts—even when they are
on the right track—often suffer from one or more of the following: a lack of a plausible
ontology, a lack of coordination with a multi-level mechanistic framework, or a lack of
a unified treatment of both organismic functions and artifact functions. We have already
mentioned the benefits of an ontologically serious account; the utility of including a
multi-level mechanistic framework is obvious for our purpose in this chapter. What
about unifying organismic and artifact functions? While some have argued against
this possibility (Godfrey-Smith 1993; Lewens 2004), these arguments have primarily
been concerned with etiological accounts. A unified account provides a desirable
foundation for taking seriously the analogies between the functions of biological traits
and the functions of artifacts that are part and parcel of many biological explanations
(e.g., the heart is a pump, the brain is a computer). A unified account is also more
parsimonious and elegant, so that is what we offer here.

3.  Ontological Foundations


We assume an ontology of particulars (entities) and their properties understood
as causal powers.5 We remain neutral on whether properties are universals or modes
(tropes). A similar account could be formulated in terms of an ontology of properties
alone, with entities being bundles thereof, or in terms of processes.

5  A special version of this ontology is articulated by Heil (2003, 2012) and Martin (2007). Heil and
Martin also equate causal powers with qualities. Since qualities play no explicit role in the present chapter,
we prefer to stay neutral on the relationship between causal powers and qualities.
Activities are manifestations of properties (powers). Some have objected that we can only observe activities and not powers and hence activities must be fundamental
(e.g. Machamer 2004). We set this concern aside on the grounds that activities may be
evidence of powers.
When many entities are organized together in various ways, they form (constitute)
more complex entities. Such complex entities have their own properties (causal
­powers), which are constituted by the way the causal powers of the constituting
entities are organized and perhaps modified by the way such entities and their causal
powers are organized. For instance, when atoms chemically bond to one another, they
form molecules with properties constituted by those of the individual atoms, includ-
ing properties of individual atoms that have changed because they are so bonded. The
subsystems that constitute complex entities and make stable causal contributions to
their behavior may be called mechanisms (Craver 2007; Machamer et al. 2000).
The causal powers of mechanisms have special subsets; they are special because
they are the causal powers whose manifestations are their most specific (peculiar, char-
acteristic) interactions with other relevant entities. These are the most characteristic
“higher-level” properties of complex entities and their mechanisms (Piccinini and
Maley  2014). Since higher-level properties are subsets of the causal powers of the
lower-level entities that constitute the higher-level entities, they are no “addition of
being” (in the sense of Armstrong 2010) over and above the lower-level properties and
thus they do not run into problems of ontological redundancy such as causal exclusion
(Kim 2005). Thus we establish a series of non-redundant levels of entities and proper-
ties, each level constituted by lower-level entities and properties.
Organisms are a special class of complex entities. What is special about them is some
of their general properties. First, they are organized in ways such that individual
organisms preserve themselves and their organization for significant stretches of time.
Organisms accomplish this by collecting and expending energy in order to maintain a
set of states in the face of various types of disruption. For example, mammals expend
energy in order to maintain their body temperature within a certain range; without
these homeostatic mechanisms, fluctuations in temperature outside of a very narrow
range would disrupt activities necessary for mammalian life. We call the characteristic
manifestation of this first property survival. Many organisms also reproduce—that is,
they make other organisms similar to themselves by organizing less complex entities.
In some cases, although individual organisms are not organized to reproduce, they are
organized to work toward the preservation of their kin. Honeybee workers, for example,
are infertile, but still contribute to the survival and reproduction of the other members
of their hive. We call the characteristic manifestation of either of these latter properties
inclusive fitness (in the sense introduced by Hamilton 1964a, 1964b).6

6  Inclusive fitness is fitness due either to personal reproduction or reproduction of genetic relatives.
Some organisms do not pursue their inclusive fitness at all. They are the exception that proves the rule: if
their parents had not pursued inclusive fitness . . .
Survival and inclusive fitness as we have characterized them are necessary for the
continued existence of organisms. Although individual organisms can last for a while
without reproducing and even without having the ability to reproduce, these individ-
uals will eventually die. If no individuals reproduced, there would soon be no more
organisms. Even quasi-immortal organisms (i.e., organisms that have an indefinitely
long lifespan) will eventually be eaten by a predator, die of a disease, suffer a fatal acci-
dent, or succumb to changed environmental conditions. So, barring truly immortal,
god-like creatures impervious to damage (which are not organisms in the present
sense anyway), survival and inclusive fitness are necessary for organisms to exist.
That these two properties are essential for the existence of biological organisms is
obviously a special feature of them both. Another special feature is that the manifest-
ation of these properties requires organisms to expend energy. We call the state toward
which such a special property manifestation is directed, and which requires work on
the part of the organism via particular mechanisms, an objective goal of the organism.
This is reminiscent of the cybernetic accounts of goal-directedness as control over
perturbations (Rosenblueth et al. 1943; Sommerhoff 1950). While we are sympathetic
to cybernetic accounts of goal-directedness and rely on them in our appeal to goals, we
depart from previous goal-contribution accounts of functions (Nagel 1977; Adams
1979; Boorse 2002) because we do not maintain that being goal-directed is sufficient to
have functions. Instead, we ground functions directly in the special organismic goals
of survival and inclusive fitness.
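The cybernetic idea of a goal as a state that a system does work to maintain against perturbation can be illustrated with a minimal feedback-control sketch (in Python; the set point, gain, and disturbance values are invented, and the thermoregulation example merely echoes the body-temperature case mentioned above).

# Minimal sketch of goal-directedness as control over perturbations
# (in the cybernetic sense): a regulated variable is held near a set point
# by corrective work. All numbers are invented for illustration.

SET_POINT = 37.0          # goal state (e.g., core temperature in degrees C)
GAIN = 0.5                # how strongly the mechanism corrects deviations

def regulate(temp, disturbance):
    """One time step: an external perturbation displaces the state,
    then the homeostatic mechanism does work proportional to the error."""
    temp += disturbance
    error = SET_POINT - temp
    work = GAIN * error            # energy expended to push the state back
    return temp + work, abs(work)

temp = 37.0
for disturbance in (0.0, -1.5, -1.5, 0.0, +2.0, 0.0):
    temp, work = regulate(temp, disturbance)
    print(round(temp, 2), round(work, 2))
# The state is pulled back toward the set point after each perturbation;
# without the corrective work, perturbations would accumulate and the
# regulated state would not be maintained.

This is only an illustration of the cybernetic notion just mentioned, not a model of any particular organismic mechanism.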
Note that ours is a technical sense of “goal,” which does not entail any kind of goal
representation, mental or otherwise. Furthermore, there is no requirement that goals
always be achieved: all that is required is that these goals are a sort of state toward which
the energy expenditure, via mechanisms, must work in order for organisms to exist.
Our suggestion is that this notion of goal can underwrite the primary notion of teleo-
logical function that is used in some sciences. It also seems to underlie much of our
commonsense understanding of the notion.7
It may be objected that there are systems and behaviors that “survive” but lack goals
in the relevant sense.8 For instance, a gas leak may poison any plumber who tries to fix
it, thereby preserving itself. Does it follow that, on our account, the gas leak has the
objective goal of surviving? Or consider a crack addict who steals in order to buy crack,
thereby preserving the addiction, and sells crack to others, thereby “reproducing” crack
addiction. Does it follow that, on our account, a crack addiction has the objective goal
of maintaining and reproducing itself?

7  Two interesting questions are whether there are objective goals other than the survival and inclusive
fitness of organisms and whether entities other than organisms and their artifacts have functions.
Additional objective goals may include the survival of the group, or of the entire species, or of the entire
biosphere. Additional entities with functions may include genes, groups, societies, corporations, ecosys-
tems, etc. We leave the exploration of these possibilities to another occasion. For now, we limit ourselves to
the paradigmatic cases of teleological functions of organisms and their artifacts based on their objective
goals of survival and inclusive fitness.
8  Thanks to Carl Craver for this objection.
These two putative counterexamples are quite different. As to the gas leak, it does not
reproduce and does not pursue its inclusive fitness. (It also does not extract energy
from its environment in order to do work that preserves its internal states, etc.)
Therefore, it is not an organism in the relevant sense. If there were artificial systems
that did pursue survival and inclusive fitness in the relevant sense (i.e., self-reproduc-
ing machines), they would be organisms in the relevant sense and they would have
objective goals (more on this below). As to crack addiction, its preservation and repro-
duction are not objective goals because they are detrimental to the survival of the
addicts and yet they depend on the objective goal of survival in a way in which survival
does not depend on them. That is, crack addiction requires organismic survival in
order to persist and reproduce; but organismic survival itself does not require crack
addiction (quite the contrary). In a rather loose sense, someone might think of crack
addiction as a kind of parasite, and thus as a special kind of organism that may have
objective goals. But this sense is so loose as to be unhelpful. To name just one immedi-
ate difficulty, it is quite problematic to reify the complex pattern of behaviors, disposi-
tions, desires, etc. that constitute an addiction as an entity separate from the addict.
A better way of addressing cases like crack addiction is to treat them as subjective goals of some organisms (more on this below).
Another objection runs as follows:9 Having mass is necessary for survival, whereas
survival is not necessary for having mass; by parity of reasoning with the crack addic-
tion case, it seems to follow that survival is not an objective goal, whereas having mass
is. But having mass is necessary for survival only in the sense that massless objects can-
not organize themselves into organisms. It takes a special organization for massive
objects to turn themselves into organisms. When they organize themselves into organ-
isms, such suitably organized massive objects either survive or else they perish. Thus,
there is a big difference between having mass and surviving. Only the latter is a defining
characteristic of organisms, which distinguishes them from other systems. Having mass
is something organisms share with many other systems, which are not goal-directed
towards survival. But notice that organisms may need to maintain their mass within a
certain range in order to stay alive. If so, then maintaining their mass within that range
is a subsidiary goal of organisms, which serves the overarching goal of survival.

4.  Teleological Functions as Contributions to Objective Goals of Organisms
We now have the ingredients for an account of teleological functions in organisms.
A teleological function in an organism is a stable contribution by a trait (or compo-
nent, activity, property) of organisms belonging to a biological population to an
objective goal of those organisms.

9  Thanks to Kent Staley for this objection.

Construed generally, a trait’s function (and sometimes the successful performance of that function) depends on some combination of the organism and its environment
(which may include other organisms: kin, conspecifics, or unrelated organisms, speci-
fied or unspecified). In other words, the truthmakers for attributions of functions to an
organism’s trait are facts about the organism and its environment. Different types of
function depend on factors outside the organism to different extents. It is worth consid-
ering some real-world examples in order to get a feel for the range of facts upon which
functions can depend. The contexts and frequencies required for traits to have func-
tions, for their functions to be performed at an appropriate rate in appropriate situ-
ations, and for traits to malfunction, are made precise in Garson and Piccinini (2014).
Some functions, such as the blood-pumping of tiger hearts that pump at an appropri-
ate rate, depend primarily on the individual organisms: blood-pumping contributes to
the survival of a single tiger, independent of the existence of any other organisms. But
this function also depends on the environment: the tiger must be in the right kind of
atmosphere with the right pressure, located in the right kind of gravitational field, etc.
If, in an appropriate situation, a previously well-functioning tiger’s heart were to
stop pumping blood, then the tiger would die; we can safely say its heart has malfunc-
tioned (or is malfunctioning). A less severe malfunction would result if the tiger’s heart
were to pump at an inappropriate rate. Determining the appropriate situations for a
trait’s functioning and the appropriate rate at which a trait ought to function in an
appropriate situation may require comparing the rates of functioning of different trait
tokens of the same type in different organisms in the same population. The trait tokens
that provide a sufficient contribution to the objective goals of an organism are the well-functioning ones; the others are malfunctioning. Thus, whether a trait has a function,
and thus a malfunction, may depend on the way other traits of the same type function
in other organisms.10 In addition, the environment is important here because what
may be a malfunction in one environment might not be in some other environment.
An enlarged heart on Earth would result in a malfunction; but in an environment with,
say, higher atmospheric pressure, a non-enlarged heart might be a malfunction.
Nanay (2010) objects that comparing a trait token to other traits of the same type in
order to determine its function requires a function-independent way of individuating
trait types, and he argues that there is no function-independent way of individuating
types. We believe that there are function-independent ways of individuating types. But
we won’t defend this thesis here because, pace Nanay, there is no need for a (purely)
function-independent way of individuating traits.
To see why, consider a biological population. Begin with a function—that is, begin
with a certain stable contribution to the pursuit of an objective goal of the organisms

10  What if an organism is the last survivor of its species? We can include organisms that lived in an organism’s past as part of the truthmaker for attributions of functions to its traits. Nothing in our account requires that all organisms relevant to function attribution live at the same time. This feature does not turn our account into an etiological account, however, because our account does not make the reproductive history of a trait (let alone selection for certain effects of a trait) constitutive of its function.

in that population. Then, find the traits that perform that function. Find the well-
functioning trait tokens, each of which performs the same function. The trait tokens
can be typed together because they perform the same function (as well as by their
morphological and homological properties). Now that we have established a type, we
can type other (less-than-well-functioning) traits: they belong in the same type inso-
far as they share a combination of the following: less-than-appropriate performance
of their function, morphological properties, and homological properties. By typing
well-functioning trait tokens first and then typing less-than-well-functioning tokens
later, we need not rely on a function-independent way of individuating trait types.
This way of individuating functional types may well recapitulate the way func-
tions are discovered and attributed empirically, but that is not our point. Our point is
that there is an asymmetric ontological dependence between the functions of mal-
functioning tokens and the functions of well-functioning tokens. The functions of
malfunctioning tokens are grounded in part in the functions of well-functioning
tokens (which in turn are constituted by some of their causal powers), but not vice
versa. In other words, the truthmakers for functional attributions to malfunctioning
tokens include the causal powers of well-functioning tokens, but not vice versa.
Some functions depend on the environment because they depend on other species.
Consider the eyespots of the Polyphemus moth, which have the function of distracting
would-be predators (a contribution to the moth’s survival). This function depends on
the existence of would-be predators disposed to be distracted by these eyespots.
Another example is the giant sphinx moth, which has a proboscis long enough to drink
nectar from the ghost orchid. The function of the proboscis is to draw in nectar from
this particular orchid species. In both cases, these traits would have no function with-
out the environment: the eyespots would not have the function of distracting would-
be predators if there were no would-be predators, and the proboscis would not have
the function of drawing up ghost orchid nectar if there were no ghost orchids. If the
environment were different, the traits might have no function, or might even acquire
new functions.11
Finally, some functions—particularly those that contribute to inclusive fitness—
depend on the existence of kin, and often other species as well. The female kangaroo’s
pouch, for example, has the function of protecting the young joey as it develops and
nurses. If there were no joeys, the pouch would have no function. The stingers of hon-
eybee workers have the function of stinging, which deters would-be hive intruders.
This is certainly not a contribution to the honeybee’s survival—using a stinger usually
results in the death of the individual honeybee—but it is a contribution to the survival
of its kin, and hence to its inclusive fitness. Thus, the stinger’s function depends on the
existence of both the honeybee’s kin and would-be hive intruders.

11  This accords well with how biologists describe the evolutionary beginnings of vestigial structures: the environment in which a trait once had a function changes, leaving the trait with no function (often resulting in non-adaptive evolutionary changes to the trait). The vestigial hind legs of the blue whale have no function, but these hind legs presumably did for the whale’s land-dwelling ancestors.

As many have pointed out (e.g., Craver  2007), organisms contain mechanisms
nested within mechanisms: mechanisms have components, which are themselves
mechanisms, which themselves have components, etc. Mechanisms and their compo-
nents have functions, and the functions of components contribute to the functions of
their containing mechanisms. Thus, a contribution to an objective goal may be made
by the organism itself (via a behavior), or by one of its components, or by one of its com-
ponents’ components, and so on.
Examples of mechanism hierarchies abound in neuroscience. One such example is
the way various species of noctuid moth avoid bat predation by way of their tympanic
organ (Roeder (1998) describes the discovery of this organ’s function and the mech-
anism responsible). Roughly, this organ’s function is detecting approaching bats; when it detects one, it sends signals to the moth’s wings, initiating evasive maneuvers, often allow-
ing the moth to avoid the oncoming predator (turning away from a bat that is some
distance away, and diving or flying erratically when a bat is very close). The tympanic
organ does this by responding differentially to the intensity of a bat’s ultrasonic
screeches (which the bat uses for echolocation). When we look at the components
of this organ, we can identify mechanisms—and their functions—within these com-
ponents. For example, the so-called A-neurons have the function of generating action
potentials in response to ultrasonic sounds hitting the outer tympanic membrane.
These action potentials are part of the tympanic organ’s activity, which in turn drives
the moth’s response to the predator. We can then look at the mechanisms of compo-
nents of these neurons, such as ion channels, that have the function of allowing ions to
flow into or out of the neuron. Each of these components has a functional mechanism
that contributes, either directly (e.g., the initiation of evasive maneuvers) or indirectly
(e.g., allowing the flow of ions), to the objective goal of survival. All of these functions
are more than a matter of mere explanatory interest, or part of an analysis of one sys-
tem or other: these functions are contributions to the survival of the organism.
There is a related notion of functionality in organisms that our framework
accounts for. Sometimes physiologists distinguish between functional and non-
functional conditions based on whether a condition is found in vivo (in the living
organism) or only in vitro (in laboratory preparations).12 Presumably, the underlying
assumption is that unless a condition is found in the living organism, it is unlikely
that anything that happens under that condition makes a contribution