
Bioarchaeology

Interpreting behavior from


the human skeleton
Clark Spencer Larsen
The study of human remains recovered from archaeological
sites facilitates the interpretation of lifetime events such as
disease, physiological stress, injury and violent death, physical
activity, tooth use and diet, and the demographic history of
once-living populations. This is the first comprehensive syn-
thesis of the emerging field of bioarchaeology. A central theme is
the interaction between biology and behavior, underscoring the
dynamic nature of skeletal and dental tissues, and the influences
of environment and culture on human biological variation. The
book emphasizes research results and their interpretation,
covering paleopathology, physiological stress, skeletal and
dental growth and structure, the processes of aging and bio-
distance. It will be a unique resource for students and
researchers interested in biological and physical anthropology
or archaeology.
Cambridge Studies in Biological Anthropology 21
Bioarchaeology
Cambridge Studies in Biological Anthropology
Series Editors
G. W. Lasker Department of Anatomy, Wayne State University,
Detroit, Michigan, USA
C. G. N. Mascie-Taylor Department of Biological Anthropology,
University of Cambridge
D. F. Roberts Department of Human Genetics, University of
Newcastle-upon-Tyne
R. A. Foley Department of Biological Anthropology, University of
Cambridge
Also in the series
G. W. Lasker Surnames and Genetic Structure
C. G. N. Mascie-Taylor and G. W. Lasker (editors) Biological Aspects
of Human Migration
Barry Bogin Patterns of Human Growth
Julius A. Kieser Human Adult Odontometrics - The study of variation in
adult tooth size
J. E. Lindsay Carter and Barbara Honeyman Heath Somatotyping -
Development and applications
Roy J. Shephard Body Composition in Biological Anthropology
Ashley H. Robins Biological Perspectives on Human Pigmentation
C. G. N. Mascie-Taylor and G. W. Lasker (editors) Applications of
Biological Anthropology to Human Affairs
Alex F. Roche Growth, Maturation, and Body Composition - The Fels
Longitudinal Study 1929-1991
Eric J. Devor (editor) Molecular Applications in Biological Anthropology
Kenneth M. Weiss The Genetic Causes of Human Disease - Principles
and evolutionary approaches
Duane Quiatt and Vernon Reynolds Primate Behaviour - Information,
social knowledge, and the evolution of culture
G. W. Lasker and C. G. N. Mascie-Taylor (editors) Research Strategies
in Biological Anthropology - Field and survey studies
S. J. Ulijaszek and C. G. N. Mascie-Taylor (editors) Anthropometry: the
individual and the population
C. G. N. Mascie-Taylor and B. Bogin (editors) Human Variability and
Plasticity
S. J. Ulijaszek Human Energetics in Biological Anthropology
R. J. Shephard and A. Rode The Health Consequences of
'Modernisation'
M. M. Lahr The Evolution of Modern Human Diversity
L. Rosetta and C. G. N. Mascie-Taylor (editors) Variability in Human
Fertility
G. R. Scott and C. Turner II The Anthropology of Modern Human
Teeth
Bioarchaeology
Interpreting behavior from the
human skeleton
CLARK SPENCER LARSEN
Department of Anthropology and Research Laboratories of
Anthropology
The University of North Carolina, Chapel Hill
CAMBRIDGE
UNIVERSITY PRESS
PUBLISHED BY THE PRESS SYNDICATE OF THE UNIVERSITY OF CAMBRIDGE
The Pitt Building, Trumpington Street, Cambridge, United Kingdom
CAMBRIDGE UNIVERSITY PRESS
The Edinburgh Building, Cambridge CB2 2RU, UK
40 West 20th Street, New York, NY 10011-4211, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
Ruiz de Alarcón 13, 28014 Madrid, Spain
Dock House, The Waterfront, Cape Town 8001, South Africa
http://www.cambridge.org
© Cambridge University Press 1997
This book is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without
the written permission of Cambridge University Press.
First published 1997
Reprinted 1998
First paperback edition 1999
Reprinted 2000, 2003
Typeset in Times 10/12.5 pt [vN]
A catalogue record for this book is available from the British Library
Library of Congress Cataloguing in Publication data
Larsen, Clark Spencer.
Bioarchaeology: interpreting behaviour from the human skeleton /
Clark Spencer Larsen.
p. cm. - (Cambridge studies in biological anthropology)
Includes bibliographical references (p. ) and index.
ISBN 0 521 49641 1 (hb)
1. Human remains (Archaeology) 2. Human skeleton - Analysis.
I. Title. II. Series.
CC77.B8L37 1997
599.97 - dc21 96-51571 CIP
ISBN 0 521 49641 1 hardback
ISBN 0 521 65834 9 paperback
Transferred to digital printing 2004
For Chris and Spencer
Contents

Acknowledgments xi

1 Introduction 1

2 Stress and deprivation during the years of growth and development and adulthood 6
2.1 Introduction 6
2.2 Growth and development: skeletal 8
2.3 Growth and development: dental 23
2.4 Skeletal and dental pathological markers of deprivation 29
2.5 Adult stress 56
2.6 Summary and conclusions 61

3 Exposure to infectious pathogens 64
3.1 Introduction 64
3.2 Dental caries 65
3.3 Periodontal disease (periodontitis) and tooth loss 77
3.4 Nonspecific infection 82
3.5 Specific infectious diseases: treponematosis, tuberculosis, and leprosy 93
3.6 Summary and conclusions 107

4 Injury and violent death 109
4.1 Introduction 109
4.2 Accidental injury 110
4.3 Intentional injury and interpersonal violence 119
4.4 Medical care and surgical intervention 152
4.5 Interpreting skeletal trauma 154
4.6 Summary and conclusions 159

5 Activity patterns: 1. Articular and muscular modifications 161
5.1 Introduction 161
5.2 Articular joints and their function 162
5.3 Articular joint pathology 162
5.4 Nonpathological articular modifications 185
5.5 Nonarticular pathological conditions relating to activity 188
5.6 Summary and conclusions 193

6 Activity patterns: 2. Structural adaptation 195
6.1 Bone form and function 195
6.2 Cross-sectional geometry 197
6.3 Histomorphometric biomechanical adaptation 219
6.4 Behavioral inference from whole bone measurements 220
6.5 Summary and conclusions 224

7 Masticatory and nonmasticatory functions: craniofacial adaptation 226
7.1 Introduction 226
7.2 Cranial form and function 227
7.3 Dental and alveolar changes 242
7.4 Dental wear and function 247
7.5 Summary and conclusions 268

8 Isotopic and elemental signatures of diet and nutrition 270
8.1 Introduction 270
8.2 Isotopic analysis 271
8.3 Elemental analysis 290
8.4 Methodological issues in bioarchaeological chemistry 300
8.5 Summary and conclusions 300

9 Historical dimensions of skeletal variation: tracing genetic relationships 302
9.1 Introduction 302
9.2 Classes of biodistance data 305
9.3 Biohistorical issues: temporal perspectives 310
9.4 Biohistorical issues: spatial perspectives 324
9.5 Summary and conclusions 331

10 Changes and challenges in bioarchaeology 333
10.1 Introduction 333
10.2 Sample representation 334
10.3 Data recording standards 340
10.4 Bioarchaeology and cultural patrimony 341

References 343
General index 432
Site index 459
Acknowledgments
The writing of this book was fostered by my involvement in a series of
interdisciplinary research programs undertaken in the southeastern
(Florida and Georgia) and western (Nevada) United States. I thank my
collaborators, colleagues, and friends who have been involved in this
exciting research. In regard to fieldwork, the following individuals and
projects figured prominently in the development of this book: David Hurst
Thomas on St. Catherines Island, Georgia; Jerald Milanich and Rebecca
Saunders on Amelia Island, Florida; Bonnie McEwan at Mission San Luis
de Talimali in Tallahassee, Florida; and Robert Kelly in the western Great
Basin, Nevada. A number of individuals deserve special thanks for their
valuable contributions to the study of human remains from these regions:
Christopher Ruff, Margaret Schoeninger, Dale Hutchinson, Katherine
Russell, Scott Simpson, Anne Fresia, Nikolaas van der Merwe, Julia
Lee-Thorp, Mark Teaford, David Smith, Inui Choi, Mark Griffin,
Katherine Moore, Dawn Harn, Rebecca Shavit, Joanna Lambert, Susan
Simmons, Leslie Sering, Hong Huynh, Elizabeth Moore, and Elizabeth
Monahan.
I thank the Edward John Noble Foundation, the St. Catherines Island
Foundation, Dr and Mrs George Dorion, the Center for Early Contact
Period Studies (University of Florida), the National Science Foundation
(awards BNS-8406773, BNS-8703849, BNS-8747309, SBR-9305391, SBR-
9542559), and the National Endowment for the Humanities (award
RK-20111-94) for support of fieldwork and follow-up analysis. A research
leave given to me during the fall of 1991 while I was on the faculty at Purdue
University and a fellowship from Purdue's Center for Social and Behav-
ioral Sciences during the spring and summer of 1992 gave me a much
needed breather from teaching and other obligations in order to get a
jump-start on writing this book. Preparation of the final manuscript was
made possible by generous funding from the University of North
Carolina's University Research Council. I acknowledge the support -
institutional and otherwise - of the University of North Carolina's
Research Laboratories of Anthropology, Vincas Steponaitis, Director.
A number of colleagues provided reprints or helped in tracking down key
data or literature sources. I especially thank John Anderson, Kirsten
Anderson, Brenda Baker, Pia Bennike, Sara Bon, Brian Burt, Steven
Churchill, Trinette Constandse-Westermann, Andrea Drusini, Henry
Fricke, Stanley Garn, Alan Goodman, Gisela Grupe, Donald Haggis,
Diane Hawkey, Brian Hemphill, Frank Ivanhoe, Anne Katzenberg, Lynn
Kilgore, Patricia Lambert, Daniel Lieberman, John Lukacs, Lourdes
Márquez Morfín, Debra Martin, Christopher Meiklejohn, Jerome Melbye,
György Pálfi, Thomas Patterson, Carmen Pijoan, William Pollitzer,
Charlotte Roberts, Jerome Rose, Christopher Ruff, Richard Scott, Maria
Smith, Michael Spence, Dawnie Steadman, Vincas Steponaitis, Erik
Trinkaus, Christy Turner, Douglas Ubelaker, John Verano, Phillip Walker,
and Robert Walker.
Various versions of individual chapters and parts of chapters were read by
Kirsten Anderson, Brenda Baker, Patricia Bridges, James Burton, Steven
Churchill, Robert Corruccini, Marie Danforth, Leslie Eisenberg, Alan
Goodman, Mark Griffin, Gary Heathcote, Brian Hemphill, Simon Hillson,
Dale Hutchinson, Anne Katzenberg, Lyle Konigsberg, Patricia Lambert,
Christine Larsen, George Milner, Susan Pfeiffer, Mary Powell, Charlotte
Roberts, Christopher Ruff, Shelley Saunders, Margaret Schoeninger, Mark
Spencer, Mark Teaford, and Christine White. Ann Kakaliouras, Jerome
Rose, and Phillip Walker generously donated their time in the reading of and
commenting on the entire manuscript. I am indebted to all of the readers for
their help in improving the clarity, organization, and content of the book.
The bibliography was organized by Elizabeth Monahan. Ann
Kakaliouras compiled the index. Patrick Livingood helped in the prepara-
tion of figures. I thank the following colleagues for providing photographs
and figures: Stanley Ambrose, Kirsten Anderson, David Barondess, Brian
Hemphill, Charles Hildebolt, Dale Hutchinson, George Milner, Mary
Powell, Christopher Ruff, Richard Scott, Scott Simpson, Holly Smith,
Mark Teaford, Erik Trinkaus, Phillip Walker, and Tim White.
A book like this is not written without a supportive press. I thank the
Syndicate of the Cambridge University Press and the editorial board of the
Cambridge Studies in Biological Anthropology - Robert Foley, Derek
Roberts, C. G. N. Mascie-Taylor, and, especially, Gabriel Lasker - for their
encouragement and comments, especially when I proposed the idea of
writing the book and what it should contain. Most of all, I thank Tracey
Sanderson, Commissioning Editor of Biological Sciences at CUP, for her
help throughout the various stages, from proposal to finished book.
Chapel Hill, North Carolina
28 August 1996
1 Introduction
Many thousands of archaeological human skeletons are curated in institutional repositories throughout the world; at last count, the Smithsonian Institution alone held catalogued records of many thousands of sets of human remains (Loring & Prokopec, 1994). This surfeit of skeletons suggests that teeth and bones have long been studied by biological anthropologists as a valuable source of information about past peoples. There is considerable evidence, however, that this record is underutilized. For the entire southern half of Texas, an area encompassing many thousands of square miles, over 300 mortuary sites have been recorded by archaeologists, but skeletons from only 50 of these sites are described and interpreted in the published literature (Steele & Olive, 1989). In a review of osteological research covering Colorado, Louisiana, Texas, Oklahoma, and Kansas, remains from only eight sites were described in any detail. In that review, Owsley and coworkers observed that 'many archeologists have not appreciated the full potential of osteological research as a source of information on biocultural behavior and human adaptation.' Many of these views persist, as reflected in an archaeologist's statement to a reporter visiting a field school excavation in Colorado: '"Human bones don't provide that much information. After all, we know that they are Indians"' (1989:122).

Prehistoric remains are not the only ones so dismissed. The historical archaeologist Ivor Noël Hume, for example, advised that 'burials on historical sites are rarely worth much. Unless the circumstances are very special, I would advise quickly covering them; forget you ever saw them' (Noël Hume, 1975:158, emphasis mine). This attitude is also found in other regions of the globe. In Great Britain, for example, workers have remarked that 'Unaware of the potential of human skeletal remains, many archaeologists view them as, at best, an irrelevance, and when encountered in situ, as objects whose excavation is time-consuming and which somehow does not constitute "real" archaeology' (Bush & Zvelebil, 1991:5).
On a more positive note, there is growing evidence to suggest that archaeologists are incorporating skeletal studies into their research designs, where they are used for testing hypotheses and drawing inferences about health, disease, diet, and physical activity (e.g., Ford, 1979; Brown, 1979; Huss-Ashmore et al., 1982; Roberts, 1991; Stirland, 1991; Sobolik, 1994a; Chamberlain, 1994). Additionally, the study of bones and teeth is a dominant area of inquiry within biological anthropology (Lovejoy et al., 1982): up to 1992, about 20% of all manuscript submissions to the American Journal of Physical Anthropology, the primary journal of the discipline, were in the subareas of osteology and paleopathology, more submissions than any other single subarea (Cartmill & Brown). Because the study of biological systems is central to biological anthropology, this strong representation is driven by the fact that the hard tissues, bones and teeth, preserve the greatest amount of biologically relevant information about the deceased. Therefore, future studies of earlier human groups will always rely on information gleaned from skeletons for addressing such issues as physiological stress, diet, and activity patterns, although other important materials encountered in archaeological settings (e.g., plant and animal remains, lithics) serve as complementary sources of information to human remains.
Skeletal samples from archaeological sites are an important source of information for the study of human variation. Skeletal remains from specific localities are more homogeneous, both biologically and in terms of the environments whence they came, than are dissecting room or anatomical skeletal series. Skeletal remains from the latter contexts are from many populations and highly diverse circumstances. The relative homogeneity of archaeological series becomes especially important when conclusions are drawn about intrapopulation variability for a range of topics in which sex and age may be important influencing factors (e.g., biomechanical analyses; see Ruff & Hayes, 1982). Various surveys of human osteology are available (Anderson, 1969; Bass, 1995; Schwartz, 1995; Shipman et al., 1985; Steele & Bramblett, 1988; Ubelaker, 1989; White, 1991). In order to address the incompatibility of researchers' methods and results, 'standards' for skeletal data collection have been developed (Buikstra & Ubelaker, 1994; see also Chapter 10). Although dealing with the interpretive role of human remains, these works serve primarily as 'how to'
guides to bone identification and skeletal analysis and not as resources for
the investigation of broader issues in biological anthropology and sister
disciplines. The present book focusses on the relevance of skeletal remains
to the study of the human condition and human behavior generally;
namely, how skeletal and dental tissues from archaeological settings reveal
life history at both the individual and the population levels. The goal of this
book is to provide a synthesis of bioarchaeology, an emerging discipline
that emphasizes the human biological component of the archaeological
record. Although first applied to archaeozoology (Clark, 1972), the study
of animal remains in archaeological contexts, it has become convention to
use the term bioarchaeology in reference to the study of archaeological
human remains exclusively.
The enormous potential of bioarchaeology for understanding the past
has only recently become realized. This is the case for several reasons. First,
most human osteological analysis has been descriptive and oriented around
case studies. Even for large assemblages of skeletons, osteological reports
tended to overlook pattern and tendency in a population perspective. This
descriptive orientation reflects the historical role of medical practitioners
and their emphasis on diagnostic approaches to the study of ancient human
remains, especially in regard to paleopathology and disease. This is
especially true in the older paleopathology literature, in which diagnostic
case studies predominate. Second, most well documented, large collections
of human remains have been excavated only within the last few decades. A
book like this would not have been possible prior to the last decade or so.
Finally, theoretical and methodological developments underlying the
studies presented here are also quite recent.
This book takes a population perspective. Individual-based case studies
are discussed, especially because collectively they help to build a picture of
biological variability in earlier societies. The population approach is critical
for characterizing patterns of behavior, lifestyle, disease, and other aspects
that form the fabric of the human condition. The discussion in the following
pages also underscores the importance of culture in interpreting population
characteristics. Dietary behavior, for example, is highly influenced by
culture. If an individual is taught that a specific food is 'good' to eat, then the
consumption of that food item becomes fully appropriate in that cultural
context. Other factors mediate the consumption of a food or foods within a
society (e.g., environment, local plants and animals). However, cultural
behavior plays an essential role in determining the diets of a group of people.
Unlike many of the aforementioned guides to osteological analysis, this
book is not methodologically driven, although methodological develop-
ments make possible much of the discussion presented in the following
chapters. I limit the discussion of methodology in order to direct the
reader's attention to research results and how they inform our understand-
ing of the past. Thus, this book is intended to feature the various insights
gained about human behavior and biology rather than to describe or
evaluate specific methods and techniques of skeletal analysis. This ap-
proach is central to the biocultural perspective offered by anthropologists:
we must seek to envision past populations as though they were alive today
and then ask what information drawn from the study of skeletal tissues
would provide understanding of them as functioning, living human beings
and members of populations. This book is not a critical review; it does not
highlight the shortcomings of the field or what bioarchaeologists should be
doing, but are not.
Bioarchaeological findings are important in a number of areas of
scientific and scholarly discourse. Within anthropology, the use of human
remains in interpreting social behavior is especially fruitful in mortuary
studies (e.g., Beck, 1995; Chapman et al., 1981; Goldstein, 1980; Hump-
hreys & King, 1981; O'Shea, 1984). The story human remains tell is also
reaching an audience outside of anthropology. There is an increase in use of
bioarchaeological data in history, economics, and nutrition science. In an
edited book dealing with the effects of changing food production and
consumption in historical settings (Rotberg & Rabb, 1985), a number of
contributors cited data from skeletal studies on nutrition, disease, and
related topics. The economic historian Richard Steckel produced a series of
papers dealing with biological indicators (e.g., stature) of economic
success, nutritional deprivation, and standards of living in a number of
recent human groups (e.g., United States, Sweden, African slaves; see
summary in Steckel, 1993). Scholars who study long-term trends in health
and nutrition typically rely on parish records, plantation records, genealo-
gies, and vital registration data. Recently, they have begun to extend their
base of information to include human skeletal remains. Recent collabor-
ation involving a group of some 40 bioarchaeologists, historians, econom-
ists, demographers, and geographers resulted in an ambitious effort to
track the history of human health and nutrition in the Western Hemisphere
from Precolumbian times to the recent past. Central to their discussions are
data derived from archaeological human remains (Kiple & Tarver, 1992).
The emerging role of skeletal remains in the study of the human condition
has been underscored by the historian John Coatsworth (1996:1), who
highlights the 'masses of evidence' provided from bioarchaeological
investigations and the important role they play in understanding historical
developments.
Breakthroughs have been made in the analysis of other body tissues in archaeological settings, including skin and other soft tissues (e.g., Arriaza, 1995; Brothwell, 1986; Cockburn & Cockburn, 1980; Hansen & Gulløv, 1989; Hansen et al., 1991; Stead et al., 1986). The discussions presented in this book are mostly focussed on skeletal and dental tissues. Building on the study of human remains, the unifying theme in this book is behavioral inference. The discussion of behavior is not limited to physical activity; rather, it is considered in a wider perspective, including (in order of appearance in the book) physiological stress, exposure to infectious agents, injury and violent death, physical activity, dietary and nondietary uses of the face and jaws, dietary reconstruction and nutritional inference, and population history.
Bioarchaeology is represented throughout the world. This book draws upon a sample of this record in illustrating key points and issues. Because my geographic area of expertise is North America, the book is slanted toward skeletal remains from this continent. North America is especially well studied, at least in comparison with many other areas of the globe, where the scientific tradition of bioarchaeology may not be as well established. Although the book has this geographic bias, data from other continents are discussed when key topics are illustrated.
Many points made in the book are addressed by contrasting skeletal samples representing human populations from different levels of political complexity and differing subsistence regimes. Because of the vagaries of dietary reconstruction in the past, anthropologists broadly characterize human groups using terms such as 'foragers' and 'farmers'. The reader should be aware that these terms do not adequately convey the complexity of human subsistence systems. Nevertheless, these characterizations help us to better understand behavioral and adaptive features of groups, and therefore facilitate the reconstruction and interpretation of past lifeways. Of far more importance to the purpose of this book is that such comparisons add an important dimension to the growing discussion in anthropology oriented toward the understanding of the causes and consequences of adaptive and behavioral shifts in the past.
Human skeletal and dental tissues are remarkably sensitive to the environment, providing what Stanley M. Garn referred to as 'a rich storehouse of individual historical events' (1976:454). This book provides a tour of the vast holdings in this storehouse, displaying the knowledge gained about earlier peoples based on the study of their mortal remains.
2 Stress and deprivation during the
years of growth and development
and adulthood
2.1 Introduction
Physiological disruption resulting from impoverished environmental cir-
cumstances - 'stress' - is central to the study of health and well-being and
the reconstruction of adaptation and behavior in earlier and contemporary
human societies (Goodman et al., 1988; Huss-Ashmore et al., 1982). Stress
is a product of three key factors, including (1) environmental constraints;
(2) cultural systems; and (3) host resistance. Goodman and coworkers
(Goodman, 1991; Goodman & Armelagos, 1989; Goodman et al., 1984,
1988) have modeled the interaction of these factors at both the individual
and the population levels (Figure 2.1). This model emphasizes the environ-
ment in providing both the resources necessary for survival and the
stressors that may affect the health of the population. Cultural systems
serve as protective buffers, and they provide behaviors necessary for
extraction of important nutrients and resources from the environment. All
stressors can never be fully buffered; some slip through the filter of the
cultural system. In these instances, the individual may exhibit a biological
stress response observable at the tissue level (bones and teeth). Physiologi-
cal disruption feeds directly back into environmental constraints and
cultural systems. This model makes clear that health is a key variable in the
adaptive process.
Stress has significant functional consequences. Elevated stress can lead
to a state of functional impairment, resulting in diminished cognitive
development and work capacity. The reduction in work capacity can be
detrimental if it impedes the acquisition of essential resources (e.g., dietary)
for the maintenance of the individual and the population. If individuals of
reproductive age are affected by poor health, then decreased fertility may
be the outcome. Ultimately, the success or failure of a population to
mitigate stress has far-reaching implications for behavior and the function-
ing of the society (see also Martin et al., 1991).
Biological anthropologists employ a variety of skeletal and dental stress
indicators which can be measured empirically. Use of multiple indicators
gives a comprehensive understanding of stress and adaptation in the past (Buikstra & Cook, 1980; Goodman & Armelagos, 1989; Huss-Ashmore et al., 1982; Larsen, 1987).

[Figure 2.1. Model of the biological stress response: environmental constraints, cultural buffering systems, and host resistance interact to produce physiological disruption, observable in skeletal and dental tissues at both the individual and population levels.]

The multiple-indicator approach stems from the recognition that health is a composite of nutrition, disease, and other aspects of life history. Contrary to medical models of health, stress and disease (see also Chapter 3) represent a continuum rather than a presence vs. absence phenomenon, with respect to both the population and the individuals who comprise it.
2.2 Growth and development: skeletal
2.2.1 Growth rates
Although generally continuous, growth from birth through adolescence is punctuated by two intensive periods of activity. The first period shows a great increase in growth velocity during infancy, falling soon after the first year of life. The second involves another marked acceleration during adolescence, which then declines and reduces to zero growth as epiphyseal fusion of the long bones (femur, tibia, fibula, humerus, radius, and ulna) and other skeletal elements is complete in early adulthood. Growth rate is widely recognized as a highly sensitive indicator of health and well-being of a community or population (Crooks, 1995; Eveleth & Tanner, 1990; Gracey, 1987; Gray & Wolfe, 1996; Huss-Ashmore & Johnston, 1985). Growth is affected by various factors, such as genetic influences, growth hormone deficiencies, and psychological stress (Eveleth & Tanner, 1990; Gray & Wolfe, 1996), but the preponderance of evidence underscores the influence of environment - especially nutrition - on the growing child. Infectious disease, such as episodic diarrheal disease, can also contribute to poor growth (e.g., Jenkins, 1982; Martorell et al., 1977). Undernutrition and disease have a synergistic relationship: poorly nourished juveniles are more susceptible to infection, and disease and infection reduce the ability of the body to absorb essential nutrients (Keusch & Farthing, 1986; Scrimshaw et al., 1968).
Children raised in impoverished environments in third-world or developing nations generally are small for age (see reviews by Bogin, 1988; Eveleth & Tanner, 1976, 1990; Huss-Ashmore et al., 1982). Among the best documented populations are the Mayan Indians of Guatemala, who show retarded growth in comparison with reference populations (Crooks, 1994). In Guatemala City, Guatemala, well fed upper class children are taller than poorly nourished lower class children (Bogin & MacVean, 1978, 1981, 1983; Johnston et al., 1975, 1976). Additionally, unlike the markedly slower growth in lower class children, upper class children have comparable growth to that of Europeans. The cumulative differences between Mayan and European children are especially pronounced for the period preceding adolescence, suggesting that growth during the early years of childhood may be the most sensitive to the environment in comparison with other life periods (Bogin, 1988). During adolescence, the genetic influence on growth is more strongly expressed (Bogin, 1988).
Children have been growing taller over much of the twentieth century in industrialized countries and in some developing nations. This secular trend in height is related to a variety of environmental and cultural changes, including improvement in food availability and nutrition, sanitation, reduction of infectious disease, and increased access to health care. As environment improves, growth increases. On the other hand, reductions in growth velocity are well documented, especially during times of dietary deprivation in wartime settings, famines, and economic crises (Eveleth & Tanner, 1990; Himes, 1979). This link between growth status and environment is well documented via analysis of historical data. Analyses of heights of British school children from various regions and economic circumstances for the period of 1908 to 1950 show that children were generally shorter in areas experiencing high unemployment (e.g., Glasgow, Scotland) than in other regions with more robust economies (Harris, 1994). These differences were especially pronounced during the severe economic depression in the late 1920s, when the nutritional and general health of children of unemployed parents declined. Similarly, growth velocity and attainment per age increased in post-World War II Japanese children following amelioration of negative conditions (e.g., food shortages; Tanner et al., 1982). An equivalent pattern of growth increase is documented in post-1945 Poland, with relatively greater increases in higher socioeconomic groups (Bielicki & Welon, 1982).
The pattern of juvenile growth in archaeological populations is broadly similar to that in living populations (Armelagos et al., 1972; Boldsen, 1995; Edynak, 1976; Hoppa, 1992; Huss-Ashmore, 1981; Johnston, 1962; Merchant & Ubelaker, 1977; Molleson, 1995; Ribot & Roberts, 1996; Ryan, 1976; Storey, 1992a; Sundick, 1978; Walimbe & Gambhir; Walker, 1969; and see below). The congruence of growth in past and living groups suggests that there have not been major shifts in the general pattern of growth in recent human evolution (Saunders, 1992). Thus, stress in past populations can be inferred on the basis of the identification of deviations in growth from 'normal' modern populations (Johnston & Zimmer, 1989; Saunders, 1992).
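The comparative logic behind such growth studies can be made concrete with a small sketch. The code below is illustrative only: the reference means and standard deviations are hypothetical placeholders rather than actual published length-for-age values, and it simply standardizes an archaeological femur length against a modern reference of the kind cited in this chapter.

```python
# Illustrative sketch of the growth comparison described above. The
# reference values are hypothetical, not real Maresh/Denver data; actual
# studies use published length-for-age tables.

REFERENCE = {
    # dental age (years) -> (mean femur diaphysis length in mm, SD), hypothetical
    2: (180.0, 9.0),
    4: (230.0, 11.0),
    6: (280.0, 13.0),
}

def growth_z_score(age_years: int, femur_length_mm: float) -> float:
    """Standardized deviation of an archaeological femur from the reference."""
    mean, sd = REFERENCE[age_years]
    return (femur_length_mm - mean) / sd

# Consistently negative scores across age classes would suggest growth
# suppression relative to a 'normal' modern population.
print(round(growth_z_score(4, 205.0), 2))  # -2.27: short for age
```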
Analysis of juvenile long bones from prehistoric North America reveals evidence of growth retardation in agricultural and mixed subsistence economies. In children less than six years of age in the prehistoric lower Illinois River valley, matching of femur length to dental age reveals growth suppression in late prehistoric (Late Woodland period) maize agriculturalists in comparison with earlier foragers (Middle Woodland period) (Cook, 1979, 1984). Cook (1984) concluded that the decline in growth was due to a decrease in nutritional status with the shift to a protein-poor maize diet. Children short for age during the later prehistoric period tended to express a higher frequency of stress indicators (e.g., porotic hyperostosis, enamel defects) than children who are tall for age, lending further support for nutritional deficiency as a prime factor contributing to growth retardation. Lallo (1973; see also Goodman et al., 1984) also found a reduction in the growth of femur, tibia, and humerus diaphysis lengths in the Mississippian period (AD 1200-1300) in comparison with earlier periods (AD 950-1200) in the central Illinois River valley. Dietary change during this time involved a shift from mixed foraging and farming to intensive maize agriculture. Growth during the period between two and five years of age was especially slow, which Goodman and coworkers conclude reflects an increase in physiological stress due to poorer nutrition and the presence of other stressors during the later prehistoric occupation of the region.
The impact of increased stress loads due to the combined effects of undernutrition, European-introduced infectious disease (e.g., measles), warfare, and increased social disruption has been documented for the late prehistoric and contact-era Arikara Indians of the upper Missouri River valley (Jantz & Owsley, 1984a, 1984b, 1994a; Owsley & Jantz, 1985). Matching of long bone lengths (femur, tibia, humerus, radius) to dental age in perinatal (late fetal/early neonatal) and other juvenile skeletons reveals that late postcontact era (AD 1760-1835) Arikara juveniles were shorter than early postcontact (AD 1600-1733) juveniles, reflecting declining health status as European influence and encroachment into the region by other tribes increased.
The interaction between stress and population mobility has been examined in a comparison of Late Archaic period foragers from the Carlston Annis Bt-5 site, Kentucky (2655-3992 BC), and Late Woodland foragers from the Libben site, Ohio (AD 800-1100) (Mensforth, 1985). Archaeological evidence indicates that Late Archaic populations were highly mobile and exclusively dependent on wild plants and animals. In contrast, maize was consumed by the Libben population, but it was of minor dietary significance. For both groups, nutrition appears to have been adequate (Mensforth, 1985). Comparisons of tibia lengths reveal a general
similarity between the two groups from birth to six months and from four years to 10 years. For juveniles aged six months to four years, Libben tibiae are shorter than Bt-5 tibiae. The growth period between six months and four years - the period differing most between Bt-5 and Libben populations - is highly sensitive to metabolic disruption. During this period, the infant undergoes weaning, involving the shift from a relatively stable, nutritious food source (mother's milk) to a potentially less stable, less digestible, and less nutritious food (e.g., maize). Passive immunities derived from consumption of breast milk are lost during weaning in this period of life (Popkin et al., 1986). These immunities are crucial for early health and well-being since the child's immune system is not fully developed until after five years of age (Newman, 1995). Mensforth (1985) found a high prevalence of nonspecific periosteal infections in the Libben infants, suggesting that high levels of infectious disease in infancy and young childhood contributed to growth retardation. Although both groups apparently enjoyed adequate nutrition, Mensforth argues that the Libben population had a subsistence economy with a relatively greater diversity of resources that were immediately available without need of extensive travel for acquisition of resources. Thus, in comparison with the Bt-5 population, the Libben population experienced greater sedentism and larger community size, which fostered poor sanitation, elevated infectious disease, and poor health. Comparison of Libben with a modern reference population (Denver, Colorado) confirms the presence of growth suppression in the first three years of life in the former, after which the growth rates are similar between the two groups (Lovejoy et al., 1990). Lovejoy and coworkers (1990) argue that massive infection was the cause of growth retardation. They suggest that inflammation would result in an increased production of cortisol, the major natural glucocorticoid, which results in limitation of growth and availability of amino acids. Thus, elevation of infection in the Libben population may have had a strong influence on growth generally (Lovejoy et al., 1990).
Historic-era skeletal series furnish important insights into stress in the recent past. Saunders and coworkers (1993, 1995) analyzed growth data available from a large series of juvenile remains from the St. Thomas' Anglican Church cemetery in Belleville, Ontario. The cemetery was used by a predominantly British-descent population during the period of 1821 to 1874. Comparisons of femur length from juveniles buried in the cemetery with a tenth century Anglo-Saxon series from Raunds, England, and modern growth data from Denver, Colorado (Maresh, 1970), indicate a strong similarity in overall pattern of growth between the three groups (Figure 2.2). The two cemetery samples are temporally separate, but share
[Figure 2.2. Fitted curves for femoral diaphyseal length for the nineteenth-century St. Thomas' Church cemetery sample (dotted line), tenth-century Raunds Anglo-Saxon skeletons (dashed line), and the twentieth-century Denver, Colorado, living population (solid line). (From Saunders & Hoppa, 1993; reproduced with permission of the authors and John Wiley & Sons, Inc.)]
general ethnic origins with the modern U.S. population. Figure 2.2 shows that the St. Thomas' series is slightly shorter for age than the modern series. That the Raunds series is considerably shorter than either of the other groups is to be expected given the inferior living standards of tenth century England. With regard to the St. Thomas' skeletons, Saunders and coworkers suggest that the juveniles died from causes other than chronic conditions (e.g., chronic infections or chronic undernutrition) that would result in a decrease in skeletal growth. Children less than two years of age had slightly lower growth rates than modern twentieth century populations. They regard this as perhaps representing stresses associated with poor maternal health and prenatal growth.
Analysis of juvenile cortical bone growth via measurement of cortical thickness provides a complementary source of information to the length of long bones. In living populations, deficiencies in cortical bone mass are present in groups experiencing undernutrition (e.g., Frisancho et al., 1970; Garn, 1970; Garn et al., 1964; Himes, 1978; Himes et al., 1975). Garn and coworkers (1964), for example, showed that malnourished children have reduced cortical bone in comparison with well nourished reference groups. Although bone lengths increased during periods of growth recovery, cortical thickness continued to show deficiencies due to earlier episodes of bone loss. Thus, growth recovery may involve an increase in bone length (and attained height), but not bone mass (Huss-Ashmore, 1981; Huss-Ashmore et al., 1982).
Cortical bone mass also appears to be a sensitive indicator of environmental disturbance in archaeological settings. Comparison of femoral cortical thickness from Middle Woodland (Gibson site) and from Late Woodland (Ledders site) series from west-central Illinois reveals a reduction in bone mass in young children (24-36 months), the presumed time of weaning and increased dietary stress (Cook, 1979). In contrast, Hummert and coworkers (Hummert, 1983; Hummert & Van Gerven, 1983; Van Gerven et al., 1985) documented cortical bone deficiencies exclusively in older children from the early to late Christian periods in Sudanese Nubia (ca. AD 550-1450). Long bone lengths of Nubians are shorter in the early Christian period than in the late Christian period, which may be due to nutritional deficiencies and bacterial and parasitic infections (Hummert, 1983; Hummert & Van Gerven, 1983). Increasing political autonomy during the later Christian period may have served to improve living conditions, resulting in better growth status and health generally. Cortical bone mass continued to be deficient in the later period, indicating that stress was present throughout the Christian period, both early and late. Unlike the long bone lengths, which show a recovery during adolescence, there was a continued decrease in cortical bone mass in older children, suggesting that growth in long bone length continued at the expense of cortical bone maintenance (Hummert, 1983; cf. Garn et al., 1964).
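The radiogrammetric quantities behind such 'cortical bone mass' comparisons are simple, and a short sketch may help. The measurements below are invented, and the index (combined cortical thickness relative to total subperiosteal width) follows the general approach associated with Garn's radiogrammetry rather than any specific formula reported in the studies cited above.

```python
# Sketch of standard cortical measurements taken at a long bone midshaft,
# e.g., from a radiograph or cross section. All values are hypothetical.

def cortical_thickness(total_width_mm: float, medullary_width_mm: float) -> float:
    """Combined cortical thickness: total subperiosteal width minus medullary width."""
    return total_width_mm - medullary_width_mm

def cortical_index(total_width_mm: float, medullary_width_mm: float) -> float:
    """Percent of the total width occupied by cortex: 100 * (T - M) / T."""
    return 100.0 * (total_width_mm - medullary_width_mm) / total_width_mm

T, M = 25.0, 12.0  # hypothetical femoral midshaft widths (mm)
print(cortical_thickness(T, M))        # 13.0 mm of cortex
print(round(cortical_index(T, M), 1))  # 52.0; lower values suggest bone loss
```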
2.2.2 Stature
Substantial evidence drawn from the study of living populations reveals the strong relationship between growth suppression in childhood and attainment of adult body size, including terminal height: growth-suppressed children should be short-statured adults. Study of living populations provides some support for this conclusion. Comparison of growth of undernourished Thai children with American (U.S.) children reveals that, despite a longer period of growth in the former (by about one year), the reduction in growth over their lifetimes resulted in shortened terminal height (Bailey et al., 1984; see also Bogin & MacVean, 1983; Frisancho et al., 1970; Satyanarayana et al., 1980).
The close ties between stress - especially poor nutrition - and stature are abundantly documented in research developing out of a growing interest in anthropometric history (Floud et al., 1990; Komlos, 1989, 1994, 1995; Steckel, 1995). Originally inspired by controversy over the health and well-being of enslaved African-Americans (Steckel, 1979), current research
has greatly broadened to include a range of other populations in North America, Europe, and Asia (Fogel et al., 1983; Komlos, 1994, 1995; Steckel, 1995). Evidence from a wide range of recent historical populations indicates that stature variability can be explained in large part by environmental factors (Steckel, 1995). This evidence shows that terminal height is a product of nutritional adequacy and, to a lesser extent, disease history. Individuals with adequate nutrition tend to reach their genetic growth potential; those with poor nutrition do not.
Genetic factors are also important. For example, well-off Japanese reach only the 15th height centile of well-off British (Tanner et al., 1982). Climate may be a mediating factor in determining terminal height, but stature shows little correlation with latitude in comparison of a wide range of human populations (Ruff, 1994a). Of much more importance to the issue of climate is body breadth, which plays a crucial role in determination of the amount of body surface area to body mass in hot and cold climates (see Ruff, 1994a).
As with childhood growth, there is a temporal trend of stature increase with economic and nutritional improvement (e.g., Boldsen, 1995; Floud, 1994; Greulich, 1976; Yagi et al., 1989; and many others) and decline during times of hardship and deprivation (e.g., Fogel et al., 1983; Kimura, 1984; Price et al., 1987; Steegmann, 1985).
Terminal height data for historical populations are drawn from various archival sources, including military records (e.g., Bielicki & Welon, 1982; Komlos, 1989; Mokyr & Ó Gráda, 1994; Sandberg & Steckel, 1987; Steegmann, 1985, 1986; Steegmann & Haseley, 1988), military preparatory schools (Komlos, 1987), prison inmates (e.g., Riggs, 1994), enslaved African Americans (e.g., Steckel, 1979, 1986, 1987), voter registrations (Wu, 1994), and other sources (see Komlos, 1994). Analyses of these data sets by economic historians reveal temporal trends in stature that can be linked with changing economic conditions relating to nutritional adequacy in particular and health status in general. Terminal stature in Euroamerican populations shows significant variability in relation to time, geography, and socioeconomic status. Over the last several centuries, marked improvements in health and nutrition have been documented. Popular convention indicates that adult stature has increased during and after the Colonial period in North America. Steckel (1994) analyzed stature data for American-born Euroamerican male soldiers for the period of 1710 to 1950. Contrary to this previous conception, twentieth century Euroamerican males are not appreciably taller than their predecessors living in the eighteenth century (Steckel, 1995).
Table 2.1. Euroamerican statures (mean values)

Sample                        Stature (cm)

Males
  Cross^a                          175
  Fort William Henry^b             177
  Harvie^c                         171
  Prospect Hill^d                  173
  Colonial U.S.^e                  173
  Mt. Gilead^f                     172
  Clifts Plantation^g              169
  Belleview^g                      170
  Ft. Laurens^h                    174
  Snake Hill^i                     176
  Bradford's Company^i             174
  Old Quebec^j                     173
  West Point cadets^k              172
  Modern U.S.^l                    174

Females
  Cross^a                          163
  Harvie^c                         161
  Prospect Hill^d                  161
  Colonial U.S.^e                  160
  Mt. Gilead^f                     162
  Modern U.S.^l                    161

^a Larsen, Craig et al., 1995. ^b Steegmann, 1986. ^c Saunders & Lazenby, 1991.
^d Pfeiffer et al., 1989. ^e Angel, 1976. ^f Wood et al., 1986. ^g Rathbun & Scurry, 1991.
^h Sciulli & Gramly, 1989. ^i Saunders, 1991. ^j Cybulski, 1988.
^k Komlos, 1987; average of 1840s-1870s, 21-year-olds only.
^l National Center for Health Statistics, 1992.
Skeletons from historic-era archaeological sites offer a complementary data set for stature analyses based on archival sources. Comparison of stature estimates derived from measurements of long bones shows little change from the pre-modern (1675-1879) to the modern (1950-1975) period in the United States (Angel, 1976; Larsen, Craig et al., 1995) (Table 2.1). These findings suggest that improvements in health and nutrition were not so great as to result in appreciable increases in body height. For the
same time span, however, the stature of European populations increased (Boldsen, 1995; Floud, 1994). In Denmark, for example, the increasing height of adults - based on analysis of Medieval skeletal remains and twentieth century archival data - is linked to improving conditions associated with the change from preindustrial rural to industrial urban living (Boldsen, 1995).
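Stature estimation from long bone measurements, as used in Table 2.1 and the comparisons above, rests on regression equations derived from reference populations. The chapter does not specify which equations underlie each study; the sketch below uses the widely cited Trotter and Gleser femur equation for white males purely as an illustration, since coefficients differ by sex, ancestry, and bone.

```python
# Hedged illustration of long-bone stature estimation. The coefficients are
# the Trotter & Gleser femur equation for white males (stature = 2.38 * femur
# + 61.41, SE about 3.27 cm); they are one common choice, not necessarily
# the equations used in the studies discussed in this chapter.

def stature_from_femur(femur_length_cm: float) -> tuple[float, float]:
    """Return (estimated living stature in cm, standard error in cm)."""
    return 2.38 * femur_length_cm + 61.41, 3.27

estimate, se = stature_from_femur(46.0)  # a 46 cm maximum femur length
print(f"{estimate:.1f} cm +/- {se:.1f}")  # 170.9 cm +/- 3.3
```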
In the New World, the transition to agriculture involved the adoption of maize as a key component of subsistence. There are several negative aspects of maize that could potentially lead to physiological disturbance and reduced height in native populations in the Americas. Although maize appears to meet caloric requirements, it is deficient in the essential amino acids lysine, isoleucine, and tryptophan (Food and Agriculture Organization, 1970; Whitney & Rolfes, 1993). Because maize has these amino acid deficiencies, it is a very poor protein source. Niacin (vitamin B3) in maize is chemically bound, which reduces the bioavailability of this nutrient to the consumer. In maize-based diets, iron absorption is very low (Ashworth et al., 1973), methionine and phenylalanine are minimally represented, and the leucine-isoleucine ratio is inadequate. The nutritive value of maize is altered by the preparation techniques used to transform it into food. Many native New World societies enhance the nutritional content of maize via alkali-processing (Katz et al., 1974; Stahl, 1989). The addition of alkali promotes the availability of niacin during digestion (Food and Agriculture Organization, 1953). Some evidence suggests that these treatment protocols actually promote dystrophic effects (see Huss-Ashmore et al., 1982). Additionally, removal of the pericarp (bran) in the grinding process decreases the nutritive value of maize; important minerals and some fiber are removed if the pericarp is winnowed from the maize. If the aleurone, the protein- and niacin-rich layer, is removed along with the bran, important nutrients are also lost (Food and Agriculture Organization, 1953; Rylander, 1994). Thiamine content is also affected by the manner in which the maize is processed.
The study of temporal series of archaeological remains, especially in comparison of New World foragers with later farming populations, reveals trends that are consistent with declining nutritional quality in both maize consumers and populations dependent on other plant domesticates. Comparisons of prehistoric Georgia coastal foragers (pre-AD 1150) with later maize farmers (AD 1150-1550) indicate reductions in stature of about 3% for adult females and 1% for adult males (Larsen, 1982). Similar reductions in other New World settings are documented in the American Midwest (Perzigian et al., 1984) and in Mesoamerica (Haviland, 1967; Nickens, 1976; Saul, 1972; Stewart, 1949, 1953a; but see Danforth, 1994; McCaa & Márquez Morfín, 1995; Wright & White, 1996). Comparisons of
agricultural populations with other settings indicate relatively short statures in Mesoamerica (Storey, 1992a) and Ecuador (Ubelaker, 1994), which are linked with chronic malnutrition.
Other archaeological settings show reduction in stature in the shift to agricultural economies. Preliminary evidence indicates that late Pleistocene foragers in South Asia were taller and more robust than their farming descendants (Kennedy, 1984). Similarly, comparisons of skeletal series from the Upper Paleolithic through the Neolithic in western Europe indicate a general reduction in average stature, which is especially pronounced in the comparison of Mesolithic with Neolithic subsamples (Meiklejohn et al., 1984; although see Jacobs, 1993). Finally, in Sudanese Nubia, reduction in stature coincided with agricultural intensification, especially in comparison of the earlier A-group (3400-2400 BC) and later X-group (AD 350-550) populations (Van Gerven et al., 1995). These studies point to the possibility of increasing dietary stress as a causal factor in stature reduction.
Much of the research on body size in children and adults in archaeological settings is oriented toward tracking the consequences of adaptive transformations, primarily from foraging to farming; relatively little is known about other dietary transitions. The consequences of change in dietary focus not involving agriculture are manifested in temporal comparisons of native populations from the Santa Barbara Channel region of southern California (Lambert, 1993, 1994). In this region, populations shifted their dietary emphasis from terrestrial resources - especially plant foods - to marine resources after 500 BC (Glassow, 1996). Over the period of 6000 BC to AD 1782, stature decreased by about 10 cm. Lambert (1993, 1994) argues that stature reduction was fostered by decline in health, due to the combined effects of declining nutrition and elevated infectious disease. Protein, mostly derived from fish, was abundant, but other important nutrients may have been lacking in the diets of later prehistoric populations. During the latest prehistoric period, for example, island populations traded beads and other manufactured products with mainland populations for plant foods, especially roots and seeds, suggesting that islanders lacked immediate access to key plant resources. In addition, the later populations were more sedentary, and they consumed a narrower range of foods than the earlier populations. Environmental evidence indicates periodic and lengthy periods of drought, which would have reduced the availability of potable water and plant foods (e.g., acorns and other seeds). The worsening of nutritional quality was probably compounded by other stressors, particularly infectious disease. In addition to increases in other stress
indicators (e.g., enamel defects), there was a marked increase in nonspecific periosteal infections (and see Chapter 3), which was due to the sedentary lifeway coupled with an increase in population size.
Similar trends in stature reduction have also been documented in the Central Valley of interior California (Ivanhoe, 1995). Comparisons of populations spanning the period of 3000 BC to the mid-nineteenth century reveal statistically significant reductions in stature for both females and males (2.2% and 3.1%, respectively). These reductions were once interpreted as resulting from genetic drift or population replacement (Newman, 1957). Archaeological evidence indicates the presence of a biological continuum of populations in the region. Therefore, stature reductions are more likely to reflect nutritional stress owing to a focus on acorns and a narrowing of the dietary spectrum in later prehistory.
Stature reductions identified in archaeological contexts are not universal. A number of regions show no change, or an increase, or a high degree of regional variability in stature (e.g., Danforth, 1994). In the lower Illinois River valley, there is no clear trend of stature change in comparison of early prehistoric through late prehistoric periods (Cook, 1984). This is significant, because it indicates that reduced juvenile height in this setting did not result in reduced adult stature in later prehistoric agricultural groups. Likewise, temporal comparisons of stature in a variety of archaeological populations - from Ontario, the northern Great Plains, Peru, and Chile - show no change with the shift in adaptive strategies involving agriculture (Allison, 1984; Cole, 1994; Katzenberg, 1992a). For the period of 8250 BP to the colonial period in Ecuador, there is no evidence of stature decline despite increases in physiological stress (Ubelaker, 1994). All groups in Ecuador are relatively short-statured; therefore, stress (including poor nutrition) may have been severe throughout the entire sequence (Ubelaker, 1994). Alternatively, these populations may simply be genetically small in comparison with other groups.
The influence of nutritional deprivation on human growth and terminal height is revealed in the study of components of past societies that may have been differentially buffered against stress. Comparison of elite and nonelite adults from Middle Bronze Age sites (2000 BC) in Greece shows that elites are about 6 cm taller than nonelites (Angel, 1975, 1984). Similarly, the tallest adults in the Etruscan period in Tarquinia, Italy, are associated with high-status chamber tombs (Becker, 1993). In a Maitas-Chiribaya (ca. 2000 BP) population from northern Chile, shaman males are taller than other, nonelite males, which may indicate better health and resources in the former (Allison, 1984). High-status adult males in some Mesoamerican populations appear to be taller than low-status individuals or the general population (Haviland, 1967; Helmuth & Pendergast, 1986-1987; but see Wilkinson & Norelli, 1981). Likewise, elite males are taller than nonelite males in several contexts in the prehistoric southeastern and midwestern United States (Buikstra, 1976a; Cook, 1984; Hatch, 1976; Hatch & Willey, 1974; Powell, 1988). These apparent status differences in attained height suggest that elite males may have had nutritional advantages resulting in greater height than nonelite individuals. There are no clear differences in stature between elite and nonelite adult females in any of these New World settings. This suggests that the burden of stress may be on adult males in ranked societies, at least as it is exhibited in attained height.
2.2.3 Cranial base height
Biological anthropologists note specific patterns of variability in skull base height (auriculare-basion or porion-basion distances) in selected samples, which Angel (1982) suggests is linked to nutritional adequacy during the years of growth and development. Poorly nourished individuals should have flatter cranial bases (called 'platybasia') than well nourished individuals, due to relatively greater deformation of supporting bone in response to the weight of the head and brain: the 'weakening of the bone from nutritional deficiencies decreases its ability to resist gravitational pull, therefore inhibiting upward growth of the skull .... Thus the amount of compression in this area should give an indication of nutritional status' (Angel, 1982:298).
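The measurements named above are simple chords between craniometric landmarks. As a hedged illustration (the coordinates below are invented, and real studies typically measure directly with calipers or from radiographs), cranial base height can be computed from digitized 3D landmarks as a Euclidean distance:

```python
# Illustrative computation of a porion-basion or auriculare-basion chord
# from 3D landmark coordinates (e.g., from a digitizer). Coordinates are
# hypothetical and in millimeters.
import math

def chord(p: tuple[float, float, float], q: tuple[float, float, float]) -> float:
    """Euclidean distance between two craniometric landmarks."""
    return math.dist(p, q)

basion = (0.0, 0.0, 0.0)
porion = (48.2, 10.5, 12.3)
print(f"porion-basion chord: {chord(basion, porion):.1f} mm")  # about 50.8 mm
```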
Angel tested his hypothesis by comparing skull base heights from skeletal series representing nutritionally disadvantaged and advantaged populations. These comparisons revealed that the advantaged group has much higher cranial bases than the disadvantaged group, which Angel concludes 'fits a nutritionally-caused mechanical weakening of bone supporting a heavy head' (1982:302). Study of archaeological remains from the eastern Mediterranean Basin indicates variation in cranial base height that Angel (1984; Angel & Olney, 1981) attributed to nutritional quality: crania from populations experiencing nutritional deprivation are platybasic, whereas crania from populations or segments of populations (e.g., Middle Bronze Age 'royalty') with nutritionally adequate diets are not. The relationship between cranial base height and nutritional quality may be more apparent than real, however. Cranial base cartilages, like epiphyseal cartilages of limb bones, are primary cartilages. Therefore, they have intrinsic growth capabilities that are characteristically resistant to compressive loading. This suggests that a model invoking compression as a
causal factor in determining cranial base form is incorrect. The phenomenon of cranial base flattening is largely unexplained.
2.2.4 Pelvic morphology
Severe vitamin D deficiency (rickets), typically resulting from inadequate sunlight exposure or dietary intake of the vitamin, weakens growing bone during early childhood, because the rapidly forming protein matrix does not mineralize sufficiently. This results in pelvic deformation, due to the forces created by body weight and gravity (Angel, 1975, 1978a, 1982, 1984; Angel & Olney, 1981; Greulich & Thoms, 1938; Nicholson, 1945; Thoms, 1947, 1956; Thoms et al., 1939; Walker et al., 1976). Pelvic inlet deformation is characterized by a reduction in anterior-posterior diameter relative to the medial-lateral diameter (called 'platypellism'). Flattening of the pelvis is well documented in clinical populations (e.g., Greulich & Thoms, 1938; Nicholson, 1945; Thoms, 1947) and in modern anatomical samples in comparisons of lower and middle class groups from the United States (Angel, 1982). For example, British women who were young children during the war years of 1914 to 1918 have flattened pelvic inlets (Nicholson, 1945). Presumably, these women had relatively poor nutrition during these years. Consistent with the relationship between growth and nutritional status, women with flattened pelves tend also to be short-statured.
Comparisons of pelvic inlet form between earlier and later (or modern reference) populations suggest improvements in nutritional health in several settings, including the eastern Mediterranean (Angel, 1984; Angel & Olney, 1981), North America (Angel, 1976), and Sudanese Nubia (Sibley et al., 1992). Preliminary evidence shows differences in pelvic shape by status group. Low-status adult females from the Middle Woodland (Klunk and Gibson Mound groups) period in the lower Illinois River valley have flatter pelvic inlets than high-status adult females (Brinker, 1985). These differences appear to reflect better nutrition in the high-status women than in low-status women.

Other aspects of pelvic morphology may also be linked to negative environmental factors. Sciatic notch widths (innominate bone) are appreciably larger in nutritionally stressed eighteenth and nineteenth century British from the St. Bride's Church, London, than in better fed twentieth century Americans (Walker, unpublished manuscript). Rickets was a severe health problem in industrial England, and is well documented in the St. Bride's Church population. Archival documents indicate that the births of St. Bride's individuals with wide sciatic notches occurred during cold months of the year, the period when rickets was especially prevalent.
Analysis of a contemporary population from Spitalfields, London, reveals
that individuals with rickets had also been exposed to extremely cold
temperatures during the first year of life (Molleson & Cox, 1993).
Therefore, the relatively wide sciatic notches in the St. Bride's Church
population appear to be remnants of early childhood stress (Walker,
unpublished manuscript).
2.2.5 Long bone diaphyseal form
Pronounced bowing of the lower limb long bones is another skeletal deformation in rachitic individuals. As with the pelvis, most bowing deformities occur during the first several years of life when the skeleton is undergoing rapid growth, especially between ages six months and three years (see Stuart-Macadam, 1989a). Rickets became highly prevalent during the Industrial Revolution, especially in large, densely populated towns and cities in Europe. Culturally influenced avoidance of sunlight (e.g., excess clothing, infant swaddling) may involve decreased vitamin D synthesis, such as in Asia and north Africa (Fallon, 1988; Kuhnke, 1993). Increased availability of vitamin D-enriched foods and reduced air pollution resulted in a virtual disappearance of the disease in industrialized nations during the twentieth century.
Skeletal evidence of rickets is very uncommon prior to the Medieval period in Europe. In Medieval and later skeletal samples from Europe, a number of long bone deformities - especially severe bowing of long bones - may have a rachitic origin (e.g., Gejvall, 1960; Møller-Christensen, 1958; Molleson & Cox, 1993; Ortner & Putschar, 1985; Roberts & Manchester, 1995). Extreme bowing of lower limb bones of an eight-year-old recovered from an early nineteenth century African American cemetery in Philadelphia probably resulted from rickets (Angel et al., 1987). A significant proportion of adult males and females - 35% and 20%, respectively - have bowing resulting from childhood growth disturbance in the same population. Similar patterns of long bone bowing were reported for the Iron Age site of Mahujhari, India (Kennedy, 1984), and in Mesolithic and Bronze Age Europe (e.g., Meiklejohn & Zvelebil, 1991).
Flattening of femoral and tibial diaphyses has been documented in numerous archaeological skeletal samples worldwide (and see Chapter 6). The primary indices measuring the degree of flatness of femora and tibiae include the meric index (anterior-posterior flattening of the proximal femur diaphysis), pilasteric index (medial-lateral flattening of the femur diaphysis), and cnemic index (medial-lateral flattening of the tibia diaphysis at the nutrient foramen). Some attribute diaphyseal flattening to
nutritional stress (e.g., Adams, 1969; Angel, 1984; Buxton, 1938). Buxton (1938) asserted that less bone is required in the construction of a diaphysis if it is flattened rather than round. He viewed the temporal trend of rounder diaphyses as representing an increase in amount of bone, inferring a decline in nutritional deficiency in recent 'civilized' populations. Structural analysis of long bone diaphyses reveals that flattening is related not to the amount of bone present, but rather to the manner in which it is distributed when viewed in cross section. Mechanical loading, not nutritional stress, is the primary determinant of flatness of long bone diaphyses (see Chapter 6). Nutritional deprivation or other physiological stressors certainly have an influence on amount of bone, but the relationship between nutritional status and diaphyseal shape is unsubstantiated.
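To make the index definitions concrete, the following sketch computes the three ratios in Python. It is a minimal illustration assuming the common osteometric convention of expressing each index as (diameter/diameter) x 100; the function names, measurement landmarks, and the interpretive cutoff mentioned in the comments are conventions that vary somewhat between sources, not values given in this chapter.

def meric_index(ap_subtrochanteric_mm, ml_subtrochanteric_mm):
    # Proximal (subtrochanteric) femur; lower values indicate greater
    # anterior-posterior flattening (platymeria).
    return 100.0 * ap_subtrochanteric_mm / ml_subtrochanteric_mm

def pilasteric_index(ap_midshaft_mm, ml_midshaft_mm):
    # Femur midshaft; higher values indicate a more pronounced pilaster.
    return 100.0 * ap_midshaft_mm / ml_midshaft_mm

def cnemic_index(ml_nutrient_foramen_mm, ap_nutrient_foramen_mm):
    # Tibia at the nutrient foramen; lower values indicate greater
    # medial-lateral flattening (platycnemia).
    return 100.0 * ml_nutrient_foramen_mm / ap_nutrient_foramen_mm

# Example: a femur measuring 24 mm (a-p) by 32 mm (m-l) subtrochanterically
# yields a meric index of 75.0, a distinctly flattened (platymeric) shaft.
print(meric_index(24.0, 32.0))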
2.2.6 Vertebral neural canal size
The effects of catch-up growth on stature and long bone lengths are problematic for documenting the stress history of an individual during their growth years. An individual may be stressed early in life, but amelioration of negative conditions (e.g., improvement in nutritional status) during later juvenile years may result in obliteration of evidence of growth disruptions that had occurred earlier in life. In the Dickson Mounds series, for example, although juvenile growth became stunted in the transition to intensive farming for the period of AD 950 to 1300, no appreciable reductions occurred in adult height (Lallo, 1973). Thus, adult heights in this population are uninformative about juvenile stress.

The similarity of stature in Dickson Mounds may be simply due to growth recovery. Vertebral growth provides a means of addressing the problem of growth stress identification not possible with attained height. At the time of birth, vertebral neural canal size is approximately 65% complete; full size is reached by about four years of age (Clark, 1988; Clark et al., 1986). Vertebral body height continues to grow into early adulthood, well into the third decade of life. Thus, early and late stress in the life history of the individual is represented in the respective size of the vertebral neural canal and vertebral body height in adult skeletons. If there is a reduction in canal size but not in vertebral height, then catch-up growth probably occurred following early stress (prior to four years of age). If both neural canal size and vertebral body height are small, then stress was likely to be present throughout most of the years of growth and development, certainly after four years of age and possibly into adulthood (Clark, 1988; Clark et al., 1986).
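The inferential logic of this comparison reduces to a simple two-indicator rule, sketched below in Python. This is a hypothetical rendering of the reasoning, not a published protocol; in practice 'small' would be defined against a reference sample (for example, more than one standard deviation below the sample mean).

def stress_timing(canal_small, body_height_small):
    # Hypothetical classifier following the logic of Clark et al.: the
    # neural canal stops growing by about four years of age, whereas
    # vertebral body height grows into the third decade, so the two
    # dimensions record different windows of the growth period.
    if canal_small and not body_height_small:
        return 'early stress (before ~4 years) followed by catch-up growth'
    if canal_small and body_height_small:
        return 'stress sustained from early childhood through later growth'
    if body_height_small:
        return 'later growth disruption only (canal formed normally)'
    return 'no growth disruption detectable from these two dimensions'

print(stress_timing(canal_small=True, body_height_small=False))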
Analysis of thoracic and lumbar vertebrae from the Dickson Mounds
site reveals that growth of neural canal size was completed prematurely, but growth in vertebral body height continued through the juvenile years into adulthood (Clark et al., 1986). This growth pattern suggests that stress amelioration in the later juvenile years accounts for the similarity in adult long bone lengths and stature in the earlier and later populations from Dickson Mounds.

Young adults (15-25-year age group) in the Dickson Mounds series have significantly smaller vertebral neural canal size than older adults (25+ years) (Clark et al., 1986). This finding suggests that small neural canal size is linked with a reduced lifespan. Additionally, vertebral wedging, the condition whereby anterior body height is reduced compared to posterior body height, is associated with small vertebral neural canal size. Vertebral wedging is symptomatic of adult and postmenopausal bone loss (osteoporosis). The association between smaller neural canal size and wedging suggests that stress occurring during an individual's juvenile years may be a predisposing factor for poor health during adulthood.
2.3 Growth and development: dental
2.3.1 Dental development rates
Dental development comprises two components - formation of crowns and roots, and eruption of teeth. Unlike skeletal development, dental development overall is insensitive to environmental constraints (Smith, 1991). The resistance of dental tissues to environmental insults has been demonstrated by the observation that various stressors influencing stature and bone age have little effect on dental development (reviewed by Smith, 1991). The high heritability of dental development serves to minimize the effects of poor environmental circumstances (see Garn et al., 1965; Moorrees & Kent, 1981; Smith, 1991).
Tooth formation rates are free of environmental influence (e.g., nutrition), which is indicated by low correlations between formation and bone age, stature, relative body weight, and fatness, and by the lack of any kind of secular trend (see above and Smith, 1991). Eruption rates and timing are somewhat more responsive to environmental factors, such as caries experience, tooth loss, and severe malnutrition (e.g., Alvarez, 1995; Alvarez et al., 1988, 1990; Alvarez & Navia, 1989; Ronnerman, 1977). For example, eruption and exfoliation of deciduous teeth are significantly delayed in nutritionally deprived children in comparison with well-nourished children from Cantogrande, Peru (Alvarez, 1995; Alvarez et al., 1988; and see Barrett & Brown, 1966). Additionally, unlike formation,
eruption timing shows some correlation with body size (Garn et al., 1960; McGregor et al., 1968).

It is not possible to identify delays in dental eruption timing in archaeological series based on teeth alone, since age-at-death must be determined by comparing the archaeological dentitions with some standard based on individuals of known age (e.g., Moorrees et al., 1963). Relative differences between dental and skeletal development may provide some insight into growth stress. Comparison of skeletal age and dental age in Medieval period skeletons from Sudanese Nubia reveals that most individuals (70.5%) have skeletal ages younger than their dental ages (Moore et al., 1986). These relative differences indicate that skeletal growth may have been retarded. Dietary reconstruction suggests that growth retardation was due to nutritional deprivation, a finding that is consistent with other skeletal indicators of stress (e.g., iron deficiency anemia; Moore et al., 1986).
2.3.2 Tooth size
Like bone size, tooth size involves a complex interplay between environment and heredity. Unlike skeletal elements, tooth crowns do not remodel once they are fully formed. Therefore, teeth provide an unchanging record of size well in advance of the adult years. Tooth size appears to be highly heritable, indicating that variation between and within human populations can be explained mostly by genetic differences (Chapter 7; and see Kieser, 1990). Twin studies reveal that as much as 80% to 90% of observed covariation in tooth size is due to additive genetic factors; the remaining 10-20% is attributed to environment (Townsend et al., 1994). Other estimates of heritability vary widely (see Kieser, 1990), but most workers agree that environmental influences on tooth size are significant, albeit small (e.g., Dempsey et al., 1995; Garn et al., 1965; Garn, Osborne et al., 1979; Potter et al., 1983; Townsend, 1980, 1992; Townsend & Brown, 1978). Therefore, tooth size represents a measure of deviation from genetic growth potential in response to some stressor or stressors (Bailit et al., 1968, 1970; Evans, 1944; Garn, Osborne et al., 1979; Garn et al., 1980; Goose, 1967). Placental insufficiency, maternal health status, nutritional status, and a variety of genetic and congenital defects (Down's syndrome, cleft palate, prenatal rubella, congenital syphilis) are linked with reduced tooth size (Cohen et al., 1979; Garn & Burdi, 1971; Garn, Osborne et al., 1979; Goodman et al., 1989). Understanding the influence of nutrition on tooth size is hampered by the paucity of data holding genetic factors constant in situations of variable nutritional quality. Goodman and
coworkers (1989) tested the hypothesis that poor nutrition will result in small permanent tooth size by comparison of dietary supplemented and nonsupplemented individuals in a Nahuatl community from Tezonteopan, Mexico. Overall, supplemented individuals have larger tooth size than nonsupplemented individuals, with statistically greater differences occurring for the male first incisor, second incisor, and first molar buccolingual dimensions. These findings are consistent with experimental research on laboratory animals showing tooth size reduction in response to developmental disruptions and nutritional deprivations (e.g., Bennett et al., 1981; Holloway et al., 1961; Paynter & Grainger, 1956; Riesenfeld, 1970; although see Murchison et al., 1988).
Prehistoric maize agriculturalists from coastal Georgia post-dating AD 1150 had smaller teeth than did their foraging predecessors (Larsen, 1982, 1983a). Tooth size was reduced in both the permanent and deciduous dentitions, which may reflect an increase in physiological stress due to declines in dietary quality and health status generally. Tooth size reduction in the primary dentition suggests a negative change in maternal health status and placental environment, since deciduous teeth form in utero. Given the relatively narrow temporal window of tooth size reduction in this and other populations with the shift from foraging to farming (e.g., Coppa et al., 1995; Hinton et al., 1980; Meiklejohn & Zvelebil, 1991; y'Edynak, 1989), these changes probably indicate an increase in stress that accompanied this transition. In contrast, Lunt (1969) documented a temporal increase in permanent tooth size from Medieval times to the present in Denmark, attributed to improved dietary conditions in later times (and see Lavelle, 1968).
Dental size decrease or increase in Holocene populations cannot be
explained fully by nonevolutionary factors. In prehistoric Nubian popula-
tions, there is a relatively greater reduction in posterior tooth size than
anterior tooth size, which Calcagno (1989) attributes to a selective
advantage for smaller posterior teeth in caries-prone agriculturalists. These
findings underscore the complexity of tooth size, requiring consideration of
both extrinsic and intrinsic circumstances in specific settings.
The hypothesis that members of a population who suffer most from illness and physiological stress are more likely to die at an earlier age than other (healthier) members of a population has been tested by the comparison of permanent tooth size of juveniles and adults in different settings in the American Southeast, namely in the late prehistoric Averbuch series from the Middle Tennessee River valley (Guagliardo, 1982a) and the Spanish mission Santa Catalina de Guale from St. Catherines Island, Georgia (Simpson et al., 1990). Both populations were sedentary maize agriculturalists exhibiting skeletal evidence of high levels of physiological stress and poor health.
Table 2.2. Juvenile and adult permanent tooth size (buccolingual; mm)
from Santa Catalina de Guale, St. Catherines Island, Georgia. (Adapted
from Simpson et al., 1990: Table 5-1.)

               Juvenile                Adult
Tooth      n    Mean    SD       n     Mean    SD    % Difference(a)
Maxillary
I1        16    7.66   0.56     33     7.48   0.40       -2.4
I2        23    6.94   0.39     37     6.91   0.36       -0.4
C         28    8.59   0.66     55     8.64   0.47        0.6
PM1       34   10.12   0.59     70    10.09   0.49       -0.3
PM2       25    9.77   0.56     72     9.89   0.64        1.2
M1        38   11.93   0.68     77    12.14   0.51        1.7
M2        21   12.09   0.67     85    12.01   0.68       -0.7
Mandibular
I1        20    5.84   0.38     22     5.89   0.33        0.8
I2        27    6.23   0.40     47     6.34   0.38        1.7
C         32    7.51   0.57     77     7.85   0.53        4.3(c)
PM1       37    8.09   0.46     95     8.30   0.44        2.5(b)
PM2       33    8.42   0.52     95     8.63   0.47        2.4(b)
M1        45   11.11   0.49     72    11.24   0.52        1.2
M2        31   10.76   0.57     87    10.76   0.61        0.0

(a) Computed by the formula: 100 - [100 x (min. mean/max. mean)].
(b) p <= 0.05 (Student's t-test).
(c) p <= 0.01 (Student's t-test).
In both settings, juveniles have smaller permanent
teeth than adults (Table 2.2). In the Santa Catalina series, nine of 14 tooth types examined are smaller in juveniles than adults. The other tooth types show either no difference or slightly smaller size in adults. All statistically significant differences are for smaller juvenile teeth. Similarly, the Averbuch juvenile teeth are significantly smaller for 10 tooth types; the other four types showed nonsignificantly smaller size in adults. In both samples, juvenile-adult size differences are more common in mandibular teeth than maxillary teeth, suggesting that the lower dentition may be more developmentally sensitive to stress than the upper dentition.
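Comparisons like those in Table 2.2 can be reproduced directly from published summary statistics. The short Python sketch below uses SciPy's ttest_ind_from_stats (my choice of tool, not the original analysts'), with the mandibular canine row of Table 2.2 as input.

from scipy.stats import ttest_ind_from_stats

# Mandibular canine row of Table 2.2 (juvenile vs. adult, buccolingual mm).
juv_n, juv_mean, juv_sd = 32, 7.51, 0.57
adult_n, adult_mean, adult_sd = 77, 7.85, 0.53

# Percent difference per the table footnote: 100 - [100 x (min./max. mean)],
# signed negative when the adult mean is the smaller of the two.
lo, hi = sorted([juv_mean, adult_mean])
pct = 100.0 - 100.0 * (lo / hi)
if adult_mean < juv_mean:
    pct = -pct

t, p = ttest_ind_from_stats(juv_mean, juv_sd, juv_n,
                            adult_mean, adult_sd, adult_n,
                            equal_var=True)  # classical Student's t-test
print(f'% difference = {pct:.1f}, t = {t:.2f}, p = {p:.4f}')
# Yields a 4.3% difference with p < 0.01, matching the table entry.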
These studies indicate the failure of teeth to reach their genetic size potential in circumstances involving increased stress. This conclusion lends support to Sagne's (1976) hypothesis that Medieval-era Swedes who died young received suboptimal nutrition during the years of dental development, resulting in smaller teeth (and see Lunt, 1969). These investigations suggest that individuals with small teeth - those individuals who were most
stressed - had a reduced lifespan, which is consistent with evidence based on vertebral neural canal size and other dental indicators (e.g., enamel defects: Goodman & Armelagos, 1988). As with neural canal size, it is unlikely that small teeth led to reduced longevity. Rather, size reduction is symptomatic of environmental stress that contributed to smaller teeth and early death.
2.3.3 Fluctuating and directional odontometric asymmetry
Beginning with the work of developmental geneticists in the 1940s, a consensus has emerged that bilateral structures normally developing as mirror images of each other will develop differently in response to environmental instability (see Kieser, 1990). Van Valen (1962) suggested that one type of asymmetry - which he called fluctuating asymmetry - reflects the inability of the body tissues to develop bilaterally in their normal growth pathways. Thus, in settings involving stress, teeth and other bilateral structures fail to develop evenly on both sides. Support for this hypothesis is provided by study of laboratory animals exposed to induced stress (e.g., hypothermia, blood loss, heat, cold, diabetes, audiogenic stress). Stressed animals display increases in fluctuating asymmetry in a variety of bilateral anatomical structures, including teeth (e.g., Kohn & Bennett, 1986; Nass, 1982; Sciulli et al., 1979; Siegel et al., 1977; Siegel & Mooney, 1987).
Study of odontometric fluctuating asymmetry in living, archaeological, and paleontological samples presents mixed and sometimes contradictory results (e.g., Bassett, 1982; Black, 1980; Doyle & Johnston, 1977; Harris & Nweeia, 1980; O'Connell, 1983; Suarez, 1974; Townsend & Brown, 1980). Left-right tooth size differences are present in archaeological samples, including Archaic period Indian Knoll (Kentucky), late prehistoric Campbell site (Missouri), and contact era Larson site (South Dakota) (Perzigian, 1977a, 1977b). The Indian Knoll dentitions are the most asymmetric, which Perzigian (1977a, 1977b) attributes to poor diet in comparison with late prehistoric and contact era agriculturalists. His interpretation, that dietary quality was poorer among foragers than among later farmers, runs counter to the conclusions drawn by many bioarchaeologists working in the Eastern Woodlands - namely, dietary stress is more pronounced in agriculturalists than in hunter-gatherers (see Larsen, 1995). Thus, the pattern of decreasing asymmetry in these groups remains unexplained. Temporal comparisons of a series of populations from prehistoric Paloma, Peru, reveal a trend of decrease in asymmetry over time, but in this setting substantive skeletal and dental evidence indicates improving health over time (Benfer, 1984).
Using computer-simulation sampling, Smith and coworkers (1982; Garn, Cole et al., 1979) assert that the amount of asymmetry is highly sensitive to sample size. They argue that sample sizes of several hundred individuals are required in order to detect meaningful differences between populations in fluctuating asymmetry. Similarly, Greene (1984) found that the confounding effect of measurement error can both obscure real differences and artificially create others.
Kieser and coworkers (1986a, 1986b; Kieser, 1990) analyzed fluctuating asymmetry as an indicator of environmental disruption in highly stressed Lengua Indians presently inhabiting the Chaco region of Paraguay. Application of Euclidean map analysis, a statistically powerful approach whereby each dentition is considered an independent variable, produces a measure of asymmetry represented by dividing the sum of Euclidean distances for tooth antimeres by the product of the mean individual tooth size and the number of tooth pairs. Comparisons with well-nourished, disease-free Whites reveal much greater asymmetry in the Lengua population. Younger, more acculturated Lengua with better diets and greater access to Western health care show lower asymmetry values than more traditional Lengua experiencing elevated stress. Using the same methodology, Townsend & Farmer (1995) determined asymmetry scores in a sample of South Australian children. Most children were healthy, and had correspondingly low asymmetry scores. A few individuals with low birth weight had relatively high asymmetry scores.
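The index itself is straightforward to compute once antimere measurements are paired. Below is a minimal Python sketch of the measure as it is described above, assuming two crown diameters (mesiodistal and buccolingual) per tooth; the input format and measurement values are hypothetical.

import math

def asymmetry_score(left, right):
    # left/right: paired lists of (mesiodistal, buccolingual) diameters (mm),
    # where left[i] and right[i] are antimeres. Returns the sum of Euclidean
    # distances between antimeres divided by (mean individual tooth size x
    # number of tooth pairs).
    distances = sum(math.dist(l, r) for l, r in zip(left, right))
    all_dims = [d for tooth in left + right for d in tooth]
    mean_size = sum(all_dims) / len(all_dims)
    return distances / (mean_size * len(left))

left = [(8.7, 7.5), (7.0, 6.9), (7.8, 8.6)]    # e.g., left I1, I2, C
right = [(8.5, 7.6), (7.2, 6.8), (7.7, 8.8)]   # their right antimeres
print(round(asymmetry_score(left, right), 4))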
Directional asymmetry is another pattern of bilateral variation that has been identified in analysis of tooth size in human populations. This pattern is characterized by larger teeth on one side of the dental arch than the other. Directional asymmetry is infrequently reported for human populations (see Ben-David et al., 1992; Boklage, 1987; Harris, 1992; Lundstrom, 1948; Mizoguchi, 1986; Sharma et al., 1986; Townsend & Farmer, 1995). Harris (1992) detected directional asymmetry in a large sample of permanent teeth of Euroamerican adolescents, with a consistently greater left dominance in one dental arch.
Directional asymmetry is unexplained by current models, but may be an indicator of developmental instability arising from stress (Harris, 1992). The strong environmental basis of directional (and fluctuating) asymmetry is inferred from the observation of low intraclass correlations between monozygous and dizygous twins (Townsend, 1992). Additionally, detection of spurious genetic variance indicates a virtual lack of evidence for a significant genetic basis.
No published studies of archaeological remains using Kieser's approach to detecting dental asymmetry have been forthcoming. The application to past groups should provide important insight into stress and developmental instability in earlier societies.
2.4 Skeletal and dental pathological markers of deprivation
2.4.1 Iron deficiency anemia
Iron is necessary for many body functions. It is an essential element in hemoglobin, thus serving in the transport of oxygen to the body tissues (see Stuart-Macadam, 1989a; Wadsworth, 1992). The bioavailability of iron from dietary sources results from several factors (Baynes & Bothwell, 1990; Hallberg, 1981), but the precise mechanism for the transport of iron from the gut into the blood and circulatory system is unknown (Wadsworth, 1992). The efficiency of dietary absorption of iron is dependent upon its source within foods consumed, either heme or nonheme. Generally, heme sources of iron are efficiently absorbed, with meat being among the best (Baynes & Bothwell, 1990). Iron in meat does not require processing in the stomach, and the amino acids from digestion of meat help to enhance iron absorption (Wadsworth, 1992). Iron bioavailability in nonheme sources is highly variable, but plant sources are generally poorly absorbed. Various substances found in plants inhibit iron absorption, such as phytates in many nuts (e.g., almonds, walnuts), cereals (e.g., maize, rice, whole wheat flour), and legumes (Baynes & Bothwell, 1990). Unlike protein in meat, plant proteins (e.g., in soybeans, nuts, lupines) inhibit iron absorption. Tannates found in tea and coffee also reduce iron absorption significantly (Hallberg, 1981).
A number of foods are known to enhance iron bioavailability, especially when consumed in combination with nonheme iron. For example, ascorbic acid promotes iron absorption (Baynes & Bothwell, 1990; Hallberg, 1981; Wadsworth, 1992). Citric acid from various fruits and lactic acid from fermented cereal beers are implicated in promoting iron absorption (Baynes & Bothwell, 1990). Layrisse and coworkers (1968) showed that nonheme iron (e.g., in maize) absorption is enhanced considerably by concurrent consumption of meat and fish.
Iron deficiency anemia is potentially caused by a variety of nondietary factors. Children with low birth weights can be predisposed to iron deficiency anemia, and blood loss, hemorrhage, and chronic diarrhea have also been implicated (Stuart-Macadam, 1989a). Even when diet contains sufficient amounts of iron, parasitic infections can result in severe iron deficiency anemia. In some regions of the world, parasitism is highly
endemic. Schistosomiasis ('snail fever') triggers an immunological response after the eggs of blood-vessel inhabiting worms (genus Schistosoma) become lodged in body organs (e.g., liver, intestinal wall, urogenital tract). The disease has a tropical worldwide distribution (Farley, 1993). Hookworm disease results from the ingestion or inhalation of infective larvae of the hookworm (Ancylostoma duodenale, Necator americanus). The worm extracts blood by grasping the host's intestinal wall with its sharp teeth (Despommier et al., 1995; Hotez & Pritchard, 1995). The consequences can be especially severe owing to losses of large amounts of blood when several hundred or more worms are simultaneously feeding on the same host. Hookworms are also geographically widespread, mostly in tropical settings, and their presence is linked with iron deficiency anemia in a range of settings (e.g., Layrisse & Roche, 1964; and see below).
Various genetic diseases also cause anemia, including thalassemia, sickle cell anemia, nonspherocytic hemolytic anemia (e.g., glucose-6-phosphate dehydrogenase deficiency [favism], pyruvate kinase deficiency), spherocytosis, and, rarely, hereditary elliptocytosis.
Skeletal changes associated with chronic anemia that are identified radiographically include perpendicular orientation of trabeculae in the cranial diploe (called 'hair-on-end'), expansion (hyperostosis) of the diploe, thinning of compact cranial bone, and orbital roof thickening (Stuart-Macadam, 1987). Postcranial changes have also been observed, such as in metaphyses of long bones, but they are generally less severe and reduced in prevalence in acquired anemias than in genetic anemias (Stuart-Macadam, 1989a). Skeletal changes result from the hypertrophy of the blood-forming tissues (marrow) in order to increase the production of red blood cells in response to the anemia. The increase in marrow production results in a replacement of the outer table of compact bone with exposed diploic bone, which gives the appearance of raised and hypervascularized areas of skeletal tissue.
The skeletal changes associated with iron deficiency anemia are part of a generalized syndrome called porotic hyperostosis, a term introduced by Angel (1966a; see also Hill & Armelagos, 1990) that he used to describe pathology involving the outer table of cranial vault bones (Figure 2.3). Similar lesions found in the roof areas of the eye orbits, called cribra orbitalia, are also frequently observed in archaeological remains (Figure 2.4). Some argue that cribra orbitalia is one of the earliest manifestations of anemia, with changes on the flat bones of the cranial vault appearing subsequently (Carlson et al., 1974; Lallo et al., 1977; Walker, 1985). There is wide variation in frequency of orbital versus nonorbital lesions across
Figure 2.3. Porotic hyperostosis on parietals and occipital; Peru. (From
Hrdlicka, 1914.)
human populations. For example, in prehistoric Ecuador, the majority of lesions are found on the cranial vault (Ubelaker, 1992a), whereas in Australian samples, lesions are restricted mostly to the eye orbits (Webb, 1995).

The contrasting patterns of lesion location indicate a variable relationship between the two regions of the skull. Dickel (1988) found statistical independence of the two lesion types in the early prehistoric series from Windover, Florida, suggesting that lesion location reflects different types of stress. However, the preponderance of clinical and paleopathological evidence indicates a common etiology for orbital and vault lesions (Stuart-Macadam, 1989b; Walker, 1985). In temporal series showing prevalence changes, the trends for vault and orbit lesions tend to coincide
Figure 2.4. Cribra orbitalia; Santa Catalina de Guale de Santa Maria, Amelia Island, Florida. (From Larsen, 1994; photograph by Mark C. Griffin; reproduced with permission of Wiley-Liss, Inc., a division of John Wiley & Sons, Inc.)
(e.g., Ubelaker, 1992a). Because of their common etiology, I subsume both variants under porotic hyperostosis for the remainder of this discussion.

Although porotic hyperostosis is present in older juveniles and adults in archaeological remains, most individuals with active, unhealed lesions are young juveniles (less than five years), regardless of geographic or cultural circumstances (e.g., Lallo et al., 1977; Larsen, Ruff et al., 1992; Mensforth et al., 1978; Milner & Smith, 1990; Miritoiu, 1992; Mittler & Van Gerven, 1994; Ribot & Roberts, 1996; Stodder & Martin, 1992; Walker, 1986a; Webb, 1995; and many others). The majority of adult cases of porotic hyperostosis are remodeled and healed. This age pattern indicates that the lesions form during childhood episodes of anemia (Stuart-Macadam, 1985). This is so because the marrow spaces in young children are completely occupied with red marrow. Therefore, an expansion due to an increase in marrow cells can place increased stress on the bone. In adults, an increase in red blood cell production and marrow expansion does not involve the use of all available marrow space (Stuart-Macadam, 1985). The restriction of active porotic hyperostosis to young juveniles indicates that the effects of anemia on the adult component of past populations cannot be
determined. Nevertheless, like growth retardation in juveniles, porotic hyperostosis prevalence provides an important retrospective picture of stress for the population generally.
Other pathological conditions can mimic skeletal changes produced by iron deficiency anemia, including scurvy, rickets, and infection (Henschen, 1961; Ortner, 1992; Schultz, 1993). Pathological ectocranial porosity in anemia and rickets can look similar macroscopically. Microscopic analysis reveals that advanced anemia affects the ectocranial surface and diploe, whereas rickets involves the ectocranial surface only (Schultz, 1993). Additionally, the presence of inflammatory bone on the ectocranial surface due to infection (see Chapter 3) can give the appearance of hemopoietic marrow expansion.
The paleopathological record of porotic hyperostosis is extensive, since it covers such a wide diversity of human populations spatially and temporally. The condition does not have a significant presence until the transition from the Upper Paleolithic to the Mesolithic in northern Africa and the eastern Mediterranean basin (Angel, 1978b, 1984) and during later prehistory and the early contact era in the Americas (see below).
Bioarchaeologists have speculated on the etiology of porotic hyperostosis in past human groups. Angel (1966a, 1967, 1971, 1978b) was among the first to systematically study large series of skeletal remains on a regional and population basis. On the basis of his study of some 2200 archaeological crania from the eastern Mediterranean region, principally Greece, Cyprus, and western Turkey, Angel proposed that porotic hyperostosis resulted from hereditary hemolytic anemias, namely thalassemia or sickle cell anemia. In this setting, where malaria is endemic, individuals who are heterozygous for sickle cell anemia or thalassemia have a selective advantage over normal homozygous individuals lacking the sickle or thalassemia genes. Carriers show lower infection rates by malarial parasites (genus Plasmodium), thus enjoying greater protection from malaria.
Other regional studies dealing with large samples of skeletal remains showed that porotic hyperostosis in past populations is likely to be due to a variety of nongenetic factors. In Wadi Halfa, Nubia, in the Nile Valley, high prevalences of orbital lesions (21.4%) for the Meroitic (350 BC-AD 350), X-group (AD 350-550), and Medieval Christian (AD 550-1400) periods have been reported (Carlson et al., 1974). Reconstruction of the environmental context based on archaeological, historical, and ethnographic evidence indicates that several factors probably contributed to iron deficiency anemia in this setting. Milled cereal grains (millet, wheat), the focus of diet in this setting, contain very little iron and are high in phytate.
Additionally, as with populations currently living in the Nile Valley, hookworm disease and schistosomiasis were probably highly endemic. These factors, combined with chronic diarrhea, which is also prevalent in the region today, indicate 'little doubt that cribra orbitalia in the Nubian remains resulted from (acquired) iron deficiency anemia' (Carlson et al., 1974).
Further to the south in the Nile Valley, a high prevalence of porotic hyperostosis (45%) has been reported at the Medieval period Kulubnarti site (AD 550-1500) (Mittler & Van Gerven, 1994). Early and late period Kulubnarti juveniles have remarkably high prevalences (94% and 82%, respectively; Van Gerven et al., 1995). Like the Nubian groups down river at Wadi Halfa, the Kulubnarti population suffered the ill effects of iron deficiency anemia due to reliance on iron-poor diets and other negative influences of sedentism and unhealthy living conditions. Analysis of demographic profiles of individuals with and without lesions indicates that those with porotic crania have shortened life expectancy, with differences greatest in the subadult years. There is a decline in porotic hyperostosis prevalence from 51.8% to 39.0% from the early to late Christian periods (AD 550-750 to AD 750-1500). This apparent improvement in iron status coincides with improvements in health generally that arose following increased political autonomy and improved living conditions (Mittler & Van Gerven, 1994).
Circumstances involving low iron bioavailability and increasing stress have been documented in Medieval and seventeenth century Tokyo (Hirata, 1990), prehistoric Iran and Iraq (Rathbun, 1984), third century BC Carthage (Fornaciari et al., 1981), Neolithic Greece (Papathanasiou et al., 1995), third century AD Moldavia (Miritoiu, 1992), and Romano-British, Medieval, and eighteenth-nineteenth century Britain (Grauer, 1991, 1993; Molleson & Cox, 1993; Stuart-Macadam, 1991; Wells, 1982). All of these settings have good contextual evidence for elevated environmental stress, but the specific circumstances for iron deficiency anemia are regionally specific. For example, causative factors for the high prevalence of porotic hyperostosis in the Roman-period Poundbury Camp population in Britain probably included parasitism, disease, and perhaps lead poisoning (Stuart-Macadam, 1991). High prevalences in eighteenth-nineteenth century urban London appear to be linked with poor living conditions, including parasitism, deficient infant diet, and low maternal iron status (Molleson & Cox, 1993). Improved environments result in the decline of prevalence of porotic hyperostosis. For example, a decrease in prevalence of porotic hyperostosis in modern Japan reflects decreased crowding, reduction in infectious diseases, and improved hygiene (Hirata, 1990).
The most abundant data on porotic hyperostosis are available from the New World, especially North America. In the American Southwest, porotic hyperostosis is highly prevalent (e.g., Akins, 1986; El-Najjar et al., 1975, 1976, 1982; El-Najjar & Robertson, 1976; Hooton, 1930; Kent, 1986; Lagia, 1993; Martin et al., 1991, 1995; Palkovich, 1980, 1987; Stodder, 1994; Stodder & Martin, 1992; Walker, 1985; Zaino, 1967, 1968). Among mostly late prehistoric Puebloan samples studied by El-Najjar and collaborators (e.g., El-Najjar et al., 1976) from Canyon de Chelly, Chaco Canyon, Inscription House, Navajo Reservoir, and Gran Quivira, porotic hyperostosis was found in 34.3% of individuals. At Chaco Canyon alone, some 71.8% of individuals display the characteristic lesions. Similarly, high prevalences have been reported from late prehistoric and contact-period sites, including San Cristobal (90%), Hawikku (84%), Black Mesa (88%), Mesa Verde (70%), Dolores (82%), Casas Grandes (46%), and La Plata Valley (40%) (Martin et al., 1995; Stodder, 1994; Weaver, 1985). There are some southwestern samples that have relatively low prevalences (e.g., 16% for Navajo Reservoir children; see Martin et al., 1991). Martin and coworkers (1991) note that comparison of data collected by different researchers is problematical, because of the varying methods used in identification and recording of porotic lesions. For example, some researchers may include slight pitting when analyzing their data sets, whereas others may not. Unfortunately, this distinction is only rarely noted in bioarchaeological reports, regardless of geographic or cultural setting.
El-Najjar (1976) links the elevated levels of porotic hyperostosis in the American Southwest and other regions of the New World to the effects of over-reliance on maize in conjunction with food processing techniques that may contribute to iron deficiency. Specifically, he regards the presence of phytate - an iron inhibitor - as well as lime treatment as decreasing the nutritional value of maize.
Analysis of archaeological samples from other maize agriculturalists in the New World provides mixed support for El-Najjar's dietary hypothesis. Relatively high prevalences of porotic hyperostosis (>15-20%) are present in agriculturalists in the American Midwest (e.g., Cook, 1984; Garner, 1991; Goodman et al., 1984; Lallo et al., 1977; Milner, 1983, 1991; Milner & Smith, 1990; Perzigian et al., 1984; Rose et al., 1984), Southeast (e.g., Boyd, 1986; Eisenberg, 1986a, 1991a, 1991b; Hancock, 1986; Parham & Scott, 1980), and Northeast (e.g., Magennis, 1986; Pfeiffer & Fairgrieve, 1994), as well as a range of other settings in Mesoamerica and South America (e.g., Cohen et al., 1994; Hodges, 1989; Hooton, 1940; Hrdlicka, 1914; Saul, 1972; Trinkaus, 1977; Ubelaker, 1984, 1992a; White et al., 1994). For some regions where skeletal remains of foragers (or less
intensive agriculturalists) have been compared with those of agriculturalists, there are clear temporal increases in porotic hyperostosis prevalence (e.g., Cook, 1984; Lallo et al., 1977; Perzigian et al., 1984; Rose et al., 1984; although see Hodges, 1989).

Skeletal series from large, late prehistoric Mississippian centers in the American Southeast (e.g., Blakely, 1980; Larsen, Ruff et al., 1992; Powell, 1988, 1989), contact era part-time maize agriculturalists in the Great Plains (Miller, 1995), a large urban center in Mesoamerica (Storey, 1992a), and the coastal desert of Peru and Chile (Allison, 1984) all display low prevalences. These findings are not consistent with the dietary hypothesis, suggesting that other factors underlie the etiology of porotic hyperostosis.

The dietary hypothesis does not account for the relatively high frequencies of porotic hyperostosis in some foraging populations. A number of Pacific coastal foraging groups with access to iron-rich marine resources have high prevalences of porotic hyperostosis. Moderate levels of porotic hyperostosis are present in precontact and contact era Northwest coast populations (13-14%; Cybulski, 1977, 1992, 1994). In this setting, European-introduced diseases may have prevented adequate iron metabolism during the contact period (Cybulski, 1994). The presence of porotic hyperostosis prior to contact indicates that there may have been other important factors, such as blood loss and parasitism (see Cybulski, 1994).
Late prehistoric foragers from the islands and mainland of the Santa Barbara Channel Islands region of California have higher prevalences than earlier foragers, increasing from 12.8% in the Early Early period to 32.1% in the Late Middle period (Lambert, 1994; Lambert & Walker, 1991; Walker, 1986a). Late period populations living on islands located furthest from the mainland coast have an extraordinarily high prevalence of porotic hyperostosis (73.1% on San Miguel Island). Walker and Lambert suggest that water contamination explains the elevated prevalence of the condition. High prevalence of iron deficiency anemia in island populations coincides with a period of increasing sedentism and population size, and a shift from terrestrial to marine diets. In the Late period, groups became concentrated around a limited number of water sources. As a result, diarrhea-causing enteric bacteria may have contaminated these water sources. Ethnographic evidence indicates that island populations preferred eating raw (vs. cooked) fish (see Walker, 1986a), thus also increasing their chances of acquiring parasitic infections.
Prevalences of porotic hyperostosis in prehistoric Australian foragers are consistently high in tropical/subtropical environments and low in desert environments (Webb, 1995). For example, in southeastern Australia, prevalences range from 62.5% (<21 years) in the Rufus Valley to 30.0% (<21 years) in the desert. Half of the juvenile crania from the tropics of northeastern Australia are porotic. Various factors appear to have contributed to iron deficiency anemia in Australia, but parasitism is primary. The Murray Valley, southeastern coast, and tropics provide well suited environments for support of various intestinal parasitic organisms, including Trichuris trichuris, Ascaris lumbricoides, Strongyloides stercoralis, and Enterobius vermicularis. In the tropics, hookworm infection may have been a principal cause. In living populations occupying the tropics of Australia, some 40% of children are infected with this helminth. Although it is unknown whether hookworm parasites were present in this region prior to contact by Europeans (in 1788), had they been, they would have caused the same types of health problems seen in living groups today.
The cumulative evidence showing a patchwork distribution of porotic hyperostosis independent of diet in past populations makes El-Najjar's dietary hypothesis unlikely. Samples studied by El-Najjar and others in the American Southwest tend to be from maize-dependent populations inhabiting canyon bottomlands. Perhaps the greater prevalence of porotic hyperostosis in the canyon sites was due to problems arising from poor drainage and contaminated water and generally more restricted diets rather than to maize consumption (Walker, 1986a).

El-Najjar and coworkers underplay the role of parasitism in iron deficiency anemia in the prehistoric American Southwest. They assert that parasitic infections (e.g., from hookworm) were 'extremely rare in Southwestern American Indians' (1975:921). However, recent analysis of coprolites from archaeological sites in the American Southwest indicates the presence of disease-causing parasites (e.g., Enterobius vermicularis, Moniliformis clarki, Strongyloides spp.; Cummings, 1994; Reinhard, 1992). Comparison of foragers and farmers indicates a dramatic increase in helminth parasitism in the latter, especially in E. vermicularis, the organism that causes pinworm infection. This finding is consistent with a decrease in sanitation and an increase in population crowding (Reinhard, 1992). Additionally, the dark and crowded living conditions in prehistoric Southwestern Pueblos of the Anasazi would have exacerbated these conditions, promoting infection and anemic responses (and see Kent, 1986).
A limited number of settings in South America show high prevalences of porotic hyperostosis (Peru and Ecuador: Hrdlicka, 1914; Ubelaker, 1981, 1992a). Porotic hyperostosis is low in prevalence in mountainous regions, and appears to be restricted primarily to late prehistoric coastal occupations. The penchant for coastal settings may reflect a more restricted access by native populations to fresh, parasite-free water sources in these areas. Ubelaker (1992a) contends that the coastal pattern of elevated porotic
hyperostosis prevalence fits the model of increased anemia due to chronic helminth disease brought about by population crowding and reduced hygiene. In twentieth century Ecuador, hookworm disease is a common and major public health problem in coastal regions. This distribution in contemporary populations, along with the pattern of prehistoric porotic hyperostosis, strongly implicates parasitism as a causal factor in northwestern South America.
Several investigations suggest that increased prevalence of porotic hyperostosis may be due to highly localized factors. Porotic hyperostosis has been evaluated in prehistoric and contact era populations that inhabited the southeastern United States Atlantic coast of Georgia and northern Florida (Larsen, Ruff et al., 1992). Maize agriculture - introduced during the twelfth century AD - played an important role in changes in health in native populations, including increased prevalence of nonspecific infections due to population aggregation, along with other evidence of increased stress (see below). However, comparisons of prehistoric foragers and farmers show low prevalences of porotic hyperostosis: 6.5% and 6.2%, respectively. Marine resources contributed significantly to diets in both foragers and farmers, which may have enhanced iron absorption in these groups (see above). In the contact period, both the early mission population living on St. Catherines Island, Georgia (Santa Catalina de Guale; AD 1607-1680), and their descendants on Amelia Island, Florida (Santa Catalina de Guale de Santa Maria; AD 1686-1702), have considerably higher prevalences of porotic hyperostosis: 26.5% and 27.2%, respectively. Archaeological and bone isotope evidence indicates an increase in maize consumption in the mission groups, but marine foods continued to be heavily used.
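Whether a prevalence jump of this magnitude could plausibly be a sampling artifact is easy to check with a contingency-table test. The Python sketch below is purely illustrative: the sample sizes are hypothetical stand-ins, not the counts reported in the study.

from scipy.stats import chi2_contingency

# Hypothetical counts for a 6.5% (precontact) vs. 26.5% (mission) contrast.
n_precontact, n_mission = 200, 150             # assumed sample sizes
affected = [round(0.065 * n_precontact), round(0.265 * n_mission)]
unaffected = [n_precontact - affected[0], n_mission - affected[1]]

chi2, p, dof, expected = chi2_contingency([affected, unaffected])
print(f'chi2 = {chi2:.1f}, df = {dof}, p = {p:.2g}')
# For these counts the difference is highly significant (p << 0.001).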
The dramatic increase in prevalence of porotic hyperostosis in contact era coastal Georgia and Florida populations may be related to conditions similar to those documented in the Santa Barbara Channel Islands region. At missions in Spanish Florida, limited and easily contaminated wells located next to the settlement served as primary water sources. Wells in this subtropical setting are highly susceptible to contamination by parasites and microbes that cause diarrheal infections. During the mission period on St. Catherines Island, a freshwater stream bordering the mission/village was artificially dammed and also used as a principal water source (Larsen, Ruff et al., 1992). An abundance of archaeological refuse deposits - mostly food remains - surrounds and intrudes into this water source today. The accumulation of refuse during the mission period probably contributed to water contamination, thus also providing an important source of infections potentially leading to iron deficiency

anemia. The general increase in the concentration of population and
sedentism during the mission period undoubtedly fostered poor sanitation
and living conditions (Larsen, Ruff et al., 1992).
The data generated in analysis of archaeological human skeletons
worldwide indicate that the etiology of porotic hyperostosis can only be
understood in relation to multiple stressors. Although common factors
(e.g., parasitism, poor diets, decreased sanitation) are likely to be present in
many regions, these studies also demonstrate that the behavioral circum-
stances unique to particular settings must be considered when porotic
hyperostosis prevalence is interpreted. Much more information is needed
on details and circumstances regarding living conditions and lifestyle (e.g.,
trash disposal, household and settlement size, dietary practices, food
preparation techniques). Other classes of pathological data need to be
considered for the understanding of health patterns that potentially
influence iron status (cf. Mensforth et al., 1978; Weaver, 1985).
Variation in the prevalence of porotic hyperostosis in human populations should inform our understanding of the differential costs of disease stress in past societies. In some settings, there is a consistently higher prevalence of porotic hyperostosis in adult women than adult men (Dickel, 1991; Webb, 1995; Whittington, 1989). For Australia, Webb (1995) suggests that a higher prevalence in women than men reflects the stresses of '[c]hildbearing, lactation, menstruation and the imposition of food taboos'. Given that the pathological condition primarily reflects childhood anemia, it seems unlikely that parity status, lactation, or menstruation can explain the variability in porotic hyperostosis prevalences. These differences suggest, though, that the growth period may have involved greater anemia stresses in females than in males.
Comparison of porotic hyperostosis prevalence across social ranks in prehistoric stratified societies suggests some differences in iron deficiency anemia. Elite individuals in several settings in the American Southeast and Midwest have a lower prevalence of porotic hyperostosis than nonelite individuals. This difference has been reported at prehistoric Mississippian localities from Moundville, Alabama (2.5%, elite; 9.9%, nonelite) and at Toqua, Tennessee (5%, mound burial; 21%, village burial) (Parham & Scott, 1980; Powell, 1988, 1992a). At Mound 72 in the late prehistoric Cahokia site, Illinois, high-status individuals lacked porotic hyperostosis, whereas 12.5% of low-status individuals - female sacrificial victims - have the condition (Rose & Hartnady, 1987). These differences suggest, then, that high-status individuals may have been buffered against stressors that result in iron deficiency anemia.
In summary, a consensus has emerged that porotic hyperostosis in
archaeological skeletal samples is due to acquired iron deficiency anemia in the vast majority of cases. Genetic anemias may have contributed to porotic hyperostosis in the past, but it is unlikely that they would have occurred in appreciable frequencies. Moreover, porotic hyperostosis in areas of the world where genetic anemias (e.g., sickle cell anemia, thalassemia) did not occur prior to contact by Europeans - such as Australia and the Americas - can be explained only by negative environmental factors.
2.4.2 Skeletal lines of growth disruption (Harris lines)
On many skeletal elements, including long bones and round or irregular
skeletal elements (e.g., scapula, ischium, ilium), radiopaque lines may be
visible in x-rays that follow growth contours, 'topographically mapping the
history of the bone' (Garn et al., 1968:58; Figure 2.5). Lines range in
thickness from less than 1 mm to more than 1 cm, and are thickest in areas
of rapid growth, such as the distal tibia and femur (Dreizen et al., 1964;
Garn et al., 1968).
Although transverse lines were originally considered to be symptomatic of rickets (Wegner, 1874), studies of living populations and animal studies link these lines to many other conditions potentially resulting in metabolic insult, including dietary insufficiencies (Blanco et al., 1974; Dreizen et al., 1956, 1964; Garn et al., 1968; Harris, 1931, 1933; Martin et al., 1985; Park & Richter, 1953; Platt & Stewart, 1962; Stewart & Platt, 1958), disease (Acheson, 1959; Harris, 1931, 1933), trauma from minor surgery and immunization (Garn et al., 1968), fracture (Ferrozo et al., 1990), lead poisoning (Caffey, 1931), and the physiological and psychological impact of weaning (Clarke & Gindhart, 1981). Most lines appear to form after six months of life, peaking some time during the first five years (Clarke, 1980; Clarke & Gindhart, 1981; Dreizen et al., 1964).
Although they are commonly referred to as growth arrest lines, a better description is growth recovery lines, since most evidence indicates that the lines form during the recovery phase following growth arrest (Garn et al., 1968). When the epiphysis commences growth following the stress event, mineralization at the growth plate continues in the absence of epiphyseal cartilage deposition. The association of transverse lines with growth zones as well as their bilateral presence (e.g., left and right tibiae; Garn & Baby, 1969; McHenry, 1968) indicates that they are linked with systemic physiological stress.
Analyses of transverse lines in archaeological remains provide some insight into stress history. Comparisons of transverse lines in early and late
Figure 2.5. Harris lines on juvenile tibia (left) and femur (right); anatomical specimens. The dashed lines indicate the contours of the growth disruption. (From Garn et al., 1968; reproduced with permission of authors and Eastman-Kodak Company.)
prehistoric foragers in central California show a general decrease in frequency, possibly indicating improved reliability of food sources (Dickel et al., 1984; McHenry, 1968). This argument runs counter to an abundance of evidence drawn from the study of other pathological conditions and stature showing an increase in stress in populations in the Central Valley (Ivanhoe, 1995) and to the south in the Santa Barbara Channel Islands region (Lambert, 1994).

Other prehistoric foragers show variable frequencies of transverse lines. High-latitude Arctic populations express elevated prevalences (>30-50%; e.g., Buikstra, 1976b; Lobdell, 1984, 1988; Steffian & Simon, 1994; Yesner, 1990). In several of these settings where even spacing of lines has been
observed, regularity or periodicity of stress-perhaps on a seasonal basis-is
inferred (e.g., Buikstra, 1976b; Lobdell, 1988). The common occurrence of
transverse lines in these populations indicates that metabolic stress is more
characteristic of Arctic lifeways than has often been assumed (see Steffian &
Simon, 1994). In addition to nutritional deficits, other stressors associated
with this setting include the constant variation in the ratio of light to dark,
placing significant demands on the body, especially in children (Condon,
1983). Seasonal or other alterations in the intensity and duration of daylight
lead to various changes in overall health and mental functioning, especially
in circumstances involving the depletion of light. Study of living Inuit
in the central Canadian Arctic indicates that January is a peak time of disease
susceptibility, primarily because of extremely low temperatures, low
ambient humidity within and outside dwellings, and lowered sunlight
(Condon, 1983). This may be exacerbated by the effects of desynchroniz-
ation of natural physiological rhythms and lack of sleep.
There is a high, but variable, prevalence of Harris lines in prehistoric
foragers in Australia (Webb, 1995). As in the Arctic populations, stress is
probably not related to high levels of infectious disease and sedentism,
since population density is low for most regions. Rather, transverse lines
reflect seasonal nutritional deficits (Webb, 1995).
Temporal comparisons of transverse lines in archaeological samples
present a mixed picture. Comparisons of three successive periods at
Dickson Mounds - Late Woodland (AD 950-1050), Mississippian Accul-
turated Late Woodland (AD 1050-1200), and Middle Mississippian (AD
1200-1300) - reveal a decrease in tibial transverse lines (Goodman & Clark,
1981). Similarly, lines decreased in frequency in a comparison of foragers
and farmers in the Ohio River valley (Cassidy, 1984; Perzigian et al., 1984)
and in the Caddo region of the southeastern US (adjoining states of Texas,
Oklahoma, Arkansas, Louisiana) (Rose et al., 1984). These trends suggest-
ing a decrease in stress are puzzling, because most other indicators of
morbidity show a highly consistent pattern of increase in stress and reduced
health status (e.g., enamel defects, nonspecific infection).
Other investigations do not show consistency between the prevalence of
lines and other indicators of stress. Comparison of prehistoric farmers with
later populations who had shifted their dietary focus to include more meat
in northwestern Nebraska reveals a decrease in frequency of individuals
with lines, from 84.6% to 45.8% (Sandness & Green, 1993). The finding of a
decrease in Harris lines in contact era native populations in Nebraska is
consistent with other markers that show a general picture of health
improvement (Miller, 1995). Prehistoric farmers in the lower Illinois River
valley and in southwestern Asia show increases in line prevalence relative to
foragers in both settings (Cook, 1984; Rathbun, 1984). Cook (1984)
regards the increase in prevalence in the lower Illinois River valley as
reflecting moderate increases in stress relating to a shift in dietary focus to
maize.
The use of transverse lines for documenting stress in past populations is
clouded by the fact that lines have a tendency to fade or vanish with
advancing age, due to bone remodeling. In a study of living populations
composed of individuals of known stress history, lines showed a decrease in
width with advancing age; some lines disappeared, whereas others were
inexplicably retained well into adulthood (Garn & Schwager, 1967; Garn et
al., 1968).
Study of formation rates, persistence, and loss of lines in tibiae from the
Medieval period Kulubnarti site in Sudanese Nubia reveals a clear history of
growth disruption, but in the context of complexities addressed by Garn and
coworkers on line disappearance and persistence (Hummert & Van Gerven,
1985). Juveniles in this setting display high prevalence of Harris lines; adults
have few lines. The reduction in frequency of lines in older individuals
suggests that lines denoting previous stress events had disappeared, due to
bone remodeling. Therefore, although lines present in adults certainly reflect
episodes of metabolic stress, their absence represents either the lack of stress
or simply the resorption of lines. Thus, transverse lines are relatively more
representative of stress history in juveniles than in adults.
Another difficulty of using Harris lines for assessing stress is the high
degree of frequency variation in relation to individual health history. Study
of living individuals of known stress history reveals the presence of
numerous lines in clinically normal children with uneventful health
histories (Garn et al., 1968), and few lines in children who are well below
weight-for-age (Walimbe & Gambhir, 1994). These findings and the lack of
close association between transverse lines and disease episodes in archae-
ological populations (e.g., Mensforth, 1981) and in living populations (e.g.,
Marshall, 1968) suggest that this stress indicator should be interpreted
cautiously in bioarchaeological analysis, especially in consideration of
health status and its relationship to specific behavioral, environmental, and
dietary adaptations.
2.4.3 Growth disruption of dental tissues: enamel defects
Growth of tooth enamel commences at the incisal or cuspal terminus of the
crown and proceeds in a uniform fashion to completion at the cemen-
toenamel junction (Figure 2.6). The enamel is first laid down by amelo-
blasts (enamel-producing cells) secreting a highly proteinaceous matrix.
Figure 2.6. Section of permanent mandibular canine showing major features
discussed in text. (Adapted from Rose et al., 1985; reproduced with permission
of authors and Academic Press, Inc.)
This matrix then mineralizes into an acellular material composed mostly
(>97%) of inorganic salts, thus forming the fully mature enamel (see
Goodman & Rose, 1990, for a full discussion of dental tissue growth). The
enamel matrix is deposited in a series of structural increments demarcated
by striae (or lines or bands) of Retzius. Like the production of skeletal
tissue, the formation of enamel is a regular process that is subject to factors
that may either slow or stop growth. Tooth enamel is especially sensitive to
metabolic insults arising from nutritional deficiencies or disease, or both.
Because enamel does not remodel and it preserves better than any other
hard tissue, developmental disturbances provide an excellent source of
information towards reconstructing a retrospective stress and morbidity
history of human populations, past or present.
Macrodefects
Virtually any environmental factor leading to metabolic disturbance will
result in visible changes in the structure of enamel. Ameloblasts are
especially sensitive to even minor physiological disruptions. Enamel
defects arising from physiological perturbation have been most frequently
documented as visible alterations of the tooth surface, especially hypo-
plasias, and to a lesser extent hypocalcifications. Hypoplasias are quanti-
tative defects characterized as deficiencies in the amount or thickness of
enamel (Goodman & Rose, 1990; Suckling, 1989). They vary in appearance
Figure 2.7. Maxillary dentition showing enamel hypoplasias on incompletely
erupted central incisors; anatomical specimen. (From Larsen, 1994;
photograph by Barry Stark; reproduced with permission of John Wiley &
Sons, Inc.)
from small pits or furrows to large, deep grooves or even large areas of
missing enamel. Typically, these defects are horizontal grooves that are
called chronological or linear enamel hypoplasias (Figure 2.7). The color
and hardness of hypoplastic enamel are normal.
Hypocalcifications are enamel defects involving change in color or
opacity, reflecting variation in enamel quality or hardness. The enamel
surface is usually smooth and appears intact. Hypoplasias occur when
ameloblasts fail to produce the normal thickness of enamel matrix during
enamel development. Hypocalcifications appear to result from a disruption
of the mineralization process during the maturation stage of enamel
development. This dichotomy may not be so clear cut, since hypocalcifica-
tions have been experimentally documented in the initial stage of enamel
formation (Suckling, 1989).
Hypoplasias result from three potential causes, including hereditary
anomalies, localized traumas, and systemic metabolic stress (Goodman &
Rose, 1991). Defects arising as hereditary anomalies or as localized
traumas are rare in human populations, indicating that the vast majority of
hypoplasias seen in contemporary and archaeological populations are
linked to systemic physiological stress. The causal stressors associated with
hypoplasias are numerous and varied. Clinical and epidemiological investi-
gations in living populations document associations with systemic diseases,
neonatal disturbances, and nutritional deprivation (reviewed by Hillson,
1996; Pindborg, 1982). Experimentally induced stress in laboratory ani-
mals has also shown the direct link between enamel deficiency and stress
(e.g., Kreshover, 1944; Kreshover & Clough, 1953a, 1953b; Suckling et al.,
1983, 1986; Suckling & Thurley, 1984). Studies of nonhuman primates with
known life histories reveal links between enamel defects and life events,
including birth, parturition, poor physical growth, social stress, and the
stresses associated with capture (Bowman, 1991). Thus, enamel defects are
a nonspecific indicator of physiological stress (Goodman & Rose, 1990;
Kreshover, 1960; Pindborg, 1982).
Ecological factors are critical for understanding the prevalence and
pattern of enamel defects in human populations. Studies of living popula-
tions with dietary deficiencies show the primacy of nutrition in the
development of normal enamel. Analysis of individuals born during the
starvation famine of 1959-1961 in the People's Republic of China reveals
that enamel formed during the famine is highly defective, unlike the enamel
that formed either before or after the famine (Zhou, 1995). Rural
individuals have more defects than urban individuals, a pattern consistent
with records indicating that the rural population was subjected to more
stress than the urban population (Zhou, 1995).
Enamel hypoplasias show a predilection for anterior teeth and for the
cervical and middle thirds of tooth crowns, suggesting that specific teeth
and regions of crowns are differentially susceptible to growth disruption
(Condon & Rose, 1992; Goodman & Armelagos, 1985a, 1985b; Hutchin-
son & Larsen, 1988; Li et al., 1995; Pedersen & Scott, 1951; Zhou, 1995).
Susceptibility to growth disruption in specific teeth and specific areas of
tooth crowns may vary according to the deposition rate of enamel matrix:
greater susceptibility to formation of enamel defects may be associated
with slower deposition rates (Condon & Rose, 1992). The implication of
variability in defect susceptibility is that anterior teeth and cervical and
middle thirds of tooth crowns provide the most representative picture of
stress.
Microdefects
Histological structures in dental enamel known as Wilson bands (or
accentuated or pathological striae of Retzius) provide a detailed record of
growth disruption (Goodman & Rose, 1990; Marks, 1992; Wright, 1990).
Wilson bands are thin layers of abnormally structured enamel marking the
Figure 2.8. SEM micrograph (× 230) of a polished and etched
longitudinal section from a maxillary central incisor showing a Wilson band;
Santa Catalina de Guale, Amelia Island, Florida. (Photograph by Scott W.
Simpson.)
position of the active ameloblasts at the time of the insult (Figure 2.8). They
typically begin at the cuspal/incisal slope of the hypoplasia and terminate
at the dentoenamel junction (Condon & Rose, 1992; Goodman & Rose,
1990). A common association between these incremental structures and life
history is the birth event, resulting in a distinctive 'neonatal line' on the
forming teeth, namely the deciduous teeth and the permanent first molars
(Schour, 1936; Whittaker & Richards, 1978). Wide Wilson bands are
associated with traumatic births (Eli et al., 1989), suggesting that the width
of Wilson bands may represent an indicator of stress severity.
Wilson bands are not always associated with a surface macrodefect such
as a hypoplasia (Bullion, 1986; Condon, 1981; Condon & Rose, 1992;
Danforth, 1989; Goodman & Rose, 1990; Rose, 1977; Wright, 1990). The
lack of a consistent association between macro- and microdefects suggests
that their etiologies are different. Wilson bands appear to represent brief
periods of stress lasting from one to five days, whereas hypoplasias appear
to represent long-term stress lasting from weeks to several months
(Condon, 1981).
Stress chronology
Because metabolic insults leading to growth disruption affect only the part
of the tooth that is in the process of forming, location of the disturbance on
the tooth crown provides a precise chronological indicator of stress history.
Tooth enamel begins to form at about four months in utero, beginning with
the deciduous first incisors, and is complete when the crowns of the
permanent third molars are fully formed at about age 12 (Smith, 1991; Ten
Cate, 1991). The position of an enamel defect (e.g.,
hypoplasia) relative to the cementoenamel junction can be used to plot the
age of disturbance (Goodman et al., 1980; Sciulli, 1992; Swärdstedt, 1966).
Earlier researchers suggested that there is a preprogrammed stress clock in
humans (e.g., Massler et al., 1941; Sarnat & Schour, 1942). However, the
body of evidence that has built up over the last five decades indicates little
support for a universal model of the timing of growth disruption (see
Goodman, 1989; Goodman & Armelagos, 1985a, 1985b; Zhou, 1995; and
below).
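Plotting defect age lends itself to simple computation. The following is a minimal sketch in Python, assuming a strictly linear model of crown formation between the ages of crown initiation and completion; the function name and the illustrative tooth values are ours, and a real analysis would substitute tooth-specific published standards (e.g., Massler et al., 1941; Smith, 1991) or regression-based corrections.

    # Minimal sketch: estimating the developmental age of an enamel defect,
    # assuming crown height accrues linearly between crown initiation and
    # completion (a simplification of published dental standards).
    def defect_age(defect_to_cej_mm, crown_height_mm, onset_yr, completion_yr):
        # Enamel forms from the incisal/occlusal terminus toward the CEJ, so
        # a defect far from the CEJ formed early in crown formation.
        fraction_formed = (crown_height_mm - defect_to_cej_mm) / crown_height_mm
        return onset_yr + fraction_formed * (completion_yr - onset_yr)

    # Illustrative values only: a defect 3.5 mm above the CEJ on an 11 mm
    # incisor crown forming between roughly 0.3 and 4.5 years.
    print(round(defect_age(3.5, 11.0, 0.3, 4.5), 1))  # ~3.2 years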
Although the relationship between enamel defects and age has been
recognized since the nineteenth century (e.g., Talbot, 1898), this chrono-
logical approach has only recently been applied to archaeological remains.
The pioneering investigation of Medieval-era dentitions from Westerhus,
Sweden, by Swärdstedt (1966) revealed that hypoplasias peaked in the two
to four year period, a pattern that has been identified in many other
archaeological samples (e.g., Corruccini et al., 1985; Goodman et al., 1980;
Hillson, 1979; Hodges, 1989; Hutchinson & Larsen, 1995; Martin et al.,
1991; Powell, 1988; Storey, 1992a, 1992b) (Figure 2.9).
The tendency for hypoplasias to occur after the first year in archaeologi-
cal samples suggests that stresses may be due to the negative effects of
weaning (e.g., Cappa et al., 1995; Corruccini et al., 1985; Lanphear, 1990;
Lillie, 1996; Moggi-Cecchi et al., 1994; Ogilvie et al., 1989; Simpson et al.,
1990; Ubelaker, 1992b; Webb, 1995; and many others). Weaning may not
be the most appropriate explanation in all circumstances. A test of the
weaning hypothesis based on the study of historical records and archae-
ological dentitions from enslaved African-American populations living in
Maryland and Virginia reveals that the peak frequencies of hypoplasias are
in the 1.5-4.5-year age intervals, whereas weaning took place only nine
months to one year after birth (Blakey et al., 1994). This discrepancy
between age pattern of hypoplasias and weaning led Blakey and coworkers
(1994) to conclude that weaning was not the primary causal factor leading
to enamel defects. Rather, other stresses of enslavement, including nutri-
tional problems, poor hygiene, and illness, were likely to be responsible for
Figure 2.9. Stillwater (dotted line) maxillary permanent central incisor
hypoplasia frequencies (% per half-year age group) compared with Georgia
coastal foragers (unbroken line) and farmers (dashed line). (From Hutchinson
& Larsen, 1995; reproduced with permission of the American Museum of
Natural History.)
the age pattern of physiological perturbation in this setting. The predilec-
tion for the two to four year period reflects at least in part the greater
susceptibility of the region of the tooth crown associated with this period of
enamel deposition and tooth formation. Therefore, the general acceptance
of weaning as a cause for the age profile of enamel defects in archaeological
settings, usually between two and four years, is incorrect (and see
Katzenberg et al., 1996). Weaning may certainly be a cause of stress leading
to poor enamel. However, the link between enamel defects and weaning is
coincidental rather than real in many circumstances.
Duration/severity of stress
Because enamel is deposited in consecutive layers from the incisal/occlusal
surface to the cementoenamel junction over some interval of time,
hypoplasia width should represent a quantification of duration of stress
events (Blakey & Armelagos, 1985; Danforth, 1989; Ensor & Irish, 1995;
Hutchinson & Larsen, 1988, 1990, 1995; Larsen & Hutchinson, 1992;
Sarnat & Schour, 1941, 1942; Simpson et al., 1990). Suckling and
coworkers (1986; Suckling, 1989) concluded on the basis of experimental
work with laboratory sheep that severity of stress plays a vital role in
determining the width of individual hypoplasias. Therefore, hypoplasia
size may reflect either stress duration or severity or perhaps some unknown
combination of both.
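Under the same simple linear model of crown formation used above for timing, a defect's occlusal-cervical width can be mapped to an approximate duration. The sketch below is ours and illustrative only, not a published procedure; as the experimental work just cited cautions, width confounds duration with severity, so the result is at best a rough bound on what width alone can say.

    # Minimal sketch: hypoplasia width mapped to an approximate duration of
    # disruption under a linear crown-formation model. Width also reflects
    # severity, so treat the result as illustrative, not diagnostic.
    def disruption_duration_yr(defect_width_mm, crown_height_mm,
                               onset_yr, completion_yr):
        formation_span_yr = completion_yr - onset_yr
        return (defect_width_mm / crown_height_mm) * formation_span_yr

    # A 0.5 mm-wide defect on the hypothetical incisor used above:
    print(round(disruption_duration_yr(0.5, 11.0, 0.3, 4.5) * 365))  # ~70 days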
Stress histories in human populations
The connection between poor living conditions and enamel defect preva-
lence is well supported by epidemiological studies of contemporary human
populations. In general, individuals from developed nations tend to have
far lower prevalences than individuals from underdeveloped nations (see
Goodman & Rose, 1991). In this regard, less than 10% of individuals from
developed nations have one or more hypoplasias, whereas hypoplasias are
commonplace in many underdeveloped settings or in disadvantaged
subgroups of populations with poorer diets, more disease, or some
combination of undernutrition and disease (e.g., Anderson & Stevenson,
1930; Baume & Meyer, 1966; Dobney & Goodman, 1991; Enwonwu, 1973;
Goodman et al., 1987, 1991; Goodman, Pelto et al., 1992; Infante, 1974;
Infante & Gillespie, 1974, 1977; Li et al., 1995; Zhou, 1995; Lukacs & Joshi,
1992; Massler et al., 1941; May et al., 1993; Pedersen & Scott, 1951; Sawyer
& Nwoku, 1985; Sweeney et al., 1971).
Several case studies are especially informative regarding the link between
stress, socioeconomic status, and life history. Children from villages in the
Solis Valley of the Temascalcingo region of the Mexican highlands display
prevalences of hypoplasias that document the relationship between poor
growth status and physiological disruption (Goodman, Pelto et al., 1992).
Children with enamel defects have reduced body weights and heights-for-
age in comparison with children who lack defects. Predictably, children
with hypoplasias tend to be from families of lower socioeconomic status
living under conditions of malnutrition and poor sanitation. The associ-
ation between negative environments and defect prevalence is well illus-
trated in settings involving selected dietary supplementation (Dobney &
Goodman, 1991; Goodman et al., 1991; May et al., 1993). In rural Mexico
and Guatemala, for example, children receiving dietary supplements have
far fewer linear enamel hypoplasias than their nonsupplemented peers. In
Guatemala, children who were ill more than 3.6% of the time had more
hypoplasias than other children (May et al., 1993). Comparable patterns of
morbidity differences are reported for Wilson band prevalence in living
populations. Children with histories of chronic systemic disease in Shef-
field, England, have elevated prevalence of Wilson bands in their deciduous
teeth (Hillier & Craig, 1992).
The study of dentitions from paleontological and archaeological con-
texts adds a great deal to our present understanding of the history of human
stress and its complex links with environment, culture, and biology. Unlike
most stress indicators, enamel defects have been systematically investigated
in a range of early hominids, including australopithecines (e.g., Tobias,
1967; White, 1978), early archaic Homo sapiens (Bermúdez de Castro &
Pérez, 1995), and Neandertals (Hutchinson et al., 1994; Molnar & Molnar,
1985a; Ogilvie et al., 1989). Early archaic H. sapiens from Atapuerca, Spain
(Bermúdez de Castro & Pérez, 1995), and Neandertals generally (Ogilvie et
al., 1989), possess low to moderate prevalences of hypoplasias (permanent
dentition: 12.8%, Atapuerca; 41.9%, Krapina). Each of the two series
displays similar age patterns of hypoplasias, including two primary peaks,
the first in early childhood (3-4 years) and a second in late childhood
(10-13 years). The late peak is especially interesting, because it represents
stress events affecting posterior teeth, teeth that rarely exhibit enamel
defects in modern human populations (see Ogilvie et al., 1989). Thus, the
earlier peak may reflect early childhood stress (e.g., weaning), and the later
peak may represent overall high levels of systemic stress. Because genetic
defects were probably eliminated from the gene pool in these early
hominids, genetic agents leading to growth disruption do not explain
enamel defects in these settings. Infection may have been a cause of growth
disruption, but the Atapuerca and Krapina samples show very low
prevalences of skeletal infection. Instead, nutritional deficiencies - perhaps
during periodic food shortages - were the most likely causative factor
(Hutchinson et al., 1994; Ogilvie et al., 1989).
Prevalences and widths of defects in Krapina Neandertals are not
remarkable in comparison with modern foragers from archaeological
contexts (Hutchinson et al., 1994). Electron spin resonance dates from
Krapina (ca. 130 000 BP) place these hominids in the last interglacial (stage
5e; Rink et al., 1995), which is a period of relative environmental stability.
This environmental stability may have engendered the availability of
adequate food resources and, hence, relatively low stress.
Temporal comparisons of enamel defect prevalence in Holocene popula-
tions undergoing dietary and behavioral changes show clear trends in
physiological stress. Especially striking are changes observed in human
populations undergoing the shift from foraging to agriculture or agricul-
tural intensification. In general, these comparisons reveal increases in the
prevalence of enamel defects, especially in populations inhabiting the
Eastern Woodlands of North America (Cassidy, 1984; Cook, 1984;
Goodman et al., 1984; Perzigian et al., 1984; Rose et al., 1984), Latin
America (Ubelaker, 1984, 1992b; although see Hodges, 1989, for excep-
tion), and to a lesser extent, Asia (Rathbun, 1984; Smith, Bar-Yosef et al.,
1984; although see Yamamoto, 1992). This is not a universal trend. In
comparison of late prehistoric and contact-era maize agriculturalists from
one area of the American Southeast, there is a decrease in hypoplasia
prevalence (Hutchinson & Larsen, 1988; Larsen & Hutchinson, 1992;
Simpson et al., 1990). Hypoplasia widths increase in this setting. These
trends reflect a decline in number of individuals affected, but stress episodes
were of either longer duration, or greater severity, or both. Other late
prehistoric or contact period agriculturalists in the New World express
generally high frequencies of hypoplasias, which reflects the deterioration
of health in many of these settings (e.g., Cohen et al., 1994; Martin et al.,
1991; Milner & Smith, 1990; Pfeiffer & Fairgrieve, 1994; Stodder, 1994;
Stodder & Martin, 1992).
The impact of the adoption of agriculture on the stress experience in
earlier human populations is revealed in a number of settings where
changes in health and nutrition are documented by archaeological and
osteological means. At Dickson Mounds, multiple indicators show increas-
ing levels of nutritional stress and infectious disease (see above; Chapter 3).
Increase in prevalence of enamel defects - both macrodefects and micro-
defects - is consistent with these findings (Goodman, 1989; Goodman &
Armelagos, 1988; Goodman et al., 1980, 1984; Lallo & Rose, 1979; Rose et
al., 1978). The mean frequency of hypoplastic defects increased to 0.9, 1.2,
and 1.6 per individual in the Late Woodland, Mississippian Acculturated
Late Woodland, and Middle Mississippian periods, respectively. For the
same three periods, the frequency of individuals affected increased to 45%,
60%, and 80%. Most defects occurred in the 2-4-year period for the first
two periods. Defects in the late prehistoric intensive agriculturalists
(Middle Mississippian) were earlier (<2 years) than in either of the two
previous periods, indicating that stress occurred in earlier childhood as
nutritional quality worsened and disease intensified.
Comparisons of age-at-death for Dickson Mounds individuals with and
without hypoplasias reveal that in the final two prehistoric periods, mean
age-at-death of individuals without hypoplasias is 36.6 years and 37.5
years, and mean age-at-death for individuals with hypoplasias is 31.3 and
30.2 years. These results are similar to other settings, showing an inverse
relationship between age and enamel defects - adults have fewer defects
than juveniles, older juveniles have fewer defects than younger juveniles,
and/or older adults have fewer defects than younger adults (e.g., Blakely,
1988; Cook, 1990; Cook & Buikstra, 1979; Danforth, 1989; Duray, 1996a,
1996b; Rose et al., 1978; Simpson et al., 1990; Stodder, 1995; Swärdstedt,
1966; White, 1978). These findings suggest that individuals experiencing
stress during childhood are predisposed to early death. This may indicate
that individuals who are stressed in childhood continue to be stressed as
adults, resulting in weaker constitutions and earlier death. Alternatively,
individuals experiencing stress early in their lives may somehow lose the
ability to deal with stress later in life. Finally, higher social position of some
juveniles in stratified societies (e.g., Dickson Mounds) may buffer them
from stresses experienced by lower social ranks (Goodman & Armelagos,
1988). Unfortunately, with archaeological samples, it is difficult to deter-
mine the likelihood of the alternative explanations (see also Goodman,
1989; Goodman & Armelagos, 1988). The similar pattern of age differences
in vertebral neural arch size, tooth size, and enamel defect frequency,
especially in comparison of juveniles and adults from the same population,
strongly suggests that individuals surviving to adulthood enjoyed relatively
better health than those members of the population who expired prior to
reaching adulthood.
Wilson bands in archaeological remains also provide an important
avenue for investigating other major transitions in human populations that
might compromise health status and stress levels. Wright (1990) deter-
mined prevalences of bands in precontact and contact-era mandibular
canines from Lamanai, Belize, a Maya center occupied from the Preclassic
period through the Historic period. Following an initial period of aban-
donment after contact by Europeans, the site was re-occupied as a Catholic
mission by Mayan Indians until the mid-seventeenth century. Diet changed
relatively little in the prehistoric to historic transition, but archival records
indicate that other stressors (e.g., European-introduced diseases) compro-
mised health in native populations following contact. Comparisons of
Postclassic and Historic dentitions show a dramatic increase in physiologi-
cal stress: 84% of bands observed in the samples combined are from the
Historic dentitions. Historic individuals also show more bands than
precontact individuals (2.4 vs. 0.88 per individual, respectively). Given the
lack of major dietary changes, Wright (1990) argues that the differences in
microdefect prevalence can be attributed to changing disease patterns with
contact, such as the introduction of malaria and other Old World parasitic
infections, diseases leading to acute health crises.
A contrasting temporal trend in microdefect prevalence in comparison
of prehistoric and contact era native populations is identified at another
mission locality in Belize. Microdefect prevalence in mission Indians at
Tipu decreases in comparison with precontact Indians (Cohen et al., 1994;
Danforth, 1989). The different temporal trends at Lamanai and Tipu may
indicate very different contact experiences within the relatively small
geographical setting of Belize. Lamanai served as an important reducción
center where populations were relocated from nearby villages to the town,
whereas Tipu was only marginally affected by population relocation and
concentration during the mission period. Thus, the contrasting contact
experiences at Lamanai and Tipu may explain the different patterns of
morbidity.
Temporal patterns documenting increase in stress are also displayed in
foraging populations undergoing significant adaptive shifts. In the Santa
Barbara Channel Islands region, there is a marked increase in frequency of
hypoplasias in the Middle period when populations underwent a transition
from hunting and gathering of terrestrial foods to a heavy reliance on
marine foods, especially fish (Lambert, 1994). Hypoplasias in the mandibu-
lar left canine increased in prevalence from 18.4% in the Late Early period
to 49.2% in the Early Middle period. This increase in stress mirrors
temporal changes observed for other nonspecific indicators (e.g., periosteal
reactions), all of which appear to be associated with problems relating to
increasing sedentism, population aggregation, and declining resource
availability.
Enamel defect prevalences appear to differ between class or social
groupings in a number of settings, suggesting that higher-status individuals
have better diets or other positive environmental factors than lower-status
individuals. The Medieval period Westerhus sample represents an unam-
biguous example of status differences in enamel defect prevalence (Swärd-
stedt, 1966). Comparisons of individuals of high, intermediate, and low
status reveal a clear link between social position and stress experience:
high-status individuals have the lowest prevalence of hypoplasias, and
low-status individuals have the highest prevalence of hypoplasias (Table
2.3). Thus, during the years of growth and development of the dentition,
higher-status juveniles may have enjoyed better health than lower-status
juveniles (Swärdstedt, 1966). Similarly, in the Dickson Mounds series and
in the Pete Klunk Mound group in the lower Illinois River valley,
high-status individuals have fewer hypoplasias than low-status individuals
(Cook, 1981a; Goodman & Armelagos, 1988), suggesting a possible
association between status and stress history in both settings. These status
differences are apparently not part of a universal pattern, at least with
regard to late prehistoric contexts in the Eastern Woodlands, since enamel
defect prevalences are indistinguishable between high-ranking and low-
ranking individuals in other contexts (Blakely, 1988; Powell, 1988).
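The status gradient reported for Westerhus can be summarized directly from Table 2.3 (below). The short sketch that follows is ours, not from the original study; it uses an unweighted mean of the age-class percentages, a crude summary since each age class counts equally regardless of sample size, but it makes the gradient explicit.

    # Minimal sketch: averaging Table 2.3 hypoplasia prevalences (per cent)
    # across age-at-mineralization classes for each Westerhus status group.
    table_2_3 = {
        "SI (high)": [0, 0, 4, 4, 19, 29, 28, 32, 37, 18, 21, 10, 15, 0, 0, 24],
        "SII (intermediate)": [0, 13, 13, 14, 28, 53, 47, 39, 40, 32, 25, 25, 20, 8, 5, 35],
        "SIII (low)": [25, 38, 31, 29, 53, 75, 90, 65, 76, 52, 74, 57, 61, 17, 17, 100],
    }
    for group, values in table_2_3.items():
        print(f"{group}: mean prevalence {sum(values) / len(values):.1f}%")
    # SI ~15.1%, SII ~24.8%, SIII ~53.8%: prevalence rises as status falls.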
Enamel defects are not a health risk per se, but abundant clinical
evidence indicates that enamel defects - hypoplasia and hypocalcification -
Table 2.3. Prevalence (%) of enamel hypoplasias by social groups
of high (SI), intermediate (SII), and low (SIII) status at Westerhus,
Sweden. (Adapted from Swärdstedt, 1966: Table 54.)

Age at tooth mineralization (years)     SI    SII    SIII
 0.0                                     0      0     25
 0.5                                     0     13     38
 1.0                                     4     13     31
 1.5                                     4     14     29
 2.0                                    19     28     53
 2.5                                    29     53     75
 3.0                                    28     47     90
 3.5                                    32     39     65
 4.0                                    37     40     76
 4.5                                    18     32     52
 5.0                                    21     25     74
 5.5                                    10     25     57
 6.0                                    15     20     61
 6.5                                     0      8     17
 7.0                                     0      5     17
10.0-16.0                               24     35    100
predispose teeth to cariogenesis (e.g., Baume & Meyer, 1966; Mellanby,
1934; Nikiforuk & Fraser, 1984; and see review by Duray, 1992). Called
'circular caries', the association between enamel defects and caries has been
documented in Middle Woodland and Late Woodland deciduous teeth in
the lower Illinois River valley (Cook & Buikstra, 1979). Both periods contain
high prevalences of circular caries, but there is an especially high prevalence
in the later period, when maize agriculture intensified (Cook & Buikstra,
1979). Follow-up study of microdefects from this setting indicates a close
association between circular caries and microdefects (Cook, 1990). Identifi-
cation of age-of-occurrence of stress indicators indicates that physiological
perturbations are perinatal, reflecting poor health of both the mother and
her infant around the time of birth.
Dentitions from the Libben site, Ohio, show a gradient in caries
susceptibility in comparison of different types of gross enamel defects -
teeth with gray-chalky hypocalcifications are more carious than discolored
teeth (Duray, 1990, 1992). These associations indicate that weakened
enamel structure promotes cariogenesis. In contrast to many other prehis-
toric foragers, those of the Libben series display a high caries prevalence.
Duray (1990) suggests that high caries prevalence can be attributed to the
high prevalence of hypocalcifications. In contrast to the association
between caries and hypocalcification, teeth with linear hypoplasias appear
to be caries-resistant. Duray (1990) speculates that hypermineralization of
the defect may suppress cariogenesis.
Hypocalcifications and dental caries in deciduous teeth have also been
identified in the Classic period Maya Indians from Copán, Honduras
(Storey, 1992b, 1992c), and in native populations from the Mariana Islands,
Micronesia (Hanson, 1990). In both settings, native twentieth century
populations have high prevalences of circular caries. In Mesoamerica, these
prevalences are linked with over-reliance on carbohydrates, poor nutrition,
and the synergistic relationship between diet and disease (e.g., Storey,
1992c). In the Mariana Islands, unusually high levels of circular caries may
be related to a number of local conditions, including excessive fluoride intake
by pregnant and lactating women, poor water quality, the consumption of
highly cariogenic starch diets with weaning, and specific nutrient deficiencies
(e.g., protein) in mothers and their infants (Hanson, 1990).
Sex differences in prevalence of hypoplasias and other enamel defects in
archaeological series are highly variable. For example, no differences in
hypoplasia prevalence are present between adult females and males in the
Dickson Mounds series (Goodman et al., 1980; and see discussions of other
samples by Lanphear, 1990; Martin et al., 1991, 1997; Powell, 1988; Stodder,
1995; Whittington, 1989; Wright, 1990; although see Danforth et al., 1997;
Swärdstedt, 1966, who show greater prevalence in males than in females).
Clinical and epidemiological studies indicate that enamel defects are more
common in males than in females, more common in females than males, and
in equal prevalences (Goodman et al., 1991; Goodman & Rose, 1990). Girls
have more hypoplasias than boys in Tezonteopan, Mexico, which is
consistent with other evidence indicating worse nutrition in girls than in boys
(Goodman et al., 1991). Hypoplasia prevalence does not differ between
South Asian males and females in settings where daughter neglect results in
greater malnutrition and mortality in females, at least for some regions
(Lukacs & Joshi, 1992). The differences in findings between Mexico and
South Asia suggest that differential treatment of males and females during
childhood will not necessarily be reflected in differences in enamel defect
prevalences.
2.5 Adult stress
2.5.1 Bone mass
As during the juvenile years, cortical bone continues to be a highly dynamic
tissue in adulthood. Bone apposition takes place on the endosteal and
subperiosteal surfaces well into the third or fourth decades of life, peaking
at about age 35 (Garn, 1970; Heaney, 1993; Pfeiffer & Lazenby, 1994). At
about age 40, bone commences resorption endosteally but continues to be
deposited periosteally. The imbalance of bone loss and bone gain on these
respective surfaces due to relatively greater endosteal losses results in a net
reduction of bone tissue during and following the fifth decade of life (Bales
& Anderson, 1995; Garn et al., 1967, 1992; Smith & Walker, 1964).
Adult bone loss leads to increased risk of fracture in older adulthood
owing to a complex disorder called osteoporosis (Anderson, 1995; Ander-
son & Pollitzer, 1994; Heaney, 1993; Stini, 1990, 1995). Two types of bone
loss due to osteoporosis are identified clinically, including that arising from
reduction in estrogen levels following menopause (Type I), and gradual
age-related reduction in bone mass in adult females and males (Type II)
(Drezner, 1995; Stini, 1990). Women lose relatively more bone mass than
men, due to the combined effects of Type I and Type II osteoporosis.
Estrogen is critical in bone maintenance (Drinkwater, 1994). Even in
younger women undergoing overvigorous exercise regimens and accom-
panying loss of menstruation (secondary amenorrhea), the reduction in
estrogen results in significant bone losses (Anderson, 1995; Kreiner, 1995).
The rate of bone loss in human populations is variable, and environ-
mental factors such as nutritional status are significant influences (Arnaud
& Sanchez, 1990; Martin et al., 1985; Pollitzer & Anderson, 1989;
Schaafsma et al., 1987). Clinical evidence indicates that individuals with
low calcium intakes are more prone to adult bone loss, and other dietary
factors such as high protein consumption are also implicated (Arnaud &
Sanchez, 1990; Nordin, 1984; Pfeiffer & Lazenby, 1994; Stini, 1990, 1995).
Body weight, heredity, and lactation status are also important risk factors
(Arnaud & Sanchez, 1990; Evers et al., 1985; Heaney, 1993; Kreiner, 1995;
Pollitzer & Anderson, 1989; Schaafsma et al., 1987; Stini, 1990, 1995).
Comparisons of active vs. sedentary populations or athletes vs. nonathletes
indicate the strong influence of physical activity on bone maintenance:
simply, active individuals have stronger, denser bone than sedentary
individuals (Anderson & Pollitzer, 1994; Drinkwater, 1994; Lacey et al.,
1991; Marcus et al., 1992; McMurray, 1995; Yano et al., 1984). Owing to a
decrease in physically demanding lifestyles in a number of countries (e.g.,
Sweden, United States, United Kingdom, China), there appears to be a
secular increase in osteoporotic fracture (Allander, 1995).
Adult bone mass is documented in human remains from a variety of
archaeological settings, including Sudanese Nubia, and eastern, south-
western, and high-latitude North America (see Pfeiffer & Lazenby, 1994).
Much of this research shows either a general similarity or accelerated
patterns of bone loss in archaeological samples and in living populations
(e.g., see Carlson et al., 1976; Cook, 1984; Dewey et al., 1969; Van Gerven,
1973; Van Gerven et al., 1995). Variation in relation to differing lifestyles
and subsistence strategies has been examined in some detail with the use of
alternative data collection protocols, including raw measures of bone mass
(cortical thickness [CT], cortical area [CA], bone mineral content [BMC])
or size-standardized measures (per cent cortical area [%CA or PCCA] or
per cent cortical thickness [%CT or PCCT]) (Ruff, 1992). Comparisons of
femoral cortical thickness in X-group (AD 350-550) intensive agricultural-
ists from the Wadi Halfa area of Sudanese Nubia with modern Euroameri-
cans and Native Americans reveal similar trends of initial gains in bone
mass from the third to fourth decades, followed by losses (Martin &
Armelagos, 1979; Martin et al., 1985). In Nubian females, bone mass
decreased after age 20. Martin and coworkers (1985) speculate that
premature osteoporosis was due to nutritional inadequacies associated
with an over-reliance on a single dominant crop (millet) - such as
protein-calorie malnutrition or imbalance of calcium/phosphorus ratios -
or the influence of disease. Perhaps the bone losses in childhood in this
setting (Hummert, 1983; and see above) predisposed adults, especially
females, to premature bone loss.
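The raw and size-standardized measures named above are straightforward to compute from a pair of radiogrammetric measurements. The sketch below is ours and assumes an idealized circular cross section, a common simplification rather than the protocol of any one study cited here; function and variable names are hypothetical.

    import math

    # Minimal sketch: cortical thickness (CT), cortical area (CA), and per
    # cent cortical area (%CA) from total subperiosteal and medullary
    # diameters, assuming a circular cross section.
    def cortical_indices(total_diam_mm, medullary_diam_mm):
        ct = (total_diam_mm - medullary_diam_mm) / 2.0
        total_area = math.pi * (total_diam_mm / 2.0) ** 2
        ca = total_area - math.pi * (medullary_diam_mm / 2.0) ** 2
        pcca = 100.0 * ca / total_area
        return ct, ca, pcca

    # Example: a bone 9 mm wide with a 4 mm medullary cavity.
    ct, ca, pcca = cortical_indices(9.0, 4.0)
    print(f"CT = {ct:.1f} mm, CA = {ca:.1f} mm^2, %CA = {pcca:.1f}")  # %CA = 80.2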
Similarly, bone mass (%CA from second metacarpals) in late prehistoric
maize agriculturalists from southern Ontario is below what has been
documented in living populations (Pfeiffer & King, 1983; cf. Garn, 1970).
Although various factors may be involved, the reliance on maize and atten-
dant protein-calorie malnutrition may have contributed to low bone mass.
With the assumption that cortical thickness and Nordin's Index (cortical
thickness/total subperiosteal area) are useful measures of bone mass and
nutritional quality, Owsley (1991) compared femoral bone mass in a
temporal series of Great Plains Arikara dating from ca. AD 1600 to 1832.
These comparisons reveal an increase in bone mass in the transition from
the late prehistoric to early protohistoric period in the late seventeenth
century, which Owsley regards as 'a positive change in nutritional status',
perhaps related to increased availability of protein acquired through
hunting and trade (1991:109). By 1800, bone mass had declined dramati-
cally. Owsley suggests that these declines are due to the stresses associated
with biological and social disruptions of disease, warfare, and other
negative environmental circumstances in the early nineteenth century.
In contrast to the Arikara, no temporal change in bone mass (Nordin's
Index) could be detected in a sequence of human remains from the lower
Illinois River valley dating from the Archaic to Mississippian periods
(Cook, 1984). This suggests that the profound change in diet involving the
shift to maize agriculture in later prehistory had no bearing on bone
maintenance. Some diseased individuals in the Mississippian period
expressed relatively low bone mass. Individuals with skeletal tuberculosis
had remarkably low bone mass, suggesting that nondietary factors play an
important role in bone maintenance in this region.
Most bioarchaeological research emphasizes the direct role of diet and
nutrition in explaining variation in bone maintenance. This perspective
ignores the important influence of mechanical loading and activity on bone
mass and how it is distributed (see also Chapter 6). Continued subperios-
teal expansion in adults compensates for medullary expansion and endos-
teal bone loss in order to maintain the mechanical integrity of the bone
cross section under loading regimens. The raw measures of bone mass
frequently used in studies of archaeological remains may present an
incomplete picture of bone remodeling and health status (see Ruff &
Larsen, 1990; Ruff, 1992). Therefore, low bone mass does not necessarily
indicate inadequate bone mass (Pfeiffer & Lazenby, 1994).
2.5.2 Histomorphometry
Primary among microscopic structures in remodeled human bone are
multicellular features called secondary osteons (primary osteons are found
in primary, unremodeled bone; see Martin & Burr, 1989). Secondary
osteons are created in a two-stage remodeling process: first, osteoclasts
destroy existing bone tissue, resulting in minute tunnels or resorption
spaces; and second, unmineralized bone (osteoid) is deposited and subse-
quently mineralized in a series of incremental layers on the surfaces of these
tunnels. The fully formed osteon consists of a series of bone layers
organized around vascular canals called Haversian canals (Martin & Burr,
1989; McLean & Urist, 1968) (Figure 2.10). In normal, healthy individuals,
remodeling is a uniform process that begins in early childhood and
continues throughout life. Remodeling rates are influenced to a large
degree by a variety of diseases and nutritional disorders (Frost, 1966).
These rates can be quantified in recent and archaeological remains, thus
representing an indicator of stress history (e.g., Martin & Armelagos, 1985;
Simmons, 1985; Stout & Simmons, 1979).
Comparisons of prehistoric populations from the lower Illinois River
valley (Gibson, Ray, and Ledders sites), Florida (Windover site), and Peru
(Paloma site) show that maize agriculturalists (Ledders) have greater
remodeling rates than hunter-gatherers (Stout, 1978, 1983, 1989; Stout &
Lueck, 1995; Stout & Teitelbaum, 1976). The greater remodeling rates in
the maize agriculturalists may reflect the effects of overproduction of
parathyroid hormone resulting from the low calcium/high phosphorus
Figure 2.10. Cross section of compact bone showing major microstructures
discussed in text. The structures (Haversian systems or osteons) house
the Haversian canals. Each lacuna contains cells that maintain the bone tissue
and are arranged in layers (lamellae). (From Larsen, 1987; illustration by
Dennis O'Brien; reproduced with permission of Academic Press, Inc.)
ratios characteristic of maize-based diets (Stout, 1983), or perhaps a
variation in skeletal tissue maturation rates in different populations (Stout
& Lueck, 1995).
The rate of mineralization of osteoid (unmineralized bone matrix) in the
Haversian canal can be influenced by stress. Under conditions of normal
bone development, osteons mineralize uniformly, but under conditions
involving slower growth - such as with nutritional stress or disease -
delayed osteoid mineralization results in the creation of hypermineraliz-
ation zones. Viewed in cross section, one or more hypermineralized zones
appear as rings of increased density comparable to lines of growth
disruption found at the ends of long bones (Bartsiokas & Day, 1993;
Martin & Armelagos, 1985; Simmons, 1985; Stout & Simmons, 1979). The
presence of these 'double-zone' osteons indicates the presence of physio-
logical stress and growth disruption. The frequency of double-zone osteons
per unit area of bone is positively correlated with the total amount (CA) of
bone in the Nubian skeletons (Martin & Armelagos, 1985). This may
indicate that some critical threshold of metabolic activity is essential for
recovery from growth arrest (Martin & Armelagos, 1985).
Histomorphometric study of three populations with varying subsistence
strategies - primarily meat (Alaskan Eskimo), mixed foraging and farming
(Arikara), and intensive maize agriculture (Pueblo) - shows a high degree
of variability in osteon structure. Osteons in a number of individuals in
these samples contain second and smaller remodeling sequences (Eriksen,
1980; Richman et al., 1979). Called Type II osteons, these structures
represent sites of accelerated availability of calcium. Alaskan Eskimos have
the highest frequency of Type II osteons, perhaps reflecting heavy reliance
on meat in comparison with more plant-oriented Arikara and Pueblo
Amerindians.
Study of histomorphometrics in adult Nubian remains provides import-
ant complementary information to the aforementioned analyses of bone
mass. Nubian adult femora possess abnormally large, active resorption
spaces, which may result from nutritional stress due to agricultural
dependence (Martin & Armelagos, 1979, 1985). The overall quality of bone
is reduced in these groups, owing to the inadequately mineralized, porous
cortex. These findings underscore the general observation that increased
porosity is due at least in part to histological changes, including enlarged
resorption spaces and increased size of osteons, as well as increased
accumulation of osteons (see also Martin & Burr, 1989).
2.6 Summary and conclusions
The sensitivity of the human skeleton to impoverished environments,
especially during the years of growth and development, is revealed by the
study of a range of stress indicators, including growth rates, attained
stature, pelvic inlet shape, vertebral neural canal size and shape, tooth size,
tooth size asymmetry, and various skeletal and dental pathological
conditions. Once adulthood is reached, fewer changes arising from
physiological stress are exhibited in the hard tissues. Bone loss and growth
arrest in developing osteons as well as abnormal remodeling rates are
highly informative about stress history. Most skeletal and dental stress
indicators reflect episodes of physiological perturbation during childhood;
nevertheless, their measurement and interpretation serve as an indication
of the stress experience for the population generally. Studies of stress based
on archaeological bones and teeth reveal a number of consistent patterns.
Under circumstances conducive to increased stress - such as poor nutri-
tion, population aggregation, and increased infectious disease - skeletons
and dentitions exhibit stress indicators in elevated prevalences.
In a variety of circumstances indicating disadvantage, juveniles exhibit
growth reduction. This reduction is commonly tied to chronic nutritional
deficiencies often resulting from the synergy between poor nutrition and
infection. If these conditions are maintained throughout the years of
growth and development, then the affected individuals are likely to be
short-statured as adults. Overall, the early environment has long-lasting
effects on health status over the lifespan. Juveniles experiencing elevated
stress can have poor health and shortened lifespans as adults (and see
Henry & Ulijaszek, 1996).
Do small-bodied humans have an adaptive advantage over large-bodied
humans living under circumstances of reduced resource availability or
inadequate nutrition? Seckler (1980, 1982) proposed that shortness in body
height in developing nations is an adaptation to reduced food supplies.
This reduction, he argues, results in individuals who are small but healthy.
If reduced body size is adaptive, then reduced height should have no
associated functional costs. In fact, small body size is linked with various
negative factors, including increased disease and poor nutrition. Poor
growth status is associated with a range of functional costs and conse-
quences, including decreased activity and poorer learning (Crooks, 1995;
Dasgupta, 1993; Goodman, 1991, 1994; Stinson, 1992). Although smaller
body size appears to enable individuals to perform some activities with
lower energy requirements, the efficiency is reduced (Stinson, 1992).
Clearly, there are negative consequences of small body size in disadvan-
taged settings, indicating that this reduction is maladaptive.
Some also suggest that there may be an adaptive advantage for iron
deficiency, especially in light of the apparent link between iron withholding
and microbial invasion (Kent et al., 1990, 1994; Stuart-Macadam, 1992a,
1992b). Clinicians observe that decreases in serum iron reduce the
availability of iron for microbial growth, thus inducing a 'nutritional
immunity' (Weinberg, 1974, 1992; see discussion and review by Keusch &
Farthing, 1986). Weinberg (1992) notes that animal and human studies
show that hosts not withholding iron are at increased risk for infection
(bacterial, fungal, protozoan), and conversely, risk of infection decreases
with increased iron withholding (although see Berger et al., 1992; Keusch &
Farthing, 1986). Thus, iron deficiency anemia - at least in the mild to
moderate form - may be an adaptive response to chronic pathogen loads
(Kent et al., 1990, 1994; Stuart-Macadam, 1992a, 1992b).
Goodman (1994) evaluated the pathogen load model, and suggests that
instead of being an 'adaptation' to stress, iron deficiency anemia should be
considered an 'adjustment' to stress. Like reduced growth, iron deficiency
has a series of direct functional costs for a population. For example, even
when an individual is only slightly deficient in iron, a number of key
enzymes involved in vital functions (e.g., DNA synthesis) are affected. At
the organism level, iron deficiency anemia has profound negative effects on
work capacity, cognition, and the maintenance of a healthy immune system
(Goodman, 1994). Additionally, dietary iron has a very important influ-
ence on iron status. Low iron intake is the leading cause of iron deficiency
anemia in the United States (children and adult women) and Argentina
(children) (see Goodman, 1994). Thus, in archaeological settings, although
porotic hyperostosis may be an indication of the body's attempt to adjust
to increased pathogen loads, it nevertheless reflects an increased health
burden.
In the study of past societies, it is possible to test the 'adaptive' vs. 'stress'
models for the growth and nutrition stress indicators discussed in this
chapter. Significantly, various indicators are linked with decreased survival
as determined by mean age-at-death of individuals with and without (or
with relative differences in prevalence of) the indicator. Enamel defects
(macro- and microdefects), vertebral neural canal size, and tooth size show
clear links with lifespan. Where age-at-death and stress indicators are
examined concurrently, individuals without enamel defects, with larger
neural canal size, and with larger tooth size appear to have died later in life.
These findings suggest that skeletal stress indicators are related to quality
of life and do not represent adaptations.
3 Exposure to infectious pathogens
3.1 Introduction
For the entire history of humankind, populations were exposed to
numerous infectious agents - bacteria and viruses - resulting in a range of
disease states. Anthropologists, paleopathologists, and others have
documented and described the dental and skeletal evidence for sorne of
these diseases (see Buikstra & Cook, 1980; Ortner & Putschar, 1985;
Steinbock, 1976). Although largely confined to descriptive reports and case
diagnoses, newer studies emphasize biocultural perspectives of disease in
relation to social, cultural, and environmental circumstances (Buikstra &
Cook, 1980; Larsen, 1987; Ubelaker, 1982).
Infection by a pathogen does not always result in disease. The pro-
gression from infection to disease depends on agent pathogenicity, trans-
mission route from agent to host, and the strength and nature of the
response of the host (see lnhorn & Brown, 1990; Smith & Moss, 1994).
Many acute infectious diseases or epidemics result in death of the infected
individual soon after microbial attack. These infectious diseases leave no
skeletal evidence, clouding the full picture of disease and its relationship to
mortality in past populations. Alternatively, several chronic infectious
diseases affect osseous tissues in patterned ways. Despite the interpretive
drawbacks, the study of bone lesions documenting disease provides an
important perspective on health in earlier societies.
The frequency of members of a population affected by a disease forms
the baseline of information from which to interpret health status and
factors that influence it. Various means of data presentation are used, but
incidence and prevalence are most commonly reported in clinical and
epidemiological studies (see Keyserling, 1988; Waldron, 1994). Incidence is
defined as the number of new cases in a population in a given time period.
Because the number of new cases can never be identified in archaeological
settings with certainty, it is not possible to report on incidence. Prevalence
can be reported in archaeological contexts, because it represents the
proportion of the population affected by the disease at a single point in time
or within a time period.
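The distinction is easy to operationalize. A minimal sketch, with arbitrary example counts of our own: prevalence is simply the proportion of observable individuals (or teeth or elements) exhibiting a condition, whereas incidence would require counts of new cases per unit of time, which skeletal samples cannot supply.

    # Minimal sketch: prevalence as reportable from an archaeological sample.
    def prevalence_percent(affected, observed):
        if observed == 0:
            raise ValueError("no observable individuals")
        return 100.0 * affected / observed

    # Arbitrary example counts: 31 affected of 112 observable individuals.
    print(f"{prevalence_percent(31, 112):.1f}% affected")  # 27.7% affected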
This chapter focusses on disease prevalence as expressed in teeth and
bones. Ortner & Putschar (1985) provide details on a wide variety of
infectious diseases identified in archaeological remains from around the
world. This discussion of infection is centered on dental caries and
periodontal disease/antemortem tooth loss, nonspecific skeletal infection,
and several specific infectious diseases - treponematosis, tuberculosis, and
leprosy. These conditions are among the most frequently studied, and they
have distinctive symptoms. Thus, they provide an ample record for
interpreting disease variation in the past.
3.2 Dental caries
3.2.1 Description and etiology
Contrary to that which is often presented by anthropologists, the term
'dental caries' does not refer to lesions in teeth resulting from the invasion
of microorganisms. Rather, dental caries is a disease process characterized
by the focal demineralization of dental hard tissues by organic acids
produced by bacteria! fermentation of dietary carbohydrates, especially
sugars. Dental caries is manifested in various states, ranging from slight
enamel opacities to extensive cavitation involving partial or complete loss
of tooth crowns and roots (Figure 3.1). Because these lesions are readily
observable in both archaeological series and living populations, there is an
abundance of published data on prevalence for a variety of temporal and
geographic settings.
The etiology of dental caries is incompletely understood, but several
essential and modifying factors are involved. The former include: (1) the
exposure of tooth surfaces to the oral environment; (2) the presence of
aggregates of complex indigenous oral bacterial flora (e.g., Streptococcus
mutans, Lactobacillus acidophilus), salivary glycoproteins, and inorganic
salts adhering to the tooth surfaces (called dental plaque); and (3) diet
(Rowe, 1982). Modifying factors are those that influence the site distribu-
tion and rate of carious lesion development; these include, but are not
limited to, crown size and morphology, enamel defects, occlusal surface
attrition, food texture, oral and plaque pH, speed of food consumption,
sorne systemic diseases, age, child abuse, heredity, salivary composition
and flow, nutrition, periodontal disease, enamel elemental composition,
and presence of fluoride and other geochemical factors (Bowen, 1994; Burt
& Ismail, 1986; Geddes, 1994; Greene et al., 1994; Hildebolt et al., 1988,
1989; Hunt et al., 1992; Leverett, 1982; Maat & Van der Velde, 1987;
Figure 3.1. Carious lesions on mandibular dentition; King site, Georgia.
(Photograph by Mark C. Griffin.)
Meiklejohn et al., 1992; Milner, 1984; Molnar & Hildebolt, 1987; Molnar &
Molnar, 1985b; Newbrun, 1982; Powell, 1985; Rowe, 1982; Woodward &
Walker, 1994).
Intrinsic characteristics of food and the consistency and manner in which
it is prepared strongly influence cariogenesis in human populations. For
example, in the American Southeast, more than three times as many
late prehistoric Caddoan farmers have carious teeth as the Fourche-
Maline foragers that predate them (Powell, 1985). The higher prevalence in
the later population is due to two factors: (1) consumption of maize, a
highly cariogenic food (see also below); and (2) reduced occlusal surface
wear, which increases the probability that cariogenic bacterial colonies will
aggregate in caries-prone areas, such as in grooves between cusps of
premolars and molars (Powell, 1985). Maat & Van der Velde (1987) also
found a negative correlation between frequency of occlusal surface caries
and degree of dental wear in molars from sailors recovered from a
seventeenth and eighteenth century Dutch whaling station in the Spitzber-
gen Archipelago (Svalbard). In this series, increased wear appears to be
associated with fewer carious lesions. They concluded that '(t)hese findings
strongly suggest a competitive relationship between progress in caries and
attrition' (Maat & Van der Velde, 1987:281). The studies linking reduced
wear to increased caries prevalence are convincing, especially because
common sites of plaque and cariogenesis are the grooves and fissures of
unworn crowns (and see Christopherson & Pedersen, 1939; Corbett &
Moore, 1976; Milner, 1984).
The relationship between low caries rates and high-wear dental environ-
ments should not be overgeneralized. In their study of the Mesolithic
dentitions from Cabeço da Arruda and Moita do Sebastião, Portugal,
Meiklejohn and coworkers (1992) found a positive correlation between
caries and wear in molars; the most heavily worn crowns are the most
carious. In this setting, the Mesolithic Portuguese individuals consumed
figs and carob, foods high in sugar content that also produce high rates of
wear. Occlusal surface wear is also excessive in Archaic-period foragers
from the lower Pecos region of southwestern Texas, resulting in pulp cavity
exposure and tooth loss (Hartnady & Rose, 1991; Sobolik, 1994b). Caries
prevalence is high (14% of teeth), and is indistinguishable from many
agricultural groups (see also Bement, 1994; Marks et al., 1988).
Coprolite analysis reveals that various highly abrasive materials were
included in foods consumed, including phytoliths, seeds and small bones,
and calcium-oxalate crystals from succulents and cacti (Hartnady & Rose,
1991; Sobolik, 1994b). Historic accounts also document the introduction
of abrasives to food, including ash for baking of sotol cactus and dirt to
'sweeten' meals. High-carbohydrate foods such as succulent fibers, prickly
pear fruits, pecans, and mesquite resulted in active cariogenesis. Thus, like
the Portuguese Mesolithic foragers, Pecos region Indians show a positive
relationship between tooth wear and caries.
3.2.2 Temporal trends: foragers, farmers, and industrialized
populations
The study of temporal trends of dental caries in archaeological samples has
a long history. Mummery (1870) was among the first to systematically
document these frequencies in past populations, observing an increase in
British populations. He related the change in caries prevalence to cognitive
development of children in comparing earlier simple and later complex
societies: 'May we not therefore reasonably suppose that through the
diminished vitality consequent upon this diversion of the formative energy
from the teeth, by premature mental exertion, these organs necessarily
become degenerated; and that this circumstance constitutes one great
difference between the teeth of the intellectual and those of the uncultivated
families of man' (1870:73).
It has become abundantly clear since Mummery completed his ambi-
tious study that diet and subsistence technology are far more important for
understanding caries development over time. For many areas of the globe,
there is a trend toward an increase in caries prevalence with a shift in focus
to carbohydrates, both in living human groups changing to Western foodways
(Burt, 1993; Mayhall, 1970; Moorrees, 1957; Oranje et al., 1935-37;
Pedersen, 1947; Price, 1936; Russell et al., 1961; P. L. Walker et al.,
unpublished manuscript; and others) and in past human groups.
Numerous workers have detailed increases in carious lesion frequencies
in prehistoric agriculturalists compared to hunter-gatherers that preceded
them (reviewed by Larsen, 1995). From a sample of populations drawn
globally, Turner (1979) determined average frequencies of adult teeth
affected (incisors, canines, premolars, molars combined). His investigation
revealed a steadily increasing gradient for groups practicing foraging to an
agricultural way of life: foraging, 1.7%; mixed foraging/agriculture, 4.4%;
agriculture, 8.6% (percentages calculated from Turner, 1979:Table 3).
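The arithmetic behind such a gradient is a pooled proportion: carious teeth are summed within each subsistence category and divided by the number of teeth observed. The sketch below (Python; the sample counts are invented for illustration and are not Turner's data) makes the aggregation explicit:

    from collections import defaultdict

    # (subsistence category, carious teeth, observable teeth) per sample;
    # all counts here are hypothetical.
    samples = [
        ("foraging", 12, 700),
        ("foraging", 9, 550),
        ("mixed", 30, 680),
        ("agriculture", 52, 610),
        ("agriculture", 47, 540),
    ]

    carious, observed = defaultdict(int), defaultdict(int)
    for category, affected, total in samples:
        carious[category] += affected
        observed[category] += total

    for category in ("foraging", "mixed", "agriculture"):
        pct = 100 * carious[category] / observed[category]
        print(f"{category}: {pct:.1f}% of teeth carious")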
Dental caries prevalences are reported for numerous archaeological
skeletal series derived from the Eastern Woodlands of North America.
Figure 3.2 shows prevalence values for a sample of 75 archaeological
populations. The dichotomy between foragers and maize farmers is
straightforward, but with some overlap in values. The three forager periods
(Archaic to Middle Woodland) have less than 7% carious teeth, and the
three agriculturalist periods (Late Woodland to Contact) have more than
7% carious teeth. Within the agriculturalist periods, there is a high degree
of variability in caries prevalence. This pattern mirrors observations of
living populations with broadly similar diets (e.g., P. L. Walker et al.,
unpublished manuscript). Relatively small differences in diet and food
processing technology can result in large differences in caries prevalence.
Overall, there is a clear tendency for prehistoric and early contact-era maize
farmers to have higher caries prevalence than prehistoric foragers (and see
Larsen et al., 1991; Milner, 1984). The late mission period series from
Amelia Island, Florida, shows an unusually high caries prevalence (Larsen
et al., 1991). This is not unexpected, given that missionization was
accompanied by an intensification in production and consumption of
Figure 3.2. Percentage of teeth affected by dental caries in eastern North
America. (Based on data from Milner, 1984; Larsen et al., 1991.)
maize during the sixteenth and seventeenth centuries in Spanish Florida
(e.g., Hann, 1988; Jones, 1978; Larsen, 1990a) and elsewhere (e.g.,
Ecuador: see Ubelaker, 1994; although see White, 1994; White et al., 1994).
The reasons for caries increase in Eastern Woodlands agriculturalists are
varied and complex, but the chief factor is related to the presence of sucrose
in maize (Hardinge et al., 1965). Because sucrose is a simple sugar, it is
readily metabolized by oral bacteria. Declining tooth wear in many settings
was probably an important factor (see Chapter 7 for a discussion of tooth
wear change), but the common occurrence of maize consumption largely
explains these changes over this expansive area of North America.
Therefore, the recognized positive association between maize consumption
and dental caries in eastern North America is useful for tracking temporal
changes in dental health in specific settings where social, cultural, and
dietary shifts are documented (e.g., Cassidy, 1984; Hoyme & Bass, 1962;
Larsen et al., 1991; Milner, 1984; Patterson, 1984; Perzigian et al., 1984;
Smith, 1982; Sullivan, 1990; and others).
For many native populations of the Eastern Woodlands, maize agricul-
ture was the first significant experience with plant domestication. In several
regions, it was not a new experience. In the middle and lower Mississippi
River valley, the Ohio River valley, and the mid-South generally, the first
agricultural development involved use of starchy seed-bearing plants (e.g.,
chenopod, sunflower, and cucurbit), coinciding with social and economic
transformations during the Middle Woodland period from ca. 250 BC to AD
200 (Fritz, 1990; Smith, 1992). For this time period, Rose and coworkers
(1991) identified an increase in carious lesion frequency that is probably
linked with this reorientation of diet well before the adoption of maize in
later prehistory (ca. AD 900).
No comparisons have been made with regard to relative severity or
prevalence differences that might be present between consumers of differ-
ent domesticated plants. No studies have compared caries prevalence
values for populations dependent on maize in the New World vs. wheat in
the Old World. Thus, there is no understanding of the relative differences in
the impact of agriculture on populations consuming different types of plant
domesticates. Lubell and coworkers (1994) remark that European Neo-
lithic populations have generally lower caries prevalence values than
prehistoric maize farmers in North America. They suggest that Neolithic
domesticates (e.g., wheat) were either less cariogenic or of less dietary
importance than maize in New World populations.
Studies of pre- and post-agricultura! populations in a wide range of
settings, including East and South Asia (Fujita, 1995; Inoue et al., 1986;
Lukacs, 1990, 1992; Lukacs & Minderman, 1992; Lukacs et al., 1989), the
Middle East (Littleton & Frohlich, 1993; Smith, Bar-Yosef et al., 1984),
Europe (Bennike, 1985; Brabant, 1967; Brinch & Moller-Christensen,
1949; Brothwell, 1959; Corbett & Moore, 1976; Hardwick, 1960; Meik-
lejohn et al., 1984; Moore & Corbett, 1971, 1973, 1975; O'Sullivan et al.,
1993; Tóth, 1970; Wells, 1975; Whittaker, 1993), northeast Africa (Ar-
melagos, 1969; Elliot Smith & Wood Jones, 1910; Rose et al., 1993),
Ecuador (Ubelaker, 1984, 1994), and elsewhere reveal similar trends to
those in North America (e.g., Larsen et al., 1991), irrespective of the type of
cultigens consumed.
Measurement of carious lesion size and location on tooth crowns
provides an important means of assessing the severity of the disease process
(see Hillson, 1996). Comparisons of prehistoric foragers with later farmers
in the Tennessee River valley and on the Georgia coast indicate that,
through time, lesions increased in size and affected more areas of tooth
crowns (Larsen, 1982; Smith, 1982). Carious lesions in prehistoric foragers
are mostly restricted to the cervical region of the posterior dentition, which
may be due to food impaction between teeth (Smith, 1982). In contrast, late
prehistoric maize farmers display lesions on multiple locations of tooth
crowns.
Elevated prevalences or severity of dental caries are not limited to groups
relying on domesticated plants. Prehistoric foragers from the lower Pecos
River valley and central Texas have values that are well within the ranges
reported for agriculturalists (Bement, 1994; Hartnady & Rose, 1991;
Marks et al., 1988; Sobolik, 1994b). In these settings, the consumption of
sticky, high-carbohydrate nondomesticated plants resulted in extensive
caries.
High caries prevalence in Mesolithic foragers from Sicily and Portugal
(Borgognini Tarli et al., 1985, 1993; Frayer, 1988a; Lubell et al., 1994;
Meiklejohn et al., 1988) contrasts sharply with other Mesolithic-period
Europeans (e.g., Scandinavia: Alexandersen, 1988; Meiklejohn et al., 1988,
1997; Meiklejohn & Zvelebil, 1991), South Asians (North India: Lukacs &
Pal, 1993), and Africans (e.g., Nubia: Rose et al., 1993). The high
prevalence of dental caries in Sicily and Portugal has been linked with the
consumption of cariogenic nonagricultural foods (e.g., honey) or sweet,
sticky fruits (e.g., dates or figs) (e.g., Lubell et al., 1994). Comparison of
foragers (Mesolithic) and later farmers (Neolithic) from Portugal revealed
no apparent increase in caries prevalence with the adoption of agriculture
(Lubell et al., 1994; Meiklejohn et al., 1988).
Analyses of other hunter-gatherers point to the importance of additional
factors that might explain unusually high caries prevalence. Lukacs (1990)
documented relatively high caries prevalence (8.0%) in a South Asian
Mesolithic population (Langhnaj site) in comparison with other contem-
porary series from the region (most are around or well under 1%).
Archaeological evidence indicates close trade relationships between Lan-
ghnaj and agricultural groups, suggesting that cariogenic foods may have
been acquired through exchanges (Lukacs, 1990). Similarly, in some living
foraging groups in central Africa, although they are called 'hunter-
gatherers', significant consumption of cariogenic plant foods acquired by
trade from agricultural villagers results in appreciable prevalences of caries
(Walker & Hewlett, 1990; and see below).
Changes in diet and subsistence technology had far-reaching implica-
tions for oral health in some prehistoric foragers. Walker and Erlandson
(1986) examined the link between dental caries and dietary change on Santa
Rosa Island, in the Santa Barbara Channel Island region of southern
California. Archaeological evidence indicates that exclusively foraging
groups occupied the island from ca. 4000 to 400 BP. For the first 1500 years,
populations exploited predominantly terrestrial foods, primarily starchy
roots and tubers. For the remainder of the prehistoric period, diet became
increasingly focussed on marine resources (and see Chapter 2). A decrease
in adult caries prevalence from 13.3% to 6.3% coincides with this subsis-
tence change, which appears to be linked to the reduction in use of plant
carbohydrates in later prehistory (Walker & Erlandson, 1986).
Other foraging groups show the opposite trend in caries prevalence. In
Archaic period hunter-gatherers living in the Edwards Plateau of central
Texas, there is an increase in cariogenesis (Bement, 1994). This increase
may reflect a reorientation of diet from a generalized foraging pattern
involving a range of wild plants and animals to a diet focussed more on
high-carbohydrate, cariogenic wild plants (Bement, 1994).
3.2.3 Sex differences in caries prevalences
Comparisons of a wide range of archaeological populations from different
times and settings reveal a common pattern of greater caries prevalence in
females than in males. In eastern North America, regional studies point to
this widespread difference, particularly in late prehistoric and historic-era
maize agriculturalists (e.g., Behrend, 1978; Blakely, 1995; Hrdlicka, 1916;
Kestle, 1988; Larsen, 1983b; Larsen et al., 1991; Milner, 1984; Newman &
Snow, 1942; Patterson, 1984; Seidel, 1995). In the Georgia Bight, adult
females have significantly more carious teeth than adult males both in late
prehistory (15.2% vs. 10.9%) and during the late mission period (41.9% vs.
36.3%) (Larsen et al., 1991). Populations show similar differences from
other regions of North America (e.g., Akins, 1995; Burns, 1979; Danforth
et al., 1997; Dickel et al., 1984; Hooton, 1930; Schmucker, 1985; Whittin-
gton, 1989) and elsewhere (e.g., Bennike, 1985; Formicola, 1986-1987;
Frayer, 1984, 1988a; Henneberg, 1996; Hillson, 1979; Lukacs, 1992, 1994,
1996; Lukacs et al., 1989; Morris, 1992; Swärdstedt, 1966; Walker &
Hewlett, 1990).
Differences in caries prevalence between males and females suggest that
food consumption may have been different between sexes, with males
consuming more meat than females and females consuming more plant
carbohydrates than males. This conclusion is supported by differences
observed in female and male subsistence behavior in historic and recent
agriculturalists and foragers. Southeastern North American Indian women
were responsible for most plant gathering and agricultural activities such as
planting, harvesting, and food preparation. Men were responsible for
hunting as their primary subsistence task (Hudson, 1976; Swanton, 1942,
1946; Van Doren, 1928). Among foragers living at Ngarulurutja, Australia,
Hayden observed 'that in spite of rules about sharing, the persons who did
the most hunting ate the most meat. It is clear that the young men who
actually caught the game consumed most of it' (1979:166; and see other
accounts of sex differences in diets of foragers by Hayden, 1979; Hewlett et
al., 1982; Lee, 1968; Meehan, 1977; Walker & Hewlett, 1990; Woodburn,
1968).
For the most part, sex differences in caries prevalence in archaeological
samples are identified in populations practicing agriculture to some degree.
However, female and male foragers from the Santa Barbara Channel
Island region display differences in caries prevalence for much of prehistory
(Lambert & Walker, 1991; Walker, 1986b; Walker & Erlandson, 1986).
Ethnographic observations indicate a distinctive sexual division of labor
whereby men hunted and fished and women collected plants. Historic
accounts indicate that men ate more of the game they hunted than did
women. Thus, it appears that greater consumption of plants by women - a
factor related to their subsistence responsibilities - is reflected in their
greater caries prevalence.
Walker and Hewlett (1990) have investigated dental caries in several
groups of central African pygmy foragers (Aka, Mbuti, Efe) and Bantu
farmers in order to build support for a behavioral interpretation of
prevalence variation. Comparisons of subsistence patterns and food
consumption practices in these groups revea! clear intra- and interpopula-
tion differences in caries prevalences that provide an importan! perspective
on dietary behavior and cariogenesis. Aka and Mbuti foragers practice
net-hunting, and the Efe foragers bow-hunting. The sources of animal
protein available to these groups vary depending on season (e.g., meat from
seasonally available animals), and are mostly obtained through hunting and
collecting. Manioc and other cultigens acquired through trade with Bantu
agriculturalists form a significant component of their diets. The relative
consumption of meat to cultigens is highly variable, ranging from a high
proportion for the Mbuti (at least part of the year) to a much lower
proportion for the Efe. The Aka apparently consume similar amounts of
meat to Mbuti foragers, since both are net-hunters. On the basis of
comparisons of amount of time spent hunting, it appears that Mbuti and
Aka foragers consume more meat than do Efe foragers. In addition to meat
and cultigens, honey is an important part of the diet for part of the year.
For Mbuti foragers, this highly cariogenic food contributes nearly 80% of
calories for a one-month period (Ichikawa, 1981).
Virtually all foods consumed by Bantu villagers are plant domesticates,
including manioc, maize, rice, peanuts, and plantains. The little amount of
meat that is consumed - approximately 2.5% of foods - is acquired from
pygmy foragers.
Field observations indicate differences in food consumption between
females and males in the foragers, but not in the farmers. Diets of forager
women contain more plants than do those of forager men. Aka men
consume more meat than do women, and it is acquired on the hunt prior to
their return to the home village; some of the choicest cuts of meat are shared
among the men who participated in the hunt.

Table 3.1. Sex differences in central African dental caries prevalences:
Aka, Mbuti, Efe, Bantu. (Adapted from Walker & Hewlett, 1990: Table 2.)

Tribe and sex      Teeth (n)*     % carious
Aka
  Total            3099           5.2
  Males            1706           4.2
  Females          1393           6.6
Mbuti
  Total            1773           6.0
  Males            1048           5.1
  Females           753           7.3
Efe
  Total             277           6.0
  Males             277           6.0
  Females             -             -
Bantu
  Total             630           8.1
  Males             308           9.1
  Females           322           7.1

*Total number of teeth observed.
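Sex differences of the size reported in Table 3.1 are commonly screened with a contingency-table test. The following sketch (Python with scipy, which is not part of the original study; the carious counts are back-calculated from the published Aka percentages and are therefore approximate) illustrates one such check:

    from scipy.stats import chi2_contingency

    # Aka teeth from Table 3.1: 1706 male, 1393 female. Carious counts are
    # back-calculated from 4.2% and 6.6%, so they carry rounding error.
    males_carious = round(0.042 * 1706)      # ~72
    females_carious = round(0.066 * 1393)    # ~92
    table = [
        [males_carious, 1706 - males_carious],
        [females_carious, 1393 - females_carious],
    ]
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.4f}")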
Bantu farmers have the highest dental caries prevalences in comparison
with foragers (Table 3.1), which appears to reflect their greater consump-
tion of plant carbohydrates. Although foragers have relatively lower caries
prevalence, the values are nevertheless appreciable (5-6%; Table 3.1),
which points to the significant component of plant carbohydrates acquired
from Bantu farmers via trading relationships. Caries prevalence is also
higher for pygmy women than for men. These differences are related to
food consumption variation between male and female pygmy foragers. The
frequency of between-meal eating in pygmy women is an additional factor
that provides partial explanation for sex differences. In these foragers, men
concentrate their eating in several large meals, and women snack frequently
during the day. Clinical evidence from Western populations indicates that
snacking between meals (especially carbohydrates) results in elevated
caries rates (e.g., Burt et al., 1982, 1988; Gustafsson et al., 1954; Konig,
1970; Konig et al., 1969; Nizel, 1973; Rowe, 1982; Weiss & Trithart, 1960;
although see Rugg-Gunn et al., 1984).
Ethnographic documentation of dietary practices in native groups in
South America provides additional insight into differences between fe-
males and males in dental health (P. L. Walker et al., unpublished
manuscript). In the three groups studied - the Yanomamo of Venezuela,
the Yora of southeastern Peru, and the Shiwiar (Achuar) of Ecuador -
meat and fish provide a significant part of the diet, and most carbohydrates
are from plant crops, such as manioc and bananas. All three groups have
significant caries rates, in part due to the consumption of cultigens, but also
due to the reduction in their isolation and greater access to processed foods.
Yora and Shiwiar women spend many hours processing manioc in their
mouths for production of beer (chicha). For these two groups, women
display a somewhat higher frequency of carious teeth than men. Their
relatively greater exposure to cariogenic manioc explains higher caries
rates.
Given the finding that females are frequently more carious than males, it
seems possible that there may be some underlying, nonbehavioral reasons
for these differences. Permanent teeth erupt slightly earlier in females than
in males, exposing their teeth at an earlier age to caries-promoting factors
(e.g., Carlos & Gittlesohn, 1965; DePaola et al., 1982; Dunbar, 1969;
Walker, 1986b). However, tooth eruption differences between males and
females show either weak or no correlation with dental caries prevalence
(e.g., Moorrees, 1957; Toverud et al., 1952; Ziskin, 1926). Alternatively,
there is a long-held popular notion that pregnancy results in poor dental
health, including tooth loss and caries. There is evidence for increasing
gingival inflammation in some pregnant women (e.g., Arafat, 1974; Loe,
1965; Loe & Silness, 1963; although see Jonsson et al., 1988; Maier &
Orban, 1949). No evidence exists for increase in tooth loss or dental caries
due to gingivitis or other factors during pregnancy (Larsen et al., 1991;
Walker, 1986b; Walker & Hewlett, 1990). Therefore, sex differences in
eruption timing or pregnancy cannot explain variation in dental caries
between males and females. If dental caries differences could be explained
by some underlying physiological or developmental reason, then the
pattern of greater prevalence of caries in women than men should be
universal, or nearly so. In fact, there are exceptions indicating that other
factors are involved (e.g., a number of investigations show either the same
prevalences or greater prevalence in males than in females in archaeological
and clinical settings: Barmes, 1962; Burns, 1979, 1982; Clarkson &
Worthington, 1993; Moore & Corbett, 1973; Pietrusewsky, 1988; Powell,
1988; Rowe, 1982; Sutter, 1995; Wells, 1980; White, 1994). Therefore,
variation in caries prevalence in females and males is behaviorally
mediated.
3.2.4 Status differences in dental caries
There is growing evidence to suggest that members of different social
rankings consumed dissimilar foods, leading to contrasting patterns of
dental disease. High-status Yedo-period Japanese and Dynastic-era Egyp-
tian adults have higher prevalence of caries than low-status adults (Leigh,
1934; Suzuki et al., 1967). Leigh (1934) and Suzuki and coworkers (1967)
suggest that these differences are related to dietary consistency: high-status
adults in these settings consume much softer, more refined foods than
low-status individuals, a pattern which also results in different levels of
craniofacial robusticity (see Chapter 7).
Comparisons of dental caries prevalences of upper class and lower class
adults from Medieval Europe show contrasting patterns. At Westerhus,
Sweden, upper, middle, and lower class individuals buried in and surround-
ing a church show no differences in caries prevalence (Swärdstedt, 1966).
The lack of class distinctions in dental health suggests that diets in this area
of Medieval Sweden were similar, regardless of social rank. In contrast,
dental health in a Medieval population from Zalavár, Hungary, clearly
varied along social lines (Frayer, 1984). Individuals associated with the
castle (upper class) have 6.4% carious teeth and individuals from the chapel
(lower class) have 12.1% carious teeth. Frayer (1984) argues that these
differences reflect greater consumption of animal protein in the upper class
than in the lower class. Information on specifics of dietary variation in
social classes in Medieval Sweden or Hungary is not available. Historical
records for contemporary populations from Britain indicate that upper
class landed gentry had much greater access to animal sources of protein,
and diet included a relatively smaller proportion of carbohydrates than
that consumed by lower class peasantry (Wells, 1975).
Similarly, elite adults from the Classic period Maya centers at Copán,
Honduras, and Lamanai, Belize, exhibit lower caries prevalence than
subelite adults (Hodges, 1985; White, 1994). Two Lamanai elite adults - a
male and a female - had no carious lesions. These findings indicate that
high-status adults consumed less maize and had greater access to animal
protein than did low-status adults.
An argument for greater access to animal protein in explaining better
dental health in high-status members of human populations is supported
by ethnographic documentation of status differences in dental health in
African foragers. Walker & Hewlett (1990) found that high-status pygmy
leaders have fewer carious lesions than low-status nonleaders. They suggest
that greater access to meat by leaders (from gifts and tribute), combined
with reduced consumption of carbohydrate-rich plants, best explains the
discrepancy between social ranks.
3.3 Periodontal disease (periodontitis) and tooth loss
3.3.1 Description and etiology
The accumulation of plaque on teeth is closely linked to an inflammation
of the gums called chronic gingivitis (Brickley, 1981; Hillson, 1986).
Although gingivitis is readily visible in soft tissue - the gums appear swollen
and red - it does not necessarily affect the underlying alveolus, thus
rendering the disease invisible in archaeological remains. Gingivitis can
intensify to the point that the alveolar bone becomes involved. Another
potential irritant to the soft tissues surrounding alveolar bone is excessive
masticatory loading of the jaws and teeth. These mechanical demands are
due to either consumption of hard-textured foods or excessive extramas-
ticatory practices, such as processing of animal hides (Bement, 1994;
Clarke & Hirsch, 1991; Marks et al., 1988; Molnar, 1972; Pedersen &
Jakobsen, 1989). Extreme mechanical demands on anterior teeth in
northern latitude populations result in severe tooth wear, pulp exposure,
and resorption and shortening of tooth roots. These factors contribute to
tooth loss (Pedersen & Jakobsen, 1989).
Periodontal disease is generally characterized by a loss of alveolar bone,
represented either by a horizontal lowering of the alveolar crest relative to
the neck of the tooth (cementoenamel junction) or as pockets of bone
rarefaction (Hildebolt & Molnar, 1991; Hillson, 1986) (Figure 3.3). In the
clinical setting, horizontal bone loss is determined by the distance between
the alveolar crest (interproximal bone) and the cementoenamel junction.
This dimension may not always represent an appropriate indicator of
periodontal disease, especially in anthropological populations in which
continuous eruption during adulthood occurs in response to occlusal wear
(Clarke, 1993; Clarke & Hirsch, 1991; Whittaker et al., 1985). Clarke &
Hirsch suggest that periodontal disease in skeletal remains is identified
when 'the crestal margin of bone undergoes loss of the surface cortical
bone, exposing the porous cancellous structure of the supporting bone,
usually with an accompanying change of the contour of the crest'
(1991:241).
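Where CEJ-AC distances have been recorded, a simple screening rule can flag candidate bone loss, although, for the reasons just noted, a flag should be confirmed against crestal bone texture rather than distance alone. The sketch below is illustrative only: the ~2 mm cutoff is a common clinical convention assumed here rather than a criterion from this text, and the tooth codes are hypothetical:

    def flag_bone_loss(cej_ac_mm, threshold_mm=2.0):
        """Flag a cementoenamel junction to alveolar crest (CEJ-AC)
        distance exceeding a working threshold (assumed ~2 mm)."""
        return cej_ac_mm > threshold_mm

    # Hypothetical per-tooth measurements (mm).
    measurements = {"LM1": 1.4, "LM2": 2.8, "LP4": 3.1}
    flagged = [tooth for tooth, d in measurements.items() if flag_bone_loss(d)]
    print(flagged)  # ['LM2', 'LP4'] - candidates only; verify crestal texture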
Alveolar bone loss is progressive in periodontal disease. If left un-
Figure 3.3. Periodontal disease; central European Bronze Age. Note large gap
between the alveolar bone and cementoenamel junction for all teeth. (From
Hildebolt & Molnar, 1991; reproduced with permission of authors and
Wiley-Liss, Inc., a division of John Wiley & Sons, Inc.)
checked, the skeletal support for the teeth diminishes, and, ultimately,
exfoliation occurs (Figure 3.4). Once gone, the soft tissue heals, and the
tooth socket is completely remodeled. Although the progression of
periodontal disease is well documented in human populations, ancient and
modern, it remains an etiological conundrum. There is general agreement
that bacteria - perhaps as many as 40 different taxa - may be involved in the
onset and progression of the disease (Drake et al., 1993; Enwonwu, 1995;
Hillson, 1986). Other important influencing factors include poor oral
hygiene, cariogenesis, malocclusion, nutritional status and, to a lesser
extent, pregnancy, puberty, and psychological stress (see Enwonwu, 1995;
Hildebolt & Molnar, 1991; Hillson, 1986).
3.3.2 Temporal trends: foragers, farmers, and industrialized
populations
Many of the foregoing populations with elevated prevalence of dental
caries also exhibit high frequencies of tooth loss. The high prevalence of
dental caries in populations with tooth loss indicates an association
Figure 3.4. Edentulous individual; Santa Catalina de Guale de Santa Maria,
Amelia Island, Florida. (Photograph by Mark C. Griffin.)
between the two conditions. It is difficult to identify the cause of premortem
tooth loss in most instances. Given the similarities between clinical
populations and archaeological remains in location and pattern of tooth
loss - usually commencing with the posterior mandibular dentition - tooth
loss in archaeological contexts is linked closely to periodontal disease.
Few workers have systematically reported on tooth loss prevalence in
past human populations. The paucity of data reflects a lack of consensus on
etiology (see Clarke & Hirsch, 1991; Hildebolt & Molnar, 1991), as well as
the poor representation of intact alveolar bone in many archaeological
remains. The available record indicates that tooth loss has an ancient
history. Various early hominid remains show evidence of alveolar bone
resorption, including Homo erectus (e.g., Mauer) and late archaic Homo
sapiens (e.g., Krapina, La Chapelle-aux-Saints) (Hildebolt & Molnar,
1991; Hillson, 1986; Trinkaus, 1985; Wells, 1975). Significant tooth loss is
largely limited to Holocene populations, however.
As with dental caries, recent populations undergoing shifts from
traditional to Western, processed diets experience a marked increase in
periodontal disease and tooth loss (e.g., Barmes, 1977; Basu & Dutta, 1963;
Clarke et al., 1986; Cutress et al., 1982; Heithersay, 1959; Homan, 1977;
Mayhall, 1977; Moorrees, 1957; Tal, 1985; P. L. Walker et al., unpublished
manuscript; and many others). Similarly, archaeological populations
showing evidence of high levels of consumption of plant carbohydrates or
processed foods have high rates of periodontal disease and tooth loss (e.g.,
Frayer, 1984; Larsen, Craig et al., 1995; Owsley et al., 1987; Patterson,
1984; Sledzik & Moore-Jansen, 1991). In contrast, many foragers with diets
dominated by animal protein have low prevalences (e.g., Costa, 1980; Scott
et al., 1991).
The shift from foraging to farming was accompanied by an increase in
periodontal disease and tooth loss. In the Eastern Woodlands of North
America, tooth loss increases dramatically with the shift to farming (e.g.,
Cassidy, 1984; Cook, 1984; Patterson, 1984; Smith, 1982; but see Ander-
son, 1965). In prehistoric Ontario, for example, molar loss increases from a
low of 12.0% (LeVesconte Mound) to 37.1% (Glen Williams site) in
prehistoric foragers and farmers, respectively (Patterson, 1984). This and
other populations show a higher prevalence of tooth loss in posterior teeth
(especially molars) and in older adults. The cause for tooth loss in Ontario
appears to be different in comparison of foragers and farmers. In foragers,
tooth loss is due to pulp exposure from severe occlusal wear, and in
farmers, tooth loss is due to periodontal disease and dental caries.
Other settings in Europe, Asia, and Africa mirror the trends documented
in North America. Seventeenth century British mandibles exhibit lower
crestal bone than Anglo-Saxon (sixth century AD) mandibles (Lavelle &
Moore, 1969). Increasing prevalence of periodontal disease in this setting is
due to the greater consumption of softer foods in the later period, mostly
arising from improvements in the milling of flour for bread and increased
consumption of sugar and refined carbohydrates generally. This trend
closely parallels rapid increases in dental caries in Britain for the same time
period (e.g., Hardwick, 1960; Moore & Corbett, 1975).
In the Indian subcontinent, Lukacs (1992) documented tooth loss in an
extensive series of human remains from various sites dating from the
Mesolithic through the Iron Age, a temporal framework representing the
transition from foraging to intensive agriculture. The later agricultural
populations consumed a variety of domesticated plants (wheat, barley,
peas, sesamum). Comparisons of the earlier and later populations reveal
an increase in tooth loss resulting from an increase in consumption of plant
carbohydrates (Lukacs, 1992).
A similar pattern of declining oral health has been identified in the Nile
Valley in comparison of dentitions from four successive periods: Me-
solithic (ca. 15000 BP), Meroitic (350 BC-AD 350), X-group (AD 350-550),
and Christian (AD 550-1300) (Rose et al., 1993). The Mesolithic group is
characterized by a foraging economy based largely on seed gathering and
hunting. Populations in the three later periods were irrigation farmers who
grew cereal grains. All samples show evidence of tooth loss. However, the
Mesolithic group possessed only two teeth lost prior to death (from 400
sockets representing 39 individuals). Later, the prevalence steadily rises
from 9.9% (Meroitic) to 34.6% (Christian). Although this was multifac-
torial, Rose and coworkers (1993) suggest that tooth loss was due to
extensive caries and excessive occlusal wear, which predisposed teeth to
decay and eventual loss. Some of the tooth loss may be due to the use of
stone grinding implements and the incorporation of grit into food. The
resulting wear and pulp exposure contributes to periodontal disease and
tooth loss in other areas of the Nile Valley (e.g., Marion, 1996).
Prevalence of periodontal disease may be overestimated in archaeologi-
cal samples. Super-eruption of teeth (and eventual loss) may represent a
normal aging process bearing little relation to pathological processes (e.g.,
Clarke & Hirsch, 1991; Whittaker, 1993; Whittaker et al., 1985). However,
the important role of the dentition in food consumption argues that tooth
loss can be detrimental at both the individual and the population levels.
3.3.3 Sex and status differences
Sex differences in prevalence of tooth loss do not present a consistent
pattern in archaeological settings. Males appear to be somewhat more
affected than females (Hillson, 1986), but this difference may be related to
oral hygiene practices. Females in South American native populations
recently coming into contact with Western society have a higher prevalence
of antemortem tooth loss than males (P. L. Walker et al., unpublished
manuscript). In this setting, women of childbearing age use sap to extract
diseased (carious) teeth. Thus, cultural behavior explains at least some of
this variation.
Very few studies of archaeological dentitions report female and male
prevalence data, thus preventing observations of sexual dimorphism in
tooth loss. Populations from Bronze Age Harappa, India (Lukacs, 1992)
and Mesolithic Portugal (Frayer, 1988a) - both settings involved consump-
tion of plant carbohydrates and other cariogenic foods - have higher
prevalence of tooth loss in females than males, similar to the situation with
dental caries. Prehistoric Eskimos from Point Hope (Ipiutak and Tigara
sites) and Kodiak Island (Jones Point site), Alaska, have very low
prevalences of tooth loss, especially in young adults (Costa, 1980).
Differences are quite pronounced in the comparison of female and male
anterior tooth loss: females have much greater anterior tooth loss than
males. In the Ipiutak series, males have a very low loss ofincisors (5.3%);
females display a high loss of incisors (19.4%). Hrdlicka {1940a) argued
that tooth loss in this region was related to the practice of tooth ablation,
whereby the anterior teeth were intentionally removed (and see Cook,
1981 b ). Re-examination of cultural practices in these groups suggests that
greater anterior tooth loss in adult females is related to the excessive use of
front teeth in extramasticatory activities (Costa, 1980). Women in this
setting engage in behaviors that place excessive mechanical demands on the
anterior teeth, such as hide chewing. The cumulative trauma results in early
loss of incisors and canines. Thus, tooth loss differences between men and
women are related to use of the dentition in extramasticatory behaviors
(see also J. R. Lukacs, unpublished manuscript; Chapter 8).
The influence of status and social rank on tooth loss in human
populations has rarely been systematically assessed in archaeological
remains. In the Medieval series from Zalavár, Hungary, tooth loss is
considerably higher in low-status (39.4%) individuals buried in the chapel
than in high-status (9.1%) individuals buried in the castle (Frayer, 1984).
The difference between status groups is especially striking in adult males:
48.4% of teeth were missing in the lower-status group, and only 5.2% were
missing in the high-status group. Females show prevalences of 32.1% and
6.7% for low- and high-status groups, respectively. Like the dental caries
prevalences from this series, sex differences in tooth loss indicate dental
health variation between men and women that is strongly influenced by
sexual dimorphism in diet and food consumption practices.
3.4 Nonspecific infection
3.4.1 Periostitis and osteomyelitis
Skeletal lesions of infectious origin represent a continuum initially involv-
ing the periosteum, followed in severity by involvement of cortical bone
generally, and, at the extreme end, extension of infection into the medullary
cavity. Periostitis (or periosteal reaction) is the least severe, and osteitis and
Figure 3.5. Periosteal reaction (periostitis) on mid-diaphysis of adult tibia;
Santa Catalina ossuary, Amelia Island, Florida. (From Larsen, 1994;
photograph by Mark C. Griffin; reproduced with permission of Wiley-Liss,
Inc., a division of John Wiley & Sons, Inc.)
osteomyelitis are the most severe lesions. Periostitis represents a basic
inflammatory response that may result from bacterial infection, but
traumatic injury is also implicated in its etiology (Ortner & Putschar, 1985;
Walker et al., 1997). Infection or injury elicits bone production by
stimulating the osteoblasts lining the subperiosteum (Eyre-Brook, 1984;
Simpson, 1985). The resulting lesions are characterized as osseous plaques
with demarcated margins or irregular elevations of bone surfaces (Figure
3.5). The skeletal tissue in the unhealed form is loosely organized woven
bone. In the healed form, the skeletal tissue is incorporated into the normal
cortical bone and the surface is often smooth, undulating, and somewhat
inflated. The lesions can be highly localized, often being limited to single
skeletal elements, but they may also involve multiple elements if the
infection is widespread or systemic. Osteitis is usually not identifiable
without radiological observation. Since most paleopathological studies do
not involve radiological analysis, osteitis is not discussed here.
Osteomyelitis involves exuberant proliferation of both endosteal and
periosteal bone surfaces (Figure 3.6), the former resulting in the restriction
in diameter of the medullary cavity. Most pyogenic (pus-producing)
osteomyelitis is caused by the microorganism Staphylococcus aureus,
Figure 3.6. Osteomyelitis involving entire diaphysis of adult tibia; Iosco
County, Michigan. (From Barondess & Sauer, 1985; reproduced with
permission of authors and Michigan Archaeologist.)
accounting for some 80 to 90% of cases in living populations (Rosenberg,
1994). Other organisms linked with infections include Escherichia coli,
Salmonella typhi, and Neisseria gonorrhoeae (Aegerter & Kirkpatrick,
1975; Steinbock, 1976). The arterial system of the bone is the typical route
of their transport. Direct infection from a bone fracture and break in the
skin is also linked with osteomyelitis. The infection site on the bone is
sometimes associated with sinuses or holes (cloacae) for exudate or pus
drainage. Chronic osteomyelitis can occur over a period of many years,
owing to the presence of localized infection foci that occasionally reappear,
sometimes in response to systemic or localized stress.
Infection resulting in periostitis is almost never fatal, since it is usually
restricted to a localized area of a single bone. Infection leading to
osteomyelitis can result in death if the infection spreads via the circulatory
system to vital organs. As with periosteal reaction, the bone tissue
associated with osteomyelitis has a woven, porous appearance in the
unhealed form. If healed, the bone is dense and becomes part of the normal
underlying cortical tissue (Ortner & Putschar, 1985). Osteomyelitis is far
less prevalent in archaeological series than is periostitis (e.g., Larsen, 1982;
Powell, 1988).
Although periosteal reactions and osteomyelitis are nonspecific, their
documentation has proven highly useful for assessing levels and patterns of
community health. Nonspecific infections provide a rather incomplete and
undiagnostic picture of a population's disease experience, but documenta-
tion of prevalence and changing patterns reflects the health costs of specific
lifeways, such as sedentary agriculture.
3.4.2 Temporal trends: foragers and farmers
Nonspecific infections have been documented in many archaeological
skeletal samples, but only recently have biological anthropologists system-
atically collected data on prevalence in order to make interpopulation
comparisons. These recent comparisons demonstrate important trends in
health in specific settings, especially in populations making the transition
from foraging to farming or in comparison of foragers, farmers, and other
lifeways.
Generally, populations undergoing adaptive shifts from foraging to
part-time or intensive farming show an increase in prevalence of periostitis
and bone infection. Bioarchaeological analyses of nonspecific periosteal
reactions reveal that virtually without exception the tibia is the most
commonly affected bone (e.g., Boyd, 1986; Clabeaux, 1977; Dickel, 1991;
Eisenberg, 1986a; Hodges, 1989; Hutchinson & Larsen, 1995; Lambert,
1993; Larsen, 1982; Larsen & Harn, 1994; Martin et al., 1991; Milner, 1991;
Perzigian et al., 1984; Powell, 1988; Suzuki, 1991; Webb, 1995; and others).
It is unclear why the infection rates for the tibia are so much higher than for
other skeletal elements. Periosteal reactions associated with syphilis show a
penchant for the anterior tibia, the cranial vault, and the superior aspect of
the clavicular diaphysis, perhaps because these elements are not sur-
rounded by large amounts of soft tissue and, therefore, have slightly cooler
temperatures, rendering them more susceptible to infection (Ortner &
Putschar, 1985). Additionally, the anterior and lateral aspects of the tibia
diaphysis have the largest and perhaps most vascularly and physiologically
inactive surfaces in the skeleton, which may also lead to bacterial
colonization and infection (Martin et al., 1991; Steinbock, 1976). Circula-
tory flow is generally slower in the lower legs because of gravity, enhancing
bacterial colonization (Cotran et al., 1994). The anatomical location - the
lower end of the legs - also exposes the anterior tibia to trauma against
which little protection is offered by soft tissue. Subcutaneous and sub-
periosteal bruises from trauma promote bacterial proliferation through
release of blood and intracellular fluids from ruptured cells and vessels.
In the Eastern Woodlands of North America, several studies show
important links between subsistence, settlement pattern, and community
health. At the Dickson Mounds site in the Illinois River valley, Lallo and
coworkers (Goodman et al., 1984; Lallo et al., 1978; Lallo & Rose, 1979)
compared infection prevalence in two populations, including an earlier
Woodland (called Mississippian Acculturated Late Woodland) group (AD
1050-1200) and a later Mississippian (Middle Mississippian) (AD 1200-
1300) group. The later period saw an increase in consumption of maize and
reduction in animal sources of protein. There appears to have been a
consolidation of residential units into larger population aggregates at the
same time. Overall, settlement involved a marked increase in population
density and a decrease in mobility.
There is a dramatic increase in frequency and severity of infectious
lesions (periostitis and osteomyelitis) that coincides with these changes in
population settlement at Dickson Mounds. The prevalence doubled in the
later time period, from 30.8% to 67.4% of individuals affected. For the
tibia, the prevalence increased from 26% to 84%, affecting all age groups
and both sexes. Comparisons of severity (from slight to severe involvement
of the periosteum) for the tibia showed that most of the infection was slight
in the Woodland group, whereas most was moderate to severe in the
Mississippian group.
This increase in prevalence and severity of infection could be explained in
a number of ways. The introduction of new chronic infectious disease(s)
may have occurred in later prehistory. Lallo and coworkers argue that the
increase in infectious lesions in the thirteenth century AD is probably tied to
severa! interlinking factors. They highlight the role of the decline in
nutritional quality in later prehistory, which was likely to place populations
at increased risk of infection by decreasing the resistance to disease.
Archaeological evidence suggests that there was an increase in trade
networks and long-distance social contact in the later prehistoric period.
These contacts may have provided a means for introducing new pathogens
or disease vectors or both, thus increasing the prevalence of infectious
disease. The effects of population size increase and sedentism are well
understood in infectious disease ecology and epidemiology. By increasing
the size and density of settlements, the host and pathogen are placed
side-by-side in a long-term relationship that may form the basis of chronic
infection. The number of potential hosts is increased, thus providing a
permanent reservoir for certain infectious agents. The closer contact in a
more densely occupied settlement, coupled with the ill effects of poor
sanitation resulting from permanent occupancy of a setting, results in
faster and more proficient disease transmission (Armelagos, 1990; Ar-
melagos & Dewey, 1970; Lallo et al., 1978; Lambert, 1993).
The negative impact of infection on this population is indicated by
depressed survivorship for those individuals with skeletal lesions. Dickson
adults (> 20 years) with severe infections had an average age-at-death of
29.3 years, which is well below the mean age-at-death for the adult
population overall (33.5 years) (Lallo et al., 1978).
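Comparisons of this kind, mean age-at-death for individuals with versus without lesions, are often accompanied by a formal test of the difference in means. The sketch below (Python with scipy; the ages are invented for illustration and are not the Dickson Mounds data) shows one conventional approach, Welch's t-test:

    from statistics import mean
    from scipy.stats import ttest_ind

    # Hypothetical adult ages-at-death (years), grouped by lesion status.
    with_lesions = [22, 25, 27, 29, 31, 33, 36]
    without_lesions = [24, 28, 31, 33, 35, 38, 41, 44]

    t, p = ttest_ind(with_lesions, without_lesions, equal_var=False)
    print(f"means: {mean(with_lesions):.1f} vs {mean(without_lesions):.1f}")
    print(f"Welch t = {t:.2f}, p = {p:.3f}")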
Other settings from the Eastern Woodlands show results that are
generally consistent with changes observed in the Dickson Mounds
populations. Comparisons of prehistoric foragers (pre-AD 1150), prehis-
toric maize farmers (AD 1150-1550), early mission (AD 1607-1680), and late
mission period intensive maize agriculturalists (AD 1686-1702) in the
Georgia Bight show clear temporal trends in prevalence in relation to
dietary and lifeway changes (Larsen & Harn, 1994; C. S. Larsen et al.,
unpublished manuscript; and see above for more detailed description of the
samples).
Unlike the Dickson Mounds study, these analyses focus on comparisons
of populations consuming exclusively wild plants and animals with
populations utilizing maize to varying degrees. For most of the time prior
to the arrival of Europeans in the sixteenth century, native groups were
nonsedentary foragers who obtained most foods from a combination of
hunting, gathering, and fishing. Archaeological and isotopic documenta-
tion of subsistence economy indicates that marine resources from estuarine
and marine contexts provided most of the protein in the native diet (Larsen,
Schoeninger et al., 1992; Reitz, 1988, 1990). During the twelfth century AD,
maize rapidly took on an increasingly important dietary role. With the
arrival of Europeans and the establishment of Catholic missions, there was
a subsistence reorientation whereby maize became highly significant.
Coupled with these dietary changes were alterations in population size,
density, and sedentism. Prehistoric foragers in the region appear to have
been sparsely settled and highly mobile. Prehistoric farmers, though, were
living in larger, more densely occupied villages, probably for longer periods
of time (Larsen, 1982). In the mission period, populations were encouraged
to live in and around mission settlements. Although population size was
reduced dramatically during the contact period, the settlements were
permanent and villages were crowded.
Prevalence comparisons for periosteal reactions in adult tibiae indicate
that there is an increase in frequency prior to contact from 9.5% to 19.8%.
In the early mission population, the prevalence declined slightly to 15.4%,
but then greatly increased to 59.3% in the late mission period. These general
increases affected both adult males and females. The moderate increase in
the precontact populations is well under that observed at Dickson Mounds.
In all likelihood, the relatively lower prevalence in the Georgia Bight
reflects considerably smaller population size and less of a commitment to
maize agriculture in the later prehistoric period in comparison with interior
Mississippian populations (e.g., Dickson Mounds). The marked increase in
the late mission period is probably tied to the relocation and increased
concentration of native populations around mission centers and the
introduction of new diseases, including possibly venereal syphilis. The
change in settlement provided conditions conducive to the maintenance
and spread of chronic infectious diseases and other factors that lead to an
increase in bone lesions (Larsen & Harn, 1994; C. S. Larsen et al.,
unpublished manuscript). The effects of increased infection rates would
probably have been exacerbated by the increase in emphasis on nutrition-
ally poor foods, especially maize.
There is a synergy between infection and malnutrition (Keusch &
Farthing, 1986; Scrimshaw et al., 1968) - malnourished individuals are less
resistant to infectious pathogens and are rendered more susceptible to
infectious disease; conversely, infection worsens nutritional status. In
understanding the increase in infection in these archaeological (and other)
settings, the synergy with nutrition is critical. The consequences of infection
and nutrition are worse than when either acts alone. Individuals experienc-
ing infection exhibit higher basal metabolic rates, which are accompanied by
fever and the body's increased demand for protein and other nutrients
necessary for the production of antibodies that fight the infection.
Therefore, in the setting of reduced nutritional quality, first in the late
prehistoric and then in the mission context, the ability to mitigate infection
was probably hampered by a reduction in dietary quality. Thus, infection
increased in the mission period, in large part due to the compromised health
linked to poorer nutrition and population crowding.
These studies provide strong support for the epidemiological model that
an increase in population size and density often contributes to decline in
community health, at least as it is measured by the prevalence of bone
lesions. This general pattern of increase in infections is also documented in
a variety of other areas of the Eastern Woodlands undergoing the shift
from foraging to farming (e.g., Cassidy, 1984; Cook, 1984; Garner, 1991;
Hoyme & Bass, 1962; Katzenberg, 1992a; Pfeiffer & Fairgrieve, 1994; Rose
et al., 1984, 1991) or in single-component late prehistoric settings in the
Eastern Woodlands (e.g., Boyd, 1986; Eisenberg, 1986a, 1991a, 1991b;
Magennis, 1986; Milner, 1982, 1991, 1992; Milner & Smith, 1990; Powell,
1986, 1988, 1989; Rose & Hartnady, 1987). Increase in infection prevalence
is also well documented in the American Southwest (Martin et al., 1991,
1997; Stodder, 1994; Stodder & Martin, 1992), Mesoamerica (e.g., Hodges,
1989; Norr, 1984; Saul, 1972; Storey, 1992a), Ecuador (Ubelaker, 1984,
1994), and in a few regions of the Old World (e.g., Japan: Suzuki, 1991;
northern Europe (Britain): Grauer, 1991). These frequencies contrast with
a generally lower prevalence of nonspecific infection among North Ameri-
can foragers (e.g., Custer et al., 1990a, 1990b; Hutchinson & Larsen, 1995;
Neumann, 1967).
This is not to say that increased population aggregation invariably led to
the same levels of bone infection prevalence globally or even within broad
regions (e.g., American Southeast). Community health in late prehistoric
populations in eastern North America was highly varied (Milner, 1991).
Some late prehistoric agriculturalists had very high prevalences of bone
infections (e.g., Eisenberg, 1986a; Lallo et al., 1978; Milner & Smith, 1990),
whereas other groups had somewhat lower prevalences (e.g., Larsen, 1984;
Larsen & Harn, 1994; Milner, 1991). Some of this variability is undoubtedly
due to interobserver differences in recording methods. The variable
pattern of infection prevalence also points to a high degree of variability
between human groups occupying very different landscapes and physiographic
zones, ranging from highly fertile river bottoms (e.g., Lallo et al.,
1978) to marginal uplands (see Eisenberg, 1986a) or coastal regions (e.g.,
Larsen, 1982). Detailed analysis of population trends indicates that
population histories fluctuated dramatically, with regard to both size and
distribution (Milner, 1990). Living in peripheral settings did not provide
freedom from disease - some of the highest prevalences of bone infection
are in the so-called 'marginal' habitats (e.g., Eisenberg, 1986a, 1991a,
1991b).
Some evidence suggests that high population density and disease
burdens, in combination with other factors (e.g., warfare), may have
contributed to cultural terminations during later prehistory well before the
arrival of Europeans (e.g., Eisenberg, 1986a; and see Larsen, 1994).
Improved survivorship, coupled with a decline in prevalence of skeletal
infection, in Ontario suggests that populations may have adjusted to high
density in this setting (Katzenberg, 1992a). This suggestion contrasts
sharply with other contact-era settings where periosteal infections have
been shown to increase in a dramatic fashion (e.g., Larsen & Harn, 1994;
Stodder, 1994; Ubelaker, 1994).
Timing of agricultural intensification may explain some of the variation
in increasing infection prevalences. In contrast to the findings from the
analysis of populations from the Eastern Woodlands, the prevalence of
periosteal reactions remained unchanged from earlier to later periods in the
Valley of Oaxaca, Mexico (Hodges, 1987, 1989). Unlike most settings in the
Eastern Woodlands, agricultural intensification in the Valley of Oaxaca
was accompanied by neither increased sedentism nor appreciable population
growth. Unlike the Eastern Woodlands, agricultural development was
long and gradual, taking place over several thousand years. This contrasts
with regions of secondary agricultural development such as the Eastern
Woodlands, where maize agriculture was adopted relatively rapidly.
Hodges (1987) argues that the longer period of human-plant interaction
may explain some of the differences in health declines between the Valley of
Oaxaca and the Eastern Woodlands.
The shift to agriculture and increased population density in the Nile
Valley of Sudanese Nubia was also not accompanied by an elevation in the
frequency of infectious bone lesions. In the X-group intensive agriculturalists
(AD 350-550) in the Wadi Halfa area, only 12% of individuals possess
nonspecific bone infections, most of which are minor localized periosteal
reactions (Armelagos et al., 1981). This finding is especially surprising,
since the valley was densely settled and populations experienced elevated
stress (see Chapter 2).
Microscopic examination of femoral cortical bone from X-group
individuals indicates a pattern of fluorescence identical to that of tetracycline
labeling in modern bone. This analysis reveals the presence of tetracycline -
now recognized as a broad spectrum antibiotic - some 1400 years prior to
its medical discovery in the mid-twentieth century (Bassett et al., 1980; and
see Bassett, 1981; Cook et al., 1989; Keith & Armelagos, 1988; Mills, 1992).
Tetracycline is highly effective against gram-negative and gram-positive
bacteria as well as some other pathogens; its use would have had a high
therapeutic value for ancient Nubians. The source of the tetracycline is
unclear, but Bassett and coworkers (1980) suggest that grains - wheat,
barley, and millet - stored in mud bins provided the environmental
conditions and nutrients essential for the natural production of streptomycetes,
the bacteria that produce tetracyclines. Further south in the Nile
Valley at Kulubnarti, intensive agriculturalists have a much lower presence
of skeletal tetracycline (Hummert & Van Gerven, 1982). Not surprisingly,
these groups also display considerably higher bone infection prevalence
(42%-45%) than the populations at Wadi Halfa.
Increased population density is not solely dependent upon the adoption
of an agriculture-based economy. A number of regions globally show
increase in sedentism and population density in the absence of plant or
animal domestication. If a chief cause of increasing skeletal infection is
related to demographic factors (i.e., population size and distribution), then
populations undergoing a shift to sedentism in foraging contexts should
show similar changes in infection prevalence to those in populations
adopting agriculture. In order to test this hypothesis, Lambert & Walker
(1991; Lambert, 1993, 1994) documented change in prevalence of periosteal
lesions in populations occupying the mainland coast and islands of the
Santa Barbara Channel region. Accompanying the shift toward a marine-based
economy, there was an increase in population size and density and a
decrease in mobility, especially in later prehistory (Glassow, 1996). By the
time of initial contact, native populations in the region had a level of
complexity of social organization and population density that rivaled those
of many agricultural societies in North America (Arnold, 1992).
Comparisons of nonspecific periosteal reactions in tibiae reveal a
striking increase in prevalence and severity that peaked during the Late
Middle period (AD 580-1380) and declined slightly afterwards in late
prehistory. In general, then, an increase in sedentism and population size
was accompanied by an increase in infection, reflecting a decline in
community health. In addition to increasing population density, size, and
degree of sedentism, other factors may have contributed to increased
infection. Archaeological evidence indicates a clear pattern of increase in
exchange between the islands and the mainland, creating the possibility for
the introduction of new infection-causing pathogens. Although their diet
was rich in protein, the well documented increase in interpersonal violence
(see Chapter 4) suggests that local island populations may have become
increasingly competitive for limited terrestrial resources in later prehistory.
The slight decline in bone infection in the late prehistoric period is similar
to the pattern documented for prehistoric Ontario (cf. Katzenberg, 1992a),
suggesting the possibility of increasing immunities to pathogenic agents
associated with high population density. However, the Santa Barbara setting
shows a continued decline in stature, indicating that health did not improve.
Finally, the region saw an extended period of drought during the Late
Middle period that may have resulted in decreased abundance of food
resources, thus contributing to poorer health and increased infection.
Comparative studies of regional samples of human remains provide an
important perspective on community health in relation to ecological and
biocultural variability. Webb (1995) compared the prevalence of infectious
lesions in major limb bones from foragers occupying six regions of
Australia: central Murray River valley, Rufus River valley, South Coast,
Desert, northern Tropics, and East Coast. These regions represent highly
variable ecological settings, ranging from tropics to desert. Although
temporal comparisons are not available in his study, the regional perspective
represents an important first step toward addressing variability in
nonspecific infection.
The prevalence of nonspecific infection is relatively low throughout
Australia, regardless of region. For example, skeletal samples from coastal
areas have a remarkably low prevalence of infection, the highest being in
East Coast right tibiae (6.1%). The highest frequency of infection in
Australia is represented in the Desert group: 16.7% of femora have
nonspecific infections. The reasons for the higher prevalence of infection in
this region are unclear. Webb (1995) notes that endemic treponematosis is
present in the area, which may contribute to the higher frequency in the
Desert group in comparison with other regions.
3.4.3 Sex and status differences in nonspecific infection prevalence
There is evidence to suggest that different cohorts of a population are
differentially affected by disease stress. At the late prehistoric Dickson
Mounds (Lallo et al., 1978), Moundville (Powell, 1988), and Georgia Bight
(Larsen & Harn, 1994) sites, the prevalence of nonspecific infections is
broadly similar between adult males and females. These similarities suggest
that factors influencing disease prevalence were the same for both sexes. In
the Averbuch and historic period Georgia Bight populations, males have
appreciably higher frequencies of nonspecific infections than females
(Eisenberg, 1986a; Larsen & Harn, 1994). Adult males under age 35 show a
tendency to have more unhealed, active lesions than adult females. Higher
prevalence in adult males is also present in other diverse settings: a Medieval
population in England (Grauer, 1991), intensive agriculturalists from
Mesoamerica (Danforth et al., 1997; Hodges, 1989), and hunter-gatherer-fishers
in coastal southern California (Lambert, 1994).
The tendency for greater male nonspecific infection prevalences in these
diverse settings is related to factors that are unique to specific circumstances.
In Spanish Florida, for example, resettlement of male draft laborers in areas
far from home villages may have exposed them to novel pathogens or other
infectious agents (Larsen & Harn, 1994). In the Santa Barbara Channel
Islands region, the propensity for males to participate in highly demanding
physical activities may explain a higher prevalence of periosteal reactions
resulting from blows to the lower leg than in females (Lambert, 1994).
Several osteological samples show a tendency for adult females to have
more nonspecific lesions than adult males (e.g., Martin et al., 1997;
Whittington, 1989). In the prehistoric series from the La Plata Valley in
northwest New Mexico, females show a much higher prevalence of lesions
than males (females, 30.7%; males, 6.2%) (Martin et al., 1997). At Black
Mesa, Arizona, there are no sex differences in infection prevalence, but
females have more severe infections than males. The finding of greater
involvement in females, with regard to either prevalence or severity, in
these Southwestern settings may indicate a greater exposure to pathogens
resulting in infection. In the La Plata Valley, adult females have more
skeletal injuries than males, including cranial depression fractures and
other broken elements. In contrast to male burial, female burial was
haphazard and devoid of grave goods. On the basis of this pattern, Martin
and coworkers (1991) suggest that adult females lived under suboptimal
conditions, at least in comparison with adult males.
Relatively little paleopathological research is devoted to the important
link between status and health, arguably an important component of
complex societies (Powell, 1992a). If elite members of a society were
exempted from activities that would expose them to pathogens, then they
should exhibit a relatively lower prevalence of bone infection. Comparisons
of high-status with lower-status individuals at Moundville reveal no
statistically significant differences in infection prevalence, suggesting that
'status differentiation at Moundville brought no substantial biological
benefits, nor levied any particularly heavy penalties' (Powell, 1992a:88).
Comparison of skeletal elements of the high-status elites with the two
nonelite groups indicates that, with the exception of the fibula, all long
bones from elite individuals have somewhat less periostitis than those from
nonelite individuals. For example, 44.8% of high-status and 51.0% of
low-status tibiae have infections (Powell, 1988: Table 35). Given the
vagaries of identification of high-status individuals as well as potential
problems with sample size, these differences may not be meaningful. Other
southeastern U.S. Mississippian centers also show no clear differences
between high and low status in infection prevalence (Blakely, 1980, 1988).
In contrast to these late prehistoric settings in the Eastern Woodlands,
the picture of social differentiation and infection prevalence is distinctive at
Cahokia, a Mississippian site located in the American Bottom region of the
central Mississippi River valley. The system associated with the site was the
most organizationally complex late prehistoric Mississippian chiefdom in
the Eastern Woodlands (see discussion by Milner, 1990). By the early
eleventh century AD, an elite social stratum had emerged which had
differential access to a range of prestige items, and probably enjoyed a
better quality of life than lower classes. At Cahokia Mound 72, analysis of
the remains of 261 individuals from 28 burial features revealed that only
5.3% of high-status individuals had periostitis, whereas 25.0% of middle-status
individuals had lesions (Rose & Hartnady, 1987).
Comparison of socially elite 'shamans' and commoners in the Maitas-Chiribaya
culture of northern Chile (ca. AD 1000) reveals that fewer
high-status individuals have bone infections than low-status individuals
(prevalence: shamans, 9%; commoner males, 20%; commoner females,
18%; Allison, 1984). Although they are preliminary, these studies indicate
that high-status individuals enjoyed a healthier lifestyle than low-status
individuals, at least in these settings.
3.5 Specific infectious diseases: treponematosis, tuberculosis, and
leprosy
3.5.1 Treponematosis
Treponematosis is represented today by four disease syndromes:
venereal syphilis, nonvenereal (endemic) syphilis (also called
bejel), yaws, and pinta (Hudson, 1965; Mandell et al., 1990; Ortner &
Putschar, 1985). All but pinta result in hard tissue responses. Unfortunately,
the skeletal lesions of the three bone-affecting syndromes are so similar that
it is virtually impossible to distinguish among them. The pathogens
responsible for the disease are bacterial spirochetes of the genus Treponema,
including T. carateum (pinta), T. pallidum pertenue (yaws), T. pallidum
pallidum (venereal syphilis), and T. pallidum endemicum (endemic syphilis)
(Mandell et al., 1990). Because of the high degree of morphological and
immunological similarity, some argue that these may not represent different
species, but rather reflect differences in expression due to cultural and
environmental factors that affect the mode of infection. This conclusion is
borne out by recent DNA hybridization studies (see Ortner et al., 1992).
Treponemal infection is introduced via the skin or mucous membranes.
For venereal syphilis, the spirochete typically enters the body during sexual
contact from lesions on the genitals. For endemic syphilis and yaws, the
pathogens are spread from nongenital lesions on the arms, legs, or trunk
during nonsexual contact between individuals (e.g., physical contact
between children playing). For all three syndromes, the infection spreads
throughout the body via the circulatory system. Congenital transmission of
venereal syphilis involving passage of the spirochete from the mother
transplacentally to the fetus is well documented (Hudson, 1965; Ortner &
Putschar, 1985). In living populations, pinta, endemic syphilis, and yaws
have been found to be especially prevalent in rural settings with poor
sanitation. Additionally, in temperate to hot climates, individuals typically
wear relatively little clothing, which facilitates the spread of infection
through direct contact with infected minor wounds, such as abrasions and
cuts. Venereal syphilis characteristically appears in populations with
higher levels of sanitation, such as in urban settings in Western countries.
These groups tend to be fully clothed, which provides fewer opportunities
for the spread of infection via skin contact in the manner typical of
nonvenereal syphilis, pinta, and yaws.
The skeletal manifestations of the treponematoses are described in detail
in the paleopathological literature (Hackett, 1976; Ortner & Putschar,
1985; Powell, 1988). Yaws is most commonly represented by the inflammatory
response of the periosteum surrounding the bones of the forearm,
hand, and lower leg. In the most severe form, repeated episodes of
periosteal reaction and remodeling may result in hypertrophy of the
anterior crest of the tibia, presenting an appearance of bowing called
'saber-shin'. Bone surfaces with close proximity to skin - such as the cranial
vault and the anterior tibia - also may express active lesions or pitted
defects from gummatous granulomas. Destructive nasal and hard palate
changes may occur, but are less prevalent than in venereal or endemic
syphilis.
Skeletal changes associated with endemic syphilis are similar to those
that develop in yaws. In the tertiary stage of the disease, periostitis and
gummatous granulomas may develop in the cranial flat bones and tibiae,
and the tibiae take on a saber-shin appearance. Destructive lesions of the
face, especially in the nasal region, may also develop. Venereal syphilis
results in virtually the same bone lesions as endemic syphilis, including
extensive cranial vault lesions and periosteal inflammation of lower limb
long bones. Other destructive changes are present in elbow, hip, and knee
joints.
Tertiary-stage venereal syphilis may involve abundant osteosclerotic
skeletal responses, characterized by gummatous destruction of bone. The
skull is frequently affected, especially the nasal area and flat vault bones.
The frontal bone typically expresses healed star-shaped, gummatous
lesions called caries sicca. Over the course of time, these lesions may
coalesce to form an ectocranial surface with a high degree of irregular
topography.
Congenital syphilis results in distinctive skeletal changes, such as
osteochondritis (poor bone formation in areas of endochondral ossifica-
tion), periostitis, and osteomyelitis. Dental changes do not occur in the
nonvenereal treponematoses, because the teeth are fully or nearly fully
formed in the secondary and tertiary stages. In up to 30% of congenital
syphilitic children, pathognomonic modifications may occur in the forming
permanent anterior teeth, specifically the characteristic malformation of
incisors in which the crown is unnaturally constricted at the occlusal
margin (called Hutchinson's incisors) and the anomalous cusp patterning
of first molars (called Moon's or 'mulberry' molars). These lesions are
extremely rare in New World archaeological remains dated before 1492,
and are more common in historic populations with documented evidence
of venereal syphilis (Cook, 1994; Jacobi et al., 1992; Mansilla & Pijoan,
1995; Ortner & Putschar, 1985; Truesdell & Weaver, 1995).
The presence of treponematosis in past populations, especially in the
New World, has been debated for well over a century. Analysis of human
remains from Tennessee and Kentucky by the American Civil War surgeon
Joseph Jones revealed 'the unmistakable marks of the ravages of syphilis'
(1876:66). The presence of 'diseased, enlarged, and thickened' long bones
convinced him that syphilis was widespread in the region, that it had an
exclusively New World origin, and that it must have been imported to
Europe via the West Indies (Jones, 1876:67). This discussion continues a
debate that is still ongoing: namely, the origin of venereal syphilis, New
World or Old World (Baker & Armelagos, 1988; Dutour et al., 1994;
Merbs, 1992). Owing to the overlapping symptoms of the three syndromes
and the far more extensive study of North American skeletal remains than
of those from Europe or other areas of the Old World, the issue is unresolved.
Researchers generally agree that treponematosis was present in both the
Old World and the New World well prior to European contact (see Baker &
Armelagos, 1988). Cases of treponematosis are only sparsely documented
in Europe and Asia (e.g., Dutour et al., 1994; Henneberg et al., 1992;
Roberts, 1993; Rothschild & Rothschild, 1995; Suzuki, 1982-1984).
Treponematosis is widespread in Australia (Hackett, 1936, 1976; Webb,
1995) and in some areas of the Pacific (e.g., Marianas: Hanson, 1988;
Stewart & Spoehr, 1952; Stodder et al., 1992). The evidence for
treponematosis from the New World is quite abundant, the disease having
been identified both in a range of case studies and in population-level
differential diagnoses in North American contexts. In addition to earlier
studies (see review in Baker & Armelagos, 1988), examples of treponematosis
are associated with highly diverse settings, including southern coastal
California (Cybulski, 1980; Lambert, 1994; Walker & Lambert, 1989),
Northwest Coast (Cybulski, 1990), Great Plains (Schermer et al., 1994),
Southwest (Lahr & Bowman, 1992; Stodder, 1994; Stodder & Martin,
1992), and coastal Chile (Arriaza, 1995). In many contexts, the skeletal
manifestation appears somewhat intermediate between those of the two
modern endemic syndromes. Considering the evolutionary nature of
human infectious disease over centuries and across host populations of
differing genetic composition, these departures from the modern pattern
are not unexpected.
The preponderance of New World treponematosis data is from precontact
American Midwest and Southeast human remains. In the lower Illinois
River valley, a pattern of proliferative bone infection that is strongly
suggestive of endemic treponematosis rather than venereal syphilis is
present (Cook, 1976). There is a high frequency of tibial periostitis affecting
adult males and females alike, progressively increasing with age. The
prevalence is especially elevated in later prehistoric, maize-dependent
groups with high population density. Although the pattern is variable
across populations, it has been described in (mostly late) prehistoric human
remains from other groups in Illinois (Garner, 1991; Milner, 1983, 1992;
Milner & Smith, 1990), Arkansas (Powell, 1989), Louisiana (Lewis, 1994;
Robbins, 1978), Mississippi (Ross-Stallings, 1989), Kentucky (Cassidy,
1984), Tennessee (Eisenberg, 1986a, 1991a, 1991b), Alabama (Powell,
1986, 1988, 1991a, 1991b, 1992a), North Carolina (Bogdan & Weaver,
1992; Monahan, 1995; Reichs, 1989), Georgia (Powell, 1990, 1991a, 1992b,
1994), and Florida (Bullen, 1972, 1973; Dickel, 1991; Hutchinson, 1993;
Hutchinson & Mitchem, 1996; İşcan & Miller-Shaivitz, 1985; Miller-Shaivitz
& İşcan, 1991). Several of these case studies provide details on
lesion morphology and other characteristics that strongly suggest the
presence of some form of endemic treponematosis.

Figure 3.7. Stellate lesions on adult frontal (treponematosis); Tierra Verde,
Florida. (From Hutchinson, 1993; reproduced with permission of author and
John Wiley & Sons, Inc.)
Hutchinson (1993) found extreme proliferative periosteal apposition on
long bones (especially tibiae) and stellate lesions in crania from late
prehistoric and early contact contexts on the central Florida Gulf coast. For
example, in the sixteenth century component of the Tatham Mound
sample, three crania exhibited healed stellate lesions (Figure 3.7). Similar
lesions are present in crania from the postcontact Weeki Wachee Mound
and precontact components of the Safety Harbor and Tierra Verde sites.
Although saber-shin tibiae are also present in these samples, stellate scars
appear to be the best single criterion for endemic treponematosis in the
Florida Gulf coast region (Hutchinson, 1993; and see Milner & Smith,
1990, for a discussion of Norris Farms, Illinois). Similarly, detailed study of
the Mississippian-period Irene Mound series from north coastal Georgia
indicates the presence of the disease. In this series, lesion morphology and
prevalence are consistent with the presence of treponematosis (endemic
syphilis) (Powell, 1990, 1991a). These include cranial stellate scars, destruction
of portions of the nasal margin and maxillary palate, and numerous
tibiae with grossly expanded diaphyses and bone proliferation (Figure 3.8).

Figure 3.8. Periosteal inflammation of tibiae (treponematosis); Irene Mound,
Georgia. The tibia in the middle is nonpathological. (From Powell, 1990;
reproduced with permission of author and American Museum of Natural
History.)
Just how life-threatening or debilitating treponemal disease was in these
earlier societies is unknown. Due to the apparent endemic nature of the
disease, it may not have been a primary cause of mortality. Based on the
high degree of healing, Powell (1988) argues that populations at Moundville
had more or less successfully adapted to the disease. The presence of the
characteristic lesions indicates that the disease did impose a health burden
and resulted in no small amount of discomfort. In his description of native
populations in North Carolina in the early eighteenth century, the explorer
John Lawson noted that natives in the eastern part of the colony '... have a
sort of Rheumatism or Burning of the Limbs, which tortures them grievously,
at which time their Legs are so hot, that they employ the young People
continually to pour water down them ... This not seldom bereaves them of
their Nose. I have seen three or four of them render'd most miserable
Spectacles by this Distemper. Yet, when they have been so negligent, as to
let it run on so far without curbing of it; at last, they make shift to patch
themselves up, and live for many years after ...' (Lawson, 1967:231). These
descriptions correspond well with modern clinical descriptions of deep leg
pain and orofacial lesions (see Hackett, 1951; Hudson, 1958).
Some examples of nonspecific periosteal reactions observed in archaeological
skeletons are probably of treponemal origin, especially in individuals
who also display the classic lesions (e.g., stellate scars on crania and/or
highly proliferative bone on tibial diaphyses). In cases where distinctive
symptoms are absent, the delineation between nonspecific infection
(periostitis) and endemic treponematosis is difficult to determine.
The well documented paleopathological diagnoses of treponematosis in
North American archaeological settings make it clear that the disease was
well established in the New World prior to the arrival of Europeans. In the
lower American Midwest and Southeast, regions that are subtropical or
that experience seasonally high humidity and temperatures, there appears to
be a cline of skeletal expression from the hotter, more humid coastal regions
to the somewhat drier interior regions, resembling modern inter-regional
treponemal clines spanning 'classic' yaws and 'classic' endemic syphilis in
central and southern Africa (Basset et al., 1994; Froment, 1994; Grin, 1956).
Identification of Treponema DNA in prehistoric Chilean mummies (Rogan
& Lentz, 1994) is consistent with this interpretation.
Variation by sex or status in skeletal populations is difficult to assess,
especially given the vagaries of diagnosis. Where probable cases of
treponematosis have been identified, the prevalences in adult males and
females are broadly similar (e.g., Cook, 1976; Powell, 1988; although see
Powell, 1990). Powell's (1988, 1992b) investigation of the Moundville
skeletal series indicates no clear distinction between status groups. These
findings point to the widespread nature of the disease, but, as with
prevalence determination, it is not possible to determine precise frequencies
in subgroups of populations.
3.5.2 Tuberculosis
Skeletal tuberculosis involves a very different form of pathology from
treponematosis. Instead of producing proliferative bone apposition, tuberculosis
progressively destroys bone tissue, and is most commonly expressed
as erosive vertebral lesions of the lower back (lower thoracic and lumbar
vertebrae) and resorptive and slightly proliferative changes of the pleural
(internal) surfaces of ribs (Buikstra, 1981a; Ortner & Putschar, 1985;
Pfeiffer, 1991; Powell, 1988; Roberts et al., 1994; Steinbock, 1976). The
disease is caused by the acid-fast, gram-positive bacillus Mycobacterium
tuberculosis (Cotran et al., 1994). The primary mode of transmission is by
breathing airborne microbes, usually in droplets introduced by sneezing or
coughing (Smith & Moss, 1994).
The infection pathway is usually through the respiratory tract, resulting
in a primary infection in lung tissue and subsequent secondary infection in
regional hilar lymph nodes (Hopewell, 1994; Ortner & Putschar, 1985).
Over a period of years, the bacilli may then spread to skeletal tissues via the
circulatory system, with a propensity for hematopoietic marrow and
cancellous bone. The vertebrae, ribs, sternum, and (for subadults) long
bone metaphyses are especially favored sites of secondary infection,
because of the presence of a rich blood supply and the scarcity of
phagocytic cells (Hopewell, 1994; Ortner & Putschar, 1985; Thijn &
Steensma, 1990). Any bone or joint can be involved in tubercular infections
(Berney et al., 1972). The process can result in extensive destruction of
cancellous bone, most commonly in vertebral bodies (Figure 3.9). With the
loss of bone mass, vertebral bodies may collapse; the resulting severe
kyphosis is called 'Pott's disease' after the original description by Sir
Percivall Pott (1779). Only rarely are the transverse processes, pedicles,
laminae, or spinous processes of vertebrae affected.
The proliferative lesions on the pleural surfaces of ribs are presumed by
some to be associated with tuberculosis, but the cause remains unclear
(Kelley & Micozzi, 1984; Pfeiffer, 1991; Roberts et al., 1994). Examination
of skeletal remains from individuals with known cause of death shows that
tuberculosis sufferers are likely to possess these rib lesions, but this is not
always the case. Individuals with nontubercular pulmonary disease sometimes
have similar rib lesions. Therefore, not all rib lesions of this type should
be interpreted as diagnostic of tuberculosis (Pfeiffer, 1991; Roberts et
al., 1994).
Tuberculosis has been traced to at least 5000 years ago in the Old World
(Daniel et al., 1994; Steinbock, 1976). Most early authorities argued that
the disease was absent in the New World prior to European contact (e.g.,
Hrdlicka, 1909). The identification of acid-fast bacilli and soft-tissue
tubercular lesions (Allison et al., 1973, 1981), and especially M. tuberculosis
DNA in precontact Peruvian and Chilean mummies (Arriaza et al., 1995;
Salo et al., 1994), demonstrates its early prehistoric presence in the New
World.

Figure 3.9. Destructive lesions on bodies of thoracic vertebrae (tuberculosis);
Little Egypt, Georgia. (Photograph by Mark C. Griffin.)

The global distribution of the disease via detection of M. tuberculosis
DNA is also indicated in archaeological bone from Old World sites in
Turkey and northern Europe (Dixon et al., 1995; Spigelman & Lemma,
1993).
Destructive vertebral or proliferative rib lesions are identified in archaeological
skeletal series representing a diversity of groups worldwide,
including the Middle East (Baker, 1997; Buikstra et al., 1993; Morse, 1967;
Ortner, 1979; Strouhal, 1991), Denmark and northern Europe (Bennike,
1985; Formicola et al., 1987; Inglemark, 1939; Manchester, 1991; Waldron,
1993), Greece (Angel, 1984), Japan (Suzuki, 1991), and elsewhere (see
Ortner & Putschar, 1985). In the New World, a spate of reports documents
the presence of a disease strongly resembling tuberculosis in South America
(Allison, 1984; Allison et al., 1981; Arriaza et al., 1995; Buikstra &
Williams, 1991), American Southwest (El-Najjar, 1979; Fink, 1985; Martin
et al., 1997; Micozzi & Kelley, 1985; Stodder, 1994; Stodder & Martin,
1992; Sumner, 1985), Northwest Coast (Cybulski, 1978, 1990), Great
Plains (Mann et al., 1994; Palkovich, 1981; Williams, 1994; Williams &
Snortland-Coles, 1986), Midwest (Buikstra, 1977a; Buikstra & Cook, 1978,
1981; Cook, 1984; Katzenberg, 1977; Milner, 1983, 1992; Milner & Smith,
1990; Widmer & Perzigian, 1981), eastern Canada (Clabeaux, 1977;
Hartney, 1981; Pfeiffer, 1984, 1991; Pfeiffer & Fairgrieve, 1994; Saunders et
al., 1992), and American Southeast (Eisenberg, 1986a, 1991a, 1991b;
Murray, 1989; Powell, 1988, 1990, 1991a, 1991b, 1992a, 1992b; Rathbun et
al., 1980).
Most paleopathological studies report on isolated cases of tuberculosis
or tuberculosis-like infections. The detailed study of prevalence, pattern,
and lesion morphology in a limited number of series provides important
details on the history of the disease. In a comprehensive study of a temporal
sequence of human remains from the lower Illinois River valley in
west-central Illinois, biocultural reconstruction and interpretation of
change in pattern and prevalence of resorptive lesions indicates the
presence of tuberculosis (Buikstra & Cook, 1981; also Buikstra, 1977a;
Buikstra & Cook, 1978; Cook, 1984). Over the course of the sequence,
beginning in the Middle Woodland (150 BC-AD 400), followed by the Late
Woodland (AD 400-1050) and Mississippian (AD 1050-1400) periods, the
region saw an increase in population density and sedentism, especially
when maize agriculture became well established during the eleventh
century AD. No clear evidence of tubercular resorptive lesions is present in
the Middle or Late Woodland periods. In the Mississippian period there is
a clear reorientation in disease pattern. Human remains from the Mississippian
period Yokem and Schild mounds possess destructive vertebral body
lesions consistent with an etiology of tuberculosis. The Mississippian
period adults show high young-adult mortality and an equitable
distribution amongst adult males and females.
Blastomycosis, an opportunistic mycotic disease caused by the
fungal organism Blastomyces dermatitidis, may be an alternative diagnosis
in cases involving appendicular skeletal elements, especially since the
disease is endemic in the southeastern U.S. today and produces very similar
lesion morphology (e.g., Kelley & Eisenberg, 1987; although see Buikstra &
Williams, 1991). Some of the west-central Illinois lesions could also be due
to blastomycosis. The overall pattern is more likely to be representative of
tuberculosis, especially because resorptive lesions in the region are highly
age-specific (Buikstra & Cook, 1981).
The biocultural model and general pattern of skeletal involvement in the
context of increasing population density and sedentism is well illustrated
for a number of other localities. In the Averbuch series from Tennessee,
similar resorptive lesions are present in numerous vertebrae, but pelves,
lower long bones, and feet of many other individuals are also affected
(Eisenberg, 1986a, 1991a, 1991b). There is a profound disease burden in
this population: fully 30% of individuals display active resorptive lesions.
Comparison of demographic and anatomical patterning indicates the
presence of both tuberculosis and blastomycosis in the population. The
presence of lesions mostly in males indicates that the disease was probably
blastomycosis in some instances, rather than tuberculosis (Eisenberg,
1986a, 1991a, 1991b).
At Moundville, circumstances for the introduction and spread of
tuberculosis are similar to those in west-central Illinois (Powell, 1988,
1991a, 1991b). In comparison with Illinois, vertebral lesions are rare: only
three adults are affected in this manner, and only one adult displays classic
vertebral body destruction. No crania, hips, or knees possess resorptive
lesions consistent with tuberculosis. However, the presence of a large
number of pleural rib lesions indicates a broader presence of the disease in
this population.
At the Irene Mound site, three individuals show osteolytic vertebral
lesions, one individual has extensive destruction of the sacroiliac joint
without remodeling, two individuals have periostitis on the anterior
scapular bodies, and one individual has periostitis on the pleural aspect of
the sternum (Powell, 1990, 1991b). Additionally, eight of ten Irene Mound
individuals with other tubercular lesions have subtle periosteal apposition
on the pleural aspects of ribs. Although the skeletal changes at the Irene
Mound site are not as profound as those observed in the American
Midwest (cf. Buikstra & Cook, 1981; Milner & Smith, 1990), they
nevertheless fit the expected profile of tuberculosis.
The distribution of tuberculosis by sex in these groups is generally even
between adult males and females (e.g., Buikstra & Cook,
1981; Powell, 1988, 1992a). In the Irene Mound population, female
prevalence is twice that for males (Powell, 1991a), but this could be an
artifact of the composition of the sample (see Larsen, 1984, for discussion
of age bias). The only systematic analysis indicates no apparent relationship
between tuberculosis and rank in at least one setting in the American
Southeast (Powell, 1988).
3.5.3 Leprosy
Coexisting with tuberculosis in many regions of the world today is leprosy.
Like tuberculosis, leprosy (also called Hansen's disease) is a chronic
infection caused by the acid-fast, gram-positive bacillus Mycobacterium
leprae (Carmichael, 1993; Ortner & Putschar, 1985). The bacilli are
transmitted either by inhalation or by direct contact into an open wound
from an infected individual (Davey, 1974). Unlike tuberculosis, leprosy is not
readily communicable; most who acquire the infection have been in
prolonged contact with infected individuals. The incubation period is very
long, extending over the course of years or even decades (Steinbock, 1976).
The disease is usually not fatal. As discussed below, in advanced stages it is
accompanied by severe disfigurement of the body.
After initial infection, M. leprae multiplies slowly, usually in the sheaths
of peripheral nerves. The primary stage of the disease involves a loss of
sensation due to inadequate innervation. Therefore, minor damage to the
skin - as in a cut or a scrape - does not elicit a pain response. Owing to the
poor blood supply and repeated injury of affected tissues, healing is
hampered, resulting in localized infection. Over the course of time, various
parts of the body, especially toes, fingers, and nasal tissue, are disfigured or
lost entirely. The disease affects the skeleton in advanced cases (see below).
Today, the disease is limited mostly to tropical and subtropical regions of
Africa, Asia, and South America, but in the past it was probably much
more widespread, extending as far north as the Arctic Circle (Carmichael,
1993). Its presence in the Old World is confirmed by identification of
leprosy mycobacterium DNA in osteological tissues (Rafi et al., 1994;
Spigelman & Lemma, 1993).
Unlike treponematosis and tuberculosis, leprosy was not present in the
New World prior to European contact. In all likelihood, the disease was
introduced to the New World during the early colonial era (Ortner &
Putschar, 1985).
Owing largely to the work of the Danish physician and paleopathologist
Vilhelm Møller-Christensen (1961, 1978) on human remains recovered
from Medieval leper cemeteries in Denmark (and see Andersen, 1969), the
skeletal manifestations of leprosy are well delineated. His pioneering
studies contributed to the modern diagnosis of the disease in living
populations.
Excavations at St. Jørgensgård, a Danish church cemetery (ca. AD
1250-1550) near Næstved, resulted in the recovery of some 650 skeletons
(Møller-Christensen, 1961, 1978). This is a unique series for bioarchaeological
investigation, because all individuals represent those who had
been admitted into the leprosy hospital in order to isolate them from other
members of the population at large (Møller-Christensen, 1978).

Figure 3.10. (a) Alveolar atrophy (leprosy); St. Jørgensgård, Næstved,
Denmark. (From Møller-Christensen, 1978; reproduced with permission of
Odense University Press.) (b) Metatarsal atrophy of left foot; St. Jørgensgård,
Odense, Denmark. (Photograph by Kirsten Anderson; reproduced with
permission.)

Møller-Christensen's extensive studies of this series revealed a distinctive
'facies leprosa' skeletal syndrome involving atrophy of the nasal and
maxillary regions, alveolar resorption, and anterior tooth loss (Figure 3.10).
Additionally, hand and foot elements are atrophied and shortened (Andersen
et al., 1992) (Figure 3.10). Similar pathology has recently been identified in a
survey of a large skeletal series recovered from the St. Jørgensgård leprosy
cemetery near Odense, Denmark (K. Anderson, personal communication).
Other skeletal pathology found in leprous individuals includes cribra
orbitalia, periostitis on tibiae and fibulae, and maxillary sinusitis (Andersen,
1969; Boocock et al., 1995; Møller-Christensen, 1978), although these
conditions are not symptomatic of leprosy by themselves. Increased
prevalence of maxillary sinusitis, for example, appears to accompany
elevation in air pollution and the confines of urban living in later Medieval
England (Lewis et al., 1995). Some individuals display dental changes
whereby the crown bases of maxillary incisors are concentrically constricted
(Roberts, 1986). The presence of malformed teeth indicates that the
infection occurred early in childhood.
Other probable examples of leprosy from archaeological contexts are
from mostly isolated skeletons from Great Britain, Poland, Nubia, the
Near East, and possibly other localities (see Boocock et al., 1995; Lewis et
al., 1995; Manchester, 1991; Manchester & Roberts, 1989; Møller-Christensen
& Inkster, 1965; Ortner & Putschar, 1985; Steinbock, 1976;
Zias, 1991), but the bioarchaeological evidence is far less profuse in these
regions than in Denmark. Manchester and Roberts (Manchester, 1991;
Manchester & Roberts, 1989) assessed skeletal, archaeological, and archival
evidence of leprosy in Britain and conclude that, as in Scandinavia,
the disease was endemic. They argue that following the introduction of the
endemic form of leprosy, perhaps by the late Roman period, the disease
increased in prevalence, peaking during the thirteenth century or somewhat
later. Leprosy then declined and disappeared by the end of the fifteenth
century, although it remained in Scandinavia much later than in other
regions of Europe. The general pattern is similar to treponematosis and
tuberculosis in that prevalence increases with elevation in population size,
density, and interpersonal contact. The reasons for the disappearance of
leprosy remain obscure. Increased immunities due to centuries of exposure
to the disease and improvements in hygiene and living conditions may have
contributed to its decline (Carmichael, 1993; Manchester, 1991).
There are no distinctive sex differences in leprosy in the Næstved adults
studied by Møller-Christensen. Status differences in leprosy prevalence are
difficult to determine in this sample. Møller-Christensen did not compare
possible status differences (e.g., exterior church vs. interior church burials).
However, a high-status young adult male buried in the Næstved church
choir displayed the classic symptoms of leprosy. Clinical evidence indicates
that leprosy can occur in anyone, but malnutrition is a predisposing factor
(Keil, 1933).
3.6 Summary and conclusions
In the New World, infectious disease appears to be relatively more
common in late prehistoric settings than in earlier periods. A similar
pattern is expressed in Medieval Europe in comparison with pre-Medieval
Europe. In general, these increases are linked with increased population
size and aggregation, mostly in agricultural, agropastoral, and urbanized
societies. Poor dental health (dental caries) is related to dietary factors and,
to a lesser extent, use of the masticatory complex in stressful activities and
tooth use. Skeletal infections are especially prevalent in populations living
in densely settled communities. Thus, it should come as no surprise that
treponematosis, tuberculosis, and leprosy - as well as elevated levels of
nonspecific bone infections - are present in these settings. In Medieval
Europe, the co-occurrence of leprosy and tuberculosis reflects a similar
deterioration in living standards.
These general characteristics support the contention that infectious
disease as it is expressed in osseous remains is essentially density-dependent.
These diseases are opportunistic in that individuals exposed to
them are already stressed by poor diets and are at high risk for early death.
In at least some of these settings, poor diets probably exacerbated already
compromised health, increased density of population enhanced disease
transmissibility, and poor sanitation increased the burden of disease.
The presence of skeletal indicators of infection, both nonspecific and
specific, is indicative of long-term responses to pathogens. In a sense,
therefore, the lesions reflect a vigorous immune response - the individual
survived the initial pathogenic attack long enough to elicit a skeletal
response (e.g., Ortner, 1991; Powell, 1988). However, the presence of these
lesions in high frequencies in many of the groups discussed in this chapter
also reflects an elevated disease burden and a generally negative impact on
adaptation and health (cf. Goodman & Armelagos, 1989).
The data on infectious disease, skeletal and dental, reflect social
dynamics within populations. For example, caries rate differences between
males and females in many societies indicate differential access to foods and
variability in dietary behavior. Differences in patterns of health by rank or
status in some groups reflect probable differences in quality of life and
health generally. Regardless of how infectious disease affected specific
components of the populations, the overall impact was negative - affected
individuals probably had a reduced ability to acquire key resources (e.g.,
food) and to participate in essential work activities, and may well have had
shortened lifespans. Individuals affected by infectious disease carried heavy
social burdens. With regard to leprosy in Medieval Europe, the disease was
highly feared, and individuals with the disease were considered as living
dead, to be isolated, forgotten, and removed from society
(Moore, 1987).
Various causal factors discussed in this chapter underscore the fact that
infectious disease has a varied etiology. To be sure, specific pathogens are
linked with nonspecific and specific infectious diseases, and are identified as
their 'causes'. However, even when hosts are infected by these pathogens,
actual disease transpires only when pathogen virulence coincides with host
susceptibility in a conducive environment.
The prevalence of skeletal lesions in an archaeological population does
not show a direct one-to-one correlation with actual prevalence in a living
population. For example, tuberculosis was highly prevalent in some
preantibiotic groups, but it only rarely spread to the skeleton (Roberts et
al., 1994), generally reported in 3 to 7% of cases (reviewed in Milner &
Smith, 1990). Some archaeological series show somewhat higher prevalences
(e.g., Buikstra & Williams, 1991; Eisenberg, 1986a; Milner & Smith,
1990). Additionally, it is important to distinguish between disease and
infection. Disease prevalence - whether drawn from living or extinct
populations - may represent only a small part of the total picture of
infection. Similarly, the risks outlined in this chapter for infection may
differ considerably from the factors that ultimately influence and determine
whether disease will develop from the initial infection.
Diseases involving skeletal tissues must have contributed significantly to
the burden of ill health in many earlier societies, just as they do today.
Various segments of populations may have been affected by infectious
disease differently, but the exact experience was always mediated by local
environmental, cultural, social, and behavioral circumstances.
4 Injury and violent death
4.1 Introduction
Investigation of injury morbidity and mortality facilitates the assessment
of environmental, cultural, and social influences on behavior. Many
injuries are not identifiable in human skeletons, and accidental death is
virtually invisible in the archaeological record except under special
circumstances, such as building collapse or natural disasters (e.g., Cicchitti,
1993; Deiss, 1989; Palkovich, 1980; Sakellarakis & Sapouna-Sakellaraki,
1991). Nevertheless, osteological remains provide a highly useful index for
assessing accident and violence in a wide variety of circumstances
(Courville, 1962; Jimenez, 1994; Knowles, 1983; Manchester, 1983; Merbs,
1989a; Ortner & Putschar, 1985; Walker, 1997a, 1997b; Webb, 1995).
There is an abundance of skeletal injury data presented in the osteological
literature. The sparseness of a population perspective in this literature,
however, precludes the realization of the enormous potential that these
kinds of data have for drawing inferences about human behavior and
conflict situations in earlier societies (Burrell et al., 1986; Kennedy, 1994;
Milner et al., 1991; Roberts, 1991; Walker, 1997b). In addition to this
shortcoming, several problems encumber the study of injury in archaeological
skeletal remains. Chief among them is the confusion that sometimes
arises between skeletal damage originating from accidental or violent
causes vs. damage having nothing to do with past human behavior. For
example, damage to bone produced by shovels, trowels, and other
equipment during archaeological excavation can produce marks on bones
that mimic blade-induced cutmarks (Milner et al., 1994; Smith, 1997). This
confusion has led to fanciful reconstructions of conflict and its consequences
for well-being (e.g., Blakely & Mathews, 1990). Other postdepositional
alterations to bone that are potentially confused with cutmarks and
other forms of trauma include cracking and weathering, root stains, and
small carnivore and rodent damage (Smith, 1997). The application of
methods developed in the taphonomic and forensic sciences adds much
needed rigor to the identification of injury and violence (e.g., Berryman &
Haun, 1996; Kennedy, 1994).
Because both perimortem fracture - fracture occurring at or around the
time of death - and postmortem breakage show no evidence of remodeling,
the two are difficult to distinguish. This problem is well illustrated by
controversies surrounding the study of a number of skeletal samples (e.g.,
Schimmer, 1979; and comments by Constandse-Westermann, 1982). Various
forms of skeletal trauma are identified by osteologists, ranging from
self-inflicted injuries (e.g., Tyson, 1977) to trauma involving excision of
pieces of cranial vault (trephination or trepanation) (e.g., Bennike, 1985;
Hrdlicka, 1941; Lipták, 1983; Lisowski, 1967; Margetts, 1967; Merbs,
1989a; Parker et al., 1985-1986; Romero, 1970; Stewart, 1957; Stone &
Miles, 1990; Webb, 1988, 1995; and many others) and ablation and
mutilation of teeth (reviewed in Milner & Larsen, 1991). This chapter
focusses on the behavioral interpretations of accident and violence, since
most skeletal trauma can be attributed to one or the other.
4.2 Accidental injury
The general lifestyle conditions of past societies are revealed by the
assessment of skeletal injuries in archaeological human remains. Prevalence
of skeletal injuries by element type, as well as regional temporal
patterns, gives insight into the influence of different lifestyles. These injuries
can have potentially serious consequences, resulting in impaired function
and a propensity for secondary arthritis, and can also result in death via
infection or blood loss.
4.2.1 Elemental patterns
Clinicians identify a variety of injuries relating to accident, such as
fractures of the lower leg (tibia, fibula), clavicle, ribs, upper arm (humerus),
and hip (especially the femoral neck) (Magnuson, 1942; Ortner & Putschar,
1985; Zimmerman & Kelley, 1982). These patterns are also present in
archaeological human remains (e.g., Ortner & Putschar, 1985). Many
human groups show a high prevalence of fractures of radii and ulnae, or a
higher prevalence involving these elements compared with other bones
(e.g., Cybulski, 1992; Dickel, 1991; Grauer & Roberts, 1996; Kaplan et al.,
1977; Molleson, 1992; Ortner & Putschar, 1985; Rose, 1985; Sandzén,
1979; Todd, 1927; Ubelaker, 1981). Colles's fractures (distal radius)
typically occur when an individual attempts to break a fall by thrusting
the arms forward (Figure 4.1).
Figure 4.1. Medial view of distal right radii: healed Colles's fracture (right);
normal (left); anatomical specimens. (Photograph by Paul Braly.)

Parry fractures - fractures involving the middle or distal diaphysis of the
ulna and/or radius - are also common in a wide variety of human
populations (e.g., Angel, 1974; Bassett, 1982; Bennike, 1985; Brothwell,
1961; Burrell et al., 1986; Elliot Smith & Wood Jones, 1910; Jurmain et al.,
1994; Lovejoy & Heiple, 1981; Møller-Christensen, 1958; Smith, 1990;
Stewart, 1974; Ubelaker, 1981; Webb, 1989, 1995). As the name indicates,
parry fractures are usually interpreted as resulting from the individual's
attempt to ward off a blow directed at the head or upper body (e.g., Angel,
1974; Armelagos, 1969; Elliot Smith & Wood Jones, 1910; Jurmain, 1991;
Lahren & Berryman, 1984; Manchester, 1983; Pietrusewsky & Douglas,
1994; Salib, 1967; Webb, 1989, 1995; Wells, 1982; Wood, 1979).
If the forearm is used to protect the head from injury, then forearm
diaphyseal fractures and cranial injuries should coincide. In order to
demonstrate an association between forearm and cranial injuries, Smith
(1990; and unpublished manuscript) determined frequencies of forearm
and cranial injuries in a sample of prehistoric skeletons (n = 1695) from
Tennessee. In this series, ulnar and radial fractures occur with some
frequency, but head and face injuries are extremely rare. These findings
suggest that, at least in this population, forearm fractures may have been
caused by accidents and not by aggressors (Smith, 1990; and unpublished
manuscript).
Conversely, crania from the Santa Barbara Channel Islands have a very
high frequency of violence-related traumatic injuries (18.3%; 138/753; and
see below) and few forearm fractures (2.6%; 9/350) (Lambert, 1994).
Forearm fractures are equally distributed throughout the temporal
sequence, whereas cranial trauma has a distinctive peak frequency in
the Early Middle period (ca. 1490 BC-AD 1150). As in the series from
Tennessee, these findings suggest that the attribution of forearm fractures
to parrying is better understood in relation to broad patterns of skeletal
trauma rather than a single cause such as protection of the head.
Individuals in some settings probably did use the arm to avert blows to
the head, thus resulting in forearm injury. For example, in the Honokahua
precontact sample from Hawaii (Pietrusewsky & Douglas, 1994) and in a
number of samples from Australia (Dinning, 1949; Pretty & Kricun, 1989;
Webb, 1989, 1995) and Nubia (Elliot Smith & Wood Jones, 1910), left
forearm fractures have a considerably higher frequency than right forearm
fractures (e.g., Honokahua: 68.8% left vs. 31.3% right). Hand-to-hand
fighting is well documented in native Australians (Webb, 1989, 1995).
Parrying shields were widely used for protection against clubs (Massola,
1963). Kricun (1994) notes that parrying fractures could have arisen either
from direct blows to the forearm or from blows hitting the shield with the
forces transmitted to the ulna. These observations, coupled with the high
levels of cranial trauma, indicate the strong likelihood that parry fractures
arose in conflict situations in these settings (e.g., Hawaii, Australia,
Nubia).
Forearm or upper arm fractures are infrequent in a number of populations.
Romano-British remains from Cirencester, England, have a remarkably
high frequency of rib fractures - in both males (25/57) and females
(3/7) - but few arm fractures (Wells, 1982). The contemporary population
from Poundbury (Dorset) has a high prevalence of tibia-fibula spiral
fractures (Molleson, 1992). Molleson (1992) suggests that these fractures
may have been sustained in rural, agrarian activities, especially when feet
were caught in plow furrows and lower leg bones were fractured in
subsequent falls.
In eighteenth and nineteenth century Britain, fracture rates appear to
have been considerably lower than at either Cirencester or Poundbury,
especially in densely populated, urbanized settings. In contrast to the
Cirencester remains, only 9.4% of men and 4.6% of women in the
Spitalfields (London) population possess fractured bones (Waldron, 1993).
As in the Cirencester series, most Spitalfields skeletal trauma involves
traumatic injuries to the ribs (34/54).
Injury prevalence in some other settings is very different from that in
urban Britain. In Ecuador, accidental injuries increased from a low in the
earliest prehistoric period (9%; preceramic, preagricultural) to a high in a
colonial urban population (29%; Quito) (Ubelaker, 1994). Ubelaker (1994)
contends that the increase in trauma reflects the hazards of urban living.
The perils of urban living are also well documented in a nineteenth
century Euroamerican series from Belleville, Ontario (Jimenez, 1994).
Adult males and adult females show a very high prevalence of skeletal
injuries arising from accidents (males, 46.6%; females, 27.9%). These
frequencies reflect the arduous nature of industrialization associated with
the development of urban life in Upper Canada.
4.2.2 Case studies: assessing the difficulty of lifeway in Shanidar
Neandertals and Sudanese Nubians
Study of accidental injury in single populations or temporal series within
specific geographical settings facilitates the identification and interpreta-
tion of patterns of behavior and lifestyle. The two following well studied,
but very different, populations illustrate the significance of traumatic injury
and behavioral interpretation.
Shanidar Neandertals
The presence of severe injuries attests to the harshness of archaic Homo
sapiens lifeways in this and other late Pleistocene groups (and see Chapter
5). Traumatic lesions have been found in European and western Asian
Neandertals since the first upper limb fracture was reported in the
mid-nineteenth century (Schaaffhausen, 1858; and see Berger & Trinkaus,
1995). Traumatic injuries are common in late archaic Homo sapiens -
virtually every relatively complete adult Neandertal skeleton older than 25
to 30 years displays some type of injury (Berger & Trinkaus, 1995).
The Shanidar sample presents a highly distinctive picture of injury in late
Pleistocene hominids. Several of the six adult skeletons have skeletal
trauma, mostly of accidental origin (Stewart, 1977; Trinkaus & Zimmerman,
1982).

Figure 4.2. Atrophy and healed fracture of right humerus; Shanidar, Iraq. The
left humerus is normal. (Photograph and copyright by Erik Trinkaus;
reproduced with permission.)

Shanidar 1, a mature adult male, has multiple injuries. The
bones of the right upper arm and shoulder are less than half the size of the
bones of the left upper arm and shoulder (Figure 4.2), which may have been
caused by either childhood nerve damage or adult disuse atrophy following
a severe injury (Trinkaus & Zimmerman, 1982). The diminutive right
humerus has two fractures, both at the distal end. One of the fractures may
be one side of a false joint (pseudarthrosis) or evidence of an amputation
(Stewart, 1977; Trinkaus & Zimmerman, 1982). This individual also
displays evidence of severe cranial trauma involving extensive scarring of
the frontal and a crushing fracture of the left eye orbit. All postcranial and
cranial injuries were completely healed at the time of death.
Other accidental trauma includes a rib fracture in Shanidar 4, a mature
adult male. The presence of a large callus at the fracture site, but with
exposed trabecular bone, indicates that healing was incomplete at the time
of death. Finally, like Shanidar 1, another adult male (Shanidar 5) has
frontal bone trauma. The injury was completely healed at death.
The Shanidar cranial injuries are part of an overall pattern of head and
neck injuries in European and western Asian late archaic Homo sapiens
(Berger & Trinkaus, 1995). Nearly one-third of these hominids have head
and neck trauma, which is more than twice the prevalence of a recent
clinical sample from New York (Berger & Trinkaus, 1995) (Table 4.1).
Survey of traumas associated with a variety of occupations in recent
humans indicates that American rodeo athletes also have a high prevalence
of head and neck injuries relative to other regions of the body (Berger &
Trinkaus, 1995). The pattern in rodeo athletes reflects the dangers of riding
highly irritated animals (e.g., Bos taurus, Equus caballus); head and neck
injuries in rodeo athletes result from impacts after being thrown from these
animals. By logical extension, the high prevalence of head and neck injuries
in Neandertals relates to hunting activities, especially involving encounters
with medium-sized ungulates (Berger & Trinkaus, 1995). Use of spears
would have necessarily placed hunters in close proximity to, and hence in
danger of bodily injury from, their enraged prey.
Sudanese Nubians
Temporal comparisons reveal important trends in accidental injury pat-
terns in recent humans (and see below). In the Wadi Halfa region of
Sudanese Nubia, fracture prevalence increased in the Christian period (AD
550-1300) relative to earlier periods (Armelagos, 1969; Armelagos et al.,
1981). Within the Christian period in Kulubnarti, Nubia, there is a general
increase in fracture prevalence from 18% to 23% in the early (AD 550-750)
to late (AD 750-1550) periods, respectively (Burrell et al., 1986; Jurmain et
al., 1994; Kilgore et al., 1997; Van Gerven et al., 1995). Upper limb
fractures show an especially pronounced increase (30%) (Van Gerven et al.,
1995).
Increase in skeletal trauma in juveniles and old adults is more pro-
nounced than in other age groups in the late Christian period at Kulub-
narti. These age-specific increases in fracture prevalence may be due to
elevated risks of living in two-storey houses in the late period vs. one-storey
houses in the early period (Burrell et al., 1986). Access to the living area on
the second storey in late-period houses was gained by use of a retractable
ladder, which may have caused falls and other types of accidents (Burrell et
al., 1986).
Table 4.1. Distributional frequencies (%) of traumatic lesions by
anatomical region: Neandertals, recent archaeological samples (Bt-5,
Libben, Nubia), clinical samples (London, New York, New Mexico),
and rodeo athletes. (Adapted from Berger & Trinkaus, 1995: Table 2.)

Group                  Head/neck  Trunk  Shoulder/arm  Hand  Pelvis   Leg  Foot
Neandertal (n = 17)         29.6   14.8          25.9   3.7     3.7  11.1  11.1
Bt-5 (n = 223)               1.8   51.1          22.4   6.3     3.1   9.0   6.3
Libben (n = 94)              6.4   21.3          29.7   0.0     0.0  39.4   3.2
Nubia (n = 160)             10.6    6.9          53.1   1.9     3.8  22.6   1.3
London (n = 1730)            6.2    7.0          31.6  24.4     0.2  23.6   7.0
New York (n = 11959)        13.7   12.3          25.3  21.9     0.5  20.6   5.6
New Mexico (n = 792)         1.6   12.5          23.1  23.6     2.1  11.1  25.9
Rodeo (n = 181)             39.2    9.9          25.9   6.1     3.3   6.1   9.4
A significant portion of the Kulubnarti individuals possesses multiple
injuries (25%). This unusually high prevalence of accidental injuries,
coupled with the presence of numerous and severe fractures, reflects the
hazards of living in a very difficult terrain. Unlike Lower Nubia to the
north, cultivated areas in Upper Nubia are highly constricted and are
limited to small pockets of flat land immediately adjacent to the Nile River.
Individuals living at Kulubnarti would have been exposed on a daily basis
to difficult walking conditions. The adoption of defensible architecture
later in the Christian period (e.g., two-storey houses) may have also placed
individuals at increased risk of injury.
Most Kulubnarti fractures are in the forearm (75% of fractures) (Kilgore
et al., 1997). Aggressive interactions may have contributed to some of the
fractures, but the virtual lack of cranial trauma and weapon wounds makes
this explanation unlikely (Kilgore et al., 1997). Therefore, the distinctive
pattern of forelimb involvement suggests that fractures resulted from falls
and not interpersonal aggression.
4.2.3 Temporal trends and association of accidental injury with
subsistence strategy
Aside from gross comparisons of fracture prevalence in human popula-
tions, there is surprisingly little information on temporal trends in recent
humans. Angel (1974) attempted to document temporal trends by compari-
son of post-Mesolithic archaeological skeletons (n = 2125) and modern
Euroamerican samples. This comparison reveals several general character-
istics of accidental injury: adult males have more injuries than adult
females; older adult females have more fractures than older adult males (see
also Buhr & Cooke, 1959); and recent populations have far more fractures
than earlier populations. The higher frequency of skeletal injury in adult
males than females in Angel's samples probably reflects a greater exposure
of men to trauma. The reversal in older adults indicates the effects of bone
loss, especially in postmenopausal women. Reduced bone mass predis-
poses an individual to fracture, especially in the femoral neck region, where
there is relatively little cortical bone to begin with (cf. Garn, 1970; Stini,
1990). Angel (1974) contends that the higher fracture prevalence in the
modern samples results from the hazards associated with twentieth century
technology and urban living, such as the reliance on automobiles, walking
on staircases, and urban crowding.
Other findings drawn from the study of skeletal series in the North
American Eastern Woodlands, Southwest, and Texas indicate a general
decrease in individuals affected by postcranial fracture in comparison of
preagricultural hunter-gatherers and later agriculturalists (Steinbock,
1976). These findings suggest that, in general, forager lifestyles may have
been more dangerous than agricultural lifestyles. However, the hunter-
gatherer sample is largely drawn from the Archaic period (4000-2000 BC)
Indian Knoll site, thus limiting comparisons to mostly a single locality. The
samples are reported by the ratio of number of fractures to number of
individuals, making it difficult to assess fracture patterns and prevalence.
Thus, the low prevalence in the later populations may simply reflect the
incompleteness of skeletons rather than the true prevalence of traumatic
injury.
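The distortion introduced by incomplete skeletons is easy to demonstrate.
The following Python sketch, with invented counts, contrasts the crude
fractures-per-individual ratio used in such reports with an element-based
prevalence computed against the number of observable bones.

    # Invented counts: two samples with the same true injury level, but
    # sample B skeletons are only half as complete as sample A skeletons.

    def crude_ratio(fractures, individuals):
        # Fractures per individual, ignoring skeletal completeness.
        return fractures / individuals

    def element_prevalence(fractured, observable):
        # Fractured elements per element preserved well enough to score.
        return fractured / observable

    print(crude_ratio(20, 100), crude_ratio(10, 100))
    # 0.2 vs. 0.1: sample B appears to have half the trauma of sample A.
    print(element_prevalence(20, 400), element_prevalence(10, 200))
    # 0.05 vs. 0.05: element-based prevalence shows they are identical.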
Consistent with Steinbock's (1976) assessment is a significant decrease in
forearm trauma (from 4.0% to 0.6%) in a comparison of early prehistoric,
Archaic period foragers with late prehistoric maize agriculturalists in the
Tennessee River valley (Smith, 1990). These findings suggest that foragers
in this setting led a more hazardous lifestyle than did their agricultural
descendants, at least with respect to the kinds of conditions resulting in
forearm trauma (Smith, 1990).
Other Southeastern prehistoric maize agriculturalists show generally low
prevalence of accidental injuries. At Moundville, for example, the total
frequency of bones affected is only 0.7% (Powell, 1988). Many of these
fractures are associated with lower-status individuals; no high-status adult
females have accident-related traumatic injuries (Powell, 1988). Therefore,
at least in this setting, sedentary populations had a relatively accident-free
lifestyle, and high-status elite women may have been spared from activities
resulting in accidental injury altogether (lack of elite males in the skeletal
series precluded their assessment). In contrast, at the late prehistoric
Mississippian site of Chucalissa, Tennessee, high-status males have far
higher frequency of fractures than low-status males or high-status females
(Lahren & Berryman, 1984). Unlike in the Moundville population, most of
the injuries can be attributed to violence (e.g., cranial depressed fracture).
Late prehistoric intensive agriculturalists from the Dickson Mounds site
show an increase in fracture prevalence compared with earlier less intensive
agriculturalists (Goodman et al., 1984). This suggests that injury risk
increased with the shift to more intensive farming in this setting of the
American Midwest. This exception to the aforementioned pattern of
reduction in accidental injuries reflects the high variability of injury, both
temporally and spatially, across large regions.
A high prevalence of accidental trauma in hunter-gatherers is also well
documented in prehistoric Australian populations (Webb, 1989, 1995).
Although prevalence of postcranial fractures is generally low in these
foragers, many regions of the continent show exceptionally high frequen-
cies of trauma, especially in the forearm. For example, female left ulnae
have fracture frequencies of 19% for both the southern and the eastern
coastal regions. Some forearm fractures are probably due to violence
(parry fractures) (Webb, 1989, 1995), but their presence also reflects the
generally difficult nature of hunter-gatherer lifeways in specific regions of
Australia.
4.2.4 Age patterns
Anthropological and epidemiological studies emphasize the link between
age and accidental injury in human populations. Age patterning in
archaeological samples is difficult to establish, because a healed fracture
represents an injury that could have occurred months or years before the
time of death. Industrialized societies express a high frequency of fracture
in older adults (e.g., Buhr & Cooke, 1959), in large part due to age-related
bone loss (Stini, 1990; see Chapter 2). Study of fracture prevalence in the
late prehistoric Libben series expresses a very different pattern from that in
recent urban populations (Lovejoy & Heiple, 1981). Survivorship of
individuals with and without one or more fractures in the prehistoric
population is indistinguishable. Analysis of the number of years at risk
reveals that fracture rates peaked in two age groups: adolescence/young
adulthood (15-25 years) and old adulthood ( > 45 years). This pattern may
reflect an elevated risk of trauma due to warfare and conflict, especially for
the younger group. Lovejoy & Heiple (1981) note that women and men were
equally affected by traumatic injury, indicating that the elevated rates were
due to accidents associated with activities such as hunting forays and travel
(in adults) or play (in juveniles).
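The years-at-risk logic can be sketched as a simple rate calculation. The
Python fragment below uses invented numbers purely for illustration; the
point is that dividing fractures by the person-years lived within each age
interval, rather than by counts of individuals, is what exposes the two
peaks.

    # Invented data: (age interval, person-years lived in the interval by
    # the whole sample, fractures judged to have occurred in it).
    intervals = [
        ("0-14",  3000, 6),
        ("15-25", 1800, 14),
        ("26-44", 2200, 7),
        ("45+",    900, 8),
    ]

    for label, person_years, fractures in intervals:
        rate = 1000 * fractures / person_years  # per 1000 person-years
        print(f"{label:>6}: {rate:4.1f} fractures per 1000 person-years")
    # Rates peak in the 15-25 and 45+ intervals, mirroring the
    # adolescent/young adult and old adult peaks described above.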
4.3 Intentional injury and interpersonal violence
All human societies experience physical confrontation of one sort or
another at some point in time. This universal characteristic of humankind
is abundantly represented by archaeological evidence such as fortifications,
defensible site locations, settlement pattern, weaponry, and iconographic
and symbolic representations involving weapons, places, and people in
conflict (e.g., Avery, 1986; Campbell, 1986; Haas, 1990a; Haas & Creamer,
1993; Keeley, 1996; Larson, 1972; Lichtheim, 1973; Maschner, 1992;
Redmond, 1994; Schulman, 1982; Steponaitis, 1991; Wilson, 1987; and
many others). These site characteristics usually identify only the threat of
conflict and not its actual outcome. Ethnographic observation can provide
an important source of information on violence and aggression in human
societies (e.g., various authors in Burbank, 1994; Ferguson, 1995; Haas,
1990b; Montagu, 1978; Redmond, 1994). Harmony and cooperativeness
are sometimes emphasized by anthropologists for many nonliterate societies,
but this appears to be very different from reality (e.g., Berndt, 1978; Ember,
1978; Erchak, 1996; Fienup-Riordan, 1994). Skeletal injuries represent
clear testimony to conflicts between once-living individuals. Thus, archae-
ological skeletons are regarded as the only direct evidence of violent
interaction.
The skeletal and paleopathological literature on violence and injury is
largely dominated by descriptions of limited samples, such as individual
instances of arrow wounds (e.g., Armendariz et al., 1994; Bovee & Owsley,
1994; Lewis & Lewis, 1961; Pryor, 1976), decapitation (e.g., Bennike, 1985;
Harman et al., 1981; McKinley, 1993; Newman & Snow, 1942; Rose &
Hartnady, 1987; Smith, 1993; Ubelaker, 1988; Wakely & Bruce, 1989;
Webb, 1974; Wood Jones, 1908), dismemberment (e.g., Brothwell, 1971;
Smith, 1993; Snow & Fitzpatrick, 1989; Webb, 1974), sacrifice and ritual
killing (e.g., Bennike et al., 1986; Fowler, 1984; Pijoan & Mansilla Lory,
1997; Stead et al., 1986), and mutilation, especially scalping (e.g., Allen et
al., 1985; Hamperl & Laughlin, 1959; Hoyme & Bass, 1962; Lesley, 1995;
Miller, 1994; Neumann, 1940; Ortner & Putschar, 1985; O'Shea & Bridges,
1989; Smith, 1995), and cranial depressed fractures or other forms of head
injury (e.g., Haas & Creamer, 1993; Lux, 1994; Manchester & Elmhirst,
1980; Wenham, 1989; and many others). Because many of these studies
usually involve the investigation of only one or several skeletons, they
frequently provide limited information for inferring conflict behavior in the
populations from which they were drawn.
New population-oriented approaches in bioarchaeology are revealing
the importance of skeletal data for documenting patterns of violent
behavior, ranging from interpersonal conflicts to full-scale warfare. In the
following discussion, a series of studies are assessed that collectively
illustrate the enormous variation in skeletal evidence of conflict in past
populations. This discussion is not intended to be either comprehensive or
synthetic; rather, a representative sample of various kinds of skeletal
injury useful for identification of patterns of conflict is discussed. These
studies are drawn from a diverse set of geographical and cultural settings,
including the American Midwest (Norris Farms, Riviere aux Vase), Great
Plains (McCutchan-McLaughlin, Crow Creek, Larson Village), Southeast
(Koger's Island), southern California Pacific coast (Santa Barbara Chan-
nel islands), Arctic (Kodiak Island, Admiralty Island, Saunaktuk), Ameri-
can Southwest (Anasazi) and Mexico, Easter Island, and Australia. Study
of human remains from these settings provides important perspectives on
violence in nonliterate tribal and chiefly societies, mostly before contact
occurred with Western populations. Lastly, a discussion follows of
patterns of northern European violence, ranging from interpersonal
conflict to large-scale preindustrial warfare and execution, and the subse-
quent spread of European forms of violence to the New World and
military campaigns in North America. These investigations illustrate the
variability of traumatic injury in skeletal remains as well as the impact of
violence on different components of the populations involved (e.g.,
gender, age, status) within a specific time period or consequent to major
adaptive shifts.
4.3.1 American Midwest
Norris Farms
Study of skeletons recovered from a cemetery near a late prehistoric,
Oneota culture (ca. AD 1300) village on bluffs overlooking the Illinois River
floodplain provides a comprehensive picture of widespread violence and
conflict in a prehistoric tribal society (Milner, 1995; Milner et al., 1991;
Milner & Smith, 1989, 1990; Santure, 1990). Sixteen per cent of the
skeletons have evidence indicating violent death, mostly in the form of
unhealed trauma. The range of unhealed trauma affecting principally the
cranium, body trunk, and upper limbs is striking, and includes projectile
wounds, holes in crania produced by stone celts, depressed fractures, and
various mutilations. A number of individuals have multiple skeletal
injuries, far exceeding what would have been required to cause death.
Evidence for mutilation is especially abundant in the Norris Farms series.
Multiple cutmarks on cranial vaults (especially on frontals) produced by
stone tools indicate that at least 14 individuals had been scalped (Figure 4.3).

Figure 4.3. Cutmarks on adult frontal (scalping); Norris Farms, Illinois. (From
Milner & Smith, 1990; reproduced with permission of authors and Illinois
State Museum.)
Individuals missing their skulls and with cut cervical vertebrae indicate that
they had been decapitated. The presence of cutmarks on articular regions
of postcranial bones evinces widespread dismemberment. Numerous
punctures and gouges in skeletal elements, mostly produced by carni-
vores, suggest that a significant proportion of victims (n = 30) were exposed
above ground for a period of time before interment.
Active bone infections, articular joint dislocations, and partially healed
bone fractures and other trauma denote the presence of severe and
long-standing disabilities for many individuals at the time of their deaths.
These conditions may well have impaired their ability to escape confronta-
tion, leading to early death (Milner, 1995; Milner et al., 1991).
The presence of victims - individuals who clearly exhibit evidence of
violent trauma - in individual burial pits or pits containing a few
individuals indicates that violence occurred over the entire period of the use
of the cemetery, at least several decades. Over this time span, not all of the
victims died outright as a result of violent encounters. For example, three
adult females display completely healed scalping trauma and two adult
females have remodelled bone surrounding embedded chert projectile
points.
The pattern of deadly conflict at Norris Farms is similar to that of
ethnographic small-scale societies where violence is endemic (e.g., Chag-
non, 1992; Heider, 1979). Raids in these societies often involve ambush and
surprise attacks, but may also occur during chance encounters. Victims of
attacks in these groups can include individuals of all ages and both sexes.
At Norris Farms, fully one-third of the adults were victims of violent
attack, and they include equal numbers of adult males and females; only
two are juveniles ( < 15 years). The equal number of adult females and
males with traumatic injuries is different from ethnographically observed
small-scale societies where males are the predominant victims (e.g.,
Chagnon, 1992; Divale & Harris, 1976; Heider, 1979; Keeley, 1996; and
others). Female captives provide economic return - women's labor in many
societies is essential for food collection and preparation. Relatively more
women may have been killed at Norris Farms because of their burden on
the attacker's resources, or it may simply have been too much trouble for
the attackers to bring captives back to their home village (Milner, 1995;
Milner et al., 1991).
Warfare at Norris Farms is tied to the highly dynamic sociopolitical
circumstances that characterize this region of the American Midwest
during later prehistory. The Oneota represent an intrusion into the central
Illinois River valley, replacing a somewhat more organizationally complex
system (Mississippian). Clear evidence of social tensions is indicated by the
presence of fortifications and defensible settlement locations. Populations
occupying the region may have been in competition over productive lands
and resources concentrated in river valleys (Milner et al., 1991). Violence in
this case may have been a strategy employed to gain control of these highly
valued resources.
Riviere aux Vase
At the late prehistoric (AD 1000-1300) Riviere aux Vase site in southern
Michigan, nonlethal cranial injuries consisting of round or elliptical vault
depressions produced by wood or stone clubs have been identified (Wilkin-
son & Van Wagenen, 1993). A higher frequency of adult females (n = 14)
than males (n = 5) with cranial injuries suggests that women were the
preferred target of violence. For some of these women, violent encounters
may have occurred on more than one occasion - five female crania have
multiple healed depressed fractures. One of these individuals has a severe
depressed fracture and an accompanying large incision on the occipital as
well as multiple fractures on the left and right parietals.
Although other women may have been responsible for the cranial
injuries in this population (cf. Burbank, 1994), the demographic character-
istics of the injured group suggest that males were the primary aggressors.
Males and females show a very different age pattern of injury: the female
peak age of trauma is the early adult years, whereas male trauma is evenly
distributed across age groups. Male trauma is oriented toward the front of
the vault, and female trauma is present throughout the cranium. Wilkinson
& Van Wagenen (1993) suggest that violence directed at women by men, or
women against women or co-wives in polygamous societies, fits well with
ethnohistoric accounts of Eastern Woodlands native populations (Wilkin-
son & Van Wagenen, 1993).
4.3.2 American Great Plains
McCutchan-McLaughlin
A large proportion of individuals (19%) recovered from this archaeological
site in southeastern Oklahoma are from a single multiple burial containing
the remains of nine individuals (three infants, four adult females, two adult
males) (Powell & Rogers, 1980). None of the victims display evidence of
mutilation - such as scalping or dismemberment - that suggests violent
confrontation. However, some members of the population clearly died in a
violent fashion: large projectiles had penetrated thoracic and pelvic
cavities, vertebral columns, and limbs. These associations underscore the
importance of the archaeological context for documenting violence.
Unfortunately, this contextual information is easily lost if not properly
recorded during the course of fieldwork. Had this information not been
available, the only evidence for violence in this series would have been
multiple burial, which by itself is only circumstantial evidence of violent
death.
Crow Creek
Study of human remains from various late prehistoric sites in the Missouri
River valley located in the present states of South Dakota and North
Dakota reveals evidence for violent confrontations between tribal groups
competing for overlapping resources and territory (e.g., Bovee & Owsley,
1994; Hollimon & Owsley, 1994). The proto-Arikara (ca. AD 1325) skeletal
series from the Crow Creek site supplies important details on prehistoric
conflict (Gregg & Gregg, 1987; Willey, 1990; Willey & Emerson, 1993;
Zimmerman et al., 1981). Analysis of some 500 individuals buried in a
single pit indicates that nearly all members of a village were massacred
during the course of a single raid. The presence of carnivore tooth marks
and weathering reveals that, following the attack, the deceased were
exposed for a period of time prior to their burial by returning survivors or
allies (cf. Milner & Smith, 1989). Analysis of the human remains suggests
that although all of the deceased were victims of a single attack, violence
had a well-established history in the Crow Creek villagers; a number of
massacre victims had healed violence-related injuries, including scalping
(Willey, 1990; Willey & Emerson, 1993).
Virtually all individuals in the series have unhealed cutmarks on frontals
and other cranial elements, indicating removal of the scalp with a stone
knife at or following the time of death. Forty per cent of the victims have
cranial depressed fractures, mostly located on the top and sides of vaults. In
addition to scalping trauma, various other mutilations are common,
including tooth evulsions, alveolar and tooth fracture, nose removal,
decapitation, and dismemberment of both upper and lower limbs. The
presence of cutmarks on mandibles - especially on ascending and/or
inferior borders of rami - suggests that mutilation also involved tongue
excisions.
Demographic assessment of the Crow Creek population indicates that
young adult (15-24 years) males outnumber young adult females by a
factor of two. Additionally, there are twice the number of older adult
(45-59 years) females than older adult males in the series. The absence of
young women may reflect captive taking, escape, or actual demographic
composition of the population from which victims are drawn (Willey &
Emerson, 1993). Similarly, the paucity of elderly men reflects their escape
or actual demographic composition. It is unlikely that older males were
captured, because they would have represented a relatively low economic
return for the raiders. Perhaps older males are missing because of earlier
raids and endemic warfare (Willey & Emerson, 1993).
Larson Village
During the historic period, the Arikara occupied a series of temporally
successive villages as the tribe migrated northward up the Missouri River
valley. One such village was decimated by violence. At the seventeenth
century Larson Village site in northern South Dakota, 71 skeletons from
house floors display evidence of violent death and mutilation (Owsley et al.,
1977). About one-third (34%) of the victims had been scalped. The
mutilation patterns are similar to those displayed in the Crow Creek
population, including dismemberment, decapitation, and facial, dental,
cranial vault trauma, and tongue excision. Skeletal modifications on one
young adult female run the gamut of violent injury and mutilation: 'the left
side of her skull had been broken away, though cuts on the frontal, right
parietal and occipital indicated she was scalped. The distal diaphysis of her
right radius and ulna are articulated in anatomical position ... A knife cut
was made through the soft tissues in order to free the hand as a trophy or
possibly to secure a bracelet. Epiphyseal areas of both bones were broken.
After severing the muscles and tendons, the assailant simply broke the hand
free. Other bones have been cut, including five right ribs, the ventral and
posterior surface of the right clavicle, the right scapula, the right tibia and
both femurs. A deep cut near the distal epiphysis of the right humerus must
have resulted while separating the upper and lower arm. Cuts on the femurs
are on the neck; the objective may have been to remove the legs from the
body. Bones of both legs are associated with the skeleton though neither
was articulated when excavated' (Owsley et al., 1977:125). This analysis
indicates that the Larson Village was attacked, and the villagers attempted
to defend themselves in their individual houses (see Bamforth, 1994). The
presence of numerous unburied remains indicates that the Larson Village
ceased to exist at the completion of the attack.
Demographic composition of the historic Larson Village site and
prehistoric Crow Creek massacre victims are similar: both series contain
fewer young adult females and juveniles than young adult males. This
pattern suggests that children and women may have been captured rather
than killed in the attack, which is documented historically in the region
(e.g., Lowie, 1954).
Analysis of scalping patterns reveals changes in warfare prior to and
during the contact period in the northern Plains (Owsley, 1994). Compari-
son of a large sample of late prehistoric, protohistoric, and early historic
crania (n = 751) from 15 archaeological sites indicates that throughout the
time period both males and females were at equal risk of being scalped.
This risk greatly increased for men but not for women in the early historic
period: instances of scalping tripled for males and halved for females.
Most of the male victims are young adults (20-34 years), which probably
reflects the deaths of warriors who were killed either during raids or in
defense of home villages (Owsley, 1994). Owsley's (1994) analysis estab-
lishes the fact that death from violence was present throughout the late
prehistoric and historic occupation of the Plains. This violence was
occasionally punctuated by eruptions of large-scale warfare, resulting in
numerous deaths at one time (e.g., Crow Creek and Larson Village
massacres). Therefore, contrary to earlier assertions (e.g., Newcomb,
1950), analysis of skeletal evidence of conflict discloses that warfare and
intergroup conflict did not originate during the early period of contact
with Europeans or Euroamericans, but was a well-established part of the
cultural and social behavioral repertoire of prehistoric societies living in
the region. The increase in Arikara male deaths during the historic period
probably reflects an elevated frequency of confrontations, especially with
the encroaching Sioux from the east (Owsley, 1994). The overall similarity
between precontact (e.g., Crow Creek) and postcontact (Larson Village)
Arikara skeletal injuries indicates an enduring pattern of conflict in this
region. In the precontact Plains setting, conflict appears to have been
triggered by food shortages and stress generally, as is suggested by the
presence of stress indicators in human remains as well as paleoclimatologi-
cal evidence for periodic droughts after AD 1250 (see Bamforth, 1994).
Archaeological evidence also suggests that new populations were migra-
ting into the region during this time, resulting in increased competition for
productive lands. In view of these new developments, Bamforth (1994)
contends that the stage was set for increased violence in later prehistory, a
pattern that was exacerbated by the spread of the Sioux into the Missouri
River valley.
4.3.3 American Southeast
Koger's Island
Analysis of skeletal remains and their archaeological context provides
important evidence of conflict in a late prehistoric (Mississippian period)
population from northern Alabama (Bridges, 1996). Five adult male crania
from four mass burials have cutmarks on their frontals and occipitals from
scalping, and some individuals had been beheaded. In addition, crushing
fractures on a manubrium and a scapular spine of two individuals and deep
cuts on the ribs of a third indicate violent death (Bridges, 1996). Demo-
graphic assessment of victims suggests that the impact of violence in this
society may have been profound - some 21% of the total number of
individuals were recovered from multiple-interment (mass) graves. The
population contains a relatively large number of infants (about 25% of the
total), a pattern that is characteristic of many preindustrial, archaeological
skeletal samples (e.g., Acsádi & Nemeskéri, 1970; Buikstra & Konigsberg,
1985). Adult ( > 15 years) males and females have very different age-at-death
profiles (Figure 4.4).

Figure 4.4. Sex-specific mortality curves; Koger's Island, Alabama. Open
circles, males; filled circles, females. (Adapted from Bridges, 1996; reproduced
with permission of author and John Wiley & Sons Ltd.)

Female deaths peak slightly during the late
teens and early twenties, progressively fall to a low point in the thirties, and
again rise in the forties. This pattern also characterizes other archaeological
series (e.g., Buikstra & Konigsberg, 1985). Male deaths are few in number
during the late teens, but high in the thirties. Presumably, many of these
deaths resulted from violent intergroup encounters. The age composition
of the small burials (containing fewer than five individuals) is different from
the age composition of the multiple burials (five or more individuals). The
former is dominated by infants and younger juveniles. The latter has an
unusual peak in the thirties, and most of these deaths are adult males. The
pattern is distinctive in that there are relatively few women or children in
the multiple burials, suggesting that women and children had either
escaped or been captured; males may have died while protecting the village
from aggressors.
The loss of adults, and especially men who were responsible for
protection of the group and acquisition of resources not acquired by
women (e.g., animal protein), would have had far-reaching consequences
for the population's ability to mitigate stress in a hostile setting. In later
prehistory, political systems and population size declined in this and some
other regions of the American Southeast and Midwest (e.g., see Anderson,
1994; Milner, 1990; Steponaitis, 1991). This hostile environment may have
contributed to the decreased presence of these groups in later prehistory
(Bridges, 1996).
Comparisons of the Koger's Island sample with other late prehistoric
populations in Alabama suggest that conflict was highly localized. Less
than 1% of the late prehistoric (Mississippian period) Moundville skeletons
(n = 564) from west-central Alabama have skeletal injuries, and only a
handful of these are from violence (one piercing wound and three with cuts)
(Powell, 1988). Populations from the preceding Late Woodland period in
the nearby Tombigbee River valley display an abundance of injuries
derived from violent confrontations (e.g., projectile wounds) (Cole et al.,
1982; Hill, 1981; Welch, 1990). Some of the deceased who had died
violently were buried in multiple-individual graves. In late prehistoric
west-central Alabama there was an apparent decline in violence from the
Late Woodland to the Mississippian periods, which Steponaitis (1991)
argues was brought about by increasing control of the regional population
by centralized polity at Moundville.
4.3.4 Southern California Pacific coast
Aboriginal populations of the western Pacific coast of North America are
often characterized as passive and nonwarlike. Early Spanish accounts
highlight the peaceful nature of these groups (e.g., Bolton, 1927; Kroeber,
1925; Priestley, 1937). Eruptions of warfare and violence during the contact
period have been attributed to disruption of the natural harmony of the
region by invading Europeans (Lambert, 1994; Walker, 1989).
The study of an extensive skeletal series of prehistoric human remains
from the Santa Barbara Channel islands and mainland illustrates the
inaccuracies of the perceptions of early explorers. Walker, Lambert, and
their coworkers (Lambert, 1994, 1997; Lambert & Walker, 1991; Walker,
1989; Walker & Lambert, 1989; Walker et al., 1989) analyzed skeletal
injuries - mostly depressed cranial vault fractures and projectile wounds -
over a 7000-year temporal span of prehistory. Cranial trauma is quite
common: 18.3% of 753 crania have depressed fractures (Lambert, 1994)
(Figure 4.5). Most fractures are healed, indicating that the majority of
individuals survived the violent encounters causing the injuries. Apparent-
ly, the intention of the individual wielding the weapon was to injure -
rather than kill - the targeted victim.
Figure 4.5. Depressed fractures in adult crania: ellipsoidal parietal injury (a),
deep circular injury (b), multiple circular injuries (c), ellipsoidal occipital
injury (d), circular parietal injury (e; arrow points to residual fracture line);
Santa Barbara Channel islands, California. (From Walker, 1989; reproduced
with permission of author and John Wiley & Sons, Inc.)

Demographic analysis of the victims reveals a number of tendencies. Very
few of the victims are under the age of 10; adolescents have three times as
many vault injuries as younger individuals. Depressed fractures are
common in adults, but they are especially common in individuals under
40 years of age. This age-specific pattern suggests that adults not involved
in behaviors and activities resulting in cranial trauma had somewhat
greater longevity in comparison with victims (Lambert, 1994).
Most of the injured adults are males. In these individuals, roughly
two-thirds of the injuries occur on the left side of the frontal, indicating that
conflicts were face-to-face encounters with a right-handed perpetrator
striking the left side of the head of the victim (Lambert, 1994). This
patterning of nonlethal trauma is remarkably similar to trauma observed in
the Yanomamo foragers of Venezuela (Chagnon, 1992). Yanomamo men
attack other males with heavy wooden clubs. Although numerous casu-
alties result from these encounters, they are rarely lethal.
Adult female cranial injuries are haphazardly distributed on the face and
vault - only about one-third of the injuries are on the frontal bone
(Lambert, 1994). The random distribution of depressed fractures indicates
that, although females were occasionally involved in face-to-face attacks,
most were from other directions, including from behind (e.g., while fleeing
an attacker). Additionally, some of these nonfrontal injuries could be from
accidental causes (Walker, 1989).
The presence of projectile injuries and associated projectile points in
some individuals reveals appreciable numbers of deaths caused by violence:
of 1744 individuals, 3.3% had been killed or wounded by single or multiple
projectile injuries. Unlike the cranial trauma, the majority of projectile
victims (at least 71%) died from their wounds, indicating the lethal
intentions of the attacker. The demographic composition of individuals
with either lethal (projectile wounds) or nonlethal (cranial depressed
fractures) injuries is similar in at least two respects: first, young and mature
adult males are the most affected; and second, children, older adults, and
women are the least affected. Like cranial trauma, aggression leading to
injury and death involved primarily adult males under 40 years of age
(Lambert, 1997), which is consistent with patterns observed for many
nonindustrial communities globally (e.g., Chagnon, 1992; Meggitt, 1977;
and see discussion in Lambert, 1994).
The temporal patterns of nonlethal and lethal skeletal injury are
distinctive in prehistoric southern California coastal populations. Non-
lethal cranial depressed fractures are common in the Early Middle period
(1490 BC-AD 580), whereas lethal projectile injuries are far more common in
the Late Middle period (AD 580-1380) (Lambert, 1994, 1997). The increase
in frequency of projectile injuries may be tied to the adoption of the
bow-and-arrow in California during the sixth century AD (Lambert &
Walker, 1991). Perhaps the rapid adoption of the bow-and-arrow at this
time may have been fostered by competition and conflict between popula-
tions in North America generally (Blitz, 1988). Regardless of the moti-
vation for increased lethal violence, the later peak in projectile injuries
signifies a shift to more serious - and deadlier - forms of conflict in later
prehistoric southern California populations.
The availability of abundant bioarchaeological, archaeological, and
climatological data provides a more comprehensive understanding of
factors that may have motivated violent behavior in the Santa Barbara
Channel region. Analysis of tree ring and other climatological data
suggests that the Middle period saw an increase in environmental instabil-
ity, periodic droughts, and decreased terrestrial resource productivity.
These changes, coupled with warming of the Pacific Ocean during this time,
reduced marine productivity (Lambert, 1994, 1997). In light of these
changes, Lambert (1994) speculates that elevated competition arising from
increased resource stress may have engendered an increase in violence in
these populations. This hypothesis is consistent with other skeletal data
showing a decline in quality of life and increase in health stress (e.g.,
Lambert, 1993, 1994; Walker, 1986a; Walker & Lambert, 1989; see
Chapters 2 and 3).
Along with the increase in trauma and disease during this time, there is
evidence for increasing social complexity. For example, the burial of
high-status grave goods (e.g., beads) with infants is more common during
the peak in drought in later prehistory, suggesting ascribed rather than
achieved social rank. Perhaps increased intergroup competition - and,
hence, increased violence - for scarce resources during episodes of
environmental degradation fostered more complex social organization in
later prehistory (Fischman, 1996).
4.3.5 The Arctic
Like California, native groups living in the Arctic are often perceived as
living in a state of quiet repose and nonviolence (see Fienup-Riordan,
1994). Early documentation of Eskimos often presents them as 'passive to
the point of lethargy' (Fienup-Riordan, 1994:322). Several bioarchaeologi-
cal investigations are providing new data that indicate the need for revision
of these perceptions. Findings from these studies suggest that, although
violence may have been rare, it erupted on occasion and had dire
consequences for some groups.
Saunaktuk
Study of human remains from the Saunaktuk site located east of the
Mackenzie Delta in the Canadian Northwest Territories provides compell-
ing evidence for violent confrontation between native groups (Melbye &
Fairgrieve, 1994; Walker, 1990). Human remains from this locality
represent a minimum of 35 Inuit Eskimo villagers, some of whom died
violently in the late fourteenth century AD. Evidence for violent death and
body treatment in this Arctic setting is indicated by extensive perimortem
skeletal modifications, including knife cuts, slashing, piercing, gouging,
and splitting of long bones (Melbye & Fairgrieve, 1994). None of the
remains had been purposefully buried. Most of the victims are juveniles
(68.6%), suggesting that adults - particularly males - may have been away
hunting, leaving a relatively defenseless group vulnerable to attack.
Hundreds of knife cuts, especially around articular joints and the neck
(e.g., occipital condyles and upper cervical vertebrae), indicate the practice
of dismemberment and decapitation. Numerous cuts on facial bones on
most victims identify widespread facial mutilation or disfigurement. Many
other cuts, such as on clavicles and scapulae, reflect an overall pattern
associated with purposeful dismemberment, removal of muscle and other
soft tissues, and intentional mutilation. Unique to the Saunaktuk skeletal
series is the presence of gouges at the ends of long bones. Adult distal
femora from two individuals display large perimortem mediolateral gouges
passing completely through the cortical and cancellous tissue. These
modifications are consistent with oral tradition describing a type of torture
whereby the victim's knees were pierced and the individual was dragged
around the village by a cord passed through these perforations.
With few exceptions, long bones had been split into scores of longitudi-
nal sections. The surfaces of split bones are smooth and display tiny step
fractures identical to bone modifications in butchered animal bones.
Presumably, this breakage pattern was produced by extraction of the
nutritionally rich marrow for consumption (Melbye & Fairgrieve, 1994; cf.
White, 1992). The striking similarity between butchered animal remains
found in archaeological sites and the Saunaktuk human skeletal assem-
blage suggests that the deceased had been cannibalized. In sum, at least
some members of this late prehistoric Saunaktuk population had been
tortured, and all had been murdered, mutilated, and cannibalized.
There is a rich historical record that provides a context for understand-
ing violence between native groups living in this region of the Arctic. In
areas where Inuit and Amerindians came into frequent contact - such as
the Mackenzie Delta region - violent interactions between the groups were
commonplace. Oral traditions and historical accounts describe the horrific
nature of intergroup violence. For example, Samuel Hearne, who explored
the region in the late eighteenth century for the Hudson Bay Company,
observed the murder and mutilation of Inuit villagers by a group of
Amerindians. He noted that 'the brutish manner in which these savages
used the bodies they had so cruelly bereaved of life was so shocking, that it
would be indecent to describe it ... ' (Hearne, 1971:155; quoted in Melbye
& Fairgrieve, 1994:73).
The massacre at Saunaktuk may not have been an isolated occurrence in
the Arctic region. Preliminary evidence from Admiralty Island, Alaska,
indicates the presence of a small number of broken long bones and ribs as
well as perimortem cutmarks produced by stone tools (Irish et al., 1993).
Although the evidence is limited, the patterns of modification are similar to
those identified from Saunaktuk (Melbye & Fairgrieve, 1994) as well as
other sites in North America where cannibalism was probably present (e.g.,
American Southwest: Redmond, 1994; Turner, 1993a; White, 1992; and see
below).
On Kodiak Island, Alaska, skeletal modifications suggestive of dismem-
berment and cannibalism have been described. At Uyak Bay, Hrdlicka
(1944) briefly documented a series of scattered human remains, some of
which had been broken in a manner which he interpreted as representing a
practice of cannibalism. Recent analysis of remains from the Uyak and
Crag Point sites shows the presence of culturally modified remains of
women and children, but not men (Simon, 1992; Simon & Steffian, 1994;
Steffian & Simon, 1994; and see Urcid, 1994). This study reveals a small
subsample of remains displaying cutmarks from disarticulation and
defleshing, drill holes, perimortem breakage, and longitudinal fracturing.
Therefore, cannibalism was probably present in precontact Kodiak Island
native groups, albeit probably not at the levels envisioned by Hrdlicka.
4.3.6 American Southwest and Mexico
Anasazi
The Anasazi were one of several complexes of late prehistoric societies who
were ancestral to some of the modern native populations currently living in
the 'Four Corners' region (present-day states of Utah, Colorado, Arizona,
and New Mexico) of the American Southwest. The region has been the
focus of intensive archaeological investigation for more than a century,
producing an abundance of human remains. Most human remains are
intentional interments, ranging from isolated burials found in house floors
and trash and storage areas to large cemeteries. Individuals are generally
singular and are accompanied by grave goods (Turner, 1993a).
A small - but highly visible - proportion of graves are multiple-
individual interments containing many disarticulated and broken skeletal
elements (reviewed by White, 1992; and see Redmond, 1994; Turner,
1993a; Turner & Turner, 1995). Patterns of these skeletal assemblages
contrast sharply with most other burials in two major respects: (1) they are
composed of unburied bone masses found on the floors of structures or in
the fill of kivas or rooms, or (2) they are clusters of human remains in pits
(Turner, 1993a). These bone concentrations almost always lack grave
goods, and they are frequently found in small and isolated sites lacking
defensive constructions (Turner, 1993a; Turner & Turner, 1995). The sites
with these unusual remains are late prehistoric (ca. AD 900-1650), they
contain fewer than 30 individuals, and remains are often represented by
equal proportions of juveniles and adults and adult males and females
(although see Martin et al., 1995). As a general rule, no nonhuman faunal
remains are present in these sites. The skeletal assemblages contain
overwhelming evidence of perimortem trauma, disarticulation, defleshing,
and burning.
The tremendous amount of bone breakage in these assemblages prevents
determination of cause of death, and specifically, whether or not violence
was a factor leading to the deaths of individuals. Violence may very well
have been involved (e.g., Turner, 1993a). For example, alveolar bone and
tooth sockets are highly traumatized, and at least one juvenile displays a
massive cranial injury.
The most thoroughly studied skeletal assemblage from the American
Southwest is the Mancos Canyon series (White, 1992). White's meticulous
investigation reveals a pattern of skeletal modification similar to that in
other sites from the Southwest (e.g., Martin et al., 1997; Turner, 1983,
1993a; Turner & Morris, 1970; Turner & Turner, 1992, 1995). The sample
includes a minimum of 17 young adults and 12 children, all of whom
display human-induced tool marks on their remains from defleshing,
percussion and chopping, and disarticulation. Thermal modification is
widespread in the assemblage. Similarly to the Saunaktuk series, long
bones show extensive reduction and longitudinal fracturing (Figure 4.6).
For example, humeri shafts are highly fragmented; this was accomplished
by hammerstone percussion and anvil fracturing (White, 1992:238). These
patterns of modification are similar to fracture patterns of animal bones
resulting from removal of flesh, disarticulation, and marrow extraction
(White, 1992). The strong similarity between bone modifications associated
with the processing of humans and mammals in the region indicates that
cannibalism was probably practiced at Mancos Canyon and other sites in
the region.
Figure 4.6. Fractured and longitudinally split humeri; Mancos Canyon,
Colorado. (From White, 1992; reproduced with permission of author and
Princeton University Press.)

White notes the well documented evidence of warfare and violence in the
ethnohistorical and archaeological record of the American Southwest,
including defensive sites, intentionally burned habitations, as well as
remains of deceased whose deaths had been violent (and see Haas &
Creamer, 1993). On the basis of the evidence - including lack of projectile
injuries, the pattern of body modification, and the extreme reduction of
skeletal materials found at the site - he contends that it is not possible to
determine the reasons for cannibalism. Several possible scenarios emerge,
such as ritualized cannibalism involving killing and eating of enemies, or
perhaps the Mancos Canyon population engaged in culinary cannibalism -
the population was so starved that they consumed friends, associates, and
relatives (White, 1992). Well documented historical cases of starvation
cannibalism provide important support for the latter model (e.g., Bonassie,
1989; Grayson, 1993). The American Southwest can be characterized as a
marginal environment where seasonal cycles and resource variability led to
frequent episodes of crop failure and famine. Paleopathological analysis
provides substantial evidence for nutritional stress in the prehistoric
American Southwest (e.g., Martin et al., 1991; Stodder, 1994). This
evidence alone, however, does not provide sufficient indication for the
reason that cannibalism occurred. In sum, although starvation is a
plausible motivation for cannibalistic behavior, it cannot be identified as a
primary factor from the evidence at hand.
Skeletal assemblages in the La Plata Valley, New Mexico, show the
presence of a number of characteristics displayed by the Mancos Canyon
series, including breakage, burning, cutmarks, and other alterations of
crania, ribs, and lower limb bones (Martin et al., 1997; cf. White, 1992). At
site LA 37592, remains of mostly juveniles and some adults found in trash
deposits in the fill of a kiva dating to the Pueblo III period (AD 1125-1300)
possess perimortem modifications (cutmarks, burning, breakage). Stable
isotopic analysis of diet and documentation of pathological conditions
(e.g., porotic hyperostosis) do not distinguish the culturally modified
sample from the other skeletal assemblages lacking evidence of human-
induced modifications. At least in this setting, therefore, there are no
apparent underlying differences explaining why some individuals were
treated differently from others at or shortly after the time of death.
Martin and coworkers (1997) suggest that although perimortem skeletal
modifications may be associated with cannibalism (cf. Turner, 1993a;
White, 1992) in the La Plata Valley other explanations should be carefully
evaluated. For example, they note that killing of individuals identified as
'witches' in the American Southwest is well documented historically, and
appears to have climaxed during episodes of food shortages and epidemics
(and see Darling, 1997, regarding treatment of witches in the ethnographic
Southwest). Alternatively, warfare and conflict during late prehistory are
well documented. Other investigations of skeletal remains where warfare
has been observed certainly indicate extensive modifications of skeletons,
reflecting violent encounters.
4.3.7 Precontact Latin America
Ritualized violence and offering of remains - such as heads and other body
parts - as a mediator between the living, their ancestors, and the
supernatural world has been documented in a wide range of settings in
precontact Latin America (e.g., Benson & Boone, 1984; Carneiro, 1990;
Fowler, 1984; Redmond, 1994; Verano, 1995). Numerous iconographic
descriptions and ethnohistorical accounts provide information on sacrifi-
cial death and mutilation, and discoveries of missing heads and limbs in
archaeological settings provide verification for this behavior (e.g., Fowler,
1984). Little systematically collected skeletal evidence exists of these
activities (Verano, 1995). Important recent research in Peru and Mexico is
beginning to remedy this shortcoming.
In coas tal Peru, a series of mutilated human remains and contextual data
from the site of Pacatnamu in the Jequetepeque River valley on the
northern Peru coast are documented (Faulkner, 1986; Rea, 1986; Verano,
1986, 1995). Located outside the entrance of the primary ceremonial
complex, the remains of 14 adolescent and young adult males (mean age 21
years) were recovered from the bottom of a trench in three superimposed
groups. The groups were separated by erosional deposits, indicating that
the deceased had been deposited in the trench in three different burial
episodes. Some skeletal remains for each of the three groups show evidence
of weathering, indicating that burial did not immediately follow death.
Delayed burial is also indicated by the presence of pupal cases of muscoid
flies representing different stages of insect growth in an open environment
(Faulkner, 1986).
Numerous injuries are present in the human remains. In the topmost
group, injuries include multiple stab wounds (perforations on vertebrae
and ribs) sustained from different directions. The variable pattern of
wound orientation suggests that more than one individual may have been
involved in the stabbing of the victim. In the middle group, stab wounds are
not present. The presence of cutmarks on upper cervical vertebrae indicates
throat-slashing and decapitation. The bottom-most group also shows
evidence of decapitation. Five individuals from the middle and lower
groups have bisected manubria and fractured ribs, indicating that the
chest had been opened forcibly. Collectively, these perimortem traumas
present a scenario of violent death and mutilation. Many of the victims also
have healed injuries (e.g., rib fractures, depressed cranial fractures)
(Verano, 1986). On the basis of the age distribution, sex, and pattern of
healed and unhealed injuries, Verano (1986) speculates that the victims
were war prisoners. This conclusion is well supported by iconographic
depictions in art from the region (Chimu and Moche) showing ritual
mutilation and sacrifice of war prisoners.
Early accounts report on the practice of removal and curation of heads in
native populations in Ecuador, which continued well into the twentieth
century in some Amazonian groups (Verano, 1995). Head trophy taking is
also well represented in Andean iconography, but the practice is rarely
documented outside the southern Peruvian coast. Trophy skulls have been
recovered from Nasca period (200 BC-AD 600) archaeological sites (e.g.,
Drusini & Baraybar, 1991; Verano, 1995). Young adult males were the
favored victims, suggesting that they represent enemy combatants (Verano,
1995). The presence of cutmarks on some skulls indicates that, following
death, soft tissue had been removed with a sharp implement.
Although the skulls vary in their treatment, two features characterize
virtually all trophy heads from the Nasca region: the frontal is perforated
with a single hole and the base of the skull shows damage, ranging from
foramen magnum enlargement to removal of most of the cranial base.
Although the reasons for these alterations are not known, the frontal
perforation probably supported a rope handle. The foramen magnum may
have been enlarged for removal of brain tissue, perhaps as a part of the
mortuary ritual.
In precontact Mexico, skeletal evidence has accrued on body processing,
including cutmarks, decapitation, dismemberment, burning, and inten-
tional bone breakage in a limited number of different temporal and cultural
contexts (Pijoan & Mansilla Lory, 1990, 1997). As has been discussed for
the American Southwest, cutmarks and other cultural modifications of
skeletal remains by themselves do not provide information on the moti-
vations for body processing (and see White, 1986). However, a rich
historical and iconographic record indicates that ritual was the key
motivation for cannibalism in early contact period Mexico.
A comprehensive picture of ritualized violence and death has recently
been presented from human remains recovered from Teotihuacan, Mexico.
Although skeletal evidence for ritual sacrifice has been known since the
early twentieth century (e.g., Hrdlicka, 1910a), the number of individuals
involved was too small to warrant attention. However, victims represen-
ted by single and multiple interments of mostly adults from the Temple of
the Feathered Serpent dating to the early occupation of the site and
construction of the temple (ca. AD 100-250) have been analyzed (Cabrera
Castro, 1993; Cabrera Castro et al., 1991; Serrano Sánchez, 1993;
Sugiyama, 1989). Sacrificial victims associated with the temple number
some 120 individuals (Cabrera Castro, 1993). The skeletal remains have
not been systematically studied for cutmarks or other skeletal modifica-
tions, but the position of the hands behind the backs for some individuals
suggests that they had been bound at the time of death. The general lack of
disarticulation as well as the inclusion of co-mingled individuals in single
burial pits (e.g., burial 190 included 18 adult males in a linear pit at the
midpoint between the southeast and southwest corners of the temple)
suggest that death probably took place at one time for all individuals
(Sugiyama, 1989). The identity of the victims is not known. However, in
reference to one of the multiple burials on the margins of the temple (burial
190), the inclusion of numerous obsidian points and blades and other
offertory items suggests that they 'were military men, priestly soldiers, or
men disguised as military personages' (Sugiyama, 1989:98).
Ritualized violence, sacrifice, and related behaviors are well documented
in the iconography of numerous precontact Latin American cultures. Mass
burials, mutilation, decapitation, and the presence of trophy heads in
archaeological sites in Peru and Mexico indicate that scenes of these
activities in various art forms in these regions were not just mythical events,
but rather, historically situated cultural practices.
4.3.8 Easter Island
The easternmost island of Polynesia, Easter Island (Rapa Nui), is among
the most remote inhabitable land masses on earth. The island was settled by
about AD 400 by a small founding population, but by the sixteenth century,
the population density was very high and included between 7000 and 9000
individuals living on 180 km² of land (Kirch, 1984; Van Tilburg, 1995).
Ethnographic, historical, and archaeological evidence indicates that war-
fare and conflict were fostered by environmental degradation, soil de-
pletion, an increasingly impoverished habitat, depletion of fuel (wood),
reduced food sources, overpopulation, and lack of ability to leave the
island (Kirch, 1984; Owsley et al., 1994). Folklore documents a state of
perpetual warfare accompanied by numerous deaths, enslavement, mur-
der, and cannibalism in this highly circumscribed setting. Contact and
interaction with Europeans during the early period of exploration and
conquest of this region of the Pacific (AD 1722-1868) seem to have
exacerbated the deteriorating conditions, resulting in even more violent
confrontations between groups as well as between natives and Europeans.
The study of human remains provides important data on health,
lifestyle, and evidence of violence (Gill & Owsley, 1993; Owsley et al.,
1994). From counts of frontal bones, it is found that 11.4% (31/271) of
Easter Island adult (≥ 15 years) crania have depressed fractures. Other
cranial bones, primarily the left and right parietals, also exhibit fractures.
These findings indicate a high frequency of violent interactions, specifically
involving face-to-face encounters. Adult males have roughly twice the
number of cranial injuries of adult females. More younger adults (15-34
years: 13.2% of frontals) are affected than older adults (≥ 35 years: 10.3%
of frontals), suggesting that the violence may have led to early death. Most
depressed fractures are either circular or oval, and were probably caused by
the impact of either hand-held rocks or clubs. In addition, fragments of
obsidian blades embedded in some of the cranial fractures indicate the use
of other types of weapons. Extensive healing indicates that very few of these
injuries were fatal, thus showing that violence was generally not intended to
have a lethal outcome (cf. Lambert, 1994; Walker, 1989; and above). The
arrival of Europeans resulted in the introduction of firearms, which may
have contributed to a shift of these intentions from nonlethal injury to
homicide.
Table 4.2. Cranial trauma by region in Australia. Percentages of crania
are shown. (Adapted from Webb, 1995: Table 8-2.)

                    N     One lesion (%)   Two lesions (%)   Three lesions (%)
Males
Central Murray     247         13.4              3.6                0
Rufus River        122         26.2              7.4                0.8
South Coast        138         14.5              4.4                0.7
Desert             132         16.7              3.8                1.5
Tropics             92          6.6              0                  0
East Coast         133         23.3              6.8                1.5
Females
Central Murray     151         19.9              4.0                1.3
Rufus River         83         27.7              8.4                2.4
South Coast        123         31.7             10.6                4.9
Desert              51         33.3             11.8                5.9
Tropics             62         24.2              9.7                4.8
East Coast          86         32.6             10.5                3.5
For example, small lead pellets from a gunshot wound were embedded in
the frontal and left parietal of a historic period adult. The bone tissue
surrounding the entry wounds is fully healed, suggesting that the victim
survived for a period of time following the attack. Most other victims of
firearms were probably not as fortunate as this individual.
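Prevalence figures such as the 11.4% reported above rest on modest counts, and attaching a confidence interval helps when comparing them between regions, sexes, or periods. The following is a minimal sketch of one standard approach (the Wilson score interval), written in Python; it is my illustration, not a computation from Gill, Owsley, and colleagues.

# A minimal sketch: 95% Wilson score interval for a binomial proportion,
# applied to the Easter Island figure of 31 affected frontal bones out of 271.
import math

def wilson_ci(affected, total, z=1.96):
    """Return the (low, high) 95% Wilson score interval for a proportion."""
    p = affected / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return center - half, center + half

low, high = wilson_ci(31, 271)
print(f"prevalence = {31 / 271:.3f}, 95% CI = ({low:.3f}, {high:.3f})")
# Roughly 0.08-0.16: even with 271 observable crania the estimate carries
# appreciable uncertainty, which matters for cross-sample comparisons.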
In summary, the skeletal evidence of trauma on Easter Island is
consistent with folklore documenting endemic interpersonal conflict in the
sense that violence was frequent. Contrary to this record, bioarchaeologi-
cal analysis indicates that violence resulted in frequent nonlethal injuries
rather than widespread death (Owsley et al., 1994). Additionally, the
practice of cannibalism is not confirmed by study of skeletal remains.
The folklore of Easter Islanders therefore overemphasizes warfare,
violence, and cannibalism.
4.3.9 Australian foragers
Cranial trauma in prehistoric native Australian populations provides a
compelling picture of violence and injury in a wide range of geographical,
ecological, and cultural settings (Webb, 1989, 1995). Most regions of
prehistoric Australia have relatively elevated frequencies of cranial trauma,
especially depressed fractures (Table 4.2). Most of the injuries are well
healed, indicating that the attacker's intentions were to injure and not to
kill the victim. Many studies of human populations worldwide document
the higher proportion of violence directed at males (e.g., Gurdjian, 1973;
Lahren & Berryman, 1984; Owsley, 1994; Robb, 1997; Walker, 1989; and
many others; although see Wilkinson & Van Wagenen, 1993), which
probably reflects the central role of men in the violent resolution of
conflicts in most human societies. This sex-specific pattern of head injury
contrasts with that in Australia. In virtually all samples throughout the
continent, adult females show a consistently higher prevalence of cranial
injury than adult males, thus contributing to the greater prevalence in
females than males overall (Table 4.2). Some of the sex differences in
specific localities are slight, but many series show a remarkably strong
disparity between sexes. For example, in the south coastal Swansport
sample, 39.6% (21/53) and 19.3% (11/57) of females and males exhibit
cranial trauma, respectively. For the few skeletal series (4/22) where males
have more cranial injuries than females, the differences are statistically
indistinguishable (a significance check of the Swansport contrast is
sketched below). The disparity in cranial injury between males and
females is not restricted to prevalence alone. For virtually all regions of
Australia - regardless of ecological or cultural setting - women show a
predominance of depressed fractures on right parietals and occipitals.
This pattern suggests that attacks came from behind the victim, perhaps
while she was fleeing the attacker. Adult males show a different pattern of
injury location: for the entire series, more left parietals are fractured than
right parietals. This pattern suggests that male violence usually involved
facial confrontations.
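Whether a male-female contrast like the Swansport one could arise by chance is straightforward to check. Below is a minimal illustrative sketch in Python using the scipy library (my own, not Webb's actual procedure), applying Fisher's exact test to the counts quoted above.

# A minimal sketch: testing the Swansport sex difference in cranial trauma
# prevalence. Counts are those given in the text: 21 of 53 females and
# 11 of 57 males affected.
from scipy.stats import fisher_exact

# 2 x 2 contingency table: rows = females, males; columns = affected, unaffected
table = [[21, 53 - 21],
         [11, 57 - 11]]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
# A p-value below the conventional 0.05 supports a genuine female excess here;
# applying the same test to series with a slight male excess yields large
# p-values - the 'statistically indistinguishable' result noted above.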
It is not uncommon for an individual in the Australian samples to have
two, three, or even four cranial bones that display depressed fractures
(Table 4.2). Consistent with the sex differences in prevalence of crania
affected, more women than men have multiple injuries. The general
pattern, then, indicates that violence and aggression were directed more at
women than at men in prehistoric foraging societies throughout Australia.
Violence was not limited to prehistoric Australian Aboriginal societies.
Ethnographers observe that violence is a common occurrence and a part of
everyday discourse (Burbank, 1994). Unlike Western societies, such as in
the United States, where fighting - and especially aggression against
women - is viewed as a deviant behavior, physical aggression in native
Australians is considered an accepted if not legitimate form of social
interaction (e.g., Burbank, 1994; Myers, 1986). Burbank (1994) provided
detailed observations on physical aggression in men and women in an
aboriginal group living in Arnhem Land (Northern Territory). Her study
shows that both men and women were heavily involved in confrontations.
However, the majority of aggressors and their victims are adult females.
These observations of both deceased and living native Australians reveal a
striking consistency in behavior between prehistoric and contemporary
populations. Women played a key role in aggressive encounters, and not
simply as victims of attack.
4.3.10 Northern European violence
Denmark
The historical record of violence and warfare is abundant for northern and
western Europe. Systematic studies of violence have been produced for
several areas of northern Europe, including Scandinavia, and especially
prehistoric and early historic Denmark. Analysis of human remains reveals
evidence of traumatic injury, decapitation, and mutilation. Like much of
the history of paleopathology, these studies are largely descriptive, having
focussed on single or a few individuals (Bennike, 1987, 1991).
Relying primarily on remains dating from the Mesolithic (ca. 8300-
4200 BC) to the Middle Ages (to AD 1536), Bennike (1985) identified
patterns of injury in Denmark. These patterns can be characterized as
involving a predominance of cranial trauma in mostly adult males
(mostly depressed fractures) on anterior cranial vaults, indicating face-to-
face violent interactions. Folklore and historical accounts emphasize the
high prevalence of violence during the Viking period (AD 800-1050).
Bennike's (1985) assessment clearly indicates that the Mesolithic and
Neolithic periods were far more violent than the Viking period: Me-
solithic crania display the highest prevalence of cranial trauma (43.8%),
which markedly declines in the Neolithic (9.4%), Iron Age (4.7%), Viking
period (4.3%), and Middle Ages (5.1%). Violence is well illustrated by the
presence of projectile injuries, sword and axe cuts, cranial depressed
fractures, and decapitation (Bennike, 1985; Ebbesen, 1993; Kannegaard
Nielsen & Brinch Petersen, 1993). At the Mesolithic site of Bogebakken, a
bone projectile was found lodged between the second and third thoracic
vertebrae of an individual. In fact, all projectile wounds at this and many
other Danish sites are found in the thoracic and head regions, revealing
the lethal intentions of the attackers.
In interments dating to the Middle Ages, the heads of victims had been
removed and placed between their legs. The reasons for this unusual
treatment are unclear, but during the Middle Ages the practice was
associated with criminals in order to prevent their return following death
(Bennike, 1985; and see below). Decapitation and other forms of head and
neck trauma were probably more common than is indicated by the skeletal
evidence alone. A number of Neolithic and Iron Age bog corpses show
evidence of decapitation and strangulation; the latter may have involved
hanging (e.g., Bennike, 1985; Glob, 1971). Owing to the relatively small
number of projectile- and weapon-related deaths, it is not possible to
identify a pattern of decrease or increase in violence-related death in
Denmark (see Bennike, 1985). However, there is a shift from use of
projectile weapons to axes and swords in the Iron Age (Bennike, 1985). The
lethal nature of this new weaponry for the enemies of Danes is demon-
strated in at least one battle site (see below).
Violence in western Europe during the Mesolithic may have been highly
regionalized, with a relatively high prevalence in some regions (e.g.,
Denmark) but not in others. For western Europe as a whole, violent
trauma was relatively infrequent during the Mesolithic (Constandse-
Westermann & Newell, 1984). An increase in population density, reduced
territories occupied by ethnic groups, increase in social complexity, and
resource circumscription during the Mesolithic suggest the potential for
an increase in hostilities. However, trauma prevalence does not change
across the succession of periods within this time frame (Constandse-
Westermann & Newell, 1984).
Battle of Wisby
The Middle Ages in Europe involved a tremendous amount of armed
conflict between many warring city-states and various confederations of
states. Wisby, located on the island of Gotland in the Baltic Sea, is the site
of one of a number of large battles that have been archaeologically
documented in Europe. Hundreds of skeletons excavated at the battle site
present some of the grim details of preindustrial warfare in northern
Europe.
The city of Wisby was invaded in 1361 by an army led by the Danish
king, Waldemar (Thordeman, 1939). Over the course of a single day, the
poorly organized peasant forces defending the city were decisively defeated
and massacred by the king's highly disciplined army. Estimates indicate
that some 1800 Gotlanders were killed in this battle (Ingelmark, 1939).
Archaeological excavations of three common graves at Wisby yielded an
enormous sample of human skeletal remains (n = 1185). Analysis of these
remains reveals that only males were victims, but the age distribution was
extraordinarily varied, ranging from adolescents to very old adult males
(Ingelmark, 1939).
Consistent with research completed on skeletons from other Middle
Ages archaeological sites (cf. Bennike, 1985), most of the injuries resulted
from the use of cutting weaponry, especially swords and axes (n = 456). A
significant minority of injuries were from projectiles (n = 126). Skeletal
wounds are variable, ranging from scratches and nicks on individual bones
to dismemberment. The latter, for example, is illustrated by the presence of
severed hands and feet, partial limbs, and complete limbs. One individual
expresses the intensity of battle: the lower halves of both left and right
tibiae and fibulae are completely severed. The lower legs are affected more
than any other area of the body: about two-thirds (65%) of cutting trauma
involved tibiae. Ingelmark observed that the focus on the lower limbs
probably reflects the use of shields and protective clothing for the body
trunk, leaving the legs especially vulnerable to injury. Sword blows directed
at the lower legs typically resulted in the slicing and chipping of bone on the
anterior crest of the tibia.
Poor protection of the heads of individuals from the Gotlander army is
indicated by the large number of cranial injuries, some of which involve
extremely deep cuts. The heads of some groups of Gotlander soldiers may
have been better protected than others. For example, only 5.4% of crania
are injured in common grave no. 3; this frequency contrasts with the
prevalence of cranial injuries of 42.3% and 52.3% in common graves no. 1
and no. 2, respectively. The majority of cranial wounds are on the left side
of the head - this fits the expected pattern of injury sustained by a weapon
held by a right-handed individual during a face-to-face encounter (and see
Courville, 1965). Some crania have injuries on posterior vaults, suggest-
ing that these victims were struck from behind while fleeing their at-
tackers.
The presence of all ages of males suggests that the majority of the male
population in and around Wisby were recruited for the defense of the city.
Analysis of pathological conditions suggests that virtually anyone who
could walk (and even those who could not) was drafted. Ingelmark
(1939:192) remarked on the 'good many morbid processes' present in the
skeletal assemblage of battle victims. Many vertebrae have pronounced
osteoarthritis, and at least four individuals have extensive vertebral
tuberculosis. One individual displays a completely ankylosed (fused)
knee. The angle of flexion (about 55°) greatly disabled the individual:
running was an impossibility for this victim. A number of individuals
show well healed femoral neck fractures which limited their ambulatory
capabilities. These observations, combined with other health problems -
including skeletal infections and numerous healed, but poorly aligned,
limb fractures (n = 39) - also contributed to reduction in efficiency on the
battlefield. The defending army, then, was not composed of a group of
robust males who were in their peak years of fighting prowess. Like many
of the skeletal samples discussed in this chapter from both New and Old
World contexts, these victims of the massacre were members of a
population not unfamiliar with violence during their lifetimes (cf. Milner
et al., 1991; Willey, 1990). A number of battle victims have healed cranial
trauma (e.g., depressed fractures).
Beheading in ancient Britain
In the upper Thames valley, a high frequency of decapitation and prone
burial in Romano-British (third to early fifth centuries AD) cemeteries
(Cassington, Radley, Stanton Harcourt, Queensford Mill, and Curbridge)
has been documented (Harman et al., 1981). In total, 15.3% show evidence
of beheading. Analysis of other Romano-British cemeteries indicates that
this practice was apparently part of a widespread behavior during this
period (e.g., Bush & Stirland, 1988; Wells, 1982). During the following
Anglo-Saxon period (fifth to tenth centuries AD) beheading continued,
albeit at a lower rate. Reminiscent of the decapitations from Denmark
(Bennike, 1985; see above), heads of a number of burials had been
purposefully placed between the legs of the deceased (and see McKinley,
1993).
The simultaneous occurrence of decapitation and prone burial in
Romano-British and Anglo-Saxon cemeteries suggests a probable connec-
tion between the two practices. The demographic composition of de-
capitated and prone skeletons shows a selectivity for adults, suggesting that
execution may have been the primary motive. Review of historical,
archaeological, and folklore literature indicates other possibilities, such as
prevention of the deceased from walking or communicating, sacrifice, and
deprivation of the soul, for either sacrificial purposes or for punishment for
some wrong-doing (Harman et al., 1981). Decapitation and/or prone
burial may have been a 'final form of indignity inflicted on the corpse of an
individual in consequence of particular characteristics or offenses during
life. But it seems more probable that both were believed to have some effect
on the subject in an after life' (Harman et al., 1981:168).
The manner of beheading is indicated by the location and pattern of
cutmarks on affected skeletal elements. Severing of the head was usually
done in the upper neck region. Damage to anterior surfaces of cervical
vertebrae in some individuals and posterior surfaces of cervical vertebrae in
others indicates that the beheading blow was delivered from in front and
behind at various times, and probably with a variety of tools (Harman et
al., 1981; and see McKinley, 1993; McWhirr et al., 1982). Detailed analysis
of a beheading victim from Hertfordshire shows a series of at least six
carefully placed cuts, including three cuts on the anterior odontoid process,
superior body, lower body, and right inferior articular process (McKinley,
1993). The narrowness of the cuts indicates that the decapitation was
completed with a narrow blade administered as blows to the neck.
4.3.11 European invasion of the Americas
When Europeans began exploration of the New World in the late fifteenth
century, they brought with them a weapons technology that facilitated
their conquest of native populations. The early expeditions were violent
affairs at times, resulting in brutal treatment of natives (see Weber, 1992).
Although these tactics seem repulsive now, they were well within the
bounds of behavior for European males during the Middle Ages. Historical
literature and accounts of violent confrontations (e.g., Wisby) indicate that
conflict behavior between European males was excessively cruel, at least at
times (see also Weber, 1992). The study of native remains dating to the
early period of contact with Europeans has provided a new dimension to
understanding the nature of the interactions between these groups.
La Florida
The availability of hundreds of human skeletal remains from early contact
period Spanish sites in the American Southeast (Georgia and Florida) and
Southwest (New Mexico and Arizona) provides an important perspective
on violent confrontation, especially during the sixteenth and seventeenth
centuries. In Spanish Florida (present northern Florida and coastal
Georgia), the region named La Florida by Juan Ponce de León in 1513,
short-term encounters between native populations and Spaniards occurred
during the exploration period (ca. 1513-1565), followed by long-term,
sustained contact during the mission period (1565-1704) (McEwan, 1993).
The earliest contacts frequently resulted in hostile interactions and deaths
of both Europeans and natives (e.g., Varner & Varner, 1980). The later
mission period was relatively peaceful; long periods of calm were occa-
sionally punctuated by native revolts violently put down by Spanish
military forces (see Hann, 1988).
Analysis of skeletal remains from both periods of Spanish occupation in
the region produced only limited evidence of violent interactions. In
skeletons from the Tatham Mound site on the Gulf coast of western
Florida - the probable location of Hernando de Soto's visit in 1539 -
perimortem trauma caused by the impact of metal weapons is present in 17
skeletal elements (Hutchinson, 1991, 1996). The most dramatic examples of
cut bones show the acromion process of a right scapula severed, and a left
humerus diaphysis cut through completely (Figure 4.7). Neither bone
showed evidence of healing.

Figure 4.7. Cut adult scapula (a) and humerus (b); Tatham Mound, Florida.
(From Hutchinson & Norr, 1994; reproduced with permission of authors and
Wiley-Liss, Inc., a division of John Wiley & Sons, Inc.)

Other long bones in the sample have multiple cutting injuries around
diaphyseal perimeters. In total, the pattern of damage appears to be
associated with purposeful dismemberment (Hutchinson, 1996). It is
possible that Indians using captured Spanish weapons inflicted the injuries.
The early dates of the site (early sixteenth century) suggest that this is an
unlikely possibility. Rather, the injuries were more probably inflicted by
Spaniards (Hutchinson, 1996).
Only one other skeleton in Spanish Florida shows evidence of violent
confrontation. A single high-status male from Mission San Luis de
Talimali (AD 1656-1704) probably died from a gunshot wound, but it is not
possible to identify the perpetrator as Indian or Spaniard (Larsen, Huynh
et al., 1996). Therefore, from the study of violent trauma in skeletal
remains, the legends suggesting an unusually cruel treatment of natives -
at least as indicated from metal-edged or firearms weaponry - are not
substantiated.
American Southwest
Like many regions discussed in this chapter, relatively little is known about
violence and conflict in the American Southwest prior to or at the time of
contact (see Stodder & Martin, 1992). The presence of fortifications and
other defensive architecture and high frequencies of traumatic injury
(including injury resulting from violence) in some later prehistoric sites
suggest that confrontations were frequent (e.g., Stodder & Martin, 1992).
Cranial trauma in the American Southwest shows an increase in prevalence
during the late prehistoric period, which continued into the historic,
mission period (Stodder, 1990, 1994; and see Stodder & Martin, 1992).
Archaeological and historical evidence shows that the high frequency of
cranial trauma in the historic period can be attributed to confrontation
between Spaniards and Indians as well as among Pueblos and between
Pueblos and non-Pueblo native groups. Study of skeletal remains reveals
that most cranial injuries are in adult males (Stodder, 1994). At San
Cristobal and Hawikku, sites with significant contact period skeletal
assemblages, very high frequencies of cranial injuries have been reported
(Stodder, 1994). Twenty per cent and 17% of males have cranial trauma
from the San Cristobal and Hawikku sites, respectively. Paleopathological
markers of stress (e.g., dental defects) indicate that physiological perturba-
tions were generally high during the late prehistoric and contact periods;
this may have contributed to fostering intra- and intergroup hostilities
during this time (Stodder, 1994).
4.3.12 North American military campaigns
The fight for political domination of vast areas of North America,
especially after the seventeenth century, is indicated by the many military
campaigns involving confrontations between warring European nations
prior to American independence, between the fledgling United States and
Great Britain, and between native populations and various Euroamerican
or European interests.
Fort William Henry
During the North American French and Indian War (called the Seven
Years War in Europe), France and Great Britain fought over control of the
vast territories of the Northeast and Canada. During the summer of 1757,
the British garrison surrendered Fort William Henry at the southern end of
Lake George, New York, to French and Canadian troops and their Native
American allies (Starbuck, 1993). As part of the conditions of surrender,
British soldiers and dependents were allowed to leave the fort and return to
British-controlled territory under French protection. The French-allied
Native American warriors felt slighted that scalps and other prizes of
warfare would not be forthcoming. In retaliation, the French-allied Indians
killed the remaining British troops at the fort. Warriors then proceeded to
kill or capture the hundreds of civilians and soldiers under the care of the
French troops.
Analysis of the remains of five adult males buried in a mass grave within
the fort indicates clear evidence of violence-related injuries, bearing testi-
mony to the historical and fictionalized (e.g., Cooper, 1919) accounts of the
battle (Liston & Baker, 1996). Four of the five show premortem trauma to
the legs, reflecting injuries received during the siege of the fort. None of the
injuries are healed, suggesting that they died prior to or during the siege.
The trauma was not lethal, but serious enough to prevent their departure
from the fort. These skeletons display a range of perimortem trauma that
probably represents injuries resulting in death. One individual shows a
series of four cutmarks on the posterior surface of the odontoid process of
the second cervical vertebra. The pattern of trauma suggests that the
soldier had been beheaded from behind. Another individual expresses a
series of radiating fractures through the face and frontal, indicating a
crushing of the skull with a blunt object. All five individuals show notches,
slices, and gashes in skeletal elements of the anterior and posterior trunk
(e.g., scapula, ribs) and pubic region. The morphology of cutmarks evinces
the use of both knives and axes in the mutilation of victims.
Snake Hill
Some of the most vigorous fighting between British and Americans during
the War of 1812 took place in the frontier region between Lake Ontario and
Lake Erie along the Niagara River (Whitehorne, 1991). During the
four-month period in 1814 when American troops successfully captured
and held Fort Erie on the Canadian side of the river, heavy siege and
combat resulted in the deaths of hundreds of soldiers from both British and
American armies. Archaeological excavations at the battle site of Snake
Hill, Ontario, resulted in the recovery of the skeletal remains of American
soldiers from burial and medical waste pits (Owsley et al., 1991; Thomas &
Williamson, 1991). Demographic assessment of the complete or nearly
complete skeletons (n = 26) indicates that most individuals were young
adult males, aged 15 to 29; seven soldiers were older than 30 years at death.
Half (50%) of the individuals in the sample had fractures caused by damage
from firearm projectiles. The general lack of healing in most cases indicates
that these wounds were usually fatal. The highest percentage of fractures
involved ribs (28%; 7/25), followed by femora (25%; 7/28), and crania
(9.1%; 2/22).
Locational assessment of skeletal wounds indicates that most injuries
(69.8%) were above the waist. In regard to the total number of noncranial
and nonvertebral trauma (n = 53), twice the number of fractures occurred
on the left side (56.6%; 30/53) than on the right side (26.4%; 14/53) of battle
victims (excluding skull and vertebrae [17.0%; 9/53]). This pattern may
reflect handedness or body postures during the battle (Owsley et al., 1991).
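Whether the left-side excess is larger than chance would produce can be checked with a simple binomial test. The sketch below, in Python using the scipy library, is my illustration rather than part of the Owsley et al. (1991) analysis; it uses only the side-assignable counts quoted above (30 left, 14 right).

# A minimal sketch: is the left-side excess of fractures at Snake Hill -
# 30 left vs. 14 right among side-assignable injuries - unlikely under a
# 50/50 chance expectation?
from scipy.stats import binomtest

left, right = 30, 14
result = binomtest(left, n=left + right, p=0.5)
print(f"left-side proportion = {left / (left + right):.2f}, "
      f"p = {result.pvalue:.3f}")
# A p-value below 0.05 suggests the lateral bias is systematic - consistent
# with the handedness or body-posture explanations offered in the text.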
Cause of death is especially apparent for several victims. For example, a
young adult male died of a massive head injury in which a firearms
projectile had passed through the left and then right parietals. This
individual also had a large, completely healed cranial depressed fracture
from an earlier injury. Other individuals had fractured facial bones from a
firearms projectile and shattered long bones.
Battle of the Little Bighorn
In present-day Montana in June of 1876, General George Armstrong
Custer and 267 soldiers and civilians were overwhelmed and massacred by
a superior force of Native Americans (see Scott & Fox, 1987). Reminiscent
of prehistoric and historic conflicts between native groups in the region (cf.
Crow Creek, Larson Village site; and above), this battle was part of an
overall pattern of political domination and control of lands and resources
in the Great Plains by opposing groups.
Within two days of the battle, eyewitness accounts described mutilation
(including scalping) and dismemberment not unlike patterns observed in
other Plains samples (e.g., Crow Creek, Larson Village site; see above)
(Scott & Fox, 1987; Snow & Fitzpatrick, 1989). Temporary graves were
hastily prepared at the locations where individuals were killed. Some of the
bodies were identified, but owing to decomposition and mutilation, many
were not. Skeletal fragments of battle victims from erosion and limited test
excavations provide the basis for detailed study of battle injuries (Snow &
Fitzpatrick, 1989). Analysis of 375 partial and complete bones and 36 teeth
from a minimum of 34 individuals indicates three primary types of
perimortem trauma, including blunt-force trauma, cutmarks, and bullet
wounds (Snow & Fitzpatrick, 1989; Willey & Scott, 1996). Blunt-force
trauma involved massive fragmentation of crania, and, to a lesser extent,
postcranial elements. All 14 partial crania showed massive injuries due to
heavy blows. Additionally, the presence of cutmarks on various skeletal
elements indicates widespread perimortem mutilation. Several different
forms of cutmarks, ranging from fine incisions to pronounced incisions,
reflect the use of metal arrows or knives.
Cutmarks on a variety of skeletal elements indicate the high degree of
mutilation of battle victims. One individual, for example, has cutmarks on
a humerus head and sternum. The use of heavy metal-edged weapons (e.g.,
hatchets) is clearly indicated in several instances. For example, a completely
transected cervical vertebra indicates decapitation by a single blow to the
neck. One individual shows distinctive sets of chopping blows to the proxi-
mal ends of the left and right femora, indicating purposeful dismember-
ment.
In addition to traditional native weaponry, the presence of gunshot
wounds in six individuals indicates the use of firearms by Native Americans
at the battle site (Willey & Scott, 1996). Individual M85, for example, had
at least two upper body gunshot wounds, including an entrance wound on a
rib margin and shattered ribs from another wound. A third wound is
represented by a bullet or bullet fragment embedded in the distal left radius.
This individual also displays cutmarks on his clavicle. Three gunshot
wounds are located in the crania, one entering from the back and two
entering from the right side and exiting from the left.
In summary, based on the study of human remains from the battle site, a
sequence of events can be reconstructed: namely, soldiers were wounded,
killed (frequently with blunt-force trauma to the skull), and mutilated
(Snow & Fitzpatrick, 1989). As would be expected, except for the use of
firearms, the pattern of killing and mutilation of victims is strikingly similar
to that observed in other North American native populations from the
Great Plains and Midwest discussed in this chapter (e.g., Crow Creek,
Larson Village site, Norris Farms). A consideration of the direction of
entry wounds is consistent with historical records indicating that the battle
was chaotic (Willey & Scott, 1996).
4.4 Medical care and surgical intervention
Depending on the severity of traumatic injury - originating from accidental
or violent circumstances - the survivor is often debilitated and unable to
perform key functions, such as acquisition of food and other essential
resources. For purely economic reasons, it is in the best interest of the social
group to ensure that the injured person returns to a good state of health and
well-being. Ethnographic and historical accounts of nonindustrial societies
report a tremendous variation in the care of traumatic injuries, ranging
from alignment of fractures, use of splints, and immobilization to oral
medicines and other treatments (e.g., Ackerknecht, 1967; Ortner &
Putschar, 1985; Roberts, 1988). For example, lack of angulation or
significant difference in length of long bones in the fractured vs. the normal
side has been documented in the Libben population from the American
Midwest (Lovejoy & Heiple, 1981) and in Medieval populations from
England (Grauer & Roberts, 1996).
Similarly, in prehistoric Australian skeletal series, most fractures show
proper unification and alignment (Kricun, 1994; Webb, 1989, 1995), which
Webb regards as evidence for 'a firm commitment to care and concern for
the injured patient' (1995:200). The presence of well healed amputations in
two individuals from the central Murray Valley region suggests a knowl-
edge of this type of surgical procedure before the arrival of Europeans
(Webb, 1995). In 5000-6000 Nubian skeletons, some 160 fractures are
present, most being well healed and aligned, with little evidence of infection
(Elliot Smith & Wood Jones, 1910). Bark splints were found associated
with limb fractures in a couple of instances (Elliot Smith, 1908).
Some earlier societies appear to have lacked either the ability for or an
interest in the treatment of fractures. For example, the fourteenth century
Wisby battle victims had a high degree of angulation of healed fractures,
suggesting only minimal levels of treatment (Ingelmark, 1939). The battle
victims are largely drawn from the peasant class, and may not have had
access to the same medical care afforded the nobility.
Temporal assessment of a large sample of Roman, Anglo-Saxon, and
Medieval long bone fractures reveals changes in injury management in
these populations (Roberts, 1988). During the earlier Roman and Anglo-
Saxon periods, healing of fractures was generally good, suggesting that
fracture sites were correctly reduced and aligned, probably with some type
of support (Roberts, 1988). Treatment was so widespread in the Roman
period that deformities from poorly healed or misaligned fractures were no
more common than in living populations. Medieval management of
fractures appears to have been less efficient than in earlier periods. There is
a generally higher prevalence of deformation and angulation of long bones
(Roberts, 1988). Additionally, many fractures (35/59) from this period had
associated periosteal infections, leading Roberts to conclude that 'wound
treatment was variable, haphazard and largely useless' (1988:213). If this
analysis and the findings based on the study of the Wisby sample are
representative, it appears that northern European populations living
during the Middle Ages were far less knowledgeable about fracture
management than their forebears.
Treatment of head injuries is suggested by the association between
trephination and cranial trauma in a number of settings. In Denmark,
trephinations frequently accompany cranial trauma, including fractures
and sword or axe cuts (Bennike, 1985). Most of the Danish trephinations
are on the left side of the cranial vault, which coincides with the location of
skull injuries received in battle. This pattern is also consistent with the
predominance of trephinations in males, the primary participants in battle
and interpersonal conflict.
Precontact sites in Andean South America show an abundance of
trephination, especially in present-day Peru and Bolivia, where hundreds of
cases are reported. Analysis of crania from Peru reveals that adult males
comprise the majority of trephined individuals (73.0%), but adult females
(24.6%) and some juveniles (2.4%) are also included (Verano & Williams,
1992). Examination of Peruvian crania from four central highlands
populations - San Damian, Cinco Cerros, Matucana, and Huarochiri -
indicates a very high frequency of healed cranial injuries, mostly depressed
fractures from violent confrontations: 55.7% of males, 31.6% of females,
and 26.9% of juveniles are so affected (Verano & Williams, 1992). The
association between cranial injuries and trephination in this region - 34%
of crania have both - indicates that this form of surgery was probably
performed as a treatment for head trauma. The prevalence of trephination
was probably higher, because evidence of injury may have been removed by
the procedure. Diachronic comparisons indicate that the frequency of well
healed trephinations increased over the 2000-year period (400 BC-AD 1532).
The highest rate of success was from the latest precontact period (Inca),
including some individuals with as many as five to seven healed trephina-
tions. The apparent increase in survival may have been due to the reduction
in size of the trephination opening as well as to the greater use of the
circular grooving technique of excision, thus reducing the risk of dura
mater penetration and neurological damage (Verano & Williams, 1992).
The lack of an association between trephination and head injury in other
settings suggests that there may have been other motivations, including
treatment of real or imagined ills. For example, none of the few trephined
crania from North America are associated with cranial injury (e.g., Ortner
& Putschar, 1985; Stone & Miles, 1990).
Evidence for treatment of dental disease has been identified in the form
of alterations of teeth, namely drilling in tooth roots (e.g., Bennike, 1985;
Schwartz et al., 1995) and crowns (e.g., Koritzer, 1968). These holes are
usually found in association with carious or otherwise diseased teeth,
indicating a therapeutic intention.
In summary, the study of samples worldwide reveals that injuries
sustained either by accidental means (e.g., limb fractures) or during violent
confrontations (e.g., cranial depressed fractures) were treated in some
earlier societies. These findings indicate that many past societies were
aware that proper restoration of function could be brought about only by
appropriate treatment protocols (see Roberts, 1988).
4.5 Interpreting skeletal trauma
A variety of injuries are well documented in earlier societies, especially
those sustained either accidentally or violently. Accidental injuries gen-
erally reflect the hazards of day-to-day living. The inchoate analysis
presented in this chapter indicates that walking on difficult terrains or
engaging in behaviors requiring high levels of activity tends to result in
elevated prevalences of skeletal injuries, such as lower limb, Colles's, and
rib fractures.
An abundance of skeletal data exist on violence and nonlethal and lethal
trauma. Unfortunately, many cases presented in the paleopathological
literature are imprecisely described or are presented in obscure sources,
such as archaeology contract reports and long-forgotten monographs and
articles. Only rarely are findings on violent injury couched in broader
discussions of interest to anthropologists generally, such as social relations
and the measurement or documentation of conflict and warfare (and see
Milner et al., 1991; Redmond, 1994).
Studies emphasizing the integration of social, political, or economic
systems as they affect conflict-related behavior are beginning to emerge for
a number of regions. For example, in the American Midwest and
Southeast, numerous cases of projectile injuries, scalping and other forms
of mutilation, and lethal trauma covering a long temporal span are
reported, which provide compelling evidence of violence and warfare in
prehistoric societies (reviewed by Milner, 1995). This evidence suggests that
there is violence relatively early in prehistory (e.g., western Tennessee
Valley; Smith, 1995, 1997), but temporal comparisons indicate that
conflicts leading to injury increased in frequency, especially during the late
prehistoric period (after AD 1000) in the Eastern Woodlands of North
America. This regional trend appears to be related to an increase in social
tensions due to population increase, sedentism, increasing social complex-
ity, and increased focus on restricted and valued resources, especially
high-yielding domesticated plant foods (Bamforth, 1994; Eisenberg,
1986b; Milner, 1995; Milner et al., 1991). This evidence runs counter to the
arguments raised by various authors (e.g., Ferguson, 1990, 1995; Ferguson
& Whitehead, 1992) who contend that violence is either missing or minimal
in precontact New World tribal societies, having been a result of disequilib-
rium arising from Western contact. Certainly, social and cultural disrup-
tions arising from contacts with expanding Western, state-level societies
resulted in increased conflicts within some regions (see Bamforth, 1994).
The skeletal evidence from several well studied regions indicates, however,
that conflicts leading to injury were commonplace in a variety of settings.
Skeletal injury resulting from violence, then, represents an important
indicator of environmental stress in human populations.
It is important to recognize variability in violence-related injury within
larger regions. For example, in the Tombigbee Valley of Alabama, a
decrease in skeletal injuries due to violence coincided with an increase in
dispersion of human settlement and increased political centralization from
the period of about AD 900 to 1550 (Steponaitis, 1991). Thus, a reduced
circumscription of the population, perhaps brought about by new political
forces, was likely to have had an influence on conflict in this setting. The
influence of political factors may explain why some regions undergoing an
increase in population density and social complexity do not show an
increase in violent trauma (e.g., Mesolithic Europe: Constandse-Wester-
mann & Newell, 1984). Violent injury occurs in individuals and popula-
tions who were the victims of conflict. Thus, cemetery assemblages
representing groups who were the winners would not be expected to exhibit
the frequency of injury seen on the losing end of violent encounters.
Elevated prevalence of violence and injury mortality in some prehistoric
and historic settings and their possible relationship with increased popula-
tion density and/or resource circumscription is similar to that seen in
recent, twentieth-century populations. For example, Relethford &
Mahoney (1991) documented markedly higher rates of injury mortality in
the most densely populated areas of New York State (excluding New York
City). These similarities may reflect common themes between past and
recent humans, such as high density of population and social inequalities
that serve to promote violence. Population density is a complex composite
of a number of factors, such as the physical and sociocultural
environments, demographic, cultural, and social influences, and individual
behavior. Therefore, although these apparent similarities are informative,
it is important to identify specific causal factors in specific settings before
drawing conclusions regarding the relationship between population dis-
tribution and injury mortality in humans generally.
In some regions, clear patterns of violence have begun to emerge. For
example, an increasingly hostile landscape in the Eastern Woodlands
generally is corroborated by the archaeological evidence of increase in
defensive construction in later prehistory (e.g., Milner et al., 1991). Similar
patterns of increasing conflict and aggression in the final centuries before
European conquest are indicated in the southern California Santa Barbara
Channel islands, the American Southwest, Great Plains, and Arctic. These
regional investigations suggest that resource productivity and environ-
mental stability may have had a strong influence on the presence or degree
of conflict in the past. Similar patterns of increase in evidence of violence
are also documented in Old World settings (e.g., Kennedy, 1994).
Overall, bioarchaeological studies indicate that violence and conflict are
not random events, but are strongly influenced by extrinsic factors, such as
resource depletion. Dietary deprivation may have been a motivation for
cannibalism. Cannibalism is poorly understood in archaeological settings
where historical evidence is limited, such as in the prehistoric American
Southwest. Historical records in a number of settings are informative. For
example, cannibalism may have been symptomatic of a larger pattern of
animosity and aggression between groups (e.g., Saunaktuk) or a key part of
ritualized violence (e.g., Teotihuacan).
Study of trauma in skeletal remains reveals that the areas of the body
attacked are highly patterned in comparisons of different societies. Walker
(1997b) observed that in modern industrial Western societies (e.g., United
States) the head and neck are highly favored targets of attack, probably for
both strategic and symbolic reasons. He argues that the face is an appealing
target, because the injuries are especially painful. The face and head
generally bleed profusely and bruise easily, and this may symbolize the
aggressor's dominance (Walker, 1997b). This probably explains why the
most highly traumatized focal point of the body in recent urban popula-
tions is the nose, followed by the zygoma and the mandible (e.g., Hussain et
al., 1994).
Many past societies show a penchant for head injury, but these injuries
are usually directed at the vault and not the face or dentition. Dental
fractures are present in archaeological remains, but they are rare (Alexan-
dersen, 1967; see Leigh, 1929; Lukacs & Hemphill, 1990; also J. R. Lukacs,
unpublished manuscript, for regional studies documenting dental trauma).
The location of the injury on the body provides insight into some of the
details of conflict between the individuals involved. For example, many
cranial injuries are found on the left side of the frontal or other anterior
elements, indicating that a right-handed attacker successfully engaged the
weapon while facing the victim (e.g., native Australian males). A more
haphazard pattern of cranial injury (e.g., prehistoric Michigan) or higher
frequency of trauma on the right side or posterior vault indicates that
injuries were sustained while the victim was fleeing the attacker or
perhaps while lying prone (e.g., Wisby). This pattern is more common in
women than in men in some settings (e.g., Australia, Michigan), suggesting
that aggression was directed at women. Ethnographic evidence reveals
that, although the aggressor was often an adult male, attack by adult
females on other females (and on males) occurred with no small amount of
frequency in some settings (Burbank, 1994).
Historical documentation suggests that children have long been a target
of violent injury and death. deMause (1974), for example, regarded child
abuse as widespread in Europe prior to the eighteenth century. Neverthe-
less, examination of thousands of archaeological skeletons reveals no
evidence of skeletal trauma - localized trauma-induced subperiosteal
lesions in multiple stages of healing, and perimortem fracture - associated
with battered-child syndrome (Walker, 1997a; Walker et al., 1997).
Certainly, juveniles in earlier societies were victims of homicide and
violence (e.g., Crow Creek, Norris Farms). Juvenile skeletons, however,
lack the injuries associated with long-term abuse. This suggests that, like
the pattern of facial injuries in twentieth century Western societies, child
abuse resulting in severe skeletal trauma is primarily a modern phenom-
enon. Walker (1997a) suggests that the rise of childhood abuse is due to the
loss of social controls of violent behavior in recent urban settings.
Technological factors are important in interpreting patterns and types of
skeletal injuries. The introduction of the bow-and-arrow is linked with an
increase in lethal conflict (e.g., southern California). Prior to the invention
of firearms, violence-related injuries were caused primarily by projectiles,
cutting, and blunt force.
Study of a number of skeletal samples possessing both lethal and
nonlethal forms of trauma provides insight into the previous history of
interpersonal aggression both within and between past societies. The
Wisby, Crow Creek, and Norris Farms victims display numerous healed
injuries (e.g., cranial depressed fractures) that reflect a long and well
established history of conflict well prior to the event or events resulting in
widespread death. In a sense, then, these injuries foreshadowed a future act
(e.g., a major battle) that later resulted in more widespread violence.
Study of human remains from these sites suggests that debilitating
injuries, poor health, or generally high levels of physiological stress may
have increased the susceptibility of a population to attack and defeat. The
Norris Farms and Crow Creek skeletons display numerous pathological
indicators of stress, including iron deficiency anemia, dental defects,
tuberculosis, and generalized infection, that reflect compromised health
and the reduced ability to perform subsistence and other arduous tasks
(Bamforth, 1994; Milner et al., 1991). Although this pattern of poor health
does not explain the demise of the population, it suggests that they may
have had a reduced ability to mitigate hostile environments. Some
populations display fractures and other debilitating conditions that limited
their ability to protect themselves or even flee a more powerful adversary
(e.g., Wisby).
Although both nonlethal and lethal forms of violent injury are highly
prevalent in many of the populations discussed in this chapter, the
dominance of one category over the other informs our understanding of the
intentions of the attacker. For example, the higher prevalence of nonlethal
than lethal injury in a number of settings - Australia, Santa Barbara
Channel, and Easter Island - indicates that injury was meant to maim and
not to kill the victim. Death of the opponent was clearly the preferred
outcome of attack in the Middle Ages in Europe (e.g., Wisby), late
prehistoric Great Plains and Midwest (e.g., Crow Creek, Norris Farms), in
the Arctic (e.g., Saunaktuk), and in historic North America (e.g., Snake
Hill). In prehistoric settings, it is usually not possible to determine the
reasons for preference of lethal or nonlethal forms of violence. In
California, the shift to lethal forms of injury from projectiles in later
prehistory may have been influenced by the change in weapons technology
coupled with increasing resource stress.
Clear patterns of mutilation of victims are well documented in a number
of prehistoric and other New World settings. In North America, the
evidence for removal of soft tissue and skin from the cranial region -
especially scalping - is abundant. Typically, the scalp was removed by first
slicing the skin along the frontal and parietals and then peeling back the
skin (e.g., Norris Farms, Koger's Island), but other approaches involved
removal of facial and other tissues (e.g., Saunaktuk). Mutilation was a
highly visible behavior. In addition to scalps, there was removal of tongues,
noses, limbs, and heads from the near-dead or deceased (e.g., Great Plains).
A more profound mutilation of the head than scalping was decapitation.
Decapitation was practiced in the New World and Old World, and for a
variety of reasons. In Roman Britain (and throughout history), this was a
preferred form of execution in some groups. In northern Europe, the head
of the victim was sometimes placed between the legs (Denmark, England),
perhaps as the ultimate insult. Other unique and highly localized forms of
body treatment of the living victim were probably practiced. For example,
gouging of the knees at Saunaktuk in the Arctic may have been associated
with a practice of dragging the victim through the village prior to their
death.
Traumatic injury data are important for dispelling prejudices and
assumptions about past societies. For example, hunter-gatherer societies
around the world are often characterized as peaceful inhabitants of
stress-free environments living in a state of blissful repose (e.g., Lee &
DeVore, 1968; Service, 1966; and see discussions by Burbank, 1994;
Fienup-Riordan, 1994). This characterization may reflect the fact that
anthropologists doing fieldwork among these societies are guests. What
guest is going to go back home and write about the unpleasant things they
observed, especially with regard to violent encounters between individuals
(and see Keeley, 1996; Erchak, 1996)? Many anthropologists underplay the
negative or offensive, avoiding realism in describing social life. In fact, a
number of cultures described as nonviolent or peaceful have homicide rates
far exceeding those of some Western nations (Knauft, 1987). Evidence drawn
from the study of prehistoric remains from a number of settings presents a
very different picture, both about foragers and about other types of
societies. The point is not to replace peaceful characterizations of earlier
societies with violent ones. Rather, these findings underscore the import-
ance of replacing an incorrect image with one that fits the evidence. This
approach is critical for informing our perspective on past groups as
functioning societies rather than as images of what earlier social behavior
must have been like. This new-found precision adds a more complete
historical context for the study of recent human behavior. Anthropologists
and others seem to employ - either consciously or unconsciously - their
own cultural and social assumptions about earlier societies in order to
'remember' the past. We project these assumptions into a past that seems to
reflect current and highly biased perspectives on the condition of human-
kind, be they peaceful or violent. These skeletal data help us to reconstruct
and interpret trauma and violence in a more comprehensive and accurate
manner.
4.6 Summary and conclusions
A range of bioarchaeological evidence helps to inform our understanding
of accidental trauma and violence-related injury and their relationship to
behavior in earlier societies. Human populations living in difficult circum-
stances have an elevated prevalence of skeletal injuries due to accidental
causes. The skeletal record of conflict in the past is highly visible in a
number of populations worldwide. Study of remains from a wide variety of
contexts helps to provide better understanding of the circumstances of
violence, whether due to intra- or intergroup conflicts. Some conflict may
result in cannibalism, but, from the study of human remains alone, it is
difficult to identify the cause of this practice. Ritualized violence, including
sacrifice and elaborate body treatment, is also highly visible in some
archaeological settings. Although limited in scope, its study provides an
important link between culture and treatment of the body, both in life and
in death. Regardless of the circumstances of death, contextual data are
essential for its interpretation, including resource availability, the history
of intra- and intergroup social relationships, and weapons technology.
5 Activity patterns: 1. Articular and
muscular modifications
5.1 Introduction
Physical activity is a defining characteristic of human adaptive regimes.
Hunter-gatherers, for example, are often characterized as highly mobile,
hard-working, and barely able to eke out an existence. In contrast,
agriculturalists are seen as having it pretty good - they are settled in one
place, they have plenty to eat, and their work loads are light. In his popular
and influential archaeology textbook, Braidwood (1967:113) distinguished
hunter-gatherers as leading 'a savage's existence, and a very tough one ...
following animals just to kill them to eat, or moving from one berry patch
to another (and) living just like an animal'. Ethnographic and other
research calls into question these simplistic portrayals of economic sys-
tems. Following the publication of Lee & DeVore's (1968) Man the Hunter
conference volume, and especially Lee's (1979) provocative findings
regarding work behavior and resource acquisition among the !Kung in
northern Botswana, a consensus emerged that, contrary to the traditional
Hobbesian depiction of hunter-gatherer lifeways as 'nasty, brutish, and
short', prehistoric foragers were not subject to overbearing amounts of
work, and life overall for them was leisurely, plentiful, and confident (see
Sahlins, 1972). More importantly, these developments fostered a wider
discussion of activity and physical behavior in humans, present and past,
leading to the conclusion that human adaptive systems are highly variable.
As a result, it is now clear that it is not possible to make blanket statements
about the nature of workloads or other aspects of lifestyle in foragers and
farmers (Kelly, 1992, 1995; Larsen, 1995).
Workload and activity have enormous implications for the demographic
history of a population. Study of living humans indicates, for example, that
demanding physical activity in reproductively mature females results in
reduced ovarian function and fecundity (e.g., Ellison, 1994; Ellison et al.,
1993; Jasienska & Ellison, 1993). Therefore, the identification of workload
and patterns of physical activity from the study of human remains may
provide clues to the understanding of lower birth rates and reduced fertility
in some past populations (see Larsen, 1995).
The study of pathological and nonpathological changes of articular
joints and behaviorally related modifications of nonarticular regions offers
a wealth of information on activity and workload in past populations.
5.2 Articular joints and their function
Two types of articular joints, amphiarthrodial and diarthrodial (synovial),
are important for interpreting pathological and other modifications in a
behavioral context. Amphiarthrodial joints are somewhat mobile but serve
primarily to stabilize specific regions of the skeleton (e.g., pubic symphysis
for the anterior pelvis, intervertebral bodies for the back). The ends of
bones constituting diarthrodial joints (e.g., knee, interphalangeal) articu-
late with each other within a fibrous capsule, and the articular surfaces are
covered with highly lubricated hyaline cartilage. Depending upon the
shape of the articular surfaces, the anatomy of the capsule, and the
ligamentous connections across the joint, freedom of movement is exten-
sive. Thus, in addition to providing some stability (Norkin & Levangie,
1983), diarthrodial joints function primarily in mobility roles, such as
extension and flexion of the interphalangeal joints for grasping, and
extension and flexion of the knees for walking and running.
5.3 Articular joint pathology
Osteophytosis and osteoarthritis are degenerative disorders affecting the
amphiarthrodial and diarthrodial joints, respectively. These terms are
misnomers in that they imply an inflammatory response, which is not
always the case (Bridges, 1992; Hough & Sokoloff, 1989; Moskowitz, 1989;
although see DeRousseau, 1988). A commonly used synonym for these
disorders is degenerative joint disease. Degenerative changes involving
surfaces and margins of vertebral bodies are referred to as osteoarthrosis,
because the articulations of the intervertebral bodies are not diarthrodial
joints. The pathological responses associated with osteoarthrosis are
virtually identical to those of osteoarthritis, and they share the same
etiology and pathogenesis (e.g., see Jurmain, 1990). For the purposes of
this discussion, I subsume the two terms under osteoarthritis.
5.3.1 Osteoarthritis
Osteoarthritis is a multifactorial disorder representing a pattern of re-
sponses to various predisposing factors (Hoffman, 1993; Hough &
Sokoloff, 1989; Rogers & Waldron, 1995). Epidemiologists and anthropol-
ogists observe a great deal of worldwide variation in osteoarthritis in
relation to age. For example, young adults and older juveniles in some
human populations express a relatively high frequency of the condition
(e.g., Chapman, 1972; Chesterman, 1983; Larsen, Ruff et al., 1995). In
urbanized industrial societies, osteoarthritis rarely occurs before age 30
(Dekker et al., 1992).
There is some variation in osteoarthritis prevalence and severity with
climate (see Moskowitz, 1989); symptoms appear to be somewhat less
severe in warm climates, and this may be related to temperature, sun
exposure, and amount of clothing worn (Moskowitz, 1989). Body weight
can also be an important factor. Clinical and epidemiological studies
indicate a greater incidence of osteoarthritis in obese individuals, especially
in the weight-bearing joints (e.g., Kellgren, 1961; Kellgren & Lawrence,
1958; Leach et al., 1973; Moskowitz, 1989). Obese women have an
especially high incidence of knee osteoarthritis, which is related to added
mechanical stresses on this major weight-bearing joint (Cooper et al., 1994;
Moskowitz, 1989; Spector et al., 1994).
Jurmain (1977a) documents a greater increase in osteoarthritis in later
middle age among females than among adult males of the same age. In
twentieth century United States groups, more women than men have
osteoarthritis after age 55 (Roberts & Burch, 1966). Similarly, the disorder
is more severe and more generalized in women than in men in Great Britain
(Kellgren et al., 1963). These studies suggest that there may be a hormonal
influence in osteoarthritis. Other factors such as metabolism, nutrition,
bone density, vascular deficiencies, infection, trauma, and heredity can
influence the disorder (see Duncan, 1979; Engel, 1968; Jurmain, 1977a;
McCarty & Koopman, 1993; Moskowitz, 1989; Ortner & Putschar, 1985;
Rogers & Waldron, 1995).
The primary contributing factors to osteoarthritis are mechanical stress
and physical activity (Duncan, 1979; Hough & Sokoloff, 1989; Jurmain,
1977a, 1977b; McKeag, 1992; Nuki, 1980; Peyron, 1986; Radin, 1982,
1983; Radin et al., 1972). As an individual ages, a stressed joint or joints will
probably exhibit the biological concomitants of osteoarthritis. Radin
(1982:20) defined osteoarthritis as 'the result of a physiological imbalance
between the mechanical stress in the joint tissue and the ability of the joint
tissues to withstand ... stress'.
The mechanical stress argument is supported by various findings. For
example, industrial laborers show patterns of articular degeneration in
relation to particular physical activities in the workplace. Strenuous lifting
by miners causes articular change in the hips, knees, and vertebrae
(Anderson et al., 1962; Kellgren & Lawrence, 1958; Lawrence, 1977); use of
pneumatic tools by ship builders and others results in similar modifications
(e.g., Lawrence, 1955, 1961); lifting of long tongs to move hot metals by
foundry workers results in degenerative changes in the elbow (Hough &
Sokoloff, 1989); and repetitive activity involving the hands in cotton mill
workers results in different patterns of osteoarthritis (Hadler, 1977; Hadler
et al., 1978; and see Merbs, 1983). Other findings for manual laborers,
farmers, ballet dancers, various types of athletes, and those who engage in
rigorous exercise generally support these observations (see Cooper et al.,
1994; Croft, Coggon et al., 1992; Croft, Cooper et al., 1992; Forsberg &
Nilsson, 1992; Jurmain, 1977a; Lawrence, 1977; McKeag, 1992;
Nakamura et al., 1993; Stenlund, 1993). New epidemiological findings are
providing important corroboration for these conclusions linking mechan-
ical demand and osteoarthritis. Comparisons reveal markedly higher
prevalence of knee and hip osteoarthritis in North Carolina rural popula-
tions than in the U.S. (primarily urban) population as a whole (hip: 25.1% vs.
2.7% in the 55-64 age cohort; Jordan et al., 1995). These differences reflect
the greater physical demands of the rural lifestyle in the United States.
The linkages between physical activity and osteoarthritis are not
straightforward. The hand bones of weavers from the Spitalfields, London,
skeletal series have no more osteoarthritis than hand bones from the
general sample (Waldron, 1994). Manual laborers in this series have no
more or less osteoarthritis than the population as a whole. These findings
and a survey of inconsistencies found in the epidemiological literature led
Waldron to conclude 'that there is no convincing evidence of a consistent
relationship between a particular occupation and a particular form of
osteoarthritis' (1994:94). Therefore, although articular pathology relating
to activity offers important insight into behavioral characteristics of
human populations in a general sense, the identification of specific
activities or occupations from individual remains may not always be
possible.
Epidemiological studies provide an important baseline for interpreting
osteoarthritis in past human populations. Bioarchaeological and epi-
demiological studies are not strictly comparable, as the latter are almost
always based on clinical contexts, either radiological examinations or
patient interviews, which do not identify subtle degenerative changes seen
in actual skeletal specimens that bioarchaeologists study. Hard tissue
changes observed in the clinical setting are not strictly comparable to those
observed in archaeological or other types of skeletal collections.
The pathophysiology of osteoarthritis is poorly understood regarding
the relationship between hyaline cartilage and bone changes. Some have
Articular joint patho/ogy
Figure 5.1. Lumbar vertebra with advanced marginal body lipping
(osteoarthritis); anatomical specimen. (From Larsen, 1987; photograph by
Barry Stark; reproduced with permission of Academic Press, Inc.)
argued that changes in cartilage - including fibrillation or tearing - precede
bony responses; others contend that minute changes in subchondral bone
precede cartilaginous changes (e.g., Radin, 1982). For archaeological
remains, the exact order of tissue response to mechanical stress is
immaterial, because, regardless of the order of events, the skeletal changes
arising from osteoarthritis are universal, including proliferative exophytic
growths of new bone on joint margins ('osteophytes' or 'lipping') and/or
erosion of bone on joint surfaces (Figure 5.1). In some joints, the
cartilaginous tissue covering the articular surface has failed, resulting in
pitting or rarefaction of the surface. In instances in which the cartilage has
disintegrated altogether, the articular surface becomes polished, owing to
direct bone-on-bone contact (Figure 5.2). Because the surface has a
glistening appearance suggestive of ivory, the polished area is called
eburnation (Hough & Sokoloff, 1989; Merbs, 1983; Rogers et al., 1987). In
the knee and elbow, deep, parallel grooves may be present on the eburnated
surface (Ortner & Putschar, 1985). The presence of eburnation indicates
that, although the articular cartilage is missing, the joint was still active at
the time of death (see Rogers & Waldron, 1987, 1995).
Osteophytes vary from fine tuft-like, barely perceptible protrusions to
Figure 5.2. Distal right humerus showing eburnation (osteoarthritis);
anatomical specimen. (From Larsen, 1987; photograph by Barry Stark;
reproduced with permission of Academic Press, Inc.)
large projections of spiculated bone. Even in the extreme, diarthrodial
joints do not usually fuse. In spinal osteoarthritis, the marginal osteophytes
of two adjacent vertebrae may unite, thus forming a bridge of continuous
bone. This change (ankylosis) is also accompanied by reduction in disk
space separating the two vertebral bodies, and hence, marked reduction in
mobility of the spine.
Compression or crush fracture of anterior vertebral bodies - an
occasional concomitant of spinal osteoarthritis - gives them a wedge-
shaped appearance (Figure 5.3). Additionally, herniation of the interver-
tebral disk results in irregular depressions on intervertebral body surfaces
called Schmorl's nodes (Hough & Sokoloff, 1989; Merbs, 1989a; Rogers &
Waldron, 1995; Saluja et al., 1986; Schmorl & Junghanns, 1971).
Biological anthropologists, anatomists, and others have systematically
collected data on osteoarthritis for well over a century. Wells referred to
osteoarthritis as 'the most useful of all diseases for reconstructing the life
style of early populations' (Wells, 1982:152). Osteoarthritis is present in all
human populations, and, regardless of etiology, the patterns documented
and interpreted by bioarchaeologists provide a picture of the cumulative
effects of mechanical stress and age on the body in different human groups.
Figure 5.3. Compression fracture of adult thoracic vertebrae; Stillwater Marsh,
Nevada. (From Larsen, Ruff et al., 1995; reproduced with permission of
American Museum of Natural History.)
Owing to the lengthy history of study as well as to the ubiquity of
osteoarthritis in skeletal samples, there is a voluminous literature on
frequencies and prevalences both in living and in past human groups (e.g.,
see Bridges, 1992).
5.3.2 Population-specific patterns of osteoarthritis
Early hominids
Osteoarthritis is present in the earliest hominids, providing an important
perspective on activity patterns in the remote past. The three-million-year-
old Hadar australopithecine, A.L. 288-1 ('Lucy'), displays a distinctive
anterior-posterior elongation of thoracic vertebral bodies, marginal lip-
ping, disk space reduction, and intervertebral disk collapse (Cook et al.,
1983). These modifications reflect an extraordinarily demanding activity
repertoire, including lifting and carrying. The conspicuous anterior-
posterior vertebral body elongation may be caused by various activities
that involve extreme ventral flexion of the body trunk.
A number of Neandertal skeletons have distinctive patterns of osteoar-
thritis that are useful for reconstructing posture and activity in the late
Pleistocene, providing a context for interpreting behavior of antecedents to
modern humans. Based on his study of the La Chapelle-aux-Saints
skeleton, Boule reconstructed the individual 'as an almost hunchbacked
creature with head thrust forward, knees habitually bent, and flat, inverted
feet, moving along with a shuffling, uncertain gait' (Straus & Cave,
1957:348). This image of Neandertal locomotion served as a model for
behavioral reconstruction, and it reinforced the popular image of Neander-
tals as less than human. Straus & Cave (1957) suggested that Boule
misinterpreted key aspects of the anatomy of the skeleton and overlooked
the possibility that severe osteoarthritis may have prevented the individual
from normal perambulation. Recent analysis of the La Chapelle skeleton
reveals the presence of widespread degenerative pathology (especially
marginal lipping) involving the temporomandibular joint, the occipital
condyles, the lower cervical vertebrae, and the thoracic vertebrae (the
T1-T2 exhibits eburnation, and the T6, T10, and T11 have possible
eburnation) (Trinkaus, 1985). The left acetabulum shows extreme lipping
and eburnation. Although the right acetabulum is missing, the head of the
right femur is normal, suggesting that the hip osteoarthritis is unilateral.
The severe osteoarthritis in the left hip suggests that it would have been
painful for the individual to walk or run. The overall pattern of degen-
erative pathology indicates that locomotor abilities may have been some-
what limited, but certainly not in the manner described by Boule (Trinkaus,
1985).
Osteoarthritis is also extensive in the Neandertal adults (n = 6) from
Shanidar, Iraq (Trinkaus, 1983). The widespread nature of articular
pathology in these individuals reflects a highly physically demanding
lifeway for these archaic Homo sapiens. This conclusion is confirmed by
other lines of evidence, such as the high overall robusticity and bone
strength in these hominids (e.g., Lovejoy & Trinkaus, 1980; Ruff et al.,
1993; Trinkaus, 1984; see Chapter 6).
Sadlermiut Eskimos
The most comprehensive bioarchaeological study of osteoarthritis is the
investigation of Sadlermiut Eskimo (Southampton Island, Northwest
Territories) skeletons by Merbs (1983). Skeletons in this series display a
distinctive patterning of degenerative articular pathology, which generally
matches activities observed ethnographically. Adult males show bilateral
osteoarthritis of the acromioclavicular joint, a joint involved mostly in the
elevation of the arm, and hypertrophy of the deltoid tuberosity of the
proximal humerus. A number of potential activities might cause this
distinctive pattern of articular pathology and skeletal morphology, but
kayak paddling is the most likely. Extreme loading of the shoulder and
upper arm during kayaking probably contributed to this highly specific
pattern of osteoarthritis (Merbs, 1983).
Sadlermiut adult females have extreme levels of degenerative changes
in the temporomandibular joint - twice the prevalence of males. This
pattern suggests heavy loading of the mandible, especially in women. As
documented ethnographically, adult females habitually soften animal
hides with their dentitions, which may contribute to deterioration of this
joint (Merbs, 1983). Both adult females and males have a high prevalence
of postcranial osteoarthritis, which reflects their physically demand-
ing lifestyles. For example, widespread and severe vertebral osteoarthritis
indicates that the backs of both sexes were subjected to marked com-
pressive forces, such as those that occur during sledding and tobog-
ganing.
Population comparisons
Studies of archaic hominids and modern Eskimos underscore the highly
variable nature of osteoarthritis. In order to assess general patterns of
variation in humans, comparisons of many different skeletal samples
should provide a means of identifying patterns and levels of physical
activity on a broad basis. Bridges (1992) attempted this by reviewing
published studies on appendicular (shoulder, elbow, hip, knee) and axial
(vertebrae) osteoarthritis in native populations from North America. In the
25 skeletal samples included in her review, osteoarthritis shows the highest
prevalence in the knee for 17 samples; elbow osteoarthritis is either in first
or second place for 15 samples. No clear association between osteoarthritis
and subsistence mode emerges in comparison of hunter-gatherers and
agriculturalists. However, agriculturalists tend to have low prevalence in
the wrists and hands, but not all foraging groups have high levels in these
Table 5.1. Frequency of osteoarthritis in right articular joints expressed
by severity. (Adapted from Jurmain, 1980: Table 5.)

           White             Black             Pecos             Eskimo
Joint      Moderate Severe   Moderate Severe   Moderate Severe   Moderate Severe
Males
Knee       27.0     3.0      38.2     4.5      29.3     1.7      32.4     13.5
Hip        51.0     2.9      47.3     1.8      20.7     2.3      35.2     2.8
Shoulder   47.3     1.1      50.9     3.8      33.3     1.5      53.6     0.0
Elbow      12.5     5.8      19.8     5.2      11.4     3.8      31.1     18.0
Females
Knee       35.6     10.9     31.9     18.6     16.1     0.0      32.0     4.0
Hip        37.4     13.1     47.8     7.8      20.7     0.0      22.4     1.7
Shoulder   44.3     8.2      53.6     8.9      22.2     0.0      23.1     2.6
Elbow      12.7     1.0      21.7     0.9      10.4     3.0      22.0     7.3
joints. For nearly all populations reviewed, ankle or foot arthritis is less
common than hand osteoarthritis.
The comparison of different populations from published findings (e.g.,
Bridges, 1992) contributes to an understanding of variation in work
burdens and activity. These comparisons are limited by the variable nature
of the methods of data collection used by the different researchers (see
discussions by Bridges, 1993; Lovell, 1994; Waldron & Rogers, 1991). This
factor alone may prevent investigators from presenting clear diachronic
trends or population differences in osteoarthritis prevalence when com-
paring findings reported by different researchers (e.g., Cohen, 1989). Data
collection and population comparisons by the same researcher circumvent
this problem. Although the identification of specific activities from osteoar-
thritis remains an intractable problem, these types of comparisons provide
an important perspective on the general characteristics of different life-
styles, especially with regard to workload and level of mechanical demand
(see also Jurmain, 1990).
Jurmain (1977a, 1977b, 1978, 1980) assessed osteoarthritis patterns in
the appendicular skeleton (shoulder, elbow, hip, knee) in a range of
populations, including American Whites and Blacks (Terry Collection),
Eskimos (Alaska), and Amerindians (Pecos Pueblo, New Mexico). Male
Eskimos have higher prevalence and severity of knee and shoulder
osteoarthritis than do American Whites and Blacks or Pecos Pueblo Native
Americans; Pecos Pueblo adults have the lowest prevalence and severity of
osteoarthritis among the four groups (Table 5.1). These population
differences reflect the highly variable mechanical demands associated with
contrasting lifestyles and subsistence strategies. For example, mechanical
demands for the Pecos Pueblo agriculturalists may be mostly limited to the
growing season, whereas Eskimos are subjected to high levels of activity
throughout the entire year (Jurmain, 1977a; see also Merbs, 1983).
Assessment of osteoarthritis prevalence in prehistoric adult males and
females from the American Great Basin contributes to an ongoing debate
about workload in harsh environmental settings (Larsen, Ruff et al., 1995).
Archaeologists suggest two alternative models for characterizing native
subsistence strategies in the Great Basin (Thomas, 1985). One model states
that prehistoric native populations pursued a limnosedentary exploitive
strategy whereby food and other resources were obtained primarily in
ecologically rich circumscribed wetland areas that punctuated the desert
landscape, thus resulting in a sedentary lifeway. Alternatively, the lim-
nomobile model contends that these wetlands do not provide sufficient
resources for the support of native populations, at least on a full-time basis.
These wetlands are subject to occasional resource crashes arising from
droughts and floods. From this point of view, native populations relied in
part on marsh resources, but spent significant amounts of time in collection
and transport of foods recovered from upland settings in the nearby
mountains and elsewhere. The implication of the former model is that the
more sedentary wetlands-focussed adaptation involved less mechanical
stress than the nonsedentary lifestyle; the limnomobile model is built on the
premise that the carrying of supplies and long-distance travel was physical-
ly demanding, requiring strength and endurance (Larsen, Ruff et al., 1995).
In order to determine which of the two models best characterizes the
adaptive strategies of native populations in the Great Basin, Larsen and
coworkers (1995) assessed the pattern and prevalence of osteoarthritis in
prehistoric human remains recovered from the Stillwater Marsh region, a
large wetland area in western Nevada. Analysis of these remains revealed
an abundance of osteoarthritis. Most adults, including all individuals over
age 30, have osteoarthritis in at least one, and usually multiple, articular
joints. Articular pathology for older adults involves severe proliferative
lipping on joint margins, eburnation, vertebral compression fractures, and
Schmorl's nodes. Contrary to expectations of the limnosedentary model,
these findings suggest that hunter-gatherers in this setting led extremely
demanding lives. The high prevalence of osteoarthritis suggests elevated
mechanical demand, such as in heavy lifting and carrying. These findings
also imply that prehistoric groups may not have been tethered to the marsh,
and they exploited a wide range of resources from both the marsh and the
surrounding uplands. Beyond concluding that the Great Basin lifeway was
physically demanding, however, it is not possible to state whether these
populations were sedentary or mobile from osteoarthritis evidence alone.
Analysis of long bone structural morphology is more informative on this
point (Larsen, Ruff et al., 1995; see Chapter 6).
The impact of specific lifestyles and occupations on patterns of degen-
erative articular pathology in nineteenth century United States popula-
tions has received increasing attention by biological anthropologists (see
Owsley, 1990). These studies reveal that for many Americans, physical
activities were highly demanding. African Americans from the First
African Baptist Church (urban Philadelphia) cemetery have extensive
spinal degenerative pathology, including osteoarthritis (males, 69%; fe-
males, 39%) and Schmorl's nodes (males, 31%; females, 13%) (Parrington
& Roberts, 1990; see also Angel et al., 1987). These prevalences are higher
than those of a contemporary African American population from a rural
setting in Cedar Grove, Arkansas (cf. Rose, 1985). These differences
suggest that the urban lifestyle was far more mechanically demanding than
the rural lifestyle. The differences in degenerative joint pathology between
the two settings may be due to specific differences in habitual activities.
Historical records indicate that individuals interred in the Philadelphia
cemetery held unskilled, physically demanding jobs (see also other settings
of African Americans in relation to mechanical environment: Kelley &
Angel, 1987; Owsley et al., 1987; Rathbun, 1987; Thomas et al., 1977).
Highly demanding circumstances are also inferred from the study of
osteoarthritis in pioneer Euroamericans living in the frontier of the
American Midwest and Great Plains. Euroamerican adults from Illinois
and Texas have elevated prevalences of osteoarthritis and highly developed
muscle attachment sites on limb bones (Larsen, Craig et al., 1995; Winchell
et al., 1995). Articular degenerative pathology includes extensive marginal
lipping on weight-bearing and nonweight-bearing joints, eburnation, and
extensions of articular surfaces (e.g., anterior femoral head and neck). High
prevalence of nonspecific physiological stress indicators (e.g., enamel
defects) and historical evidence indicate that life on the early American
frontier was generally unhealthy and physically demanding. Numerous
historical accounts from the early to mid-nineteenth century discuss the
extremely hard physical labor that pioneer families endured, especially in
preparation of fields and tending and harvesting crops (see discussion in
Larsen, Craig et al., 1995).
Degenerative joint pathology in battle casualties is especially revealing
about physical activity in Euroamerican soldiers (Owsley et al., 1991).
Nearly half of the Euroamerican skeletal remains from the War of 1812
Snake Hill cemetery near Fort Erie, New York, display Schmorl's nodes on
vertebral bodies; this is an unusually high prevalence. Some individuals
have multiple Schmorl's nodes: six individuals have five or more vertebrae
with nodes, and one soldier has pronounced nodes in 11 vertebrae. In
addition, several individuals have vertebral compression fractures resulting
from excessive mechanical loading of the back. These findings indicate that
early nineteenth century military recruits were subjected to excessive
loading of their spines, such as from lifting heavy military hardware,
carrying heavy packs over long distances, construction of fortifications,
and participation in rigorous activity regimens.
These population comparisons reveal some tendencies linking degen-
erative articular pathology with lifestyles. An important finding to emerge
from these analyses is the impact of lifestyle on articular joint pathology.
Focus on specific joints in archaeological samples indicates that patterns of
osteoarthritis may be linked with particular tasks, such as in relation to the
technology used for acquiring or processing food, use of the horse in
subsistence pursuits, and activity involving use of the back and associated
vertebral articular pathology.
Weaponry, food acquisition, and food processing
Ortner's (1968) classic study of elbow (distal humerus) osteoarthritis in
Eskimos and Peruvian Indians (Chicama) reveals highly contrasting
patterns that reflect different uses of the upper limb in food acquisition.
Eskimos display a greater prevalence of degenerative changes - marginal
proliferation and articular surface destruction - than do Peruvian Indians
(18% vs. 5%). Eskimos also show a distinctive bilateral asymmetry in
degenerative pathology; right elbows are far more arthritic than left
elbows. Right side dominance of osteoarthritis is due to the greater use of
the right arm than the left, such as from spear-throwing with throwing
boards (atlatls) by predominantly right-handed hunters (see also Kricun,
1994; Merbs, 1983; Webb, 1989, 1995).
Over the course of an individual's lifetime, the prolonged use of
weapons, such as the bow-and-arrow or atlatl, may also contribute to the
degeneration of the elbow joint. Angel (1966b) first described the 'atlatl
elbow' in a skeletal series from the Tranquillity site, California. He
speculated that the atlatl facilitates a faster spear-throw without involving
extension and abduction of the shoulder; extension is primarily limited to
the elbow. Consistent with his hypothesis, Tranquillity shoulder joints
display very little degenerative pathology, but elbow osteoarthritis is severe
(Angel, 1966b).
In order to document the shift in weapons technology from the atlatl to
the bow-and-arrow, Bridges (1990) assessed patterns of upper limb
osteoarthritis in early (Archaic) and late (Mississippian) prehistoric popu-
lations from the Pickwick Basin, Alabama. She suggested that only one
arm and just the elbow joint are involved in the use of the atlatl (sensu
Angel, 1966b), whereas both left and right arms and the elbow and
shoulder joints of each are involved in the use of the bow-and-arrow. Thus,
the joints of the upper limb should show different distributions of
osteoarthritis reflecting either an atlatl pattern (unilateral, elbow) or
bow-and-arrow pattern (bilateral, elbow and shoulder). Because males in
most human societies are responsible for hunting, they should show a
higher prevalence of osteoarthritis than females. Furthermore, as expected,
early prehistoric males have a higher prevalence of elbow osteoarthritis
than late prehistoric males, a pattern that probably reflects the use of the
atlatl in the earlier group and the bow-and-arrow in the later group.
Contrary to expectations, both temporal groups display slight right
dominance of osteoarthritis. Early prehistoric Jemales ha ve the highest
frequency of the right-dominant elbow osteoarthritis. These findings
provide mixed support in this setting for the link between weapons use and
degenerative articular pathology.
The Angel and Bridges studies indicate that some groups using the atlatl
have a distinctive pattern of elbow osteoarthritis (e.g., Eskimos), whereas
others do not (e.g., Pickwick Basin). These differences may reflect the
relative importance or intensity of specific activities (Bridges, 1990). For
example, traditional Eskimo diets are heavily dominated by meat, and they
relied exclusively (or nearly so) on hunting over the course of the entire
year. Thus, their atlatl use was highly intensive. Early prehistoric Indians
living in the Pickwick Basin had a far more diverse diet, which was acquired
only partially by hunting. For much of the summer and spring, native
populations utilized riverine resources (e.g., fish) and various flood-plain
plants (e.g., edible seeds); hunting was practiced mostly during the winter.
Therefore, the very different pattern of elbow osteoarthritis in Tennessee
Indians cannot be attributed solely to use of the atlatl or the bow-and-
arrow. Rather, a range of activities probably contributed to the patterns of
upper limb osteoarthritis (Bridges, 1990).
In contrast to the pattern of right-side dominance of osteoarthritis in
upper limbs (e.g., Merbs, 1983; Webb, 1995), some groups display bilateral
symmetry. Elbow osteoarthritis in native populations from Chavez Pass,
Arizona, is highly prevalent and bilaterally symmetric (Miller, 1985; Nagy
& Hawkey, 1993). In this setting, mechanical loading of both elbows while
processing maize with grinding implements - pushing manos against
metates with the hands - involves equal use of the left and right arms
(Miller, 1985; and see Merbs, 1980). In traditional Southwestern native
societies, females are responsible for this activity. Thus, the relatively
higher frequency in adult females in the Chavez Pass series reflects the role
of women in food preparation.
Horseback riding
The horse was an important mode of transport for many Holocene
societies, in the Old World and later in the New World following European
contact. Populations show articular degenerative concomitants of an
equestrian lifeway in the limited number of settings studied by bioarchaeol-
ogists (Bradtmiller, 1983; Edynak, 1976; Pálfi, 1992; Reinhard et al., 1994).
Following the introduction of the horse to the American Great Plains by
Europeans, native populations relied on this animal as 'a matter of daily
routine' (Bradtmiller, 1983:3), especially for hunting and warfare. Patterns
of osteoarthritis attributed to horseback riding include a high frequency of
degenerative changes in the vertebrae and pelves of adult males in early
nineteenth century (AD 1803-1832) Arikara from the Leavenworth site,
South Dakota (Bradtmiller, 1983). Similarly, historic-era Omaha and
Ponca from northeastern Nebraska display vertebral and hip degenerative
pathology, along with other skeletal features that are best explained by
mechanical loading of specific joints during horseback riding (Reinhard et
al., 1994). These features include superior elongation of the acetabulum,
extension of the femoral head articular surface onto the anterior femoral
neck, and hypertrophy of muscle attachment sites for gluteus medius and
gluteus minimus, adductor magnus, adductor brevis, vastus lateralis, and
gastrocnemius (medial head) (Reinhard et al., 1994). Extensive osteoarthri-
tis of the first metatarsals is suggestive of mechanical stresses associated
with the placement of the first toe into a leather thong stirrup (Reinhard et
al., 1994). More males than females have osteoarthritic changes associated
with horse riding, thus indicating that men were habitually engaged in
behaviors involving the use of the horse, more so than women.
Vertebral osteoarthritis
The vertebral column has been studied in a large number of settings in the
Americas (summarized by Bridges, 1992) and elsewhere. For prehistoric
North America, these comparisons reveal a number of tendencies. First,
prevalence is always greatest in the articular region between the fifth and
sixth cervical vertebrae; second, there is a tendency for the lower thoracic to
be affected more than the upper thoracic vertebrae; third, the second to
fourth lumbar vertebrae usually show the greatest degree of marginal
lipping in comparison with other vertebrae; and finally, the region
encompassing the seventh cervical vertebra to the upper thoracic vertebrae
(to about the third thoracic) is always least affected by the disorder
(Bridges, 1992). The relatively minimal amount of osteoarthritis in the
thoracic vertebrae is due to the lower degree of movement in this region of
the back (Waldron, 1993).
For the world as a whole, the highest prevalence of osteoarthritis is in the
lumbar spine, followed by the cervical spine, for a wide range of
populations (e.g., Bennike, 1985; Bridges, 1994; Gunness-Hey, 1980;
Jurmain, 1990; Merbs, 1983; Snow, 1974; and see review by Bridges, 1992).
Some human populations show relatively higher levels of osteoarthritis in
the cervical vertebrae. For example, cervical vertebral osteoarthritis is
relatively high in the Spitalfields, London, industrial urban group (Wal-
dron, 1993). Similarly, Harappan populations from the Indus Valley
display higher frequencies of osteophytes and articular surface pitting of
cervical vertebral bodies than in either the lumbar or thoracic spine (Lovell,
1994). This pattern suggests an activity-related cause, such as carrying of
heavy loads on the head. Individuals in traditional agricultural communi-
ties and from lower socioeconomic groups from urban settings in South
Asia habitually carry loads on their heads (Lovell, 1994). These loads
include laundry bundles, water jars, firewood, and dirt-filled containers at
construction sites. Clinical and observational studies confirm that the
upper (cervical) spine is susceptible to injury and cumulative degenerative
changes in persons carrying heavy loads on their heads (e.g., Allison, 1984;
Levy, 1968; Lovell, 1994). The greater severity of osteoarthritis in the
cervical spine in women than in men suggests that the practice of
burden-carrying with use of the head is gender-specific. For example, the
severity of cervical osteoarthritis is greater in adult females than adult
males in the Romano-British Bath Gate populations from Cirencester,
England (Wells, 1982; and see Lovell, 1994).
5.3.3 Sexual dimorphism in osteoarthritis
Adult males and females show a wide range of variation in osteoarthritis
prevalence in prehistoric New World and other settings (e.g., Bridges,
1992). In general, males have a greater prevalence of osteoarthritis than
females, regardless of subsistence strategy or sociopolitical complexity. Sex
comparisons for prehistoric foragers from coastal Georgia reveal statisti-
cally significant differences between males and females for lumbar (69.2%
vs. 32.1%) and shoulder (10.5% vs. 2.4%) joints (Larsen, 1982). In later
prehistoric agriculturalists, more articulations show significant differences,
including the cervical, thoracic, lumbar, elbow, and knee joints. A similar
pattern of increase in sexual dimorphism has been documented in prehis-
toric northwest Alabama (Bridges, 1991a). In this setting, differences in
osteoarthritis prevalence between Archaic period males and females are
not statistically significant, whereas later Mississippian period males have
more severe osteoarthritis than females (Bridges, 1991a). These patterns in
Georgia and Alabama do not specifically define behaviors associated with
either sex, but they are suggestive of contrasting patterns of physical
activity (see also below). The presence of more significant differences
between agriculturalist males and females in both settings suggests the
possibility that sex differences in labor demands were greater in later than
in earlier prehistory. Similarly, comparisons of foragers from Indian Knoll,
Kentucky, with maize agriculturalists from Averbuch, Tennessee, indicate
a different prevalence of osteoarthritis between adult males and females
(Pierce, 1987). For example, Indian Knoll males have a significantly greater
frequency of shoulder, hip, and knee osteoarthritis than females; Averbuch
males have significantly greater osteoarthritis for the shoulder and hip, but
not the knee. These differences are suggestive of change in workload and
activity with the adoption of agriculture.
Unlike males, agriculturalist females from the lower Illinois River valley
have a higher prevalence of vertebral osteoarthritis than forager females
from the same region (Pickering, 1984). These differences are especially
pronounced in cervical vertebrae; this may be related to an increase in
mechanical demand in this region of the skeleton with the shift to
agriculture (Pickering, 1984).
Fahlstrom (1981) identified an unusually high prevalence and severity of
shoulder osteoarthritis in adult males in the Medieval skeletal series from
Westerhus, Sweden. Historical analysis of this population suggests that the
high frequency in males reflects work and activity practices that are
exclusive to men, including parrying in sword fighting, spear throwing,
timber cutting, and other activities associated with repetitive, heavy
loading of the shoulder joint (Fahlstrom, 1981).
Some analyses reveal no appreciable differences between males and
females in archaeological settings. For example, males and females in the
Dickson Mounds, Illinois, series show no differences in prevalence of
appendicular osteoarthritis (Goodman et al., 1984; Lallo, 1973). The
similarity between sexes suggests that mechanical loading of most articular
joints in this setting was broadly the same in adults regardless of sex, in
contrast to most other prehistoric Eastern Woodlands populations (cf.
Bridges, 1992).
Two clear trends emerge when sex differences are examined (e.g.,
Bridges, 1992). First, where there are statistically significant differences
between males and females, males show a higher prevalence of osteoarthri-
tis than females. Second, in specific regions of the New World, maize
agriculturalists tend to display more sexual dimorphism in degenerative
pathology than foragers. This suggests a difference in behavior leading to
degeneration of articular joints in agriculturalists but not in earlier
foragers. The change in pattern of sexual dimorphism suggests that there
was a fundamental shift in the division of labor once agriculture was
adopted (and see Bridges, 1992).
5.3.4 Age variation
The documentation of age-at-onset of osteoarthritis should provide an
indication of when individuals enter the work force. In the late prehistoric
Ledders series from the lower Illinois River valley, elbow and wrist
osteoarthritis commences earlier in females than in males, which may
indicate that women were subjected to the mechanical demands of
adulthood earlier than men (Pickering, 1984). Eskimos have the earliest
age-at-onset in comparison with Southwestern (Pecos Pueblo) agricul-
turalists and urbanized American Whites and Blacks (Jurmain, 1977a).
These differences reflect the relatively greater mechanical demands on the
Eskimos in comparison with other human populations.
Interpretation of intra- and interpopulation differences in osteoarthritis
prevalence must consider age structure, since it is such an important
predisposing factor. For example, females have greater prevalence of
osteoarthritis than males in all but three of 16 joints in a series of human
remains from coastal British Columbia (Cybulski, 1992). Adult females are
older than the adult males in the assemblage. Thus, the unusually high
prevalence in females relative to males may be due to the difference in age
composition rather than variation in mechanical environment.
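Where individual age estimates are available, this confounding can be checked directly by comparing prevalence within age classes rather than across whole assemblages. The short sketch below illustrates the idea in Python; all counts are invented for illustration and are not drawn from Cybulski or any other study cited here.

    # Hypothetical counts of (affected, observed) individuals by sex and age
    # class; all numbers are invented for illustration only.
    data = {
        'female': {'20-34': (1, 10), '35-49': (6, 20), '50+': (30, 50)},
        'male':   {'20-34': (5, 50), '35-49': (6, 20), '50+': (6, 10)},
    }

    # Compare prevalence within each age class (age-stratified comparison).
    for age_class in ('20-34', '35-49', '50+'):
        f_aff, f_n = data['female'][age_class]
        m_aff, m_n = data['male'][age_class]
        print(f"{age_class}: female {100 * f_aff / f_n:.1f}% "
              f"vs. male {100 * m_aff / m_n:.1f}%")

    # Crude (unstratified) prevalence pools all age classes together.
    for sex in ('female', 'male'):
        affected = sum(a for a, n in data[sex].values())
        observed = sum(n for a, n in data[sex].values())
        print(f"crude {sex}: {100 * affected / observed:.1f}%")

In this invented example the age-specific rates are identical for the two sexes, yet the crude female prevalence is more than twice the male value simply because the female sample is older on average - precisely the situation described for the British Columbia series.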
5.3.5 Social rank and work pattern
Comparison of osteoarthritis prevalence and severity between social ranks
in prehistoric stratified societies suggests that higher-status individuals
were exposed to less demanding activities than lower-status individuals.
Archaeological evidence indicates that Middle Woodland populations in
the lower Illinois River valley were hierarchical and organized on the basis
of ascribed (hereditary) statuses (Tainter, 1980). The hierarchy of different
social ranks is clearly displayed in the contrasting levels of energy
expenditure in construction of tombs: a great deal of energy and resources
were devoted to the construction of elaborate tombs for high-status
individuals. The highest-rank graves include individuals who were either
interred in or processed through large, log-roofed tombs located at the
centers of individual mounds. Little energy was expended on the construc-
tion of tombs for low-status individuals; graves are simple and unadorned.
Analysis of shoulder, elbow, and knee osteoarthritis in skeletons from the
Pete Klunk and Gibson mound groups reveals that the highest ranking
adults over age 35 display less severe elbow osteoarthritis than lower-
ranked individuals, and high-ranking females have less severe knee
osteoarthritis than females from the other ranks (Tainter, 1980).
5.3.6 Temporal trends and adaptive shifts
The above discussion underscores the tremendous range of variation in
osteoarthritis prevalence and pattern, linking the condition to lifestyle,
food acquisition, food preparation, age, social rank, and other circumstan-
ces. Comparisons of prehistoric foragers and farmers from different
settings (e.g., Jurmain, 1977a, 1977b, 1978, 1980) indicate differences in
osteoarthritis - and presumably workload and activity - in relation to
subsistence. Regionally based temporal studies of osteoarthritis give an
additional perspective on change in functional demand as populations
underwent adaptive shifts in the past. Based on comparisons of earlier and
later societies from the same region, it has become possible to assess the
relative labor costs of change in economic focus, at least as these costs are
measured by mechanical stress. The results of analyses of skeletal series
representing populations that shifted economic focus from foraging to
farming are available for a limited number of regions in the Old World
(e.g., Europe: Meiklejohn et al., 1984; South Asia: Kennedy, 1984). These
studies indicate a reduction in osteoarthritis prevalence, and suggest a
decline in mechanical loading with the adoption of agriculture.
The most extensive temporal studies of osteoarthritis have been com-
pleted for several settings in North America. Study of osteoarthritis
prevalence in Archaic period hunter-gatherers and later Mississippian
period maize agriculturalists from northwestern Alabama suggests changes
in activity and workload, especially when viewed in the context of diet and
lifeway (Bridges, 1991a). Archaic period populations exploited a range of
terrestrial and riverine animals and plants, including deer, raccoon, beaver,
fish and shellfish, wild plants, and limited cultivation of sunflower,
sumpweed, chenopod, squash, and bottle gourd (Dye, 1977). Populations
moved seasonally from river valleys to nearby uplands. Later prehistoric
groups were intensive maize agriculturalists, but also exploited a limited
number of species of nondomesticated plants and animals (Smith, 1986).
Table 5.2. Percentage of individuals with moderate to severe
osteoarthritis, 30 to 49 years. (Adapted from Bridges, 1991a: Table 2.)

                   Males                              Females
           Archaic          Mississippian     Archaic          Mississippian
Joint      Left    Right    Left    Right     Left    Right    Left    Right
Shoulder   36.8    42.1     30.0a   30.4      7.7     28.6     10.0    17.6
(n)        (19)    (19)     (20)    (23)      (13)    (14)     (20)    (17)
Elbow      27.3    40.9     28.0b   24.0      26.4    37.6     15.8    20.0
(n)        (22)    (22)     (25)    (25)      (19)    (16)     (19)    (20)
Wrist      9.5     15.8     0.0     17.4      0.0     6.7      5.6     0.0
(n)        (21)    (19)     (23)    (23)      (13)    (15)     (18)    (14)
Hip        5.0     5.0      0.0     0.0       0.0     0.0      7.1     0.0
(n)        (20)    (20)     (21)    (21)      (13)    (10)     (14)    (17)
Knee       27.3    31.8     21.7    8.6       15.8    22.3     21.1    23.5
(n)        (22)    (22)     (23)    (23)      (19)    (18)     (19)    (17)
Ankle      23.8    0.0      0.0c,d  4.8       0.0     5.9      0.0     0.0
(n)        (21)    (22)     (24)    (21)      (18)    (17)     (16)    (19)

a Frequency significantly greater in males than in females (chi-square: p ≤ 0.05).
b Severity significantly greater in males than in females (Mann-Whitney rank sum:
p ≤ 0.05).
c Frequency significantly greater in Archaic than in Mississippian group (chi-square:
p ≤ 0.05).
d Severity significantly greater in Archaic than in Mississippian group (Mann-Whitney
rank sum: p ≤ 0.05).
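The footnotes to Table 5.2 refer to chi-square tests of frequency and Mann-Whitney rank-sum tests of severity. As a rough illustration of the frequency test (not a reanalysis of Bridges' data), the following Python sketch compares two groups from invented counts of affected and unaffected individuals; the scipy library is assumed to be available.

    from scipy.stats import chi2_contingency

    # Invented counts: (affected, unaffected) individuals in each group.
    archaic = (8, 11)        # e.g., 8 of 19 with moderate to severe osteoarthritis
    mississippian = (6, 14)  # e.g., 6 of 20 affected

    # Build the 2 x 2 contingency table and run the chi-square test.
    chi2, p, dof, expected = chi2_contingency([list(archaic), list(mississippian)])
    print(f"chi-square = {chi2:.2f}, p = {p:.3f}")

    # A p-value at or below 0.05 would be flagged as significant, as in the
    # table footnotes; ordinal severity scores call instead for a rank-based
    # test such as the Mann-Whitney U (scipy.stats.mannwhitneyu).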
These later groups were largely sedentary and lived primarily in villages on
river floodplains, although smaller temporary uplands habitations were
utilized for hunting deer and other animals (e.g., small mammals, turkey,
waterfowl) on a seasonal basis. In sum, although sharing some features, the
subsistence strategies and settlement patterns in the earlier and later
periods were very different. Because foraging and farming involved very
different kinds of physical activity, the respective populations should
display different prevalences and patterns of osteoarthritis.
Comparisons of shoulder, elbow, wrist, hip, knee, and ankle osteoarthri-
tis show a number of important temporal trends in the Alabama series
(Bridges, 1991a). For individuals 30 to 49 years of age, the
Archaic group has generally more osteoarthritis than the Mississippian
group, and the differences are especially consistent for males (Table 5.2).
Statistically significant differences between periods are present in only a few
of the joints. However, the overall greater prevalence in the Archaic sample
is clear. The severity of osteoarthritis tells the same story: Archaic
populations have generally greater severity of the disorder than Mississip-
pian populations. The pattern of degenerative pathology is remarkably
similar in the prehistoric foragers and farmers and in the males and females
within each group - for all samples, osteoarthritis is most common in the
elbow, shoulder, and knee, and it is least common in the hip, ankle, and
wrist.
Prehistoric and contact period human remains representing a temporal
succession of Native American populations living in the Georgia Bight have
been the focus of research on physical activity and behavioral changes by
Larsen and coworkers (Fresia et al., 1990; Larsen, 1981, 1982, 1984; Larsen
& Ruff, 1991, 1994; Larsen, Ruff et al., 1996; Ruff & Larsen, 1990; Ruff et
al., 1984). Temporal comparison of osteoarthritis prevalence shows a
distinctive decline in prehistoric farmers relative to earlier foragers (Larsen,
1982). For the series as a whole (sexes combined), statistically significant
reductions occur for the lumbar vertebrae (26.2%), elbow (6.8%), wrist
(4.5%), hip (3.8%), knee (7.2%), and ankle (4.0%) joints. The frequency of
osteoarthritis declines or does not change in all other joints. Both adult
females and adult males show the same trend of reduction; more significant
reductions occur in females than in males (six joints vs. three joints; see
above). The pattern of osteoarthritis prevalence in the skeleton is similar in
both the preagricultural and the agricultural groups. In both series the
lumbar, cervical, elbow, and knee joints show the highest prevalences.
Comparison of the postcontact (seventeenth century) group with late
prehistoric agriculturalists reveals a striking increase in osteoarthritis
prevalence for most articular joints. Some of this increase is extraordinary:
for example, from 16.3% to 52.9% for the male lumbar vertebrae and 1.1%
to 41.6% for the male foot. Overall, these findings suggest two significant
trends: a decrease in mechanical demand with the introduction of maize
agriculture, followed by a marked increase in mechanical demand follow-
ing the arrival of Europeans. Foot osteoarthritis shows the most pro-
nounced increase in comparison with the other articular joints, especially in
males. The increase in degenerative pathology across the different
joint types is suggestive of a general and pronounced increase in workload
after contact. The very high increase for the foot suggests that these
populations - especially males - were engaged in a type or range of
activities involving pronounced mechanical demands on the lower limb.
Because the lower limb and foot function primarily in ambulatory activities
(i.e., walking and running), the increase in contact era foot osteoarthritis
suggests that adult males were engaged in a great deal of walking.
The changes in osteoarthritis in contact era populations are consisten!
with behavioral characteristics historically documented far the mission
period in Spanish Florida; namely, native males were drafted into work
service under the repartimiento labor system and forced to make long-distance trips to various localities in the province (Hann, 1988; Lyon, unpublished manuscript; Worth, 1995). These trips involved carrying heavy burdens over great distances (Hann, 1988), which placed demands on the lower limbs in walking and on upper limbs and trunks. The general increase in mechanical demands on native populations in the seventeenth century is also well documented in the historical literature. The Spanish viewed native populations as an inexpensive labor source. Native labor was a central element in their economic and political success in the area. Indian laborers were used for cargo bearing, agricultural production, construction projects, wood cutting, and a variety of other physically demanding activities (Hann, 1988; and see Larsen, 1990a; Worth, 1995). For example, Governor Canzo remarked in his report to the Crown in 1602-1603 that
but with all this and the grain from maize, the labor that they endure in the many cultivations that are given is great, and, if it were not for the help of the Indians that I make them give, and they come from the province of Guale, Antonico, and from other caciques, it would not be possible to be able to sow any grain . . . (unpublished translation provided by J.H. Hann; cited in Larsen, 1990a:16).
These historical accounts, then, strongly suggest that the increase in osteoarthritis prevalence during the mission period was due at least in part to the increasing labor demands placed on native groups during the mission period (and see Chapter 6).
In summary, from the osteoarthritis evidence, there was an apparent decline in physical demands with the transition to agriculture in northwestern Alabama and coastal Georgia. It is important to point out that the Alabama and Georgia populations may not be directly comparable. There is a large temporal gap between the early (foragers) and late (farmers) prehistoric groups in northwestern Alabama. The skeletal series used for making comparisons are separated in time by some 2200 years (1000 BC-AD 1200). The osteoarthritis profile during the period of time immediately prior to the Mississippian agriculturalists is unknown. It is possible that a period of less intensive maize agriculture prior to AD 1200 produced levels of osteoarthritis similar to those of either the foragers or the farmers. Larsen's (1982) study of osteoarthritis in the prehistoric Georgia Bight also involved the analysis of skeletal remains from populations spanning a lengthy period; the precontact preagricultural group comprises all human remains predating AD 1150. However, most prehistoric forager remains are drawn from the 450-year period immediately preceding the adoption of maize agriculture (ca. AD 700-1150). Unlike the Alabama series, there is a cultural and biological continuum with little or no time gap separating the population groupings. The temporal differences between the Alabama and Georgia skeletal assemblages may not be significant, because subsistence reconstruction based on stable carbon isotopes for the Eastern Woodlands indicates that the shift to maize agriculture was widespread in the region after about AD 900 (e.g., Ambrose, 1987; Smith, 1989). Thus, the two regions are broadly comparable, at least with respect to the timing of the introduction of maize agriculture and underlying socioeconomic factors.
The adaptive systems representing the Alabama and Georgia populations are different in some other important aspects. The Alabama skeletal series represents populations of intensive terrestrial maize agriculturalists, whereas the Georgia series represents populations of maize agriculturalists, but who also engaged in fishing and collecting marine resources from local estuarine and ocean contexts. Activity differences reflecting these adaptive contrasts are suggested by long bone structural analysis (see Chapter 6).
It would be overly simplistic to say that the shift to or intensification of agriculture in prehistoric North America involved a reduction in workload in native populations. For example, comparisons of less intensive with more intensive agriculturalists from the Dickson Mounds site, Illinois, show a general increase in prevalence and severity of vertebral osteoarthritis in the more intensive agriculturalists (Lallo, 1973). In adults (sexes combined), the frequency increases from 39.7% in the Late Woodland period to 65.8% in the Middle Mississippian period. Similarly, there is a much greater frequency of osteoarthritis in Mississippian period agriculturalists from the Averbuch site compared to foragers from Indian Knoll (Pierce, 1987; and see Hodges, 1989).
Much of the focus on the temporal comparisons of osteoarthritis deals with the shift from foraging to farming. The study of osteoarthritis in relation to other changes in economic systems has also proven highly informative. In the Santa Barbara Channel islands and mainland Pacific coast, the focus on hunting and gathering of terrestrial resources was replaced by intensive fishing in later prehistory (Walker & Hollimon, 1989). The latter adaptation is especially well documented by early explorers and others who first arrived in the region. These observations provide an important perspective on the types of activities undertaken by native populations that may have potential influence on articular pathology. Early accounts of native groups note the presence of an elaborate fishing technology and material culture, including such items as harpoons, fish traps, nets, and fishhooks. In addition to fishing, shellfish were collected from rocks by the use of prybars constructed from wood or bone. Boats made from carved planks were used for travel between islands and between islands and the mainland. Plant foods (e.g., acorns, chia and other seeds) were collected in large quantities, especially on the mainland. Various roots and bulbs were extracted from the ground with digging sticks. Economic tasks followed a strict division of labor by sex - men hunted and fished, and women collected plant foods and shellfish. These differences in work activities are reflected in dietary differences between adult females and males (Walker & Erlandson, 1986). For example, early prehistoric women have higher caries rates than early prehistoric men, which reflects the greater consumption of cariogenic plant carbohydrates by women (and see Chapter 3).
Comparison of osteoarthritis in early and late prehistoric Indians from this setting reveals temporal changes that are suggestive of alterations in activity and workload with the transition to a marine-focused economy (Walker & Hollimon, 1989). Severity of osteoarthritis, ranging from slight articular surface porosity to extensive marginal lipping and eburnation, increases in the late prehistoric period, especially in the lower limb (Walker & Hollimon, 1989). On the basis of archaeological evidence, Walker & Hollimon (1989) speculate that the increase in osteoarthritis may be due to increased work that involved trade, exchange, and more pedestrian travel.
The severity of elbow and wrist osteoarthritis increased in Late period adult males, but not in adult females in the Santa Barbara Channel Islands region. Osteoarthritis severity declined in the shoulder and hand. These changes may be linked to an alteration in weaponry (replacement of the atlatl by the bow-and-arrow) and fishing equipment (shift from harpoons to nets). The temporal increase in forelimb osteoarthritis is perhaps related to the increased use of canoes and fishing nets in the Late period (Walker & Hollimon, 1989). Overall, the increase in osteoarthritis was greater for males than females. Although there are several possible explanations for this trend, the greater role of men in fishing suggests that the workload increase was greater for males than females. The net result was a decrease in the difference of osteoarthritis prevalence between men and women in the Late period (Hollimon, 1991).
Viewed in a regional perspective, these North American studies show a tremendous range of variation in mechanical stress loads. This high degree of variability from one setting to another suggests that at least some factors that influence osteoarthritis are dependent on local circumstances. Webb's (1989, 1995) study of native populations from Australia serves to underscore the point that what are often perceived as uniform adaptations - even over very large areas - are in fact highly variable. His comparative analysis of osteoarthritis prevalence reveals very different patterns and mechanical stress levels in skeletal series from regions of Australia. For example, elbow osteoarthritis is virtually nonexistent in east coast populations, whereas it is commonplace in the Murray River valley. This variability reflects a remarkable degree of diversity in use of the upper limb.
5.4 Nonpathological articular modifications
Nonpathological skeletal modifications reflecting habitual postures provide a picture of behaviors such as squatting and kneeling (see K. Kennedy, 1989; Trinkaus, 1975). For example, the crouched posture that characterizes squatting involves extreme flexion of the hip, knee, ankle, and foot joints. As a result, mechanical demands on lower limb articular joints may produce distinctive joint modifications. Charles (1893) described an extension of the articular surface from the femoral head onto the superior-anterior neck - called Poirier's facet - which he assumed was related to abduction and hyperflexion of the thigh during squatting. Most later studies show no relationship between the articular extension and squatting (see Angel, 1964; Trinkaus, 1975). Other skeletal modifications purported to be linked with squatting include facets on the superior-posterior margins of the femoral condyles, rounding of the posterior aspect of the lateral tibial condyle, retroversion of the tibial plateau, formation of a groove for the posterior cruciate ligament (on the femoral intercondylar line), angulated facets on the anterior aspect of distal tibia, talar neck facet surfaces, and various other alterations on the talus and calcaneus (reviewed by Trinkaus, 1975). Trinkaus (1975) concluded that none of these features bears an unambiguous association with squatting. For example, the presence of articular facets on the superior-posterior femoral condyles and the groove created by the posterior cruciate ligament on the femoral intercondylar line are not necessarily associated with squatting. The presence of a relatively high frequency of femoral condylar facets, and extensions in the knee, ankle, and subtalar articulations in western European Neandertals is suggestive of a habitual squatting posture. The extreme degree of skeletal robusticity in these late archaic Homo sapiens indicates very high levels of activity generally, which would also promote the development of these articular joint modifications. Therefore, the combination of squatting and great physical activity best explains these skeletal features, at least in these late Pleistocene hominids (Trinkaus, 1975).
Trinkaus's (1975) study reminds us that it is difficult to sort out general activity levels and specific articular joint modifications in interpreting skeletal morphology. Study of distinctive metatarsal-phalangeal joint articular modifications in several different contexts reveals that habitual
Figure 5.4. Nonpathological alterations of distal metatarsals produced by hyperdorsiflexion of toes; Ayalan, Ecuador. (Adapted from Ubelaker, 1979; reproduced with permission of author and John Wiley & Sons, Inc.)
kneeling - in food preparation tasks or other occupational activities - can be identified. When walking or running, the high degree of dorsiflexion at the metatarsal-phalangeal joint of the toes is sustained only momentarily. In kneeling postures, where the hyperdorsiflexed position of the toes is sustained for extended periods, the joints develop articular modifications reflecting these behaviors. Metatarsal-phalangeal alterations have been identified in late prehistoric (AD 700-1550) human remains from the Ayalan site, Ecuador (Ubelaker, 1979). The articulations are characterized as small extensions or facets or both on the distal ends of metatarsals (Figure 5.4). The facets are flat with clearly demarcated proximal borders
and an accompanying superior surface bony extension extending distally from the proximal articular surface. These alterations are present in about 20% of Ayalan foot bones; they are bilaterally distributed among the first three metatarsals and first phalanx, and are far less common in the fourth and fifth metatarsals.
Comparisons of the Ayalan series with other archaeological series (Eskimos, Hawikku site Zuni, Mobridge site Arikara, Late Woodland Nanjemoy, and Terry Collection Blacks and Whites) show a great deal of variation in metatarsal-phalangeal articular facets (Ubelaker, 1979). The alterations occur in all of these samples, but they are most common in the Ayalan group. The prevalence in the Hawikku and Nanjemoy samples is 5%; the other groups have prevalences of 2% or less. The distribution amongst the digits of the foot also varies between samples. In the Ayalan series, the alterations are present on the first to fourth metatarsals and first proximal phalanges, whereas in the Hawikku and Eskimo series, they are on the second to fourth metatarsals.
Although little research has been completed on these articular modifications, they are found in a wide diversity of human populations. In addition to those analyzed by Ubelaker (1979), Mesolithic and Neolithic skeletal series from the early agricultural settlement at Tell Abu Hureyra, Syria, display metatarsal-phalangeal alterations in the first metatarsals (Molleson, 1989, 1994). In older adults, the margins of these facets are associated with degenerative changes (osteoarthritis). Proximal extensions of the distal articular surfaces in many of the metatarsals are also identified in prehistoric samples from the north coast of Rota, Mariana Islands (Hanson, 1988). The association with significant mechanical stresses of the foot is also indicated by the co-occurrence of osteoarthritis in this setting. Similar joint modifications of the first metatarsals are present in nineteenth century fur-trappers from Alberta, who spent long hours canoeing from one location to the next, primarily in kneeling postures with their toes dorsiflexed (Lai & Lovell, 1992; Lovell & Lai, 1994). Likewise, in Syria and Ecuador, the characteristic morphology, location, and association with osteoarthritis in older adults indicate that the metatarsal-phalangeal joint alterations were probably produced by prolonged hyperdorsiflexion of the toes while kneeling (Molleson, 1989, 1994; Ubelaker, 1979). With regard to the Ecuador series especially, Ubelaker (1979) speculates that kneeling while maize grinding with stone metates was the most likely cause. Molleson (1989) provides confirmation of a similar posture with regard to cereal grinding depicted in Assyrian and Egyptian dynastic tomb art.
5.5 Nonarticular pathological conditions relating to activity
5.5.1 Cortical defects
Cortical defects are linear depressions located at muscle insertion sites on various skeletal elements, especially the humerus, radius, tibia, femur, metacarpals, metatarsals, and distal phalanges (Bufkin, 1971; Owsley et al., 1991). The insertion sites for the pectoralis major and teres major (proximal humerus) and for the medial head of the gastrocnemius (distal femur) are two common locations of cortical defects. Although variable in size and morphology, they typically have irregular floors and smooth margins. The defects are caused by chronic mechanical stress (see Brower, 1977; Bufkin, 1971; Owsley et al., 1991; Resnick & Greenway, 1982; Ridgway et al., 1995). Cortical defects are infrequently reported in archaeological human remains. The War of 1812 battle casualties from Snake Hill, Ontario, exhibit an unusually high frequency (40%) of defects on proximal humeri in comparison with Civil War military, Native American hunter-gatherers, and Native American agriculturalists (Owsley et al., 1991). Two-thirds (62.5%) of the defects in the Snake Hill series are on the right side; if bilaterally present, the defect on the left side is small and shallow compared to the much larger defect on the right side. The bilateral asymmetry of defects reflects right hand dominance or other activities associated with military roles. The high levels of physical demand inferred from this evidence are consistent with the high frequency of Schmorl's nodes in the same series (see above).
5.5.2 Enthesopathies and hypertrophied muscle attachment sites
Enthesopathic lesions (enthesophytes) are irregularities, rough patches, and bone projections or osteophytes at the insertions of tendons and ligaments, especially the plantar, achilles, and patellar insertions (K. Kennedy, 1989; Resnick & Niwayama, 1983; Shaibani et al., 1993). Enthesopathies develop as a result of prolonged and excessive muscular activity. Their location and size in the skeleton give an indication of habitual activities involving specific muscles or groups of muscles (Dutour, 1986; Hawkey & Street, 1992; K. Kennedy, 1989; Pálfi, 1992). Two series of Neolithic Saharan remains from Mali - Hassi el Abiod and Chin-Tafidet - possess enthesopathies involving elbows and feet of adult males only (Dutour, 1986). For the elbow joint, the lesions are represented as osseous exostoses on the right medial epicondyle, suggesting hyperactivity of pronator teres, flexor carpi radialis, palmaris longus, flexor digitorum
superficialis, and flexor carpi ulnaris. Large exostoses located on the posterior-superior faces of ulnar olecranon processes reflect heavy use of the lower insertion of the tendon for triceps brachii, the primary elbow extender. Several individuals exhibit lesions primarily on the right radial tuberosity, the insertion site for biceps brachii, the primary flexor of the elbow. This enthesopathy may be due to the carrying of heavy loads while the elbows are tightly flexed (Dutour, 1986). Distinctive, vertically oriented bony projections are present on the posterior calcaneus (associated with the achilles tendon) and on the posterior-inferior projection of the plantar aspect of the calcaneus (for insertion of the adductor hallucis) (Dutour, 1986). Both types of enthesopathies are linked with excessive walking and running.
Proximal right ulnae of adult males from terminal Pleistocene sites in central India show a high frequency of well developed supinator crests (Kennedy, 1983). The insertion areas for the anconeus muscle on the ulna are also unusually pronounced. These features also occur in relatively high frequencies in Mesolithic populations from Gujarat and Rajasthan, the Gangetic plain of Uttar Pradesh, and Sri Lanka (Kennedy, 1983). Kennedy (1983) contends that well developed supinator crests reflect the heavy use of missile weapons (e.g., spears, bolas, slings, boomerangs) by South Asian foragers. This type of throwing involves various movements that directly involve the supinator muscle, including abrupt shifts from supination to pronation of the forearm. Muscular strength is the critical factor in throwing abilities and ultimately in the formation of hypertrophic supinator crests (Kennedy, 1983).
In some Arctic populations, high prevalence of postcranial enthesopathies and other indicators of mechanical stress (e.g., size of muscle attachment sites) mirrors the very high frequencies of osteoarthritis. Human remains from the eastern Aleutian Islands (Akun and Akutan Islands) have a high prevalence of muscle, ligament, and tendon enthesopathies (Hawkey & Street, 1992). Aleut females show evidence of marked stress for the right hand and wrist, whereas use of the left arm appears to have been associated with habitual adduction and abduction. In males, the upper limbs show bilateral skeletal changes involving both humeri, resulting from extremely heavy use of left and right arms. These skeletal modifications may represent endurance kayaking with double-bladed paddles. Thule Eskimo groups from northwest Hudson Bay have similar patterns of bilateral upper limb skeletal modifications, indicating the widespread use of kayaks and the general demands of living in the harsh Arctic setting (Hawkey & Merbs, 1997).
Figure 5.5. Spondylolysis of a lumbar vertebra; Avery, Georgia. (Photograph by Mark C. Griffin.)
5.5.3 Stress fractures: spondylolysis and other vertebral injuries
Spondylolysis is a degenerative pathology that involves a separation of the vertebral neural arch in the area between the superior and inferior articular processes (called the pars interarticularis) (Figure 5.5). The condition is unique to the Hominidae, suggesting that bipedality probably plays an important role (Bridges, 1989a; Merbs, 1989b; Nachemson & Wiltse, 1976; Stewart, 1956). The defect is usually bilateral (Waldron, 1992), and, when present, almost always involves the fifth lumbar only, although it also occurs with decreasing frequency from the inferior to the superior lumbar spine in clinical (e.g., Moreton, 1966; Roche & Rowe, 1951) and archaeological settings (e.g., Bridges, 1989a; Gunness-Hey, 1980; Lester & Shapiro, 1968; Lundy, 1981; Merbs, 1983; Pálfi, 1992; Snow, 1948; Stewart, 1979; Waldron, 1993). The defect has also been found in cervical and thoracic vertebrae, but these are rare occurrences.
Stewart (1953b) reported an unusually high prevalence in Aleut-Eskimo populations. In his sample from north of the Yukon River, Alaska, more than 40% of individuals exhibit separate neural arches, which he originally attributed to inbreeding of isolated groups (but see Stewart, 1979). Some
researchers assume that the condition is inherited (e.g., Shahriaree et al., 1979; Snow, 1974; Wiltse, 1957; Wynne-Davies & Scott, 1979). A more compelling explanation is that spondylolysis is a type of fracture: it is absent at birth, there is a progressive increase in defect frequency from childhood through adulthood (e.g., Bridges, 1989a; Lester & Shapiro, 1968; Merbs, 1989b, 1995; Stewart, 1953b, 1956, 1979), followed by healing in later adulthood (Merbs, 1995), the separation affects only intact bone (e.g., Wiltse et al., 1975), and it develops gradually in response to excessive mechanical loads over a period of time (e.g., Eisenstein, 1978; Roberts, 1947; Wiltse, 1962; Wiltse et al., 1975; and see Merbs, 1983, 1989b).
High frequencies in laborers (e.g., Lane, 1893) and in other individuals involved in mechanically demanding activities, such as college football and other athletic sports (Hoshina, 1980; Jackson et al., 1976; McCarroll et al., 1986), support the mechanical stress model. The low frequency of spondylolysis in industrial populations engaged in activities involving minimal physical exertion also supports this interpretation. For example, one sample of twentieth century Americans has a prevalence of only 7% (Moreton, 1966), and Terry Collection White males and females have prevalences of 6.4% and 2.3%, respectively (Roche & Rowe, 1951; see also Fredrickson et al., 1984). Eighteenth and nineteenth century Londoners from Spitalfields also have low frequencies that are slightly greater in males than in females (2.2% vs. 0.6%; Waldron, 1993). These values are lower than in earlier populations, suggesting a decrease in physical demand as it affects the lower back in Britain (T. Waldron, 1991; H. Waldron, 1991; cf. Stirland, 1996).
Inherited structural characteristics of vertebrae may predispose the individual to the defect under mechanically stressful conditions. For example, Stewart (1956) observed that spondylolysis is associated more frequently with (1) a long 'pre-arcuate' spine; (2) an acutely inclined superior sacral surface; (3) pronounced lumbar lordosis; and (4) minimal curvature and depth of superior sacral articular facets. Stewart noted that '... the predictive value of these anomalies is low, if not nil, in so far as their use is concerned in foretelling the development of the neural arch defects' (1956:58). Other suggested predisposing factors include size of the articular processes (Nathan, 1959), large pars interarticularis vascular foramina (Miles, 1975), scoliosis (McPhee & O'Brien, 1980), the presence of lumbar-sacral transition vertebrae (Merbs, 1983), and spina bifida (Merbs, 1989b). There may well be genetically based predisposing factors for spondylolysis, but the mechanical environment prompting its appearance is required.
A range of activities involving hyperextension and hyperflexion of the lower back, perhaps accompanied by jarring and twisting, are linked with spondylolysis (Merbs, 1989b). Given the broad range of behaviors that are associated with these defects, no specific stresses appear to lead to spondylolysis (Bridges, 1989a; Merbs, 1989b). Merbs (1989b) speculates that spondylolysis may have an adaptive value in that it may engender flexibility of the lower back (see also Snow, 1974). Bird and coworkers (1980), for example, reported that adults with the defect considered themselves 'more supple in youth' than adults who lacked the defect. The defect seems to have minimal negative influence on physical performance. For example, college football players with the defect appear to lose neither practice nor playing time, and they continue to play professionally in later years (see discussion in Merbs, 1989b).
The close association between mechanical loading of the lower back and spondylolysis indicates that the defect should be documented in archaeological skeletal series. In northwestern Alabama, early prehistoric hunter-gatherers have a higher prevalence of spondylolysis than late prehistoric agriculturalists (Bridges, 1989a). This finding is consistent with moderate increases in osteoarthritis (see above) in this series. Statistical analysis of the co-occurrence of spondylolysis and osteoarthritis reveals only a weak relationship between the two conditions. Osteoarthritis prevalence is broadly similar between individuals with and without spondylolysis. Structural analysis of long bones reveals a general increase in mechanical demand in the agricultural groups, which seems to contradict the findings based on spondylolysis prevalence (see Chapter 6). Bridges (1989a) speculates that spondylolysis may be associated with unusual postures or specific activities affecting the lower spine rather than overall activity levels (cf. Hoshina, 1980).
Adult males in populations with appreciable frequencies of spondylolysis have a higher prevalence than adult females (e.g., Arriaza, 1995; Gunness-Hey, 1980, 1981; Merbs, 1983, 1995; Stewart, 1979; Trembly, 1995; Waldron, 1993). Presumably, mechanical demands causing the defect were greater for men than for women in these societies. There are some notable exceptions to this pattern, which emphasizes the role of mechanical demand and the influence of culture cross-cutting gender lines. Contact era Omaha and Ponca women from northeastern Nebraska, for example, have a higher prevalence of spondylolysis and spinal osteoarthritis than men (Reinhard et al., 1994). Highly demanding activities affecting the female spine are well documented ethnohistorically; these include hide scraping undertaken in a stooped posture (Reinhard et al., 1994). In addition, women were responsible for a range of other physically demanding activities not shared by men, including house construction and firewood gathering.
Archaeological and other data provide good contextual information on why the prevalence of spondylolysis is elevated in other settings. In the Mariana archipelago, adult males have very high levels of spondylolysis (38% of fifth lumbar vertebrae) (Trembly, 1995). These elevated levels of spondylolysis may be related to heavy lifting and carrying associated with construction of large megalithic structures called latte. Some of the latte stones weighed hundreds of pounds, and no doubt required a great deal of physical effort to move. Trembly (1995) speculates that the construction of these buildings by men explains such a striking frequency of spondylolysis in adult male skeletons.
Anterior slippage of a vertebral body relative to another immediately below sometimes occurs following a spondylolytic fracture (Wiltse et al., 1975). The dislocation, called spondylolisthesis, is normally prevented by the restraining effects of muscles, ligaments, intervertebral disks, and especially by the buttressing provided by intact vertebral articular processes. In the absence of this support, the vertebra is displaced forward, owing to gravity (Merbs, 1989b). The degree of slippage ranges from barely perceptible to complete anterior displacement of the superior body relative to the inferior body (Merbs & Euler, 1985). In archaeological settings, spondylolisthesis is difficult to distinguish from postinterment displacement arising from other causes (e.g., rodent activity), but matching of degenerative changes (e.g., osteophytes) on two adjacent vertebrae can be used to identify the condition (e.g., Manchester, 1982; Merbs & Euler, 1985).
The vulnerability of the back to other types of fracture has also been observed in the lower cervical and upper thoracic vertebrae. In a range of populations, the tip of the spinous process is fractured and associated with a pseudarthrosis (Knüsel et al., 1996). Clinical evidence suggests that the fracture is due to highly forceful muscle contraction involving hyperextension or hyperflexion of the neck or activities involving scapular retraction toward the vertebral column during rib elevation (Knüsel et al., 1996). In twentieth century populations, individuals engaged in various kinds of projects involving shovelling of heavy soils and clay have a relatively high frequency of this type of injury. Virtually all individuals affected, archaeological and contemporary, are males.
5.6 Summary and conclusions
Study of articular joint modifications relating to mechanical demand - especially osteoarthritis - offers insight into the stresses of different activity patterns and lifestyles in past populations. Generally, the more mechanically demanding the lifeway, the greater the prevalence of osteoarthritis and other degenerative pathological conditions related to activity (e.g., spondylolysis). Conversely, less demanding work repertoires result in a relatively lower prevalence of these conditions. Temporal comparisons of contrasting subsistence strategies within regions indicate some suggestive trends. For example, although there are some exceptions, later prehistoric agriculturalists tend to show a reduction in osteoarthritis and degenerative pathology relative to earlier foragers. This provides some support for the traditional point of view that foragers work harder than agriculturalists. More importantly, local circumstances and conditions influence osteoarthritis prevalence and pattern in a far more profound way. Adult males generally show a tendency for greater degenerative pathology than adult females in archaeological settings. Although osteoarthritis is related to mechanical loading, the relationship to level or type of activity is not a direct one. High levels of osteoarthritis in the skeleton suggest demanding lifestyles, but do not indicate whether these demands also include long-distance or frequent travel as a cause, except possibly with regard to the high prevalence of foot osteoarthritis. Other skeletal indicators of mobility are discussed in the following chapter.
6 Activity patterns: 2. Structural
adaptation
6.1 Bone form and function
Julius Wolff, a leading nineteenth century German anatomist and orthopedic surgeon, recognized the great sensitivity of bones to mechanical stimuli, especially with regard to their ability to adjust size and shape in response to external forces. Wolff concluded that 'every particle of mature bone is very active. Such activity must appear in the external shape of the bones' (1892:78). What he called the 'law of bone remodelling' - now known as Wolff's Law - simply states that bone tissue places itself in the direction of functional demand.
A great deal of evidence has accrued in support of Wolff's hypothesis, thus demonstrating the primacy of the mechanical environment in interpreting skeletal structural variation. Experimental and other research on bone remodeling is instrumental in identifying patterns of skeletal modification under different loading regimes (see Lanyon et al., 1982; Meade, 1989; Trinkaus et al., 1994). Using laboratory dogs, Chamay & Tschantz (1972) observed that the surgical removal of portions of radii resulted in the hypertrophy of ulnar diaphyses. The ulnar diaphyses increased in size by 31% after just 16 days and 60-100% at nine weeks. Similarly, Lanyon and coworkers (Goodship et al., 1979; Lanyon & Bourne, 1979) documented increased apposition of bone on the radius following ulnar osteotomies in pigs and sheep. Nonsurgical load alterations have also resulted in changes in bone mass. Woo and coworkers (1981) identified significant endosteal apposition in young pigs subjected to exercise. Simkin and coworkers (1989) compared humeri from swimming and nonswimming rats in an experimental setting. The swimming rats included a group trained to swim for one hour per day and a group that underwent the same training, but also had a lead weight (approximately 1% of the rat's body weight) tied to their tails. Comparison of bone size and structure revealed that both groups of swimming rats had greater periosteal apposition than the sedentary, nonswimming rats.
Humans with unusually high levels of activity involving the use of the dominant upper limb, such as professional tennis players (Jones et al., 1977; Ruff et al., 1994), rodeo cowboys (Claussen, 1982), and baseball pitchers (King et al., 1969), exhibit marked hypertrophy of the external diaphyses of long bones of the playing side. In professional tennis players, for example, one study revealed that males have a 35% increase in cortical area in the distal humerus of the playing arm vs. the nonplaying arm; females have a 29% increase (Jones et al., 1977). The effects of elevated mechanical demands in relation to increased exercise are well documented in clinical settings. Adults who exercise tend to have higher bone mass and size than adults who are relatively sedentary, especially in males (reviewed by McMurray, 1995).
Removal of normal functional loading, especially involving extended periods of bed rest, weightlessness in spaceflight and absence of gravitational loading, or partial or complete immobilization of limbs, results in decreased bone mass (e.g., Abram et al., 1988; Jenkins & Cochran, 1969; Kazarian & Von Gierke, 1969; Kiratli, 1992; Lanyon & Rubin, 1984; Lazenby & Pfeiffer, 1993; Meade, 1989; Morey & Baylink, 1978; Prince et al., 1988; Sevastikoglou et al., 1969; Todd & Barber, 1934; Whalen et al., 1988; see discussion by Trinkaus et al., 1994).
Bone is anisotropic in that it is characterized by different material properties depending upon the direction of loading (Cowin, 1989; Currey, 1984; Nordin & Frankel, 1980; Wainwright, 1988). Long bones, for example, are stronger in the longitudinal direction than in any other plane (Wainwright, 1988). The primary loading forces affecting bone include tension, compression, shear, bending, and torsion (Figure 6.1). These can be best understood by considering a thin section, or slice of bone. Tensile loading occurs when equal and opposite forces are applied outwardly from the surface of the slice. Compression, the opposite of tension, occurs when equal and opposite loads are directed toward the two surfaces. Shear loads involve the application of forces parallel to the surfaces under consideration. Bending forces produce two types of stresses, tension on the convex side and compression on the concave side. Torsional loading is the twisting of the skeletal element about an axis and results in a combination of tension, compression, and shear. Individual skeletal elements have irregular geometric structure, and a range of forces usually acts on bone in normal physiological activities, such as running and walking. Therefore, loading almost always involves a combination of these modes. The largest and most common loading modes as they affect the human skeleton are bending and torsion, especially for the long bones.
Figure 6.1. Loading modes that affect long bones: unloaded, tension, compression, shear, bending, torsion, and combined loading (torsion-compression). (From Nordin & Frankel, 1980; reproduced with permission of authors and Williams & Wilkins.)
6.2 Cross-sectional geometry
Biomechanics - the application of engineering principles to biological materials - represents an important means of analyzing and interpreting skeletal morphology within the context of the mechanical environment. Unlike straight mechanical analysis of building materials, biomechanics deals with dynamic tissues that modify themselves continuously in relation to loading modes and activity.
The density of bone tissue differs within the skeleton and within individual bones in response to varying mechanical demands. Mineral content is a component of bone strength (see Burr, 1980; Martin & Burr, 1989). However, the response to increased loading is primarily in the distribution of bone (geometric) rather than density or any other intrinsic material property of bone (Beck et al., 1990; Burr et al., 1989; Ruff, 1989).
Borrowing the simple beam model used by civil and mechanical engineers to analyze buildings (e.g., Huiskes, 1982; Timoshenko & Gere, 1972), biological anthropologists and others have analyzed long bone diaphyses (e.g., Bridges, 1989b; Larsen & Ruff, 1994; Ruff & Hayes, 1983a, 1983b), mandibular corpi (e.g., Daegling, 1989; Hylander, 1979; Schwartz
Figure 6.2. Cross section of bone undergoing bending (a) and torsion (b), showing stress distribution around the neutral plane and axis, respectively. Note that the magnitude of forces (indicated by heavier arrows) is greatest at the periphery of the bone and least nearest the neutral plane or axis. (From Nordin & Frankel, 1980; reproduced with permission of authors and Williams & Wilkins.)
& Conroy, 1996), femoral necks (Beck et al., 1990; Phillips et al., 1975), and second metacarpals (Roy et al., 1994). These investigations, as well as experimental evidence based on laboratory animals (e.g., Abram et al., 1988; Simkin et al., 1989), indicate the value of structural analysis in drawing inferences about physical activity and behavior patterns.
In bending and torsion of a hollow beam, such as a long bone, the magnitude of mechanical stresses is proportional to the distance from the central or 'neutral' axis of the bone (Figure 6.2). The neutral axis is the plane (bending) or axis (torsion) where stress is zero. Thus, all else being equal, the cross section that is strongest is that in which the material is oriented furthest from the neutral axis (Currey, 1984; Nordin & Frankel, 1980; Ruff, 1992). And, by inference, the greater the distance from the axis, the greater the magnitude of stresses (see Nordin & Frankel, 1980; Ruff, 1992; Wainwright, 1988). In long bones and in some other elements (e.g.,
metacarpals), the cross-sectional area and the manner in which the bone is distributed about an axis reflect mechanical/functional behavior.
Bending a ruler is a good analogy for demonstrating the principles associated with beam analysis. If one attempts to bend a ruler against its narrow edges, there is little or no give. In contrast, if forces are applied virtually anywhere along its flat surface, especially toward the middle, the ruler bends readily. From a mechanical perspective, the small amount of give when applying bending forces to the narrow edges occurs because the materials in this axis are distributed relatively far from a central, neutral axis. Thus, in this plane of bending, the ruler has a great deal of strength. Conversely, the ease of deformation when bending forces are applied to the flat surface is made possible by the lack of material far from the neutral axis; therefore there is very little strength of material when the ruler is subjected to bending forces in this direction. A ruler is structured so as to resist bending from one direction only. Given the tubular shape of long bones, they are able efficiently to withstand the mechanical demands associated with bending and torsion from multiple directions.
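The ruler analogy can be made quantitative with the standard formula for the second moment of area of a rectangular section, I = bh^3/12, taken about the bending axis. A minimal sketch in Python, using hypothetical ruler dimensions (30 mm wide, 2 mm thick; these numbers are not from the text):

    # Second moments of area for a rectangular (ruler-like) section,
    # I = b * h**3 / 12, where h is the dimension in the plane of bending.
    b, h = 30.0, 2.0        # hypothetical width and thickness, in mm
    I_flat = b * h**3 / 12  # bending against the flat face: 20 mm^4
    I_edge = h * b**3 / 12  # bending against the narrow edge: 4500 mm^4
    print(I_edge / I_flat)  # 225.0

The 225-fold difference in rigidity arises purely from how the same cross-sectional material is distributed about the two bending axes, which is exactly what second moments of area capture.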
Beam analysis involves the measurement of geometric properties from cross sections taken perpendicular to the long axis of a skeletal element. As demonstrated by the ruler analogy, these properties are based on both the amount and the distribution of bone tissue in the cross section. As such, they are direct measures of the 'strength' (more precisely, rigidity) of the bone cross section in resisting forces or loadings. The properties measured in archaeological human bones should, then, represent a measure of the cumulative forces operating on the skeletons of individuals during their lifetimes.
Cross-sectional geometric properties measure the amount and the distribution of skeletal tissue in a section. These properties include section 'areas' and 'second moments of area' (or 'area moments of inertia') (Figure 6.3). Areas include total subperiosteal area (TA), endosteal or medullary area (MA), and cortical area (CA). Measurements of mediolateral (ml) and anteroposterior (ap) breadths of long bone diaphyses are used to calculate areas with formulas for two planes of measurement (e.g., Ruff & Jones, 1981):

TA = π(T_ap/2)(T_ml/2)
MA = π(M_ap/2)(M_ml/2)
CA = TA - MA

where T is the total external diameter and M is the medullary diameter. Thus, CA is a measure of the amount of cortical bone in a cross section and
Figure 6.3. Computer-reconstructed cross section of femur midshaft and associated geometric properties; Pecos Pueblo, New Mexico. Cross at center of section is the centroid, around which section properties are calculated, including TA (total subperiosteal area), CA (cortical area), X_bar and Y_bar (centroid coordinates), I_x and I_y (second moments of area about x and y axes), Theta (orientation of greatest bending strength), I_max and I_min (maximum and minimum second moments of area), and J (polar second moment of area). (From Ruff, 1992; reproduced with permission of author and John Wiley & Sons, Inc.)
is also an indicator of strength of the long bone diaphysis under pure axial loading (loading that is simultaneously applied to both ends of the bone). Per cent cortical area (%CA = CA/TA) is an alternative indication of amount of compact cortical bone. CA and %CA provide very different representations of bone mass, owing to the fact that the latter, but not the former, measures cortical bone relative to TA (and see Ruff, 1992; Ruff & Larsen, 1990). TA and MA are measurements of the two major surfaces of the bone cortex, including the outer periosteal and inner endosteal surfaces, respectively (Ruff, 1991). Expansion in TA and MA indicates a greater distribution of skeletal tissue further from the neutral axis of the bone.
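As an illustration, the elliptical-model area calculations above can be written out directly. The following sketch simply transcribes the formulas as given (e.g., Ruff & Jones, 1981); the example breadths in the comment are hypothetical, not measurements from any series discussed here:

    import math

    def ellipse_model_areas(T_ap, T_ml, M_ap, M_ml):
        # TA, MA, CA, and %CA from anteroposterior (ap) and mediolateral
        # (ml) external (T) and medullary (M) diaphyseal breadths,
        # treating both surfaces as ellipses per the formulas in the text.
        TA = math.pi * (T_ap / 2) * (T_ml / 2)  # total subperiosteal area
        MA = math.pi * (M_ap / 2) * (M_ml / 2)  # medullary area
        CA = TA - MA                            # cortical area
        return TA, MA, CA, 100.0 * CA / TA      # last value is %CA

    # e.g., hypothetical femoral midshaft breadths in mm:
    # ellipse_model_areas(28.0, 26.0, 12.0, 11.0)
    # -> TA ~571.8, MA ~103.7, CA ~468.1, %CA ~81.9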
Bone areas, especially CA, are proportional to strength in compression and tension when the forces are applied noneccentrically along the central longitudinal axis of the bone diaphysis (axial loading). Because long bones are curved and are affected by muscular forces applied off-center to a bone axis, pure axial loadings in either compression or tension are rare; most forces involving long bone diaphyses are eccentrically applied. Second moments of area have been shown generally to be more accurate indicators of bone strength and mechanical function than areas alone. For example,
analysis of second moments of area in second metacarpals from the dominant hand in a large sample of urban Americans (n = 992) reveals that in both left-handed and right-handed individuals the metacarpals in the dominant hand have significantly greater second moments of area than metacarpals in the nondominant hand (Roy et al., 1994). Increased bone strength is not the result in this case of greater cortical thickness. This finding underscores the point that cortical thickness by itself is not an appropriate indicator of functional/mechanical demand (see also Ruff, 1992; Ruff & Larsen, 1990).
Second moments of area are geometric properties that are used to measure bending strength and torsional strength. Bending strength, values of which are called 'I' with a subscript that references the specific axis running through the cross section, is calculated in relation to the neutral axis. The general formula for calculating I in relation to a particular axis is:

I = Σ a_i d_i^2

where a_i is the unit area and d_i is the perpendicular distance from the center of the unit to the neutral axis. I_x can refer to the bending strength in the anteroposterior plane and I_y to bending strength in the mediolateral plane (Ruff, 1989). Other values of I expressing the maximum and minimum bending strength in a cross section are referred to as I_max and I_min, respectively, where I_max measures the maximum strength in resistance of bone to bending and I_min measures the minimum strength in resistance of bone to bending. Torsional strength is calculated in reference to the neutral center or 'centroid' of the cross section and is called the polar second moment of area, or 'J'. J is equal to the sum of the values of I_max and I_min (or I_x and I_y), which are always perpendicular to each other (Ruff & Hayes, 1983a).
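For a cross section idealized as two concentric, aligned ellipses - the same simplification underlying the area formulas above, and an assumption of this sketch rather than anything stated in the text - I_x, I_y, and J have closed-form solutions. Real sections are irregular, which is why digitized contours (described below) are preferred:

    import math

    def ellipse_model_second_moments(T_ap, T_ml, M_ap, M_ml):
        # Hollow-ellipse approximation from external (T) and medullary (M)
        # breadths, with y taken as the anteroposterior (ap) direction.
        a, b = T_ml / 2.0, T_ap / 2.0    # external semi-axes (x = ml, y = ap)
        am, bm = M_ml / 2.0, M_ap / 2.0  # medullary semi-axes
        Ix = math.pi / 4.0 * (a * b**3 - am * bm**3)  # a-p bending rigidity
        Iy = math.pi / 4.0 * (a**3 * b - am**3 * bm)  # m-l bending rigidity
        return Ix, Iy, Ix + Iy                        # J = Ix + Iy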
Values of I and J are calculated as products of very small unit areas in the cross section and squared distances of the unit areas relative to the neutral axis (for values of I) or the neutral center of the section (for values of J). Therefore, second moments of area are presented as linear dimensions to a specific power. Because of variability in body size or length of long bones in the comparison of different population samples or temporal series within a particular setting, properties should be size standardized when comparisons are made between or within comparative groups (Ruff, 1984). Earlier work suggested that, for the femur, dividing areas by the square of the length and second moments of area by the fourth power of the length was an appropriate means of size standardizing (e.g., Ruff, 1984; Ruff &
Larsen, 1990). More recent analyses utilizing additional and more extensive samples of extinct and recent human skeletal samples suggest that more appropriate powers for size standardization of the femur are bone length cubed and bone length to the power 5.33 for areas and second moments of area, respectively (Ruff et al., 1993).
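In practice this standardization is a simple division; a minimal sketch following the powers reported by Ruff et al. (1993) for the femur (the function and parameter names are illustrative):

    def standardize_femoral_properties(area, second_moment, bone_length):
        # Size standardization per Ruff et al. (1993): areas over
        # length**3, second moments of area over length**5.33 (femur).
        return area / bone_length**3, second_moment / bone_length**5.33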
Cross-sectional geometric analysis was first applied to samples of more than a single bone in a variety of settings involving both human and nonhuman primates (e.g., Jungers & Minns, 1979; Kimura, 1971, 1974; Klenerman et al., 1967; Lovejoy et al., 1976; Martin & Atkinson, 1977; Miller & Piotrowski, 1977; Minns et al., 1975; Piziali et al., 1976). These studies were generally limited to fewer than 10 individuals because of the lengthy and tedious process involved in manually calculating geometric properties. Specifically, two problems with calculating section properties include the determination of endosteal and periosteal boundaries and the mathematical integration of areas that are necessary to complete the calculations (Ruff, 1992). The development of automated protocols for computer analysis of large numbers of cross sections (Nagurka & Hayes, 1980) and new technologies has made it possible to carry out studies involving more extensive samples. These developments have fostered a more comprehensive understanding of variability both within and between populations from archaeological and paleontological contexts (e.g., Berget & Churchill, 1994; Bridges, 1985, 1989b, 1991a, 1995a, 1995b; Brock, 1985; Brock & Ruff, 1988; Churchill, 1994; Kimura & Takahashi, 1982; Larsen & Ruff, 1991, 1994; Larsen, Ruff et al., 1995, 1996; Robbins et al., 1989; Ruff, 1987, 1989, 1991, 1994b; Ruff & Hayes, 1982, 1983a, 1983b; Ruff & Larsen, 1990; Ruff et al., 1984, 1991; Sumner et al., 1985; Trinkaus et al., 1994; Van Gerven et al., 1985).
The method of geometric analysis involves the preparation of a section image made perpendicular to the longitudinal axis of the bone. This image is obtained through several alternative means, from existing breaks on long bone diaphyses (e.g., Lovejoy & Trinkaus, 1980), direct cutting (e.g., Larsen & Ruff, 1994; Ruff & Hayes, 1983a, 1983b; Ruff et al., 1984), or noninvasive imaging, especially computed tomography (CT) (e.g., Brock & Ruff, 1988; Bridges, 1989b; Larsen, Ruff et al., 1995; Ruff, 1987). Other useful noninvasive techniques include multiple plane radiography (e.g., Biknevicius & Ruff, 1992; Fresia et al., 1990; Runestad et al., 1993; Trinkaus & Ruff, 1989) and photon absorptiometry (e.g., Martin & Burr, 1984; Van Gerven et al., 1985; and see discussion of imaging techniques by Ruff, 1989; Ruff & Leo, 1986). Noninvasive imaging is useful in situations in which the cutting of bone specimens is not possible (e.g., fossil hominids). Unlike invasive cutting, the advantage of noninvasive analysis
is that section properties can be determined directly from the images (e.g., CT scan).
The accuracy of geometric analysis is dependent upon the availability of fully intact periosteal and endosteal section contours. The contour integrity of the periosteum can be determined through visual inspection. For noninvasive analysis, it is advisable to examine the endosteal surface by cutting one or two specimens or using diaphyses that have already been broken. In addition to well preserved periosteal and endosteal surfaces, the accuracy of analyses - including size (length) standardization as well as precise location of sections - requires the presence of largely intact ends of long bones. Therefore, even though large numbers of skeletons may be present in an archaeological series, if differential preservation exists, then only subsamples composed of well preserved specimens can be analyzed (e.g., Ruff & Larsen, 1990; Ruff et al., 1984).
There are various ways of obtaining landmark data for section analysis. Commonly, once the section image is obtained, a magnified photographic slide image is projected onto a digitizer screen, and the periosteal and endosteal borders are manually traced with a digitizer stylus directly input to a microcomputer, recording x and y coordinates at intervals of 1 mm. An automated computer program (e.g., SLICE; Nagurka & Hayes, 1980) then calculates the section properties.
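A program such as SLICE performs this integration numerically over the digitized boundary coordinates. A minimal, self-contained sketch of the same idea (not the SLICE code itself) uses standard shoelace-type polygon formulas, treating the endosteal contour as a subtracted region; all function and variable names here are illustrative:

    import numpy as np

    def contour_terms(x, y):
        # Shoelace-based integrals over one closed, counterclockwise contour.
        x, y = np.asarray(x, float), np.asarray(y, float)
        xn, yn = np.roll(x, -1), np.roll(y, -1)         # next vertex
        c = x * yn - xn * y                             # cross terms
        A = 0.5 * c.sum()                               # signed area
        Sx = ((x + xn) * c).sum() / 6.0                 # A * centroid_x
        Sy = ((y + yn) * c).sum() / 6.0                 # A * centroid_y
        Ix0 = ((y**2 + y*yn + yn**2) * c).sum() / 12.0  # about origin x axis
        Iy0 = ((x**2 + x*xn + xn**2) * c).sum() / 12.0  # about origin y axis
        return A, Sx, Sy, Ix0, Iy0

    def section_properties(xp, yp, xe, ye):
        # CA, centroid, centroidal Ix and Iy, and J for a hollow section
        # bounded by periosteal (xp, yp) and endosteal (xe, ye) contours.
        Ap, Sxp, Syp, Ixp, Iyp = contour_terms(xp, yp)
        Ae, Sxe, Sye, Ixe, Iye = contour_terms(xe, ye)
        CA = Ap - Ae                                    # cortical area
        cx, cy = (Sxp - Sxe) / CA, (Syp - Sye) / CA     # net section centroid
        Ix = (Ixp - Ixe) - CA * cy**2                   # parallel-axis shift
        Iy = (Iyp - Iye) - CA * cx**2                   # to the centroid
        return CA, cx, cy, Ix, Iy, Ix + Iy              # J = Ix + Iy

Given boundary coordinates digitized at roughly 1 mm intervals, such a routine returns the section properties defined above; I_max, I_min, and Theta then follow from the additional product moment of area via a principal-axis rotation.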
Studies of cross-sectional geometric properties address a range of long-standing issues in anthropology dealing with activity and physical behavior, especially in regard to subsistence strategies, the relationship of sex differences to dietary and subsistence adaptation, and variability in skeletal growth and development in response to environmental and dietary change.
6.2.1 Specific lifeway patterns
A perspective on physical activity is revealed by the study of patterns of biomechanical variation within the context of specific regions and populations. Analysis of cross-sectional geometry in femora (50% and 80% sections) and humeri (35% section) from prehistoric Stillwater Marsh foragers in the western American Great Basin reveals important skeletal adaptations in upper and lower limbs in a harsh desert setting (Larsen, Ruff et al., 1995; C. B. Ruff, unpublished manuscript). Comparisons of bone areas with other North American archaeological series reveal that the Stillwater group is on the low end of a continuum, indicating relatively low bone mass (Figure 6.4). In sharp contrast, TA and J are remarkably high. For the male femoral midshaft, CA and %CA are the lowest of all of the
Figure 6.4. Comparisons of cross-sectional areas - total subperiosteal area (TA), cortical area (CA), and per cent cortical area (PCCA) - for three long bone sections (midshaft femur, subtrochanteric femur, and mid-distal humerus). Note that for each graph the foragers are at the left (Stillwater at far left) and farmers are at the right. Areas are standardized over the square of the bone length (and multiplied by 10^5). Filled squares, males; open squares, females; Preag., Preagricultural; Gr. Pl. Precoal., Great Plains Precoalescent; Coal., Coalescent; Ag., Agricultural. (From Larsen, Ruff et al., 1995; reproduced with permission of American Museum of Natural History.)
North American comparative samples, whereas TA and J are the highest. These findings indicate that there is a relatively low amount of cortical bone, but the skeletal tissue that is present is distributed far from the neutral axes and centroids, indicating very high bending and torsional strength.
Females and males from these various North American settings show somewhat different patterns of skeletal morphology (Larsen, Ruff et al., 1995; C. B. Ruff, unpublished manuscript). In males in these groups, torsional strength (J) in the femoral midshaft closely parallels the subsistence strategy - hunter-gatherers tend to be high (e.g., Stillwater, Georgia preagricultural), and conversely, agriculturalists tend to be low (e.g., Pecos Pueblo). This trend follows the rationale for a decline in mechanical loading of the femur in sedentary agriculturalists in comparison with mobile hunter-gatherers (Ruff, 1987). This is not to say that foragers are
Figure 6.5. Comparisons of femoral midshaft section shape (I_x/I_y), Ruff's mobility index. Note that the foragers are at the left (Stillwater at far left) and farmers are at the right. Filled squares, males; open squares, females; Preag., Preagricultural; Gr. Pl. Precoal., Great Plains Precoalescent; Coal., Coalescent; Ag., Agricultural. (From Larsen, Ruff et al., 1995; reproduced with permission of American Museum of Natural History.)
uniformly mobile and farmers are not. Increasing evidence indicates that a
number of hunter-gatherer groups in the prehistoric past were relatively
sedentary (e.g., Erlandson, 1994; O'Neill, 1994). Rather, these biomechani-
cal studies suggest tendencies in patterns of mobility from high levels in
foragers to low levels in agriculturalists.
Torsional loading bears no apparent relationship to subsistence strategy
in females in these North American groups. Rather, female torsional
loading corresponds with degree of ruggedness of terrain - mountainous
populations (Stillwater, Pecos Pueblo) have the highest values of J, coastal
populations have the lowest values (Georgia coast), and Great Plains
populations are intermediate. The differences between males and females
are also indicated by comparison of Ruff's mobility index. Stillwater males
show a very high degree of activity involving use of the lower limbs, such
as in long distance travel. Their values are among the highest in comparison
with other North American populations, and, like data presented for
second moments of area (J), the index value fits into a continuum from
hunter-gatherers to agriculturalists (see Figure 6.5). On the basis of this
index, it appears that Stillwater females are relatively less mobile than
males, which is true for all other North American series. Comparisons of
females and males in the Ix/Iy ratio reveal that Stillwater foragers are highly
sexually dimorphic, which is a paramount characteristic of hunter-
gatherers generally and is related to sexual division of labor and a strong
male-female dichotomy in activity patterns (Larsen, Ruff et al., 1995; Ruff,
1987; C. B. Ruff, unpublished manuscript).
In the Stillwater series, humeri show a somewhat different pattern of area
and second moments of area from that of femora. Especially striking are
the low values of humeral CA and J (Larsen, Ruff et al., 1995). This finding
is consistent with the hypothesis that mechanical loading is localized in the
skeleton. With regard to the Stillwater adults, the presence of high bone
strength in the femur and not the humerus indicates a primarily mechanical
effect (ruggedness of environment, and thus, high lower limb loadings);
whereas the low values of CA for both skeletal elements - femur and
humerus - are more likely to be due to some systemic influence, such as
undernutrition.
Therefore, the generally high levels of robusticity in Stillwater adults -
especially in femora - are consistent with findings based on osteoarthritis
prevalence (and supporting a 'limnomobile' subsistence strategy; see Chap-
ter 5). Behaviors causing osteoarthritis and elevated robusticity result in
respective high frequencies of pathology and high second moments of area.
Comparative analysis of long bone geometry gives greater insight into
mobility patterns. From Ruff's mobility index, it appears that these
populations were highly mobile - especially males - and led physically
demanding lifestyles generally.
Populations inhabiting the Aleutian Islands, Alaska, from the mid-
eighteenth century to the present, also led physically demanding lifestyles.
An especially demanding task was kayaking on the open ocean (Laughlin
et al., 1991, 1992; see also Chapter 5). This activity underlies the extreme
external robusticity of Aleut humeri (e.g., Hrdlicka, 1945) and high levels
of osteoarthritis and other joint modifications in these populations (e.g.,
Hawkey & Street, 1992; Chapter 5). In order to document and interpret the
effects of high levels of mechanical loading on humeri, Berget & Churchill
(1994) examined two sections of the humeral diaphysis (35% and 50%)
from the western Aleutian islands (Kagamil, Shiprock, Umnak) and
compared geometric properties (CA, J) with Euroamerican, African
American, and agricultural Pueblo Native American samples. Not surpris-
ingly, bone strength in male Aleut humeri is far more pronounced than in
the other series. Torsional strength in Aleut female humeri is also much
higher than in females of the other groups; Aleut female values are similar
to male values from other samples. These elevated levels of bone strength in
Table 6.1. Humeral polar moments of area - 35%, 50%, and 65%
sections - in fossil and recent human adults. (Values divided by length to
the fourth power and multiplied by 10⁷. Adapted from Churchill, 1994:
Tables 16 and 17.)

                              Females                   Males
Group                      35%     50%     65%      35%     50%     65%
Euroamerican               79.9    89.7    96.8     124.3   145.5   149.3
  (n)                      (19)    (19)    (19)     (25)    (25)    (25)
Afroamerican               101.9   113.5   117.8    151.4   169.6   178.8
  (n)                      (25)    (24)    (25)     (25)    (25)    (25)
Aleut                      141.2   184.7   210.0    198.6   265.6   316.5
  (n)                      (21)    (22)    (22)     (25)    (25)    (25)
Amerindian                 102.3   121.4   128.3    102.1   118.5   124.8
  (n)                      (20)    (20)    (20)     (20)    (20)    (20)
Peruvians                  66.1    86.5    95.6     127.6   139.7   147.3
  (n)                      (1)     (1)     (1)      (3)     (3)     (3)
Late Upper Paleolithic     125.2   155.0   172.1    154.3   183.8   183.0
  (n)                      (6)     (6)     (5)      (9)     (9)     (8)
Early Upper Paleolithic    82.0    132.8   120.0    144.0   128.8   192.7
  (n)                      (4)     (3)     (3)      (6)     (6)     (5)
Skhul/Qafzeh               101.3   79.5    95.9     92.2    82.9    123.5
  (n)                      (1)     (2)     (2)      (1)     (2)     (1)
Archaic humans             125.2   133.3   151.0    166.7   184.2   194.9
  (n)                      (4)     (4)     (4)      (7)     (6)     (6)
adult humeri reflect intensive mechanical loading of the upper limb.
Comparisons of Aleuts with other populations in humerus torsional
strength reveal that these subarctic peoples surpass values derived from a
range of modern and fossil humans, including Neandertals (Churchill,
1994) (Table 6.1).
With the exception of data comparisons by sex, few workers have
compared cross-sectional properties by other components of individual
populations, such as status (e.g., high vs. low) or diet (e.g., more maize vs.
less maize). High-status adults (mound burials) from the late prehistoric
Dallas site, Tennessee, have thinner femoral midshaft cortical bone (CA)
than low-status adults (mound periphery and village burials) (Hatch et al.,
1983). In ranked societies in the early contact era American Southeast,
Africa, and Polynesia, high-status individuals enjoyed a less physically
demanding lifestyle than low-status individuals (see Hatch et al., 1983).
Thus, the thinner cortical bone in high-status Dallas adults may reflect a
lifestyle involving more limited physical activity relative to their low-status
counterparts. CA is also subject to nutritional factors (see Chapter 2).
Thus, these differences may reflect a reduced-quality diet in high-status
individuals, and not less bone strength.
A more convincing argument for differences in physical activity within a
population can be made by examination of second moments of area.
Prehistoric individuals whose carbon isotopic signatures indicate more
maize consumption than others (see also Chapter 8) from the Great Salt
Lake region of Utah have relatively low femoral and humeral second
moments of area, especially in males (C. B. Ruff, unpublished manuscript).
Consumers and nonconsumers of maize in this setting have similar values
of per cent cortical area (%CA), suggesting that the differences in bone
structure are not due to dietary stress, but rather to behavioral and activity
differences.
6.2.2 Age changes in diaphyseal structure
Via radiographic analysis, Smith & Walker (1964) demonstrated that
weight-bearing long bones (femur) undergo continuous diaphyseal expan-
sion throughout the years of adulthood. This pattern of bone apposition
has also been observed in nonweight- and weight-bearing bones in the
comparison of younger and older adults in a number of contemporary
settings (e.g., Epker et al., 1965; Epker & Frost, 1966; Garn, 1989; Garn et
al., 1967, 1992) and archaeological series (e.g., Carlson et al., 1976; Pfeiffer,
1980; Ruff & Hayes, 1983b; Stirland, 1993). Some contend that periosteal
expansion represents a compensatory response to endosteal bone loss and
thinning of the cortex with advancing age (Garn et al., 1967; Ruff & Hayes,
1982; Smith & Walker, 1964; and see discussion by Martin & Burr, 1989).
Until recently, this hypothesis was difficult to test because of the impreci-
sion of radiographic measures of cortical bone remodeling (see Ruff &
Hayes, 1982). In order to examine the issue of age changes and periosteal
expansion in more detail, Ruff & Hayes (1982, 1983b) analyzed section
properties - areas and second moments of area - from the late prehistoric/
protohistoric Pecos Pueblo site, New Mexico. Analysis of femoral and
tibial diaphyseal sections reveals that both sexes saw increases in MA and
TA and decreases in CA with advancing age (Table 6.2). Second moments
of area (Imax and Imin) increase in older adults. Thus, in support of the
compensatory hypothesis, continuous periosteal expansion in older adults
appears to maintain the mechanical integrity of the long bone despite
overall decline in bone mass.
Variation in the comparison of different sections along femoral and tibial
diaphyses also reveals that skeletal remodeling with age is less pronounced
Table 6.2. Percentage change with age in femoral and tibial cross-
sectional geometric properties. Calculated by the formula: {[(40+
years) - (20 to 39 years)] ÷ (20 to 39 years)} × 100. (Adapted from
Ruff & Hayes, 1982: Table 1.)

Bone
(section location)       CA       MA      TA      Imax    Imin
Males
Tibia (20%)              -1.1     13.1ᵇ    7.3ᵃ    9.4     7.8
Tibia (50%)               0.2     35.7ᵃ   12.4ᶜ   14.6ᵇ   19.9
Femur (50%)               0.1     25.4ᵃ    6.7ᵃ   12.3ᵃ    6.8
Femur (80%)              -0.5     13.1ᵇ    4.7ᵇ    7.5     4.8
Females
Tibia (20%)             -14.5ᶜ     9.1ᵇ    9.1ᵇ   -1.5    -3.0
Tibia (50%)             -13.1ᵇ    62.4ᵃ   12.9ᵃ   10.3     4.1
Femur (50%)              -6.6     66.6ᶜ   11.1    -9.6    15.6ᵃ
Femur (80%)              -2.4     41.2ᶜ   13.6ᶜ   16.5ᵃ   16.1ᵇ

ᵃ Statistically significant between age groups (Student's t-test: p ≤ 0.05).
ᵇ Statistically significant between age groups (Student's t-test: p ≤ 0.01).
ᶜ Statistically significant between age groups (Student's t-test: p ≤ 0.001).
in the most distal and proximal sections. The greater remodeling in tibial
and femoral midshaft and adjacent sections is probably due to the
relatively greater mechanical loads - especially bending - relative to distal
and proximal ends (Ruff & Hayes, 1982; Ruff, 1992).
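The percentage changes in Table 6.2 follow directly from the formula given in the table caption. A minimal sketch - the group means below are invented, and only the resulting 62.4% figure comes from the table:

    def pct_change_with_age(mean_40_plus, mean_20_to_39):
        """{[(40+ years) - (20 to 39 years)] / (20 to 39 years)} x 100."""
        return (mean_40_plus - mean_20_to_39) / mean_20_to_39 * 100.0

    # Hypothetical female tibial midshaft medullary areas (mm^2), chosen so the
    # result reproduces the 62.4% MA change reported in Table 6.2:
    print(pct_change_with_age(210.0, 129.3))   # ~62.4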
Juveniles also appear to show compensatory diaphyseal remodeling.
Analysis of Medieval period juvenile (≤16 years) tibiae from Kulubnarti,
Sudanese Nubia, shows increases in per cent cortical area (%CA) and
second moments of area (Ix, Iy) from ages 3 to 12 (Van Gerven et al., 1985).
After age 12, areas decline, while second moments of area continue to
increase dramatically. Thus, despite decline in bone area, continued
increase in second moments of area in the later juvenile years appears to
foster mechanical integrity throughout the years of growth and develop-
ment.
In order to document in more precise fashion the ontogenetic age
patterns of diaphyseal remodeling, Ruff and coworkers (1994) examined
areas and second moments of area of humeri in professional tennis players
aged 14 to 39 years. Both males and females showed a pattern of endosteal
contraction and periosteal expansion resulting in large increases in bone
area (CA) and torsional strength (J) in the dominant playing arm. The
resulting robusticity is primarily due to greater periosteal expansion and
not endosteal contraction. The degree of humeral robusticity has a strong
association with age: individuals who began playing tennis earlier had
greater robusticity than individuals who began later. The increased
mechanical loading in children and young adolescents has a more pro-
nounced effect on the periosteal surface; after mid-adolescence, loading
appears to have a more pronounced effect on the endosteal surface. These
findings indicate an age pattern of sensitivity in the bone-forming surfaces -
periosteal vs. endosteal - in response to increases in mechanical stimuli.
The cause of the shift in focus of sensitivity is unknown, but may be related
to changes in hormonal levels and their different effects on the two bone
surfaces (Ruff et al., 1994).
6.2.3 Bilateral asymmetry in humeral loading
Microwear orientation on stone tools (Toth, 1985) and on the anterior
teeth of early hominids (Bermúdez de Castro et al., 1988) indicates that
humans have had right-side dominance of the upper limb for much of the
Pleistocene. These lines of evidence represent indirect means of assessing
and interpreting upper limb use. A more comprehensive understanding is
provided by the study of structural bilateral asymmetry of limb bones, a
subject of interest to anthropologists and anatomists since the mid-
nineteenth century (see Dangerfield, 1994; Fresia et al., 1990; Schaeffer,
1928; Stirland, 1993). Asymmetry is of particular interest to biological
anthropologists, because some degree of left-right side difference has been
documented in a variety of human populations and has played a key role in
discussions of genetic vs. functional explanations of bone size and
morphology. All human populations express higher frequencies of right
than left dominance for external measurements of the humerus, including
length, thus indicating a probable genetic component. The differing
patterns of asymmetry in relation to specific levels and types of activity in
various skeletal samples investigated by biological anthropologists argue
for a functional interpretation for diaphyseal long bone morphology (see
discussions by Martin & Burr, 1989; Trinkaus et al., 1994). Therefore, the
investigation of bilateral asymmetry in humeri from archaeological series
is an important means of reconstructing and interpreting activity in the
past.
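Bilateral asymmetry in a structural property such as J is usually reported as a percentage difference between sides. A sketch, assuming the common (max - min)/min convention; the sample values are hypothetical:

    def percent_asymmetry(right, left, directional=False):
        """Percentage asymmetry between left and right antimeres.

        directional=False: (max - min) / min x 100 (magnitude only)
        directional=True:  (right - left) / left x 100 (sign marks the dominant side)
        """
        if directional:
            return (right - left) / left * 100.0
        lo, hi = sorted((left, right))
        return (hi - lo) / lo * 100.0

    # Hypothetical standardized humeral J values for a right-dominant individual:
    print(percent_asymmetry(right=265.0, left=190.0))   # ~39.5%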
Comparisons of structural properties - areas (CA, MA, TA) and second
moments of area (Ix, Iy, J) - from precontact preagricultural, precontact
agricultural, early contact, and late contact humeri (35% section) from the
Georgia Bight populations provide an important perspective on behavioral
shifts (Fresia et al., 1990; C. S. Larsen et al., unpublished manuscript).
These comparisons reveal a pattern of right-dominant bilateral asymmetry
in this setting, similar to findings reported by other researchers (e.g.,
Ben-Itzhak et al., 1988; Berget & Churchill, 1994; Borgognini Tarli &
Repetto, 1986; Bridges, 1989b; Constandse-Westermann & Newell, 1989,
1990; Hrdlicka, 1932; Ruff & Hayes, 1983b; Ruff & Jones, 1981; Stirland,
1993; Trinkaus et al., 1994). Specific changes in pattern of asymmetry
indicate shifts in the manner in which the upper limbs were used from the
earliest to the latest period. Analysis of left-right torsional strength (J) of
the mid-distal humerus (35% section) shows a general decrease in asym-
metry through the early contact period; asymmetry then increases in the
late contact period.
Comparisons of adult males and females reveal a greater decline in
asymmetry in females than males in the transition from foraging to farming
prior to European contact. This suggests that change in the use of the upper
limbs was more profound for females than males in the shift to agriculture
(and see above). The relatively greater change in females is consistent with
the notion that subsistence activities changed more in women than in men:
women were engaged in labors of food preparation (e.g., maize pounding),
whereas men continued to follow a lifeway similar to that of the earlier
prehistoric period (e.g., hunting). The asymmetry between left and right
sides declines in the precontact agricultural women to virtually nil. This
finding adds confirmation to Bridges' (1991b) hypothesis that American
Southeast females had relatively equal use of left and right arms in maize
pounding and grinding and activities that generally required the simulta-
neous use of both arms.
Comparisons of sexual dimorphism - the per cent side differences
between males and females - show a marked decline in the temporal span,
culminating with the least amount of sexual dimorphism in the early and
late contact periods. This finding lends strong support to the hypothesis
that contact era men and women engaged in physical activities involving
comparable loading modes. The trend of reduction in sexual dimorphism
in torsional strength and other structural properties suggests that the
division of labor between sexes declined dramatically in intensive agricul-
turalists in the contact period. Historical accounts (e.g., Hann, 1986)
indicate that tasks normally performed by women were undertaken by
some men, especially with regard to activities involving the use of the arms.
This convergence of male and female behaviors would contribute to a
reduction in sexual dimorphism in upper limb bilateral asymmetry. Other
historical accounts suggest that male and female activities were different
(Swanton, 1946). The possibility remains, therefore, that the increasing
similarity of mechanical loading is due to other behavioral changes that
coincidentally resulted in comparable cross-sectional geometry.
The reduction in bilateral asymmetry in the Georgia Bight is similar to
the pattern observed in the comparison of prehistoric hunter-gatherers and
agriculturalists from the Pickwick Basin, Alabama (see Bridges, 1989b).
Comparison of external diaphyseal dimensions (data are not available for
left-right asymmetry of geometric properties) reveals that females declined
in asymmetry, which may reflect the use of both arms in the preparation of
maize (Bridges, 1989b). Males also show a reduction in asymmetry of left
and right humeri, but it is slight in comparison with that of females. Over
the course of time, female and male activities in both southeastern U.S.
settings probably became more similar, especially with regard to those
activities involving use of the upper limbs.
Bilateral asymmetry of cross-sectional geometric properties of the
humerus in modern human populations shows a wide range of variation
(Ruff, 1992; Trinkaus et al., 1994). Several modern samples have moderate-
ly high levels of asymmetry (~5-14%). Professional tennis players exhibit
extraordinarily high levels of asymmetry (~28-57%), a pattern that is
similar to that of Pleistocene late archaic hominids (~24-57%) (Trinkaus
et al., 1994). The high level of asymmetry in professional tennis players and
the overall variability documented in human samples attest to the potential
for dramatic change in diaphyseal morphology, depending upon the nature
of upper limb use and function and especially the mechanical loading of the
left vs. the right arms (see also discussions by Ruff et al., 1993; Trinkaus et
al., 1994).
Variability between status groups in bilateral asymmetry may reveal
behavioral differences within ranked societies. The degree of left-right
dominance in external measurements of Mesolithic limb bones from
western Europe shows some difference in work and activity between upper-
and lower-status groups (Constandse-Westermann & Newell, 1989). High-
status females have less right dominance than low-status females. Con-
standse-Westermann & Newell (1989) contend that the greater upper limb
lateralization in low-status women reflects their heavier work demands
than those of high-status women. Higher-status males possess greater right
dominance than lower-status males. Although reasons for the contrasting
pattern in males and females are unclear, the authors speculate that in order
for males to achieve high status they were required to perform more
demanding and more differentiated tasks.
Asymmetry of the lower limb is poorly known in archaeological or other
population settings. Lateralization of lower limb bones has been studied
from California (Ruff & Jones, 1981), Pecos Pueblo (Ruff & Hayes, 1983b),
and various Mesolithic sites in western Europe (Constandse-Westermann
& Newell, 1989, 1990). These investigations reveal that the left side tends to
have slight size dominance and greater mechanical strength, especially in
anteroposterior bending forces. Overall, lower limb asymmetry in structure
and overall size is either considerably less than upper limb asymmetry or
shows no consistent pattern (e.g., Borgognini Tarli & Repetto, 1986;
Constandse-Westermann & Newell, 1989, 1990; Dangerfield, 1994; Ruff &
Hayes, 1983b). The differences in asymmetry in upper and lower limbs in
humans reflect the fact that the upper limbs are used in a wide variety of
nonambulatory activities, whereas lower limbs are used in one super-
function requiring equal use of left and right sides, bipedal
locomotion.
6.2.4 Temporal trends in modern and pre-modern recent humans
Biomechanical approaches to the study of physical activity and behavioral
change add an important dimension to our understanding of adaptive
shifts in the past. In the Pickwick Basin of northwestern Alabama, analysis
of femoral and humeral cross-sectional geometry reveals a number of
differences between earlier Archaic period hunter-gatherers and later
Mississippian period agriculturalists (Bridges, 1991b). Femoral cortical
area and minimum and maximum area moments of inertia increase in the
Mississippian population in comparison with the Archaic population for
both adult males and females, thus indicating greater bone strength in the
agriculturalists than in the foragers. Structural analysis of male humeri
shows that the two temporal series are virtually indistinguishable. Thus, for
males, activity levels increased, but primarily in relation to lower limb
(ambulatory) functions. In females, there were significant increases in
humeral and femoral strength. Bridges suggests that the increase in bone
strength in female femora and humeri reflects a relatively greater range of
activity changes in them than in males. Thus, as suggested by the
osteoarthritis analysis, the shift to food production may have had a
relatively greater impact on women in this setting. Some of these activities
are probably related to the person who was responsible for food process-
ing; increases in female humeral bone strength may be associated with
maize processing. Historic-era Southeastern native women used wooden
mortars and pestles that required very demanding physical labor involving
the upper limbs. One nineteenth century observer equated the task of maize
pounding to blacksmithing (Current-Garcia & Hatfield, 1973).
These findings from northwestern Alabama suggest that the adoption of
an agricultural lifeway involved more strenuous physical activity than in
earlier populations. The decline in osteoarthritis prevalence for these
populations suggested a decrease in mechanical loading (see Chapter 5).
Therefore, these two indicators of mechanical stress seem to yield contra-
dictory results. Bridges (1991b) suggests that osteoarthritis and long bone
geometry are due to different types of activities. For example, citing
findings from sports medicine and other research, she notes that
normal activities - such as running - do not contribute to osteoarthritis
(e.g., Eichner, 1989; Panush & Brown, 1987; and Chapter 8). Less
frequent movements that lead to microtrauma and injury may be
important factors in the explanation of osteoarthritis. It is virtually
impossible to distinguish traumatic from daily 'wear-and-tear' osteoarthri-
tis, thus making it difficult to identify specific causes where behaviors are
unknown, such as in archaeological settings. Skeletal structural change has
been tied to long-term repetitive forces (e.g., Lanyon & Rubin, 1984; Shaw
et al., 1987). Osteoarthritis and diaphyseal cross-sectional geometry should
not necessarily be concordant, because osteoarthritis would be expected to
develop in older adults, whereas diaphyseal remodeling is a lifelong
response to mechanical stimuli (see Ruff, 1992).
Analysis of two diaphyseal sections of the femur (subtrochanteric,
section taken at 80% of bone length measured from the distal end;
midshaft, 50% from the distal end) and one section of the humerus
(mid-distal, 35% from the distal end) from Georgia Bight populations
yields a different pattern of structural change than in Alabama (Larsen &
Ruff, 1991, 1994; Ruff & Larsen, 1990; Ruff et al., 1984). Structural analysis
of precontact hunter-gatherers and precontact agriculturalists shows no
appreciable change in CA, whereas MA and TA decline in both sexes.
These findings indicate little or no change in bone mass, but rather, a tighter
distribution of skeletal tissue about the neutral axis in the later period.
Additionally, second moments of area decline in both males and females
(Figure 6.6). Therefore, in contrast to findings from Alabama, mechanical
demand declined in later prehistory with the adoption of a lifeway
involving maize agriculture.
The results reported from two regions of the southeastern United States
are different, especially in the comparison of second moments of area.
These findings are not conflicting, especially in view of the fact that the
Alabama and Georgia Bight populations represent very different adapta-
tional and behavioral circumstances associated with shifts in subsistence
technology and economic focus. Both settings saw a change from foraging
to maize farming. However, the adoption of maize was only one factor of
many involved in adaptive shifts taking place in later prehistory, especially
when Alabama was compared with the Georgia Bight. Although these two
regions shared a number of resources in common (e.g., terrestrial animals),
the continued dependence on marine and estuarine resources even during
[Figure 6.6: two bar charts, (a) subtrochanteric and (b) midshaft femur, showing percentage decline in CA, MA, TA, Imax, Imin, and J for preagricultural vs. agricultural males and females.]
Figure 6.6. Percentage decline in femoral sections, subtrochanteric (a) and
midshaft (b), for prehistoric Georgia coastal hunter-gatherers and
agriculturalists. Representative sections with more tightly distributed bone in
the agriculturalists relative to the hunter-gatherers are shown in the cross
sections at the right of the graphs. (Adapted from Ruff et al., 1984;
reproduced with permission of John Wiley & Sons, Inc.)
the agricultural period on the Atlantic coast contributed to a markedly
different mechanical environment in comparison with the terrestrial
Alabama populations. These differences also underscore the statement
made in Chapter 1 that terms like 'foragers' and 'farmers' are
oversimplifications in these types of broad comparisons. The comparisons
are not inappropriate, but it is necessary that other factors be considered,
such as dietary circumstances that are unique to specific regions. In sum,
then, it should come as no surprise that two different human groups - even
when drawn from the same general region - yield different skeletal
structural responses to adaptive shifts.
The study of structural morphology in Georgia Bight limb bones has
been expanded to include the descendant, contact era populations (Lar-
sen & Ruff, 1994; Ruff & Larsen, 1990; C. S. Larsen et al., unpublished
manuscript). In addition to the late contact series from Amelia Island,
Florida, an intermediary early contact sample from Mission Santa
Catalina de Guale (AD 1565-1680) on St. Catherines Island, Georgia, has
been analyzed. Cross-sectional geometric analysis reveals that, beginning
with the early contact period on St. Catherines Island, there is a reversal
of the decrease in bone strength that had been documented in the
prehistoric populations from the region. Femora and humeri show
increases in MA and TA; CA is unchanged in the early and late contact
periods. Mirroring the changes in MA and TA are successive increases in
second moments of area in the early and late contact periods. This trend
occurs uniformly for female geometric properties. In males, the trend in
the early and late contact periods is less straightforward. Geometric
properties increase in the early contact period, but decline slightly (e.g.,
subtrochanteric Imax) or remain the same (midshaft Ix) in the late contact
period.
In males, humeral areas and second moments of area increase successive-
ly in the early and late contact periods. In females, there is a continued
decline in humeral values in the early contact period, but this reverses in the
late contact period. Thus, both females and males in the late contact period
experience a marked increase in humeral strength. In general, the changes
observed in bone strength in the contact period indicate that native
populations experienced increases in mechanical loads, probably due to
increased manual labor and physical demands placed on them by the
Spanish (and see Chapter 5).
Analysis of cross-sectional 'shape' of long bones provides insight into
variability in bending forces with regard to relative loading differences
between planes. An Ix/Iy ratio close to 1.0 reflects a nearly circular shape,
and a ratio deviating from 1.0 represents an ovoid shape. This ratio for
the femur midshaft assesses the distribution of bone in the mediolateral
and anteroposterior planes; an ovoid-shaped midshaft in the anteropos-
terior plane (i.e., ratio > 1) represents relatively greater bone strength
and functional demand in the anteroposterior direction than in the
mediolateral direction. Ruff (1987) has shown a temporal decline in the
ratio in recent human groups, which he interpreted to reflect a general
reduction in the amount of anteroposterior bending forces as populations
have become increasingly sedentary (cf. Lovejoy et al., 1976). Therefore,
at least with respect to Holocene populations, the Ix/Iy ratio represents a
mobility index.
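Computationally, the index is simply the ratio of the two second moments defined earlier. The sketch below uses hypothetical values, and the verbal labels paraphrase the interpretation given in the text rather than formal cutoffs:

    def mobility_index(Ix, Iy):
        """Ruff's femoral midshaft shape ratio (Ix/Iy)."""
        return Ix / Iy

    for label, Ix, Iy in [('mobile forager', 420.0, 300.0),
                          ('sedentary agriculturalist', 300.0, 305.0)]:
        r = mobility_index(Ix, Iy)
        shape = 'AP-ovoid (ratio > 1)' if r > 1.0 else 'near-circular'
        print(f'{label}: Ix/Iy = {r:.2f} ({shape})')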
In the Georgia Bight, early contact femora show a general reduction in
the femoral midshaft Ix/Iy ratio relative to earlier prehistoric populations.
Historic sources indicate that populations during the historic period
became generally less mobile as they needed to or were coerced to live in
and around mission centers (e.g., Santa Catalina de Guale). This skeletal
indicator of mobility is, therefore, in accord with other sources describing
population sedentism during the historic period. These structural modifi-
cations suggest that mission Indians in Spanish Florida worked harder,
but within the confines of the mission setting (see Larsen & Ruff, 1994).
Because nutritional quality also influences skeletal size to a degree (see
Chapter 2), the possibility remains that the mechanical environment may
not be the sole factor that explains the structural modifications
documented in the Georgia Bight. Standardized values of CA show very
little change in comparison of the four periods, thus suggesting that bone
mass remains essentially unchanged through time. In contrast, the dis-
tribution of bone tissue alters dramatically, which is consistent with
skeletal adaptations to localized (mechanical) factors rather than systemic
(nutritional) stress (see Larsen & Ruff, 1994; Ruff et al., 1984).
Structural analysis of other populations undergoing the transition to
agriculture shows changes in long bone diaphyses that are broadly similar
to patterns observed in the Georgia Bight. In the American Southwest,
femoral midshaft (50%) and subtrochanteric (80%) sections were ana-
lyzed for three prehistoric temporal periods in western New Mexico -
Early Village (AD 500-1150), Abandonment (AD 1150-1300), and Ag-
gregated Village (AD 1150-1300) (Brock & Ruff, 1988). Early Village
populations were highly mobile, subsisting primarily on nondomesticated
plants and animals; Abandonment populations experienced a transition
to new adaptive patterns and new circumstances, resulting in a state of
adaptive 'disequilibrium'; and Aggregated Village populations were sed-
entary maize agriculturalists living in large villages. Second moments of
area (Imax, Imin, J) show a general increase in the Abandonment period
and either remain constant or decline in the Aggregated Village period. In
males, values for the femoral midshaft show a clear pattern of decline
from earliest to latest periods. These findings suggest an overall decrease
in mechanical demand with agricultural intensification and sedentism
during the Aggregated Village period. The ratio of Ix/Iy in the femur
midshaft shows a decline in both sexes, which also suggests a reduction in
bending stresses - particularly in the anteroposterior plane - as popula-
tions became less mobile during later prehistory. These observations are
consistent with archaeological reconstructions of increasing sedentism
with the shift to agriculture in the American Southwest (see Brock &
Ruff, 1988).
Behavioral reconstructions of premodern human populations based on
structural analysis are not usually as clearly defined as those of the groups
discussed above, in large part owing to the small numbers of samples
available for study. Several key analyses serve to provide a broader
understanding of premodern human skeletal structural variation in relation
to activity and behavior. Geometric analysis of a limited sample of Neandertal tibiae -
[Figure 6.7: standardized cortical area (CA) and second moment of area for the femoral midshaft plotted against years BP on a logarithmic scale, both declining toward the present.]
Figure 6.7. Temporal reduction in femoral midshaft robusticity in Homo.
SMA, second moment of area. (Adapted from Ruff et al., 1993; reproduced
with permission of authors and John Wiley & Sons, Inc.)
from Shanidar, Amud, and La Chapelle-aux-Saints - reveals that bending
and torsional strength is on the order of twice that observed in modern
humans from the late prehistoric Libben site, Ohio (Lovejoy & Trinkaus,
1980). Comprehensive study of fossil hominid and modern human groups
encompassing the entire record of the evolution of the genus Homo shows a
temporal change in femoral midshaft strength (Ruff et al., 1993). There is
an exponentially increasing temporal decline in axial (represented by CA)
and bending/torsional (represented by J) strength from premodern Homo
in the early Pleistocene to modern Homo sapiens in the late Holocene
(Figure 6.7). The humerus shows a similar pattern of reduction in bone
strength (and see Churchill, 1994; Fischman, 1995). Overall, there is a
marked reduction in skeletal robusticity.
Articular dimensions have a very different pattern from diaphyseal
structure in the temporal sequence from early Homo to modern humans. In
marked contrast to the diaphysis, femoral head size remains the same in
proportion to body mass throughout the temporal sequence. These
findings lend support to the hypothesis that joint morphology is more
genetically canalized or less developmentally plastic than long bone
diaphysis morphology. In adults, articular joint size and shape does not
alter in response to mechanical loading, unlike diaphyseal structure (see
Rafferty & Ruff, 1994; Ruff & Runestad, 1992; Ruff et al., 1991; Ruff et al.,
1994). Therefore, these two aspects of skeletal robusticity - joint size and
diaphyseal strength - are not tightly linked. Changes in articular loading in
adults can have profound effects on subchondral and trabecular bone
structure organization underlying the joint surface (Pauwels, 1976; Poss,
1984; Radin et al., 1982, 1984; Rafferty & Ruff, 1994; Ruff, 1992), but these
changes are not manifested in external articular size. Thus, internal and
external joint structure are independent and represent contrasting expres-
sions of the mechanical environment.
The lack of influence of mechanical loading on joint size has important
implications for functional studies of archaeological remains, especially
where joint size differences are used to infer mechanical differences between
groups. For example, an increase in femoral head size in a comparison of
prehistoric hunter-gatherers and agriculturalists in the Caddo region of the
American Southeast was interpreted to reflect an increase in protein
consumption and increase in mechanical loading (Rose et al., 1984). These
new findings comparing articular and diaphyseal structure indicate that
mechanical loading as a causal factor in explaining temporal change in
joint size is highly unlikely.
6.3 Histomorphometric biomechanical adaptation
Histological research on archaeological human remains focusses primarily
on the documentation of systemic disturbances in interpreting remodeling
patterns (e.g., nutritional deprivation; see Chapter 2 and discussion by
Stout, 1989). Like the overall size and morphology of skeletal elements,
cortical remodeling at the microscopic level is also influenced by the
mechanical environment (Bouvier & Hylander, 1981; Stout, 1982). There-
fore, the study of histological structures has considerable potential for
elucidating behavioral adaptation and activity in past human groups.
Some researchers suggest that the high levels of robusticity in Pleistocene
hominid postcrania, especially in comparison of Neandertals and other
archaic Homo sapiens with modern humans, may be due to genetic or
endocrinological differences in bone remodeling rather than differences in
mechanical loading (see Abbott et al., 1996). Humerus asymmetry and
experimental evidence suggest that intrinsic factors are unlikely (see above;
Trinkaus et al., 1994). Microscopic analysis of Pleistocene (Neandertal,
early modern) and late Holocene hominid (Pecos Pueblo) femora reveals
some key differences in histological and remodeling properties based on
various measured and derived histomorphometric parameters (e.g., osteon
area, per cent osteonal bone, osteon population density, osteon wall
thickness) (Abbott et al., 1996). Pleistocene hominids have smaller osteons,
relatively few intact secondary osteons, reduced osteon population density,
and greater porosity than modern humans. Bone formation rates in
Pleistocene hominids are only about one-third those of the Pecos Pueblo
Indians. Abbott and coworkers (1996) argue that these elevated levels of
mechanical demand stimulate periosteal apposition. Therefore, the high
degree of robusticity in Pleistocene hominids is likely to be due to strenuous
behavioral regimens rather than to the inherent physiological characteris-
tics of earlier humans (Abbott et al., 1996).
The comparison of histomorphometric parameters in femora of Native
American (Pecos Pueblo) and recent comparative populations (twentieth
century Euroamericans and Europeans) reveals important patterns of
variation in modern humans (Burr et al., 1990). Pecos adult females have
small Haversian canals, and males have high osteon density. Burr and
coworkers (1990) speculate that these differences reflect a more active
lifestyle in Pecos Indians than in other modern humans, and hence, a
greater volume of bone formed per unit area. This interpretation is in
accord with the findings based on structural and geometric analysis of
femur cross sections in Pecos Pueblo adults (see Ruff, 1991; Ruff & Hayes,
1983a, 1983b). This reasoning is in line with the very significant positive
correlations found between osteon density and anteroposterior and
mediolateral second and polar moments of inertia in a Euroamerican
cadaver sample consisting of older adults (> 50 years): individuals with
high levels of mechanical loadings have high osteonal densities (Walker et
al., 1994). Similarly, comparisons of femoral osteon density between adult
males and females from late Christian period Nubia (AD 550-1450) show
that males have more osteons than females, which may represent greater
bone turnover from higher activity regimens in males than in females
(Mulhern, 1996). As with experimental evidence (e.g., Lanyon & Baggott,
1976), these studies indicate that activity has a strong influence on
histological variation in cortical bone.
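Of the histomorphometric parameters mentioned above, osteon population density is the simplest to compute. A sketch, assuming the common definition of intact plus fragmentary secondary osteons per unit area of examined cortex; the counts and field size are hypothetical:

    def osteon_population_density(n_intact, n_fragmentary, field_area_mm2):
        """Secondary osteons (intact + fragmentary) per mm^2 of cortex."""
        return (n_intact + n_fragmentary) / field_area_mm2

    print(osteon_population_density(38, 14, 2.5))   # 20.8 osteons per mm^2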
6.4 Behavioral inference from whole bone measurements
The technology for determining cross-sectional geometric properties may
not be available to all researchers. If it is not possible to perform a
structural analysis, whole (external) bone measurements - such as
mediolateral and anteroposterior dimensions of long bone diaphyses - can
provide a general picture of robusticity from which behavioral inferences
can be drawn.
6.4.1 External measurements and shapes
Skeletons displaying relatively large external long bone dimensions tend to
have correspondingly high cross-sectional geometric values, especially
second moments of area. For example, femoral midshaft and subtrochan-
teric anteroposterior and mediolateral dimensions show decline in com-
paring prehistoric foragers and later farmers in the Georgia Bight, which
Larsen (1981) interpreted to represent decline in mechanical loading. This
conclusion is supported by analysis of second moments of area (Ruff et al.,
1984; cf. Bridges, 1989b). Because the requirements for analysis are far less
stringent for external dimensions than in geometric analysis - the skeletal
element does not have to have intact ends or well preserved periosteal or
endosteal surfaces - a larger comparative data base is available for analysis.
Comparison of hundreds (n = 524) of Archaic period (6000-1000 BC) and
Mississippian period (AD 1200-1600) femora from Tennessee revealed no
evidence of change in midshaft robusticity as it is expressed in mediolateral
and anteroposterior breadths or in the femoral robusticity index ([midshaft
breadthml + midshaft breadthap] ÷ femur length) (Boyd & Boyd, 1989).
These findings suggest that biomechanical geometry did not change in the
transition from hunting and gathering to agriculture in this area of the
southeastern United States (cf. Bridges, 1989b). The very large published
data base on external dimensions allows a more complete assessment of
variability in human populations than is possible from cross-sectional
geometric analysis. Variation showing increases and decreases in robustic-
ity in the Old World similarly reveals the different responses to dietary
shifts, especially with regard to the shift from foraging to farming or
agropastoralism (cf. Jacobs, 1993; Smith, Bloom et al., 1984).
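The Boyd & Boyd index quoted above can be computed from external measurements alone. A sketch with hypothetical dimensions; the formula follows the text, and the scaling by 100 sometimes seen in published tables is omitted:

    def femoral_robusticity_index(breadth_ml, breadth_ap, femur_length):
        """(midshaft ML breadth + midshaft AP breadth) / femur length."""
        return (breadth_ml + breadth_ap) / femur_length

    # Hypothetical femur: 27 mm ML and 29 mm AP midshaft breadths, 445 mm length
    print(femoral_robusticity_index(27.0, 29.0, 445.0))   # ~0.126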
External bone dimensions and comparisons of left and right humeri can
be used to infer differential use of the arms, albeit with less precision than is
available from geometric analysis (e.g., Borgognini Tarli & Repetto, 1986;
Stirland, 1993). Comparison of paired male humeri from two Medieval
British skeletal series, Norwich and Henry VIII's flagship, the Mary Rose,
provides a perspective on upper limb use (Stirland, 1993). Norwich males
possess marked humeral asymmetry with clear right dominance, whereas
Mary Rose males exhibit very little evidence of asymmetry. Mary Rose
males exhibit a pronounced hypertrophy of the greater tubercle on the left
humerus. These findings suggest that in contrast to Norwich males the
Mary Rose males subjected their left and right arms to relatively equal
mechanical loads (Stirland, 1993). This finding is compatible with the
historical records indicating that many of the deceased from the Mary Rose
were professional archers, an activity requiring great strength in both arms.
That higher-status individuals may be subject to less workload than
lower-status individuals has been tested by comparison of external dia-
physeal skeletal robusticity in the Oleni' ostrov Russian Mesolithic series
(Jacobs, 1995). In this setting, adult males from the artifactually 'richest'
graves show the least amount of humeral and femoral robusticity.
Wealthier, high-status individuals are less robust than poorer individuals.
This finding suggests that accumulation of wealth in this ranked society
was not achieved by having great musculoskeletal strength.
External diaphyseal shape differences between human groups have also
been documented. Various researchers report a temporal increase in
circularity of the femur (subtrochanteric and midshaft regions) or tibia
(midshaft) or both skeletal elements with the transition to sedentary
lifeways, especially in settings involving the adoption of agriculture in the
American Southwest (Bennett, 1973), Southeast (Hoyme & Bass, 1962;
Larsen, 1982, 1984), and Midwest (Perzigian et al., 1984). Increasing
circularity of lower limb bones is apparently a worldwide trend since the
late Pleistocene (e.g., Anderson, 1967; Brothwell, 1981; Buxton, 1938;
Elliot Smith & Wood Jones, 1910; K. Kennedy, 1989; Kimura &
Takahashi, 1982; Larsen, 1982; Lovejoy et al., 1976; Ruff et al., 1984;
Townsley, 1946). With regard to the femoral and tibial midshaft diaphyses,
the respective pilasteric and platycnemic indexes (both computed by the
formula: [breadthml ÷ breadthap] × 100) tend to be lower in more mechan-
ically stressed populations than in less mechanically stressed populations.
More stressed populations possess greater mediolateral flattening than less
stressed populations, which may indicate relatively greater anteroposterior
bending forces in the upper and lower legs (see Lovejoy et al., 1976; Ruff &
Hayes, 1983a; Ruff et al., 1984). For the pilasteric index, Ruff (1987, 1992)
showed a decline in degree of sexual dimorphism in the comparison of
prehistoric hunter-gatherers, prehistoric agriculturalists, and industrial
Western populations which closely follows the patterns observed for the
geometric Ix/Iy ratio in these same populations (see above). Ovoid or
flattened cross sections of long bones were previously interpreted to reflect
responses to suboptimal nutrition (e.g., Angel, 1979; Buxton, 1938; Hoyme
& Bass, 1962). Analysis of bone areas and second moments of area does not
support this interpretation.
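Both indexes reduce to the same ratio of external breadths, as given in the text. A sketch with hypothetical measurements:

    def shaft_shape_index(breadth_ml, breadth_ap):
        """Pilasteric (femur midshaft) or platycnemic (tibia midshaft) index:
        (ML breadth / AP breadth) x 100, per the formula quoted in the text."""
        return breadth_ml / breadth_ap * 100.0

    # A mechanically stressed individual with marked mediolateral flattening:
    print(shaft_shape_index(22.0, 30.0))   # ~73.3 - low index
    # A less stressed individual with a near-circular shaft:
    print(shaft_shape_index(28.0, 29.0))   # ~96.6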
A number of researchers invoked functional arguments to explain the
flattening of long bone diaphyses, such as the effects of specific muscles or
muscle groups on diaphyseal morphology (e.g., Angel, 1971; Chesterman,
1983; Fowke, 1902). For example, Angel (1971) related anteroposterior
flattening of the femoral proximal subtrochanteric diaphysis in the pre-
Classical Lerna, Greece, to greater stresses exerted by 'gluteal and other
hip-balancing' muscles. Similarly, he indicated that mediolateral flattening
of the femoral midshaft was due to actions of the quadriceps muscles.
Although these muscle groups contribute to bending and torsional loading,
cross-sectional geometric analysis indicates that shaft shape is not primar-
ily determined by actions of specific muscles.
Although externally defined long bone diaphyseal size and shape based
on linear dimensions provide insight into biomechanical function, it is
important to emphasize that cross-sectional geometry is a far more precise
measure, especially because it gives details on the distribution of skeletal
tissue in a section.
This perspective also demonstrates the pitfalls of interpreting function
from the measurement of cortical bone thickness alone or other indicators of
bone mass. Relative thickness of cortical bone quantified by various
versions of the per cent cortical area index (%CA) has been widely used as
an indicator of nutritional health; for example, low values of %CA in
relation to some standard are interpreted to represent a deficiency in
nutrition status (e.g., Brown, 1988; Cook, 1984; Storey, 1992a; and see
review by Pfeiffer & Lazenby, 1994) and imply negative effects on bone
structure. Some researchers relied on cortical thickness or mass as an
indicator of functional demand alone (e.g., Hatch et al., 1983; Smith,
Bloom et al., 1984). Per cent CA could reflect several things, including a
relatively small medullary space, or an expansion of the periosteum, or
even a combination of both medullary contraction and periosteal expan-
sion (see discussions by Ruff & Larsen, 1990; Ruff et al., 1994). Relatively
thick cortical bone in a section could even be associated with relatively
reduced bone strength (see Figure 6.8). Ruff and coworkers (1984) found
that late prehistoric populations from the Georgia coast have among some
of the highest values of %CA in modern humans, yet the cortical tissue is
tightly constricted about the central axis, thus resulting in relatively low
values of bone strength. For this reason, mechanical analysis and behav-
ioral inference should be determined from both bone areas and second
moments of area (see Lovejoy et al., 1976; Ruff & Larsen, 1990; Ruff et al.,
1994).
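The decoupling of %CA from bending strength illustrated in Figure 6.8 (reproduced below) can be verified numerically with a simple circular-annulus model. The radii here are invented for illustration and are not the dimensions behind the published figure, but they reproduce its qualitative point: simultaneous periosteal and medullary expansion thins the cortex and lowers %CA while cortical area rises slightly and the second moment rises dramatically.

    import math

    def ring(R, r):
        """Cortical area, %CA, second moment, and cortical thickness
        for a circular annulus with outer radius R and inner radius r."""
        CA = math.pi * (R**2 - r**2)
        pct_CA = 100.0 * (R**2 - r**2) / R**2
        I = math.pi / 4.0 * (R**4 - r**4)
        return CA, pct_CA, I, R - r

    before = ring(R=10.00, r=5.00)     # thick-walled section
    after = ring(R=14.21, r=10.86)     # expanded on both surfaces

    for name, b, a in zip(('CA', '%CA', 'I', 'cortical thickness'), before, after):
        print(f'{name}: {100.0 * (a - b) / b:+.0f}% change')
    # -> CA about +12%, %CA about -45%, I nearly tripled, thickness about -33%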
6.4.2 Femoral neck-shaft angle
The femoral neck-shaft angle is a measurement of the relative degree of
more medial vs. more proximal orientation of the femur neck to the
diaphysis. In modern adults, the range is variable, from 110° to 150°
(Trinkaus, 1993). Active juveniles show a greater decrease in the angle
during the years of growth and development than inactive juveniles
[Figure 6.8 annotations: Δ % cortical thickness: -33%; Δ % cortical area: -33%; Δ cortical area: +12%; Δ second moments of area: +100%.]
Figure 6.8. Effects of subperiosteal and medullary expansion on cross-sectional
geometry. The section on the right has reduced per cent cortical thickness and
per cent cortical area compared to the section on the left. Cortical area
(reflecting axial strength) increases slightly and second moments of area
(bending strength) increase dramatically. (From Ruff, 1992; reproduced with
permission of author and John Wiley & Sons, Inc.)
(Houston & Zaleski, 1967). The orientation of the femoral neck relative to
the diaphysis appears to be responsive to the combined forces of body
weight, muscle forces, and activity generally. Additionally, the smaller
angle provides greater hip joint mechanical stability under increased
mechanical loading (Trinkaus, 1993).
Comparisons of a range of human groups - foragers, agriculturalists,
and urban dwellers - support the mechanical hypothesis for variation in the
femoral neck-shaft angle (Trinkaus, 1993). Comparisons indicate that the
foragers have the lowest femoral neck-shaft angles (mean = 125.7°), urban
dwellers have the highest angles (mean = 132.3°), and agriculturalists have
values between those of foragers and urban dwellers (mean = 128.2°)
(means calculated from Trinkaus, 1993: Table 4). This pattern of increasing
neck-shaft angles from mobile foragers to sedentary urban groups closely
parallels the general trends of decreasing robusticity based on whole bone
measurements and diaphyseal structure (Ruff et al., 1993), albeit with a
wide range of overlapping variability between groups (see Trinkaus, 1993).
6.5 Summary and conclusions
Limb bone diaphyses are highly responsive to the mechanical environment.
Structural and histomorphometric analysis from a variety of settings
underscores the extraordinary developmental plasticity of bone tissue
throughout the lifespan, thus providing an important record of magnitude
and types of mechanical loading. Differences between human populations
provide insight into behavioral patterns in the past. In contrast to
diaphyses, articular size and morphology appear to be resistant to
mechanical demand, which reflects the strong genetic canalization of the
articular joints.
Because humans do not use their upper limbs in ambulatory activities,
the influence of body weight is very minimal in the overall determination of
size and morphology of arm skeletal elements (e.g., humerus). The study of
upper limb bilateral asymmetries in various human populations allows
inferences to be made about loading levels and patterns in relation to
different lifestyles and mechanical functions.
Measurement of areas and second moments of area in a range of human
populations, ancient and modern, reveals a general trend for decline in
robusticity. Although this trend is especially pronounced in the transition
from archaic to modern Homo sapiens in the late Pleistocene (Ruff et al.,
1993), it has continued throughout the Holocene. The reduction during the
Holocene is probably tied to increasing sedentism associated with plant
domestication (Larsen, 1995). Some modern human populations are quite
robust (e.g., early modern Europeans, Great Basin Amerindians), which
may be linked with living in marginal environments and the great physical
effort in the food quest in these types of settings.
External shaft dimensions, measures of bone mass or volume (e.g.,
%CA), and femoral neck-shaft angles provide some insight into activity
patterns, but precision in behavioral interpretation is dependent upon
analysis of skeletal tissue distribution via cross-sectional geometry. Exter-
nal dimensions are limited, in that they do not take into account the
distribution of bone in cross section. The relative degree of flattening of
long bone diaphyses, especially in the proximal and midshaft femur and
midshaft tibia, is related to type and level of mechanical loading and not to
nutritional factors or the actions of specific muscle groups.
7 Masticatory and nonmasticatory
functions: craniofacial adaptation
7.1 Introduction
The influence of environment and behavior on skull morphology was
discussed quite early. In the fifth century BC, Herodotus remarked on
apparent differences in cranial robusticity between Persians and Egyptians:
'The skulls of Persians are so weak that if you so much as throw a pebble at
one of them, you will pierce it; but the Egyptian skulls are so strong that a
blow with a large stone will hardly break them.' He interpreted these
differences as being related to the lifelong exposure of the head to the sun
and increased cranial thickening as a result in Egyptians but not in
Persians.
The influence of environment and behavior, no matter how specious
the interpretation, has been only minimally considered in discussions of
cranial morphology in archaeological remains since Herodotus drew the
above conclusions about cranial robusticity. Beginning in the eighteenth
century, osteologists relied on craniofacial variation for determining
population history and classification, with little attention given to the
biological significance of observed patterns (see Armelagos, 1968; Ar-
melagos et al., 1982; Carlson, 1976a, 1976b; Carlson & Van Gerven,
1977, 1979). As in investigations of long bone morphology, there has
been a gradual reorientation from typological/historical to processual
analyses. This new emphasis focusses on underlying processes that influ-
ence cranial morphology, revealing the adaptive and behavioral signifi-
cance of variation.
Given the strong influence of masticatory behavior and the role in
biomechanics of the skull and craniofacial adaptation generally (see
McDevitt, 1989 for overview of masticatory function), this chapter
emphasizes the masticatory roles of the jaws and teeth in interpreting
craniofacial variation. Additional consideration is devoted to nonmas-
ticatory behaviors, especially as they are interpreted from the dentition.
7.2 Cranial form and function
7.2.1 Determinants of form
Cranial form in the growing child and the maturing adult is determined by
a complex interaction of intrinsic (genetic) and extrinsic (environmental)
factors. The overall form is principally a product of natural selection (cf.
Herring, 1993; Maynard Smith & Savage, 1959). Animal heritability
studies (Atchley, 1993) and experimental and observational studies on
animals and humans (Herring, 1993; Kiliaridis, 1995) demonstrate the
considerable influence of environment, especially in relation to the
cumulative effects of mastication and mechanical loading of the face and jaws.
Experimental studies involving extirpations of masticatory muscles in
laboratory animals show associated craniofacial skeletal modifications,
especially with regard to a reduction in size and robusticity (e.g., Avis,
1959, 1961; Horowitz & Shapiro, 1951, 1955; Moore, 1967, 1973; Pratt,
1943; Schumacher et al., 1979; Washburn, 1947a, 1947b). Craniofacial
skeletons of animals fed soft foods tend to be smaller and less robust than
animals fed hard foods (e.g., Beecher & Corruccini, 1981, 1983; Bouvier &
Hylander, 1981, 1982, 1984; Bouvier & Zimny, 1987; Corruccini &
Beecher, 1982, 1984; Hinton, 1990; Moore, 1965; Tuominen et al., 1993;
Watt & Williams, 1951; Whitley et al., 1966; and see reviews by Herring,
1993; Kiliaridis, 1995). The profound effects of alteration of masticatory
loading are also demonstrated in the experimental transpositions of
masseter and temporalis muscles in laboratory monkeys (Hohl, 1983). The
anterior relocation of these muscles leads to a number of changes, including
superior facial tilting. These extirpation and translocation studies show
that alterations in mechanical loading produce shifts in masticatory
behavior that result in distinctive craniofacial morphological changes.
7.2.2 Temporal trends in human populations
Contrary to the assertion that human head form is stable and highly
heritable (e.g., Dixon, 1923; Neumann, 1952; and see Gould, 1996; Marks,
1995), diachronic population studies revea! a high degree of plasticity.
Franz Boas demonstrated that head shape- based on a ratio ofhead length
to breadth (cephalic or cranial index)- of American-born immigrants was
appreciably different from that of their European foreign-born parents
(e.g., Boas, 1912, 1916; also Hrdlicka, cited in Boas, 1916:716); On the basis
of this observation of plasticity, Boas was strongly opposed to the use of
cranial form for tracing population history and linking past with living groups. Rather, he argued that 'the anatomical forms of the present population and of ancient skeletons do not allow us to draw conclusions regarding nationality of the ancient inhabitants' (Boas, 1902:445). The high degree of developmental plasticity in twentieth century populations was further confirmed by Shapiro and Hulse's comparisons of Japanese born in Hawaii and Japanese immigrants to Hawaii (Shapiro, 1939). The differences between the two were pronounced; the longer the time the immigrant population had been living in Hawaii, the greater the differences with the ancestral population still living in Japan.
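For reference, the cephalic (or cranial) index invoked in these studies is a simple percentage ratio; the worked example below uses hypothetical values, and conventional category cut-offs vary somewhat among authors:

\[
\text{cephalic index} \;=\; \frac{\text{maximum head breadth}}{\text{maximum head length}} \times 100,
\qquad \text{e.g.,}\quad \frac{145\ \text{mm}}{185\ \text{mm}} \times 100 \;\approx\; 78.4 .
\]

Crania with indices below roughly 75 are conventionally termed dolichocephalic ('long-headed') and those above roughly 80 brachycephalic ('short-headed'), the categories that figure in the replacement debates reviewed below.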
An independent approach to understanding the plasticity of cranial shape involved efforts to document the influence of culture and behavior in past populations. In sharp contrast to ancient British populations, Keith observed that in modern Britons 'many persons have small, contracted palates ... Their noses are narrow; so are their faces' (1950:402). Late Celtic faces became smaller, reflecting in part the 'change in dietetic which has occurred since the early years of the Christian era, cooked food and soft cereals replacing tough meats and imperfectly ground corns' (Keith, 1916:198). Keith (1916) noted concomitant changes in the occlusal surfaces of teeth, especially with regard to a reduction in tooth wear in later populations.
Following Keith's inchoate attempts at relating temporal trends in craniofacial morphology to shifts in masticatory behavior and diet, various researchers documented other trends in archaeological populations, both regionally and globally. Weidenreich (1945) recognized the inappropriateness of using cranial shape for identifying racial groups, instead observing a trend in human evolution - and especially during the Holocene - toward increasingly shorter crania, a process he called brachycephalization. Most workers argued that long-headed ('dolichocephalic') populations had been replaced by alien short-headed ('brachycephalic') populations, thus explaining the trend (e.g., Retzius, 1900; and review by Weidenreich, 1945). Contrary to this consensus, Weidenreich (1945) forcefully argued against invasion and replacement models by showing that the trend of cranial vault shortening was widespread, taking place in earlier populations throughout Europe (see also Sokal & Uytterschaut, 1987; Sokal et al., 1987), the Middle East, South and Central Asia, and in more recent settings (and see Vladescu, 1992).
As with other regions of the world, studies of native New World groups in the first half of this century emphasized diffusionistic interpretations of cranial shape variation, especially arguing that earlier long-headed 'dolichocephals' were replaced by later 'brachycephals' (e.g., Dixon, 1923;
Hooton, 1930, 1933; Hrdlicka, 1922a; Hulse, 1941; Newman & Snow,
1942).
Coinciding with and following the publication of Weidenreich's (1945) classic article, other researchers began to notice a temporal change in cranial form, especially reductions in robusticity and/or increasing vault roundness, in comparisons of earlier and later prehistoric North American Indians (e.g., Anderson, 1967; Boyd, 1988; El-Najjar, 1981; Guagliardo, 1982b; Hoyme & Bass, 1962; Ivanhoe, 1995; Newman, 1962; Newman & Snow, 1942; Steele & Powell, 1992; Webb & Snow, 1945) and elsewhere (e.g., Abdushelishvili, 1984; Henke, 1984; Nakahashi, 1993; Newman, 1951; Rightmire, 1984; Rosing & Schwidetzky, 1984; Smith, 1976; Smith, Bar-Yosef et al., 1984; Suzuki, 1969; Walimbe & Gambhir, 1994; Walimbe & Kulkarni, 1993; Wu & Zhang, 1985; and references cited below). In certain settings - especially in North America - cranial shortening can also be attributed to artificial deformation in some cultures, a practice largely limited to late prehistoric populations. In some cases, changes consistent with a general pattern of gracilization are also present in areas of the skull not affected by vault deformation (faces and jaws; e.g., Larsen, 1982).
Thus, increasing evidence indicates that the worldwide trend of cranial shortening and gracilization is much better understood in relation to masticatory, dietary, and technological changes, especially those associated with the shift from foraging to food production and the consumption of softer foods by later prehistoric populations. These changes in subsistence practices and their influence on craniofacial anatomy have been investigated in a number of regions, including the Nile Valley, central Europe, and the American Eastern Woodlands.
The Nile Valley
Beginning in the nineteenth century, various workers speculated on the origins of human groups occupying the region (e.g., Elliot Smith, 1910; Morant, 1925; Morton, 1844). Following Morton's (1844) highly influential study of archaeological crania from Egypt and Nubia, the prevailing notion was that two biologically distinct groups occupied the Nile Valley in temporal succession. In Lower Nubia, Morant (1925) identified an earlier 'Upper Nile type,' with predominantly 'Negroid' features, and a later 'Lower Nile type', which lacked 'Negroid' features. The changes were viewed in a diffusionistic paradigm: simply, the disappearance of 'Negroid' features resulted from an invasion and subsequent replacement by alien 'Caucasoid' (Egyptian) peoples from the north (see Calcagno, 1986a,
1986b; Carlson, 1976a; Carlson & Van Gerven, 1977, 1979; Van Gerven et al., 1973, 1977).
Recent analyses of crania and dentitions from Lower Nubia indicate that the evidence for the diffusionist model of biological change is less than compelling. Independent analyses of skeletal and dental discrete and metric variables and other lines of evidence suggest that the earlier and later Nubian populations represent a biological continuum with no invasion by nonindigenous populations (e.g., Batrawi, 1946; Berry & Berry, 1972; Calcagno, 1986a, 1986b; Franceschi et al., 1994; Greene, 1972; Mukherjee et al., 1955; Nielsen, 1970; Van Gerven et al., 1977). Therefore, the differences in cranial morphology between earlier and later populations observed by Elliot Smith & Wood Jones (1910), Morant (1925), and others are best understood in relation to factors not involving population replacement.
For a better understanding of these factors, especially those related to dietary and technological change, Carlson and Van Gerven and their coworkers (Armelagos et al., 1984; Carlson, 1976a, 1976b; Carlson & Van Gerven, 1977, 1979; Hinton & Carlson, 1979; Van Gerven et al., 1973, 1977) compared craniofacial morphology in a Nubian-based temporal sequence, including foragers from the Mesolithic (ca. 12000 BP), initial agriculturalists from the combined A- and C-groups (3400-1200 BC), and intensive agriculturalists from the combined Meroitic, X-group, and Christian horizons (AD 0-1500). These comparisons reveal that Nubian foragers and incipient agriculturalists have flat and elongated vaults with well developed, protruding supraorbital tori and occipitals. In contrast, later intensive agriculturalists have rounded vaults with small and more posteriorly positioned faces and masticatory muscle attachment sites (temporalis and masseter) and reduced temporomandibular joint size (Figure 7.1).
Carlson and coworkers posit a masticatory-functional hypothesis for explaining craniofacial changes in Nubia (Figure 7.2). They argue that the primary factor influencing Nubian craniofacial anatomy was the change in subsistence economy, from foraging to food production and the shift to consumption of softer foods. These changes resulted in a reduction in activity of the masticatory muscles and a concomitant decrease in mechanical loading of the craniofacial skeleton. Alteration in masticatory function led to alteration in craniofacial growth in two ways, including (1) decreased stimulation of bone growth, leading to a reduction in facial robusticity; and (2) progressive alteration of the overall growth of the face and vault, resulting in a smaller and more inferoposteriorly oriented face relative to the cranial vault.
Figure 7.1. Summary of craniofacial changes in Nubia, comparing Mesolithic foragers with Meroitic-Christian farmers (dashed line). Note that farmers have relatively greater posterior placement of areas of muscle attachment, facial reduction, vault length reduction, vault height increase, and more globular shape than foragers. (Adapted from Carlson & Van Gerven, 1977; illustration by Dennis O'Brien; reproduced with permission of authors and John Wiley & Sons, Inc.)
Other studies of craniofacial morphology
The masticatory-functional interpretation of diachronic change in craniofacial morphology in Nubia offers an important means of interpreting morphological changes elsewhere, especially where food production and agriculture have supplanted hunting and gathering as a primary mode of subsistence. Neolithic mandibles from Lepenski Vir, Vlasac, and Vinca in the Balkans region of central Europe show a reduction in size in comparison with earlier Mesolithic mandibles (y'Edynak & Fleisch, 1983). In this region, the change in size of the mandible coincides with the shift from foraging and fishing in the earlier period to the farming of several grains (e.g., einkorn and emmer wheat). Unlike the foraging adaptation, the latter dietary focus also involved extensive cooking of food in ceramic vessels.
[Figure 7.2: flow chart; the original artwork is not reproducible here. Recoverable content: cultural change → change in subsistence pattern (hunting-gathering → agriculture) → change in masticatory function → reduced neuromuscular activity → alteration of the pattern of craniofacial growth, expressed as (1) reduced growth of the maxillo-mandibular complex, yielding a reduced and more inferoposteriorly located midface and lower face and an altered position of the masticatory muscles; and (2) decreased mechanical stimulation of the periosteal membrane, yielding reduced robusticity of the face and jaws; these changes elicit a compensatory biomechanical response by the cranial vault and base (more acute cranial base angle, increased cranial height, decreased cranial length, reduced glabellar and occipital regions), producing a more globular, less robust, and less prognathic craniofacial complex. The original distinguishes processes (boxes) from mechanisms (ovals).]
Figure 7.2. Masticatory-functional model of craniofacial change in Nubia. (Adapted from Carlson & Van Gerven, 1979; reproduced with permission of authors and American Anthropological Association.)
The mastication of generally softer foods, therefore, resulted in substantial decreases in mechanical loading of the craniofacial skeleton.
Similarly, temporal trends in several skeletal series from the Eastern Woodlands of North America are consistent with the masticatory-functional paradigm. Crania of late prehistoric (Mississippian period) farmers from Tennessee show a general decrease in robusticity and a reduction in size and more posterior orientation of the masticatory muscles in comparison with crania of early prehistoric (Archaic period) foragers (Boyd, 1988).
In this setting, size reduction is especially pronounced in the mandible and lower face, suggesting that the change in morphology resulted from decreased mechanical loading of the face and jaws brought about by consumption of softer-textured foods in later prehistory (Boyd, 1988). Accompanying these changes is a marked reduction in size of the temporomandibular joint (Hinton, 1981a, 1983). Experimental studies show that this joint is highly sensitive to alterations in mechanical loading (e.g., Bouvier & Hylander, 1981, 1982, 1984; Tuominen et al., 1993), and the joint tends to be largest in human populations with high masticatory stresses (Corruccini & Handler, 1980; Hinton, 1981a, 1981b, 1981c, 1983; Hinton & Carlson, 1979; Wedel et al., 1978). Like the assessments of robusticity and vault shape generally, these findings denote the primacy of mechanical factors in the interpretation of craniofacial morphology when genetics is held constant.
Comparisons of prehistoric Georgia coastal forager-fishers (pre-AD 1150) and farmers (AD 1150-1550) reveal that, as in the Tennessee populations, there is a general decrease in craniofacial robusticity, but reductions in facial and mandibular dimensions and attachment sites for masticatory muscles (temporalis and masseter) were more pronounced than reductions in nonfacial dimensions (Larsen, 1982). These changes in craniofacial size and robusticity in prehistoric Georgia Indians appear to be due to increased consumption of soft, maize-based foods in later prehistory (Larsen, 1982). These findings, therefore, are strongly suggestive of responses of the craniofacial skeleton to change in subsistence and the manner in which food is prepared.
In summary, many changes observed in Holocene craniofacial morphology and structure are related to biocultural factors. This is not to say that craniofacial changes are exclusively due to extrinsic factors influenced by diet and use of the face and jaws. In the Great Plains region of North America, for example, morphometric analyses implicate gene flow between neighboring populations rather than mechanical factors (Jantz, 1973; Key, 1983, 1994; Ubelaker & Jantz, 1986).
7.2.3 The supraorbital torus: a beam?
The size and robusticity of the supraorbital torus has been intensively investigated by biological anthropologists and anatomists for well over a century (see review by Russell, 1985). The development of the torus is highly variable in humans, and ranges from a thick, bar-like projection in early Homo (e.g., Weidenreich, 1943) to mild expression in most recent human populations, with some notable exceptions (e.g., native populations
in the American Great Basin and Australia). Because only small muscles of facial expression are directly associated with the supraorbital torus, the region is often interpreted as essentially nonfunctional and nonadaptive (e.g., Owen, 1855; and later researchers). Alternatively, the feature has been interpreted as deriving from a wide range of plausible (and implausible) causes, including pathological processes (Virchow, 1872) or various nonmechanical functions - keeping the hair out of one's eyes (Krantz, 1973), protection from blows to the head (Tappen, 1973, 1979), or as anatomical sun visors (Boule & Vallois, 1957) (see also Russell, 1985).
The most serious studies of the supraorbital region focussed on the functional-mechanical paradigm, especially in the context of mastication and mechanical adaptation and patterns of tooth loading (Russell, 1982, 1985). Based on her reading of Endo's (1966) experimental and mathematical analysis of the facial skeleton, Russell (1985:343) concluded that the supraorbital torus is analogous to a beam whereby 'supraorbital development is a function of bending stresses concentrating in the frontal bone above the orbits during anterior tooth loading'. The torus can be modelled 'as though it were a beam extending across the superior orbital margins. This beam is intermittently bent by the downward pull of the masticatory muscles and the upward push of the bite force.' Thus, with increased chewing (or related behaviors), greater bending stress on the supraorbital torus should result in greater bony development in the glabellar region in particular and the supraorbital torus in general.
Evaluation of Russell's argument in light of Endo's original published discussion indicates that the supraorbital region is not highly stressed either in anterior incisal loading (as interpreted by Russell, 1985) or even in posterior loading (see Hylander & Johnson, 1992; Hylander et al., 1992; Lieberman, 1995; Picq & Hylander, 1989). In fact, careful reading of Endo's experimental results suggests that he could find little evidence of stress in the region during isometric biting on the anterior teeth (see Endo, 1966). Additionally, biomechanical analysis of Old World monkey crania suggests that the link between anterior dental loading and compensatory remodeling in the browridge is unfounded (Ravosa, 1988).
Although the determinants of the size and morphology of the supraorbital torus are not fully understood, comparison of supraorbital development in temporal sequences of recent humans from archaeological settings suggests that browridge size is best thought of as a general indicator of craniofacial robusticity. Carlson (1976a) found that the torus was more developed in the preagricultural foragers than in farmers from Nubia (and see above). In the American northern Great Plains (South Dakota and North Dakota), comparison of size and morphology of the adult female and male
supraorbital tori in Woodland tradition foragers (AD 610-1033) with mixed forager/agriculturalists from the Middle Missouri (AD 900-1675) and Coalescent (AD 1600-1832) traditions reveals a general gracilization of the supraorbital torus from the earlier to later periods (Cole & Cole, 1994). These findings are consistent with a functional hypothesis regarding a shift in subsistence technology, especially since there is very little variation in overall cranial size in these samples (Cole & Cole, 1994). Aside from the supraorbital torus, however, Cole & Cole (1994) did not analyze other measures of craniofacial size and robusticity that have been linked with masticatory function by others (cf. Carlson & Van Gerven, 1977; and see above). Given the unclear relationship between the supraorbital torus and masticatory function (cf. Picq & Hylander, 1989), their results should be considered consistent with, but not confirmation of, a functional interpretation.
7.2.4 Eskimo craniofacial morphology: masticatory loading or cold
adaptation?
Craniofacial morphology of circumpolar groups - especially Eskimo populations - is characterized by a pronounced degree of robusticity, including marked facial flatness, well developed and anteriorly placed malars, high and pronounced temporal lines, and extreme robusticity of the face, jaws, and masticatory apparatus generally. Coon and coworkers (1950) argued that Eskimo craniofacial morphology represents an adaptation to extreme cold. For example, they interpreted the presence of enlarged and forwardly placed malars as reflecting a retraction of the external nose, an area of the face that is especially vulnerable to cold stress. Alternatively, others suggested that the Eskimo craniofacial morphology represents adaptation to vigorous mastication (e.g., Furst & Hansen, 1915; Hrdlicka, 1910b), or what has been called the 'hard-chewing' hypothesis (Collins, 1951).
In order to determine which of the two models best explains Eskimo craniofacial morphology, Hylander (1977) undertook a comprehensive biometric and paleopathological analysis of masticatory behavior in past and living Eskimos. His analysis reveals a link between craniofacial morphology and loading of the jaws and teeth in these populations. For example, many dentitions show root resorption and crown fractures and chipping due to excessive mechanical demands. These populations also express high frequencies of mandibular, maxillary, and palatine tori, skeletal features that have been linked with severe or elevated masticatory stresses (see below). Bite force measured in living Alaskan Eskimos is remarkably high compared to other, noncircumpolar, populations.
A rich body of ethnographic evidence indicates that very heavy mechanical demands are placed on the craniofacial complex of Eskimos, specifically involving heavy use of jaws and teeth in masticatory and extramasticatory functions. In his observations of Eskimos, De Poncins (1941:71-72) noted, 'They had long since stopped cutting the meat with their circular knives; their teeth sufficed, and the bones of the seal cracked and splintered in their faces. What those teeth could do, I already know. When the cover of a gasoline drum could not be pried off with the fingers, an Eskimo would take it between his teeth and it would come easily away. When a strap of sealskin freezes hard - and I know of nothing tougher than sealskin - an Eskimo will put it in his mouth and chew it soft again' (quoted by Hylander, 1977:142).
Drawing on these various lines of evidence, biological and behavioral, Hylander (1977) argued that the Eskimo craniofacial morphology long observed by biological anthropologists (e.g., forwardly placed zygomas) is oriented toward maximizing the efficiency and power of chewing, especially involving anterior tooth use: the craniofacial complex is suited to the generation and dissipation of pronounced, vertically oriented masticatory forces in the front of the mouth. This assessment is confirmed by analysis of the position of attachment sites for masseter and temporalis muscles and incisors in a sample of prehistoric Inuit crania (Spencer & Demes, 1993). The anteriorly placed masticatory muscles and posteriorly placed incisors 'indicate an increased efficiency for the application of either high magnitude or repeated bite forces on the anterior dentition' (Spencer & Demes, 1993:15).
Other high-latitude foragers display pronounced craniofacial robusticity. Like Eskimos, crania from Tierra del Fuego and Patagonia, South America, bear robust supraorbital tori and anteriorly placed zygomas, sagittal keeling, occipital tori, and pronounced attachment sites for the temporalis muscle (Lahr, 1995). The functional-masticatory paradigm (Hylander, 1977; Spencer & Demes, 1993) best explains the similarities between Eskimos and Fueguian/Patagonians, especially in regard to the common skeletal responses to highly demanding masticatory regimes in these two different settings.
Some workers have argued that the round head of high-latitude populations would be best suited for cold adaptation, as a sphere maximizes volume for heat retention and minimizes surface area for heat loss (e.g., Beals, 1972; Beals et al., 1984; Crognier, 1981). However, the aforementioned studies linking craniofacial morphology with trends in masticatory loading and dietary change in the Holocene, and changes in head shape in the absence of climatic change, make thermoregulatory models or similar types of ecological arguments less than compelling (and see Armstrong, 1984; Gibson, 1984; Henke, 1984; Henneberg, 1984; Lahr, 1995; Morimoto, 1984). In North America, analysis of cranial dimensions based on thousands of native individuals measured by Boas in the late nineteenth century shows no relationship between climate and head shape (Jantz et al., 1992). Finally, the cold stress model has limited explanatory power, since a wide range of other human populations with round heads live in warm climates, and various populations, living and extinct, having robust, forwardly placed malars are associated with hot, dry climates (see Hylander, 1977).
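The geometric premise of the thermoregulatory argument can be made explicit. As a point of reference (not part of the studies cited), the three-dimensional isoperimetric inequality states that any closed body of surface area \(S\) and volume \(V\) satisfies

\[
S^{3} \;\geq\; 36\pi V^{2},
\]

with equality only for the sphere; equivalently, a sphere of radius \(r\) attains the minimal surface-area-to-volume ratio \(S/V = 3/r\) for its volume. A rounder vault therefore exposes relatively less surface per unit volume, which is the physical basis of the cold-adaptation claim, even though, as the studies above indicate, its empirical support is weak.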
7.2.5 Incisor shovelling and masticatory loading
Incisor shovelling has been observed in a wide range of human populations worldwide, but the highest frequencies of the well developed form appear to be found in high-latitude and cold-adapted foragers (Mizoguchi, 1985). Mizoguchi (1985) argues that meat eaten by these hunter-gatherers requires heavier masticatory loading of the anterior dentition than do other foods eaten in most other regions of the world (e.g., by pastoralists in Africa). His assessment suggests a possible link between incisor morphology and the demands of pronounced incisor loading.
7.2.6 Palatine, maxillary, and mandibular tori and masticatory stress
Tori located on the hard palate and lingual corpus and alveoli of the mandible are the focus of attention regarding genetic vs. nongenetic environmental influences on skeletal variation (Halffman et al., 1992; Hauser & De Stefano, 1989; Morris, 1981) (Figure 7.3). Some workers conclude that tori found in archaeological remains are indicative of high mechanical demands placed on the masticatory apparatus (e.g., Halffman et al., 1992; Hooton, 1918; Hrdlicka, 1940b; Pedersen, 1944; Scott et al., 1991), but others argue that tori are largely genetically controlled (see Hauser & De Stefano, 1989; Morris, 1981).
There are relatively high frequencies of palatine tori in far northern and circumpolar populations, including Icelanders (Hooton, 1918), Lapplanders (Schreiner, 1935), and Eskimos (Hylander, 1977; and discussion above). Torus prevalence is also high in Medieval Norse living in Iceland and Greenland in comparison with Europeans generally (e.g., Mellquist & Sandberg, 1939; Pedersen, 1944).
Figure 7.3. Superior (top) and posterior (bottom) views of large palatine torus (A) and maxillary torus (B); Norse male from Benedictine Nunnery, Eastern Settlement, Greenland. (From Halffman et al., 1992; reproduced with permission of authors and John Wiley & Sons, Inc.)
In order to address the question of why Medieval Norse groups developed a convergence in torus expression with Eskimos and other circumpolar groups, Halffman and coworkers (1992; Scott et al., 1991) studied torus frequency and size in a series of Medieval Norse skeletal remains from Norway, Greenland, and Iceland dating to the eleventh to fourteenth centuries. Temporal comparisons reveal that later Norse from Norway, Greenland, and Iceland have a significantly higher prevalence of tori than do early Norse from Greenland. Tori prevalence approaches 100% in later populations, which is among the highest in the world (cf. Halffman et al., 1992:156; Hauser & De Stefano, 1989). Tori increase in size in older adults, suggesting that the trait is strongly influenced by age. The pattern of increase in torus frequency and size within a relatively short temporal span (several hundred
years) suggests that environment exerts more influence on torus expression
than does inheritance.
Changes in subsistence and food preparation techniques provide some insight into possible reasons for the secular increase in tori in Greenland. Archaeological evidence indicates that the subsistence economy of later Greenlandic Norse became increasingly focussed on wild game (e.g., seals and caribou), rather than domestic animals (cattle, sheep, and goats). Easily chewed foods such as grains and breads became increasingly scarce in the later Norse in Greenland. Finally, archaeological evidence indicates a decrease in the cooking of food due to declines in the availability of firewood. This would have involved an increase in the consumption of tougher foods, namely raw or partially cooked foods, including meat. Scott and coworkers (1991) document a general increase in tooth wear and anterior tooth crown chipping in the later period. These findings are consistent with the hypothesis that masticatory function is the principal influence on torus expression.
7.2.7 Pathological modifications of the temporomandibular joint
Like the other joints of the skeleton, the temporomandibular joint is subject to mechanical demands leading to osteoarthritis. Populations experiencing high masticatory loading show a tendency for elevated prevalences of temporomandibular joint degenerative pathology (Brown, 1992; Webb, 1995; Wells, 1975). Eskimos have an unusually high prevalence of temporomandibular joint osteoarthritis, with women expressing higher frequencies than men (Merbs, 1983). These differences may be related to the preparation of hides with the teeth, a task performed primarily by women in Eskimo societies (Merbs, 1983). Sedentary agriculturalists from the Medieval period Kulubnarti site in northern Sudan have similarly high levels of articular pathology, with females having three times that of males (Sheridan et al., 1991). In general, human populations with elevated prevalences of temporomandibular joint osteoarthritis also possess heavy occlusal wear, thus indicating the strong influence of mechanical stress in temporomandibular joint degeneration (Brown, 1992; Webb, 1995; Wells, 1975; Whittaker, 1993).
7.2.8 Age changes in the masticatory apparatus
Growth in the skull is not quiescent once adulthood is reached. Like the postcranial skeleton, appositional bone growth in the skull continues during adulthood. These changes are documented in cross-sectional studies involving comparisons of individuals of different ages within the same population (e.g., Baer, 1956; Goldstein, 1936; Hooton & Dupertuis, 1951; Howells & Bleibtreu, 1970; Hrdlicka, 1936; Lasker, 1953; Todd, 1924; Walker, 1995) and in longitudinal studies comparing sequential observations of the same individual at different ages (e.g., Behrents, 1985; Israel, 1968, 1973, 1977; although see Tallgren, 1974). Overall, these investigations show the dynamic nature of craniofacial architecture during adulthood, especially with regard to increases in craniofacial dimensions, even in later years when muscle mass becomes reduced (Garn, 1985).
Most investigations of bone apposition are based on industrialized urban populations, and thus represent a narrow perspective on human variation. Studies of archaeological samples are important in that they provide a wider context for assessing the roles of genes and environment. In this regard, Ruff (1980) examined a large sample (n = 136) of adult males from the Archaic period Indian Knoll, Kentucky, series. Statistical comparisons of younger (20-34 years) with older (35-50 years) males revealed that virtually all dimensions (e.g., face width, face height) were larger in the older adults, six of which reached significance (p < 0.05). Comparisons of craniofacial dimensions in a series of adult females and males from Late Woodland and Mississippian period sites in west-central Illinois (Droessler, 1981) revealed trends similar to those in the Indian Knoll series. Comparisons of young adults (20-34 years), middle-aged adults (35-49 years), and old adults (50+ years) indicate that cranial size tends to increase with age, but to a relatively greater extent in the face, and more so in females than in males (Table 7.1). Age comparisons of crania from Europe, Melanesia, and the American Great Plains also revealed significant increases in adult craniofacial size (Guagliardo, 1982b; Sejrsen et al., 1997). Increases in facial projection, interorbital width, orbit and mastoid size (Guagliardo, 1982b) and palate width (Sejrsen et al., 1997) are especially prominent.
Although these studies show generally increasing size with advancing age, there is some intrapopulation variability, such as changes in females but not in males (e.g., Droessler, 1981), earlier changes in adult males than in adult females (e.g., Walker, 1995), or lack of age changes entirely (e.g., Sjøvold, 1995). This variability, especially between males and females, may reflect differences in tooth loss or some unknown factors (see discussion by Droessler, 1981). The congruence of results from studies of populations representing very diverse lifestyles and ecological settings suggests some common factor or factors - genetic and/or environmental - that influence bone apposition during adulthood.
Table 7.1. Age comparisons of west-central Illinois adult cranial measurements: young (20-34 years), middle (35-49 years), old (50+ years) individuals. (Adapted from Droessler, 1981. All measurements in millimeters.)

                      Young (n = 54)*    Middle (n = 58)*    Old (n = 39)*
Measurement           Mean     SD        Mean     SD         Mean     SD
Males
  Vault length        180.9    6.3       182.3    7.1        180.4    6.0
  Vault breadth       138.5    5.6       137.1    5.8        138.5    4.5
  Vault height        142.3    5.4       140.3    5.3        140.4    4.3
  Face height         122.9    5.8       122.1    6.8        122.1    4.2
  Mid-face breadth     99.8    5.2        99.9    4.8        100.3    4.8
Females
  Vault length        173.0    5.9       173.6    5.1        174.2    6.7
  Vault breadth       134.0    4.3       134.7    4.5        135.1    6.6
  Vault height        136.5    4.2       137.5    5.2        137.3    4.8
  Face height         114.5    6.0       112.6    5.8        114.4    7.2
  Mid-face breadth     96.4    4.2        95.5    5.2         97.2    4.4

* Average sample size; sample size varies by measurement.
Baer (1956) suggested three possible explanations, including secular change, selective survival, and true ontogenetic development. In working with archaeological samples, special problems unique to these kinds of data sets emerge. For example, archaeological series usually include multiple generations spanning many years, and differing adaptive regimes. The west-central Illinois series, for example, encompasses some 600 years of occupation with a major dietary shift from foraging to farming (Droessler, 1981). The consistency of craniofacial changes with advancing age in investigations of modern (clinical) and archaeological populations is striking, suggesting that the trends are more real than apparent. Guagliardo (1982b) considered biomechanical factors that might influence craniofacial expansion, attributing changes with age to the cumulative forces associated with mastication.
Regardless of cause, the increasing expansion of the craniofacial complex with advancing age underscores the importance of considering the demographic composition of skeletal and clinical series. Comparison of a predominantly young adult series from one population with an old adult series from another might give the impression of differences in robusticity. Consideration of age structure, therefore, represents a vital part of skeletal analysis (see also Ruff, 1980). These craniofacial size differences are most apparent in the comparison between very young and very old adults (e.g.,
Behrents, 1985). Most archaeological samples contain relatively few older
adults. Thus, differences in craniofacial robusticity between different
archaeological series are probably due to factors other than age.
7.3 Dental and alveolar changes
7.3.1 Occlusal abnormalities and dental crowding
Occlusal abnormalities and dental crowding are generally lumped under the term 'malocclusion', which includes all manner of conditions deviating from the 'ideal' or normal occlusion. Angle's (1898) highly influential treatise on malocclusion defined ideal occlusion as 'edge-to-edge' whereby all antimeric teeth in the upper and lower dentitions are in perfect occlusion. This pattern is exceedingly rare in humans, especially in Western industrialized populations or populations adopting Western diets and subsistence technology. Abnormal occlusal patterns in humans fall into two broad categories, including (1) dental crowding involving insufficient space for all teeth (e.g., tooth impaction); and (2) underdevelopment or overdevelopment of the maxillary dentition relative to the mandibular dentition or vice versa (e.g., overbite) (Corruccini, 1991; Scott & Turner, 1988).
The causes of malocclusion have been debated for years, some workers invoking genetic explanations (e.g., Hrdlicka, 1922b; Lundström, 1948; Smith & Bailit, 1977) and others offering environmental explanations (Begg, 1954; Hunt, 1960, 1961; Sagne, 1976). Comparisons of monkeys (squirrel monkeys, macaques, and baboons) fed hard and soft diets reveal significant differences in occlusion; animals fed soft diets tend to have narrower maxillary arches, and higher frequencies of tooth impactions, rotations, and crowding than animals fed hard diets (Beecher & Corruccini, 1981; Corruccini & Beecher, 1982, 1984). Human studies are consistent with these findings. In general, non-Western societies consuming traditional diets composed of hard-textured foods have very low prevalence of occlusal abnormalities (e.g., Corruccini et al., 1981, 1983; Hunt, 1961; Lombardi & Bailit, 1972; Lu, 1977; Moorrees, 1957; Price, 1936). In addition, Corruccini & Whitley (1981) showed that older Euroamericans in rural Kentucky who had been raised on traditional diets (e.g., dried pork, heavy corn-bread, wild and garden foods) have a low prevalence of malocclusion. In contrast, high-status South Asians who have greater access to soft, refined foods than low-status South Asians have an elevated prevalence of occlusal abnormalities (Corruccini et al., 1983). Similarly,
urban Greek populations consuming nontraditional diets dominated by soft, processed foods have more dental crowding and non-edge-to-edge bite than rural Greek populations consuming traditional diets (Angel, 1944).
Populations shifting to highly processed, nontraditional diets show an increase in overbite, overjet, impactions, crowding (e.g., Brown, 1992; Corruccini, 1984; Corruccini & Choudhury, 1985; Corruccini et al., 1981, 1990; Moorrees, 1957; Price, 1936; Waugh, 1937; Wood, 1971), and a general narrowing of the arches (e.g., Corruccini & Whitley, 1981; Goose, 1962, 1972; Lundström & Lysell, 1953; Lysell, 1958). The latter findings suggest that the narrower faces of American-born children in comparison with their European-born parents (e.g., Boas, 1912, 1916; Hrdlicka, cited by Boas, 1916) resulted from the shift to softer, more processed foods.
The association between malocclusion and the consumption of soft-textured foods in animal and human studies supports the 'disuse' hypothesis; namely, reduction in masticatory mechanical loading has led to a reduction in growth of bone tissue supporting the teeth. To be sure, some populations appear to have a genetic predisposition for occlusal abnormalities (see Corruccini, 1991). However, a substantial body of evidence indicates that reduction in masticatory stress has engendered an elevation in occlusal abnormalities in recent human history.
From this, it can be inferred that teeth have become reduced over the course of human evolution (and see below) at a relatively slower rate than the supporting bony structures of the maxilla and mandible. Differential reduction in tooth and jaw size has been documented in recent human populations. Comparisons of British orofacial and dental dimensions spanning the Neolithic to the nineteenth century show a distinctive reduction in bone dimensions (e.g., mandible, palate width) but no or little reduction in tooth size (Goose, 1963; Keith, 1924; Moore et al., 1968). Similarly, comparisons of Medieval and recent Swedish skulls show reduction in mandible but not tooth size (Sagne, 1976). These findings indicate that temporal increases in crowding are essentially reductions in bone size without corresponding decreases in tooth size. This differential reduction is due to the fact that, unlike bone, teeth cannot respond to changes in use via differential growth. Therefore, while jaw size decreases in the face of dietary changes, tooth size is not able to do so.
Findings based on temporal comparisons from several archaeological contexts are consistent with the disuse hypothesis concerning the replacement of hard-textured by soft-textured foods in the foraging-to-farming transition or in increasing urbanization. Comparison of Archaic hunter-
gatherers with Mississippian agriculturalists from Koger's Island in the Pickwick Basin, Tennessee, reveals an increase in occlusal abnormalities in the agriculturalists relative to the hunter-gatherers (Newman & Snow, 1942). Because there was also a decline in tooth wear in this setting, Newman & Snow (1942) concluded that dietary consistency declined due to the use of wooden mortars and pestles in food preparation in later prehistory, thus indirectly implicating decreased masticatory stress as a factor contributing to the increased prevalence of malocclusion.
In the eastern Mediterranean, a decrease in edge-to-edge bite from the
Neolithic to the twentieth century also coincides with a reduction in tooth
wear, which led Angel (1944) to infer a decrease in mechanical demand on
the faces and jaws. The pattern was also linked to a decline in craniofacial
robusticity, accelerating in recent Greek populations.
The trend of increasing prevalence of occlusal abnormalities has been documented in a variety of settings and interpreted in various ways (cf. Davies, 1972; Dickson, 1970). This trend is perhaps best documented in archaeological remains from Japan, especially in view of changes in diet, food preparation technology, and orofacial adaptation (Hanihara et al., 1981; Inoue et al., 1986; Ito et al., 1983; Kamegai et al., 1982). Comparisons of adult skulls from the Jomon (1000-500 BC), Kofun (AD 50-350), Medieval (AD 1300-1600), Yedo (AD 1600-1900), and modern (AD 1964-1966) periods reveal clear trends in orofacial adaptation (Hanihara et al., 1981). The frequency of malocclusion increased from a low of 20.0% in Jomon hunter-gatherers to 45.5% in Kofun incipient agriculturalists to 76.2% in modern Japanese. Building on these findings, Inoue and coworkers examined the relationship between food consistency and occlusal abnormalities in Japan, succinctly arguing that 'soft and highly nutritious food reduces the functional activity of the human masticating system, because it requires less chewing force and less chewing time. Thus reduction of the jaw bone has progressed through the course of human micro-evolution, resulting in disharmony between the sizes of teeth and jaws' (Inoue et al., 1986:164; see also Suzuki, 1969). To test this hypothesis further, malocclusion prevalences of Jomon period foragers (divided into an early and a late sample), Yayoi period early farmers, and Kofun period protohistoric farmers from mostly southern Japan were compared. Factor, principal components, and cluster statistical analyses indicated clustering of occlusal patterns by cultural/subsistence grouping. This analysis revealed a low prevalence of malocclusions in the foragers followed by increases in the later, agricultural populations with increased use of soft foods.
7.3.2 Tooth size changes
The adaptational and evolutionary significance of past tooth size variation in human populations has become a major point of discussion, as is indicated by an increase in research on the topic from archaeological settings in the Old World (e.g., Brace & Hinton, 1981; Brace et al., 1991; Calcagno, 1989; Calcagno & Gibson, 1991; Jacobs, 1994; Lukacs, 1985; Lukacs & Hemphill, 1991; Walimbe & Kulkarni, 1993) and in the New World (e.g., Brace & Mahler, 1971; Dahlberg, 1963; Hinton et al., 1980; Larsen, 1982; Sciulli, 1979; Simpson et al., 1990; Walker, 1978). These types of studies emphasize the importance of teeth in the understanding of craniofacial adaptation and dietary change throughout the evolution of the Hominidae, including the appearance and evolution of recent Homo sapiens.
The trend for a reduction in tooth size over the course of hominid evolution has been well documented (e.g., Bermúdez de Castro & Nicolás, 1995; Brace, 1995; Brace & Mahler, 1971; Brace et al., 1987, 1991; Calcagno, 1986a, 1986b, 1989; Calcagno & Gibson, 1991; Cappa et al., 1995; Frayer, 1978; Kieser, 1990; Larsen, 1982; Lukacs, 1985; Scott & Turner, 1988; Scott et al., 1991; y'Edynak, 1989; but see Jacobs, 1994; Scott, 1979) (Figure 7.4). Generally, foragers or groups having recently made the shift to agriculture have larger teeth than populations with a longer history of agricultural use and associated food preparation techniques, such as boiling in ceramic vessels and other forms of extended cooking (e.g., Brace & Hinton, 1981; Brace & Mahler, 1971; Brace et al., 1991; Lukacs, 1985).
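The 'summed posterior tooth size' plotted in Figure 7.4 is, in studies of this genre (e.g., those of Brace and coworkers), typically a sum of cross-sectional crown areas; on the assumption that this is the measure intended here, the computation can be sketched as

\[
TS_{\text{post}} \;=\; \sum_{i \,\in\, \{\mathrm{P3},\,\mathrm{P4},\,\mathrm{M1},\,\mathrm{M2},\,\mathrm{M3}\}} MD_{i} \times BL_{i},
\]

where \(MD_i\) and \(BL_i\) are the mesiodistal and buccolingual crown diameters (in mm) of tooth \(i\), so that each product approximates an occlusal area and the sum carries the mm² units of the figure.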
However, mechanisms for tooth size reduction remain elusive (see Kieser, 1990, for a review of alternative models explaining tooth size reduction). Tooth size appears to be highly heritable, suggesting that size reduction may be largely an evolutionary (genetic) change. The rapid reduction in tooth size accompanying shifts in dietary focus (e.g., Hinton et al., 1980; Larsen, 1982, 1983a; Sciulli, 1979) and the presence of small teeth in physiologically stressed individuals (e.g., Garn & Burdi, 1971; Garn, Osborne et al., 1979; Hillson, 1995; Kieser, 1990; Townsend & Brown, 1978) indicate a significant influence of environment (e.g., nutrition, maternal health), at least in certain situations. The greater heritability of tooth size than of bone size, and the fact that teeth do not remodel once formed, underlie the potential imbalance between teeth and supporting bone, as is so well illustrated in the aforementioned studies of occlusal abnormalities.
[Figure 7.4: graph; axis residue removed. The y-axis plots summed posterior tooth size (mm²), spanning roughly 1000 to 1500 mm²; the x-axis is labelled 'Hominid'.]
Figure 7.4. Temporal reduction in summed posterior tooth size in Homo. (Data from Calcagno & Gibson, 1991.)
Large teeth located in alveolar tissue that is too small also represent a potential health risk, predisposing an individual to impactions, dental caries, periodontal disease, and tooth loss (Calcagno & Gibson, 1991; Inoue et al., 1986). Caries and dental impactions can also lead to abscesses and localized infections, which can progress to more severe types of systemic infections (e.g., gangrene, septicemia, osteomyelitis; Calcagno & Gibson, 1991). Because systemic infections are potentially life-threatening, teeth in populations consuming soft, cariogenic foods may be under selection for reduction in these circumstances (Calcagno & Gibson, 1991). However, small teeth in highly abrasive masticatory environments may wear too rapidly, thus resulting in premature loss of crown height and pulp exposure. Exposed pulp is highly susceptible to bacterial infection (pulpitis). This suggests, then, that teeth may also be under selection for large occlusal size in abrasive environments.
The above discussion highlights the extreme ends of variation involving the costs and consequences of soft and hard diets in relation to mastication, tooth size, and potential selective conditions. Most human populations lie somewhere between these ends of the spectrum of masticatory adaptation. Tooth size is, therefore, most likely to be a product of an ongoing 'selective compromise' involving the promotion of optimum dental health among other things (Calcagno & Gibson, 1991). Several other models of tooth size
reduction are based on the premise that reduction occurs in the absence of
selection (cf. Brace et al., 1991; Macchiarelli & Bondioli, 1986), but these
models exclude a broader consideration of the potential health conse-
quences of occlusal abnormalities. Therefore, it appears that there is an
adaptive advantage to maintaining a harmonious relationship between
teeth and the skeletal structures supporting them (for an alternative
perspective, see Kieser et al., 1985).
7.4 Dental wear and function
The use of the teeth in eating involves a two-stage process: first, the initial preparation of food with the anterior teeth; and second, the reduction of food with the posterior teeth. These activities result in the wearing of occlusal surfaces as the upper and lower teeth come into contact with each other and with the food being prepared or reduced. Over the course of the evolution of the Hominidae, there has been a shift away from the use of the teeth toward the use of the hands in the manipulation of the environment, and a reduction in the importance of mastication with the consumption of processed foods. Nevertheless, the presence of significant amounts and variable patterns of dental wear in past and living human populations indicates the continued importance of teeth in survival and adaptation.
Tooth wear is variously defined in the literature. Two forms of wear, abrasion and attrition, are most commonly discussed (Townsend et al., 1994). Abrasion is caused by contact between the tooth and the food or other solid exogenous materials, especially as food is forced over occlusal surfaces. Attrition is caused by tooth-on-tooth contact in the absence of food or various other abrasives. Additionally, erosion - the loss of tooth surfaces due to chemical dissolution - is sometimes considered as a form of wear (e.g., Davis & Winter, 1980; Linkosalo & Markkanen, 1985). Because of the difficulty of distinguishing abrasion, attrition, and erosion, this discussion regards 'wear' as including any combination thereof (also see Wallace, 1974). Tooth wear varies widely between human populations. Owing to localized behavioral characteristics, and differences in cultural practices, age, sex, diet, and orofacial morphology, it provides enormously important information on earlier foodways and masticatory behavior (e.g., Benfer & Edwards, 1991; Molnar, 1971, 1972; Molnar et al., 1983; Molnar & Molnar, 1990; Powell, 1985; Richards, 1990; Richards & Miller, 1991; Walker et al., 1991). The significance of dental wear in relation to diet was succinctly summarized by Walker, who noted that 'From an archaeological standpoint, dietary information based on the analysis of (wear) is of
considerable value since it offers an independent check against reconstruction of prehistoric subsistence based on the analysis of floral, faunal and artifactual evidence' (1978:101).
Dental wear variation is reported as either severity or form or both. Wear can also involve substantial losses in the regions of contact between teeth (Begg, 1954; van Reenen & Reinach, 1988; Wolpoff, 1971), resulting in reduction in mesiodistal crown length. Although wear in its severe form can predispose a tooth to disease (e.g., pulpitis, caries) and loss, tooth wear is a normal physiological process rather than a disease (contra Wells, 1975).
The process of tooth wear is well understood: it commences with loss of occlusal enamel, followed by deposition of secondary dentin serving as a protective zone overlying the pulp chamber. In extreme forms of wear, all enamel is removed from the occlusal surface, leaving a nearly continuous dentinal surface surrounded by a partial or complete rim of enamel. As the process continues in older adults, the nerve supply in the pulp chamber withdraws toward the root tip and is replaced in the pulp chamber by dentin. This replacement process is essential, because it accommodates extreme wear - in some populations involving complete loss of crown height (e.g., Hartnady & Rose, 1991; Larsen & Kelly, 1995). Moderate to severe wear results in a reduction in tooth size due to the combined effects of occlusal and interproximal wear (e.g., van Reenen, 1982). In especially severe wear environments, loss of crown size in molars appears to be compensated for by a combination of extra-eruption and lingual tilting (Clarke & Hirsch, 1991; Comuzzie & Steele, 1989; Reinhardt, 1983; Taylor, 1963, 1986a, 1986b) (Figure 7.5). The added chewing surface on the buccal aspects of the molar roots in these teeth may represent an adaptive response to severe wear by maintenance of an occlusal surface area commensurate with the dietary or masticatory needs of the individual.
The severity of wear is highly influenced by the consistency and texture of food, which is determined by either the characteristics of the food (e.g., presence of phytoliths or cellulose in plants), the manner of its preparation, or some combination. In order to facilitate the ready availability of nutrients to the digestive enzymes or to remove undesirable constituents (e.g., fiber, toxins), humans process plants in a variety of ways, including grinding, pounding, grating, soaking, leaching, drying, heating, and fermenting (Stahl, 1989). Some of these processing techniques may involve the introduction of abrasive elements that promote tooth wear (e.g., use of grinding stones for making flour from cereal grains) or they may involve the removal of abrasives, as in highly processed foods consumed by Western industrialized populations.
Figure 7.5. Lingual tilting of mandibular first molar; Tutu, St. Thomas, U.S. Virgin Islands. Note wear on superior buccal roots and pronounced occlusal and interproximal wear on all teeth. (Digital photograph by R. P. Stephen Davis.)
The following sections discuss wear as documented by two primary means, namely through visual inspection of gross or macrowear and observation of microwear.
7.4.1 Macrowear
Some of the most comprehensive information on the behavioral significance of tooth wear is from temporally successive series of human remains. Investigation of tooth wear and the mechanical environment reveals a high degree of consistency with skeletal indicators of masticatory stress. Populations with high levels of mechanical demand and/or reliance on abrasive diets have relatively advanced wear (e.g., Barondess & Sauer, 1985; Hansen et al., 1991; Hartnady & Rose, 1991; Hemphill, 1992; Marks et al., 1988; Molnar et al., 1983; Powell & Steele, 1994; Scott et al., 1991; Walker, 1978). Traditional populations undergoing shifts from diets containing tough foods to diets dominated by processed foods during the twentieth century show reduction in wear (e.g., Davies & Pedersen, 1955; Staz, 1938). Important differences in interproximal wear and specific patterns of wear also help inform our understanding of masticatory behavior in human populations.
Occlusal surface wear severity
Temporal comparisons of populations undergoing subsistence change within regions, especially in regard to comparison of dentitions of foragers and later farmers, demonstrate the important links between diet, masticatory behavior, and wear. A wide variety of human populations show appreciable declines in severity of the occlusal surface wear that accompanies this shift. In general, the shift involves a change from hard-textured (and sometimes highly abrasive) foods to soft-textured foods. In the prehistoric Eastern Woodlands and other areas of North America, comparisons of hunter-gatherers with later agriculturalists show a consistent pattern of decline in severity of wear, which has been documented in the Tennessee River valley (Hinton et al., 1980; Newman & Snow, 1942; Smith, 1982), the Roanoke River valley in Virginia (Hoyme & Bass, 1962), the Lower Mississippi River valley (Rose et al., 1991), the southern border region of Oklahoma and Arkansas (Powell, 1985), western Pennsylvania (Sciulli & Carlisle, 1977), Ontario (Patterson, 1984), and the Tehuacan Valley of Mexico (Anderson, 1965, 1967). In all of these settings, the reduction in tooth wear appears to be linked with the shift from reliance on nondomesticated to domesticated plants or more intensified use of domesticated plants, and with changes in associated food preparation technology. In the Oklahoma-Arkansas and Tennessee regions, the change in diet involved the adoption of or increased emphasis on maize, a food which was typically prepared as a soft gruel. Perhaps of more importance in this setting in relation to the reduction in tooth wear is the replacement of grinding stone implements used for processing nondomesticated plants with wooden mortars and pestles that were used to process maize (Hinton et al., 1980; Powell, 1985). The shift from stone to wood in food preparation technology greatly reduced the harsh abrasive content of foods consumed by late prehistoric Indians, because it cut down on the number of exogenous materials added to foods.
Details on diet and food preparation in native populations are provided in historical descriptions for some regions. For example, in Virginia and the Carolinas, Arthur Barlowe described a late sixteenth century meal as including 'some wheate like furmentie, sodden Venison, and roasted, fish sodden, boyled, and roasted, Melons rawe, and sodden, rootes of divers kindes and divers fruites' (cited in Hoyme & Bass, 1962:353). Observations such as these provide important perspectives on the extent to which southeastern North American tribes consumed soft foods.
Old World populations represented by temporal sequences of skeletal
remains of earlier hunter-gatherers and later agriculturalists generally
show a change from high wear to moderate or low wear. For instance, a major dietary transition during the Neolithic in west-central Portugal involved a shift in subsistence from foraging and fishing to a more terrestrial-focussed diet that included domesticated animals (pig, sheep, goat) and plants (various grains) (Lubell et al., 1994). Quantification of tooth wear reveals that these populations show a dramatic and rapid reduction in severity of occlusal wear in this adaptive shift. For Mesolithic individuals from Moita with their lower third molars newly erupted, 87.5% have lower first molars with occlusal-surface enamel worn away completely. Comparably aged individuals from the Neolithic show significantly less wear on their lower first molars. For example, only 25.0% of individuals from Melides have severely worn first molars.

Some of the earliest evidence of agriculture and food production in post-Pleistocene populations is from the Near East. In a series of sites dating from the period of 12 000 to 7000 years ago in northern Syria, plant, animal, and human remains provide information on the complex nature of the shift from a primarily foraging to an intensive agricultural economy (Molleson & Jones, 1991). Populations placed a heavy emphasis on at least six domesticated cereals beginning about 10 000 years ago. Preliminary findings suggest that, unlike the settings discussed above, the initial agriculturalists in the early Neolithic express an increase in the severity of occlusal wear in comparison with earlier foragers (Molleson & Jones, 1991; and see Smith, 1972). This change appears to be related to the shift from less-coarse nondomesticated grains consumed by Mesolithic hunter-gatherers to coarse grains consumed by Neolithic farmers. Both groups used grinding stones to process grains; thus, it is unlikely that the manner of food preparation led to the increase in wear in the later populations. During the later Neolithic period, there is a reversal in wear, as most late Neolithic individuals show reduced wear compared with early Neolithic individuals. This is probably because ceramic vessels were used in the later period for boiling grains and other foods into soft mushes. In this case, innovations in food preparation technology resulted in lessened wear rates. Analysis of microwear in these populations confirms the reduction in dietary texture and abrasiveness (Molleson & Jones, 1991; Molleson et al., 1993; and see below).

At the other end of the continent in Japan, similar reductions in occlusal wear are linked to the increased reliance on soft, low abrasive foods beginning in the Jomon period and continuing to the present (Inoue et al., 1986).
A general decline in masticatory demands with agriculture and its
intensification has also been documented via analysis of occlusal wear in
northeastern Africa (Greene et al., 1967). Mesolithic foragers in Nubia have very severe occlusal wear, which decreases considerably in the following agriculture-dependent populations (Meroitic, X-group, Christian group). This decline in wear had important implications for changing patterns of oral health, as the reduction of wear coincides with an increase in dental caries and other dental pathological conditions. Perhaps, then, the continued presence of caries-prone grooves and fissures from reduced wear in the agricultural groups may have predisposed their teeth to increased decay with the shift to an emphasis on plant carbohydrates (Greene et al., 1967).

These studies underscore the very strong influence of dietary change on the wear environment, especially in relation to the worldwide transition from food collection to food production. Analysis of dental wear in settings involving other kinds of dietary shifts, within both hunter-gatherers (e.g., Walker, 1978) and agriculturalists (Gualandi, 1992), demonstrates appreciable changes in masticatory behavior not involving the foraging to farming transition. Early and late period populations from the Santa Barbara Channel Islands region saw a dramatic shift in dietary focus - from an emphasis on terrestrial foods to a reliance on marine foods after AD 1150 (Glassow, 1996; Walker, 1978). In the early period, terrestrial foods included seeds, prepared with grinding stones, and significant quantities of shellfish (Erlandson, 1994; Erlandson & Colten, 1991; Glassow, 1996). In the later period, a much greater emphasis was placed on fishing and hunting of marine mammals (Glassow, 1996; Walker, 1991-1992). The changes in tooth wear in these populations are profound, including especially a decrease in wear as measured by dentin exposure area on occlusal surfaces. This trend is probably related to the change in dietary emphasis and food preparation technology. The use of milling equipment by earlier populations contributed to relatively greater wear than in the late prehistoric populations.
Interproximal wear
Bioarchaeologists have given considerably less attention to the study of interproximal wear involving flattened facets at the areas of contact between adjacent teeth. Interproximal wear results from differential movement of teeth during chewing; the greater the mechanical forces placed on the teeth during mastication, the greater the amount of interproximal wear (Hinton, 1982; Wolpoff, 1971). For example, Australian Aborigines eating tough or abrasive foods have far more interproximal wear than Europeans eating soft, processed foods (Wolpoff, 1971).
Table 7.2. Interproximal wear facet breadth, PM2/M1, stratified by level of occlusal wear, prehistoric Tennessee Native Americans. (Adapted from Hinton, 1982. All measurements are in millimeters.)

                  Occlusal wear level on lower M1
Period              3     4     5     6     7     8
Archaic           4.9   5.4   5.3   6.1   6.4   7.1
  (n)             (8)   (6)   (6)  (18)  (31)  (22)
Woodland          4.8   5.3   5.4   5.9   6.0
  (n)            (33)  (11)   (9)  (22)   (8)
Mississippian     4.3   4.6   4.9   4.7
  (n)            (43)  (22)  (10)   (6)

Table 7.3. Interproximal wear facet breadth, M1/M2, stratified by level of occlusal wear, prehistoric Tennessee Native Americans. (Adapted from Hinton, 1982. All measurements are in millimeters.)

                  Occlusal wear level on lower M2
Period              2     3     4     5     6     7     8
Archaic           4.8   5.5   6.0   6.3   6.7   6.7   6.9
  (n)             (7)  (16)   (7)   (5)  (18)  (30)   (8)
Woodland          4.2   5.0   5.4   5.7   6.2
  (n)            (14)  (32)  (13)   (9)   (8)
Mississippian     4.0   4.3   5.1   4.5
  (n)            (28)  (42)   (3)   (3)
Hinton (1982) documented a significant reduction in interproximal wear as one moves in temporal succession from prehistoric Archaic to Woodland to Mississippian period premolars and molars in the Tennessee River valley (Tables 7.2 and 7.3). This trend suggests a decrease in masticatory loading of the dentition, a finding consistent with declining craniofacial robusticity in these populations (e.g., Hinton, 1981a). For example, temporomandibular joint dimensions decrease from the earliest to the latest periods. Hinton (1982) argues that, although the dietary change probably had an important influence on both interproximal wear and craniofacial robusticity, the more important change may have been related to the manner in which food was prepared. At the time of contact, native populations were consuming various soft foods, such as mushes, stews, and puddings (e.g., Hudson, 1976; Swanton, 1946). Archaeological and ethnobotanical evidence of foods and food processing in the earlier periods indicates a greater
coarseness of diet requiring more mechanical force in chewing than in the later periods. Comparisons of a temporal series spanning the shift from foraging to farming in southern Ontario likewise show a decrease in interproximal wear (Patterson, 1984). This change is consistent with Hinton's model of reduction in masticatory loading in relation to consumption of soft, agriculture-based foods during later prehistory in the Eastern Woodlands.
Occlusal wear patterns
Patterns of occlusal surface wear provide an additional perspective on masticatory and dietary behavior in past human populations. A number of early investigators noted trends in wear patterns with probable functional significance. In his detailed report on pathological conditions present in human remains recovered in Nubia at the turn of the century, Wood Jones (1910:279, 282) compared occlusal wear patterns in Predynastic dentitions with later samples (e.g., Christian and Byzantine periods), noting a clear tendency for flat ('level and even') wear on tooth crowns in the former, contrasting with 'a marked hollowing-out of the centre of the crowns' in the latter.

In order to test the hypothesis that systematic differences in human populations in tooth wear patterns are related to subsistence and food preparation, Smith (1984) examined flatness of occlusal wear in hunter-gatherers vs. agriculturalists worldwide. Foragers included European and Near Eastern Middle and Upper Paleolithic, French Mesolithic, precontact Australian, precontact Eskimo, and Archaic period Native Americans from Alabama; agriculturalists were from British and French Neolithic, Nubia, historic Britain (Iron Age, Anglo-Saxon, Medieval), Mississippian period from Alabama, and late prehistoric American Southwest Pueblo (Smith, 1984:42). Although the general severity of wear was similar within the two groups, and foragers overall had relatively more severe wear than agriculturalists, these differences were not uniform. For example, tooth wear in the Nubian agriculturalists was as severe as tooth wear in Australian Aborigine hunter-gatherers.

In spite of a great deal of variation in diet and food preparation technology across the samples, highly consistent differences in the angle of the occlusal wear plane on the mandibular first molars were documented. Agriculturalists show higher angles of occlusal surface wear than hunter-gatherers. For the more advanced wear stages (see Figure 7.6), these differences approach 10°. Wear on first molars differs by subsistence strategy: wear is cupped in agriculturalists and flat in hunter-gatherers (Figure 7.7).
Figure 7.6. Lateral views of tooth wear in Nubian A-group agriculturalist (a) and Eskimo hunter-gatherer (b) showing greater angle of wear plane in the former. (From Smith, 1984; reproduced with permission of author and John Wiley & Sons, Inc.)

Figure 7.7. Occlusal view of tooth wear in Nubian X-group agriculturalist (a) and Mesolithic hunter-gatherer (b) showing cupped occlusal wear in the former (first molars). (From Smith, 1984; reproduced with permission of author and John Wiley & Sons, Inc.)
Overall, wear plane angles and form reflect the kinds of foods being eaten as well as the manner in which they are prepared. Smith drew the general conclusion that these differences were related to greater 'toughness or fibrousness' of the diets in foragers than in farmers. Therefore, the pattern of flat and cupped wear that Wood Jones (1910) observed in his comparisons of Predynastic and Christian era populations in Nubia reflects the distinctions between the respective hunter-gatherer and agriculturalist groups documented in Smith's study. Additional confirmation of Smith's findings has been provided in other settings
undergoing the shift from food collection to food production. In South Asia, Pastor (1992) compared gross wear and microwear (discussed below) in lower first and second molars in a series of prehistoric hunter-gatherers from the Mesolithic site of Mahadaha in the Ganga River valley and incipient agriculturalists from the Chalcolithic site of Mehrgarh in the Indus River valley. Diet in the Mesolithic comprised tough, fibrous foods, including nondomestic mammals and wild grains, roots, and other plants that were processed in stone querns. Chalcolithic populations consumed domestic animals (cattle, sheep, goats) and several varieties of wheat and barley. In addition to showing a marked decline in wear severity, the pattern of wear is strikingly similar to Smith's hunter-gatherer and agriculturalist samples for the Mesolithic and Chalcolithic, respectively: Mesolithic teeth are worn flat at slight angles, and Chalcolithic teeth are worn at a greater angle with distinctive cupping.

In previously mentioned Portuguese populations, molar occlusal surfaces also show a temporal change from flat to cupped wear as one moves from the Mesolithic to Neolithic sites (Lubell et al., 1994). This study also serves to illustrate the potential variation of wear patterns in human populations as wear angles in Mesolithic foragers are much greater than in later Neolithic farmers. The reasons for the differences from Smith's findings are largely unknown. Lubell and coworkers (1994) speculate that differences between the Portuguese populations and the samples studied by Smith, especially in age composition and molar crown morphology, may be important considerations. Although a common pattern includes flat wear in hunter-gatherers and oblique wear in agriculturalists, variability is present in some groups (see also Schmucker, 1985).

Other types of wear demonstrate significant variation in masticatory behavior in past human populations. Analysis of prehistoric foragers (Alaskan Eskimos, Australian Aborigines), plus those supplemented with some maize agriculture (Libben, Ohio), and full maize agriculturalists (American Southwestern Pueblo) reveals important differences in form of anterior tooth wear as well as differences in anterior vs. posterior wear (Hinton, 1981d). In hunter-gatherers, the severity of anterior tooth wear is greater than, or equal to, the severity of wear on posterior teeth. In intensive agriculturalists from the Southwest, and to a reduced extent in the forager-farmers from Libben, anterior wear is appreciably less severe than posterior wear. The form of wear on the anterior teeth is distinctive: prehistoric foragers show a characteristic rounded wear, whereas the agriculturalists (including Libben) have cupped wear. The consistency of these findings confirms the overall conclusions drawn by Smith (1984) that tooth wear is mediated by masticatory behaviors. The relatively heavier use
of the anterior teeth in hunter-gatherers is in line with ethnographic reports of tooth use in Australians and Eskimos, in both dietary and nondietary functions. The distinctive labially oriented rounded wear in hunter-gatherer dentitions is probably a function of the use of incisors and canines in various nonmasticatory activities (e.g., hide preparation; and see below). The distinctive cupping wear on agriculturalist incisors and canines appears to be especially prominent in individuals who have lost posterior teeth prior to death. This anterior wear pattern may represent an alteration of tooth use resulting from the loss of posterior teeth, namely from use of the incisors and canines for a combination of food preparation and mastication.
Social differences in pattern and severity of wear: sex and status
Differences in the pattern and severity of wear indicate behavioral variability in tooth use for a range of human populations. Adult female incisors and canines are more worn than male incisors and canines in the Mesolithic population from Skateholm, Sweden (Frayer, 1988b). Similar differences are observed in native populations from the American Great Plains (Reinhard et al., 1994) and South Africa (Morris, 1992). In these settings, differences in anterior tooth wear suggest gender-specific behaviors. For example, historic evidence indicates that Omaha women from the Great Plains were responsible for hide processing, and probably used their front teeth in the activity. In South Africa, some San foragers use their anterior teeth to prepare plant fibers to make ropes (e.g., Van Reenen). Study of archaeological dentitions from this setting suggests that these activities were primarily female responsibilities.
In Australian Aborigines, differences in female and male occlusal surface wear appear to be associated with gender-based masticatory and extramasticatory activities (Richards, 1984). Dental wear in Narrinyeri and Kaurna foragers from the southern Australian coast and adjacent mainland dating to the nineteenth century shows differences between the sexes. In the Narrinyeri foragers, maxillary premolars and second molars and mandibular canines, premolars, and molars are less worn in males than in females. In Kaurna foragers, the only sex difference was for less worn maxillary central incisors in males. Richards (1984) suggests that the greater wear in females than in males reflects distinctions along gender lines in masticatory practices. Ethnographic observations note that men selected the most tender choices of meat, leaving women with the less-choice, tougher portions.
Elsewhere in Australia, wear patterns vary between closely related populations. In contrast to the pattern of less wear in Australian Aboriginal
males than females, females in the lower Murray River valley exhibit less anterior wear than males (Molnar et al., 1989). Further up the Murray River valley, females exhibit much more wear than males, a pattern that is reminiscent of the Narrinyeri and Kaurna groups (cf. Richards, 1984). Although the reasons for these differences are unknown, they attest to the presence of highly variable wear patterns between closely related populations in what is often characterized as a homogeneous region (Molnar et al., 1989).
Gross wear differences by social group and rank have been identified in past societies. In the Medieval Edo period of Japan, members of the elite Shogun class had virtually no occlusal surface wear, unlike lower-status, non-Shogun individuals (Suzuki, 1969). This indicates a less mechanically demanding, less abrasive diet in the elite than the remainder of Japanese society. The presence of narrow faces, reduced size of the maxillae and mandibles, and gracile masticatory muscle attachment sites in Shogun individuals corroborates a reconstruction of the consumption of soft, processed foods by the elite.
7.4.2 Extramasticatory wear
Some of the greatest mechanical demands placed on the dentition involve the use of teeth as 'tools' in nonmasticatory functions. Milner & Larsen (1991) note that 'teeth can show the effects of a wide variety of activities unrelated to eating that result in unusual, and at times highly distinctive, patterns of abrasion, crown fractures, or traumatic tooth loss'. Until recently, a variety of human populations have used their teeth in extramasticatory ways, and in no other group is this expressed as well as among Eskimos (Cybulski, 1974; Larsen, 1985; Leigh, 1925; Merbs, 1983; Milner & Larsen, 1991; Molnar, 1972; Pedersen, 1952; and many others). The following discussion considers unintentional changes on teeth arising from extramasticatory activities. (For a discussion of intentional mutilations of teeth, see Milner & Larsen, 1991.)

Modifications involving transversely oriented grooves on worn occlusal surfaces of permanent mandibular incisors and canines are especially illustrative of the role of teeth as tools in a number of foraging human groups in prehistoric North America. Five prehistoric older adult males from the Great Basin in west-central Nevada display well defined single or multiple grooves located on anterior teeth (Larsen, 1985) (Figure 7.8). The grooves are generally polished and rounded; scanning electron microscopy (SEM) analysis reveals a series of fine scratches lying parallel to each groove's main axis (Larsen, 1985). Similar types of grooves are present on
Figure 7.8. Adult mandibular dentition showing occlusal surface grooves; Humboldt Lake Basin, Nevada. (From Larsen, 1985; photograph by Barry Stark; reproduced with permission of Wiley-Liss, Inc., a division of John Wiley & Sons, Inc.)
occlusal surfaces of prehistoric forager dentitions from California (Schulz, 1977), Prince Rupert Harbour in British Columbia (Cybulski, 1974), and central Texas (Bement, 1994). The high degree of polish and the orientation of grooves in these samples indicates that, as in the Great Basin setting, some type of flexible material had been passed transversely over the anterior teeth in a repetitive and habitual fashion, especially in processing materials such as sinews for bow strings or plant fibers for cordage or basketry.

Ethnographic and historical evidence provides corroborative support for possible uses of the dentition that may have resulted in grooves. In the southwestern margin of the American Great Basin (Death Valley), Coville (1892) described native Panamint women preparing sumac and willow for materials to be used in making 'wicker-work utensils': A woman 'selects a fresh shoot, ... bites one end so that it starts to split into three nearly equal parts. Holding one of these parts in her teeth and one in either hand, she pulls them apart, guiding the split with her fingers ... Taking one of these, by a similar process she splits off the pith ... leaving a pliant, strong, flat
strip of young willow or sumac wood' (Coville, 1892:358). Greenland Eskimo women pull thin cords of animal tendon across the clenched anterior teeth in order to moisten and soften the sinew (Pedersen & Jakobsen, 1989). Exclusively older women (> 40 years) are responsible for the preparation of sinew in this manner. Interestingly, in archaeological dentitions, only older adult female dentitions display these 'sinew grooves' (Pedersen & Jakobsen, 1989). The presence of grooves in females only in Eskimos and in males only in Great Basin Amerindians suggests that activities causing these grooves were gender-based (although see Schulz, 1977).
Virtually all occlusal surface grooves in anterior teeth are found in New World foragers. Early Neolithic dentitions from later levels at Tell Abu Hureyra, Syria, also display a distinctive pattern of occlusal surface grooving. In this setting, grooves may have been produced from preparation of plant materials, such as canes for baskets, that were needed to carry harvested grains from the fields (Molleson, 1994). Preparation of tough plant materials for utilitarian purposes also may explain the presence of occlusal surface grooves in prehistoric manioc agriculturalists in St. Thomas, U.S. Virgin Islands (C. S. Larsen et al., unpublished manuscript).
Alterations of anterior teeth involving notching of the mesial or distal occlusal margins have been observed in southeastern U.S. samples from Tennessee (Blakely & Beck, 1984) and the Georgia coast (Larsen & Thomas, 1982). In the latter case, an individual shows wear on the mesial corner of a maxillary right first incisor that resulted from extramasticatory use, such as from clamping fishing nets or from processing plant materials. The presence of cumulative wear indicates that the teeth were not intentionally altered for personal ornamentation, such as with purposeful drilling.
Lingual surface wear on maxillary incisors and canines is another highly distinctive pattern of tooth wear involving extramasticatory behaviors. Most (39/46) adult crania from the preceramic (4200-3000 BP) site of Corondó, Rio de Janeiro State, Brazil, display pronounced lingually oriented flat wear (Turner & Machado, 1983) (Figure 7.9). There is no corresponding wear on the lower teeth. Called 'lingual surface attrition of the maxillary anterior teeth' (LSAMAT), the orientation of wear striations strongly suggests that some kind of extramasticatory use of the teeth caused the wear pattern. Its presence in adult females and males suggests that it is not associated with a gender-specific activity. The presence of moderate lingual wear in the anterior teeth of older children (beginning at about 10-11 years) indicates the time in life when the behaviors commence that cause the wear. Given the high prevalence of caries in this group
Figure 7.9. Lingual wear on maxillary permanent left first incisor; Tutu, St. Thomas, U.S. Virgin Islands. (Digital photograph by R. P. Stephen Davis.)
(10.7% of teeth) and the known heavy reliance on manioc - a cariogenic carbohydrate requiring extensive preparation prior to consumption - the teeth appear to have been used to process plants for consumption (Turner & Machado, 1983). Support for this argument has been provided by study of additional prehistoric dentitions from Panama (Irish & Turner, 1987), Texas (Hartnady & Rose, 1991), and the U.S. Virgin Islands (C. S. Larsen et al., unpublished manuscript). In these settings, the form of wear, reliance on manioc (or other fibrous plants), and high prevalence of dental caries argues that these modifications are due to the habitual use of the upper teeth in preparation of abrasive plant material. Because these studies indicate that wear occurs mostly on the upper teeth, the material being manipulated must have been placed between the upper teeth and tongue and drawn across the upper teeth in a back-to-front movement. With regard to the Brazilian populations, Turner & Machado (1983) suggest that the upper teeth were used to shred or peel tuberous plants (e.g., manioc) 'comparable to the modern way we eat artichokes - by pulling and planing the edible petals across the occlusal surfaces of our anterior teeth' (1983:128; and see Irish & Turner, 1987). Microwear analysis of incisors from the Tutu population from the Virgin Islands shows that minute
striations caused by the passing of material across the tooth surface involved a back-to-front movement (C. S. Larsen et al., unpublished manuscript), thus adding additional confirmation of Turner & Machado's artichoke hypothesis.

A similar pattern of lingual wear in maxillary anterior teeth has been identified in a Neolithic dentition at Mehrgarh, Baluchistan (Lukacs & Pastor, 1988). Lukacs & Pastor (1988) suggest that lingual wear in this individual resulted from the use of teeth for the processing of animal skins. The infrequency of lingual wear and its presence in both maxillary and mandibular teeth contrasts with the pattern of high frequency and presence mostly in maxillary teeth in dental samples from Brazil (Turner & Machado, 1983), Panama (Irish & Turner, 1987), the Virgin Islands (C. S. Larsen et al., unpublished manuscript), and Texas (Hartnady & Rose, 1991). Therefore, LSAMAT appears to be a primarily New World phenomenon.
7.4.3 Microwear
Microscopic analysis of wear on tooth surfaces provides important information on the intricacies of dietary adaptation, tooth use, and masticatory behavior not available from the study of gross wear alone (Teaford, 1991). The superior depth of focus and highly detailed resolution in SEM analysis has made it an especially powerful tool for documenting minute aspects of wear (Teaford, 1991). Photographs (called micrographs) taken with the SEM instrument represent an electronic map of brightness and contrast on the tooth surface. Because the analysis of microwear features (e.g., pits and scratches) requires high magnification (e.g., × 500), only small areas can be assessed at any one time. However, the details made visible in SEM analysis potentially allow precise determinations of tooth use.
Studies of extant and fossil primates, nonhominid and hominid, show that qualitative and quantitative analysis of occlusal surface microwear features provides useful information on dietary behavioral differences among and between animal species (e.g., Gordon, 1984; Grine & Kay, 1988; Kay, 1987; Ryan, 1981; Ryan & Johanson, 1989; Teaford & Walker, 1984; Ungar, 1994; Walker, 1980, 1981). In all of these studies, there are several potential drawbacks in SEM analysis, whether determinations of wear are made in comparison of different species or between different time periods within species. Microwear may simply represent a documentation of masticatory or dietary behavior for the time immediately prior to death, otherwise known as the 'Last Supper' phenomenon (Grine, 1986). The
generally high degree of consistency in various analyses indicates the importance of this kind of study in assessing patterns of tooth use and drawing inferences about masticatory behavior (see review by Teaford, 1991).
Experimental research indicates that the teeth of animals and humans fed soft or nonabrasive diets have fewer microwear features than those of animals fed hard or abrasive diets (Teaford, 1991; Teaford & Lytle, 1996). Like macrowear, these differences are influenced by the nature of the food itself and by extraneous elements, such as soil particles, which may be incorporated prior to consumption. Microwear variation in archaeologically recovered humans indicates differences related to food texture and diet within specific groups (e.g., Fine & Craig, 1981; Marks et al., 1988; Puech et al., 1983) or in relation to subsistence change (e.g., Bullington, 1991; Rose et al., 1991; Teaford, 1991).
Occlusal microwear on molars from Early Mississippian premaize populations and Middle Mississippian period (Zebree site: AD 900-1200) maize-dependent populations in the central Mississippi River valley undergoes a change from the presence of numerous large striations to few large striations; occlusal surfaces go from being rough to smooth (Rose et al., 1991). Rose and coworkers (1991:17) interpret the striking change in surface topography as indicating 'a radical reduction in abrasive particles associated with changes in both food content and preparation'. They note increased prevalence of dental caries, from 2.6 to 3.5 lesions per dentition, and less negative δ13C values in bone collagen, indicating that the microwear alterations resulted from a change in food consumption.
Similarly, an association between dietary change and microwear has been documented in a preliminary study of first molars from precontact hunter-gatherers (AD 1000-1150) from St. Catherines Island, Georgia, their early contact period descendants living in a Catholic mission (AD 1607-1680), and the displaced late contact period population living on Amelia Island, Florida (AD 1680-1702) (Teaford, 1991). The precontact population was dependent on hunting, gathering, and fishing and the later contact populations were intensive maize agriculturalists. Overall, precontact teeth display more pitting and fewer scratches on occlusal surfaces than those from the later two periods (Figure 7.10). Striations are considerably narrower in the contact era agriculturalists. These changes in microwear reflect a reduction in abrasiveness of diet during the contact period, suggesting that maize-based diets were of a softer-textured nature than the foraging-based diets.
The number of microwear features on occlusal surfaces of deciduous incisors and molars from Middle Woodland (50 BC-AD 250) incipient
Figure 7.10. Scanning electron micrographs of maxillary permanent first molar occlusal surfaces (× 500). (a) Deep pits and wide grooves in forager (Marys Mound, St. Catherines Island, Georgia). (b) Slight pits and narrow grooves in farmer (Santa Catalina de Guale de Santa Maria, Amelia Island, Florida). (From Teaford, 1991; reproduced with permission of author and Wiley-Liss, Inc., a division of John Wiley & Sons, Inc.)
premaize agriculturalists and Mississippian (AD 1000-1350) maize agriculturalists in the lower Illinois River valley of western Illinois increased with age during both periods (Bullington, 1991). Reflecting the difference in use of anterior and posterior teeth, the incisors have relatively more scratches
than pits and the molars have more pits than scratches. These differences indicate the use of the incisors in initial processing and the molars in crushing hard objects. The earlier group focussed on wild plants and animals in combination with reliance on various starchy seeds with hard seed coats. The later group continued to utilize these food items, but with partial replacement by maize. In addition, these foods were probably boiled in ceramic vessels for long periods of time, thus reducing the toughness of foods consumed by prehistoric populations (Bullington, 1991). Unlike the changes observed in the Mississippi River valley and the Georgia Bight, comparisons of the Middle Woodland and Mississippian groups revealed no differences in microwear. Bullington (1991) suggests that the lack of temporal change may reflect the fact that her study focusses exclusively on young juveniles; other investigations focus on adult microwear. There is a strong similarity in microwear between the foragers and farmers overall, but the two groups are distinctive in the youngest cohort (ca. six to 12 months). Mississippian teeth have lower feature frequencies than Middle Woodland teeth. This suggests that very young children in the later period were consuming softer foods than very young children in the earlier period.
Similar changes in microwear in relation to the shift from hard- to soft-textured diets have been identified in a number of settings in the Old World. Comparison of canines and molars from two Neolithic (ca. 2000 BC) and two historic (AD 1750-1800) dentitions from western coastal Kyushu, Japan, shows a change from generally long, wide striations to thin, narrow striations (Hojo, 1989). Hojo (1989) suggests that the later diet included smaller abrasives than the earlier diet.
Change in occlusal microwear in relation to the shift from foraging to dependence on cereal grains in western Asia has been investigated via comparison of dentitions from the earlier discussed Neolithic settlement at Abu Hureyra in the Euphrates River valley in northern Syria (Molleson, 1994; Molleson & Jones, 1991; Molleson et al., 1993). Microscopic analysis of molars provides corroborative evidence for the study of gross wear. Comparison of pit diameter, feature frequency, pit density (percentage determined from number and size of pits), and ratio of striations and other linear features to pits shows a clear shift in microwear in deciduous and permanent teeth through time. The change in wear is especially conspicuous in the shift from foraging in the Mesolithic to early agriculture in the pre-pottery Neolithic, being accompanied by a dramatic increase in microwear feature density. Molleson and coworkers (1993) attribute this trend to a shift from a relatively soft diet based on consumption of roots and small, wild grains to a few cereal types prepared with grinding stone
equipment. In the later Neolithic, there is a reduction in feature density and pit diameter, reflecting a return to softer diets, involving the cooking of food with ceramic vessels and increased consumption of meat. Clearly, the adoption of pottery and its use in cooking food to a soft consistency had great implications for the manner in which teeth wore. The consumption of softer foods brought about by cooking in ceramic vessels in the later Neolithic may have provided a means for earlier weaning and shorter periods between births. If so, this may explain the dramatic increase in birth rate and population size during this time (and see Buikstra et al., 1986).
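Since the variables compared in studies like these (pit diameter, feature frequency, pit density, and the ratio of linear features to pits) are simple counts and measurements digitized from micrographs, their computation is easily made explicit. The sketch below (in Python) is purely illustrative: the input format and the length-to-width cutoff used to separate pits from scratches are assumptions made for the example, not the procedures of Molleson and coworkers or of any other study cited here.

from statistics import mean

def summarize_microwear(features, scan_area_um2, pit_aspect_max=4.0):
    # Each feature is a (length_um, width_um) pair digitized from an
    # SEM micrograph. Features with a length:width ratio below
    # pit_aspect_max are classed as pits; the rest are scratches
    # (an assumed 4:1 cutoff for this illustration).
    pits = [(l, w) for l, w in features if l / w < pit_aspect_max]
    scratches = [(l, w) for l, w in features if l / w >= pit_aspect_max]
    return {
        # features per unit of scanned surface area
        'feature_density': len(features) / scan_area_um2,
        # mean pit breadth, a proxy for 'pit diameter'
        'mean_pit_diameter_um': mean(w for l, w in pits) if pits else 0.0,
        # percentage of the field occupied by pits (length x width boxes)
        'pit_density_pct': 100.0 * sum(l * w for l, w in pits) / scan_area_um2,
        # ratio of scratches (linear features) to pits
        'scratch_to_pit_ratio': len(scratches) / len(pits) if pits else float('inf'),
    }

# A coarser, more heavily pitted surface yields higher values throughout:
print(summarize_microwear([(12.0, 2.0), (3.0, 2.5), (4.0, 3.0)],
                          scan_area_um2=10000.0))

Summary statistics of this kind are what underlie statements such as the 'dramatic increase in microwear feature density' noted above.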
A parallel development for a shift from foraging to farming has been documented via microwear analysis in South Asia based on the analysis of dentitions from the Mesolithic (Mahadaha site; 8000-1000 BC), the pre-pottery Neolithic (Mehrgarh site; 7000-6000 BC), incipient agricultural Chalcolithic (Mehrgarh site; 4500 BC), and urban agriculturalist Harappa culture (Harappa site; 2500-2000 BC) (Pastor, 1992, 1993, 1994). Comparative analysis of microwear in permanent first and second molars indicates a generally similar pattern of increase in features (pits, scratches) as in Abu Hureyra. There is a marked increase in microwear features in the pre-pottery Neolithic, followed by a decline with the adoption of agriculture and agricultural intensification (Pastor, 1994). In general, trends in microwear indicate the higher prevalence of finer scratches in the hunter-gatherers and incipient agriculturalists, whereas microwear features become coarser in more intensive agriculturalists. The latter characteristic reflects an increased consumption of cereal grains and use of grinding equipment in the Neolithic and Chalcolithic. The use of stone grinding equipment and its influence on microwear is illustrated by comparisons of microwear in a living American fed a typical soft diet with the effects of sandstone-ground maize (Teaford & Lytle, 1996). These comparisons reveal that microwear on a maxillary molar increased by 30 times in the consumption of ground maize over a period of a week. This study supports the observation that food preparation technology has a profound effect on tooth wear.
Virtually all of the aforementioned microwear analyses focus on the occlusal surfaces of teeth. Buccal surfaces have the potential to provide an important perspective on tooth use, because wear on them is not influenced by contact between opposing teeth (Ungar & Teaford, 1997). The importance of buccal surfaces is emphasized in research by Lalueza and coworkers (1996) on a wide range of past and recent human populations, including hunter-gatherers, pastoralists, agriculturalists, and fossil hominids. Analysis of buccal surfaces of premolars and molars in these samples reveals that foragers with high-meat diets have fewer and more vertically oriented
Figure 7.11. Chipped teeth; Greenland Eskimo. (From Hansen et al., 1991; reproduced with permission of authors and Greenland National Museum and Archives.)
striations than do agriculturalists. The greater frequency of buccal striations in plant consumers may be due to the presence of abrasive phytoliths (Lalueza et al., 1996).
7.4.4 Tooth damage due to masticatory and extramasticatory use
Dentitions from a range of archaeological contexts display damaged enamel, ranging from slight chipping of occlusal margins to fractures and other related conditions (Hansen et al., 1991; Schwartz et al., 1995; Turner & Cadien, 1969; reviewed by Milner & Larsen, 1991). Missing enamel from molars, ranging from loss of tiny pieces to exfoliation of large blocks, is the most common type of damage (Figure 7.11). Few systematic investigations of gross premortem damage to teeth, with regard to either the prevalence or the distribution of damaged teeth, have been undertaken. This is especially surprising, given the value of studies of occlusal surface damage observed microscopically for masticatory behavioral inference (see above), as well as some of the well known patterns of crown fracture seen in populations with excessive masticatory demands (e.g., Eskimos: Hansen et al., 1991; Pedersen, 1947; Schwartz et al., 1995). An important exception is the comparison of prehistoric and protohistoric Aleuts, Eskimos, and northern Amerindians which demonstrates clear distinctions in patterns of tooth use (Turner & Cadien, 1969). Turner & Cadien (1969) observed a pattern of dental damage reminiscent of damage observed in chipped stone tools. The damage appears to be due to use of teeth in heavy masticatory functions
(e.g., crushing of bone) and in demanding extramasticatory functions (e.g., hide preparation). Eskimos have an unusually high frequency of chipping damage in teeth (79%) in comparison with Aleuts (22.8%) and Amerindians (18.4%). Among other Eskimo groups, variation in the prevalence of chipping reflects the different types of foods consumed and the manner of their preparation. Illustrating this variation in food and subsistence technology, the prevalence of macrodamage varies widely, ranging from low values in Kodiak Islanders consuming predominantly fish (37.9%) to high values in Sadlermiut populations consuming caribou and sea mammals (87.5%).
Traumatized teeth are also relatively commonplace in some Polynesian groups. For example, prehistoric Hawaiians show a prevalence of 18% for females and 6% for males (Snow, 1974). The higher prevalence in women is consistent with ethnographic accounts of women habitually using their teeth for various extramasticatory tasks, such as cord production, leaf cutting, and nut cracking. In these populations, maxillary and mandibular tori are also more prevalent in females than males, thus supporting a functional interpretation of these skeletal features.
Few researchers have documented temporal change in macrodamage to teeth, especially with respect to major adaptive shifts or change in masticatory behavior (see discussion by Milner & Larsen, 1991). Comparison of trauma (chipping and fracturing of tooth crowns) in Middle Woodland hunter-gatherers (LeVesconte Mound site; ca. AD 230) and Late Woodland and historic Iroquois maize agriculturalists (Bennett site [AD 1270]; Kleinburg Ossuary [AD 1600]) reveals a substantial decline in trauma, from 42.9% (LeVesconte Mound) to only 7.4% (Bennett site) of permanent teeth (Patterson, 1984). This reduction in dental trauma is consistent with decreases in macrowear reported by Patterson (1984) for the same series of dentitions, suggesting that a reduction in abrasives and food consistency probably led to less chipping and fracturing of teeth.
7.5 Summary and conclusions
A great deal of craniofacial and dental variation can be linked directly to the mechanical environment and the role that diet and subsistence technology play in shaping it. Skull form is strongly influenced by functional demands. In the Holocene, the change to a more globular shaped cranial vault in many areas of the world appears to represent a compensatory response to decrease in functional demand as foods have become softer. This conclusion is underscored by comparisons of
archaeological populations that consumed abrasive foods with populations that consumed nonabrasive foods. Food abrasiveness is influenced by the nature of the food itself or the manner in which it is processed or both. A range of other data drawn from studies of age changes in craniofacial size and morphology, occlusal abnormalities, tooth size, gross wear arising from masticatory and nonmasticatory functions, microwear, and dental trauma provides compelling evidence for the important role of the dentition and craniofacial skeleton in masticatory and nonmasticatory behaviors.
8 Isotopic and elemental signatures of
diet and nutrition
8.1 Introduction
Documentation of past foodways provides the requisite context for evaluating the effects of nutrition on growth and development, the assessment of stress and disease from paleopathological indicators, and the role of physical activity in the food quest, among other topics discussed in the foregoing chapters. There are a range of conventional approaches for characterizing past diets, including analysis of plant and animal remains, coprolites, and tools used for extracting food from the environment or for processing it once it is acquired. These approaches do not necessarily represent the proportions of foods or food classes consumed by past populations. For example, the notoriously poor preservation of plants in many archaeological contexts can prevent the documentation of their role in diets. Food refuse is often subject to preservation-related biases that confound nutritional interpretation.
Bone chemistry - specifically involving the measurement of stable isotope ratios and elemental (major and trace) constituents in archaeological human skeletons - greatly enhances our ability to characterize past human diets. The reading of chemical signatures passed from the foods being eaten to the consumer allows the documentation of diet. These signatures do not represent a 'reconstruction' of diet; rather, they facilitate the identification of consumption profiles of different foods eaten by past populations (Keegan, 1989).

Regional and methodological perspectives on isotope and elemental analysis in archaeological bone and reconstruction of diet are presented in a number of comprehensive reviews (Ambrose, 1987, 1993; Katzenberg, 1992b; Keegan, 1989; Klepinger, 1984; Pate, 1994; Price et al., 1985; Sandford, 1992, 1993a; Schoeninger, 1989, 1995a; Schoeninger & Moore, 1992; Schwarcz & Schoeninger, 1991). This chapter discusses isotopic and elemental analyses of archaeological human remains and the ways in which these analyses contribute to our understanding of dietary behavior and nutritional ecology in earlier societies.
8.2 Isotopic analysis
8.2.1 Background
Isotopes are chemical elements that share the same number of protons and electrons, but differ in the number of neutrons. Unlike radioactive or unstable isotopes (e.g., 14C), stable isotopes of the same element do not transmute over time. Most elements exist in two or more isotopic forms. Of the several hundreds of stable isotopes across all elements, 10 elements have at least two isotopes with biological significance. Two of these 10 elements have received the preponderance of attention by anthropologists in reconstructing and interpreting earlier diets, namely carbon (C) and nitrogen (N).
8.2.2 Stable carbon isotopes
Carbon has two stable isotopes, 12C and 13C. Field and laboratory studies involving controlled feeding experiments have shown that stable carbon isotope ratios in an animal's tissues, including bone, reflect the ratios of diet (Bender et al., 1981; DeNiro & Epstein, 1978; Tieszen et al., 1983). The difference in relative abundance of isotopes between dietary resources is quite small. Thus, the ratios in tissues are expressed in parts per thousand (read as parts 'per mil' or ‰) relative to an international standard (marine fossil, Belemnitella, from the Peedee geological formation in South Carolina [PDB]), as delta (δ) values. These values denote differences that originate in plant photosynthetic pathways, including either C3 (Calvin-Benson), C4 (Hatch-Slack), or CAM (crassulacean acid metabolism). C4 plants discriminate less against the isotopically heavier 13C isotope when using CO2, the carbon source for all terrestrial plants, from the atmosphere. As a result, C4 plants have less negative δ13C values than C3 plants. In temperate areas, most plants are the C3 variety (e.g., some grasses, trees, shrubs, tubers) and have δ13C values averaging -26‰, with a range of -22‰ to -38‰ (Tieszen, 1991). In C4 plants, those typically adapted to hot and dry climates (e.g., tropical grasses such as maize, some amaranths, chenopods, setarias), the reduced discrimination against the 13C isotope results in values in the plant that average -12.5‰, and range from about -9‰ to -21‰ (Tieszen, 1991). The human consumers of these plants retain the differences in δ13C values, shifting approximately 5‰ (called the fractionation factor) from the food to what is observed in their bone collagen samples (Schoeninger & Moore, 1992; van der Merwe, 1982). On average, C4 plants have δ13C values that are about 14‰ less negative than C3 plants and their consumers. Plants with CAM photosynthesis pathways (many cacti and succulents) have δ13C values that overlap the values in C3 and C4 plants, because they use either a C3 or a C4 pathway, as determined by environmental circumstances.
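For readers unfamiliar with the notation, the delta value introduced above has a standard definition; in LaTeX form,

\[
\delta^{13}\mathrm{C} = \left( \frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\text{sample}}}{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\text{PDB}}} - 1 \right) \times 1000\ \text{per mil.}
\]

Reusing the plant averages and the approximately 5‰ collagen fractionation cited above (a back-of-the-envelope illustration, not a formal calibration), a consumer of a pure C3 diet (plants averaging -26‰) would be expected to yield bone collagen near -26 + 5 = -21‰, whereas a consumer of a pure C4 (maize) diet (plants averaging -12.5‰) would yield collagen near -12.5 + 5 = -7.5‰.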
Collagen requires essential amino acids for its formation and, therefore, largely reflects the δ13C of the protein component of diet. The apatite signatures represent the whole diet, which may include carbohydrates and fats, in addition to protein (Ambrose et al., 1995; Ambrose & Norr, 1993; Cooke et al., 1996; Krueger & Sullivan, 1984; Tieszen & Fagre, 1993). Experimental research by Ambrose & Norr (1993) contradicts earlier discussions and studies linking δ13C values in collagen to whole diet (e.g., Schoeninger, 1989; van der Merwe, 1982) and bone apatite values to carbohydrates and fats (e.g., Lee-Thorp et al., 1989).
Marine plants have δ13C values that are between the values of C3 and C4 terrestrial plants, owing to variation in their carbon sources, ranging from detritus of terrestrial origin (mix of local terrestrial plants), dissolved CO2 (-7.0‰), and dissolved carbonic acid (0‰) (Schoeninger & Moore, 1992). Therefore, marine organisms consuming these plants have values ranging from closer to C3 plants at one extreme to those of C4 plants at the other. Marine fishes and mammals have δ13C values that are less negative (by about 6‰) than animals feeding on C3-based foods and values that are more negative (by about 7‰) than animals feeding on C4-based foods (Schoeninger & DeNiro, 1984).
Maize agriculture in archaeological settings
In noncoastal settings or regions of the world where the confounding influence of marine foods on carbon isotopic signatures is not present, the introduction and increased use of domesticated C4 plants have been precisely documented by carbon stable isotope analysis. Beginning with Vogel & van der Merwe's (1977; van der Merwe and Vogel, 1978) pioneering study of prehistoric populations from the North American Eastern Woodlands, regional and site-specific studies reveal the appearance and increased reliance on maize - the only major economically important C4 plant used by native societies in this area (Figure 8.1). In prehistoric Ontario, isotopic values begin at a low of -20.5‰ before AD 700, representing no maize in the diet (Katzenberg, 1993a; Katzenberg et al., 1995; Schwarcz et al., 1985). After AD 700, isotope values become less negative, rising to -15‰. Overall, values become less negative especially after AD 1000, peaking between AD 1300 and 1400 (Katzenberg et al., 1995). The presence of relatively less negative δ13C values in later prehistoric sites
[Figure 8.1 appears here: mean δ13C values (‰), ranging from about -25 to -10 on the vertical axis, plotted against age (BC/AD) from -4000 to 1000.]
Figure 8.1. Temporal change in stable carbon isotope ratios in eastern North America. Error bars indicate one standard deviation from the mean where N > 1. (Adapted from Ambrose, 1987; revised version provided by and reproduced with permission of author and Center for Archaeological Investigations and Board of Trustees, Southern Illinois University.)
in this setting is generally consistent with findings from other localities in the interior of eastern North America (cf. Ambrose, 1987; Boutton et al., 1991; Buikstra, 1992; Buikstra & Milner, 1991; Schurr, 1992; Schurr & Schoeninger, 1995).
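A common way to read collagen values such as these, sketched here only as an illustration and not as the calibration used in any of the studies just cited, is a two-endmember linear mixing model:

\[
\%\,\mathrm{C}_4 \approx \frac{\delta^{13}\mathrm{C}_{\text{collagen}} - \delta^{13}\mathrm{C}_{\mathrm{C}_3\ \text{end}}}{\delta^{13}\mathrm{C}_{\mathrm{C}_4\ \text{end}} - \delta^{13}\mathrm{C}_{\mathrm{C}_3\ \text{end}}} \times 100.
\]

Taking collagen endpoints of about -21‰ (pure C3 diet) and -7.5‰ (pure C4 diet) from the fractionation arithmetic given earlier, the Ontario shift from -20.5‰ to -15‰ would correspond to a rise from essentially no maize to roughly (21 - 15)/(21 - 7.5), or about 44%, of dietary carbon from C4 sources.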
For the broad region of midwestern and interior northeastern North America (Illinois and Ohio River valleys, Great Lakes region), the period of AD 500 to 1300 evinces a trend of increase in δ13C values from the earliest to latest periods, but with notable variation relating to geography, climate, and perhaps cultural preference. In the southern area of the eastern United States (states of Missouri, Arkansas, Tennessee), isotope analysis indicates appreciable increases in maize consumption that occur relatively rapidly but late in the prehistoric record (e.g., Boutton et al., 1991; Buikstra et al., 1988). In contrast, populations living in some areas (e.g., Illinois) have higher δ13C values that increase gradually throughout the period (Buikstra et al., 1987).

Temporal comparisons in Ontario populations show a trend that is intermediate between the southeastern United States and Illinois samples (Katzenberg et al., 1995). Katzenberg and coworkers (1995) suggest that the variation between northern and southern regions reflects the shorter growing season in the former and, hence, the somewhat reduced use of maize. They speculate that the somewhat later full-blown adoption of maize in the southern area may reflect the existence of an established
agricultural lifeway at the time of the introduction of maize (e.g., involving squash and other indigenous plants). As is characteristic of other cultural phenomena, dietary practices may have been adopted rapidly by some groups and not by others, for no apparent reason. Additionally, other factors influencing stable isotope values include regional variation in δ13C values of maize, consumption of C3 versus C4 plants by economically important animals (e.g., deer), fish consumption practices, and the presence of other C4 plants (e.g., amaranth in the middle Mississippi and Ohio River valleys) (Katzenberg et al., 1995).
Even within relatively restricted geographic regions, there are apparent spatial differences in maize consumption. For example, maize appears to have been a greater part of the diet in the outlying areas around the late prehistoric Mississippian center of Cahokia than in Cahokia itself (Buikstra & Milner, 1991). These differences may represent social distinctions between the core and hinterlands of the Cahokia chiefdom. Analysis of δ13C values in the late prehistoric Ohio River valley drainage indicates a link between social complexity and maize production (Schurr & Schoeninger, 1995). Comparison of populations from late prehistoric tribal-society Fort Ancient sites (Baum, Gartner, Feurt, Sun Watch) with contemporary populations from organizationally more complex Mississippian sites (Angel, Wickliffe Mounds) reveals very different isotopic compositions. Average δ13C values are generally more negative than -11‰ for the Fort Ancient samples, whereas the values were less negative than -10‰ for the Mississippian samples. Additionally, there is a strong correlation between site size and isotope values (r = 0.9951) both in this series of sites and in other Fort Ancient and Mississippian period sites whose isotope values are reported in the literature. These and other observations (e.g., location of Mississippian sites in areas with higher agricultural productivity [cf. Ward, 1965]) strongly suggest that maize played a more important role in subsistence strategies of Mississippian chiefdoms than Fort Ancient tribal groups in late prehistory. Schurr & Schoeninger (1995) speculate that Fort Ancient groups had a subsistence system that was more stable than that practiced by Mississippian groups, as indicated by the persistence of Fort Ancient cultures into the historic period. This persistence may reflect the fact that Fort Ancient populations developed in situ and were well adapted to the limitations of the Ohio Valley ecosystem. Mississippian societies represent a cultural intrusion into the region and practiced an agricultural and subsistence system that may not have been as well suited to this setting. These socially complex groups did not share the relatively greater long-term success of the Fort Ancient cultures - they ceased to exist well before the arrival of Europeans.
In addition to documenting variable patterns of maize consumption in the broad region of the Eastern Woodlands, these studies point to another important finding. Pollen and other nonisotopic evidence indicate that maize was probably eaten by native populations quite early in some areas of the Eastern Woodlands (e.g., 3500 BP at Lake Shelby, Alabama [Fearn & Liu, 1995]). Isotopic analysis of a wide array of prehistoric native populations indicates, however, that maize did not become economically important until very late in prehistory, certainly after AD 900 in eastern North America. This pattern coincides with overall nutritional decline, increased morbidity, increased warfare, and declining skeletal robusticity in later prehistory (Cohen & Armelagos, 1984; Larsen, 1990b; Larsen & Milner, 1994).

Stable isotope analysis presents a less clear picture of maize consumption in western North America, primarily because native populations consumed significant amounts of various foods with δ13C values similar to maize, such as bison meat, amaranth, and cacti (Schoeninger & Moore, 1992). For example, less negative δ13C values (-8‰) were identified in skeletal series from northern Texas and Oklahoma (Habicht-Mauche et al., 1994). These values reflect the consumption of cactus and other nondomesticated C4 plants, bison, and perhaps some grown or traded maize. In other late prehistoric Great Plains settings (e.g., Crow Creek, South Dakota), less negative values also represent the consumption of a mixture of C3 and C4 diets, although maize probably contributed significantly to native diets (Bumsted, 1984).
Analysis of carbon isotope ratios in various settings in the American Southwest points to temporally distinctive patterns of diet. Comparison of early (AD 1275-1330) with late (AD 1330-1400) populations at Grasshopper Pueblo, Arizona, reveals a trend toward less negative δ13C values, indicating an intensification of maize agriculture (Ezzo, 1993). This increase coincides with a period of resource stress, decrease in dietary diversity, and environmental disruption during the later occupation of Grasshopper Pueblo. Ezzo (1993) contends that these disruptions and a decreasing quality of diet may have contributed to the abandonment of the region in the late fourteenth to early fifteenth centuries. At the Pecos Pueblo site, temporal changes in stable isotope values show clear differences in diet in comparison of earlier prehistoric and later mission populations. Increasingly negative values indicate a decrease in consumption of maize in the mission group, which may reflect a disruption in trade of food and other materials between Puebloan and Plains groups during the contact period (Spielmann et al., 1990).
Isotopic analyses of Mesoamerican and South American populations
also show a highly variable pattern of C4 plant (maize) consumption. In the Tehuacan Valley, Mexico, maize consumption appears to occur far earlier than in other regions of North America, with a shift to maize by as early as 4000 BC (DeNiro & Epstein, 1981; Farnsworth et al., 1985). In contrast, analysis of stable carbon isotope ratios in lower Central America reveals that maize consumption was very minor or absent in Panama for the same time period (3000-5000 BC) and was adopted considerably later in Costa Rica (AD 300-1550) (Norr, 1991).
Analysis of prehistoric and contact era Maya skeletons from Belize and Honduras provides an important picture of dietary variation in Mesoamerica. The Maya Lowlands contain a wide diversity of plants and animals with potential dietary value, but poor preservation of food remains - plants and animals - greatly limits understanding of foodways in this otherwise archaeologically well documented region. It has long been recognized that maize was an important food source in past Mayan populations. Maya skeletal series show an abundance of skeletal pathology indicating precarious lifeways with poor-quality nutrition and elevated disease burdens (see Chapters 2 and 3; review by Wright & White, 1996). Due to the paucity of subsistence-related data from archaeological localities, it is difficult to link these indicators of health to any one particular subsistence strategy in a precise fashion.
The picture of dietary ecology has changed markedly with the work on stable isotopes in Mayan populations. Temporal and spatial comparisons of δ13C values from Lamanai and Pacbitun, Belize, indicate that prehistoric Mayans placed less emphasis on maize than did some other Mesoamerican groups (e.g., Tehuacan) (White et al., 1993, 1994; White & Schwarcz, 1989). Nonlinear temporal changes suggest a variable emphasis on maize in Belize. Pre-Classic populations (1250 BC-AD 250) have less negative δ13C values (-12.4‰) at both sites, suggesting strong reliance on maize. Late and Terminal Classic groups have increasingly negative δ13C values at Lamanai and less negative values at Pacbitun, indicating respective decrease and increase in maize consumption at the two localities (White et al., 1993). Convergence of diet and reduced reliance on maize at the very end of the Terminal Classic period is inferred from more negative δ13C values in both populations.
Following the collapse of the Classic period Maya in the eighth to ninth centuries AD, δ13C values at Lamanai (data are not available for Pacbitun) show a marked reversal of the trend of decreased maize consumption documented for the Late and Terminal Classic periods. Post-Classic and Historic period Maya skeletons have substantially less negative values (-9.3‰ and -9.9‰, respectively), indicating an increased reliance on maize.
White & Schwarcz (1989) interpret this trend as representing a doubling of maize consumption in less than a century. Therefore, in late prehistoric and early contact era Maya, maize consumption was high, similar to estimates of 65% to 86% of diet based on maize in living Maya Indians (see White et al., 1994).
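The arithmetic behind such percentage estimates is a linear two-end-member mixing model. The following sketch illustrates the calculation, assuming illustrative collagen end-member values of roughly -21.5‰ for a pure C3 diet and -7.5‰ for a pure maize (C4) diet; these end-members are assumptions for demonstration only, not values taken from the studies cited above.

    # Two-end-member linear mixing for collagen d13C (illustrative sketch).
    D13C_PURE_C3 = -21.5   # assumed collagen value for a 100% C3 diet (per mil)
    D13C_PURE_C4 = -7.5    # assumed collagen value for a 100% maize diet (per mil)

    def maize_fraction(d13c_collagen: float) -> float:
        """Estimate the C4 (maize) fraction of diet from a collagen d13C value."""
        return (d13c_collagen - D13C_PURE_C3) / (D13C_PURE_C4 - D13C_PURE_C3)

    # The Post-Classic Lamanai mean of -9.3 per mil implies roughly 87% maize
    # under these assumed end-members:
    print(f"{maize_fraction(-9.3):.0%}")

Under these assumptions the Post-Classic Lamanai mean falls near the upper end of the 65% to 86% range quoted above; the result is only as good as the chosen end-members and the assumed diet-to-collagen offset.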
In the entire record of dietary evidence for the Maya region, little evidence can be found for an association between social collapse and dietary change (Wright & White, 1996). From isotopic analysis, prehistoric diets appear to have been linked in a general sense with major events in Maya history, but the variation is so great that no argument can be made for tying resource stress - as indicated by dietary change - to the collapse and abandonment of the southern Maya Lowlands (Wright & White, 1996).
Maize consumption and social status
As with comparisons of dental caries prevalences and stature (see Chapters 2 and 3), δ13C values from high- and low-status prehistoric Central and South American Amerindians suggest differences in maize consumption along social lines. At Pacbitun, social differences in maize consumption are indicated by the presence of less negative isotope values (more maize) in high-status (crypt burial) individuals than in low-status (pit and urn burial) individuals (less maize). The reverse is indicated at Lamanai, where high-status individuals consumed less maize than low-status individuals (White et al., 1993; White & Schwarcz, 1989). This suggests that at Lamanai high-status adults may have had greater access to protein and had better diets generally than low-status individuals. High- and low-ranking individuals at Copán express no differences in mean δ13C values (Reed, 1994). High-status individuals at this Mayan center show a greater range of variation in δ13C values than low-status individuals, indicating perhaps greater dietary variability and breadth in higher-ranked individuals (Reed, 1994).
At the La Florida site in the northern highlands of Ecuador, carbon isotope analysis shows distinctive differences between high- and low-status individuals during the Chaupicruz phase (AD 100-450; Ubelaker et al., 1995). High-status individuals have less negative carbon isotope values than low-status individuals (δ13C = -10.3‰ vs. -11.6‰, respectively). Ethnohistoric and archaeological (mortuary) evidence suggests that these differences may reflect a greater consumption of maize beer in high-status adults, especially males.
The dietary signatures of carbon isotope values and inferences drawn
from dental caries prevalence provide congruent results in the comparison of high- and low-status individuals. At Copán and Lamanai, dental caries prevalences are substantially higher in low-status adults than in high-status adults, and high-status adults have relatively negative δ13C values. Both data sets indicate greater consumption of maize in low-status individuals (Hodges, 1985; White, 1994). At Copán, the greater range of values and lower caries prevalences in high-status individuals suggest a more varied, less carbohydrate-rich diet (and see Reed, 1994).
At Pacbitun, the apparently lower consumption of maize in low-status individuals may be problematic because of the small sample size: isotopic data are available for only two urn and three pit burials (White et al., 1993). The argument for greater maize consumption in high-status individuals is bolstered by the significant statistical correlation (r = 0.923) between δ13C values and distance from the ceremonial core of the site: the closer to the core, the less negative the values become. It may be the case that, at this particular locality, high-status individuals regarded maize as emblematic of their social ranking. Although this is different from other Maya data, it nevertheless may represent a cultural tradition that is unique to Pacbitun.
Sex differences in diet
Adult males show less negative values than adult females at both Pacbitun (White et al., 1993) and Copán (Reed, 1994). Lamanai females and males show no difference in values (White & Schwarcz, 1989). Reed (1994) and White and coworkers (1993) contend that sex differences in diet at Pacbitun and Copán represent variation in socioeconomic status, with males consuming more maize than females. With regard to Copán, Reed observes that this difference 'parallels the observation of higher frequencies of anemia, infection, and a statistically significant higher rate of caries in females than in males' (1994:216). Based on discussions presented in Chapters 2 and 3, his finding of reduced consumption of maize in females actually contradicts these other data sets. For example, greater caries prevalence in females suggests more emphasis on maize in women than in men in this setting. Therefore, although both isotopic and caries analyses point to possible differences in diet by gender, the contradictory results do not provide a clear picture of what these differences may have been. At Lamanai, dental caries prevalence data, like the isotope evidence, suggest no distinctive differences in diet between females and males (White, 1994).
The variable nature of diet in comparison of isotope and dental caries evidence suggests that these data sets may reflect somewhat different aspects of diet. Dental caries is caused by acids produced by bacterial metabolism of carbohydrates (sugars). Carbon isotope data derived from collagen - the basis for all of the above Mayan isotopic investigations - provide information primarily on protein consumption (e.g., Ambrose & Norr, 1993). Therefore, changing patterns of diet - especially with regard to the consumption of maize - will not necessarily be expressed in the same way in the analysis of dental caries and stable carbon isotope ratios. The general congruence of temporal trends in caries prevalence and in carbon isotopes in the Mayan Lowlands and other regions argues for a generally overlapping picture of these data sources and dietary shifts. For example, caries prevalence declines dramatically in the Terminal Classic, but then peaks in the post-Classic, indicating a respective decrease and increase in maize consumption. This pattern is identical to changes in δ13C values (White, 1988; White et al., 1994). The interpretations of dietary patterns are enhanced by use of both direct (isotope) and indirect (caries) approaches.
Maize consumption by Euroamericans
Soon after the arrival of the first European explorers to the Americas in the late fifteenth century, an exchange of a wide range of foods, including plants, began between the Old World and the New World. From the Americas, maize was introduced to and widely adopted by Europeans, not for consumption by humans but as food for domestic farm animals (Hawke & Davis, 1992). As Europeans settled the Eastern Woodlands of North America, and especially as they pushed westward into the American Midwest (the so-called Corn Belt) in the nineteenth century, maize was quickly recognized for its productive potential. Historical sources suggest that maize continued to be viewed by many early Euroamerican settlers as a food for farm animals, unfit for human consumption. Thus, maize was economically important, but not for direct human use; Old World C3 plants introduced to the Americas - wheat, barley, and rye - were of much greater direct dietary importance.
Analysis of Euroamerican skeletons from pre-twentieth century cemeteries indicates that maize was variably used throughout the Eastern and Midwestern regions of the United States and Canada. Isotopic analysis of the remains of the Cross family, an early nineteenth century pioneer family in Illinois, shows a narrow range of δ13C values around a mean of -12.4‰ (Larsen, Craig et al., 1995). The high degree of homogeneity in values is to be expected in a temporally restricted, closely related group of individuals living in the same locality. The heavy isotope value indicates the likely importance of maize in this setting, and is similar to values of prehistoric Amerindians living in the same general region (e.g., Dickson Mounds, -10.8‰; Schild, -12.3‰; Norris Farms, -12.6‰ [Buikstra & Milner, 1991]) and other areas of the Eastern Woodlands (see Buikstra, 1992; Katzenberg et al., 1995). The Cross family δ13C values are also consistent with other evidence of dietary practices and specific crops grown by the family. Probate records enumerate '20 acres corn (more or less)' on the Cross homestead. Although this maize probably served as food for farm animals, isotope evidence indicates that it was also an important source of food for the family. The consumption of domestic farm animals that ate maize may have also contributed to these high values in the human samples.
These isotope values from Illinois show some similarities, but also some contrasts, with other Euroamerican samples analyzed from other contexts in the Eastern Woodlands. Determination of δ13C values from remains of the nineteenth century Harvie family in southern Ontario reveals very little variation in diet (Katzenberg, 1991a). Unlike in the Illinois series, the Harvie carbon stable isotope values are quite different from those of late prehistoric maize-dependent native populations living in the region prior to European contact (see Katzenberg et al., 1995; Schwarcz et al., 1985). The mean δ13C value is -18.7‰, which is similar to fifteenth to seventeenth century European values of -18‰ to -19‰ (cf. B. Kennedy, 1989). These findings suggest that the Harvie family did not eat maize, but rather, mostly C3 plants, such as wheat, barley, rye, and oats. The Harvie dietary composition, then, is much more in line with conventional interpretations of nineteenth century foodways: farm animals ate maize; farmers did not.
Analysis of nineteenth century U.S. military personnel buried at the Snake Hill site, Ontario, indicates a wide range of isotopic variation in contrast with the Cross and Harvie pioneer families (Katzenberg, 1991b). The mean value from the Snake Hill series is -15.8‰, with a range of -18.5‰ to -12.5‰. This variation reflects the presence of individuals recruited from all over the northeastern United States and the high degree of variability in regional cuisines, especially with regard to use of C3 (e.g., wheat and rice) and C4 (maize) plants. Therefore, it is not surprising that these values range between a mostly C3 diet and a mixed C3-C4 diet (Katzenberg, 1991b).
Native Old World C4 plant use - millet
In the archaeological past, virtually all European plant domesticates with economic significance were C3 plants. An important exception is broomcorn millet (Panicum miliaceum), the only major C4 plant. Millet, a tropical grass, was present in central Europe by the late fifth millennium BC (Murray & Schoeninger, 1988). Like maize in later contexts, millet has been assumed to have been primarily used as food for farm animals. Analysis of collagen from Iron Age skeletons from Magdalenska Gora, Slovenia, indicates less negative δ13C values, in sharp contrast with virtually all other noncoastal European samples from various time periods (cf. B. Kennedy, 1989). Because marine foods were probably not part of the diets of this group, consumption of C4 plants or of animals consuming C4 plants best explains these values (Murray & Schoeninger, 1988). It is unlikely that C4-consuming animals contributed appreciably to Iron Age diets in Slovenia, because isotope values derived from animal remains are very negative, unlike the values determined from human remains.
Millet was also an important cultigen during the Neolithic in northern China, especially in the Huang He (Yellow River) basin (Schwarcz & Schoeninger, 1991; van der Merwe, 1992). Analysis of δ13C values indicates that millet contributed well over 50% of the carbon in the diets of these groups. From the period of the fifth century BC to the second century AD, isotope values became increasingly negative, indicating a shift from C4 (millet) to a predominantly C3 diet based on rice and wheat. Today, the isotopic signature of dietary carbon has again shifted to less negative values, this time reflecting a C4, maize-based diet (van der Merwe, 1992). Along with millet, sorghum - also a C4 domesticate - dominated the archaeological record for much of Nubian prehistory (White & Schwarcz, 1994). Analysis of bone and other tissues indicates that, although these plants were present in substantial amounts, diets were based principally on C3 cultigens (e.g., wheat and barley) (White & Schwarcz, 1994), but analysis of hair shafts from mummies indicates seasonal shifts from C3 to C4 staples (White, 1993). Isotopic evidence indicates a shift towards consumption of more C4 plants in the X-group (AD 350-550), a period characterized by political instability, alterations in trade patterns, and decreased water availability as the level of the Nile River lowered. In the following Christian period (AD 500-1400), more negative isotope values reveal a decline in C4 plant consumption. This shift in diet appears to have accompanied an increase in the elevation of the Nile and generally improved economic conditions.
Marine diets and coastal environments
In coastal areas where no C4 plants are consumed, carbon stable isotope data provide an important means of assessing the relative importance of marine and terrestrial foods. In Scandinavia, clear shifts in dietary orientation are documented in the comparison of Mesolithic and post-Mesolithic populations (e.g., Lidén, 1995; Tauber, 1981). In coastal Mesolithic Danish populations, for example, generally less negative values (-11‰ to -15‰) indicate a reliance on marine foods (Tauber, 1981). The values are similar to those of populations known to have depended on sea food (e.g., Greenland Eskimos). In later coastal groups (Neolithic, Bronze Age, early Iron Age), δ13C values are progressively more negative (-18‰ to -23‰), indicating a shift to terrestrial C3 foods, such as domestic plants and farm animals. Isotopic evidence indicates that post-Mesolithic Danish and Swedish populations consumed few marine foods, despite close proximity to them (Lidén, 1995; Tauber, 1981). Similarly, late Neolithic populations from Alepotrypa Cave, coastal southern Greece, have very negative δ13C values, for both collagen (mean = -19.9‰) and apatite (mean = -13.1‰) (Papathanasiou et al., 1995). These findings indicate that the diets of these groups were largely focussed on terrestrial C3 plants and animals. The narrow range of values (collagen: -19.5‰ to -20.2‰) indicates a remarkably high degree of homogeneity in diets.
An abrupt shift from marine to terrestrial food resources in the comparison of Mesolithic and Neolithic populations (ca. 8500-4500 BP) in Portugal is indicated by isotopic analysis (Lubell et al., 1994). Values for the samples drawn from a range of coastal and near-coastal sites range from -15.3‰ to -20.4‰. The less negative values are predominantly Mesolithic and the more negative values are Neolithic. The presence of less variability in the Neolithic period suggests an increasing homogeneity in diet during the period of increased consumption of plant and animal domesticates (Lubell et al., 1994). Unlike the pattern of increased prevalence of dental caries in New World settings with the increasing reliance on plant domesticates, there is a marked decline in dental caries prevalence, number of carious tooth surfaces, and premortem tooth loss in permanent mandibular molars in late Mesolithic and Neolithic Portuguese in comparison with earlier populations from the region. The relatively high prevalence of dental caries in the Mesolithic period is probably related to the consumption of cariogenic plants such as nondomesticated figs (Lubell et al., 1994; and see Chapter 3).
8.2.3 Nitrogen stable isotopes
Nitrogen has two stable isotopes, 14N and 15N. Field and laboratory feeding studies demonstrate that stable nitrogen isotope ratios in an animal's tissues, including bone, reflect similar ratios in the diet (DeNiro & Epstein, 1981; Hare et al., 1991; Wada, 1980). The ratios determined from bone samples are expressed as ‰ deviations relative to the international standard of atmospheric nitrogen (or Ambient Inhalable Reservoir, AIR).
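Both the carbon and nitrogen values discussed in this chapter use the same delta notation: the sample ratio is compared with a standard ratio and the deviation is expressed in parts per thousand. A minimal sketch of the calculation follows; the two measured sample ratios are hypothetical, while R_PDB and R_AIR are the conventional 13C/12C and 15N/14N reference values.

    # Delta notation: deviation of a sample isotope ratio from a standard, in per mil.
    R_PDB = 0.0112372    # 13C/12C of the PDB carbonate standard
    R_AIR = 0.0036765    # 15N/14N of atmospheric nitrogen (AIR)

    def delta(r_sample: float, r_standard: float) -> float:
        """Return the delta value (per mil) of a measured ratio against a standard."""
        return (r_sample / r_standard - 1.0) * 1000.0

    # Hypothetical measured ratios for a bone collagen sample:
    print(round(delta(0.0110236, R_PDB), 1))   # d13C of about -19.0 per mil
    print(round(delta(0.0037134, R_AIR), 1))   # d15N of about +10.0 per mil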
Nearly all (99%) nitrogen is bound up as N2 in the atmosphere or in ocean water (Schoeninger & Moore, 1992). Nitrogen isotopes are potentially useful for discriminating between marine and terrestrial food sources, owing to differences in the way that nitrogen enters the biological domain of these ecological systems. This distinction involves differences between plants and bacteria that fix nitrogen directly from the air (called nitrogen-fixers) and plants that acquire nitrogen through the soil via bacterial degradation. With regard to the former, blue-green algae and bacteria fix nitrogen directly from the air, thus giving these organisms δ15N values similar to that of air (close to zero). In the latter, nitrates produced in the soil from the decomposition of organic material have more 15N than 14N relative to the atmosphere. Therefore, plants utilizing these nitrates tend to have somewhat higher δ15N values (approximately 2‰) than nitrogen-fixers. Although the values are highly variable, nitrogen-fixing terrestrial plants tend to have δ15N values close to zero.
Virtually all available nitrogen is in the form of nitrogen compounds (nitrates and ammonia) with relatively elevated 15N concentrations. Generally, the δ15N values for terrestrial plants are 4‰ lower than those for marine plants. Overall, there is a wide range of values in terrestrial plants. The differences at the bottom of the food chain are passed up to plant-consuming animals higher in the food chain. Therefore, δ15N values in marine organisms and those in other aquatic settings (e.g., rivers and lakes) are higher than in terrestrial ones (up to 20‰) (see Schwarcz et al., 1985; Schwarcz & Schoeninger, 1991). For most regions, marine vertebrates express higher δ15N values than do terrestrial vertebrates (Schoeninger & DeNiro, 1984). δ15N values in terrestrial plants and animals are about 10‰ less positive than in marine plants and animals for many areas of the world (Schoeninger & DeNiro, 1984). Ultimately, differences in marine and terrestrial environments are reflected in humans, and these differences in tissues, including bone, can be used to determine the relative importance of foods from these respective ecosystems (Schoeninger et al., 1983; Schwarcz & Schoeninger, 1991).
Nitrogen isotopic signatures are influenced by a number of other factors, especially climate. Generally, cool forest soils have low δ15N values, owing to higher nitrogen-fixation and mineralization rates, and hot savannah or desert soils have high δ15N values (Ambrose, 1993). Other contexts producing generally high δ15N soils include areas with a history of evaporation (e.g., saline soils) and those enriched in organic materials (e.g., guano). The very high values reported from analysis of human bone samples in some desert settings may be explained by these factors (e.g., Ambrose & DeNiro, 1986a, 1986b; Aufderheide, Tieszen et al., 1988; Heaton et al., 1986; Schoeninger, 1995b). In some settings, terrestrial animals living in arid environments have higher δ15N values than marine animals. Because of climatic and/or other ecological variables, Late Stone Age coastal and interior foragers in Southwestern Cape Province, South Africa, actually have nitrogen isotope ratios inverse to those expected, but carbon isotope values clearly identify marine vs. terrestrial differences in resources (Sealy, 1986; Sealy & van der Merwe, 1985, 1986; Sealy et al., 1987). In other settings, stable nitrogen isotope ratios are useful for identifying consumers of terrestrial vs. marine foods (e.g., Schoeninger & DeNiro, 1984; Schoeninger et al., 1983). These various findings indicate that local factors can play an important role in isotopic composition and diet (see Ambrose, 1993).
Weaning
Weaning is a complex behavior involving variable periods of time and rates of introduction of foods in addition to breast milk. Enamel defect analysis of archaeological dentitions provides some indirect information about weaning, but these findings are complicated by a variety of confounding factors that influence hard-tissue evidence of stress (see Chapter 2). Analysis of stable nitrogen isotopes provides a far more precise indication of the timing and rate of weaning. Age changes in δ15N values in relation to the breast-feeding of infants are demonstrated in a number of settings (e.g., Fogel et al., 1989; Katzenberg, 1991a, 1993b; Katzenberg & Pfeiffer, 1995; Reed, 1994; Tuross & Fogel, 1994; White & Schwarcz, 1994; and see review by Katzenberg et al., 1996). The phenomenon is due to differences in trophic levels between the mother and her nursing infant: the infant feeding from the mother is at a level higher in the food chain than the mother. There is a negative correlation between δ15N and age in individuals, ranging from birth through young childhood (Figure 8.2). Pre-weaned infants (< 2 years) are enriched by about 2-3‰ in comparison with the δ15N values of weaned infants and older individuals (e.g., White & Schwarcz, 1994). These differences are similar to the trophic level shifts seen in comparisons of herbivores and carnivores: carnivores are 3‰ enriched in comparison with their herbivore prey (Ambrose, 1986; Katzenberg, 1989; Schoeninger & DeNiro, 1984). These findings also point to a gradual reduction in δ15N values over young childhood, rather than an abrupt decrease, suggesting that the introduction of foods and replacement of breast milk is a gradual process. Newborn infants in archaeological samples have isotope values that are similar to those of adults. This suggests that collagen at birth is similar to the mother's and that there is a lag in the registration of the nitrogen isotopic signature in the collagen (Katzenberg, 1993b; Katzenberg et al., 1996; Tuross & Fogel, 1994). The age of weaning coincides with morbidity indicators of physiological stress, such as hypoplasias and circular caries, in the few studies in which nitrogen isotope values have been determined (e.g., Reed, 1994). Moreover, the pattern of change in δ15N values corresponds with the known age of weaning in populations for which data are historically documented, such as Euroamericans (e.g., Katzenberg & Pfeiffer, 1995).

[Figure 8.2. Stable nitrogen isotope values from birth to eight years; Prospect Hill Methodist Church, Newmarket, Ontario. Y axis: δ15N (‰); X axis: age (years). (From Katzenberg, 1993b; reproduced with permission of author and …)]
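A minimal sketch of how this weaning signal might be screened in a juvenile series follows. The (age, δ15N) pairs, the adult baseline, and the 0.5‰ tolerance are all hypothetical, chosen only to mirror the 2-3‰ nursing enrichment and gradual decline described above.

    # Screen juvenile d15N values for the age at which they return to the
    # adult baseline, bracketing the completion of weaning (illustrative only).
    samples = [  # (age in years, d15N in per mil) - hypothetical data
        (0.2, 10.1), (0.8, 12.6), (1.5, 12.1), (2.2, 11.3),
        (3.0, 10.4), (4.5, 9.9), (6.0, 10.0),
    ]
    ADULT_MEAN = 10.0   # assumed adult female baseline (per mil)
    TOLERANCE = 0.5     # illustrative cutoff for "back to baseline"

    # Skip the youngest infants, whose collagen still lags at the maternal value.
    at_baseline = [age for age, d15n in samples
                   if age > 0.5 and abs(d15n - ADULT_MEAN) <= TOLERANCE]
    print(f"Earliest post-infancy age at adult baseline: {min(at_baseline)} years")

With these toy values the screen brackets weaning completion at about three years; real analyses fit the full age curve rather than applying a single cutoff.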
Osteoporosis and nitrogen isotope ratios
In addition to diet, physiology appears to have an important impact on bone chemistry. Study of bone collagen from Nubian X-group skeletons reveals that osteoporotic females have higher δ15N values than normal females, especially in the third and fifth decades (White & Armelagos, 1997). This pattern is consistent with histomorphometric differences between X-group individuals with osteoporotic and normal skeletal tissue. In this setting, it appears that the differences in δ15N values reflect differences in urea nitrogen excretion or altered renal processing and clearance of phosphorus and calcium (White & Armelagos, 1997). In the hot desert setting of Nubia, water stress may contribute to elevation in δ15N values. Lactation contributes to urea loss, which has been linked theoretically to enrichment of 15N in bone collagen (see Ambrose & DeNiro, 1986b). These findings suggest that physiological disruption, and not diet, may be responsible for the variation in nitrogen stable isotopes (and bone loss) in this population (White & Armelagos, 1997). More generally, this strongly suggests that stable nitrogen isotopes are susceptible to nondietary - especially physiological - factors and may serve as an indicator of osteoporosis in past human populations.
8.2.4 Bivariate use of carbon and nitrogen stable isotope ratios
In many coastal areas of the New World, where marine foods and maize
were simultaneously consumed by human populations, the overlapping
carbon isotope ratios for both foods preclude the discrimination of diets, at least with regard to assessing the relative contributions of marine foods vs. maize. To circumvent this complication, the use of bivariate plots of δ15N and δ13C values has been advocated (Cooke et al., 1996; Schoeninger et al., 1983, 1990). In the Georgia Bight, archaeological
evidence indicates that marine foods continued to be heavily used throughout prehistory and into the contact period (Larsen, 1982). Maize consumption is largely inferred from circumstantial evidence in this region, including changes in settlement (increased population size and aggregation), increasing social complexity, and increasing morbidity (e.g., dental caries, periosteal reactions; see Chapters 2 and 3). Owing to poor preservation of plant remains in late prehistoric archaeological sites, dietary reconstruction is inconclusive.
Analysis of collagen samples from prehistoric foragers and farmers and early and late contact mission Indians alleviates the incomplete picture of diet. Isotopic analysis reveals a distinctive temporal trend showing increasingly less negative δ13C values and less positive δ15N values (Hutchinson et al., 1996; Larsen, Schoeninger et al., 1992; Schoeninger et al., 1990) (Figure 8.3). This trend indicates an increasing focus on terrestrial plants (maize) and animals, coupled with perhaps a decreasing reliance on marine resources. This shift commences during the twelfth century AD, reaching its peak in Spanish mission native populations inhabiting St. Catherines Island, Georgia, in the early to middle seventeenth century and in the later descendant groups on Amelia Island, Florida.
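The logic of the bivariate approach can be illustrated with a small sketch: a sample with a δ13C value of about -11‰ is ambiguous on carbon alone (marine food or maize?), but its δ15N value resolves the ambiguity. The end-member collagen values below are illustrative assumptions, and the nearest-neighbor assignment is a toy stand-in for the visual clustering used in published bivariate plots.

    # Toy bivariate (d13C, d15N) classifier for collagen samples.
    import math

    END_MEMBERS = {                 # (d13C, d15N) in per mil - assumed values
        "terrestrial C3": (-20.0, 7.0),
        "maize (C4)":     (-8.0,  8.0),
        "marine":         (-12.0, 16.0),
    }

    def nearest_diet(d13c: float, d15n: float) -> str:
        """Assign a sample to the closest dietary end-member in the plane."""
        return min(END_MEMBERS,
                   key=lambda k: math.dist(END_MEMBERS[k], (d13c, d15n)))

    print(nearest_diet(-11.0, 9.0))    # low d15N -> "maize (C4)"
    print(nearest_diet(-11.0, 15.0))   # high d15N -> "marine"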
Additional analysis of collagen from the late prehistoric Irene Mound site on the north Georgia coast indicates that, although maize generally increased in importance, there is a marked decrease in δ13C values for the period immediately prior to European contact, suggesting that use of maize temporarily decreased following its initial introduction in the region (Larsen, Schoeninger et al., 1992). The decline in maize consumption may be linked with social and environmental disruption in the Mississippian period in this region in late prehistory. Similar declines in the use of maize during the period preceding contact by Europeans have also been documented in at least one other major Mississippian center in the American Southeast, Moundville, Alabama (see Schoeninger et al., 1996). These findings from Georgia and Alabama point to the highly variable pattern of maize use in the Eastern Woodlands.

[Figure 8.3. Plot of Georgia Bight stable carbon and nitrogen isotope values; E, early preagricultural; L, late preagricultural; A, agricultural; C, contact era. X axis: δ13C ‰ (PDB); Y axis: δ15N ‰. Note the increase in carbon values and decrease in nitrogen values. (From Larsen, Schoeninger et al., 1992; reproduced with permission of Wiley-Liss, Inc., a division of John Wiley & Sons, Inc.)]
Analysis of carbon and nitrogen stable isotope ratios helps to clarify the relative consumption of marine foods and maize in other coastal settings, including New England (Bourque & Krueger, 1994; Little & Schoeninger, 1995; Medaglia et al., 1990), Gulf coast Florida (Hutchinson & Norr, 1994), Panama and Costa Rica (Norr, 1991), and Belize (White & Schwarcz, 1989). The use of nitrogen isotope ratios alone is complicated in certain marine settings having predominantly nitrogen fixation - for example, coral reefs and salt marshes (Capone & Carpenter, 1982). In these ecosystems, δ15N values approach those documented in terrestrial plants and animals (Schoeninger & DeNiro, 1984).
Even where maize is not utilized in New World settings, the isotopic compositions involving carbon and nitrogen are not always clear. Walker & DeNiro (1986) showed that the simultaneous use of carbon and nitrogen ratios serves to clarify the potentially conflicting signatures of either element alone in marine environments of the Santa Barbara Channel region. Archaeological and biocultural evidence indicates a shift in dietary emphasis from terrestrial to marine in later prehistory in this region, as well as a generally heavier emphasis on marine foods on the islands than on the nearby mainland coast or interior. Isotopic analysis reveals that the carbon and nitrogen ratios in native populations progressively decrease from the islands to the mainland coast and mainland interior, indicating a strong correlation of diet with geographical location (Walker & DeNiro, 1986). These geographical distinctions in isotopic signatures show that terrestrial diets were emphasized in mainland populations, and marine diets were emphasized in island populations. Additionally, isotope values from older sites on the mainland coast and interior are lower for both carbon and nitrogen than those from younger sites in the same region. This finding is consistent with other archaeological and biocultural evidence that even in the mainland setting diet became more marine-oriented later in time.
In the Bahamas, where terrestrial, reef, and deep ocean habitats were utilized by prehistoric populations, Keegan and DeNiro (1988) analyzed numerous potential foods from these settings. Some reef fish have higher δ13C values and lower δ15N values than other ocean fish. The analysis of carbon and nitrogen isotope ratios in archaeological human samples from this setting indicates that prehistoric populations fished primarily in seagrass and coral-reef ecosystems. The most negative δ13C values occur in the initial period of occupation; later populations have less negative values. This temporal shift in stable carbon isotope ratios suggests that Caribbean populations became increasingly marine-oriented in later prehistory.
8.2.5 Isotopes of other elements: dietary and environmental implications

Strontium isotopes
Strontium, an alkaline earth element, expresses isotope ratios (87Sr/86Sr) that identify the relative contributions of marine and terrestrial food sources to the diets of populations, especially in situations where either carbon or nitrogen or both do not provide a clear picture. Because the isotopes of strontium do not fractionate as a result of biological processes, the strontium isotope ratios determined from bone directly reflect the local geochemistry, and the geochemical isotope ratio is passed unaltered through the food chain (Ericson, 1985; Price, Grupe et al., 1994; Sealy et al., 1991). In the South African Cape region, strontium isotope ratios in human bone and tooth apatite differ between coastal and interior regions. Coastal skeletons display 87Sr/86Sr values (range: …) close to those determined from marine and coastal animals, whereas those from the interior are enriched in 87Sr, producing higher ratios (0.71382-0.71898).
Owing to the sensitivity of strontium isotope ratios to local geochemistry, it is theoretically possible to track individual residence changes and mobility patterns throughout the period of growth and development. This has been tested by comparing strontium isotope ratios between earlier formed and later formed teeth (e.g., Ericson, 1989; Sealy et al., 1993) and between first molars (which reflect the supply of strontium during the first several years of life) and bone (which reflects strontium supply during the last five to ten years before death) (Grupe, 1995; Price, Grupe et al., 1994; Price, Johnson et al., 1994). A pilot study of isotope ratios of dental enamel and bone from the late prehistoric Grasshopper and Walnut Creek sites, Arizona, reveals that only some individuals share values similar to those of the local geology (Price, Johnson et al., 1994). These individuals probably spent their lives at their birth residence, whereas others appear to have been born elsewhere and moved to the place of residence at some later time. Comparison of strontium ratios in tooth enamel and bone from the Bell Beaker period (… BC) of southern Bavaria also reveals significant variation (Price, Grupe et al., 1994), which appears to be more pronounced in the earlier than the later part of the period (Grupe, 1995). This finding suggests that populations in the later period may have been more sedentary, which is consistent with settlement analysis from conventional archaeological data.
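The residence-change logic can be summarized in a short sketch. The local 87Sr/86Sr range and the measured values below are hypothetical; in practice the local range is established from local fauna, soils, or clearly local burials.

    # Classify residence history from paired enamel (childhood) and bone
    # (late-life) 87Sr/86Sr ratios - an illustrative sketch.
    LOCAL_RANGE = (0.7090, 0.7105)   # assumed local bioavailable 87Sr/86Sr range

    def is_local(ratio: float) -> bool:
        lo, hi = LOCAL_RANGE
        return lo <= ratio <= hi

    def residence_history(enamel: float, bone: float) -> str:
        child, adult = is_local(enamel), is_local(bone)
        if child and adult:
            return "probably lifelong local"
        if not child and adult:
            return "born elsewhere, moved to the locality later in life"
        if child and not adult:
            return "local childhood, nonlocal signal later in life"
        return "nonlocal in both tissues"

    print(residence_history(enamel=0.7122, bone=0.7098))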
Oxygen isotopes
Climate has a tremendous influence on human adaptation, determining such factors as resource productivity and the places where people live. Thus, climatic patterns are potentially useful for interpreting land use patterns. The δ18O values (determined from the stable isotopes 16O and 18O) of terrestrial water vary in relation to climate, especially temperature and humidity (Kolodny et al., 1983; Luz et al., 1990). Because mammalian skeletal and dental apatite is in equilibrium with body water, the phosphate oxygen isotope composition of these hard tissues directly reflects the temperature and climate at the time the organism was alive (Fricke et al., 1995). In order to test the hypothesis that hard tissues from archaeological settings can be used to track the history of regional climates, Fricke and coworkers (1995) determined the temporal change in δ18O values from archaeological dental enamel (Eskimos and Europeans) in coastal western Greenland and Denmark for the periods preceding, during, and following the so-called 'Little Ice Age' of the Medieval period. In a comparison of later with earlier populations, there is a 3‰ decrease in δ18O values, which is consistent with increased cooling documented in historical records describing colder climates in the North Atlantic region from ca. AD 1400 to 1700. These changes are in accord with other studies that indicated increasing dietary and climatological stress and eventual abandonment of Greenland by Vikings during the Medieval period (Buckland et al., 1996; Scott et al., 1991).
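As a rough illustration of the reasoning, the reported 3‰ shift can be converted into an implied cooling using a linear δ18O-temperature slope. The slope of about 0.7‰ per °C used below is a commonly cited approximation for mid- to high-latitude precipitation, not a value from the study itself, and it ignores the damping introduced by drinking-water sources and enamel physiology.

    # Rough temperature change implied by a shift in precipitation d18O.
    SLOPE_PERMIL_PER_DEGC = 0.7   # assumed linear slope (per mil per degree C)

    def implied_cooling(d18o_shift_permil: float) -> float:
        return d18o_shift_permil / SLOPE_PERMIL_PER_DEGC

    print(f"about {implied_cooling(3.0):.1f} degrees C of cooling")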
8.3 Elemental analysis
8.3.1 Background
Various elements found in both the organic (collagen) and the inorganic (mineral or apatite) components of bone tissue have dietary and nutritional significance. Most of these elements are contained in the inorganic component (see Sandford, 1992); more than 99% of strontium in vertebrates, for example, is found in the bone mineral (Schroeder et al., 1972). Major or bulk elements (carbon, hydrogen, iron, nitrogen, calcium, phosphorus, oxygen, potassium, sulfur, chlorine, sodium, and magnesium) perform critical functions in structural maintenance and are generally required in relatively large quantities by humans. Trace elements serve mostly in catalytic reactions and are frequently associated with certain enzymes, such as metal-activated enzymes. Zinc, manganese, and iron serve vital functions. Some trace elements are toxic even if ingested at very low levels (e.g., lead, mercury, cadmium), although virtually all trace elements are toxic if taken in excess.
The biochemistry and physiology of most major and trace elements in humans are well known, especially in relation to specific dietary properties and insufficiencies. A series of pioneering studies established the importance of trace elements for reconstructing and interpreting diet in past populations (e.g., Brown, 1973; Gilbert, 1975; Schoeninger, 1979). Initially, this research was received with enthusiasm, especially because it was assumed that trace element values represent accurate and unaltered signatures of past diets, an evaluation that had not been accomplished by any other means in archaeological settings. Accumulating evidence shows that the interpretation of trace elements in archaeological remains is far more complex than had originally been anticipated. The enthusiasm for elemental analysis has been tempered by the realization that a number of factors, such as food preparation techniques, cooking utensils, geochemical variation, synergism between elements, inter- and intra-bone variation, age, sex, and, especially, diagenetic processes or postmortem alterations following death, can alter the chemistry of bone tissue in profound ways. Thus, most trace elemental analysis research is at present devoted to distinguishing between diagenetic and biogenic signals in archaeological remains (Lambert et al., 1990; Pate et al., 1989; Price et al., 1992; Sandford, 1993b; Sillen, 1989). Analyses generally focus on three areas: (1) alkaline-earth elements (strontium and barium); (2) multi-elemental analysis; and (3) single-element analysis (Sandford, 1992, 1993b).
8.3.2 Alkaline-earth elements: strontium and barium analysis
Strontium
Unlike the isotopic composition of strontium, which does not fractionate by biological (or geological) processes, elemental concentrations show broad variation by trophic position in plants and animals (Price, Grupe et al., 1994). Strontium is taken up by organisms in inverse proportion to their trophic level. Among numerous elements utilized in paleodietary and bioarchaeological research, strontium is 'the only firmly established elemental model in bone-chemistry analysis' (Ezzo, 1994a:608). Strontium has no known biochemical function. It resembles calcium (also an alkaline earth) structurally and can substitute for calcium in a number of physiological roles, including fixation in the hydroxyapatite crystal structure of calcified tissues (Likins et al., 1960; Rosenthal, 1981). At the bottom of the food chain, plants acquire strontium directly through the soil, whereas mammals, including humans, obtain the element through secondary sources, such as plants or the animals that consume plants. Additionally, mammals discriminate against strontium in favor of calcium. This discrimination means that mammalian tissues contain less strontium than plants: herbivores contain less strontium than the plants they eat, carnivores contain less strontium than herbivores, and omnivores (such as humans) are intermediate between herbivores and carnivores. This distribution by trophic level has been observed in field studies, in both aquatic and terrestrial ecosystems (Elias et al., 1982; Ophel, 1963; Schoeninger, 1985).
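The trophic discrimination idea can be expressed numerically: each step up the food chain multiplies the Sr/Ca ratio by a discrimination factor less than one. In the sketch below, both the plant Sr/Ca value and the factor of 0.2 per trophic step are illustrative assumptions, not measured constants.

    # Sr/Ca falls with each trophic step because mammals discriminate
    # against strontium in favor of calcium (illustrative sketch).
    OBSERVED_RATIO = 0.2      # assumed Sr/Ca discrimination per trophic step
    PLANT_SR_CA = 2.5e-3      # hypothetical plant Sr/Ca (molar)

    herbivore_sr_ca = PLANT_SR_CA * OBSERVED_RATIO       # plant eaters
    carnivore_sr_ca = herbivore_sr_ca * OBSERVED_RATIO   # herbivore eaters
    for name, ratio in [("plant", PLANT_SR_CA),
                        ("herbivore", herbivore_sr_ca),
                        ("carnivore", carnivore_sr_ca)]:
        print(f"{name:10s} Sr/Ca = {ratio:.2e}")

An omnivore would fall between the herbivore and carnivore values, which is the pattern the field studies cited above report.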
Various local factors influence strontium chemistry, including geology and regional levels of alkaline earths, and there is a considerable amount of variability in strontium (and Sr/Ca ratios) within trophic levels (plants, herbivores, carnivores) (Runia, 1987a; Schoeninger, 1985; Sillen, 1992; Sillen & Kavanagh, 1982). Even within the same environment, economically important cereals and roots may have either elevated or lowered Sr/Ca ratios in comparison with other plants (Runia, 1987a). Owing to variation within trophic levels, knowledge of local food webs and predator-prey relations is essential for accurate reconstruction of past diets based on strontium (e.g., Sealy & Sillen, 1988). In addition, because bone strontium and Sr/Ca ratios can be strongly influenced by consumption of high-calcium foods (e.g., seafood) and use of mineral additives, the simple measurement of strontium in archaeological bone is not necessarily synonymous with trophic level and relative proportion of plant-to-meat consumption in past populations (Burton, 1996; Burton & Wright, 1995; Ezzo et al., 1995). For a variety of settings, strontium may not be a very accurate indicator of trophic level or meat vs. plant consumption; owing to its chemical bonding properties with calcium, it represents the dietary sources of calcium (Burton, 1996). This and other confounding factors indicate that inter- and intra-population comparisons based on archaeological samples must be made with a great deal of caution (Buikstra et al., 1989; Burton, 1996; Ezzo, 1994a; Radosevich, 1993; Sandford, 1992, 1993b; Sillen & Kavanagh, 1982; Sillen et al., 1989).
Workers have also become increasingly aware that diagenesis is the most important impediment to paleodietary reconstruction based on elemental analysis (e.g., Ezzo, 1992; Grupe & Piepenbrink, 1988; Lambert et al., 1983, 1984; Nelson & Sauer, 1984; Nelson et al., 1986; Pate & Hutton, 1988; Pate et al., 1991; Price, 1989; Sillen, 1981, 1992; and others). In addition to the aforementioned factors influencing strontium in bone, there are a number of behavioral and dietary influences. Some evidence suggests that weaning (Sillen & Smith, 1984), pregnancy and lactation (Blakely, 1989; Radosevich, 1989, 1993), and consumption of nondomesticated plants (e.g., nuts) and shellfish (Benfer, 1984; Buikstra et al., 1989; Byrne & Parris, 1987; Kyle, 1986; Schoeninger & Peebles, 1981) will affect values. In sum, reliable biological signals can be acquired only through careful consideration of a range of factors influencing strontium in bone tissue. Much of the literature documenting strontium in archaeological bone has not controlled for diagenesis and other factors. Therefore, results based on strontium alone should be viewed with scepticism (see Katzenberg, 1993a).
[Figure 8.4. Plot of strontium values (ppm) by year (AD) in prehistoric and historic Ontario native populations, showing an increase in maize in diets after AD 1000. (Data from Katzenberg, 1984.)]
Studies of strontium provide results that are suggestive of dietary strategies (e.g., Katzenberg, 1984, 1993a; Radosevich, 1989, 1993; Runia, 1987b; Schoeninger, 1981, 1982; Sillen, 1981; and others) (Figure 8.4). With the use of strontium values from herbivore and carnivore controls, comparisons of Aurignacian foragers with later Natufian farmers at Hayonim Cave, Israel, show dietary change consistent with the shift to agriculture (Sillen, 1981). Analysis of other Middle Eastern populations (Schoeninger, 1981) and comparisons of archaic and modern humans (Schoeninger, 1982) also demonstrate marked distinctions in dietary focus between foragers and farmers. Both the Sillen and Schoeninger investigations demonstrate the necessity of using herbivore and carnivore control samples when attempting to characterize human diets in local settings. Through her comparison of human and herbivore strontium values, Schoeninger concluded that '[w]ithin the Levant, the non-agricultural populations ... were including a large proportion of plant material in their diets. This represents increased use in plant material when compared to the earlier human population [in the region]' (1981:87). This conclusion was independently confirmed with the use of a similar research protocol based on analysis of strontium/calcium ratios from humans, herbivores, and carnivores at Hayonim Cave (Sillen, 1981; Smith, Bar-Yosef et al., 1984).
In prehistoric Ontario populations studied by Katzenberg (1993a), where diagenesis and other factors influencing bone strontium are controlled for, there is a decrease in values that reflects a shift from plants high in strontium (leafy plants and nuts) to low-strontium grasses (e.g., maize) (Figure 8.4). The results are consistent with findings based on nitrogen and carbon stable isotopes (see above), although the two approaches - isotopic and elemental - relate to different parts of the diet, those that contribute to the organic and to the inorganic fractions of bone tissue, respectively.
Schoeninger (1979) determined strontium values in members of groups of different status at Chalcatzingo, Mexico. Her study suggested lower values in higher-ranked individuals, which is consistent with the hypothesis that the elite in this society probably had greater access to animal protein, whereas lower-ranked individuals consumed relatively more plants, such as maize (and see Blakely & Beck, 1981; Hatch & Geidel, 1985; Jacobs, 1995, for alternative patterns).
Weaning age has also been estimated on the basis of the ratio of strontium to calcium in human bone samples. Sillen & Smith (1984) suggested that, because strontium is discriminated against in relation to calcium in milk production in the mammary gland and in the placenta, newborn and young infant Sr/Ca ratios should be low in comparison to those of adults. In contrast, plant foods such as wheat and barley, which are primary weaning foods, should exhibit relatively high Sr/Ca ratios. Consistent with these observations, a prehistoric Middle Eastern skeletal series exhibits low ratios in newborns; ratios increase thereafter, peaking between 1.5 and 3.5 years (Sillen & Smith, 1984).
Barium
Barium has received little attention in bioarchaeological chemistry and paleodietary studies. This underrepresentation is surprising, since a significant body of evidence indicates that the element is a highly sensitive indicator of past foodways (Burton & Price, 1990a, 1990b; Ezzo, 1992, 1993, 1994a; Francalacci & Borgognini Tarli, 1988). Like strontium, barium has no known biochemical function, it is nontoxic, it is structurally similar to calcium, it is not tightly regulated metabolically, and it undergoes fractionation with increasing trophic level (Burton & Price, 1990a, 1990b; Burton & Wright, 1995; Elias et al., 1982). Barium values and Ba/Ca ratios are lower in herbivores in relation to the plants they consume, and carnivores have lower values and ratios than herbivores. Most barium in the geological environment comes in the form of barite (BaSO4), a chemical that is relatively less soluble than the carbonates that are associated with strontium (see Ezzo, 1992). This suggests that the soil-to-plant discrimination should be greater for barium than for strontium. Consistent with this hypothesis, analysis of hundreds of archaeological bone samples indicates that barium separates organisms by their trophic positions better than strontium (Burton & Price, 1990a, 1990b). Therefore, barium may be an even more sensitive indicator of diet than strontium.
The importance of barium has been shown by comparative analysis of Sr/Ca and Ba/Ca ratios in animal and human bone from early prehistoric foragers at Carrier Mills, Illinois. The Sr/Ca ratio indicates only slight differences between animal species; the overlapping ranges of values in herbivores and carnivores prevent any meaningful dietary interpretation (Burton & Price, 1990a). Barium ratios (Ba/Ca), however, show clear interspecies differences. Human values at Carrier Mills are closer to values for carnivores than herbivores, suggesting that meat was a dominant food in the diets of these prehistoric foragers.
Barium analyses yield an important perspective on temporal shifts in diet in relation to social and environmental circumstances (e.g., Ezzo, 1992, 1993, 1994a, 1994b). Increased population aggregation and environmental degradation (wood depletion, increased aridity, game depletion) in later prehistory appear to have evoked a shift towards reduced dietary diversity and decreased hunting over the course of the 125-year occupation of Grasshopper Pueblo (see Ezzo, 1992, 1993). Decreased barium values indicate an increased use of maize and decreased use of meat in the diet, much more so in females (from 350.1 ppm to 261.1 ppm) than in males (from 285.6 ppm to 271.8 ppm) (Ezzo, 1992, 1993). These findings indicate that, in the earlier period, females consumed more wild plants and less maize and meat than males. This pattern of dietary sexual dimorphism is similar to dietary inferences made from observations of pathological markers (e.g., dental caries). In the later period, female and male diets became virtually identical, with both sexes placing increased emphasis on maize. Thus, most of the dietary change appears to be associated with females. The convergence of female and male diets in the later period may reflect increased involvement of men in agricultural activities, a pattern of activity that characterizes agricultural intensification in various ethnographically observed populations worldwide (see Ember, 1983). The reasons for a shift toward a greater focus on maize are speculative. It may be that maize was a more reliable food resource in the face of increasing aridity and environmental stress during the later occupation of the region (see Ezzo, 1992, 1993).
Burton & Price (1990a, 1990b) reviewed a wide range of published data on the barium content of rocks, soils, plants, fresh water, and sea water (and see Wessen et al., 1977, 1978). The pattern that emerges from their review is one in which barium and strontium values are approximately equal in terrestrial settings, but barium and Ba/Sr ratios in marine and terrestrial settings are highly distinctive (Figure 8.5).

[Figure 8.5. Ba/Sr values (log scale) in archaeological bone, showing the distinction between marine diet and terrestrial diet consumers: 1, Paloma, Peru; 2, Rolling Bay, Alaska; 3, Kiavak, Alaska; 4, Three Saints Bay, Alaska; 5, Chaluka, Alaska; 6, Port Moller, Alaska; 7, Rio Viejo, Mexico (coastal); 8, Cerro de la Cruz, Mexico (coastal); 9, Fábrica San José, Mexico; 10, Monte Albán, Mexico; 11, Poland (multiple sites); 12, Pirincay, Ecuador; 13, Pueblo Grande, Arizona; 14, Stillwater Marsh, Nevada. (From Burton & Price, 1990b; reproduced with permission of authors and Academic Press Ltd.)]

Barium values and
Ba/Sr ratios are considerably lower in sea water and marine organisms than in terrestrial organisms. Analysis of human remains from a wide range of New World archaeological sites - predominantly marine settings, coastal sites with agricultural (terrestrial) consumption, and inland sites with no access to marine foods - reveals that Ba/Sr ratios also clearly distinguish between human populations utilizing marine vs. terrestrial resources (Burton & Price, 1990a, 1990b). Archaeological human bone samples from desert settings (e.g., Stillwater Marsh, Nevada) are an exception to the terrestrial pattern, and appear to be more similar to marine values. This unusual pattern may result from strontium enrichment in desert soils, which would immobilize barium but not strontium (Burton & Price, 1990a, 1990b). Therefore, in at least some desert settings, the differentiation of marine vs. terrestrial diets based on Ba/Sr ratios does not appear to be possible. All other previously tested contexts clearly separate human populations that use predominantly marine resources from those using terrestrial resources.
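A minimal sketch of a Ba/Sr screen follows. The log-ratio threshold is an illustrative assumption rather than a published cutoff; the point is simply that marine consumers sit markedly lower on the Ba/Sr axis than terrestrial consumers, as in Figure 8.5.

    # Classify a bone sample as marine- or terrestrial-oriented from Ba/Sr.
    import math

    LOG_BA_SR_CUTOFF = -1.0   # assumed log10(Ba/Sr) decision threshold

    def diet_from_ba_sr(ba_ppm: float, sr_ppm: float) -> str:
        log_ratio = math.log10(ba_ppm / sr_ppm)
        return "marine-oriented" if log_ratio < LOG_BA_SR_CUTOFF else "terrestrial"

    print(diet_from_ba_sr(ba_ppm=5.0, sr_ppm=250.0))    # log10(0.02) -> marine
    print(diet_from_ba_sr(ba_ppm=80.0, sr_ppm=200.0))   # log10(0.4)  -> terrestrial

Note that such a screen would misclassify desert samples like Stillwater Marsh, for exactly the reason discussed above.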
8.3.3 Multi-element analysis
Additional discrimination of dietary focus in past populations has been approached by analysis of multiple elements, an approach first advocated in Gilbert's study of human bone samples from the Dickson Mounds site, Illinois (see Ezzo, 1994a; Gilbert, 1975, 1977, 1985; and see Sandford, 1992, 1993b, and Crist, 1995, for discussions of other examples of multi-element analysis in different settings). Among the five elements included in his analysis - copper, strontium, magnesium, manganese, and zinc - the element that best discriminated between nonintensive agriculturalists and intensive agriculturalists from the earlier and later occupations of the site, respectively, was zinc: the earlier group had higher values than the later group.

Lambert and coworkers (1979, 1984; and see Szpunar et al., 1978) employed a range of elements in documenting the shift from foraging to maize farming in later prehistory in the lower Illinois Valley. Their detailed analysis of bone and soil samples from the burial matrix serves to underscore the potential for diagenesis and elemental soil-to-bone and bone-to-soil transfer. In large part, most elements show some evidence of diagenesis.
8.3.4 Single-element analysis: tracking dietary deficiencies and toxicity

Iron
Deficiency of iron in past populations is inferred via analysis of pathological lesions (cribra orbitalia, porotic hyperostosis) in a range of settings worldwide (see Chapter 3). Some suggest that elemental analysis of archaeological bone samples provides additional understanding of iron status where pathology data are available (Edward & Benfer, 1993; Fornaciari et al., 1981; Sandford et al., 1988; Zaino, 1968). On the basis of information from limited samples, it was inferred that the presence of lower iron levels in individuals with cribra orbitalia in Punic Carthage (third century BC), compared to nonpathological individuals, suggested a link between iron status and pathological indicators of anemia (Fornaciari et al., 1981; although see Richtsmeier & Sheridan, 1996).

Juveniles from later Carthage (seventh century AD) have significantly higher levels of iron than adults, which may indicate that iron in archaeological bone represents a biogenic signal (Sandford et al., 1988). Comparison of iron levels in the third century BC and seventh century AD Carthaginians shows higher values in the later period. This suggests that either iron status improved or iron intakes increased in the later period (Sandford et al., 1988).
Ezzo (1994a) has questioned the biological significance of iron measurement in archaeological human bone. Except in regard to marrow, iron has no apparent function in bone tissue, nor does bone act as a reservoir for the element (Ezzo, 1994a). The lack of physiological evidence suggests that varying amounts of iron in archaeological bone probably do not correlate with the prevalence of iron deficiency anemia. Additionally, the iron content in soil is considerably higher than that in bone, indicating that unless all dirt is removed - a virtual impossibility in archaeological remains - the iron content in the skeletal tissue is highly exaggerated (Ezzo, 1994a).
Zinc
Following Gilbert's (1977) lead, there has been a general acceptance of zinc as an important discriminator of diet (e.g., amount of meat vs. plant consumption) and a basis for inferences about nutritional health in past populations. Unlike for strontium, calcium, or barium, however, a theoretical basis has not been developed for zinc as a paleodietary indicator (Ezzo, 1994a, 1994b). Although zinc may be sensitive to dietary history, the lack of theoretical models limits its usefulness in bioarchaeological chemistry and dietary inference. Underlying the use of zinc is the unsubstantiated assumption that the element is present at higher levels in meat and shellfish than in plants. Zinc acquired in the diet will influence the levels in skeletal tissues of growing animals, so that diets deficient in zinc will result in low values in bone and other tissues (Ezzo, 1994b). Zinc appears to play an essential role in growth: severe deficiencies of the element result in markedly reduced growth and dwarfism. Therefore, in contrast to strontium and barium, the essential nature of zinc and its rather tightly controlled metabolic regulation suggest that it has little use as a paleodietary indicator.
Lead
Lead is among the most discussed elements in bioarchaeological chemistry, mostly in attempts to document chronic toxicity and excess intake in past groups (e.g., Aufderheide, 1989; Aufderheide et al., 1981, 1985; Aufderheide, Wittmers et al., 1988; Ericson et al., 1979, 1991; Jaworowski, 1968; Keenleyside et al., 1996; Lalich & Aufderheide, 1991; Reinhard & Ghazi, 1992; Reinhard et al., 1994; Waldron, 1981, 1982, 1983). Like barium and strontium, lead shows preferential bone deposition in comparison with other tissues (more than 95% of absorbed lead is stored in the skeleton), it is not readily excreted, and it has an unusually low turnover rate in skeletal tissue (Aufderheide, 1989; Moore, 1986; Sandford, 1992; Wittmers et al., 1988). These factors indicate that adult skeletal lead values represent a measure of lifetime exposure. Because exposure is only rarely due to natural sources, most lead found in bones and other tissues originates from anthropogenic factors (Aufderheide, 1989). Consequently, lead toxicity recorded in archaeological skeletal samples provides perspectives on patterns of behavior that are linked with exposure to environmental hazards and heavy metals.
In Colonial era North America, the use of pewter - a lead-based material used in cooking and eating implements - resulted in increased exposure to lead, in some instances at toxic levels (Aufderheide et al., 1981, 1985; Aufderheide, Wittmers et al., 1988). Owing to their greater access to pewter, elite individuals in Colonial society had greater risk of exposure to lead than did nonelite individuals. The relatively higher exposure is well illustrated in the comparison of the bones of landowners and their slaves. For example, family members of the owner of the Clifts Plantation, Virginia, had a considerably higher lead content than their slaves (185 ppm vs. 35 ppm) (Aufderheide et al., 1981). This distinction reflects the use of pewter by the landowner class and of wood or unglazed ceramics by the nonelite class. Lead levels in military burials from the War of 1812 Snake Hill sample, Ontario, are similarly low (mean = 31.3 ppm), suggesting that recruits were drawn from the lower socioeconomic levels of the civilian population (Lalich & Aufderheide, 1991).
Slave skeletons from a Colonial era Barbados sugar plantation have very high skeletal lead concentrations (mean = 118 ppm; values up to 424 ppm) (Handler et al., 1986, 1988). Skeletal lead in this setting was apparently derived from various origins (e.g., pewter and lead-glazed vessels used for storage, preparation, or serving of food and drink). The major source of lead, however, was the processing of sugar into rum via lead stills.
In a similar vein, unusually high intake of lead in Romano-British and later Medieval populations in Great Britain is indicated by skeletal analysis (Waldron, 1981, 1983). The source of lead is unknown, but it appears to have been acquired from lead-based eating utensils and water pipes. In some of these settings, lead acquisition was clearly a postmortem phenomenon. For example, individuals buried in lead coffins can have high concentrations of the element.
During the late eighteenth and early nineteenth centuries, Omaha populations from northeastern Nebraska used lead for a variety of purposes, such as the production of body ornaments and musket balls (Reinhard & Ghazi, 1992; Reinhard et al., 1994). Analysis of skeletal remains from this group reveals unusually high lead concentrations in some individuals. Analysis of lead stable isotope ratios (²⁰⁶Pb/²⁰⁴Pb) suggests that lead was traded to the Omaha from present-day Missouri. Lead was also used to make red facial pigment, which would have been readily absorbed. Thus, facial paint is probably the primary source of lead for many of these individuals. Some skeletal lead may also have originated from activities taking place after death, since red paint was also applied to deceased individuals prior to their interment.
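The provenance logic behind this kind of study can be illustrated with a short sketch: a sample is assigned to the candidate ore source whose ²⁰⁶Pb/²⁰⁴Pb ratio it most closely matches. The source names and ratios below are hypothetical placeholders, not measurements reported by Reinhard and colleagues.

```python
# Hypothetical 206Pb/204Pb ratios for two candidate ore sources; the
# actual study compared measured ratios, and these numbers are invented.
SOURCES = {
    "Missouri ore": 21.0,
    "Upper Mississippi Valley ore": 22.3,
}

def nearest_source(sample_ratio, sources=SOURCES):
    """Return the source whose isotope ratio is closest to the sample's."""
    return min(sources, key=lambda name: abs(sources[name] - sample_ratio))

print(nearest_source(20.9))  # -> 'Missouri ore'
print(nearest_source(22.1))  # -> 'Upper Mississippi Valley ore'
```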
8.4 Methodological issues in bioarchaeological chemistry
The two primary components of bone tissue, apatite (mineral) and collagen (organic), both provide useful dietary information. In comparison with isotopic analysis of collagen, elemental and isotopic analysis of the apatite is more problematical, because of the stronger influence of diagenesis and a myriad of other factors in both living and post-burial environments. There is as yet no established measure of the degree of alteration of apatite for either elemental or isotopic analysis, although progress is being made on some fronts (see papers in Hedges & van Klinken, 1995). Stable isotope (i.e., carbon and nitrogen) analysis of collagen has several advantages over elemental and stable isotope analysis of apatite in dietary documentation and interpretation. Importantly, because bone collagen is not subject to ionic exchange for carbon and nitrogen, diagenetic effects are not as influential in determining values as in apatite (Ambrose, 1987; Grupe et al., 1989). Within the mineral component of bone, the carbonate fraction is subject to extensive diagenetic change (Schoeninger & DeNiro, 1982; Wright & Schwarcz, 1997). It is also a more straightforward process to identify and remove contaminants (fats, particulate plant matter, and humic matter) in collagen samples that could potentially alter the biological signal (Ambrose, 1987, 1990; Ambrose & Norr, 1992; Stafford et al., 1988). New technology and a better understanding of the biochemistry of the mineral component (apatite) of bone for trace element and stable isotope analysis have greatly expanded the range of possibilities for paleodietary study (e.g., Lee-Thorp et al., 1994; Cooke et al., 1996).
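One widely used screen for the collagen contamination problem noted above is the atomic C:N ratio of the extracted 'collagen' fraction, which in well preserved samples falls near the value for fresh bone collagen (roughly 2.9-3.6, a commonly cited range following DeNiro, 1985, introduced here for illustration). A minimal sketch:

```python
C_ATOMIC_MASS = 12.011  # g/mol
N_ATOMIC_MASS = 14.007  # g/mol

def atomic_cn_ratio(wt_percent_c, wt_percent_n):
    """Convert weight-percent C and N to an atomic C:N ratio."""
    return (wt_percent_c / C_ATOMIC_MASS) / (wt_percent_n / N_ATOMIC_MASS)

def collagen_looks_intact(wt_percent_c, wt_percent_n, low=2.9, high=3.6):
    # Acceptance window follows the commonly cited range for well
    # preserved collagen; samples outside it are suspect.
    return low <= atomic_cn_ratio(wt_percent_c, wt_percent_n) <= high

# Example: 42% C and 15% N by weight -> C:N of about 3.27, in range
print(round(atomic_cn_ratio(42.0, 15.0), 2))  # 3.27
print(collagen_looks_intact(42.0, 15.0))      # True
```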
8.5 Summary and conclusions
The application of isotopic and elemental analyses facilitates an understanding of foodways in past human societies by providing a nearly direct record of diet. This approach addresses several issues in bioarchaeology and archaeology that have heretofore been essentially speculative. Especially noteworthy are the timing and spread of C4 cultigens (e.g., maize agriculture in the New World) and the relative contribution of marine and terrestrial foods in coastal settings globally. Other critical issues in dietary ecology are currently under investigation, such as the consumption of meat by eastern African hominids living some two to three million years ago (Lee-Thorp et al., 1994; Sillen, 1986, 1992). Both stable isotopes and trace elements are subject to sources of variation that potentially impede dietary interpretation, including climate, habitat, physiology, and diagenesis.
The approaches to dietary reconstruction and ecological and behavioral inference discussed in this chapter should be considered as part of larger issues dealing with subsistence, adaptation, and nutritional ecology in the past. Although qualitative means of subsistence documentation (e.g., analysis of food refuse) are subject to a range of biases - as is also the case for isotopic and elemental analyses - it is important to consider them in order to gain a comprehensive perspective on the foodways of earlier societies. Direct information on diet in particular, and on the impact of diet and nutrition during the juvenile and adult years in general, can be acquired only through the investigation of human remains.
9 Historical dimensions of skeletal
variation: tracing genetic
relationships
9.1 Introduction
Relatedness between human groups has long been a point of discussion in anthropology, especially in historical studies of earlier societies. Biological distance, or 'biodistance', is the measurement and interpretation of relatedness or divergence between populations, or between subgroups within populations, based on analysis of polygenic skeletal and dental traits (Buikstra et al., 1990). The assessment of degree of relatedness presupposes that populations sharing attributes are more closely related than populations expressing many differences.
Biodistance analysis is complex, especially with regard to identifying meaningful patterns of biological variation that distinguish between populations, whether in temporal succession or in geographic distribution. Because of the complex nature of pattern identification, a great deal of attention has been devoted to methodological and theoretical concerns (e.g., Berry & Berry, 1967; Finnegan & Cooprider, 1978; Hauser & De Stefano, 1989; Heathcote, 1986; Hillson, 1986, 1996; Howells, 1973, 1989; Mayhall, 1992; Molto, 1983; Rösing, 1982, 1984; Saunders, 1989; Sjøvold, 1977; van Vark, 1970; van Vark & Howells, 1984).
Close biological affinity is indicated by unusually high (or sometimes low) frequencies of specific traits in some populations (e.g., Hrdlička, 1935; Larsen, Craig et al., 1995; Nelson, 1992; Snow, 1974) or within subgroups of populations (e.g., Spence, 1994). Inter- and intrapopulation biological relationships are mostly identified on the basis of the simultaneous consideration of multiple traits via multivariate statistical analysis, commonly involving the determination of principal components, discriminant functions, or biodistance statistics, and subsequent cluster analysis or multidimensional scaling.
Although different biodistance statistics are used for metric and nonmetric data, these procedures have underlying strategic commonalities. The multivariate Mahalanobis D² has become the benchmark statistic for analysis of nonmetric data. The C. A. B. Smith Mean Measure of Divergence (MMD) is the most commonly used statistic for nonmetric data. Until recently, the standard biodistance statistic for metric traits has been the D² generalized distance (see Brace et al., 1990). More recently, mean C-scores, D² distances based on individual C-scores, and Q-mode analysis of individual C-scores have gained favor as representations of overall biological distance and intergroup patterning among populations (e.g., Brace et al., 1990; Hanihara, 1994; Howells, 1986, 1989; Pietrusewsky, 1994). Another powerful multivariate analysis uses principal component scores derived from tooth measurements to identify broad-scale population relationships (Harris & Rathbun, 1991). This method reveals how size is apportioned across tooth types, thereby detecting crown shape differences between populations.
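As an illustration of the metric case, the following sketch computes the Mahalanobis D² generalized distance between two samples measured on the same variables, using a pooled within-group covariance matrix. The data are simulated; real applications involve many more variables, careful treatment of missing data, and significance testing not shown here.

```python
import numpy as np

def mahalanobis_d2(group_a, group_b):
    """D^2 = (m_a - m_b)' S^-1 (m_a - m_b), with S the pooled
    within-group covariance matrix of the two samples."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    diff = a.mean(axis=0) - b.mean(axis=0)
    na, nb = len(a), len(b)
    # Pooled within-group covariance
    pooled = ((na - 1) * np.cov(a, rowvar=False) +
              (nb - 1) * np.cov(b, rowvar=False)) / (na + nb - 2)
    return float(diff @ np.linalg.solve(pooled, diff))

# Two simulated samples, three hypothetical craniometric variables each
rng = np.random.default_rng(0)
pop1 = rng.normal([130, 95, 70], 4, size=(25, 3))
pop2 = rng.normal([134, 93, 72], 4, size=(25, 3))
print(mahalanobis_d2(pop1, pop2))
```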
Biodistance analysis is problematic, owing to the multifactorial nature of the traits being studied. Skeletal traits are influenced by intrinsic genetic factors as well as by both local and general epigenetic and environmental factors. It is thus not surprising that skeletal and dental heritability is imprecisely understood (for some estimates of heritability, see Goose, 1971; Potter et al., 1976; Sjøvold, 1984, 1995; Townsend, 1992). Thus, the application of a biological population model - which assumes at least some degree of genetic control of traits and that the distances based on these traits are directly proportional to those derived from gene frequencies - is difficult to verify in human populations drawn from archaeological settings (and see Saunders, 1989).
Neither metric nor nonmetric traits bear a one-to-one correspondence with an individual's genome. The humeral septal aperture, a frequently used postcranial nonmetric variant in biodistance analysis, for example, has a high degree of association with robusticity. Similarly, spondylolysis has often been assumed to be a population genetic marker. However, spondylolysis is strongly influenced by activity load on the lower back (see Chapter 5). Craniofacial and postcranial metric traits, as well, frequently reflect activity-induced remodeling (Heathcote, 1994; Larsen, 1987; and see Chapters 5 and 6). Consistency of findings with other lines of evidence (e.g., archaeological or historical), especially when care is taken to screen out traits influenced by activity or representing biomechanical adaptations (Brace et al., 1990; Heathcote, 1994), suggests that these traits provide important insight into population structure and relatedness. Minimally, they are important for testing nonrandomness in skeletal series (see Saunders, 1989; Spence, 1994). Familial studies of modern humans (e.g., Saunders & Popovich, 1978), laboratory mice (e.g., Grüneberg, 1952), and rhesus monkeys (Cheverud & Buikstra, 1981a, 1981b, 1982) support this assessment. Phenotypic anthropometric analysis appears to provide the same results as quantitative genetic analysis (Konigsberg & Ousley, 1995). In assessing relationships, it does not matter so much that heritability of traits may be low; rather, what matters is that environmental variation is random with respect to the traits being analyzed (and see Buikstra et al., 1990).
There is a tendency on the part of some biodistance investigators to ignore or underplay the biological significance of, and the factors underlying, skeletal variation (see Armelagos et al., 1982), leading many workers to conclude that biodistance is of little value in bioarchaeological inquiry. Yet a number of investigations have been successful in characterizing population structure and history in the same manner as for living populations (Buikstra et al., 1990). It is incorrect to say that all (or even most) investigators have blithely analyzed skeletal and dental variants without considering their broader biological meaning.
There are three primary motivations for conducting biodistance analysis (after Buikstra et al., 1990). First, results are important for the investigation of issues relating to evolutionary history, such as genetic drift and selection (Konigsberg, 1990), gene flow, and the influence of geography and other isolating mechanisms on biological relatedness (Conner, 1990; Droessler, 1981; Heathcote, 1994; Ossenberg, 1986; Rothhammer & Silva, 1990; Sciulli, 1990). Second, biodistance analysis addresses some fundamental archaeological and biohistorical issues. Key in this arena are questions that arise from cultural and biological changes in the past and the degree to which these changes are influenced by extrinsic influences vs. local circumstances (e.g., Conner, 1990; Droessler, 1981). For full assessment of these alternative agents, it is important that population history be considered. Biodistance analysis has the potential to identify population boundaries, postmarital residence patterns, familial and kin groupings, social groupings, and the presence of individuals from other populations, especially in settings involving contact