
This book derives from the authors' experiences when lecturing and tutoring at various international summer schools on the themes of reality-based surveying and 3D modelling in the fields of archaeology and cultural heritage. The contents of this work therefore cover the scientific (and other) aspects of the new 3D technologies in the heritage field, with theoretical sections, case studies, best practices and considerations.

3D Recording and Modelling in


Archaeology and Cultural Heritage
Theory and best practices
Edited by

Fabio Remondino
Stefano Campana

Stefano Campana has been a faculty member of the University of Siena (Italy) since 2006, in the Department of History
and Cultural Heritage, where he has been engaged in teaching and research as Lecturer in Ancient Topography. He is
specialized in landscape archaeology, remote sensing, GIS and mobile mapping. His work is focused on the understanding
of past landscapes from prehistory to the present day, with particular emphasis on Late Antiquity and the Early Middle
Ages. The principal context for his work has been Tuscany, but he has also participated in, and led, research work in the
UK, Turkey, Palestine, and Turkmenistan. He has been very active in organizing international conferences and summer schools,
and has given seminars and lectures at numerous universities including Durham (UK), Berkeley (USA), Stanford (USA), Duke
(USA), Beijing (China), Ghent (Belgium), Saragossa (Spain), and Port-au-Prince (Haiti). In 2011 he was proposed and admitted
as a Fellow of the Society of Antiquaries of London (FSA), and in 2012 he was invited onto the General Management Board
(GMB) of HIST, the Governing Board of the International Centre on Space Technologies for Natural and Cultural Heritage,
under the auspices of UNESCO and the Chinese Academy of Sciences.

REMONDINO & CAMPANA (Eds)

Fabio Remondino received his PhD in Photogrammetry and Remote Sensing in 2006 from ETH Zurich (Switzerland), where
he worked until 2007 as a research assistant. Since 2007 he has been based at the Bruno Kessler Foundation (FBK) of Trento
(Italy), where he leads the 3D Optical Metrology (3DOM) research unit. His research interests focus on automated data
processing, sensor characterization and integration, as well as information extraction from image and range data. He has
authored more than 100 scientific publications in journals and peer-reviewed conferences, and he acts as a reviewer for many
Geomatics journals. He has organized 20 scientific conferences and 12 summer schools for knowledge and technology
transfer. He is now serving as President of the ISPRS Technical Commission V on Close-range imaging, analysis and
applications, President of the EuroSDR Technical Commission I on Sensors, Primary Data Acquisition and Georeferencing,
and he is a member of the Executive Board of CIPA Heritage Documentation. Of the many projects and heritage sites he
has worked on, particularly memorable have been Pompeii, Paestum, Copan, the Etruscan necropolises, Bamiyan, Machu Picchu,
Jerash, and Angkor Wat.

BAR S2598 2014

3D RECORDING AND MODELLING IN ARCHAEOLOGY AND CULTURAL HERITAGE: THEORY AND BEST PRACTICES

BAR International Series 2598


2014

Contents
INTRODUCTION.................................................................................................................. 3
M. Santana Quintero
1 ARCHAEOLOGICAL AND GEOMATIC NEEDS
1.1 3D modeling in archaeology and cultural heritage
Theory and best practice ................................................................................................... 7
S. Campana
1.2 Geomatics and cultural heritage..................................................................................... 13
F. Remondino
1.3 3D modelling and shape analysis in archaeology........................................................... 15
J.A. Barceló
2 LASER/LIDAR
2.1 Airborne laser scanning for archaeological prospection ................................................ 27
R. Bennet
2.2 Terrestrial optical active sensors: theory & applications .............................................. 39
G. Guidi
3 PHOTOGRAMMETRY
3.1 Photogrammetry: theory ................................................................................................. 65
F. Remondino
3.2 UAV: Platforms, regulations, data acquisition and processing ...................................... 74
F. Remondino
4 REMOTE SENSING
4.1 Exploring archaeological landscapes with satellite imagery .......................................... 91
N. Galiatzatos
5 GIS
5.1 2D GIS vs. 3D GIS theory ........................................................................................... 103
G. Agugiaro

6 VIRTUAL REALITY & CYBERARCHAEOLOGY


6.1 Virtual reality, cyberarchaeology, teleimmersive archaeology .................................... 115
M. Forte
6.2 Virtual reality & cyberarchaeology: virtual museums ................................................ 130
S. Pescarin
7 CASE STUDIES
7.1 3D Data Capture, Restoration and Online Publication of Sculpture ............................ 137
B. Frischer
7.2 3D GIS for Cultural Heritage sites: The QueryArch3D prototype ............................... 145
G. Agugiaro & F. Remondino
7.3 The Use of 3D Models for Intra-Site Investigation in Archaeology ............................ 151
N. Dell'Unto


List of Figures and Tables


F. Remondino: Geomatics and Cultural Heritage
Figure 1. Geomatics and its related techniques and applications ......................................... 13
Figure 2. Existing Geomatics data and sensors according to the working scale
and object/scene to be surveyed ..................................................................................... 14
Figure 3. Geomatics techniques for 3D data acquisition, shown according
to the object/scene dimensions and complexity of the reconstructed
digital model ................................................................................................................... 14
J.A. Barceló: 3D Modelling and Shape Analysis in Archaeology
Table 1. Microtopography: 3D surface texture .................................................................... 19
R. Bennet: Airborne Laser Scanning for Archaeological Prospection
Figure 1. Demonstration and example of the zig-zag point distribution
when ALS data are collected using an oscillating sensor ............................................... 28
Figure 2. Schematic of the key components of the ALS system that enable
accurate measurement of height and location ................................................................. 28
Figure 3. Schematic illustrating the differences in data recorded by full-waveform
and pulse echo ALS sensors ........................................................................................... 29
Figure 4. An example of orange peel patterning caused by uncorrected point
heights at the edges of swaths. The overlay demonstrates uncorrected data
which in the red overlap zones appears speckled and uneven compared
with the same areas in the corrected (underlying) model ............................................... 29
Figure 5. An example of classification of points based on return which forms
the most basic method to filter non-terrain points from the DSM .................................. 30
Figure 6. Two examples of common interpolation techniques: IDW (left)
and Bicubic Spline (right)............................................................................................... 31
Figure 7. Comparison of visualisation techniques mentioned in this chapter ...................... 32
Figure 8. Angle and Illumination of a shaded relief model .................................................. 33
Figure 9. Different angles of illumination highlighting
different archaeological features..................................................................................... 34
G. Guidi: Terrestrial optical active sensors: theory & applications
Figure 1. Triangulation principle: a) xz view of a triangulation-based distance
measurement through a laser beam inclined at a known angle with respect to the
reference system, impinging on the surface to be measured. The light source is
at distance b from the optical centre of an image-capturing device equipped
with a lens with focal length f; b) evaluation of xA and zA ............................................. 40
Figure 2. Acquisition of coordinates along a profile generated by a sheet
of laser light. In a 3D laser scanner this profile is mechanically moved
in order to probe an entire area ....................................................................................... 41
Figure 3. Acquisition of coordinates along different profiles generated by
multiple sheets of white light.......................................................................................... 42
Figure 4. Acquisition of the coordinates of point A through a priori knowledge
of the angle and the measurement of the distance through the Time Of Flight
of a light pulse from the sensor to the object and back ................................................... 43
Figure 5. Exemplification of the accuracy and precision concepts. The target has
been used by three different shooters. Shooter A is precise but not
accurate, B is more accurate than A but less precise (more spread),
C is both accurate and precise ........................................................................................ 46
Figure 6. ICP alignment process: a) selection of corresponding points on two
partially superimposed range maps; b) rough pre-alignment;
c) accurate alignment after a few iterations .................................................................... 48
Figure 7. Mesh generation: a) set of ICP aligned range maps. Different colours
indicate the individual range maps; b) merge of all range maps in a single
polygonal mesh............................................................................................................... 48
Figure 8. Mesh optimization: a) mesh with polygon sizes given by the range
sensor resolution set-up (520,000 triangles); b) mesh simplified in order
to keep the difference from the unsimplified one below 50 µm.
The polygon sizes vary dynamically according to the surface
curvature and the mesh size drops to 90,000 triangles.......................................... 49
Figure 9. Structure of the G Group of temples in My Son: a) map of the G area drawn
by the archaeologist Parmentier in the early 20th century (Stern, 1942); b) fisheye
image taken from above during the 2011 survey. The ruins of the mandapa (G3)
are visible in the upper part of the image, the posa (G5) on the right, the gopura
(G2) in the center, and the footprint of the holy wall all around .................................... 52
Figure 10. Handmade structures arranged on the field by local workers for
locating the laser scanner in the appropriate positions: a) mounting the
platform on the top of the structure surrounding the Kalan; b) laser
scanner located on the platform at 7 meters above the ruins;
c) multi-section ladder for reaching the platform; d) structure
for elevating the scanner at 3m from ground. During 3D acquisition
the operator lies in the blind cone below the scanner in order
to avoid the laser beam trajectory ................................................................................... 53
Figure 11. Map of the hill where the G Group is located within the My Son Area,
with the scanner positions for acquiring different structures highlighted
by colored dots ............................................................................................................... 54
Figure 12. Sculpted tympanum representing Krishna dancing on the snakes,
originally at the entrance of the kalan: a) 3D laser scanning in the store room
of the museum; b) reality-based model from the 3D data .............................................. 54
Figure 13. High resolution capture of the Foundation stone through SFM:
a) texturized 3D model measured through a sequence of 24 images shot
around the artifact; b) mesh model of the central part of the stone with
a small area highlighted in red; c) color-coded deviations of the SFM
acquired points from a best-fitting plane calculated on the red area
of b), clearly showing the nearly 2 mm carving on the stone ...................................... 55
Figure 14. Tangential edge error in 3D point clouds: the red points represent
the incorrect data with respect to the real ones (black-grey color) ......................................... 56
Figure 15. a) Point cloud model of the Kalan cleaned and aligned in the same
reference system; b) polygonal model of the Kalan with a decimated
and watertight mesh ........................................................................................................ 56

Figure 16. Reality-based models of all ruins in the G group obtained from 3D data
generated by a laser scanner at 1 cm resolution and texturized with the actual
images of the buildings: a) G1, the main temple; b) G2, the entrance portal to
the holy area; c) G3, the assembly hall; d) G4, the south building;
e) G5, the kiosk of the foundation stone ......................................................................... 57
Figure 17. Reality-based models of eight of the 21 decorations found during
the G Group excavations and acquired in the My Son museum. All these
decorations have been acquired with a sampling step between 1 and 2 mm,
and post processed in order to strongly reduce the significant measurement
noise but not the tiniest details of their shapes. The visual representation
in this rendering has been made with a seamless texture ............................................. 58
Figure 18. Virtual reconstruction of the G Group and its surrounding panorama
starting from the reality-based models acquired through laser scanning
and digital images ........................................................................................................... 59
Table 1. Laser scanner configurations planned for 3D data acquisition ............................... 51
Table 2. Number of point clouds acquired at different resolution levels (first three
columns), and total number of 3D points acquired during the whole 3D survey
of the G Group and the related decorations (last column) .............................................. 55
F. Remondino: Photogrammetry: theory
Figure 1. The collinearity principle established between the camera projection
center, a point in the image and the corresponding point in the object
space (left). The multi-image concept, where the 3D object can be
reconstructed using multiple collinearity rays between corresponding
image points (right) ........................................................................................................ 66
Figure 2. A typical terrestrial image network acquired ad-hoc for a camera calibration
procedure, with convergent and rotated images (a). A set of terrestrial images
acquired ad-hoc for a 3D reconstruction purpose (b) ..................................................... 68
Figure 3. Radial (a) and decentering (b) distortion profiles for a digital camera set
at different focal lengths ................................................................................................. 69
Figure 4. 3D reconstruction of architectural structures with manual measurements in
order to generate a simple 3D model with the main geometrical features (a).
Dense 3D reconstruction via automated image matching (b). Digital Surface
Model (DSM) generation from satellite imagery (Geo-Eye stereo-pair) for
3D landscape visualization (c) ........................................................................................ 71
Figure 5. 3D reconstruction from images: according to the project needs
and requirements, sparse or dense point clouds can be derived...................................... 72
Table 1: Photogrammetric procedures for calibration, orientation
and point positioning ...................................................................................................... 68
F. Remondino: UAV: Platforms, regulations, data acquisition and processing
Figure 1. Available Geomatics techniques, sensors and platforms for 3D recording
purposes, according to the scene dimensions and complexity ...................................... 75
Figure 2. Typical acquisition and processing pipeline for UAV images .............................. 77
Figure 3. Different modalities of flight execution delivering different image
block quality: a) manual mode and image acquisition with a scheduled
interval; b) low-cost navigation system with possible waypoints but irregular
image overlap; c) automated flying and acquisition mode achieved with
a high quality navigation system .................................................................................... 78
Figure 4. Orientation results of an aerial block over a flat area of ca 10 km (a).
The derived camera poses are shown in red/green, while color dots are the
3D object points on the ground. The absence of ground constraint (b) can lead
to a wrong solution for the computed 3D shape (i.e. ground deformation).
The more rigorous approach, based on GCPs used as observations in the
bundle solution (c), delivers the correct 3D shape of the surveyed scene, i.e.
a flat terrain .................................................................................................................... 79
Figure 5. Orientation results of an aerial block over a flat area of ca 10 km (a).
The derived camera poses are shown in red/green, while color dots are the
3D object points on the ground. The absence of ground constraint (b) can lead
to a wrong solution for the computed 3D shape (i.e. ground deformation).
The more rigorous approach, based on GCPs used as observations in the
bundle solution (c), delivers the correct 3D shape of the surveyed scene, i.e.
a flat terrain .................................................................................................................... 80
Figure 6. A mosaic view of the excavation area in Pava (Siena, Italy) surveyed
with UAV images for volume excavation computation and GIS applications (a).
The derived DSM shown as shaded (b) and textured mode (c) and the produced
ortho-image (d) [75]. If multi-temporal images are available, DSM differences
can be computed for excavation volume estimation (e) ..................................................... 81
Figure 7. A mosaic over an urban area in Bandung, Indonesia (a). Visualization of
the bundle adjustment results (b) of the large UAV block (ca 270 images) and
a close view of the produced DSM over the urban area, shown as point
cloud (c, d) and shaded mode (e) .................................................................................... 82
Figure 8. Approximate time effort in a typical UAV-based photogrammetric
workflow ........................................................................................................................ 83
Table 1. Evaluation of some UAV platforms employed for Geomatics applications,
according to the literature and the authors' experience. The evaluation is
from 1 (low) to 5 (high) .................................................................................................. 76

N. Galiatzatos: Exploring archaeological landscapes with satellite imagery


Figure 1. Illustration of the spatial resolution property ........................................................ 92
Figure 2. The high radiometric resolution of IKONOS-2 (11-bit) allows for
better visibility in the shadows of the clouds .................................................................. 93
Figure 3. The left part displays the spectral resolution of different satellites.
The right part illustrates the spectral signature from the point of view
of hyperspectral, multispectral and panchromatic images respectively .......................... 94
Figure 4. Illustration of the different spatial coverage or swath width
(nominal values in parenthesis) ...................................................................................... 94
Figure 5. Classical and modern geospatial information system ........................................... 98
Table 1. Landsat processing levels as provided ................................................................... 94
Table 2. Description of error sources ................................................................................... 96

G. Agugiaro: 2D & 3D GIS and Web-Based Visualization


Figure 1. Example of relational model: two tables (here: countries and cities)
are depicted schematically (top). Attribute names and data types are listed
for each table. The black arrow represents the relation existing between them.
Data contained in the two tables is presented in the bottom left, and the result
of a possible query in the bottom right. The link between the two tables is
realized by means of the country_id columns .............................................................. 104
Figure 2. Raster (top) and vector (bottom) representation of point, line
and polygon features in a GIS ...................................................................................... 105
Figure 3. Qualitative examples of different interpolation algorithms starting from
the same input (left). Surface interpolated using an Inverse Distance Weighting
interpolator (center) and a Spline with Tension interpolator (right) ............................. 108
Figure 4. Examples of network analyses. A road network (upper left), in which
5 possible destinations are represented by black dots, can be represented
according to the average speed typical for each roadway (upper right),
where decreasing average speeds are represented in dark green, light green,
yellow, orange and red, respectively. The shortest route, considering distance,
connecting all 5 destinations is depicted in blue (bottom left), while the
shortest route, in terms of time, is depicted in violet (bottom right). These
examples are based on the Spearfish dataset available for GRASS GIS .......................... 109
Figure 5. Examples of visualization of GIS data. A raster image (orthophoto)
and a vector dataset (building footprints) are visualized in 2D (left).
A 3D visualization of the extruded buildings draped onto the DTM............................ 110
Figure 6. Example of Web-based geodata publication in 3D: by means of virtual
globes, as in Google Earth, or in the case of the Heidelberg 3D project ...................... 111
M. Forte: Virtual reality, cyberarchaeology, teleimmersive archaeology
Figure 1. Digital Hermeneutic Circle ................................................................................. 116
Figure 2. Domains of digital knowledge ............................................................................ 116
Figure 3. 3D-Digging Project at Çatalhöyük ...................................................... 120
Figure 4. Teleimmersion System in Archaeology (UC Merced, UC Berkeley) ................. 121
Figure 5. Video capturing system for teleimmersive archaeology ..................................... 121
Figure 6. A Teleimmersive work session ........................................................................... 122
Figure 7. Building 77 at Çatalhöyük: the teleimmersive session shows the spatial
integration of shape files (layers, units and artifacts) in the 3D model recorded
by laser scanning .......................................................................................................... 122
Figure 8. 3D Interaction with Wii in the teleimmersive system: building 77,
Çatalhöyük.................................................................................................................... 123
Figure 9. Point clouds from a phase-shift laser scanner (Trimble FX)
at Çatalhöyük: building 77 ........................................................................... 123
Figure 10. Image modeling of building 89 at Çatalhöyük ............................................ 124
Figure 11. Image modeling of building 77 at Çatalhöyük ............................................ 124
Figure 12. 3D layers and microstratigraphy in the teleimmersive system: midden
layers at Çatalhöyük. This area was recorded by optical scanner ................................ 125
Figure 13. Virtual stratigraphy of building 89, Çatalhöyük: all the layers
recorded by phase-shift laser scanner (Trimble FX)................................................. 125
Figure 14. Building 77 reconstructed by image modeling (Photoscan).
In detail: hand wall painting and painted calf's head above niche .................................. 125
Figure 15. Building 77 after the removal of the painted calf's head.
The 3D recording by image modeling makes it possible to reconstruct
the entire sequence of decoration ................................................................................. 125
Figure 16. Building 77: all the 3D layers with paintings visualized in transparency ......... 125
Table 1................................................................................................................................ 127
S. Pescarin: Virtual reality & cyberarchaeology: virtual museums
Figure 1. The virtual museum of Scrovegni chapel (Padova, IT, 2003-):
the VR installation at the Civic Museum and the cybermap which is
part of the VR application ........................................................................................... 132
Figure 2. Aquae Patavinae VR presented at Archeovirtual 2011 (www.archeovirtual.it):
natural interaction through the web .............................................................................. 133
Figure 3. 3D reconstructions from the project 'Matera: tales of a city',
with a view of the same place in different historical periods........................................ 133
Figure 4. Immersive room with Apa stereo movie inside the new museum
of the city of Bologna ................................................................................................... 134

B. Frischer: 3D Data Capture, Restoration and Online Publication of Sculpture


Figure 1. View of the DSP's reconstruction of the statue group of Marsyas,
olea, ficus, et vitis in the Roman Forum ....................................................................... 139
Figure 2. Alexander, plaster cast (left) and original marble (right)
of the torso; front view ................................................................................................. 141
Figure 3. Alexander, digital model of the cast (left) and of the original (right)
of the torso; front view ................................................................................................. 142
Figure 4. Tolerance-Based Pass/Fail test of the digital models of the cast and original
torso of Alexander in Dresden. Green indicates that the two models differ
by less than 1 mm. Red indicates areas where the difference between
the models exceeds 1 mm ............................................................................................. 142
Figure 5. Error Map comparing the digital models of the cast and original torso
of the Dresden Alexander ......................................................................................... 142
G. Agugiaro & F. Remondino: 3D GIS for Cultural Heritage sites:
The QueryArch3D prototype
Figure 6. Different levels of detail (LoD) in the QueryArch3D tool. Clockwise
from top-left: LoD1 of a temple with prismatic geometries, LoD2 with more
detailed models (only exterior walls), LoD3 with interior walls/rooms
and some simplified reality-based elements, LoD4 with high-resolution
reality-based models ..................................................................................................... 147
Figure 7. Different visualization models in QueryArch3D: aerial view (a, b),
walkthrough mode (b) and detail view (d). Data can be queried according
to attributes (a) or by clicking on the chosen geometry (b, c, d). The amount
of information shown depends on the LoD: in (b), attributes about
the whole temple are shown, in (c) only a subpart of the temple,
and the corresponding attributes, are shown ................................................................. 148
N. Dell'Unto: The Use of 3D Models for Intra-Site Investigation in Archaeology
Figure 1. This image presents an example of a 3D model acquired during
an investigation campaign in Uppåkra (Summer 2011). The model has
been realised using Agisoft Photoscan and visualised through MeshLab .................... 152
Figure 2. This image shows the three steps performed by the software
(i.e., Agisoft Photoscan) to calculate the 3D model for the rectangular
area excavated in 2011 during the investigation of a Neolithic grave in
Uppåkra: (a) camera position calculations, (b) geometry creation,
and (c) map projection .................................................................................................. 154
Figure 3. This image shows the investigation area that was selected in 2010 to test
the efficiency of the Computer Vision techniques during an archaeological
excavation in Uppåkra. The upper part of the image presents (A) a model
created during the excavation overlapped with the graphic documentation
created during the investigation campaign. The lower part of the image
presents (B) an example of models organised in a temporal sequence ......................... 155
Figure 4. This image shows two models of the excavation that were created
at different times during the investigation campaign. In the first model,
(a) the circular ditch is visible only in the Northwest rectangular area.
The second model shows (b) how the results of the archaeological
investigation allowed for the discovery of a ditch that was
in the Southeast rectangular area .................................................................................. 155
Figure 5. This image shows part of the 3D models that were created during
the excavation of a grave, organised in a temporal sequence ....................................... 156
Figure 6. This image shows the integration of the 3D models into the GIS.
ArcScene only imported models smaller than 34,000 polygons ........................................ 157


3D MODELING IN ARCHAEOLOGY AND


CULTURAL HERITAGE
THEORY AND BEST PRACTICES

INTRODUCTION
Mario SANTANA QUINTERO

development of infrastructure. As well as, armed


conflicts, weathering, and pure vandalism.

INTRODUCTION
Digitally capturing cultural heritage resources have
become nowadays a common practice. Recording the
physical
characteristics
of
historic
structures,
archaeological sites and landscapes is a cornerstone of
their conservation, whatever it means actively
maintaining them or making a posterity record. The
information produced by such activity potentially would
guide decision-making by property owners, site
managers, public officials, and conservators around the
world, as well as, to present historic knowledge and
values of these resources. Rigorous documentation may
also serve a broader purpose: over time, it becomes the
primary means by which scholars and the public
apprehend a site that has since changed radically or
disappeared.

The rapid rise of new digital technologies has revolutionized the practice of recording heritage places. Digital tools and media offer a myriad of new opportunities for collecting, analyzing and disseminating information. With these new opportunities, however, come conflicts and constraints involving the fragmentation, longevity and reliability of information, as well as the threat of generating digital representations that falsify rather than simplify the understanding of our heritage. Furthermore, a record can be used for promotion leading to participation, increasing knowledge about a heritage place. It can be a tool for promoting the participation of society in its conservation, a tool for cultural tourism and regional development [2].

A good selection and application of recording and documentation tools is assured when a comprehensive approach is prepared, derived from the needs of the site baseline. This base information set should take into consideration the indicators defined by assessing the state of conservation and the statement of significance of the heritage place.

In this context, the ICOMOS International Scientific Committee on Heritage Documentation (CIPA) has endeavoured for over 40 years to organize venues for reflection, exchange and dissemination of research and projects in the field of documentation of cultural heritage. Its contribution to the field has been substantial and can be consulted at http://cipa.icomos.org (last accessed 20/05/2011). With the support of CIPA, this book provides a guideline for appropriate training in three-dimensional capturing and dissemination techniques.

Moreover, increasing the knowledge of the relevant heritage places in a region can lead to their inclusion in inventories and other legal instruments that can eventually prevent their destruction and help in combating the theft of, and illicit traffic in, cultural property on a global scale [2].

BASIC ENGAGEMENT RULES WHEN RECORDING HERITAGE PLACES

A holistic approach to understanding significance is essential for safeguarding cultural heritage properties; equally important is the appropriate assessment of their state of conservation, taking into consideration the potential degree of vulnerability to cumulative and/or drastic risks and threats to their integrity. This is very relevant when a digital record of a site is being prepared. As evidenced by recent events, heritage places are constantly threatened by environmental calamities (earthquakes, tsunamis, inundations, etc.) and indiscriminate development of infrastructure, as well as by armed conflicts, weathering, and pure vandalism.

Recording for conservation of heritage places is a careful process that requires following these rules:
- Nothing is straight, square or horizontal;
- Record from the wide (big) to the small (fault theory);
- For conservation, record the as-built condition: record only what you see (distinguish between what you see and assumptions deduced from the logic of the fabric);
- Create a BASIS and CONTROL system;
- Record and provide provenance information.

MAKING BASELINE RECORDS FOR CONSERVATION

A baseline record is the product of any recording and documenting project when studying cultural resources. The structure, specification (metadata), quality and extent of this record should follow internationally recognized standards and should provide relevant, timely and sufficient information to protect the resource.

This record, additionally, can be used as the starting point for designing and implementing a monitoring strategy, allowing the detection of changes affecting the statement of significance of the heritage place.

A baseline is defined by both a site report and a dossier of measured representations that could include a site plan, emplacement plan, plans of features, sections, elevations, three-dimensional models, etc.

In order to identify the extent of field recording necessary, it is important to carry out documentary research to review and identify gaps in the existing information (documentation) on the site. This first assessment makes it possible to estimate the degree of additional recording work required to prepare an adequate set of documents for the mapped indicators.

The following checklist can be used as a guideline to the minimum information required to define the baseline:
- Identify the site location (centroid, boundaries, elements and buffer zone);
- Identify and map evidence of criteria;
- Relative chronology and history of the resources;
- Significance and integrity assessment;
- Risk assessment: threats and hazards associated with indicators;
- Administrative and management issues (current and past mitigations);
- Other assessments.

FINAL REMARKS

A holistic approach, centred on the relevance of information for understanding the significance, integrity and threats to our built heritage, is of paramount importance.

Heritage recorders should bear in mind that it is crucial to provide a measured dataset of representations that truly presents the actual state of conservation of the property.

Recorded information is required to be timely, relevant and precise. It should provide a clear understanding of the fabric's condition and materials, as well as the property's chronology of modifications and alterations over its extended lifespan. Documenting and recording these issues, along with assessing the degree and type of risks, is therefore an essential part of the property's understanding, conservation and management.

Values are a crucial concept in defining the extent of effective capturing and dissemination of knowledge of heritage places.

The rapid rise of new digital technologies has revolutionized the way our built heritage is recorded. With these new opportunities there are also conflicts and challenges, especially in guaranteeing the scientific correctness and reliability of the information used to record and document historic buildings.

References
1. CLARK, Catherine M. 2001. Informed Conservation: Understanding Historic Buildings and their Landscapes for Conservation. London: English Heritage.
2. Council of Europe 2009. Guidance on Inventory and Documentation of the Cultural Heritage.
3. EPPICH, R.; CHABBI, A. (eds.) 2007. Illustrated Examples: Recording, Documentation, and Information Management for the Conservation of Heritage Places. The Getty Conservation Institute, J. Paul Getty Trust.
4. LETELLIER, R.; SCHMID, W.; LEBLANC, F. 2007. Guiding Principles: Recording, Documentation, and Information Management for the Conservation of Heritage Places. Getty Conservation Institute, J. Paul Getty Trust.
5. MATERO, Frank G. 2003. Managing change: The role of documentation and condition survey at Mesa Verde National Park. Journal of the American Institute for Conservation (JAIC), 42, pp. 39-58.
6. STOVEL, H. 1998. Risk Preparedness: A Management Manual for World Cultural Heritage. ICCROM.
7. UNESCO 2010. Managing Disaster Risks for World Heritage (World Heritage Resource Manual). ICCROM.
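The baseline checklist described in this chapter maps naturally onto a structured record for use in inventories or GIS attribute tables. The sketch below is purely illustrative: the class, field names and sample values are invented for the example and are not part of any cited standard.

```python
from dataclasses import dataclass, field

@dataclass
class BaselineRecord:
    """Hypothetical container for the minimum baseline information
    suggested by the checklist (all field names are illustrative)."""
    site_id: str
    centroid: tuple              # (lat, lon) of the site centre
    boundaries: list             # polygon vertices defining site and buffer zone
    chronology: str              # relative chronology / history of the resource
    significance: str            # significance and integrity assessment
    risks: list = field(default_factory=list)        # threats and hazards
    mitigations: list = field(default_factory=list)  # current and past measures

# An invented example record:
record = BaselineRecord(
    site_id="EX-001",
    centroid=(43.32, 11.33),
    boundaries=[(43.31, 11.32), (43.33, 11.34)],
    chronology="Late Antique to Early Medieval",
    significance="High; intact stratigraphy",
    risks=["ploughing erosion"],
)
print(record.site_id, len(record.risks))  # EX-001 1
```

Keeping the record as explicit, typed fields (rather than free text) is what later makes the monitoring comparison between baseline and re-survey possible.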

1 ARCHAEOLOGICAL AND GEOMATIC NEEDS

1.1 3D MODELING IN ARCHAEOLOGY AND CULTURAL HERITAGE: THEORY AND BEST PRACTICE

S. CAMPANA

1.1 ARCHAEOLOGICAL NEEDS

Paul Cézanne often commented that "things live only if they have volume" and again that "nature is not on the surface; it lies in depth". This maxim applies equally forcefully within archaeology, across the full spectrum from schools of thought dominated by art-historical approaches to those which focus primarily upon context and on the study of cultural material through insights drawn from anthropology and ethnography. The products of human endeavour (objects, structures and landscapes) all have volume and are therefore capable of description in three spatial dimensions, as well, of course, as in terms of their historical derivation.

The concepts of volume and of the third dimension are not recent discoveries. Rather, they constitute an archaeological component which has from the outset been recognised as fundamental to the discipline, expressed through documentation such as excavation plans, perspective drawings, maps and the like (Renfrew, Bahn 2012). But it is also true that three-dimensionality has in general been represented in a non-measured mode, predominantly through the development of a language of symbols. In essence the reasons for this practice, common to the present day, lie in the technology and instrumentation available at the time. The forms of graphical documentation which have underpinned archaeology throughout the greater part of its life can be reduced in essence to maps, excavation drawings, matrices and photographs. All of these rely on methods of presentation that are essentially two-dimensional. All archaeologists (including those still in embryo) have been and still are being educated to reduce and then to represent three-dimensional archaeological information in two dimensions. This practice should not be decried or under-valued, nor should it be seen as a banal response to the absence of alternative technical solutions. We are dealing here with a complex process, the first requirement of which is the acquisition of insights into the cultural articulation of historical and archaeological contexts, while at the same time relying on a variety of methodological and technical skills. Only through wide-ranging training in the development and maturing of the researcher's critical faculties is it possible to negotiate the transition from three-dimensional reality to graphical or photographic representation in two dimensions. In essence survey and documentation, as well as photography, present not an alternative to reality but an interpretation of reality, whether it be of an object, a context or a landscape. This was neatly expressed by Gregory Bateson (1979) when he wrote that "THE MAP IS NOT THE TERRITORY, AND THE NAME IS NOT THE THING NAMED". Naturally, a good interpretation relies on a clear understanding of the object itself, and of its essential characteristics. These form essential preliminaries to a fuller understanding of the object and, in the best case, a general improvement of the subject. We are clearly dealing here with a process that is both essential to and irreplaceable in the practice of archaeological research.

In the last two decades the rapid growth of available technological support and the expanding field of its application have produced new opportunities which challenge traditional frames of reference that had remained virtually unchanged over time. In particular, laser scanning and digital photogrammetry (whether terrestrial or airborne) have an extraordinary potential for promoting a revolution in the documentation and recording of archaeological evidence and in its subsequent dissemination. But the availability of new instruments, however revolutionary in their potential impact, is not in itself sufficient cause to speak of a revolution in the field of archaeology generally. To play an active role in such advances a technique must be developed in such a way as to answer the real needs of the archaeologist. The full and proper development of a technique should allow the formulation of innovative procedures that match the needs of their field of application, facilitating the framing of new paradigms, new standards and therefore new methods of achieving real advances in archaeological understanding. Today, the use of 3D documentation systems, the creation of complex models which can be navigated and measured, and procedures for data handling and management present us almost daily with new challenges on questions such as measurement, documentation, interpretation and mapping. We must never forget that we are speaking here about the pursuit of archaeology, in which documentation is inseparably bound up with the processes of understanding and interpretation. Graphical documentation often represents the most appropriate means of explaining and communicating the complexity of the archaeological evidence, inter-relationships or contexts being described.

These thoughts will hopefully provide a basic frame of reference within which we can consider the introduction of innovative systems for 3D recording within archaeology. That said, it is time to discuss some of the problems that have emerged in the course of the last couple of decades.

In their early history the significance and the role of laser scanning and photogrammetric recording in archaeology were complicated by a number of misunderstandings and ambiguities. It may be useful to start by considering the experience of pioneering applications which placed a high emphasis on objectivity and the faithful representation of stratigraphical units, all too often ignoring the central dictum that the main challenge within any excavation (in itself a process that cannot be repeated) consists not of objective documentation of stratigraphical units but at root in the definition and interpretation of those units. In addition, excavation recording does not deal only with the relationship of stratigraphical units in terms of their volumes but also with such things as the consistency and composition of the strata and their chemical and physical characteristics. All of these are elements which can themselves change under the influence of variations in environmental conditions such as temperature, humidity and lighting, and, last but not least, in response to the skill and experience of the excavator. 3D documentation makes it possible to create an objective record of some aspects (such as volume and texture) of stratigraphical units that have been defined or influenced by the necessarily subjective interpretations made by the excavator. For this very reason the adoption of 3D recording does not in itself transform the process into an objective or neutral procedure, since the process of observation, and hence of understanding, cannot by its very nature be other than subjective. That said, it is undeniable that the essentially destructive and unrepeatable nature of excavation makes it imperative to employ recording systems that are as sophisticated and accurate as possible at the time concerned. In the context of the present day the most relevant techniques in this respect are undoubtedly photogrammetry and laser scanning.

Equally central to the problems that have arisen in the first applications of 3D recording, whether of buildings or of excavations, lies the misleading idea that 3D recording can act as a substitute for traditional methods of documentation. It may be useful here to draw a parallel with photography. When the possibility of capturing and using photographs, whether aerial or terrestrial, in archaeological work was first proposed, the photographs did not replace traditional landscape or excavation recording but rather complemented them, adding a new form of documentation which in its turn required interpretation and sometimes graphical representation of the archaeological information present in the photographs. 3D documentation presents an innovative means of executing and representing measurements taken from archaeological sites, objects or contexts. It makes possible the acquisition of an extraordinary amount of positional data and measurements, the density of which is conditioned by the scanning density (for example one point every three mm, etc.) or the resolution of the camera sensor (in both cases the final criterion being distance from the object). These factors must be matched to the characteristics of the object or context being documented. But as with a photograph, the point cloud produced by these methods is an intermediate document between reality and its conceptual representation (limited, of course, to those elements of reality that can be described in three dimensions). It is certainly true, however, that the aim of the recording work, which traditionally focuses on taking measurements in the field, can in the case of laser scanning and digital photogrammetry be re-allocated to a later stage, reducing the amount of time spent on this process in the field.

Ambiguities have also emerged in the recording of historical buildings and field monuments. After the initial euphoria generated by the possibility of documenting large structures in three dimensions, in particular through laser scanning, there has been a progressive cooling of enthusiasm for this technique because of poor understanding of the functions of point clouds, meshes and 3D models in general. In this case, too, 3D recording must be placed in the context of the process of understanding and documenting the structures concerned. The situation for the recording of buildings is in fact exactly analogous to that described above for archaeological excavations: the process relies above all on the reading and interpretation of the structural elements and related characteristics of the monuments under survey. In a recent manual on survey and recording, Bertocci and Bini (2012) noted in their introduction how a good survey engages with the history of the building, identifying the chronological phases, pointing out variations of technique, underlining stratigraphical relationships, noting anomalies, clarifying engineering choices and summarising in the final documentation the forms, colours, state of preservation and quality of the materials used in the building's construction. Thus the resulting record represents a synthesis of measurement combined with reading and interpretation of the structure and its development over time. It is evident that if one makes use only of measurement, however accurate and detailed (as in the case of the point clouds produced by laser scanning or photogrammetry), the absence of reading and interpretation irrevocably limits the results.
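To give a feel for the orders of magnitude behind the sampling densities mentioned above, the following illustrative calculation (the function names and the camera figures are invented for the example; only the 60 pts/m2 helicopter-LiDAR density is cited in the text) relates sensor resolution and distance to ground sample distance, and point density to mean point spacing:

```python
import math

def camera_gsd(pixel_pitch_mm: float, focal_mm: float, distance_m: float) -> float:
    """Ground sample distance (metres per pixel) for a frame camera:
    GSD = pixel pitch * distance / focal length."""
    return pixel_pitch_mm * distance_m / focal_mm

def mean_point_spacing(density_pts_per_m2: float) -> float:
    """Approximate spacing of a roughly uniform point cloud: 1 / sqrt(density)."""
    return 1.0 / math.sqrt(density_pts_per_m2)

# A hypothetical 5-micron pixel, 50 mm lens, photographed 10 m from the object:
print(round(camera_gsd(0.005, 50.0, 10.0) * 1000, 2), "mm per pixel")  # 1.0 mm per pixel
# The helicopter LiDAR density cited in the text, 60 pts/m^2:
print(round(mean_point_spacing(60.0), 3), "m between points")  # 0.129 m between points
```

The second result matches the chapter's rule of thumb that 60 pts/m2 corresponds to roughly 10 cm ground resolution, and shows why distance from the object is the final criterion in both cases.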

In this connection we need to make a distinction of scale, albeit macroscopic, for the various applications, distinguishing between sites, landscapes and individual objects (see Barceló elsewhere in this volume). Though there are numerous 3D techniques that can be applied at the landscape scale, we will limit our discussion at this stage to some comments on photogrammetry, LiDAR and spatial analysis.

Amongst the requirements of landscape archaeologists the detailed understanding of morphology occupies a role of central importance. From the very outset cartography has provided an essential underpinning for every aspect of archaeological research, from the initial planning, through the fieldwork, to the documentation of the observations made (in terms of their position) and finally to archaeological interpretation and publication.

The first use of photogrammetry for archaeology was undertaken in 1956 in Italy by Castagnoli and Schmiedt in a study of the townscape of Norba (Castagnoli, Schmiedt 1957). This method of research and documentation continues to represent a fundamental instrument for the understanding and documentation of complex contexts where the required level of detail is very high and the need for precision is therefore increased. Over the years Italy has seen many aerophotogrammetry projects across a wide range of cultural contexts in terms of history, morphology and topography: one might think for example of Heraclea, Serra di Vaglio, Ugento, Vaste, Cavallino, Rocavecchia, Arpi and Veio. Amongst the needs that can be met by this form of archaeological analysis and documentation there should be mentioned the opportunity to explore the landscape in three dimensions, achieving a complex cartographic product consisting not only (as applies in typically two-dimensional GIS systems) of the vectorisation of archaeological features but most of all of the recording and representation of the topographical characteristics of the landscape within which they sit. In this way the landscape context is represented in its most up-to-date and systematic interpretation of the term, as an interaction between human activity and the environmental context, and not in any way as something that can be separated from the monuments, structures and the connective tissue of sites, field systems, communication routes etc. that sit within it. Recent developments in digital photogrammetry, and an overall lowering of costs, have helped to promote access to instrumentation that was previously limited in its availability. In reality, however, this is only partially the case, since digitally-based cartographic restitution remains a highly specialised activity undertaken by a relatively small number of highly skilled specialists.

Archaeologists have long been aware that the presence of archaeological features can be revealed by relatively modest variations in the morphology of the ground surface. Ever since the beginnings of archaeological air photography researchers have made use of raking (oblique) lighting to emphasise the shadows cast by small variations in the surface morphology. This diagnostic technique, widely used to the present day, is however limited by the brevity of the windows of opportunity in which it works effectively and by the rigidity of the resulting documentation. Digital photogrammetry, and above all LiDAR, can to a large extent offset these limitations, offering opportunities not previously available. LiDAR measures the relative elevation of the ground surface, and of features such as trees and buildings upon it, across large areas of landscape with a resolution and accuracy hitherto unattainable except through labour-intensive field survey or photogrammetry. At a conference in 2003 Robert Bewley, then Head of English Heritage's Aerial Survey Unit, argued that the introduction of LiDAR is probably the most significant development for archaeological remote sensing since the invention of photography (Bewley 2005). Over the years since then LiDAR applications have been developed widely around Europe, and particularly in the UK, Austria, France, Germany, Norway and Italy (Cowley, Opitz 2012). Currently the principal advantage of LiDAR for archaeologists is its capacity to provide a high-resolution digital elevation model (DEM) of the landscape that can reveal micro-topography which is virtually indistinguishable at ground level because of erosion by ploughing or other agencies. Techniques have been developed for the digital removal of modern elements such as trees and buildings so as to produce a digital terrain model (DTM) of the actual ground surface, complete with any remaining traces of past human activity. An extremely important characteristic of LiDAR is its ability to penetrate woodland or forest cover so as to reveal features that are not distinguishable through traditional prospection methods or that are difficult to reach for ground-based survey (as, for instance, in work at Leitha Mountain, Austria, described in Doneus, Briese 2006). There have been other notable applications at Elverum in Norway (Risbøl et al., 2006), Rastatt in Germany (Sittler, Schellberg 2006), in the Stonehenge landscape and at other locations in the UK (Bewley et al., 2005; Devereux et al., 2005) and, returning to America, at Caracol in Belize (Weishampel et al., 2010). Currently the cutting edge of LiDAR applications in archaeology is represented by the use of a helicopter as the imaging platform, allowing slower and lower flight paths and use of the technique's multiple-return features with ultra-high frequency, enabling much higher ground resolution. Densities of up to 60 pts/m2 (about 10 cm resolution) can be obtained by these methods, permitting the recording of micro-topographic variations even where the remains of archaeological features are severely degraded. When used in combination with the multiple-return facility of the LiDAR pulse these densities can also allow effective penetration of even the most densely vegetated areas, revealing otherwise hidden archaeological features beneath the tree canopy (Shaw, Corns 2011).

It is worth mentioning here that interest in the LiDAR technique is not limited to its potential for penetrating woodland areas but extends also to its contribution to the study of open landscapes dominated by pasture or arable cultivation. In these areas, as under woodland cover, the

availability of extremely precise digital models of the ground surface makes it possible to highlight every tiny variation in level by using computer simulations to change the direction or angle of the light and/or to exaggerate the value of the z coordinate. If properly applied, the LiDAR technique could prove revolutionary in its impact on the process of archaeological mapping, making it possible to record the previously hidden archaeological resource within woodland areas and apparently levelled landscapes. In favourable circumstances it might even be possible to uncover whole fossil landscapes. This could have a dramatic impact on opportunities for archaeological and landscape conservation and management, as well as on scientific investigation of settlement dynamics at various times in the past.

By this point we have hopefully overcome the first step in the process of understanding. It remains true, however, that the role of the archaeologist is fundamental in integrating observations made in the field with the three-dimensional model, so as to advance the reading and interpretation of the landscape, monument or excavation in its 3D (or better still 4D) complexity through new methods of working and the development of new hardware and software.

Now, however, we must tackle another major problem: the software suites for the management of 3D and 4D data. In addressing this problem it may be useful to take a step back and to note the central role that has been played since the early 1990s by the availability of relatively low-cost and user-friendly GIS systems. Although essentially 2D in character, these have permitted innovation in almost every sector of archaeological work, from landscape studies to excavation recording and consideration of the great intellectual themes of archaeology, from the local to the global context (economy, production, exchange systems etc.). GIS has above all provided a common language which has facilitated interaction and integration between all kinds of documents, observations or phenomena, so long as they can be represented in one way or another in terms of geographical coordinates. GIS has provided a spur for innovative methods of measurement and documentation, and hence for the development of methodologies and technologies which have led to the creation of a shared work-space within which data can be managed, visualised and integrated with other sources of information so as to maximise the outcome through analysis and communication within a single digital environment. GIS has produced a common mechanism in the ambit of scientific innovation, acting as a trigger in a process that has forced archaeologists to re-think and then to extend the methodological and technical underpinning of archaeological practice. In more practical terms it has made a much-needed breach in a sector of study that has tended to be conservative and traditional. Within a decade or so archaeologists realised that, in addition to permitting the integration of almost any form of information gathered in the past by traditional means, it was necessary to adapt a whole series of present-day and future procedures through the introduction of solutions (such as GPS survey, remote sensing and mobile technology etc.) that are methodologically and technologically appropriate for meeting real archaeological needs.

All of this has at the moment not come to pass for those who intend to operate in 3D (or 4D). This represents an absolutely central problem which lies at the root of many present-day limits on the diffusion of 3D working. In particular this very serious lacuna expresses itself most of all in the absence of 3D analytical instruments, and therefore in the difficulty of extracting original archaeological information not otherwise identifiable in 2D, limiting the contribution of 3D to an increase in the quality of the documentation and to subsequent elaborations focused on communication. A significant outcome, though altogether secondary with respect to the primary purpose for which archaeological data ought to be exploited: new archaeological information. As has been said many times by Maurizio Forte, of Duke University in the USA, a fundamental need lies in the availability for archaeologists of an "open space" into which it is possible to insert data acquired at various times in the past, stratifying the information and at every stage measuring and comparing the original observations, data or stratigraphical relationships, but also wherever possible modifying and updating that data in the light of new evidence. GIS provides an open working environment which allows the management, analysis, enhancement, processing, visualisation and sharing of hypothetical interpretations. The first change of practice that this thought ought to provoke is a move towards the acquisition and management of 3D data from the very outset of any research project rather than (as so often happens) at the end of the cognitive process: we should no longer find ourselves in the position of hearing people say "Now that we have studied everything, let's make a nice reconstruction" (Forte 2008). This should lead to a reversal of the process, in which the 3D model no longer constitutes the end but rather the means of achieving better understanding through analysis and simulation. It should also promote the better sharing and communication of archaeological information, ideas and concepts, in the first instance amongst our professional colleagues and then with the public at large.
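The LiDAR visualisation described earlier, relighting a digital terrain model from a chosen direction and exaggerating the z coordinate to reveal micro-topography, can be sketched with a standard Horn-style hillshade. This is a minimal illustration assuming the DTM is a gridded NumPy array; the function, the synthetic earthwork and all parameter values are invented for the example:

```python
import numpy as np

def hillshade(dem: np.ndarray, cell: float, azimuth_deg: float = 315.0,
              altitude_deg: float = 45.0, z_factor: float = 1.0) -> np.ndarray:
    """Simulate raking light over a DEM/DTM. azimuth/altitude set the light
    direction; z_factor exaggerates the relief before shading."""
    az = np.radians(360.0 - azimuth_deg + 90.0)   # compass to math convention
    alt = np.radians(altitude_deg)
    dz_dy, dz_dx = np.gradient(dem * z_factor, cell)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(dz_dy, -dz_dx)
    shaded = (np.sin(alt) * np.cos(slope) +
              np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

# Synthetic 50 x 50 m DTM with a faint linear earthwork only 5 cm high:
dem = np.zeros((50, 50))
dem[:, 24:26] += 0.05
flat = hillshade(dem, cell=1.0)                 # barely visible under default light
exag = hillshade(dem, cell=1.0, z_factor=20.0)  # relief exaggerated 20x
# Exaggeration increases the contrast (standard deviation) of the shaded image,
# which is exactly how ploughed-out features are made visible on screen.
```

In practice one would loop over several azimuths (or use multi-directional shading) so that features of every orientation cast a shadow.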

Paradoxically, 3D technology tends to be more appreciated and sought after by the general public than by archaeologists. The communication and entertainment sectors are amongst the major consumers of 3D models of landscapes, buildings and objects. A further leap in this direction can be attributed to the development and wider availability of mobile technology in the form of smartphones and tablets. Techniques such as augmented reality offer wide scope for innovative development and practical applications. There remains one final aspect to be tackled, however: the reluctance of archaeologists themselves to start thinking in 3D. During the Fourth International Congress on Remote Sensing in Archaeology at Beijing in 2012, in the course of a discussion on problems associated with the spread of new technologies and methods of working within archaeology, Professor Armin Gruen told a revealing story. After weeks of difficult fieldwork and months of processing in the laboratory he was about to present to the archaeologists a very precise and detailed digital 3D model of their site. At the presentation he showed the archaeologists a whole series of opportunities for measurement, navigation and spatial analysis within the model (sections, slopes, surface erosion, varying display methods, perspective, etc.). At the end of the demonstration the first comment by the archaeologists was: "Beautiful, extraordinary, but ... can it provide us with a plan?" Clearly there is a hint of exasperation, and a degree of paradox, in this story, but perhaps the most significant aspect lies in a statement made earlier in this contribution: we have been educated to reduce reality from three dimensions to two, and thus we are in the habit of thinking in 2D. Intuitively, or in some cases theoretically, we are well aware of the informative value of the third dimension, but we nevertheless find it difficult to imagine, visualise or represent objects, contexts, landscapes and phenomena from the past in three dimensions. Faced with this difficulty it is hard to imagine how complex it might be to achieve clear 3D thought processes that will permit the identification of archaeological problems and the framing of relevant questions in the search for solutions. This kind of short circuit might perhaps be circumvented through a point mentioned earlier: the need to apply, from the beginning to the end of the archaeological process, all of the technological instruments and procedures that we can call upon to help us to manage and benefit from the availability of 3D data. In a sense we need a new "magic box", an instrument which, like GIS in its own time, can act as a bridgehead for the implementation of 3D thinking in its totality, advancing from a two-dimensional to a three-dimensional vision both of the initial archaeological evidence and of the questions to which that evidence gives rise.

[...] advantages to archaeological research in general. This could involve, in the most extreme cases (at least in the initial stages), a central role for the technologist, though he in turn would have to acquire a basic competence in archaeology so as to work in close cooperation with archaeologists. This reversal of perspective, together with collaboration with engineers, architects, geophysicists and information technologists, has made the writer reflect on the necessity of working together, of risking a degree of "technological pollution" while at the same time conserving a proper scientific approach to innovation. Otherwise one might run the risk of running into another short circuit ... as Henry Ford once said: "If I had asked my customers what they wanted they would have said a faster horse."

Reference
BATESON, G. 1979. Mind and Nature: A Necessary Unity
(Advances in Systems Theory, Complexity, and the
Human Sciences). Hampton Press.
BERTOCCI, S.; BINI, M. 2012. Manuale di rilievo
architettonico e urbano, Torino.
BEWLEY, R.H. 2005. Aerial Archaeology. The first
century. Bourgeois J., Meganck M. (Eds.), Aerial
Photography and Archaeology 2003. A century of
information, Academia Press, Ghent, pp. 15-30.
BEWLEY, R.H.; CRUTCHLEY, S.; SHELL, C. 2005. New
light on an ancient landscape: LiDAR survey in the
Stonehenge World Heritage Site. Antiquity, 79 (305),
pp. 636-647.
CASTAGNOLI, F.; SCHMIEDT, G. 1957. Lantica citt di
Norba, in LUniverso, XXXVII, pp. 125-148.
COWLEY, D.C.; OPITZ, R.O. 2012. Interpreting
Archaeological Topography. 3D Data, Visualisation
and Observation, Oxford.
DEVEREUX, B.J.; AMABLE, G.S.; CROW, P.; CLIFF, A.D.
2005. The potential of airborne lidar for detection of
archaeological features under woodland canopies,
Antiquity, 79 (305), pp. 648-660.

1.2 CONCLUSION
Finally, a brief personal reflection on those who
undertake research and the ways in which research can be
pursured. The writer has long been a keen supporter of
the view that technological and methodological research
in archaeology, and in heritage management generally,
should be initiated or at least guided by the desire to
answer essentially historical questions. This implies a
central role for the archaeologist but at the same time
requires him to acquire technical skills so that he can
work closely and productively with engineers, physicists
and other specialist. Every other approach carries with it
the risk of a degenerative drift in research. However, the
experience of the last few years of experimentation in 3D
technology has led him to take a more flexible line,
without in any sense denying the central role of the
archaeologist and of inherently archaeological questions.
That said, he now sees possible value in testing
innovative technologies without necessarily starting from
specific archaeological question rather than from the
desire to see whether such techniques can offer

DONEUS, M.; BRIESE, C. 2011. Airborne Laser Scanning


in forested areas Potential and limitations of an
archaeological prospection technique. D. Cowley
(Ed.): Remote Sensing for Archaeological Heritage
Management, EAC Occasional Paper 5, Reykjavk
Iceland 25-27 March 2010, Brussel, pp. 59-76.
FORTE, M. 2008. La Villa di Livia, un percorso di
Ricerca di archeologia virtuale, LErma di
Bertschneider, Roma, pp. 54-68.
RENFREW, C.; BAHN, P. 2012. Archaeology: Theories,
Methods and Practice, London.
RISBL, O.; GJERTSEN, A.K.; SKARE, K. 2006. Airborne
laser scanner of cultural remains in forest: some
preliminary results from Norwegian project. S.
Campana, M. Forte (Eds.): From Space to Place. 2nd
International Conference on Remote Sensing in
Archaeology, CNR National Research Council

11

3D MODELING IN ARCHAEOLOGY AND CULTURAL HERITAGE

Forte (Eds.): From Space to Place. 2nd International


Conference on Remote Sensing in Archaeology, CNR
National Research Council Roma 4-7 December
2006, BAR Oxford 2006, pp. 117-122.

Roma 4-7 December 2006, BAR Oxford 2006, pp.


107-112.
SHAW, R.; CORNS, A. 2011. High Resolution LiDAR
specifically for archaeology: are we fully exploiting
this valuable resource?, D.C. Cowley (Ed.): Remote
Sensing for Archaeological Heritage Management
EAC Occasional Paper 5, Reykjavk Iceland 25-27
March 2010, Brussel 2011, pp. 77-86.

WEISHAMPEL, J.F.; CHASE, A.F.; CHASE, D.Z.; DRAKE,


J.B.; SHRESTHA, R.L.; SLATTON, K.C.; AWE, J.J.;
HIGHTOWER, J.; ANGELO, J. 2010. Remote sensing of
ancient Maya land use features at Caracol, Belize
related to tropical rainforest structure. S. Campana,
M. Forte, C. Liuzza (Eds.): Space, Time, Place: 3rd
International Conference on Remote Sensing in
Archaeology, Tiruchirappalli Tamil Nadu India 17-21
August 2009, BAR Oxford, pp. 45-52.

SITTLER, B.; SCHELLBERG, S. 2006. The potential of


LIDAR in assessing elements of cultural heritage
hidden under forest or overgrown by vegetation:
Possibilities and limits in detecting microrelief
structures for archaeological surveys. S. Campana, M.

12

ARCHAEOLOGICAL AND GEOMATIC NEEDS

1.2 GEOMATICS AND CULTURAL HERITAGE


F. REMONDINO

Geomatics, according to the Oxford Dictionary, is defined as "the mathematics of the earth": the science of collecting (with some instruments), processing (with some techniques), analyzing and interpreting data related to the earth's surface. Geomatics refers to these data and techniques, although the term Geoinformatics is also often used.

Cultural Heritage can be seen as the tangible (physical) or intangible legacy inherited from past generations. Physical heritage includes buildings and historic places, monuments, artefacts, etc., that are considered worthy of preservation for the future, including objects significant to the archaeology, architecture, science or technology of a specific culture.

Thus Geomatics for Cultural Heritage uses techniques (photogrammetry, laser scanning, etc.) and practices for scene recording and digital modeling, possibly in three dimensions (3D), for the successive analyses and interpretations of such spatially related data.

Traditional recording methods were mainly hand recording, e.g. by means of tape measurement, and therefore subjective, time consuming and applicable only to small areas. Geomatics 3D recording methods, on the other hand, are modern, digital, objective, rapid and cost effective. Geomatics techniques rely on harnessing the electromagnetic spectrum and are generally classified into active (range-based) and passive (image-based) techniques.

Figure 1. Geomatics and its related techniques and applications

Figure 2. Existing Geomatics data and sensors according to the working scale and object/scene to be surveyed

Figure 3. Geomatics techniques for 3D data acquisition, shown according to the object/scene dimensions and complexity of the reconstructed digital model

1.3 3D MODELLING AND SHAPE ANALYSIS


IN ARCHAEOLOGY
Juan A. BARCELÓ

Archaeology seems to be a quintessentially visual discipline, because visual perception makes us aware of such fundamental properties of objects as their size, orientation, shape, color, texture, spatial position and distance, all at once. Visual cues often tell us about more than just optical qualities. We see what we suppose are tools, rubbish generated by some past society, the remains of their houses... Are we sure that we are right? Why does this object look like a container? Why does this other one seem to be an arrow point? Why are those stones interpreted as the remains of a house? In which way can an activity area within an ancient hunter-gatherer settlement be recognized as such?

Most of these questions seem out of order when using a range-scanner or a photogrammetric camera. Current uses of technology in archaeology seem addressed to simply telling us what happens now at the archaeological site. They do not tell us what happened in the past, nor why or how. What is being seen in the present is the consequence of human action in the past, interacting with natural processes through time. Human action exists now, and existed in the past, through its capacity to produce and reproduce labor, goods, capital, information, and social relationships. In this situation, the obvious purpose of what we perceive in the present is to be used as evidence of past actions. It is something to be explained, and not something that explains social action in the past. In that sense, production, use and distribution are the social processes which in some way have produced (cause) archaeologically observed properties (size, shape, composition, texture, place, time) (effect). Archaeological artifacts have specific physical properties because they were produced so that they had those characteristics and not others. And they were produced in that way, at least partially, because those things were intended for some given uses and not others: they were tools, or consumed waste material, or buildings, or containers, or fuel, etc. If objects appear in some locations and not in any others, it is because social actions were performed in those places and at those moments. Therefore, archaeological items have different shapes, sizes and compositions. They also have different textures, and appear at different places and in different moments. That is to say, the changes and modifications in form, size, texture, composition and location that nature experiences as the result of human action (work) are determined somehow by the actions (production, use, distribution) that provoked their existence.

It is my view that the real value of archaeological data comes from the ability to extract useful information from them. This is only possible when all relevant information has been captured and coded. However, archaeologists usually tend to consider only very basic physical properties, like size and a subjective approximation to shape. Sometimes texture, that is, the visual appearance of a surface, is also taken into account, or the mineral/chemical composition. The problem is that in most cases such properties are not rigorously measured and coded. They are applied as subjective adjectives, expressed as verbal descriptions that prevent other people from using the description without having seen the object. If the physical description of such visual properties is vague, then the possibility of discovering the function the artifact had in the past is compromised: we can hardly infer the object's physical structure. The insufficiency of, and lack of clear consensus on, the traditional methods of form description (mostly visual, descriptive, ambiguous, subjective and qualitative) have invariably led to ambiguous and subjective interpretations of function. It is thus strongly advisable to systematize, formalize and standardize methods and procedures that are more objective, precise, mathematical and quantitative, and whenever possible automated.

Why is 3D so important when measuring and coding shape information? There are still archaeologists who think that it is just a question of realism in the representation, of enhanced aesthetic qualities. As a matter of fact, visual impressions may seem more accurately represented by the two-dimensional projective plane, but experience proves the value of the third dimension: the possibility of understanding movement and dynamics, that is, use and function. The three-dimensionality of space is a physical fact like any other. We live in a space with three different degrees of freedom for movement. We can go to the left or to the right. We can go forward or backward. We can go up or we can go down. We are allowed no more options. However, a rigid body in three-dimensional space has six degrees of freedom: three linear coordinates defining the position of its center of mass (or any other point) and three angles defining its rotation around the center of mass. Rotations add three more closed dimensions (dimensions of orientation). You can then imagine a 6D space with six intersecting lines, all mutually orthogonal: the three obvious lines resulting from the possibilities for describing movement in absolute terms (without considering the object itself), and three additional orientations resulting from a relative description of movement (considering not only the movement of the object with reference to a fixed point in the landscape, but also the movements of the object with respect to itself). Each of these coordinates represents the set of all possible orientations about some axis. Any movement we make must be some combination of these degrees of freedom, and any point in our space can be reached by combining the three possible types of motion. Up/down motions are hard for humans: we are tied to the surface of the Earth by gravity. Hence it is not hard for us to walk along any surface not obstructed by objects, but we find it difficult to soar upwards and then downwards; many archaeologists prefer paper-and-pencil drawings, or digital pictures, to summarize what they can see. But such flat representations do not allow the study of movement.

Let us consider the idea of SHAPE. Shape is "the structure of a localised field constructed around an object" (Koenderink 1990; Leymarie 2011). In other words, the shape of an object located in some space can be defined as the geometrical description of the part of that space occupied by the object, as determined by its external boundary, abstracting from location and orientation in space, size, and other properties such as colour, content, and material composition (Rovetto 2011). We call such boundaries of separation between two phases surfaces. A phase is a homogenous mass of substance, solid, liquid or gas, possessing a well-defined boundary. When we have two phases in mutual contact we have an interfacial boundary, called an interface. The surface of a solid kept in atmosphere is in fact an air-solid interface, although it is often simply referred to as a solid surface. We can also conceive of a solid-solid interface, which occurs when two solids or solid particles are brought into mutual contact. By combining surfaces and discovering discontinuities between boundaries we recognize shapes in objects, so to speak, and this is how we linguistically understand shape as a property. These physical or organic shapes do not reflect the exact specifications of geometrical descriptions of the part of space occupied by each object; they approximate geometric shapes. We may treat the geometrical description of the part of space occupied by each object as if it existed independently, but common sense indicates that it is an abstraction with no exact mind-external physical manifestation, and it would be a mistake to betray that intuition. That which we consider to be shape is intimately dependent on that which has the shape. In the mind-external world, shapes, it seems, are properties of things. Things must have a shape, i.e. be delineated by a shape. We say that a physical object exhibits a shape. Thus, shapes must always be shapes of something in the mind-external world. Outside idealized geometric space, it does not make sense to posit the existence of an independently existing shape, a shape with no bearer. The shape cannot exist, except as an idea, without an entity that bears, exhibits, or has that shape (Rovetto 2011). Shape so delineated is a property dimension, which is quite consistent with the fact that some shapes in turn have (second-order) properties such as being symmetric, being regular, being polyhedral, and mathematical properties such as eccentricity (Johansson 2008). If a shape is defined as having a particular number of sides (as with polygons), a particular curvature (as with curved shapes, such as the circle and the ellipse), specific relations between sides, or otherwise, then it should be apparent that we are describing properties of properties of things. We might be inclined to say that it is the shape that has a certain number of angles and sides, rather than the object bearing the shape in question, but this is not entirely accurate (Rovetto 2011). The distinction between geometric and physical space, between ideas or ideal cognitive constructions and material mind-external particulars, is significant.

Therefore, the insistence on working with three-dimensional visual properties is not just a hobby of technically oriented professionals. Fortunately for us, technology has produced the right tools for such a task: range-scanning and photogrammetric devices. This book discusses such technology. These devices can be considered instrumental observers able to generate as output a detailed point cloud of three-dimensional Cartesian coordinates in a common coordinate system, describing the surfaces of the scanned object. An object's form is then expressed in terms of the resulting point cloud.

The most obvious use of range-scanning and photogrammetry is then the calculation of the observation's surface model. It can be defined in terms of the lines and curves defining the edges of the observed object. Each line or curve element is separately and independently constructed from the original 3D point coordinates. The resulting polygon mesh is a set of connected, polygonally bounded planar surfaces, represented as a collection of edges, vertices and polygons connected in such a way that each edge is shared by at most two polygons. The resulting 3D geometric models are no doubt impressive and contain all the information we would need to calculate the particular relationship between form and function. However, we should consider surface models as an intermediate step in the process of quantifying shape. Each 3D model has to be identified with a shape descriptor, providing a compact overall description of the shape. What we need is an approach towards the statistical analysis of shapes and forms. In other words, it is not the particular high-resolution details of a single pot, knife or house that interest us, but shape-and-form variability within a well-specified population of archaeological observables. This approach has some tradition in 2D shape analysis. Russ (2002) lists some of the relevant descriptors:

1) Elongation. Perhaps the simplest shape factor to understand is Aspect Ratio, i.e. length divided by breadth, which measures an aspect of the elongation of an object:

AspectRatio = length / width, or MaximumDiameter / MinimumDiameter

2) Roundness. This measures the degree of departure of an object's two-dimensional binary configuration from a circle. It is based not on a visual image or an estimate of shape, but on the mathematical fact that, for a fixed area, an increase in the length of the object causes the shape to depart from a circle:

Roundness = 4 · Area / (π · MaximumDiameter²)

The roundness calculation is constructed so that the value for a circle equals 1.0, while departures from a circle result in values less than 1.0 in direct proportion to the degree of deformation. For instance, a roundness value of 0.492 corresponds approximately to an isosceles triangle.

3) Shape Factor (or Formfactor). Similar to Roundness, but it emphasizes the configuration of the perimeter rather than the length relative to the object area. It is based on the mathematical fact that a circle (Shape Factor also equal to 1.0), compared to all other two-dimensional shapes (regular or irregular), has the smallest perimeter relative to its area. Since every object has a perimeter length and an area, this relationship can be used to quantify the degree to which an object's perimeter departs from that of a smooth circle, resulting in a value less than 1.0:

Formfactor = 4π · Area / p²

In the equation, p is the perimeter of the contour and Area is a measure of the surface of the object. Notice that the formfactor varies with surface irregularities, but not with overall elongation. Squares are around 0.78, while a thin thread-like object would have the lowest shape factor, approaching 0.

4) Quadrature. The degree of quadrature of a solid, where 1 is a square and 0.800 an isosceles triangle:

Quadrature = 4 · √Area / p

In the equation, p is the perimeter of the contour and Area is a measure of the surface of the object.
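The descriptors above reduce to a few lines of code once the outline's area and perimeter are known. The following is a minimal sketch (the function names are ours, not Russ's): it computes the area and perimeter of a polygonal outline with the shoelace formula and derives the shape factors; for a unit square the formfactor comes out at π/4 ≈ 0.785, as stated above.

```python
import math

def polygon_area_perimeter(pts):
    """Shoelace area and perimeter of a closed polygon given as (x, y) vertices."""
    area, perim = 0.0, 0.0
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        area += x1 * y2 - x2 * y1
        perim += math.hypot(x2 - x1, y2 - y1)
    return abs(area) / 2.0, perim

def max_diameter(pts):
    """Longest distance between any two vertices (a simple Feret diameter)."""
    return max(math.hypot(ax - bx, ay - by)
               for i, (ax, ay) in enumerate(pts)
               for bx, by in pts[i + 1:])

def formfactor(area, perim):
    # 4*pi*A / p^2: 1.0 for a circle, ~0.785 for a square
    return 4.0 * math.pi * area / perim ** 2

def roundness(area, dmax):
    # 4*A / (pi * Dmax^2): 1.0 for a circle, smaller as the outline elongates
    return 4.0 * area / (math.pi * dmax ** 2)

def quadrature(area, perim):
    # 4*sqrt(A) / p: 1.0 for a square
    return 4.0 * math.sqrt(area) / perim

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
a, p = polygon_area_perimeter(square)
print(round(formfactor(a, p), 3))   # 0.785
print(round(quadrature(a, p), 3))   # 1.0
print(round(roundness(a, max_diameter(square)), 3))   # 0.637
```

In practice the outline would come from the silhouette or section of a recorded artifact rather than being typed in by hand, but the same three ratios apply unchanged.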

Shape indexes allow the integration of all parameters related to the 2D geometry of the objects' interfacial boundaries into a single measurement, in such a way that a statistical comparison of such parameters allows a complete description of visual variability in a population of material evidence (Barceló 2010).

Unfortunately, many of the descriptors that have been proposed as 2D shape measures cannot be directly generalized to 3D (Lian 2010), and we have already argued the relevance of a proper 3D analysis. Up to now, just a few global form descriptors with direct meanings for 3D models have been proposed, where each of them describes 3D objects in a quite different manner, thereby providing new and independent information. Compactness indices, for example, may describe:

1) The extent to which a 3D mesh is spherical (Wadell 1935; Asahina 2011). The sphericity, Ψ, of an observed entity (as measured using the range-scanning device) is the ratio of the surface area of a sphere with the same volume as the given entity to the surface area of the entity:

Ψ = π^(1/3) · (6Vp)^(2/3) / Ap

where Vp is the volume of the object or archaeological building structure and Ap is the surface area of the object. The sphericity of a sphere is 1 and, by the isoperimetric inequality, any particle which is not a sphere will have sphericity less than 1.

2) The extent to which a 3D mesh is a cube. The cubeness Cd(S) of an observed entity (as measured using the range-scanning device) is the ratio of the surface area of a cube with the same volume as the given entity to the surface area of the entity, A(S). If the shape is subdivided into facets or voxels, then n(S) represents the number of different faces which form the shape. Cd(S) picks up the highest possible value (which is 1) if and only if the measured shape is a cube.

Similar indexes can be calculated for cylinders, or ellipsoids, or even in the case of rectilinear shapes. The fundamental role of such indexes is that they correspond to the different ways archaeological observables are judged to be similar or different. Accordingly, the form of an archaeological artifact can be defined as an n-dimensional vector space, whose axes represent global shape-and-form parameters or further vector spaces denoting different domains of the same idea of shape.

But we need much more than shape. The surfaces of archaeological objects, artifacts, and materials are not uniform but contain many variations, some of them of a visual or tactile nature. Archaeological materials have variations in the local properties of their surfaces, like albedo and color variations, density, coarseness, roughness, regularity, hardness, brightness, bumpiness, specularity, reflectivity, transparency, etc. Texture is the name we give to the perception of these variations. What I am doing here is introducing a synonym for perceptual variability or surface discontinuity: a kind of perceptual information complementing shape information.

Color, brightness and hue are the most obviously visual appearances of any entity. For too long they have been described subjectively using words: green, red, yellow... Now digital photography, spectrometry and specialized software allow a formal quantification of color information and its relative properties. More complex is the case of surface micro-topography. We already have a vocabulary of micro-topographic values: coarseness, roughness, smoothness, polish, burnish, bumpiness, waviness, all of which are the result of micro-topographic irregularities. Such variation is of fundamental importance for discovering the past function of ancient objects, because the surface of solids plays a significant role in interfacial phenomena, and its actual state is the result of the physical forces that have acted on that surface.

To represent micro-variation all we have to do is indicate the relative positions and elevations of surface points with differential interfacial contribution. The resolution of modern range scanners is enough to measure the tiny details of complex micro-structures, measuring depth and height at well-localized points within the surface and allowing us to measure their spatial variability. A modern laser scanner captures surface data points less than 50 microns (0.05 mm) apart, producing high-density triangular meshes with an average resolution of over 1000 points per cm².

As in the case of form, a simple spatially invariant measurement of heights and depths at the micro-level of a single surface is not enough. Modern research in surface analysis, notably in geometry and material science, has proposed dozens of suitable texture parameters, like average roughness, texture aspect, texture direction, surface material volume, autocorrelation, average peak-to-valley, etc.

Although archaeology has been traditionally considered a quintessentially visual discipline (Shelley 1996), we also need non-visual features to characterize ancient objects and materials (i.e., compositional data based on mass spectrometry, chronological data based on radioactive decay measures, etc.). Once we include non-visual data we have the initial elements for beginning a true explanatory analysis of the recorded archaeological elements.

Why are archaeological artifacts the way they are? A possible answer to this question would be: because objects have a distinctive appearance for the sake of their proper functioning. This function would be distinguished from other non-functional (or accidental) uses by the
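The two compactness indices described above reduce to simple ratios once a closed mesh's volume and surface area are known. The sketch below is illustrative only: the function names are ours, the volume and area are assumed to come from an external mesh tool, and the cubeness is written directly from the verbal definition given above (surface area of the equal-volume cube divided by the measured surface area).

```python
import math

def sphericity(volume, area):
    """Wadell's sphericity: area of the equal-volume sphere over the
    measured surface area; 1.0 for a sphere, below 1 otherwise."""
    return math.pi ** (1.0 / 3.0) * (6.0 * volume) ** (2.0 / 3.0) / area

def cubeness(volume, area):
    """Area of the equal-volume cube over the measured surface area,
    following the verbal definition in the text; 1.0 for a cube."""
    return 6.0 * volume ** (2.0 / 3.0) / area

# A unit cube, as measured e.g. from a closed triangle mesh: V = 1, A = 6
print(round(cubeness(1.0, 6.0), 3))     # 1.0
print(round(sphericity(1.0, 6.0), 3))   # 0.806

# A sphere of radius 1: V = 4/3*pi, A = 4*pi
v, a = 4.0 / 3.0 * math.pi, 4.0 * math.pi
print(round(sphericity(v, a), 3))       # 1.0
```

For a unit cube the cubeness is exactly 1 while the sphericity falls to about 0.81, which is one way of seeing that the two indices capture different, partly independent aspects of compactness.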

Table 1. Microtopography: 3D surface texture parameters.
References: [ASME-B46-1-2009], [VARADI_2004]; see also [MASAD_2007], [WHITEHOUSE_2002]

3D areal parameter | Description | Family | 2D profile counterpart
Sa: average roughness | The arithmetic average deviation of the surface (the absolute values of the measured height deviations from the mean surface taken within the evaluation area). | Amplitude | Height
Sq: root mean square (rms) roughness | The root mean square average deviation of the surface (the measured height deviations from the mean surface taken within the evaluation area). | Amplitude | Height
Ssk: skewness | A measure of the asymmetry of surface heights about the mean surface. | Amplitude | Shape
Sku: kurtosis | A measure of the peakedness of the surface heights about the mean surface. | Amplitude | Shape
Sz: ten point height of the surface (8 nearest neighbor) | 2D approximate: height function Z(x, y), and maximum area peak height, Sp. | Amplitude | Height
Sds: density of summits | 2D approximate: number of peaks. | Amplitude | Area spacing
Str: texture aspect ratio | A measure of the spatial isotropy or directionality of the surface texture. | Spatial | Other parameters
Sal: fastest decay autocorrelation length | Only to be interpreted in 3D. | Spatial | -
Std: texture direction of surface | Determined by the APSD (Angular Power Spectral Density Function); a measure of the angular direction of the dominant lay comprising a surface. | Spatial | Other parameters
SΔq (Sdq): area root mean square surface slope | The root mean square sum of the x and y derivatives of the measured topography over the evaluation area. | Hybrid | Other parameters
SΔq(θ) (Sdqθ): area root mean square directional slope | The root mean square average of the derivative of the measured topography along a selected direction, θ, calculated over the sampling area. | Hybrid | Other parameters
Ssc: mean summit curvature | Evaluated for each summit and then averaged over the area; based on a summit. | Hybrid | -
Sdr: developed surface area ratio | Developed Interfacial Area Ratio. 2D approximate: Lr. | Hybrid | -
Sbi: surface bearing index | - | Functional Index family (*1) | -
Sci: core fluid retention index | Geometrically, the value of the empty volume pertaining to a sampling surface unit of the core zone, as referred to Sq. 2D approximates: Rk parameters. | Functional Index family (*1) | -
Svi: valley fluid retention index | Similar to Sci: the value of the empty volume pertaining to a sampling surface unit of the valley zone, as referred to Sq. | Functional Index family (*1) | -
Sm: surface material volume | Volume from the top to the 10% bearing area. | Functional Volume family (*1) | -
Sc: core void volume | Volume enclosed between the 10% and 80% bearing areas. | Functional Volume family (*1) | -
Sv: valley void volume | Volume from the 80% to the 100% bearing area. | Functional Volume family (*1) | -
APSD: area power spectral density function | The square of the amplitude of the Fourier transform of the measured topography; used to identify the nature of periodic features of the measured topography. 2D: single profiles through the function can be used to evaluate lay characteristics. | ?? | Other parameters
AACV: area auto covariance function | Used to determine the lateral scale of the dominant surface features present on the measured topography. 2D: the auto covariance function is a measure of similarity between two identical but laterally shifted profiles; single profiles through the function can be used to evaluate lay characteristics. | ?? | Other parameters
AACF: area autocorrelation function | - | Other parameters | -
SWt: area waviness height | The area peak-to-valley height of the filtered topography from which roughness and part form have been removed. | Waviness | -
Surface bearing area ratio | The ratio of the area of intersection of the measured topography with a selected surface parallel to the mean surface to the evaluation area. | Other parameters | -
Average peak-to-valley roughness (R) and others | Parameters that evaluate the profile height by averaging the individual peak-to-valley roughness heights, each of which occurs within a defined sampling length. | Additional | -
Parameters
for Surface
Characterizati
on (*2)

average spacing of
roughness peaks AR

Is the average distance between peaks measured in the direction


of the mean line and within the sampling length.

Additional
Parameters
for Surface
Characterizati
on (*2)

swedish height of
irregularities
(profiljup), R or H

Is the distance between two lines parallel and equal in length to the mean
line and located such that 5% of the upper line and 90% of the lower line
are contained within the material side of the roughness profile.

Additional
Parameters
for Surface
Characterizati
on(*2)
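As an illustration of the amplitude parameters in the table, Sq, Ssk and Sku can be computed from a set of measured surface heights in a few lines. This is a minimal sketch, not from the text; the function name and sample values are my own:

```python
import math

def amplitude_parameters(z):
    """Compute Sq (rms roughness), Ssk (skewness) and Sku (kurtosis)
    from a flat list of surface heights sampled over the evaluation area."""
    n = len(z)
    mean = sum(z) / n
    dev = [zi - mean for zi in z]                    # deviations from the mean surface
    sq = math.sqrt(sum(d * d for d in dev) / n)      # root mean square roughness
    ssk = sum(d ** 3 for d in dev) / (n * sq ** 3)   # asymmetry about the mean surface
    sku = sum(d ** 4 for d in dev) / (n * sq ** 4)   # peakedness of the heights
    return sq, ssk, sku

# A surface symmetric about its mean has Ssk = 0; a Gaussian surface has Sku near 3.
sq, ssk, sku = amplitude_parameters([1.0, -1.0, 2.0, -2.0, 0.0])
```
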

fact that the features that define the solid nature of the
object owe their existence to this particular use. In other
words, a single ancient pot, axe, weapon, jewel, burial or
house found at the archaeological site is assumed to be
as it is because it performed some particular action or
behaviour in the past. The object was made to do
something in a particular way, and the goal it had to
fulfill could only be attained if the artifact had certain
determinate properties. A function is taken as an activity
which can be performed by an object. The object's
activity is in fact its operating mode; more generally, it
can be seen as a specification of the object's behaviour.

I know that I am reducing too much the very meaning of
functionality, but only such a crude simplification can
make my point clearer. I am arguing that archaeological
observables should be explained by the particular causal
structure in which they are supposed to have participated.
An object's use can be defined as the exertion of control
over a freely manipulable external object with the
specific intention of (1) altering the physical properties of
another object, substance, surface or medium via a
dynamic mechanical interaction, or (2) mediating the
flow of information between the tool user and the
environment or other organisms in the environment
(St. Amant and Horton 2008; see also Becks 1980, McGrew
1993, Amant 2002, Bicici and Amant 2003). The
knowledge of the function of some perceived material
element should reflect the causal interactions that
someone has, or can potentially have, with needs, goals
and products in the course of using such elements.
Functional analysis is then the analysis of the object's
disposition to contribute causally to the output capacity of
a complex containing system of social actions (Cummins
1975, 2002). Quoting Kitamura et al. (2004), functional
models represent a part of (but not all of) the designers'
intentions, the so-called design rationale (see also Erden et
al., 2008).

It has been suggested that in many cases there is a direct
conditioning, and even deterministic, relationship between
what a prehistoric artifact looks like (what the range
scanner has acquired) and its past functionality. Design
theory is often defined as a means of creating or
adapting the forms of physical objects to meet functional
needs within the context of known materials, technology,
and social and economic conditions. If this approach is
right, then for an archaeologist to be capable of
ascribing functions to observed archaeological evidence,
she would need to combine knowledge about how the
designers intended to design the artifact to have the
function, knowledge about how the makers determined
the physical structure of that artifact on the basis of their
technological abilities, and knowledge about how the
artifact was determined by its physical structure to
perform that function. Design theory principles assume
that there are different kinds of constraints operating in
the development of solutions for each problem, and that
trade-offs between constraints make it unlikely that there
will be any single optimal solution to a problem but,
rather, a number of more or less equally acceptable
solutions that can be conceptualized. Among the most
powerful of these constraints are functional requirements,
material properties, availability, and production costs.

In other words, understanding of function needs to be
connected with an understanding of the physics of forces
and causation. Changing the direction of forces, torques,
and impulses, and devising plans to transmit forces
between parts, are two main problems that arise in this
framework. To solve these, we need to integrate causal
and functional knowledge to see, understand, and be able
to manipulate past use scenarios (Brand 1997). We
should add the rules of physics that govern interactions
between objects and the environment in order to recognize
functionality. The functional outcome cannot occur until
all of the conditions in the physical environment are
present, namely the object(s), its material, kinematics and
dynamics. Once these conditions exist, they produce and
process the relevant behaviours, followed by the outcome
(Barsalou 2005).

Consequently, a basic requisite for inferring the past uses
of tools and other artifacts or constructions is the
recognition of additional properties which determine the
possibilities and limits of mechanical interaction with the
real world (Goldenberg and Spatt 2009). Given that solid
mechanics is the study of the behaviour of solid material
under external actions such as external forces and
temperature changes, the expression 'mechanical
properties' has been mentioned many times in referring
to these additional properties, in the sense that the value
of such properties is conditioned by the physical features
of the solid material involved and is also affected by various
parameters governing the behaviour of people with
artefacts.

Therefore, shape, texture and non-visual properties of
archaeological entities (from artefacts to landscapes)
should be regarded as changing not as a result of their
input-output relations, but as a consequence of the effect
of processes (Kitamura and Mizoguchi, 2004; Erden et al.,
2008). Consequently, reasoning about the affordances of
physical artifacts depends on the following factors and
senses (Bicici and St. Amant 2003):

Physical properties: those whose particular values
can be determined without changing the identity of the
substance, i.e. the chemical nature of matter.

Mechanical properties: their value may vary as a
result of the physical properties inherent to each
material, describing how it will react to physical forces.
The main characteristics are ELASTIC, STRENGTH and
VIBRATION properties.

o ELASTIC PROPERTIES: materials that behave
elastically generally do so when the applied stress is
less than a yield value. When the applied stress is
removed, all deformation strains are fully recoverable
and the material returns to its undeformed state. The
elastic modulus, or modulus of elasticity, is the ratio
of linear stress to linear strain. It measures the
stiffness of a given material and is measured in units
of pressure, MPa or N/mm2. It can be obtained from the
Young's modulus, bulk modulus, and shear modulus.
The Poisson's ratio is the ratio of lateral strain to
axial strain: when a material is compressed in one
direction, it usually tends to expand in the two
directions perpendicular to the direction of
compression.

o STRENGTH PROPERTIES: the material's mechanical
strength properties refer to the ability to withstand an
applied stress without failure, measured by the
extent of the material's elastic range, or its elastic and
plastic ranges together. Loading, which refers to the
force applied to an object, can be by tension,
compression, bending, shear or torsion.

o VIBRATION PROPERTIES: speed of sound and internal
friction are of most importance in structural
materials. Speed of sound is a function of the
modulus of elasticity and density. Internal friction is
the term used when a solid material is strained and
some mechanical energy is dissipated as heat, i.e.
damping capacity.

Form/Texture/Composition: for many tools, form,
texture and composition are decisive factors in their
effectiveness.

Planning: appropriate sequences of actions are key to
tool use. The function of a tool usually makes it
obvious what kinds of plans it takes part in.

Physics: for reasoning about a tool's interactions with
other objects, and for measuring how it affects other
physical artifacts, we need a basic understanding of
the naive physical rules that govern the objects.

Dynamics: the motion and the dynamic relationships
between the parts of tools, and between tools and
their targets, provide cues for proper usage.

Causality: causal relationships between the parts of
tools and their corresponding effects on other physical
objects help us understand how we can use them and
why they are efficient.

Work space environment: a tool needs enough work
space to be effectively applied.

Design requirements: using a tool to achieve a known
task requires close interaction with the general design
goal and the requirements of the specific task.
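To make the elastic quantities above concrete, here is a minimal sketch of how the elastic modulus and Poisson's ratio would be computed from a simple loading test. The numerical values are invented for illustration, not taken from the text:

```python
def elastic_modulus(stress_mpa, strain):
    """Modulus of elasticity E = linear stress / linear strain (result in MPa)."""
    return stress_mpa / strain

def poisson_ratio(lateral_strain, axial_strain):
    """Poisson's ratio = lateral strain / axial strain (dimensionless).
    The minus sign makes the ratio positive for ordinary materials,
    which contract laterally when stretched axially."""
    return -lateral_strain / axial_strain

# Illustrative test: a bar loaded to 200 MPa stretches 0.1% axially
# and contracts 0.03% laterally.
E = elastic_modulus(200.0, 0.001)    # 200000.0 MPa, i.e. 200 GPa
nu = poisson_ratio(-0.0003, 0.001)   # 0.3
```
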

Only the first category is a consequence of using range-scanning
and similar technology. This list suggests that
reasoning about the functionality of archaeological
objects recovered at the archaeological site requires a
cross-disciplinary investigation, ranging from recognition
techniques used in computer vision and robotics to
reasoning, representation, and learning methods in
artificial intelligence. To review previous work on
approaches relevant to tool use and reasoning about
functionality, we can divide current approaches into two
main categories: systems that interact with objects and
environments, and systems that do not.

Each property tells us something about the reaction the
artefact would have shown had prehistoric people
brought it into a certain environment and used it in a certain
way. The absolute lack of such information astonishes me,
not only in current virtual archaeology projects, but also
in cultural heritage databases. Archaeologists insist on
documenting ancient artifacts, but such documentation
never takes into account the physical and mechanical


properties of ancient materials. Without such information
any effort at functional analysis is impossible.

Only direct interaction with real objects made of solid
materials can provide new insights into the complex
dynamics of certain phenomena, such as event-based
motion or kinematics. However, imagine the answer of a
museum director when we ask her to break a prehistoric
object in order to discover the way it was used in the
past! Given that prehistoric and ancient tools cannot
always be used in the present, nor even touched, if their
integrity is to be preserved, we are limited to the possibility
of manipulating a virtual surrogate of the object. That is the
reason we need a solid model; and a solid model is
much more than the surface model we have used to
represent shape and form. An interpolated surface
fitted to a point cloud acquired by means of a laser scanner is
not a solid model, because it does not give information on
all the surfaces constituting the object. We need to
characterize the object as a closed shape in order to use
it as a proper surrogate of the real observable.

What I am suggesting here is to infer prehistoric (ancient)
functionality from the knowledge of physics. The idea
would be to investigate the interaction among planning
and reasoning, geometric representation of the visual
data, and qualitative and quantitative representations of
the dynamics in the artifact world. Since the time of
Galileo Galilei we have known that the use of an object can
be reduced to the application of forces to a solid, which in
response to them moves, deforms or vibrates. Mechanics
is the discipline which investigates the way forces can be
applied to solids, and the intensity of the consequences.
Pioneering work by Johan Kamminga and Brian Cotterell
shows that, given experimental knowledge of the properties of
materials, many remarkable processes of shaping (holding,
pressing, cutting, heating, etc.) are now well known and
expressed mathematically in equations.

The best approach is that of a FINITE ELEMENT
MODEL. The basic concept is that a body or structure
may be divided into smaller elements of finite dimensions,
called finite elements. The original body or structure is
then considered as an assemblage of these elements
connected at a finite number of joints, called nodes or
nodal points. Nodes are assigned at a certain density
throughout the solid depending on the anticipated stress
levels of a particular area: regions which will receive
large amounts of stress usually have a higher node
density than those which experience little or no stress.
Points of interest may consist of the fracture point of
previously tested material, fillets, corners, complex detail,
and high stress areas. Each element in an FE model is a
building block of the model and defines how
nodes are joined to each other. A mathematical relation
between elements characterizes each nodal degree of
freedom in relation to the next one. This web of vectors
is what carries the material properties to the object,
creating many elements. The properties of the elements
are formulated and combined to obtain the properties of
the entire body.

Archaeological finite element analysis then implies the
investigation of the changes the finite-element structure
produces as a result of the simulated behaviours. We can
distinguish between:

- Structural analysis consists of linear and non-linear
models. Linear models use simple parameters and
assume that the material is not plastically deformed.
Non-linear models consist of stressing the material
past its time-variant capabilities; the stresses in the
material then vary with the amount of deformation.

- Fatigue analysis may help archaeologists to predict the
past duration of an object or building by showing the
effects of cyclic loading. Such analysis can show the
areas where crack propagation is most likely to occur.
Failure due to fatigue may also show the damage
tolerance of the material.

- Vibrational analysis can be implemented to test a
material against random vibrations, shock, and impact.
Each of these incidences may act on the natural
vibrational frequency of the material which, in turn,
may cause resonance and subsequent failure. Vibration
can be magnified as a result of load-inertia coupling or
amplified by periodic forces as a result of resonance.
This type of dynamic information is critical for
controlling vibration and producing a design that runs
smoothly. But it is equally important to study the forced
vibration characteristics of ancient or prehistoric tools
and artefacts, where a time-varying load excited a
different response in different components. For cases
where the load is not deterministic, we should conduct
a random vibration analysis, which takes a probabilistic
approach to load definition.

- Heat transfer analysis models the conductivity or
thermal fluid dynamics of the material or structure.
This may consist of a steady-state or transient transfer;
steady-state transfer refers to constant thermal
properties in the material that yield linear heat diffusion.

- Motion analysis (kinematics) simulates the motion of
an artefact or an assembly and tries to determine its
past (or future) behaviour by incorporating the effects
of force and friction. Such analysis allows
understanding of how a series of artefacts or tools
performed in the past, e.g. to analyze the force needed
to activate a specific mechanism, or to exert
mechanical forces to study phenomena and processes
such as wear resistance. It can be of interest in the case
of relating the use of a tool to the preserved material
evidence of its performance: a lithic tool and the stone
stelae the tool contributed to engrave. This kind of
analysis needs additional parameters such as the centre of
gravity, the type of contact and the positional relationship
between components or assemblies, and time-velocity.

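As a toy illustration of the finite element idea (elements assembled at shared nodes, element properties combined into the properties of the whole body), here is a minimal 1D sketch. The bar, stiffness and load values are invented for illustration: a chain of elastic bar elements whose stiffness contributions are assembled into a global system and solved for the nodal displacements.

```python
def assemble_and_solve(n_elems, k, load):
    """Assemble n_elems 1D bar/spring elements of stiffness k (N/mm),
    fix the first node, apply `load` (N) at the last node, and solve
    K u = f for the nodal displacements by Gaussian elimination."""
    n = n_elems + 1
    # Global stiffness: each element contributes [[k, -k], [-k, k]]
    # to the rows/columns of its two end nodes.
    K = [[0.0] * n for _ in range(n)]
    for e in range(n_elems):
        K[e][e] += k
        K[e][e + 1] -= k
        K[e + 1][e] -= k
        K[e + 1][e + 1] += k
    f = [0.0] * n
    f[-1] = load
    # Apply the fixed boundary condition at node 0 by dropping its row/column.
    A = [row[1:] for row in K[1:]]
    b = f[1:]
    m = len(b)
    # Naive Gaussian elimination (fine for a tiny toy system).
    for i in range(m):
        for j in range(i + 1, m):
            factor = A[j][i] / A[i][i]
            for c in range(i, m):
                A[j][c] -= factor * A[i][c]
            b[j] -= factor * b[i]
    u = [0.0] * m
    for i in range(m - 1, -1, -1):
        s = sum(A[i][c] * u[c] for c in range(i + 1, m))
        u[i] = (b[i] - s) / A[i][i]
    return [0.0] + u  # prepend the fixed node

# Three identical elements in series behave like one spring of stiffness k/3:
# under 300 N with k = 100 N/mm each element stretches 3 mm.
displacements = assemble_and_solve(3, 100.0, 300.0)
```

A real archaeological FE analysis would of course use 3D solid elements and measured material properties; the point here is only the assemble-then-solve structure.
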
CONCLUSIONS

Archaeology should not be reduced to the visualization of
artefacts and buildings; it should aim at a complete simulation
in which the archaeologist can modify the geometry and other
characteristics, redefine parameters, assign new values
and settings or any other input data, and select another
simulation study or run a new simulation test, in order to test
the validity of the model itself. The aim is not to prove that
any single visualization correctly captures all of the past, but
only that the explanations are sufficiently diverse, given
available knowledge, that the dynamics of a concrete
historical situation should be contained within the
proposed explanatory model.

Consequently, the most productive way to understand
artifact morphology, design, and technological
organization is by analyzing each type of material
evidence in its own terms, identifying the constraints and
design strategies represented by each one, and then
combining these strategies to understand entire
assemblages. According to such assumptions, if one
wants to create a specific tool meant to solve a specific
problem, some of the things that people have had to
consider in this design process include the size and
weight of the tool; its overall form (for holding or
hafting); the edge angle, where cutting, scraping, or
holding was important; the possibility of hafting; the
duration of its use; how specialized the working parts
needed to be; whether it was at all desirable to combine
two or more functions in the same tool; how reliable the
tool needed to be; and how easily repaired or resharpened
it needed to be (Hayden 1998).

Function-based reasoning can be seen as a constraint
satisfaction problem in which functional descriptions
constrain structure, or structure constrains functional
possibilities. The mappings available between form and
function are actually many-to-many, and recovering an
object's functionalities by matching previously recognized ones
experiences combinatorial growth, which may constrain us
to infer not the actual functionality in the past, but its
most probable function(s).

Would the object have behaved as expected? As we have
been discussing, this depends on several interrelated
issues, for these determine possible outcomes: its
geometry, i.e. form; its material, physical, structural and
mechanical properties; the workspace; but also the
physics involved in its manipulation. The artefact's
functioning in the past depended both on its present form
or structure and on the mode and conditions of its
past use. Once we know the form, the physical, structural
and mechanical properties, and the effective use of an
object, then, by experiment (direct or virtual), the
computer can simulate its functional behaviour.
2 LASER/LIDAR

2.1 AIRBORNE LASER SCANNING FOR ARCHAEOLOGICAL PROSPECTION

R. BENNETT

2.1.1 INTRODUCTION

The adoption of airborne laser scanning (ALS) for
archaeological landscape survey over the last decade has
been a revolution in prospection that some have likened
to the inception of aerial photography a century ago.
Commonly referred to as LiDAR (Light Detection and
Ranging)¹, this survey technique records high-resolution
height data that can be modelled in a number of ways to
represent the macro and micro topography of a landscape.
Arguably the most exciting aspect of this technique is the
ability to remove vegetation to visualise the ground
surface beneath a tree canopy (Crow et al., 2007; Crow,
2009), but its value has also been shown in open
landscapes and as a key component of multi-sensor
survey (Bennett et al., 2011, 2012).

The increased interest in and availability of ALS data has
ensured its place in the toolkit of historic environment
professionals. Increasingly, archaeologists are not just
recipients of image data processed by environmental or
hydrological specialists, but are taking on the task of
specifying and processing ALS data with archaeological
prospection in mind from the outset. Despite this shift,
the information and issues surrounding the capture,
processing and visualisation of ALS data for historic
environment assessment remain less well recorded than
the applications themselves.

This chapter attempts to provide a balance of technical
knowledge with archaeological application, and to explain
the benefits and disadvantages of ALS as a tool for
archaeological landscape assessment. The aim here is not
to overwhelm with detail (readers should look to Beraldin
et al. (2010) for an excellent technical summary of ALS
systems) but to provide historic environment
professionals, researchers and students with the
questions, and answers, that will aid effective and
appropriate use of ALS data. This information is paired
with, and refers to, the ALS case study at the end of the
chapter.

2.1.2 TECHNICAL BACKGROUND: HOW ALS DATA ARE COLLECTED AND PROCESSED

Unlike aerial photography or digital spectral imaging,
ALS is an active remote sensing technique, meaning that
measurements are taken using light emitted from the
sensor unit rather than the reflection of natural light, thus
enabling night-time collection of data. The principle of
laser scanning as a survey tool relies on the ability to
calculate the time taken by a beam of light to travel from
the sensor to the reflecting surface and back. The sensor
scans in a direction perpendicular to the direction of flight,
creating a swath of points (Figure 1). Points are collected
in a zig-zag or saw-tooth pattern, resulting in an uneven
distribution of spot heights along the swath. In addition,
as the rotating mirror reaches the edge of each oscillation
it slows down, resulting in a cluster of more tightly
spaced points at the edges of each flight-line.

Combining this with information about the sensor's
real-time location via the Global Positioning System (GPS),
and the roll, pitch and yaw of the plane via the Inertial
Measurement Unit (IMU), it is possible to calculate
precisely the distance of the sensor from the ground
(Figure 2). Although airborne laser systems were known
to be able to record height to better than 1 m accuracy in the
1970s, it was the advancements in GPS and IMU technology
throughout the 1980s and 1990s, and the removal of signal
scrambling by the US military in 2000, that enabled ALS
sensors to be used for topographic mapping to an
accuracy typically in the order of 0.1-0.2 m (Beraldin et
al., 2010:20). This resolution means that features of
archaeological interest that are represented in the
macro-topography of a landscape can be captured in detail by
ALS.

¹ Lidar is a broader term that can be used to describe a range of space,
airborne and ground-based laser range measuring systems, while ALS
relates to a particular type of airborne sensor which uses a rotating
mirror to scan beneath the aircraft.
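The time-of-flight principle described above amounts to one line of arithmetic: half the two-way travel time multiplied by the speed of light. A minimal sketch (the pulse time is invented for illustration):

```python
C = 299_792_458.0  # speed of light in m/s

def slant_range(two_way_time_s):
    """Sensor-to-surface distance from the two-way travel time of a pulse.
    The beam travels out and back, so the range is half the round trip."""
    return C * two_way_time_s / 2.0

# A pulse returning after ~6.67 microseconds travelled to a surface
# roughly 1000 m below the aircraft.
r = slant_range(6.671e-6)
```

In a real system this range is then combined with the GPS position and the IMU roll/pitch/yaw to georeference each point, as the text describes.
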


Figure 1. Demonstration and example of the zig-zag point distribution when ALS data are collected using an oscillating sensor

Figure 2. Schematic of the key components of the ALS system that enable accurate measurement of height and location

The reflected data from the laser beam can be recorded
by the sensor in one of two forms: discrete return or
full-waveform (Figure 3). Discrete return systems
record individually backscattered pulses when the beam
encounters an obstacle from which it partially reflects,
such as vegetation, as can be seen in Figure 3. A return is
only recorded when the reflection exceeds a
manufacturer-defined intensity threshold, and there is
typically a discrete time interval before any subsequent
return can be recorded. Between four and six returns can
typically be recorded, forming the basis for identifying
and removing vegetation (see below).

Full-waveform sensors instead record the entire returning
beam, allowing the user to specify, after the data are
collected, the pulse points that they wish to use to define
vegetation. This method has been shown to improve the
accuracy of vegetation filtering (Doneus and Briese, 2010).
However, this type of sensor is less common than discrete
return systems, and applications are hampered by the
computational power required to process the data.
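The difference between the two recording modes can be illustrated in a few lines: given a digitised waveform, a discrete return system keeps only the samples where the reflection crosses an intensity threshold, separated by a dead-time interval. A simplified sketch, in which the threshold, dead time and waveform values are invented for illustration:

```python
def discrete_returns(waveform, threshold, dead_time):
    """Reduce a sampled waveform (intensity per time bin) to discrete returns:
    a return is logged when intensity meets the threshold, after which the
    detector is blind for `dead_time` bins before the next return."""
    returns = []
    next_allowed = 0
    for t, intensity in enumerate(waveform):
        if t >= next_allowed and intensity >= threshold:
            returns.append(t)              # time bin of this return
            next_allowed = t + dead_time   # enforce the dead-time interval
    return returns

# Canopy, understorey and ground echoes in one idealised waveform:
wave = [0, 1, 7, 3, 0, 0, 5, 2, 0, 0, 0, 9, 4, 0]
bins = discrete_returns(wave, threshold=5, dead_time=3)   # [2, 6, 11]
```

A full-waveform sensor would instead store the whole `wave` array, leaving the choice of threshold and echo definition to post-processing.
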

Figure 3. Schematic illustrating the differences in data recorded by full-waveform and pulse echo ALS sensors

Figure 4. An example of orange peel patterning caused by uncorrected point heights at the edges of swaths. The overlay shows uncorrected data, which in the red overlap zones appears speckled and uneven compared with the same areas in the corrected (underlying) model

The initial processing steps for ALS data are most often
carried out by the data supplier, but are worth mentioning
briefly here, as the techniques used may affect the ALS
data supplied and input to subsequent processing steps. In
addition to reconciling the data from the ALS sensor,
GPS and IMU to create accurate spot points, the
processing must also correct for effects caused by the
angle of the sensor at the edge of adjacent flightlines. The
increased distance the laser has to travel at the edge of the
flightline causes inaccuracies in the heights recorded. If
left uncorrected, these inaccuracies result in an uneven
'orange peel' effect of surface dimpling where flightlines
overlap (see Crutchley 2010: pp. 26-27 and Figure 4).
One of the most effective methods for correcting this is
the Least Squares Matching (LSM) algorithm (Lichti and
Skaloud, 2010:121), but this type of correction currently
requires specialised software.

2.1.3 BASIC AND ADVANCED PROCESSING

Filtering

ALS creates a dense point cloud of spot heights as the
laser beam scans across the landscape, resulting in a zig-zag
distribution of points, each with an x, y and z value.
The point cloud data can then be interpolated into two
categories of terrain model: Digital Surface Models
(DSM), which give the surface of the topography (usually
recorded from the first return per laser pulse), including
buildings, trees, etc.; and Digital Terrain Models (DTM),
which represent the bare-earth surface stripped of vegetation,
buildings and temporal objects such as cars (Briese,
2010) (Figure 5). For the purposes of disambiguation,
both of these products can also be referred to as Digital
Elevation Models (DEM), as they represent elevation in
its original units of height above sea level. Another
common product is the Canopy Height Model (CHM) or
normalised DSM (nDSM), which is defined as the bare-earth
surface subtracted from the first-return DSM. In
terms of the historic environment, the DSM provides
environmental context for the model and should always be
viewed to identify areas where the data may be affected by
dense vegetation; but it is the DTM, or filtered model, that
is most commonly used to view the terrain otherwise
masked by vegetation, and as such it is worth discussing
in more detail the processing required to create a DTM.

The removal of non-terrain points is undertaken by
classification of the point cloud into points that represent
terrain and points that represent all other features. The
non-terrain points can be identified and removed using a
variety of algorithms that have been developed to
automate this procedure. Sithole and Vosselman (2004)
provide a detailed evaluation of many filtering
approaches with respect to their accuracy, which is
found to be generally good for rural and level terrain but
worse in complex urban or rough, vegetated terrain. This
is because the simplest approaches apply only a local
minimum height filter, which leads to systematic errors in
hilly or rough terrain (Briese, 2010:127). In addition, less
sophisticated filtering techniques have been noted to
remove archaeological features from the terrain model
and to add artefacts (Crutchley, 2010). Recently, more
sophisticated approaches have been developed, including
segmentation-based methods and the identification of
breaklines, such as building edges, as a pre-filtering step to
improve the final interpolation, though as yet no fully
automated procedure has been found that can be applied
universally to all landscape areas (Briese, 2010:139). This
means that manual checking and editing of the model is
necessary to improve the results of the automated
process, though this tends to be far more intensive in
urban areas with complex local surface characteristics
(ibid.). It is worth noting that adaptive morphological
methods, such as those proposed by Axelsson (2000) and
Chen et al. (2007), are the filtering techniques used by the
Terrascan and LAStools software, and so are likely to be
the most common.
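The local minimum height filter mentioned above, for all its limitations on sloping terrain, is easy to sketch: divide the points into grid cells and keep, per cell, only the points within a tolerance of the lowest height. This is a simplified illustration, not the algorithm of any particular package; the cell size, tolerance and point format are my own assumptions:

```python
def ground_filter(points, cell=10.0, tol=0.3):
    """Naive local-minimum ground filter.
    points: iterable of (x, y, z); cell: grid cell size in metres;
    tol: keep points within `tol` metres of the lowest z in their cell.
    Returns the points classified as ground. On hilly or rough terrain
    this simple rule produces systematic errors (see text)."""
    lowest = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        if key not in lowest or z < lowest[key]:
            lowest[key] = z
    return [(x, y, z) for x, y, z in points
            if z <= lowest[(int(x // cell), int(y // cell))] + tol]

pts = [(1, 1, 100.0), (2, 2, 100.2), (3, 3, 112.0),   # ground + tree canopy
       (15, 1, 101.0), (16, 2, 108.5)]                # next cell: ground + shrub
ground = ground_filter(pts)   # drops the 112.0 and 108.5 canopy/shrub returns
```

The more robust approaches cited in the text (adaptive morphological filters, segmentation, breakline detection) replace this single rule with iterative, slope-aware classification.
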

For full-waveform data, the echo width and amplitude can
be used to improve the classification and filtering process,
particularly in areas of dense, low vegetation such as
forest understorey. Although these techniques are still in
development, they have been shown to be very effective at
separating ground hits from low-level vegetation on the
basis of texture (Doneus and Briese, 2010).

Figure 5. An example of classification of points based on return, which forms the most basic method of filtering non-terrain points from the DSM

Interpolation

Although the general morphology of a landscape is an
important feature for archaeologists to observe, most
individual sites and features representing past human
interaction with the landscape can be observed as
micro-topographic changes. Such features are difficult to
visualise from the point cloud itself, so while the
pre-processing steps described above use the point cloud, for
visualisation the survey is most often processed by
interpolating the x, y, z points into a 2.5D surface, either
in raster grid or triangulated irregular network (TIN)
format. Although a TIN or mesh is commonly used for
terrestrial and object laser scanning (see Remondino, this
volume), for archaeological applications to date most
ALS data are rasterised. Rasterisation is advantageous for
the landscape researcher as it allows ALS data to be
visualised, processed, interrogated and interpreted in a
Geographical Information System (GIS) alongside a
range of other geographical or archival data, such as
historic mapping, aerial photographs and feature data
derived from Historic Environment Records. The
disadvantage of rasterisation is the loss of geometric
complexity, as described below.

There are many methods of interpolation (Figure 6), from
the most basic operation of taking the mean, median, or
modal height of the points within an area, to complex
weighting of points and the incorporation of breaklines to
negate the impact of smoothing when interpolating over
sharp changes in topography (Briese, 2010:125). Any of
the common interpolation methods can be used; typically
nearest neighbour, inverse distance weighting (IDW),
linear functions (regularised/bicubic or bilinear spline),
and kriging are the most common. In practice, determining
the best interpolation method depends on the
topography, so trialling a number of techniques on
sample areas is often necessary. The accuracy of the
interpolated models can best be assessed by the
collection of ground-observation points via Real Time
Kinematic (RTK) GPS survey. There is no standard or
'best' method for the interpolation of point data to raster,
but users should be aware that models created using
different interpolation methods will represent
micro-topography differently. Consequently, if visualisation
techniques are to be compared with each other, they
should all be based on the same interpolation technique.

Figure 6. Two examples of common interpolation techniques: IDW (left) and Bicubic Spline (right)

2.1.4 INTENSITY DATA


In addition to height data, the ALS sensor also captures
the intensity of the returned beam and there has been
some speculation regarding the use of the intensity
measure as a means to detect archaeological features with
varying reflectance properties (Coren et al., 2005; Challis
et al., 2011). While the intensity has been used for a
number of studies including environmental applications

The process of interpolating takes the data from a number


of points to provide a height for a cell in the image

31

3D MODELING IN ARCHAEOLOGY AND CULTURAL HERITAGE

Figure 7. Comparison of visualisation techniques mentioned in this chapter

calibration of most models of ALS scanner, the intensity


bears little correlation to reflectance recorded at the same
wavelengths by an airborne spectral sensor (Bennett,
2012).

such as canopy determination (e.g. Donoghue et al.,


2007) and landcover classification (e.g. Yoon et al.,
2008) and earth science applications such as volcanology
e.g. Spinetti et al., 2009) and glaciology (e.g. Lutz et al.,
2003), there are an number of problems with its
application to archaeological prospection at the present
time. The foremost of these is the fact that the intensity
measure is affected by many factors in addition to the
reflectance properties of the target, including the range
distance from the sensor to the target, the power of the
laser, angle of reflection, optical properties of the system
and attenuation through that atmosphere (see Starek et al.
2006) for a fuller discussion). Additionally recent
research has shown that standard ALS wavelengths
(generally between 800 nm and 1550 nm) are generally
less sensitive for archaeological feature detection than
shorter NIR wavelengths and due to the lack of

Sensor technology is improving, with calibrated ALS /


hyperspectral systems such as that developed by the
Finnish Geodetic Institute (Hakala et al., 2012) soon to
be available commercially. Although these have yet to be
tested regarding their application for archaeological
research, it is anticipated that this new generation of
combined sensors will provide higher quality spectral
information than the intensity data currently collected.
For now at least, the use of ALS intensity data for
archaeological prospection is limited in scope by the
factors listed above and users may find analysis of
complementary data a more profitable use of time.
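The basic inverse distance weighting (IDW) operation described in the Interpolation section above can be sketched in a few lines. This is a minimal, brute-force illustration in plain NumPy rather than the optimised, tiled implementation a GIS package would use; point and grid coordinates are assumed to be in the same projected system:

```python
import numpy as np

def idw(xy, z, grid_xy, power=2.0, eps=1e-12):
    """Inverse distance weighted height for each grid cell centre.

    xy:      (n, 2) array of point coordinates
    z:       (n,) array of point heights
    grid_xy: (m, 2) array of cell-centre coordinates to interpolate
    """
    # Distance from every cell centre to every point: shape (m, n)
    d = np.linalg.norm(grid_xy[:, None, :] - xy[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power          # closer points weigh more
    return (w * z).sum(axis=1) / w.sum(axis=1)
```

Raising `power` sharpens the influence of nearby points, which is one reason different IDW parameterisations (like different interpolators generally) render micro-topography differently.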

2.1.5 VISUALISATION TECHNIQUES

Due to their subtle topography, archaeological features
can be difficult to discern in the DTM, even when the
height component is exaggerated to highlight topographic
change. To map these features, some form of visualisation
technique is required to highlight their presence in the
DTM to the viewer. This section covers the most common
forms of visualisation applied in archaeological research
and explains their uses, and some pitfalls, for
archaeological prospection. Figure 7 gives an example of
each of the visualisation techniques mentioned below.

Shaded Relief models

The creation of shaded relief models is the most common
process used to visualise ALS data for archaeology
(Crutchley, 2010). This technique takes the elevation
model and calculates shade from a given solar direction
(or azimuth) and altitude (height above the horizon; see
Figure 8), thus highlighting topographic features (Horn,
1981). Shaded relief models provide familiar, photogenic
views of the landscape and can be used to mimic the ideal
raking light conditions favoured by aerial photographic
interpreters (Wilson, 2000:46).

Figure 8. Angle and illumination of a shaded relief model

Despite their frequent use and familiarity, shaded relief
images pose some problems for the interpretation and
mapping of archaeological features. Linear features that
align with the direction of illumination will not be easily
visible in the shaded relief model, requiring multiple
angles of illumination to be calculated and inspected
(Devereux et al., 2008). To mimic raking light (and so
highlight micro-topography) the shaded model must also
be calculated with a low solar altitude, typically 8-15°.
This means that shaded relief models work poorly in
areas of substantial macro-topographic change, with deep
shadows obscuring micro-topography regardless of
illumination direction (Hesse, 2010).

Recent research by the author has also shown that the
choice of both the azimuth and the angle of light affects
feature visibility in a quantifiable way. For example,
altering the angle of the light from the standard output
of 45° to 10° improved feature detection by 6%.
Additionally, when eight shaded-relief models with
identical altitude of illumination but varying azimuths
were assessed, a 12% difference in the number of
detectable features was observed between the best and
worst performing angles (see case study and Bennett,
2012). These differences result in the requirement to
create and assess multiple models from a variety of
illumination angles and azimuths; a serious expenditure
of time for significant yet diminishing return. One
proposed solution is the statistical combination of
shaded-relief models through Principal Components
Analysis (see below).

A final point to consider when using these models to plot
potential archaeological features is locational inaccuracy.
As the shaded-relief model is a computation of light and
shade, the perceived location of features alters as the angle
of illumination is changed (Figure 9), as the observer plots
not the topographic feature itself but the area of light or
shadow. This can lead to substantial locational
inaccuracies, especially when using the low-angle light
required to highlight micro-topography (see case study).
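The shaded relief calculation itself is compact. The sketch below follows the widely used gradient formulation of Horn (1981) in NumPy; note that azimuth and aspect conventions vary between GIS packages, and a square cell size is assumed:

```python
import numpy as np

def hillshade(dtm, azimuth_deg=315.0, altitude_deg=45.0, cellsize=1.0):
    """Shaded relief (0..1) of a DTM, lit from azimuth/altitude in degrees."""
    # Convert compass azimuth (clockwise from north) to math convention
    az = np.radians(360.0 - azimuth_deg + 90.0)
    alt = np.radians(altitude_deg)
    dz_dy, dz_dx = np.gradient(dtm, cellsize)   # terrain gradients
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(-dz_dx, dz_dy)
    shade = (np.sin(alt) * np.cos(slope) +
             np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shade, 0.0, 1.0)             # clamp shadowed cells
```

Because `azimuth_deg` and `altitude_deg` are free parameters, the directional bias and low-altitude shadowing problems discussed above follow directly from this formula: features aligned with `az` produce no gradient component across the light direction and so receive nearly uniform shade.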


Figure 9. Different angles of illumination highlighting different archaeological features

All in all, the shaded-relief model provides an aesthetically
pleasing view of the landscape for illustrative purposes
but, due to the issues outlined above, it should always be
combined with at least one other visualisation technique
in order to map potential archaeological features.
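One response to this directional dependence is to compute a stack of shaded-relief models under many illumination settings and combine them statistically, as discussed in the next section on Principal Components Analysis. A minimal sketch of the core computation, treating each shaded-relief image as a band and using NumPy's SVD rather than any particular GIS tool:

```python
import numpy as np

def pca_stack(stack):
    """Principal component images from co-registered raster bands.

    stack: (k, rows, cols) array, e.g. k shaded-relief models computed
    with different illumination azimuths. Returns k PC images,
    strongest variance first.
    """
    k, rows, cols = stack.shape
    flat = stack.reshape(k, -1).astype(float)
    flat -= flat.mean(axis=1, keepdims=True)        # centre each band
    u, s, vt = np.linalg.svd(flat, full_matrices=False)
    return (s[:, None] * vt).reshape(k, rows, cols)  # PC scores as images
```

When the input hillshades are highly correlated, most of the variance concentrates in the first component, which is the redundancy-reduction property exploited in the next section.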

Principal Components Analysis of Multiple Shaded Relief Images

Principal Components Analysis (PCA) is a multivariate
statistical technique used to reduce redundancy in
multi-dimensional or multi-temporal images. It has been
skilfully applied by Kvamme to geophysical data (2006)
and is used for minimising the number of images to be
analysed. PCA has also received some attention in
archaeological work (Winterbottom and Dawson, 2005;
Challis et al., 2008; Devereux et al., 2008).

While the PCA transformation reduces the dimensionality
of the shaded relief technique, the interpreter must still
analyse a large number of shaded images to access the
information content of the terrain model. Also, to ensure
the most representative model of the topography, every
possible angle and azimuth should be processed. At the
time of writing this approach has never been undertaken;
the only published method for using the technique with
ALS shaded relief images used 16 angles of illumination
at the same azimuth (Devereux et al., 2008). The limit on
the number of input images is principally due to the
relatively diminished return of new information
compared with the increased costs in terms of
computation and interpretation time.

PC images represent statistical variance in the light levels
of the shaded relief models, rather than the topographic
data collected by the sensor. While this might seem a
pedantic distinction to make, the visibility of
archaeological features is highly dependent on the angle
and azimuth of illumination. The PCA will reduce some
of this directional variability but cannot account for
features that were poorly represented in the original
shaded relief images. The output of the PCA will
therefore be highly influenced by the selection of these
factors at the outset, and this could prove a limiting factor
for subsequent interpretation. Consequently, the choices
made in the processing of shaded relief and PC images
(see above) may mask features that were present in the
original ALS data. Additionally, for profiles drawn across
archaeological features in a PC image, the z component
of the profile will not be a logical height measurement as
in the original DTM but a product of the statistical
computation of varying light levels. While related to the
topographic features, this light-level scale is unhelpful
when trying to quantify or describe a feature, as it is
entirely dependent on the input parameters chosen for the
shaded-relief models.

Although applying PCA to the shaded-relief images does
reduce redundancy somewhat, as the first PC will
typically contain 95-99% of all variation in an image or
data stack, it has been shown that significant
archaeological information is detectable in subsequent
PCs (Bennett et al., 2012). As PCA can compute as many
PC images as there were original input images, it is
still necessary to assess multiple images to derive the full
archaeological potential from this technique.

Horizon Modelling or Sky View Factor

To overcome some shortfalls of shaded relief models,
specifically the issues of illumination angle and
multi-dimensionality of data, the technique of horizon or
sky view factor (SVF) mapping has recently been applied
by researchers in Slovenia (Kokalj et al., 2011). The
calculation is based on the method used to compute
shadows for solar irradiation models. The algorithm
begins at a low azimuth angle from a single direction and
computes at what point the light from that angle 'hits' the
terrain. The angle is increased until it reaches the angle
where it is higher than any point in the landscape (on that
line of sight). This procedure is then replicated for a
specified number of angles, producing a number of
directional files which can then be added together to
produce a model that reflects the total amount of light
that each pixel is exposed to as the sun angle crosses the
hemisphere above it. Consequently, positive features
appear brighter and negative features darker, replicating
the visual results of the shaded relief models but without
the bias caused by the direction of illumination. As with
all light-level techniques, the SVF does not provide a
direct representation of topographic change, and it has
additionally been noted to accentuate data artefacts
more than other techniques (Bennett et al., 2012).

Local Relief Modelling (LRM)

While shaded models provide useful images, there has
been much recent emphasis on developing better methods
for extracting the micro-topography that represents
archaeological or modern features from the landscape that
surrounds them, while retaining the height information as
recorded by the sensor. One of these methods, Local
Relief Modelling (LRM), devised by Hesse (2010) for
analysing mountainous and forested terrain in Germany,
has received particular attention for its robust
methodology and accurate results. The technique reduces
the effect of the macro-topography while retaining the
integrity of the micro-topography, including
archaeological features, by subtracting a low-pass filtered
model from the original DTM and extracting features
outlined by the 0 m contour. The advantage of this
technique over the others mentioned is that it allows the
creation of a model that is not only unaffected by shadow
but which retains its topographic integrity, allowing
measurements to be calculated from it in a way that is not
possible using shaded relief models, PCA or Horizon
View mapping. However, the extent of distortion of the
micro-topographic features extracted has yet to be
quantified, as the development of the model took place
without any ground control data.

Although developed for mountain environments, the
technique has also been applied to gently undulating
landscapes to highlight archaeological features (see case
study). Due to its isolation of the micro-topography, the
LRM model could also have the potential to be used as a
base topographic layer for digital combination with other
data.

Slope and Aspect

Slope, aspect and curvature maps are commonly used for
analysing topographic data in other geographic
disciplines. Slope mapping produces a raster that gives
slope values for each grid cell, stated in degrees of
inclination from the horizontal. Aspect mapping produces
a raster that indicates the direction that slopes are facing,
represented by the number of degrees north of east.

Although common in geographical applications, there
has been limited application of slope, aspect and
curvature mapping for the detection of micro-topographic
change relating to archaeological features, though
coarse-resolution aspect and slope terrain maps are well
established in predictive models of site location
(Kvamme and Jochim, 1989; Challis et al., 2011). It is
anticipated that topographic anomalies relating to
archaeological features will be identifiable in these
images; in particular, the slope and aspect maps may aid
pattern recognition for features such as the lynchets of a
field system.

Selecting a visualisation technique

Unfortunately for users of ALS data, there is no perfect
visualisation technique for identifying archaeological
topography. However, thanks to the development of
techniques such as the LRM and SVF described above,
archaeologists now have access to both generic and
specific tools to visualise ALS data. Understanding how
and when to apply these techniques is not an easy task,
and until recently there was little published comparative
data, meaning that users could not assess the
appropriateness of any technique for their research
environment. To this end readers are advised to consult
the case study presented with this chapter, which focuses
on the comparison of ALS visualisation techniques for a
site on Salisbury Plain, Wiltshire, UK (see Bennett et
al., 2012 for a further publication relating to this).

While only one environment, grassland, is assessed by
this work undertaken on Salisbury Plain, it provides
the only quantitative information published to date
regarding the varying visibility of individual
archaeological features in shaded-relief, slope, PCA,
LRM and SVF visualisations. The case study and
published paper give details of how the various techniques
affect the position, scale and accuracy of the archaeological
features represented, along with a discussion of the nature
of 'false positives' or artefact features whose presence
was enhanced by certain visualisation techniques.

Two other recent publications have also addressed this
issue. Challis et al.'s (2011) paper presents the results of
visual analysis from four different locations, covering a
range of six processing techniques: colour shading,
slope, hill-shading (shaded-relief), PCA (of shaded-relief
models), terrain filtering (LRM) and solar insolation
(SVF). Although the fact that the latter techniques are not
referred to by their most recently published names may
confuse some readers, the publication provides very
useful information regarding processing software and a
workflow to guide users through visualisation selection in
high- and low-relief landscapes.

Štular et al. (2012) present the results of a similar analysis
of a range of techniques for an area of known sites in a
wooded, mountainous environment. These include
colour-ramped DTMs, slope, LRM (termed there trend
removal), SVF and a number of variants of solar
illumination. They also incorporate a survey of 12 users
with a range of experience in ALS data interpretation and
propose a method of quantifying the efficiency of the
techniques by assessing contrast between cells
of the image using the median of five different standard
deviations. Noise is also calculated using the standard
deviations of the standard deviations, although there is
little further explanation of how this enabled the authors
to quantify contrast and noise.

These comparisons all conclude that there is no silver
bullet, but all the papers agree on the following points:

- Although visually pleasing and the most commonly
used visualisation technique, shaded-relief modelling is
a poor method for identifying and accurately mapping
archaeological features
- Multi-method analysis is recommended, with LRM,
SVF and slope shown to be valuable techniques
- Users should try to familiarise themselves with the
potential pitfalls of any technique prior to its
application
- The most effective and appropriate selection comes
from the trial of a number of visualisation techniques
for a given environment.

Consulting the existing comparative papers by Challis et
al. (2011), Bennett et al. (2012) and Štular et al. (2012)
is a good place to start, but visualisation techniques will
need to be tailored to the landscape surveyed and the type
of feature to be detected. It is recommended that a pilot
study is undertaken using a number of sample areas of
<1 km² across the survey, to visually assess a number of
the listed visualisation techniques and their suitability for
feature detection in that environment (Štular et al., 2012).
Bear in mind also that the most complete results in terms
of feature mapping have been demonstrated by the use of
multiple visualisation techniques: Hesse (2010)
recommends using both LRM and shaded-relief models
for mountainous terrain, while Bennett et al. (2012)
report that a combination of LRM and SVF provided the
most comprehensive visualisation for feature detection in
a grassland environment when compared with a range of
other techniques. Another advantage of this approach is
that artefacts and interference patterns in the data are
often easier to identify as anomalies when comparing two
visualisations.

2.1.6 SOFTWARE CONSIDERATIONS

There are a number of viewing and processing tools
available to users of ALS data, by far the most powerful
and accessible of which at the time of writing is the
open-source LAStools suite developed by Martin Isenburg
(http://lastools.org). LAStools can be used as a
stand-alone series of processing modules or be integrated as a
toolbox in ArcGIS. The value of ALS data lies in its


geographic and morphological accuracy, and it is
therefore most appropriate to use these data as a GIS
layer, to enable overlay with other forms of data. A range
of GIS software, both proprietary and open-source, is
available to do this, all of which should be capable of
producing shaded-relief imagery from a DTM. However,
for the more specialised visualisations like SVF and
LRM, the processing becomes more complex. In some
cases users must apply external tools, for example an IDL
module for the creation of SVF models (http://iaps.zrc-sazu.si/en/
svf#v), or be comfortable creating a workflow
for multi-stage models such as LRM across multiple
software packages (Hesse, 2010) or as a script process in
GRASS (see Appendix A of Bennett, 2012 for details of
the GRASS workflow). Although work is in progress on
a stand-alone application to assist with creating multiple
advanced visualisation techniques, currently a good
knowledge of raster data processing is essential, and this
inhibits the wider application of advanced ALS
visualisations.

2.1.7 CONCLUSIONS

Through the course of this chapter a number of key
factors to consider when using ALS data for historic
environment assessment have been raised. Users
must be aware of issues such as the mode of capture,
resolution and pre-processing of the ALS data, all of
which should be made clear by the provision of adequate
metadata by the data supplier. Attention should be given
to the original purpose of the data, usually hydrological
or environmental monitoring, to any filtering that has
been undertaken, and to how this might affect the
representation of archaeological features. This is not to
say that archive data collected for other purposes is not
useful to archaeologists (the ever-increasing number of
studies testifies that this is clearly not the case), rather
that users of the data should familiarise themselves with
the technical details and make clear the processing
applied to a dataset.

Together with the case study and the articles referenced,
this chapter provides a crucial starting point from which
to begin to understand the visualisation techniques that
are most appropriate for your research. However, it is
also important to remember that ALS data captures only
part of what is archaeologically significant within a
landscape. Features such as vegetation marks or soil
changes that are detectable using aerial photographs or in
digital spectral data will not be represented in ALS data
unless they also show a distinct topographic change from
their surroundings. Consequently, ALS data is most
effective when used as part of a multi-sensor approach
that also incorporates aerial imagery. Bennett et al.
(2011, 2013) provide details and discussion of the key
role that ALS data can play when incorporated into this
type of landscape study.

References

AXELSSON, P. 2000. DEM Generation from Laser Scanner
Data Using Adaptive TIN Models. ISPRS Journal of
Photogrammetry and Remote Sensing XXXIII, Part 4B,
pp. 110-117.

BENNETT, R.A. 2012. Archaeological Remote Sensing:
Visualisation and analysis of grass-dominated
environments using airborne laser scanning and
digital spectral data. PhD thesis, available from
http://www.pushingthesensors.com/thesis/

BENNETT, R.; WELHAM, K.; HILL, R.A.; FORD, A. 2011.
Making the most of airborne remote sensing techniques
for archaeological survey and interpretation, in: Cowley,
D.C. (Ed.), Remote Sensing for Archaeological Heritage
Management, EAC Occasional Paper. Archaeolingua,
Hungary, pp. 99-107.

BENNETT, R.; WELHAM, K.; HILL, R.A.; FORD, A. 2012. A
Comparison of Visualization Techniques for Models
Created from Airborne Laser Scanned Data.
Archaeological Prospection 19, pp. 41-48.

BENNETT, R.; WELHAM, K.; HILL, R.A.; FORD, A. 2013.
Using lidar as part of a multi-sensor approach to
archaeological survey and interpretation, in: Cowley,
D.C., Opitz, R. (Eds.), Interpreting Archaeological
Topography: Airborne Laser Scanning, Aerial
Photographs and Ground Observation. Oxbow Books,
Oxford, pp. 198-205.

BERALDIN, J.-A.; BLAIS, F.; LOHR, U. 2010. Laser
Scanning Technology, in: Vosselman, G., Maas, H.-G.
(Eds.), Airborne and Terrestrial Laser Scanning.
Whittles Publishing, Dunbeath, Scotland, pp. 1-44.

BRIESE, C. 2010. Extraction of Digital Terrain Models, in:
Vosselman, G., Maas, H.-G. (Eds.), Airborne and
Terrestrial Laser Scanning. Whittles Publishing,
Dunbeath, Scotland, pp. 135-167.

CHALLIS, K.; CAREY, C.; KINCEY, M.; HOWARD, A.J.
2011. Airborne lidar intensity and geoarchaeological
prospection in river valley floors. Archaeological
Prospection 18(1), pp. 1-13.

CHALLIS, K.; FORLIN, P.; KINCEY, M. 2011. A Generic
Toolkit for the Visualization of Archaeological
Features on Airborne LiDAR Elevation Data.
Archaeological Prospection 18.

CHALLIS, K.; KOKALJ, Ž.; KINCEY, M.; MOSCROP, D.;
HOWARD, A.J. 2008. Airborne lidar and historic
environment records. Antiquity 82, pp. 1055-1064.

CHEN, Q.; GONG, P.; BALDOCCHI, D.; XIE, G. 2007.
Filtering Airborne Laser Scanning Data with
Morphological Methods. Photogrammetric
Engineering and Remote Sensing 73(2), 175.

COREN, F.; VISINTINI, D.; PREARO, G.; STERZAI, P.
2005. Integrating LIDAR intensity measures and
hyperspectral data for extracting of cultural heritage.
In: Workshop Italy-Canada, 17-18 May 2005,
Padova.

CRUTCHLEY, S. 2010. The Light Fantastic: using
airborne lidar in archaeological survey. English
Heritage, Swindon.

DEVEREUX, B.J.; AMABLE, G.S.; CROW, P. 2008.
Visualisation of LiDAR terrain models for
archaeological feature detection. Antiquity 82, pp.
470-479.

DONEUS, M.; BRIESE, C. 2010. Airborne Laser Scanning
in forested areas: potential and limitations of an
archaeological prospection technique, in: Cowley,
D.C. (Ed.), Remote Sensing for Archaeological
Heritage Management, EAC Occasional Paper.
Archaeolingua, Hungary, pp. 60-76.

DONOGHUE, D.N.M.; WATT, P.J.; COX, N.J.; WILSON, J.
2007. Remote sensing of species mixtures in conifer
plantations using LiDAR height and intensity data.
Remote Sensing of Environment 110, pp. 509-522.

HAKALA, T.; SUOMALAINEN, J.; KAASALAINEN, S.; CHEN,
Y. 2012. Full waveform hyperspectral LiDAR for
terrestrial laser scanning. Optics Express 20, pp.
7119-7127.

HESSE, R. 2010. LiDAR-derived Local Relief Models: a
new tool for archaeological prospection.
Archaeological Prospection 18.

HORN, B.K.P. 1981. Hill shading and the reflectance map.
Proceedings of the IEEE 69, pp. 14-47.

KOKALJ, Ž.; ZAKŠEK, K.; OŠTIR, K. 2011. Application of
sky-view factor for the visualisation of historic
landscape features in lidar-derived relief models.
Antiquity 85, pp. 263-273.

KVAMME, K.L. 2006. Integrating multidimensional
geophysical data. Archaeological Prospection 13, pp.
57-72.

KVAMME, K.L.; JOCHIM, M.A. 1989. The Environmental
Basis of Mesolithic Settlement, in: Bonsall, C. (Ed.),
The Mesolithic in Europe: Papers Presented at the
Third International Symposium, Edinburgh 1985.
John Donald Publishers Ltd, Edinburgh, pp. 1-12.

LICHTI, D.; SKALOUD, F. 2010. Registration and
Calibration, in: Vosselman, G., Maas, H.-G. (Eds.),
Airborne and Terrestrial Laser Scanning. Whittles
Publishing, Dunbeath, Scotland, pp. 83-133.

LUTZ, E.; GEIST, T.H.; STOTTER, J. 2003. Investigations of
airborne laser scanning signal intensity on glacial
surfaces utilizing comprehensive laser geometry
modelling and orthophoto surface modelling (a case
study: Svartisheibreen, Norway), in: Proceedings of
the ISPRS Workshop on 3-D Reconstruction from
Airborne Laserscanner and InSAR Data, ISPRS
Working Group III/3, Dresden, pp. 143-148.

SITHOLE, G.; VOSSELMAN, G. 2004. Experimental
comparison of filter algorithms for bare-Earth
extraction from airborne laser scanning point clouds.
ISPRS Journal of Photogrammetry and Remote
Sensing 59, pp. 85-101.

SPINETTI, C.; MAZZARINI, F.; CASACCHIA, R.; COLINI, L.;
NERI, M.; BEHNCKE, B.; SALVATORI, R.; BUONGIORNO,
M.F.; PARESCHI, M.T. 2009. Spectral properties of
volcanic materials from hyperspectral field and
satellite data compared with LiDAR data at Mt. Etna.
International Journal of Applied Earth Observation
and Geoinformation 11, pp. 142-155.

STAREK, B.; LUZUM, R.; KUMAR, K.; SLATTON, K.C.
2006. Normalizing Lidar Intensities.

ŠTULAR, B.; KOKALJ, Ž.; OŠTIR, K.; NUNINGER, L. 2012.
Visualization of lidar-derived relief models for
detection of archaeological features. Journal of
Archaeological Science 39, pp. 3354-3360.

WILSON, D.R. (Ed.) 2000. Air photo interpretation for
archaeologists, Batsford studies in archaeology.
Batsford, London.

WINTERBOTTOM, S.J.; DAWSON, T. 2005. Airborne
multi-spectral prospection for buried archaeology in
mobile sand dominated systems. Archaeological
Prospection 12, pp. 205-219.

YOON, J.-S.; SHIN, J.-I.; LEE, K.-S. 2008. Land Cover
Characteristics of Airborne LiDAR Intensity Data: A
Case Study. IEEE Geoscience and Remote Sensing
Letters 5, pp. 801-805.

2.2 TERRESTRIAL OPTICAL ACTIVE SENSORS


THEORY & APPLICATIONS
Gabriele GUIDI

2.2.1 INTRODUCTION

2.2.2 ACTIVE 3D SENSING TECHNOLOGIES

From the introduction of active sensors as recording tool


for Cultural Heritage (CH), dating back almost twenty
years ago, many experiences have been done,
demonstrating its intrinsic value specially in the
archaeological field. Active sensors such as laser
scanners allow to record a complex site in 3D, supplying
directly a metric output that can be used to revisit any
complex stratigraphy, redraw cross sections and calculate
volumes, otherwise difficult. The approach based on
active range sensing is much faster than traditional
manual or theodolite-based methodologies, and
removes subjective impressions from the master record
that may influence the following data interpretation.
Complex 3D structures can be easily recorded even
if no peculiar geometrical elements are available
(i.e. edges and vertices), as usually happens with
ruined buildings. The resulting 3D model can be
therefore used as a geometrical data base to be consulted
afterward by scholars, but also as entry point for
accessing different types of archeological data, that
can be linked to the metric model enriching it with a 3D
GIS.

Triangulation based range sensing


Active systems, particularly those based on laser light,
make the measurement result nearly independent of the
texture of the object being photographed, projecting references on its surface through a suitably coded light. Such
light is characterized by an intrinsic information content
recognizable by an electronic sensor, unlike the environmental diffuse light, which has no particularly identifiable elements. For example, an array of dots or a series
of colored bands are all forms of coded light. Thanks to
such coding, active 3D sensors can acquire in digital form
the spatial behavior of an object surface. The output attainable from such a device can be seen as an image having
in each pixel the spatial coordinates (x, y, z) expressed in
millimeters, optionally enriched with color information
(R, G, B) or by the laser reflectance (Y). This set of 3D
data, called range image, is generally a 2.5D entity (i.e.
at each couple of x, y values, only one z is defined).
At present, 3D active methods are very popular because
they are the only ones capable to metrically acquire the
geometry of a surface in a totally automatic way. A tool
employing active 3D techniques is normally called range
device or, referring in particular to laser-based
equipment, 3D laser scanner. Different 3D operating
principles may be chosen depending on the object size
hence on the sensor-to-object distance. For measuring
small volumes, indicatively below a cubic meter,
scanners are based on the principle of triangulation.
Exceptional use of these devices have been done in
Cultural Heritage (CH) applications on large artifacts
(Bernardini et al., 2002a; Levoy et al., 2000).

These advantages offered by 3D active sensors have to be weighed against possible drawbacks, due for example to the altered response to laser light of some materials very common in CH, such as marble, or to the need for a balance between the huge amount of data that these devices are nowadays able to capture in a very short time and the related post-processing work, which sometimes risks turning a helpful technology into a massive waste of time.

For these reasons, in the following chapter all the active range sensing principles are overviewed, highlighting pros and cons, in order to identify the right field of application for each technology, reasonable tradeoffs, and possible ways of integration.

Principle
The kind of light that first allowed the creation of a 3D scanner is laser light. Due to its physical properties it allows the generation of extremely focused spots at relatively long ranges from the light source, compared with what can be done, for example, with a halogen lamp. The reason for this is related to the intimate structure of light, which is made of photons, short packets of electromagnetic energy characterized by their own wavelength and phase. A laser generates a peculiar light which is monochromatic (i.e. made of photons all at the same wavelength) and coherent (i.e. such that all its photons are generated at different time instants but with the same phase). The practical consequence of the first fact (monochromaticity) is that the lenses used for focusing a laser can be much more effective, being designed for a single wavelength rather than the wide spectrum of wavelengths typical of white light. In other words, with a laser it is easier to concentrate energy in space. On the other hand, the second fact (coherence) allows all the photons to generate a constructive wave interference whose consequence is a concentration of energy in time. Both these factors contribute to making the laser an effective illumination source for selecting specific points of a scene with high contrast with respect to the background, allowing their spatial position to be measured as described below.

3D MODELING IN ARCHAEOLOGY AND CULTURAL HERITAGE

Figure 1. Triangulation principle: a) xz view of a triangulation-based distance measurement through a laser beam inclined at an angle α with respect to the reference system, impinging on the surface to be measured. The light source is at distance b from the optical centre of an image capturing device equipped with a lens with focal length f; b) evaluation of xA and zA

Let us imagine a range device made by the combination of a light source and a planar sensor, rigidly bound to each other. The laser source generates a thin ray producing a small light dot on the surface to be measured. If we place a 2D capture device (e.g. a digital camera) displaced with respect to the light source, and the surface is diffusive enough to reflect some light also toward the camera pupil, an image containing the light spot can be picked up. In this opto-geometric set-up the light source emitting aperture, the projection centre and the light spot on the object form a triangle like the one shown in fig. 1a, where the distance between the image capture device and the light source is indicated as the baseline b.

The lens located in front of the sensor is characterized by its focal length f (i.e. the distance in mm from the optical centre of the lens to the focal plane). On the collected image, a trace of the light spot will be visible at a point displaced with respect to the optical centre of the system. Depending on the position of the imaged spot with respect to the optical axis of the lens, two displacement components will be generated along the horizontal (x) and vertical (y) directions. Considering that the drawing in fig. 1a represents the horizontal plane (xz), we will take into account here only the horizontal component of such displacement, indicated in fig. 1a as p (parallax). If the system has been previously calibrated, we can consider as known both the inclination α of the laser beam and the baseline b. From the spot position the distance p can be estimated, through which we can easily calculate the angle θ:

tan θ = p / f

As evidenced in fig. 1b, once the three parameters b, α and θ are known, the aforementioned triangle has three known elements: the base b and the two angles (90° − α, 90° − θ), from which all other parameters can be evaluated.
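The triangulation relations of fig. 1 can be checked numerically; the sketch below uses the formulas given in the text, while the values of b, f, α and the measured parallax p are purely illustrative, not taken from a real device:

```python
import math

def triangulate(b, f, alpha, p):
    """Triangulation of fig. 1: baseline b (mm), focal length f (mm),
    laser inclination alpha (radians), measured parallax p (mm).
    Returns the coordinates (xA, zA) of the spot, in mm."""
    theta = math.atan(p / f)                      # tan(theta) = p / f
    zA = b / (math.tan(alpha) + math.tan(theta))  # range to point A
    xA = zA * math.tan(theta)                     # horizontal coordinate
    return xA, zA

# Illustrative numbers: 100 mm baseline, 25 mm lens, 30-degree beam,
# 2 mm parallax measured on the sensor.
xA, zA = triangulate(b=100.0, f=25.0, alpha=math.radians(30.0), p=2.0)
```

Note that a smaller parallax p yields a larger zA, and small parallax errors translate into increasingly large depth errors as the target moves away, which is why triangulation devices are restricted to short ranges.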

LASER/LIDAR

Figure 2. Acquisition of coordinates along a profile generated by a sheet of laser light. In a 3D laser scanner this profile is mechanically moved in order to probe an entire area

Through simple trigonometry we go back to the distance zA between the camera and point A on the object. This range, which is the most critical parameter and therefore gives its name to this class of instruments (range devices), is given by:

zA = b / (tan α + tan θ)

Multiplying this value by the tangent of θ, we get the horizontal coordinate xA.

In this schematic view yA never appears. In fact, with a technique like this, the sensor can be reduced to a single array of photosensitive elements rather than a matrix such as those with which digital cameras are equipped. In this case yA can be determined in advance by mounting the optical measurement system on a micrometric mechanical device providing its position with respect to a known y origin. The Region Of Interest (ROI), namely the volume that can actually be measured by the range device, is defined by the depth of field of the overall system consisting of illumination source and optics. As is well known, the depth of field of a camera depends on a combination of lens focal length and aperture. To make the most of this area, it is appropriate that the laser beam is also focused at the camera focusing distance, with a relatively long focal range, in order to keep the spot size nearly unchanged within the ROI. Once both these conditions are met, the ROI size can be further increased by tilting the sensor optics, as defined by the Scheimpflug principle (Li et al., 2007).

Triangulation-based laser scanner (single sheet of light)
The principle described above can be extended from a single point of light to a set of aligned points forming a segment. Systems of this kind use a sheet of light generated by a laser reflected by a rotating mirror or through a cylindrical lens. Once projected onto a flat surface, such a light plane produces a straight line, which becomes a curved profile on complex surfaces.

Each profile point responds to the rule already seen for the single-spot system, with the only difference that the sensor has to be 2D, so that both horizontal and vertical parallaxes can be estimated for each profile point. Such parallaxes are used for estimating the corresponding horizontal and vertical angles, from which, together with the knowledge of the baseline b and the lens focal length f, the three coordinates of each profile point can be estimated.

This process therefore allows the calculation of an array of 3D coordinates corresponding to the illuminated profile for a given light-object relative positioning. By displacing the light plane along its normal by a small amount Δy, a different strip of surface can be probed, generating a new array of 3D data referred to an unknown geometrical region close to the first one. The 3D laser scanner is a device implementing the iteration of such a process for a number of positions, which generates a set of arrays describing the geometry of a whole area, strip by strip. This kind of range image (or range map) is indicated also as a structured 3D point cloud.

Pattern projection sensors (multiple sheets of light)
With pattern projection sensors multiple sheets of light are simultaneously produced thanks to a special projector generating halogen light patterns of horizontal or vertical black and white stripes. An image of the area illuminated by the pattern is captured with a digital camera and each Black-to-White (B-W) transition is used as a geometrical profile, similar to those produced by a sheet of laser light impinging on an unknown surface. Even if the triangulation principle used is exactly the same seen for the two devices mentioned above, the main difference is that here no moving parts are required, since no actual scan action is performed. The range map is computed in this way just through digital post-processing of the acquired image.

Figure 3. Acquisition of coordinates along different profiles generated by multiple sheets of white light

The more B-W transitions are projected on the probed surface, the finer will be its spatial sampling, with a consequent increase of the geometrical resolution. Therefore the finest pattern would seem the most suitable solution for gaining the maximum amount of data from a single image but, in practical terms, this is not completely true. This depends on the impossibility of identifying each single B-W transition in an image of an unknown surface with striped patterns projected on it, due to the possible framing of an unknown subset of the projected pattern (e.g. for surfaces very close to the camera), or to the presence of holes or occlusions generating ambiguity in the stripe order.

In order to solve such ambiguity this category of devices uses a sequence of patterns rather than a single one. The most used approach is the Gray-coded sequence, which employs a set of patterns where the number of stripes is doubled at each step, up to the maximum number allowed by the pattern projector. Other pattern sequences have been developed and implemented, such as phase-shift or Moiré, with different metrological performances.

In general the advantage of structured-light 3D scanners is speed. This makes some of these systems capable of scanning moving objects in real time.

Direct range sensing
With active range sensing methods based on triangulation, the size of volumes that can be easily acquired ranges from a shoe box to a full-size statue. For a precise sensor response the ratio between the camera-target distance and the camera-source distance (baseline) has to be maintained between 1 and 5. Therefore framing areas very far from the camera would involve a very large baseline, which above 1 m becomes difficult to implement in practice. For larger objects like buildings, bridges or dams, a different working principle is used. It is based on optically measuring the sensor-to-target distance, with a priori knowledge of the angles through the controlled orientation of the range measurement device.

Principles
Active TOF range sensing is logically derived from the so-called total station. This is made of a theodolite, namely an optical targeting device for aiming at a specific point in space, coupled with a goniometer for precisely measuring horizontal and vertical orientations, integrated with an electronic distance meter. TOF, or time of flight, refers to the method used for estimating the sensor-to-target distance, which is usually done by measuring the time needed by light for travelling from the light source to the target surface and back to the light detector integrated in the electronic distance meter.

Differently from a total station, a 3D laser scanner does not need a human operator to take aim at a specific point in space, and therefore it does not have such a sophisticated crosshair. On the other hand it has the capability to automatically re-orient the laser over a predefined range of horizontal and vertical angles, in order to select a specific area in front of the instrument. The precise angular estimations are then returned by a set of digital encoders, while the laser TOF gives the distance. As exemplified in fig. 4, showing a schematic diagram of a system working only in the xz plane, analogously to what was shown for triangulation-based systems, it is clear that if the system returns the two parameters distance (ρ) and laser beam orientation (θ), the Cartesian coordinates of A in the xz reference system are simply given by:

xA = ρ sin θ
zA = ρ cos θ

In the case of a real 3D situation, in addition to the vertical angle a horizontal angle will also be given, and the set of coordinates (xA, yA, zA) will be obtained by a simple conversion from polar to Cartesian of the three-dimensional input data.

Figure 4. Acquisition of the coordinates of the point A through the a priori knowledge of the angle θ, and the measurement of the distance ρ through the Time Of Flight of a light pulse from the sensor to the object and back

An interesting sensor fusion is given by the Range-Imaging (RIM) cameras, which integrate distance measurements (based on the TOF principle) and imaging aspects. RIM sensors are not treated in this chapter as they are not really suitable for 3D modeling applications.

Systems based on the measurement of distance are in general indicated as LiDAR (Light Detection And Ranging), even if in the topographic field this acronym is often used for indicating the specific category of airborne laser scanners. The most noticeable aspect of such devices is in fact their capability to work at very long distances from the scanned surface, from half a metre up to a few kilometres, making them suitable also for 3D acquisition from flying platforms (helicopters or airplanes) or moving vehicles (boats or cars).
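The polar-to-Cartesian conversion mentioned above can be sketched as follows. This is a minimal illustration: the specific angle convention (θ measured from the z axis, φ as horizontal rotation) is an assumption, since conventions vary between instruments:

```python
import math

def polar_to_cartesian(rho, theta, phi):
    """Convert a range measurement rho plus the two encoder angles
    (radians) into Cartesian coordinates.  Here theta is taken as the
    angle from the z axis and phi as the horizontal rotation."""
    x = rho * math.sin(theta) * math.cos(phi)
    y = rho * math.sin(theta) * math.sin(phi)
    z = rho * math.cos(theta)
    return x, y, z

# In the planar (xz) case of fig. 4, phi = 0 and the formulas reduce
# to xA = rho*sin(theta), zA = rho*cos(theta).
xA, _, zA = polar_to_cartesian(10.0, math.radians(30.0), 0.0)
```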

For ground-based range sensors the angular movement can be 360° horizontally and close to 180° vertically, allowing a huge spherical volume to be captured from a fixed position. As for triangulation-based range sensors, the output of such devices is again a cloud of 3D points originated by a high-resolution spatial sampling of an object. The difference with triangulation devices is often in the data structure. In TOF devices data are collected by sampling an angular sector of a sphere, with a step not always fixed. As a result the data set can be formed by scan lines not necessarily all of the same size. Therefore the device output may be given as a simple list of 3D coordinates not structured in a matrix.

In terms of performance, contributions to measurement errors may be given by both the angular estimation accuracy and the distance measurement. However, due to the very high speed of light, the TOF is very short, and this involves that the major source of randomness is its estimation, which becomes a geometrical uncertainty once time is converted into distance. For this reason the angle estimation devices implemented in this kind of laser scanner are similar to each other, but different strategies for obtaining distance from light have been proposed for minimizing such uncertainty, all derived from approaches originally developed for radars.

TOF laser scanner (PW)
Distance estimation is here based on a short Pulsed Wave (PW) of light energy generated from the source toward the target. Part of it is backscattered to the sensor, collected and reconverted into an electric signal by a photodiode. The transmitted light driving pulse and the received one are used as start/stop commands for a high-frequency digital clock that allows counting a number of time units between the two events. Of course, the higher the temporal resolution of the counting device, the finer will be the distance estimation. However, frequency limitations of electronic counting do not allow going below a few tens of ps in time resolution, corresponding to some millimetres.

Considering that the speed of light is approximately c = 3 × 10^8 m/s, and that the TOF is related to a travel of the light pulse to the surface and back (double the sensor-to-target distance), the range will be given by:

r = (TOF × c) / 2

Therefore a small deviation in estimating the TOF, for example in the order of 20 ps, will give a corresponding range deviation Δr = 1/2 × (20 × 10⁻¹²) × (3 × 10⁸) m = 3 mm.

For some recent models of laser scanner based on this principle (Riegl, 2010), the device is capable of detecting multiple reflected pulses from a single transmitted pulse, produced by situations where multiple targets are present along the laser trajectory (e.g. a wall behind tree leaves). In this case the cloud of points is no longer a 2.5D entity.
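The pulse-based range equation and the 20 ps example above can be verified directly (the numbers are those given in the text):

```python
C = 3.0e8  # speed of light in m/s (the approximation used in the text)

def pulse_range(tof_seconds):
    """Range from a round-trip time of flight: r = TOF * c / 2."""
    return 0.5 * tof_seconds * C

# A 20 ps error on the TOF estimate maps to a 3 mm range error.
delta_r = pulse_range(20e-12)
print(round(delta_r * 1000, 3))  # 3.0 (mm)
```

The factor 1/2 accounts for the fact that the pulse travels the sensor-to-target distance twice, out and back.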
Phase shift laser scanner (CW)
In this case distance is estimated with a laser light whose intensity is sinusoidally modulated at a known frequency, generating a Continuous Wave (CW) of light energy directed toward the target. The backscattering on the target surface returns a sinusoidal light wave delayed with respect to the transmitted one, and therefore characterized by a phase difference Δφ with it. Similarly to the previous approach, the distance estimation is based on a comparison between the signal applied to the laser for generating the transmitted light wave:

sTX ∝ cos(ω0 t)

and the signal generated by re-converting into electrical form the light backscattered by the surface and received by the range sensor:

sRX ∝ cos(ω0 t + Δφ)

A CW laser scanner implements an electronic mixing of the two signals, which corresponds to a multiplication of these two contributions. It can be reduced as follows:

cos(ω0 t) · cos(ω0 t + Δφ) = 1/2 cos(2ω0 t + Δφ) + 1/2 cos(Δφ)

The result is a contribution at double the modulating frequency, which can be cut through a low-pass filter, and a continuous contribution, directly proportional to the phase difference Δφ, which can be estimated. Since this angular value is directly proportional to the TOF, from it the range can be evaluated similarly to the previous case. This indirect estimation of the TOF allows a better performance in terms of uncertainty for two main reasons: a) since the light sent to the target is continuous, much more energy can be transmitted with respect to the PW case, and the consequent signal-to-noise ratio of the received signal is higher; b) the low-pass filtering required for extracting the useful signal component involves a cut also of the high-frequency noise, resulting in a further decrease of noise with respect to signal.

A peculiar aspect of this range measurement technique is the possibility of obtaining ambiguous information if the sensor-to-target distance is longer than the equivalent length of a full wave of modulated light, given by the ambiguity range ramb = c/f0, due to the periodical repetition of phase. Such ambiguity involves a maximum operating distance that is in general smaller for CW devices than for PW ones.

FM-CW laser scanner
In CW systems the need for a modulation wavelength long enough to avoid ambiguity influences the range detection performance, which improves as the wavelength gets shorter (i.e. as ω0 grows). This has led to CW solutions where two or three different modulation frequencies are employed: a low modulating frequency for a large ambiguity range (in the order of 100 m), and higher modulation frequencies for increasing angular (and therefore range) resolution.

By increasing indefinitely the number of steps between a low and a high modulating frequency, a so-called chirp frequency modulation (FM) is generated, with a linear growth of the modulating frequency in the operating range. As light is generated continuously, this kind of instrument is indicated as FM-CW. Since this processing is normally used in radars (Skolnik, 1990), this device is also known as a laser radar. The peculiar aspect of this approach is the capability to reduce the measurement uncertainty to levels much lower than that of PW laser scanners (typically 2-3 mm) and lower than that of CW laser scanners (less than 1 mm on optically cooperative materials at the proper distance), competing with triangulation laser scanners, which are capable of reaching a measurement uncertainty lower than 100 μm. Such devices have therefore the advantage of the spherical acquisition set-up typical of TOF laser scanners, with metrological performances comparable to those of triangulation-based devices, at operating distances from 1 to 20 metres, far larger than the typical triangulation devices operating range (0.5 to 2 m). For this reason such instruments have been experimented with in applications where a wide area and high precision are simultaneously required, as in industrial (Petrov, 2006) and CH (Guidi et al., 2005; Guidi et al., 2009b) applications.

2.2.3 SENSORS CHARACTERIZATION
When a range sensor has to be chosen for geometrically surveying an object shape, independently of its size, the first point to face regards which level of detail has to be recognizable in the final 3D digital model that will be built starting from the raw 3D data, and the acceptable tolerance between the real object and its digital counterpart. These matters are so important that they influence all the technological and methodological choices for the whole 3D acquisition project.
The main metrological parameters related to measurement are univocally defined by the International Vocabulary of Metrology (VIM), published by the Joint Committee for Guides in Metrology (JCGM) of ISO (JCGM, 2008). Such parameters are basically Resolution, Trueness (accuracy) and Uncertainty (precision).

Although the transposition of these concepts to the world of 3D imaging has been reported in the reference guide VDI/VDE 2634 by the Association of German Engineers for pattern projection cameras, a more general international standard on optical 3D measurement is still in preparation by committee E57 of the American Society for Testing and Materials (ASTM, 2006). Also the International Organization for Standardization (ISO) has not yet defined a metrological standard for non-contact 3D measurement devices. Its ISO 10360 only defines the methods for characterizing contact-based Coordinate Measuring Machines (CMM), while an extension for CMMs coupled with optical measuring machines (ISO 10360-7:2011) is still under development.
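The distinction between trueness (systematic offset of the average) and precision (spread of repeated measurements) can be illustrated numerically; the measurement values below are invented for the sake of the example:

```python
import statistics

# Repeated range measurements (mm) of a target whose reference
# distance, measured with a far more accurate method, is 1000.0 mm.
reference = 1000.0
measurements = [1002.1, 1001.8, 1002.3, 1001.9, 1002.2, 1001.7]

# Trueness: closeness of the average to the reference value
# (the systematic component of the measurement error).
systematic_error = statistics.mean(measurements) - reference

# Precision: dispersion of the values around their own average
# (the random component), expressed as a standard deviation.
random_error = statistics.stdev(measurements)
```

With these numbers the sensor is fairly precise (spread below a quarter of a millimetre) but not very accurate (a systematic offset of about 2 mm), like shooter A in fig. 5; calibration can reduce the systematic part, while the random part sets the uncertainty floor.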

Resolution
According to the VIM, resolution is the smallest change in a quantity being measured that causes a perceptible change in the corresponding indication. This definition, once referred to non-contact 3D imaging, is intended as the minimum geometrical detail that the range device is capable of capturing. This is influenced by the device's mechanical, optical and electronic features. Of course such a value represents the maximum resolution allowed by the 3D sensor. For its 3D nature it can be divided into two components: the axial resolution, along the optical axis of the device (usually indicated as z), and the lateral resolution, on the xy plane (MacKinnon et al., 2008).

For digitally capturing a shape, the 3D sensor generates a discretization of its continuous surface according to a predefined sampling step, adjustable by the end-user even at a level lower than the maximum. The adjustment leads to a proper spacing between geometrical samples on the xy plane, giving the actual geometrical resolution level chosen by the operator for that specific 3D acquisition action. The corresponding value in z is a consequence of the opto-geometric set-up, and cannot usually be changed by the operator.
In other words, a clear distinction has to be made between the maximum resolution allowed by the sensor, often indicated as lateral resolution in the sensor data sheet, and the actual resolution used for a 3D acquisition work, which the end-user can properly set up according to the geometrical complexity of the 3D object to be surveyed, operating on the xy sampling step.
The latter set-up is directly influenced by the lens focal length and the sensor-to-target distance for triangulation devices, which use an image sensor whose size and pixel density are known in advance. In that case the sampling step is attainable, for example, by dividing the horizontal size of the framed area by the number of horizontal pixels. Since most cameras have square pixels, in general this value is equivalent to (vertical size)/(vertical number of pixels). For TOF devices the sampling can be set up in the laser scanner control software by defining the angular step between two adjacent points on a scan line, and between two adjacent scan lines. Of course, in order to convert the angular step into a linear step on the surface, such an angle expressed in radians has to be multiplied by the operating distance. Some scanner control packages allow setting the latter linear value directly.

The sampling should be made according to a rule deriving directly from the Nyquist-Shannon sampling theorem (Shannon, 1949), developed first in communication theory. Such theorem states that, if a sinusoidal behaviour has a frequency defined by its period T, which in the geometrical domain becomes a length (the size of the minimal geometrical detail that we intend to digitally capture), the minimal sampling step suitable for allowing the reconstruction of the same behaviour from the sampled one is equal to T/2. Of course it is not generally true that the fine geometrical details of a complex shape can be considered as made of extrusions of sinusoidal profiles, but at least this criterion gives a rule of thumb for estimating the geometrical sampling step beyond which it is certain that the smallest geometrical detail will be lost.

Trueness (accuracy)
The VIM definition indicates accuracy in general as the closeness of agreement between a measured quantity value and a true quantity value of a measurand. When such a theoretical entity has to be evaluated for an actual instrument, including a 3D sensor, such a value has to be experimentally estimated from the instrument output. For this reason the VIM also defines trueness as the closeness of agreement between the average of an infinite number of replicate measured quantity values and a reference quantity value. It is a more practical parameter that can be numerically estimated as the difference between a 3D value assumed as true (because measured with a far more accurate method) and the average of a sufficiently large number of samples acquired through the range device to be characterized. Such a parameter refers therefore to the systematic component of the measurement error with respect to the real data (exemplified in fig. 5) and can be minimized through an appropriate sensor calibration. For 3D sensors, accuracy might be evaluated both for the axial direction (z) and for a lateral one (on the xy plane). In general, accuracy in depth is the most important; it varies from a few hundredths to a few tenths of a millimetre for triangulation-based sensors and FM-CW laser scanners, it is in the order of 1-2 mm for CW laser scanners, and in the order of 2-20 mm for PW laser scanners.

Uncertainty (precision)
Precision is the closeness of agreement between indications or measured quantity values obtained by replicate measurements on the same or similar objects under specified conditions (JCGM, 2008). A practical way of estimating such agreement is to calculate the dispersion of the quantity values being attributed to a measurand through the standard deviation of the measured values with respect to their average (or a multiple of it), defined by the VIM as uncertainty (fig. 5).
As accuracy is influenced by systematic errors, precision is mostly influenced by random errors, leading to a certain level of unpredictability of the measured value, due to thermal noise in the sensor's detector and, in the case of laser-based devices, to the typical laser speckle effect (Baribeau & Rioux, 1991).

For a 3D sensor such estimation can be done acquiring


several times the same area and analysing the measured
value of a specific point in space as a random variable,
calculating its standard deviation. This would involve a
very large number of 3D acquisitions to be repeated,
namely from 10000 to one million, in order to consider
the data statistically significant. For this reason a more
practical approach (even if not as theoretically coherent

45

3D MODELING IN ARCHAEOLOGY AND CULTURAL HERITAGE

leads to a set of measured points that can be used as


nodes of a mesh representing a 3D digital approximation
of the real object. Hence, for going from the raw data to
the final 3D model, a specific process has to be followed
(Bernardini & Rushmeyer, 2002b; Vrubel et al., 2009),
according to the steps described in the next sections.
Many of these steps have been implemented in 3D point
cloud processing packages, both open source, like
Meshlab (ISTI-CNR, Italy), Scanalize (Stanford
University, USA), and commercial, as Polyworks
(Innovmetric, Canada), RapidForm (Inus Technology,
South Corea), Geomagic Studio (Geomagic, USA),
Cyclone (Leica, Switzerland), 3D Reshaper (Technodigit,
France), JRC 3D Reconstructor (Gexcel, Italy).
Planning
The final scope of the digital model is the first matter to
be considered for properly planning a 3D acquisition
project. Applications of 3D models may span from a
simple support for multimedia presentations to a
sophisticate dimensional monitoring. In the former case a
visually convincing virtual representation of the object is
enough, while in the latter a strict metric correspondence
between the real object and its digital representation is
absolutely mandatory. Since parameters as global model
accuracy and geometrical resolution has a considerable
cost in terms of acquired data and post-processing
overhead, a choice coherent with the project budget and
final purpose, is a must. Once such aspects have been
clearly identified, the object to be acquired has to be
analyzed in terms of size, material and shape.

Figure 5. Exemplification of the accuracy and precision


concepts. The target has been used by three different
shooters. The shooter A is precise but not accurate,
B is more accurate than A but less precise (more
spreading), C is both accurate and precise

with the definition) is to acquire the range map of a target


whose shape is known in advance, like for example a
plane, and evaluate the standard deviation of each 3D
point respect to the ideal shape (Guidi et al., 2010). Since
a range map can be easily made by millions of points the
statistical significance is implicit.
Precision of active 3D devices ranges from a few tens of
micrometres for triangulation based sensors, with an
increase of deviation with the square of sensor-to-target
distance. It has similar values for FM-CW laser scanners
with a much less significant change with distance. For
CW laser scanners it has values starting from below 1mm
up to a few mm as the sensor is farer from the target, and
not less of 2 mm for PW laser scanners (Boehler et al.,
2003) with no significant change with distance (Guidi et
al., 2011).

Acquisition of individual point-clouds


Once the planning has been properly examined, the final
acquisition is rather straightforward. In addition to basic
logistics, possible issues may be related with sensor
positioning and environmental lighting. Camera
positioning for small objects can be solved either by
moving the object or the sensor, but when the object is
very large and heavy (e.g. a boat), or fixed into the
ground (e.g. a building), the only possibility is obviously
to move the range sensor. In that case a proper
positioning should be arranged through scaffoldings or
mobile platforms, and the related logistics should be
organized. Another aspect that might influence a 3D
acquisition is the need of working in open air rather than
in a laboratory where lighting conditions can be
controlled. In the former case it has to be considered that
TOF laser scanners are designed for working on the field
and are therefore not much influenced by direct sunlight.
Triangulation based range sensors employ much less light
power per surface unit and for this reason give worst or
no results with high environmental light. In this case a
possible but logistically costly solution is to prepare a set
with tents or shields for limiting the external light on the
surface to be acquired. However in that conditions a more
practical approach for obtaining the same high resolution
is dense image matching, that, being a passive technique,
works well with strong environmental lighting (Guidi et
al., 2009a).

For modelling applications the uncertainty level of the range sensor should not exceed a fraction of the resolution step, in order to avoid topological anomalies in the final mesh (Guidi & Bianchini, 2007). A good rule of thumb is to avoid a resolution level smaller than the range device measurement uncertainty.

2.2.4 ACQUISITION AND PROCESSING


Independently of the active 3D technology used, a range map is a metric representation of an object from a specific point of view, through a set of 3D points properly spaced apart according to the complexity of the imaged surface.
In order to create a model, several views have to be taken to cover the whole object surface.


LASER/LIDAR

The only difference is that no special target has to be fixed on the scene and individually measured by the operator. On the other hand, to allow a proper alignment, a considerable degree of overlap between adjacent range maps has to be arranged, resulting in large data redundancy and long computational times.

Point-clouds alignment
In general each range map acquired from a specific position is given in a coordinate system with its origin located at the range sensor. Taking range data of a scene or object from different points of view means gathering 3D data representing the same geometry in different reference systems whose mutual orientation is generally unknown. For this reason it is necessary to align all 3D data into the same coordinate system. The process can be achieved in three different ways.

The algorithm for aligning this kind of 3D data sets involves the choice of a range map whose coordinate system is used as global reference. A second data set, partially overlapping with the reference one, is manually or automatically pre-aligned to the main one by choosing at least three corresponding points in the common area of both range maps (fig. 6a). This step starts an iterative process that minimizes the average distance between the two datasets: initiated from a situation of approximate alignment (fig. 6b) not too far from the optimized one (fig. 6c), the optimum is reached after a number of iterations that grows with the roughness of the initial approximation. For this reason this class of algorithms is called Iterative Closest Point (ICP).
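The iterative minimization just described can be illustrated with a toy pair-wise ICP sketch. This is not a production algorithm: the array names are our own, the nearest-neighbour search is brute force (real implementations use k-d trees), and the convergence test is a simple error-change threshold.

```python
import numpy as np

def best_rigid_transform(A, B):
    """Least-squares rotation R and translation t mapping points A onto B (SVD)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(source, target, iters=30, tol=1e-7):
    """Align `source` (N,3) onto `target` (M,3) by iterated closest points."""
    src = source.copy()
    prev = np.inf
    for _ in range(iters):
        # brute-force closest-point search (replace with a k-d tree in practice)
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        nn = target[d2.argmin(axis=1)]
        R, t = best_rigid_transform(src, nn)
        src = src @ R.T + t
        err = np.sqrt(d2.min(axis=1)).mean()
        if abs(prev - err) < tol:         # stop when the error no longer improves
            break
        prev = err
    return src, err
```

Starting from a rough pre-alignment, each pass re-estimates correspondences and the rigid transform, so the mean closest-point distance decreases monotonically toward a local minimum.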

Alignment based on complementary equipment


This approach requires the measurement of the range device position and orientation with a complementary 3D measurement device such as a CMM, which provides such data in its own coordinate system, assumed as the global reference. These six pieces of information (position and orientation) can be used to calculate the roto-translation matrix from the range device coordinate system to the global one. Systematically applying such a roto-translation to any 3D point measured by the range device immediately gives its representation in the global reference system, even for different device-to-target orientations. Although the working volume is limited by the CMM positioning range, this approach is very accurate. This is why it is used in equipment typically employed in high-accuracy industrial applications, with articulated arms (contact CMM) or laser trackers (non-contact CMM) coupled with triangulation-based scanning heads (Pierce, 2007; Peggs et al., 2009).
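The roto-translation step can be sketched as follows. The roll-pitch-yaw parameterization and the function names are our own assumptions for illustration; the actual angle convention depends on the tracking device at hand.

```python
import numpy as np

def pose_to_matrix(position, roll, pitch, yaw):
    """Build the 4x4 roto-translation from device frame to global frame
    (assumed Z-Y-X Euler convention; angles in radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = position
    return T

def to_global(points, T):
    """Apply the roto-translation to an (N,3) array of device-frame points."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ T.T)[:, :3]
```

Every scan taken from a tracked pose is pushed through `to_global` and lands directly in the common reference system, with no ICP needed.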

The most critical aspect is that the range maps to be aligned represent different samplings of the same surface; therefore there is no exact correspondence between 3D points in the two coordinate systems. Several solutions have been proposed. One minimizes the Euclidean distances between approximately corresponding points, but it is highly time consuming due to the exhaustive search for the nearest point (Besl & McKay, 1992); another minimizes the distance between a point and a planar approximation of the surface at the corresponding point on the other range map (Chen & Medioni, 1992). In both cases the algorithm core is a nonlinear minimization process, being based on a nonlinear feature such as a distance. For this reason the associated cost function exhibits several confusing local minima, and its minimization needs to be started from a pre-alignment close enough to the final solution in order to converge to the absolute minimum.
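The difference between the two cost functions can be made concrete with a small sketch (the arrays and normals below are illustrative): an offset tangential to the surface contributes fully to the point-to-point distance but not at all to the point-to-plane one, which is what lets the latter "slide" along locally flat regions during minimization.

```python
import numpy as np

def point_to_point_error(src, tgt_pts):
    """Besl & McKay style residual: Euclidean distance per correspondence."""
    return np.linalg.norm(src - tgt_pts, axis=1)

def point_to_plane_error(src, tgt_pts, normals):
    """Chen & Medioni style residual: distance to the tangent plane at the
    corresponding target point; `normals` are assumed unit surface normals."""
    return np.abs(((src - tgt_pts) * normals).sum(axis=1))
```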

In the case of long-range active sensors (e.g. TOF laser scanners) the complementary device can be a GNSS receiver, used at every acquisition to measure the position of the range sensor in a global reference system.

Once the first two range maps of a set are aligned, ICP can be applied to the other adjacent point clouds up to the full coverage of the surface of interest. This progressive pair-wise alignment may lead to considerable error propagation, clearly noticeable on closed surfaces when the first range map has to be connected with the last one. For this reason global versions of ICP have been conceived, where the orientation of each range map is optimized with respect to all neighbouring range maps (Gagnon et al., 1994).

Alignment based on reference targets
Measuring some reference points in the scene with a surveying system, such as a total station, allows the definition of a global reference system in which such targets are represented. During the 3D acquisition campaign the operator captures scenes containing at least three targets, which are therefore represented in the range device reference system for that particular position. Since their positions are also known in the global reference system, their coordinates can be used to compute the roto-translation matrix for re-orienting the point cloud from its original reference system to the global one. The operation is of course repeated up to the alignment of all 3D data of the scene. This approach is used most frequently with TOF laser scanners, thanks to their large region of interest.
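Computing the roto-translation from three or more targets known in both frames is a classical absolute-orientation problem. A hedged SVD-based sketch, with made-up variable names and coordinates, might look like this:

```python
import numpy as np

def register_by_targets(targets_dev, targets_glob, cloud_dev):
    """Estimate the rigid transform taking targets_dev (>=3 non-collinear
    points in the scanner frame) onto targets_glob (same targets surveyed in
    the global frame), then apply it to the whole device-frame cloud."""
    cd, cg = targets_dev.mean(axis=0), targets_glob.mean(axis=0)
    U, _, Vt = np.linalg.svd((targets_dev - cd).T @ (targets_glob - cg))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # reject reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cg - R @ cd
    return cloud_dev @ R.T + t        # whole point cloud re-oriented to global
```

With noiseless target coordinates the transform is recovered exactly; with surveyed (noisy) targets the same formula gives the least-squares estimate.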

Several refinements of the ICP approach have been developed in the last two decades for pair-wise alignment (Rusinkiewicz & Levoy, 2001), with the introduction of additional non-geometrical parameters such as colour, for aligning objects with rich image content but poor 3D structure, like flat or regularly textured surfaces (Godin et al., 2001b), and for managing possible shape changes between different shots due to non-rigid objects (Brown & Rusinkiewicz, 2007). A quantitative test of different alignment algorithms has recently been proposed

Alignment based on 3D image matching (ICP)


Using as reference natural 3D features in the scene is a possible alternative, somewhat similar to the previous one.

3D MODELING IN ARCHAEOLOGY AND CULTURAL HERITAGE

Figure 6. ICP alignment process: a) selection of corresponding points on two partially superimposed
range maps; b) rough pre-alignment; c) accurate alignment after a few iterations

in terms of metric performance and processing time (Salvi et al., 2007). For a comprehensive and updated state of the art on alignment algorithms see (Deng, 2011).

2.2.5 POLYGONAL MODEL GENERATION


Once a point cloud from image matching, or a set of aligned point clouds acquired with an active sensor, is obtained, a polygonal model (mesh) is generally produced. This process is logically subdivided into several sub-steps that can be completed in different orders depending on the 3D data source (Berger et al., 2011).

Mesh generation from structured point-clouds


The regular matrix arrangement of a structured point cloud gives immediate knowledge of the potential mesh connections between neighbouring 3D points, making mesh generation a rather straightforward procedure. This means that once a set of range maps is aligned, each can easily be meshed before starting the final merge.
This is what is done, for example, by the Polyworks software package used to create the alignment and meshing shown in fig. 7. To carry out the subsequent merge, the meshes associated with the various range maps have to be connected with the neighbouring meshes. This can be achieved with two different approaches: (i) the so-called zippering method (Turk & Levoy, 1994), which selects polygons in the overlapping areas, removes redundant triangles and connects the meshes together (zipper), trying to maintain the best possible topology. An optimized version that uses Venn diagrams to evaluate the level of redundancy on mesh overlaps has been proposed (Soucy & Laurendeau, 1995). Other approaches work by triangulating the union of the point sets, like the Ball Pivoting algorithm (Bernardini et al., 1999), which consists of rolling an imaginary ball over the point sets and creating a triangle for each triplet of points supporting the ball. All methods based on a choice of triangles from a certain mesh in the overlapping areas may become critical when a large number of range maps overlap; (ii) a volumetric algorithm which subdivides the model space into voxels, calculates an average position of each 3D point in the overlapping areas and re-samples

Figure 7. Mesh generation: a) set of ICP-aligned range maps, with different colours indicating the individual range maps; b) merge of all range maps into a single polygonal mesh

meshes along common lines of sight (Curless & Levoy, 1996). In this case, areas where a large number of range maps overlap are handled more efficiently than with the zippering method, with a reduction of measurement uncertainty obtained by averaging corresponding points.
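Why a structured range map meshes so easily can be shown directly: the grid indices themselves define two candidate triangles per cell. In this hedged sketch, NaN depth values are our own illustrative convention for missing laser returns, and cells touching them are simply skipped.

```python
import numpy as np

def grid_mesh(points):
    """points: (rows, cols, 3) structured range map.
    Returns a (T, 3) array of vertex-index triplets forming the mesh."""
    rows, cols, _ = points.shape
    idx = np.arange(rows * cols).reshape(rows, cols)
    valid = ~np.isnan(points[..., 2])        # NaN z marks a missing return
    tris = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            if valid[r, c] and valid[r, c + 1] and valid[r + 1, c] and valid[r + 1, c + 1]:
                a, b = idx[r, c], idx[r, c + 1]
                d, e = idx[r + 1, c], idx[r + 1, c + 1]
                tris.append((a, b, d))       # upper-left triangle of the cell
                tris.append((b, e, d))       # lower-right triangle of the cell
    return np.array(tris)
```

No neighbour search is ever needed: the connectivity comes for free from the sensor's scanning pattern, which is exactly what unstructured clouds lack.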
Mesh generation from unstructured point-clouds
While meshing is a pretty straightforward step for structured point clouds, for an unstructured point cloud it is not so immediate. It requires a specific process such as Delaunay triangulation, involving a projection of the 3D points onto a plane or another primitive surface, and a search for the shortest


specified in sect. 4.1, the resolution is chosen to capture the smallest geometrical details and can therefore be redundant for most of the model.

point-to-point connection, with the generation of a set of potential triangles that are then re-projected into 3D space and topologically verified. For this reason mesh generation from unstructured clouds may consist of: a) merging the 2.5D point clouds, reducing the amount of data in the overlapping areas and generating in this way a full 3D cloud of uniform resolution; b) meshing with procedures more sophisticated than a simple Delaunay. The possible approaches for this latter step are based on: (i) interpolating surfaces, which build a triangulation with more elements than needed and then prune away triangles not coherent with the surface (Amenta & Bern, 1999); (ii) approximating surfaces, where the output is often a triangulation of a best-fit function of the raw 3D points (Hoppe et al., 1992; Cazals & Giesen, 2006).

A selective simplification of the model can thus reduce the number of polygons without significantly changing its geometry (Hoppe, 1996). As shown in fig. 8a, the point density set for the device appears to be redundant for all those surfaces whose curvature radius is not too small.
A mesh simplification that progressively reduces the number of polygons by eliminating some nodes can be applied until a pre-defined number of polygons is reached (useful for example in game applications where such a limitation holds), or, as an alternative, by checking the deviation between the simplified and unsimplified mesh and stopping at a pre-assigned threshold. If such a threshold is chosen in the order of the 3D sensor measurement uncertainty, this kind of simplification does not practically influence the geometric information attainable from the model (fig. 8b), while strongly shrinking the data (nearly six times in the example). Mesh simplification algorithms have been extensively examined and compared by Cignoni et al. (1998).
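The threshold-based stopping rule can be illustrated with a toy sketch, in which naive subsampling stands in for a real decimation algorithm and the deviation measure is a simple mean nearest-point distance; both simplifications are ours, chosen only to make the stop criterion concrete.

```python
import numpy as np

def deviation(original, simplified):
    """Mean distance from each original point to its nearest kept point."""
    d2 = ((original[:, None, :] - simplified[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2.min(axis=1)).mean()

def simplify(points, threshold):
    """Coarsen by powers of two until the deviation from the full point set
    would exceed `threshold` (e.g. the sensor measurement uncertainty)."""
    step = 1
    while True:
        kept = points[:: step * 2]
        if len(kept) < 4 or deviation(points, kept) > threshold:
            return points[::step]        # last level still within tolerance
        step *= 2
```

Setting `threshold` to the order of the device measurement uncertainty mirrors the rule stated above: the removed detail is indistinguishable from noise.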

Dense image matching generally delivers unstructured 3D point clouds that can be processed with the same approach used for the above-mentioned unstructured laser scanner point clouds. No alignment phase is needed, as the photogrammetric process delivers a unique point cloud of the surveyed scene.
Editing and optimization
Mesh editing allows the correction of the topological inconsistencies generated during polygonal surface generation. Generally some manual intervention by the operator is required in order to clean spikes and unwanted features and to reconstruct those parts of the mesh that are lacking, due either to previous processing stages or to an effective absence of 3D data collected by the sensor. These actions are needed for at least two purposes: (i) if the final 3D model has to be used for real-time virtual presentations or static renderings, the lack of even a few polygons gives no support to texture or material shading, creating a very bad visual impression and thwarting the huge modelling effort made up to this stage; (ii) if the model has to be used for generating physical copies through rapid prototyping, the mesh has to be watertight.

Several approaches have been proposed for filling the lacking parts of the final mesh in as much agreement as possible with the measured object, such as radial basis functions (Carr et al., 2001), multi-level partition of unity implicits (Ohtake et al., 2003) or volumetric diffusion (Davis et al., 2002; Sagawa & Ikeuchi, 2008).
In some cases, for example dimensional monitoring applications, mesh editing is not advisable because of the risk of adding non-existing data to the measured model, leading to possibly inconsistent output.

Figure 8. Mesh optimization: a) mesh with polygon sizes given by the range sensor resolution set-up (520,000 triangles); b) mesh simplified in order to keep the difference with the unsimplified one below 50 µm. The polygon sizes vary dynamically according to the surface curvature and the mesh size drops down to 90,000 triangles

Optimization is instead a useful final step in any application where a significant reduction of the mesh size can be obtained. After the mesh generation and editing stages, the polygonal surface has a point density generally defined by the geometrical resolution set by the operator during the 3D data acquisition or image matching procedure. In the case of active range sensing, as


purposes. In the case of large and complex models the point-based rendering technique does not give satisfactory results and does not provide realistic visualization. The visualization of a 3D model is often the only product of interest for the external world, remaining the only possible contact with the 3D data. Therefore a realistic and accurate visualization is often required. Furthermore, the ability to easily interact with a huge 3D model is a continuing and increasing problem. Indeed, model sizes (both in geometry and texture) are increasing at a faster rate than computer hardware advances, and this limits the possibilities of interactive and real-time visualization of the 3D results. Due to the generally large amount of data and its complexity, the rendering of large 3D models is therefore done with multi-resolution approaches, displaying the large meshes at different Levels of Detail (LOD) together with simplification and optimization approaches (Dietrich et al., 2007).

2.2.6 TEXTURE MAPPING AND VISUALIZATION


A polygonal 3D model can be visualized in wireframe, shaded or textured mode. A textured 3D geometric model is probably the most desirable form of 3D object documentation for most users, since it gives, at the same time, a full geometric and appearance representation and allows unrestricted interactive visualization and manipulation under a variety of lighting conditions. The photo-realistic representation of a polygonal model (or even a point cloud) is achieved by mapping colour images onto the 3D geometric data. The 3D data can be in the form of points or triangles (mesh), according to the application and requirements. The texturing of 3D point clouds (point-based rendering techniques; Kobbelt & Botsch, 2004) allows a faster visualization, but for detailed and complex 3D models it is not an appropriate method. In the case of meshed data the texture is automatically mapped if the camera parameters are known (e.g. if it is a photogrammetric model and the images are oriented); otherwise an interactive procedure is required (e.g. if the model has been generated using range sensors and the texture comes from a separate imaging sensor). Indeed, homologous points between the 3D mesh and the 2D image to be mapped have to be identified in order to find the alignment transformation necessary to map the colour information onto the mesh. Although some automated approaches have been proposed in the research community (Lensch et al., 2000; Corsini et al., 2009), no automated commercial solution is available, and this is a bottleneck of the entire 3D modelling pipeline. Thus, in practical cases, the 2D-3D alignment is done with the well-known DLT approach (Abdel-Aziz & Karara, 1971), often referred to as the Tsai method (Tsai, 1986). Corresponding points between the 3D geometry and a 2D image to be mapped are sought in order to retrieve the unknown interior and exterior camera parameters. The colour information is then projected (or assigned) to the surface polygons using colour-vertex encoding, a mesh parameterization or an external texture.
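The DLT estimation step can be sketched in a few lines. The linear system is the classical homogeneous formulation (two equations per 2D-3D correspondence, at least six correspondences in general position); the camera values used below for checking it are purely illustrative.

```python
import numpy as np

def dlt(points3d, points2d):
    """Estimate the 3x4 projection matrix P from >=6 2D-3D correspondences,
    up to scale, as the null vector of the stacked DLT equations."""
    A = []
    for (X, Y, Z), (u, v) in zip(points3d, points2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)          # right singular vector of smallest s.v.

def project(P, point3d):
    """Project a 3D point with P and de-homogenize to pixel coordinates."""
    x = P @ np.append(point3d, 1.0)
    return x[:2] / x[2]
```

Once P is recovered, every mesh vertex can be projected into the image and assigned the colour found there, which is exactly the mapping step described above.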

2.2.7 AN EXAMPLE OF 3D DIGITIZATION OF AN ARCHAEOLOGICAL SITE
The case study described here shows a practical case of reality-based digital modelling of an archaeological site. Differently from laboratory activities, where the external conditions can be controlled, work in the field may become particularly challenging due to the presence of both technical and logistic problems that have to be solved simultaneously in order to obtain reliable results. Since the location of an archaeological site may be very far from the lab, a particularly important issue is how to set up the process so as to collect 3D data with a certain level of redundancy, checking it while still on site, in order to avoid multiple missions to the location with increased costs for the overall digitization project.
The 3D project illustrated below concerns the ruins of five temples in My Son, a wide archaeological area located in central Vietnam. Created by the ancient Cham civilization, active in Vietnam from the 4th to the 18th century, it was listed as a UNESCO World Heritage site in 1999. My Son contains a reasonably well preserved system of 78 Hindu tower temples, some of them destroyed by natural events over the last centuries, and one of them by the Vietnam-USA War in the 1970s. All the temples found in this area have specific functions and have for this reason been classified in groups, indicated with letters from A to H.

In Computer Graphics applications, texturing can also be performed with techniques able to graphically modify the derived 3D geometry (displacement mapping) or to simulate surface irregularities without touching the geometry (bump mapping, normal mapping, parallax mapping).
In the texture mapping phase some problems can arise due to lighting variations among the images, surface specularity and camera settings. The images are often exposed with the illumination present at imaging time, but this may need to be replaced by illumination consistent with the rendering point of view and the reflectance properties (BRDF) of the object (Lensch et al., 2003). High dynamic range (HDR) images might also be acquired to recover all scene details and illumination (Reinhard et al., 2005), while colour discontinuities and aliasing effects must be removed (Debevec et al., 2004; Umeda et al., 2005).

The 3D survey was applied to the G group, whose restoration and valorization was assigned in the 1990s by the local authorities to an Italian mission led by Fondazione Lerici, in the framework of a UNESCO program. The site development is based on several actions, including multimedia and virtual representations of the temples obtained thanks to this 3D digitization project, to explain to visitors the site structure and the related religious rituals. The reality-based digital model obtained by the 3D survey is also an accurate metric documentation of the current status of the site. It

The photo-realistic 3D product finally needs to be visualized, e.g. for communication and presentation



Table 1. Laser scanner configurations planned for 3D data acquisition

represents a valuable source of information for scholars studying the Cham civilization through their architectural structures, giving them the possibility to analyze them in great detail on a PC, at a different time and without the need of traveling to Vietnam.


Planning
As is well known, several factors may affect the quality of the 3D data acquired by a range device. Equipment choices, logistics and environmental conditions such as temperature and humidity have to be considered in survey planning, especially when operating in the middle of a forest, as in this specific case. An accurate evaluation of such factors allows the 3D acquisition to be optimized, minimizing possible problems that can occur during the survey. Logistics and weather conditions become crucial especially if the survey has to be planned abroad, with no possibility of traveling back and forth to the lab, and little or no possibility of losing operating days to logistic delays (such as days or weeks lost in customs controls, typical when instrumentation is sent through a courier) or, in the field, to bad weather conditions.

Scan scale      Operating distance (m)    Sampling step (qualitative)    Sampling step (mm)
Framework       7-60                      Coarse                         8-16
Architecture    4-15                      Medium                         4-8
Details         -                         High                           1-2

The archaeological plan was examined in order to suggest a first optimized network of scan positions, trying both to minimize the acquisition time and to take into account all the morphological characteristics of the architectural examples.
Structure of the G Group
Our understanding of the My Son Sanctuary is underpinned by the work of the French archaeologist Henri Parmentier, who recorded the significance of the site through drawings and photographs taken immediately after its discovery by the French Army at the end of the 19th century. He reported that Group G is composed of 5 buildings, built around the second half of the 12th century on the top of a hill within the My Son area. The buildings have the following numbering:

The range sensing technology chosen for this project was Continuous Wave (CW) laser scanning with phase-shift detection. This is now implemented in several devices from the major manufacturers, including the relatively new Focus3D laser scanner from Faro, which was used in this project. This choice was made because it appears very suitable for short-to-medium ranges in terms of the trade-off between precision (around 2 mm standard deviation, tested on a planar target located 20 m from the instrument), working speed (1 million points per second max), equipment weight and size (5 kg of material fitting in a small bag, compliant with airline standards for hand luggage), and, last but not least, a cost definitely lower than other analogous products on the market.

G1 = the sanctuary (Kalan)
G2 = the gateway, miniature copy of the temple (Gopura)
G3 = the assembly hall (Mandapa)
G4 = the south building (Kosagrha)
G5 = pavilion for the foundation stone (Posa)
The 3D survey of the area was planned in three different steps. In the first one all the architectural structures were acquired, adapting the number of scans and the working distance set-up to the different levels of geometrical complexity of every single ruin. For the main temple (Kalan) the level of morphological complexity led to a multi-resolution approach in order to survey the whole structure, the different brick carvings and the sculpted decorations. In addition, the terrain morphology and the presence of vegetation were carefully taken into account.

In addition to the above-mentioned laser scanner from Faro, a Canon 5D Mark II digital camera was brought to the excavated area for a large photographic campaign, mainly devoted to collecting pictures suitable for texture mapping the 3D models generated from the laser scans. Part of the images were also used for a few SFM experiments on some of the ruins, but laser scanning was the main tool, being metrically reliable and more mature than SFM at the time this work was carried out (early 2011).

The sum of these factors led us to begin with the architectural survey rather than the DTM, in order to minimize the generation of possible alignment errors due to the sliding effect of the huge number of scans required to fill the great number of shadows in the DTM area. For this reason the first central block of the area was represented by the Kalan, whose closed and strongly three-dimensional geometrical shape was essential to define a point cloud alignment with an acceptable accuracy level. In addition, a sequence of DTM point clouds, aligned in the same reference system as the Kalan, was acquired, generating a first DTM reference area. Afterwards the remaining part of the DTM was scanned and aligned to the Kalan range maps. At the same time the 3D

Before leaving for the acquisition campaign, the scanner performances were accurately tested in the laboratory, verifying the data quality, reliability and ideal working distance. A similar performance test was repeated on the archaeological site, verifying the real behavior of the electronic and optical systems under high temperature and extreme humidity conditions, using the actual surfaces of the monument as test objects. Different instrument set-ups were then defined, associating a set of distances with the corresponding 3D scanner performances.

Figure 9. Structure of the G Group of temples in My Son: a) map of the G area drawn by the archaeologist Parmentier in the early 20th century (Stern, 1942); b) fisheye image taken from above during the 2011 survey. The ruins of the mandapa (G3) are visible in the upper part of the image, the posa (G5) on the right, the gopura (G2) in the center, and the footprint of the holy wall all around

the prompt and proactive collaboration of the local personnel involved in the site maintenance, who managed to cobble together structures apparently unsafe but actually very solid and functional to the purpose, like those shown in figure 10. In this way nearly every needed capturing position in the 3D space around the building was properly reached.

acquisition campaign of the other monuments was carried out, aligning the data and creating self-consistent point cloud models. Finally those data were aligned in a common reference system using the raw DTM representations of the buildings.
Three-dimensional data acquisition

The second step consisted of the DTM acquisition, creating a geometrical framework in order to locate all the architectural structures in a common reference system. For this reason a surface wider than the archaeological area was considered, in order to acquire part of the morphological terrain context. During the Kalan and DTM acquisition a raw alignment phase was also pursued, in order to verify the presence of gaps in the 3D survey. Thanks to this step, an integrative campaign was planned at the end of the first acquisition stage, scanning all the incomplete areas. The other architectural buildings presented a simpler geometry or fewer decorative portions than the Kalan; for this reason a simpler acquisition process was adopted, using only the medium resolution set-up, integrated with some special scans for better covering the worst preserved portions.

The survey of the G Area regarded both the 3D geometrical acquisition of five different architectural structures with associated findings and the 2D image acquisition for texture and environment documentation.
In this phase a dedicated 3D acquisition of the upper part of the Kalan was carried out, in order to scan all the hidden areas of this complex geometry. The scanner was positioned 7 meters above ground at the four corners of the iron structure covering the Temple, acquiring 4 high resolution scans of the whole architecture and the surrounding DTM area (positioning shown in fig. 10b).
A long sequence of architectural acquisitions was then carried out around the building and integrated with detailed ones for capturing the decorated basement. To avoid the shadow effects generated by the basement, an additional sequence of scans at 3 meters height was carried out (positioning shown in fig. 10d). Locating the laser scanner in the needed positions around the main temple (i.e. the tallest ruin of the group) was a crucial point for avoiding gaps in the final survey.

The laser scanner positions for acquiring the G group are shown in figure 11, with a higher concentration of scans close to the structures, in correspondence with highly occluded areas, and a coarser spatial distribution of scan positions in the open areas, where the scanning was finalized just for a low resolution survey of the terrain around the temples.

Such activity was made possible thanks to the small size and low weight of the chosen instrument, together with

Figure 10. Handmade structures arranged in the field by local workers for locating the laser scanner in the appropriate positions: a) mounting the platform on the top of the structure surrounding the Kalan; b) laser scanner located on the platform at 7 meters above the ruins; c) multi-section ladder for reaching the platform; d) structure for elevating the scanner to 3 m from the ground. During 3D acquisition the operator lies in the blind cone below the scanner in order to avoid the laser beam trajectory

m from the object to be scanned, as shown in figure 12a. As better explained in the next section, a specific post-processing for reducing the relevant measurement uncertainty was then developed, in order to obtain from the raw 3D data models like the one shown in figure 12b.

The last phase focused on the 3D acquisition of some archaeological artifacts that were found during the excavation of the G area and were then classified inside the store-room of the local museum. This step was planned both to store these important sources digitally and to create 3D models of decorations that could afterwards be repositioned on the virtual architectures. For this task a precise survey set-up was defined, in order to optimize the geometrical resolution coherently with the formal complexity of the sculpted finds.

An overview of the laser scanner resolution settings and the point clouds actually acquired is shown in table 2.
A photographic campaign was also carried out in addition to the laser scan survey. It was devoted to the acquisition of:

Although a phase-shift laser scanner is not the most suitable device for high resolution 3D acquisition, the absence of a large pool of instruments, due to the logistic constraints explained at the beginning of this chapter, forced us to use our laser scanner in an unconventional way, setting it up at the maximum reasonable resolution, between 1 and 2 mm, and working with the device at about 1

1. architectonic images for texturing projection purposes;
2. detailed images for the creation of seamless material pictures;


Figure 11. Map of the hill where the G Group is located within the My Son Area, with the scanner positions for acquiring the different structures highlighted by colored dots

Figure 12. Sculpted tympanum representing Krishna dancing on the snakes, originally at the entrance of the kalan: a) 3D laser scanning in the store room of the museum; b) reality-based model from the 3D data

3. panoramiic images too gather a believable repre-

4. few
f
image seets taken aroound four mo
onuments forr

sentation of the surrouunding environnment through the


stitching of multiple fissh-eye photoggraphies;

experimenting
e
g
techniques.
t

54

Structure

From

Mo
otion

(SFM))

LASER/LIDAR

Table 2. Number of point clouds acquired at different resolution levels (first three columns), and total number of 3D points acquired during the whole 3D survey of the G Group and the related decorations (last column)

Structure                                  Coarse  Medium  High   # points (x 10^6)
G1 (Kalan)                                 43      22      -      126
G2 (Portal)                                21      -       -      -
G3 (Assembly hall)                         15      -       -      -
G4 (South building)                        13      31      -      -
G5 (Pavilion for the foundation stone)     -       -       -      -
DTM                                        49      27      -      -
21 Finds                                   60      -       -      -
Total                                      56      79      86     226

Figure 13. High resolution capture of the Foundation stone through SFM: a) texturized 3D model measured through a sequence of 24 images shot around the artifact; b) mesh model of the central part of the stone with a small area highlighted in red; c) color-coded deviations of the SFM-acquired points from a best-fitting plane calculated on the red area of b), clearly showing the nearly 2 mm carving on the stone

The camera was kept at about 3 meters from the artifact, and the point cloud was computed with the open-source software developed by the Institut Géographique National (IGN) in Paris, using the Apero module for orienting the shots (Pierrot-Deseilligny & Cléry, 2011) and the MicMac module for generating the colored cloud of 3D points through image matching (www.micmac.ign.fr).

The main difficulty with these latter images was the presence of architectural elements inside dense vegetation that moved slightly in the wind, producing images that were difficult to match to each other, so that the SFM results were not always good.
However, in a few cases the SFM tests gave very good results, as for example the 3D capture of the G5 temple. This example was chosen for the presence of a Sanskrit inscription on the foundation stone, whose carving depth is in the order of a few millimeters. While the laser scanner measurement uncertainty, in the order of 2 to 4 millimeters (depending on the scanned material), made such tiny geometric detail unreadable, an appropriate SFM processing produced a very accurate detection of the carved inscription, as shown in figure 13.
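The deviation analysis shown in figure 13c, a best-fitting plane with color-coded point distances, can be sketched as follows (illustrative Python with synthetic data; the carving values in the figure come from the actual survey, not from this code):

```python
import numpy as np

def plane_deviations(points):
    """Fit a least-squares plane to an (N, 3) point set and return the
    signed distance of every point from that plane."""
    centroid = points.mean(axis=0)
    # The plane normal is the right singular vector associated with the
    # smallest singular value of the centered point set.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    return (points - centroid) @ normal

# Synthetic stand-in for the carved stone: a flat surface (units: mm)
# with a 2 mm deep groove in the middle band.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(1000, 2))
z = np.zeros(len(xy))
groove = (xy[:, 0] > 40) & (xy[:, 0] < 60)
z[groove] -= 2.0
pts = np.column_stack([xy, z])
dev = plane_deviations(pts)   # values to be color-coded, as in fig. 13c
```

Color-mapping these signed deviations over the mesh produces exactly the kind of visualization that makes millimetric carving legible.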

The result was then made metric by evaluating a scale factor with respect to the laser scanning of the same structure.
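A minimal sketch of how such a scale factor can be estimated, assuming a few corresponding points have been identified in both the SFM model and the laser scan (the coordinates below are hypothetical):

```python
import numpy as np
from itertools import combinations

def metric_scale(sfm_pts, laser_pts):
    """Scale factor bringing an arbitrarily scaled SFM model into the
    metric frame of a laser scan: ratio of mean pairwise distances
    computed over corresponding points."""
    def mean_pairwise(p):
        return np.mean([np.linalg.norm(a - b) for a, b in combinations(p, 2)])
    return mean_pairwise(laser_pts) / mean_pairwise(sfm_pts)

laser = np.array([[0, 0, 0], [1, 0, 0], [0, 2, 0], [0, 0, 3]], float)  # metres
sfm = laser * 0.37            # same shape, unknown model units
scale = metric_scale(sfm, laser)
```

Multiplying the SFM coordinates by this factor expresses the photogrammetric model in the same metric units as the scan.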
Digital Data Management

An Excel spreadsheet was created to keep track of the huge amount of 2D and 3D images, adding useful information such as date, size and scanner set-up. This allowed the laser scans to be managed easily even when they were not immediately processed. The table was integrated with the laser scanner positions used during the survey, mapped in figure 11.
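Such a tracking table can be sketched as a plain CSV log (the column names below are hypothetical illustrations, not the fields of the actual spreadsheet):

```python
import csv
import io

FIELDS = ["scan_id", "date", "station", "resolution", "points_millions"]

rows = [
    {"scan_id": "G1_s01", "date": "2011-03-14", "station": "A",
     "resolution": "high", "points_millions": 12.4},
    {"scan_id": "G1_s02", "date": "2011-03-14", "station": "B",
     "resolution": "medium", "points_millions": 5.1},
]

# Write the log (an in-memory buffer stands in for a file on disk).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)

# Reading it back allows quick filtering, e.g. all high-resolution
# scans still waiting to be processed.
buf.seek(0)
high_res = [r for r in csv.DictReader(buf) if r["resolution"] == "high"]
```

The value of such a log is precisely that scans acquired weeks earlier can be located and prioritized without opening the raw data.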

The result was generated by processing 24 images, 21 megapixels each, taken with a 20 mm lens.

Each sub-cloud was then meshed. The resulting high-resolution polygonal models presented both several topological errors, due to residual errors that survived the cleaning phase, and a considerable number of missing mesh portions, due to occlusions originating from the complex geometries involved. All these holes were closed through a manual identification process, choosing the best closing algorithm for each situation. This was needed because the holes differed in size, position within the model (flat area, edge, corner, etc.) and polygonal complexity of their borders. An automatic or semi-automatic approach would have risked neglecting these differences, generating unreliable mesh portions in the reality-based model. As a consequence the process was very long for irregular structures such as the most ruined buildings.
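As a minimal illustration of how the borders of such missing portions can be located before a closing algorithm is chosen: in a triangle mesh, an edge referenced by exactly one triangle lies on a boundary loop (a sketch, not the software actually used in the project):

```python
from collections import Counter

def boundary_edges(triangles):
    """Return the edges referenced by exactly one triangle: they form
    the boundary loops around holes (and the outer rim) of a mesh."""
    count = Counter()
    for a, b, c in triangles:
        for edge in ((a, b), (b, c), (c, a)):
            count[tuple(sorted(edge))] += 1
    return [edge for edge, n in count.items() if n == 1]

# Two triangles forming a quad: the shared diagonal is interior,
# the four outer edges are boundary.
tris = [(0, 1, 2), (0, 2, 3)]
border = boundary_edges(tris)
```

Grouping these edges into closed loops gives the hole borders whose size and shape then drive the choice of closing algorithm.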

These supports allowed the careful planning and management of the whole 3D scanning campaign, avoiding extreme data redundancy that would have affected the subsequent post-processing phase, given the possibly excessive amounts of data that a device capable of generating 1 million points/sec can easily produce.
Reality-based modeling
As in any other 3D modeling process based on 3D data acquired in the field, the very first step is data cleaning, which allows the deletion of unwanted 3D structures recorded during the laser scanning operations, such as trees, people possibly moving in front of the instrument, monuments not required in the survey and possible 3D acquisition artifacts. In particular, the most evident artifact generated by the scanner used in this campaign was the generation of non-existing points at the building edges, where the acquired surfaces were too tangential with respect to the laser beam, as shown in figure 14. For this reason, although some automatic filtering allowed this effect to be reduced, a considerable amount of manual preprocessing for deleting outliers was needed before starting the point cloud alignment process.
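A simple statistical filter of the kind used for such automatic pre-cleaning can be sketched as follows (neighbourhood size and threshold are illustrative assumptions, not the settings actually used):

```python
import numpy as np

def remove_isolated_points(points, k=4, factor=5.0):
    """Drop points whose mean distance to their k nearest neighbours is
    much larger than the cloud's typical spacing: spurious points
    generated at grazing incidence tend to be isolated in space."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    knn_mean = np.sort(d, axis=1)[:, :k].mean(axis=1)
    keep = knn_mean < factor * np.median(knn_mean)
    return points[keep]

rng = np.random.default_rng(1)
wall = rng.uniform(0, 1, size=(200, 3)) * [1.0, 1.0, 0.002]  # a dense flat wall
ghosts = np.array([[5.0, 5.0, 5.0], [6.0, -4.0, 2.0]])       # stray edge artifacts
cleaned = remove_isolated_points(np.vstack([wall, ghosts]))
```

Points belonging to dense surfaces survive such a filter, while isolated edge artifacts are dropped; the manual pass described above remains necessary for artifacts that sit close to real geometry.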

Figure 14. Tangential edge error in 3D point clouds: the red points represent the incorrect data with respect to the real ones (black-grey color)

Every cleaned scan was then aligned by means of the ICP algorithm implemented in the Leica Cyclone 3D processing software, in order to position the point clouds of each ruin in the same reference system. The resulting point clouds were then decimated at a 1 cm sampling step, leveling all the over-sampled portions of the architecture and lowering the amount of 3D data.
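Cyclone's ICP implementation is proprietary, but the rigid-alignment step iterated by any ICP variant (cf. Besl & McKay, 1992) can be sketched as follows, here with the point correspondences assumed known:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst,
    given point correspondences (the SVD step inside each ICP
    iteration; a full ICP re-estimates closest-point pairs and repeats)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

rng = np.random.default_rng(2)
scan_a = rng.normal(size=(50, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
scan_b = scan_a @ Rz.T + [0.5, -1.0, 2.0]     # rotated and shifted copy
R, t = rigid_align(scan_a, scan_b)
residual = float(np.abs(scan_a @ R.T + t - scan_b).max())
```

Because each iteration only refines an existing guess, ICP needs the rough manual pre-alignment that commercial packages ask the operator to provide.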

Figure 15. a) Point cloud model of the Kalan, cleaned and aligned in the same reference system; b) polygonal model of the Kalan with a decimated and watertight mesh

This stage allowed the building of the 1 cm resolution geometry of all five buildings in the G Area, a 10 cm resolution DTM of the hill where the G Area is located, and a set of polygonal models of the sculpted finds with a geometrical resolution of 2 mm.
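The fixed-sampling-step decimation used to produce these resolution levels can be approximated by a voxel-grid subsampling, keeping one representative point per cell (a sketch; the exact method of the commercial software is not documented here):

```python
import numpy as np

def decimate(points, step):
    """Voxel-grid decimation: keep one point (the cell centroid) per
    cubic cell of the given sampling step."""
    keys = np.floor(points / step).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.reshape(-1)
    counts = np.bincount(inv).astype(float)
    out = np.empty((counts.size, 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inv, weights=points[:, dim]) / counts
    return out

rng = np.random.default_rng(3)
dense = rng.uniform(0, 0.1, size=(5000, 3))   # a 10 cm cube, metres
thin = decimate(dense, step=0.01)             # 1 cm sampling step
```

The same routine run with a 10 cm or 2 mm step yields the DTM and finds resolutions quoted above.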

Each point cloud was subdivided into sub-units whose size was limited to 3 million points, in order to make the subsequent meshing step easier and more controllable. This subdivision did not follow a semantic logic, because the principal aim was simply the identification of areas suitable to be closed afterwards with a polygonal post-processing.
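One simple way to obtain such size-bounded, non-semantic sub-units is a recursive bisection along the longest bounding-box axis (an illustrative sketch, scaled down from the 3-million-point limit):

```python
import numpy as np

def split_cloud(points, max_points):
    """Recursively bisect a point cloud at the median of its longest
    bounding-box axis until every sub-unit holds at most max_points."""
    if len(points) <= max_points:
        return [points]
    axis = int(np.argmax(points.max(axis=0) - points.min(axis=0)))
    median = np.median(points[:, axis])
    left = points[points[:, axis] <= median]
    right = points[points[:, axis] > median]
    if len(left) == 0 or len(right) == 0:     # degenerate split, stop
        return [points]
    return split_cloud(left, max_points) + split_cloud(right, max_points)

rng = np.random.default_rng(4)
cloud = rng.uniform(size=(10_000, 3))
chunks = split_cloud(cloud, max_points=3_000)
```

Splitting at the median keeps the sub-units balanced, so each one can be meshed in a predictable amount of memory and time.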

At the end, different approaches were followed to texturize such reality-based models.

Figure 16. Reality-based models of all ruins in the G group obtained from 3D data generated by a laser scanner at 1 cm resolution and texturized with the actual images of the buildings: a) G1, the main temple; b) G2, the entrance portal to the holy area; c) G3, the assembly hall; d) G4, the south building; e) G5, the kiosk of the foundation stone

The approach followed for the acquisition and modeling of the sculpted findings was in principle similar to that employed for the architectural structures, with a change in the geometrical resolution settings of the Faro Focus3D scanner. Such equipment showed a fairly good capability to reproduce thin details in the final polygonal model, even with a non-negligible amount of measurement uncertainty superimposed on the true 3D data. For this reason an optimized post-processing procedure was devised to preserve such useful geometrical information. It was based on a light smoothing filter, strong enough to significantly reduce the measurement noise, but not so strong as to wipe out the tiniest details. A set of specific tests on this point was carried out in order to establish the optimal filtering. Such a set-up was found by comparing models generated at different levels of filtering with the model generated from the raw data, evaluating
As shown in Figure 16, the models representing the worst conserved buildings, like G2 and G4, were texturized with a seamless shading pattern originated from real images, as the most practical way to achieve a believable result (Fig. 16b and 16d respectively). For the Kalan temple and for the well-preserved architectures, such as G3 and G5 (Fig. 16c and 16e respectively), most of the texturing was done with the actual images of the ruins projected onto the model, with the integration of seamless shading patterns for the less characterized components. In the latter case such integration was particularly extended, texturing with projected images the stela and the lateral walls, with a uniform seamless pattern the central stone basement and the sand surrounding it, and with non-uniform seamless patterns the upper faces of the joints between the tiles.

Figure 17. Reality-based models of eight of the 21 decorations found during the G Group excavations and acquired in the My Son museum. All these decorations have been acquired with a sampling step between 1 and 2 mm, and post-processed in order to strongly reduce the significant measurement noise but not the tiniest details of their shapes. The visual representation in this rendering has been made with a seamless texture

simultaneously the standard deviation of error (indicating the amount of noise reduction) and the chromatic mapping of the errors between the first and the second model. The optimal value was chosen as the maximum filtering not involving a significant shape change, visualized in the chromatic map as an accumulation of errors at the specific points on the model characterized by the tiniest details.
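The trade-off underlying this choice, noise reduction versus loss of the finest carving, can be made visible on a toy 1-D profile where the true shape is known (the synthetic data and noise level are assumptions; the chapter's actual evaluation compared filtered models against the raw scan, since ground truth is unavailable in practice):

```python
import numpy as np

def smooth(values, iterations):
    """Light Laplacian smoothing of a 1-D profile (endpoints fixed)."""
    v = values.copy()
    for _ in range(iterations):
        v[1:-1] = (v[:-2] + 2.0 * v[1:-1] + v[2:]) / 4.0
    return v

rng = np.random.default_rng(5)
x = np.linspace(0.0, 2.0 * np.pi, 400)
true = np.sin(3.0 * x)                        # the "real" carved shape
raw = true + rng.normal(0.0, 0.05, x.size)    # measurement noise added

# Too little filtering leaves the noise; too much wipes out the shape
# itself, so the error against the true surface has a minimum between.
errors = {it: float(np.std(smooth(raw, it) - true)) for it in (0, 5, 500)}
```

The intermediate filtering level reduces the error, while an excessive one increases it again, which is exactly the behaviour the chromatic error maps were used to detect.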

The models obtained from this process gave a set of reality-based mesh models optimized for preserving thin geometric details, finalized for the decoration of the reconstructed geometrical models, as shown in figure 17.

Finally, in addition to the single texturized models shown in figure 16, a 3D model of the whole scene was obtained, digitally simulating the actual environment where the G Group is located. For this purpose all the reality-based models were exported to a rendering platform, together with the low resolution model of the DTM, some library models of trees similar to the ones growing in that part of the forest, and a spherical panorama captured from the top of the structure covering the kalan, re-projected on a spherical surface surrounding the area.

Apart from the synthetic trees, all the models were already coherent in terms of size and positioning, coming as they did from the same metric pipeline and being defined in the same coordinate system associated with the aligned point cloud originated by the laser scanning. The resulting scene was then used for showing the area to the visitors of the archaeological site through an animation. In the following figure 18 a frame of such animation shows the richness of visual information associated with this digital artifact which, at the same time, contains a rich level of detailed and metric geometrical information.

Figure 18. Virtual reconstruction of the G Group and its surrounding panorama starting from the reality-based models acquired through laser scanning and digital images

2.2.8 CONCLUSIONS

This chapter reported an overview of the current optical 3D measurement sensors and techniques used for terrestrial 3D modelling, and a practical application of 3D acquisition in an archaeological area. The last 15 years of applications have made clear that reality-based 3D models are very useful in many fields, but the related processing pipeline is still far from being optimal, with possible improvements and open research issues in many steps.

First of all, automation in 3D data processing is one of the most important issues influencing efficiency, time and production costs. At present different research solutions and commercial packages have turned towards semi-automated (interactive) approaches, where the human capacity in data interpretation is paired with the speed and precision of computer algorithms. Indeed the success of full automation in image understanding or 3D point cloud processing depends on many factors and is still a hot research topic. The progress is promising, but the acceptance of fully automated procedures, judged in terms of handled datasets and accuracy of the final 3D results, depends on the quality specifications of the user and the final use of the produced 3D model. A good level of automation would also make possible the development of new tools for non-expert users. These would be particularly useful since 3D capturing and modelling has been demonstrated to be an interdisciplinary task where non-technical end-users (archaeologists, architects, designers, art historians, etc.) may need to interact with sophisticated technologies through clear protocols and user-friendly packages.

Sensor fusion has been experimentally demonstrated to be useful for collecting as many features as possible, allowing the exploitation of each range sensing technology's capability. Currently available packages allow the creation of different geometric levels of detail (LoD) at model level (i.e. at the end of the modelling pipeline), while this could also be performed at data level with the development of novel packages capable of dealing simultaneously with different sensors and data. Such a novel feature should also allow the inclusion of new sensors and 3D data in the processing pipeline, taking into account their metrological characteristics.

For this reason the adoption of standards for comparing 3D sensing technologies would also help. At present not even a common terminology exists for comparing sensor performances.

A smooth connection between a database and reality-based 3D models is another issue that has to be faced when the model becomes a portal for accessing an informative system associated with the modelled object. Although some experimental systems have been developed, no simple tools suitable for non-expert users are available yet.

The latter open issue is connected with the problem of remotely visualizing large 3D models, both for navigation and data access. Although 3D navigation through the internet has been attempted both with local rendering of downloaded 3D models (possibly large initial time lag and poor data security) and with remote rendering and streaming of a sequence of rendered frames to the client (good security but poor real-time navigation), a complete and reliable user-oriented solution is still lacking.

Acknowledgments

The author would like to thank Fabio Remondino and J.-Angelo Beraldin for many useful discussions on the theoretical part of the chapter, Michele Russo for the huge work on the archaeological case study regarding data acquisition and geometric processing of the G group models, Davide Angheleddu for the texture mapping, and Livio De Luca for the Apero and MicMac processing on the images shown in figure 12.

References

ABDEL-AZIZ, Y.I. & KARARA, H.M. 1971. Direct linear transformation from comparator coordinates into object space coordinates in close-range photogrammetry, Proc. of the Symposium on Close-Range Photogrammetry, Falls Church (VA) USA, pp. 1-18.

AGARWAL, S.; SNAVELY, N.; SIMON, I.; SEITZ, S.; SZELISKI, R. 2009. Building Rome in a Day, Proceedings of the IEEE 12th International Conference on Computer Vision, pp. 72-79.

Boissonnat and M. Teillaud, Eds. Springer-Verlag,


Mathematics and Visualization, pp. 231-276.

AMENTA, N. & BERN, M. 1999. Surface reconstruction by


Voronoi filtering, Discrete and Computational
Geometry Vol. 22, No. 4, pp. 481-504.

CHEN, Y. & MEDIONI, G. 1992. Object modeling by


registration of multiple range images, Image and
Vision Computing, Vol. 10-3, pp. 145-155.
CIGNONI, P.; MONTANI, C. & SCOPIGNO, R. 1998. A
comparison of mesh simplification algorithms.
Computers & Graphics Vol. 22, No. 1, pp. 37-54.

ASTM International, 2006. Committee E57 on 3D


Imaging Systems; West Conshohocken, PA, USA.
BARIBEAU, R. & RIOUX, M. 1991. Influence of speckle on
laser range finders, Applied Optics, Vol. 30, No. 20,
pp. 2873-2878.
BERALDIN, J.-A. 2004. Integration of laser scanning and
close-range photogrammetry the last decade and
beyond, Proc. IAPRS, Vol.35, No.5, pp. 972-983.
BERGER, M.; LEVINE, J.A.; NONATO, L.G.; TAUBIN, G.;
SILVA, C.T. 2011. An End-to-End Framework for
Evaluating Surface Reconstruction, SCI Technical
Report, No. UUSCI-2011-001, SCI Institute,
University of Utah. Available from: www.sci.utah.edu/publications/SCITechReports/UUSCI-2011-001.pdf
BERNARDINI, F.; MITTLEMAN, J.; RUSHMEIER, H.; SILVA,
C. & TAUBIN, G. 1999. The ball-pivoting algorithm
for surface reconstruction. IEEE Transactions on
Visualization and Computer Graphics, Vol. 5, pp.
349-359.
BERNARDINI, F.; RUSHMEIER, H.; MARTIN, I.M.;
MITTLEMAN, J.; TAUBIN, G. 2002a. Building a
digital model of Michelangelo's Florentine Pietà,
IEEE Computer Graphics Application, vol. 22, pp.
59-67.
BERNARDINI, F. & RUSHMEIER, H. 2002b. The 3D Model
Acquisition Pipeline, Computer Graphics Forum,
NCC Blackwell, Vol. 21(2), pp. 149-172.
BESL, P.J. & MCKAY, N. 1992. A Method for
Registration of 3-D Shapes, IEEE Trans. On Pattern
Analysis and Machine Intelligence, Vol. 14-2, pp.
239-256.
BLAIS, F. 2004. Review of 20 Years of Range Sensor
Development, Journal of Electronic Imaging, Vol. 13-1, pp. 231-240.
BOEHLER, W.; BORDAS, V.M. & MARBS, A. 2003.
Investigating laser scanner accuracy. Proceedings of
the XIXth CIPA Symposium, pp. 696-702.
BROWN, B. & RUSINKIEWICZ, S. 2007. Global non-rigid
alignment of 3-D scans. ACM Transactions on
Graphics, Vol. 26, No. 3, Article 21.
CALLIERI, M.; CHICA, A.; DELLEPIANE, M.; BESORA, I.;
CORSINI, M.; MOYÉS, J.; RANZUGLIA, G.; SCOPIGNO,
R.; BRUNET, P. 2011. Multiscale acquisition and
presentation of very large artifacts: The case of
Portalada, Journal of Computing and Cultural
Heritage, Vol. 3, No. 4, Article number 14.
CARR, J.C.; BEATSON, R.K.; CHERRIE, J.B.; MITCHELL,
T.J.; FRIGHT, W.R.; MCCALLUM, B.C. & EVANS, T.R.
2001. Reconstruction and representation of 3D objects
with radial basis functions, Proc. SIGGRAPH, pages
67-76.
CAZALS, F. & GIESEN, J. 2006. Delaunay triangulation
based surface reconstruction. In: Effective Computational Geometry for Curves and Surfaces, J.-D.

CORSINI, M.; DELLEPIANE, M.; PONCHIO, F. & SCOPIGNO,


R. 2009. Image-to-geometry registration: a mutual
information method exploiting illumination-related
geometric properties. Computer Graphics Forum,
Vol. 28(7), pp. 1755-1764.
CURLESS, B. & LEVOY M. 1996. A Volumetric Method
for Building Complex Models from Range Images,
Proc. SIGGRAPH96, pp. 303-312.
DAVIS, J.; MARSCHNER, S.R.; GARR, M. & LEVOY, M.
2002. Filling holes in complex surfaces using
volumetric diffusion. Proc. 3DPVT, pp. 428-438.
DEBEVEC, P.; TCHOU, C.; GARDNER, A.; HAWKINS, T.;
POULLIS, C.; STUMPFEL, J.; JONES, A.; YUN, N.;
EINARSSON, P.; LUNDGREN, T.; FAJARDO, M. and
MARTINEZ, P. 2004. Estimating surface reflectance
properties of a complex scene under captured natural
illumination. USC ICT Technical Report ICT-TR-06.
DENG, F. 2011. Registration between Multiple Laser
Scanner Data Sets, Laser Scanning, Theory and
Applications, Chau-Chang Wang (Ed.), ISBN: 978-953-307-205-0, InTech.
DIETRICH, A.; GOBBETTI, A. & YOON S.-E. 2007.
Massive-model rendering techniques: a tutorial.
Computer graphics and applications, Vol. 27, No. 6,
pp. 20-34.
GAGNON, H.; SOUCY, M.; BERGEVIN, R. & LAURENDEAU,
D. 1994. Registration of multiple range views for
automatic 3D model building, Proc. of IEEE Conf. on
CVPR, pp. 581-586.
GODIN, G.; BERALDIN, J.-A.; RIOUX, M.; LEVOY, M. &
COURNOYER, L. 2001a. An Assessment of Laser
Range Measurement of Marble Surfaces, Proc. of the
5th Conference on Optical 3-D Measurement
Techniques, Vienna, Austria, pp. 49-56.
GODIN, G.; LAURENDEAU, D.; BERGEVIN, R. 2001b. A
Method for the Registration of Attributed Range
Images. Proc. of Third Int. Conference on 3D Digital
Imaging and Modeling (3DIM2001), pp. 179-186.
GODIN, G; BORGEAT, L.; BERALDIN, J.-A.; BLAIS, F.
2010. Issues in acquiring, processing and visualizing
large and detailed 3D models, Proceeding of the 44th
Annual Conference on Information Sciences and
Systems (CISS 2010), pp. 1-6.
GUIDI, G.; BERALDIN, J.-A.; CIOFI, S. & ATZENI, C. 2003.
Fusion of range camera and photogrammetry: a
systematic procedure for improving 3D models metric
accuracy, IEEE Trans. on Systems Man and
Cybernetics Part B, Vol. 33-4, pp. 667-676.
GUIDI, G.; BERALDIN, J.-A. & ATZENI, C. 2004. High
accuracy 3D modeling of Cultural Heritage: the

KOBBELT, L. & BOTSCH, M. 2004. A survey of


point-based techniques in computer graphics.
Computers and Graphics, Vol. 28, No. 6, pp. 801-814.

digitizing of Donatello's Maddalena, IEEE


Transactions on Image Processing, Vol. 13-3, pp.
370-380.
GUIDI, G.; FRISCHER, B.; RUSSO, M.; SPINETTI, A.;
CAROSSO, L. & MICOLI, L.L. 2005. Three-dimensional acquisition of large and detailed cultural
heritage objects, Machine Vision and Applications,
Special issue on 3D acquisition technology for
cultural heritage, Vol. 17-6, pp. 347-426.

LENSCH, H.; HEIDRICH, W. and SEIDEL, H. 2000.


Automated texture registration and stitching for real
world models. Proc. 8th Pacific Graph. Conf. CG and
Application, pp. 317-327.
LENSCH, H.P.A.; KAUTZ, J.; GOESELE, M.; HEIDRICH, W. & SEIDEL, H.-P. 2003. Image-based reconstruction of spatial appearance and geometric detail, ACM Transactions on Graphics, Vol. 22, No. 2, pp. 234-257.

GUIDI, G.; REMONDINO, F.; MORLANDO, G.; DEL MASTIO,


A.; UCCHEDDU, F.; PELAGOTTI, A. 2007. Performance evaluation of a low cost active sensor for
cultural heritage documentation, VIII Conference on
Optical 3D Measurement Techniques, Ed.
Gruen/Kahmen, Vol. 2, Zurich, Switzerland, pp. 59-69.

LEVOY, M.; PULLI, K.; CURLESS, B.; RUSINKIEWICZ, S.;


KOLLER, D.; PEREIRA, L.; GINZTON, M.; ANDERSON,
S.; DAVIS, J.; GINSBERG, J.; SHADE, J.; FULK, D. 2000.
The Digital Michelangelo Project: 3D Scanning of
Large Statues, SIGGRAPH00, pp. 131-144.

GUIDI, G. & BIANCHINI, C. 2007. TOF laser scanner characterization for low-range applications. Proceedings of Videometrics IX, SPIE Electronic Imaging, San Jose, CA, USA, 29-30 January 2007; SPIE: Bellingham WA, USA; pp. 649109.1-649109.11.

LI, J.; GUO, Y.; ZHU, J.; LIN X.; XIN Y.; DUAN K. & TANG
Q. 2007. Large depth of view portable three
dimensional laser scanner and its segmental
calibration for robot vision, Optics and Laser in
Engineering, Vol. 45, pp. 1077-1087.

GUIDI, G.; REMONDINO, F.; RUSSO, M.; MENNA, F.; RIZZI,


A. & ERCOLI, S. 2009a. A multi-resolution
methodology for the 3D modeling of large and
complex archaeological areas, International Journal
of Architectural Computing, Vol. 7-1, pp. 40-55.

MACKINNON, D.; BERALDIN, J.-A.; COURNOYER, L.;


BLAIS, F. 2008. Evaluating Laser Spot Range Scanner
Lateral Resolution in 3D Metrology. Proceedings
of the 21st Annual IS&T/SPIE Symposium on
Electronic Imaging. San Jose. January 18-22,
2008.

GUIDI, G.; SPINETTI, A.; CAROSSO, L. & ATZENI, C.


2009b. Digital three-dimensional modelling of
Donatello's David by frequency modulated laser
radar, Studies in Conservation, Vol. 54, pp. 3-11.

OHTAKE, Y.; BELYAEV, A.; ALEXA, M.; TURK G. &


SEIDEL, H.P. 2003. Multi-level partition of Unity
Implicits, ACM Trans. on Graphics, Vol. 22, No. 3,
pp. 463-470.

GUIDI, G.; RUSSO, M.; MAGRASSI, G. & BORDEGONI, M.


2010. Performance Evaluation of Triangulation
Based Range Sensors, Sensors, Vol. 10-8, pp. 7192-7215.
GUIDI, G.; RUSSO, M.; MAGRASSI, G.; BORDEGONI, M.
2011. Low cost characterization of TOF range sensors
resolution, Proc. IS&T/SPIE Electronic Imaging, Vol.
7864, pp. D0-D10.

PEGGS, G.N.; MAROPOULOS, P.G.; HUGHES, E.B.;


FORBES, A.B.; ROBSON, S.; ZIEBART, M.;
MURALIKRISHNAN, B. 2009. Recent developments in
large-scale dimensional metrology, Journal of
Engineering Manufacture, Vol. 223, No. 6, pp. 571-595.

HEALEY, G. & BINFORD, T.O. 1987. Local shape from


specularity. Proc. ICCV, London, UK.

PETROV, V.; KAZARINOV, A.; CHERNIAVSKY, A.;


VOROBEY, I.; KRIKALEV, S.; MIKRYUKOV, P. 2006.
Checking of large deployable reflector geometry,
Proc. EuCap 2006 - European Space Agency Special
Publication, Vol. ESA SP-626, 4p.

HIEP, V.H.; KERIVEN, R.; LABATUT, P.; PONS, J.-P. 2009.


Towards high-resolution large-scale multi-view
stereo, Proceedings of IEEE Conference on CVPR,
pp. 1430-1437.
HIRSCHMULLER, H. 2008. Stereo processing by semi-global matching and mutual information. IEEE Trans. Patt. Anal. Mach. Intell., Vol. 30, pp. 328-341.

PIERCE, J. 2007. Wider view, Engineer, Volume 293,


Issue 7735, Pages 36-40.
PIERROT-DESEILLIGNY, M. & PAPARODITIS, N. 2006. A
Multiresolution and Optimization-Based Image
Matching Approach: An Application to Surface
Reconstruction from SPOT5-HRS Stereo Imagery.
ISPRS Conf. Topographic Mapping From Space, Vol.
36 1/W41.

HOPPE, H.; DEROSE, T.; DUCHAMP, T.; MCDONALD, J. &


STUETZLE, W. 1992. Surface reconstruction from
unorganized points, SIGGRAPH92, Vol. 26, No. 2,
pp. 71-78.
HOPPE, H. 1996. Progressive meshes, SIGGRAPH96,
pages 99-108.
JCGM, 2008. International vocabulary of metrology
Basic and general concepts and associated terms
(VIM), Bureau International des Poids et Mesures
(BIPM), France.

PIERROT-DESEILLIGNY, M. & CLÉRY, I. 2011. APERO, an Open Source Bundle Adjustment Software for Automatic Calibration and Orientation of a Set of Images. Proceedings of the ISPRS Commission V Symposium, Image Engineering and Vision Metrology, Trento, Italy, 2-4 March 2011.

STAMOS, I.; LIU, L.; CHEN, C.; WOLBERG, G.; YU, G.;
ZOKAI, S. 2008. Integrating automated range
registration with multiview geometry for the
photorealistic modelling of large-scale scenes. Int. J.
Comput. Vis., Vol. 78, pp. 237-260.

REINHARD, E.; WARD, G.; PATTANAIK, S. & DEBEVEC, P.


2005. High dynamic range imaging: acquisition,
display and image-based lighting. Morgan Kaufmann
Publishers.
RIEGL 2010. VZ400 Terrestrial Laser Scanner with online
waveform
processing,
Data
Sheet,
www.riegl.com/uploads/tx_pxpriegldownloads/10_Da
taSheet_VZ400_20-09-2010.pdf

STERN, P. 1942. L'art du Champa (ancien Annam) et son évolution, Les Frères Douladoure and Adrien Maisonneuve, Toulouse and Paris.

RUSINKIEWICZ, S. & LEVOY, M. 2001. Efficient variants


of the ICP algorithm, Proceedings of 3DIM2001,
IEEE Computer Society, pp. 145-152.

STUMPFEL, J.; TCHOU, C.; YUN, N.; MARTINEZ, P.;


HAWKINS, T.; JONES, A.; EMERSON, B.; DEBEVEC, P.
2003. Digital reunification of the Parthenon and its
sculptures. Proceedings of VAST, pp. 41-50.

SAGAWA, R. & IKEUCHI, K. 2008. Hole filling of a 3D


model by flipping signs of a signed distance field in
adaptive resolution, IEEE Trans. PAMI, Vol. 30, No.
4, pp. 686-699.

STURM, P.; RAMALINGAM, S.; TARDIF, J.-P.; GASPARINI,


S.; BARRETO, J. 2011. Camera models and
fundamental concepts used in geometric Computer
Vision. Foundations and Trends in Computer
Graphics and Vision, Vol. 6, Nos. 1-2, pp. 1-183.

SALVI, J.; MATABOSCH, C.; FOFI, D. & FOREST, J. 2007. A


review of recent range image registration methods
with accuracy evaluation. Image and Vision
Computing, Vol. 25, No. 5, pp. 578-596.

TSAI, R.Y. 1986. An efficient and accurate camera


calibration technique for 3D machine vision, Proc.
CVPR 1986, pp. 364-374.

SCHARSTEIN, D. & SZELISKI, R. 2002. A taxonomy and


evaluation of dense two-frame stereo correspondence
algorithms. Intl. Journal of Computer Vision, Vol. 47,
pp. 7-42.

TURK, G. & LEVOY, M. 1994. Zippered polygon meshes


from range images, Proceedings of the ACM
SIGGRAPH Conference on Computer Graphics, pp.
311-318.

SHAN, J. & TOTH, C. 2008. Topographic Laser Ranging


and Scanning: Principles and Processing, CRC, p.
590.

UMEDA, K.; SHINOZAKI, M.; GODIN, G. & RIOUX. M.


2005. Correction of color information of a 3D model
using a range intensity image, Proc. of 3DIM2005,
pp. 229-236.

SHANNON, C.E. 1949. Communication in the presence of


noise, Proc. Institute of Radio Engineers, vol. 37, no.
1, pp. 10-21, Jan. 1949. Reprint as classic paper in:
Proc. IEEE, Vol. 86, No. 2, (Feb 1998).

VOSSELMAN, G. & MAAS, H.-G. 2010. Airborne and


Terrestrial Laser Scanning, CRC, p. 318.

SKOLNIK M.I. 1990. Radar Handbook, chapt. 14,


McGraw-Hill.

VRUBEL, A.; BELLON, O.R.P.; SILVA, L. 2009. A 3D


reconstruction pipeline for digital preservation,
Proceedings of IEEE Conference on CVPR, pp. 2687-2694.

SOUCY, M. & LAURENDEAU, D. 1995. A general surface


approach to the integration of a set of range views,
IEEE Trans. on PAMI, Vol. 17, No. 4, pp. 344-358.


3 PHOTOGRAMMETRY

3.1 PHOTOGRAMMETRY: THEORY


F. REMONDINO
Aerial digital camera systems [Sandau, R., 2009] feature frame sensors or pushbroom line scanners (linear arrays). Colour images are acquired either with a Bayer filter option or using multiple cameras/lines, each recording a single spectral band, then registering and superimposing the single images (generally RGB + NIR). Among the available aerial platforms, particular interest has been devoted to Unmanned Aerial Vehicles (UAV), i.e. unmanned remotely piloted platforms which can also fly in an autonomous mode, using integrated GNSS/INS sensors. A digital camera (consumer or SLR, according to the payload of the platform), or even a small laser scanner, is used to acquire images which are then processed integrating aerial and terrestrial algorithms [see also chapter XXX].

3.1.1 PASSIVE SENSORS FOR IMAGE-BASED 3D MODELLING TECHNIQUES
Passive sensors like digital cameras deliver 2D image
data which need to be afterwards transformed into 3D
information (Remondino and El-Hakim, 2006). Normally
at least two images are required and 3D data can be
derived using perspective or projective geometry
formulations (Gruen & Huang, 2001; Sturm et al., 2011).
Images can be acquired using terrestrial, aerial or satellite
sensors according to the applications and needed scale.
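In symbols, the perspective relationship used to turn image measurements into 3D coordinates is expressed by the collinearity equations. One common textbook convention is sketched below (image point (x, y), principal point (x_0, y_0), principal distance c, projection centre (X_0, Y_0, Z_0), rotation matrix R = (r_ij)); sign and rotation conventions vary between formulations:

```latex
x = x_0 - c\,\frac{r_{11}(X - X_0) + r_{21}(Y - Y_0) + r_{31}(Z - Z_0)}{r_{13}(X - X_0) + r_{23}(Y - Y_0) + r_{33}(Z - Z_0)}
\qquad
y = y_0 - c\,\frac{r_{12}(X - X_0) + r_{22}(Y - Y_0) + r_{32}(Z - Z_0)}{r_{13}(X - X_0) + r_{23}(Y - Y_0) + r_{33}(Z - Z_0)}
```

Each measured image point contributes two such equations; with at least two images of the same object point, the intersection of the corresponding rays yields its 3D coordinates.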
Terrestrial digital cameras
Terrestrial cameras come in many different forms and formats: single CCD/CMOS sensor, frame, linear, multiple heads, SLR-type, industrial, consumer, high-speed, panoramic head, still-video, etc. Consumer terrestrial cameras have at least 10-15 Megapixels at a very low price, while high-end digital cameras feature even more than 40 Megapixel sensors. Mobile phone cameras have up to 5 Megapixels and can even be used for 3D purposes (Akca & Gruen, 2009). Panoramic linear array cameras are able to deliver very high resolution images with great metric performance (Luhmann & Tecklenburg, 2004; Parian & Gruen, 2004). The high cost of these sensors limits their market, and thus panoramic images are normally generated by stitching together a set of partly overlapping images acquired from a unique point of view with a consumer or SLR digital camera rotated around its perspective centre. This easy and low-cost solution allows the acquisition of almost Gigapixel images with great potential not only for visual needs (e.g., Google Street View, 1001 Wonders, etc.), but also for metric applications and 3D modelling purposes (Fangi, 2007; Barazzetti et al., 2010).

Satellite platforms and sensors


The number of existing and planned imaging satellite
sensors is growing, with very interesting prospects for
the near future, in particular for very high resolution and
SAR sensors. Optical satellite imaging still depends on
cloud coverage but large archives are available, often
with stereo-pairs for Geomatics and archaeological
applications.

3.1.2 PHOTOGRAMMETRY
Photogrammetry (Mikhail et al., 2001; Luhmann et al.,
2007) is the most well-known and important image-based
technique which allow the derivation of accurate, metric
and semantic information from photographs (images).
Photogrammetry thus turns 2D image data into 3D data
(like digital 3D models) rigorously establishing the
geometric relationship between the acquired images and
the scene as surveyed at the time of the imaging event.
Photogrammetry can be done using underwater,
terrestrial, aerial or satellite imaging sensors. Generally
the term Remote Sensing is more associated to satellite
imagery (see chapter XXX) and their use for land
classification and analysis or changes detection (i.e. no
geometric processing). The photogrammetric method

Aerial imaging sensors


Almost ten years after the introduction into the market of
the first digital large format aerial camera, nowadays we
have a great variety of aerial digital sensors which are
generally classified as small, medium and large format
65

3D MODELING IN ARCHAEOLOGY AND CULTURAL HERITAGE

Figure 1. The collinearity principle established between the camera projection centre, a point in the image and the corresponding point in the object space (left). The multi-image concept, where the 3D object can be reconstructed using multiple collinearity rays between corresponding image points (right)

generally employs a minimum of two images of the same static scene or object acquired from different points of view. Similar to human vision, if an object is seen in at least two images, the different relative positions of the object in the images (the so-called parallaxes) allow a stereoscopic view and the derivation of 3D information of the scene seen in the overlapping area of the images.

Photogrammetry is used in many fields: traditional mapping, structure monitoring, 3D city modelling, the video game and movie industries, industrial inspection, heritage documentation and the medical field. Photogrammetry was always considered a manual and time-consuming procedure, but in the last decade, thanks to the developments achieved by the Computer Vision community, great improvements have been made and nowadays many fully automated procedures are available. When the project's goal is the recovery of a complete, detailed, precise and reliable 3D model, some user interaction in the modelling pipeline is still mandatory, in particular for geo-referencing and quality control. Thus photogrammetry does not aim at the full automation of the image processing, but always has as its first goal the recovery of metric and accurate results. On the other hand, for applications needing 3D models for simple visualization or Virtual Reality (VR) uses, fully automated 3D modelling procedures can also be adopted (Vergauwen & Van Gool, 2006; Snavely et al., 2008).

The advantages of photogrammetry and the image-based approach lie in the fact that (i) images contain all the information required for 3D reconstruction as well as for accurate and photo-realistic documentation (geometry and texture); (ii) photogrammetric instruments (cameras and software) are generally cheap, very portable, easy to use and with very high accuracy potential; (iii) an object can be reconstructed even if it has disappeared or considerably changed, using archived images (Gruen et al., 2004). However, considerable experience is required to derive accurate and detailed 3D models from images. This has greatly limited the use of photogrammetry in favour of the more powerful active 3D sensors (see chapter XXX), which allow the easy derivation of dense and detailed 3D point clouds with no user processing.

3.1.3 BASIC PRINCIPLES OF THE PHOTOGRAMMETRIC TECHNIQUE

The basic principle of the photogrammetric processing is the use of multiple images (at least two) and the collinearity principle (Fig. 1). Such a principle establishes the relationship between image and object space, defining a straight line between the camera perspective centre, the image point P(x, y) and the object point P(X, Y, Z). The collinearity model is formulated as:

x = x0 - f · [r11(X - X0) + r21(Y - Y0) + r31(Z - Z0)] / [r13(X - X0) + r23(Y - Y0) + r33(Z - Z0)]
                                                                                               (1)
y = y0 - f · [r12(X - X0) + r22(Y - Y0) + r32(Z - Z0)] / [r13(X - X0) + r23(Y - Y0) + r33(Z - Z0)]

with:
f = camera constant or focal length (interior orientation, IO, parameter);
x0, y0 = principal point of the sensor (interior orientation, IO, parameters);
X0, Y0, Z0 = position of the perspective centre (exterior orientation, EO, parameters);
r11, r12, ..., r33 = elements of the rotation matrix (exterior orientation, EO, parameters);
x, y = 2D image coordinates (tie points);
X, Y, Z = 3D object coordinates.
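The collinearity equations (Eq. 1) can be sketched in a few lines of code. The omega-phi-kappa rotation convention used below is an assumption (several conventions exist in practice), and all numeric values are illustrative:

```python
import math

def rot_omega_phi_kappa(omega, phi, kappa):
    """Rotation matrix R = R_x(omega) * R_y(phi) * R_z(kappa), one common
    photogrammetric convention (assumed here; angles in radians)."""
    so, co = math.sin(omega), math.cos(omega)
    sp, cp = math.sin(phi), math.cos(phi)
    sk, ck = math.sin(kappa), math.cos(kappa)
    return [
        [cp * ck,                -cp * sk,                sp],
        [co * sk + so * sp * ck,  co * ck - so * sp * sk, -so * cp],
        [so * sk - co * sp * ck,  so * ck + co * sp * sk,  co * cp],
    ]

def collinearity_project(Xo, Yo, Zo, R, f, x0, y0, X, Y, Z):
    """Eq. (1): image coordinates of the object point (X, Y, Z)."""
    dX, dY, dZ = X - Xo, Y - Yo, Z - Zo
    u = R[0][0] * dX + R[1][0] * dY + R[2][0] * dZ   # r11, r21, r31
    v = R[0][1] * dX + R[1][1] * dY + R[2][1] * dZ   # r12, r22, r32
    w = R[0][2] * dX + R[1][2] * dY + R[2][2] * dZ   # r13, r23, r33
    return x0 - f * u / w, y0 - f * v / w

# A nadir camera 100 m above the origin: with zero rotations the point
# directly below the perspective centre projects to the principal point.
R = rot_omega_phi_kappa(0.0, 0.0, 0.0)
x, y = collinearity_project(0, 0, 100, R, 0.05, 0, 0, 0, 0, 0)
```

The same function applied to every tie point, for every image in which that point appears, produces exactly the system of equations solved by the bundle adjustment described below.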

PHOTOGRAMMETRY

All measurements performed on digital images (x, y) refer to a pixel coordinate system, while the collinearity equations refer to the metric image coordinate system. The conversion from pixel to image coordinates is performed with an affine transformation, knowing the sensor dimensions and pixel size.

For each image point measured in at least two images (generally called tie or homologous points), a collinearity equation (Eq. 1) is written. All the equations form a system of equations, and the solution is generally obtained with an iterative least squares method (Gauss-Markov model), thus requiring some good initial approximations of the unknown parameters. The method, called bundle adjustment, provides a simultaneous determination of all system parameters along with estimates of the precision and reliability of the unknowns. If the interior orientation parameters are also unknown, the method is named self-calibrating bundle adjustment.

The system of equations is iteratively solved with the least squares method and, after the linearization and the introduction of an error vector e, it can be expressed as:

e = A x - l                                           (2)

with:
e = error vector;
A = design matrix n x m (number of observations x number of unknowns, n > m) with the coefficients of the linearized collinearity equations;
x = unknowns vector (exterior parameters, 3D object coordinates, eventually interior parameters);
l = observation vector (i.e. the measurements).

Generally a weight matrix P is added in order to weight the observations and unknown parameters during the estimation procedure. The estimation of x and the variance factor is usually (but not exclusively) attempted as an unbiased, minimum variance estimation, performed by means of least squares, and results in:

x̂ = (A^T P A)^-1 A^T P l                              (3)

with the residuals v and the standard deviation a posteriori (σ̂0) given as:

v = A x̂ - l                                           (4)

σ̂0^2 = (v^T P v) / r                                  (5)

with r the redundancy of the system (number of observations - number of unknowns). The precision of the parameter vector x is controlled by its covariance matrix Cxx = σ̂0^2 (A^T P A)^-1.

For (A^T P A) to be uniquely invertible, as required in (Eq. 3), the image network needs to fix an external datum, i.e. the seven parameters of a spatial similarity transformation between image and object space. This is usually achieved by introducing some Ground Control Points (GCPs), at least 3. Another possibility is to solve the system of equations (Eq. 2) in free-network mode, providing at least one known object distance to retrieve the correct scale. The scaling is very important and it needs to be done within the bundle adjustment procedure (and not a posteriori, once the model is obtained), otherwise possible block deformations cannot be compensated.

Depending on the parameters which are considered either known or treated as unknowns, the collinearity equations may result in different procedures (Table 1).

As previously mentioned, the photogrammetric reconstruction method relies on a minimum of two images of the same object acquired from different viewpoints. Defining B as the baseline between two images and D as the average camera-to-object distance, a reasonable B/D (base-to-depth) ratio between the images should ensure a strong geometric configuration and a reconstruction that is less sensitive to noise and measurement errors. A typical value of the B/D ratio in terrestrial photogrammetry should be around 0.5, even if in practical situations it is often very difficult to fulfil this requirement. Generally, the larger the baseline, the better the accuracy of the computed object coordinates, although large baselines raise problems in automatically finding the same correspondences in the images, due to strong perspective effects. According to Fraser (1996), the accuracy of the computed 3D object coordinates (σ_XYZ) depends on the image measurement precision (σ_xy), the image scale and geometry (e.g. the scale number S), an empirical factor q and the number of images k:

σ_XYZ = q S σ_xy / √k                                 (6)

The collinearity principle and the Gauss-Markov model of least squares are valid and employed for all those images acquired with frame sensors (e.g. a SLR camera). In the case of linear array sensors, other mathematical approaches should be employed. The description of such methods is outside the scope of this chapter.

The entire photogrammetric workflow used to derive metric and accurate 3D information of a scene from a set of images consists of (i) camera calibration and image orientation, (ii) 3D measurements, (iii) structuring and modelling, (iv) texture mapping and visualization. Compared to the active range sensors workflow, the main difference lies in the 3D point cloud derivation: while range sensors (e.g. laser scanners) directly deliver 3D data, photogrammetry requires the mathematical processing of the image data to derive the sparse or dense 3D point clouds needed to digitally reconstruct the surveyed scene.
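The Gauss-Markov least squares estimation described above can be illustrated on a deliberately tiny linear problem, a weighted line fit standing in for the linearized collinearity system; the observations are invented for the sketch:

```python
# Sketch of the Gauss-Markov estimation: x_hat = (A^T P A)^-1 A^T P l,
# residuals v = A x_hat - l, a posteriori variance sigma0^2 = v^T P v / r.
# Here A has two unknowns (intercept, slope), so the normal matrix is 2x2
# and can be inverted explicitly.

def gauss_markov(A, l, P):
    """Weighted least squares with a diagonal weight matrix P (as a list)."""
    n, m = len(A), len(A[0])
    # Normal matrix N = A^T P A and right-hand side b = A^T P l
    N = [[sum(P[k] * A[k][i] * A[k][j] for k in range(n)) for j in range(m)]
         for i in range(m)]
    b = [sum(P[k] * A[k][i] * l[k] for k in range(n)) for i in range(m)]
    det = N[0][0] * N[1][1] - N[0][1] * N[1][0]      # 2x2 inverse
    x = [( N[1][1] * b[0] - N[0][1] * b[1]) / det,
         (-N[1][0] * b[0] + N[0][0] * b[1]) / det]
    v = [sum(A[k][j] * x[j] for j in range(m)) - l[k] for k in range(n)]
    r = n - m                                        # redundancy
    sigma0_sq = sum(P[k] * v[k] * v[k] for k in range(n)) / r
    return x, v, sigma0_sq

# Observations l = 1 + 2t with a small error on one measurement, unit weights:
t = [0.0, 1.0, 2.0, 3.0]
A = [[1.0, ti] for ti in t]
l = [1.0, 3.0, 5.1, 7.0]
x_hat, v, s0_sq = gauss_markov(A, l, [1.0] * 4)
```

In a real bundle adjustment A holds the linearized collinearity coefficients, the solve is iterated from initial approximations, and the covariance matrix σ̂0² N⁻¹ delivers the precision estimates mentioned in the text; the algebra, however, is exactly the one sketched here.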

Table 1: Photogrammetric procedures for calibration, orientation and point positioning

Method                        | Observations                              | Unknowns
General bundle adj.           | tie points, evt. datum                    | exterior param., 3D coordinates
Self-calibrating bundle adj.  | tie points, evt. datum                    | interior and exterior param., 3D coordinates
Resection                     | tie points, 3D coordinates                | interior and exterior param.
Intersection                  | tie points, interior and exterior param.  | 3D coordinates

Figure 2. A typical terrestrial image network acquired ad hoc for a camera calibration procedure, with convergent and rotated images (a). A set of terrestrial images acquired ad hoc for a 3D reconstruction purpose (b)

3.1.4 DIGITAL CAMERA CALIBRATION AND IMAGE ORIENTATION

Camera calibration and image orientation are procedures of fundamental importance, in particular for all those Geomatics applications which rely on the extraction of accurate 3D geometric information from images. The early theories and formulations of orientation procedures were developed many years ago, and today there is a great number of procedures and algorithms available (Gruen and Huang, 2001).

Sensor calibration and image orientation, although conceptually equivalent, follow different strategies according to the employed imaging sensors. The camera calibration procedure can be divided into geometric and radiometric calibration (in the following only the geometric calibration of terrestrial frame cameras is reported). A camera calibration procedure determines the interior parameters, while the exterior parameters are determined with the image orientation procedure. In photogrammetry the two procedures are very often separated, as an image network optimal for camera calibration is not optimal for image orientation (Fig. 2). Other approaches fuse the determination of interior and exterior parameters using the same set of images and procedure, but the results are normally poor and not very accurate.

3.1.5 GEOMETRIC CAMERA CALIBRATION

The geometric calibration of a camera (Remondino & Fraser, 2006) is defined as the determination of deviations of the physical reality from a geometrically ideal imaging system based on the collinearity principle: the pinhole camera. Camera calibration continues to be an area of active research within the Computer Vision community, with a perhaps unfortunate characteristic of much of the work being that it pays too little heed to previous findings from photogrammetry. Part of this might well be explained in terms of a lack of emphasis and interest in accuracy aspects, and a basic premise that nothing whatever needs to be known about the camera which is to be calibrated within a linear projective rather than Euclidean scene reconstruction. In photogrammetry, a camera is considered calibrated if its focal length, principal point offset and a set of Additional Parameters (APs) are known. The camera calibration procedure is based on the collinearity model, which is extended in order to model the systematic image errors and reduce the physical reality of the sensor geometry to the perspective model. The model which has proved to be the most effective, in particular for close-range sensors, was developed by D. Brown (1971) and expresses the corrections (Δx, Δy) to the measured image coordinates (x, y) as:

Δx = Δx0 + (x̄/f)Δf + x̄(K1 r^2 + K2 r^4 + K3 r^6) + P1(r^2 + 2x̄^2) + 2 P2 x̄ ȳ + Sx x̄ + a ȳ    (7)

Δy = Δy0 + (ȳ/f)Δf + ȳ(K1 r^2 + K2 r^4 + K3 r^6) + 2 P1 x̄ ȳ + P2(r^2 + 2ȳ^2)                  (8)

with:
x̄ = x - x0;
ȳ = y - y0;
r^2 = x̄^2 + ȳ^2.

Brown's model is generally called a physical model, as all its components can be directly attributed to physical error sources. The individual parameters represent:
Δx0, Δy0, Δf = corrections to the interior orientation elements;
Ki = parameters of radial lens distortion;
Pi = parameters of decentering distortion;
Sx = scale factor in x to compensate for possible non-square pixels;
a = shear factor for non-orthogonality and geometric deformation of the pixel.

Figure 3. Radial (a) and decentering (b) distortion profiles for a digital camera set at different focal lengths

The three APs used to model the radial distortion Δr are generally expressed via the odd-order polynomial Δr = K1 r^3 + K2 r^5 + K3 r^7, where r is the radial distance. A typical Gaussian radial distortion profile Δr is shown in Fig. 3a, which illustrates how radial distortion can vary with focal length. The coefficients Ki are usually highly correlated, with most of the error signal generally being accounted for by the cubic term K1 r^3. The K2 and K3 terms are typically included for photogrammetric (low distortion) and wide-angle lenses, and in higher-accuracy vision metrology applications. The commonly encountered third-order barrel distortion seen in consumer-grade lenses is accounted for by K1.

Decentering distortion is due to a lack of centering of the lens elements along the optical axis. The decentering distortion parameters P1 and P2 are invariably strongly projectively coupled with x0 and y0. Decentering distortion is usually an order of magnitude or more smaller than radial distortion and it also varies with focus, but to a much lesser extent, as indicated by the decentering distortion profiles shown in Fig. 3b. The projective coupling between P1 and P2 and the principal point offsets (x0, y0) increases with increasing focal length and can be problematic for long focal length lenses. The extent of coupling can be diminished, during the calibration procedure, through both the use of a 3D object point array and the adoption of higher convergence angles for the images.

The solution of a self-calibrating bundle adjustment leads to the estimation of all the interior parameters and APs, starting from a set of manually or automatically measured image correspondences (tie points). Critical to the quality of the self-calibration is the overall network geometry, and especially the configuration of the camera stations. Some good hints and practical rules for camera calibration can be summarized as follows:
- acquire a set of images of a reference object, possibly constituted of coded targets which can be automatically and accurately measured in the images;
- the image network geometry should be favourable, i.e. the camera station configuration must comprise highly convergent images, acquired at different distances from the scene, with orthogonal roll angles and a large number of well distributed 3D object points;
- the accuracy of the image network (and so of the calibration procedure) increases with increasing convergence angles for the imagery, the number of rays to a given object point and the number of measured points per image (although the incremental improvement is small beyond a few tens of points);
- a planar object point array can be employed for camera calibration if the images are acquired with orthogonal roll angles, a high degree of convergence and, desirably, varying object distances;
- orthogonal roll angles must be present to break the projective coupling between IO and EO parameters. Although it might be possible to achieve this decoupling without 90-degree image rotations, through provision of a strongly 3D object point array, it is always recommended to have rolled images in the self-calibration network.
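The radial and decentering corrections of Brown's model can be sketched as follows (the affine terms Sx and a are omitted for brevity, and the coefficient values are purely illustrative, not calibration results from the text):

```python
# Sketch of Brown's radial (K1, K2, K3) and decentering (P1, P2) corrections
# applied to a measured image coordinate, in the same units as (x, y)
# (e.g. millimetres). Coefficients here are invented for illustration.

def brown_correction(x, y, x0, y0, K1, K2, K3, P1, P2):
    """Corrections (dx, dy) to the measured image coordinates (x, y)."""
    xb, yb = x - x0, y - y0
    r2 = xb * xb + yb * yb
    radial = K1 * r2 + K2 * r2**2 + K3 * r2**3   # i.e. dr/r = K1 r^2 + K2 r^4 + K3 r^6
    dx = xb * radial + P1 * (r2 + 2 * xb * xb) + 2 * P2 * xb * yb
    dy = yb * radial + P2 * (r2 + 2 * yb * yb) + 2 * P1 * xb * yb
    return dx, dy

# Barrel distortion dominated by the cubic K1 term, as in most consumer lenses:
dx, dy = brown_correction(10.0, 0.0, 0.0, 0.0,
                          K1=-5e-5, K2=0.0, K3=0.0, P1=0.0, P2=0.0)
# at r = 10 mm the correction is 10 * (-5e-5 * 100) = -0.05 mm
```

Note how the correction grows with the cube of the radial distance: points near the image border are displaced far more than points near the principal point, which is why well distributed measurements are needed to estimate the Ki reliably.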

Nowadays self-calibration via the bundle adjustment is a fully automatic process requiring nothing more than images recorded in a suitable multi-station geometry, an initial guess of the focal length and image sensor characteristics (and it can be a guess) and some coded targets which form a 3D object point array. A flat 2D sheet of paper with some targets on it can be used to calibrate a camera but, due to the flatness of the scene, great care must be taken in the image acquisition.

A typical set of images acquired for 3D reconstruction purposes forms a network which is generally not suitable for a calibration procedure. Therefore it is always better to separate the two photogrammetric steps, or to adopt a set of images suitable for both procedures.

3.1.6 IMAGE ORIENTATION

In order to survey an object, a set of images needs to be acquired, considering that a detail can be reconstructed in 3D if it is visible in at least 2 images. The orientation procedure is then performed to determine the position and attitude (angles) from which the images were acquired in order, afterwards, to derive 3D information using the collinearity principle. Given the images (and the calibration parameters), a set of tie points (at least 5) needs to be identified, manually or automatically, in the images, respecting the fact that the points should be well distributed over the entire image format and neither coplanar nor collinear. These image observations are then used to form a system of collinearity equations (Eq. 1), iteratively solved with the Gauss-Markov model of least squares (Eq. 2 and 3) in order to derive the sought EO parameters and the 3D coordinates of the measured tie points.

3.1.7 PHOTOGRAMMETRIC 3D POINT CLOUD GENERATION

Once the camera parameters are known, the scene measurements can be performed with manual or automated procedures. The measured 2D image correspondences are converted into unique 3D object coordinates (a 3D point cloud) using the collinearity principle and the known exterior and interior parameters previously recovered. According to the surveyed scene and the project requirements, sparse or dense point clouds are derived (Fig. 4 and 5).

Manual (interactive) measurements, performed in monocular or stereoscopic mode, derive the sparse point clouds necessary to determine the main 3D geometries and discontinuities of an object. Sparse reconstructions are adequate for architectural or 3D city modelling applications, where the main corners and edges must be identified to reconstruct the 3D shapes (Fig. 4a) (Gruen & Wang, 1998; El-Hakim, 2002). A relative accuracy in the range 1:5,000-20,000 is generally expected for such kinds of 3D models.

On the other hand, automated procedures (dense image matching) are employed when dense surface measurements and reconstructions are required, e.g. to derive a Digital Surface Model (DSM) to document detailed and complex objects like reliefs, statues, excavation areas, etc. (Fig. 4b,c). The latest developments in automated image matching (Pierrot-Deseilligny & Paparoditis, 2006; Hirschmuller, 2008; Remondino et al., 2008; Hiep et al., 2009; Furukawa & Ponce, 2010) are demonstrating the great potential of the image-based 3D reconstruction method at different scales of work, comparable to point clouds derived using active range sensors and with a reasonable level of automation. Overviews of stereo and multi-image matching techniques can be found in (Scharstein & Szeliski, 2002; Seitz et al., 2006). Some commercial, open-source and web-based tools are also available to derive dense point clouds from a set of images (Photomodeler Scanner, MicMac, PMVS, etc.).

3.1.8 POLYGONAL MODEL GENERATION

Once a sparse or dense point cloud is obtained, a polygonal model (mesh or TIN) is generally produced for texturing purposes, better visualization and other uses. This process is logically subdivided into several sub-steps that can be completed in different orders depending on the 3D data source (Berger et al., 2011).

In case of sparse point clouds, the polygonal elements are normally created with an interactive procedure, firstly creating lines, then polygons and finally surfaces.

Dense image matching generally delivers unstructured 3D point clouds, which can be processed with the same approach used for unstructured laser scanner point clouds; no alignment phase is needed, as the photogrammetric process delivers a unique point cloud of the surveyed scene. While meshing is a fairly straightforward step for structured point clouds, for an unstructured point cloud it is not so immediate. It requires a specific process like Delaunay triangulation, involving a projection of the 3D points onto a plane or another primitive surface, a search for the shortest point-to-point connections with the generation of a set of potential triangles, which are then re-projected into 3D space and topologically verified. For this reason the mesh generation from unstructured clouds may consist of: a) merging the 2.5D point clouds, reducing the amount of data in the overlapping areas and generating in this way a full 3D cloud of uniform resolution; b) meshing with procedures more sophisticated than a simple Delaunay triangulation. The possible approaches for this latter step are based on: (i) interpolating surfaces, which build a triangulation with more elements than needed and then prune away triangles not coherent with the surface (Amenta & Bern, 1999); (ii) approximating surfaces, where the output is often a triangulation of a best-fit function of the raw 3D points (Hoppe et al., 1992; Cazals & Giesen, 2006).
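Returning to the measurement step of section 3.1.7, a minimal sketch of how a tie point observed in two oriented images yields a 3D object point: each measurement defines a collinearity ray, and since noisy rays rarely intersect exactly, a common choice is the midpoint of their closest approach. Camera positions and ray directions below are invented for the example:

```python
# Sketch of forward intersection: the 3D point closest to two collinearity
# rays p = c + t*d (d need not be a unit vector), taken as the midpoint of
# the shortest segment between them.

def triangulate(c1, d1, c2, d2):
    """Midpoint of closest approach between two rays."""
    def dot(a, b): return sum(p * q for p, q in zip(a, b))
    def sub(a, b): return [p - q for p, q in zip(a, b)]
    w = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    den = a * c - b * b                 # zero only for parallel rays
    t1 = (b * e - c * d) / den
    t2 = (a * e - b * d) / den
    p1 = [c1[i] + t1 * d1[i] for i in range(3)]
    p2 = [c2[i] + t2 * d2[i] for i in range(3)]
    return [(p1[i] + p2[i]) / 2.0 for i in range(3)]

# Two cameras on a 10 m baseline, both rays aimed at the point (5, 0, 20):
P = triangulate([0, 0, 0], [5, 0, 20], [10, 0, 0], [-5, 0, 20])
```

The distance between p1 and p2 before averaging is a useful per-point quality indicator: with a weak B/D ratio or poor image measurements it grows quickly, which is the geometric effect discussed in section 3.1.3.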

Figure 4. 3D reconstruction of architectural structures with manual measurements in order to generate a simple 3D model with the main geometrical features (a). Dense 3D reconstruction via automated image matching (b). Digital Surface Model (DSM) generation from satellite imagery (GeoEye stereo-pair) for 3D landscape visualization (c)

3.1.9 TEXTURE MAPPING AND VISUALIZATION

A polygonal 3D model can be visualized in wireframe, shaded or textured mode. A textured 3D geometric model is probably the most desirable 3D object documentation for most users, since it gives at the same time a full geometric and appearance representation and allows unrestricted interactive visualization and manipulation under a variety of lighting conditions. The photo-realistic representation of a polygonal model (or even a point cloud) is achieved by mapping colour images onto the 3D geometric data. The 3D data can be in the form of points or triangles (mesh), according to the applications and requirements. The texturing of 3D point clouds (point-based rendering techniques; Kobbelt & Botsch, 2004) allows a faster visualization, but for detailed and complex 3D models it is not an appropriate method. In the case of meshed data, the texture is automatically mapped if the camera parameters are known (e.g. if it is a photogrammetric model and the images are oriented); otherwise an interactive procedure is required (e.g. if the model has been generated using range sensors and the texture comes from a separate imaging sensor). Indeed, homologous points between the 3D mesh and the 2D image to-be-mapped must be identified in order to find the alignment transformation necessary to map the colour information onto the mesh. Although

Figure 5. 3D reconstruction from images: according to the project needs and requirements, sparse or dense point clouds can be derived

some automated approaches have been proposed in the research community (Lensch et al., 2000; Corsini et al., 2009), no automated commercial solution is available, and this is a bottleneck of the entire 3D modelling pipeline. Thus, in practical cases, the 2D-3D alignment is done with the well-known DLT approach (Abdel-Aziz & Karara, 1971), often referred to as the Tsai method (Tsai, 1986). Corresponding points between the 3D geometry and a 2D image to-be-mapped are sought in order to retrieve the unknown interior and exterior camera parameters. The colour information is then projected (or assigned) to the surface polygons using a colour-vertex encoding, a mesh parameterization or an external texture.

In Computer Graphics applications, the texturing can also be performed with techniques able to graphically modify the derived 3D geometry (displacement mapping) or to simulate the surface irregularities without touching the geometry (bump mapping, normal mapping, parallax mapping).

In the texture mapping phase some problems can arise due to lighting variations of the images, surface specularity and camera settings. Often the images are exposed with the illumination at imaging time, but it may need to be replaced by illumination consistent with the rendering point of view and the reflectance properties (BRDF) of the object (Lensch et al., 2003). High dynamic range (HDR) images might also be acquired to recover all scene details and illumination (Reinhard et al., 2005), while colour discontinuities and aliasing effects must be removed (Debevec et al., 2004; Umeda et al., 2005).

The photo-realistic 3D product finally needs to be visualized, e.g. for communication and presentation purposes. In case of large and complex models the point-based rendering technique does not give satisfactory results and does not provide a realistic visualization. The visualization of a 3D model is often the only product of interest for the external world, remaining the only possible contact with the 3D data. Therefore a realistic and accurate visualization is often required. Furthermore, the ability to easily interact with a huge 3D model is a continuing and increasing problem. Indeed, model sizes (both in geometry and texture) are increasing at a faster rate than computer hardware advances, and this limits the possibilities of interactive and real-time visualization of the 3D results. Due to the generally large amount of data and its complexity, the rendering of large 3D models is done with multi-resolution approaches, displaying the large meshes with different Levels of Detail (LOD), and with simplification and optimization approaches (Dietrich et al., 2007).
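The automatic texture mapping case described above (oriented images with known camera parameters) reduces, per vertex, to a projection into the image. A minimal sketch, assuming a simplified nadir-looking pinhole camera with identity rotation (an assumption made only to keep the projection short; all numbers are illustrative):

```python
# Sketch: assigning a texture coordinate to a mesh vertex by projecting it
# into an oriented image. The camera looks straight down (-Z) with identity
# rotation; f_px is the focal length in pixels, (cx, cy) the principal point.

def vertex_uv(vertex, cam_centre, f_px, cx, cy, width, height):
    """Project a vertex and return normalized (u, v) texture coordinates."""
    dx = vertex[0] - cam_centre[0]
    dy = vertex[1] - cam_centre[1]
    dz = vertex[2] - cam_centre[2]          # negative when below the camera
    col = cx + f_px * dx / -dz
    row = cy + f_px * dy / -dz
    return col / width, row / height        # in 0..1 if the vertex is in view

# 1000-pixel focal length, 2000 x 1500 image, camera 50 m above the mesh:
uv = vertex_uv([5.0, 0.0, 0.0], [0.0, 0.0, 50.0], 1000.0, 1000.0, 750.0, 2000, 1500)
```

Doing this for the three vertices of every triangle, after checking visibility and occlusion, yields the per-face texture coordinates that a renderer uses to sample the colour image; the general case simply replaces the simplified projection with the full collinearity model of Eq. (1).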

3.1.10 OTHER IMAGE-BASED TECHNIQUES

The most well-known technique similar to photogrammetry is computer vision (Hartley and Zisserman, 2001). Even if accuracy is not the primary goal, computer vision approaches are retrieving very interesting results for visualization purposes, object-based navigation, location-based services, robot control, shape recognition, augmented reality, annotation transfer and image browsing purposes. The typical computer vision pipeline for scene modelling is named structure from motion (Pollefeys et al., 2004; Pollefeys et al., 2008; Agarwal et al., 2009) and it is getting quite common in applications where metric accuracy is not the primary aim. For photogrammetry, the greatest benefit of the recent advances in computer vision is the continuous development of new automated image analysis algorithms and 3D reconstruction methods. These have been adopted by the photogrammetric community in order to automate most of the steps of the 3D modelling pipeline. Computer vision researchers have indeed developed different image processing tools which can be used, e.g., for automated 3D reconstruction purposes: ARC3D, Photosynth, Bundler, etc., just to mention some of them.

There are also some image-based techniques which allow the derivation of 3D information from a single image. These methods use object constraints (Van den Heuvel, 1998; Criminisi et al., 1999; El-Hakim, 2000) or estimate surface normals instead of image correspondences, with methods like shape from shading (Horn & Brooks, 1989), shape from texture (Kender, 1978), shape from specularity (Healey and Binford, 1987), shape from contour (Meyers et al., 1992) and shape from 2D edge gradients (Winkelbach & Wahl, 2001).

3D MODELING IN ARCHAEOLOGY AND CULTURAL HERITAGE

3.2 UAV: PLATFORMS, REGULATIONS, DATA


ACQUISITION AND PROCESSING
F. REMONDINO

GNSS/INS systems, necessary to navigate the platforms,


predict the acquisition points and possibly perform direct
geo-referencing. Although conventional airborne remote
sensing has still some advantages and the tremendous
improvements of very high-resolution satellite imagery
are closing the gap between airborne and satellite mapping applications, UAV platforms are a very important
alternative and solution for studying and exploring our
environment, in particular for heritage locations or rapid
response applications. Private companies are now
investing and offering photogrammetric products (mainly
Digital Surface Models DSM and orthoimages) from
UAV-based aerial images as the possibility of using
flying unmanned platforms with variable dimensions,
small weight and high ground resolution allow to carry
out flight operations at lower costs compared to the ones
required by traditional aircrafts. Problems and limitations
are still existing, but UAVs are a really capable source of
imaging data for a large variety of applications.

3.2.1 INTRODUCTION
According to the UVS (Unmanned Vehicle System)
International definition, an Unmanned Aerial Vehicle
(UAV) is a generic aircraft design to operate with no
human pilot onboard [1]. The simple term UAV is used
commonly in the Geomatics community, but also other
terms like Drone, Remotely Piloted Vehicle (RPV),
Remotely Operated Aircraft (ROA), Micro Aerial
Vehicles (MAV), Unmanned Combat Air Vehicle
(UCAV), Small UAV (SUAV), Low Altitude Deep
Penetration (LADP) UAV, Low Altitude Long Endurance
(LALE) UAV, Medium Altitude Long Endurance
(MALE) UAV, Remote Controlled (RC) Helicopter and
Model Helicopter are often used, according to their
propulsion system, altitude/endurance and the level of
automation in the flight execution. The term UAS
(Unmanned Aerial System) comprehends the whole
system composed by the aerial vehicle/platform (UAV)
and the Ground Control Station (GCS). [2] defines UAVs
as Uninhabited Air Vehicles while [3] defines UAVs as
uninhabited and reusable motorized aerial vehicles.

The paper reviews the most common UAV systems and applications in the Geomatics field, highlighting open problems and research issues related to regulations and data processing. The entire photogrammetric processing workflow is also reported, with different examples and critical remarks.

In the past, the development of UAV systems and platforms was primarily motivated by military goals and applications: unmanned inspection, surveillance, reconnaissance and mapping of hostile areas were the primary military aims. For Geomatics applications, the first experiences were carried out three decades ago, but only recently have UAVs become a common platform for data acquisition in the Geomatics field (Fig. 1). UAV photogrammetry [4-5] indeed opens various new applications in the close-range aerial domain, introducing a low-cost alternative to classical manned aerial photogrammetry for large-scale topographic mapping or detailed 3D recording of ground information, and being a valid complementary solution to terrestrial acquisitions. The latest UAV success and developments can be explained by the spread of low-cost platforms combined with amateur or SLR digital cameras and GNSS/INS systems.

3.2.2 UAV PLATFORMS


The primary airframe types are fixed and rotary wings
while the most common launch/take-off methods are,
beside the autonomous mode, air-, hand-, car/track-,
canister- or bungee cord launched. A typical UAV
platform for Geomatics purposes can cost from 1000
Euro up to 50000 Euro, depending on the on-board
instrumentation, payload, flight autonomy, type of
platform and degree of automation needed for its specific
applications. Low-cost solutions are not usually able to perform autonomous flights, but they always require human assistance in the take-off and landing phases. Low-cost and open-source platforms and toolkits were presented in [6-11]. Simple and hand-launched UAVs which perform flights autonomously using MEMS-based (Micro Electro-Mechanical Systems) sensors or C/A code GPS for the auto-pilot are the most inexpensive systems [12], although stability in windy areas might be a problem. Bigger and more stable systems, generally based on an Internal Combustion Engine (ICE), have a longer endurance with respect to electric-engine UAVs and, thanks to the higher payload, they allow medium-format (reflex) cameras or LiDAR or SAR instruments on board [13-18].

PHOTOGRAMMETRY

Figure 1. Available Geomatics techniques, sensors and platforms for 3D recording purposes, according to the scene dimensions and complexity

The developments and improvements at hardware and platform level are carried out in the robotics, aeronautical and optical communities, where breakthrough solutions are sought in order to miniaturize the optical systems, enhance the payload, achieve complete autonomous navigation and improve the flying performances [19-20]. Researchers have also performed studies on flying invertebrates to understand their movement capabilities, obstacle avoidance and autonomous landing/take-off capabilities [21-22].

Based on size, weight, endurance, range and flying altitude, UVS International defines three main categories of UAVs:
- Tactical UAVs, which include micro, mini, close-, short- and medium-range, medium-range endurance, low altitude deep penetration, low altitude long endurance and medium altitude long endurance systems. The mass varies from a few kg up to 1,000 kg, the range from a few km up to 500 km, the flight altitude from a few hundred metres up to 5 km and the endurance from some minutes to 2-3 days;
- Strategic UAVs, including high altitude long endurance, stratospheric and exo-stratospheric systems, which fly higher than 20,000 m altitude and have an endurance of 2-4 days;
- Special task UAVs, like unmanned combat autonomous vehicles, lethal and decoy systems.

UAVs for Geomatics applications can be briefly classified according to their engine/propulsion system as:
- unpowered platforms, e.g. balloon, kite, glider, paraglider;
- powered platforms, e.g. airship, glider, propeller, electric or combustion engine.

Alternatively, they can be classified according to their aerodynamic and physical features as:
- lighter-than-air, e.g. balloon, airship;
- rotary wing, either electric or with combustion engine, e.g. single-rotor, coaxial, quadrocopter, multi-rotor;
- fixed wing, either unpowered, electric or with combustion engine, e.g. glider or high wing.

In Table 1, pros and cons of different UAV typologies are presented, according to the literature review and the authors' experience: rotor and fixed wing UAVs are compared to more traditional low-cost aerial kites and balloons.

3D MODELING IN ARCHAEOLOGY AND CULTURAL HERITAGE

Table 1. Evaluation of some UAV platforms employed for Geomatics applications, according to the literature and the authors' experience. The evaluation is from 1 (low) to 5 (high). Compared platforms: fixed wing (electric / ICE engine), rotary wing (electric / ICE engine) and kite/balloon. Criteria: payload, wind resistance, minimum speed, flying autonomy, portability and landing distance.
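The classifications above can be captured in a small data model. A minimal sketch in Python: the category names come from the text, while the `Platform` record and the example values are our own illustration (the payload and endurance roughly match the quadrocopter used in the Paestum case study later in the chapter):

```python
from dataclasses import dataclass
from enum import Enum

class Propulsion(Enum):
    UNPOWERED = "unpowered"   # balloon, kite, glider, paraglider
    POWERED = "powered"       # airship, propeller, electric, combustion engine

class Airframe(Enum):
    LIGHTER_THAN_AIR = "lighter-than-air"  # balloon, airship
    ROTARY_WING = "rotary wing"            # single-rotor, coaxial, quadrocopter, multi-rotor
    FIXED_WING = "fixed wing"              # glider, high wing

@dataclass
class Platform:
    name: str
    airframe: Airframe
    propulsion: Propulsion
    payload_kg: float
    endurance_min: float

# Illustrative entry: an electric quadrocopter carrying ca 1 kg for ca 45 minutes.
quad = Platform("quadrocopter", Airframe.ROTARY_WING, Propulsion.POWERED,
                payload_kg=1.0, endurance_min=45.0)
```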

3.2.3 UAV APPLICATIONS
Some UAV civilian applications are mentioned in [23], while [24] reports on UAV projects, regulations, classifications and applications in the mapping domain. The application fields where UAV images and photogrammetrically derived DSMs or orthoimages are generally employed include:
- Agriculture: producers can take reliable decisions to save money and time (e.g. precision farming), get quick and accurate records of damages or identify potential problems in the field [25];
- Forestry: assessment of woodlots, fire surveillance, vegetation monitoring, species identification, volume computation as well as silviculture can be accurately performed [7, 26-28];
- Archaeology and architecture: 3D surveying and mapping of sites and man-made structures can be performed with low-altitude image-based approaches [29-33];
- Environment: quick and cheap regular flights allow the monitoring of land and water at multiple epochs [34-35]; road mapping [36], cadastral mapping [37], thermal analyses [38], excavation volume computation, volcano monitoring [39] or natural resource documentation for geological analyses are also feasible;
- Emergency management: UAVs are able to quickly acquire images for early impact assessment and rescue planning [40-42]. The flight can be performed over contaminated areas without any danger for the operators or any long pre-flight operations;
- Traffic monitoring: surveillance, travel time estimation, trajectories, lane occupancies and incidence response are the most required information [43].

UAV images are also often used in combination with terrestrial surveying in order to close possible 3D modeling gaps and create orthoimages [44-45].

3.2.4 HISTORICAL FRAMEWORK AND REGULATIONS
UAVs were originally developed for military applications, performing reconnaissance flights over enemy areas without any risk for human pilots. The first experiences for civil and Geomatics applications were carried out at the end of the '70s [46], and their use has greatly increased in the last decades thanks to the fast improvement of platforms, communication technologies and software, as well as the growing number of possible applications. The use of such flying platforms in civil applications made it necessary to increase the security of UAV flights in order to avoid danger to human beings, and the international community therefore started to define security criteria for UAVs some years ago. In particular, NATO and EuroControl started their cooperation in 1999 in order to prepare regulations for UAV platforms and flights. This work has not led to a common international standard yet, especially for civil applications. But the great diffusion and commercialization of new UAV systems has pushed several national and international associations to analyze the operational safety of UAVs. Each country has one or more authorities involved in UAV regulations, each operating independently. Due to the absence of cooperation between all these authorities, it is difficult to describe the specific aims of each of them without loss of generality. The elements of UAV regulations mainly aim to increase the reliability of the platforms, underlining the need for safety certifications for each platform and ensuring public safety. These rules are in continuous progress in most countries. As they are conditioned by technical developments and safety standards, rules and certifications should be set equal to those currently applied to comparable manned aircraft, although the most important issue, UAVs being unmanned, is the citizens' safety in case of an impact.

UAVs currently have different safety levels according to their dimension, weight and onboard technology. For this reason, the rules applicable to each UAV cannot be the same for all platforms and categories. For example, in the U.S. safety is defined according to their use (public or civil), in Europe according to the weight, as this parameter is directly connected to the damage they can produce when a crash occurs. Other restrictions are defined in terms of minimum and maximum altitude, maximum payload, area to be surveyed, etc.

The indirect control of a pilot from the GCS may lead to increased accidents due to human errors. For this reason, in several countries UAV operators need some training and qualifications.

3.2.5 DATA ACQUISITION AND PROCESSING
A typical image-based aerial survey with a UAV platform requires a flight or mission planning and GCP (Ground Control Point) measurement (if not already available) for geo-referencing purposes. After the acquisition, images can be used for stitching and mosaicking purposes [9] or they can be the input of the photogrammetric process. In this case, camera calibration and image triangulation are initially performed, in order to successively generate a Digital Surface Model (DSM) or Digital Terrain Model (DTM). These products can finally be used for the production of ortho-images, 3D modelling applications or for the extraction of further metric information. In Fig. 2 the general workflow is shown, while the single steps are discussed in more detail in the following sections.

Figure 2. Typical acquisition and processing pipeline for UAV images

Flight planning and image acquisition
The mission (flight and data acquisition) is normally planned in the lab with dedicated software, starting from the knowledge of the area of interest (AOI), the required Ground Sample Distance (GSD) or footprint and the intrinsic parameters of the on-board digital camera. The desired image scale and the camera focal length are generally fixed in order to derive the mission flying height. The camera perspective centres (waypoints) are computed by fixing the longitudinal and transversal overlap of the strips (e.g. 80%-60%). All these parameters vary according to the goal of the flight: missions for detailed 3D model generation usually request high overlaps and low-altitude flights to achieve small GSDs, while quick flights for emergency surveying and management need wider areas to be recorded in a few minutes, at a lower resolution.

The flight is normally done in manual, assisted or autonomous mode, according to the mission specifications, platform type and environmental conditions. The presence onboard of GNSS/INS navigation devices is usually exploited for the autonomous flight (take-off, navigation and landing) and to guide the image acquisition. The image network quality is strongly influenced by the typology of the performed flight (Fig. 3): in manual mode, the image overlap and the acquisition geometry are usually very irregular, while the presence of GNSS/INS devices, together with a navigation system, can guide and improve the acquisition. The navigation system, generally called auto-pilot, is composed of both hardware (often in a miniaturized form) and software components. An auto-pilot allows a flight to be performed according to the planning and to communicate with the platform during the mission. The small size and the reduced payload of some UAV platforms limit the transportation of high-quality navigation devices like those coupled to airborne cameras or LiDAR sensors. The cheapest solution relies on MEMS-based inertial sensors, which feature a very reduced weight but an accuracy not sufficient, to our knowledge, for direct geo-referencing. More advanced and expensive sensors, possibly based on single/double frequency positioning or the use of RTK, would improve the quality of positioning to the decimetre level, but they are still too expensive to be commonly used on low-cost solutions. During the flight, the autonomous platform is normally observed from a Ground Control Station (GCS) which shows real-time flight data such as position, speed, attitude and distances, GNSS observations, battery or fuel status, rotor speed, etc. By contrast, remotely controlled systems are piloted by an operator from the ground station. Most systems then allow image data acquisition following the computed waypoints, while low-cost systems acquire images at a scheduled interval. The devices used (platform, auto-pilot and GCS) are fundamental for the quality and reliability of the final result: low-cost instruments can be sufficient for small extents and low-altitude flights, while more expensive devices must be used for long-endurance flights over wide areas. Generally, in the case of lightweight and low-cost platforms, a regular overlap in the image block cannot be assured, as these platforms are strongly influenced by the presence of wind, piloting capabilities and GNSS/INS quality, all randomly affecting the attitude and location of the platform during the flight. Thus higher overlaps, with respect to flights performed with manned vehicles or very expensive UAVs, are usually recommended to account for these problems.

Figure 3. Different modalities of the flight execution delivering different image block quality: a) manual mode and image acquisition with a scheduled interval; b) low-cost navigation system with possible waypoints but irregular image overlap; c) automated flying and acquisition mode achieved with a high-quality navigation system

Camera calibration and image orientation
Camera calibration and image orientation are two fundamental prerequisites for any metric reconstruction from images. In metrological applications, the separation of the two tasks into different steps should be preferred [47]. Indeed, they require different block geometries, which can be better optimized if they are treated in separate stages. On the other hand, in many applications where lower accuracy is required, calibration and orientation can be computed at the same time by solving a self-calibrating bundle adjustment. In the case of aerial cameras, the camera calibration is generally performed in the lab, although in-flight calibrations are also performed [48], possibly with strips at different flying heights. Camera calibration and image orientation tasks require the extraction of common features visible in as many images as possible (tie points), followed by a bundle adjustment, i.e. a non-linear optimization procedure which minimizes an appropriate cost function [49-51]. Procedures based on the manual identification of tie points by an expert operator or based on signalized coded markers are well assessed and widely used today. Recently, fully automated procedures for the extraction of consistent and redundant sets of tie points from markerless close-range images have been developed for photogrammetric applications [52-53]. Some efficient commercial solutions have also appeared on the market (e.g. PhotoModeler Scanner, Eos Inc.; PhotoScan, Agisoft), while commercial software for aerial applications still needs some user interaction or the availability of GNSS/INS data for automated tie point extraction. In Computer Vision, the simultaneous determination of camera (interior and exterior) parameters and 3D structure is normally called Structure from Motion [54-56]. Some free web-based approaches (e.g. Photosynth, 123DCatch, etc.) and open source solutions (VisualSfM [57], Bundler [58], etc.) are also available, although they are generally not reliable and accurate enough in the case of large and complex image blocks with variable baselines and image scales. The employed bundle adjustment algorithm must be reliable, able to handle possible outliers and provide statistical outputs to validate the results. The collected GNSS/INS data, if available, can help the automated tie point extraction and can allow the direct geo-referencing of the captured images. In applications with low metric quality requirements, e.g. for fast data acquisition and mapping during emergency response, the accuracy of direct GNSS/INS observations can be enough [59-60]. If the navigation positioning system cannot be directly used (even for autonomous flight) because the signal is strongly degraded or not available (downtowns, rainforest areas, etc.), the orientation phase must rely only on a pure image-based approach [61-64], thus requiring GCPs for scaling and geo-referencing. These two latter steps are very important in order to get metric results. To perform indirect geo-referencing, there are basically two ways to proceed:
1) import at least three GCPs into the bundle adjustment solution, treating them as weighted observations inside the least squares minimization. This approach is the most rigorous as (i) it minimizes the possible image block deformations and possible systematic errors, (ii) it avoids instability of the bundle solution (convergence to a wrong solution) and (iii) it helps in the determination of the correct 3D shape of the surveyed scene.

2) use a free-network approach in the bundle adjustment [65-66] and apply only at the end of the bundle a similarity (Helmert) transformation in order to bring the image network results into the desired reference coordinate system. This approach is not rigorous: the solution is sought by minimizing the trace of the covariance matrix, introducing the necessary datum with some initial approximations. As no external constraint is introduced, if the bundle solution cannot determine the right 3D shape of the surveyed scene, the successive similarity transformation (from the initial relative orientation to the external one) would not improve the result.

The two approaches are thus, in theory, not equivalent and they can lead to totally different results (Fig. 4): in the first approach, the quality of the bundle is only influenced by the redundant control information and, moreover, additional check points can be used to derive some statistics of the adjustment. The second approach, on the other hand, has no external shape constraints in the bundle adjustment, thus the solution is only based on the integrity and quality of the multi-ray relative orientation. The fundamental requirement is thus to have a good image network in order to achieve correct results in terms of computed object coordinates and scene 3D shape.

Figure 4. Orientation results of an aerial block over a flat area of ca 10 km (a). The derived camera poses are shown in red/green, while colour dots are the 3D object points on the ground. The absence of ground constraints (b) can lead to a wrong solution of the computed 3D shape (i.e. ground deformation). The more rigorous approach, based on GCPs used as observations in the bundle solution (c), delivers the correct 3D shape of the surveyed scene, i.e. a flat terrain

Surface reconstruction and orthoimage generation
Once a set of images has been oriented, the following steps in the 3D reconstruction and modeling workflow are surface measurement, orthophoto creation and feature extraction. Starting from the known camera orientation parameters, a scene can be digitally reconstructed by means of interactive procedures or automated dense image matching techniques. The output is normally a sparse or a dense point cloud, describing the salient corners and features in the former case, or the entire surface shape of the surveyed scene in the latter case. Dense image matching algorithms should be able to extract dense point clouds that define the object's surface and its main geometric discontinuities. Therefore the point density must be adaptively tuned to preserve edges and, possibly, avoid too many points in flat areas. At the same time, a correct matching result must be guaranteed also in regions with poor texture. The actual state-of-the-art is the multi-image matching technique [67-69] based on semi-global matching algorithms [70-71], patch-based methods [72] or optical flow algorithms [73]. The last two methods have been implemented in the open source packages named, respectively, PMVS and MicMac.

The derived unstructured point clouds need to be afterwards structured and interpolated, possibly simplified and finally textured for photo-realistic visualization. Dense point clouds are generally preferred in case of terrain/surface reconstruction (e.g. archaeological excavations, forestry areas, etc.), while sparse clouds, which are afterwards turned into simple polygonal information, can be preferred when modeling man-made scenes like buildings.

For the creation of orthoimages, a dense point cloud is mandatory in order to achieve a precise ortho-rectification and a complete removal of terrain distortions. On the other hand, in case of low-accuracy applications (e.g. rapid response, disaster assessment, etc.) a simple image rectification method (without the need of dense image matching) can be applied, followed by a stitching operation [9].

3.2.6 CASE STUDIES
As already mentioned, images acquired flying UAV platforms give useful information for different applications, such as archaeological documentation, geological studies and monitoring, urban area modeling and monitoring, emergency assessment and so on. The typical required products are dense point clouds, polygonal models or orthoimages, which are afterwards used for mapping, volume computation, displacement analyses, visualization, city modeling, map generation, etc. In the following sections an overview of some applications is given and the achieved results are shown. The data presented in the following case studies were acquired by the authors or by some project partners, and they were processed by the authors using the Apero [53] and MicMac [73] open-source tools customized for the specific UAV applications.
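The flight-planning quantities that recur in the following case studies (GSD, flying height, image footprint and waypoint spacing) are tied together by simple projective relations for a nadir-looking frame camera. A minimal sketch of these relations; all numeric values below are illustrative, not taken from the case studies:

```python
# Flight-planning relations for a nadir-looking frame camera.
# GSD = flying_height * pixel_size / focal_length (all lengths in metres).

def flying_height(gsd_m, pixel_size_m, focal_length_m):
    """Flying height above ground needed to reach a given GSD."""
    return gsd_m * focal_length_m / pixel_size_m

def footprint(gsd_m, width_px, height_px):
    """Ground footprint (metres) of a single image."""
    return width_px * gsd_m, height_px * gsd_m

def waypoint_spacing(footprint_along_m, footprint_across_m,
                     forward_overlap=0.8, side_overlap=0.6):
    """Distance between exposures along a strip and between adjacent strips."""
    base = footprint_along_m * (1.0 - forward_overlap)
    strip = footprint_across_m * (1.0 - side_overlap)
    return base, strip

# Example: a 4000 x 3000 px camera with 2 um pixels and 6 mm focal length,
# flown for a 1 cm GSD with 80%/60% overlaps.
h = flying_height(0.01, 2e-6, 6e-3)        # 30 m above ground
fp_w, fp_h = footprint(0.01, 4000, 3000)   # 40 m x 30 m footprint
base, strip = waypoint_spacing(fp_h, fp_w) # 6 m between shots, 16 m between strips
```

The same relations explain the trade-off stated in the flight-planning section: halving the GSD halves the flying height and the footprint, and therefore multiplies the number of required images.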

Archaeological site 3D recording and modelling
The availability of accurate 3D information is very important during an excavation in order to define the state of the works/excavations at a particular epoch or to digitally reconstruct the findings that have been discovered, for documentation, digital preservation and visualization purposes.

An example of such an application is given in Fig. 5, where the Neptune Temple in the archaeological area of Paestum (Italy) is shown. Given the shape, complexity and dimensions of the monument, a combination of terrestrial and UAV (vertical and oblique) images was employed in order to guarantee the completeness of the 3D surveying work. The employed UAV is a 4-rotor MD4-1000 Microdrones system, entirely of carbon fibre, which can carry up to 1.0 kg of instruments with an endurance longer than 45 minutes. For the nadir images the UAV mounted an Olympus E-P1 camera (12 Megapixels, 4.3 µm pixel size) with 17 mm focal length, while for the oblique images an Olympus XZ-1 (10 Megapixels, 2 µm pixel size) with 6 mm focal length was used. For both flights, the average GSD of the images is ca 3 cm. The auto-pilot system allowed two complete flights to be performed in autonomous mode, but the stored coordinates of the projection centres were not sufficient for direct geo-referencing. For this reason, a set of reliable GCPs (measured with a total station on corners and features of the temple) was necessary to derive scaled and geo-referenced 3D results. The orientation procedure processed terrestrial and UAV images (ca 190) simultaneously in order to bring all the data into the same coordinate system. After the recovery of the camera poses, a DSM was produced for documentation and visualization purposes [74].

A second example is reported in Fig. 6, showing the archaeological area of Pava (ca 60 x 50 m), surveyed every year at the beginning and end of the excavation period to monitor the advances of the work, compute the excavation volume and produce multi-temporal orthoimages of the area. The flights (35 m height) were performed with a Microdrones MD4-200 in 2010 and 2011. The heritage area is quite windy, so an electric platform was probably not the most suited one. For each session, using multiple shootings at each waypoint, a reliable set of images (ca 40) was acquired, with an average GSD of 1 cm. In order to evaluate the quality of the image triangulation procedure, some circular targets, measured with a total station, were used as ground control points (GCP) and others as check points (CK). After the orientation step, the RMSE on the CK resulted in 0.037 m in planimetry and 0.023 m in height. The derived DSMs (Fig. 6b, c) were used within Pava's GIS to produce vector layers and ortho-images (Fig. 6d) and to check the advances in the excavation or the excavation volumes (Fig. 6e).

Figure 6. A mosaic view of the excavation area in Pava (Siena, Italy) surveyed with UAV images for excavation volume computation and GIS applications (a). The derived DSM shown as shaded (b) and textured model (c) and the produced ortho-image (d) [75]. If multi-temporal images are available, DSM differences can be computed for excavation volume estimation (e)

Urban areas
A UAV platform can be used to survey small urban or heritage areas, when national regulations allow doing it, for cartographic, mapping and cadastral applications. These images have a very high resolution if flights are performed at 100-200 m height over the ground. Very high overlaps are recommended in order to reduce occluded areas and achieve more complete and detailed DSMs. A sufficient number of GCPs is mandatory in order to geo-reference the processed images within the bundle adjustment and derive point clouds: the number of GCPs varies according to the image block dimensions and the complexity of the surveyed area. The quality of the achieved point clouds is usually very high (up to a few centimetres) and these data can thus be used for further analyses and feature extraction.

In Fig. 7, a dense urban area in Bandung (Indonesia) is shown: the area was surveyed with an electric fixed-wing RPV platform at an average height of about 150 m. Due to the weather conditions (quite strong wind) and the absence of an auto-pilot onboard, the acquired images (ca 270, average GSD about 5 cm) are not perfectly aligned in strips (Fig. 7b). After the bundle block adjustment, a dense DSM was created for the estimation of the population in the surveyed area and for map production.

Figure 7. A mosaic over an urban area in Bandung, Indonesia (a). Visualization of the bundle adjustment results (b) of the large UAV block (ca 270 images) and a close view of the produced DSM over the urban area, shown as point cloud (c, d) and shaded mode (e)

3.2.7 CONCLUSIONS AND FUTURE DEVELOPMENTS
The article presented an overview of existing UAV systems, problems and applications, with particular attention to the Geomatics field. The examples reported in the paper show the current state-of-the-art of photogrammetric UAV technology in different application domains. Although automation is not always demanded, the reported achievements demonstrate the high level of autonomous photogrammetric processing.

UAVs have recently received a lot of attention, since they are fairly inexpensive platforms, with navigation/control devices and recording sensors for quick digital data production. The great advantage of actual UAV systems is the ability to quickly deliver high temporal and spatial resolution information and to allow a rapid response in a number of critical situations where immediate access to 3D geo-information is crucial. Indeed, they feature real-time capability for fast data acquisition, transmission and, possibly, processing. UAVs can be used in high-risk situations and inaccessible areas, although they still have some limitations, in particular regarding payload, insurance and stability. Rotary wing UAV platforms can even take off and land vertically, thus no runway area is required, while fixed wing UAVs can cover wider areas in a few minutes. For some applications not demanding very accurate 3D results, complete remote sensing solutions based on open hardware and software are also available. And in the case of small-scale applications, UAVs can be a complement to or a replacement of terrestrial acquisition (images or range data).

The derived high-resolution images (GSD generally at the centimetre level) can be used, besides very dense point cloud generation, for texture mapping purposes on existing 3D data or for orthophoto production, mosaic, map and drawing generation. If compared to traditional airborne platforms, UAVs decrease the operational costs and reduce the risk of access in harsh environments, while keeping a high accuracy potential. But the small or medium format cameras which are generally employed, in particular on low-cost and small-payload systems, enforce the acquisition of a higher number of images in order to achieve the same image coverage at a comparable resolution. In these conditions, automated and reliable orientation software is strictly recommended to reduce the processing time. Some reliable solutions are nowadays available, even in the low-cost open-source sector. Accuracies in the order of 2-3 pixels are normally reported, also due to the camera performances, image network quality, un-modelled errors, etc.
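Reported image-space accuracies (a few pixels) translate into ground units through the GSD, and check-point residuals are typically summarized as an RMSE per coordinate axis, as in the Pava evaluation above. A minimal sketch with illustrative residuals (not measured values):

```python
import math

def rmse(errors_m):
    """Root mean square error of check-point residuals along one axis (metres)."""
    return math.sqrt(sum(e * e for e in errors_m) / len(errors_m))

def to_pixels(rmse_m, gsd_m):
    """Express a ground-space RMSE as a multiple of the GSD (i.e. in pixels)."""
    return rmse_m / gsd_m

# Illustrative residuals at five independent check points (metres), with a 1 cm GSD:
residuals = [0.012, -0.018, 0.025, -0.031, 0.009]
r = rmse(residuals)        # ~0.021 m on the ground
px = to_pixels(r, 0.01)    # ~2.1 pixels, consistent with the 2-3 pixel figures above
```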

The stability of low-cost and light platforms is generally an important issue, in particular in windy areas, although camera and platform stabilizers can reduce the weather dependency. Generally, the stability issue is solved by shooting many images (continuous acquisition or multiple shots at the predefined waypoints) and using, during the processing phase, only the best images. High-altitude surveying can affect gasoline and turbine engines, while the payload limitation enforces the use of lightweight GNSS/IMU sensors, thus denying direct geo-referencing solutions. New reliable navigation systems are nowadays available, but their cost has so far limited their use to very few examples. A further drawback is the system manoeuvring and transportation, which generally requires at least two persons.

In the near future, the most feasible improvements should be related to payload, autonomy and stability issues, as well as faster (or even real-time) data processing thanks to GPU programming [76]. High-end navigation sensors, like DGPS and inexpensive INS, would allow direct geo-referencing with accurate results. In the case of low-end navigation systems, real-time image orientation could be achieved with onboard advanced SLAM (Simultaneous Localisation And Mapping) methods [77-78]. Lab post-processing will most probably always be mandatory for applications requiring high-accuracy results. On the other hand, the acquisition of image blocks with a suitable geometry for the photogrammetric process is still a critical task, especially in the case of large-scale projects and non-flat objects (e.g. buildings, towers, rock faces, etc.). While the planning of the image acquisition is quite simple when using nadir images, the same task becomes much more complex in the case of 3D objects requiring convergent images. Two or more flights can be necessary over large areas, when UAVs with reduced endurance limits are used, leading to images with illumination changes due to the different acquisition times that may affect the DSM generation and orthoimage quality. Future research has also to be addressed to develop tools for simplifying this task. Other hot research issues tied to UAV applications are related to the use of new sensors on-board, like thermal, multispectral [80] or range imaging cameras [81], just to cite some of them.

UAV regulations are under development in several countries all around the world, in order to propose some technical specifications and areas where these devices can
be used (e.g. over urban settlements), increasing the range
of their applications. At the moment, the lack of precise
rule frameworks and the tedious requests for flight
permissions, represent the biggest limitation for UAV
applications. Hopefully the incoming rules will regulate
UAV applications for surveying issues with a simple
letter of intent.
Considering an entire UAV-based field campaign (Fig. 8)
and based on the authors experience, we can safely say
that, although automation has reached satisfactory level
of performances for automated tie point extraction and
DSM generation, an high percentage of the time is
absorbed by the image orientation and GCPs
measurements, in particular if direct geo-referencing
cannot be performed. The time requested for the feature
extraction depends on the typology of feature to be
extracted and is generally a time-consuming phase too.

Figure 8. Approximate time effort in a typical


UAV-based photogrammetric workflow

References
[1]. http://www.uvs-international.org/
December, 2012).

(last

accessed:

[2]. SANNA, A.; PRALIO, B. Simulation and control of


mini UAVs. Proc. 5th WSEAS Int. Conference on
Simulation, Modelling and Optimization, 2005, pp.
135-141.

The GCPs measurement step represents an important


issue with UAV image blocks. As the accuracy of the
topographic network is influencing the image
triangulation accuracy and the GSD of the images is often
reaching the centimetre level, there might be problems in
reaching sub-pixel accuracies at the end of the image
triangulation process. So far, in the literature, RMSEs of

[3]. VON BLYENBURG, P. UAVs-Current Situation and


Considerations for the Way Forward. In: RTO-AVT
Course on Development and Operation of UAVs for
Military and Civil Applications, 1999.

83

[4]. COLOMINA, I.; BLÁZQUEZ, M.; MOLINA, P.; PARÉS, M.E.; WIS, M. Towards a new paradigm for high-resolution low-cost photogrammetry and remote sensing. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Beijing, China, 2008; Vol. 37 (B1), pp. 1201-1206.

[5]. EISENBEISS, H. UAV photogrammetry. Diss. ETH No. 18515, Institute of Geodesy and Photogrammetry, ETH Zurich, Switzerland, Mitteilungen Nr. 105, 2009; p. 235.

[6]. BENDEA, H.; BOCCARDO, P.; DEQUAL, S.; GIULIO TONOLO, F.; MARENCHINO, D.; PIRAS, M. Low cost UAV for post-disaster assessment. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Beijing, China, 2008; Vol. 37 (B1), pp. 1373-1379.

[7]. GRENZDÖRFFER, G.J.; ENGEL, A.; TEICHERT, B. The photogrammetric potential of low-cost UAVs in forestry and agriculture. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Beijing, China, 2008; Vol. 37 (B1), pp. 1207-1213.

[8]. MEIER, L.; TANSKANEN, P.; FRAUNDORFER, F.; POLLEFEYS, M. The PIXHAWK open-source computer vision framework for MAVs. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Zurich, Switzerland, 2011; Vol. 38 (1/C22).

[9]. NEITZEL, F.; KLONOWSKI, J. Mobile 3D mapping with a low-cost UAV system. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Zurich, Switzerland, 2011; Vol. 38 (1/C22).

[10]. NIETHAMMER, U.; ROTHMUND, S.; SCHWADERER, U.; ZEMAN, J.; JOSWIG, M. Open source image-processing tools for low-cost UAV-based landslide investigation. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Zurich, Switzerland, 2011; Vol. 38 (1/C22).

[11]. STEMPFHUBER, W.; BUCHHOLZ, M. A precise, low-cost RTK GNSS system for UAV applications. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Zurich, Switzerland, 2011; Vol. 38 (1/C22).

[12]. VALLET, J.; PANISSOD, F.; STRECHA, C.; TRACOL, M. Photogrammetric performance of an ultra-light weight Swinglet UAV. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Zurich, Switzerland, 2011; Vol. 38 (1/C22).

[13]. NAGAI, M.; SHIBASAKI, R.; MANANDHAR, D.; ZHAO, H. Development of digital surface and feature extraction by integrating laser scanner and CCD sensor with IMU. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Istanbul, Turkey, 2004; Vol. 35 (B5).

[14]. VIERLING, L.A.; FERSDAHL, M.; CHEN, X.; LI, Z.; ZIMMERMAN, P. The Short Wave Aerostat-Mounted Imager (SWAMI): a novel platform for acquiring remotely sensed data from a tethered balloon. Remote Sensing of Environment, 2006, Vol. 103, pp. 255-264.

[15]. WANG, W.Q.; PENG, Q.C.; CAI, J.Y. Waveform-diversity-based millimeter-wave UAV SAR remote sensing. IEEE Transactions on Geoscience and Remote Sensing, 2009, Vol. 47 (3), pp. 691-700.

[16]. BERNI, J.A.J.; ZARCO-TEJADA, P.J.; SUÁREZ, L.; GONZÁLEZ-DUGO, V.; FERERES, E. Remote sensing of vegetation from UAV platforms using lightweight multispectral and thermal imaging sensors. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Hannover, Germany, 2009; Vol. 38 (1-4-7/W5).

[17]. KOHOUTEK, T.K.; EISENBEISS, H. Processing of UAV based range imaging data to generate detailed elevation models of complex natural structures. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Melbourne, Australia, 2012; Vol. 39 (1).

[18]. GRENZDÖRFFER, G.; NIEMEYER, F.; SCHMIDT, F. Development of four vision camera system for micro-UAV. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Melbourne, Australia, 2012; Vol. 39 (1).

[19]. HUCKRIDGE, D.A.; EBERT, R.R. Miniature imaging devices for airborne platforms. Proc. SPIE, 2008, Vol. 7113, 71130M, http://dx.doi.org/10.1117/12.799635

[20]. SCHAFROTH, D.; BOUABDALLAH, S.; BERMES, C.; SIEGWART, R. From the test benches to the first prototype of the muFly micro helicopter. Journal of Intelligent and Robotic Systems, 2009, Vol. 54 (1-3), pp. 245-260.

[21]. FRANCESCHINI, N.; RUFFIER, F.; SERRES, J. A bio-inspired flying robot sheds light on insect piloting abilities. Current Biology, 2007, Vol. 17 (4), pp. 329-335.

[22]. MOORE, R.J.D.; THURROWGOOD, S.; SOCCOL, D.; BLAND, D.; SRINIVASAN, M.V. A bio-inspired stereo vision system for guidance of autonomous aircraft. In: Proc. Int. Symposium on Flying Insects and Robots, Ascona, Switzerland, 2007.

[23]. NIRANJAN, S.; GUPTA, G.; SHARMA, N.; MANGAL, M.; SINGH, V. Initial efforts toward mission-specific imaging surveys from aerial exploring platforms: UAV. In: Map World Forum, Hyderabad, India, 2007; on CD-ROM.

[24]. EVERAERTS, J. The use of unmanned aerial vehicles (UAVs) for remote sensing and mapping. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Beijing, China, 2008; Vol. 37 (B1), pp. 1187-1192.

[25]. NEWCOMBE, L. Green fingered UAVs. Unmanned Vehicle, 2007.

[26]. MARTINEZ, J.R.; MERINO, L.; CABALLERO, F.; OLLERO, A.; VIEGAS, D.X. Experimental results of automatic fire detection and monitoring with UAVs. Forest Ecology and Management, 2006, Vol. 234S, p. S232.

[27]. RESTÁS, A. The regulation Unmanned Aerial Vehicle of the Szendro Fire Department supporting fighting against forest fires - 1st in the world! Forest Ecology and Management, 2006, Vol. 234S.

[28]. BERNI, J.A.J.; ZARCO-TEJADA, P.J.; SUÁREZ, L.; FERERES, E. Thermal and narrowband multispectral remote sensing for vegetation monitoring from an unmanned aerial vehicle. IEEE Transactions on Geoscience and Remote Sensing, 2009, Vol. 47, pp. 722-738.

[29]. ÇABUK, A.; DEVECI, A.; ERGINCAN, F. Improving heritage documentation. GIM International, 2007, Vol. 21 (9).

[30]. LAMBERS, K.; EISENBEISS, H.; SAUERBIER, M.; KUPFERSCHMIDT, D.; GAISECKER, TH.; SOTOODEH, S.; HANUSCH, TH. Combining photogrammetry and laser scanning for the recording and modelling of the Late Intermediate Period site of Pinchango Alto, Palpa, Peru. Journal of Archaeological Science, 2007, Vol. 34 (10), pp. 1702-1712.

[31]. OCZIPKA, M.; BEMMAN, J.; PIEZONKA, H.; MUNKABAYAR, J.; AHRENS, B.; ACHTELIK, M.; LEHMANN, F. Small drones for geo-archaeology in the steppes: locating and documenting the archaeological heritage of the Orkhon Valley in Mongolia. In: Remote Sensing for Environmental Monitoring, GIS Applications and Geology IX, 2009, Vol. 7874, pp. 787406-1.

[32]. CHIABRANDO, F.; NEX, F.; PIATTI, D.; RINAUDO, F. UAV and RPV systems for photogrammetric surveys in archaeological areas: two tests in the Piedmont region (Italy). Journal of Archaeological Science, 2011, Vol. 38, pp. 697-710, ISSN 0305-4403, doi: 10.1016/j.jas.2010.10.022.

[33]. RINAUDO, F.; CHIABRANDO, F.; LINGUA, A.; SPANÒ, A. Archaeological site monitoring: UAV photogrammetry could be an answer. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Melbourne, Australia, 2012; Vol. 39 (5).

[34]. THAMM, H.P.; JUDEX, M. The low cost drone - an interesting tool for process monitoring in a high spatial and temporal resolution. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Enschede, The Netherlands, 2006; Vol. 36 (7).

[35]. NIETHAMMER, U.; ROTHMUND, S.; JAMES, M.R.; TRAVELETTI, J.; JOSWIG, M. UAV-based remote sensing of landslides. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Newcastle upon Tyne, UK, 2010; Vol. 38 (5), on CD-ROM.

[36]. ZHANG, C. An UAV-based photogrammetric mapping system for road condition assessment. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Beijing, China, 2008; Vol. 37.

[37]. MANYOKY, M.; THEILER, P.; STEUDLER, D.; EISENBEISS, H. Unmanned aerial vehicle in cadastral applications. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Zurich, Switzerland, 2011; Vol. 38 (1/C22).

[38]. HARTMANN, W.; TILCH, S.; EISENBEISS, H.; SCHINDLER, K. Determination of the UAV position by automatic processing of thermal images. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Melbourne, Australia, 2012; Vol. 39 (5).

[39]. SMITH, J.G.; DEHN, J.; HOBLITT, R.P.; LAHUSEN, R.G.; LOWENSTERN, J.B.; MORAN, S.C.; MCCLELLAND, L.; MCGEE, K.A.; NATHENSON, M.; OKUBO, P.G.; PALLISTER, J.S.; POLAND, M.P.; POWER, J.A.; SCHNEIDER, D.J.; SISSON, T.W. Volcano monitoring. In: Young and Norby (Eds), Geological Monitoring, Geological Society of America, 2009, pp. 273-305, doi: 10.1130/2009.

[40]. CHOU, T.-Y.; YEH, M.-L.; CHEN, Y.C.; CHEN, Y.H. Disaster monitoring and management by the unmanned aerial vehicle technology. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vienna, Austria, 2010; Vol. 38 (7B), pp. 137-142.

[41]. HAARBRINK, R.B.; KOERS, E. Helicopter UAV for photogrammetry and rapid response. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Antwerp, Belgium, 2006; Vol. 36 (1/W44).

[42]. MOLINA, P.; COLOMINA, I.; VITORIA, T.; SILVA, P.F.; SKALOUD, J.; KORNUS, W.; PRADES, R.; AGUILERA, C. Searching lost people with UAVs: the system and results of the CLOSE-SEARCH project. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Melbourne, Australia, 2012; Vol. 39 (1).

[43]. PURI, A.; VALAVANIS, P.; KONTITSIS, M. Statistical profile generation for traffic monitoring using real-time UAV based video data. In: Mediterranean Conference on Control & Automation, Athens, Greece, 2007; on CD-ROM.

[44]. PUESCHEL, H.; SAUERBIER, M.; EISENBEISS, H. A 3D model of Castle Landenberg (CH) from combined photogrammetric processing of terrestrial and UAV-based images. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Beijing, China, 2008; Vol. 37 (B6), pp. 96-98.

[45]. REMONDINO, F.; GRUEN, A.; VON SCHWERIN, J.; EISENBEISS, H.; RIZZI, A.; SAUERBIER, M.; RICHARDS-RISSETTO, H. Multi-sensor 3D documentation of the Maya site of Copan. In: Proc. of 22nd CIPA Symposium, Kyoto, Japan, 2009; on CD-ROM.

[46]. PRZYBILLA, H.-J.; WESTER-EBBINGHAUS, W. Bildflug mit ferngelenktem Kleinflugzeug. Bildmessung und Luftbildwesen: Zeitschrift für Photogrammetrie und Fernerkundung. Herbert Wichmann Verlag, Karlsruhe, Germany, 1979.

[47]. REMONDINO, F.; FRASER, C. Digital camera calibration methods: considerations and comparisons. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 2006; Vol. 36 (5), pp. 266-272.

[48]. COLOMINA, I.; AIGNER, E.; AGEA, A.; PEREIRA, M.; VITORIA, T.; JARAUTA, R.; PASCUAL, J.; VENTURA, J.; SASTRE, J.; BRECHBÜHLER DE PINHO, G.; DERANI, A.; HASEGAWA, J. The uVISION project for helicopter-UAV photogrammetry and remote-sensing. In: Proceedings of the 7th International Geomatic Week, Barcelona, Spain, 2007.

[49]. BROWN, D.C. The bundle adjustment - progress and prospects. In: Int. Archives of Photogrammetry, 1976, Vol. 21 (3).

[50]. TRIGGS, W.; MCLAUCHLAN, P.; HARTLEY, R.; FITZGIBBON, A. Bundle adjustment - a modern synthesis. In: Triggs, Zisserman and Szeliski (Eds), Vision Algorithms: Theory and Practice, LNCS, Springer Verlag, 2000; pp. 298-375.

[51]. GRUEN, A.; BEYER, H.A. System calibration through self-calibration. In: Gruen and Huang (Eds), Calibration and Orientation of Cameras in Computer Vision, Springer Series in Information Sciences, 2001, Vol. 34, pp. 163-194.

[52]. BARAZZETTI, L.; SCAIONI, M.; REMONDINO, F. Orientation and 3D modelling from markerless terrestrial images: combining accuracy with automation. The Photogrammetric Record, 2010, Vol. 25 (132), pp. 356-381.

[53]. PIERROT-DESEILLIGNY, M.; CLERY, I. APERO, an open source bundle adjustment software for automatic calibration and orientation of set of images. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Trento, Italy, 2011; Vol. 38 (5/W16), on CD-ROM.

[54]. HARTLEY, R.; ZISSERMAN, A. Multiple View Geometry in Computer Vision. Cambridge University Press, 2004.

[55]. SNAVELY, N.; SEITZ, S.M.; SZELISKI, R. Modeling the world from Internet photo collections. International Journal of Computer Vision, 2008, Vol. 80 (2), pp. 189-210.

[56]. ROBERTSON, D.P.; CIPOLLA, R. Structure from motion. In: Varga, M. (Ed.), Practical Image Processing and Computer Vision, John Wiley, 2009.

[57]. WU, C. VisualSFM: a visual structure from motion system. http://www.cs.washington.edu/homes/ccwu/vsfm/, 2011.

[58]. SNAVELY, N.; SEITZ, S.M.; SZELISKI, R. Modeling the world from Internet photo collections. International Journal of Computer Vision, 2008, Vol. 80 (2), pp. 189-210.

[59]. PFEIFER, N.; GLIRA, P.; BRIESE, C. Direct georeferencing with on board navigation components of light weight UAV platforms. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Melbourne, Australia, 2012; Vol. 39 (7).

[60]. ZHOU, G. Near real-time orthorectification and mosaic of small UAV video flow for time-critical event response. IEEE Transactions on Geoscience and Remote Sensing, 2009, Vol. 47 (3), pp. 739-747.

[61]. EUGSTER, H.; NEBIKER, S. UAV-based augmented monitoring - real-time georeferencing and integration of video imagery with virtual globes. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Beijing, China, 2008; Vol. 37 (B1), pp. 1229-1235.

[62]. WANG, J.; GARRATT, M.; LAMBERT, A.; WANG, J.J.; HAN, S.; SINCLAIR, D. Integration of GPS/INS/vision sensors to navigate unmanned aerial vehicles. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Beijing, China, 2008; Vol. 37 (B1), pp. 963-969.

[63]. BARAZZETTI, L.; REMONDINO, F.; SCAIONI, M. Fully automated UAV image-based sensor orientation. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Calgary, Canada, 2010; Vol. 38 (1), on CD-ROM.

[64]. ANAI, T.; SASAKI, T.; OSARAGI, K.; YAMADA, M.; OTOMO, F.; OTANI, H. Automatic exterior orientation procedure for low-cost UAV photogrammetry using video image tracking technique and GPS information. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Melbourne, Australia, 2012; Vol. 39 (7).

[65]. GRANSHAW, S.I. Bundle adjustment methods in engineering photogrammetry. The Photogrammetric Record, 1980, Vol. 10 (56), pp. 111-126.

[66]. DERMANIS, A. The photogrammetric inner constraints. ISPRS Journal of Photogrammetry and Remote Sensing, 1994, Vol. 49 (1), pp. 25-39.

[67]. SEITZ, S.; CURLESS, B.; DIEBEL, J.; SCHARSTEIN, D.; SZELISKI, R. A comparison and evaluation of multi-view stereo reconstruction algorithms. In: Proc. IEEE Conf. CVPR'06, New York, 17-22 June 2006; Vol. 1, pp. 519-528.

[68]. VU, H.H.; KERIVEN, R.; LABATUT, P.; PONS, J.-P. Towards high-resolution large-scale multi-view stereo. In: Proc. IEEE Conf. CVPR'09, 2009, pp. 1430-1437.

[69]. ZHU, Q.; ZHANG, Y.; WU, B.; ZHANG, Y. Multiple close-range image matching based on self-adaptive triangle constraint. The Photogrammetric Record, 2010, Vol. 25 (132), pp. 437-453.

[70]. GERKE, S.; MORIN, K.; DOWNEY, M.; BOEHRER, N.; FUCHS, T. Semi-global matching: an alternative to LiDAR for DSM generation? In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Calgary, Canada, 2010; Vol. 38 (1), on CD-ROM.

[71]. HIRSCHMÜLLER, H. Stereo processing by semi-global matching and mutual information. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008, Vol. 30 (2), pp. 328-341.

[72]. FURUKAWA, Y.; PONCE, J. Accurate, dense and robust multiview stereopsis. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2010, Vol. 32 (8), pp. 1362-1376.

[73]. PIERROT-DESEILLIGNY, M.; PAPARODITIS, N. A multiresolution and optimization-based image matching approach: an application to surface reconstruction from SPOT5-HRS stereo imagery. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Antalya, Turkey, 2006; Vol. 36 (1/W41), on CD-ROM.

[74]. FIORILLO, F.; JIMENEZ FERNANDEZ-PALACIOS, B.; REMONDINO, F.; BARBA, S. 3D surveying and modelling of the archaeological area of Paestum, Italy. In: Proc. 3rd Int. Conference Arqueológica 2.0, Sevilla, Spain, 2012.

[75]. REMONDINO, F.; BARAZZETTI, L.; NEX, F.; SCAIONI, M.; SARAZZI, D. UAV photogrammetry for mapping and 3D modeling - current status and future perspectives. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Zurich, Switzerland, 2011; Vol. 38 (1/C22). ISPRS Conference UAV-g.

[76]. WENDEL, A.; MAURER, M.; GRABER, G.; POCK, T.; BISCHOF, H. Dense reconstruction on-the-fly. In: Proc. IEEE Int. CVPR Conference, Providence, USA, 2012.

[77]. KONOLIGE, K.; AGRAWAL, M. FrameSLAM: from bundle adjustment to real-time visual mapping. IEEE Journal of Robotics and Automation, 2008, Vol. 24 (5), pp. 1066-1077.

[78]. NUECHTER, A.; LINGEMANN, K.; HERTZBERG, J.; SURMANN, H. 6D SLAM - 3D mapping outdoor environments. Journal of Field Robotics (JFR), Special Issue on Quantitative Performance Evaluation of Robotic and Intelligent Systems, 2007, Vol. 24 (8-9), pp. 699-722.

[79]. STRASDAT, H.; MONTIEL, J.M.M.; DAVISON, A.J. Scale drift-aware large scale monocular SLAM. In: Robotics: Science and Systems, 2010.

[80]. BOLTEN, A.; BARETH, G. Introducing a low-cost mini-UAV for thermal and multispectral imaging. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Melbourne, Australia, 2012; Vol. 39 (1).

[81]. LANGE, S.; SÜNDERHAUF, N.; NEUBERT, P.; DREWS, S.; PROTZEL, P. Autonomous corridor flight of a UAV using a low-cost and light-weight RGB-D camera. In: Advances in Autonomous Mini Robots, Proc. 6th AMiRE Symposium, 2011, pp. 183-192, ISBN 978-3-642-27481-7.

4
REMOTE SENSING

4.1 EXPLORING ARCHAEOLOGICAL LANDSCAPES WITH SATELLITE IMAGERY

NIKOLAOS GALIATZATOS

4.1.1 INTRODUCTION

According to Clark et al. (1998):

Landscape archaeology is a geographical approach whereby a region is investigated in an integrated manner, studying sites and artefacts not in isolation, but as aspects of living societies that once occupied the landscape.

In landscape archaeology, the integration of data such as land cover, land use, vegetation, geology, geomorphology, and the location of major roads and hydrographical features helps to provide the context for human activity and evidence of human occupation in the landscape. The spatial context and geographical distribution are important for the interpretation and understanding of the historic landscape. For example, as part of the theoretical framework of landscape archaeology, roadways reflect the interplay among technology, environment, social structure, and the values of a culture (Trombold, 1991). And under certain circumstances, it may be possible to make inferences regarding past environments by interpreting the contemporary landscapes. Gathering such data from the ground might be possible, but it is often prohibitively expensive because of the need to collect large amounts of data. Therefore, archaeologists are increasingly interested in effective, objective and cost-effective methods to gather information from sources such as aerial photography and satellite remote sensing. Aerial photography often provides a mechanism to detect traces of the past like soil and crop marks (Scollar et al., 1990). On the other hand, satellite remote sensing provides environmental and archaeological information over large areas, such as patterns in the landscape and field systems. By allowing archaeologists to recognise patterns at different spatial and temporal scales, satellite imagery provides the means for moving from local to regional and from static to dynamic descriptions of the landscape (Kouchoukos, 2001).

The young archaeologist should never forget that landscapes are cultural products, that is, a combination of the natural environment with human values, interpretations and beliefs intertwined. Remote sensing may show that two landscapes are environmentally similar, but it will never show the difference in significance of the two landscapes in terms of values and beliefs (for example, the mountain Olympos in ancient Greece) (Philip, personal communication, 2003).

In the following paragraphs, the archived and existing satellite data will be discussed according to their properties. Then there will be a discussion of the current modelling techniques that bring the data into a form that can be processed, combined and analysed for information extraction. This is the information that the application needs. The quality of the information depends on the properties of the selected satellite imagery, the quality of the reference data needed for the pre-processing stage, the recording of certainty, and, last but not least, on what the application needs.

4.1.2 SATELLITE IMAGES AND THEIR PROPERTIES

Photography existed long before satellite observation. L.J.M. Daguerre and J.N. Niepce developed the first commonly used form of photograph between 1835 and 1839. In 1845 the first panoramic photograph was taken, and in 1849 an exhaustive program started to prove that photography could be used for the creation of topographic maps. In the same year, the first stereo-photography was produced. In 1858, Gaspard Felix Tournachon took the first known photographs from an overhead platform, a balloon (Philipson, 1997). For the next 101 years, aerial photography was developed and widely used in military and civilian applications. The platforms changed to include kites, pigeons, balloons and airplanes (chapter 2 in Reeves, 1975). In 1957, the USSR (Union of Soviet Socialist Republics) put the first satellite, Sputnik 1, into orbit, and the era of satellite remote sensing began with the first systematic satellite observation of the Earth by the meteorological satellite

TIROS-1 (Television Infrared Observation Satellite Program) in 1960, a meteorological satellite designed for weather forecasting. The era of satellite photogrammetry1 starts in 1960 with the CORONA military reconnaissance program. The era of using satellite images for mapping and making measurements starts in 1962 with the CORONA KH-4 satellite design. While civilian satellites evolved along the lines of the multispectral concept (Landgrebe, 1997) with the advent of ERTS-1 (Earth Resources Technology Satellite), or Landsat-1, in 1972, the military reconnaissance satellites proceeded to use higher resolution imagery and followed a very different path (Richelson, 1999). After 1972, several satellite sensor systems similar to Landsat were launched, such as SPOT HRV (Système Pour l'Observation de la Terre - High Resolution Visible) and the Indian LISS. Other highlights in the history of satellite remote sensing include the insertion of radar systems into space, the proliferation of weather satellites, a series of specialised devices dealing with environmental monitoring or with thermal and passive microwave sensors, and the more recent hyperspectral sensors. The first commercial very high resolution (VHR) satellite to be launched successfully was IKONOS-2 in 1999. It was followed by Quickbird in 2001, OrbView-3 in 2003, and more to follow (Kompsat-2, EROS-B1 and Resource-DK-1 in 2006 alone).

1 A definition of satellite photogrammetry may be found in Slama et al. (1980): "Satellite photogrammetry, as distinguished from conventional aerial photogrammetry, consists of the theory and techniques of photogrammetry where the sensor is carried on a spacecraft and the sensor's output (usually in the form of images) is utilised for the determination of coordinates on the moon or planet being investigated." From this definition, it is obvious that people were not openly aware of the military use of the same techniques towards our own planet, Earth.

All these remote sensing satellites created (and keep creating) a large archive of imagery, to which everybody has access. Further imagery can be ordered from the currently operating remote sensing satellites. However, these data cost money and time to acquire; hence one must be careful with the choice of satellite imagery to use in the project/application. Some of the factors to take into consideration are the format (digital or analogue), the time (which year, when in the year), the spatial detail (spatial resolution), the spectral range/response that is reflected or emitted by the surface of the Earth (spectral resolution), the range and number of brightness values (radiometric resolution or dynamic range), the spatial coverage (swath width) and the cost. The general purpose of acquiring remote sensing image data is to be able to identify and assess either surface materials or their spatial properties, which can then be related to the application needs.

To illustrate the above-mentioned properties of satellite imagery, some examples will be used. In figure 1, the spatial resolution is displayed. The Landsat image has a spatial resolution of 30 m, hence features like the fences are not visible. As mentioned earlier though, it also depends on the application. For example, if we are interested simply in land cover mapping (e.g. forest, urban land, and agricultural land), then the Landsat image is enough. In the same application, the 1 m spatial resolution of IKONOS-2 will probably show the individual trees but miss the forest.

Figure 1. Illustration of the spatial resolution property

Figure 2 illustrates the radiometric resolution advantage. While in most satellites the radiometric resolution is 8-bit (that is, 256 different levels of gray), in IKONOS-2 (illustrated in figure 2) and other modern satellites the radiometric resolution increases to 11-bit (or more), which translates to 2048 different levels of gray. This results in seeing more clearly the features that are covered by shadow or that lie on top of a very reflective area.
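The relation between bit depth and the number of gray levels discussed above is a simple power of two, and an 11-bit image usually has to be rescaled before it can be shown on a standard 8-bit display. The following minimal Python sketch makes the numbers concrete; the function names and the naive linear mapping are illustrative choices, not taken from any particular remote sensing package:

```python
def gray_levels(bits: int) -> int:
    """Number of distinct digital numbers an image of the given bit depth can store."""
    return 2 ** bits

def rescale_to_8bit(dn: int, bits: int = 11) -> int:
    """Linearly map a digital number from an n-bit range onto the 0-255 display range."""
    return round(dn * 255 / (gray_levels(bits) - 1))

print(gray_levels(8))         # 256 levels, as in most older satellite sensors
print(gray_levels(11))        # 2048 levels, as in IKONOS-2
print(rescale_to_8bit(1024))  # 128: a mid-range 11-bit value lands near mid-gray
```

In practice, image-processing software usually replaces this naive linear mapping with a contrast stretch tuned to the image histogram, which is precisely how detail in shadowed or very reflective areas is kept visible after the reduction to 256 display levels.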

REMOTE SENSING

Figure 2. The high radiometric resolution of IKONOS-2 (11-bit)


allows for better visibility at the shadows of the clouds

seamless image coverage of a large area can be very


useful for many applications, and here is where large
swath width can prove useful.

The spectral resolution depends on which part of the


spectrum the satellites can detect, and how many portions
of this part can be detected. The hyperspectral sensors
can provide a smoother spectral signature of the different
features, which can be easily compared and verified with
laboratory measurements and hence confirm the type of
feature under observation. The multispectral sensors can
distinguish features in a less smooth fashion, but still
enough for many applications. The reason why the
satellites look only at a particular part of the spectrum
lays on the route of the light through the atmosphere
where it is reflected, diffused, and absorbed. The
absorption occurs mainly because of vapours and carbon
dioxide gas. This leaves open the so-called atmospheric
windows for the satellite sensor to use. When the light
returns to the sensor after its travel from the sun to the
earth and then back to space, it transfers information
about the features it met. Each feature reflects light in a
unique way, which is called spectral signature. If we can
detect more distinctive parts of this signature then we
have better chances to successfully recognise the feature.
Understandably, there is no archaeological spectral
signature.

Apart from the above satellite image characteristics


(properties), the companies also offer a variety of
different products from each satellite. These vary from
raw data to over-precise data. For example the table 1
displays the different Landsat processing levels.
Almost every year the American Society of
Photogrammetry and Remote Sensing (ASPRS) produces
a list of the existing and future remote sensing satellite.2
This list is not exhaustive. According to the list there are
31 optical satellites in orbit and 27 planned, and 4 radar
satellites in orbit with 9 planned. There have been efforts
to classify the satellite imagery against the applications.
They are not necessary wrong, but they forget the
ingenuity of human to discover new ways to use the
data.
So, after choosing an image (or many images) that suits
the application with the least possible cost, it is time to
proceed to its processing. As you will notice (unless you
have purchased the high-precision product, which is georectified but simultaneously expensive) the image will not
display the real world accurately. There is need to rectify
this with the use of sensor models.

In figure 4, the swath width of the satellite imagery is


presented. The satellite sensor covers particular area per
scene, and this can be useful when calculating the cost.
This is because the area of interest does not necessarily
need a whole Landsat scene. However, the user buys it as
a whole scene. For this, the cost is usually calculated per
useful square km. On the other hand, instantaneous

2
http://www.asprs.org/Satellite-Information/Guide-to-Land-ImagingSatellites.html / (last accessed: December 2011).
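The comparison of an observed signature against laboratory signatures described above can be sketched with the spectral angle, a standard similarity measure that is insensitive to overall brightness differences. All reflectance values below are invented for illustration:

```python
import math

def spectral_angle(sig_a, sig_b):
    """Angle (radians) between two reflectance signatures seen as vectors.

    Smaller angles mean more similar spectral shapes; overall brightness
    differences do not change the angle.
    """
    dot = sum(a * b for a, b in zip(sig_a, sig_b))
    norm_a = math.sqrt(sum(a * a for a in sig_a))
    norm_b = math.sqrt(sum(b * b for b in sig_b))
    return math.acos(dot / (norm_a * norm_b))

# Invented reflectances in four broad bands (blue, green, red, NIR).
laboratory = {
    "vegetation": [0.04, 0.08, 0.05, 0.50],   # strong NIR plateau
    "bare soil":  [0.10, 0.15, 0.20, 0.30],   # gently rising curve
}
observed = [0.05, 0.09, 0.06, 0.45]           # pixel measured by the sensor

# Label the pixel with the laboratory signature of smallest angle.
best = min(laboratory, key=lambda k: spectral_angle(observed, laboratory[k]))
print(best)  # vegetation
```

With a hyperspectral sensor the same comparison simply runs over many more bands, which is why its smoother signatures are easier to match against laboratory curves.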

93

3D MODELING IN ARCHAEOLOGY AND CULTURAL HERITAGE

Figure 3. The left part displays the spectral resolution of different satellites. The right part illustrates the spectral signature from the point of view of hyperspectral, multispectral and panchromatic images respectively

Figure 4. Illustration of the different spatial coverage or swath width (nominal values in parentheses) (reproduced from http://www.asprs.org/a/news/satellites/ASPRS_DATABASE_021208.pdf, last accessed December 2011)

Table 1. Landsat processing levels as provided

Level 0 Reformatted (0R, RAW): Pixels are neither resampled nor geometrically corrected or registered. Radiometric artefacts are not removed.

Level 1 Radiometrically Corrected (1R, RADCOR): Pixels are neither resampled nor geometrically corrected or registered. Radiometric artefacts are removed and the data are calibrated to radiance units.

Level 1 System Corrected (1G): Standard product for most users. Radiometrically and geometrically corrected to a known map projection, image orientation and resampling algorithm. No atmospheric corrections are applied.

GTCE (Ground Terrain Corrected Enhanced) or L1T: Rectified using the SRTM, NED, CDAD, DTED, GTOPO30 DEMs and control points. Accuracy below 50 m RMSE.


4.1.3 THE SENSOR MODELS

To rectify the relationship between image and object, sensor models are required. They are separated into two categories: physical sensor models and generalised sensor models.

Physical sensor models represent the physical imaging process, and they need parameters such as orbital information, sensor and ephemeris data, Earth curvature, atmospheric refraction, and lens distortion to describe the position and orientation of the sensor with respect to an object's position. These parameters are statistically uncorrelated, as each parameter has a physical significance. Physical models are rigorous, such as those based on the collinearity equations, and they normally produce a highly accurate model.

Because they are sensor-dependent, it is not convenient for users to switch among different software packages or add new sensor models into their systems, and in some cases the physical sensor models are not available at all. Without knowing the above-mentioned parameters, it is very difficult to develop a rigorous physical sensor model.

For these reasons, generalised sensor models were developed, independent of sensor platforms and sensors. These model the transformation between image and object as some general function, without the inclusion of the physical imaging process. The function can take several different forms, such as polynomials, and since generalised models do not require knowledge of the sensor geometry, they are applicable to different sensor types and support real-time calculations, which are used in military surveillance applications. Also, because of their independence from the physical parameters, they provide a mechanism for commercial vendors to keep information about their sensors confidential.

However, when using conventional polynomials there is a tendency to oscillation, which produces much lower accuracy than a rigorous sensor model. Thus, there was a need for the civilian and military satellite companies/agencies to develop a generalised sensor model with high accuracy and without a functional relationship to the physical parameters of the satellite. For this reason, the Rational Function Model (RFM) was developed. This model is currently used by most VHR satellite companies.

The RFM is a generic form of polynomial models. It defines the relationship between a ground point and the corresponding image point as ratios of polynomials:

x = \frac{p_1(X,Y,Z)}{p_2(X,Y,Z)} = \frac{\sum_{i=0}^{m_1}\sum_{j=0}^{m_2}\sum_{k=0}^{m_3} a_{ijk}\, X^i Y^j Z^k}{\sum_{i=0}^{n_1}\sum_{j=0}^{n_2}\sum_{k=0}^{n_3} b_{ijk}\, X^i Y^j Z^k}

y = \frac{p_3(X,Y,Z)}{p_4(X,Y,Z)} = \frac{\sum_{i=0}^{m_1}\sum_{j=0}^{m_2}\sum_{k=0}^{m_3} c_{ijk}\, X^i Y^j Z^k}{\sum_{i=0}^{n_1}\sum_{j=0}^{n_2}\sum_{k=0}^{n_3} d_{ijk}\, X^i Y^j Z^k}

where x, y are normalised pixel coordinates on the image; X, Y, Z are normalised 3D coordinates on the ground; and a_ijk, b_ijk, c_ijk, d_ijk are polynomial coefficients. If we limit the polynomials to the third order (0 ≤ m1 ≤ 3, 0 ≤ m2 ≤ 3, 0 ≤ m3 ≤ 3, m1 + m2 + m3 ≤ 3), then the above equations can be re-written as follows:

row:    x = \frac{(1\; Z\; Y\; X\; \cdots\; Y^3\; X^3)\,(a_0\; a_1\; \cdots\; a_{19})^T}{(1\; Z\; Y\; X\; \cdots\; Y^3\; X^3)\,(1\; b_1\; \cdots\; b_{19})^T}

column: y = \frac{(1\; Z\; Y\; X\; \cdots\; Y^3\; X^3)\,(c_0\; c_1\; \cdots\; c_{19})^T}{(1\; Z\; Y\; X\; \cdots\; Y^3\; X^3)\,(1\; d_1\; \cdots\; d_{19})^T}

The superscript T denotes a vector transpose. Ratios of first-order terms represent distortions caused by optical projection; ratios of second-order terms approximate the corrections of Earth curvature, atmospheric refraction, lens distortions and more; ratios of third-order terms can be used for the correction of other unknown distortions with high-order components (Tao et al., 2000). Grodecki (2001) offers a detailed explanation of the RFM.

The polynomial coefficients are called Rational Function Coefficients (RFCs) (Tao et al., 2000) or Rational Positioning Capability (RPC) data (Open GIS Consortium, 1999), and the imagery provider gives them to the user for the application of the model. They are also termed Rational Polynomial Coefficients (RPCs, a term used by Space Imaging and by Fraser and Hanley, 2003), while the RFM is also termed the Universal Sensor Model (Open GIS Consortium, 1999). Dowman and Dolloff (2000) separate RFM and USM, considering USM to be an extension of the RFM. Like everything new, the terminology is not universal, but varies according to who is discussing the topic.

In the case of RPCs, there are differences among the VHR satellites. For example, the RPCs for IKONOS provide very good results, and a simple shift is enough to improve the supplied RPCs (Fraser and Hanley, 2005; Toutin, 2006), the imaging system being free of significant non-linearities (Fraser et al., 2002). The RPCs for QuickBird perform better when corrected with higher-order polynomials (Fraser et al., 2006; Toutin, 2006), and even better when the preprocessing occurs in combination with a rigorous approach (Valadan Zoej and Sadeghian, 2003; Toutin, 2003); this is because QuickBird is provided with complete physical model metadata, while the IKONOS physical model is still not published. However, this approach is sensitive to the number and distribution of the ground control (Wolniewicz, 2004). In all cases, the relief of the ground can influence the results.

4.1.4 PREPROCESSING STAGE

The preparation of the data before the processing stages has become a key issue for applications using multi-source digital data. The main steps include the translation of all data into digital format, and geometric and radiometric correction.
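To make the rational-polynomial machinery of the RFM in Section 4.1.3 concrete, the sketch below evaluates one image coordinate as a ratio of two third-order polynomials. The coefficients are invented for illustration; a real RPC file supplies 20 numerator and 20 denominator coefficients per coordinate, plus the normalisation offsets and scales for X, Y, Z, x and y, and fixes the ordering of the monomial terms by convention (the ordering used here is arbitrary):

```python
from itertools import product

def cubic_terms(X, Y, Z):
    """The 20 monomials X^i Y^j Z^k with i + j + k <= 3 used by a cubic RFM."""
    return [X**i * Y**j * Z**k
            for i, j, k in product(range(4), repeat=3) if i + j + k <= 3]

def rfm_coordinate(num_coeffs, den_coeffs, X, Y, Z):
    """One normalised image coordinate as a ratio of two cubic polynomials."""
    terms = cubic_terms(X, Y, Z)
    numerator = sum(a * t for a, t in zip(num_coeffs, terms))
    denominator = sum(b * t for b, t in zip(den_coeffs, terms))
    return numerator / denominator

# Invented coefficients: only the constant and the Z term of the numerator
# are non-zero, and the denominator reduces to the constant 1 (its first
# coefficient is fixed to 1 in the RFM, as in the equations above).
a = [0.0] * 20
a[0], a[1] = 0.01, 0.95        # terms[0] is 1, terms[1] is Z in this ordering
b = [0.0] * 20
b[0] = 1.0

x = rfm_coordinate(a, b, X=0.2, Y=-0.1, Z=0.05)   # 0.01 + 0.95 * 0.05 = 0.0575
```

The same evaluation with the c and d coefficient sets yields the second image coordinate y.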


Table 2. Description of error sources (Toutin, 2004)

Acquisition system:
Platform-related: platform movement (altitude, velocity); platform attitude (roll, pitch, yaw); viewing angles
Sensor-related: panoramic effect with field of view; sensor mechanics (scan rate, scanning velocity, etc.)
Instrument-related: time variations or drift; clock synchronicity

Area of interest:
Atmosphere-related: refraction and turbulence
Earth-related: rotation, curvature, topographic relief
Map-related: choice of coordinate system, approximation of reality

The kind of application and the level of accuracy required define the methods utilised for preprocessing. It mainly depends on the data characteristics and the nature of the application. The data preprocessing stage demands high accuracy, and so the best approach must always be sought according to the available means.

The possible error sources that need to be corrected in an image are separated into two broad categories: errors caused by the acquisition system, and errors caused by the observed area of interest. Some of these distortions, especially those related to instrumentation, are corrected at the ground receiving stations. Toutin's (2004) categorisation of all such errors is shown in Table 2. There are two main ways to rectify these distortions, and both require models and mathematical functions to be applied. One way is the use of rigorous physical models. These are applied in a distortion-by-distortion correction at the ground receiving station to offer different products (for example the IKONOS group of image products).

The other way is the use of generalised models with either polynomial or rational functions. The generalised models method was tried with success and is mostly used in this research project. Polynomial functions are still in use today by many users, mainly because of their simplicity; their usage was prevalent until the 1980s, but with the increased need for accuracy, other more detailed functions replaced them. Today, polynomial models are limited to nadir-viewing images, systematically corrected images or small images of relatively flat terrain (Bannari et al., 1995), and according to De Leeuw et al. (1988), the GCPs3 have to be numerous and distributed as evenly as possible in the area of interest.

3 Chen and Lee (1992) use the term RPCs (Registration Control Points), which is more precise. However, to avoid confusion, this term will not be used here.

4.1.5 THE BASEMAP PROBLEM

An important concept in spatial integration is the spatial standard. Geographical information systems (GIS) provide tools to make two or more different spatial data sources match each other, but without reference to a common basemap standard it is difficult to go any further. A spatial basemap provides a common framework that any data source can be registered to, and once registered, all other data meeting that same standard are immediately available for comparison with the new data. A basemap also commonly includes control points, precisely located benchmark coordinates that allow the error and accuracy of positional data to be readily determined.

Thus, it is important to establish a good basemap standard where all data can be registered. The basemap should offer good and reliable control for the rest of the data. For this, one should look at the quality of the data and their suitability as a basemap. The available project data are separated into two broad categories: the satellite data and the reference data.

4.1.6 IMAGE RECTIFICATION AND RESAMPLING

At this point, there is an understanding of the satellite data, all the data are in digital format, and the basemap layer has been assessed. The next step is the rectification and resampling of the data for integration under a common basemap.

Ehlers (1997) defines remote sensing image rectification (or geocoding) as the process of an actual pixelwise geometric transformation of an image to an absolute coordinate system. In this project, there is no absolute coordinate system. Instead, there are datasets of satellite and reference data, with an estimated error from an absolute coordinate system. For this reason, a better definition needs to be adopted for the integration of the project data. According to Ehlers (1997), this is defined as registration, which is the process of an actual geometric transformation of a slave image to the geometry of a master image or dataset.
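A first-order polynomial (affine) registration of a slave image to a master geometry can be sketched as follows; the control and check points are invented, and the normal equations are solved in plain Python purely for illustration:

```python
import math

def solve3(M, v):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [rhs] for row, rhs in zip(M, v)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * b for a, b in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

def fit_affine_axis(src, dst_coord):
    """Least-squares fit of x' = a0 + a1*x + a2*y over the control points."""
    rows = [[1.0, x, y] for x, y in src]
    # Normal equations (A^T A) a = A^T b, small enough to build by hand here.
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    atb = [sum(r[i] * b for r, b in zip(rows, dst_coord)) for i in range(3)]
    return solve3(ata, atb)

def master(x, y):                       # invented master-image geometry
    return 10.0 + 2.0 * x + 0.5 * y

gcps = [(0, 0), (1, 0), (0, 1), (2, 3)]             # control points (slave)
a0, a1, a2 = fit_affine_axis(gcps, [master(x, y) for x, y in gcps])

# Independent checkpoints, not used in the fit, give an honest residual error.
checks = [(5, 5), (3, 1)]
rmse = math.sqrt(sum((a0 + a1 * x + a2 * y - master(x, y)) ** 2
                     for x, y in checks) / len(checks))
```

The second axis y' is fitted the same way with its own three coefficients; the checkpoint RMSE computed here is exactly the kind of independent accuracy measure discussed below.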


However, one must always keep in mind that the RMSE can be a useful indicator of accurate image rectification only if another means of calibration is available to evaluate standards (Morad et al., 1996); otherwise, it is just a diagnostic of weak accuracy value (McGwire, 1996). The use of independent, well-distributed test points that are not used in the image geometric transformation gives a more precise estimate of the residual error (Ehlers, 1997).

Buiten and van Putten (1997) suggest a way to qualitatively assess the satellite data registration through applying tests. Thus, the user can gain a better insight into the quality of the image registration. For a detailed review on image registration see Zitová & Flusser (2003).

The image registration process has one more step after the application of the model: the resampling, which is part of the registration process. When transforming the slave image to the new geometry/location, the pixel positions will not be the same. Thus, new pixel values need to be interpolated. This is called resampling (Ehlers, 1997).

The three main resampling methods are nearest neighbour, bilinear interpolation, and cubic convolution interpolation. Nearest-neighbour resampling is simply the assignment of the brightness value of the raw pixel that is nearest to the centre of the registered pixel. Thus, the raw brightness values are retained. This resampling method is mainly preferred if the registered image is to be classified. Bilinear interpolation uses three linear interpolations over the four pixels that surround the registered pixel. Cubic convolution interpolation uses the surrounding sixteen pixels. Both bilinear and cubic interpolations smooth the image, and they are mainly used for photointerpretation purposes. However, they are not suggested if the spectral detail is of any importance to the application (Richards and Jia, 1999).

According to Richards & Jia (1999), cubic convolution would be the resampling method used for photointerpretation purposes, but Philipson (1997) argues that the contrast must be preserved and not smoothed. The resampling method is ultimately a personal choice of the photointerpreter.

4.1.7 GEOGRAPHICAL INFORMATION SCIENCE

The idea of portraying different layers of data on a series of base maps, and relating things geographically, has been around much longer than computers. The best-known example is the case of Dr. John Snow, who used a map showing the locations of deaths by cholera in central London in September 1854 to track the source of the outbreak to a contaminated well.

For such applications, a software system was developed that later expanded into a science: Geographical Information Systems/Science (GIS). The users of GIS span many fields, including city and regional planning, architecture, geology and geomorphology, hydrology, geography, computer science, remote sensing and surveying. Given such wide application and divergent constituencies, there is no single universally accepted definition of GIS. An early functional definition states (Calkins and Tomlinson, 1977):

A geographical information system is an integrated software package specifically designed for use with geographic data that performs a comprehensive range of data handling tasks. These tasks include data input, storage, retrieval and output, in addition to a wide variety of descriptive and analytical programs.

A more recent definition (Goodchild et al., 1999) states the meaning of geographical information and geographical information science:

Geographical information (GI) can be defined as information about the features and phenomena located in the vicinity of the surface of the Earth. [...] The fundamental primitive element of GI is the record <x, y, z, t, U> where U represents some "thing" (a class, a feature, a concept, a measurement or some variable, an activity, an organism, or any of a myriad possibilities) present at some location (x, y, z, t) in space-time.

Information science generally can be defined as the systematic study, according to scientific principles, of the nature and properties of information. From this position it is easy to define GIScience as the subset of information science that is about GI.

In every application, the inclusion of GIS as a processing tool is an approach that leads to a different perspective on the underlying problem. By using the simple topological figures of polygon, line and point, one can express everything that exists in the (x, y, z, t) space. Under one coordinate system, all real-world data can be overlaid and analysed. It would take many pages to detail the capabilities of GIS in data analysis. In a few words, the conceptual components of a GIS are (Jakeman et al., 1996):

The database is all the data files that can be accessed by the user. Data are organised via some common and controlled approach. The database manager performs all database retrieval, handling and storage functions. The manipulation consists of tasks needed to respond to simple user data summary requests and preliminary to analytical processes. The data entry and cleaning are procedures for entering and editing data. The user interface is the main working environment, which has moved from the one-dimensional command line to the object-oriented one; it is the interaction space between the user and the computer. The analysis includes procedures that derive information from the data. It is the most important part of a GIS. It incorporates a variety of analytical techniques which, when combined, answer the specific needs of the user of the system.
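Returning briefly to the resampling methods of Section 4.1.6, the arithmetic behind the nearest-neighbour and bilinear schemes can be sketched on a toy 2x2 grid (cubic convolution extends the same idea to a 4x4 neighbourhood); the grid values are purely illustrative:

```python
def nearest(img, r, c):
    """Nearest neighbour: keep the raw value of the closest original pixel."""
    return img[round(r)][round(c)]

def bilinear(img, r, c):
    """Bilinear: distance-weighted blend of the four surrounding pixels."""
    r0, c0 = int(r), int(c)
    dr, dc = r - r0, c - c0
    top = img[r0][c0] * (1 - dc) + img[r0][c0 + 1] * dc
    bottom = img[r0 + 1][c0] * (1 - dc) + img[r0 + 1][c0 + 1] * dc
    return top * (1 - dr) + bottom * dr

grid = [[0, 10],
        [20, 30]]
nearest(grid, 0.4, 0.4)    # 0: the raw brightness value is retained
bilinear(grid, 0.5, 0.5)   # 15.0: the four neighbours are averaged (smoothed)
```

The contrast between the two results shows why nearest neighbour is preferred before classification (raw values survive) while bilinear and cubic smoothing suit photointerpretation.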


4.1.8 PROCESSING STAGE

After all data are integrated in a common space, it is time to proceed with either qualitative or quantitative information extraction; in other words, photointerpretation or image analysis.

Colwell (1960) defined photographic interpretation (also termed photointerpretation) as

the process by which humans examine photographic images for the purpose of identifying objects and judging their significance.

With the advent of computer technology, the methods for photographic interpretation changed, and the new term image analysis (also termed quantitative analysis) came to complement the old term:

Image analysis is the process by which humans and/or machines examine photographic images and/or digital data for the purpose of identifying objects and judging their significance (Philipson, 1997).

Photointerpretation involves direct human interaction, and thus it is good for spatial assessment but not for quantitative accuracy. By contrast, image analysis requires little human interaction and is mainly based on machine computational capability; thus it has high quantitative accuracy but low spatial assessment capability. Today, both techniques are used in very specific and complementary ways, and the approaches have their own roles. On one hand, if digital image processing is applied beforehand to enhance the imagery, this helps the photointerpreter in his work. On the other hand, image analysis depends on information provided at key stages by an analyst, who is often using photointerpretation (Richards & Jia, 1999).

Konecny (2003) defines remote sensing and photogrammetry according to their object of study:

Photogrammetry concerns itself with the geometric measurement of objects in analogue or digital images.

Remote sensing can be considered as the identification of objects by indirect means using naturally existing or artificially created force fields.

Thus, photogrammetric techniques were adopted by remote sensing mainly for quantitative analysis. In its turn, remote sensing expanded the data that can aid an image analyst in the extraction of quantitative information.

All of the above terms give a specific meaning to the approaches, but the approaches complement each other when it comes to implementation. In other words, the sciences of photogrammetry and remote sensing have moved from their previously independent ways of working towards a more interdisciplinary network where, in combination with other sciences like Geographical Information Systems, Geodesy, and Cartography, they produce better results and increase the processing capability for modern-day applications (figure 5).

Figure 5. Classical and modern geospatial information system (reproduced from Konecny, 2003)
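As a minimal, purely illustrative instance of the machine side of image analysis, the sketch below labels multispectral pixels with a minimum-distance-to-means classifier; the class means and band values are invented, and in practice the analyst supplies the training areas, exactly the human input described above:

```python
def classify(pixel, class_means):
    """Assign the pixel to the class whose mean vector is nearest
    (squared Euclidean distance in spectral space)."""
    def dist2(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(class_means, key=lambda name: dist2(pixel, class_means[name]))

# Invented class means in two bands (red, near-infrared), as might be taken
# from training areas outlined by a photointerpreter.
means = {"water": (5, 2), "vegetation": (10, 60), "soil": (40, 45)}

image = [(6, 3), (12, 55), (38, 40)]               # three invented pixels
labels = [classify(p, means) for p in image]       # water, vegetation, soil
```

The machine does the per-pixel arithmetic at scale; judging whether the resulting map is archaeologically meaningful remains a photointerpretation task.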


4.1.9 SUMMARY

The main aim of this chapter was to provide a brief overview of satellite imagery today, and of the main approaches for extracting information from the image data. It is not enough on its own, though, and the reader should get better informed through dedicated classics such as Lillesand et al. (2004), Richards and Jia (1999), Sabins (2007) and Jensen (2006), to name a few.

Satellite imagery is nothing more than data collected at a particular moment in time, from a specific area. There is an abundance of archived satellite imagery in different spatial, radiometric and spectral resolutions, and with different swath widths and prices. Furthermore, there is an abundance of active satellite sensors with different characteristics that can provide imagery. Careful selection of the imagery appropriate for the application/project is the first step.

This data (the satellite imagery) should then be spatially integrated in a common coordinate system with the other data of the project, e.g. ground-collected data, reference data, etc. To achieve this, first the data must be in a common format (e.g. digital). Then, a base layer is needed that can accommodate all data with as good spatial certainty as possible and as close to reality as possible. There is a variety of sensor models to use so as to approximate reality, and all preprocessing steps should be recorded. This way, it is possible to provide a picture of the quality of the integrated data.

The last stage involves the extraction of useful information from the data. The approaches can provide qualitative and/or quantitative information. Qualitative information extraction has long been done through the photointerpretation approach. It is an approach highly dependent on human physiology and psychology, and it is analysed into elements and levels. The computer as a tool increases the interpreter's perception and photointerpretation ability through image enhancements in the radiometric, spectral and spatial space, and in the spatial frequency domain. On the other hand, quantitative information extraction needs numerical preprocessing certainty for a statistically robust result. However, it still needs the human analyst, who translates the results from numbers to something more meaningful by using photointerpretation and human perception during the different stages of the process.

The computations and the imagery may display particular information, but they cannot display its significance, and they cannot know how to proceed in a process. It is always up to the human to understand the importance of the results and adopt certainty for the final decisions/conclusions. Similarly, remote sensing may provide information about the surface features, but it cannot detect the cultural significance of the landscape morphology. However, the closer to reality this information is, the larger the chance for the human to reach correct conclusions.

4.1.10 EXERCISES

For any practical application of remote sensing that you specify, define and critically discuss its spatial, temporal and spectral characteristics and the kind of image data required.

Which types of sensors are mostly used in your discipline or field of interest?

Which aspects need to be considered to assess whether the statement "Remote sensing data acquisition is a cost-effective method" is true?

ABSTRACT

The era of satellites for earth observation started in the 1960s with meteorological and military applications. The multispectral concept helped earth observation take off in other applications with Landsat in 1972. Since then, a huge archive of image data has become available for almost every place on Earth, and satellite imagery has been utilised in many different specialties. Today, more and more satellites are launched with improved characteristics and special properties according to the application market targeted. However, there is no satellite tailor-made for archaeological applications. This chapter provides an insight into existing and future satellite image data, along with their properties, and how the archaeologist can make the best use of them.

References

BANNARI, A.; MORIN, D.; BÉNIÉ, G.B. and BONN, F.J. 1995. A theoretical review of different mathematical models of geometric corrections applied to remote sensing images, Remote sensing reviews, vol. 13, pp. 27-47.

BUITEN, H.J. and VAN PUTTEN, B. 1997. Quality assessment of remote sensing image registration: analysis and testing of control point residuals, ISPRS journal of photogrammetry and remote sensing, vol. 52, pp. 57-73.

CALKINS, H.W. and TOMLINSON, R.F. 1977. Geographic Information Systems: methods and equipment for land use planning. Ottawa, International Geographical Union, Commission of Geographical Data Sensing and Processing and U.S. Geological Survey.

CLARK, C.D.; GARROD, S.M. and PARKER PEARSON, M. 1998. Landscape archaeology and Remote Sensing in southern Madagascar, International journal of remote sensing, vol. 19, No. 8, pp. 1461-1477.

COLWELL, R.N. (ed.) 1960. Manual of photographic interpretation, American Society of Photogrammetry.

DE LEEUW, A.J.; VEUGEN, L.M.M. and VAN STOKKOM, H.T.C. 1988. Geometric correction of remotely-sensed imagery using ground control points and orthogonal polynomials, International journal of remote sensing, vol. 9, Nos. 10-11, pp. 1751-1759.

DOWMAN, I. and DOLLOFF, J.T. 2000. An evaluation of rational functions for photogrammetric restitution, International archives of photogrammetry and remote sensing, vol. XXXIII, part B3, Amsterdam.

EHLERS, M. 1997. Rectification and registration. In: Star, J.L.; Estes, J.E. and McGwire, K.C. (eds.), Integration of geographic information systems and remote sensing: Topics in remote sensing 5, Cambridge University Press.

FRASER, C.S.; HANLEY, H.B. and YAMAKAWA, T. 2002. Three-dimensional geopositioning accuracy of IKONOS imagery, Photogrammetric record, vol. 17, No. 99, pp. 465-479.

FRASER, C.S. and HANLEY, H.B. 2003. Bias compensation in rational functions for IKONOS satellite imagery, Photogrammetric engineering and remote sensing, vol. 69, No. 1, pp. 53-57.

FRASER, C.S. and HANLEY, H.B. 2005. Bias-compensated RPCs for sensor orientation of high-resolution satellite imagery, Photogrammetric engineering and remote sensing, vol. 71, pp. 909-915.

FRASER, C.S.; DIAL, G. and GRODECKI, J. 2006. Sensor orientation via RPCs, ISPRS journal of photogrammetry and remote sensing, vol. 60, pp. 182-194.

GOODCHILD, M.F.; EGENHOFER, M.J.; KEMP, K.K.; MARK, D.M. and SHEPPARD, E. 1999. Introduction to the Varenius project, International journal of geographical information science, vol. 13, No. 8, pp. 731-745.

GRODECKI, J. 2001. IKONOS stereo feature extraction: RPC approach, Proceedings of the ASPRS 2001 conference, 23-27 April, St. Louis.

JAKEMAN, A.J.; BECK, M.B. and MCALEER, M.J. 1996. Modelling change in environmental systems, John Wiley and Sons.

JENSEN, J.R. 2006. Remote sensing of the environment: an earth resource perspective, 2nd edition, Prentice Hall.

KONECNY, G. 2003. Geoinformation: Remote sensing, photogrammetry and geographic information systems, Taylor and Francis, London.

KOUCHOUKOS, N. 2001. Satellite images and Near Eastern landscapes, Near Eastern archaeology, vol. 64, Nos. 1-2, pp. 80-91.

LANDGREBE, D. 1997. The evolution of Landsat data analysis, Photogrammetric engineering and remote sensing, vol. 63, No. 7, pp. 859-867.

LILLESAND, T.M.; KIEFER, R.W. and CHIPMAN, J.W. 2004. Remote sensing and image interpretation, 5th edition, John Wiley & Sons.

MCGWIRE, K.C. 1996. Cross-validated assessment of geometric accuracy, Photogrammetric engineering and remote sensing, vol. 62, No. 10, pp. 1179-1187.

MORAD, M.; CHALMERS, A.I. and O'REGAN, P.R. 1996. The role of mean-square error in the geo-transformation of images in GIS, International journal of geographical information systems, vol. 10, No. 3, pp. 347-353.

OPEN GIS CONSORTIUM, 2004. The OpenGIS Abstract Specification, Topic 7: The Earth Imagery Case, 99-107.doc, http://www.opengeospatial.org/standards/as

PHILIPSON, W.R. (ed.) 1997. Manual of photographic interpretation, Science and engineering series, American Society for Photogrammetry and Remote Sensing.

REEVES, R.G. (ed.) 1975. Manual of remote sensing, American Society of Photogrammetry.

RICHARDS, J.A. and JIA, X. 1999. Remote sensing digital image analysis: an introduction, Springer-Verlag, Berlin Heidelberg.

RICHELSON, J.T. 1999. U.S. satellite imagery, 1960-1999, National Security Archive Electronic Briefing Book No. 13. http://www.gwu.edu/~nsarchiv/NSAEBB/NSAEBB13/ (last accessed: December 2011).

SABINS, F.F. 2007. Remote sensing: principles and interpretation, 3rd edition, Waveland Pr. Inc.

SCOLLAR, I.; TABBAGH, A.; HESSE, A. and HERZOG, I. 1990. Archaeological prospecting and remote sensing, Cambridge University Press.

TAO, C.V.; HU, Y.; MERCER, J.B.; SCHNICK, S. and ZHANG, Y. 2000. Image rectification using a generic sensor model: rational function model, International archives of photogrammetry and remote sensing, vol. XXXIII, part B3, Amsterdam.

TOUTIN, T. 2003. Error tracking in IKONOS geometric processing using a 3D parametric model, Photogrammetric engineering and remote sensing, vol. 69, pp. 43-51.

TOUTIN, T. 2004. Review article: Geometric processing of remote sensing images: models, algorithms and methods, International journal of remote sensing, vol. 25, No. 10, pp. 1893-1924.

TOUTIN, T. 2006. Comparison of 3D physical and empirical models for generating DSMs for stereo HR images, Photogrammetric engineering and remote sensing, vol. 72, pp. 597-604.

TROMBOLD, C.D. 1991. Ancient road networks and settlement hierarchies in the New World, Cambridge University Press.

VALADAN ZOEJ, M.J. and SADEGHIAN, S. 2003. Rigorous and non-rigorous photogrammetric processing of IKONOS Geo image, Proceedings of the ISPRS joint workshop "High resolution mapping from space", Hannover, Germany.

WOLNIEWICZ, W. 2004. Assessment of geometric accuracy of VHR satellite images, International archives of the photogrammetry, remote sensing and spatial information sciences, vol. 35, part B1, Istanbul, Turkey, pp. 1-5 (CD-ROM).

ZITOVÁ, B. and FLUSSER, J. 2003. Image registration methods: a survey, Image and vision computing, vol. 21, pp. 977-1000.

5
GIS

5.1 2D & 3D GIS AND WEB-BASED VISUALIZATION

Giorgio AGUGIARO

5.1.1 DEFINITIONS

5.1.1.1 Geodata

In the simplest terms, a geographical information system


(GIS) can be considered as the merging of cartography,
statistical analysis, and database technology. A more
elaborate definition is given by Clarke (1986): a
geographical information system can be defined as a
computer-assisted system for capturing, storing,
retrieving, analysing, and displaying spatial data. Due to
its general character, some authors (e.g. Cowen, 1988)
have argued that such a vague definition allows the term
GIS to be applied to almost any software system able to
display a map (or map-like image) on a computer output
device. Nevertheless, what characterises a GIS is its
capability to handle spatial data that are geographically
referenced to a map projection in an Earth coordinate
system, and to perform spatial analyses using such data
(Maguire et al., 1991). Moreover, most of today's GIS
software packages allow geographical data to be
projected from one coordinate system into another, so
that heterogeneous data from different sources can be
collected into a common database and layered together
for mapping purposes and further analyses.

An information system relies on a data model, i.e. a description of how the data to be handled are structured and saved (Hoberman, 2009). If real objects are to be represented, they can be described by means of descriptive data identifying them clearly and univocally. For example, a car can be described in terms of manufacturer, model type, engine, colour, number plate, etc. These data are generally called thematic or attribute data. Data can be retrieved by means of queries, which define the criteria for extracting data from the archive, e.g. all red cars built in a certain year.
Moreover, it is possible to model and to store relations
between different objects: a person (object) can be the
owner (relation) of a certain car (object). Therefore,
proper queries are performed to extract data according to
this relational information, e.g. all cars belonging to the
same owner.
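The relational structure described above can be reproduced with any relational database. The following minimal sketch uses Python's built-in sqlite3 module and the countries/cities tables of Figure 1 (the rows are made-up example data) to show an attribute query resolved through the country_id relation:

```python
import sqlite3

# In-memory database with two related tables, as in Figure 1:
# cities.country_id references countries.country_id.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE countries (country_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE cities (city_id INTEGER PRIMARY KEY, name TEXT,
                         population INTEGER,
                         country_id INTEGER REFERENCES countries(country_id));
""")
db.executemany("INSERT INTO countries VALUES (?, ?)",
               [(1, "Italy"), (2, "Austria")])
db.executemany("INSERT INTO cities VALUES (?, ?, ?, ?)",
               [(1, "Rome", 2870000, 1), (2, "Vienna", 1900000, 2),
                (3, "Siena", 54000, 1)])

# A query exploiting the relation: all cities belonging to Italy.
rows = db.execute("""
    SELECT cities.name FROM cities
    JOIN countries ON cities.country_id = countries.country_id
    WHERE countries.name = 'Italy'
    ORDER BY cities.name
""").fetchall()
print([name for (name,) in rows])  # ['Rome', 'Siena']
```

The same pattern (a foreign-key column plus a JOIN) underlies the link between attribute tables and geometric features in a GIS.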
A geographical information system itself relies on a data model, the only difference being that it must deal not only with attribute data, but also with geometric data. The latter describe the geographical position, the shape, the orientation and the size of objects.

It must be noted that the terms geographical and spatial are often used interchangeably, for example when referring to data or describing geographical features. Strictly speaking, however, spatial is more general and applies to any type of information tied to a location, while geographical refers only to information about or near the Earth's surface. Another word often used for geographical data is geodata.

Moreover, in a GIS spatial (topological) relations among


different objects can be defined and stored. Unlike
geometry, which describes the absolute shape and
position in space, topology describes relations between
objects in terms of neighbourhood. Typical topological
relations are union/disjunction, inclusion and intersection.
Given two objects A and B, a spatial query is performed
for example to determine whether A is inside B, A
intersects B, or to obtain the object resulting from the
union of A and B.
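To illustrate how such a topological predicate can be evaluated, the sketch below implements the classic ray-casting test for "A is inside B" on a simple polygon. It is a didactic stand-in for the robust geometry engines that real GIS packages use:

```python
def point_in_polygon(pt, polygon):
    """Ray-casting test: does the point lie inside the closed polygon?
    `polygon` is a list of (x, y) vertices; the last edge closes back
    to the first vertex automatically."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Toggle `inside` for every edge crossed by the horizontal
        # ray that starts at the point and extends to the right.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]   # object B
print(point_in_polygon((1, 2), square))      # True: A is inside B
print(point_in_polygon((5, 2), square))      # False: A is outside B
```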

Geographical information systems find application in many different fields, ranging for example from cartography, urban planning, environmental studies and resource management to archaeology, agriculture, marketing and risk assessment, not to mention the steadily growing number of web-based applications, which have boomed in the past ten years (e.g. Google Maps, OpenStreetMap, etc.).

GIS are therefore characterised by the integrated management of attributes, geometry and topology. The advantage is that queries or data analyses can be carried out using all the above-mentioned types of information at the same time. In a geographical information system, data are stored according to a so-called relational model: all attributes of homologous objects, and their relations with each other, are organised and stored by means of tables linked to the geometric features (Date, 2003). A simple example is given in Figure 1.

Figure 1. Example of a relational model: two tables (here: countries and cities) are depicted schematically (top). Attribute names and data types are listed for each table. The black arrow represents the relation existing between them. The data contained in the two tables are presented in the bottom left, and the result of a possible query in the bottom right. The link between the two tables is realized by means of the country_id columns

5.1.1.2 Geometry models: rasters and vectors

As far as geometry is concerned, two broad models are mainly adopted to store data in a GIS, depending on whether the object to be represented is a continuous field, as in the case of elevations, or discrete, like a bridge or a house. The former is generally stored by means of raster data, the latter as vector data.

In a raster model, the area of interest is regularly divided (tessellated) into elementary units, called cells, all having the same size and shape. They are conceptually analogous to pixels (picture elements) in a digital image (see Figure 2, top). Most common are square or rectangular cells. Every cell holds a numeric value, so that the data structure is conceptually comparable to a matrix. The size of the cells also determines the resolution of the data. Raster data models are best used to represent continuously varying spatial phenomena like terrain elevation, amounts of rainfall, population density, soil classes, etc. The cells can store values directly (e.g. elevation values), or values representing keys to an externally linked table, itself containing attributes (e.g. soil classes). Thanks to their regularly gridded structure, operations can be carried out by means of so-called map algebra: different maps are layered upon each other, and functions then combine the values of each raster's matrix according to some criteria. Maps can, for example, be overlaid to identify and compute overlapping areas, or statistical analyses can be carried out on the cell values.

In a vector model, objects are geometrically described using primitives (points, lines and polygons), which can be further aggregated in ordered or unordered groups. A point is stored in terms of its coordinates in a given reference system; a line is defined through its start and end points. Multiple ordered lines define a polyline, in that the end point of a line is at the same time the start point of the successive line. The surface delimited by a planar and closed polyline defines a polygon. If geometric primitives are grouped, more complex objects, like a river or a building, can be represented (see Figure 2, bottom). A group of homologous geometries results in a multi-geometry: for example, a multipoint geometry is formed by several points, while a multipolygon is formed by a group of polygons. If heterogeneous geometries are grouped, e.g. points, polylines and polygons together, then a geometry collection is created.

Figure 2. Raster (top) and vector (bottom) representation of point, line and polygon features in a GIS

Another vector data structure frequently used in GIS to represent surfaces is the TIN (Triangulated Irregular Network): TINs consist of irregularly distributed nodes and lines, with 2D or 3D coordinates, that are organized in a network of non-overlapping triangles. TINs are often used to represent terrain surfaces, or any other continuous features. Unlike raster models, different point densities are allowed in the same TIN model: higher resolutions allow for a more precise representation of details (e.g. in the terrain), while a lower resolution can be used in areas that are less variable or less interesting.

A possibility to store 3D features in a raster-like fashion is offered by voxels (volumetric pixels, or Volumetric Picture Elements): each voxel represents the smallest subdivision unit of a volumetric domain and typically has the form of a cube or a cuboid, with a given height, width and depth. Similarly to a raster cell, a voxel can store a numeric value, or a key to an externally linked table, itself containing attributes. Voxels are typically used to represent three-dimensional continuous features, like for example geological or atmospheric strata and their characteristics. In certain applications, voxels can also be used to overcome the limitations of rasters and represent terrain features such as overhangs, caves and arches, or other 3D features.

Every object in a vector representation is given a unique key linking it to a table containing the attributes. A single object can therefore be characterised by multiple attributes, each one stored in a separate table column. Unlike map algebra with rasters, operations with vector data, like calculations of areas or intersections, are more complicated; however, most GIS packages nowadays offer tools to perform the most common operations (e.g. buffering, overlay, geostatistics, etc.).

Regarding dimensions, GIS offer several possibilities to represent two- or three-dimensional geographical features. Due to their structure, raster data can best represent 2D objects like the surface of a lake. However, as in the case of an elevation model, height values stored in the cells can provide some information about the third dimension. Given that only a single value can be stored for each raster cell, such models are defined as 2.5D, to indicate an intermediate status between 2D and 3D models. The height is a function of the xy planar coordinates, so no multiple height values can exist at the same position in a raster cell. This makes it impossible to represent vertical features like walls, or objects like bridges, or caves in a mountain. The reason is that for a cave, for instance, more values are needed at the same planimetric position: one for the cave floor, one for the cave ceiling, and finally one for the mountain surface on top of the cave.

Compared to the raster approach, vector data offer more possibilities when it comes to multiple dimensions. The coordinates of a point can be stored either in 2D (e.g. x, y) or in 3D (x, y, z). It follows that lines, polylines, polygons and the resulting aggregations can also be stored in a three-dimensional space. Moreover, vector data are not subject to the raster's limitations: for any given xy position, multiple z values can be stored in a vector-based model.

Sometimes the time variable can also be added to the three dimensions in space, resulting in a 4D representation. This is achieved, for example, by giving every object a timestamp defining the object's properties at a certain moment. This makes it possible to explore data not only by means of spatial queries, but also to include time and investigate how a spatial feature has evolved or changed over the course of time.
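A minimal sketch of the timestamp idea in Python; the feature structure and field names (geom, state, valid_from) are invented for illustration and are not a standard GIS schema:

```python
from datetime import date

# Each "feature" carries a 3D geometry, a thematic attribute and a
# timestamp stating since when that attribute value has been valid.
features = [
    {"geom": (11.33, 43.32, 310.0), "state": "excavated", "valid_from": date(2009, 6, 1)},
    {"geom": (11.33, 43.32, 310.0), "state": "restored",  "valid_from": date(2012, 9, 1)},
    {"geom": (11.40, 43.30, 295.0), "state": "surveyed",  "valid_from": date(2011, 4, 15)},
]

def state_at(features, when):
    """For each geometry, return the most recent state valid at `when`."""
    best = {}
    for f in features:
        if f["valid_from"] <= when:
            g = f["geom"]
            if g not in best or f["valid_from"] > best[g]["valid_from"]:
                best[g] = f
    return {g: f["state"] for g, f in best.items()}

print(state_at(features, date(2010, 1, 1)))  # only the first feature exists yet
print(state_at(features, date(2013, 1, 1)))  # both geometries, updated states
```

Querying the same dataset at two different dates returns two different snapshots of the same spatial features.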

Finally, most modern GIS allow direct connections to existing on-line data repositories, which publish geodata by means of web mapping services, thus facilitating data reuse for different purposes.

5.1.2 GIS FUNCTIONS

Compared to classical maps, a geographical information system greatly extends the range of possible uses, with regard to the type and amount of data it stores and manages. Several functionalities are offered by most GIS environments, from data capture and maintenance to visualisation and geodata analysis tools.

Geodata can be saved in different file formats or into different databases. In practice, every commercial GIS-software producer tends to define and implement its own formats. This has led in the past to a plethora of proprietary formats, both for raster and for vector data, although in the last decade much more effort has been put into defining standards to facilitate data interoperability.

If ESRI's shapefiles have become a de facto accepted file-based exchange format for vector data, the same cannot be said for raster data, although (geo)tiff files are generally widespread. Moreover, there is a gradual shift in storage strategies, in that more and more GIS packages offer the choice of using a spatially-enabled database management system (DBMS) to store and retrieve data, instead of relying on local file-system solutions.

Spatially-enabled DBMS differ from standard databases in that they are able to handle not only the usual attribute data, but also spatial data, both in terms of vectors and rasters. Commercial examples are IBM's DB2 with its Spatial Extender, or Oracle Spatial, while the PostgreSQL DBMS with its spatial extension PostGIS, or SQLite coupled with its spatial extension SpatiaLite, are free and open-source alternatives.

Nevertheless, any GIS offers several functions to convert geodata between different standards and proprietary formats, while geometrically transforming the data in the process. These are generally called spatial ETL (Extract, Transform, Load) tools.

5.1.2.1 Data capture and management

Data capture, i.e. the process of entering information into the system and storing it digitally, is a fundamental activity which applies to any GIS. Several methods exist, according to the type and nature of the data to be imported.

For existing older data sources, like maps printed on paper or similar supports, an intermediate digitisation process is required. Printed maps are scanned and transformed into raster maps. This is the case, for example, of older analogue photos (often aerial imagery), which are scanned and imported as raster data. Alternatively, by means of a digitiser, older maps or images are used as a reference to trace points, lines and polygons that are later stored as vector data and enriched with thematic attributes.

The digitisation process generally requires proper editing tools to correct errors or to further process and enhance the data being created. Raster maps may have to be corrected for flecks of dirt found on the original scanned paper. In the case of vector maps, errors in the digitising process may result in incorrect geometries or invalid topology, e.g. two adjacent polygons actually intersecting. Particular care must also be taken when assigning attributes.

With growing amounts of stored data, it becomes vital that metadata are collected and properly managed along with the geodata. Metadata give, among other things, information about the origin, quality, accuracy, owner and structure of the data they refer to. Most geographical information systems allow metadata to be edited and managed to some extent, or to be retrieved by means of external software products or on-line services.
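As a toy illustration, the sketch below keeps a metadata record alongside a layer in a plain dictionary and checks it for missing entries; the field names echo those mentioned in the text, but the structure is invented for illustration and is not a formal metadata standard (such as ISO 19115):

```python
# Required metadata fields for any layer (illustrative choice).
REQUIRED = {"origin", "quality", "accuracy", "owner", "structure"}

layer = {
    "name": "building_footprints",
    "features": [],  # geometries and attributes would go here
    "metadata": {
        "origin": "cadastral map, digitised 2010",
        "quality": "visually checked",
        "accuracy": "0.5 m planimetric",
        "owner": "municipality",
        "structure": "polygon layer, EPSG:25832",
    },
}

def missing_metadata(layer):
    """Return the set of required metadata fields that are absent."""
    return REQUIRED - set(layer.get("metadata", {}))

print(sorted(missing_metadata(layer)))  # [] -> all required fields present
```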

Another possibility consists in acquiring and importing data directly from surveying instruments. For example, modern portable GNSS receivers offer up to sub-decimetre accuracy and can be directly interfaced with a GIS environment to download the measured features, sometimes even directly in the field.

Nowadays, nearly all surveying devices produce digital data, making their integration in a GIS environment more straightforward, since the intermediate digitisation step can be skipped. Point clouds from laser scanning (both aerial and terrestrial), digital imagery from close-range cameras (e.g. on UAVs), up to satellite multi-spectral or radar data, are also typical products for GIS data collection. In particular, satellite-based imagery plays an important role due to the high frequency of data collection and the possibility of processing the different bands to identify objects and classes of interest, such as land cover.

5.1.2.2 Data processing

As geodata are collected (and stored) in various ways, a GIS must be able to convert data not only between different formats, but also between different data structure models, e.g. from vector data to raster data and vice-versa. If a vector is to be transformed into a raster, this operation is called rasterisation. The user must set the cell dimension and which attribute field is to be converted into the values contained in the raster cells. Since the rasterisation process introduces a fixed discretisation in space, it is crucial that the raster resolution be set properly, according to the intended application. As a general rule, a raster cell is assigned a


value if the majority of its surface is covered by the input vector feature. If no vector features (points, lines or polygons) fall within or intersect a cell, its value remains set to null.

If raster data are to be converted into vectors, this operation is called vectorisation. If vector points are to be created, the coordinates of the raster cell centres are generally used for the geometry and the raster cell values are stored as attributes. In the case of linear or areal features, lines and polygons are created by grouping neighbouring raster cells with the same value and assigning it to the attribute table.

Another common GIS feature consists in georeferencing. This operation consists in defining the location of a dataset in terms of map projections or coordinate systems, i.e. its relation (reference) to the physical space (geo). For example, with vector point data one can assign longitude and latitude (and height) values to each point. A positioning device like a GNSS receiver can be used for this purpose. In this way, the position of a point is univocally set on the Earth's surface. If raster-based imagery is to be georeferenced, then a common approach consists in identifying control points on the image, assigning known geographic coordinates to them, choosing the coordinate system and the projection parameters, and eventually performing the actual coordinate transformation by means of intermediate operations (data interpolation, reduction of distortions, adjustment to the chosen coordinate system).

Whenever geodata are given in different coordinate systems, it is of primary relevance to transform them to a common one, in order to facilitate data comparison and analysis. Once the input and output coordinate systems are known, a GIS can perform the coordinate transformation on the fly (e.g. when visualising heterogeneous data), or create an actual copy of the data in the new coordinate system. Common operations are map translations and/or rotations, transformations from geographic coordinates to any map projection (e.g. UTM, Universal Transverse Mercator) and vice-versa, up to more complex transformations from one national reference system to another.

5.1.2.3 Spatial analysis

The term spatial analysis refers to a vast range of operations that can be performed in most common GIS packages, although at different levels of complexity. In general terms, spatial analysis is defined as a set of processes to obtain useful information from raw data in order to facilitate decision-making. The goal is to discover and explore patterns and/or relations between objects, where these relations may not be immediately visible.

Given the vast range of existing spatial analysis techniques, this subject can be covered only to a limited extent in this chapter, so only the most common techniques will be presented. A comprehensive review can be found, for example, at the Geospatial Analysis website (www.spatialanalysisonline.com).

Queries according to some criteria are the first and most immediate type of spatial analysis. Data are selected and extracted from a larger dataset, e.g. for immediate visualisation or for further use. Selection criteria can be based on standard attributes (e.g. "How many provinces belong to a given Italian region?") or can be of a spatial nature (e.g. "Select all cities touched by the Danube river").

Whenever a buffer is created, a closed area is created around the input object (a point, a line or a polygon). Its boundaries identify a delimited portion of space which is no farther than a certain distance from the input object, which itself remains unchanged. Buffers can be used, for example, to perform analyses by means of overlay operations.

The term overlay refers to a series of operations in which a set of two or more thematic maps is combined to create a new map. This process is conceptually similar to overlaying Venn diagrams and performing operations like union, intersection or difference. In a union overlay, the geometric features, and the accompanying attributes, of two or more input maps are merged. In an intersection overlay, only the overlapping features are kept. With rasters, map overlay operations can be accomplished within the framework of map algebra by means of Boolean operators.

Terrain analysis tools are widely available in most modern GIS packages. Starting from a terrain model (e.g. a DTM or a DSM), generally provided as a raster, typical products are maps of slope, aspect or surface curvature. They are all obtained by using the value of a cell and its neighbouring cells. A slope map gives information about the tangent to the terrain surface at a certain position, while an aspect map refers to the horizontal direction the slope faces. In other words, if north is taken as the origin for the aspect, a valley stretching from west to east has its northern side facing south (thus an aspect value of 180°) and its southern side facing north (thus an aspect value of 0°). The northern side of the valley is therefore generally the one receiving more sunlight (in the northern hemisphere). Other functions allow contour lines to be created from a terrain model or, vice-versa, a continuous raster DTM/DSM to be obtained from contour lines (see interpolation, later on).

Since water always flows down a slope, slope and aspect maps are a prerequisite for hydrological analysis tools. Given a DTM, watersheds and drainage basins are computed automatically. The latter corresponds to an area of land where surface water from rain and melting snow or ice converges to a single point, usually the exit of the basin (e.g. a sea, a lake, another river), while the former is the line separating neighbouring drainage basins.

Visibility analyses allow the identification of all areas visible from a given position, or from a given set of positions. In this case viewshed maps are created. Astronomical analyses can also be performed, in that the amount of

sunlight reaching a certain position can be computed, as well as the shadowing effect of a hilly/mountainous region.

Often spatial data are acquired, or can be sampled, only at certain positions; however, it may be necessary to predict the values of a certain variable at unsampled locations within the area of interest. This is, in general terms, what interpolation consists of (Burrough and McDonnell, 1998). Interpolation is to be differentiated from extrapolation, which deals with the prediction of values of a certain variable outside the sampling area. Usually, the goal of interpolation is to convert point data to surface data. All GIS packages offer several interpolation tools, which implement different interpolation algorithms. The fundamental idea behind interpolation is that near points are more related (or similar) than distant points; therefore, near points generally receive higher weights than far-away points. The obtained surface can pass through the measured points or not: accordingly, interpolation methods are classified into exact and inexact. In the case of an exact interpolator, the predicted value at a sample location coincides with the measured value at the same location; otherwise it is the case of an inexact interpolator: predictions differ from the measured values at the sampled locations, and their differences are used to give a statement about the model quality. The very large number of existing interpolation models allows different classification criteria to be defined, according to their characteristics.

Figure 3. Qualitative examples of different interpolation algorithms starting from the same input (left): surface interpolated using an Inverse Distance Weighting interpolator (centre) and a Spline with Tension interpolator (right)

In addition, interpolation methods can be classified into global or local, with regard to whether they use all the available sample points to generate predictions for the whole area of interest, or only a subset of them, respectively. Algorithms with global behaviour include kriging, polynomial trend analyses, spline interpolation and the finite element method (FEM); these methods can be used to evaluate and separate trends in the data. In the case of a local approach, the predicted value is instead obtained only from known points within a certain distance, where the concept of distance does not refer strictly to the Euclidean one only, but more generally to neighbourhood. Algorithms belonging to this class include, for example, nearest neighbour and natural neighbour interpolation.

A distinction can be made between deterministic and geostatistical interpolation methods. The first are based on mathematical functions that calculate the values at unknown locations according either to the degree of similarity or to the degree of smoothing in relation to neighbouring data. Typical examples of this interpolation family are Inverse Distance Weighting (IDW) or radial basis functions (e.g. thin-plate spline, spline with tension). An example is given in Figure 3.

Geostatistical interpolation methods use both mathematical and statistical methods in order to predict values, together with probabilistic estimates of the quality of the interpolation. These estimates are obtained using the spatial autocorrelation among the data points.

Another typical GIS application is represented by network analysis, which is based on graph theory. A graph is a mathematical structure in which relations between objects are modelled pairwise. Topologically, a graph consists of nodes (also called vertices) and edges connecting pairs of vertices. In a GIS, a graph is best implemented in the vector model, where points represent the nodes and lines represent the edges. Once, for example, a street network is modelled according to these criteria, problems can be solved such as the computation of the shortest path between two nodes or, given for example a list of cities (nodes) and their pairwise distances (edges), the computation of the shortest route that visits each city exactly once (this is also called the travelling salesman problem). An example is given in Figure 4.

5.1.2.4 Data visualisation and map generalisation

One of the fields where geographical information systems have always found great application is cartography, i.e. the process of designing and producing maps to visually represent spatial data and to help explore and understand the results of analysis. Geodata are generally stacked as thematic layers, and each layer is formatted using styles that define the appearance of the data in terms of colours, symbols, etc. Raster and vector data can be represented at the same time or selectively. Moreover, legends, scale bars and north arrows can also be added. The output maps are typically on paper or

Figure 4. Examples of network analyses. A road network (upper left), in which 5 possible destinations are represented
by black dots, can be represented according to the average speed typical for each roadway (upper right), where
decreasing average speeds are represented in dark green, light green, yellow, orange and red, respectively.
The shortest route, considering distance, connecting all 5 destinations is depicted in blue (bottom left),
while the shortest route, in terms of time, is depicted in violet (bottom right).
These examples are based on the Spearfish dataset available for GRASS GIS.
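The shortest-path problem mentioned above is classically solved with Dijkstra's algorithm. The sketch below runs it on a toy road network with invented edge lengths; production GIS network tools add turn restrictions, one-way streets, and richer cost models (e.g. travel time instead of distance, as in Figure 4):

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm on a graph given as
    {node: [(neighbour, edge_length), ...]}; returns (length, path)."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, length in graph.get(node, []):
            if nbr not in visited:
                heapq.heappush(queue, (dist + length, nbr, path + [nbr]))
    return float("inf"), []

# A toy road network: nodes are junctions, edge weights are lengths in km.
roads = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("A", 2.0), ("C", 1.5), ("D", 4.0)],
    "C": [("A", 5.0), ("B", 1.5), ("D", 1.0)],
    "D": [("B", 4.0), ("C", 1.0)],
}
print(shortest_path(roads, "A", "D"))  # (4.5, ['A', 'B', 'C', 'D'])
```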

is therefore conceived to reduce the complexity of the


real world by dropping ancillary and unnecessary details
by means of a proper (geo)data selection. Other common
map simplification strategies consist in simplification
(e.g. the shapes of significant features are retained but
their geometries altered to reduce complexity and
increase visibility), combination (e.g. two adjacent
building footprints are merged into a single one, as the
gap between them is negligible at the chosen scale),
smoothing (e.g. the polyline representing a road is
smoothed to appear more natural) or enhancement, in that
some peculiar but significant details are added to the map
in order to help readability by the user (e.g. symbols
hinting at a particularly steep climb in a hiking map).

directly on screen. In the latter case, more information


can be added, e.g. as images, graphs, or any other
multimedia objects linked to the thematic data being
displayed.
Every GIS package allows some kind of data exploration.
Queries on attribute data can be performed and the results
can be visualised as geometric features, or, vice versa, a
feature (or a group of them) can be selected and the
attributes retrieved and presented on screen.
Most GIS data can be visualised on screen as standard 2D
maps, or in 3D in that thematic maps (rasters or vectors)
are draped on top of an elevation map, whose cell values
are used to create a 2.5 surface. According to the viewer
capabilities, 3D vector data (sometimes with textures) can
also be visualised. In Figure 5 some example of 2D and
3D data visualisation are presented.

If the cartographic generalisation process has traditionally


been carried out manually by expert cartographers, who
were given license to adjust the content of a map
according to its purpose using the appropriate strategies,
the emergence of GIS has led to the automated
generalisation process and to the need of developing and
establishing algorithms for the automatic production of
maps according to the purpose and scale.

A fundamental aspect tied with cartography is the


selection and the representation on a map of data in a way
that adapts to the scale of the display medium. Not all
geographical or cartographic details need to be
necessarily always preserved, as they might hinder the
readability of the map, or they might be unsuitable for the
purpose the map has been created for. Map generalisation

Several conceptual models for automated generalisation have been
proposed in the course of time (Brassel and Weibel, 1988;
McMaster and Shea, 1992; Li, 2006). Two main approaches for
automated generalisation exist: one deals with the actual process
of generalisation, the other focuses on the representation of data
at different scales. The latter is therefore in tight relation with
the framework of multi-scale databases, where two main methods
have been established. The first consists in a stepwise
generalisation, in which each derived dataset is based on the one
at the next larger scale; with the second method, the derived
datasets at all scales are obtained from a single large-scale one.
Automated generalisation is, however, still a subject of current
research, as no definitive answers have been given, also due to
the continuously expanding number of applications and devices
using and displaying heterogeneous geodata at multiple scales.

Figure 5. Examples of visualization of GIS data. A raster image
(orthophoto) and a vector dataset (building footprints) are
visualized in 2D (left). A 3D visualization of the extruded
buildings draped onto the DTM (right).

5.1.2.5 Web-based geodata publication

In the past decade, web-based mapping applications have
experienced a steady growth in terms of diffusion and popularity.
Examples are Google Maps, Bing Maps by Microsoft, or the
community-driven OpenStreetMap, which have made large amounts of
spatial data available to the public.

In general, web mapping services facilitate the distribution of
generated maps through web browsers, following a classical
client-server structure: the user performs a query on certain data
(spatial or non-spatial) from a client application, generally
running within the web browser, and the results are provided by a
remote server to the web browser, generally over the Internet.
This allows users to explore data dynamically and interactively,
as well as to combine different data to create new maps according
to criteria given by the user.

Web-based geodata publication can be performed in different ways,
although the most common strategies rely upon the adoption of
standard protocols such as Web Map Services (WMS), Web Feature
Services (WFS) or Web Coverage Services (WCS). Today these
specifications are defined and maintained by the Open Geospatial
Consortium (OGC), an international standards organisation which
encourages the development and implementation of open standards
for geospatial content and services, GIS data processing and data
sharing.

A Web Map Service is implemented whenever geodata are to be
delivered (served) in the form of georeferenced images over the
Internet. These images correspond to maps generated by a map
server, which retrieves data, for example, from a spatial database
and sends them to the client application for visualisation. During
pan and zoom operations, WMS requests generate map images by means
of a variety of raster rendering processes, the most common being
generally called resampling, interpolation and down-sampling. WMS
is a widely supported open standard for maps and GIS data accessed
via the Internet and loaded into client-side GIS software;
however, its main limitation is that the user cannot edit or
spatially analyse the served images.

In the case of a Web Feature Service, geodata are instead served
encoded in the XML-based GML (Geography Markup Language) format
(although other formats like shapefiles can be employed), which
allows every single geographic feature to be transmitted
independently, queried and analysed. Essentially, GML passes data
back and forth between a Web Feature Server and a client. While a
WMS serves a static map image as is, a WFS can be thought of as
serving the source code of the map.

A Web Coverage Service is implemented whenever a web-based
retrieval of coverages is needed. The term coverage refers to any
digital geospatial information representing space- or time-varying
phenomena. Therefore, similarly to WMS and WFS service instances,
a WCS allows clients to query portions of remotely stored geodata
according to certain criteria. However, there are some differences
with respect to WMS and WFS. Unlike the static images served by
WMS, a WCS provides data (and their metadata) so that they can be
interpreted and analysed (and not just visualized). With regard to
WFS, which serves only discrete data, the main difference is the
ability of the WCS to provide spatial data in the form of
coverages, i.e. representations of phenomena that relate a
spatio-temporal domain to a (potentially) multidimensional range
of properties.

Figure 6. Example of Web-based geodata publication in 3D: by means
of virtual globes, as in Google Earth, or in the case of the
Heidelberg 3D project (http://www.heidelberg-3d.de).
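In practice a client drives these services with plain HTTP requests whose parameter names are fixed by the OGC specifications. The sketch below assembles a WMS 1.1.1 GetMap request and a WFS 1.0.0 GetFeature request in Python; the server endpoint and layer names are hypothetical placeholders, not taken from the text.

```python
from urllib.parse import urlencode

# Hypothetical OGC server endpoint and layer names (placeholders).
ENDPOINT = "https://example.org/geoserver/ows"

def wms_getmap_url(layer, bbox, size=(800, 600), crs="EPSG:4326"):
    """Build a WMS 1.1.1 GetMap request returning a rendered map image."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "SRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": "image/png",
    }
    return ENDPOINT + "?" + urlencode(params)

def wfs_getfeature_url(typename, max_features=100):
    """Build a WFS 1.0.0 GetFeature request returning GML-encoded features."""
    params = {
        "SERVICE": "WFS",
        "VERSION": "1.0.0",
        "REQUEST": "GetFeature",
        "TYPENAME": typename,
        "MAXFEATURES": max_features,
    }
    return ENDPOINT + "?" + urlencode(params)

print(wms_getmap_url("site:orthophoto", (11.0, 43.0, 11.5, 43.5)))
print(wfs_getfeature_url("site:building_footprints"))
```

The WMS response is a finished image, which is why it can only be displayed; the WFS response carries the individual features, so the client can still query and analyse them, exactly the distinction drawn above.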

If the above-mentioned standard protocols are nowadays definitely
established and allow for data publishing in a two-dimensional
way, a new spectrum of possibilities is increasingly being offered
thanks to advances in geoinformation technologies like 3D virtual
environments, 3D analytical visualisation and 3D data formats
(Isikdag and Zlatanova, 2010). Three-dimensional data exploration
offers in fact several advantages in terms of representation of
geographical data, as well as more effective possibilities to
access and analyse data.

Today, georeferenced data can be visualised using so-called
virtual globes: such technologies permit a three-dimensional
exploration of the Earth's surface, on top of which satellite
imagery, digital elevation models, as well as other geographic
raster and vector data (e.g. textured 3D models of buildings and
landmarks) are mapped by direct streaming through the Internet.
Several virtual globes exist, both as closed and open source
solutions. The most popular closed-source technologies are Google
Earth (Figure 6, left) and Microsoft Bing Maps 3D. These platforms
have made 3D visualisation of geographical features known and
accessible to everyone; in the open source community, similar
solutions exist, e.g. NASA World Wind, ossimPlanet and osgEarth.

When it comes to accessing and visualising 3D geodata in a web
browser, the move from desktop GIS to the web has been
experiencing a steady development in the past decade. Until
recently, web mapping applications with some 3D GIS capability
could be delivered only by means of plugins, mostly available for
the VRML and X3D formats (the latter being the successor of VRML)
and able to support 3D vector graphics and virtual reality models
in a web browser. These plugins were not widely adopted and
suffered from performance issues when applied to the large volumes
of data typical of GIS applications. Even if some workarounds and
alternative implementations were proposed and adopted, performance
limitations continued to hinder their extensive adoption.

Support for 3D from the major commercial web map servers has also
been limited, though it is improving. Today, if 3D support is
provided, then the focus is mostly on delivering data with the
third dimension and on their 3D visualization, but rarely on
offering support for 3D queries and advanced 3D analyses. Within
the OGC's standards framework, support for 3D is similarly and
gradually reaching maturity. WFS and WCS services provide support
for 3D vectors and coverages, respectively. In addition, the
Geography Markup Language (GML) and Keyhole Markup Language (KML)
standards support z values, too. However, similar to the
commercial counterparts, the focus is still mostly on delivering
and visualising data with z values obtained by means of 2D
queries.

Some of the above-mentioned problems might soon be resolved with
the introduction of HTML5 and the rapid adoption of modern (i.e.
released after 2010) browsers. HTML5 includes standardised support
for WebGL, which brings plugin-free and hardware-accelerated 3D to
the web, implemented right into the browser. All major web
browsers, such as Safari, Chrome, Firefox, Opera and Internet
Explorer (only from version 11, released in 2013), already support
it.
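As a small illustration of the z-value support mentioned above, the following Python sketch writes a minimal KML 2.2 document for a single 3D point that a virtual globe such as Google Earth can place at an absolute altitude; the placemark name and coordinates are invented for the example.

```python
# Minimal sketch: a KML placemark whose coordinates carry a z value
# (altitude in metres above sea level, via altitudeMode "absolute").
KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <Point>
      <altitudeMode>absolute</altitudeMode>
      <coordinates>{lon},{lat},{alt}</coordinates>
    </Point>
  </Placemark>
</kml>
"""

def placemark(name, lon, lat, alt):
    """Return a KML document for one 3D point (lon/lat in degrees, alt in metres)."""
    return KML_TEMPLATE.format(name=name, lon=lon, lat=lat, alt=alt)

doc = placemark("Excavation datum", 11.12345, 43.54321, 312.0)
print(doc)
```

Saved as a `.kml` file, such a document can be streamed or loaded directly into a virtual globe, which is exactly the delivery-and-visualisation focus (rather than 3D analysis) described in the text.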


Despite the several approaches for web-based 3D data management
and visualisation presented in the past years, there is still a
lack of definitive solutions, as no unique, reliable, flexible and
widely accepted package or implementation is available yet, in
particular with regard to large and complex 3D models.

What is clear is that, in terms of geodata access, the web has
evolved from showing just static documents (text and images) to a
more elaborate platform for running complex 3D applications. This
paradigm shift has been dramatic and has led to new issues and
challenges, which are now a current subject of research. Section
XXX of this chapter contains further details and examples
concerning this topic.

References

BRASSEL, K.E.; WEIBEL, R. 1988. A Review and Framework of
Automated Map Generalization. Int. Journal of Geographical
Information Systems, 2(3), pp. 229-244.

BURROUGH, P.A.; MCDONNELL, R.A. 1998. Principles of geographical
information systems. Oxford University Press, Oxford.

CLARKE, K.C. 1986. Advances in geographic information systems.
Computers, Environment and Urban Systems, 10(3-4), pp. 175-184.

COWEN, D.J. 1988. GIS versus CAD versus DBMS: What Are the
Differences? Photogrammetric Engineering and Remote Sensing,
54(11), November 1988, pp. 1551-1555.

DATE, C.J. 2003. Introduction to Database Systems. 8th edition.
Addison-Wesley. ISBN 0-321-19784-4.

HOBERMAN, S. 2009. Data Modeling Made Simple: A Practical Guide
for Business and IT Professionals. 2nd edition. Technics
Publications. ISBN 978-0977140060.

ISIKDAG, U.; ZLATANOVA, S. 2010. Interactive modelling of
buildings in Google Earth: A 3D tool for Urban Planning. In: (T.
Neutens, P. Maeyer, eds.) Developments in 3D Geo-Information
Sciences, Springer, pp. 52-70.

LI, Z. 2006. Algorithmic Foundations of Multi-Scale Spatial
Representation. Boca Raton, CRC Press.

MAGUIRE, D.J.; GOODCHILD, M.F.; RHIND, D.W. 1991. Geographical
Information Systems. Longman.

MCMASTER, R.B.; SHEA, K.S. 1992. Generalization in Digital
Cartography. Washington, DC, Association of American Geographers.


6
VIRTUAL REALITY &
CYBERARCHAEOLOGY


6.1 VIRTUAL REALITY, CYBERARCHAEOLOGY, TELEIMMERSIVE ARCHAEOLOGY

Maurizio FORTE

6.1.1 VIRTUAL REALITIES

We live in a cyber era: social networks, virtual communities,
human avatars, 3D worlds, digital applications, immersive and
collaborative games are able to change our perception of the world
and, first of all, the capacity to record, share and transmit
information. Terabytes, petabytes, exabytes and zettabytes of
digital data are constructing the human knowledge of future
societies and changing access to the past. If human knowledge is
rapidly migrating into digital domains and virtual worlds, what
happens to the past? Can we imagine the interpretation process of
the past as a digital hermeneutic circle (fig. 1)? The idea that a
digital simulation process could one day remake the past has
stimulated the dreams and fantasies of many archaeologists. We
know that this is impossible, but new cybernetic ways to approach
the interpretation process in archaeology are particularly
challenging, since they open multiple perspectives of research
otherwise not identifiable.

Digital interactive activities used in our daily life play an
essential role in managing and distributing information at a
personal and social level. We could say that humans typically
interact with different virtual realities, whether by personal
choice or by necessity, given that a consistent amount of
information is born digital and available only in digital format.
In the 90s many writers, artists and scholars (including the
author of this article; Forte 2000) discussed at length the
definition of virtual reality (VR: immersive, semi-immersive,
off-line, etc.), mostly in relation to archaeology and cultural
heritage. Nowadays the term is quite blurred, hybrid and elusive:
virtual realities represent many social diversifications of the
digital Real and are an essential part of human life. It is
possible to recognize and classify them by technology, brand,
purpose or functionality; but all of them are VR, open domains for
users, players and developers. The evolution of software and
digital information in cloud computing is a good example of
distributed virtual realities, where all the performance runs
online in a network and does not require end-user knowledge of the
physical location and configuration of the system that delivers
the services.

In digital archaeology the cybernetic factor is measurable in
terms of interaction and feedback; in a word, a trigger allowing
the creation and exploration of virtual worlds. The trigger can be
considered a metaphor of our embodiment in the cyber world:
clicking, triggering, interacting is the way to involve our minds
in the digital universe. Any environment, digital or real, could
be studied in a similar perceptual way (of course with some
limitations): analyzing all the relations between humans and
ecosystems. A remarkable factor in the evolution of cyber worlds
is identifiable in the informational capacity of digital worlds to
generate new knowledge (fig. 2), an autopoiesis (i.e. a capacity
to generate new meanings; Maturana and Varela 1980) of models,
data and metadata, which co-evolve in the digital environment.
Data and models generate new data and meanings by interaction and,
for example, by collaborative activities. The core of this process
is enaction (Maturana and Varela 1980), information achieved by
perception-action interaction with the environment: the mind
embodied in the environment. Knowledge created by enaction is
constructed on motor skills (real or virtual), which in virtual
worlds can derive from gestures, haptic interfaces, first- or
third-person navigation, multisensorial and multimodal immersion,
and so on.

It is likely unnecessary to describe VR at this point, because
there are too many VRs and all of them follow different principles
of embodiment and digital engagement: everything could be VR. In
past decades, for example, VR was mainly recognizable by its
degree of immersion and real-time interaction (at least 30/60
frames per second), but nowadays the majority of applications run
in real time and full immersion is just an option (and sometimes
not really relevant). What really changes our capacities of
digital/virtual perception is the experience, a cultural presence
in a situated environment



(Champion 2011). According to Subhasish DasGupta (Dasgupta 2006),
cultural presence can be defined as a feeling in a virtual
environment that people with a different cultural perspective
occupy or have occupied that virtual environment as a place. Such
a definition suggests that cultural presence is not just a feeling
of being there but of being in a there and then, not the cultural
rules of the here and now. "To have a sense of a cultural presence
when one visits a real site requires the suggestion of social
agency, the feeling that what one is visiting is an artifact,
created and modified by conscious human intention" (Dasgupta
2006). Cultural presence is the interpretation code, the
cybernetic map necessary for interpreting the past in relation
with space and time (for Gregory Bateson, the map is not the
territory; Bateson 1972). In the second cybernetics the study of
codes was aimed at understanding the relation between mind and
information, between objects and environment. This ecological
approach is helpful also in the evaluation of a virtual
reconstruction, since a cyber world has to be considered a digital
environment with related rules, affordances and features.
Ultimately we have to study these relations for a correct
comprehension of a virtual reconstruction or simulation. In fact a
virtual reconstruction with a wrong code can increase the distance
between present and past, disorienting the observer or the
interactor and making the models less authentic. The issue of the
authenticity of virtual worlds is quite complex, and it is
strongly linked with our cultural presence, knowledge and
perception of the past. If, for instance, we perceive a virtual
model as fake or too artificial, it is because it does not match
our cultural presence. In theory, people with different cultural
backgrounds can have a different cultural presence and a diverse
perception of the past, so that the vision of the past also
becomes extremely relative.

Figure 1. Digital Hermeneutic Circle

Figure 2. Domains of digital knowledge

This argument unfortunately risks pushing the interpretation
towards a certain level of relativism, because of all the
components involved in interpretation, simulation and
reconstruction. For instance, the sense of photorealism in a model
could be more convincing than a scientific non-realistic
reconstruction, because of the aesthetic engagement or the
embodiment of the observer (for example in the case of interaction
with avatars and other artificial organisms). Cultural presence,
experience, perception and the narrative of the digital space
create the hermeneutic circle of a cyber environment. The



level of embodiment of any application can determine the amount of
information acquired by a user or an observer during the
exploration of the digital space. For example, a third-person
walkthrough across an empty space, without adequate feedback from
the system, cannot produce a high level of embodiment, since the
engagement is very low. Human presence in virtual spaces also
determines the scale of the application and other spatial
relations.

If we analyze, for example, the first virtual reconstructions in
archaeology in the 90s, they reproduced mainly empty architectural
spaces, without any further social implication or visible life in
the space: they were just models. The past was represented as a
snapshot of 3D artificial models deprived of a multivocal and
dynamic sense of time. Dasgupta again: "So in this sense, cultural
presence is a perspective of a past culture to a user, a
perspective normally only deduced by trained archaeologists and
anthropologists from material remains of fossils, pottery shards,
ruins, and so forth" (Dasgupta, p. 97). Actually, cultural
presence should not be a perspective deduced only by
archaeologists and anthropologists: it should be transparent and
multivocal.

If in the 80s and 90s the term Virtual Reality was very common and
identified a very specific, advanced and new digital technology
(Forte 2000), it is now more appropriate to classify this domain
as virtual realities, where the interaction is the core but the
modalities of engagement, embodiment, interfaces and devices are
diverse and multitasking. In retrospect, VR could be considered a
missing revolution, in the sense that it did not have a relevant
social and technological impact, with very few outstanding results
in the last two decades. The Internet, for example, was a big
revolution; VR was not.

Nowadays an interesting example is represented by 3D games: very
sophisticated virtual environments, with a superb graphic capacity
to engage players in a continuous participatory and co-evolving
interaction, collaborative communication and digital storytelling.
They can expand the digital territory they occupy according to
participatory interaction. The ultimate scope of a game, in fact,
is the creation of a digital land to explore and settle. In the
game context the role of simple users is transformed into that of
active players; that is, the players themselves contribute to the
construction and evolution of the game. These new trends of
co-active embodiment and engagement have radically changed the
traditional definition of virtual environment/virtual reality as a
visualization space peopled by predetermined models and actions.
The game is an open collaborative performance with specific goals,
roles, communication styles and progressive levels of engagement.
The narrative of the game can produce the highest level of
engagement, a gamification of the user (Kapp 2012).

Serious games, cyber games and haptic systems are changing the
rules of engagement: the use, for example, of 3D devices such as
the Kinect as interfaces opens new perspectives in the domain of
cyber/haptic worlds and simulation environments. The interaction
does not come from mice, trackballs, data gloves or head-mounted
displays, but simply from human gestures. In other words, all the
interaction is based on natural gestures and not on a device: the
camera and the software recognize an action, which is immediately
operative within the digital world. This kind of kinesthetic
technology is able to cancel the computational frame separating
users and software, interaction and feedback; in short, the focus
is not on the display of the computer but on 3D software
interactions. One day the interpenetration of real and virtual
will create a sort of hybrid reality able to combine real and
virtual objects in the same environment.

Understanding the social and technological context of these
virtual realities is a necessary premise for introducing
cyberarchaeology and the problem of the digital reconstruction of
the past.

6.1.2 CYBERARCHAEOLOGY

In a recent book, Cyberarchaeology (Forte 2010), I have discussed
the term in the light of the last two decades of theory and
practice of digital archaeology. More specifically, in the 90s
Virtual Archaeology (Forte 1997) designated the reconstructive
process for the communication and interpretation of the past. This
digital archaeology was mainly reconstructive because of a deep
involvement of computer graphics and high-resolution renderings in
the generation of virtual worlds. The first 3D models of Rome,
Tenochtitlan, Beijing and Catalhuyuk were generally based on
evocative reconstructions rather than on a meticulous process of
documentation, validation and scientific analysis (Forte 1997).
The main outcome was a static, photorealistic model, displayed on
a screen or in a video but not interactive (Barceló, Forte et al.,
2000). The photorealism of the scene was the core of the process,
with a special emphasis on computer graphics and rendering rather
than on scene interaction. It is interesting to note that extreme
photorealism was a way to validate the models as authentic, even
if the term can be disputed in the domain of virtuality
(Bentkowska-Kafel, Denard et al., 2011).

In addition, every model was static and without any inter-relation
with human activities or social behaviors. For example, in the 90s
the virtual models of Rome and Pompei were just architecturally
empty spaces without any trace of human activity (Cameron and
Kenderdine 2010): a sort of 3D temporal snapshot of the main
buildings of the city. At that time there was scarce attention to
reproducing dynamic models and to including human life or
activities in virtual worlds. Virtual worlds were magnificent,
realistic and empty digital spaces.

It is interesting to point out that all these reconstructions were
made by collecting and translating archaeological data from
analogue format to digital: for example from



paper maps, drawings, iconographic comparisons, books and so on.
Here the process of reconstruction mediates between different data
sources of different formats and shapes. At the dawning of virtual
archaeology all the applications were model-centered and lacked a
consistent validation process able to prove the result of the
reconstruction. The effect of reconstructing the past was dominant
and very attractive: several corporations and international
companies invested in the 90s in the creation of digital
archaeological models, but for most of them the work was focused
much more on advertising the past than on reconstructing it. In
addition, at the beginning virtual archaeology was not easily
accepted in the academic world as a scientific field, being
considered mainly a tool for a didactic and spectacular
communication of the past. Not enough attention was given to the
new research questions arising from the virtual reconstruction
process or to the importance of new software and devices in
archaeological research. In this climate virtual archaeology was
looking for great effects, digital dreams able to open new
perspectives in the interpretation and communication process. Most
of the first applications were more technologically oriented than
aimed at explaining the multidisciplinary effort of interpretation
behind the graphic scene. The general outcome of the first digital
revolution of virtual archaeology was a certain skepticism. A big
issue was to recognize in such effective and astonishing models a
precise, transparent and validated reconstruction of the past: but
which past? The scientific evaluation of many virtual
reconstructions is not possible because of the lack of
transparency in the workflow of the data used. Moreover, the
majority of graphic reconstructions seemed too artificial, with
graphic renderings oriented more to showing the capabilities of
the software than to a correct interpretation of the data.

In a recent article (Forte, 2010) I have named this period the wow
era, because the excitement about the production of models was in
many cases much bigger than the accompanying scientific and
cultural discussion. This was, and still is, a side effect of the
use of digital technologies in archaeology: a strong technological
determinism where the technology is the core and the basis of any
application.

Even with several limitations and issues, however, the first
digital big bang in virtual archaeology represented the beginning
of a new era for the methodology of research in archaeology (Forte
2009). With some constraints, a virtual reconstruction is actually
able to advance different research questions and hypotheses, or
can lead the researcher to try unexplored ways of interpretation
and communication. However, this process works only if the virtual
reconstruction is the product of a complex digital workflow where
the interpretation is the result of a multivocal scientific
analysis (data entry, documentation, simulation, comparative
studies, metadata). Questions like how, how much, which material,
textures, structures, which phase, etc. stimulate new and more
advanced discussions about the interpretation, because they push
the researchers to go beyond a textual description. Visual
interactions and graphic simulations stimulate a deeper perceptual
approach to the analysis of data. For example, a very detailed
textual description of a site, a monument or an artifact can
suggest multiple hypotheses, but none of them translated into a
visual code. In addition, the archaeological language is often
cryptic, difficult and not easily understandable. Virtual
Archaeology started to use complex visual codes able to create a
specific digital grammar and to communicate much more information
than a traditional input.

Unfortunately, this great potential was not systematically used at
the beginning, because of the low involvement of the communities
of archaeologists at an interdisciplinary level (with very few
digital skills at the time), but also because of the difficulty of
managing such diverse information sources (most of them analogue)
in a single digital environment. Below is a schematic distinction
between the digital workflows generated by virtual archaeology and
by cyberarchaeology:

Virtual Archaeology workflow:
Data capturing (analog)
Data processing (analog)
Digitalization from analog sources (analog-digital)
Digital outcome: 3D static or pre-registered rendering

CyberArchaeology workflow:
Data capturing (digital)
Data processing (digital)
Digital input (from digital to digital)
Digital outcome: virtual reality and interactive environments
(enactive process)

It is important to consider that cyberarchaeology elaborates data
that are already born-digital: for example from laser scanners,
remote sensing, digital photogrammetry, computer vision,
high-resolution or stereo cameras. Cyberarchaeology can today
represent a research path of simulation and communication, whose
core is constituted by the ecological-cybernetic relations between
organism and environment and by informative-communicative
feedback. The cyber process creates affordances, and through them
we are able to generate virtual worlds by interactions and
inter-connections (Forte 2010). The workflow of data generated by
cyberarchaeology is totally digital and can make the
interpretation and reconstruction process reversible: from the
fieldwork to virtual realities. In more detail, cyberarchaeology
elaborates spatial data during the fieldwork, or generally in any
bottom-up phase, and re-processes them in simulation environments
where it is possible to compare bottom-up and top-down
interpretation phases. The integration of bottom-up
(documentation) and top-down (reconstruction) hermeneutic phases
is the necessary approach for digital interpretation within the
same spatial domain. In short, the cyber process involves a long
digital workflow, which crosses all the data in different
formulations and simulations in a continuous feedback between
existing information (data input), produced information (for



example reconstructed models) and potential information (what is
generated by simulation). The potentiality of the information is
the core of the cyber process: different potential interpretations
coexist in the same virtual environment, and the simulation itself
is able to create new and possibly more advanced interpretations.
The key is the capacity to generate comparable and interactive
models in sharable domains, integrating bottom-up and top-down
data. In fact, during a virtual simulation it is possible to
change and improve several factors, and different operators/users
can obtain diverse interpretations and ways to proceed.
Cyberarchaeology does not look for the Interpretation but for
achieving possible consistent interpretations and research
questions: how is more important than what, according to a digital
hermeneutic approach.

For example, in the case of the digital project of the Roman Villa
of Livia (Forte 2007) it was possible to create a complex
hermeneutic circle, starting with the 3D documentation of the site
by laser scanning and then proceeding with the potential
reconstruction/simulation of different phases of the monument,
integrated also with the reconstruction of some social activities
displayed through digital avatars (Livia, Augustus and other
characters). In this project NPCs (non-player characters) and PCs
(player characters) have been used in order to populate the
virtual world with actions, events and behaviors. NPCs and PCs
interact with each other, stimulating a dialogue between users and
digital environments and designing new digital affordances (a
digital affordance identifies the properties of a virtual object).

In cyberarchaeology the past cannot be reconstructed but
simulated. Cyberarchaeology is aimed at the simulation of the past
and not at its reconstruction: the simulation is the core of the
process. For this reason it is better to think of a potential
past, a co-evolving subject in human evolution generated by
cyber-interaction between worlds (Forte 2010). In short,
cyberarchaeology studies the process of simulation of the past and
its relations with present societies. Is this a revolutionary
change in theoretical archaeology? Perhaps a new methodological
phase after processualism and post-processualism? Is
cyberarchaeology a change in methodology, a change in paradigm, or
a reflection of a broader change (Zubrow 2010)? According to Ezra
Zubrow (Zubrow 2011), both processual and post-processual
approaches are now integrated into something new: "Cyber
archaeology bridges the gap between scientific and
interpretational archaeology for it provides testable (in the
sense of adequacy) material representations of either
interpretations or scientific hypotheses or discoveries" (Zubrow
2010). And further: "if post-processual archaeology will continue
to exist it will exist through cyber archaeology. It is in
cyberarchaeology where the interesting issues of cognition,
memory, individual difference, education etc are actually being
researched and actually being used" (Zubrow 2011).

6.1.3 TELEIMMERSIVE ARCHAEOLOGY


6.1.3.1 Introduction
One of the key problems in archaeology is that the production of data, from the fieldwork to the publication, communication and transmission, is unbalanced: no matter whether the data are digital or not, only a low percentage of them is used and distributed. In the long pipeline involving digging, data recording, documentation, archiving and publication there is a relevant dispersion of information, and the interpretation process is influenced too much by authorships and scholarships and not by a real multivocal critical perspective. The archaeologist alone arguing in front of his/her data is not just a stereotype: the circulation of data before publication is very limited, and it does not involve a deep and interactive analysis of all the information available (from the fieldwork or other sources). In short, it is difficult to make the entire pipeline of archaeological data available and transparent, and to share it adequately in the right context. For example, an artifact or a stratigraphic deposit could be interpreted differently if it were possible to compare in 3D its contextualization on site with its original functionality and properties. Documentation and interpretation are often separated and not overlapping in the same spatial domain. In fact, the usual result is that the interpretation is segmented in different domains, often not mutually interacting, and with enormous difficulties in making the research work collaborative. In archaeology collaborative activities start in the field and sometimes continue in the laboratory, but with limited capacities of data integration, data sharing and reversibility of the interpretation process. More specifically, in digital archaeology it is difficult to integrate, for example, 2D and

The Virtual Villa of Livia is a good example of the use of digital affordances: any virtual model is accompanied by multiple properties that describe and validate its creation. For example, frescos and paintings show which iconographic comparisons and data sources were used for the reconstruction; in the case of architectural elements the affordances display maps and surveys of other sites and monuments studied and analyzed for validating the process of reconstruction. The more potential simulations there are, the more it is possible to have multiple interpretations. The coexistence of different interpretations is one of the key features of the digital domain of virtual realities, and in this way it is possible to create new knowledge. How can this knowledge be distributed through virtual realities, and which virtual realities? (fig. 2).
How is it possible to approach the problem of
authenticity in a process of virtual reconstruction? How is
it possible to manage the link between data in situ and
reconstruction of the original past? The validation of a
digital process can show the consistency of the
simulation/reconstruction: in other words the digital
workflow has to be transparent (Bentkowska-Kafel,
Denard et al., 2011).
The most important distinction between virtual and cyber
archaeology is in the relation data entry feedback/
simulation: the interactive factor. From this point of view


3D MODELING IN ARCHAEOLOGY AND CULTURAL HERITAGE

Figure 3. 3D-Digging Project at Çatalhöyük

challenge the work in team is essential, as well as the quality and amount of information to study and test. The creation of very advanced digital labs is not easy in the humanities and is, in addition, very expensive and time consuming. Work in isolation does not pay off: it is important to work in a network, to share resources and, first of all, to multiply the faculty of interpretation worldwide.

3D data, shape files and 3D models, old and new data. It


is also very difficult to mitigate the destructive impact of
archaeological digging and to make reversible the virtual
recomposition of layers and units, after the excavation.
6.1.3.2 TeleArch: a Collaborative Approach
Discussions and arguments around virtual and cyberarchaeology should help to understand the controversial relationships between digital technologies and archaeology: risks, trends, potentialities, problems. But what comes next? What happens after we have digitally recorded and simulated archaeological excavations, and reconstructed hypothetical models of the past integrating documentation and interpretation processes? How can we imagine the future after virtual-cyber archaeology?

Teleimmersive Archaeology can be considered an advanced evolution of 3D visualization and simulation in archaeology: not a simple visualization tool but a virtual collaborative space for research, teaching and education (fig. 3); a network of virtual labs and models able to generate and to transmit virtual knowledge. It is named Teleimmersive because it can involve the use of stereo cameras or Kinect haptic systems in order to represent the users as human avatars and to visualize 3D models in immersive remote participatory sessions. Teleimmersive Archaeology tries to integrate different data sources and provide real-time interaction tools for the remote collaboration of geographically distributed scholars.

Collaborative research represents nowadays one of the most important challenges in any scientific field. Minds at work simultaneously, with continuous feedback and interaction, able to share data in real time, can co-create new knowledge and come up with different research perspectives. Networking and collaborative activities can change the methodological asset of archaeological research and communication. The intensive interactive use of 3D models in archaeology at different levels of immersion hasn't been monitored and analyzed: actually we don't know how much of an impact this can have on the generation of new, unexplored digital hermeneutic circles.

I would consider Teleimmersive Archaeology a simulation tool for the interpretation and communication of archaeological data. The tools allow for data decimation, analysis, visualization, archiving, and contextualization of any 3D dbase in a collaborative space. This kind of activity can start in the field during the excavation and can continue in the lab in the phase of post-processing and interpretation. Fieldwork archaeologists, for example, could discuss with pottery experts, geoarchaeologists, physical anthropologists, conservation experts, geophysicists and so on: the interpretation of an object, a site or a landscape is always the result of teamwork. At the end the

Any significant progress, any new discovery, can depend on the capacity of scientific communities to share their knowledge and to analyze the state of the art of a specific research topic in a very effective manner. In this


VIRTUAL REALITY & CYBERARCHAEOLOGY

Figure 4. Teleimmersion System in Archaeology (UC Merced, UC Berkeley)

Figure 5. Video capturing system for teleimmersive archaeology

and Kurillo 2010) aimed at creating a 3D immersive collaborative environment for research and education in archaeology, named TeleArch (Teleimmersive Archaeology, figs. 4-6). TeleArch is a teleimmersive system able to connect remote users in a 3D cyberspace by stereo cameras, Kinect cameras and motion tracking sensors (fig. 4). The system is able to provide immersive visualization, data integration, real-time interaction and remote presence. The software is based on the OpenGL-based open source Vrui VR Toolkit developed at the University of California, Davis. The latest tests show that it allows a real time

most important outcome in Teleimmersive archaeology is kinesthetic learning. In other words, the transmission of knowledge comes from the interactive embodied activity in virtual environments and through virtual models, while traditional learning comes through linear systems, such as books, texts and reports.
6.1.3.3 The System
In 2010 UC Merced (M. Forte) and UC Berkeley (G. Kurillo, R. Bajcsy) started a new research project (Forte



interface and content (figs. 3, 6). As standalone it can elaborate all the models in 3D, including GIS layers, metadata and dbases (fig. 7). The digital workflow of TeleArch is able to integrate all the data in 3D, from the fieldwork to the collaborative system, with the following sequence:
- Archaeological data can be recorded in 3D format by laser scanners, digital photogrammetry, computer vision, image modeling.
- The 3D models have to be decimated and optimized for real time simulations.
- 3D models have to be exported in obj format.
- They are optimized in Meshlab and uploaded to TeleArch.
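The decimation step in this sequence can be illustrated with a minimal vertex-clustering sketch in Python. This is not the TeleArch or Meshlab implementation (Meshlab typically uses quadric edge collapse); the grid cell size and the data layout are assumptions chosen only to show the idea of merging nearby vertices and dropping the triangles that collapse.

```python
# Minimal vertex-clustering decimation: snap vertices to a coarse grid,
# merge vertices that fall in the same cell, and drop degenerate or
# duplicated faces. Illustrative only; real pipelines use Meshlab.

def decimate(vertices, faces, cell=0.05):
    """vertices: list of (x, y, z); faces: list of (i, j, k) index triples."""
    cell_of = {}       # grid cell -> new vertex index
    remap = {}         # old vertex index -> new vertex index
    new_vertices = []
    for i, (x, y, z) in enumerate(vertices):
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in cell_of:
            cell_of[key] = len(new_vertices)
            new_vertices.append((x, y, z))
        remap[i] = cell_of[key]
    new_faces, seen = [], set()
    for a, b, c in faces:
        a, b, c = remap[a], remap[b], remap[c]
        canonical = tuple(sorted((a, b, c)))
        if len({a, b, c}) == 3 and canonical not in seen:  # skip collapsed/duplicate triangles
            seen.add(canonical)
            new_faces.append((a, b, c))
    return new_vertices, new_faces
```

Loading a dense scan and calling `decimate` with a larger `cell` trades geometric detail for real-time frame rate, which is exactly the balance the pipeline above has to strike before uploading models.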

Figure 6. A Teleimmersive work session

Ultimately, different geographically distributed users start to work simultaneously through a 3D network

rendering of 1 million triangles at a frame rate of 60 FPS (frames per second) on an NVidia GeForce GTX 8800 (typically 20/30 objects per scene). In the virtual environment, users can load, delete, scale, move objects or attach them to different parent nodes. 3D layers combine several 3D objects that share geometrical and contextual properties but are used as a single entity in the environment.
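The parent-node attachment and layer grouping just described can be sketched as a minimal scene graph. The class names and the translation-only transform below are simplifications for illustration, not the actual TeleArch data structures.

```python
# Sketch of the scene-graph behavior described above: objects can be
# attached to parent nodes, and a "3D layer" groups several objects so
# they are manipulated as a single entity. Names are illustrative.

class Node:
    def __init__(self, name):
        self.name = name
        self.children = []
        self.local_offset = (0.0, 0.0, 0.0)   # translation only, for brevity

    def attach(self, child):
        self.children.append(child)

    def world_offset(self, parent_offset=(0.0, 0.0, 0.0)):
        px, py, pz = parent_offset
        x, y, z = self.local_offset
        return (px + x, py + y, pz + z)

class Layer(Node):
    """Several 3D objects sharing contextual properties, moved as one."""
    def move(self, dx, dy, dz):
        x, y, z = self.local_offset
        self.local_offset = (x + dx, y + dy, z + dz)

stratum = Layer("unit_4040")                  # layer/unit names invented
for obj in ("figurine_12", "potsherd_3"):
    stratum.attach(Node(obj))
stratum.move(0.0, 0.0, -0.3)                  # lower the layer; children follow
```

Because each child resolves its position through its parent, moving or scaling the layer propagates to every object in it, which is what lets a stratigraphic unit behave as "a single entity in the environment".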

6.1.3.4 3D Interaction
TeleArch supports different kinds of 3D interaction: human avatars (1st person interaction), 3rd person and standalone. In 1st person operability the user can interact as in the real world within the space mapped by stereo cameras: he/she operates as a human avatar, since the system reconstructs the body motion in real time (figs. 5-6). In this case users can see each other using natural interfaces and body language. In 3rd person the user interacts collaboratively with data and models but without stereo cameras. Ultimately, TeleArch works also as standalone software, so that the user can interact individually with models and data in stereo vision.

The framework supports the Meshlab project format (ALN), which defines object filenames and their relative geometric relationship. Using a slider in the properties dialog, one can easily uncover the different stratigraphic layers associated with the corresponding units. TeleArch works as network or standalone software. In a network it can develop all the properties of TeleImmersion, with the ability to connect remote users sharing the same

Figure 7. Building 77 at Çatalhöyük: the teleimmersive session shows the spatial integration of shape files (layers, units and artifacts) in the 3D model recorded by laser scanning
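The ALN project format mentioned above is plain text, pairing each mesh filename with a 4x4 registration matrix. Below is a sketch of a reader/writer assuming the commonly seen layout (mesh count, then per mesh the filename, a `#` line, four matrix rows, a closing `#`, and a terminating `0`); this layout should be verified against files saved by Meshlab itself before relying on it.

```python
# Writer/reader for a Meshlab-style ALN project: each entry pairs a mesh
# filename with a 4x4 transform. The exact line layout used here is an
# assumption based on commonly seen ALN files, not a format specification.

IDENTITY = [[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 0], [0, 0, 0, 1.0]]

def write_aln(entries):
    """entries: list of (filename, 4x4 matrix). Returns the file text."""
    lines = [str(len(entries))]
    for name, matrix in entries:
        lines.append(name)
        lines.append("#")
        for row in matrix:
            lines.append(" ".join(str(v) for v in row))
        lines.append("#")
    lines.append("0")              # conventional terminator
    return "\n".join(lines)

def read_aln(text):
    lines = [l for l in text.splitlines() if l.strip()]
    count, i, entries = int(lines[0]), 1, []
    for _ in range(count):
        name = lines[i]
        i += 2                     # skip the filename line and the opening '#'
        matrix = [[float(v) for v in lines[i + r].split()] for r in range(4)]
        i += 5                     # four matrix rows plus the closing '#'
        entries.append((name, matrix))
    return entries
```

A project built this way lets an external tool reproduce the relative placement of the registered scans without re-running the alignment.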



The following tools are currently implemented:
- navigation tools: for navigation through 3D space;
- graphic user interface tools: for interaction with menus and other on-screen objects;
- measurement tools: for acquiring object geometry (e.g. dimensional and angular measurements);
- flashlight tool: for relighting parts of the 3D scene or pointing at salient features;
- annotation and pointing tools: for marking and communicating important or interesting features to other remote users;
- draggers: for picking up, moving and rotating objects;
- screen locators: for rendering mode manipulation (e.g. mesh, texture, point cloud);
- object selectors: for selecting objects to perform different actions related to the local functionality, such as changing the object rendering style (e.g. texture, no texture, mesh only), retrieving object metadata, focusing the current view on the object's principal planes, etc. (Forte and Kurillo 2010).

Figure 8. 3D Interaction with Wii in the teleimmersive system: building 77, Çatalhöyük

6.1.4 CASE STUDY: 3D ARCHAEOLOGY AT ÇATALHÖYÜK
6.1.4.1 Introduction
The project 3D Archaeology at Çatalhöyük (fig. 3) started in 2010 thanks to the collaboration between Stanford University (Archaeological Center) and UC Merced, with the scope to record, document (with different digital technologies) and visualize in virtual reality all the phases of archaeological excavation. Phase I (2010) of the project was mainly oriented to testing different technologies during the excavation (time of flight and optical laser scanners). In phase II (2011) the UC Merced team started from scratch the excavation of a Neolithic house (building 89), recording all the layers by time of phase scanners (fig. 9), optical scanners (fig. 12) and computer vision techniques (image modeling, figs. 10-11). In phase III (2012) the plan is to document the entire site (East Mound) with the integration of different technologies (scanners, computer vision, stereo cameras) and to continue the digital recording of the Neolithic house, focusing on the micro-deposits which backfill the floor. The final aim is to virtually musealize the entire archaeological site for the local visitor center and for TeleArch, the Teleimmersive system for archaeology at UC Merced and UC Berkeley (fig. 6).

Figure 9. Clouds of points by time of phase scanner (Trimble FX) at Çatalhöyük: building 77

percentage of the entire area has been excavated. The digital archaeological project aims to virtually reproduce the entire archaeological process of excavation using 3D technologies (laser scanners, 3D photogrammetry) on site and 3D Virtual Reality of the deposits of Çatalhöyük as they are excavated (fig. 8). In this way it is possible to make the excavation process virtually reversible, reproducing in the lab all the phases of digging, layer-by-layer, unit-by-unit (fig. 7). Unlike traditional 2D technology, the 3D reconstruction of deposits allows the archaeologist to develop a more complex understanding and analysis of the deposits and artifacts excavated. Digging is a destructive technique: how can we re-analyze and interpret what we excavate? The interpretation phase uses two approaches. One approach involves the interpretation and documentation during the excavation; the other approach is related to the reconstruction process after the excavation. Both phases are typically separate and not contextualized in one single research workflow. The documentation process of excavation is segmented in different reports, pictures, meta-data and archives; the interpretation comes from
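The virtual reversibility described in this section is essentially a stack discipline over recorded layers: digging pops a unit from the deposit, and the virtual re-composition pushes it back. A toy sketch follows; the class, fields and unit numbers are invented for illustration.

```python
# Toy model of a "virtually reversible" excavation: each recorded layer is
# pushed onto a stack; excavation pops the topmost unit, and restoring it
# re-composes the deposit in the virtual environment.

class VirtualTrench:
    def __init__(self):
        self.in_situ = []      # layers still "in the ground", bottom first
        self.removed = []      # excavation order, kept so digging can be replayed

    def record_layer(self, unit_id, model_path):
        self.in_situ.append({"unit": unit_id, "model": model_path})

    def excavate(self):
        layer = self.in_situ.pop()               # dig the topmost unit
        self.removed.append(layer)
        return layer

    def restore(self):
        self.in_situ.append(self.removed.pop())  # virtual re-composition

trench = VirtualTrench()
for unit in ("U.17457", "U.17456", "U.17455"):   # unit ids are made up
    trench.record_layer(unit, unit + ".obj")
trench.excavate()                                # removes the top unit
trench.restore()                                 # and puts it back
```

Replaying `excavate`/`restore` calls in sequence is what makes it possible to step through the digging layer-by-layer in the lab, long after the physical deposit is gone.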

Çatalhöyük is considered for many reasons ideal for addressing complex research methodological questions. More than thirty years of studies, archaeological fieldwork and research have been devoted to investigating the ideology, religion, social status, architectural structures, art, environment and landscape of the site, producing several publications, books and other media (http://www.catalhoyuk.com/), but just a small



The site rapidly became famous internationally due to the large size and dense occupation of the settlement, as well as the spectacular wall paintings and other art that was uncovered inside the houses. Another distinguishing feature of Çatalhöyük was the nature of the houses: they are complex units involving ritual and domestic activities in the same space. In particular, the diachronic architectural development of the site is still very controversial and needs more studies and analyses in relation to the landscape and the symbolic, ritual and social use of the buildings.
Since February 2009 the site has been inscribed in the tentative list of UNESCO World Heritage Sites. The specific critical conditions of the houses (mud-brick dwellings, earth floors, artifacts, etc.) and the difficulty of preserving all the structures in situ make it urgent to document digitally all the structures before they collapse or disappear.

Figure 10. Image modeling of building 89 at Çatalhöyük

6.1.4.3 Research Questions
The project can open new perspectives at the level of research methodology in archaeology, generating a more advanced digital pipeline from the fieldwork to a more holistic interpretation process in the use of integrated spatial datasets in three dimensions. More specifically, it should be able to define a new digital hermeneutics of archaeological research and new research questions. One of the key points of the project, in fact, is the migration of 3D data from the digital documentation in the field to a simulation environment and, one day, to an installation in a public visitor center.
In fact, in this case the 3D documentation of the new excavation areas could be linked and georeferenced with layers and datasets recorded in the past, reconstructing at the end a complete 3D map of the site and of the entire stratigraphic context (figs. 12-13). In that way, it will be possible to redesign the relative chronology of the site and the several phases of settlement. In fact, the reconstruction of the Neolithic site across thousands of years of continuous occupation and use is still very difficult and controversial. In addition, the 3D recontextualization of artifacts in the virtual excavation is also important for the interpretation of different areas of any single house, or for studying possible social activities perpetuated within the site.

Figure 11. Image modeling of building 77 at Çatalhöyük

comparative studies and analyses of all the documentation recorded in different files and archives. TeleArch aims at the integration of both phases of documentation (bottom-up) and reconstruction (top-down) in the same session of work and interpretation.
6.1.4.2 The site
Çatalhöyük lies on the Konya plain on the southern edge of the Anatolian Plateau, at an elevation of just over 1000 m above sea level. The site is made up of two mounds: Çatalhöyük East and Çatalhöyük West (Hodder 2006). Çatalhöyük East consists of Neolithic deposits dating from 7400-6000 B.C., while Çatalhöyük West is almost exclusively Chalcolithic (6000-5500 B.C.). Çatalhöyük was discovered in the 1950s by the British archaeologist James Mellaart (Hodder 2000), and it was the largest known Neolithic site in the Near East at that time. From 1993 up to today the site has been excavated by Ian Hodder with the collaboration of several international teams, experimenting with multivocality and reflexivity methods in archaeology (Hodder 2000).

Other important research questions regard the sequence and re-composition of wall art paintings and, in general, the decoration of buildings with scenes of social life, symbols or geometrical shapes. For example, in building 77 it was possible to recompose the entire sequence of paintings after four years of excavation, but this entire sequence is not visible on site anymore, since the paintings are very fragile and cannot be preserved in situ (figs. 13-16). In short, the only way to study them is in a virtual environment with all the links to their metadata and stratigraphic contexts (figs. 7, 12, 13).



Figure 12. 3D layers and microstratigraphy in the teleimmersive system (accuracy < 1 mm): midden layers at Çatalhöyük. This area was recorded by optical scanner (Minolta 910)

Figure 15. Building 77 after the removal of the painted calf's head. The 3D recording by image modeling allows the reconstruction of the entire sequence of decoration (by different layers)

Figure 13. Virtual stratigraphy of building 89, Çatalhöyük: all the layers recorded by time of phase laser scanner (Trimble FX)

Figure 16. Building 77: all the 3D layers with paintings visualized in transparency (processed in Meshlab)

Figure 14. Building 77 reconstructed by image modeling (Photoscan). In detail: hand wall painting and painted calf's head above niche



The combined use of the 3D stereo camera and the stereo video projector has allowed the visualization of 3D archaeological data and models day by day, stimulating a debate on site about the possible interpretations of buildings, objects and stratigraphy.

6.1.4.4 Collaborative Research at Çatalhöyük
Since the system is still a prototype, it is too early for a significant analysis of its performance and for discussing the first results in depth. Most of the time was invested in the implementation, testing and optimization of data, and in the creation of a new beta version of the software running also as a standalone version. A bottleneck is the number of users/operators the system can involve simultaneously: current experiments were tested with the connection of two campuses. The expandability of the system is crucial for long-term collaborative research and also for getting adequate results in terms of interpretation and validation of models and digital processes. In fact, in Teleimmersive archaeology the interpretation is the result of an embodied participatory activity engaging multiple users/actors in real time interaction in the same space. The participation of human avatars in teleimmersion has the scope to augment the embodiment of the operators, to use natural interfaces during the interaction and to perceive all the models at scale. This cyberspace then augments the possibilities to interpret, measure, analyze, compare, illuminate and simulate digital models according to different research perspectives while sharing models and data in the same space.

With the time of flight scanner, Buildings 80, 77 and 96 and all the general areas of excavation in the North and South shelters were recorded and documented. With the optical scanner NextEngine, 35 objects belonging to different categories were recorded in 3D: figurines, ceramics and stone. Finally, all these models were exported for 3D sessions in TeleArch.
6.1.4.6 Fieldwork 2011
The experience acquired in 2010 made it possible to address the strategy of data recording differently in 2011. In fact, in 2010 timing was a very critical factor in laser scanning during the archaeological excavation, and the use of optical scanners (Minolta 910) was not appropriate for capturing stratigraphy and layers (optical scanners have trouble working outdoors).
In addition, the accuracy produced by the Minolta scanner, even if very valuable, was actually excessive (a range of a few microns) for the representation of stratigraphic layers (fig. 12). The Minolta 910, in fact, like many other optical scanners, does not work properly in sunlight, and because of that its use in 2010 was limited to a small surface of 1 sq m under a dark tent. However, the final models produced in 2010 were very interesting because of the very detailed features represented in the sequence of stratigraphic units and in relation to the sequence of midden layers.

In the case of Çatalhöyük, the Teleimmersive system aims to recreate virtually the entire archaeological process of excavation. Therefore all the data are recorded originally by time-of-flight and optical scanners and then spatially linked with 3D dbases, alphanumeric and GIS data. Two fieldwork seasons, 2010 and 2011, were scaled and implemented for TeleArch, with all the 3D layers and stratigraphies integrated with dbases and GIS data (figs. 7, 8, 13). All the 3D models have to be aligned and scaled first in Meshlab and then exported to TeleArch.

Therefore in 2011 we opted for an integrated system able to shorten dramatically the phases of post-processing and to allow a daily 3D reconstruction of the entire excavation trench. It is important, in fact, to highlight that timing is a crucial factor in relation to the daily need to discuss the results of the 3D elaboration and the strategy of excavation.

6.1.4.5 Fieldwork 2010
The fieldwork activity had the twofold scope of excavating a multistratified deposit, namely a midden area (East mound, Building 86, Spaces 344, 329, 445), and of documenting the whole excavation by 3D laser scanners, computer vision and 3D stereoscopy. For this scope we used a triangulation scanner for the microstratigraphy (Minolta 910), an optical scanner for the artifacts (NextEngine) and a time of flight/phase scanner for the buildings and the largest areas of excavation (Trimble CX). The use of different technologies was necessary for applying a multiscale approach to the documentation process. In fact, scanners of different accuracy are able to produce different kinds of 3D datasets with various levels of accuracy. More specifically, a special procedure was adopted for the data recording of the stratigraphic units: every single phase and surface of excavation was recorded by the triangulation scanner after cleaning and after the traditional manual archaeological drawing. The contemporaneous use of both methodologies was fundamental in order to overlap the logic units of the stratigraphic sequence (and the related perimeter) on their 3D models.

Differently from 2010, we adopted two new systems working simultaneously: a new time of phase scanner (Trimble FX) and a combination of camera-based software for computer vision and image modeling (Photoscan, Stereoscan, Meshlab). The Trimble FX is a time of phase shift scanner able to generate 216,000 pt/sec, with a 360° x 270° field of view; it is a very fast and effective scanner with the capacity to generate meshes during the data recording, saving time in the phase of post-processing. The strategy in the documentation process was to record simultaneously all the layers/units in the sequence of excavation using laser scanning and computer vision. At the end of the season we had generated 8 different models of the phases of excavation by computer vision (3D camera image modeling) as well as by laser scanning. The scheme below shows the principal features and differences between the two systems; laser scanning requires a longer



Table 1.

post-processing, but it produces data of higher quality. Computer vision allows immediate results, making it possible to follow the excavation process in 3D day by day (but not with the same geometrical evidence as the laser scanner). The digital workflow used during the excavation was the following:

integrated with all the 2D maps, GIS layers and archaeological data.
Ultimately, and differently from 2010, the post-processing phase was very quick and effective for both laser scanning and computer vision. In fact, the models recorded with the above mentioned technologies were ready and available for 3D visualization a few hours after data capturing. The speed of this process allowed a daily discussion on the interpretation of the archaeological stratigraphy and on the 3D spatial relations between layers, structures and phases of excavation. The excavation of an entire building (B89) allowed testing of the system in one single context, so as to produce a 3D multilayered model of stratigraphy related to an entire building. In addition, a 3D model of the painted wall of Building 80 was created by 3D computer vision in order to study the relations between the micro-layers of frescos and the surface of the wall.

- Identification of archaeological layers and recognition of shapes and edges.
- Cleaning of the surface (in the case of computer vision applications).
- Registration of targets by total station (so that all the models can be georeferenced with the excavation grid).
- Digital photo-recording for computer vision.
- Digital photo-recording for laser scanning.
- Laser scanning.
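The per-unit recording sequence above can be encoded as an ordered checklist, so that no step is silently skipped for any stratigraphic unit. A trivial sketch: the step strings paraphrase the workflow in the text and the unit id is invented.

```python
# The per-unit recording workflow as an ordered checklist. Step names
# paraphrase the sequence in the text; the data structure is illustrative.

WORKFLOW = (
    "identify layer, recognize shapes and edges",
    "clean surface (computer vision only)",
    "register targets by total station",
    "photo recording for computer vision",
    "photo recording for laser scanning",
    "laser scanning",
)

def missing_steps(unit_id, done_steps):
    """Return the workflow steps not yet completed for this unit."""
    done = set(done_steps)
    return [step for step in WORKFLOW if step not in done]

# Hypothetical unit, recorded up to target registration:
pending = missing_steps("U.20210", WORKFLOW[:3])
```

Keeping the sequence explicit matters because the order is not arbitrary: targets must be registered before either photo set is captured, or the resulting models cannot be georeferenced.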

The last part of the work was the 3D stereo implementation of the models for the OgreMax viewer and for Unity 3D, in order to display them in stereo projection. For this purpose we used the DLP projector Acer H5360 in association with the NVIDIA 3D Vision kit and a set of active stereo glasses. The buildings B77 and B89 (during the excavation) were implemented for stereo visualization in real time (walkthrough, flythrough, rotation, zooming and panning). Thanks to the portability of this system, the

The digital workflow for the computer vision processing is based on: 1) photo alignment; 2) construction of the geometry (meshes); 3) texturing and orthophoto generation. The accuracy measured on the 2011 computer vision models was around 5 mm.
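Georeferencing a model onto the excavation grid from surveyed targets is commonly computed as a best-fit rigid transform (the Kabsch algorithm): the rotation and translation that map the target coordinates measured in the model frame onto the same targets surveyed by total station. The sketch below is generic, with made-up coordinates, and is not the procedure actually implemented in Photoscan.

```python
import numpy as np

# Kabsch estimation of the rigid transform (rotation R, translation t)
# mapping model-frame target coordinates onto surveyed grid coordinates.

def rigid_transform(model_pts, grid_pts):
    """Both inputs: (N, 3) arrays of corresponding target coordinates."""
    P, Q = np.asarray(model_pts, float), np.asarray(grid_pts, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)              # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                     # proper rotation (no reflection)
    t = cq - R @ cp
    return R, t

# Four targets measured in both frames (coordinates are made up):
model = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
grid = [[10, 5, 2], [10, 6, 2], [9, 5, 2], [10, 5, 3]]   # 90° turn + shift
R, t = rigid_transform(model, grid)
```

Once `R` and `t` are known, every vertex of every model from that scan session can be mapped into the excavation grid with `R @ p + t`, which is what keeps all the daily 3D recordings mutually oriented.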
The use of georeferenced targets on site was implemented for the automatic georeferencing of the 3D models within the excavation grid. In that way all the 3D information recorded during the excavation is perfectly oriented and



different feedback compared with the digital ones. In some circumstances the virtual object carries denser information and is comprehensible from perspectives not necessarily reproducible in the real world.

stereo projection was available in the seminar room for the whole duration of the excavation.

6.1.5 CONCLUSIONS

Humans, as visual animals, have constructed their hermeneutic skills throughout several generations of genetic and cultural evolution. Digital materiality is a new domain of hermeneutics, with different rules, spaces and contexts. The informative content of a complex digital representation could be more than authentic: it is hyper-real. This hyper-real archaeology elaborates in the end much more data and information than in the past: this new digital materiality has therefore to be studied with a different hermeneutic approach.

The future of digital archaeology is in the interactive kinesthetic process: enactive embodiments of data, models, users, human avatars; a continuous work in progress. If in the past the attention was focused on the validation of models and environments, the future of archaeological information is in the digital performance between operators in shared environments and cyber worlds. We could say: "performing the past" rather than reconstructing it. The virtual performance represents a new digital frame within which the archaeological interpretation can be generated and transmitted.

This new digital phase of research and communication permits a review of the entire digital workflow, from data capturing to the final documentation and reconstruction process. The integrated use of different technologies of data capturing and post-processing then generates a more sophisticated pipeline of digital interpretation, thanks to the comparison among models, meshes, geometry and clouds of points. In addition, the considerable speed of the whole digital process is able to increase the capacities of interpretation during the excavation and, more specifically, to simulate the entire excavation in 3D.

If at the beginning of virtual archaeology the goal was to reconstruct the past (mainly in computer graphics), at present the past can be simulated in virtual environments, re-elaborated on the Internet, and transmitted by different social media. This last digital phase, born-digital, is completely different: the bottom-up phase during the fieldwork, the documentation process and the 3D modeling produce an enormous quantity of data, of which just a low percentage is really used and shared. Instruments, tools and software for data capturing have substantially increased the capacity of digital recording and real time rendering, but unfortunately there are not yet adequate instruments for interpretation and communication. The interpretation is often hidden somewhere in or behind the models, but we don't have the key for discovering or extrapolating it from the digital universe. The research work of the last two decades has concentrated more on recording tools and data entry than on accurate analyses and interpretations. The result is that too much information and too little have a similar effect: there is no way to interpret it correctly.

Ultimately, teleimmersive archaeology is still in an embryonic stage of development, but collaborative minds at work simultaneously in the same immersive cyberspace can potentially generate new interpretations and simulation scenarios never explored before.

Acknowledgements
The teleimmersive archaeology project was supported by the Center for Information Technology Research in the Interest of Society (CITRIS) at the University of California, Berkeley. We also acknowledge financial support from NSF grants 0703787 and 0724681, HP Labs and The European Aeronautic Defence and Space Company (EADS) for the implementation of the teleimmersion software. We thank Ram Vasudevan and Edgar Lobaton for the stereo reconstruction work at the University of California, Berkeley. We also thank Tony Bernardin and Oliver Kreylos from the University of California, Davis for the implementation of the 3D video rendering.

One more thing to consider in this new dimension of virtual interaction in archaeology is digital materiality. The cyber world is now populated by digital artifacts and affordances: they create networks of a new material culture, totally digital. The multiplication of affordances in a virtual environment depends on interaction design and on the digital usability of the models. There are therefore new material contexts to analyze: shall we create specific taxonomic approaches for this domain? New classes and categories of digital materiality? When we analyze, for example, a 3D model of a statue or a potsherd and compare it with the original, we assume that the 3D model is a detailed copy of a real artifact. Is that true? Actually it is not: a digital artifact is a representation of an object simulated by different lights, shadows and contexts, and measurable at scale; in other words, it is a simulated model, not a copy or a replica. Of course there are several similarities between the digital artifact and the real one, but we cannot use the same analytical tools. Hands-on experiences on real artifacts reproduce a

For the project 3D Archaeology at Çatalhöyük, special thanks to all the students and participants involved in the fieldwork and lab post-processing, and in particular Fabrizio Galeazzi (2010 season), Justine Issavi (2010-11), Nicola Lercari (2011) and Llonel Onsurez (2010-11).

Bibliography
BARCELÓ, J.A.; FORTE, M.; SANDERS, D. (eds.) 2000. Virtual reality in archaeology. Oxford, Archaeopress (BAR International Series 843).


VIRTUAL REALITY & CYBERARCHAEOLOGY

BATESON, G. 1972. Steps to an ecology of mind. New York, Ballantine Books.

FORTE, M. (ed.) 2010. Cyber-archaeology. Oxford, Archaeopress.

BENTKOWSKA-KAFEL, A.; DENARD, H. et al. 2011. Paradata and transparency in virtual heritage. Farnham, Surrey; Burlington, VT, Ashgate.


CAMERON, F. and KENDERDINE, S. (eds.) 2010. Theorizing digital cultural heritage: a critical discourse. Cambridge, MA; London, MIT Press.

FORTE, M. and KURILLO, G. 2010. Cyberarchaeology: experimenting with teleimmersive archaeology. In: Proceedings of the 16th International Conference on Virtual Systems and Multimedia (VSMM 2010), 20-23 October 2010: 155-162.

CHAMPION, E. 2011. Playing with the past. Human-Computer Interaction Series. London; New York, Springer.



HODDER, I. (ed.) 2000. Towards reflexive method in archaeology: the example at Çatalhöyük. BIAA Monograph 28. Cambridge, McDonald Institute for Archaeological Research, University of Cambridge.

DASGUPTA, S. 2006. Encyclopedia of virtual communities and technologies. Hershey, PA, Idea Group Reference.

HODDER, I. 2006. Çatalhöyük: the leopard's tale: revealing the mysteries of Turkey's ancient town. London, Thames & Hudson.

FORTE, M. 1997. Virtual archaeology: re-creating ancient worlds. New York, Abrams.

KAPP, K.M. 2012. The gamification of learning and instruction: game-based methods and strategies for training and education. San Francisco, Jossey-Bass.

FORTE, M. 2000. About virtual archaeology: disorders, cognitive interactions and virtuality. In: Barceló, J.; Forte, M.; Sanders, D. (eds.), Virtual reality in archaeology. Oxford, Archaeopress (BAR International Series 843): 247-263.

MATURANA, H.R. and VARELA, F.J. 1980. Autopoiesis and cognition: the realization of the living. Dordrecht; Boston, D. Reidel.

FORTE, M. 2007. La villa di Livia: un percorso di ricerca di archeologia virtuale. Roma, L'Erma di Bretschneider.

ZUBROW, E. 2010. From archaeology to I-archaeology: cyberarchaeology, paradigms, and the end of the twentieth century. In: Forte, M. (ed.), Cyber-archaeology. Oxford, Archaeopress: 1-7.

FORTE, M. 2009. Virtual archaeology: communication in 3D and ecological thinking. In: Frischer, B. and Dakouri-Hild, A. (eds.), Beyond illustration: 2D and 3D digital technologies as tools for discovery in archaeology. Oxford, Archaeopress: 31-45.

ZUBROW, E.B.W. 2011. The Magdalenian household: unraveling domesticity. Albany, NY, State University of New York Press.


3D MODELING IN ARCHAEOLOGY AND CULTURAL HERITAGE

6.2 VIRTUAL MUSEUMS

Sofia PESCARIN
cultural interest that are accessed through electronic media. A virtual museum does not house actual objects and therefore lacks the permanence and unique qualities of a museum in the institutional definition of the term. In fact, most virtual museums are sponsored by institutional museums and are directly dependent upon their existing collections. An interesting point is made by Antinucci in the previously cited paper. He states that there is an easy exercise that can be done: defining what is not a virtual museum. He proposed that it is not the real museum transposed to the web (or to any electronic form), nor an archive of, database of, or electronic complement to the real museum, since these are not meant for communication, and finally nor what is missing from the real museum. He finally underlines that visual narrative is the best means to effectively communicate about objects in a museum to the ordinary visitor (Antinucci, 2007: 80-81).

In this chapter the author will focus on virtual museums, an application area related to virtual heritage. She will analyse what a virtual museum is, together with its characteristics and categories. The section closes with four examples of virtual museums.

6.2.1 MUSEUMS AND VIRTUAL MUSEUMS

The term Virtual Museum has become more and more widely used in the last 10 years, but it has also been adopted in very different ways, referring to online museums, 3D reconstructions, interactive applications, etc. As F. Antinucci wrote in 2007, this fact immediately becomes apparent when we observe the various entities that are called by this name and realize that we are dealing with a wide variety of very different things, often without any theory or concept in common [Antinucci, 2007: 79].

Coming back to the ICOM definition, there are five interesting characteristics common to a museum and a virtual museum: 1) there is often an institution behind the museum; 2) heritage (tangible and intangible) forms the collections; 3) it always has a communication system; 4) it is created to be accessed by a public; 5) it is built following one or more scopes (education, study, enjoyment).

Virtual Museum is made of two terms: virtual and museum. The definition of museum is widely accepted and approved. ICOM's updated definition refers to a museum as a non-profit, permanent institution in the service of society and its development, open to the public, which acquires, conserves, researches, communicates and exhibits the tangible and intangible heritage of humanity and its environment for the purposes of education, study and enjoyment (http://icom.museum/who-we-are/the-vision/museum-definition.html). On the other hand, the term virtual is the real cause of the lack of a unique definition of virtual museum, since in the ICT community it is connected to interactive real-time 3D, while the Cultural Heritage community uses it in a broader and more epistemological way, often including any reconstruction, independently of the presentation layer.

Virtual Museums aim at creating a connection to the remains of our past and the knowledge of them, with a fundamental focus on users. They are communication media developed on top of different technologies, whose goal is to build a bridge between Heritage and People: to let users experience the future of their past. Virtual Museums are the application domain of several different lines of research: content-related research, cognitive sciences, ICT and, more specifically, interactive digital media, Technology Enhanced Learning (TEL), and serious and educational games. They are aggregations of digital content (various kinds of multimedia assets: 3D models, audio, video, texts) built on top of a narrative or descriptive

So, what is a Virtual Museum? Although the Encyclopædia Britannica refers to a virtual museum as a collection of digitally recorded images, sound files, text documents, and other data of historical, scientific, or



case of the use of an immersive workbench, through either sound or vision. This is the case of head-mounted displays, wearable haptics and retinal displays, where either sight or hearing is involved.

story, with a presentation layer which defines the specific ICT solution and the behaviours. This definition is part of the work-in-progress activities of V-MUST.NET (www.v-must.net), the Network of Excellence on Virtual Museums funded by the European Commission.

6) Distribution: A further category regards how the virtual museum is distributed. In fact it might not be distributed at all, as in the case of an on-site installation inside a museum not connected to the Internet, or it might be distributed.

6.2.2 CATEGORIES OF VIRTUAL MUSEUMS

7) Scope: An important distinction regards the aim, the scope for which a virtual museum has been developed. This issue has an impact on the application itself. In the recent analysis within the V-MUST project we have distinguished six possible scopes: education, edutainment, entertainment, research, enhancement of the visitor experience and promotion. In an educational virtual museum the main focus is on specific instructional purposes, while edutainment, following Chen and Michael [Chen, Michael 2006], is related to serious games, where fun and entertainment are strictly tied to the transmission of specific information and to fostering learning. In entertainment, the focus lies in the fun and enjoyment that are at the base of the development, while research purposes are addressed when testing or analysing specific aspects of interest to a restricted scientific community. Virtual museums may also be developed to enhance the visitor experience of a site or museum, or simply to promote or advertise a specific cultural heritage asset.

As we have seen, the definition of virtual museum is quite wide. This is the reason why there are several types of virtual museums. They can in fact be defined according to their: 1) content; 2) interaction technology; 3) duration; 4) communication; 5) level of immersion; 6) distribution; 7) scope; or 8) sustainability level. These eight categories have a direct implication for the technical and digital asset development.
1) Content: If we consider their content, there are several types of virtual museums, such as archaeology, history, art, ethnography, natural history, technology and design virtual museums.
2) Interaction: If we consider interaction technology, there are two main types: interactive virtual museums, which use either device-based interaction (as in the case of a mouse or joystick) or natural interaction (speech- or gesture-based), and non-interactive virtual museums, which provide the user with passive engagement.

8) Sustainability: Finally, there is a category that is more and more perceived as important to define and that regards the level of sustainability of a project, meaning its capacity to be persistent and durable over time. In fact, the life-cycle of many installations is still very limited. Furthermore, important long-lasting projects are today completely lost and inaccessible, due in part to the lack of preservation policies. An entire digital patrimony is in danger owing to the lack of a shared methodology for preserving content. This danger is felt all the more when the digital patrimony is the only testimony of heritage that has disappeared or is at risk (e.g. the Lascaux caves). In such cases, the need to pair the real artefact or site with virtual museum installations is particularly evident. In the case of virtual museums, this characteristic can be verified through their level of re-usability or exchangeability (in terms of software, hardware or digital multimedia assets), and it is connected with the approach followed (open source, open formats).

3) Duration: A virtual museum can be installed and accessible continuously, online or inside a museum (permanent virtual museum), or it may be playable only for a limited time (temporary or periodic virtual museums). These two cases have different needs and requirements, especially related to maintenance and sustainability.
4) Communication: An interesting distinction in virtual museums regards the communication style. Although there are several types of narrative, a basic distinction can be made among exposition, description and narration. A narration implies a sequence of events which are reported to a receiver in a subjective way. In exposition or description, the concepts are defined and interpreted so as to inform.
5) Level of immersion: Following Carrozzino and Bergamasco [Carrozzino & Bergamasco, 2010], there are three main categories related to the level of immersion: high immersion, low immersion and not-immersive. While for not-immersive virtual museums the concept is quite clear, a distinction should be made between high and low immersion. In the first case, we are dealing with virtual reality systems where both the visual and audio systems are designed to immerse users deeply in the digital environment, through 3D stereo projected on a large screen and multichannel sound (such as in a CAVE). In the second case a lower level is guaranteed, such as in the
In the following section, examples of various types of virtual museums are described. In order to simplify the possible interconnections between the categories, we have defined four main types of virtual museums, representing a cross-selection:
On-site Virtual Museum
Online Virtual Museum
Mobile Virtual Museum or Micro Museum
Non-interactive Virtual Museum
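The eight classification axes above lend themselves to a simple machine-readable record. The sketch below is a hypothetical illustration (the class name, fields and values are the author's of this example, not part of any cited project); it tags the Scrovegni Chapel case study exactly as the bracketed category lists in the following sections do:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical record for the eight classification axes discussed above.
# Field values follow the bracketed tags used for the case studies below.
@dataclass
class VirtualMuseum:
    name: str
    content: str          # 1) archaeology, history, art, ethnography, ...
    interaction: str      # 2) interactive (device/natural) or non-interactive
    duration: str         # 3) permanent, temporary, periodic
    communication: str    # 4) exposition, description, narration
    immersion: str        # 5) high, low, not-immersive
    distribution: str     # 6) on-site (not distributed), online, mobile
    scope: List[str] = field(default_factory=list)  # 7) education, ...
    sustainability: str = "unknown"                 # 8) e.g. partially sustainable

scrovegni = VirtualMuseum(
    name="Scrovegni Chapel (2003)",
    content="art history",
    interaction="interactive VR",
    duration="permanent",
    communication="descriptive",
    immersion="not-immersive",
    distribution="on-site (not distributed)",
    scope=["enhancement of visitor experience"],
    sustainability="partially sustainable",
)
print(scrovegni.name, "->", scrovegni.distribution)
```

Such a record makes the categories queryable (e.g. filtering all permanent, on-site installations), which is one practical reason the chapter insists the eight axes have a direct implication for technical and asset development.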



Figure 1. The virtual museum of the Scrovegni Chapel (Padova, IT, 2003-): the VR installation at the Civic Museum and the cybermap which is part of the VR application (courtesy of CNR ITABC [M. Forte, E. Pietroni, C. Rufa] and Padova city council [D. Banzato])

University of Padova, funded by MIUR and the Veneto Region. It is part of a wider project whose goal is to study, reconstruct and promote the thermal landscape typical of the Euganean Hills, around Montegrotto Terme (Padova, Italy: www.aquaepatavinae.it). The known elements of this territory are diverse and spread over a wide area: there is only one primary tourist archaeological site; some sites are still under excavation and cannot yet be seen, while others are recognizable only through small scattered pieces of evidence identified through archaeological and geological surveys, historical studies and remote sensing of the whole area. The project is directed mainly at non-expert users. An online interactive application has been set up and integrated in the main website. A plug-in approach was used to enable full integration inside the browser of the 3D interaction with the complex scenes (DEM, geoimages, 3D models obtained with RBM and IBM techniques and with non-reality-based modelling, vegetation, water effects, etc.), based on the OpenSceneGraph library (www.openscenegraph.org) and the OSG4WEB project [Fanini et al., 2011; Calori et al., 2009]. The user interface and the interaction of the plug-in were improved: the navigation system is personalized according to the type of exploration, the scale of the landscape and the visualisation device (fly, walk; touch-screen, natural interaction); a specific user interface was developed through which the user can upload different models (interpreted, simulated, reconstructed) onto the landscape, in some cases as transparent volumes above the original 3D archaeological remains; a loading system for information in 3D was added, along with a dynamic real-time system for adding plants directly to the scene, useful in the case of garden reconstructions. Although OSG4WEB is still an ongoing open-source project, it is a good solution, at least in the open-source panorama, for creating an online virtual museum made of complex scenes, based on geographical datasets and complex interactions. Nevertheless, a non-plug-in

6.2.2.1 Scrovegni Chapel (2003)

[On-site virtual museum. Categories: art history, interactive VR, descriptive, permanent, not-immersive, partially sustainable, not distributed (on site), enhancement of visitor experience.]
In 1971, only eight years after the last restoration of the Scrovegni Chapel in Padova (Italy), the superintendent announced rapidly growing damage, already visible on Giotto's early 14th-century frescos, due mainly to pollution. After the earthquake of 1976, further damage was registered, and a new restoration programme was therefore started. In 2002 the Chapel became accessible to the public again, but with limitations on the time spent inside the monument (15 minutes maximum) and on the number of visitors per day (groups of no more than 15 people at a time). For this reason, in 2000 a multimedia and virtual reality project was started by Padova city council, in cooperation with CNR ITABC. The project, finished in 2003 with the official opening of the new multimedia room (Sala Wiegand) inside the Eremitani Museum, is still accessible today. It includes a non-interactive movie, which follows a narrative style, two multimedia interactive applications and one main desktop-based virtual reality (DVR) application, in which users can freely explore the monument with its frescos and its history in 3D and in real time. A cybermap was created describing the conceptual structure of the application [Forte et al. 2011].
6.2.2.2 Aquae Patavinae VR (2010-2011)
[Online virtual museum. Categories: archaeology, interactive VR, descriptive, permanent, not-immersive, sustainable, online, research]
Aquae Patavinae VR is an online virtual museum, developed by CNR ITABC in cooperation with the


a tablet. An example is Matera: tales of a city [Pietroni et al., 2011], a project developed by CNR in cooperation with the Basilicata Region. The project is aimed at creating a digital platform able to help tourists and visitors before and during their visit to Matera. The cultural content includes audio-visuals, texts and 3D models, among them a complete reconstruction of the landscape and of the city of Matera in nine historical periods, from the Pliocene up to the present day (fig. 3). The result is an application that integrates different digital assets, used and combined following a narrative approach: from a more traditional multimedia approach to a 3D real-time navigation system for iPad.
Figure 2. Aquae Patavinae VR presented at
Archeovirtual 2011 (www.archeovirtual.it):
natural interaction through the web

6.2.2.4 Apa (2009-2011)

[Non-interactive virtual museum. Categories: history, non-interactive, narrative, permanent, immersive, sustainable, not distributed (on site), edutainment]

approach would be preferable, especially in the case of non-expert users, and it is desirable that in the near future WebGL could offer concrete solutions (http://www.khronos.org/webgl/).

The last example is an open project in which Blender, the 3D modelling software (www.blender.org), was used as the main tool to develop a stereoscopic computer-animation movie about the history of the city of Bologna. The software was also selected so as to become a real training environment, to which several students have contributed. A rendering farm, also based on Blender, and a sub-versioning system were developed to simplify production and help content developers, directors, artists, experts, etc. For the movie, the city of Bologna was completely reconstructed in its Etruscan, Roman and medieval phases up to the present day, re-using digital assets
6.2.2.3 Matera: tales of a city (2009-2011)

[Mobile virtual museum. Categories: history, interactive, narrative, permanent, not-immersive, partially re-usable, distributed: mobile, edutainment]
Another type of virtual museum is one that can be accessed through a mobile device, such as a smartphone or

Figure 3. 3D reconstruction parts of the project Matera: tales of a city with a view
of the same place in different historical periods (courtesy of S. Borghini, R. Carlani)



collaboration in inclusive environments; simulation in real-time environments; 3D collaborative environments; multi-user or serious games; and multi-user virtual museums.

References
ANTINUCCI, F. 2007. The virtual museum. In: Virtual Museums and Archaeology. The Contribution of the Italian National Research Council, ed. P. Moscati, Archeologia e Calcolatori, Suppl. 1, 2007: 79-86.
CALORI, L.; CAMPORESI, C.; PESCARIN, S. 2009. Virtual Rome: a FOSS approach to Web3D. In: Proceedings of the 14th International Conference on 3D Web Technology (Web3D '09), 2009.

Figure 4. Immersive room with the Apa stereo movie inside the new museum of the city of Bologna (courtesy of CINECA).

CARROZZINO, M. and BERGAMASCO, M. 2010. Beyond virtual museums: experiencing immersive virtual reality in real museums. Journal of Cultural Heritage, 11: 452-458.

and knowledge previously created by the main research institutions involved (University, CNR, ENEA, City Council, IBC). The result is now accessible inside the new museum of the city, in an immersive room (fig. 4) [Guidazzoli et al., 2011]. The entire digital asset will soon be shared under a Creative Commons licence and will serve as a training example for further development of the project.

CHEN, S. and MICHAEL, D. 2006. Serious games: games that educate, train, and inform. Boston, MA, Thomson Course Technology.
FANINI, B.; CALORI, L.; FERDANI, D.; PESCARIN, S. 2011. Interactive 3D landscapes online. In: Proceedings of the 3D Virtual Reconstruction and Visualization of Complex Architectures Conference (3D-ARCH 2011), 2-5 March 2011, Trento, Italy.

6.2.3 FUTURE PERSPECTIVES

FORTE, M.; PESCARIN, S.; PIETRONI, E.; RUFA, C.; BACILIERI, D.; BORRA, D. 2003. The multimedia room of the Scrovegni Chapel: a virtual heritage project. In: Enter the Past: the E-way into the Four Dimensions of Cultural Heritage, Vienna, April 2003, BAR International Series 1227, Oxford 2004: 529-532.

If we look at the Virtual Museum field as the interdisciplinary domain it is, then it becomes more and more clear that technology is just one part of the question. It should in fact be selected strictly in accordance with other relevant categories, such as the communication style and the scope (following the 2nd Principle of the London Charter: A computer-based visualisation method should normally be used only when it is the most appropriate available method for that purpose; www.londoncharter.org).

GUIDAZZOLI, A.; CALORI, L.; DELLI PONTI, F.; DIAMANTI, T.; IMBODEN, S.; MAURI, A.; NEGRI, A.; BOETTO COHEN, G.; PESCARIN, S.; LIGUORI, M.C. 2011. Apa the Etruscan and 2700 years of 3D Bologna history. SIGGRAPH Asia 2011 Posters, Hong Kong, China, 2011.

Moreover, several problems are still open and will need to be faced in the coming years, such as the duration of the life-cycle of a virtual museum. Projects such as V-MUST (www.v-must.net) are increasingly focused on improving the sustainability of such projects.

PIETRONI, E.; BORGHINI, S.; CARLANI, R.; RUFA, C. 2011. Matera città narrata project: an integrated guide for mobile systems. In: ISPRS Archives, Volume XXXVIII-5/W16, ISPRS Workshop 3D-ARCH 2011, 3D Virtual Reconstruction and Visualization of Complex Architectures, 2-4 March 2011, Trento, Italy, eds. F. Remondino, S. El-Hakim.

Interesting perspectives for the future of the domain include: detail enhancement and support for reconstruction based on artificial intelligence, neural networks, genetic art and procedural modelling;

WEBGL 2009. Khronos Group. WebGL: OpenGL ES 2.0 for the Web. http://www.khronos.org/webgl.


7
CASE STUDIES

7.1 3D DATA CAPTURE, RESTORATION AND ONLINE PUBLICATION OF SCULPTURE

Bernard FRISCHER

scratches; and it may well be lacking important projecting parts such as limbs, noses, etc. Thus, in the digital representation of sculpture, the issue of restoration is often encountered.

7.1.1 INTRODUCTION
Homo sapiens is an animal symbolicum (Cassirer 1953:
44). Sculpture is a constitutive form of human artistic
expression. Indeed, the earliest preserved examples of
modern human symbol-making are not (as one might
think) the famous 2D cave paintings from Chauvet in
France (ca. 33,000 BP) but 3D sculpted statuettes from
the Hohle Fels Cave near Ulm, Germany dating to ca.
40,000 BP (Conard 2009). Practitioners of digital
archaeology must thus be prepared to handle works of
sculpture, which constitute an important class of
archaeological monument and are often essential
components of virtual environments such as temples,
houses, and settlements. The challenges of doing so relate
to two typical characteristics of ancient sculpture: its
form tends to be organic; its condition usually leaves
something to be desired.

The purpose of this contribution is to discuss the process of how we gather, restore, and publish online the 3D data of sculpture. Given space limitations, the goal is not to be comprehensive but to focus on specific examples handled in recent years by the Virtual World Heritage Laboratory through its Digital Sculpture Project (hereafter: DSP), the goal of which is to explore how the new 3D technologies can be used to promote new visualizations of, and insights about, ancient sculpture.2

7.1.2 TERMINOLOGY AND METHODOLOGY

In any project of digital representation of sculpture it is essential to define at the outset the goal of the final product. Generally, one is attempting to create: (a) a digital representation of the current state of the statue, (b) a digital restoration of the original state of the statue, or (c) both. In case (a), we speak of a state model; in (b), of a restoration model. Of course, other categories, or sub-categories, are possible, including restoration models that show the work of art at different phases of its existence. Finally, there is (d) the reconstruction model, which takes as its point of departure not the physical evidence of the actual statue (or an ancient copy of it), which no longer exists, but representations of the statue in other media such as descriptions in texts or 2D representations on coins, reliefs, etc. (see, in general, Frischer and Stinson 2007). In the case of restoration and reconstruction models, one must reckon with the problem of uncertainty (see Zuk 2008) and the resulting need to offer alternative hypotheses, since rarely are our restorations or reconstructions so securely attested that there is no room for doubt or different solutions. It is essential in scientific

The organic nature of sculpture means that accurate, realistic digital representation requires digital models that are generally more curvilinear, and hence more data-intensive, than equivalent models of built structures. For example, whereas our lab's Rome Reborn 3D model could describe the entire 25 sq. km city of late-antique Rome in 9 million triangles (version 1.0),1 the pioneering Stanford Digital Michelangelo Project needed 2 billion triangles to describe the geometry of a single statue, Michelangelo's David, resulting in a file size of 32 GB (Levoy 2003).
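The scale difference between the two projects is easy to quantify with a back-of-envelope sketch. The per-triangle cost below is an assumption chosen to be consistent with the 32 GB figure quoted in the text (roughly 17 bytes per triangle, plausible for an indexed mesh with shared vertices), not a number from the original publications:

```python
# Rough storage estimate for dense triangle meshes.
# BYTES_PER_TRIANGLE is an assumption (indexed mesh: shared vertex data
# plus three 4-byte indices per triangle); a binary STL would cost ~50.
BYTES_PER_TRIANGLE = 17

def mesh_size_gb(n_triangles: int) -> float:
    """Approximate mesh file size in GB (1 GB = 2**30 bytes)."""
    return n_triangles * BYTES_PER_TRIANGLE / 2**30

rome_v1 = mesh_size_gb(9_000_000)        # all of late-antique Rome, v1.0
david = mesh_size_gb(2_000_000_000)      # a single densely scanned statue
print(f"Rome Reborn v1.0: {rome_v1:.2f} GB")  # ~0.14 GB
print(f"Digital David:    {david:.1f} GB")    # ~31.7 GB, matching Levoy's 32 GB
```

The point of the sketch is that a single scanned statue can carry two orders of magnitude more geometric data than an entire modelled city, which is why data management dominates sculpture-scanning projects.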
David is, in archaeological terms, a relatively recent creation and is in good condition. In comparison, the statue typically found on an archaeological site is degraded: its surface has been randomly patinated by components of the soil; its polychromy has faded or disappeared; it is probably covered with many dents and
1
This version was reduced to 3 million triangles by Google engineers for inclusion in the Gallery of Google Earth, where it is known as Ancient Rome 3D. The current version (2.2) of Rome Reborn has over 700 million triangles; see www.romereborn.virginia.edu.


2
See www.digitalsculpture.org.


3D modeling to define one's goal clearly and, when appropriate, to deliver a final product that exposes, whether through words or images, areas of uncertainty and includes relevant alternative solutions.
In developing restorations and alternative solutions, it is important that the technicians who are typically responsible for data capture and modeling work closely with subject experts. Generally the project director will initiate a project by putting together a team of 3D technicians, experts on the statue of interest, and one or more restorers familiar with the work of art. The work of the team is almost always iterative: the technical team produces a model, which is then reviewed by the subject experts and restorers. On the basis of their feedback, a revised model is made. This back-and-forth can continue many times until all members of the team are satisfied that the best possible results have been obtained. It is advisable for the project director to ask all members of the team to sign a statement expressing their approval of the final product and granting their permission to publish it with their names listed as consultants or, if appropriate, co-authors.3

7.1.3 3D DATA CAPTURE

In making a state or restoration model, the first step is to capture the 3D data of the surviving statue, including any fragments. The instrument traditionally used for this is the 3D scanner. Types of scanners include Triangulation, Time-of-Flight, Structured Light and Coherent Frequency Modulated Continuous-Wave Radar.4 The instruments are either fixed, mounted on arms, or movable. Normally, Triangulation scanners are not appropriate for campaigns of 3D data capture of sculpture (Guidi, Frischer et al., 2005: 121).
In approaching data capture, it is usually important to
take into account the material properties of the object at
hand. Sculpture has been produced in a variety of
materials, including terracotta, stone, and metal. Casts of
sculpture have been typically made with Plaster of Paris
or resin. The data capture device used should be
appropriate to the material of the object. For example,
Beraldin, Godin et al., 2001 showed that because of its
crystalline surface, marble is not as receptive to 3D
scanning as other materials. Our own research (see
Appendix I) has shown that Plaster of Paris is highly
receptive to 3D scanning. Thus, one might obtain more
reliable results from scanning a good first-generation cast
of a statue rather than the original, if the original is in
marble.
Statues whose material is a dark metal are also
problematic. In our project to reconstruct the lost portrait-statue of the Hellenistic philosopher Epicurus,5 the DSP used the marble torso in the National Archaeological Museum of Florence and, for the head, the bronze bust from Herculaneum now in the National Archaeological Museum in Naples. For 3D data capture of the bust, the LR200 manufactured by Metric Vision Inc. of Virginia was used because it employs the principle of Coherent Frequency Modulated Continuous-Wave Radar, which (unlike TOF or Structured Light instruments) operates independently of the surface color.

3 For an example of one way to handle team credits, see: www.digitalsculpture.org/credits.html.
4 For a basic introduction see "3D scanner" in Wikipedia at http://en.wikipedia.org/wiki/3D_scanner (seen February 1, 2012).
5 See http://www.digitalsculpture.org/epicurus/index.htm.
Whether the scanner is fixed, mounted on an arm, or
hand-held can also be a relevant consideration. For
objects with many occluded parts (e.g., intertwined,
projecting limbs such as are found on the Laocoon
statue group in the Vatican Museums) a fixed scanner
may not be as effective in data capture as one mounted on
an arm or hand-held because the latter instruments allow
one to move the instrument around and behind the
occlusions, something not possible with a fixed scanner.
The Laocoon was indeed mentioned by Levoy as a statue group that may be forever unscannable.6 The DSP succeeded in capturing the data of the Laocoon7 despite using a fixed scanner because, to handle the occlusions, it was possible to scan casts of the six individual blocks making up the statue group and to combine the scan data from these with those from the original itself.
Given the range of problems presented by 3D data
capture, few university or museum laboratories will be
able to purchase the full range of scanners that would
ideally be needed to deal with any situation. Moreover
each scanner has its own peculiarities and requires a
certain amount of expertise if it is to be deployed to its
best advantage. Thus it is often practical to hire a service
provider specializing in 3D data capture using the
appropriate instrument for the job at hand. The DSP has
worked with a number of companies on its projects,
including Direct Dimensions,8 Breuckmann GmbH,9
FARO,10 and Leica.11 On other occasions, it has partnered
with university laboratories such as the TECHLab12 at the
University of Florence and the INDACO Lab at the
Politecnico di Milano.13
Thus far, we have discussed traditional devices for data
capture. A promising new approach based on structure
from motion (cf. Dellaert, Seitz et al., 2000) is being
developed by Autodesk14 and the EU-sponsored Arc3D
project.15 With these free solutions, one can upload digital
photographs taken all around and on top of the statue and
receive back a digital model. To date, no studies have been undertaken to compare the accuracy of models made by these software packages with those made by traditional data capture. What is clear is that, to date, the resulting models are smaller in terms of polygons and hence lower in terms of resolution. But not all digital archaeological applications require high-resolution models. For example, our lab has had good success with Autodesk's 123D Catch to produce the small to medium-size models used in virtual worlds. We find that to obtain the best results from 123D Catch, it is important to observe a few simple rules that are designed to get the best performance from the digital camera:

- If possible, prepare the statue to be shot by placing 10-20 small post-it notes randomly distributed over the surface. Each post-it should have an "X" written on it to provide a reference point later in the process.
- Put a scale with color bars in front of the statue. This will allow you to give your model accurate, absolute spatial values.
- Shoot all around the object while keeping the view centered on it.
- Shoot in a gyric pattern from the bottom to the top around the statue several times, ending with shots all around the top.
- Make sure that shots overlap.
- Keep the lighting constant.
- Keep the object fixed: do not rotate it.
- Depth of field should be as big as possible so that the maximum area is in focus.
- ISO should be set to 100 or 200.
- A tripod and shutter remote control should be used.
- Format should be RAW.

As for file formats for point clouds, restoration models and reconstruction models, there are no universally accepted standards, but for point clouds PLY and OBJ are well supported; and for polygonal models (including both the restoration and reconstruction varieties) PLY, OBJ, 3DS, and COLLADA are widely used. Generally, one will want to work with a file format that can be imported into MeshLab, a popular free, open-source software for editing point clouds and polygonal models.20 Proprietary software packages such as Geomagic, Polyworks, and Rapidform are also commonly used, though they are often too expensive for the budgets of university and museum laboratories.

7.1.4 MODEL TYPES AND FORMATS

The 3D data are captured as a collection of points with X, Y, Z spatial coordinates.16 One can either leave the model in point-cloud format or join the points to make a mesh defined by triangles (also known as polygons; hence the term "polygonal model").17 Generally, the latter is preferable, since a point cloud only looks like the object represented from a distance but dissolves into its constituent points when one zooms in during visualization. A mesh has other advantages: it reduces the data to the bare essentials; the vertices of the polygons can be painted, or the faces can be overlaid with textures; and, finally, unlike a point cloud, a polygonal model can be the basis of a restoration, which is made by generating new polygons and adding them to the existing mesh.

As mentioned, reconstruction models are not based on scans of the existing statue but on verbal descriptions or 2D views in media such as coins or reliefs. The DSP uses a process of hand-modeling with the software 3D Studio Max to create reconstruction models in mesh format. As an example, the model of the statue group showing Marsyas, the olea, ficus, et vitis in the Roman Forum can be cited (figure 1). Nothing survives of the group, but it is illustrated on two sculpted reliefs dating to ca. 110 CE (see Frischer forthcoming). Besides 3D Studio Max, published by Autodesk, other hand-modeling packages that are commonly used include Autodesk's Maya18 and Blender, which is free and open source.19

Figure 1. View of the DSP's reconstruction of the statue group of Marsyas, olea, ficus, et vitis in the Roman Forum

6 http://graphics.stanford.edu/talks/3Dscanning-3dpvt02/3Dscanning3dpvt02_files/v3_document.htm.
7 For the results, see http://www.digitalsculpture.org/laocoon/index.html.
8 www.dirdim.com/.
9 www.breuckmann.com/.
10 www.faro.com.
11 www.leica-geosystems.us.
12 www.techlab.unifi.it/.
13 http://vprm.indaco.polimi.it/.
14 See www.123dapp.com/catch.
15 See www.arc3d.be/.
16 See the useful introduction in Wikipedia, s.v. "Point cloud" at http://en.wikipedia.org/wiki/Point_cloud (seen February 1, 2012).
17 See, in general, Wikipedia s.v. "Polygon mesh" at http://en.wikipedia.org/wiki/Polygon-mesh (seen February 1, 2012).
18 http://usa.autodesk.com/maya.
19 www.blender.org/.
20 See http://meshlab.sourceforge.net/; Wikipedia s.v. "MeshLab" at http://en.wikipedia.org/wiki/Meshlab (seen February 1, 2012).

7.1.5 RESTORATION

Sculpture found in archaeological contexts is generally damaged. Defects can range from minor surface scratches and dents to more major losses such as missing limbs, noses, or entire heads. Sometimes most or all of a statue is preserved but is found broken into fragments. Moreover, the paint on the surface of a work of art rarely survives well in the soil and may be faded, invisible to the naked eye, or lost. A member of the team, typically a restorer, should be skilled in the use of methods and techniques to detect traces of pigments. An art historian can provide useful input for the restoration of large missing pieces such as limbs and heads.
In some sculptural traditions, copies were often made of a
work of art such as a ruler portrait. Thus, even if the
statue of interest is damaged, it may be possible to
supplement its missing elements by data collection from
other copies. In the latter case, one must be attentive to
the issue of scaling: in pre-modern times, the pointing
method was not in use (Rockwell 1993: 119-122) and so
no two copies are exactly to the same scale. Thus, one
cannot simply scan the torso of one copy and the head of
another and combine the two digital models.
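The scaling caveat can be made concrete with a short sketch: given a handful of corresponding landmark points picked on two copies of the same portrait type, a least-squares uniform scale factor can be estimated before any merging is attempted. The landmark coordinates below are hypothetical, purely for illustration.

```python
# Estimating a uniform scale factor between two copies of a statue from
# corresponding landmark points (hypothetical coordinates, in millimetres).
import numpy as np

def uniform_scale_factor(landmarks_a, landmarks_b):
    # Least-squares scale mapping copy A onto copy B, computed from the
    # spread of the landmarks around their centroids (position-insensitive).
    a = landmarks_a - landmarks_a.mean(axis=0)
    b = landmarks_b - landmarks_b.mean(axis=0)
    return float(np.linalg.norm(b) / np.linalg.norm(a))

# Copy B is 10% larger than copy A at the same three landmarks.
a = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0], [0.0, 250.0, 0.0]])
b = a * 1.1
print(uniform_scale_factor(a, b))  # close to 1.1
```

Only after rescaling one model by such a factor would combining parts from different copies even begin to make sense geometrically.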
When a statue survives but only in separate fragments, it is
possible to scan each fragment and to use digital
technology to join the individual fragment models in a
composite restoration model. An example of this can be
seen in the DSP's project to restore the disassembled
fragments of the Pan-Nymph statue group in The Dresden
State Museums.21
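The composite-model workflow just described can be sketched in a few lines: each fragment's scan is moved into a common frame by a rigid transform (in practice the rotations and translations come from a manual or automatic alignment step such as ICP; the values here are placeholders) and the meshes are then merged, with face indices offset accordingly.

```python
# Sketch of assembling scanned fragments into a composite restoration model.
import numpy as np

def place_fragment(vertices, rotation, translation):
    # Apply a rigid-body transform (3x3 rotation + translation) to Nx3 vertices.
    return np.asarray(vertices) @ np.asarray(rotation).T + np.asarray(translation)

def merge_fragments(fragments):
    # fragments: list of (vertices, faces). Face indices are offset so they
    # keep pointing at the right rows of the combined vertex array.
    all_v, all_f, offset = [], [], 0
    for vertices, faces in fragments:
        all_v.append(np.asarray(vertices))
        all_f.append(np.asarray(faces) + offset)
        offset += len(vertices)
    return np.vstack(all_v), np.vstack(all_f)

# Two copies of one toy triangle, the second shifted 10 units along x.
tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
face = np.array([[0, 1, 2]])
moved = place_fragment(tri, np.eye(3), [10.0, 0.0, 0.0])
verts, faces = merge_fragments([(tri, face), (moved, face)])
print(verts.shape, faces.tolist())  # (6, 3) [[0, 1, 2], [3, 4, 5]]
```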
When direct evidence (e.g., from a copy) is lacking for
how a missing element is to be restored, the advice of the
team's art historian is crucial for determining the valid
range of alternative solutions available to the artist in the
period in which the work of art was created. In the case of
the DSP's Caligula project, the goal of which was to
scan, model, and restore the full-length portrait of the
emperor Caligula (AD 12-41), the consulting art
historians agreed that the missing arms and hands should
be restored on the basis of an analogous portrait from
Herculaneum of L. Mammius Maximus (cf. Davies
2010). On the other hand, the archaeologist-restorer on
the team found only one small area where pigments of
pink madder and Egyptian blue were still preserved.
These traces suggested that the toga of Caligula was painted purple. But whether the purple was
confined to a stripe (the toga praetexta) or covered the
entire garment (toga purpurea) was uncertain. Moreover,
the edge of the garment might have been decorated with a
gilt pattern (toga picta), something that emperors were
known to have worn. So although the missing arms and
hands could be restored with high probability, the toga
had to be painted in three alternative ways.22
The DSP makes its restoration models with ZBrush,23
software published by one of its sponsors, Pixologic.
Autodesk publishes Mudbox, which has many features
similar to ZBrush. The best freeware is Sculptris, also
published by Pixologic, which those on a tight budget
may find a serviceable alternative.24 For painting 3D
meshes, the DSP commissioned a special version of MeshLab known as MeshLab Paint Edition, which it distributes at no cost.25 It is quite easy to use and might be most useful as a way for the art historian on the team to quickly mark up a mesh with color prior to more professional painting by a technician expert in the use of ZBrush or Mudbox. In painting a mesh, it is important to try to imitate the actual painting techniques found in the culture that produced the work of art.

21 See www.digitalsculpture.org/pan-nymph/index.html.
22 For the state and three restoration models, see www.digitalsculpture.org/caligula/index.html.
23 www.pixologic.com/zbrush/.
24 www.pixologic.com/sculptris/.
To give a concrete example of restoration, here are the
steps involved in using Zbrush to go from the state model
of Caligula to the toga praetexta restoration. First, the
scan data are brought into ZBrush as geometry. To
restore missing elements such as missing limbs, new
geometry is created and sculpted using photographs,
drawings, or other precedents to achieve a natural
appearance. In the case of a missing arm on a sculpture,
this might entail using photos of similar sculptures with
extant arms, or photographs of models posed in a similar
fashion as the sculpture to be restored. Once major
restorations have been completed, attention can be turned
to the small nicks, dents, and gashes. Many such damages
can be easily repaired using ZBrush's basic tools, and
made to look realistic and natural by using the
surrounding undamaged geometry of the sculpture as a
guide. Following the digital restoration of the geometry
of the sculpture, polychromy can be added to the model.
This is done in ZBrush using the painting tools. In the
case of Caligula, as noted, three different versions of the
toga were restored. When restoration of the geometry and
polychromy have been completed to a satisfactory level,
turntable animations or still image renderings can be
outputted from ZBrush in a variety of formats and
resolutions. The geometry can also be exported, and,
through a process using MeshLab and MeshLab Paint, be
converted into a format suitable for interactive display on
the web.26

7.1.6 DIGITAL PUBLICATION


The goal of a project of 3D data capture, modeling, and
restoration is generally publication in some format.
Typical forms of publication include 2D renderings,
video animations, and interactive objects and
environments. For 2D renderings and video animations
the DSP exports the finished model to 3D Studio Max, a
product of Autodesk.27 Sculpture is often an integral
component of a reconstructed archaeological site.
Finished models of statues can be imported into game
engines and integrated into the scene in the right position.
Like many laboratories in the field of virtual archaeology,
the DSP uses Unity 3D as its preferred game engine.
Normally, the statue imported into a game engine has to be simplified from several million to several hundred thousand polygons. In contrast, the digital model of the statue can be published with little if any loss of resolution and detail through the use of Seymour, a new product developed by a company owned by the present author. Seymour exploits WebGL to make it possible to run interactive 3D models as elements of web pages analogous to text and 2D images. The user downloads a simplified version of the full model to her standard HTML browser. Seymour functions not only in browsers supporting WebGL, such as Chrome, Firefox, and Opera, but also in Internet Explorer, which does not support it. When the desired view of the model has been set, the user automatically and quickly receives the exact same view of the full model, rendered on the cloud and sent to the user's browser over the Internet. It is expected that a Seymour web service will be established in the period 2012-13 so that creators of 3D models can embed them where they are needed in their own web publications.

In whatever format a 3D model is published, best practice requires that a report accompany it giving the goals and history of the project and discussing which elements of the model are based on evidence and which on analogy or hypothesis. Best practice also requires inclusion of the appropriate paradata as required by The London Charter.28 A new peer-reviewed, online journal called Digital Applications in Archaeology and Cultural Heritage will appear sometime in the period 2012-13 in which 3D models of sculpture can be published (for more information, see Appendix II).

25 Meshlab Paint Edition may be downloaded at: www.digitalsculpture.org/tools.html.
26 I thank Matthew Brennan, lead 3D technician in The Virtual World Heritage Laboratory, for his input to this paragraph.
27 http://usa.autodesk.com/3ds-max/.
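The polygon-count reduction needed for game engines (from several million down to several hundred thousand polygons, as noted above) is normally done with error-driven tools such as MeshLab's quadric-edge-collapse filter. Purely as an illustration of the principle, here is a toy vertex-clustering decimator; the function name and cell-size parameter are ours, not part of any product's pipeline.

```python
# Toy vertex-clustering decimation: snap vertices to a voxel grid, merge all
# vertices sharing a voxel, and drop triangles that collapse in the process.
import numpy as np

def decimate_by_clustering(vertices, faces, cell_size):
    vertices = np.asarray(vertices, dtype=float)
    keys = np.floor(vertices / cell_size).astype(np.int64)
    uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse).astype(float)
    new_vertices = np.zeros((len(uniq), 3))
    for dim in range(3):  # new vertex = centroid of its cluster
        new_vertices[:, dim] = np.bincount(inverse, weights=vertices[:, dim]) / counts
    remapped = inverse[np.asarray(faces)]
    # Keep only triangles whose three corners land in distinct clusters.
    keep = ((remapped[:, 0] != remapped[:, 1]) &
            (remapped[:, 1] != remapped[:, 2]) &
            (remapped[:, 0] != remapped[:, 2]))
    return new_vertices, remapped[keep]

# A toy mesh: two triangles, one of which collapses at 0.5-unit cells.
v = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0],
              [2.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
f = np.array([[0, 1, 2], [0, 3, 4]])
nv, nf = decimate_by_clustering(v, f, cell_size=0.5)
print(len(v), "->", len(nv), "vertices;", len(f), "->", len(nf), "faces")
```

Real decimators minimise a geometric error measure instead of blindly snapping to a grid, which is why they preserve detail far better at the same polygon budget.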

Appendix I: A COMPARISON OF CASTS VS. ORIGINALS29

The purpose of this comparison, made using the marble statue and cast (figure 2) of the Alexander in the Dresden State Museums, is to ascertain two things: (1) which material is more responsive to 3D digital data capture, marble or the plaster used in the cast; (2) how closely does a first-generation plaster cast of a statue correspond to the original marble sculpture? In order to make this comparison, the statues were first scanned with a FARO ScanArm. The resulting point clouds were processed with Polyworks, and polygonal models were made.

As will be seen, these two questions are related, since the material qualities of marble turn out to be less receptive to digitization than are those of plaster or silicon. This can be easily seen in figure 3. Whereas the digital model of the plaster cast renders the smooth surface of the marble, that of the marble has a surface marred by bumps that do not correspond to any true feature of the original. Marble is composed primarily of calcite (a crystalline form of calcium carbonate, CaCO3).30 As Beraldin, Godin et al., 2001: 1 showed:

"Marble's translucency and heterogeneous structure produce significant bias and increased noise in the geometric measurements. A bias in the depth measurement is also observed. These phenomena are believed to result from scattering on the surface of small crystals at or near the surface."

Figure 2. Alexander, plaster cast (left) and original marble31 (right) of the torso; front view. Photographs with kind permission of The Dresden State Museums

Beraldin, Godin et al. conclude by noting that noise on the surface of a material such as marble that is not cooperative to laser scanning was estimated to be 2-3 times larger than on optically cooperating surfaces (Beraldin, Godin et al., 2001: 8). Beraldin, Godin et al. suggest a possible remedy: the development of a predictive algorithm that can digitally correct for the distortion caused by scanning marble surfaces. They do not consider another solution: scanning not the marble original of a statue but a first-generation plaster or silicon cast.

28 www.londoncharter.org/.
29 I thank David Koller for his collaboration in writing the Appendix.
30 Wikipedia, s.v. "Marble sculpture", http://en.wikipedia.org/wiki/Marble_sculpture (seen May 1, 2010).
31 Inventory nr. H4 118/254.

Figure 3. Alexander, digital model of the cast (left) and of the original (right) of the torso; front view. Note the bumpiness or "noise" in the scan model of the original torso

As can be seen in figure 3, the plaster cast of the torso of Alexander is free of noisy bumps. But is there not a loss of accuracy when a cast is made? Borbein has recently traced the development of a negative attitude among art historians and archaeologists toward the plaster cast (Borbein 2000: 29). Perhaps the climax of this trend came several years ago when the Metropolitan Museum in New York deaccessioned its entire cast collection.32 Ironically, this happened just at the time when, in Europe at least, the reputation of the cast was rising again (Borbein 2000: 29).

With respect to the second question, about the accuracy of casts, we attempt to give a quantitative appraisal of the assertion of cast-maker Andrea Felice that plaster is the material par excellence in the production of sculptural casts owing to its ease of use, malleability, widespread availability, its granular fineness and thus its notable ability to reproduce detail, its remarkable mechanical resistance (13 N/mm2), its moderate weight, and its excellent rendering of light and shade.33

We ran two tests utilizing the scan models of the torso of the Dresden Alexander to address this issue.34 The first test is called a Tolerance-Based Pass/Fail Comparison. To run it, an acceptable tolerance of variance of one model from another is set. In our case, this was set at 1 mm (= 0.03937 [< 1/25th] inch). The red/green "stop light" images show deviation of more than 1 mm as being outside the set range and so designated in the images by the color red. Anything within this range passes and is shown as green. The results can be seen in figure 4.

Figure 4. Tolerance-Based Pass/Fail test of the digital models of the cast and original torso of the Alexander in Dresden. Green indicates that the two models differ by less than 1 mm. Red indicates areas where the difference between the models exceeds 1 mm

Clearly, most of the surface measurements of the two digital models fall within 1 mm of each other. To find out how great the difference is in the red areas, we ran a second test called an Error Map Comparison. In this test, a rainbow of colors is outputted, with each color in the spectrum equivalent to a certain deviation. The results are seen in figure 5. They show that the biggest difference between the two models on the front of the torso is 1.343 mm; on the back it is 1.379 mm. These errors can be expressed in inches as 0.053 inch and 0.054 inch respectively.

Figure 5. Error Map comparing the digital models of the cast and original torso of the Dresden Alexander

It is unclear whether the difference between the two models (which, as we have seen, at worst totals about 1/20th of an inch) arises from the calibration of the scanner; the native error rate of the scanner; the lack of receptivity of marble to laser scanning; or a random imprecision in the casting process. Further tests would need to be undertaken to resolve this uncertainty. What is already clear is that even if we attribute the maximum error found to a natural result of the casting process, the difference we detected between the model of the cast and that of the original is trivial for most cultural heritage applications.

32 See www.plastercastcollection.org/de/database.php?d=lire&id=172.
33 www.digitalsculpture.org/casts/felice/.
34 I thank Jason Page of Direct Dimensions, Inc. for running this test and producing the images seen in figures 4 and 5.
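For readers who want to reproduce the flavor of these two tests on their own data, the logic can be approximated in a few lines of Python. This is a deliberately crude nearest-vertex comparison, not the surface-to-surface measurement performed by commercial packages such as Polyworks; the arrays and the 1 mm tolerance are illustrative, and the two models are assumed to be already registered to one another.

```python
# Simplified pass/fail and maximum-deviation comparison of two scan models,
# represented here as Nx3 vertex arrays in millimetres.
import numpy as np

def compare_models(model_a, model_b, tolerance_mm=1.0):
    # Distance from every vertex of A to its nearest vertex of B: a crude
    # stand-in for the surface deviation measured by the pass/fail test.
    diffs = model_a[:, None, :] - model_b[None, :, :]
    nearest = np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)
    return {
        "pass_fraction": float((nearest <= tolerance_mm).mean()),  # "green" share
        "max_deviation_mm": float(nearest.max()),                  # worst "red" spot
    }

# Toy example: a flat grid versus the same grid lifted by 0.5 mm.
grid = np.array([[x, y, 0.0] for x in range(10) for y in range(10)])
report = compare_models(grid + np.array([0.0, 0.0, 0.5]), grid)
print(report)  # every point deviates by 0.5 mm, so everything passes at 1 mm
```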

Appendix II: DIGITAL APPLICATIONS IN ARCHAEOLOGY AND CULTURAL HERITAGE

The author of the present article has recently been named editor-in-chief of Digital Applications in Archaeology and Cultural Heritage (DAACH). The journal will be the world's first online, peer-reviewed publication in which scholars can disseminate 3D digital models of the world's cultural heritage sites and monuments accompanied by associated scientific articles. The journal aims both to preserve digital cultural heritage models and to provide access to them for the scholarly community to facilitate academic debate. DAACH offers scholars the opportunity of publishing their models online with full interactivity so that users can explore them at will. It is unique in that it will provide full peer review for all 3D models, not just the text, 2D renderings, or video animations. It requires all models to be accompanied by metadata, paradata, documentation, and a related article explaining the history and state of preservation of the monument modeled as well as an account of the modeling project itself. The journal focuses on scholarship that either promotes the application of 3D technologies to the fields of archaeology, art and architectural history, or makes a significant contribution to the study of cultural heritage through the use of 3D technology. Creators of 3D models of sculpture are encouraged to consider publishing their work in DAACH.
Bibliography

BERALDIN, J.-A.; GODIN, G. et al. 2001. "An Assessment of Laser Range Measurement on Marble Surfaces", 5th Conference on Optical 3D Measurement Techniques, October 1-4, 2001, Vienna, Austria. 8 pp.; graphics.stanford.edu/papers/marbleassessment/marbre_gg_final2e_coul.pdf.

BORBEIN, A. 2000. "On the History of the Appraisal and Use of Plaster Casts of Ancient Sculpture (especially in Germany and in Berlin)", in Les moulages de sculptures antiques et l'histoire de l'archéologie. Actes du colloque international, Paris, 24 octobre 1997, edited by Henri Lavagne and François Queyrel (Geneva 2000) 29-43; translated by Bernard Frischer. Available online at: www.digitalsculpture.org/casts/borbein/.

CASSIRER, E. 1953. An Essay on Man. An Introduction to a Philosophy of Human Culture (Doubleday, Garden City, NY).

CONARD, N.J. 2009. "Die erste Venus. Zur ältesten Frauendarstellung der Welt", in Eiszeit. Kunst und Kultur (Jan Thorbecke Verlag der Schwabenverlag AG, Ostfildern) 268-271.

DAVIES, G. 2010. "Togate Statues and Petrified Orators", in D.H. Berry and A. Erskine, editors, Form and Function in Roman Oratory (Cambridge) 51-72.

DELLAERT, F.; SEITZ, S.M. et al. 2000. "Structure from Motion Without Correspondences", Proceedings Computer Vision and Pattern Recognition Conference; available online at: (seen February 1, 2012).

FRISCHER, B. forthcoming. "A New Feature of Rome Reborn: The Forum Statue Group Illustrated on the Trajanic Anaglyphs", a paper presented at the 2012 Meeting of the Archaeological Institute of America.

FRISCHER, B. and STINSON, P. 2007. "The Importance of Scientific Authentication and a Formal Visual Language in Virtual Models of Archeological Sites: The Case of the House of Augustus and Villa of the Mysteries", in Interpreting the Past. Heritage, New Technologies and Local Development. Proceedings of the Conference on Authenticity, Intellectual Integrity and Sustainable Development of the Public Presentation of Archaeological and Historical Sites and Landscapes, Ghent, East-Flanders, 11-13 September 2002 (Brussels) 49-83. Available online at: www.frischerconsulting.com/frischer/pdf/Frischer_Stinson.pdf (seen February 1, 2012).

GUIDI, G.; FRISCHER, B. et al. 2005. "Virtualizing Ancient Rome: 3D Acquisition and Modeling of a Large Plaster-of-Paris Model of Imperial Rome", Videometrics VIII, edited by J.-Angelo Beraldin, Sabry F. El-Hakim, Armin Gruen, James S. Walton, 18-20 January 2005, San Jose, California, USA, SPIE. Vol. 5665: 119-133; available online at: www.frischerconsulting.com/frischer/pdf/Plastico.pdf.

LEVOY, M. 2003. "The Digital Michelangelo Project", available online at http://graphics.stanford.edu/projects/mich/ (seen January 10, 2012).

ROCKWELL, P. 1993. The Art of Stoneworking (Cambridge); available online at http://www.digitalsculpture.org/rockwell1.html (seen February 1, 2012).

ZUK, T.D. 2008. "Visualizing Uncertainty", a thesis submitted to the faculty of graduate studies in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Department of Computer Science, University of Calgary (Alberta).


7.2 3D GIS FOR CULTURAL HERITAGE SITES: THE QUERYARCH3D PROTOTYPE

Giorgio AGUGIARO & Fabio REMONDINO

7.2.1 INTRODUCTION

Constant advances in the fields of surveying, computing and digital-content delivery are reshaping the way in which Cultural Heritage can be virtually accessed: thanks to such new methodologies, not only researchers but also new potential users, such as students and tourists, are given the chance to use a wide array of new tools to obtain information and perform analyses with regard to art history, architecture and archaeology.

Linking quantitative information (obtained from surveying) and qualitative information (obtained by data interpretation or by other documentary sources), and analysing and displaying it within a unique integrated platform, therefore plays a crucial role. In the literature, some approaches exist to associate information to an entire building (Herbig and Waldhäusl, 1997), to 2D entities (Salonia and Negri, 2000), to 3D objects (Knuyts et al., 2001), or according to a model description (Dudek et al., 2003).

Use of 3D models can be an advantage as they act as containers for different kinds of information. Given the possibility to link their geometry to external data, 3D models can be analysed, split into their subcomponents and organised following proper rules.
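The idea of linking subcomponents of a 3D model to external qualitative data can be sketched with a toy relational table. The schema, attribute names and values below are entirely hypothetical (QueryArch3D uses its own database design, and a real system would employ a spatial DBMS tied to the segmented geometry):

```python
# Toy illustration of attribute queries over 3D-model subcomponents: the ids
# returned by a query are the pieces of geometry a viewer would highlight.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE component (
    id TEXT PRIMARY KEY,   -- identifier shared with the model's geometry
    structure TEXT,        -- building the component belongs to
    phase TEXT             -- hypothesised construction phase
)""")
db.executemany("INSERT INTO component VALUES (?, ?, ?)", [
    ("temple_stair_01", "Temple I", "early"),
    ("temple_door_01", "Temple I", "late"),
    ("palace_wall_03", "Palace", "late"),
])
late_ids = [row[0] for row in db.execute(
    "SELECT id FROM component WHERE phase = ? ORDER BY id", ("late",))]
print(late_ids)  # -> ['palace_wall_03', 'temple_door_01']
```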

One useful possibility is offered by 3D computer-simulated models, representing, for example, both the
present and the hypothetical status of a structure. Such
models can be linked to heterogeneous information and
queried by means of (sometimes Web-enabled) GIS tools.
In such a way, relationships between structures, objects
and artefacts can be explored and the changes over space
and time can be analysed.

The NUBES Tempus Project [link] (MAP Gamsau Laboratory, France) is an example (De Luca et al., 2011) where 3D models in the field of Cultural Heritage are used for diachronic reconstructions of the past. Segmented models can also help to interpret history by allowing the assembly of sub-elements located in different places but belonging to the same artefact or site (Kurdy et al., 2011).

For some research purposes, a traditional 2D approach generally suffices; however, more complex analyses concerning the spatial and temporal features of architecture require 3D tools, which, in some cases, have not yet been implemented or are not yet generally available, as in recent years more effort has been put into 3D data visualisation than into 3D spatial analysis.

With regard to 3D data visualisation, development tools come traditionally from the videogame domain and can be adapted to support 3D geodata (e.g. Unity3D, OSG, OGRE3D, OpenSG, 3DVIA Virtools, etc.), but with limited capabilities when it comes to large and complex reality-based 3D models. Another general constraint resides in the limited query functionalities for data retrieval. These are actually typical functions of GIS packages, which, on the other hand, often fall short when dealing with detailed and complex 3D data.

Nowadays 3D models of large and complex sites are generated using methodologies based on image data (Remondino et al., 2009), range data (Vosselman and Maas, 2010), classical surveying or existing maps (Yin et al., 2009), or a combination of them, depending on the required accuracy, the object dimensions and location, the surface characteristics, the working team's experience, and the project's budget. The goal is often to produce multi-resolution data at different levels of detail (LoD), both in geometry and texture (Barazzetti et al., 2010; Remondino et al., 2011).

In 1994, VRML (Virtual Reality Modelling Language) was launched; it became an international ISO standard in 1997. The idea was to have a simple exchange format for 3D information.


The specifications for VRML's successor, X3D (eXtensible 3D), were officially published in 2003; its goal was to integrate better with other web technologies and tools. Both X3D and VRML can be used to visualise 3D geoinformation. The VRML and X3D languages have been developed in order to describe 3D scenes in terms of geometry, material and illumination, while still requiring specific plugins or applets for rendering inside a web browser. The advantage of X3D resides in its XML encoding, which represents a major advantage for on-the-fly retrieval in Internet applications.

The W3DS (Web 3D Service) can be used not only for producing static scenes, but also for requesting data, in order to stream it to the client, which implements a more dynamic visualisation. Its specification is, however, currently still in draft status and not yet adopted by the OGC, although W3DS has already been implemented in some projects, e.g. for the city of Heidelberg in Germany (Basanow et al., 2008).

Another web application to share 3D contents is


OSG4Web (Pescarin et al., 2008; Baldissini et al., 2009).
It consists of a browser plugin that uses the Open Scene
Graph library. It supports multiple LoDs and can load
different 3D model formats. These peculiarities are
particularly useful for large scale visualisations, such as
terrain or urban landscape models.
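The multiple-LoD support just mentioned usually comes down to distance-based switching: the renderer draws a coarser representation as the viewer moves away from an object. A minimal sketch of the selection logic (the thresholds are arbitrary values of ours, not OSG4Web's):

```python
# Hypothetical distance-based level-of-detail (LoD) selection: the further
# the viewer, the higher (coarser) the LoD index that gets drawn.
def select_lod(distance_m, thresholds=(50.0, 200.0, 1000.0)):
    # Returns 0 (full detail) up to len(thresholds) (coarsest, e.g. billboard).
    for lod, limit in enumerate(thresholds):
        if distance_m < limit:
            return lod
    return len(thresholds)

print([select_lod(d) for d in (10, 100, 500, 5000)])  # -> [0, 1, 2, 3]
```

Scene-graph libraries implement this per node, often with hysteresis to avoid visible popping at the threshold distances.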

Research on spatial query and VRML-based visualisation


has produced some prototype systems, as in Coors and
Jung (1998) or in Visintini et al., (2009), where VRML
3D models are linked to external databases and can be
accessed via the Web.

Virtual globes (e.g. Google Earth and Nasa World Wind)


have gathered much visibility in recent years as visualisation tools for geospatial data: the user can get external information by clicking on the selected object, or by activating a selectable layer. However, more complex queries cannot generally be performed out-of-the-box, unless some specific functions are written ad hoc. In Brovelli et al. (2011), for example, the buildings in the city of Como can be selected according to the construction date and the results visualised both in 2D
and 3D in a web browser where a multi-frame view is
implemented using both WebGIS and Nasa World Wind
technologies.

As X3D strives to become the 3D standard for the Web, integrated in HTML5 pages, X3DOM has been proposed as a syntax model and implemented as a script library to demonstrate how this integration can be achieved without a browser plugin, using only WebGL and JavaScript.
The 3D-COFORM project web site presents a collection of 3D scanned artefacts visualised using the X3DOM technology [3DCOFORM]. X3D is also being exploited on mobile devices, e.g. in the case of the Archeoguide project (Ioannidis and Carlucci, 2002), or in Behr et al. (2010) and Jung et al. (2011), where examples of mobile augmented reality in the field of Cultural Heritage are reported.

On-going developments in HTML5 and WebGL offer great potential for the future development of more interactive, responsive, efficient and mobile WebGIS applications, including the use of 2D, 3D and even temporal and animated content without the need for any third-party plugins (Auer, 2012). The trend is therefore that more and more graphics libraries will rely on WebGL (a list of the principal ones can be found on the Khronos Group webpage [KHRONOS]).

The Venus 3D publishing system [VENUS] by the CCRM Labs makes it possible to explore high-resolution 3D content on the web using a WebGL-enabled browser and no additional plugins. The system manages both high- and low-resolution polygonal models: only decimated versions of the models are downloaded and used for interactive visualisation, while the high-resolution models are rendered remotely and displayed for static representations only. In the Digital Sculpture Project [DIGSCO] (Virtual World Heritage Laboratory, University of Virginia), 3D models of sculptures are published on-line (Frischer and Schertz, 2011). Although this application currently shows only shaded geometries, other textured 3D models of cultural heritage are available in [link].

A more detailed overview of the available 3D Web technologies is presented in Behr et al. (2009), distinguishing between plugin-based and plugin-less solutions, and in Manferdini and Remondino (2012). As of today, however, a standard and widely accepted solution does not yet exist.

At the urban scale, CityGML [CITYGML] represents a common information model for the representation of 3D urban objects: the most relevant topographic objects in cities and their relations are defined with respect to their geometrical, topological, semantic and appearance properties. Unfortunately, even CityGML's highest level of detail (LoD4) is not meant to handle high-resolution, reality-based models, which are characterised by complex geometry and detailed textures.

7.2.2 THE QUERYARCH3D TOOL


This section presents a web-based visualisation and query tool called QueryArch3D [QUERYARCH3D] (Agugiaro et al., 2011), conceived to deal with multi-resolution 3D models in the context of the archaeological Maya site of Copán (Honduras). The site contains over 3700 structures, consisting of palaces, temples, altars and stelae, spread over an area of circa 24 km², for which heterogeneous datasets (e.g. thematic maps, GIS data, DTM, hypothetical reconstructed 3D models) have been created over the course of time (Richards-Rissetto, 2010). More recently, high-resolution 3D data have also been acquired (Remondino et al., 2009).
Nevertheless, once created, 3D city models can be visualised on the Web through the Web 3D Service (W3DS), which delivers 3D scenes over the Web as VRML, X3D, GeoVRML or similar formats.

CASE STUDIES

Figure 1. Different levels of detail (LoD) in the QueryArch3D tool. Clockwise from top-left: LoD1 of a temple with prismatic geometries, LoD2 with more detailed models (only exterior walls), LoD3 with interior walls/rooms and some simplified reality-based elements, LoD4 with high-resolution reality-based models

LoD1 contains simplified 3D prismatic entities with flat roofs. LoD2 contains 3D structures at a higher level of detail, although only the exteriors of the structures: sub-structures (e.g. walls, roofs or external stairs) can be identified. For LoD2, some hypothetical reconstruction models were used. LoD3 adds the interior elements (rooms, corridors, etc.) to the structures; some simplified, reality-based models can optionally be added. These reality-based models were obtained from the more detailed ones used in LoD4 by applying mesh simplification algorithms. LoD4 contains structures (or parts of them) as high-resolution models, which can be further segmented into subparts. Examples are shown in Figure 1.
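Deriving the simplified LoD3 meshes from the LoD4 ones can rely on any standard mesh simplification technique. As an illustration only (the project does not specify which algorithm was used), a minimal vertex-clustering decimation can be sketched in Python:

```python
from collections import defaultdict

def cluster_decimate(vertices, triangles, cell):
    """Simplify a triangle mesh by vertex clustering: snap vertices to a
    regular grid of size `cell`, merge those sharing a grid cell into
    their centroid, and drop triangles that become degenerate."""
    key_of = {}                      # original vertex index -> cluster key
    cluster_pts = defaultdict(list)  # cluster key -> member vertices
    for i, (x, y, z) in enumerate(vertices):
        k = (round(x / cell), round(y / cell), round(z / cell))
        key_of[i] = k
        cluster_pts[k].append((x, y, z))
    keys = sorted(cluster_pts)
    index_of = {k: j for j, k in enumerate(keys)}
    new_vertices = [
        tuple(sum(c) / len(cluster_pts[k]) for c in zip(*cluster_pts[k]))
        for k in keys
    ]
    new_triangles = []
    for a, b, c in triangles:
        face = (index_of[key_of[a]], index_of[key_of[b]], index_of[key_of[c]])
        if len(set(face)) == 3:      # keep only non-degenerate faces
            new_triangles.append(face)
    return new_vertices, new_triangles

# Two nearby vertices collapse into one; the degenerate face disappears.
nv, nt = cluster_decimate(
    [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)],
    [(0, 1, 2), (0, 2, 3)],
    cell=0.5,
)
print(len(nv), nt)  # → 3 [(0, 1, 2)]
```

Production tools use more sophisticated error-driven methods (e.g. edge collapse), but the principle of trading geometric fidelity for a lighter model is the same.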

High-resolution 3D data were acquired using terrestrial photogrammetry, UAV platforms and terrestrial laser scanning. In the case of Copán, an ideal tool should enable centralised data management, access and visualisation. The following characteristics would therefore be desirable: a) the capability to handle 3D multi-resolution datasets; b) the possibility to perform queries based both on geometries and on attributes; c) the choice to visualise and navigate the models in 3D; d) the option to allow both local and on-line access to the contents.
As no tool able to guarantee these four properties currently exists, a prototype called QueryArch3D was implemented. Despite being tailored to the needs of researchers working at the Copán archaeological site, the underlying concepts can be generalised to other similar contexts.

The adoption of a LoD-dependent hierarchical schema required the contextual definition of geometric and semantic hierarchical schemas. This was achieved by identifying and describing the so-called part-of relations, in order to guarantee spatio-semantic coherence (Stadler and Kolbe, 2007). At the semantic level, once every structure is defined, its entities are represented by features (stairs, rooms, etc.), which are described by attributes, relations and aggregation rules between features. If a query on an attribute table is carried out for a certain roof, the user retrieves information not only about the roof itself, but also about which structure contains that roof. The semantic hierarchy needs, however, to be linked to the corresponding geometries too: the same query should return not only the linked attributes, but also the corresponding geometric object. This requires structuring the geometric models in compliance with the hierarchy, so some manual data editing was necessary to segment them into subparts accordingly.
Conceptually similar to CityGML, geometric data are organised in successive levels of detail (LoD), provided with geometric and semantic hierarchies and enriched with attributes coming from external data sources. The visualisation and query front-end enables 3D navigation of the models in a virtual environment, as well as interaction with the objects by means of queries based on attributes or on geometries. The tool can be used as a standalone application or served through the web.
In order to cope with the complexity of multiple geometric models, a conceptual scheme for multiple levels of detail, required to reflect independent data collection processes, was defined. For the Copán site, four levels of detail were defined for the structures: the higher the LoD rank, the more detailed and accurate the model.


Figure 2. Different visualisation modes in QueryArch3D: aerial view (a, b), walkthrough mode (b) and detail view (d). Data can be queried according to attributes (a) or by clicking on the chosen geometry (b, c, d). The amount of information shown depends on the LoD: in (b), attributes about the whole temple are shown; in (c), only a subpart of the temple, and the corresponding attributes, are shown

Inside the 3D environment front-end, the user can perform attribute queries over the whole dataset (e.g. highlight all structures built by ruler X; highlight all altars; highlight only stelae belonging to group Y and built in year Z). The user can also perform standard queries-on-click: once a geometric object is selected, the related attribute values are shown in a text box. The amount of retrieved information depends, however, on the LoD: for LoD1 structures, only global attributes are shown, while for higher LoDs the detailed information is also shown, according to the selected segment (Figure 2).
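The hierarchy-aware retrieval described above can be sketched as a walk up the part-of relations; the feature identifiers and attributes below are invented for illustration:

```python
# Toy part-of hierarchy: each feature stores the id of the entity it
# belongs to, so a query on a roof also returns the attributes of the
# structure containing it.
features = {
    "roof_07":   {"part_of": "temple_22", "material": "stone"},
    "stair_03":  {"part_of": "temple_22", "steps": 12},
    "temple_22": {"part_of": None, "period": "Late Classic"},
}

def query(feature_id):
    """Return (id, attributes) pairs for a feature and all its ancestors."""
    result = []
    while feature_id is not None:
        attrs = dict(features[feature_id])
        parent = attrs.pop("part_of")
        result.append((feature_id, attrs))
        feature_id = parent
    return result

print(query("roof_07"))
# → [('roof_07', {'material': 'stone'}), ('temple_22', {'period': 'Late Classic'})]
```

Storing the parent link once per feature is what makes a click on a LoD4 segment able to report both the segment and the structure it belongs to.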

Once segmented into subparts according to the hierarchical schemes, all geometric models were aligned and georeferenced.
In order to reduce the heterogeneity of data formats, the free and open-source DBMS PostgreSQL and its spatial extension PostGIS were chosen as data repository, thus providing a single, unified management system for spatial and non-spatial data. For data administration purposes, a simple front-end was developed, which connects directly to the PostgreSQL server and allows update operations on the currently stored data.
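Against such a repository, the attribute and spatial queries can be composed as parameterized SQL. A minimal Python sketch (the structures table, its columns and the helper name are hypothetical; ST_Within and ST_MakeEnvelope are standard PostGIS functions):

```python
def structure_query(ruler=None, built_year=None, bbox=None):
    """Compose (sql, params) for a hypothetical 'structures' table with
    a PostGIS geometry column 'geom'; the result can be passed to e.g.
    psycopg2's cursor.execute()."""
    clauses, params = [], []
    if ruler is not None:
        clauses.append("ruler = %s")
        params.append(ruler)
    if built_year is not None:
        clauses.append("built_year = %s")
        params.append(built_year)
    if bbox is not None:  # (xmin, ymin, xmax, ymax) in the layer's SRS
        clauses.append("ST_Within(geom, ST_MakeEnvelope(%s, %s, %s, %s))")
        params.extend(bbox)
    sql = "SELECT id, name FROM structures"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return sql, params

sql, params = structure_query(ruler="Ruler X", bbox=(0, 0, 500, 500))
print(sql)
```

Keeping values in `params` instead of interpolating them into the string lets the database driver handle quoting, which matters once such queries are exposed through a web front-end.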

7.2.3 CONCLUSIONS

For the interactive 3D visualisation and query front-end, the game engine Unity 3D, an integrated authoring tool for the creation of 3D interactive content, was adopted. Unity allows the development of applications that can also be embedded in a webpage. Moreover, it can be linked to external databases and retrieve data when needed, e.g. by means of a PHP interface between Unity and PostgreSQL.

The continuous development and improvement of new sensors, data capture methodologies and multi-resolution 3D representations contribute significantly to the growth of research in the Cultural Heritage field. Nowadays, 3D models of large and complex sites can be produced using different methodologies, which can be combined to derive multi-resolution data and different levels of detail. The 3D digital world is thus providing opportunities to change the way knowledge and information are accessed and exchanged, as faithful 3D models help to simulate reality more objectively and can be used for different purposes.

Regarding navigation in the 3D environment, three modes are available: a) an aerial view over the whole archaeological site, where only LoD1 models are shown; b) a ground-based walkthrough mode, in which the user can approach and enter a structure on foot up to LoD3 (provided such a model exists, otherwise a lower-ranked model at LoD2 or LoD1 is visualised); and c) a detail view, where LoD4 models are presented.
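The fallback rule used in the walkthrough mode (show the best model available, up to LoD3) amounts to a simple descending lookup; the file names below are invented for illustration:

```python
def best_available_lod(models, max_lod):
    """Return (lod, model) for the highest LoD not exceeding max_lod.
    `models` maps LoD rank -> model handle for a single structure."""
    for lod in range(max_lod, 0, -1):
        if lod in models:
            return lod, models[lod]
    raise LookupError("no model available for this structure")

# A structure modelled only at LoD1 and LoD2: the walkthrough mode
# asks for LoD3 and falls back to the LoD2 model.
models = {1: "temple22_lod1.obj", 2: "temple22_lod2.obj"}
print(best_available_lod(models, 3))  # → (2, 'temple22_lod2.obj')
```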

The QueryArch3D prototype was developed to address some of the open issues regarding multi-resolution data integration and access in the framework of Cultural Heritage. Some requirements were identified in terms of data handling, querying, visualisation and access.


These requirements include the capability to handle multi-resolution models, to query geometries and attributes in the same virtual environment, to allow 3D data exploration, and to offer on-line access to the data. QueryArch3D fulfils them. It is still at a prototype stage, but it is being further developed and improved to extend and refine its capabilities. Adding more high-resolution models to an on-line virtual environment requires good hardware and internet connections; proper strategies will have to be tested and adopted to keep the user experience acceptable as the number of models grows.

BEHR, J.; ESCHLER, P.; JUNG, Y.; ZOELLNER, M. 2009. X3DOM - A DOM-based HTML5/X3D Integration Model. Web3D - Proceedings of the 14th International Conference on 3D Web Technology, ACM Press, New York, USA, pp. 127-135.
BEHR, J.; ESCHLER, P.; JUNG, Y.; ZOELLNER, M. 2010. A scalable architecture for the HTML5/X3D integration model X3DOM. Spencer S. (Ed.): Web3D - Proceedings of the 15th International Conference on 3D Web Technology, ACM Press, New York, USA, pp. 185-194.

As of now, QueryArch3D relies on the Unity plugin, but the constant improvements and innovations in web-based access and visualisation capabilities offered by HTML5 and WebGL will make it possible to switch, at a certain point, to a plugin-free architecture.

BROVELLI, M.A.; VALENTINI, L.; ZAMBONI, G. 2011. Multi-dimensional and multi-frame web visualization of historical maps. Proc. of the 2nd ISPRS Workshop on Pervasive Web Mapping, Geoprocessing and Services, Burnaby, British Columbia, Canada.

The Web has already improved accessibility to 2D spatial information hosted in different computer systems over the Internet (e.g. by means of WebGIS), so the same improvements are expected in the near future for 3D data.

COORS, V.; JUNG, V. 1998. Using VRML as an Interface to the 3D Data Warehouse. Proceedings of VRML '98, New York, pp. 121-127.

References

DE LUCA, L.; BUSSAYARAT, C.; STEFANI, C.; VÉRON, P.; FLORENZANO, M. 2011. A semantic-based platform for the digital analysis of architectural heritage. Computers & Graphics, Volume 35, Issue 2, April 2011, pp. 227-241, Elsevier.

AGUGIARO, G.; REMONDINO, F.; GIRARDI, G.; VON SCHWERIN, J.; RICHARDS-RISSETTO, H.; DE AMICIS, R. 2011. QueryArch3D: Querying and visualizing 3D models of a Maya archaeological site in a web-based interface. Geoinformatics FCE CTU Journal, vol. 6, pp. 10-17, Prague, Czech Republic. ISSN: 1802-2669.

DUDEK, I.; BLAISE, J.-Y.; BENINSTANT, P. 2003. Exploiting the architectural heritage's documentation: a case study on data analysis and visualisation. Proc. of I-KNOW '03, Graz, Austria.

AUER, M. 2012. Realtime Web GIS Analysis using WebGL. International Journal of 3-D Information Modeling (IJ3DIM), Special Issue on 3D Web Visualization of Geographic Data (Special Issue Eds.: M. Goetz, J.G. Rocha, A. Zipf). IGI Global. DOI: 10.4018/ij3dim.2012070105.

HERBIG, U.; WALDHÄUSL, P. 1997. APIS: architectural photogrammetry information system. International Archives of Photogrammetry and Remote Sensing, vol. 38(5C1B), Geo-Information Science and Earth Observation, University of Twente, Enschede, The Netherlands, pp. 23-27.
IOANNIDIS, N.; CARLUCCI, R. 2002. Archeoguide: Augmented Reality-based Cultural Heritage On-site Guide. GITC, ISBN 908062053X.

BALDISSINI, S.; MANFERDINI, A.M.; MASCI, M.E. 2009. An information system for the integration, management and visualization of 3D reality-based archaeological models from different operators. Remondino F., El-Hakim S., Gonzo L. (Eds.): 3D Virtual Reconstruction and Visualization of Complex Architectures, 3rd ISPRS International Workshop 3D-ARCH 2009, Trento, Italy, 38(5/W1), (on CD-ROM).

JUNG, Y.; BEHR, J.; GRAF, H. 2011. X3DOM as carrier of the virtual heritage. Remondino F., El-Hakim S. (Eds.): Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 4th ISPRS International Workshop 3D-ARCH 2011, Trento, Italy, 38(5/W16), (on CD-ROM).

BARAZZETTI, L.; FANGI, G.; REMONDINO, F.; SCAIONI, M. 2010. Automation in multi-image spherical photogrammetry for 3D architectural reconstructions. Proc. of the 11th Int. Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST 2010), Paris, France.

KNUYTS, K.; KRUTH, J.-P.; LAUWERS, B.; NEUCKERMANS, H.; POLLEFEYS, M.; LI, Q. 2001. Vision on conservation: VIRTERF. Proc. of the Int. Symp. on Virtual and Augmented Architecture, Springer, Dublin, pp. 125-132.
KURDY, M.; BISCOP, J.-L.; DE LUCA, L.; FLORENZANO, M. 2011. 3D Virtual Anastylosis and Reconstruction of several Buildings on the Site of Saint-Simeon, Syria. Remondino F., El-Hakim S. (Eds.): Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 38(5/W16), (on CD-ROM).

BASANOW, J.; NEIS, P.; NEUBAUER, S.; SCHILLING, A.; ZIPF, A. 2008. Towards 3D Spatial Data Infrastructures (3D-SDI) based on open standards - experiences, results and future issues. In: P. van Oosterom, S. Zlatanova, F. Penninga and E.M. Fendel (Eds.): Advances in 3D Geoinformation Systems, Lecture Notes in Geoinformation and Cartography, Springer, Berlin Heidelberg, pp. 65-86.

STADLER, A.; KOLBE, T.H. 2007. Spatio-Semantic Coherence in the Integration of 3D City Models. In: Proceedings of the 5th International Symposium on Spatial Data Quality ISSDQ 2007, Enschede, The Netherlands, ISPRS Archives.

MANFERDINI, A.M.; REMONDINO, F. 2012. A review of


reality-based 3D model generation, segmentation and
web-based visualization methods. Int. Journal of
Heritage in the Digital Era, Vol. 1(1), pp. 103-124,
DOI 10.1260/2047-4970.1.1.103.
PESCARIN, S.; CALORI, L.; CAMPORESI, C.; IOIA, M.D.; FORTE, M.; GALEAZZI, F.; IMBODEN, S.; MORO, A.; PALOMBINI, A.; VASSALLO, V.; VICO, L. 2008. Back to 2nd AD - A VR on-line experience with Virtual Rome Project. Ashley M., Hermon S., Proenca A., Rodriguez-Echavarria K. (Eds.): Proc. of the 9th International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST 2008), pp. 109-116.
REMONDINO, F.; GRUEN, A.; VON SCHWERIN, J.; EISENBEISS, H.; RIZZI, A.; SAUERBIER, M.; RICHARDS-RISSETTO, H. 2009. Multi-sensors 3D documentation of the Maya site of Copán. Proc. of the 22nd CIPA Symposium, 11-15 Oct., Kyoto, Japan.

VISINTINI, D.; SIOTTO, E.; MENEAN, E. 2009. The 3D modeling of the St. Anthony Abbot church in San Daniele del Friuli: from laser scanning and photogrammetry to the VRML/X3D model. Remondino F., El-Hakim S. (Eds.): Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 3rd ISPRS International Workshop 3D-ARCH 2009, Trento, Italy, 38(5/W10), (on CD-ROM).
VOSSELMAN, G.; MAAS, H. 2010. Airborne and terrestrial laser scanning. CRC, Boca Raton, 318 pp. ISBN: 978-1-904445-87-6.
Web links

REMONDINO, F.; RIZZI, A.; BARAZZETTI, L.; SCAIONI, M.; FASSI, F.; BRUMANA, R.; PELAGOTTI, A. 2011. Geometric and Radiometric Analyses of Paintings. The Photogrammetric Record.

[NUBES] http://www.map.archi.fr/nubes/NUBES_Information_System_at_Architectural_Scale/Tempus.html

RICHARDS-RISSETTO, H. 2010. Exploring Social Interaction at the Ancient Maya Site of Copán, Honduras: A Multi-scalar Geographic Information Systems (GIS) Analysis of Access and Visibility. Ph.D. Dissertation, University of New Mexico.

[3DCOFORM] http://www.3d-coform.eu/

SALONIA, P.; NEGRI, A. 2000. ARKIS: an information


system as a tool for analysis and representation of
heterogeneous data on an architectural scale. Proc. of
the WSCG2000. Plzen, Czech Republic.

[KHRONOS] http://www.khronos.org/

[VENUS] http://www.ccrmlabs.com/
[DIGSCO] http://www.digitalsculpture.org/
[CITYGML] http://www.citygml.org

[QUERYARCH3D] http://mayaarch3d.unm.edu/index.php


7.3 THE USE OF 3D MODELS FOR INTRA-SITE INVESTIGATION IN ARCHAEOLOGY

Nicolò DELL'UNTO


7.3.1 INTRODUCTION
The exponential evolution of spatial and visual
technologies has deeply impacted archaeology as a
discipline. Technology has always been an important part
of archaeological practices, and its use has contributed to
developing methods and theories for the investigation and
analysis of archaeological sites. Instruments and tools
that are typically used to conduct field activities have
aided archaeologists from the very beginning of this
discipline. The use of these instruments and tools has
been customised over the years to improve the excavation
process.

In particular, the diffusion of digital formats and the availability of powerful visualisation platforms, such as geographic information systems (GIS), have exponentially increased the ability to highlight and identify new information by placing data of different natures into a spatial relationship.
Although new technologies have provided a plethora of options for recording material data, the use of these technologies during on-going archaeological investigations has always been related to their ability to fit within the logistic framework and time constraints of the field campaign. In contrast with other areas of the cultural heritage sector, the long-term use of digital technologies during archaeological field activities requires sustainable and functional workflows for the acquisition, visualisation and permanent storage of the data.

In other scientific disciplines, results and hypotheses can be verified multiple times, whereas in archaeology this practice is not feasible, given the irreversible nature of the investigation process (Barker, 1993). Therefore, choosing a recording system is a fundamental part of archaeological research, as it will determine the quality and typology of the data employed during the interpretation process.
The evolution of archaeological practices has been
described in previous literature (Jensen, 2012), and
experimentation with new investigation methodologies
has always been evident in archaeology. It is important to
highlight how the approaches that have been adopted in
the past by different scholars have represented an
important contribution to the definition of a balance
between documentation and field activities.

Digital technologies influence how archaeologists experience sites, as they are transforming a type of research that has traditionally developed in an isolated context into a more collective experience (Zubrow, 2009).

To date, this discussion has provided a platform through which scientific methodologies of investigation have been argued and defined.

Excavation is the primary method of data acquisition in archaeology. During excavation, fragmented contexts are recognised and then diachronically removed and recorded, with the goal of reconstructing and interpreting the evolution of the site's stratigraphy and chronological sequence (Barker, 1993).

7.3.2 3D MODELS AND FIELD PRACTICES

In the last decade, archaeological practices and documentation have been strongly affected by the diffusion of digital technologies. A result of this process has been the introduction of new typologies of instruments and data that benefit current methods of investigation, as they are able to provide more complete overviews of archaeological contexts.
The documentation system that is adopted in the field is typically designed according to the characteristics of the site and is based on the systematic collection of archaeological evidence.

Figure 1. This image presents an example of a 3D model acquired during an investigation campaign in Uppåkra (Summer 2011). The model was created using Agisoft PhotoScan and visualised through MeshLab

A 3D model provides a high-resolution geometric description of the archaeological evidence that characterises a site, and is able to do so within the specific time frame of the investigation activity (Fig. 1). In contrast to a picture, 3D models can be measured and explored at different levels of detail and, if generated during field activities, they can be used together with graphic documentation to plan new excavation strategies and to monitor the field activity as it develops.

The recording process may be one of the most delicate parts of a field campaign: at the end of an investigation, interpretative drawings, photographs and site records are the only data sources available for post-excavation research. Field documentation plays an important role during the interpretation process (i.e., excavation and post-excavation) and significantly influences the successful planning of an on-going investigation.

Certainly, the possibility of creating measurable 3D replicas of archaeological data would be an important achievement for archaeological research, but it is important to note that 3D models cannot be used as a substitute for interpretative drawings. In fact, the two methods describe different aspects of the same context, and their combined use may represent the most exhaustive visual tool for describing a site. When associated with field documentation, three-dimensional models can be used to track and reconstruct the evolution of a site's field activities, including mapping the metamorphosis of an archaeological excavation through its entire life cycle.

Since personal computers were first used in archaeology, instruments such as computer-aided design (CAD) have been utilised to digitise hand drawings and field records. Although these instruments are not comparable with the tools that are available today, their introduction revolutionised documentation by supporting the creation of maps that displayed varying levels of detail regarding a site in a single document. Another important technological achievement in archaeology was the introduction of Geographic Information Systems (GIS): sophisticated database management systems designed for the acquisition, manipulation, visualisation, management and display of spatially referenced data (Aldenderfer, 1996). Although GIS did not substantially affect how graphic documentation was realised during excavation, nor did it influence the typology of the information documented in field records, it did support data management and spatial analysis by providing the capacity to display a complete overview of on-going investigation activity through a temporal-spatial connection of the whole dataset, once imported into the system.

7.3.3 INTRA-SITE 3D DOCUMENTATION IN UPPÅKRA
The use of three-dimensional models to document archaeological contexts is an important step in the archaeological recording process. Although the advantages of employing this typology of data during investigations are obvious, it is not as simple to define strategies for its systematic employment in the field. As stated, to be successfully employed during excavation, 3D models need to be available within the time frame of the field activity and have to be visualised in spatial relation with all of the other types of documents collected during an investigation.

The use of 3D models to document archaeological investigations is an important novelty in the area of field recording. In contrast with interpretative drawings, which provide a schematic and symbolic description of a context, a three-dimensional model has the capacity to display the full qualities of a context immediately upon exposure.

Since the spring of 2010, experiments have been conducted at the archaeological site of Uppåkra, Sweden.


The process of 3D data construction begins with the application of structure-from-motion (SfM) algorithms. This procedure involves calculating the camera parameters for each image: the software detects and matches similar geometrical features in each pair of consecutive images and calculates their corresponding positions in space. Once the camera positions have been estimated, algorithms for dense stereo reconstruction are used to generate a detailed 3D model of the scene. During this second stage, the pre-estimated camera parameters and the image pixels are used to build a dense point cloud, which is then processed into a high-resolution 3D model (Verhoeven, 2011; Scharstein & Szeliski, 2002; Seitz et al., 2006; Callieri et al., 2011) (Fig. 2). Although flexible and versatile, the success of this method largely depends on the skill of the operator in taking a good set of pictures, the quality of the digital camera used and the computational characteristics of the computer used for data processing (Callieri et al., 2012).
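Both stages rest on the pinhole camera model, which links the estimated camera parameters to image pixels. A minimal, pure-Python sketch (rotation, translation and intrinsics are illustrative values, not estimated ones):

```python
def project(point, R, t, focal, cx, cy):
    """Project a 3D world point to pixel coordinates with a pinhole
    camera: transform into the camera frame (Xc = R*X + t), then
    divide by depth and apply the intrinsics."""
    xc = [sum(R[i][j] * point[j] for j in range(3)) + t[i] for i in range(3)]
    if xc[2] <= 0:
        raise ValueError("point is behind the camera")
    u = focal * xc[0] / xc[2] + cx
    v = focal * xc[1] / xc[2] + cy
    return u, v

# Identity rotation, camera at the origin, a point 10 units ahead:
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(project((1.0, 0.0, 10.0), R, [0, 0, 0], focal=1000, cx=640, cy=480))
# → (740.0, 480.0)
```

SfM estimates R, t and the focal length for every image by making many such projections agree with the matched features; dense reconstruction then reuses the same model pixel by pixel.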

These experiments are conducted in collaboration between Lund University (http://www.lunduniversity.lu.se/) and the Visual Computing Lab in Pisa (http://vcg.isti.cnr.it/). Their goal is to examine the advantages and disadvantages of using 3D models to document and interpret on-going archaeological investigations. This typology of research aims to highlight how the use of spatial technologies changes the perception of an investigation site through the direct employment of new digital approaches during excavation.
The archaeological site of Uppåkra is considered one of the most important examples of an Iron Age central place in Sweden. The site is located in Scania, 5 kilometres south of Lund, and covers approximately 100 acres of land. To date, the archaeological investigation has revealed the existence of a settlement that was established at the beginning of the 1st century BC and existed until the end of the 11th century AD. This settlement contains many different typologies of structures and finds. The site, which was discovered in 1934, has been the subject of archaeological investigations since 1996, and has proven from the very beginning to be extraordinarily rich. During a field campaign's initial phase (1996-2000), a metal-detector survey indicated the presence of approximately 20,000 finds, which support the continuity of human activities at this site from the Pre-Roman Iron Age until the Viking Age (Larsson, 2007).

The first experiment using this technique was performed in the summer of 2010, during an excavation on the south-east side of the previously mentioned long house. The goals of this experiment were to test the efficiency of the technique in producing complete 3D models within the time frame of the excavation, and to gain insight into the level of accuracy of the 3D models created. This experiment was performed without the modelling team joining the field campaign: the photographs were taken every day at the end of the daily excavation activities.

Thus far, Uppåkra has been an ideal environment for conducting our experiments. The rich stratigraphy that characterises this site and the large variety of structures found to date have allowed for the testing of tools and instruments across a number of different types of archaeological situations. Currently, this research environment serves as an ideal place for developing and testing new research methodologies.

Despite limitations in the software available at the time of the experiment, both of these goals were successfully accomplished. Every day, a complete 3D replica of the archaeological context under investigation was available for archaeologists to utilise to monitor their previous investigation activities (Fig. 3). Moreover, these 3D models were sufficiently accurate for use as geometrical references in documentation (Dellepiane et al., 2012).

One of the first in situ tests was conducted during the spring of 2010 and involved using a time-of-flight laser scanner to document an Iron Age long house situated in the north-east area of the site. Despite the short amount of time needed to accomplish this task, the post-processing of the raw data was extremely time-consuming and did not provide results that could be utilised before the end of the field campaign. After this experience, it was clear that this typology of instrument, although extremely accurate and capable of high resolution, was not appropriate for this experiment.

Although these results were positive, whether this new typology of data could be employed during the practice of an excavation remained unclear, as the first experiment was performed without joining the excavation campaign. Therefore, we designed an experiment to evaluate whether the models created during a field campaign could influence the development of an investigation campaign, given that these models may provide new information for interpretation. The goals of this experiment were as follows: (i) to assess the sustainability of a documentation method based on the combination of 3D models and traditional data; (ii) to evaluate the use of 3D models as geometrical references for documentation; and (iii) to shed light on whether the use of different visualisation tools increases comprehension of the stratigraphic sequence of the site within the time frame of the investigation. This experiment was performed in the summer of 2011, during an excavation campaign of a Neolithic grave detected in 2010 in the north-west area of the site.
During the same time period, we were also testing


Computer Vision techniques, which are tools that
generate resolute tridimensional models from sets of
unordered images. This methods primary advantage is
that a simple digital camera (without any preliminary
calibration) can be used to generate a 3D model of a large
archaeological context. Despite the limitations of the
image processing software that was used at that time, this
technique was extremely flexible in the field and
provided resolute 3D models within the time frame of the
excavation (Dellepiane et al., 2012).

153

3D MODELING IN ARCHAEOLOGY AND CULTURAL HERITAGE

Figure 2. This image shows the three steps performed by the software (i.e., Photoscan and Agisoft) to calculate
the 3D model for the rectangular area excavated in 2011 during the investigation of a Neolithic grave
in Uppkra: (a) camera position calculations, (b) geometry creation, and (c) map projection

found in the north and south parts of the excavation. To


verify the continuity of this circular structure, a
perpendicular trench was excavated. A large pit was
discovered in the middle of the structure and a stone
paving was found at the bottom of this pit. (Fig. 4)
(Callieri et al., 2011).

Northwest area of the site when a geophysical inspection


of the area highlighted the clear presence of anomalies
(Trinks et al., 2013).
The excavation began with an investigation of a
rectangular area that crossed the circular structure found
during the geophysical inspection. The archaeological
contexts were documented combining Total Station
(which was used by the excavation team to produce
graphic interpretations), field records and several sets of
images, which were used to generate the 3D models.
During the field campaign, circular-shaped ditches, which
may have served as the border of a grave mound, were

This experiment assessed whether the use of 3D models


could provide a better understanding of an on-going
excavation during field activities. In particular, we
attempted to use this new typology of data during the
discussion frame, which typically occurs when there is a
direct examination of the archaeological evidence.

154

CASE STUDIESS
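The dense stereo matching behind these Computer Vision tools (Dellepiane et al., 2012) rests on a simple geometric fact: the same ground point shifts between two photographs by an amount (the disparity) inversely proportional to its distance from the cameras. The sketch below is illustrative only, with invented example values and a simplified one-dimensional nadir (straight-down) camera, not the project's actual camera model:

```python
def project(x, z, cam_x, cam_height, focal_px, cx):
    """Image column of a ground point (x, z) seen by a nadir pinhole
    camera positioned at (cam_x, cam_height)."""
    return cx + focal_px * (x - cam_x) / (cam_height - z)

# Two photos of the same stone (x = 1.0 m, z = 0.2 m above datum),
# taken 0.4 m apart at 2.0 m height with a 1000 px focal length:
u_left  = project(1.0, 0.2, 0.0, 2.0, 1000.0, 500.0)
u_right = project(1.0, 0.2, 0.4, 2.0, 1000.0, 500.0)

disparity = u_left - u_right        # pixel shift of the stone between photos
depth = 0.4 * 1000.0 / disparity    # baseline * focal / disparity

print(round(depth, 6))              # → 1.8  (the true camera-to-stone distance)
```

Matching many such points densely across overlapping, unordered photographs is what allows an uncalibrated consumer camera to yield a measurable 3D surface of the trench.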

Figure 3. This image shows the investigation area that was selected in 2010 to test the efficiency of the Computer
Vision techniques during an archaeological excavation in Uppåkra. The upper part of the image presents (A) a model
created during the excavation overlapped with the graphic documentation created during the investigation campaign.
The lower part of the image presents (B) an example of models organised in a temporal sequence

Figure 4. This image shows two models of the excavation that were created at different times during
the investigation campaign. In the first model, (a) the circular ditch is visible only in the Northwest
rectangular area. The second model shows (b) how the results of the archaeological investigation
allowed for the discovery of a ditch that was in the Southeast rectangular area

Figure 5. This image shows part of the 3D models that were created
during the excavation of a grave, organised in a temporal sequence

To achieve this goal, it was necessary to develop a 3D
model before the field investigation progressed. Although
Computer Vision techniques create high-resolution 3D
models, we decided to maintain strict control over the
number of polygons used to describe each file. Therefore,
we established guidelines for developing models that
balanced resolution and usability, meaning that the files
should be easy to manage and store. In fact, the limited
storage resources that often characterise archaeological
archives could have prevented the storage of the 3D
models with the rest of the documentation, which would
have resulted in the loss of these models in the future.

After establishing the guidelines for processing the
models, we began a daily acquisition of the site,
geo-referencing the 3D models with the site grid. During
the excavation, staff members primarily used the 3D files
to discuss issues regarding the on-going excavation, such
as the horizontal relations among different layers or the
vertical progression of the site stratigraphy. Moreover,
different virtual perspectives and angles were utilised to
review the complex metamorphosis of the site, which
would have been impossible to do in real life (Fig. 5)
(Callieri et al., 2011).

In contrast with the graphic documentation, during which
interpretations are recorded as a result of intense
discussion processes, the 3D models were primarily
utilised to achieve a deeper understanding of the
stratigraphic sequence. This test assessed whether
exploring multiple 3D models of a site in real time
enhanced archaeologists' ability to monitor and recognise
archaeological evidence. The experiment revealed that
elaborate three-dimensional models increase researchers'
sense of awareness, thereby improving the quality of the
final interpretation. Although we achieved most of our
goals, we were unable to visualise both the traditional
documentation (i.e., graphic elaborations and field
records) and the 3D files in the same virtual space.

Obviously, the ideal situation would be to use a
visualisation platform capable of displaying both graphic
interpretations and 3D models within the same referenced
virtual space. However, this solution was not available
when the last experiment was performed. The inability to
visualise the three-dimensional models in spatial relation
with the site documentation prevented the creation of a
complete visual description of the site. This highlights
how a 3D replica of an archaeological context, if not
connected to the site documentation, loses a large part of
its communicative power.

Our results shed light on the use of 3D models during
archaeological investigations. Critically, these results
highlight the importance of finding new visual solutions
for merging three-dimensional data into excavation
routines. The increasing diffusion and use of 3D models
in different disciplines has encouraged the private sector
to propose new solutions. Companies such as ESRI
(http://www.esri.com/) have recently invested in
developing GIS platforms with the capacity to manage
and visualise 3D models. This technological improvement
provides an important opportunity for gaining a new
understanding of the current topic. Thus, the experiments
included in this paper should be developed further to gain
a preliminary understanding of the advantages and
disadvantages of combining 3D models with GIS.

7.3.4 3D MODELS AND GIS

After briefly investigating the best data workflow for
achieving a sustainable process of data migration, we
started a systematic import of the available information
into a GIS. To obtain this result, the whole dataset, which
had previously been acquired in the field, was
re-processed using Agisoft Photoscan
(http://www.agisoft.ru/), a software package that provides
an advanced image-based solution for creating
three-dimensional content from still images (Verhoeven,
2011). Despite similarities with other Computer Vision
applications, the advantage of using Agisoft is that it
semi-automatically creates scaled and geo-referenced
texturised models, which is crucial when using 3D files in
a GIS platform.

Figure 6. This image shows the integration of the 3D models into the GIS
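The polygon-budget guideline mentioned above (keeping each file easy to manage and store) can be illustrated with the simplest mesh-decimation strategy, vertex clustering, in which all vertices falling in the same grid cell are merged into one. This is a hedged sketch of the general technique, not the simplification actually applied to the Uppåkra models:

```python
def simplify(vertices, faces, cell):
    """Vertex-clustering decimation: merge all vertices in the same grid cell."""
    cell_index = {}      # grid cell -> new vertex index
    remap = []           # old vertex index -> new vertex index
    sums, counts = [], []
    for v in vertices:
        key = tuple(int(c // cell) for c in v)
        if key not in cell_index:
            cell_index[key] = len(sums)
            sums.append([0.0, 0.0, 0.0])
            counts.append(0)
        i = cell_index[key]
        for axis in range(3):
            sums[i][axis] += v[axis]
        counts[i] += 1
        remap.append(i)
    # Each merged vertex becomes the centroid of its cell.
    new_vertices = [tuple(s / n for s in sm) for sm, n in zip(sums, counts)]
    # Drop triangles that collapsed to an edge or a point.
    new_faces = [(remap[a], remap[b], remap[c]) for a, b, c in faces
                 if len({remap[a], remap[b], remap[c]}) == 3]
    return new_vertices, new_faces

# A 1 m quad split into two triangles (coordinates in metres):
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
tris = [(0, 1, 2), (0, 2, 3)]

small = simplify(verts, tris, cell=2.0)   # coarse grid: everything merges
full = simplify(verts, tris, cell=0.5)    # fine grid: mesh survives intact
print(len(small[0]), len(small[1]))       # → 1 0
print(len(full[0]), len(full[1]))         # → 4 2
```

The cell size plays the role of the polygon budget: a coarser grid trades geometric detail for files that any archive or GIS can realistically hold, which is exactly the resolution-versus-usability compromise discussed above.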

The visualisation of the dataset was performed using
ArcScene, a 3D software application developed by ESRI
that displays GIS data in three dimensions. After the
models were optimised, they were imported into
ArcScene and visualised together with the shape files that
had been created during the investigation campaign
(Fig. 6a). Once defined, the workflow was easy to apply
and, although optimisation was necessary (Fig. 6b), the
models maintained a sufficient level of visual quality.
Notably, ArcScene only imported models smaller than
34,000 polygons.

An important advantage of having the 3D models
available in the GIS is that an attribute table can be
defined for and connected to each file. Attribute tables
can directly link portions of the field records, as metadata,
with the three-dimensional models, which allows for very
interesting scenarios regarding the future development of
field investigations.

When collecting large 3D datasets, a Geographic
Information System can be used to select and display the
three-dimensional files that are associated with similar
field records (i.e., metadata). This operation would
automatically display any artificial 3D environments
created from archaeological contexts belonging to the
same period but excavated during a different time frame.
Unfortunately, the type of data collected during our
experiments does not support performing this simulation.
However, we intend to initiate new experiments focusing
on these aspects.

7.3.5 CONCLUSION
This paper presents the advantages of incorporating
three-dimensional models into current archaeological
recording systems. The results supported the combination
of 3D files with the current documentation system, as this
approach would represent a more informative tool for the
description of the excavation process. Additionally, the
results of our experiments indicate that the appropriate
integration of 3D models within the time frame of field
activities greatly increases the perception of the
archaeological relations that characterise the on-going
investigation by providing a 3D temporal reference of the
actions performed on the site.
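The attribute-table scenario described in section 7.3.4, selecting and displaying 3D files whose linked field records share a value, can be sketched in a few lines. The table contents and file names below are invented placeholders, not records from the Uppåkra archive:

```python
# Toy "attribute table": each 3D model file linked to a field-record entry.
models = [
    {"file": "context_101.obj", "period": "Neolithic", "campaign": 2010},
    {"file": "context_117.obj", "period": "Neolithic", "campaign": 2011},
    {"file": "context_130.obj", "period": "Iron Age",  "campaign": 2011},
]

def select_models(table, **criteria):
    """Return the model files whose metadata match all given criteria."""
    return [row["file"] for row in table
            if all(row.get(k) == v for k, v in criteria.items())]

# Contexts of the same period, regardless of excavation campaign:
print(select_models(models, period="Neolithic"))
# → ['context_101.obj', 'context_117.obj']

# Narrowed to a single campaign:
print(select_models(models, period="Neolithic", campaign=2011))
# → ['context_117.obj']
```

In a real GIS the same query would be expressed against the attribute table and would trigger the display of the matching 3D environments side by side, which is precisely the simulation the collected data could not yet support.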
Abstract
In recent decades, the development of technology that
aids in documenting, analysing and communicating
information regarding archaeological sites has affected
the way that historical information is transmitted and
perceived by the community.

Digital technologies have affected archaeology at all
levels; for example, novel investigation methods have
highlighted new and unknown aspects of archaeological
research. The constant development of friendly user
interfaces has encouraged the diffusion of, and
experimentation with, different approaches.

This article discusses how the use of three-dimensional
models has changed our perception of field practices in
archaeology. Specifically, this paper presents several
experiments in which three-dimensional replicas of
archaeological contexts were processed and used to
document and monitor the short lifetime of on-going
archaeological excavations. These case studies
demonstrate how the use of digital technologies during
field activities allows archaeological researchers to
time-travel through their work, revisiting contexts and
material that had previously been removed during the
investigation process.

References

ALDENDERFER, M. 1996. Introduction. In Aldenderfer, M.
& Maschner, H.D.G. (eds), Anthropology, Space, and
Geographic Information Systems. Oxford University
Press, Oxford, pp. 3-18.

BARKER, P. 1993. Techniques of Archaeological
Excavation. B.T. Batsford, London.

CALLIERI, M.; DELL'UNTO, N.; DELLEPIANE, M.;
SCOPIGNO, R.; SÖDERBERG, B. & LARSSON, L. 2011.
Documentation and Interpretation of an Archeological
Excavation: an experience with Dense Stereo
Reconstruction tools. In Dellepiane, M.; Niccolucci, F.;
Pena Serna, S.; Rushmeier, H. & Van Gool, L. (eds),
Eurographics Association, pp. 33-40.

DELLEPIANE, M.; DELL'UNTO, N.; CALLIERI, M.;
LINDGREN, S. & SCOPIGNO, R. 2012. Archeological
excavation monitoring using dense stereo matching
techniques. Journal of Cultural Heritage 14, Elsevier,
pp. 201-210.

FISHER, R.; DAWSON-HOWE, K.; FITZGIBBON, A.;
ROBERTSON, C. & TRUCCO, E. 2005. Dictionary of
Computer Vision and Image Processing. John Wiley
& Sons, Hoboken.

JENSEN, O. 2012. Histories of Archaeological Practices.
The National Historical Museum Stockholm, Studies
20, Stockholm.

LARSSON, L. 2007. The Iron Age ritual building at
Uppåkra, southern Sweden. Antiquity 81, pp. 11-25.

SZELISKI, R. 2010. Computer Vision: Algorithms and
Applications. Springer-Verlag, London.

TRINKS, I.; LARSSON, L.; GABLER, M.; NAU, E.;
NEUBAUER, W.; KLIMCYK, A.; SÖDERBERG, B. &
THORÉN, H. 2013. Large-scale archaeological
prospection of the Iron Age settlement site Uppåkra,
Sweden. In Neubauer, W.; Trinks, I.; Salisbury, R.B. &
Einwögerer, C. (eds), Austrian Academy of Sciences
Press, pp. 31-34.

VERHOEVEN, G. 2011. Taking computer vision aloft:
archaeological three-dimensional reconstruction from
aerial photographs with PhotoScan. Archaeological
Prospection 18, Wiley Online Library, pp. 67-73.

Websites

http://meshlab.sourceforge.net/
http://www.agisoft.ru/
http://www.esri.com/
