
APPLICATION OF ECOGNITION® SOFTWARE FOR PINE PLANTATION ASSESSMENT USING LIDAR AND DIGITAL CAMERA DATA IN NSW

Peter Worsley1, Catherine Carney1, Christine Stone1, Russell Turner1, Christian Hoffmann2

1 Forest Science Centre, Industry & Investment NSW
P.O. Box 100, Beecroft, NSW 2119, Australia
02 98720132
Peter.Worsley@industry.nsw.gov.au

2 Trimble Germany GmbH
Am Prime Parc 11, 65479 Raunheim, Germany
christian_hoffmann@trimble.com

Abstract
Plantation managers require spatially accurate and current information on the
structure, stocking and health of their plantation estate. Harvesting, thinning and
re-planting operations occur across plantations throughout the year and
incremental stock losses due to drought, pests and diseases are commonplace.
Consequently, these forests are constantly changing and the systematic
assessment of the planted resource is essential for predicting current and
future stand volumes and implementing silvicultural regimes aimed at maximizing
returns. Keeping plantation database records up to date is becoming more
challenging as the commercial forestry sector consolidates its workforce. This
paper presents the results to date of the application of eCognition® Developer
software for the multi-scale image segmentation and classification of P. radiata
plantations using either airborne light detection and ranging (lidar) data or high
spatial resolution, multispectral imagery acquired with a Leica ADS40 linear
scanner sensor. We successfully mapped compartment net stocked areas
(NSA) and classified treatment zones (i.e. thinned, unthinned, bare ground and
exclusion zones). For both the lidar CHM and ADS40 imagery, commission and
omission errors in classifying unthinned and thinned stands for net stocked
areas were minimal (< 5.0%). The greatest number of misclassified points in
the lidar imagery was associated with the exclusion zones, areas of vegetation,
mostly native species, outside the compartment net stocked areas (producer’s
accuracy = 84.5% and user’s accuracy = 85.5%). A better result in classifying
the non P. radiata vegetation was achieved with the ADS40 imagery (producer’s
accuracy = 98.8% and user’s accuracy = 95.5%). However, there was more
confusion in classifying bare ground compared to the lidar results (producer’s
accuracy = 92.8% and user’s accuracy = 82.5%). These rulesets will be
incorporated into an overall system enabling the operational adoption of high
resolution airborne data for accurate and cost-effective plantation inventory.

Introduction
Forests New South Wales (Forests NSW) manages the largest softwood
plantation estate in Australia. In 2009, the net market value of timber of its
193,000 ha estate was approximately A$600 million. The agency annually
harvests millions of tonnes of products (pulp and sawlogs) within a long-term,
sustainable management framework. These highly productive plantations are
continuously harvested, thinned and re-planted and the dynamic nature of the
estate presents a significant challenge for forest planners.
Systematic assessment of their planted resource is essential for predicting
current and future stand volumes and implementing silvicultural regimes aimed
at maximizing returns. Keeping database records up to date with these
plantation activities is becoming more challenging as the commercial forestry
sector consolidates its workforce. Record updating has traditionally been driven
by plot-based inventory and dGPS measurements and supported by Aerial
Photogrammetric Interpretation (API). However, manual delineation and
interpretation of air photos is a time consuming and hence expensive process.
Mapping accuracies depend on the skills and experience of the image
interpreters. This declining skill base is a problem facing forestry agencies
internationally (e.g. Leckie et al. 2003; Wulder et al. 2008). As a consequence,
cost effective, semi-automated stand classification mapping techniques to
supplement traditional methods need to be developed (e.g. Leckie et al. 2003,
Hay et al. 2005, Wulder et al. 2008, Haywood and Stone 2010).
A Pinus radiata plantation has a hierarchy of spatial features defined by the
presence of P. radiata planted in management units (compartments), the age of
the P. radiata stand and its thinning history. Within Forests NSW,
compartments are planted to approximately 1000 stems per hectare (ha),
thinned between the ages of 13 and 17 years down to 450-500 stems per ha and
then thinned again after 23 years down to 200-250 stems per ha. Most
compartments are harvested before 35 years of age. We are using eCognition®
software (Trimble Navigation Ltd.) to develop a semi-automated, object-based
approach for the multi-scale image segmentation and classification of these P.
radiata plantations using airborne light detection and ranging (lidar) data or high
spatial resolution, multispectral imagery acquired with a Leica ADS40 linear
scanner. Our first aim was to compare segmentation schemes that extract net
stocked areas at the compartment scale, derived solely from either the lidar
data or the digital multispectral imagery. Net stocked area (NSA) is the area
of land that is effectively stocked with the species of interest. Although the
synergistic use of lidar data and high spatial resolution multispectral imagery
has been demonstrated for object-based forest classification (e.g. Ke et al.
2010), access to coincident imagery may not always be possible or affordable
and so we sought to identify their strengths and weaknesses before integrating
the two types of imagery.
This study forms part of a larger project having the overall objective of
promoting the operational adoption of high resolution airborne data within
existing P. radiata resource inventory systems. Numerous recent studies have

successfully demonstrated the application of object-based image analysis
(OBIA) to classify forested landscapes (e.g. Tiede et al. 2006, Pascual et al.
2008, Kim et al. 2009; Johansen et al. 2010, Ke et al. 2010). This paper
presents progress so far, in particular on the application of eCognition® software
to classify a selection of feature class items that are managed in the Forests
NSW forest resource geo-database. Smith and Morton (2010) claim that to
exploit the benefits of geospatial object-based image analysis, the object
structures must align themselves as closely as possible to what is already in
use and what fits with the users, in our case, Forests NSW. Failure to do so will
prevent adoption of the proposed software tools.

Methodology
Existing plantation management databases
Forests NSW has two integrated databases that are integral to its plantation
planning and management activities: the non-spatial GeoMaster database
(Atlas Technology, Rotorua, New Zealand), an event management system
that captures the temporal dynamics of plantation activities, and a geo-database
(supported by ESRI software). The geo-database holds a series of feature
classes that are described by an array of items such as compartment number,
age class, species etc. The base unit within the softwoods feature class is the
‘resource unit’, a distinct sub-compartment object with a unique management
history. The geo-database also contains all the other administrative boundaries
and operational spatial data, for example, native vegetation retention areas
(exclusion zones) and net stocked areas within compartments. Forests NSW
has developed a front-end interface that enables operational staff to access
both databases.
At present, the databases are updated through inventory assessment in plots
located within the areas of interest using Atlas Cruiser® software. Summaries of
this data can be assigned as an event and entered into GeoMaster. The
boundary line-work within the geo-database is sourced from either hand-held
GPS data or manually delineated using digital camera imagery or high
resolution satellite data (e.g. IKONOS).

Study area
The 5,000 ha study site is located within Green Hills State Forest (SF) (35.5°S,
148.0°E), near Batlow within the Hume Region of Forests NSW. Green Hills SF
is a large commercial P. radiata plantation planted on mostly undulating
topography, with a mean elevation of 750 m and annual rainfall of
approximately 1200 mm.

Acquisition of imagery
Small-footprint discrete return lidar data was acquired using a Lite Mapper LMS-
Q5600 ALS system (Riegl, Australia) mounted in a fixed-wing aircraft and

supplied through Digital Mapping Australia Pty. Ltd. (Perth, Australia). The lidar
mission was flown in July 2008 to coincide with winter. The winter season was
selected because it is the leaf-off period for deciduous blackberries which are
the key understorey weed species in the plantation. The near infra-red (NIR)
lidar system was configured for a pulse rate of 88,000 pulses per second, a mean
footprint size of 50 cm, a maximum scan angle of 15° (off vertical), a mean swath
width of 500 m and a mean point density of 2 pulses m⁻². The first and last returns
for each laser pulse were recorded.
Laser scanning points were processed, geo-referenced and classified by the
service provider into ground and non-ground categories using TerraScan
software (TerraSolid, Finland). Processed lidar point data was supplied on an
external drive in LAS format with each file representing a 1 km X 1 km tile (GDA
94, MGA 55). The company also provided processed tiles of the Digital Terrain
Model (DTM) and Vegetation Elevation Model (VEM) generated at 1.0 m pixel
resolution. Additional 0.5 m resolution rasters were later generated
directly from the original LAS point data using ENVI 4.7 (ITT Visual Information
Solutions). The tiles were mosaiced and a Canopy Height Model (CHM) was
derived by subtracting the DTM from the VEM using ENVI.
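The CHM derivation just described (subtracting the DTM from the VEM) can be sketched with NumPy; the nodata handling and the clamping of small negative heights below are our assumptions, not steps stated in this paper:

```python
import numpy as np

def derive_chm(vem, dtm, nodata=-9999.0):
    """Canopy Height Model = vegetation elevation minus terrain elevation."""
    vem = np.asarray(vem, dtype=float)
    dtm = np.asarray(dtm, dtype=float)
    chm = vem - dtm
    invalid = (vem == nodata) | (dtm == nodata)
    # Clamp spurious negative heights (sensor/interpolation noise) to zero
    chm[~invalid] = np.clip(chm[~invalid], 0.0, None)
    chm[invalid] = nodata  # propagate nodata cells unchanged
    return chm

# 2 x 2 tile: flat ground, a 21.5 m canopy cell, a slightly-below-terrain
# cell that clamps to 0.0, and a nodata cell that is carried through
vem = np.array([[750.0, 772.5], [748.0, -9999.0]])
dtm = np.array([[750.0, 751.0], [749.0, 750.0]])
chm = derive_chm(vem, dtm)
```

Mosaicking the tiles and the actual raster I/O (done with ENVI in this study) are outside this sketch.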
The digital multispectral aerial photography (DMAP) was supplied by Fugro
Spatial Solutions Pty. Ltd., acquired using an airborne ADS40 Linear Scanner
sensor in late September 2009. The company provided 16-bit 4 band
(NIR.R.G.B.) orthophoto mosaics acquired at 0.3 m ground sample distance
and with a forward overlap and side-lap of 60%. The processed imagery was
then re-sampled from 30 cm to 50 cm pixel size. The 16-bit data was rescaled
to 8-bit using ERDAS IMAGINE 2010 (ERDAS Inc., Hexagon Group, Sweden),
as a number of eCognition® Developer algorithms are not optimized for 16-bit
data.
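The 16-bit to 8-bit conversion was performed in ERDAS IMAGINE; a common equivalent is a linear percentile stretch, sketched below (the 2-98% clip points are an illustrative assumption, not the settings used in this study):

```python
import numpy as np

def rescale_16_to_8bit(band, lower_pct=2.0, upper_pct=98.0):
    """Linearly stretch a 16-bit band to 8-bit, clipping tails at the given percentiles."""
    band = np.asarray(band, dtype=float)
    lo, hi = np.percentile(band, [lower_pct, upper_pct])
    # Scale [lo, hi] to [0, 255]; values outside the clip points saturate
    scaled = (band - lo) / max(hi - lo, 1e-9) * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)
```

Percentile clipping preserves contrast in the bulk of the histogram at the cost of saturating extreme radiometric values.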

Application of eCognition®
We developed rule-sets using Cognition Network Language (CNL) within the
eCognition® Developer (version 8.0.1) environment. The eCognition® suite was
chosen because of its ability to treat imagery both as a raster and vector
dataset, significantly increasing the analyst’s ability to produce clean end user
outputs. Two separate rulesets were developed for the lidar and ADS40
imagery and their classification accuracies compared. We also utilised the
automated tiling and stitching procedures available in eCognition® Server. This
software module is designed for the batch execution of thousands of image tiles
processed using eCognition® Developer.
The data workflow developed to extract net stocked areas from the lidar-derived
CHM is summarized in Figure 1. The CHM image was initially smoothed to
remove gaps within individual crowns using a median and Gaussian filter.
Thematic layers containing compartment level and resource unit level data,
were extracted from Forests NSW geo-database and loaded into the Developer
workspace. Over 380 rules were defined resulting in repeated segmentations;
image object fusions and looping processes (Figure 1). Ground and exclusion
areas were identified before locating thinned and unthinned stands. ‘Exclusion

zones’ consist of a range of vegetation types including pine wildlings, eucalypts,
blackberries and grasses. Thematic information was used in the process of
separating the P. radiata vegetation from non P. radiata vegetation. Because
the thematic line work was often inconsistent with many of the true boundaries
(visible in the lidar imagery), a series of looping algorithms was used
throughout the ruleset to progressively remove sliver errors and improve the
existing boundary line work. A threshold area of 0.25 ha was used for locating
unthinned areas that fell completely within a thinned compartment, areas that
may have been missed in the silvicultural treatment. Another section of the
routine was written to improve the convoluted object line work in order to more
closely resemble manual delineations. A final routine was included that
improved road connectivity in places where tree crowns merged across the
roads which in turn improved thematic line-work. After stitching the tiles back
together in eCognition® Server, the results were exported as a vector shape file
to ArcGIS for manual processing of any remaining errors that could not be fixed
in eCognition® Server.
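The initial gap-filling step in this workflow (a median filter followed by a Gaussian filter over the CHM) can be approximated with standard image filters; the kernel size and sigma below are illustrative assumptions rather than the ruleset's actual values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def smooth_chm(chm, median_size=3, gaussian_sigma=1.0):
    """Fill small within-crown gaps with a median filter, then soften edges with a Gaussian."""
    filled = median_filter(chm, size=median_size)
    return gaussian_filter(filled, sigma=gaussian_sigma)

# A single-pixel gap inside a uniform 20 m canopy is filled by the median pass
canopy = np.full((7, 7), 20.0)
canopy[3, 3] = 0.0
smoothed = smooth_chm(canopy)
```

The median pass removes isolated low pixels (gaps between shoots within a crown) without eroding stand edges the way a plain Gaussian would.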

The ADS40 image was initially split by age class categories (5 years or
younger; 6 – 23 years old; older than 23 years) and processed by three age
class category specific modules (Figure 2). Parameterisation of the ruleset was
based on a priori knowledge of average crown diameter per age class and stem
density per silvicultural treatment. The extensive array of spectral and textural
statistics available in eCognition® were examined through recursive partitioning
using classification trees (Breiman et al. 1983) in R (R Development Core Team, 2010). The separation of
P. radiata and native vegetation (mostly eucalypt species) was significantly
improved with inclusion of the Plant Pigment Ratio (Red band –Blue band)/(Red
band + Blue band) (Metternicht et al. 2000).
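The Plant Pigment Ratio named above is a simple normalized band ratio; a per-pixel sketch (the epsilon guard against zero denominators is our addition):

```python
import numpy as np

def plant_pigment_ratio(red, blue, eps=1e-9):
    """Plant Pigment Ratio: (Red - Blue) / (Red + Blue), computed per pixel."""
    red = np.asarray(red, dtype=float)
    blue = np.asarray(blue, dtype=float)
    return (red - blue) / np.maximum(red + blue, eps)

# e.g. a pixel with red = 120, blue = 40 gives (120 - 40) / 160 = 0.5
```

As a normalized difference, PPR is bounded in [-1, 1] for non-negative reflectances and is relatively insensitive to overall brightness.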

Pre-processing
- Create workspace & load lidar-derived 1 m CHM image
- Create tiles 2000 x 2000 pixels & submit scenes for analysis

Process routine 'NSA-Detection'
- Load thematic layers & apply smoothing filters
- Locate 'Ground' & small trees in 'Ground' using height thresholds
- Segment remaining non-Ground pixels using thematic layers
- Find 'Exclusion zones' (all non P. radiata) & clean up
- Segment on height to protect stands < 5 m tall & clean up
- Locate obvious 'Thinned' & 'UT' stands > 5 m tall & clean up; assign difficult stands to 'Post Active' for amendment in Post-Process
- Locate 'UT' stands > 0.25 ha in 'Thinned' stands & reclassify
- Smooth 'Ground' (rows), re-assign to relevant classes & connect 'Ground' (main roads) in 'Thinned' & 'UT' stands
- Segment to compartment level using thematic layers & clean up slivers

Process routine 'Post-Process'
- Stitch tiles, load thematic layers & apply smoothing filters
- Classify 'Post Active' to relevant classes using relationships to neighbours
- Merge each class, then segment to compartment level again & clean up slivers
- Export results as a shape file to ArcGIS for final manual clean up

Figure 1. Summary description of methodology for NSA classification of a Pinus radiata plantation
using eCognition® Developer & Server software with a lidar-derived CHM at 1 m resolution

Pre-processing
- Create workspace & load ADS40 60 cm image
- Create tiles 2000 x 2000 pixels & submit scenes for analysis

Process routine 'NSA-Detection'
- Segment image to age class level using thematic layers
- Remove non-vegetation (roads, shadow & water) using NDVI & brightness values
- From the remaining objects, extract P. radiata stands from all other vegetation types (other pine spp., eucalypts, blackberries, grasses) using PPR & R/G ratios
- Segment P. radiata stands to compartment level using the thematic layers & improve the compartment boundaries
- Assign P. radiata compartments to Thinned/UT stands by their number of internal objects
- Locate small UT stands that fall within Thinned stands
- Identify NSA polygons by their silvicultural treatment (Thinned/UT)

Process routine 'Post-Process'
- Stitch tiles to form a continuous surface representation of the plantation
- Export results as a shape file to ArcGIS with area & perimeter statistics
- Generalise NSA boundaries in ArcGIS to improve visual appearance for cartographic purposes

Figure 2. Summary description of methodology for NSA classification of a Pinus radiata plantation using
eCognition® Developer software with an ADS40 image (NIR, R, G, B bands) at 60 cm resolution

7
Accuracy analysis
We examined classification accuracy through the derivation of a point-based
error matrix. We acknowledge, however, that when dealing with spatial objects,
their geometrical accuracy needs to be assessed as well, i.e. location and
semantic agreement together with how an object was delineated (e.g.
Tiede et al. 2006); this will be done later. One thousand reference points
were visually identified in the original imagery and compared with the classified
images. We calculated producer and user accuracy statistics and the Kappa
coefficient of agreement of both the classified lidar and ADS40 images.
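The accuracy statistics just listed follow mechanically from the error matrix (rows as the eCognition classification, columns as the reference, matching Tables 1 and 2); a minimal sketch:

```python
import numpy as np

def accuracy_stats(matrix):
    """Producer's/user's accuracies, overall accuracy and Kappa from a square
    error matrix (rows = classified, columns = reference)."""
    m = np.asarray(matrix, dtype=float)
    total = m.sum()
    diag = np.diag(m)
    users = diag / m.sum(axis=1)       # correct / all points classified into that class
    producers = diag / m.sum(axis=0)   # correct / all reference points of that class
    overall = diag.sum() / total
    # Chance agreement from the row and column marginals
    expected = (m.sum(axis=1) * m.sum(axis=0)).sum() / total ** 2
    kappa = (overall - expected) / (1.0 - expected)
    return producers, users, overall, kappa
```

Applied to the counts in Table 1, this reproduces the reported 96.2% overall accuracy and 94.9% Kappa coefficient.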

Results
The final classification derived from the lidar CHM identifies five classes:
thinned, unthinned < 5.0 m in height, unthinned > 5.0 m in height, ground and
exclusion zone, with the first three classes representing P. radiata NSA (Figure 3). The
error matrix for the classified scene selected in the CHM lidar data prior to
manual editing (Figure 3) produced an overall accuracy of 96.2%, supported by
a Kappa coefficient of 94.9% (Table 1). The commission and omission errors
for classifying the three stand categories of P. radiata for net stocked areas
were minimal (< 5.0%). The greatest number of misclassified points was
associated with the exclusion zones, areas of vegetation, mostly native species,
outside the compartment net stocked areas (producer’s accuracy = 84.5% and
user’s accuracy = 85.5%). We did not include the near-infrared intensity values
in the classification scheme because of issues with calibration (e.g. Donoghue
et al. 2007). The other source of confusion was associated with separating
ground visible from above in the thinned stands with ground outside the net
stocked areas (Table 1).

Table 1. Error matrix of high order plantation classification derived from the 1.0 m lidar
Canopy Height Model of a 2.5 x 2.5 km scene over the Green Hills Pinus radiata
plantation

                                       Reference (visual classification)
eCognition                Unthinned   Unthinned   Thinned    Exclusion            User's
classification            < 5.0 m Ht  > 5.0 m Ht  (T1 & T2)  zone       Ground    accuracy (%)
Unthinned < 5.0 m Ht         225          0           0          0         0        100
Unthinned > 5.0 m Ht           0        234           9          1         0         95.9
Thinned (T1 & T2)              0          1         349          2         2         98.6
Exclusion zone                 4          2           6         71         0         85.5
Ground                         1          0           0         10        83         88.3
Producer's accuracy (%)       97.8       98.7        95.9       84.5      97.6

Overall accuracy: 96.2%    Kappa coefficient: 94.9%

The overall accuracy achieved using the ADS40 imagery was 97.9% with a
Kappa Coefficient of Agreement of 96.1% (Table 2). Both the user’s and
producer’s accuracies for correctly classifying thinned and unthinned stands
exceeded 95% (Table 2). Separation of the exclusion zone areas (non P.
radiata vegetation) was also successful (user’s accuracy = 95.5% and
producer’s accuracy = 98.8%). This is a better result than that obtained using
the lidar data. The multispectral imagery, however, did not achieve the same
level of accuracy as the lidar data in detecting bare ground (Tables 1 and 2).
The presence of shadows in the ADS40 image contributed to these errors.

Table 2. Error matrix of high order plantation classification derived from 4 band digital
multispectral aerial photography (ADS40, 60 cm) covering a 1.8 x 1.9 km scene over
the Green Hills Pinus radiata plantation

                                 Reference (visual classification)
eCognition                Unthinned   Thinned    Exclusion            User's
classification            stands      (T1 & T2)  zone       Ground    accuracy (%)
Unthinned stands             125          0          0          0       100
Thinned (T1 & T2)              0        633          2          0        99.7
Exclusion zone                 0          4        169          4        95.5
Ground                         4          7          0         52        82.5
Producer's accuracy (%)       96.9       98.3       98.8       92.8

Overall accuracy: 97.9%    Kappa coefficient: 96.1%

Figure 3. The classified lidar CHM scene used for accuracy assessment after stand
classification in eCognition® Developer. Purple = thinned stands; pink = unthinned
stands > 5.0 m Ht; green = unthinned stands < 5 m; brown = ground; pale blue =
exclusion zones.

Figure 4. ADS40 scene used for accuracy assessment after stand
classification in eCognition® Developer

Discussion

This study has demonstrated the use of the eCognition® Developer software to
delineate and correctly identify silvicultural stands within a P. radiata plantation
for the extraction of net stocked areas and thinning status. Both the lidar and
ADS40 imagery were successfully classified to produce stand level features as
a file geodatabase that require little manual editing before being used as an
input to the Forests NSW geo-database. While the process does require some
supervision, our results indicate that this OBIA approach will improve both
accuracies and efficiencies relative to current inventory methodology.
We anticipate gaining improved classification accuracies through combining
lidar CHM data with digital camera images. This has been demonstrated in
several past studies (e.g. Leckie et al. 2003, McCombs et al, 2003, Holmgren et
al. 2008). Although the synergistic use of lidar data and high spatial resolution
multispectral imagery has been demonstrated for object-based forest
classification elsewhere (e.g. Ke et al. 2010), access to coincident imagery may
not always be possible or affordable and hence we needed to determine what
could be achieved using just lidar data or the ADS40 multispectral imagery.
An issue arising from this study relates to reconciling the line-work associated
with vector-based features currently in FNSW’s geo-database (e.g. API derived
polygon line work) with the often more accurate object boundaries defined
through the OBIA segmentation process. The updating or correction of
compartment boundaries and thinning status at the compartment and sub
compartment (resource unit) level was achieved through the application of
these rule sets.
Our overall aim is to develop a series of integrated rulesets that perform multi-
scale segmentation matching the spatial resolution of GIS features accessed by
plantation foresters. Tiede et al. (2006) also advocated the need to establish a
script library for scale specific target features. While not presented here, we
have significantly progressed the automated delineation of tree crowns using
OBIA techniques at the individual tree scale. In addition to delineating
individual tree crowns, we will further classify the thinned stands into first and
second thinnings using a stem density function derived from the tree delineation
process.
After final classification, image information at both the stand and tree crown
levels will be extracted and modelled for a range of inventory parameters
including mean stand height, volume and biomass. These statistics can also be
used for optimising the design of plots required for stem grade assessment.
Finally we will collate data capture standards for lidar and ADS40 imagery of
plantations to improve accuracies and ruleset transferability.

Acknowledgements
The authors wish to thank Duncan Watt (Planning Manager, Hume Region,
Forests NSW) for his advice and Amrit Kathuria (Industry & Investment NSW)
for biometrical assistance. The results presented here are part of a project
partially funded by Forest & Wood Products Australia Ltd.

References

Breiman, L, Friedman, J.H., Olshen, R.A. and Stone, C.J. 1983, Classification
and Regression Trees. Wadsworth. Belmont, California.
Donoghue, D.N.M., Watt, P.J., Cox, N.J. and Wilson, J. 2007, Remote sensing
of species mixtures in conifer plantations using LiDAR height and
intensity data. Remote Sensing of Environment 110, pp. 509-522.
Hay, G.J., Castilla, G., Wulder, M.A. and Ruiz, J.R. 2005, An automated object-
based approach for the multiscale image segmentation of forest scenes.
International Journal of Applied Earth Observation and Geoinformation 7,
pp. 339-359.
Haywood, A. and Stone, C., 2010, Updating forest stand information: Part A:
Semi-automated stand delineation. Australian Forestry, in press.
Holmgren, J., Persson, A. and Söderman, U., 2008, Species identification of
individual trees by combining high resolution LiDAR data with multi-
spectral images. International Journal of Remote Sensing 29, pp. 1537-
1552.
Johansen, K., Arroyo, L.A., Armston, J., Phinn, S. and Witte, C., 2010, Mapping
riparian condition indicators in a sub-tropical savanna environment from
discrete return LiDAR data using object-based image analysis.
Ecological Indicators 10, pp. 796-807.
Ke, Y., Quackenbush, L.J. and Im, J., 2010, Synergistic use of QuickBird
multispectral imagery and LIDAR data for object-based forest species
classification. Remote Sensing of Environment 114, pp. 1141-1154.
Kim, M., Madden, M. and Warner, T.A., 2009, Forest type mapping using
object-specific texture measures from multispectral Ikonos imagery:
Segmentation quality and image classification issues. Photogrammetric
Engineering & Remote Sensing 75, pp. 819-829.
Leckie, D.G., Gougeon, F.A., Walsworth, N. and Paradine, D., 2003, Stand
delineation and composition estimation using semi-automated individual
tree crown analysis. Remote Sensing of Environment 85, pp. 355-369.
McCombs, J.W., Roberts, S.D. and Evans, D.L. 2003, Influence of fusing lidar
and multispectral imagery on remotely sensed estimates of stand density
and mean tree height in a managed loblolly pine plantation. Forest
Science 49, pp. 457-466.

Metternicht, G., Honey, F., Beeston, G. and Gonzalez, S., 2000, Potential of
high resolution airborne videography for rapid assessment and
monitoring of vegetation conditions in agricultural landscapes.
International Archives of Photogrammetry and Remote Sensing XXXIII,
Part B7, pp.868-875.
Pascual, C., García-Abril, A., García-Montero, L.G., Martín-Fernández, S. and
Cohen, W.B., 2008, Object-based semi-automatic approach for forest
structure characterization using lidar data in heterogeneous Pinus
sylvestris stands. Forest Ecology and Management 255, pp. 3677-3685.
R Development Core Team, 2010, R: A language and environment for statistical
computing. R Foundation for Statistical Computing, Vienna, Austria.
ISBN 3-900051-07-0, URL http://www.R-project.org.
Smith, G.M. and Morton, R.D., 2010, Real world objects in GEOBIA through the
exploitation of existing digital cartography and image segmentation.
Photogrammetric Engineering & Remote Sensing 76, pp. 163-171.
Tiede, D., Lang, S. and Hoffmann, C., 2006, Supervised and forest type-specific
multi-scale segmentation for a one-level-representation of single trees.
In: International Archives of Photogrammetry, Remote Sensing and
Spatial Information Science, Vol. XXXVI-4/C42, Salzburg, Austria.
Wulder, M.A., White, J.C., Hay, G.J. and Castilla, G., 2008, Towards automated
segmentation of forest inventory polygons on high spatial resolution
satellite imagery. The Forestry Chronicle 84, pp. 221-224.
