
VALIDATION OF SAMPLING AND ASSAYING QUALITY
FOR BANKABLE FEASIBILITY STUDIES


William J. Shaw
Principal Consultant
Golder Associates
Level 3 Kirin Centre, 15 Ogilvie Rd
MT PLEASANT WA 6153
Phone +619 3162710 Fax +619 3161791 Email wshaw@golder.com.au

(published 1997 in The Resource Database Towards 2000, Wollongong, 16 May, AusIMM, Melbourne, pp. 41-49.)

ABSTRACT

Capital investment to develop mining projects has always involved significant risk. To reduce that risk, the quality of bankable feasibility studies has been steadily improving. An awareness of the impact of the data on an ore reserve statement is explicit in the Joint Ore Reserves Committee Code, which requires qualitative assessment of the data used. The standards by which data are being assessed must continue to improve, necessitating the development of quantitative measures for comparison of data quality that are valid between different sampling regimes and across different deposits and projects.

After a brief review of the relevant parts of sampling theory, the practices involved in sampling, subsampling, sample comminution, and various assaying methods are discussed. The impact of these on both the precision and accuracy of data is examined, and the strengths and weaknesses of quality assurance schemes in addressing this impact are noted. Practical methods to assess and compare the precision of assay data sets are presented and their implementation is outlined using a case study. The use of sensitivity analysis to evaluate the impact of data quality on bankable feasibility studies is examined.

Caution is advised in using sampling data on a statistical basis without cognisance of the spatial relationships. Data should be derived from consistent geological domains, with orientation studies carried out on the material that is likely to be the most difficult to sample.

INTRODUCTION

Capital investment to develop mining projects has always involved significant risk. The process of managing this risk is becoming standardised within and between mining companies, partly by the uniformity brought about by joint ventures between the various participants. The major influence, however, has been the involvement of the providers of debt finance in assessing the technical risk before any final commitment to mining.

Assessment of a deposit moves through a number of stages:
• exploration
• discovery of mineralisation
• conceptual study
• development drilling for resource definition
• pre-feasibility study
• bankable feasibility study
• construction and preparation for mining
• grade control
• mining production

The three studies highlighted in this list become more rigorous as the level of capital exposure increases. Conceptual studies may be "back of the envelope" calculations or more formal studies, but the paucity of available data limits the validity of the results. In many cases the estimation of tonnes and grade is done using simplistic methods, which may be subject to significant error. Accordingly these conceptual studies are only a guide to further work; they may be used to define limits for development drilling and to guide preliminary mining and metallurgical investigations.
Pre-feasibility studies are a test of the project viability for management. Mineral Resource and Ore Reserve estimations are carried out to define the likely limits of mineable ore and the impact of various styles (and scales) of mining and processing. These studies should highlight areas of concern, including deficiencies in the amount and quality of the input data. On completion of the pre-feasibility study, management should have a clear picture of the work that is still required to establish the viability of the project. Unfortunately, this is often not the case. Pre-feasibility studies tend to focus on the likely benefits of developing the project, rather than on the deficiencies in information, which if addressed would reduce the risk. So pre-feasibility studies are often seen as a practice run at the feasibility study, without much of the rigour. The difficulties in this approach are immediately apparent: omissions in the pre-feasibility study will not be recognised if that study is not adequately reviewed. Moving to the feasibility study without addressing these deficiencies means that there is often little improvement in the amount and quality of the underlying data on which the final decision to proceed depends.

Bankable feasibility studies are reviewed by the providers of funding. They are formal documents that:
• demonstrate that the project is economically viable,
• provide a detailed plan of the proposed mining and processing, with costs, and
• define the quality and limitations of the data and assumptions used.

Invariably the bankable feasibility study is reviewed by independent consultants on behalf of the lenders. The extent of this review may vary, but for large projects a detailed audit should be expected. To reduce the exposure of the lenders and the overall project risk, the quality of bankable feasibility studies has been steadily improving.

An awareness of the impact of the data on an ore reserve statement is explicit in the Joint Ore Reserves Committee Code, which requires qualitative assessment of the data used. The Australian Stock Exchange (ASX) requires that public reporting by mining companies of all Mineral Resources and Ore Reserves must be in accordance with the current Joint Ore Reserves Committee Code (JORC, 1996). It is likely that the framework of this code will be adopted internationally. It is also likely that reporting requirements will become more stringent rather than being relaxed. A sound understanding of the requirements of an ore reserve statement under the JORC (1996) Code is necessary (but of course not sufficient) to avoid the pitfalls that face many development projects.

Bankable feasibility studies are becoming increasingly rigorous as the technical awareness of the lenders grows. Banks employ not only financial analysts, but also technical experts in geology, mining and metallurgy. As awareness of the impact of technical problems grows through experience at various operations, the bankers naturally seek to ensure that such problems do not arise again.

The sampling and assay data for an orebody are the foundation on which the rest of the feasibility study is built. Mathematical sophistry cannot replace good quality data. The standards by which data are being assessed must continue to improve, necessitating the development of quantitative measures for comparison of data quality that are valid between different sampling regimes and across different deposits and projects. This paper examines specific techniques used in recent bankable feasibility studies to define the quality and limitations of the sampling and assaying data in a quantitative and objective manner. These methods may be used to augment the subjective descriptive assessments of data quality required by the JORC Code, which are not examined further in this paper.

The issue of representative sampling, geological setting and spatial relationships is frequently ignored in sampling studies. The assumption that all samples are equally representative of the orebody may not be valid. Statistical analysis of geological data is notorious for ignoring the spatial component. Thus any sampling study data being examined should be derived from consistent geological domains, and care should be taken in applying the results to other domains. The preferred approach is to carry out an orientation study to determine which material is likely to be the most difficult to sample. More detailed work on this material will provide a suitable sampling protocol that can be safely used for the whole deposit.

SAMPLING

Accuracy And Precision
The difference between the estimated value and the true value for any determination (or prediction) is termed the error. We are interested in two aspects of this error. Consistent components of this error are termed bias and reflect the accuracy of the determination method. Random components of this error are termed precision and reflect the repeatability of the method. The classic analogy proposed to understand the difference between these two components of error is the game of archery. If the arrows are tightly clustered then they show high precision. If the arrows fall equally around the bull's-eye they may on average show high accuracy. But the objective is to have both high accuracy and high precision. It is not appropriate to discuss the average accuracy without qualifying it by discussing the repeatability of the individual results.

The differences between bias and precision are reflected in the way these aspects of the total error are presented and used. Bias is frequently discussed in terms of the differences in the central tendencies (e.g. mean, median, etc.) of two sets of data, or between a set of data and the true value.
Precision is frequently discussed in terms of the variability of sets of data by comparing the distribution of the differences. Common measures for this are the second order moments, such as the standard deviation and variance, or their standardised equivalents, the coefficient of variation and the relative variance[1].

[1] The relative variance is the square of the coefficient of variation and is similarly dimensionless.

Sampling Theory
The principles of sampling theory developed by Gy (1979), as expounded by Pitard (1993), are accepted by many practitioners in the mining industry. Explanations of practical applications of the theory are available in Radford (1987), Cooper (1988), Keogh and Hunt (1990), Taylor (1993) and Shaw (1993), amongst others. Recently a number of improvements on the basic theory have been provided by François-Bongarçon (1995), bringing theoretical and experimental results much closer together.

The Fundamental Sampling Error (FE) of Gy is a formal description of the observed relationship between the particle size of a component of interest (e.g. gold) within a sample and the nominal particle size[2] of that sample. The FE is measured as a relative variance. In sampling theory the relative variances of the various stages of a sampling protocol are considered to be additive (e.g. Pitard, 1993, p. 31). For example, the total experimental error may be regarded as the sum of the sampling error, plus sample preparation error, plus assaying error, if all these errors are determined as relative variances. Thus at any stage of the sampling and assaying process, the differential relative variance due solely to that part of the process can be determined using suitably designed experiments.

[2] The nominal particle size is defined by convention as the 95% passing sieve size and is a summary statistic taken to characterise the distribution of particle sizes.

In summary and in its simplest form, the Fundamental Sampling Error is represented by the following equation:

    s²FE = k d^α / mass

This relationship may also frequently be seen as:

    s²FE = (1/MS − 1/ML) k d^α

Where:
    s²FE is the Fundamental Sampling Error expressed as a relative variance
    k is a sampling constant
    d is the nominal particle size (95% passing) in cm
    MS is the mass of the sample in grams
    ML is the mass of the lot in grams
    α is an exponent characterising the deposit of interest

A parametric approach to defining the sampling constant k requires definition of further factors:

    k = c f g dL^(3−α)

Where:
    c is the composition factor
    f is the shape factor of particles or fragments
    g is the size distribution (grouping) factor
    dL is a function of the sample particle size and the liberation size

Solving for the parameters α and k enables a sampling nomogram to be defined for the deposit of interest. The nomogram enables prediction of the total sampling error that would be obtained using alternative sampling protocols. (Examples of sampling nomograms are discussed as part of a case study presented later in this paper.)

The use of the parametric approach as originally proposed by Gy led to a wide divergence between predicted results and experience, causing significant harm to the acceptance of sampling theory by the gold mining industry. By contrast, the use of empirical data to define the critical parameters α and k proposed by François-Bongarçon provides a quantitative basis for the comparison of different deposits. (Nevertheless it must be noted that there are still practical limitations imposed on experimental work by the laboratory precision of the final assaying stage.)
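To make the use of these equations concrete, the following minimal Python sketch predicts the Fundamental Sampling Error for each splitting stage of a hypothetical protocol and sums the stage variances. The values of k and α are illustrative assumptions only; in practice they would be fitted from experimental work for the deposit of interest.

    # Sketch: predicting the Fundamental Sampling Error of a protocol.
    # k and alpha are assumed, illustrative values; for a real deposit
    # they must be fitted from experimental (sampling tree) data.

    def fse_relative_variance(k, alpha, d_cm, mass_sample_g, mass_lot_g):
        """s2FE = (1/MS - 1/ML) * k * d^alpha, as a relative variance."""
        return (1.0 / mass_sample_g - 1.0 / mass_lot_g) * k * d_cm ** alpha

    k, alpha = 50.0, 1.5            # assumed deposit constants
    protocol = [                    # (particle size cm, sample g, lot g)
        (0.5,    2500.0, 5000.0),   # jaw crush to 5 mm, split to 2.5 kg
        (0.1,    1000.0, 2500.0),   # mill to 1 mm, split to 1.0 kg
        (0.0075,  200.0, 1000.0),   # pulverise to 75 um, cut 200 g
        (0.0075,   50.0,  200.0),   # 50 g fire assay charge
    ]

    total = sum(fse_relative_variance(k, alpha, *stage) for stage in protocol)
    print(f"total relative variance: {total:.4f}")
    print(f"precision at 95 percent confidence: {1.96 * total ** 0.5 * 100:.1f} %")

Plotting the cumulative variance of each stage against mass on log-log axes reproduces the sawtooth path of the sampling nomograms discussed later in this paper.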
Sampling theory also refers to non-quantifiable errors that arise due to mistakes in sampling, sample preparation and assaying. Problems in data recording and management and in grade interpolation cause additional errors. All these errors are best minimised by maintaining good quality control practices (through vigilance and audits) to reduce their potential impact on profitability.

Monitoring Of Sampling And Assaying Quality
The quality of assays can be monitored by submitting standards, blanks, duplicates and repeats of drill samples and previously prepared pulps.

Standard samples of known (usually certified) grade are submitted to monitor the accuracy of a laboratory, i.e. the ability of the laboratory to get the correct or known result. A laboratory showing a systematic difference from the expected result is said to exhibit a bias. Standard samples ensure that the laboratory quality control procedures are effective and that significant bias is not evident within or between assay batches.
Blank or barren samples with an expected very low grade are submitted to ensure that there is no contamination between samples during the sample preparation or assaying. If the blank samples following high grade samples have elevated grades, then there have been problems.

Pairs of duplicate samples prepared and assayed in an identical manner provide a measure of the total error of sampling. When this error is derived in relative terms, the total error is the sum of the errors due to splitting the initial duplicate, preparing the sample and assaying the sample.

Field duplicate samples or resplits are generally collected for sample preparation and assay by splitting the reverse circulation (RC) drill chips, or by submitting the second half of diamond drill core.

Crushing and pulverising reduces the particle size of drill core and RC chips to a nominal size (e.g. 90% passing 75 µm) and then a small subsample (say 200 g) of this pulp is retained for assay in a pulp packet. Residue samples may be collected at all stages of the sampling protocol.

Repeat samples are pulps that have been previously prepared and assayed and that have then been re-submitted for another identical analysis. Comparison of the results provides a measure of the precision of a laboratory, i.e. the ability of the laboratory to get the same assay result under the same conditions.

Pairs of samples assayed at different laboratories may help to define the inter-laboratory precision and may also identify a bias between the two laboratories.

Experimental Definition Of Sampling Repeatability
In describing any form of sampling, from diamond drilling, reverse circulation drilling, trenching or any other source, it is important to conform to the following conventional terminology of Gy (after Pitard, 1993):

Lot - the total collection of material for which it is required to estimate some component.
Component - the property that we require to estimate using the sample, e.g. grade, density, thickness, etc.
Sample - part of a lot on which a determination of a component will be carried out, where each part of the lot had an equal probability of being selected into the sample.
Specimen - part of a lot on which a determination of a component will be carried out, but for which the rule of equal probability of selection has not been respected.
Increment - part or whole of a sample, selected from the lot with a single cut of the sampling device. A number of increments may be combined to form one sample.

The objective of representative sampling is to obtain samples rather than specimens. Determination of the grade within a drilled interval is an example of the use of sampling "to estimate the component of the lot". It is a difficult enough process to ensure that the sampling is correctly carried out when the lot is regarded as all of the material from within a single drilled interval. Once this problem is appreciated, the greater difficulty in obtaining representative samples of the deposit becomes clear.

The process of sampling a drilled interval can be defined by developing and testing a sampling protocol. Such a protocol can be characterised by description of the nominal particle size and mass at each stage of subsampling. The following example not only illustrates the minimum information required but can also be regarded as a minimum safe initial sampling protocol, until experimental work is carried out as an early part of resource definition sampling:

"Each 5 kg sample was dried and reduced to less than 6 mm in a jaw crusher. The whole sample was then cone crushed to 2 mm. The crushed sample was riffle split in half, to approximately 2.5 kg, then pulverised to 90% passing 75 microns in a Labtechnics LM5 mill using a single large puck. The entire pulp was roll-mixed on a plastic sheet and four 100 g cuts were taken with a small trowel, to provide 400 g for storage in a kraft envelope. The residue was rebagged and stored for six months. From the 400 g subsample 50 g was weighed out for fire assay with an aqua regia digest finish."

Particle sizing tests and experimental repeatability sampling should be carried out at each stage of comminution, i.e. in the above example after the jaw crusher, cone crusher and pulveriser. The use of 100 pairs of samples at each stage constitutes one example of a sampling tree experiment, by means of which the total relative variance and consequently the differential relative variance can be defined for each stage of the sampling protocol. This procedure should be used to optimise the sample preparation protocol, to cost-effectively minimise the total random error of the sample assays.
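To make the arithmetic of a sampling tree experiment concrete, the sketch below estimates the total relative variance of duplicate pairs split at each comminution stage and then differences the totals of successive stages. The pair values are hypothetical, and the pairwise estimator shown is one common choice rather than a method prescribed in this paper.

    # Sketch: differential relative variances from a sampling tree
    # experiment. The duplicate pairs are hypothetical (a real
    # experiment would use of the order of 100 pairs per stage).

    def total_relative_variance(pairs):
        """Pairwise estimate: mean of (a - b)^2 / (2 m^2), m = pair mean."""
        terms = [(a - b) ** 2 / (2.0 * ((a + b) / 2.0) ** 2) for a, b in pairs]
        return sum(terms) / len(terms)

    stages = {  # duplicate assay pairs (g/t Au) split after each stage
        "jaw crusher (5 mm)": [(1.8, 2.9), (3.1, 2.0), (1.2, 2.2)],
        "mill (1 mm)":        [(2.1, 2.6), (2.8, 2.3), (1.6, 2.0)],
        "pulveriser (75 um)": [(2.3, 2.4), (2.5, 2.4), (1.8, 1.9)],
    }

    totals = {name: total_relative_variance(p) for name, p in stages.items()}
    previous = 0.0
    # Relative variances are additive, so the error introduced by one
    # stage alone is the difference between successive totals, working
    # upwards from the finest (pulveriser) stage.
    for name in reversed(list(stages)):
        print(f"{name}: total {totals[name]:.4f}, "
              f"differential {totals[name] - previous:.4f}")
        previous = totals[name]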
Assaying Principles And Quality Assurance
The assaying[3] precision varies with the grade being determined. The precision deteriorates as the limit of detection (LOD) is approached. The use of repeat assays on pulps provides a check on the quality of the assaying being carried out.

[3] In this paper the term assay is used in preference to analysis, the latter term being commonly used for the analysis of the assay results. In this context assays should be considered to include all chemical determinations, including those for components like Total Fe, SiO2, Al2O3, acid extractable Ni, organic C, as well as Zn, Cu, Ag, etc., and of course Au.

From personal experience, an audit of data being used for a feasibility study should include examination of the sample preparation and assaying procedures. Obviously such an audit should be carried out before the resource estimation; due diligence audits that reveal deficiencies in the data can be embarrassing, expensive and raise doubts about the viability of the project.

There is often little real difference in the accuracy or precision that the various commercial laboratories can produce. Real differences do exist in the quality of the results provided, however, and they may be explained by two aphorisms: "you get what you pay for" and "let the buyer beware". Commercial laboratories produce results for which the quality is only constrained by the time and procedures used, i.e. by the cost that the client will pay.

Certification of a laboratory by the National Association of Testing Authorities (NATA) provides some surety that the written procedures adopted by the laboratory are followed, and that an audit trail is established for all work carried out. However, as with all quality assurance schemes, such certification provides no guarantee that the documented procedures are appropriate, just that these procedures are followed.

While quality control systems are now uniformly adopted, these are designed to meet the production needs of the laboratory, rather than the specific needs of the client. Accordingly they can suffer from two specific drawbacks. First, the laboratory carries out repeat assays on the same pulp using the same method, but it knows the final result. Where the original and the repeat assays do not agree sufficiently, the results may be abandoned and the work repeated. This is appropriate from the point of view of the laboratory, but may mask the true variability of the results being produced. Secondly, from a theoretical point of view, all repeat assays are open to question if they are not blind and randomised. All the pulps should be submitted as blind randomised samples so that they cannot be matched by the laboratory and to remove the effect of any bias due to procedural or instrument errors.

Definition of the assaying accuracy is frequently addressed by assaying standard samples. Such standards are obtained from commercial sources and themselves suffer from a number of problems:
• some are very expensive, and differences in costs may reflect differences in quality,
• the expected accuracy and repeatability characteristics may differ for different assaying techniques (these should be defined in the documentation provided with the standard),
• they have different mineral compositions and particle distributions to the samples of interest (these define the matrix of a sample), and
• they are readily identified by the laboratory.

Experimental Definition Of Sampling Accuracy
After the sample preparation and assaying of a batch of samples has been completed, the kraft envelopes of pulverised material (pulps) should be retrieved from the laboratory. The available assay results should be used to select batches of 100 samples with grades at or above the level of interest, i.e. from just below the economic cut-off grade up to values that are likely to significantly affect grade estimates (for example the high grade cutting value). These batches of samples should be randomised and renumbered. They can then be submitted to the same laboratory for determination by a different assay method, or to a second or third laboratory.

Assessment of the comparative results for these pairs of pulps allows the average differences in grade to be quantified for various grade ranges. Inclusion of standard and blank samples enables the accuracy to be monitored at the same time.
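A minimal sketch of the randomisation and renumbering step is given below. The blind numbering scheme and the sample identifiers are hypothetical; the essential point is that the key matching blind numbers back to original numbers is retained in the project database and withheld from the laboratory.

    # Sketch: blind randomisation and renumbering of a check-assay
    # batch. Sample IDs and the "CHK" prefix are hypothetical.
    import random

    def randomise_batch(sample_ids, prefix="CHK", seed=None):
        """Shuffle the pulps and issue new blind numbers; return the
        blind IDs for submission and the confidential matching key."""
        rng = random.Random(seed)
        shuffled = list(sample_ids)
        rng.shuffle(shuffled)
        key = {f"{prefix}{i + 1:04d}": original
               for i, original in enumerate(shuffled)}
        return list(key), key

    blind_ids, key = randomise_batch(["DD001-23", "DD001-27", "RC014-05"])
    print(blind_ids)  # submit these to the laboratory
    # 'key' stays in-house for later matching of the check results.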
Common Problems With Sampling And Assaying Studies
Many of the accuracy and repeatability studies seen to date suffer from design problems, either because of a lack of clear understanding of the principles discussed above, or because the experimental work is trying to achieve too much. Common failings are:
• reliance solely on the quality control procedures of the laboratory, with no independent validation (consequently standards used to check instrument calibration are claimed to demonstrate accuracy, and repeats used to monitor and eliminate between-batch variability are used to demonstrate precision)
• the systematic re-submission of a proportion of samples (e.g. 10%) so that most of the repeats are of irrelevant very low grades
• re-submission of samples using the same sample numbers, or in the same order, so that the reason for systematic differences (bias) is not clearly defined
• confusion between the comparison of duplicate field samples and replicate assays on pulps, so that it is unclear which errors are due to assaying, which are due to sample preparation and which are due to the initial sample splitting
• submission of all the duplicates and repeats at the end of the resource drilling, when it is too late to change the protocols if significant precision or bias errors are identified
• submission of duplicates and repeats without monitoring (and acting on) the results.
ANALYSIS OF RESULTS

Tests For Accuracy
The most poorly utilised tool for interpreting sampling data is the scatterplot. Plots with different scales on the abscissa and ordinate, and plots showing regression lines between the paired data, seem designed to confuse. Rather, square format plots with equal scales should be produced, with a 45° line defining the expected perfect correlation. Systematic differences between pairs of assays (bias) are then clearly seen as deviations of the trend of paired data from the 45° line.

Accuracy is frequently defined as the difference between the means of the paired data. This is a valid measure, but the sensitivity of this statistic to outliers must be tested by trimming the data and by examining subsets of the data[4]. A non-linear bias may be identified between two assay data sets (called here the "standard" and the "check" assay methods). For example, the "standard" method may overestimate gold grades below 4 g/t Au and underestimate them above that value. At a mining cut-off grade of 1 g/t Au this creates a problem. Use of the "standard" method results in overestimation of tonnages above the cut-off grade, but underestimation of their grade. Use of the "check" method would produce the opposite result.

[4] While mathematical treatments of the impact of outliers on statistical methods are available (e.g. Barnett and Lewis, 1994), the impact of geological and spatial characteristics of the samples cannot be ignored. Trimming, cutting and Winsorising may be effective techniques for obtaining repeatable results, but their limitations and impact on the results must be understood.

Tests For Precision
Precision can be determined by carrying out a large number of assays on a single homogeneous sample, to define the distribution of errors about the mean result. The residuals can be found by subtracting the mean from each assay result and examined to ensure that they are distributed normally. The standard deviation of the residuals defines the precision. Thus, for example, for a confidence level of 95 percent, the precision would be the standard deviation times 1.96. This can then be used to determine the precision as the absolute error of the assays (in the case of a gold deposit as ± g/t Au).

In dealing with real gold-bearing samples it is necessary to recognise that each sample is inherently different. It would not be appropriate to compare the means of 40 different samples, it would be too expensive to carry out a large number of replicates (repeat assays) on many different samples, and generally there is insufficient homogeneous material available to carry out a large number of replicates.

The methodology adopted by Pitard (1993) for determining precision, using the Fundamental Sampling Error as the relative variance of the sample data set, is extremely sensitive to single outlier values. Similar difficulties occur with other univariate statistics (e.g. the standard deviation and the geometric mean) and with bivariate statistics (e.g. the covariance, the correlation coefficient and regression). Although it may be argued that it is the outlier values that are of interest, problems with the statistical analysis can make the results of the sampling experiment apparently unrepeatable.

To enable comparison between various assay methods in gold deposits, a method of using paired samples is required, with the results standardised to enable statistical comparison. Techniques for quantifying assaying precision have been discussed by Thompson and Howarth (1973) and Bumstead (1984). These have been adopted by the author to define a robust estimate of error termed the Half Absolute Relative Difference (HARD) for paired data. This is produced by dividing half the absolute difference between the two values by the mean of the two values. A similar measure termed the Half Relative Difference (HRD) may be used where the sign of the differences is significant.

The Half Absolute Relative Difference (HARD) is calculated as follows (note that the factors of two cancel out):

    HARD = (1/2) × ABS(Assay1 − Assay2) / ((Assay1 + Assay2) / 2) × 100

The HARD and HRD values may be used to produce robust measures of the relative bias and the relative variance of the Fundamental Sampling Error for the purpose of defining sampling protocols[5].

[5] The calculation of the HARD value provides similar benefits to the use of the pairwise relative variogram for assessing grade variability (e.g. Isaacs and Srivastava, 1989), which serves to reduce the impact of large values in strongly skewed distributions.

Three examples follow:
• If the original assay was 0.33 g/t Au and the repeat assay was 0.99 g/t Au, the average of the two grades would be 0.66 g/t Au, and the average of the two residuals would be 0.33 g/t Au. Thus the relative error as measured by the HARD would be 50%.
• The relative error between an original assay of 1 g/t Au and a repeat of 2 g/t Au is 33.3% (the same as if the original assay was 100 g/t Au and the repeat was 200 g/t Au).
• The relative error between any original assay and a repeat assay approaching 0 g/t Au approaches 100% (note that assays of zero should never appear in the database).
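A direct transcription of this definition into Python is given below as a minimal sketch; it reproduces the worked examples above, and the signed HRD variant simply retains the sign of the difference.

    # Sketch: Half Absolute Relative Difference (HARD) for paired
    # assays, as defined above (the factors of two cancel out).

    def hard(assay1, assay2):
        """HARD as a percentage of the pair mean."""
        return abs(assay1 - assay2) / (assay1 + assay2) * 100.0

    def hrd(assay1, assay2):
        """Signed variant (HRD), e.g. original minus repeat."""
        return (assay1 - assay2) / (assay1 + assay2) * 100.0

    print(hard(0.33, 0.99))    # 50.0 (first worked example above)
    print(hard(1.0, 2.0))      # 33.3 (second worked example above)
    print(hard(100.0, 200.0))  # 33.3 (independent of grade level)

    # A robust non-parametric summary: the percentage of pairs
    # exceeding a chosen HARD limit (the pairs here are hypothetical).
    pairs = [(0.33, 0.99), (1.0, 1.1), (2.0, 2.2), (5.0, 9.0)]
    over = 100.0 * sum(hard(a, b) > 20.0 for a, b in pairs) / len(pairs)
    print(f"{over:.0f}% of pairs exceed 20% HARD")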
The limit of detection for a particular assaying method is found to be where the HARD value reaches an upper limit for low level assays.

The HARD values are standardised to the mean of the pair of samples and so allow comparison between otherwise independent samples. When sufficient samples (at least 30) are compared, the distribution of HARD values can be regarded as equivalent to a distribution of residuals. The mean of all the pairs of samples used indicates the level around which the precision is being defined.

Advantages of this method are many. It provides a robust method which enables comparison between sample batches, between different assaying methods, across laboratories and across deposits. It provides a measure of error in which the difference between an assay and a repeat can be compared for different levels of mineralisation. It also becomes a basis for other non-parametric measures and comparisons. For example, the percentage of samples exceeding a specific HARD limit is a very robust measure of error. Scatterplots and cumulative plots of HARD against the mean of the pairs of samples provide a complete characterisation of the quality of the sampling and assaying in terms of relative error, and are directly comparable from one deposit to another.

The HARD statistics are of particular use where the precision of comparable assaying techniques must be assessed. By doing all the work on correctly split, pulverised material that is presented to the laboratory as randomised blind samples, a number of difficulties are avoided. In particular, the effect of particle size and mass of sample that creates large precision errors during the sample comminution process can be removed. However, it must be stressed that if poor sampling practice occurs at any stage of the sampling protocol, knowing that the assaying precision is excellent provides little comfort.

Typical Results
A benefit of carrying out a sampling tree experiment is that causes for sampling difficulties can be identified. The solution of the sampling equations previously discussed can characterise the deposit and provide a basis for comparison with other orebodies. For example, in gold deposits the nominal particle size of the gold, statistics on the HARD values, or a derived sampling nomogram could be used as comparative measures of the expected relative sampling error. Such comparisons can highlight:
• High sampling errors due to pulverising a relatively small sample or due to using a relatively coarse nominal pulp size.
• Opportunities to reduce the sampling errors by improving the sampling protocol.
• Problems of repeatability due to the pulveriser attempting to reduce coarse gold particles below their natural liberation size.

Two artificial case studies illustrate the approach that can be taken. In each case a 5 kg sample has been submitted to the following sampling protocol:
• jaw crushed to 5 mm and then riffle split to 2.5 kg
• milled to 1 mm and then riffle split to 1.0 kg
• pulverised to 75 µm and then mat rolled and cut to extract 200 g
• grab sampled to produce a 50 g charge for fire assay.

Typical experimental precisions that might be obtained using such a protocol are shown in Table 1.

Table 1 Experimental sampling study results for Case 1 and Case 2

    Case     Particle size   Sample mass   Differential relative variance   Precision at 95 percent confidence
    Case 1   5 mm            2.5 kg        0.0400                           39.2%
    Case 1   1 mm            1.0 kg        0.0200                           27.7%
    Case 1   75 µm           200 g         0.0030                           10.7%
    Case 1   75 µm           50 g          0.0005                           4.4%
    Case 2   5 mm            2.5 kg        0.0100                           19.6%
    Case 2   1 mm            1.0 kg        0.0030                           10.7%
    Case 2   75 µm           200 g         0.0005                           4.4%
    Case 2   75 µm           50 g          0.0001                           2.0%
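The precision figures in Table 1 follow directly from the conversion already used in this paper: precision at the 95 percent confidence level is 1.96 times the square root of the relative variance, expressed as a percentage. The short sketch below reproduces the table's conversion and, because relative variances are additive, also gives the precision of each protocol as a whole (a figure not shown in Table 1).

    # Sketch: converting the differential relative variances of
    # Table 1 to precision at the 95 percent confidence level.

    def precision_95(relative_variance):
        """1.96 standard deviations, expressed as a percentage."""
        return 1.96 * relative_variance ** 0.5 * 100.0

    case_1 = [0.0400, 0.0200, 0.0030, 0.0005]
    case_2 = [0.0100, 0.0030, 0.0005, 0.0001]

    for label, variances in (("Case 1", case_1), ("Case 2", case_2)):
        stages = ", ".join(f"{precision_95(v):.1f}%" for v in variances)
        whole = precision_95(sum(variances))  # additivity of variances
        print(f"{label}: stages {stages}; whole protocol {whole:.1f}%")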
Scatterplots indicating the variability of original and repeat assays are shown in Figures 1 and 2. In Figure 1 the data set has a precision of 20% at the 95 percent confidence limit; in Figure 2 the data set has a precision of 10%. These figures represent the differential variance observed for samples in Case 2 after reducing the particle size to 5 mm and then 1 mm.

[Figure 1 Scatterplot of repeat versus original assays (equal scales, 0 to 20 g/t Au) for a data set with 20% precision at the 95 percent confidence limit. This is typical of the differential relative variance for samples from Case 2 reduced to a particle size of 5 mm.]

[Figure 2 Scatterplot of repeat versus original assays (equal scales, 0 to 20 g/t Au) for a data set with 10% precision at the 95 percent confidence limit. This is typical of the differential relative variance for samples from Case 2 reduced to a particle size of 1 mm.]

Sampling nomograms, in the format adopted by Pitard (1993), are shown in Figures 3 and 4 for Case 1 and Case 2. Sampling nomograms indicate the relationship between mass and Fundamental Error, plotted on the axes using log scales. The vertical parts of the graphed line indicate where the particle size is being reduced. The sloping parts of the graphed line indicate where splitting is being carried out to reduce the mass. These sloping lines define samples with particles of constant size. (Other sloping lines for inferring the effects of alternative sampling protocols have been omitted for clarity.) The horizontal lines show error limits of precision at the 95 percent confidence limit.

For the quoted precision levels it can be seen that the sampling protocol is much less appropriate for Case 1 than for Case 2. This is because the nominal gold particle size for Case 1 is 200 µm while for Case 2 it is 50 µm. While a number of factors affect the calculation of this nominal particle size, it can be assumed that Case 1 represents a deposit with a moderate amount of visible gold, and Case 2 a deposit with very little visible gold.

Both the experimental results and the inferences of nominal gold particle size indicate that the assay results are less precise for Case 1. This deposit would thus have poorer definition of ore blocks for both resource estimation and grade control, incurring a higher risk that would hopefully be compensated for by higher average grades.

The sampling repeatability error is affected by the orebody (i.e. the nominal particle size of the component of interest), the sampling practices and the assaying procedures. Thus it is recommended that experimental work be carried out to define the sampling nomogram long before any bankable feasibility study is undertaken. Adopting generic nomograms, or the use of the parametric approach to define sampling constants, can be misleading and is likely not to be supported by experimental work.
[Figure 3 Sampling nomogram for Case 1, a gold deposit with moderate visible gold. Fundamental Error (relative variance) is plotted against sample mass (g) on log scales, with comminution stages at 5 mm, 1 mm and 75 µm and precision limits at 5%, 10% and 20%.]

[Figure 4 Sampling nomogram for Case 2, a gold deposit with very little visible gold. The axes, stages and precision limits are as for Figure 3.]
Field Techniques
The drilling or other initial sampling method, and all stages of subsampling and assaying, impact on both the precision and accuracy of the data. Correctly designed experiments in conjunction with the analytical methods presented here can indicate where expenditure on improved procedures can significantly reduce the errors. However, in many cases there is no need for an experiment. Poor sampling techniques, such as grabbing specimens rather than splitting samples, can be avoided at very little cost.

Statistical analysis of the data can quickly indicate whether sampling problems are a characteristic of the particular deposit or whether the sampling has been done poorly. Significant advances in this area can be expected as the techniques presented here are further refined to handle other problems, for example the quantitative analysis of downhole smearing, enabling comparisons between drilling methods and across deposits.

Laboratory Techniques
Any assay method has an associated error. Measures of precision quantify the expectation that the same result can be repeated continuously. There are a number of factors that cumulatively affect the precision of any assay result, including:
• instrument calibration and drift
• integrity of the chemicals used to dissolve the component of interest
• the impact of volumetric determinations using non-calibrated glassware
• matrix effects due to other elements in solution
• the concentration of the component of interest
• the mass of material being assayed
• the homogeneity of the material being assayed

Establishing An Audit Trail
It is no longer sufficient to expect that a few duplicate and repeat samples can be produced as evidence that the sampling quality has been monitored.

The assay data set may include original samples, repeats, duplicates and checks at different laboratories or with different assay techniques. The data to be used for tonnage and grade estimates should be maintained in a relational database. The repeats and duplicates should all be stored in a manner that allows easy separate extraction and cross referencing. Averaging should be closely examined during the data audit stage. It may not be appropriate to include the data used for monitoring the sample quality in averages of assay results.

Sensitivity Studies
A resource estimate should be carried out on a set of data for which there is information on the sampling and assaying quality. If a bias or a precision problem has been identified, it would thus be reasonable, as part of a bankable feasibility study, to examine the sensitivity of the study to the data quality. It may well be that differences in the Net Present Value of a project would be identified, depending on the change of average grade (in the case of bias) or the change in recovered tonnes and grade (in the case of misclassification).

The effect on a project of the sampling and assaying error should be kept in perspective. Cases have been seen where the effect on the average grade was less than the error associated with the estimation method; in other cases the viability of the project was cast into doubt by the errors associated with the sampling or differences between the sampling methods.
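One simple way to begin such a sensitivity test is sketched below: hypothetical block grades are scaled by an assumed bias correction, and the recovered tonnes and grade above cut-off are recompared. All of the numbers are illustrative assumptions; a full study would carry the changes through to the Net Present Value.

    # Sketch: sensitivity of recovered tonnes and grade to an assumed
    # assay bias. All values are hypothetical, for illustration only.

    blocks = [(1000.0, 0.8), (1000.0, 1.05), (1000.0, 2.6), (1000.0, 0.9)]
    cutoff = 1.0   # g/t Au mining cut-off grade
    bias = 0.9     # correction if the assay method reads about 10% high

    def recovered(block_list, cut):
        """Tonnes and mean grade of blocks at or above the cut-off."""
        ore = [(t, g) for t, g in block_list if g >= cut]
        tonnes = sum(t for t, g in ore)
        grade = sum(t * g for t, g in ore) / tonnes if tonnes else 0.0
        return tonnes, grade

    t0, g0 = recovered(blocks, cutoff)
    t1, g1 = recovered([(t, g * bias) for t, g in blocks], cutoff)
    print(f"reported:  {t0:.0f} t at {g0:.2f} g/t Au")
    print(f"corrected: {t1:.0f} t at {g1:.2f} g/t Au")

In this example the assumed bias both lowers the average grade and misclassifies a block, so the recovered tonnage changes as well.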
CONCLUSIONS
Monitoring and evaluating the sampling and assaying precision and bias is necessary for bankable feasibility studies. The statistical methods presented provide a means by which the quality of the data can be quantified. Meaningful comparisons are now possible between sampling methods, assaying methods, laboratories and deposits. Further refinements and other quantitative tests can be expected where these provide a mechanism for developers of mining projects to reduce their risk.

Some words of caution are advised for those who see this approach as a panacea: sampling consists of extracting a representative parcel of material from the deposit and then determining the grade of that parcel through a process of subsampling and assaying. In reality there may be additional "errors" involved in this process which may not be identified with a sampling study. These errors include ignoring the spatial relationships inherent in geological data, as well as the non-quantifiable procedural errors that may be built into a quality assurance scheme. As previously stated, these errors are best minimised by maintaining good quality control practices (through vigilance and audits) to reduce their potential impact on profitability.
REFERENCES

Barnett, V. and Lewis, T., 1994. Outliers in Statistical Data, third edition. John Wiley & Sons, 573 pp.

Bumstead, E., 1984. Some comments on the precision and accuracy of gold analysis in exploration. AusIMM Proceedings 289, pp. 71-78, March.

Cooper, W. J., 1988. Sample preparation - gold: theoretical and practical aspects. In Sample Preparation and Analyses for Gold and Platinum-Group Elements, Australian Institute of Geoscientists Bulletin 8, pp. 31-48.

François-Bongarçon, D., 1995. Modern Sampling Theory. Course notes, Perth.

Gy, P. M., 1979. Sampling of Particulate Materials - Theory and Practice. Developments in Geomathematics, Vol. 4, Elsevier, 431 pp.

Isaacs, E. H. and Srivastava, R. M., 1989. Applied Geostatistics. Oxford University Press, 561 pp.

JORC, 1996. Australasian Code for Reporting of Identified Mineral Resources and Ore Reserves. Report of the Joint Committee of the Australasian Institute of Mining and Metallurgy, Australian Institute of Geoscientists and Australian Mining Industry Council (JORC), issued July.

Keogh, D. C. and Hunt, S. J., 1990. Statistical determination of a safe sampling protocol: case studies at Granny Smith and Big Bell, Western Australia. In Strategies for Grade Control, Australian Institute of Geoscientists Bulletin 10, pp. 31-36.

Pitard, F., 1993. Pierre Gy's Sampling Theory and Sampling Practice, second edition. CRC Press Inc., 488 pp.

Radford, N. W., 1987. Assessment of error in sampling. In Meaningful Sampling in Gold Exploration, Australian Institute of Geoscientists Bulletin 7, pp. 123-143.

Shaw, W. J., 1993. Mining Geology, Grade Control and Reconciliation. Course notes, Perth.

Taylor, M., 1993. Grade control review at Boddington gold mine, SW region, WA. In Proceedings of the International Mining Geology Conference, Kalgoorlie-Boulder, WA, 5-8 July.

Thompson, M. and Howarth, R. J., 1973. The rapid estimation and control of precision by duplicate determination. The Analyst, 98, pp. 153-160.
