Abstract. Over the last few years, the use of artificial intelligence (AI) has increased in many areas of mining and allied branches of engineering. The technique has been successfully applied to many engineering problems and has demonstrated reasonable feasibility. A review of the literature reveals that geotechnical studies, mineral processing, reserve estimation and rock fragmentation are some of the areas of mining engineering where AI-based approaches have been successfully implemented. This paper provides a brief overview of some of the existing AI-based models for the prediction of rock fragmentation. Empirical studies for the estimation and assessment of fragmentation are also reviewed, and the reach and flexibility of basic AI techniques are elucidated. Keywords: Rock fragmentation, ANN, Fuzzy set.
1 Introduction
Rock fragmentation is governed by numerous parameters relating to the rock, the explosive and the blast geometry. Estimation of rock fragmentation for a blast round is an important stage before the implementation of a production blast. Many models have been developed and are in practice. Bergmann (1973), Cunningham (1983), Kou and Rustan (1993) and Chung and Katsabanis (2000) developed models with which fragmentation can be estimated (see Ouchterlony, 2003) [12]. The model developed by Cunningham is the most widely used worldwide. The Julius Kruttschnitt Mineral Research Centre (JKMRC) has done notable work in the development of models, producing the Two Component Model (TCM) and the Crushed Zone Model (CZM). The TCM uses experimental data obtained from a blast chamber to estimate the fines end of the distribution, while the CZM uses a semi-mechanistic approach to estimate fines. Onederra et al. proposed a new model to predict the proportion of fines generated during blasting. Kuznetsov (1973) suggested an empirical equation that predicts the mean fragment size resulting from rock blasting from the rock factor, the rock volume and the mass of explosive per blast hole [2]. Kuznetsov also suggested using the Rosin-Rammler equation to estimate the
892 P.Y. Dhekne, M. Pradhan, and R.K. Jade
complete fragmentation distribution resulting from rock blasting. Cunningham (1983, 1987) modified Kuznetsov's equation to estimate the mean fragment size and used the Rosin-Rammler distribution to describe the entire size distribution [2, 3]. The uniformity exponent of the Rosin-Rammler distribution was estimated as a function of blast design parameters. Empirical models based on theoretical and mechanistic reasoning have thus been developed for the prediction of rock fragmentation. The number of parameters involved, however, makes it an arduous task to predict fragmentation using such models, and the pattern of their output is not conducive to the needs of field engineers. Further, these models overlook some of the important factors involved in blasting and assign equal weights to all parameters irrespective of their importance. For such a problem, involving a variety of governing parameters, AI techniques can render a better solution for the prediction of rock fragmentation. This paper discusses the basic elements of neural networks and the fundamental considerations underlying the selection of a network for the present problem. AI applications that consider the dominant parameters affecting rock fragmentation are also presented.
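As a minimal illustration of the empirical approach discussed above, the Kuznetsov mean-size equation and the Rosin-Rammler distribution can be sketched as follows. The symbols follow common Kuz-Ram usage (rock factor A, rock volume V0 per hole, charge mass Q); the uniformity exponent n is taken here as a given input rather than computed from Cunningham's blast-geometry formula, and the numerical values are hypothetical.

```python
import math

def kuznetsov_mean_size(A, V0, Q):
    """Mean fragment size x50 (cm) from Kuznetsov's empirical equation:
    x50 = A * (V0/Q)**0.8 * Q**(1/6)
    A  : rock factor
    V0 : rock volume broken per blast hole (m^3)
    Q  : mass of explosive per blast hole (kg, TNT equivalent)"""
    return A * (V0 / Q) ** 0.8 * Q ** (1.0 / 6.0)

def rosin_rammler_passing(x, x50, n):
    """Cumulative fraction passing sieve size x, in the Rosin-Rammler form
    P(x) = 1 - exp(-0.693 * (x/x50)**n), so that P(x50) = 0.5."""
    return 1.0 - math.exp(-0.693 * (x / x50) ** n)

# Hypothetical blast: A = 7, 200 m^3 broken per hole, 100 kg explosive per hole
x50 = kuznetsov_mean_size(7.0, 200.0, 100.0)   # mean fragment size, ~26 cm
p = rosin_rammler_passing(x50, x50, n=1.5)     # ~0.5 by construction
```

This also makes the practical objection concrete: the rock factor and uniformity exponent themselves depend on many rock-mass and geometry parameters, which is where the empirical chain becomes arduous.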
intelligent behavior without necessarily attempting to provide a correlation between structures in the program and structures in the brain. Neural computing, a branch of AI, has found applications across a wide spectrum, including the life sciences, medicine, engineering, management, and even stocks and finance. Neural networks act both as dynamic systems and as adaptive systems. The machine learning community frequently works in areas where no plausible data model suggests itself, owing to our lack of knowledge of the generating mechanisms [14]. Neural networks perform pattern classification, pattern completion, optimization, data clustering, approximation, and function evaluation. A neural network performs a nonlinear mapping of a set of inputs to a set of outputs. Learning molds the mapping surface according to a desired response, either with or without an explicit training process. A network can learn under supervised or unsupervised training. Where external prototypes are available, these can be used as target outputs for specific inputs, enabling supervised learning. Learning algorithms use two main rules of learning: Hebbian learning, used with unsupervised learning, and the delta rule (or least mean squared error [LMS] rule), used with supervised learning. It is estimated that over 80% of all neural network projects use backpropagation [17]. In the backpropagation algorithm for training a multilayer (feed-forward)
Artificial Intelligence and Prediction of Rock Fragmentation 893
perceptron (MLP), there are two phases in the learning cycle: one propagates the input pattern through the network, and the other adapts the output by changing the weights in the network. It is the error signals that are backpropagated through the network to the hidden layers. The portion of the error signal that a hidden-layer neuron receives in this process is an estimate of the contribution of that particular neuron to the output error. By adjusting the connection weights on this basis, the squared error, or some other metric, is reduced in each cycle and, if possible, finally minimized. Three main issues need to be addressed [9]: complexity (is the network complex enough to encode a solution method?), practicality (can the network achieve such a solution within a feasible period of time?) and efficacy (how do we guarantee that the generalization achieved by the machine matches our conception of a useful solution?). The threshold functions do the final mapping of the activations of the output neurons into the network outputs. But the outputs from a single cycle of operation of a neural network may not be the final outputs; the network is iterated through further cycles of operation until convergence is achieved. If convergence seems possible but is taking a lot of time and effort, i.e., if learning is too slow, a tolerance level may be assigned so as to settle for near convergence. Thresholding is sometimes done to scale down the activation and map it into a meaningful output for the problem, and sometimes to add a bias. Generally used functions include the sigmoid, linear, ramp, and step functions. A stable network indicates convergence, which facilitates an end to the iterative process, e.g., the same output for two consecutive cycles, or convergence of the weights. A neural network can comprise a single layer or multiple layers (fig. 1).
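The two-phase cycle described above, forward propagation followed by backpropagation of error signals to adjust the weights, can be sketched for a 2-2-1 feed-forward topology such as that of fig. 2. This is an illustrative NumPy sketch on the classic XOR task (a hypothetical example, not any of the published fragmentation models), using sigmoid thresholding and iterating until the squared error falls.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR patterns: a task that needs a hidden layer (2-2-1 topology, cf. fig. 2)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# weights and biases: input->hidden (2x2) and hidden->output (2x1)
W1, b1 = rng.normal(0, 1, (2, 2)), np.zeros((1, 2))
W2, b2 = rng.normal(0, 1, (2, 1)), np.zeros((1, 1))

def forward(X):
    h = sigmoid(X @ W1 + b1)   # phase 1: propagate input through the network
    y = sigmoid(h @ W2 + b2)
    return h, y

_, y0 = forward(X)
loss_before = float(np.mean((t - y0) ** 2))

lr = 0.5
for _ in range(5000):
    h, y = forward(X)
    # phase 2: backpropagate the error signal to apportion blame to hidden units
    d_out = (y - t) * y * (1 - y)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_hid; b1 -= lr * d_hid.sum(0)

_, y1 = forward(X)
loss_after = float(np.mean((t - y1) ** 2))   # squared error reduced per cycle
```

The gradient of the squared error drives each weight change, so the loss decreases across the iterative cycles until convergence or a tolerance level is reached.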
2.1 Single-Layer
The perceptron is an example of a single-layer network. In a perceptron, the neurons in the same layer, the input layer in this case, are not interconnected.
Fig. 1 A typical neural network
Fig. 2 A feed-forward neural network with topology 2-2-1
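The delta rule (LMS) mentioned earlier can be illustrated on such a single-layer network. The AND function below is a hypothetical toy task: a single linear neuron with a bias is trained by full-batch gradient descent on the squared error, and a step threshold at 0.5 maps its activation to the final output.

```python
import numpy as np

# Hypothetical supervised task: learn logical AND with one neuron
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0.0, 0.0, 0.0, 1.0])

w = np.zeros(2)   # connection weights
b = 0.0           # bias

lr = 0.1
for _ in range(2000):
    y = X @ w + b                    # linear activation
    err = t - y                      # delta rule: change proportional to error
    w += lr * X.T @ err / len(X)
    b += lr * err.mean()

# step threshold maps the activation into a meaningful 0/1 output
pred = ((X @ w + b) > 0.5).astype(int)
```

Because AND is linearly separable, the single layer suffices; XOR, by contrast, requires the hidden layer shown in fig. 2.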
2.2 Two-Layer
Many important neural network models have two layers. The feed-forward backpropagation network, in its simplest form, is one example. Grossberg and Carpenter's ART1 paradigm uses a two-layer network. The Counterpropagation network has a Kohonen layer followed by a Grossberg layer. Bidirectional Associative Memory (BAM), the Boltzmann Machine, Fuzzy Associative Memory, and Temporal Associative Memory are other two-layer networks. For autoassociation, a single-layer network could do the job, but for heteroassociation or other such mappings, at least a two-layer network is needed [17].
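The heteroassociative mapping mentioned above can be illustrated with a minimal Bidirectional Associative Memory sketch. The bipolar pattern pairs below are hypothetical; the weight matrix is the sum of the outer products of the stored pairs, and recall in either direction thresholds the matrix product with the sign function.

```python
import numpy as np

# Two hypothetical bipolar (+1/-1) pattern pairs to be heteroassociated
x1, y1 = np.array([1, -1, 1, -1]), np.array([1, 1, -1])
x2, y2 = np.array([1, 1, -1, -1]), np.array([-1, 1, 1])

# BAM weight matrix: sum of outer products of the stored pairs
W = np.outer(x1, y1) + np.outer(x2, y2)

def recall_forward(x):
    """Recall the y-pattern associated with an x-pattern."""
    return np.sign(x @ W)

def recall_backward(y):
    """Recall the x-pattern associated with a y-pattern."""
    return np.sign(W @ y)
```

With these (orthogonal) x-patterns, recall is exact in both directions, which is what makes the two-layer bidirectional structure an associative memory rather than a mere classifier.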
2.3 Multi-layer
Kunihiko Fukushima's Neocognitron, noted for identifying handwritten characters, is a network with several layers. It is also possible to combine two or more neural networks into one by creating appropriate connections between the layers of one subnetwork and those of the others. This creates a multilayer network.
Of the myriad solutions obtained from neural networks, some applications in mining [9] are found in:
- Geophysics: interpretation of seismic and geophysical data from various sources.
- Mineral processing [4, 5, 15]: pattern classification, e.g., particle shape and size analysis. ANN has been used to estimate the mean bubble diameter and the bubble size distribution of mineralized froth surfaces [13].
- Image analysis: multispectral classification of Landsat images so as to highlight land use, habitation, over-mining, subsidence, pollution, etc.
- Process control: optimization in mineral processing plants and equipment selection.
- Optimal blast design: estimation of burden and spacing to achieve an optimum production blast [6].
fragmentation. The results of the model were compared with those of a regression model. For the regression method, R² and RMSE were calculated as 0.701 and 1.958, respectively; the linearity hypothesis applied may be the main cause of the poor efficiency of the statistical method. The cosine amplitude method (CAM) was adopted to identify the most sensitive factors affecting rock fragmentation. It was concluded that blastability index, charge per delay, burden and powder factor were the most effective parameters for rock fragmentation, whereas average hole depth, specific drilling, spacing, stemming length and hole diameter were the least effective. AI-based models still have untapped potential in the field of blasting and fragmentation. Besides neural network models, fuzzy techniques can provide a good measure of prediction, especially in cases where the information on which outputs are to be based is incomplete or missing.
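The cosine amplitude method used in that sensitivity analysis can be sketched as follows. CAM scores each input parameter by the cosine similarity, over all datasets, between that parameter's values and the observed output; the data arrays below are hypothetical placeholders, not values from any of the cited studies.

```python
import numpy as np

def cam_strength(x, y):
    """Cosine amplitude method: r = sum(x_k*y_k) / sqrt(sum(x_k^2)*sum(y_k^2)).
    Values near 1 indicate a strong relation between parameter and output."""
    return float(np.sum(x * y) / np.sqrt(np.sum(x ** 2) * np.sum(y ** 2)))

# Hypothetical example: five blasts, one input parameter vs. measured mean size
powder_factor = np.array([0.4, 0.5, 0.6, 0.7, 0.8])      # placeholder values
mean_frag = np.array([0.55, 0.48, 0.40, 0.36, 0.30])      # placeholder values

r = cam_strength(powder_factor, mean_frag)
```

Ranking the r values across all input parameters is what separates the most effective factors (e.g., powder factor) from the least effective ones in such a study.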
5 Conclusion
Rock fragmentation is governed by numerous parameters relating to the rock,
explosive and blast geometry. Empirical models to estimate rock fragmentation are tedious to work with and remain difficult to implement in the field. Few attempts have been made to use the neural network approach for predicting fragmentation. The fragmentation models developed so far using ANN suffer from drawbacks such as an insufficient number of datasets, their inconsequentiality, and their collection from entirely diverse rock masses, e.g., those of uranium and coal mines. It therefore becomes difficult to apply these models universally under field conditions. However, models based on artificial intelligence score over regression models because of their distinctive advantages: flexibility, non-linearity, greater fault tolerance, adaptive learning, and ease in handling incompleteness, inexactness and uncertainty through probabilistic and fuzzy reasoning. The diversity of AI techniques has not yet been exhaustively applied to the complex problem of rock fragment classification and prediction. There is sufficient scope, and a need, for the development of a new model that is adaptive, handles noise in the data, and provides near-tolerance results.