
Automation in Construction 20 (2011) 1204–1210


Innovation in artificial neural network learning: Learn-On-Demand methodology


Farzad Khosrowshahi
School of the Built Environment, University of Salford, Salford M5 4WT, UK

Abstract
Artificial neural networks represent the state-of-the-art tool for forecasting and prediction. However, the technique relies heavily on the availability of adequate data for its training. There have been many attempts to overcome the problems associated with the acquisition of learning data. These include the use of simulation techniques, which prepare the data for pre-processing prior to learning. Nevertheless, these methods tend to undermine the specific nature of the application that is reflected in its data. Furthermore, it is evident that, in certain circumstances, the current learning methods, grouped under on-line and off-line, do not provide an effective learning solution, and their advantages are mutually exclusive. With these problems in mind, this research proposes a method for rectifying these shortcomings. The solution focuses on the learning processes rather than the data. The work offers a new learning mechanism, namely the Learn-On-Demand (LOD) methodology, which enables the ANN to learn where a lack of knowledge is evident. The proposed LOD methodology integrates into the ANN's learning process. Having produced the algorithm for its implementation, the paper then produces the mathematical representation of the Learn-On-Demand methodology by integrating the new algorithm into existing methodologies. The need for this solution emerged out of research in the field of construction, where Structured Systems Analysis and Design was used as a platform for integrating a hybrid of AI techniques in order to develop an enhanced method of client briefing.
© 2011 Elsevier B.V. All rights reserved.

Article history: Accepted 9 May 2011. Available online 8 June 2011.
Keywords: Artificial neural network; Learn-On-Demand; Artificial intelligence; Learning methodology; Construction client briefing

1. Introduction

The research in this paper is a direct response to a problem within the construction domain, where there are many areas in which artificial neural networks can provide viable decision solutions [22]. However, despite the data-intensive nature of construction processes, there are a number of areas where high-quality data are lacking. These characteristics of the industry place significant demands on the training and retraining sides of the intended ANN solutions, to the point that retraining of the network may not be viable. Artificial neural networks (ANNs) represent a mathematical approximation of biological neural behaviour. They have been successfully used as a multivariate non-linear analytical tool, and are known to be highly effective in recognising patterns from noisy and complex data and in estimating non-linear relationships. In recent years there has been a significant rejuvenation in the use of artificial neural networks, and considerable inroads have been made in the way networks are designed and perform. However, despite this progress, the attainment of an optimum design remains an inherent problem of ANNs. Over the past two decades, researchers have developed a variety of methods to overcome the inherent problems of neural networks. A pragmatic remedy for

E-mail address: f.khosrowshahi@Salford.ac.uk
doi:10.1016/j.autcon.2011.05.004

situations where not all data are properly labelled is the use of semi-supervised learning, in which both labelled and unlabelled data are used for learning, even if the former are far fewer than the latter. Although the idea of using unlabelled data dates back to the mid-1960s [25], it is only relatively recently that the use of unlabelled data has proved beneficial [5]. This has been mainly due to the need to deal with situations where class labels are difficult or expensive to collect. These apply to a range of real-life problems such as speech recognition, spam email detection, medical diagnosis, and software fault detection [6]. Also, the problems relating to the learning and understanding of ANNs cited by Maass [20] and Andrews et al. [2] have instigated numerous research activities. In the absence of deterministic solutions, researchers have devised alternatives such as heuristic methods to develop near-optimum designs. For instance, to maximise the efficiency and accuracy of networks, Ghosh-Dastidar and Adeli [10] adopted parametric analysis to optimise parameter values for a single-spiking model. Among other parameters, they obtained optimum values for the learning rate, simulation time, time step, number of hidden layers, input and output encoding parameters, and the number of neurons in the input, hidden, and output layers [11]. They further extended their notion of exploiting heuristic rules and optimum parameters (which define the network architecture, the spiking neuron and the training algorithm) and applied them to epilepsy and seizure detection. In general, there are two types of learning methods: off-line and on-line. In the off-line learning methods, three phases exist: learning; testing and validating; and running. Here, if the patterns,


which are used in the learning phase are completely different from those in the running phase, or if they are not properly labelled, the ANN will generate wrong outputs; the solution in this situation is relearning (using the original learning set in conjunction with new complementary patterns). In the on-line learning methods, the ANN automatically learns from new situations as they occur; even fully trained ANNs continue learning. However, as a result, some plausible features of the global learning set may be compromised in favour of local characteristics. Hence, during the relearning phase, the system requires an additional mechanism to facilitate a supervisory role. Therefore, for the generation of a satisfactory output, one group of learning methods may require an extensive learning and relearning procedure, while the methods in the other group require an expert teacher applying concurrent supervision. In the light of the above, this paper proposes an alternative learning mechanism which attempts to overcome the aforesaid shortcomings. The proposed mechanism, here referred to as the Learn-On-Demand (LOD) method, caters for the lack of knowledge about new events (previously unaccounted for) by facilitating supervision (teaching) which is provided on demand by an expert. The method relies on the ability to carry out appended learning (or a growing cell structure) [8,9], rather than relearning, once the architecture of the ANN is altered.
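To make the contrast between the two regimes concrete, the following is a minimal sketch (not from the paper): a single linear neuron trained with the delta rule, where off-line learning retrains on the whole set while on-line learning chases each new pattern as it arrives. All names, learning rates and data are illustrative assumptions.

```python
# Sketch contrasting off-line and on-line learning for one linear neuron
# trained with the delta rule (illustrative learning rates and data).
import numpy as np

def offline_train(X, y, lr=0.1, epochs=200):
    """Batch (off-line) learning: every new pattern means retraining on the
    whole set, so the global structure of the data is preserved."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        # gradient step on the mean squared error over the full set
        w += lr * (y - X @ w) @ X / len(X)
    return w

def online_step(w, x, t, lr=0.5):
    """On-line learning: the weights move toward each new pattern as it
    arrives, so repeated local patterns can overwrite earlier global
    behaviour unless a supervisory mechanism intervenes."""
    return w + lr * (t - w @ x) * x
```

Feeding one local pattern repeatedly through `online_step` pulls the corresponding weight away from its batch-trained value, which is precisely the local-versus-global compromise described above.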

2. Artificial neural networks

Artificial neural networks fall within the category of parallel distributed processing (PDP). The main aspects of an ANN are its framework, topology and learning mechanism. During the first generation, in the 1940s and 1950s, a simple structure enabled an ANN neuron to fire when a threshold was reached. During the second generation, from the 1960s to the 1980s, mathematically defined activation functions were exploited. In the third-generation models, spiking neurons offer enhanced computational power. Basically, an ANN provides a framework for distributed representation; it consists of a pool of simple processing units that communicate by sending signals to each other over a large number of weighted connections [21,24]. In this respect, the function of each processing unit must be designed. This entails consideration of the processing units, the connections between units, and the activation and output rules. The topology of the network is primarily concerned with the design of connection types and initialisation. The two main network topologies are feed-forward and recurrent [1,13,18,24]. This research is primarily concerned with the training of ANNs, when the learning takes place. Learning situations are categorised into supervised (associative) learning and unsupervised learning (self-organisation). Basically, the learning machines in ANNs are associated with changes in synaptic efficiency. The synaptic learning rules control the variations of the parameters (synaptic weights) of the network. Researchers in this area have proposed learning rules and expressed them mathematically (e.g. back-propagation) or by biological analogy (e.g. Hebbian rules), but better learning rules may be needed to achieve human-like performance in many learning problems [3].

In his farewell-like article, Hammerstrom [12] praises the developments of third-generation ANNs and suggests that the way forward is the integration of diverse, complementary neuro-subsystems into complex entities. To this end, he contemplates that the three challenges to overcome are the scaling problems of ANNs, the required level of biological accuracy, and system integration.

2.1. Learning methods in ANNs

As noted earlier, there are two main types of learning methods in ANNs. In supervised (associative) learning, the network is trained by a teacher providing correctly labelled training inputs and matching them with the output patterns. These input–output variables can be provided by an external teacher, or self-supervision can be carried out by the system containing the network. In unsupervised learning (self-organisation), the output unit is trained to respond to clusters of patterns contained within the input. In this paradigm, information is extracted from unlabelled data and the system identifies statistically salient features of the input population. Unlike the supervised learning paradigm, there is no a priori set of categories into which the patterns are classified; rather, the system must develop its own representation of the input stimuli [19]. While supervised learning methods rely on iteration to minimise the error or penalty function, unsupervised methods adopt an analytical approach to determining the parameters (weights) of the network.

As shown in Fig. 1, the conventional learning methods adopted for most ANNs use the output error as the basis of learning (learning by mistake). Generally, an ANN model is ready for use after it is trained and validated. However, the introduction of a new learning situation can have an undesired impact on the performance of the model. This is particularly true in situations where retraining is not feasible. In the conventional approach, the learning curve and learning speed are affected by the size of the network as well as the stop-learning point (the cut-off point). The aim of optimisation in conventional methods is to achieve the optimal stop-learning time by using different ANN machines and learning machines.

3. Proposed Learn-On-Demand (LOD) method

Elman [7] highlights the benefits of slow learning in childhood by establishing a link between the maturational change of a child and his/her ability to learn complex concepts such as language. He claims that the success of training is contingent upon starting small, with limited memory, and allowing the network to gradually mature to the adult state. ANNs have been inspired by, and have evolved through, a better understanding of biological networks and their forms of information processing. In doing so, machine learning methods generalise to new situations by first learning from the training data. However, unlike conventional ANNs, the human learning process takes advantage of interventions by teachers who frequently interact with the learner and interject in the learning process.

[Figure: the Input feeds the ANN Machine, producing the Actual Output; the difference between the Actual Output and the Expected Output forms the Error supplied to the Learning Machine.]
Fig. 1. Conventional learning machine.

An attempt to enhance the structure of supervised learning is offered by Kamimura [16]: the proposed teacher-directed information maximisation, where appropriate outputs are produced by directing the network through the provision of target information in the input layer. Unlike traditional supervised learning, in this method the errors between targets and outputs are not back-propagated. The method offers both efficiency and flexibility because, when the information is small, several units are activated, but when the information is highly maximised the method approaches the conventional winner-takes-all behaviour of competitive learning. Owing to its interventional nature, the method proposed here inherently builds on supervised learning, which is just as well because, as noted by Kamimura [16], unsupervised learning cannot cope with complex problems. Other solutions include incremental learning. Looking at the ability to recognise sequences, namely temporal sequences (e.g. the video frames of a movie, a robot controller, or an animal predicting the next position of its prey), Konishi and Fujii [27] highlight the benefits of incremental learning, where the introduction of a new relationship does not force the network to relearn past relationships. For the majority of cases, such as in electrical engineering and management, where ANNs are used practically as a decision-support tool, the availability of adequate training data is a major problem. While it is possible to apply strategic decision information from one project to another, each situation has its distinct characteristics; hence, the necessary adaptations should be carried out. This adaptation might be very small; nevertheless, it will force the system to relearn all previous data. The relearning of the complete data set, in order to accommodate new, unexpected parameters, can be very complicated and can hinder the practicality and performance of the model.
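The appended-learning idea, where a new relationship is learned without disturbing earlier ones, can be illustrated with a deliberately simple sketch. This is not the growing-cell-structure algorithm of [8,9]; the class, weight layout and learning rule below are illustrative assumptions: each new pattern gets a fresh row of weights trained in isolation, so existing rows are never relearned.

```python
# A minimal sketch of appended learning (hypothetical class): learning a
# new relationship adds capacity instead of relearning the old weights.
import numpy as np

class AppendedLearner:
    def __init__(self, n_inputs):
        # one row of frozen weights per previously learned relationship
        self.W = np.zeros((0, n_inputs))

    def append(self, x, target, lr=0.5, steps=100):
        """Train a fresh weight row for the new pattern with the delta
        rule; the rows already in self.W are left untouched."""
        w = np.zeros(len(x))
        for _ in range(steps):
            w += lr * (target - w @ x) * x / (x @ x)
        self.W = np.vstack([self.W, w])

    def predict(self, x):
        # one output per learned relationship
        return self.W @ x
```

The key property is that `append` never writes to existing rows, so previously learned input–output mappings are preserved exactly, mirroring the incremental-learning behaviour described above.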
This paper introduces a new layer for conventional networks by facilitating teacher interruption, in a systematic manner, on demand. Extended potential uses include the introduction of value systems for robots, such as those proposed by Huang and Weng [14]. The proposed method also addresses the inherent problem of ANNs in dealing with situations where there is inadequate training data. The proposed LOD method facilitates the accommodation of new situations without the need for complete retraining. In order to achieve a satisfactory acceptance level of output, the LOD method facilitates the training by prompting the teacher for additional training data relating to the new situation and requesting a new output with its corresponding acceptance level, synonymous with the real-life phrase "her advice changed my life". The mechanism of the LOD method must be such that it is applicable to most existing learning methods. It must also comply with the following requirements:

- extensive use of feedback, so that it can be used in conjunction with the learn-and-forget method (error feedback and focus-of-interest feedback) [17];
- satisfaction of the requirements of parametric learning methods (e.g. a facilitatory value in the transfer function) [3];
- the ability to signal the user if the given input does not generate a satisfactory level of knowledge, whereupon a new output tolerance (acceptance level) is identified and an on-line learning process is initiated;
- the capability to act in real-time learning applications (learning a representation of the actual environment) [28].

3.1. Learn-On-Demand model schematic

The proposed model is schematically represented in Fig. 2. Here, the LOD activator is incorporated into the conventional ANN shown in Fig. 1.

[Figure: the Input feeds the ANN Machine; the LOD Activator checks the percentage accuracy (Pa) against the cut-off value (COV) and, where needed, obtains an expected value (EO) for the Error Learning Machine before the actual output (O) is released.]
Fig. 2. New Learn-On-Demand model.

As shown in Fig. 2, the model commences with an initial a priori output (O′). If the percentage accuracy (Pa) of the a priori output is greater than the Learn-On-Demand request level (fixed for each LOD system), then the system generates the actual output (O). Otherwise, the LOD mechanism is activated: the system performs on-line learning and an expected output (EO) is requested from the expert teacher. Subsequently, based on the comparison of O′ and EO, a new percentage accuracy (Pa′) is calculated and fed back into the network as a new entry. The process is repeated until Pa becomes greater than the cut-off value (COV). As demonstrated in Fig. 3, it is expected that during the first round of iteration, relating to one learning pattern, Pa will increase rapidly. Pa will then decrease during subsequent iterations; however, the system will never generate an accuracy level below the LOD request level for previously learned data.

3.2. Algorithm of the Learn-On-Demand model

In the proposed model, any viable ANN machine and learning mechanism can be used, but the incorporation of LOD requires an additional algorithm. The following is the algorithm for a single-input, single-output ANN, which is also applicable to N-input, N-output models whose outputs are independent:

    If Pa <= (Learn-On-Demand request level) then
        Exec Request-To-Learn
    Else
        O = O'
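As a runnable illustration, the single-output loop above can be sketched in Python. The model, teacher and learning step below are hypothetical stand-ins (a scalar linear model trained with a delta-rule step); only the control flow follows the algorithm, and the accuracy formula assumes O′ is non-zero.

```python
# A runnable sketch of the single-output LOD loop (hypothetical model and
# teacher; Pa, EO, COV and the request level follow the algorithm above).
def learn_on_demand(predict, learn_step, teacher, x, pa,
                    lod_request_level=0.8, cov=0.95, max_iters=100):
    """If the a priori accuracy Pa exceeds the request level, release the
    a priori output; otherwise ask the teacher for an expected output EO
    and run learning steps until Pa' reaches the cut-off value COV."""
    o_prior = predict(x)
    if pa > lod_request_level:
        return o_prior                      # O = O'
    eo = teacher(x)                         # Request-To-Learn
    for _ in range(max_iters):
        o_prior = predict(x)
        # Pa' = 1 - |O' - EO| / O'  (assumes O' != 0)
        pa_new = 1.0 - abs(o_prior - eo) / abs(o_prior)
        if pa_new >= cov:
            break                           # Stop-Learning
        learn_step(x, eo)                   # one on-line learning step
    return predict(x)
```

A usage example with a one-weight model: `predict = lambda x: w[0] * x` and a `learn_step` that nudges `w[0]` toward the teacher's value; after a handful of iterations the released output sits within the COV tolerance of EO.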

[Figure: percentage accuracy of outputs (y-axis, with the 100%, COV and LOD request levels marked) against learning iterations (x-axis).]
Fig. 3. The learning curve for LOD.

    Request-To-Learn:
        If (1 - ABS(O' - EO) / O') >= COV then
            Exec Stop-Learning-Algorithm
        Else
            Begin
                Error = EO - O'
                Pa' = 1 - ABS(O' - EO) / O'
                Exec Learning-Algorithm
                Go to Request-To-Learn
            End

3.3. Generalisation

The above algorithm relates to a 1 x 1 or N x N network whereas, in practice, the majority of networks are based on an N x M input–output dimension. The algorithm which generalises the implementation of LOD for N x M networks is given below, and the process is highlighted in Fig. 4:

    For j = 1 to M
        If Pa(j) <= (Learn-On-Demand request level) then
            Exec Request-To-Learn
        Else
            O(j) = O'(j)
    Next j

    Request-To-Learn:
        For j = 1 to M
            If (1 - ABS(O'(j) - EO(j)) / O'(j)) < COV then
                Exec Learn-Step
        Next j
        Exec Stop-Learning

    Learn-Step:
        Begin
            For j = 1 to M
                Error(j) = EO(j) - O'(j)
                Pa'(j) = 1 - ABS(O'(j) - EO(j)) / O'(j)
            Next j
            Exec Learning-Algorithm
            Go to Request-To-Learn
        End

It is evident from the above that it is the dimension of the output, and not the size of the input, which increases the complexity of the LOD mechanism. Fortunately, in practice, the majority of applications
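The per-output bookkeeping in the generalised algorithm can be expressed compactly. The sketch below is an illustrative NumPy rendering (hypothetical function name, vectorised over j rather than looped) of one iteration's Error(j), Pa'(j) and stop test; it assumes every O'(j) is non-zero.

```python
# A NumPy sketch of one generalised N x M LOD iteration: per-output
# errors and accuracies, plus the stop-learning flag against COV.
import numpy as np

def lod_step_multi(o_prior, eo, cov=0.95):
    """Compute Error(j) = EO(j) - O'(j) and Pa'(j) for all M outputs,
    and signal the caller to stop learning once every output meets the
    cut-off value COV (assumes O'(j) != 0 for all j)."""
    error = eo - o_prior
    pa_new = 1.0 - np.abs(error) / np.abs(o_prior)
    stop = bool(np.all(pa_new >= cov))
    return error, pa_new, stop
```

Because only the output dimension M appears here, the sketch also makes the closing observation concrete: the input size N does not enter the LOD bookkeeping at all.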

have a much lower number of outputs in comparison with the size of the input; hence, the complexity of LOD is not significantly affected.

3.4. Mathematical model

The following is an example of a basic mathematical representation of learning, based on the Hebbian learning rule set:

    Δw_ij = g(a_j(t), t_j) · h(o_i(t), w_ij)

where:

    w_ij    weight of the link from unit i to unit j;
    a_j(t)  activation of unit j in step t;
    t_j     teaching input, in general the desired output of unit j;
    o_i(t)  output of unit i at time t;
    g(·)    function depending on the activation of the unit and the teaching input;
    h(·)    function depending on the output of the preceding element and the current weight of the link.

For the calculation of each Δw_ij and the updating of the weights, forward propagation and backward propagation are used, and a single learning step is performed for each datum. This process is repeated for all data in the training set until satisfactory outputs are generated (learning stops at the minimum of the validation-set error). The learning can be optimised on the basis of learning-time minimisation or error minimisation [26]. Below is the back-propagation formula based on the generalised delta rule:

    Δw_ij = η δ_j o_i

    δ_j = f'_j(net_j) (t_j - o_j)        if unit j is an output unit
    δ_j = f'_j(net_j) Σ_k δ_k w_jk       if unit j is a hidden unit

where:

    η      learning factor eta (constant);
    δ_j    error (the difference between the real output and the teaching input of unit j);
    t_j    teaching input of unit j;
    o_i    output of the preceding unit i;
    i      index of a predecessor to the current unit j, with link w_ij from i to j;
    j      index of the current unit;
    k      index of a successor to the current unit j, with link w_jk from j to k.
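As a check on the notation, the generalised delta rule can be implemented directly. The sketch below is illustrative only (sigmoid units, one hidden layer, made-up sizes and learning rate), not the paper's implementation; with the sigmoid, f'(net) conveniently equals o(1 - o).

```python
# A compact implementation of the generalised delta rule above for one
# hidden layer of sigmoid units (illustrative sizes and learning rate).
import numpy as np

def backprop_step(x, t, W1, W2, eta=0.5):
    """One forward/backward pass. delta_j = f'(net_j)(t_j - o_j) at the
    output layer, delta_j = f'(net_j) * sum_k delta_k w_jk at the hidden
    layer, and each weight changes by Delta w_ij = eta * delta_j * o_i."""
    f = lambda z: 1.0 / (1.0 + np.exp(-z))
    h = f(W1 @ x)                             # hidden outputs o_i
    o = f(W2 @ h)                             # network outputs o_j
    d_out = o * (1 - o) * (t - o)             # output deltas
    d_hid = h * (1 - h) * (W2.T @ d_out)      # hidden deltas
    W2 += eta * np.outer(d_out, h)            # Delta w = eta * delta * o
    W1 += eta * np.outer(d_hid, x)
    return o
```

Repeating the step on a single training pattern drives the output monotonically toward its target, which is the error-minimisation behaviour the text describes.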

Again, in the following example the Hebbian rule is used but this time with LOD incorporated.

    Δw′_ij = G′(a_j(t), Eo_j) · H′(O′_i(t), w′_ij)

where:

    w′_ij    weight of the link from unit i to unit j, with the Pa′ values as new inputs;
    a_j(t)   activation of unit j in step t;
    Eo_j     teaching input, in general the desired output of unit j (at the last level, Eo is used);
    O′_i(t)  output of unit i at time t;
    G′(·)    function depending on the activation of the unit and the teaching input;
    H′(·)    function depending on the output of the preceding element and the current weight of the link.

[Figure: inputs 1..n, extended by Pa_j (j = 1..m), feed the ANN Machine, producing outputs O_j; the LOD Activator compares them against the cut-off value (COV) and the expected outputs Eo_j, passing the errors Error_j and new accuracies Pa′_j to the Learning Machine.]
Fig. 4. The general model overview.

Therefore, the back-propagation based on the generalised delta rule gives:

    Δw′_ij = η′ δ′_j O′_i

    δ′_j = f′_j(net_j) (Eo_j - O′_j)                      if unit j is an output unit
    δ′_j = f′_j(net_j) Σ_{k=1}^{n+m} δ′_k w′_jk           if unit j is a hidden unit

where:

    η′      learning factor eta (a constant related to COV);
    δ′_j    error (the difference between the real output and the teaching input of unit j; for the output level it is between Eo and O′);
    Eo_j    teaching input of unit j;
    O′_i    output of the preceding unit i;
    i       index of a predecessor to the current unit j, with link w′_ij from i to j;
    j       index of the current unit;
    k       index of a successor to the current unit j, with link w′_jk from j to k.
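The LOD-extended rule can be sketched in the same style as the standard delta rule. The code below is an illustrative assumption, not the paper's implementation: the teacher's expected output Eo replaces the training target at the output layer, and the m accuracy values Pa′ are appended to the n original inputs, giving the n + m extended input the summation limit refers to.

```python
# A sketch of the LOD-extended delta rule: teacher target Eo at the output
# layer and the accuracies Pa' appended as m extra inputs (illustrative
# sigmoid network and sizes).
import numpy as np

def lod_backprop_step(x, pa, eo, W1, W2, eta=0.5):
    """One LOD learning step on the extended input [x, Pa'], with
    delta'_j = f'(net_j)(Eo_j - O'_j) at the output layer."""
    f = lambda z: 1.0 / (1.0 + np.exp(-z))
    x_ext = np.concatenate([x, pa])           # n + m extended inputs
    h = f(W1 @ x_ext)
    o = f(W2 @ h)                             # a priori outputs O'
    d_out = o * (1 - o) * (eo - o)            # teacher-driven output deltas
    d_hid = h * (1 - h) * (W2.T @ d_out)
    W2 += eta * np.outer(d_out, h)
    W1 += eta * np.outer(d_hid, x_ext)
    return o
```

Iterating the step pulls O′ toward the teacher's Eo, which is exactly the Request-To-Learn loop of Section 3.2 expressed at the weight level.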

In the above mathematical model, the Pa′_j (j = 1..m) are extended inputs appended to the existing ones.

4. Learn-On-Demand application

The LOD method is particularly suited to models which provide support at strategic levels, where the prior set of training data is too scarce to enable the system to learn properly. The proposed methodology emerged as a necessity of research work into developing an enhanced construction client briefing system that integrates a hybrid of AI models on a Structured Systems Analysis and Design Method (SSADM) platform. While adapting SSADM to the overall model of design, production and maintenance, the focus of the work is on the client–designer interface, where a small improvement is expected to yield a favourable impact on all other phases. The importance of client briefing and its impact on project performance has long been recognised [4]. Equally, the development of an enhanced briefing system relies on a deeper understanding of clients' strategy. Basically, a client's strategy can be defined in terms of their needs as well as their wants, while complying with certain standards and regulations. These requirements could be regarded as synonymous with "fit for purpose" and "value for money". In this work, the totality of the client's requirements is referred to as the client's project strategy, and the prime objective of the work is to develop a briefing system that provides a true reflection of the client's project strategy. In order to automate the overall briefing process, it is necessary to provide a quantitative evaluation of the outcome and its comparison against a set of measurable criteria. Project time–cost–quality (TCQ) has long been a common set of criteria for performance evaluation, representing the client's needs for a plethora of information [15]. It is recognised that for any given situation there exists a trade-off between these variables. This means there can be several final time–cost–quality outcomes that are acceptable to the population of clients with different strategic needs. Therefore, there may be many tactical paths through which any acceptable TCQ outcome could be reached, and the collection of all these acceptable tactical paths produces an envelope of acceptable solutions. The diagram in Fig. 5 shows all constituent stages of the proposed automated enhanced client briefing.

The system shows the role of organisational outlook and environmental factors in shaping project strategy development, before the feasibility study is conducted. Having identified the client's request and the strategic boundaries, the enhanced briefing is iteratively generated.

Fig. 5. Design development system detailed overview.

The LOD methodology proposed in this paper is shown in the shaded area within Fig. 5, where the Strategic Making model initiates the LOD process and, through a series of teacher interventions, an acceptable solution (within the strategic envelope) is identified and passed to the next phases of the model. It is shown here that the proposed methodology forms part of a larger system while dealing with a problem inherent in, but not confined to, the construction field. The use of the method and its data population is contingent upon extensive data collection and analysis of all parts of the broader model depicted in Fig. 5, the full description of which is well beyond the scope of this theoretically based paper. Therefore, the specific application of the method will be the subject of another comprehensive paper. In this paper, the focus is on the theoretical description of the proposed method and on prompting its possible use within the construction field in general, and construction client briefing in particular. It is envisaged that the potential use of the proposed method in various disciplines is vast. Below are a few indicators.

Applications in electrical engineering:
 i. intelligent control of power stations;
 ii. power network control, in order to minimise the power wasted in the network;
 iii. power transfer switching, to perform as a practical fault-tolerant system;
 iv. improvement of operator-based systems, such as intelligent auto-pilots.

Applications in management:
 i. strategic decision support;
 ii. use as an intelligent unit in game theory;
 iii. practical optimisation in resource management.

Applications in construction management:
 i. practical strategic design;
 ii. strategic decision support;
 iii. use as an alternative to conventional ANN models where the learning set is not large enough to produce suitable output, e.g. plant selection (cranes, diggers, etc.);
 iv. other areas where teacher intervention can assist the network, including procurement, mark-up estimation, risk mitigation, tendering decisions, prequalification, defect prediction, clash control, design optimisation, etc.

5. Conclusion

This work emerged out of a broader research effort aimed at developing an enhanced system of construction client briefing, which makes use of a number of AI methods. Part of the system requires the setting of the client's project strategy, which makes use of neural networks. However, the lack of relevant data necessitated extending the architecture of conventional ANNs to cater for intervention by a teacher. Initially, the paper highlights an area where the current learning methods could be improved to accommodate situations that previously could not be resolved. The two current types of learning methods, off-line and on-line, are complementary; however, their use is mutually exclusive. The shortcomings relate to the choice between off-line learning, where every new situation requires complete relearning, and on-line learning, where, due to the absence of supervisory control, the recurrence of local situations may displace the earlier plausible learning associated with global learning situations. The paper proposes an enhanced learning mechanism, namely the Learn-On-Demand (LOD) learning mechanism. LOD combines the

features of the two conventional methods. It facilitates supervised on-line learning without the need for complete retraining. Instead, an appended-learning mechanism (a growing cell structure applied to the initial stage) is adopted, whereby the learning of new situations is appended to the existing network rather than a complete relearning being carried out. This is facilitated by adding new input/output variables, and hence involves the use of a dynamic ANN structure, a challenge which has already been addressed by researchers. Finally, the paper discusses the model, which highlights the role and position of the LOD activator within the overall process, and provides the algorithm which can be used for the implementation of LOD. The algorithm formed the basis for the development of the mathematical representation of LOD learning by integrating the new mathematics into the current Hebbian learning rule set.

Acknowledgments

I express thanks to Hossain N. Rad, my former PhD student, for his assistance.

References
[1] J.A. Anderson, Neural models with cognitive implications, in: D. LaBerge, S.J. Samuels (Eds.), Basic Processes in Reading: Perception and Comprehension, Erlbaum, Hillsdale, NJ, 1977, pp. 27–90.
[2] R. Andrews, J. Diederich, A.B. Tickle, Survey and critique of techniques for extracting rules from trained artificial neural networks, Knowledge-Based Systems 8 (6) (1995) 373–389.
[3] S. Bengio, Y. Bengio, J. Cloutier, J. Gecsei, Generalization of a parametric learning rule, in: ICANN '93, Proceedings of the Int. Conf. on Artificial Neural Networks, Amsterdam, Netherlands, 1993.
[4] W. Bordass, A. Leaman, Design for manageability, Building Research and Information 25 (3) (1997) 148–157.
[5] O. Chapelle, B. Schölkopf, A. Zien, Semi-supervised Learning, MIT Press, Cambridge, MA, 2006.
[6] C. Catal, B. Diri, Unlabelled extra data do not always mean extra performance for semi-supervised fault prediction, Expert Systems: The Journal of Knowledge Engineering 26 (5) (2009).
[7] J.L. Elman, Learning and development in neural networks: the importance of starting small, Cognition 48 (1) (1993) 71–99.
[8] B. Fritzke, Unsupervised clustering with growing cell structures, in: Proc. of IJCNN-91, Seattle, IEEE, 1991.
[9] B. Fritzke, Let it grow: self-organising feature maps with problem-dependent cell structure, in: Proc. of ICANN-91, Helsinki, 1991.
[10] S. Ghosh-Dastidar, H. Adeli, Improved spiking neural networks for EEG classification and epilepsy and seizure detection, Integrated Computer-Aided Engineering 14 (3) (2007) 187–212.
[11] S. Ghosh-Dastidar, H. Adeli, A new supervised learning algorithm for multiple spiking neural networks with application in epilepsy and seizure detection, Neural Networks 22 (2009) 1419–1431.
[12] D. Hammerstrom, Artificial neural networks, where do we go next? IEEE, 2004.
[13] J.J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proceedings of the National Academy of Sciences 79 (1982) 2554–2558.
[14] X. Huang, J. Weng, Novelty and reinforcement learning in the value system of developmental robots, in: Second International Workshop on Epigenetic Robotics, Edinburgh, Scotland, August 2002.
[15] IAI UK, Client Briefing Domain Committee Charter, 29 October 1999, http://www.iai.org.uk/cbdchart.htm.
[16] R. Kamimura, Teacher-directed information maximization: supervised information-theoretic competitive learning with Gaussian activation functions, IEEE, 2004.
[17] H. Keuchel, E. von Puttkamer, U.R. Zimmer, Learning and forgetting surface classifications with dynamic neural networks, in: ICANN '93, Amsterdam, September 1993.
[18] T. Kohonen, Associative Memory: A System-Theoretical Approach, Springer-Verlag, 1977.
[19] B.J.A. Kröse, P.P. van der Smagt, An Introduction to Neural Networks, fifth ed., University of Amsterdam, Netherlands, January 1993.
[20] W. Maass, On the complexity of learning on neural nets, in: J. Shawe-Taylor, M. Anthony (Eds.), Computational Learning Theory: EuroColt '93, Oxford University Press, Oxford, 1994, pp. 1–17.
[21] J.L. McClelland, D.E. Rumelhart, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 2, The MIT Press, 1986.
[22] O. Moselhi, T. Hegazy, P. Fazio, Potential applications of neural networks in construction, Canadian Journal of Civil Engineering 19 (1992) 521–529.
[23] B.A. Pearlmutter, Dynamic Recurrent Neural Networks, School of Computer Science, Carnegie Mellon University, CMU-CS-90-196, Pittsburgh, PA, December 1990.
[24] D.E. Rumelhart, J.L. McClelland, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1, The MIT Press, 1986.
[25] H.J. Scudder, Probability of error of some adaptive pattern-recognition machines, IEEE Transactions on Information Theory 11 (1965) 363–371.
[26] C. Wang, S.S. Venkatesh, J.S. Judd, Optimal stopping and effective machine complexity in learning, in: NIPS 6, 1993.
[27] Y. Konishi, R.H. Fujii, Incremental learning of temporal sequences using state memory and a resource allocating network, IEEE, 2004.
[28] U.R. Zimmer, E. von Puttkamer, Realtime-learning on an autonomous mobile robot with neural networks, in: Euromicro '94 Real-Time Workshop, Västerås, Sweden, June 1994.
