BIRLA INSTITUTE OF TECHNOLOGY, MESRA, RANCHI
(END SEMESTER EXAMINATION)
CLASS: BE            SEMESTER: VII
BRANCH: EEE          SESSION: MO/13
SUBJECT: EE7117 - NEURAL NETWORKS
TIME: 3 HOURS        FULL MARKS: 60

INSTRUCTIONS:
1. The question paper contains 7 questions, each of 12 marks, for a total of 84 marks.
2. Candidates may attempt any 5 questions for a maximum of 60 marks.
3. The missing data, if any, may be assumed suitably.
4. Before attempting the question paper, be sure that you have got the correct question paper.
5. Tables/data, handbook/graph paper etc. are to be supplied to the candidates in the examination hall.

Q1. (a) Sketch the different neural network architectures, and draw and represent mathematically the different types of activation functions used in ANN.
(b) A sigmoid function is defined by
        φ(v) = 1 / (1 + exp(-av)).
    Show that
        dφ(v)/dv = a φ(v) [1 - φ(v)].
    What is the value of φ(v) at the origin? Sketch φ(v) and dφ(v)/dv.
(c) A neuron j receives inputs from four other neurons whose activity levels are 10, -20, 4 and -2. The respective synaptic weights of neuron j are 0.8, 0.2, -1.0 and -0.9. Calculate the output of neuron j for the following situations:
    (i)   the neuron is linear;
    (ii)  the neuron is represented by the McCulloch-Pitts model;
    (iii) the neuron uses a sigmoid activation function.
    Assume that the threshold applied to the neuron is zero.

Q2. (a) What are the weight updating rules of a perceptron network? Is a perceptron network a linear classifier? Can it classify non-linearly separable patterns?
(b) Show that the LMS weight vector converges only if the learning-rate parameter satisfies 0 < μ < 2/λmax, where λmax is the maximum eigenvalue of the auto-correlation matrix R of x(n) (the input to the filter).
(c) The auto-correlation matrix R of the input vector x(n) in the LMS algorithm is defined by R = E[x(n) x^T(n)]. Define the range of values of the learning-rate parameter μ of the LMS algorithm for (i) the algorithm to be convergent in the mean, and (ii) the algorithm to be convergent in the mean square.

Q3. (a) Draw the graph of an MLP (back-propagation) network having a 2-3-3-2 structure. Use proper notation to denote each component.
(b) Develop the back-propagation algorithm to update the weights.
(c) For the neural network shown in Fig. 1, the neurons in the hidden layer and the output layer have the activation function φ(v), where φ(v) = 1/(1 + e^(-v)). The input-layer neurons are linear. If the input is X = [0, 1] and the desired output is d = 1, obtain the change in weight Δw_kj(n) for k = j = 1 and Δw_ji(n) for i = j = 2.
(Fig. 1: network diagram with the given weights.)

Q4. (a) Discuss how to measure the Hamming distance between two binary vectors using a Hamming network. [2]
(b) Discuss how a SOFM network works. [4]
(c) Discuss the working of an ART1 network. How does it solve the problem of plasticity and stability? [6]

Q5. (a) Draw the architecture of a Hopfield neural network. [2]
(b) Discuss the stability of a discrete Hopfield network. [4]
(c) Consider a Hopfield network made up of five neurons, which is required to store the following three fundamental memories: [6]
        X1 = [+1, +1, +1, +1, +1]^T
        X2 = [+1, -1, -1, +1, -1]^T
        X3 = [-1, +1, -1, +1, +1]^T
    (i)  Evaluate the 5-by-5 synaptic weight matrix of the network.
    (ii) Demonstrate that all three fundamental memories X1, X2 and X3 satisfy the alignment condition, using asynchronous updating.

Q6. (a) Enumerate the areas where neural networks can be applied. [2]
(b) Discuss how to simulate the inverse dynamics of a plant using a neural network. [4]
(c) Write a brief summary of a neural network application that you have discussed in the class. [6]
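For reference, the identity asked for in Q1(b) follows in a few lines from the definition given in the question; the sketch below uses only that definition.

\[
\frac{d\varphi(v)}{dv}
  = \frac{a\,e^{-av}}{\left(1+e^{-av}\right)^{2}}
  = a\cdot\frac{1}{1+e^{-av}}\cdot\frac{e^{-av}}{1+e^{-av}}
  = a\,\varphi(v)\,\bigl[1-\varphi(v)\bigr],
\qquad
\varphi(0)=\tfrac{1}{2},\quad
\left.\frac{d\varphi}{dv}\right|_{v=0}=\frac{a}{4}.
\]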
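A minimal numerical sketch of Q1(c). The choice of a unit-slope sigmoid (a = 1) and a McCulloch-Pitts unit that outputs 1 for a non-negative net input and 0 otherwise are assumptions for illustration; the paper itself only states that the threshold is zero.

```python
import numpy as np

# Sketch of Q1(c). Assumptions (not stated in the paper): slope a = 1 for the
# sigmoid, and a McCulloch-Pitts unit that outputs 1 when the net input is
# non-negative and 0 otherwise. Threshold = 0 as given.

x = np.array([10.0, -20.0, 4.0, -2.0])   # activity levels of the four inputs
w = np.array([0.8, 0.2, -1.0, -0.9])     # corresponding synaptic weights

v = float(np.dot(w, x))                  # net input: 8 - 4 - 4 + 1.8 = 1.8

linear_out  = v                          # (i)   linear neuron
mp_out      = 1 if v >= 0 else 0         # (ii)  McCulloch-Pitts model
sigmoid_out = 1.0 / (1.0 + np.exp(-v))   # (iii) sigmoid activation, a = 1

print(v, linear_out, mp_out, sigmoid_out)   # 1.8  1.8  1  ~0.858
```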
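The learning-rate bounds in Q2 can also be checked numerically. The sketch below uses synthetic data and illustrative names (X, d, w_true, n_taps) that are not part of the question; it quotes the commonly used sufficient bounds μ < 2/λmax for convergence in the mean and μ < 2/tr[R] for convergence in the mean square.

```python
import numpy as np

# Illustrative LMS sketch for Q2: estimate R, read off the learning-rate
# bounds, and run the LMS update w(n+1) = w(n) + mu * e(n) * x(n).

rng = np.random.default_rng(0)
n_samples, n_taps = 5000, 4
X = rng.normal(size=(n_samples, n_taps))             # input vectors x(n)
w_true = np.array([0.5, -0.3, 0.2, 0.1])             # hypothetical target filter
d = X @ w_true + 0.01 * rng.normal(size=n_samples)   # desired response d(n)

# Autocorrelation matrix R = E[x(n) x^T(n)], estimated by a sample average.
R = (X.T @ X) / n_samples
lam_max = np.max(np.linalg.eigvalsh(R))

mu_mean = 2.0 / lam_max      # convergence in the mean:        0 < mu < 2 / lambda_max
mu_msq  = 2.0 / np.trace(R)  # convergence in the mean square: 0 < mu < 2 / tr[R] (sufficient bound)
print(f"mu < {mu_mean:.3f} (mean), mu < {mu_msq:.3f} (mean square)")

mu = 0.5 * mu_msq            # pick a step size comfortably inside both bounds
w = np.zeros(n_taps)
for x_n, d_n in zip(X, d):
    e_n = d_n - w @ x_n      # error e(n) = d(n) - w^T(n) x(n)
    w = w + mu * e_n * x_n   # LMS weight update
print("estimated weights:", np.round(w, 3))
```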
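For Q5(c), one way to organise the computation is the outer-product (Hebbian) rule with zero self-connections. The sketch below assumes the 1/N scaling of the weight matrix and the usual convention that a zero induced local field leaves a neuron's state unchanged; both are assumptions about notation, not statements from the paper.

```python
import numpy as np

# Sketch for Q5(c): store the three fundamental memories with the
# outer-product rule and check the alignment (stability) condition.

memories = np.array([
    [+1, +1, +1, +1, +1],
    [+1, -1, -1, +1, -1],
    [-1, +1, -1, +1, +1],
], dtype=float)

M, N = memories.shape                      # M = 3 patterns, N = 5 neurons

# Outer-product rule: W = (1/N) * sum_k x_k x_k^T, with zero self-connections.
W = (memories.T @ memories) / N
np.fill_diagonal(W, 0.0)

def sgn_keep(v, previous):
    """Signum that keeps the previous state when the local field is zero."""
    return np.where(v > 0, 1.0, np.where(v < 0, -1.0, previous))

for k, x in enumerate(memories, start=1):
    v = W @ x                              # induced local fields
    aligned = np.array_equal(sgn_keep(v, x), x)
    print(f"memory {k}: local fields = {v}, alignment condition holds: {aligned}")
```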
