Code No: 420206                                                       Set No: 1

IV B.Tech. II-Semester Examination, April 2003

NEURAL NETWORKS AND APPLICATIONS
(Electrical and Electronics Engineering)

Time: 3 hours                                                    Max. Marks: 80

Answer any FIVE questions.


All questions carry equal marks.
---
1. Explain how you would justify that the brain is a parallel distributed processing system.

2.a) What are the requirements of learning laws?


b) Distinguish between activation and synaptic dynamics models.

3.a) What is perceptron learning for pattern classification?


b) Using the perceptron learning law, design a classifier for the following
problem:
Class C1: [-2 2]ᵀ, [-2 1.5]ᵀ, [-2 0]ᵀ, [1 0]ᵀ and [3 0]ᵀ
Class C2: [1 3]ᵀ, [3 3]ᵀ, [1 2]ᵀ, [3 2]ᵀ and [10 0]ᵀ
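
A minimal code sketch of one possible answer to 3(b), assuming bipolar targets (+1 for class C1, -1 for class C2), an augmented bias input, and a unit learning rate; all of these are illustrative choices, and the data are linearly separable, so the loop terminates:

    import numpy as np

    # Training data taken from the problem statement
    C1 = np.array([[-2, 2], [-2, 1.5], [-2, 0], [1, 0], [3, 0]], dtype=float)
    C2 = np.array([[1, 3], [3, 3], [1, 2], [3, 2], [10, 0]], dtype=float)

    X = np.vstack([C1, C2])
    X = np.hstack([X, np.ones((len(X), 1))])       # augment each input with a bias term
    t = np.array([1] * len(C1) + [-1] * len(C2))   # +1 for C1, -1 for C2 (assumed convention)

    w = np.zeros(3)                                # [w1, w2, bias]
    eta = 1.0                                      # assumed learning rate
    changed = True
    while changed:                                 # perceptron learning law:
        changed = False                            # w <- w + eta * t * x on a misclassification
        for x, target in zip(X, t):
            if np.sign(w @ x) != target:
                w += eta * target * x
                changed = True
    print("final weights [w1, w2, bias]:", w)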

4.a) What is meant by generalization in feedforward networks?


b) Develop a software solution to implement a two-layer feedforward ANN with
GDR-based training. Verify the solution using a simple (two-layer) example.
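
A minimal sketch of one possible solution to 4(b), assuming one hidden layer of sigmoid units, a squared-error cost, batch gradient descent for the generalized delta rule (GDR), and XOR as the simple verification example; the hidden-layer size, learning rate, and epoch count are arbitrary illustrative choices:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train_gdr(X, T, n_hidden=4, eta=0.5, epochs=20000, seed=0):
        """Two-layer feedforward network trained with the generalized delta rule."""
        rng = np.random.default_rng(seed)
        W1 = rng.uniform(-0.5, 0.5, (X.shape[1] + 1, n_hidden))   # +1 row for the bias
        W2 = rng.uniform(-0.5, 0.5, (n_hidden + 1, T.shape[1]))
        Xb = np.hstack([X, np.ones((len(X), 1))])                 # bias-augmented inputs
        for _ in range(epochs):
            H = sigmoid(Xb @ W1)                                  # hidden activations
            Hb = np.hstack([H, np.ones((len(H), 1))])
            Y = sigmoid(Hb @ W2)                                  # network outputs
            delta_o = (T - Y) * Y * (1 - Y)                       # output-layer deltas
            delta_h = (delta_o @ W2[:-1].T) * H * (1 - H)         # hidden-layer deltas
            W2 += eta * Hb.T @ delta_o                            # GDR weight updates
            W1 += eta * Xb.T @ delta_h
        return W1, W2, Y

    # Simple two-layer verification example: XOR
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)
    _, _, Y = train_gdr(X, T)
    print(np.round(Y, 2))    # should approach [[0], [1], [1], [0]]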

5. What are the properties of the continuous time dynamical system model?
Explain them using a single layer neural network.

6. Differentiate between the Hamming net and MAXNET.

7. With suitable diagrams, explain how to solve optimization problems using
single-layer feedback networks.

8.a) Explain, with suitable examples, why neural networks can handle massive
amounts of input/output data more efficiently than conventional computers.
b) Compare conventional computers with parallel distributed processing
hardware.

******
Code No: 420206                                                       Set No: 2

IV B.Tech. II-Semester Examination, April 2003

NEURAL NETWORKS AND APPLICATIONS
(Electrical and Electronics Engineering)

Time: 3 hours                                                    Max. Marks: 80

Answer any FIVE questions.


All questions carry equal marks.
---
1. Describe the characteristics of artificial neural networks.

2.a) Distinguish between local minima and global minima in neural networks.
What are the effects of these on neural networks?
b) Explain the distinction between stability and convergence.

3.a) Give two examples of linearly inseparable problems.


b) Try to use the pseudoinverse formulation
A⁺ = (AᵀA)⁻¹Aᵀ
with a single multiple-input, one-output unit and a single training pattern.
Show where this approach fails.
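
One way to see the failure (a sketch, assuming the rows of A hold the training patterns, so a single pattern a in Rⁿ makes A a 1 × n matrix):

    A = a^{\mathsf T} \in \mathbb{R}^{1 \times n}
    \quad\Longrightarrow\quad
    A^{\mathsf T} A = a\,a^{\mathsf T} \in \mathbb{R}^{n \times n},
    \qquad \operatorname{rank}\!\left(a\,a^{\mathsf T}\right) = 1 .

For n > 1 this matrix is singular, so (AᵀA)⁻¹ does not exist and the formulation breaks down; the right pseudoinverse A⁺ = Aᵀ(AAᵀ)⁻¹ = a/‖a‖² has to be used instead.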

4.a) Discuss various interpretations of the results of back propagation learning.


b) Consider, as quantitatively as possible, the effect of one or more redundant
pattern pairs, denoted (ir, or), in H on the training of a feedforward network
using the GDR.

5. What is a gradient-type Hopfield network? Differentiate between the
discrete-time Hopfield network and the gradient-type Hopfield network.

6. Explain the winner-take-all learning method and give its algorithm.
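
A minimal sketch of the winner-take-all (competitive) update, assuming normalized inputs and weights, a small fixed learning rate, and randomly initialized units; all of these are illustrative choices:

    import numpy as np

    def winner_take_all(X, n_units=3, alpha=0.1, epochs=50, seed=0):
        """Competitive learning: only the winning unit's weight vector is updated."""
        rng = np.random.default_rng(seed)
        W = rng.random((n_units, X.shape[1]))
        W /= np.linalg.norm(W, axis=1, keepdims=True)      # normalize weight vectors
        for _ in range(epochs):
            for x in X:
                winner = np.argmax(W @ x)                  # unit with the largest net input
                W[winner] += alpha * (x - W[winner])       # move only the winner toward x
                W[winner] /= np.linalg.norm(W[winner])     # keep the winner normalized
        return W

    # Example: three clusters of 2-D points, normalized to unit length
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(c, 0.05, (20, 2)) for c in ([1, 0], [0, 1], [0.7, 0.7])])
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    print(np.round(winner_take_all(X), 2))                 # each row settles near one cluster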

7. Define the classification problem. With a suitable diagram, explain how to
classify printed characters using a multilayer feedforward network.

8.a) What are the invariant characteristics of a neurocomputing model? Explain
each of them.
b) What are the variable characteristics of a neurocomputing model? Explain
each of them.
**********

Code No: 420206                                                       Set No: 3

IV B.Tech. II-Semester Examination, April 2003

NEURAL NETWORKS AND APPLICATIONS
(Electrical and Electronics Engineering)

Time: 3 hours                                                    Max. Marks: 80

Answer any FIVE questions.


All questions carry equal marks.
---
1. What is meant by training of a neural net? Explain in detail with an
example.

2.a) Explain in detail “Recall in Neural Networks”.


b) Explain autonomous and non-autonomous dynamical systems.

3.a) Explain the nature of the input vectors in each of the following cases of
the optimal choice of weight matrix (see the note after this question):
(i) W = BAᵀ   (ii) W = BA⁻¹   (iii) W = BA⁺
b) What is meant by the perceptron representation problem?
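
A brief note on part (a), assuming the usual linear associative memory setting in which the input vectors are the columns of A and the desired outputs are the columns of B:

    W = BA^{+} \ \text{(general least-squares optimum)};\qquad
    A \ \text{square and nonsingular} \Rightarrow A^{+} = A^{-1};\qquad
    \text{columns of } A \ \text{orthonormal} \Rightarrow A^{+} = A^{\mathsf T}.

So W = BA⁺ reduces to W = BA⁻¹ for a square set of linearly independent input vectors, and further to W = BAᵀ when the input vectors are orthonormal.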

4.a) Give a qualitative explanation for the effect of adapting the Gaussian
mixture parameters on learning in a feedforward neural network.
b) What is the significance of the momentum term in back-propagation learning?
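
One common form of the momentum-augmented weight update in back-propagation (a sketch; η is the learning rate and α the momentum coefficient):

    \Delta w_{ij}(t) = -\,\eta\,\frac{\partial E}{\partial w_{ij}} + \alpha\,\Delta w_{ij}(t-1), \qquad 0 \le \alpha < 1 .

The α term adds a fraction of the previous weight change, which damps oscillations across steep error-surface ravines and speeds up learning along directions where the gradient is consistent.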

5. Design a simple continuous-time network using the concept of the
computational energy function, and also evaluate the stationary solutions of
the network.

6. Mention some of the feature mapping capabilities of neural networks and
explain any two of them in detail.

7. With suitable diagrams, explain how to use the error back-propagation
algorithm for recognition processing.

8. With necessary diagrams, explain the neuron and weight implementation
using resistors and an operational amplifier. Derive the relevant
mathematical relations.
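
A sketch of the key relation, assuming the standard inverting summing-amplifier realization with input resistors Rᵢ and feedback resistor R_f:

    v_{o} = -R_{f}\sum_{i}\frac{v_{i}}{R_{i}} = \sum_{i} w_{i}\,v_{i},
    \qquad w_{i} = -\frac{R_{f}}{R_{i}} .

Each weight is thus set by the conductance 1/Rᵢ of an input resistor; positive weights need an additional inverting stage, and the saturating transfer characteristic of the amplifier can serve as the neuron's activation function.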

********
Code No: 420206                                                       Set No: 4

IV B.Tech. II-Semester Examination, April 2003

NEURAL NETWORKS AND APPLICATIONS
(Electrical and Electronics Engineering)

Time: 3 hours                                                    Max. Marks: 80

Answer any FIVE questions.


All questions carry equal marks.
---
1.a) Explain the recent trends in neural networks and their applications.
b) Describe the model of an artificial neuron.

2.a) Bring out the limitations of single-layer perceptrons in computing logical
functions.
b) Explain how a multilayer perceptron can function as a classifier. Give
examples.

3. Compute the weight matrix for the following pattern association task:
a1 = [1/6  -5/6  -1/6  1/2]ᵀ ,  b1 = [1 0 0]ᵀ
a2 = [1/2  1/2  -1/2  1/2]ᵀ ,   b2 = [0 1 1]ᵀ
a3 = [-5/6  1/6  -1/6  1/2]ᵀ ,  b3 = [0 0 0]ᵀ
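
A minimal sketch of one way to compute and check the weight matrix, assuming the outer-product (Hebbian) rule W = Σ bₗ aₗᵀ, which coincides with the optimal choice W = BA⁺ here because the given input vectors are orthonormal:

    import numpy as np

    # Input vectors a_l as columns of A (4 x 3), outputs b_l as columns of B (3 x 3)
    A = np.array([[ 1/6, -5/6, -1/6, 1/2],
                  [ 1/2,  1/2, -1/2, 1/2],
                  [-5/6,  1/6, -1/6, 1/2]]).T
    B = np.array([[1, 0, 0],
                  [0, 1, 1],
                  [0, 0, 0]]).T

    W = B @ A.T                    # sum of the outer products b_l a_l^T
    print(np.round(W, 3))
    print(np.allclose(W @ A, B))   # perfect recall for orthonormal inputs -> True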

4.a) Why is LMS learning called a stochastic gradient descent method?


b) Show that the RTRL method applied to a fully recurrent network of N
units has O(N⁴) computational complexity for each learning
iteration.
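
A sketch of the standard counting argument, assuming N fully connected units (so on the order of N² weights) and the usual RTRL sensitivity recursion:

    p^{\,k}_{ij}(t+1) = f'\!\big(s_{k}(t)\big)\Big[\sum_{l=1}^{N} w_{kl}\,p^{\,l}_{ij}(t) + \delta_{ik}\,z_{j}(t)\Big],
    \qquad p^{\,k}_{ij} = \frac{\partial y_{k}}{\partial w_{ij}} .

There are N · N² = N³ sensitivities, and each is updated with a sum over N units, so a single learning iteration costs on the order of N⁴ operations.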

5. Discuss how the vector field method can be used to illustrate the real-time
phenomena in networks with finite-gain neurons.

6. How do you convert patterns of arbitrary dimensionality into the
responses of one- or two-dimensional arrays of neurons?

7. Define the problem of process identification. What are the possible neural
network configurations for plant identification? Explain each of them.

8. Give examples of applications of neural networks in load forecasting.

*******
