Code No: RR410212 Set No. 1
IV B.Tech I Semester Regular Examinations, November 2007
NEURAL NETWORKS AND APPLICATIONS
(Electrical & Electronic Engineering)
Time: 3 hours Max Marks: 80
Answer any FIVE Questions
All Questions carry equal marks
⋆⋆⋆⋆⋆
1. (a) For the sigmoidal function s(x) = 1/(1 + e^(−cx)), explain the role of the
constant c, and sketch s(x) for various values of c.
(b) Determine the final weights for the AND and OR logic functions using the
perceptron learning rule. [8+8]
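The perceptron part of this question can be checked numerically. A minimal sketch, assuming bipolar inputs and targets, zero initial weights, and a unit learning rate (none of which the question fixes):

```python
# Perceptron learning rule on bipolar AND/OR patterns.
# Assumptions (not fixed by the question): zero initial weights,
# learning rate 1, bipolar (+1/-1) encoding, net = 0 maps to -1.
def train_perceptron(samples, lr=1.0, epochs=20):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        changed = False
        for (x1, x2), t in samples:
            y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if y != t:                      # update only on a mistake
                w[0] += lr * t * x1
                w[1] += lr * t * x2
                b += lr * t
                changed = True
        if not changed:                     # converged: all patterns correct
            break
    return w, b

AND = [((-1, -1), -1), ((-1, 1), -1), ((1, -1), -1), ((1, 1), 1)]
OR  = [((-1, -1), -1), ((-1, 1), 1), ((1, -1), 1), ((1, 1), 1)]
print(train_perceptron(AND))   # -> ([1.0, 1.0], -1.0)
print(train_perceptron(OR))    # -> ([1.0, 1.0], 1.0)
```

With these assumptions the rule converges to w = [1, 1] with bias −1 for AND and bias +1 for OR; different initial weights or encodings yield different, equally valid, solutions.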
2. Explain in detail the differences between competitive learning and differential
competitive learning. [16]
3. Explain the following terms:
(a) Pattern
(b) Classes
(c) Pattern space
(d) Decision regions. [16]
5. What are the properties of the continuous time dynamical system model? Explain
them using a single layer neural network. [16]
6. Explain the architecture of ART-1 neural networks with emphasis on the function
of each part. What is the importance of the vigilance parameter in its working? [16]
7. Consider the simple neural net shown in figure 7. Assume the hidden unit
has the activation function f(ξ) = tanh(ξ) and that the output unit has a linear
activation with unit slope. Show that there exists a set of real-valued weights
{w1, w2, v1, v2} that approximates the discontinuous function g(x) = a sgn(x − b) + c,
for all x, a, b, c ∈ R, to any degree of accuracy. [16]
Figure 7
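The construction behind this question can be checked numerically: tanh(k(x − b)) → sgn(x − b) as k → ∞, so choosing w1 = k, w2 = −kb, v1 = a, v2 = c makes the net match g(x) arbitrarily well away from the discontinuity. A sketch with illustrative values of a, b, c and steepness k:

```python
import math

# Illustrative values for a, b, c and the steepness k (assumptions;
# the proof works for any a, b, c and sufficiently large k).
a, b, c = 2.0, 0.5, -1.0
k = 1000.0
w1, w2, v1, v2 = k, -k * b, a, c

def net(x):
    # hidden unit: tanh(w1*x + w2); output unit: linear with unit slope
    return v1 * math.tanh(w1 * x + w2) + v2

def g(x):
    # target step function, with sgn(x - b) taken as -1 at x = b
    return a * (1 if x > b else -1) + c

for x in [-2.0, 0.0, 0.4, 0.6, 2.0]:
    print(x, net(x), g(x))   # net matches g away from the jump at x = b
```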
8. What do you understand by finite resolution and conversion error? Explain a
circuit producing a single digitally programmable weight employing a multiplying
D/A converter (MDAC). [16]
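Finite resolution can be illustrated numerically: an n-bit MDAC realizes only 2^n discrete weight levels, so programming a continuous weight incurs a conversion (quantization) error of at most half an LSB. A sketch assuming a ±1 weight range and 8-bit resolution:

```python
# Quantization of a continuous weight to the nearest MDAC level.
# The ±1 range and 8-bit width are illustrative assumptions.
def quantize(w, n_bits=8, w_min=-1.0, w_max=1.0):
    levels = 2 ** n_bits - 1
    lsb = (w_max - w_min) / levels        # smallest programmable step
    code = round((w - w_min) / lsb)       # digital code sent to the MDAC
    return w_min + code * lsb, lsb

w = 0.3141
wq, lsb = quantize(w)
error = abs(w - wq)
print(wq, lsb, error)   # conversion error never exceeds lsb / 2
```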
⋆⋆⋆⋆⋆
Code No: RR410212 Set No. 2
IV B.Tech I Semester Regular Examinations, November 2007
NEURAL NETWORKS AND APPLICATIONS
(Electrical & Electronic Engineering)
Time: 3 hours Max Marks: 80
Answer any FIVE Questions
All Questions carry equal marks
⋆⋆⋆⋆⋆
1. (a) How do you justify that the brain is a parallel distributed processing system?
(b) Explain the structure of the brain. [8+8]
2. (a) Distinguish between local minima and global minima in neural networks.
What are their effects on network training?
(b) Explain the distinction between stability and convergence. [8+8]
4. With a neat block diagram and flow chart, explain the error back-propagation
algorithm. [16]
6. What is a minimum spanning tree? Write the algorithm of the self-organizing
feature map. [16]
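The self-organizing feature map algorithm asked for above can be sketched as follows (a 1-D chain of units; the Gaussian neighbourhood, linear decay schedules, and unit count are illustrative choices, not the only valid ones):

```python
import math, random

def train_som(data, n_units=10, epochs=100, lr0=0.5, sigma0=3.0):
    random.seed(0)
    dim = len(data[0])
    # random initial weight vectors, one per unit on the chain
    w = [[random.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                  # decaying learning rate
        sigma = max(sigma0 * (1 - t / epochs), 0.5)  # shrinking neighbourhood
        for x in data:
            # 1. find the best-matching unit (smallest Euclidean distance)
            bmu = min(range(n_units),
                      key=lambda i: sum((w[i][d] - x[d]) ** 2 for d in range(dim)))
            # 2. pull the BMU and its chain neighbours toward the input
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                for d in range(dim):
                    w[i][d] += lr * h * (x[d] - w[i][d])
    return w
```

Training on, say, ten evenly spaced scalars spreads the units out to cover the input range, which is the topology-preserving behaviour the question asks the student to describe.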
8. What do you understand by finite resolution and conversion error? Explain a
circuit producing a single digitally programmable weight employing a multiplying
D/A converter (MDAC). [16]
⋆⋆⋆⋆⋆
Code No: RR410212 Set No. 3
IV B.Tech I Semester Regular Examinations, November 2007
NEURAL NETWORKS AND APPLICATIONS
(Electrical & Electronic Engineering)
Time: 3 hours Max Marks: 80
Answer any FIVE Questions
All Questions carry equal marks
⋆⋆⋆⋆⋆
1. (a) For a particular network the data is given below. Draw the architecture and
verify the result.
2. (a) Distinguish between local minima and global minima in neural networks.
What are their effects on network training?
(b) Explain the distinction between stability and convergence. [8+8]
3. Write and discuss the single-layer discrete perceptron training algorithm. [16]
4. Show by geometrical arguments that with 3 layers of non-linear units, any hard
classification problem can be solved. [16]
5. Describe the discrete-time Hopfield network with necessary illustrations. [16]
7. Using back-propagation learning, find the new weights for the network shown in
figure 7 when presented with the input (0, 1) and the target output 0.8. Use a
learning rate of α = 0.25 and the bipolar sigmoid activation function. [16]
Figure 7
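Since figure 7 is not reproduced here, the update cannot be carried out with the actual initial weights; the sketch below shows one back-propagation step for an assumed 2-2-1 network with placeholder weights, using the stated input (0, 1), target 0.8, α = 0.25, and the bipolar sigmoid f(x) = 2/(1 + e^(−x)) − 1, whose derivative is f′ = ½(1 + f)(1 − f):

```python
import math

# NOTE: the initial weights below are placeholders, NOT the values in
# figure 7; only input, target, learning rate, and activation are given.
def f(x):                       # bipolar sigmoid
    return 2.0 / (1.0 + math.exp(-x)) - 1.0

def df(y):                      # derivative written in terms of y = f(x)
    return 0.5 * (1.0 + y) * (1.0 - y)

alpha = 0.25                    # learning rate (given)
x = [0.0, 1.0]                  # input (given)
target = 0.8                    # target output (given)

V = [[0.6, -0.1], [-0.3, 0.4]]  # placeholder input->hidden weights V[j][i]
bh = [0.3, 0.5]                 # placeholder hidden biases
W = [0.4, 0.1]                  # placeholder hidden->output weights
bo = -0.2                       # placeholder output bias

# forward pass
z = [f(sum(V[j][i] * x[i] for i in range(2)) + bh[j]) for j in range(2)]
y = f(sum(W[j] * z[j] for j in range(2)) + bo)

# backward pass: output delta first, then hidden deltas
delta_o = (target - y) * df(y)
delta_h = [delta_o * W[j] * df(z[j]) for j in range(2)]

# gradient-descent weight updates
for j in range(2):
    W[j] += alpha * delta_o * z[j]
    for i in range(2):
        V[j][i] += alpha * delta_h[j] * x[i]
bo += alpha * delta_o
bh = [bh[j] + alpha * delta_h[j] for j in range(2)]
```

A single step with this learning rate moves the output toward the target; repeating the forward pass with the updated weights shows a smaller error.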
8. Analyze the neuron circuit given in figure 8 and compute its weight value. Compute
the neuron's response for the following inputs, knowing that f_sat+ = −f_sat− = 13 V.
Figure 8
⋆⋆⋆⋆⋆
Code No: RR410212 Set No. 4
IV B.Tech I Semester Regular Examinations, November 2007
NEURAL NETWORKS AND APPLICATIONS
(Electrical & Electronic Engineering)
Time: 3 hours Max Marks: 80
Answer any FIVE Questions
All Questions carry equal marks
⋆⋆⋆⋆⋆
1. (a) How do you justify that the brain is a parallel distributed processing system?
(b) Explain the structure of the brain. [8+8]
2. (a) What are the requirements of learning laws?
(b) Distinguish between activation and synaptic dynamics models. [16]
3. Discuss in detail the minimum-distance classification system for a linear
discriminant function. [16]
4. (a) Why is convergence not guaranteed for the back-propagation learning
algorithm?
(b) Discuss a few tasks that can be performed by a back-propagation network
and the significance of semi-linear functions in back propagation. [6+10]
5. (a) What are the advantages of the vector field method over other methods?
(b) The weight matrix W for a network with bipolar discrete neurons is given as:

        [  0   1  -1  -1  -3 ]
        [  1   0   1   1  -1 ]
    W = [ -1   1   0   3   1 ]
        [ -1   1   3   0   1 ]
        [ -3  -1   1   1   0 ]

Knowing that the thresholds and external inputs of the neurons are zero, compute
the values of the energy for v = [-1 1 1 1 1]^T and v = [-1 -1 1 -1 -1]^T. [4+12]
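For part (b), the energy E(v) = −½ vᵀWv can be evaluated directly, taking the thresholds and external inputs as zero (as stated) and W as the symmetric matrix of part (b):

```python
# Hopfield energy E = -1/2 * v^T W v (thresholds and external inputs zero).
W = [[ 0,  1, -1, -1, -3],
     [ 1,  0,  1,  1, -1],
     [-1,  1,  0,  3,  1],
     [-1,  1,  3,  0,  1],
     [-3, -1,  1,  1,  0]]

def energy(v):
    return -0.5 * sum(v[i] * W[i][j] * v[j]
                      for i in range(5) for j in range(5))

print(energy([-1, 1, 1, 1, 1]))     # -> -10.0
print(energy([-1, -1, 1, -1, -1]))  # -> 6.0
```

The first state has the lower energy, so of the two it is the one closer to a stored attractor of the network.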
6. Write short notes on the Grossberg layer and its training. Explain with an
example. [16]
7. Consider the simple neural net shown in figure 7. Assume the hidden unit
has the activation function f(ξ) = tanh(ξ) and that the output unit has a linear
activation with unit slope. Show that there exists a set of real-valued weights
{w1, w2, v1, v2} that approximates the discontinuous function g(x) = a sgn(x − b) + c,
for all x, a, b, c ∈ R, to any degree of accuracy. [16]
Figure 7
8. Analyze the neuron circuit given in figure 8 and compute its weight value. Compute
the neuron's response for the following inputs, knowing that f_sat+ = −f_sat− = 13 V.
Figure 8
⋆⋆⋆⋆⋆