SRM University 2007 B.Tech Computer Science and Engineering - Question Bank: ARTIFICIAL NEURAL NETWORKS

10. What is GDR? Write the weight update equations for hidden layer and
output layer weights.
11. Draw the flow chart of overall GDR procedure.
12. Draw the architecture of a layered feedforward network.
13. Draw the feedforward architecture for an ANN-based compressor.
14. Distinguish between Pattern Mode and Batch Mode.
15. What is a local minimum and a global minimum?
16. Explain how the network training time and accuracy are influenced by the size of the hidden layer.
17. List out a few applications of BPN.
18. What are the 2 kinds of signals identified in the BackPropagation network?
19. Why are the layers in the Bidirectional Associative Memory called x and y layers?
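
The Pattern Mode / Batch Mode distinction asked for in Q14 can be sketched numerically. Below is a minimal Python comparison (the toy data, learning rate, and single linear neuron are illustrative assumptions, not part of the paper) of pattern-mode updates, applied after every training pattern, against batch-mode updates, applied once per epoch:

```python
# Pattern (online) mode vs batch mode for a single linear neuron y = w*x,
# trained with the delta rule.  Data, learning rate, and epoch count are
# illustrative assumptions.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # (x, target), target = 2*x
lr = 0.05

# Pattern mode: the weight changes after every individual training pattern.
w_pat = 0.0
for _ in range(200):
    for x, t in data:
        w_pat += lr * (t - w_pat * x) * x     # delta rule, per pattern

# Batch mode: gradients are accumulated over the whole epoch, applied once.
w_bat = 0.0
for _ in range(200):
    w_bat += lr * sum((t - w_bat * x) * x for x, t in data)

print(round(w_pat, 3), round(w_bat, 3))       # both approach the true weight 2.0
```

Both modes reach the same solution here; they differ in the trajectory taken and, on larger problems, in memory and convergence behaviour.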


UNIT-III
PART-B
1. Explain Hebbian Learning.
2. Draw the architecture of Back Propagation Network (BPN) and
explain in detail.
3. Derive the GDR for an MLFF network.
4. (a) Discuss the significance of adding momentum to the training procedure.
(b) Write the algorithm of the generalized delta rule (Back Propagation Algorithm).
5. Draw the architecture of the Bidirectional Associative Memory (BAM) and explain in detail.
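
The GDR weight updates and the momentum term asked for in Q3 and Q4 can be illustrated numerically. A minimal Python sketch follows (the 2-2-1 network size, learning rate, momentum coefficient, and single training pattern are all assumptions, not the paper's solution):

```python
import math, random

# One generalized-delta-rule (back propagation) step for a 2-2-1 sigmoid
# network, with a momentum term added to each weight change.

random.seed(0)
sig = lambda a: 1.0 / (1.0 + math.exp(-a))

n_in, n_hid = 2, 2
W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
W2 = [random.uniform(-1, 1) for _ in range(n_hid)]
dW1 = [[0.0] * n_in for _ in range(n_hid)]   # previous W1 steps (momentum memory)
dW2 = [0.0] * n_hid                          # previous W2 steps (momentum memory)
lr, mom = 0.5, 0.9

def step(x, t):
    """One forward/backward pass on pattern x with target t; returns squared error."""
    h = [sig(sum(W1[j][i] * x[i] for i in range(n_in))) for j in range(n_hid)]
    y = sig(sum(W2[j] * h[j] for j in range(n_hid)))
    delta_o = (t - y) * y * (1 - y)                        # output-layer delta
    for j in range(n_hid):
        delta_h = delta_o * W2[j] * h[j] * (1 - h[j])      # hidden-layer delta
        dW2[j] = lr * delta_o * h[j] + mom * dW2[j]        # new step = gradient + momentum
        W2[j] += dW2[j]
        for i in range(n_in):
            dW1[j][i] = lr * delta_h * x[i] + mom * dW1[j][i]
            W1[j][i] += dW1[j][i]
    return (t - y) ** 2

errs = [step([1.0, 0.0], 1.0) for _ in range(50)]
print(round(errs[0], 3), round(errs[-1], 5))   # error falls as updates repeat
```

The momentum term reuses a fraction of the previous weight change, which smooths the descent and speeds up travel along shallow directions of the error surface.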



UNIT-IV
PART-A
1. What do you mean by Weight Space in Feedforward Neural Networks?
2. How can you perform search over weight space?
3. How will you determine the characteristics of a training algorithm?
4. What are the effects of the error surface on training algorithms?
5. What is premature saturation in the error surface?
6. What is a saddle point in the error surface?
7. What are the 2 kinds of transformations which result in symmetries in weight space? Discuss in brief.
8. What is meant by generalization?
9. What are Ontogenic Neural Networks? Mention their advantages.
10. Distinguish between constructive and destructive methods for network topology modification.
11. Write the differences between the Cascade Correlation (CC) network and a Layered Feedforward network.
12. Write the quickprop weight correction algorithm for Cascade
Correlation Network.
13. Define residual output error.
14. Define pruning.
15. Write the applications of Cascade Correlation network.
16. How will you identify superfluous neurons in the hidden layer?
17. What do you mean by network inversion?
18. Write the differences between Heteroassociative Memories and Interpolative Associative Memories.
19. Write the differences between Autoassociative and Heteroassociative Memories.
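
The quickprop weight correction asked for in Q12 can be shown on a one-weight example. The sketch below is illustrative (the toy quadratic error surface and the initial step size are assumptions, not the paper's algorithm statement): quickprop treats the error curve as a parabola and jumps toward its minimum using the current and previous gradients, dw(t) = S(t) / (S(t-1) - S(t)) * dw(t-1), where S = dE/dw.

```python
# Quickprop secant step on a toy error surface E(w) = (w - 3)**2.

grad = lambda w: 2.0 * (w - 3.0)     # S = dE/dw for the toy surface

w = 0.0
dw_prev = 0.5                        # size of the initial step (assumption)
s_prev = grad(w)
w += dw_prev
for _ in range(10):
    s = grad(w)
    if abs(s) < 1e-12:               # gradient vanished: minimum reached
        break
    dw = s / (s_prev - s) * dw_prev  # quickprop secant step
    w += dw
    s_prev, dw_prev = s, dw

print(w)   # → 3.0, the minimum of E
```

Because the toy error surface really is a parabola, the secant step lands on the minimum exactly; on real error surfaces quickprop only approximates this jump and is combined with safeguards on the step size.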


UNIT-IV
PART-B
1. Explain Generalization.
2. What are the major features of Cascade Correlation Network? Draw the
architecture of a cascade correlation network and discuss in detail.
3. Explain how a feedforward network size can be minimized.
4. Explain the stochastic optimization methods for weight determination.
5. (a)Explain the methods for network topology determination.
(b) What are the costs involved in weights? Discuss how they are minimized.
6. Draw the architecture of Cascade Correlation Network and discuss in detail.
7. Explain the method pruning by weight decay to minimize the neural network size.
8. Explain in detail how the superfluous neurons are determined and the
network is pruned.
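
Pruning by weight decay (Q7) can be sketched in a few lines. The sketch below is an illustrative assumption, not the paper's method: every update subtracts a decay term lr*decay*w, so weights the task does not sustain shrink toward zero and can then be pruned.

```python
# Weight decay on a 2-input linear neuron.  The target depends only on x1,
# so the second weight has nothing sustaining it and decays away.
# Data, learning rate, decay coefficient, and prune threshold are assumptions.

data = [((1.0, 1.0), 2.0), ((2.0, -1.0), 4.0), ((3.0, 0.0), 6.0)]  # t = 2*x1
lr, decay = 0.05, 0.01
w = [0.5, 0.5]

for _ in range(500):
    for (x1, x2), t in data:
        err = t - (w[0] * x1 + w[1] * x2)
        w[0] += lr * (err * x1 - decay * w[0])   # useful weight: error term sustains it
        w[1] += lr * (err * x2 - decay * w[1])   # useless weight: decay term wins

pruned = [wi if abs(wi) > 0.1 else 0.0 for wi in w]   # prune near-zero weights
print([round(v, 2) for v in pruned])                  # ≈ [2.0, 0.0]
```

The surviving weight sits slightly below its least-squares value because the decay term also shrinks useful weights a little; the pruning threshold trades this bias against network size.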

UNIT-V
PART-A
1. What is a competitive learning network? Give examples.
2. What is a Self-Organizing network? Give examples.
3. Define the term clustering in ANN.
4. What is the c-means algorithm?
5. How will you measure the clustering similarity?
6. What is the on-centre off-surround technique?
7. Describe the features of the ART network.
8. Write the differences between ART 1 and ART 2.
9. What is meant by the stability-plasticity dilemma in the ART network?
10. What is the 2/3 rule in ART?
11. What are the 2 subsystems in the ART network?
12. What are the applications of ART?
13. What are the 2 processes involved in RBF network design?
14. List a few applications of RBF network.
15. What are the basic computational needs for Hardware implementation
of ANN?
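
The c-means (k-means) algorithm from Q4 can be sketched in miniature: assign each pattern to its nearest cluster centre, then move each centre to the mean of the patterns assigned to it. The 1-D data, c = 2, and the initial centres below are assumptions for illustration.

```python
# Two-step c-means loop: assignment, then centre update, repeated.

points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.7]   # two well-separated clusters
centres = [0.0, 10.0]                     # initial centre guesses

for _ in range(10):
    clusters = [[] for _ in centres]
    for p in points:
        # assignment step: the nearest centre wins the pattern
        nearest = min(range(len(centres)), key=lambda c: abs(p - centres[c]))
        clusters[nearest].append(p)
    # update step: each centre moves to the mean of its cluster
    centres = [sum(c) / len(c) for c in clusters]

print([round(c, 2) for c in centres])     # ≈ [1.0, 8.0]
```

The loop stops changing once no pattern switches cluster, which is also a simple way to measure clustering similarity between successive iterations (Q5).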

UNIT-V
PART-B
1. Explain the architecture and components of Competitive Learning
Neural Network with neat diagram.
2. Explain the clustering method Learning Vector Quantization.
3. Draw the architecture of SOM and discuss in detail.
4. Explain the SOM algorithm.
5. Draw the architecture of ART1 network and discuss in detail.
6. Explain ART1 algorithm.
7. Draw the architecture of RBF network and discuss in detail.
8. Explain Time Delay Neural Network.
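
The core of the SOM algorithm asked for in Q4 is competition followed by cooperation: find the best-matching unit (BMU) for each input, then pull the BMU and its map neighbours toward that input. The sketch below is illustrative only; the 1-D map of 5 units, the fixed learning rate, the radius-1 neighbourhood, and the training data are all assumptions.

```python
# Minimal 1-D SOM: 5 units with scalar weights, trained on scalar inputs.

weights = [0.3, 0.4, 0.5, 0.6, 0.7]     # ordered initial 1-D map (assumption)

def train(samples, epochs=50, lr=0.3, radius=1):
    for _ in range(epochs):
        for x in samples:
            # competition: the unit whose weight is closest to the input wins
            bmu = min(range(len(weights)), key=lambda j: abs(x - weights[j]))
            # cooperation: pull the winner and its map neighbours toward x
            lo, hi = max(0, bmu - radius), min(len(weights) - 1, bmu + radius)
            for j in range(lo, hi + 1):
                weights[j] += lr * (x - weights[j])

train([0.0, 0.25, 0.5, 0.75, 1.0])
print([round(w, 2) for w in weights])    # units spread out across [0, 1]
```

After training, neighbouring units hold nearby weight values, which is the topology-preserving property that distinguishes the SOM from plain competitive learning; a full SOM would also shrink the learning rate and neighbourhood radius over time.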






