A note on learning automata based schemes for adaptation of BP parameters
Meybodi, M.R. (Dept. of Comput. Eng., Amirkabir Univ. of Technol., Tehran, Iran); Beigy, H. Source: Intelligent Data Engineering and Automated Learning - IDEAL 2000. Data Mining, Financial Engineering, and Intelligent Agents. Second International Conference. Proceedings (Lecture Notes in Computer Science Vol.1983), 2000, 145-9
ISBN: 3 540 41450 9
Conference: Intelligent Data Engineering and Automated Learning - IDEAL 2000. Data Mining, Financial Engineering, and Intelligent Agents. Second International Conference. Proceedings, 13-15 Dec. 2000 , Hong Kong, China
Publisher: Springer-Verlag, Berlin, Germany


Abstract: Backpropagation is often used as the learning algorithm in layered-structure neural networks because of its efficiency. However, backpropagation is not free from problems. The learning process sometimes gets trapped in a local minimum, and the network cannot produce the required response. In addition, the algorithm has a number of parameters, such as the learning rate (μ), the momentum factor (α), and the steepness parameter (λ), whose values are not known in advance and must be determined by trial and error. The appropriate selection of these parameters has a large effect on the convergence of the algorithm. Many techniques that adaptively adjust these parameters have been developed to increase the speed of convergence. One class of algorithms uses learning automata (LA) to adjust the parameters μ, α, and λ based on observation of the random response of the neural network. One important aspect of learning automata based schemes is their remarkable effectiveness in increasing the speed of convergence. Another important aspect, which has not been pointed out earlier, is their ability to escape from local minima with high probability during the training period. We study the ability of LA based schemes to escape from local minima when standard BP fails to find the global minimum. It is demonstrated through simulation that LA based schemes have a higher ability to escape from local minima than other schemes such as SAB, Super SAB, Fuzzy BP, the ASBP method, and the VLR method.
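The arrangement the abstract describes can be made concrete with a short sketch. Below is a minimal Python/NumPy illustration (an assumption of this note, not the authors' code) in which a linear reward-inaction (L_RI) automaton selects the learning rate μ for a small backpropagation network that also uses a momentum factor α and a sigmoid steepness λ; the automaton is rewarded whenever the training error decreases. The 2-2-1 XOR network, the discrete action set of candidate learning rates, and the error-decrease reward rule are illustrative choices only; the paper's schemes also adapt α and λ.

import numpy as np

rng = np.random.default_rng(0)

# XOR as a stand-in training task (an illustrative choice).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

lam = 1.0                                   # steepness parameter (lambda)
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-lam * x))

# A 2-2-1 network, plus a momentum term for every weight array.
W1 = rng.normal(0., .5, (2, 2)); b1 = np.zeros(2)
W2 = rng.normal(0., .5, (2, 1)); b2 = np.zeros(1)
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)
alpha = 0.9                                 # momentum factor (alpha)

# L_RI automaton: actions are candidate learning rates; the probability
# vector p is updated only on a favorable response ("reward-inaction").
actions = np.array([0.05, 0.1, 0.3, 0.7])   # illustrative action set
p = np.full(len(actions), 1.0 / len(actions))
a = 0.1                                     # reward step size

prev_err = np.inf
for epoch in range(2000):
    i = rng.choice(len(actions), p=p)       # automaton picks an action
    mu = actions[i]                         # learning rate (mu) this epoch

    # Forward pass.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    err = 0.5 * np.sum((Y - T) ** 2)

    # Backward pass; s'(x) = lam * s(x) * (1 - s(x)) for this sigmoid.
    dY = (Y - T) * lam * Y * (1. - Y)
    dH = (dY @ W2.T) * lam * H * (1. - H)
    gW2, gb2 = H.T @ dY, dY.sum(0)
    gW1, gb1 = X.T @ dH, dH.sum(0)

    # Gradient step with momentum.
    vW2 = -mu * gW2 + alpha * vW2; W2 += vW2
    vb2 = -mu * gb2 + alpha * vb2; b2 += vb2
    vW1 = -mu * gW1 + alpha * vW1; W1 += vW1
    vb1 = -mu * gb1 + alpha * vb1; b1 += vb1

    # Environment response: reward the chosen action if the error fell.
    if err < prev_err:
        p = p + a * ((np.arange(len(actions)) == i) - p)
    prev_err = err                          # L_RI: no update on penalty

print("final error:", err, "action probabilities:", p)

Because L_RI updates its action probabilities only on reward, an action that repeatedly fails to reduce the error is never explicitly penalized; the occasional selection of a large learning rate from the action set is one plausible mechanism for the escape from local minima that the abstract reports.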


Inspec controlled terms:
backpropagation  -  convergence  -  feedforward neural nets  -  learning automata  -  multilayer perceptrons

Classification Codes:
C1230L Learning in AI  -  C1230D Neural nets  -  C5290 Neural computing techniques

Database: Inspec

