Details

A resource-efficient localized recurrent neural network architecture and learning algorithm

by Budik, Daniel Borisovich

Abstract (Summary)
Recurrent neural networks (RNNs) are widely acknowledged as an effective tool for a wide range of applications that store and process temporal sequences. The ability of RNNs to capture complex, nonlinear system dynamics has served as a driving motivation for their study. RNNs have the potential to be effectively used in modeling, system identification, and adaptive control applications, to name a few, where other techniques fall short. Most of the proposed RNN learning algorithms rely on the calculation of error gradients with respect to the network weights. What distinguishes recurrent neural networks from static, or feedforward, networks is the fact that the gradients are time-dependent, or dynamic. This implies that the current error gradient depends not only on the current input, output, and targets, but also on their possibly infinite past. How to effectively train RNNs remains a challenging and active research topic. This thesis introduces TRTRL, an efficient, low-complexity online learning algorithm for recurrent neural networks. The approach is based on the real-time recurrent learning (RTRL) algorithm, whereby the sensitivity set of each neuron is reduced to the weights associated with either its input or its output links. As a consequence, storage requirements are reduced from O(N³) to O(N²) and the computational complexity is reduced from O(N⁴) to O(N²). Despite the radical reduction in resource requirements, simulation results show that the overall performance degradation of the truncated real-time recurrent learning (TRTRL) algorithm is minor. Moreover, the scheme lends itself to efficient hardware realization by virtue of the localized property inherent to the approach. The TRTRL algorithm is first implemented and evaluated on a general-purpose CPU. Next, the framework is extended to a hardware implementation that scales to high network densities without compromising computation speed and overall performance.
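The truncation at the heart of TRTRL can be illustrated with a short sketch. The Python/NumPy snippet below is an assumption-laden illustration rather than the thesis's implementation: it performs one step of standard RTRL and then applies the sensitivity truncation the abstract describes, keeping for each neuron only the entries tied to its own input and output links. The function names (`rtrl_step`, `truncate_sensitivities`), the tanh activation, and the toy training loop are hypothetical choices, not details from the thesis.

```python
import numpy as np

N = 8                                    # network size (illustrative)
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(N, N))   # recurrent weights w[i, j]
y = np.zeros(N)                          # neuron outputs

def f(x):
    return np.tanh(x)

def df(x):
    return 1.0 - np.tanh(x) ** 2

# Full RTRL carries p[k, i, j] = dy_k/dw_ij for every (k, i, j):
# O(N^3) storage and O(N^4) work per time step.
p = np.zeros((N, N, N))

def rtrl_step(y, p, target):
    s = W @ y
    # Sensitivity recursion:
    # p'[k,i,j] = f'(s_k) * (sum_l w[k,l] * p[l,i,j] + delta_{ki} * y_j)
    p_new = df(s)[:, None, None] * np.einsum('kl,lij->kij', W, p)
    p_new[np.arange(N), np.arange(N), :] += df(s)[:, None] * y[None, :]
    y_new = f(s)
    err = target - y_new
    # dE/dw_ij = -sum_k err_k * p[k, i, j]
    grad = -np.einsum('k,kij->ij', err, p_new)
    return y_new, p_new, grad

def truncate_sensitivities(p):
    """Keep, for each neuron k, only the sensitivities of its own
    input links (i == k) and output links (j == k), as described
    in the abstract. A real implementation would store just these
    2N entries per neuron (O(N^2) total); the dense boolean mask
    here is used purely for readability."""
    mask = np.zeros(p.shape, dtype=bool)
    k = np.arange(N)
    mask[k, k, :] = True    # weights into neuron k:   w[k, j]
    mask[k, :, k] = True    # weights out of neuron k: w[i, k]
    return np.where(mask, p, 0.0)

# Toy training loop: truncate after every sensitivity update.
target = rng.normal(size=N)
for _ in range(100):
    y, p, grad = rtrl_step(y, p, target)
    p = truncate_sensitivities(p)
    W -= 0.01 * grad
```

Under this reading, the complexity reduction follows directly: each neuron retains at most 2N sensitivity entries, so N neurons need O(N²) storage in total, and the recursion's sum over contributing neurons shrinks accordingly, giving the O(N²) per-step cost quoted above.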
Bibliographical Information:

School: The University of Tennessee at Chattanooga

School Location: USA - Tennessee

Source Type: Master's Thesis
