Biologically Inspired Learning System
Learning systems used on robots require either a priori knowledge in the form of models, rules of thumb, or databases, or require the robot to physically execute multitudes of trial solutions. The first requirement limits the robot's ability to operate in unstructured, changing environments, and the second limits the robot's service life and resources. In this research, a generalized approach to learning was developed through a series of algorithms that construct behaviors able to cope with unstructured environments by adapting both internal parameters and system structure under a goal-based supervisory mechanism. Four main learning algorithms have been developed, along with a goal-directed random exploration routine. These algorithms all use the concept of learning from a recent memory in order to save the robot/agent from having to exhaustively execute all trial solutions. The first algorithm is a reactive online learning algorithm that uses supervised learning to find the sensor/action combinations that promote realization of a preprogrammed goal; it produces a feed-forward neural network controller that is used to control the robot. The second algorithm is similar to the first in that it uses a supervised learning strategy, but it produces a neural network that considers past values, thus providing a non-reactive solution. The third algorithm is a departure from the first two in that it uses an unsupervised learning technique to learn the best action for each situation the robot encounters. The last algorithm builds a graph of the situations encountered by the agent/robot in order to learn to associate the best actions with sensor inputs; it uses an unsupervised learning approach based on shortest paths to a goal situation in the graph to generate a non-reactive feed-forward neural network.
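The "learn from recent memory" idea behind the first algorithm can be illustrated with a minimal sketch: exploratory sensor/action pairs that moved the agent toward a goal are buffered, and a small feed-forward network is then fit to that memory by supervised regression. The toy point-agent task, network size, and training details below are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
goal = np.array([1.0, 1.0])

def goal_progress(state, action, goal):
    """Did this action move the agent closer to the goal?"""
    return np.linalg.norm(state + action - goal) < np.linalg.norm(state - goal)

# Recent memory: keep only goal-promoting (sensor, action) pairs instead of
# exhaustively executing every trial solution on the robot.
memory = []
for _ in range(500):
    state = rng.uniform(-1, 1, 2)
    action = rng.uniform(-0.2, 0.2, 2)      # goal-directed random exploration
    if goal_progress(state, action, goal):
        memory.append((state, action))

X = np.array([s for s, _ in memory])
Y = np.array([a for _, a in memory])

# One-hidden-layer feed-forward controller, trained on the memory (supervised).
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 2)); b2 = np.zeros(2)

def mse():
    H = np.tanh(X @ W1 + b1)
    return np.mean((H @ W2 + b2 - Y) ** 2)

initial_loss = mse()
lr = 0.05
for _ in range(2000):                        # full-batch gradient descent
    H = np.tanh(X @ W1 + b1)
    err = (H @ W2 + b2) - Y
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
final_loss = mse()

def controller(state):
    """The learned reactive sensor-to-action mapping."""
    return np.tanh(state @ W1 + b1) @ W2 + b2

# Fraction of fresh states for which the learned action still moves toward
# the goal; the controller generalizes beyond the buffered memory.
test_states = rng.uniform(-1, 1, (200, 2))
acts = np.tanh(test_states @ W1 + b1) @ W2 + b2
success_rate = np.mean(
    np.linalg.norm(test_states + acts - goal, axis=1)
    < np.linalg.norm(test_states - goal, axis=1)
)
```

The buffer stands in for the recent memory: the robot explores briefly, and the network distills the goal-promoting experience into a reusable controller.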
Test results were good: the first and third algorithms were tested on a formation-maneuvering task both in simulation and onboard mobile robots, while the second and fourth were tested in simulation.
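The graph-of-situations idea in the fourth algorithm can likewise be sketched: nodes are situations the agent has encountered, edges record which action led from one situation to another, and the best action for a situation is the first edge on its shortest path to the goal situation. The toy grid world, action names, and breadth-first search below are illustrative assumptions, not the thesis's actual representation.

```python
from collections import deque

def build_graph():
    """A toy 3x3 grid: situations are (x, y); actions move the agent."""
    actions = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}
    graph = {}
    for x in range(3):
        for y in range(3):
            edges = {}
            for name, (dx, dy) in actions.items():
                nx_, ny_ = x + dx, y + dy
                if 0 <= nx_ < 3 and 0 <= ny_ < 3:
                    edges[name] = (nx_, ny_)
            graph[(x, y)] = edges
    return graph

def best_actions(graph, goal):
    """BFS outward from the goal situation; each situation is then labeled
    with the action whose successor lies one step closer to the goal."""
    dist = {goal: 0}
    frontier = deque([goal])
    # Expanding predecessors works here because every action is reversible.
    while frontier:
        node = frontier.popleft()
        for pred, edges in graph.items():
            if pred not in dist and node in edges.values():
                dist[pred] = dist[node] + 1
                frontier.append(pred)
    policy = {}
    for node, edges in graph.items():
        if node == goal:
            continue
        # First action on a shortest path: successor with minimal distance.
        policy[node] = min(edges, key=lambda a: dist[edges[a]])
    return policy

graph = build_graph()
policy = best_actions(graph, goal=(2, 2))
```

The situation-to-best-action labels produced this way could then serve as supervised targets for training a feed-forward network, which is how the unsupervised graph analysis connects to the neural controller mentioned in the abstract.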
Advisor: John Tyler; Li Li; Jianhua Chen; S. S. Iyengar; Jim Belanger; Brian Bourgeois
School: Louisiana State University in Shreveport
School Location: USA - Louisiana
Source Type: Master's Thesis
Date of Publication: 10/25/2005