The genetic algorithm (GA) has been applied to game playing machines, pattern recognition, optimization of pipeline systems, the traveling salesman problem, and numerous other optimization problems, with varying degrees of success. This paper demonstrates how to apply the genetic algorithm to nonlinear recursive system identification problems on which gradient descent methods fail miserably. A previous attempt to use the algorithm for identification purposes was stymied by slow convergence and biased estimates. A simple modification to the algorithm is proposed that increases the speed of convergence and appears to solve the bias problem. Analysis, in terms of the convergence of a particular martingale difference sequence, indicates why the modifications are successful.
Application of the genetic algorithm requires that the problem be coded in parameter strings (called genes), and that there be an evaluation criterion which can determine a fitness value associated with each gene. As usually implemented, there are two phases to the genetic algorithm. During the initial phase, characterized by a highly diverse genetic pool and dominated by matings of highly diverse "parents", the parameter estimates move rapidly towards the desired parametrization. In the second phase, when the gene pool is highly uniform and the incorporation of "new" genetic material must rely on the mutation process, the motion of the parameters is painfully slow. Our modification essentially monitors the diversity of the gene pool, and forces a mass "extinction and immigration" whenever the diversity has fallen below a preset value. Several examples illustrate the method, including linear and nonlinear, FIR and IIR identification. The method is also applied to identify the parameters in layered feedforward and feedback (recurrent) neural network structures.
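The "extinction and immigration" idea described above can be sketched in a few lines. The following is a minimal illustration, not the paper's exact operators: it identifies the two coefficients of a known FIR system with a real-coded GA, measures pool diversity as the average per-parameter spread, and, whenever diversity falls below an assumed threshold `DIV_MIN`, keeps only the best gene and replaces the rest of the pool with random immigrants. The system coefficients, population size, threshold, and mutation rate are all hypothetical example values.

```python
import random

random.seed(0)

# Hypothetical example system: y[n] = 0.5*u[n] + 0.3*u[n-1]
TRUE = [0.5, 0.3]
U = [random.uniform(-1, 1) for _ in range(64)]          # excitation signal
Y = [TRUE[0] * U[n] + TRUE[1] * U[n - 1] for n in range(1, len(U))]

def error(gene):
    """Sum-squared output error of the candidate FIR model (lower = fitter)."""
    return sum((gene[0] * U[n] + gene[1] * U[n - 1] - Y[n - 1]) ** 2
               for n in range(1, len(U)))

def diversity(pool):
    """Average per-parameter spread of the gene pool."""
    k = len(pool[0])
    return sum(max(g[i] for g in pool) - min(g[i] for g in pool)
               for i in range(k)) / k

def rand_gene():
    return [random.uniform(-1, 1) for _ in range(2)]

POP, GENS, DIV_MIN = 20, 200, 0.05     # DIV_MIN is an assumed threshold
pool = [rand_gene() for _ in range(POP)]
init_err = min(error(g) for g in pool)

for gen in range(GENS):
    pool.sort(key=error)
    if diversity(pool) < DIV_MIN:
        # Forced mass "extinction and immigration": keep the best gene,
        # replace the rest with fresh random immigrants.
        pool = pool[:1] + [rand_gene() for _ in range(POP - 1)]
        continue
    parents = pool[:POP // 2]          # selection: fitter half survives
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        child = [(x + y) / 2 for x, y in zip(a, b)]   # blend crossover
        if random.random() < 0.1:                     # occasional mutation
            i = random.randrange(len(child))
            child[i] += random.gauss(0, 0.1)
        children.append(child)
    pool = parents + children

best = min(pool, key=error)
```

Because the best gene always survives both selection and the extinction step, the best error is monotone non-increasing; the immigration step simply restores the diverse "first phase" behavior whenever the pool becomes too uniform for crossover to make progress.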
This method of nonlinear system identification first appeared in the IEEE Transactions on Signal Processing in April 1994.