# Genetic Algorithms

Evolutionary algorithms have been around since the early sixties. They apply the rules of nature: evolution through selection of the fittest individuals, where the individuals represent solutions to a mathematical problem. Genetic algorithms are so far generally the best and most robust kind of evolutionary algorithm. They were invented by John Holland and presented in his book "Adaptation in Natural and Artificial Systems" from 1975.

## What makes a GA special?

What are the trademarks of a genetic algorithm (GA)? A GA is heuristic, which means it estimates a solution. You won't know whether you get the exact solution, but that may be a minor concern. In fact, most real-life problems are like that: you estimate a solution rather than calculating it exactly. For most problems there is no formula for solving the problem, because it is too complex; or if there is, computing the exact solution simply takes too long. An example is space optimization: it is very difficult to find the best way to pack objects of varying size into a room so that they take up as little space as possible. The most feasible approach is then to use a heuristic method.

Genetic algorithms differ from other heuristic methods in several ways. The most important difference is that a GA works on a population of possible solutions, while other heuristic methods iterate on a single solution. Another difference is that GAs are probabilistic (stochastic), not deterministic.

Each individual in the GA population represents a possible solution to the problem. The suggested solution is coded into the "genes" of the individual. One individual might have these genes: "1100101011", another these: "0101110001" (just examples). The values (0 or 1) and their positions in the "gene string" tell the genetic algorithm what solution the individual represents. Here comes the clever part: you apply the rules of evolution to the individuals.
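The bit-string encoding above can be sketched in a few lines. This is a minimal illustration, not code from the article: the names `random_individual` and `decode` are mine, and decoding the gene string as a binary integer is just one of many possible problem-specific interpretations.

```python
import random

def random_individual(length=10):
    # An individual is a string of 0/1 "genes", e.g. "1100101011".
    return "".join(random.choice("01") for _ in range(length))

def decode(genes):
    # One possible decoding: read the gene string as an unsigned binary number.
    return int(genes, 2)

# A small population of candidate solutions.
population = [random_individual() for _ in range(6)]
for ind in population:
    print(ind, "->", decode(ind))
```

The article's example individual "1100101011" would decode to 811 under this scheme; a real GA would map the genes onto whatever the problem's parameters are.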
You find the individuals you think are the best suggestions for your problem and combine them into new individuals. Applied repeatedly, this method will hopefully evolve the population towards good solutions. Specifically, the elements of a GA are: selection (according to some measure of fitness), cross-over (a method of reproduction, "mating" individuals into new individuals), and mutation (adding a bit of random noise to the offspring, changing their "genes"). As you can see, Darwin's principles have been a major inspiration to GAs.

## What makes a GA "genetic"?

The algorithm must have these elements to be "genetic":

- a population of individuals, each carrying a candidate solution in its "genes"
- selection of parents according to some measure of fitness
- cross-over, combining the parents' genes into new individuals
- mutation, randomly altering some of the offspring's genes
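The cross-over and mutation elements can be sketched on bit-string individuals. The particular operators below (a single cross-over point, a per-gene flip probability) are illustrative choices of mine, not the only way, as the article itself stresses later.

```python
import random

def crossover(parent_a, parent_b):
    # Single-point cross-over: cut both parents at a random position
    # and swap the tails, producing two new individuals.
    point = random.randint(1, len(parent_a) - 1)
    return (parent_a[:point] + parent_b[point:],
            parent_b[:point] + parent_a[point:])

def mutate(genes, rate=0.05):
    # Bit-flip mutation: each gene flips with a small probability.
    return "".join(
        ("1" if g == "0" else "0") if random.random() < rate else g
        for g in genes
    )

child_a, child_b = crossover("1111111111", "0000000000")
print(child_a, child_b)
print(mutate(child_a))
```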
This flowchart illustrates the basic steps in a GA:

*(flowchart image not shown)*

## The genetic operations of a GA

Here's how the genetic operations of the GA work:

- *Selection*: the fittest individuals are chosen as parents for the next generation.
- *Cross-over*: the parents' gene strings are cut and recombined into new individuals.
- *Mutation*: random changes are applied to some of the offspring's genes.
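The basic loop of the flowchart - initialize a population, evaluate fitness, select parents, cross over, mutate, repeat - can be sketched end to end. All the concrete choices here are assumptions of mine for illustration: fitness counts 1-bits (the "one-max" toy problem), parents are picked by a two-way tournament, and the loop runs a fixed number of generations.

```python
import random

def evolve(pop_size=20, length=16, generations=40, seed=1):
    rng = random.Random(seed)
    fitness = lambda ind: ind.count("1")   # toy fitness: number of 1-genes

    # Initialize a random population of bit-string individuals.
    pop = ["".join(rng.choice("01") for _ in range(length))
           for _ in range(pop_size)]

    for _ in range(generations):
        def pick():
            # Tournament selection: the fitter of two random individuals.
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b

        next_pop = []
        while len(next_pop) < pop_size:
            pa, pb = pick(), pick()
            cut = rng.randint(1, length - 1)          # one cross-over point
            child = pa[:cut] + pb[cut:]
            child = "".join(                          # bit-flip mutation
                ("1" if g == "0" else "0") if rng.random() < 0.02 else g
                for g in child)
            next_pop.append(child)
        pop = next_pop

    return max(pop, key=fitness)

print(evolve())
```

After a few dozen generations the best individual should be close to the all-ones string, the optimum of this toy problem.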
Note that the methods mentioned are just one way of making a GA work. There are many different methods of selection (how many parents to select? kill the parents after cross-over? how to measure fitness?), many methods of cross-over (the number of cross-over points, how to reassemble the cross-over parts), and many methods of mutation (random noise, swapping values, no mutation at all, whom to mutate). The choice of methods depends on the nature of the problem and your personal preferences.

## Advantages and disadvantages of GAs

A GA has a number of advantages. It can quickly scan a vast solution set. Bad proposals do not affect the end solution negatively, as they are simply discarded. The inductive nature of the GA means that it doesn't have to know any rules of the problem - it works by its own internal rules. This makes it very useful for complex or loosely defined problems.

GAs have drawbacks too, of course. The great advantage of GAs - that they find a solution through evolution - is also their biggest disadvantage. Evolution is inductive; in nature, life does not evolve towards a good solution - it evolves away from bad circumstances. This can cause a species to evolve into an evolutionary dead end. Likewise, GAs risk finding a suboptimal solution.

Consider this example: a GA must find the highest point in a landscape. The algorithm will favor solutions that lie on a peak (a hill or whatever). As the individuals gradually begin to propose solutions in the same vicinity (somewhere on the peak), they begin to look alike. In the end you may have individuals that are almost identical, the best of which suggest the top of the peak as the solution. But what if there is another, higher peak at the other end of the map? It is too hard for the individuals to venture away from their current peak. The ones that do will be eliminated, because their solutions are worse than the ones we already have.
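The two-peak trap can be made concrete with a toy landscape of my own construction (not from the article): a low peak at x = 20 and a higher one at x = 80. A searcher that only accepts nearby improvements - which is what a converged GA population effectively becomes - stops at whichever summit it started near.

```python
def height(x):
    # Two triangular peaks: a local top of 50 at x=20, a global top of 100 at x=80.
    return max(50 - abs(x - 20), 100 - abs(x - 80), 0)

def local_climb(x, step=1):
    # Greedy neighborhood search: keep moving while a neighbor is strictly higher.
    while True:
        best = max((x - step, x, x + step), key=height)
        if best == x:
            return x
        x = best

print(local_climb(10))   # stops at the lower peak
print(local_climb(60))   # stops at the higher peak
```

Starting at x = 10, the search climbs to the local peak at 20 (height 50) and stays there, even though the global optimum at 80 (height 100) exists - exactly the dead end described above.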
An individual might get "lucky", but that would mean its "genes" are very different from the rest of the population, so this is unlikely. In other words, the algorithm produces a suboptimal solution - and we may not even know it.

For more on GAs, see the page on classifier systems. For an example program using a GA, go to my Delphi page and download the Travelling Salesman program (for Windows).