11.0 Advanced Concepts in Genetic Algorithms
Having covered the fundamental mechanics of Genetic Algorithms, we now move beyond the basics to explore more advanced theoretical and practical considerations. These topics are crucial for tackling more complex, real-world problems and for achieving a deeper academic understanding of how and why GAs work.
11.1 Models of Lifetime Adaptation
In many advanced applications, GAs are hybridized with local search methods to create what are known as Memetic Algorithms. These hybrids blend the global exploration of a GA with the fine-tuning exploitation of a local search. The key question is how to handle the improvements found by the local search during an individual's "lifetime." Two models from evolutionary biology address this:
- The Lamarckian Model: Named after biologist Jean-Baptiste Lamarck, this model posits that traits acquired during an individual’s lifetime can be passed on to its offspring. In a GA context, this means that if a local search operator finds an improvement to a chromosome, the chromosome’s genotype is directly modified to reflect this improvement before it is passed to the next generation.
- The Baldwinian Model: Named after psychologist James Mark Baldwin, this model offers an approach intermediate between Lamarckian inheritance and strict Darwinian evolution. It suggests that the tendency or ability to learn is what gets encoded, not the learned traits themselves. When a local search improves an individual, its fitness value is updated to reflect the improvement, but its genotype remains unchanged. This gives the “good learner” a higher chance of being selected for reproduction, thereby propagating the underlying genetic traits that led to successful learning.
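The difference between the two models can be sketched in a few lines. The sketch below uses a toy OneMax fitness and a simple bit-flip hill climber as the "lifetime learning" step; the function names (`hill_climb`, `evaluate`) are illustrative, not a standard API:

```python
import random

def fitness(genome):
    """Toy fitness: count of 1-bits (OneMax)."""
    return sum(genome)

def hill_climb(genome, steps=5):
    """Simple local search: flip one bit at a time, keeping improvements."""
    best = list(genome)
    for _ in range(steps):
        candidate = list(best)
        i = random.randrange(len(candidate))
        candidate[i] ^= 1
        if fitness(candidate) > fitness(best):
            best = candidate
    return best

def evaluate(genome, model="lamarckian"):
    """Return (genotype, fitness) after lifetime adaptation."""
    improved = hill_climb(genome)
    if model == "lamarckian":
        # Lamarckian: the acquired improvement is written back
        # into the genotype that will be passed on.
        return improved, fitness(improved)
    # Baldwinian: fitness reflects what was learned,
    # but the inherited genotype is left untouched.
    return list(genome), fitness(improved)
```

Under the Lamarckian rule the returned genotype and fitness always agree; under the Baldwinian rule only the fitness is boosted, so selection favors good learners while crossover still operates on the original genes.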
11.2 Handling Constrained Optimization Problems
A constrained optimization problem is one where solutions must satisfy certain constraints to be considered feasible. Genetic operators like crossover and mutation can easily produce offspring that violate these constraints. Several methods are used to handle this challenge:
- Penalty functions: Reduce the fitness of infeasible solutions, often in proportion to the magnitude of the constraint violation.
- Repair functions: Algorithmically modify an infeasible solution to make it feasible.
- Disallowing infeasible solutions: Prevent any solution that violates constraints from ever entering the population.
- Special representations/decoders: Design the solution encoding in such a way that it is impossible for an operator to generate an infeasible solution.
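The first two strategies can be illustrated on a one-dimensional toy problem. This is a minimal sketch, assuming an illustrative objective (maximize −(x − 3)² subject to x ≥ 1) and an illustrative penalty coefficient; neither is taken from the text above:

```python
def penalized_fitness(x, penalty_coeff=100.0):
    """Penalty-function approach: maximize -(x - 3)^2 subject to x >= 1.
    Infeasible solutions keep their raw fitness minus a penalty
    proportional to the magnitude of the constraint violation."""
    raw = -(x - 3) ** 2
    violation = max(0.0, 1.0 - x)  # how far x falls below the bound
    return raw - penalty_coeff * violation

def repair(x, lower=1.0):
    """Repair-function approach: project an infeasible solution
    back onto the feasible region instead of penalizing it."""
    return max(x, lower)
```

A penalty keeps infeasible individuals in the population (letting the search pass through infeasible regions) but degrades them in selection, whereas repair guarantees feasibility at the cost of possibly biasing the search toward the constraint boundary.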
11.3 Theoretical Foundations
- Holland’s Schema Theorem: This foundational theorem provides a mathematical explanation for how GAs work. A Schema is a template representing a subset of chromosomes, defined over the alphabet {0, 1, *}, where * is a “don’t care” symbol. A schema’s Order is the number of fixed positions (0s or 1s), and its Defining Length is the distance between the first and last fixed positions. The theorem states that schemas with above-average fitness, short defining length, and low order are propagated at an exponentially increasing rate in subsequent generations. A short defining length is crucial because it lowers the probability that a single-point crossover will fall between the fixed bits and disrupt the schema.
- The Building Block Hypothesis: Arising from the Schema Theorem, this hypothesis suggests that a GA works by identifying these high-quality, short schemas—called building blocks—and combining them to form progressively better solutions.
- The No Free Lunch (NFL) Theorem: This important theorem states that, when averaged over the space of all possible problems, no single optimization algorithm performs better than any other. The practical implication is that a GA that is highly tailored and effective for one specific problem will, on average, perform poorly on other, different problems. There is no “one size fits all” master algorithm.
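The schema quantities defined above are straightforward to compute. The sketch below implements order, defining length, schema matching over the alphabet {0, 1, *}, and the Schema Theorem's bound d(H)/(l − 1) on the probability that single-point crossover disrupts a schema of defining length d(H) in a chromosome of length l:

```python
def schema_order(schema):
    """Order o(H): the number of fixed (non-'*') positions."""
    return sum(1 for c in schema if c != '*')

def schema_defining_length(schema):
    """Defining length d(H): distance between the first
    and last fixed positions (0 if at most one fixed bit)."""
    fixed = [i for i, c in enumerate(schema) if c != '*']
    return fixed[-1] - fixed[0] if fixed else 0

def matches(chromosome, schema):
    """True if the bit-string chromosome is an instance of the schema."""
    return all(s == '*' or s == c for s, c in zip(schema, chromosome))

def disruption_bound(schema, length):
    """Upper bound on single-point crossover breaking the schema:
    the cut must fall between its first and last fixed bits."""
    return schema_defining_length(schema) / (length - 1)
```

For the schema `1**0*`, the order is 2, the defining length is 3, and a random single-point cut in a length-5 chromosome falls inside the schema with probability at most 3/4 — illustrating why short defining lengths make a schema more likely to survive crossover.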
These advanced concepts provide a deeper framework for understanding GA behavior. We will now conclude by exploring how these theoretical principles translate into effective practical implementation and real-world applications.