Most real-world optimization problems have constraints, which limit the feasible portion of the search space. Constraints are of two types: hard and soft. Hard constraints must be satisfied at any cost, irrespective of the objective function. Soft constraints are more flexible: a minor violation may be acceptable if it yields a significant gain in fitness, but any violation incurs a certain level of penalty.

Different approaches are used to handle constraints in Genetic Algorithms, some of which are explained below:

Remove infeasible solutions: In this approach, infeasible solutions are simply thrown away, which guarantees that the population contains no infeasible solutions. Many seem to believe that penalties should be harsh so that Genetic Algorithms avoid the forbidden regions entirely. The foundation of Genetic Algorithm theory, however, suggests that Genetic Algorithms optimize by combining partial information from across the population. Discarding infeasible solutions can therefore lose valuable information, since such solutions may still contain fit schemata. Another disadvantage is that the algorithm spends much of its time evaluating and rejecting infeasible solutions, especially in highly constrained problems.
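A minimal sketch of the rejection approach, assuming a hypothetical two-variable problem whose hard constraint is that the components sum to at most 10 (the constraint and the sampling bounds are illustrative, not from the original):

```python
import random

def is_feasible(x):
    """Hypothetical hard constraint: components must sum to at most 10."""
    return sum(x) <= 10.0

def sample_feasible_population(size, seed=0):
    """Keep drawing random candidates, discarding any that violate the constraint."""
    rng = random.Random(seed)
    population = []
    while len(population) < size:
        candidate = [rng.uniform(0.0, 10.0), rng.uniform(0.0, 10.0)]
        if is_feasible(candidate):  # infeasible candidates are simply thrown away
            population.append(candidate)
    return population

population = sample_feasible_population(20)
```

Note that the loop may draw many candidates per accepted one; in a highly constrained problem most of the work goes into generating and rejecting infeasible solutions, which is exactly the cost described above.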

Repair infeasible solutions: In this approach, the algorithm converts a solution that violates hard constraints into one that satisfies them, replacing each infeasible solution with its repaired equivalent. The disadvantage is that repair strategies have to be problem specific.
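To illustrate why repair is problem specific, here is a sketch for a 0/1 knapsack instance (the instance and the repair rule are illustrative assumptions): when the selected items exceed the capacity, the least value-dense item is dropped until the weight constraint holds.

```python
def repair_knapsack(selected, weights, values, capacity):
    """Repair an infeasible 0/1 knapsack solution by dropping the selected item
    with the worst value-to-weight ratio until the capacity constraint holds.
    This rule is problem specific: it only makes sense for knapsack-like problems."""
    repaired = list(selected)

    def total_weight():
        return sum(w for i, w in enumerate(weights) if repaired[i])

    while total_weight() > capacity:
        # among the currently selected items, find the least efficient one
        worst = min((i for i in range(len(repaired)) if repaired[i]),
                    key=lambda i: values[i] / weights[i])
        repaired[worst] = 0
    return repaired

# All three items selected weigh 12, exceeding the capacity of 8;
# the repair drops the middle item, whose value/weight ratio is lowest.
repaired = repair_knapsack([1, 1, 1], weights=[5, 4, 3], values=[10, 4, 9], capacity=8)
```

A different problem (say, a permutation-encoded scheduling task) would need an entirely different repair operator, which is the maintenance burden this approach carries.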

Penalize infeasible solutions: Penalty function methods have been the most popular approach in Genetic Algorithms because of their simplicity and ease of implementation. In this approach, the algorithm transforms the constrained optimization problem into an unconstrained one. Depending on the problem and the importance of each constraint, the penalty function can be uniform, polynomial, exponential, or stepped. The advantage of this approach is that infeasible solutions remain in the population and can contribute information to the search. The most difficult aspect, however, is finding appropriate penalty parameters to guide the search toward the constrained optimum.
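A minimal sketch of a polynomial (quadratic) penalty, using an assumed toy problem: minimize x² subject to x ≥ 1. The penalized objective adds r times the squared constraint violation, and the value of r illustrates the parameter-tuning difficulty noted above:

```python
def violation(x):
    """Amount by which the constraint x >= 1 is violated (0 when feasible)."""
    return max(0.0, 1.0 - x)

def penalized_objective(x, r):
    """Quadratic (polynomial) penalty: turns 'minimize x^2 subject to x >= 1'
    into an unconstrained minimization of x^2 + r * violation(x)^2."""
    return x * x + r * violation(x) ** 2

# Evaluate over a coarse grid to see the effect of the penalty weight r.
candidates = [i / 100.0 for i in range(0, 201)]
best_small_r = min(candidates, key=lambda x: penalized_objective(x, 1.0))    # r too small
best_large_r = min(candidates, key=lambda x: penalized_objective(x, 100.0))  # r large
```

With r = 1 the unconstrained minimizer sits at x = 0.5, deep in the infeasible region; with r = 100 it moves to about x = 0.99, close to the constrained optimum at x = 1. Too small an r leaves the search in infeasible territory, while too large an r effectively reproduces the harsh rejection behavior, which is exactly the tuning dilemma described above.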