Towards AI Blog

A Fundamental Introduction to Genetic Algorithm - Part Two

Thursday, April 16, 2026 · Hossein Chegini
Last Updated on April 16, 2026 by Editorial Team

Author(s): Hossein Chegini

Originally published on Towards AI.

“A 100-Queen solution” (picture from ‘repo/images/solutions’)

Code Investigation

In the previous introduction, I provided a detailed explanation of the fundamental steps involved in training a Genetic Algorithm (GA). I discussed important concepts such as mutation, genes, chromosomes, and the genetic population, and presented a case study on solving the N-Queen problem with a GA. After that article was published, I created a repository and ported my previously written Matlab code to Python. In this article, my focus is on explaining the different components of the repository and how the main file is structured to set up the GA scenario and find the optimal solution. You can find the repo here.

The main file (n_queen_solver.py) serves as the entry point for setting up the GA model and initiating the training process. It prompts the user for the essential parameters that configure the GA model:

- Chromosome Size or Chessboard Size: the size of the chessboard, which is both the number of queens and the dimensions of the board.
- Population Size: the number of chromosomes in the population, i.e., the number of candidate solutions.
- Epochs: the number of iterations (generations) for which the GA model will be trained.

```python
import argparse

parser = argparse.ArgumentParser(description='Computation of the GA model for finding the n-queen problem.')
parser.add_argument('chromosome_size', type=int, help='The size of a chromosome')
parser.add_argument('population_size', type=int, help='The size of the population of the chromosomes')
parser.add_argument('epoches', type=int, help='The number of iterations to train the GA model')
args = parser.parse_args()
```

After obtaining the parameters, the next block of code is responsible for initializing the population.
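The article describes ‘init_population()’ but does not list its body. Assuming each chromosome is a random permutation of column indices — which matches the fitness function's diagonal-only checks, since a permutation already rules out row and column collisions — a minimal, hypothetical sketch could look like this:

```python
import numpy as np

def init_population(population_size, chromosome_size):
    # Hypothetical sketch; the actual repo implementation may differ.
    # Each chromosome is a random permutation of column indices, so no two
    # queens ever share a row or a column, and only diagonal collisions
    # remain for the fitness function to penalise.
    return np.array([np.random.permutation(chromosome_size)
                     for _ in range(population_size)])

population = init_population(10, 8)
print(population.shape)  # (10, 8): 10 chromosomes for an 8x8 board
```

Using a permutation encoding shrinks the search space from n^n arbitrary placements to n! candidates, which is one reason this encoding is popular for N-Queen GAs.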
The ‘init_population()’ method generates the population with the specified number of individuals, using the encoding explained in the previous article, and returns the initialized population to the main method of the file.

The fitness function and the calculation of the fitness score play a crucial role: they select the best parents and guide their reproduction so that the program progresses along the optimal path. For the simplicity and clarity of this implementation, I have chosen a straightforward fitness method. The following code block demonstrates the ‘fitness()’ method:

```python
def fitness(chrom, chromosome_size):
    q = 0
    # Count pairs of queens on the same diagonal (row - column constant).
    for i1 in range(chromosome_size):
        tmp = i1 - chrom[i1]
        for i2 in range(i1 + 1, chromosome_size):
            q = q + (tmp == (i2 - chrom[i2]))
    # Count pairs of queens on the same anti-diagonal (row + column constant).
    for i1 in range(chromosome_size):
        tmp = i1 + chrom[i1]
        for i2 in range(i1 + 1, chromosome_size):
            q = q + (tmp == (i2 + chrom[i2]))
    return 1 / (q + 0.001)
```

The ‘fitness()’ method receives an individual chromosome and its size as input parameters and returns its fitness score. It checks whether any two queens in the chromosome are crossing each other; whenever two queens are found to be crossing, the variable ‘q’ is incremented by one, so ‘q’ counts the number of queen collisions. The line ‘1 / (q + 0.001)’ turns this count into the fitness score: by taking the reciprocal of ‘q + 0.001’, chromosomes with fewer queen collisions (a lower value of ‘q’) receive higher scores, and the added ‘0.001’ avoids division by zero. The fitness score is used to assess the quality of each chromosome in the population; the higher the fitness score, the better the chromosome's performance. A fitness score of 1000 (i.e., q = 0, giving 1/0.001) signifies that a solution has been found, and the program can terminate without further operations.
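To sanity-check the scoring, the ‘fitness()’ method can be exercised on two 4-queen chromosomes: [1, 3, 0, 2], a known 4-queen solution, and [0, 1, 2, 3], which places every queen on the same diagonal. The function is repeated here so the snippet runs on its own:

```python
def fitness(chrom, chromosome_size):
    # Same scoring as in the article: count pairs of queens that share a
    # diagonal, then return the reciprocal so fewer collisions score higher.
    q = 0
    for i1 in range(chromosome_size):
        tmp = i1 - chrom[i1]
        for i2 in range(i1 + 1, chromosome_size):
            q = q + (tmp == (i2 - chrom[i2]))
    for i1 in range(chromosome_size):
        tmp = i1 + chrom[i1]
        for i2 in range(i1 + 1, chromosome_size):
            q = q + (tmp == (i2 + chrom[i2]))
    return 1 / (q + 0.001)

print(round(fitness([1, 3, 0, 2], 4)))     # 1000 -> q = 0, a valid solution
print(round(fitness([0, 1, 2, 3], 4), 3))  # 0.167 -> q = 6 diagonal collisions
```

For [0, 1, 2, 3] every pair of queens shares the row − column diagonal, giving the six collisions counted above and a score of 1/6.001.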
```python
def train_population(population, epoches, chromosome_size):
    num_best_parents = 2
    ft = []
    success_booelan = False
    population_size = len(population)
    for i1 in tqdm(range(epoches)):
        # Score every chromosome in the current population.
        fitness_score = []
        for i2 in range(population_size):
            fitness_score.append(fitness(population[i2], chromosome_size))
        ft.append(sum(fitness_score) / population_size)
        # Sort the population by fitness (ascending), then drop the score column.
        pop = np.concatenate((population, np.expand_dims(fitness_score, axis=1)), axis=1)
        sorted_indices = np.argsort(pop[:, -1])
        pop_sorted = pop[sorted_indices]
        pop = pop_sorted[:, :-1]
        # Mutate the best parents; their offspring replace the weakest chromosomes.
        best_parents = pop[-num_best_parents:]
        best_parents_muted = [mutation(best_parents[i], chromosome_size) for i in range(num_best_parents)]
        pop[0:num_best_parents] = best_parents_muted
        population = pop
        if ft[-1] == 1000:
            # Stop as soon as the optimum score is touched: in later epochs
            # the model might mutate past the optimum solution again.
            print('Woowww, the model could find the solution!!')
            print('Here is an example of a solution : ', population[-1])
            success_booelan = True
            break
    return population, ft, success_booelan
```

The genetic algorithm (GA) employs a selection process to choose parents with higher fitness scores for reproduction through mutation or crossover. The resulting offspring are added to the population, while chromosomes with lower fitness scores are excluded from the next training round. The line ‘if ft[-1] == 1000’ checks whether the latest fitness score indicates convergence to a solution. If the condition is true, the loop breaks and the method terminates, making way for the execution of the subsequent blocks.

HINT: Even after reaching the solution, i.e., the global optimum of the solution space, the program could continue executing operations. To ensure that the program terminates once the best fitness score is found, a break statement is used.
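The ‘mutation()’ helper called inside the training loop lives in the repo and is not listed in this article. A plausible, hypothetical version for a permutation encoding is a swap mutation, which exchanges two randomly chosen genes so the chromosome stays a valid permutation:

```python
import random

def mutation(chrom, chromosome_size):
    # Hypothetical swap mutation (the repo's version may differ): pick two
    # distinct positions and exchange their genes. Swapping, rather than
    # overwriting, preserves the permutation property of the chromosome,
    # so only diagonal conflicts can change.
    child = list(chrom)
    i, j = random.sample(range(chromosome_size), 2)
    child[i], child[j] = child[j], child[i]
    return child

print(sorted(mutation([1, 3, 0, 2], 4)))  # [0, 1, 2, 3] -> still a permutation
```

A swap always changes exactly two positions, so each mutated parent stays close to the original in the search space while still being able to escape local optima over many epochs.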
The break statement exits the loop and ensures that no further operations are performed.

In the figure above, we can observe the step-wise behavior of the learning curve: the program stays at a fitness score of 0 for the first 28 epochs and then suddenly jumps to a fitness score of 100. During a typical run, the solution is reached after about 70 epochs, with a brief period where the program gets stuck at a fitness score of 600. You can run the program to generate additional learning curves, or check the ‘repo/images/learning_curve’ directory for existing ones.

After training the model, two additional methods, ‘fitness_curve_plot’ and ‘n_queen_plot’, are called to display the learning curve and to visualize the positions of the queens on the chessboard.

Conclusion and Questions

This article presented the various blocks of Python code for solving the N-Queen problem. The code can be modified and enhanced in different ways to improve efficiency or add additional […]