Genetic Algorithm: Basic Principles and a Python Implementation
Published: 2018-10-21 | Category: numeric calculation
The Genetic Algorithm (GA) is a randomized search method first proposed by Prof. J. Holland in 1975. It borrows the rules of biological evolution (survival of the fittest and the mechanism of inheritance) and is now widely applied to combinatorial optimization, machine learning, signal processing, adaptive control, artificial life, and other fields. Following its basic principles, this post uses Python's NumPy module to implement a floating-point-encoded genetic algorithm for solving single-objective optimization problems.
Basic Principles
The classic genetic algorithm proceeds as follows (a minimal loop skeleton is sketched right after this list):
- Generate an initial population
- Apply selection, crossover, and mutation to produce the next generation
- Repeat the process
 
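For orientation only, here is a minimal Python sketch of that loop. The function names (`init_population`, `evaluate`, `select`, `cross`, `mutate`) are placeholders; the actual classes are developed later in this post.
def ga_loop(init_population, evaluate, select, cross, mutate, generations=50):
    '''Minimal GA skeleton: evaluate, select, cross, mutate, repeat (illustrative only).'''
    population = init_population()
    for _ in range(generations):
        fitness = evaluate(population)            # fitness of each individual
        population = select(population, fitness)  # survival of the fittest
        population = cross(population)            # recombine parent genes
        population = mutate(population)           # low-probability random changes
    return population                             # final generation; pick its best member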
Some key terms:
- Population: the group of individuals taking part in evolution, i.e. the search space of solutions
- Individual: a single member of the population, corresponding to one candidate solution
- Chromosome: the solution vector of the problem
- Gene: one component of the solution vector, or one position of the encoded solution vector
- Fitness: a measure of an individual's ability to survive, a function related to the objective function
- Genetic operator: an evolutionary operation on individuals, including selection, crossover, and mutation
- Selection: survival of the fittest based on fitness; individuals are picked from the population with a certain probability
- Crossover: gene recombination between two chromosomes
- Mutation: genes of a single chromosome change randomly with a low probability
The initial population provides a set of random solutions; selection keeps the search directed, while crossover and mutation widen the search space. Crossover passes on good genes from the parent individuals, and mutation may produce individuals that are better than the currently dominant genes. Mutation helps escape local optima, but it also increases the randomness of the search, so the algorithm may diverge.
A genetic algorithm therefore has to balance premature convergence against divergence, and accuracy against efficiency. If diversity drops too quickly and individuals become nearly identical, for example because selection overemphasizes the dominant genes, the algorithm may only converge to a local optimum. If diversity stays too high, for example because selection pressure is weak or the mutation probability is too large, the algorithm may struggle to converge at all, i.e. it becomes inefficient.
Program Design
For generality and to reduce coupling between modules, the population and the genetic operators are designed as separate classes, which are then passed as parameters into the GA class. Pseudocode:
# generate a population from an individual template
I = Individual()
P = Population(I, 50)
# genetic operators
S = Selection()
C = Crossover()
M = Mutation()
# feed the population and operators into the standard GA workflow
G = GA(P, S, C, M)
res = G.run()   
Since the basic GA workflow is fairly fixed, this structure has two advantages:
- To improve an operator strategy, derive a subclass from the corresponding operator base class without touching the rest of the code
- To apply the algorithm to a different problem, derive a subclass from the individual base class (to meet the encoding requirements)
 
1 Population
Only floating-point encoding is considered for now, so the encoding/decoding step is skipped: the whole chromosome is the solution vector, and each gene is one of its components.
The Individual class represents an individual. Its key attributes are:
- solution: the solution vector
- evaluation: the objective function value
- fitness: the fitness value
The Population class represents the population and generates a population of the given size from an individual template, where:
- individuals: list of all individuals, stored as a NumPy ndarray
- best: the best individual
#----------------------------------------------------------
# GAComponents.py: Individual, Population
#----------------------------------------------------------
import numpy as np
class Individual:
    '''individual of population'''
    def __init__(self, ranges):
        '''
        ranges: element range of solution, e.g. [(lb1, ub1), (lb2, ub2), ...]
        validation of ranges is skipped...
        '''     
        self.ranges = np.array(ranges)
        self.dimension = self.ranges.shape[0]
        # initialize solution within [lb, ub]
        seeds = np.random.random(self.dimension)
        lb = self.ranges[:, 0]
        ub = self.ranges[:, 1]
        self._solution = lb + (ub-lb)*seeds
        # evaluation and fitness
        self.evaluation = None
        self.fitness = None
    @property
    def solution(self):
        return self._solution
    @solution.setter
    def solution(self, solution):
        assert self.dimension == solution.shape[0]
        assert (solution>=self.ranges[:, 0]).all() and (solution<=self.ranges[:, 1]).all()
        self._solution = solution
class Population:
    '''collection of individuals'''
    def __init__(self, individual, size=50):
        '''
        individual   : individual template
        size         : count of individuals     
        '''
        self.individual = individual
        self.size = size
        self.individuals = None
    def initialize(self):
        '''initialization for next generation'''
        IndvClass = self.individual.__class__
        self.individuals = np.array([IndvClass(self.individual.ranges) for i in range(self.size)], dtype=IndvClass)
    def best(self, fun_evaluation, fun_fitness=None):
        '''get best individual according to evaluation value '''
        # evaluate first and collect evaluations
        _, evaluation = self.fitness(fun_evaluation, fun_fitness)
        # get the minimum position
        pos = np.argmin(evaluation)
        return self.individuals[pos]
    def fitness(self, fun_evaluation, fun_fitness=None):
        '''
        calculate objective value and fitness for each individual.
        fun_evaluation  : objective function
        fun_fitness     : population fitness based on evaluation
        '''
        # fitness is same as evaluation by default
        if not fun_fitness:
            fun_fitness = lambda x: x
        # get the value directly if it has been calculated before
        evaluation = np.array([fun_evaluation(I.solution) if I.evaluation is None else I.evaluation for I in self.individuals])
        # calculate fitness
        fitness = fun_fitness(evaluation)
        fitness = fitness/np.sum(fitness) # normalize
        # set attributes for each individual
        for I, e, f in zip(self.individuals, evaluation, fitness):
            I.evaluation = e
            I.fitness = f
        return fitness, evaluation
if __name__ == '__main__':
    ranges = [(-10,10)] * 3
    obj = lambda x: x[0]+x[1]**2+x[2]**3    
    I = Individual(ranges)
    P = Population(I, 100)  
    P.initialize()
    print(P.best(obj).solution)
2 Genetic Operators
2.1 Selection
Standard roulette-wheel selection is used: based on each individual's fitness, a new population of the same size is selected from the current one. As shown in the fitness calculation of the previous section, the fitness values are already normalized, so they can be used directly as the roulette-wheel probabilities.
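For intuition, here is a small standalone sketch of how normalized fitness values serve as selection probabilities with np.random.choice (the fitness values below are made up):
import numpy as np

# hypothetical normalized fitness of 4 individuals (sums to 1)
fitness = np.array([0.1, 0.2, 0.3, 0.4])
# roulette wheel: sample indices with probability proportional to fitness
selected = np.random.choice(4, size=4, p=fitness)
print(selected)  # e.g. [3 1 3 2] -- fitter individuals tend to be picked more often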
#----------------------------------------------------------
# GAOperators.py: Selection, Crossover, Mutation
#----------------------------------------------------------
import numpy as np
import copy
from GAComponents import Individual
#----------------------------------------------------------
# SELECTION
#----------------------------------------------------------
class Selection:
    '''base class for selection'''
    def select(self, population, fitness):
        raise NotImplementedError
class RouletteWheelSelection(Selection):
    '''
    select individuals by Roulette Wheel:
    individuals selected with a probability of its fitness
    ''' 
    def select(self, population, fitness):
        selected_individuals = np.random.choice(population.individuals, population.size, p=fitness)
        # pay attention to deep copy these objects
        population.individuals = np.array([copy.deepcopy(I) for I in selected_individuals])
2.2 Crossover
From the selected individuals, pick two individuals a and b at random and cross them with a certain probability: at randomly chosen positions, the corresponding genes (say g_a and g_b) are linearly interpolated with coefficient \alpha, while the remaining positions stay unchanged, producing two new individuals a' and b'.
For genes at the crossover positions:
\begin{align*} g_a' &= g_a + (g_b-g_a)(1-\alpha)\\ g_b' &= g_b - (g_b-g_a)(1-\alpha) \end{align*}
For genes at the other positions:
\begin{align*} g_a' &= g_a\\ g_b' &= g_b \end{align*}
Putting these together, the crossover of all genes of individuals a and b can be written in vector form:
\begin{align*} a' &= a + (b-a)(1-\alpha)\beta\\ b' &= b - (b-a)(1-\alpha)\beta \end{align*}
where \beta is a 0-1 vector indicating whether each position is crossed.
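A tiny NumPy sketch of this vector form, with made-up parent vectors and mask:
import numpy as np

a = np.array([1.0, 2.0, 3.0])      # parent a (hypothetical)
b = np.array([4.0, 0.0, 5.0])      # parent b (hypothetical)
alpha = 0.75                       # interpolation coefficient
beta = np.array([1, 0, 1])         # 0-1 mask: positions 0 and 2 are crossed

temp = (b - a) * (1 - alpha) * beta
a_new = a + temp                   # [1.75, 2.0, 3.5]: crossed genes move toward b
b_new = b - temp                   # [3.25, 0.0, 4.5]: crossed genes move toward a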
# ...... continued from the previous code block
#----------------------------------------------------------
# CROSSOVER
#----------------------------------------------------------
class Crossover:
    def __init__(self, rate=0.8, alpha=0.5):
        '''
        crossover operation:
            rate: probability of crossover.
            alpha: factor for crossing two chroms, [0,1]
        '''
        # parameters check is skipped
        self.rate = rate
        self.alpha = alpha
    @staticmethod
    def cross_individuals(individual_a, individual_b, alpha):
        '''
        generate two child individuals based on parent individuals:
        new values are calculated at random positions
        alpha: linear ratio to cross two genes, exchange two genes if alpha is 0.0
        '''
        # random positions to be crossed
        pos = np.random.rand(individual_a.dimension) <= 0.5
        # cross value
        temp = (individual_b.solution-individual_a.solution)*pos*(1-alpha)
        new_value_a = individual_a.solution + temp
        new_value_b = individual_b.solution - temp
        # return new individuals
        new_individual_a = Individual(individual_a.ranges)
        new_individual_b = Individual(individual_b.ranges)
        new_individual_a.solution = new_value_a
        new_individual_b.solution = new_value_b
        return new_individual_a, new_individual_b
    def cross(self, population):
        new_individuals = []        
        random_population = np.random.permutation(population.individuals) # random order
        num = int(population.size/2.0)+1
        for individual_a, individual_b in zip(population.individuals[0:num+1], random_population[0:num+1]):         
            # crossover
            if np.random.rand() <= self.rate:
                child_individuals = self.cross_individuals(individual_a, individual_b, self.alpha)
                new_individuals.extend(child_individuals)
            else:
                new_individuals.append(individual_a)
                new_individuals.append(individual_b)
        population.individuals = np.array(new_individuals[0:population.size]) # keep the population size unchanged
2.3 Mutation
For a floating-point-encoded individual, mutation adds or subtracts a random amount to genes g at randomly chosen positions. The amount is the allowed variation range multiplied by a random number \alpha in [0,1]. For example, if gene g lies in the interval [L,U], the mutated gene g' is:
\begin{align*} g' &= g - (g-L)\alpha \quad (rand() \leq 0.5)\\ g' &= g + (U-g)\alpha \quad (rand() > 0.5) \end{align*}
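A minimal sketch of this rule for a single gene (the numbers are made up):
import numpy as np

g, L, U = 3.0, -10.0, 10.0   # hypothetical gene value and its bounds
alpha = 0.4                  # mutation magnitude in [0, 1]

if np.random.rand() <= 0.5:
    g_new = g - (g - L) * alpha   # move toward the lower bound -> -2.2
else:
    g_new = g + (U - g) * alpha   # move toward the upper bound -> 5.8
# in either case g_new stays inside [L, U]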
# ...... continued from the previous code block
#----------------------------------------------------------
# MUTATION
#----------------------------------------------------------
class Mutation:
    def __init__(self, rate):
        '''
        mutation operation:
        rate: probability of mutation, [0,1]
        '''
        self.rate = rate
    def mutate_individual(self, individual, positions, alpha):
        '''
        positions: mutating gene positions, list
        alpha: mutation magnitude
        '''
        for pos in positions:
            if np.random.rand() < 0.5:
                individual.solution[pos] -= (individual.solution[pos]-individual.ranges[:,0][pos])*alpha
            else:
                individual.solution[pos] += (individual.ranges[:,1][pos]-individual.solution[pos])*alpha
        # reset evaluation
        individual.evaluation = None 
        individual.fitness = None 
    def mutate(self, population, alpha):
        '''
        alpha: mutating magnitude
        '''
        for individual in population.individuals:
            if np.random.rand() > self.rate:
                continue
            # select random positions to mutate
            num = np.random.randint(individual.dimension) + 1
            pos = np.random.choice(individual.dimension, num, replace=False)
            self.mutate_individual(individual, pos, alpha)
3 Genetic Algorithm
The GA implemented here minimizes the objective by default, so the fitness should decrease as the objective value increases. Since the fitness is also used as a selection probability, it must be non-negative. The default is therefore set to \arctan(-x)+\pi, as seen in the constructor below (a quick numerical check follows the code):
#----------------------------------------------------------
# GA.py: Simple Genetic Algorithm
#----------------------------------------------------------
import numpy as np
class GA():
    '''Simple Genetic Algorithm'''
    def __init__(self, population, selection, crossover, mutation, fun_fitness=lambda x:np.arctan(-x)+np.pi):
        '''
        fun_fitness: fitness based on objective values. minimize the objective by default
        '''
        self.population = population
        self.selection = selection
        self.crossover = crossover
        self.mutation = mutation
        self.fun_fitness = fun_fitness
    def run(self, fun_evaluation, gen=50):
        '''
        solve the problem based on Simple GA process
        '''
        # initialize population
        self.population.initialize()
        # solving process
        for n in range(1, gen+1):
            # select
            fitness, _ = self.population.fitness(fun_evaluation, self.fun_fitness)
            self.selection.select(self.population, fitness)
            # crossover
            self.crossover.cross(self.population)
            # mutation
            self.mutation.mutate(self.population, np.random.rand())
        # return the best individual
        return self.population.best(fun_evaluation, self.fun_fitness)
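As a quick sanity check of that default mapping (the printed values are approximate):
import numpy as np

# fitness(x) = arctan(-x) + pi lies in (pi/2, 3*pi/2), so it is always positive,
# and it decreases as the objective x grows -> smaller objectives get larger fitness
x = np.array([-100.0, 0.0, 100.0])
print(np.arctan(-x) + np.pi)   # approximately [4.70, 3.14, 1.58]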
Test
The two-variable Schaffer N4 function is used as a test case; its minimum is f(0, 1.25313) = 0.292579.

#----------------------------------------------------------
# test.py
#----------------------------------------------------------
import numpy as np
from GAComponents import Individual, Population
from GAOperators import RouletteWheelSelection, Crossover, Mutation
from GA import GA
# schaffer-N4
# sol: x=[0,1.25313], min=0.292579
schaffer_n4 = lambda x: 0.5 + (np.cos(np.sin(abs(x[0]**2-x[1]**2)))**2-0.5) / (1.0+0.001*(x[0]**2+x[1]**2))**2  
I = Individual([(-10,10)]*2)
P = Population(I, 50)
S = RouletteWheelSelection()
C = Crossover(0.9, 0.75)
M = Mutation(0.2)
g = GA(P, S, C, M)
res = []
for i in range(10):
    res.append(g.run(schaffer_n4, 500).evaluation)
val = schaffer_n4([0,1.25313])
val_ga = sum(res)/len(res)
print('the minimum: {0}'.format(val))
print('the GA minimum: {0}'.format(val_ga))
print('error: {:<3f} %'.format((val_ga/val-1.0)*100))
#----------------------------------------------------------
# output:
#----------------------------------------------------------
the minimum: 0.29257863204552975
the GA minimum: 0.30069849785520314
error: 2.7752764283927034 %
The average error over 10 runs is around 3%, which leaves plenty of room for improvement; that will be addressed in a follow-up post.