Differential Evolution
Differential Evolution (DE) is a population-based optimization algorithm that is particularly effective for continuous optimization problems.
This algorithm has the following steps:
1. Initialization: Generate an initial population of candidate solutions randomly within the defined bounds.
2. Mutation: Create new candidate solutions by adding the weighted difference of two randomly selected solutions to a third one.
3. Crossover: Combine each mutated candidate with its original counterpart to create a trial candidate.
4. Selection: Evaluate the trial candidates and keep the better of each trial/original pair to form the next generation.
Initialization
Consider a function \(f: \mathbb{R}^n \to \mathbb{R}\) that we want to minimize over a bounded search space \(\mathcal{X} = \{\mathbf{x} \in \mathbb{R}^n | \mathbf{x}_{\text{min}} \leq \mathbf{x} \leq \mathbf{x}_{\text{max}}\}\), where \(\mathbf{x}_{\text{min}}\) and \(\mathbf{x}_{\text{max}}\) are the lower and upper bounds of the search space, respectively.
Let \(\mathbf{x}_{i,t} \in \mathcal{X}\) be the \(i\)-th solution in the population at generation \(t\). The initial population can be generated randomly by the following equation:

\[\mathbf{x}_{i,1} = \mathbf{x}_{\text{min}} + \mathbf{r}_{i} \odot (\mathbf{x}_{\text{max}} - \mathbf{x}_{\text{min}}), \quad i = 1, 2, \ldots, N,\]

where \(N\) is the population size, \(\mathbf{r}_{i}\) is a vector of random numbers uniformly distributed in the range \([0, 1]\), and \(\odot\) denotes element-wise multiplication.
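For example, here is a minimal NumPy sketch of this sampling step, assuming scalar bounds lb and ub applied to every dimension (the sizes below are illustrative):

import numpy as np

n_pop, n_dim = 10, 5                               # population size N and dimension n
lb, ub = -100.0, 100.0                             # assumed scalar bounds
# x[i] = x_min + r_i * (x_max - x_min), with r_i ~ U[0, 1]^n
x = lb + (ub - lb) * np.random.rand(n_pop, n_dim)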
Mutation
In the mutation operation, a new candidate solution is generated by adding the weighted difference between two randomly selected solutions to a third solution. The mutation can be expressed mathematically as follows:

\[\mathbf{v}_{i,t} = \mathbf{x}_{r_1,t} + F \cdot (\mathbf{x}_{r_2,t} - \mathbf{x}_{r_3,t}),\]

where \(\mathbf{v}_{i,t}\) is the mutated vector for the \(i\)-th solution at generation \(t\), \(F\) is a scaling factor, and \(\mathbf{x}_{r_1,t}\), \(\mathbf{x}_{r_2,t}\), and \(\mathbf{x}_{r_3,t}\) are three distinct randomly selected solutions from the population.
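Continuing the sketch above, the mutation of a single individual i could look like this (this is the DE/rand/1 strategy stated by the equation; F = 0.8 is an assumed setting):

F = 0.8                                            # assumed scaling factor
i = 0                                              # current individual
# three mutually distinct indices, all different from i
candidates = [j for j in range(n_pop) if j != i]
r1, r2, r3 = np.random.choice(candidates, 3, replace=False)
v = x[r1] + F * (x[r2] - x[r3])                    # mutated vector v_i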
Crossover
The crossover operation combines the mutated vector \(\mathbf{v}_{i,t}\) with the original solution \(\mathbf{x}_{i,t}\) to create a trial vector \(\mathbf{u}_{i,t}\). The binomial crossover is commonly used:

\[u_{i,j,t} = \begin{cases} v_{i,j,t}, & \text{if } r_j < CR \text{ or } j = j_{rand}, \\ x_{i,j,t}, & \text{otherwise}, \end{cases}\]

where \(CR\) is the crossover probability, \(j_{rand}\) is a randomly chosen index that guarantees at least one component of the trial vector comes from the mutated vector, and \(r_j\) is a random number uniformly distributed in the range \([0, 1]\).
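Continuing the sketch, binomial crossover for individual i can be written as follows (CR = 0.5 is an assumed setting):

CR = 0.5                                           # assumed crossover probability
j_rand = np.random.randint(n_dim)                  # component that always comes from v
mask = np.random.rand(n_dim) < CR                  # r_j < CR, checked per dimension
mask[j_rand] = True                                # guarantee at least one mutated component
u = np.where(mask, v, x[i])                        # trial vector u_i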
Selection
The selection operation determines whether the trial vector \(\mathbf{u}_{i,t}\) or the original vector \(\mathbf{x}_{i,t}\) will be retained for the next generation. The selection can be expressed as follows:

\[\mathbf{x}_{i,t+1} = \begin{cases} \mathbf{u}_{i,t}, & \text{if } f(\mathbf{u}_{i,t}) < f(\mathbf{x}_{i,t}), \\ \mathbf{x}_{i,t}, & \text{otherwise}, \end{cases}\]

where \(f(\cdot)\) is the objective function to be minimized.
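Continuing the sketch, the greedy selection between u and x[i] is a single comparison (using the sphere function as a placeholder objective):

f = lambda z: np.sum(z**2)                         # placeholder objective function
if f(u) < f(x[i]):                                 # trial vector improves on the parent
    x[i] = u                                       # keep u for the next generation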
Algorithm
Algorithm 14 (Differential Evolution)
1. Initialize the population \(\mathbf{x}_{i,1}\) for \(i = 1, 2, \ldots, N\).
2. Evaluate \(f(\mathbf{x}_{i,1})\) for \(i = 1, 2, \ldots, N\).
3. For \(t = 1\) to \(T\) (number of generations):
   1. For each solution \(i = 1, 2, \ldots, N\):
      1. Select three distinct indices \(r_1, r_2, r_3 \neq i\) at random and set \(\mathbf{v}_{i,t} \gets \mathbf{x}_{r_1,t} + F \cdot (\mathbf{x}_{r_2,t} - \mathbf{x}_{r_3,t})\).
      2. Perform crossover to create \(\mathbf{u}_{i,t}\).
      3. If \(f(\mathbf{u}_{i,t}) < f(\mathbf{x}_{i,t})\), then \(\mathbf{x}_{i,t+1} \gets \mathbf{u}_{i,t}\); otherwise, \(\mathbf{x}_{i,t+1} \gets \mathbf{x}_{i,t}\).
4. Return \(\arg \min_{i} f(\mathbf{x}_{i,T})\).
Python Implementation
import numpy as np


class DE:
    '''
    Differential Evolution for single-objective optimization problems.

    Parameters
    ----------
    problem: object
        The optimization problem to be solved; must expose an
        evaluate(x) method that returns the objective function value.
    n_dim: int
        The number of dimensions of the problem.
    n_gen: int
        The number of generations to run the algorithm.
    n_pop: int
        The number of individuals in the population.
    ub: float
        The upper bound of the search space.
    lb: float
        The lower bound of the search space.
    F: float
        The scaling factor for mutation.
    CR: float
        The crossover probability.
    '''

    def __init__(self, problem, n_dim=10, n_gen=100, n_pop=10, ub=100, lb=-100, F=0.8, CR=0.5):
        self.problem = problem
        self.n_dim = n_dim
        self.n_gen = n_gen
        self.n_pop = n_pop
        self.ub = ub
        self.lb = lb
        self.F = F
        self.CR = CR

    def optimize(self):
        # initialization: x[i] = lb + r_i * (ub - lb), with r_i ~ U[0, 1]^n
        x = self.lb + (self.ub - self.lb) * np.random.rand(self.n_pop, self.n_dim)
        f_x = np.zeros((self.n_pop, 1))
        f_u = np.zeros((self.n_pop, 1))
        for i in range(self.n_pop):
            f_x[i] = self.problem.evaluate(x[i])
        gen = 0
        # iteration
        while gen < self.n_gen:
            # mutation: v[i] = x[r1] + F * (x[r2] - x[r3]),
            # with r1, r2, r3 mutually distinct and different from i
            r1 = np.zeros(self.n_pop, dtype=int)
            r2 = np.zeros(self.n_pop, dtype=int)
            r3 = np.zeros(self.n_pop, dtype=int)
            for i in range(self.n_pop):
                candidates = np.delete(np.arange(self.n_pop), i)
                r1[i], r2[i], r3[i] = np.random.choice(candidates, 3, replace=False)
            v = x[r1, :] + self.F * (x[r2, :] - x[r3, :])
            # repair: resample out-of-bound components uniformly within the bounds
            mask_bound = np.random.uniform(low=self.lb, high=self.ub, size=(self.n_pop, self.n_dim))
            v = np.where(v < self.lb, mask_bound, v)
            v = np.where(v > self.ub, mask_bound, v)
            # crossover: binomial crossover; jrand guarantees at least
            # one mutated component per individual
            mask_co = np.random.rand(self.n_pop, self.n_dim) < self.CR
            jrand = np.random.randint(0, self.n_dim, size=self.n_pop)
            mask_co[np.arange(self.n_pop), jrand] = True
            u = np.where(mask_co, v, x)
            # selection: keep the trial vector if it improves the objective
            for i in range(self.n_pop):
                f_u[i] = self.problem.evaluate(u[i])
            x = np.where(f_u < f_x, u, x)
            f_x = np.where(f_u < f_x, f_u, f_x)
            gen = gen + 1
        best = np.argmin(f_x)
        return x[best], f_x[best, 0]
The following class defines the problem of minimizing the sphere function

\[f(\mathbf{x}) = \sum_{j=1}^{n} x_j^2.\]

This simple quadratic function is often used as a test case for optimization algorithms. Its global minimum is at \(\mathbf{x} = \mathbf{0}\), where \(f(\mathbf{x}) = 0\).
class Sphere:
    def __init__(self, n_dim=10):
        self.n_dim = n_dim
        self.ub = 100
        self.lb = -100

    def evaluate(self, x):
        return np.sum(x**2)
if __name__ == "__main__":
    problem = Sphere()
    de = DE(problem, n_dim=10, n_gen=1000, n_pop=10, ub=100, lb=-100, F=0.8, CR=0.5)
    best_x, best_f = de.optimize()
    print("Best solution found:", best_x)
    print("Best objective value:", best_f)
To apply this algorithm to a simulation optimization problem, you can create a simulation class that runs the simulation and returns the objective function value through the evaluate method that DE expects:
class Sim:
    def __init__(self):
        # define the simulation parameters
        pass

    def simulate(self, x):
        # run the simulation with the given parameters x
        # and return the raw simulation output
        pass

    def evaluate(self, x):
        # turn one or more simulation runs into the objective
        # function value that DE minimizes
        return self.simulate(x)
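As an illustration, here is a hypothetical simulation problem in this style: the "simulation" is just the sphere objective observed with Gaussian noise, and evaluate averages a few replications to smooth out that noise. The class name, the noise model, and the number of replications are all assumptions made for this sketch, not part of the algorithm.

class NoisySphereSim:
    '''A hypothetical noisy simulation used only to illustrate the interface.'''
    def __init__(self, n_dim=10, noise_std=0.1, n_rep=5):
        self.n_dim = n_dim
        self.ub = 100
        self.lb = -100
        self.noise_std = noise_std   # assumed noise level of the simulation output
        self.n_rep = n_rep           # assumed number of replications per evaluation

    def simulate(self, x):
        # stand-in for a single stochastic simulation run
        return np.sum(x**2) + np.random.normal(0.0, self.noise_std)

    def evaluate(self, x):
        # average several replications to reduce simulation noise
        return np.mean([self.simulate(x) for _ in range(self.n_rep)])

problem = NoisySphereSim()
de = DE(problem, n_dim=10, n_gen=1000, n_pop=10, ub=100, lb=-100, F=0.8, CR=0.5)
best_x, best_f = de.optimize()
print("Best objective value found:", best_f)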