# Search and Optimisation
# Local Search and Evolutionary Algorithms
> Dr Amir Pourabdollah
Main resources
Textbook Chapters 4-5
Wirsansky, Eyal (2020): Hands-on Genetic Algorithms with Python: Applying Genetic Algorithms to Solve Real-World Deep Learning and Artificial Intelligence Problems. Available online from NTU Library.
Eiben & Smith (2015), Introduction to Evolutionary Computing, 2nd ed., Springer, Ch 2. Freely available at https://www.cs.vu.nl/~gusz/ecbook/Eiben-Smith-Intro2EC-Ch2.pdf
DEAP documentation: https://deap.readthedocs.io/en/master/
# Outline
Local Search
Evolutionary Algorithms
Introducing some other related algorithms
# Reminder: What is Local Search?
In some problems, a solution is just a final state
The action sequence and/or the starting point are not important or do not exist
Usually a very large search space
Suited to non-deterministic, partially observable settings
An optimality test is usually hard or not applicable
We typically have a candidate solution and try to improve it iteratively by searching in the candidate's locality (many such methods rely on randomness)
Applications in: logistics and transport, networking, VLSI (electronic chip design), drug design
# Example: The Traveling Salesperson Problem (TSP)
A famous optimisation problem
Very large search space
For n cities, there are (n-1)!/2 possible routes
For 60 cities, there are more routes
than the number of atoms in the
visible universe!
Fitness function (or objective function):
Total journey length (to be minimised)
Local search can help!
https://youtu.be/SC5CX8drAtU
# Local Search Algorithms
Hill Climbing Algorithm: from the current state, go to the neighbour state with the highest fitness, or stop if no better neighbour is found.
Idealistic: it never goes back.
Similar to greedy search (if there are limited choices at each move)
Challenge: getting stuck in local maxima
A solution: Random-restart Hill Climbing
Another solution: Tabu Search
Randomly choose neighbours with lower fitness
Avoid visited states
# Hill climbing examples
TSP Example: At each step, try swapping the order of two cities in the path if that decreases the total journey length
4-Queen Example: At each step, try a move that decreases h
h = number of pairs of queens that can attack each other
8-Puzzle Example: similar, but with h = number of misplaced tiles (compared to the goal state)
A minimal hill-climbing sketch for the TSP case is shown below.
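A minimal hill-climbing sketch for TSP, assuming `dist` is an n×n matrix of inter-city distances (the matrix, the two-city-swap neighbourhood and the step limit are illustrative assumptions); here the fitness is the total journey length, to be minimised:

```python
import random

def route_length(route, dist):
    """Total length of a closed tour (returning to the starting city)."""
    return sum(dist[route[i]][route[(i + 1) % len(route)]]
               for i in range(len(route)))

def hill_climb_tsp(dist, max_steps=1000):
    """Start from a random tour and keep applying improving two-city swaps."""
    n = len(dist)
    current = list(range(n))
    random.shuffle(current)
    best_len = route_length(current, dist)
    for _ in range(max_steps):
        improved = False
        for i in range(n - 1):
            for j in range(i + 1, n):
                neighbour = current[:]
                neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
                length = route_length(neighbour, dist)
                if length < best_len:        # accept an improving swap
                    current, best_len = neighbour, length
                    improved = True
        if not improved:                     # no better neighbour: local optimum
            break
    return current, best_len
```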
# Simulated Annealing (SA)
Inspired by nature: metal crystal structures find a stronger state if cooled down gradually.
Define a decreasing variable T (temperature)
Higher T: Higher probability to move to a state
with lower fitness
Lower T: lower probability of such moves
Adiabatic theorem: the optimum solution is found if T decreases slowly enough
P = e^(ΔE/(kT))
> P: probability (of accepting the move)
> E: fitness (ΔE: change in fitness)
> k: constant factor
> T: temperature
> (Figure: search snapshots at Step 1, T=0.2; Step 20, T=0.02; Step 40, T=0.002)
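A minimal SA sketch, taking k = 1 and an exponential cooling schedule (both are illustrative assumptions); `random_neighbour` and `fitness` are problem-specific callables you supply:

```python
import math
import random

def simulated_annealing(initial, random_neighbour, fitness,
                        t_start=0.2, cooling=0.95, steps=1000):
    """Minimal SA sketch: always accept improvements, and accept worse
    states with probability e^(delta/T), where delta is the fitness change."""
    current = initial
    current_fit = fitness(current)
    t = t_start
    for _ in range(steps):
        candidate = random_neighbour(current)      # pick a nearby state
        delta = fitness(candidate) - current_fit   # change in fitness
        if delta > 0 or random.random() < math.exp(delta / t):
            current, current_fit = candidate, current_fit + delta
        t *= cooling                               # gradually decrease T
    return current, current_fit
```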
# Beam Search
A multi-thread local search: useful information is passed among the parallel search threads.
Keep track of k states rather than just one (k: beam width). Out of all the neighbours of the k current nodes, the best k nodes are selected.
Select k random initial states; repeat until no better neighbour is found.
> (Figure: example with k=3)
Stochastic Beam Search (SBS): selection is not always limited to the "best" nodes; some bad moves can be randomly chosen too.
Evolutionary Algorithm: a variant of SBS where the random selection is replaced by "natural selection" (next to come).
A minimal beam-search sketch is shown below.
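A minimal (non-stochastic) beam-search sketch, assuming `neighbours(state)` returns the successor states and `fitness` is to be maximised (the callables and the stopping rule are assumptions for illustration):

```python
def beam_search(initial_states, neighbours, fitness, k=3, max_iters=100):
    """Minimal local beam search: from all neighbours of the current k
    states, keep the overall best k."""
    beam = sorted(initial_states, key=fitness, reverse=True)[:k]
    for _ in range(max_iters):
        candidates = list(beam)
        for state in beam:
            candidates.extend(neighbours(state))      # pool every neighbour
        new_beam = sorted(candidates, key=fitness, reverse=True)[:k]
        if fitness(new_beam[0]) <= fitness(beam[0]):  # no better neighbour found
            break
        beam = new_beam
    return beam[0]
```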
# Exploitation & Exploration
As you see in many local search algorithms, there is a mix of two approaches:
Exploitation (being orderly): the tendency to try solutions by improving the existing solutions
Exploration (being risky): the tendency to randomly try solutions that have not been tried before
A good trade-off between the two approaches can increase the chance of reaching the global optimum.
# Constraint Satisfaction Problems
Local search problems in which a goal test exists
The goal is formulated as satisfying some constraints, typically by assigning values to variables
Applications in assignment problems, timetabling, hardware configuration, job scheduling, floor planning
# Example: Map Colouring Problem
Assigning one of n colours to each region on a map *
Constraint: No two adjacent regions with the same colour
Example: the map of Australia
Variables={WA, NT, Q, NSW, V, SA, T}
Values={red, green, blue}
Can also be shown as a constraint hyper-graph
* Four-Colour Theorem (for reading only): no more than 4 colours are needed to colour the regions of any map!
Many CSPs can be modelled as trees
Can be tackled by a variant of DFS called Backtracking
No need to assign values to all variables at each step
A goal node is one where all constraints are satisfied
If a branch fails to satisfy the constraint, backtrack.
Strategy to traverse the tree?
Aim: reduce the tree size and detect wrong choices sooner
Some heuristics exist for choosing the next variable and for choosing its value.
Examples: MRV, Degree Heuristic, LCV
# Backtracking
Decisions at each step:
1. Which variable to choose next for assignment.
2. Which value to be assigned to the chosen variable.
An intuitive method for making the above decisions:
1. Choose the most constrained variable, i.e., the variable
with the Minimum Remaining Values (MRV)
2. Given the variable, assign it the Least Constraining Value
(LCV) in relation to the unassigned variables
The rest of CSP is for your own information (textbook Chapter 6)
> MRV: choose SA when it has just 1 colour choice
> MRV: choose NT (or SA) when it has just 2 colour choices
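A minimal backtracking sketch with the MRV heuristic for the Australia map-colouring example (the adjacency data layout and helper names are illustrative assumptions):

```python
# Adjacency of the Australian regions; Tasmania (T) has no neighbours.
neighbours = {
    'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
    'SA': ['WA', 'NT', 'Q', 'NSW', 'V'], 'Q': ['NT', 'SA', 'NSW'],
    'NSW': ['Q', 'SA', 'V'], 'V': ['SA', 'NSW'], 'T': []
}
colours = ['red', 'green', 'blue']

def consistent(region, colour, assignment):
    """Constraint: no two adjacent regions share a colour."""
    return all(assignment.get(n) != colour for n in neighbours[region])

def backtrack(assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(neighbours):     # goal: every region coloured
        return assignment
    # MRV: pick the unassigned region with the fewest remaining legal colours
    unassigned = [r for r in neighbours if r not in assignment]
    region = min(unassigned,
                 key=lambda r: sum(consistent(r, c, assignment) for c in colours))
    for colour in colours:
        if consistent(region, colour, assignment):
            assignment[region] = colour
            result = backtrack(assignment)
            if result:
                return result
            del assignment[region]             # undo the choice and backtrack
    return None

print(backtrack())
```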
# Mid-way Summary
Local Search Algorithms
Hill Climbing and its variations
Simulated Annealing
Other related techniques
Constraint Satisfaction Problems
Next week: Evolutionary Algorithms
Genetic algorithms
Other related techniques
# Evolutionary Computing
In biology, evolution theory explains the
process that changes populations of
animals / plants over many generations by
mutations (random genetic changes) and
selection (survival of the best).
In computing, we are mostly interested in
the process of this theory as an inspiration
for search/optimisation algorithms
# What are Evolutionary Algorithms (EA)?
Inspired by biological evolution theory, used for a range of optimisation problems.
Survival of the best, over many generations
Local search, but population -based
Computer-Automated Evolution of an X-Band Antenna for NASA's Space Technology 5 Mission
https://www.mitpressjournals.org/doi/pdf/10.1162/EVCO_a_00005
## Reminder: Exploitation & Exploration
As you see in many local search algorithms, there is a
mix of two approaches:
Exploitation (being orderly): The tendency to try
solutions by improving the existing solutions
Exploration (being risky): The tendency to
randomly try solutions that have not been tried before
A good trade-off between the two approaches over many iterations can increase the chance of the EA reaching the global optimum (i.e., avoiding premature convergence).
> (e.g., EA for the Traveling Salesperson Problem, 100 cities)
## Genetic Algorithm
(the best-known form of EA: applying the evolutionary operations at the level of genes)
Concepts
Individual or Chromosome
A potential solution for the problem
Genes
Single items that compose an individual (bits of a
chromosome)
Population
A set of individuals in a generation (typically fixed size)
Fitness
A function that measures how good an individual is at solving the problem
# Genetic Algorithm (GA) (cont.)
Operators
Selection
Selecting a number of individuals by some criteria
Crossover (or recombination)
Producing new individuals by combining the genes of 2 (or more) individuals
leads to exploitation
Mutation
Apply random changes to some genes
leads to exploration
Watch an introductory video here: https://www.youtube.com/watch?v=MacVqujSXWE (about the knapsack problem)
A generic GA loop sketch is shown below.
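A minimal sketch of the overall GA loop, assuming you supply the problem-specific helpers (`init_population`, `fitness`, `select`, `crossover`, `mutate`; the names, signatures and the mutation rate are illustrative assumptions):

```python
import random

def genetic_algorithm(init_population, fitness, select, crossover, mutate,
                      generations=100, p_mutation=0.1):
    """Generic GA loop: selection, crossover, mutation, repeat."""
    population = init_population()
    for _ in range(generations):
        new_population = []
        while len(new_population) < len(population):
            parent1 = select(population, fitness)
            parent2 = select(population, fitness)
            child = crossover(parent1, parent2)   # exploitation
            if random.random() < p_mutation:
                child = mutate(child)             # exploration
            new_population.append(child)
        population = new_population
    return max(population, key=fitness)           # fittest individual found
```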
# Critical: How to map a problem to GA
(Deciding what is a gene, chromosome, population size, fitness, etc.)
Example 1: Knapsack Problem
Which boxes to choose to maximise total value, while not
exceeding the total weight capacity?
A chromosome is a sequence of binary bits
e.g., [00101] means choose the 3rd and the 5th box
Fitness function is the total value - to be maximised
> (Figure: five boxes labelled (1)-(5) with their weights and values)
Example 2: TSP
What is the order of cities to visit, so that the total journey
distance is minimised?
A chromosome is a sequence of city numbers.
e.g., for 8 cities, [35721648] means going from city 3 to 5 to 7, and so on
Fitness function is total journey distance - to be minimised.
Watch: https://www.youtube.com/watch?v=MacVqujSXWE
> (Figure: each chromosome is one order of visiting the cities)
## Selection
Many methods, e.g.,
Truncate Selection: take the fittest n individuals! Too strict! We may miss some good genes hiding elsewhere.
Fitness-proportionate Selection: the probability of choosing an individual is proportional to its fitness.
Tournament Selection: randomly take a small number of individuals (typically 2), choose the fittest (strictly, or sometimes with a probability p), and repeat until there are enough parents for recombination.
A tournament-selection sketch is shown below.
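A minimal tournament-selection sketch in its strict form (always keep the tournament winner); the function names are illustrative, and `fitness` is the problem's fitness function:

```python
import random

def tournament_select(population, fitness, tournament_size=2):
    """Pick tournament_size random individuals and return the fittest."""
    contenders = random.sample(population, tournament_size)
    return max(contenders, key=fitness)

def select_parents(population, fitness, n_parents):
    """Repeat tournaments until there are enough parents for recombination."""
    return [tournament_select(population, fitness) for _ in range(n_parents)]
```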
## Recombination (or Cross-over)
Many methods, e.g., uniform crossover (a uniform random selection of bits from each parent); the slide's figures are omitted here.
This may destroy some very good chromosomes?
Optionally use Elitism: directly pass some good chromosomes to the next generation.
A single-point crossover sketch is shown below.
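A single-point crossover sketch, one common recombination method (the choice of this particular variant and the cut-point scheme are illustrative, since the slide's own figures are not reproduced):

```python
import random

def single_point_crossover(parent1, parent2):
    """Combine two parents by cutting both at one random point and
    joining the left part of one with the right part of the other."""
    point = random.randint(1, len(parent1) - 1)   # cut between genes
    return parent1[:point] + parent2[point:]
```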
## Mutation
Many methods, e.g., flipping one or more randomly chosen bits (most common), or other random changes to the genes.
A bit-flip mutation sketch is shown below.
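A bit-flip mutation sketch for binary-encoded chromosomes (the per-bit flip probability is an illustrative assumption):

```python
import random

def bit_flip_mutation(chromosome, p_flip=0.05):
    """Flip each bit independently with a small probability; suits
    binary encodings such as the knapsack example below."""
    return [1 - gene if random.random() < p_flip else gene
            for gene in chromosome]
```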
# Example: Knapsack Problem with 6 items
Each item is [weight, value]:
items = [
    [1, 2],
    [2, 4],
    [3, 4],
    [4, 5],
    [5, 7],
    [6, 9]
]
max_weight = 10
population_size = 10
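A possible fitness function for this setup, using the `items` and `max_weight` defined above (the zero-fitness penalty for overweight solutions is an assumption, not stated on the slide):

```python
def fitness(chromosome):
    """Total value of the chosen items, or 0 if the weight capacity is
    exceeded. chromosome is a list of bits, one per item (1 = take it)."""
    weight = sum(w for bit, (w, v) in zip(chromosome, items) if bit)
    value = sum(v for bit, (w, v) in zip(chromosome, items) if bit)
    return value if weight <= max_weight else 0
```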
# Genetic Programming (GP)
A special form of GA, aiming to find (evolve) the best computer program for a given purpose.
Suited to building functions with inputs and outputs, not a whole system!
A program (here, an individual) is modelled
as a tree structure, with operators, variables
and constants
Trees are then iteratively selected, crossed over and mutated, and their fitness for delivering the required result is evaluated.
> Example: (x+3) - (x^2/2)
> A cross-over example (figure)
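A minimal sketch of how the example expression (x+3) - (x^2/2) could be represented and evaluated as a GP-style tree (the nested-tuple encoding is an illustrative choice, not the slide's notation):

```python
import operator

OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.truediv}

# (x+3) - (x^2/2) as a tree of (operator, left, right) nodes
tree = ('-', ('+', 'x', 3), ('/', ('*', 'x', 'x'), 2))

def evaluate(node, x):
    """Recursively evaluate a tree for a given input x."""
    if node == 'x':
        return x
    if isinstance(node, (int, float)):
        return node
    op, left, right = node
    return OPS[op](evaluate(left, x), evaluate(right, x))

print(evaluate(tree, 4))   # (4+3) - (16/2) = -1.0
```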
# Particle swarm optimisation (PSO)
Inspiration from natural groupings of individual organisms, such as flocks of birds or schools of fish, working together toward a common goal with no central supervision
A swarm (population) includes particles (individuals) that move around a search space according to a simple rule
Each particle at each iteration has a position and a velocity (in n dimensions)
Each particle's next move is influenced by its own best-known position and the swarm's (or a subset of the swarm's) best-known position
The best-known position will hopefully converge to a global optimum; a minimal update-rule sketch is shown below.
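A minimal sketch of one PSO iteration; the particle representation (a dict with 'pos', 'vel', 'best_pos', 'best_fit'), the parameter values and the maximisation convention are illustrative assumptions, with `fitness` being the objective you supply:

```python
import random

def pso_step(particles, fitness, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration: update each particle's velocity and position
    using its own best-known position and the swarm's best-known position."""
    global_best = max((p['best_pos'] for p in particles), key=fitness)
    for p in particles:
        for d in range(len(p['pos'])):                 # each dimension
            r1, r2 = random.random(), random.random()
            p['vel'][d] = (w * p['vel'][d]
                           + c1 * r1 * (p['best_pos'][d] - p['pos'][d])   # own memory
                           + c2 * r2 * (global_best[d] - p['pos'][d]))    # social term
            p['pos'][d] += p['vel'][d]
        fit = fitness(p['pos'])
        if fit > p['best_fit']:                        # update personal best
            p['best_pos'], p['best_fit'] = list(p['pos']), fit
    return particles
```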
# Other EA and nature-inspired algorithms
(information only)
Memetic Algorithm
Ant Colony Optimisation
Bee Colony Algorithm
Artificial Immune Systems
Cuckoo Search
Imperialistic Competitive Algorithm (ICA)
Summary
Part 1: Local Search Algorithms
Hill Climbing and its variations
Simulated Annealing
Other related techniques
Constraint Satisfaction Problems
Part 2: Evolutionary Algorithms
Genetic algorithms
Other related techniques