Schedule for Saturday
Schedule for Sunday
Schedule for Monday
Schedule for Tuesday

Talks on Saturday
Saturday, 10am
Timo Kötzing, Andrei Lissovoi, Carsten Witt:
(1+1) EA on Generalized Dynamic OneMax
Evolutionary algorithms (EAs) perform well in settings involving
uncertainty, including settings with stochastic or dynamic fitness
functions.
In this paper, we analyze the (1+1) EA on dynamically changing
OneMax, as introduced by Droste (2003).
We reprove the known results on first hitting times using the
modern tool of drift analysis.
We extend these results to search spaces which allow for more than
two values per dimension.
Furthermore, we perform an anytime analysis as suggested by Jansen and
Zarges (2014), analyzing how closely the (1+1) EA can track the
dynamically moving optimum over time. We get tight bounds both for
the case of bit strings and for the case of more than two values
per position. Surprisingly, in the latter setting, the expected
quality of the search point maintained by the (1+1) EA does not
depend on the number of values per dimension.
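To make the setting concrete, here is a toy sketch of a (1+1) EA chasing a moving OneMax-style target (my own illustration, not code from the paper; the function name and the per-bit drift probability `p_move` are arbitrary choices):

```python
import random

def one_plus_one_ea_dynamic_onemax(n, steps, p_move=0.0, seed=0):
    """(1+1) EA tracking a (possibly moving) OneMax-style target.

    Fitness of x = number of positions where x agrees with the target.
    Each generation every target bit flips with probability p_move
    (p_move = 0 recovers static OneMax).  Returns the final fitness.
    """
    rng = random.Random(seed)
    target = [rng.randint(0, 1) for _ in range(n)]
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        # Dynamic component: the optimum may drift each generation.
        target = [b ^ (rng.random() < p_move) for b in target]
        # Standard bit mutation: flip each bit independently with prob 1/n.
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        fx = sum(a == b for a, b in zip(x, target))
        fy = sum(a == b for a, b in zip(y, target))
        if fy >= fx:  # elitist acceptance: keep offspring if not worse
            x = y
    return sum(a == b for a, b in zip(x, target))
```

With `p_move = 0` the usual hitting-time picture applies; with `p_move > 0` the interesting quantity is how closely the search point tracks the drifting target.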
Saturday, 11.30am
Johannes Lengler, Nick Spooner:
Fixed Budget Performance of the (1+1) EA on Linear Functions
We present a fixed budget analysis of the (1+1) evolutionary
algorithm for general linear functions, considering both the
quality of the solution after a predetermined ‘budget’ of fitness
function evaluations (a priori) and the improvement in quality when
the algorithm is given additional budget, given the quality of
the current solution (a posteriori). Two methods are presented:
one based on drift analysis, the other on the differential equation
method and Chebyshev’s inequality. While the first method is
superior for general linear functions, the second can be more
precise for specific functions and provides concentration
guarantees.
As an example, we provide tight a posteriori fixed budget results
for the function OneMax.
Saturday, 1.30pm
Leslie Goldberg: Evolutionary Dynamics on Graphs
The Moran process models the spread of mutations in populations
on graphs. This talk discusses the model, as adapted by Lieberman,
Hauert and Nowak. I will describe what is known about fixation
probabilities and about the expected absorption time of the process.
The talk is based on joint work with Josep Diaz, David Richerby
and Maria Serna, and also on joint work with these authors and
with George Mertzios and Paul Spirakis.
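The model is easy to simulate; a minimal Monte-Carlo sketch (my own illustration, not from the talk) estimates the fixation probability of a single mutant of fitness $r$ on a graph given as an adjacency list:

```python
import random

def moran_fixation_probability(adj, r, trials=3000, seed=0):
    """Monte-Carlo estimate of the fixation probability of a single
    mutant of fitness r in the Moran process on a graph.

    adj: adjacency list {node: [neighbours]}.  Each step an individual
    is chosen to reproduce proportionally to fitness, and its offspring
    replaces a uniformly random neighbour.
    """
    rng = random.Random(seed)
    nodes = list(adj)
    fixations = 0
    for _ in range(trials):
        mutants = {rng.choice(nodes)}          # mutant arises at a random node
        while 0 < len(mutants) < len(nodes):   # run until absorption
            weights = [r if v in mutants else 1.0 for v in nodes]
            v = rng.choices(nodes, weights=weights)[0]
            u = rng.choice(adj[v])             # offspring replaces a neighbour
            if v in mutants:
                mutants.add(u)
            else:
                mutants.discard(u)
        if mutants:
            fixations += 1
    return fixations / trials
```

On the complete graph (an isothermal graph) the estimate should approach the classical Moran value $(1 - 1/r)/(1 - 1/r^N)$.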
Saturday, 3.30pm
Eric Scott, Kenneth De Jong: Understanding Simple Asynchronous Evolutionary Algorithms
Asynchronous evolutionary algorithms promise to eliminate idle time
and improve the wall-clock time incurred while evolving solutions
to large, complex, stochastic simulation models. The behavior of
asynchronous algorithms is not well understood, however, and it’s
not clear if and when the evolutionary trajectory they follow may
be detrimental. We conduct a preliminary analysis of simple
asynchronous EAs, finding that a bias toward individuals with fast
evaluation times exists in some cases, but that this bias tends
only to speed up convergence, not slow it down.
Talks on Sunday
Sunday, 9.30am
Sandra Astete-Morales, Marie-Liesse Cauwet, Olivier Teytaud: Evolution Strategies with Additive Noise: A Convergence Rate Lower Bound
We consider the problem of optimizing functions corrupted with
additive noise. It is known that evolutionary algorithms can reach
a simple regret of $O(1/\sqrt{n})$ up to logarithmic factors, where $n$ is the
number of function evaluations. We show mathematically that this
bound is tight, at least for a wide family of evolution strategies
without large mutations.
Sunday, 11am
Adam Prügel-Bennett, Jonathan Rowe, Jonathan Shapiro: Run-Time Analysis of Population-Based Evolutionary Algorithm in Noisy Environments
This paper analyses a generational evolutionary algorithm using
only selection and uniform crossover. With a probability
arbitrarily close to one the evolutionary algorithm is shown to
solve OneMax in $O(n\log^2(n))$ function evaluations using
a population of size $c n \log(n)$. We then show that this algorithm
can solve OneMax with noise variance $n$ again in $O(n\log^2(n))$
function evaluations.
Talks on Monday
Monday, 9.30am
Duc-Cuong Dang, Per Kristian Lehre: Efficient Optimisation of Noisy Fitness Functions with Population-based Evolutionary Algorithms
Population-based Evolutionary Algorithms (EAs) can optimise
pseudo-Boolean functions in expected polynomial time, even when
only partial information about the problem is available.
In this paper, we show that this result extends to optimisation of
pseudo-Boolean problems with an additive noise term. Very general
conditions on the noise term are derived, under which the EA optimises
the noisy function in expected polynomial time. In the case of the
OneMax and LeadingOnes problems, efficient optimisation is even
possible when the variance of the noise distribution grows quickly
with the problem size.
Monday, 11am
Keki Burjorjee: Implicit Concurrency in Evolutionary Computation
We establish theoretical support for implicit concurrent
multivariate effect evaluation (implicit concurrency for short),
a versatile computational learning efficiency thought to underlie
general-purpose, non-local, noise-tolerant optimization in genetic
algorithms. We demonstrate that implicit concurrency is indeed a
form of efficient learning by showing that it can be used to obtain
close-to-optimal bounds on the time and queries required to
approximately correctly solve a constrained version ($k = 7$,
$\eta = 1/5$) of a recognizable computational learning problem:
learning parities with noisy membership queries. We argue that a
genetic algorithm that treats the noisy membership query oracle as
a fitness function can be straightforwardly used to approximately
correctly learn the essential attributes in $O(\log^{1.585} n)$
queries and $O(n \log^{1.585} n)$ time, where $n$ is the total
number of attributes. Our proof relies on an accessible symmetry
argument and the use of statistical hypothesis testing to reject
a global null hypothesis at the $10^{-100}$ level of significance.
It is, to the best of our knowledge, the first relatively rigorous
identification of efficient computational learning in a genetic
algorithm on a non-trivial learning problem.
Monday, 1pm
Mathys C. Du Plessis, Andries P. Engelbrecht, Andre Calitz:
Self-Adapting the Brownian Radius in a Differential Evolution Algorithm for Dynamic Environments
Several algorithms aimed at dynamic optimisation problems have been
developed. This paper reports on the incorporation of a
self-adaptive Brownian radius into competitive differential
evolution (CDE). Four variations of a novel technique for achieving
the self-adaptation are suggested and motivated. An experimental
investigation over a large number of benchmark instances is used
to determine the most effective of the four variations. The new
algorithm is compared to its base algorithm on an extensive set
of benchmark problems and its performance analysed. Finally,
the new algorithm is compared to other algorithms by means of
reported results found in the literature. The results indicate that
CDE is improved by the incorporation of the self-adaptive Brownian
radius and that the new algorithm compares well with other algorithms.
Monday, 2pm
Renato Tinos, Darrell Whitley, Francisco Chicano:
Partition Crossover for Pseudo-Boolean Optimization
A partition recombination operator is introduced for all $k$-bounded
pseudo-Boolean functions such as NK landscapes and MAX-$k$-SAT.
Under Partition crossover, the evaluation of the offspring can be
directly obtained from partial evaluations of components found in
the parents. Partition crossover explores the variable interaction
graph of the pseudo-Boolean function in order to partition the
variables of the solution vector. Proofs are presented showing that
if the differing variable assignments found in the two parents can
be partitioned into $q$ non-interacting sets, Partition crossover
can be used to find the best of $2^q$ possible offspring. This is
done at $O(n)$ cost per recombination. Proofs are presented which
show that parents that are locally optimal will always generate
offspring that are locally optimal with respect to a (more
restricted) hyperplane subspace. Empirical experiments show that
parents that are locally optimal generate offspring that are locally
optimal in the full search space more than 80 percent of the time.
Experimental results show the efficiency of the proposed crossover
when used in combination with a hybrid genetic algorithm.
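The core idea can be sketched in a few lines (my own toy implementation, not the authors' code; the representation of $f$ as a list of `(variables, subfunction)` terms and the choice of minimisation are assumptions):

```python
def partition_crossover(f_terms, p1, p2):
    """Partition crossover for a k-bounded pseudo-Boolean function
    (minimisation), represented as a list of (variables, subfunction)
    terms so that f(x) = sum(g(*(x[v] for v in vs)) for vs, g in f_terms).

    Variables where the parents differ are grouped into connected
    components of the variable interaction graph restricted to the
    difference; for each component the better parental assignment is
    kept, evaluated using only the subfunctions touching it.
    """
    n = len(p1)
    diff = {v for v in range(n) if p1[v] != p2[v]}
    # Union-find: two differing variables interact if they share a term.
    parent = {v: v for v in diff}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for vs, _ in f_terms:
        d = [v for v in vs if v in diff]
        for a, b in zip(d, d[1:]):
            parent[find(a)] = find(b)
    components = {}
    for v in diff:
        components.setdefault(find(v), set()).add(v)
    child = list(p1)
    for comp in components.values():
        # Partial evaluation: only terms touching this component change.
        touching = [(vs, g) for vs, g in f_terms if comp & set(vs)]
        def partial(x):
            return sum(g(*(x[v] for v in vs)) for vs, g in touching)
        flipped = list(p1)
        for v in comp:
            flipped[v] = p2[v]
        if partial(flipped) < partial(list(p1)):
            for v in comp:
                child[v] = p2[v]
    return child
```

Because the $q$ components are decided independently from cheap partial evaluations, the child is the best of the $2^q$ possible recombinations.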
Monday, 3.30pm
Luigi Malagò, Giovanni Pistone:
Information Geometry of Gaussian Distributions in View of Stochastic Optimization
We study the optimization of a continuous function by its stochastic
relaxation, i.e., the optimization of the expected value of the
function itself with respect to a density in a statistical model.
We focus on gradient descent techniques applied to models from the
exponential family and in particular on the multivariate Gaussian
distribution. From the theory of the exponential family, we
reparametrize the Gaussian distribution using natural and
expectation parameters, and we derive formulas for natural gradients
in both parameterizations. We discuss some advantages of the natural
parameterization for the identification of submodels in the
Gaussian distribution based on conditional independence assumptions
among variables. Gaussian distributions are widely used in
stochastic optimization and in particular in model-based
Evolutionary Computation, as in Estimation of Distribution
Algorithms and Evolution Strategies. By studying natural
gradient flows over Gaussian distributions, our analysis and results
directly apply to the study of CMA-ES and NES algorithms.
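The key exponential-family identity behind such formulas (a standard fact, stated here as a sketch rather than the paper's derivation) is that the Fisher information in the natural parameters $\theta$ is the Jacobian of the expectation parameters $\eta$, so a natural gradient in one parameterization reduces to a vanilla gradient in the other:

$$ F(\theta) = \frac{\partial \eta}{\partial \theta}, \qquad \tilde\nabla_\theta \, \mathbb{E}_{p_\theta}[f] = F(\theta)^{-1} \, \nabla_\theta \, \mathbb{E}_{p_\theta}[f] = \nabla_\eta \, \mathbb{E}_{p_\eta}[f]. $$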
Monday, 4.30pm
Oswin Krause, Christian Igel:
A More Efficient Rank-one Covariance Matrix Update for Evolution Strategies
Learning covariance matrices of Gaussian distributions is at the
heart of most variable-metric randomized algorithms for continuous
optimization. If the search space dimensionality is high, updating
the covariance or its factorization is computationally expensive.
Therefore, we adopt an algorithm from numerical mathematics for
rank-one updates of Cholesky factors. Our method results in a
quadratic time covariance matrix update scheme with minimal memory
requirements. The numerically stable algorithm leads to triangular
Cholesky factors. Systems of linear equations where the linear
transformation is defined by a triangular matrix can be solved in
quadratic time. This can be exploited to avoid the additional
iterative update of the inverse Cholesky factor required in some
covariance matrix adaptation algorithms proposed in the
literature. When used together with the (1+1)-CMA-ES and the
multi-objective CMA-ES, the new method leads to a memory reduction
by a factor of almost four and a faster covariance matrix update.
The numerical stability and runtime improvements are demonstrated
on a set of benchmark functions.
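The kind of update in question can be sketched as follows (a textbook Givens-style rank-one Cholesky update, offered as an illustration rather than the authors' exact scheme):

```python
import math

def chol_update(L, x):
    """Rank-one update of a lower-triangular Cholesky factor:
    on return, L @ L.T equals the old L @ L.T plus outer(x, x).
    Runs in O(n^2) time with O(n) extra memory, and L stays
    triangular, so subsequent triangular solves remain quadratic.
    """
    n = len(x)
    x = list(x)                        # work on a copy
    for k in range(n):
        r = math.hypot(L[k][k], x[k])  # new diagonal entry
        c = r / L[k][k]
        s = x[k] / L[k][k]
        L[k][k] = r
        for i in range(k + 1, n):
            # Apply the same rotation to the rest of column k.
            L[i][k] = (L[i][k] + s * x[i]) / c
            x[i] = c * x[i] - s * L[i][k]
    return L
```

Keeping the factor triangular is exactly what avoids maintaining a separate inverse factor: systems with $L$ can be solved in quadratic time by substitution.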
Talks on Tuesday
Tuesday, 9.30am
Marie-Liesse Cauwet, Shih-Yuan Chiu, Kuo-Min Lin, David Saint-Pierre,
Fabien Teytaud, Olivier Teytaud, Shi-Jim Yen:
Parallel Evolutionary Algorithms Performing Pairwise Comparisons
We study mathematically and experimentally the convergence rate of
differential evolution and particle swarm optimization for simple
unimodal functions. Due to parallelization concerns, the focus is
on lower bounds on the runtime, i.e., on upper bounds on the speedup,
as a function of the population sizes. Two cases are particularly
relevant: population size of the order of magnitude of the
dimension, and large population size. We use the branching factor
as a tool for proving bounds, and get as upper bounds a linear
speedup for a population size of the same order as the dimension,
and a logarithmic speedup for larger population sizes. We then
propose parametrizations for differential evolution and particle
swarm optimization that reach these bounds.
Tuesday, 11am
Alan Lockett: Insights From Adversarial Fitness Functions
The performance of optimization is usually studied in specific
settings where the fitness functions are highly constrained with
static, stochastic or dynamic properties. This work examines what
happens when the fitness function is a player engaged with the
optimizer in an optimization game. Although the advantage of the
fitness function is known through the No Free Lunch theorems,
several deep and previously unexplored insights about the space of
possible performance measurements arise as a consequence of
studying these adversarial fitness functions, including: 1) Every
continuous and linear method of measuring performance can be
identified with the optimization game for some adversarial
fitness; 2) For any convex continuous performance criterion, there
is some deterministic optimizer that performs best, even when the
fitness function is stochastic or dynamic; 3) Every stochastic
optimization method can be viewed as a probabilistic choice over
countably many deterministic methods.
Tuesday, 1pm
Richard Mealing, Jonathan Shapiro:
Convergence of Strategies in Simple Co-Adapting Games
Simultaneously co-adapting agents in an uncooperative setting can
result in a non-stationary environment where optimisation or
learning is difficult and where the agents' strategies may not
converge to solutions. This work looks at simple simultaneous-move
games with two or three actions and two or three players.
Fictitious play is an old but popular algorithm that can converge
to solutions, albeit slowly, in self-play in games like these.
It models its opponents assuming that they use stationary strategies
and plays a best-response strategy to these models. We propose two
new variants of fictitious play that remove this assumption,
explicitly assuming that the opponents use dynamic strategies.
The opponent's strategy is predicted using a sequence prediction
method in the first variant and a change detection method in the
second variant. Empirical results show that if our variants converge
to solutions, then they converge faster than fictitious play.
However, they do not always converge exactly to correct solutions.
For change detection this happens in only a very small number of
cases, but for sequence prediction there are many. The convergence
of sequence prediction is improved by combining it with fictitious play.
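For reference, classical fictitious play in self-play can be sketched in a few lines (my own toy implementation for a two-player zero-sum matrix game, not the paper's variants):

```python
def fictitious_play(payoff, rounds=20000):
    """Fictitious play in self-play for a two-player zero-sum matrix
    game; payoff[i][j] is the row player's payoff for actions (i, j).
    Each round both players best-respond to the empirical frequencies
    of the opponent's past actions; returns both empirical strategies.
    """
    m, n = len(payoff), len(payoff[0])
    row_counts, col_counts = [0] * m, [0] * n
    row_counts[0] += 1
    col_counts[0] += 1          # arbitrary opening moves
    for _ in range(rounds):
        # Row maximises against the column player's empirical mix.
        i = max(range(m), key=lambda a: sum(payoff[a][b] * col_counts[b]
                                            for b in range(n)))
        # Column minimises the row payoff against the row's empirical mix.
        j = min(range(n), key=lambda b: sum(payoff[a][b] * row_counts[a]
                                            for a in range(m)))
        row_counts[i] += 1
        col_counts[j] += 1
    total = rounds + 1
    return ([c / total for c in row_counts], [c / total for c in col_counts])
```

In a zero-sum game such as matching pennies, the empirical frequencies converge (slowly) to the mixed equilibrium, which is the slow behaviour the paper's variants aim to improve on.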
Tuesday, 2pm
Golnaz Badkobeh, Per Kristian Lehre, Dirk Sudholt:
Black-Box Complexity of Parallel Search with Distributed Populations
Many metaheuristics such as island models and cellular evolutionary
algorithms use a network of distributed populations that
communicate search points along a spatial communication topology.
The idea is to slow down the spread of information, reducing the
risk of "premature convergence", sacrificing
exploitation for increased exploration.
We introduce the distributed black-box complexity as the minimum
number of function evaluations every distributed black-box
algorithm needs to optimise a given problem. It depends on the
topology, the number of populations $\lambda$, and the problem
size $n$. We give upper and lower bounds on the distributed
black-box complexity for unary unbiased black-box algorithms on a
class of unimodal functions in order to study the impact of
communication topologies on performance. Our results confirm that
rings and torus graphs can lead to higher black-box complexities,
compared to unrestricted communication. We further determine
cutoff points for the number of populations $\lambda$, above which
no distributed black-box algorithm can achieve linear speedups.
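To make the distributed setting concrete, here is a toy island model on a ring (my own sketch; the choice of OneMax, (1+1) islands, and the migration interval are arbitrary illustrations, not the paper's model):

```python
import random

def ring_island_onemax(n, islands, generations, migrate_every=10, seed=0):
    """Toy island model on OneMax: several (1+1) EAs arranged on a
    ring; every `migrate_every` generations each island receives its
    left neighbour's search point and keeps the better of the two.
    Returns the best OneMax value found across all islands.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(islands)]
    for g in range(1, generations + 1):
        for idx in range(islands):
            # Local (1+1) EA step with standard bit mutation.
            y = [b ^ (rng.random() < 1.0 / n) for b in pop[idx]]
            if sum(y) >= sum(pop[idx]):
                pop[idx] = y
        if g % migrate_every == 0:
            # Migration along the ring topology (copy from left neighbour).
            old = [list(x) for x in pop]
            for idx in range(islands):
                incoming = old[(idx - 1) % islands]
                if sum(incoming) > sum(pop[idx]):
                    pop[idx] = list(incoming)
    return max(sum(x) for x in pop)
```

The migration interval controls how quickly information spreads around the ring, which is exactly the quantity the topology-dependent bounds in the abstract capture.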
Tuesday, 3.30pm
Thomas Jansen:
On the Black-Box Complexity of Example Functions: The Real Jump Function
Black-box complexity measures the difficulty of classes of functions
with respect to optimisation by black-box algorithms. Comparing the
black-box complexity with the worst-case performance of the best known
randomised search heuristic can help assess whether the randomised
search heuristic is efficient or whether there is room for improvement.
When considering an example function it is necessary to extend it to
a class of functions, since single functions always have black-box
complexity $1$. Different kinds of extensions of single functions to
function classes have been considered. In cases where the gap
between the performance of the best randomised search heuristic and
the black-box complexity is still large, it can help to consider
more restricted black-box complexity notions like unbiased black-box
complexity. For the well-known Jump function both approaches have
not been successful so far. We argue that the problem is not with
the notion of black-box complexity but with the extension to a
function class. We propose a different extension and show that for
this extension there is a much better agreement even between the
performance of an extremely simple evolutionary algorithm and the
most general notion of black-box complexity.
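For reference, the classical Jump$_k$ function whose extension the talk revisits, as commonly defined in the runtime-analysis literature (the talk's "real Jump" extension differs from this single function):

```python
def jump(x, k):
    """Classical Jump_k benchmark (maximisation) on bit strings.

    Equals k + OneMax(x) when OneMax(x) <= n - k or x is all ones,
    and n - OneMax(x) inside the deceptive 'gap', which must be
    crossed by flipping k bits simultaneously.
    """
    n, s = len(x), sum(x)
    if s == n or s <= n - k:
        return k + s
    return n - s
```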
