
5th Workshop on Evolutionary Computation for the Automated Design of Algorithms

July 11-15, 2015 @ GECCO 2015 in Madrid, Spain

Workshop Schedule & Location

Time: Sunday, 12 July 2015, 9:00-13:00
Location: Comendador
Session 1 - Foundations - Session Chair: John Woodward
9:00-9:20 Workshop Intro
9:20-9:50 Paper 1 - Sean Harris, Travis Bueter, Daniel R. Tauritz "A Comparison of Genetic Programming Variants for Hyper-Heuristics"
9:50-10:20 Paper 2 - Eric O. Scott, Jeffrey K. Bassett "Learning Genetic Representations for Classes of Real-Valued Optimization Problems"
10:20-10:50 Paper 3 - Matthew A. Martin, Daniel R. Tauritz "Hyper-Heuristics: A Study On Increasing Primitive-Space"
Session 2 - Applications - Session Chair: Daniel Tauritz
11:10-11:40 Paper 4 - Gopinath Chennupati, R. Muhammad Atif Azad, Conor Ryan "Synthesis of Parallel Iterative Sorts with Multi-Core Grammatical Evolution"
11:40-12:10 Paper 5 - Patricia Ryser-Welch, Julian F. Miller, Shahriar Asta "Generating Human-readable Algorithms for the Travelling Salesman Problem using Hyper-Heuristics"
12:10-12:40 Invited Talk by Holger Hoos - Invited Talk Chair: Manuel López-Ibáñez
12:40-12:55 Discussion Panel
12:55-13:00 Final Words

Invited Talk

Speaker: Holger H. Hoos

Title: From Programs to Program Spaces: Leveraging Machine Learning and Optimisation for Automated Algorithm Design

Abstract: In recent years, there has been a significant increase in the use of automated algorithm design methods, notably automated algorithm configuration and selection techniques, across many areas within artificial intelligence, operations research and beyond. These methods are based on cutting-edge machine learning and optimisation techniques and, in many cases, have prompted advances in those areas. In this presentation, I will give a brief overview of these automated algorithm design methods and introduce Programming by Optimisation (PbO), a principled approach for developing high-performance software based on them. I will explain how PbO can fundamentally change the nature of software development and give examples of its successful application to a range of prominent problems, including propositional satisfiability, mixed integer programming and machine learning classification. I will also discuss how PbO can be used not only to obtain performance-optimised software for particular use cases, but also to automatically create effective parallel algorithms from sequential sources.

Short Bio: Holger H. Hoos is a Professor of Computer Science and a Faculty Associate at the Peter Wall Institute for Advanced Studies at the University of British Columbia (Canada). His main research interests span empirical algorithmics, artificial intelligence, bioinformatics and computer music. He is known for his work on the automated design of high-performance algorithms and on stochastic local search methods. Holger is a co-author of the book "Stochastic Local Search: Foundations and Applications", and his research has been published in numerous book chapters, journals, and at major conferences in artificial intelligence, operations research, molecular biology and computer music. Holger was elected a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI) in 2015 and won two prestigious IJCAI/JAIR best paper prizes in 2009 and 2010. He is a past president of the Canadian Artificial Intelligence Association / Association pour l'intelligence artificielle au Canada (CAIAC) and Associate Editor of the Journal of Artificial Intelligence Research (JAIR). Currently, his group is helping UBC to produce better exam timetables, Actenum Inc. to increase production efficiency in the oil and gas industry, and IBM to improve their CPLEX optimisation software, which is used by 50% of the world's largest companies and thousands of universities. (For further information, see Holger's web page.)


Workshop Description

How can we automatically generate algorithms on demand? While this was one of the original aims of Machine Learning and Artificial Intelligence in the early 1950s, and more recently of Genetic Programming in the early 1990s, existing techniques have fallen short of this elusive goal. This workshop outlines a number of steps in the right direction on the path to achieving it. In particular, it focuses on the burgeoning field of hyper-heuristics, which are meta-heuristics applied to a space of algorithms; i.e., any method of sampling a set of candidate algorithms. Genetic Programming has most famously been employed to this end, but random search and iterative hill-climbing have also successfully been employed to automatically design novel (components of) algorithms.

This approach contrasts with standard Genetic Programming, which attempts to build programs from scratch from a typically small set of atomic functions. Instead, we take already existing programs and allow evolution to improve upon them. When the automatic design of algorithms is done using a Genetic Programming system (effectively in vitro), the methodology is typically referred to as Genetic Programming as a Hyper-heuristic. When the automatic design of algorithms is done directly on source code (effectively in situ), the methodology is typically referred to as Genetic Improvement [5, 7].

Although most Evolutionary Computation techniques are designed to generate specific solutions to a given instance of a problem, some of these techniques can be generalized to solve entire classes of problems. For instance, while there are many examples of Evolutionary Algorithms for evolving classification models in data mining and machine learning, the work described in [1] employed a hyper-heuristic using Genetic Programming to create a generic classification algorithm which will, in turn, generate a specific classification model for any given classification dataset, in any given application domain. In other words, the hyper-heuristic operates at a higher level of abstraction than most current search methodologies: we search the space of algorithms rather than searching the problem solution space directly [2], raising the level of generality of the solutions produced by the hyper-heuristic evolutionary algorithm. For instance, a hyper-heuristic can generate a generic heuristic for solving any instance of the traveling salesman problem, involving any number of cities and any set of distances between them [3], whereas a conventional evolutionary algorithm would evolve a solution to just one particular instance, with a predefined set of cities and associated distances.
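The distinction between searching the solution space and searching the algorithm space can be made concrete with a minimal sketch. The snippet below is illustrative only (all names are hypothetical and the nearest-neighbour heuristic simply stands in for an evolved one): the key point is the fitness function, which scores a candidate *heuristic* by its average tour length over a whole class of TSP instances, whereas a conventional evolutionary algorithm would score a candidate *tour* on one fixed instance.

```python
import random

def random_instance(n_cities, seed):
    # One TSP instance: cities as points in the unit square.
    rng = random.Random(seed)
    return [(rng.random(), rng.random()) for _ in range(n_cities)]

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def tour_length(cities, order):
    n = len(order)
    return sum(dist(cities[order[i]], cities[order[(i + 1) % n]])
               for i in range(n))

def nearest_neighbour(cities):
    # A constructive heuristic: a *program* applicable to any instance,
    # standing in here for a heuristic produced by a hyper-heuristic.
    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: dist(cities[last], cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def heuristic_fitness(heuristic, instances):
    # A hyper-heuristic scores a candidate heuristic by its average
    # performance over a class of instances, not on a single instance.
    return sum(tour_length(inst, heuristic(inst))
               for inst in instances) / len(instances)

instances = [random_instance(20, seed) for seed in range(10)]
print(heuristic_fitness(nearest_neighbour, instances))
```

A heuristic that scores well under this fitness function is, by construction, useful for any instance drawn from the class, which is exactly the raised level of generality described above.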

Hyper-heuristics differ from standard Genetic Programming in several respects. In essence, they consist of one stage in which an algorithmic framework, or template, is defined along with algorithmic primitives, and another stage in which a meta-heuristic such as Genetic Programming searches the program space defined by that template and those primitives. This approach can be identified with the Template Method pattern from the Design Patterns of object-oriented programming. In short, the human provides the overall architecture of the algorithm (e.g., loops and conditional branching) and the meta-heuristic fills in the details (for example, the bodies of the loops, or the conditions and actions of conditional branching statements). While this allows searches in constrained search spaces based on problem knowledge, it does not limit the generality of the approach, as the template can be chosen to be any executable program and the primitive set can be selected to be Turing-complete. Typically, however, the initial primitive set is composed of primitive components of existing high-performing algorithms for the problems being targeted; this more targeted approach very significantly reduces the initial search space, resulting in a practical approach rather than a mere theoretical curiosity. Iteratively refining the primitives allows gradual and directed enlargement of the search space until convergence.
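The two-stage structure can be sketched as follows. This is a toy illustration under stated assumptions (the template, slot names and primitives are all hypothetical, and exhaustive enumeration stands in for the Genetic Programming search): a human-written loop template exposes two slots, a small primitive set fills them, and the second stage searches the resulting program space for the best-performing combination.

```python
import itertools
import random

rng = random.Random(0)

# Stage one: the human fixes the control flow (the template); the
# 'perturb' and 'accept' slots are the evolvable parts.
def make_algorithm(perturb, accept):
    def algorithm(f, x0, iters=200):
        x = x0
        for _ in range(iters):
            y = perturb(x)
            if accept(f, x, y):
                x = y
        return x
    return algorithm

# Primitive sets for each slot (in practice, drawn from components of
# existing high-performing algorithms for the targeted problem).
PERTURB = {
    "small": lambda x: x + rng.uniform(-0.1, 0.1),
    "large": lambda x: x + rng.uniform(-2.0, 2.0),
}
ACCEPT = {
    "greedy": lambda f, x, y: f(y) > f(x),
    "always": lambda f, x, y: True,
}

def fitness(algorithm, f, starts):
    # Score a candidate *algorithm*, not a candidate solution, by its
    # performance over several starting points.
    return sum(f(algorithm(f, x0)) for x0 in starts) / len(starts)

objective = lambda x: -(x - 3.0) ** 2   # toy objective, maximum at x = 3
starts = [-5.0, 0.0, 10.0]

# Stage two: search the (here, tiny) program space the template defines;
# a real hyper-heuristic would use Genetic Programming at this point.
best = max(itertools.product(PERTURB, ACCEPT),
           key=lambda c: fitness(make_algorithm(PERTURB[c[0]], ACCEPT[c[1]]),
                                 objective, starts))
print(best)
```

Because the template constrains the control flow, the search space contains only well-formed algorithms, which is precisely why the constrained search remains practical while the primitive set determines how general it can be.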

As meta-heuristics are themselves a type of algorithm, they too can be automatically designed employing hyper-heuristics. For instance, in 2007, genetic programming was used to evolve mate selection in evolutionary algorithms [8]; in 2011, linear genetic programming was used to evolve crossover operators [9]; more recently, genetic programming was used to evolve complete black-box search algorithms [10,11].

The main objective of this workshop is to discuss hyper-heuristics employing evolutionary computation methods for generating algorithms. These methods have the advantage of producing solutions that are applicable to any instance of a problem domain, instead of a solution specifically produced for a single instance of the problem. The areas of application of these methods include, for instance, data mining, machine learning, and optimization [1-11].

[1] Gisele L. Pappa and Alex A. Freitas. Automating the Design of Data Mining Algorithms: An Evolutionary Computation Approach. Springer, Natural Computing Series, 2010.
[2] Edmund K. Burke, Matthew Hyde, Graham Kendall and John Woodward. A genetic programming hyper-heuristic approach for evolving 2-D strip packing heuristics. IEEE Transactions on Evolutionary Computation, 14(6):942-958, December 2010.
[3] M. Oltean and D. Dumitrescu. Evolving TSP heuristics using multi expression programming. In Computational Science - ICCS 2004, Lecture Notes in Computer Science 3037, pages 670-673. Springer, 2004.
[4] John R. Woodward and Jerry Swan. The automatic generation of mutation operators for genetic algorithms. In Proceedings of the 14th Annual Conference on Genetic and Evolutionary Computation (GECCO '12), 2012.
[5] William B. Langdon and Mark Harman. Genetically Improving 50000 Lines of C++. Research Note RN/12/09, Department of Computer Science, University College London, UK, 2012.
[6] Su Nguyen, Mengjie Zhang, Mark Johnston and Kay Chen Tan. Automatic Design of Scheduling Policies for Dynamic Multi-objective Job Shop Scheduling via Cooperative Coevolution Genetic Programming. IEEE Transactions on Evolutionary Computation, 18(2):193-208, April 2014.
[7] Justyna Petke, Mark Harman, William B. Langdon, and Westley Weimer. Using Genetic Improvement & Code Transplants to Specialise a C++ Program to a Problem Class. In Proceedings of the 17th European Conference on Genetic Programming (EuroGP 2014), Granada, Spain, 2014. Springer.
[8] Ekaterina A. Smorodkina and Daniel R. Tauritz. Toward Automating EA Configuration: the Parent Selection Stage. In Proceedings of CEC 2007 - IEEE Congress on Evolutionary Computation, pages 63-70, Singapore, September 25-28, 2007.
[9] Brian W. Goldman and Daniel R. Tauritz. Self-Configuring Crossover. In Proceedings of the 13th Annual Conference Companion on Genetic and Evolutionary Computation (GECCO '11), pages 575-582, Dublin, Ireland, July 12-16, 2011.
[10] Matthew A. Martin and Daniel R. Tauritz. Evolving Black-Box Search Algorithms Employing Genetic Programming. In Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation (GECCO '13), pages 1497-1504, Amsterdam, The Netherlands, July 6-10, 2013.
[11] Matthew A. Martin and Daniel R. Tauritz. A Problem Configuration Study of the Robustness of a Black-Box Search Algorithm Hyper-Heuristic. In Proceedings of the 16th Annual Conference Companion on Genetic and Evolutionary Computation (GECCO '14), pages 1389-1396, Vancouver, BC, Canada, July 12-16, 2014.

Call for Papers

This workshop explores the Automated Design of Algorithms employing hyper-heuristics, which are meta-heuristics applied to algorithm space, with an emphasis on hyper-heuristics of the evolutionary computation persuasion. Genetic Programming has most famously been employed to this end, but random search and iterative hill-climbing have also successfully been employed to automatically design novel (components of) algorithms. These methods have the advantage of producing solutions that are applicable to any instance of a specified problem domain, instead of a solution specifically produced for a single problem instance. This is particularly useful for real-world problem solving, where one can afford a large amount of a priori computational time in order to subsequently solve many problem instances drawn from a specified problem domain. The areas of application of these methods include, for instance, data mining, machine learning, optimization, bioinformatics, image processing, economics, cyber security, critical infrastructure protection, etc. The workshop welcomes original submissions on all aspects of Evolutionary Computation for the Automated Design of Algorithms, which include - but are not limited to - the following topics and themes:

Important Dates

Paper Submission

Submitted papers may not exceed 8 pages and must comply with the GECCO 2015 Call for Papers Preparation Instructions. Note, however, that the review process of the workshop is not double-blind; hence, authors' information should be included in the paper.

To submit, E-mail papers to both and

All accepted papers will be presented at the workshop and appear in the GECCO Conference Companion Proceedings.

Organizers & Contact Info



Daniel R. Tauritz is an Associate Professor in the Department of Computer Science at the Missouri University of Science and Technology (S&T), on sabbatical at Sandia National Laboratories for the 2014-2015 academic year, a former Guest Scientist at Los Alamos National Laboratory (LANL), the founding director of S&T's Natural Computation Laboratory, and founding academic director of the LANL/S&T Cyber Security Sciences Institute. He received his Ph.D. in 2002 from Leiden University for Adaptive Information Filtering employing a novel type of evolutionary algorithm. He served previously as GECCO 2010 Late Breaking Papers Chair, COMPSAC 2011 Doctoral Symposium Chair, GECCO 2012 GA Track Co-Chair, and GECCO 2013 GA Track Co-Chair. For several years he has served on the GECCO GA track program committee, the Congress on Evolutionary Computation program committee, and a variety of other international conference program committees. His research interests include the design of hyper-heuristics and self-configuring evolutionary algorithms and the application of computational intelligence techniques in cyber security, critical infrastructure protection, and search-based software engineering. He was granted a US patent for an artificially intelligent rule-based system to assist teams in becoming more effective by improving the communication process between team members.


John R. Woodward is a Lecturer at the University of Stirling, within the CHORDS group, and is employed on the DAASE project; for the previous four years he was a lecturer at the University of Nottingham. He holds a BSc in Theoretical Physics, an MSc in Cognitive Science and a PhD in Computer Science, all from the University of Birmingham. His research interests include Automated Software Engineering, particularly Search Based Software Engineering, Artificial Intelligence/Machine Learning, and in particular Genetic Programming. He has over 50 publications in Computer Science, Operations Research and Engineering, including both theoretical and empirical contributions, and has given over 100 talks at international conferences and as an invited speaker at universities. He has worked in industrial, military, educational and academic settings, and has been employed by EDS, CERN, the RAF and three UK universities.


Manuel López-Ibáñez is a Postdoctoral Researcher (Chargé de recherches) of the Belgian Fund for Scientific Research (F.R.S.-FNRS) working at the IRIDIA laboratory of Université libre de Bruxelles, Brussels, Belgium. He received the M.S. degree in computer science from the University of Granada, Granada, Spain, in 2004, and the Ph.D. degree from Edinburgh Napier University, U.K., in 2009. He has published 13 journal papers, 6 book chapters and 33 papers in peer-reviewed proceedings of international conferences on diverse topics such as evolutionary algorithms, ant colony optimisation, multi-objective optimisation, and various combinatorial and real-world optimisation problems. His current research interests are experimental analysis, automatic configuration and automatic design of optimisation algorithms, for single and multi-objective optimisation. He is the lead developer and current maintainer of the irace automatic configuration method.

Previous ECADA workshops