9:00-9:30 | Invited IAM Talk: Thomas Bäck "Optimization when the Evaluation Budget is very Limited: Algorithms and Applications"
9:30-9:50 | IAM Paper: Daniel Hein, Steffen Udluft, Thomas A. Runkler "Generating Interpretable Fuzzy Controllers using Particle Swarm Optimization and Genetic Programming"
9:50-10:10 | ECADA Paper: Samuel N. Richter, Daniel R. Tauritz "The Automated Design of Probabilistic Selection Methods for Evolutionary Algorithms"
10:10-10:40 | Invited ECADA Talk: Emma Hart "Lifelong Learning Methods in Heuristic Optimisation for Continual Problem Solving" |
Speaker: Emma Hart
Title: Lifelong Learning Methods in Heuristic Optimisation for Continual Problem Solving
Abstract: In the general field of Artificial Intelligence, there has been a recent shift in attention towards lifelong learning machines: systems that continually learn from experience and improve their performance over time. In contrast, there has been very little focus on this area in the optimisation domain. I will discuss why it is important that optimisers used in the real world can continuously learn, then describe the important components of a life-long optimiser. I will illustrate the concept with two examples. First, a system capable of dealing with a continuous stream of combinatorial optimisation problems that improves over time and reacts to changes in instance characteristics. Second, recent work on a real-world application to evolve policies for managing a community-energy system.
Short Bio: Prof. Hart gained a 1st Class Honours Degree in Chemistry from the University of Oxford, followed by an MSc in Artificial Intelligence from the University of Edinburgh. Her PhD, also from the University of Edinburgh, explored the use of immunology as an inspiration for computing, examining a range of techniques applied to optimisation and data classification problems. She moved to Edinburgh Napier University in 2000 as a lecturer, and was promoted to a Chair in Natural Computation in 2008. She is active worldwide in the field of Evolutionary Computation, for example as General Chair of PPSN 2016 and as a Track Chair at GECCO for several years. She has given keynotes at EURO 2016 and UKCI 2015, as well as invited talks and tutorials at many universities and international conferences. She has been Editor-in-Chief of Evolutionary Computation (MIT Press) since January 2016 and is an elected member of the ACM SIGEVO Executive Board. She is also a member of the UK Operations Research Society Research Panel.
The main objective of this workshop is to discuss hyper-heuristics and related methods, including but not limited to evolutionary computation methods, for generating and improving algorithms with the goal of producing solutions (algorithms) that are applicable to multiple instances of a problem domain. The areas of application of these methods include optimization, data mining and machine learning [1-18].
Automatically generating and improving algorithms by means of other algorithms has been the goal of several research fields, including Artificial Intelligence in the early 1950s, Genetic Programming in the early 1990s, and, more recently, automated algorithm configuration [1] and hyper-heuristics [2]. The term hyper-heuristics generally describes meta-heuristics applied to a space of algorithms. While Genetic Programming has most famously been used to this end, other evolutionary algorithms and meta-heuristics have successfully been used to automatically design novel (components of) algorithms. Automated algorithm configuration grew from the necessity of tuning the parameter settings of meta-heuristics, and it has produced several powerful (hyper-heuristic) methods capable of designing new algorithms by either selecting components from a flexible algorithmic framework [3,4] or recombining them according to a grammar description [5,9].
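To make the component-recombination idea concrete, the following minimal Python sketch derives a candidate algorithm design from a small grammar whose terminals are algorithmic components. All grammar symbols and component names are invented for illustration; this is not the representation used in [3,4,5,9].

    import random

    # Illustrative grammar: non-terminals expand into algorithmic components.
    GRAMMAR = {
        "<algorithm>": [["<init>", "<select>", "<vary>", "<replace>"]],
        "<init>":      [["random_init"], ["greedy_init"]],
        "<select>":    [["tournament"], ["roulette"]],
        "<vary>":      [["one_point_xover"], ["uniform_xover"], ["mutation_only"]],
        "<replace>":   [["generational"], ["steady_state"]],
    }

    def derive(symbol="<algorithm>", rng=random):
        """Expand a symbol into a flat list of component names."""
        if symbol not in GRAMMAR:  # terminal: a concrete algorithmic component
            return [symbol]
        out = []
        for s in rng.choice(GRAMMAR[symbol]):
            out.extend(derive(s, rng))
        return out

    print(derive(rng=random.Random(42)))
    # e.g. ['greedy_init', 'roulette', 'mutation_only', 'steady_state']

A grammar-based hyper-heuristic such as those in [5,9] searches over such derivations, evaluating each candidate design on training instances; the sketch shows only the representation, not the search.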
Although most Evolutionary Computation techniques are designed to generate specific solutions to a given instance of a problem, one of the defining goals of hyper-heuristics is to produce solutions that solve more generic problems. For instance, while there are many examples of Evolutionary Algorithms for evolving classification models in data mining and machine learning, the work described in [8] employed a hyper-heuristic using Genetic Programming to create a generic classification algorithm which in turn generates a specific classification model for any given classification dataset, in any given application domain. In other words, the hyper-heuristic operates at a higher level of abstraction than most current search methodologies: it searches the space of algorithms rather than the problem solution space directly [9], raising the level of generality of the solutions produced by the hyper-heuristic evolutionary algorithm. In contrast to standard Genetic Programming, which attempts to build programs from scratch from a typically small set of atomic functions, hyper-heuristic methods specify an appropriate set of primitives (e.g., algorithmic components) and allow evolution to combine them in novel ways as appropriate for the targeted problem class. While this constrains the search space based on problem knowledge, it does not limit the generality of the approach, as the primitive set can be chosen to be Turing-complete. Typically, however, the initial primitive set is composed of components of existing high-performing algorithms for the targeted problems; this targeted approach significantly reduces the initial search space, making the method practical rather than a mere theoretical curiosity. Iterative refinement of the primitives then allows gradual, directed enlargement of the search space until convergence.
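As a toy illustration of searching algorithm space rather than solution space, the Python sketch below represents a heuristic as a sequence of move primitives and scores it by its average performance over several problem instances, so the evolved heuristic must generalise. All primitive and function names are invented for this sketch; it does not reproduce the system in [8].

    import random

    def make_instance(n=20, rng=random):
        """An instance is a hidden target bit string; fitness counts matches."""
        target = [rng.randint(0, 1) for _ in range(n)]
        return lambda s: sum(a == b for a, b in zip(s, target))

    def try_flips(s, fit, idxs):
        """Flip the given bits; keep the change only if fitness improves."""
        t = list(s)
        for i in idxs:
            t[i] ^= 1
        return t if fit(t) > fit(s) else s

    # Move primitives an evolved heuristic may combine (illustrative names).
    PRIMITIVES = {
        "flip1": lambda s, fit, rng: try_flips(s, fit, [rng.randrange(len(s))]),
        "flip2": lambda s, fit, rng: try_flips(s, fit, rng.sample(range(len(s)), 2)),
        "flip3": lambda s, fit, rng: try_flips(s, fit, rng.sample(range(len(s)), 3)),
    }

    def run_heuristic(seq, fit, n=20, steps=200, rng=random):
        """Apply the primitive sequence cyclically to one instance."""
        s = [rng.randint(0, 1) for _ in range(n)]
        for step in range(steps):
            s = PRIMITIVES[seq[step % len(seq)]](s, fit, rng)
        return fit(s)

    def heuristic_fitness(seq, instances, rng):
        """Average solution quality over instances measures generality."""
        return sum(run_heuristic(seq, f, rng=rng) for f in instances) / len(instances)

    rng = random.Random(1)
    instances = [make_instance(rng=rng) for _ in range(5)]
    best = [rng.choice(sorted(PRIMITIVES)) for _ in range(4)]
    best_fit = heuristic_fitness(best, instances, rng)
    for _ in range(30):  # (1+1)-style search in the space of heuristics
        cand = list(best)
        cand[rng.randrange(len(cand))] = rng.choice(sorted(PRIMITIVES))
        cand_fit = heuristic_fitness(cand, instances, rng)
        if cand_fit >= best_fit:
            best, best_fit = cand, cand_fit
    print(best, best_fit)

Replacing the simple (1+1)-style loop with Genetic Programming over richer primitive structures gives the hyper-heuristic setting described above.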
As meta-heuristics are themselves a type of algorithm, they too can be automatically designed employing hyper-heuristics. For instance, in 2007 Genetic Programming was used to evolve mate selection in evolutionary algorithms [11]; in 2011 Linear Genetic Programming was used to evolve crossover operators [12]; and more recently Genetic Programming was used to evolve complete black-box search algorithms [13,14,16]. Moreover, hyper-heuristics may be applied before an algorithm is deployed (offline) [5], while problems are being solved (online) [9], or even continuously, learning by solving new problems (life-long) [19]. Offline and life-long hyper-heuristics are particularly useful for real-world problem solving, where one can afford a large amount of a priori computational time in order to subsequently solve many problem instances drawn from a specified problem domain, thus amortizing the a priori computational time over repeated problem solving.
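As a hint of what "evolving a crossover operator" can look like in code, the following linear-GP-flavoured sketch, loosely inspired by (and not reproducing) [12], represents a crossover operator as a short instruction list that a hyper-heuristic would evolve; the instruction set is invented for illustration.

    import random

    # Illustrative instruction set: each produces one child gene from parent genes.
    INSTRUCTIONS = {
        "take_a": lambda a, b, rng: a,
        "take_b": lambda a, b, rng: b,
        "coin":   lambda a, b, rng: a if rng.random() < 0.5 else b,
        "and":    lambda a, b, rng: a & b,
        "or":     lambda a, b, rng: a | b,
    }

    def make_crossover(program):
        """Compile an instruction list into a crossover operator: gene i is
        produced by instruction i modulo the program length."""
        def crossover(pa, pb, rng=random):
            return [INSTRUCTIONS[program[i % len(program)]](a, b, rng)
                    for i, (a, b) in enumerate(zip(pa, pb))]
        return crossover

    xover = make_crossover(["take_a", "coin", "or"])  # the evolvable genome
    print(xover([1, 1, 0, 0, 1, 0], [0, 1, 1, 0, 0, 1], random.Random(0)))

An offline hyper-heuristic would evolve the instruction list on training instances and then deploy the resulting operator, amortizing the design cost over subsequent problem solving.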
Very little is yet known about the foundations of hyper-heuristics, such as the impact of the meta-heuristic that explores the algorithm space on the performance of the automatically designed algorithms. An initial study compared the performance of algorithms generated by hyper-heuristics powered by five major types of Genetic Programming [18].
| Workshop paper submission deadline
April 10th 2018 | Notification of acceptance |
April 24th 2018 | Camera-ready deadline |
TBA | Author registration deadline |
E-mail: manuel.lopez-ibanez@manchester.ac.uk Manuel López-Ibáñez is a lecturer in the Decision and Cognitive Sciences Group of the Alliance Manchester Business School, University of Manchester, UK. Between October 2011 and September 2015, he was a postdoctoral researcher (Chargé de recherche) of the Belgian Fund for Scientific Research (F.R.S.-FNRS) working at the IRIDIA laboratory of Université libre de Bruxelles, Brussels, Belgium. He received the M.S. degree in computer science from the University of Granada, Granada, Spain, in 2004, and the Ph.D. degree from Edinburgh Napier University, U.K., in 2009. He has published 17 journal papers, 6 book chapters and 36 papers in peer-reviewed proceedings of international conferences on diverse topics such as evolutionary algorithms, ant colony optimization, multi-objective optimization, and various combinatorial and real-world optimization problems. His current research interests are the automatic configuration and design of optimization algorithms for single- and multi-objective optimization. He is the lead developer and current maintainer of the irace automatic configuration method (http://iridia.ulb.ac.be/irace).
|
E-mail: dtauritz@acm.org Daniel R. Tauritz is an Associate Professor and Associate Chair for Undergraduate Studies in the Department of Computer Science at the Missouri University of Science and Technology (S&T), a contract scientist for Sandia National Laboratories, a university collaboration scientist at Los Alamos National Laboratory (LANL), the founding director of S&T's Natural Computation Laboratory, and founding academic director of the LANL/S&T Cyber Security Sciences Institute. He received his Ph.D. in 2002 from Leiden University for his work on adaptive information filtering employing a novel type of evolutionary algorithm. He has served on many EC-related conference program committees, including the GECCO GA track program committee, the Congress on Evolutionary Computation program committee, the PPSN program committee, and a variety of other international conference program committees. His research interests include the design of hyper-heuristics and self-configuring evolutionary algorithms and the application of computational intelligence techniques in cyber security, critical infrastructure protection, and program understanding. He was granted a US patent for an artificially intelligent rule-based system to assist teams in becoming more effective by improving the communication process between team members.
|
E-mail: j.woodward@qmul.ac.uk John R. Woodward is a Lecturer in Computer Science at Queen Mary University of London, within the School of Electronic Engineering and Computer Science, and for the previous four years was a lecturer at the University of Nottingham. He holds a BSc in Theoretical Physics, an MSc in Cognitive Science and a PhD in Computer Science, all from the University of Birmingham. His research interests are Operational Research, Optimization, and Genetic Programming. He has over 50 publications in Computer Science, Operations Research and Engineering, including both theoretical and empirical contributions, and has given over 100 talks at international conferences and as an invited speaker at universities. He has worked in industrial, military, educational and academic settings, having been employed by EDS, CERN, the RAF and three UK universities.
E-mail: sbr@cs.stir.ac.uk Alexander "Sandy" Brownlee is a Senior Research Assistant at the University of Stirling, Stirling, U.K., having previously worked at Loughborough University and as a Software Engineer in industry. He holds a BSc and PhD in Computer Science from Robert Gordon University. He has published more than 40 papers in international journals and conferences. His research interests include evolutionary algorithms and estimation of distribution algorithms, search-based software engineering, hyper-heuristics and fitness modelling. His goal is explainable optimization: combining meta-heuristics and related methods with machine learning to both find optimal solutions and reveal insights into the problem, helping people make better decisions, and closing the loop to automatically improve the algorithms.