Accepted IEEE CEC 2019 Tutorials

CEC-T01 Brain Storm Optimization Algorithms

Organized by Shi Cheng and Yuhui Shi


For swarm intelligence algorithms, each individual in the swarm represents a solution in the search space, and it can also be seen as a data sample from the search space. Based on analyses of these data, more effective algorithms and search strategies could be proposed. The brain storm optimization (BSO) algorithm is a new and promising swarm intelligence algorithm that simulates the human brainstorming process. Through convergent and divergent operations, individuals in BSO are grouped and diverged in the search space/objective space. In this tutorial, the development history and the state of the art of the BSO algorithm are reviewed. Every individual in the BSO algorithm is not only a solution to the problem to be optimized, but also a data point that reveals the landscape of the problem. Building on this survey of brain storm optimization algorithms, further analyses could be conducted to understand how a BSO algorithm works, and more variants of BSO algorithms could be proposed to solve different problems.
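The grouping-then-diverging loop described above can be sketched in a few lines. The following is an illustration only: it replaces the k-means clustering of the canonical BSO with a simple fitness-rank grouping, and the step-size schedule and probabilities are hypothetical choices, not the algorithm's standard settings.

```python
import math
import random

def sphere(x):
    return sum(v * v for v in x)

def bso_minimize(f, dim=5, pop=20, clusters=4, iters=200, seed=0):
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    for t in range(iters):
        # Convergent operation: group the individuals.  Canonical BSO uses
        # k-means clustering; here we simply group by fitness rank.
        X.sort(key=f)
        groups = [X[i::clusters] for i in range(clusters)]
        # Divergent operation: generate a new idea around a group's best
        # member (or the individual itself), with decaying Gaussian noise.
        step = 0.5 * math.exp(-t / iters)      # hypothetical decay schedule
        new_X = []
        for group in groups:
            for x in group:
                base = group[0] if rng.random() < 0.7 else x
                cand = [v + rng.gauss(0, step) for v in base]
                new_X.append(min([x, cand], key=f))  # keep the better idea
        X = new_X
    return min(X, key=f)
```

Each individual is both a candidate solution and a data sample: the grouping step is exactly the kind of analysis of the population-as-data that the tutorial discusses.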

Intended Audience

This tutorial is primarily intended for researchers, engineers, and graduate students with an interest in brain storm optimization (BSO) algorithms and their applications. It covers various aspects of BSO algorithms and collectively provides broad insights into what these algorithms have to offer, such as the utility and applicability of BSO algorithms in solving optimization problems.

Short Biography

Shi Cheng

Shi Cheng received the Bachelor's degree in Mechanical and Electrical Engineering from Xiamen University, Xiamen, China, in 2005, the Master's degree in Software Engineering from Beihang University (BUAA), Beijing, China, in 2008, and the Ph.D. degree in Electrical Engineering and Electronics from the University of Liverpool, Liverpool, United Kingdom, in 2013. He is currently a Lecturer with the School of Computer Science, Shaanxi Normal University, China. His current research interests include swarm intelligence, multiobjective optimization, and data mining techniques and their applications.

Yuhui Shi

Yuhui Shi received the PhD degree in electronic engineering from Southeast University, Nanjing, China, in 1992. He is a chair professor in the Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China. He is a Fellow of the IEEE. His main research interests include the areas of computational intelligence techniques (including swarm intelligence) and their applications. Dr. Shi is the Editor-in-Chief of the International Journal of Swarm Intelligence Research.

CEC-T02 Evolutionary Algorithms for Smart Cities

Organized by Enrique Alba


The concept of Smart Cities can be understood as a holistic approach to improve the level of development and management of the city in a broad range of services by using information and communication technologies.
It is common to recognize six axes of work in them: i) Smart Economy, ii) Smart People, iii) Smart Governance, iv) Smart Mobility, v) Smart Environment, and vi) Smart Living. In this talk we first focus on a capital issue: smart mobility. European citizens and economic actors need a transport system which provides them with seamless, high-quality door-to-door mobility. At the same time, the adverse effects of transport on the climate, the environment and human health need to be reduced.
We will show many new systems based on the use of bio-inspired techniques, in particular evolutionary algorithms (EAs) and their much-needed extensions such as parallel EAs, multiobjective EAs, and dynamic EAs (at least), to ease road traffic flow in the city, as well as to allow a customized, smooth experience for travelers (private and public transport).
This tutorial will then discuss potential applications of bio-inspired techniques for energy (such as adaptive street lighting), environmental applications (such as mobile sensors for air pollution), smart buildings (intelligent design), and several other applications linked to smart living, tourism, and smart municipal governance.

Intended Audience


Short Biography

Enrique Alba

Enrique Alba received his degree in engineering and his PhD in Computer Science from the University of Málaga (Spain) in 1992 and 1999, respectively. He works as a Full Professor at this university with varied teaching duties: data communications, distributed programming, software quality, and also evolutionary algorithms, foundations of R+D+i, and smart cities, at both graduate and master/doctoral levels. Prof. Alba leads an international team of researchers in the field of complex optimization/learning with applications in smart cities, bioinformatics, software engineering, telecoms, and others. In addition to organizing international events (ACM GECCO, IEEE IPDPS-NIDISC, IEEE MSWiM, IEEE DS-RT, smart-CT, and others), Prof. Alba has offered dozens of postgraduate courses and more than 70 seminars at international institutions, and has directed many research projects (7 with national funds, 5 in Europe, and numerous bilateral actions). He has also directed 12 innovation projects with companies (OPTIMI, Tartessos, ACERINOX, ARELANCE, TUO, INDRA, AOP, VATIA, EMERGIA, SECMOTIC, ArcelorMittal, ACTECO, CETEM, EUROSOTERRADOS) and has worked as an invited professor at INRIA, Luxembourg, and Ostrava, and in Japan, Argentina, Cuba, Uruguay, and Mexico. He is an editor of several international journals and book series of Springer-Verlag and Wiley, and regularly reviews articles for more than 30 impact journals. He is included in the list of the most prolific DBLP authors, and has published 104 articles in ISI-indexed journals, 11 books, and hundreds of papers at scientific conferences. He is included in the top ten most relevant researchers in Informatics in Spain (ISI), and is the most influential researcher of UMA in engineering (webometrics), with 13 awards for his professional activities. Prof. Alba's h-index is 54, with more than 14,500 citations to his work.

CEC-T03 Evolutionary Transfer and Multi-task Optimization

Organized by Abhishek Gupta, Kai Qin, and Liang Feng


Evolutionary algorithms (EAs) typically start the search from scratch, assuming no prior knowledge about the task being solved, and their capabilities usually do not improve with past problem-solving experience. In contrast, humans routinely make use of knowledge learnt and accumulated from the past to facilitate dealing with a new task, which provides an effective way to solve problems in practice, as real-world problems seldom exist in isolation. Similarly, practical artificial systems such as optimizers will often handle a large number of problems in their lifetime, many of which may share certain domain-specific similarities. This motivates the design of advanced optimizers that can leverage what has been solved before to facilitate solving new tasks. In this tutorial, we will present recent advances in the field of evolutionary computation under the theme of evolutionary transfer and multi-task optimization via automatic knowledge transfer. In particular, we will describe a general definition of transfer optimization, encompassing the sequential transfer and multitasking paradigms. We will also introduce recent theoretical developments in transfer optimization and describe corresponding evolutionary methodologies that can be put to use in practice. Some potential applications of evolutionary transfer and multi-task optimization in real-world scenarios will also be discussed.
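As a toy illustration of sequential transfer (our own simplification, not any specific method from the literature), the sketch below warm-starts a simple hill climber on a target task from a solution found on a related source task; the functions, budgets, and parameter values are all hypothetical:

```python
import random

def hill_climb(f, x0, iters, step=0.3, seed=0):
    """(1+1) elitist hill climber: keep a candidate only if it improves f."""
    rng = random.Random(seed)
    x = x0[:]
    for _ in range(iters):
        cand = [v + rng.gauss(0, step) for v in x]
        if f(cand) < f(x):
            x = cand
    return x

# Source task: a sphere centred at the origin; target task: a nearby shifted
# sphere, so knowledge (a good solution) from the source remains relevant.
source = lambda x: sum(v * v for v in x)
target = lambda x: sum((v - 0.2) ** 2 for v in x)

dim = 10
rng = random.Random(1)
start = [rng.uniform(-5, 5) for _ in range(dim)]
transferred = hill_climb(source, start, iters=300)   # solve the source first

cold = hill_climb(target, start, iters=30)           # search from scratch
warm = hill_climb(target, transferred, iters=30)     # reuse source knowledge
```

Under the same small budget on the target task, the warm-started run ends far closer to the optimum than the cold start, which is the basic premise that transfer and multi-task optimization build on.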

Intended Audience

In general, the audience is expected to have some knowledge of EAs, as well as of basic statistical modeling methods such as regression analysis (in surrogate-assisted evolutionary optimization) and density estimation schemes. The intended audience may be those who are interested in designing novel EAs equipped with knowledge transfer and multi-tasking capability, those seeking to reduce optimization time for real-world problems by leveraging data collected from related tasks, or those who are generally interested in simultaneous problem learning and optimization tasks that can arise from evolutionary computation.

Short Biography

Abhishek Gupta

Abhishek Gupta received his PhD in Engineering Science from the University of Auckland, New Zealand, in the year 2014. He graduated with a Bachelor of Technology degree in the year 2010, from the National Institute of Technology Rourkela, India. He currently serves as a Scientist in the Singapore Institute of Manufacturing Technology (SIMTech), Agency of Science, Technology and Research (A*STAR), Singapore. He has diverse research experience in the field of computational science, ranging from numerical methods in engineering physics, to topics in computational intelligence. His recent research interests are in the development of memetic computation as an approach for automated knowledge extraction and transfer across problems in evolutionary design.

Kai (Alex) Qin

Kai (Alex) Qin is an Associate Professor at Swinburne University of Technology (Australia), where he leads the Swinburne Intelligent Data Analytics Lab. He received the BEng degree from Southeast University (China) in 2001 and the PhD degree from Nanyang Technological University (Singapore) in 2007. From 2007 to 2012, he worked first at the University of Waterloo (Canada) and then at the French National Institute for Research in Computer Science and Control (INRIA) (France). From 2013, he was a Lecturer and then Senior Lecturer at RMIT University. In 2017, he joined Swinburne University of Technology as an Associate Professor. His major research interests include evolutionary computation, machine learning, computer vision, GPU computing, services computing, and pervasive computing. He won the 2012 IEEE Transactions on Evolutionary Computation Outstanding Paper Award and the Overall Best Paper Award at the 18th Asia Pacific Symposium on Intelligent and Evolutionary Systems (IES 2014). One of his conference papers was nominated for the best paper award at the 2012 Genetic and Evolutionary Computation Conference (GECCO 2012). An IEEE senior member, he is currently co-chairing the IEEE Emergent Technologies Task Forces on “Collaborative Learning and Optimization” and “Multitask Learning and Multitask Optimization”.

Liang Feng

Liang Feng received the PhD degree from the School of Computer Engineering, Nanyang Technological University, Singapore, in 2014. He was a Postdoctoral Research Fellow at the Computational Intelligence Graduate Lab, Nanyang Technological University, Singapore. He is currently an Assistant Professor at the College of Computer Science, Chongqing University, China. His research interests include computational and artificial intelligence, memetic computing, big data optimization and learning, as well as transfer learning. He serves as the Chair of the IEEE Task Force on “Transfer Learning and Transfer Optimization”, and as a PC member of the IEEE Task Force on “Memetic Computing”. He has co-organized and chaired the Special Session on “Memetic Computing” held at CEC’16, CEC’17, CEC’18, and CEC’19, and the Special Session on “Transfer Learning in Evolutionary Computation” held at CEC’18 and CEC’19.

CEC-T04 Particle Swarm Optimization: A Multi-Purpose Optimization Approach

Organized by Andries Engelbrecht


The main objective of this tutorial will be to show that particle swarm optimization (PSO) has emerged as a multi-purpose optimization approach. In the context of this tutorial, this means that the PSO can be applied to a wide range of optimization problem types as well as search domain types. The tutorial will start with a very compact overview of the original, basic PSO. The remainder and bulk of the tutorial will cover a classification of different optimization problem types, and will show how PSO can be applied to solve problems of these types. The focus will be on PSO adaptations that are simple, both in their formulation and implementation. This part of the tutorial will be organized in the following sections, one for each problem type:

  • Continuous-valued versus discrete-valued domains
  • Unimodal versus multi-modal landscapes
  • Large-scale optimization
  • Multi-solution problems requiring niching capabilities
  • Constrained versus unconstrained problems, also covering boundary constraints
  • Multi-objective optimization
  • Dynamic environments
  • Dynamic Multi-objective optimization
  • Optimization with dynamically changing constraints
For each problem type, it will be shown why the standard PSO cannot solve these types of problems efficiently. Simple adaptations to the PSO that allow it to solve each problem type will then be discussed. The focus will be on PSO adaptations that do not violate the foundational principles of PSO. For each of these problem types, a small subset of the most successful algorithms will be discussed.
Participants will gain important skills in understanding PSO and how PSO can be applied to a wide range of optimization problems.
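For orientation, the original, basic PSO that the tutorial starts from can be written down in a dozen lines. This is a generic sketch with commonly used (but here arbitrarily chosen) parameter values, not the exact formulation the tutorial will present:

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def pso_minimize(f, dim=5, pop=20, iters=300, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Basic global-best PSO with an inertia weight w."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    V = [[0.0] * dim for _ in range(pop)]
    P = [x[:] for x in X]              # personal best positions
    g = min(P, key=f)[:]               # global best position
    for _ in range(iters):
        for i in range(pop):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])   # cognitive pull
                           + c2 * r2 * (g[d] - X[i][d]))     # social pull
                X[i][d] += V[i][d]
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g
```

Every adaptation covered in the tutorial (binary domains, niching, constraints, dynamic environments, and so on) modifies some part of this loop while preserving the velocity-and-position core.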

Intended Audience

PSO researchers and practitioners. The tutorial is introductory and will have a great appeal to postgraduate students and researchers that are exploring the value of PSO.

Short Biography

Andries Engelbrecht

Andries Engelbrecht received the Master's and PhD degrees in Computer Science from the University of Stellenbosch, South Africa, in 1994 and 1999, respectively. He is a Professor of Computer Science at the University of Pretoria, and is currently appointed as Director of the Institute for Big Data and Data Science. He holds the South African Research Chair in Artificial Intelligence and leads the Computational Intelligence Research Group. His research interests include swarm intelligence, evolutionary computation, neural networks, artificial immune systems, and the application of these paradigms to data mining, games, bioinformatics, finance, and difficult optimization problems. He has published over 330 papers in these fields and is the author of two books, Computational Intelligence: An Introduction and Fundamentals of Computational Swarm Intelligence.
Prof Engelbrecht is very active in the international community, annually serving as reviewer for over 20 journals and 10 conferences. He is an Associate Editor of the IEEE Transactions on Evolutionary Computation, IEEE Transactions on Neural Networks and Learning Systems, the Swarm Intelligence Journal, and the journal of Engineering Applications of Artificial Intelligence. He served on the international program committee and organizing committee of a number of conferences, organized special sessions, presented tutorials, and took part in panel discussions. He was the founding chair of the South African chapter of the IEEE Computational Intelligence Society. He is a member of the Evolutionary Computation Technical Committee and the Neural Networks Technical Committee, and serves as member of a number of task forces.

CEC-T05 Evolutionary Many-Objective Optimization (Part I & Part II)

Organized by Hisao Ishibuchi and Hiroyuki Sato


The goal of this tutorial is to clearly explain the difficulties of evolutionary many-objective optimization, approaches to handling those difficulties, and promising future research directions. Evolutionary multi-objective optimization (EMO) has been a very active research area in the field of evolutionary computation over the last two decades. In the EMO area, the hottest research topic is evolutionary many-objective optimization. The difference between multi-objective and many-objective optimization is simply the number of objectives: multi-objective problems with four or more objectives are usually referred to as many-objective problems. It may sound as if there were no significant difference between three-objective and four-objective problems. However, the increase in the number of objectives makes multi-objective problems significantly more difficult. In the first part (Part I: Difficulties), we clearly explain not only frequently discussed, well-known difficulties, such as the weakening selection pressure towards the Pareto front and the exponential increase in the number of solutions needed to approximate the entire Pareto front, but also other hidden difficulties, such as the deterioration of the usefulness of crossover and the difficulty of evaluating the performance of solution sets. Attendees will learn why many-objective optimization is difficult for EMO algorithms. After these explanations of the difficulties of many-objective optimization, we explain in the second part (Part II: Approaches and Future Directions) how to handle each difficulty. For example, we explain how to prevent the Pareto dominance relation from losing its selection pressure and how to prevent a binary crossover operator from losing its search efficiency. We also explain some state-of-the-art many-objective algorithms.
The attendees of the tutorial will learn some representative approaches to many-objective optimization and state-of-the-art many-objective algorithms. At the same time, the attendees will also learn that there still exist a large number of promising, interesting and important research directions in evolutionary many-objective optimization. Some promising research directions are explained in detail in the tutorial.
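The weakening of dominance-based selection pressure is easy to demonstrate numerically. The sketch below (our own illustration, not taken from the tutorial material) measures the fraction of mutually non-dominated points among random objective vectors as the number of objectives grows:

```python
import random

def dominates(a, b):
    """Pareto dominance for minimization: a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fraction(n_obj, n_points=200, seed=0):
    """Fraction of a random population that no other member dominates."""
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(n_obj)] for _ in range(n_points)]
    nd = sum(1 for p in pts if not any(dominates(q, p) for q in pts))
    return nd / n_points
```

With two objectives only a handful of points are non-dominated, while with ten objectives most of a random population is; dominance alone can then no longer rank solutions, which is exactly the weakening selection pressure discussed in Part I.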

Intended Audience

Part I: An introductory tutorial for all students and researchers who are interested in evolutionary multi-objective and many-objective optimization.
Part II: An advanced tutorial for all students and researchers who are doing, or want to start, research related to evolutionary many-objective optimization.

Short Biography

Hisao Ishibuchi

Dr. Ishibuchi received the BS and MS degrees from Kyoto University in 1985 and 1987, respectively. In 1992, he received the Ph.D. degree from Osaka Prefecture University, where he had been a professor since 1999. Since April 2017, he has been with the Department of Computer Science and Engineering, Southern University of Science and Technology (SUSTech), Shenzhen, China, as a Chair Professor. He received Best Paper Awards from GECCO 2004, HIS-NCEI 2006, FUZZ-IEEE 2009, WAC 2010, SCIS & ISIS 2010, FUZZ-IEEE 2011, ACIIDS 2015, and GECCO 2017 and 2018. He also received the 2007 JSPS (Japan Society for the Promotion of Science) Prize and the 2019 IEEE CIS Fuzzy Systems Pioneer Award. He was the IEEE CIS Vice-President for Technical Activities (2010-2013), an IEEE CIS Distinguished Lecturer (2015-2017), and the President of the Japan EC Society (2016-2018). Currently, he is the Editor-in-Chief of IEEE Computational Intelligence Magazine (2014-2019) and an IEEE CIS AdCom member (2014-2019). He is also an Associate Editor of IEEE TEVC, IEEE Access, and IEEE T-Cyb. He is an IEEE Fellow. In 2018, he was selected for the “Recruitment Program of Global Experts for Foreign Experts”, known as the “Thousand Talents Program”, in China.

Hiroyuki Sato

Dr. Sato received the B.E. and M.E. degrees from Shinshu University, Japan, in 2003 and 2005, respectively, and the Ph.D. degree from Shinshu University in 2009. He has worked at the University of Electro-Communications since 2009, where he is currently an associate professor in the Graduate School of Informatics and Engineering and a member of the Artificial Intelligence eXploration research center (AIX). He received best paper awards on the EMO track at GECCO 2011 and 2014, and from the Transactions of the Japanese Society for Evolutionary Computation in 2012 and 2015. His research interests include evolutionary multi- and many-objective optimization and its applications. He is an Associate Editor of Elsevier Swarm and Evolutionary Computation, and a member of IEEE and ACM/SIGEVO.

CEC-T06 Pareto Optimization for Subset Selection: Theories and Practical Algorithms

Organized by Chao Qian and Yang Yu


Pareto optimization is a general framework for solving single-objective optimization problems by means of multi-objective evolutionary optimization. The main idea is to transform a single-objective optimization problem into a bi-objective one, then employ a multi-objective evolutionary algorithm to solve it, and finally return, from the produced non-dominated solution set, the best feasible solution w.r.t. the original single-objective problem. Pareto optimization has been shown to be a promising method for the subset selection problem, which has applications in diverse areas, including machine learning, data mining, natural language processing, computer vision, information retrieval, etc. The theoretical understanding of Pareto optimization has recently developed significantly, showing its irreplaceability for subset selection. This tutorial will introduce Pareto optimization from scratch. We will show that it achieves the best-so-far theoretical and practical performance in several applications of subset selection. We will also introduce advanced variants of Pareto optimization for large-scale, noisy, and dynamic subset selection. We assume that the audience has basic knowledge of probability theory.
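The transform-then-evolve loop described above can be sketched as follows. This is a simplified illustration in the spirit of the POSS algorithm, applied to a toy maximum-coverage objective; the helper names and parameter values are ours, not the tutorial's:

```python
import random

def poss(objective, n, k, iters=2000, seed=0):
    """Simplified sketch of Pareto optimization for subset selection.

    The single objective (maximize objective(s) subject to |s| <= k) is
    recast as a bi-objective problem: maximize objective(s) AND minimize
    |s|.  A non-dominated archive is evolved by bit-wise mutation, and
    the best feasible solution is returned at the end.
    """
    rng = random.Random(seed)

    def dominates(a, b):
        # a, b are (value, size) pairs; larger value and smaller size win.
        return a[0] >= b[0] and a[1] <= b[1] and a != b

    archive = {frozenset(): (objective(frozenset()), 0)}
    for _ in range(iters):
        s = rng.choice(list(archive))
        # flip each element in or out of the subset with probability 1/n
        t = frozenset(i for i in range(n)
                      if (i in s) != (rng.random() < 1.0 / n))
        if len(t) >= 2 * k:                    # usual POSS infeasibility cap
            continue
        obj = (objective(t), len(t))
        if any(dominates(v, obj) for v in archive.values()):
            continue                           # t is dominated: discard it
        archive = {u: v for u, v in archive.items() if not dominates(obj, v)}
        archive[t] = obj
    feasible = [(v[0], u) for u, v in archive.items() if v[1] <= k]
    return max(feasible, key=lambda pair: pair[0])[1]
```

Keeping the whole non-dominated front, rather than a single best subset, is what lets small subsets serve as stepping stones toward good larger ones.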

Intended Audience

Potential audiences include those who are curious about theoretically grounded evolutionary algorithms, and those who are interested in applying evolutionary algorithms to achieve state-of-the-art performance in machine learning, data mining, natural language processing, etc.

Short Biography

Chao Qian

Chao Qian is an associate researcher in the School of Computer Science and Technology, University of Science and Technology of China. He received the BSc and PhD degrees in computer science from Nanjing University, China, in 2009 and 2015, respectively. His research interests are mainly the theoretical foundations of evolutionary algorithms and their application in machine learning. He has published more than 30 papers in top-tier journals (e.g., AIJ, TEvC, ECJ, Algorithmica) and conferences (e.g., NIPS, IJCAI, AAAI). He won the ACM GECCO 2011 Best Theory Paper Award and the IDEAL 2016 Best Paper Award, and was on the team that won the PAKDD 2012 Data Mining Competition Grand Prize. He is the chair of the IEEE Computational Intelligence Society (CIS) Task Force on Theoretical Foundations of Bio-inspired Computation.

Yang Yu

Yang Yu is an associate professor of computer science at Nanjing University, China. He joined the LAMDA Group as a faculty member after receiving his Ph.D. degree in 2011. His research areas are machine learning and reinforcement learning. He was named one of AI’s 10 to Watch by IEEE Intelligent Systems in 2018, was invited to give an Early Career Spotlight talk on reinforcement learning at IJCAI’18, and received the PAKDD Early Career Award in 2018.

CEC-T07 Representation in Evolutionary Computation

Organized by Daniel Ashlock


This tutorial has the goal of laying out the principles of representation for evolutionary computation and providing illustrative examples. The tutorial will consist of two parts. The first part will establish the importance of representation in evolutionary computation, something well known to senior researchers but potentially valuable to junior researchers and students. This portion of the tutorial will give concrete examples of how the design of a representation used in evolutionary computation influences the character of the fitness landscape. Impact on time to solution, the character of solutions located, and design principles for representations will all be covered. This portion of the tutorial has been offered several times before, has been well received, and will be substantially updated in this offering.
Research in representation is quite an active area and so a number of beautiful new examples of representations will be presented in the second half of the tutorial. These will be chosen to emphasize the design principles expounded in the first half of the presentation. These novel representations will include applications in games, bioinformatics, optimization, and evolved art. New evolutionary algorithms enabled by novel representations will be highlighted. The fitness landscape abstraction will serve as a unifying theme throughout the presentation. New topics will include self-adaptive representations, parameterized manifolds of representations, state-conditioned representations, and representations that arise from the formalisms of abstract algebra. No prior knowledge of abstract algebra will be assumed.
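The influence of representation on the fitness landscape can be made concrete with a classic small experiment (our own illustration, not taken from the tutorial material): decoding the same bit strings as standard binary versus reflected Gray code changes how many local optima a simple unimodal integer function has under one-bit-flip moves.

```python
def binary_decode(bits):
    """Interpret bits (MSB first) as a standard binary integer."""
    v = 0
    for b in bits:
        v = (v << 1) | b
    return v

def gray_decode(bits):
    """Interpret bits (MSB first) as a reflected Gray code integer."""
    v, acc = 0, 0
    for b in bits:
        acc ^= b
        v = (v << 1) | acc
    return v

def local_optima(decode, n_bits, f):
    """Count bit strings none of whose one-bit-flip neighbours is strictly
    better under f (minimization) -- a crude proxy for ruggedness."""
    count = 0
    for s in range(2 ** n_bits):
        bits = [(s >> i) & 1 for i in reversed(range(n_bits))]
        fx = f(decode(bits))
        best_nb = min(f(decode(bits[:i] + [1 - bits[i]] + bits[i + 1:]))
                      for i in range(n_bits))
        if best_nb >= fx:
            count += 1
    return count
```

Under Gray decoding the unimodal function |v - 21| keeps a single optimum, because consecutive integers are always one bit flip apart; standard binary decoding introduces extra local optima at Hamming cliffs. Same operator, same problem, different landscape.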

Intended Audience

This tutorial is potentially valuable to anyone who needs evolutionary computation beyond simple off-the-shelf technology, as well as to anyone doing research in evolutionary computation. Audience members unfamiliar with the issue of representation will find the tutorial helpful in improving their research skills.

Short Biography

Daniel Ashlock

Daniel Ashlock has 280 peer-reviewed scientific publications, many of which deal with issues of representation in evolutionary computation. He has presented tutorials and invited plenary lectures on representation at IEEE’s WCCI, CEC, CIG, and CIBCB conferences as well as at numerous universities. Dr. Ashlock serves as an associate editor for the IEEE Transactions on Evolutionary Computation, the IEEE Transactions on Games, the IEEE/ACM Transactions on Computational Biology and Bioinformatics, and Biosystems. He is a member of the IEEE CIS Technical Committee on Bioinformatics and Biomedical Engineering and is currently chair of the IEEE Technical Committee on Games. Dr. Ashlock has written books on evolutionary computation, on the representation of game-playing agents, and on representation for automatic content generation. He has served as the general chair of three IEEE conferences and on the program committees of more than twenty conferences.

CEC-T08 Introduction to Grammar-Based Evolutionary Computation

Organized by Grant Dick and Peter Whigham


The tutorial is intended as a solid introduction to the use of grammars in evolutionary search. A brief introduction to grammars will be provided to ensure that all audience members have an appropriate understanding of the basic concepts (e.g., context-free grammars and derivation trees). Following that, the tutorial will identify the problems that the introduction of grammars into EC helps to eliminate (e.g., closure requirements and/or search over arbitrary structures). The tutorial will then demonstrate these ideas in practice.
By the end of the tutorial, audience members will:

  • Understand how derivation trees and context-free grammars can be used as the basis of an evolutionary search mechanism.
  • Observe how context-free grammar GP (CFG-GP) can be used to search for solutions to arbitrary problems that can be defined through a context-free grammar.
  • Be introduced to relevant software for easily applying grammar-based methods (especially CFG-GP) to their own problems.
  • Have the basic knowledge to implement their own CFG-GP systems.
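As a concrete picture of the first point, the sketch below (a generic illustration with a made-up toy grammar, not the tutorial's software) derives a random sentence from a context-free grammar by repeatedly expanding non-terminals, recording the derivation tree as a nested list:

```python
import random

# A toy context-free grammar for arithmetic expressions: each non-terminal
# maps to a list of productions (each production is a list of symbols).
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["<var>"]],
    "<op>":   [["+"], ["-"], ["*"]],
    "<var>":  [["x"], ["y"]],
}

def derive(symbol, rng, depth=0, max_depth=5):
    """Expand a symbol into a derivation tree (nested lists with the
    non-terminal at the head).  Past max_depth, only the shortest
    production is chosen so that every derivation terminates."""
    if symbol not in GRAMMAR:
        return symbol                          # terminal: a leaf
    productions = GRAMMAR[symbol]
    if depth >= max_depth:
        productions = [min(productions, key=len)]
    chosen = rng.choice(productions)
    return [symbol] + [derive(s, rng, depth + 1, max_depth) for s in chosen]

def sentence(tree):
    """Read the terminal leaves off a derivation tree, left to right."""
    if isinstance(tree, str):
        return tree
    return "".join(sentence(child) for child in tree[1:])
```

CFG-GP individuals are exactly such derivation trees: crossover swaps subtrees rooted at the same non-terminal, so every offspring is still a valid sentence of the grammar, which is how closure is guaranteed.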

Intended Audience

The proposed tutorial is primarily aimed at researchers that are familiar with general evolutionary computation concepts, but with little experience with using grammar-based techniques in EC. A second stream of interest will be from those researchers wishing to better understand how they might apply grammar-based methods to their own problems. Students wishing to embark on new research into the theoretical properties of grammar-based methods will also find the content very useful.

Short Biography

Grant Dick

Grant Dick is a member of the 100-level teaching group and has a background in Information Systems development. Outside of teaching, his research interests include computational intelligence methods, in particular evolutionary computation; adaptive business intelligence; multimodal and multi-objective problem solving; theoretical population genetics; and evolving systems, particularly the role of population structure in speciation.

Peter Whigham

Peter Whigham has taught and conducted research in the areas of spatial modelling, computational models and evolutionary computation for over 25 years. He has been at the University of Otago since 1999, and previously worked at the Commonwealth Scientific & Industrial Research Organisation (CSIRO) in Australia from 1987. He has an extensive background in building computational models for ecological systems and has been involved in evolutionary computation (EC) and grammar-based Genetic Programming since 1994. Dr Whigham has published in areas such as ecology, finance, public health, population genetics, limnology and decision support. His main research interest in EC is in the development of robust methods for incorporating domain knowledge into stochastic search.

CEC-T09 Evolutionary Large-Scale Global Optimization

Organized by Mohammad Nabi Omidvar, Xiaodong Li, Daniel Molina, and Antonio LaTorre


Many real-world optimization problems involve a large number of decision variables. The trend in engineering optimization shows that the number of decision variables involved in a typical optimization problem has grown exponentially over the last 50 years [10], and this trend continues at an ever-increasing rate. The proliferation of big-data analytic applications has also resulted in the emergence of large-scale optimization problems at the heart of many machine learning problems [1, 11]. Recent advances in machine learning have likewise produced very large-scale optimization problems in the training of deep neural network architectures (so-called deep learning), some of which have over a billion decision variables [3, 7]. It is this “curse of dimensionality” that has made large-scale optimization an exceedingly difficult task, and current optimization methods are often ill-equipped to deal with such problems. It is this research gap in both theory and practice that has attracted much research interest, making large-scale optimization an active field in recent years. We are currently witnessing a wide range of mathematical and metaheuristic optimization algorithms being developed to overcome this scalability issue. Among these, metaheuristics have gained popularity due to their ability to deal with black-box optimization problems. Currently, there are two different approaches to tackling this complex search. The first is to apply decomposition methods, which divide the set of variables into groups and allow researchers to optimize each group separately, reducing the curse of dimensionality; their main drawback is that choosing a proper decomposition can be very difficult and computationally expensive. The other approach is to design algorithms specifically for large-scale global optimization, with features well suited to that type of search.
The tutorial is divided into two parts, each dedicated to exploring the advances in one of the approaches stated above, presented by experts in the respective field.
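The decomposition approach can be sketched as cooperative coevolution with random grouping. This toy version is our own simplification with arbitrary parameter choices: each randomly chosen group of variables is optimized in turn by a (1+1) hill climber while the rest of the vector (the context) is held fixed.

```python
import random

def cc_minimize(f, dim=20, group_size=5, cycles=30, seed=0):
    """Cooperative-coevolution sketch: random variable grouping, then a
    (1+1) hill climb on each group while the other variables stay fixed."""
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    step = 1.0
    for _ in range(cycles):
        order = list(range(dim))
        rng.shuffle(order)                     # fresh random grouping
        groups = [order[i:i + group_size]
                  for i in range(0, dim, group_size)]
        for group in groups:
            for _ in range(20):                # optimize this group only
                cand = x[:]
                for d in group:
                    cand[d] += rng.gauss(0, step)
                if f(cand) < f(x):
                    x = cand
        step *= 0.9                            # shrink the mutation scale
    return x
```

Random re-grouping each cycle is one simple answer to the grouping difficulty noted above; the more sophisticated decomposition methods covered in the first part of the tutorial instead try to detect variable interactions before grouping.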

Intended Audience

This tutorial is suitable for anyone with an interest in evolutionary computation who wishes to learn more about the state of the art in large-scale global optimization. The tutorial is specifically targeted at Ph.D. students and early-career researchers who want to gain an overview of the field and wish to identify the most important open questions and challenges in the field to bootstrap their research in large-scale optimization. The tutorial can also be of interest to more experienced researchers as well as practitioners who wish to get a glimpse of the latest developments in the field. In addition to our prime goal, which is to inform and educate, we also wish to use this tutorial as a forum for exchanging ideas between researchers. Overall, this tutorial provides a unique opportunity to showcase the latest developments on this hot research topic to the EC research community. The expected duration of each part is approximately 110 minutes.

Short Biography

Mohammad Nabi Omidvar

Mohammad Nabi Omidvar is a research fellow in evolutionary computation and a member of the Centre of Excellence for Research in Computational Intelligence and Applications (CERCIA) at the School of Computer Science, University of Birmingham. Prior to joining the University of Birmingham, Dr. Omidvar completed his Ph.D. in computer science with the Evolutionary Computing and Machine Learning (ECML) group at RMIT University in Melbourne, Australia. He holds a bachelor's degree in applied mathematics and a bachelor's degree in computer science with first class honors from RMIT University. Dr. Omidvar won the IEEE Transactions on Evolutionary Computation Outstanding Paper Award for his work on large-scale global optimization. He also received an Australian Postgraduate Award in 2010 and the best Computer Science Honours Thesis award from the School of Computer Science and IT, RMIT University. Dr. Omidvar has been a member of the IEEE Computational Intelligence Society since 2009 and is a member of the IEEE Taskforce on Large-Scale Global Optimization. His current research interests are large-scale global optimization, decomposition methods for optimization, and multi-objective optimization.

Xiaodong Li

Xiaodong Li received his B.Sc. degree from Xidian University, Xi’an, China, and his Ph.D. degree in information science from the University of Otago, Dunedin, New Zealand. He is currently an Associate Professor at the School of Computer Science and Information Technology, RMIT University, Melbourne, Australia. His research interests include evolutionary computation, neural networks, complex systems, multiobjective optimization, and swarm intelligence. He serves as an Associate Editor of IEEE Transactions on Evolutionary Computation, Swarm Intelligence (Springer), and the International Journal of Swarm Intelligence Research. He is a founding member of the IEEE CIS Task Force on Swarm Intelligence, a Vice-Chair of the IEEE CIS Task Force on Multi-Modal Optimization, and a former Chair of the IEEE CIS Task Force on Large Scale Global Optimization. He was the General Chair of SEAL’08, a Program Co-Chair of AI’09, and a Program Co-Chair of IEEE CEC’2012. He is the recipient of the 2013 ACM SIGEVO Impact Award.

Daniel Molina Cabrera

Daniel Molina Cabrera received his B.Sc. and Ph.D. degrees in Computer Science from the University of Granada, Spain. He was an Associate Professor at the School of Engineering of the University of Cadiz for several years, and he is currently an Associate Professor at the School of Computer and Telecommunications Engineering of the University of Granada. His current research interests are large-scale global optimization and other continuous optimization problems tackled with evolutionary algorithms. He won the competition on Large-Scale Global Optimization twice, in 2010 and 2018. Since 2015 he has been the chair of the IEEE CIS Task Force on Large Scale Global Optimization. Dr. Molina has organized a special issue on large-scale global optimization in Soft Computing as well as several special sessions and competitions at the IEEE Congress on Evolutionary Computation.

Antonio LaTorre

Antonio LaTorre obtained an M.S. in Computer Science from the Universidad Politécnica de Madrid (UPM) and an M.S. in Distributed Systems from the École Supérieure des Télécommunications de Bretagne (ENST-B), both in 2004, and a Ph.D. in Computer Science from UPM in 2009. He has developed his career in the fields of heuristic optimization and high-performance data analysis and modeling and, in recent years, has actively researched applied problems in the domains of logistics, neuroscience, and health. He has more than 14 years of research experience backed by participation in 14 national and international projects, with both public and private funding, leading 3 of them. He has published more than 40 peer-reviewed contributions in international journals and conferences and serves as an associate editor of 3 international journals. He is currently serving as Vice-Chair of the IEEE CIS Task Force on Large Scale Global Optimization.

CEC-T10 Evolutionary Bilevel Optimization

Organized by Ankur Sinha, Kalyanmoy Deb


Many practical optimization problems are better posed as bilevel optimization problems, in which there are two levels of optimization tasks. A solution at the upper level is feasible only if the corresponding lower level variable vector is optimal for the lower level optimization problem. Consider, for example, an inverted pendulum problem in which the motion of the platform relates to the upper level optimization task of performing the balancing task in a time-optimal manner. For a given motion of the platform, whether the pendulum can be balanced at all becomes a lower level optimization problem of maximizing the stability margin. Such nested optimization problems are commonly found in transportation, engineering design, game playing, and business models. They are also known as Stackelberg games in the operations research community. These problems are too complex to be solved using classical optimization methods, simply due to the "nestedness" of one optimization task within another.
Evolutionary algorithms (EAs) provide some amenable ways to solve such problems due to their flexibility and their ability to handle constrained search spaces efficiently. Clearly, EAs have an edge in solving such difficult yet practically important problems. In the recent past, there has been a surge in research activity towards solving bilevel optimization problems. In this tutorial, we will introduce the principles of bilevel optimization for single and multiple objectives, and discuss the difficulties in solving such problems in general. With a brief survey of the existing literature, we will present a few viable evolutionary algorithms for both single- and multi-objective bilevel optimization. Our recent studies on bilevel test problems and some application studies will be discussed. Finally, a number of immediate and future research ideas on bilevel optimization will also be highlighted.
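The "nestedness" that makes these problems hard can be shown with a minimal, hypothetical brute-force sketch (illustrative only, not one of the evolutionary algorithms the tutorial presents): every upper-level candidate x is evaluated at the lower level's optimal response y*(x), so each upper-level evaluation requires solving a full lower-level optimization problem.

```python
def lower_level_optimum(x, f_lower, y_grid):
    """Solve the lower-level problem for a fixed upper-level decision x."""
    return min(y_grid, key=lambda y: f_lower(x, y))

def bilevel_grid_search(F_upper, f_lower, x_grid, y_grid):
    """Nested brute-force bilevel solve: the upper level only sees candidates
    paired with the lower level's optimal response y*(x)."""
    best_x = min(x_grid,
                 key=lambda x: F_upper(x, lower_level_optimum(x, f_lower, y_grid)))
    return best_x, lower_level_optimum(best_x, f_lower, y_grid)

# Toy instance: the lower level tracks x (so y*(x) = x), and the upper level
# then effectively minimizes x**2 + (x - 1)**2, whose optimum is x = 0.5.
F_upper = lambda x, y: x**2 + (y - 1)**2
f_lower = lambda x, y: (y - x)**2
grid = [i / 100 for i in range(-200, 201)]
x_star, y_star = bilevel_grid_search(F_upper, f_lower, grid, grid)
print(x_star, y_star)  # both 0.5
```

Even on this toy grid, the nested solve costs |x_grid| x |y_grid| evaluations, which is exactly the cost explosion that motivates dedicated evolutionary approaches.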

Intended Audience

Bilevel optimization belongs to a difficult class of optimization problems. Most classical optimization methods are unable to solve even simple instances of bilevel problems. This offers a niche for researchers in the field of evolutionary computation to work on the development of efficient bilevel procedures. However, many researchers working in the area of evolutionary computation are not familiar with this important class of optimization problems. Bilevel optimization has immense practical applications and certainly requires the attention of researchers working on evolutionary computation. The target audience for this tutorial is researchers and students looking to work on bilevel optimization. The tutorial will make the basic concepts of bilevel optimization and the recent results easily accessible to the audience.

Short Biography

Ankur Sinha

Ankur Sinha is an Associate Professor at the Indian Institute of Management, Ahmedabad, India. He completed his Ph.D. at the Helsinki School of Economics (now the Aalto University School of Business), where his Ph.D. thesis was adjudged the best dissertation of the year 2011. He holds a Bachelor's degree in Mechanical Engineering from the Indian Institute of Technology (IIT) Kanpur. After completing his Ph.D., he has held visiting positions at Michigan State University and Aalto University. His research interests include bilevel optimization, multi-criteria decision making, and evolutionary algorithms. He has offered tutorials on evolutionary bilevel optimization at GECCO 2013, PPSN 2014, and CEC 2015, 2017, and 2018. His research has been published in some of the leading computer science, business, and statistics journals. He regularly chairs sessions at evolutionary computation conferences. For detailed information about his research and teaching, please refer to his personal page:

Kalyanmoy Deb

Kalyanmoy Deb is the Koenig Endowed Chair Professor at Michigan State University, Michigan, USA. He is the recipient of the prestigious TWAS Prize in Engineering Science, the Infosys Prize in Engineering and Computer Science, and the Shanti Swarup Bhatnagar Prize in Engineering Sciences for the year 2005. He has also received the ‘Thomson Citation Laureate Award’ from Thomson Scientific for having the highest number of citations in computer science in India during the preceding ten years. He is a Fellow of IEEE, the Indian National Academy of Engineering (INAE), the Indian National Academy of Sciences, and the International Society for Genetic and Evolutionary Computation (ISGEC). He received the Friedrich Wilhelm Bessel Research Award from the Alexander von Humboldt Foundation in 2003. His main research interests are in the areas of computational optimization, modeling and design, and evolutionary algorithms. He has written two textbooks on optimization and more than 500 international journal and conference research papers. He has pioneered and is a leader in the field of evolutionary multi-objective optimization. He is an associate editor or editorial board member of a number of major international journals. More information about his research can be found at

CEC-T11 Benchmarking Iterative Optimization Heuristics with IOHprofiler

Organized by Carola Doerr, Hao Wang, Ofer M. Shir, and Thomas Bäck


Benchmarking optimization solvers aims at supporting practitioners in choosing the best algorithmic technique, and its optimal configuration, for the problem at hand through systematic empirical investigation and comparison of competing techniques. For theoreticians, benchmarking can be an essential tool for developing mathematically derived ideas into techniques that are broadly applicable in practical optimization. In addition, empirical performance comparisons constitute an important source for formulating new research questions. Sound benchmarking environments therefore make an essential contribution to our understanding of optimization algorithms.

In the context of evolutionary computation for discrete optimization problems, no commonly agreed-upon benchmarking environment exists. We therefore recently announced IOHprofiler, a new tool for analyzing and comparing iterative optimization heuristics such as EAs and local search variants. Given as input algorithms and problems written in C or Python, it provides as output a statistical evaluation of the algorithms' anytime performance by means of detailed statistics for fixed-target and fixed-budget performance. In addition, IOHprofiler also allows users to track the evolution of algorithm parameters, making the tool particularly useful for the analysis, comparison, and design of (self-)adaptive algorithms.

IOHprofiler is ready-to-use software. It consists of two parts: an experimental part, which generates the running time data, and a post-processing part, which produces the summarizing comparisons and statistical evaluations. The experimental part is built on the COCO software, which has been adjusted to cope with discrete optimization problems. The post-processing part is our own work. It can be used as a stand-alone tool for the evaluation of running time data of arbitrary benchmark problems. It accepts as input not only the output files of IOHprofiler, but also original COCO data files. The post-processing tool is designed for interactive evaluation, allowing users to choose the ranges and the precision of the displayed data according to their needs.

IOHprofiler is available on GitHub at where both the experimental part and the post-processing tool as well as a detailed documentation can be downloaded. An online version of the evaluation part can also be found online at

In this tutorial the participants will learn to obtain detailed anytime performance analyses for their algorithms, through a hands-on example demonstrating the use and functionalities of IOHprofiler.

Intended Audience

This tutorial addresses all CEC participants interested in learning how to benefit from the automated performance analyses that IOHprofiler offers. We also welcome researchers interested in discussing various performance measures, ranging from fixed-budget and fixed-target results, through ECDF curves and probabilities of success, to multi-criteria performance statistics and beyond.
No specific background is required to attend this tutorial. Attendees bringing an implementation of their own favorite algorithms and problems will be able to test the advantages of IOHprofiler with their own data.
The slides of the tutorial will be made available online at, but note that the tutorial also covers a live-demo and experiments that will not be part of the slide deck.

Short Biography

Carola Doerr

Carola Doerr, formerly Winzen, is a permanent CNRS researcher at Sorbonne University in Paris, France. She studied mathematics at Kiel University (Germany, 2003-2007, Diplom) and computer science at the Max Planck Institute for Informatics and Saarland University (Germany, 2010-2011, PhD). Before joining the CNRS she was a postdoc at Paris Diderot University (Paris 7) and the Max Planck Institute for Informatics. From 2007 to 2009, she worked as a business consultant for McKinsey & Company, where her interest in evolutionary algorithms originates.
Carola Doerr's main research activities are in the mathematical analysis of randomized algorithms, with a strong focus on evolutionary algorithms and other black-box optimizers. She has been very active in the design and analysis of black-box complexity models, a theory-guided approach to exploring the limitations of heuristic search algorithms. Most recently, she has used knowledge from these studies to prove the superiority of dynamic parameter choices in evolutionary computation, a topic that she believes carries huge unexplored potential for the community.
Carola Doerr has received several awards for her work on evolutionary computation, among them the Otto Hahn Medal of the Max Planck Society and four best paper awards at GECCO. She is chairing the program committee of FOGA 2019 and previously chaired the theory tracks of GECCO 2015 and 2017. Carola is an editor of two special issues in Algorithmica. She is also vice chair of the EU-funded COST action 15140 on "Improving Applicability of Nature-Inspired Optimisation by Joining Theory and Practice (ImAppNIO)".

Hao Wang

Hao Wang is a postdoctoral researcher at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands, where he is a member of the Natural Computing group. He received his master's degree in Computer Science from Leiden University in 2013 and obtained his PhD (cum laude; promotor: Prof. Thomas Bäck) in Computer Science from the same university in 2018. His research interests are proposing, improving, and analyzing stochastic optimization algorithms, especially evolution strategies and Bayesian optimization. In addition, he works on developing statistical machine learning algorithms for big and complex industrial data. He also aims at combining state-of-the-art optimization algorithms with data mining and machine learning techniques to make real-world optimization tasks more efficient and robust.

Ofer M. Shir

Ofer Shir is a Senior Lecturer (Assistant Professor) at the Computer Science Department in Tel-Hai College, and a Principal Investigator at Migal-Galilee Research Institute, where he heads the Scientific Informatics and Experimental Optimization group – both located in the Upper Galilee, Israel.
Ofer Shir holds a BSc in Physics and Computer Science from the Hebrew University of Jerusalem, Israel (conferred 2003), and both an MSc and a PhD in Computer Science from Leiden University, The Netherlands (conferred 2004 and 2008; PhD advisers: Thomas Bäck and Marc Vrakking). Upon his graduation, he completed a two-year term as a Postdoctoral Research Associate at Princeton University, USA (2008-2010), hosted by Prof. Herschel Rabitz in the Department of Chemistry, where he specialized in computational aspects of experimental quantum systems. He then joined IBM Research as a Research Staff Member (2010-2013), which constituted his second postdoctoral term, and where he gained real-world experience in convex and combinatorial optimization as well as in decision analytics.
His current topics of interest include Statistical Learning in Theory and in Practice, Experimental Optimization, Theory of Randomized Search Heuristics, Scientific Informatics, Natural Computing, Computational Intelligence in Physical Sciences, Quantum Control and Quantum Machine Learning.

Thomas Bäck

Thomas Bäck is Full Professor of Computer Science at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands, where he has headed the Natural Computing group since 2002. He received his PhD (adviser: Hans-Paul Schwefel) in computer science from Dortmund University, Germany, in 1994, and then worked for the Informatik Centrum Dortmund (ICD) as department leader of the Center for Applied Systems Analysis. From 2000 to 2009, Thomas was Managing Director of NuTech Solutions GmbH and CTO of NuTech Solutions, Inc. He gained ample experience in solving real-life problems in optimization and data mining through working with global enterprises such as BMW, Beiersdorf, Daimler, Ford, Honda, and many others. Thomas has more than 300 publications on natural computing, as well as two books on evolutionary algorithms: Evolutionary Algorithms in Theory and Practice (1996) and Contemporary Evolution Strategies (2013). He is co-editor of the Handbook of Evolutionary Computation and the Handbook of Natural Computing, and co-editor-in-chief of Springer's Natural Computing book series. He is also an editorial board member and associate editor of a number of journals on evolutionary and natural computing. Thomas received the best dissertation award from the German Society for Computer Science (Gesellschaft für Informatik, GI) in 1995 and the IEEE Computational Intelligence Society Evolutionary Computation Pioneer Award in 2015.

CEC-T12 A Gentle Introduction to the Time Complexity Analysis of Evolutionary Algorithms

Organized by Pietro S. Oliveto


Great advances have been made in recent years towards the runtime complexity analysis of evolutionary algorithms for combinatorial optimisation problems. Much of this progress has been due to the application of techniques from the study of randomised algorithms. The first pieces of work, started in the 1990s, were directed towards analysing simple toy problems with significant structure. This work had two main goals:

  • to understand on which kind of landscapes EAs are efficient, and when they are not
  • to develop the first basis of general mathematical techniques needed to perform the analysis.

Thanks to this preliminary work, nowadays, it is possible to analyse the runtime of evolutionary algorithms on different combinatorial optimisation problems. In this beginners’ tutorial, we give a basic introduction to the most commonly used techniques, assuming no prior knowledge about time complexity analysis.
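As a flavour of the kind of object such analyses study, consider the (1+1) EA on the classical OneMax toy problem, whose expected optimisation time is known to be Theta(n log n). The sketch below is an illustrative implementation (not part of the tutorial material) that lets one check the theoretical prediction empirically:

```python
import random

def one_max(bits):
    """OneMax: number of 1-bits; maximised by the all-ones string."""
    return sum(bits)

def one_plus_one_ea(n, rng):
    """(1+1) EA with standard bit mutation (rate 1/n); returns the number
    of fitness evaluations until the optimum (all ones) is found."""
    x = [rng.randint(0, 1) for _ in range(n)]
    evals = 1
    while one_max(x) < n:
        # Flip each bit independently with probability 1/n.
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        evals += 1
        if one_max(y) >= one_max(x):  # elitist acceptance
            x = y
    return evals

rng = random.Random(42)
n = 50
runs = [one_plus_one_ea(n, rng) for _ in range(10)]
print(sum(runs) / len(runs))  # empirically on the order of e * n * ln(n)
```

For n = 50, e * n * ln(n) is roughly 530 evaluations, and the empirical average over a few runs typically lands in that ballpark, matching the Theta(n log n) bound established in the runtime analysis literature.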

Intended Audience

The tutorial is targeted at scientists and engineers who wish to:

  • theoretically understand the behaviour and performance of the search algorithms they design;
  • familiarise themselves with the techniques used in the runtime analysis of EAs;
  • pursue research in the area of time complexity analysis of randomised algorithms in general and EAs in particular.
Previous tutorials at CEC 2013, WCCI 2014, WCCI 2016, SSCI 2017, WCCI 2018 attracted over 50 participants each. The slides online at and at have been downloaded over 200 times. A similar tutorial has been given at GECCO 2013-2015 and at PPSN 2016 by the speaker together with Dr. Per Kristian Lehre.

Short Biography

Pietro Simone Oliveto

Pietro Simone Oliveto is a Senior Lecturer and EPSRC-funded Early Career Fellow at the University of Sheffield, UK. He received the Laurea degree and PhD degree in computer science from the University of Catania, Italy, in 2005 and from the University of Birmingham, UK, in 2009, respectively. From October 2007 to April 2008, he was a visiting researcher at the Efficient Algorithms and Complexity Theory Institute in the Department of Computer Science of the University of Dortmund, where he collaborated with Prof. Ingo Wegener's research group. From 2009 to 2013 he held the positions of EPSRC PhD+ Fellow for one year and EPSRC Postdoctoral Fellow in Theoretical Computer Science for 3 years at the University of Birmingham. From 2013 to 2016 he was a Vice-Chancellor's Fellow at the University of Sheffield.
His main research interest is the time complexity analysis of randomised search heuristics for combinatorial optimisation problems. He has published several runtime analysis papers on evolutionary algorithms (EAs), artificial immune systems (AIS), and ant colony optimisation (ACO) algorithms for classical NP-hard combinatorial optimisation problems such as vertex cover, mincut, and spanning trees with a maximal number of leaves, together with a review paper on the field of time complexity analysis of EAs for combinatorial optimisation problems and two book chapters containing a tutorial on the runtime analysis of EAs. He won best paper awards at the GECCO'08, ICARIS'11, and GECCO'14 conferences and came close on several other occasions with best paper nominations.
Dr. Oliveto has given tutorials on the runtime analysis of EAs at WCCI 2012, CEC 2013, GECCO 2013, WCCI 2014, GECCO 2014, GECCO 2015, SSCI 2015, GECCO 2016 and PPSN 2016. He is a member of the Steering Committee of the annual workshop on Theory of Randomized Search Heuristics (ThRaSH), an IEEE Senior Member, an Associate Editor of IEEE Transactions on Evolutionary Computation, and Chair of the IEEE CIS Technical Committee on Evolutionary Computation. Since 2008 he has been invited to the Dagstuhl seminar series on the Theory of Evolutionary Algorithms.

CEC-T13 Evolutionary Algorithms and Hyper-Heuristics

Organized by Nelishia Pillay


The tutorial will aim to:

  • provide a sufficient introduction and overview of evolutionary algorithm hyper-heuristics to enable researchers to start their own research in this domain
  • provide an overview of recent research directions in evolutionary algorithms and hyper-heuristics
  • highlight the benefits of evolutionary algorithms to the field of hyper-heuristics
  • highlight the benefits of hyper-heuristics to evolutionary algorithms
  • stimulate interest and discussion on future research directions in the area of evolutionary algorithms and hyper-heuristics.

Intended Audience

The tutorial is aimed at researchers in computational intelligence who have an interest in hyper-heuristics or have just started working in this area. A background in evolutionary algorithms is assumed.

Short Biography

Nelishia Pillay

Nelishia Pillay is a Professor and Head of the Department of Computer Science at the University of Pretoria. She is chair of the IEEE Task Force on Hyper-Heuristics within the Technical Committee on Intelligent Systems and Applications of the IEEE Computational Intelligence Society and holds the Multichoice Joint-Chair in Machine Learning. Her research areas include hyper-heuristics, combinatorial optimization, genetic programming, genetic algorithms, and other biologically inspired methods. She has published in these areas in journals and in national and international conference proceedings. She has served on program committees for numerous national and international conferences and is a reviewer for various international journals. She is an active researcher in the field of evolutionary algorithm hyper-heuristics and their application to optimization problems and automated design. This is one of the focus areas of the NICOG (Nature-Inspired Computing Optimization) research group, which she has established.

CEC-T14 Evolutionary Machine Learning

Organized by Masaya Nakata, Shinichi Shirakawa, Will Browne


A fusion of Evolutionary Computation and Machine Learning, namely Evolutionary Machine Learning (EML), has been recognized as a rapidly growing research area as these powerful search and learning mechanisms are combined. Many specific branches of EML with different learning schemes and different ML problem domains have been proposed. These branches seek to address common challenges:

  • How can evolutionary search discover optimal ML configurations and parameter settings?
  • How can the deterministic models of ML influence evolutionary mechanisms?
  • How can EC and ML be integrated into one learning model?
Consequently, various insights address principal issues of the EML paradigm that are worthwhile to “transfer” across these different specific challenges.
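One of the challenges above, replacing gradient-based training of a neural model with evolutionary search, can be illustrated by a minimal toy sketch (purely illustrative, not drawn from the tutorial itself): a (1+1) evolution strategy evolving the nine weights of a tiny 2-2-1 network on the XOR task. All names and settings here are hypothetical choices.

```python
import math
import random

# XOR dataset: the classic task that a linear model cannot solve.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(z):
    """Numerically stable logistic function."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def forward(w, x):
    """Tiny 2-2-1 network: 9 weights = 2 hidden neurons (3 each) + output (3)."""
    h = [math.tanh(w[3 * i] * x[0] + w[3 * i + 1] * x[1] + w[3 * i + 2])
         for i in range(2)]
    return sigmoid(w[6] * h[0] + w[7] * h[1] + w[8])

def loss(w):
    """Squared error over the four XOR patterns."""
    return sum((forward(w, x) - t) ** 2 for x, t in DATA)

def evolve(rng, gens=3000, sigma=0.3):
    """(1+1) evolution strategy: mutate all weights, keep the child if no worse."""
    w = [rng.gauss(0, 1) for _ in range(9)]
    for _ in range(gens):
        child = [wi + rng.gauss(0, sigma) for wi in w]
        if loss(child) <= loss(w):
            w = child
    return w

rng = random.Random(1)
w = evolve(rng)
print(loss(w))  # squared error after evolution; typically small
```

The same loop structure works unchanged for any fitness function, which is precisely why evolutionary search transfers so easily across EML branches, from weight evolution to configuration and parameter search.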

The goal of our tutorial is to present ideas from advanced techniques of specific EML branches, and then to share them as common insights into the EML paradigm. First, we introduce the common challenges in the EML paradigm and then discuss how various EML branches address these challenges. Then, as detailed examples, we present two major approaches to EML: evolutionary rule-based learning (i.e., learning classifier systems) as a symbolic approach, and evolutionary neural networks as a connectionist approach.

Our tutorial is organized not only for beginners but also for experts in the EML field. For beginners, it will be a gentle introduction to EML from the basics to recent challenges. For experts, our two specific talks present the most recent advances in evolutionary rule-based learning and evolutionary neural networks. Additionally, we will discuss how insights from these techniques can be transferred to other EML branches, shaping new directions for EML techniques.

Intended Audience

As a fusion of machine learning and evolutionary computation, the tutorial covers a wide range of computational intelligence fields. It should attract a large potential audience working not only on EML but also on genetic programming, feature selection, real-world applications, and so forth.

This tutorial is a re-edition of our tutorial first held at WCCI 2018. That first tutorial had two sessions: 1) EML and 2) analysis of EC with ML. For CEC 2019, we focus more specifically on EML (the first session), which attracted great attention with more than 40 attendees at WCCI 2018 (according to the Whova mobile app used at WCCI 2018, our tutorial recorded 43 attendees, more than many other tutorials).

Short Biography

Masaya Nakata

Dr. Nakata is an assistant professor at the Faculty of Engineering, Yokohama National University, Japan. He received his Ph.D. degree in informatics from the University of Electro-Communications, Japan, in 2016. He has been working on evolutionary rule-based machine learning, reinforcement learning, and data mining, more specifically on learning classifier systems (LCSs). He was a visiting researcher at Politecnico di Milano, the University of Bristol, and Victoria University of Wellington. His contributions have been published in more than 10 journal papers and more than 20 conference papers, including at the leading conferences in evolutionary computation, e.g., CEC, GECCO, and PPSN. He is an organizing committee member of the International Workshop on Learning Classifier Systems/Evolutionary Rule-based Machine Learning 2015-2016 and 2018-2019 at the GECCO conference, elected by the international LCS research community. He received the IEEE CIS Japan Chapter Young Researcher Award.

Shinichi Shirakawa

Dr. Shirakawa is a lecturer at the Faculty of Environment and Information Sciences, Yokohama National University, Japan. He received his Ph.D. degree in engineering from Yokohama National University in 2009. He has worked at Fujitsu Laboratories Ltd., Aoyama Gakuin University, and the University of Tsukuba. His research interests include evolutionary computation, machine learning, and computer vision. He is currently working on evolutionary deep neural networks. His contributions have been published in high-quality journal and conference papers in EC and AI, e.g., CEC, GECCO, PPSN, and AAAI. He received the IEEE CIS Japan Chapter Young Researcher Award in 2009 and won the best paper award in the evolutionary machine learning track of GECCO 2017.

Will Browne

Associate Prof. Will Browne’s research focuses on applied cognitive systems: specifically, how to use inspiration from natural intelligence to enable computers, machines, and robots to behave usefully. This includes cognitive robotics, learning classifier systems, and modern heuristics for industrial application. A/Prof. Browne has been co-track chair for the Genetics-Based Machine Learning (GBML) track and is currently co-chair of the Evolutionary Machine Learning track at the Genetic and Evolutionary Computation Conference. He has also given tutorials on rule-based machine learning at GECCO, chaired the International Workshop on Learning Classifier Systems (LCSs), and lectured graduate courses on LCSs. He recently co-authored the first textbook on LCSs, ‘Introduction to Learning Classifier Systems’ (Springer, 2017). Currently he leads the LCS theme in the Evolutionary Computation Research Group at Victoria University of Wellington, New Zealand.

CEC-T15 Nature-Inspired Dynamic Constrained Optimization

Organized by Efrén Mezura-Montes, María-Yaneli Ameca-Alducin


Dynamic constrained optimization problems (DCOPs), in which the objective function and/or the constraints can change over time, have been a focal point of research in recent years due to their presence in many real-world problems.

Examples include the optimal control of hybrid systems and hydro-thermal power scheduling (which contain both discrete and continuous variables). In hydro-thermal power scheduling, the demand or the available resources change over time, creating a constrained dynamic environment. In constrained continuous spaces, the problem of source identification can be modeled as a DCOP in which the search space changes because information about the problem is not fully known and is only gradually revealed as time goes by. Another example in a constrained continuous space is the problem of maximizing income by appropriately choosing a mixed grazing strategy; in this case, the pattern of change is observed from real-world data.

Different approaches have been proposed in the literature to solve DCOPs. These algorithms are usually based on nature-inspired algorithms originally designed to tackle static optimization problems. To make such algorithms ready to deal with constrained dynamic environments, they must first be able to handle the dynamic constraints of the problem, detect changes in the environment (in the feasible region and objective function), and then react to these changes properly.

The purpose of this tutorial is five-fold: (1) providing a definition of DCOPs and their applications in real-world problems; (2) detailing the common mechanisms that have been applied to deal with constrained dynamic environments; (3) reviewing the state of the art in DCOPs; (4) presenting benchmarks and performance measures; and (5) presenting a recently proposed framework for creating test DCOPs.

Intended Audience

The tutorial is intended for researchers, practitioners and students working on numerical optimization using nature-inspired algorithms, particularly in real-world problems, which are usually constrained and where the objective function and/or the constraints can change over time.

Short Biography

Efrén Mezura-Montes

Dr. Efrén Mezura-Montes is a full-time researcher at the Artificial Intelligence Research Center, University of Veracruz, Mexico. His research interests are the design, analysis, and application of bio-inspired algorithms to solve complex optimization problems. He has published over 120 papers in peer-reviewed journals and conferences. He has also edited one book and contributed six book chapters published by international publishing companies. Google Scholar reports more than 5,200 citations of his work.
Dr. Mezura-Montes is a member of the IEEE Computational Intelligence Society Evolutionary Computation Technical Committee and of the IEEE Systems, Man, and Cybernetics Society Soft Computing Technical Committee. He is also the founder of the IEEE Computational Intelligence Society Task Force on Nature-Inspired Constrained Optimization.
Dr. Mezura-Montes is a member of the editorial boards of the journals “Swarm and Evolutionary Computation”, “Complex & Intelligent Systems”, and the “Journal of Optimization”. He is also a reviewer for more than 20 international specialized journals, including the IEEE Transactions on Evolutionary Computation and the IEEE Transactions on Cybernetics. Dr. Mezura-Montes is a Level 2 member of the Mexican National Researchers System (SNI). Finally, Dr. Mezura-Montes is a regular member of the Mexican Sciences Academy (AMC) and also a regular member of the Mexican Computing Academy (AMEXCOMP).

María-Yaneli Ameca-Alducin

Dr. María-Yaneli Ameca-Alducin is a research associate in the field of Computer Science at the University of Adelaide, Australia. She received her PhD in Artificial Intelligence from the University of Veracruz, Mexico, and also holds a Master's degree in Applied Computing from the National Laboratory of Advanced Computing (LANIA), Mexico. Her Master's thesis was awarded first place by the Mexican Society of Artificial Intelligence (SMIA) in 2013, and her PhD thesis, titled “Differential evolution to solve dynamic constrained optimization”, was awarded third place by SMIA in 2017. She is a Candidate member of the Mexican National Researchers System (SNI). Her research interests are the design, analysis, and application of nature-inspired algorithms to solve dynamic constrained optimization problems.

CEC-T16 Genetic improvement: Taking real-world source code and improving it using genetic programming

Organized by Saemundur O. Haraldsson, John Woodward, Brad Alexander, Markus Wagner


Genetic Programming (GP) has been on the scene for around 25 years. Genetic Improvement (GI) is “the new kid on the block”. GP evolves a small program or component from scratch; in contrast, GI evolves changes to the code of an extant application. Where GP aims to evolve specialised code for a pre-determined niche, GI aims to improve some functional or non-functional aspect of an application that was not originally designed for automated change. In short, GI is the process of taking existing software, at any scale and in its own context, and making it better.

These differences may not seem important, as we can still generate the same set of functions; however, this subtle difference opens up a vast number of new possibilities for research, and it will make GI attractive for industrial applications. Furthermore, we can optimize non-functional properties of code such as power consumption, code size, bandwidth, and execution time.

The aim of the tutorial is to

  • examine the motives for evolving source code directly, rather than a language built from a function set and terminal set which has to be interpreted after a program has been evolved
  • understand different approaches to implementing genetic improvement including operating directly on text files, and operating on abstract syntax trees
  • appreciate the new research questions that can be addressed while operating on actual source code
  • understand some of the issues regarding measuring non-functional properties such as execution time and power consumption
  • examine some of the early examples of genetic improvement and our flagship application will be the world’s first implementation of GI in a live system (this technique has found and fixed all 40 bugs in its first 6 months while operating in a medical facility)
  • understand links between GI and other techniques such as hyper-heuristics, automatic parameter tuning, and deep parameter tuning
  • highlight some of the multi-objective research where programs have been evolved that lie on the Pareto front with axes representing different non-functional properties
  • give an introduction to GI in No Time, a simple open-source micro-framework for GI
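The "operating directly on text files" approach mentioned above can be illustrated with a toy hill climber that mutates a program's raw source lines and keeps only variants that still pass the tests. This is a deliberately minimal sketch: the names, the deletion-only mutation, and the tiny inline "test suite" are illustrative; real GI systems use richer edit operations (statement replacement, AST transforms) and the project's actual test suite.

```python
import random

def gi_hill_climb(lines, passes_tests, steps=200, seed=0):
    """Toy genetic-improvement hill climber over raw source lines.

    lines        -- the program as a list of source-code lines
    passes_tests -- predicate: does this variant still pass the test suite?
    Tries random single-line deletions and keeps any variant that still
    passes the tests, shrinking the program (a non-functional improvement).
    """
    rng = random.Random(seed)
    best = list(lines)
    for _ in range(steps):
        if len(best) <= 1:
            break
        cand = list(best)
        del cand[rng.randrange(len(cand))]   # mutation: delete one line
        if passes_tests(cand):               # fitness: behaviour preserved
            best = cand
    return best

# Example: a function with a redundant statement; the "test suite"
# compiles the variant and checks its behaviour.
program = [
    "def double(x):",
    "    y = 0  # dead code",
    "    return 2 * x",
]

def passes_tests(cand):
    src = "\n".join(cand)
    env = {}
    try:
        exec(src, env)
        return env["double"](21) == 42
    except Exception:
        return False

improved = gi_hill_climb(program, passes_tests)
print(len(improved) < len(program))  # True: the dead line was removed
```

Deleting either the `def` line or the `return` line makes the tests fail, so only the dead statement can be removed; the search settles on the shorter, behaviourally equivalent program.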

Intended Audience

This tutorial will be of interest to two groups of people.
Firstly, people with a genetic programming background, who are interested in applying their genetic programming techniques to real source code.
Secondly, software practitioners who are interested in using machine learning techniques to improve their software. We will not assume prior knowledge of genetic programming.
This tutorial will be suitable for PhD students upwards.

Short Biography

Saemundur O. Haraldsson

Saemundur is a Senior Research Associate at Lancaster University. He has multiple publications on Genetic Improvement, including two that received best paper awards, at the 2017 GI and ICTS4eHealth workshops. Additionally, he co-authored the first comprehensive survey on GI, published in 2017. He has been invited to give multiple talks on the subject, including three Crest Open Workshops and a talk for an industrial audience in Iceland. His PhD thesis (submitted in May 2017) details his work on the world's first live GI integration in an industrial application. Saemundur previously gave a tutorial on GI at PPSN 2018.

John Woodward

John is a lecturer at QMUL. He has organized workshops at GECCO, including Metaheuristic Design Patterns and ECADA (Evolutionary Computation for the Automated Design of Algorithms), which has run for 7 years. He has also given tutorials on the same topic at PPSN, CEC, and GECCO. He currently holds a grant examining how Genetic Improvement techniques can be used to adapt scheduling software for airport runways. With his PhD student Saemundur Haraldsson (with whom this proposal is in collaboration), he won a best paper award at the 2017 GI workshop. He has also organized a GI workshop at UCL as part of their very successful Crest Open Workshops.

Brad Alexander

Brad's research interests include program optimisation, rewriting, genetic programming (GP) (especially the discovery of recurrences), and search-based software engineering. He has supervised successful projects on the evolution of control algorithms for robots, the evolution of three-dimensional geological models, the synthesis of artificial water distribution networks, and the use of background optimisation to improve the performance of instruction set simulators (ISSs). He has also worked on improving algorithms for the analysis of water distribution networks.

Markus Wagner

Markus is a Senior Lecturer at the School of Computer Science, University of Adelaide, Australia. His areas of interest are heuristic optimisation and applications thereof, and more specifically in theory-motivated algorithm design and in applications to wave energy production as well as to non-functional code optimisation. He currently holds a grant on dynamic adaptive software systems with a focus on mobile devices, and he has co-organised the GI@GECCO Workshop in 2018. He has worked on theoretical aspects of genetic programming, in particular on bloat-control mechanisms, and he is currently involved in the development of two open-source platforms that have genetic programming at their core.

CEC-T17 Supply Chain Optimization in HUAWEI: Challenges and Opportunities for Metaheuristics

Organized by Fangzhou Zhu


Huawei is a leading global provider of information and communications technology (ICT) infrastructure and smart devices. Every year, we need to deliver hundreds of billions of dollars' worth of products across four key domains – telecom networks, IT, smart devices, and cloud services – to more than 170 countries and regions, on time. An efficient supply chain is essential to support such a huge volume of product manufacturing and delivery. As one of the most complex supply chains in the world, it contains a large number of scenarios that rely on efficient and intelligent forecasting, control, scheduling, and optimization algorithms. Many scenarios, such as scheduling, vehicle routing, packing and loading, and warehouse optimization, are multi-objective optimization problems with a large number of constraints. Solving these practical problems remains very challenging: most are super-large-scale NP-hard problems; they normally must be solved in limited time (sometimes an 'almost optimal' solution is required in quasi-real time); they contain multiple objectives; the input data are often 'dirty'; and building a correct model requires substantial prior business knowledge. Overcoming these challenges brings not only great technical value but also significant economic benefits. Thus, Huawei's supply chain provides many good opportunities to research, develop, and deploy metaheuristic techniques. This tutorial consists of two parts. In the first part, we will introduce some representative scenarios in Huawei's supply chain. In the second part, we will talk about some of our research work, including algorithm design, experience, lessons, and future work. We hope that, through this tutorial, industry and academia can work together more closely to promote research on metaheuristic technologies.

Intended Audience


Short Biography

Fangzhou Zhu

Fangzhou Zhu received his Bachelor's and Master's degrees from Soochow University, Suzhou, China, in 2014 and 2017, respectively. He is currently an algorithm engineer in Huawei Noah's Ark Lab. His research interests include telecom data mining and large-scale production planning.

CEC-T18 Evolution of Neural Networks

Organized by Risto Miikkulainen


Evolution of artificial neural networks has recently emerged as a powerful technique in two areas. First, while standard value-function-based reinforcement learning works well when the environment is fully observable, neuroevolution makes it possible to disambiguate hidden state through memory. Such memory makes new applications possible in areas such as robotic control, game playing, and artificial life. Second, deep learning performance depends crucially on the network architecture and hyperparameters. While many such architectures are too complex to be optimized by hand, neuroevolution can do so automatically. Such evolutionary AutoML can be used to achieve good deep learning performance even with limited resources, or state-of-the-art performance with more effort. It is also possible to optimize other aspects of the architecture, such as its size, speed, or fit with hardware. In this tutorial, I will review (1) neuroevolution methods that evolve fixed-topology networks, network topologies, and network construction processes; (2) methods for neural architecture search and evolutionary AutoML; and (3) applications of these techniques in control, robotics, artificial life, games, image processing, and language.
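The simplest setting the abstract mentions, evolving fixed-topology networks, can be sketched as an elitist evolution strategy over the weight vector of a small network. The code below is a minimal illustration, not any of the specific methods covered in the tutorial; the network size, mutation strength, and task are arbitrary choices.

```python
import math
import random

def net_forward(weights, x, hidden=4):
    """Fixed-topology network: 1 input -> `hidden` tanh units -> 1 linear output.
    `weights` is flat: (w_j, b_j) pairs for each hidden unit, then the
    output weights, then the output bias (3*hidden + 1 values in total)."""
    h = [math.tanh(weights[2 * j] * x + weights[2 * j + 1]) for j in range(hidden)]
    out_w = weights[2 * hidden:3 * hidden]
    return sum(w * v for w, v in zip(out_w, h)) + weights[-1]

def mse(weights, data):
    return sum((net_forward(weights, x) - y) ** 2 for x, y in data) / len(data)

def evolve(data, children=20, gens=100, sigma=0.3, hidden=4, seed=1):
    """(1+lambda)-style neuroevolution of the weights of a fixed topology:
    perturb the current best with Gaussian noise and keep any improvement."""
    rng = random.Random(seed)
    best = [rng.gauss(0, 1) for _ in range(3 * hidden + 1)]
    best_err = start_err = mse(best, data)
    for _ in range(gens):
        for _ in range(children):
            child = [w + rng.gauss(0, sigma) for w in best]
            err = mse(child, data)
            if err < best_err:        # elitism: fitness never regresses
                best, best_err = child, err
    return best, best_err, start_err

# Evolve a network to fit y = x^2 on a few sample points.
data = [(x / 5.0, (x / 5.0) ** 2) for x in range(-5, 6)]
_, final_err, start_err = evolve(data)
print(final_err <= start_err)  # True: elitist selection guarantees this
```

Evolving topologies as well as weights, or evolving network construction processes, replaces the fixed weight vector with a richer genome, but the evaluate-mutate-select loop stays the same.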

Intended Audience


Short Biography

Risto Miikkulainen

Risto Miikkulainen is a Professor of Computer Science at the University of Texas at Austin and Associate VP of Evolutionary AI at Cognizant. He received an M.S. in Engineering from Helsinki University of Technology (now Aalto University) in 1986, and a Ph.D. in Computer Science from UCLA in 1990. His current research focuses on methods and applications of neuroevolution, as well as neural network models of natural language processing and vision; he is an author of over 400 articles in these research areas. Risto is an IEEE Fellow, and his work on neuroevolution has recently been recognized with the Gabor Award of the International Neural Network Society and Outstanding Paper of the Decade Award of the International Society for Artificial Life.