Adaptive Learning Agents (ALA) research draws on diverse fields, including Computer Science, Software Engineering, Biology, and the Cognitive and Social Sciences. The ALA workshop focuses on agents and multiagent systems that employ learning or adaptation.

This workshop continues the long-running AAMAS series of workshops on adaptive agents, now in its fifteenth year. Previous editions of the workshop may be found at the following URLs:

The goal of this workshop is to increase awareness of and interest in adaptive agent research, to encourage collaboration, and to give a representative overview of current research in the area of adaptive and learning agents and multiagent systems. It aims to bring together not only scientists from different areas of computer science (e.g., agent architectures, reinforcement learning, and evolutionary algorithms) but also researchers from other fields studying similar concepts (e.g., game theory, bio-inspired control, mechanism design).

The workshop will serve as an inclusive forum for discussion of ongoing and completed work on both theoretical and practical issues in adaptive and learning agents and multiagent systems.

This workshop will focus on all aspects of adaptive and learning agents and multiagent systems, with a particular emphasis on how to modify established learning techniques and/or create new learning paradigms to address the many challenges presented by complex real-world problems. Topics of interest include, but are not limited to:

Integrated learning approaches that work with other agent reasoning modules, such as negotiation, trust models, and coordination

Supervised multiagent learning

Reinforcement learning (single and multiagent)

Planning (single and multiagent)

Reasoning (single and multiagent)

Distributed learning

Adaptation and learning in dynamic environments

Evolution of agents in complex environments

Co-evolution of agents in a multiagent setting

Cooperative exploration and learning to cooperate and collaborate

Learning trust and reputation

Communication restrictions and their impact on multiagent coordination

Design of reward structure and fitness measures for coordination

Scaling learning techniques to large systems of learning and adaptive agents

Emergent behaviour in adaptive multiagent systems

Game theoretical analysis of adaptive multiagent systems

Neuro-control in multiagent systems

Bio-inspired multiagent systems

Applications of adaptive and learning agents and multiagent systems to real-world complex systems

Accepted papers from the workshop will be eligible for extension and inclusion in a journal special issue.

Important Dates

Submission Deadline: February 14, 2017

Notification of acceptance: March 17, 2017

Camera-ready copies: March 22, 2017

Workshop: May 8/9, 2017

Journal Special Issue

We are delighted to announce that extended versions of all original contributions to ALA 2017 will be eligible for inclusion in a special issue of The Knowledge Engineering Review (Impact Factor 1.039). The deadline for submitting extended papers is September 1, 2017.

We will post further details about the submission process and expected publication timeline here in the coming weeks.

Invited Talks

Bio: Ana L. C. Bazzan holds a PhD from the University of Karlsruhe in Germany and is a full professor at the Informatics Institute of the Federal University of Rio Grande do Sul (UFRGS) in Brazil. She served as general co-chair of AAMAS 2014 and is serving as one of the PC chairs of PRIMA 2017 and as an area chair of IJCAI 2017. She has served several times on the program committees of AAMAS and other conferences (as PC member or senior PC member) and as an associate editor of the Journal of Autonomous Agents and Multiagent Systems, Advances in Complex Systems, and Multiagent and Grid Systems. She is a member of the IFAAMAS board (2004-2008 and 2014-). She co-organized the Workshop on Synergies between Multiagent Systems, Machine Learning, and Complex Systems (TRI 2015), held together with IJCAI 2015, and the Agents in Traffic and Transportation (ATT) workshop series. Her research interests include MAS, ABMS, machine learning, multiagent reinforcement learning, evolutionary game theory, swarm intelligence, and complex systems. Her work is mainly applied in domains related to traffic and transportation.

Talk Title: Beyond Reinforcement Learning in Multiagent Systems

Talk Abstract: Learning is an important component of an agent's decision-making process. Despite the diversity of approaches in the machine learning area, in the multiagent community learning is associated mostly with reinforcement learning. Given this background, this talk has two aims: to revisit the early motivations for multiagent learning, and to describe some of the work addressing the frontiers of multiagent systems and machine learning. The latter aims to motivate researchers to address the issues involved in applying techniques from multiagent systems to machine learning and vice versa.

Bio: Thore Graepel is a research group lead at DeepMind and holds a part-time position as Chair of Machine Learning at University College London. He studied physics at the University of Hamburg, Imperial College London, and Technical University of Berlin, where he also obtained his PhD in machine learning in 2001. He spent time as a postdoctoral researcher at ETH Zurich and Royal Holloway College, University of London, before joining Microsoft Research in Cambridge in 2003, where he co-founded the Online Services and Advertising group. Major applications of Thore's work include Xbox Live's TrueSkill system for ranking and matchmaking, the AdPredictor framework for click-through rate prediction in Bing, and the Matchbox recommender system which inspired the recommendation engine of Xbox Live Marketplace. More recently, Thore's work on the predictability of private attributes from digital records of human behaviour has been the subject of intense discussion among privacy experts and the general public. Thore's research interests are in artificial intelligence and machine learning and include probabilistic graphical models, reinforcement learning, game theory, and multi-agent systems. He has published over one hundred peer-reviewed papers, is a named co-inventor on dozens of patents, serves on the editorial boards of JMLR and MLJ, and is a founding editor of the book series Machine Learning & Pattern Recognition at Chapman & Hall/CRC. At DeepMind, Thore has returned to his original passion of understanding and creating intelligence, and recently contributed to creating AlphaGo, the first computer program to defeat a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.

Talk Title: The role of multi-agent learning in artificial intelligence research

Talk Abstract: We consider intelligence to be the ability of an agent to achieve goals in a wide range of environments (Legg & Hutter). This notion motivates an approach to artificial intelligence research in which we progress on two fronts: on one side, we define wider and more difficult sets of tasks or environments, and on the other, we develop agents capable of learning to succeed within these ever more challenging environments. Thinking in evolutionary/ecological terms, the richest environments for a given agent are themselves evolving collections of agents, be they biological organisms within their ecological niche or companies within a given market. When thinking about a route towards artificial intelligence, it is therefore crucial to go beyond the reinforcement learning (RL) paradigm of agent and environment and consider evolving multi-agent systems. In this talk, I will discuss the important role multi-agent learning has to play in artificial intelligence research and the challenges it presents, drawing on three examples from our work: i) the role of self-play in AlphaGo, ii) the emergence of cooperation among self-interested agents in sequential social dilemmas, and iii) the use of evolutionary principles to channel gradient descent in super neural networks (PathNet).