Refining the behaviour of robot swarms in this way has previously required external computation, limiting the swarms to lab settings where those additional resources are available.
The new research, from the University of Bristol and the University of the West of England (UWE), internalises the computation required to evolve behaviour. This could lay the path for robot swarms to operate in real-world environments, learning on the fly as they perform industrial, agricultural and search & rescue tasks. The work is published in Advanced Intelligent Systems.
“This is the first step towards robot swarms that automatically discover suitable swarm strategies in the wild,” said research co-lead Dr Hauert, senior lecturer in Robotics at Bristol University’s Department of Engineering Mathematics and Bristol Robotics Laboratory (BRL).
“The next step will be to get these robot swarms out of the lab and demonstrate our proposed approach in real-world applications.”
The Bristol team was also able to discover which rules give rise to desired swarm behaviours. By making the controllers understandable to humans using behaviour trees, the artificial evolution of the swarm behaviour can be queried, explained and improved. This type of ‘explainable AI’ is paramount for the safe rollout of systems that rely on artificial intelligence, particularly for something like robot swarms that are designed to operate autonomously in real-world environments.
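To illustrate why behaviour trees lend themselves to this kind of human inspection, here is a minimal, hypothetical sketch of one in Python. The node types (Selector, Sequence, Condition, Action) are standard behaviour-tree building blocks, but the specific rules and robot state shown are illustrative assumptions, not the evolved controllers from the paper.

```python
# Minimal behaviour-tree sketch. Node names and robot actions are
# hypothetical examples, not the controllers evolved in the study.

SUCCESS, FAILURE = "success", "failure"

class Sequence:
    """Ticks children in order; fails as soon as one child fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self, robot):
        for child in self.children:
            if child.tick(robot) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Tries children in order; succeeds as soon as one child succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self, robot):
        for child in self.children:
            if child.tick(robot) == SUCCESS:
                return SUCCESS
        return FAILURE

class Condition:
    """Leaf node: checks a readable predicate on the robot's state."""
    def __init__(self, name, predicate):
        self.name, self.predicate = name, predicate
    def tick(self, robot):
        return SUCCESS if self.predicate(robot) else FAILURE

class Action:
    """Leaf node: performs a named action on the robot."""
    def __init__(self, name, effect):
        self.name, self.effect = name, effect
    def tick(self, robot):
        self.effect(robot)
        return SUCCESS

# A human-readable rule: "if an object is nearby, grab it; otherwise wander".
tree = Selector(
    Sequence(
        Condition("object nearby?", lambda r: r["sees_object"]),
        Action("grab", lambda r: r.update(action="grab")),
    ),
    Action("wander", lambda r: r.update(action="wander")),
)

robot = {"sees_object": True, "action": None}
tree.tick(robot)
print(robot["action"])  # "grab", because the condition held
```

Because every node has a name and a clear success/failure semantics, the whole decision process can be read off the tree structure, which is what makes evolved controllers in this form queryable and explainable.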
"In many modern AI systems, especially those that employ Deep Learning, it is almost impossible to understand why the system made a particular decision,” said Professor Alan Winfield, BRL and Science Communication Unit, UWE.
“This lack of transparency can be a real problem if the system makes a bad decision and causes harm. An important advantage of the system described in this paper is that it is transparent: its decision-making process is understandable by humans.”