Swarm behavior—a cooperative behavior commonly observed in nature, for example among insects, birds, and fish—entails the collective and unified motion of many self-propelled individuals.
Inspired by swarms in nature, robot swarms can exhibit seemingly complex behaviors by following a set of simple rules, with each robot interacting only with its neighbors or the local environment. Swarm intelligence can find application in several fields, from environmental monitoring to disaster recovery, infrastructure maintenance, logistics, and agriculture. The challenge is discovering which rules give rise to a desired swarm behavior.
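A classic illustration of simple local rules producing collective motion—Reynolds' "boids" rules of separation, alignment, and cohesion, not the method described in this article—can be sketched in a few lines. All weights and the neighbor radius here are illustrative choices:

```python
import math

def step(agents, r=1.0, w_sep=0.5, w_ali=0.3, w_coh=0.2, dt=0.1):
    """One boids-style update: each agent (x, y, vx, vy) reacts only to
    neighbors within radius r, using three simple rules."""
    new = []
    for i, (px, py, vx, vy) in enumerate(agents):
        sep_x = sep_y = ali_x = ali_y = coh_x = coh_y = 0.0
        n = 0
        for j, (qx, qy, ux, uy) in enumerate(agents):
            if i == j:
                continue
            dx, dy = qx - px, qy - py
            d = math.hypot(dx, dy)
            if 0 < d < r:
                n += 1
                sep_x -= dx / d; sep_y -= dy / d   # separation: steer away from close neighbors
                ali_x += ux;     ali_y += uy       # alignment: match neighbors' velocities
                coh_x += dx;     coh_y += dy       # cohesion: steer toward neighbors' center
        if n:
            vx += dt * (w_sep * sep_x + w_ali * ali_x / n + w_coh * coh_x / n)
            vy += dt * (w_sep * sep_y + w_ali * ali_y / n + w_coh * coh_y / n)
        new.append((px + dt * vx, py + dt * vy, vx, vy))
    return new
```

Even two agents with very different headings start aligning after a single update, purely through these local interactions.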
In the past, artificial evolution has mainly been used to discover controllers for robot swarms. However, this approach is typically carried out off-board, meaning that all the simulations are run on a computer external to the swarm; only the best strategy is then copied onto the robots. As a result, this method requires external infrastructure, considerably limiting the ability of swarms to discover suitable behaviors outside the laboratory setting.
Alternative techniques, such as embodied evolution, have been proposed to let the swarm discover behaviors autonomously, onboard the robots. Unfortunately, these approaches are quite time-consuming, as candidate solutions are tested on the real robots rather than in simulation. Moreover, a real robot running a defective controller could damage itself or its surroundings, making the approach unsuitable for real-world use.
In a joint effort between researchers at the University of Bristol and the University of the West of England, Prof. Sabine Hauert and co-workers take advantage of recent advances in high-performance mobile computing to develop the “Teraflop Swarm”, a robot swarm with the ability to run the computationally intensive automatic design process entirely within the swarm, thus overcoming the reliance on off-board resources.
The distributed evolutionary system running on the swarm itself generates new controllers by using a “Behaviour Tree” architecture. Within only fifteen minutes and with no reliance on external infrastructure, the swarm can reach a high level of fitness, demonstrating a considerably shorter time frame than previous embodied evolution methods. The researchers also showed that the automatically generated controllers can be analyzed, understood, and even improved.
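A behaviour tree composes simple conditions and actions into a readable, modular controller—one reason the generated controllers can be inspected and understood. The following is a minimal sketch of the idea, not the paper's implementation; the foraging-style node names are hypothetical:

```python
SUCCESS, FAILURE = "success", "failure"

def sequence(*children):
    """Succeeds only if every child succeeds, ticked left to right."""
    def tick(state):
        for child in children:
            if child(state) == FAILURE:
                return FAILURE
        return SUCCESS
    return tick

def selector(*children):
    """Returns success at the first child that succeeds (a fallback)."""
    def tick(state):
        for child in children:
            if child(state) == SUCCESS:
                return SUCCESS
        return FAILURE
    return tick

def condition(pred):
    return lambda state: SUCCESS if pred(state) else FAILURE

def action(fn):
    def tick(state):
        fn(state)          # perform a side effect on the robot's state
        return SUCCESS
    return tick

# Hypothetical controller: approach food if it is visible, otherwise explore.
controller = selector(
    sequence(condition(lambda s: s["sees_food"]),
             action(lambda s: s.update(move="approach"))),
    action(lambda s: s.update(move="explore")),
)
```

Because a tree is just a nested composition of such nodes, an evolutionary process can generate and vary controllers by mutating or recombining subtrees, and a human can later read the resulting tree to see why the swarm behaves as it does.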
“This is the first step towards robot swarms that automatically discover suitable swarm strategies in the wild,” says Prof. Hauert.
Notably, relieving the swarm of the need for external infrastructure and demonstrating that the generated controllers can be understood are critical steps toward the automatic design of swarm controllers for real-world applications. In the future, robot swarms could discover suitable strategies directly in situ, autonomously and continuously adapting their behavior to changing tasks and circumstances. Future research will involve demonstrating the proposed approach in dynamic environments, as well as designing a robot swarm suited to real-world applications.
It is time to let these robots out of the laboratories.