April 9, 2013

Dynamic Adaptation in Ant Colonies and Robot Swarms

A particularly interesting engineering discipline is the area of materials and technologies that are inspired by nature, e.g. the surface of a lotus flower or robots that look and behave like animals. For example, last X-mas I got this cute little robot bug which reacts to noises and tries to walk around obstacles in its path.


Interestingly, nature is an inspiration not only in technological areas, but also in theoretical computer science, specifically in the design of algorithms. The first thing that comes to mind is of course evolutionary algorithms. Another area where ideas are borrowed from nature is swarm intelligence. One specific example of swarm intelligence is the family of so-called ant colony optimization algorithms. The problem solved by such algorithms is that a colony of ants needs to find the shortest path from its nest (N) to a food place (F), as shown below.
Initially, all ants walk around randomly until some of them stumble upon the food place, pick up some food and walk back to the nest. While moving around, the ants leave a pheromone trail. Other ants that cross such a trail tend to give up their random exploration and follow the pheromone trail instead, thereby intensifying it. On the other hand, the pheromones evaporate: the longer the path to the food place, the more time there is for the pheromones to evaporate. Thus, the percentage of ants taking the shorter path increases until eventually almost all ants use it. Using this simple trick, the ants manage to find the shortest path from the nest to the food place without any further coordination.
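
To make the mechanism a bit more tangible, here is a minimal Python sketch of the two-path setting (all numbers, e.g. path lengths, evaporation rate and deposit amount, are made up for illustration): each ant chooses a path with probability proportional to its pheromone level, deposits pheromone when it returns to the nest, and a fraction of the pheromone evaporates in every time step.

import random

# Toy two-path model: each ant picks a path with probability proportional to
# the pheromone on it, deposits pheromone when it gets back to the nest, and
# all pheromone evaporates a little in every time step.
LENGTHS = {"short": 2, "long": 5}   # one-way travel time per path (made up)
EVAPORATION = 0.05                  # fraction of pheromone lost per step
DEPOSIT = 1.0                       # pheromone left by one returning ant
N_ANTS = 100
STEPS = 2000

pheromone = {"short": 1.0, "long": 1.0}
ants = [{"path": None, "eta": 0} for _ in range(N_ANTS)]  # eta = steps until back home

for t in range(STEPS):
    for ant in ants:
        if ant["eta"] > 0:                    # still on its round trip
            ant["eta"] -= 1
            if ant["eta"] == 0:               # back at the nest: reinforce the trail
                pheromone[ant["path"]] += DEPOSIT
        else:                                 # in the nest: pick a path
            total = pheromone["short"] + pheromone["long"]
            path = "short" if random.random() < pheromone["short"] / total else "long"
            ant["path"] = path
            ant["eta"] = 2 * LENGTHS[path]    # round trip takes twice the path length
    for p in pheromone:                       # evaporation punishes the longer path
        pheromone[p] *= 1.0 - EVAPORATION

share = pheromone["short"] / sum(pheromone.values())
print(f"pheromone share on the short path after {STEPS} steps: {share:.2f}")

The positive feedback is easy to see in the code: ants on the short path return and deposit more often, while evaporation hits both trails at the same rate, so the short trail keeps gaining relative weight.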

Such randomized optimization algorithms are also appealing for coordinating robots. The immediate problem, however, is that robots have no equivalent of the ants' pheromone trails. One possible solution that works without pheromones is the so-called majority rule. The idea is that the robots travel around in teams of three, which are formed randomly in the nest. Every time a team is formed, its members take a majority vote on which path to choose; all team members adopt the elected path and travel to the food place and back. Since the teams are formed randomly, the system reaches a steady state in which all robots have agreed on one path. Similarly to the ant colony optimization algorithm, the shortest path is the one most likely to be chosen by the robot swarm, since robots on the shorter path return to the nest sooner and therefore take part in more of the votes.
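
Here is a rough Python sketch of how such a scheme could look; note that this is only a toy simulation with made-up numbers, not the exact model from our paper: robots waiting in the nest are grouped into random teams of three, each team adopts its majority opinion and travels down the elected path, and shorter round trips simply mean more frequent participation in the voting.

import random
from collections import Counter

LENGTHS = {"A": 2, "B": 5}        # round-trip time per path (A is the shorter one)
robots = [{"path": random.choice("AB"), "eta": 0} for _ in range(99)]

for t in range(3000):
    for r in robots:              # travelling robots get closer to the nest
        if r["eta"] > 0:
            r["eta"] -= 1
    in_nest = [r for r in robots if r["eta"] == 0]
    random.shuffle(in_nest)
    for i in range(0, len(in_nest) - 2, 3):          # form random teams of three
        team = in_nest[i:i + 3]
        winner = Counter(r["path"] for r in team).most_common(1)[0][0]
        for r in team:            # every member adopts the majority opinion...
            r["path"] = winner
            r["eta"] = LENGTHS[winner]               # ...and travels the elected path

print("opinions after the run:", Counter(r["path"] for r in robots))

Because path A robots are back after 2 steps while path B robots need 5, opinion A gets disproportionately many chances to win votes, which is exactly the bias towards the shorter path.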

While this is a neat algorithm for finding an optimal path without any orchestration, it has one problem: it is not adaptive. That means, if due to some change in the environment the longer path suddenly becomes the shorter one, the robot swarm will stick to its chosen path and not switch to the new shorter one. This is because there is no robot left that could seed a drift towards the new path.

We investigated two possibilities to make the majority rule adaptive. The first one is called the Switch scheme: if a team is unanimous, i.e. all members already believe that the same path is the right one, the team switches with a small probability to the other path. The second approach is the so-called MinPop scheme: a small sub-population of the agents is stubborn, i.e. they take part in the voting, but they never change their own opinion. We modeled both approaches as discrete-time Markov chains (DTMCs) and fed the models into PRISM to analyze them. Below you can see the results.
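
To give an impression of the two mechanisms, here is a sketch of a single voting round under each scheme; again this is just a Python toy version, not the DTMC models we actually analyzed in PRISM, and the default switch probability and number of stubborn agents are arbitrary choices.

import random
from collections import Counter

OTHER = {"A": "B", "B": "A"}

def round_switch(opinions, p_switch=0.02):
    """Majority rule; a unanimous team flips to the other path with probability p_switch."""
    random.shuffle(opinions)
    for i in range(0, len(opinions) - 2, 3):
        team = opinions[i:i + 3]
        winner = Counter(team).most_common(1)[0][0]
        if len(set(team)) == 1 and random.random() < p_switch:
            winner = OTHER[winner]          # Switch: unanimous teams defect rarely
        opinions[i:i + 3] = [winner] * 3
    return opinions

def round_minpop(opinions, n_stubborn=5):
    """Majority rule; the first n_stubborn agents vote but never change their opinion."""
    stubborn = set(range(n_stubborn))
    order = list(range(len(opinions)))
    random.shuffle(order)
    for i in range(0, len(order) - 2, 3):
        team = order[i:i + 3]
        winner = Counter(opinions[j] for j in team).most_common(1)[0][0]
        for j in team:
            if j not in stubborn:           # MinPop: stubborn agents keep their opinion
                opinions[j] = winner
    return opinions

In both cases there is always a small source of deviating opinions left, which is exactly what is needed to seed a drift once the environment changes; the price is that some agents are then on the 'wrong' path even in a stable environment.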


On the top, you can see the efficiency of both schemes, i.e., the probability of an agent going down the right path. On the bottom, you can see the adaptation times of both schemes, i.e., how much time it takes to turn a complete A-path population into a B-path population. The main thing to observe is that there is a trade-off between efficiency and adaptivity: the more adaptive a protocol is, the less efficient it is. We also compared Switch and MinPop with respect to this trade-off, and it looks like the MinPop scheme performs better than the Switch scheme.
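
Just to make the two measures concrete, here is how they could be read off a simulated trajectory of opinion counts; this is purely illustrative (the plots above come from the PRISM analysis of the DTMC models, and the trajectory format below is an assumption of this sketch, not part of our models).

def efficiency(traj_a, n):
    """Average fraction of the n robots believing in path A, computed over a
    trajectory traj_a that lists the number of A-believers at each step
    (taken over a period in which A is the right path)."""
    return sum(traj_a) / (len(traj_a) * n)

def adaptation_time(traj_a):
    """First step at which a complete A-path population has turned into a
    complete B-path population, i.e. the number of A-believers hits zero."""
    for t, count_a in enumerate(traj_a):
        if count_a == 0:
            return t
    return None  # the swarm did not fully adapt within the observed horizon

Hooking these up to the toy simulations above would give rough, simulation-based counterparts of the curves.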

The details are described in a short paper by Erik and Pim de Vink and myself, which appeared in the pre-proceedings of QAPL 2013. I want to take this opportunity to thank Erik for pointing me to this very interesting research area.
