I recently attended the 2014 Winter Meeting of ASAB (the Association for the Study of Animal Behaviour) in London. It was a really nice conference with a focus on collective behaviour (how individuals behave in groups, how individuals influence group dynamics, and so on). There were many excellent talks. One of my favourites was delivered by Niels Dingemanse (Max Planck Institute, Germany) on interacting personalities and the importance of the social environment (which inevitably reminded me of my bumblebee projects).
Another talk closely related to what I do came from Alecia Carter (University of Cambridge). She presented results from the Namibian baboon project she and her colleagues are involved in. She collected data on several networks that are simultaneously present in the baboon group, e.g. proximity, grooming, and aggression networks. The researchers hid food items and recorded how the information spread through the group. One question the scientists asked was: which of the recorded networks would best predict the information diffusion? Surprisingly, it was the proximity network that best predicted who received the information, and when.
A second result of this study suddenly struck me: possessing information is not the same as receiving the reward connected to that information. Of course, this is completely intuitive (right?), but as a computational biologist I sometimes simplify the world a little too much. Alecia described how some individuals received the information (acquisition), while others did not. Some individuals that knew where the desired food items were went there (use), while others did not. And finally, as a consequence of the strict dominance hierarchy in baboons, a dominant individual would eat first, before subordinate individuals could get access to the remaining food items (exploit).
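To make the acquisition/use/exploit distinction concrete, here is a minimal toy sketch (entirely my own construction, not Carter et al.'s actual analysis or data): information spreads along a made-up proximity network, every informed individual goes to the food, but only the most dominant individuals at the patch actually eat.

```python
# Hypothetical toy model: the network, ranks, and parameters below
# are invented for illustration only.

# Adjacency list of an assumed proximity network (6 individuals).
proximity = {
    0: [1, 2],
    1: [0, 3],
    2: [0, 4],
    3: [1, 5],
    4: [2],
    5: [3],
}
# Assumed dominance ranks: 0 = most dominant.
rank = {0: 2, 1: 0, 2: 4, 3: 1, 4: 5, 5: 3}

def simulate(seed_individual, food_items):
    """Spread information by breadth-first search over the proximity
    network (acquisition); all informed individuals go to the food
    (use); dominants eat first until the food runs out (exploit)."""
    informed = {seed_individual}
    frontier = [seed_individual]
    while frontier:
        nxt = []
        for i in frontier:
            for j in proximity[i]:
                if j not in informed:
                    informed.add(j)
                    nxt.append(j)
        frontier = nxt
    # Exploit: queue at the patch in dominance order; only the
    # first `food_items` individuals get to eat.
    queue = sorted(informed, key=lambda i: rank[i])
    fed = queue[:food_items]
    return informed, fed

informed, fed = simulate(seed_individual=4, food_items=2)
print("informed:", sorted(informed))
print("fed:", sorted(fed))
```

In this toy run, individual 4 discovers the food and the information eventually reaches everyone, yet 4 ends up empty-handed because higher-ranking individuals eat first: possessing the information and collecting the reward come apart, which is exactly the point of the talk.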
I wonder how important this factor is for general learning models. How important are different networks that are present in parallel, and how important are dominance and hierarchies? Excited to see some computational models in the near future!