An introduction to Agent-Based Modelling in R


Rock Paper Scissors Lizard Spock

NOTE: WordPress keeps destroying my code below. In case the code looks weird, head over to the github version: http://bit.ly/20r4dJv

As part of my PhD I am using computational models to unravel the evolution of certain behaviours; in my case I am interested in the evolution of social learning. Here, I want to give a short introduction to creating a simple agent-based model (ABM) in R. When I started with my first ABM I had no clue where to begin, and scientific papers that use ABMs usually do not discuss the implementation (code-wise) either. So, here is an example of an agent-based model for individuals that play a game commonly known as Rock, Paper, Scissors. But first, what actually are ABMs? Wikipedia says that ‘an agent-based model is one of a class of computational models for simulating the actions and interactions of autonomous agents with a view to assessing their effects on the system as a whole.’ Now, let us analyse what the fundamentals of an ABM are.

What you need for an agent-based model

The minimum ingredients for an agent-based model are:

  • Agents that interact with the world around them and/or with other agents
  • A world in which the agents ‘live’ or move around
  • A set of rules that determines what every agent is allowed or has to do
  • A loop, which allows agents to act or interact repeatedly

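As a rough sketch, these ingredients translate into R code roughly like this (all names here are placeholders, not yet the model we will build):

```r
n_agents <- 2                                    # the agents
agents   <- data.frame(id=1:n_agents, state=NA)  # each agent carries some state

act <- function(agents){                         # a rule: what agents do each turn
  agents$state <- sample(1:3, nrow(agents), replace=TRUE)
  return(agents)
}

for(t in 1:100){                                 # the loop: repeated (inter)action
  agents <- act(agents)
}
```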
In our case agents are not moving around, and therefore we will not consider the second point (a world). Let us start with only two agents that play one of the three strategies (Rock, Paper, Scissors) against each other. We will define two individuals, let them choose a strategy, and then play the strategies against each other.

Creating agents is actually very simple. We need to keep track of each individual, and therefore it needs an ID. In our model each will choose a strategy, which we will associate with the ID. And finally, to make the model interesting, let us monitor the number of times an individual wins against the other. To keep track of all this we create a data.frame with the corresponding columns:

indDF <- data.frame(id=1:2, strategy=NA, num_wins=0)
indDF
##   id strategy num_wins
## 1  1       NA        0
## 2  2       NA        0

That checks off our first point: our agents are ready to play. In the next step we let them choose a strategy. As our agents will choose their strategies repeatedly, we simply create a function for it. We hand over indDF, and the function assigns a random strategy to the strategy column of the data.frame. We use numbers instead of names for the strategies, as this will make working with them easier later on.

chooseStrategy <- function(ind){
  # note: sampling without replacement, so the two agents never pick the same strategy
  strats <- sample(x=1:3, size=nrow(ind)) # 1:Paper, 2:Scissors, 3:Rock
  ind$strategy <- strats
  return(ind)
}

Let us proceed to the next step, where the agents play their strategies. Again, we create a separate function. What happens inside the function can be summarised like this: the strategies are numbered so that a higher number beats a lower one, i.e. paper:1, scissors:2, rock:3. The one exception is rock (3) losing against paper (1), so we need to identify this special case first. The number of wins of the individual with the winning strategy is then increased by one. If both individuals play the same strategy, nothing happens in this round.

playStrategy <- function(ind){
  if(ind$strategy[1]==ind$strategy[2]){
    # both play the same strategy, nothing happens
  } else {
    if(any(ind$strategy == 3) && any(ind$strategy == 1)){
      # the special case: one chose Rock and the other Paper, Paper wins
      tmp <- ind[ind$strategy==1, "id"]
      ind[tmp,"num_wins"] <- ind[tmp,"num_wins"]+1
    } else {
      # for the two other cases, the better weapon (higher number) wins:
      tmp <- which(ind[,"strategy"]==max(ind[,"strategy"]))
      ind[tmp,"num_wins"] <- ind[tmp,"num_wins"]+1
    }
  }
  return(ind)
}

Now we can let the individuals play against each other repeatedly. We are going to use a simple for loop for this and let individuals play 1000 times against each other.

# indDF <- setup()
for(i in 1:1000){
  indDF <- chooseStrategy(indDF)
  indDF <- playStrategy(indDF)
}
indDF
##   id strategy num_wins
## 1  1        2      488
## 2  2        3      512

You might have spotted a commented-out function call at the beginning of the chunk above. I wrote a small setup function that allows us to quickly recreate the data.frame we were using above — an easy way to reset the simulation.

setup <- function(){
  return(data.frame(id=1:2, strategy=NA, num_wins=0))
}

We now have a neat little model. You will find that there is not much of a difference between the two individuals, especially when you let the model run for more rounds.
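One way to make this concrete is to repeat the entire simulation many times and look at the average share of rounds won by the first player. A quick sketch (compact re-definitions of the setup, choose, and play functions from above are included so the chunk runs on its own):

```r
setup <- function() data.frame(id=1:2, strategy=NA, num_wins=0)
chooseStrategy <- function(ind){ ind$strategy <- sample(1:3, nrow(ind)); ind }
playStrategy <- function(ind){
  s <- ind$strategy
  if(s[1] != s[2]){
    # paper (1) beats rock (3); otherwise the higher number wins
    winner <- if(any(s==3) && any(s==1)) which(s==1) else which.max(s)
    ind$num_wins[winner] <- ind$num_wins[winner] + 1
  }
  ind
}

# run 100 independent simulations of 1000 rounds each
share <- replicate(100, {
  ind <- setup()
  for(i in 1:1000) ind <- playStrategy(chooseStrategy(ind))
  ind$num_wins[1] / sum(ind$num_wins)
})
mean(share) # should hover close to 0.5: neither player has an edge
```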

But say you would like to monitor what is happening throughout the simulation. We can record the process by letting the loop report individual results every turn. We will simply write the number of wins of both individuals into a two-column matrix called dat and subsequently plot the result:

rounds <- 1000
indDF <- setup()
dat <- matrix(NA, rounds, 2)
for(i in 1:rounds){
  indDF <- chooseStrategy(indDF)
  indDF <- playStrategy(indDF)
  dat[i,] <- indDF$num_wins
}

plot(dat[,1], type='l', col='#EA2E49', lwd=3, xlab='time', ylab='number of rounds won')
lines(dat[,2], col='#77C4D3', lwd=3)

[Figure ABM_1-1: number of rounds won over time for both players]

The model is running and we can observe what is happening. Now it becomes interesting: we can use the model to actually test a hypothesis. For instance: is it true that a player which never switches its strategy is more successful when it plays against another individual that randomly switches its strategy? To test this we need to adjust the strategy-choosing function.

chooseStrategy2 <- function(ind){
  strats <- sample(x=1:3, size=1) # 1:Paper, 2:Scissors, 3:Rock
  ind$strategy[2] <- strats
  return(ind)
}

Now the second individual will change its strategy randomly, while the first chooses a strategy once and then sticks with it. We will collect the results of this simulation in a vector called res2 and compare it to a simulation where both individuals switch randomly between strategies. To make the results more robust, let us repeat each simulation 100 times.

rounds <- 1000
repetitions <- 100
dat <- matrix(NA, rounds, 2)
res2 <- c()
for(j in 1:repetitions){
  indDF <- setup()
  indDF[1,"strategy"] <- sample(1:3,1)
  for(i in 1:rounds){
    indDF <- chooseStrategy2(indDF)
    indDF <- playStrategy(indDF)
    dat[i,] <- indDF$num_wins
  }
  res2 <- c(res2, which(indDF[,"num_wins"]==max(indDF[,"num_wins"])))
}

plot(dat[,1], type='l', col='blue', lwd=3, xlab='time', ylab='number of rounds won')
lines(dat[,2], col='red', lwd=3)

# for comparison, let's calculate the winning vector for the case where both players switch strategies:

res1 <- c()
for(j in 1:repetitions){
  indDF <- setup()
  for(i in 1:rounds){
    indDF <- chooseStrategy(indDF)
    indDF <- playStrategy(indDF)
    dat[i,] <- indDF$num_wins
  }
  res1 <- c(res1, which(indDF[,"num_wins"]==max(indDF[,"num_wins"])))
}

# and the winner is:
t.test(res1,res2)

##
## Welch Two Sample t-test
##
## data: res1 and res2
## t = 0.5579, df = 202, p-value = 0.5775
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.09938468 0.17781605
## sample estimates:
## mean of x mean of y
## 1.529412 1.490196

[Figure ABM_2-1: number of rounds won over time when only one player switches strategies]

At the end of the chunk I added a t-test to compare the two types of simulations. As you can see: no, it does not make a difference whether the agent changes its strategy or not. This does not take human psychology into account, of course. Nevertheless, an interesting result.
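As a side note: since res1 and res2 only record which player won each repetition (1 or 2), a two-sample proportion test is arguably a better fit than a t-test. A sketch with made-up data of the same shape as the simulation output (the numbers here are random, not the results above):

```r
# hypothetical winner vectors: index of the winning player per repetition
res1 <- sample(1:2, 100, replace=TRUE)
res2 <- sample(1:2, 100, replace=TRUE)

# compare the proportion of repetitions won by player 1 in each condition
out <- prop.test(c(sum(res1==1), sum(res2==1)), c(length(res1), length(res2)))
out$p.value
```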

Rock Paper Scissors on a network

In this second example we are going to use the same game, but this time several individuals will play against each other. In my PhD project I assume that individuals can only interact with other individuals with which they have a connection. An easy way to think about this is a network, where individuals are represented by nodes and connections by ties. We will use a simple lattice network, so individuals can only play with their direct neighbours. To add an evolutionary dynamic to the simulation, individuals that lose adopt the strategy of the winner. Have a read through the code; there are some explanations in it.

require(igraph)  # for networks
require(reshape) # to reshape the resulting data into a format ggplot2 can use
require(ggplot2) # for plotting

# size of the lattice
sidelength <- 10

# creating an empty data.frame to store data
stat <- data.frame()
# creating a lattice network using the igraph package
l <- graph.lattice(length=sidelength, dim=2)
# now every individual chooses a strategy at random
V(l)$weapon <- sample(c(1:3), size=length(V(l)), replace=T)
# for a nicer visualisation, let's colour the different options
V(l)[weapon==1]$color <- 'blue'   # Paper
V(l)[weapon==2]$color <- 'yellow' # Scissors
V(l)[weapon==3]$color <- 'green'  # Rock
# and this is what it looks like:
plot(l, layout=as.matrix(expand.grid(1:sidelength, 1:sidelength)), vertex.label=NA)

for(t in 1:1000){
  # pick a random agent ...
  from <- as.numeric(sample(V(l), 1)) # or sample(sidelength^2, 1)
  # who are its neighbours?
  nei <- neighbors(l, v=from, mode='all')

  # if there is only one weapon type left
  if(length(unique(V(l)$weapon))==1){
    # we can either stop the simulation
    stop(paste(c('Paper','Scissors','Rock')[unique(V(l)$weapon)], 'has won the game after', t, 'rounds!'))
    # or we let the selected individual choose a different strategy to keep the dynamics going
    # V(l)$weapon[from] <- sample((1:3)[1:3!=as.numeric(V(l)$weapon[from])], 1)
  } else {
    # ... and one of its neighbours
    to <- sample(nei, 1)
    fromto <- c(from, to)
    w <- as.numeric(V(l)$weapon[fromto])
    # if both choices are equal, nothing happens:
    if(w[1]==w[2]){} else {
      # in the case that one chooses Rock and the other Paper, Paper wins:
      if(max(w)==3 && min(w)==1){
        V(l)$weapon[fromto[w==3]] <- "1"
      } else {
        # for the two other cases, the better weapon wins:
        V(l)$weapon[fromto[w==min(w)]] <- V(l)$weapon[fromto[w==max(w)]]
      }
    }
  }
  # let's record the abundance of each strategy
  stat <- rbind(stat, c(sum(V(l)$weapon=="1"), sum(V(l)$weapon=="2"), sum(V(l)$weapon=="3")))
  # plot(l, layout=as.matrix(expand.grid(1:sidelength, 1:sidelength)), vertex.label=NA)
}

names(stat) <- c("Paper","Scissors","Rock")
s <- melt(stat)
s$time <- 1:nrow(stat)
ggplot(data=s, mapping=aes(x=time, y=value, col=variable)) + geom_line() + theme_bw()

ABM_3-1

ABM_3-2

What we observe are interesting dynamics between the three strategies. For the limited number of rounds that we let the model run, all strategies coexist. However, sometimes one strategy disappears, which will lead to the win of the strategy that loses against it. For example, if paper disappears, rock should win in the long run. You can now experiment with what would happen in a smaller or bigger network, or even a different network type. The igraph package offers many possibilities here.
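For instance, either of the following networks could replace the lattice `l` above, since the simulation only relies on `neighbors()` (I use the older igraph function names to match the code above; newer igraph versions call these `sample_gnp()` and `sample_smallworld()`):

```r
require(igraph)

# a random (Erdős–Rényi) graph with 100 nodes and a 5% tie probability
g_random <- erdos.renyi.game(100, p=0.05)

# a small-world (Watts–Strogatz) network: a ring lattice with random rewiring
g_smallworld <- watts.strogatz.game(dim=1, size=100, nei=2, p=0.05)

# the rest of the simulation works unchanged, e.g.:
neighbors(g_random, v=1, mode='all')
```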

### And what about Spock?
The model we created so far can be used to investigate, for example, epidemic dynamics: how do information, rumours, and ideas spread through a network? When we think of strategies spreading through a network, we might want to add another strategy and see how a four-strategy game differs from a three-strategy game. Let us add Spock from __Rock, Paper, Scissors, Lizard, Spock__ (see for example [here](http://www.samkass.com/theories/RPSSL.html)), which you might have heard of from [The Big Bang Theory](https://www.google.it/url?sa=i&rct=j&q=&esrc=s&source=images&cd=&ved=0CAMQjxxqFQoTCKy_hIfd4cYCFcjtFAodxhAAvA&url=http%3A%2F%2Fwww.fanpop.com%2Fclubs%2Fthe-big-bang-theory%2Fimages%2F15090520%2Ftitle%2Frock-paper-scissors-lizard-spock-fanart&ei=7rmoVenrNM2f7gaU0I6QBQ&bvm=bv.98197061,d.ZGU&psig=AFQjCNHLCUOTpBDCfNDHRBHWLmg-4Wlipg&ust=1437207412361054&cad=rja).

```r
# size of the lattice
sidelength <- 10
# creating an empty data.frame to store data
stat <- data.frame()
# creating a lattice network using the igraph package
l <- graph.lattice(length=sidelength, dim=2)
# now every individual chooses a strategy at random
# (Spock is coded as 2.9: under the ordering rule he beats Paper and Scissors but loses to Rock)
V(l)$weapon <- sample(c(1, 2, 2.9, 3), size=length(V(l)), replace=T)
# for a nicer visualisation, let's colour the different options
V(l)[weapon==1]$color <- 'blue'     # Paper
V(l)[weapon==2]$color <- 'yellow'   # Scissors
V(l)[weapon==3]$color <- 'green'    # Rock
V(l)[weapon==2.9]$color <- 'purple' # Spock
# and this is what it looks like:
plot(l, layout=as.matrix(expand.grid(1:sidelength, 1:sidelength)), vertex.label=NA)
```

Let us have a look at how our network looks with four strategies:

ABM_4-1

Finally, let us run the slightly altered model.

for(t in 1:2500){
  # pick a random agent ...
  from <- as.numeric(sample(V(l), 1)) # or sample(sidelength^2, 1)
  # who are its neighbours?
  nei <- neighbors(l, v=from, mode='all')

  # if there is only one weapon type left
  if(length(unique(V(l)$weapon))==1){
    # we can either stop the simulation (a named lookup, as Spock's value 2.9 is no valid index)
    stop(paste(c('1'='Paper','2'='Scissors','2.9'='Spock','3'='Rock')[as.character(unique(V(l)$weapon))],
               'has won the game after', t, 'rounds!'))
    # or we let the selected individual choose a different strategy to keep the dynamics going
    # V(l)$weapon[from] <- sample(c(1,2,2.9,3)[c(1,2,2.9,3)!=as.numeric(V(l)$weapon[from])], 1)
  } else {
    # ... and one of its neighbours
    to <- sample(nei, 1)
    fromto <- c(from, to)
    w <- as.numeric(V(l)$weapon[fromto])
    # if both choices are equal, nothing happens:
    if(w[1]==w[2]){} else {
      # in the case that one chooses Rock and the other Paper, Paper wins:
      if(max(w)==3 && min(w)==1){
        V(l)$weapon[fromto[w==3]] <- "1"
      } else {
        # for the other cases, the better weapon wins:
        V(l)$weapon[fromto[w==min(w)]] <- V(l)$weapon[fromto[w==max(w)]]
      }
    }
  }
  # let's record the abundance of each strategy
  stat <- rbind(stat, c(sum(V(l)$weapon=="1"), sum(V(l)$weapon=="2"), sum(V(l)$weapon=="2.9"), sum(V(l)$weapon=="3")))
  # you can also re-colour and plot each network configuration:
  # V(l)[weapon==1]$color <- 'blue'     # Paper
  # V(l)[weapon==2]$color <- 'yellow'   # Scissors
  # V(l)[weapon==3]$color <- 'green'    # Rock
  # V(l)[weapon==2.9]$color <- 'purple' # Spock
  # plot(l, layout=as.matrix(expand.grid(1:sidelength, 1:sidelength)), vertex.label=NA)
}

names(stat) <- c("Paper","Scissors","Rock","Spock")
s <- melt(stat)
s$time <- 1:nrow(stat)
ggplot(data=s, mapping=aes(x=time, y=value, col=variable)) + geom_line() + theme_bw()

ABM_4-2
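A final remark: coding Spock as 2.9 is a neat trick, but it does not implement Spock's actual rule set (in the real game, Paper disproves Spock and Spock vaporises Rock). A cleaner, more general way to decide the winner — this is a hypothetical sketch, not the code used above — is a dominance matrix, which also extends naturally to the full five-strategy game with Lizard:

```r
strategies <- c("Paper","Scissors","Rock","Lizard","Spock")
# beats[a, b] is TRUE if strategy a beats strategy b
beats <- matrix(FALSE, 5, 5, dimnames=list(strategies, strategies))
beats["Paper",    c("Rock","Spock")]      <- TRUE # Paper covers Rock, disproves Spock
beats["Scissors", c("Paper","Lizard")]    <- TRUE # Scissors cut Paper, decapitate Lizard
beats["Rock",     c("Lizard","Scissors")] <- TRUE # Rock crushes Lizard and Scissors
beats["Lizard",   c("Spock","Paper")]     <- TRUE # Lizard poisons Spock, eats Paper
beats["Spock",    c("Scissors","Rock")]   <- TRUE # Spock smashes Scissors, vaporises Rock

winner <- function(a, b){ # returns 1, 2, or 0 for a tie
  if(beats[a, b]) 1 else if(beats[b, a]) 2 else 0
}

winner("Spock", "Rock")   # 1: Spock vaporises Rock
winner("Paper", "Lizard") # 2: Lizard eats Paper
```

With this lookup, the winner no longer depends on any numeric ordering of the strategies, so adding or removing strategies only means editing the matrix.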

I hope this introduction was helpful and allows you to come up with your own ideas for agent-based models. Share and post your versions of the code in the comments if you like.