2.15. Lecture 14: Agent Based Models
Before this class you should:
Read Think Complexity, Chapter 9
Before next class you should:
Read Think Complexity, Chapter 10
Note Taker: Christina Tissington
2.15.1. Agent-Based Models
- Overview
Agent-based models (ABMs) resemble cellular automata (CAs) in the sense that both are systems governed by simple rules. It is important to note, however, that they are not the same. ABMs are more general than CAs: the agents in a model can differ from one another, and ABMs usually include an element of randomness, whereas CAs are more deterministic. Although the course material will subclass 2D arrays in the ABM examples, much like the examples we covered for CAs, ABMs are not a subset of CAs.
- Agent-Based Models vs. Cellular Automata
Agent-based models include randomness through the actions and decisions of the individual agents within the environment. This allows for very diverse outcomes, even in scenarios with the same initial conditions. An example of this is simulating traffic: each car is an agent and may randomly decide to change lanes based on certain probability parameters (e.g., other traffic, speed, location relative to other cars), which can result in varied traffic patterns, as the sketch below illustrates. Cellular automata, by contrast, follow deterministic behaviour based on rigid rules. Randomness can be included in these models in the form of initial conditions or modified rules of the simulation. An example is Conway's Game of Life, which follows deterministic rules for cell state changes but can incorporate randomness through the initial placement of cells. In summary, ABMs tend to rely on randomly determined behaviour (to an extent), whereas CAs tend to maintain determinism but can still display randomness.
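As a small illustration, here is a minimal sketch of a probabilistic lane-change decision (not from the course material; the function name and probability value are purely illustrative):

```python
import random

def maybe_change_lane(lane, num_lanes, p_change=0.2):
    # a hypothetical agent decision: with probability p_change, move one
    # lane left or right, staying on the road
    if random.random() < p_change:
        lane = min(max(lane + random.choice([-1, 1]), 0), num_lanes - 1)
    return lane

# ten cars all starting in lane 1 end up spread across the lanes;
# identical initial conditions, different outcomes on every run
print([maybe_change_lane(1, 3) for _ in range(10)])
```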
- Agents
Agents are intended to model people and other entities. Their purpose is to gather information about the world, make decisions, and take actions; a minimal sketch of this loop appears below. Agents can be connected through edges, which represent the connections or relationships between agents within a network, and they often interact with each other locally. Agents can move around and change their location in the world. They have imperfect local information, also known as "bounded rationality": people act based on the information they have access to, but that information is bounded (or incomplete). For example, I may answer a question about university life using my experiences at the University of Guelph; however, I could not answer from the perspective of a University of Waterloo student, since my knowledge of that university is limited.
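A minimal sketch of this gather-decide-act loop (illustrative only; the class and names below are not from the course material):

```python
import random

class Agent:
    def __init__(self, position, vision=1):
        self.position = position  # index into a 1-D world
        self.vision = vision      # how far the agent can see

    def step(self, world):
        # gather information: bounded rationality means the agent sees
        # only the cells within its vision, not the whole world
        lo = max(0, self.position - self.vision)
        hi = min(len(world), self.position + self.vision + 1)
        # make a decision based on that incomplete information
        best = max(range(lo, hi), key=lambda i: world[i])
        # take an action: move to the best cell the agent could see
        self.position = best

world = [random.random() for _ in range(20)]
agent = Agent(position=10, vision=2)
agent.step(world)  # may miss a better cell it could not see
```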
2.15.2. Examples of Agent Based Models
- IIROC
The Investment Industry Regulatory Organization of Canada (IIROC) is the self-regulatory organization that oversees Canadian equity markets. It used ABMs to model low- and high-frequency trading, which showed how such trading can cause flash crashes in the market.
- Ecology
There are various use cases for ABMs in ecology. For example, agents can represent individual animals, and the model can explore how they behave in various environments.
- Flame GPU
FLAME (Flexible Large-scale Agent Modelling Environment) GPU utilizes various ABM building blocks and can be used as a visualization tool. An example is using FLAME GPU to create a pedestrian crowd-control simulation for the design of public spaces. It simulates people leaving a sports stadium and lets you easily see whether people can safely and comfortably filter out after a game finishes. It shows where the congested spaces are, which allows cities to redesign their public spaces to improve the safe flow of walking traffic.
- AgentTorch
AgentTorch is a differentiable agent-based modelling framework built on PyTorch. This tool allows you to set parameters for the agents in the model.
2.15.3. Parable of the Polygons Recap
The Parable of the Polygons is a model of racial segregation. It is an ABM with two groups of different colours. At any point, an agent (a shape of one colour) is in one of two states: happy or unhappy. This state is determined by the agent's neighbours (the 8 cells adjacent to the agent). The agent is happy if two or more of its neighbours are the same colour as itself, and unhappy otherwise. Observing this model, it becomes clear that clusters of similar agents appear, grow, and merge in the environment. When you take a step back, any individual agent would be happy in a mixed neighbourhood; they just don't want to be completely surrounded by agents unlike themselves. Yet observing the behaviour of the agents in the model provides a strong argument about the system and segregation: mild individual preferences can produce strong collective segregation.
Model Class
The following class is a fundamental component of the Schelling segregation model. The model class simulates the behaviour of agents within a grid-based environment; we will use it to model the Parable of the Polygons. By adjusting the arrangement of agents based on their preferences, the model demonstrates how even slight individual biases can lead to the emergence of segregated patterns. The Schelling class provides valuable insight into what drives segregation in society.
- n = side length of the square grid
- p = the fraction of like-coloured neighbours an agent requires to be happy (a lower p means the agent is less picky about its neighbours)
- probs = the initial probabilities of the cell states in the world (10% of cells will be empty, 45% red, 45% blue)
```python
import numpy as np

class Schelling(Cell2D):  # Cell2D is provided by the Think Complexity code
    def __init__(self, n, p):
        self.p = p
        # 0 is empty, 1 is red, 2 is blue
        choices = np.array([0, 1, 2], dtype=np.int8)
        probs = [0.1, 0.45, 0.45]
        self.array = np.random.choice(choices, (n, n), p=probs)
```
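For example, a 10×10 world where agents want at least 30% like-coloured neighbours (the values here are illustrative, not prescribed by the notes):

```python
grid = Schelling(n=10, p=0.3)
print(grid.array)  # a 10x10 array of 0s (empty), 1s (red), and 2s (blue)
```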
Step Function
In the Schelling class, the step function updates the positions of the agents based on their preferences. During each step of the simulation, agents assess their current neighbourhood and determine whether they are happy. If an agent is unhappy (it does not have enough similar neighbours), it moves to an empty cell.
```python
from scipy.signal import correlate2d

a = self.array
red = a == 1
blue = a == 2
empty = a == 0

options = dict(mode='same', boundary='wrap')
kernel = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]], dtype=np.int8)
num_red = correlate2d(red, kernel, **options)
num_blue = correlate2d(blue, kernel, **options)
num_neighbors = num_red + num_blue
```

The kernel counts the neighbours. Correlating the kernel with the red and blue arrays gives the number of red and blue neighbours of each cell.
```python
frac_red = num_red / num_neighbors
frac_blue = num_blue / num_neighbors
frac_same = np.where(red, frac_red, frac_blue)
frac_same[empty] = np.nan
```

These fractions determine whether the agent in a cell is unhappy. The where() function is similar to an elementwise if statement: where there is a red agent, frac_same takes the value from frac_red; everywhere else it takes the value from frac_blue. Anywhere the cell is empty, frac_same is set to np.nan (similar to null), so empty cells are not considered happy or unhappy.
```python
def locs_where(condition):
    return list(zip(*np.nonzero(condition)))

unhappy = frac_same < self.p
unhappy_locs = locs_where(unhappy)
empty_locs = locs_where(empty)
```

- unhappy: a boolean array with True at each index where an agent is unhappy
- unhappy_locs: a list of tuples pointing to the locations where agents are unhappy
- empty_locs: a list of tuples pointing to the locations of empty cells
The following code block moves each unhappy agent to one of the empty cells.
```python
num_empty = np.sum(empty)
for source in unhappy_locs:
    i = np.random.randint(num_empty)
    dest = empty_locs[i]
    a[dest] = a[source]
    a[source] = 0
    empty_locs[i] = source
```

This code block is the core of the whole simulation. Each unhappy agent is assigned a new location chosen at random from the set of empty locations; its previous location is then set to empty, since the agent has moved, and takes the chosen cell's place in the pool of empty locations.
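Putting the pieces together, one possible driver loop (a sketch; draw() is the display method provided by the Cell2D base class in the Think Complexity code, and the parameter values are illustrative):

```python
grid = Schelling(n=100, p=0.3)
for _ in range(20):
    grid.step()
grid.draw()  # clusters of like-coloured agents should now be visible
```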
2.15.4. Sugarscape
This experiment represents an artificial society and models a simple economy. The rules of the world are that agents move around a 2D grid, harvesting and accumulating sugar; this sugar represents wealth. Some parts of the grid produce more sugar than others, and some agents are better at finding sugar than others. Sugarscape is a great simulation for exploring the distribution of wealth.
The world shows where sugar can be found, and red dots represent agents. Each cell has a capacity: the maximum amount of sugar it can hold. There are two high-sugar regions with a capacity of 4 units, surrounded by rings of decreasing capacity. The agents are initialized at random locations on the grid. Each agent has three randomly chosen attributes: starting sugar, metabolism, and vision. Starting sugar is drawn from a uniform distribution between 5 and 25. An agent's metabolism is how much sugar it must consume each step to avoid dying; it is drawn from a uniform distribution between 1 and 4. An agent's vision is how far it can see to identify where the sugar heaps are; an agent must be able to see a cell to move there. Vision is drawn uniformly from the integers 1 to 6.
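A minimal sketch (not the book's exact code) of how these attributes could be drawn from the stated distributions:

```python
import numpy as np

class Agent:
    def __init__(self):
        self.sugar = np.random.uniform(5, 25)      # starting sugar, 5 to 25
        self.metabolism = np.random.uniform(1, 4)  # sugar consumed per step
        self.vision = np.random.randint(1, 7)      # cells visible, 1 to 6
```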
Rules
The rules of the model are that the agents move one at a time in a random order and execute the following behaviour. An agent first surveys k cells, where k is the range of the agent's vision. The agent then chooses the unoccupied cell it can see with the most sugar, moves to that cell, and harvests all the sugar there, leaving the cell empty. The agent then consumes the amount of sugar its metabolism requires; if it can't consume enough sugar, it dies and is removed from the grid. After all the agents have completed their "turn", each cell grows back one unit of sugar (never beyond its capacity).
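A hedged sketch of one agent's turn and the grow-back step (not the book's exact code; cells stands for the list of unoccupied locations the agent can see, and sugar for a 2-D array holding the sugar in each cell):

```python
import numpy as np

def agent_turn(agent, cells, sugar):
    dest = max(cells, key=lambda c: sugar[c])  # the visible cell with the most sugar
    agent.position = dest                      # move there
    agent.sugar += sugar[dest]                 # harvest all the sugar
    sugar[dest] = 0                            # the cell is left empty
    agent.sugar -= agent.metabolism            # consume what the agent needs
    return agent.sugar >= 0                    # False: the agent starves and is removed

def grow_back(sugar, capacity):
    # after all agents have moved, every cell regrows one unit of sugar,
    # never beyond its capacity
    return np.minimum(sugar + 1, capacity)
```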
Behaviour
Even after only two time steps, most agents move towards the big mounds of sugar. Agents with high vision move fastest towards the large sugar mounds, while agents with average vision tend to get stuck on plateaus. Agents with low vision cannot see where the large sugar mounds are, so they tend to wander randomly and die quickly. Agents born in areas with less sugar tend to starve unless they have high vision and low metabolism.
The model demonstrates wealth inequality; however, it doesn't reach a steady state unless each agent is given a lifespan. You can assign each agent a random lifespan between 60 and 100 steps upon creation; when an agent dies, it is replaced with a new agent to keep the population constant. With this change, after around five hundred steps the distribution of wealth no longer changes much. Most of the population ends up very poor, but a small handful of agents accumulate an exceptionally large amount of wealth: a heavy-tailed distribution. The model resembles the real world, where only a small handful of people hold the majority of the wealth.
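A sketch of the lifespan mechanism (illustrative; MortalAgent extends the hypothetical Agent class from the sketch above):

```python
import numpy as np

class MortalAgent(Agent):
    def __init__(self):
        super().__init__()
        self.age = 0
        self.lifespan = np.random.randint(60, 101)  # 60 to 100 steps

def replace_dead(agents):
    # age every agent one step; replace those past their lifespan with
    # new agents so the population stays constant
    for a in agents:
        a.age += 1
    return [a if a.age < a.lifespan else MortalAgent() for a in agents]
```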
Implementation
The agent class holds all of an agent's attributes (i.e., lifespan, starting sugar, metabolism, vision). If all the agents start in the lower left-hand corner of the grid, an interesting behaviour can be observed: the agents propagate up towards the sugar mounds in waves. This wave-like movement is not programmed in; it emerges from the system on its own. This type of emergent behaviour can be observed in various complex systems.