Schelling's segregation model

In this introductory example we follow the main Tutorial in parallel, building the following definition of Schelling's segregation model:

  • Agents belong to one of two groups (1 or 2).
  • The agents live in a two-dimensional grid. For each agent we care about finding all of its 8 nearest neighbors (cardinal and diagonal directions). To do this, we will create a DiscreteSpace with a Chebyshev metric, and when searching for nearby agents we will use a radius of 1 (which is also the default). This leads to 8 neighboring positions per position (except at the edges of the grid).
  • Each position of the grid can be occupied by at most one agent.
  • If an agent has at least k=3 neighbors belonging to the same group, then it is happy.
  • If an agent is unhappy, it keeps moving to new locations until it is happy.

Schelling's model shows that even small preferences of agents to have neighbors belonging to the same group (e.g. preferring that at least 3/8 of their neighbors be of the same group) can still lead to total segregation of neighborhoods.

This model is also available as Models.schelling.

Creating a space

using Agents

space = GridSpace((10, 10); periodic = false)
GridSpace with size (10, 10), metric=chebyshev, periodic=false

Notice that by default the GridSpace has metric = Chebyshev(), which is what we want. Agents existing in this type of space must have a position field that is an NTuple{2, Int}. We ensure this below.
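
As a quick, optional check (a minimal sketch that is not part of the model, using the built-in GridAgent{2} as a stand-in agent type), we can count the neighboring positions of an interior grid point and confirm that the Chebyshev metric with radius 1 gives 8 of them:

tmp_space = GridSpace((10, 10); periodic = false)
tmp_model = ABM(GridAgent{2}, tmp_space)
length(collect(nearby_positions((5, 5), tmp_model, 1))) # 8 neighboring positions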

Defining the agent type

mutable struct SchellingAgent <: AbstractAgent
    id::Int             # The identifier number of the agent
    pos::NTuple{2, Int} # The x, y location of the agent on a 2D grid
    mood::Bool          # whether the agent is happy in its position. (true = happy)
    group::Int          # The group of the agent, determines mood as it interacts with neighbors
end

We added two more fields for this model, namely a mood field, which will store true for a happy agent and false for an unhappy one, and a group field, which stores 1 or 2, representing the two groups.

Notice also that we could have taken advantage of the @agent macro (which is, in fact, the recommended approach) and defined the same agent as:

@agent SchellingAgent GridAgent{2} begin
    mood::Bool
    group::Int
end
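
As a small sketch, we can convince ourselves that both definitions produce the same fields: the @agent macro takes id and pos from GridAgent{2} and appends our own fields.

fieldnames(SchellingAgent) # (:id, :pos, :mood, :group)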

Creating an ABM

To create our model we follow the instructions in the documentation of AgentBasedModel. We also want to include a min_to_be_happy property in our model, and so we have:

properties = Dict(:min_to_be_happy => 3)
schelling = ABM(SchellingAgent, space; properties)
AgentBasedModel with 0 agents of type SchellingAgent
 space: GridSpace with size (10, 10), metric=chebyshev, periodic=false
 scheduler: fastest
 properties: min_to_be_happy

Here we used the default scheduler (which is also the fastest one) to create the model. We could instead try to activate the agents according to their property :group, so that all agents of group 1 act first. We would then use the scheduler Schedulers.by_property like so:

schelling2 = ABM(
    SchellingAgent,
    space;
    properties = properties,
    scheduler = Schedulers.by_property(:group),
)
AgentBasedModel with 0 agents of type SchellingAgent
 space: GridSpace with size (10, 10), metric=chebyshev, periodic=false
 scheduler: property
 properties: min_to_be_happy

Notice that Schedulers.by_property accepts an argument and returns a function, which is why we didn't just give Schedulers.by_property to scheduler.
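
To make this concrete, here is a small illustrative sketch (using a throwaway model that is not part of the tutorial's workflow): the object returned by Schedulers.by_property(:group) is the scheduler itself, and calling it on a model yields the agent IDs ordered by their group.

bygroup = Schedulers.by_property(:group)
demo = ABM(SchellingAgent, GridSpace((4, 4); periodic = false); properties)
add_agent_single!(SchellingAgent(1, (1, 1), false, 2), demo) # group 2
add_agent_single!(SchellingAgent(2, (1, 1), false, 1), demo) # group 1
bygroup(demo) # agent 2 (group 1) is scheduled before agent 1 (group 2)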

Creating the ABM through a function

Here we put the model instantiation in a function so that it will be easy to recreate the model and change its parameters.

In addition, inside this function, we populate the model with some agents. We also change the scheduler to Schedulers.randomly. Because the function is defined with keyword arguments, it will also be useful with paramscan below.

using Random # for reproducibility
function initialize(; numagents = 320, griddims = (20, 20), min_to_be_happy = 3, seed = 125)
    space = GridSpace(griddims, periodic = false)
    properties = Dict(:min_to_be_happy => min_to_be_happy)
    rng = Random.MersenneTwister(seed)
    model = ABM(
        SchellingAgent, space;
        properties, rng, scheduler = Schedulers.randomly
    )

    # populate the model with agents, adding equal numbers of agents from the two groups
    # at random positions in the model
    for n in 1:numagents
        agent = SchellingAgent(n, (1, 1), false, n < numagents / 2 ? 1 : 2)
        add_agent_single!(agent, model)
    end
    return model
end
initialize (generic function with 1 method)

Notice that the position with which an agent is initialized does not matter in this example. This is because we use add_agent_single!, which places the agent in a random, empty position on the grid, thus updating its position.
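
As a quick sanity check (a sketch; the name test_model is just for illustration), we can verify that no position ends up hosting more than one agent:

test_model = initialize()
# every position should contain at most one agent ID
all(length(ids_in_position(p, test_model)) ≤ 1 for p in positions(test_model)) # true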

Defining a step function

Finally, we define a step function to determine what happens to an agent when activated. For the purpose of this implementation of Schelling's segregation model, we only need an agent step function and not a model stepping function.

function agent_step!(agent, model)
    minhappy = model.min_to_be_happy
    count_neighbors_same_group = 0
    # For each neighbor, get group and compare to current agent's group
    # and increment count_neighbors_same_group as appropriate.
    # Here `nearby_agents` (with default arguments) will provide an iterator
    # over the nearby agents one grid point away, which are at most 8.
    for neighbor in nearby_agents(agent, model)
        if agent.group == neighbor.group
            count_neighbors_same_group += 1
        end
    end
    # After counting the neighbors, decide whether or not to move the agent.
    # If count_neighbors_same_group is at least the min_to_be_happy, set the
    # mood to true. Otherwise, move the agent to a random position.
    if count_neighbors_same_group ≥ minhappy
        agent.mood = true
    else
        move_agent_single!(agent, model)
    end
    return
end
agent_step! (generic function with 1 method)

When defining agent_step!, we used some of the built-in functions of Agents.jl: nearby_agents, which (with its default radius of 1) iterates over the agents in the up-to-8 surrounding positions, and move_agent_single!, which moves an agent to a random empty position on the grid. Related functions such as nearby_positions, which returns the positions neighboring the one the agent resides on, and ids_in_position, which returns the IDs of the agents in a given position, are also available. A full list of built-in functions and their explanations is available in the API page.
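
For illustration, here is an equivalent, more verbose way to count same-group neighbors, sketched with nearby_positions and ids_in_position instead of nearby_agents (the helper name count_same_group is ours, not part of the library):

function count_same_group(agent, model)
    c = 0
    # visit each neighboring position, then each agent ID found there
    for pos in nearby_positions(agent, model)
        for id in ids_in_position(pos, model)
            c += model[id].group == agent.group
        end
    end
    return c
end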

Stepping the model

Let's initialize the model with 320 agents on a 20 by 20 grid.

model = initialize()
AgentBasedModel with 320 agents of type SchellingAgent
 space: GridSpace with size (20, 20), metric=chebyshev, periodic=false
 scheduler: randomly
 properties: min_to_be_happy

We can advance the model one step:

step!(model, agent_step!)

Or for three steps:

step!(model, agent_step!, 3)
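
step! also accepts a function in place of the number of steps, which acts as a stopping condition. As a sketch (the 90% threshold and the name enough_happy are arbitrary, for illustration only), we can keep stepping until most agents are happy:

# step until at least 90% of the agents are happy
enough_happy(model, s) = count(a.mood for a in allagents(model)) / nagents(model) ≥ 0.9
step!(model, agent_step!, dummystep, enough_happy)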

Visualizing the data

There is a dedicated tutorial for visualization, animation, and interaction for agent based models. See Visualizations and Animations for Agent Based Models.

We can use the abmplot function to plot the distribution of agents on a 2D grid at every generation, via the InteractiveDynamics.jl package and the Makie.jl plotting ecosystem.

Let's color the two groups orange and blue and make one a square and the other a circle.

using InteractiveDynamics
using CairoMakie # choosing a plotting backend

groupcolor(a) = a.group == 1 ? :blue : :orange
groupmarker(a) = a.group == 1 ? :circle : :rect
figure, _ = abmplot(model; ac = groupcolor, am = groupmarker, as = 10)
figure # returning the figure displays it

Animating the evolution

The function abmvideo can be used to save an animation of the ABM into a video. You could of course also explicitly use abmplot in a record loop for finer control over additional plot elements.

model = initialize();
abmvideo(
    "schelling.mp4", model, agent_step!;
    ac = groupcolor, am = groupmarker, as = 10,
    framerate = 4, frames = 20,
    title = "Schelling's segregation model"
)

Collecting data during time evolution

We can use the run! function with keywords to run the model for multiple steps, collect values of our desired fields from every agent, and put these data in a DataFrame object. We define a vector of Symbols for the agent fields that we want to collect as data:

adata = [:pos, :mood, :group]

model = initialize()
data, _ = run!(model, agent_step!, 5; adata)
data[1:10, :] # print only a few rows

10 rows × 5 columns

 Row │ step   id     pos       mood   group
     │ Int64  Int64  Tuple…    Bool   Int64
─────┼──────────────────────────────────────
   1 │     0      1  (14, 19)      0      1
   2 │     0      2  (14, 10)      0      1
   3 │     0      3  (7, 11)       0      1
   4 │     0      4  (1, 16)       0      1
   5 │     0      5  (2, 13)       0      1
   6 │     0      6  (6, 3)        0      1
   7 │     0      7  (20, 14)      0      1
   8 │     0      8  (14, 5)       0      1
   9 │     0      9  (15, 20)      0      1
  10 │     0     10  (15, 14)      0      1

We could also use functions in adata, for example we can define

x(agent) = agent.pos[1]
model = initialize()
adata = [x, :mood, :group]
data, _ = run!(model, agent_step!, 5; adata)
data[1:10, :]

10 rows × 5 columns

 Row │ step   id     x      mood   group
     │ Int64  Int64  Int64  Bool   Int64
─────┼───────────────────────────────────
   1 │     0      1     14      0      1
   2 │     0      2     14      0      1
   3 │     0      3      7      0      1
   4 │     0      4      1      0      1
   5 │     0      5      2      0      1
   6 │     0      6      6      0      1
   7 │     0      7     20      0      1
   8 │     0      8     14      0      1
   9 │     0      9     15      0      1
  10 │     0     10     15      0      1

With the above adata vector we collected the data of every agent. We can instead collect aggregated data for the agents. For example, let's only get the number of happy individuals and the average of the "x" coordinate (not very interesting, but anyway!)

using Statistics: mean
model = initialize();
adata = [(:mood, sum), (x, mean)]
data, _ = run!(model, agent_step!, 5; adata)
data

6 rows × 3 columns

 Row │ step   sum_mood  mean_x
     │ Int64  Int64     Float64
─────┼──────────────────────────
   1 │     0         0  10.4437
   2 │     1       199  10.4531
   3 │     2       255  10.675
   4 │     3       288  10.5844
   5 │     4       301  10.7219
   6 │     5       308  10.7063

Other examples in the documentation are more realistic, with more meaningful collected data. Don't forget to use the function dataname to access the columns of the resulting dataframe by name.
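
For instance (a quick sketch, using the adata defined just above):

dataname((x, mean)) # "mean_x"
data[!, dataname((x, mean))] # the same column as data[!, "mean_x"]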

Launching the interactive application

Given the definitions we have already created for plotting and animating the ABM, it is almost trivial to launch an interactive application for it through the function abm_data_exploration.

We define a dictionary that maps some model-level parameters to a range of potential values, so that we can interactively change them.

parange = Dict(:min_to_be_happy => 0:8)
Dict{Symbol, UnitRange{Int64}} with 1 entry:
  :min_to_be_happy => 0:8

We also define the data we want to collect and interactively explore, along with some labels for them to get shorter names (the defaults can get long):

adata = [(:mood, sum), (x, mean)]
alabels = ["happy", "avg. x"]

model = initialize(; numagents = 300) # fresh model, no one happy
AgentBasedModel with 300 agents of type SchellingAgent
 space: GridSpace with size (20, 20), metric=chebyshev, periodic=false
 scheduler: randomly
 properties: min_to_be_happy
using GLMakie # using a different plotting backend that enables interactive plots

figure, adf, mdf = abm_data_exploration(
    model, agent_step!, dummystep, parange;
    ac = groupcolor, am = groupmarker, as = 10,
    adata, alabels
)

Saving/loading the model state

It is often useful to save a model after running it, so that multiple branching scenarios can be simulated from that point onwards. For example, once most of the population is happy, let's see what happens if some more agents occupy the empty cells. The new agents could all be of one group, or belong to a third, new, group. Simulating this needs multiple copies of the model. Agents.jl provides the functions AgentsIO.save_checkpoint and AgentsIO.load_checkpoint to save and load models to JLD2 files respectively.

First, let's create a model with 200 agents and run it for 40 iterations.

model = initialize(numagents = 200, min_to_be_happy = 5, seed = 42)
run!(model, agent_step!, 40)

figure, _ = abmplot(model; ac = groupcolor, am = groupmarker, as = 10)
figure

Most of the agents have settled happily. Now, let's save the model.

AgentsIO.save_checkpoint("schelling.jld2", model)

Note that we can now leave the REPL, and come back later to run the model, right from where we left off.

model = AgentsIO.load_checkpoint("schelling.jld2"; scheduler = Schedulers.randomly)

Since functions are not saved, the scheduler has to be passed while loading the model. Let's now verify that we loaded back exactly what we saved.

figure, _ = abmplot(model; ac = groupcolor, am = groupmarker, as = 10)
figure

For starters, let's see what happens if we add 100 more agents of group 1:

for i in 1:100
    agent = SchellingAgent(nextid(model), (1, 1), false, 1)
    add_agent_single!(agent, model)
end

Let's see what our model looks like now.

figure, _ = abmplot(model; ac = groupcolor, am = groupmarker, as = 10)
figure

And then run it for 40 iterations.

run!(model, agent_step!, 40)

figure, _ = abmplot(model; ac = groupcolor, am = groupmarker, as = 10)
figure

It looks like the agents eventually cluster again. What if the agents are of a new group? We can start by loading the model back in from the file, thus resetting the changes we made.

model = AgentsIO.load_checkpoint("schelling.jld2"; scheduler = Schedulers.randomly)

for i in 1:100
    agent = SchellingAgent(nextid(model), (1, 1), false, 3)
    add_agent_single!(agent, model)
end

To visualize the model, we need to redefine groupcolor and groupmarker to handle a third group.

groupcolor(a) = (:blue, :orange, :green)[a.group]
groupmarker(a) = (:circle, :rect, :cross)[a.group]

figure, _ = abmplot(model; ac = groupcolor, am = groupmarker, as = 10)
figure

The new agents are scattered randomly, as expected. Now let's run the model.

run!(model, agent_step!, 40)

figure, _ = abmplot(model; ac = groupcolor, am = groupmarker, as = 10)
figure

The new agents also form their own clusters, despite being completely scattered. It's also interesting to note that there is minimal rearrangement among the existing groups. The new agents simply occupy the remaining space.

Ensembles and distributed computing

We can run ensemble simulations and collect the output of every member in a single DataFrame. To that end we use the ensemblerun! function. The function accepts a Vector of ABMs, each (typically) initialized with a different seed and/or agent distribution. For example we can do

models = [initialize(seed = x) for x in rand(UInt8, 3)];

and then

adf, = ensemblerun!(models, agent_step!, dummystep, 5; adata)
adf[(end - 10):end, :]

11 rows × 4 columns

 Row │ step   sum_mood  mean_x   ensemble
     │ Int64  Int64     Float64  Int64
─────┼────────────────────────────────────
   1 │     1       201  10.4594         2
   2 │     2       273  10.6031         2
   3 │     3       297  10.3562         2
   4 │     4       307  10.5031         2
   5 │     5       309  10.4531         2
   6 │     0         0  10.5625         3
   7 │     1       208  10.6219         3
   8 │     2       273  10.8187         3
   9 │     3       296  10.8125         3
  10 │     4       303  11.0563         3
  11 │     5       311  10.9719         3

It is possible to run the ensemble in parallel. For that, we should start Julia with julia -p n, where n is the number of processing cores. Alternatively, we can add worker processes from within a Julia session:

using Distributed
addprocs(4)

For distributed computing to work, all definitions must be preceded with @everywhere, e.g.

using Distributed
@everywhere using Agents
@everywhere mutable struct SchellingAgent ...
@everywhere agent_step!(...) = ...

Then we can tell the ensemblerun! function to run the ensemble in parallel using the keyword parallel = true:

adf, = ensemblerun!(models, agent_step!, dummystep, 5; adata, parallel = true)

Scanning parameter ranges

We are often interested in the effect of different parameters on the behavior of an agent-based model. Agents.jl provides the function paramscan to automatically explore the effect of different parameter values.

We have already defined our model initialization function as initialize. We now also define a processing function that returns the fraction of happy agents:

happyperc(moods) = count(moods) / length(moods)
adata = [(:mood, happyperc)]

parameters = Dict(
    :min_to_be_happy => collect(2:5), # expanded
    :numagents => [200, 300],         # expanded
    :griddims => (20, 20),            # not Vector = not expanded
)

adf, _ = paramscan(parameters, initialize; adata, agent_step!, n = 3)
adf

32 rows × 4 columns

 Row │ step   happyperc_mood  min_to_be_happy  numagents
     │ Int64  Float64         Int64            Int64
─────┼───────────────────────────────────────────────────
   1 │     0        0.0                     2        200
   2 │     1        0.575                   2        200
   3 │     2        0.835                   2        200
   4 │     3        0.9                     2        200
   5 │     0        0.0                     3        200
   6 │     1        0.3                     3        200
   7 │     2        0.53                    3        200
   8 │     3        0.67                    3        200
   9 │     0        0.0                     4        200
  10 │     1        0.14                    4        200
  11 │     2        0.305                   4        200
  12 │     3        0.385                   4        200
  13 │     0        0.0                     5        200
  14 │     1        0.015                   5        200
  15 │     2        0.055                   5        200
  16 │     3        0.09                    5        200
  17 │     0        0.0                     2        300
  18 │     1        0.816667                2        300
  19 │     2        0.95                    2        300
  20 │     3        0.98                    2        300
  21 │     0        0.0                     3        300
  22 │     1        0.596667                3        300
  23 │     2        0.83                    3        300
  24 │     3        0.916667                3        300
  25 │     0        0.0                     4        300
  26 │     1        0.35                    4        300
  27 │     2        0.57                    4        300
  28 │     3        0.75                    4        300
  29 │     0        0.0                     5        300
  30 │     1        0.13                    5        300
  31 │     2        0.306667                5        300
  32 │     3        0.406667                5        300

We can clearly see that the larger :min_to_be_happy is, the slower the convergence to "total happiness".