ABM Framework Comparison

Many agent-based modeling frameworks have been constructed to ease the process of building and analyzing ABMs (see here for a review). Notable examples are NetLogo, Repast, MASON, and Mesa.

In this page we compare Agents.jl with Mesa, NetLogo and MASON, to assess where Agents.jl excels and where it may need future improvement. We used the following models for the comparison:

  • Predator-prey dynamics (Wolf Sheep Grass), a GridSpace model, which requires agents to be added, removed, and moved, and to identify properties of neighbouring positions (see the sketch after this list).
  • The Flocking model (Flocking), a ContinuousSpace model, chosen over other models so that a MASON benchmark could be included. Agents must move over the space in accordance with social rules.
  • The Forest fire model, which provides a comparison for cellular-automaton-type ABMs (i.e. where agents do not move). NOTE: The Agents.jl implementation of this model was changed in v4.0 to be directly comparable to Mesa and NetLogo. As a consequence it no longer follows the original rule set.
  • Schelling's segregation model (Schelling), an additional GridSpace model to compare with MASON, with simpler rules than Wolf Sheep Grass.
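
To make concrete what the Wolf Sheep Grass benchmark demands of a framework, here is a minimal, hypothetical Agents.jl sketch of the space operations involved. The agent type, its fields, and the values are placeholders rather than the benchmark's actual implementation, and the syntax follows the v5-era API:

```julia
using Agents

# Hypothetical agent type standing in for Sheep/Wolf; not the benchmark code.
@agent Grazer GridAgent{2} begin
    energy::Float64
end

space = GridSpace((20, 20))
model = ABM(Grazer, space)

a = add_agent!(model; energy = 10.0)           # add at a random position
move_agent!(a, random_position(model), model)  # move
for pos in nearby_positions(a, model)          # inspect neighbouring positions
    # e.g. the grass state at `pos` would be read from model properties
end
remove_agent!(a, model)  # remove (`kill_agent!` in older versions)
```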

The results are characterised in two ways: how long each framework took to run the same scenario (initial conditions, grid size, run length, etc. are identical across all frameworks), and how many lines of code (LOC) were needed to describe each model and its dynamics. We use LOC as a proxy for the complexity of learning and working with a framework.

Time taken is presented in normalised units, measured against the runtime of Agents.jl. In other words, on different hardware the results should vary only slightly from those presented here, and anyone who repeats the runs using the scripts in the ABMFrameworkComparisons repository should obtain nearly the same relative results. For details on the parameters used for each comparison and the hardware used for the latest run, see the README and the benchmark.jl file in that repository.
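
For illustration, the normalisation works as sketched below. The function names and workloads are placeholders standing in for full model runs, not the repository's actual benchmark code:

```julia
using BenchmarkTools

# Placeholder workloads standing in for full model runs in two frameworks.
run_agents() = sum(rand(10^6))
run_other()  = sum(rand(4 * 10^6))

t_agents = @belapsed run_agents()  # reference runtime (Agents.jl = 1 unit)
t_other  = @belapsed run_other()
println("normalised time: ", round(t_other / t_agents, digits = 1), "x")
```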

For LOC, we use the following convention: code is formatted using standard practices and linting for the associated language. Documentation strings and in-line comments (residing on lines of their own) are discarded, as is any benchmark infrastructure. NetLogo is assigned two values, since its files contain both a code section and an encoding of the GUI; because many parameters live in the GUI, this must be taken into account. Thus 375 (785) in a NetLogo count means 375 lines in the code section and 785 lines in the file as a whole. An additional complication is that NetLogo stores plotting information (colours, shapes, sizes) as agent properties, so the number outside the brackets may be slightly inflated.
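
A sketch of this convention for Julia sources follows; the actual counts were made per language with that language's comment syntax, and docstring stripping is omitted here for brevity:

```julia
# Count lines of code per the convention above: blank lines and lines that
# consist only of a comment are discarded.
function loc(file::AbstractString)
    count(readlines(file)) do line
        s = strip(line)
        !isempty(s) && !startswith(s, "#")
    end
end

loc("model.jl")  # hypothetical file name
```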

| Model / Framework | Agents 5.12.0 | Mesa 1.2.0 | NetLogo 6.3.0 | MASON 21.0 |
|-------------------|---------------|------------|---------------|------------|
| Wolf Sheep Grass  | 1             | 50.2x      | 28.2x         | NA         |
| (LOC)             | 122           | 227        | 137 (871)     | NA         |
| Flocking          | 1             | 129.8x     | 25.5xᕯ        | 9.0x       |
| (LOC)             | 62            | 102        | 82 (689)      | 369        |
| Forest Fire       | 1             | 303.7x     | 222.2x        | NA         |
| (LOC)             | 23            | 35         | 43 (545)      | NA         |
| Schelling         | 1             | 209.4x     | 166.4x        | 195.9x     |
| (LOC)             | 31            | 56         | 60 (743)      | 248        |

ᕯ NetLogo's implementation differs from that of the other three frameworks here. It cheats a little by, in some cases, choosing only the single nearest neighbour rather than considering all neighbours within vision. A fully equivalent implementation would therefore be slower.
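
The difference between the two neighbour rules can be sketched with Agents.jl primitives. The helper names are hypothetical, and `bird`, `model`, and `vision` would come from a ContinuousSpace flocking model like the one benchmarked; `nearby_agents` and `euclidean_distance` are standard Agents.jl functions:

```julia
using Agents

# All neighbours within `vision`, as the comparable implementations use.
flockmates(bird, model, vision) = collect(nearby_agents(bird, model, vision))

# NetLogo's occasional shortcut: react only to the single nearest neighbour.
function nearest_flockmate(bird, model, vision)
    ns = flockmates(bird, model, vision)
    isempty(ns) ? nothing : argmin(n -> euclidean_distance(bird, n, model), ns)
end
```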

The results speak for themselves: across all four models, Agents.jl is the fastest framework while also requiring the least code. This removes many frustrating barriers to entry for new users, and streamlines the development process for established ones.

Table-based comparison

In our paper discussing Agents.jl, we compiled a comparison of a large list of features and metrics across the four frameworks discussed above. It is shown below in table form:

Table 1 · Table 1 (continued)