Welcome to Abmarl’s documentation!

Abmarl is a package for developing Agent-Based Simulations and training them with MultiAgent Reinforcement Learning (MARL). We provide an intuitive command line interface for engaging with the full workflow of MARL experimentation: training, visualizing, and analyzing agent behavior. We define an Agent-Based Simulation Interface and Simulation Manager, which control which agents interact with the simulation at each step. We support integration with popular reinforcement learning simulation interfaces, including gym.Env, MultiAgentEnv, and OpenSpiel. We define our own GridWorld Simulation Framework for creating custom grid-based Agent-Based Simulations.

Abmarl leverages RLlib’s framework for reinforcement learning and extends it to more easily support custom simulations, algorithms, and policies. We enable researchers to rapidly prototype MARL experiments and simulation designs, and we lower the barrier for pre-existing projects to prototype RL as a potential solution.
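The command line workflow described above can be sketched as follows. The experiment configuration name, results directory, and analysis script below are illustrative placeholders, not files shipped with Abmarl; adjust them to your own setup.

```shell
# Train agents using an experiment configuration script
# (my_experiment.py is a placeholder for your own configuration).
abmarl train my_experiment.py

# Visualize the trained agents' behavior from the saved output
# directory (the path below is illustrative).
abmarl visualize ~/abmarl_results/MyExperiment-2023-01-01_12-00

# Analyze agent behavior with your own analysis script
# (my_analysis.py is a placeholder).
abmarl analyze ~/abmarl_results/MyExperiment-2023-01-01_12-00 my_analysis.py
```

Each subcommand operates on the same experiment: training produces the results directory, and the visualize and analyze subcommands consume it.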


Abmarl has been published in the Journal of Open Source Software. It can be cited using the following BibTeX entry:

  @article{Rusu2021,
    doi = {10.21105/joss.03424},
    url = {https://doi.org/10.21105/joss.03424},
    year = {2021},
    publisher = {The Open Journal},
    volume = {6},
    number = {64},
    pages = {3424},
    author = {Edward Rusu and Ruben Glatt},
    title = {Abmarl: Connecting Agent-Based Simulations with Multi-Agent Reinforcement Learning},
    journal = {Journal of Open Source Software}
  }