What’s New in Abmarl
Abmarl version 0.2.6 features the new Absolute Grid Observer, which produces “top-down” observations of the grid “from the grid’s perspective”; the Maze Placement State component for structuring the initial placement of agents within a grid while allowing for variation in each episode; and enhanced support for building gridworld simulations.
Absolute Grid Observer
The Single and Multi Grid Observers provide observations of the grid centered on the observing agent, a view of the grid “from the agent’s perspective”. Abmarl’s Grid World Simulation Framework now contains the Absolute Grid Observer, which produces observations of the grid “from the grid’s perspective”. The observation size matches the size of the grid, and the agent sees itself moving around the grid instead of seeing all the other agents positioned relative to itself.
Here we show the state observations for the bottom-left red agent with a view_range of 2 via the Single Grid Observer and the new Absolute Grid Observer. The Single Grid Observation is sized by the agent’s view range, the observing agent is in the very center, and all other cells are shown by their relative positions, including out-of-bounds cells. The Absolute Grid Observation is sized by the grid, all agents are shown in their actual grid positions, there are no out-of-bounds cells, and any cell that the agent cannot see is masked with a -2.
# Single Grid Observer, observing agent is shown here as *3
[ 0, 2, 2, 0, 2],
[ 0, 2, 0, 0, 0],
[ 0, 0, *3, 3, 0],
[ 0, 0, 0, 0, 0],
[-1, -1, -1, -1, -1],
# Absolute Grid Observer, observing agent is shown as -1
[-2, -2, -2, -2, -2, -2, -2],
[-2, -2, -2, -2, -2, -2, -2],
[-2, -2, -2, -2, -2, -2, -2],
[ 0, 2, 2, 0, 2, -2, -2],
[ 0, 2, 0, 0, 0, -2, -2],
[ 0, 0, -1, 3, 0, -2, -2],
[ 0, 0, 0, 0, 0, -2, -2]
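The masking rule behind the Absolute Grid Observer can be sketched in plain NumPy (an illustrative sketch, not Abmarl's implementation; the function name is hypothetical, and the -1 self-marker and -2 mask value follow the example above):

```python
import numpy as np

def absolute_observation(grid, agent_pos, view_range, mask_value=-2):
    """Return a grid-sized observation in which every cell outside the
    observing agent's view_range is masked with mask_value, and the
    observer's own cell is marked with -1."""
    obs = np.full_like(grid, mask_value)
    r, c = agent_pos
    # Clip the square view window to the grid boundaries.
    top = max(r - view_range, 0)
    bottom = min(r + view_range + 1, grid.shape[0])
    left = max(c - view_range, 0)
    right = min(c + view_range + 1, grid.shape[1])
    obs[top:bottom, left:right] = grid[top:bottom, left:right]
    obs[r, c] = -1  # the agent sees itself as -1
    return obs
```

With a view_range of 2 and the agent at row 5, column 2 of a 7x7 grid, this reproduces the masking pattern shown above: the top three rows and the two rightmost columns are masked with -2.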
Maze Placement State
The Position State supports placing agents in the grid either (1) according to their initial positions or (2) by randomly selecting an available cell. The new Maze Placement State supports more structure in initial agent placement. It starts by partitioning the grid into two types of cells, free or barrier, according to a maze that is generated starting at some target agent’s position. Agents with free encodings and barrier encodings are then randomly placed in free cells and barrier cells, respectively. The Maze Placement State component can be configured to cluster barrier agents near the target and scatter free agents away from the target, and the clustering never blocks every path to the target. In this way, the grid can be randomized at the start of each episode while still maintaining the desired structure.
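The core idea, generating a maze from the target's cell and then partitioning cells into free and barrier sets for random placement, can be sketched as follows (an illustrative sketch, not Abmarl's Maze Placement State; the function names are hypothetical):

```python
import random

def generate_maze(rows, cols, start, seed=None):
    """Carve a maze with randomized depth-first search, starting from the
    target agent's cell. Returns a nested list: 0 = free cell, 1 = barrier.
    Carving steps two cells at a time so walls remain between corridors;
    odd rows/cols give the cleanest mazes."""
    rng = random.Random(seed)
    grid = [[1] * cols for _ in range(rows)]

    def carve(r, c):
        grid[r][c] = 0
        directions = [(-2, 0), (2, 0), (0, -2), (0, 2)]
        rng.shuffle(directions)
        for dr, dc in directions:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 1:
                grid[r + dr // 2][c + dc // 2] = 0  # open the wall between
                carve(nr, nc)

    carve(*start)
    return grid

def partition_cells(grid):
    """Split the maze into free and barrier cell lists, which can then be
    used to randomly place free-encoded and barrier-encoded agents."""
    free, barrier = [], []
    for r, row in enumerate(grid):
        for c, value in enumerate(row):
            (free if value == 0 else barrier).append((r, c))
    return free, barrier
```

Because the maze is carved from the target's cell outward, every free cell is reachable from the target, which is why placing barrier agents only on barrier cells can never block all paths to it.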
Building a Gridworld Simulation
Abmarl’s Gridworld Simulation Framework now supports building the simulation in these ways:
Building the simulation by specifying the rows, columns, and agents;
Building the simulation from an existing grid;
Building the simulation from an array and an object registry; and
Building the simulation from a file and an object registry.
Additionally, when building the simulation from a grid, array, or file, you can specify additional agents to build that are not in those inputs. The builder will combine the content from the grid, array, or file with the extra agents.
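A simplified version of building from an array plus an object registry, with extra agents merged in, might look like this (a hypothetical sketch using plain dicts; Abmarl's actual builder API and registry format differ):

```python
def build_agents_from_array(array, object_registry, extra_agents=None):
    """Build an agent dictionary from a 2D array of characters. The object
    registry maps each character to a builder function that takes a counter
    and returns an agent (here a plain dict). Characters not in the
    registry (such as '.') are treated as empty cells. Extra agents not
    present in the array are merged in afterwards."""
    agents = {}
    counters = {}
    for r, row in enumerate(array):
        for c, char in enumerate(row):
            if char in object_registry:
                n = counters.get(char, 0)
                counters[char] = n + 1
                agent = object_registry[char](n)
                agent["initial_position"] = (r, c)
                agents[agent["id"]] = agent
    # Combine the content from the array with the extra agents.
    for agent_id, agent in (extra_agents or {}).items():
        agents.setdefault(agent_id, agent)
    return agents
```

Building from a file is the same idea: read the file into a 2D array of characters first, then apply the registry.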
Miscellaneous
The new built-in Target agent component supports agents having a target agent with which they must overlap.
The new Cross Move Actor allows agents to move up, down, left, or right, or to stay in place.
The All Step Manager supports randomized ordering in the action dictionary.
The Position State component supports ignoring agents’ overlapping settings during random placement, so that agents are placed on unique cells.
Abmarl’s visualize component now supports the --record-only flag, which saves animations without displaying them on screen, useful when running headless or processing in batch.
Bugfix with the Super Agent Wrapper enables training with RLlib 2.0.
Abmarl now supports Python 3.9 and 3.10.
Abmarl now supports gym 0.23.1.
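The Cross Move Actor’s five moves can be sketched as a small lookup table (an illustrative sketch; the action-integer ordering and boundary clipping are assumptions, not Abmarl’s API):

```python
# The five cross moves: stay, up, down, left, right. Grid coordinates
# put row 0 at the top, so "up" decreases the row index.
CROSS_MOVES = {
    0: (0, 0),    # stay in place
    1: (-1, 0),   # up
    2: (1, 0),    # down
    3: (0, -1),   # left
    4: (0, 1),    # right
}

def apply_cross_move(position, action, rows, cols):
    """Apply a cross move to a (row, col) position, clipping at the
    grid boundary so the agent cannot leave the grid."""
    dr, dc = CROSS_MOVES[action]
    r = min(max(position[0] + dr, 0), rows - 1)
    c = min(max(position[1] + dc, 0), cols - 1)
    return (r, c)
```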