The Wayback Machine - http://web.archive.org/web/20200611142321/https://github.com/deepmind/acme
A library of reinforcement learning components and agents

Latest commit d30c509 (Jun 11, 2020) by aslanides and Copybara-Service:

Move utils/tf2_* -> tf/*.

This is the last change in the code restructuring w.r.t. JAX and TensorFlow.

As this is an API-breaking change, we also bump the version from 0.1.3 to 0.1.4.

PiperOrigin-RevId: 315883733
Change-Id: I8a35ae9792d8580117958f3ea4dea38025e7a86b

Files

    .github/workflows   Add a PyPI release workflow.                                    Jun 10, 2020
    acme                Move utils/tf2_* -> tf/*.                                       Jun 11, 2020
    docs                Move utils/tf2_* -> tf/*.                                       Jun 11, 2020
    examples            Move utils/tf2_* -> tf/*.                                       Jun 11, 2020
    CONTRIBUTING.md     Initial commit.                                                 May 15, 2020
    LICENSE             Initial commit.                                                 May 15, 2020
    MANIFEST.in         add license to source distributions                             Jun 2, 2020
    README.md           Add link to blog post in README.                                Jun 11, 2020
    setup.py            Make versioning more explicit; add a setup flag for nightlies.  Jun 11, 2020
    test.sh             Pin TF and Reverb versions.                                     Jun 1, 2020

README.md

Acme: A research framework for reinforcement learning

Overview | Installation | Documentation | Agents | Examples | Paper | Blog post


Acme is a library of reinforcement learning (RL) agents and agent building blocks. Acme strives to expose simple, efficient, and readable agents that serve both as reference implementations of popular algorithms and as strong baselines, while still providing enough flexibility to support novel research. Acme's design also aims to provide multiple points of entry to the RL problem at differing levels of complexity.

Overview

If you just want to get started using Acme quickly, the main thing to know is that the library exposes a number of agent implementations and an EnvironmentLoop primitive that can be used as follows:

# `environment` implements the DeepMind Environment (dm_env) API and
# `agent` is one of Acme's agent implementations.
loop = acme.EnvironmentLoop(environment, agent)
loop.run()

This will run a simple loop in which the given agent interacts with its environment and learns from that interaction. This assumes an agent instance (implementations of which you can find here) and an environment instance that implements the DeepMind Environment API. Each individual agent also includes a README.md file describing its implementation in more detail. Of course, these two lines of code considerably simplify the picture. To actually get started, take a look at the detailed working code examples in our examples subdirectory, which show how to instantiate a few agents and environments. We also include a quickstart notebook.
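To make the loop above concrete, here is a minimal pure-Python sketch of the interaction pattern it encapsulates. Note that the classes and method names below are illustrative stand-ins, not Acme's actual API:

```python
# Illustrative sketch of the observe/act/learn cycle behind an environment
# loop. ToyEnvironment and ToyAgent are invented for this example only.

class ToyEnvironment:
    """A trivial episodic environment: reward 1.0 per step, 3 steps long."""

    def __init__(self):
        self._t = 0

    def reset(self):
        self._t = 0
        return 0.0  # initial observation

    def step(self, action):
        self._t += 1
        observation, reward = float(self._t), 1.0
        done = self._t >= 3
        return observation, reward, done


class ToyAgent:
    """An agent that always picks action 0 and tallies reward."""

    def __init__(self):
        self.total_reward = 0.0

    def select_action(self, observation):
        return 0

    def observe(self, reward):
        self.total_reward += reward  # a real agent would update itself here


def run_episode(environment, agent):
    """One episode of the agent-environment interaction loop."""
    observation = environment.reset()
    done = False
    while not done:
        action = agent.select_action(observation)
        observation, reward, done = environment.step(action)
        agent.observe(reward)
    return agent.total_reward


print(run_episode(ToyEnvironment(), ToyAgent()))  # 3.0
```

Acme's EnvironmentLoop plays the role of `run_episode` here, handling episode boundaries, logging, and counting on your behalf.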

Acme also tries to maintain this level of simplicity while allowing you to dive deeper into agent algorithms or use agents in more complicated settings. For an overview of Acme, along with more detailed descriptions of its underlying components, refer to the documentation. We also include a tutorial notebook which describes in more detail the components behind a typical Acme agent and how they can be combined to form a novel implementation.
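At a high level, a typical Acme agent couples an acting component (which selects actions and writes experience to a replay buffer) with a learning component (which consumes that experience to update parameters). The sketch below illustrates that split in plain Python; all names are invented for illustration and do not reflect Acme's real classes:

```python
# Simplified, illustrative sketch of an actor/learner split with a replay
# buffer in between. None of these classes are Acme's actual components.
from collections import deque


class ReplayBuffer:
    """Stores (observation, action, reward) transitions."""

    def __init__(self, maxlen=100):
        self._data = deque(maxlen=maxlen)

    def add(self, transition):
        self._data.append(transition)

    def sample(self):
        return list(self._data)


class Actor:
    """Acts in the environment and writes transitions to replay."""

    def __init__(self, replay):
        self._replay = replay

    def select_action(self, observation):
        return 0  # a real actor would query a policy network

    def observe(self, observation, action, reward):
        self._replay.add((observation, action, reward))


class Learner:
    """Updates a 'parameter' from batches; here, a running mean reward."""

    def __init__(self):
        self.value_estimate = 0.0

    def step(self, batch):
        rewards = [reward for (_, _, reward) in batch]
        if rewards:
            self.value_estimate = sum(rewards) / len(rewards)


replay = ReplayBuffer()
actor, learner = Actor(replay), Learner()
for obs, reward in [(0.0, 1.0), (1.0, 0.0), (2.0, 1.0)]:
    action = actor.select_action(obs)
    actor.observe(obs, action, reward)
learner.step(replay.sample())
print(learner.value_estimate)  # mean reward: 2/3
```

In Acme, the replay buffer role is played by Reverb, and in distributed settings the actor and learner may run in separate processes; the tutorial notebook walks through the real components.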

Installation

We have tested Acme on Python 3.6 and 3.7.

  1. Optional: We strongly recommend using a Python virtual environment to manage your dependencies in order to avoid version conflicts:

    python3 -m venv acme
    source acme/bin/activate
    pip install --upgrade pip setuptools
  2. To install the core libraries (including Reverb, our storage backend):

    pip install dm-acme
    pip install dm-acme[reverb]
  3. To install dependencies for our JAX- or TensorFlow-based agents:

    pip install dm-acme[tf]
    # and/or
    pip install dm-acme[jax]
  4. Finally, to install a few example environments (including gym, dm_control, and bsuite):

    pip install dm-acme[envs]

Citing Acme

If you use Acme in your work, please cite the accompanying technical report:

@article{hoffman2020acme,
    title={Acme: A Research Framework for Distributed Reinforcement Learning},
    author={Matt Hoffman and Bobak Shahriari and John Aslanides and Gabriel
        Barth-Maron and Feryal Behbahani and Tamara Norman and Abbas Abdolmaleki
        and Albin Cassirer and Fan Yang and Kate Baumli and Sarah Henderson and
        Alex Novikov and Sergio Gómez Colmenarejo and Serkan Cabi and Caglar
        Gulcehre and Tom Le Paine and Andrew Cowie and Ziyu Wang and Bilal Piot
        and Nando de Freitas},
    year={2020},
    journal={arXiv preprint arXiv:2006.00979},
    url={https://arxiv.org/abs/2006.00979},
}