#rl #derive #env #border

macro border-derive

Derive macros for observation and action in RL environments of border

2 releases

0.0.6 Sep 19, 2023
0.0.5 Feb 5, 2022

#22 in #rl

25 downloads per month
Used in border


266 lines


A reinforcement learning library in Rust.


Border consists of the following crates:

  • border-core provides basic traits and functions generic to environments and reinforcement learning (RL) agents.
  • border-py-gym-env is a wrapper of the Gym environments written in Python, with support for pybullet-gym and atari.
  • border-atari-env is a wrapper of atari-env, which is a part of gym-rs.
  • border-tch-agent is a collection of RL agents based on tch. Deep Q network (DQN), implicit quantile network (IQN), and soft actor critic (SAC) are included.
  • border-async-trainer defines traits and functions for asynchronous training of RL agents by multiple actors, each of which runs a sampling process of an agent and an environment in parallel.

You can use a subset of these crates for your purposes, though border-core is mandatory. The border crate itself is just a collection of examples. See Documentation for more details.
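As a sketch, a project that uses only the core abstractions and the Gym wrapper might declare something like the following in its Cargo.toml (the version numbers are illustrative, not pinned by this page):

```toml
[dependencies]
# border-core is the only mandatory crate; pick the others as needed.
border-core = "0.0.6"        # version is illustrative
border-py-gym-env = "0.0.6"  # version is illustrative
```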


Border is experimental and currently under development. API is unstable.


The examples directory shows how to run some examples. Python >= 3.7 and gym must be installed to run the examples that use border-py-gym-env. Some examples require PyBullet Gym. Since the agents used in the examples are based on tch-rs, libtorch must also be installed.


Crate                  License
border-core            MIT OR Apache-2.0
border-py-gym-env      MIT OR Apache-2.0
border-atari-env       GPL-2.0-or-later
border-tch-agent       MIT OR Apache-2.0
border-async-trainer   MIT OR Apache-2.0
border                 GPL-2.0-or-later


Derive macros for making newtypes of types that implement border_core::Obs, border_core::Act, and border_core::replay_buffer::SubBatch.

These macros implement some conversion traits for combining the interfaces of an environment and an agent.
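To illustrate the newtype pattern these macros target, here is a self-contained sketch. The trait and all type names below are simplified stand-ins (not the real border_core API): a hypothetical environment observation type is wrapped in a newtype, and the trait impl and From conversion shown are the kind of code a derive macro would generate by delegating to the inner type.

```rust
// Illustrative sketch only: `Obs`, `GymObs`, and `MyObs` are hypothetical
// stand-ins for border_core::Obs and the impls that border-derive generates.

// A simplified stand-in for an observation trait.
trait Obs: Clone {
    fn len(&self) -> usize;
}

// An environment-specific observation type (e.g. from a Gym wrapper).
#[derive(Clone, Debug)]
struct GymObs(Vec<f32>);

// The newtype a derive macro would target: it wraps the environment's
// observation so an agent can accept it through a common interface.
#[derive(Clone, Debug)]
struct MyObs(GymObs);

// What the macro conceptually expands to: the trait impl delegates
// to the wrapped inner type...
impl Obs for MyObs {
    fn len(&self) -> usize {
        (self.0).0.len()
    }
}

// ...plus a conversion trait so environment output flows into the agent.
impl From<GymObs> for MyObs {
    fn from(obs: GymObs) -> Self {
        MyObs(obs)
    }
}

fn main() {
    let raw = GymObs(vec![0.1, 0.2, 0.3]);
    let obs: MyObs = raw.into(); // conversion the macro would generate
    println!("{}", obs.len()); // prints 3
}
```

Writing these delegating impls by hand is mechanical boilerplate, which is exactly what the derive macros automate.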


~274K SLoC