#rl #border #tch

border-tensorboard

Reinforcement learning library

1 unstable release

0.0.6 Sep 19, 2023

#423 in Science


57 downloads per month
Used in 3 crates (2 directly)

MIT/Apache

72KB
1.5K SLoC

Border

A reinforcement learning library in Rust.


Border consists of the following crates:

  • border-core provides basic traits and functions generic to environments and reinforcement learning (RL) agents.
  • border-py-gym-env is a wrapper of the Gym environments written in Python, with support for pybullet-gym and Atari.
  • border-atari-env is a wrapper of atari-env, which is part of gym-rs.
  • border-tch-agent is a collection of RL agents based on tch. Deep Q-network (DQN), implicit quantile network (IQN), and soft actor-critic (SAC) are included.
  • border-async-trainer defines traits and functions for asynchronous training of RL agents with multiple actors, each of which runs a sampling loop of an agent and an environment in parallel.

You can use any subset of these crates for your purposes, though border-core is required. The border crate itself is just a collection of examples. See the Documentation for more details.

Status

Border is experimental and currently under development. The API is unstable.

Examples

The examples directory shows how to run some examples. Python >= 3.7 and gym must be installed to run the examples that use border-py-gym-env, and some examples also require PyBullet Gym. Because the agents used in the examples are based on tch-rs, libtorch must be installed as well.
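A rough setup sketch for the Python and libtorch prerequisites follows. The exact package names and the libtorch path are assumptions based on the README and common tch-rs usage; check the examples directory for the precise requirements.

```shell
# Python-side dependencies (Python >= 3.7); pybullet is assumed to be
# what the PyBullet Gym examples need.
pip install gym
pip install pybullet

# tch-rs needs libtorch. One common approach is to point it at an
# existing installation via environment variables:
export LIBTORCH=/path/to/libtorch
export LD_LIBRARY_PATH="$LIBTORCH/lib:$LD_LIBRARY_PATH"
```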

License

| Crate | License |
|---|---|
| border-core | MIT OR Apache-2.0 |
| border-py-gym-env | MIT OR Apache-2.0 |
| border-atari-env | GPL-2.0-or-later |
| border-tch-agent | MIT OR Apache-2.0 |
| border-async-trainer | MIT OR Apache-2.0 |
| border | GPL-2.0-or-later |

Dependencies

~13MB
~207K SLoC