3 releases

Version | Date
---|---
0.0.6 (new) | Sep 19, 2023
0.0.5 | Jan 29, 2022
0.0.4 | Jul 18, 2021
Border
A reinforcement learning library in Rust.
Border consists of the following crates:
- border-core provides basic traits and functions generic to environments and reinforcement learning (RL) agents.
- border-py-gym-env is a wrapper of the Gym environments written in Python, with support for pybullet-gym and atari.
- border-atari-env is a wrapper of atari-env, which is a part of gym-rs.
- border-tch-agent is a collection of RL agents based on tch. Deep Q-network (DQN), implicit quantile network (IQN), and soft actor-critic (SAC) are included.
- border-async-trainer defines traits and functions for asynchronous training of RL agents with multiple actors, each of which runs a sampling process of an agent and an environment in parallel.
You can use a subset of these crates for your own purposes, though border-core is mandatory. The border crate itself is just a collection of examples. See the documentation for more details.
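To illustrate the environment/agent split that border-core is built around, here is a minimal, self-contained sketch. The trait and type names below (`Env`, `Agent`, `Counter`, `run_episode`) are hypothetical, written for this example only; they are not border-core's actual API.

```rust
// Illustrative sketch only: these traits mirror the environment/agent
// separation described above, not border-core's real trait definitions.

/// Minimal environment interface: reset, then step with an action.
trait Env {
    type Obs;
    type Act;
    fn reset(&mut self) -> Self::Obs;
    /// Returns (next observation, reward, episode-done flag).
    fn step(&mut self, act: &Self::Act) -> (Self::Obs, f64, bool);
}

/// Minimal agent interface: choose an action from an observation.
trait Agent<E: Env> {
    fn act(&mut self, obs: &E::Obs) -> E::Act;
}

/// Toy environment: a counter that terminates once it reaches 3.
struct Counter {
    t: u32,
}

impl Env for Counter {
    type Obs = u32;
    type Act = u32;
    fn reset(&mut self) -> u32 {
        self.t = 0;
        self.t
    }
    fn step(&mut self, act: &u32) -> (u32, f64, bool) {
        self.t += act;
        (self.t, 1.0, self.t >= 3)
    }
}

/// Toy agent: always increments by one.
struct AlwaysOne;

impl Agent<Counter> for AlwaysOne {
    fn act(&mut self, _obs: &u32) -> u32 {
        1
    }
}

/// Run one episode and return the cumulative reward.
fn run_episode(env: &mut Counter, agent: &mut AlwaysOne) -> f64 {
    let mut obs = env.reset();
    let mut ret = 0.0;
    loop {
        let act = agent.act(&obs);
        let (next_obs, reward, done) = env.step(&act);
        ret += reward;
        obs = next_obs;
        if done {
            return ret;
        }
    }
}

fn main() {
    let mut env = Counter { t: 0 };
    let mut agent = AlwaysOne;
    println!("return = {}", run_episode(&mut env, &mut agent)); // return = 3
}
```

Because the environment and agent only meet through narrow trait bounds, either side can be swapped out independently, which is the same decoupling that lets the border-* crates be used piecemeal.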
Status
Border is experimental and currently under development. API is unstable.
Examples
In the examples directory, you can see how to run some examples. Python >= 3.7 and gym must be installed to run the examples using border-py-gym-env. Some examples require PyBullet Gym. Because the agents used in the examples are based on tch-rs, libtorch must also be installed.
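The prerequisites above might be set up as follows. This is a sketch only: exact package names and versions are not pinned by this README, and the libtorch install step depends on your platform (see the tch-rs documentation).

```shell
# Hypothetical setup for the border-py-gym-env examples (assumes pip).
pip install "gym>=0.21"   # Gym environments; requires Python >= 3.7
pip install pybullet      # needed by the PyBullet Gym examples

# libtorch must be installed separately for tch-rs; tch-rs can also
# download it at build time depending on its feature flags.
```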
License
Crate | License
---|---
border-core | MIT OR Apache-2.0
border-py-gym-env | MIT OR Apache-2.0
border-atari-env | GPL-2.0-or-later
border-tch-agent | MIT OR Apache-2.0
border-async-trainer | MIT OR Apache-2.0
border | GPL-2.0-or-later
lib.rs: RL agents implemented with tch.