2 releases

Version | Date |
---|---|
0.0.6 | Sep 19, 2023 |
0.0.5 | Feb 5, 2022 |
Border
A reinforcement learning library in Rust.
Border consists of the following crates:
- border-core provides basic traits and functions generic to environments and reinforcement learning (RL) agents.
- border-py-gym-env is a wrapper of the Gym environments written in Python, with support for pybullet-gym and atari.
- border-atari-env is a wrapper of atari-env, which is a part of gym-rs.
- border-tch-agent is a collection of RL agents based on tch. Deep Q network (DQN), implicit quantile network (IQN), and soft actor critic (SAC) are included.
- border-async-trainer defines some traits and functions for asynchronous training of RL agents by multiple actors, each of which runs a sampling process of an agent and an environment in parallel.
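The environment/agent split that border-core's traits describe can be sketched as follows. The trait and type names here are illustrative stand-ins, not the crate's actual API:

```rust
/// Minimal environment abstraction: reset to an initial observation,
/// then step with an action. Illustrative only, not border-core's API.
trait Env {
    type Obs;
    type Act;
    fn reset(&mut self) -> Self::Obs;
    /// Returns (next observation, reward, episode-done flag).
    fn step(&mut self, act: &Self::Act) -> (Self::Obs, f32, bool);
}

/// Minimal agent abstraction: choose an action from an observation.
trait Agent<E: Env> {
    fn act(&mut self, obs: &E::Obs) -> E::Act;
}

/// Toy corridor environment: the state is a position; position 3 is the goal.
struct Corridor { pos: i32 }

impl Env for Corridor {
    type Obs = i32;
    type Act = i32; // -1 or +1
    fn reset(&mut self) -> i32 { self.pos = 0; self.pos }
    fn step(&mut self, act: &i32) -> (i32, f32, bool) {
        self.pos += act;
        let done = self.pos >= 3;
        (self.pos, if done { 1.0 } else { 0.0 }, done)
    }
}

/// Trivial agent that always moves right.
struct GoRight;
impl Agent<Corridor> for GoRight {
    fn act(&mut self, _obs: &i32) -> i32 { 1 }
}

fn main() {
    let mut env = Corridor { pos: 0 };
    let mut agent = GoRight;
    let mut obs = env.reset();
    let mut total = 0.0_f32;
    loop {
        let a = agent.act(&obs);
        let (next, r, done) = env.step(&a);
        total += r;
        obs = next;
        if done { break; }
    }
    println!("return = {total}, final pos = {obs}"); // return = 1, final pos = 3
}
```

Because the agent is only coupled to the environment through the `Obs` and `Act` associated types, the same agent code can drive any environment wrapper with matching types.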
You can use a subset of these crates for your own purposes, though border-core is always required. This crate is just a collection of examples. See the documentation for more details.
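The asynchronous-training idea behind border-async-trainer, where multiple actors sample in parallel and feed a single trainer, can be illustrated with a self-contained threads-and-channels sketch. All names here are hypothetical, not the crate's API:

```rust
// Sketch of asynchronous sampling: several actor threads generate
// transitions from their own environment copies and send them over a
// channel to one trainer thread. Names are hypothetical, not the
// border-async-trainer API.
use std::sync::mpsc;
use std::thread;

/// A sampled transition (observation, action, reward).
#[derive(Debug)]
struct Transition { obs: i32, act: i32, reward: f32 }

fn main() {
    let (tx, rx) = mpsc::channel::<Transition>();

    // Spawn two actors, each running its own sampling loop in parallel.
    let mut handles = Vec::new();
    for actor_id in 0..2 {
        let tx = tx.clone();
        handles.push(thread::spawn(move || {
            for step in 0..3 {
                // In a real setup this would come from an env/agent pair.
                tx.send(Transition { obs: step, act: actor_id, reward: 1.0 })
                    .unwrap();
            }
        }));
    }
    drop(tx); // close the channel once all actor senders are gone

    // The "trainer" drains the channel, e.g. to fill a replay buffer.
    let buffer: Vec<Transition> = rx.iter().collect();
    for h in handles {
        h.join().unwrap();
    }
    println!("collected {} transitions", buffer.len()); // collected 6 transitions
}
```

The key property is that sampling and training are decoupled: actors never block on the trainer except through the channel, which is what allows each actor to run its env/agent loop at its own pace.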
Status
Border is experimental and currently under development; the API is unstable.
Examples
The examples directory shows how to run some examples. Python >= 3.7 and gym must be installed to run the examples using border-py-gym-env. Some examples require PyBullet Gym. As the agents used in the examples are based on tch-rs, libtorch must also be installed.
License
Crates | License |
---|---|
border-core | MIT OR Apache-2.0 |
border-py-gym-env | MIT OR Apache-2.0 |
border-atari-env | GPL-2.0-or-later |
border-tch-agent | MIT OR Apache-2.0 |
border-async-trainer | MIT OR Apache-2.0 |
border | GPL-2.0-or-later |
lib.rs:

Derive macros for making newtypes of types that implement border_core::Obs, border_core::Act, and border_core::replay_buffer::SubBatch. These macros implement conversion traits for combining the interfaces of an environment and an agent.
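Conceptually, such a derive macro saves you from writing the newtype boilerplate by hand. The sketch below shows the kind of impls involved, using a stand-in trait for illustration rather than the real border_core::Obs:

```rust
// Hand-written version of what a newtype derive roughly automates.
// `Obs` is a stand-in trait for illustration, not border_core::Obs.
trait Obs {
    fn dim(&self) -> usize;
}

/// Underlying observation type, e.g. produced by a Gym wrapper.
#[derive(Clone, Debug)]
struct GymObs(Vec<f32>);

/// Newtype wrapping GymObs so the environment's and agent's interfaces
/// line up; a derive macro would generate the two impls below.
#[derive(Clone, Debug)]
struct MyObs(GymObs);

impl From<GymObs> for MyObs {
    fn from(inner: GymObs) -> Self {
        MyObs(inner)
    }
}

impl Obs for MyObs {
    fn dim(&self) -> usize {
        (self.0).0.len()
    }
}

fn main() {
    let obs: MyObs = GymObs(vec![0.1, 0.2, 0.3]).into();
    println!("{}", obs.dim()); // 3
}
```

The newtype is what lets user code implement foreign traits on a wrapper type it owns (Rust's orphan rule forbids implementing a foreign trait directly on a foreign type), which is why the conversion impls are generated on the wrapper rather than on the wrapped type.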
Dependencies: ~14MB, ~274K SLoC