
rsrl

A fast, extensible reinforcement learning framework in Rust

17 unstable releases (7 breaking)

✓ Uses Rust 2018 edition

0.8.1 Jun 18, 2020
0.7.1 Apr 15, 2020
0.7.0 Nov 8, 2019
0.6.0 Nov 22, 2018
0.1.0 Dec 24, 2017


RSRL (api)


Reinforcement learning should be fast, safe and easy to use.

Overview

rsrl provides generic constructs for reinforcement learning (RL) experiments in an extensible framework, with efficient implementations of existing methods for rapid prototyping.

Installation

```toml
[dependencies]
rsrl = "0.8"
```

Note that rsrl enables the blas feature of its ndarray dependency, so if you're building a binary you must also specify a BLAS backend compatible with ndarray. For example, add these dependencies:

```toml
blas-src = { version = "0.2.0", default-features = false, features = ["openblas"] }
openblas-src = { version = "0.6.0", default-features = false, features = ["cblas", "system"] }
```

See ndarray's README for more information.
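Because blas-src only links the backend when it is actually referenced, ndarray's README also recommends declaring the dependency in your crate root. A minimal sketch, assuming the openblas configuration above:

```rust
// In main.rs (or lib.rs): reference blas-src so the BLAS backend is linked.
// With the 2018 edition, `use ... as _` replaces the older
// `extern crate blas_src;` form.
use blas_src as _;
```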

Usage

The code below shows how one could use rsrl to train and evaluate a QLearning agent, using a linear function approximator with Fourier basis projection, on the canonical mountain car problem.

See examples/ for more...

```rust
// NB: imports shown for the rsrl 0.8 API; exact module paths may vary
// between versions.
use rand::{rngs::StdRng, SeedableRng};
use rsrl::{
    control::td::QLearning,
    domains::{Domain, MountainCar},
    fa::linear::{basis::Fourier, optim::SGD, LFA},
    make_shared,
    policies::{Greedy, Policy},
    spaces::Space,
    Handler,
};

let env = MountainCar::default();
let n_actions = env.action_space().card().into();

let mut rng = StdRng::seed_from_u64(0);
let (mut ql, policy) = {
    // 5th-order Fourier basis over the state space, with a bias term.
    let basis = Fourier::from_space(5, env.state_space()).with_bias();
    let q_func = make_shared(LFA::vector(basis, SGD(0.001), n_actions));
    let policy = Greedy::new(q_func.clone());

    (QLearning {
        q_func,
        gamma: 0.9,
    }, policy)
};

for e in 0..200 {
    // Episode loop:
    let mut j = 0;
    let mut env = MountainCar::default();
    let mut action = policy.sample(&mut rng, env.emit().state());

    for i in 0.. {
        // Trajectory loop:
        j = i;

        let t = env.transition(action);

        ql.handle(&t).ok();
        action = policy.sample(&mut rng, t.to.state());

        if t.terminated() {
            break;
        }
    }

    println!("Batch {}: {} steps...", e + 1, j + 1);
}

// Evaluate out-of-sample with the greedy (mode) action, capped at 500 steps.
let traj = MountainCar::default().rollout(|s| policy.mode(s), Some(500));

println!("OOS: {} states...", traj.n_states());
```

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate and adhere to the AngularJS commit message conventions.

License

MIT
