EntityGym for Rust

Crates.io PyPI MIT/Apache 2.0 Discord Docs Actions Status

EntityGym is a Python library that defines a novel entity-based abstraction for reinforcement learning environments, enabling highly ergonomic and efficient training of deep reinforcement learning agents. This crate provides bindings that allow Rust programs to be used as EntityGym training environments, and to load and run neural network agents trained with the Entity Neural Network Trainer natively in pure Rust applications.

Overview

The core abstraction in entity-gym-rs is the Agent trait. It defines a high-level API for neural network agents which allows them to directly interact with Rust data structures. To use any of the Agent implementations provided by entity-gym-rs, you just need to derive the Action and Featurizable traits, which define what information the agent can observe and what actions it can take:

  • The Action trait allows a Rust type to be returned as an action by an Agent. This trait can be derived automatically for enums with only unit variants.
  • The Featurizable trait converts objects into a format that can be processed by neural networks. It can be derived for most fixed-size structs, and for enums with unit variants. Agents can observe collections containing any number of Featurizable objects.
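
To illustrate what Featurizable means conceptually, the following self-contained sketch hand-writes the kind of flattening the derive macro automates. The trait shape shown here is a simplified, hypothetical stand-in for illustration only, not the crate's actual trait definition:

use std::vec::Vec;

// Simplified stand-in for a Featurizable-style trait: an entity is
// reduced to a fixed-size vector of f32 features, one per field.
trait Featurizable {
    fn num_feats() -> usize;
    fn featurize(&self) -> Vec<f32>;
}

struct Cake {
    x: i32,
    y: i32,
    size: u32,
}

impl Featurizable for Cake {
    fn num_feats() -> usize {
        3 // one feature per field
    }
    fn featurize(&self) -> Vec<f32> {
        vec![self.x as f32, self.y as f32, self.size as f32]
    }
}

fn main() {
    let cake = Cake { x: 4, y: 0, size: 4 };
    assert_eq!(Cake::num_feats(), 3);
    println!("{:?}", cake.featurize()); // prints [4.0, 0.0, 4.0]
}

Because every entity of a given type flattens to the same number of features, an agent can observe collections containing any number of such objects.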

Example

Basic example that demonstrates how to construct an observation and sample a random action from an Agent:

use entity_gym_rs::agent::{Agent, AgentOps, Obs, Action, Featurizable};

#[derive(Action, Debug)]
enum Move { Up, Down, Left, Right }

#[derive(Featurizable)]
struct Player { x: i32, y: i32 }

#[derive(Featurizable)]
struct Cake {
    x: i32,
    y: i32,
    size: u32,
}

fn main() {
    // Creates an agent that acts completely randomly.
    let mut agent = Agent::random();
    // Alternatively, load a trained neural network agent from a checkpoint.
    // let mut agent = Agent::load("agent");

    // Construct an observation with one `Player` entity and two `Cake` entities.
    let obs = Obs::new(0.0)
        .entities([Player { x: 0, y: 0 }])
        .entities([
            Cake { x: 4, y: 0, size: 4 },
            Cake { x: 10, y: 42, size: 12 },
        ]);
    
    // To obtain an action from an agent, we simply call the `act` method
    // with the observation we constructed.
    let action = agent.act::<Move>(obs);
    println!("{:?}", action);
}

For a more complete example that includes training a neural network to play Snake, see examples/bevy_snake.
