arboriter-mcts
A flexible, well-documented Monte Carlo Tree Search (MCTS) implementation for Rust, built for game AI and decision-making processes.
Features
- Generic implementation that works with any game or decision process
- Multiple selection policies (UCB1, UCB1-Tuned, PUCT) to balance exploration and exploitation
- Customizable simulation strategies to match your domain knowledge
- Memory-efficient node pooling for improved performance in sequential searches
- Detailed search statistics and visualization for debugging and analysis
- Comprehensive test suite ensuring correctness and reliability
- Thorough documentation with examples for easy integration
Installation
Add this to your Cargo.toml:
```toml
[dependencies]
arboriter-mcts = "0.2.1"
```
Basic Usage
Here's a simple example of how to use the library:
```rust
use arboriter_mcts::{MCTS, MCTSConfig, GameState};

// Implement GameState for your game
impl GameState for MyGame {
    type Action = MyAction;
    type Player = MyPlayer;

    // Return legal actions from the current state
    fn get_legal_actions(&self) -> Vec<Self::Action> { /* ... */ }

    // Apply an action and return the new state
    fn apply_action(&self, action: &Self::Action) -> Self { /* ... */ }

    // Check if the game is over
    fn is_terminal(&self) -> bool { /* ... */ }

    // Get the result (0.0 = loss, 0.5 = draw, 1.0 = win)
    fn get_result(&self, for_player: &Self::Player) -> f64 { /* ... */ }

    // Get the current player
    fn get_current_player(&self) -> Self::Player { /* ... */ }
}

// Create a configuration for the search
let config = MCTSConfig::default()
    .with_exploration_constant(1.414)
    .with_max_iterations(10_000);

// Create the MCTS searcher with the initial state
let mut mcts = MCTS::new(initial_state, config);

// Find the best action
let best_action = mcts.search()?;

// Get search statistics
println!("{}", mcts.get_statistics().summary());
```
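Note that `search()` returns a `Result`, so the `?` above assumes a surrounding function that can propagate errors. Once you have the best action, you can advance the game with your own `GameState` implementation; a minimal sketch, assuming `my_state` is a `MyGame` value you kept alongside the searcher:

```rust
// `apply_action` takes `&self` and returns a new state, so a copy of
// the current state kept outside the searcher can be stepped forward
// with the action MCTS recommends.
let next_state = my_state.apply_action(&best_action);
```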
Running the Examples
The repository includes complete examples for common games that demonstrate the MCTS algorithm in action:
- Tic-Tac-Toe: A simple 3x3 game where you can play against the AI
- Connect Four: A more complex 7x6 game with stronger tactical elements
To run the examples:
```sh
# Play Tic-Tac-Toe against the AI
cargo run --example tic_tac_toe

# Play Connect Four against the AI
cargo run --example connect_four
```
How MCTS Works
Monte Carlo Tree Search combines tree search with random sampling to find optimal decisions:
1. Selection: Starting from the root, descend through child nodes to a leaf using a selection policy that balances exploration and exploitation (see the UCB1 sketch below).
2. Expansion: If the leaf node is not terminal and has untried actions, create one or more child nodes by applying those actions.
3. Simulation: From the new node, simulate a game to completion using a default policy (often random play).
4. Backpropagation: Update the statistics (visit counts and rewards) of every node on the path from the new node back to the root.
This process is repeated many times, gradually building a tree of game states and improving the value estimates for each action.
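As an illustration of the Selection step, here is a minimal UCB1 scoring sketch. The `Node` struct and helper functions below are hypothetical, written to show the formula rather than the crate's internals:

```rust
// Illustrative only: a bare-bones node and the UCB1 rule used to pick
// which child to descend into during Selection.
struct Node {
    visits: u64,
    total_reward: f64,
    children: Vec<Node>,
}

/// UCB1 score of a child given its parent's visit count. Unvisited
/// children score +infinity so every action gets tried at least once.
fn ucb1(child: &Node, parent_visits: u64, c: f64) -> f64 {
    if child.visits == 0 {
        return f64::INFINITY;
    }
    let mean_reward = child.total_reward / child.visits as f64; // exploitation
    let exploration = c * ((parent_visits as f64).ln() / child.visits as f64).sqrt();
    mean_reward + exploration
}

/// Selection picks the child with the highest UCB1 score.
fn select_child(parent: &Node, c: f64) -> Option<usize> {
    (0..parent.children.len()).max_by(|&a, &b| {
        ucb1(&parent.children[a], parent.visits, c)
            .total_cmp(&ucb1(&parent.children[b], parent.visits, c))
    })
}
```

With the default exploration constant c ≈ 1.414 (√2), rarely visited children keep a large exploration bonus, while frequently visited children are scored mostly by their average reward.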
Our implementation includes memory-efficient node pooling, which recycles nodes across sequential searches instead of repeatedly allocating and deallocating memory. This significantly improves performance, especially for multi-step decision processes.
Advanced Customization
You can customize all aspects of the MCTS algorithm to match your specific needs:
```rust
use std::time::Duration;

// Standard configuration
let mut mcts = MCTS::new(initial_state, config)
    // Use UCB1 for selection with a custom exploration constant
    .with_selection_policy(UCB1Policy::new(1.414))
    // Use domain-specific heuristics for simulation
    .with_simulation_policy(HeuristicPolicy::new(my_heuristic_fn))
    // Use a weighted backpropagation policy
    .with_backpropagation_policy(WeightedPolicy::new(0.5));

// Configure time-based search limits
let action = mcts.search_for_time(Duration::from_secs(5))?;

// Enable node pooling for better performance
let config_with_pooling = MCTSConfig::default()
    .with_max_iterations(10_000)
    .with_node_pool_config(1000, 500); // Initial pool size and chunk size

// Create MCTS with node pooling
let mut mcts_with_pool = MCTS::with_node_pool(
    initial_state,
    config_with_pooling,
    1000, // Initial pool size
    500,  // Chunk size
);

// For sequential searches (e.g., playing a full game), you can reset the root
// to reuse nodes instead of creating a new MCTS instance each time
mcts_with_pool.reset_root(new_state);
```
Node pooling provides significant performance benefits by reducing memory allocation overhead, especially for:
- Games with large branching factors
- Sequential searches during gameplay
- Deep search trees
- Time-critical applications
The statistics output will show node pool usage details when enabled.
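Putting it together, a sketch of the sequential-search pattern over a full game, reusing the types from the earlier examples (`MyGame` is your own state type, assumed here to implement `Clone`, and `MyGame::new()` is a hypothetical constructor):

```rust
// Reuse one pooled searcher for every move of a game instead of
// building a fresh tree from scratch each turn.
let mut state = MyGame::new(); // hypothetical constructor for your state
let mut mcts = MCTS::with_node_pool(state.clone(), config_with_pooling, 1000, 500);

while !state.is_terminal() {
    let action = mcts.search()?;          // best action from the current root
    state = state.apply_action(&action);  // play it
    mcts.reset_root(state.clone());       // recycle nodes for the next search
}
```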
Documentation
For detailed documentation and API reference, visit docs.rs/arboriter-mcts.
Contributing
Contributions are welcome! See CONTRIBUTING.md for guidelines.
License
This crate is licensed under the MIT license. See LICENSE for details.
Acknowledgments
- Built on the arboriter tree traversal primitive
- Inspired by Tyler Glaiel's blog post on tree traversal primitives
- Influenced by successful MCTS implementations in AlphaGo and other game AI systems