POS_PSO
Highly Configurable Particle Swarm Optimizer implemented in pure Rust
Easily minimize a cost function across multiple threads
- Multiple optimizations can be run in parallel with Independent Swarms
- or Swarms can periodically share best-known locations in the search space with Collaborative Swarms
Motion dynamics are loosely based on MATLAB's Particle Swarm Algorithm
The optimizer takes a closure with the following signature as the cost function (a minimal sketch appears after the notes below):
move |x: &[f64]| -> f64 {}
- The input slice represents a point in the N-dimensional optimization space
- The returned cost is used to navigate the search space and locate a minimum
- Currently only f64 is supported, but future updates may allow more generic cost functions
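For illustration only (this closure is hypothetical, not taken from the crate's documentation), a cost function matching that signature could be as simple as a sum of squares:

// A hypothetical sum-of-squares cost function; its minimum is at the origin.
// Any data the function needs must be moved into the closure, as in the examples below.
let sum_of_squares = move |x: &[f64]| -> f64 {
    x.iter().map(|v| v * v).sum()
};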
Some Examples to get started:
Single-Threaded Example (Single Swarm)
- Use the default independent swarm configuration (a good place to start, but you may want to define your own based on your use case)
- Search-space bounds and max velocities must still be defined in a JobConfig object
- A single swarm may be the fastest-performing option on high-end Intel processors
extern crate pos_pso;
use pos_pso::{PSO, PSOConfig, JobConfig};

fn main() {
    let num_variables = 5;

    // Define a cost function to minimize:
    // data captured by the closure
    let mins = vec![1.0, 2.0, 3.0, 4.0, 5.0];
    let coeffs = vec![0.25; num_variables];

    // the cost-function closure must take a &[f64] and return f64
    let cost_function = move |point: &[f64]| -> f64 {
        let mut sum = 0.0;
        for i in 0..num_variables {
            sum += (point[i] - mins[i]).powi(2) * coeffs[i];
        }
        sum
    };

    // Create a PSO Configuration:
    let pso_config = PSOConfig::new(
        1,    // 1 swarm used in the optimization
        256,  // 256 particles are spawned
        10,   // console is updated every 10 iterations
        true, // optimizer is verbose (prints more detailed information to the console)
    );

    // Create a PSO:
    let pso = PSO::new(pso_config);

    // Create a Job Configuration:
    let job_config = JobConfig::new(
        num_variables,
        vec![[-15.0, 15.0]; num_variables], // [lower, upper] bounds for each variable
        vec![1.125; num_variables],         // max velocity for each variable
        100,       // run for 100 iterations
        0.0000001, // exit cost (optimization will stop when a cost of 0.0000001 is reached)
    );

    // Minimize the cost function:
    // use minimize_independant to optimize with the default independent-swarm configuration
    // (the next example shows how to use collaborative swarms)
    let min = pso.minimize_independant(job_config, cost_function);

    println!("Minimum of: {}, With value: {:?}", min.0, min.1);
}
Multi-Threaded Example (4 collaborative swarms with fewer particles in each)
- A custom swarm configuration is defined with motion coefficients and a record-sharing period
- This may be the fastest-performing option on high-end AMD processors
extern crate pos_pso;
// assumes SwarmConfig is exported from the crate root like the other types
use pos_pso::{PSO, PSOConfig, JobConfig, SwarmConfig};

fn main() {
    let num_variables = 5;

    // Define a cost function to minimize:
    // data captured by the closure
    let mins = vec![1.0, 2.0, 3.0, 4.0, 5.0];
    let coeffs = vec![0.25; num_variables];

    // the cost-function closure must take a &[f64] and return f64
    let cost_function = move |point: &[f64]| -> f64 {
        let mut sum = 0.0;
        for i in 0..num_variables {
            sum += (point[i] - mins[i]).powi(2) * coeffs[i];
        }
        sum
    };

    // Create a PSO Configuration:
    let pso_config = PSOConfig::new(
        4,    // 4 swarms (each spawned on its own thread)
        64,   // 64 particles per swarm
        10,   // console is updated every 10 iterations
        true, // optimizer is verbose (prints more detailed information to the console)
    );

    // Create a PSO:
    let pso = PSO::new(pso_config);

    // Create a Job Configuration:
    let job_config = JobConfig::new(
        num_variables,
        vec![[-15.0, 15.0]; num_variables], // [lower, upper] bounds for each variable
        vec![1.125; num_variables],         // max velocity for each variable
        100,       // run for 100 iterations
        0.0000001, // exit cost (swarms will stop when a cost of 0.0000001 is reached)
    );

    // Create a custom Swarm Configuration:
    // collaborative swarms share information with each other about best-known locations in the search space
    let swarm_config = SwarmConfig::new_collaborative(
        1.45,  // local weight: how much particles care about their own best-known location
        1.6,   // tribal weight: how much particles care about their swarm's best-known location
        1.25,  // global weight: how much particles care about the overall best-known location
        0.4,   // inertial coefficient: the component of a particle's velocity that carries over to its next velocity
        1.25,  // inertial growth factor: how much inertia grows and shrinks throughout the optimization
        0.125, // wall bounce factor: the component of velocity that is kept when a particle goes out of bounds
        10,    // tribal-global collaboration period: swarms share best-known locations every 10 iterations
    );

    // Minimize the cost function:
    // use minimize to optimize with a custom SwarmConfig
    let min = pso.minimize(job_config, swarm_config, cost_function);

    println!("Minimum of: {}, With value: {:?}", min.0, min.1);
}
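For intuition about the motion coefficients used above (local, tribal, and global weights, plus inertia), the classic per-dimension particle-swarm velocity update that they loosely correspond to is sketched below. This is an illustrative simplification, not the crate's actual implementation; the function name and the random inputs are made up for the sketch.

// Illustrative only: the classic PSO velocity update for one dimension.
// r_local, r_tribal, r_global are uniform random samples in [0, 1] drawn each iteration.
fn next_velocity(
    velocity: f64,     // current velocity in this dimension
    position: f64,     // current position in this dimension
    best_local: f64,   // this particle's best-known position
    best_tribal: f64,  // this swarm's best-known position
    best_global: f64,  // best-known position across all swarms
    inertia: f64,
    local_weight: f64,
    tribal_weight: f64,
    global_weight: f64,
    r_local: f64,
    r_tribal: f64,
    r_global: f64,
) -> f64 {
    inertia * velocity
        + local_weight * r_local * (best_local - position)
        + tribal_weight * r_tribal * (best_tribal - position)
        + global_weight * r_global * (best_global - position)
}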
Advanced Example
- Use a group of 8 independent swarms to minimize the same cost function with different swarm parameters
- This example shows how to use a SwarmConfigDistribution, where swarm-behavior coefficients are sampled from a user-defined distribution
- All 8 swarms run in parallel on separate threads (the lowest cost and best location are returned after all threads have finished)
extern crate pos_pso;
use pos_pso::{PSO, PSOConfig, JobConfig, SwarmConfigDistribution, ParamDist};

fn main() {
    let num_variables = 5;

    // Define a cost function to minimize:
    // data captured by the closure
    let mins = vec![1.0, 2.0, 3.0, 4.0, 5.0];
    let coeffs = vec![0.25; num_variables];

    // the cost-function closure must take a &[f64] and return f64
    let cost_function = move |point: &[f64]| -> f64 {
        let mut sum = 0.0;
        for i in 0..num_variables {
            sum += (point[i] - mins[i]).powi(2) * coeffs[i];
        }
        sum
    };

    // Create a PSO Configuration:
    let pso_config = PSOConfig::new(
        8,    // 8 swarms (each spawned on its own thread)
        128,  // 128 particles per swarm
        10,   // console is updated by each thread every 10 iterations
        true, // optimizer is verbose (prints more detailed information to the console)
    );

    // Create a PSO:
    let pso = PSO::new(pso_config);

    // Create a Job Configuration:
    let job_config = JobConfig::new(
        num_variables,
        vec![[-15.0, 15.0]; num_variables], // [lower, upper] bounds for each variable
        vec![1.125; num_variables],         // max velocity for each variable
        100,       // run for 100 iterations
        0.0000001, // exit cost (swarms will stop when a cost of 0.0000001 is reached)
    );

    // Create a Swarm Configuration Distribution:
    // the optimizer samples a new swarm configuration for each swarm,
    // which is useful for automatically creating a range of swarm behaviors;
    // new_independant is used here, so all 8 optimizations run separately in parallel
    let swarm_config = SwarmConfigDistribution::new_independant(
        ParamDist::Fixed(1.45),         // local weight: fixed value of 1.45
        ParamDist::Range([1.65, 0.25]), // tribal weight: random value, 1.65 +/- 25%
        ParamDist::Fixed(0.4),          // momentum: fixed value of 0.4
        ParamDist::Range([1.25, 0.05]), // momentum growth factor: random value, 1.25 +/- 5%
        ParamDist::Fixed(0.0125),       // wall bounce factor: fixed value of 0.0125
    );

    // Minimize the cost function:
    // use minimize_distributed to accept a SwarmConfigDistribution
    let min = pso.minimize_distributed(job_config, swarm_config, cost_function);

    println!("Minimum of: {}, With value: {:?}", min.0, min.1);
}
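As a reading aid for the ParamDist::Range([value, spread]) arguments above, the "1.65 +/- 25%" comments describe sampling within a relative range around a center value. A hypothetical sketch of that reading (not the crate's actual sampling code, and sample_param is a made-up name) might look like this:

// Illustrative only: draw a value uniformly from value +/- (value * spread),
// matching the "1.65 +/- 25%" reading of ParamDist::Range([1.65, 0.25]) above.
// `r` is a uniform random number in [0.0, 1.0] supplied by the caller.
fn sample_param(value: f64, spread: f64, r: f64) -> f64 {
    let half_width = value.abs() * spread;
    (value - half_width) + 2.0 * half_width * r
}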