# NonConvex-RUST

Continuous non-convex optimizers for constrained and unconstrained maximization problems. These algorithms were implemented as a side project, but they may be useful to others and have been open-sourced.

Sources and links to further reading are in the respective algorithm `.md` files.
## Algorithms
## Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
nonconvex-opt = "0.1.2"
```
## Importing

```rust
use non_convex_opt::NonConvexOpt;
use non_convex_opt::utils::config::Config;
use non_convex_opt::utils::opt_prob::{ObjectiveFunction, BooleanConstraintFunction};

use nalgebra::{DVector, DMatrix}; // works with dynamically-sized types
use nalgebra::{SVector, SMatrix}; // works with statically-sized types
```

The library works with both statically-sized and dynamically-sized vectors. For dynamic examples, see `de_tests`, `mspo_tests`, `nm_tests`, `pt_tests`, or `solver_tests`; for static examples, see any other test.
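As a quick illustration of the two sizing modes (plain `nalgebra` usage, independent of this crate):

```rust
use nalgebra::{DVector, SVector};

fn main() {
    // Statically-sized: the dimension (2) is checked at compile time.
    let x_static: SVector<f64, 2> = SVector::from_row_slice(&[5.0, 5.0]);

    // Dynamically-sized: the dimension is chosen at runtime.
    let dim = 2;
    let x_dynamic: DVector<f64> = DVector::from_element(dim, 5.0);

    println!("static: {x_static} dynamic: {x_dynamic}");
}
```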
## Usage

```rust
use nalgebra::{allocator::Allocator, DefaultAllocator, SVector, U2};
use non_convex_opt::NonConvexOpt;
use non_convex_opt::utils::config::Config;
use non_convex_opt::utils::opt_prob::{ObjectiveFunction, BooleanConstraintFunction};

// Set up the objective and constraints.
#[derive(Clone)]
pub struct Kbf;

impl ObjectiveFunction<f64, U2> for Kbf
where
    DefaultAllocator: Allocator<U2>,
{
    fn f(&self, x: &SVector<f64, 2>) -> f64 {
        let sum_cos4: f64 = x.iter().map(|&xi| xi.cos().powi(4)).sum();
        let prod_cos2: f64 = x.iter().map(|&xi| xi.cos().powi(2)).product();
        let sum_ix2: f64 = x
            .iter()
            .enumerate()
            .map(|(i, &xi)| (i as f64 + 1.0) * xi * xi)
            .sum();
        (sum_cos4 - 2.0 * prod_cos2).abs() / sum_ix2.sqrt()
    }

    // Some algorithms require gradients. The analytic gradient of this
    // objective is unwieldy, so a central-difference approximation is
    // used here for brevity.
    fn gradient(&self, x: &SVector<f64, 2>) -> Option<SVector<f64, 2>> {
        let h = 1e-6;
        let mut grad = SVector::<f64, 2>::zeros();
        for i in 0..2 {
            let (mut xp, mut xm) = (*x, *x);
            xp[i] += h;
            xm[i] -= h;
            grad[i] = (self.f(&xp) - self.f(&xm)) / (2.0 * h);
        }
        Some(grad)
    }

    fn x_lower_bound(&self, _x: &SVector<f64, 2>) -> Option<SVector<f64, 2>> {
        Some(SVector::from_vec(vec![0.0, 0.0]))
    }

    fn x_upper_bound(&self, _x: &SVector<f64, 2>) -> Option<SVector<f64, 2>> {
        Some(SVector::from_vec(vec![10.0, 10.0]))
    }
}

#[derive(Debug, Clone)]
pub struct KbfConstraints;

impl BooleanConstraintFunction<f64, U2> for KbfConstraints
where
    DefaultAllocator: Allocator<U2>,
{
    fn g(&self, x: &SVector<f64, 2>) -> bool {
        let n = x.len();
        let product: f64 = x.iter().product();
        let sum: f64 = x.iter().sum();
        x.iter().all(|&xi| (0.0..=10.0).contains(&xi))
            && product > 0.75
            && sum < (15.0 * n as f64) / 2.0
    }
}

// Load a config embedded at compile time...
let config = Config::new(include_str!("config.json")).unwrap();

// ...or create a config from a JSON string.
let config_json = r#"{
    "opt_conf": {
        "max_iter": 1000,
        "rtol": "1e-6",
        "atol": "1e-6",
        "rtol_max_iter_fraction": 1.0
    },
    "alg_conf": {
        "CGA": {
            "common": {
                "num_parents": 2
            },
            "crossover": {
                "Heuristic": {
                    "crossover_prob": 0.8
                }
            },
            "selection": {
                "Tournament": {
                    "tournament_size": 2
                }
            },
            "mutation": {
                "NonUniform": {
                    "mutation_rate": 0.23,
                    "b": 5.0
                }
            }
        }
    }
}"#;
let config = Config::new(config_json).unwrap();

let obj_f = Kbf;
let constraints = KbfConstraints;

// Initial point (illustrative values; population-based algorithms such
// as CGA may expect a matrix of candidates instead).
let init_x = SVector::<f64, 2>::from_vec(vec![5.0, 5.0]);

// Constrained optimization
let mut opt = NonConvexOpt::new(
    config,
    init_x,            // Initial population - must be from nalgebra
    obj_f,             // Objective function
    Some(constraints), // Optional constraints
    42,                // Seed for rng
);

// Unconstrained optimization
let mut opt = NonConvexOpt::new(
    config,
    init_x,
    obj_f,
    None::<KbfConstraints>,
    42,
);

let result = opt.run();
```
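For reference, `Kbf` above implements Keane's bump function (maximized over $0 \le x_i \le 10$), and `KbfConstraints` encodes its standard feasibility region:

```math
f(\mathbf{x}) = \frac{\left|\sum_{i=1}^{n} \cos^4(x_i) - 2\prod_{i=1}^{n} \cos^2(x_i)\right|}{\sqrt{\sum_{i=1}^{n} i\,x_i^2}},
\qquad \text{s.t.} \quad \prod_{i=1}^{n} x_i > 0.75, \quad \sum_{i=1}^{n} x_i < \frac{15n}{2}.
```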
To see the differences between setting up unconstrained and constrained problems, refer to the `benches/` subdirectory; the `examples/` subdirectory gives more direction on using the library.

Example configs are provided in `tests/jsons/`. More information on each config can be found in the respective algorithm `.md` files (links above).
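A config can also be read at runtime rather than embedded with `include_str!`; a minimal sketch (the `cga.json` filename is hypothetical):

```rust
use non_convex_opt::utils::config::Config;

fn main() {
    // Hypothetical path: substitute any config from tests/jsons/.
    let json = std::fs::read_to_string("tests/jsons/cga.json")
        .expect("failed to read config file");
    let _config = Config::new(&json).unwrap();
}
```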
## Contributing

- Fork the repository
- Create a new branch: `git checkout -b my-feature`
- Make your changes
- Run tests: `cargo test`
- Run benchmarks: `cargo bench`
  - To view the results, run one of:
    - `open target/criterion/report/index.html` (macOS)
    - `xdg-open target/criterion/report/index.html` (Linux)
    - `start target/criterion/report/index.html` (Windows)
- Add sources and more information to the respective algorithm `.md` file, so that others can learn and share too!
- Commit and push
- Open a Pull Request
## License

This project is open-sourced under the MIT License.