# stochastic_optimizers
This crate provides implementations of common stochastic gradient optimization algorithms. They are designed to be lightweight, flexible and easy to use.
Currently implemented:
- Adam
- SGD
- AdaGrad
The crate does not provide automatic differentiation; the gradient must be supplied by the user.
## Examples
```rust
use stochastic_optimizers::{Adam, Optimizer};

// minimise the function (x - 4)^2
let start = -3.0;
let mut optimizer = Adam::new(start, 0.1);

for _ in 0..10000 {
    let current_parameter = optimizer.parameters();

    // d/dx (x - 4)^2 = 2x - 8
    let gradient = 2.0 * current_parameter - 8.0;

    optimizer.step(&gradient);
}

assert_eq!(optimizer.into_parameters(), 4.0);
```
The parameters are owned by the optimizer and a reference can be obtained via `parameters()`. After optimization they can be recovered with `into_parameters()`.
## What types can be optimized
All types which implement the `Parameters` trait can be optimized. Implementations are provided for the standard types `f32`, `f64`, `Vec<T>` and `[T; N]` with `T: Parameters`. It is relatively easy to implement the trait for custom types; see `Parameters`.
## ndarray
By enabling the `ndarray` feature you can use `Array` as `Parameters`.
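A minimal sketch, assuming the crate is built with the `ndarray` feature enabled, that `ndarray::Array1<f64>` then satisfies `Parameters`, and that the gradient is passed as an array of the same shape:
```rust
use ndarray::Array1;
use stochastic_optimizers::{Adam, Optimizer};

// Minimise || x - target ||^2 over an ndarray vector.
// Assumption: with the `ndarray` feature, Array1<f64> implements Parameters
// and `step()` accepts a gradient array of the same shape.
let target = Array1::from_vec(vec![1.0, -2.0, 3.0]);
let start: Array1<f64> = Array1::zeros(3);
let mut optimizer = Adam::new(start, 0.1);

for _ in 0..10000 {
    // gradient of || x - target ||^2 is 2 * (x - target)
    let gradient = (optimizer.parameters() - &target) * 2.0;
    optimizer.step(&gradient);
}

// Illustrative tolerance for this quadratic objective.
let result = optimizer.into_parameters();
assert!((&result - &target).iter().all(|d| d.abs() < 1e-6));
```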
## Unit tests
The unit tests require libtorch via the `tch` crate. See GitHub for installation details.
## License
Licensed under either of
- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
at your option.
## Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.