### 50 releases (18 breaking)

0.19.4 | Oct 23, 2023
---|---
0.18.2 | Oct 19, 2023

**#151** in Machine learning

**427** downloads per month

**MIT** license

28KB · 525 lines

# QOpt

A simple optimization package.

## Optimization Paradigms

The latest version of QOpt supports the following paradigms.

- Steepest Descent (Gradient Descent)
- Newton's Method
- Genetic Optimization
- Simulated Annealing
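To make the first paradigm concrete, here is a minimal, self-contained sketch of a steepest-descent loop on a simple quadratic. This is plain Rust, independent of the QOpt API; the fixed step size and iteration count are illustrative choices, not crate defaults.

```rust
/// Minimal steepest-descent loop on f(x, y) = x^2 + y^2,
/// whose gradient is (2x, 2y). Independent of the QOpt API.
fn main() {
    let mut x = [3.0_f64, -4.0];
    let step = 0.1; // fixed step size, for illustration only

    for _ in 0..100 {
        // Move against the gradient to decrease the objective.
        let grad = [2.0 * x[0], 2.0 * x[1]];
        x[0] -= step * grad[0];
        x[1] -= step * grad[1];
    }

    // x is now very close to the minimizer (0, 0).
    println!("{:?}", x);
}
```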

# Getting Started

## Importing `maria-linalg`

You must import the latest version of the `maria-linalg` Rust crate in order to use this package.

## Creating an Objective Function

First, you must define an objective function `struct`. This represents a function that accepts an `N`-dimensional vector and outputs a scalar.

`Optimizer` accepts up to three functions.

- `Function::objective` (*required*). Accepts continuous and discrete input. Evaluates to the function output (`f64`).
- `Function::gradient` (*optional*). Accepts continuous input. Evaluates to the function gradient (`Vector<N>`).
- `Function::hessian` (*optional*). Accepts continuous input. Evaluates to the function Hessian (`Matrix<N>`).

See the example below. Note that you must also import `maria_linalg::Vector` and (only if you implement the Hessian) `maria_linalg::Matrix`.

```rust
use qopt::Function;
use maria_linalg::{Matrix, Vector};

/// Number of continuous variables.
const C: usize = 6;

/// Number of discrete variables.
const D: usize = 0;

fn objective(&self, continuous: Vector<C>, discrete: Vector<D>) -> f64 {
    // Required
}

fn gradient(&self, continuous: Vector<C>) -> Vector<C> {
    // Optional
}

fn hessian(&self, continuous: Vector<C>) -> Matrix<C> {
    // Optional
}
```
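To illustrate what the three callbacks might compute, here is a hypothetical objective with two continuous variables, together with its analytic gradient and Hessian. Plain arrays stand in for the `maria-linalg` `Vector`/`Matrix` types, so this sketch compiles on its own; the function itself is invented for the example.

```rust
/// f(x, y) = (x - 1)^2 + 2 * (y + 3)^2, minimized at (1, -3).
/// Plain arrays stand in for maria-linalg's Vector/Matrix types.
fn objective(continuous: [f64; 2]) -> f64 {
    (continuous[0] - 1.0).powi(2) + 2.0 * (continuous[1] + 3.0).powi(2)
}

/// Analytic gradient: (2(x - 1), 4(y + 3)).
fn gradient(continuous: [f64; 2]) -> [f64; 2] {
    [2.0 * (continuous[0] - 1.0), 4.0 * (continuous[1] + 3.0)]
}

/// Analytic Hessian: the constant diagonal matrix diag(2, 4).
fn hessian(_continuous: [f64; 2]) -> [[f64; 2]; 2] {
    [[2.0, 0.0], [0.0, 4.0]]
}

fn main() {
    // Both the objective and the gradient vanish at the minimizer.
    println!("{}", objective([1.0, -3.0]));
    println!("{:?}", gradient([1.0, -3.0]));
}
```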

## Creating an `Optimizer`

Once you have an `objective` function, you can create your `Optimizer`.

```rust
use qopt::Optimizer;

/// Number of individuals per optimization iteration.
///
/// For deterministic methods (gradient descent or Newton's method), this should be 1.
/// For stochastic methods (genetic optimization or simulated annealing), this should be about 100.
const POPULATION: usize = 100;

fn main() {
    let f = MyFunction::new();

    let optimizer: Optimizer<C, D, POPULATION> = Optimizer::new(objective, Some(gradient), Some(hessian));

    // An initial guess for our continuous variables
    let c = Vector::zero();

    // An initial guess for our discrete variables
    let d = Vector::zero();

    let output = optimizer.optimize(c, d, &[]);

    println!("{}", output);
}
```

## Running the Optimizer

You are now ready to run the optimizer using command-line arguments.

The structure for a command to execute the optimizer is as follows.

`$ cargo run --release --quiet -- [--flag] [--parameter value]`

Alternatively, if you have written a binary, you may run the binary according to the same rules. Suppose the binary is named `myoptimizer`.

`$ myoptimizer [--flag] [--parameter value]`

### Command-Line Arguments

The following are permitted command-line arguments and values. Note that all arguments are optional.

`--opt-help`

Prints a help menu.

`--quiet`

Does not print status updates.

`--no-stop-early`

Disables the gradient-based convergence criterion.

`--print-every [integer]`

Number of iterations per status update.

Defaults to `0`. This is the "quiet" option: no status will be printed until the optimizer converges or the maximum iteration limit is reached.

Accepts an integer. For example, if this integer is `5`, then the optimizer prints a status update every fifth iteration.

`--paradigm [string]`

Optimization paradigm.

Defaults to `steepest-descent`.

Accepts the following options.

- `steepest-descent`. Steepest (gradient) descent. It is recommended (but not required) to implement `Function::gradient` for this.
- `newton`. Newton's method. It is recommended (but not required) to implement `Function::gradient` and `Function::hessian` for this.
- `genetic`. Genetic algorithm.
- `simulated-annealing`. Simulated annealing.

`--criterion [float]`

Gradient-based convergence criterion. When the norm of the gradient is less than this value, the optimizer halts. Note that this requires a locally convex function.

Defaults to `1.0e-3`.

Accepts a floating-point number.
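The convergence test can be sketched as a gradient-norm check. This is plain Rust for illustration; only the `1.0e-3` default comes from the crate, and the Euclidean norm is an assumption about which norm QOpt uses.

```rust
/// Returns true when the Euclidean norm of the gradient falls
/// below the convergence criterion.
fn converged(gradient: &[f64], criterion: f64) -> bool {
    let norm = gradient.iter().map(|g| g * g).sum::<f64>().sqrt();
    norm < criterion
}

fn main() {
    // Near a local minimum the gradient is tiny, so the optimizer halts.
    println!("{}", converged(&[1e-4, -2e-4], 1.0e-3));
    // Far from a minimum the gradient is large, so iteration continues.
    println!("{}", converged(&[0.5, 0.1], 1.0e-3));
}
```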

`--maxiter [integer]`

Maximum number of optimization iterations.

Defaults to `100`.

Accepts an integer.

`--maxtemp [float]`

Maximum temperature. This is only used for the simulated annealing paradigm.

Defaults to `1.0`.

Accepts a floating-point number.
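For context, temperature governs how willing simulated annealing is to accept a step that worsens the objective. A minimal sketch of the standard Metropolis acceptance rule is below; QOpt's actual cooling schedule is not documented here, so the linear decay from `maxtemp` is an assumption.

```rust
/// Probability of accepting a candidate that changes the objective
/// by `delta` at temperature `temp` (Metropolis rule).
fn acceptance_probability(delta: f64, temp: f64) -> f64 {
    if delta <= 0.0 {
        1.0 // improvements are always accepted
    } else {
        (-delta / temp).exp() // worsening moves accepted with decaying odds
    }
}

fn main() {
    let maxtemp = 1.0;
    let maxiter = 100;
    for iter in [0, 50, 99] {
        // Assumed linear cooling from maxtemp toward zero.
        let temp = maxtemp * (1.0 - iter as f64 / maxiter as f64);
        println!("iter {iter}: p = {:.3}", acceptance_probability(0.5, temp));
    }
}
```

As the temperature falls, the same worsening step becomes less and less likely to be accepted, which is what lets the algorithm settle into a minimum.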

`--stdev [float]`

Standard deviation of mutations. This is only used for stochastic methods (genetic optimization and simulated annealing).

Defaults to `1.0`.

Accepts a floating-point number.
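A mutation with this standard deviation amounts to adding zero-mean Gaussian noise to each continuous variable. The sketch below uses a Box-Muller transform over a tiny deterministic generator so it needs no external crates; QOpt's actual random number source is an implementation detail, and everything here besides the role of `stdev` is assumed.

```rust
/// Tiny deterministic linear congruential generator, for illustration only.
fn next_uniform(state: &mut u64) -> f64 {
    *state = state
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    // Map the top 53 bits into the open interval (0, 1).
    (((*state >> 11) as f64) + 1.0) / ((1u64 << 53) as f64 + 2.0)
}

/// One standard-normal sample via the Box-Muller transform.
fn next_gaussian(state: &mut u64) -> f64 {
    let u1 = next_uniform(state);
    let u2 = next_uniform(state);
    (-2.0 * u1.ln()).sqrt() * (std::f64::consts::TAU * u2).cos()
}

/// Mutate each variable with zero-mean noise of the given stdev.
fn mutate(x: &mut [f64], stdev: f64, state: &mut u64) {
    for xi in x.iter_mut() {
        *xi += stdev * next_gaussian(state);
    }
}

fn main() {
    let mut state = 42;
    let mut x = [0.0; 4];
    mutate(&mut x, 1.0, &mut state);
    println!("{x:?}");
}
```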

#### Dependencies

~1MB · ~21K SLoC