Add `fann-sys` and `libc` to the list of dependencies in your `Cargo.toml`:
```toml
[dependencies]
fann-sys = "*"
libc = "*"
```
and this to your crate root:
```rust
extern crate fann_sys;
extern crate libc;
```
Raw bindings to C functions of the Fast Artificial Neural Network library
The FANN library is designed to be very easy to use.
A feedforward ANN can be created by a simple call to the
`fann_create_standard` function, while
other ANNs can be created just as easily. The ANNs can be trained by
`fann_train_on_file` and executed by `fann_run`.
All of this can be done without much knowledge of the internals of ANNs, although the ANNs created will still be powerful and effective. If you have more knowledge about ANNs, and desire more control, almost every part of the ANNs can be parametrized to create specialized and highly optimal ANNs.
There are many different ways of training neural networks and the FANN library supports a number of different approaches.
Two fundamentally different approaches are the most commonly used:
Fixed topology training - The size and topology of the ANN are determined in advance, and the training alters the weights in order to minimize the difference between the desired output values and the actual output values. This kind of training is supported by `fann_train_on_data`.
Evolving topology training - The training starts out with an empty ANN, consisting only of input and output neurons. Hidden neurons and connections are added during training, in order to achieve the same goal as for fixed topology training. This kind of training is supported by FANN Cascade Training.
Cascade training differs from ordinary training in the sense that it starts with an empty neural network and then adds neurons one by one, while it trains the neural network. The main benefit of this approach is that you do not have to guess the number of hidden layers and neurons prior to training, but cascade training has also proved better at solving some problems.
The basic idea of cascade training is that a number of candidate neurons are trained separately from the real network, then the most promising of these candidate neurons is inserted into the neural network. Then the output connections are trained and new candidate neurons are prepared. The candidate neurons are created as shortcut connected neurons in a new hidden layer, which means that the final neural network will consist of a number of hidden layers with one shortcut connected neuron in each.
It is possible to save an entire ANN to a file with
`fann_save` for future loading with `fann_create_from_file`.
Errors from the FANN library are usually reported on stderr.
It is however possible to redirect these error messages to a file,
or completely ignore them, with the `fann_set_error_log` function.
It is also possible to inspect the last error message by using the `fann_get_errno` and `fann_get_errstr` functions.
The two main datatypes used in the FANN library are
`fann`, which represents an artificial neural network, and
`fann_train_data`, which represents training data.