Constant Index

Standard linear error function.

Tanh error function; usually better, but it can require a lower learning rate.

Constant array consisting of the names of the activation functions, so that the name of an activation function can be obtained by indexing the array with its enum value.

Periodic cosine activation function.

Periodic cosine activation function.

Periodic cosine activation function.

Unable to allocate memory

Unable to open configuration file for reading

Unable to open configuration file for writing

Unable to open train data file for reading

Unable to open train data file for writing

Error reading info from configuration file

Error reading connections from configuration file

Error reading neuron info from configuration file

Error reading training data from file

Unable to train with the selected activation function

Unable to use the selected activation function

Unable to use the selected training algorithm

Index is out of bounds

The number of input neurons in the ann does not match the number in the data

No error

The number of output neurons in the ann does not match the number in the data

Scaling parameters not present

Irreconcilable differences between two struct fann_train_data structures

Trying to take a subset that is not within the training set

Wrong version of configuration file

Number of connections not equal to the number expected

Fast (sigmoid-like) activation function defined by David Elliott.

Fast (sigmoid-like) activation function defined by David Elliott.

Fast (symmetric sigmoid-like) activation function defined by David Elliott.

Fast (symmetric sigmoid-like) activation function defined by David Elliott.

Standard linear error function.

Constant array consisting of the names of the training error functions, so that the name of an error function can be obtained by indexing the array with its enum value.

Tanh error function; usually better, but it can require a lower learning rate.

Gaussian activation function.

Gaussian activation function.

Symmetric Gaussian activation function.

Symmetric Gaussian activation function.

Linear activation function.

Linear activation function.

Bounded linear activation function.

Bounded linear activation function.

Bounded linear activation function.

Bounded linear activation function.

Each layer only has connections to the next layer

Each layer has connections to all following layers

Constant array consisting of the names of the network types, so that the name of a network type can be obtained by indexing the array with its enum value.

Sigmoid activation function.

Sigmoid activation function.

Stepwise linear approximation to sigmoid.

Stepwise linear approximation to sigmoid.

Symmetric sigmoid activation function, also known as tanh.

Symmetric sigmoid activation function, also known as tanh.

Periodic sine activation function.

Periodic sine activation function.

Periodic sine activation function.

Stop criterion is the number of bits that fail.

Stop criterion is the Mean Square Error (MSE) value.

Constant array consisting of the names of the training stop functions, so that the name of a stop function can be obtained by indexing the array with its enum value.

Threshold activation function.

Threshold activation function.

Threshold activation function.

Threshold activation function.

Standard backpropagation algorithm, where the weights are updated after calculating the mean square error for the whole training set.

Standard backpropagation algorithm, where the weights are updated after each training pattern.

Constant array consisting of the names of the training algorithms, so that the name of a training algorithm can be obtained by indexing the array with its enum value.

A more advanced batch training algorithm which achieves good results for many problems.

A more advanced batch training algorithm which achieves good results for many problems.

Each layer only has connections to the next layer

Each layer has connections to all following layers

Stop criterion is the number of bits that fail.

Stop criterion is the Mean Square Error (MSE) value.

Standard backpropagation algorithm, where the weights are updated after calculating the mean square error for the whole training set.

Standard backpropagation algorithm, where the weights are updated after each training pattern.

A more advanced batch training algorithm which achieves good results for many problems.

A more advanced batch training algorithm which achieves good results for many problems.