Simple Artificial Neural Networks with FANN and C++

Recently I’ve been investigating using Artificial Neural Networks to solve a classification problem in my Master’s work. In this post I’ll share some of what I’ve learned with a few simple examples.

An Artificial Neural Network (ANN) is a simplified emulation of one part of our brains. Specifically, it simulates the activity of neurons within the brain. Even though this technique falls under the field of Artificial Intelligence, it is so simple by itself as to be almost unrelated to any form of actual self-aware intelligence. That’s not to say it can’t be useful.

First, a quick refresher on how ANNs work.

Each input value fed into a neuron is multiplied by a weight specific to that input source. The neuron then sums all of these input * weight products. The sum is fed through an activation function (typically the Sigmoid function), which determines the output value of the neuron, between 0 and 1 or between -1 and 1. Neurons are arranged in a layered network, where the output from a given neuron is connected to one or more nodes in the next layer. This will be described in more detail with the first simple example. ANNs are trained by feeding data through the network and observing the error, then adjusting the weights throughout the network based on the output error.
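To make the arithmetic concrete, here is a minimal sketch of a single neuron in plain C++ (no FANN involved); the input, weight and bias values are arbitrary and chosen purely for illustration:

#include <array>
#include <cmath>
#include <cstddef>
#include <iostream>

// Sigmoid activation: squashes any real number into the range (0, 1).
double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

int main() {
    // Two inputs and their associated weights (arbitrary example values).
    const std::array<double, 2> inputs  = {0.0, 1.0};
    const std::array<double, 2> weights = {0.8, -0.4};
    const double bias = 0.1;

    // Weighted sum of the inputs, then pass the sum through the activation function.
    double sum = bias;
    for (std::size_t i = 0; i < inputs.size(); ++i)
        sum += inputs[i] * weights[i];

    std::cout << "neuron output: " << sigmoid(sum) << std::endl;
}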

So why FANN? There are certainly plenty of choices out there when it comes to creating ANNs. FANN is one of the most prevalent libraries for building practical neural network applications. It is written in C but provides an easy-to-use C++ interface, among many others. Despite its reasonably friendly interface (the C++ interface could benefit from being more idiomatic), FANN still delivers good performance in both training and execution. Mostly I’m using FANN because of its maturity and ubiquity, which usually results in better documentation (whether first or third party) and better long-term support.

While using the latest stable version of FANN (2.2.0) for the first example in this post, I ran into a bug in the C++ interface for the create_standard method of the neural_net object. This bug has persisted for about 6 years, and could have been around since the C++ interface was first introduced back in 2007. The last stable release (2.2.0) of FANN was in 2012, now over 4 years ago, and there was a 5 year gap before that release too. The latest git snapshot seems to improve the ergonomics of the C++ interface a little, and includes unit tests. To install on a Linux-based environment, simply run the following commands (requires Git and CMake):

git clone https://github.com/libfann/fann.git
cd fann
mkdir build && cd build
cmake .. && make && sudo make install

Another issue that might trip new FANN users up is include files and linking. FANN uses different include files to switch between different underlying neural network data types. Including fann.h or floatfann.h will cause FANN to use the float data type internally for network calculations, and should be linked with -lfann or -lfloatfann respectively. Likewise, doublefann.h and -ldoublefann will cause FANN to use the double type internally. Finally, as a band-aid for environments that cannot use floating point, including fixedfann.h and linking with -lfixedfann will allow the execution of neural networks using the int data type (this cannot be used for training). The header included dictates the underlying type of fann_type. In order to use the C++ interface you must include one of the above C header files in addition to fann_cpp.h.
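As a quick illustration of the convention, here is a minimal sketch assuming you want the double-precision backend: including doublefann.h makes fann_type an alias for double, and the program is linked with -ldoublefann.

// Double-precision variant: fann_type is double in this translation unit.
// Build with something like: g++ -std=c++14 -o example example.cpp -ldoublefann
#include <doublefann.h>
#include <fann_cpp.h>
#include <iostream>

int main() {
    fann_type x = 0.5;
    std::cout << "sizeof(fann_type) = " << sizeof(x) << std::endl; // prints 8 with doublefann
}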

The most basic example for using a neural network is emulating basic boolean logic patterns. I’ll take the XOR example from FANN’s Getting Started guide and modify it slightly so that it uses FANN’s C++ interface instead of the C one. The XOR problem is an interesting one for neural networks because it is not linearly separable, and so cannot be solved with a single-layer perceptron.

The training code, train.cpp, which generates the neural network weights:

#include <array>
#include <fann.h>
#include <fann_cpp.h>

using uint = unsigned int;

int main() {
    // Neural network parameters
    constexpr uint num_inputs = 2;
    constexpr uint num_outputs = 1;
    constexpr uint num_layers = 3;
    constexpr uint num_neurons_hidden = 3;
    constexpr float desired_error = 0.0001;
    constexpr uint max_epochs = 500000;
    constexpr uint epochs_between_reports = 1000;
    // Create the network: input, hidden and output layer sizes
    const std::array<uint, num_layers> layers = {num_inputs, num_neurons_hidden, num_outputs};
    FANN::neural_net net(FANN::LAYER, num_layers, layers.data());
    net.set_activation_function_hidden(FANN::SIGMOID_STEPWISE);
    net.set_activation_function_output(FANN::SIGMOID_STEPWISE);
    // Train against the XOR data and save the resulting weights
    net.train_on_file("xor.data", max_epochs, epochs_between_reports, desired_error);
    net.save("xor_float.net");
}

There aren’t too many changes from the original example here. I’ve defined an alias for unsigned int to save some typing, changed the const variables to constexpr, and moved the neuron counts for each layer into an array instead of passing them directly. One significant change I did make was to switch the activation function from FANN::SIGMOID_SYMMETRIC to FANN::SIGMOID_STEPWISE. The symmetric function produces output between -1 and 1, while the non-symmetric Sigmoids produce an output between 0 and 1. The stepwise qualifier on the activation function I have used indicates that it is a piecewise approximation of the Sigmoid function, so some accuracy is sacrificed for a gain in calculation speed. As we are dealing with discrete values at opposite ends of the scale, accuracy isn’t much of a concern. In reality, for this example there is no difference between using FANN::SIGMOID_SYMMETRIC or FANN::SIGMOID_STEPWISE, but there are applications where the activation function does affect the output. I encourage you to experiment with changing the activation function and observing the effect.
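If you want to try this yourself, switching train.cpp to the symmetric sigmoid is a two-line change (note that FANN’s original XOR example pairs the symmetric sigmoid with target values of -1 and 1):

// Symmetric sigmoid: neuron outputs lie between -1 and 1 instead of 0 and 1.
net.set_activation_function_hidden(FANN::SIGMOID_SYMMETRIC);
net.set_activation_function_output(FANN::SIGMOID_SYMMETRIC);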

The network parameters in this training program describe a multi-layer ANN with an input layer, one hidden layer and an output layer. The input layer has two neurons, the hidden layer has three, and the output layer has one. The input and output layers obviously correspond to the desired number of inputs and outputs, but how are the number of hidden neurons and hidden layers chosen? Even with all of the research conducted on ANNs, this part is still largely driven by experimentation and experience. In general, most problems won’t require more than one hidden layer. The number of neurons has to be tweaked based on your problem: too few and you will probably see poor fit or poor generalization; too many is mostly a non-issue apart from driving up training and computation times.

One optimization we can make to this network is to use the FANN::SHORTCUT network type instead of FANN::LAYER. In a standard multi-layer perceptron, all of the neurons in each layer are connected to all of the neurons in the next layer. With the SHORTCUT network type, each node in the network is connected to all nodes in all subsequent layers. In some cases (such as this one) shortcut connectivity can reduce the number of neurons required, because some layered neurons act only as pass-through neurons for subsequent layers. We can change the network type to FANN::SHORTCUT and reduce the number of hidden nodes to 1, so that the two inputs feed both the single hidden neuron and the output directly.

Fundamentally, this network produces exactly the same output as the layered network, but with fewer neurons.
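As a sketch, the corresponding change to train.cpp would look something like this (using the same constants as before):

// Shortcut connectivity: every node also connects directly to all later layers,
// so a single hidden neuron is enough for XOR.
constexpr uint num_neurons_hidden = 1;
const std::array<uint, num_layers> layers = {num_inputs, num_neurons_hidden, num_outputs};
FANN::neural_net net(FANN::SHORTCUT, num_layers, layers.data());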

The input data, xor.data:

4 2 1
0 0
0
0 1
1
1 0
1
1 1
0

Note the input data format. The first line gives the number of input/output pairs in the file, the number of inputs to the network, and the number of outputs. Following that are the training cases, with alternating input and output lines. Values on each line are space-separated. I’ve changed the data from the original example to be based on logic levels of 0 and 1 instead of -1 and 1.
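As an aside, train_on_file is not the only way to consume this file. A sketch of the alternative, assuming the FANN::training_data class from the C++ wrapper, would replace the train_on_file call in train.cpp with something like:

// Load the training file explicitly, which also allows shuffling or inspecting it.
FANN::training_data data;
if (data.read_train_from_file("xor.data")) {
    data.shuffle_train_data(); // optional: randomise the presentation order
    net.train_on_data(data, max_epochs, epochs_between_reports, desired_error);
}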

Finally, running the network with run.cpp:

#include <cstdlib>
#include <array>
#include <iostream>
#include <fann.h>
#include <fann_cpp.h>

int main(int argc, char const **argv) {
    // Parse command line input for values
    std::array<fann_type, 2> input{0.f, 1.f};
    if (argc > 2) {
        std::cout << "Got input parameters: ";
        for (int i = 1; i < 3; ++i) {
            input[i-1] = std::atof(argv[i]);
            std::cout << input[i-1] << " ";
        }
        std::cout << std::endl;
    } else {
        std::cout << "Using default input values: 0 1" << std::endl;
    }
    // Run the input against the neural network
    FANN::neural_net net("xor_float.net");
    fann_type* output = net.run(input.data());
    if (output != nullptr)
        std::cout << "output: " << *output << std::endl;
    else
        std::cout << "error, no output." << std::endl;
    return 0;
}

The first part of the main function is just parsing the command-line arguments for input values, so the network can be tested against multiple inputs without having to recompile the program. The second part of the program has been translated into more idiomatic C++ and updated to use the new and improved C++ API from the in-development FANN version (tentatively labeled 2.3.0). The neural network is loaded from the file produced by the training program, and executed against the input.

Note that in order to run this code you will need to download and install the latest development version of FANN from the project’s GitHub repository.

I created a simple script to compile the code, run the training, and test the network.

g++ -std=c++14 -Wall -Wextra -pedantic -o train train.cpp -lfann
g++ -std=c++14 -Wall -Wextra -pedantic -o run run.cpp -lfann
./train
./run 0 0
./run 1 1
./run 0 1
./run 1 0

Training Output:

Max epochs 500000. Desired error: 0.0001000000.
Epochs 1. Current error: 0.2512120306. Bit fail 4.
Epochs 168. Current error: 0.0000802190. Bit fail 0.

Running Output:

Got input parameters: 0 0
output: 0.00792649
Got input parameters: 1 1
output: 0.0101204
Got input parameters: 0 1
output: 0.993475
Got input parameters: 1 0
output: 0.990801

The outputs aren’t exactly 1 or 0, but that’s part of the nature of Artificial Neural Networks and the Sigmoid activation function. ANNs approximate the appropriate output response based on inputs and their training. If they are over-trained they will produce extremely accurate output values for data that they were trained against, but such over-fitted networks will be completely useless for generalizing in response to input data that the network was not trained against. Generalization is a key reason for using an ANN instead of a fixed function or heuristic algorithm, so over-fitting is something we want to avoid. This property of ANN output is also useful for obtaining a measure of confidence in results produced. In this case we can threshold the outputs of the neural network at 0.5 to obtain a discrete 0 or 1 logic level output. From the results we can see that the network does indeed mimic a logical XOR function.
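In run.cpp that thresholding would be a small addition, sketched here:

// Threshold the raw network output at 0.5 to recover a discrete logic level.
const int logic_level = (*output >= 0.5f) ? 1 : 0;
std::cout << "thresholded output: " << logic_level << std::endl;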

How about another slightly more complex example? One of my favorite small data sets to play around with is the Iris data set from the UCI Machine Learning Repository. I modified the code I used for the XOR example above to increase the number of inputs, outputs and hidden neurons. The data set includes 4 inputs: sepal length, sepal width, petal length and petal width. The output is the class of the iris, which I have encoded as a 1.0 on one of the three outputs (one output per class). The number of hidden neurons was increased through experimentation; 10 seemed like a reasonable starting point.
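For reference, this is roughly how the class labels from the raw UCI file map onto the three outputs; encode_class is a hypothetical helper used only to illustrate the one-hot encoding, not part of my training pipeline:

#include <array>
#include <iostream>
#include <string>

// Hypothetical helper: map a UCI iris class label to a one-hot vector
// across the three network outputs (setosa, versicolor, virginica).
std::array<float, 3> encode_class(const std::string& label) {
    if (label == "Iris-setosa")     return {1.f, 0.f, 0.f};
    if (label == "Iris-versicolor") return {0.f, 1.f, 0.f};
    return {0.f, 0.f, 1.f}; // Iris-virginica
}

int main() {
    const auto encoded = encode_class("Iris-versicolor");
    std::cout << encoded[0] << " " << encoded[1] << " " << encoded[2] << std::endl; // prints: 0 1 0
}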

The training code:

#include <array>
#include <fann.h>
#include <fann_cpp.h>

using uint = unsigned int;

int main() {
    // Neural network parameters
    constexpr uint num_inputs = 4;
    constexpr uint num_outputs = 3;
    constexpr uint num_layers = 3;
    constexpr uint num_neurons_hidden = 10;
    constexpr float desired_error = 0.01;
    constexpr uint max_epochs = 500000;
    constexpr uint epochs_between_reports = 1000;
    // Create the network: input, hidden and output layer sizes
    const std::array<uint, num_layers> layers = {num_inputs, num_neurons_hidden, num_outputs};
    FANN::neural_net net(FANN::LAYER, num_layers, layers.data());
    net.set_activation_function_hidden(FANN::SIGMOID);
    net.set_activation_function_output(FANN::SIGMOID);
    // Train against the iris data and save the resulting weights
    net.train_on_file("iris.data", max_epochs, epochs_between_reports, desired_error);
    net.save("iris_float.net");
}

The training data:

Download the training data here

And to run the resulting network:

#include <cstdlib>
#include <cmath>
#include <array>
#include <vector>
#include <iostream>
#include <fstream>
#include <cstdint>
#include <iterator>
#include <algorithm>
#include <cassert>
#include <fann.h>
#include <fann_cpp.h>

int main() {
    // Load neural network from file created by train.cpp
    FANN::neural_net net("iris_float.net");
    // Load test values from file
    std::ifstream test_file("iris.data.test");
    uint32_t test_count = 0, input_count = 0, output_count = 0;
    test_file >> test_count >> input_count >> output_count;
    std::vector<std::array<float, 4>> input;
    input.resize(test_count);
    assert(input_count == 4);
    std::vector<std::array<float, 3>> expected_output;
    expected_output.resize(test_count);
    assert(output_count == 3);
    auto input_it = input.begin();
    auto expected_output_it = expected_output.begin();
    while (test_file.good() && input_it != input.end() && expected_output_it != expected_output.end()) {
        std::copy_n(std::istream_iterator<float>(test_file), input_count, input_it->begin());
        std::copy_n(std::istream_iterator<float>(test_file), output_count, expected_output_it->begin());
        ++input_it;
        ++expected_output_it;
    }
    // Run the input against the neural network
    uint32_t pass_count = 0, fail_count = 0;
    for (uint32_t i = 0; i < test_count; ++i) {
        fann_type* output = net.run(input[i].data());
        if (output != nullptr) {
            std::cout << "-- test " << i << " --" << std::endl;
            std::cout << "output: " << output[0] << " " << output[1] << " " << output[2] << std::endl;
            std::cout << "expected output: " << expected_output[i][0] << " " << expected_output[i][1] << " " << expected_output[i][2] << std::endl;
            // Round each output to 0 or 1 and compare with the expected class encoding
            if (std::round(output[0]) == expected_output[i][0] &&
                std::round(output[1]) == expected_output[i][1] &&
                std::round(output[2]) == expected_output[i][2]) {
                ++pass_count;
            } else {
                ++fail_count;
            }
        } else {
            std::cout << "error, no output." << std::endl;
            ++fail_count;
        }
    }
    std::cout << "-----------------------------------------" << std::endl
              << "passed: " << pass_count << std::endl
              << "failed: " << fail_count << std::endl
              << "total: " << pass_count + fail_count << std::endl
              << "-----------------------------------------" << std::endl;
    return 0;
}

The training output:

Max epochs 500000. Desired error: 0.0099999998.
Epochs 1. Current error: 0.2574429810. Bit fail 423.
Epochs 140. Current error: 0.0099833719. Bit fail 8.

In order to demonstrate the generalization capacity of the network, I took 9 random input/output pairs from the training data set and put them into a second file, iris.data.test (removing them from the iris.data training file). The program in run.cpp loads this data and runs the network against it, so bear in mind that the results below are against data the network never saw during training. The test data in iris.data.test follows the same format as the training file:

9 4 3
4.4 2.9 1.4 0.2
1 0 0
5.1 3.3 1.7 0.5
1 0 0
5.2 4.1 1.5 0.1
1 0 0
5.6 2.9 3.6 1.3
0 1 0
6.7 3.0 5.0 1.7
0 1 0
6.4 2.9 4.3 1.3
0 1 0
7.1 3.0 5.9 2.1
0 0 1
7.7 2.6 6.9 2.3
0 0 1
6.0 2.2 5.0 1.5
0 0 1

The output from running the network against the test data:

-- test 0 --
output: 1 0 0
expected output: 1 0 0
-- test 1 --
output: 1 5.43803e-08 7.23436e-34
expected output: 1 0 0
-- test 2 --
output: 1 0 0
expected output: 1 0 0
-- test 3 --
output: 4.68331e-10 0.999837 0.000162127
expected output: 0 1 0
-- test 4 --
output: 8.09432e-13 0.707596 0.32776
expected output: 0 1 0
-- test 5 --
output: 8.26562e-11 0.999368 0.000931072
expected output: 0 1 0
-- test 6 --
output: 1.56675e-14 0.00915556 0.990563
expected output: 0 0 1
-- test 7 --
output: 1.61667e-15 0.000615397 0.999422
expected output: 0 0 1
-- test 8 --
output: 1.40737e-13 0.186112 0.801713
expected output: 0 0 1
-----------------------------------------
passed: 9
failed: 0
total: 9
-----------------------------------------

In order to categorize the correctness of the output I rounded each output and compared it with the expected output for that input. Overall, the network correctly classified 9 of 9 random samples that it had never seen before. This is excellent generalization performance for a first attempt at such a problem, and with such a small training set. However, there is still room for improvement in the network; I did observe the ANN converging on a suboptimal solution a couple of times, where it would only correctly classify about 30% of the input data and always produce 1.0 on the first output, no matter what the input. When I ran the best network against the training set, it produced a misclassification rate of about 1-2% (2/141 was the lowest error rate I observed). This is a more realistic error rate than the 0% error rate on the small test data set.

Improving the convergence and error rates could be achieved by having more training data, adjusting the topology of the network (number of nodes/layers and connectivity), or changing the training parameters (such as reducing the learning rate). One facility offered by FANN which can make this process a little less experimental is cascade training, which starts with a bare perceptron and gradually adds neurons to the network to improve its performance, potentially arriving at a more optimal solution.
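As a sketch of what that might look like with the C++ wrapper (the network starts as a bare shortcut net with no hidden neurons, and the parameter values below are placeholders rather than tuned settings):

#include <array>
#include <fann.h>
#include <fann_cpp.h>

using uint = unsigned int;

int main() {
    // Cascade training starts from a minimal shortcut network with no hidden
    // neurons and adds them one at a time as training progresses.
    constexpr uint num_inputs = 4;
    constexpr uint num_outputs = 3;
    const std::array<uint, 2> layers = {num_inputs, num_outputs};
    FANN::neural_net net(FANN::SHORTCUT, 2, layers.data());

    constexpr uint max_neurons = 30;              // placeholder values
    constexpr uint neurons_between_reports = 1;
    constexpr float desired_error = 0.01f;
    net.cascadetrain_on_file("iris.data", max_neurons, neurons_between_reports, desired_error);
    net.save("iris_cascade.net");
}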

With some experimentation I was able to remove 2 of the nodes from the network without visibly affecting its performance. Removing more nodes did increase the error slightly, and removing 5 nodes caused the network to sometimes fail completely to converge on a solution. I also experimented with reducing the desired error for training to 0.0001. This caused the network to become over-fitted: it would produce perfect results against the training data set (100% accuracy) but didn’t generalize as well for the data it hadn’t seen.

I found Artificial Neural Networks to be fairly easy to implement with FANN, and I have been very impressed by the performance obtained with minimal investment in network design. There are avenues to pursue in the future to further increase performance, but for most classification applications an error rate of 1-2% is very good.
