For my voice AI project, I've been looking at genetic algorithms and neural nets. I wrote a gate-array learner and built a truth table with 4 inputs and 2 outputs. I knew ahead of time that two XOR gates, wired to inputs 1,2 and 3,4 respectively, would fit the space perfectly.
I then wrote a genetic algorithm to search this space, and counted how many candidate solutions it tried before succeeding.
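This isn't my actual code, but a minimal sketch of the idea: each genome assigns one gate (a gate type plus two input indices) to each of the two outputs, fitness counts correct bits over the full 16-row truth table, and the population evolves by truncation selection and mutation. The gate set and population parameters here are assumptions, not the original experiment's settings.

```python
import random
from itertools import product

# Boolean gates a genome can choose from (an assumed gate set)
GATES = {
    "AND":  lambda x, y: x & y,
    "OR":   lambda x, y: x | y,
    "XOR":  lambda x, y: x ^ y,
    "NAND": lambda x, y: 1 - (x & y),
}

# Full truth table: 4 binary inputs, 2 target outputs
# (out1 = in1 XOR in2, out2 = in3 XOR in4)
TABLE = [(bits, (bits[0] ^ bits[1], bits[2] ^ bits[3]))
         for bits in product((0, 1), repeat=4)]

def random_gate():
    # One output gate: (gate name, index of first input, index of second input)
    return (random.choice(list(GATES)), random.randrange(4), random.randrange(4))

def fitness(genome):
    # Count correct output bits over all 16 rows (perfect score = 32)
    score = 0
    for bits, want in TABLE:
        for (name, i, j), target in zip(genome, want):
            score += int(GATES[name](bits[i], bits[j]) == target)
    return score

def mutate(genome):
    # Replace one of the two output gates with a fresh random gate
    child = list(genome)
    child[random.randrange(len(child))] = random_gate()
    return child

def evolve(pop_size=50, max_gens=100_000):
    pop = [[random_gate(), random_gate()] for _ in range(pop_size)]
    for gen in range(1, max_gens + 1):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == 32:          # both XOR wirings found
            return pop[0], gen
        survivors = pop[: pop_size // 2]   # truncation selection
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return pop[0], max_gens
```

With this tiny search space (64 possible gates per output), the only perfect genome is an XOR on inputs 1,2 and an XOR on inputs 3,4, so the try count is dominated by how quickly selection locks in each half.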
The genetic algorithm took between 811 and roughly 28,000 tries to solve the space. A neural net solved the same problem in anywhere from 43 iterations to, in some runs, never, even when given the same number of nodes. A massively over-parameterized net with 2,000 nodes in the hidden layer converged far faster than a lean network with only 5 nodes in the hidden layer.
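Again, this is a sketch rather than the original network: a single-hidden-layer sigmoid MLP trained by gradient descent on the same double-XOR table, with the hidden width as a parameter so the 5-node versus 2,000-node comparison can be reproduced. The initialization, learning rate, and loss are my assumptions.

```python
import numpy as np

def train_mlp(hidden=16, epochs=2000, lr=0.5, seed=0):
    rng = np.random.default_rng(seed)
    # Same task: 4 binary inputs, targets (in1 XOR in2, in3 XOR in4)
    X = np.array([[(n >> k) & 1 for k in range(4)] for n in range(16)], dtype=float)
    Y = np.column_stack([X[:, 0] != X[:, 1], X[:, 2] != X[:, 3]]).astype(float)

    # Scaled random init so very wide hidden layers don't saturate the output
    W1 = rng.normal(0, 1, (4, hidden)) / np.sqrt(4)
    b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 2)) / np.sqrt(hidden)
    b2 = np.zeros(2)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))

    first_loss = None
    for _ in range(epochs):
        H = sig(X @ W1 + b1)                  # hidden activations
        P = sig(H @ W2 + b2)                  # predictions
        loss = np.mean((P - Y) ** 2)
        if first_loss is None:
            first_loss = loss
        # Backprop of mean-squared error through both sigmoid layers
        dP = (P - Y) * P * (1 - P) / len(X)
        dW2, db2 = H.T @ dP, dP.sum(axis=0)
        dZ1 = (dP @ W2.T) * H * (1 - H)
        dW1, db1 = X.T @ dZ1, dZ1.sum(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    accuracy = ((P > 0.5) == Y.astype(bool)).mean()
    return first_loss, loss, accuracy
```

Running `train_mlp(hidden=5)` against `train_mlp(hidden=2000)` with the same budget illustrates the effect described above: the wide network has many more random hidden features to recruit, so gradient descent typically finds a fit much sooner, while the narrow one can stall in a bad basin and never converge.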
So, I'd probably call the neural net the winner, but only when it's massively over-parameterized.