Wednesday, August 10, 2016

Neural Networks vs Genetic Algorithms: Part 2 - Different ways to train Neural AI


I'm currently aware of exactly two broad approaches to training Neural Networks.
They go by several names, such as Supervised versus Unsupervised training, but as far as I can tell, there are just two methods in common use today.

The first training method I want to talk about is the traditional 'BackPropagation' technique, also known as 'Supervised training'.
This training method has some requirements in order to work effectively, and the most important one is human supervision - Supervised training invariably involves a human trainer, or trainers, whose responsibility is to confirm that the network is producing appropriate output for any given set of inputs.
The second major requirement for Supervised training is a comprehensive set of training data: a large list of test input patterns, each paired with its 'desirable output' data.
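For concreteness, a training set in that sense might look something like the following - this is purely an illustrative toy example of my own (a logical-OR task), not data from any real project:

# Purely illustrative training set: each entry pairs an input pattern
# with the output we would like the trained network to produce.
# A toy logical-OR task is used here just to show the shape of the data.
training_set = [
    # inputs          desired output
    ([0.0, 0.0],      [0.0]),
    ([0.0, 1.0],      [1.0]),
    ([1.0, 0.0],      [1.0]),
    ([1.0, 1.0],      [1.0]),
]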
The idea is to show the Neural Network one set of input data at a time, examine the outputs produced by the network, and compare them to the 'known good' output data. If the outputs match the desired pattern, then training is complete for that specific input pattern. If not, the difference between the current output and the desired output can be measured, and the resulting 'Error Term' can be fed backwards through the network: at each Neuron layer we determine how much each Neuron contributed to the Error Term, and adjust the Neuron weights so that they move towards producing the desired output. Such corrections are introduced 'gently', so that many training iterations are required to achieve correct output results. This is deliberate, because modifying the Neuron weights too sharply has an unwanted consequence: the network will rapidly learn to produce correct output for the current training inputs, but will begin to produce bad output for input patterns that it was already trained on - and that's definitely counter-productive.
Anyway, the math involved in back-propagation is just a transposition of the same basic formula used to calculate the outputs from the inputs in the first place - if you can grasp the math required to calculate the Activation Sum for a Neuron, then you already have what you need to 'correct' the error terms at each Neuron; you just need to express the formula slightly differently.
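To make that a little more concrete, here's a minimal sketch in Python of the whole loop for a single sigmoid Neuron, trained against the toy set from earlier (a single Neuron can't represent anything more exciting than OR anyway). The learning rate, iteration count and starting weights are arbitrary illustrative choices of mine, not anything prescribed by the technique itself:

import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One sigmoid neuron: two weights plus a bias, all starting random.
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = random.uniform(-1, 1)
learning_rate = 0.5   # deliberately modest: corrections are 'gentle'

# Toy logical-OR training set: (inputs, desired output).
samples = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
           ([1.0, 0.0], 1.0), ([1.0, 1.0], 1.0)]

for epoch in range(5000):
    for inputs, target in samples:
        # Forward pass: activation sum, then the squashing function.
        activation = sum(w * i for w, i in zip(weights, inputs)) + bias
        output = sigmoid(activation)

        # Error term: desired minus actual, scaled by the derivative
        # of the activation function at the current output.
        error = (target - output) * output * (1.0 - output)

        # Gentle correction: nudge each weight in proportion to the
        # input that fed it and to the error term.
        for j in range(len(weights)):
            weights[j] += learning_rate * error * inputs[j]
        bias += learning_rate * error

# After training, the outputs should sit close to the desired values.
for inputs, target in samples:
    activation = sum(w * i for w, i in zip(weights, inputs)) + bias
    print(inputs, "->", round(sigmoid(activation), 3), "(want", target, ")")

A full multi-layer network does the same thing at every layer, passing the error terms backwards through the transposed weights, but the shape of the calculation is exactly what you see above.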

The second training method that I've encountered, and the one that I am more interested in for the purpose of game development and the simulation of artificial life forms, has a completely different basis from the traditional technique discussed above. It's not based on measuring and correcting error terms; instead it's rooted in chaos theory, or more precisely, in a natural phenomenon called 'emergence'. In my next post, I'll first present the mathematics involved in both forward and backward propagation of neural signals, and then follow that up with a discussion of the 'nature of chaos' and the very nature of nature itself - I will define 'emergence', talk about the properties of closed systems, and show how all of that can be applied to produce an Unsupervised machine-learning system.
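As a small teaser: if that emergence-based approach ends up looking anything like a genetic algorithm (as the title of this series suggests), its core loop might look roughly like the sketch below - a population of candidate weight sets is scored, the better ones survive, and their mutated copies replace the rest. The fitness function, population size and mutation rate here are placeholders of my own; in a game or artificial-life setting the 'fitness' would be how well the creature actually behaves.

import random

def fitness(weights):
    # Placeholder score: prefer weight sets whose values sum close to 1.0.
    # In practice this would be replaced by an evaluation of behaviour.
    return -abs(sum(weights) - 1.0)

POP_SIZE, GENOME_LEN, MUTATION_RATE = 20, 4, 0.1

# Start with a population of random candidate weight vectors.
population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(100):
    # Score every candidate and keep the better half as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]

    # Breed the next generation: copy a parent, then mutate it slightly.
    children = []
    while len(parents) + len(children) < POP_SIZE:
        child = list(random.choice(parents))
        for i in range(GENOME_LEN):
            if random.random() < MUTATION_RATE:
                child[i] += random.gauss(0.0, 0.2)
        children.append(child)
    population = parents + children

print("best candidate:", [round(w, 3) for w in max(population, key=fitness)])

No error terms, no desired outputs - just selection pressure, which is exactly where 'emergence' comes into the picture.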

