The hidden-layer neurons perform calculations on these signals, similar to those performed by the input-layer neurons, producing activation signals that are passed on to the output-layer neurons. The output neurons use the weighted activation signals arriving from the hidden layer to compute their own output activations, and these signals are the output of the neural network. The process by which an input signal (a vector) is received from the physical world, multiplied by the weights on the connections between the input and middle layers (in an NN implementation these weights form a matrix), nonlinearized with activation functions, multiplied by a second set of weights on the connections between the middle and output layers, and nonlinearized again, is called a forward pass. This pass is used both in training an NN and in applying a trained NN to model a system; no learning takes place during it. It is in the backward pass that an NN adjusts its weights so that it accurately models the training sets.
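To make the forward pass concrete, here is a minimal sketch in Python with NumPy. It is not the book's own code; the logistic sigmoid is assumed as the activation function, and W1 and W2 stand for the two sets of connection weights described above:

import numpy as np

def sigmoid(x):
    # Logistic activation function: squashes a signal into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def forward_pass(x, W1, W2):
    # x  : input vector from the physical world, shape (n_in,)
    # W1 : input-to-hidden weights, shape (n_hidden, n_in)
    # W2 : hidden-to-output weights, shape (n_out, n_hidden)
    hidden = sigmoid(W1 @ x)       # weighted sum, then nonlinearity
    output = sigmoid(W2 @ hidden)  # second weighting and nonlinearity
    return hidden, output

The returned output vector is the network's response to the input x; no weights change during this pass.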

The backward pass through a neural network uses derivative information to adjust the weight values so that the network accurately models the response of the system being modeled. The derivatives take the form of changes in error with respect to changes in the weight values, and this information is used to propagate the error back through the network as adjustments to the weights. The result is a derivative-based search that produces a well-trained network capable of predicting the response of a wide variety of systems.
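In symbols, using a standard gradient-descent formulation (the book presents its own step-by-step procedure shortly), the error E on a training pattern can be taken as the squared difference between the target values tk and the network outputs ok, and each weight is then moved a small step down the error gradient:

E = (1/2) Σk (tk − ok)²,    Δwij = −α ∂E/∂wij

where α is the learning rate discussed below.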


Figure 11.3  Typical backpropagation NNs consist of three layers of neurons interconnected by weighted synapses.

We have now provided an architecture in which numerous neurons are connected into layers. The network has a mechanism for accepting an input signal and converting it into an output signal, and the relationship between the two is determined by the weighted connections, which are adjusted through a backward propagation of errors. This backward propagation of errors represents the training phase of an NN, while the forward pass is its actual implementation. Generally, the implementation is easy and fast, while the training can be complex and time-consuming.

Before we discuss this process, it is prudent to pause for a moment and consider what the input and output signals for a modeling problem might be. As an example, recall the hydrocyclone model presented in Chapter 10. In that modeling problem the goal was to model the relationship

d50 = f(Dc, Di, Do, Du, h, Q, φ, ρ)

where Dc is the diameter of the hydrocyclone, Di is the diameter of the slurry input, Do is the diameter of the overflow, Du is the diameter of the underflow, h is the height of the hydrocyclone, Q is the volumetric flow rate into the hydrocyclone, φ is the percent solids in the slurry input, and ρ is the density of the solids. Thus, an input data set used to train an NN would contain measured values of each of the parameters on the right-hand side of the above equation. A corresponding measured value of d50 would also be available to the NN so that it could determine the accuracy of its computed value of d50. This comparison is used to compute the errors that are propagated back through the network in the backward pass.
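As a concrete illustration (the numeric values below are placeholders for illustration only, not measurements from Chapter 10), one training pattern for this model would pair the eight measured inputs with the corresponding measured d50:

import numpy as np

# One hypothetical training pattern for the hydrocyclone model.
# Inputs, in the order listed above:
#   Dc, Di, Do, Du, h, Q, φ (percent solids), ρ (solids density).
# These values are placeholders, not real measurements.
x = np.array([10.2, 4.1, 3.4, 2.2, 40.0, 120.0, 30.0, 2.7])

# The measured d50 for this pattern; the network's computed d50
# is compared against it to form the error for the backward pass.
target = np.array([125.0])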

Backpropagation NNs take a training example, compute an output signal, and then compute an associated error based on the known solution. Initially, errors occur mainly because the weight values on the synapses are not accurate. Thus, a mechanism is needed to adjust the weights so that the network “will get a better answer.” Weight adjustment by the backpropagation of errors is a fairly generic concept, and there are many ways to accomplish it; the classic backpropagation algorithm, however, uses a derivative-based approach. Perhaps the most effective way to present the algorithm is to provide a step-by-step procedure for implementing and training an NN.
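A minimal sketch of that cycle, in Python with NumPy, might look as follows. This is an illustration rather than the book's procedure: it assumes logistic sigmoid activations and a squared-error measure, and it omits the momentum and tolerance terms discussed next:

import numpy as np

def train_step(x, target, W1, W2, alpha=0.1):
    # One backpropagation cycle: forward pass, error, backward pass.
    # W1 is (n_hidden, n_in); W2 is (n_out, n_hidden); alpha is the
    # learning rate.

    # Forward pass with logistic sigmoid activations.
    hidden = 1.0 / (1.0 + np.exp(-(W1 @ x)))
    output = 1.0 / (1.0 + np.exp(-(W2 @ hidden)))

    # Output-layer error signal (delta rule for sigmoid units).
    delta_out = (target - output) * output * (1.0 - output)

    # Propagate the error back through W2 to the hidden layer.
    delta_hidden = (W2.T @ delta_out) * hidden * (1.0 - hidden)

    # Adjust the weights in proportion to the learning rate.
    W2 += alpha * np.outer(delta_out, hidden)
    W1 += alpha * np.outer(delta_hidden, x)

    # Squared error for this pattern, for monitoring convergence.
    return 0.5 * np.sum((target - output) ** 2)

Repeating this step over all training patterns until the errors become acceptably small constitutes training the network.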

Before presenting the algorithm, let’s review a few key points. First, training a backpropagation NN requires data sets for which we “know the answer.” Thus, we must supply data from the system being modeled for which we have measured both the input parameters (the independent variables of the model) and the output parameters (the dependent variables of the model). Next, we use a three-layer architecture, as shown in Figure 11.3, consisting of an input layer, a hidden layer, and an output layer. Associated with these three layers are two weight matrices containing the values of the weights on the synapses connecting the neurons. Finally, the algorithm includes a learning rate, α, indicating how much of the computed weight change to apply on each pass; this is typically a number between 0 and 1. There is also a momentum term, φ, indicating how much a previous weight change should influence the current one, and a tolerance term indicating how close an output must be to its target before we accept it as “good.”
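In one common formulation (an assumption here, not necessarily the exact update the book derives), these terms enter the weight update as

Δwij(t) = −α ∂E/∂wij + φ Δwij(t−1)

so that α scales the current gradient step, φ carries part of the previous change forward, and training stops once every output lies within the stated tolerance of its target.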

