Memory models are characterized by the physical or constitutive aspects of memory functions and by the information-processing abilities of the storage mechanism. The synaptic action and the state-transitional considerations in a neuron (or in a set of neurons), whether transient or persistent, refer to a set of data or a spatiotemporal pattern of neural signals constituting an addressable memory on a short- or long-term basis. Such a pattern could be dubbed a feature map. In an interconnected set of neurons, the associated temporal responses, proliferating spatially, represent a response pattern or a distribution of memory.

Relevant to this memory unit, there are writing and reading phases. The writing phase refers to the storage of a set of information data (or functionalities) to be remembered. Retrieval of this data is termed the reading phase.
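As a minimal sketch of these two phases (with illustrative names and dimensions, not drawn from the text), the writing phase below stores an input/output association as an outer-product increment to a weight matrix, and the reading phase retrieves the stored output from an input cue:

import numpy as np

def write(W, x, y):
    """Writing phase: store the association x -> y as an outer-product increment."""
    return W + np.outer(y, x)

def read(W, x):
    """Reading phase: retrieve the pattern associated with the cue x."""
    return W @ x

# Store and recall one association.
x = np.array([1.0, -1.0, 1.0])    # input (key) pattern
y = np.array([1.0, 1.0])          # output pattern to be remembered
W = write(np.zeros((2, 3)), x, y)
print(read(W, x))                 # proportional to y for a single stored pair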

The storage of data implicitly specifies the training and learning experience gained by the network. That is, the neural network adaptively updates the synaptic weights that characterize the strength of the connections. The updating follows a set of informational training rules: the actual output value is compared with a teacher value, and any difference between them is minimized on a least-squares error basis. The optimization is performed on the synaptic weights by minimizing an associated energy function.
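The comparison with the teacher value and the least-squares minimization can be sketched as a delta-rule update; this is a generic gradient-descent illustration, with the learning rate eta assumed rather than taken from the text:

import numpy as np

def delta_rule_step(w, x, teacher, eta=0.1):
    """One update: descend on the energy function E = 0.5 * (teacher - w.x)**2."""
    actual = w @ x                # actual output value
    error = teacher - actual      # difference from the teacher value
    return w + eta * error * x    # adjust the synaptic weights downhill on E

w = np.zeros(3)
x = np.array([1.0, 0.5, -1.0])
for _ in range(50):               # repeated presentations reduce the error
    w = delta_rule_step(w, x, teacher=2.0)
print(w @ x)                      # approaches the teacher value 2.0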

The retrieval phase follows nonlinear strategies to retrieve the stored patterns. Mathematically, retrieval is a single-step or iterative process based on a set of equations of dynamics, the solution of which corresponds to a neuronal value representing the desired output to be retrieved.

The learning rules indicated above are pertinent to two strategies: the unsupervised learning rule and the supervised learning rule. The unsupervised version (also known as Hebbian learning) is such that, when units i and j are simultaneously excited, the strength of the connection between them increases in proportion to the product of their activations. The network is trained without the aid of a teacher, via a training set consisting of input training patterns only. The network learns to adapt based on the experience collected through the previous training patterns.
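A minimal sketch of the Hebbian rule follows, assuming a simple learning rate eta (an illustrative parameter): each weight grows in proportion to the product of the activations of the two units it connects, with no teacher signal involved.

import numpy as np

def hebbian_update(W, a, eta=0.01):
    """Unsupervised Hebbian step: delta w_ij is proportional to a_i * a_j."""
    return W + eta * np.outer(a, a)

# Units 0 and 1 are simultaneously excited, so the weights
# connecting them (W[0, 1] and W[1, 0]) strengthen.
a = np.array([1.0, 1.0, 0.0])
W = hebbian_update(np.zeros((3, 3)), a)
print(W)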

In supervised learning, the training data consist of many pairs of input/output training patterns. Figure 3.6 illustrates the supervised and unsupervised learning schemes.


Figure 3.6  Learning schemes (a) Supervised learning; (b) Unsupervised learning (Adapted from [48])

Networks where no learning is required are known as fixed-weight networks. Here the synaptic weights are prestored. Such an association network has one layer of input neurons and one layer of output neurons.

Pertinent to this arrangement, the stored pattern can be retrieved in one shot by a feed-forward algorithm, or the correct pattern can be deduced via many iterations through the same network by means of a feedback algorithm. The feed-forward network constitutes a linear or nonlinear associative memory wherein the synaptic weights are precomputed and prestored. The feedback associative memory networks are popularly known as Hopfield nets.
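Both retrieval styles can be sketched under the usual bipolar (+1/-1) Hopfield conventions, which the text does not spell out and are assumed here: the weights are precomputed from the stored patterns by an outer-product prescription, the feed-forward read is a single pass, and the feedback read iterates the same network until a fixed point is reached.

import numpy as np

def store(patterns):
    """Precompute and prestore the weights from bipolar (+1/-1) patterns."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)                   # no self-connections
    return W

def threshold(h, prev):
    """Bipolar threshold; a unit keeps its previous state on zero net input."""
    return np.where(h > 0, 1.0, np.where(h < 0, -1.0, prev))

def recall_feedforward(W, x):
    """One-shot feed-forward retrieval."""
    return threshold(W @ x, x)

def recall_feedback(W, x, max_iters=20):
    """Feedback (Hopfield) retrieval: iterate until the state stops changing."""
    for _ in range(max_iters):
        x_next = threshold(W @ x, x)
        if np.array_equal(x_next, x):
            break
        x = x_next
    return x

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1]], dtype=float)
W = store(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1], dtype=float)  # first pattern, last bit flipped
print(recall_feedback(W, noisy))                     # recovers the stored pattern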

3.4 Net Function and Neuron Function

The net function of the interconnected neurons is mathematically represented by a basis function U(W, x), where W is the weight matrix and x is the input vector. In the hyperplane representation, U is a linear basis function given by:

u_i(W, x) = \sum_{j=1}^{n} w_{ij} x_j

and in the hypersphere representation the basis function is a second-order function given by:

u_i(W, x) = \sum_{j=1}^{n} (x_j - w_{ij})^2
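Both basis functions translate directly into code; this sketch uses illustrative dimensions and weight values:

import numpy as np

def linear_basis(W, x):
    """Hyperplane net function: u_i = sum_j w_ij * x_j."""
    return W @ x

def second_order_basis(W, x):
    """Hypersphere net function: u_i = sum_j (x_j - w_ij)**2."""
    return np.sum((x - W) ** 2, axis=1)

W = np.array([[0.5, -0.2, 0.1],
              [0.3, 0.8, -0.5]])
x = np.array([1.0, 0.0, -1.0])
print(linear_basis(W, x))         # one net value per neuron
print(second_order_basis(W, x))   # squared distance of x from each weight row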


Figure 3.7  Activation functions
(a) Step function; (b) Ramp function; (c) Sigmoidal function; (d) Gaussian function (Adapted from [48])

The net value as expressed by the basis function can be transformed to depict the nonlinear activity of the neuron. This is accomplished by a nonlinear function known as the activation function. Commonly, the step, ramp, sigmoid, and Gaussian functions are used as activation functions. These are illustrated in Figure 3.7.
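The four activation functions of Figure 3.7 can be sketched as follows; the ramp's saturation interval and the Gaussian width sigma are assumed parameters chosen for illustration:

import numpy as np

def step(u):
    """Step function: 1 for non-negative net values, 0 otherwise."""
    return np.where(u >= 0, 1.0, 0.0)

def ramp(u):
    """Ramp function: linear on [0, 1], saturated outside it."""
    return np.clip(u, 0.0, 1.0)

def sigmoid(u):
    """Sigmoidal function: smooth squashing onto (0, 1)."""
    return 1.0 / (1.0 + np.exp(-u))

def gaussian(u, sigma=1.0):
    """Gaussian function: bell-shaped response centered at u = 0."""
    return np.exp(-u**2 / (2.0 * sigma**2))

u = np.linspace(-3.0, 3.0, 7)
for f in (step, ramp, sigmoid, gaussian):
    print(f.__name__, np.round(f(u), 3))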

3.5 Concluding Remarks

Mathematical representation of neural activity can follow different avenues. It may be concerned with single-neuron activity or with the collective behavior of the neural complex. Single-neuron dynamics refers to the stochastic aspects of biochemical activity at the cells, manifesting as trains of spikes. The collective behavior of neural units embodies the interaction between massively connected units and the associated memory, the adaptive feedback or feed-forward characteristics, and the self-organizing control endeavors. The analytical representation of memory functions of the neural complex governs the learning (or training) abilities of the network exclusively via the transient and/or persistent states of the system variables.

Another mathematical consideration pertinent to the neural system refers to the spatiotemporal dynamics of state-transitional proliferation across the interconnected neurons. Existing models invoke different analogical “flow” considerations and equate them to the neuronal flow. Equations of wave motion and informational transit are examples of relevant pursuits.

The contents of this chapter, as summarized above, provide a brief outline of the mathematical concepts that form a foundation for the chapters to follow.

