

6.6 Stochastical Bounds and Estimates of Neuronal Activity

Considering a neural network, the Hopfield energy surface given by Equation (6.18) has a first term that is combinatoric: its minima correspond to solutions of a complex problem involving several interacting dichotomous variates. The second term of Equation (6.18) is monadic, and such interactions are absent in it. This monadic term diminishes as the gain of the nonlinear process, namely, the scale factor Λ → ∞ (as in the case of ideal McCulloch-Pitts transitions). This also corresponds to Hopfield’s pseudo-temperature being decreased in the simulated annealing process.

Suppose the variate W is uniformly distributed, and let the mean-square deviation of W be designated as MSW = <(W − <W>)²>. Functional estimates of MSW are bounded by upper and lower limits. For example, Yang et al. [91] have derived two possible lower bounds for MSW, namely, the Cramér-Rao (CR) lower bound and the information-theoretic (IT) lower bound. Correspondingly, an asymptotic upper bound has also been deduced [91].

Further, considering a train of input sequences Si stipulated at discrete time values ti (i = 1, 2, …, N), the weighting function Wi can be specified as a linear least-squares estimator as follows:
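The display equation that followed here did not survive extraction. Based on the definitions given immediately below (intercept WiNF at ti = 0, slope ∂WR/∂t, and errors ei), it presumably has the standard linear-regression form:

```latex
W_i = W_i^{NF} + \left(\frac{\partial W_R}{\partial t}\right) t_i + e_i, \qquad i = 1, 2, \ldots, N
```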

where WiNF is the initial intercept of Wi at ti = 0 corresponding to the noise-free state (devoid of Fokker-Planck evolution) and ei’s are errors in the estimation; and:
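The display that followed “and:” is likewise missing; in the usual matrix notation of least squares the same model would read (the boldface column-vector notation is an assumption):

```latex
\mathbf{W} = \mathbf{H}\,\mathbf{a} + \mathbf{e}
```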

The minimum least-squares estimator of Wi, namely We, is therefore written as:
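The missing display for the estimator, given the definition of ae immediately below, presumably reads:

```latex
\mathbf{W}_e = \mathbf{H}\,\mathbf{a}_e
```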

where ae = (HTH)-1(HTW) with:
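The missing display defining H and the associated vectors presumably follows the standard intercept-plus-slope design-matrix form (with WNF denoting the common intercept WiNF of the model):

```latex
\mathbf{H} =
\begin{bmatrix}
1 & t_1 \\
1 & t_2 \\
\vdots & \vdots \\
1 & t_N
\end{bmatrix},
\qquad
\mathbf{a} =
\begin{bmatrix}
W^{NF} \\[2pt]
\partial W_R / \partial t
\end{bmatrix},
\qquad
\mathbf{W} =
\begin{bmatrix}
W_1 \\ W_2 \\ \vdots \\ W_N
\end{bmatrix}
```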

Hence explicitly:
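Writing out ae = (HTH)⁻¹(HTW) for this two-column H gives the familiar normal-equations solution (a reconstruction from the definitions above, not the book’s typeset form):

```latex
\mathbf{a}_e =
\begin{bmatrix}
N & \sum_{i} t_i \\[2pt]
\sum_{i} t_i & \sum_{i} t_i^2
\end{bmatrix}^{-1}
\begin{bmatrix}
\sum_{i} W_i \\[2pt]
\sum_{i} t_i W_i
\end{bmatrix}
```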

In ascertaining the best estimate of Wi, the slope (∂WR/∂t) should be known. For example, relevant to the data of Figure 6.5, the variation of WR with respect to the normalized time function t/Td (for different values of Γ) is depicted in Figure 6.6.
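As a numerical illustration (not from the text), the estimate ae = (HTH)⁻¹(HTW) for this two-parameter model can be computed in closed form by solving the 2 × 2 normal equations directly; the function and data names below are hypothetical:

```python
# Hypothetical sketch: the linear least-squares estimator for the model
# W_i = W_NF + (dW_R/dt) * t_i + e_i, solved via the 2x2 normal equations
# (H^T H) a = H^T W, where H has columns [1, t_i].

def least_squares_fit(t, w):
    """Return (intercept W_NF, slope dW_R/dt) minimizing the sum of squared errors."""
    n = len(t)
    st = sum(t)                                   # sum of t_i
    sw = sum(w)                                   # sum of W_i
    stt = sum(ti * ti for ti in t)                # sum of t_i^2
    stw = sum(ti * wi for ti, wi in zip(t, w))    # sum of t_i * W_i
    # Normal equations:
    #   [ n    st  ] [intercept]   [ sw  ]
    #   [ st   stt ] [slope    ] = [ stw ]
    det = n * stt - st * st
    intercept = (sw * stt - st * stw) / det
    slope = (n * stw - st * sw) / det
    return intercept, slope

# Example: noise-free data lying exactly on W = 2.0 + 0.5 * t,
# so the fit recovers intercept 2.0 and slope 0.5.
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
ws = [2.0 + 0.5 * ti for ti in ts]
wnf, slope = least_squares_fit(ts, ws)
```

With noisy data (nonzero ei), the same closed form returns the least-squares estimates rather than the exact parameters.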


Figure 6.6  Evolution of the root-mean-squared value WR(t) for different extents of the correlation time (Γ)
(1. Γ = 10^-2; 2. Γ = 10^-1; 3. Γ = 10^0; 4. Γ = 10^+1; 5. Γ = 10^+4)

As Γ increases, the corresponding time delay in neural response (Td) decreases (or t/Td increases) as in Figure 6.6. Hence, the functional relation between WR and t/Td can be “best fitted” as:
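The fitted relation itself was lost in extraction. Since the next sentence identifies exp(+Td∞/Td) as the constant of proportionality between WR and exp(+t/Td), the fit presumably reads:

```latex
W_R \propto \exp(+t/T_d), \qquad W_R = \exp(+T_{d\infty}/T_d)\,\exp(+t/T_d)
```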

where Td∞ refers to the value of Td as Γ → ∞; and exp(+Td∞/Td) accounts for the constant of proportionality between WR and exp(+t/Td). Hence:
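The display following “Hence:” is also missing; combining the proportionality constant exp(+Td∞/Td) with exp(+t/Td) presumably gives:

```latex
W_R = \exp\left[\,(t + T_{d\infty})/T_d\,\right]
```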

In the presence of η(t), Equation (6.17) can therefore be rewritten for the estimate of Si, namely Sei, as:

where τd denotes the time of integration of the low-pass action. Further, the subscript e specifies explicitly that the relevant parameters are least-squares estimates. Thus, the above equation refers to the time-dependent evolution of the stochastical variable Si, written in terms of its least-squares estimate in the absence of η(t) and modified by the noise-induced root-mean-squared value WR.

Upon integration, Equation (6.27) reduces at discrete-time instants to:

where (ti/τo ≥ τd/τo).

Obviously, the first part of Equation (6.28) refers to the noise-free deterministic values of Si, and the second part is an approximate contribution due to the presence of the intracell disturbance/noise η. Implicitly, this second part is a function of the root-mean-squared value (WR) of the stochastical variable Wi, the spectral characteristics of η specified via the delay term Td(Γ), the time-constant of the low-pass action in the cell (τo), and the time of integration in the low-pass section (τd).

The relevant estimate of the Hopfield energy function of Equation (6.28) can be written as the corresponding Lyapunov function. Denoting the time-invariant constant term, namely, τo[exp(τd/τo) − 1], as φ1, the estimate of the Lyapunov function is given by:

where τd/τo is as defined above and φ2[τo²/Td(Γ)] represents the following expression:

In the absence of noise or disturbance (η), the energy function E as defined by Equation (6.18) (with the omission of η) has minima occurring at the corners of the N-dimensional hypercube defined by 0 ≤ σi ≤ 1, provided the θi’s (i = 1, 2, …, N) are forced to zero by a suitable change of coordinates.

In the event of the noise (η) being present, the estimated Lyapunov energy function given by Equation (6.29) cannot have such uniquely definable locations of minima at the corners. This is due to the presence of the noise-induced fourth term of Equation (6.29a), which correlates the ith and the jth terms, unless the second term involving σi and this fourth term are combined into one single effective bias parameter, Ii(ti); the Ii’s can then be set to zero via a coordinate transformation, forcing the minima to the corners of the N-dimensional hypercube as a first-order approximation, as discussed in the following section.



Copyright © CRC Press LLC
