Both Hopfield's model and Little's model have also been treated by Peretto under noisy conditions. It is concluded that, in Hopfield's model with a Hebbian learning procedure, the Hamiltonian description of the neuronal state (analogous to that of the spin glass) can still be modified so as to ascertain the steady-state properties of the network exactly at any level of noise. (However, for a fully connected network, the dynamics is likely to become chaotic at times.)
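As a concrete illustration of the two ingredients just mentioned, the following minimal Python sketch builds Hebbian couplings from stored ±1 patterns and evaluates the spin-glass-like energy of a neuronal state. It assumes the standard forms Jij = (1/N) Σμ ξiμ ξjμ and H(S) = −(1/2) Σij Jij Si Sj for Equation (5.12); the function names are illustrative only.

```python
import numpy as np

def hebbian_couplings(patterns):
    """Hebbian rule (assumed form): J_ij = (1/N) * sum_mu xi_i^mu * xi_j^mu,
    with no self-coupling (J_ii = 0)."""
    n_patterns, n = patterns.shape
    J = patterns.T @ patterns / n
    np.fill_diagonal(J, 0.0)
    return J

def energy(J, s):
    """Spin-glass-like Hamiltonian H(S) = -(1/2) * sum_ij J_ij S_i S_j."""
    return -0.5 * s @ J @ s

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 100))   # three memories, N = 100
J = hebbian_couplings(patterns)
print(energy(J, patterns[0]))                   # stored patterns lie near energy minima
```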
Though the Hamiltonian approach to Little's model also permits analysis of the network under noisy conditions, it is more involved than that of Hopfield's model, since it depends upon the noise level. The last consideration, namely Peretto's comparison of the storage capacities of Hopfield's and Little's models, leads to the inference that both models (which have a common basis vis-à-vis the spin-glass analogy) present the same associative memory characteristics. There is, however, a small distinction in their dynamics: Little's model updates all neurons synchronously (in parallel), whereas Hopfield's model updates them asynchronously (serially), as elaborated in the next section. On this basis, Peretto concludes that Little's model is more akin to biological systems.

Subsequent to Peretto's effort in reconciling Hopfield's and Little's models through their common spin-glass behavior, Amit et al. in 1985 [69] analyzed the two dynamic models due to Hopfield and Little to account for the collective behavior of neural networks. Considering the long-time behavior of these models to be governed by the statistical mechanics of infinite-range Ising spin-glass Hamiltonians, certain configurations of the spin system chosen at random are shown to act as memories stored in the quenched random couplings. The relevant analysis is restricted to a finite number of memorized spin configurations (patterns) in the thermodynamic limit, with the number of neurons tending to infinity. Below the transition temperature TC, both models have been shown to exhibit identical long-time behavior. In the region T < TC, the states are, in general, either metastable or stable; below T ≅ 0.46 TC, dynamically stable states are assured. The metastable states are portrayed as arising from mixtures of the embedded patterns. Again, for T < TC the states are conceived as symmetric; in terms of memory configurations, the symmetric states have equal overlap with several memories.

5.8 Little's Model versus Hopfield's Model

The Hopfield model defined by Equation (5.12), with an associated memory, has a well-defined dynamics. That is, given the initial pattern, the system evolves in time so as to relax to a final steady-state pattern. In the generalized Hopfield model, the transition probability ρ(I/J) from state J to the next state I takes the usual (heat-bath) form for T > 0. With β = 1/T, and with I and J differing at most in the state of the single neuron i updated at that step,

ρ(I/J) = exp[β hi(J) Si(I)] / {2 cosh[β hi(J)]}

where hi(J) = Σj Jij Sj(J) is the local field acting on the ith neuron; and the system relaxes to the Gibbs distribution:

ρ(I) ∝ exp[−β H(I)]
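To make the asynchronous relaxation concrete, here is a minimal sketch of one random-order sweep of the heat-bath rule above, reusing the illustrative J, patterns, and rng from the previous fragment; repeated sweeps sample the Gibbs distribution at inverse temperature β. The β value and sweep count are illustrative only.

```python
import numpy as np

def hopfield_sweep(J, s, beta, rng):
    """One asynchronous sweep: neurons are visited one at a time in random
    order, and each is redrawn from the heat-bath rule
    p(S_i = +1) = exp(beta*h_i) / (2*cosh(beta*h_i)) = 1 / (1 + exp(-2*beta*h_i))."""
    for i in rng.permutation(len(s)):
        h = J[i] @ s                                # local field h_i(S)
        s[i] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * beta * h)) else -1
    return s

# Relax a corrupted copy of a stored pattern (J, patterns, rng as above).
s = patterns[0].copy()
s[:10] *= -1                                        # flip 10 of the 100 bits
for _ in range(20):
    s = hopfield_sweep(J, s, beta=4.0, rng=rng)     # low noise level
print(np.mean(s == patterns[0]))                    # fraction of bits matching the memory
```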
In Little's model the transition probability is given by:

ρ(I/J) = ∏i exp[β hi(J) Si(I)] / {2 cosh[β hi(J)]}    (product over i = 1, …, N)

where, as before,

hi(J) = Σj Jij Sj(J)

is the local field acting on the ith neuron. Thus, in the Little model, at each time-step all the spins simultaneously check their states against the corresponding local fields; such an evolution is called synchronous, in contrast with the Hopfield model, which adopts asynchronous dynamics. Peretto has shown that Little's model leads to a Gibbs-type steady state exp(−βHN), where the effective Hamiltonian HN is given by:

HN(S) = −(1/β) Σi ln{2 cosh[β hi(S)]}

This effective Hamiltonian corresponds, for Little's model, to Hopfield's Hamiltonian of Equation (5.12). The corresponding free energy of the Little model has been shown [33] to be twice that of the generalized Hopfield model at the extreme points. As a consequence, the nature of the ground states and metastable states in the two models is identical.

Little [33] points out one nontrivial difference between the neural network problem and the spin problem, concerning the assumption of symmetry in the system. That is, as mentioned earlier, consider a matrix TM consisting of the probabilities of obtaining a state |S′1, S′2, …, S′M> given the state |S1, S2, …, SM> immediately preceding it (where the primed set refers to the row and the unprimed set to the column of the element of the matrix). For the spin system, TM is symmetric. However, in neural transmission the signals propagate from one neuron down its axon to the synaptic junction of the next neuron, and not in the reverse direction; hence, TM is clearly not symmetric for neural networks. That is, in general, the interaction of the jth neuron with the ith is not the same as that of the ith neuron with the jth. Though TM is not symmetric, Little [33] observes that the corresponding result can be generalized to an arbitrary matrix because, while a general matrix cannot always be diagonalized, it can be reduced to the so-called Jordan canonical form (see Appendix B); and Little develops the conditions for a persistent order based on the Jordan canonical form representation of TM. Thus the asymmetry problem appears, superficially, to have been solved.

However, there are still many differences between physical realism and Little's model. The discrete-time assumption, as discussed by Little, is probably the least physically acceptable aspect of both this model and the formal neuron. In addition, secondary effects such as potential decay, accommodation, and postinhibitory rebound are not taken into account in the model. To compare Little's model directly with real networks, details such as the synaptic connectivity should be known, and these can be worked out for only a few networks. Thus, it should be emphasized that this model, like the formal neuron, represents only a minimal level of description of neural firing behavior.
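The contrast with Little's synchronous rule, together with the ln-cosh effective Hamiltonian quoted above, can be sketched in the same illustrative style (again assuming the couplings J from the earlier fragments):

```python
import numpy as np

def little_step(J, s, beta, rng):
    """One synchronous step: every spin is redrawn in parallel from
    p(S_i = +1) = exp(beta*h_i) / (2*cosh(beta*h_i)), with ALL local fields
    computed from the previous state -- unlike the asynchronous sweep above."""
    h = J @ s
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
    return np.where(rng.random(len(s)) < p_up, 1, -1)

def little_effective_hamiltonian(J, s, beta):
    """The ln-cosh effective Hamiltonian quoted above:
    H_N(S) = -(1/beta) * sum_i ln(2*cosh(beta*h_i(S)))."""
    h = J @ s
    return -np.sum(np.log(2.0 * np.cosh(beta * h))) / beta
```

The only coding difference from the asynchronous sweep is that all the local fields are computed before any spin is updated, which is precisely what the text means by synchronous evolution.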