

8.8 Continuous Neural Entropy

Reverting to the situation wherein the subjective and objective targets merge, the spread-space can be quantized concentrically about the target center into subspaces, as shown in Figure 8.4, with a quantizing extent of spread-space equal to δ.


Figure 8.4  Discretized neural parametric spread-space

The discretized position entropy can be related, by means of a nonlinear coefficient, to the Shannon entropy HS, which depends explicitly on the quantizing level δ as follows:

HS(δ) = −Σr p(Qr)δ log[p(Qr)δ]     (8.20)

where p(Qr)δ denotes the probability of occurrence in the rth quantized subspace located at a distance Qr from the target center.

Equation (8.20) predicts that the entropy grows without bound in the limiting (continuous) case, as δ → 0, in confirmation of the fact that as the cardinality of the given set of events increases to infinity, the entropy of the system also tends to infinity.
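This limiting behavior can be sketched numerically. The snippet below (a sketch only: the function names are hypothetical, and a uniform density over a unit spread interval is assumed for illustration) evaluates the quantized entropy −Σr pr·log pr with cell probability pr ≈ p(Qr)·δ for progressively finer quantization:

```python
import math

def quantized_entropy(p_density, q_max, delta):
    """Discrete Shannon entropy (nats) of a continuous density p(Q) on
    [0, q_max], quantized into cells of width delta; the probability of
    the rth cell is approximated as p(Q_r) * delta."""
    n = round(q_max / delta)          # number of quantized subspaces
    h = 0.0
    for r in range(n):
        q = (r + 0.5) * delta         # cell midpoint
        p_cell = p_density(q) * delta
        if p_cell > 0:
            h -= p_cell * math.log(p_cell)
    return h

uniform = lambda q: 1.0               # hypothetical uniform density on [0, 1]
for delta in (0.1, 0.01, 0.001):
    print(delta, quantized_entropy(uniform, 1.0, delta))
```

For the uniform case the quantized entropy equals −log δ, so refining the quantization adds log 2 per halving of δ and the sum diverges as δ → 0, as Equation (8.20) predicts.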

Suppose the vector yi in the spread-space has an equal probability of occurrence at all subspaces Δyir measured from the target by the same distance |yir − yT| = Qr. Then, for δ → 0, the probability pr of finding the vector yir in the rth annular space can be written as:

pr = p(Qr)δ

satisfying the total probability condition that:

Σr p(Qr)δ = 1

The corresponding position entropy is defined in a continuous form as:

H(Q) = −∫ p(Q) log[p(Q)] dQ

with 0 ≤ Q ≤ Qmax and p(Q) = 0 elsewhere.

The above expression is again identical to Shannon's entropy and represents the continuous version of Equation (8.13) with δ → 0. It can also be related functionally to Shannon's entropy as before, characterizing the statistical disorganization of the system for a set of constant cardinality. The goal-seeking position entropy attains its extremal (maximum) value whenever pi → 1/(Qi + 1) for all realizations of the state vectors, that is, when all of the (Qi + 1) subspaces become equiprobable.
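The equiprobable condition can be verified numerically: for a fixed number of subspaces, Shannon entropy is largest under the uniform assignment (a basic property of Shannon's measure; the probability values below are hypothetical).

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (nats) of a discrete probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

n = 5                                  # hypothetical number of subspaces, Qi + 1
uniform = [1.0 / n] * n                # equiprobable assignment, pi = 1/(Qi + 1)
skewed = [0.6, 0.2, 0.1, 0.05, 0.05]   # any nonuniform assignment

print(shannon_entropy(uniform))        # log(n), the maximum attainable value
print(shannon_entropy(skewed))         # strictly smaller
```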

8.9 Differential Disorganization in the Neural Complex

The difference in entropies at two locations, namely, in the vicinity of the objective locale of the goal and at an arbitrary ith location in the spread-space, measures implicitly the differential extent of disorganization. The corresponding information gain is given by this difference (Hi − Ho), where Ho denotes the entropy in the vicinity of the objective locale and Hi that at the ith location.

The information or entropy pertinent to the disorganized ith locale can be specified either by an associated a priori probability pai of attaining the goal, so that Hi = −log(pai), or in terms of the conditional probability pi/pai', where pai' is the a posteriori probability of attaining the goal in reference to the ith subspace. The corresponding value Hi' is equal to −log(pi/pai'). In both cases, pi in general is the actual probability of the ith subspace and pi ≠ pai. Further, pai' depicts the forecast probability of the ith subspace. The differential measure of disorganization is obtained by imposing the equiprobable condition p1 = p2 = ... = pκ, and the pragmatic aspects of this disorganization can be deduced from Hi'.
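As an illustrative sketch only: if the entropy of the ith locale is taken as the Shannon self-information −log p (an assumption standing in for the chapter's exact functional forms, which are not reproduced here), the a priori and conditional measures can be evaluated as follows, with all probability values hypothetical.

```python
import math

def self_information(p):
    """Shannon self-information, -log p (nats)."""
    return -math.log(p)

# hypothetical probabilities for the ith subspace
p_i   = 0.3    # actual probability of the ith subspace
p_ai  = 0.1    # a priori probability of attaining the goal
p_aip = 0.6    # a posteriori (forecast) probability

H_i  = self_information(p_ai)           # a priori form of the locale entropy
H_ip = self_information(p_i / p_aip)    # conditional form
print(H_i, H_ip)
```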

8.10 Dynamic Characteristics of Neural Informatics

The stochastic aspects of a neural complex are invariably dynamic rather than time-invariant. Due to the presence of intra- or extracellular disturbances, the associated information in the neural system may degrade with time; the proliferation of information across the network may also become obsolete or nonpragmatic due to synaptic delays or processing delays across the interconnections. That is, aging of neural information (or degenerative negentropy) leads to devalued (or value-weighted) knowledge with reduced utility. The degeneration of neural information can be depicted in a simple form by an exponential decay function, namely:

I(t) = I0 exp(−t/τ)

where I0 is the initial information and τ is the time constant specifying the duration within which I(t) has pragmatic value. This time constant would depend on the rate of flow of syntactic information, the information-spread across the entropy space, the entropy of the sink which receives the information, and the characteristics of the synaptic receptors which extract the information. Should a degradation in information occur, the network loses the pragmatic trait of its intelligence and may iterate the goal-seeking endeavor.
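The exponential aging of information described above can be sketched as follows (the function name and the numeric values for the initial information and time constant are illustrative assumptions):

```python
import math

def degraded_information(i0, t, tau):
    """Exponentially aging neural information: I(t) = I0 * exp(-t / tau)."""
    return i0 * math.exp(-t / tau)

i0, tau = 1.0, 5.0   # hypothetical initial information and decay time constant
for t in (0.0, 5.0, 15.0):
    print(t, degraded_information(i0, t, tau))
```

After one time constant the information falls to exp(−1) ≈ 37% of its initial value, beyond which its pragmatic utility is taken to be lost.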

Another form of degradation perceived in neural information pertains to the delays encountered. Suppose the control-loop error information is delayed when it arrives at the controlling section; it is of no pragmatic value, as it will not reflect the true neural output state because the global state of the neural complex would have changed considerably by then. In other words, the delayed neural information is devalued of its “usefulness” (or attains nonpragmatic value) at the receiving node, even though the input and output contents of syntactic information remain the same.

In either case of information degradation, the value of information (devalued for different reasons) can be specified as λI, where λ depicts the pragmatic measure of information. It is also likely that there could be information enhancement due to redundancies added in the information processor. This redundancy could be a predictor (or a precursor, such as the synchronization of cellular state-transitional spikes) of information which would tend to obviate the synaptic delays involved. The corresponding time dependency of the enhanced information can be represented as I(t) = I0 exp(+t/τen), where τen is the enhancement time constant.

The information aging and/or enhancement may also be suspended while the neural dynamics goes through a nonaging or nonenhancement (quiescent) period. This quiescent period corresponds to the refractory effects in the neurocellular response.

The properties of neural information dynamics can be described by appropriate informational transfer functions. Depicting the time-dependency of the information function as I(t), its Laplace transform I(s) is the informational transfer function which describes the changes in the processor algorithm of the neural network.
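As a sketch of how such a transfer function arises, take the exponentially aging information I(t) = I0·exp(−t/τ) discussed earlier; its Laplace transform is I0/(s + 1/τ). The check below (all numeric values hypothetical) integrates the defining transform integral directly with a midpoint rule and compares it with the closed form:

```python
import math

def laplace_numeric(f, s, t_max=100.0, dt=1e-3):
    """Numerical Laplace transform: midpoint-rule integral of
    f(t) * exp(-s*t) over [0, t_max]."""
    n = round(t_max / dt)
    return dt * sum(f((k + 0.5) * dt) * math.exp(-s * (k + 0.5) * dt)
                    for k in range(n))

i0, tau = 1.0, 5.0                       # hypothetical decay parameters
decay = lambda t: i0 * math.exp(-t / tau)
s = 0.7                                  # hypothetical transform variable

numeric = laplace_numeric(decay, s)
analytic = i0 / (s + 1.0 / tau)          # closed-form transform of the decay
print(numeric, analytic)                 # the two should agree closely
```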

The loss of neuronal information due to degradation can be specified by an informational efficiency ηI, which is defined as:

ηI = Ii / Imax

where Imax is the maximum usable information in the system and Ii is the available information regarding the ith subset in the spread-space. This informational efficiency is distributed among the sections of the neural network, namely, the input section, the output unit, and the control processor.
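The efficiency defined above is a simple ratio of available to maximum usable information; a minimal sketch (function name and values hypothetical):

```python
def informational_efficiency(i_available, i_max):
    """Informational efficiency: available information in the ith subset
    divided by the maximum usable information (dimensionless, 0 to 1)."""
    if i_max <= 0:
        raise ValueError("maximum usable information must be positive")
    return i_available / i_max

print(informational_efficiency(0.8, 2.0))  # 0.4
```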

Copyright © CRC Press LLC
