which refers to the divergence from the objective characterized by the parsed groups of subsets, namely, in reference to the entropy measure. The number of groups, each with a fixed number of subsets m, is determined by the combination C(κ, m). Let the set be denoted by y and its subsets (y1, y2, ..., yκ) be divided into two sample spaces such that the first group (y1, y2, ..., yh) represents the subsets which are still under the learning schedule to reach the objective function, and (yh+1, yh+2, ..., yκ) are the subsets which have been annealed so that the dynamic states of the neurons have almost reached the steady-state condition. The first (learning) group therefore comprises (y1, y2, ..., yh), and the second group (which is closer to the objective function) is PM = (yh+1, yh+2, ..., yκ).
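As a concrete illustration, the partitioning of the κ subsets and the count of possible m-subset groups can be sketched as below. This is a minimal sketch only; the convergence predicate and the function names are hypothetical and not taken from the text.

    import math

    def partition_subsets(subsets, has_converged):
        """Split the subsets y1, ..., y_kappa into the learning group (still
        being driven toward the objective function) and the converged group PM
        (annealed so that the neuronal states are close to steady state).
        has_converged is a hypothetical predicate supplied by the training loop."""
        learning_group = [y for y in subsets if not has_converged(y)]
        pm_group = [y for y in subsets if has_converged(y)]
        return learning_group, pm_group

    def number_of_groups(kappa, m):
        """Number of distinct groups of m subsets drawn from kappa subsets,
        i.e., the combination C(kappa, m) = kappa! / (m! (kappa - m)!)."""
        return math.comb(kappa, m)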

The differential measure of the features in these two groups can be specified by a matrix F represented as follows:

The corresponding average error <ε> is evaluated as

where q is the number of subsets in the group of converged subsets (PM) and the summation is performed only with respect to the subsets belonging to PM.
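A minimal sketch of this evaluation, assuming the per-subset error values for the converged group PM are available (the names below are illustrative, not from the text):

    def average_error(errors_in_pm):
        """Average error <e> taken only over the q subsets belonging to the
        converged group PM (errors_in_pm holds one error value per such subset)."""
        q = len(errors_in_pm)
        return sum(errors_in_pm) / q if q else 0.0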

The back-propagation algorithm in the neural network attempts to gradually reduce the error specified by Equation (8.37) in steps. To increase the rate of algorithmic convergence, the divergence CJS can be weighted at each step of the algorithm so that ζ = (1 - <ε>) approaches a maximum ζmax → 1. This condition is represented by

The elements of the divergence matrix at the rth step of the algorithm are determined as follows:
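The stepwise weighting can be pictured as in the loop below. This is schematic only: Equation (8.37) and the actual rule for weighting the divergence-matrix elements are not reproduced in the text, so the callables bp_step, divergence_at, and average_error are assumptions.

    def anneal_toward_objective(bp_step, divergence_at, average_error,
                                max_steps=1000, tol=1e-3):
        """Run weighted back-propagation steps until zeta = (1 - <e>) is close
        to its maximum zeta_max = 1.  divergence_at(r) is assumed to return the
        elements of the divergence matrix at step r, used here only as weights."""
        zeta = 0.0
        for r in range(max_steps):
            weights = divergence_at(r)      # divergence-matrix elements at step r
            bp_step(weights)                # one weighted back-propagation update
            zeta = 1.0 - average_error()    # figure of merit zeta = (1 - <e>)
            if zeta >= 1.0 - tol:           # zeta has essentially reached zeta_max
                break
        return zeta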

The method of taking the arguments into account by parsed groups constitutes an algorithm for evaluating a generalized objective criterion when the initial information is insufficient to compute all the values for the complete vector set y.

8.12 Semiotic Framework of Neuroinformatics

As discussed earlier, the neural complex bears three domains of informatics: the first embodies the input stage, the second is the processing stage, and the third refers to the controlling stage.

The associated knowledge or information in the first section is largely descriptive of the environment of the domain space accommodating the neurons. It represents declarative knowledge which reflects the structure and composition of the neural environment. In contrast, the second section involves activities which process the neuronal state-transitions across the interconnected cells according to certain instructions, rules, and learning protocols. In other words, a set of rule-based algorithms depicting the knowledge (or information) needed to achieve the goal constitutes the processing section of the neural complex. The associative memory of a neural network stores much of this declarative information.

Likewise, the controlling section of the network has information looped into the system to refine the achievement of the goal or to minimize the organizational deficiency (or the divergence measure) which depicts the offsets in realizing the objective function. In this self-control endeavor, the neural automaton has knowledge or information which is again largely procedural. It represents a collection of information which, on the basis of the phenomenological aspects of the neurons, reflects the rational relationship between them in evaluating the organizational deficiency.

Therefore, the neural complex and its feedback system harmoniously combine the declarative and procedural information. Should the control activity rely only on procedural informatics, it represents the conventional or classical model. However, due to the blend of declarative informatics, the neural controlling processor is essentially a semiotic model — a model that depicts a sum of the declarative knowledge pertinent to the controlled object, its inner structures, characteristic performance and response to control actions. The control actions themselves are, however, rule-based or procedural in the informatic domain.

The cohesive blend of declarative and procedural informatics in the neural automaton permits its representation by a semiotic model. That is, the neural complex is a system that could be studied via semiotics or the science of sign systems.

The semiotic modeling of the neural network relies essentially on the information pertinent to the controlled object, with the knowledge on its control being present in the associative memory so that the system can be taught (or made to learn) and can generate procedural knowledge by processing the stored control information [115].

To understand the applicability of semiotic concepts to neural activity, a fragment of an interconnected network of neuronal cells is illustrated in Figure 8.5. This fragment has a set of cells {X, Y, Z, R, S, f, a, b, c} with axonal interconnections at synapses denoted by the set of numbers {1, 2, 3, 4, 5}. Each synapse can be in either of two dichotomous states, namely, excited or inhibited. If some of a cell's synapses are excited, that cell also becomes excited, and this state is transmitted along all the outgoing axons, which eventually terminate on other synapses. A synapse attains the excited state only when all of its incoming axons are activated.


Figure 8.5  Semantic network model of the neural complex
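The excitation rule described above can be sketched directly. The wiring used below is an illustrative placeholder, since the actual connectivity is given only by Figure 8.5; the rule itself follows the text: a cell is excited if some of its synapses are excited, while a synapse is excited only when all of its incoming axons are activated.

    def propagate(cell_inputs, synapse_inputs, excited_synapses):
        """One pass of the excitation rule over a fragment such as Figure 8.5.
        cell_inputs:    cell    -> synapses impinging on that cell
        synapse_inputs: synapse -> cells whose axons terminate on that synapse"""
        excited_cells = {cell for cell, syns in cell_inputs.items()
                         if any(s in excited_synapses for s in syns)}
        newly_excited = {syn for syn, cells in synapse_inputs.items()
                         if cells and all(c in excited_cells for c in cells)}
        return excited_cells, excited_synapses | newly_excited

    # Illustrative (hypothetical) wiring over the cells {X, Y, Z, R, S, f, a, b, c}
    # and synapses {1, 2, 3, 4, 5}; the true connectivity is that of Figure 8.5.
    cell_inputs = {"X": [1], "Y": [2], "Z": [1, 3], "R": [4], "S": [5]}
    synapse_inputs = {2: ["X"], 3: ["X", "Y"], 4: ["Z"], 5: ["Z", "R"]}
    cells, synapses = propagate(cell_inputs, synapse_inputs, excited_synapses={1})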

