The above network has three features: syntax, semantics, and pragmatics. Syntax refers to the signs or symbols associated with the network; a syntactic representation breaks the network into fragments of cellular units, or parsed units. Each syntactic unit has a characteristic meaning referred to as its semantics. While a syntactic unit has independent significance, the semantics of a sign is meaningful only within the universal set of syntactics. The pragmatic aspect of the network refers to the use of a sign when the system is in a state of activity. With these semiotic attributes, the fragment of a neural network depicted in Figure 8.5 is termed a semantic network.

When an excited state sets in at a cell (due to inherent or exterior cellular disturbances), it proliferates across the neural complex to activate the other cells of the interconnected network. In a short time, the dynamic state-transitional process stabilizes, freezing the network into a static excited state. Referring to Figure 8.5, let S denote the realization of an excited state in a particular cell and represent an objective goal. It is assumed that there are two rule-based state-transitional proliferations which could accomplish this: one via the set of cells {a, b, c, f} and the other through the set {X, Y, Z, R}. Considering the first possibility, let the cells {a, b, c} be initially activated. All the associated axonic outputs then become activated. This leads to the initiation of synapse 1 at the cell f; then, through the activation of this cell, an axonic output renders the cell S excited through synaptic coupling 4. The above procedure successfully meets the objective function, or goal, if and only if S is excited. Otherwise, a feedback mechanism causes the process to be repeated, on the basis of information gained by a learning procedure, until the objective is met.

The process of activation to achieve the goal can be depicted in terms of semantic network representations with the associated procedural information pertinent to the network. Let every synapse stand for a certain procedure stored in the network memory (via training). The numerical signs (1, 2, ..., etc.) designate the names of the procedures, and the activation of a synapse is equivalent to a call of the respective procedure (from memory) for execution. The initiation of the network cells can be associated with the presence of certain starting information required to solve the problem (or to attain the goal). In the present case, this information is designated by the syntactics (a, b, c, ..., etc.). The initiation of synapse 1 may call for a procedure of evaluating, say, func(c). Subsequently, synapse 4 calls for a procedure, say, xy[func(c)]. If this end result is sufficient and successfully accomplished, the activation of S takes place and the goal is attained. Otherwise, an error feedback would invoke an alternative protocol (within the scope of the learning rules). Nonconvergence to the objective function could be due to nonobservance of the syntactic rules (caused by noise, etc.). In this respect, a neural network works essentially as a recognizer of a set of syntactics.
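This goal-seeking loop can be illustrated with a minimal sketch. The cell names {a, b, c, f, S}, the synapse labels 1 and 4, and the procedure names func and xy follow the text; the concrete procedure bodies, the excitation level GOAL, and the feedback step are hypothetical choices made only for illustration, not the author's implementation.

```python
# A minimal sketch of the goal-seeking activation described above.
# Procedure bodies, the goal level, and the feedback step are assumed.

GOAL = 7.0                         # assumed excitation level marking S as excited

def func(c):                       # procedure called when synapse 1 initiates at cell f
    return 2.0 * c                 # assumed form

def xy(u):                         # procedure called at synapse 4 (coupling to S)
    return u + 1.0                 # assumed form

def goal_seek(a, b, c, max_trials=100, step=0.1):
    """Repeat the synapse-1 -> synapse-4 procedure chain, feeding the error
    back into the starting information until S is excited or trials run out."""
    for _ in range(max_trials):
        s = xy(func(c))            # activation routed through synapses 1 and 4
        error = GOAL - s
        if abs(error) < 1e-6:      # objective function met: S is excited
            return s
        c += step * error          # feedback / learning: adjust starting information
    return None                    # nonconvergence (e.g., noisy syntactics)

# Cells a and b also carry starting information, but only c enters the assumed procedures.
print(goal_seek(a=1.0, b=1.0, c=0.5))
```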
An alternative goal-seeking procedure could be as follows. The initial activation of cells {X, Y, Z} excites synapse 3 via an axonic input. The procedural consideration at 3 could be, say, (X + Y + Z)R. This information is then routed to synapse 6, where the procedure could be a comparison of (X + Y + Z)R and S. If the difference is zero, the goal is reached. Otherwise, feedback error information is again generated to reiterate the procedure by means of a learning rule. Achieving the goal thus renders the transition of the network into static excited states, and the search for any additional procedural information is terminated.

The fragmented or parsed network constitutes a tree of top-down or bottom-up parses, with the goal-seeking protocol commencing at a set of synapses and culminating at the goal site. In these parsed-tree representations, the error-information feedback and the subsequent control processing (toward the minimization of the error so as to achieve the objective function) constitute an error-seeking scheme. The relevant strategy makes use of a set of tokens called the synchronization set (or synchronizing tokens). The tokens are identities of a set of manageable subsets (termed lexemes) of cellular states obtained by parsing the cellular universal set.

The exemplification of the semantic network as above indicates the roles of procedural and declarative information in neurocybernetic endeavors. The first refers to information stored as traditional memory, and the second to the specially arranged model (the parsed tree) represented by the synaptic interconnections. The information associated with the semiotic model of a neural control processor can use three types of languages for knowledge representation, namely, predicative, relational, and frame languages. The predicative language employs an algorithmic notation (like a formula) to describe declarative information. Relational languages in a semantic network state the explicit relationships between the cells; that is, they represent the weighting functional aspects of the interconnections. The gist of these relationships represents the procedural knowledge of executing a functional characteristic at the synapse. For this purpose, the relational language that can be deployed is a set of syntagmatic chains. These chains are constituted by elementary syntagmas. For example, the weight and functional relation across the pair of interconnected synapses 2 and 1 in Figure 8.5 can be specified by an elementary syntagma (α2 βc α1), where α2 and α1 are the synaptic elements interconnected by a relation βc. Between nodes 2 and 1 in Figure 8.5, this relation includes the state of the cell and the weights associated between 2 and 1. Proceeding further from synapse 1 to synapse 4, the corresponding network activity can be depicted by ((α2 βc α1) βf α4), which corresponds to the information pertinent to the activation of S. The entire relational language representation, as in the above example, carries both declarative and procedural information. Thus, in tree modeling, information on the neural data structure, functioning, and control of the neural complex can be represented in a language format constituted by syntagmatic chains.
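The syntagmatic-chain notation can likewise be given a small illustrative sketch. The synapse labels α1, α2, α4 and the relations βc, βf follow Figure 8.5; representing a syntagma as a nested (left, relation, right) triple, and the particular state values, weights, and multiplicative functional form used here are assumptions made only for this example.

```python
# A minimal sketch of the syntagmatic-chain representation described above.
# State/weight values and the multiplicative evaluation rule are assumed.

# Declarative part: each relation carries a cell state and an interconnection weight.
relations = {
    "beta_c": {"state": 1.0, "weight": 0.7},   # relation between synapses 2 and 1
    "beta_f": {"state": 1.0, "weight": 0.9},   # relation between synapses 1 and 4
}

# Elementary syntagma (alpha_2 beta_c alpha_1) and the nested chain
# ((alpha_2 beta_c alpha_1) beta_f alpha_4) leading to the activation of S.
elementary = ("alpha_2", "beta_c", "alpha_1")
chain      = (elementary, "beta_f", "alpha_4")

def evaluate(syntagma, inputs):
    """Procedural part: recursively execute the chain, weighting the left
    operand by its relation before passing the result to the right synapse."""
    left, rel, _right = syntagma
    x = evaluate(left, inputs) if isinstance(left, tuple) else inputs[left]
    r = relations[rel]
    return r["state"] * r["weight"] * x        # assumed functional form

# Usage: starting information at synapse alpha_2 propagates through the chain.
print(evaluate(chain, {"alpha_2": 1.0}))       # -> 0.63, driving the activation of S
```

The nesting of triples mirrors the parsed tree: evaluating the outer triple first forces evaluation of the inner elementary syntagma, so both the declarative relations and the procedural order of execution are captured in one structure.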