The McCulloch-Pitts neuron is a binary device, since it exists in one of two states, which can be designated active and inactive. Hence, it is often convenient to represent its state in binary arithmetic notation, namely, state 0 when it is inactive and state 1 when it is active. In the 1950s and 1960s, the first artificial neural networks, consisting of a single layer of neurons, were developed. These systems consist of a single layer of artificial neurons connected by weights to a set of inputs (as shown in Figure 3.3) and are known as perceptrons. As conceived by Rosenblatt [39], the perceptron is a simplified model of the biological mechanisms by which sensory information is processed in perception. Essentially, the system receives external stimuli through the sensory units, labeled SE. Several SE units are connected to each associative unit (AA unit), and an AA unit is on only if enough SE units are activated. These AA units constitute the first stage, or input units. As defined by Rosenblatt [40], a perceptron is a network composed of stimulus units, association units, and response units with a variable interaction matrix which depends on the sequence of past activity states of the network.
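As a concrete sketch (illustrative Python; the function names and the weight grid are assumptions, not from the text), a McCulloch-Pitts unit outputs 1 when the weighted sum of its binary inputs reaches a threshold. A two-input AND gate is realizable as a single such unit, whereas an exhaustive search over a coarse weight/threshold grid finds no single unit that realizes the exclusive-or function, since it is not linearly separable:

```python
import itertools

def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: state 1 (active) when the weighted sum of
    binary inputs reaches the threshold, state 0 (inactive) otherwise."""
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

# A two-input AND gate as a single threshold unit:
AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
assert all(mp_neuron([a, b], [1, 1], 2) == y for (a, b), y in AND.items())

# Exhaustive search over a coarse weight/threshold grid: no single unit
# reproduces the exclusive-or function (it is not linearly separable).
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
grid = [v / 2 for v in range(-8, 9)]        # -4.0 to 4.0 in steps of 0.5
xor_found = any(
    all(mp_neuron([a, b], [w1, w2], t) == y for (a, b), y in XOR.items())
    for w1, w2, t in itertools.product(grid, repeat=3)
)
print(xor_found)  # False
```

The failed search illustrates the limitation of single-layer perceptrons discussed in this chapter; a second layer of threshold units removes it.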
A perceptron can be represented as a logical net with cybernetic notions, as shown in Figure 3.4. It was found, however, that these single-layer perceptrons have limited computational abilities and are incapable of solving even simple problems such as the function performed by an exclusive-or gate. Following these observations, artificial neural networks were thought to lack usefulness, and subsequent research stagnated except for a few dedicated efforts due to Kohonen, Grossberg, and Anderson [41]. In the 1980s, more powerful multilayer networks, which could handle problems such as the exclusive-or function, emerged; and research in neural networks has been growing continually since then.

3.2 Mathematics of Neural Activities

3.2.1 General considerations

Mathematical depiction of neural activities purports the analytical visualization of the function of real neurons. In its simplest form, as stated earlier, the mathematical neuron refers to the McCulloch-Pitts logical device, which, when excited (or inhibited) by its inputs, delivers an output provided a set threshold is exceeded. An extended model incorporates the time-course of the neuronal (internal) potential function, describing the current value of the potential for each neuron at an instant t, as well as the times of firing of all attached presynaptic neurons back to the times (t - Δt). By storing and continuously updating these potential-time data, the evolution of activity on the network as a function of time can be modeled mathematically. Thus, classically, the mathematics of neurons has referred to two basic considerations:
The logical neuron lends itself to analysis through boolean space; therefore, an isomorphism between the bistable states of the neurons and the corresponding logic networks can be established via appropriate logical expressions or boolean functions, as advocated by McCulloch and Pitts. Further, by representing the state of a logical network (of neurons) with a vector having 0s and 1s for its elements, and by setting a threshold linearly related to this vector, the development of activity in the network can be specified in matrix form. The logical neuron, or McCulloch-Pitts network, also has the characteristic feature that the state vector x(t) depends only on x(t - 1); in other words, every state is affected only by the state at the preceding time-event. This depicts the first-order markovian attribute of the logical neuron model. Further, the logical neural network follows the principle of duality: at any time, the state of a network is given by specifying which neurons are firing at that time; or, as a duality, it would equally be given by specifying which neurons are not firing. In other words, the neural activity can be traced by considering either the firing activity or, equivalently, the nonfiring activity.

Referring to real neurons, the proliferation of action potentials along the time-scale represents a time series of a state variable; and the sequence of times at which these action potentials appear as spikes (corresponding to a cell firing spontaneously) does not normally occur in a regular or periodic fashion. That is, the spike-train refers to a process developing in time according to some probabilistic regime. In its simplest form, the stochastic process of neuronal spike occurrence can be modeled as a poissonian process, with the assumption that the probability of the cell firing in any interval of time is proportional to that time interval.
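The matrix-form evolution of a logical network can be sketched as follows (an illustrative Python sketch; the three-neuron network, its weights, and its thresholds are invented for the example, not taken from the text):

```python
import numpy as np

def step(x, W, theta):
    """One synchronous update: neuron i fires at time t iff its weighted
    input from the state at t-1 reaches its threshold theta[i]. The next
    state depends only on x(t-1) -- the first-order markovian attribute."""
    return (W @ x >= theta).astype(int)

# Illustrative 3-neuron logical network (weights and thresholds made up).
W = np.array([[0, 1, 1],
              [1, 0, 0],
              [0, 1, 0]])
theta = np.array([2, 1, 1])

x = np.array([1, 1, 0])        # state vector at t = 0: neurons 0 and 1 firing
history = [x]
for _ in range(4):
    x = step(x, W, theta)
    history.append(x)
# The trajectory of 0/1 state vectors traces the firing activity in time;
# by duality, the complement 1 - x equivalently traces the nonfiring activity.
```

Each entry of `history` specifies which neurons fire at that time-event, so the list records the development of activity in the network exactly as the state-vector description above prescribes.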
In this process, the constant of proportionality ensures that the firing events in any given interval are not influenced by the preceding firings of the cell; in other words, the process is essentially regarded as memoryless. The feasibility of attributing poissonian behavior to neural activity is constrained by the condition that, with the poissonian characteristic, even a single spike is sufficient to fire the cell. There is mathematical support for this possibility on the basis of probability theory: even if the firing action of a neuronal cell is non-poissonian, the pooling of a large number of non-poissonian stochastic events leads to a resultant process which approximates a poissonian one; that is, a non-poissonian sequence of impulse trains arriving at a synapse, when observed postsynaptically, would be perceived as a poissonian process, inasmuch as the synapses involved in this process are innumerable. Griffith [11-14] points out that, even if the process underlying the sequence of spikes is not closely poissonian, there should always be a poissonian attribute for large intervals between spikes. This is because, a long time t after the cell has last fired, it must surely have lost the memory of exactly when it did so. Therefore, the probability of firing settles down to a constant value for large t; that is, the time-interval distribution p(t) has an exponential tail for sufficiently large t: p(t) = λe^(-λt), where λ is a constant and ∫₀^∞ p(t) dt = 1. The mean time < t > of this process is equal to 1/λ.
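A quick simulation (illustrative Python; the rate λ is arbitrary) confirms the stated property of the exponential interval distribution, namely that memoryless interspike intervals drawn from p(t) = λe^(-λt) have mean < t > = 1/λ:

```python
import random

random.seed(0)
lam = 5.0          # firing rate lambda (spikes per unit time), illustrative
# Interspike intervals of a poissonian spike train are exponentially
# distributed: p(t) = lam * exp(-lam * t), with mean interval 1/lam.
intervals = [random.expovariate(lam) for _ in range(100_000)]
mean_t = sum(intervals) / len(intervals)
# mean_t comes out close to 1/lam = 0.2
```

The memoryless character shows up in the sample: however long a cell has already waited, the distribution of its remaining wait is the same exponential, which is why p(t) settles to the λe^(-λt) tail for large t.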
Copyright © CRC Press LLC