8.11 Jensen-Shannon Divergence Measure

As defined earlier, disorderliness refers to a measure of the deviation of a selected variable, say $y_i$, with reference to a specified target standard of order $y_T$. In a geometrical representation, $y_T$ can be denoted by a vector corresponding to the center of a region of orderliness within which a stipulated stochastical extent of orderliness is specified within bounds. The disorderliness at the ith realization in the parameter-spread space is therefore given by Equation (8.2) with j replaced by i; in that expression, $|y_i - y_T| = Q_i$ refers to the magnitude of the error vector and $D(y_i)$ is the distance from the center to the boundary of the quasi-ordered region. The corresponding goal-associated positional entropy follows from this disorderliness measure.

Considering the average information for discrimination in favor of $p_i$ against $p_j$, namely the directed (Kullback-Leibler) divergence $K(p_i, p_j)$, together with its counterpart $K(p_j, p_i)$, the symmetric sum

$$J(p_i, p_j) = K(p_i, p_j) + K(p_j, p_i)$$

is known as the J-divergence and, in the present case, represents the divergence of disorganization associated with the subspace region of the ith realization and that of the jth realization.

More appropriately, each of the realizations should be weighted by its probability distribution so as to specify its individual strength in the goal-seeking endeavor. Suppose $\Pi_i$ and $\Pi_j$ ($\Pi_i, \Pi_j \geq 0$ and $\Pi_i + \Pi_j = 1$) are the weights of the two probability distributions $p_i$ and $p_j$, respectively. Then a generalized divergence measure (known as the Jensen-Shannon measure) can be stipulated as follows [114]:

$$JS_{\Pi}(p_i, p_j) = H(\Pi_i p_i + \Pi_j p_j) - \Pi_i H(p_i) - \Pi_j H(p_j)$$

where $H(\cdot)$ denotes the Shannon entropy. This measure is nonnegative and equal to zero when $p_i = p_j$. It also provides upper and lower bounds on Bayes' probability of error. The Jensen-Shannon (JS) divergence is ideal for describing the variations between the subspaces, or the goal-seeking realizations, and it measures the distances between the random-graph depictions of such realizations pertinent to the entropy plane.

In a more generalized form, the Jensen-Shannon divergence measure can be extended to the entire finite number of subspace realizations. Let $p_1, p_2, \ldots, p_{\kappa}$ be $\kappa$ probability distributions pertinent to $\kappa$ subspaces with weights $\Pi_1, \Pi_2, \ldots, \Pi_{\kappa}$, respectively. Then the generalized Jensen-Shannon divergence can be defined as:

$$JS_{\Pi}(p_1, p_2, \ldots, p_{\kappa}) = H\!\left(\sum_{i=1}^{\kappa} \Pi_i p_i\right) - \sum_{i=1}^{\kappa} \Pi_i H(p_i)$$

where $\Pi = (\Pi_1, \Pi_2, \ldots, \Pi_{\kappa})$. The control information-processing unit sees $\kappa$ classes $c_1, c_2, \ldots, c_{\kappa}$ with a priori probabilities $\Pi_1, \Pi_2, \ldots, \Pi_{\kappa}$. Each class specifies a distinct or unique strength of achieving the goal, that is, of minimizing the distance between its subspace and the objective goal. The control information processing now faces a decision problem whose Bayes' error for the $\kappa$ classes is written as:

$$P_e = 1 - \sum_{x} \max_{i} \left[\Pi_i \, p_i(x)\right]$$

This error is limited by upper and lower bounds given by [114]:

$$\frac{1}{4(\kappa - 1)}\left[H(\Pi) - JS_{\Pi}(p_1, \ldots, p_{\kappa})\right]^2 \;\leq\; P_e \;\leq\; \frac{1}{2}\left[H(\Pi) - JS_{\Pi}(p_1, \ldots, p_{\kappa})\right]$$

where $H(\Pi) = -\sum_{i=1}^{\kappa} \Pi_i \log \Pi_i$ is the entropy of the a priori (weight) distribution and $JS_{\Pi}(p_1, \ldots, p_{\kappa})$ is the generalized Jensen-Shannon divergence defined above.

The JS divergence measure can be regarded as an estimate of the extent of disorganization. The performance of the neural complex in the self-control endeavor to reduce the disorganization involved can be determined by maximizing an objective-function criterion, subject to certain constraints. In the information domain, this can be accomplished by achieving a minimal JS divergence. Denoting the information pertinent to the subsets $y_i$ and $y_j$ by their respective distributions $p_i$ and $p_j$, the JS measure between them follows from the definition above. Therefore, a generalized control criterion can be specified with the arguments taken by groups, each group comprising a limited number of subsets $\{y_1, y_2, \ldots, y_m\}$; the corresponding performance criterion is then the minimization of the weighted JS divergence over these groups, subject to the constraints indicated above.
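By way of a numerical sketch (not drawn from the text), the weighted and generalized Jensen-Shannon measures defined above can be evaluated directly for discrete distributions. The Python/NumPy fragment below uses hypothetical distributions p_i, p_j and weights Π_i, Π_j purely for illustration, and assumes base-2 logarithms; the base only fixes the units, and the properties quoted above (nonnegativity, vanishing when the distributions coincide) hold in any base.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum_x p(x) log2 p(x), skipping zero-probability terms."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log2(p[nz]))

def js_divergence(dists, weights):
    """Generalized Jensen-Shannon divergence:
    JS_Pi(p_1, ..., p_k) = H(sum_i Pi_i p_i) - sum_i Pi_i H(p_i)."""
    dists = [np.asarray(p, dtype=float) for p in dists]
    mixture = sum(w * p for w, p in zip(weights, dists))
    return shannon_entropy(mixture) - sum(
        w * shannon_entropy(p) for w, p in zip(weights, dists))

# Hypothetical two-realization example: distributions p_i, p_j with weights Pi_i, Pi_j.
p_i = [0.70, 0.20, 0.10]
p_j = [0.30, 0.40, 0.30]
print(js_divergence([p_i, p_j], [0.5, 0.5]))   # positive, since p_i differs from p_j
print(js_divergence([p_i, p_i], [0.5, 0.5]))   # 0.0 when the distributions coincide
```

The same js_divergence helper handles the κ-distribution case directly, since it operates on an arbitrary-length list of distributions and weights.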
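The bounds on Bayes' probability of error quoted from [114] can likewise be checked numerically. The following sketch uses hypothetical class-conditional distributions over a four-symbol alphabet and hypothetical a priori weights (none of these values come from the text), again with base-2 logarithms assumed; it evaluates P_e and the JS-based lower and upper bounds for κ = 3 classes.

```python
import numpy as np

def shannon_entropy(p):
    """Base-2 Shannon entropy, skipping zero-probability terms."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log2(p[nz]))

def js_divergence(dists, weights):
    """Generalized JS divergence: H(sum_i Pi_i p_i) - sum_i Pi_i H(p_i)."""
    mixture = sum(w * np.asarray(p, dtype=float) for w, p in zip(weights, dists))
    return shannon_entropy(mixture) - sum(
        w * shannon_entropy(p) for w, p in zip(weights, dists))

# Hypothetical kappa = 3 class-conditional distributions and a priori weights Pi.
dists = np.array([[0.60, 0.20, 0.10, 0.10],
                  [0.10, 0.50, 0.30, 0.10],
                  [0.10, 0.10, 0.20, 0.60]])
weights = np.array([0.5, 0.3, 0.2])
kappa = len(weights)

# Bayes probability of error: P_e = 1 - sum_x max_i [Pi_i p_i(x)].
p_e = 1.0 - np.sum(np.max(weights[:, None] * dists, axis=0))

# JS-based bounds of [114]:
#   (1 / (4*(kappa - 1))) * (H(Pi) - JS)^2  <=  P_e  <=  (1/2) * (H(Pi) - JS)
gap = shannon_entropy(weights) - js_divergence(dists, weights)
lower = gap**2 / (4.0 * (kappa - 1))
upper = gap / 2.0
print(f"lower bound {lower:.4f} <= P_e {p_e:.4f} <= upper bound {upper:.4f}")
```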