THE CRITERION FOR FEATURE INFORMATIVENESS ESTIMATION IN MULTI-ROBOT TEAM CONTROL

Context. The task of automating the feature set informativeness estimation process in multi-robot team control is solved. The object of the research is the process of multi-robot team control. The subject of the research is the criterion of feature set informativeness estimation. Objective. The research objective is to develop a criterion for feature set informativeness estimation in multi-robot team control. Method. A criterion for feature set informativeness estimation is proposed. The developed criterion is based on the idea that feature set informativeness is computed according to the values of the prior probabilities of finding features in the descriptions of the environment states. The use of the proposed criterion makes it possible to solve the problem of feature set informativeness estimation efficiently, leading to an effective solution of the multi-robot control task. The developed criterion is based on the maximizing mutual information criterion and is applicable when the measurements are interdependent and the environment has a variable number of states. The criterion does not require constructing models based on the estimated feature combinations, thereby considerably reducing the time and computing costs of multi-robot team control. Application of the proposed criterion for feature set informativeness estimation makes it possible to decide how much a new observation will increase the certainty of the robots' beliefs about the observed environment state. Results. The software which implements the proposed criterion for feature set informativeness estimation and allows multi-robot teams to be managed has been developed. Conclusions. The conducted experiments have confirmed the operability of the proposed criterion for feature set informativeness estimation and allow it to be recommended for multi-robot team control in practice.
Prospects for further research include the modification of the known multi-robot team control methods and the development of new ones based on the proposed criterion for feature set informativeness estimation.

NOMENCLATURE
p(x_kj / L_i) is the probability of the situation when feature k has value j on condition that the output parameter has value L_i; N is the number of possible environment states; p(L_i / x_kj) is the probability of the situation when the environment state has value L_i on condition that the feature has value x_kj; J is the number of feature values, j = 1, J; n is the number of features; H(L) is the prior entropy of the environment state; E_k is the informativeness of the k-th feature; E is the informativeness of the feature set; I* is the relative informativeness of the feature set; E_β is the informativeness of the β-th feature; H_r(L / x) is the required entropy of the environment state; I_r is the required amount of mutual information of the environment state; I is the amount of mutual information which the multi-robot team receives from an action;

INTRODUCTION
Multi-robot teams that intelligently gather information have the potential to transform industries as diverse as agriculture, space exploration, mining, environmental monitoring, search and rescue, and construction. Despite large amounts of research effort on active perception problems, significant challenges still remain.
The ultimate goal of an active perception problem is to estimate some unknown quantity of interest. Most modern perception approaches are probabilistic [1]; instead of forming a single concrete guess of what something is, they determine a probability distribution over the possible values that it could take. Consequently, the goal of an active perception strategy is to reduce the uncertainty of the probabilistic estimate as quickly as possible. To formally define this goal of "uncertainty reduction," many modern approaches use information theory. According to them, the robots are controlled to seek informative observations by moving along the gradient of mutual information at each time step. Mutual information is a quantity from information theory that predicts how much a new observation will increase the certainty of the robots' beliefs about the environment state. Thus, by moving along the mutual information gradient, the robots maximally increase the informativeness of their next observation.
To control this complex process we have to use feature set informativeness estimation methods that allow us to decide how much a new observation will increase the certainty of the robots' beliefs about the environment state.
Feature set informativeness estimation methods generally use the classification error obtained by a model constructed from the estimated data set as the criterion of feature set informativeness estimation [2, 3]. Such an approach requires significant computational and time costs, because it involves the computationally complex procedure of model synthesis, which has to be performed for every estimated feature set [2, 3].
Informational criteria [2, 4] do not require performing the computationally complex procedure of mathematical model synthesis to estimate feature set informativeness. However, in the known approaches to multi-robot team control, such criteria assume that the features of the initial data sample are independent. This makes them difficult to use in practice and unsuitable for situations when the features in the initial samples are interdependent. Also, although there is a close relationship between mutual information and classification error, there is no functional relationship between these values. This does not allow a precise assessment of the classification error in the known approaches to multi-robot team control.
The described shortcomings make the development of a criterion for feature set informativeness estimation that is free from these drawbacks a topical task.
The research objective is to develop a criterion for feature set informativeness estimation which makes it possible to estimate the classification error in the task of multi-robot team control.

PROBLEM STATEMENT
Suppose we have prior probabilities p(L_i), i = 1, N, of the environment states, and a set of possible robot actions c_d, d = 1, D. Then the problem of the multi-robot team control strategy can be ideally stated [1] as driving the team to obtain the measurements which maximize the mutual information I = H(L) − H(L / x) received from an action.

REVIEW OF THE LITERATURE
Bayesian approaches for estimation have a rich history in robotics, and mutual information has recently emerged as a powerful tool for controlling robots to improve the quality of the Bayesian estimation, particularly in multi-robot systems. As an early example, in [5, 6] it was proposed to control multiple robot platforms so as to increase the mutual information between the robots' sensors and the position of a target in tracking applications. In [7] a similar method was used for exploring and mapping uncertain environments. The problem of planning paths through an environment to optimize mutual information was investigated in [8-10].
In [11] mutual information was used for control with highly non-Gaussian belief states, achieving scalability by using a pairwise approximation to mutual information. In [12] the same gradient of mutual information was used to drive a network of robots in general environment state estimation tasks. In [13] a consensus algorithm was developed to achieve decentralization, together with a sampling strategy to reduce complexity.
Information-theoretic cost metrics have also been used to manage sensors [14], and have led to algorithms for controlling sensor networks for information gathering over an area by parameterizing the motion of collectives of vehicles [15]. The optimal probing control law to minimize Shannon entropy for the dual control problem was shown to be the input that maximizes mutual information [16]. A property relating probability distributions, the alpha-divergence, was computed for particle filters and applied to manage sensors with binary measurements, though scalability in sensor network size was not addressed, and Shannon entropy was only found in the limit of the presented equations [17]. Probability of detection was computed using both grid cell and particle filter estimators, and experimentally demonstrated [18]. An approximate method was used to estimate the expected entropy for particle filters over a finite horizon [19]. Gaussian particle filtering was used with a mutual information objective function, though the technique approximates the posterior probability distribution as Gaussian at every update [20]. A version of mutual information approximation techniques was presented in [21].
Among planning approaches that seek to maximize mutual information, in [22] approximation guarantees were derived for greedy maximization of mutual information and other submodular set functions. These results were applied to mobile robot environmental monitoring problems in [23, 24]. However, these guarantees only hold in offline settings where teams do not update their actions based on the measurements they receive. In [25] a sampling-based strategy was developed for maximizing a variety of information metrics with asymptotic optimality guarantees. However, it assumes that information is additive across multiple measurements (i.e., that measurements are independent). This assumption limits cooperation in multi-robot settings [1] and can lead to overconfidence when considering multiple measurements of the same quantity.
Information-theoretic objectives have also been used for planning and control in robotics for related information-rich tasks involving uncertainty, such as inspection [26], environment modeling [27], extrinsic calibration of LIDAR sensors [28], visual servoing [29], and active object modeling [30]. They have also been used for applications outside of active perception. For example, Kretzschmar and Stachniss [31] use mutual information as a criterion for storing a minimal number of laser scans for map reconstruction.
It is necessary to admit that classification methods generally use the classification error obtained by a model constructed from the estimated data set as the criterion of statistical effectiveness estimation [2, 3]. But such an approach requires significant computational and time costs, because it involves the computationally complex procedure of model synthesis, which has to be performed for every estimated feature set [2].
Although there is a close relationship between mutual information and prediction or classification error, there is no functional relationship between these values. This does not allow a reliable assessment of the classification error [32] or an estimate of the ratio of incorrectly recognized measurements to the total number of measurements for multi-robot teams.
This means that for planning approaches to multi-robot team control which seek to maximize mutual information, increasing the mutual information is not the same as decreasing the ratio of incorrectly recognized measurements to the total number of measurements.
Thus, the disadvantages of the known criteria for feature set informativeness estimation in robotics make the development of a criterion free from the discovered drawbacks a topical task.

MATERIALS AND METHODS
The entropy of the environment state L on condition that the feature x_k has value x_kj can be defined as:

H(L / x_kj) = −Σ_{i=1..N} p(L_i / x_kj) log2 p(L_i / x_kj).

Using Bayes' rule:

p(L_i / x_kj) = p(x_kj / L_i) p(L_i) / p(x_kj).

To obtain the entropy of the solution, we find the sum of the values H(L / x_kj) over all feature values with weights proportional to the probability of occurrence of each value x_kj:

H(L / x_k) = Σ_{j=1..J} p(x_kj) H(L / x_kj).

The value of the prior entropy of the solution is found by the expression

H(L) = −Σ_{i=1..N} p(L_i) log2 p(L_i).

The mutual information of the environment state on condition that the measurements are independent is defined as

I = Σ_{k=1..n} E_k, where E_k = H(L) − H(L / x_k).

However, it is rather difficult to estimate the informativeness of a group of statistically related features using the Shannon measure. Therefore it is difficult to estimate the mutual information in situations when the measurements are interdependent.
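The computations above can be sketched in code. The following is a minimal illustration in pure Python (the function and variable names are ours, not the paper's):

```python
import math

def entropy(p):
    """Shannon entropy in bits; zero-probability terms contribute nothing."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def feature_informativeness(prior, likelihood):
    """E_k = H(L) - H(L / x_k) for a single feature x_k.

    prior      -- p(L_i), a list of length N
    likelihood -- p(x_kj / L_i), a nested list of shape N x J
    """
    J = len(likelihood[0])
    # p(x_kj) = sum_i p(x_kj / L_i) p(L_i)
    p_x = [sum(prior[i] * likelihood[i][j] for i in range(len(prior)))
           for j in range(J)]
    h_cond = 0.0
    for j in range(J):
        # Bayes' rule: p(L_i / x_kj) = p(x_kj / L_i) p(L_i) / p(x_kj)
        post = [prior[i] * likelihood[i][j] / p_x[j] for i in range(len(prior))]
        h_cond += p_x[j] * entropy(post)   # weighted sum gives H(L / x_k)
    return entropy(prior) - h_cond

# A feature whose value distribution depends strongly on the state is
# informative; one with identical distributions in every state is not.
prior = [0.5, 0.5]
print(feature_informativeness(prior, [[0.9, 0.1], [0.2, 0.8]]))
print(feature_informativeness(prior, [[0.5, 0.5], [0.5, 0.5]]))
```

The second call returns zero: such a feature cannot reduce the uncertainty about the environment state at all.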
To solve this problem we use the method defined in [33]. According to this method, the informativeness for situations when the features are interdependent can be estimated as

E = Σ_{k=1..n} E_k − Σ_{k=1..n−1} Σ_{β=k+1..n} γ_kβ E_β. (1)

The statistical relationships between the features in (1) are taken into account in all of their pair combinations. Connections of higher orders are not taken into account because they are non-essential for practical calculations [33]. As can be seen from (1), in order to find the informativeness of the feature set it is necessary to estimate the statistical relationships between the features, which are characterized by the coefficient γ_kβ. To do this we use a criterion based on the differences between the measured frequencies of the consistent occurrence of the discrete feature values and a hypothetical frequency distribution which corresponds to the condition of feature independence. First, assume that the features are statistically independent. The hypothetical frequency distribution which corresponds to this condition needs to be verified statistically. To do this, the Pearson criterion is used.
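A sketch of how a pairwise-corrected set informativeness of this kind could be evaluated once the coefficients γ_kβ are known. The exact correction term of [33] is not fully legible in the text, so the form below (subtracting γ_kβ times the informativeness of the second feature of each pair) is an assumption:

```python
def set_informativeness(E, gamma):
    """Pairwise-corrected set informativeness (a sketch, not the exact
    expression of [33]): the additive sum of the individual values E_k is
    reduced, for every pair k < b, by the statistical coupling coefficient
    gamma[k][b] times E[b].  gamma[k][b] = 0 for independent features.
    """
    n = len(E)
    total = sum(E)
    for k in range(n):
        for b in range(k + 1, n):
            total -= gamma[k][b] * E[b]
    return total

# Independent features keep the full additive sum; a fully coupled pair
# (gamma = 1) contributes the second feature's informativeness zero times.
E = [0.5, 0.25]
print(set_informativeness(E, [[0.0, 0.0], [0.0, 0.0]]))  # 0.75
print(set_informativeness(E, [[0.0, 1.0], [0.0, 0.0]]))  # 0.5
```

Ignoring the coupling (all γ_kβ = 0) always gives the largest value, which is exactly the overestimation the paper warns about.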
Assume that it is necessary to quantify the statistical relationship between the features x_1 and x_2, which in general can have several values x_1j and x_2φ (j = 1, J; φ = 1, Φ, where Φ is the maximum number of values for the feature x_2).
Table 1 shows the distributions of the measured frequencies M_jφ of the compatible appearance of the j-th value of the feature x_1 and the φ-th value of the feature x_2.
In this table W = Σ_j Σ_φ M_jφ is the sum of all measured frequencies.
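The Pearson statistic over such a contingency table, and a coupling coefficient obtained by dividing it by its maximum value, can be sketched as follows. The normalization by the chi-squared upper bound W·(min(J, Φ) − 1) is our assumption, matching the division by (χ²)_max described in the paper:

```python
def pearson_chi2(M):
    """Pearson's chi-squared statistic for a contingency table M, where
    M[j][phi] is the measured frequency of the compatible appearance of
    the j-th value of x_1 and the phi-th value of x_2."""
    W = sum(sum(row) for row in M)           # sum of all measured frequencies
    row_sums = [sum(row) for row in M]
    col_sums = [sum(col) for col in zip(*M)]
    chi2 = 0.0
    for j, row in enumerate(M):
        for phi, m in enumerate(row):
            # hypothetical frequency under the independence hypothesis
            expected = row_sums[j] * col_sums[phi] / W
            chi2 += (m - expected) ** 2 / expected
    return chi2

def coupling_coefficient(M):
    """Normalize chi-squared by its upper bound W * (min(J, Phi) - 1),
    so the coefficient lies in [0, 1] (0 for independent features)."""
    W = sum(sum(row) for row in M)
    return pearson_chi2(M) / (W * (min(len(M), len(M[0])) - 1))

print(coupling_coefficient([[5, 5], [5, 5]]))    # independent values: 0.0
print(coupling_coefficient([[10, 0], [0, 10]]))  # deterministic link: 1.0
```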

When using Pearson's criterion in expression (1), the value χ²_kβ has to be found for every pair of features. However, the value of the mutual information depends on the number of investigated system states, and when conducting robot measurements we can have a different number of explored environment states. Therefore, in practical calculations of informativeness it is expedient to use the relative mutual information received by the multi-robot team, whose value does not depend on the number of environment states.
We can find it as

I* = I / H(L) = (H(L) − H(L / x)) / H(L).

The required amount of mutual information of the environment state is I_r = H(L) − H_r(L / x), where H_r(L / x) is the required entropy of the environment state L.
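A minimal sketch of the relative mutual information, illustrating that it does not depend on the number of environment states:

```python
import math

def relative_information(H_prior, H_cond):
    """I* = (H(L) - H(L/x)) / H(L): the fraction of the prior uncertainty
    removed by the measurements, a value in [0, 1] regardless of the
    number N of environment states."""
    return (H_prior - H_cond) / H_prior

# Halving the uncertainty of a 4-state and of an 8-state environment gives
# the same relative information, although the absolute amounts differ.
print(relative_information(math.log2(4), math.log2(4) / 2))  # 0.5
print(relative_information(math.log2(8), math.log2(8) / 2))  # 0.5
```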
To provide the required value of the feature set informativeness for multi-robot team control, the condition I ≥ I_r has to be met. However, as noted in [32], there is no functional relationship between conditional entropy and classification error.
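The absence of a functional relationship can be illustrated numerically: two posterior distributions over three environment states can have the same conditional entropy but different probabilities of classification error under the maximum a posteriori rule (the distributions below are our own example, not from the paper):

```python
import math

def entropy(p):
    """Shannon entropy in bits, skipping zero-probability terms."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def err(p):
    """Probability of classification error under the MAP decision rule."""
    return 1.0 - max(p)

# Posterior A over three states: two equally likely candidates, 1 bit.
a = [0.5, 0.5, 0.0]

# Posterior B: one dominant state and two equal minor ones; q is tuned by
# bisection so that B's entropy matches A's entropy exactly.
lo, hi = 1e-6, 1.0 / 3.0
for _ in range(200):
    q = (lo + hi) / 2.0
    if entropy([1 - 2 * q, q, q]) < entropy(a):
        lo = q
    else:
        hi = q
b = [1 - 2 * q, q, q]

# Same conditional entropy, clearly different error probabilities.
print(entropy(a), err(a))   # 1 bit, error 0.5
print(entropy(b), err(b))   # ~1 bit, error ~0.23
```

Since equal entropies can correspond to different error probabilities, a constraint on entropy alone cannot pin down the classification error, which is exactly why the condition above is necessary but not sufficient.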
Therefore, the condition I ≥ I_r (3) is necessary, but not sufficient, to ensure the necessary probability of making a false decision about the state of the environment. So, if a decision is taken on one of two states of the system, then in order to unambiguously guarantee the necessary value of the average probability of making a false decision, it is enough to fulfil condition (4) to ensure compliance with condition (3).
We prove the sufficiency of condition (4). For this we use the well-known expressions for the exact upper and lower bounds of the mean conditional entropy H(L / x) at a given average probability of error p_e. These expressions, in accordance with the theorem of Kovalevsky [32], are presented as expressions (5)-(7). On the basis of expression (7) we conclude that inequality (8) holds for the given average conditional entropy. Inequality (8) is correct since, assuming the opposite, we arrive at a statement which contradicts condition (7).
We proceed from the fact that condition (4) is fulfilled; then, taking into account (5), we obtain (9). Under this condition inequality (10) holds. Indeed, let us assume the opposite; with this assumption we obtain an inequality which contradicts (9). Thus, inequality (10) is confirmed. Comparison of (8) and (10) allows us to conclude that when condition (4) is fulfilled, condition (3) is also fulfilled.
Thus, ensuring the required value of the entropy of the solution is a necessary, but not sufficient, condition for making a decision on the state of the environment with the required probability. In order to ensure the required probability of a correct decision on the environment state, it is sufficient that condition (3) is fulfilled.
However, the sufficiency of condition (3) can be ensured not only by the fulfilment of condition (4), but also by the fulfilment of condition (11), which makes use of the lower boundary of the required conditional entropy. Let us prove condition (11). On the basis of expression (14), condition (11) can be represented as (15). Assuming the opposite leads to a condition which contradicts (15); consequently, the sufficiency of condition (11) is proved.
The value of the lower bound of the required conditional entropy, in accordance with [32], can be found taking into account the a priori probabilities by expression (17).

Further, in accordance with our approach, we determine the values of the informativeness of each feature. As a result of the calculations we obtain the values E_k that are presented in Table 5. On the basis of the obtained values of the feature relative informativeness we determine the relative informativeness of the feature set. Since ignoring the statistical relationships between the features overestimates the relative informativeness of the feature set, it leads to an incorrect definition of the informativeness of the feature set. Therefore we determine the relative informativeness of the feature set taking the statistical relationships between the features into account. To do this we determine the value of the statistical coupling coefficient γ_kβ for each pair of features.
Further, we determine the relative informativeness of the feature set taking into account the statistical relationships between the features; the obtained value is 0.62. From the obtained results it is clear that taking the statistical links between the features into account reduces the informativeness of the feature set by 5%, which leads to a more accurate determination of the possible efficiency indicator.

DISCUSSION
As can be seen from the data shown in Figure 1, there is an ambiguous relationship between the probability of classification error p_e and the entropy of the decision H(L / x). Thus, ensuring the required value of the entropy of the solution is a necessary, but not sufficient, condition for making a decision on the state of the environment with the required probability. In order to ensure the required probability of a correct decision on the environment state, it is sufficient that condition (3) is fulfilled. From the analysis of the data shown in Figures 2 and 3 it is evident that, for a given required probability of classification error p_e^r = const, the value I_r* determined by expression (17) is less than that determined by expression (5). Therefore, in order to determine the sufficient condition for the feature set informativeness when recognizing the environment state, it is expedient to use the value I_r* determined by expression (5), since this provides the required probability of error in recognizing the environment state. This provides control of the multi-robot team which minimizes the ratio of incorrectly recognized measurements to the total number of measurements.
Thus the proposed criterion for feature set informativeness estimation in the task of multi-robot team control allows the problem of feature set informativeness estimation to be solved efficiently, leading to an effective solution of the multi-robot control task. In comparison with traditional feature set informativeness estimation approaches based on the maximizing mutual information criterion, the proposed criterion is applicable when the measurements are interdependent and the environment has a variable number of states, and it allows the ratio of incorrectly recognized measurements to the total number of measurements to be estimated.

CONCLUSIONS
In this paper the actual task of automating the feature set informativeness estimation process in the task of multi-robot team control was solved.
The scientific novelty of the obtained results is that the method of feature set informativeness estimation is improved. The improved method makes it possible to estimate feature set informativeness in classification problems in situations when the input data samples contain interdependent features and the environment has a variable number of states. The proposed criterion is based on the idea that feature set significance is computed according to the mutual information of the multi-robot team's next observation. The article defines and proves a sufficient condition for ensuring the required probability of making a false decision on the environment state. The fulfilment of this condition guarantees a decision on the environment state with the required probability.
p(x_kj) is the prior probability of the situation when feature k has value j; c_d is a possible robot action; D is the number of possible robot actions, d = 1, D; Φ is the maximum number of values of the feature x_2; E_β is the possible informativeness of the β-th feature; γ_kβ is a coefficient characterizing the statistical connection between the k-th and β-th features; M_jφ are the measured frequencies of the feature values; W is the sum of all measured frequencies of the consistent occurrence of the feature values. The larger the value χ²_kβ for a pair of features x_k and x_β, the greater the statistical relationship between them (for a statistically independent pair of features χ²_kβ = 0); in Fig. 3 the values χ²_kβ for the pairs of features are normalized by dividing them by (χ²_kβ)_max.

Figure 1 - Graph of the dependence between H(L / x) and p_e

A numerical study of the developed software system based on the proposed estimation criterion and on traditional methods of effectiveness estimation shows that the proposed criterion on average reduces the informativeness of the feature set by 5%. For example, Tables 2-4 present the probabilities of the situations when the features have different values on condition that the output parameter has the values L_1 and L_2.

Figure 2 - Graph of the dependence between I_r* and p_e^r
Figure 3 - Graph of the dependence between I_r* determined by expression (17) and p_e^r

Table 1 - Frequencies of the compatible appearance of the j-th value of the x_1 feature and the φ-th value of the x_2 feature

Table 5 - The obtained values of the feature relative informativeness