Weight empowerment method in information fusion for radar-seeker performance evaluation

Abstract: With the continuous development of radar terminal-guidance technology, evaluating the performance of a radar seeker has become a necessary and important systems subject. Rationally allocating the index weights has always been a key issue in performance-evaluation research. For this problem, a new method that allocates index weights based on importance levels is proposed, which reflects both the subjective opinions of experts and the objective facts. In addition, rather than evaluating radar-seeker performance in the simulation test and the outfield experiment separately, information fusion is applied in place of the serial-phased scheme, taking advantage of both infield simulated and outfield measured data. Furthermore, a model is proposed to calculate the weight of each information element in the information-fusion stage. Considering that the performance evaluation of a radar seeker involves many experimental stages, information fusion, which makes full use of the multi-source test information, helps to obtain more accurate and reliable evaluation conclusions. Both the theoretical analysis and the experimental results prove the effectiveness and efficiency of the proposed method.


Introduction
With the continuous development in guidance technology and continuous improvement in radar performance, radar guidance has received increasing attention among the various types of guidance technology owing to its advantages in target detection and location as an all-weather, omnidirectional method. Radar seekers have been widely used in all types of weapon equipment [1]. At present, electronic warfare in the field of modern radar guidance focuses on the conflict between electromagnetic interference and anti-interference [2]. With the development of interference methods, the electromagnetic-environment battlefield now faced by the radar seeker has become increasingly complex [3]. Evaluating the anti-jamming capability of a radar seeker in an increasingly fierce confrontation environment is an important task faced by military and technical experts [4].
Constructing a complete evaluation-index system and allocating reasonable index weights are two key parts of the performance-evaluation process [5,6]. An index system is a framework composed of a series of independent indicators that measure the characteristics of a research object in different aspects. A reasonable index system should not only comprehensively measure the overall performance of a radar seeker but also be concise and convenient for the later acquisition and calculation of evaluation data. In [7], a three-level evaluation system, which includes a radar-seeker-unit test index, is constructed, and the quantisation level of the seeker performance is obtained using this index system. The advantage of this index system is its simple implementation. However, measuring the effectiveness of a radar seeker using only its theoretical performance obtained in the laboratory test is not sufficient. The index system used in the present work has been perfected and supplemented according to the existing literature to make it more suitable for the performance evaluation of a radar seeker [8].
Rationally allocating the index weights has always been a hot spot among evaluation researchers. The methods proposed so far mainly include the subjective weight empowerment of experts [9,10] and objective weight-empowerment methods that extract the index weights from the index values [11,12]. The first subject discussed in this paper is to find a more reasonable method of weight empowerment based on a comparison and analysis of the advantages and disadvantages of the subjective and objective weight-empowerment methods.
Multi-source information fusion has become a hot research topic. Information fusion was commonly referred to as data fusion in its early stage, whose research object is the multi-source data obtained from multi-sensor measurement, with a focus on parameter estimation, regression analysis, and confidence-interval calculation [13]. Multi-source information fusion is used in the military fields of early-warning threats, situation assessment, and intelligence synthesis [14,15]. With the continuous improvement in fusion structures and algorithms, information fusion is widely used in the comprehensive processing of multi-sensor signals, such as target recognition, intelligent robots, fault diagnosis, and image processing [16,17]. Considering that the performance evaluation of a radar seeker involves many experimental stages, we propose a performance-evaluation fusion scheme for the test information obtained in multiple test stages, which is similar to the classical fusion of information obtained by different test devices. The second subject discussed in this paper focuses on how to make full use of the multi-source test information to improve the information integrity from the perspective of information fusion, so as to obtain more accurate and reliable evaluation conclusions.
The remainder of this paper is organised as follows. Section 2 presents the development of an index weight-empowerment method based on subjective and objective weight empowerment. This method not only reflects the subjective opinions of experts but is also more objective by considering the evaluation data themselves. Section 3 explains the procedure of the information-fusion process in radar-seeker performance evaluation. Section 4 illustrates the validity of the methods presented in Sections 2 and 3 using a set of simulation and field experiments. The paper concludes in Section 5 with some discussion and guidance on the index-weight method and the information-fusion process in radar-seeker performance evaluation.

Index weight-empowerment method
In the performance-evaluation process, the weight of an index, which measures its importance, directly affects the accuracy and rationality of the evaluation results. The methods of determining index weights mainly include the subjective weight empowerment of experts and objective weight empowerment. Comparison of these methods reveals that subjective weight empowerment is intuitive and easy and makes full use of the experts' research experience. Its disadvantage is that, when the information is insufficient or the expert ability is limited, the subjectively determined weights lack credibility [18]. The objective method determines the index weights from the law of the data themselves, considering how to allocate weights so as to maximise the separation of the alternatives using information entropy or deviation. Its disadvantage is that the difference in the relative importance among the indicators is ignored [19]. When this difference exists, the objective weight-empowerment method provides, to some extent, unreasonable evaluation results. To obtain more reasonable index weights, this paper proposes a new weight-allocation method that combines the advantages of the subjective and objective methods. The main idea is to first determine the importance level of each index using the experience of the experts, and then to allocate the weights of the indexes within the same importance level using the maximum-deviation method.
The expert scoring method is the most widely used method in concrete applications of the subjective weighting method. An assessment team provides each assessment expert a rating sheet that contains the evaluation index items. The assessment experts provide their individual index weight without communicating with one another. These scoring results are processed by the assessment team in an open environment. The processing methods usually include the direct average of all the scores on the same target provided by the experts or the average of the scores given by the experts without including the highest and lowest points. Then, the resulting score is considered as the ultimate subjective index weight [20].
In the current usage of the subjective weight-empowerment method, experts allocate the evaluation-index weights based on their own professional knowledge and experience. Such a method is simple, straightforward, and easy to implement. However, when the professional data are insufficient or the expert ability is limited, subjectively determined index weights lack credibility. To provide a more reasonable allocation of weights while preserving the experts' subjective opinions, we introduce some improvements to the subjective weight-empowerment method. The specific practice is that, instead of directly using the specific index weights assigned by each expert to evaluate the radar-seeker performance, we create importance levels, for example three levels representing very important, important, and less important. We can use the averaging method to convert the specific index weights assigned by each expert into a corresponding level weight for each importance level. Then, the experts classify the indexes into the different importance levels, each carrying a level weight. Thus, the first step of our improved method for assigning the index weights is completed.
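As a minimal sketch of this averaging step (the grouping of expert-assigned weights by level and the numerical values are hypothetical, not taken from the paper), the level weight of each importance level can be computed as follows:

```python
from statistics import mean

def level_weights(expert_scores):
    """Average the index weights that experts originally assigned to the
    indexes placed in each importance level, yielding one level weight
    per level (the averaging step described above)."""
    return {level: mean(scores) for level, scores in expert_scores.items()}

# Hypothetical expert-assigned index weights grouped into three levels
scores = {
    "very important": [0.8, 0.8, 0.9],
    "important":      [0.5, 0.6, 0.6, 0.7],
    "less important": [0.3, 0.3, 0.4],
}
lw = level_weights(scores)
print(lw)
```

Each level weight then stands in for the individual expert scores when the indexes are classified into levels.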
After classifying the indexes into the corresponding importance levels with different level weights, the next step is to allocate the index weights within each importance level. In this process, we use the maximisation-deviation method [21,22], which is a widely used objective weight-empowerment method. The steps of using the deviation-maximisation method to obtain the weight vector are as follows. For index $C_j$, let $x_{tj}$ denote the attribute value of alternative $t$; then

$D_j(w) = \sum_{t=1}^{m}\sum_{s=1}^{m} \left| x_{tj} - x_{sj} \right| w_j$

is the sum of the deviations among all the alternatives at index $C_j$. The weight vector is chosen to maximise the total deviation over all $n$ indexes:

$\max \sum_{j=1}^{n} D_j(w), \quad \text{s.t.} \quad \sum_{j=1}^{n} w_j^2 = 1, \; w_j \geq 0 \quad (1)$

The optimal solution of this model is the objective weight value of index $C_j$. To obtain the optimal solution, the Lagrange function is constructed as follows:

$L(w, \lambda) = \sum_{j=1}^{n} \sum_{t=1}^{m} \sum_{s=1}^{m} \left| x_{tj} - x_{sj} \right| w_j + \frac{\lambda}{2} \left( \sum_{j=1}^{n} w_j^2 - 1 \right) \quad (2)$

Then, setting the partial derivatives of $L$ with respect to $w_j$ and $\lambda$ to zero yields

$w_j = \frac{\sum_{t=1}^{m} \sum_{s=1}^{m} \left| x_{tj} - x_{sj} \right|}{\sqrt{\sum_{j=1}^{n} \left( \sum_{t=1}^{m} \sum_{s=1}^{m} \left| x_{tj} - x_{sj} \right| \right)^2}} \quad (3)$

After calculating (2) and (3), we can obtain the normalised objective weight

$w_j^{*} = \frac{\sum_{t=1}^{m} \sum_{s=1}^{m} \left| x_{tj} - x_{sj} \right|}{\sum_{j=1}^{n} \sum_{t=1}^{m} \sum_{s=1}^{m} \left| x_{tj} - x_{sj} \right|}$

After obtaining the subjective importance ranks and the objective weights, we can find the final index weight, which reflects both the expert subjective opinion and the objective facts, by multiplying the two vectors. Then, we can use the analytic hierarchy process (AHP) to evaluate the performance of the radar seeker.
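A minimal sketch of computing the normalised objective weights by the deviation-maximisation method (the attribute matrix is hypothetical, used only to illustrate the calculation):

```python
import numpy as np

def max_deviation_weights(X):
    """Objective weights by the maximising-deviation method.

    X is an (alternatives x indexes) attribute matrix.  For each index j,
    D_j sums |x_tj - x_sj| over all pairs of alternatives (t, s); the
    normalised weight of index j is D_j / sum_j D_j."""
    X = np.asarray(X, dtype=float)
    # pairwise absolute deviations per index, summed over all pairs (t, s)
    D = np.abs(X[:, None, :] - X[None, :, :]).sum(axis=(0, 1))
    return D / D.sum()

# Hypothetical relative attribute values for three alternatives, three indexes
X = [[1.0, 0.9, 1.1],
     [0.8, 1.0, 1.0],
     [1.2, 0.8, 0.9]]
w = max_deviation_weights(X)
print(w)  # weights sum to 1; the most spread-out index gets the most weight
```

Indexes on which the alternatives differ most receive the largest weight, which is exactly the separation-maximising behaviour described above.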
The implementation steps of this method are summarised below.
Step 1: Obtaining the subjective weights of the experts.
Step 2: Dividing the importance levels and determining their weights according to the subjective weight information.
Step 3: Calculating the objective weights within each importance level using the deviation-maximisation method.
Step 4: Synthesising the importance-level weights and the objective weights within each importance level to obtain the final index weights.
The effectiveness of this method is verified in the example section of this paper.
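Step 4's synthesis can be sketched as follows (the level weights and within-level objective weights are hypothetical; the vectors are aligned by index position):

```python
def combine_weights(level_w, objective_w):
    """Multiply each index's importance-level weight by its objective
    weight within that level, then normalise so that the final index
    weights sum to 1."""
    raw = [l * o for l, o in zip(level_w, objective_w)]
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical values: two 'important' indexes (level weight 0.6) and one
# 'less important' index (level weight 0.3)
final_w = combine_weights([0.6, 0.6, 0.3], [0.55, 0.45, 1.0])
print(final_w)
```

The final vector preserves the expert-given level ordering while the within-level split comes from the data.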
Compared with the previous subjective and objective weight-empowerment methods, this method uses importance ranks as a bridge to connect the advantages of the subjective and objective methods. Comparing the data in Tables 1 and 2 with the data in Table 3, we can find that the comprehensive weights obtained by the method proposed in this paper balance, to some extent, the weights obtained by the subjective and objective methods.
In terms of subjective weight processing, this method uses importance ranks to reflect expert opinion rather than specific index values. Such a process reduces the expert subjectivity to some degree while preserving their main opinion, and increases the rationality of the final weight distribution. The effect of this method becomes much better when the importance differences among the indexes are larger, which means that the expert opinion is clearer and more reliable.

Information-fusion evaluation
Currently, evaluation of the overall performance of the radar seeker includes four stages, namely, a simulation test in a microwave darkroom, a static test, a flying experiment, and a shooting experiment [23]. The simulation test in a microwave darkroom is performed by first setting up a simulated outfield test environment in a microwave darkroom. Then, we place the radar seeker in it and measure its performance by testing its ability in an environment where a target and a jamming signal are simulated. The static test requires the radar seeker to be placed in a high location (hilltop or roof) to test its ability to capture and track specific targets in a real outfield environment where all types of clutter signals exist. Compared with the static experiment, the flying experiment, in which the radar seeker is placed onboard an airplane or a helicopter to simulate a flying process, can also test the motion characteristics of the seeker and increase the performance-test range, such as fluctuations in the projectile range distance, pitch angle, and so on. In addition, the radar seeker receives control commands from the flight-control device in the general flying experiment, which also improves the reliability of the experimental results to a certain extent. The shooting experiment, namely, the missile-launching experiment, is a comprehensive test and evaluation process for the overall performance of the seeker, and its experimental results have the highest degree of credibility. At present, in the field of radar-seeker performance evaluation, the specific test outline and standard for each stage are first provided. Then, the serial-test method is used to determine the performance of the batch products, which means that the next stage is not performed until the previous phase of the experiment satisfies the standard. Finally, we complete the four stages of testing experiments and obtain the performance-evaluation results.
This evaluation process is easy to implement, but it ignores the correlation of the experimental data across the four testing stages and does not take full account of the effect of the tests. In this paper, an information-fusion method is proposed to deal with the experimental data in each stage to achieve more meaningful evaluation results, as shown in Fig. 1. Therefore, we need to first analyse the correlation and difference in each experimental stage. In terms of reliability, as the similarity to the actual working conditions of the radar seeker increases, the test results of the abovementioned four stages become more reliable, and the difference is obvious. Correspondingly, in terms of cost, the experimental consumption of the four stages also significantly increases, as the experimental environment becomes increasingly difficult to control and to repeat. As a result, considering only the reliability and importance of the information elements when determining the weights of each phase of the experimental data in the fusion-evaluation process is not sufficient. Actually, from the microwave-darkroom simulation to the shooting experiment, the reliability and importance of the experimental data in each test stage obviously increase. Thus, when the information weights are determined using only the information reliability and importance, the relative weight of the microwave-darkroom simulation information becomes very low. Hence, it cannot play a significant role in the fusion-evaluation process. On the other hand, it would be inconsistent with the actual situation if we assigned a large weight to the microwave-darkroom simulation information. To solve this contradiction, we put forward a new method of weight determination that fully considers the information characteristics at each stage.
Considering the contradiction in the microwave-darkroom simulation information, which has high repeatability and low reliability (relative to field testing), we propose a weight-adjustment model in which the number of experiment repetitions is used as an adjustment factor, accounting for the characteristics of low reliability and high repeatability of the information in this stage. The basic principle of this adjustment model is the following: when the test is repeatable and the experimental data obtained by a single test are not very convincing, the reliability of the test results increases with the number of test repetitions. Among the commonly used mathematical models, the logarithm model satisfies the expectation for the weight adjustment of the information, in which the growth rate gradually decreases and saturation is finally achieved [24]. Based on the logarithm model, this paper proposes a simple weight-adjustment model of the information

$\omega = \omega_0 \left( 1 + \mu \ln n \right) \quad (4)$

Here, $\omega_0$ is the initial information weight, $\omega$ is the adjusted weight, $n$ is the number of experiment repetitions, and $\mu$ is the adjustment factor that reflects how much attention is attributed to the repeatability of the experiment. One extreme situation may occur with this method: in the performance fusion-evaluation process, when the difference between test schemes is too great, the test information is not consistent; the fusion process then loses significance and produces wrong results. Before applying the fusion method in performance evaluation, we must therefore ensure that the different test schemes are similar and the test information is consistent.
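A minimal sketch of the adjustment model in (4); the initial weight, repetition counts, and adjustment factor below are illustrative, and renormalising the adjusted stage weights before fusion is our assumption rather than part of (4):

```python
import math

def adjust_weight(w0, n, mu):
    """Eq. (4): the information weight grows logarithmically with the
    number of experiment repetitions n, scaled by adjustment factor mu,
    so the growth rate decreases and eventually saturates."""
    return w0 * (1.0 + mu * math.log(n))

# A single-run stage is unchanged, since ln(1) = 0
print(adjust_weight(0.1, 1, 0.5))
# 100 repetitions with mu = 0.5 roughly triples the initial weight
print(adjust_weight(0.1, 100, 0.5))
```

In practice the adjusted weights of all stages would be renormalised so they again sum to 1 before the fusion step.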

Examples
The implementation steps of the performance-evaluation method in this paper are summarised below.
Step 1: Building the index system and calculating the index weights according to Section 2.
Step 2: Calculating the score vector of the alternative radar seekers' performance in each test stage with the weighted-sum method.
Step 3: Adjusting the information weight of each test stage according to Section 3.
Step 4: Synthesising the information weights and the score vectors of the alternative radar seekers' performance in each test stage to obtain the final evaluation result.
Table 3: Normalised index weights obtained by synthesising the subjective and objective weights
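Steps 2 and 4 above can be sketched as follows, assuming hypothetical per-stage scores and stage weights (none of the numbers come from the paper's tables):

```python
import numpy as np

def stage_scores(X, w):
    """Step 2 sketch: weighted-sum score of each alternative in one test
    stage, where X is (alternatives x indexes) and w is the index-weight
    vector for that stage."""
    return np.asarray(X, dtype=float) @ np.asarray(w, dtype=float)

def fuse(stage_score_matrix, stage_w):
    """Step 4 sketch: combine the per-stage score vectors with the
    (adjusted) stage weights, normalising the weights first."""
    stage_w = np.asarray(stage_w, dtype=float)
    stage_w = stage_w / stage_w.sum()
    return stage_w @ np.asarray(stage_score_matrix, dtype=float)

# Hypothetical two-stage example with three alternatives
per_stage = [[0.30, 0.35, 0.35],   # e.g. darkroom-simulation scores
             [0.40, 0.30, 0.30]]   # e.g. flying-experiment scores
fused = fuse(per_stage, [1.0, 3.0])
print(fused)
```

Because the stage weights are normalised inside `fuse`, the fused scores stay on the same scale as the per-stage score vectors.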

Example 1
To evaluate the overall performance of a group of four radar seekers (represented as a_i), we construct an index system referring to the Radar Test Outline, as shown in Fig. 2. Taking the simulation-test performance as a benchmark, the relative attribute values of the indexes of each object to be evaluated are listed in Table 4. To use AHP more rationally, we first apply the combination of the subjective and objective weight-empowerment methods to assign the index weights. First, the experts give an initial index-weight vector (0.3, 0.3, 0.4, 0.5, 0.6, 0.6, 0.7, 0.8, 0.8, 0.9). According to the idea mentioned earlier, we can obtain the subjective importance-rank weight values listed in Table 1, based on the division into the three standard levels of Very important, Important, and Secondary and on the averaging method. The objective weight vector is calculated by the deviation-maximisation method, as listed in Table 2. The normalised index-weight vector is obtained by synthesising the subjective and objective weights, as listed in Table 3.
From the index weights and attribute values, we can obtain the score vector using AHP, which is calculated to be (0.259, 0.248, 0.240, 0.254). Then, the overall performance of the radar seekers is evaluated and ranked among these four alternatives.

Example 2
To further evaluate the anti-jamming performance of a group of four radar seekers (represented as a_i), we carried out a performance-test scheme based on an active homing radar seeker and high-power active suppression-jamming equipment. Then, we construct a common index system referring to the Radar Test Outline, as shown in Fig. 3. Taking each test-stage performance as a benchmark, the relative index attribute values in each test stage are listed in Tables 5-8.
Table 4: Index relative attribute values (a_1 is considered as the standard)
Table 5: Index test values over 100 repetitions in the microwave-darkroom simulation experiment (a_1 is considered as the standard)
The first step in the performance fusion evaluation is to calculate the ranking of the anti-interference performance of the alternative radar seekers in each test stage. We use the method of Example 1 to calculate the rankings of the alternative seekers in the four test stages; the calculation method is not presented in detail again. The result is listed in Table 9.
The second step of the performance-evaluation experiment is to determine the weight of each test result in the fusion process. This weight is given an initial value by the evaluation experts, which is listed in Table 10. We then use the weight-adjustment model described above to obtain a more reasonable weight value.
According to (4) of the weight-adjustment method, we obtain the adjusted weight vector listed in Table 11, where adjustment factor μ is 0.5 in the adjustment model.
Obviously, the microwave-darkroom simulation has the highest repetition count and the least reliable results, and the shooting test has the greatest influence and the lowest repetition count; these are the two stages in which the information weight changes the most after the adjustment. Such an adjustment achieves the desired goal. The adjustment results accord with the evaluators' psychological expectations through adjustment factor μ, which reflects how much weight is given to the repeatability of the experiment.
According to the score vectors and the information weights in each experimental stage obtained earlier, we can calculate the anti-jamming performance score vector of each subject, which is listed in Table 11. By comparison, we can see that the adjustment of the information weights has a direct effect on the final ranking result, and the effect agrees with the actual situation.

Conclusion
Taking radar-seeker performance evaluation as the background, two problems are discussed and solved in this work. The first is an effective index weight-allocation process. In this subject, we first analyse the advantages and disadvantages of the subjective and objective weight-empowerment methods. Then, we propose the importance rank as a link for a comprehensive method between subjective and objective weight empowerment. The index weight that we finally obtain preserves subjective expert opinion while strengthening its objectivity. Such a process increases the rationality of the final weight distribution. The second subject concerns how to rationally and effectively employ the value of the different information elements in the multi-source information-fusion evaluation. According to the actual background of radar-seeker anti-jamming evaluation, this study proposes an information weight-adjustment model in which the information reliability and repeatability in the different test stages are used as adjustment factors to obtain more realistic evaluation results. The adjustment methods of the index and information values in the performance evaluation conform to the actual assessment requirements and the psychological sentiment of the evaluators. Through examination of many experimental data, including the test results of the simulation test and the field experiment at sea, the radar-seeker performance-evaluation results that we obtained are objective and rational. Thus, the evaluation method proposed in this paper is valid and effective.
Table 6: Index test values over 50 repetitions in the static-test experiment (a_1 is considered as the standard)