Game Theoretic Request Scheduling with Queue Priority in Video Sensor Networks

Video sensor networks have been widely used to monitor environments and report abnormalities. Each node collects video data, selects a head node, and transmits the data to the head; the head then reports the data to the base station. A head has to process both normal and abnormal data-reporting requests from its nearby nodes. To achieve the QoS of surveillance, previous request scheduling methods minimize either the data transmission delay or the blocking rate, but no comprehensive approach has been studied in the literature. In this paper, we propose a game strategic request scheduling based on a queue priority model in which a handover mechanism ensures that abnormal requests are processed in time. In the game, video sensors select their heads, which determines the arrival rates of both normal and abnormal requests, and the heads decide the probability of handing over abnormal requests. At the Nash Equilibrium Point (NEP), the normal data requesters optimize the mean delay, the abnormal data requesters optimize the mean blocking rate, and the heads balance the request load on them. Numerical analysis shows that the game strategic scheduling outperforms other scheduling methods that consider a single objective (minimum delay or minimum blocking rate).


Introduction
Video sensor networks act as an efficient and reliable way to monitor environments and discover emergencies or accidents [1,2]. Each node of a sensor network collects video data and transmits the data to a nearby head, which is itself a sensor selected within a cluster. The head processes the data-reporting requests from members of the cluster and transmits the data to a sink node or a base station. The monitoring quality of service (QoS) of a video sensor network depends on how the data-reporting requests are scheduled and processed [2]. In order to collect data in time, a scheduling mechanism is supposed to optimize a QoS metric (e.g., delay or blocking rate).
When executing the task of surveillance, a video sensor node can provide two types of service: collecting the data of the normal video stream and collecting the data of accident identification. Compared with normal data, abnormal data has a strict real-time requirement so that emergencies and accidents can be discovered in time. A sensor node may transmit either normal or abnormal data to heads, which have to process both kinds of requests. Accordingly, a scheduling mechanism has to be designed to ensure that regular monitoring tasks are accomplished and accident reports are not delayed.
Previous request scheduling mechanisms focused on optimizing a single QoS metric (e.g., delay or blocking rate) without considering the different QoS demands of the two kinds of requests. Some work deals with packet queue management to improve the reporting delay and the video distortion [3]. Other work concentrates on the occasional accident data while trying not to disturb the regular monitoring tasks [4,5]. A comprehensive way to achieve QoS for both kinds of reporting requests has not been studied in the literature. Such a request scheduling problem entails a solution that optimizes the utility of both the normal data-reporting requesters and the abnormal data-reporting requesters. Game theory can be used where multiple participants interact with each other through their own strategies and pursue their respective optimal utilities. When the participants arrive at a Nash Equilibrium Point, they will not deviate from this solution because no more profit can be achieved by any of them [6,7].
During the transmission of normal and abnormal data, there are three active participants: abnormal data-reporting requesters (ADR), normal data-reporting requesters (NDR), and cluster heads, all of which interact with each other. Both ADR and NDR have to choose the right heads to report their data, because their choices influence the QoS of their monitoring tasks and the performance of the whole network. The cluster heads have to cooperate to optimize their cost and prolong the lifetime of the network. The three participants optimize their respective utility objectives and converge to an equilibrium at which ADR and NDR make optimal head selections and the heads use a cooperation mechanism to lower and balance the load on them.
In this paper, we propose a game theoretic request scheduling based on a handover mechanism between the heads' priority queues. ADR, NDR, and the heads act as the three players in the game. NDR decides the head selection strategy to optimize the mean request response delay; ADR selects heads to optimize the mean request blocking rate; and the cluster heads decide the handover strategy to optimize their load. We can obtain a Nash Equilibrium Point (NEP) at which each participant optimizes its own utility [6,7]. Using the NEP solution as the request schedule, we satisfy the service quality demands of both ADR and NDR while achieving load balance between heads. The main contributions of this paper include the following: (1) we analyze the interaction among the three active participants in the request scheduling game and converge their respective behaviors (strategies) to an optimal equilibrium; (2) from the view of the data-reporting requesters, we use a comprehensive way to optimize the utility of both kinds of requesters rather than paying attention to a single QoS metric; (3) from the view of the cluster heads, we propose a handover mechanism between their priority request queues in order to balance their respective loads and prolong the lifetime.
The remainder of the paper is organized as follows. Related work is introduced in Section 2. In Section 3, we illustrate the network and QoS metrics. In Section 4 we propose the request scheduling game and the NEP solution. Some properties of NEP are validated in Section 5. Evaluation of the scheduling is shown in Section 6. Section 7 concludes the paper.

Related Work
In video sensor networks, abnormal data-reporting requests are usually given high priority for accident discovery. In [4], the authors propose an energy-aware packet scheduling algorithm to minimize the power consumption and prolong the lifetime of the whole video sensor network. In order to minimize video distortion, they use a priority strategy that lets high-priority packets be transmitted over high-bandwidth paths. Their algorithm can constrain energy consumption by selectively dropping the least important video packets. Durmus et al. propose an event-based fairness scheme for fair resource allocation among the events involved [3]; their scheme is a queue management method that improves the video distortion and the reporting latency. In [5], the authors propose a periodic time scheduling to maximize the quality of monitoring of stochastic events. This scheduling decides the proportion of time devoted to event monitoring and distributes a sensor's coverage time in order to achieve proportional sharing and prolong the lifetime.
Cluster head or server selection schemes have been studied in wired/wireless networks for efficient data transmission and QoS satisfaction [8][9][10][11]. Through the redirection of a request dispatcher, a distributed system assigns requests and balances the load on origin servers. Much work pertains to system performance analysis with resource allocation methods [12]. Different optimization schemes have been proposed to achieve QoS in data transmission [13][14][15].
Queueing theory has been widely used in request scheduling [16]. In [12], the authors investigate the price of anarchy in distributed networks with multiple dispatchers. They model the request assignment by the dispatchers as a multiplayer noncooperative game in which each dispatcher optimizes its own cost; the selfish server selection strategy of each dispatcher results in a worse global cost for the whole network. Lower and upper bounds on the price of anarchy are given to show that network performance degrades as the number of dispatchers increases. The authors of [17] study the job assignment problem in a heterogeneous distributed server system and propose an optimal static assignment to minimize the load on the entire system. In their queue model, a scheduler assigns jobs (requests) to different servers; the scheduler is similar to the head-selection function in our request assignment game. This method can be deployed with a handover mechanism to obtain global optimization of both normal and abnormal requests.

Priority Queue Model
3.1. Video Sensor Network Architecture. Cluster-based video sensor network architecture is composed of three types of nodes. As shown in Figure 1, each video sensor serves as a node that collects both normal and abnormal data from the surrounding environment and transmits its data reports to its cluster head. The head processes the data-reporting requests and transmits them to the sink node.

QoS Metrics for Both ADR and NDR.
To give the QoS metrics for the video sensor network, we detail the queue model of the handover mechanism between cluster heads.

Let p_jk(i) denote the probability that head j hands over an abnormal data-reporting request to head k at time i. This sharing policy satisfies ∑_{k=1}^{n} p_jk(i) = 1 for all the n heads.

As shown in Figure 2, head 1 keeps an abnormal request with probability q_1 and hands it over to head 2 with probability (1 − q_1); head 2 keeps an abnormal request with probability q_2 and hands it over to head 1 with probability (1 − q_2). Thus q_1 represents the proportion of abnormal requests kept and processed by head 1, and (1 − q_1) the proportion handed over by head 1. Let P_0 denote the idle-state probability of a head. We suppose that in head 1's request processor both the request interarrival time and the service time obey exponential distributions, and we let the normal request arrival rate be λ_1 and the service rate μ_1; the abnormal request arrival rates at the two heads are ν_1 and ν_2. The sharing policy for the two heads is (q_1, q_2): in a sufficiently small interval, q_1 is the probability that head 1 keeps an abnormal request in its own processor, and (1 − q_2) is the probability that it receives a request handed over by head 2. Hence, the queuing rate of abnormal request packets at head 1 is q_1 ν_1 + (1 − q_2) ν_2. From these rates we construct the state transition differential equations; in the steady state, solving the balance equations under the constraint that all state probabilities sum to 1 yields the state probabilities for head 1 and, symmetrically, for head 2.
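As an illustration of the queuing rates above, the following Python sketch computes the effective abnormal-request rates at the two heads after handover (the function name is ours; ν and q follow the notation of the two-head case, and the symmetric expression for head 2 is assumed):

```python
def effective_abnormal_rates(nu1, nu2, q1, q2):
    """Queuing rates of abnormal requests after handover in the two-head case:
    head 1 keeps its own requests with probability q1 and receives the requests
    that head 2 hands over with probability (1 - q2); symmetrically for head 2."""
    rate1 = q1 * nu1 + (1 - q2) * nu2
    rate2 = q2 * nu2 + (1 - q1) * nu1
    return rate1, rate2

# Handover only redistributes load; the total abnormal rate is conserved.
r1, r2 = effective_abnormal_rates(4.0, 6.0, 0.7, 0.5)
```

Note that for any (q_1, q_2) the two effective rates sum to ν_1 + ν_2, which is why handover can rebalance load without dropping requests.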

Delay for Normal Requests.
Let W_1(i) and W_2(i) denote the waiting times of normal request packets at head 1 and head 2. Based on the state probabilities above, we can derive the expressions for these delays.

Blocking Rate for Abnormal Requests.
Because an abnormal request packet can be processed by any of the cooperating heads, the blocking rate should be the average probability that an abnormal request is rejected by these heads. We use the sharing policy to compute the conditional probabilities: as shown in Figure 2, an arriving abnormal packet reaches head 1 or head 2 with probabilities determined by the sharing policy. An abnormal request can be processed only when a head is idle. The blocking rate B_1(i) for abnormal requests at head 1 and the corresponding rate B_2(i) at head 2 follow from the idle-state probabilities; with the conditional probabilities, the total blocking rate B(i) at time i is obtained by conditioning on which head the packet reaches.
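Since an abnormal request is accepted only when a head is idle, each head's busy probability acts as its blocking rate, and the total blocking rate weights these by the arrival fractions. A minimal Python sketch under the M/M/1 idle-probability assumption (function names are ours):

```python
def head_blocking_rate(lam, mu):
    """For an M/M/1 high-priority queue with normal arrival rate lam and
    service rate mu, the idle probability is 1 - lam/mu, so an arriving
    abnormal request is blocked with the busy probability lam/mu."""
    if not lam < mu:
        raise ValueError("queue must be stable: lam < mu")
    return lam / mu

def total_blocking_rate(lams, mus, weights):
    """Average blocking over heads, weighted by the fraction of abnormal
    arrivals each head receives (the conditional probabilities in the text)."""
    return sum(w * head_blocking_rate(l, m)
               for w, l, m in zip(weights, lams, mus))
```

For example, two heads with busy probabilities 0.4 and 0.5 that each receive half of the abnormal arrivals give a total blocking rate of 0.45.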

Request Scheduling Game
In this section, we introduce the request scheduling game. By modeling it as a 3-player noncooperative game, we obtain the NEP at which the three players, ADR, NDR, and cluster heads, all optimize their own utilities and have no motivation to deviate from the NEP.

The Three-Player Game.
A multiplayer noncooperative game consists of the participants, a policy set for every player, and a utility function for every player. This game is a static game of complete information: every player knows the policy sets and the objective functions of the others but not the policy choices they actually make. Every player in this three-player game has an infinite policy set. We also suppose that every player is selfish and rational, choosing the policy that maximizes its own utility. The Nash Equilibrium Point is defined as the profile of optimal policies chosen by all the players to optimize their own utilities; at the NEP, no one has a motivation to deviate from this solution, because no player can gain more profit with any other policy. Consider a video sensor network with n heads at N different times. Let the total arrival rate of abnormal requests be a constant ν, and let ν_k(i) denote the arrival rate of abnormal requests to head k at time i. The policy set of the ADR is A(i) = {ν_1(i), ν_2(i), ..., ν_n(i)} with the requirement ∑_{k=1}^{n} ν_k(i) = ν. Since abnormal data is time-sensitive, an abnormal request packet will be dropped if it exceeds the acceptable delay threshold. Therefore, we use the blocking rate to represent the quality demand of the ADR; the utility objective of the ADR is to minimize the average blocking rate.
Let λ_k(i) denote the arrival rate of normal requests to head k at time i. The policy set of the NDR is B(i) = {λ_1(i), λ_2(i), ..., λ_n(i)} with the requirement ∑_{k=1}^{n} λ_k(i) = λ, where λ is the total normal request arrival rate. Normal requests wait in the high-priority queue of a head. We use the average delay as the NDR's objective function.
Two heads can communicate through free-load detection messages. Using these messages, a busy head can find a nearby free head and pass requests on to it, since abnormal requests are of the immediate-rejection type. Through this sharing, the heads cooperate to balance the load. We define p_jk(i) as the probability that head j passes its own abnormal requests on to head k. The policies of all the heads at time i can be arranged in a matrix C(i) = [p_jk(i)] with the constraints ∑_{k=1}^{n} p_jk(i) = 1 for each head j. According to the definition of the NEP, given the utility functions, the policy profile (A*, B*, C*) is an NEP if no player can improve its own utility by unilaterally deviating from it.
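The heads' policy matrix C(i) = [p_jk(i)] must be row-stochastic: every entry is a probability and each row sums to 1. A small sanity-check sketch in Python (the function name is ours):

```python
def is_valid_handover_policy(matrix, tol=1e-9):
    """Check the heads' policy matrix [p_jk]: every entry is a probability
    and each row sums to 1 (head j distributes all of its abnormal requests,
    keeping a share p_jj for itself)."""
    return all(
        all(0.0 <= p <= 1.0 for p in row) and abs(sum(row) - 1.0) <= tol
        for row in matrix
    )
```

A tolerance is used because the row sums are computed in floating point.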

The Game Theoretic Formulae.
According to (8), we can get the probability that head k at time i shares at least one request, where 1 ≤ k ≤ n. Based on the definitions and constraints above, we obtain the utility objectives for the three players in this game.
The ADR wants to minimize the blocking rate. The average blocking rate in the whole network is obtained by weighting each head's blocking rate with the fraction of the abnormal request arrival rate it receives. The NDR wants to minimize the response delay of the normal requests; the delay of head k and the average delay of the whole network are derived analogously. With the queue intensity ρ_k(i) = (λ_k(i) + ∑_{j=1}^{n} p_jk(i) ν_j(i))/μ_k(i) at head k, we obtain the load balancing index. Each player in the game chooses a policy to optimize its own utility objective function.
The ADR minimizes the average blocking rate subject to its arrival-rate constraints; the NDR minimizes the average delay subject to its arrival-rate constraints; and the cluster heads minimize the load balancing index subject to the sharing-probability constraints. To sum up, we obtain the NEP solution from the resulting equation group. The NEP solution is the optimal policy for each player given the policies of the others; as rational players, none has a motivation to deviate from this choice.
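The NEP itself comes from solving the equation group above. As a rough, self-contained illustration of the three-player interaction, the following Python sketch iterates grid-search best responses on a toy two-head instance. The objective forms (M/M/1-style delay and busy-probability blocking), the rates, and all names are simplifying assumptions, not the paper's exact formulas:

```python
import itertools

# Illustrative two-head instance; all rates (packets/min) are assumptions.
MU = (25.0, 20.0)          # service rates of the two heads
LAM, NU = 18.0, 9.0        # total normal / abnormal arrival rates
GRID = [i / 20 for i in range(1, 20)]   # discretized policy fractions

def delay(f):
    # NDR objective: arrival-weighted M/M/1 sojourn time (simplified form);
    # f is the fraction of normal traffic sent to head 1
    l1, l2 = LAM * f, LAM * (1 - f)
    return f / (MU[0] - l1) + (1 - f) / (MU[1] - l2)

def blocking(g, f):
    # ADR objective: arrival-weighted busy probability of the chosen head;
    # g is the fraction of abnormal traffic sent to head 1
    l1, l2 = LAM * f, LAM * (1 - f)
    return g * l1 / MU[0] + (1 - g) * l2 / MU[1]

def load_index(f, g, q1, q2):
    # heads' objective: sum of squared queue intensities after handover
    l1, l2 = LAM * f, LAM * (1 - f)
    n1, n2 = NU * g, NU * (1 - g)
    a1, a2 = q1 * n1 + (1 - q2) * n2, q2 * n2 + (1 - q1) * n1
    return ((l1 + a1) / MU[0]) ** 2 + ((l2 + a2) / MU[1]) ** 2

def best_response_nep(iters=20):
    """Iterate best responses until (approximately) no player can improve."""
    f = g = q1 = q2 = 0.5
    for _ in range(iters):
        f = min(GRID, key=delay)
        g = min(GRID, key=lambda x: blocking(x, f))
        q1, q2 = min(itertools.product(GRID, GRID),
                     key=lambda q: load_index(f, g, *q))
    return f, g, q1, q2
```

At the fixed point no player can improve by a unilateral grid move, which is the discrete analogue of the NEP condition stated above.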

Property Validation
In this section, we give some theorems and propositions with their proofs to reveal some properties of the NEP solution. We prove the existence and stability of NEP as well as the advantage of NEP and handover efficiency.

Theorem 1. A Nash Equilibrium Point exists in the request scheduling game.
Proof. As defined in Section 3, we use A, B, and C, respectively, to represent the policy sets of NDR, ADR, and heads, so the whole policy space of all three players is S = A × B × C. We use (a, b, c) to represent a point in the space S. A, B, and C are all subsets of Hausdorff spaces. With the total arrival rate constraints (8) and (9) and the total sharing probability constraint (10) in Section 3, we can reason that the sets A, B, and C are all closed. We can also prove that A, B, and C are all convex sets (the proof is given in Appendix A) and that the utility functions are pseudoconvex (Appendix B). Since S is a nonempty, compact, and convex policy space and each player's utility is pseudoconvex in its own policy, a Nash Equilibrium Point exists by the standard fixed-point argument.
This theorem has practical significance in video sensor networks. In this game, when a game function, such as the average delay, changes slightly into a similar but different formulation, the new NEP will not vary greatly. An NEP represents the result of data-reporting request assignment and network resource allocation, and accidents in the network can happen at any time without warning. As shown in (23), changes of a processor in a head lead to changes of the service rate μ, so the whole delay objective changes slightly. Because of the stability of the NEP solution, even while still following the same policies, the quality of service and the performance of the network will not be affected much. Hence, the NEP gives a smooth transition under sudden accidents, which is meaningful for stable surveillance.
We will prove that the NEP solution and the handover mechanism outperform other mechanisms under some difficult working conditions.

Proposition 3. Given the arrival rates of abnormal and normal requests at head 1 as ν_1 and λ_1, as well as the service rate μ_1, the normal request response delay W_1 under handover is less than the delay W in the M/M/1 model if the rates satisfy λ_1/ν_1 → ∞.

Proof. We use the waiting time of a normal request packet to express the delay in M/M/1 and the delay under the handover mechanism. Using the condition λ_1/ν_1 → ∞, we take the ratio of the two delays and evaluate its limit with higher-order infinitesimals, obtaining W_1 < W. The limiting condition λ_1/ν_1 → ∞ corresponds to the situation in which normal requests vastly outnumber abnormal requests. In this situation, normal data is the main load of the network compared with abnormal data, and it is necessary to attend to the quality demand of the NDR. Proposition 3 indicates that, by using handover, the normal request delay is less than the delay in the nondifferentiated M/M/1 model.

Proposition 4. On the conditions ν_1/λ_1 → ∞ and ν_2/λ_2 → ∞, the weighted blocking rate for abnormal requests under the handover mechanism matches that of the M/M/1(1) model.

Proof. The low-priority queue for abnormal requests is of the immediate-rejection type. Suppose that there are two heads with sharing probabilities q_1 and q_2. According to (11), we obtain the weights for the cases in which no abnormal request is handed over to head 1 or head 2. For the M/M/1(1) model, the weighted blocking rates for abnormal requests at the two heads follow directly; in the handover mechanism, the corresponding weighted blocking rates are expressed analogously. The M/M/1(1) model is used to serve the requests because the queue model has to accommodate the condition ν_1/λ_1 → ∞, ν_2/λ_2 → ∞, which means that abnormal requests far outnumber normal requests. Additionally, an abnormal request has a delay threshold and will not wait for a long period of time. Since very few normal requests need to be processed, the main objective is to serve the ADR, and M/M/1(1) is a proper model for abnormal requests. Proposition 4 indicates that the handover mechanism performs as well as the M/M/1(1) model when abnormal data imposes a tremendous load on the network.

Proposition 5. On the conditions μ_1/μ_2 < 1 and λ_1/λ_2 → ∞, the load balancing index of the handover model is lower than that of M/M/1.

Proof. Suppose that there are two heads performing handover. We compare the balancing index in M/M/1 with the index between heads under handover. On the conditions μ_1/μ_2 < 1 and λ_1/λ_2 → ∞, the ratio of the two indexes can be bounded with higher-order infinitesimals, and by the square inequality the handover index is smaller. Nodes of high processing ability are supposed to serve more packets in accordance with their practical capacity. The condition μ_1/μ_2 < 1 with λ_1/λ_2 → ∞ describes a situation in which the two heads may lose balance: the processor in head 2 is stronger than the one in head 1, so head 2 is supposed to process more requests, yet many more requests arrive at head 1 than at head 2. This situation tests the adjustability of the network. Proposition 5 indicates that the handover model shows better adjustability than M/M/1.
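Proposition 3's delay comparison can be illustrated numerically. In the sketch below, the nondifferentiated M/M/1 queue serves normal and abnormal packets together, while the priority-with-handover case is approximated by letting normal traffic see only itself; all rates are assumed values with λ_1/ν_1 large:

```python
def mm1_sojourn(arrival_rate, mu):
    """Mean time in system for an M/M/1 queue (requires stability)."""
    assert arrival_rate < mu, "queue must be stable"
    return 1.0 / (mu - arrival_rate)

mu, lam, nu = 25.0, 18.0, 2.0          # assumed rates; lam/nu is large
w_plain = mm1_sojourn(lam + nu, mu)    # no priority: abnormal shares the queue
w_prio = mm1_sojourn(lam, mu)          # priority + rejection/handover of abnormal
```

Here the priority case is strictly faster because the abnormal traffic never queues in front of normal packets, consistent with the direction of the inequality in Proposition 3.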

Proposition 6. The NEP solution shows inefficiency as the number of heads increases when all heads have the same workload and processing ability.

Proof. By solving the equation group (23) to obtain the NEP, we have a load balancing constraint. According to the form of the balancing index and the square inequality, equality holds when ρ_1 = ρ_2 = ⋯ = ρ_n. Then we can determine the sharing probability for each head in different cases: when all n heads share, p_jk = 1/n for 1 ≤ j, k ≤ n; when n − 1 heads share, p_jk = 1/(n − 1) for 1 ≤ j, k ≤ n − 1. We define the inefficiency indexes accordingly. The inefficiency index can be regarded as the degree of wasted network resource: the total probability that a head could do some work but does not. Under the conditions of Proposition 6, every head has the same workload and processing ability, but in an infinitesimal period every head has some probability of being free of normal load while receiving no abnormal request, which wastes network resources. Computing the total probability of inefficiency, we find that the index grows as the number of heads increases. This proposition also has practical significance: because of their similar processing ability and identical workload, the heads have no motivation to share abnormal requests, and no sharing means the handover is inefficient. Therefore, we prefer some distinction among heads in head selection.

Numerical Results
We use Matlab to evaluate the performance of our proposed handover mechanism and game theoretic scheduling method. We consider the single-hop case, with 6 heads that respond to all the data-reporting requests from 30 sensor nodes. We set the 1st, 3rd, and 5th heads to have a capacity of 4.2 Mbps and the 2nd, 4th, and 6th heads to have a capacity of 3.3 Mbps, because homogeneous nodes may cause the inefficiency shown in Proposition 6. The ADR and NDR packet sizes are both 1 MB. Accordingly, the average service rates of the 1st and 2nd heads are μ_1 = 25 packets/min and μ_2 = 20 packets/min. The 30 sensor nodes generate request packets according to a Poisson distribution, and the average packet arrival rate λ + ν is set to be less than the total service rate, thus keeping the network stable. According to the capacities, the total service rate of 135 requests/min is the upper bound of the load. The network utilization can be expressed as ρ = ∑_{k=1}^{6}(λ_k + ν_k)/∑_{k=1}^{6} μ_k. We evaluate the network performance at different utilization values ranging over (0, 1).
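The service-rate setup can be checked in a few lines of Python (the per-head rates are the paper's stated values; variable names are ours):

```python
MU = [25, 20, 25, 20, 25, 20]   # packets/min: heads 1,3,5 vs heads 2,4,6
TOTAL_MU = sum(MU)               # 135 requests/min, the load upper bound

def utilization(total_arrival_rate):
    """Network utilization rho = total arrival rate / total service rate."""
    return total_arrival_rate / TOTAL_MU
```

For instance, a total arrival rate of 67.5 requests/min corresponds to utilization ρ = 0.5.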

Handover Mechanism.
By employing the game theoretic optimization to get the NEP solution, we record how the handover probabilities change with the ratio of the normal request rate at head 1 to the rate at head 2. The probabilities p_12(1) and p_21(1) on the vertical axis, corresponding to ten values of λ_1(1)/λ_2(1) on the horizontal axis, are shown in Figure 3(a). On the first half of the horizontal axis, we have λ_1(1)/λ_2(1) < 1: the normal request arrival rate at head 1 is less than at head 2, yet head 1 has a larger capacity than head 2, so load unbalance will occur. To keep load balance, head 1 lowers the probability of passing abnormal requests on to head 2, while head 2 raises its probability of passing requests on. As shown in Figure 3, when the ratio reaches a value slightly larger than 1, load balance is achieved. At horizontal values larger than but close to 1, the system has no obvious tendency to lose balance, so the two heads both experience a section of probability fluctuation across the horizontal axis. The fluctuation on the second half of the horizontal axis can be interpreted as the self-adjustment of the heads to regain load balance. When the ratio is large enough, as shown in the right section of Figure 3(a), the heads have a definite orientation in changing their handover probabilities. Figure 3(b) shows more details of the adjustment and fluctuation by taking 100 ratio values into account; we can see a change tendency similar to Figure 3(a) on the first half of the coordinate. We use the normal request ratio rather than the abnormal request ratio because a large amount of normal requests is the main reason no head is free for the ADR, in which case heads need to share abnormal requests frequently.
From Figure 3, we see the interaction between the request arrival volume and the heads' handover-based policies. In different arrival-rate states, the heads always use the NEP solution to alleviate load and keep balance through handover.

Performance Evaluation.
Let λ be the total normal request arrival rate and ν the total abnormal request arrival rate. We record the simulation results in two cases, λ/ν = 2 and λ/ν = 1. We thus obtain 9 values of λ + ν at the 9 utilization values, as the sequence [13.5, 27, 40.5, 54, 67.5, 81, 94.5, 108, 121.5] (requests/min).
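The nine arrival-rate values are simply the utilization grid multiplied by the total service rate, and each total splits into normal and abnormal parts according to the ratio λ/ν; a small Python sketch (names are ours):

```python
TOTAL_MU = 135.0   # total service rate (requests/min) from the setup

# one total arrival rate per utilization point 0.1, 0.2, ..., 0.9
totals = [round(TOTAL_MU * k / 10, 1) for k in range(1, 10)]

def split_by_ratio(total, ratio):
    """Split a total arrival rate into (normal, abnormal) with normal/abnormal = ratio."""
    abnormal = total / (1 + ratio)
    return total - abnormal, abnormal
```

For example, at the first utilization point with λ/ν = 2, the 13.5 requests/min split into 9.0 normal and 4.5 abnormal.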
In our proposed game theoretic request scheduling based on handover, we use the NEP from (23) to make head selections and optimize the utility functions of the three players in the game, as discussed in Section 3.
Under M/M/1, the equal-load solution can be obtained with the load balance constraint in (41). In the minimum delay selection method, based on M/M/1, the optimization solves the minimum-delay problem. In the minimum blocking probability method, based on M/M/1(1), the optimization solves the minimum-blocking problem.
The NEP in our method contains three optimal solutions; hence, we compare the simulation results from three aspects. We compare our NEP with the equal load and minimum delay solutions by delay simulation; for blocking rate, we compare our NEP with the minimum blocking probability solution; and in the load index comparison, the results of all four optimizations are illustrated. Figure 4 shows three surfaces, which respectively represent the delay solutions of NEP, minimum delay (MD) [13], and equal load (EL) [14]. Each surface shows the performance of an optimal delay method by giving the delay as a function of both the ratio λ/ν and the network utilization ρ. For normal requests, NEP outperforms MD and EL. Details are shown in Figure 5. In Figure 5(a), where λ/ν = 2, NEP has lower delays than both of the other two methods when the network utilization ρ < 0.6. NEP performs better than the minimum delay method at all utilization points except ρ = 0.9. The equal load method performs best only at ρ = 0.7, and even there all three methods perform similarly. At all utilizations, the average delays of both NEP and minimum delay are lower than 0.15 min. Figure 5(b) shows the comparison when the ratio changes to λ/ν = 1: NEP has lower delays than both of the other methods when ρ < 0.7 except at ρ = 0.3. In particular, at ρ = 0.9, NEP has significant dominance over the other two methods.

Delay for NDR.
As shown in Figures 4 and 5, the dominance of NEP holds not only when the network has relatively small utilization but also when a large load is put on the network. We consider the reason from two aspects. Although the NEP solution has an optimization algorithm and constraints similar to the minimum delay method, the NEP is based on the handover mechanism to balance load. The other reason is that each head uses high- and low-priority queues to process the two kinds of packets; a head drops or hands over abnormal request packets when necessary to ensure that these requests are processed in time.

Blocking Rate for ADR.

The results in Figure 7(a) are on the condition λ/ν = 2. We see that the NEP blocking rate is relatively stable and performs better than the minimum blocking probability method when the network utilization is less than 0.6. As the load increases further, the NEP blocking rate reaches high values. In Figure 7(b), the total arrival rate ratio is λ/ν = 1. Although the NEP still performs better at most utilization points in the range ρ < 0.5, its blocking rate shows considerable fluctuation.
We draw two conclusions from the two figures. First, there is an optimal utilization range for the NEP blocking rate, such as ρ < 0.6 in Figure 7(a); in this range, abnormal data reporting works well with a relatively low blocking probability. Second, the performance fluctuation is mainly caused by the gradual reduction of the ratio λ/ν; increasing the numbers of both ADR and NDR does not lead to fluctuation. When abnormal requests take a considerable proportion of the total arrival rate, the blocking rate performance becomes unstable. At high utilization, a high blocking rate for abnormal requests is reasonable: rejection in time lets the ADR find a new head as quickly as possible rather than waiting a long time in the queue.

Load of Heads.
According to (16), minimizing the index is a control and optimization process. When the expression on the right side of the inequality is a constant, we obtain the condition under which (44) holds with equality. The NEP load index is computed with (16); for the other methods, the index is expressed as ∑_{k=1}^{6}((λ_k + ν_k)/μ_k)^2. To evaluate the load balance of the four optimization methods, we let all 6 heads have the same capacity of 4.2 Mbps. In addition, the total arrival rate λ + ν at each network utilization point is a constant. As shown in Figure 8, all four methods perform nearly the same at different utilizations and ratios λ/ν. The results indicate that NEP satisfies the QoS demands of both ADR and NDR without losing load balance.
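The load-balance index used in this comparison can be written down directly; a short Python sketch following the index form ∑((λ_k + ν_k)/μ_k)^2 (the function name is ours):

```python
def load_balance_index(arrivals, mus):
    """Sum of squared queue intensities ((lam_k + nu_k)/mu_k)^2; by the
    square inequality it is minimized, for a fixed total intensity, when
    all intensities are equal."""
    return sum((a / m) ** 2 for a, m in zip(arrivals, mus))
```

A balanced assignment always scores no worse than a skewed one with the same total load, which is the property the square inequality in the text captures.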

Conclusion
In this paper, we propose a comprehensive way to satisfy the QoS demands of both normal and abnormal data-reporting requests in order to improve accident discovery without interfering with the regular surveillance task in video sensor networks. The request scheduling game optimizes the utilities of both ADR and NDR and balances the load on the cluster heads. A handover mechanism ensures that the abnormal requests are processed in time. Based on game theoretic analysis, we prove several properties of the NEP and the handover mechanism. The existence and stability of the NEP ensure efficient optimization and good adjustability to accidents in the network. Numerical analysis shows that the game strategic scheduling outperforms other scheduling methods that consider a single objective (minimum delay or minimum blocking rate).

A. Policy Set Convex Property Proof

For any θ ∈ (0, 1) and any two policies x, y ∈ B, the proof is the same as for the NDR: the convex combination θ·x + (1 − θ)·y still satisfies the total arrival rate constraint, ∑_{k=1}^{n} (θ ν_k(i) + (1 − θ) ν′_k(i)) = ν. Therefore, the policy θ·x + (1 − θ)·y is in set B, and B is a convex set.

B. Utility Function Pseudoconvex Property Proof
The NDR objective function is expressed as (15). We adjust the constants to simplify the equation to the form D(λ_1, λ_2, ..., λ_n). Function D is a convex function, and a convex function must be pseudoconvex.
The ADR objective function is expressed as (13). We also adjust the constants to obtain an expression F. Function F is a convex function, so it is also a pseudoconvex function.

C. Policy Space Property Proof

According to the definition of a normed linear space, the NDR policy space is a normed linear space.