1 Introduction

This section gives a general literature overview, highlights the contributions and outlines the organization of the paper.

1.1 Literature overview

Game theory plays a fundamental role in resolving various conflicts and problems of decision theory (Roberts, 2008). Its application fields range from economics and policy (Jiang & Liu, 2018; Kicsiny et al., 2014) to environmental protection and, within that, (fresh, thermal etc.) water resource management (e.g. Kicsiny, 2017; Madani, 2010).

Within game theory, bimatrix games (including their mixed extension) have played a vital role from the very beginning of game theory (von Neumann, 1928) to the present day (Fernández et al., 1998; Kudryavtsev et al., 2017). Since it may be hard to solve a bimatrix game exactly, approximate and/or numerical solution concepts, which require computer/software tools, are often used.

In the cooperative approach to game-theoretical problems, the Pareto optimal strategy vectors of the Players (or the Pareto optimal payoff vectors) often represent a reasonable solution concept, since they provide strategies (payoff values) for the Players that cannot be improved in a rather natural sense (see Sect. 2.2 for the formal definition in case of two Players). (Here and later, we use the term strategy vector for the vector that contains all Players’ (current) strategies. In case of two Players, the term strategy pair is also used.)

There is a wide range of practical applications working with the Pareto optimal solution concept. For example, Jiang and Liu (2018) suggest a leader–follower equilibrium as the solution for the conflict between two stakeholders in a water supply network. One of the Players (the follower) selects its equilibrium strategy in such a way that it is also a Pareto optimal solution with respect to its own two objective (payoff) functions. Manea (2007) examines whether a subgame perfect equilibrium is Pareto optimal and vice versa in serial allocation games, where the Players move sequentially. In the present paper, we examine Pareto optimality in one-shot simultaneous-move bimatrix games. More generally, game theory and Pareto optimality have a fundamental role in solving multi-objective optimization (Chaudhuri & Deb, 2010) and control (Ungureanu, 2018) problems. In (Geoffrion, 1967), it is shown that a wide set of Pareto optimal (strategy) points, called the set of proper efficient points, can be produced as the solution of a scalar maximization problem. The method is more effective if the Players’ payoffs are concave in the strategy coordinates.

Bimatrix games may also provide approximate solutions for other game-theoretical problems. For example, differential games are discretized and simplified to bimatrix games to solve the conflict situation among several consumers of (ground)water (Kicsiny & Varga, 2019) and solar heated domestic water (Kicsiny, 2019) resources. In both papers, the Nash equilibria (see Sect. 2.2 for the formal definition in case of two Players) and the Pareto optimal strategy pairs of the resulting bimatrix games are considered as the approximate non-cooperative and cooperative solutions of the original problem, respectively. One of the use cases of the algorithm proposed in our present work is to check whether a Nash equilibrium is Pareto optimal, which is an important question in the field; see e.g. (Gaskó et al., 2012; Juszczuk, 2019; Kudryavtsev et al., 2017). As a more recent reference, Braggion et al. (2020) examine Nash equilibria (and specifically strong Nash equilibria) in finite games from different points of view, e.g. in view of Pareto optimality. Below, we deal with the Pareto optimality of any points in bimatrix games (not only Nash equilibria).

Generally, it may be hard (or even impossible) to determine all exact Pareto optimal strategy vectors, or even just to check the Pareto optimality of a given strategy vector. Accordingly, approximate solution methods are of great importance. In (Gaskó et al., 2012), certain Pareto optimal equilibria are detected with an evolutionary algorithm for continuous function optimization. Biró and Gudmundsson (2020) apply constrained Pareto optimal welfare optimization primarily to school choice for students. Basically, the problem is NP-hard (i.e. no polynomial-time algorithm for it is known, unless P = NP); nevertheless, polynomial-time solvability is established under certain settings of the problem.

As Bárány et al. (1992), Gatti et al. (2013) and Gatti and Sandholm (2014) deal with Pareto optimality in bimatrix games, their works may be considered the most closely related to ours. Accordingly, they are discussed in more detail in Sect. 2.1.

1.2 Contributions and organization of the paper

In the present paper, general n × m (\(n = 2,3, \ldots\); \(m = 2,3, \ldots\)) bimatrix games are studied (where Player 1’s strategies are n-dimensional vectors and Player 2’s strategies are m-dimensional vectors). First of all, an elementary proof is provided for a useful theorem, based on which the proposed algorithm for checking the Pareto optimality of strategy pairs becomes simpler. The algorithm can be made even more convenient with a proposed nonlinear transform, e.g. in the important case of 2 × 2 bimatrix games (Bárány et al., 1992; Gatti & Sandholm, 2014). The contributions of the paper, in detail, are the following:

  (a) A new elementary proof is provided for the following theorem: if a given strategy pair of a bimatrix game is not Pareto optimal, then one of the following two cases must hold. A) There are only strict Pareto improvements of the strategy pair (see Definition 2.2 in Sect. 2.2); in addition, at least one of the payoffs can be increased only in such a way that the other payoff also increases (while changing to another strategy pair). B) There is a weak but not strict Pareto improvement of the strategy pair. This theorem excludes a further conceivable case (namely Case C of Theorem 3.1 in Sect. 3.1).

  (b) Taking advantage of the above theorem, a new algorithm (dealing only with Cases A and B above) is proposed to check the Pareto optimality of any strategy pair of a bimatrix game.

  (c) A new nonlinear transform (bijection) of the Players’ strategy coordinates (as the variables of the bimatrix game) is proposed, under which the payoff functions become linear in the transformed variables. This transform makes the above algorithm even easier to use in case of 2 × 2 bimatrix games (and for other bimatrix games under certain circumstances).

The organization of the paper is the following: Sect. 2 details closely related works from the literature and recalls basic game-theoretical concepts as they are used in the later sections. The main theorem with its elementary proof, the Pareto optimality checking algorithm and the bijection for bimatrix games are given in Sect. 3, where two detailed numerical examples also illustrate the applicability of the results. Section 4 provides conclusions and future research proposals.

2 Preliminaries

In this section, the contents of some works (Bárány et al., 1992; Gatti & Sandholm, 2014; Gatti et al., 2013) from the literature are detailed, as they may be considered the most closely related to ours. Furthermore, for the Reader’s convenience, the formal definitions of some basic concepts from game theory (Forgó et al., 1999) are recalled as they are used later.

2.1 Related works

The problem of determining the Pareto optimal strategy vectors, or even just checking Pareto optimality, may be hard also in case of n × m (\(n = 2,3, \ldots\); \(m = 2,3, \ldots\)) bimatrix games. More particularly, checking the Pareto optimality of a strategy pair in bimatrix games is a P-hard problem, meaning that the number of time steps needed to solve it is a polynomial function of the number of the Players’ possible pure strategies (Gatti et al., 2013). Gatti et al. (2013) also suggest that the Pareto optimality checking problem is equivalent to a nonlinear optimization problem; such optimization problems are generally considered difficult. Gatti et al. (2013) deal generally with the computational time needed to determine whether a strategy vector is Pareto optimal. In our present paper, certain cases are excluded in advance (based on a theorem), which makes the Pareto optimality checking process in bimatrix games faster. Furthermore, we give a nonlinear transform that simplifies the checking process further for certain bimatrix games (e.g. in the 2 × 2 case generally).

Gatti and Sandholm (2014) deal with the structure of Pareto frontiers (the sets of the Pareto optimal payoff vectors) and the computational time needed to determine them in n × n (\(n = 2,3, \ldots\)) bimatrix games. It is found that the Pareto frontier can be calculated in polynomial (in n) time and consists of a polynomial (in n) number of pieces. In the present paper, we deal with general n × m bimatrix games when checking the Pareto optimality of strategy pairs.

In (Bárány et al., 1992), the structure of the Pareto frontier is revealed for ordinal bimatrix games, in which the entries within each payoff matrix are pairwise different numbers. In the present work, we study the Pareto optimality of strategy pairs in general (not necessarily ordinal) bimatrix games.

2.2 Definition of basic concepts

Definition 2.1

Consider a game with two Players. Let us denote the strategy sets of Players 1 and 2 with \(X\) and \(Y\), respectively. Let the functions \(F_{1}\) and \(F_{2}\): \(X \times Y \to {\mathbf{R}}\) be the payoff functions of Players 1 and 2, respectively.

The strategy pair \(\left( {x^{*} ,y^{*} } \right) \in X \times Y\) is called Pareto optimal if there is no strategy pair \(\left( {x,y} \right) \in X \times Y\) for which the following hold:

$$ F_{i} \left( {x,y} \right) \ge F_{i} \left( {x^{*} ,y^{*} } \right) $$

for each \(i = 1,2\); and for at least one \(i_{0} \in \left\{ {1,2} \right\}\),

$$ F_{{i_{0} }} \left( {x,y} \right) > F_{{i_{0} }} \left( {x^{*} ,y^{*} } \right). $$

Then the ordered pair of payoff values \(\left( {F_{1} \left( {x^{*} ,y^{*} } \right),F_{2} \left( {x^{*} ,y^{*} } \right)} \right) \in {\mathbf{R}}^{2}\) is called a Pareto optimal payoff pair. The set of the Pareto optimal payoff pairs is called the Pareto frontier.

Definition 2.2

Consider a game with two Players with the same meaning of \(X\), \(Y\), \(F_{1}\) and \(F_{2}\) as in Definition 2.1.

It is said that there is a weak Pareto improvement of the strategy pair \(\left( {x,y} \right) \in X \times Y\) if there is a strategy pair \(\left( {x^{*} ,y^{*} } \right) \in X \times Y\) for which the following hold:

$$ F_{i} \left( {x^{*} ,y^{*} } \right) \ge F_{i} \left( {x,y} \right) $$

for both indices \(i \in \left\{ {1,2} \right\}\); and for at least one index \(i_{0} \in \left\{ {1,2} \right\}\),

$$ F_{{i_{0} }} \left( {x^{*} ,y^{*} } \right) > F_{{i_{0} }} \left( {x,y} \right). $$

If the above conditions hold, then it is also said that \(\left( {x^{*} ,y^{*} } \right)\) weakly Pareto dominates \(\left( {x,y} \right)\).

If strict inequality holds for both indices \(i \in \left\{ {1,2} \right\}\), then it is also said that there is a strict Pareto improvement of \(\left( {x,y} \right)\) and \(\left( {x^{*} ,y^{*} } \right)\) strictly Pareto dominates \(\left( {x,y} \right)\).
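The two domination notions of Definition 2.2 can be stated compactly in code. The following Python sketch compares two payoff pairs; the function names are our illustrative choices, not taken from the paper:

```python
# Definition 2.2 as predicates on payoff pairs (tuples (F1, F2)).

def weakly_improves(new, old):
    """True if 'new' weakly Pareto dominates 'old':
    no payoff decreases and at least one strictly increases."""
    return (all(n >= o for n, o in zip(new, old))
            and any(n > o for n, o in zip(new, old)))

def strictly_improves(new, old):
    """True if 'new' strictly Pareto dominates 'old': both payoffs increase."""
    return all(n > o for n, o in zip(new, old))
```

Note that a strict improvement is in particular a weak one, while the converse fails, e.g. for the payoff pairs (2, 1) versus (1, 1).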

Definition 2.3

Consider a game with two Players with the same meaning of \(X\), \(Y\), \(F_{1}\) and \(F_{2}\) as in Definition 2.1.

The strategy pair \(\left( {x^{*} ,y^{*} } \right) \in X \times Y\) is called a Nash equilibrium of the game if the following hold:

$$ F_{1} \left( {x^{*} ,y^{*} } \right) \ge F_{1} \left( {x,y^{*} } \right), $$
$$ F_{2} \left( {x^{*} ,y^{*} } \right) \ge F_{2} \left( {x^{*} ,y} \right) $$

for any \(x \in X\), \(y \in Y\).
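In a bimatrix game, each payoff is linear in the respective Player's own mixed strategy, so it suffices to check pure-strategy deviations when verifying the inequalities of Definition 2.3. A minimal Python sketch of such a check (the function name and the tolerance parameter are our illustrative choices):

```python
def is_nash(A, B, x, y, tol=1e-9):
    """Definition 2.3 check: no profitable pure deviation for either Player
    (sufficient because each payoff is linear in that Player's own strategy)."""
    n, m = len(A), len(A[0])
    Ay = [sum(A[i][j] * y[j] for j in range(m)) for i in range(n)]  # (A y)_i
    xB = [sum(x[i] * B[i][j] for i in range(n)) for j in range(m)]  # (x^T B)_j
    F1 = sum(x[i] * Ay[i] for i in range(n))                        # x^T A y
    F2 = sum(xB[j] * y[j] for j in range(m))                        # x^T B y
    return F1 >= max(Ay) - tol and F2 >= max(xB) - tol
```

For instance, in matching pennies (\(A = -B\) with entries ±1) the uniform mixed pair passes this check, while any pure pair fails it.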

Remark 2.1

  1. Thus, when selecting the Nash equilibrium strategies \(x^{*}\), \(y^{*}\), each Player maximizes its own payoff function, provided that the other Player also selects its own equilibrium strategy. Generally, a Nash equilibrium is considered a non-cooperative solution of a game.

  2. In contrast, at a Pareto optimal solution (strategy pair) of the game, there are no strategies for the Players with which at least one of the payoffs would increase while the other payoff would not decrease. Accordingly, if a given strategy pair/payoff pair is not Pareto optimal, then at least one Player’s payoff can be improved without harming the other’s. So it is generally natural to expect the Players to agree on some Pareto optimal strategy pair for cooperation (otherwise the cooperation could clearly be improved). In practice, it is often examined whether a Nash equilibrium is also Pareto optimal. If not, it may be reasonable for the Players to change from the non-cooperative solution to a cooperative (Pareto optimal) one (which may or may not be a Nash equilibrium).

3 Pareto optimality checking in bimatrix games

Let us have (as the mixed extension of a two-person finite game) a bimatrix game with payoff matrices \(A \in {\mathbf{R}}^{n \times m}\) for Player 1 and \(B \in {\mathbf{R}}^{n \times m}\) for Player 2 \(\left( {n,m \ge 2} \right)\), where

$$ A: = \left[ {\begin{array}{*{20}c} {a_{1,1} } & \cdots & {a_{1,m} } \\ \vdots & {} & \vdots \\ {a_{n,1} } & \cdots & {a_{n,m} } \\ \end{array} } \right],\quad B: = \left[ {\begin{array}{*{20}c} {b_{1,1} } & \cdots & {b_{1,m} } \\ \vdots & {} & \vdots \\ {b_{n,1} } & \cdots & {b_{n,m} } \\ \end{array} } \right]. $$

A strategy pair/point \(\left( {x,y} \right): = \left( {x_{1} , \ldots ,x_{n} ,y_{1} , \ldots ,y_{m} } \right)\) is admissible in the bimatrix game (where \(x = \left( {x_{1} , \ldots ,x_{n} } \right)\) denotes a strategy of Player 1 and \(y = \left( {y_{1} , \ldots ,y_{m} } \right)\) denotes a strategy of Player 2) if the following conditions hold:

$$ 0 \le x_{1} , \ldots ,x_{n} ,y_{1} , \ldots ,y_{m} , $$
(1a)
$$ x_{1} + \cdots + x_{n} = 1, $$
(1b)
$$ y_{1} + \cdots + y_{m} = 1. $$
(1c)

Let \(S\) denote the set of the admissible points in the bimatrix game. (In fact \(S\) can be identified with the Cartesian product of two simplices.)

Remark 3.1

Strategies x and y correspond to probability distributions over the n pure strategies of Player 1 and the m pure strategies of Player 2, respectively, according to the mixed extension of a two-person finite game. Matrices A and B directly represent the payoff values of Players 1 and 2, respectively, when both of them play pure strategies (i.e. when x and y each have exactly one nonzero coordinate, with value 1).

Matrices A and B determine the payoff functions \(F_{1}\) and \(F_{2}\) for Players 1 and 2, respectively, as can be seen in Eqs. (2a) and (2b).

$$ F_{1} \left( {x,y} \right) = x^{T} Ay = a_{1,1} x_{1} y_{1} + \cdots + a_{n,1} x_{n} y_{1} + \cdots + a_{1,m} x_{1} y_{m} + \cdots + a_{n,m} x_{n} y_{m} $$
(2a)
$$ F_{2} \left( {x,y} \right) = x^{T} By = b_{1,1} x_{1} y_{1} + \cdots + b_{n,1} x_{n} y_{1} + \cdots + b_{1,m} x_{1} y_{m} + \cdots + b_{n,m} x_{n} y_{m} $$
(2b)

where \(\left( {x,y} \right) \in S\).
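The admissibility conditions (1a)–(1c) and the bilinear payoffs (2a), (2b) translate directly into code. A short Python sketch (function names and the tolerance are our illustrative choices):

```python
def admissible(x, y, tol=1e-9):
    """Conditions (1a)-(1c): nonnegative coordinates, each strategy summing to 1."""
    return (all(v >= -tol for v in list(x) + list(y))
            and abs(sum(x) - 1) <= tol
            and abs(sum(y) - 1) <= tol)

def payoffs(A, B, x, y):
    """Bilinear payoffs (2a) and (2b): F1 = x^T A y, F2 = x^T B y."""
    n, m = len(x), len(y)
    F1 = sum(x[i] * A[i][j] * y[j] for i in range(n) for j in range(m))
    F2 = sum(x[i] * B[i][j] * y[j] for i in range(n) for j in range(m))
    return F1, F2
```

For example, with \(A\) the 2 × 2 identity matrix and \(B\) its reversal, the uniform pair \(x = y = (0.5, 0.5)\) is admissible and yields \(F_{1} = F_{2} = 0.5\).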

3.1 Underlying theorem

In this section, an elementary proof is provided for a useful theorem (Theorem 3.1). This theorem makes the Pareto optimality check significantly easier, since it excludes a certain case (Case C below) from the possible ones in advance.

Basically, the following three disjoint cases (Case A, B or C) may occur if a strategy point \(L \in S\) is not Pareto optimal:

Case A

There are only strict Pareto improvements of the point \(L\). In addition, at least one of the payoffs (values of \(F_{1}\) and \(F_{2}\)) at point \(L\) can be increased (while moving from point \(L\) to some other point) only in such a way that the other payoff also increases.

Case B

There is a weak but not strict Pareto improvement of the point \(L\).

Case C

At least one of the payoffs at point \(L\) can be increased. If \(F_{i}\) (\(i = 1,2\)) can be increased, this is possible in both of the following ways and in no other way: 1. \(F_{i}\) can be increased in such a way that the other payoff also increases. 2. \(F_{i}\) can be increased in such a way that the other payoff decreases. (That is, \(F_{i}\) cannot be increased in such a way that the other payoff remains constant.)

Theorem 3.1

A strategy pair \(L \in S\) is not Pareto optimal in a bimatrix game (given in Sect. 3) if and only if the above Case A or B holds. That is, the above Case C is not possible.
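By Theorem 3.1, non-optimality of \(L\) is always witnessed by some weakly (possibly strictly) dominating point. This suggests a simple numerical sanity check: the following randomized Python sketch samples admissible pairs and reports a weak Pareto improvement if one is found. It is a heuristic only (failure to find an improvement does not prove Pareto optimality) and is not the algorithm proposed later in this section; all names and parameters are our illustrative choices.

```python
import random

def find_weak_improvement(A, B, x0, y0, trials=20000, seed=0, tol=1e-12):
    """Randomized search for a weak Pareto improvement of (x0, y0).
    Returns a dominating pair (x, y) if one is sampled, else None."""
    rng = random.Random(seed)
    n, m = len(x0), len(y0)

    def F(x, y):  # bilinear payoffs (2a), (2b)
        return (sum(x[i] * A[i][j] * y[j] for i in range(n) for j in range(m)),
                sum(x[i] * B[i][j] * y[j] for i in range(n) for j in range(m)))

    def simplex(k):  # random point of the k-dimensional probability simplex
        w = [rng.random() for _ in range(k)]
        s = sum(w)
        return [v / s for v in w]

    f0 = F(x0, y0)
    for _ in range(trials):
        x, y = simplex(n), simplex(m)
        f = F(x, y)
        if (f[0] >= f0[0] - tol and f[1] >= f0[1] - tol
                and (f[0] > f0[0] + tol or f[1] > f0[1] + tol)):
            return x, y
    return None
```

In the Prisoner's Dilemma example, the search quickly finds an improvement of the dominated pure pair (Defect, Defect), while it finds none for (Cooperate, Cooperate), whose payoff sum is maximal.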

Proof

Arguing by contradiction, assume that Case C holds with respect to the Pareto optimality of a strategy pair/point \(L: = \left( {x_{1}^{L} , \ldots ,x_{n}^{L} ,y_{1}^{L} , \ldots ,y_{m}^{L} } \right) \in S\) in the general bimatrix game given in Sect. 3.

After rearranging Eqs. (2a) and (2b), using equalities \(x_{n} = 1 - x_{1} - \cdots - x_{n - 1}\) and \(y_{m} = 1 - y_{1} - \cdots - y_{m - 1}\) and the notation

$$ \begin{aligned} \tilde{a}_{1,1} & : = a_{1,1} - a_{n,1} - a_{1,m} + a_{n,m} ,\; \ldots , \\ \tilde{a}_{n - 1,1} & : = a_{n - 1,1} - a_{n,1} - a_{n - 1,m} + a_{n,m} , \\ \tilde{a}_{1,2} & : = a_{1,2} - a_{n,2} - a_{1,m} + a_{n,m} ,\; \ldots , \\ \tilde{a}_{n - 1,2} & : = a_{n - 1,2} - a_{n,2} - a_{n - 1,m} + a_{n,m} ,\; \ldots , \\ \tilde{a}_{1,m - 1} & : = a_{1,m - 1} - a_{n,m - 1} - a_{1,m} + a_{n,m} , \, \ldots , \\ \tilde{a}_{n - 1,m - 1} & : = a_{n - 1,m - 1} - a_{n,m - 1} - a_{n - 1,m} + a_{n,m} , \\ \tilde{a}_{1}^{x} & : = a_{1,m} - a_{n,m} ,\; \ldots , \\ \tilde{a}_{n - 1}^{x} & : = a_{n - 1,m} - a_{n,m} , \\ \tilde{a}_{1}^{y} & : = a_{n,1} - a_{n,m} ,\; \ldots , \\ \tilde{a}_{m - 1}^{y} & : = a_{n,m - 1} - a_{n,m} , \\ \end{aligned} $$
$$ \begin{aligned} \tilde{b}_{1,1} & : = b_{1,1} - b_{n,1} - b_{1,m} + b_{n,m} , \, \ldots , \\ \tilde{b}_{n - 1,1} & : = b_{n - 1,1} - b_{n,1} - b_{n - 1,m} + b_{n,m} , \\ \tilde{b}_{1,2} & : = b_{1,2} - b_{n,2} - b_{1,m} + b_{n,m} ,\; \ldots , \\ \tilde{b}_{n - 1,2} & : = b_{n - 1,2} - b_{n,2} - b_{n - 1,m} + b_{n,m} , \, \ldots , \\ \tilde{b}_{1,m - 1} & : = b_{1,m - 1} - b_{n,m - 1} - b_{1,m} + b_{n,m} , \, \ldots , \\ \tilde{b}_{n - 1,m - 1} & : = b_{n - 1,m - 1} - b_{n,m - 1} - b_{n - 1,m} + b_{n,m} , \\ \tilde{b}_{1}^{x} & : = b_{1,m} - b_{n,m} , \, \ldots , \\ \tilde{b}_{n - 1}^{x} & : = b_{n - 1,m} - b_{n,m} , \\ \tilde{b}_{1}^{y} & : = b_{n,1} - b_{n,m} , \, \ldots , \\ \tilde{b}_{m - 1}^{y} & : = b_{n,m - 1} - b_{n,m} , \\ \end{aligned} $$

the Players’ payoffs have the following form:

$$ \begin{aligned} F_{1} \left( {x_{1} , \ldots ,x_{n - 1} ,y_{1} , \ldots ,y_{m - 1} } \right) = \tilde{a}_{1,1} x_{1} y_{1} + \cdots + \tilde{a}_{n - 1,1} x_{n - 1} y_{1} + \tilde{a}_{1,2} x_{1} y_{2} + \cdots + \tilde{a}_{n - 1,2} x_{n - 1} y_{2} + \cdots + \tilde{a}_{1,m - 1} x_{1} y_{m - 1} + \hfill \\ \cdots + \tilde{a}_{n - 1,m - 1} x_{n - 1} y_{m - 1} + \tilde{a}_{1}^{x} x_{1} + \cdots + \tilde{a}_{n - 1}^{x} x_{n - 1} + \tilde{a}_{1}^{y} y_{1} + \cdots + \tilde{a}_{m - 1}^{y} y_{m - 1} + a_{n,m} , \hfill \\ \end{aligned} $$
(3a)
$$ \begin{aligned} F_{2} \left( {x_{1} , \ldots ,x_{n - 1} ,y_{1} , \ldots ,y_{m - 1} } \right) = \tilde{b}_{1,1} x_{1} y_{1} + \cdots + \tilde{b}_{n - 1,1} x_{n - 1} y_{1} + \tilde{b}_{1,2} x_{1} y_{2} + \cdots + \tilde{b}_{n - 1,2} x_{n - 1} y_{2} + \cdots + \tilde{b}_{1,m - 1} x_{1} y_{m - 1} + \hfill \\ \cdots + \tilde{b}_{n - 1,m - 1} x_{n - 1} y_{m - 1} + \tilde{b}_{1}^{x} x_{1} + \cdots + \tilde{b}_{n - 1}^{x} x_{n - 1} + \tilde{b}_{1}^{y} y_{1} + \cdots + \tilde{b}_{m - 1}^{y} y_{m - 1} + b_{n,m} . \hfill \\ \end{aligned} $$
(3b)

These expressions of \(F_{1}\) and \(F_{2}\) contain only the n − 1 + m − 1 = n + m − 2 free coordinates of x and y, whose values can be selected independently of one another (within certain bounds). More particularly, these coordinates are \(x_{1}\), …, \(x_{n - 1}\), \(y_{1}\), …, \(y_{m - 1}\), for which

$$ 0 \le x_{1} , \ldots ,x_{n - 1} ,y_{1} , \ldots ,y_{m - 1} ;\;x_{1} + \cdots + x_{n - 1} \le 1;\;y_{1} + \cdots + y_{m - 1} \le 1 $$

should hold.
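The reduction to the free coordinates can be checked numerically: the following Python sketch evaluates (3a) via the tilde-coefficients and compares it with the full bilinear form (2a) after substituting \(x_{n} = 1 - x_{1} - \cdots - x_{n-1}\) and \(y_{m} = 1 - y_{1} - \cdots - y_{m-1}\). This is an illustrative consistency check, not part of the proof; the function names are ours.

```python
def full_F1(A, x, y):
    """Eq. (2a): F1 = x^T A y over all coordinates."""
    return sum(x[i] * A[i][j] * y[j]
               for i in range(len(x)) for j in range(len(y)))

def reduced_F1(A, xv, yv):
    """Eq. (3a): F1 in the free coordinates, via the tilde-coefficients."""
    n, m = len(A), len(A[0])
    val = A[n - 1][m - 1]                                  # constant term a_{n,m}
    for i in range(n - 1):
        val += (A[i][m - 1] - A[n - 1][m - 1]) * xv[i]     # a~_i^x x_i
    for j in range(m - 1):
        val += (A[n - 1][j] - A[n - 1][m - 1]) * yv[j]     # a~_j^y y_j
    for i in range(n - 1):
        for j in range(m - 1):                             # a~_{i,j} x_i y_j
            val += (A[i][j] - A[n - 1][j] - A[i][m - 1] + A[n - 1][m - 1]) \
                   * xv[i] * yv[j]
    return val
```

For a 3 × 2 example matrix and any admissible point, the two evaluations agree up to rounding, confirming the algebra behind (3a) (and, symmetrically, (3b)).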

Let us denote the projection of point \(L = \left( {x_{1}^{L} , \ldots ,x_{n}^{L} ,y_{1}^{L} , \ldots ,y_{m}^{L} } \right) \in S\) onto the space of the n + m − 2 free coordinates with \(L_{v} : = \left( {x_{1}^{L} , \ldots ,x_{n - 1}^{L} ,y_{1}^{L} , \ldots ,y_{m - 1}^{L} } \right)\). Furthermore, \(F_{1} \left( {L_{v} } \right): = F_{1} \left( L \right)\), \(F_{2} \left( {L_{v} } \right): = F_{2} \left( L \right)\) expressing that the payoffs depend directly only on the free strategy coordinates. Accordingly, we can consider \(x_{v} : = \left( {x_{1} , \ldots ,x_{n - 1} } \right) \in {\mathbf{R}}^{n - 1}\) and \(y_{v} : = \left( {y_{1} , \ldots ,y_{m - 1} } \right) \in {\mathbf{R}}^{m - 1}\) (the projections of x and y) as the strategies of Players 1 and 2, respectively. (Equivalently to considering x and y themselves as strategies.) The corresponding projection of S (see Sect. 3) is

$$\begin{aligned} S_{v} &:= \big\{ \left( {x_{1} , \ldots ,x_{n - 1} ,y_{1} , \ldots ,y_{m - 1} } \right) \in {\mathbf{R}}^{n + m - 2} |0\\ &\le x_{1} , \ldots ,x_{n - 1} ,y_{1} , \ldots ,y_{m - 1} ;x_{1} + \cdots + x_{n - 1} \le 1;y_{1} + \cdots + y_{m - 1} \le 1 \big\}. \end{aligned}$$

Figure 1 shows the general scheme with respect to Case C.

Fig. 1

General scheme for Case C a before and b after redefining \(L_{v}\) as \(L_{\phi }\)

According to Case C, there is a point \(M_{v} \in S_{v}\) and another point \(N_{v} \in S_{v}\) such that both payoffs are higher at \(M_{v}\) than at \(L_{v}\), while at \(N_{v}\) one of the payoffs is higher and the other is lower than at \(L_{v}\). Without loss of generality, assume that \(F_{1}\) is lower at \(N_{v}\) than at \(L_{v}\). That is,

$$ F_{1} \left( {M_{v} } \right) > F_{1} \left( {L_{v} } \right),\;F_{2} \left( {M_{v} } \right) > F_{2} \left( {L_{v} } \right),\;F_{1} \left( {N_{v} } \right) < F_{1} \left( {L_{v} } \right),\;F_{2} \left( {N_{v} } \right) > F_{2} \left( {L_{v} } \right). $$

Fix an arbitrary \(A_{v} \subseteq S_{v}\) such that the set of interior points of \(A_{v}\) is a connected, (n + m − 2)-dimensional, non-empty, open set, \(M_{v} \in A_{v}\), \(L_{v} \in A_{v}\), \(N_{v} \in A_{v}\). Let

\(B_{v} : = \{(x_{v} ,y_{v} ): = ( x_{1} , \ldots ,x_{n - 1} ,y_{1} , \ldots ,y_{m - 1}) \in A_{v} |F_{1} (x_{1} , \ldots ,x_{n - 1} ,y_{1} , \ldots ,y_{m - 1} ) = F_{1} \left( {L_{v} } \right)\}\). Then \(L_{v} \in B_{v}\).

For any (continuous) path \(\phi \subseteq A_{v}\) leading from \(M_{v}\) to \(N_{v}\), there is a point \(L_{\phi} \in \phi\) for which \(F_{1} \left( {L_{\phi } } \right) = F_{1} \left( {L_{v} } \right)\), that is, \(L_{\phi } \in B_{v}\) (see Case a in Fig. 1). Moreover, let \(L_{\phi }\) be the first point along the path (moving from \(M_{v}\) to \(N_{v}\)) at which either \(F_{1} = F_{1} \left( {L_{v} } \right)\) or \(F_{2} = F_{2} \left( {L_{v} } \right)\). At this point, both \(F_{1} = F_{1} \left( {L_{v} } \right)\) and \(F_{2} = F_{2} \left( {L_{v} } \right)\) must hold: otherwise, in case of \(F_{1} \left( {L_{\phi } } \right) = F_{1} \left( {L_{v} } \right)\) we would have \(F_{2} \left( {L_{\phi } } \right) > F_{2} \left( {L_{v} } \right)\), and in case of \(F_{2} \left( {L_{\phi } } \right) = F_{2} \left( {L_{v} } \right)\) we would have \(F_{1} \left( {L_{\phi } } \right) > F_{1} \left( {L_{v} } \right)\); that is, not Case C but Case B would hold. From now on, let us redefine \(L_{v}\) as such an \(L_{\phi }\) that lies strictly in the interior of \(B_{v}\): \(L_{v} : = L_{\phi }\) (see Case b in Fig. 1). (The original \(L_{v}\) may be on the boundary of \(B_{v}\).) This is allowed because \(L_{v}\) and \(L_{\phi }\) are equivalent in view of Pareto optimality, as the payoff values are the same at these points.

Assume that there is a subset \(C_{v} \subseteq B_{v}\) such that \(C_{v}\) is a connected, non-empty, open set of the (n + m − 2)-dimensional space. Then \(F_{1}\) is constant on \(C_{v}\), so

$$ \begin{aligned} gradF_{1} \left( {x_{1} , \ldots ,x_{n - 1} ,y_{1} , \ldots ,y_{m - 1} } \right) = \left( {\tilde{a}_{1,1} y_{1} + \tilde{a}_{1,2} y_{2} + \cdots + \tilde{a}_{1,m - 1} y_{m - 1} + \tilde{a}_{1}^{x} ,} \right.\,\tilde{a}_{2,1} y_{1} + \tilde{a}_{2,2} y_{2} + \cdots + \hfill \\ \tilde{a}_{2,m - 1} y_{m - 1} + \tilde{a}_{2}^{x} , \ldots ,\,\tilde{a}_{n - 1,1} y_{1} + \tilde{a}_{n - 1,2} y_{2} + \cdots + \tilde{a}_{n - 1,m - 1} y_{m - 1} + \tilde{a}_{n - 1}^{x} ,\,\tilde{a}_{1,1} x_{1} + \tilde{a}_{2,1} x_{2} + \cdots + \tilde{a}_{n - 1,1} x_{n - 1} + \tilde{a}_{1}^{y} , \hfill \\ \left. {\tilde{a}_{1,2} x_{1} + \tilde{a}_{2,2} x_{2} + \cdots + \tilde{a}_{n - 1,2} x_{n - 1} + \tilde{a}_{2}^{y} , \ldots ,\,\tilde{a}_{1,m - 1} x_{1} + \tilde{a}_{2,m - 1} x_{2} + \cdots + \tilde{a}_{n - 1,m - 1} x_{n - 1} + \tilde{a}_{m - 1}^{y} } \right) = 0 \hfill \\ \end{aligned} $$
(4)

at every point in \(C_{v}\). This is possible only under the following condition:

\(\tilde{a}_{1,1} = \ldots = \tilde{a}_{1,m - 1} = \tilde{a}_{1}^{x} = \tilde{a}_{2,1} = \ldots = \tilde{a}_{2,m - 1} = \tilde{a}_{2}^{x} = \ldots = \tilde{a}_{n - 1,1} = \ldots = \tilde{a}_{n - 1,m - 1} = \tilde{a}_{n - 1}^{x} = \tilde{a}_{1}^{y} = \ldots = \tilde{a}_{m - 1}^{y} = 0.\)

In this case, \(F_{1}\) is constant, Case C cannot hold, only Case B may occur if \(L_{v}\) (or L) is not Pareto optimal. Therefore, we can assume that \(B_{v}\) has no subset with the properties of \(C_{v}\). That is, \(B_{v}\) is an (n + m − 3)-dimensional manifold (hypersurface) in the (n + m − 2)-dimensional space. Informally speaking, it means that \(B_{v}\) is not “thick” anywhere.
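The gradient formula (4) can be sanity-checked numerically, since the reduced \(F_{1}\) of (3a) is bilinear in the free coordinates and central finite differences are then essentially exact. A Python sketch (the factory function name is our illustrative choice; any small matrix may be used):

```python
def make_reduced_F1(A):
    """Build the reduced payoff (3a) and its analytic gradient (4)
    from the tilde-coefficients of matrix A."""
    n, m = len(A), len(A[0])
    at = [[A[i][j] - A[n - 1][j] - A[i][m - 1] + A[n - 1][m - 1]
           for j in range(m - 1)] for i in range(n - 1)]      # a~_{i,j}
    ax = [A[i][m - 1] - A[n - 1][m - 1] for i in range(n - 1)]  # a~_i^x
    ay = [A[n - 1][j] - A[n - 1][m - 1] for j in range(m - 1)]  # a~_j^y

    def F1(xv, yv):  # Eq. (3a)
        return (A[n - 1][m - 1]
                + sum(ax[i] * xv[i] for i in range(n - 1))
                + sum(ay[j] * yv[j] for j in range(m - 1))
                + sum(at[i][j] * xv[i] * yv[j]
                      for i in range(n - 1) for j in range(m - 1)))

    def grad(xv, yv):  # Eq. (4): d/dx_i, then d/dy_j components
        gx = [ax[i] + sum(at[i][j] * yv[j] for j in range(m - 1))
              for i in range(n - 1)]
        gy = [ay[j] + sum(at[i][j] * xv[i] for i in range(n - 1))
              for j in range(m - 1)]
        return gx + gy

    return F1, grad
```

Comparing `grad` with central differences of `F1` at an interior point reproduces (4) component by component.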

Similarly to \(B_{v}\) with respect to \(F_{1}\), another subset \(D_{v}\) of \(A_{v}\) can be defined for \(F_{2}\) as follows:

\(D_{v} : = \left\{ {\left( {x_{1} , \ldots ,x_{n - 1} ,y_{1} , \ldots ,y_{m - 1} } \right) \in A_{v} \left| {F_{2} \left( {x_{1} , \ldots ,x_{n - 1} ,y_{1} , \ldots ,y_{m - 1} } \right) = F_{2} \left( {L_{v} } \right)} \right.} \right\}\). If

\(\tilde{b}_{1,1} = \ldots = \tilde{b}_{1,m - 1} = \tilde{b}_{1}^{x} = \tilde{b}_{2,1} = \ldots = \tilde{b}_{2,m - 1} = \tilde{b}_{2}^{x} = \ldots = \tilde{b}_{n - 1,1} = \ldots = \tilde{b}_{n - 1,m - 1} = \tilde{b}_{n - 1}^{x} = \tilde{b}_{1}^{y} = \ldots = \tilde{b}_{m - 1}^{y} = 0,\)

then \(F_{2}\) is constant, Case C cannot hold, only Case B may occur if \(L_{v}\) is not Pareto optimal. Consequently, similarly as above for \(B_{v}\), we can assume that \(D_{v}\) is not “thick” anywhere. (\(D_{v}\) is an (n + m − 3)-dimensional manifold in the (n + m − 2)-dimensional space.)

Given the above, it can be assumed that \(B_{v} = D_{v}\) around \(L_{v}\). Otherwise, there would be a non-empty (n + m − 3)-dimensional subset in the neighbourhood of \(L_{v}\) on which \(F_{1} > F_{1} \left( {L_{v} } \right)\) while \(F_{2} = F_{2} \left( {L_{v} } \right)\), or \(F_{2} > F_{2} \left( {L_{v} } \right)\) while \(F_{1} = F_{1} \left( {L_{v} } \right)\), so Case C could not hold (but Case B would). Accordingly, there is a sufficiently small ((n + m − 2)-dimensional) neighbourhood \(G_{v} \subseteq A_{v}\) around \(L_{v}\) in which the open, (n + m − 3)-dimensional manifold \(E_{v} : = B_{v} \cap G_{v} = D_{v} \cap G_{v}\) can be defined (as a part of the level set of both payoffs). Furthermore, for every point in \(G_{v} \backslash E_{v}\), \(F_{1} \ne F_{1} \left( {L_{v} } \right)\) and \(F_{2} \ne F_{2} \left( {L_{v} } \right)\) hold.

There is at least one coordinate axis (from the n + m − 2 ones) that is not parallel to the ((n + m − 3)-dimensional) tangential hyperplane of \(E_{v}\) (or not perpendicular to the normal vector of \(E_{v}\)) at point \(L_{v}\). Without loss of generality, assume that this axis is that of the coordinate \(x_{1}\), so the corresponding unit vector is \(u_{{x_{1} }} : = \left( {1,0, \ldots ,0} \right) \in {\mathbf{R}}^{m + n - 2}\). See Fig. 2 for the scheme of this situation.

Fig. 2

Scheme of \(E_{v}\) with the used coordinate axes

According to Fig. 2, we can move away from \(L_{v}\) and \(E_{v}\) along the direction of the coordinate \(x_{1}\) while changing from \(x_{1}^{L}\) to \(x_{1}^{L} + \Delta x_{1}\) (or from \(L_{v}\) to \(L_{1}\)). Then we can move back to \(E_{v}\) along the direction of \(y_{j}\) (\(j = 1, \ldots ,m - 1\)) while changing from \(y_{j}^{L}\) to \(y_{j}^{L} + \Delta y_{j}\) (or from \(L_{1}\) to \(L_{2}\)) if the coordinate axis of \(y_{j}\) is not perpendicular to the normal vector of \(E_{v}\) at \(L_{v}\) (see Case a in Fig. 2). Alternatively, we can move back to \(E_{v}\) along the direction of \(x_{i}\) (\(i = 2, \ldots ,n - 1\)) while changing from \(x_{i}^{L}\) to \(x_{i}^{L} + \Delta x_{i}\) (or from \(L_{1}\) to \(L_{2}\)) if the coordinate axis of \(x_{i}\) is not perpendicular to the normal vector of \(E_{v}\) at \(L_{v}\) (see Case b in Fig. 2).

If the axis of \(y_{j}\) (or \(x_{i}\)) is perpendicular to the normal vector of \(E_{v}\) at \(L_{v}\), then \(L_{v}\) should be redefined as another, appropriate point of \(E_{v}\) (if needed). (This is allowed because all points of \(E_{v}\) are equivalent in view of Pareto optimality, as the payoff values are uniform on \(E_{v}\).) For the case when this is not possible, that is, when the axis of \(y_{j}\) (or \(x_{i}\)) is perpendicular to the normal vector of \(E_{v}\) everywhere, see Part 2 within Case a (and Case b) below.

Case a) Moving back to or on \(E_{v}\) along \(y_{j}\).

Part 1. Coordinate axis of \(y_{j}\) not perpendicular to the normal vector of \(E_{v}\) at \(L_{v}\).

The move consists of two steps. In Step 1 below, Player 1’s payoff is changed from \(F_{1} \left( {L_{v} } \right)\) to \(F_{1} \left( {L_{1} } \right) = F_{1} \left( {L_{v} } \right) + \Delta F_{1} \ne F_{1} \left( {L_{v} } \right)\), and Player 2’s payoff is changed from \(F_{2} \left( {L_{v} } \right)\) to \(F_{2} \left( {L_{1} } \right) = F_{2} \left( {L_{v} } \right) + \Delta F_{2} \ne F_{2} \left( {L_{v} } \right)\). In Step 2 below, Player 1’s payoff is changed (back) from \(F_{1} \left( {L_{1} } \right)\) to \(F_{1} \left( {L_{2} } \right) = F_{1} \left( {L_{v} } \right)\), and Player 2’s payoff is changed (back) from \(F_{2} \left( {L_{1} } \right)\) to \(F_{2} \left( {L_{2} } \right) = F_{2} \left( {L_{v} } \right)\). See also Case a in Fig. 2 for this part.

Step 1

We change from \(L_{v} = \left( {x_{1}^{L} ,x_{2}^{L} , \ldots ,x_{n - 1}^{L} ,y_{1}^{L} , \ldots ,y_{m - 1}^{L} } \right)\) to \(L_{1} = \left( {x_{1}^{L} + \Delta x_{1} ,x_{2}^{L} , \ldots ,x_{n - 1}^{L} ,y_{1}^{L} , \ldots ,y_{m - 1}^{L} } \right)\), from \(F_{1} \left( {L_{v} } \right)\) to

$$ F_{1} \left( {L_{1} } \right) = F_{1} \left( {L_{v} } \right) + \left( {\tilde{a}_{1,1} y_{1}^{L} + \tilde{a}_{1,2} y_{2}^{L} + \cdots + \tilde{a}_{1,m - 1} y_{m - 1}^{L} + \tilde{a}_{1}^{x} } \right)\Delta x_{1} \ne F_{1} \left( {L_{v} } \right) $$
(5)

and from \(F_{2} \left( {L_{v} } \right)\) to

$$ F_{2} \left( {L_{1} } \right) = F_{2} \left( {L_{v} } \right) + \left( {\tilde{b}_{1,1} y_{1}^{L} + \tilde{b}_{1,2} y_{2}^{L} + \cdots + \tilde{b}_{1,m - 1} y_{m - 1}^{L} + \tilde{b}_{1}^{x} } \right)\Delta x_{1} \ne F_{2} \left( {L_{v} } \right). $$
(6)

See also (3a), (3b) for the above substitutions.

Step 2

We change from \(L_{1} = \left( {x_{1}^{L} + \Delta x_{1} ,x_{2}^{L} , \ldots ,x_{n - 1}^{L} ,y_{1}^{L} , \ldots ,y_{m - 1}^{L} } \right)\) to \(L_{2} = \left( {x_{1}^{L} + \Delta x_{1} ,x_{2}^{L} , \ldots ,x_{n - 1}^{L} ,y_{1}^{L} , \ldots ,y_{j - 1}^{L} ,y_{j}^{L} + \Delta y_{j} ,y_{j + 1}^{L} , \ldots ,y_{m - 1}^{L} } \right)\), from \(F_{1} \left( {L_{1} } \right)\) to

$$ \begin{aligned} F_{1} \left( {L_{2} } \right) = F_{1} \left( {L_{v} } \right) + \left( {\tilde{a}_{1,1} y_{1}^{L} + \tilde{a}_{1,2} y_{2}^{L} + \cdots + \tilde{a}_{1,m - 1} y_{m - 1}^{L} + \tilde{a}_{1}^{x} } \right)\Delta x_{1} + \hfill \\ \left( {\tilde{a}_{1,j} \left( {x_{1}^{L} + \Delta x_{1} } \right) + \tilde{a}_{2,j} x_{2}^{L} + \cdots + \tilde{a}_{n - 1,j} x_{n - 1}^{L} + \tilde{a}_{j}^{y} } \right)\Delta y_{j} = F_{1} \left( {L_{v} } \right) \hfill \\ \end{aligned} $$
(7)

and from \(F_{2} \left( {L_{1} } \right)\) to

$$ \begin{aligned} F_{2} \left( {L_{2} } \right) = F_{2} \left( {L_{v} } \right) + \left( {\tilde{b}_{1,1} y_{1}^{L} + \tilde{b}_{1,2} y_{2}^{L} + \cdots + \tilde{b}_{1,m - 1} y_{m - 1}^{L} + \tilde{b}_{1}^{x} } \right)\Delta x_{1} + \hfill \\ \left( {\tilde{b}_{1,j} \left( {x_{1}^{L} + \Delta x_{1} } \right) + \tilde{b}_{2,j} x_{2}^{L} + \cdots + \tilde{b}_{n - 1,j} x_{n - 1}^{L} + \tilde{b}_{j}^{y} } \right)\Delta y_{j} = F_{2} \left( {L_{v} } \right). \hfill \\ \end{aligned} $$
(8)

See also (3a), (3b) for the above substitutions.

It is concluded from (7) that

$$ \left( {\tilde{a}_{1,1} y_{1}^{L} + \tilde{a}_{1,2} y_{2}^{L} + \cdots + \tilde{a}_{1,m - 1} y_{m - 1}^{L} + \tilde{a}_{1}^{x} } \right)\Delta x_{1} = - \left( {\tilde{a}_{1,j} \left( {x_{1}^{L} + \Delta x_{1} } \right) + \tilde{a}_{2,j} x_{2}^{L} + \cdots + \tilde{a}_{n - 1,j} x_{n - 1}^{L} + \tilde{a}_{j}^{y} } \right)\Delta y_{j} $$
(9)

and from (8) that

$$ \left( {\tilde{b}_{1,1} y_{1}^{L} + \tilde{b}_{1,2} y_{2}^{L} + \cdots + \tilde{b}_{1,m - 1} y_{m - 1}^{L} + \tilde{b}_{1}^{x} } \right)\Delta x_{1} = - \left( {\tilde{b}_{1,j} \left( {x_{1}^{L} + \Delta x_{1} } \right) + \tilde{b}_{2,j} x_{2}^{L} + \cdots + \tilde{b}_{n - 1,j} x_{n - 1}^{L} + \tilde{b}_{j}^{y} } \right)\Delta y_{j} . $$
(10)
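Before proceeding, the bookkeeping of Eqs. (5), (7) and (9) can be illustrated numerically. The following is only a hedged sketch: all coefficient values, the point \(L_v\), and the variable names are invented for the illustration (n = 3, m = 3, j = 1), not taken from the paper.

```python
# Hedged sketch: verify the Step 1 / Step 2 payoff bookkeeping of Eqs. (5)-(9)
# for one hypothetical coefficient set (n = 3, m = 3, j = 1). All numbers are
# illustrative assumptions, not data from the paper.

a_1 = [0.4, -0.2]          # a~_{1,1}, a~_{1,2}  (row 1, columns 1..m-1)
a_x1 = 0.3                 # a~_1^x
a_col_j = [0.5, -0.1]      # a~_{1,j}, a~_{2,j}  (column j, rows 1..n-1)
a_yj = 0.2                 # a~_j^y
xL = [0.3, 0.4]            # x_1^L, x_2^L
yL = [0.2, 0.5]            # y_1^L, y_2^L

dx1 = 0.05

# Step 1: the change of F1 along x_1, i.e. the bracket of Eq. (5) times dx1.
coef_x = a_1[0] * yL[0] + a_1[1] * yL[1] + a_x1
dF1_step1 = coef_x * dx1
assert dF1_step1 != 0.0    # Step 1 really moves F1 off F1(L_v)

# Step 2: choose dy_j so that F1 returns to F1(L_v), as in Eq. (7).
coef_y = a_col_j[0] * (xL[0] + dx1) + a_col_j[1] * xL[1] + a_yj
dy_j = -dF1_step1 / coef_y

# Eq. (9): the two increments cancel exactly.
assert abs(coef_x * dx1 + coef_y * dy_j) < 1e-12
```

The choice of \(\Delta y_j\) above is exactly the cancellation that Eq. (9) expresses: the \(\Delta y_j\) term undoes the \(\Delta x_1\) term of Step 1.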

From (5) and (6) it follows that

$$ 0 \ne \tilde{b}_{1,1} y_{1}^{L} + \tilde{b}_{1,2} y_{2}^{L} + \cdots + \tilde{b}_{1,m - 1} y_{m - 1}^{L} + \tilde{b}_{1}^{x} = K\left( {\tilde{a}_{1,1} y_{1}^{L} + \tilde{a}_{1,2} y_{2}^{L} + \cdots + \tilde{a}_{1,m - 1} y_{m - 1}^{L} + \tilde{a}_{1}^{x} } \right) $$
(11)

for some \(K \in {\mathbf{R}}\backslash \left\{ 0 \right\}\). With the same number K, (11) implies for the right hand sides of (9) and (10) that

$$ \tilde{b}_{1,j} \left( {x_{1}^{L} + \Delta x_{1} } \right) + \tilde{b}_{2,j} x_{2}^{L} + \cdots + \tilde{b}_{n - 1,j} x_{n - 1}^{L} + \tilde{b}_{j}^{y} = K\left( {\tilde{a}_{1,j} \left( {x_{1}^{L} + \Delta x_{1} } \right) + \tilde{a}_{2,j} x_{2}^{L} + \cdots + \tilde{a}_{n - 1,j} x_{n - 1}^{L} + \tilde{a}_{j}^{y} } \right), $$

that is,

$$ \left( {\tilde{b}_{1,j} - K\tilde{a}_{1,j} } \right)\left( {x_{1}^{L} + \Delta x_{1} } \right) + \left( {\tilde{b}_{2,j} - K\tilde{a}_{2,j} } \right)x_{2}^{L} + \cdots + \left( {\tilde{b}_{n - 1,j} - K\tilde{a}_{n - 1,j} } \right)x_{n - 1}^{L} + \tilde{b}_{j}^{y} - K\tilde{a}_{j}^{y} = 0. $$
(12)

\(\tilde{b}_{1,j} - K\tilde{a}_{1,j} = 0\) results from (12) since \(\Delta x_{1}\) is an arbitrary real number (within some proper limits). That is, \(\tilde{b}_{1,j} = K\tilde{a}_{1,j}\).

From the properties of \(F_{1}\) and \(F_{2}\) (which are continuous, even smooth functions) we obtain that \(E_{v}\) is a smooth (n + m − 3)-dimensional hypersurface. In addition, no normal vector of \(E_{v}\) (e.g. \(gradF_{1}\) or \(gradF_{2}\)) is perpendicular to the coordinate axis of \(x_{1}\) at \(L_{v}\). (There are n + m − 2 coordinate axes, which are perpendicular to one another.) These facts imply that \(L_{v}\) can be redefined arbitrarily in a proper neighbourhood of itself in such a way that only the coordinate \(x_{2}\) (from \(x_{2}^{L}\)) and possibly \(x_{1}\) (from \(x_{1}^{L}\)) are changed in the new \(L_{v}\) point (the other coordinates remain unchanged).

For this (new) \(L_{v}\), the above process can be repeated with the same number K (since (11) remains the same, as the coordinate values \(y_{1}^{L}\), …, \(y_{m - 1}^{L}\) are not changed in the new \(L_{v}\)). It means that \(\tilde{b}_{2,j} - K\tilde{a}_{2,j} = 0\), that is, \(\tilde{b}_{2,j} = K\tilde{a}_{2,j}\) should hold in (12). Similarly, it can be derived that \(\tilde{b}_{3,j} = K\tilde{a}_{3,j}\) (if \(n \ge 4\)), …, \(\tilde{b}_{n - 1,j} = K\tilde{a}_{n - 1,j}\) and \(\tilde{b}_{j}^{y} = K\tilde{a}_{j}^{y}\). In sum, similar considerations yield that

$$ \tilde{b}_{1,j} = K\tilde{a}_{1,j} ,\;\tilde{b}_{2,j} = K\tilde{a}_{2,j} , \, \ldots ,\;\tilde{b}_{n - 1,j} = K\tilde{a}_{n - 1,j} ,\;\tilde{b}_{j}^{y} = K\tilde{a}_{j}^{y} \left( {K \in {\mathbf{R}}\backslash \left\{ 0 \right\},\;j = 1, \ldots ,m - 1} \right) $$
(13)

if the coordinate axis of \(y_{j}\) is not perpendicular (everywhere) to the normal vector of \(E_{v}\).

Part 2. Coordinate axis of \(y_{j}\) perpendicular to the normal vector of \(E_{v}\).

Assume that the coordinate axis of \(y_{j}\) is perpendicular to the normal vector of \(E_{v}\) everywhere. Then

$$ \frac{{\partial F_{1} }}{{\partial y_{j} }} = \tilde{a}_{1,j} x_{1} + \tilde{a}_{2,j} x_{2} + \cdots + \tilde{a}_{n - 1,j} x_{n - 1} + \tilde{a}_{j}^{y} = \frac{{\partial F_{2} }}{{\partial y_{j} }} = \tilde{b}_{1,j} x_{1} + \tilde{b}_{2,j} x_{2} + \cdots + \tilde{b}_{n - 1,j} x_{n - 1} + \tilde{b}_{j}^{y} = 0 $$

for all \(x_{1}\), …, \(x_{n - 1}\) in \(E_{v}\), from which \(\tilde{a}_{1,j}\) = \(\tilde{b}_{1,j}\) = \(\tilde{a}_{2,j}\) = \(\tilde{b}_{2,j}\) = … = \(\tilde{a}_{n - 1,j}\) = \(\tilde{b}_{n - 1,j}\) = \(\tilde{a}_{j}^{y}\) = \(\tilde{b}_{j}^{y}\) = 0. This implies that (13) is fulfilled here with the same number \(K \in {\mathbf{R}}\backslash \left\{ 0 \right\}\) as above.

Summarizing Parts 1 and 2,

$$ \tilde{b}_{1,j} = K\tilde{a}_{1,j} ,\;\tilde{b}_{2,j} = K\tilde{a}_{2,j} , \, \ldots ,\;\tilde{b}_{n - 1,j} = K\tilde{a}_{n - 1,j} ,\;\tilde{b}_{j}^{y} = K\tilde{a}_{j}^{y} \;\left( {K \in {\mathbf{R}}\backslash \left\{ 0 \right\},\;j = 1, \ldots ,m - 1} \right). $$
(14)

Case b) Moving back to or on \(E_{v}\) along \(x_{i}\).

Part 1. Coordinate axis of \(x_{i}\) not perpendicular to the normal vector of \(E_{v}\) at \(L_{v}\).

The move consists of two steps. In Step 1 below, Player 1’s payoff is changed from \(F_{1} \left( {L_{v} } \right)\) to \(F_{1} \left( {L_{1} } \right) = F_{1} \left( {L_{v} } \right) + \Delta F_{1} \ne F_{1} \left( {L_{v} } \right)\), and Player 2’s payoff is changed from \(F_{2} \left( {L_{v} } \right)\) to \(F_{2} \left( {L_{1} } \right) = F_{2} \left( {L_{v} } \right) + \Delta F_{2} \ne F_{2} \left( {L_{v} } \right)\). In Step 2 below, Player 1’s payoff is changed (back) from \(F_{1} \left( {L_{1} } \right)\) to \(F_{1} \left( {L_{2} } \right) = F_{1} \left( {L_{v} } \right)\), and Player 2’s payoff is changed (back) from \(F_{2} \left( {L_{1} } \right)\) to \(F_{2} \left( {L_{2} } \right) = F_{2} \left( {L_{v} } \right)\). See also Case b in Fig. 2 for this part.

Step 1

This step is the same as Step 1 in Case a above, so it results in the same Eqs. (5) and (6) here.

Step 2

We change from \(L_{1} = \left( {x_{1}^{L} + \Delta x_{1} ,x_{2}^{L} , \ldots ,x_{n - 1}^{L} ,y_{1}^{L} , \ldots ,y_{m - 1}^{L} } \right)\) to \(L_{2} = \left( {x_{1}^{L} + \Delta x_{1} ,x_{2}^{L} , \ldots ,x_{i - 1}^{L} ,x_{i}^{L} + \Delta x_{i} ,x_{i + 1}^{L} , \ldots ,x_{n - 1}^{L} ,y_{1}^{L} , \ldots ,y_{m - 1}^{L} } \right)\), from \(F_{1} \left( {L_{1} } \right)\) to

$$ \begin{aligned} F_{1} \left( {L_{2} } \right) = F_{1} \left( {L_{v} } \right) + \left( {\tilde{a}_{1,1} y_{1}^{L} + \tilde{a}_{1,2} y_{2}^{L} + \cdots + \tilde{a}_{1,m - 1} y_{m - 1}^{L} + \tilde{a}_{1}^{x} } \right)\Delta x_{1} + \hfill \\ \left( {\tilde{a}_{i,1} y_{1}^{L} + \tilde{a}_{i,2} y_{2}^{L} + \cdots + \tilde{a}_{i,m - 1} y_{m - 1}^{L} + \tilde{a}_{i}^{x} } \right)\Delta x_{i} = F_{1} \left( {L_{v} } \right) \hfill \\ \end{aligned} $$
(15)

and from \(F_{2} \left( {L_{1} } \right)\) to

$$ \begin{aligned} F_{2} \left( {L_{2} } \right) = F_{2} \left( {L_{v} } \right) + \left( {\tilde{b}_{1,1} y_{1}^{L} + \tilde{b}_{1,2} y_{2}^{L} + \cdots + \tilde{b}_{1,m - 1} y_{m - 1}^{L} + \tilde{b}_{1}^{x} } \right)\Delta x_{1} + \hfill \\ \left( {\tilde{b}_{i,1} y_{1}^{L} + \tilde{b}_{i,2} y_{2}^{L} + \cdots + \tilde{b}_{i,m - 1} y_{m - 1}^{L} + \tilde{b}_{i}^{x} } \right)\Delta x_{i} = F_{2} \left( {L_{v} } \right). \hfill \\ \end{aligned} $$
(16)

See also (3a), (3b) for the above substitutions.

It is concluded from (15) that

$$ \left( {\tilde{a}_{1,1} y_{1}^{L} + \tilde{a}_{1,2} y_{2}^{L} + \cdots + \tilde{a}_{1,m - 1} y_{m - 1}^{L} + \tilde{a}_{1}^{x} } \right)\Delta x_{1} = - \left( {\tilde{a}_{i,1} y_{1}^{L} + \tilde{a}_{i,2} y_{2}^{L} + \cdots + \tilde{a}_{i,m - 1} y_{m - 1}^{L} + \tilde{a}_{i}^{x} } \right)\Delta x_{i} $$
(17)

and from (16) that

$$ \left( {\tilde{b}_{1,1} y_{1}^{L} + \tilde{b}_{1,2} y_{2}^{L} + \cdots + \tilde{b}_{1,m - 1} y_{m - 1}^{L} + \tilde{b}_{1}^{x} } \right)\Delta x_{1} = - \left( {\tilde{b}_{i,1} y_{1}^{L} + \tilde{b}_{i,2} y_{2}^{L} + \cdots + \tilde{b}_{i,m - 1} y_{m - 1}^{L} + \tilde{b}_{i}^{x} } \right)\Delta x_{i} . $$
(18)

According to (11), as a conclusion of (5) and (6), for the right hand side of (17) and (18) it holds that

$$ \tilde{b}_{i,1} y_{1}^{L} + \tilde{b}_{i,2} y_{2}^{L} + \cdots + \tilde{b}_{i,m - 1} y_{m - 1}^{L} + \tilde{b}_{i}^{x} = K\left( {\tilde{a}_{i,1} y_{1}^{L} + \tilde{a}_{i,2} y_{2}^{L} + \cdots + \tilde{a}_{i,m - 1} y_{m - 1}^{L} + \tilde{a}_{i}^{x} } \right) $$
(19)

for the same \(K \in {\mathbf{R}}\backslash \left\{ 0 \right\}\) as in Part 1 of Case a above. Since (14) gives that \(\tilde{b}_{i,1} = K\tilde{a}_{i,1}\), \(\tilde{b}_{i,2} = K\tilde{a}_{i,2}\), …, \(\tilde{b}_{i,m - 1} = K\tilde{a}_{i,m - 1}\), it follows that \(\tilde{b}_{i}^{x} = K\tilde{a}_{i}^{x}\) for all \(i = 1, \ldots ,n - 1\) if the coordinate axis of \(x_{i}\) is not perpendicular (everywhere) to the normal vector of \(E_{v}\).

Part 2. Coordinate axis of \(x_{i}\) perpendicular to the normal vector of \(E_{v}\).

Assume that the coordinate axis of \(x_{i}\) is perpendicular to the normal vector of \(E_{v}\) everywhere. Then

$$ \frac{{\partial F_{1} }}{{\partial x_{i} }} = \tilde{a}_{i,1} y_{1} + \tilde{a}_{i,2} y_{2} + \cdots + \tilde{a}_{i,m - 1} y_{m - 1} + \tilde{a}_{i}^{x} = \frac{{\partial F_{2} }}{{\partial x_{i} }} = \tilde{b}_{i,1} y_{1} + \tilde{b}_{i,2} y_{2} + \cdots + \tilde{b}_{i,m - 1} y_{m - 1} + \tilde{b}_{i}^{x} = 0 $$

for all \(y_{1}\), …, \(y_{m - 1}\) in \(E_{v}\). Since (14) gives that \(\tilde{b}_{i,1} = K\tilde{a}_{i,1}\), \(\tilde{b}_{i,2} = K\tilde{a}_{i,2}\), …, \(\tilde{b}_{i,m - 1} = K\tilde{a}_{i,m - 1}\), it follows that \(\tilde{b}_{i}^{x} = K\tilde{a}_{i}^{x}\) (with the same number K).

Summarizing Parts 1 and 2,

$$ \tilde{b}_{i}^{x} = K\tilde{a}_{i}^{x} \left( {K \in {\mathbf{R}}\backslash \left\{ 0 \right\},\;i = 1, \ldots ,n - 1} \right). $$

As a consequence of Cases a and b above, assuming Case C, it holds that

$$ \tilde{b}_{i,j} = K\tilde{a}_{i,j} ,\;\tilde{b}_{i}^{x} = K\tilde{a}_{i}^{x} ,\;\tilde{b}_{j}^{y} = K\tilde{a}_{j}^{y} \;\left( {K \in {\mathbf{R}}\backslash \left\{ 0 \right\},\;i = 1, \ldots ,n - 1,\;j = 1, \ldots ,m - 1} \right). $$
(20)

(20), together with (3a) and (3b), means that \(F_{2} - b_{n,m} = K\left( {F_{1} - a_{n,m} } \right)\), that is,

$$ F_{2} = KF_{1} + b_{n,m} - Ka_{n,m} \;\left( {K \in {\mathbf{R}}\backslash \left\{ 0 \right\},\;b_{n,m} \in {\mathbf{R}},\;a_{n,m} \in {\mathbf{R}}} \right), $$

which means that Case C is not possible (with a positive number K, Case A should hold if \(L_{v}\) is not Pareto optimal).
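The affine dependence derived here can be checked numerically. The sketch below is hedged: the 2 × 2 payoff matrix, the values of K and c, and the helper name `payoff` are our own invented assumptions; the point is only that entrywise affinely dependent payoff matrices yield affinely dependent mixed payoffs everywhere, which is the degenerate situation excluded from Case C.

```python
# Hedged numeric check: if every payoff entry satisfies b_{i,j} = K*a_{i,j} + c
# (entrywise affine dependence, as in (20) with (3a), (3b)), then the mixed
# payoffs obey F2 = K*F1 + c at every strategy pair. All values illustrative.
K, c = 2.0, -1.5
A = [[3.0, 0.0], [1.0, 2.0]]
B = [[K * a + c for a in row] for row in A]

def payoff(M, x, y):
    """Bilinear mixed payoff x^T M y for a 2x2 bimatrix game."""
    return sum(x[i] * M[i][j] * y[j] for i in range(2) for j in range(2))

for x in ([1.0, 0.0], [0.3, 0.7], [0.5, 0.5]):
    for y in ([0.0, 1.0], [0.6, 0.4]):
        F1, F2 = payoff(A, x, y), payoff(B, x, y)
        # The constant c plays the role of b_{n,m} - K*a_{n,m} in the text.
        assert abs(F2 - (K * F1 + c)) < 1e-12
```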

Finally, it is concluded that Case C is excluded: only Case A or Case B is possible if \(L_{v}\) (or L) is not Pareto optimal. Theorem 3.1 has been proved.

3.2 Algorithm for checking Pareto optimality

Hereupon the Pareto optimality of a strategy point \(L: = \left( {x_{1}^{L} , \ldots ,x_{n}^{L} ,y_{1}^{L} , \ldots ,y_{m}^{L} } \right)\) in \(S\) can be checked with the possible algorithm below (see also Fig. 3).

Fig. 3
figure 3

Algorithm for checking Pareto optimality of a strategy point \(L\)

In Steps A1 and A2, mostly Case A is checked; in Steps B1 and B2, mostly Case B is checked.

Step A1

Check whether \(F_{1}\) (in itself) can be increased while moving from point L to some point M in S. Equivalently, check whether there exists an admissible point \(M: = \left( {x_{1}^{M} , \ldots ,x_{n}^{M} ,y_{1}^{M} , \ldots ,y_{m}^{M} } \right)\) for which \(F_{1} \left( M \right) > F_{1} \left( L \right)\) and (1a), (1b), (1c) hold.

If there is such a point M, check the relation between \(F_{2} \left( M \right)\) and \(F_{2} \left( L \right)\). If \(F_{2} \left( M \right) \ge F_{2} \left( L \right)\), then \(L\) is not Pareto optimal; the examination is finished, so stop the algorithm. If \(F_{2} \left( M \right) < F_{2} \left( L \right)\), go to Step A2. If there is no such point M, go to Step A2.

Step A2

Check whether \(F_{2}\) (in itself) can be increased while moving from point L to some point M in S. Equivalently, check whether there exists an admissible point \(M = \left( {x_{1}^{M} , \ldots ,x_{n}^{M} ,y_{1}^{M} , \ldots ,y_{m}^{M} } \right)\) for which \(F_{2} \left( M \right) > F_{2} \left( L \right)\) and (1a), (1b), (1c) hold.

If there is such a point M, check the relation between \(F_{1} \left( M \right)\) and \(F_{1} \left( L \right)\). If \(F_{1} \left( M \right) \ge F_{1} \left( L \right)\), then \(L\) is not Pareto optimal; the examination is finished, so stop the algorithm. If \(F_{1} \left( M \right) < F_{1} \left( L \right)\), go to Step B1. If there is no such point M, go to Step B1.

Step B1

Here it is examined whether \(F_{1}\) can be increased in such a way that \(F_{2}\) remains constant while moving from point L to some point \(M = \left( {x_{1}^{M} , \ldots ,x_{n}^{M} ,y_{1}^{M} , \ldots ,y_{m}^{M} } \right)\) in S. For this purpose, the following conditions should be checked: \(F_{1} \left( M \right) > F_{1} \left( L \right)\), (1a), (1b), (1c) and \(F_{2} \left( M \right) = F_{2} \left( L \right)\).

If there is such a point M, then L is not Pareto optimal; the examination is finished, so stop the algorithm. If there is no such point M, go to Step B2.

Step B2

Here it is examined whether \(F_{2}\) can be increased in such a way that \(F_{1}\) remains constant while moving from point L to some point \(M = \left( {x_{1}^{M} , \ldots ,x_{n}^{M} ,y_{1}^{M} , \ldots ,y_{m}^{M} } \right)\) in S. For this purpose, the following conditions should be checked: \(F_{2} \left( M \right) > F_{2} \left( L \right)\), (1a), (1b), (1c) and \(F_{1} \left( M \right) = F_{1} \left( L \right)\).

If there is such a point M, then L is not Pareto optimal. If there is no such point M, then L is Pareto optimal.

Of course, if L is not Pareto optimal, the above algorithm provides a point \(M = \left( {x_{1}^{M} , \ldots ,x_{n}^{M} ,y_{1}^{M} , \ldots ,y_{m}^{M} } \right)\) that weakly Pareto dominates L.
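The four steps above can be sketched in code. The following is a hedged brute-force illustration for a 2 × 2 game: the payoff matrices, the function names and the grid-search approach are ours, not the paper's; a grid can only refute Pareto optimality up to its resolution, so a negative search result is evidence, not a proof, of optimality.

```python
# Hedged sketch of Steps A1-B2 for a 2x2 bimatrix game via grid search over S.
# Payoff matrices and helper names are illustrative assumptions.
A = [[2.0, 0.0], [0.0, 1.0]]   # Player 1's payoffs
B = [[1.0, 0.0], [0.0, 2.0]]   # Player 2's payoffs

def F(M, x1, y1):
    """Mixed payoff for a 2x2 game with x = (x1, 1-x1), y = (y1, 1-y1)."""
    x, y = (x1, 1.0 - x1), (y1, 1.0 - y1)
    return sum(x[i] * M[i][j] * y[j] for i in range(2) for j in range(2))

def weakly_dominating_point(xL, yL, steps=101, eps=1e-9):
    """Return a grid point M that weakly Pareto dominates L, or None."""
    f1L, f2L = F(A, xL, yL), F(B, xL, yL)
    grid = [k / (steps - 1) for k in range(steps)]
    for x1 in grid:
        for y1 in grid:
            f1, f2 = F(A, x1, y1), F(B, x1, y1)
            # Steps A1/A2 find one payoff strictly better, the other not worse;
            # Steps B1/B2 find one strictly better, the other exactly equal.
            # Both patterns fall under the weak-dominance test below.
            if (f1 > f1L + eps and f2 >= f2L - eps) or \
               (f2 > f2L + eps and f1 >= f1L - eps):
                return (x1, y1)
    return None

assert weakly_dominating_point(1.0, 1.0) is None      # pure (1st, 1st) is Pareto optimal here
assert weakly_dominating_point(0.5, 0.5) is not None  # the mixed midpoint is dominated
```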

3.3 Bijection to be possibly used

In this section, a set \(\tilde{S} \subset {\mathbf{R}}^{nm}\) and a nonlinear (quadratic) transform \(T:S \to \tilde{S}\) are defined such that T is a bijection (one-to-one correspondence) between S (defined with (1a), (1b) and (1c)) and \(\tilde{S}\). We will see that for some bimatrix games, the Pareto optimality checking (for example with the above algorithm) is easier in \(\tilde{S}\) than in S.

Definition 3.1

\(\tilde{S}\) is defined as the set of the (admissible) points \(\left( {\tilde{s}_{1,1} , \ldots ,\tilde{s}_{n,1} , \ldots ,\tilde{s}_{1,m} , \ldots ,\tilde{s}_{n,m} } \right)\) for which the following hold:

$$ 0 \le \tilde{s}_{1,1} , \ldots ,\tilde{s}_{n,1} , \ldots ,\tilde{s}_{1,m} , \ldots ,\tilde{s}_{n,m} , $$
(21a)
$$ \tilde{s}_{1,1} + \cdots + \tilde{s}_{n,1} + \cdots + \tilde{s}_{1,m} + \cdots + \tilde{s}_{n,m} = 1, $$
(21b)

and

$$ \frac{{\tilde{s}_{k,i} }}{{\tilde{s}_{1,i} + \cdots + \tilde{s}_{n,i} }} = \frac{{\tilde{s}_{k,j} }}{{\tilde{s}_{1,j} + \cdots + \tilde{s}_{n,j} }} $$
$$ {\text{if}}\;\tilde{s}_{1,i} + \cdots + \tilde{s}_{n,i} \ne 0,\;\tilde{s}_{1,j} + \cdots + \tilde{s}_{n,j} \ne 0\left( {k = 1, \ldots ,n - 1;\;i = 1, \ldots ,m;\;j = 1, \ldots ,m;\;i \ne j} \right), $$

or equivalently,

$$ \tilde{s}_{k,i} \tilde{s}_{1,j} + \cdots + \tilde{s}_{k,i} \tilde{s}_{n,j} = \tilde{s}_{k,j} \tilde{s}_{1,i} + \cdots + \tilde{s}_{k,j} \tilde{s}_{n,i} \;\left( {k = 1, \ldots ,n - 1;\;i = 1, \ldots ,m;\;j = 1, \ldots ,m;\;i \ne j} \right). $$
(21c)

In fact, (21c) holds even if \(\tilde{s}_{1,i} + \cdots + \tilde{s}_{n,i} = 0\) (and/or \(\tilde{s}_{1,j} + \cdots + \tilde{s}_{n,j} = 0\)) since then \(\tilde{s}_{k,i} = 0\) (and/or \(\tilde{s}_{k,j} = 0\)), so both sides of (21c) vanish.

Definition 3.2

Define transform \(T:S \to \tilde{S}\) with the following assignment rule:

$$ \tilde{s}_{1,1} : = x_{1} y_{1} , \, \ldots ,\;\tilde{s}_{n,1} : = x_{n} y_{1} , \, \ldots ,\;\tilde{s}_{1,m} : = x_{1} y_{m} , \, \ldots ,\;\tilde{s}_{n,m} : = x_{n} y_{m} , $$
(22)

where \(\left( {x_{1} , \ldots ,x_{n} ,y_{1} , \ldots ,y_{m} } \right) \in S\).
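A minimal sketch of the assignment rule (22) for n = m = 2 (the helper name `T` and the concrete numbers are our own assumptions): the image point is the flattened outer product of x and y, and, as Lemma 3.1 asserts, it automatically satisfies (21a)–(21c).

```python
# Hedged sketch of the transform T of Eq. (22) for n = m = 2: the image of a
# strategy pair is the outer product of x and y, flattened column by column.
def T(x, y):
    """Map (x, y) in S to (s~_{1,1}, ..., s~_{n,1}, ..., s~_{1,m}, ..., s~_{n,m})."""
    return [x[k] * y[i] for i in range(len(y)) for k in range(len(x))]

x, y = [0.25, 0.75], [0.4, 0.6]
s = T(x, y)                      # (x1*y1, x2*y1, x1*y2, x2*y2)

assert all(v >= 0 for v in s)                      # (21a)
assert abs(sum(s) - 1.0) < 1e-12                   # (21b)
# (21c) with n = m = 2, k = 1, i = 1, j = 2:
assert abs(s[0] * (s[2] + s[3]) - s[2] * (s[0] + s[1])) < 1e-12
```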

Lemma 3.1

The transform T is a bijection between S and \(\tilde{S}\).

Proof

Clearly, every point in \(S\) has exactly one image in \(\tilde{S}\) according to \(T\), which automatically satisfies (21a), (21b) and (21c) relating to \(\tilde{S}\). In fact, for any \(k = 1, \ldots ,n\); \(i = 1, \ldots ,m\); \(j = 1, \ldots ,m\); \(i \ne j\),

$$ x_{k} = \frac{{x_{k} y_{i} }}{{x_{1} y_{i} + \cdots + x_{n} y_{i} }} = \frac{{\tilde{s}_{k,i} }}{{\tilde{s}_{1,i} + \cdots + \tilde{s}_{n,i} }} $$

if \(\tilde{s}_{1,i} + \cdots + \tilde{s}_{n,i} \ne 0\), and

$$ x_{k} = \frac{{x_{k} y_{j} }}{{x_{1} y_{j} + \cdots + x_{n} y_{j} }} = \frac{{\tilde{s}_{k,j} }}{{\tilde{s}_{1,j} + \cdots + \tilde{s}_{n,j} }} $$

if \(\tilde{s}_{1,j} + \cdots + \tilde{s}_{n,j} \ne 0\) since \(x_{1} + \cdots + x_{n} = 1\) in the denominators above.

The existence and uniqueness also hold the other way around (with respect to the inverse transform \(T^{ - 1}\)). In fact, let us take an arbitrary point \(\left( {\tilde{s}_{1,1} , \ldots ,\tilde{s}_{n,1} , \ldots ,\tilde{s}_{1,m} , \ldots ,\tilde{s}_{n,m} } \right)\) in \(\tilde{S}\) and find proper \(x_{1} , \ldots ,x_{n} ,y_{1} , \ldots ,y_{m}\) values for it in \(S\) according to Eqs. (22). If there exist such \(x_{1} , \ldots ,x_{n}\) values, then \(y_{1} = \left( {x_{1} + \cdots + x_{n} } \right)y_{1} = x_{1} y_{1} + \cdots + x_{n} y_{1} = \tilde{s}_{1,1} + \cdots + \tilde{s}_{n,1}\), …, \(y_{m} = \tilde{s}_{1,m} + \cdots + \tilde{s}_{n,m}\) exist and are unique (since \(x_{1} + \cdots + x_{n} = 1\) must hold); furthermore, \(0 \le y_{1} , \ldots ,y_{m}\) and \(y_{1} + \cdots + y_{m} = 1\) also hold.

It also follows from (22) that the following should be fulfilled for \(x_{1} , \ldots ,x_{n}\):

$$ x_{k} = \frac{{\tilde{s}_{k,i} }}{{\tilde{s}_{1,i} + \cdots + \tilde{s}_{n,i} }} $$

(\(k = 1, \ldots ,n\); \(i = 1, \ldots ,m\)) if \(\tilde{s}_{1,i} + \cdots + \tilde{s}_{n,i} \ne 0\), from which the values of \(x_{1} , \ldots ,x_{n}\) result unambiguously (without contradiction, because of Eq. (21c)) and satisfy the requirements \(0 \le x_{1} , \ldots ,x_{n}\) and \(x_{1} + \cdots + x_{n} = 1\) as well.

Thus \(T\) (\(T^{ - 1}\)) is a bijection between \(S\) and \(\tilde{S}\). Lemma 3.1 has been proved.
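The recovery argument of the proof can be mirrored in a short numeric sketch (n = m = 2; the helper names and the test point are our own assumptions): \(y_i\) is the i-th column sum of \(\tilde{s}\), and \(x_k\) is recovered from any column with a nonzero sum.

```python
# Hedged sketch of T and its inverse from the proof of Lemma 3.1 (n = m = 2).
def T(x, y):
    return [x[k] * y[i] for i in range(len(y)) for k in range(len(x))]

def T_inv(s, n=2, m=2):
    cols = [s[i * n:(i + 1) * n] for i in range(m)]
    y = [sum(c) for c in cols]                    # y_i = s~_{1,i} + ... + s~_{n,i}
    col = next(c for c in cols if sum(c) > 0)     # any column with nonzero sum
    x = [v / sum(col) for v in col]               # x_k = s~_{k,i} / (column sum)
    return x, y

x0, y0 = [0.2, 0.8], [0.7, 0.3]
x1, y1 = T_inv(T(x0, y0))
assert all(abs(a - b) < 1e-12 for a, b in zip(x0 + y0, x1 + y1))
```

Condition (21c) is what guarantees that the choice of the column in `T_inv` does not matter, as Remark 3.2 explains.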

Remark 3.2

If Eq. (21c) were not prescribed for \(\tilde{S}\), then not every point in \(\tilde{S}\) would have an image in \(S\) according to \(T^{ - 1}\) because of the possible contradiction

$$ x_{k} = \frac{{\tilde{s}_{k,i} }}{{\tilde{s}_{1,i} + \cdots + \tilde{s}_{n,i} }} \ne \frac{{\tilde{s}_{k,j} }}{{\tilde{s}_{1,j} + \cdots + \tilde{s}_{n,j} }} = x_{k} $$

for some \(k = 1, \ldots ,n\); \(i = 1, \ldots ,m\); \(j = 1, \ldots ,m\); \(i \ne j\). For example, if in a 2 × 2 bimatrix game (n = m = 2), \(\tilde{s}_{1,1} : = \tilde{s}_{2,2} : = 0.5\) were prescribed, then \(\tilde{s}_{1,2} = \tilde{s}_{2,1} = 0\) would hold (see (21b)), from which (21c) would not hold (since \(0.25 = \tilde{s}_{1,1} \left( {\tilde{s}_{1,2} + \tilde{s}_{2,2} } \right) \ne \tilde{s}_{1,2} \left( {\tilde{s}_{1,1} + \tilde{s}_{2,1} } \right) = 0\)), and the contradiction

$$ x_{1} = 1 = \frac{{\tilde{s}_{1,1} }}{{\tilde{s}_{1,1} + \tilde{s}_{2,1} }} \ne \frac{{\tilde{s}_{1,2} }}{{\tilde{s}_{1,2} + \tilde{s}_{2,2} }} = 0 = x_{1} $$

would result.

Lemma 3.1 implies that it is equivalent whether the payoffs of Players 1 and 2 are examined as functions of the points in \(S\) or in \(\tilde{S}\). In both cases, all possible strategy pairs/points (and nothing else) and all possible payoff pairs \(\left( {F_{1} \left( {x,y} \right),F_{2} \left( {x,y} \right)} \right)\) (and nothing else) are considered, which result according to Eqs. (23a) and (23b) in \(\tilde{S}\):

$$ F_{1} \left( {x,y} \right) = F_{1} \left( {\tilde{s}_{1,1} , \ldots ,\tilde{s}_{n,1} , \ldots ,\tilde{s}_{1,m} , \ldots ,\tilde{s}_{n,m} } \right) = a_{1,1} \tilde{s}_{1,1} + \cdots + a_{n,1} \tilde{s}_{n,1} + \cdots + a_{1,m} \tilde{s}_{1,m} + \cdots + a_{n,m} \tilde{s}_{n,m} , $$
(23a)
$$ F_{2} \left( {x,y} \right) = F_{2} \left( {\tilde{s}_{1,1} , \ldots ,\tilde{s}_{n,1} , \ldots ,\tilde{s}_{1,m} , \ldots ,\tilde{s}_{n,m} } \right) = b_{1,1} \tilde{s}_{1,1} + \cdots + b_{n,1} \tilde{s}_{n,1} + \cdots + b_{1,m} \tilde{s}_{1,m} + \cdots + b_{n,m} \tilde{s}_{n,m} , $$
(23b)

where \(\left( {x,y} \right) \in S\) and \(\left( {\tilde{s}_{1,1} , \ldots ,\tilde{s}_{n,1} , \ldots ,\tilde{s}_{1,m} , \ldots ,\tilde{s}_{n,m} } \right): = T\left( {x,y} \right)\).

In particular, a strategy point \(L = \left( {x_{1}^{L} , \ldots ,x_{n}^{L} ,y_{1}^{L} , \ldots ,y_{m}^{L} } \right)\) is Pareto optimal (in \(S\)) if and only if its image \(\tilde{L}: = T\left( L \right) = \left( {\tilde{s}_{1,1}^{L} , \ldots ,\tilde{s}_{n,1}^{L} , \ldots ,\tilde{s}_{1,m}^{L} , \ldots ,\tilde{s}_{n,m}^{L} } \right)\) is Pareto optimal (in \(\tilde{S}\)). Thus Pareto optimality can be studied in \(\tilde{S}\), where it is easier to do in 2 × 2 and in some other bimatrix games because \(F_{1}\) and \(F_{2}\) are linear functions of the variables \(\tilde{s}_{1,1} , \ldots ,\tilde{s}_{n,1} , \ldots ,\tilde{s}_{1,m} , \ldots ,\tilde{s}_{n,m}\) (see Eqs. (23a), (23b)). Accordingly, the level sets of \(F_{1}\) (and \(F_{2}\)) are contained in hyperplanes in \(\tilde{S}\) corresponding to the relations \(a_{1,1} \tilde{s}_{1,1} + \cdots + a_{n,1} \tilde{s}_{n,1} + \cdots + a_{1,m} \tilde{s}_{1,m} + \cdots + a_{n,m} \tilde{s}_{n,m} =\) constant (and \(b_{1,1} \tilde{s}_{1,1} + \cdots + b_{n,1} \tilde{s}_{n,1} + \cdots + b_{1,m} \tilde{s}_{1,m} + \cdots + b_{n,m} \tilde{s}_{n,m} =\) constant). Consequently, the directions of the “fastest increase” of \(F_{1}\) (and \(F_{2}\)) defined by \(gradF_{1}\) (and \(gradF_{2}\)) are the same vector at every point \(\tilde{L}\) in \(\tilde{S}\). In contrast, \(F_{1}\) and \(F_{2}\) are quadratic (nonlinear) functions in S.
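The linearity claimed by Eqs. (23a) and (23b) can be checked directly. The sketch below is hedged: the 2 × 2 payoff matrix and all helper names are our own illustrative assumptions; it confirms that the bilinear payoff in S equals the linear form over the transformed coordinates in \(\tilde{S}\).

```python
# Hedged check of Eqs. (23a)-(23b): the bilinear payoff x^T A y in S equals the
# linear form <vec(A), T(x, y)> in S~. The matrix values are illustrative.
A = [[3.0, -1.0], [0.0, 2.0]]

def T(x, y):
    return [x[k] * y[i] for i in range(len(y)) for k in range(len(x))]

def F_in_S(x, y):
    return sum(x[i] * A[i][j] * y[j] for i in range(2) for j in range(2))

def F_in_S_tilde(s):
    a = [A[k][i] for i in range(2) for k in range(2)]   # vec(A), column by column
    return sum(ak * sk for ak, sk in zip(a, s))

x, y = [0.6, 0.4], [0.1, 0.9]
assert abs(F_in_S(x, y) - F_in_S_tilde(T(x, y))) < 1e-12
```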

In 2 × 2 bimatrix games, which form a very important case in the field (Bárány et al., 1992; Gatti & Sandholm, 2014), the two quadratic payoff functions and the two linear constraints (according to (1b) and (1c)) in S are transformed into two linear payoffs and one linear and one quadratic constraint (according to (21b) and (21c)) in \(\tilde{S}\). That is, the number of governing relations is the same, but the number of quadratic relations is decreased by the transform T. So it is generally easier to check Pareto optimality in \(\tilde{S}\) than in S in the case of 2 × 2 bimatrix games. In larger bimatrix games, the usefulness of the transform T, which makes the payoffs linear, depends on the particular position of the examined point L. In certain cases, if L is on the boundary of S, the transform may be advantageous.

Remark 3.3

In the probability approach, \(\tilde{s}_{i,j}\) (\(i = 1, \ldots ,n\); \(j = 1, \ldots ,m\)) can be considered as the probability of the joint realization of the i-th pure strategy of Player 1 (with probability \(x_{i}\)) and the j-th pure strategy of Player 2 (with probability \(y_{j}\)) as independent events. In this regard, the transform T may seem intuitively evident. Nevertheless, the application of T appears to be novel in checking Pareto optimality in bimatrix games, even though it is very useful in some cases (e.g. in the 2 × 2 case).

3.4 Algorithm for checking Pareto optimality in \(\tilde{S}\)

Since checking Pareto optimality is more advantageous in \(\tilde{S}\) for some bimatrix games (and generally for 2 × 2 bimatrix games), it is worth discussing the corresponding version of the checking algorithm of Sect. 3.2 in detail.

First of all let us take the point \(\tilde{L} = T\left( L \right) = \left( {\tilde{s}_{1,1}^{L} , \ldots ,\tilde{s}_{n,1}^{L} , \ldots ,\tilde{s}_{1,m}^{L} , \ldots ,\tilde{s}_{n,m}^{L} } \right) \in \tilde{S}\) corresponding to the point \(L \in S\) to be checked. Furthermore, let us introduce the following notation for the mathematical expression of the steps of the algorithm: Let \(\tilde{v}: = \left( {\tilde{v}_{1,1} , \ldots ,\tilde{v}_{n,1} , \ldots ,\tilde{v}_{1,m} , \ldots ,\tilde{v}_{n,m} } \right)\) be the vector of displacement from point \(\tilde{L}\) to some point \(\tilde{M}\), that is, \(\tilde{M} = \tilde{L} + \tilde{v}\). Let \(\tilde{n}_{1} : = gradF_{1} = \left( {a_{1,1} , \ldots ,a_{n,1} , \ldots ,a_{1,m} , \ldots ,a_{n,m} } \right)\) be the (universal) normal vector of the hyperplanes \(a_{1,1} \tilde{s}_{1,1} + \cdots + a_{n,1} \tilde{s}_{n,1} + \cdots + a_{1,m} \tilde{s}_{1,m} + \cdots + a_{n,m} \tilde{s}_{n,m} =\) constant (in \(\tilde{S}\)). Similarly, let \(\tilde{n}_{2} : = gradF_{2} = \left( {b_{1,1} , \ldots ,b_{n,1} , \ldots ,b_{1,m} , \ldots ,b_{n,m} } \right)\) be the (universal) normal vector of the hyperplanes \(b_{1,1} \tilde{s}_{1,1} + \cdots + b_{n,1} \tilde{s}_{n,1} + \cdots + b_{1,m} \tilde{s}_{1,m} + \cdots + b_{n,m} \tilde{s}_{n,m} =\) constant. Finally, let \(\tilde{n}_{c} : = \left( {1, \ldots ,1} \right) \in {\mathbf{R}}^{nm}\) denote the normal vector of the hyperplane \(\tilde{s}_{1,1} + \cdots + \tilde{s}_{n,1} + \cdots + \tilde{s}_{1,m} + \cdots + \tilde{s}_{n,m} = 1\) relating to condition (21b) in \(\tilde{S}\). Then the corresponding algorithm in \(\tilde{S}\) is as follows (see also Fig. 4).

Fig. 4
figure 4

Algorithm for checking Pareto optimality of a strategy point \(L\) using the bijection T

Step A1

Check whether \(F_{1}\) can be increased while moving from point \(\tilde{L}\) to some point \(\tilde{M}\) in \(\tilde{S}\). Equivalently, check whether there exists an admissible displacement vector \(\tilde{v}\) for which

$$ \tilde{n}_{1} \tilde{v} = a_{1,1} \tilde{v}_{1,1} + \cdots + a_{n,1} \tilde{v}_{n,1} + \cdots + a_{1,m} \tilde{v}_{1,m} + \cdots + a_{n,m} \tilde{v}_{n,m} > 0 $$
(24a)

holds. The displacement from \(\tilde{L}\) to \(\tilde{M}\) should take place in the hyperplane (21b), so \(\tilde{v}\) and \(\tilde{n}_{c}\) should be perpendicular, that is,

$$ \tilde{n}_{c} \tilde{v} = \tilde{v}_{1,1} + \cdots + \tilde{v}_{n,1} + \cdots + \tilde{v}_{1,m} + \cdots + \tilde{v}_{n,m} = 0 $$
(24b)

should hold. Eq. (21c), which defines \(\tilde{S}\) in general, should also be fulfilled for the terminal point of the displacement \(\tilde{M}: = \left( {\tilde{s}_{1,1}^{M} , \ldots ,\tilde{s}_{n,1}^{M} , \ldots ,\tilde{s}_{1,m}^{M} , \ldots ,\tilde{s}_{n,m}^{M} } \right)\). Since \(\tilde{s}_{1,1}^{M} = \tilde{s}_{1,1}^{L} + \tilde{v}_{1,1}\), …, \(\tilde{s}_{n,1}^{M} = \tilde{s}_{n,1}^{L} + \tilde{v}_{n,1}\), …, \(\tilde{s}_{1,m}^{M} = \tilde{s}_{1,m}^{L} + \tilde{v}_{1,m}\), …, \(\tilde{s}_{n,m}^{M} = \tilde{s}_{n,m}^{L} + \tilde{v}_{n,m}\), this requirement is equivalent to the following quadratic constraints:

$$ \left( {\tilde{s}_{k,i}^{L} + \tilde{v}_{k,i} } \right)\left( {\tilde{s}_{1,j}^{L} + \tilde{v}_{1,j} } \right) + \cdots + \left( {\tilde{s}_{k,i}^{L} + \tilde{v}_{k,i} } \right)\left( {\tilde{s}_{n,j}^{L} + \tilde{v}_{n,j} } \right) = \left( {\tilde{s}_{k,j}^{L} + \tilde{v}_{k,j} } \right)\left( {\tilde{s}_{1,i}^{L} + \tilde{v}_{1,i} } \right) + \cdots + \left( {\tilde{s}_{k,j}^{L} + \tilde{v}_{k,j} } \right)\left( {\tilde{s}_{n,i}^{L} + \tilde{v}_{n,i} } \right), $$
$$ \begin{aligned} \tilde{v}_{k,i} \tilde{v}_{1,j} + \cdots + \tilde{v}_{k,i} \tilde{v}_{n,j} + \left( {\tilde{s}_{1,j}^{L} + \cdots + \tilde{s}_{n,j}^{L} } \right)\tilde{v}_{k,i} + \tilde{s}_{k,i}^{L} \tilde{v}_{1,j} + \cdots + \tilde{s}_{k,i}^{L} \tilde{v}_{n,j} + \left( {\tilde{s}_{1,j}^{L} + \cdots + \tilde{s}_{n,j}^{L} } \right)\tilde{s}_{k,i}^{L} = \hfill \\ \tilde{v}_{k,j} \tilde{v}_{1,i} + \cdots + \tilde{v}_{k,j} \tilde{v}_{n,i} + \left( {\tilde{s}_{1,i}^{L} + \cdots + \tilde{s}_{n,i}^{L} } \right)\tilde{v}_{k,j} + \tilde{s}_{k,j}^{L} \tilde{v}_{1,i} + \cdots + \tilde{s}_{k,j}^{L} \tilde{v}_{n,i} + \left( {\tilde{s}_{1,i}^{L} + \cdots + \tilde{s}_{n,i}^{L} } \right)\tilde{s}_{k,j}^{L} \hfill \\ \end{aligned} $$
(24c)
$$ \left( {k = 1, \ldots ,n - 1;\;i = 1, \ldots ,m;\;j = 1, \ldots ,m;\;i \ne j} \right). $$

Finally, it is also required that \(\tilde{M} \ge 0\), that is,

$$ \tilde{v}_{1,1} \ge - \tilde{s}_{1,1}^{L} , \ldots ,\tilde{v}_{n,1} \ge - \tilde{s}_{n,1}^{L} , \ldots ,\tilde{v}_{1,m} \ge - \tilde{s}_{1,m}^{L} , \ldots ,\tilde{v}_{n,m} \ge - \tilde{s}_{n,m}^{L} . $$
(24d)

If the system of conditions (24a)–(24d) can be satisfied with an appropriate vector \(\tilde{v}\), check the relation between \(F_{2} \left( {\tilde{M}} \right)\) and \(F_{2} \left( {\tilde{L}} \right)\). If \(F_{2} \left( {\tilde{M}} \right) \ge F_{2} \left( {\tilde{L}} \right)\), then \(\tilde{L}\) (or \(L\)) is not Pareto optimal; stop the algorithm. If \(F_{2} \left( {\tilde{M}} \right) < F_{2} \left( {\tilde{L}} \right)\), go to Step A2. If the system of conditions (24a)–(24d) cannot be satisfied, go to Step A2.
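For conditions (24a) and (24b), a minimal numeric sketch shows how a candidate displacement \(\tilde{v}\) is screened (n = m = 2 here; the coefficient values and the candidate vector are our own invented assumptions, and (24c), (24d) would still have to be checked separately):

```python
# Hedged sketch of the first two displacement tests of Step A1 in S~ (n = m = 2):
# v~ must increase F1, Eq. (24a), while staying in the simplex hyperplane,
# Eq. (24b). All concrete numbers are illustrative.
n1 = [3.0, 0.0, -1.0, 2.0]       # grad F1 = vec(A), the universal normal vector
nc = [1.0, 1.0, 1.0, 1.0]        # normal of the hyperplane sum(s~) = 1

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

v = [0.25, -0.125, -0.25, 0.125]  # a candidate displacement from L~ to M~
assert dot(nc, v) == 0.0          # (24b): v~ stays in the hyperplane (21b)
assert dot(n1, v) > 0.0           # (24a): F1 strictly increases along v~
```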

Step A2

Check whether \(F_{2}\) can be increased while moving from point \(\tilde{L}\) to some point \(\tilde{M}\) in an admissible way. As in Step A1, this is equivalent to checking the following conditions:

$$ \tilde{n}_{2} \tilde{v} = b_{1,1} \tilde{v}_{1,1} + \cdots + b_{n,1} \tilde{v}_{n,1} + \cdots + b_{1,m} \tilde{v}_{1,m} + \cdots + b_{n,m} \tilde{v}_{n,m} > 0, $$
(25a)
$$ \tilde{n}_{c} \tilde{v} = \tilde{v}_{1,1} + \cdots + \tilde{v}_{n,1} + \cdots + \tilde{v}_{1,m} + \cdots + \tilde{v}_{n,m} = 0, $$
(25b)
$$ \begin{aligned} \tilde{v}_{k,i} \tilde{v}_{1,j} + \cdots + \tilde{v}_{k,i} \tilde{v}_{n,j} + \left( {\tilde{s}_{1,j}^{L} + \cdots + \tilde{s}_{n,j}^{L} } \right)\tilde{v}_{k,i} + \tilde{s}_{k,i}^{L} \tilde{v}_{1,j} + \cdots + \tilde{s}_{k,i}^{L} \tilde{v}_{n,j} + \left( {\tilde{s}_{1,j}^{L} + \cdots + \tilde{s}_{n,j}^{L} } \right)\tilde{s}_{k,i}^{L} = \hfill \\ \tilde{v}_{k,j} \tilde{v}_{1,i} + \cdots + \tilde{v}_{k,j} \tilde{v}_{n,i} + \left( {\tilde{s}_{1,i}^{L} + \cdots + \tilde{s}_{n,i}^{L} } \right)\tilde{v}_{k,j} + \tilde{s}_{k,j}^{L} \tilde{v}_{1,i} + \cdots + \tilde{s}_{k,j}^{L} \tilde{v}_{n,i} + \left( {\tilde{s}_{1,i}^{L} + \cdots + \tilde{s}_{n,i}^{L} } \right)\tilde{s}_{k,j}^{L} \hfill \\ \end{aligned} $$
(25c)
$$ \left( {k = 1, \ldots ,n - 1;\;i = 1, \ldots ,m;\;j = 1, \ldots ,m;\,i \ne j} \right), $$
$$ \tilde{v}_{1,1} \ge - \tilde{s}_{1,1}^{L} , \ldots ,\tilde{v}_{n,1} \ge - \tilde{s}_{n,1}^{L} , \ldots ,\tilde{v}_{1,m} \ge - \tilde{s}_{1,m}^{L} , \ldots ,\tilde{v}_{n,m} \ge - \tilde{s}_{n,m}^{L} . $$
(25d)

If the system of conditions (25a)–(25d) can be satisfied with an appropriate vector \(\tilde{v}\), check the relation between \(F_{1} \left( {\tilde{M}} \right)\) and \(F_{1} \left( {\tilde{L}} \right)\). If \(F_{1} \left( {\tilde{M}} \right) \ge F_{1} \left( {\tilde{L}} \right)\), then \(\tilde{L}\) is not Pareto optimal; stop the algorithm. If \(F_{1} \left( {\tilde{M}} \right) < F_{1} \left( {\tilde{L}} \right)\), go to Step B1. If the system of conditions (25a)–(25d) cannot be satisfied, go to Step B1.

Step B1

Here it is examined whether \(F_{1}\) can be increased in such a way that \(F_{2}\) remains constant while moving from point \(\tilde{L}\) to some point \(\tilde{M}\) in an admissible way. For this, the following conditions should be checked:

$$ \tilde{n}_{1} \tilde{v} = a_{1,1} \tilde{v}_{1,1} + \cdots + a_{n,1} \tilde{v}_{n,1} + \cdots + a_{1,m} \tilde{v}_{1,m} + \cdots + a_{n,m} \tilde{v}_{n,m} > 0, $$
(26a)
$$ \tilde{n}_{c} \tilde{v} = \tilde{v}_{1,1} + \cdots + \tilde{v}_{n,1} + \cdots + \tilde{v}_{1,m} + \cdots + \tilde{v}_{n,m} = 0, $$
(26b)
$$ \begin{aligned} \tilde{v}_{k,i} \tilde{v}_{1,j} + \cdots + \tilde{v}_{k,i} \tilde{v}_{n,j} + \left( {\tilde{s}_{1,j}^{L} + \cdots + \tilde{s}_{n,j}^{L} } \right)\tilde{v}_{k,i} + \tilde{s}_{k,i}^{L} \tilde{v}_{1,j} + \cdots + \tilde{s}_{k,i}^{L} \tilde{v}_{n,j} + \left( {\tilde{s}_{1,j}^{L} + \cdots + \tilde{s}_{n,j}^{L} } \right)\tilde{s}_{k,i}^{L} = \hfill \\ \tilde{v}_{k,j} \tilde{v}_{1,i} + \cdots + \tilde{v}_{k,j} \tilde{v}_{n,i} + \left( {\tilde{s}_{1,i}^{L} + \cdots + \tilde{s}_{n,i}^{L} } \right)\tilde{v}_{k,j} + \tilde{s}_{k,j}^{L} \tilde{v}_{1,i} + \cdots + \tilde{s}_{k,j}^{L} \tilde{v}_{n,i} + \left( {\tilde{s}_{1,i}^{L} + \cdots + \tilde{s}_{n,i}^{L} } \right)\tilde{s}_{k,j}^{L} \hfill \\ \end{aligned} $$
(26c)
$$ \left( {k = 1, \ldots ,n - 1;\;i = 1, \ldots ,m;\;j = 1, \ldots ,m;\;i \ne j} \right), $$
$$ \tilde{v}_{1,1} \ge - \tilde{s}_{1,1}^{L} , \ldots ,\tilde{v}_{n,1} \ge - \tilde{s}_{n,1}^{L} , \ldots ,\tilde{v}_{1,m} \ge - \tilde{s}_{1,m}^{L} , \ldots ,\tilde{v}_{n,m} \ge - \tilde{s}_{n,m}^{L} , $$
(26d)
$$ \tilde{n}_{2} \tilde{v} = b_{1,1} \tilde{v}_{1,1} + \cdots + b_{n,1} \tilde{v}_{n,1} + \cdots + b_{1,m} \tilde{v}_{1,m} + \cdots + b_{n,m} \tilde{v}_{n,m} = 0. $$
(26e)

Condition (26e) ensures that \(F_{2}\) remains constant under the displacement from \(\tilde{L}\) to \(\tilde{M}\).

If the system of conditions (26a)–(26e) can be satisfied with an appropriate vector \(\tilde{v}\), then \(\tilde{L}\) is not Pareto optimal; stop the algorithm. If (26a)–(26e) cannot be satisfied, go to Step B2.

Step B2

Here it is examined whether \(F_{2}\) can be increased while \(F_{1}\) remains constant, moving from point \(\tilde{L}\) to some point \(\tilde{M}\) in an admissible way. For this, the following conditions should be checked:

$$ \tilde{n}_{2} \tilde{v} = b_{1,1} \tilde{v}_{1,1} + \cdots + b_{n,1} \tilde{v}_{n,1} + \cdots + b_{1,m} \tilde{v}_{1,m} + \cdots + b_{n,m} \tilde{v}_{n,m} > 0, $$
(27a)
$$ \tilde{n}_{c} \tilde{v} = \tilde{v}_{1,1} + \cdots + \tilde{v}_{n,1} + \cdots + \tilde{v}_{1,m} + \cdots + \tilde{v}_{n,m} = 0, $$
(27b)
$$ \begin{aligned} \tilde{v}_{k,i} \tilde{v}_{1,j} + \cdots + \tilde{v}_{k,i} \tilde{v}_{n,j} + \left( {\tilde{s}_{1,j}^{L} + \cdots + \tilde{s}_{n,j}^{L} } \right)\tilde{v}_{k,i} + \tilde{s}_{k,i}^{L} \tilde{v}_{1,j} + \cdots + \tilde{s}_{k,i}^{L} \tilde{v}_{n,j} + \left( {\tilde{s}_{1,j}^{L} + \cdots + \tilde{s}_{n,j}^{L} } \right)\tilde{s}_{k,i}^{L} = \hfill \\ \tilde{v}_{k,j} \tilde{v}_{1,i} + \cdots + \tilde{v}_{k,j} \tilde{v}_{n,i} + \left( {\tilde{s}_{1,i}^{L} + \cdots + \tilde{s}_{n,i}^{L} } \right)\tilde{v}_{k,j} + \tilde{s}_{k,j}^{L} \tilde{v}_{1,i} + \cdots + \tilde{s}_{k,j}^{L} \tilde{v}_{n,i} + \left( {\tilde{s}_{1,i}^{L} + \cdots + \tilde{s}_{n,i}^{L} } \right)\tilde{s}_{k,j}^{L} \hfill \\ \end{aligned} $$
(27c)
$$ \left( {k = 1, \ldots ,n - 1;\,i = 1, \ldots ,m;\;j = 1, \ldots ,m;\;i \ne j} \right), $$
$$ \tilde{v}_{1,1} \ge - \tilde{s}_{1,1}^{L} , \ldots ,\tilde{v}_{n,1} \ge - \tilde{s}_{n,1}^{L} , \ldots ,\tilde{v}_{1,m} \ge - \tilde{s}_{1,m}^{L} , \ldots ,\tilde{v}_{n,m} \ge - \tilde{s}_{n,m}^{L} , $$
(27d)
$$ \tilde{n}_{1} \tilde{v} = a_{1,1} \tilde{v}_{1,1} + \cdots + a_{n,1} \tilde{v}_{n,1} + \cdots + a_{1,m} \tilde{v}_{1,m} + \cdots + a_{n,m} \tilde{v}_{n,m} = 0. $$
(27e)

Condition (27e) ensures that \(F_{1}\) remains constant under the displacement from \(\tilde{L}\) to \(\tilde{M}\).

If the system of conditions (27a)–(27e) can be satisfied with an appropriate vector \(\tilde{v}\), then \(\tilde{L}\) is not Pareto optimal. If (27a)–(27e) cannot be satisfied, then \(\tilde{L}\) is Pareto optimal.

As an additional result, if \(\tilde{L}\) is not Pareto optimal, the above algorithm provides a point \(\tilde{M}: = \left( {\tilde{s}_{1,1}^{M} , \ldots ,\tilde{s}_{n,1}^{M} , \ldots ,\tilde{s}_{1,m}^{M} , \ldots ,\tilde{s}_{n,m}^{M} } \right) = \tilde{L} + \tilde{v}\) which weakly Pareto dominates \(\tilde{L}\). Its image in S, that is, \(M = \left( {x_{1}^{M} , \ldots ,x_{n}^{M} ,y_{1}^{M} , \ldots ,y_{m}^{M} } \right)\), can then be determined with the following relations (see also the proof of Lemma 3.1):

$$ x_{k}^{M} = \frac{{\tilde{s}_{k,j}^{M} }}{{\tilde{s}_{1,j}^{M} + \cdots + \tilde{s}_{n,j}^{M} }} $$
(28a)
$$ \left( {k = 1, \ldots ,n;\;j = 1, \ldots ,m} \right)\;{\text{if}}\;\tilde{s}_{1,j}^{M} + \cdots + \tilde{s}_{n,j}^{M} \ne 0, $$
$$ y_{j}^{M} = \tilde{s}_{1,j}^{M} + \cdots + \tilde{s}_{n,j}^{M} $$
(28b)
$$ \left( {j = 1, \ldots ,m} \right). $$
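Relations (28a) and (28b) are straightforward to implement. The following Python sketch is our own illustration (the function name and the flat-list encoding are assumptions, not part of the original text); it takes a point of \(\tilde{S}\) as a flat list in the ordering \(\left( {\tilde{s}_{1,1} , \ldots ,\tilde{s}_{n,1} , \ldots ,\tilde{s}_{1,m} , \ldots ,\tilde{s}_{n,m} } \right)\) and recovers the corresponding point of S:

```python
def stilde_to_s(s_tilde, n, m):
    """Map a point of S~ back to S via relations (28a)-(28b).

    s_tilde is the flat list (s~_{1,1}, ..., s~_{n,1}, ..., s~_{1,m}, ..., s~_{n,m}),
    grouped by the column index j. Returns (x, y) with x = (x_1, ..., x_n)
    and y = (y_1, ..., y_m).
    """
    # (28b): y_j is the j-th column sum of s~
    y = [sum(s_tilde[j * n + k] for k in range(n)) for j in range(m)]
    # (28a): x_k = s~_{k,j} / y_j for any column j with y_j != 0;
    # the product structure of S~ guarantees all such columns give the same x.
    j = next(j for j in range(m) if y[j] != 0)
    x = [s_tilde[j * n + k] / y[j] for k in range(n)]
    return x, y
```

For instance, for the point \(\tilde{M}\) of Example 2 below, `stilde_to_s([0.0435, 0.0065, 0.8265, 0.1235], 2, 2)` returns \(x \approx (0.87, 0.13)\) and \(y = (0.05, 0.95)\).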

3.5 Examples

In this section, two examples are given to illustrate the practical application of the proposed Pareto optimality checking algorithm in \(\tilde{S}\). Both examples concern 2 × 2 bimatrix games, for which it is generally more advantageous to work in \(\tilde{S}\) than in S.

Example 1

Consider the following bimatrix game, using the notation of Sect. 3, with n = m = 2. The payoff matrices of Players 1 and 2, respectively, are

$$ A: = \left[ {\begin{array}{*{20}c} {4.1} & 5 \\ {8.5} & {11.1} \\ \end{array} } \right],\;B: = \left[ {\begin{array}{*{20}c} 4 & 7 \\ 6 & 5 \\ \end{array} } \right]. $$

Let us check the Pareto optimality of point \(L = \left( {x_{1}^{L} ,x_{2}^{L} ,y_{1}^{L} ,y_{2}^{L} } \right): = \left( {0,1,1,0} \right)\). This point is a Nash equilibrium since \(F_{1} \left( L \right) = a_{2,1} = 8.5 > a_{1,1} = 4.1\) and \(F_{2} \left( L \right) = b_{2,1} = 6 > b_{2,2} = 5\). It is not obvious whether L is Pareto optimal, since neither \(a_{2,1}\) nor \(b_{2,1}\) is the maximal entry in its payoff matrix, and \(a_{2,1} + b_{2,1}\) is not the maximal entry in the sum matrix A + B either. (This maximum is \(a_{2,2} + b_{2,2} = 11.1 + 5\).) Thus it is reasonable to use the algorithm of Sect. 3.4 to check the Pareto optimality of L.

First, we should determine \(\tilde{L} = \left( {\tilde{s}_{1,1}^{L} ,\tilde{s}_{2,1}^{L} ,\tilde{s}_{1,2}^{L} ,\tilde{s}_{2,2}^{L} } \right) = \left( {x_{1}^{L} y_{1}^{L} ,x_{2}^{L} y_{1}^{L} ,x_{1}^{L} y_{2}^{L} ,x_{2}^{L} y_{2}^{L} } \right) = \left( {0,1,0,0} \right)\). Then Steps A1, A2, B1, B2 should follow.

Step A1

The payoff value \(F_{1} \left( {\tilde{L}} \right) = a_{2,1} \tilde{s}_{2,1}^{L} = 8.5\) can be increased with point \(\tilde{M} = \left( {0,0,0,1} \right)\) to \(F_{1} \left( {\tilde{M}} \right) = a_{2,2} \cdot 1 = 11.1\). This corresponds to the (admissible) displacement vector \(\tilde{v} = \tilde{M} - \tilde{L} = \left( {0,0,0,1} \right) - \left( {0,1,0,0} \right) = \left( {0, - 1,0,1} \right)\) satisfying conditions (24a)–(24d). Since \(5 = F_{2} \left( {\tilde{M}} \right) < F_{2} \left( {\tilde{L}} \right) = 6\), Step A2 should follow.

Step A2

The payoff value \(F_{2} \left( {\tilde{L}} \right) = b_{2,1} \tilde{s}_{2,1}^{L} = 6\) can be increased with point \(\tilde{M} = \left( {0,0,1,0} \right)\) to \(F_{2} \left( {\tilde{M}} \right) = b_{1,2} \cdot 1 = 7\). This corresponds to the (admissible) displacement vector \(\tilde{v} = \tilde{M} - \tilde{L} = \left( {0,0,1,0} \right) - \left( {0,1,0,0} \right) = \left( {0, - 1,1,0} \right)\) satisfying conditions (25a)–(25d). Since \(5 = F_{1} \left( {\tilde{M}} \right) < F_{1} \left( {\tilde{L}} \right) = 8.5\), Step B1 should follow.
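Since the payoffs are linear in \(\tilde{S}\), Steps A1 and A2 above are easy to verify numerically. A minimal Python sketch (our own illustration; the flat ordering \(\left( {\tilde{s}_{1,1} ,\tilde{s}_{2,1} ,\tilde{s}_{1,2} ,\tilde{s}_{2,2} } \right)\) follows the text, and the variable names are ours):

```python
# In S~, payoffs are linear: F1 = sum a_{k,j} s~_{k,j}, F2 = sum b_{k,j} s~_{k,j}.
# Flat ordering: (s~_{1,1}, s~_{2,1}, s~_{1,2}, s~_{2,2}).
a = [4.1, 8.5, 5.0, 11.1]   # entries of A in that ordering
b = [4.0, 6.0, 7.0, 5.0]    # entries of B in that ordering

def F(c, s):
    """Linear payoff in S~: dot product of coefficient list c and point s."""
    return sum(ci * si for ci, si in zip(c, s))

L    = [0, 1, 0, 0]   # L~ of Example 1
M_A1 = [0, 0, 0, 1]   # candidate point of Step A1
M_A2 = [0, 0, 1, 0]   # candidate point of Step A2

# Step A1: F1 rises (8.5 -> 11.1) but F2 drops (6 -> 5), so continue.
# Step A2: F2 rises (6 -> 7) but F1 drops (8.5 -> 5), so continue to Step B1.
```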

Step B1

The system of conditions (26a)–(26e) now takes the following form (after some simplifications):

$$ 4.1\tilde{v}_{1,1} + 8.5\tilde{v}_{2,1} + 5\tilde{v}_{1,2} + 11.1\tilde{v}_{2,2} > 0, $$
(29a)
$$ \tilde{v}_{1,1} + \tilde{v}_{2,1} + \tilde{v}_{1,2} + \tilde{v}_{2,2} = 0, $$
(29b)
$$ \tilde{v}_{1,1} \tilde{v}_{2,2} = \tilde{v}_{1,2} \tilde{v}_{2,1} + \tilde{v}_{1,2} , $$
(29c)
$$ \tilde{v}_{1,1} \ge 0,\;\tilde{v}_{2,1} \ge - 1,\;\tilde{v}_{1,2} \ge 0,\,\tilde{v}_{2,2} \ge 0, $$
(29d)
$$ 4\tilde{v}_{1,1} + 6\tilde{v}_{2,1} + 7\tilde{v}_{1,2} + 5\tilde{v}_{2,2} = 0. $$
(29e)

Combining (29b), (29e) and (29a) yields \(\tilde{v}_{1,2} < - \frac{96}{9}\tilde{v}_{1,1}\), which contradicts \(\tilde{v}_{1,2} \ge 0\) (see (29d)), since \(\tilde{v}_{1,1} \ge 0\) must also hold (see (29d)). That is, the system of conditions (29a)–(29e) cannot be satisfied. Step B2 should follow.
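The elimination behind this contradiction can be double-checked with exact rational arithmetic. The sketch below (our own illustration, not part of the original derivation) eliminates \(\tilde{v}_{2,2}\) via (29b) from (29e) and (29a), then \(\tilde{v}_{2,1}\) via the reduced (29e), confirming that (29a) collapses to the strict inequality \(- \frac{96}{10}\tilde{v}_{1,1} - \frac{9}{10}\tilde{v}_{1,2} > 0\), i.e. \(\tilde{v}_{1,2} < - \frac{96}{9}\tilde{v}_{1,1}\), which is impossible for \(\tilde{v}_{1,1} ,\tilde{v}_{1,2} \ge 0\):

```python
from fractions import Fraction as Fr

# Coefficient vectors over (v11, v21, v12, v22) for (29a), (29b), (29e).
c29a = [Fr('4.1'), Fr('8.5'), Fr(5), Fr('11.1')]   # (29a): c29a . v > 0
c29b = [Fr(1)] * 4                                  # (29b): c29b . v = 0
c29e = [Fr(4), Fr(6), Fr(7), Fr(5)]                 # (29e): c29e . v = 0

def eliminate(form, eq, var):
    """Eliminate variable index 'var' from 'form' using the equation eq . v = 0."""
    t = form[var] / eq[var]
    return [f - t * e for f, e in zip(form, eq)]

e_red = eliminate(c29e, c29b, 3)    # (29e) with v22 eliminated: -v11 + v21 + 2 v12 = 0
a_red = eliminate(c29a, c29b, 3)    # (29a) with v22 eliminated
a_fin = eliminate(a_red, e_red, 1)  # also eliminate v21: coefficients of v11, v12 only
```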

Step B2

The system of conditions (27a)–(27e) now takes the following form (after some simplifications):

$$ 4\tilde{v}_{1,1} + 6\tilde{v}_{2,1} + 7\tilde{v}_{1,2} + 5\tilde{v}_{2,2} > 0, $$
(30a)
$$ \tilde{v}_{1,1} + \tilde{v}_{2,1} + \tilde{v}_{1,2} + \tilde{v}_{2,2} = 0, $$
(30b)
$$ \tilde{v}_{1,1} \tilde{v}_{2,2} = \tilde{v}_{1,2} \tilde{v}_{2,1} + \tilde{v}_{1,2} , $$
(30c)
$$ \tilde{v}_{1,1} \ge 0,\;\tilde{v}_{2,1} \ge - 1,\;\tilde{v}_{1,2} \ge 0,\;\tilde{v}_{2,2} \ge 0, $$
(30d)
$$ 4.1\tilde{v}_{1,1} + 8.5\tilde{v}_{2,1} + 5\tilde{v}_{1,2} + 11.1\tilde{v}_{2,2} = 0. $$
(30e)

After combining (30b) and (30e),

$$ \tilde{v}_{2,1} = - \frac{70}{{26}}\tilde{v}_{1,1} - \frac{61}{{26}}\tilde{v}_{1,2} $$
(31)

is obtained. Substituting this, together with \(\tilde{v}_{2,2} = - \tilde{v}_{1,1} - \tilde{v}_{2,1} - \tilde{v}_{1,2}\) (from (30b)), into (30a), we get

$$ - \tilde{v}_{1,1} + \left( { - \frac{70}{{26}}\tilde{v}_{1,1} - \frac{61}{{26}}\tilde{v}_{1,2} } \right) + 2\tilde{v}_{1,2} > 0, $$
(32)

that is,

$$ \frac{96}{{26}}\tilde{v}_{1,1} + \frac{9}{26}\tilde{v}_{1,2} < 0. $$
(33)

Inequality (33) contradicts \(\tilde{v}_{1,1} \ge 0\), \(\tilde{v}_{1,2} \ge 0\) (see (30d)). That is, the system of conditions (30a)–(30e) cannot be satisfied.

Consequently, \(\tilde{L}\) (or L) is Pareto optimal.

Example 2

Now consider the bimatrix game with the following payoff matrices of Players 1 and 2, respectively:

$$ A: = \left[ {\begin{array}{*{20}c} {4.9} & 5 \\ {5.4} & 9 \\ \end{array} } \right],\;B: = \left[ {\begin{array}{*{20}c} {6.9} & {6.1} \\ 6 & 5 \\ \end{array} } \right]. $$

Let us again check the Pareto optimality of point \(L = \left( {x_{1}^{L} ,x_{2}^{L} ,y_{1}^{L} ,y_{2}^{L} } \right): = \left( {0,1,1,0} \right)\). This point is a Nash equilibrium since \(F_{1} \left( L \right) = a_{2,1} = 5.4 > a_{1,1} = 4.9\) and \(F_{2} \left( L \right) = b_{2,1} = 6 > b_{2,2} = 5\). It is not obvious whether L is Pareto optimal, since neither \(a_{2,1}\) nor \(b_{2,1}\) is the maximal entry in its payoff matrix, and \(a_{2,1} + b_{2,1}\) is not the maximal entry in the sum matrix A + B either. (This maximum is \(a_{2,2} + b_{2,2} = 9 + 5\).) Thus it is reasonable to use the algorithm of Sect. 3.4 to check the Pareto optimality of L.

\(\tilde{L} = \left( {\tilde{s}_{1,1}^{L} ,\tilde{s}_{2,1}^{L} ,\tilde{s}_{1,2}^{L} ,\tilde{s}_{2,2}^{L} } \right) = \left( {0,1,0,0} \right)\) again (as in Example 1). Following the algorithm, it turns out (in Step B1) that an (admissible) weak Pareto improvement can be achieved, for example, with the displacement vector \(\tilde{v} = \left( {0.0435, - 0.9935,0.8265,0.1235} \right)\) and the weakly Pareto dominating point \(\tilde{M}\) = \(\tilde{L} + \tilde{v}\) = \(\left( {0.0435,0.0065,0.8265,0.1235} \right)\), which increases \(F_{1}\) by \(4.9 \cdot 0.0435 - 5.4 \cdot 0.9935 + 5 \cdot 0.8265 + 9 \cdot 0.1235 = 0.09225\) while keeping \(F_{2}\) constant.
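The displacement vector above is printed in rounded form, so condition (26e) can be verified only up to that rounding (the residual is of order \(10^{-3}\)), while (26a)–(26d) can be checked directly. A quick Python check (our own illustration; variable names are ours):

```python
# Published (rounded) displacement vector and L~ = (0, 1, 0, 0) of Example 2.
v  = [0.0435, -0.9935, 0.8265, 0.1235]   # (v11, v21, v12, v22)
sL = [0.0, 1.0, 0.0, 0.0]
a  = [4.9, 5.4, 5.0, 9.0]                # A in the ordering (1,1),(2,1),(1,2),(2,2)
b  = [6.9, 6.0, 6.1, 5.0]                # B in the same ordering

def dot(c, w):
    return sum(ci * wi for ci, wi in zip(c, w))

gain_F1  = dot(a, v)                                 # (26a): > 0, here 0.09225
mass     = sum(v)                                    # (26b): = 0
bil_gap  = v[0] * v[3] - (v[2] * v[1] + v[2])        # (26c) for this L~: v11*v22 - (v12*v21 + v12)
admiss   = all(vi >= -si for vi, si in zip(v, sL))   # (26d)
drift_F2 = dot(b, v)                                 # (26e): ~0 up to the rounding of v
```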

Point \(M \in S\), corresponding to \(\tilde{M}\), can be determined from Eqs. (28a) and (28b):

$$ x_{1}^{M} = \frac{{\tilde{s}_{1,1}^{M} }}{{\tilde{s}_{1,1}^{M} + \tilde{s}_{2,1}^{M} }} = \frac{0.0435}{{0.0435 + 0.0065}} = 0.87,\;x_{2}^{M} = 0.13,\;y_{1}^{M} = \tilde{s}_{1,1}^{M} + \tilde{s}_{2,1}^{M} = 0.05,\;y_{2}^{M} = 0.95. $$

That is, point \(M = \left( {x_{1}^{M} ,x_{2}^{M} ,y_{1}^{M} ,y_{2}^{M} } \right) = \left( {0.87,0.13,0.05,0.95} \right)\) weakly Pareto dominates L.
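The dominance can also be confirmed directly in S, assuming the standard mixed-extension payoffs \(F_{1} \left( {x,y} \right) = x^{T} Ay\) and \(F_{2} \left( {x,y} \right) = x^{T} By\) (a sketch in plain Python; because M is given in rounded form, \(F_{2}\) matches its value at L only approximately):

```python
# Payoff matrices of Example 2.
A = [[4.9, 5.0], [5.4, 9.0]]
B = [[6.9, 6.1], [6.0, 5.0]]

def payoff(C, x, y):
    """Expected (bilinear) payoff x^T C y for mixed strategies x, y."""
    return sum(x[k] * C[k][j] * y[j] for k in range(len(x)) for j in range(len(y)))

xL, yL = [0.0, 1.0], [1.0, 0.0]      # point L
xM, yM = [0.87, 0.13], [0.05, 0.95]  # point M (rounded)

# F1 rises from 5.4 to about 5.492; F2 stays at about 6 (up to the rounding of M).
```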

Remark 3.4

The 2 × 2 bimatrix games in the above examples could be solved by hand. Of course, for larger games this may become infeasible. Nevertheless, numerous computational tools and software packages are available in practice to carry out the checking algorithm exactly, or at least approximately (with arbitrary precision).

4 Conclusion

Besides their theoretical importance, bimatrix games have important application fields such as water resource management, and they have played a vital role since the very beginning of game theory. Pareto optimal strategy vectors of the Players represent a reasonable and often applied solution concept (mainly in the cooperative approach to games). However, it may be hard to determine the Pareto optimal strategy vectors, or even just to check the Pareto optimality of a given strategy vector.

In the present paper, general n × m bimatrix games have been studied. First, an elementary proof has been provided for a useful theorem, which makes the proposed Pareto optimality checking algorithm simpler. The algorithm can be made even more convenient with a proposed nonlinear transform in the important case of 2 × 2 bimatrix games (and for other bimatrix games under certain circumstances).

Based on the examples in Sect. 3.5, the algorithm for 2 × 2 bimatrix games can be executed even by hand if the transform is used. For games of larger size, numerous computational tools and software packages are available to carry out the checking algorithm exactly, or at least approximately.

Consequently, the proposed algorithm can be recommended generally for checking the Pareto optimality of strategy pairs in bimatrix games. For 2 × 2 bimatrix games, the algorithm is recommended together with the proposed transform.

Future research may address possible generalizations of the present results. For example, the nonlinear transform and the Pareto optimality checking algorithm might be extended to games with more than two Players (with finitely many pure strategies), or to generalized bimatrix games, where each of the two Players has finitely many payoff matrices (not only one) from which to select when playing the game (Siddiqi et al., 2011). It may also be worth examining whether, and how usefully, the Pareto optimality check in a general n × m bimatrix game can be decomposed into checks in 2 × 2 bimatrix (sub)games. On the one hand, the number of problems would increase considerably in this way; on the other hand, each problem would be simple, so that not only the proposed checking algorithm but also the proposed transform could be applied to real advantage.