Solvability of a Bounded Parametric System in Max-Łukasiewicz Algebra

Abstract: The max-Łukasiewicz algebra describes fuzzy systems working in discrete time which are based on two binary operations: the maximum and the Łukasiewicz triangular norm. The behavior of such a system in time depends on the solvability of the corresponding bounded parametric max-linear system. The aim of this study is to describe an algorithm recognizing for which values of the parameter the given bounded parametric max-linear system has a solution, represented by an appropriate state of the fuzzy system in consideration. Necessary and sufficient conditions of the solvability have been found and a polynomial recognition algorithm has been described. The correctness of the algorithm has been verified. The presented polynomial algorithm consists of three parts depending on the entries of the transition matrix and the required state vector. The results are illustrated by numerical examples. The presented results can also be applied in the study of max-Łukasiewicz systems with interval coefficients. Furthermore, the Łukasiewicz arithmetical conjunction can be used in various types of models, for example, in cash-flow systems.


Introduction
The max-Łukasiewicz algebra (max-Łuk algebra, for short), is one of the so-called max-T fuzzy algebras, which are defined for various triangular norms T.
A max-T fuzzy algebra works with variables in the unit interval I = [0, 1] and uses the binary operations of maximum and a t-norm, T, instead of the conventional operations of addition and multiplication. Formally, a max-T fuzzy algebra is a triplet (I, ⊕, ⊗_T), where I = [0, 1] and ⊕ = max, ⊗_T = T are binary operations on I. By I(m, n) and I(n) we denote the sets of all matrices and vectors of the given dimensions over I. The operations ⊕, ⊗_T are extended to matrices and vectors in the standard manner. Similarly, partial orderings on I(m, n) and I(n) are induced by the linear ordering on I.
The triangular norms (t-norms, for short) were introduced in [1], in connection with probabilistic metric spaces. The t-norms interpretations are mainly the conjunction in fuzzy logics and intersection of fuzzy sets. Therefore, they find applications in many domains, for example in decision making processes, game theory and statistics, information and data processing or risk management. The t-norms and t-conorms belong to basic notions in the theory of fuzzy sets. The following four main t-norms: Łukasiewicz, Gödel, product and drastic (and many others) can be found in [2].
The Łukasiewicz norm, x ⊗_L y = max(0, x + y − 1), is often characterized as a logic of absolute or metric comparison.
The Gödel norm is defined as the minimum of the entries (the truth degrees of the constituents): x ⊗_G y = min(x, y). Gödel logic is the simplest norm; it is often characterized as a logic of relative comparison. The product norm is defined by the formula x ⊗_P y = x · y. The drastic triangular norm (the "weakest norm") is a basic example of a non-divisible t-norm on any partially ordered set. This t-norm is defined by the formula x ⊗_D y = y if x = 1, x ⊗_D y = x if y = 1, and x ⊗_D y = 0 otherwise. The max-T algebras with the above-mentioned t-norms have various applications and their steady states and optimization methods have been intensively studied, see, for example, [3][4][5][6][7]. The algebras with interval entries have been studied in [8][9][10].
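For concreteness, the four basic t-norms recalled above can be sketched in Python as follows (the function names are ours, not taken from the literature):

```python
def t_lukasiewicz(x, y):
    # Łukasiewicz t-norm: the part of x + y that exceeds 1, or 0
    return max(0.0, x + y - 1.0)

def t_goedel(x, y):
    # Gödel t-norm: minimum of the truth degrees
    return min(x, y)

def t_product(x, y):
    # product t-norm: ordinary multiplication on [0, 1]
    return x * y

def t_drastic(x, y):
    # drastic t-norm: nontrivial only when one argument equals 1
    if x == 1.0:
        return y
    if y == 1.0:
        return x
    return 0.0
```

All four coincide with the classical conjunction on the boundary values 0 and 1 and differ only in the interior of the unit square.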
In the particular case when T is the Gödel t-norm, we get an important max-min algebra which is useful in solving various problems in fuzzy scheduling and optimization. Max-min algebra belongs to the so-called tropical mathematics, which has many applications and brings a great number of contributions to mathematical theory. Interesting monographs [11][12][13][14] and collections of papers [15][16][17][18][19] come from tropical mathematics and its applications.
Tropical algebras are often used for describing and studying systems working in discrete time stages. The state of the system in stage k is described by the state vector, x(k). Then the transition matrix, A, determines the transition of the system to the next stage. In more detail, the next state of the system, x(k + 1), is obtained by the multiplication A ⊗ x(k) = x(k + 1). During the work of the system, it can happen that, after some time, the system reaches a steady state. In algebraic notation, the state vectors of steady states are eigenvectors of the transition matrix with some eigenvalue λ ∈ I: A ⊗ x = λ ⊗ x.
The eigenproblem in max-min algebra has been frequently investigated, and many interesting results have been found. The structure of the eigenspace has been described and algorithms for computing the largest eigenvector have been suggested, see for example [20,21]. The eigenvectors in a max-T algebra, for various triangular norms T, have applications in fuzzy set theory. Such eigenvectors have been studied in [5,7,22]. The eigenvalues and eigenvectors are important characteristics of the system described by the fuzzy algebra. For the case of the drastic and product t-norms, the structure of the eigenspace has been studied in [5,7]. Finally, [22] describes the case of a Łukasiewicz fuzzy algebra.
Łukasiewicz arithmetical conjunction has applications in many model situations. The operation subtracts 1 from the sum of the components and takes the maximum with zero. This leads to the idea that the result of the operation is the remainder that is over the unit. Thus, the Łukasiewicz conjunction can be used, for example, in describing the backup of data on a computer, the maximal capacity of an oil tank, or a lump payment in finance.
Such applications often lead to systems of max-Łuk linear equations. There is no inverse operation to ⊕ in max-Łuk algebra; therefore, transferring variables from one side of an equation to the other is not possible. As a consequence, solving the one-sided linear systems (with variables, say, on the left-hand side of the equations) requires an approach different from solving the two-sided systems (with variables on both sides).
The aim of this paper is to present an algorithm for recognizing the solvability of a given one-sided max-Łuk linear system with bounded variables, in dependence on a linear parameter factor on the right-hand side; see (9) and (10) for an exact formulation.
This problem has not yet been studied in the parametrized version. The main contribution of this paper is the description of the recognition algorithm, which plays a crucial role in the investigation of interval eigenvectors. The algorithm for recognizing the solvability of a given one-sided max-Łuk linear system can be briefly summarized in the following steps:
1. permute the equations in the system so that the right-hand side is decreasing, that is, b_1 ≥ b_2 ≥ · · · ≥ b_m;
2. recognize the solvability for some λ with 1 − b_m < λ ≤ 1, according to Theorem 3 (case A);
3. recognize the solvability for some λ with 1 − b_h < λ ≤ 1 − b_{h+1}, 0 < h < m, according to Theorem 4 (case B);
4. recognize the solvability for some λ with 0 ≤ λ ≤ 1 − b_1, according to Theorem 5 (case C), by verifying y_j ≤ min_{i∈M}(1 − c_ij) for every j ∈ N;
5. the system is solvable if the answer is positive at least once in steps 2, 3 or 4. Otherwise, the system is insolvable for any value of λ.
The structure of this paper is the following. Section 2 contains a case study based on an interactive cash-flow system, which shows motivation for solving linear systems in max-Łuk algebra. The problem is formulated in Section 3, where the preparatory results are also presented. The main results are described in Section 4. Illustrative numerical examples related to the case study from Section 2 are shown with details in Section 5. Discussion, comparison of the results with other papers, as well as future developments, are given in Conclusions.

Case Study: Interactive Cash-Flow System
Consider an interactive cash-flow system created by a network of n cooperating banks, B 1 , B 2 , . . . , B n . Assume that the cooperation is performed in stages. During the run of the system, variable interest rates of the banks mutually influence each other. In each stage, every bank B i chooses a cash-flow cooperation with some other bank B j (choice i = j is also possible) in order to achieve the optimal profit, expressed by the value of the interest rate achieved for the next stage.
The system can be modeled as a discrete events system (DES). For any bank B_i, the variable x_i(k) shows the interest rate value in stages k = 1, 2, . . . ; the vector x(k) = (x_1(k), x_2(k), . . . , x_n(k))^T is called the state vector of the DES in stage k. The change of the state-vector values during the transition of the DES to the state vector x(k + 1) depends on the entries a_ij of the so-called transition matrix A. The possible increase of the profit coming from the cooperation of B_i with bank B_j is equal to a_ij (including the lump payment). Thus, the efficient sum of a_ij and x_i(k) is only the part exceeding 1 (that is, exceeding 100%), in the case when B_i chooses B_j for cooperation in stage k.
Optimization of the variable interest rate in stage k + 1 leads every B_i to such a choice of B_j where the efficient increase of the profit is maximal. That is,
x_i(k + 1) = max_{j∈N} max(0, a_ij + x_j(k) − 1) for every i.
In max-Łuk notation, the optimal choice can be written as x(k + 1) = A ⊗_L x(k). (7)
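A minimal Python sketch of one transition step of such a system (the matrix and vector values below are illustrative, not taken from the paper's examples):

```python
def luk(x, y):
    # Łukasiewicz conjunction
    return max(0.0, x + y - 1.0)

def transition(A, x):
    # x_i(k+1) = max_j max(0, a_ij + x_j(k) - 1): bank B_i picks the
    # partner B_j yielding the largest efficient increase
    return [max(luk(a, xj) for a, xj in zip(row, x)) for row in A]

A = [[0.9, 0.6],
     [0.7, 0.8]]
x0 = [0.5, 0.4]
x1 = transition(A, x0)  # one stage of the cash-flow DES
```

Here only the sums exceeding 1 contribute; all other cooperations yield a zero efficient increase.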
For simplicity we assume that the system is homogeneous (that is, A does not change from stage to stage).
In real life, the matrix and vector entries are not always exact values. For example, if (7) is applied for prediction, then the transition matrix is not exactly known; it is only an estimate, belonging to some interval A ∈ A = [A, A]. Analogously, the state vector belongs to some interval x ∈ X = [x, x]. We say that the DES is considered with interval coefficients.
For formulas with interval coefficients, it must be decided which values from the corresponding interval will be taken. One possibility is to take all values (using the universal quantifier). The other possibility is to use the existential quantifier and only require that there is some value from the interval, such that the formula is satisfied.
If there are more interval variables in the formula in consideration, then the quantifiers can be combined. For example, various types of quantified notions in max-min algebra are described in [23,24].
By recurrent application of (7), the sequence of state vectors x(0), x(1), x(2), . . . (also called the orbit of the DES) can be created. The orbit represents a predicted evolution of interest rates. Two natural questions arise: Q1. Can the orbit reach a fixed given state vector value? Q2. Can the orbit reach a steady state (such a state which does not change from stage to stage)? Q1 requires recognizing whether, in some stage k, there is a value y = x(k) such that b = x(k + 1) for a given vector b ∈ I(n). If we consider the problem in interval arithmetic, then we get the state vector variable y ∈ [y, y]. Moreover, we can generalize the problem by adding a parameter λ ∈ I to the given value b. Then the original question is being solved as a special subcase with λ = 1.
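The orbit and question Q1 can be sketched as follows (a brute-force check over finitely many stages; the function names, the stage limit and the tolerance are our own choices):

```python
def luk(x, y):
    # Łukasiewicz conjunction
    return max(0.0, x + y - 1.0)

def transition(A, x):
    # one application of (7): x(k+1) = A ⊗_L x(k)
    return [max(luk(a, xj) for a, xj in zip(row, x)) for row in A]

def orbit(A, x0, steps):
    # x(0), x(1), ..., x(steps): the predicted evolution of interest rates
    xs = [x0]
    for _ in range(steps):
        xs.append(transition(A, xs[-1]))
    return xs

def q1_reaches(A, x0, b, steps=50, tol=1e-9):
    # Q1: does the orbit reach the required state vector b within `steps` stages?
    return any(all(abs(u - v) <= tol for u, v in zip(x, b))
               for x in orbit(A, x0, steps))
```

Note that this brute-force check answers Q1 only for a fixed starting vector x0; the bounded parametric problem studied below works with a whole interval of admissible states instead.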
Therefore, question Q1 can be solved as one-sided bounded parametric problem studied in Sections 3 and 4. The main result is Theorem 6, which describes a necessary and sufficient condition for solvability of the system (9) and (10).
The computations answering Q1 are illustrated by Example 1 (positive answer) and Example 2 (negative answer) in Section 5, with a detailed interpretation.
Q2 is connected with the eigenproblem of the transition matrix. A steady state is characterized by the equation x(k + 1) = x(k) or, equivalently, by A ⊗_L x = x. That is, steady states are exactly the max-Łuk eigenvectors of the transition matrix A. Usually, the eigenvectors are considered in a more general form, with an added parameter, the so-called eigenvalue λ ∈ I. That is, x ∈ I(n) is an eigenvector of a matrix A ∈ I(n, n) with eigenvalue λ ∈ I if A ⊗_L x = λ ⊗_L x. The eigenvectors in max-Łuk algebra have been studied in [3,6], and recently, in a more general context, in [25].
If we wish to answer Q2 in interval arithmetic, then we have to consider the interval version of the eigenproblem, A ⊗_L x = λ ⊗_L x with A ∈ A and x ∈ X. According to the choice of the universal/existential quantifier for A ∈ A and for x ∈ X, various types of the interval eigenproblem have been studied by various authors over max-plus and max-min algebras.
For example, X is called a strongly tolerable eigenvector of A if there exist λ ∈ I and A ∈ A such that A ⊗_L x = λ ⊗_L x holds for every x ∈ X. In words, we ask for the existence of λ and A ∈ A such that every x ∈ X is an eigenvector of A with eigenvalue λ (we shortly say that every x ∈ X is tolerated by A).
An analogous problem has been solved in max-min algebra in [23], where it has been shown that the problem can be reduced to the solvability of the system C ⊗ y = λ ⊗ b using generators of the interval matrix A. The main idea of the algorithm is to find a certificate matrix of the given instance, of dimension n × n, as a max-min linear combination of generators. The necessary coefficients of this linear combination can be computed by solving an auxiliary one-sided max-min linear system of dimension n² × n².
By analogy, this approach can easily be transferred from max-min to max-Łuk algebra, with a single exception: recognizing the solvability of the auxiliary one-sided linear system of dimension n² × n². Namely, recognizing the parametric solvability of a one-sided linear system is a substantially more complicated problem in max-Łuk algebra than it is in max-min algebra. In fact, it is in this manuscript that an efficient algorithm for the necessary solvability problem is formulated.
Until now, the specific methods of max-Łuk algebra have only been presented at Conference EURO 2019 in Dublin. The extended version of this presentation is in preparation and will be submitted soon. The recognition method described in this manuscript plays an important role in the proofs of the following two theorems.

Theorem 1 ([23]). Let an interval matrix A = [A, A] and an interval vector X = [x, x] be given. Then, X is a strongly tolerable eigenvector of A if and only if C ⊗_L y = λ ⊗_L b is solvable for some λ ∈ I.

Theorem 2 ([23]). The recognition problem of whether a given interval vector X is a strongly tolerable eigenvector of a given interval matrix A in max-min algebra is solvable in O(n^5) time.

Bounded Parametric Systems of Max-Łuk Linear Equations
In view of the motivation inspired by the case study in Section 2, the solvability problem for a bounded parametric linear system in max-Łuk algebra is studied in this paper. We consider the system
C ⊗_L y = λ ⊗_L b (9)
with bounded variables
y ≤ y ≤ y, (10)
with a fixed matrix C ∈ I(m, n) and the right-hand side vector b ∈ I(m). The basic question is whether the system is solvable for some value 0 < λ ∈ I of the parameter. In other words, we are looking for a necessary and sufficient condition allowing us to recognize whether there is a λ ∈ I \ {0} such that (9) and (10) is solvable (the case λ = 0 is trivial).
In the sequel, we use the notation M = {1, 2, . . . , m} and N = {1, 2, . . . , n}. The set of all solutions to (9) without any constraint is denoted by S(C, λ ⊗_L b); the solution set with the upper bound is S(C, λ ⊗_L b, y), and the solution set with both upper and lower bounds is denoted by S(C, λ ⊗_L b, y, y). That is, we have to recognize whether S(C, λ ⊗_L b, y, y) ≠ ∅ for some λ ∈ I or not.
Without any loss of generality, we assume till the end of the paper that the right-hand side vector b ∈ I(m) satisfies the monotonicity condition
b_1 ≥ b_2 ≥ · · · ≥ b_m. (11)
System (9) is equivalent to
max_{j∈N} (c_ij ⊗_L y_j) = λ ⊗_L b_i for every i ∈ M,
which is further equivalent to the conjunction of
c_ij ⊗_L y_j ≤ λ ⊗_L b_i for every i ∈ M, j ∈ N, (14)
and
for every k ∈ M there is j ∈ N with c_kj ⊗_L y_j = λ ⊗_L b_k. (15)
In view of the definition of ⊗_L, the inequality in (14) takes one of the following forms
c_ij + y_j − 1 ≤ λ + b_i − 1, if λ + b_i − 1 > 0, (16)
c_ij ⊗_L y_j = 0, if λ + b_i − 1 ≤ 0. (17)
We shall use the notation H = H(λ) = { i ∈ M : λ > 1 − b_i } and d_ij = b_i − c_ij for i ∈ M, j ∈ N. For brevity, we write (16) and (17) jointly as
max(0, c_ij + y_j − 1) ≤ max(0, λ + b_i − 1). (18)
That is, we obtain the following lemma.
Lemma 1. Let C ∈ I(m, n), b ∈ I(m), λ ∈ I and let y ∈ I(n) satisfy (14). Then, for every j ∈ N,
(i) y_j ≤ λ + min_{i∈H} d_ij;
(ii) y_j ≤ min_{i∈M\H} (1 − c_ij);
(iii) y_j ≤ (λ + min_{i∈H} d_ij) ∧ min_{i∈M\H} (1 − c_ij).
Proof. (i) For i ∈ H we have λ + b_i − 1 > 0, and (16) gives c_ij + y_j − 1 ≤ λ + b_i − 1, that is, y_j ≤ λ + d_ij, and therefore y_j ≤ λ + min_{i∈H} d_ij.
(ii) For i ∈ M \ H we have 0 ≥ λ + b_i − 1, which implies 0 ≥ c_ij + y_j − 1, by (18). Then y_j ≤ 1 − c_ij, that is, y_j ≤ min_{i∈M\H} (1 − c_ij).
(iii) The assertion follows directly from the definition.
If the equality c kj ⊗ L y j = λ ⊗ L b k in (15) holds, then we say that y j is active in row k. If so, we write k ∈ A j (λ) and A j = A j (λ) is then called the activity set of the variable y j .
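The activity set of a variable can be computed directly from this definition; a Python sketch (the function name and the numerical tolerance are our own choices):

```python
def luk(x, y):
    # Łukasiewicz conjunction
    return max(0.0, x + y - 1.0)

def activity_set(C, b, y, lam, j, tol=1e-9):
    # A_j(lam): the rows k in which the variable y_j is active,
    # i.e. c_kj ⊗_L y_j = lam ⊗_L b_k holds with equality
    return {k for k in range(len(C))
            if abs(luk(C[k][j], y[j]) - luk(lam, b[k])) <= tol}
```

A vector y solves row k of the system exactly when at least one variable y_j is active in row k.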
There are two possible activity subcases:
λ ⊗_L b_k > 0, (21)
λ ⊗_L b_k = 0. (22)
In subcase (21), also c_kj ⊗_L y_j > 0, which gives c_kj + y_j − 1 = λ + b_k − 1. As a consequence,
y_j = λ + d_kj. (23)
In subcase (22), also c_kj ⊗_L y_j = 0, which implies c_kj + y_j − 1 ≤ 0, and y_j ≤ 1 − c_kj.
In subcase (21) with k ∈ H, we have y_j = λ + d_kj ≤ λ + min_{i∈H} d_ij, by Lemma 1(i). As a consequence,
d_kj = min_{i∈H} d_ij. (24)
In subcase (22) with k ∈ M \ H, we get, using Lemma 1(ii), y_j ≤ min_{i∈M\H} (1 − c_ij).
Lemma 2. Assume C ∈ I(m, n), b ∈ I(m) with the monotonicity condition (11), y, y, y ∈ I(n), λ ∈ I and h ∈ M with 1 − b_h < λ ≤ 1 − b_{h+1}. Then H = {1, 2, . . . , h} and the following statements are equivalent:
1. y ∈ S(C, λ ⊗_L b, y, y);
2. y ∈ S(C_H, λ ⊗_L b_H, y ∧ y^h, y);
where the submatrix C_H (subvector b_H) consists of the rows C_i (entries b_i) with i ∈ H. Analogously, the vector y^h ∈ I(n) with y^h_j = min_{i∈M\H} (1 − c_ij) for every j ∈ N is constructed from the rows C_i of C with i ∈ M \ H.
For λ ∈ I and for every i ∈ M, j ∈ N, we define
y_ij(λ) = λ + d_ij, if i ∈ H(λ), and y_ij(λ) = 1 − c_ij, if i ∈ M \ H(λ).
Furthermore, we define y(λ) ∈ I(n) by putting, for j ∈ N,
y_j(λ) = y_j ∧ min_{i∈M} y_ij(λ). (28)
Lemma 3. Let C ∈ I(m, n), b ∈ I(m), λ ∈ I and y ∈ I(n). Then
(i) c_ij ⊗_L y_j(λ) ≤ λ ⊗_L b_i for every i ∈ M, j ∈ N;
(ii) y(λ) ≤ y;
(iii) y(λ) is the maximal vector in I(n) fulfilling conditions (i) and (ii).
Proof. It is easy to verify, using the definition of ⊗ L , that c ij ⊗ L y ij (λ) = λ ⊗ L b i for every i ∈ M, j ∈ N. Then, assertions (i) and (ii) follow from the definition of y (λ).
(iii) Assume y ∈ I(n) satisfies conditions (i) and (ii) with y (λ) replaced by y. Let i ∈ M, j ∈ N. By (i) we have c ij ⊗ L y j ≤ λ ⊗ L b i . Suppose, by contradiction, that there is j ∈ N such that y j > y j (λ). Then, in view of (28), there is i ∈ M such that y j > y ij (λ). We consider two cases.
Case (a): i ∈ H. Then y_j > y_ij(λ) = λ + d_ij, which gives c_ij + y_j − 1 > λ + b_i − 1 > 0, and therefore c_ij ⊗_L y_j > λ ⊗_L b_i, in contradiction with (i).
Case (b): i ∈ M \ H. Then y_j > y_ij(λ) = 1 − c_ij, which gives c_ij + y_j − 1 > 0, and therefore c_ij ⊗_L y_j > 0 = λ ⊗_L b_i, again in contradiction with (i).
Given that i, j are arbitrary, y ≤ y(λ) follows.

Lemma 4.
Let C ∈ I(m, n), b ∈ I(m), λ ∈ I and y ∈ I(n). Then the following statements are equivalent.
1. S(C, λ ⊗_L b, y) ≠ ∅;
2. y(λ) ∈ S(C, λ ⊗_L b, y).
Proof. The assertion of the lemma follows directly from Lemma 3.

Lemma 5.
Let C ∈ I(m, n), b ∈ I(m), λ ∈ I and let y, y ∈ I(n) with y ≤ y. Then the following statements are equivalent.
1. S(C, λ ⊗_L b, y, y) ≠ ∅;
2. y ≤ y(λ) and y(λ) ∈ S(C, λ ⊗_L b, y).

Remark 1.
The assertions of Lemma 5 are often expressed by saying that for fixed λ ∈ I, y (λ) is the maximal possible candidate for a solution of the system (9) and (10).
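Remark 1 translates directly into a solvability test for a fixed λ: build the maximal candidate y(λ) and check whether it solves the system within the bounds. A Python sketch, assuming the explicit form y_j(λ) = y_j ∧ min_{i∈H}(λ + d_ij) ∧ min_{i∈M\H}(1 − c_ij) with minima over empty index sets taken as 1 (cf. Remark 2); the function names are ours:

```python
def luk(x, y):
    # Łukasiewicz conjunction
    return max(0.0, x + y - 1.0)

def max_candidate(C, b, y_up, lam):
    # y_j(lam) = y_up_j ∧ min_{i in H}(lam + b_i - c_ij) ∧ min_{i not in H}(1 - c_ij),
    # where H = {i : lam > 1 - b_i}; minima over empty sets are taken as 1
    m, n = len(C), len(C[0])
    out = []
    for j in range(n):
        v = y_up[j]
        for i in range(m):
            if lam > 1.0 - b[i]:           # i in H
                v = min(v, lam + b[i] - C[i][j])
            else:                          # i in M \ H
                v = min(v, 1.0 - C[i][j])
        out.append(v)
    return out

def solvable_for(C, b, y_low, y_up, lam, tol=1e-9):
    # Lemma 5: the bounded system is solvable for this lam iff the maximal
    # candidate satisfies C ⊗_L y(lam) = lam ⊗_L b and y_low <= y(lam)
    y = max_candidate(C, b, y_up, lam)
    lhs = [max(luk(C[i][j], y[j]) for j in range(len(y))) for i in range(len(C))]
    rhs = [luk(lam, bi) for bi in b]
    return (all(abs(u - v) <= tol for u, v in zip(lhs, rhs))
            and all(lo <= yj + tol for lo, yj in zip(y_low, y)))
```

This tests one fixed value of λ only; the parametric question, whether some λ ∈ I works, is exactly what the three cases A, B, C in the next section resolve.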

Remark 2. By a standard definition, the minimum of the empty subset of I is the maximal element in I, that is, min ∅ = 1. Similarly, if there is no i ∈ M with λ > 1 − b_i, then min_{i∈H} (λ + d_ij) = 1, and y_j(λ) = y_j ∧ min_{i∈M\H} (1 − c_ij).

Parametric Solvability Problem of Max-Łuk Linear Equations
The main result of this paper is the description of a recognition algorithm for the parametric solvability problem. The problem (9) and (10) will be discussed according to the value of h(λ). Similarly to Remark 2, we put h(λ) = max { i ∈ M : λ > 1 − b_i }, with h(λ) = 0 if no such i exists; then H(λ) = {1, 2, . . . , h(λ)}, in view of the monotonicity condition (11). We consider three cases: (A) h(λ) = m, (B) 0 < h(λ) < m and (C) h(λ) = 0. The solvability in case A is described by the following theorem, with the notation
λ^m_max = 1 ∧ min_{j∈N} (y_j − min_{i∈M} d_ij).
Theorem 3. Case (A). Assume C ∈ I(m, n) and b ∈ I(m) with the monotonicity condition (11). Then the following statements are equivalent.
(i) The system (9) and (10) is solvable for some λ ∈ I with h(λ) = m.
(ii) y(λ^m_max) ∈ S(C, λ^m_max ⊗_L b, y, y).
Proof. (i) ⇒ (ii): Assume that y ∈ S(C, λ ⊗_L b, y, y) for some λ ∈ I with h(λ) = m. By Lemma 5, y ≤ y(λ), and by the monotonicity of y(κ) in κ, y ≤ y(κ) for every λ ≤ κ ≤ 1. The conditions for the upper bound inequality y(κ) ≤ y will be discussed later. First, we verify conditions (14) and (15). In view of the assumption, we have H(κ) = M for every κ with λ ≤ κ ≤ 1. Similarly, for every j ∈ N, y_j(κ) = y_j ∧ (κ + min_{i∈M} d_ij). It follows that the equalities c_ij ⊗_L y_j(λ) = λ ⊗_L b_i and c_ij ⊗_L y_j(κ) = κ ⊗_L b_i are equivalent. Similarly, the inequalities c_ij ⊗_L y_j(λ) ≤ λ ⊗_L b_i and c_ij ⊗_L y_j(κ) ≤ κ ⊗_L b_i are equivalent. Therefore, the assumption y(λ) ∈ S(C, λ ⊗_L b) is equivalent to y(κ) ∈ S(C, κ ⊗_L b), for every κ ∈ I with λ ≤ κ ≤ 1.
To achieve also the upper bound inequality y(κ) ≤ y, further conditions must be imposed on κ. Namely, for every j ∈ N the condition
y_j(κ) = κ + min_{i∈M} d_ij ≤ y_j (39)
must be added. As a consequence, we get
κ ≤ y_j − min_{i∈M} d_ij for every j ∈ N. (40)
Now, with the notation
λ^m_max = 1 ∧ min_{j∈N} (y_j − min_{i∈M} d_ij), (41)
we have y(κ) ≤ y for every κ ≤ λ^m_max. Therefore, y(λ^m_max) ∈ S(C, λ^m_max ⊗_L b, y, y). The converse implication (ii) ⇒ (i) is trivial.
Theorem 4. Case (B). Assume C ∈ I(m, n), b ∈ I(m) and y ≤ y ∈ I(n), with the monotonicity condition (11), and let 0 < h < m, Λ(h) = { λ ∈ I : 1 − b_h < λ ≤ 1 − b_{h+1} }. Then the following statements are equivalent.
(i) The system (9) and (10) is solvable for some λ ∈ Λ(h).
(ii) y(λ^h_max) ∈ S(C_H, λ^h_max ⊗_L b_H, y ∧ y^h, y).
Proof. For fixed λ ∈ Λ(h), (i) is equivalent (in view of Lemma 5) to the statement
y ≤ y(λ) and y(λ) ∈ S(C, λ ⊗_L b), (46)
which is further equivalent (in view of Lemma 2) to
y(λ) ∈ S(C_H, λ ⊗_L b_H, y ∧ y^h, y). (47)
The proof will be completed by demonstrating that (47) is equivalent to (ii). We assume (47) for fixed λ ∈ Λ(h), and we describe conditions under which (47) also holds for arbitrary κ ∈ Λ(h) with λ ≤ κ.
We shall verify the restrictions y ≤ y(κ) ≤ y ∧ y^h and the activity of the variables y(κ) in every row k ∈ H of the matrix C_H with the vector b_H.
The lower restriction follows by monotonicity: y_j ≤ y_j(λ) ≤ y_j(κ) for every j ∈ N. The upper restriction follows by Lemma 2 directly from y_j(κ) = y_j ∧ y^h_j ∧ (κ + min_{i∈H} d_ij) ≤ y_j ∧ y^h_j.
As a consequence, the activity condition (22) is fulfilled in all rows k ∈ M \ H for every variable y_j(κ) with j ∈ N.
To verify also the second activity condition (21) for at least one variable j ∈ N in every row k ∈ H, we denote by μ^h_j the break-point at which the (min/plus)-linear function
y_j(κ) = y_j ∧ y^h_j ∧ (κ + min_{i∈H} d_ij) (50)
of the variable κ changes its direction. In other words, μ^h_j is the value of κ at which both parts of the function (50) have the same value. That is,
μ^h_j + min_{i∈H} d_ij = y_j ∧ y^h_j, (51)
or, equivalently,
μ^h_j = (y_j ∧ y^h_j) − min_{i∈H} d_ij. (52)
Claim 1. If y_j(λ) is active in a row k ∈ H, then y_j(κ) is active in k for every κ with λ ≤ κ ≤ μ^h_j.
Proof of Claim 1. By assumption, y_j(λ) = λ + d_kj = λ + min_{i∈H} d_ij, in view of (21) and (23). Then λ ≤ μ^h_j, and the activity of y_j(λ) in k is described by the formula
λ + d_kj = λ + min_{i∈H} d_ij, (54)
while the activity of y_j(κ) under the assumption κ ≤ μ^h_j is described by
κ + d_kj = κ + min_{i∈H} d_ij. (55)
As (54) and (55) are equivalent, the assertion of Claim 1 follows.
In view of (44), (45) and (53) we have
λ^h_max = (1 − b_{h+1}) ∧ min_{j∈N} μ^h_j. (56)
Claim 2. If (47) holds for λ, then it also holds for every κ with λ ≤ κ ≤ λ^h_max.
Proof of Claim 2. By assumption, for every k ∈ H there is a j ∈ N such that y(λ) is active in k. Then, by Claim 1, under the assumption λ ≤ κ ≤ min_{j∈N} μ^h_j, for every k ∈ H there is a j ∈ N such that y(κ) is active in k. That is, y(κ) ∈ S(C_H, λ^h_max ⊗_L b_H, y ∧ y^h, y).
In case C we have H = ∅ and M \ H = M. That is, λ ≤ 1 − b_i for all i ∈ M. The solvability in case C is described by the following theorem.
Theorem 5. Case (C). Assume C ∈ I(m, n), b ∈ I(m) and y ≤ y ∈ I(n), with the monotonicity condition (11). Then the following statements are equivalent.
1. The system (9) and (10) is solvable for some λ ∈ I with h(λ) = 0.
2. y_j ≤ min_{i∈M} (1 − c_ij), for every j ∈ N.
Theorem 6. Assume C ∈ I(m, n), b ∈ I(m) and y ≤ y ∈ I(n), with the monotonicity condition (11). The bounded parametric system (9) and (10) is solvable for some λ ∈ I if and only if at least one of the following statements is fulfilled:
1. y(λ^m_max) ∈ S(C, λ^m_max ⊗_L b, y, y);
2. y(λ^h_max) ∈ S(C_H, λ^h_max ⊗_L b_H, y ∧ y^h, y) for some h with 0 < h < m;
3. h(λ) = 0 and y_j ≤ min_{i∈M} (1 − c_ij) for every j ∈ N.
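The case C condition of Theorem 6 is the easiest part to verify: the right-hand side λ ⊗_L b is then the zero vector. A Python sketch (the function name is ours; y_low stands for the lower bound vector):

```python
def case_c_solvable(C, y_low):
    # h(lam) = 0: the right-hand side lam ⊗_L b is the zero vector, so a
    # bounded solution exists iff, in every column j, the lower bound
    # leaves room below min_i (1 - c_ij)
    m, n = len(C), len(C[0])
    return all(y_low[j] <= min(1.0 - C[i][j] for i in range(m))
               for j in range(n))
```

If the condition holds, the lower bound vector itself is a solution of (9) with λ = 0, and by continuity of the bound 1 − b_1 also for every sufficiently small λ.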
Proof. For the convenience of the reader, we recall the previous definitions.
Theorem 7. Suppose that C ∈ I(m, n), b ∈ I(m) and y, y ∈ I(n). The problem of recognizing the solvability of the bounded parametric max-Łuk linear system with bounds y ≤ y ≤ y for some value of the parameter λ ∈ I can be solved in O(mn 2 ) time.
Proof. In view of Theorem 6, the solvability of the bounded max-Łuk linear system (64) for some λ ∈ I can be verified by checking the solvability of (64) for the values λ^1_max, λ^2_max, . . . , λ^m_max and by verifying the condition y_j ≤ min_{i∈M} (1 − c_ij) for every j ∈ N.
Example 2 (A numerical illustration to Q1: the insolvable case). Similarly to Example 1, a transition matrix C and a required state vector b are given. Again, we wish to recognize whether there are 0 < λ ∈ I and y ∈ I(n) with y ≤ y ≤ y such that C ⊗_L y = λ ⊗_L b. In this example, the matrix C is the same; only the vectors b, y and y have different entries (66). Applying the method described in Theorem 6, we get a negative result: the system has no solution for any λ ∈ I. As a consequence, neither b nor any of its multiples λ ⊗_L b can be reached by the orbit of the DES. The details of the computation are shown below. By definition (59), we get three different values of h(λ) and distinguish the following cases: (a) h(λ) = 5, (b) h(λ) = 3 and (c) h(λ) = 0.
Case (a). In view of Lemma 5, C ⊗_L y(0.6) ≠ λ^5_max ⊗_L b implies that the system has no solution when 0.5 < λ ≤ 1.
Case (b). In this case, only one subcase has to be considered. The maximal candidate y(0.4) does not fulfill y ≤ y(0.4) and, at the same time, is not a solution to the system, because C ⊗_L y(0.4) ≠ λ^3_max ⊗_L b. Therefore, the system has no solution when 0.2 < λ ≤ 0.5.
Case (c). In this case we have h(λ) = 0, i.e., H = ∅, M \ H = M and λ ∈ (0, 0.2]. The maximal candidate y(λ) = (0.1, 0.1, 0.1, 0.2, 0.1)^T satisfies C ⊗_L y(λ) = λ ⊗_L b, but the requirement y ≤ y(λ) is not fulfilled. As a consequence, the considered system is not solvable when 0 < λ ≤ 0.2.

Conclusions
In this study, the existence of a bounded solution to a one-sided linear system in max-Łuk algebra has been considered in dependence on a given linear parameter factor on the fixed side of the system. Equivalent solvability conditions have been found and a polynomial-time recognition algorithm has been suggested. The correctness of the algorithm has been rigorously demonstrated. The work of the algorithm has been illustrated by numerical examples.
The results are new: although the solvability of a one-sided linear system in max-Łuk algebra in the non-parametric case can easily be verified, a method for recognizing the solvability of the parametrized system was not previously known.
The presented results can be applied in the study of the max-Łukasiewicz systems with interval coefficients. Łukasiewicz arithmetical conjunction can also be used in various types of optimization problems, for example, in the study of interactive cash-flows. Furthermore, the suggested recognition algorithm plays an important role in the investigation of interval eigenvectors.
An advantage of the presented algorithm is that it not only recognizes the existence or non-existence of a solution; in the solvable case, the solution values are computed as well. A possible generalization of the results to other t-norms, different from the Łukasiewicz t-norm and the minimum (the Gödel t-norm), remains open for future research.
Author Contributions: All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the Czech Science Foundation (GAČR) #18-01246S.