Functions' algebra in nonlinear control: computational aspects and software

The paper describes Mathematica-based software for studying nonlinear control systems. The software relies on an algebraic method called functions' algebra. The advantage of this approach over the well-known linear algebraic and differential geometric methods is that it is applicable to certain non-smooth systems. The drawback is that the computations are more complicated, since the approach manipulates directly the functions related to the system equations and not the differential one-forms/vector fields that simplify (linearize) the computations. We have implemented the basic operations of functions' algebra, i.e., partial order, equivalence, summation, and multiplication, but also finding the simplest representative of an equivalence class. The next group of functions is related to the control system and involves the binary relation ∆, the operators m and M, and the computation of certain sequences of invariant vector functions on the basis of the system equations. Finally, we have developed Mathematica functions allowing us to solve the following control problems in the case of discrete-time systems: checking accessibility, static state feedback linearization, and disturbance decoupling.


INTRODUCTION
The algebraic approach, historically called functions' algebra [13] and actually based on an algebraic lattice structure, is developed by analogy with the algebra of partitions [4]. The advantage of this method over the difference/linear algebraic and differential geometric methods in nonlinear control is that it allows, in theory, handling also certain types of non-smooth functions. We have specified the class of piecewise smooth functions, which includes, for instance, saturation, friction, and dead zone, but not hysteresis and backlash. Although the methodology can, in principle, handle more general functions, computations for such cases become even more complicated without much practical value, since in most 'real-life' examples the functions are piecewise smooth. In particular, the absolute value and signum functions are the most typical non-smooth functions encountered in system descriptions.
However, the possibility of handling non-smooth functions is difficult to realize, since the constructions of the theory are complicated and largely unknown to the control community. Therefore, to support further research within the new framework and to explore/evaluate its advantages and disadvantages, the respective symbolic software would be of great help. This paper describes such software, developed within the Mathematica environment and incorporated into the package NLControl, which has been developed in the Institute of Cybernetics at Tallinn University of Technology. In certain steps the new Mathematica functions that handle computations related to functions' algebra use the existing functions of NLControl. The first steps towards developing such software were reported in [8] and [10]. Compared to these conference papers, here numerous new algorithms are presented, including (1) computation of the summation operation ⊕, (2) computation of the operator M in the general case, and (3) finding the simplest representative of the equivalence class. Moreover, the solution of the disturbance decoupling problem was not discussed in the conference papers. Note that in this paper we focus on the discrete-time case, though analogous constructions with some modifications exist for continuous-time systems, too. We comment on the continuous-time case only briefly at the end of the paper.
Unlike the differential geometric [5,12] and difference/linear algebraic methods [1,3], this approach is not based on the globally linearized system description (i.e., on differential one-forms), or on vector fields defined in the tangent space of directional derivatives, but manipulates directly the functions, including the state transition map. Therefore, in finding the solutions of different modelling, analysis, and synthesis problems, there is, in principle, no need to solve specific partial differential equations (PDEs) or integrate one-forms, though in the case of smooth or analytic functions the computations may be handled with the help of distributions and vector spaces over the field of meromorphic functions (codistributions), respectively. On the other hand, since the computations are not linearized by applying the differential operators, they are much more complicated. One may hardly expect to find the solutions manually, except for very simple cases. Therefore, for the algebra of functions to be truly applicable, one has to formalize the computations with the key elements of this approach and implement the corresponding algorithms.
The paper is organized as follows. In Section 2 the basic notions of functions' algebra are recalled, as well as the solutions of a few nonlinear control problems based on functions' algebra. In Section 3 the main computation algorithms are given and the details of the Mathematica implementation are described. Section 4 is devoted to finding the simplest representative of the equivalence class, and in Section 5 the case of non-smooth functions is discussed. In Section 6 two examples are given, demonstrating the application of the developed software to nonlinear control problems. Finally, Section 7 outlines possible future studies.

Control system and differential one-forms
Consider the discrete-time nonlinear system, described by the state equations

x^+ = f(x, u), (1)

where the state x(t) ∈ X ⊆ R^n and the control input u(t) ∈ U ⊆ R^m. Hereinafter we use for a variable ξ(t) the notation ξ; the forward shift ξ(t + 1) is denoted by ξ^+ and the backward shift ξ(t − 1) by ξ^−.
For smooth functions the following approach of differential one-forms proves to be useful in several cases. For smooth α = [α_1, ..., α_k]^T, β = [β_1, ..., β_ℓ]^T, depending on the variable x, one can define the codistributions Ω_α and Ω_β, respectively, as Ω_α = sp_K{dα_1, ..., dα_k}, Ω_β = sp_K{dβ_1, ..., dβ_ℓ}. The symbols α_i, β_ι denote the components of α and β, respectively. The notation sp_K{dα_i} means that Ω_α contains all linear combinations of the one-forms dα_i, computed over a certain field K, described below. Hereinafter we sometimes omit the indices and write Ω_α = sp_K{dα}, Ω_β = sp_K{dβ}. By introducing the column vector dx := [dx_1, ..., dx_n]^T and the matrices

A = [∂α_i/∂x_j], B = [∂β_ι/∂x_j], (2)

where i = 1, ..., k, j = 1, ..., n, ι = 1, ..., ℓ, we write dα = A dx, dβ = B dx. The field K is associated with the control system (1); its construction is borrowed from [1]. By K we denote the field of meromorphic functions in a finite number of variables from the infinite set {x, u_j(t), z_j(−ℓ), j = 1, ..., m, t ≥ 0, ℓ > 0}. The variables z_1, ..., z_m are chosen from the set {x, u_j} such that one could locally express the variables x, u_j in terms of x^+ and z_j from (1), i.e., find a function (x, u) = ψ(x^+, z). This is possible under the assumption that rank(∂f/∂(x, u)) = n. The choice of z is not unique; however, all choices lead to equivalent fields. The forward-shift operator σ : K → K is defined by σφ(x, u) = φ(f(x, u), u^+), where f is determined by system (1). The inverse of σ is called the backward shift and denoted by σ^−1.

Basics of functions' algebra
In functions' algebra we work with the vectors (of any finite dimension) whose elements are functions defined either on the domain X × U or on X. Denote the corresponding sets of vectors by S_{X×U} and S_X, respectively. The elements of S_{X×U} or S_X are called vector functions.
Below we define two preorders ≤ and ≤_s, where ≤_s is a special case¹ of ≤. Note that a preorder is a binary relation that is reflexive and transitive. If, in addition, the relation is also antisymmetric, it is called a partial order. For simplicity, these preorders are defined on S_X. For S_{X×U}, they are defined in a similar manner. Note that the phrase 'for every x ∈ X' throughout this paper should be understood as 'for every x ∈ X from the intersection of the domains of α and β'.

Definition 1. Given α, β ∈ S_X, one says that (i) α ≤ β, if for every x ∈ X there exists a function γ such that β = γ(α); (ii) if the function γ in (i) is defined uniquely for all x ∈ X, we say that α ≤_s β.

Example 1. This example illustrates the difference between the preorders ≤ and ≤_s. Let α = x and β = x^2. Then, since β = α^2 for every x ∈ R, it is clear that α ≤_s β. But the opposite is not true, because there does not exist a unique function γ such that α = γ(β) for all x ∈ R. In fact, if x < 0, then α = −√β, and if x ≥ 0, then α = √β. However, by definition, β ≤ α, since for every x ∈ R one can find a function γ such that α = γ(β).
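In the smooth case the preorder can be tested generically by a Jacobian-rank condition: α ≤ β holds iff dβ lies in sp_K{dα}, i.e., stacking the Jacobian of β onto that of α does not raise the rank. A minimal Python/sympy sketch of this test (illustrative only, not part of NLControl; the function name is ours):

```python
import sympy as sp

def partial_preorder(alpha, beta, x):
    """Check alpha <= beta generically: every d(beta_i) must lie in
    span{d(alpha_j)}, i.e. stacking the Jacobian of beta onto that of
    alpha must not raise the rank (ranks are generic over symbolic entries)."""
    Ja = sp.Matrix(alpha).jacobian(x)
    Jab = sp.Matrix(list(alpha) + list(beta)).jacobian(x)
    return Ja.rank() == Jab.rank()

x1, x2, x3 = sp.symbols('x1 x2 x3')
# beta = x1**2 is a function of alpha = x1, so alpha <= beta ...
print(partial_preorder([x1], [x1**2], [x1, x2, x3]))   # True
# ... and generically also x1**2 <= x1 (equivalent w.r.t. <=,
# though not w.r.t. the strong preorder <=_s, cf. Example 1)
print(partial_preorder([x1**2], [x1], [x1, x2, x3]))   # True
print(partial_preorder([x1], [x2], [x1, x2, x3]))      # False
```

Note that the generic rank over symbolic entries matches the meromorphic-field setting of Section 2; the pointwise subtleties of Example 1 (the strong preorder ≤_s) are not captured by this test.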
Based on the preorders ≤ and ≤_s, one can define the equivalence relations² ≅ and ≅_s, respectively. In the following we continue only with the preorder ≤, since it is the one we mostly use when studying systems of the form (1). The case of ≤_s is treated similarly.

Definition 2. Vector functions α, β ∈ S_X satisfy the relation ≅ if α ≤ β and β ≤ α.

The relation ≅ is obviously an equivalence relation and thus divides the elements of S_X into equivalence classes. Let S_X\≅ be the set of the equivalence classes. The relation ≤ was defined on the set S_X, but can be viewed³ also as a relation on S_X\≅, where it becomes a partial order. Then the pair (S_X\≅, ≤) becomes a lattice, since 0 := [x_1, ..., x_n] ≤ α ≤ 1 for all α ∈ S_X\≅, where 1 is the equivalence class containing the constant vector functions.
Remark 1. In control theory it is customary to operate with representatives of equivalent objects. In the rest of this paper, if not stated otherwise, the equivalence classes are defined by their representatives, which are vector functions from S_X or S_{X×U}; thus, a vector function should always be understood as an equivalence class. The operations ×, ⊕, m, M and the relation ∆ are defined on S_X\≅, although in terms of the representatives of the equivalence classes. Also, since we work with equivalence classes, the sign '=' should be understood as '≅'.
As (S_X\≅, ≤) is a lattice, we can define the binary operations × and ⊕ as

α × β := inf{α, β}, α ⊕ β := sup{α, β}, (3)

for all α, β ∈ S_X\≅. The infimum and supremum in (3) are taken with respect to the partial order ≤. In the same way, the notion of a maximal/minimal vector function means maximality/minimality with respect to ≤.
Example 4. Consider two vector functions α, β ∈ S_X and let n = 3. By definition, α × β is the maximal vector function (containing the minimal number of functionally independent elements) γ that satisfies γ ≤ α and γ ≤ β. In this example only one vector function, γ = [x_1, x_2, x_3]^T, satisfies γ ≤ α and γ ≤ β, and thus α × β = γ. The vector function α ⊕ β is defined as the minimal vector function γ satisfying α ≤ γ and β ≤ γ. In general, there are many vector functions satisfying α ≤ γ and β ≤ γ; in this example there are two such functions.

The lattice defined above will be connected to the system dynamics (1) through the binary relation ∆. Note that ∆ is defined only on S_X\≅.
Definition 3. Given α, β ∈ S_X\≅, one says that (α, β) satisfy the binary relation ∆, denoted as α∆β, if for all x ∈ X and u ∈ U there exists a function f* such that

β(f(x, u)) = f*(α(x), u). (4)

The binary relation ∆ is mostly used for the definition of the operators m and M: m(α) is the minimal vector function β satisfying α∆β, and M(α) is the maximal vector function γ satisfying γ∆α (Definition 4).
³ When an equivalence class is represented by an element (a vector function) of this equivalence class.
Example 5. Consider the system (5) and the vector function α = [x_1, x_2]^T. First, note that α∆x_1, because x_1^+ can be written in terms of α and u. Also α∆[x_1, x_2 − x_3]^T, because of (5); moreover, [x_1, x_2 − x_3]^T is a minimal vector function β satisfying α∆β, because there exist no other functions whose forward shifts do not depend on x_3, and thus m(α) = [x_1, x_2 − x_3]^T. Next, observe that one can always write α(f(x, u)) in terms of x and u, which means by Definition 3 that x∆α. Therefore, x ≤ M(α) for every α. In this example, the vector function γ is a maximal vector function that satisfies γ∆α and thus, by Definition 4, M(α) = γ.

Applications of functions' algebra
In this section, the solutions of three nonlinear control problems (accessibility, static state feedback linearization, and the disturbance decoupling problem) are formulated in terms of functions' algebra. For that purpose, we first define the notion of an f-invariant vector function.

Definition 5. The vector function δ is said to be invariant with respect to the system dynamics (1), or alternatively f-invariant, if δ∆δ.
Next, one has to find a minimal (containing the maximal number of functionally independent components) vector function δ_1(x) such that its forward shift δ_1(f(x, u)) does not depend on the input u. The vector function δ_1(x) initializes Algorithm 1 below. In general, the components of δ_1(x) are scalar functions with relative degree two or more [5].
System (1) is said to be static state feedback linearizable if, generically, there exist (i) a state coordinate transformation φ : X → X and (ii) a regular static state feedback of the form u = ϑ(x, v) such that, in the new coordinates z = φ(x), the compensated system reads z_i^+ = z_{i+1}, for i = 1, ..., n − 1, and z_n^+ = v.

Theorem 7 ([11]). The single-input system (1) is static state feedback linearizable iff δ_i ≠ 1, for i = 1, ..., n − 1, but δ_n = 1. The state transformation z = φ(x) is constructed from the vector functions δ_i.

The third problem under study is the disturbance decoupling problem under a dynamic measurement feedback (DDDPM). Consider a discrete-time nonlinear control system of the form (6). The DDDPM can be stated as follows: find a dynamic measurement feedback of the form (7) such that the values of the outputs-to-be-controlled y*(t), for t ≥ 0, of the closed-loop system are independent of the disturbances w. Note that we call the compensator, described by (7), regular if it generically defines a (y, z)-dependent one-to-one correspondence between the variables v and u. One says that the disturbance decoupling problem is solvable via static output feedback if u = G(y, v).
First, find a minimal vector function α_0(x) such that its forward shift α_0(f(x, u, w)) does not depend on the unmeasurable disturbance w.

Definition 8. The vector function α is said to be (h, f)-invariant if (α × h)∆α.

Definition 9. The vector function α is said to be controlled invariant if there exists a regular static state feedback u = G(x, v) such that the vector function α is f-invariant for the closed-loop system.

Theorem 10 ([6]). System (6) can be disturbance decoupled by the feedback (7), where z = α(x), iff there exist an (h, f)-invariant vector function ϕ and a controlled invariant vector function ξ such that α_0 ≤ ϕ ≤ ξ ≤ h*.

MATHEMATICA IMPLEMENTATION
The programs described in this section are implemented as a part of the Mathematica-based package NLControl. By employing NLControl, various problems related to the analysis, synthesis, and modelling of nonlinear control systems can be solved. The most important functions of the package are made available on the package website [2] using webMathematica technology, so that anyone can use them via an internet browser without installing special software.
Here we describe the methods to compute the operations and operators defined in the previous section. To simplify the presentation, assume first that the vector functions α and β are smooth. The non-smooth case is discussed later. This paper focuses only on the case of the partial order relation α ≤ β.
For the strong relation α ≤_s β, Definition 1 requires the existence of a unique γ for all x ∈ X. As demonstrated in Example 1, uniqueness essentially depends on the chosen region. Thus, the study of the strong relation basically requires determining whether a certain equation (or system of equations) has a unique solution in a certain region. The other possible task is to determine a 'good' region where a unique γ exists.

Relation of partial order
A condition to check whether α ≤ β is formulated as follows.
Consider vector functions α, β ∈ S_X. The package can be loaded by the command⁴ In[1]:= << NLControl`Master`. In this implementation the column vectors α and β are represented in a single pair of brackets⁵ for the sake of simplicity. It is also necessary to specify the list of the variables x_i, denoted by xi. The preorder of α and β can now be checked by the function PartialPreOrder[α, β, xi]. If we look at the vector functions α and β as representatives of the equivalence classes, the same function can be used to check whether they satisfy the partial order ≤.

Equivalence
The equivalence of two vector functions α and β can be checked easily, by just examining whether α ≤ β and β ≤ α. The function LatticeEquivalent[α, β, xi] allows one to check if α and β belong to the same equivalence class.

Computation of α × β and α ⊕ β
The product α × β can be computed by the simple formula α × β = [α^T, β^T]^T.
Note that typically a simpler representative of the equivalence class exists, since there may be dependent elements in [α^T, β^T]^T (see Section 4 for details).

To find α ⊕ β, the following algorithm can be used:
1. Compute the intersection Ω = sp_K{dα} ∩ sp_K{dβ}.
2. Find the largest integrable subspace Ω̄ of Ω.
3. Integration of Ω̄ gives the components of the vector function α ⊕ β.
The intersection sp_K{dα} ∩ sp_K{dβ} can be computed by the following straightforward algorithm:
1. Compute K := ker[A^T, −B^T], where the matrices A and B are defined by (2) and ker denotes the null space of the matrix (the rows of K form a basis of this null space).
2. Take the first k columns of K and denote the obtained matrix by K̄.
3. The intersection is sp{dα} ∩ sp{dβ} = sp{K̄A dx}, where K̄A is the product of the matrices.
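The intersection step can be sketched in Python/sympy (illustrative only; names are ours, not NLControl's): the left null space of the stacked matrix [[A], [−B]] yields coefficient vectors λ with λA = µB, and the rows λA span the intersection.

```python
import sympy as sp

def codistribution_intersection(alpha, beta, x):
    """Return a matrix C whose rows give one-forms C*dx spanning
    sp{d alpha} ∩ sp{d beta} (ranks are generic over symbolic entries)."""
    A = sp.Matrix(alpha).jacobian(x)      # d alpha = A dx
    B = sp.Matrix(beta).jacobian(x)       # d beta  = B dx
    k = A.rows
    M = A.col_join(-B)                    # stacked matrix [[A], [-B]]
    # left null space of M: vectors [lam, mu] with lam*A = mu*B
    left_null = (M.T).nullspace()
    rows = [sp.Matrix([[w[i] for i in range(k)]]) * A for w in left_null]
    return sp.Matrix.vstack(*rows) if rows else sp.zeros(0, len(x))

x1, x2, x3 = sp.symbols('x1 x2 x3')
C = codistribution_intersection([x1, x2], [x2, x3], [x1, x2, x3])
print(C)   # a single row spanning dx2
```

Here sp{dx1, dx2} ∩ sp{dx2, dx3} is, as expected, spanned by the single one-form dx2.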
The largest integrable subspace of Ω = sp_K{ω_1, ..., ω_k} can be computed as the limit of the following sequence of subspaces, sometimes known as the derived flag of the Pfaffian system (see, for instance, [1]).

Algorithm 4.
1. Set Ω_0 := Ω.
2. Compute Ω_{i+1} := {ω ∈ Ω_i | dω ∧ ω_1 ∧ ⋯ ∧ ω_{k_i} = 0}, where the one-forms ω_1, ..., ω_{k_i} span Ω_i.
3. The sequence stabilizes at the largest integrable subspace Ω̄.

Binary relation ∆
The equality (4) may be alternatively expressed as follows:

[α^T(x), u^T]^T ≤ β(f(x, u)). (11)

Indeed, by applying the definition of the partial order to (11) and replacing γ by f*, we get the equality (4). The condition (11) is more suitable for computer implementation than (4), since it does not depend on the unknown vector function f* and, moreover, it allows one to take advantage of the already existing function PartialPreOrder.
The Mathematica function allowing one to check whether α and β satisfy the binary relation ∆ is called BinaryRelation∆.
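Condition (11) lends itself to a direct symbolic test: substitute the dynamics into β and check, by a generic Jacobian-rank test, whether the result is a function of α and u. A hedged Python/sympy sketch (the dynamics f and all names are ours, for illustration only):

```python
import sympy as sp

def binary_relation_delta(alpha, beta, f, x, u):
    """alpha Δ beta: beta(f(x,u)) must be expressible as f*(alpha(x), u),
    i.e. [alpha, u] <= beta∘f in the preorder (generic Jacobian-rank test)."""
    xu = list(x) + list(u)
    # forward shift: substitute x -> f(x, u) simultaneously
    beta_f = [b.subs(list(zip(x, f)), simultaneous=True) for b in beta]
    J1 = sp.Matrix(list(alpha) + list(u)).jacobian(xu)
    J2 = sp.Matrix(list(alpha) + list(u) + beta_f).jacobian(xu)
    return J1.rank() == J2.rank()

x1, x2, x3, u1 = sp.symbols('x1 x2 x3 u1')
f = [x1 + u1, x1*x2, x3*u1]      # hypothetical dynamics x+ = f(x, u)
alpha = [x1, x2]
# x1+ = x1 + u1 depends only on alpha and u, so alpha Δ [x1] holds
print(binary_relation_delta(alpha, [x1], f, [x1, x2, x3], [u1]))   # True
# x3+ = x3*u1 involves x3, which is not a function of alpha: the relation fails
print(binary_relation_delta(alpha, [x3], f, [x1, x2, x3], [u1]))   # False
```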
Choose, for instance, x_1^− and u_1^− as independent variables⁷. Then the remaining variables may be obtained by solving (12) for x_2, x_3, x_4, u_2 and applying to them the backward shift operator.

Operator M
Unlike for m(α), there is no general formula to compute M(α) in terms of ≤, ×, and ⊕. In some special cases one can give a formula for M(α). For example, consider the case when the components α_j(f(x, u)), j = 1, ..., k, of α(f(x, u)) can be represented in the form

α_j(f(x, u)) = Σ_{i=1}^{d_j} a_{ji}(x) b_{ji}(u), (15)

where a_{j1}(x), ..., a_{jd_j}(x) are arbitrary functions and all the functions b_{j1}(u), ..., b_{jd_j}(u) that are nonconstant are linearly independent. Then M(α) is equivalent to the vector function collecting all the factors a_{ji}(x). Technically the most complicated step is the decomposition of the expression α_j(f(x, u)) in (15) into the sum of products of a_{ji}(x) and b_{ji}(u). For that purpose the products and positive integer powers are expanded out: for instance, (x + u)^2 is rewritten in the form x^2 + 2xu + u^2. If α(f(x, u)) contains rational terms, the denominators are factorized: for instance, 1/(2 + 2u + x + ux) is rewritten as (1/(x + 2)) · (1/(u + 1)). Complex roots are ignored. Additionally, the functions with arguments involving the sum of x and u are written in the form of a product: for instance, e^{x+u} is replaced by e^x · e^u and sin(x + u) by cos x sin u + cos u sin x. Finally, each summand of α_j(f(x, u)) is split into two parts: a_{ji}(x) and b_{ji}(u). However, there exist simple expressions, for instance 1/(x + u), which cannot be decomposed. In such cases the following general algorithm has to be applied.
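The decomposition step can be sketched in Python/sympy: after expansion, each summand is split into its x-dependent and u-dependent factors, and mixed factors such as 1/(x + u) are reported as non-separable. The function name is ours; this is a sketch, not the NLControl implementation:

```python
import sympy as sp

def separate_x_u(expr, x, u):
    """Try to write an expression as Σ a_i(x) * b_i(u); return the list of
    (a_i, b_i) pairs, or raise ValueError if some factor mixes x and u."""
    pairs = []
    for term in sp.Add.make_args(sp.expand(expr)):
        a, b = sp.S.One, sp.S.One
        for factor in sp.Mul.make_args(term):
            if factor.free_symbols & set(u):
                if factor.free_symbols & set(x):
                    raise ValueError("mixed factor, not separable: %s" % factor)
                b *= factor
            else:
                a *= factor
        pairs.append((a, b))
    return pairs

x1, u1 = sp.symbols('x1 u1')
# (x + u)^2 expands into x^2 + 2xu + u^2 and separates term by term
pairs = separate_x_u((x1 + u1)**2, [x1], [u1])
print(set(pairs) == {(x1**2, 1), (2*x1, u1), (1, u1**2)})   # True

# the simple expression 1/(x + u) from the text cannot be decomposed:
try:
    separate_x_u(1/(x1 + u1), [x1], [u1])
except ValueError:
    print("not separable")
```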
If (15) does not hold, one may use the general Algorithm 5 for computing M(α), expressed in terms of one-forms.

The integration of Ω̄ gives the elements of the vector M(α).
To justify Algorithm 5, note that by definition M(α) is a maximal vector function ξ that satisfies α(f(x, u)) = f*(ξ(x), u) for some f*, i.e., ξ∆α. Algorithm 5 computes an integrable subspace Ω̄ such that dα(f) ∈ Ω̄ + sp_K{du}⁸. From the above and Definition 3, integrating Ω̄ yields a vector function µ that satisfies µ∆α. To make µ maximal (in the sense of ≤), one has to integrate a minimal Ω̄.

2. Define the one-forms ω̄_i := Σ_{j=1}^n a_{ij} dx_j for i = 1, ..., γ − k, where a_{ij} ∈ K.
3. The one-forms ω̄_i, i = 1, ..., γ − k, have to satisfy the condition dω̄_i ∧ ω_1 ∧ ⋯ ∧ ω_k ∧ ω̄_1 ∧ ⋯ ∧ ω̄_{γ−k} = 0, because the subspace sp_K{ω_1, ..., ω_k, ω̄_1, ..., ω̄_{γ−k}} has to be integrable. This gives a system of equations in the a_{ij}, which has to be solved.
4. It remains to guarantee that dω̄_i = 0 for i = 1, ..., γ − k, i.e., Σ_{j=1}^n da_{ij} ∧ dx_j = 0 for i = 1, ..., γ − k. This gives a system of differential equations, which has to be solved.

THE SIMPLEST REPRESENTATIVE OF THE CLASS
In order to simplify the computations, one has to replace the components of vector functions by equivalent but simpler functions. The task seems trivial at first glance; however, its formalization has turned out to be one of the most complicated steps in the implementation. First, simplicity cannot be formally defined as 'α is simpler than β if a certain condition is satisfied'; simplicity is tightly related to the problem under study. In this section we speak about simplicity in its intuitive meaning; for instance, we consider [x_1] to be simpler than [x_1^2]. In practice the simplicity of an expression is measured by the Mathematica function LeafCount. We developed two approaches for simplification: the first is based on one-forms and the other on simplification rules. Unfortunately, both approaches have certain classes of functions that cannot be handled.

Approach of one-forms
Let α = [α_1, ..., α_k]^T and suppose A is defined by (2). Transform the matrix A into the row-reduced form Ā, using linear transformations over K. This can be done by the Mathematica built-in function RowReduce.

If fractions appear in the matrix Ā, the respective rows are multiplied by the least common multiple of their denominators. This usually leads to simpler matrix entries. The zero rows are removed, so Ā is a k × n matrix.

Integrate the one-forms ω_i := Σ_{j=1}^n ā_{ij} dx_j, i = 1, ..., k. This procedure requires solving a system of PDEs. Since Mathematica (version 10.4) has a limited ability to solve systems of PDEs, the task is reduced to solving a sequence of single PDEs. This process has been implemented earlier in NLControl as the function IntegrateOneForms, described in [9].
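The row-reduction and fraction-clearing steps can be sketched in Python/sympy (the example vector is ours; the actual NLControl code uses RowReduce and IntegrateOneForms):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
alpha = sp.Matrix([x2**2 * x3, x1])        # hypothetical vector function

A = alpha.jacobian([x1, x2, x3])
Abar = A.rref()[0]                          # row-reduced form

rows = []
for i in range(Abar.rows):
    row = Abar.row(i)
    if all(e == 0 for e in row):            # drop zero rows
        continue
    # clear fractions: multiply the row by the lcm of its denominators
    denoms = [sp.denom(sp.cancel(e)) for e in row]
    rows.append((row * sp.lcm(denoms)).applyfunc(sp.cancel))
cleared = sp.Matrix.vstack(*rows)
print(cleared)   # Matrix([[1, 0, 0], [0, 2*x3, x2]])
```

Row reduction produces the fractional entry x2/(2*x3), and multiplying by the denominator 2*x3 gives the simpler row [0, 2*x3, x2]; the integration of the resulting one-forms is the step delegated to IntegrateOneForms in NLControl.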
However, this approach is not problem-free. The integration often returns a result that is more complex than the original vector.
Example 10. The integration returns a result where C is an arbitrary function. Choosing C as the identity function yields a simplified vector containing the component x_2 √x_3, which is not the desired solution, because x_2 √x_3 is more complex than x_2^2 x_3 and, additionally, the domain of the function has also changed. There are two ideas that may offer a solution in this situation.
First, observe that choosing ᾱ = [C_1, C_2^2] would result in the desired solution. The problem here is that though the solution is easy to see in the case of simple examples, it is hard to generalize into a universal algorithm.
Second, it turns out that reordering the state coordinates also gives the desired solution. The choice x = (x_1, x_3, x_2) causes the second and the third columns of the matrix [α_{ij}] to be swapped and, therefore, the null space of the new matrix changes accordingly. Unfortunately, again there is no method to determine a suitable order of the coordinates a priori.
Moreover, sometimes Mathematica fails to solve the PDEs, even if the solution exists.
Example 11. Let α = [x_1 + x_2 x_3 + x_4, x_1 x_2 + x_3, x_4]^T. This vector can clearly be simplified, since the variable x_4 appears independently as an element of the vector; the simpler equivalent vector is given in (18). However, if we find the matrix [α_{ij}], we have to solve a PDE that Mathematica is not able to solve, and the simplification program fails. Note that changing the order of the coordinates to [x_1, x_3, x_2, x_4] leads to a PDE which Mathematica can solve, yielding the vector function (18). However, there is no general method to determine the right order of the coordinates a priori.
As a temporary solution we have implemented a program which finds all permutations of the coordinates and then tries to solve the respective PDEs in turn until one of them yields the solution. A time limit of 0.5 s has been set for each PDE; if the solution is not found within this time, the program turns to the next PDE.

Replacement rules
This approach uses certain replacement rules, based on theoretical justification. An important advantage of this method is that it is applicable also to non-smooth functions. A few well-known equivalence rules are listed below⁹: (i) const ≅ 1, (ii) α(x) + const ≅ α(x), etc. Generalizing these rules yields the following three simplification methods.
1. Replacing square blocks by identity matrices. Let x = [x_1, ..., x_n], α = [α_1, ..., α_n]. Assume that rank[α_{ij}] = n in (2) (if non-differentiable functions appear in α, then formal derivation is applied). In such a case α ≅ [x_1, ..., x_n]. Note that the square block may appear as a minor of [α_{ij}]. The program searches for such minors and replaces them by identity matrices. The minors can be of order 1 up to order n. For example, in the case of a 2nd-order minor of rank 2, the two components of α involved may be replaced by the two corresponding coordinates. Unfortunately, currently the program does not recognize the minors with zero elements.
2. Reducing operators. The generalization of rules (iii) and (iv) leads to the idea that, in principle, we may consider G(g(x)) ≅ g(x) for any (single-valued) elementary function G. For instance, the functions G involving trigonometric functions, log_a g(x), and the exponent g^a(x), where a is a real number or a function that does not depend on x, may be removed.
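A sketch of this method in Python/sympy: repeatedly strip an outermost single-valued elementary function, using G(g(x)) ≅ g(x). The list of 'peelable' functions and all names are our illustrative assumptions:

```python
import sympy as sp

x1 = sp.symbols('x1')

# invertible single-valued elementary functions that may be removed
PEELABLE = (sp.exp, sp.log, sp.atan, sp.tanh, sp.sinh)

def reduce_operators(expr):
    """Repeatedly remove an outermost invertible elementary function,
    exploiting the equivalence G(g(x)) ≅ g(x)."""
    while isinstance(expr, PEELABLE):
        expr = expr.args[0]
    return expr

print(reduce_operators(sp.atan(sp.exp(x1**2 + x1))))   # x1**2 + x1
```

Note that only globally invertible operators may be peeled; a function like sin is excluded, since removing it would change the equivalence class on regions where it is not injective.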
3. Reducing sums and products. The third simplification algorithm is based on the observations that

[α + γ, γ]^T ≅ [α, γ]^T and [αγ, γ]^T ≅ [α, γ]^T. (20)

If the term γ = const, we obtain from (20) the well-known rule (ii). This program is able to efficiently handle Example 11, in which case the integration failed. The program still has much room for improvement. For instance, the vector [sin(x_1 x_2), x_1 x_2]^T could be reduced by the same idea. However, currently the program does not recognize the equivalence, since x_1 x_2 is a product, not a pure coordinate.
Combining the three methods. The program LatticeSimplify applies all three methods in turn, repeatedly, until the expression no longer changes.
Example 12 (continuation of Example 10). The application of the 2nd method to α yields ᾱ. The application of the 3rd method to ᾱ yields the final result [x_2^2 x_3, x_1]^T.

NON-SMOOTH CASE
Regarding the non-smooth functions, the book [13] gives an algorithm to check the relation α ≤ β if β has non-smooth components. The implementation of this algorithm is straightforward. Unfortunately, there is no theoretical solution for the case when α has non-smooth components; thus, there is no method for checking the equivalence of two vectors. We see two practical approaches which may offer a solution.
The first approach is based on the notion of equivalence. Regarding the properties of the absolute value and signum functions, we can consider the derivatives (d/dx)|x| ≅ sign x and (d/dx)(sign x) ≅ 0 if x ≠ 0. The implementation of this replacement is simple: Mathematica denotes, by default, the derivatives of |x| and sign x by the formal functions Abs'[x] and Sign'[x], respectively. All we have to do is to replace these functions by Sign[x] and by 0, respectively. The approach is used for checking the partial preorder by condition (9), equivalence, the computation of ⊕, the binary relation ∆, m(α), and M(α).
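The replacement can be mirrored in Python/sympy, using unevaluated Derivative objects in place of Mathematica's formal Abs'[x] and Sign'[x] (a sketch, with our own function name):

```python
import sympy as sp

x = sp.symbols('x', real=True)

def smooth_derivatives(expr):
    """Mirror of the Mathematica replacement Abs'[x] -> Sign[x],
    Sign'[x] -> 0: substitute the formal derivatives by their values,
    valid for x != 0."""
    expr = expr.subs(sp.Derivative(sp.Abs(x), x), sp.sign(x))
    expr = expr.subs(sp.Derivative(sp.sign(x), x), sp.S.Zero)
    return expr

e = sp.Derivative(sp.Abs(x), x) + x * sp.Derivative(sp.sign(x), x)
print(smooth_derivatives(e))   # sign(x)
```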
The second approach is based on the fact that, in general, non-smooth functions can be considered as smooth in different regions. Then one can apply the methods developed for the smooth case in these regions; if a property holds in all the regions, it holds also for the given non-smooth function. For example, let α = x and β = |x|. Then β is smooth in (−∞, 0) and [0, ∞). In the region (−∞, 0) we have β = −x, and both β ≤ α and α ≤ β hold. The same is true for the region [0, ∞), and thus α ≅ β.
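The region-splitting idea can be sketched as follows: check the preorder in both directions against each smooth branch of |x| separately, using the generic Jacobian-rank test (a Python/sympy sketch with our own helper):

```python
import sympy as sp

def preorder(alpha, beta, x):
    """alpha <= beta: generic Jacobian-rank test (smooth case)."""
    Ja = sp.Matrix(alpha).jacobian(x)
    Jab = sp.Matrix(list(alpha) + list(beta)).jacobian(x)
    return Ja.rank() == Jab.rank()

x1 = sp.symbols('x1', real=True)
# beta = |x1| is smooth on each of (-inf, 0) and [0, inf); check alpha = x1
# against each smooth branch separately
branches = [-x1, x1]                     # |x1| on x1 < 0 and on x1 >= 0
ok = all(preorder([x1], [b], [x1]) and preorder([b], [x1], [x1])
         for b in branches)
print(ok)   # True: x1 ≅ |x1| holds region by region
```

Since the mutual preorder holds on every region, the equivalence α ≅ β is confirmed, in line with the argument above.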

DISCUSSION AND CONCLUSIONS
This paper documents an ongoing effort to implement a mathematical framework, called functions' algebra, within the Mathematica-based symbolic software NLControl. We hope that the developed software will facilitate further study of various control problems. Moreover, since the functions' algebra approach for nonlinear control systems is developed in full accordance with the algebra of partitions for finite automata [4], it is especially suitable for handling hybrid systems, i.e., systems governed by differential equations whose parameters change due to discrete inputs or event-driven state transitions. The first steps in this direction have been made in [7]. However, recall that in this paper a control system is assumed to be described by difference equations.
The extension of the results to the continuous-time case is not immediate. In general, one cannot find the derivative of a non-smooth function, while forward-shifting a non-smooth function is not a problem; also, the integration of a vector function is not straightforward and requires the assumption of differentiability of vector functions. In the continuous-time case there is no general formula/algorithm to compute m(α) and, as a consequence, also the sequence δ_i, i ≥ 0. Moreover, unlike the discrete-time case, the inequality δ_{k−1} ≤ m(δ_{k−2}) does not yield the inequality M(δ_{k−1}) ≤ M(m(δ_{k−2})), on which the proof of Theorem 7 relies.
Finally, a few problems are left for future study. First, some functions can be improved, most importantly those for finding the simplest representative of the equivalence class and for the computation of the operator M. Second, several algorithms/programs described in Section 3 require at some point going down to the level of one-forms. This concerns, in particular, checking the partial order by condition (9), equivalence, the computation of ⊕, the binary relation ∆, m(α), and M(α). If non-smooth functions are involved, these algorithms/programs require splitting the domain into the regions where the functions are smooth. Currently no software exists for this division. Third, currently the approach based on the strong relation ≤_s is not supported. Here, again, splitting the domain into suitable regions is necessary.