CC BY 4.0 license · Open Access · Published by De Gruyter, March 29, 2023

Generalized Cauchy–Riemann equations in non-identity bases with application to the algebrizability of vector fields

  • Julio Cesar Avila, Martín Eduardo Frías-Armenta and Elifalet López-González
From the journal Forum Mathematicum

Abstract

We complete the work done by James A. Ward in the mid-twentieth century on a system of partial differential equations that defines an algebra 𝔸 for which this system is the set of generalized Cauchy–Riemann equations for the derivative introduced by Sheffers at the end of the nineteenth century with respect to 𝔸 , also known as the Lorch derivative with respect to 𝔸 , and recently simply called 𝔸 -differentiability. We obtain a characterization of the finite-dimensional algebras involved, which are associative and commutative with unity.

MSC 2010: 35A09; 35F35; 15A27

Introduction

The theory of analytic functions in algebras was started by Sheffers [12] at the end of the nineteenth century. Other notable works are [3, 6, 8, 9, 11, 13]. The corresponding differentiability is known as Lorch differentiability and is associated with algebras 𝔸 (throughout this work, an algebra is an ℝ-algebra that is associative and commutative with unit), so we call it 𝔸 -differentiability, see Section 2.2. This is similar to how the complex derivative is associated with the system of complex numbers. We denote by 𝔸 an algebra whose underlying linear space 𝕃 is ℝ n , and by 𝕄 an algebra whose underlying linear space 𝕃 is an n-dimensional subspace of M n ( ℝ ) .

In this work, an n-dimensional vector field and a function from ℝ n to ℝ n , both differentiable in the usual sense, have the same meaning, except that we associate integral curves with vector fields. We assume that all vector fields are defined on open sets. Although the motivation for the study of algebras comes from the study of differential equations, in this paper we do not study such differential equations. We say that a vector field F is algebrizable if there exists an algebra 𝔸 such that F is 𝔸 -differentiable. Next we describe part of our motivation for studying the algebrizability of vector fields:

  1. For a vector field and its corresponding system of autonomous ordinary differential equations (ODEs)

    (0.1) F = ( f 1 , … , f n ) , { x ˙ 1 = f 1 ( x 1 , … , x n ) ; … ; x ˙ n = f n ( x 1 , … , x n ) } ,

    we can consider an 𝔸 -differential equation of one 𝔸 -variable ( 𝔸 -ODE)

    (0.2) d x / d τ = F ( x ) ,

    for which each solution x ( τ ) , which is an 𝔸 -differentiable function whose 𝔸 -derivative satisfies the differential equation (0.2), determines a solution ξ ( t ) = x ( t e ) of the system (0.1). That is, since one has an 𝔸 -calculus for 𝔸 -differentiability (see [3]), one can use classical methods for ODEs in one variable to solve equations in one variable over algebras like (0.2), and by evaluating these solutions in the direction of the identity, t e , one obtains solutions of the considered system of ODEs (0.1). Therefore, some systems of ODEs can be solved through solutions of 𝔸 -ODEs.

  2. For each algebrizable planar vector field F and each constant b ∈ 𝔸 with b ≠ t e for all t ∈ ℝ , where e is the identity of 𝔸 , the vector field G = b F , obtained as the product of b and F with respect to 𝔸 , is a non-trivial infinitesimal symmetry of F, so the determinant of the matrix with columns F and G is an inverse integrating factor of F. See [5] for infinitesimal symmetries and integrating factors. Therefore, an integrating factor can always be found for algebrizable planar vector fields.

  3. Every algebrizable vector field F is geodesible and the corresponding Riemannian metric tensor g can be found explicitly. Also, if the vector field is of dimension n, for each regular point of F there exist n - 1 first integrals whose level sets intersect transversally; their intersection is a one-dimensional curve which can be parameterized by arc length with respect to g. Thus the integral curves of these vector fields can be found. Therefore, ODEs associated with algebrizable vector fields can be solved, see [2] and [4].

  4. For partial differential equations (PDEs) of mathematical physics, families of 𝔸 -differentiable functions F have been found for which there exist linear functions φ such that the families of functions F ∘ φ define complete solutions of the PDEs, as is the case of harmonic functions, which are related to the conjugate functions of complex functions, see [7]. Other works related to solutions of PDEs are [6, 9, 10]. Therefore, 𝔸 -differentiable functions give solutions for some PDEs.

  5. For algebrizable vector fields, a visualization method for their phase portraits is developed in [1].
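Item 1 above can be illustrated numerically. The following sketch (Python with NumPy; our own illustration, not from the paper) takes 𝔸 = ℂ and F ( z ) = z ² , so that the planar system ẋ = x² − y², ẏ = 2xy is the real form of the 𝔸 -ODE d z / d τ = z² ; the classical one-variable solution z ( τ ) = z₀ / ( 1 − z₀ τ ) , evaluated in the direction of the identity e = 1, is compared against a direct numerical integration of the system:

```python
import numpy as np

# A = C, F(z) = z^2: the planar system x' = x^2 - y^2, y' = 2xy.
def F(v):
    x, y = v
    return np.array([x * x - y * y, 2 * x * y])

def rk4(f, v0, t, steps=4000):
    # Fourth-order Runge-Kutta integration of v' = f(v) up to time t.
    v, h = np.array(v0, float), t / steps
    for _ in range(steps):
        k1 = f(v); k2 = f(v + h / 2 * k1)
        k3 = f(v + h / 2 * k2); k4 = f(v + h * k3)
        v = v + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return v

z0, t = 0.3 + 0.2j, 1.0
z = z0 / (1 - z0 * t)                # A-ODE solution evaluated at tau = t*e
v = rk4(F, [z0.real, z0.imag], t)    # direct numerical integration
assert np.allclose(v, [z.real, z.imag], atol=1e-6)
```

The agreement of the two computations is the content of ξ ( t ) = x ( t e ) in item 1.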

For each n-dimensional algebra 𝔸 , the 𝔸 -differentiability is characterized by a system of n ( n - 1 ) first-order PDEs, similarly to the complex case, so these systems are called generalized Cauchy–Riemann equations associated with 𝔸 (or associated with the 𝔸 -differentiability). In the literature on the subject these systems were assumed to be linearly independent, but no justification for this statement was given; a proof is presented in Section 5. In [13] an inverse problem arises: given the linearly independent system of PDEs

(0.3) { ∑ j = 1 n ∑ i = 1 n d k i j f i x j = 0 : 1 ≤ k ≤ n ( n - 1 ) } ,

where the d k i j are real constants, f 1 , … , f n are functions of the variables x 1 , … , x n , and

f i x j = ∂ f i / ∂ x j ,

the question is about the existence of an algebra 𝔸 for which this set is a system of generalized Cauchy–Riemann equations. In [13], Ward considers matrix algebras 𝕄 that are images 𝕄 = R ( 𝔸 ) under the first fundamental representation R of algebras 𝔸 with unit e = e p in the canonical basis { e 1 , e 2 , … , e n } of ℝ n , and solves the inverse problem for sets (0.3) which are systems of PDEs for these algebras 𝔸 . Thus, the general inverse problem was partially solved. In this paper the work is completed; given a set of PDEs of type (0.3), we give necessary and sufficient conditions for the existence of an algebra 𝔸 with unit e = ∑ p ∈ P α p e p , where P ⊆ { 1 , … , n } and α p ∈ ℝ , such that the given set is a system of Cauchy–Riemann equations for 𝔸 (Theorem 4).

For the proof of Theorem 4 it was necessary to prove a generalization of Ward’s Theorem 1 [13], which gives sufficient conditions for a set of matrices in M n ( ℝ ) to be the image of the canonical basis of ℝ n under the first fundamental representation of an algebra 𝔸 with unit e = e p in the canonical basis of ℝ n . In Section 1 we discuss a condition on how to solve for the partial derivatives { f i x j : 1 ≤ i , j ≤ n } in terms of the partial derivatives { f i x p : 1 ≤ i ≤ n } with respect to a single variable x p . The generalization presented in this article, given in Theorem 1, characterizes all the matrix algebras that are the image of the first fundamental representation of an algebra 𝔸 , and hence their units are not necessarily canonical vectors e i . In this case the solving for the partial derivatives { f i x j : 1 ≤ i , j ≤ n } of the components is achieved in terms of the partial derivatives { f i x p : 1 ≤ i ≤ n , p ∈ P } of the components with respect to the variables { x p : p ∈ P } associated with the canonical basis vectors { e p : p ∈ P } that define the unit

e = ∑ p ∈ P α p e p

of 𝔸 . Therefore, this characterizes the whole family of algebras.

If all partial derivatives f j x i can be expressed in terms of the partial derivatives f i x p with respect to a single variable x p , then through elementary operations on the system of generalized Cauchy–Riemann equations associated with an algebra 𝔸 it is possible to arrive at simpler systems of generalized Cauchy–Riemann equations associated with an algebra 𝔸 s , in such a way that the families of 𝔸 -differentiable and 𝔸 s -differentiable functions coincide. In this way two families of 2D algebras 𝔸 can be constructed. Ward’s work does not consider the set

(0.4) { f 1 x 2 = 0 , f 2 x 1 = 0 } ,

which is a system of generalized Cauchy–Riemann equations for the algebra 𝔸 defined by ℝ 2 endowed with the product given on the elements of the canonical basis by e 1 e 1 = e 1 , e 1 e 2 = 0 , e 2 e 2 = e 2 ; 𝔸 has unit e = e 1 + e 2 . All other cases of 2D algebras 𝔸 with unit e = α 1 e 1 + α 2 e 2 are already considered in Ward’s work, or the corresponding Cauchy–Riemann equations are equivalent to a system already considered there, see Section 6 and [4]. Example 4 illustrates this for the case of 3D algebras. If we add this algebra that is missing in Ward’s work, we obtain three families of two-dimensional algebras such that each algebrizable vector field is 𝔸 -differentiable for an algebra 𝔸 in one of these families. This has been useful in the following two contexts: in the study of vector fields which are differentiable in the sense of Lorch, see [4], and in the construction of complete solutions of families of PDEs of the type

A u x 1 x 1 + B u x 1 x 2 + C u x 2 x 2 = 0 , u x i x j = ∂ 2 u / ∂ x i ∂ x j ,

which generalizes the classical result showing that the components of complex analytic functions define a complete solution of the 2D Laplace equation, see [7]. There are other papers that have worked on the solution of PDEs of mathematical physics through algebras, see [9], [10], and references therein.

In Theorem 2 we present three equivalences of 𝔸 -differentiability: item (2) generalizes the classical Cauchy–Riemann equations F x 2 = i F x 1 ; item (3) was presented by Sheffers in [12, Satz 3] in terms of components; item (4) is a generalization of [13, equation (18), p. 460]. For the algebras characterized in Theorem 1, the generalized Cauchy–Riemann equations given in Theorem 2 characterize the algebrizability of vector fields. That is, a vector field F is algebrizable if and only if there exists an algebra 𝔸 , as given in Theorem 1, whose associated Cauchy–Riemann equations, given in Theorem 2, are satisfied by F.

All the results in this paper are obtained over the real field ℝ ; however, they can be generalized to any field 𝔽 , as is done in Ward’s paper [13].

1 Ward’s paper

Definition 1.

We will say that system (0.3) satisfies the zero trace condition if ∑ i = 1 n d k i i = 0 for k = 1 , … , n ( n - 1 ) .

In Ward’s work [13], systems of n ( n - 1 ) first-order linear PDEs of the form (0.3) are considered. Ward’s approach concerns the existence of an algebra 𝔸 such that the set of equations (0.3) is a system of generalized Cauchy–Riemann equations for the 𝔸 -derivative. One of the conditions required by Ward is the existence of a variable x p ∈ { x 1 , x 2 , … , x n } such that all the partial derivatives f i x j , for 1 ≤ i , j ≤ n , can be solved as linear combinations of the partial derivatives in the set { f i x p : i = 1 , 2 , … , n } . From system (0.3), for each x j there is a matrix M j ∈ M n ( n - 1 ) , n ( ℝ ) such that system (0.3) can be written as

(1.1) M 1 F x 1 + ⋯ + M n F x n = 0 .

Ward’s condition above implies the existence of matrices A i ∈ M n ( ℝ ) such that

(1.2) ( f 1 x 1 , ⋯ , f 1 x n ; ⋮ ; f n x 1 , ⋯ , f n x n ) = f 1 x p A 1 + ⋯ + f n x p A n .

A second condition is the commutativity of the set { A 1 , … , A n } , and a third condition is that system (0.3) satisfies the zero trace condition. Under these three conditions Ward [13] proved the existence of the algebra 𝔸 with the required properties.

The condition that all the partial derivatives f i x j of the components f i can be solved in terms of the partial derivatives f i x p of the components with respect to a single variable x p reduces to verifying the invertibility of n matrices of size n ( n - 1 ) × n ( n - 1 ) , as we see below.

Consider the matrix M ∈ M n ( n - 1 ) , n 2 ( ℝ ) given by

(1.3) M = ( M 1 M 2 ⋯ M n ) .

Denote by π i : M n ( n - 1 ) , n 2 ( ℝ ) → M n ( n - 1 ) , n ( n - 1 ) ( ℝ ) the projection which removes the i-th submatrix M i from M:

π i ( M ) = ( M 1 M 2 ⋯ M i - 1 M i + 1 ⋯ M n ) .

The following proposition gives conditions under which there exists p such that equality (1.2) is satisfied.

Proposition 1.1.

If for some p the matrix π p ( M ) , where M is given in (1.3), is invertible, then all partial derivatives in { f i x j : 1 ≤ i , j ≤ n } can be written in terms of the partial derivatives in { f i x p : 1 ≤ i ≤ n } .

Proof.

Starting from a system as in (1.1), if some matrix π p ( M ) is invertible, then multiplying the system by π p ( M ) - 1 we get a new system in which the partial derivatives are given by

(1.4) ( F x 1 ; ⋮ ; F x p - 1 ; F x p + 1 ; ⋮ ; F x n ) = - π p ( M ) - 1 M p F x p , F x i = ( f 1 x i , f 2 x i , … , f n - 1 x i , f n x i ) ⊤ .

Thus, the proof is finished. ∎
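Proposition 1.1 can be checked on the classical Cauchy–Riemann system, a small instance of (1.1) with n = 2 (a Python/NumPy sketch of our own; the matrices M 1 , M 2 below encode ∂f1/∂x1 − ∂f2/∂x2 = 0 and ∂f2/∂x1 + ∂f1/∂x2 = 0):

```python
import numpy as np

# M1 F_{x1} + M2 F_{x2} = 0 for the classical Cauchy-Riemann equations.
M1 = np.array([[1., 0.], [0., 1.]])
M2 = np.array([[0., -1.], [1., 0.]])

# Removing M1 leaves pi_1(M) = M2, which is invertible.
pi1 = M2
assert np.linalg.det(pi1) != 0

# Formula (1.4): F_{x2} = -pi_1(M)^{-1} M1 F_{x1}.
# Test on f(z) = z^2, i.e. F = (x^2 - y^2, 2xy), so F_{x1} = (2x, 2y).
x, y = 0.5, 1.5
Fx1 = np.array([2 * x, 2 * y])
Fx2 = -np.linalg.inv(pi1) @ M1 @ Fx1
assert np.allclose(Fx2, [-2 * y, 2 * x])   # the true partials df1/dy, df2/dy
```

The recovered F x 2 matches the actual y-partials of the components of z² , as the proposition predicts.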

2 Algebras and 𝔸 -differentiability

2.1 Algebras and matrix algebras

Definition 2.

We call an ℝ-linear space 𝕃 an algebra if it is endowed with a bilinear product 𝕃 × 𝕃 → 𝕃 , denoted by ( x , y ) ↦ x y , which is associative and commutative, x ( y z ) = ( x y ) z and x y = y x for all x , y , z ∈ 𝕃 ; furthermore, there exists a unit e ∈ 𝕃 satisfying e x = x for all x ∈ 𝕃 .

An algebra 𝕃 will be denoted by 𝔸 if 𝕃 = ℝ n , and by 𝕄 if 𝕃 is an n-dimensional matrix algebra in the space of matrices M ( n , ℝ ) , where the algebra product is the matrix product.

Definition 3.

If 𝔸 is an algebra, the 𝔸 -product between the elements of the canonical basis { e 1 , e 2 , … , e n } of ℝ n is given by

e i e j = k = 1 n c i j k e k ,

where c i j k ∈ ℝ for i , j , k ∈ { 1 , 2 , … , n } are called the structure constants of 𝔸 . The first fundamental representation of 𝔸 is the injective linear homomorphism R : 𝔸 → M ( n , ℝ ) defined by R : e i ↦ R i , where R i is the matrix with entries [ R i ] j k = c i k j , for i = 1 , 2 , … , n .
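Definition 3 can be made concrete for 𝔸 = ℂ (a hedged sketch of our own, in Python with NumPy; the structure-constant tensor c and its index convention follow the definition above, read 0-based):

```python
import numpy as np

# Structure constants of C: e_i e_j = sum_k c[i][j][k] e_k.
c = np.zeros((2, 2, 2))
c[0, 0, 0] = 1.0                 # e1 e1 = e1
c[0, 1, 1] = c[1, 0, 1] = 1.0    # e1 e2 = e2 e1 = e2
c[1, 1, 0] = -1.0                # e2 e2 = -e1

# (R_i)_{jk} = c_{ikj}, i.e. R[i] is the transpose of the slice c[i].
R = [c[i].T for i in range(2)]

assert np.array_equal(R[0], np.eye(2))               # unit e = e1 maps to I
assert np.array_equal(R[1], [[0., -1.], [1., 0.]])   # R(e2): rotation by pi/2
assert np.array_equal(R[0] @ R[1], R[1] @ R[0])      # commutative image
```

The matrices R 1 = I and R 2 (the rotation by π/2) are exactly the familiar matrix representation of 1 and i.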

2.2 𝔸 -differentiability and algebrizability of vector fields

The 𝔸 -differentiability of vector fields is defined as differentiability in the sense of Lorch with respect to 𝔸 , see [8].

Definition 4.

Let 𝔸 be an algebra, and let F be a vector field defined and differentiable in the usual sense on an open set Ω ⊆ ℝ n . We say F is 𝔸 -differentiable on Ω if there exists a vector field F ′ defined on Ω such that

(2.1) d F p ( v ) = F ′ ( p ) v ,

where F ′ ( p ) v denotes the 𝔸 -product of F ′ ( p ) and v, for every vector v in ℝ n and p ∈ Ω .

For the 𝔸 -differentiability, most of the known results of calculus in ℝ or ℂ transfer to the 𝔸 -calculus, see [3]; one must only be careful with singular elements, that is, non-invertible elements with respect to the 𝔸 -product.
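The role of singular elements can be seen in the simplest example (our own sketch, Python with NumPy): in the direct-sum algebra ℝ ⊕ ℝ with componentwise product and unit e = ( 1 , 1 ) , an element a = ( a 1 , a 2 ) is represented by R ( a ) = diag ( a 1 , a 2 ) , so a is invertible for the 𝔸 -product exactly when det R ( a ) = a 1 a 2 ≠ 0 :

```python
import numpy as np

# In R (+) R the first fundamental representation is R(a) = diag(a1, a2).
def R(a):
    return np.diag(a)

assert np.linalg.det(R([2.0, 3.0])) != 0             # invertible element
assert np.linalg.det(R([2.0, 0.0])) == 0             # singular element
# The inverse of (2, 3) is (1/2, 1/3): their A-product is the unit (1, 1).
assert np.allclose(R([2.0, 3.0]) @ [0.5, 1 / 3], [1.0, 1.0])
```

Division by such singular elements is what fails to transfer from the calculus in ℝ or ℂ.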

Definition 5.

We say two systems of linear partial differential equations (PDEs) with constant coefficients are equivalent if elementary row operations carry one of them to the other.

The 𝔸 -differentiability has associated sets of PDEs, see Theorem 2.

Definition 6.

We call generalized Cauchy–Riemann equations associated with 𝔸 any system of PDEs equivalent to the equations obtained from e j F x i = e i F x j , with i , j ∈ { 1 , 2 , … , n } .

3 Characterization of algebras

The following theorem, proved in [13] for P with | P | = 1 and α p = 1 , characterizes the associative commutative algebras 𝔸 with unit e p in the canonical basis { e 1 , e 2 , … , e n } of ℝ n . It completes the characterization of associative commutative algebras 𝕄 in M n ( ℝ ) that are the image of a first fundamental representation of n-dimensional algebras 𝔸 . Since algebras are isomorphic to their first fundamental representations, this gives a complete characterization of the algebras. This result is also used to give conditions on systems of PDEs so that they are generalized Cauchy–Riemann equations.

Theorem 1.

The set spanned by { A i : i = 1 , … , n } is the image of the first fundamental representation of an algebra A with R ( e i ) = A i and e = ∑ p ∈ P α p e p , where

(3.1) A i A j = ∑ t = 1 n a i j t A t , I = ∑ p ∈ P α p A p ,

if and only if

  1. there exists a commutative set { A 1 , A 2 , … , A n } ⊆ M n ( ℝ ) , where A i = ( a i s r ) , that is,

    (3.2) A i A j = A j A i , i , j = 1 , … , n ,

  2. there exists an index set P ⊆ { 1 , … , n } with { α p } p ∈ P ⊆ ℝ such that

    (3.3) ∑ p ∈ P α p a i p r = δ i r , i , r = 1 , … , n .

Proof.

The proof in the forward direction is known, see [6, p. 642, equation 4].

Conversely, let B i j = A i A j and let b p u be the entry of the matrix B i j with row index u and column index p. Then by (3.3),

∑ p ∈ P α p b p u = ∑ p ∈ P α p ∑ t = 1 n a i t u a j p t = ∑ t = 1 n a i t u ∑ p ∈ P α p a j p t = ∑ t = 1 n a i t u δ j t = a i j u .

Using that A i A j = A j A i and performing the same calculations as above for A j A i , we have

(3.4) a i j u = a j i u for  i , j , u = 1 , , n .

Furthermore, another expression for the entries of A i A j = A j A i is

(3.5) ∑ t = 1 n a i t u a j v t = ∑ t = 1 n a j t u a i v t .

Then we have (following Ward’s proof [13])

( A i A j ) r s = ∑ t = 1 n a i t r a j s t = ∑ t = 1 n a j t r a i s t = ∑ t = 1 n a j t r a s i t = ∑ t = 1 n a s t r a j i t = ∑ t = 1 n a t s r a i j t = ∑ t = 1 n a i j t a t s r

and then A i A j = ∑ t = 1 n a i j t A t . The first equality follows from the definition of the matrix product, the second and fourth from (3.5), the third and fifth from (3.4), and the sixth from commutativity of multiplication in ℝ . From (3.3) we see that the A i are linearly independent over ℝ . Now we shall prove that ∑ p ∈ P α p A p = I . If

A p = ( a p 11 , a p 21 , ⋯ , a p n 1 ; a p 12 , a p 22 , ⋯ , a p n 2 ; ⋮ ; a p 1 n , a p 2 n , ⋯ , a p n n ) ,

then, analyzing each entry of the matrix ∑ p ∈ P α p A p (with row index r and column index s),

( ∑ p ∈ P α p A p ) r s = ∑ p ∈ P α p a p s r = ∑ p ∈ P α p a s p r = δ s r ,

and then ∑ p ∈ P α p A p = I , where we used (3.4) and (3.3). ∎

Next we give an example outside the scope of Theorem 1, i.e., an algebra that is not the image of a first fundamental representation.

Example 1.

Consider the matrices β = { A 1 , A 2 , A 3 } given by

A 1 = ( 1/2 , 0 , 0 ; 0 , 1/3 , 0 ; 0 , 0 , 1/3 ) , A 2 = ( 0 , 0 , 0 ; 0 , - 1/3 , 0 ; 0 , 0 , - 1/3 ) , A 3 = ( 0 , 0 , 0 ; 0 , 0 , 0 ; 0 , 1 , 0 ) .

It can be verified that the matrices { A 1 , A 2 , A 3 } commute, and their matrix products satisfy the following relations:

(3.6)

| ⋅ | A 1 | A 2 | A 3 |
| A 1 | 1/2 A 1 + 1/6 A 2 | 1/3 A 2 | 1/3 A 3 |
| A 2 | 1/3 A 2 | - 1/3 A 2 | - 1/3 A 3 |
| A 3 | 1/3 A 3 | - 1/3 A 3 | 0 |

Then they define a 3D commutative matrix algebra 𝕄 , which in this case is given by

𝕄 = { ( x , 0 , 0 ; 0 , y , 0 ; 0 , z , y ) : x , y , z ∈ ℝ } .

We take P = { 1 , 2 } , α 1 = 2 , and α 2 = - 1 ; since 2 a 112 - a 122 = 2 ⋅ 0 - 1/3 = - 1/3 ≠ 0 , condition (3.3) of Theorem 1 is not satisfied. Hence 𝕄 is not the image of a first fundamental representation.

We can find the first fundamental representation of 𝕄 with respect to this basis, which gives a matrix algebra 𝕄 R ; it need not coincide with 𝕄 , but it is an algebra of simultaneously diagonalizable matrices which is conjugate to 𝕄 , i.e., 𝕄 = B 𝕄 R B - 1 , where B is an invertible matrix.
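The failure of condition (3.3) in Example 1 can be reproduced numerically (Python/NumPy sketch of our own; the index reading a i s r = [ A i ] r s is inferred from the displayed matrix in Example 2 below): the matrices commute and satisfy 2 A 1 - A 2 = I as a matrix identity, yet the entrywise condition (3.3) fails.

```python
import numpy as np

A = [np.array([[1/2, 0, 0], [0, 1/3, 0], [0, 0, 1/3]]),
     np.array([[0, 0, 0], [0, -1/3, 0], [0, 0, -1/3]]),
     np.array([[0, 0, 0], [0, 0, 0], [0, 1, 0]])]

# (3.2) holds: the three matrices commute pairwise.
assert all(np.allclose(X @ Y, Y @ X) for X in A for Y in A)
# The matrix identity I = 2*A1 - A2 holds ...
assert np.allclose(2 * A[0] - A[1], np.eye(3))
# ... but the entrywise condition (3.3), sum_p alpha_p a_{ipr} = delta_{ir}
# with alpha = (2, -1), does not: the (r, i) = (2, 1) entry is -1/3.
cond33 = np.array([[2 * A[i][r, 0] - A[i][r, 1] for i in range(3)]
                   for r in range(3)])
assert not np.allclose(cond33, np.eye(3))
```

This distinguishes (3.3), which constrains individual entries, from the weaker matrix identity ∑ p ∈ P α p A p = I .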

Next, two examples are given where the algebra is the image of a first fundamental representation.

Example 2.

The following matrices satisfy (3.2) and (3.3) of Theorem 1

A 1 = ( 1/2 , 0 , 0 ; 1/6 , 1/3 , 0 ; 0 , 0 , 1/3 ) , A 2 = ( 0 , 0 , 0 ; 1/3 , - 1/3 , 0 ; 0 , 0 , - 1/3 ) , A 3 = ( 0 , 0 , 0 ; 0 , 0 , 0 ; 1/3 , - 1/3 , 0 ) .

In fact, they have the same matrix products as in (3.6). For condition (3.3) we take P = { 1 , 2 } with α 1 = 2 and α 2 = - 1 . Then

2 A 1 - A 2 = ( 2 a 111 - a 121 , 2 a 211 - a 221 , 2 a 311 - a 321 ; 2 a 112 - a 122 , 2 a 212 - a 222 , 2 a 312 - a 322 ; 2 a 113 - a 123 , 2 a 213 - a 223 , 2 a 313 - a 323 ) = ( 1 , 0 , 0 ; 0 , 1 , 0 ; 0 , 0 , 1 ) .
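Both hypotheses of Theorem 1 for Example 2 can be verified numerically (a Python/NumPy sketch of our own, with the reading a i s r = [ A i ] r s ):

```python
import numpy as np

A1 = np.array([[1/2, 0, 0], [1/6, 1/3, 0], [0, 0, 1/3]])
A2 = np.array([[0, 0, 0], [1/3, -1/3, 0], [0, 0, -1/3]])
A3 = np.array([[0, 0, 0], [0, 0, 0], [1/3, -1/3, 0]])
A = [A1, A2, A3]

# (3.2): pairwise commutativity.
assert all(np.allclose(X @ Y, Y @ X) for X in A for Y in A)
# The products reproduce table (3.6).
assert np.allclose(A1 @ A1, A1 / 2 + A2 / 6)
assert np.allclose(A1 @ A2, A2 / 3)
assert np.allclose(A2 @ A2, -A2 / 3)
# (3.3) entrywise, with P = {1, 2}, alpha = (2, -1).
cond33 = np.array([[2 * A[i][r, 0] - A[i][r, 1] for i in range(3)]
                   for r in range(3)])
assert np.allclose(cond33, np.eye(3))
```

Unlike Example 1, here both (3.2) and (3.3) hold, so by Theorem 1 the span of { A 1 , A 2 , A 3 } is the image of a first fundamental representation.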

Example 3.

Consider the matrices β = { A 1 , A 2 , A 3 } given by

A 1 = ( 1 , 0 , 0 ; 0 , 0 , 0 ; 0 , 0 , 0 ) , A 2 = ( 0 , 0 , 0 ; 0 , 1 , 0 ; 0 , 0 , 1 ) , A 3 = ( 0 , 0 , 0 ; 0 , 0 , 0 ; 0 , 1 , 0 ) .

We take P = { 1 , 2 } , α 1 = 1 , and α 2 = 1 . Then

( a 111 + a 121 , a 211 + a 221 , a 311 + a 321 ; a 112 + a 122 , a 212 + a 222 , a 312 + a 322 ; a 113 + a 123 , a 213 + a 223 , a 313 + a 323 ) = ( 1 , 0 , 0 ; 0 , 1 , 0 ; 0 , 0 , 1 ) .

Therefore, the conditions of Theorem 1 are satisfied.

It can be verified that the matrices { A 1 , A 2 , A 3 } commute, that I = A 1 + A 2 with I the identity matrix, and that their matrix products satisfy the following relations:

| ⋅ | A 1 | A 2 | A 3 |
| A 1 | A 1 | 0 | 0 |
| A 2 | 0 | A 2 | A 3 |
| A 3 | 0 | A 3 | 0 |

Thus, R ( A i ) = A i for i = 1 , 2 , 3 , i.e., this matrix algebra coincides with its own first fundamental representation.

4 Characterization of algebrizable vector fields

In the following lemma we regard the elements of ℝ n as column vectors.

Lemma 4.1.

Let A be an algebra and R : A → M n ( ℝ ) its first fundamental representation. Then R ( a ) b = a b , where R ( a ) b denotes the product of the matrix R ( a ) with the vector b, and a b denotes the product in A .

Proof.

Firstly, we see that R ( e i ) e j = e i e j :

R ( e i ) e j = ( c i 11 , c i 21 , ⋯ , c i n 1 ; c i 12 , c i 22 , ⋯ , c i n 2 ; ⋮ ; c i 1 n , c i 2 n , ⋯ , c i n n ) e j = ( c i j 1 ; c i j 2 ; ⋮ ; c i j n ) = ∑ k = 1 n c i j k e k .

Then

R ( e i ) b = R ( e i ) ∑ j = 1 n b j e j = ∑ j = 1 n b j R ( e i ) e j = ∑ j = 1 n b j e i e j = e i ∑ j = 1 n b j e j = e i b .

Next, using the previous equality we obtain

R ( a ) b = R ( ∑ i = 1 n a i e i ) b = ∑ i = 1 n a i R ( e i ) b = ∑ i = 1 n a i e i b = a b .

This proves the lemma. ∎

By using Lemma 4.1, the Cauchy–Riemann equations e i F x k = e k F x i can be written as

R ( e i ) F x k = R ( e k ) F x i , i , k ∈ { 1 , 2 , … , n } , i ≠ k .

If the unit e of 𝔸 is given by e = ∑ p ∈ P α p e p , then the partial derivatives f i x k of the components f i of algebrizable vector fields F can be expressed as linear combinations of { f i x p : p ∈ P } ; this is included in the fourth characterization of 𝔸 -differentiability given in the following theorem. Note that (3) is the same as [12, Satz 3].

Theorem 2.

Let F = ( f 1 , f 2 , … , f n ) be a vector field differentiable in the usual sense, and let A be an algebra with first fundamental representation R given by R ( e i ) = R i and unit e = ∑ p ∈ P α p e p , where P ⊆ { 1 , … , n } is an index set. Then the following are equivalent:

  1. F is 𝔸 -differentiable.

  2. F satisfies e j F x i = e i F x j for all i , j { 1 , 2 , , n } with respect to 𝔸 .

  3. The partial derivatives F x k of F satisfy

    (4.1) F x k = R k ∑ p ∈ P α p F x p .

  4. The Jacobian of F satisfies

    (4.2) J F = ( ∑ p ∈ P α p f 1 x p ) R 1 + ( ∑ p ∈ P α p f 2 x p ) R 2 + ⋯ + ( ∑ p ∈ P α p f n x p ) R n .

Proof.

(1) ⇒ (2). Since F is 𝔸 -differentiable, there exists a vector field F ′ such that d F x ( v ) = F ′ ( x ) v for every vector v. This implies that

e j F x i = e j d F ( e i ) = e j F ′ e i = e i F ′ e j = e i d F ( e j ) = e i F x j ,

which are the generalized Cauchy–Riemann equations associated to 𝔸 .

(2) ⇒ (3). If

R i F x k = R k F x i for  i , k = 1 , … , n ,

then

α p R p F x k = α p R k F x p for  p ∈ P .

Thus, summing over p ∈ P ,

∑ p ∈ P α p R p F x k = ∑ p ∈ P α p R k F x p ,
F x k = R k ∑ p ∈ P α p F x p ,

where ∑ p ∈ P α p R p = I is the identity matrix, by the expression of the identity e.

(3) ⇒ (4). Let U = ∑ p ∈ P α p F x p . From (3) we have F x k = R k U . Thus, the Jacobian matrix J F of F is given in component notation by

J F = ( R 1 U | R 2 U | ⋯ | R n U )
= ( ( r 111 , r 121 , ⋯ , r 1 n 1 ; r 112 , r 122 , ⋯ , r 1 n 2 ; ⋮ ; r 11 n , r 12 n , ⋯ , r 1 n n ) ( u 1 ; u 2 ; ⋮ ; u n ) | ⋯ | ( r n 11 , r n 21 , ⋯ , r n n 1 ; r n 12 , r n 22 , ⋯ , r n n 2 ; ⋮ ; r n 1 n , r n 2 n , ⋯ , r n n n ) ( u 1 ; u 2 ; ⋮ ; u n ) )
= ( ∑ i = 1 n u i ( r 1 i 1 ; r 1 i 2 ; ⋮ ; r 1 i n ) | ∑ i = 1 n u i ( r 2 i 1 ; r 2 i 2 ; ⋮ ; r 2 i n ) | ⋯ | ∑ i = 1 n u i ( r n i 1 ; r n i 2 ; ⋮ ; r n i n ) )
= ∑ i = 1 n u i ( r 1 i 1 , r 2 i 1 , ⋯ , r n i 1 ; r 1 i 2 , r 2 i 2 , ⋯ , r n i 2 ; ⋮ ; r 1 i n , r 2 i n , ⋯ , r n i n ) = ∑ i = 1 n u i ( r i 11 , r i 21 , ⋯ , r i n 1 ; r i 12 , r i 22 , ⋯ , r i n 2 ; ⋮ ; r i 1 n , r i 2 n , ⋯ , r i n n ) ,

where the last equality is obtained from commutativity of 𝔸 .

(4) ⇒ (1). Suppose that the Jacobian of F satisfies equality (4.2). Our candidate for the 𝔸 -derivative of F is

∑ k = 1 n ( ∑ p ∈ P α p f k x p ) e k .

We have to prove the equality

(4.3) d F ( v ) = ( ∑ k = 1 n ( ∑ p ∈ P α p f k x p ) e k ) ( v ) ,

where the product indicated on the right-hand side represents the product of 𝔸 . First, the differential of F applied to v is the Jacobian matrix J F of F multiplied by v through the matrix product:

(4.4) d F ( v ) = ( ∑ k = 1 n ( ∑ p ∈ P α p f k x p ) R k ) ( v ) .

Next, by Lemma 4.1 we have

(4.5) d F ( v ) = ∑ k = 1 n ( ∑ p ∈ P α p f k x p ) ( e k v ) ,

where e k v represents the product with respect to 𝔸 . Therefore equality (4.3) holds, because the right-hand side of (4.5) equals the right-hand side of (4.3). This shows that F is 𝔸 -differentiable and that the 𝔸 -derivative of F is

F ′ = ∑ k = 1 n ( ∑ p ∈ P α p f k x p ) e k . ∎
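Item (4) of Theorem 2 can be illustrated for 𝔸 = ℂ (our own Python/NumPy sketch; here R 1 = I , R 2 is the rotation by π/2, e = e 1 , P = { 1 } , α 1 = 1 , and the test field is F ( z ) = z² , i.e. F = ( x² − y² , 2 x y )):

```python
import numpy as np

R1, R2 = np.eye(2), np.array([[0., -1.], [1., 0.]])

x, y = 0.7, -1.2
# Jacobian of F = (x^2 - y^2, 2xy) at (x, y).
JF = np.array([[2 * x, -2 * y],
               [2 * y,  2 * x]])
# Equality (4.2): JF = (df1/dx) R1 + (df2/dx) R2.
f1_x, f2_x = 2 * x, 2 * y
assert np.allclose(JF, f1_x * R1 + f2_x * R2)
```

For ℂ this is exactly the classical statement that the Jacobian of a holomorphic map is the matrix of multiplication by f ′ ( z ) = 2 z .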

From Theorem 2, in the next corollary we describe the set of all solutions of system (0.3).

Corollary 4.1.

If there is an algebra A such that system (0.3) is the set of Cauchy–Riemann equations for A , then the A -differentiable functions are exactly the solutions of system (0.3).

The following example gives two algebras 𝔸 1 and 𝔸 2 for which the families of 𝔸 1 -differentiable and 𝔸 2 -differentiable functions are the same, since they have the same generalized Cauchy–Riemann equations.

Example 4.

The linear space ℝ 3 endowed with the product

| ⋅ | e 1 | e 2 | e 3 |
| e 1 | 1/2 e 1 + 1/6 e 2 | 1/3 e 2 | 1/3 e 3 |
| e 2 | 1/3 e 2 | - 1/3 e 2 | - 1/3 e 3 |
| e 3 | 1/3 e 3 | - 1/3 e 3 | 0 |

defines an algebra 𝔸 with unit e = 2 e 1 - e 2 . The Cauchy–Riemann equations for the 𝔸 -derivative are given by

f 1 y = 0 , f 1 x - f 2 x - f 2 y = 0 , f 3 x + f 3 y = 0 ,
f 1 z = 0 , f 2 z = 0 , f 1 x - f 2 x - f 3 z = 0 .

Thus,

( f 1 x , f 1 y , f 1 z ; f 2 x , f 2 y , f 2 z ; f 3 x , f 3 y , f 3 z ) = ( 2 f 1 x - f 1 y ) ( 1/2 , 0 , 0 ; 1/6 , 1/3 , 0 ; 0 , 0 , 1/3 ) + ( 2 f 2 x - f 2 y ) ( 0 , 0 , 0 ; 1/3 , - 1/3 , 0 ; 0 , 0 , - 1/3 ) + ( 2 f 3 x - f 3 y ) ( 0 , 0 , 0 ; 0 , 0 , 0 ; 1/3 , - 1/3 , 0 ) .

That is,

J F = ( 2 f 1 x - f 1 y ) R 1 + ( 2 f 2 x - f 2 y ) R 2 + ( 2 f 3 x - f 3 y ) R 3 ,

where R i = R ( e i ) and R : 𝔸 → M n ( ℝ ) is the first fundamental representation of 𝔸 .

On the other hand, for the same generalized Cauchy–Riemann equations, the partial derivatives f i x j can be written as linear combinations of f 1 x , f 2 x , f 3 x , from which we obtain

( f 1 x , f 1 y , f 1 z ; f 2 x , f 2 y , f 2 z ; f 3 x , f 3 y , f 3 z ) = f 1 x ( 1 , 0 , 0 ; 0 , 1 , 0 ; 0 , 0 , 1 ) + f 2 x ( 0 , 0 , 0 ; 1 , - 1 , 0 ; 0 , 0 , - 1 ) + f 3 x ( 0 , 0 , 0 ; 0 , 0 , 0 ; 1 , - 1 , 0 ) .

Since these matrices satisfy Ward’s conditions, F = ( f 1 , f 2 , f 3 ) is 𝔹 -differentiable, where 𝔹 is the algebra defined on ℝ 3 by the product

| ⋅ | e 1 | e 2 | e 3 |
| e 1 | e 1 | e 2 | e 3 |
| e 2 | e 2 | - e 2 | - e 3 |
| e 3 | e 3 | - e 3 | 0 |

The unit e of 𝔹 is e = e 1 .
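The matrices appearing in the last displayed identity can be checked against the product table of 𝔹 (a Python/NumPy sketch of our own; B 1 , B 2 , B 3 are the coefficient matrices of f 1 x , f 2 x , f 3 x above):

```python
import numpy as np

B1 = np.eye(3)
B2 = np.array([[0., 0., 0.], [1., -1., 0.], [0., 0., -1.]])
B3 = np.array([[0., 0., 0.], [0., 0., 0.], [1., -1., 0.]])
Bs = [B1, B2, B3]

# The matrices commute pairwise and B1 = I represents the unit e = e1.
assert all(np.allclose(X @ Y, Y @ X) for X in Bs for Y in Bs)
# Their matrix products realize the B-table:
assert np.allclose(B2 @ B2, -B2)                  # e2 e2 = -e2
assert np.allclose(B2 @ B3, -B3)                  # e2 e3 = -e3
assert np.allclose(B3 @ B3, np.zeros((3, 3)))     # e3 e3 = 0
```

Thus the three coefficient matrices form a commutative matrix algebra whose products reproduce the table of 𝔹 , consistent with Ward's conditions.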

5 Linear independence of Cauchy–Riemann equations

Theorem 3.

A set of generalized Cauchy–Riemann equations associated with an algebra A contains n ( n - 1 ) linearly independent PDEs.

Proof.

By Theorem 2, a set of generalized Cauchy–Riemann equations associated with an algebra 𝔸 is equivalent to equation (4.1). Then a set of Cauchy–Riemann equations for 𝔸 -differentiability is given by

F x k = R k ∑ p = 1 n α p F x p , k = 1 , 2 , … , n ,

where the R i are the n × n matrices of its first fundamental representation, setting α p = 0 for p ∉ P , with

I = α 1 R 1 + ⋯ + α n R n .

They can be rewritten as n equations as follows:

F x 1 = R 1 ( α 1 F x 1 + ⋯ + α n F x n ) ,
⋮
F x n = R n ( α 1 F x 1 + ⋯ + α n F x n ) ,

as well as

( α 1 R 1 - I ) F x 1 + α 2 R 1 F x 2 + ⋯ + α n R 1 F x n = 0 ,
⋮
α 1 R n F x 1 + α 2 R n F x 2 + ⋯ + ( α n R n - I ) F x n = 0 .

In order to prove that the latter equations are linearly independent, consider the following n × n block matrix (each entry being an n × n matrix)

[ α 1 R 1 - I , α 2 R 1 , α 3 R 1 , ⋯ , α n R 1 ; α 1 R 2 , α 2 R 2 - I , α 3 R 2 , ⋯ , α n R 2 ; ⋮ ; α 1 R n , α 2 R n , α 3 R n , ⋯ , α n R n - I ] ,

and prove that this matrix has rank n ( n - 1 ) . For this it is only necessary to prove that exactly one of the block columns is linearly dependent on the others. To achieve this, we perform row and column operations that preserve the set of solutions, after which one block column will consist only of zeros.

For this, we consider only the columns with α p ≠ 0 , because if α p = 0 , then that block column has only zeros except for the p-th entry, as follows:

[ 0 ; ⋮ ; - I ; ⋮ ; 0 ] .

Therefore, let us consider the non-zero α p , say α p ≠ 0 for p = 1 , 2 , … , l . Then the corresponding l × l block matrix is

[ α 1 R 1 - I , α 2 R 1 , α 3 R 1 , ⋯ , α l R 1 ; α 1 R 2 , α 2 R 2 - I , α 3 R 2 , ⋯ , α l R 2 ; ⋮ ; α 1 R l , α 2 R l , α 3 R l , ⋯ , α l R l - I ] .

To the latter matrix we apply the following row operation: fixing the first row, each k-th row, k = 2 , 3 , … , l , is replaced by the k-th row plus α 1 / α k times the first row. Then one gets

[ α 1 R 1 - I , α 2 R 1 , α 3 R 1 , ⋯ , α l R 1 ; ( α 1 / α 2 ) ( α 1 R 1 + α 2 R 2 - I ) , α 1 R 1 + α 2 R 2 - I , ( α 3 / α 2 ) ( α 1 R 1 + α 2 R 2 ) , ⋯ , ( α l / α 2 ) ( α 1 R 1 + α 2 R 2 ) ; ( α 1 / α 3 ) ( α 1 R 1 + α 3 R 3 - I ) , ( α 2 / α 3 ) ( α 1 R 1 + α 3 R 3 ) , α 1 R 1 + α 3 R 3 - I , ⋯ , ( α l / α 3 ) ( α 1 R 1 + α 3 R 3 ) ; ⋮ ; ( α 1 / α l ) ( α 1 R 1 + α l R l - I ) , ( α 2 / α l ) ( α 1 R 1 + α l R l ) , ( α 3 / α l ) ( α 1 R 1 + α l R l ) , ⋯ , α 1 R 1 + α l R l - I ] .

Next we perform the following column operation: fixing the last column, each s-th column, s = 1 , 2 , … , l - 1 , is replaced by the s-th column minus α s / α l times the last column. Then one gets

[ - I , 0 , ⋯ , 0 , α l R 1 ; - ( α 1 / α 2 ) I , - I , ⋯ , 0 , ( α l / α 2 ) ( α 1 R 1 + α 2 R 2 ) ; - ( α 1 / α 3 ) I , 0 , - I , ⋯ , ( α l / α 3 ) ( α 1 R 1 + α 3 R 3 ) ; ⋮ ; 0 , ( α 2 / α l ) I , ( α 3 / α l ) I , ⋯ , α 1 R 1 + α l R l - I ] .

This last matrix is almost lower block triangular, except for the last column. Now we fix the first row, and each k-th row, k = 2 , 3 , … , l , is replaced by the k-th row minus α 1 / α k times the first row. Then one gets

[ - I , 0 , ⋯ , 0 , α l R 1 ; 0 , - I , ⋯ , 0 , α l R 2 ; ⋮ ; ( α 1 / α l ) I , ( α 2 / α l ) I , ( α 3 / α l ) I , ⋯ , α l R l - I ] .

Finally, we perform several row operations at once: we add to the last row a suitable multiple of each of the other rows, i.e., the last row is replaced by

l -th row + ( α 1 / α l ) (first row) + ( α 2 / α l ) (second row) + ⋯ + ( α l - 1 / α l ) [ ( l - 1 ) -th row ] .

Then one gets

[ - I , 0 , ⋯ , 0 , α l R 1 ; 0 , - I , ⋯ , 0 , α l R 2 ; ⋮ ; 0 , 0 , ⋯ , - I , α l R l - 1 ; 0 , 0 , ⋯ , 0 , 0 ] .

Here we used the expression I = α 1 R 1 + α 2 R 2 + ⋯ + α l R l in the ( l , l ) -entry. Therefore, the last matrix has l - 1 linearly independent block rows and, since each entry is an n × n matrix, these contribute ( l - 1 ) n independent rows. In addition, we have ( n - l ) n further independent columns coming from the block columns with α p = 0 . With this we have shown that the n ( n - 1 ) Cauchy–Riemann equations are linearly independent. ∎
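The rank claim of Theorem 3 can be checked numerically on a small example (our own Python/NumPy sketch, for the 2D algebra ℝ ⊕ ℝ with R 1 = diag ( 1 , 0 ) , R 2 = diag ( 0 , 1 ) , unit e = e 1 + e 2 , α = ( 1 , 1 )):

```python
import numpy as np

R1, R2 = np.diag([1., 0.]), np.diag([0., 1.])
I = np.eye(2)
a1, a2 = 1.0, 1.0   # alpha coefficients of the unit e = e1 + e2

# The block matrix [[a1 R1 - I, a2 R1], [a1 R2, a2 R2 - I]] from the proof.
B = np.block([[a1 * R1 - I, a2 * R1],
              [a1 * R2, a2 * R2 - I]])

# Its rank must be n(n-1) = 2 for n = 2.
assert np.linalg.matrix_rank(B) == 2
```

So exactly one block column is dependent on the others, matching the row/column reduction carried out above.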

6 Ward completion

The next theorem is the main result in Ward’s paper [13].

Theorem (Ward [13, Theorem 2]).

Suppose the system of PDEs (0.3) has the property that, for some fixed integer p, it implies the equality

J F = f 1 x p R 1 + f 2 x p R 2 + ⋯ + f n x p R n .

Suppose further that the matrices A i = ( a i s r ) , i = 1 , … , n , are commutative and that the zero trace condition is satisfied. Then there is a uniquely determined algebra A for which (0.3) is a set of generalized Cauchy–Riemann differential equations.

Now, in the following theorem, we give a generalization that completes the above theorem and characterizes the algebrizability of vector fields.

Theorem 4.

There exists an algebra A for which the set (0.3) is the system of generalized Cauchy–Riemann equations if and only if the following three statements are satisfied:

  1. there exists a set of matrices { A i = ( a i s r ) : i = 1 , 2 , … , n } in M n ( ℝ ) such that set ( 0.3 ) implies equality ( 4.2 ) for R i = ( a i s r ) ,

  2. { A i : i = 1 , 2 , , n } is commutative, that is, A i A j = A j A i for 1 i , j n , and

  3. set ( 0.3 ) satisfies the zero trace condition (Definition 1 ).

Proof.

Suppose we take 𝔸 such that the set of PDEs (0.3) is its system of generalized Cauchy–Riemann equations, and A i = R ( e i ) for i = 1 , 2 , … , n , where R is the first fundamental representation of 𝔸 . Then:

  1. is a direct consequence of Theorem 3,

  2. is satisfied because 𝔸 is an algebra, hence the A i commute,

  3. system (0.3) satisfies the zero trace condition because it is satisfied by the set of PDEs obtained from equality (4.2) of Theorem 2, and this set is equivalent to system (0.3).

Now we have to show the converse.

Since each PDE of system (4.2) is a linear combination of the set of PDEs (0.3), and the PDEs (0.3) satisfy the zero trace condition, that is, ∑ i = 1 n d k i i = 0 with k = 1 , 2 , … , n ( n - 1 ) , it follows that ∑ p ∈ P α p a t p r = δ r t . For example, writing f i j = f i x j , (4.2) can be rewritten as

0 = - ( f 11 , f 12 , ⋯ , f 1 n ; f 21 , f 22 , ⋯ , f 2 n ; ⋮ ; f n 1 , f n 2 , ⋯ , f n n ) + ( a 111 , a 121 , ⋯ , a 1 n 1 ; a 112 , a 122 , ⋯ , a 1 n 2 ; ⋮ ; a 11 n , a 12 n , ⋯ , a 1 n n ) ∑ p ∈ P α p f 1 p
+ ( a 211 , a 221 , ⋯ , a 2 n 1 ; a 212 , a 222 , ⋯ , a 2 n 2 ; ⋮ ; a 21 n , a 22 n , ⋯ , a 2 n n ) ∑ p ∈ P α p f 2 p + ⋯ + ( a n 11 , a n 21 , ⋯ , a n n 1 ; a n 12 , a n 22 , ⋯ , a n n 2 ; ⋮ ; a n 1 n , a n 2 n , ⋯ , a n n n ) ∑ p ∈ P α p f n p .

Then, for k = 1 one has

\[
0 = -f_{11} + a_{111} ( \alpha_1 f_{11} + \cdots + \alpha_n f_{1n} ) + a_{211} ( \alpha_1 f_{21} + \cdots + \alpha_n f_{2n} ) + \cdots + a_{n11} ( \alpha_1 f_{n1} + \cdots + \alpha_n f_{nn} )
\]

and the coefficients are d_{111} = -1 + \alpha_1 a_{111}, d_{122} = \alpha_2 a_{211}, \ldots, d_{1nn} = \alpha_n a_{n11}. Then from the zero trace condition of PDEs (0.3),

(6.1) \sum_{i=1}^{n} d_{1ii} = -1 + \sum_{i=1}^{n} \alpha_i a_{i11} = -1 + \sum_{i=1}^{n} \alpha_i a_{1i1} = 0.

Similarly, for k = 2 one has

\[
0 = -f_{12} + a_{121} ( \alpha_1 f_{11} + \cdots + \alpha_n f_{1n} ) + a_{221} ( \alpha_1 f_{21} + \cdots + \alpha_n f_{2n} ) + \cdots + a_{n21} ( \alpha_1 f_{n1} + \cdots + \alpha_n f_{nn} )
\]

and the coefficients are d_{211} = \alpha_1 a_{121}, d_{222} = \alpha_2 a_{221}, \ldots, d_{2nn} = \alpha_n a_{n21}. Then from the zero trace condition of PDEs (0.3),

(6.2) \sum_{i=1}^{n} d_{2ii} = \sum_{i=1}^{n} \alpha_i a_{i21} = \sum_{i=1}^{n} \alpha_i a_{2i1} = 0.

In both cases (6.1)–(6.2), we used commutativity and \sum_{p \in P} \alpha_p a_{tpr} = \delta_{rt}. Hence, by Theorem 1, the matrices A_i, i = 1, 2, \ldots, n, form a basis of an algebra 𝕄 with identity I = \sum_{p \in P} \alpha_p A_p. Therefore, if 𝔸 is the algebra whose first fundamental representation is 𝕄, the set of PDEs (0.3) is satisfied by the 𝔸-differentiable functions. ∎
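As a concrete illustration (ours, not taken from the paper), the classical Cauchy–Riemann system for the complex numbers meets the three conditions of Theorem 4. The sketch below assumes the standard matrix representation a + bi ↦ ((a, -b), (b, a)) and a hypothetical coefficient array d[k][i][j] encoding the coefficients d_{kij} of the two PDEs; it checks commutativity and the zero trace condition numerically.

```python
import numpy as np

# Basis matrices of the usual representation of the complex numbers:
A1 = np.eye(2)                            # represents 1
A2 = np.array([[0.0, -1.0], [1.0, 0.0]])  # represents i

# Condition 2 of Theorem 4: the basis matrices commute.
assert np.allclose(A1 @ A2, A2 @ A1)

# The classical Cauchy-Riemann system
#   df1/dx1 - df2/dx2 = 0,   df1/dx2 + df2/dx1 = 0
# with coefficients d[k][i][j] multiplying df_i/dx_j:
d = np.zeros((2, 2, 2))
d[0, 0, 0], d[0, 1, 1] = 1.0, -1.0   # first equation
d[1, 0, 1], d[1, 1, 0] = 1.0, 1.0    # second equation

# Condition 3 (zero trace condition): sum_i d_{kii} = 0 for each k.
for k in range(2):
    assert abs(np.trace(d[k])) < 1e-12

# The identity is recovered as I = sum_p alpha_p A_p with alpha = (1, 0).
alpha = np.array([1.0, 0.0])
assert np.allclose(alpha[0] * A1 + alpha[1] * A2, np.eye(2))
print("complex Cauchy-Riemann system passes conditions (2) and (3)")
```

By Theorem 4, these checks are exactly what is needed for the system to be the generalized Cauchy–Riemann equations of some algebra, here the complex numbers.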

The following corollary is given by Ward in [13].

Corollary 6.1.

A necessary and sufficient condition that the linearly independent PDEs

(6.3) \sum_{i,j=1}^{2} d_{kij} \frac{\partial f_i}{\partial x_j} = 0, \quad k = 1, 2,

determine an algebra A for which (6.3) is a set of generalized Cauchy–Riemann equations is that

(6.4) d_{k11} + d_{k22} = 0, \quad k = 1, 2.

A system of generalized Cauchy–Riemann equations for the algebra 𝔸 whose product in the canonical basis of ℝ² is given by e_1 e_1 = e_1, e_1 e_2 = 0, e_2 e_2 = e_2, is the set (0.4). The unit of 𝔸 is e = e_1 + e_2. The 𝔸-differentiable functions are exactly the functions F = (f_1, f_2) with f_1(x_1, x_2) = f(x_1) and f_2(x_1, x_2) = g(x_2), where f and g are differentiable functions of one variable. Therefore, this is a case not covered by [13, Theorem 2]. However, this system satisfies Corollary 6.1 and is in harmony with it.
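This example can be checked directly. The sketch below is our own illustration, not code from the paper: it takes the diagonal matrices representing multiplication by e_1 and e_2, and it assumes that the set (0.4) reduces to ∂f_1/∂x_2 = 0 and ∂f_2/∂x_1 = 0 (consistent with 𝔸-differentiable functions of the form F = (f(x_1), g(x_2))). It verifies commutativity, the unit e = e_1 + e_2, and condition (6.4) of Corollary 6.1.

```python
import numpy as np

# Multiplication by e1 and e2 in the algebra e1*e1 = e1, e1*e2 = 0, e2*e2 = e2
# is given by these diagonal matrices:
A1 = np.diag([1.0, 0.0])
A2 = np.diag([0.0, 1.0])

assert np.allclose(A1 @ A2, A2 @ A1)   # commutative basis
assert np.allclose(A1 + A2, np.eye(2)) # unit e = e1 + e2

# Assumed form of the system (0.4), with coefficients d[k][i][j]
# multiplying df_i/dx_j:
d = np.zeros((2, 2, 2))
d[0, 0, 1] = 1.0   # df1/dx2 = 0
d[1, 1, 0] = 1.0   # df2/dx1 = 0

# Condition (6.4) of Corollary 6.1: d_{k11} + d_{k22} = 0 for k = 1, 2.
for k in range(2):
    assert d[k, 0, 0] + d[k, 1, 1] == 0.0
print("diagonal algebra passes Corollary 6.1")
```

Any F = (f(x_1), g(x_2)) has a diagonal Jacobian, so it satisfies this system trivially, in line with the description of the 𝔸-differentiable functions above.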


Communicated by Jan Frahm


References

[1] A. Alvarez-Parrilla, M. E. Frías-Armenta, E. López-González and C. Yee-Romero, On solving systems of autonomous ordinary differential equations by reduction to a variable of an algebra, Int. J. Math. Math. Sci. 2012 (2012), Article ID 753916. doi: 10.1155/2012/753916.

[2] J. C. Avila, M. E. Frías-Armenta and E. López-González, Geodesibility of algebrizable three-dimensional vector fields, in preparation.

[3] J. Cook, Introduction to 𝒜-calculus, preprint (2017), https://arxiv.org/abs/1708.04135.

[4] M. E. Frías-Armenta and E. López-González, On geodesibility of algebrizable planar vector fields, Bol. Soc. Mat. Mex. (3) 25 (2019), no. 1, 163–186. doi: 10.1007/s40590-017-0186-2.

[5] I. A. García and M. Grau, A survey on the inverse integrating factor, Qual. Theory Dyn. Syst. 9 (2010), no. 1–2, 115–166. doi: 10.1007/s12346-010-0023-8.

[6] P. W. Ketchum, Analytic functions of hypercomplex variables, Trans. Amer. Math. Soc. 30 (1928), no. 4, 641–667. doi: 10.1090/S0002-9947-1928-1501452-7.

[7] E. López-González, On solutions of PDEs by using algebras, Math. Methods Appl. Sci. 45 (2022), no. 8, 4834–4852. doi: 10.1002/mma.8073.

[8] E. R. Lorch, The theory of analytic functions in normed Abelian vector rings, Trans. Amer. Math. Soc. 54 (1943), 414–425. doi: 10.1090/S0002-9947-1943-0009090-0.

[9] S. A. Plaksa, Monogenic functions in commutative algebras associated with classical equations of mathematical physics, Ukr. Mat. Visn. 15 (2018), no. 4, 543–575.

[10] A. Pogorui, R. M. Rodríguez-Dagnino and M. Shapiro, Solutions for PDEs with constant coefficients and derivability of functions ranged in commutative algebras, Math. Methods Appl. Sci. 37 (2014), no. 17, 2799–2810. doi: 10.1002/mma.3019.

[11] S. V. Rogosin and A. A. Koroleva, Advances in Applied Analysis, Trends Math., Birkhäuser/Springer, Basel, 2012. doi: 10.1007/978-3-0348-0417-2.

[12] G. Scheffers, Verallgemeinerung der Grundlagen der gewöhnlich komplexen Funktionen I, Leipz. Ber. 45 (1893), 828–848.

[13] J. A. Ward, From generalized Cauchy–Riemann equations to linear algebras, Proc. Amer. Math. Soc. 4 (1953), 456–461. doi: 10.1090/S0002-9939-1953-0055981-6.

Received: 2022-10-05
Revised: 2023-01-27
Published Online: 2023-03-29
Published in Print: 2023-11-01

© 2023 Walter de Gruyter GmbH, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
