1 Introduction

Let us first introduce some notation. Vectors are written as italic lowercase letters (\(x,y,\ldots \)), matrices correspond to italic capitals \((A,B,\ldots )\), and tensors correspond to calligraphic capitals \(({\mathcal {A}},{\mathcal {B}},\ldots )\). The symbol \({\mathbb {C}}({\mathbb {R}})\) denotes the set of all complex (real) numbers, and similarly, by \({\mathbb {C}}^n({\mathbb {R}}^n)\) we denote the space of n-dimensional complex (real) column vectors. The symbol \({\mathbb {C}}^{[m,n]}({\mathbb {R}}^{[m,n]})\) denotes the set of order m dimension n tensors with complex (real) entries, and by \(\mathbb {S}^{[m,n]}\) we denote the collection of all real symmetric tensors of order m and dimension n.

For a positive integer n, let \([n]=\{1,2,\ldots , n\}\). An mth order n-dimensional complex (real) tensor is a multidimensional array of \(n^m\) elements of the form

$$\begin{aligned} {\mathcal {A}}=(a_{i_1i_2\cdots i_m}),\quad a_{i_1i_2\cdots i_m}\in {\mathbb {C}}({\mathbb {R}}),\ i_j\in [n],\ j\in [m], \end{aligned}$$

see [1,2,3]. Obviously, a second-order n-dimensional tensor \({\mathcal {A}}\) is an n-by-n matrix. The entries of the form \(a_{ii\cdots i}\) are called the diagonal entries of the tensor \({\mathcal {A}}\). A tensor \({\mathcal {A}}\) is called a \({\mathcal {Z}}\)-tensor [1, 2, 4] if all of its off-diagonal entries are non-positive. A tensor \({\mathcal {A}}=(a_{i_1i_2\cdots i_m})\) is called nonnegative, denoted by \({\mathcal {A}}\ge 0\), if all its entries are nonnegative.

Suppose that \({\mathcal {A}}\in {\mathbb {R}}^{[m,n]}\). We say that the pair \((\lambda ,x)\in {\mathbb {C}}\times ({\mathbb {C}}^n{\setminus }\{0\})\) is an eigenvalue-eigenvector pair of \({\mathcal {A}}\) [1, 3, 5] if \({\mathcal {A}}x^{m-1}=\lambda x^{[m-1]},\) where \({\mathcal {A}}x^{m-1}\) and \(x^{[m-1]}\) are n-dimensional column vectors given by

$$\begin{aligned} ({\mathcal {A}}x^{m-1})_i=\sum _{i_2,\ldots ,i_m\in [n]} a_{ii_2\cdots i_m}x_{i_2}\cdots x_{i_m} \end{aligned}$$
(1.1)

and \((x^{[m-1]})_i=x_i^{m-1}\). In particular, \((\lambda ,x)\) is called an H-eigenpair of \({\mathcal {A}}\) if \(\lambda \) and x are real. In addition, the spectral radius of \({\mathcal {A}}\) is defined as \(\rho ({\mathcal {A}})=\max \{|\lambda |:\lambda \in \sigma ({\mathcal {A}})\},\) where \(\sigma ({\mathcal {A}})\) is the set of all eigenvalues of \({\mathcal {A}}\).
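
For readers who wish to experiment numerically, the product \({\mathcal {A}}x^{m-1}\) in (1.1) is simply a repeated contraction of the last \(m-1\) modes of \({\mathcal {A}}\) with x. The following is a minimal Python sketch, assuming the tensor is stored as an m-way numpy array; the helper name tensor_apply is ours, not standard.

```python
import numpy as np

def tensor_apply(A, x):
    """Return the vector A x^{m-1} of (1.1) for an m-way array A and a vector x."""
    y = A
    for _ in range(A.ndim - 1):   # each step contracts the current last mode with x
        y = y @ x
    return y                      # y_i = sum_{i_2,...,i_m} a_{i i_2...i_m} x_{i_2}...x_{i_m}

# small 3rd-order, 2-dimensional illustration
A = np.arange(8.0).reshape(2, 2, 2)
x = np.array([1.0, 2.0])
print(tensor_apply(A, x))                      # the vector A x^{m-1}
print(tensor_apply(A, x) / x ** (A.ndim - 1))  # componentwise quotients (A x^{m-1})_i / x_i^{m-1}
```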

Since the introduction of tensor eigenvalues in 2005, tensor eigenvalue theory has become an active research topic [1, 3, 5,6,7,8]. As an important special case, the spectral radius theory of nonnegative tensors has also attracted extensive attention in recent years [1, 6, 8, 9]. One of these results, given in [8], is an upper bound on the spectral radius of a nonnegative tensor that depends only on the entries of the tensor. We state this result as the following theorem.

Theorem 1.1

[8] Let \({\mathcal {A}}=(a_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]},\) and \({\mathcal {A}}\ge 0\). Then, 

$$\begin{aligned} \rho ({\mathcal {A}})\le \max _{i\in [n]}\sum _{i_2,\ldots ,i_m\in [n]}a_{ii_2\cdots i_m}. \end{aligned}$$
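
In computational terms, the bound of Theorem 1.1 is simply the largest sum of the entries \(a_{ii_2\cdots i_m}\) taken over the first index; a short sketch (our own helper name), assuming the tensor is stored as an m-way numpy array:

```python
import numpy as np

def row_sum_bound(A):
    """Upper bound of Theorem 1.1: max over i of sum_{i_2,...,i_m} a_{i i_2...i_m}."""
    return max(A[i].sum() for i in range(A.shape[0]))
```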

The Hadamard product of matrices was defined in [10]. We recall this definition as follows: let \(A=(a_{ij})\) and \(B=(b_{ij})\) be two \(m\times n\) matrices; then their Hadamard product, denoted by \(A\circ B\), is the \(m\times n\) matrix \(C=(c_{ij})\) with \(c_{ij}=a_{ij}b_{ij}\). The Hadamard product of matrices is a useful concept and tool in matrix theory and has important applications in various areas, such as trigonometric moments of convolutions of periodic functions, products of integral equation kernels (in particular the relationship with Mercer's theorem), the weak minimum principle in partial differential equations, characteristic functions in probability theory (Bochner's theorem), association schemes in combinatorics, and so on [10]. Because of these applications, the Hadamard product of matrices has been studied extensively and many meaningful results have been obtained; see [11,12,13,14,15,16].

With the development of tensor theory, Qi recently proposed the Hadamard product of tensors as follows (see [17]): for any \({\mathcal {A}}=(a_{i_1i_2\cdots i_m})\) and \({\mathcal {B}}=(b_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]}\), the Hadamard product of \({\mathcal {A}}\) and \({\mathcal {B}}\), denoted by \({\mathcal {A}}\circ {\mathcal {B}}\), is a new tensor \(\mathcal {C}=(c_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]}\) such that \(c_{i_1i_2\cdots i_m}=a_{i_1i_2\cdots i_m}b_{i_1i_2\cdots i_m},\) where \(i_j\in [n],~j\in [m].\) In addition, the \(\alpha \)th Hadamard power of \({\mathcal {A}}\in {\mathbb {R}}^{[m,n]}\) is defined as \({\mathcal {A}}^{(\alpha )}=(a_{i_1i_2\cdots i_m}^\alpha )\) for \(\alpha \ge 0\).
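
Computationally, both operations are entrywise; a minimal sketch for tensors stored as numpy arrays of identical shape (the function names are ours):

```python
import numpy as np

def hadamard(A, B):
    """Hadamard product A ∘ B: entrywise multiplication of tensors of the same shape."""
    assert A.shape == B.shape
    return A * B

def hadamard_power(A, alpha):
    """alpha-th Hadamard power A^{(alpha)}: every entry raised to the power alpha (alpha >= 0)."""
    return A ** alpha
```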

Obviously, the Hadamard product of tensors is a higher-order generalization of the Hadamard product of matrices. The Hadamard product of tensors has appeared in several works; in particular, it plays an important role in studying the closure of special structured tensors under this product, including (strongly) Hankel tensors, \({\mathcal {H}}\)-tensors, (strongly) completely positive tensors and so on. More detailed results involving the Hadamard product of tensors can be found in [1, 17,18,19,20,21,22,23]. Very recently, Sun et al. [24] generalized some inequalities for the spectral radius of the Hadamard product of nonnegative matrices to nonnegative tensors of higher order. These elegant results motivate our study of the Hadamard product of tensors. In this paper, we discuss some properties of the Hadamard product of tensors and the closure of several structured tensors under this product, and present an upper bound on the spectral radius of the Hadamard product of nonnegative tensors. Moreover, we give some inequalities on the spectral radius of Hadamard products of Hadamard powers of nonnegative tensors.

This paper is organized as follows. In Sect. 2, we give some basic properties of the Hadamard product of tensors. In Sect. 3, we discuss the closure of the Hadamard products of several structured tensors, such as (strictly) diagonally dominant tensors, doubly strictly diagonally dominant tensors and \({\mathcal {B}}\)-tensors. By using the Hadamard product of tensors, some inequalities on the spectral radius of nonnegative tensors are obtained in Sect. 4. Finally, the conclusions of this paper are given in Sect. 5.

2 The Basic Properties for the Hadamard Product of Tensors

In this section, some fundamental properties of the Hadamard product of tensors are established.

A tensor \({\mathcal {A}}=(a_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]}\) is called symmetric [3] if \(a_{i_1i_2\cdots i_m }=a_{\pi (i_1i_2\cdots i_m )}\) for any \(\pi \in \Pi _m,\) where \(\Pi _m\) is the permutation group of m indices.

Theorem 2.1

Let \({\mathcal {A}}=(a_{i_1i_2\cdots i_m}),\)\({\mathcal {B}}=(b_{i_1i_2\cdots i_m})\in \mathbb {S}^{[m,n]}\). Then, \({\mathcal {A}}\circ {\mathcal {B}}\in \mathbb {S}^{[m,n]}.\)

Proof

Denote \(\mathcal {C}=(c_{i_1i_2\cdots i_m})={\mathcal {A}}\circ {\mathcal {B}},\) and suppose that the indices \(j_1j_2\cdots j_m\) are any permutation of the indices \(i_1i_2\cdots i_m\). By the definition of the Hadamard product of tensors and the symmetry of \({\mathcal {A}}\) and \({\mathcal {B}}\), it is obvious that

$$\begin{aligned} c_{j_1j_2\cdots j_m}=a_{j_1j_2\cdots j_m}b_{j_1j_2\cdots j_m}=a_{i_1i_2\cdots i_m}b_{i_1i_2\cdots i_m}=c_{i_1i_2\cdots i_m}. \end{aligned}$$

Therefore, \(\mathcal {C}={\mathcal {A}}\circ {\mathcal {B}}\in \mathbb {S}^{[m,n]}\). The proof is completed. \(\square \)

A tensor \({\mathcal {A}}=(a_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]}\) is called reducible [2, 4, 7], if there exists a non-empty proper index subset \(I\subset \{1,2,\ldots ,n\}\) such that \(a_{i_1i_2\cdots i_m}=0~\forall i_1\in I,~\forall i_2,\ldots ,i_m\notin I.\) Otherwise, we say \({\mathcal {A}}\) is irreducible.

Theorem 2.2

Let \({\mathcal {A}}=(a_{i_1i_2\cdots i_m}),\)\({\mathcal {B}}=(b_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]}\). If \({\mathcal {A}}\circ {\mathcal {B}}\) is an irreducible tensor,  then \({\mathcal {A}}\) and \({\mathcal {B}}\) are two irreducible tensors.

Proof

Since \({\mathcal {A}}\circ {\mathcal {B}}\) is irreducible, for every non-empty proper index subset \(I\subset \{1,2,\ldots ,n\}\) we have

$$\begin{aligned} a_{i_1i_2\cdots i_m}b_{i_1i_2\cdots i_m}\ne 0,\quad \exists i_1\in I,\ \exists i_2,\ldots ,i_m\notin I. \end{aligned}$$

Thus,

$$\begin{aligned} a_{i_1i_2\cdots i_m}\ne 0,\ b_{i_1i_2\cdots i_m}\ne 0,\quad \exists i_1\in I,\ \exists i_2,\ldots ,i_m\notin I. \end{aligned}$$

This implies that \({\mathcal {A}}\) and \({\mathcal {B}}\) are two irreducible tensors. The proof is completed. \(\square \)

Note that the converse of Theorem 2.2 does not hold. That is, if \({\mathcal {A}}=(a_{i_1i_2\cdots i_m})\) and \({\mathcal {B}}=(b_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]}\) are two irreducible tensors, then \({\mathcal {A}}\circ {\mathcal {B}}\) is not necessarily an irreducible tensor. We can illustrate this fact by the following example.

Example 2.3

Consider two tensors \({\mathcal {A}}=(a_{ijk})\) and \({\mathcal {B}}=(b_{ijk})\in {\mathbb {R}}^{[3,3]}\) defined as follows:

$$\begin{aligned} a_{122}= & {} a_{133}=1,a_{211}=a_{233}=a_{213}=a_{231}=-1,\\ a_{311}= & {} a_{322}=a_{312}=a_{321}=2, \quad \text {other } a_{ijk}=0.\\ b_{123}= & {} b_{132}=-1,b_{211}=b_{233}=b_{213}=b_{231}=1,\\ b_{311}= & {} b_{322}=b_{312}=b_{321}=-2, \quad \text {other } b_{ijk}=0. \end{aligned}$$

Then, \({\mathcal {A}}\) and \({\mathcal {B}}\) are two irreducible tensors. By simple computations, \(\mathcal {C}={\mathcal {A}}\circ {\mathcal {B}}=(c_{ijk})\in {\mathbb {R}}^{[3,3]}\), where

$$\begin{aligned} c_{211}= & {} c_{233}=c_{213}=c_{231}=-1,\\ c_{311}= & {} c_{322}=c_{312}=c_{321}=-4 \end{aligned}$$

and other \(c_{ijk}=0\). Clearly, \({\mathcal {A}}\circ {\mathcal {B}}\) is a reducible tensor.
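
Reducibility can be checked directly from the definition by enumerating all non-empty proper index subsets. The following brute-force sketch (our own helper, exponential in n and intended only for tiny examples such as this one) reproduces the conclusion of Example 2.3.

```python
import numpy as np
from itertools import combinations, product

def is_reducible(A):
    """A is reducible iff some non-empty proper index set I satisfies
    a_{i1 i2...im} = 0 for all i1 in I and all i2,...,im outside I."""
    n, m = A.shape[0], A.ndim
    for r in range(1, n):
        for I in combinations(range(n), r):
            comp = [j for j in range(n) if j not in I]
            if all(A[(i1,) + idx] == 0
                   for i1 in I
                   for idx in product(comp, repeat=m - 1)):
                return True
    return False

# the tensors of Example 2.3 (indices shifted to start at 0)
A = np.zeros((3, 3, 3)); B = np.zeros((3, 3, 3))
A[0, 1, 1] = A[0, 2, 2] = 1
A[1, 0, 0] = A[1, 2, 2] = A[1, 0, 2] = A[1, 2, 0] = -1
A[2, 0, 0] = A[2, 1, 1] = A[2, 0, 1] = A[2, 1, 0] = 2
B[0, 1, 2] = B[0, 2, 1] = -1
B[1, 0, 0] = B[1, 2, 2] = B[1, 0, 2] = B[1, 2, 0] = 1
B[2, 0, 0] = B[2, 1, 1] = B[2, 0, 1] = B[2, 1, 0] = -2
print(is_reducible(A), is_reducible(B), is_reducible(A * B))   # False False True
```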

A tensor \({\mathcal {A}}\in {\mathbb {C}}^{[m,n]}\) is called a rank-one tensor if there exist nonzero \(a_i\in {\mathbb {C}}^{n}(i=1,\ldots ,m)\) such that \({\mathcal {A}}=a_1\otimes a_2\otimes \cdots \otimes a_m,\) where \(a_1\otimes a_2\otimes \cdots \otimes a_m\) is the Segre outer product of \(a_1\in {\mathbb {C}}^{n},a_2\in {\mathbb {C}}^{n},\ldots ,a_m\in {\mathbb {C}}^{n}\) with entries \(a_{i_1i_2\cdots i_m}=(a_1)_{i_1}(a_2)_{i_2}\cdots (a_m)_{i_m}\). The rank of a tensor \({\mathcal {A}}\), denoted by rank\(({\mathcal {A}}),\) is defined to be the smallest r such that \({\mathcal {A}}\) can be written as a sum of r rank-one tensors. In particular, if \({\mathcal {A}}=0,\) then rank\(({\mathcal {A}})=0.\) The above theory of tensor rank can be found in [25].

Theorem 2.4

Let \({\mathcal {A}}=(a_{i_1i_2\cdots i_m}),\)\({\mathcal {B}}=(b_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]}\) be two rank-one tensors. Then, there exist nonzero \(a_i,b_i\in {\mathbb {R}}^n(i=1,2,\ldots ,m)\) such that

$$\begin{aligned} {\mathcal {A}}\circ {\mathcal {B}}= & {} (a_1\otimes a_2\otimes \cdots \otimes a_m)\circ (b_1\otimes b_2\otimes \cdots \otimes b_m) \\= & {} (a_1\circ b_1)\otimes (a_2\circ b_2)\otimes \cdots \otimes (a_m\circ b_m). \end{aligned}$$

Proof

Since \({\mathcal {A}}\) and \({\mathcal {B}}\) are rank-one tensors, we can write

$$\begin{aligned} {\mathcal {A}}= a_1\otimes a_2\otimes \cdots \otimes a_m\quad \mathrm{and}\quad {\mathcal {B}}=b_1\otimes b_2\otimes \cdots \otimes b_m, \end{aligned}$$

where \(a_i,~b_i\in {\mathbb {R}}^{n}~(i=1,2,\ldots ,m)\). Hence, \({\mathcal {A}}\circ {\mathcal {B}}=(a_1\otimes a_2\otimes \cdots \otimes a_m)\circ (b_1\otimes b_2\otimes \cdots \otimes b_m),\) and \(({\mathcal {A}}\circ {\mathcal {B}})_{i_1i_2\cdots i_m}=a_{i_1i_2\cdots i_m}b_{i_1i_2\cdots i_m}=(a_1)_{i_1}(a_2)_{i_2}\cdots (a_m)_{i_m}(b_1)_{i_1}(b_2)_{i_2}\cdots (b_m)_{i_m}=(a_1\circ b_1)_{i_1}(a_2\circ b_2)_{i_2}\cdots (a_m\circ b_m)_{i_m}\). Therefore, \({\mathcal {A}}\circ {\mathcal {B}}=(a_1\otimes a_2\otimes \cdots \otimes a_m)\circ (b_1\otimes b_2\otimes \cdots \otimes b_m)=(a_1\circ b_1)\otimes (a_2\circ b_2)\otimes \cdots \otimes (a_m\circ b_m).\) The proof is completed. \(\square \)

By Theorem 2.4, we can provide a characterization for the rank of \({\mathcal {A}}\circ {\mathcal {B}}\), which is stated as the following corollary.

Corollary 2.5

Let \({\mathcal {A}}=(a_{i_1i_2\cdots i_m})\) and \({\mathcal {B}}=(b_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]}\) be two rank-one tensors. Then, rank\(({\mathcal {A}}\circ {\mathcal {B}})\le 1\).

Proof

Since rank\(({\mathcal {A}})=1\) and rank\(({\mathcal {B}})=1\), there exist nonzero \(a_i,~b_i\in {{\mathbb {R}}}^{n}(i=1,2,\ldots ,m)\) such that

$$\begin{aligned} {\mathcal {A}}= a_1\otimes a_2\otimes \cdots \otimes a_m\quad \mathrm{and}\quad {\mathcal {B}}= b_1\otimes b_2\otimes \cdots \otimes b_m. \end{aligned}$$

By Theorem 2.4, we have

$$\begin{aligned} {\mathcal {A}}\circ {\mathcal {B}}= & {} (a_1\otimes a_2\otimes \cdots \otimes a_m)\circ (b_1\otimes b_2\otimes \cdots \otimes b_m) \\= & {} (a_1\circ b_1)\otimes \cdots \otimes (a_m\circ b_m). \end{aligned}$$

Consequently, there are two cases as follows. If \(a_i\circ b_i\ne 0\) for all \(i\in [m]\), then rank\(({\mathcal {A}}\circ {\mathcal {B}})=1\). Otherwise, if there exists some \(i\in [m]\) such that \(a_i\circ b_i=0\), then \({\mathcal {A}}\circ {\mathcal {B}}=0\) and hence rank\(({\mathcal {A}}\circ {\mathcal {B}})=0\). Summarizing the above analysis, we conclude that rank\(({\mathcal {A}}\circ {\mathcal {B}})\le 1\). The proof is completed. \(\square \)
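
Theorem 2.4 and Corollary 2.5 are easy to confirm numerically: the Hadamard product of two random rank-one tensors factors again as a Segre outer product. A minimal sketch (our own helper names):

```python
import numpy as np
from functools import reduce

def outer(vectors):
    """Segre outer product a_1 ⊗ a_2 ⊗ ... ⊗ a_m as an m-way numpy array."""
    return reduce(lambda T, v: np.multiply.outer(T, v), vectors[1:], vectors[0])

rng = np.random.default_rng(0)
a = [rng.standard_normal(4) for _ in range(3)]   # rank-one A = a1 ⊗ a2 ⊗ a3
b = [rng.standard_normal(4) for _ in range(3)]   # rank-one B = b1 ⊗ b2 ⊗ b3
lhs = outer(a) * outer(b)                        # A ∘ B
rhs = outer([ai * bi for ai, bi in zip(a, b)])   # (a1∘b1) ⊗ (a2∘b2) ⊗ (a3∘b3)
print(np.allclose(lhs, rhs))                     # True: A ∘ B has rank at most one
```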

3 The Closure of the Hadamard Product of Some Structured Tensors

In this section, we investigate the closure of the Hadamard product of some tensors with special structure, including (strictly) diagonally dominant tensors, doubly strictly diagonally dominant tensors and \({\mathcal {B}}\)-tensors. Moreover, other corresponding properties involving the Hadamard product of tensors are also obtained.

3.1 The Closure of the Hadamard Product of Diagonally Dominant Tensors

We discuss the closure of several classes of diagonally dominant tensors under the Hadamard product of tensors. First of all, we prove that the class of (strictly) diagonally dominant tensors is closed under the Hadamard product.

The Kronecker symbol [3] of m indices is defined as

$$\begin{aligned} \delta _{i_1i_2\ldots i_m}=\left\{ \begin{array}{ll} 1, &{}\quad \mathrm{if}\ i_1=i_2=\cdots =i_m, \\ 0, &{}\quad \mathrm{otherwise}. \end{array} \right. \end{aligned}$$

Let \({\mathcal {A}}=(a_{i_1i_2\cdots i_m})\in {\mathbb {C}}^{[m,n]}\). \({\mathcal {A}}\) is called diagonally dominant [2] if

$$\begin{aligned} |a_{ii\cdots i}|\ge \sum _{\begin{array}{c} i_2,\ldots , i_m\in [n]\\ \delta _{ii_2\cdots i_m}=0 \end{array}}|a_{ii_2\cdots i_m}| \end{aligned}$$

for all \(i\in [n]\). \({\mathcal {A}}\) is called strictly diagonally dominant if the above inequality is strict for all \(i\in [n]\).
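
The condition is straightforward to test numerically; the following sketch (our own helper, assuming the tensor is stored as an m-way numpy array) checks (strict) diagonal dominance slice by slice:

```python
import numpy as np

def is_diag_dominant(A, strict=False):
    """Check |a_{ii...i}| >= (or >) sum of |a_{i i_2...i_m}| over the off-diagonal entries."""
    n, m = A.shape[0], A.ndim
    for i in range(n):
        diag = abs(A[(i,) * m])
        off = np.abs(A[i]).sum() - diag     # all entries a_{i i_2...i_m} except the diagonal one
        if diag < off or (strict and diag <= off):
            return False
    return True
```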

Theorem 3.1

Let \({\mathcal {A}}=(a_{i_1i_2\cdots i_m}),\)\({\mathcal {B}}=(b_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]}\) be two diagonally dominant tensors. Then, \({\mathcal {A}}\circ {\mathcal {B}}\) is a diagonally dominant tensor.

Proof

Since \({\mathcal {A}}\) and \({\mathcal {B}}\) are two diagonally dominant tensors, for all \(i\in [n]\),

$$\begin{aligned} |a_{ii\cdots i}|\ge \sum _{\begin{array}{c} i_2,\ldots , i_m\in [n]\\ \delta _{ii_2\cdots i_m}=0 \end{array}}|a_{ii_2\cdots i_m}| \end{aligned}$$
(3.1)

and

$$\begin{aligned} |b_{ii\cdots i}|\ge \sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n]\\ \delta _{ii_2\cdots i_m}=0 \end{array}}|b_{ii_2\cdots i_m}|. \end{aligned}$$
(3.2)

Let \(\mathcal {C}=(c_{i_1i_2\cdots i_m})={\mathcal {A}}\circ {\mathcal {B}},\) then it follows from the inequalities (3.1) and (3.2) that for all \(i\in [n]\),

$$\begin{aligned} |c_{ii\cdots i}|= & {} |a_{ii\cdots i}b_{ii\cdots i}|=|a_{ii\cdots i}||b_{ii\cdots i}| \\\ge & {} \sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n]\\ \delta _{ii_2\cdots i_m}=0 \end{array}}|a_{ii_2\cdots i_m}|\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n]\\ \delta _{ii_2\cdots i_m}=0 \end{array}}|b_{ii_2\cdots i_m}|\\\ge & {} \sum _{\begin{array}{c} i_2,\ldots , i_m\in [n] \\ \delta _{ii_2\cdots i_m}=0 \end{array}}|a_{ii_2\cdots i_m}||b_{ii_2\cdots i_m}|\\= & {} \sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n]\\ \delta _{ii_2\cdots i_m}=0 \end{array}}|a_{ii_2\cdots i_m}b_{ii_2\cdots i_m}|\\= & {} \sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n]\\ \delta _{ii_2\cdots i_m}=0 \end{array}}|c_{ii_2\cdots i_m}|. \end{aligned}$$

Therefore, \({\mathcal {A}}\circ {\mathcal {B}}\) is a diagonally dominant tensor. The proof is completed. \(\square \)

Theorem 3.2

Let \({\mathcal {A}}=(a_{i_1i_2\cdots i_m}),\)\({\mathcal {B}}=(b_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]}\) be nonzero diagonally dominant tensors with nonzero diagonal entries,  and suppose that at least one of \({\mathcal {A}}\) and \({\mathcal {B}}\) is a strictly diagonally dominant tensor. Then, \({\mathcal {A}}\circ {\mathcal {B}}\) is a strictly diagonally dominant tensor.

Proof

Without loss of generality, we assume that \({\mathcal {A}}\) is a strictly diagonally dominant tensor and that \({\mathcal {B}}\) is a diagonally dominant tensor. Then, for all \(i\in [n]\),

$$\begin{aligned} |a_{ii\cdots i}|>\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n] \\ \delta _{ii_2\cdots i_m}=0 \end{array}}|a_{ii_2\cdots i_m}| \end{aligned}$$
(3.3)

and

$$\begin{aligned} |b_{ii\cdots i}|\ge \sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n] \\ \delta _{ii_2\cdots i_m}=0 \end{array}}|b_{ii_2\cdots i_m}|. \end{aligned}$$
(3.4)

Denote \(\mathcal {C}=(c_{i_1i_2\cdots i_m})={\mathcal {A}}\circ {\mathcal {B}}\). By the inequalities (3.3) and (3.4) for all \(i\in [n]\),

$$\begin{aligned} |c_{ii\cdots i}|= & {} |a_{ii\cdots i}b_{ii\cdots i}|=|a_{ii\cdots i}||b_{ii\cdots i}|\\> & {} \sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n]\\ \delta _{ii_2\cdots i_m}=0 \end{array}}|a_{ii_2\cdots i_m}|\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n]\\ \delta _{ii_2\cdots i_m}=0 \end{array}}|b_{ii_2\cdots i_m}|\\\ge & {} \sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n]\\ \delta _{ii_2\cdots i_m}=0 \end{array}}|a_{ii_2\cdots i_m}||b_{ii_2\cdots i_m}|\\= & {} \sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n]\\ \delta _{ii_2\cdots i_m}=0 \end{array}}|a_{ii_2\cdots i_m}b_{ii_2\cdots i_m}|\\= & {} \sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n]\\ \delta _{ii_2\cdots i_m}=0 \end{array}}|c_{ii_2\cdots i_m}|, \end{aligned}$$

which implies that \({\mathcal {A}}\circ {\mathcal {B}}\) is a strictly diagonally dominant tensor. The proof is completed. \(\square \)

Remark 3.3

From the proof of Theorem 3.2, we easily see that \({\mathcal {A}}\circ {\mathcal {B}}\) is a strictly diagonally dominant tensor if \({\mathcal {A}}\) and \({\mathcal {B}}\in {\mathbb {R}}^{[m,n]}\) are strictly diagonally dominant, which implies that the class of strictly diagonally dominant tensors is closed under the Hadamard product of tensors.

In [3], Qi gave the definition of the positive definiteness of tensors, and illustrated its important application in the stability study of nonlinear autonomous systems. Additionally, it was also proved in [3] that a tensor \({\mathcal {A}}\) is positive definite (positive semi-definite) if \({\mathcal {A}}\in \mathbb {S}^{[m,n]}\) is strictly diagonally dominant (diagonally dominant) and m is even. Based on the above fact, we obtain the following corollary by Theorems 3.1 and 3.2.

Corollary 3.4

Let \({\mathcal {A}}=(a_{i_1i_2\cdots i_m}),\)\({\mathcal {B}}=(b_{i_1i_2\cdots i_m})\in \mathbb {S}^{[m,n]}\) with positive diagonal entries,  and m be even. Then,

(1) \({\mathcal {A}}\circ {\mathcal {B}}\) is positive semi-definite if \({\mathcal {A}}\) and \({\mathcal {B}}\) satisfy the conditions of Theorem 3.1.

(2) \({\mathcal {A}}\circ {\mathcal {B}}\) is positive definite if \({\mathcal {A}}\) and \({\mathcal {B}}\) satisfy the conditions of Theorem 3.2.

Next, we consider the Hadamard product of two doubly strictly diagonally dominant tensors.

Definition 3.5

[26] Let \({\mathcal {A}}=(a_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]}\) and \(n\ge 2\). Then \({\mathcal {A}}\) is called a doubly strictly diagonally dominant tensor (DSDD) if

(1) when \(m=2\), \({\mathcal {A}}\) satisfies

    $$\begin{aligned} |a_{ii\cdots i}||a_{jj\cdots j}|> & {} \sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n] \\ \delta _{ii_2\cdots i_m}=0 \end{array}}|a_{ii_2\cdots i_m}|\sum _{\begin{array}{c} j_2,\ldots ,j_m\in [n] \\ \delta _{jj_2\cdots j_m}=0 \end{array}}|a_{jj_2\cdots j_m}|\nonumber \\&\quad \text {for any }i,j\in [n],\ i\ne j; \end{aligned}$$
    (3.5)
(2) when \(m>2\), \({\mathcal {A}}\) satisfies

    $$\begin{aligned} |a_{ii\cdots i}|\ge \sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n] \\ \delta _{ii_2\cdots i_m}=0 \end{array}}|a_{ii_2\cdots i_m}|\quad \text {for any }i\in [n], \end{aligned}$$

    and the inequality (3.5) holds.

Theorem 3.6

Let \({\mathcal {A}}=(a_{i_1i_2\cdots i_m}),\)\({\mathcal {B}}=(b_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]}\) be two doubly strictly diagonally dominant tensors. Then, \({\mathcal {A}}\circ {\mathcal {B}}\) is a doubly strictly diagonally dominant tensor.

Proof

Let \({\mathcal {A}}\) and \({\mathcal {B}}\) be two doubly strictly diagonally dominant tensors. By Definition 3.5, we need to consider the following cases.

(1) If \(m=2,\)

$$\begin{aligned} |a_{ii\cdots i}||a_{jj\cdots j}|>\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n] \\ \delta _{ii_2\cdots i_m}=0 \end{array}}|a_{ii_2\cdots i_m}|\sum _{\begin{array}{c} j_2,\ldots ,j_m\in [n] \\ \delta _{jj_2\cdots j_m}=0 \end{array}}|a_{jj_2\cdots j_m}| \end{aligned}$$
(3.6)

and

$$\begin{aligned} |b_{ii\cdots i}||b_{jj\cdots j}|>\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n]\\ \delta _{ii_2\cdots i_m}=0 \end{array}}|b_{ii_2\cdots i_m}|\sum _{\begin{array}{c} j_2,\ldots ,j_m\in [n] \\ \delta _{jj_2\cdots j_m}=0 \end{array}}|b_{jj_2\cdots j_m}|, \end{aligned}$$
(3.7)

for any \(i,~j\in [n]\), \(i\ne j.\) Define \(\mathcal {C}=(c_{i_1i_2\cdots i_m})={\mathcal {A}}\circ {\mathcal {B}},\) then by the inequalities (3.6) and (3.7) for any \(i,~j\in [n]\), \(i\ne j\),

$$\begin{aligned} |c_{ii\cdots i}||c_{jj\cdots j}|= & {} |a_{ii\cdots i}b_{ii\cdots i}||a_{jj\cdots j}b_{jj\cdots j}| =|a_{ii\cdots i}||b_{ii\cdots i}||a_{jj\cdots j}||b_{jj\cdots j}|\\> & {} \sum _{\begin{array}{c} i_2,\ldots , i_m=1 \\ \delta _{ii_2\cdots i_m}=0 \end{array}}^n|a_{ii_2\cdots i_m}|\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n] \\ \delta _{ii_2\cdots i_m}=0 \end{array}}|b_{ii_2\cdots i_m}|\sum _{\begin{array}{c} j_2,\ldots ,j_m\in [n] \\ \delta _{jj_2\cdots j_m}=0 \end{array}}|a_{jj_2\cdots j_m}| \\&\times \sum _{\begin{array}{c} j_2,\ldots ,j_m\in [n] \\ \delta _{jj_2\cdots j_m}=0 \end{array}}|b_{jj_2\cdots j_m}| \\\ge & {} \sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n] \\ \delta _{ii_2\cdots i_m}=0 \end{array}}|a_{ii_2\cdots i_m}||b_{ii_2\cdots i_m}|\sum _{\begin{array}{c} j_2,\ldots , j_m\in [n] \\ \delta _{jj_2\cdots j_m}=0 \end{array}}|a_{jj_2\cdots j_m}||b_{jj_2\cdots j_m}|\\= & {} \sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n] \\ \delta _{ii_2\cdots i_m}=0 \end{array}}|a_{ii_2\cdots i_m}b_{ii_2\cdots i_m}|\sum _{\begin{array}{c} j_2,\ldots ,j_m\in [n] \\ \delta _{jj_2\cdots j_m}=0 \end{array}}|a_{jj_2\cdots j_m}b_{jj_2\cdots j_m}|\\= & {} \sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n]\\ \delta _{ii_2\cdots i_m}=0 \end{array}}|c_{ii_2\cdots i_m}|\sum _{\begin{array}{c} j_2,\ldots ,j_m\in [n] \\ \delta _{jj_2\cdots j_m}=0 \end{array}}|c_{jj_2\cdots j_m}|. \end{aligned}$$

(2) If \(m>2\),

$$\begin{aligned}&|a_{ii\cdots i}|\ge \sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n]\\ \delta _{ii_2\cdots i_m}=0 \end{array}}|a_{ii_2\cdots i_m}|,\quad |b_{ii\cdots i}|\ge \sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n]\\ \delta _{ii_2\cdots i_m}=0 \end{array}}|b_{ii_2\cdots i_m}|, \\&|a_{ii\cdots i}||a_{jj\cdots j}|>\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n]\\ \delta _{ii_2\cdots i_m}=0 \end{array}}|a_{ii_2\cdots i_m}|\sum _{\begin{array}{c} j_2,\ldots ,j_m\in [n]\\ \delta _{jj_2\cdots j_m}=0 \end{array}}|a_{jj_2\cdots j_m}| \end{aligned}$$

and

$$\begin{aligned} |b_{ii\cdots i}||b_{jj\cdots j}|>\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n] \\ \delta _{ii_2\cdots i_m}=0 \end{array}}|b_{ii_2\cdots i_m}|\sum _{\begin{array}{c} j_2,\ldots ,j_m\in [n] \\ \delta _{jj_2\cdots j_m}=0 \end{array}}|b_{jj_2\cdots j_m}| \end{aligned}$$

for all \(i,~j\in [n]\), \(i\ne j\). Let \(\mathcal {C}=(c_{i_1i_2\cdots i_m})={\mathcal {A}}\circ {\mathcal {B}};\) we know from the proof of Theorem 3.1 that for any \(i,~j\in [n]\), \(i\ne j\),

$$\begin{aligned} |c_{ii\cdots i}|=|a_{ii\cdots i}b_{ii\cdots i}| \ge \sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n] \\ \delta _{ii_2\cdots i_m}=0 \end{array}}|a_{ii_2\cdots i_m}b_{ii_2\cdots i_m}|=\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n] \\ \delta _{ii_2\cdots i_m}=0 \end{array}}|c_{ii_2\cdots i_m}|, \end{aligned}$$

and similar to the proof of (1), we have

$$\begin{aligned} |c_{ii\cdots i}||c_{jj\cdots j}|\ge \sum _{\begin{array}{c} i_2,\ldots , i_m\in [n] \\ \delta _{ii_2\cdots i_m}=0 \end{array}}|c_{ii_2\cdots i_m}|\sum _{\begin{array}{c} j_2,\ldots , j_m\in [n]\\ \delta _{jj_2\cdots j_m}=0 \end{array}}|c_{jj_2\cdots j_m}|. \end{aligned}$$

Therefore, \({\mathcal {A}}\circ {\mathcal {B}}\) is a doubly strictly diagonally dominant tensor. The proof is completed. \(\square \)

As mentioned in Theorem 9 of [18], an even-order real symmetric tensor \({\mathcal {A}}\) with positive diagonal entries is positive definite if \({\mathcal {A}}\) satisfies the condition (2) in Definition 3.5. Based on this fact, the following result can be deduced from Theorem 3.6.

Corollary 3.7

Let \({\mathcal {A}}=(a_{i_1i_2\cdots i_m}),\)\({\mathcal {B}}=(b_{i_1i_2\cdots i_m})\in \mathbb {S}^{[m,n]}\) with positive diagonal entries, m be even and \(n>2\). Then, \({\mathcal {A}}\circ {\mathcal {B}}\) is positive definite if \({\mathcal {A}}\) and \({\mathcal {B}}\) satisfy the conditions of Theorem 3.6.

3.2 The Closure of the Hadamard Product of \({\mathcal {B}}\)-Tensors

We study the closure of the Hadamard product of \({\mathcal {B}}\)-tensors. Before proving the main results of this subsection, we first review the definition of \({\mathcal {B}}\)-tensors and give some lemmas as follows.

Definition 3.8

[27] Let \({\mathcal {A}}=(a_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]}.\) We say that \({\mathcal {A}}\) is a \({\mathcal {B}}\)-tensor, if for all \(i\in [n]\),

$$\begin{aligned} \sum _{i_2,\ldots ,i_m\in [n]}a_{ii_2\cdots i_m}>0 \end{aligned}$$

and

$$\begin{aligned} \frac{1}{n^{m-1}}\left( \sum _{i_2,\ldots ,i_m\in [n]}a_{ii_2\cdots i_m}\right) >a_{ij_2\cdots j_m} \end{aligned}$$

for all \((j_2,\ldots ,j_m)\ne (i,\ldots ,i)\).

Lemma 3.9

[27] Let \({\mathcal {A}}=(a_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]}.\) Then, \({\mathcal {A}}\) is a \({\mathcal {B}}\)-tensor if and only if for each \(i\in [n],\)

$$\begin{aligned} \sum _{i_2,\ldots ,i_m\in [n]}a_{ii_2\cdots i_m}>n^{m-1}\beta _i({\mathcal {A}}), \end{aligned}$$
(3.8)

where \(\beta _i({\mathcal {A}})=\max \{0,~a_{ij_2j_3\cdots j_m}:~(j_2,j_3,\ldots ,j_m)\ne (i,i,\ldots , i),\)\(j_2,j_3,\ldots ,j_m\in [n]\}.\)

Let \(r_{i}({\mathcal {A}})=\max \{a_{ii_2\cdots i_m}:(i_2,\ldots ,i_m)\ne (i,\ldots ,i)\}\). By rearranging the inequality in (3.8), Lemma 3.9 can provide the following characterization of \({\mathcal {B}}\)-tensors when \(r_{i}({\mathcal {A}})\ge 0\) for all \(i\in [n]\).

Lemma 3.10

Let \({\mathcal {A}}=(a_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]},\) and \(r_{i}({\mathcal {A}})\ge 0\) for all \(i\in [n]\). Then, \({\mathcal {A}}\) is a \({\mathcal {B}}\)-tensor if and only if for each \(i\in [n],\)

$$\begin{aligned} a_{ii\cdots i}>(n^{m-1}-1)a_{ii_2^{(0)}\cdots i_m^{(0)}}-\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n],\delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(0)},\ldots , i_m^{(0)}) \end{array}}a_{ii_2\cdots i_m}, \end{aligned}$$

where \(r_{i}({\mathcal {A}})=a_{ii_2^{(0)}\cdots i_m^{(0)}}.\)

In particular, \(r_{i}({\mathcal {A}})=a_{ii_2^{(0)}\cdots i_m^{(0)}}\ge 0\) for all \(i\in [n]\) when \({\mathcal {A}}\) in Lemma 3.10 is nonnegative, and hence a necessary and sufficient condition for nonnegative \({\mathcal {B}}\)-tensors can be stated as the following lemma, which is very useful in the later proofs.

Lemma 3.11

Let \({\mathcal {A}}=(a_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]},\) and \({\mathcal {A}}\ge 0\). Then, \({\mathcal {A}}\) is a \({\mathcal {B}}\)-tensor if and only if for each \(i\in [n],\)

$$\begin{aligned} a_{ii\cdots i}>(n^{m-1}-1)a_{ii_2^{(0)}\cdots i_m^{(0)}}-\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n],\delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(0)},\ldots , i_m^{(0)}) \end{array}}a_{ii_2\cdots i_m}. \end{aligned}$$

With the above theoretical preparation, we provide the following example to show that the Hadamard product of two \({\mathcal {B}}\)-tensors is not necessarily a \({\mathcal {B}}\)-tensor.

Example 3.12

Consider a tensor \({\mathcal {A}}=(a_{i_1i_2i_3})\in {\mathbb {R}}^{[3,3]}\) defined as follows:

$$\begin{aligned} a_{111}=a_{222}=a_{333}=7,a_{121}=-6,\quad \hbox {other}\ a_{i_1i_2i_3}=0. \end{aligned}$$

Obviously, \(\sum _{i_2,i_3\in [3]}a_{ii_2i_3}>0\) for \(i=1,2,3\). For \(i=1\),

$$\begin{aligned} \frac{1}{n^{m-1}}\left( \sum _{i_2,i_3\in [3]}a_{1i_2i_3}\right) =\frac{1}{3^{3-1}}\times 1=\frac{1}{9}>0>-6. \end{aligned}$$

Consequently,

$$\begin{aligned} \frac{1}{n^{m-1}}\left( \sum _{i_2,i_3\in [3]}a_{1i_2i_3}\right) >a_{1j_2j_3}\quad \text {for all }(j_2,j_3)\ne (1,1). \end{aligned}$$

For \(i=2,3\),

$$\begin{aligned} \frac{1}{n^{m-1}}\left( \sum _{i_2,i_3\in [3]}a_{2i_2i_3}\right) =\frac{1}{n^{m-1}}\left( \sum _{i_2,i_3\in [3]}a_{3i_2i_3}\right) =\frac{1}{3^{3-1}}\times 7=\frac{7}{9}>0. \end{aligned}$$

Consequently,

$$\begin{aligned} \frac{1}{n^{m-1}}\left( \sum _{i_2,i_3\in [3]}a_{2i_2i_3}\right) >a_{2j_2j_3}\quad \text {for all }(j_2,j_3)\ne (2,2) \end{aligned}$$

and

$$\begin{aligned} \frac{1}{n^{m-1}}\left( \sum _{i_2,i_3\in [3]}a_{3i_2i_3}\right) >a_{3j_2j_3}\quad \text {for all }(j_2,j_3)\ne (3,3). \end{aligned}$$

Hence, \({\mathcal {A}}\) is a \({\mathcal {B}}\)-tensor by Definition 3.8. By simple computations, \({\mathcal {A}}\circ {\mathcal {A}}=(a'_{i_1i_2i_3})\in {\mathbb {R}}^{[3,3]}\) is defined as

$$\begin{aligned} a'_{111}=a'_{222}=a'_{333}=49,a'_{121}=36,\quad \text {other }a'_{i_1i_2i_3}=0. \end{aligned}$$

For \(i=1\),

$$\begin{aligned} \frac{1}{n^{m-1}}\left( \sum _{i_2,i_3\in [3]}a'_{1i_2i_3}\right) =\frac{1}{3^{3-1}}\times 85=\frac{85}{9}<a'_{121}=36. \end{aligned}$$

Therefore, \({\mathcal {A}}\circ {\mathcal {A}}\) is not a \({\mathcal {B}}\)-tensor according to Definition 3.8.
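
Example 3.12 can be verified mechanically with the criterion of Lemma 3.9; the sketch below (our own helper, meant only for small dense examples) reports that \({\mathcal {A}}\) passes the test while \({\mathcal {A}}\circ {\mathcal {A}}\) fails it.

```python
import numpy as np
from itertools import product

def is_B_tensor(A):
    """Lemma 3.9: A is a B-tensor iff, for every i,
    sum_{i_2,...,i_m} a_{i i_2...i_m} > n^{m-1} * max(0, largest off-diagonal entry in slice i)."""
    n, m = A.shape[0], A.ndim
    for i in range(n):
        row = A[i]
        beta = max(0.0, max(row[idx] for idx in product(range(n), repeat=m - 1)
                            if idx != (i,) * (m - 1)))
        if row.sum() <= n ** (m - 1) * beta:
            return False
    return True

# Example 3.12 (indices shifted to start at 0)
A = np.zeros((3, 3, 3))
A[0, 0, 0] = A[1, 1, 1] = A[2, 2, 2] = 7
A[0, 1, 0] = -6
print(is_B_tensor(A), is_B_tensor(A * A))   # True False
```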

From the above analysis, we naturally pose the following question: under what conditions is the Hadamard product of two \({\mathcal {B}}\)-tensors again a \({\mathcal {B}}\)-tensor? Motivated by this question, we give the main results of this subsection as follows.

Theorem 3.13

Let \({\mathcal {A}}=(a_{i_1i_2\cdots i_m}),~{\mathcal {B}}=(b_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]}\) be two \({\mathcal {B}}\)-tensors,  and \({\mathcal {A}},{\mathcal {B}}\ge 0\). Then, \({\mathcal {A}}\circ {\mathcal {B}}\ge 0\) is a \({\mathcal {B}}\)-tensor.

Proof

Obviously, \({\mathcal {A}}\circ {\mathcal {B}}\ge 0\). For any \(i\in [n],\) we have

$$\begin{aligned} r_i({\mathcal {A}}\circ {\mathcal {B}})= & {} \max \left\{ a_{ii_2\cdots i_m}b_{ii_2\cdots i_m}:(i_2,\ldots ,i_m)\ne (i,\ldots ,i)\right\} \\\le & {} \max \left\{ a_{ii_2\cdots i_m}:(i_2,\ldots ,i_m)\ne (i,\ldots ,i)\right\} \\&\cdot \max \left\{ b_{ii_2\cdots i_m}:(i_2,\ldots ,i_m)\ne (i,\ldots ,i)\right\} \\= & {} r_i({\mathcal {A}})r_i({\mathcal {B}}). \end{aligned}$$

Therefore, two cases are considered as follows.

(1) When \(r_i({\mathcal {A}}\circ {\mathcal {B}})=r_i({\mathcal {A}})r_i({\mathcal {B}}),\) we assume that \(r_i({\mathcal {A}})=a_{ii_2^{(0)}\cdots i_m^{(0)}},~r_i({\mathcal {B}})=b_{ii_2^{(0)}\cdots i_m^{(0)}},\) then \(r_i({\mathcal {A}}\circ {\mathcal {B}})=a_{ii_2^{(0)}\cdots i_m^{(0)}}b_{ii_2^{(0)}\cdots i_m^{(0)}}.\) From Lemma 3.11, it follows that

$$\begin{aligned} a_{ii\cdots i}>(n^{m-1}-1)a_{ii_2^{(0)}\cdots i_m^{(0)}}-\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(0)},\ldots ,i_m^{(0)}) \end{array}}a_{ii_2\cdots i_m}\ge 0, \end{aligned}$$

and

$$\begin{aligned} b_{ii\cdots i}>(n^{m-1}-1)b_{ii_2^{(0)}\cdots i_m^{(0)}}-\sum _{\begin{array}{c} i_2,\ldots , i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(0)},\ldots ,i_m^{(0)}) \end{array}}b_{ii_2\cdots i_m}\ge 0. \end{aligned}$$

Then,

$$\begin{aligned} a_{ii\cdots i}b_{ii\cdots i}> & {} \left[ (n^{m-1}-1)a_{ii_2^{(0)}\cdots i_m^{(0)}}-\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n], \delta _{ii_2\cdots i_m}=0\\ (i_2,\ldots ,i_m)\ne (i_2^{(0)},\ldots , i_m^{(0)}) \end{array}}a_{ii_2\cdots i_m}\right] \\&\cdot \left[ (n^{m-1}-1)b_{ii_2^{(0)}\cdots i_m^{(0)}}-\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(0)},\ldots , i_m^{(0)}) \end{array}}b_{ii_2\cdots i_m}\right] \\= & {} \left[ a_{ii_2^{(0)}\cdots i_m^{(0)}}+\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n],\delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(0)},\ldots , i_m^{(0)}) \end{array}}(a_{ii_2^{(0)}\cdots i_m^{(0)}}-a_{ii_2\cdots i_m})\right] \\&\cdot \left[ (n^{m-1}-1)b_{ii_2^{(0)}\cdots i_m^{(0)}}-\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(0)},\ldots , i_m^{(0)}) \end{array}}b_{ii_2\cdots i_m}\right] \\= & {} (n^{m-1}-1)a_{ii_2^{(0)}\cdots i_m^{(0)}}b_{ii_2^{(0)}\cdots i_m^{(0)}}-\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(0)},\ldots , i_m^{(0)}) \end{array}}(a_{ii_2^{(0)}\cdots i_m^{(0)}}b_{ii_2\cdots i_m})\\&+\sum _{\begin{array}{c} i_2,\ldots , i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(0)},\ldots ,i_m^{(0)}) \end{array}}(a_{ii_2^{(0)}\cdots i_m^{(0)}}-a_{ii_2\cdots i_m})\\&\cdot \left[ b_{ii_2^{(0)}\cdots i_m^{(0)}}+\sum _{\begin{array}{c} i_2,\ldots , i_m\in [n],\delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(0)},\ldots ,i_m^{(0)}) \end{array}}(b_{ii_2^{(0)}\cdots i_m^{(0)}}-b_{ii_2\cdots i_m})\right] \\= & {} (n^{m-1}-1)a_{ii_2^{(0)}\cdots i_m^{(0)}}b_{ii_2^{(0)}\cdots i_m^{(0)}}-\sum _{\begin{array}{c} i_2,\ldots , i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(0)},\ldots ,i_m^{(0)}) \end{array}}(a_{ii_2^{(0)}\cdots i_m^{(0)}}b_{ii_2\cdots i_m})\\&+\sum _{\begin{array}{c} i_2,\ldots , i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(0)},\ldots , i_m^{(0)}) \end{array}}(a_{ii_2^{(0)}\cdots i_m^{(0)}}-a_{ii_2\cdots i_m})b_{ii_2^{(0)}\cdots i_m^{(0)}}\\&+\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n],j_2,\ldots ,j_m\in [n]\\ \delta _{ii_2\cdots i_m}=0,\delta _{ij_2\cdots j_m}=0\\ (i_2,\ldots ,i_m)\ne (i_2^{(0)},\ldots ,i_m^{(0)})\\ (j_2,\ldots ,j_m)\ne (i_2^{(0)},\ldots ,i_m^{(0)}) \end{array}}(a_{ii_2^{(0)}\cdots i_m^{(0)}}-a_{ii_2\cdots i_m})(b_{ii_2^{(0)}\cdots i_m^{(0)}}-b_{ij_2\cdots j_m})\\= & {} (n^{m-1}-1)a_{ii_2^{(0)}\cdots i_m^{(0)}}b_{ii_2^{(0)}\cdots i_m^{(0)}} \\&+\sum _{\begin{array}{c} i_2,\ldots , i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(0)},\ldots ,i_m^{(0)}) \end{array}}(a_{ii_2^{(0)}\cdots i_m^{(0)}}-a_{ii_2\cdots i_m})b_{ii_2^{(0)}\cdots i_m^{(0)}}\\&-\sum _{\begin{array}{c} i_2,\ldots , i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(0)},\ldots ,i_m^{(0)}) \end{array}}(a_{ii_2^{(0)}\cdots i_m^{(0)}}-a_{ii_2\cdots i_m})b_{ii_2\cdots i_m} \\&-\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(0)},\ldots ,i_m^{(0)}) \end{array}}(a_{ii_2\cdots i_m}b_{ii_2\cdots i_m})\\&+\sum _{\begin{array}{c} i_2,\ldots , i_m\in [n],j_2,\ldots ,j_m\in [n]\\ \delta _{ii_2\cdots i_m}=0,\delta _{ij_2\cdots j_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(0)},\ldots ,i_m^{(0)})\\ (j_2,\ldots ,j_m)\ne (i_2^{(0)},\ldots ,i_m^{(0)}) \end{array}}(a_{ii_2^{(0)}\cdots i_m^{(0)}}-a_{ii_2\cdots 
i_m})(b_{ii_2^{(0)}\cdots i_m^{(0)}}-b_{ij_2\cdots j_m})\\= & {} (n^{m-1}-1)a_{ii_2^{(0)}\cdots i_m^{(0)}}b_{ii_2^{(0)}\cdots i_m^{(0)}}-\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(0)},\ldots ,i_m^{(0)}) \end{array}}(a_{ii_2\cdots i_m}b_{ii_2\cdots i_m})\\&+\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n],j_2,\ldots , j_m\in [n]\\ \delta _{ii_2\cdots i_m}=0,\delta _{ij_2\cdots j_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(0)},\ldots ,i_m^{(0)})\\ (j_2,\ldots ,j_m)\ne (i_2^{(0)},\ldots ,i_m^{(0)}) \end{array}}(a_{ii_2^{(0)}\cdots i_m^{(0)}}-a_{ii_2\cdots i_m})(b_{ii_2^{(0)}\cdots i_m^{(0)}}-b_{ij_2\cdots j_m})\\&+\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(0)},\ldots ,i_m^{(0)}) \end{array}}(a_{ii_2^{(0)}\cdots i_m^{(0)}}-a_{ii_2\cdots i_m})(b_{ii_2^{(0)}\cdots i_m^{(0)}}-b_{ii_2\cdots i_m})\\\ge & {} (n^{m-1}-1)a_{ii_2^{(0)}\cdots i_m^{(0)}}b_{ii_2^{(0)}\cdots i_m^{(0)}}-\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(0)},\ldots ,i_m^{(0)}) \end{array}}(a_{ii_2\cdots i_m}b_{ii_2\cdots i_m}). \end{aligned}$$

Hence,

$$\begin{aligned} a_{ii\cdots i}b_{ii\cdots i} > (n^{m-1}-1)r_i({\mathcal {A}}\circ {\mathcal {B}})-\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(0)},\ldots , i_m^{(0)}) \end{array}}(a_{ii_2\cdots i_m}b_{ii_2\cdots i_m}). \end{aligned}$$

(2) When \(r_i({\mathcal {A}}\circ {\mathcal {B}})<r_i({\mathcal {A}})r_i({\mathcal {B}})\), suppose that \(r_i({\mathcal {A}})=a_{ii_2^{(1)}\cdots i_m^{(1)}}\) and \(r_i({\mathcal {B}})=b_{ii_2^{(2)}\cdots i_m^{(2)}}\), where \((i_2^{(1)},\ldots , i_m^{(1)})\ne (i,\ldots ,i),~(i_2^{(2)},\ldots ,i_m^{(2)})\ne (i,\ldots ,i),\)\(i_2^{(1)},\ldots ,i_m^{(1)},i_2^{(2)},\ldots ,i_m^{(2)}\in [n].\) Note that if \((i_2^{(1)},\ldots ,i_m^{(1)})=(i_2^{(2)},\ldots , i_m^{(2)}),\) then \(r_i({\mathcal {A}}\circ {\mathcal {B}})=r_i({\mathcal {A}})r_i({\mathcal {B}})\), contradicting the assumption of this case; hence the two index tuples differ, and \(r_i({\mathcal {A}}\circ {\mathcal {B}})=a_{ii_2^{(3)}\cdots i_m^{(3)}}b_{ii_2^{(3)}\cdots i_m^{(3)}}\) for some \((i_2^{(3)},\ldots ,i_m^{(3)})\ne (i,\ldots ,i),~i_2^{(3)},\ldots , i_m^{(3)}\in [n].\)

Since \({\mathcal {A}}\) and \({\mathcal {B}}\) are nonnegative \({\mathcal {B}}\)-tensors, by Lemma 3.11 we have

$$\begin{aligned} a_{ii\cdots i}>(n^{m-1}-1)a_{ii_2^{(1)}\cdots i_m^{(1)}}-\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(1)},\ldots ,i_m^{(1)}) \end{array}}a_{ii_2\cdots i_m}\ge 0, \end{aligned}$$

and

$$\begin{aligned} b_{ii\cdots i}>(n^{m-1}-1)b_{ii_2^{(2)}\cdots i_m^{(2)}}-\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(2)},\ldots ,i_m^{(2)}) \end{array}}b_{ii_2\cdots i_m}\ge 0. \end{aligned}$$

Similar to the derivation of (1), we obtain

$$\begin{aligned} a_{ii\cdots i}b_{ii\cdots i}> & {} (n^{m-1}-1)a_{ii_2^{(1)}\cdots i_m^{(1)}}b_{ii_2^{(2)}\cdots i_m^{(2)}}-\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(2)},\ldots ,i_m^{(2)}) \end{array}}(a_{ii_2^{(1)}\cdots i_m^{(1)}}b_{ii_2\cdots i_m}) \\&+\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(1)},\ldots ,i_m^{(1)}) \end{array}}(a_{ii_2^{(1)}\cdots i_m^{(1)}}-a_{ii_2\cdots i_m})b_{ii_2^{(2)}\cdots i_m^{(2)}}\\&+\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n],j_2,\ldots ,j_m\in [n]\\ \delta _{ii_2\cdots i_m}=0,\delta _{ij_2\cdots j_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(1)},\ldots ,i_m^{(1)})\\ (j_2,\ldots ,j_m)\ne (i_2^{(2)},\ldots ,i_m^{(2)}) \end{array}}(a_{ii_2^{(1)}\cdots i_m^{(1)}}-a_{ii_2\cdots i_m})(b_{ii_2^{(2)}\cdots i_m^{(2)}}-b_{ij_2\cdots j_m})\\= & {} (n^{m-1}-1)a_{ii_2^{(1)}\cdots i_m^{(1)}}b_{ii_2^{(2)}\cdots i_m^{(2)}} \\&+\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(1)},\ldots ,i_m^{(1)}) \end{array}}(a_{ii_2^{(1)}\cdots i_m^{(1)}}-a_{ii_2\cdots i_m})b_{ii_2^{(2)}\cdots i_m^{(2)}}\\&-\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(2)},\ldots ,i_m^{(2)}) \end{array}}(a_{ii_2^{(1)}\cdots i_m^{(1)}}-a_{ii_2\cdots i_m})b_{ii_2\cdots i_m} \\&-\sum _{\begin{array}{c} i_2,\ldots , i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(2)},\ldots ,i_m^{(2)}) \end{array}}(a_{ii_2\cdots i_m}b_{ii_2\cdots i_m})\\&+\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n],j_2,\ldots ,j_m\in [n]\\ \delta _{ii_2\cdots i_m}=0,\delta _{ij_2\cdots j_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(1)},\ldots ,i_m^{(1)})\\ (j_2,\ldots ,j_m)\ne (i_2^{(2)},\ldots ,i_m^{(2)}) \end{array}}(a_{ii_2^{(1)}\cdots i_m^{(1)}}-a_{ii_2\cdots i_m})(b_{ii_2^{(2)}\cdots i_m^{(2)}}-b_{ij_2\cdots j_m})\\= & {} (n^{m-1}-1)a_{ii_2^{(1)}\cdots i_m^{(1)}}b_{ii_2^{(2)}\cdots i_m^{(2)}}-\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(2)},\ldots ,i_m^{(2)}) \end{array}}(a_{ii_2\cdots i_m}b_{ii_2\cdots i_m})\\&+(a_{ii_2^{(1)}\cdots i_m^{(1)}}-a_{ii_2^{(2)}\cdots i_m^{(2)}})b_{ii_2^{(2)}\cdots i_m^{(2)}}-(a_{ii_2^{(1)}\cdots i_m^{(1)}}-a_{ii_2^{(1)}\cdots i_m^{(1)}})b_{ii_2^{(1)}\cdots i_m^{(1)}}\\&+\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(1)},\ldots ,i_m^{(1)})\\ (i_2,\ldots ,i_m)\ne (i_2^{(2)},\ldots ,i_m^{(2)}) \end{array}}(a_{ii_2^{(1)}\cdots i_m^{(1)}}-a_{ii_2\cdots i_m})(b_{ii_2^{(2)}\cdots i_m^{(2)}}-b_{ii_2\cdots i_m})\\&+\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n],j_2,\ldots ,j_m\in [n]\\ \delta _{ii_2\cdots i_m}=0,\delta _{ij_2\cdots j_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(1)},\ldots ,i_m^{(1)})\\ (j_2,\ldots ,j_m)\ne (i_2^{(2)},\ldots ,i_m^{(2)}) \end{array}}(a_{ii_2^{(1)}\cdots i_m^{(1)}}-a_{ii_2\cdots i_m})(b_{ii_2^{(2)}\cdots i_m^{(2)}}-b_{ij_2\cdots j_m})\\= & {} n^{m-1}a_{ii_2^{(1)}\cdots i_m^{(1)}}b_{ii_2^{(2)}\cdots i_m^{(2)}} \\&-\left[ \sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(2)},\ldots ,i_m^{(2)}) \end{array}}(a_{ii_2\cdots i_m}b_{ii_2\cdots i_m})+a_{ii_2^{(2)}\cdots i_m^{(2)}}b_{ii_2^{(2)}\cdots i_m^{(2)}}\right] \\&+\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(1)},\ldots ,i_m^{(1)})\\ (i_2,\ldots ,i_m)\ne 
(i_2^{(2)},\ldots ,i_m^{(2)}) \end{array}}(a_{ii_2^{(1)}\cdots i_m^{(1)}}-a_{ii_2\cdots i_m})(b_{ii_2^{(2)}\cdots i_m^{(2)}}-b_{ii_2\cdots i_m})\\&+\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n],j_2,\ldots ,j_m\in [n]\\ \delta _{ii_2\cdots i_m}=0,\delta _{ij_2\cdots j_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(1)},\ldots ,i_m^{(1)})\\ (j_2,\ldots ,j_m)\ne (i_2^{(2)},\ldots ,i_m^{(2)}) \end{array}}(a_{ii_2^{(1)}\cdots i_m^{(1)}}-a_{ii_2\cdots i_m})(b_{ii_2^{(2)}\cdots i_m^{(2)}}-b_{ij_2\cdots j_m})\\= & {} n^{m-1}a_{ii_2^{(1)}\cdots i_m^{(1)}}b_{ii_2^{(2)}\cdots i_m^{(2)}}-\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n] \\ \delta _{ii_2\cdots i_m}=0 \end{array}}(a_{ii_2\cdots i_m}b_{ii_2\cdots i_m})\\&+\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(1)},\ldots ,i_m^{(1)})\\ (i_2,\ldots ,i_m)\ne (i_2^{(2)},\ldots ,i_m^{(2)}) \end{array}}(a_{ii_2^{(1)}\cdots i_m^{(1)}}-a_{ii_2\cdots i_m})(b_{ii_2^{(2)}\cdots i_m^{(2)}}-b_{ii_2\cdots i_m})\\&+\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n],j_2,\ldots ,j_m\in [n]\\ \delta _{ii_2\cdots i_m}=0,\delta _{ij_2\cdots j_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(1)},\ldots ,i_m^{(1)})\\ (j_2,\ldots ,j_m)\ne (i_2^{(2)},\ldots ,i_m^{(2)}) \end{array}}(a_{ii_2^{(1)}\cdots i_m^{(1)}}-a_{ii_2\cdots i_m})(b_{ii_2^{(2)}\cdots i_m^{(2)}}-b_{ij_2\cdots j_m})\\= & {} n^{m-1}a_{ii_2^{(1)}\cdots i_m^{(1)}}b_{ii_2^{(2)}\cdots i_m^{(2)}} \\&-\left[ \sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(3)},\ldots ,i_m^{(3)}) \end{array}}(a_{ii_2\cdots i_m}b_{ii_2\cdots i_m})+a_{ii_2^{(3)}\cdots i_m^{(3)}}b_{ii_2^{(3)}\cdots i_m^{(3)}}\right] \\&+\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(1)},\ldots ,i_m^{(1)})\\ (i_2,\ldots ,i_m)\ne (i_2^{(2)},\ldots ,i_m^{(2)}) \end{array}}(a_{ii_2^{(1)}\cdots i_m^{(1)}}-a_{ii_2\cdots i_m})(b_{ii_2^{(2)}\cdots i_m^{(2)}}-b_{ii_2\cdots i_m})\\&+\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n],j_2,\ldots ,j_m\in [n]\\ \delta _{ii_2\cdots i_m}=0,\delta _{ij_2\cdots j_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(1)},\ldots ,i_m^{(1)})\\ (j_2,\ldots ,j_m)\ne (i_2^{(2)},\ldots ,i_m^{(2)}) \end{array}}(a_{ii_2^{(1)}\cdots i_m^{(1)}}-a_{ii_2\cdots i_m})(b_{ii_2^{(2)}\cdots i_m^{(2)}}-b_{ij_2\cdots j_m})\\= & {} (n^{m-1}-1)a_{ii_2^{(3)}\cdots i_m^{(3)}}b_{ii_2^{(3)}\cdots i_m^{(3)}}-\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(3)},\ldots ,i_m^{(3)}) \end{array}}(a_{ii_2\cdots i_m}b_{ii_2\cdots i_m})\\&+\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n], \delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(1)},\ldots ,i_m^{(1)})\\ (i_2,\ldots ,i_m)\ne (i_2^{(2)},\ldots ,i_m^{(2)}) \end{array}}(a_{ii_2^{(1)}\cdots i_m^{(1)}}-a_{ii_2\cdots i_m})(b_{ii_2^{(2)}\cdots i_m^{(2)}}-b_{ii_2\cdots i_m})\\&+\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n],j_2,\ldots ,j_m\in [n]\\ \delta _{ii_2\cdots i_m}=0,\delta _{ij_2\cdots j_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(1)},\ldots ,i_m^{(1)})\\ (j_2,\ldots ,j_m)\ne (i_2^{(2)},\ldots ,i_m^{(2)}) \end{array}}(a_{ii_2^{(1)}\cdots i_m^{(1)}}-a_{ii_2\cdots i_m})(b_{ii_2^{(2)}\cdots i_m^{(2)}}-b_{ij_2\cdots j_m})\\&+\,n^{m-1}a_{ii_2^{(1)}\cdots i_m^{(1)}}b_{ii_2^{(2)}\cdots i_m^{(2)}}-n^{m-1}a_{ii_2^{(3)}\cdots i_m^{(3)}}b_{ii_2^{(3)}\cdots i_m^{(3)}}. \end{aligned}$$

Thus,

$$\begin{aligned} a_{ii\cdots i}b_{ii\cdots i} > (n^{m-1}-1)a_{ii_2^{(3)}\cdots i_m^{(3)}}b_{ii_2^{(3)}\cdots i_m^{(3)}}-\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n],\delta _{ii_2\cdots i_m}=0 \\ (i_2,\ldots ,i_m)\ne (i_2^{(3)},\ldots ,i_m^{(3)}) \end{array}}(a_{ii_2\cdots i_m}b_{ii_2\cdots i_m}), \end{aligned}$$

i.e.,

$$\begin{aligned} a_{ii\cdots i}b_{ii\cdots i} >(n^{m-1}-1)r_i({\mathcal {A}}\circ {\mathcal {B}})-\sum _{\begin{array}{c} i_2,\ldots ,i_m\in [n],\delta _{ii_2\cdots i_m}=0\\ (i_2,\ldots ,i_m)\ne (i_2^{(3)},\ldots ,i_m^{(3)}) \end{array}}(a_{ii_2\cdots i_m}b_{ii_2\cdots i_m}), \end{aligned}$$

and \(r_i({\mathcal {A}}\circ {\mathcal {B}})=a_{ii_2^{(3)}\cdots i_m^{(3)}}b_{ii_2^{(3)}\cdots i_m^{(3)}}.\) Summarizing the above analysis, we conclude from Lemma 3.11 that \({\mathcal {A}}\circ {\mathcal {B}}\ge 0\) is a \({\mathcal {B}}\)-tensor. The proof is completed. \(\square \)

In [28], Qi and Song proved that an even-order real symmetric \({\mathcal {B}}\)-tensor is positive definite. Hence, the following corollary is easily obtained by Theorem 3.13.

Corollary 3.14

Let \({\mathcal {A}}=(a_{i_1i_2\cdots i_m}),~{\mathcal {B}}=(b_{i_1i_2\cdots i_m})\in \mathbb {S}^{[m,n]},\) and m be even. Then, \({\mathcal {A}}\circ {\mathcal {B}}\) is positive definite if \({\mathcal {A}}\) and \({\mathcal {B}}\) satisfy the conditions of Theorem 3.13.

We give the following example to verify the validity of Corollary 3.14.

Example 3.15

Consider two \({\mathcal {B}}\)-tensors \({\mathcal {A}}=(a_{ijkl})\) and \({\mathcal {B}}=(b_{ijkl})\in {\mathbb {R}}^{[4,2]}\) defined as follows:

$$\begin{aligned} a_{1111}= & {} a_{2222}=10,a_{1112}=a_{1121}=a_{1211}=a_{2111}=1,\quad \text {other } a_{ijkl}=0.\\ b_{1111}= & {} b_{2222}=15,b_{1112}=b_{1121}=b_{1211}=b_{2111}=2,\quad \text {other }b_{ijkl}=0. \end{aligned}$$

Obviously, \({\mathcal {A}}\ge 0\) and \({\mathcal {B}}\ge 0\) are two even-order real symmetric tensors. By Theorem 2.1 and Corollary 3.14, we know that \({\mathcal {A}}\circ {\mathcal {B}}\in \mathbb {S}^{[4,2]}\) is positive definite.

4 Some Inequalities on the Spectral Radius of Hadamard Product of Nonnegative Tensors

As an application of the Hadamard product of tensors, in this section we use it to give some inequalities for the spectral radius of nonnegative tensors.

4.1 An Upper Bound for \(\rho ({\mathcal {A}}\circ {\mathcal {B}})\)

Let \({\mathcal {A}}\in {\mathbb {C}}^{[k,n]}\) and \({\mathcal {B}}\in {\mathbb {C}}^{[k,m]}\). Define the direct product \({\mathcal {A}}\overline{\otimes }{\mathcal {B}}\) to be the following tensor of order k and dimension nm (the set of subscripts is taken as \([n]\times [m]\) in the lexicographic order): \(({\mathcal {A}}\overline{\otimes }{\mathcal {B}})_{(i_1,j_1)(i_2,j_2)\cdots (i_k,j_k)}=a_{i_1i_2\cdots i_k}b_{j_1j_2\cdots j_k}\) (see [29]).

Lemma 4.1

[29] Let \({\mathcal {A}},\)\({\mathcal {B}}\in {\mathbb {R}}^{[m,n]},\lambda \in \sigma ({\mathcal {A}}),~\mu \in \sigma ({\mathcal {B}})\), and \(x,y\in {\mathbb {C}}^n\) be the eigenvectors of \({\mathcal {A}}\) and \({\mathcal {B}}\) corresponding to \(\lambda \) and \(\mu ,\) respectively. Then, \(\lambda \mu \in \sigma ({\mathcal {A}}\overline{\otimes }{\mathcal {B}})\) and \(x\overline{\otimes } y\) is the eigenvector of \({\mathcal {A}}\overline{\otimes }{\mathcal {B}}\) corresponding to \(\lambda \mu \).

A tensor \(\mathcal {C}\in {\mathbb {R}}^{[m,r]}\) is called a principal sub-tensor [27] of a tensor \({\mathcal {A}}=(a_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]}\) (\(1\le r\le n\)) if there is a set J that is composed of r elements in [n] such that \(\mathcal {C}=(a_{i_1i_2\cdots i_m})~\mathrm{for~all}~i_1,~i_2,\ldots ,\)\(i_m\in J.\)

Lemma 4.2

[30] Let \({\mathcal {A}}\in {\mathbb {R}}^{[m,n]},~{\mathcal {A}}\ge 0,\) and suppose that \(\overline{{\mathcal {A}}}\) is an arbitrary principal sub-tensor of \({\mathcal {A}}\). Then, \(\rho (\overline{{\mathcal {A}}})\le \rho ({\mathcal {A}})\).

In [10], an upper bound on the spectral radius of the Hadamard product of nonnegative matrices was given by means of nonnegative matrix theory: let \(A=(a_{ij})\ge 0\) and \(B=(b_{ij})\ge 0\) be two \(n\times n\) matrices; then \(\rho (A\circ B)\le \rho (A)\rho (B).\) We next extend this well-known result from matrices to higher-order tensors.

Theorem 4.3

Let \({\mathcal {A}},{\mathcal {B}}\in {\mathbb {R}}^{[m,n]},\) and \({\mathcal {A}},{\mathcal {B}}\ge 0.\) Then, \(\rho ({\mathcal {A}}\circ {\mathcal {B}})\le \rho ({\mathcal {A}})\rho ({\mathcal {B}})\).

Proof

We have \(\rho ({\mathcal {A}}\overline{\otimes }{\mathcal {B}})= \rho ({\mathcal {A}})\rho ({\mathcal {B}})\) by Lemma 4.1. Since \({\mathcal {A}}\overline{\otimes }{\mathcal {B}}\ge 0\) and \({\mathcal {A}}\circ {\mathcal {B}}\) is a principal sub-tensor of \({\mathcal {A}}\overline{\otimes }{\mathcal {B}}\), then it follows from Lemma 4.2 that \(\rho ({\mathcal {A}}\circ {\mathcal {B}})\le \rho ({\mathcal {A}}\overline{\otimes }{\mathcal {B}})\). Therefore, \(\rho ({\mathcal {A}}\circ {\mathcal {B}})\le \rho ({\mathcal {A}})\rho ({\mathcal {B}})\). The proof is completed. \(\square \)
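
The containment used in this proof is easy to check numerically: forming \({\mathcal {A}}\overline{\otimes }{\mathcal {B}}\) explicitly and restricting it to the index pairs \((i,i)\) recovers \({\mathcal {A}}\circ {\mathcal {B}}\). A small sketch, assuming the lexicographic ordering of pairs described above (the construction and helper names are ours):

```python
import numpy as np

def direct_product(A, B):
    """Direct product A ⊗̄ B of two order-k tensors of dimensions n and m:
    an order-k tensor of dimension n*m with entries a_{i1...ik} * b_{j1...jk}."""
    k, n, m = A.ndim, A.shape[0], B.shape[0]
    T = np.multiply.outer(A, B)                          # shape (n,)*k + (m,)*k
    order = [ax for pair in zip(range(k), range(k, 2 * k)) for ax in pair]
    return T.transpose(order).reshape((n * m,) * k)      # merge each pair (i_l, j_l) lexicographically

rng = np.random.default_rng(0)
A, B = rng.random((2, 2, 2)), rng.random((2, 2, 2))
T = direct_product(A, B)
J = [i * 2 + i for i in range(2)]                        # positions of the pairs (i, i)
print(np.allclose(T[np.ix_(J, J, J)], A * B))            # True: A∘B is a principal sub-tensor of A ⊗̄ B
```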

The following numerical example shows that the upper bound obtained by Theorem 4.3 can improve the upper bound of Theorem 1.1.

Example 4.4

Consider a tensor \(\mathcal {C}=(c_{i_1i_2i_3i_4})\in {\mathbb {R}}^{[4,2]}\) defined as follows:

$$\begin{aligned} c_{1111}= & {} 1,\quad c_{1112}=2,\quad c_{1121}=c_{1211}=3,\\ c_{1122}= & {} 8,\quad c_{2222}=2,\quad \hbox {other }c_{i_1i_2i_3i_4}=0. \end{aligned}$$

We now take \({\mathcal {A}}=(a_{i_1i_2i_3i_4})\), \({\mathcal {B}}=(b_{i_1i_2i_3i_4})\in {\mathbb {R}}^{[4,2]} \), where

$$\begin{aligned} a_{1111}= & {} a_{1112}=a_{1211}=1,\quad a_{1121}=3,\\ a_{1122}= & {} 4,\quad a_{2222}=2,\quad \hbox {other }a_{i_1i_2i_3i_4}=0.\\ b_{1111}= & {} b_{1121}=1,\quad b_{1112}=2,\quad b_{1211}=3,\\ b_{1122}= & {} 2,\quad b_{2222}=1,\quad \hbox {other }b_{i_1i_2i_3i_4}=0. \end{aligned}$$

Then, \(\mathcal {C}={\mathcal {A}}\circ {\mathcal {B}}\). By the equality (1.1) and using Matlab, we get \(\rho ({\mathcal {A}})=2\) and \(\rho ({\mathcal {B}})=1\), respectively. By Theorem 1.1, we have \(\rho (\mathcal {C})\le 17.\) By Theorem 4.3, we have

$$\begin{aligned} \rho (\mathcal {C})=\rho ({\mathcal {A}}\circ {\mathcal {B}})\le \rho ({\mathcal {A}})\rho ({\mathcal {B}})=2. \end{aligned}$$

Therefore, in this case the upper bound on the spectral radius of \(\mathcal {C}\) given by Theorem 4.3 is sharper than that given by Theorem 1.1.
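
The spectral radii quoted above can be reproduced (approximately) with an NQZ-type power iteration. The sketch below is our own rough implementation: it perturbs the tensor by a small \(\epsilon >0\) to keep the iterates positive, and by Lemma 4.7 the two printed values bracket the spectral radius of the perturbed tensor, which tends to the true spectral radius as \(\epsilon \rightarrow 0\) (Lemma 4.6).

```python
import numpy as np

def tensor_apply(A, x):
    y = A
    for _ in range(A.ndim - 1):
        y = y @ x
    return y

def rho_bracket(A, iters=5000, eps=1e-12):
    """Lower/upper brackets (Lemma 4.7) for the spectral radius of A + eps, A nonnegative."""
    m, n = A.ndim, A.shape[0]
    Ae = A + eps                              # small positive shift keeps every iterate positive
    x = np.ones(n)
    for _ in range(iters):
        y = tensor_apply(Ae, x) ** (1.0 / (m - 1))
        x = y / np.linalg.norm(y)
    q = tensor_apply(Ae, x) / x ** (m - 1)
    return q.min(), q.max()

A = np.zeros((2, 2, 2, 2)); B = np.zeros((2, 2, 2, 2))
A[0, 0, 0, 0] = A[0, 0, 0, 1] = A[0, 1, 0, 0] = 1; A[0, 0, 1, 0] = 3
A[0, 0, 1, 1] = 4; A[1, 1, 1, 1] = 2
B[0, 0, 0, 0] = B[0, 0, 1, 0] = 1; B[0, 0, 0, 1] = 2; B[0, 1, 0, 0] = 3
B[0, 0, 1, 1] = 2; B[1, 1, 1, 1] = 1
print(rho_bracket(A))        # should settle near rho(A) = 2
print(rho_bracket(B))        # should settle near rho(B) = 1 (up to the epsilon-shift)
print(rho_bracket(A * B))    # near 2, consistent with Theorem 4.3
```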

4.2 Some Inequalities on the Spectral Radius of the Hadamard Products of the Hadamard Powers for Nonnegative Tensors

Also in [10], some inequalities on the spectral radius of Hadamard products of Hadamard powers of nonnegative matrices were given. In this subsection we generalize these interesting results to higher-order tensors. To prove our theoretical results, we need two classical inequalities and some useful lemmas.

  • Hölder's inequality: let \(a_i\) and \(b_i\) be nonnegative numbers for \(i=1,2,\ldots ,n\), and let \(0<\alpha <1\). Then,

    $$\begin{aligned} \sum _{i\in [n]}a_i^\alpha b_i^{1-\alpha }\le \left( \sum _{i\in [n]}a_i\right) ^\alpha \left( \sum _{i\in [n]}b_i\right) ^{1-\alpha }. \end{aligned}$$
  • Minkowski's inequality: let \(a_i\) be nonnegative numbers for \(i=1,2,\ldots ,n\), and let \(\alpha \ge 1\). Then,

    $$\begin{aligned} \sum _{i\in [n]}a_i^\alpha \le \left( \sum _{i\in [n]} a_i\right) ^\alpha . \end{aligned}$$

Lemma 4.5

[1, 7] Let \({\mathcal {A}}=(a_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]}\) be irreducible and \({\mathcal {A}}\ge 0\). Then, \(\rho ({\mathcal {A}})>0\) with an entrywise positive eigenvector x,  i.e.,  \(x>0,\) corresponding to it.

Lemma 4.6

[2, 8] Let \({\mathcal {A}}=(a_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]},\)\({\mathcal {A}}\ge 0,\) and \(\epsilon >0\) be a sufficiently small number. If \({\mathcal {A}}_\epsilon ={\mathcal {A}}+{\mathcal {E}},\) where \({\mathcal {E}}\) denotes the tensor with every entry being \(\epsilon ,\) then

$$\begin{aligned} \lim _{\epsilon \rightarrow 0}\rho ({\mathcal {A}}_\epsilon )=\rho ({\mathcal {A}}). \end{aligned}$$

Lemma 4.7

[8] Let \({\mathcal {A}}=(a_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]},\)\({\mathcal {A}}\ge 0,\) and x be a real positive vector. Suppose that there exist \(\alpha ,\beta \ge 0\) such that \(\alpha x^{[m-1]}\le {\mathcal {A}}x^{m-1}\le \beta x^{[m-1]};\) then \(\alpha \le \rho ({\mathcal {A}})\le \beta \).

The main results of this subsection are given as follows.

Theorem 4.8

Let \({\mathcal {A}}=(a_{i_1i_2\cdots i_m}),{\mathcal {B}}=(b_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]},\)\({\mathcal {A}},{\mathcal {B}}\ge 0,\)\(\alpha \in {\mathbb {R}},\) and \(0\le \alpha \le 1\). Then,

$$\begin{aligned} \rho \left( {\mathcal {A}}^{(\alpha )}\circ {\mathcal {B}}^{(1-\alpha )}\right) \le \rho ({\mathcal {A}})^\alpha \rho ({\mathcal {B}})^{1-\alpha }. \end{aligned}$$
(4.1)

Proof

The inequality (4.1) is clearly valid for \(\alpha =0\) or \(\alpha =1\), and hence we only prove that the inequality (4.1) holds for \(0<\alpha <1\). There are two cases as follows.

Case I: If \({\mathcal {A}}\) and \({\mathcal {B}}\) are irreducible, then it follows from Lemma 4.5 that there exist \(x=(x_1,x_2,\ldots ,x_n)^\mathrm{T}>0\) and \(y=(y_1,y_2,\ldots ,y_n)^\mathrm{T}>0\) such that \({\mathcal {A}}x^{m-1}=\rho ({\mathcal {A}})x^{[m-1]}\) and \({\mathcal {B}}y^{m-1} =\rho ({\mathcal {B}})y^{[m-1]}.\) Hence, for all \(i\in [n]\),

$$\begin{aligned} \sum _{{i_2,\ldots , i_m=1}}^na_{ii_2\cdots i_m}x_{i_2}\cdots x_{i_m}=\rho ({\mathcal {A}})x_i^{m-1} \end{aligned}$$

and

$$\begin{aligned} \sum _{{i_2,\ldots , i_m=1}}^nb_{ii_2\cdots i_m}y_{i_2}\cdots y_{i_m}=\rho ({\mathcal {B}})y_i^{m-1}. \end{aligned}$$

Let \(z=(z_1,z_2,\ldots ,z_n)\in {\mathbb {R}}^n\) with \(z=x^{[\alpha ]}\circ y^{[1-\alpha ]};\) then by Hölder's inequality, for all \(i\in [n]\),

$$\begin{aligned} \left( \left( {\mathcal {A}}^{(\alpha )}\circ {\mathcal {B}}^{(1-\alpha )}\right) z^{m-1}\right) _i= & {} \sum _{{i_2,\ldots , i_m=1}}^na_{ii_2\cdots i_m}^\alpha b_{ii_2\cdots i_m}^{1-\alpha }z_{i_2}\cdots z_{i_m}\\= & {} \sum _{{i_2,\ldots , i_m=1}}^na_{ii_2\cdots i_m}^\alpha b_{ii_2\cdots i_m}^{1-\alpha }x_{i_2}^\alpha y_{i_2}^{1-\alpha }\cdots x_{i_m}^\alpha y_{i_m}^{1-\alpha }\\= & {} \sum _{{i_2,\ldots , i_m=1}}^n\left( (a_{ii_2\cdots i_m}x_{i_2}\cdots x_{i_m})^\alpha (b_{ii_2\cdots i_m}y_{i_2}\cdots y_{i_m})^{1-\alpha }\right) \\\le & {} \left( \sum _{{i_2,\ldots , i_m=1}}^na_{ii_2\cdots i_m}x_{i_2}\cdots x_{i_m}\right) ^\alpha \\&\times \left( \sum _{{i_2,\ldots , i_m=1}}^nb_{ii_2\cdots i_m}y_{i_2}\cdots y_{i_m}\right) ^{1-\alpha }\\= & {} \left( \rho ({\mathcal {A}})x_i^{m-1}\right) ^\alpha \left( \rho ({\mathcal {B}}) y_i^{m-1}\right) ^{1-\alpha }\\= & {} \rho ({\mathcal {A}})^\alpha \rho ({\mathcal {B}})^{1-\alpha }z_i^{m-1}. \end{aligned}$$

By Lemma 4.7, we have

$$\begin{aligned} \rho \left( {\mathcal {A}}^{(\alpha )}\circ {\mathcal {B}}^{(1-\alpha )}\right) \le \rho ({\mathcal {A}})^\alpha \rho ({\mathcal {B}})^{1-\alpha }. \end{aligned}$$

Case II: If at least one of \({\mathcal {A}}\) and \({\mathcal {B}}\) is reducible, then without loss of generality we may assume that both \({\mathcal {A}}\) and \({\mathcal {B}}\) are reducible. Define the all-ones tensor \({\mathcal {T}}=(t_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]}\), i.e., \(t_{i_1i_2\cdots i_m}=1\) for all \(i_1,i_2,\ldots ,i_m\in [n]\); then both \({\mathcal {A}}_\epsilon ={\mathcal {A}}+\epsilon {\mathcal {T}}\) and \({\mathcal {B}}_\epsilon ={\mathcal {B}}+\epsilon {\mathcal {T}}\) are irreducible nonnegative tensors for any sufficiently small positive real number \(\epsilon \), which implies that \({\mathcal {A}}_\epsilon ^{(\alpha )}\) and \({\mathcal {B}}_\epsilon ^{(1-\alpha )}\) are also irreducible nonnegative tensors. We now substitute \({\mathcal {A}}_\epsilon ^{(\alpha )}\) and \({\mathcal {B}}_\epsilon ^{(1-\alpha )}\) for \({\mathcal {A}}^{(\alpha )}\) and \({\mathcal {B}}^{(1-\alpha )}\) in Case I, which yields

$$\begin{aligned} \rho \left( {\mathcal {A}}_\epsilon ^{(\alpha )}\circ {\mathcal {B}}_\epsilon ^{(1-\alpha )}\right) \le \rho ({\mathcal {A}}_\epsilon )^\alpha \rho ({\mathcal {B}}_\epsilon )^{1-\alpha }. \end{aligned}$$

Letting \(\epsilon \rightarrow 0\), the result follows from Lemma 4.6. The proof is completed. \(\square \)
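Inequality (4.1) is easy to test numerically. The sketch below is ours and only illustrative (it assumes NumPy; the helpers axm and rho are hypothetical names, and rho is a basic normalization-type power iteration that approximates the spectral radius of an entrywise positive tensor, for which the componentwise ratios of Lemma 4.7 converge to \(\rho \)).

import numpy as np
from functools import reduce

def axm(A, x):
    # the vector A x^{m-1}
    return A.reshape(x.size, -1) @ reduce(np.kron, [x] * (A.ndim - 1))

def rho(A, iters=500):
    # crude power-type iteration for an entrywise positive tensor A:
    # iterate x <- (A x^{m-1})^{[1/(m-1)]} (normalized) and read off the ratios
    m, x = A.ndim, np.ones(A.shape[0])
    for _ in range(iters):
        y = axm(A, x) ** (1.0 / (m - 1))
        x = y / y.sum()
    return float(np.max(axm(A, x) / x ** (m - 1)))

rng = np.random.default_rng(1)
m, n = 3, 4
A = rng.random((n,) * m) + 0.01    # entrywise positive, hence irreducible
B = rng.random((n,) * m) + 0.01
for alpha in (0.25, 0.5, 0.75):
    lhs = rho(A ** alpha * B ** (1 - alpha))          # rho(A^(alpha) o B^(1-alpha))
    rhs = rho(A) ** alpha * rho(B) ** (1 - alpha)
    print(alpha, lhs <= rhs + 1e-8)                   # expected: True for each alpha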

Theorem 4.9

Let \({\mathcal {A}}=(a_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]}\), \({\mathcal {A}}\ge 0\), \(\alpha \in {\mathbb {R}}\), and \(\alpha \ge 1\). Then,

$$\begin{aligned} \rho \left( {\mathcal {A}}^{(\alpha )}\right) \le \rho ({\mathcal {A}})^\alpha . \end{aligned}$$

Proof

To prove the conclusion, we consider two cases as follows.

Case I: If \({\mathcal {A}}\) is irreducible, then it follows from Lemma 4.5 that there exists \(x=(x_1,x_2,\ldots ,x_n)^\mathrm{T}>0\) such that \({\mathcal {A}}x^{m-1}=\rho ({\mathcal {A}})x^{[m-1]}.\) Hence, for all \(i\in [n]\),

$$\begin{aligned} \sum _{{i_2,\ldots , i_m=1}}^na_{ii_2\cdots i_m}x_{i_2}\cdots x_{i_m}=\rho ({\mathcal {A}})x_i^{m-1}. \end{aligned}$$

Let \(z=x^{[\alpha ]}\in {\mathbb {R}}^n\), i.e., \(z_i=x_i^\alpha \) for \(i\in [n]\). Then, since \(\alpha \ge 1\) and all summands below are nonnegative, the inequality \(\sum _{i}a_i^\alpha \le \left( \sum _{i}a_i\right) ^\alpha \) displayed before Lemma 4.5 gives, for all \(i\in [n]\),

$$\begin{aligned} \left( {\mathcal {A}}^{(\alpha )}z^{m-1}\right) _i= & {} \sum _{{i_2,\ldots , i_m=1}}^na_{ii_2\cdots i_m}^\alpha z_{i_2}\cdots z_{i_m}\\= & {} \sum _{{i_2,\ldots , i_m=1}}^na_{ii_2\cdots i_m}^\alpha x_{i_2}^\alpha \cdots x_{i_m}^\alpha \\= & {} \sum _{{i_2,\ldots , i_m=1}}^n(a_{ii_2\cdots i_m}x_{i_2}\cdots x_{i_m})^\alpha \\\le & {} \left( \sum _{{i_2,\ldots , i_m=1}}^na_{ii_2\cdots i_m}x_{i_2}\cdots x_{i_m}\right) ^\alpha \\= & {} \left( \rho ({\mathcal {A}})x_i^{m-1}\right) ^\alpha \\= & {} \rho ({\mathcal {A}})^\alpha z_i^{m-1}. \end{aligned}$$

Thus, \(\rho \left( {\mathcal {A}}^{(\alpha )}\right) \le \rho ({\mathcal {A}})^\alpha \) by Lemma 4.7.

Case II: Suppose that \({\mathcal {A}}\) is reducible. Define the all-ones tensor \({\mathcal {T}}=(t_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]}\), i.e., \(t_{i_1i_2\cdots i_m}=1\) for all \(i_1,i_2,\ldots ,i_m\in [n]\); then \({\mathcal {A}}_\epsilon ={\mathcal {A}}+\epsilon {\mathcal {T}}\) is an irreducible nonnegative tensor for any sufficiently small positive real number \(\epsilon \), which implies that \({\mathcal {A}}_\epsilon ^{(\alpha )}\) is also an irreducible nonnegative tensor. We now substitute \({\mathcal {A}}_\epsilon ^{(\alpha )}\) for \({\mathcal {A}}^{(\alpha )}\) in Case I. Then, arguing as in Case II of the proof of Theorem 4.8, the result follows. The proof is completed. \(\square \)

By Theorems 4.8 and 4.9, we obtain the following inequality for \(\rho ({\mathcal {A}}_1^{(\alpha _1)}\circ {\mathcal {A}}_2^{(\alpha _2)}\circ \cdots \circ {\mathcal {A}}_k^{(\alpha _k)})\), which is a higher-order generalization of Theorem 5.7.7 in [10].

Theorem 4.10

Let \({\mathcal {A}}_i\in {\mathbb {R}}^{[m,n]}\), \({\mathcal {A}}_i\ge 0\), \(\alpha _i\ge 0\), \(i=1,2,\ldots ,k,\) and \(\alpha _1+\alpha _2+\cdots +\alpha _k\ge 1\). Then,

$$\begin{aligned} \rho \left( {\mathcal {A}}_1^{(\alpha _1)}\circ {\mathcal {A}}_2^{(\alpha _2)}\circ \cdots \circ {\mathcal {A}}_k^{(\alpha _k)}\right) \le \rho ({\mathcal {A}}_1)^{\alpha _1}\rho ({\mathcal {A}}_2)^{\alpha _2}\cdots \rho ({\mathcal {A}}_k)^{\alpha _k}. \end{aligned}$$
(4.2)

Proof

We first consider the case \(\alpha _1+\alpha _2+\cdots +\alpha _k=1\) and argue by induction on k. For \(k=2\), the inequality (4.2) is exactly Theorem 4.8. Suppose that the inequality (4.2) holds for \(k-1\) tensors, i.e.,

$$\begin{aligned} \rho \left( {\mathcal {A}}_1^{(\gamma _1)}\circ {\mathcal {A}}_2^{(\gamma _2)}\circ \cdots \circ {\mathcal {A}}_{k-1}^{(\gamma _{k-1})}\right) \le \rho ({\mathcal {A}}_1)^{\gamma _1}\rho ({\mathcal {A}}_2)^{\gamma _2}\cdots \rho ({\mathcal {A}}_{k-1})^{\gamma _{k-1}}, \end{aligned}$$

where \(\gamma _i\ge 0\), \(i=1,2,\ldots ,k-1\), and \(\gamma _1+\gamma _2+\cdots +\gamma _{k-1}=1\).

Let \({\mathcal {B}}\ge 0\), \({\mathcal {B}}^{(1-\alpha _k)}\equiv {\mathcal {A}}_1^{(\alpha _1)}\circ {\mathcal {A}}_2^{(\alpha _2)}\circ \cdots \circ {\mathcal {A}}_{k-1}^{(\alpha _{k-1})}\) and \(\beta _j\equiv \alpha _j/(1-\alpha _{k}),~j=1,2,\ldots ,k-1\), then \(\beta _j\ge 0\), \(\beta _1+\beta _2+\cdots +\beta _{k-1}=1\) and \({\mathcal {B}}={\mathcal {A}}_1^{(\beta _1)}\circ {\mathcal {A}}_2^{(\beta _2)}\circ \cdots \circ {\mathcal {A}}_{k-1}^{(\beta _{k-1})}\). By Theorem 4.8, we have

$$\begin{aligned} \rho \left( {\mathcal {A}}_1^{(\alpha _1)}\circ {\mathcal {A}}_2^{(\alpha _2)}\circ \cdots \circ {\mathcal {A}}_k^{(\alpha _k)}\right)= & {} \rho \left( {\mathcal {B}}^{(1-\alpha _k)}\circ {\mathcal {A}}_k^{(\alpha _k)}\right) \\\le & {} \rho ({\mathcal {B}})^{1-\alpha _k}\rho ({\mathcal {A}}_k)^{\alpha _k}\\= & {} \rho \left( {\mathcal {A}}_1^{(\beta _1)}\circ {\mathcal {A}}_2^{(\beta _2)}\circ \cdots \circ {\mathcal {A}}_{k-1}^{(\beta _{k-1})}\right) ^{1-\alpha _k} \rho ({\mathcal {A}}_k)^{\alpha _k}\\\le & {} \left( \rho ({\mathcal {A}}_1)^{\beta _1}\rho ({\mathcal {A}}_2)^{\beta _2} \cdots \rho ({\mathcal {A}}_{k-1})^{\beta _{k-1}}\right) ^{1-\alpha _k} \rho ({\mathcal {A}}_k)^{\alpha _k}\\= & {} \rho ({\mathcal {A}}_1)^{\alpha _1}\rho ({\mathcal {A}}_2)^{\alpha _2} \cdots \rho ({\mathcal {A}}_k)^{\alpha _k}. \end{aligned}$$

We now consider the case in which \(\varrho =\alpha _1+\alpha _2+\cdots +\alpha _k>1\), and set \(\omega _i=\alpha _i/\varrho \) for \(i=1,2,\ldots ,k\). Note that \(\omega _i\ge 0\) and \(\omega _1+\omega _2+\cdots +\omega _k=1\). Let \(\mathcal {C}={\mathcal {A}}_1^{(\omega _1)}\circ {\mathcal {A}}_2^{(\omega _2)}\circ \cdots \circ {\mathcal {A}}_k^{(\omega _k)}\), then \(\mathcal {C}^{(\varrho )}={\mathcal {A}}_1^{(\alpha _1)}\circ {\mathcal {A}}_2^{(\alpha _2)}\circ \cdots \circ {\mathcal {A}}_k^{(\alpha _k)}\). From Theorem 4.9, it follows that

$$\begin{aligned}&\rho \left( {\mathcal {A}}_1^{(\alpha _1)}\circ {\mathcal {A}}_2^{(\alpha _2)}\circ \cdots \circ {\mathcal {A}}_k^{(\alpha _k)}\right) \\&\quad =\rho \left( \mathcal {C}^{(\varrho )}\right) \le \rho (\mathcal {C})^\varrho \\&\quad =\rho \left( {\mathcal {A}}_1^{(\omega _1)}\circ {\mathcal {A}}_2^{(\omega _2)}\circ \cdots \circ {\mathcal {A}}_k^{(\omega _k)}\right) ^\varrho \\&\quad \le \left( \rho ({\mathcal {A}}_1)^{\omega _1}\rho ({\mathcal {A}}_2)^{\omega _2} \cdots \rho ({\mathcal {A}}_k)^{\omega _k}\right) ^\varrho \\&\quad =\rho ({\mathcal {A}}_1)^{\alpha _1}\rho ({\mathcal {A}}_2)^{\alpha _2} \cdots \rho ({\mathcal {A}}_k)^{\alpha _k}. \end{aligned}$$

Summarizing the above analysis, the inequality (4.2) holds. The proof is completed. \(\square \)

Remark 4.11

(1) When \(k=2\) and \(\alpha _1=\alpha _2=1\), Theorem 4.10 reduces to Theorem 4.3.

(2) Theorem 4.10 generalizes Theorem 4.9, since Theorem 4.10 reduces to Theorem 4.9 when \({\mathcal {A}}_1={\mathcal {A}}_2=\cdots ={\mathcal {A}}_k={\mathcal {A}}\) and \(\alpha _1+\alpha _2+\cdots +\alpha _k=\alpha \).
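The same kind of numerical check applies to inequality (4.2). The sketch below is again only illustrative (NumPy assumed; axm and rho are the same hypothetical helpers as in the sketch after Theorem 4.8, repeated here so the snippet runs on its own); taking all the tensors equal also illustrates Theorem 4.9.

import numpy as np
from functools import reduce

def axm(A, x):
    # the vector A x^{m-1}
    return A.reshape(x.size, -1) @ reduce(np.kron, [x] * (A.ndim - 1))

def rho(A, iters=500):
    # crude power-type iteration for an entrywise positive tensor
    m, x = A.ndim, np.ones(A.shape[0])
    for _ in range(iters):
        y = axm(A, x) ** (1.0 / (m - 1))
        x = y / y.sum()
    return float(np.max(axm(A, x) / x ** (m - 1)))

rng = np.random.default_rng(2)
m, n = 3, 4
alphas = (0.7, 0.5, 0.3)                              # alpha_1 + alpha_2 + alpha_3 = 1.5 > 1
tensors = [rng.random((n,) * m) + 0.01 for _ in alphas]
lhs = rho(np.prod([T ** a for T, a in zip(tensors, alphas)], axis=0))
rhs = np.prod([rho(T) ** a for T, a in zip(tensors, alphas)])
print(lhs <= rhs + 1e-8)                              # expected: True, as in (4.2)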

Finally, we give the following inequality for \(\rho ({\mathcal {A}}_1\circ {\mathcal {A}}_2\circ \cdots \circ {\mathcal {A}}_s)\) by extending Theorem 2.1 in [31].

Theorem 4.12

Let \({\mathcal {A}}_k=((a_k)_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]}\) and \({\mathcal {A}}_k\ge 0\), \(k=1,2,\ldots ,s\). Then,

$$\begin{aligned} \rho ({\mathcal {A}}_1\circ {\mathcal {A}}_2\circ \cdots \circ {\mathcal {A}}_s) \le \max _{i\in [n]}\left\{ \prod _{k=1}^s (a_k)_{ii\cdots i}+\prod _{k=1}^s\left( \rho ({\mathcal {A}}_k)-(a_k)_{ii\cdots i}\right) \right\} . \end{aligned}$$

Proof

To prove the conclusion, we consider two cases as follows.

Case I: Suppose that \({\mathcal {A}}_k\) is irreducible for every \(k=1,2,\ldots ,s\), so that each \({\mathcal {A}}_k\) is an irreducible nonnegative tensor. By Lemma 4.5, there exists \(u_k=((u_k)_1,(u_k)_2,\ldots ,(u_k)_n)^\mathrm{T}>0\) such that

$$\begin{aligned} {\mathcal {A}}_ku_k^{m-1}=\rho ({\mathcal {A}}_k)u_k^{[m-1]}. \end{aligned}$$

Thus,

$$\begin{aligned} (a_k)_{ii\cdots i}(u_k)_i^{m-1}+\sum _{\begin{array}{c} i_2,\ldots , i_m=1 \\ \delta _{ii_2\cdots i_m}=0 \end{array}}^n(a_k)_{ii_2\cdots i_m}(u_k)_{i_2}\cdots (u_k)_{i_m} =\rho ({\mathcal {A}}_k)(u_k)_i^{m-1} \end{aligned}$$

for all \(i\in [n]\), and hence for \((i_2,\ldots ,i_m)\ne (i,\ldots ,i)\) and \(j=2,\ldots ,s\),

$$\begin{aligned} (a_j)_{ii_2\cdots i_m}(u_j)_{i_2}\cdots (u_j)_{i_m}\le & {} \sum _{\begin{array}{c} i_2,\ldots , i_m=1 \\ \delta _{ii_2\cdots i_m}=0 \end{array}}^n(a_j)_{ii_2\cdots i_m}(u_j)_{i_2}\cdots (u_j)_{i_m} \\= & {} (\rho ({\mathcal {A}}_j)-(a_j)_{ii\cdots i})(u_j)_i^{m-1}, \end{aligned}$$

i.e., \((a_j)_{ii_2\cdots i_m}\le \frac{(\rho ({\mathcal {A}}_j)-(a_j)_{ii\cdots i})(u_j)_i^{m-1}}{(u_j)_{i_2}\cdots (u_j)_{i_m}}.\)

Denote \(\mathcal {C}=(c_{i_1i_2\cdots i_m})={\mathcal {A}}_1\circ {\mathcal {A}}_2\circ \cdots \circ {\mathcal {A}}_s\), and let \(z=(z_1,z_2,\ldots ,z_n)^\mathrm{T}\), where \(z_i=(u_1)_i(u_2)_i\cdots (u_s)_i\) for \(i\in [n]\). Then, for all \(i\in [n]\),

$$\begin{aligned} (\mathcal {C}z^{m-1})_i= & {} c_{ii\cdots i}z_i^{m-1}+\sum _{\begin{array}{c} i_2,\ldots , i_m=1 \\ \delta _{ii_2\cdots i_m}=0 \end{array}}^nc_{ii_2\cdots i_m}z_{i_2}\cdots z_{i_m}\\= & {} \prod _{k=1}^s (a_k)_{ii\cdots i}z_i^{m-1}+\sum _{\begin{array}{c} i_2,\ldots , i_m=1 \\ \delta _{ii_2\cdots i_m}=0 \end{array}}^n\prod _{k=1}^s (a_k)_{ii_2\cdots i_m}z_{i_2}\cdots z_{i_m}\\\le & {} \prod _{k=1}^s (a_k)_{ii\cdots i}z_i^{m-1}+\sum _{\begin{array}{c} i_2,\ldots , i_m=1 \\ \delta _{ii_2\cdots i_m}=0 \end{array}}^n(a_1)_{ii_2\cdots i_m} \\&\times \prod _{j=2}^s\frac{(\rho ({\mathcal {A}}_j)-(a_j)_{ii\cdots i})(u_j)_i^{m-1}}{(u_j)_{i_2}\cdots (u_j)_{i_m}}z_{i_2}\cdots z_{i_m}\\= & {} \prod _{k=1}^s (a_k)_{ii\cdots i}z_i^{m-1}+\prod _{j=2}^s(\rho ({\mathcal {A}}_j)-(a_j)_{ii\cdots i})(u_j)_i^{m-1} \\&\times \sum _{\begin{array}{c} i_2,\ldots ,i_m=1 \\ \delta _{ii_2\cdots i_m}=0 \end{array}}^n(a_1)_{ii_2\cdots i_m}(u_1)_{i_2}\cdots (u_1)_{i_m}\\= & {} \prod _{k=1}^s (a_k)_{ii\cdots i}z_i^{m-1}+\prod _{k=1}^s(\rho ({\mathcal {A}}_k)-(a_k)_{ii\cdots i})(u_k)_i^{m-1} \\= & {} \prod _{k=1}^s (a_k)_{ii\cdots i}z_i^{m-1}+\prod _{k=1}^s(\rho ({\mathcal {A}}_k)-(a_k)_{ii\cdots i})z_i^{m-1}\\= & {} \left( \prod _{k=1}^s (a_k)_{ii\cdots i}+\prod _{k=1}^s(\rho ({\mathcal {A}}_k)-(a_k)_{ii\cdots i})\right) z_i^{m-1}. \end{aligned}$$

By Lemma 4.7, we have

$$\begin{aligned} \rho (\mathcal {C}) =\rho ({\mathcal {A}}_1\circ {\mathcal {A}}_2\circ \cdots \circ {\mathcal {A}}_s) \le \max _{i\in [n]}\left\{ \prod _{k=1}^s (a_k)_{ii\cdots i}+\prod _{k=1}^s(\rho ({\mathcal {A}}_k)-(a_k)_{ii\cdots i})\right\} . \end{aligned}$$

Case II: If at least one of the tensors \({\mathcal {A}}_1,{\mathcal {A}}_2,\ldots ,{\mathcal {A}}_s\) is reducible, then without loss of generality we may assume that all of them are reducible. Define the all-ones tensor \({\mathcal {T}}=(t_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]}\), i.e., \(t_{i_1i_2\cdots i_m}=1\) for all \(i_1,i_2,\ldots ,i_m\in [n]\); then \({\mathcal {A}}_k+\epsilon {\mathcal {T}}\), \(k=1,2,\ldots ,s\), are irreducible nonnegative tensors for any sufficiently small positive real number \(\epsilon \). We now substitute \({\mathcal {A}}_k+\epsilon {\mathcal {T}}\) for \({\mathcal {A}}_k\) in Case I. Then, arguing as in Case II of the proof of Theorem 4.8, the result follows. The proof is completed. \(\square \)
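The bound of Theorem 4.12 can be examined in the same way. The sketch below is ours and only illustrative (NumPy assumed; the helpers are the same hypothetical ones as above, repeated for self-containment): it compares \(\rho ({\mathcal {A}}_1\circ {\mathcal {A}}_2\circ {\mathcal {A}}_3)\) with the right-hand side of the theorem for three random positive tensors.

import numpy as np
from functools import reduce

def axm(A, x):
    # the vector A x^{m-1}
    return A.reshape(x.size, -1) @ reduce(np.kron, [x] * (A.ndim - 1))

def rho(A, iters=500):
    # crude power-type iteration for an entrywise positive tensor
    m, x = A.ndim, np.ones(A.shape[0])
    for _ in range(iters):
        y = axm(A, x) ** (1.0 / (m - 1))
        x = y / y.sum()
    return float(np.max(axm(A, x) / x ** (m - 1)))

rng = np.random.default_rng(3)
m, n, s = 3, 4, 3
tensors = [rng.random((n,) * m) + 0.01 for _ in range(s)]
radii = np.array([rho(T) for T in tensors])
diag = np.array([[T[(i,) * m] for T in tensors] for i in range(n)])   # (a_k)_{ii...i}
lhs = rho(np.prod(tensors, axis=0))                                   # rho(A_1 o A_2 o A_3)
rhs = max(diag[i].prod() + (radii - diag[i]).prod() for i in range(n))
print(lhs <= rhs + 1e-8)                                              # expected: True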

Setting \(s=2\) in Theorem 4.12 and expanding the product \(\left( \rho ({\mathcal {A}})-a_{ii\cdots i}\right) \left( \rho ({\mathcal {B}})-b_{ii\cdots i}\right) \), we easily obtain the following corollary.

Corollary 4.13

Let \({\mathcal {A}}=(a_{i_1i_2\cdots i_m}),~{\mathcal {B}}=(b_{i_1i_2\cdots i_m})\in {\mathbb {R}}^{[m,n]},\) and \({\mathcal {A}},{\mathcal {B}}\ge 0\). Then,

$$\begin{aligned} \rho ({\mathcal {A}}\circ {\mathcal {B}})\le \max _{i\in [n]}\left\{ 2a_{ii\cdots i}b_{ii\cdots i}+\rho ({\mathcal {A}})\rho ({\mathcal {B}})-a_{ii\cdots i}\rho ({\mathcal {B}})-b_{ii\cdots i}\rho ({\mathcal {A}})\right\} . \end{aligned}$$

Remark 4.14

Corollary 4.13 is Theorem 3.2 in [24], which implies that Theorem 4.12 is a generalization of Theorem 3.2 in [24].

5 Conclusions

In this paper, we have presented some fundamental properties of the Hadamard product of tensors. We have also shown that several classes of structured tensors are closed under the Hadamard product, and then obtained some sufficient conditions for the positive definiteness of a given tensor by using the Hadamard product of two even-order real symmetric tensors. Finally, we have established some inequalities on the spectral radius of the Hadamard product of nonnegative tensors by extending some results on the Hadamard product of matrices [10, 31].