1 Introduction

1.1 Preliminaries

We deal here with equations of the form

$$\begin{aligned} \sum _{i=1}^na_if(\alpha _ix+(1-\alpha _i)y)=0, \end{aligned}$$
(1)

where \(f:\mathbb {R}\rightarrow \mathbb {R}\) is an unknown function and the numbers \(a_i,\alpha _i\in \mathbb {R}\) are given. As will be explained, using the results of L. Székelyhidi, it may be shown that every solution of the more general equation

$$\begin{aligned} \sum _{i=1}^na_if(\alpha _ix+\beta _iy)=0, \end{aligned}$$

is a polynomial function of order at most \(n-2.\) However, we restrict ourselves to the case \(\beta _i=1-\alpha _i\) to exhibit the connection of (1) with the inequality

$$\begin{aligned} \sum _{i=1}^na_if(\alpha _ix+(1-\alpha _i)y)\ge 0. \end{aligned}$$
(2)

Remark 1

Without loss of generality, it may be assumed that \(\alpha _i\in [0,1].\) Indeed, it is enough to find indices \({i_1},{i_2}\) such that

$$\begin{aligned} \alpha _{i_1}\le \alpha _i\le \alpha _{i_2}{\text { for all }}i=1,\dots ,n \end{aligned}$$

and introduce new variables \(u=\alpha _{i_1}x+(1-\alpha _{i_1})y,\;v=\alpha _{i_2}x+(1-\alpha _{i_2})y.\) Then all the remaining points lie between u and v and, consequently, their weights are between 0 and 1. Under such an assumption, our equation could be considered on an interval (and then the results contained in [19] could be used). However, we do not make this assumption and we deal with functions defined on \(\mathbb {R}.\)

Remark 2

Note that equations of the form

$$\begin{aligned} a_1f(x+\gamma _1y)+\cdots +a_nf(x+\gamma _ny)=0, \end{aligned}$$

where \(\gamma _1<\dots <\gamma _n,\) are, in fact, also of the form (1). Indeed, it is enough to consider new variables \(u=x+\gamma _1y,\;v=x+\gamma _ny.\) Then all other points take the form

$$\begin{aligned} \alpha u+(1-\alpha )v. \end{aligned}$$

1.2 Polynomial functions

Polynomial functions are defined with the use of the difference operator \(\Delta _h\) defined by the formula

$$\begin{aligned} \Delta _hf(x):=f(x+h)-f(x). \end{aligned}$$

Further, \(\Delta _{h_1,\dots ,h_n}f(x)\) is defined recursively, i.e.

$$\begin{aligned} \Delta _{h_1,\dots ,h_n}f(x)=\Delta _{h_n}\Delta _{h_1,\dots ,h_{n-1}}f(x). \end{aligned}$$

If we take \(h_1=h_2=\cdots =h_n=h\) then \(\Delta _{h_1,\dots ,h_n}f(x)\) is denoted by \(\Delta _h^nf(x).\)

Definition 1

We say that \(f:\mathbb {R}\rightarrow \mathbb {R}\) is a polynomial function of order n if and only if f satisfies the equation

$$\begin{aligned} \Delta _h^{n+1}f(x)=0\quad {\text {for all }}x,h\in \mathbb {R}. \end{aligned}$$
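The defining property is easy to test numerically. The sketch below (our own illustration, not part of the paper's argument) applies the difference operator four times to a polynomial of degree 3 and checks that the result vanishes.

```python
def forward_difference(f, h):
    """Return the function x -> f(x + h) - f(x), i.e. Delta_h f."""
    return lambda x: f(x + h) - f(x)

def iterated_difference(f, h, n):
    """Apply Delta_h to f, n times, giving Delta_h^n f."""
    for _ in range(n):
        f = forward_difference(f, h)
    return f

# A polynomial of degree 3 is a polynomial function of order 3:
# its fourfold difference Delta_h^4 vanishes identically.
p = lambda x: 2 * x**3 - x + 5
d4 = iterated_difference(p, h=0.7, n=4)
print(all(abs(d4(x)) < 1e-9 for x in [-3.0, -1.5, 0.0, 2.2]))  # True
```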

Remark 3

In our case the equations \(\Delta _{h_1,\dots ,h_n}f(x)=0\) and \(\Delta _h^nf(x)=0\) are equivalent. Therefore, each of them could serve as a definition of polynomial functions.

In the monograph [20] by L. Székelyhidi, the following result can be found.

Theorem 1

Let G be an Abelian semigroup, S an Abelian group, n a nonnegative integer, \(\varphi _i,\psi _i\) additive functions from G to G, and let \(\varphi _i(G)\subset \psi _i(G),\;i=1,2,\dots ,n.\) If the functions \(f,f_i:G\rightarrow S\) satisfy the equation

$$\begin{aligned} f(x)+\sum _{i=1}^nf_i(\varphi _i(x)+\psi _i(y))=0, \end{aligned}$$
(3)

then f satisfies

$$\begin{aligned} \Delta _{h_1,\dots ,h_n}f(x)=0. \end{aligned}$$

Remark 4

Using the above theorem, it is easy to show that every solution of (1) is a polynomial function of order at most \(n-2.\)

Indeed, let \(u,v\in \mathbb {R}\) be given. Put \(x=u+(1-\alpha _1)v\) and \(y=u-\alpha _1v.\) We get

$$\begin{aligned} \alpha _1x+(1-\alpha _1)y=\alpha _1\bigl (u+(1-\alpha _1)v\bigr ) +(1-\alpha _1)\bigl (u-\alpha _1v\bigr )=u \end{aligned}$$

and, for \(j\ne 1\)

$$\begin{aligned} \alpha _jx+(1-\alpha _j)y=\alpha _j\bigl (u+(1-\alpha _1)v\bigr ) +(1-\alpha _j)\bigl (u-\alpha _1v\bigr )=u+(\alpha _j-\alpha _1)v. \end{aligned}$$

This means that, after these substitutions, (1) takes the form (3), and from Theorem 1 we obtain the polynomiality of its solutions.
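The two substitution identities above can be checked mechanically; the following sketch (our own verification, with arbitrarily chosen sample values) confirms them for randomly drawn \(u,v,\alpha _i.\)

```python
import random

random.seed(0)
for _ in range(100):
    u, v = random.uniform(-5, 5), random.uniform(-5, 5)
    alphas = [random.uniform(-2, 2) for _ in range(4)]
    a1 = alphas[0]
    # the substitutions of Remark 4
    x = u + (1 - a1) * v
    y = u - a1 * v
    # alpha_1 x + (1 - alpha_1) y collapses to u ...
    assert abs(a1 * x + (1 - a1) * y - u) < 1e-9
    # ... while the j-th point becomes u + (alpha_j - alpha_1) v
    for aj in alphas[1:]:
        assert abs(aj * x + (1 - aj) * y - (u + (aj - a1) * v)) < 1e-9
print("substitution identities verified")
```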

Polynomial functions are sometimes called generalized polynomials. Their general form is given by

$$\begin{aligned} f(x)=a_0+a_1(x)+\cdots +a_n(x), \end{aligned}$$

where \(a_0\) is a constant and

$$\begin{aligned} a_i(x)=A_i(x,\dots , x), \end{aligned}$$

where \(A_i:\mathbb {R}^i\rightarrow \mathbb {R}\) is an \(i\)-additive and symmetric function. A continuous polynomial function of order n is an ordinary polynomial of degree at most n.

1.3 Higher-Order Convexity

It will be shown that the solutions of (2) are expressed in terms of convex functions of higher order. Therefore, we now say a few words concerning this notion. The higher-order divided differences are defined recursively:

$$\begin{aligned} f[x_1]=f(x_1) \end{aligned}$$

and

$$\begin{aligned} f[x_1,\dots ,x_n]=\frac{f[x_2,\dots ,x_n]- f[x_1,\dots ,x_{n-1}]}{x_n-x_1}. \end{aligned}$$

Let \(I\subset \mathbb {R}\) be an interval. We say that a function \(f:I\rightarrow \mathbb {R}\) is convex of order n (\(n\)-convex for short) if

$$\begin{aligned} f[x_1,\dots ,x_{n+2}]\ge 0, \end{aligned}$$

for any pairwise distinct points \(x_1,\dots ,x_{n+2}\in I\). Note that \(0\)-convexity means that the function is nondecreasing and \(1\)-convexity is equivalent to standard convexity. Thus the notion of convexity of higher order is a natural extension of these well-established concepts.
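A small numerical sketch of ours, using the standard recursion \(f[x_1,\dots ,x_n]=(f[x_2,\dots ,x_n]-f[x_1,\dots ,x_{n-1}])/(x_n-x_1),\) illustrates the definition: for \(f(x)=x^2\) every divided difference on three points equals the leading coefficient 1, witnessing \(1\)-convexity, and similarly \(f(x)=x^3\) is \(2\)-convex.

```python
def divided_difference(f, xs):
    """f[x_1, ..., x_n], computed by the standard recursion."""
    if len(xs) == 1:
        return f(xs[0])
    return (divided_difference(f, xs[1:])
            - divided_difference(f, xs[:-1])) / (xs[-1] - xs[0])

# 1-convexity of x^2: every three-point divided difference is the
# leading coefficient 1, hence nonnegative.
print(divided_difference(lambda x: x * x, [0.0, 1.3, 2.1]))       # 1.0 up to rounding
# 2-convexity of x^3: every four-point divided difference equals 1.
print(divided_difference(lambda x: x**3, [-1.0, 0.5, 2.0, 3.7]))  # 1.0 up to rounding
```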

1.4 Stochastic Orderings

The central role in the present paper is played by two results. The first of them is the following theorem from the paper by Denuit et al. [4].

Theorem 2

Let X and Y be two random variables such that

$$\begin{aligned} \mathbb {E}(X^j-Y^j)=0,\;\;j=1,2,\dots ,s. \end{aligned}$$
(4)

If the distribution functions \(F_X,F_Y\) cross exactly s times, at the points \(x_1<\dots <x_s,\) and

$$\begin{aligned} (-1)^{s+1}(F_Y(t)-F_X(t))\le 0\quad {\text {for all }}t\in [a,x_1], \end{aligned}$$
(5)

then

$$\begin{aligned} \mathbb {E}f(X)\le \mathbb {E}f(Y) \end{aligned}$$

for all \(s\)-convex functions \(f:\mathbb {R}\rightarrow \mathbb {R}.\)

Note that Theorem 2 with \(s=1\) is known as the Ohlin lemma [12]. The idea of using the Ohlin lemma and other stochastic tools in the theory of inequalities goes back to Rajba [16]. Since then, this idea has evolved in different directions. In the paper [9], such tools were used to solve a problem posed by Raşa. In [10], strongly convex functions are considered, in [11] results for set-valued maps are obtained, and in [5] these two concepts are treated jointly. See also [1, 13], where the Levin-Stechkin theorem is used, [2], which contains a result converse to the Ohlin lemma, and [6, 7, 17, 22], where some results for higher-order convex functions are obtained.

Theorem 2 gives a sufficient condition for the \(s\)-convex ordering. We will explain later how to use this theorem in our case.

The second tool is the main result of the paper by Bessenyei and Páles [3]. If \(I\subset \mathbb {R}\) is an interval, then \(\Delta (I)\) and D(I) are defined by the formulas:

$$\begin{aligned} \Delta (I):=\{(x,y)\in I^2:x\le y\} \end{aligned}$$

and

$$\begin{aligned} D(I):=\{(x,x):x\in I\}. \end{aligned}$$

Let \(\mu \) be a non-zero bounded Borel measure on the interval [0, 1]; its moments \(\mu _n\) are defined by the usual formulas,

$$\begin{aligned} \mu _n:=\int _0^1t^nd\mu (t),\;n=0,1,2,\dots . \end{aligned}$$

The main result of [3] is given by the following theorem.

Theorem 3

Let \(I\subset \mathbb {R}\) be an open interval, let \(\Omega \subset \Delta (I)\) be an open subset containing the diagonal D(I) of \(I\times I,\) and let \(\mu \) be a non-zero bounded Borel measure on [0, 1]. Assume that n is the smallest non-negative integer such that \(\mu _n\ne 0.\) If \(f:I\rightarrow \mathbb {R}\) is a continuous function satisfying the integral inequality

$$\begin{aligned} \int _0^1f(x+t(y-x))d\mu (t)\ge 0\quad {\text {for all }}(x,y)\in \Omega , \end{aligned}$$
(6)

then \(\mu _nf\) is \((n-1)\)-convex.

Remark 5

Theorem 3 is a generalization of a result from [15], where a similar result for equations was obtained.

2 Results

2.1 Continuous solutions of (2)

In this section, we describe the complete procedure needed to solve inequalities of type (2). From now on, we assume that the numbers \(\alpha _i\) are arranged in descending order, i.e.

$$\begin{aligned} \alpha _1>\alpha _2>\cdots >\alpha _n. \end{aligned}$$

After such an arrangement, for \(x<y\) we have

$$\begin{aligned} \alpha _i x+(1-\alpha _i)y<\alpha _{i+1} x+(1-\alpha _{i+1})y;\;\;i=1,2,\dots ,n-1. \end{aligned}$$

Remark 6

Divide the numbers \(a_i\) into two sets:

$$\begin{aligned} \left\{ a_{i_1},\dots ,a_{i_l}\right\} \end{aligned}$$

and

$$\begin{aligned} \left\{ a_{j_1},\dots ,a_{j_{n-l}}\right\} \end{aligned}$$

such that

$$\begin{aligned} a_{i_1},\dots ,a_{i_l}>0 \end{aligned}$$

and

$$\begin{aligned} a_{j_1},\dots ,a_{j_{n-l}}<0 \end{aligned}$$

and define the measures:

$$\begin{aligned} \mu _X=\frac{a_{i_1}\delta _{\alpha _{i_1}x+(1-\alpha _{i_1})y} +\cdots +a_{i_l}\delta _{\alpha _{i_l}x+(1-\alpha _{i_l})y}}{a_{i_1}+\cdots +a_{i_l}} \end{aligned}$$
(7)

and

$$\begin{aligned} \mu _Y=\frac{a_{j_1}\delta _{\alpha _{j_1}x+(1-\alpha _{j_1})y} +\cdots +a_{j_{n-l}}\delta _{\alpha _{j_{n-l}}x+(1-\alpha _{j_{n-l}})y}}{a_{j_1}+\cdots +a_{j_{n-l}}}. \end{aligned}$$
(8)

Then a given function f satisfies (2) if and only if

$$\begin{aligned} \mathbb {E}f(X)\le \mathbb {E}f(Y). \end{aligned}$$

Therefore, Theorem 2 gives sufficient conditions under which (2) is satisfied for all \(s\)-convex functions. In the next two remarks, we analyze the meaning of the assumptions of this theorem in our situation.

Remark 7

The number of crossing points mentioned in Theorem 2 is equal to the number of sign changes of the following sequence of partial sums

$$\begin{aligned} (a_1,a_1+a_2,\dots ,a_1+\cdots +a_n). \end{aligned}$$
(9)

Further, the equality of moments (4) in our case means simply that the functions \(x\mapsto x^i,i=1,\dots ,s,\) satisfy (1).
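Counting the sign changes of (9) is easy to automate. In the sketch below (the helper names are ours), the counter is applied to the coefficients \((1,-2,1)\) of the Jensen-type expression \(f(x)-2f\bigl (\frac{x+y}{2}\bigr )+f(y),\) whose partial sums \((1,-1,0)\) change sign exactly once.

```python
from itertools import accumulate

def sign_changes(seq):
    """Number of sign changes in a sequence; zeros are skipped."""
    signs = [s for s in ((x > 0) - (x < 0) for x in seq) if s != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

def partial_sum_sign_changes(a):
    """Sign changes of the partial-sum sequence (a_1, a_1+a_2, ...)."""
    return sign_changes(list(accumulate(a)))

# Coefficients ordered by decreasing weight alpha:
# partial sums (1, -1, 0) have exactly one sign change.
print(partial_sum_sign_changes([1, -2, 1]))  # 1
```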

Remark 8

Assume that conditions (4) are satisfied and that we have s crossing points. Then inequality (2) may be satisfied either by all \(s\)-convex functions or by all \(s\)-concave functions. This depends on the sign of the expression occurring in (5). In our case, (5) means simply that \(a_n>0.\)

On the other hand, Theorem 3 used in our situation provides a result in the converse direction.

Corollary 1

Let \(a_i,\alpha _i\) be such that the functions

$$\begin{aligned} x\mapsto x^j,j=1,\dots ,k, \end{aligned}$$

satisfy (1) and the function

$$\begin{aligned} x\mapsto x^{k+1} \end{aligned}$$

does not satisfy (1). Then either every continuous solution of (2) is \(k\)-convex or every continuous solution of (2) is \(k\)-concave.

Proof

To prove this corollary, we first make the substitutions mentioned in Remark 1, obtaining a new (equivalent) problem with \(\alpha _i\in [0,1].\) Then we write (1) in the form

$$\begin{aligned} \sum _{i=1}^na_if(x+(1-\alpha _i)(y-x))=0, \end{aligned}$$

which allows us to treat inequality (2) as an integral inequality of the form (6) with a discrete measure \(\mu .\) Now, the assertion follows from Theorem 3. \(\square \)

The next lemma gives two properties of equation (1) that will be frequently used in the rest of the paper.

Lemma 1

If the equation (1) is satisfied by the function

$$\begin{aligned} x\mapsto x^k, \end{aligned}$$

then the following assertions are true:

  1. (i)

    equation (1) is satisfied by every function of the form

    $$\begin{aligned} x\mapsto x^l,l<k, \end{aligned}$$
  2. (ii)

    the sequence defined by (9) has at least k sign changes.

Proof

To prove the first assertion by contradiction, assume that there exists an \(m<k\) such that the mapping

$$\begin{aligned} x\mapsto x^m \end{aligned}$$

does not satisfy (1). Then we may find \(m_0\le m\) such that the functions

$$\begin{aligned} x\mapsto x^l,l<m_0 \end{aligned}$$

satisfy (1) and

$$\begin{aligned} x\mapsto x^{m_0} \end{aligned}$$

does not satisfy it. In view of Corollary 1, this means that every continuous solution of (1) is a polynomial of degree at most \(m_0-1,\) which contradicts the fact that \(x\mapsto x^k\) satisfies (1).

Now we prove the second assertion. Consider the case \(a_n>0\) and assume, without loss of generality, that k is the largest number for which the function

$$\begin{aligned} x\mapsto x^k \end{aligned}$$

is a solution of (1). In view of Corollary 1, this means that every continuous solution of (2) is \(k\)-convex. Suppose, contrary to condition (ii), that (9) has m sign changes for some \(m<k.\) We already know that the functions

$$\begin{aligned} x\mapsto x^l,l\le m, \end{aligned}$$

satisfy (1). In view of Theorem 2, this means that inequality (2) is satisfied by all \(m\)-convex functions.

Combining the above observations, we arrive at the false statement that every \(m\)-convex function is \(k\)-convex, and the proof is finished. \(\square \)

Remark 9

At this point the reason why we restrict ourselves to linear equations with weights summing up to one becomes evident. Note that the quadratic functional equation

$$\begin{aligned} f(x+y)+f(x-y)-2f(x)-2f(y)=0 \end{aligned}$$

is satisfied by the function \(x\mapsto x^2\) but it is not satisfied by the identity function. This is caused by the fact that the quadratic functional equation is not of the form (1).

After the above remarks, we may formulate the following theorem.

Theorem 4

Assume that the functional equation (1) is satisfied by the function \(x\mapsto x^k\) and that the sequence defined by (9) has exactly k sign changes.

If \(a_n>0,\) then a given function f satisfies (2) if and only if f is \(k\)-convex. In the case \(a_n<0,\) f satisfies (2) if and only if f is \(k\)-concave.

Proof

Let \(a_n>0.\) It follows from Lemma 1 that every function \(x\mapsto x^l,\) where \(l\le k,\) satisfies (1). Therefore the assumptions of Theorem 2 are fulfilled (note that the assumption \(a_n>0\) yields (5)) and, consequently, every \(k\)-convex function satisfies (2).

On the other hand, the function \(x\mapsto x^{k+1}\) cannot satisfy (1), since otherwise, by Lemma 1, the sequence (9) would have at least \(k+1\) sign changes. Therefore, in view of Corollary 1, every solution of (2) is \(k\)-convex. \(\square \)

From the above theorem, we can obtain an even simpler corollary.

Corollary 2

If the function \(x\mapsto x^{n-2}\) satisfies equation (1) and \(a_n>0,\) then a continuous function f satisfies (2) if and only if f is \((n-2)\)-convex.

Proof

To use Theorem 2, we need to show that (9) has \(n-2\) sign changes. However, it is clear that this sequence can have a sign change only at the points

$$\begin{aligned} \alpha _ix+(1-\alpha _i)y,i=2,\dots ,n-1. \end{aligned}$$

Thus we have at most \(n-2\) sign changes. Now it is enough to use Lemma 1 to see that (9) has exactly \(n-2\) sign changes. \(\square \)

Now we present examples of applications of our results. We begin with two warm-up examples concerning \(t\)-convexity and \(t\)-Wright convexity.

Example 1

Let \(t\in (0,1)\) be fixed. It is well known that every continuous solution of the inequality

$$\begin{aligned} f(tx+(1-t)y)\le tf(x)+(1-t)f(y) \end{aligned}$$

is convex. The classical proof is elementary but requires some substitutions. In our approach, this fact follows directly from Corollary 2.

Example 2

Let \(t\in (0,1)\) be fixed. Every continuous solution of the inequality

$$\begin{aligned} f(tx+(1-t)y)+f((1-t)x+ty)\le f(x)+f(y) \end{aligned}$$

is convex. This time, the inequality is satisfied by the identity function, but it has four terms. Therefore, we cannot use Corollary 2, but we can use Theorem 4 instead. To do this, it is enough to notice that the sequence (9) has only one sign change.
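For Example 2, the coefficient sequence obtained by moving every term to one side and ordering the points by decreasing weight is \((1,-1,-1,1),\) and its partial sums change sign exactly once. The following sketch of ours (the helper re-implements the sign-change count for (9)) confirms this.

```python
from itertools import accumulate

def sign_changes(seq):
    """Number of sign changes in a sequence; zeros are skipped."""
    signs = [s for s in ((x > 0) - (x < 0) for x in seq) if s != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

# f(x) + f(y) - f(tx + (1-t)y) - f((1-t)x + ty) >= 0 with t = 0.3:
# weights 1 > 0.7 > 0.3 > 0 give coefficients (1, -1, -1, 1).
a = [1, -1, -1, 1]
print(sign_changes(list(accumulate(a))))  # 1

# The identity function turns the inequality into an equality:
t, x, y = 0.3, 2.0, 5.0
assert abs((t*x + (1-t)*y) + ((1-t)*x + t*y) - x - y) < 1e-12
```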

Now, we present a new application of the tools developed here. It concerns the functional inequality considered in [14]. Although this inequality is much more general than \(t\)-Wright convexity, its continuous solutions are, surprisingly, just convex functions.

Theorem 5

Let \(t_i\in (0,1)\) be given numbers. A continuous function \(f:\mathbb {R}\rightarrow \mathbb {R}\) satisfies

$$\begin{aligned} \sum _{i=1}^nf(t_ix+(1-t_i)y)\le \sum _{i=1}^nt_if(x)+\left( n-\sum _{i=1}^nt_i\right) f(y) \end{aligned}$$
(10)

if and only if it is convex.

Proof

It is easy to check that the identity function satisfies (10) with equality. Further, the sequence (9) has only one sign change; therefore, from Theorem 4, we know that every convex function satisfies (10).

Conversely, since (9) has only one sign change, from Lemma 1, we know that the function

$$\begin{aligned} x\mapsto x^2 \end{aligned}$$

does not satisfy (10). In view of Corollary 1, this means that every solution of (10) has to be convex. \(\square \)
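Both directions of Theorem 5 are easy to probe numerically. The sketch below (an illustration of ours, with arbitrarily chosen \(t_i\)) verifies that the identity function satisfies (10) with equality and that a convex function such as \(\exp \) satisfies the inequality itself.

```python
import math
import random

random.seed(1)

def lhs_minus_rhs(f, x, y, ts):
    """Left-hand side of (10) minus its right-hand side."""
    left = sum(f(t * x + (1 - t) * y) for t in ts)
    right = sum(ts) * f(x) + (len(ts) - sum(ts)) * f(y)
    return left - right

ts = [random.uniform(0.01, 0.99) for _ in range(5)]
# The identity function satisfies (10) with equality ...
assert abs(lhs_minus_rhs(lambda s: s, 1.7, -2.3, ts)) < 1e-9
# ... and a convex function satisfies the inequality (lhs <= rhs),
# since every summand obeys the definition of convexity.
assert lhs_minus_rhs(math.exp, 1.7, -2.3, ts) <= 0
print("Theorem 5 sanity checks passed")
```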

Remark 10

Note how short the proofs become when the methods connected with stochastic orderings are used.

In [14], inequality (10) with \(n=2\) was considered:

$$\begin{aligned} f(sx+(1-s)y)+f(tx+(1-t)y)\le \alpha f(x)+(2-\alpha )f(y). \end{aligned}$$
(11)

It was shown that in the case \(s,t\in (0,1)\) the solutions of (11) are convex. However, without this assumption, inequality (11) may be satisfied by \(2\)-convex functions. If we expect that, analogously, inequality (10) may have \(n\)-convex functions as solutions (in the case where the \(t_i\) are not necessarily from (0, 1)), we will be surprised by the next theorem.

Theorem 6

Let \(t_i\in \mathbb {R},i=1,\dots ,n.\) If a continuous function satisfies (10), then one of the following possibilities occurs:

  1. (1)

    f is \(3\)-concave,

  2. (2)

    f is \(l\)-convex with \(l\le 2,\)

  3. (3)

    f is \(l\)-concave with \(l\le 2.\)

Proof

Observe that, if we move all terms of (10) to the right-hand side, then we have at most two terms with a positive coefficient \(a_i.\) Therefore, the sequence (9) can have at most 3 sign changes. Consequently, the equation

$$\begin{aligned} \sum _{i=1}^nf(t_ix+(1-t_i)y)= \sum _{i=1}^nt_if(x)+\left( n-\sum _{i=1}^nt_i\right) f(y) \end{aligned}$$
(12)

may be satisfied only by the functions \(x\mapsto x^k\) with \(k\le 3.\) This means that an f satisfying (10) is an \(l\)-convex or \(l\)-concave function with \(l\le 3.\)

To finish the proof, we need to show that (10) cannot have \(3\)-convex solutions. Indeed, assume that the mapping \(x\mapsto x^3\) satisfies equation (12). Then, according to Lemma 1, the sequence (9) has three sign changes. This is possible only if all of the following three conditions are satisfied:

  • At least one \(t_i\) is negative,

  • At least one \(t_i\) is in (0, 1), 

  • At least one \(t_i\) is greater than 1.

However, in such a case, we have \(a_n=-1<0.\) This means that the solutions of (10) are \(3\)-concave. \(\square \)

First, we give an example in which inequality (10) is satisfied by \(3-\)concave functions.

Example 3

A continuous function f satisfies

$$\begin{aligned} f\left( \frac{2-\sqrt{6}}{4}x+\frac{2+\sqrt{6}}{4}y\right) +f\left( \frac{x+y}{2}\right) +f\left( \frac{2+\sqrt{6}}{4}x +\frac{2-\sqrt{6}}{4}y\right) \le \frac{3}{2} f(x)+\frac{3}{2}f(y) \end{aligned}$$
(13)

if and only if it is \(3\)-concave.

Indeed, the equation associated with this inequality is satisfied by \(x\mapsto x^3\) and there are five terms in (13). Further, the coefficient standing at the rightmost point is negative. Thus our claim follows from Corollary 2.
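That the cubic turns (13) into an equality can be confirmed numerically; the following sketch (our own check) evaluates the gap between the two sides of (13) for \(f(x)=x^3.\)

```python
import math

a = (2 - math.sqrt(6)) / 4  # the three weights appearing in (13)
b = 0.5
c = (2 + math.sqrt(6)) / 4

def gap(f, x, y):
    """Left-hand side of (13) minus its right-hand side."""
    left = f(a*x + (1-a)*y) + f(b*x + (1-b)*y) + f(c*x + (1-c)*y)
    return left - 1.5 * f(x) - 1.5 * f(y)

# x -> x^3 turns (13) into an equality for every x and y:
for (x, y) in [(0.0, 1.0), (-2.0, 3.5), (1.1, 4.2)]:
    assert abs(gap(lambda s: s**3, x, y)) < 1e-9
print("cubic satisfies the equation associated with (13)")
```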

We end this part of the paper with an example of a particular form of inequality (10) satisfied by \(2-\)convex functions.

Example 4

A continuous function f satisfies

$$\begin{aligned} f\left( -\frac{1}{5}x+\frac{6}{5}y\right) +f\left( \frac{2}{5}x+\frac{3}{5}y\right) \le \frac{1}{5} f(x)+\frac{9}{5}f(y) \end{aligned}$$
(14)

if and only if it is \(2\)-convex.

Indeed, observe that (14) is satisfied by the function \(x\mapsto x^2\) with equality and that \(a_4=\frac{9}{5}>0.\) Moreover, our equation has four terms. This means that the above observation follows from Corollary 2.
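The moment conditions invoked here are easy to verify directly; the sketch below (ours) checks that both \(x\mapsto x\) and \(x\mapsto x^2\) turn (14) into an equality.

```python
def gap(f, x, y):
    """Left-hand side of (14) minus its right-hand side."""
    left = f(-x/5 + 6*y/5) + f(2*x/5 + 3*y/5)
    return left - f(x)/5 - 9*f(y)/5

# The identity and the square turn (14) into an equality, which is
# exactly the moment condition (4) with s = 2:
for (x, y) in [(0.0, 1.0), (-3.0, 2.0), (4.5, -1.5)]:
    assert abs(gap(lambda s: s, x, y)) < 1e-9
    assert abs(gap(lambda s: s*s, x, y)) < 1e-9
print("identity and square satisfy the equation associated with (14)")
```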

At first glance, it may seem strange that it is impossible to construct a particular version of (10) satisfied by \(3\)-convex functions. However, note that the numbers \(a_i\) appearing on the left-hand side of this inequality are already specified: they are all equal to one. Therefore, after moving all terms to the right-hand side, the number of terms with positive weights cannot be greater than two (no matter how the \(t_i\) are chosen).

2.2 Some Properties of (1)

In this part, we observe that the tools developed in the previous part of the paper may be used to obtain new information concerning the functional equation (1). We assume here that the weights of (1) are rational.

Remark 11

It is well known that a linear equation with rational weights is either satisfied by every monomial function of a given order or is not satisfied by any such function (see [18, Corollary 3.10] or [21, Lemma 1]).

Using this remark and Lemma 1, we immediately obtain the following result.

Theorem 7

Let \(\alpha _i\in \mathbb {Q},\; i=1,\dots ,n,\) be some numbers. Then every solution f of (1) is a polynomial function of order not greater than the number of sign changes of the sequence (9).

We give now some examples. The sequences (9) associated with these equations were analyzed in the previous part of the paper.

Example 5

Let f be a solution of the Jensen equation

$$\begin{aligned} f(x)-2f\left( \frac{x+y}{2}\right) +f(y)=0. \end{aligned}$$

Then it follows from Theorem 1 that f is a polynomial function of order at most 1. The sequence (9) has one sign change; therefore, from Theorem 7, we also get that every solution of this equation is a polynomial function of order at most 1.

In the above example, there was no difference between this estimate of the order and the one obtained from Theorem 1. In the next example, the equation has four terms; therefore, the order obtained from the theorem of Székelyhidi is equal to two.

Example 6

Let \(t\in (0,1)\) be a rational number and consider the equation

$$\begin{aligned} f(x)-f(tx+(1-t)y)-f((1-t)x+ty)+f(y)=0. \end{aligned}$$
(15)

Then the sequence (9) has one sign change. Therefore, the solutions of (15) are polynomial functions of order at most 1.
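As a concrete check of this estimate (a sketch of ours, for one fixed rational t), affine functions solve (15) exactly, while \(x\mapsto x^2\) leaves the residual \(2t(1-t)(x-y)^2\ne 0.\)

```python
t = 0.3  # any fixed t in (0, 1)

def residual(f, x, y):
    """Left-hand side of (15) for a given f."""
    return f(x) - f(t*x + (1-t)*y) - f((1-t)*x + t*y) + f(y)

# Affine functions solve (15) ...
assert abs(residual(lambda s: 4*s - 7, 1.2, -3.4)) < 1e-9
# ... while x -> x^2 leaves the residual 2 t (1-t) (x - y)^2 != 0.
x, y = 1.2, -3.4
assert abs(residual(lambda s: s*s, x, y) - 2*t*(1-t)*(x - y)**2) < 1e-9
print("order estimate for (15) is consistent")
```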

The difference is much bigger in the case of the equation connected with (10). Observe that the order obtained from Theorem 1 is equal to n.

Example 7

Let \(t_i\) be rational numbers. If \(t_i\in (0,1)\) then f satisfies

$$\begin{aligned} \sum _{i=1}^nf(t_ix+(1-t_i)y)= \sum _{i=1}^nt_if(x)+\left( n-\sum _{i=1}^nt_i\right) f(y) \end{aligned}$$
(16)

if and only if it is of the form

$$\begin{aligned} f(x)=a(x)+b, \end{aligned}$$

where b is a constant and a is an additive function.

If \(t_i\in \mathbb {Q}, i=1,2,\dots ,n\), then the solutions of (16) are polynomial functions of order at most 3.

We end the paper with a remark concerning not the partial sum sequence (9) but the sequence \((a_1,\dots ,a_n).\)

Remark 12

Observe that each sign change of (9) requires the partial sums to change direction, i.e. a sign change in the sequence \((a_1,\dots ,a_n)\) itself. Thus let \(\alpha _i\in \mathbb {Q},\; i=1,\dots ,n,\) be some numbers and let k be the number of sign changes of the numbers \(a_i\) in the sequence

$$\begin{aligned} (a_1,a_2,\cdots ,a_n). \end{aligned}$$

Then every solution f of (1) is a polynomial function of order not greater than \(k-1.\)

As we can see, if we have an equation of the form (1) and we want to achieve the maximal order of polynomial functions as solutions, then the signs of \(a_i,\; i=1,2,\dots ,n,\) must alternate. Each time two consecutive coefficients have the same sign, we lose one possible sign change of (9) and, consequently, one order of polynomiality.

Clearly, the case of irrational numbers \(\alpha _i\) is different. It was shown by Lajkó in [8] that equation (15) may have solutions which are polynomial functions of order two. This may happen, however, only for irrational t, and such solutions must be discontinuous.