INTEGRAL EQUATIONS WITH CONTRASTING KERNELS

In this paper we study integral equations of the form x(t) = a(t) − ∫_0^t C(t,s)x(s)ds with sharply contrasting kernels typified by C*(t,s) = ln(e + (t − s)) and D*(t,s) = (1 + (t − s))^{-1}. The kernel assigns a weight to x(s), and these two kernels have exactly opposite effects of weighting. Each type is well represented in the literature. Our first project is to show that for a ∈ L^2[0, ∞) the solutions are largely indistinguishable regardless of which kernel is used. This is a surprise, and it leads us to study the essential differences. In fact, those differences become large as the magnitude of a(t) increases. The form of the kernel alone projects necessary conditions concerning the magnitude of a(t) which could result in bounded solutions. Thus, the next project is to determine how close we can come to proving that the necessary conditions are also sufficient. The third project is to show that solutions will be bounded for given conditions on C regardless of whether a is chosen large or small; this is important in real-world problems, since we would like to allow a(t) to be the sum of a bounded but badly behaved function and a large well-behaved function.


Introduction
Our work here is, in every respect, of nonconvolution type. But it will be easier to explain the direction we will take by discussing some simple convolution kernels. Let (1) C*(t, s) = ln(e + (t − s)) and (2) D*(t, s) = (1 + (t − s))^{-1}, noting that C*(t, t) = 1, D*(t, t) = 1. We use the symbol * here because these functions will later be generalized and denoted by C and D in (11) and (12). In particular, positive constants can be added to and multiplied by these functions without changing our basic assumptions.
Note also that C*(t, 0) = ln(e + t) increases to infinity, while D*(t, 0) = (1 + t)^{-1} decreases to zero.
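These opposite behaviors are easy to check numerically. The following sketch (our own illustration, not part of the original paper) evaluates both prototype kernels on the diagonal and at s = 0:

```python
import math

# The two prototype convolution kernels from (1) and (2).
def C_star(t, s):
    # growing-memory kernel: the weight of x(s) grows with the age t - s
    return math.log(math.e + (t - s))

def D_star(t, s):
    # fading-memory kernel: the weight of x(s) decays with the age t - s
    return 1.0 / (1.0 + (t - s))

# Both kernels agree on the diagonal: C*(t,t) = D*(t,t) = 1.
print(C_star(5.0, 5.0), D_star(5.0, 5.0))

# At s = 0 the weights diverge: C*(t,0) = ln(e + t) grows without bound,
# while D*(t,0) = 1/(1 + t) decays to zero.
for t in [1.0, 10.0, 100.0]:
    print(t, C_star(t, 0.0), D_star(t, 0.0))
```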
Here is the first question we study. If a ∈ L^2[0, ∞), what are the essential qualitative differences between solutions of (3*) x(t) = a(t) − ∫_0^t C*(t, s)x(s)ds and (4*) z(t) = a(t) − ∫_0^t D*(t, s)z(s)ds, and do investigators have reason to care? For later reference, equations (3) and (4) will follow (12).
There would be no story to tell unless it were the case that there is little qualitative difference; indeed, many investigators have studied such problems since 1928. We find that under general conditions patterned from (1) and (2), z ∈ L^2[0, ∞), while for y(t) = ∫_0^t x(s)ds we have y ∈ L^2[0, ∞) and ∫_0^t x(s)ds → 0 as t → ∞. More can be said. It is a surprise because C* and D* have fairly opposite properties and (4*) is known to have nice qualitative properties, so we would suspect that (3*) does not.
Kernel (1) is the prototype and is, so to speak, the middle of the road. We will devote the next section to showing how close the solution of (3*) is to the solution of (4*) when a ∈ L^2; moreover, solutions of (3*) are small in spite of enormous perturbations. In the process we will vary C*, considering also the kernels r(t − s) + ln(e + (t − s)), r ≥ 0, and 1 + Arctan(t − s) as members of the same class of kernels as (1), but lying above and below (1), respectively.
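To make the comparison concrete, here is a small numerical sketch (our own, not from the paper) that discretizes both (3*) and (4*) with a trapezoid rule and the hypothetical forcing a(t) = e^{-t} ∈ L^2[0, ∞); consistent with the discussion above, both computed solutions remain small:

```python
import math

def solve_volterra(kernel, a, T=20.0, n=400):
    """Solve x(t) = a(t) - int_0^t kernel(t,s) x(s) ds by the trapezoid rule."""
    h = T / n
    t = [i * h for i in range(n + 1)]
    x = [a(0.0)]                       # at t = 0 the integral term vanishes
    for i in range(1, n + 1):
        # trapezoid weights: 1/2 at the endpoints s = 0 and s = t_i
        acc = 0.5 * kernel(t[i], t[0]) * x[0]
        for j in range(1, i):
            acc += kernel(t[i], t[j]) * x[j]
        # x_i appears on both sides; solve the scalar linear equation for it
        x.append((a(t[i]) - h * acc) / (1.0 + 0.5 * h * kernel(t[i], t[i])))
    return t, x

a = lambda t: math.exp(-t)                   # a in L^2[0, oo), our test choice
C = lambda t, s: math.log(math.e + (t - s))  # growing memory, kernel (1)
D = lambda t, s: 1.0 / (1.0 + (t - s))       # fading memory, kernel (2)

_, x = solve_volterra(C, a)
_, z = solve_volterra(D, a)
print(max(abs(v) for v in x), max(abs(v) for v in z))
```

Despite the "opposite" kernels, the two solution curves are qualitatively alike: both start at a(0) and decay.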
Let us interpret C* and D*. For fixed t the integrals involve the history of the solution on the interval [0, t]. For a given s in that interval we multiply x(s) by a weight, and then the integral adds up all of those products to determine, along with a(t), the value of the solution. Notice that at s = 0 the kernel C* has the weight ln(e + t), which is large for t large, while for s = t it has the weight C*(t, t) = 1; the value of x(t) is being overwhelmed by the early values of x, while recent values, by comparison, are practically ignored. It is customary to refer to (2) as an example of a fading memory kernel. Accordingly, we could refer to (1) as a growing memory kernel, typically seen in problems driven by genetics which become more pronounced as an individual ages.
EJQTDE, 2008 No. 2
It is not at all unusual to see problems which have a growing memory. Reynolds [9] considers such a problem for buckling of viscoelastic rods. His problem is singular, and the solution is continually driven by an infinite weight at zero. On the other hand, the literature is replete with fading memory problems, many of which are singular, but with a locally integrable singularity. We could mention work on superfluidity by Levinson [7], or heat transfer by Padmavally [8], for example.
In an attempt to convey to the reader, perhaps gratuitously, an image of what is happening, we suggest the following. Think of (1) as "genetically" driven while (2) is "environmentally" driven. In the first case an individual's characteristics are continually magnified as a result of genetics; the infant comes to resemble the parent more and more as time goes on. In the second case, the individual's characteristics are changing because of diet, exercise, and general environment; sadly, good habits of youth translate into far too little benefit if not practiced in our old age, a clear example of fading memory.
Equation (5) has application in many areas, beginning with mathematical biology, reactor dynamics, and viscoelasticity. Volterra [10] proposed a related form for a problem in mathematical biology, suggesting that one might construct a Liapunov functional. That functional was constructed by Levin [5], yielding strong qualitative results for (5). But in Levin's paper [6] he is interested in (9) and (10). Our interest was stimulated when Levin remarked that (5) could be converted to (8) by integration, with b(t) = ∫_0^t a(s)ds and f(t) = x(0) for that conversion. While it is true that (5) can be so written, for b(t) = ∫_0^t a(s)ds and a(t) ≥ 0 we do have b(t) ≥ 0, as required in (9), but b′(t) = a(t) ≥ 0, violating (9) in the nontrivial case.
The question arises: can we violate b′(t) ≤ 0 and still retain nice qualitative properties? In this paper we construct Liapunov functionals proving this under more general conditions on the kernel. We arrive at the stunning conclusion that it makes little difference whether the kernel increases or decreases, when other important conditions hold. Indeed, if Levin had continued with the case which violated (9), he would have been dealing with an equation whose solutions were little changed for a ∈ L^2, but infinitely better able to withstand large perturbations without letting the solution become large.
1.1. Necessary conditions for boundedness. In both (5) and (8) Levin is concerned with showing that the solution remains small. One of the goals of the investigator is to identify kernels which will promote stability. It is very simple to show that the kernel (1) is potentially far more stable than (2).
We ask the question: how large can a(t) be and still have x(t) or z(t) bounded? We are dealing with very large kernels, and it is going to require strong methods to prove boundedness. It is time to test just how strong our methods are.
Is this a sharp estimate? Are our techniques good enough that if a(t) = (t + 1)^2, we can expect to prove that x(t) is bounded using that kernel? So far, we prove that for this C(t, s) we can take a(t) = (t + 1)^p, where 0 < p < 3/2, in order to obtain x(t) bounded.
Lest we become too disappointed with that result, let us realize that what we have proved is that a(t) = (t + 1)^p for p < 3/2 is a harmless perturbation. And that should give us some pause when we consider the feeble perturbations which motivated this study.
Next, if we are to have z(t) bounded, how large can we choose a(t)?
for some M > 0.
1.2. Derivation of the Liapunov functionals. Perhaps the most disappointing part of Liapunov theory is the manner in which the reader is abruptly confronted with a Liapunov function which "works" and is given no clue as to how it was derived. In the remainder of this section we intend to give a clear picture of where the functionals arise. In [1; pp. 176-180] there is an "algorithm" for constructing Liapunov functionals for equations with integral delays. That discussion leads us as follows. While our work here is mainly linear and with finite delay, our derivations will show that the same techniques work for nonlinear problems and those with infinite delay.
In the classical theory of integral equations, if the equation has a well-behaved kernel and if g has the sign of x, then the solution follows a(t) in some broad sense. Thus, we write x(t) − a(t) as the integral term, and we strive to prove that the left-hand side remains small, in obedience to the classical theory. Under reasonable, but unstated, convergence conditions we integrate the right-hand side by parts, apply the Schwarz inequality, and arrive at a Liapunov functional. Here are the details. Assume that D_s ≥ 0 and that there is an M > 0 with the required bound. In subsequent work we will not require D_s ≥ 0, and that will lead to interesting consequences. Since we want to keep x(t) − a(t) small, we write that difference accordingly. If we integrate the right-hand side by parts and use the Schwarz inequality, we obtain the desired estimate. There are several tacit assumptions on convergence which must be made explicit when we apply the result. But we have arrived at the Liapunov functional. One of the interesting parts is that we did not have to do anything to get rid of even slight problems. In fact, that is because the delay is infinite and because of the aforementioned convergence assumption. If the lower limit on the integral is zero, then we must add a term. Finally, at times we will find it convenient to derive a differential equation from the integral equation, and in such cases we form the sum of two Liapunov functions. In the linear case, we may use the Liapunov function V(x) = x^2 for the first part plus the aforementioned Liapunov functional for the second part, and have our third form.
REMARK. We have arrived at three different Liapunov functionals from that simple beginning. Such work is also found in [2]. Moreover, we will continue the ideas and obtain three more so that, in all, there will be six Liapunov functionals from that start. We will add one more Liapunov functional from a different start. In this process we will see that the Liapunov functionals will accommodate either of the radically different kernels in (1) or (2).
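Several of the displayed formulas in this passage did not survive extraction; the following is our reconstruction of the computation just described, carried out for the fading-memory equation (4), z(t) = a(t) − ∫_0^t D(t,s)z(s)ds, under the stated assumptions D_s ≥ 0 and ∫_0^t D_s(t,s)ds ≤ M. It should be read as a sketch, not as the paper's exact displays.

```latex
% From (4):  z(t) - a(t) = -\int_0^t D(t,s)\,z(s)\,ds.
% Integrate by parts with u = D(t,s),\; v(s) = -\int_s^t z(u)\,du,\; v'(s) = z(s):
\int_0^t D(t,s)\,z(s)\,ds
  = D(t,0)\int_0^t z(u)\,du
  + \int_0^t D_s(t,s)\Big(\int_s^t z(u)\,du\Big)ds .
% Schwarz inequality on the second term, using \int_0^t D_s(t,s)\,ds \le M:
\Big(\int_0^t D_s(t,s)\int_s^t z(u)\,du\,ds\Big)^2
  \le M \int_0^t D_s(t,s)\Big(\int_s^t z(u)\,du\Big)^2 ds .
% These two displays suggest the Liapunov functional
% (the first term is the one added when the lower limit is zero):
V(t) = D(t,0)\Big(\int_0^t z(u)\,du\Big)^2
     + \int_0^t D_s(t,s)\Big(\int_s^t z(u)\,du\Big)^2 ds .
```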

Small perturbations
In this section we want to show that if a ∈ L^2 then (1) and (2) result in similar behavior of the solutions. The first result below was obtained in the same way in [2], while the first part of the second was obtained in [4]. But that first part of the second result is crude, and our purpose here is to see if conditions (11) and (12), being, so to speak, opposite, generate similar behavior.
We look to (1) and (2) for guidance in our assumptions, defining new functions C and D satisfying conditions (11) and (12), respectively. These are large kernels, and it should not be thought that an element C(t, s) satisfying (11) is necessarily larger or smaller than an element D(t, s) satisfying (12). For example, D(t, s) = M + D*(t, s), M ≥ 0, satisfies (12) and, if M is large, then it lies entirely above C(t, s) = 1 + Arctan(t − s), which satisfies (11). So often in the theory of integral equations, methods call for kernels to be of convolution type and in L^1[0, ∞). The next result is found in [2], but we need both it and its proof here.
Theorem 2.1. If a : [0, ∞) → R is continuous and (12) holds for (4), then along the solution of (4) the functional V satisfies properties (i) and (ii) below; note that (4) does not require a ∈ L^2. However, if a ∈ L^2 and bounded, then both V(t) and z are bounded.
Proof. We compute the derivative of V along the solution. Cancel terms, use the sign conditions, and use (4) in the last step of the process to unite the Liapunov functional and the equation. From this the stated bounds follow. Notice that to this point we conclude that (4) with (12) is quite straightforward, with a ∈ L^2 implying that z ∈ L^2, and we consider that result sufficient. But matters are more difficult for (3). However, with more work it does turn out that for a(t) small, (12) and (13) yield surprisingly similar behavior.
For our present methods we integrate (3) by parts, writing it in terms of the inner integrals ∫_s^t x(u)du, so that by taking y(t) = ∫_0^t x(u)du we obtain equation (13). The first part of the next result is found in [4], but we will need both it and its proof here.
The reader may verify that it is possible to find C and D satisfying (11) and (12), respectively, whose sum will satisfy the conditions here. Thus, one may consider equations driven both genetically and environmentally.
In reading Theorems 2.3 and 2.4 it may help to think of them in terms of (i) and (ii) of Theorem 2.1 holding.
Theorem 2.2. Suppose that (11) holds. If y is a solution of (13) and if we define V_4 by the functional indicated, then the stated bounds hold. With a ∈ L^2[0, ∞) we have y^2(t) + α∫_0^t y^2(s)ds bounded. Now, from (13) we have x = y′, so, of course, the convergence of the integral of y^2 says that there is a sequence {t_n} ↑ ∞ along which the integrand tends to zero. But since that integrand is itself an integral, the integrand actually converges to zero. Here are the details.
In understanding this result, recall that Theorem 2.2 gave conditions ensuring that |x(t) − a(t)| be bounded, so if a is bounded then that yields x bounded, as required below.
Theorem 2.3. If {[s_n, t_n]} is a sequence of intervals on which x(t) is of one sign, where s_n → ∞, then ∫_{s_n}^{t_n} x(s)ds → 0 as n → ∞.
Proof. If the theorem is false, then there is an ε > 0 and a sequence of such intervals with |∫_{s_n}^{t_n} x(s)ds| ≥ ε. Since the integral of y^2 converges, for each n there is t > t_n with |∫_0^t x(u)du|^2 < ε/2, so there is a sequence {λ_n} of positive numbers with |∫_0^{t_n+λ_n} x(u)du|^2 ≥ ε/2, the equality failing for a smaller λ. Clearly, λ_n → 0 as n → ∞, for otherwise the inequality would hold for all s ∈ [0, λ_n], contradicting the convergence. As x(t) is bounded and λ_n → 0, we have a contradiction. Notice that ∫_0^t x(u)du → 0 as t → ∞, so the same is true for the differences.
Notice also that in (2) we have D*(t, s) → 0 as t − s → ∞, while in (1) we have C*(t, s) → ∞ as t − s → ∞. If we equalize these and let C(t, s) → L < ∞ as t − s → ∞, then we can obtain a much stronger result.
Theorem 2.4. Let x solve (3), let (11) hold, let y solve (13), and let y(t) = ∫_0^t x(u)du → 0 as t → ∞. If for all large fixed T the condition below holds, and both C(t, t) and C(t, T) are bounded independently of t and T, then the conclusion follows. Consider the last line. Let ε > 0 be given. For the last term, since C(t, t) and C(t, T) are bounded, take T so large that the last term is bounded by ε. With that T fixed, consider the first term and let t be so large that the first term is also bounded by ε.
Now, we will have good reason (discussed at the beginning of the last section) for wanting to show that solutions of (3) are bounded when we only ask that a(t) be bounded. One such result will now be given.
Theorem 2.5. Suppose that (11) holds, that a(t) is bounded, and that there is an M > 0 with the bound below. Then for V_4 defined in Theorem 2.2 we have both V_4 and y^2(t) = (∫_0^t x(s)ds)^2 bounded.
Proof. If V_4 is not bounded, then there is a sequence {t_n} ↑ ∞ with V_4(t_n) ≥ V_4(s) for 0 ≤ s ≤ t_n, and there is a γ > 0 with y^2(t_n) ≤ γ, as may be seen from the derivative of V_4. Taking t = t_n, we then obtain the estimate below. Use this and the Schwarz inequality in V_4 to obtain the required bound. The result follows from this.

Sufficient conditions for boundedness
We now turn to the necessary conditions which we derived in Subsection 1.1 and derive conditions on D and C so that we obtain sufficient conditions for boundedness. These boundedness results are based on a(t) being differentiable. In the last section we will seek boundedness without differentiating a(t).
Theorem 3.1. Let a ∈ L^2[0, ∞), D(t, t) ≥ α > 0, and let the condition below hold. One will find Liapunov functionals for such equations throughout [1] of the form indicated, for positive constants γ, λ. The result follows from this.
Our D* will not quite satisfy the conditions of Theorem 3.1 and its necessary conditions given in Theorem 1.2. So we face the task of working harder to meet those necessary conditions. Turning to C(t, s), we ask if we can simply differentiate (3) and conquer a(t) = ∫_0^t ln[e + s]ds using C(t, s) = ln[e + (t − s)]; in fact, we will have to differentiate twice.
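The first differentiation is simply the Leibniz rule applied to (3); we sketch it explicitly, since the displayed derivative did not survive extraction:

```latex
% Differentiating  x(t) = a(t) - \int_0^t C(t,s)\,x(s)\,ds  once (Leibniz rule):
x'(t) = a'(t) - C(t,t)\,x(t) - \int_0^t C_t(t,s)\,x(s)\,ds .
% For C(t,s) = \ln[e + (t-s)] this reads
x'(t) = a'(t) - x(t) - \int_0^t \frac{x(s)}{e + (t-s)}\,ds ,
% since C(t,t) = \ln e = 1 and C_t(t,s) = 1/(e + (t-s)).
```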
Then a ∈ L^2 implies that the solution x of (3) is also in L^2; moreover, x(t) is bounded.
Proof. The derivative of (3) is bounded, and so is x. Differentiate V_3 along a solution of that derivative of (3) and obtain

This results in

V_3′(t) ≤ 2x a′(t) − 2αx^2.

A standard inequality, followed by integration, now finishes the proof.
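The "standard inequality" is just 2uv ≤ αu^2 + v^2/α. A sketch of the final step (our reconstruction, assuming a′ ∈ L^2, which the differentiability hypotheses suggest):

```latex
% From  V_3'(t) \le 2x\,a'(t) - 2\alpha x^2  and  2x\,a' \le \alpha x^2 + (a')^2/\alpha:
V_3'(t) \le -\alpha x^2(t) + \frac{(a'(t))^2}{\alpha}.
% Integrating from 0 to t:
\alpha \int_0^t x^2(s)\,ds \le V_3(0) + \frac{1}{\alpha}\int_0^t (a'(s))^2\,ds < \infty,
% so x \in L^2[0,\infty).
```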
If we hope to conquer a(t) = ∫_0^t ln[e + s]ds with C*(t, s), it is clear that we must take another derivative of (3).
We displayed three "genetic" type kernels: r(t − s) + C*(t, s), r > 0; C*(t, s); and 1 + Arctan(t − s). For a ∈ L^2 we found that the smallest one generated behavior of x more closely approximating that of D*(t, s). In this section we will show that the largest one will yield x(t) bounded even when a is not in L^2, allowing a(t) = (t + 1)^p, where p < 3/2, for example.
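A numerical sketch (our own illustration, not a proof) is consistent with this claim: with the largest kernel, r = 1, and the hypothetical choice a(t) = t + 1 (p = 1 < 3/2), the computed solution stays far below the unbounded forcing term:

```python
import math

def solve_volterra(kernel, a, T=40.0, n=800):
    """Solve x(t) = a(t) - int_0^t kernel(t,s) x(s) ds by the trapezoid rule."""
    h = T / n
    t = [i * h for i in range(n + 1)]
    x = [a(0.0)]                       # at t = 0 the integral term vanishes
    for i in range(1, n + 1):
        acc = 0.5 * kernel(t[i], t[0]) * x[0]
        for j in range(1, i):
            acc += kernel(t[i], t[j]) * x[j]
        # x_i appears on both sides; solve the scalar linear equation for it
        x.append((a(t[i]) - h * acc) / (1.0 + 0.5 * h * kernel(t[i], t[i])))
    return t, x

# the "largest" genetic kernel r(t-s) + ln(e + (t-s)), taken with r = 1
C_big = lambda t, s: (t - s) + math.log(math.e + (t - s))
a = lambda t: t + 1.0          # a(t) = (t+1)^p with p = 1; not in L^2[0, oo)

t, x = solve_volterra(C_big, a)
print(max(abs(v) for v in x), a(t[-1]))  # max of |x| versus the forcing a(T)
```

The large kernel acts as a powerful restoring term: a(t) grows without bound while the solution remains of modest size.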
We are now going to continue the process and obtain second order equations. First, return to (3) and differentiate twice to obtain

(14) x″(t) = a″(t) − [(C(t, t))′ + C_t(t, t)]x − C(t, t)x′ − ∫_0^t C_tt(t, s)x(s)ds.

We come now to the critical part. Given (14), can we find an appropriate Liapunov functional? We can, and with fascinating ease, by the simple device of integration by parts; in fact, we have learned to use integrals of this form in many contexts for constructing Liapunov functionals and adding them together. In the theorem below the reader will reasonably ask where these conditions come from, and we want to show that they are natural. Notice that as long as C(t, s) is of convolution type, then C(t, t) is constant, so (C(t, t))′ = 0.
