
Optimal Shape of an Underwater Moving Bottom Generating Surface Waves Ruled by a Forced Korteweg-de Vries Equation

Abstract

It is well known since Wu and Wu (in: Proceedings of the 14th symposium on naval hydrodynamics, National Academy Press, Washington, pp 103–125, 1982) that a forcing disturbance moving steadily with a transcritical velocity in shallow water can generate, continuously and periodically, a succession of solitary waves propagating ahead of the disturbance in procession. One possible new application of this phenomenon could very well be surfing competitions, where in a controlled environment, such as a pool, waves can be generated with the use of a translating bottom. In this paper, we use the forced Korteweg–de Vries equation to investigate the shape of the moving body capable of generating the highest first upstream-progressing solitary wave. To do so, we study the following optimization problem: maximizing the total energy of the system over the set of non-negative square-integrable bottoms, with uniformly bounded norms and compact supports. We establish analytically the existence of a maximizer saturating the norm constraint, derive the gradient of the functional, and then implement numerically an optimization algorithm yielding the desired optimal shape.

Notes

  1. Here, the term cnoidal simply refers to the profile of the periodic travelling-wave solutions to the KdV equation (see [19]).

References

  1. Wu, T.Y., Wu, D.M.: Three-dimensional nonlinear long waves due to moving surface pressure. In: Proceedings of the 14th Symposium on Naval Hydrodynamics, pp. 103–125. National Academy Press, Washington (1982)

  2. Instant Surfing: Available at http://www.wavegarden.com (2012). Accessed 26 Sep 2018

  3. Wu, T.Y.: Generation of upstream advancing solitons by moving disturbances. J. Fluid Mech. 184, 75–99 (1987)

  4. Cao, Y., Beck, R.F., Schultz, W.W.: Numerical computations of two-dimensional solitary waves generated by moving disturbances. Int. J. Numer. Methods Fluids 17, 905–920 (1993)

  5. Lee, S.J., Yates, G.T., Wu, T.Y.: Experiments and analyses of upstream-advancing solitary waves generated by moving disturbances. J. Fluid Mech. 199, 569–593 (1989)

  6. Zhang, D., Chwang, A.T.: Numerical study of nonlinear shallow water waves produced by a submerged moving disturbance in viscous flow. Phys. Fluids 8, 147–155 (1996)

  7. Bona, J.L., Zhang, B.Y.: The initial-value problem for the forced Korteweg–de Vries equation. Proc. R. Soc. Edinb. Ser. A 126, 571–598 (1996)

  8. Colliander, J., Keel, M., Staffilani, G., Takaoka, H., Tao, T.: Sharp global well-posedness for KdV and modified KdV on \({\mathbb{R}}\) and \({\mathbb{T}}\). J. Am. Math. Soc. 16, 705–749 (2003)

  9. Tsugawa, K.: Global well-posedness for the KdV equations on the real line with low regularity forcing terms. Commun. Contemp. Math. 8, 681–713 (2006)

  10. Kenig, C.E., Ponce, G., Vega, L.: A bilinear estimate with applications to the KdV equation. J. Am. Math. Soc. 9, 573–603 (1996)

  11. Rosier, L., Zhang, B.Y.: Control and stabilization of the Korteweg–de Vries equation: recent progresses. J. Syst. Sci. Complex. 22, 647–682 (2009)

  12. Nersisyan, H., Dutykh, D., Zuazua, E.: Generation of 2D water waves by moving disturbances. IMA J. Appl. Math. 80, 1235–1253 (2015)

  13. Bona, J.L., Chen, M., Saut, J.-C.: Boussinesq equations and other systems for small-amplitude long waves in nonlinear dispersive media II. Nonlinearity 17, 925–952 (2004)

  14. Zabusky, N.J., Kruskal, M.D.: Interaction of solitons in a collisionless plasma and the recurrence of initial states. Phys. Rev. Lett. 15, 240–243 (1965)

  15. Fornberg, B., Whitham, G.B.: A numerical and theoretical study of certain nonlinear wave phenomena. Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Sci. 289, 373–404 (1978)

  16. Trefethen, L.N.: Spectral Methods in MATLAB, Chapter 10. Society for Industrial and Applied Mathematics, Philadelphia (2000)

  17. Simon, J.: Compact sets in the space \({L}^{p}(0,{T};{B})\). Annali di Matematica Pura ed Applicata 146, 65–96 (1987)

  18. Roubíček, T.: Nonlinear partial differential equations with applications. In: International Series of Numerical Mathematics, vol. 153. Birkhäuser (2005)

  19. Drazin, P.G., Johnson, R.S.: Solitons: an introduction. In: Cambridge Texts in Applied Mathematics, 2nd edn. Cambridge University Press (1989)

  20. Trélat, E., Zuazua, E.: The turnpike property in finite-dimensional nonlinear optimal control. J. Differ. Equ. 258, 81–114 (2015)

  21. Furihata, D.: Finite difference schemes for \(\frac{\partial u}{\partial t} = (\frac{\partial }{\partial x})^{\alpha } \frac{\delta g}{\delta u}\) that inherit energy conservation or dissipation property. J. Comput. Phys. 156, 181–205 (1999)

  22. Camassa, R., Wu, T.Y.: Stability of forced steady solitary waves. Philos. Trans. R. Soc. Lond. A Math. Phys. Eng. Sci. 337, 429–466 (1991)

  23. Djidjeli, K., Price, W.G., Twizell, E.H., Wang, Y.: Numerical methods for the solution of the third- and fifth-order dispersive Korteweg–de Vries equations. J. Comput. Appl. Math. 58, 307–336 (1995)

Acknowledgements

This work started in March 2010 from an original idea of E. Zuazua, who guided the author J. D. during an internship at the Basque Center for Applied Mathematics (Spain). J. D. gratefully acknowledges E. Zuazua for his support, dynamism, and guidance during this period, which now, looking back, feels like good old times. The work was then considerably improved while J. D. was finishing his Master's degree at the Institut Elie Cartan de Lorraine (France), for which the author would like to acknowledge financial support and thank A. Henrot for the encouragement to pursue this study. R. B. acknowledges the support of Science Foundation Ireland under grant 12/IA/1683. Finally, we sincerely thank the Editors and anonymous Referees for their valuable comments, which helped us improve the quality of this manuscript.

Corresponding author

Correspondence to Jeremy Dalphin.

Additional information

Communicated by Grégoire Allaire.

Appendices

Appendix A: Proofs of the Main Results

1.1 A.1 Global Well-Posedness of the fKdV Equation

Proof of Proposition 4.1

Assume \(T > 0\) is fixed and \(b \in L^{2}(\mathbb {R},\mathbb {R})\) is given. As a particular case of [9, Theorem 1.2], with \(\sigma = -1\), \(f = - \frac{\mathrm{d}b}{\mathrm{d}x}\), and initial data \(u_{0} \equiv 0\), there exists a solution \(u_{b} \in C^{0}(0,T;H^{2}(\mathbb {R},\mathbb {R}))\) to the initial-value problem (4). Consider any other bottom \(\mathfrak {b} \in L^{2}(\mathbb {R},\mathbb {R})\), with corresponding solution \(u_{\mathfrak {b}} \in C^{0}(0,T;H^{2}(\mathbb {R},\mathbb {R}))\), then set \(\delta b = \mathfrak {b} - b\) and \(\delta u = u_{\mathfrak {b}} - u_{b}\). Clearly, one has:

$$\begin{aligned} \begin{array}{ll} \dfrac{ \partial \left( \delta u \right) }{\partial t} + \dfrac{\partial }{\partial x} \left[ \dfrac{ \left( \delta u \right) ^{2}}{2} + u_{b} \, \delta u + \dfrac{\partial ^{2} \left( \delta u \right) }{\partial x^{2}} \right] = - \dfrac{\mathrm {d}\left( \delta b \right) }{\mathrm {d}x} \in H^{-1}(\mathbb {R},\mathbb {R}), &{}\quad t \in [0,T], \\ \delta u (x,0) = 0, &{}\quad x \in \mathbb {R}. \\ \end{array} \end{aligned}$$
(19)

Although the partial derivatives in (19) have to be handled with care, since they are understood in distributional sense, we can still apply the integration-by-parts formula stated in [18, Lemma 7.3] by considering the Gelfand triple \(H^{1}(\mathbb {R},\mathbb {R}) \subset L^{2}(\mathbb {R},\mathbb {R}) \subset H^{-1}(\mathbb {R},\mathbb {R})\) and the fact that we have \(\delta u \in \lbrace w \in L^{2}(0,T;H^{1}(\mathbb {R},\mathbb {R})) : \partial _{t}w \in L^{2}(0,T;H^{-1}(\mathbb {R},\mathbb {R})) \rbrace \). We thus obtain for any \(t \in [0,T]\):

$$\begin{aligned} \Vert \delta u \left( \bullet , t \right) \Vert _{L^{2}(\mathbb {R},\mathbb {R})}^{2}= & {} \Vert \delta u ( \bullet ,0 ) \Vert _{L^{2}(\mathbb {R},\mathbb {R})}^{2} \\&+~2 \int _{0}^{t} \left\langle \partial _{t}(\delta u) ~\vert ~ \delta u \right\rangle _{H^{-1}(\mathbb {R},\mathbb {R}),H^{1}(\mathbb {R},\mathbb {R})} (\bullet ,s) \,\mathrm {d}s . \end{aligned}$$

We get \(\langle \partial _{t}(\delta u) ~\vert ~ \delta u \rangle _{H^{-1}(\mathbb {R},\mathbb {R}),H^{1}(\mathbb {R},\mathbb {R})} = \langle \frac{( \delta u)^{2}}{2} + u_{b} \, \delta u + \partial _{xx}(\delta u) + \delta b ~\vert ~ \partial _{x} (\delta u) \rangle _{L^{2}(\mathbb {R},\mathbb {R}),L^{2}(\mathbb {R},\mathbb {R})}\) and \(\delta u(\bullet , 0 ) = 0\) by using (19), from which it follows for any \(t \in [0,T]\):

$$\begin{aligned} \Vert \delta u \left( \bullet , t \right) \Vert _{L^{2}(\mathbb {R},\mathbb {R})}^{2} ~~ = ~~ 2 \int _{0}^{t} \int _{\mathbb {R}} \delta b \, \dfrac{\partial (\delta u)}{\partial x} \, \mathrm {d}x \,\mathrm {d}s - \int _{0}^{t} \int _{\mathbb {R}} \left( \delta u \right) ^2 \dfrac{\partial u_{b}}{\partial x} \, \mathrm {d}x \,\mathrm {d}s . \end{aligned}$$

We may then write, by introducing a finite constant \(C > 0\) depending only on the norms \(\Vert u_{b} \Vert _{C^{0}(0,T;H^{2}(\mathbb {R},\mathbb {R}))}\) and \(\Vert u_{\mathfrak {b}} \Vert _{C^{0}(0,T;H^{2}(\mathbb {R},\mathbb {R}))}\):

$$\begin{aligned} \forall t \in [0,T], ~~ \Vert \delta u \left( \bullet , t \right) \Vert _{L^{2}(\mathbb {R},\mathbb {R})}^{2} \leqslant 4CT \Vert \delta b \Vert _{L^{2}(\mathbb {R},\mathbb {R})} + C \int _{0}^{t} \Vert \delta u \left( \bullet , s \right) \Vert ^{2}_{L^{2}(\mathbb {R},\mathbb {R})} \, \mathrm {d}s. \end{aligned}$$

Since \(t \in [0,T] \mapsto \Vert \delta u (\bullet ,t) \Vert _{L^{2}(\mathbb {R},\mathbb {R})} \in \mathbb {R}\) is a continuous function [18, Lemma 7.3], we can apply Grönwall’s Lemma, which yields:

$$\begin{aligned} \Vert u_{\mathfrak {b}} - u_{b} \Vert ^{2}_{C^{0}(0,T;L^{2}(\mathbb {R},\mathbb {R}))} ~ = ~ \sup _{t \in [0,T]} \Vert \delta u \left( \bullet , t \right) \Vert _{L^{2}(\mathbb {R},\mathbb {R})}^{2} ~ \leqslant ~ 4CT \, \hbox {e}^{CT} \, \Vert \mathfrak {b} - b \Vert _{L^{2}(\mathbb {R},\mathbb {R})} . \end{aligned}$$
In particular, the uniqueness of the solution to the initial-value problem (4) follows, and the functional \(F: b \mapsto F(b) \) given by (3) is a well-defined map from \(L^{2}(\mathbb {R},\mathbb {R})\) into \(\mathbb {R}\).

1.2 A.2 Explicit A Priori Estimates

Proposition A.1

Let \(T > 0\) and \(b \in H^{\infty }(\mathbb {R},\mathbb {R}) : = \cap _{s \geqslant 0}H^{s}(\mathbb {R},\mathbb {R})\). Then, there exist three polynomials \(P_{0}\), \(P_{1}\), \(P_{2}\) in two variables with (non-negative) constant coefficients, such that the following estimates hold for the unique solution \(u_{b} \in C^{\infty }(0,T;H^{\infty }(\mathbb {R},\mathbb {R}))\) to the initial-value problem (4):

  1. (i)

    \(\displaystyle { \sup _{t \in [0,T]} \Vert u_{b} \left( \bullet , t \right) \Vert _{L^{2}(\mathbb {R},\mathbb {R})} \leqslant P_{0} \left( T,\Vert b \Vert _{L^{2}(\mathbb {R},\mathbb {R})} \right) }\),

  2. (ii)

    \(\displaystyle { \sup _{t \in [0,T]} \Vert \partial _{x} u_{b} \left( \bullet , t \right) \Vert _{L^{2}(\mathbb {R},\mathbb {R})} \leqslant P_{1} \left( T,\Vert b \Vert _{L^{2}(\mathbb {R},\mathbb {R})} \right) }\),

  3. (iii)

    \(\displaystyle { \sup _{t \in [0,T]} \Vert \partial _{xx} u_{b} \left( \bullet , t \right) \Vert _{L^{2}(\mathbb {R},\mathbb {R})} \leqslant \hbox {e}^{T \left( 1 + \frac{1}{3} \Vert b \Vert ^{2}_{L^{2}(\mathbb {R},\mathbb {R})} \right) } P_{2} \left( T,\Vert b \Vert _{L^{2}(\mathbb {R},\mathbb {R})} \right) }\).

Proof

Let \(T > 0\) and \(b \in H^{\infty }(\mathbb {R},\mathbb {R}) : = \cap _{s \geqslant 0}H^{s}(\mathbb {R},\mathbb {R})\). Lemmas 3.1 and 3.2 imply the existence of a unique smooth solution \(u_{b} \in C^{\infty }(0,T;H^{\infty }(\mathbb {R},\mathbb {R}))\) of (4). Since \(b \in L^{2}(\mathbb {R},\mathbb {R})\), we may apply [9, Proposition 3.1] with final time \(T+1 (> 1)\), \(\sigma = -1\), \(f = - \frac{\mathrm{d}b}{\mathrm{d}x}\), and homogeneous initial data \(u_{0} \equiv 0\), to establish the inequality:

$$\begin{aligned} \sup _{t \in [0,T] } \Vert u_{b}(\bullet , t ) \Vert _{L^{2}(\mathbb {R},\mathbb {R})} ~\leqslant & {} ~ \sup _{t \in [0,T+1] } \Vert u_{b}(\bullet , t ) \Vert _{L^{2}(\mathbb {R},\mathbb {R})} \\\leqslant & {} C \left( 1 + \left( 1+T \right) ^{3} \Vert \partial _{x} b \Vert _{H^{-1}(\mathbb {R},\mathbb {R})}^{3} \right) , \end{aligned}$$

for some positive constant C, which does not depend on T, b, or \(u_{b}\). This proves assertion (i) with \(P_{0}(x,y) := C(1+(1+x)^{3}y^{3})\), using here the fact that \(\Vert \partial _{x}b \Vert _{H^{-1}(\mathbb {R},\mathbb {R})} \leqslant \Vert b \Vert _{L^{2}(\mathbb {R},\mathbb {R})}\). We now exploit the Hamiltonian structure of Eq. (4) (see [22]). Although energy is not conserved, as already pointed out, an extra conserved quantity is available for the fKdV equation, which is in fact a Hamiltonian for the system. Let H denote this Hamiltonian. Then,

$$\begin{aligned} \forall t \in [0,T] , \quad H(t) : = \int _{\mathbb {R}} \left[ \left( \dfrac{\partial u_{b}}{\partial x} \right) ^{2} - \dfrac{1}{3}u_b^3 - 2 b \,u_{b} \right] \, \mathrm {d}x = 0. \end{aligned}$$
(20)

The Cauchy–Schwarz inequality and \(\Vert g \Vert ^{2}_{L^{\infty }(\mathbb {R},\mathbb {R})} \leqslant 2 \Vert g \Vert _{L^{2}(\mathbb {R},\mathbb {R})} \Vert \partial _{x}g \Vert _{L^{2}(\mathbb {R},\mathbb {R})} \leqslant \Vert g \Vert ^{2}_{H^{1}(\mathbb {R},\mathbb {R})} \) valid for any \(g \in H^{1}(\mathbb {R},\mathbb {R})\) are combined with the well-known inequalities \(2xy \leqslant x^{2} + y^{2} \) and \(\sqrt{x+y} \leqslant \sqrt{x} + \sqrt{y}\), valid for any \(x, y \geqslant 0\), so that one can deduce from (20):

$$\begin{aligned} \Vert \partial _{x} u_{b} \left( \bullet , t \right) \Vert _{L^{2}(\mathbb {R},\mathbb {R})} \leqslant \sqrt{\dfrac{7}{5}} \left( \Vert u_{b} \left( \bullet , t \right) \Vert _{L^{2}(\mathbb {R},\mathbb {R})} + \Vert u_{b} \left( \bullet , t \right) \Vert ^{2}_{L^{2}(\mathbb {R},\mathbb {R})} + \Vert b \Vert _{L^{2}(\mathbb {R},\mathbb {R})} \right) . \end{aligned}$$

This proves the estimate (ii) of our Proposition A.1 with \(P_{1}(x,y) : = 2 [y+P_{0}(x,y)+P_{0}^{2}(x,y)]\). Finally, the same method is used to determine \(P_{2}\). We first show for any \(t \in [0,T]\):

$$\begin{aligned}&\dfrac{\mathrm {d}}{\mathrm {d}t} \int _{\mathbb {R}} \left[ \left( \partial _{xx} u_{b} \right) ^{2} + 2 b \, \partial _{xx}u_{b} - \dfrac{5 }{3} u_{b} \left( \partial _{x} u_{b} \right) ^{2} + \dfrac{2}{3} b \, u_{b}^{2} + \dfrac{5}{36} u_{b}^{4} \right] \,\mathrm {d}x \nonumber \\&\quad = \dfrac{2}{3} \int _{\mathbb {R}} b \, I \, \partial _{x} u_{b} \,\mathrm {d}x, \end{aligned}$$
(21)

with \(I : = \partial _{xx}u_{b}+ \frac{1}{2}u_{b}^{2}+b\). Notice that both sides of (21) depend only on time t. Denote by G(t) the left-hand side of the equation. Straightforward calculations lead to:

$$\begin{aligned} G(t)= & {} \int _{\mathbb {R}} 2 \partial _{xt}u_{b} \underbrace{\left[ - \partial _{xxx}u_{b} - b_{x} \right] }_{ = \partial _{t}u_{b} + u_{b} \partial _{x} u_{b}} - \dfrac{10}{3} u_{b} \partial _{x} u_{b} \partial _{xt}u_{b} \\&- \dfrac{5}{3} \partial _{t}u_{b} \left( \partial _{x} u_{b} \right) ^{2} + \dfrac{4}{3} b u_{b} \partial _{t}u_{b} + \dfrac{5}{9} u_{b}^{3} \partial _{t} u_{b}{~} \\= & {} \int _{\mathbb {R}} \dfrac{4}{3} u_{b} \underbrace{\partial _{t}u_{b}}_{ = -I_{x}} \underbrace{\left[ \partial _{xx} u_{b} + \dfrac{u_{b}^{2}}{2} + b \right] }_{:= I} - \dfrac{1}{3} \underbrace{\partial _{t}u_{b}}_{ = -I_{x}} \left( \partial _{x} u_{b} \right) ^{2} - \dfrac{1}{9} u_{b}^{3} \underbrace{\partial _{t} u_{b}}_{ = -I_{x}} \\= & {} \int _{\mathbb {R}} \dfrac{2}{3} \partial _{x} u_{b} \, I \, b . \end{aligned}$$

Integrating equality (21) over [0, t] for any \(t \in [0,T]\) and following the same strategy as above yields

$$\begin{aligned} \Vert \partial _{xx}u_{b} (\bullet ,t) \Vert _{L^{2}(\mathbb {R},\mathbb {R})}^{2}\leqslant & {} 2 \left( 1 + \frac{\Vert b \Vert _{L^{2}(\mathbb {R},\mathbb {R})}^{2}}{3} \right) \int _{0}^{t} \Vert \partial _{xx} u_{b}(\bullet ,s) \Vert ^{2}_{L^{2}(\mathbb {R},\mathbb {R})} \mathrm {d}s \\&+ P_{01}\left( T,\Vert b \Vert _{L^{2}(\mathbb {R},\mathbb {R})} \right) \end{aligned}$$

where we have set

$$\begin{aligned} P_{01}(x,y):= & {} 2x \left[ y^{2} \left( 1 + \dfrac{P_{1}(x,y)^{2}}{3} \right) + \dfrac{P_{0}(x,y)^{2}}{4} \left( P_{0}(x,y)^{2} + P_{1}(x,y)^{2} \right) \right] \\&+\,2 \left[ 2y^{2} + \dfrac{5 P_{1}(x,y)}{6} \left( P_{0}(x,y)^{2} + 2P_{1}(x,y)^{2} \right) \right. \\&\left. +\,\dfrac{y}{3} \left( 2 P_{0}(x,y)^{2} + P_{1}(x,y)^{2} \right) +\dfrac{5 P_{0}(x,y)^{2} }{36} \left( P_{0}(x,y)^{2} + P_{1}(x,y)^{2} \right) \right] . \end{aligned}$$

Consequently, we apply Grönwall’s Lemma to the last inequality above, from which assertion (iii) follows by setting \(P_{2}(x,y) : = 1 + P_{01}(x,y)\), concluding the proof. \(\square \)

Proof of Proposition 4.2

Let \(T > 0\) and \(b \in L^{2}(\mathbb {R},\mathbb {R})\). First, according to Proposition 4.1, we can consider the unique solution \(u_{b} \in C^{0}(0,T;H^{2}(\mathbb {R},\mathbb {R}))\) of (4). Moreover, by density, there exists a sequence \((b_{n})_{n \in \mathbb {N}}\) of smooth maps with compact support strongly converging to b in \(L^{2}(\mathbb {R},\mathbb {R})\). In addition, Proposition A.1 ensures that the sequence \((u_{b_{n}})_{n\in \mathbb {N}}\) of associated smooth maps also satisfies (4) and the a priori estimates for any \(n \in \mathbb {N}\). We deduce from the quantitative estimate of Proposition 4.1 the strong convergence of \((u_{b_{n}})_{n \in \mathbb {N}}\) to \(u_{b}\) in \(C^{0}(0,T;L^{2}(\mathbb {R},\mathbb {R}))\). In particular, we can let \(n \rightarrow + \infty \) in the inequality (i) of Proposition A.1 applied to \((u_{b_{n}},b_{n})\) in order to get \( \Vert u_{b} \Vert _{C^{0}(0,T;L^{2}(\mathbb {R},\mathbb {R}))} \leqslant P_{0} ( T, \Vert b \Vert _{L^{2}(\mathbb {R},\mathbb {R})} ) \).

Then, let \(t \in [0,T]\) be fixed. Since \((b_{n})_{n \in \mathbb {N}}\) is bounded, the sequence \((\partial _{x}u_{b_{n}}(\bullet ,t))_{n \in \mathbb {N}}\) is uniformly bounded in \(H^{1}(\mathbb {R},\mathbb {R})\). Consequently, there exists a subsequence that weakly converges in \(H^{1}(\mathbb {R},\mathbb {R})\) to a certain map, which has to be \(\partial _{x}u_{b}(\bullet ,t)\) by considering the convergence in the distributional sense. We emphasize the fact that here the subsequence depends on the time variable, so it is denoted by \((\partial _{x}u_{b_{n(t)}})_{n \in \mathbb {N}}\). Considering the lower semicontinuity of the norm with respect to weak convergence, we obtain for any \( t \in [0,T]\):

$$\begin{aligned} \Vert \partial _{x} u_{b}(\bullet , t) \Vert _{H^{1}(\mathbb {R},\mathbb {R})}\leqslant & {} \liminf _{n \in \mathbb {N}} \Vert \partial _{x} u_{b_{n(t)}}(\bullet , t) \Vert _{H^{1}(\mathbb {R},\mathbb {R})} \\\leqslant & {} \, P_{2} \left( T, \Vert b \Vert _{L^{2}(\mathbb {R},\mathbb {R})} \right) \hbox {e}^{ T+\frac{T}{3} \Vert b \Vert ^{2}_{L^{2}(\mathbb {R},\mathbb {R})} } \\&+ \, P_{1} \left( T, \Vert b \Vert _{L^{2}(\mathbb {R},\mathbb {R})} \right) . \\ \end{aligned}$$

Hence, the inequality of Proposition 4.2 holds with \((b,u_{b})\), concluding the proof. \(\square \)

1.3 A.3 Existence of an Optimal Bottom

Proof of Proposition 4.3

The proof is very similar to that of Proposition 4.1. Let \(T > 0\), \(K > 0\) and \(b \in L^{2}(\mathbb {R},\mathbb {R})\) with support included in \([-K,K]\). From Proposition 4.1, the initial-value problem (4) has a unique solution \(u_{b} \in C^{0}(0,T;H^{2}(\mathbb {R},\mathbb {R}))\). Consider now any sequence \((b_{n})_{n \in \mathbb {N}}\) of square-integrable maps with supports all included in \([-K,K]\) that is weakly converging to b in \(L^{2}(\mathbb {R},\mathbb {R})\). In particular, such a sequence is bounded, so Propositions 4.1 and 4.2 ensure that the sequence \((u_{b_{n}})_{n\in \mathbb {N}}\) of associated maps satisfies (4) and the a priori estimate for any \(n \in \mathbb {N}\). We deduce that \(u_{b}\) and \((u_{b_{n}})_{n\in \mathbb {N}}\) are uniformly bounded in \(C^{0}(0,T;H^{2}(\mathbb {R},\mathbb {R}))\). First, we consider the following two compact embeddings: \(H^{2}(]-K,K[,\mathbb {R}) \subset H^{1}(]-K,K[,\mathbb {R}) \subset H^{-1}(]-K,K[,\mathbb {R})\). We can apply the Aubin–Lions–Simon Lemma [17, Section 8 Corollary 4] to obtain that the following embedding is also compact:

$$\begin{aligned} W:= & {} \left\{ w \in L^{\infty }\left( 0,T;H^{2} \left( \left] - K, K \right[,\mathbb {R} \right) \right) : \partial _{t}w \in L^{\infty } \left( 0,T;H^{-1} \left( \left] - K, K \right[ ,\mathbb {R} \right) \right) \right\} \\&\hookrightarrow C^{0} \left( 0,T;H^{1} \left( \left] -K,K \right[,\mathbb {R} \right) \right) . \end{aligned}$$

From the foregoing and (4), \( \Vert u_{b_{n}} \Vert _{L^{\infty }(0,T;H^{2}(]-K,K[,\mathbb {R}))} + \Vert \partial _{t} u_{b_{n}} \Vert _{L^{\infty }(0,T;H^{-1}(]-K,K[,\mathbb {R}))} \) is uniformly bounded, i.e. \((u_{b_{n}})_{n \in \mathbb {N}}\) is uniformly bounded in W. We deduce that there exists \(u_{K} \in C^{0}(0,T;H^{1}(]-K,K[,\mathbb {R}))\) and a subsequence \((u_{b_{n'}})_{n \in \mathbb {N}}\) that is strongly converging to \(u_{K}\) in \(C^{0}(0,T;H^{1}(]-K,K[,\mathbb {R}))\). Then, let \(n \in \mathbb {N}\). We introduce the quantities \(\delta b = b_{n'} - b\) and \(\delta u = u_{b_{n'}} - u_{b} \). One can check that they satisfy the initial-value problem (19). We emphasize again the fact that the partial derivatives in (19) have to be handled with care since these are understood in a distributional sense. But we can still apply the integration-by-parts formula of [18, Lemma 7.3] by considering the Gelfand triple \(H^{1}(\mathbb {R},\mathbb {R}) \subset L^{2}(\mathbb {R},\mathbb {R}) \subset H^{-1}(\mathbb {R},\mathbb {R})\) and the fact that \(\delta u \in \lbrace w \in L^{2}(0,T;H^{1}(\mathbb {R},\mathbb {R})) : \partial _{t}w \in L^{2}(0,T;H^{-1}(\mathbb {R},\mathbb {R})) \rbrace \). Following the same calculations as in the proof of Proposition 4.1, we get for any \(t \in [0,T]\):

$$\begin{aligned} \Vert \delta u \left( \bullet , t \right) \Vert _{L^{2}(\mathbb {R},\mathbb {R})}^{2} = 2 \int _{0}^{t} \int _{\mathbb {R}} \delta b \, \dfrac{\partial (\delta u)}{\partial x} \, \mathrm {d}x \, \mathrm {d}s - \int _{0}^{t} \int _{\mathbb {R}} \left( \delta u \right) ^{2} \dfrac{\partial u_{b}}{\partial x} \, \mathrm {d}x \, \mathrm {d}s . \end{aligned}$$

Finally, we use the Cauchy–Schwarz inequality, and the fact that all the supports of the considered bottoms are included in \([-K,K]\). We thus get for any \(t \in [0,T]\):

$$\begin{aligned}&\Vert \left( u_{b_{n'}} - u_{b} \right) \left( \bullet , t \right) \Vert _{L^{2}(\mathbb {R},\mathbb {R})}^{2}\\&\quad \leqslant 2 \int _{0}^{T} \left| \, \int _{-K}^{K} \left( b_{n'} - b \right) \left( x \right) \left[ \dfrac{\partial u_{b_{n'}}}{\partial x} -\dfrac{\partial u_{b}}{\partial x} \right] \left( x, s \right) \, \mathrm {d}x \, \right| \, \mathrm {d}s \\&\qquad +\, \Vert u_{b} \Vert _{C^{0}(0,T;H^{2}(\mathbb {R},\mathbb {R}))} \int _{0}^{t} \Vert \left( u_{b_{n'}} - u_{b} \right) \left( \bullet , s \right) \Vert _{L^{2}(\mathbb {R},\mathbb {R})}^{2} \, \mathrm {d}s . \end{aligned}$$

Since \(t \in [0,T] \mapsto \Vert \delta u (\bullet ,t) \Vert _{L^{2}(\mathbb {R},\mathbb {R})} \in \mathbb {R}\) is a continuous function [18, Lemma 7.3], we can apply Grönwall’s Lemma and we obtain:

$$\begin{aligned} \Vert u_{b_{n'}} - u_{b} \Vert _{C^{0}(0,T;L^{2}(\mathbb {R},\mathbb {R}))}^{2} ~~ \leqslant ~~ C \int _{0}^{T} \left| \, \int _{-K}^{K} \left( b_{n'} - b \right) \left( x \right) \left[ \dfrac{\partial u_{b_{n'}}}{\partial x} -\dfrac{\partial u_{b}}{\partial x} \right] \left( x, s \right) \, \mathrm {d}x \, \right| \,\mathrm {d}s, \end{aligned}$$

where we have set \(C : = 2\hbox {e}^{T\Vert u_{b} \Vert _{C^{0}(0,T;H^{2}(\mathbb {R},\mathbb {R}))}}\). Hence, it remains to prove that the right-hand side of the above inequality converges to zero as \(n \rightarrow + \infty \). For this purpose, let us now introduce the integrand \(R_{n}: t \mapsto \int _{-K}^{K} \delta b(x) \, \partial _{x}(\delta u)(x,t) \, \mathrm {d} x \). Since \(b_{n'}\) converges weakly to b in \(L^{2}(]-K,K[,\mathbb {R})\) and \(u_{b_{n'}} \) strongly to \( u_{K}\) in \(C^{0}(0,T;H^{1}(]-K,K[,\mathbb {R}))\), we get that \(R_{n}(t)\) converges to zero for any \(t \in [0,T]\). Moreover, the a priori estimate of Proposition 4.2 ensures that \(R_{n}(t)\) is uniformly bounded. Hence, Lebesgue’s dominated convergence theorem applies and \(\int _{0}^{T} \vert R_{n}(t) \vert \, \mathrm {d}t \rightarrow 0 \) as \(n \rightarrow + \infty \). One concludes from the last inequality that \((u_{b_{n'}})_{n \in \mathbb {N}}\) strongly converges to \(u_{b}\) in \(C^{0}(0,T;L^{2}(\mathbb {R},\mathbb {R}))\). The same argument shows that any other converging subsequence has the same limit \(u_{b}\). Recalling that \((u_{b_{n}})_{n \in \mathbb {N}}\) is uniformly bounded, we deduce that the whole sequence converges to \(u_{b}\). \(\square \)

Proof of Theorem 2.1

Combining Lemma 4.1 and Proposition 4.3, it is possible to extract from any maximizing sequence of (2) a subsequence \((b_{n'})_{n \in \mathbb {N}}\) that is weakly converging in \(L^{2}(\mathbb {R},\mathbb {R})\) to a certain \(b^{*} \in \mathcal {B}\), and such that \(u_{b_{n'}} \rightarrow u_{b^{*}}\) in \(C^{0}(0,T;L^{2}(\mathbb {R},\mathbb {R}))\). From the continuity of the embedding \(C^{0}(0,T;L^{2}(\mathbb {R},\mathbb {R})) \subset L^{2}(0,T;L^{2}(\mathbb {R},\mathbb {R})) \), we deduce that \( F(b^{*})= \lim _{n \rightarrow +\infty } F(b_{n'}) = \sup _{b \in \mathcal {B}} F(b) \) with \(b^{*} \in \mathcal {B}\), so the supremum is a maximum and problem (2) has a global maximizer. To conclude the proof, it remains to show that such a maximizer saturates the \(L^{2}\)-constraint, which is proved as in Proposition B.4. Indeed, if this is not the case, then choose \(\theta \in ]1, (M / \Vert b^{\text {opt}} \Vert _{L^{2}(\mathbb {R},\mathbb {R})} )^{2/7}]\): the bottom \(b^{\text {opt}}_{\theta }\) of Lemma B.1 is then admissible. One can check that \(F(b^{\text {opt}}_{\theta })\geqslant F(b^{\text {opt}}) \), and the optimality of \(b^{\text {opt}}\) yields \(u_{b^{\text {opt}}} = 0\) on \([T,\theta ^{3}T]\). Considering now (4), we obtain \(\partial _{x}b^{\text {opt}} = 0\), so \(u_{b^{\text {opt}}} = 0\) also on [0, T] and \(F(b^{\text {opt}}) = 0\), which is a contradiction. \(\square \)
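Note that the admissible range of \(\theta \) above simply reflects how the \(L^{2}\)-norm scales under the transformation of Lemma B.1: since \(b^{\text {opt}}_{\theta }(x) = \theta ^{4} b^{\text {opt}}(\theta x)\), a change of variables gives

$$\begin{aligned} \Vert b^{\text {opt}}_{\theta } \Vert _{L^{2}(\mathbb {R},\mathbb {R})}^{2} = \int _{\mathbb {R}} \theta ^{8} \, b^{\text {opt}} \left( \theta x \right) ^{2} \mathrm {d}x = \theta ^{7} \int _{\mathbb {R}} b^{\text {opt}} \left( y \right) ^{2} \mathrm {d}y = \theta ^{7} \, \Vert b^{\text {opt}} \Vert _{L^{2}(\mathbb {R},\mathbb {R})}^{2} , \end{aligned}$$

so the constraint \(\Vert b^{\text {opt}}_{\theta } \Vert _{L^{2}(\mathbb {R},\mathbb {R})} \leqslant M\) holds precisely when \(\theta \leqslant ( M / \Vert b^{\text {opt}} \Vert _{L^{2}(\mathbb {R},\mathbb {R})} )^{2/7}\), while for \(\theta > 1\) the support of \(b^{\text {opt}}_{\theta }\) shrinks and its non-negativity is preserved, so that \(b^{\text {opt}}_{\theta }\) remains admissible.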

1.4 A.4 Fréchet Differentiability of the Functional

Proof of Proposition 4.4

Formally speaking, if \(v_{b} \) is a solution of (12), then the equation satisfied by \(\partial _{x} v_{b} \) has already been studied in [7]. However, we need to make the existence result more precise, because it is stated in terms of the so-called Bourgain space \(Y_{s,\beta }\) (see [7, Sect. 2] for details). First, we apply [9, Proposition 2.1] with \(\sigma = -1\), \(f = - \partial _{x}b\), \(u_{0} \equiv 0\), \(s = \sigma + 3 = 2\) and \(\beta = \varepsilon + \frac{1}{2}\), where \(\varepsilon > 0\) is chosen small enough. This local existence result combined with standard global arguments [7, Proposition 5.1] establishes that the unique solution \(u_{b}\) of (4) is the [0, T]-restriction of a map \(U_{b} \in Y_{2,\varepsilon + 1 / 2}\).

Then, we introduce new variables \((\xi ,\tau ) :=(-x, T- t)\) to transform (12) into an initial-value problem. We set \(U(\xi ,\tau ) : = U_{b}(-\xi ,T-\tau )\), and we still have \(U \in Y_{2,\varepsilon + 1 / 2} \subset Y_{1,\varepsilon + 1 / 2} \) but we also get \(\partial _{\xi }U \in Y_{1,\varepsilon + 1 / 2} \) [7, above Theorem 5.5]. We can now apply [7, Theorem 2.6] with \(s = 1\), \(\beta = \varepsilon + \frac{1}{2}\), and \(f = 2\partial _{\xi }U \). We deduce that there exists a unique solution \(W \in Y_{1,\varepsilon + 1 / 2}\) satisfying \(W(\bullet ,0) = 0\) and \( \partial _{\tau } W + \partial _{\xi }\left( U\, W \right) + \partial _{\xi \xi \xi } W = 2 \partial _{\xi }U \) on \(\mathbb {R} \times [0,T] \).

Finally, it remains to get back to (12). For this purpose, we consider the [0, T]-restrictions \(u \in C^{0}(0,T;H^{2}(\mathbb {R},\mathbb {R}))\), \(w \in C^{0}(0,T;H^{1}(\mathbb {R},\mathbb {R}))\) of the maps U and W [7, Lemma 2.3]. Using the equations they satisfy, we get \(u(2-w) \in W^{1,1}(0,T;H^{-2}(\mathbb {R},\mathbb {R}))\). From standard linear semi-group theory [7, Section 1 Sect. III], there exists a unique function \( v \in C^{0}(0,T;H^{1}(\mathbb {R},\mathbb {R}))\) satisfying \(v(\bullet , 0) = 0\) and the Airy equation \(\partial _{\tau }v + \partial _{\xi \xi \xi }v = u(2-w)\) on \(\mathbb {R} \times [0,T]\). Looking at the equation satisfied by \(w - \partial _{x}v\), we deduce that \(\partial _{x}v = w\). In particular, we obtain \(v \in C^{0}(0,T;H^{2}(\mathbb {R},\mathbb {R}))\) and getting back to the original variables \((x,t) := (-\xi ,T - \tau )\), one can check that the map \(v_{b}(x,t): = v(\xi ,\tau )\) is a global solution of (12) in \(C^{0}(0,T;H^{2}(\mathbb {R},\mathbb {R}))\).

Finally, we prove that such a solution is unique. Consider two maps of \(C^{0}(0,T; H^{2} (\mathbb {R},\mathbb {R}))\) solving (12), denoted \(v_{1}\) and \(v_{2}\), and introduce the quantity \(\delta v : = v_{1} - v_{2}\). From the linearity of (12), one can check that \( \delta v\) satisfies \(\delta v(\bullet ,T) = 0\) and \(\partial _{t}(\delta v) + u_{b} \, \partial _{x}(\delta v) + \partial _{xxx}(\delta v) = 0\). This last equality is understood in the distributional sense but we can still apply the integration-by-parts formula of [18, Lemma 7.3] with the Gelfand triple \(H^{1}(\mathbb {R},\mathbb {R}) \subset L^{2}(\mathbb {R},\mathbb {R}) \subset H^{-1}(\mathbb {R},\mathbb {R})\) and the fact that \(\delta v \in \lbrace w \in L^{2}(0,T;H^{1}(\mathbb {R},\mathbb {R})): \partial _{t}w \in L^{2}(0,T;H^{-1}(\mathbb {R},\mathbb {R})) \rbrace \). Proceeding as below (19), we get:

$$\begin{aligned} \forall t \in [0,T], ~~ \Vert \delta v (\bullet , t) \Vert ^{2}_{L^{2}(\mathbb {R},\mathbb {R})} \leqslant \Vert \partial _{x}u_{b} \Vert _{C^{0}(0,T;H^{1}(\mathbb {R},\mathbb {R}))} \int _{t}^{T} \Vert \delta v (\bullet ,s) \Vert ^{2}_{L^{2}(\mathbb {R},\mathbb {R})} \, \mathrm {d}s. \end{aligned}$$

It follows from the continuity of the map \(t \in [0,T] \mapsto \Vert \delta v(\bullet ,t) \Vert _{L^{2}(\mathbb {R},\mathbb {R})} \in \mathbb {R}\) [18, Lemma 7.3] and Grönwall’s Lemma that \(\delta v \equiv 0\) on \([0,T] \times \mathbb {R}\), i.e. \(v_{1} = v_{2}\). In conclusion, there exists a unique global solution \(v_{b} \in C^{0}(0,T;H^{2}(\mathbb {R},\mathbb {R}))\) satisfying (12). \(\square \)

Proof of Proposition 4.5

Let \(T > 0\) and \((b, h) \in L^{2}(\mathbb {R},\mathbb {R}) \times L^{2}(\mathbb {R},\mathbb {R})\). From Proposition 4.1, there exist two associated global solutions \(u_{b}\) and \(u_{b+h}\) in \(C^{0}(0,T;H^{2}(\mathbb {R},\mathbb {R}))\) such that \((b,u_{b})\) and \((b+h,u_{b+h}) \) satisfy (4). Introducing again the quantities \(\delta u := u_{b+h} - u_{b}\) and \(\delta b : = (b+h)-b = h\), one can check that \((\delta b, \delta u)\) satisfies (19), which has to be understood in a distributional sense. Still, we can apply the integration-by-parts formula of [18, Lemma 7.3] with the Gelfand triple \(H^{1}(\mathbb {R},\mathbb {R}) \subset L^{2}(\mathbb {R},\mathbb {R}) \subset H^{-1}(\mathbb {R},\mathbb {R})\) and the fact that \(\delta u \) belongs to \( \lbrace w \in L^{2}(0,T;H^{1}(\mathbb {R},\mathbb {R})): \partial _{t}w \in L^{2}(0,T;H^{-1}(\mathbb {R},\mathbb {R})) \rbrace \). Proceeding as below (19), we get:

$$\begin{aligned} \forall t \in [0,T], ~~ \Vert \delta u (\bullet ,t) \Vert ^{2}_{L^{2}(\mathbb {R},\mathbb {R})} = 2 \int _{0}^{t} \int _{\mathbb {R}} \delta b \, \dfrac{\partial (\delta u) }{\partial x} \, \mathrm {d}x \, \mathrm {d}s - \int _{0}^{t} \int _{\mathbb {R}} \left( \delta u \right) ^{2} \dfrac{\partial u_{b}}{\partial x} \, \mathrm {d}x \, \mathrm {d}s . \end{aligned}$$

Using the Cauchy–Schwarz inequality, we obtain for any \(t \in [0,T]\):

$$\begin{aligned} \Vert \delta u (\bullet ,t) \Vert ^{2}_{L^{2}(\mathbb {R},\mathbb {R})} \leqslant C \int _{0}^{t} \Vert \delta u (\bullet ,s) \Vert ^{2}_{L^{2}(\mathbb {R},\mathbb {R})} \, \mathrm {d}s + 2 T \Vert h \Vert _{L^{2}(\mathbb {R},\mathbb {R})} \, \Vert \delta u \Vert _{C^{0}(0,T;H^{1}(\mathbb {R},\mathbb {R}))}, \end{aligned}$$

where we have set \(C : = \Vert u_{b} \Vert _{C^{0}(0,T;H^{2}(\mathbb {R},\mathbb {R}))} \). We can now apply Grönwall’s Lemma to the map \(t \in [0,T] \mapsto \Vert \delta u(\bullet ,t) \Vert _{L^{2}(\mathbb {R},\mathbb {R})} \in \mathbb {R}\), which is continuous [18, Lemma 7.3], to obtain:

$$\begin{aligned} \Vert u_{b+h} - u_{b} \Vert _{C^{0}(0,T;L^{2}(\mathbb {R},\mathbb {R}))}^{2} \leqslant 2 T \hbox {e}^{CT} \Vert h \Vert _{L^{2}(\mathbb {R},\mathbb {R})} \Vert \delta u \Vert _{C^{0}(0,T;H^{1}(\mathbb {R},\mathbb {R}))} . \end{aligned}$$
(22)

Combined with the continuity of the map \(b \in L^{2}(\mathbb {R},\mathbb {R}) \mapsto u_{b} \in C^{0}(0,T;H^{1}(\mathbb {R},\mathbb {R}))\) ensured by Corollary B.1, we can deduce from (22) that \(\Vert \delta u \Vert ^{2}_{C^{0}(0,T;L^{2}(\mathbb {R},\mathbb {R}))} = o ( \Vert \delta b \Vert _{L^{2}(\mathbb {R},\mathbb {R})} ) \).

Then, Proposition 4.4 ensures that (12) has a unique global solution \(v_{b} \in C^{0}(0,T;H^{2} (\mathbb {R},\mathbb {R}))\). Hence, we can again apply the integration-by-parts formula of [18, Lemma 7.3] by considering the Gelfand triple \(H^{1}(\mathbb {R},\mathbb {R}) \subset L^{2}(\mathbb {R},\mathbb {R}) \subset H^{-1}(\mathbb {R},\mathbb {R})\) combined with the fact that \((\delta u,v_{b}) \in \lbrace w \in L^{2}(0,T;H^{1}(\mathbb {R},\mathbb {R})) : \partial _{t}w \in L^{2}(0,T;H^{-1}(\mathbb {R},\mathbb {R})) \rbrace ^{2}\). Since \(v_{b}(\bullet ,T) = \delta u(\bullet ,0) = 0\), we have:

$$\begin{aligned} 0 \,= & {} \, \int _{0}^{T} \left\langle \partial _{t} \left( \delta u \right) ~\vert ~ v_{b} \right\rangle _{H^{-1}(\mathbb {R},\mathbb {R}), H^{1}(\mathbb {R},\mathbb {R})} \left( \bullet , t \right) \, \mathrm {d}t\\&+ \int _{0}^{T} \left\langle \partial _{t} v_{b} ~\vert ~ \delta u \right\rangle _{H^{-1}(\mathbb {R},\mathbb {R}), H^{1}(\mathbb {R},\mathbb {R})} \left( \bullet , t \right) \, \mathrm {d}t \end{aligned}$$

Proceeding as below (19), one may obtain from the previous relation:

$$\begin{aligned} 2 \int _{0}^{T} \int _{\mathbb {R}} u_{b} \, \delta u \, \mathrm {d}x \, \mathrm {d}t = \int _{0}^{T} \int _{\mathbb {R}} \dfrac{ \left( \delta u \right) ^{2}}{2} \, \dfrac{\partial v_{b}}{\partial x} \, \mathrm {d}x \, \mathrm {d}t + \int _{0}^{T} \int _{\mathbb {R}} \delta b \, \dfrac{\partial v_{b}}{\partial x} \, \mathrm {d}x \, \mathrm {d}t. \end{aligned}$$

Recalling that \(\delta b = h\) and introducing the map (3), we deduce from the last relation:

$$\begin{aligned} R_{F}(h):= & {} F(b+h) - F(b) - \int _{\mathbb {R}} h(x) \left( \int _{0}^{T} \dfrac{\partial v_{b}}{\partial x }(x,t) \, \mathrm {d}t \right) \, \mathrm {d}x \\= & {} \int _{0}^{T} \int _{\mathbb {R}} (\delta u)^{2} \left( 1 + \dfrac{1}{2} \dfrac{\partial v_{b}}{\partial x} \right) \mathrm {d}x \, \mathrm {d}t. \end{aligned}$$

Consequently, using the fact that \(v_{b} \in C^{0}(0,T;H^{2}(\mathbb {R},\mathbb {R}))\), we establish

$$\begin{aligned} \vert R_{F}(h) \vert \leqslant T \Vert \delta u \Vert _{C^{0}(0,T;L^{2}(\mathbb {R},\mathbb {R}))}^{2} \left( 1 + \dfrac{1}{2} \Vert \partial _{x}v_{b} \Vert _{C^{0}(0,T;H^{1}(\mathbb {R},\mathbb {R}))} \right) , \end{aligned}$$

and inserting the estimate (22) into the last one above, we indeed get \(R_{F}(h) = o(\Vert h \Vert _{L^{2}(\mathbb {R},\mathbb {R})}) \). Since \(h \in L^{2}(\mathbb {R},\mathbb {R}) \mapsto \int _{\mathbb {R}}h(x)[\int _{[0,T]} \partial _{x}v_{b}(x,t) \, \mathrm {d}t ] \, \mathrm {d}x \in \mathbb {R}\) is a continuous linear form, the uniqueness of the differential ensures that the functional \(F_{b} : h \in L^{2}(\mathbb {R},\mathbb {R}) \mapsto F(b+h) \in \mathbb {R}\) is Fréchet differentiable at the origin, i.e. F is Fréchet differentiable at any bottom \(b \in L^{2}(\mathbb {R},\mathbb {R})\) and the shape gradient is well defined by (13), concluding the proof of Proposition 4.5. \(\square \)
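To illustrate how the shape gradient (13), namely \(\int _{0}^{T} \partial _{x}v_{b}(\bullet ,t) \, \mathrm {d}t\), can be used to compute an optimal bottom numerically, here is a minimal sketch of one projected gradient-ascent step. The routines solve_fkdv and solve_adjoint are hypothetical placeholders for a forward solver of (4) and a backward solver of (12), and the feasibility map enforces the constraints of the admissible set (non-negativity, support in \([-K,K]\), \(L^{2}\)-norm at most M); this is not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of one projected gradient-ascent step
# for maximizing F(b), using the shape gradient of Proposition 4.5:
#     grad F(b)(x) = int_0^T d/dx v_b(x, t) dt,
# where v_b solves the adjoint problem (12). `solve_fkdv` and `solve_adjoint`
# are hypothetical placeholders returning arrays of shape (len(t), len(x)).

import numpy as np

def shape_gradient(b, x, t, solve_fkdv, solve_adjoint):
    """Approximate grad F(b) on the grid x (uniform spacings dx, dt assumed)."""
    dx, dt = x[1] - x[0], t[1] - t[0]
    u = solve_fkdv(b, x, t)              # hypothetical forward solve of (4)
    v = solve_adjoint(b, u, x, t)        # hypothetical backward solve of (12)
    dv_dx = np.gradient(v, dx, axis=1)   # spatial derivative of v_b
    return dv_dx.sum(axis=0) * dt        # time integral over [0, T]

def make_admissible(b, x, M, K):
    """Clamp to b >= 0 on [-K, K], zero outside, rescale if ||b||_{L^2} > M
    (a simple feasibility map, not the exact metric projection)."""
    dx = x[1] - x[0]
    b = np.where(np.abs(x) <= K, np.maximum(b, 0.0), 0.0)
    norm = np.sqrt(np.sum(b ** 2) * dx)
    return b if norm <= M else b * (M / norm)

def ascent_step(b, x, t, step, M, K, solve_fkdv, solve_adjoint):
    """One step b <- P(b + step * grad F(b))."""
    g = shape_gradient(b, x, t, solve_fkdv, solve_adjoint)
    return make_admissible(b + step * g, x, M, K)
```

In practice the step size would be chosen by a line search; the rescaling in the feasibility map is consistent with the fact, established in Theorem 2.1, that the maximizer saturates the norm constraint.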

Appendix B: Other Useful Properties

1.1 B.1 Hölder Continuity of the Functional

In Sect. 4.3, given any fixed \(K > 0\), we have proved the (sequential) continuity of the nonlinear map \(N:b \in L^{2}(]-K,K[,\mathbb {R}) \mapsto u_{b} \in C^{0}(0,T;L^{2}(\mathbb {R},\mathbb {R}))\) for the \(L^{2}\)-weak topology. Here, we first establish that \(N: L^{2}(\mathbb {R},\mathbb {R}) \rightarrow C^{0}(0,T;H^{1}(\mathbb {R},\mathbb {R}))\) is continuous for the \(L^{2}\)-strong topology. Then, by restricting N and F to any ball of \(L^{2}(\mathbb {R},\mathbb {R})\), we obtain their Hölder continuity.

Proposition B.1

Let \(T > 0\) and \(b \in L^{2}(\mathbb {R},\mathbb {R})\). Consider any sequence \((b_{n})_{n \in \mathbb {N}}\) of square-integrable maps strongly converging to b in \(L^{2}(\mathbb {R},\mathbb {R})\). Then, the sequence \((u_{b_{n}})_{n \in \mathbb {N}}\) of their associated solutions given in Proposition 4.1 strongly converges to \(u_{b}\) in \(C^{0}(0,T;H^{1}(\mathbb {R},\mathbb {R}))\), where \(u_{b}\) is the unique solution of Proposition 4.1 associated with b.

Proof

Let \(T > 0\) and \(b \in L^{2}(\mathbb {R},\mathbb {R})\). From Proposition  4.1, we can consider the unique solution \(u_{b} \in C^{0}(0,T;H^{2}(\mathbb {R},\mathbb {R}))\) satisfying (4). First, we treat the smooth case. Let \((b_{n})_{n \in \mathbb {N}}\) be a sequence of maps in \(H^{\infty }(\mathbb {R},\mathbb {R})\) that is strongly converging to b for the \(L^{2}\)-norm. In particular, this sequence is uniformly bounded in \(L^{2}(\mathbb {R},\mathbb {R})\). Applying Proposition A.1, there exists a sequence \((u_{b_{n}})_{n\in \mathbb {N}}\) of associated smooth maps satisfying (4) and the a priori estimates, from which we deduce that \((u_{b_{n}})_{n\in \mathbb {N}}\) is uniformly bounded in \(C^{0}(0,T;H^{2}(\mathbb {R},\mathbb {R}))\) by a constant denoted \(C > 0\). Then, applying Proposition 4.1 with b and \(\mathfrak {b} = b_{n}\) for any \(n \in \mathbb {N}\), we obtain that \((u_{b_{n}})_{n \in \mathbb {N}}\) strongly converges to \(u_{b}\) in \(C^{0}(0,T;L^{2}(\mathbb {R},\mathbb {R}))\). We now prove that in fact the convergence occurs in \(C^{0}(0,T;H^{1}(\mathbb {R},\mathbb {R}))\). Let \((m,n) \in \mathbb {N} \times \mathbb {N}\). We set \(\delta u : = u_{b_{n}} - u_{b_{m}}\) and \(\delta b : = b_{n} - b_{m}\) then establish that \( (u_{b_{k}})_{k \in \mathbb {N}} \) is a uniform Cauchy sequence by relating \(\partial _{x}(\delta u)\) to \(\delta b\) and \(\delta u\). Since both \((b_{n},u_{b_{n}})\) and \((b_{m},u_{b_{m}})\) satisfy (4), we get that \((\delta u, \delta b)\) is a smooth solution to (19) (where we have replaced \(u_{b}\) by \(u_{b_{m}}\)). We use the conservative structure of (4) and (19) by writing \(\partial _{t} ( u_{b_{m}}) = -\partial _{x}I\) and \(\partial _{t}(\delta u) = -\partial _{x}J\), where we set \(I : = \partial _{xx}u_{b_{m}} + \frac{1}{2}(u_{b_{m}})^{2} + b_{m} \) and \(J : = \partial _{xx}(\delta u) + \frac{1}{2}(\delta u)^{2} + \delta b + u_{b_{m}} \, \delta u \). We have:

$$\begin{aligned}&\dfrac{\mathrm {d}}{\mathrm {d}t} \int _{\mathbb {R}} \left[ \dfrac{\left( \delta u \right) ^{3}}{6} + u_{b_{m}} \dfrac{\left( \delta u \right) ^{2}}{2} - \dfrac{1}{2} \left( \dfrac{\partial \left( \delta u \right) }{\partial x} \right) ^{2} + \delta b \, \delta u \right] \\&\quad = \, \underbrace{2 \int _{\mathbb {R}} \partial _{t} \left( \delta u \right) \, J}_{ = \, - \int \partial _{x}( J^{2}) \, = \, 0 } + \underbrace{\frac{1}{2} \int _{\mathbb {R}} \partial _{t} (u_{b_{m}}) \left( \delta u \right) ^{2}}_{=\, - \int I \, \delta u \, \partial _{x} (\delta u)} \\&\quad = - \int _{\mathbb {R}} \delta u \, \dfrac{\partial \left( \delta u \right) }{\partial x} \left( \dfrac{\partial ^{2} u_{b_{m}}}{\partial x^{2}} + \dfrac{u_{b_{m}}^{2}}{2} + b_{m} \right) \, \mathrm {d} x. \end{aligned}$$

Proceeding as below (19) (but here the functions are regular), we obtain for any \(t \in [0,T]\):

$$\begin{aligned}&\Vert \partial _{x} \left( u_{b_{n}} - u_{b_{m}} \right) \left( \bullet , t \right) \Vert _{L^{2}(\mathbb {R},\mathbb {R})}^{2}\nonumber \\&\quad \leqslant \dfrac{5C}{3} \Vert \left( u_{b_{n}} - u_{b_{m}} \right) (\bullet ,t) \Vert _{L^{2}(\mathbb {R},\mathbb {R})}^{2} + 4C \Vert b_{n} - b_{m} \Vert _{L^{2}(\mathbb {R},\mathbb {R})} \nonumber \\&\quad +\, 2TC \left( C + \dfrac{C^{2}}{2} + \sup _{k \in \mathbb {N}}\Vert b_{k} \Vert _{L^{2}(\mathbb {R},\mathbb {R})} \right) \Vert \left( u_{b_{n}} - u_{b_{m}} \right) (\bullet , t) \Vert _{L^{2}(\mathbb {R},\mathbb {R})}. \end{aligned}$$
(23)

We deduce from (23) that \(t \in [0,T] \mapsto \partial _{x}( u_{b_{n}} ) (\bullet ,t) \in L^{2}(\mathbb {R},\mathbb {R})\) is a uniform Cauchy sequence. It is thus strongly converging to a certain map in \(C^{0}(0,T;L^{2}(\mathbb {R},\mathbb {R}))\), which has to be \(\partial _{x}u_{b}\) by considering the convergence in the sense of distributions. Finally, we treat the non-regular case by approximations. Let \(\varepsilon > 0\) and \((b_{n})_{n \in \mathbb {N}}\) be any sequence of maps in \(L^{2}(\mathbb {R},\mathbb {R})\) that is strongly converging to b. By density, for any \(n \in \mathbb {N}\), there exists a sequence \((b^{k}_{n})_{k \in \mathbb {N}}\) of smooth maps with compact support that is strongly converging to \(b_{n}\) in \(L^{2}(\mathbb {R},\mathbb {R})\). From the foregoing, we deduce that there exists \(k_{n} \in \mathbb {N}\) such that we have \(\Vert u_{b_{n}^{k_{n}}}- u_{b_{n}} \Vert _{C^{0}(0,T;H^{1}(\mathbb {R},\mathbb {R}))} < \varepsilon \). Moreover, one can check that \((b_{n}^{k_{n}})_{n \in \mathbb {N}}\) strongly converges to b in \(L^{2}(\mathbb {R},\mathbb {R})\). Again, from the foregoing, there exists \(N \in \mathbb {N}\) such that for any integer \(n \geqslant N\), we have \(\Vert u_{b_{n}^{k_{n}}} - u_{b} \Vert _{C^{0}(0,T;H^{1}(\mathbb {R},\mathbb {R}))} < \varepsilon \). We deduce for any \(n \geqslant N\):

$$\begin{aligned} \Vert u_{b_{n}}- u_{b} \Vert _{C^{0}(0,T;H^{1}(\mathbb {R},\mathbb {R}))}\leqslant & {} \Vert u_{b_{n}} - u_{b_{n}^{k_{n}}} \Vert _{C^{0}(0,T;H^{1}(\mathbb {R},\mathbb {R}))} \\&+ \Vert u_{b_{n}^{k_{n}}} - u_{b} \Vert _{C^{0}(0,T;H^{1}(\mathbb {R},\mathbb {R}))} < 2 \varepsilon . \end{aligned}$$

Hence, \((u_{b_{n}})_{n \in \mathbb {N}}\) converges to \(u_{b}\) in \(C^{0}(0,T;H^{1}(\mathbb {R},\mathbb {R}))\), which concludes the proof of Proposition B.1. \(\square \)

Corollary B.1

Let \(M > 0\), \(T > 0\) and set \(B_{M} : = \lbrace b \in L^{2}(\mathbb {R},\mathbb {R}) : \Vert b \Vert _{L^{2}(\mathbb {R},\mathbb {R})} \leqslant M \rbrace \). Then, there exists \( C(T,M) > 0\) depending only on T and M such that for any \((b, \mathfrak {b}) \in B_{M} \times B_{M}\):

$$\begin{aligned} \Vert u_{b} - u_{\mathfrak {b}} \Vert _{C^{0}(0,T;L^{2}(\mathbb {R},\mathbb {R}))} \leqslant C(T,M) \, \Vert b - \mathfrak {b} \Vert _{L^{2}(\mathbb {R},\mathbb {R})}^{1/2} \quad \text {and} \quad \Vert u_{b} - u_{\mathfrak {b}} \Vert _{C^{0}(0,T;H^{1}(\mathbb {R},\mathbb {R}))} \leqslant C(T,M) \, \Vert b - \mathfrak {b} \Vert _{L^{2}(\mathbb {R},\mathbb {R})}^{1/4} . \end{aligned}$$

In particular, the energy functional \(F: \mathcal {B} \rightarrow \mathbb {R}\) given in (3) is \(\frac{1}{2}\)-Hölder continuous.

Proof

Let \(M > 0\), \(T > 0\), and set \(B_{M} : = \lbrace b \in L^{2}(\mathbb {R},\mathbb {R}) : \Vert b \Vert _{L^{2}(\mathbb {R},\mathbb {R})} \leqslant M \rbrace \). First, we combine the a priori estimate of Proposition 4.2 with the fact that \((b, \mathfrak {b}) \in B_{M} \times B_{M}\). We deduce that the constant \(C > 0\) appearing in the quantitative estimate of Proposition 4.1 can be bounded by one that only depends on T and M. Hence, the nonlinear map \(N: b \in B_{M} \mapsto u_{b} \in C^{0}(0,T;L^{2}(\mathbb {R},\mathbb {R}))\) is \(\frac{1}{2}\)-Hölder continuous. Then, the continuity of the embedding \(C^{0}(0,T;L^{2}(\mathbb {R},\mathbb {R})) \subset L^{2}(0,T;L^{2}(\mathbb {R},\mathbb {R})) \), applied to \(u_{b}\) and \(u_{\mathfrak {b}}\), also yields the same result for the map \(F : B_{M} \rightarrow \mathbb {R}\). Finally, there exist two sequences \((b_{n})_{n \in \mathbb {N}}\) and \((\mathfrak {b}_{n})_{n \in \mathbb {N}}\) of smooth maps with compact support respectively converging to b and \(\mathfrak {b}\) strongly in \(L^{2}(\mathbb {R},\mathbb {R})\). Proposition B.1 ensures that the associated smooth solutions \((u_{b_{n}})_{n \in \mathbb {N}}\) and \((u_{\mathfrak {b}_{n}})_{n \in \mathbb {N}}\) respectively converge to \(u_{b}\) and \(u_{\mathfrak {b}}\) in \(C^{0}(0,T;H^{1}(\mathbb {R},\mathbb {R}))\). We can now proceed as in the proof of Proposition B.1 so (23) holds with \(b_{n}\) and \(b_{m} = \mathfrak {b}_{n} \). By letting \(n \rightarrow + \infty \) in this inequality, we deduce from the foregoing that \(N : b \in B_{M} \mapsto u_{b} \in C^{0}(0,T;H^{1}(\mathbb {R},\mathbb {R})) \) is \(\frac{1}{4}\)-Hölder continuous, concluding the proof of Corollary B.1. \(\square \)

1.2 B.2 Stability Analysis of the Numerical Scheme

Proposition B.2

The discretization (15) takes the form \(L_{\varDelta x, \varDelta t}u = 0\) and approximates Eq. (6) written as \(\partial _{t}u + Lu = 0\). If we assume \(\varDelta t = O(\varDelta x )\), then (15) is consistent and first-order accurate: \( \forall u \in C^{4} ( \mathbb {R} \times [0,+\infty [,\mathbb {R} ), \, \partial _{t}u + Lu = 0 \Rightarrow \partial _{t}u + Lu = L_{\varDelta x, \varDelta t}u + O ( \varDelta x )\).

Proof

We introduce the shift operators \(s^{\pm }_{x}[(\bullet )(x,t)]: = (\bullet )(x \pm \varDelta x , t)\) in order to define \(\delta ^{1}_{x} := \frac{1}{2 \varDelta x}(s^{+}_{x} - s^{-}_{x})\), \(\delta ^{2}_{x} := \frac{1}{\varDelta x^2}(s^{+}_{x} - 2 + s^{-}_{x})\) and \(\delta ^{3}_{x} := \delta ^{1}_{x} \delta ^{2}_{x}\). Let \(u \in C^{4}(\mathbb {R} \times [0,+\infty [,\mathbb {R})\) be such that \(\partial _{t}u + Lu = 0\). We get \(s^{\pm }_{x}u = u \pm \varDelta x \, \partial _{x}u + \frac{\varDelta x^{2}}{2} \partial _{xx}u \pm \frac{\varDelta x ^{3}}{6} \partial _{xxx}u + O(\varDelta x^{4})\) from a Taylor expansion. We deduce \(\partial _{x} u = \delta ^{1}_{x}u + O(\varDelta x^{2})\) and \(\partial _{xx} u = \delta ^{2}_{x}u + O(\varDelta x^{2})\). These estimates are then combined to obtain \(\partial _{xxx} u = \delta ^{3}_{x}u + O(\varDelta x)\). Therefore, we have an approximation of the linear terms of L:

$$\begin{aligned} Lu = \dfrac{3 c_{0} }{2 h_{0}} u \, \partial _{x}u + \dfrac{c_{0}h_{0}^{2}}{6} \delta ^{3}_{x}u + \dfrac{c_{0}}{2} \delta _{x}^{1} b + O(\varDelta x). \end{aligned}$$
(24)

Introducing the time operators \(s^{+}_{t}[(\bullet )(x,t)]: = (\bullet )(x,t+\varDelta t)\) and \(\delta _{t}^{+} : = \frac{1}{\varDelta t } (s^{+}_{t} - 1) \), similar arguments yield \(\partial _{t} u = \delta _{t}^{+} u + O(\varDelta t)\) and \(\partial _{t} u = s_{t}^{+} \partial _{t} u + O(\varDelta t) = -s_{t}^{+} Lu + O(\varDelta t) \). We get: \(\partial _{t} u + Lu = \delta _{t}^{+} u + Lu + O(\varDelta t) = \delta _{t}^{+} u + \frac{1}{2}(Lu - \partial _{t}u) + O(\varDelta t) = \delta _{t}^{+} u + \frac{1}{2}(1+s_{t}^{+})Lu + O(\varDelta t) \). Then, we assume \(\varDelta t = O(\varDelta x)\) and from \( \frac{1}{2}(s_{t}^{+} +1) u^{2}= u \,s^{+}_{t}u + O(\varDelta t^{2}) \), we can treat the nonlinear term of Lu: \((1+s_{t}^{+})(u \, \partial _{x}u) = \delta ^{1}_{x}[u \,s^{+}_{t}u + O(\varDelta t^{2})] + O(\varDelta x^{2}) = \delta ^{1}_{x}(u \,s^{+}_{t}u) + O(\varDelta x) \). Hence, we deduce the expected estimate \(\partial _{t}u + Lu = L_{\varDelta x, \varDelta t}u + O(\varDelta x)\) with:

$$\begin{aligned} L_{\varDelta x, \varDelta t}u ~:=~ \delta ^{+}_{t} u + \dfrac{3 c_{0}}{4h_{0}} \delta ^{1}_{x} \left( u \,s^{+}_{t} u \right) + \dfrac{c_{0}h_{0}^{2}}{12} \left( 1+s^{+}_{t} \right) \left( \delta ^{3}_{x} u \right) + \dfrac{c_{0}}{2} \delta ^{1}_{x} b. \end{aligned}$$

\(\square \)
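For concreteness, here is a minimal sketch (under the illustrative assumptions of a uniform grid with periodic boundary conditions and a time-independent bottom b; this is not the authors' code) of how one time step of the scheme \(L_{\varDelta x, \varDelta t}u = 0\) can be assembled. Since the term \(\delta ^{1}_{x}(u \, s^{+}_{t}u)\) is linear in the unknown \(u^{n+1} := s^{+}_{t}u\), each step reduces to a single sparse linear solve.

```python
# Minimal sketch of one step of the linearly implicit scheme L_{dx,dt} u = 0,
# written with the operators delta^1_x, delta^3_x of Proposition B.2.
# Illustrative assumptions: uniform grid, periodic boundary conditions,
# time-independent bottom b. Not the authors' implementation.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def periodic_operators(n, dx):
    """Periodic centred differences: d1 ~ delta^1_x, d3 = d1 @ d2 ~ delta^3_x."""
    o = np.ones(n - 1)
    d1 = sp.diags([o, -o], [1, -1], shape=(n, n), format="lil")
    d1[0, n - 1], d1[n - 1, 0] = -1.0, 1.0          # wrap-around entries
    d1 = d1.tocsr() / (2.0 * dx)
    d2 = sp.diags([o, -2.0 * np.ones(n), o], [1, 0, -1], shape=(n, n), format="lil")
    d2[0, n - 1], d2[n - 1, 0] = 1.0, 1.0
    d2 = d2.tocsr() / dx ** 2
    return d1, d1 @ d2

def step(u, b, dt, dx, c0, h0):
    """Advance u^n -> u^{n+1}: the term d1(u^n * u^{n+1}) is linear in u^{n+1}."""
    n = u.size
    d1, d3 = periodic_operators(n, dx)
    alpha, mu = 3.0 * c0 / (4.0 * h0), c0 * h0 ** 2 / 12.0
    lhs = sp.identity(n) / dt + alpha * (d1 @ sp.diags(u)) + mu * d3
    rhs = u / dt - mu * (d3 @ u) - 0.5 * c0 * (d1 @ b)
    return spla.spsolve(sp.csc_matrix(lhs), rhs)
```

The implicit treatment of the dispersive term is what leads to the unconditional stability established in Proposition B.3 below.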

Proposition B.3

Consider the discretization (15) of Eq. (6) with forcing term \(b = 0\). Let \(\beta = \frac{3c_{0}}{2 h_{0}} \Vert \zeta \Vert _{C^{0}([-L,L] \times [0,T],\mathbb {R})}\), \(\mu = \frac{c_{0}}{6} h_{0}^{2}\) and \(s = \frac{\varDelta t}{\varDelta x}\). Then, Von Neumann’s stability analysis provides an amplification factor \(g: [- \pi , \pi ] \rightarrow \mathbb {C} \) of the following form:

$$\begin{aligned} g(\xi ): = \dfrac{1 - i A\left( \xi \right) }{1 + i A \left( \xi \right) } \quad \mathrm {where}\quad A \left( \xi \right) : = s \left( \sin \xi \right) \left[ \dfrac{\beta }{2} + \dfrac{\mu }{\varDelta x^{2}} \left( \cos \xi - 1 \right) \right] . \end{aligned}$$

In particular, \(\vert g \vert = 1\), ensuring the non-dissipative feature of the method: the scheme (15) is unconditionally stable. Moreover, the numerical dispersion \(\Psi = \mathrm {arg}(g) = - \mathrm {arctan}(\frac{2A}{1 - A^{2}})\) is compared to the analytical one whose expression is given by \(\Psi _{\mathrm{ref}}(\xi ) := - s \beta \xi + \frac{s \mu }{\varDelta x^{2}} \xi ^{3}\). We obtain \(\Psi (\xi ) = \Psi _{\mathrm{ref}}(\xi ) + E_{\Psi }(\xi ) + O(\xi ^{7})\) where:

$$\begin{aligned} E_{\Psi }(\xi ) = \dfrac{s \beta }{6} \left( 1 + \dfrac{s^{2} \beta ^{2}}{2} \right) \xi ^{3} - \left[ \dfrac{s^{5} \beta ^{5}}{80} + \dfrac{s^{3} \beta ^{3}}{24} + \dfrac{s \beta }{120} + \dfrac{ s^{3} \beta ^{2} \mu }{4 \varDelta x^{2}} + \dfrac{s \mu }{4 \varDelta x^{2}} \right] \xi ^{5}. \end{aligned}$$

Proof

We refer to [23, (38)–(51)] for details on the proof of this result. We stress, however, a disagreement between our expression of \(E_{\Psi }\) and the one provided in [23, (50)]. This seems to result from a mistake made in [23, (48)], when expanding \(\Psi \) by using Taylor series. More precisely, it is wrongly stated that \(\mathrm {arctan}(\frac{-2A}{1 - A^{2}}) = - 2A[1 - \frac{1}{3}A^{2} - 3 A^{4}] + O(A^7)\), as the correct expression is given by \(\mathrm {arctan}(\frac{-2A}{1 - A^{2}}) = - 2A[1 - \frac{1}{3}A^{2} +\frac{1}{5} A^{4}] + O(A^7)\). \(\square \)
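As a quick numerical sanity check of these expressions (a sketch with illustrative, made-up parameter values, not taken from the paper), one can verify that \(\vert g(\xi ) \vert = 1\) on \([-\pi ,\pi ]\) and that \(\Psi - \Psi _{\mathrm{ref}}\) is small for small \(\xi \), in agreement with the expansion \(\Psi = \Psi _{\mathrm{ref}} + E_{\Psi } + O(\xi ^{7})\):

```python
# Numerical check of Proposition B.3 with illustrative (made-up) parameters.
import numpy as np

c0, h0, zeta_max = 1.0, 1.0, 0.1          # placeholders for c_0, h_0, ||zeta||
dx, dt = 0.1, 0.05
beta = 3.0 * c0 / (2.0 * h0) * zeta_max
mu = c0 * h0 ** 2 / 6.0
s = dt / dx

def amplification(xi):
    A = s * np.sin(xi) * (beta / 2.0 + mu / dx ** 2 * (np.cos(xi) - 1.0))
    return (1.0 - 1j * A) / (1.0 + 1j * A)

xi = np.linspace(-np.pi, np.pi, 4001)
print(np.max(np.abs(np.abs(amplification(xi)) - 1.0)))   # ~ machine precision: |g| = 1

xi_small = np.linspace(-0.2, 0.2, 401)                    # dispersion for small xi
psi = np.angle(amplification(xi_small))
psi_ref = -s * beta * xi_small + s * mu / dx ** 2 * xi_small ** 3
print(np.max(np.abs(psi - psi_ref)))                      # small: equals E_Psi + O(xi^7)
```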

1.3 B.3 The Necessity of a \(L^{2}\)-Constraint

Lemma B.1

Let b(x) be a forcing function with enough regularity, as given in Lemma 3.1, and u(x, t) be the unique smooth solution of the initial-value problem (4). For any \(\theta \in \mathbb {R}\), define the maps \( u_{\theta }: (x,t) \mapsto \theta ^{2} u(\theta x, \theta ^{3}t)\) and \( b_{\theta }: x\mapsto \theta ^{4} b(\theta x) \). Then, \(u_{\theta }\) is precisely the solution of (4) with forcing function \(b_{\theta }\).

Proposition B.4

Let \(K > 0\) and \(T > 0\). Then, the problem (11) has no global maximizer.

Proof

Assume, by contradiction, that there exists a maximizer b to (11). Then, from Lemma 3.1, we can consider its associated smooth solution \(u_{b}\). Introducing the bottoms \((b_{\theta })_{\theta > 1}\) of Lemma B.1, one can check they are admissible for problem (11). Moreover, we deduce from Lemma B.1 that \(F(b_{\theta }) = \int _{0}^{\theta ^{3}T} \int _{\mathbb {R}} u_{b}^{2}(x,t) \, \mathrm {d}x \, \mathrm {d}t\). Using the optimality of b, we obtain \(F(b_{\theta }) = F(b)\) for any \(\theta > 1\) so \(u_{b} = 0\) on \([T,\theta ^{3}T]\). From (6), we get \(\partial _{x}b = 0\) thus \(u_{b} = 0\) also on [0, T]. Thus, \(F(b) = 0\), which contradicts the optimality of b. \(\square \)
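The identity used above for \(F(b_{\theta })\) follows directly from Lemma B.1 and the change of variables \((y,s) := (\theta x, \theta ^{3} t)\), for which \(\mathrm {d}x \, \mathrm {d}t = \theta ^{-4} \, \mathrm {d}y \, \mathrm {d}s\):

$$\begin{aligned} F(b_{\theta }) = \int _{0}^{T} \int _{\mathbb {R}} \theta ^{4} \, u_{b}^{2} \left( \theta x, \theta ^{3} t \right) \mathrm {d}x \, \mathrm {d}t = \int _{0}^{\theta ^{3} T} \int _{\mathbb {R}} u_{b}^{2} \left( y, s \right) \mathrm {d}y \, \mathrm {d}s . \end{aligned}$$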

Cite this article

Dalphin, J., Barros, R. Optimal Shape of an Underwater Moving Bottom Generating Surface Waves Ruled by a Forced Korteweg-de Vries Equation. J Optim Theory Appl 180, 574–607 (2019). https://doi.org/10.1007/s10957-018-1400-8
