Convex Environmental Contours for Non-Stationary Processes

Environmental contours are tools frequently used in the early design of marine structures. They provide a description of critical design conditions and serve as a means for simplifying expensive long-term response calculations. Here, we consider convex contours based on the assumption of convex failure sets. We provide a rigorous foundation for the existence of such contours when the underlying environmental factors are modelled by a general, possibly non-stationary, process. This constitutes a generalisation of existing theory and is done to properly account for empirically observed increases in extreme sea-states. Two definitions are proposed, based respectively on averages or quantiles of exceedence times, along with minimal conditions on the environmental processes to guarantee existence. In order to illustrate these methods we give two examples, including an empirical study containing a method for constructing contours based on the presented theory.


Introduction
Environmental contours are tools frequently used in the early design of marine structures. They provide a description of critical environmental conditions that may serve as a basis for structural design. Furthermore, they may be utilised to reduce the number of computationally expensive response calculations needed for the analysis of reliability. As a result, environmental contours are applied to analyse a wide variety of marine structures [1,4,5,27], and several methods for the construction of contours are mentioned in the recommended practices document on environmental conditions and environmental loads by DNV (Det Norske Veritas) [3].
A large variety of methods for constructing environmental contours exist; for a summary and comparison of different techniques we refer to [7] and [22]. A common thread through many of these methods is the modelling of environmental conditions, V, as a piecewise constant process with independent and identically distributed values in each interval. The length of this interval, ∆t, varies depending on the application, but ∆t = 3 hours is a common choice. The values of V usually represent some summary statistics of the wave elevation, or other relevant environmental factors, over the period. V then represents the long-term variations of the environmental conditions. The short-term variations, i.e. the variation of instantaneous conditions within the period, are usually ignored in the construction of these contours.
These contours are also constructed to satisfy certain exceedence probabilities. These properties can usually be formulated by requiring that the event of V hitting a failure set F, not intersecting with the contour ∂B, has at most a given probability p_e of occurring in each interval. Due to the independence between different intervals, this assumption also implies restrictions on the time to failure τ_F = inf{t : V_t ∈ F}. Specifically, we have that the return period E[τ_F] is bounded from below by ∆t/p_e. Note that these exceedence properties are effectively defined under the assumption that failure depends only on V, thereby ignoring the short-term variation of the response.
Arguably, the most popular construction of environmental contours is the inverse first order reliability method (IFORM) developed in [29,8]. This method first establishes a sea-state distribution, which induces a Rosenblatt transformation [21] of the density. All failure sets are then implicitly assumed to be convex in the transformed space, implying that the contour in the original space can be constructed by applying the inverse Rosenblatt transformation to a sphere.
As an alternative, a definition of convex environmental contours is developed in [12,13] by assuming the failure set to be convex in the original space. This approach has several advantages, such as the easy inclusion of omission factors and a more amenable interpretation of the convexity assumption compared to IFORM. Several improvements and possible modifications to this method have been made in the literature. In [2] the concept of buffered contours is introduced, and [11] considers omission factors and convexity. Several different ways of constructing these contours are also discussed in e.g. [6,11,25,16].
Once a contour with the desired exceedence properties has been constructed, it can be applied to reliability analysis in several ways. Usually, response simulations are carried out for conditions along the contour over a period of length ∆t. The point along the contour providing the highest extreme short-term response is then chosen as the design point. Often, the design point is chosen to correspond with the highest median of the response distribution. In e.g. [5,23], an importance sampling procedure, centred around the design point, is discussed in the context of extreme long-term response computation. Additionally, quantiles of the response distribution at the design point are considered in e.g. [5,8] as estimates of the characteristic response. These methods allow the estimation of response with only a limited number of computationally expensive response calculations.
However, these methods still rely on a stationary model.This causes issues when taken together with the evidence of increasingly extreme sea-states, as detailed in [14,27,28], which would imply a significant non-stationarity in significant wave heights.
Several articles, such as [27] and [12], adjust the sea-state distributions to correct for this increase. However, the models are still stationary, which keeps them from fully representing the changing behaviour of the environmental processes involved. A closer look at the differences between such strategies and the methods presented in this paper will be given in Section 7. It is also worth mentioning the works of e.g. [10,15,16,17,26], which consider stationary processes with a varying degree of autodependence. These articles allow for more general behaviours of the underlying environmental processes, but do not address the issue of long-term trends.
The goal of this article is to present a mathematically rigorous framework for environmental contours with a broad class of possible models for the underlying environmental factors, including non-stationary ones. While discrete models are the primary focus in terms of applications, we will also include the possibility of continuous-time models. In this regard, we will give minimal conditions for the relevant functions to be well defined, in addition to conditions for the existence of the contours themselves.
The results presented in this article are a generalisation of the theory discussed in [11,12,13], which is based on the assumption of failure sets being convex in the original parameter space. As such, we will, in Section 2, give a brief overview of the main results and definitions from these papers, before generalising the setting in Section 3. There, we propose two different ways of defining convex environmental contours, based on either averages or quantiles of exceedence times. A brief discussion on how these definitions are connected to response analysis is given in Section 4. This is followed, in Section 5, by a mathematically rigorous treatment of the minimal conditions required of the model for V in order to ensure the existence of contours. In Section 6 and Section 7 we present two examples of applications of the theory, which highlight some of the differences between classical approaches and the more flexible methods allowed by the theory presented in this article. As a part of the final example we also present a method for computing these contours in practice.

Convex Contours
A convex environmental contour is the boundary of a compact convex set B ⊂ R^d, denoted ∂B, defined with respect to a d-dimensional environmental process V. For example, this process is often taken to be the pair V = (P, H) for d = 2, where P is the zero-upcrossing wave period and H the significant wave height at a particular location of interest.
We will in this section follow the construction described in e.g. [13] and [11] and assume that the distribution of V_t is constant in time and absolutely continuous with respect to the Lebesgue measure on R^d. Furthermore, the process is assumed to be path-wise constant over periods of a set length ∆t, and, most importantly, we make the assumption that the values of V are independent between these different periods. One could equivalently consider V_t = W_⌊t/∆t⌋, where ⌊•⌋ denotes the floor function and {W_n}_{n=0}^∞ is a sequence of independent and identically distributed (i.i.d.) random variables with a distribution equal to that of V. As such, we will refer to this type of model as an i.i.d. model throughout this article.
For every possible structural design we consider a limit-state function g, also referred to as the performance function. This is assumed to depend only on V, thereby ignoring the variance of the structural response conditional on V. The function g is defined such that the region F, where g(V) ≤ 0, represents conditions the structure cannot safely handle. We therefore refer to F as the failure set. Environmental contours then aim to apply to any design satisfying F ∩ B ⊆ ∂B. As such, we consider an unknown performance function, and therefore an unknown failure set F. In order to handle such an unknown F we further assume that F belongs to E(B), the class of all convex sets such that F ∩ B ⊆ ∂B. Based on this we may define the exceedence probability by

P_e(B, E) = sup_{F ∈ E(B)} P(V ∈ F) (2.1)

and impose the constraint

P_e(B, E) ≤ p_e, (2.2)

where p_e is some given target exceedence probability.
When dealing with convexity we will need the concept of hyperplanes. We will denote by ⟨•, •⟩ the canonical inner product on R^d and by ∥•∥ the Euclidean norm, and with this we also define the unit sphere in R^d by S^{d−1} = {v ∈ R^d : ∥v∥ = 1}. The hyperplane indexed by the threshold c ∈ R and the unit vector u ∈ S^{d−1} is then defined as

Π(u, c) = {v ∈ R^d : ⟨v, u⟩ = c}.

We further define the half-spaces

Π^−(u, c) = {v ∈ R^d : ⟨v, u⟩ ≤ c} and Π^+(u, c) = {v ∈ R^d : ⟨v, u⟩ ≥ c},

which allow us to present an important well-known result about separating hyperplanes.

Proposition 2.1. Let B ⊂ R^d be compact and convex and let F ∈ E(B). Then there exists some u ∈ S^{d−1} such that F ⊆ Π^+(u, B(B, u)) and B ⊆ Π^−(u, B(B, u)).

For a proof of this result, as well as (2.6), we refer to [20]. This result implies that we can separate a convex set B and any F ∈ E(B). In particular, we can reconstruct any compact and convex B as the intersection of all these half-spaces. If we define the support function

B(B, u) = sup_{v ∈ B} ⟨v, u⟩,

we get that B can be represented as

B = ∩_{u ∈ S^{d−1}} Π^−(u, B(B, u)). (2.6)

It is further shown in [13], for d = 2, that these hyperplanes also serve as maximal elements of E(B) for computing the exceedence probability. Specifically, we have

P_e(B, E) = sup_{u ∈ S^{d−1}} P(V ∈ Π^+(u, B(B, u))).

If we let C_e(u) denote the threshold satisfying P(V ∈ Π^+(u, C_e(u))) = p_e, (2.8), then

B = ∩_{u ∈ S^{d−1}} Π^−(u, C_e(u)) (2.9)

gives an explicit construction of this optimal proper contour. In the following sections we will need the concept of the first hitting time of a set F ⊆ R^d, defined by

τ_F = inf{t : V_t ∈ F}. (2.10)

For our i.i.d. model we can easily associate our target exceedence probability with a target return period. We can note, by our assumption of independence between the W_n's, that the exceedence time τ_F is geometrically distributed. Furthermore, for any valid ∂B and F ∈ E(B), the mean of τ_F is at least ∆t/p_e. This implies that when we are ensuring that P_e(B, E) ≤ p_e we are equivalently ensuring that E[τ_F] ≥ t_r for the target return period t_r = ∆t/p_e. Similar arguments would allow us to compare P_e(B, E) with quantiles of the distribution of τ_F. Both the mean and quantiles of τ_F are more amenable to generalisation than exceedence probabilities and will be used in the upcoming sections.
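The relation between the per-period exceedence probability and the return period under the i.i.d. model can be checked by direct simulation, since the number of periods until the first hit of F is geometrically distributed. A minimal sketch, where the values of p_e and ∆t are illustrative and the failure indicator is drawn directly rather than through a full model of V:

```python
import random

def mean_exceedence_time(p_e, dt, n_sim=50_000, rng=None):
    """Estimate E[tau_F] for an i.i.d. model in which each period of
    length dt hits the failure set F independently with probability p_e.
    The period count to the first hit is geometric with mean 1/p_e,
    so the theoretical return period is dt / p_e."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(n_sim):
        periods = 1
        while rng.random() >= p_e:
            periods += 1
        total += periods * dt
    return total / n_sim
```

For example, with p_e = 0.05 and ∆t = 3 hours the estimate settles near ∆t/p_e = 60 hours.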

Environmental Contours for General Processes
We now aim to extend the concepts introduced in the previous section to a more general context. We no longer assume V to be stationary and instead consider it to be a progressively measurable process taking values in R^d. We also need the process to satisfy certain regularity conditions in order to ensure that (5.3), which will be introduced later, is measurable. For this purpose one may assume, for instance, càdlàg paths. Usually, discrete models are used in order to facilitate response analysis. Fortunately, a discrete model for V is sufficient to ensure the measurability of (5.3).
We will also still consider an unknown failure set F ∈ E(B), where E(B) is the collection of all convex sets F such that F ∩ B ⊆ ∂B. This will similarly allow the use of half-spaces to control the exceedence time.
Since we no longer assume a stationary distribution, we will introduce two ways of replacing (2.1). A common substitute for the exceedence probability, used explicitly in works such as [10,15,16,17,26], is the average failure time, commonly referred to as the return period. As such, we start by defining the return period of B by

T_r(B) = inf_{F ∈ E(B)} E[τ_F]. (3.1)

Remark 3.1. Since V is now possibly non-stationary, the concept of a long-term average return period is no longer meaningful. However, for the sake of consistency, we shall still refer to these average exceedence times as return periods.
In some cases there may be yearly trends present which, if persisting indefinitely, may cause the process to drift over time. In such cases there could be a positive probability that the process never exits certain sets. This contrasts with the stationary case, where every set of positive measure (w.r.t. the law of V) is eventually hit. While the exceedence might have a positive probability of never occurring, thereby making the return period infinite, there could still be a high chance of it occurring in finite time. In order to account for such behaviour we want a more flexible version of (3.1). As such, we define the survival probability of B by

Q_s(B) = inf_{F ∈ E(B)} P(τ_F > t_s) (3.2)

for a given survival time t_s > 0.
Like with (2.2), we will construct our contour based on these two definitions. In our case we will consider two separate possible restrictions. The first one is based on return periods, with

T_r(B) ≥ t_r (3.3)

for some target return period t_r > 0. If this holds, then for any F ∈ E(B) it will take on average at least a time of t_r to enter F. Note that under the constraints of an i.i.d. model, (3.3) is equivalent to (2.2) for t_r = ∆t/p_e. The alternative restriction, corresponding to the survival probability, is defined as

Q_s(B) ≥ q_s (3.4)

for a given minimal survival probability 1 > q_s > 0. If this condition holds, then for any F ∈ E(B) it is guaranteed that, with a probability of at least q_s, the process V will take at least t_s amount of time before hitting F. Note that under the constraints of an i.i.d. model, τ_F has a geometric distribution, which means (3.4) is equivalent to (2.2) if e.g. t_s = ∆t/p_e and q_s = (1 − p_e)^{1/p_e}. In particular, for low exceedence probabilities, we have that q_s ≈ 1/e ≈ 37%.
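The matching of the two constraints under an i.i.d. model can be verified numerically: with t_s = ∆t/p_e, the survival probability reproducing (2.2) is q_s = (1 − p_e)^{1/p_e}, which tends to 1/e as p_e shrinks. A small sketch of this computation:

```python
def matching_survival_probability(p_e):
    """The survival probability q_s = (1 - p_e)**(1/p_e) for which the
    quantile constraint (3.4), with t_s = dt/p_e, matches the classical
    i.i.d. exceedence constraint (2.2). As p_e -> 0 this tends to 1/e,
    approached from below."""
    return (1.0 - p_e) ** (1.0 / p_e)
```

For p_e = 10^{-4} this gives q_s ≈ 0.3679, in line with the 37% figure above.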
Remark 3.2. While contours are usually formulated using return periods, there are several benefits to considering contours based on (3.4). As mentioned, long-term trends can lead to heavy tails of τ_F, which can inflate E[τ_F] or even make it infinite. There are also technical benefits in that only the path of V up to time t_s needs to be considered. If these paths need to be simulated, as in Section 7 or [26], then simulating the average exceedence time could require paths of arbitrary length, introducing certain numerical challenges. Lastly, given the difficulty of forecasting distant future trends, avoiding the specification of these trends for timepoints beyond t_s is advantageous.
With these possible constraints established, we now have two new ways of defining our environmental contours, either by (3.3) or (3.4). As noted, both of these serve as generalisations of the restriction (2.2). Analogously to the previous section, we will refer to a contour ∂B, such that B is convex and compact, as valid in the return period sense if T_r(B) ≥ t_r, and valid in the quantile sense if Q_s(B) ≥ q_s. Likewise, we call ∂B proper in the return period sense if E[τ_{Π^+(u,B(B,u))}] = t_r for all u ∈ S^{d−1}, and proper in the quantile sense if P(τ_{Π^+(u,B(B,u))} > t_s) = q_s for all u ∈ S^{d−1}. In order to justify the definitions of proper contours we proceed analogously to [13] by the following result.

Proposition 3.3. Let B ⊂ R^d be a compact and convex set. We then have

T_r(B) = inf_{u ∈ S^{d−1}} E[τ_{Π^+(u,B(B,u))}] and Q_s(B) = inf_{u ∈ S^{d−1}} P(τ_{Π^+(u,B(B,u))} > t_s).

Proof. We first note that by Proposition 2.1 we have, for any F ∈ E(B), some u ∈ S^{d−1} such that F ⊆ Π^+(u, B(B, u)), and hence τ_F ≥ τ_{Π^+(u,B(B,u))}. This inequality yields

inf_{F ∈ E(B)} E[τ_F] ≥ inf_{u ∈ S^{d−1}} E[τ_{Π^+(u,B(B,u))}] and inf_{F ∈ E(B)} P(τ_F > t_s) ≥ inf_{u ∈ S^{d−1}} P(τ_{Π^+(u,B(B,u))} > t_s).

The result then follows by noting that Π^+(u, B(B, u)) ∈ E(B) for any u ∈ S^{d−1}. □

Remark 3.4. Proposition 3.3 treats the tangent half-spaces Π^+(•, B(B, •)) as maximal elements of E(B). However, it is also possible to interpret the tangent half-spaces as FORM approximations of failure sets for possible designs. This would replace our assumption of convex failure sets by the linear FORM approximation.
With Proposition 3.3 we can introduce our analogues of C_e from (2.8). We define C_T(u) as the threshold satisfying

E[τ_{Π^+(u,C_T(u))}] = t_r (3.5)

and C_Q(u) as the threshold satisfying

P(τ_{Π^+(u,C_Q(u))} > t_s) = q_s. (3.6)

These functions allow us to define our contours by requiring B(B, u) = C_T(u), respectively B(B, u) = C_Q(u), for all u ∈ S^{d−1}, the validity of which will be proved in Proposition 5.5. However, before we move to the theoretical considerations of this article we will discuss the more practical connection to response analysis.

Interpretation of Environmental Processes in Continuous Time

The theory presented makes no restrictions on whether V is modelled as discrete or not, which causes some minor complications in applying these contours to response analysis.
In this regard, it is worth mentioning that the use of continuous-time processes for the definition of contours has been previously considered in e.g. [15]. Here, a continuous process is made comparable to an i.i.d. process by equating the outcrossing rate over a period of length T for specific thresholds. This was achieved through the use of equivalent characteristic time scales. This procedure ensured that the continuous and i.i.d. processes produced the same contour for a given return period, but whenever the target return period was changed the resulting contours differed significantly.
For offshore engineering it is common to split the description of ocean waves into its long-term and short-term variability. The long-term variability often considers summary statistics, such as significant wave height and zero-upcrossing period, which describe conditions over a certain time period ∆t. For example, the significant wave height over this period would be the average height of the largest third of waves within the period. The short-term variation usually describes the variation of individual waves within those periods.
Generally, the environmental conditions along a contour represent the long-term conditions, i.e. summary statistics over a period of length ∆t. Response analysis is then carried out by simulating the short-term variations over a period of that length, under the assumption of constant long-term conditions, see e.g. [1,5,23]. In order to facilitate this type of response analysis it is important that we can interpret V as suitable summary statistics of the sea-state over some period. There are several ways of ensuring this, most easily and commonly done by modelling V as a discrete process.
If one wanted to model V as a continuous-time process, it is possible to consider V_t to represent some collection of summary statistics over [t, t + ∆t]. This can be achieved by e.g. choosing a continuous-time model equating the distribution of V_t to an estimated density of the relevant statistics over the given period. In doing so, one must also ensure that the autodependence structure of V is sufficiently accurate. A simple example of this approach is given in Section 6.

4.2. Applications to Response Analysis. Environmental contours are often applied in the early concept evaluation phase, where the contours help identify possible critical design conditions. In terms of applications to response analysis we mainly have deterministic response, characteristic response estimates by way of design points, and importance sampling centred around these design points. We will here discuss the former two, as the extension of the latter to a non-stationary setting is still quite speculative.

4.2.1. Deterministic Response. The most straightforward application of contours is in the case of deterministic response. While usually unrealistic in practice, this case highlights the intuitive basis for the use of environmental contours.
Assume we have a deterministic response function y and a target return period of T years for the design of our structure. Any design will then have a response capacity y_cap, inducing a limit-state function g(v) = y_cap − y(v) and an associated failure set F = {v ∈ R^d : g(v) < 0}. The goal is then to find a minimal response capacity, y_cap, such that the mean time to failure, E[τ_F], is at least T years. This can easily be done by considering any valid contour, in the return period sense, as defined in Section 3 for a T-year return period. We can then apply an analogue of the inverse FORM method described in [29], which chooses

y_cap = max_{b ∈ B} y(b).

Assuming that argmax_{b ∈ B} y(b) ∈ ∂B and that the resulting F is convex, we then have by the definition of the contour that E[τ_F] ≥ T years. Since, in the simple deterministic case, the failure of the structure occurs at time τ_F, we know that the average failure time of the structure shares the same bound.

4.2.2. Quantiles of the Design Point. The most common application of environmental contours in the case of stochastic response, mentioned in e.g. [5,8], is the following. First, compute an environmental contour with a T-year return period. For environmental conditions along the contour, compute the distribution of the short-term maximal response. Designate the worst of these conditions as the design point; often the conditions providing the response distribution with the highest median are chosen. The response level with a T-year return period is then estimated by a quantile of the response distribution at the design point; quantiles ranging from 85% to 95% are recommended in [18]. This procedure, as described, can also be applied in a non-stationary setting.
Note that, by Section 4.2.1, the design point estimate would correspond to an exact bound in the deterministic case. The idea is then to choose a suitably high quantile to correct for the stochastic nature of the response. The underlying assumption then becomes that a representative response value with a return period of T years should occur along a contour with the same return period (paraphrasing [15,22]). Applying an analogous assumption to our non-stationary contours can shed some light on the choice between contours based on survival times or return periods. If one is interested in finding a response level with a specified average exceedence time, then a contour based on that same average exceedence time should be used. Similarly, if quantiles of this exceedence time are of primary interest, then contours based on survival times should be considered.
It is important to note that the use of quantiles at the design point for the calculation of characteristic response levels is a rough approximation. It is usually recommended to verify the choice of quantile by a full long-term response analysis if possible. Despite this, the method is highly efficient and requires very few response simulations to be carried out.
In the case where V is modelled in continuous time, it is still possible to identify a design point along the contour. Assuming that V represents the long-term conditions over a period of length ∆t, response distributions can be established for conditions along the contour. However, since a full response analysis in continuous time is generally unfeasible, there is no practical way to verify the choice of quantile. Continuous-time modelling therefore remains an option when a full response analysis is out of reach, but this issue is a strong reason to prefer discrete models whenever a full response analysis is needed.

Existence of Contours
The design conditions along the contour are chosen due to their statistical properties; as such, it is important to be able to clearly interpret and mathematically verify them. In Section 3 we allow for a very general class of models for V; however, not every model permits the existence of well-defined contours. Therefore, in this section we will provide rigorous mathematical justification for the existence of these contours. Firstly, we give minimal conditions for C_Q and C_T to be well defined. We will then show that the analogous representation of Equation (2.9), based on constructing contours by B(B, u) = C_e(u), still provides a unique proper contour in our generalised setting. Finally, in the case where no proper contours exist, we prove the existence of valid contours.
In order to ensure our functions are well defined we will need the following definitions and results. For any u ∈ S^{d−1}, b ∈ R and t ∈ R with t ≥ 0, we denote the cumulative distribution function of τ_{Π^+(u,b)} by

F_b^u(t) = P(τ_{Π^+(u,b)} ≤ t) (5.1)

and the average by

T_u(b) = E[τ_{Π^+(u,b)}]. (5.2)

Finally, we define

ϕ_t^u = sup_{s ∈ [0,t]} ⟨V_s, u⟩. (5.3)

Throughout this article we will usually assume that for any t > 0, b ∈ R ∪ {±∞} we have

P(ϕ_t^u = b) = 0. (5.4)

This assumption will serve as a minimal condition for our contours and other concepts to be definable. For most models, (5.4) will follow as a consequence of ϕ_t^u admitting a continuous density for every u ∈ S^{d−1} and t > 0. For a discrete model of V, it is sufficient that V_t admits a density for all t.
To see the connection between these definitions we have the following result.
Lemma 5.1. Let u ∈ S^{d−1} and t > 0. We then have

T_u(b) = ∫_0^∞ (1 − F_b^u(s)) ds.

Furthermore, if (5.4) holds, then

F_b^u(t) = P(ϕ_t^u ≥ b) = P(ϕ_t^u > b).

The first equality is the standard tail probability expectation formula; as such, we omit the proof.

As for the second equality, if τ_{Π^+(u,b)} ≤ t then there is some point s ≤ t such that V_s ∈ Π^+(u, b), or equivalently ⟨V_s, u⟩ ≥ b, which implies ϕ_t^u ≥ b. Similarly, if τ_{Π^+(u,b)} > t then no such point exists, and consequently ϕ_t^u ≤ b; this conversely states that ϕ_t^u > b implies τ_{Π^+(u,b)} ≤ t. Applying these implications and (5.4) we get

P(ϕ_t^u > b) ≤ F_b^u(t) ≤ P(ϕ_t^u ≥ b) = P(ϕ_t^u > b),

which proves the second equality. □

With this lemma we can prove the following results, which guarantee that (3.5) and (3.6), i.e. C_T and C_Q, are well defined.
Theorem 5.2. Under the assumption of (5.4), we have for any t > 0, u ∈ S^{d−1} that b → F_b^u(t) is monotone non-increasing and continuous, with lim_{b→∞} F_b^u(t) = 0 and lim_{b→−∞} F_b^u(t) = 1.

Proof. We will start by showing left-continuity. For any sequence b_n → b with b_n ≤ b_{n+1} < b, Lemma 5.1 gives lim_n F_{b_n}^u(t) = lim_n P(ϕ_t^u ≥ b_n). We see that ϕ_t^u ≥ b_n for all n is equivalent to ϕ_t^u ≥ b. This implies that the limit equals P(ϕ_t^u ≥ b) = F_b^u(t), proving left-continuity. Similarly, for b_n → b with b_n ≥ b_{n+1} > b, we note that ϕ_t^u < b_n for all n is equivalent to ϕ_t^u ≤ b. As a consequence, the limit of 1 − F_{b_n}^u(t) equals P(ϕ_t^u ≤ b) = 1 − F_b^u(t) by (5.4), which implies that b → F_b^u(t) is right-continuous and therefore fully continuous.

Considering lim_{b→∞} F_b^u(t), we may, for any sequence b_n → ∞, note that ϕ_t^u ≥ b_n for all n is equivalent to ϕ_t^u = ∞, so the limit equals P(ϕ_t^u = ∞) = 0 by (5.4). And lastly, for any sequence b_n → −∞, the limit of F_{b_n}^u(t) equals 1 − P(ϕ_t^u = −∞) = 1, again by (5.4). □

Note that continuity of b → F_b^u(t) together with lim_{b→∞} F_b^u(t) = 0 for all u ∈ S^{d−1} is equivalent to the assumption of (5.4), making it a necessary and sufficient condition for Theorem 5.2.
This theorem implies that b → 1 − F_b^u(t_s) = P(τ_{Π^+(u,b)} > t_s) spans (0, 1), which implies that C_Q is well defined for any q_s ∈ (0, 1). To ensure that C_T is also well defined, we have the following result.

Proposition 5.4. Assume that (5.4) holds and that for any u ∈ S^{d−1} there is some b_u^* ∈ R ∪ {∞} such that T_u(b) < ∞ if and only if b < b_u^*. We then have that T_u(•) is continuous and monotone non-decreasing on (−∞, b_u^*), with lim_{b→−∞} T_u(b) = 0 and lim_{b↑b_u^*} T_u(b) = ∞.

Proof. Consider first some sequence b_n → b < b_u^*. By the continuity of b → F_b^u(t) and the dominated convergence theorem we get T_u(b_n) = ∫_0^∞ (1 − F_{b_n}^u(s)) ds → ∫_0^∞ (1 − F_b^u(s)) ds = T_u(b). Similarly, for b_n → −∞, we have some N and b < b_u^* such that b_n ≤ b for n > N, and we then get, by the dominated convergence theorem along with lim_{b→−∞} F_b^u(t) = 1, that T_u(b_n) → 0. Lastly, for b_n → b_u^* with b_n ≤ b_{n+1} for all n, we have that {1 − F_{b_n}^u}_{n=1}^∞ is a monotone increasing sequence of non-negative functions, and so by the monotone convergence theorem T_u(b_n) → ∫_0^∞ (1 − F_{b_u^*}^u(s)) ds = T_u(b_u^*) = ∞. The same computations hold if b_u^* = ∞ by considering F_∞^u = 0, which completes the proof. □

With this result we see that, under the given assumptions, T_u(•) spans the whole of (0, ∞), so C_T is well defined for any t_r ∈ (0, ∞).
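By Lemma 5.1, P(τ_{Π^+(u,b)} > t_s) = P(ϕ_t^u < b) for t = t_s, so C_Q(u) is simply the q_s-quantile of the running maximum ϕ_{t_s}^u = sup_{s ≤ t_s} ⟨V_s, u⟩. This suggests a straightforward Monte Carlo estimator; in the sketch below, the discrete i.i.d. Gaussian model is only an illustrative stand-in for a fitted model of V:

```python
import random

def estimate_C_Q(simulate_projections, q_s, n_sim=5_000, rng=None):
    """Monte Carlo estimate of C_Q(u). `simulate_projections` draws one
    path of projections <V_s, u> over [0, t_s]; C_Q(u) is then estimated
    by the empirical q_s-quantile of phi = max of those projections."""
    rng = rng or random.Random(1)
    phis = sorted(max(simulate_projections(rng)) for _ in range(n_sim))
    return phis[min(int(q_s * n_sim), n_sim - 1)]

# Illustrative model: t_s spans 10 i.i.d. standard normal periods.
def iid_normal_projections(rng, n_periods=10):
    return [rng.gauss(0.0, 1.0) for _ in range(n_periods)]
```

For this toy model C_Q can be checked in closed form: the q_s-quantile b solves Φ(b)^10 = q_s, giving b ≈ 2.31 for q_s = 0.9.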
With this, both our analogues of C_e from (2.8) are well defined. Similarly to [13], we can use these functions to guarantee certain properties of our contours.

Proposition 5.5. Fix some t_s ∈ (0, ∞) and q_s ∈ (0, 1) such that C_Q(u) is well defined for all u ∈ S^{d−1}. We then have that Q_s(B) ≥ q_s is equivalent to B(B, u) ≥ C_Q(u) for all u ∈ S^{d−1}. Furthermore, if we fix some t_r > 0 such that C_T is well defined, then T_r(B) ≥ t_r is equivalent to B(B, u) ≥ C_T(u) for all u ∈ S^{d−1}.

Proof. We start by assuming that B(B, u) ≥ C_Q(u) for all u ∈ S^{d−1}, and get Π^+(u, B(B, u)) ⊆ Π^+(u, C_Q(u)) and hence τ_{Π^+(u,B(B,u))} ≥ τ_{Π^+(u,C_Q(u))}. Combining this with Lemma 5.1 and Proposition 3.3 then yields Q_s(B) ≥ q_s; the converse, as well as the statement for C_T, follows by the same argument. □

Proposition 5.6. Under the conditions of Proposition 5.5, if there exists some proper contour ∂B in the quantile sense, then B(B, u) = C_Q(u) for all u ∈ S^{d−1}, i.e. ∂B is the boundary of

B = ∩_{u ∈ S^{d−1}} Π^−(u, C_Q(u)). (5.6)

Proof. Consider some proper contour ∂B and assume, for contradiction, that we have B(B, u) > C_Q(u) for some u ∈ S^{d−1}. Since, by definition, C_Q(u) is the largest solution of (3.6), we must also have P(τ_{Π^+(u,B(B,u))} > t_s) > q_s. This contradicts the fact that ∂B is a proper contour, and we must therefore have B(B, u) = C_Q(u) for all u ∈ S^{d−1}. □

We can also extend this result to proper contours in the return period sense.
Proposition 5.7. Fix some t_r such that C_T(u) is defined for all u ∈ S^{d−1}, and let the conditions of Proposition 5.6 hold. We also assume that τ_{Π^+(u,C_T(u))} is non-deterministic in the sense that P(τ_{Π^+(u,C_T(u))} = t_r) < 1. Under these conditions, if there exists some proper contour ∂B in the return period sense, i.e. E[τ_{Π^+(u,B(B,u))}] = t_r for all u ∈ S^{d−1}, then B(B, •) = C_T.

Proof. Consider the proper contour ∂B and assume, for contradiction, that B(B, u) > C_T(u) for some u ∈ S^{d−1}. We then have τ_{Π^+(u,B(B,u))} ≥ τ_{Π^+(u,C_T(u))}, and since both have mean t_r, the two hitting times must coincide almost surely. Noting that the paths of t → 1(τ_{Π^+(u,b)} ≤ t) either equal 1(t < s) or 1(t ≤ s) for some s ∈ [0, ∞), where 1 denotes the indicator function, this forces P(τ_{Π^+(u,C_T(u))} = t_r) = 1, which contradicts our assumption that τ_{Π^+(u,C_T(u))} is non-deterministic, thereby implying B(B, u) = C_T(u) for all u ∈ S^{d−1}. □

These results show us that any proper contour, in either the return period or the quantile sense, is uniquely defined by (5.6) with C = C_T or C_Q respectively.
In the case where no proper contour exists we will need alternative methods for constructing a valid contour. One example of a possible construction is the following. Here x ∈ R^d is some suitable centre point, ideally such that C(u) − ⟨u, x⟩ > 0 for all u ∈ S^{d−1}, around which the contour is drawn:

B = cl(conv({x + (C(u) − ⟨u, x⟩)^+ u : u ∈ S^{d−1}})), (5.7)

where (•)^+ equals max(•, 0), conv(•) denotes the convex hull and cl(•) is the closure.

Proposition 5.8. Fix some t_s ∈ (0, ∞), q_s ∈ (0, 1) such that C_Q(u) is well defined for all u ∈ S^{d−1} and bounded from above. Let B be constructed as in (5.7) with C = C_Q. We then have that ∂B is a valid contour in the quantile sense.

Similarly, if C_T is defined and bounded above for some t_r ∈ (0, ∞) and B is constructed as in (5.7) with C = C_T, we then have that ∂B is a valid contour in the return period sense.
Proof. Since, by definition, ⟨x + (C_Q(u) − ⟨u, x⟩)^+ u, u⟩ ≥ C_Q(u), we have B(B, u) ≥ C_Q(u) for any u ∈ S^{d−1}. Lastly, B is closed and convex by definition, and since C_Q is bounded from above we have that (C_Q(u) − ⟨u, x⟩)^+ is bounded. As a consequence, B is compact, making ∂B valid in the quantile sense. An identical argument shows that the construction with C = C_T is valid in the return period sense. □

To ensure that this construction is always feasible we have the following result, which ensures that C_Q and C_T are indeed bounded, thereby guaranteeing the existence of valid contours.

Lemma 5.9. Fix some t_r, t_s ∈ (0, ∞), q_s ∈ (0, 1) such that C_T is well defined, and assume that (5.4) holds. We then have that C_Q and C_T are bounded from above on S^{d−1}.

Proof. Firstly, by Theorem 5.2, C_Q is defined and finite for any u ∈ S^{d−1}. Furthermore, if we define ϕ_t^∞ = sup_{s∈[0,t]} ∥V_s∥, we may note that (5.4) implies P(ϕ_t^∞ = ∞) = 0 for any t ∈ (0, ∞). As a consequence, we may pick some b ∈ R such that P(ϕ_{t_s}^∞ < b) > q_s, and since ⟨V_s, u⟩ ≤ ∥V_s∥ this gives P(τ_{Π^+(u,b)} > t_s) > q_s and hence C_Q(u) ≤ b for all u ∈ S^{d−1}. Similarly, we may pick b′ ∈ R such that P(ϕ_{2t_r}^∞ < b′) > 0.5. We then have P(τ_{Π^+(u,b′)} > 2t_r) > 0.5, so T_u(b′) > 0.5 · 2t_r = t_r and hence C_T(u) ≤ b′ for all u ∈ S^{d−1}. □
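The construction (5.7) is straightforward to implement by discretising S^{d−1}. The sketch below uses a hypothetical threshold function C on the circle (d = 2) and checks the defining property B(B, u) ≥ C(u) in the sampled directions:

```python
import math

def contour_points(C, x, n_dirs=360):
    """Discretised version of (5.7) in d = 2: the points
    x + (C(u) - <u, x>)^+ u whose closed convex hull forms B."""
    pts = []
    for k in range(n_dirs):
        u = (math.cos(2 * math.pi * k / n_dirs),
             math.sin(2 * math.pi * k / n_dirs))
        r = max(C(u) - (u[0] * x[0] + u[1] * x[1]), 0.0)
        pts.append((x[0] + r * u[0], x[1] + r * u[1]))
    return pts

def support(pts, u):
    """Support function B(B, u) of the convex hull of the points."""
    return max(p[0] * u[0] + p[1] * u[1] for p in pts)

# Hypothetical bounded, anisotropic threshold function and centre point.
C = lambda u: 2.0 + 0.5 * u[0]
x = (0.0, 0.0)
pts = contour_points(C, x)
```

Since each generated point satisfies ⟨p_u, u⟩ ≥ C(u) in its own direction, the hull's support function dominates C on the sampled directions, mirroring the argument in the proof of Proposition 5.8.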
With this result we can guarantee the existence of valid contours. However, other construction methods also exist. In [11], the authors consider a scenario where C_e could produce a proper contour but, due to estimation errors, the approximated C_e fails to do so. To address this issue, they propose constructing an inflated contour using (5.6) based on C_e + c for some appropriate c ∈ R. In the more general case where C_e does not admit a proper contour, as presented in [6], an invalid contour is constructed using (5.6), followed by an extension procedure that guarantees a valid construction in the limit. In both cases it is assumed that C_e is bounded, and hence Lemma 5.9 ensures that these methods can still be applied in our setting. We also mention [24], where a numerical algorithm for computing a minimal valid contour in the sense of mean width is developed and analysed.
With the existence and construction of contours settled, we can move on to some examples. The goal of these is to show how the presented methods differ from the existing framework presented in e.g. [12]. The first main difference is the ability to consider a continuous-time framework, which may more accurately capture the dynamics of V in between the discrete points. Secondly, we can also allow non-stationary behaviour, which permits the inclusion of effects like climate change as part of the model for V.

Theoretical Example
We will in this section aim to define a proper contour in the return period sense with a target return period of t_r, under the assumption that V follows the continuous dynamics described below. Once this is done we will compare our exact method with an i.i.d. method to highlight the differences. In analysing these models we also discuss when i.i.d. methods produce conservative estimates.
It is also worth mentioning that a similar case was studied in [15]. Here, several types of contours were compared for both continuous and discrete models of the underlying environmental factors. It was also observed that a continuous model for V may result in larger contours. Additionally, in [10,17,26] the authors compare discrete models taking serial correlation into account with i.i.d. models. These articles conclude that including autodependence leads to smaller contours, and hence argue that ignoring correlation leads to overly conservative estimates. The i.i.d. method will therefore serve as a conservative representative of all possible discrete models. We can therefore compare its resulting contours to those of a continuous model to examine how autodependence affects the size of the contour in a continuous-time setting.
We here consider the case where V_t ∈ R^1 is defined by dV_t = −θV_t dt + √(2θ) dW_t, where W is standard Brownian motion and θ ∈ R, θ > 0. This makes V a standardised Ornstein–Uhlenbeck process, which serves as a continuous interpolation of a discrete-time AR(1) process. Note that V is standardised to ensure a mean of 0 and variance 1, which implies that V_t is standard normally distributed for any t. This allows us to interpret the continuous-time process in the sense of Section 4. We assume that the long-term conditions over the periods [n∆t, (n + 1)∆t], n ∈ N, are standard normally distributed and follow an AR(1) process. Consequently, V_t is an interpolation of these conditions and can be interpreted as the average conditions over [t, t + ∆t]. By the symmetry of these dynamics, we see that −V is also an Ornstein–Uhlenbeck process with the same parameters and thus equal in law to V. As such we know that C_T is constant on S^0 = {−1, 1}. In fact, if we considered a d-dimensional Ornstein–Uhlenbeck process, C_T would still be constant and ∂B would equal a (d − 1)-sphere with a radius given by the same value of C_T as in our 1-dimensional case.
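As a sketch of these dynamics, the snippet below simulates the standardised Ornstein–Uhlenbeck process via its exact transition V_{t+δ} = e^{−θδ} V_t + √(1 − e^{−2θδ}) Z. This discretisation scheme is standard for OU processes, though the code itself is our own illustration, not the paper's.

```python
import numpy as np

def simulate_ou(theta, n_steps, dt, rng, v0=0.0):
    """Exact simulation of dV_t = -theta V_t dt + sqrt(2 theta) dW_t
    on a grid of spacing dt, started from v0 (the example uses V_0 = 0)."""
    a = np.exp(-theta * dt)        # AR(1) coefficient e^{-theta dt}
    s = np.sqrt(1.0 - a * a)       # noise scale for stationary variance 1
    V = np.empty(n_steps + 1)
    V[0] = v0
    Z = rng.standard_normal(n_steps)
    for k in range(n_steps):
        V[k + 1] = a * V[k] + s * Z[k]
    return V
```

On the grid {n∆t} this is exactly the AR(1) sequence that V interpolates.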
In computing C_T we again consider the expected exceedence time T_u(b) = E[τ_{Π+(u,b)}]. Under the assumption that V_0 = 0, we have an explicit representation of T_u(•), independent of u, given in e.g. [19] in terms of the gamma function Γ. One can also show a more convenient alternative representation in terms of the error function erf. By inverting T_u(•) we can easily compute C_T numerically. The resulting contour is then, due to C_T being constant, given by the two points ∂B = {±T_u^{−1}(t_r)}. This gives us an explicit representation of the optimal contour. However, as an alternative, we could consider an i.i.d. model for V. We define {W_n}^∞_{n=0} as an independent sequence of standard normally distributed random variables. Further define V̄ by V̄_t = W_{⌊t/∆t⌋}, for some ∆t > 0, and note that V̄_t and V_t are equal in law for any t ∈ R. As such we may consider V̄ as an alternative i.i.d. model of V.
If we were to apply the i.i.d. method presented in e.g. [12] we could compute C_T based on V̄. We know that this is equivalent to considering C_e with an exceedence probability of p_e = ∆t/t_r, which means that for all u ∈ S^0 we would have C_e(u) = Φ^{−1}(1 − p_e), where Φ is the cumulative distribution function of a standard normal random variable.
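The i.i.d. radius is thus just a standard normal quantile at the exceedence probability p_e = ∆t/t_r. A minimal sketch of this computation, using only the standard library (the function name is ours):

```python
from statistics import NormalDist

def iid_radius(t_r_hours, dt_hours=3.0):
    """Radius of the i.i.d. contour: Phi^{-1}(1 - p_e) with p_e = dt / t_r."""
    p_e = dt_hours / t_r_hours
    return NormalDist().inv_cdf(1.0 - p_e)
```

For a 200-year return period (t_r ≈ 1.75 × 10^6 hours) and ∆t = 3 hours this gives a radius of about 4.65, consistent with the ±2.3% comparisons in Remark 6.1.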
With these two models we can compare how the resulting contours differ. Since both models provide perfectly circular contours (insofar as S^0 can be called circular) we can instead compare the radii. To compute exact numbers we want a reasonably realistic value for θ. This will be chosen based on a time series, {H_n}_{n∈N}, of significant wave heights. The details of this dataset will be given in Section 7.
In order to choose our parameters we first pick ∆t = 3 hours and compute the 24-hour autocorrelation (AC_24 ≈ 68%) of the standardised data (H − µ_H)/σ_H. Here µ_H and σ_H are the empirical mean and standard deviation, respectively, of the time series {H_n}_{n∈N}. By noting that the 24-hour autocorrelation of an Ornstein–Uhlenbeck process satisfies AC_24 = e^{−24θ}, i.e. θ = −log(AC_24)/24, we can estimate θ ≈ 0.016 hours^{−1}.
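This estimation step can be sketched as follows; the preprocessing here (simple standardisation, one empirical lag) is our own simplification of the procedure described above.

```python
import numpy as np

def estimate_theta(H, dt_hours=1.0, lag_hours=24.0):
    """Estimate the OU rate theta from the lag-24h autocorrelation of the
    standardised series, using AC_24 = e^{-24 theta}."""
    X = (H - H.mean()) / H.std()
    lag = int(round(lag_hours / dt_hours))
    ac = np.corrcoef(X[:-lag], X[lag:])[0, 1]
    return -np.log(ac) / lag_hours
```

With AC_24 ≈ 0.68 this gives θ = −log(0.68)/24 ≈ 0.016 hours^{−1}.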
The resulting radius curves, based on the continuous-time model of V (labelled OU Method) and on V̄ (labelled IID Method), are plotted in Figure 1.

Figure 1. Comparison of contour radii for different methods
Remark 6.1. The differences are quite small, but there is still a noticeable distinction between the two methods. In fact, their similarity is heavily dependent on the value of θ; other values would push the radius of the O.U. method more noticeably above or below that of the i.i.d. method. For example, by accounting for trends and seasonality in H one may get θ ≈ 0.025. This would yield e.g. a 200-year contour radius of 4.75, which is a 2.3% increase over the i.i.d. method. If we instead considered an estimate of θ based on e.g. the 7-hour empirical autocorrelation, then θ ≈ 0.01 is obtained. This estimate would yield a 200-year contour radius of 4.54, a 2.3% decrease compared to the i.i.d. method.
As we see, the i.i.d. method produces larger and more conservative contours for low values of t_r, but crosses below for sufficiently large return periods. One can even approximate where the two lines cross by noting that this happens when the radius, R, of the contour satisfies (6.1). It can further be shown, by taking specific asymptotic expansions, that both sides of (6.1) admit simple approximations for high values of R. Applying these to both sides of (6.1) then yields an approximate crossing condition, as long as the point where the lines cross occurs at sufficiently high values of R.
Simplifying this expression we get the approximate identity θ∆tR² = 1, which means we can compute the return period at which this crossing occurs, here denoted t*_r, as t*_r = ∆t / (1 − Φ((θ∆t)^{−1/2})) (6.2). For our specific parameters, this approximation yields t*_r ≈ 130 years. By analysing (6.2) we can supplement the more heuristic reasons why these different models yield different contours.
Using V̄ ignores the autocorrelation, allowing the process to vary more wildly, usually producing larger contours. This effect is magnified when ∆t is low, which corresponds to the limit lim_{∆t→0} t*_r = ∞. This justifies why the i.i.d. approach is usually considered a conservative estimate, as argued in e.g. [10,17,26]. In these articles it is remarked that including serial correlation would lead to less conservative contours, and thus less extreme design conditions. This holds in the setting of discrete models for V. However, as we see from this example, there are situations where the consideration of continuous-time models can lead to more conservative contours despite the inclusion of serial correlation.
As this example shows, there are situations where the i.i.d. method can overestimate the return period compared to a continuous-time model. Since the presumed true model of V is non-discrete, it has the possibility of exceeding boundaries at times between the discrete points. Since V̄_t was calibrated to equal V_t in distribution, and therefore represents the long-term conditions over [t, t + ∆t], we consider the following scenario. Some off-shore structure is exposed to environmental loads; for simplicity we focus only on the significant wave height. The significant wave heights over the periods [0, ∆t] and [∆t, 2∆t] each induce their own short-term probability of failure. However, due to the variability of V, the significant wave height could potentially be higher over the middle period [∆t/2, 3∆t/2] than over either of the two other periods individually. This permits a potentially much higher failure probability over the middle period that would be ignored by only considering [0, ∆t] and [∆t, 2∆t]. As a consequence, the time until failure could be underestimated by not considering the short-term variability of the long-term conditions described by V. This effect is reflected by ∂t*_r/∂θ < 0, which implies that increasing θ shrinks the domain where the method based on V̄ is conservative. Indeed, a high θ increases the short-term volatility of V, thus improving the chances of the process exceeding a fixed boundary at times in-between the points of {n∆t, n ∈ N}.
For the purposes of response analysis we usually want to use a discrete model for V. This example, however, shows that this can overestimate the return period of sufficiently extreme conditions. Consequently, such models may underestimate the response resulting from those conditions. Bear in mind that the i.i.d. method can be viewed as a conservative representative of all possible discrete methods. Thus, if the continuous-time modelling of V produces meaningfully larger contours than even the conservative i.i.d. estimate, this serves as an indication that a lower ∆t should be considered.

Empirical Example
7.1. Data and Outline of Method. The data considered for this example will consist of ERA5 reanalysis data [9]. We will use hourly data for significant wave height and wave periods from 65°N, 0°W over the period 1959-2021. While the calibration will use the full resolution of one hour, we will use a three-hour time step for the purposes of simulation and computation.
Our primary goal is here to compute an empirical environmental contour in the quantile sense for a survival time of t_s = 50 years and a survival probability of q_s = e^{−1} ≈ 37%. In doing so we will also present a specific algorithm for generating such contours, which is carried out in four steps:
• The distribution of V_t is estimated for all relevant values of t.
• Paths of V are simulated by using sequences of independent (but not identically distributed) random variables.
• These paths are used to compute samples of ϕ^u_{t_s}, which allows the computation of C_Q by Lemma 5.1.
• The contour can then be constructed. If a proper contour exists, it is defined by (5.6); if no such contour exists, we may employ the methods of [24] to produce a minimal valid contour in the sense of mean width.
Note that a survival probability of e^{−1} is chosen to correspond with a 50-year return period if τ had an exponential density. Due to the presence of a trend we do not have this distribution, but it will still serve as an easy, although rough, point of comparison.
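The quantile step above can be sketched generically; here the path simulator is left abstract, and the function name and interface are our own, assuming Lemma 5.1 identifies C_Q(u) with the lower q_s-quantile of ϕ^u_{t_s} = sup_{t ≤ t_s} ⟨u, V_t⟩.

```python
import numpy as np

def estimate_CQ(simulate_path, u, q_s, n_sims, rng):
    """Monte Carlo estimate of C_Q(u): the lower q_s-quantile of the
    supremum of the projection of V onto u over [0, t_s].
    simulate_path(rng) must return one path as an (n_steps, d) array."""
    sups = np.empty(n_sims)
    for i in range(n_sims):
        path = simulate_path(rng)
        sups[i] = np.max(path @ np.asarray(u))  # sample of phi^u_{t_s}
    return np.quantile(sups, q_s)
```

Repeating this over a grid of unit vectors u ∈ S^1 gives the function C_Q used in the construction step.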
Remark 7.1. This procedure is very similar to the method proposed in [26], which considers a discrete, but serially correlated, model for V. The author also considers contours in the return period sense in the same manner as defined in Section 3. Firstly, both the marginal distribution and the autocorrelation function of V are estimated. It is then assumed that V_t = g(Z_t), where g is the Rosenblatt transform, implying that Z has standard Gaussian marginals. The author then finds the unique autocorrelation structure for Z that implies the empirical autocorrelation of V. Paths of V can then be simulated by simulating Z, which is used to estimate C_Q.
In our simulation method we consider a non-stationary distribution of V, but ignore its autocorrelation. It is entirely possible to include serial correlation by extending the method in [26] to a non-stationary setting. One could for example consider V_t = g_t(Z_t), where g_t(•) is the Rosenblatt transform corresponding to the marginal density of V_t.

7.2. Calibration of Distributions. We here have d = 2 with V_t = (P_t, H_t), where P and H denote the wave period and significant wave height respectively. Following e.g. [11,12,27], we model H using a 3-parameter Weibull distribution, and P with a conditional log-normal distribution. However, due to the presence of long-term trends discussed in e.g. [14,27,28], we will model V as non-stationary to take both this trend, as well as seasonality, into account.
We assume that H_t ∼ W(λ_t, k_t, θ), i.e. a 3-parameter Weibull distribution with scale λ_t, shape k_t, and location θ. Here λ is taken to be of the form λ_t = (c_1 + c_2 t)l_t; furthermore, l and k are assumed periodic with a period of one year.
As for the wave period, we assume that (log(P_t)|H_t = h) ∼ N(µ(t, h), σ²(t, h)), i.e. a conditional normal distribution with mean µ(t, h) and variance σ²(t, h). Here µ and σ are assumed to be of the form µ(t, h) = m(t) + f_µ(h) and σ(t, h) = s(t)f_σ(h), where m and s are periodic with a period of one year.

Remark 7.2. Several simplifications could potentially be considered here. For example, one may ignore seasonality and use the popular parametric estimates f_µ(h) = a_1 + a_2 h^{a_3} and f_σ(h) = b_1 + b_2 e^{b_3 h} for some constants a_i, b_i, i = 1, 2, 3. This would allow for a simpler and fully parametric model for the distribution of V_t.
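To illustrate this model, the sketch below draws a single (P_t, H_t) pair using the simplified parametric forms of Remark 7.2 (seasonality dropped, l_t ≡ 1, constant shape). Every numerical value here is an invented placeholder, not a calibrated estimate; the fitted functions of Section 7 would replace them.

```python
import numpy as np

def sample_V(t, rng, theta=0.1, c1=1.5, c2=0.01,
             a=(1.0, 0.9, 0.45), b=(0.005, 0.12, -0.3)):
    """Draw (P_t, H_t): H_t ~ 3-parameter Weibull with a trending scale,
    then log(P_t) | H_t ~ N(mu, sigma^2). All parameter values are
    hypothetical placeholders for illustration only."""
    lam = c1 + c2 * t                       # lambda_t = (c1 + c2 t) l_t, with l_t = 1
    k = 1.5                                 # k_t taken constant here
    H = theta + lam * rng.weibull(k)        # location-shifted Weibull sample
    mu = a[0] + a[1] * H ** a[2]            # f_mu(h) = a1 + a2 h^{a3}
    sigma = b[0] + b[1] * np.exp(b[2] * H)  # f_sigma(h) = b1 + b2 e^{b3 h}
    P = np.exp(mu + sigma * rng.standard_normal())
    return P, H
```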
In order to perform our calibration procedure we first remark that if H_t ∼ W(λ_t, k_t, θ), we then have, for any p > 0, E[(H_t − θ)^p] = λ_t^p Γ(1 + p/k_t). To estimate (λ_t, k_t, θ) for H we then do the following.
• θ is estimated by the minimal measured value (rounded down to 2 significant digits to avoid numerical issues).
• The linear trend parameters, (c_1, c_2), are estimated by linear regression on H − θ.
• k_t is estimated by inverting the moment equality above in k_t, where the expectations are computed by smoothing spline regression of H − θ.
We then normalise k by defining k′_t = k_t / ∫₀¹ k_s ds, so the average value of k′ equals 1. This is done to avoid numerical issues from taking high powers.
• For the calibration of l we note that, with (c_1, c_2) and k′ fixed, the remaining scale factor can be isolated in the moment expression above. We can then fit l_t^{k′_t} by a smoothing spline regression of the correspondingly transformed data.
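The first two calibration steps for H can be sketched in code. The rounding convention (here interpreted as two decimal places) and the normalisation of k follow the description above, while everything else (plain least squares and a trapezoidal rule instead of splines) is our own simplification.

```python
import numpy as np

def fit_location_and_trend(H, t):
    """Location theta as the sample minimum rounded down (to avoid
    numerical issues), then (c1, c2) by linear regression on H - theta."""
    theta = np.floor(H.min() * 100) / 100
    A = np.stack([np.ones_like(t), t], axis=1)
    c1, c2 = np.linalg.lstsq(A, H - theta, rcond=None)[0]
    return theta, c1, c2

def normalise_k(k_vals, t_grid):
    """k'_t = k_t / integral_0^1 k_s ds (trapezoidal rule over one period),
    so that k' averages to 1."""
    dt = np.diff(t_grid)
    integral = np.sum(dt * (k_vals[:-1] + k_vals[1:]) / 2)
    return k_vals / integral
```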
The resulting parameters are given in Table 1 and Figure 2. Note that the parameters in Table 1 are given in meters (m) and meters per year (m/y). For the conditional model of P, we have E[log((log(P_t) − µ(t, H_t))²) | H_t] = log(s²(t)) + log(f²_σ(H_t)) + L, where L is the log-moment of a chi-squared random variable with 1 degree of freedom. This allows us to fit smoothing splines for log(f²_σ(h)) and log(s²(t)) by a weighted generalised additive model calibration. Finally, to counteract issues arising from log-scale calibration, s is scaled to ensure that (log(P_t) − µ(t, H_t))/σ(t, H_t) has a variance of 1. The resulting functions are given in Figure 3.

With this model we can easily simulate paths of V over the next 50 years. Based on these simulations we obtain samples of ϕ^u_{t_s} for 180 uniformly spaced unit vectors in S^1. By considering the lower q_s quantile of ϕ^u_{t_s} for a fixed u ∈ S^1 we obtain C_Q(u) by Lemma 5.1, which yields q_s = P(τ_{Π+(u,C_Q(u))} > t_s) = P(ϕ^u_{t_s} ≤ C_Q(u)).

Remark 7.3. We may also note that W_n has a continuous density for all n ∈ N. This implies that, for any u ∈ S^1 and t_s > 0, ϕ^u_{t_s} has a density as well. Consequently, by Theorem 5.2, C_Q is well defined, and by Proposition 5.8 and Lemma 5.9 we can guarantee the existence of valid contours. Furthermore, as the density of W_n has support equal to R × [θ, ∞), we know that ϕ^u_{t_s} has a density with connected support. Consequently, if a proper contour exists, Proposition 5.6 implies it is defined by (5.6).
In existing literature, such as [12] and [27], the inclusion of climatic trends is accomplished by considering a stationary distribution with parameters modified to reflect the observed trend. In order to study the effects of replacing the trend by adjusting the parameters of a stationary model, we will examine three cases. This is done by fixing the trend at either the beginning or the end of our 50-year period.
• Case 1: λ_t = (c_1 + c_2 t_s)l_t, which represents sea-states based on the trend at the end of our 50-year period.
• Case 2: λ_t = (c_1 + c_2 t)l_t, which represents the estimated true sea-state distributions.
• Case 3: λ_t = c_1 l_t, which represents sea-states based on the trend at the beginning of 2022.
Note that we here still include the seasonal effects. This is done to avoid using several calibration methods, which could create artificial differences between the cases unrelated to the trend. However, despite the seasonality, we would still expect cases 1 and 3 to properly represent stationary alternatives to our method. As we see, there is a notable difference in C_Q(u), though largely for u ≈ (0, 1). Note that since ⟨V_t, (0, 1)⟩ = H_t, these values depend mostly on the behaviour of the significant wave height. This demonstrates that including trends is important to avoid underestimation of risk, such as in case 3. However, by including the trend as a non-constant effect, as in case 2, we can still safely reduce the resulting contour relative to case 1, where the highest trend value is applied to the entire period. Specifically, using the conservative estimate of case 1 still overestimates the risk significantly, with a maximal difference in C_Q of 0.41. Since the modelling of V by an i.i.d. process is inherently on the safe side, we may be overly cautious by choosing a method which makes further conservative approximations.

Summary and Conclusions
This paper has rigorously defined environmental contours based on general stochastic processes and established minimal conditions for their existence. These definitions have several advantages over conventional constructions: chiefly, the ability to properly include climate trends, but also the capability of including seasonality and autodependence.
The theory can also be further generalised to include several extensions. For example, we have discussed the possibility of including serial correlation in the simulation algorithm of Section 7 by modifying the methods of [26]. The addition of omission factors can be done in a similar way as described in [11]. Lastly, buffering, as introduced in [2], can be readily extended to our setting, but may require restricting the model choice of V to those based on sequences of independent random variables.
Furthermore, the presented methods have been compared to conventional techniques, and significant differences in the resulting contours have been demonstrated. In particular, we have discussed how contours can be used to examine the impact of discretisation and autocorrelation. Additionally, we have illustrated how these methods can avoid the underestimation of risk stemming from trends, without the use of excessively conservative strategies. Finally, as part of these examples, we have presented a strategy for computing these contours based on Monte Carlo simulation. As such, the approaches considered are presented as an alternative method for the construction of environmental contours.

Declaration of Competing Interest
The author declares that there is no known competing financial interest or personal relationship that could have appeared to influence the work reported in this paper.

Proposition 2.1. For any two convex sets B and F in R^d such that B ∩ F ⊆ ∂B, there exist some u ∈ S^{d−1} and c ∈ R such that B ⊆ Π−(u, c) and F ⊆ Π+(u, c).
For the continuity we start by showing left-continuity of b ↦ F^u_b(t). Whenever b_n → b with b_n < b_{n+1}, we have by set-continuity of measures and Lemma 5.1 that lim_{n→∞} P(ϕ^u_t ≥ b_n) − P(ϕ^u_t ≥ b) = 0, thus proving the left-continuity of b ↦ F^u_b(t). As for right-continuity, we consider b_n → b with b_n > b_{n+1} and argue analogously.

Figure 2. Non-parametric functions for H

7.3. Simulation. With the d-dimensional marginal distributions of V determined, we can move on to the computation of C_Q. Similarly to the previous example, we define {W_n}^∞_{n=0} as a sequence of independent random variables such that W_n = V_{n∆t} in distribution, for ∆t = 3 hours. This again lets us model V by V̄_t = W_{⌊t/∆t⌋}.
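The piecewise-constant model V̄_t = W_{⌊t/∆t⌋} amounts to a simple index lookup; a minimal sketch (function name ours):

```python
import numpy as np

def piecewise_constant_path(W, dt, t_grid):
    """Evaluate the model V_bar(t) = W[floor(t / dt)] on t_grid, where W
    holds one independent draw per dt-block (rows = blocks)."""
    idx = np.floor(np.asarray(t_grid) / dt).astype(int)
    return np.asarray(W)[idx]
```

In the d-dimensional case W can be an (n, d) array, in which case row indexing returns the vector-valued block states.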

(A) Full Contour (B) Zoomed-In Contour

Table 1. Parameters for H. The functions shown in Figure 2 are dimensionless. Furthermore, we consider t = 0 to occur at the end of the dataset, i.e. the beginning of 2022.
As for P we do the following:
• We first estimate µ(t, h) via E[log(P_t)|H_t] = µ(t, H_t) = m(t) + f_µ(H_t). With this we can fit smoothing splines for m and f_µ by generalised additive model calibration.
• Similarly, σ(t, h) = s(t)f_σ(h) can be computed from the logarithms of the squared residuals log((log(P_t) − µ(t, H_t))²).
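The log-scale fit of σ rests on the moment E[log χ²₁] = ψ(1/2) + log 2 = −(γ + log 2) ≈ −1.27, the constant L referred to above. A sketch of the debiasing and the final rescaling step, with function names of our own choosing:

```python
import numpy as np

# E[log chi^2_1] = psi(1/2) + log 2 = -(Euler-Mascheroni gamma + log 2)
L_CHI2 = -(0.5772156649015329 + np.log(2.0))

def log_sigma2_estimate(residuals):
    """For residuals r = log(P) - mu ~ N(0, sigma^2), log(r^2) has mean
    log(sigma^2) + L_CHI2, so subtracting L_CHI2 debiases the log-scale fit."""
    return np.mean(np.log(residuals ** 2)) - L_CHI2

def rescale_sigma(residuals, sigma):
    """Scale sigma so that the standardised residuals r / sigma have unit
    standard deviation, counteracting log-scale calibration bias."""
    z = residuals / sigma
    return sigma * z.std()
```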