Extreme points and support points of conformal mappings

There are three types of results in this paper. The first extends a representation theorem, due to Brickman and Wilken, for a conformal mapping that omits two values of equal modulus. They constructed a representation as a convex combination with two terms; our representation constructs convex combinations with an arbitrarily large number of terms. In the limit one can think of it as an integration over a probability space with the uniform distribution. The second result determines the sign of $\Re L(\overline{z}_0(f(z))^2)$ up to a remainder term, expressed by an integral that involves the L\"owner chain induced by $f(z)$, for a support point $f(z)$ that maximizes $\Re L$. Here $L$ is a continuous linear functional on $H(U)$, the topological vector space of the holomorphic functions in the unit disk $U=\{z\in\mathbb{C}\,|\,|z|<1\}$. Such a support point is known to be a slit mapping, and $f(z_0)$ is the tip of the slit $\mathbb{C}-f(U)$. The third result demonstrates some properties of the support points of the subspace $S_n$ of $S$, which consists of all the polynomials in $S$ of degree $n$ or less. For instance, such a support point $p(z)$ has a zero of its derivative $p'(z)$ on $\partial U$.


Introduction
Let S := {f ∈ H(U) | f(0) = f′(0) − 1 = 0, f is injective on U := {z ∈ C | |z| < 1}}. This is the family of normalized conformal mappings of the open unit disk U. S is a normal family and a compact subspace of H(U), the space of holomorphic functions on U. The topology is taken to be that of uniform convergence on compact subsets of U; this topology is locally convex on H(U). We recall the following standard definitions.
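As a concrete numerical illustration (our own sketch, not part of the argument), the normalization defining S can be checked for the Koebe function k(z) = z/(1 − z)², the best-known member of S:

```python
def koebe(z: complex) -> complex:
    """Koebe function k(z) = z/(1-z)^2, the canonical member of S."""
    return z / (1 - z) ** 2

# Check the normalization k(0) = 0 and k'(0) = 1 (central difference).
h = 1e-6
deriv_at_0 = (koebe(h) - koebe(-h)) / (2 * h)
assert abs(koebe(0)) < 1e-12
assert abs(deriv_at_0 - 1) < 1e-6
```

The finite-difference step `h` and the name `koebe` are of course our own choices; any f ∈ S satisfies the same two assertions.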
Definition 1.1. Let X be a topological vector space over the field of complex numbers, and let Y be a subset of X.
A point x ∈ Y is called an extreme point of Y if it has no representation as a proper convex combination of two distinct points y and z in Y. A point x ∈ Y is called a support point of Y if there is a continuous linear functional L on X, with Re L not constant on Y, such that Re L(x) = max{Re L(y) | y ∈ Y}.
In this paper we will give an extension of a result of L. Brickman and D. R. Wilken. This result, whose elegant proof is essentially due to Brickman and Wilken, can be found in [1]. See also [2].
Another property we will prove concerns a function f ∈ S that maximizes {L(g) | g ∈ S}, where L is a continuous linear functional on H(U). We prove, for any natural number n ∈ Z⁺ and for any positive real number t ∈ R⁺, an inequality in which f(z₀) is the tip of the monotone slit C − f(U), |z₀| = 1 and f′(z₀) = 0. Here f(z, s) (z ∈ U, s ∈ R⁺) is the Löwner chain generated by the support point f(z). As a general reference we will use the book [3], especially Chapter 9, 275-287, and Chapter 3, 76-113.
In the final section we will prove that properties of the support points f of S, such as that f′ has a zero on the boundary ∂U, are inherited by much smaller subfamilies of S such as S_n, the space of all the polynomials in S of degree n or less (n ∈ Z⁺). Clearly the S_n's are less geometric than S. Nevertheless, the birth of the slit structure of the image already becomes visible in their support points. An important part of geometric function theory is the solution of extremal problems, such as coefficient problems, integral means problems, distortion problems and many others. In order to apply functional-analytic tools it is natural to identify the extreme points of S and its support points. By the Krein-Milman theorem, there is an extreme point of S among the support points associated with each continuous linear functional on H(U). Knowing properties of support points might allow restricting the search for a solution to a much smaller family of points in S than the whole of S. This is one aspect of the importance of such results.

A simple extension of a result of Brickman and Wilken
Here is the result of Brickman and Wilken, [1]: if f ∈ S omits two values α and β of equal modulus, then f is a proper convex combination of two distinct functions in S. The clever proof given by Brickman and Wilken considers the image of f, D = f(U), which omits α and β, α ≠ β. They define an analytic single-valued branch of Ψ(w) = {(w − α)(w − β)}^{1/2} in D and prove that the two functions w + Ψ(w) and w − Ψ(w) are univalent on D and have disjoint images. They then normalize to get two conformal mappings f₁, f₂ that belong to S.
Hence f = t·f₁ + (1 − t)·f₂ with 0 < t < 1, and the elegant proof is done.
Immediate consequences (see [3], Corollary 1 and Corollary 2 on page 287) are that each extreme point of S and each support point of S have the monotonic modulus property. We show how to get more information on f, based on the above nice proof. The two functions w ± Ψ(w) are analytic and injective in D. In fact this is true in every domain that is complementary to two disjoint slits that start respectively at α and at β and extend to infinity. We note that if w ∉ {α, β} then also w ± Ψ(w) ∉ {α, β} (for w ± Ψ(w) = α ⇒ (w − α)² = (w − α)(w − β) ⇒ w = α or w = β). Hence, writing Ψ₁(w) = w + Ψ(w) and Ψ₂(w) = w − Ψ(w), the compositions Ψᵢ ∘ Ψₖ, 1 ≤ i, k ≤ 2, are analytic, single-valued and injective in D and omit {α, β}. These four functions have pairwise disjoint images (if the outer functions differ, this follows from the disjointness of the images of Ψ₁ and Ψ₂; if they coincide, injectivity of the outer function reduces the claim to that same disjointness). We define for 1 ≤ j ≤ 4 and w ∉ {α, β} the four compositions g_j = Ψᵢ ∘ Ψₖ and their normalizations h_j(w) = {g_j(w) − g_j(0)}/g_j′(0). Since Σ_{j=1}^{4} g_j(w) = 4w (see the identity in the proof of Theorem 2.1 below), we have Σ_{j=1}^{4} g_j(0) = 0 and Σ_{j=1}^{4} g_j′(0) = 4. We conclude that if g_j′(0) > 0 for 1 ≤ j ≤ 4, then w = Σ_{j=1}^{4} (g_j′(0)/4)·h_j(w) is a strict convex combination (no zero coefficients) of the h_j, 1 ≤ j ≤ 4. Thus if f ∈ S omits the values α, β so that g_j′(0) > 0 for 1 ≤ j ≤ 4, then f has the representation f = Σ_{j=1}^{4} α_j f_j, where 0 < α_j < 1, Σ_{j=1}^{4} α_j = 1 and the f_j are distinct functions in S that omit non-empty open sets. Here, as in Brickman and Wilken's proof, f_j = h_j ∘ f, 1 ≤ j ≤ 4. So we need to prove that g_j′(0) > 0 for 1 ≤ j ≤ 4. We will once more make use of the assumption |α| = |β| (which was already used by Brickman and Wilken in the first step of our iteration). Let us compute g_j′(0):
g_j′(0) = Ψᵢ′(Ψₖ(0))·Ψₖ′(0) = {1 ∓ Ψ′(∓Ψ(0))}·{1 ∓ Ψ′(0)},
where the ∓ are signs not synchronized with the other sign changes in the expression.

Now we have 1 ∓ Ψ′(0) > 0 by the first step of the iteration. We denote A = Ψ′(±Ψ(0)) and then we need 0 < (1 + A)·(1 − A) with each factor positive. This happens when −1 < A < 1, and so when −1 < Ψ′(±Ψ(0)) < 1. This means that it suffices that |±Ψ(0) − α| = |±Ψ(0) − β| (which makes Ψ′(±Ψ(0)) real of modulus less than 1), and this is indeed the case when |α| = |β|. This proves the case n = 2 in our general theorem below.
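The key fact just used, that Ψ′(±Ψ(0)) is real and of modulus less than 1 whenever |α| = |β|, can be spot-checked numerically. The following sketch is our own (the names `psi_prime` etc. are hypothetical); note that the branch chosen by `cmath.sqrt` only flips the sign of the quotient, which affects neither realness nor modulus:

```python
import cmath
import random

def psi_prime(w, alpha, beta):
    # Derivative of Psi(w) = ((w - alpha)(w - beta))^(1/2):
    # Psi'(w) = (2w - alpha - beta) / (2 Psi(w)).
    return (2 * w - alpha - beta) / (2 * cmath.sqrt((w - alpha) * (w - beta)))

random.seed(0)
for _ in range(100):
    r = random.uniform(0.5, 2.0)
    alpha = r * cmath.exp(1j * random.uniform(0, 2 * cmath.pi))
    beta = r * cmath.exp(1j * random.uniform(0, 2 * cmath.pi))
    if abs(alpha - beta) < 1e-3:          # the theorem assumes alpha != beta
        continue
    for X in (cmath.sqrt(alpha * beta), -cmath.sqrt(alpha * beta)):
        # X = ±Psi(0) is equidistant from alpha and beta when |alpha| = |beta|.
        assert abs(abs(X - alpha) - abs(X - beta)) < 1e-9
        A = psi_prime(X, alpha, beta)
        assert abs(A.imag) < 1e-9 and abs(A.real) < 1.0
```

The first assertion inside the loop checks the geometric reason behind the realness: ±Ψ(0) = ±(αβ)^{1/2} lies on the perpendicular bisector of the segment [α, β].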

Theorem 2.1. Let n ∈ Z⁺ and let f ∈ S omit two values α ≠ β with |α| = |β|. Then f has a representation as a strict convex combination f = Σ_{j=1}^{2ⁿ} α_j·f_j, where 0 < α_j < 1, Σ_{j=1}^{2ⁿ} α_j = 1, and the f_j are 2ⁿ different functions in S that omit (each) open non-empty sets.
Proof. We denote D = f(U) and we assume that α, β ∉ D, α ≠ β, |α| = |β|. We define an analytic and single-valued function in D by Ψ(w) = {(w − α)(w − β)}^{1/2} and denote two more functions in D, Ψ₁(w) = w + Ψ(w), Ψ₂(w) = w − Ψ(w). We will define a sequence of n sequences of functions; in the j'th sequence there will be 2ʲ functions. The first sequence is: g₁₁ = Ψ₁, g₁₂ = Ψ₂. We now assume that j > 1 and that the (j − 1)'st sequence is g_{(j−1)k}, k = 1, 2, ..., 2^{j−1}. Then the j'th sequence is:
g_{j(2k−1)} = Ψ₁ ∘ g_{(j−1)k},  g_{j(2k)} = Ψ₂ ∘ g_{(j−1)k},  k = 1, 2, ..., 2^{j−1}.
The functions in our sequences are injective (each g_{jk} is a composition of injective functions). The functions within the same sequence have pairwise disjoint images. For if g_{jk}(w₁) = g_{jl}(w₂) then there can be only two possibilities: either the outer functions are the two different Ψ₁ and Ψ₂, which is impossible since their images are disjoint, or the outer function is the same Ψ_m. But Ψ_m is injective and hence g_{(j−1)k′}(w₁) = g_{(j−1)l′}(w₂), and we use induction.
In particular, for the n'th sequence we have: the functions g_{nk}, 1 ≤ k ≤ 2ⁿ, are analytic, single-valued, injective and have pairwise disjoint images in D ⊆ C − {α, β}. Also we have Σ_{k=1}^{2ⁿ} g_{nk}(w) = 2ⁿ·w.
For this we can use an inductive argument as follows:
Σ_{k=1}^{2ʲ} g_{jk}(w) = Σ_{k=1}^{2^{j−1}} {Ψ₁(g_{(j−1)k}(w)) + Ψ₂(g_{(j−1)k}(w))} = 2·Σ_{k=1}^{2^{j−1}} g_{(j−1)k}(w) = 2·2^{j−1}·w = 2ʲ·w.
In particular Σ_{k=1}^{2ⁿ} g_{nk}(0) = 0 and Σ_{k=1}^{2ⁿ} g_{nk}′(0) = 2ⁿ. We conclude that if g_{nj}′(0) > 0 for 1 ≤ j ≤ 2ⁿ, then w = Σ_{j=1}^{2ⁿ} 2^{−n} g_{nj}′(0)·h_j(w), with h_j(w) = {g_{nj}(w) − g_{nj}(0)}/g_{nj}′(0), is the usual convex combination with positive coefficients of the h_j(w)'s. If this is the case, we define α_j = 2^{−n} g_{nj}′(0), f_j = h_j ∘ f, 1 ≤ j ≤ 2ⁿ, and we get the convex representation f = Σ_{j=1}^{2ⁿ} α_j·f_j that we were looking for. For it is obvious that each f_j ∈ S, and those functions omit open non-empty sets, for the h_j do, because the g_{nj}'s have pairwise disjoint images. Thus we need to prove that g_{nj}′(0) > 0 for 1 ≤ j ≤ 2ⁿ. This follows by induction and by the assumption that |α| = |β|, α ≠ β. We note the following, where g_{nj} = Ψ_s ∘ g_{(n−1)k}:
g_{nj}′(0) = Ψ_s′(g_{(n−1)k}(0))·g_{(n−1)k}′(0) = {1 ± Ψ′(g_{(n−1)k}(0))}·g_{(n−1)k}′(0).
We elaborate a bit more on this final part of the proof: the proof that g_{nj}′(0) > 0 is inductive (on n). It is convenient to denote X_n = g_{nj}(0) (j is fixed along a chain of compositions) and the induction assumption is that |X_{n−1} − α| = |X_{n−1} − β|. By X_n = X_{n−1} ± (X_{n−1} − α)^{1/2}(X_{n−1} − β)^{1/2} we get
X_n − α = (X_{n−1} − α)^{1/2}{(X_{n−1} − α)^{1/2} ± (X_{n−1} − β)^{1/2}},  X_n − β = (X_{n−1} − β)^{1/2}{(X_{n−1} − β)^{1/2} ± (X_{n−1} − α)^{1/2}},
and hence |X_n − α| = |X_n − β| for all n. Hence Ψ′(X_n) = {(X_n − α) + (X_n − β)}/(2{(X_n − α)(X_n − β)}^{1/2}) is real of modulus less than 1, and we conclude that indeed −1 < Ψ′(g_{nj}(0)) < 1.
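The identity Σ_{k=1}^{2ⁿ} g_{nk}(w) = 2ⁿ·w is easy to test numerically, because each value v at level j − 1 contributes the pair v + Ψ(v) and v − Ψ(v), whose sum is 2v for any consistent branch of the square root. A minimal sketch (our own, with hypothetical names):

```python
import cmath

def level_values(w, alpha, beta, n):
    """Values g_{nk}(w), k = 1..2^n, built by the recursion of Theorem 2.1:
    each value v at level j-1 spawns v + Psi(v) and v - Psi(v)."""
    psi = lambda v: cmath.sqrt((v - alpha) * (v - beta))
    vals = [w]
    for _ in range(n):
        vals = [x for v in vals for x in (v + psi(v), v - psi(v))]
    return vals

alpha, beta = cmath.exp(0.3j), cmath.exp(2.1j)   # |alpha| = |beta| = 1
w, n = 0.25 + 0.1j, 4
vals = level_values(w, alpha, beta, n)
assert len(vals) == 2 ** n
assert abs(sum(vals) - 2 ** n * w) < 1e-9        # sum identity of the proof
```

Since the pairwise cancellation is exact, the check passes regardless of which branch `cmath.sqrt` selects at each node.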
The construction in the proof of Theorem 2.1 applies to any natural number n ∈ Z⁺. A natural question is whether, when n → ∞, it converges to some kind of, say, an integral representation of the function f ∈ S that omits {α, β}, where as usual α ≠ β, |α| = |β|. To start with, we inquire whether a recursion such as ours, g_{k+1}(w) = w + Ψ(g_k(w)) or g_{k+1}(w) = w − Ψ(g_k(w)) (the sign is chosen at each stage arbitrarily), converges. We first try to solve for g in g(w) = w + Ψ(g(w)) or g(w) = w − Ψ(g(w)). We immediately note the following.

Proposition 2.2. Let us consider the functions that result by applying finitely many times recursions of the form g₀(w) = w, g_{k+1}(w) = w ± Ψ(g_k(w)), k = 0, 1, 2, ..., where at each step the sign + or − is chosen arbitrarily. Then all the resulting functions have a unique fixed-point which is the same for all of them. This fixed-point is the rational function
g(w) = (w² − αβ)/(2w − α − β).

Indeed, squaring the fixed-point equation g = w ± Ψ(g) gives (g − w)² = (g − α)(g − β) regardless of the sign. This last equation is linear in g, −2wg + w² = −(α + β)g + αβ, and its (unique) solution is the rational function above.
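The candidate fixed-point g(w) = (w² − αβ)/(2w − α − β) can be verified directly against the squared, sign-free equation (g − w)² = (g − α)(g − β). A small numerical check (our own sketch):

```python
import random

def fixed_point(w, alpha, beta):
    # Unique solution of (g - w)^2 = (g - alpha)(g - beta), the squared
    # (hence sign-independent) form of g = w ± Psi(g).
    return (w * w - alpha * beta) / (2 * w - alpha - beta)

random.seed(1)
for _ in range(100):
    alpha = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    beta = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    w = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    if abs(2 * w - alpha - beta) < 0.1:      # avoid the pole of g
        continue
    g = fixed_point(w, alpha, beta)
    assert abs((g - w) ** 2 - (g - alpha) * (g - beta)) < 1e-8
```

The cancellation is an algebraic identity, so the residual is pure floating-point noise.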
The same is true when we solve the fixed-point equation for higher members of the recursion. For example, solving g = w ± Ψ(w ± Ψ(g)) is independent of the sign choices and leads to the same rational function. We conclude this section by noting that, in passing from the sum of 2ⁿ elements, Σ_{j=1}^{2ⁿ} g_{nj}′(0)h_j(w) = 2ⁿ·w, to the next sum, that of 2^{n+1} elements, Σ_{j=1}^{2^{n+1}} g_{(n+1)j}′(0)h_j(w) = 2^{n+1}·w, each element in the former sum gave birth to two descendants, w + Ψ(g_{nj}(w)) and w − Ψ(g_{nj}(w)). So in a sense, each of the elements in a particular sum (say the one with 2ⁿ elements) developed from a well-defined chain of elements in the former (smaller) sums, in a way that resembles partial sums in a series development. When n → ∞ we can interpret our recursive process as integrating over this multitude of elements, which can be thought of as the values of a random variable on a probability space with the uniform distribution.

One more property of support points of S
We recall that the space H(U) is a locally convex linear topological space, and that the family S ⊂ H(U) of normalized conformal mappings is a compact topological subspace of H(U); the topology is that of uniform convergence on compact subsets. If f ∈ S is a support point of S that corresponds to the continuous linear functional L on H(U), then by the definition L(f) = max_{g∈S} L(g). The complement of the image of f, Γ = C − f(U), is an analytic curve having the property of increasing modulus and having the π/4-property, i.e. the angle between the tangent to Γ at each of its points and the segment that connects the origin to that point is at most π/4. Moreover, Γ has an asymptotic direction at ∞. There is a point z₁ ∈ ∂U such that f is analytic on the closed disk minus {z₁} and has a pole of order 2 at z = z₁. Also, if w₀ is the tip of the slit Γ, then there is a point z₀ ∈ ∂U such that w₀ = f(z₀), and f′(z₀) = 0. If the functional L is not constant on S (as we assume throughout) then L(f) ≠ 0, as is well known. In fact this was used in order to prove that the slit has an asymptotic direction at infinity. See [4]; also [3], Theorem 10.4 on page 307 and Theorem 10.5 on page 311. It is here that we go further and prove a family of inequalities that involve L(z̄₀ʲ(f(z))^{j+1}).
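These structural properties can be seen concretely on the Koebe function k(z) = z/(1 − z)², a support point of S for the second-coefficient functional L(f) = a₂(f) (Bieberbach's inequality a₂ ≤ 2): its omitted set is the radial slit (−∞, −1/4], the tip −1/4 = k(−1) satisfies k′(−1) = 0, and k has a pole of order 2 at z = 1. A quick numerical confirmation (our own sketch, not part of the proof):

```python
def koebe(z: complex) -> complex:
    # Koebe function, extremal for L(f) = a_2(f) on S.
    return z / (1 - z) ** 2

# Tip of the slit: w0 = k(-1) = -1/4, with z0 = -1 on the unit circle.
assert abs(koebe(-1) - (-0.25)) < 1e-12

# k'(-1) = 0: the derivative vanishes at the pre-image of the tip.
h = 1e-5
deriv = (koebe(-1 + h) - koebe(-1 - h)) / (2 * h)
assert abs(deriv) < 1e-4

# Pole of order 2 at z = 1: (1 - z)^2 * k(z) = z stays bounded as z -> 1.
for eps in (1e-3, 1e-4, 1e-5):
    z = 1 + eps * 1j
    assert abs((1 - z) ** 2 * koebe(z) - z) < 1e-12
```

The tolerances are generous; all three facts are exact identities for k.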

Theorem 3.1. Let L be a continuous linear functional on H(U) which is not constant on S. Let f ∈ S satisfy the equation L(f) = max_{g∈S} L(g), and suppose that |z₀| = 1, f′(z₀) = 0. Then for any natural number n ∈ Z⁺ and for any positive real number t ∈ R⁺ the inequality stated in the introduction holds.

Proof. Since Γ = C − f(U) is a slit, we can embed f inside a Löwner chain. We briefly recall this standard procedure (see Chapter 3 in [3], 76-92). One chooses a parametric representation of Γ, w = Ψ(t), 0 ≤ t < ∞, so that Ψ(0) = f(z₀) and Ψ(s) ≠ Ψ(t) for s ≠ t. Also, if Γ_t is the tail of Γ from Ψ(t) to ∞, then g(z, t) is the Riemann mapping of U onto C − Γ_t such that g(0, t) = 0 and g′(0, t) > 0; the parametrization of Γ can be chosen so that g′(0, t) = eᵗ. We define f(z, t) = g(z, t). Then f(z, t) is called a Löwner chain and it satisfies Löwner's partial differential equation
∂f(z, t)/∂t = z·(∂f(z, t)/∂z)·(1 + k(t)z)/(1 − k(t)z),  |k(t)| = 1,
where the limit defining ∂f/∂t is uniform on compact subsets of U. The point 1/k(t) = k(t)̄ is that point on ∂U that is mapped by f(z, t) onto the tip of Γ_t. We note that e⁻ᵗ f(z, t) ∈ S, 0 ≤ t < ∞, and so:
L(e⁻ᵗ f(z, t)) ≤ L(f).  (2)
On the other hand we have e⁻ᵗ f(z, t) − f(z) = ∫₀ᵗ h(z, s) ds, where h(z, s) ds = d(e⁻ˢ f(z, s)). Integrating between t and ∞, and using lim_{s→∞} e⁻ˢ f(z, s) = z, we obtain the identity z − e⁻ᵗ f(z, t) = ∫ₜ^∞ h(z, s) ds. The last integral is o(e⁻ᵗ) for t → ∞. Hence, using the inequality in (2), we obtain the asserted family of inequalities. This proves the theorem.
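For the Koebe chain f(z, t) = eᵗz/(1 − z)², the chain generated by the Koebe function (whose slit is (−∞, −1/4]), the Löwner equation holds with the constant driving function k(t) ≡ −1, so that 1/k(t) = −1 is the pre-image of the tip for all t. A finite-difference verification (our own sketch; the names are hypothetical):

```python
import cmath

def chain(z: complex, t: float) -> complex:
    # Koebe Loewner chain f(z, t) = e^t z / (1 - z)^2.
    return cmath.exp(t) * z / (1 - z) ** 2

k = -1.0                    # constant driving function; 1/k = -1 is the tip pre-image
z, t, h = 0.3 + 0.2j, 0.7, 1e-6

# Central differences for the two sides of Loewner's PDE.
df_dt = (chain(z, t + h) - chain(z, t - h)) / (2 * h)
df_dz = (chain(z + h, t) - chain(z - h, t)) / (2 * h)
rhs = z * df_dz * (1 + k * z) / (1 - k * z)
assert abs(df_dt - rhs) < 1e-6

# Normalization check: e^{-t} f(z, t) belongs to S (derivative 1 at 0).
d0 = (chain(h, t) - chain(-h, t)) / (2 * h) * cmath.exp(-t)
assert abs(d0 - 1) < 1e-6
```

The sample point z and time t are arbitrary; the identity holds for every z ∈ U and t ≥ 0.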
Remark 3.2. It is not possible to deduce from the inequality in Theorem 3.1 the corresponding limit statement as t → ∞, as the author wrongly thought in the first version of this paper. The author thanks the referee for his remark and insight on that matter. Here is a simple example that shows that such an inequality cannot be true.

Properties of support points are inherited by less geometric families of mappings
If f ∈ S is a support point of S that corresponds to the continuous linear functional L on H(U), then Γ = C − f(U) is an analytic curve, called the slit of f. It starts at its tip w₀ and monotonically extends to infinity. The tip w₀ has a single pre-image z₀ on ∂U, w₀ = f(z₀), and f′(z₀) = 0. In this section we will see that the tip of the slit, w₀, already makes its appearance for support points of families of univalent polynomials.
Definition 4.1. Let n ∈ Z⁺ be a natural number. The family of all the polynomials in S of degree n or less will be denoted by S_n. We note that S_n is a compact subspace of H(U). Here is our result.

Theorem 4.2. Let n ≥ 2 and let p ∈ S_n be a support point of S_n for a continuous linear functional L on H(U) that is not constant on S_n. Then the derivative p′(z) has a zero on the boundary ∂U.

Proof. Let us suppose that L is not constant on S_n. Then there are two polynomials q₁, q₂ ∈ S_n such that q₁ − q₂ ∉ C and L(q₁) ≠ L(q₂). Using the normalization of elements in S_n we have q₁(0) = q₂(0) = 0 and q₁′(0) = q₂′(0) = 1. Hence q := q₁ − q₂ satisfies q(0) = q′(0) = 0 and L(q) = L(q₁ − q₂) = L(q₁) − L(q₂) ≠ 0. We proceed with a Rouché type of principle for injectivity.

Lemma 4.4. If f(z) = z + a₂z² + ... ∈ S is analytic in a neighborhood of the closed unit disk and f′(z) does not have a zero on ∂U, then for any function g(z) analytic in a neighborhood of the closed unit disk there exists a δ > 0 (depending on g) such that if |w₀| < δ, then f(z) + w₀·g(z) is injective on U.
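The first non-trivial case n = 2 makes the statement of the theorem concrete. It is classical that p(z) = z + a₂z² is univalent on U exactly when |a₂| ≤ 1/2, and the extremal polynomial of S₂ for the functional Re a₂ is p(z) = z + z²/2, whose derivative p′(z) = 1 + z indeed vanishes at z = −1 ∈ ∂U. A numerical illustration of both halves of this picture (our own sketch):

```python
def p(z: complex) -> complex:
    # p(z) = z + z^2/2: the extremal polynomial in S_2 for Re a_2.
    return z + 0.5 * z * z

# Its derivative p'(z) = 1 + z vanishes at z = -1, a point of the unit circle.
root = -1.0
assert abs(1 + root) < 1e-12 and abs(abs(root) - 1.0) < 1e-12

# Beyond |a_2| = 1/2 injectivity on U fails: z + a z^2 = u + a u^2 whenever
# z + u = -1/a.  With a = 0.6 both colliding points lie inside U.
a = 0.6
z1 = -1 / (2 * a) + 0.3j      # z1 + z2 = -1/a
z2 = -1 / (2 * a) - 0.3j
assert abs(z1) < 1 and abs(z2) < 1
assert abs((z1 + a * z1 ** 2) - (z2 + a * z2 ** 2)) < 1e-12
```

The collision formula follows from factoring p_a(z) − p_a(u) = (z − u)(1 + a(z + u)), which also explains why |a₂| ≤ 1/2 is the exact univalence threshold for degree-2 polynomials.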