Oligopoly competition between satellite constellations will reduce economic welfare from orbit use

Significance: Orbital space is rapidly concentrating among a few commercial operators who manage large coordinated fleets of satellites. Managing these fleets safely requires conducting a high number of collision avoidance maneuvers. Proposed management strategies for these systems have been primarily technological, with less attention to the impacts of economic competition on orbit use. Using a coupled physico-economic model, we show that imperfect competition between satellite operators will reduce economic welfare and distort orbital-use patterns relative to optimal public utility systems. These results highlight the need for regulatory policies promoting efficient orbit use in the public interest.


Model overview
We model competition between firms for orbital space and telecommunications market share as a two-stage game. Constellation operators are indexed by i = F, L. In the first stage, they compete in a sequential-move (Stackelberg) game to choose their constellations' altitudes and sizes. Firm L (the Leader) deploys its constellation first and firm F (the Follower) deploys second. Both players are rational, so the Follower best-responds to the Leader's choice and the Leader anticipates the Follower's best response when designing its constellation. With their constellations deployed, in the second stage, firms play a simultaneous-move price-setting game to compete for demand from a mass of N consumers. Deploying a constellation means choosing a number of satellites Q_i to maintain and a mean orbital altitude h_i at which to place them. These design choices determine the overall service quality x_i. Service quality is a function of the constellation's global Earth coverage α_i, congestion loss β_i, latency L_i, and bandwidth S_i. Earth coverage is increasing in altitude and constellation size. Congestion loss is increasing in the constellations' sizes and proximity to each other. Latency is increasing in altitude. Bandwidth is increasing in constellation size.
The annualized unit cost of manufacturing, deploying, and maintaining a satellite at altitude h_i, inclusive of ground stations and other support infrastructure, is C(h_i). We assume it is first decreasing in altitude (for h_i < h) and then increasing (for h_i > h). This is explained by the countervailing facts that a higher altitude requires more lift energy at launch, while a lower altitude requires more fuel during the operational lifetime to offset drag [De Pater and Lissauer, 2015]. The annualized total cost of a constellation is C(h_i)Q_i.
Consumers purchase one unit of telecommunications service and are characterized by their preference for quality, θ. A type θ consumer has utility u(θ, i) = θx_i − p_i from purchasing satellite service from operator i, given service quality x_i and price p_i. We assume θ is uniformly distributed between θ and 1 + θ in the population.
In the following sections, we construct the coupled physico-economic model from a combination of physical and economic first principles and empirical data.

Model components and calibration
In this section we describe the components of the model, their calibration, and the solution concepts and algorithms in more detail.

Consumer demand and firm pricing
Consumers choose between purchasing one unit of service from the Leader or the Follower. The mass of N consumers is uniformly distributed in preference for satellite service quality, θ, over [θ, 1 + θ]. We assume that θ is large enough that all targeted consumers are willing to purchase satellite service.
Assume that, in equilibrium, the Leader offers a better quality than the Follower (x_L > x_F). We will show that the firm with the higher quality charges a higher price (p_L > p_F). The consumer who is indifferent between the two services has preference parameter θ* such that

θ* x_L − p_L = θ* x_F − p_F,

which can be solved to yield

θ* = (p_L − p_F) / (x_L − x_F).

If θ < θ* < 1 + θ, the demands for each service are

D_L = N(1 + θ − θ*),   D_F = N(θ* − θ).

The firms set prices simultaneously to maximize their individual profits,

max_{p_L} p_L D_L   and   max_{p_F} p_F D_F.

Their first-order conditions are

1 + θ − (2p_L − p_F)/(x_L − x_F) = 0   and   (p_L − 2p_F)/(x_L − x_F) − θ = 0,

yielding (Nash equilibrium) prices and demands

p_L = (2 + θ)(x_L − x_F)/3,   (10)
p_F = (1 − θ)(x_L − x_F)/3,   (11)
D_L = N(2 + θ)/3,   (12)
D_F = N(1 − θ)/3.   (13)

The optimized profits are

π_L = N(2 + θ)²(x_L − x_F)/9,   (14)
π_F = N(1 − θ)²(x_L − x_F)/9.   (15)

Notice that the equilibrium prices in equations 10 and 11 and the equilibrium profits in equations 14 and 15 are increasing in the degree of quality differentiation between the two firms (i.e. p_i and π_i are increasing in x_L − x_F). Thus, constellation operators face incentives to differentiate their service offerings when choosing the constellation design parameters which determine quality (altitude and size). This incentive raises the potential that firms over-differentiate in equilibrium.
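Under the uniform-type specification above, the pricing subgame has closed-form solutions. The sketch below (with placeholder quality levels, not calibrated values) computes the equilibrium and verifies it against a brute-force best-response check:

```python
import numpy as np

# Closed-form duopoly pricing equilibrium for vertical differentiation with
# types uniform on [theta_lo, 1 + theta_lo]. Quality levels x_L > x_F are
# illustrative placeholders, not calibrated values from the paper.
def pricing_equilibrium(x_L, x_F, theta_lo, N):
    dx = x_L - x_F
    p_L = (2 + theta_lo) * dx / 3
    p_F = (1 - theta_lo) * dx / 3
    theta_star = (p_L - p_F) / dx          # indifferent consumer
    D_L = N * (1 + theta_lo - theta_star)  # demand for the high-quality firm
    D_F = N * (theta_star - theta_lo)      # demand for the low-quality firm
    return p_L, p_F, theta_star, D_L, D_F

p_L, p_F, ts, D_L, D_F = pricing_equilibrium(x_L=2.0, x_F=1.0, theta_lo=0.5, N=1e7)

# The indifferent consumer gets equal utility from both services.
assert np.isclose(ts * 2.0 - p_L, ts * 1.0 - p_F)

# Check the first-order conditions by brute force: no unilateral price
# deviation on a fine grid raises either firm's revenue.
def rev_L(p):  # Leader revenue, holding p_F fixed
    t = (p - p_F) / 1.0
    return p * 1e7 * (1.5 - t)

def rev_F(p):  # Follower revenue, holding p_L fixed
    t = (p_L - p) / 1.0
    return p * 1e7 * (t - 0.5)

grid = np.linspace(0.01, 2.0, 2000)
assert rev_L(p_L) >= max(rev_L(p) for p in grid)
assert rev_F(p_F) >= max(rev_F(p) for p in grid)
print(p_L, p_F, ts)
```

The brute-force check confirms that neither firm can profit from a unilateral price deviation, i.e. the closed forms are mutual best responses.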

Constellation service quality and cost
Service quality depends on the constellations' availability, coverage, latency, and bandwidth. We show below that these are determined not only by a constellation's own altitude and size, but also by the other constellation's altitude and size. For tractability, we assume that consumers are uniformly distributed over the Earth. See Osoro and Oughton [2021] for a more detailed analysis of how nonuniformities in the geographical distribution of consumers affect the distribution of service quality at different locations. We give some brief intuition below for how these features may matter in a model like ours.
It is important to note that the distribution of consumers is non-uniform in three senses: geographically (population densities vary according to location on the globe), economically (consumer characteristics vary according to location on the globe), and temporally (the same location has different demand characteristics at different times).Since low-Earth orbit (LEO) satellites revolve around the Earth, designing a constellation to only serve specific customers at specific times (e.g.where and when geographic and economic features are favorable) is challenging if not impossible.Still, these non-uniformities may lead to deviations from our modeling results in practice.
On one hand, our model implicitly accounts for some of these non-uniformities. The distribution of consumer preferences (i.e. θ) we use may already represent economic heterogeneity within and between regions. For example, to the extent that high-WTP consumers are concentrated in particular areas (such as the global North), our model allows the firms to target those consumers through service offering characteristics. Similarly, to the extent that demand is lower at night, peak throughput (bandwidth per consumer) for subscribers may be higher then than during the day, but average or worst-case peak throughput can still be calculated as in equation 25 below. On the other hand, some non-uniformities may more substantially affect characteristics that are integrated over physical space (e.g. bandwidth per consumer). For example, a uniform distribution will overstate the bandwidth available in more-densely populated regions and understate it in more-sparsely populated regions. However, this is unlikely to substantially affect our main conclusions or estimates, given that the estimates are at the annual level and that constellation operators will likely target consumers living in sparsely populated areas not served by terrestrial means, a more homogeneously-distributed group than the consumer population at large.

Service quality
We assume that consumers evaluate telecommunications service from a constellation i along three dimensions: availability, latency and bandwidth. The willingness-to-pay (WTP) x_i of a representative consumer (i.e. with type θ = 1) for a service with availability α_i, latency L_i, and bandwidth S_i will have the form

x_i = F(α_i) G(L_i, S_i),

where F is a function reflecting the fraction of WTP preserved given partial service availability and G is a function reflecting WTP for fully-available service. We assume that F and G satisfy the following conditions:
1. Consumers are willing to pay nothing for service which is never available: F(0) = 0;
2. Consumers prefer greater availability: F_α(α) > 0 for all α ∈ [0, 1];
3. A representative consumer (i.e. with type θ = 1) is willing to pay G(L, S) when service with latency L and bandwidth S is always available: F(1) = 1;
4. Consumers prefer less latency and higher bandwidth: G_L(L, S) < 0 and G_S(L, S) > 0 for all L ≥ 0 and S ≥ 0.
We assume consumers have positive WTP for a service with latency no greater than L, with bandwidth preference parameter a_S and latency preference parameter a_L. These parameters are calibrated such that a typical consumer's WTP for the service is 1500 $/year. The specifications we use are defined for α ∈ [0, 1], L ≥ 0, and S ≥ 0. The specification of F implies that consumers strongly prefer higher availability. The specification of G implies that, ceteris paribus, consumers have constant marginal WTP for lower latency and diminishing marginal WTP for larger bandwidth. Latency and bandwidth are assumed to contribute multiplicatively to overall WTP. That is, while consumers are willing to substitute between latency and bandwidth to some extent, there are also complementarities between the two. a_S has units of bandwidth squared ((Mb/s)²), L has units of latency (ms), and a_L has units of WTP per unit latency ($/ms).
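As a purely illustrative construction (our own, not necessarily the paper's calibrated functional forms), one pair satisfying conditions 1 through 4 and the stated curvature properties is F(α) = α² and G(L, S) = a_L(Lmax − L)·S/√(S² + a_S), with Lmax the latency threshold:

```python
import math

# Hypothetical specifications satisfying conditions 1-4: F is convex, so
# consumers "strongly prefer" higher availability; G has constant marginal
# WTP in lower latency and diminishing marginal WTP in bandwidth, with the
# two entering multiplicatively. Parameter values are placeholders only.
a_L, a_S, L_max = 6.0, 50.0 ** 2, 250.0   # $/ms, (Mb/s)^2, ms

def F(alpha):
    return alpha ** 2

def G(L, S):
    return a_L * (L_max - L) * S / math.sqrt(S ** 2 + a_S)

def wtp(alpha, L, S):
    return F(alpha) * G(L, S)

# Condition checks
assert F(0) == 0 and F(1) == 1                   # conditions 1 and 3
assert F(0.9) > F(0.5)                           # condition 2: F increasing
assert G(30.0, 75.0) > G(40.0, 75.0)             # condition 4: less latency preferred
assert G(30.0, 100.0) > G(30.0, 75.0)            # condition 4: more bandwidth preferred
# Diminishing marginal WTP in bandwidth
assert G(30, 20) - G(30, 10) > G(30, 110) - G(30, 100)
print(wtp(1.0, 25.0, 75.0))
```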
We calibrate a_L, a_S and L to be consistent with the following data: (i) a standard internet service provides a latency of 25 ms, a bandwidth of 75 Mb/s, and is available 100% of the time; (ii) a representative consumer is willing to pay a maximum of 1500 $/year to subscribe to a standard internet service; (iii) the willingness to pay for lower latency is about 6 $/year/ms [Liu et al., 2018]; (iv) the willingness to pay for increased bandwidth amounts to 168 $/year from 4 Mb/s to 10 Mb/s, and 288 $/year from 10 Mb/s to 25 Mb/s [Liu et al., 2018].
The values of a_L, a_S, and L chosen to approximately match these targets are listed in Table S2. Figure S1 gives a graphical representation of the marginal WTP of a representative consumer (θ = 1) for a fully-available (F ≡ 1) service supplying latency L (x-axis) and bandwidth S (y-axis). Later, we will conduct a sensitivity analysis with respect to the preference parameters a_L, a_S and L. In order to obtain relevant and comparable outcomes, we will restrict our attention to sets of parameters satisfying requirements 1, 2 and 3 above.

Earth Coverage
We assume that each satellite of a constellation with average altitude h_i can cover a circle of radius h_i tan(φ/2) and area πh_i² tan(φ/2)², where φ is the average angle of the beam from the satellite to the surface. The surface area of the Earth is 4πR², so the minimum number of satellites needed to fully cover the Earth at any instant is approximately

Q_i^min = 4R² / (h_i² tan(φ/2)²).

The constellation consists of Q_i satellites. However, at a given instant, due to avoidance maneuvers, β_i Q_i satellites are "turned off" (not providing service) on average (see below for the definition and calculation of β_i). If the remaining constellation is smaller than the minimum covering number ((1 − β_i)Q_i < Q_i^min), each operational satellite will serve an area of πh_i² tan(φ/2)² and the system will have gaps in coverage (i.e. areas which are not covered at some point of the day). If the remaining constellation is larger than the minimum covering number ((1 − β_i)Q_i ≥ Q_i^min), the Earth is fully covered at all times. The fraction of the Earth covered at any instant is

α_i = min{1, (1 − β_i)Q_i / Q_i^min}.

We calibrate the average beam angle φ so that the minimum covering numbers at the approximate mean altitudes of Starlink and OneWeb (550 km and 1,200 km) are close to full coverage at their approximate current (Starlink, 3,351 satellites) and planned (OneWeb, 648 satellites) sizes. The value is listed in Table S1.
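A minimal sketch of the coverage calculation. The beam angle φ is an assumption for illustration (the calibrated value is in Table S1); here we back one out so that roughly 3,351 satellites at 550 km give full instantaneous coverage:

```python
import math

# Minimum covering number and instantaneous coverage fraction.
R = 6371.0  # km, Earth radius

def q_min(h, phi):
    # Earth surface area 4*pi*R^2 divided by per-satellite footprint
    return 4 * R ** 2 / (h ** 2 * math.tan(phi / 2) ** 2)

def coverage(Q, beta, h, phi):
    # Fraction of the Earth covered by the (1 - beta)*Q operational satellites
    return min(1.0, (1 - beta) * Q / q_min(h, phi))

# Assumed calibration: full coverage at 550 km with ~3,351 satellites.
phi = 2 * math.atan(2 * R / (550.0 * math.sqrt(3351)))

print(q_min(550, phi))               # ~3,351 by construction
print(q_min(1200, phi))              # a few hundred satellites suffice here
print(coverage(648, 0.0, 1200, phi)) # OneWeb's planned size is close to full
```

With this assumed φ, the minimum covering number at 1,200 km comes out near OneWeb's planned 648 satellites, consistent with the calibration logic described above.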

Latency
Consider the area that an operational satellite serves as a circle of radius r_i centered directly beneath it. Average signal latency is determined primarily by the average distance between consumers in the market and the satellite. Assuming that consumers are uniformly distributed over this area, the relevant distances are the lengths of the straight lines connecting each consumer to the satellite.
From the Pythagorean theorem, the distance between a satellite at altitude h_i and a consumer located at a distance x from the center of the market is √(x² + h_i²). Integrating over the radius of the market under a spatially-uniform consumer distribution, we obtain an average distance of

d_i = (2 / (3 r_i²)) [ (r_i² + h_i²)^{3/2} − h_i³ ],

where r_i is the radius of the area served by an operational satellite. If the signal travels at average speed v, the average time-of-flight for a single trip is d_i/v. Assuming that the signal makes λ trips and the electronic systems introduce a minimum latency of µ, the average latency for a constellation at altitude h_i and size Q_i is then

L_i = µ + λ d_i / v.

We calibrate λ and µ so that the implied latencies for the Starlink and OneWeb constellations are consistent with publicly available data on their service offerings [Ookla, 2022, Brodkin, 2019]. The parameter values are listed in Table S1. The implied latencies for Starlink and OneWeb are about 34 ms and 38 ms, respectively.
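The average-distance formula can be checked against direct numerical integration; λ, µ, and v below are placeholders rather than the Table S1 values:

```python
import math
from scipy.integrate import quad

# Average straight-line distance from a satellite at altitude h to consumers
# uniformly distributed on a disk of radius r beneath it (units: km).
def avg_distance(h, r):
    # Closed form from integrating sqrt(x^2 + h^2) against density 2x / r^2
    return 2.0 / (3.0 * r ** 2) * ((r ** 2 + h ** 2) ** 1.5 - h ** 3)

def avg_distance_numeric(h, r):
    # Radial density of a uniform distribution on a disk is 2x / r^2
    val, _ = quad(lambda x: (2 * x / r ** 2) * math.sqrt(x ** 2 + h ** 2), 0, r)
    return val

h, r = 550.0, 220.0
assert abs(avg_distance(h, r) - avg_distance_numeric(h, r)) < 1e-6

# Latency = electronics floor mu plus lam signal trips at speed v.
# lam, mu, v are hypothetical placeholder values, not the calibrated ones.
def latency(h, r, lam=4, mu=10.0, v=300.0):  # v in km/ms
    return mu + lam * avg_distance(h, r) / v

print(avg_distance(h, r), latency(h, r))
```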

Bandwidth
An operational satellite in constellation i provides average bandwidth of κ Mb/s. Constellation i therefore supplies total bandwidth of κ(1 − β_i)Q_i, shared among all covered subscribers. Given a coverage rate of α_i, only α_i D_i consumers can be active at the same time among the D_i subscribers to constellation i. The throughput at peak load for a subscriber receiving service is therefore

S_i = κ(1 − β_i)Q_i / (α_i D_i).   (25)

For tractability, we assume consumers anticipate bandwidth S_i per equation 25 under equilibrium demand (equations 12 and 13), thus taking the behavior of other consumers as given. We calibrate the average bandwidth per satellite κ so that the implied peak-load throughput per subscriber for a constellation like Starlink (most satellites currently located near 550 km altitude, full coverage, 3,351 satellites and 1,000,000 subscribers) is consistent with publicly available data on Starlink [Ookla, 2022] for North America. The parameter value is listed in Table S1. The implied peak-load throughput per subscriber is about 84 Mb/s.
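A quick consistency check of the peak-load throughput calculation; neglecting the (small) maneuver loss β is our simplification here, so the backed-out κ is an approximation rather than the Table S1 value:

```python
# Peak-load throughput per subscriber: total supplied bandwidth divided by
# the number of simultaneously active subscribers.
def throughput(kappa, beta, Q, alpha, D):
    return kappa * (1 - beta) * Q / (alpha * D)

# Back out the per-satellite bandwidth kappa from the calibration target of
# ~84 Mb/s for a Starlink-like system (full coverage, 3,351 satellites,
# 1,000,000 subscribers), taking beta ~ 0.
kappa = 84.0 * 1_000_000 / 3351   # Mb/s per satellite
print(kappa)                      # roughly 25,000 Mb/s, i.e. ~25 Gb/s per satellite

assert abs(throughput(kappa, 0.0, 3351, 1.0, 1_000_000) - 84.0) < 1e-9
```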

Satellite unit costs
The unit cost of a satellite includes the costs of manufacture, launch, and operations during its lifetime, inclusive of ground stations. We let C(h_i) be the annualized unit cost of a satellite, so the annualized cost of a constellation of Q_i satellites is C(h_i)Q_i. Physical principles (e.g. the rocket equation) and empirical data support modeling the launch cost of a satellite as increasing in its altitude. There is less guidance on the operation cost during lifetime. We approximate the annualized unit cost function as

C(h) = c − d h + (e/2) h²,

where all parameters c, d and e are assumed positive. This specification implies that the annualized cost of a satellite is strictly convex in altitude and reaches a minimum at h = d/e. We set the cost-minimizing altitude h to 500 km.
We calibrate the parameters of C(•) to match publicly available information about unit launch costs for the Starlink and OneWeb megaconstellations. Following Osoro and Oughton [2021] and public statements by SpaceX COO Gwynne Shotwell [Wang, 2019], we assume the cost of a reference satellite with a 5-year operational life is on the order of 500,000 $, with additional operational and support infrastructure costs on the order of 125,000 $. We thus approximate the annualized cost of a satellite at the cost-minimizing altitude h as 150,000 $. Taken together, these figures imply that the annualized costs of building, launching, operating, and supporting a satellite at 550 km (Starlink) and at 1,200 km (OneWeb) altitudes are 152,500 $ and 640,000 $, respectively. The parameter values are listed in Table S2.
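Assuming a quadratic specification C(h) = c − d·h + (e/2)·h², one convex form consistent with the stated minimum at h = d/e and used here only to illustrate the calibration, the three cost targets pin down the parameters exactly:

```python
import numpy as np

# Solve for (c, d, e) in C(h) = c - d*h + (e/2)*h**2 from the three targets:
# C(500) = 150,000; C(550) = 152,500; C(1200) = 640,000. The quadratic form
# is our assumption, consistent with strict convexity and a minimum at d/e.
A = np.array([[1.0, -500.0, 500.0 ** 2 / 2],
              [1.0, -550.0, 550.0 ** 2 / 2],
              [1.0, -1200.0, 1200.0 ** 2 / 2]])
b = np.array([150_000.0, 152_500.0, 640_000.0])
c, d, e = np.linalg.solve(A, b)

def C(h):
    return c - d * h + e / 2 * h ** 2

print(c, d, e)   # implied parameters, all positive
print(d / e)     # cost-minimizing altitude, consistent with the stated 500 km

assert all(p > 0 for p in (c, d, e))
assert abs(d / e - 500.0) < 1e-6
```

The implied minimum at d/e = 500 km matches the stated cost-minimizing altitude, which is a useful internal consistency check on the three calibration targets.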

Avoidance maneuvers and service availability
To avoid collisions, satellites in constellation i must maneuver in response to at least some conjunctions. These maneuvers are an opportunity cost for operators, as they reduce their constellations' operational service time. To keep the model tractable and focus on economic behavior, we assume that all maneuvers are successful and result in no collisions. We assume that satellites will maneuver if and only if the conjunction is predicted to occur at or within a "maneuver safety margin" of ρ kilometers. As mentioned in the main text, this parameter may depend on technical, behavioral and regulatory factors, such as the constellations' slotting architectures, the positional uncertainty in the objects' trajectories [Arnas et al., 2021, D'Ambrosio et al., 2022], the operators' risk aversion, and/or the implementation of internationally coordinated avoidance guidelines [Inter-Agency Debris Coordination Committee, 2017]. Following prior literature [Talent, 1992, Lewis et al., 2009, Lifson et al., 2022], we model the probability δ(h) that any two satellites orbiting in a common shell at mean altitude h have a maneuver-inducing conjunction on a given day using kinetic gas theory as

δ(h) = π(2ρ)² v(h) T / V(h),

where T is the length of a day,

v(h) = √(GM / (R + h))

is the velocity of an object in a circular orbit at altitude h, GM is the geocentric gravitational constant and R is the radius of the Earth, and

V(h) = (4π/3) [ (R + h + ∆)³ − (R + h − ∆)³ ]

is the volume of a circular shell of thickness 2∆ at altitude h above the Earth's surface. Satellites in different shells are assumed to have no conjunctions.
The expected number of maneuvers n_i that constellation i will have to perform per day is calculated by adding the number of conjunctions involving two satellites from constellation i (internal congestion) and those involving one satellite from constellation i and any other object in the same orbital shell (external congestion). Knowing that constellation i operates Q_i satellites at altitude h_i, we obtain

n_i = δ(h_i) [ Q_i(Q_i − 1)/2 + Q_i Q_{−i}/2 ],

where Q_{−i} represents the number of other objects in the same shell and the factor of 1/2 in the external term reflects symmetric turn-taking behavior. We define the constellation system's daily maneuver burden as n_L + n_F.
We calibrate ρ to match open-source analysis of Starlink maneuver behaviors in Lewis [2022]. By the end of 2022, Lewis [2022] utilized reporting by SpaceX and publicly available data to infer that Starlink satellites were collectively conducting around 75 maneuvers per day. Data from USSPACECOM [2022] indicates that as of December 26th, 2022 there were approximately 5,616 objects total (3,015 of them Starlink satellites) within the 35 km shell centered at 550 km (the mean altitude of Starlink). This implies that ρ must solve n_i = 75, given h_i = 550, Q_i = 3015 and Q_i + Q_{−i} = 5616, giving a value of approximately ρ = 0.150 km. The value of ρ can be interpreted as follows: "if two satellites are predicted to approach within about 0.300 kilometers of each other then their operators commit to one of them maneuvering, each taking turns being the one to maneuver." Figure S2 gives a graphical representation of the number of maneuvers per day for a constellation at altitude h (x-axis) and size Q (y-axis), considering only internal conjunctions. Note that since we predict close approaches using kinetic gas theory, our estimated safety margin may be smaller than those used in practice. This is because kinetic gas theory assumes objects move randomly, whereas in reality satellite trajectories are not random. Since we maintain this assumption throughout, the magnitudes of welfare gains from optimal orbit use will likely be unaffected.
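The calibration of ρ can be reproduced numerically. The proportionality constants in the conjunction rate below reflect our reading of the kinetic-gas specification; any constant factor would in any case be absorbed into the calibrated ρ:

```python
import math

# Reproduce the safety-margin calibration: solve n = delta(h) * pairs = 75
# maneuvers/day for rho, with delta(h) = pi*(2*rho)**2 * v(h) * T / V(h).
GM = 398600.4418        # km^3/s^2, geocentric gravitational constant
R = 6371.0              # km, Earth radius
T = 86400.0             # s, length of a day
h, half_width = 550.0, 17.5   # shell centered at 550 km, 35 km thick

v = math.sqrt(GM / (R + h))                                  # km/s
V = 4 * math.pi / 3 * ((R + h + half_width) ** 3
                       - (R + h - half_width) ** 3)          # km^3

Q_own, Q_total = 3015, 5616
Q_other = Q_total - Q_own
# Internal pairs plus half of the cross pairs (turn-taking on external ones)
pairs = Q_own * (Q_own - 1) / 2 + Q_own * Q_other / 2

sigma = 75.0 * V / (pairs * v * T)     # = pi * (2*rho)**2
rho = math.sqrt(sigma / math.pi) / 2
print(rho)                              # approximately 0.15 km
```

Plugging in the December 2022 object counts recovers a safety margin of about 0.15 km, matching the reported calibration.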
Letting τ be the average operational time (hours) lost per avoidance maneuver which reduces service, the fraction of operational time that constellation i loses to maneuvers ("congestion") is

β_i = τ n_i / (24 Q_i).

In both the equilibrium and optimal solutions under the benchmark calibration, constellations are spaced far enough apart that there are no conjunctions between operational satellites in different constellations.
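A minimal sketch of the congestion fraction as we read it, maneuver-hours lost divided by total satellite-hours; the value of τ below is hypothetical (the calibrated value is in Table S1):

```python
# Fraction of operational time lost to maneuvers: n maneuvers/day, each
# costing tau hours of service, spread over Q satellites running 24 h/day.
def congestion(n, tau, Q):
    return n * tau / (24.0 * Q)

# Starlink-like numbers with a hypothetical tau of half an hour per maneuver.
beta = congestion(n=75.0, tau=0.5, Q=3015)
print(beta)   # a small fraction of total service time
assert 0.0 < beta < 0.01
```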

Solution concepts and algorithms
We apply two solution concepts: a "duopoly equilibrium" and an "optimal plan". The duopoly equilibrium predicts the allocation of orbital space when profit-maximizing firms compete in the two-stage game described in section 2.1 and below. The optimal plan predicts the allocation if constellations were designed by a benevolent planner to maximize global economic welfare from constellation telecommunications services.
We assume everywhere that the market is fully served, i.e. no consumers are left without telecom service. In equilibrium, this assumption is justified if it is profitable to serve all potential customers. In the optimum, this assumption is justified if it is socially beneficial to serve all consumers (i.e. equity of access). This assumption is satisfied in the equilibrium and optimum of the benchmark results presented in the main text.

Duopoly equilibrium
Firms first play a Stackelberg game of sequentially choosing altitude and constellation size. As described earlier, the Leader moves first in choosing an altitude and constellation size. In doing so they anticipate the Follower's optimal response in terms of altitude and constellation size as well as the equilibrium of the pricing subgame described in Section 2.1. The Follower best-responds to the Leader's choice anticipating only the price subgame equilibrium. Formally, using the solutions to the price equilibrium (equations 14 and 15), the Follower solves

max_{h_F, Q_F} π_F − C(h_F)Q_F,   (34)

taking (h_L, Q_L) as given, with x_L and x_F as defined in 16.
The solution to equation 34 gives the Follower's best-response functions, h_F(h_L, Q_L) and Q_F(h_L, Q_L). The Leader internalizes the Follower's behavior and solves

max_{h_L, Q_L} π_L − C(h_L)Q_L,   (35)

anticipating h_F(h_L, Q_L) and Q_F(h_L, Q_L), with x_L and x_F as defined in 16.
Below, we restrict our attention to equilibria satisfying the following incentive compatibility constraints:

θ x_F − p_F ≥ 0,   (36)
π_F − C(h_F)Q_F ≥ 0.   (37)

The first condition states that the consumer with the lowest valuation of telecom service gets non-negative utility from subscribing to the Follower's constellation. The second condition states that the Follower earns non-negative profits from staying in the market.
A duopoly equilibrium is a vector of choices for each firm, (h L , Q L , p L ) and (h F , Q F , p F ), and an indifferent consumer θ * , such that no firm has an incentive to change their strategies, the indifferent consumer gains equal utility from either firm's service, all consumers have positive utility, and the Follower stays in the market (i.e. a joint solution to the system defined by 1, 5, 6, 34, 35, 36, and 37).

Social optimum
The planner seeks to maximize total economic welfare from constellation deployment, i.e. consumer surplus net of constellation production and environmental damage costs. To this end they may choose to deploy one or two constellations. We refer to these choices in the main text as "public utility constellations". A single very large constellation may be able to provide a high-quality service to all consumers at a single low altitude (low latency and high bandwidth), but at the cost of more congestion and avoidance maneuvers. Two smaller constellations can provide differentiated services that better match varied consumer preferences at lower cost. We allow the planner to choose between welfare-maximizing one- and two-constellation architectures to determine the overall economically-optimal constellation design.
We assume environmental damage costs are proportional to the total number of satellites in orbit.The environmental damage cost of an additional satellite, denoted as f , captures various impacts throughout its lifecycle: damages from launch emissions and debris; collision risk, debris formation, and interference with astronomical observations [Adilov et al., 2015, Rao et al., 2020, Rouillon, 2020, Venkatesan et al., 2020, Massey et al., 2020]; contributions to climate change and ozone depletion during reentry; and potential damage to human and physical capital upon landing [Ryan et al., 2022, Byers et al., 2022].Notably, these costs are externalities, overlooked by private constellation operators [Adilov et al., 2022, Lawrence et al., 2022].With limited data on the full extent of damages from these externalities, further research is imperative to better understand and address these issues.We set the value of f to zero in our main scenarios and conduct sensitivity analysis over the magnitude of damages to understand how they impact optimal and equilibrium orbital allocations.
When the planner allocates orbital space to one constellation, they solve

max_{h, Q} W_1 = N x (θ + 1/2) − (C(h) + f) Q.   (38)

When the planner allocates orbital space to two constellations, they solve

max_{h_L, Q_L, h_F, Q_F, θ*} W_2 = N [ ∫_{θ*}^{1+θ} s x_L ds + ∫_{θ}^{θ*} s x_F ds ] − (C(h_L) + f) Q_L − (C(h_F) + f) Q_F.   (40)

Both problems are subject to all the physical constraints/functions described earlier. The planner chooses the better of the two optimized constellation designs, i.e. the architecture with the higher maximized welfare. We assume that the planner seeks to serve all consumers, so internalizes 36 as a constraint. However, the planner controlling two constellations has an additional degree of freedom: by letting the size of the constellation serving the lower end of the market shrink and moving the threshold θ*, the two-constellation planner is able to reduce service to consumers with lower valuations. Thus, while the planner controlling one constellation is constrained to provide equitable service to all even if it reduces aggregate welfare, the planner controlling two constellations can segment the market as necessary to maximize aggregate welfare.
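The consumer-surplus terms in the planner's objectives reduce to simple integrals over types; a sketch with illustrative (uncalibrated) quality levels:

```python
from scipy.integrate import quad

# With types uniform on [theta_lo, 1 + theta_lo], gross surplus from serving
# everyone with quality x is N * x * (theta_lo + 1/2). Numbers below are
# illustrative, not calibrated values.
theta_lo, N = 0.5, 1e7
x_single = 1.0

cs_numeric, _ = quad(lambda s: s * x_single, theta_lo, 1 + theta_lo)
cs_closed = x_single * (theta_lo + 0.5)
assert abs(cs_numeric - cs_closed) < 1e-9

# The two-constellation surplus splits the same integral at a threshold:
# high types get quality x_L, low types get x_F.
x_L, x_F, theta_star = 1.2, 0.8, 0.9
hi, _ = quad(lambda s: s * x_L, theta_star, 1 + theta_lo)
lo, _ = quad(lambda s: s * x_F, theta_lo, theta_star)

print(N * cs_closed, N * (hi + lo))
```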
An optimal plan is a vector of constellation design characteristics, (h, Q) for a single constellation or (h_L, Q_L, h_F, Q_F) for two, that solves the planner's problem. If the two-constellation architecture is chosen, the optimal plan also includes an indifferent consumer θ* solving equation 40.

Algorithms
To solve for the duopoly equilibrium and social optimum, we first generate a grid over possible values of (h_L, Q_L, h_F, Q_F). We then compute the value of each constellation and total economic welfare with two constellations (i.e. the objective function being maximized in program 40) at each grid point. For the decentralized equilibrium, we then compute the Follower's profit-maximizing choices of (h_F, Q_F) conditional on the Leader's choices, i.e. we calculate the Follower's best response (h_F^e, Q_F^e) to each choice the Leader could make, conditional on the incentive compatibility conditions 36 and 37 being satisfied. Next, we compute the Leader's optimal choice of (h_L, Q_L) given the Follower's best response to that choice, and select the choice which maximizes the Leader's profits. Finally, firms' prices are set to solve the simultaneous-move game described in section 2.1, i.e. per equations 10 and 11. This is described more precisely in Algorithm 1.
Solving the planner's problem is more straightforward. We solve problems 38 and 40 using Generalized Simulated Annealing (GSA) [Tsallis and Stariolo, 1996] as implemented in the R package GenSA [Xiang et al., 2013], and select the solution with the highest objective function value. This is the constellation design that maximizes economic welfare. In this setting GSA is preferable to methods such as Nelder-Mead or BFGS because the physical equations for service quality, latency, and coverage include non-differentiable points, and GSA provides better guarantees of finding the global optimum regardless of initial value. However, its runtime precludes use in the duopoly equilibrium problem, where multiple nested GSA calls would be required. The objective surface for the planner's problem in the two-constellation case also features multiple solutions with near-identical welfare levels. To address the instability this causes in sensitivity analysis, we restrict the solver to solutions where the constellation serving the upper end of the market is at a lower altitude than the one serving the lower end of the market. This has no effect on the solution in the benchmark calibration, and makes the solution paths for sensitivity analyses smoother without reducing economic welfare.
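The paper uses GenSA in R; a comparable sketch in Python uses SciPy's dual_annealing (the same generalized-simulated-annealing family) on a toy, non-differentiable stand-in objective, not the paper's welfare function:

```python
import numpy as np
from scipy.optimize import dual_annealing

# Toy stand-in for the planner's objective: it has a kink (from the min()
# term, as in the coverage equation) and trades off quality against a
# convex altitude-dependent cost. Purely illustrative.
def negative_welfare(z):
    h, q = z
    quality = min(1.0, q / 1000.0) * np.sqrt(q) / (1.0 + 0.01 * h)
    cost = 0.5 * (h - 500.0) ** 2 / 1e4 * q * 1e-3 + 0.15 * q
    return -(quality * 100.0 - cost)   # minimize the negative of welfare

res = dual_annealing(negative_welfare,
                     bounds=[(200.0, 900.0), (1.0, 5000.0)],
                     seed=1)
print(res.x, -res.fun)
```

dual_annealing does not require gradients and restarts from many points, which is why this family of solvers copes with the kinked objective better than Nelder-Mead or BFGS would.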
Algorithm 1: Algorithm to calculate the decentralized equilibrium
1. Generate a grid over possible values of (h_L, Q_L, h_F, Q_F).
2. Calculate π_L, π_F, W, and θ x_F − p_F at each grid point. Discard all points which don't satisfy conditions 36 and 37.
3. Calculate argmax_{h_F, Q_F} π_F for each value of (h_L, Q_L). Call these (h_F^e, Q_F^e).
4. Calculate argmax_{h_L, Q_L} π_L assuming the Follower chooses (h_F^e, Q_F^e) in response. The profile (h_L^e, Q_L^e, h_F^e, Q_F^e) is the Stackelberg equilibrium.
5. Set firms' prices according to equations 10 and 11 to reflect equilibrium in the simultaneous-move pricing subgame.
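The nested grid search in Algorithm 1 can be sketched on a toy quality-choice game; the payoffs below reuse the pricing-subgame profit shares with a made-up convex quality cost, purely for illustration:

```python
import numpy as np

# Toy Stackelberg quality game: each firm picks a quality level on a grid;
# the profit shares (2+theta)^2/9 and (1-theta)^2/9 come from the pricing
# subgame, and the quadratic quality cost is hypothetical.
theta_lo, N, cost = 0.5, 1.0, 0.3
grid = np.round(np.linspace(0.1, 5.0, 50), 6)

def profit(x_own, x_other):
    share = (2 + theta_lo) ** 2 if x_own > x_other else (1 - theta_lo) ** 2
    return N * share * abs(x_own - x_other) / 9 - cost * x_own ** 2

# Step 3: Follower's best response to each Leader choice.
def best_response(x_L):
    return max(grid, key=lambda x_F: profit(x_F, x_L))

# Step 4: Leader optimizes anticipating the Follower's best response.
x_L_star = max(grid, key=lambda x_L: profit(x_L, best_response(x_L)))
x_F_star = best_response(x_L_star)

print(x_L_star, x_F_star)
assert x_L_star != x_F_star            # firms differentiate in equilibrium
assert profit(x_L_star, x_F_star) > 0  # Leader earns positive profit
```

Even in this toy version the equilibrium is differentiated, mirroring the over-differentiation incentive discussed in the pricing section.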

Orbital shell discretization
For tractability and following prior work, we discretize orbital space into a series of non-overlapping shells.For the duopoly equilibrium, we construct a grid of altitude values beginning at 200 km and ending at 900 km and use it as described in Algorithm 1.For the optimum, we constrain the GSA solver to a numerically-stable portion of the 200-900 km region identified through testing.For both the equilibrium and the optimal plan we apply external congestion only if the constellations are within 35 km of each other, consistent with the discretization used in Lifson et al. [2022] and the announced spacings of Starlink and Kuiper (which are set to be 50 km apart in the latest FCC filings).

Parameter values
Tables S1 and S2 list the parameter values, parameter units, and a brief description of the calibration strategy where applicable. We set the number of consumers purchasing service, N, to 10,000,000 to reflect the assumption of a more-mature sector in the near future. Current LEO constellation subscribers globally are on the order of 1,000,000 [Jewett, 2022]. Note that we are not making a forecast of demand for constellation services from any particular operator, but rather choosing a market size to reflect a particular scenario.

Sensitivity analysis

Market size and maneuver safety margin
Figure S3 shows the dollar value of the welfare gain generated by the optimal public utility constellations over values of the size of the targeted population (N) and the maneuver safety margin (ρ).
As mentioned in the main text, the monetary value of the gain from a two-constellation public utility system is increasing in N (holding ρ constant) even as the percentage gain decreases (main text Fig. 5). Figure S4 illustrates this point: holding ρ constant at the benchmark level and varying N, the welfare gain from moving from duopoly to a two-constellation system at the benchmark market size is roughly $1 billion. At 20 million consumers, the gain is roughly $2 billion.

Preference parameters
We conduct sensitivity analysis over the preference parameters a_L, L, and a_S, while maintaining the maximum WTP and the marginal WTP for lower latency at their benchmark-calibration levels, given a standard internet service (25 ms latency, 75 Mb/s bandwidth available full time). The sensitivity analysis thus ranges over the set of parameters satisfying these two conditions (equations 44 and 45): the same maximum WTP for the standard service, and the same marginal WTP for lower latency. To maintain these targets, we hold one parameter (L) at the benchmark level and co-vary a_L and a_S as dictated by equation 45. We conduct the sensitivity analysis over a_S ∈ [40², 70²].
The total constellation sizes increase at a decreasing rate as a L and a S increase.The altitude also increases, though at a very slow rate.A higher preference for service quality (of which bandwidth is an important part) can justify decreased availability due to larger sizes at roughly the same altitude.

Environmental damages
We conduct sensitivity analysis over the magnitude of environmental damages, f , to identify how the optimal allocations change in response to these external costs.We find that total environmental damages on the order of $150,000 are sufficient to make the public utility constellation sizes match the duopoly sizes, though the orbital space allocations remain different.Figure 7 in the main text shows the total number of satellites and the threshold at roughly $150,000 in total environmental damages.Figure S6 shows the size and location allocations by constellation.
The single public utility constellation abruptly goes to zero at roughly $330,000 in total environmental damages.This is related to the equity constraint discussed in section 2.4.2, i.e. equation 36.The planner controlling a single constellation must provide equitable service to all consumers.When the environmental damages become high enough there is no way to provide such service while also generating surplus sufficient to cover the costs of the system.That is, the service quality becomes bad enough that consumers with a low valuation will prefer to not have it.The planner therefore ceases to provide constellation service once the environmental damages cross this threshold, since it is unable to provide all consumers with desirable service.The two-constellation planner is not similarly constrained.In this case, the planner ceases deployment of the smaller constellation once the $150,000 damages threshold is crossed, progressively reducing the size of the remaining system as well as the market it serves.This ensures that the system provides positive net benefits to the consumers it serves.

Figure S1 :
Figure S1: Maximum willingness to pay of a representative consumer (θ = 1).The contour map reflects that greater bandwidth is a good while greater latency is a bad.

Figure S2 :
Figure S2: Number of maneuvers per day (internal conjunctions only).The contour map reflects that internal conjunctions increase in constellation size Q i at an increasing rate as the maneuver distance ρ increases.

Figure S3 :
Figure S3: Dollar value gain in economic welfare from the economically-optimal two-constellation public utility system relative to the duopoly equilibrium across population size and maneuver safety margin. The black dot shows the benchmark calibration.

Figure S4 :
Figure S4: Dollar value gain in economic welfare from the economically-optimal two-constellation public utility system relative to the duopoly equilibrium across population size, holding maneuver safety margin constant. The dashed line shows the benchmark calibration.

Figure S5 :
Figure S5: Change in number of satellites for public utility constellations as latency preference parameter a L varies subject to equations 44 and 45.The dashed line shows the benchmark calibration.

Figure S6 :
Figure S6: Change in total number of satellites for public utility constellations as total environmental damages from orbit use increase. In the benchmark calibration these damages are set to zero.

Table S1 :
Table of calibrated physical and technical parameter values.

Table S2 :
Table of calibrated economic parameter values.