A Monte Carlo Methodology for Environmental Assessment Applied to Offshore Processing of Natural Gas with High Carbon Dioxide Content

Offshore production of oil and natural gas with high carbon dioxide content and high gas-to-oil ratio entails stringent processing conditions that require innovations and first-of-a-kind designs, which bear uncertainties derived from the scarcity of commercial-scale projects, hindering movement along technology learning curves. Consequently, unpredicted scenarios and unachieved specifications cause economic and environmental losses. Such uncertainties force offshore plants to be designed under stochastic factors, seeking the best statistical performance. The Monte Carlo Method is well suited to this purpose. This work proposes a computer-aided engineering framework, 'MCAnalysis', that automatically applies a probabilistic environmental assessment to offshore gas processing. 'MCAnalysis' integrates the HYSYS simulator with the 'Waste Reduction Algorithm' to assess potential environmental impacts, whose most relevant categories were identified via Principal Component Analysis. An offshore plant processing natural gas with high carbon dioxide content was submitted to a probabilistic raw gas flow rate under two scenarios of carbon dioxide content. The higher carbon dioxide content scenario presented the highest probabilistic potential environmental impacts, with the atmospheric category being the most relevant.


INTRODUCTION
Offshore Natural Gas (NG) production has been experiencing continuous increase, especially in Brazil, where it exceeds 36.6 MM Nm³/d [1] as a result of the recent discovery of huge oil and gas reserves in deep-water Pre-Salt fields with high Gas-Oil Ratio (GOR), from 250 to 500 Nm³/m³, and high carbon dioxide (CO2) content in the associated NG.
Exploration and Production of Pre-Salt fields are hence challenged by the environmentally friendly design decision of avoiding NG flaring, requiring large-scale processing of CO2 rich NG on the topside of Floating Production Storage and Offloading units (FPSO). Processing this stranded NG requires energy-intensive operations for CO2 removal, CO2 compression and re-injection, and compression of sales gas for export to onshore facilities by subsea pipelines. In addition to efficient CO2 removal, NG processing must include Water Dew Point Adjustment (WDPA) and Hydrocarbon Dew Point Adjustment (HCDPA) to avoid condensation of heavy hydrocarbons and hydrate formation along the pipeline, besides machinery and pipeline for re-injection of CO2 at very high pressures, creating technology and economic challenges of large magnitude [2].
This scenario requires innovative processes on the topside of FPSO's, resulting in First-Of-A-Kind (FOAK) design conceptions. FOAK technologies bear large uncertainties derived from the absence of prior commercial-scale projects that would allow moving along the technology learning curve. Hence, they must be designed considering uncertainties in technologies for CO2 removal from huge fluxes of CO2 rich NG, WDPA, and HCDPA, with direct impact on critical variables defining project viability, namely:
• Equipment area and weight;
• NG sale specifications [e.g., Wobbe index, Water Dew-Point (WDP), Hydrocarbon Dew-Point (HCDP) and CO2 content];
• Energy consumption;
• Economic performance [Capital and Operational Expenditures (CAPEX and OPEX)];
• Environmental performance quantified by several Potential Environmental Impact (PEI) categories.

Uncertainties of offshore natural gas processing and Monte Carlo simulations
Many and severe uncertainties affect offshore processing of NG:
• Variability of raw gas composition, pressure and flow rate [3];
• Sales gas price and consumer market, equipment and operational costs [4];
• Meteorological events in high seas and even high operational risks of subsea equipment and topside processes, among others.
Feed composition and flow rate, ambient temperature and pipeline pressure [5] are critical load conditions, as their variations propagate [6] throughout the plant, disturbing operating conditions and product specifications [7]. In industrial practice, uncertainties are usually compensated by conservative decisions, such as over-design of process equipment followed by retrofits to overcome operability bottlenecks, or overestimation of operational parameters caused by worst-case assumptions of uncertain parameters [4], which, despite making the design feasible, drastically decrease profitability [8].
Another relevant uncertainty concerns the impact of CO2 re-injection in the reservoir for Enhanced Oil Recovery (EOR). Although EOR increases the efficiency of oil recovery and provides a safe destination for the large volume of CO2 removed from the NG, it results in a long-term increase of the CO2 content in the raw NG. In fact, up to 60% of the re-injected CO2 can be retained in the reservoir [9], meaning that 40% (or more) remains in the gas phase, raising its CO2 content and leading to incremental costs and risks throughout the lifetime of offshore NG processing.
Such uncertainties recommend using decision-making techniques under the influence of stochastic factors. A classic and powerful technique for decisions under non-deterministic scenarios is the Monte Carlo (MC) method. Given advances in computer-aided engineering [10], MC is an adequate technique to estimate the probability distributions of process responses.

Offshore processing of carbon dioxide rich natural gas
Offshore processing of raw NG with high CO2 content is supposed to occur on oil-gas FPSO platforms. Raw NG is first separated from oil and water and then goes to the gas plant at a given flow rate, composition, temperature, and pressure. The raw gas is saturated in water (from 2,000 to 3,500 ppm) and its most critical variables are the flow rate and the high content of CO2. The gas plant is normally designed for a given gas flow rate and CO2 content, but both variables change along the FPSO campaign: the former because the oil flow rate can change for several reasons at constant GOR, while the latter increases along the campaign due to the continuous injection of CO2 into the reservoir for EOR. Evidently, several other variables associated with the raw gas can affect the process, but the flow rate and CO2 content can seriously impact the processing if they exceed the design condition. In this work, only the raw gas flow rate and the CO2 content are considered as relevant process input factors subjected to uncertainties.
Specifications of NG for commercialization. Sales NG is specified in Brazil according to the National Agency of Oil, Gas and Biofuels. The most relevant NG specifications considered in this work are maximum WDP of −45 °C at 1 atm, maximum HCDP of 0 °C at 45 bar, maximum 3%mol CO2 and minimum 85%mol CH4.
Gas processing description. The gas plant for offshore processing of CO2 rich NG is sketched in Figure 1. This process was designed to comply with the NG specifications assuming the mean values of the critical input factors subjected to uncertainties: the flow rate and the CO2 content of the raw gas. This design is denominated the Base-Case. The objective is to test the Base-Case in terms of environmental impacts via MC sampling. The main operations in the Base-Case (Figure 1) are:
• NG dehydration for WDPA;
• Separation of condensable hydrocarbons for HCDPA;
• CO2 removal.
Triethylene Glycol (TEG) dehydration was chosen for WDPA as the most economical option [22], despite requiring a stripping column for TEG regeneration. HCDPA is achieved by Joule-Thomson Expansion (JTE) [23], while CO2 is removed from the NG by Membrane Permeation (MP) due to its operational simplicity, low costs, modularity, capability of processing CO2 rich feeds, and reduced weight and footprint [24]. The process starts with compression of the raw NG, which is sent to TEG dehydration for WDPA to avoid ice and gas hydrates in pipelines [25]. Rich TEG is regenerated, producing a flare gas residue, and recirculates to the WDPA column, while the dry NG flows to HCDPA, where a saleable stream of Natural Gas Liquids (NGL) is obtained as a by-product.
HCDPA must precede MP CO2 removal to avoid membrane damage by condensable hydrocarbons. In MP CO2 removal, dry NG flows through a battery of two MP modules, whose outlet MP retentate is the final conditioned NG. Final NG is compressed to be exported to onshore facilities. The MP permeate is a low-pressure CO2 rich stream which is compressed for re-injection as EOR agent.
Gas processing assumptions for simulation. For each MC realization of the gas plant stochastic input factors, the Base-Case is simulated in HYSYS. The following assumptions are valid for each simulation. {A1} Thermodynamic modelling uses the HYSYS Peng-Robinson Package, except for the TEG unit, which uses the HYSYS Glycol Package. {A2} Reference raw gas composition (%mol): 20% CO2, 77.88% CH4, 1.20% C2H6, 0.36% C3H8, 0.09% iC4H10, 0.08% C4H10, 0.04% iC5H12, 0.02% C5H12, 0.024% C6H14, 0.037% C7H16, 0.024% C8H18, 0.0023% C9H20, 0.25% N2.

Gas plant uncertainties. Uncertainties related to the raw NG were selected for MC analysis. Independent normal random populations were assumed for the feed flow rate (MM Sm³/d) and for its CO2 content as molar fraction, because these are the feed factors with the highest influence on the process response and also the ones most subject to uncertainties. Normal PDF's were chosen due to their relevance for describing multiple physical, meteorological, biological and financial phenomena. One can argue that normal PDF's are not adequate to represent real stochastic disturbances because they have infinite tails spread over (−∞, +∞) domains, whereas reality is not akin to inputs of infinite amplitude. This is not the case. First of all, there are, indeed, (very) rare natural events with relatively gigantic catastrophic amplitudes (one does not have to be especially imaginative to cite an example). Secondly, the 99.73% probability domain of normal PDF's corresponds to the finite interval [µ − 3σ, µ + 3σ], while the 99.99% probability domain corresponds to [µ − 4σ, µ + 4σ], where µ and σ are, respectively, the population mean and standard deviation. This implies that more than 10,000 outcomes would have to be sampled to obtain a single one outside the [µ − 4σ, µ + 4σ] interval, while MC searches are applied with finite samples containing from 1,000 to 3,000 outcomes, which is sufficient for ergodicity [27].
In other words, on practical grounds, normal PDF's give rise to bounded unimodal samples. Other advantages of normal PDF's are: • By the Central Limit Theorem, a normal input is appropriate to represent the combined contribution of several independent inputs following arbitrary distributions (which, in real applications, is the case of some unimodal disturbances that represent a collection of contributions); • Normal PDF's may degenerate to simpler PDF's such as the Dirac PDF.
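The finite practical support of a normal population is easy to check numerically. The sketch below (plain Python; µ = 6.0 and σ = 1.0 are the raw gas flow rate parameters used in this work) counts the fraction of sampled outcomes falling inside the 3σ and 4σ intervals, illustrating the 99.73% and 99.99% coverage figures quoted above.

```python
import random

# Empirical coverage of the [mu - k*sigma, mu + k*sigma] intervals
# for a normal population (mu = 6.0, sigma = 1.0, as for the raw gas
# flow rate in MM Sm3/d).
random.seed(42)
mu, sigma, n = 6.0, 1.0, 100_000
samples = [random.gauss(mu, sigma) for _ in range(n)]

inside_3s = sum(mu - 3*sigma <= x <= mu + 3*sigma for x in samples) / n
inside_4s = sum(mu - 4*sigma <= x <= mu + 4*sigma for x in samples) / n
print(f"within 3 sigma: {inside_3s:.4f}")   # theory: 0.9973
print(f"within 4 sigma: {inside_4s:.5f}")   # theory: 0.99994
```

On practical grounds, then, a normal input behaves as a bounded disturbance: with MC samples of 1,000 to 3,000 outcomes, values beyond 4σ are essentially never drawn.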
Additionally, there is a very special reason to adopt normally distributed input factors, related to the traceability of normal signals throughout the process. When submitted to uncertainties following normal PDF's, process output variables present behaviours close to the normal pattern for approximately linear cause-effect relationships, whereas behaviours very different from normal (e.g. bimodal, multi-modal or stone-wall responses) are observed for strongly non-linear causality relationships. In other words, using normal inputs one can identify where there is great non-linearity in the process response. This may be useful in sensitivity studies, in the design of process control strategies and when applying safe design margins to critical process units. The normal PDF of variable x is given by eq. (1), with parameters μ (mean) and σ (standard deviation):

PDF(x) = [1/(σ√(2π))] exp[−(x − μ)²/(2σ²)]  (1)

In the present case, the raw NG flow rate population is supposed to follow a normal PDF with μ = 6.0 MM Sm³/d and σ = 1 MM Sm³/d, meaning that it practically varies from 3.0 to 9.0 MM Sm³/d. Meanwhile, two scenarios were devised for the CO2 content populations of raw NG: Case 20%, using a normal PDF with μ = 0.20 and σ = 0.03 (i.e. the CO2 molar fraction approximately varies from 0.10 to 0.30), and Case 50%, using a normal PDF with μ = 0.50 and σ = 0.03 (i.e. the CO2 molar fraction approximately varies from 0.40 to 0.60), as shown in Figure 2.

Monte Carlo sampling
MC is a relevant methodology based on stochastic analysis for evaluating systems under non-deterministic scenarios when analytical solutions are complex or impossible due to non-deterministic components which are not known a priori [11]. Such a methodology provides an approximation of the problem solution by stochastic sampling of the independent system variables obeying known PDF's, instead of solving the numeric-mathematical problem directly. The objective of MC analysis is the stochastic appraisal of the performance of a given dependent variable of interest (output variable) according to the behaviour of uncertain independent variables (input variables). The method consists of creating samples of the independent variables by generating pseudo-random numbers uniformly distributed between 0 and 1, which are converted to random samples obeying the PDF's of the independent variables. When a large sample of random values of the independent variables is used, the calculated values of the output variables can be plotted as histograms, leading to approximations of the PDF's of the output variables, which are the main objectives of MC analysis, jointly with some statistics for estimating parameters of these PDF's (e.g. sample mean and sample standard deviation).
This work uses the Inverse Transform Method [28] to generate normal pseudo-random populations. This method employs random number properties and the Cumulative Distribution Function (CDF) of a random variable to generate its sample PDF. The correlation of a random number with the CDF is given by eq. (2), which can be inverted to generate eq. (3), based on the iCDF (inverse of CDF), expressing the mapping of a set of values of the variable of interest from a set of random values uniformly distributed between 0 and 1:

rnd1 = CDF(x1)  (2)
x1 = iCDF(rnd1)  (3)

where rnd is a random number uniformly distributed between 0 and 1, rnd1 is a sample of rnd between 0 and 1, x is another random variable varying along the domain a ≤ x ≤ b, x1 is a sample of x between a and b, PDF(rnd) and PDF(x) are the PDF's of the variables rnd and x, and CDF(rnd) and CDF(x) are the integrals of PDF(rnd) and PDF(x).
The iCDF for the standard normal distribution (μ = 0, σ = 1) is numerically approximated by eq. (4) and eq. (5) [29]. These equations convert a population of pseudo-random numbers p sampled between 0 and 1 into a population following the standard normal PDF with an absolute error smaller than 4.5 × 10⁻⁴:

t = √(−2 ln p)  (4)
z = t − (c0 + c1·t + c2·t²)/(1 + d1·t + d2·t² + d3·t³)  (5)

where c0 = 2.515517, c1 = 0.802853, c2 = 0.010328, d1 = 1.432788, d2 = 0.189269 and d3 = 0.001308. The symmetry of the normal PDF is considered, so that eq. (4) and eq. (5) are valid for 0 < p ≤ 0.5; for 0.5 < p ≤ 1, eq. (4) and eq. (5) are used with 1 − p, switching the sign of the calculated abscissa z. With eq. (4) and eq. (5), the population of z values generated from the population of p values approximately follows the standard normal PDF. Eq. (6) converts the population of standard normal z into a normal population x with mean μ and standard deviation σ:

x = μ + σz  (6)

Figure 3 depicts the relationships between samples via iCDF and CDF to generate samples according to normal PDF's. In this procedure, the time-consuming step is the simulation of the complex gas plant (i.e. the evaluation of the process response) and not the sampling management algorithm itself.
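The Inverse Transform Method of eqs. (4)-(6) can be sketched in a few lines of Python. The constants are those of the classical rational approximation cited above; the flow rate parameters (μ = 6.0 MM Sm³/d, σ = 1) follow the text, and the sample size is illustrative.

```python
import math
import random

# Rational approximation of the standard normal iCDF (eqs. (4)-(5));
# absolute error < 4.5e-4.
C0, C1, C2 = 2.515517, 0.802853, 0.010328
D1, D2, D3 = 1.432788, 0.189269, 0.001308

def inv_norm_cdf(p):
    """Map p in (0, 1) to z with CDF(z) ~ p, using the PDF's symmetry."""
    assert 0.0 < p < 1.0
    q = p if p <= 0.5 else 1.0 - p
    t = math.sqrt(-2.0 * math.log(q))                              # eq. (4)
    z = t - (C0 + C1*t + C2*t*t) / (1.0 + D1*t + D2*t*t + D3*t**3)  # eq. (5)
    return -z if p <= 0.5 else z

def sample_normal(mu, sigma, n, rng):
    """Inverse Transform: uniform (0,1) samples -> normal(mu, sigma), eq. (6)."""
    return [mu + sigma * inv_norm_cdf(rng.random()) for _ in range(n)]

rng = random.Random(7)
pop = sample_normal(6.0, 1.0, 2000, rng)        # raw gas flow rate, MM Sm3/d
mean = sum(pop) / len(pop)
std = (sum((x - mean)**2 for x in pop) / (len(pop) - 1)) ** 0.5
print(round(mean, 2), round(std, 2))            # close to 6.0 and 1.0
```

The generated sample mean and standard deviation approach the population parameters µ = 6.0 and σ = 1 as the sample grows, which is exactly the check performed graphically in Figure 7.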

Environmental performance of processes -Waste Reduction algorithm
Several methodologies for characterizing the environmental impact of products and processes are available in the literature, such as Life Cycle Assessment (LCA) and Waste Reduction (WAR), both well-established techniques to include environmental considerations in process design [32]. The LCA methodology assesses the environmental performance of a product or process throughout its life cycle: from the primary resources to recycling or safe disposal [33]. However, this methodology requires a large amount of information and few data are publicly available due to legal or intellectual property concerns [34]. The WAR methodology considers only the product manufacturing step [19], as Figure 4 shows. WAR is selected in this work due to its simplicity when compared to LCA. It is worth noting that WAR, contrarily to LCA, is restricted to 'gate-to-gate' analysis (Figure 4). WAR was proposed by Cabezas et al. [18] as a general theory for the flow and generation of PEI's through a chemical process and is used to quantify its environmental performance. By definition, PEI is the unrealized average effect or impact that the emission of mass and energy would cause to the environment, being essentially a probability function associated with a potential effect. A PEI conservation equation, based on an accounting of the flow of PEI in/out of the product manufacturing and energy generation processes [20], is introduced by WAR in eq. (7) for the steady-state balance, where the generation term represents the creation or consumption of PEI by chemical reactions inside the chemical process and the power plant. Figure 5 illustrates eq. (7). PEI is calculated by a unified score obtained as the weighted sum of eight environmental impact categories, listed in Table 1, and a specific PEI for each impact category is associated with the components of the process streams, as shown in eq. (8), where l is an indicator for input or output and αi is the weighting factor for environmental impact category i (Table 1).
The calculation of ψ^s_ki is given by eq. (9):

ψ^s_ki = (Score)ki / <(Score)k>i  (9)

where (Score)ki represents the impact score of component k in environmental category i and <(Score)k>i represents the average impact score of all components in category i. This normalization of the component impact eliminates unnecessary bias within the category. The specific correlations among each category score and its corresponding measure of impact are described in Young and Cabezas [19]. The environmental assessment in this work presents the output PEI's for each individual category caused by the offshore processing of CO2 rich NG and identifies the most relevant ones as performance indicators using PCA. Impacts from product streams are not considered.
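A minimal sketch of the normalization of eq. (9) and the weighted category sum of eq. (8) may help. All component scores, mass flows and weighting factors below are hypothetical placeholders for illustration only; they are not values from the WAR database.

```python
# Illustrative WAR-style category PEI calculation, eqs. (8)-(9).
# All numbers below are made-up placeholders, NOT WAR database entries.
raw_scores = {                         # (Score)_ki per category i, component k
    "GWP": {"CO2": 1.0, "CH4": 25.0},
}
waste_flows = {"CO2": 120.0, "CH4": 0.8}   # waste stream mass flows, kg/h
alpha = {"GWP": 1.0}                       # weighting factor alpha_i

pei_out = 0.0
for cat, scores in raw_scores.items():
    avg = sum(scores.values()) / len(scores)          # <(Score)_k>_i
    psi = {k: s / avg for k, s in scores.items()}     # eq. (9): psi^s_ki
    # eq. (8): output PEI of category i = alpha_i * sum_k (flow_k * psi^s_ki)
    pei_out += alpha[cat] * sum(waste_flows[k] * psi[k] for k in psi)

print(round(pei_out, 2))
```

Dividing by the category average removes the arbitrary scale of each category's raw scores, so the weighted sum across categories is not biased toward categories with numerically large score ranges.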

Principal Component Analysis
Principal Component Analysis (PCA) consists of re-organizing data sets (e.g. data from process plants), which often exhibit correlated patterns, in order to find a set of new uncorrelated variables as linear combinations of the original ones. The new variables are assigned fractions of the variance in the original data in decreasing order [21].
The original data set is organized as a matrix X (m × n), where the scalar variables of the problem correspond to the columns and their samples correspond to the rows, meaning that each vector of sampled data Xi (m × 1) for variable Xi corresponds to a column of the matrix X, as illustrated by eq. (10). Each vector Xi originates a sample scalar mean <Xi> given by eq. (11). Such sample means are gathered in the vector of means <X>, as shown in eq. (12), where U (m × 1) is a compatible vector of ones. PCA factorizes the sample variance-covariance matrix RX (n × n), which is symmetric and positive semi-definite, into its eigenvalues and the matrix P of orthonormal eigenvectors. Matrix P contains the directions capable of describing the variability of the original data X in decreasing order of relevance, meaning that the X data show more variability over the direction defined by the first column of P. This is the 1st principal direction for describing the statistical behaviour of X; the second column of P is the 2nd principal direction, and so on. A matrix of generalized scores S (m × n) is obtained by projecting X over the directions (columns) of P after subtracting the respective sample means <Xi>, as eq. (15) shows, where Pi is principal direction i of P and Si contains the m × 1 samples of the generalized score Si. The generalized scores are the new scalar variables S1, S2,…, Sn, candidates to substitute the original variables X1, X2,…, Xn with the advantage of having the variability condensed to its maximum and decreasing along the elements of the set. Usually, the first elements represent most of the variability of the original set. The percentage of the variance associated with the generalized score Si is calculated considering its contribution to the total variance of the sample, as shown in eq. (16).
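The PCA steps above can be sketched with NumPy on synthetic data: eigendecomposition of the sample variance-covariance matrix yields the principal directions P, the generalized scores S and the variance fraction of each component. The data set below is an assumed stand-in, three strongly correlated columns, so that the first component dominates.

```python
import numpy as np

# PCA via eigendecomposition of the sample variance-covariance matrix R_X,
# following eqs. (10)-(16). Synthetic, strongly correlated data.
rng = np.random.default_rng(0)
m, n = 500, 3
base = rng.normal(size=(m, 1))                       # common underlying factor
X = np.hstack([base + 0.1 * rng.normal(size=(m, 1)) for _ in range(n)])

Xc = X - X.mean(axis=0)                 # subtract sample means <X_i>
R = np.cov(Xc, rowvar=False)            # R_X, n x n, symmetric PSD
lam, P = np.linalg.eigh(R)              # eigenvalues (ascending) and eigenvectors
order = np.argsort(lam)[::-1]           # reorder by decreasing variance
lam, P = lam[order], P[:, order]
S = Xc @ P                              # generalized scores, eq. (15)
frac = lam / lam.sum()                  # variance fraction of each PC, eq. (16)
print(np.round(frac, 3))                # first component dominates here
```

Selecting the components whose cumulative `frac` reaches a chosen threshold is exactly the criterion applied later to Table 3, and the largest-magnitude entries of each retained column of P point to the dominant original variables, as in Table 4.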

Software 'MCAnalysis'
In order to enable the automatic execution of MC analysis for any chemical process representable as a simulation flowsheet, a set of random sample values of the stochastic process input variables must be generated, managed and submitted to process simulation, generating output samples which are then statistically processed into results and figures. Therefore, 'MCAnalysis' was designed as a hub to manage MC analyses for several different scenarios of stochastic response, such as technical analysis of a process design, safety analysis, energy consumption assessment, economic assessment and environmental sustainability assessment (Figure 6). In this work, only the environmental assessment is demonstrated. 'MCAnalysis' starts with the module 'Generate Batch Data', which processes a configurable XML file (Configuration XML) containing the definition of the non-deterministic independent MC input variables, their respective PDF's and how to identify them in the HYSYS simulation flowsheet. Populations of the stochastic input variables are randomly generated and graphically processed through MATLAB to generate histograms and plot the associated PDF curves. The HYSYS simulation is then executed in batch mode for each sample of the input variables. The relevant simulated responses for the MC analysis, listed in another configuration XML (Read XML), are gathered and stored in an output XML file (HYSYS output XML). The batch data generated from the simulation can then be used for process design or for assessment of environmental performance with MC analysis.
For technical analysis of a process design, the HYSYS output XML is processed by the module 'MCM Analysis' together with a configurable XML (Configuration XML) containing the output variables relevant for MC analysis, as well as their maximum or minimum specifications (if the variable is a design specification), for graphical presentation of the MC analysis results through MATLAB: histograms, PDF curves and the percentage of success achieved by the sampled cases. As for the environmental performance assessment, the HYSYS output XML is processed by the module 'Environmental Indicators' together with a configurable XML (WAR general data), which is extracted from HYSYS with components, input and output streams, where the output streams are classified by the user as product or waste. This module uses the WAR algorithm data [35] to generate an XML (WAR output) containing the PEI's classified according to the eight environmental impact categories for all the samples of the MC analysis. For the environmental performance assessment itself, this output XML is processed by the module 'MCM Analysis' in the same way as the HYSYS output XML.
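The core batch loop of 'MCAnalysis' can be sketched as follows. The HYSYS flowsheet evaluation is replaced here by a hypothetical stand-in function `simulate` (the real tool drives the simulator and exchanges data through the XML files described above); the input PDF parameters follow Case 20% CO2, and the response formula is purely illustrative.

```python
import random
import statistics

# Sketch of the MC batch loop: sample inputs -> evaluate flowsheet -> collect
# outputs -> summary statistics. `simulate` is a HYPOTHETICAL stand-in for
# the HYSYS flowsheet evaluation.
def simulate(flow_mm_sm3_d, co2_frac):
    """Mock process response (e.g. a total PEI); illustrative formula only."""
    return 10.0 * flow_mm_sm3_d * (0.5 + co2_frac)

rng = random.Random(123)
results = []
for _ in range(1000):                        # one flowsheet run per realization
    flow = rng.gauss(6.0, 1.0)               # raw gas flow rate PDF, MM Sm3/d
    co2 = rng.gauss(0.20, 0.03)              # CO2 molar fraction, Case 20% CO2
    results.append(simulate(flow, co2))

mc_mean = statistics.mean(results)
mc_std = statistics.stdev(results)
print(round(mc_mean, 1), round(mc_std, 1))
```

The collected `results` list is what the 'MCM Analysis' module turns into histograms, fitted normal PDF curves and success percentages against design specifications.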

RESULTS AND DISCUSSIONS: ENVIRONMENTAL ASSESSMENT VIA MONTE CARLO ANALYSIS
The environmental performance was assessed for the process design with two mean CO2 contents in the raw gas feed: Case 20% CO2 and Case 50% CO2. The respective flowsheets of both cases were previously designed with MC analysis in order to achieve a minimum of 75% success frequency for all design specifications in the sampled cases. Populations of 1,000 random samples for each independent stochastic input were used. The specifications WDP, HCDP and minimum molar fraction of methane were achieved in 100% of the samplings for both cases. The maximum 3%mol fraction of CO2 in the exported NG was reached in 83.3% of the samples for Case 20% CO2 and in 82.1% for Case 50% CO2. Figure 7 shows histograms and PDF curves of the sampled populations of the input variables. The number of sampled cases, sample mean μ and sample standard deviation σ of the populations, as well as their theoretical (expected) values, are given in the title bar of each graphic. In all graphical results in this study, the similarity of the response behaviour to a normal pattern is tested by depicting the population histograms with the corresponding normal PDF using the sample mean and standard deviation as parameters. For example, Figure 7 confirms, for the two stochastic input factors (flow rate and CO2 content of raw gas), the similarity of sampled and theoretical population parameters and the similarity of population histograms and normal PDF curves. These numerical and graphical similarities attest that the sampling successfully reproduced the respective normal behaviours. After being designed under uncertainties with MC analysis, the gas plant flowsheets adequate to Case 20% CO2 and Case 50% CO2 were submitted to environmental assessment with MC analysis. The intent is to disclose the behaviour of the environmental indicators of Table 1 for the gas plant operating under stochastic inputs.
The results are shown in Figures 8-15, which depict histograms and normal PDF's for the populations of output PEI's from the eight environmental impact categories considered in the WAR algorithm (Table 1). The normal PDF's built with the sample mean and sample standard deviation serve as qualitative indications of the suitability of normal patterns to represent the statistical behaviour of the corresponding MC responses. The number of samples, the percentage of samples which attained or exceeded specifications, the sample mean μ and the sample standard deviation σ are given in the title bar of each graphic. When considering the uncertainties of both feed gas flow rate and CO2 molar fraction, the histograms of HTPI, HTPE and TTP for both cases, ATP and PCOP for Case 20% CO2, and GWP for Case 50% CO2 presented behaviour completely different from a normal PDF, which shows that the process responds to uncertainties in a highly non-linear way regarding environmental performance. In each of these instances, the inadequacy of the normal behaviour can be visualized through the discrepancy between the statistical behaviour of the histogram and the respective normal PDF built with the sample mean and sample standard deviation of the histogram. The remaining PEI's presented behaviour relatively close to normal patterns. In addition, the histograms of HTPI, HTPE, ATP, TTP, GWP and PCOP exhibit differences in pattern between the 20% and 50% CO2 content cases. This means that the process responds non-linearly to changes in the CO2 content of raw NG regarding most of the environmental indicators. Table 2 summarizes the percent differences in sample mean μ and sample standard deviation σ between the populations of the two CO2 content cases, relative to the values of Case 20% CO2.
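Beyond the visual comparison of histogram and fitted normal PDF, departure from the normal pattern can be quantified with sample skewness and excess kurtosis, both near zero for normal data. The sketch below uses synthetic stand-ins (a near-normal sample and a bimodal one, the latter mimicking the "completely different from normal" responses), not the actual PEI samples.

```python
import random

# Simple normality indicators: sample skewness and excess kurtosis.
# For a normal population both tend to zero; a bimodal response gives
# strongly negative excess kurtosis.
def skew_kurt(xs):
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m)**2 for x in xs) / n
    m3 = sum((x - m)**3 for x in xs) / n
    m4 = sum((x - m)**4 for x in xs) / n
    return m3 / m2**1.5, m4 / m2**2 - 3.0

rng = random.Random(1)
normal_like = [rng.gauss(0, 1) for _ in range(5000)]          # linear response
bimodal = [rng.gauss(-2, 0.3) if rng.random() < 0.5 else rng.gauss(2, 0.3)
           for _ in range(5000)]                              # non-linear response

sk_n, ku_n = skew_kurt(normal_like)
sk_b, ku_b = skew_kurt(bimodal)
print(round(sk_n, 2), round(ku_n, 2))
print(round(sk_b, 2), round(ku_b, 2))
```

Such scalar indicators could complement the graphical test applied to Figures 8-15, flagging automatically the categories whose responses are strongly non-linear.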
By assigning equal weights to each environmental category, the total output PEI is depicted in Figure 16, showing that Case 50% CO2 has a sample mean 62% higher than Case 20% CO2 and a sample standard deviation 74% higher. Therefore, the long-term increase of CO2 content in raw NG caused by CO2 re-injection for EOR will deteriorate the environmental performance of the process. In addition, the most relevant environmental impact categories were identified with PCA. Table 3 shows the eigenvalues (λi), the variances (νi) and the cumulative variances for Case 20% CO2 and Case 50% CO2. It can be concluded that the first two principal components, PC(1) and PC(2), are the only relevant components for explaining the environmental performance of the process, corresponding, respectively, to 81.2% and 18.8% of the variance of Case 20% CO2 and to 92.6% and 6.5% of Case 50% CO2. For identifying the dominant environmental impact categories corresponding to PC(1) and PC(2), the components of the vectors Pi with the highest absolute values, listed in Table 4 for Case 20% CO2 and Case 50% CO2, correspond to the most relevant categories. Table 4 shows that the environmental impact categories GWP (global atmospheric impacts), AP and PCOP (regional atmospheric impacts) are the most significant in PC(1), having high relevance to the process in both cases. When PC(2) is also included, the environmental impact category TTP (ecological toxicity) can be considered of medium relevance to the process in both cases.

CONCLUSIONS
MC analysis was successfully applied to the environmental assessment of offshore processing of CO2 rich NG considering uncertainties in the raw NG flow rate and CO2 content. Processing NG with higher CO2 content carries a higher potential environmental impact, as expected, since CO2 is the main emission from the plant, owing to the power demand of compressors for NG exportation and CO2 injection for EOR. This result raises an alert for the impact of CO2 injection in the reservoir for EOR, which will increase the CO2 content of the NG in the long term.
The statistical behaviours of the PEI's corresponding to each potential environmental category evidence highly non-linear responses of the process, which validates the recommendation to adopt decision-making under the influence of stochastic factors, such as MC analysis. Furthermore, the CAE tool 'MCAnalysis' was shown to be valuable for this kind of analysis, as it can handle uncertainties affecting process responses for a given design and address the sustainability performance of the process.
The categories GWP (global atmospheric impacts), AP and PCOP (regional atmospheric impacts) were identified by PCA as very relevant to the process environmental performance under uncertainties, while the category TTP (ecological toxicity) exhibited medium relevance, independently of the CO2 content of the raw NG, because TTP is also related to emissions of unburnt hydrocarbons to the atmosphere due to leakages and incomplete burning. These aspects have reflexes on FPSO design decision-making and can influence environmental policies of regulating agencies in connection with CO2 rich NG exploration and production by offshore platforms.

NOMENCLATURE
<(Score)k>i: average score of all species in environmental impact category i
P (n × n): matrix of orthonormal eigenvectors of RX
Pi (n × 1): eigenvector i of RX
RX (n × n): sample variance-covariance matrix of X
Si (m × 1): vector of generalized scores S1, S2,…, Sn from X1, X2,…, Xn
X (m × n): matrix of process data with n scalar variables and m samples