Efficient Hybrid Multiobjective Optimization of Pressure Swing Adsorption

Pressure swing adsorption (PSA) is an energy-efficient technology for gas separation, but the multiobjective optimization of PSA is a challenging task. To tackle this, we propose a hybrid optimization framework (TSEMO + DyOS) that integrates two steps. In the first step, a Bayesian stochastic multiobjective optimization algorithm (TSEMO) searches the entire decision space and identifies an approximate Pareto front within a small number of simulations. Within TSEMO, Gaussian process (GP) surrogate models are trained to approximate the original full process models. In the second step, a gradient-based deterministic algorithm (DyOS) is initialized at the approximated Pareto front to further refine the solutions until local optimality; here, the full process model is used in the optimization. The proposed hybrid framework is efficient because it benefits from the coarse-to-fine function evaluations and the stochastic-to-deterministic searching strategy. When the result is far away from the optima, TSEMO can efficiently approximate a trade-off curve as good as a commonly used


Introduction
Pressure swing adsorption (PSA) is an energy-efficient gas separation technology [1][2][3] that has been widely used in industry for drying [4], air separation [5,6], and hydrogen production [7,8]. Over the last two decades, academia has seen growing interest in applying PSA for CO2 capture [9,10]. PSA possesses significant advantages over the conventional amine-based CO2 capture technology with regard to emissions to the environment and energy consumption [3,11]. Since no amine solvent is involved in the PSA system, no organic waste is disposed of to the environment.
The optimal design and operation of PSA processes is a challenging task due to the inherent cyclic and dynamic behavior of the system and the highly nonlinear process models [12]. Since the column pressure varies over time, the PSA process can never reach a steady-state operating point. Instead, it eventually comes to a cyclic steady state (CSS), where the trajectories of the state variables are the same for consecutive cycles. From an industrial operation perspective, PSA is required to operate at CSS to achieve a constant process performance. However, it is difficult to calculate the CSS analytically, which generally requires a numerical simulation [13][14][15]. Additionally, multiple (conflicting) objectives co-exist, including product purity, recovery rate, energy consumption, and operating cost [11,16,17]. The process design and operation problems often involve nonconvex functions [18][19][20], for which multiple locally optimal solutions may exist. Further, PSA may be operated in more complicated modes, e.g., multiple columns integrated with recycles [3,11,12,17]. Overall, the above-mentioned factors contribute to the difficulty of optimizing PSA processes.
In the previous literature, stochastic optimization algorithms have been used to optimize PSA processes [11,16,21]. Stochastic optimization algorithms treat the simulation as a black-box function: they vary the values of the decision variables and run the PSA simulation until CSS.
Following this procedure, the values of the objectives and constraints are returned to the optimizer for evaluation. Haghpanah et al. used a genetic algorithm (GA) to optimize the PSA operation, where the time-consuming PSA simulation slows down the overall optimization [11]. Capra et al. [16] reported a multi-level coordinate search (MCS) algorithm, in which the decision space is divided for parallel computing on multiple workers to speed up the overall optimization. Stochastic algorithms can search the decision space globally.
However, optimality cannot be guaranteed in finite time [22], and the solution found through stochastic optimization is not proven to satisfy the Karush-Kuhn-Tucker (KKT) optimality conditions [23].
Deterministic algorithms are another class of methods for PSA optimization, in which gradient information is used to guide the search direction (hence often referred to as 'gradient-based optimization'). There are two common approaches for the gradient-based optimization of dynamic systems, i.e., the simultaneous and the sequential approaches [23]. The simultaneous approach discretizes the state and decision variables. Herein, both the temporal and spatial domains of the partial differential equations (PDEs) are discretized, resulting in a large set of algebraic equations and eventually in large-scale nonlinear programming (NLP) problems. Tsay et al. proposed a pseudo-transient optimization framework to identify the final cycle of PSA under CSS using a 'tear-recycle' method, in which the temporal domain is significantly reduced [24]. The sequential approach is well suited to problems with few decision variables and complex dynamic behavior. The integrator solves the differential equations and provides the gradient to the NLP solver. However, in the case of PSA, a significant amount of computational time is required to calculate the sensitivity information and integrate it over many PSA cycles for the gradient. Additionally, the sensitivity integration may fail due to the highly nonlinear PSA model [13]. Jiang et al. focused on one PSA cycle [t_0, t_cycle] and applied the sequential approach to converge the initial conditions x(t_0) to the endpoint x(t_cycle) of the state variables [13]. This concept can dramatically accelerate the simulation to reach CSS. However, the spatially discretized PSA model contains over 1,000 state variables, and their convergence still constitutes a large optimization problem.
Besides the extensive work on applying various optimization algorithms to PSA, researchers have also devoted effort to developing surrogate models that represent the dynamic behavior of PSA.
Surrogate models are cheap to evaluate and can approximate the relationship between the inputs and outputs of physical models. Jiang et al. employed a Lagrange interpolation polynomial to approximate the profiles of the state variables, so as to simplify the convergence problem.
Nevertheless, this approximation was reported to introduce inaccuracy into the subsequent optimal design of the PSA process [13]. Agarwal et al. demonstrated that proper orthogonal decomposition (POD) can be employed to replace the stiff PDEs of PSA. POD achieves a significant reduction of the state variables and thus leads to low-order surrogate models [25].
With the recent increasing attention to machine learning, artificial neural network (ANN) and Gaussian process (GP) surrogate models have become prominent options for replacing computationally expensive models [26][27][28][29]. With this method, PSA processes with different materials were optimized successfully within acceptable computational time [29]. However, surrogate models are often criticized for their inaccuracy and lack of generalization [30].
In summary, prior studies on PSA optimization are based on (1) stochastic algorithms using expensive full-order models, for which optimality cannot be guaranteed, (2) deterministic algorithms, which require expensive-to-obtain gradient information, or (3) surrogate formulations, in which accuracy might be compromised. A hybrid method may integrate the complementary advantages of the individual methods. The concept of hybrid optimization methods, i.e., a synthesis of a global solver with a local solver, was initially proposed by computer scientists to solve nonconvex problems [31][32][33]. Similarly, the concept of 'coarse-to-fine' search transforms the original problem into a coarse approximation for the initial search and then gradually approaches the actual problem for the refined search [34]. The efficiency of these concepts has been proven in the areas of computer vision [34], speech signal processing [35], and image processing [36]. Nevertheless, these concepts are not frequently used in the chemical industry.
Therefore, we propose a hybrid strategy: a stochastic algorithm for the initial search, followed by a gradient-based algorithm for the local refinement of the solution. This work achieves efficient multiobjective optimization of the PSA system through this hybrid optimization framework.
The efficiency of the hybrid optimization framework benefits from (i) the stochastic-to-deterministic search strategy and (ii) the coarse-to-fine function evaluations: initially, a GP-based surrogate model for the rough evaluation; then, the rigorous process model for the refined evaluation.
The remaining sections are structured as follows. Section 2 briefly describes the process model of PSA. Section 3 introduces the state-of-the-art algorithms used in the hybrid framework. Section 4 presents the optimization formulation of PSA using the hybrid optimization framework.
Section 5 shows the results, followed by a discussion in Section 6 of why the overall optimization efficiency of the hybrid framework is competitive. The final section presents conclusions and an outlook.

Model description of pressure swing adsorption
PSA is operated in a cyclic mode that alternates between adsorbing the desired gas species at a higher pressure and releasing them at a lower pressure (Figure 1). Due to the variations in time and space, the PSA system is mathematically described by PDEs based on the mass, energy, and momentum balances listed in the Supplementary Information (SI, equations S.1-S.19). Notably, discontinuities are introduced by a sequence of frequent control actions on the pressure levels, resulting in multiple discrete stages, e.g., adsorption, blowdown, evacuation, and feed pressurization, while each stage is operated continuously. Hence, the overall process belongs to a class of combined discrete/continuous systems, which requires additional effort in the model formulation and numerical solution [37]. The process model of PSA is based on the work of Haghpanah et al. [11] and is implemented in Modelica using Dymola. The weighted essentially non-oscillatory (WENO) method, a finite volume method, is applied to discretize the PDEs into DAEs using 30 finite volumes. The combined discrete/continuous feature of PSA is first described by a superstructure formulation of all PSA stages (SI, equation S.19), and then external controls (binary variables; see Table S1 in the SI) are imposed to determine which stage to execute. As such, the combined discrete/continuous PSA system is transformed into a set of continuous subsystems, each mathematically described by DAEs. The simulation of PSA requires the numerical integration of a series of initial value problems (IVPs). The PSA cycle is simulated repeatedly until it reaches CSS. The simulation result is given in section S3 of the SI, as it is not the key finding of this work. Haghpanah's model has been validated experimentally [38,39], and our simulation result is in good agreement (SI, Table S2) with those reported by Haghpanah et al. [11].
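The repeated cycle simulation until CSS is, structurally, a fixed-point iteration on the state profile. The sketch below illustrates this with a contractive linear map standing in for the (far more expensive) integration of one PSA cycle; `run_cycle`, the toy matrices, and `delta` are illustrative assumptions, not the paper's model.

```python
import numpy as np

def run_cycle(x):
    """Toy stand-in for one PSA cycle simulation: a contractive linear
    map whose fixed point plays the role of the CSS profile.
    (Illustrative only; the real cycle integrates the DAE system.)"""
    A = np.array([[0.6, 0.1], [0.2, 0.5]])
    b = np.array([1.0, 2.0])
    return A @ x + b

def simulate_to_css(x0, delta=1e-8, max_cycles=1000):
    """Repeat the cycle until the state change over one full cycle
    falls below the tolerance: ||x(t) - x(t + t_cycle)|| < delta."""
    x = np.asarray(x0, dtype=float)
    for n in range(1, max_cycles + 1):
        x_next = run_cycle(x)
        if np.linalg.norm(x_next - x) < delta:
            return x_next, n
        x = x_next
    raise RuntimeError("CSS not reached within max_cycles")

x_css, n_cycles = simulate_to_css([0.0, 0.0])
```

Because the toy map is contractive, the iteration converges geometrically; the real PSA cycle map converges much more slowly, which is why accelerating CSS convergence is an active research topic.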

State-of-the-Art of Hybrid Optimization Framework
The hybrid optimization framework integrates TSEMO [40] with DyOS [41]. The characteristics of the two methods are summarized in Table 1. TSEMO uses the input-output dataset of simulation results to train a GP-based surrogate model, which is refined iteratively by sampling new input data points for additional simulations. Thompson sampling serves as the acquisition function for updating the dataset. In each iteration, the surrogate model is used as the evaluation function for multiobjective optimization [40]. With these characteristics, TSEMO belongs to the class of Bayesian optimization methods [42]. NSGA-II is the optimizer within TSEMO, so the searching strategy of TSEMO is stochastic and optimality cannot be guaranteed. DyOS contains a local sequential dynamic optimization solver, so its searching strategy is gradient-based (deterministic) and optimality can be secured. The original dynamic process model is required to calculate the gradient information, and thus the function evaluations of DyOS are based on the rigorous process model. Alternatively, the Modelica model can be compiled as a functional mock-up unit (FMU) [43].
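To illustrate the acquisition step, the sketch below draws one random function from a GP posterior and proposes the candidate input that maximizes the draw, which is the essence of Thompson sampling. The RBF kernel, its hyperparameters, and the one-dimensional toy objective are our assumptions and are far simpler than the GPs used inside TSEMO.

```python
import numpy as np

def rbf_kernel(X1, X2, length=0.3, var=1.0):
    """Squared-exponential covariance between two 1-D input sets."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-6):
    """GP posterior mean and covariance at the test inputs."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_test)
    Kss = rbf_kernel(X_test, X_test)
    K_inv = np.linalg.inv(K)
    mu = Ks.T @ K_inv @ y_train
    cov = Kss - Ks.T @ K_inv @ Ks
    return mu, cov

def thompson_sample_next(X_train, y_train, X_cand, rng):
    """Draw one random function from the GP posterior and propose the
    candidate that maximizes the draw (Thompson sampling)."""
    mu, cov = gp_posterior(X_train, y_train, X_cand)
    cov = 0.5 * (cov + cov.T) + 1e-9 * np.eye(len(X_cand))  # jitter
    f_draw = rng.multivariate_normal(mu, cov)
    return X_cand[np.argmax(f_draw)]

rng = np.random.default_rng(0)
X_train = np.array([0.1, 0.4, 0.9])
y_train = np.sin(2 * np.pi * X_train)     # toy "simulation" output
X_cand = np.linspace(0.0, 1.0, 101)
x_next = thompson_sample_next(X_train, y_train, X_cand, rng)
```

The randomness of the draw is what balances exploration (high posterior variance) against exploitation (high posterior mean), as discussed for TSEMO in the SI.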

Table 1. Characteristics of TSEMO, DyOS and hybrid framework
TSEMO runs Dymola through Dymosim.exe for simulation-based stochastic optimization, while DyOS takes an FMU as a model input for gradient-based optimization.
As a reference, we also employ NSGA-II, a well-established evolutionary algorithm, to optimize the original process model of PSA.

Optimization formulation of PSA using the hybrid framework
One of the challenges in PSA optimization stems from the multiple (conflicting) criteria for the final product. In this work, we employ PSA for CO2 capture, and two optimization objectives are considered: maximizing (i) the recovery rate and (ii) the purity of the product gas, CO2.
\[
\mathrm{Purity} = \frac{\mathrm{CO_2}\ \text{in product within a CSS cycle}}{\text{total gas in product within a CSS cycle}} \times 100\%
\]
The details of the hybrid approach (1st TSEMO + 2nd DyOS) are formulated in this section.
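As a sketch of how these performance indicators are evaluated once a CSS cycle has converged, the snippet below computes the purity from the definition above; the recovery function uses the standard definition (CO2 captured in the product relative to CO2 fed), which is our assumption, and all mole amounts are hypothetical.

```python
def purity(co2_in_product, total_gas_in_product):
    """Purity [%]: CO2 in product / total gas in product, over one CSS cycle."""
    return 100.0 * co2_in_product / total_gas_in_product

def recovery(co2_in_product, co2_in_feed):
    """Recovery [%]: CO2 captured in product / CO2 fed, over one CSS cycle.
    Standard definition, assumed here; the paper's exact expression is in
    its equation set."""
    return 100.0 * co2_in_product / co2_in_feed

# Hypothetical mole amounts integrated over one CSS cycle
p = purity(co2_in_product=8.5, total_gas_in_product=10.0)
r = recovery(co2_in_product=8.5, co2_in_feed=10.6)
```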

First step: optimization formulation using TSEMO
TSEMO can deal with multiobjective optimization problems directly, and the two objectives can be inserted in the solver without any further reformulation. The formulation is constrained by the process equations (cf. SI, S.1-S.19). The evaluation and optimization of PSA are only meaningful after the process reaches CSS. As a criterion for CSS, a small tolerance value δ is used to check the difference between the state variables x over one cycle: when ‖x(t) − x(t + t_cycle)‖ < δ, the process is deemed to be at CSS. The optimization is performed with respect to θ, a vector of six decision variables of the four-stage PSA system: the duration of the first stage, adsorption (t_ads); the duration of the second stage, blowdown (t_bd); the duration of the third stage, evacuation (t_evac); two pressure setpoints, the intermediate pressure (P_I) and the low pressure (P_L); and the feed velocity (v_feed). The lower and upper bounds of the decision variables are given in Table 2. In this work, the highest pressure is fixed at 1 bar. The duration of the pressurization stage (the fourth stage) is reported to have a negligible effect on the operation of PSA; it is therefore fixed at 20 s [11].
Table 2. The ranges of the decision variables in the PSA optimization via TSEMO.

Second step: optimization formulation of PSA using DyOS
DyOS is designed to solve single-objective optimization problems. Herein, we reformulate our multiobjective optimization problem into a series of single-objective optimization problems via the epsilon-constraint method [44]. In other words, the recovery remains the objective, while the purity is reformulated as an inequality constraint. The constraint and the initial values of the decision variables are based on the results obtained from TSEMO in the first step. In case the constraint is too tight, a relaxation coefficient (0.99) is applied to the purity constraint (Eq. 8). When optimizing PSA using DyOS, the system is assumed to reach CSS within the same number of cycles as in the optimization using TSEMO (Eq. 9). The set-up of DyOS for PSA optimization is illustrated in Figure S5 (SI). The formulation of PSA optimization in DyOS is given by Eqs. (6)-(9). The PSA optimization via DyOS is conducted with respect to three decision variables: the intermediate pressure, the low pressure, and the inlet flowrate, as shown in Table 3. In initial trials with DyOS we included the duration variables, which caused the method not to converge, likely because the sensitivity integration over time is strongly affected by the duration variables. Since the reason for the unsuccessful termination is unclear at this time, we did not include the duration variables in the optimization.
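Based on the description above, the epsilon-constraint formulation of Eqs. (6)-(9) takes approximately the following form; this is a reconstruction, and the notation Purity^TSEMO (the purity of the TSEMO point used for initialization) is ours:

```latex
\begin{align}
\max_{\theta = (P_\mathrm{I},\, P_\mathrm{L},\, v_\mathrm{feed})}\quad
  & \mathrm{Recovery}(\theta) \tag{6}\\
\text{s.t.}\quad
  & \text{dynamic process model (SI, S.1--S.19)} \tag{7}\\
  & \mathrm{Purity}(\theta) \ge 0.99\,\mathrm{Purity}^{\mathrm{TSEMO}} \tag{8}\\
  & N_{\mathrm{cycles}} = N_{\mathrm{cycles}}^{\mathrm{TSEMO}},\qquad
    \theta^{\mathrm{lb}} \le \theta \le \theta^{\mathrm{ub}} \tag{9}
\end{align}
```

Eq. (8) reflects the relaxed purity constraint and Eq. (9) the fixed cycle count carried over from the TSEMO run.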
Table 3. The ranges of the decision variables in the PSA optimization via DyOS.

First step: optimization using TSEMO
To initialize TSEMO, 30 random sets of inputs were sampled using a Latin hypercube sampling (LHS) method, and the simulation inputs and outputs (i.e., recovery and purity) were used to train the initial GPs. Then, random samples were drawn from the GPs, and multiobjective optimization was performed. Following this, new inputs for simulations were recommended by the algorithm to improve the objectives, and the new data points were added to the dataset for GP surrogate training in the next iteration. In this case study, we discuss the optimization results after 50, 100, 200, 300, 400, 500, and 600 PSA simulations recommended by TSEMO. Figure 3(a) shows the obtained Pareto front, which represents the trade-off between recovery and purity, for different numbers of simulations.
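The LHS initialization can be sketched as follows; the `latin_hypercube` helper and the two example variable bounds are illustrative (the actual bounds for all six decision variables are given in Table 2).

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng):
    """Latin hypercube sampling: each variable's range is split into
    n_samples equal strata, one point is drawn per stratum, and the
    strata are shuffled independently per dimension."""
    bounds = np.asarray(bounds, dtype=float)     # shape (n_dims, 2)
    n_dims = bounds.shape[0]
    u = rng.uniform(size=(n_samples, n_dims))
    strata = np.array([rng.permutation(n_samples)
                       for _ in range(n_dims)]).T
    unit = (strata + u) / n_samples              # points in [0, 1)^d
    return bounds[:, 0] + unit * (bounds[:, 1] - bounds[:, 0])

rng = np.random.default_rng(1)
# Hypothetical bounds for two of the six decision variables:
# adsorption duration t_ads [s] and low pressure P_L [bar]
bounds = [(20.0, 200.0), (0.05, 0.5)]
X0 = latin_hypercube(30, bounds, rng)
```

Compared with purely uniform sampling, the stratification guarantees that every slice of each variable's range is covered by exactly one of the 30 initial simulations.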
The hypervolume can be used as an indicator to quantify the performance of multiobjective optimization [45,46]. Figure 3(b) shows that the hypervolume improves as the number of simulations increases. A significant improvement of the estimated Pareto front between 50 and 100 simulations is observed, while only a moderate change is observed when further increasing the number of simulations. The growth in the hypervolume is negligible once the number of iterations exceeds 200 (Figure 3b). This result can be explained in two ways: either the estimated Pareto front is already close to the actual Pareto front, leaving little room for further improvement, or the searching efficiency of TSEMO drops considerably as the identified solutions approach optimality. The latter is a known issue of any stochastic search algorithm: convergence is only guaranteed in the limit of an infinite number of function evaluations.
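For two maximized objectives, the hypervolume indicator reduces to the area dominated by the Pareto points with respect to a reference point and can be computed by a simple sweep; the Pareto points below are hypothetical purity/recovery pairs.

```python
import numpy as np

def hypervolume_2d(points, ref=(0.0, 0.0)):
    """Hypervolume (area) dominated by a point set for two maximized
    objectives, measured from a reference point below/left of all points."""
    pts = np.asarray(points, dtype=float)
    # Filter to the non-dominated front, scanning by first objective descending
    order = np.argsort(-pts[:, 0])
    pts = pts[order]
    front = []
    best_y = -np.inf
    for x, y in pts:
        if y > best_y:              # not dominated by any point with larger x
            front.append((x, y))
            best_y = y
    # Each front point contributes one rectangular strip
    hv = 0.0
    prev_y = ref[1]
    for x, y in front:
        hv += (x - ref[0]) * (y - prev_y)
        prev_y = y
    return hv

pareto = [(0.95, 0.70), (0.90, 0.80), (0.80, 0.90)]
hv = hypervolume_2d(pareto)   # area dominated w.r.t. reference [0, 0]
```

Adding a dominated point leaves the hypervolume unchanged, which is exactly why it is a robust progress measure for the evolving Pareto front.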

Second step: optimization using DyOS
One issue with the stochastic global search is the lack of local refinement of the identified solutions. In particular, TSEMO does not use gradient information to improve the approximate solutions further. Hence, it is desirable to perform a further gradient-based optimization initialized from the approximate solution points obtained in the first step. Following 600 simulations via TSEMO, we selected 22 non-dominated points with purity over 80% and recovery over 75% as the starting points for the second step. For every individual point, DyOS is called to perform gradient-based optimization using the full model. As shown in Table 4, the CPU time of DyOS is almost three times that of TSEMO. This is because TSEMO uses cheap-to-evaluate surrogate models, for which parallel computing is possible. By contrast, DyOS relies on gradients calculated from the sensitivity integration over all PSA cycles, and thus a large percentage of the time is spent obtaining the gradient information. Notably, the full-order physical model is evaluated to ensure the accuracy of the result, which further increases the CPU cost in the second step. Hence, the second step is time-consuming.

Discussion
To demonstrate the efficiency of this hybrid framework, we first compare the performance of TSEMO with that of NSGA-II. As shown in Figure 5, the estimated Pareto front from TSEMO is comparable to that of NSGA-II, while NSGA-II requires a significantly larger number of simulations. As shown in Table 5, TSEMO with 100 simulations reaches a hypervolume close to that of NSGA-II with 2,400 simulations, while using only around 1/16th of the CPU time. This is reasonable because TSEMO trains the GP surrogate for the function evaluations during optimization, which is far less CPU-intensive than the rigorous model. NSGA-II is in fact the optimizer within the TSEMO framework, so TSEMO has a similar exploration capacity to NSGA-II. In addition, TSEMO employs Thompson sampling as the acquisition function to choose new sampling points, which improves the exploitation capability. Therefore, the efficiency of TSEMO is higher than that of NSGA-II. From Table 4, we notice that the optimization result from TSEMO is close to that of DyOS, but DyOS costs significantly more CPU time. Nevertheless, the deterministic local search offers distinct advantages for the considered case study. First, DyOS verifies that the optimization result of TSEMO is 'good enough'; without this verification, TSEMO alone provides no criterion for checking optimality. Second, DyOS does improve the optimization result. A slight improvement of the operating conditions may make little difference over one hour on a laboratory set-up, but it can be significant for an industrial PSA plant operated year-round. Last but not least, the searching efficiency of DyOS is higher than that of TSEMO when the result is near the optima. We introduce a metric to quantify the searching efficiency: the hypervolume improvement achieved per unit of CPU time. As shown in Figure 6a, the growth of the hypervolume slows down over the TSEMO iterations, while the CPU time per iteration gradually increases. Thus, the searching efficiency
of TSEMO drops sharply after the 3rd iteration. DyOS is initialized from the result of the 7th TSEMO iteration. The searching efficiency of DyOS is over 11 times that of TSEMO at its 7th iteration (Table 6). This means that TSEMO would require far more than 11 times the CPU time to achieve the same trade-off curve calculated by DyOS, given that the searching efficiency of TSEMO keeps decreasing. TSEMO is a stochastic search algorithm; theoretically, it converges to optimality only in the limit of an infinite number of function evaluations. In other words, the searching efficiency of TSEMO inevitably declines and eventually approaches zero. This is an inherent characteristic of stochastic methods, which focus on space-filling rather than on the improvement of individual points, as gradient-based methods do. Both TSEMO and DyOS tend to find better results than in the previous iteration, but the improvement on individual points differs greatly. As shown in Figure 6b, the average hypervolume improvement per individual point drops significantly as the TSEMO iterations proceed, while DyOS can still exploit the gradient to further improve an individual point (the operating conditions for a new simulation). As shown in Table 6, the difference in hypervolume improvement per individual point between DyOS and the last TSEMO iteration can be a factor of 553 (1.66 versus 0.003). In other words, in the proximity of an optimal solution, DyOS possesses a significantly higher exploitation capacity than TSEMO.
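The metric and the quoted factor can be reproduced directly from the Table 6 values; `searching_efficiency` reflects our reading of the metric (hypervolume gain per unit CPU time), not a formula reproduced from the paper.

```python
def searching_efficiency(delta_hypervolume, cpu_time):
    """Searching efficiency: hypervolume improvement per unit CPU time
    (our reading of the metric introduced in the Discussion)."""
    return delta_hypervolume / cpu_time

# Hypervolume improvement per individual point reported in Table 6
hv_per_point_tsemo = 0.003   # last TSEMO iteration
hv_per_point_dyos = 1.66     # DyOS refinement
ratio = hv_per_point_dyos / hv_per_point_tsemo  # the factor of ~553 quoted above
```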

Conclusions and outlook
When solving the multiobjective optimization problem of PSA deterministically, the main challenge is the high computational cost. In this work, a hybrid (TSEMO + DyOS) optimization framework is developed to secure high searching efficiency and accuracy for a four-stage PSA system with an application in CO2 capture.
In the hybrid optimization framework, the first step employs our open-source Bayesian optimization algorithm, TSEMO, to search the full decision space efficiently. This step identifies an approximate Pareto front of the two objectives, CO2 purity and recovery. In the second step, DyOS starts from the most promising points obtained in the first step and further improves the optimization result of PSA until local optimality. The small improvement in the second step indicates that TSEMO can achieve nearly optimal operating conditions of PSA within a limited number of simulations.
The hybrid optimization framework possesses excellent optimization efficiency. This efficiency benefits from the coarse-to-fine function evaluations and the stochastic-to-deterministic searching strategy. TSEMO employs GP surrogates for the function evaluations in the initial coarse search; hence, the efficiency of TSEMO is higher than that of NSGA-II. However, the searching efficiency of TSEMO drops dramatically near the optimal condition, where the hybrid framework can use DyOS to further improve the searching efficiency by over 10 times.
This is because TSEMO belongs to the stochastic methods, which are weaker in exploitation than deterministic methods when the solution is nearly optimal. Therefore, the overall searching efficiency for PSA optimization can be ranked as follows: hybrid (TSEMO + DyOS) framework > TSEMO > NSGA-II.
Ideally, the hybrid framework could be implemented iteratively: (TSEMO → DyOS) → (TSEMO → DyOS) → (TSEMO → DyOS) ... Such an iterative scheme can better balance exploration and exploitation, thus leading to faster convergence to the optimal solution. In the case study of PSA, however, the optimization result from TSEMO was already 'good enough', as judged against the results of NSGA-II (2,400 simulations / 63 hours in total) and DyOS.
Moreover, the second step with DyOS consumed significantly more time. As a result, the iterative variant of the hybrid framework was set aside. In the future, two factors might make the iterative variant more appealing and practical: (1) fast evaluation of the PSA process model, i.e., reformulating the PSA model so that the system converges efficiently to the cyclic steady state; and (2) parallel computing in DyOS, i.e., initializing the exploitation for all individual points simultaneously.
This hybrid multiobjective optimization framework can be used to explore other competing criteria, such as the energy consumption and productivity of PSA. Further, the approach can be extended to the optimization of any other complex, expensive-to-evaluate dynamic process.
TSEMO alone already delivers a 'good-enough' trade-off curve among multiple criteria at a relatively low time cost, while the hybrid framework can be used to accelerate the convergence of the trade-off curve to the truly optimal solution. Pursuing optimality can be especially meaningful for high-value processes, because a slight improvement of the operating conditions can make a significant impact on an industrial plant operated year-round.
A column packed with solid adsorbent is considered, and the following assumptions are used to derive the balance equations: (1) A one-dimensional dispersed plug flow model is applied to simulate the bulk fluid flow in the axial direction.
(2) No mass, temperature, or pressure gradients exist in the radial direction.
(3) Ideal gas law is applied for the state of the gas phase.
(4) Ergun equation is used for the pressure drop in the axial direction.
(5) The thermal equilibrium between the gas and solid phase is established instantaneously.
(6) Diffusion through adsorbent pores is considered as molecular diffusion in the macropores.
(7) Multisite Langmuir model is applied to calculate the solid phase saturation loading.
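As an example of assumption (4), the Ergun equation gives the axial pressure gradient through the packed bed as the sum of a viscous (laminar) and an inertial (turbulent) contribution; the property values below are hypothetical and do not correspond to the paper's column.

```python
def ergun_pressure_gradient(u, mu, rho, eps, d_p):
    """Ergun equation: axial pressure drop per unit length [Pa/m]
    through a packed bed of spherical particles."""
    viscous = 150.0 * mu * (1.0 - eps) ** 2 / (eps**3 * d_p**2) * u
    inertial = 1.75 * rho * (1.0 - eps) / (eps**3 * d_p) * u**2
    return viscous + inertial

# Hypothetical values for an adsorbent-packed column
dpdz = ergun_pressure_gradient(
    u=0.1,       # superficial gas velocity [m/s]
    mu=1.8e-5,   # gas viscosity [Pa s]
    rho=1.2,     # gas density [kg/m^3]
    eps=0.37,    # bed voidage [-]
    d_p=2e-3,    # particle diameter [m]
)
```

At the low superficial velocities typical of PSA columns, the viscous term dominates the inertial term.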
The balance equations comprise the total mass balance in the gas phase; the component mass balances (N_comp − 1 components) in the gas phase; the component mass balance in the solid phase; the energy balance inside the column; the energy balance in the column wall; and the mass transport coefficient. In the multisite Langmuir model, the solid-phase saturation loadings of sites 1 and 2 are calculated from an Arrhenius-type temperature dependence (S.9).

S3 Simulation of PSA
The model equations are discretized using a finite volume method, and the partial differential equations (PDEs) are turned into a set of differential-algebraic equations (DAEs). This DAE system was initially set up in Matlab and later transferred to Dymola, a Modelica platform. If the column is discretized into 30 equal volumes ("30" is recommended by Haghpanah [1] based on both accuracy and efficiency), 1,220 equations are generated. In numerical solving, MATLAB tends to proceed equation by equation, build a large sparse matrix, and solve the equations iteratively. By contrast, Modelica is an object-oriented modelling language [2], in which all equations are handled simultaneously.

Table S1. Binary variables for four stages

After completing the four stages, the cycle time (t_cycle) is re-initialized to 0 and the simulation of another PSA cycle is started. As a consequence, the PSA cycle is simulated iteratively until a cyclic steady state (CSS) is reached. Theoretically, when a CSS is reached, the column profile is expected to be the same between the same step in two subsequent cycles. In mathematical terms, when ‖x(t) − x(t + t_cycle)‖ < δ, PSA is deemed to be under CSS. The dynamic simulation of PSA is shown in Figure S2.

S5 Framework of TSEMO

TSEMO is an in-house algorithm to solve multiobjective optimization problems [2]. This algorithm aims to identify the Pareto front between multiple objectives of expensive-to-evaluate models. First, a small dataset of simulations is collected using a space-filling design method (e.g., Latin hypercube sampling). Subsequently, two individual GP surrogate models are trained on the two objectives.
Then, TSEMO takes random samples of the GPs and uses a multiobjective genetic algorithm to identify the Pareto front of the two sampled functions.Among the final population of the genetic algorithm, TSEMO selects the next sample point based on an expected hypervolume improvement.
Within this approach, the randomness of the samples balances the effort spent on exploration of the experimental domain (reduction of model uncertainty) against exploitation of the objectives (finding regions of optimality). The algorithm terminates when the desired number of simulations is reached (the stopping criterion is the allocated experimental budget). The framework of TSEMO is shown in Figure S3.
Two essential characteristics of TSEMO are (i) the built-in GP-surrogate learning and (ii) the adaptive sampling method. Adaptive sampling applies a hypervolume indicator to provide a 'sampling direction'. The hypervolume is a quantitative measure of how well the Pareto front is approximated [3,4]. TSEMO searches for new sampling points that improve the hypervolume indicator.

S6 Framework of DyOS
DyOS is a framework for adaptive direct sequential multi-stage dynamic optimization [5]. The values of the decision variables corresponding to the Pareto front of the hybrid approach are tabulated below. The evacuation pressure (P_L) and the blowdown pressure (P_I) are driven to their lower bounds following the gradient in DyOS, while the inlet flowrate changes little. Haghpanah et al. reported that a lower evacuation pressure (P_L) can remove side products and improve CO2 recovery [1], which is consistent with our result.
Subraveti et al. applied an ANN-based surrogate model to represent the original model, which was coupled with the nondominated sorting genetic algorithm II (NSGA-II) for multiobjective optimization. The CPU time was reported to be 10 times shorter compared to NSGA-II coupled with the original PSA model [17]. Leperi et al. employed individual ANN-based surrogate models to represent typical PSA stages. These surrogate-based PSA stages can then be used to synthesize different types of cycles (three-stage, four-stage or five-stage) [21]. Boukouvala et al. applied a grey-box method to capture both the analytical information of the physical models and the noise information by a GP-based surrogate model.

Figure 1. Four-stage PSA for CO2 capture.

The proposed hybrid optimization framework consists of two steps. In Step 1, TSEMO searches the decision space globally to generate an approximate trade-off curve containing the best points obtained by TSEMO. In Step 2, DyOS is initialized at one of the best points obtained in Step 1 and improves the solution until local optimality is reached. DyOS can only improve one point at a time, so the second step is repeated to improve all of the best points obtained in Step 1 one by one. Overall, the searching strategy is stochastic-to-deterministic, and the function evaluations are of a 'coarse-to-fine' type: initially the GP-based surrogate for rough evaluations, then the rigorous model for the refined evaluations. The overall optimization framework is implemented in MATLAB, as illustrated in Figure 2. The model in Dymola can be compiled into an executable file (Dymosim.exe) and a functional mock-up unit (FMU), both of which can be seamlessly integrated into the MATLAB environment. In Step 1, the PSA model is coupled to TSEMO as an executable. In Step 2, the model is coupled to DyOS through the functional mock-up interface (FMI), and then MATLAB calls DyOS through a MEX interface.

Figure 2. Illustration of the integrated platform for modeling and optimization of PSA.

As shown in Figure 4, DyOS slightly improves the estimated Pareto front until local optimality is satisfied.

Figure 4. The result of the hybrid approach for the multiobjective optimization of PSA.

Figure 5. Comparison between the Pareto sets of solutions obtained by TSEMO (100 simulations)

Figure S2. The dynamic behavior of PSA.

DyOS integrates different linear/nonlinear equation solvers, integrators, and NLP optimization solvers, and is designed for large-scale multi-stage dynamic optimization problems. Based on direct adaptive shooting algorithms, DyOS is tailored to DAEs, and it can integrate multi-stage process models continuously. Initial guesses are given to the decision variables. Several integrators are available to integrate the time-dependent variables and gradients of the DAEs over the time horizon of all stages. Following this, function values and gradient values are passed to the NLP solver for optimization. DyOS can be set up in either Matlab or Python. The framework of DyOS is shown in Figure S4.

Figure S5. Optimization set-up of PSA on DyOS.

Multiobjective optimization of PSA via TSEMO. (A1, B1, C1) Optimization results through 100 simulations recommended by TSEMO: to initialize TSEMO, LHS generated 30 simulations, shown as blue points; the algorithm then recommended an additional 100 simulations, shown as red crosses. The evolved estimate of the Pareto front is shown as black circles. (A2, B2, C2) Hypervolume quantification (reference point [0, 0]) varying from 50 to 600 simulations recommended by TSEMO.

Table 4. Optimization performance via TSEMO and DyOS (reference point of hypervolume: [0, 0]).

Table. Hybrid approach for the multiobjective optimization of PSA: the corresponding decision variables.