Computer Assisted Method Development in Liquid Chromatography

This paper describes potential applications of computer-assisted chemometrics in method development in liquid chromatography. These include modeling of retention (isocratic, gradient, molecular modeling, artificial neural networks), assessment of separation quality (peak capacity), single- and multiple-objective optimization approaches, advanced optimization algorithms (genetic algorithms, simulated annealing) and method transfer issues (transfer of methods between instruments and / or laboratories). The selected topics provide an accessible source of the information needed to increase chromatographic efficiency and economic feasibility (higher sample throughput) in liquid chromatography.


INTRODUCTION
Operational costs of typical liquid chromatographic (LC) analyses have increased considerably over recent years, although the nominal price of an analysis has remained relatively stable. These higher operational costs need to be countered by laboratories through better productivity of the analytical equipment, which may be accomplished, at least partially, by using a chemometric approach in the search for useful analytical information.
The term "chemometrics" was first introduced by Svante Wold (in Europe) and Bruce Kowalski (in the USA) back in 1972.1 Massart and coworkers2,3 defined chemometrics as a chemical discipline that uses mathematical, statistical, and other methods employing formal logic: to design or select optimal measurement procedures and experiments, to provide maximum relevant chemical information by analyzing chemical data, and to obtain knowledge about chemical systems.
The range of application of chemometrics in LC method development may vary considerably. In many instances, the primary interest is resolving all compounds in a sample. In other cases, an analyst is interested in just a few compounds, or perhaps in only one analyte present in the sample. Either way, the chromatographic system configuration, i.e. the combination of column features (nature of the stationary phase, particle size of the column packing and column length), nature and composition of the eluent, working conditions (particularly flow rate and temperature), elution mode (isocratic or gradient), the applied detector and, eventually, any chemical transformations before injection, determines the success of the separation. These elements, sometimes with opposite effects, should be appropriately tuned to obtain the maximal capability of the LC system under consideration.4 In order to find the optimal chromatographic methodology, an expert system may be used to define the search area; in such an application it acts as a precursor to the optimization chemometrics.5
However, the more usual approach is to select a group of parameters (e.g. concentration of the competing ion, pH, etc.), then to impose boundary conditions on these parameters (on the basis of pragmatic or theoretical considerations), and finally to define the search area as all possible combinations of the selected parameters. Once a search area has been defined and the experimental parameters determined, the optimization chemometrics can be applied. Typically, optimum method development may follow two scenarios. In the sequential design scenario, a limited number of experiments is performed, the outputs are evaluated by appropriate numerical values, and new experiments are designed based on the results and computer-assisted chemometric tools. In the simultaneous design scenario, a somewhat larger number of experiments is performed, the outputs are modeled by a suitable set of chemometric tools, and the search for the optimum method is done numerically, exclusively with a computer. This work is not exhaustive. Rather, it is based on the author's field of interest and focuses on some chemometric strategies, algorithms and tools needed to fulfill specific demands and goals in efficient LC method development.

RETENTION MODELING
Retention modeling is probably the most important topic when discussing the application of chemometrics in LC; successful method development is always based on good retention prediction. There is a range of models for predicting analyte retention in LC. In general, they can be divided into two main categories: (i) models built to describe the retention of a given set of solutes under changing chromatographic conditions: the retention of a given solute at given chromatographic conditions is predicted from a model derived from previous measurements for the same solute performed under varying chromatographic conditions (e.g. different solvent strengths); (ii) models built for a specific chromatographic system to predict the retention of new solutes: the retention of new solutes is predicted from a model derived from the retention data of a representative set of substances, all measured under the same chromatographic conditions. Although artificial neural networks (ANNs) may be used to form models belonging to both categories, the prediction power of ANNs as a nonlinear modeling tool justifies placing them in a separate category, discussed below.

Models for a Given Set of Solutes
When discussing method optimization in LC, one is primarily interested in applying solvent gradients to improve the method efficiency.−8 However, other gradient techniques may be more useful for solving specific problems. RP gradient elution with aqueous-organic mobile phases provides excellent results for the separation of peptides, proteins and other biopolymers; the separation is based on differences in hydrophobicity.9−14 An alternative is ionic-strength-gradient ion-exchange chromatography (IC), which discriminates charged biopolymers on the basis of differences in their effective charges.5,7−17 pH gradients are of limited use in RP-LC.27,28−31 There are many options to improve the separation selectivity and to separate complex samples in short times.−42 For a gradient separation under consideration, suitable column chemistry and mobile phase components should be selected before fixing the gradient program. In a later stage of method development, one can adapt the mobile phase flow rate and column dimensions to suit the sample type; the detection technique may be selected to fit the purpose, such as LC / MS.43 However, the gradient profile should be readjusted when changing other operating conditions.
The key elements of gradient method development are predictive calculations of retention volumes, bandwidths and resolution of sample compounds, as dependent on the parameters that characterize the gradient profile.5,44,45 Accurate prediction of retention volume would greatly facilitate the transfer between various columns and instruments.−50 Such a theory would also help in evaluating the instrument effects on deviations from the "ideal" retention behaviour.5,44,45,51 A significant reduction of the experimental effort needed for gradient elution modeling is obtained by using a crossover procedure from the isocratic to the gradient elution mode.52 The model is based on the integral equation of gradient elution. The retention time of a solute, t_g, is described in terms of measurable properties (the capacity factor, k, and the void time of the column, t_0):

∫_0^(t_g − t_0) dt / (t_0 k) = 1 (1)

Upon the inclusion of the time-independent term k[c] (c denotes the concentration of the eluent competing ion) within the time integral, one may easily switch to gradient elution by allowing for the temporal variation of c:

∫_0^(t_g − t_0) dt / k[c(t)] = t_0 (2)

k[c] can be assumed constant within each gradient step, so t_0 can be approximated as:

t_0 ≈ I(t_g − t_0) = Σ_i Δt_i / k[c_i] (3)

where I represents the approximate cumulative integral. The approximate value of the cumulative integral is calculated stepwise; it is expected to increase in the course of the integration procedure and will eventually exceed the fixed (experimental) t_0-value on the left-hand side of Eq. (3) at some (t_g − t_0)-value. At this point t_g can be easily calculated as:

t_g = t_0 + (t_g − t_0)* (4)

where (t_g − t_0)* denotes the value of the upper integration limit at which I first reaches t_0.
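The stepwise procedure above can be sketched numerically. In this minimal Python sketch, the isocratic retention model k[c] = a·c^(−b) and the linear gradient profile are hypothetical placeholders (typical in form for ion chromatography, but not taken from the cited work); the cumulative sum I is accumulated step by step until it reaches the experimental t_0.

```python
def k_of_c(c, a=50.0, b=2.0):
    # Hypothetical isocratic retention model: k = a * c**(-b),
    # c being the competing-ion concentration
    return a * c ** (-b)

def gradient_c(t, c_start=1.0, c_end=5.0, t_grad=10.0):
    # Linear gradient in competing-ion concentration, held at c_end afterwards
    if t >= t_grad:
        return c_end
    return c_start + (c_end - c_start) * t / t_grad

def gradient_retention_time(t0=1.0, dt=0.001, t_max=100.0):
    """Stepwise evaluation of I = sum(dt / k[c(t)]) until it exceeds t0;
    the gradient retention time is then t_g = t0 + (t_g - t0)*."""
    I = 0.0
    t = 0.0
    while t < t_max:
        I += dt / k_of_c(gradient_c(t))
        t += dt
        if I >= t0:
            return t0 + t
    return None  # analyte did not elute within t_max

tg = gradient_retention_time()
```

With these illustrative parameters the sum reaches t_0 = 1 min at about 7.3 min of gradient time, giving t_g of roughly 8.3 min; a finer step dt trades speed for accuracy.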

Models for New Set of Solutes
The second category of prediction models is formed by linear free-energy relationships (LFERs) and quantitative structure-retention relationships (QSRRs). A typical LFER is the solvation equation

log SP = c + rR_2 + sπ_2^H + aΣα_2^H + bΣβ_2^H + vV_x (6)

where log SP is the solute property described (e.g. log k), R_2 is the excess molar refraction, π_2^H the dipolarity / polarizability, Σα_2^H and Σβ_2^H the overall hydrogen bond donating and accepting potencies, respectively, V_x the characteristic volume, c a constant, while r, s, a, b, and v are the regression coefficients. These solute descriptors are of empirical or semiempirical origin and are obtained experimentally from solvatochromic measurements.
Obviously, LFER models for retention prediction appear as special cases of QSRR models. However, LFER models are usually specified explicitly, and the more general term QSRR is not used for them. One of the most frequent and simplest applications of QSRRs in RP-LC, proposed by Martin and Synge,55 is to relate the retention time to the solutes' partition coefficients. Usually the logarithm of the n-octanol-water partition coefficient, clog P, introduced by Hansch and Leo,56 is used:

t_R = b_0 + b_1 clog P (7)

where t_R is the isocratic retention time, and b_0 and b_1 are the regression coefficients. Another QSRR model, proposed by the group of Kaliszan, uses quantum chemical indices and analyte structural descriptors from computational chemistry:57

log k = b_0 + b_1 μ + b_2 δ_Min + b_3 A_WAS (8)

where b_0 to b_3 are the regression coefficients, μ is the total dipole moment, δ_Min is the electron excess charge of the most negatively charged atom, and A_WAS is the water-accessible molecular surface area. The descriptors account for dipole-dipole and dipole-induced dipole interactions (μ), polar interactions (δ_Min), and dispersive interactions (London-type interactions, A_WAS).
Both methodologies try to describe the chromatographic retention on a given chromatographic system based on a limited set of analytes characterized by their descriptors. The selection of a proper descriptor set is done more or less arbitrarily, and reflects the most important properties of the analytes as viewed by the analyst. In the case of successful correlation, the models can be used for future prediction of the retention of new solutes, usually belonging to the class of analytes used to construct the model. Validation is probably the most important part of model development in this case. It should consist of the evaluation of the prediction performance of the model for future solutes. Ideally, an external test set should be used if available, but for small data sets internal validation methods could be used (e.g. n-fold cross-validation). There are some recent reviews of QSRR in the literature.58,59 Among the statistical tools used for model construction, multiple linear regression (MLR) was the common first choice. It still remains the most abundant one, but other approaches, such as partial least squares (PLS) regression and artificial neural networks (ANN), are gaining importance. Contemporary studies often use linear and nonlinear modeling methodologies in parallel, or some combination of both in so-called two-step approaches.
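The MLR step can be illustrated with a clog P-type QSRR of the form of Eq. (7); the descriptor values and retention data below are invented for demonstration only, and the fit is checked with the coefficient of determination.

```python
import numpy as np

# Hypothetical training set: clog P descriptors and measured log k values
clogP = np.array([1.2, 1.8, 2.5, 3.1, 3.9, 4.4])
logk  = np.array([0.10, 0.32, 0.58, 0.81, 1.12, 1.30])

# Design matrix with an intercept column; ordinary least squares fit
X = np.column_stack([np.ones_like(clogP), clogP])
b, *_ = np.linalg.lstsq(X, logk, rcond=None)   # b[0] = b_0, b[1] = b_1

# Goodness of fit (R^2) on the training data
pred = X @ b
r2 = 1 - np.sum((logk - pred) ** 2) / np.sum((logk - logk.mean()) ** 2)
```

In practice the coefficients would be validated on an external test set or by n-fold cross-validation, as noted above, rather than judged by the training R^2 alone.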

Artificial Neural Networks
ANNs are capable of generating complex non-linear models directly from basic knowledge, i.e. merely from the input / response data pairs {X_i, P_i}. This is probably the reason why ANNs (Figure 1) have gained so much popularity.−68 In general, neural networks are a set of tools capable of clustering, classification (which will not be addressed in this mini review), and modeling.69 With the exception of some very specific designs (like the 1-d Kohonen ANN), ANNs are usually intended not for optimization purposes but for non-linear modeling only (with exceptions, e.g. in process control). The most widely used ANN architecture for general non-linear modeling is the feed-forward network with the error-backpropagation learning strategy.70 Although counterpropagation ANNs71,72 are increasingly used in biological and pharmaceutical QSAR studies,73−75 only the error-backpropagation ANNs will be discussed here. Radial Basis Function (RBF) networks76,77 are very important for data that appear in clusters, and are important theoretically for their formal relationship with fuzzy logic modeling. This view is supported by some commercial ANN packages that commonly offer RBF as a method of choice (STATISTICA, MATLAB). However, they are in essence systems of equations which can be solved by the multiple linear regression (MLR) method.66
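The error-backpropagation strategy can be sketched in a few lines. The following is a minimal 1-8-1 feed-forward network trained on a toy nonlinear target (y = x^2), standing in for a nonlinear retention surface; the architecture, learning rate and epoch count are illustrative choices, not recommendations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a nonlinear input/response relationship on [-1, 1]
X = np.linspace(-1.0, 1.0, 21).reshape(-1, 1)
y = X ** 2

# 1-8-1 network: tanh hidden layer, linear output
W1 = rng.normal(0.0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

lr = 0.2
for epoch in range(8000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    err = out - y
    # backward pass: propagate the error through the layers
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)   # tanh derivative
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)
    # gradient-descent weight update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

out = np.tanh(X @ W1 + b1) @ W2 + b2
mse = float(np.mean((out - y) ** 2))
```

Since the target is an even function and tanh is odd, the network must shift the hidden biases away from zero to fit it, which full-batch backpropagation accomplishes here without any special initialization.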

PEAK SHAPE MODELING
Proper modeling of retention is satisfactory for many applications in LC. Yet, recent demands for increasing productivity using gradients, in combination with the ever-growing complexity of analyzed samples, introduce an additional request on the analytical system: besides being fairly separated, the peaks are required to be as "smoothly" shaped as possible to ensure their precise quantification. In other words, analysts are becoming interested in peak shapes and peak shape modeling as well. The search for flexible peak functions, originating either from theory or purely empirical, has been challenged by the significant variation of peak shapes observed in LC. There are two major applications of peak functions in chromatographic data processing. The first is the deconvolution of overlapped peaks and the second is the smoothing of experimental peaks for the determination of statistical moments.78,79−87 In both applications, however, the first requirement is the capability of the peak functions to describe the real chromatographic response: the peaks should be described perfectly, if possible. In terms of empirical functions, the functions should be flexible enough to fit peaks of different shapes.
Under ideal conditions, a chromatographic peak may often be described by the Gaussian function:

h(t) = H_0 exp[−(t − t_R)^2 / (2σ^2)] (9)

where t represents time, H_0 is the peak height at the analyte's retention time, t_R, and σ denotes the standard deviation that measures the peak width. The Gaussian function may be applied to symmetric peaks only. For asymmetric peaks, the most popular model is the exponentially modified Gaussian (EMG) function.88 However, the flexibility of this function is rather limited. A solution to this problem was proposed by decomposing the Gaussian function into two separate functions, one describing the leading portion of the peak and the other describing the trailing edge.89 After recombination of the two functions, the so-called empirically transformed Gaussian (ETG) function is obtained.
The function does not have a rigorous physicochemical foundation, but it fits many experimental peaks acceptably. In addition, it fits many other peak functions, such as the EMG, Giddings, Haarhoff-Van der Linde, Poisson, log-normal, statistical and nonlinear chromatography functions, and Edgeworth-Cramér series, apparently perfectly, and may serve as a general replacement for them. Other functions used for fitting chromatographic peaks have been derived, for example the polynomial modified Gaussian (PMG), the generalized exponentially modified Gaussian (GEMG) function90,91 and a hybrid function of Gaussian and truncated exponential functions (EGH).92 The PMG model attributes the deviations from ideality to a time-dependent standard deviation; a polynomial equation is used to describe the temporal variation:

h(t) = H_0 exp{−(t − t_R)^2 / [2σ^2(t)]},  σ(t) = σ_0 + σ_1(t − t_R) + σ_2(t − t_R)^2 + ... (10)

All the above-mentioned functions are empirical in nature; they were reported to offer better fits than the popular EMG model. They were all used for deconvolution of overlapped peaks and were found capable of fitting very asymmetric tailing peaks. Among the functions mentioned, only the ETG function was designed to fit symmetric, fronting and tailing peaks; the other three functions were designed to fit tailing peaks exclusively. Their main drawback is the large number of parameters. In a recent contribution, the generalized logistic function was employed in an empirical manner to fit asymmetric peaks. Its main advantage is the small number of parameters (three); yet it is relatively flexible and capable of fitting both fronting and tailing peaks.93
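As an illustration of peak-shape handling, the sketch below evaluates an EGH-type hybrid function (a Gaussian whose squared-deviation denominator grows linearly in time, truncated where it becomes non-positive; the exact parameterization in the cited work may differ in detail) and computes statistical moments of the sampled profile with arbitrary parameter values.

```python
import numpy as np

def egh(t, H0=1.0, tR=5.0, sigma=0.3, tau=0.2):
    # EGH-type hybrid of a Gaussian and a truncated exponential:
    # H(t) = H0 * exp(-(t - tR)^2 / (2*sigma^2 + tau*(t - tR)))
    # where the denominator is positive, 0 elsewhere; tau > 0 gives tailing
    d = 2.0 * sigma ** 2 + tau * (t - tR)
    out = np.zeros_like(t)
    mask = d > 0
    out[mask] = H0 * np.exp(-(t[mask] - tR) ** 2 / d[mask])
    return out

def moments(t, h):
    # Area, centroid, variance and skewness of a uniformly sampled peak
    dt = t[1] - t[0]
    area = np.sum(h) * dt
    mean = np.sum(t * h) * dt / area
    var = np.sum((t - mean) ** 2 * h) * dt / area
    skew = np.sum((t - mean) ** 3 * h) * dt / area / var ** 1.5
    return area, mean, var, skew

t = np.linspace(0.0, 15.0, 20001)
_, mean_tail, _, skew_tail = moments(t, egh(t, tau=0.2))   # tailing peak
_, mean_sym, _, skew_sym = moments(t, egh(t, tau=0.0))     # pure Gaussian
```

A positive tau yields a positive skewness and shifts the centroid beyond t_R, while tau = 0 recovers the symmetric Gaussian case; moment-based checks of this kind are exactly what smoothed peak fits are used for.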

MEASURES OF SEPARATION QUALITY
Separation quality is another important property to be quantified in the chemometric approach to LC method development. There are several measures of separation quality currently used by chromatographers. The most commonly used yardstick is the plate count, which is also often regarded as a benchmark for the separation power or the quality of a column. However, the plate count has several disadvantages. It uses only a single peak, and it cannot easily be applied to measure the separation power of a gradient separation. On the other hand, the concept of peak capacity is much more versatile and, at the same time, very intuitive. The concept was first described by Giddings94 and soon elaborated by Horváth for use in gradient chromatography.95 It measures the separation power using information from the entire chromatographic space together with the variability of the peak widths over the chromatogram. However, its theoretical treatment is very complex, since one needs to be able to assess the changes in peak width over the separation space.−98 This is particularly true for two-dimensional separations, where gradients are almost inevitable. There is a good review of the use of the peak capacity concept in multidimensional separations.99 Peak capacity measures the number of peaks that can fit into an elution time window t_1 to t_2 with a fixed resolution. One commonly assumes a peak spacing of 4 standard deviations τ (i.e. near-baseline resolution), and the peak capacity has therefore been defined as

P_C = 1 + ∫_{t_1}^{t_2} dt / (4τ) (11)

In the simplest case the peak width does not change with retention, and Eq. (11) simplifies to

P_C = 1 + (t_2 − t_1) / (4τ) (12)

One immediately recognizes that P_C is closely related to the resolution, a parameter that is more widely accepted among chromatographers:

R_s = (t_2 − t_1) / w (13)

where w denotes the peak width, usually taken as 4 standard deviations. By comparing Eqs. (12) and (13), one recognizes that a peak capacity value of 2 corresponds to a resolution value of 1.
If the retention time window expands, both the resolution and the peak capacity increase. In the simple linear case, the numerical values of Eqs. (12) and (13) parallel each other. However, the resolution parameter is defined for neighboring peaks only, while the peak capacity measures the entire chromatographic space of interest. In addition, the basic equation for the peak capacity, Eq. (11), allows chromatographers to treat situations where the peak width varies over the chromatographic window.
The association between resolution and peak capacity can be extended further. As can be concluded from the above, the peak capacity is the number of peaks that can be resolved with a fixed resolution of 1. The peak capacity can be calculated by summing up the resolutions of all neighboring peaks in the chromatogram, starting with n = 0 as the designation of the unretained peak:

P_C = 1 + Σ_{i=0}^{n−1} R_{s,i,i+1} (14)

This relationship becomes useful for calculating the peak capacity in many practical cases with variable peak width, where the integral of Eq. (11) cannot be solved.
The peak capacity is most commonly defined over the entire chromatogram, i.e. from the retention time of the unretained peak, t_0, to the specified end of the chromatogram. However, there is a variation of the approach: Snyder and co-workers defined the sample peak capacity using only the peaks of interest that lie in the specified range of retention times t_1 - t_2, where t_0 is the retention time of the unretained reference peak. The sample peak capacity thus becomes a useful measure of the efficiency of the separation space for a specific sample with defined first-eluting and last-eluting peaks.
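The bookkeeping behind Eqs. (12)-(14) reduces to a few lines of code. In this sketch the peak width w is taken as 4 standard deviations, and the variable-width case uses the resolution-sum form; the function names are illustrative.

```python
def peak_capacity(t1, t2, w):
    # Constant-peak-width case, Eq. (12)-style: P_C = 1 + (t2 - t1) / w
    return 1.0 + (t2 - t1) / w

def resolution(tR_i, tR_j, w):
    # Resolution of two peaks of equal width w, Eq. (13)-style
    return (tR_j - tR_i) / w

def peak_capacity_from_resolutions(res_list):
    # Variable-width case, Eq. (14)-style: sum the resolutions of all
    # neighbouring pairs, starting from the unretained peak
    return 1.0 + sum(res_list)
```

For a window of 4 width units, peak_capacity(1, 5, 4) gives 2 while resolution(1, 5, 4) gives 1, reproducing the correspondence P_C = 2 at R_s = 1 noted in the text.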

OPTIMIZATION METHODOLOGY
So far we have defined some of the options available for (1) describing component retention in terms of various models, (2) quantifying the peak shape and (3) quantifying the separation quality. In other words, we have created tools for the quantitative evaluation of the quality of a chromatographic separation (or of the chromatogram). Now it is possible to switch to the optimization methodology. It has been recognized in many instances100−103 that an ideal chromatographic objective function has to fulfill five fundamental requirements: (i) to effectively compare and differentiate chromatogram quality, (ii) to quantitatively scale chromatogram quality, (iii) to serve effectively the aims of the chromatographer, (iv) to be affected solely by the parameters controllable by the chromatographer, and (v) to display a clear and straightforward correlation with the controllable parameters. Therefore, objective functions in LC method optimization may be defined as functions that clearly and understandably correlate the response (i.e. the quality of the chromatographic separation) with the decision variables (i.e. the controllable chromatographic parameters). Here we make a distinction between the single- and multiple-objective function cases.
R_i - resolution between the i-th and (i + 1)-th peaks
L - number of peaks appearing in the chromatogram
T_A - maximum acceptable time of the chromatographic run
T_L - retention time of the final peak
T_1 - retention time of the first peak
T_0 - minimum desired retention time of the first peak
w_n - weight parameters selected by the analyst
R_av - average resolution of all pairs of peaks
R_opt - desired optimum resolution
n - number of peaks105
t_R,n - retention time of the last-eluting peak
t_R,crit - user-selected time-cost weight factor
R_s,crit - user-selected resolution target value
R_s,ij - resolution between two Gaussian peaks i and j
t_Ri, t_Rj - retention times of two adjacent peaks

Single-Objective Optimization
In single-objective optimization there is only one objective function; each set of decision variables defines a unique solution (a point). The solution may or may not be practically feasible. The dimensionality of the problem is determined by the number of decision variables (controllable parameters). The solutions can be handled inside the decision space by simply adding the response as an additional dimension. Single-objective optimization is capable of treating different, sometimes opposing, objectives, but they have to be lumped into a single function. In the field of LC, the most popular approach is the weighted sum method. This method scales a set of objectives by multiplying each objective with a user-supplied weight. The weight factors establish the hierarchy of objectives, which are either summed up or subtracted depending on how they influence (increase or decrease) the optimization goal. In general, one might be interested in maximizing or minimizing the objective function, but the two approaches are usually mutually convertible by simple arithmetic operations. Many chromatographic response functions (CRFs) have been proposed and applied during the past decades for LC optimization and method development, but none has fulfilled all the necessary demands.−113 Because a CRF is a linear combination of the objectives, one expects the formation of parallel hypersurfaces in the decision space (these appear as the straight lines depicted in Figure 2 for the simple case of a two-objective problem). Any solution on a contour line will give the same optimal value (Pareto optimality). If a different weight vector is used, the slope of the lines will change and, thus, a different optimum solution will be found. Indeed, there is always much arbitrariness in selecting the proper weights for a given problem. A detailed discussion of this and other characteristics of the weighted sum approach can be found in Deb's114 monograph.
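A weighted-sum CRF can be sketched as follows. The particular form (resolution of each neighbouring pair credited only up to a target value, minus a weighted penalty for exceeding an acceptable run time) is a hypothetical example of the class, not one of the published CRFs.

```python
def crf(resolutions, t_last, r_opt=1.5, t_max=20.0, w_time=0.1):
    # Hypothetical weighted-sum chromatographic response function:
    # - each neighbouring-pair resolution contributes, capped at r_opt
    #   (extra resolution beyond the target earns no credit)
    # - run time beyond t_max is penalized with weight w_time
    score = sum(min(r, r_opt) for r in resolutions)
    score -= w_time * max(0.0, t_last - t_max)
    return score
```

For example, crf([2.0, 1.0], 25.0) credits 1.5 + 1.0 for resolution and subtracts 0.5 for the 5-minute overrun; changing r_opt or w_time reorders the candidate methods, which is exactly the arbitrariness of weight selection discussed above.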

Multiple-Objective Optimization
In multiple-objective optimization, several objective functions are treated simultaneously.116,117 Thus, there are two experimental spaces: the decision variable space and the objective function space. The spaces can be mapped one onto the other by a graphical or conceptual tool called the formulation map (Figure 3). However, the mapping is commonly restricted by a number of constraints that any feasible solution (including the optimal one) must satisfy. The problem of multiple-objective optimization may be generally formulated as:

minimize / maximize f_m(x), m = 1, ..., M
subject to g_j(x) ≥ 0, j = 1, ..., J
h_k(x) = 0, k = 1, ..., K
x_i^(L) ≤ x_i ≤ x_i^(U), i = 1, ..., n

Here f(x) is the vector of responses (solutions), x is the vector of decision variables (experimentally controllable factors), g denotes the set of inequality constraints and h stands for the set of equality constraints. The last set of constraints is called the variable bounds; they restrict each decision variable x_i to a value between a lower (L) and an upper (U) bound. Thus the decision space may be explicitly delimited within the experimentally feasible domain.

OPTIMIZATION ALGORITHMS
Many different algorithms are used today by chromatographers for finding the optimum values of the objective functions described in the previous sections. In the classical approach, a set of initial values of the decision variables (controllable parameters) is first defined as a point in the decision variable space. The initial values are defined either randomly or by a seeded guess solution deduced from more-or-less qualitative knowledge of the system behavior. Next, the objective function value (or the vector of objective values in the multiple-objective approach) is determined as local information, either using a previously constructed model or experimentally. Then a search direction in the decision variable space is suggested, based on the obtained function value(s) and a pre-specified transition rule, and a new point is calculated to restart the process. Thus a deterministic point-by-point algorithm is constructed. Among the methods belonging to this class of algorithms, one has to mention gradient-based and simplex procedures. The important drawbacks of these optimization procedures are well known: (i) convergence to an optimal solution depends on the chosen initial point, (ii) most algorithms cannot avoid getting stuck in local optima, (iii) the efficiency of an algorithm depends on the problem to be solved, and (iv) the algorithms are not efficient in solving problems with a discrete search space. Some of these drawbacks have been observed in the development and application of the PREOPT package. PREOPT118−122 was developed for the automated optimization of HPLC separations. It used a simplex method for optimizing binary gradient separations of any profile.
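The deterministic point-by-point scheme, and its dependence on the initial point (drawbacks (i) and (ii) above), can be demonstrated with a one-dimensional greedy search on a hypothetical bimodal response surface; the surface and step size are illustrative.

```python
import math

def local_search(f, x0, step=0.1, iters=500):
    # Deterministic point-by-point search: evaluate the neighbours of the
    # current point, move to the best one (transition rule), and stop when
    # no neighbour improves, i.e. at a possibly only local optimum
    x = x0
    for _ in range(iters):
        best = max((x - step, x, x + step), key=f)
        if best == x:
            break
        x = best
    return x

# Hypothetical bimodal response surface:
# global optimum near x = 3, local optimum near x = 0
def response(x):
    return math.exp(-(x - 3.0) ** 2) + 0.5 * math.exp(-x ** 2)

x_from_zero = local_search(response, 0.0)   # trapped at the local optimum
x_from_good = local_search(response, 2.5)   # reaches the global optimum
```

Started at x = 0, the search never leaves the inferior mode; started at x = 2.5, it climbs to the global optimum. This is precisely the behavior that motivates the stochastic methods discussed next.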
The alternative to deterministic procedures is found in the application of stochastic123 methods. Here one can include the random search (or random walk) method, simulated annealing, tabu search, Monte Carlo procedures and evolutionary computation. Monte Carlo124,125 and evolutionary algorithms125 have been applied to automated chromatographic optimization; the former served as the basis of a commercial product. Monte Carlo methods use a purely random search; any trial set of decision variables is fully independent of any previous choice and its outcome. The best solution at any instant of the calculation and the associated decision variables are stored as a comparator. However, such a calculation is computationally exhaustive: it takes hours or days of computer time124 to reach a valuable solution, which might not be an optimum. Evolutionary algorithms (EAs) are well suited for multiple-objective optimization problems; so-called multiobjective evolutionary algorithms have been developed for this class of problems. Among the stochastic methods, we shall refer to genetic algorithms and simulated annealing in more detail.

Genetic Algorithms
Genetic algorithms (GA, Figure 4) were developed with the aim of mimicking the process of natural selection. They involve procedures based on simulating the important features believed to govern the evolution of living organisms. This is reflected in the names of the procedures, such as "survival pressure", "crossover / breeding", "mutation of genes", "limitation of resources", and "elitism". There are many varieties of the original GA,126 but the essentials are common. In the preprocessing step, a GA commonly codes real input values, r-variate objects U_i = (u_i1, u_i2, ..., u_ir), into m-variate bit strings X_i = (x_i1, x_i2, ..., x_im) termed chromosomes. Coding of X_i is usually done in binary code.127 In terms of a typical chromatography problem, this means coding a large number of randomly generated sets of decision variables (controllable chromatographic parameters) in a suitable manner. Then the objective functions are calculated for all the sets. The basic idea is to keep only the best sets and combine them in a suitable manner to produce possibly even better outputs (better values of the objective functions). Elitism is the concept of preserving only the best chromosomes (usually the two that produce the currently best objective function values) from the old generation to serve as parents for the new generation. It guarantees a continuous increase of the objective function during the course of the GA. A GA is, however, prone to sharp convergence towards local optima if only passing the best chromosome from generation to generation is allowed. It is therefore desirable to maintain separate domains within a large population pool of one generation and then exchange the elites among them. This is termed migration-of-the-best and it may decrease the probability of converging to local optima. The crossover of chromosomes is the main mechanism for producing new generations of chromosomes ("offspring") from the old ones ("parents"). There are different ways to perform crossover. Yet, two approaches are particularly popular: one-point and two-point swapping. In these approaches, two or three segments, respectively, of the parents' chromosomes are randomly combined to form the new generation of chromosomes. If, at a particular location within a string, all chromosomes have the same bit value (zero or one in the case of binary strings), the information at that position is lost, because no crossover can turn a zero into a one or vice versa. This is the situation where mutation is of vital importance; mutation is the random changing of bit values. The mutation probability rate is a parameter to be set in advance, at a rather low but nonzero value, in order for the genetic algorithm to be effective. More details about GA algorithms can be found in Ref. 126; some examples of their application in LC are prediction of chromatographic retention,128 response surface modeling in HPLC,129 multi-linear gradient optimization,130 etc.
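A minimal binary-coded GA with one-point crossover, mutation and elitism might look as follows; the 16-bit coding, population size, mutation rate and the single-variable response surface (maximum at x = 0.7) are illustrative assumptions.

```python
import random

random.seed(1)
BITS = 16

def decode(chrom):
    # Map a 16-bit chromosome to a real decision variable in [0, 1]
    return int("".join(map(str, chrom)), 2) / (2 ** BITS - 1)

def fitness(chrom):
    # Hypothetical single-variable response surface, best at x = 0.7
    x = decode(chrom)
    return -(x - 0.7) ** 2

def crossover(p1, p2):
    # One-point swapping: combine two parent segments at a random cut
    cut = random.randrange(1, BITS)
    return p1[:cut] + p2[cut:]

def mutate(chrom, rate=0.02):
    # Random bit flips at a low, nonzero probability
    return [1 - b if random.random() < rate else b for b in chrom]

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(40)]
for gen in range(60):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:2]                 # elitism: keep the two best unchanged
    parents = pop[:10]              # survival pressure: breed from the best
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(pop) - len(elite))]
    pop = elite + children

best = max(pop, key=fitness)
x_best = decode(best)
```

Thanks to elitism, the best fitness never decreases from one generation to the next, and the decoded optimum converges toward 0.7.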

Simulated Annealing
Simulated annealing (SA) algorithms mimic the natural phenomenon (Figure 5) of cooling of an assembly of atoms from a randomly ordered state at temperature T1 to the thermal equilibrium at temperature T2. The first SA algorithm was proposed by Kirkpatrick et al.,131 based on computer simulation of the thermal transition between T1 and T2.132 In this approach any random point Xi with a quality criterion ai is regarded as a possible starting point from which the true optimum can be found. In terms of chromatography, any set of decision variables (controllable chromatographic parameters) may lead to the optimum method. New candidates for the solution Xi are randomly selected from the neighborhood of the previously located candidate solution. The next step consists of calculating the probabilities of accepting the new candidates. In classical algorithms, a new point with better fitness than the one examined in the previous step would be accepted with a probability of 1, i.e. always. Probability values of 1 are assigned to such solutions in SA as well. However, solutions with an inferior fitness criterion (i.e. a worse objective function value) ai are also assigned a possibility of being accepted over the solution with the better fitness criterion a0. The probability of acceptance, p(ai, a0), mimics the Boltzmann distribution:

p(ai, a0) = exp(-(ai - a0) / kT)     (16)

The selection of a new point is then done stochastically according to the calculated probabilities of the candidate points. Allowing the acceptance of worse solutions reduces the possibility of becoming trapped in local optima. The product of the Boltzmann constant, k, and the temperature, T, is a parameter called T0 which has to be set at the beginning of the optimization. It cannot be set in advance for actual applications, and several trials are required to adjust it to a value appropriate for the studied problem. Too low a value means that only solutions better than the present one are accepted; too high a value converts the optimization into a random search, since all the probabilities are equalized. It is generally accepted that the probability has to be in the range 0.5-0.9. The T0 parameter is gradually decreased by a small fraction, by multiplying it with a parameter α commonly lying between 0.80 and 0.99, whenever a new candidate solution Xi is found. This means that the probability of accepting a worse solution decreases as T0 is lowered. The neighborhood where new candidates are sought gradually shrinks as well, which helps in handling steep gradients. However, the number of search points, ni, generated at each T0 usually increases with decreasing T0. This means that a more detailed search is done around the potential global optimum than around local optima. The expected average of the values (ai, aj) determines the suitable T0 and α to be used in Eq. (16). If no preliminary knowledge exists, it is advisable to increase the "temperature" at a fixed probability p(ai, aopt) of, e.g., p = 0.6, up to the point where the first candidate solution is found, and then to start the "cooling" process with the SA algorithm at the reached T0 value. The end of the process may be marked by reaching the preset optimum value or by limiting the computation time. Although simulated annealing has hardly ever been used for method development in chromatography, the authors see this particular approach as a promising chemometric alternative worth including in this review.
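The loop described above (Boltzmann acceptance, geometric cooling by α, best-so-far tracking) can be sketched as follows. This is a minimal generic implementation, not the algorithm of refs. 131-132; the objective function, neighborhood generator and parameter values are illustrative assumptions.

```python
import math
import random

def simulated_annealing(objective, x0, neighbor, t0=1.0, alpha=0.95,
                        n_stages=100, steps_per_stage=10, seed=None):
    """Minimize `objective` by simulated annealing.

    A better candidate is always accepted (probability 1); a worse one
    is accepted with the Boltzmann-like probability exp(-delta / T0),
    Eq. (16). T0 is multiplied by `alpha` (here 0.95, within the usual
    0.80-0.99 range) after each stage, so worse moves become rarer as
    the search "cools".
    """
    rng = random.Random(seed)
    x_cur, a_cur = x0, objective(x0)
    x_best, a_best = x_cur, a_cur
    t = t0
    for _ in range(n_stages):
        for _ in range(steps_per_stage):
            x_new = neighbor(x_cur, rng)       # sample the neighborhood
            a_new = objective(x_new)
            delta = a_new - a_cur
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                x_cur, a_cur = x_new, a_new    # accept the move
                if a_cur < a_best:
                    x_best, a_best = x_cur, a_cur
        t *= alpha                             # cooling schedule
    return x_best, a_best

# Toy usage: a one-dimensional multimodal objective with local optima.
f = lambda x: (x - 2.0) ** 2 + math.sin(5.0 * x)
step = lambda x, rng: x + rng.gauss(0.0, 0.5)
x_opt, a_opt = simulated_annealing(f, x0=10.0, neighbor=step, seed=1)
```

A practical refinement mentioned in the text, shrinking the neighborhood (here, the Gaussian step width) together with T0, is omitted here for brevity.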

METHOD TRANSFER
The term method transfer in chromatography refers to the transfer of routine elution programs, often gradient ones, to other instruments or columns. One of the primary aims of every analytical laboratory is to increase its productivity, e.g. by shortening the cycle time (i.e. increasing sample throughput), which is achieved primarily by reducing the column length or internal diameter. The need for method transfer arises whenever the instrument or column is changed, or when the method is disseminated among laboratories.134-137 The mismatch of the dwell (delay) volume of different instruments is a typical problem. Such a mismatch will occasionally result in a partially isocratic migration of less retained analytes through the column.133 The chemometric approach is based on the use of a computer-assisted method development tool to make changes in the gradient program. The idea is to match the separations obtained with instruments having different dwell volumes; it is therefore the separation, and not the elution program, that is to be retained. This may be accomplished by building a model describing the separation on the new instrument, using a limited set of experiments. The gradient method is then optimized with the aim of matching the original instrument's separation. That means that the difference between the original (old instrument) and modeled (new instrument) separation may be used as an objective function.
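Two of the ingredients above can be sketched compactly: a separation-matching objective function (difference between reference and predicted retention times) and a first-order dwell-volume compensation of the gradient time program. The function names, the squared-difference criterion and the simple time-shift compensation are illustrative assumptions, not the specific procedures of the cited works; clamping shifted times at zero mimics the common instrument limitation that a gradient cannot start before injection, which is precisely where residual mismatch remains.

```python
def transfer_mismatch(t_ref, t_new):
    """Objective function for method transfer: sum of squared
    retention-time differences between the original (reference)
    separation and the separation predicted on the new instrument."""
    return sum((a - b) ** 2 for a, b in zip(t_ref, t_new))

def shift_gradient(program, dwell_ref, dwell_new, flow):
    """Shift a gradient time program [(time, %B), ...] so the gradient
    front reaches the column inlet at the same time on both instruments.

    A larger dwell volume delays the gradient by (V_new - V_ref) / F,
    so the program is started that much earlier; times are clamped at
    zero because the gradient cannot precede the injection.
    """
    dt = (dwell_new - dwell_ref) / flow  # extra delay in minutes
    return [(max(t - dt, 0.0), phi) for t, phi in program]

# Usage: transfer a 5 -> 95 %B linear gradient to an instrument whose
# dwell volume is 1 mL larger, at a flow rate of 1 mL/min.
program = [(0.0, 5.0), (10.0, 95.0)]   # (time in min, %B)
shifted = shift_gradient(program, dwell_ref=1.0, dwell_new=2.0, flow=1.0)
# -> [(0.0, 5.0), (9.0, 95.0)]: the program is started 1 min earlier
```

In the chemometric scheme described in the text, `transfer_mismatch` would be minimized over the adjustable gradient parameters of the new instrument rather than applied as a one-step shift.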

CONCLUSION
This article presents a selection of the relevant issues that emerge at the interface between liquid chromatography method development and chemometrics. In the chemometric arsenal we can find methods that help the chromatographer deal with all steps of the chromatographic methodology, from the design of the experiment through the extraction of information to the final decision making. Like any other analytical technique, liquid chromatography adapts from other fields whatever is necessary and useful for its development. The speed of these adaptations is determined by the complexity of the problems to be solved, the instrumentation currently in use and the amount of data to be processed. For its part, chemometrics attempts to cope with the ongoing challenges and to develop new tools for new problems. Moreover, there are old chromatographic problems that can only now be solved efficiently, owing to the increasing power of computers and to progress in computer-related fields. However, chemometrics might be misused by those less familiar with data processing approaches, and each application of chemometrics has to be treated cautiously.

(15)

Sample peak capacity is related to the classical definition of peak capacity via the following equation:

α_av - average selectivity
t_R - retention time of the first eluting peak
f - factor taking into account the number of separated peaks
Figure 2. Illustration of the weighted sum approach on a two-dimensional objective space.

Figure 3. Representation of decision variable space and objective space in multi-objective optimization.