Trending and Out-of-Trend results in the pharmaceutical industry

There are increasing demands, from both regulatory agencies and the industry itself, for improving quality monitoring procedures. This focus on quality and risk management has prompted a re-evaluation of systems and procedures to ensure compliance with the proposed guidelines. Among the essential tools for quality monitoring are statistical tools, with which any conclusion regarding the variability and capability of a given process can be supported, thus ensuring a state of control. There are numerous techniques that can be used, and one of them is Statistical Process Control (SPC). SPC is a highly structured approach for defining what is important through process screening; collecting and measuring data; analyzing the data and finding the variations; and deciding what needs to be done to bring the process into control and maintain that state of control. Trending is part of SPC and enables one to see the general direction towards which the process is moving. Trending is used to prevent possible out-of-specification results and nonconforming products that could have an impact on the patient's health. It is no longer acceptable to be within specification limits but out of process control.


Introduction
Ever since the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) guidelines Q8 (Pharmaceutical Development), Q9 (Quality Risk Management, QRM) and Q10 (Pharmaceutical Quality System, PQS) were officially published, the pharmaceutical industry has been going through major changes. From an early phase of development (defining the Quality Target Product Profile (QTPP), pre-formulation studies, formulation screening and development, process development and optimization) through technology transfer and scale-up to commercial manufacturing, implementation of the three quality guidelines achieves the desired quality, "fit for its purpose" (Jean-Louis et al., 2010). Good scientific development (Q8) in combination with QRM (Q9) and PQS (Q10) can improve drug quality and efficiency during pharmaceutical manufacturing. As a result, there have been numerous changes in good manufacturing practice guidelines from different agencies and different territories (for example, the Food and Drug Administration, Department of Health and Human Services (FDA) in the United States of America (USA); EudraLex: The Rules Governing Medicinal Products in the European Union; Good Manufacturing Practice for drug products (GUI-0001), Health Canada, etc.).
__________________
* mcadinoska@Alkaloid.com.mk
Maced. pharm. bull., 65 (1) 39 -60 (2019)
In 1960, Dr Genichi Taguchi introduced a new definition of World Class Quality: "On-Target" with "Minimum Variance". Operating "On-Target" requires a different way of thinking about processes. Operating with "Minimum Variance" can be achieved only when the process displays a reasonable degree of statistical control (Education SPC course, 2016; Oakland, 2003).
The pharmaceutical industry is known to be both highly regulated and capital-intensive, with significant funds devoted to the development and manufacturing of medicinal products. Yet, compared to other industries, quality assurance in the pharmaceutical industry is not highly proficient. For example, quality defects and batch failures range from 3 to 15% and waste has been reported as high as 50%; in comparison, the semiconductor industry maintains waste below 1% (Winkle, 2007). A PQS should enable the system to enhance the quality and availability of medicines. As stated in ICH Q10 (Fig. 1), one of the key objectives of a PQS is to encourage manufacturers to develop an effective monitoring and control system for process performance and product quality, thereby providing assurance of the continued suitability and capability of the processes. Good manufacturing practice guidelines require that Out-of-specification (OOS), Out-of-expectation (OOE) and Out-of-trend (OOT) results be fully understood. It is no longer acceptable to be within the specification limits but out of statistical control, because in such cases the probability of producing a defective product is high. Recently, these three terms have been interlaced together in one Standard Operating Procedure (SOP) for regulating OOS; but while OOS can be defined as a situation where the result is clearly out of specification, OOT and OOE are more related to predicting such situations. Therefore, numerous procedures and guidelines have been written for detecting and handling OOT results (Burgess et al., 2015; Oakland, 2003; Thomas et al., 1982).

Regulatory requirements
The regulatory requirements for monitoring product quality are laid down in the current good manufacturing practice (cGMP) guidelines to ensure that a state of control is maintained throughout the product lifecycle, with the relevant process trends evaluated. In the EU cGMP guidelines, Volume 4 Part I (EU cGMP), trending is addressed in several chapters. For example, Chapter 1: Pharmaceutical Quality System, point 1.10, which relates to the product quality review (PQR), states that one of the purposes of the PQR is to highlight any trend and to identify product or process improvements. As stated in Chapter 6: Quality Control, data obtained during release testing of the batch (e.g. assay, dissolution, impurities, etc.) or during environmental controls should be recorded in a manner permitting trend evaluation. During data evaluation any OOT result should be addressed and subjected to investigation. Chapter 6 also addresses the importance of trending the data obtained during stability testing. It is pointed out that any significant negative trend should be not only investigated but also reported to the competent authorities. Trending during stability testing can be used as a proactive approach for evaluating changes over time, which may help predict future trends.
EU cGMP Annex 15: Qualification and Validation and the FDA Process Validation Guideline: General Principles and Practices specify that manufacturers must establish an ongoing program to collect and analyze product and process data related to product quality; the data should be statistically trended and reviewed by trained personnel.
The data integrity guidelines from the Medicines & Healthcare products Regulatory Agency (MHRA), 'GXP' Data Integrity Guidance and Definitions, and the World Health Organization (WHO) Guidance on Good Data and Record Management Practices recommend that data should be reviewed and, where appropriate, statistically evaluated after completion of the process in order to determine whether outcomes are consistent and compliant with established standards. The evaluation should take into consideration all data, including atypical, suspect or rejected data, together with the reported data. This includes a review of the original paper and electronic records.
Insight into regulatory inspection findings shows that regulators expect meaningful trend analysis to be performed. In their publications on good manufacturing practice deficiency data trends, several types of deficiencies are related to trending and OOT results. For example, not all data were monitored by trend (e.g. IPC), or the monitoring was limited to a graphical presentation of data; statistical tools for tracking the trend were not applied correctly or completely (e.g. only one WECO rule was used). In stability studies, significant negative trends were not reported to the agencies, only OOS results, which is considered a major finding (OOT Forum, 2015).

Methodology
In order to explore trending techniques and the detection of OOT results, we performed a literature review within the databases of governmental agencies and non-governmental organizations and the current legislation (guidelines, standard operating procedures, books and handbooks, as well as materials from educational courses and training workshops).

Definition of trend and trending
A trend is a sequence of time-related events. It shows the general direction towards which a particular situation or data set is moving. In order to see whether there is a certain direction in the data's movement, as well as the extent to which the data change, procedures are established that enable data to be collected over a certain time period and analyzed. Trend analysis refers to techniques for detecting an underlying pattern of behavior in a time or batch sequence which would otherwise be partly or nearly completely hidden by noise. These techniques enable specific behaviors, such as a shift, drift or excessive noise, to be detected. An OOT result is a test result, or pattern of results, that is outside pre-defined limits, historical results or expected results. Trending refers to the techniques of data collection, the analysis of the collected data and the patterns they follow, and the techniques of detecting OOT results (Laney, 2002; Oakland, 2003; OOT Forum, 2015).
Good trending reflects the manufacturer's profound knowledge of the process and thus builds confidence in the manufacturer's quality risk management. Trending may mitigate the risk of failure to comply with the marketing authorization, which can lead to product recall, or failure to comply with cGMP guidelines, which can lead to withdrawal of the GMP certificate (Burgess et al., 2015).
In practice, the expectation is usually that there is no trend in the data (for example, in production or analytical process data). However, sometimes a trend is expected, for example in stability testing (Burgess et al., 2015; OOT Forum, 2015).
There should be written policies, procedures, protocols and reports, and associated records of measures or results from trending and OOT investigations. The same principle of good documentation practice given in EU cGMP Chapter 4: Documentation should be followed: there must be a written specification, a test protocol and a test report. The specification should describe which parameters should be subject to trend analysis, such as data obtained from assay analysis, dissolution, other quality metrics and key performance indicators (KPI). The parameters should have defined acceptance criteria. The purpose of the analysis and monitoring should be described, for example, whether it is drifting of the results toward the specification limits, or increased degradation of the active pharmaceutical ingredient, etc. (Burgess et al., 2015; OOT Forum, 2015).

Defining the parameters that should be trended
Quality risk management is a tool for assessing which parameters should be monitored. There are several risk management tools; however, the selection of a particular tool depends entirely upon the specific facts and circumstances. ICH Q9 gives guidance on the approach and potential applications. The choice depends mostly on the complexity of the process; for process monitoring the most commonly used techniques are Failure Mode and Effects Analysis (FMEA), Failure Mode, Effects and Criticality Analysis (FMECA) and Hazard Analysis and Critical Control Points (HACCP). As a rule, all data defined as critical should be included in trending. For products developed using the Quality by Design (QbD) principle, the defined critical quality attributes (CQAs) are the ones that should be trended. For products developed using the traditional approach, and for legacy products, the CQAs are defined using risk management. For example, assay of the drug substance, dissolution, system suitability, critical in-process controls such as tablet hardness, critical process parameters on a batch-by-batch basis, OOS results, deviations, batch rejections, reworks, etc. should be evaluated (Burgess et al., 2015; Education course SPC, 2016; ICH Q9, 2006; OOT Forum, 2015).

How to perform trend analysis?
There are various statistical tools for analyzing data; one of them is Statistical Process Control (SPC), a scientific method of analyzing data and using the analysis to solve practical problems. SPC means that, with the help of numbers (data), we study the characteristics of our process in order to make it behave the way we want it to behave. SPC gives insight into when specific causes of variation occur in the process, so that they can be built into the system if their effect on the process is positive, or eliminated if it is negative (Burgess et al., 2015; Education course SPC, 2016; Oakland, 2003; OOT Forum, 2015; Thomas et al., 1982). SPC is a process-oriented, data-driven method to improve processes and to deliver results day after day, as shown in Fig. 2. Whereas in the past the objective was meeting customer requirements, with analysis performed after variances or OOS results had occurred, in SPC the objective is continual improvement and process understanding, controlling and monitoring the stability of the process. As a new way of thinking, SPC promotes continuous improvement of processes by reducing their variability, and prevents OOS results from occurring by alerting before any batch violates specifications (Burgess et al., 2015; Education course SPC, 2016; Oakland, 2003; OOT Forum, 2015; Thomas et al., 1982).
SPC has several phases. The first phase is process understanding: in order to monitor and control the process, it is necessary to understand its behavior and natural pattern. It is crucial to understand the process variability and its source, i.e. whether it is common cause or special cause variability. In statistical terms, this phase is the process of understanding the capability of the process: how well the process fits within the specification limits and how close the process centre is to the target value.
The second phase is eliminating the special cause variation and establishing the boundaries within which we want the process to move, i.e. obtaining a stable process and consequently establishing control limits. The third phase is monitoring and controlling the process (Burgess et al., 2015; Education course SPC, 2016; Oakland, 2003; OOT Forum, 2015; Thomas et al., 1982).

Types of variability in the process
There are two types of variability: common cause and special cause variability. Common cause variability comes from the design of the process itself. It can be affected by the variability of the materials, equipment and environmental conditions, and the physical and mental reactions of the people. Most of these differences are small; they cause the pattern to fluctuate in what is known as a "natural" or "normal" manner. Every process is affected by common cause variability, which can be reduced but not eliminated. Engineers often use the word "noise" to describe it. Such a process is in statistical control and can be measured by the process capability indices: potential capability (Cp), actual capability (Cpk), and the overall capability measures of the process (Pp and Ppk). Process capability refers to the performance of the process when it is operating in control.
Two capability indices are usually computed: Cp and Cpk. Cp measures the potential capability of the process and does not take into account where the process mean is located relative to the specifications. Often the process is off-centre, shifted toward the lower or upper specification limit; Cpk then measures the actual capability of the process. Ideally, if a process is centred, then Cp = Cpk. Pp evaluates overall capability based on the variation in the process, but not on its location. Ppk is a measure of the overall capability of the process and evaluates both the location and the overall variation of the process (Burgess et al., 2015; Education course SPC, 2016; Oakland, 2003; OOT Forum, 2015; Thomas et al., 1982).
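As a rough illustration, Cp and Cpk can be computed from a sample and its specification limits. The following sketch uses invented assay data, hypothetical specification limits of 95.0-105.0% and the sample standard deviation as a simple stand-in for the within-subgroup sigma that a control chart would normally supply:

```python
import numpy as np

def capability_indices(data, lsl, usl):
    """Cp (potential) and Cpk (actual) capability from data and spec limits."""
    mean = np.mean(data)
    # Simple sigma estimate: the sample standard deviation stands in for the
    # within-subgroup sigma usually derived from a control chart.
    sigma = np.std(data, ddof=1)
    cp = (usl - lsl) / (6 * sigma)                   # ignores centring
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)  # penalises an off-centre mean
    return cp, cpk

# Hypothetical assay results (% of label claim), specification 95.0-105.0
rng = np.random.default_rng(42)
assay = rng.normal(100.0, 1.0, size=30)
cp, cpk = capability_indices(assay, 95.0, 105.0)
```

For a perfectly centred process Cp equals Cpk; any off-centring makes Cpk smaller, which is why both indices are reported.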
Sometimes an unexpected, special cause occurs which has an effect on the process variability. For example, material is taken from a different batch, the operator sets the machine with new settings, or an inexperienced operator replaces an experienced one. These causes make the pattern fluctuate in an "unnatural" or "abnormal" way. Finding the root cause opens the possibility to identify and study the behavior of the cause. The effort needed to find the root cause depends on the complexity of the process. Consequently, we can either eliminate the negative effect or keep the positive one; either way, it is a chance for improvement. Control charts are the only tools to differentiate between common and special cause variability (Burgess et al., 2015; Education course SPC, 2016; Oakland, 2003; OOT Forum, 2015; Thomas et al., 1982).

SPC techniques
There are several techniques that are essential in SPC: control charts, process capability studies, statistical sampling inspection and design of experiments (DoE) (Thomas et al., 1982).
One of the key components in understanding the process and its variability is analyzing the data. The data should come from a measurement system which has only consistent common cause variation and should be free from systematic errors (special cause variation). The number of measurements should be large enough to provide a means of estimating the common cause error (Burgess et al., 2015; Oakland, 2003; Thomas et al., 1982).
There are two types of data: continuous (variable) data and attribute (discrete) data. A continuous random variable is one that can take any value over a range of values. These variables are obtained by measurement, for example an assay, dissolution, pH, tablet hardness, temperature or an impurity level. An attribute is a discrete random variable whose set of possible values is at most countable. These data are obtained mainly by counting, for example a cosmetic defect on a tablet, the number of defective products or the number of particles in a solution. Hence, the selection of the appropriate mathematical distribution may be shown as a decision tree (Fig. 3), according to which the mathematical shape (model) of the data determines the applicability (suitability) of the statistical methodology (Burgess et al., 2015).
For our purposes, the most useful distribution for continuous variables is the Normal or Gaussian distribution, whose properties are well known.
For a true mean value (µ) of zero and a standard deviation (σ) of 1, the probability distribution is given by equation (1.1) and shown graphically in Fig. 4:

f(x) = (1/√(2π)) · e^(−x²/2) (1.1)

The areas under the curve indicate the probability of values lying within ±σ, ±2σ and ±3σ of the mean. This distribution is the basis for control charting of continuous random variables and for stability trending. For attribute data, the Binomial or Poisson distributions are preferred: if the number of nonconforming units is analyzed, the Binomial distribution is used; if the data are counts of defects per unit, the Poisson distribution is indicated (Burgess et al., 2015). There are several ways to test the "normality" of the data, as shown in Fig. 5: the data can be displayed in a histogram, tested with the Shapiro-Wilk test, and examined with a normal quantile-quantile plot. The histogram gives a first idea of the underlying distribution. The normal quantile-quantile plot gives a goodness of fit for the normal distribution by comparing the observed sample distribution with the expected probability distribution. One of the most widely used and most efficient tests of normality is the Shapiro-Wilk test. When the associated p-value is greater than 0.05, there is no statistical evidence against normality, i.e. the data are consistent with a normal distribution. For this example, the associated p-value is 0.2420, so the data fit a normal distribution (OOT Forum, 2015).
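A minimal sketch of such a normality check in Python, assuming SciPy is available; the simulated data are illustrative, not the data behind the quoted p-value of 0.2420:

```python
import numpy as np
from scipy import stats

# Simulated assay results standing in for a real data set
rng = np.random.default_rng(1)
data = rng.normal(loc=100.0, scale=2.0, size=84)

# Shapiro-Wilk test: a p-value above 0.05 gives no evidence AGAINST normality
w_stat, p_value = stats.shapiro(data)
consistent_with_normal = p_value > 0.05
```

A histogram and a normal quantile-quantile plot (e.g. via `scipy.stats.probplot`) would complement the numerical test visually.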
If the data are not normally distributed, they should be transformed, either logarithmically, ln(X), or by their reciprocal value, 1/X. The same transformation should be applied to the specification values as well (a simple flow diagram is given in Fig. 6). With such transformed data we can continue to compute the indices and analyze the behavior of the data (Fig. 7) (OOT Forum, 2015).
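The transformation step can be sketched as follows; the impurity values and the 0.50% specification limit are hypothetical:

```python
import numpy as np

# Hypothetical right-skewed impurity results (%) and upper specification limit
impurity = np.array([0.05, 0.07, 0.06, 0.12, 0.08, 0.05, 0.21, 0.09, 0.06, 0.10])
usl = 0.50

# Transform the data AND the specification limit in the same way
log_impurity = np.log(impurity)
log_usl = np.log(usl)
# Control limits and capability indices are then computed on the log scale
```

Forgetting to transform the specification limit alongside the data is a common mistake that invalidates the resulting capability indices.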

Control Charts
A control chart is a plot of observations or measurements (means, ranges, etc.) against time or batch number. The measurements can be either variables or attributes. Fig. 8 shows a control chart for a variable with normally distributed data. The control chart is divided into zones depending on the distance of one, two or three standard deviations, σ, from the process average. In general, the distance of two standard deviations from the process average is called the warning limit (the yellow lines), and the distance of three standard deviations from the process average is called the control limit (the red lines on the chart). Based on the location of the observations or measurements in the control chart, we can decide when it may be necessary to take some action, as explained later in Control chart zones (Oakland, 2003; OOT Forum, 2015; Thomas et al., 1982). This approach is based on the idea that, no matter how well a process is designed, there is a certain amount of natural variability in the output measurements. When the variation in process quality is due to random causes alone, the process is said to be in control. If the process variation includes both random and special causes of variation, the process is said to be out of control. The control chart is supposed to detect the presence of special causes of variation (Education course SPC, 2016; Oakland, 2003; OOT Forum, 2015; Thomas et al., 1982).
The American Walter Shewhart is credited with the invention of control charts for variable and attribute data in the 1920s at the Bell Telephone Laboratories, and the term 'Shewhart charts' is in common use (Oakland, 2003; Thomas et al., 1982).

Elements of the control chart
There are three main elements of a control chart, as shown in Fig. 9. A control chart begins with the plotting of a time series graph, where the process data, or results of the measurements, are plotted against time. The data comprising each individual point are random, but the points themselves are plotted in a non-random arrangement: they are plotted consecutively according to time, that is, in order of production. The central line is the process average and is added as a visual reference for detecting shifts or trends; this is also referred to as the process location. The process dispersion lies between the UCL and LCL, which are computed from the available data and placed equidistant from the central line (Education course SPC, 2016; Burgess et al., 2015; Oakland, 2003; OOT Forum, 2015; Thomas et al., 1982). The control limits, UCL and LCL, are calculated according to the formulas:

UCL = x̄ + L·s
LCL = x̄ − L·s

where x̄ is the process average of the sample, s is the standard deviation of the sample and L is a constant which depends on the sample size.
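These formulas can be sketched directly in code. In this sketch L = 3 (the conventional three-sigma limits) and the result values are invented for illustration:

```python
import numpy as np

def control_limits(data, L=3):
    """Centre line and control limits at L standard deviations from the mean."""
    centre = np.mean(data)
    s = np.std(data, ddof=1)  # sample standard deviation
    return centre - L * s, centre, centre + L * s

# Hypothetical batch results plotted in production order
results = [99.8, 100.2, 100.1, 99.6, 100.4, 99.9, 100.0, 100.3, 99.7, 100.1]
lcl, cl, ucl = control_limits(results)
```

The limits are symmetric around the central line by construction, which is what makes shifts and drifts visually obvious on the chart.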

The control limits
The control limits are used so it can be determined whether the process is in the state of statistical control, i.e. consistent. The UCL and LCL provide a range of what is still acceptable for a result. Control charts are therefore used to determine if the results that are coming in are within the limits of what is acceptable or if the process is out of control.
If a process is influenced only by chance causes, 99.73% of all results fall within three standard deviations (σ) of the mean; for practical reasons, these limits are called the "natural" limits of the process (Fig. 8 and 9). The natural limits have no connection with the specification limits; they can be broader or narrower, the latter being preferable. Within these limits the process is able to stay when operating normally under the influence of non-assignable causes. The UCL and LCL must, wherever possible, be based on the values determined for the Proven Acceptable Range (PAR) and Normal Operating Range (NOR) of a process, while the specification limits are used to determine whether the product is in regulatory compliance (Education course SPC, 2016; OOT Forum, 2015; Thomas et al., 1982).

Fig. 10. Diagram of control chart selection vs. data type and sample size (Ref. Burgess et al., 2015).
The calculation of the control limits should be based on 20 to 30 data points. Fewer data points will not give sufficient accuracy and the control limits will be too wide. If there are more than 30 data points, there is a high possibility that special cause variability will affect the process, and the control limits will again be too wide. Even if the number of samples is very low, the use of control charts is highly recommended, and the control limits should then be re-calculated with every new data point. Control limits are recalculated if several rules are satisfied: the change is positive; the root cause of the change is known; there is assurance that the change is going to be permanent; and, as a fourth rule, there should be at least 20 samples that confirm the new process (Burgess et al., 2015; Education course SPC, 2016; Oakland, 2003; OOT Forum, 2015; Thomas et al., 1982).

Types of control charts
Which type of control chart should be used depends on the type of data and the size of the sample, as shown in the diagram in Fig. 10 (OOT Forum, 2015). The types of control chart are explained in the subsequent sections.

Control charts for variable type of data
Control charts for variable data are used to assess stability of variable data (e.g. HPLC potency assay, UV measurements); two types of control charts are typically used (Burgess et al., 2015; Education course SPC, 2016; Oakland, 2003; OOT Forum, 2015; Thomas et al., 1982). The first type are charts that monitor mean consistency over time, i.e. between-subgroup variation (subgroup mean): if the sample is represented by one observation (or measurement) per batch, i.e. the sample size is one, the control chart constructed is the individuals control chart (I chart), a chart for individual measurements. If data are collected in subgroups (several observations or measurements on the same batch), i.e. when the sample size is >1, then an X̄ control chart, a chart for subgroup means, is constructed.
The second type are charts that monitor common cause variation. These charts monitor within-subgroup variation (such as the subgroup range or subgroup standard deviation over time). If the sample is represented by one observation (or measurement) per batch, then the chart used for monitoring the common cause variation is the Moving Range (MR) control chart, which is based on the difference between two consecutive measurements or observations. As the sample size increases, Range (R) or Standard deviation (S) control charts are used instead. The R chart plots subgroup ranges (when the subgroup sample size is <9), and the S chart plots subgroup standard deviations (when the subgroup sample size is ≥9).
A pair consisting of a chart for monitoring mean consistency and a chart for monitoring common cause variation carries all the information: the individuals (I) chart is paired with the Moving Range (MR) control chart, and the X̄ chart is usually paired with the Range (R) or Standard deviation (S) control chart (Education course SPC, 2016; Oakland, 2003; OOT Forum, 2015; Thomas et al., 1982).

Individual (I) / Moving Range (MR) Charts (Subgroup Size n=1)
The individuals control charts are used for samples of size n = 1; each individual point is plotted on the graph, for example release testing of the finished product according to the specification. They can also be used if data are already available but were not collected in constant subgroup sizes greater than 1 (OOT Forum, 2015).
For example, if we have 84 individual measurements, they are plotted in a chart in which the control limits are established and the average value is calculated.
The mean value, or process centre, is calculated according to the standard formula for the average:

x̄ = (x1 + x2 + … + xn) / n

The calculation of the UCL and LCL depends directly on the moving ranges, through the average moving range MR̄:

UCL = x̄ + 3·MR̄/d2
LCL = x̄ − 3·MR̄/d2

where d2 is a constant which depends on the sample size (Montgomery, 1982). In our example, d2 = 1.128 and MR̄ = 3.29, therefore the limits lie 3 × 3.29/1.128 = 8.75 units above and below the process average.
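A sketch of the I-chart calculation under these formulas, with short illustrative data rather than the 84 measurements from the example:

```python
import numpy as np

def i_chart_limits(x):
    """Individuals chart: mean +/- 3 * (average moving range) / d2, d2 = 1.128."""
    x = np.asarray(x, dtype=float)
    moving_ranges = np.abs(np.diff(x))   # |x_i - x_(i-1)|
    mr_bar = moving_ranges.mean()
    centre = x.mean()
    half_width = 3 * mr_bar / 1.128      # approximately 2.66 * MR-bar
    return centre - half_width, centre, centre + half_width

# Hypothetical individual batch results
lcl, centre, ucl = i_chart_limits([10, 12, 11, 13, 10, 12])
```

Estimating sigma from the average moving range (rather than the overall standard deviation) keeps a slow drift from inflating the limits.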

Moving Range (MR) chart
In order to estimate the process variability, we can use the Moving Range, which will indicate possible shifts or changes in the process from one observation to the next (Burges et al., 2015;Education course SPC, 2016;Oakland, 2003;OOT Forum, 2015;Thomas et al., 1982).
The moving range (MR) is defined as the absolute difference between two consecutive observations or measurements (Burgess et al., 2015; Oakland, 2003; OOT Forum, 2015; SPC Education course, 2016; Thomas et al., 1982). The formula for its calculation is given in equation 1.6:

MRi = |xi − xi−1| (1.6)

In our example of an MR chart for the same 84 measurements, the mean value of MR, which represents the centre line (CL), is the average of the individual moving ranges. The UCL and LCL for the MR chart with sample size 1 are calculated according to formulas 1.7 and 1.8 (Burgess et al., 2015; Education course SPC, 2016; Oakland, 2003; OOT Forum, 2015; Thomas et al., 1982):

UCL = D4·MR̄ (1.7)
LCL = D3·MR̄ = 0 (1.8)

where D3 = 0 and D4 = 3.267 are the constants for moving ranges of two consecutive values.
In our example, the UCL and LCL are obtained by inserting the calculated MR̄ into these formulas.
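The MR chart limits can be sketched in the same way; the data are illustrative, and D4 = 3.267, D3 = 0 are the standard constants for moving ranges of two values:

```python
import numpy as np

def mr_chart_limits(x):
    """MR chart: CL = average moving range, UCL = D4 * CL, LCL = D3 * CL = 0."""
    moving_ranges = np.abs(np.diff(np.asarray(x, dtype=float)))
    mr_bar = moving_ranges.mean()
    return 0.0, mr_bar, 3.267 * mr_bar   # LCL, CL, UCL

# Same hypothetical individual results as for the I chart
mr_lcl, mr_cl, mr_ucl = mr_chart_limits([10, 12, 11, 13, 10, 12])
```

The I chart and MR chart are read together: a point outside the MR chart's UCL flags a sudden change in variability even when the individual values stay inside the I chart's limits.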

X̄ / Range (R) control chart
When the data are collected in subgroups and the sample size n is 2 ≤ n ≤ 8, then X̄ / Range (R) control charts are recommended. The size of the subgroups should be rational, predefined and constant, for example items which were produced under the same conditions. The subgroups should be formed from consecutive measurements. Each subgroup's statistics are compared to the control limits, and patterns of variation between subgroups are analyzed. Examples are in-process control parameters of tablets from the same batch, such as tablet mass, hardness and thickness, or a control sample in analytical testing; for example, dissolution testing is performed on 6 tablets, therefore the subgroup size is n = 6 (Burgess et al., 2015; Oakland, 2003; OOT Forum, 2015; SPC Education course, 2016; Thomas et al., 1982).
The X̄ chart is used to detect changes in the mean value between subgroups; the process average, x̿, is the average of all subgroup means. The control limits are constructed according to equations 1.9 and 1.10:

UCL = x̿ + A2·R̄ (1.9)
LCL = x̿ − A2·R̄ (1.10)

where A2 is a constant which depends on the subgroup size and R̄ is the average subgroup range (Montgomery, 1982).
R and S charts are used to detect changes in the variation within subgroups. R charts are used for subgroup sizes between 2 and 8, and S charts are used for subgroup sizes ≥ 9 (Eq. 1.11 and 1.12, respectively):

UCL = D4·R̄, LCL = D3·R̄ (1.11)
UCL = B4·S̄, LCL = B3·S̄ (1.12)

where D3, D4, B3 and B4 are constants which depend on the subgroup size (Montgomery, 1982). An R chart of the same measurements would be constructed according to equation 1.11.
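A sketch of both charts for subgroups of n = 6 (e.g. dissolution on 6 tablets); the subgroup data are invented, and A2 = 0.483, D3 = 0, D4 = 2.004 are the standard Shewhart constants for n = 6:

```python
import numpy as np

A2, D3, D4 = 0.483, 0.0, 2.004  # Shewhart constants for subgroup size n = 6

def xbar_r_limits(subgroups):
    """Xbar chart: grand mean +/- A2*Rbar; R chart: D3*Rbar to D4*Rbar."""
    sg = np.asarray(subgroups, dtype=float)
    subgroup_means = sg.mean(axis=1)
    subgroup_ranges = sg.max(axis=1) - sg.min(axis=1)
    grand_mean, rbar = subgroup_means.mean(), subgroup_ranges.mean()
    xbar_limits = (grand_mean - A2 * rbar, grand_mean, grand_mean + A2 * rbar)
    r_limits = (D3 * rbar, rbar, D4 * rbar)
    return xbar_limits, r_limits

# Two hypothetical dissolution subgroups (% released, 6 tablets each)
xbar_limits, r_limits = xbar_r_limits([
    [99, 100, 101, 100, 99, 101],
    [100, 102, 101, 99, 100, 100],
])
```

In routine use the R chart is checked first: the X̄ limits are only meaningful when the within-subgroup variation is itself stable.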
The Shewhart control chart detects large shifts in the process, between 0.5 and 2.5 standard deviations. For more sensitive detection of shifts smaller than 0.5 standard deviations, CUSUM (Cumulative Sum) and EWMA (Exponentially Weighted Moving Average) charts are used. These charts detect small but persistent changes in the process; they are mainly used in trend analysis for investigation. CUSUM and EWMA charts are useful for the analysis of historical data after the discovery of a problem (post mortem). They are used when there is a large time difference between two measurements, or on a daily basis for quick detection. CUSUM and EWMA charts can be used with both variable and discrete data (Burgess et al., 2015; OOT Forum, 2015).

CUSUM charts
The Cumulative Sum (CuSum) control chart displays cumulative sums of the deviations of measurements, or subgroup means, from a target value. As measurements are taken, the difference between each measurement and the benchmark value, which is the process target (µ0), is calculated and cumulatively summed up, the CuSum. If the process is in control, measurements do not deviate significantly from the process target, so measurements greater than the process target and those less than it average each other out, and the CuSum value varies narrowly around the process target level. If the process is out of control, measurements are more likely to fall on one side of the benchmark, so the CuSum value will progressively depart from it (Fig. 16) (Burgess et al., 2015; OOT Forum, 2015).
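A minimal sketch of the tabular (one-sided) form of CuSum, a common implementation of this idea; the allowance k and the data are assumptions for illustration:

```python
def tabular_cusum(x, target, k=0.5, sigma=1.0):
    """One-sided CuSum statistics: C+ accumulates upward deviations beyond the
    allowance k (in sigma units), C- accumulates downward deviations."""
    c_plus, c_minus = [], []
    cp = cm = 0.0
    for xi in x:
        z = (xi - target) / sigma        # standardised deviation from target
        cp = max(0.0, cp + z - k)
        cm = max(0.0, cm - z - k)
        c_plus.append(cp)
        c_minus.append(cm)
    return c_plus, c_minus

on_target, _ = tabular_cusum([100.0] * 5, target=100.0)          # stays at zero
shifted, _ = tabular_cusum([101.0, 101.0, 101.0], target=100.0)  # grows steadily
```

A signal is raised when either statistic exceeds a decision interval (often 4-5 sigma units); the steady growth of `shifted` shows how a small persistent shift accumulates.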

EWMA
The Exponentially Weighted Moving Average (EWMA) chart, also referred to as a Geometric Moving Average (GMA) chart, is a good alternative to the Shewhart control chart when we want to detect small shifts. It acts in a similar way to a CuSum chart. Each point on an EWMA chart is the weighted average of all the previous subgroup means, including the mean of the present subgroup sample (Fig. 17). The weights decrease exponentially going backward in time (Burgess et al., 2015; OOT Forum, 2015).
If is close to 0, more weight is given to past observations. If is close to 1, more weight is given to present information. When =1, the EWMA becomes the Individuals control chart. Typical values for are less than 0.25 (Burges et al., 2015;OOT Forum, 2015).
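The weighting described above can be sketched with the standard EWMA recursion z_i = λ·x_i + (1 − λ)·z_{i−1}; the function name, starting value and data below are illustrative:

```python
# Minimal EWMA recursion sketch; lam is the weighting constant (lambda),
# z0 the starting value (usually the process target). Names are illustrative.
def ewma(measurements, lam, z0):
    z = z0
    out = []
    for x in measurements:
        # each new point is a blend of the newest measurement and history
        z = lam * x + (1 - lam) * z
        out.append(z)
    return out

# with lam = 1 the EWMA reduces to the individual measurements themselves
assert ewma([5.0, 7.0, 6.0], lam=1.0, z0=0.0) == [5.0, 7.0, 6.0]
# a typical small lam smooths the series toward the starting value
values = ewma([10.2, 9.8, 10.4, 10.1], lam=0.2, z0=10.0)
```

With λ = 0.2, four fifths of each plotted point comes from the accumulated past, which is what makes the chart sensitive to small, persistent shifts.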

Control charts for discrete type of data
A discrete distribution is one where the set of possible values for the random variable is at most countable (attribute data). There are several types of mathematical models of data distribution (Fig. 3), but the most useful in SPC analysis are the binomial and Poisson distributions. An advantage of attributes is that they are in general more quickly assessed; sometimes variables are converted to attributes for assessment. However, as we shall see, attributes are not as sensitive a measure as variables and, therefore, detection of small changes is less reliable (Burges et al., 2015; OOT Forum, 2015). Whenever the measured quantities for one item are not continuous but rather quality characteristics or count data, a control chart for discrete data should be used. Usually, the inspected item is classified as a "conforming item" or "nonconforming item". A nonconforming item is a unit of product that does not satisfy one or more of the specifications of the product (it contains at least one nonconformity). If more than one defect can be observed on the same unit, one can be interested in the number of nonconformities (defects) per unit, instead of the fraction nonconforming for a single nonconformity (defect), Fig. 18 (Burges et al., 2015; OOT Forum, 2015).
Hence, a defective is an item or 'unit' that contains one or more flaws, errors, faults or defects, while a defect is an individual flaw, error or fault. The inspection of product defects is based on a defined acceptance quality level/limit (AQL).

Control charts for single nonconformity: P-chart and NP-chart
The control charts for a single nonconformity can track either the fraction (percentage) of nonconforming units (P-chart) or the total number of nonconforming units, if the sample size is the same (NP-chart). These charts are used to monitor the proportion of defective items where each item can be classified into one of two categories, such as pass or fail (Oakland, 2003; OOT Forum, 2015; Thomas et al., 1982). The importance of the sample size should be noted. Namely, if the sample size is too large with respect to the number of nonconforming units (e.g., 20 nonconforming units out of 500 000), the p-chart will not work properly because the control limits are inversely proportional to the sample size; they become very small and the process will look out of control, as data plotted on the control chart will fall outside the control limits. If the sample size is the same (or approximately the same), individuals control charts can be used, plotting the number of nonconforming units. If the sample size differs significantly from one sample point to another, a Laney p'-chart can be used (Laney, 2002; Oakland, 2003; OOT Forum, 2015; Thomas et al., 1982).
Another important point that should be addressed is the dispersion of the data. Overdispersion exists when there is more variation than expected from natural causes, rather than from special causes of variation. Since the P-chart and the later explained U-chart are based on the assumption that the variability is constant, in situations of wide dispersion overdispersion can cause some points to appear out of control when they are not. For a Laney attributes chart, the definition of common cause variation includes not only the within-subgroup variation, but also the average variation between consecutive subgroups. If there is overdispersion, the control limits on a Laney P' chart are wider than those of a traditional attributes chart. The opposite, underdispersion, occurs when there is less than expected variation in the data; it occurs mostly when the data are autocorrelated. For example, as a tool wears out, the number of defects may increase. The increase in defect counts across subgroups can make the subgroups more similar than they would be by chance (Laney, 2002; Oakland, 2003; OOT Forum, 2015; Thomas et al., 1982; https://support.minitab.com/en-us/minitab/18/help-and-how-to/quality-and-process-improvement/control-charts/how-to/attributes-charts/p-chart/before-you-start/overview/).
For example, suppose the subgroups are very large, with an average of about 2500 observations in each (Fig. 20 and Fig. 21), and the P-chart diagnostic test reveals overdispersion in the data. The large subgroup sizes result in very narrow control limits on the traditional P chart. With the narrow control limits, the overdispersion causes several of the subgroups to appear out of control. The Laney P' chart, however, corrects for overdispersion and shows that the process is actually in control: no points fall outside of the control limits (Laney, 2002; Oakland, 2003; OOT Forum, 2015; Thomas et al., 1982; https://support.minitab.com/en-us/minitab/18/help-and-how-to/quality-and-process-improvement/control-charts/how-to/attributes-charts/p-chart/before-you-start/overview/).

P-charts: control chart for proportion of nonconforming units
This chart shows the proportion of nonconforming or defective product produced by a manufacturing process (Laney, 2002;Oakland 2003;OOT Forum, 2015;Thomas et al., 1982).
Suppose m samples of sample sizes n_i are available; the average sample size n̄ is:

n̄ = (1/m) Σ n_i (1.20)

If the sample size is the same for each group, then n̄ = n.
The sample fraction nonconforming for sample i is defined as the ratio of the number of nonconforming units in sample i, D_i, to the sample size n_i, i.e. p̂_i = D_i/n_i (Laney, 2002; Oakland, 2003; OOT Forum, 2015; Thomas et al., 1982).
The P-chart control limits are inversely proportional to the sample size; the formulas for calculating the UCL and LCL are (Fig. 22 and 23) (Laney, 2002; Oakland, 2003; OOT Forum, 2015):

UCL = p̄ + 3√(p̄(1 − p̄)/n_i) (1.23)
LCL = p̄ − 3√(p̄(1 − p̄)/n_i) (1.24)

Depending on the values of p̄ and n_i, sometimes the LCL is less than 0. In these cases, we set LCL = 0 and assume the control chart only has an upper limit (https://support.minitab.com/en-us/minitab/18/help-and-how-to/quality-and-process-improvement/control-charts/how-to/attributes-charts/p-chart/before-you-start/overview/).
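A minimal sketch of the limit calculation in equations 1.23 and 1.24, with illustrative inputs (the average fraction nonconforming p̄ and subgroup size are made-up example values):

```python
import math

# Sketch of P-chart limits for one subgroup of size n_i, per eq. 1.23/1.24:
# UCL/LCL = p_bar +/- 3*sqrt(p_bar*(1 - p_bar)/n_i). Values are illustrative.
def p_chart_limits(p_bar, n_i):
    half_width = 3 * math.sqrt(p_bar * (1 - p_bar) / n_i)
    ucl = p_bar + half_width
    lcl = max(0.0, p_bar - half_width)  # a negative LCL is floored at 0
    return lcl, ucl

# e.g. average fraction nonconforming of 0.02 and a subgroup of 500 units
lcl, ucl = p_chart_limits(0.02, 500)
```

Because n_i appears under the square root, larger subgroups narrow the limits, which is exactly why very large subgroups can make a traditional P-chart look out of control.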

NP-charts: control chart for number of nonconforming units (defectives)
This chart shows the total number of nonconforming units, with constant sample size, and is otherwise almost the same as the P-chart. Each point on an NP-chart represents the number of defective items or units for one subgroup (Fig. 24). The centerline on an NP-chart represents the average number of defectives per subgroup, np̄. If the subgroup sizes are not equal, the centerline is not straight on the NP-chart (Oakland, 2003; OOT Forum, 2015).

C-charts for number of defects/non-conformities
This chart shows the number of defects or nonconformities produced by a manufacturing process. Each point on a C-chart represents the number of defects for one subgroup. When we have a constant sample size, n, of inspection units from one sample to another, we can work with the total number of nonconformities per sample and construct the c-chart. The total number of nonconformities in a sample is represented on the chart (Oakland, 2003; OOT Forum, 2015):

c_j = Σ_i x_ij (1.28)

where x_ij is the number of defects for inspection unit i in sample j. The total number of nonconformities in a sample follows a Poisson distribution. The C-chart upper control limit, lower control limit and centerline are calculated according to formulas 1.29, 1.30 and 1.31, respectively:

UCL = c̄ + 3√c̄ (1.29)
CL = c̄ (1.30)
LCL = c̄ − 3√c̄ (1.31)

where c̄ is the observed average number of nonconformities in a preliminary sample of m inspection units (Oakland, 2003; OOT Forum, 2015). If the LCL yields a negative value, then the LCL is fixed to 0 (https://support.minitab.com/en-us/minitab/18/help-and-how-to/quality-and-process-improvement/control-charts/how-to/attributes-charts/c-chart/before-you-start/example/).
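A sketch of the C-chart limits for Poisson counts, following formulas 1.29 to 1.31; the defect counts below are made-up example data:

```python
import math

# Sketch of C-chart limits from the average defect count c_bar of Poisson
# counts: CL = c_bar, UCL/LCL = c_bar +/- 3*sqrt(c_bar). Data are illustrative.
def c_chart_limits(defect_counts):
    c_bar = sum(defect_counts) / len(defect_counts)
    ucl = c_bar + 3 * math.sqrt(c_bar)
    lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))  # a negative LCL is set to 0
    return lcl, c_bar, ucl

# e.g. defect counts from six preliminary samples of constant size
lcl, center, ucl = c_chart_limits([4, 2, 5, 3, 6, 4])
```

Note how the Poisson assumption does the work here: the variance equals the mean, so a single average count fixes both the centerline and the limit width.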

U-Charts for number of defects/non-conformities
If the sample size is not constant and can vary from one sample to another, the average number of nonconformities per unit of product should be used instead of the total number (Fig. 26). The average number of nonconformities per unit is labeled ū, which also represents the centerline of the chart, and is calculated according to equation 1.32 (Oakland, 2003; OOT Forum, 2015):

ū = (Σ c_i)/(Σ n_i) (1.32)

where c_i is the total number of nonconformities in a sample of size n_i. Thus ū represents the observed average number of nonconformities per unit in a preliminary data set of m inspection units, and n_i is the sample size of the currently inspected sample (Oakland, 2003; OOT Forum, 2015; https://support.minitab.com/en-us/minitab/18/help-and-how-to/quality-and-process-improvement/control-charts/how-to/attributes-charts/u-chart/before-you-start/example/).
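A sketch of the U-chart centerline and the commonly used limits ū ± 3√(ū/n_i), which vary with each subgroup size; the totals below are made-up example values:

```python
import math

# Sketch of U-chart limits: u_bar is total defects over total units inspected
# (eq. 1.32), and the limits vary with the current subgroup size n_i as
# u_bar +/- 3*sqrt(u_bar/n_i). All inputs are illustrative.
def u_chart_limits(total_defects, total_units, n_i):
    u_bar = total_defects / total_units        # centerline, defects per unit
    half_width = 3 * math.sqrt(u_bar / n_i)
    return max(0.0, u_bar - half_width), u_bar, u_bar + half_width

# e.g. 90 defects found across 45 inspected units; current subgroup of 9 units
lcl, center, ucl = u_chart_limits(90, 45, 9)
```

Because n_i enters each subgroup's limit separately, a U-chart's limit lines step up and down as the subgroup size changes, unlike the straight limits of a C-chart.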

Interpretation of the Control Charts, trending rules
The control charts are designed for detection of a change in the average value of the process, a change in the variability, or both simultaneously. Through control charts, we can detect continuous changes, drifting or trend, or frequent irregular changes. The trending rules take into account the differences between natural and unnatural patterns based on several criteria. For example, the absence of data around the central line, or too much data near the control limits, indicates the presence of more than one distribution; this is called a mixture. The absence of data near the control limits indicates stratification, i.e. abnormally small fluctuations. The presence of data outside the control limits indicates instability, i.e. abnormally large fluctuations. Other unnatural patterns are systematic movement, cyclical repetition and trend (Oakland, 2003; OOT Forum, 2015; SPC Education course, 2016; Thomas et al., 1982).

Control chart zones
Control charts are divided into three zones, A, B and C, depending on the spread of the standard deviation, as shown in Fig. 8 and Table 1. Zone A is the zone most distant from the centerline; it lies between 2 and 3 standard deviations above and below the centerline (the mean value of the process), and its outer limits are the UCL and LCL. Zone B is between 1 and 2 standard deviations from the centerline; its limits are the warning limits, UWL and LWL. Zone C is between the centerline and one standard deviation. Zones A, B and C are sometimes called the three sigma zone, two sigma zone and one sigma zone, respectively (Education course SPC, 2016; Oakland, 2003; OOT Forum, 2015; Thomas et al., 1982).

WECO (Western Electric) rules
The WECO rules were codified by a specially appointed committee of the manufacturing division of the Western Electric Company and appeared in the first edition of a 1956 handbook that became a standard text of the field. Their purpose was to ensure that line workers and engineers interpret control charts in a uniform way, and today they are widely used in interpreting control charts. There are four (4) rules used in analyzing and interpreting data patterns. If any one of the rules is violated, it is considered that a trend or some kind of instability exists and there is special cause variation (OOT Forum, 2015; Thomas et al., 1982).
Rule 1: if any single data point falls outside the 3σ limit from the centerline, i.e. outside zone A or beyond the UCL and LCL, the process is out of control. Rule 2: if 2 out of 3 consecutive points on the same side of the centerline fall beyond the 2σ limit (in zone A or beyond), the rule is violated. Rule 3: if four (4) out of five (5) consecutive points on the same side of the centerline fall beyond the 1σ limit (in zone B or beyond), the rule is violated. Rule 4: if nine (9) consecutive points fall on the same side of the centerline (in zone C or beyond), a trend exists. Fig. 27 shows examples of the WECO rules.
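The four WECO rules can be sketched as simple checks on standardized values z = (x − mean)/σ; the function below reports which rules fire at the most recent point, with illustrative names and data:

```python
# Sketch of the four WECO run rules on standardized values z = (x - mean)/sigma.
# Returns the rules that fire at the last point. Names/data are illustrative.
def weco_violations(z):
    fired = []
    if abs(z[-1]) > 3:                                   # Rule 1: beyond 3 sigma
        fired.append(1)
    last3 = z[-3:]
    if (sum(1 for p in last3 if p > 2) >= 2 or
            sum(1 for p in last3 if p < -2) >= 2):       # Rule 2: 2 of 3 beyond 2 sigma
        fired.append(2)
    last5 = z[-5:]
    if (sum(1 for p in last5 if p > 1) >= 4 or
            sum(1 for p in last5 if p < -1) >= 4):       # Rule 3: 4 of 5 beyond 1 sigma
        fired.append(3)
    last9 = z[-9:]
    if len(last9) == 9 and (all(p > 0 for p in last9) or
                            all(p < 0 for p in last9)):  # Rule 4: 9 on one side
        fired.append(4)
    return fired

# a single 3.5-sigma spike triggers Rule 1 only
assert weco_violations([0.1, -0.4, 0.2, 3.5]) == [1]
```

Running the check on every new measurement, rather than on the whole history at once, matches how the rules are applied on a live chart.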

Nelson rules
The Nelson rules were first published by Lloyd S. Nelson in 1984 in the Journal of Quality Technology. There are eight (8) Nelson rules for interpreting a control chart and separating special causes from common causes of variation (Nelson, 1984; OOT Forum, 2015). Rule 1: one point is more than 3 standard deviations from the mean, which means that the sample is out of control (Fig. 28). Rule 2: nine (9) or more points in a row are on the same side of the mean, which means that some prolonged bias exists (OOT Forum, 2015) (Fig. 29). Rule 3: six (6) or more points in a row are continually increasing or decreasing, which indicates a trend, i.e. drifting of the average value. Rule 4: fourteen (14) or more points in a row alternate in direction, increasing then decreasing, which usually means that the process is over-controlled (OOT Forum, 2015) (Fig. 31). Rule 5: two (2) out of three (3) points in a row are more than 2 standard deviations from the mean in the same direction, which can be interpreted as a medium tendency for samples to be out of control (OOT Forum, 2015) (Fig. 32).
Rule 6: four (4) out of five (5) points in a row are more than 1 standard deviation from the mean in the same direction; there is a strong tendency for samples to be slightly out of control (OOT Forum, 2015) (Fig. 33).
Rule 7: fifteen (15) points in a row are all within 1 standard deviation of the mean, on either side of the mean (OOT Forum, 2015) (Fig. 34). Rule 8: eight points in a row exist with none within 1 standard deviation of the mean, and the points lie on both sides of the mean. Rules 7 and 8 are diagnostic tests for stratification, i.e. for abnormal fluctuation in data (OOT Forum, 2015). Of the eight Nelson rules, often only the first three are used; the reason is that we should be cautious not to overreact and search for out-of-trend results where they do not exist (OOT Forum, 2015).
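The first three Nelson rules, the ones most commonly applied in practice, can be sketched as checks on standardized values z; the helper name and data are illustrative:

```python
# Sketch of the first three Nelson rules on standardized values
# z = (x - mean)/sigma; returns the rules that fire at the last point.
# Helper name and example data are illustrative.
def nelson_first_three(z):
    fired = []
    if abs(z[-1]) > 3:                       # Rule 1: one point beyond 3 sigma
        fired.append(1)
    last9 = z[-9:]
    if len(last9) == 9 and (all(p > 0 for p in last9) or
                            all(p < 0 for p in last9)):
        fired.append(2)                      # Rule 2: 9 points on one side
    last6 = z[-6:]
    increasing = all(a < b for a, b in zip(last6, last6[1:]))
    decreasing = all(a > b for a, b in zip(last6, last6[1:]))
    if len(last6) == 6 and (increasing or decreasing):
        fired.append(3)                      # Rule 3: 6 points steadily drifting
    return fired

# six steadily increasing points trigger Rule 3, a drift of the average value
assert nelson_first_three([0.1, 0.2, 0.4, 0.7, 1.1, 1.6]) == [3]
```

Limiting the check to these three rules follows the caution above: each extra rule adds sensitivity but also raises the false alarm rate.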
Even the FDA Guidance for Industry Process Validation: General Principles and Practices, in part D, Stage 3: Continued Process Verification, clearly states that "Procedures should describe how trending and calculations are to be performed and should guard against overreaction to individual events as well as against failure to detect unintended process variability".

Examples of interpreting control charts
Test 1, Nelson and WECO: points outside the control limits (Fig. 36); the process is grossly out of control, seen mainly as a large difference between two consecutive results (OOT Forum, 2015; Thomas et al., 1982). Nelson Test 2 = WECO Test 4 (Fig. 29): at least 9 points in a row below the average; these rules can detect a change in the average value of the process (Burges et al., 2015; Nelson, 1984; Thomas et al., 1982). Nelson Rule 3 (Fig. 38): 6 or more points in a row are continually increasing or decreasing; this test indicates a trend, i.e. drifting of the average value.

Steps for control charting and trending
Control charting and trending can be explained in several short steps:
- The samples should be collected at exact time intervals with a defined sample size, n; at least 30 samples are needed for establishing the control chart limits.
- After data collection, we should check the normality of the data.
- The data should not be autocorrelated.
- Depending on the sample size, we should construct I or X̄ charts for the process average and MR or R charts for the process variability.
- We should check the stability of the process on the MR or R charts by applying the WECO or Nelson rules; if they are violated, the process is unstable. Consequently, this means that the UCL and LCL for the I/X̄ charts are invalid. It should be noted that the UCL and LCL are valid only for a stable process.
- If the process is stable, the UCL and LCL can be accepted, and for every new measurement the WECO or Nelson rules should be applied. If the rules are violated, an investigation should be triggered and closed before batch disposition.
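For the individuals (I) chart step, the limits are commonly derived from the average moving range with the constant d2 = 1.128 for subgroups of two. The sketch below assumes an in-control baseline series (in practice at least 30 values, shortened here for illustration):

```python
# Sketch of individuals (I) chart limits estimated from the moving range;
# d2 = 1.128 for a moving range of span 2. Baseline data are illustrative
# (a real baseline would hold at least 30 in-control values).
def i_chart_limits(baseline):
    mean = sum(baseline) / len(baseline)
    moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    sigma_hat = mr_bar / 1.128       # sigma estimated from the average moving range
    return mean - 3 * sigma_hat, mean, mean + 3 * sigma_hat

lcl, center, ucl = i_chart_limits([10.0, 10.2, 9.9, 10.1, 10.0, 9.8])
# each new measurement is then compared against lcl/ucl and the run rules
```

Estimating σ from the moving range rather than the overall standard deviation keeps slow drifts in the baseline from inflating the limits.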

Basic Capability Indices
Process capability is another vast area in SPC, and in this review it is mentioned only as the connection between process stability and capability. Process capability refers to the performance of the process when it is operating in control. A process capability index is a measure relating the actual performance of a process to its specified performance. The process itself is a combination of the materials used for production, the equipment, the method of production, the personnel involved in the manufacturing operation and the environment. The absolute minimum requirement is that three standard deviations on each side of the process mean (the UCL and LCL) are contained within the specification limits. This means that about 99.7 per cent of output will be within the tolerances (Fig. 8). A more stringent requirement is often stipulated for the UCL and LCL to ensure that product of the correct quality is consistently obtained over the long term. When a process is under statistical control, i.e. only common causes of variation are present, a process capability index may be calculated (Fig. 39).
Process capability indices are simply a means of indicating the variability of a process relative to the product specification tolerance: Cp and Cpk. Cp is the process capability ratio for a centered process and measures the potential capability of the process. Cpk is the process capability ratio for an off-center process and measures the actual capability of the process (Burges et al., 2015; Oakland, 2003).

Calculation of the Cp index
In order for the product to be manufactured with parameters that are within specification, it is important that the process spread is smaller than the difference between the upper specification limit (USL) and the lower specification limit (LSL), the so-called tolerance (T). Fig. 40 shows how widening the specification limits affects the process capability.
Usually in the pharmaceutical industry the pharmacopoeial monographs impose the product specification limits, and changing the limits would not be allowed. In that case, efforts should be made to lower the standard deviation as much as possible. The calculation of the Cp index takes into consideration the established specification limits and the standard deviation σ, according to equation 1.36:

Cp = (USL − LSL)/(6σ) (1.36)

When variables control charts are used in capability studies, the standard deviation σ is estimated by R̄/d2, with the constant d2 as explained in Control charts for variable data (Burges et al., 2015; Oakland, 2003).

Calculation of the Cpk index
The process capability ratio Cp does not take into account where the process mean is located relative to the specifications. In some instances, we can see wide specification limits with a low standard deviation, yet part of the process lies outside the specification limits (Fig. 41). The index used to measure both the process variation and the centering is Cpk, which is widely accepted as the process capability index (Burges et al., 2015; Oakland, 2003). Fig. 41. Non-centered process 1 (Ref. Oakland, 2003).
For the upper and lower specification limits there are two Cpk values, Cpku and Cpkl. These relate the difference between the process mean, μ, and the upper and lower specification limits, respectively, to 3σ (half the total process variation). Cpk is the minimum of Cpku and Cpkl. It is calculated according to formulas 1.37 and 1.38 (Burges et al., 2015; Oakland, 2003).
Cpku = (USL − μ)/(3σ) (1.37)
Cpkl = (μ − LSL)/(3σ) (1.38)

Many industries use a benchmark value of 1.33. If Cpk is lower than the 1.33 benchmark, ways to improve the process should be considered, such as reducing its variation or shifting its location. Fig. 42 shows examples of process stability and location. For a stable and capable process, Example 1, the control limits are narrower than the specification limits and all process data lie within the UCL and LCL, around the target of the specification. For an unstable but capable process, Example 2, the UCL and LCL are much narrower and some data fall outside the control limits, although all the data remain around the target value of the specification. In Example 3 it is noticeable that all the data lie toward the upper specification limit, although all of them are within the control and specification limits; because the data crowd the upper specification limit, the process is incapable. Example 4 shows data that are not only out of control but also out of specification; the process is unstable and incapable (OOT Forum, 2015).
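The relationship between Cp, Cpk and the 1.33 benchmark can be sketched directly from equations 1.36 to 1.38; the specification limits, mean and standard deviation below are made-up example values:

```python
# Sketch of Cp/Cpk per equations 1.36-1.38; all inputs are illustrative.
def cp_cpk(usl, lsl, mu, sigma):
    cp = (usl - lsl) / (6 * sigma)    # potential capability (centered process)
    cpku = (usl - mu) / (3 * sigma)   # upper one-sided index
    cpkl = (mu - lsl) / (3 * sigma)   # lower one-sided index
    cpk = min(cpku, cpkl)             # actual capability (off-center process)
    return cp, cpk

# a centered process: Cp equals Cpk
cp, cpk = cp_cpk(usl=105, lsl=95, mu=100, sigma=1.0)
assert abs(cp - cpk) < 1e-12

# an off-center process: Cpk drops below Cp and below the 1.33 benchmark
cp2, cpk2 = cp_cpk(usl=105, lsl=95, mu=103, sigma=1.0)
assert cpk2 < 1.33 < cp2
```

The second case illustrates the point of Cpk: the spread is unchanged, so Cp still looks healthy, but the off-center mean pushes the actual capability below the benchmark.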

Conclusion
Trending is a trend in the GMP environment; particularly in process validation, trending has been a requirement from the FDA since 2011. Control charts are essential tools for data analysis: establishing control limits on CQAs, trending, and detecting unexpected or special cause variability in the process. They are used to gain more knowledge of the process (Fig. 42. Examples of process stability and capability; Ref. OOT Forum, 2015). Critical data are monitored more frequently, so that adequate, risk-based responses can be taken in case of violations. We should ensure that the correct control chart is used, based on the data type, distribution and sample sizes. Along with this, it is imperative to ensure robust documentation and knowledge management. Continuous monitoring should enable management to assess process capability more frequently and drive improvements.