Abstract

The traditional P-type iterative learning control algorithm exhibits a very low error convergence rate for a class of regular linear systems. To deal with this problem, a fast iterative learning control algorithm is designed in this paper. The algorithm is based on the traditional P-type iterative learning control law and is augmented with the difference signals of the tracking errors of two adjacent iterations and the difference signal of the current error. Using the generalized Young inequality of convolution, it is strictly proved that, in the sense of the Lebesgue-p norm, the tracking error of the system converges to zero as the number of iterations tends to infinity, and the convergence condition of the algorithm is presented. Compared with the traditional P-type iterative learning control algorithm, the proposed algorithm improves the convergence speed and avoids the defect of measuring the tracking error with the λ-norm. Finally, the effectiveness of the proposed algorithm is further validated by simulation results.

1. Introduction

Iterative learning control is suitable for controlled objects that perform repetitive motions over a limited time interval. It uses the data generated during the previous iteration of the system to correct undesirable control signals and to generate the control signals used in the current iteration, so that the control performance is gradually improved and complete tracking over the limited time interval is finally achieved. Compared with other control methods, the iterative learning control method has a simple controller structure, requires little computation and only limited knowledge of the system dynamics, and can still achieve precise control. These precise tracking characteristics are exploited in many industrial applications such as assembly-line industrial robots and chemical batch processes. The iterative learning control algorithm differs from other learning algorithms such as neural networks and adaptive control: it targets controlled systems with repetitive operation characteristics over a finite time interval and uses the tracking errors stored in the system to modify the control input from trial to trial, so as to realize the goal of completely tracking the expected trajectory [1-3]. The iterative learning controller can be designed without precise model information and has the advantages of a simple structure, suitability for batch processes, etc. [4, 5]. Iterative learning control [6] has achieved many results in theoretical research and practical application since it was proposed [7, 8]. The ILC optimal approach has also been used recently for error convergence [9]. The soft, inflatable robotic manipulator has many useful features: high compliance and low inertia combined with pneumatic actuation enable fast but still safe operations and applications [10-12].
However, precise position control is challenging for soft manipulators since they usually have many potentially coupled and uncontrollable degrees of freedom [13, 14]. Besides, soft materials behave dynamically as viscoelastic materials, which are problematic to model from first principles [15]. The body of a soft robot is made of inherently soft and compliant materials. This inherent softness allows such robots to interact with delicate objects and to passively adjust their shape to adapt to unstructured environments [16]. These features are desirable for robotic applications that require safe human-robot interaction, such as wearable robots, home assistant robots, and medical robots. However, these soft bodies also present modeling and control challenges that have limited their functionality so far. The challenge in constructing precise control technology is the difficulty of designing a soft robot model suitable for model-based control design. Consider, for example, a rigid mechanical system, in which rigid links are connected through discrete joints. Since the joint displacements completely describe the configuration of the rigid-body system, the joint displacements and their derivatives are the natural choice of state variables for a rigid-body robot.

Furthermore, many typical path-tracking control strategies have been adopted for soft manipulators [17]. A pneumatic control system based on open-loop and mechanical feedback control topologies is discussed in [18]. Model predictive control (MPC) and a neural-network-based nonlinear MPC methodology have been adopted to achieve error convergence for soft actuators [19]. A further advance in control methods is reinforcement learning, which has been introduced for precise position tracking in these manipulators [4, 20].

In [21], the authors used a learned inverse kinematics model to enhance the position tracking accuracy of a soft processing aid. Iterative learning control (ILC) is applied in [22] to find a control strategy for a soft mesh-worm robot. The authors of [23] used ILC to generate flexible impact behavior, and the authors of [24] reported an ILC-based method to learn a grasping task with a soft, fluidic, elastomeric manipulator. A graph-based, model-free flexible robot motion control framework was proposed in [25-27]. In [28, 29], the authors suggested a control strategy inspired by marine life. Both of these solutions are concerned only with the coarse-grained motion of the soft robot and lack fine-grained control and dynamic response adjustment. Reference [30] uses a numerical model to control the system response, but this technique applies only to a linear, predictable model. Such assumptions, together with the lack of feedback loops, can make the system unbalanced and yield unwanted responses. References [31-33] proposed a control strategy based on the Finite Element Method (FEM), which can attain high accuracy but needs a detailed understanding of the mechanical properties of soft structural materials. An FEM-based controller incurs a high computational cost, making it impossible to execute in real time on an embedded processor. The workaround is to run the FEM-based mechanism in a feedforward open-loop mode, which lets control errors dominate and reduces the overall robustness of the system. A model-based dynamic response optimization control strategy for soft robots is proposed in [24, 34, 35].

So far, most of the literature on iterative learning control has focused on the convergence of the algorithm in the sense of the λ-norm metric, pointing out that convergence can only be guaranteed if λ is large enough [36-38]. Since the λ-norm is an upper-bound norm weighted by a negative exponential function, it cannot objectively quantify the essential characteristics of the error. The paper [39] found that even though the learning algorithm is theoretically convergent when λ takes a very large value, the upper bound of the error during the initial stage of system operation often exceeds the allowable error range of practical engineering. To avoid these defects of the λ-norm, the papers [40, 41] presented the convergence of the PD-type iterative learning control algorithm in the sense of the sup-norm [42, 43]. It was found that the learning algorithm is convergent only on a subinterval of the system running time interval. In [44], to make the iterative learning control algorithm convergent in the sense of the upper-bound norm over the whole interval, the learning law was modified subinterval by subinterval accordingly. However, the algorithm structure is quite complex, and it is not easy to apply in practical nonlinear engineering systems [45].

Furthermore, the Lebesgue-p norm is a more reasonable measure of the quantitative properties of a function: it accounts both for the upper-bound value of the function over the whole time interval and for the integral of the function over the running time [46]. In [47, 48], the tracking performance of iterative learning control is discussed using the Lebesgue-p norm, but the convergence of the algorithm is not addressed. In [49, 50], the stability of iterative learning control for multistate-delay linear systems is studied, and the Lebesgue-2 norm is used to evaluate the tracking performance of the learning algorithm. In [51], convergence analysis is carried out for PD-type iterative learning control with feedback information, in the sense of the Lebesgue-p norm, for linear time-invariant systems. The literature [52, 53] analyzes the convergence of fractional-order iterative learning control laws in the sense of the Lebesgue-p norm. Based on the Lebesgue-p norm, an accelerated initial-state error convergence topology is discussed in [48, 54].

Further, the convergence of a variable-gain iterative learning control algorithm is discussed in [55] in the sense of the Lebesgue-p norm. It can be found from the literature [49, 51-55] that although these research results avoid the defect of measuring the tracking error with the λ-norm, they are all convergence analyses for completely nonregular systems (with zero direct transmission term), and their conclusions do not apply to regular systems. The reason is that, for a completely nonregular system, the iterative learning control law must contain the derivative of the tracking error, namely, a derivative (D-type) or PID-type iterative learning law. For a regular system, by contrast, only the tracking error itself, namely, a proportional (P-type) iterative learning law, is needed to correct the control law. Because the traditional P-type iterative learning algorithm uses only the previous tracking error to correct the control law, its tracking speed is low. To improve the convergence speed of the conventional P-type iterative learning algorithm, an iterative learning control algorithm is proposed in [56], but its convergence analysis still adopts the λ-norm. In that theoretical analysis, the λ-norm was used to measure the tracking error. The convergence condition of the control algorithm could be satisfied when the parameter λ was relatively large, but the maximum transient tracking error then fell beyond the allowable range of practical engineering applications during repeated operation of the system, leading to system collapse [57-59]. In [60, 61], Ruan et al. studied the convergence of P-type and PD-type iterative learning control algorithms for linear time-invariant systems using the Lebesgue-p (Lp) norm and found that the convergence condition of the system is independent of the value of the parameter λ and depends mainly on the properties of the system itself and the learning gain matrix.
Furthermore, in the sense of the Lebesgue-p norm, the convergence of a fractional-order iterative learning control algorithm for fractional-order linear systems is discussed in [62]. To cope with the above defects, this paper proposes, for a class of regular systems, an algorithm that improves the convergence speed of the traditional P-type iterative learning algorithm. It also avoids measuring the tracking error with the λ-norm by using the previously stored tracking error of the system together with the current tracking error information, as well as the difference between the two error signals along the iteration axis. The successively modified control input of the fast iterative learning control algorithm yields accelerated convergence in the Lebesgue-p norm under suitable conditions. This paper is organized as follows. Section 2 presents the problem description and its importance, as well as the basic mathematical background of the relevant problems. Section 3 presents the convergence analysis and the proof of error convergence, and it gives sufficient conditions for the validity of the proposed algorithm. Section 4 elaborates the validation of the proposed algorithm and discusses the results. Finally, concluding remarks are given in Section 5.

2. Problem Description

Consider a class of regular systems with repetitive running characteristics:

x'_k(t) = A x_k(t) + B u_k(t), y_k(t) = C x_k(t) + D u_k(t), (1)

where k denotes the number of iterations, t is the running time of the system on its finite interval, x_k(t) is the state vector of the system at time t, and u_k(t) and y_k(t) are the control input vector and output vector, respectively, at time t. The matrices A, B, C, and D have appropriate dimensions.

It is considered that the initial state of the system in every iteration is consistent with the expected initial state; that is, x_k(0) = x_d(0).

Hypothesis 1. There is a unique ideal input u_d(t) that makes (2) true:

x'_d(t) = A x_d(t) + B u_d(t), y_d(t) = C x_d(t) + D u_d(t), (2)

where y_d(t) denotes the expected trajectory and x_d(t) is the expected state.

2.1. Control Target

This research’s primary and vital control objective is to design a fast iterative learning control algorithm for a regular linear system described in (1) and to overcome the shortcoming of the low convergence speed of the traditional P-type iterative learning control algorithm. Simultaneously, the convergence of the algorithm is analyzed by using the Lebesgue-p norm to overcome the defect of using the tracking error measured norm.

For this control goal, the fast iterative learning control algorithm is designed as

u_{k+1}(t) = u_k(t) + Γ1 e_k(t) + L1 e_{k+1}(t) + Γ2 [e_k(t) − e_{k−1}(t)] + L2 [e_{k+1}(t) − e_k(t)], (3)

where e_k(t) = y_d(t) − y_k(t) is the tracking error of the kth trial and e_{k+1}(t) is the tracking error of the (k+1)th trial. e_k(t) − e_{k−1}(t) and e_{k+1}(t) − e_k(t) are difference signals of the error along the iteration axis, where e_k(t) − e_{k−1}(t) is called the previous difference signal and e_{k+1}(t) − e_k(t) is called the difference signal of the current trial. Γ1 is the learning gain of the tracking error, L1 is the feedback gain of the tracking error, and Γ2 and L2 are the learning gain and feedback gain of the difference signals, respectively.

According to algorithm (3), when the feedback gains of the tracking error and of the difference signal are set to zero, algorithm (3) reduces to the open-loop iterative learning control algorithm (4), which uses only the information of previous trials.

When the feedback gains and the learning gain of the difference signal are all set to zero, algorithm (3) becomes the traditional P-type iterative learning control algorithm (5).
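To make the relationships among the three laws concrete, the following minimal numerical sketch compares the traditional P-type update with a fast update in the spirit of algorithm (3), which also uses the current (feedback) error and both iteration-axis difference signals. The plant, gains, and trajectory here are illustrative assumptions, not the systems or parameters used later in the paper; the feedback terms act through the direct transmission term d, which is why the current error can be solved for explicitly at each time step.

```python
import numpy as np

a, b, c, d = 0.5, 1.0, 0.2, 1.0   # toy regular plant (d != 0 is the feedthrough)
N = 50
yd = np.sin(2 * np.pi * np.arange(N) / N)  # illustrative desired trajectory

def run_plant(u):
    """One open-loop trial: x[t+1] = a x[t] + b u[t], y[t] = c x[t] + d u[t]."""
    x, y = 0.0, np.zeros(N)
    for t in range(N):
        y[t] = c * x + d * u[t]
        x = a * x + b * u[t]
    return y

def p_type(iters, g1=0.5):
    """Traditional P-type law: u_{k+1} = u_k + g1 * e_k, run open-loop."""
    u = np.zeros(N)
    for _ in range(iters):
        u = u + g1 * (yd - run_plant(u))
    return np.max(np.abs(yd - run_plant(u)))

def fast_ilc(iters, g1=0.5, g2=1.0, g3=0.1, g4=0.1):
    """Sketch of the fast law: previous error, current (feedback) error,
    and both difference signals along the iteration axis. The feedback
    terms create an algebraic loop through the feedthrough d, which is
    solved explicitly for the current error at each time step."""
    u_prev = np.zeros(N)
    e_prev = yd - run_plant(u_prev)   # e_k
    e_old = np.zeros(N)               # e_{k-1}, zero before the first trial
    for _ in range(iters):
        u, e_new, x = np.zeros(N), np.zeros(N), 0.0
        for t in range(N):
            base = u_prev[t] + g1 * e_prev[t] + g3 * (e_prev[t] - e_old[t])
            # solve e = yd - c x - d u with u = base + (g2+g4) e - g4 e_prev
            e_new[t] = (yd[t] - c * x - d * base + d * g4 * e_prev[t]) \
                       / (1.0 + d * (g2 + g4))
            u[t] = base + (g2 + g4) * e_new[t] - g4 * e_prev[t]
            x = a * x + b * u[t]
        u_prev, e_old, e_prev = u, e_prev, e_new
    return np.max(np.abs(e_prev))

print("P-type error after 10 trials:", p_type(10))
print("fast ILC error after 10 trials:", fast_ilc(10))
```

With these toy gains, the feedback and difference terms shrink the error noticeably faster per trial than the pure P-type law, mirroring the qualitative comparison made later in Section 4.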

The question now raised is: what control law should be designed for the linear regular system (1) to make it convergent using algorithm (3), and what conditions should be chosen for the learning and feedback gains?

2.2. Preliminary Knowledge

Convergence is obtained through the following definitions and lemmas. Define a vector-valued function and its norm [63] as

The sup-norm (upper-bound norm) [10] and the Lebesgue-p norm [64] of a vector-valued function are defined as follows:

An important conclusion is given in [65]: the upper-bound norm is a particular case of the Lebesgue-p norm, namely, ‖f‖_sup = lim_{p→∞} ‖f‖_{L^p}.
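This limit relation can be checked numerically. The sketch below uses an assumed test function on [0, 1] (not one from the paper) and a Riemann-sum approximation of the Lebesgue-p norm; as p grows, the computed norm approaches the supremum of the function.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 100001)              # uniform grid on [0, 1]
f = np.abs(np.sin(2 * np.pi * t) * np.exp(-t))  # illustrative |f(t)|

def lebesgue_p(f, p):
    # Riemann-sum approximation of (integral_0^1 |f|^p dt)^(1/p);
    # the mean equals the integral because the interval has length 1
    return np.mean(f ** p) ** (1.0 / p)

for p in (1, 2, 10, 100, 1000):
    print(p, lebesgue_p(f, p))
print("sup-norm:", f.max())
```

The printed values increase with p and approach the sup-norm from below, which is exactly the relation ‖f‖_sup = lim_{p→∞} ‖f‖_{L^p} on a finite interval.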

Lemma 1 [66]. If the vector-valued functions f and h are Lebesgue integrable, then the generalized convolution Young inequality

‖f ∗ h‖_{L^r} ≤ ‖f‖_{L^p} ‖h‖_{L^q}

holds, where (f ∗ h)(t) = ∫_0^t f(t − τ) h(τ) dτ is the convolution of f and h, and the parameters satisfy p, q, r ≥ 1 and 1/p + 1/q = 1 + 1/r. In particular, when q = 1, Young's inequality gives ‖f ∗ h‖_{L^p} ≤ ‖f‖_{L^p} ‖h‖_{L^1}.
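The lemma can be spot-checked numerically in its discrete (sequence) form, where the same exponent relation 1/p + 1/q = 1 + 1/r applies. The sequences and exponents below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(200)   # illustrative sequences
h = rng.standard_normal(300)

def lp(x, p):
    """Discrete l^p norm of a sequence."""
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

p, q = 1.2, 1.5
r = 1.0 / (1.0 / p + 1.0 / q - 1.0)   # exponent relation gives r = 2 here
lhs = lp(np.convolve(f, h), r)
rhs = lp(f, p) * lp(h, q)
print(lhs, rhs)   # lhs <= rhs, as Young's inequality guarantees
```

The special case q = 1 used repeatedly in the convergence proof can be checked the same way, with `lp(np.convolve(f, h), p) <= lp(f, p) * lp(h, 1)`.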

3. Convergence Analysis

Theorem 1. Apply the designed algorithm (3) to control system (1), which meets Hypothesis 1, and suppose that the following conditions are satisfied:(1).(2).

Among them, the constants are determined by the system matrices and the gain matrices. Then, as the number of iterations tends to infinity, the tracking error of the system in the Lebesgue-p norm tends to zero. The proof proceeds from system (1).

For Hypothesis 2, we need the following assumption:

Assumption 1. Assume that the initial state of system (1) and the expected initial state satisfy the resetting condition given below. We consider a class of single-input single-output linear time-invariant systems as follows: The system operates on a finite interval, and the state is an n-dimensional variable. u(t) and y(t) are the control input and output, respectively. A, B, and C are matrices with corresponding dimensions, and it is assumed that . Without loss of generality, it is taken that the dynamics of system (1) are not entirely known, but the initial state of system (1) is resettable when the system runs repeatedly on the interval, and the desired ideal trajectory is given. To realize the system's ultimate complete tracking of the ideal trajectory, we construct a P-type iterative learning control law with feedback information.
Obviously, in control law (3) above, when the feedback gains are set to zero, the degenerate iterative learning control law (4) is obtained. Furthermore, setting the feedback gains and the difference-signal learning gain to zero results in the typical P-type ILC law (5). When the control input of system (1) is given by control law (3), (4), or (5), the corresponding system dynamics follow, where x_k(t), u_k(t), and y_k(t) are the state variables, control input, and output of the system for the kth iteration. In this paper, the Lebesgue-p norm is used to demonstrate the convergence of the algorithm. For easy comparison, the upper-bound norm and the Lebesgue-p norm are defined as follows:
Let f be a vector-valued function and p a positive real number; then, the norm of the vector-valued function f can be expressed accordingly. The sup-norm [59] and the Lebesgue-p norm [65] of the vector-valued function f are defined as above. In [65], an important conclusion is that the upper-bound norm is a particular case of the Lebesgue-p norm.

Proof of Error Convergence.
There is a unique ideal input according to Hypothesis 1, such thatwhereThe above-mentioned parameter is an arbitrary value, the subscript represents the number of iterations, and the proportional and differential learning gain matrices are denoted separately.

Hypothesis 2. The PD-type iterative learning controller (3) is used for system (1). If the stated gain condition is met, then, as the number of iterations approaches infinity, the following hold in the sense of the Lebesgue-p norm:(1)When the error is caused by a deviation of the initial state value, the system output cannot follow the desired trajectory.(2)Over the operating period, the tracking error monotonically tends to zero, and the system output tracks the expected trajectory; i.e.,According to Hypothesis 1, substituting (3) into (20) yields (21). Rearranging (21) yields (22). Taking the Lebesgue-p norm on both sides of (22) and applying the Young inequality gives (23). Rearranging (23) yields (24), that is, (25). Iterating (25), the conditions of the theorem in (26) show that the contraction factor is less than one; that is, when the number of iterations approaches infinity, the tracking error of the system approaches zero.
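The core contraction step behind estimates (21)-(26) can be sketched schematically (a hedged reconstruction with a generic gain matrix Γ, not the paper's exact symbols). Writing the output of (1) via the variation-of-constants formula, one P-type update u_{k+1} = u_k + Γ e_k gives

```latex
% One update step of a P-type law for the regular system (1):
e_{k+1}(t) = (I - D\Gamma)\, e_k(t)
             - \int_0^t C e^{A(t-\tau)} B\,\Gamma\, e_k(\tau)\, d\tau .
% Taking Lebesgue-p norms and applying Young's inequality with q = 1:
\| e_{k+1} \|_{L^p}
  \le \Bigl( \| I - D\Gamma \| + \| C e^{A\,\cdot} B \Gamma \|_{L^1} \Bigr)
      \| e_k \|_{L^p} .
```

The error therefore contracts whenever the bracketed factor is less than one, a condition that depends only on the system matrices and the learning gain, not on any λ parameter; the full law (3) adds analogous terms for the feedback and difference-signal gains.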

Note 1. In general, the Young inequality based on generalized convolution also holds for vector-valued functions. The conclusion obtained in this paper therefore also holds for multi-input multi-output systems under the Lebesgue-p norm defined for vector-valued functions as described in this paper. The demonstration only requires replacing the single-input single-output scalars with the corresponding multidimensional vectors and following the rules of vector algebra in the deduction, so it is not described further.

Note 2. When the difference-signal gains vanish, control law (3) degenerates to a specific iterative learning control law. The convergence conditions show that, in the sense of the Lebesgue-p norm, the convergence of the PD-type iterative learning control law (3) depends not only on the system input-output matrix CB and the differential learning gain, but also on the system state matrix A and the proportional learning gain. Although these convergence conditions are conservative relative to those obtained with the λ-norm, the error measurement and the convergence analysis in this paper do not depend on the selection of the parameter λ, and the convergence conditions essentially show that the system dynamics and the learning gains of the control law play the decisive role in convergence.

Note 3. Compared with the convergence conditions under the λ-norm, the convergence conditions given in this paper are conservative, but convergence no longer depends on selecting the value of λ. At the same time, the convergence condition obtained in this paper constrains the gains only loosely, which gives greater freedom when selecting the feedback gain and the learning gain.

4. Simulation Examples and Discussion

4.1. Algorithm Application to Soft Robotic Position Control

The soft structure has unlimited degrees of freedom; therefore, building a model as accurate as that of a rigid structure is challenging. This makes fine-grained control difficult, especially when tuning the dynamic response. Serious concerns have therefore been raised, especially in rehabilitation applications, where fine-grained control of muscles supported by soft structures is compulsory. A further illustration is high-speed applications, such as industrial robots with soft tentacles, where a finely tuned dynamic response is necessary. As an emerging field, soft robotics has very limited research on precise modeling and dynamic response tuning [67]. One method to improve the tracking accuracy and performance of flexible, inflatable manipulators is to combine flexible structures with rigid parts. Compared with a completely soft design, such a hybrid design usually has lower overall compliance and higher inertia, but the number of degrees of freedom is also reduced. As a result, the control action on the remaining degrees of freedom can be amplified, accordingly improving the tracking control performance. The literature [68-71] describes examples of this kind.

The rigid-body dynamics of the soft robotic arm are calculated by defining the pressure difference between the two actuators, as shown in Figure 1. In the positive alpha direction, a positive pressure difference accelerates the arm (compare Figure 2). To describe the dynamics of the robotic arm with the pressure difference as input and the arm angle as output, system identification is used, applying the same identification procedure as in [72]. The following continuous-time transfer function is obtained:where the parametric values are taken as , , . Discretizing this transfer function with a sampling time of 0.02 s yieldswhere denotes the time index; the arm deflection angle is directly measurable and normalized. The control input and the initial condition are specified accordingly. For the proposed controller, the parameters are , , , , and the desired trajectory is taken as .
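The zero-order-hold discretization step mentioned above can be sketched as follows. Since the identified parameter values are not reproduced here, the second-order state-space model below uses assumed illustrative values of the natural frequency and damping ratio; the matrix exponential is evaluated with a simple truncated Taylor series, which is adequate for this small, well-scaled matrix.

```python
import numpy as np

# Assumed second-order arm model x' = A x + B u, y = C x, sampled at
# the paper's Ts = 0.02 s (wn and zeta are illustrative placeholders).
wn, zeta, Ts = 8.0, 0.4, 0.02
A = np.array([[0.0, 1.0], [-wn**2, -2.0 * zeta * wn]])
B = np.array([[0.0], [wn**2]])
C = np.array([[1.0, 0.0]])

def expm_taylor(M, terms=30):
    """Matrix exponential via truncated Taylor series (fine for small ||M||)."""
    E, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return E

# Exact zero-order-hold discretization: Ad = exp(A Ts),
# Bd = A^{-1} (Ad - I) B (valid because A is invertible here).
Ad = expm_taylor(A * Ts)
Bd = np.linalg.solve(A, Ad - np.eye(2)) @ B
print("Ad =", Ad)
print("Bd =", Bd)
```

In practice the same step is usually done with a library routine such as `scipy.signal.cont2discrete`; the manual version above only makes the underlying formula explicit.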

When algorithm (3) is applied to the soft robotic system (28), the output of the system tries to reach the desired trajectory. It can be seen from Figure 1 that after the second iteration the control effort of learning algorithm (3) is remarkable, but the error is still significant. After a few iterations, it can be noted from Figure 3 that the error reaches its convergence limit, in contrast to the sup-norm. The error measured in the sup-norm is more prominent and does not converge to zero. The reason is that, when iterative learning control algorithm (3) is used for system (1) and the stated condition is met, then, as the number of iterations approaches infinity, the sup-norm of the error remains significant. When the error is caused by a deviation of the initial state value, the system output cannot follow the desired trajectory, so the error does not converge to zero as expected. In contrast, over the operating period, the tracking error monotonically tends to zero, and the system output ultimately tracks the expected output.

Algorithm (3) uses previous and current error information, and its convergence has been proved under sufficient conditions. Under the above conditions, when the algorithms proposed in (3) and (4) are applied to the soft robotic system (28) with an arbitrary initial state, the system tracking errors are shown in Figure 3. In terms of the Lebesgue-p norm, the errors of the proposed algorithms (3) and (4) tend monotonically to zero as the iteration number increases. The tracking error reaches its convergence limit after algorithm (3) executes four iterations. In contrast, algorithm (4) requires more iterations to approach the convergence limit and cannot reach zero. Therefore, under appropriate learning gains, algorithm (3) has a faster convergence speed and higher control accuracy than algorithms (4) and (5). The updating law of algorithm (3) includes feedback gains acting on the current and previous error information. As the number of iterations increases, the tracking error of the system in the Lebesgue-p norm tends to zero, the output of the system follows the reference within the finite time interval, and ultimately perfect tracking of the desired trajectory is achieved. Algorithm (3) is more robust and guarantees monotonic error convergence for position tracking, especially in soft robotic applications. This robust topology can also be applied to higher-order dynamical systems with small modifications of the proportional and derivative learning gains according to the system requirements.

4.2. Validation for Typical PMSM Servo Position Control System

A typical PMSM (permanent magnet synchronous motor)-based servo position control system is taken as an example for validating the proposed algorithm. The standard state-space linear servo position control model of the PMSM can be described as follows:

in which the value of each parameter is described in Table 1.

The state-space equation for the given system in a standard form can be expressed as follows:

The states of the system are described as , and the control input is , for which each matrix of the system can be calculated as follows:

To validate the Lebesgue-p norm proposed in this paper, we assume the parameters to be as follows: the rotational inertia and viscous friction coefficient . For ILC, the parameters are taken as , , , , and the desired trajectory for the system is .

The controller's effort is shown in Figure 2, which describes the output of system (31) attempting to follow the desired position. The figure displays the simulation results and their interpretation and also shows the control effect of particular iterations of the method. As can be seen, the performance in the second iteration is not good: there are significant errors initially, the delay is relatively apparent, and the error is critical. The operation of the ILC further reduces the error and drives the output toward its goal. The error converges rapidly to its limit after a limited time and several iterations, and the method then tracks the target position precisely.

These observations persist even after the controller parameters are modified several times. As shown in Figure 2, the desired and actual output positions of the system can be seen, with the output updated iteration by iteration until it accurately follows the reference. The controller's action is stable and sufficient for the error to converge to its monotone convergence limit under the satisfied conditions.

The system's tracking curve in the second iteration of the learning scheme is seen in Figure 2, and the error curve indicates that the error is still too high. In Figure 4, the errors of different iterations are shown. The error trajectory of the system is already greatly decreased, and the maximum error in the tenth iteration is much smaller than in the second trial (the results are shown in Figure 4); algorithm (3) achieves very good tracking accuracy compared with the other two algorithms. The error becomes small enough to meet the demands of the system. Therefore, the proposed Lebesgue-p norm scheme for accurate position tracking is significantly faster than algorithms (4) and (5). The sufficient conditions and the Lebesgue-p error criterion suggest that the findings are acceptable and that the mechanism is stable enough to control the PMSM servo position. This approach can also be extended, with specific additions, to other complicated speed and position servo systems for a broad range of traditional automation applications.

4.3. Application and Validity for Other Linear Systems

The following linear system, obtained from [73], is taken to further validate the effectiveness of the proposed algorithm:where . Algorithm (3) was used to control system (32). It was assumed that the desired trajectory was , and the initial state of the system was . The initial control was set as , and . If the convergence condition is satisfied, the control parameters are chosen as , , , and . To validate the effectiveness of algorithm (3) proposed in this paper, simulation comparisons are made with the open-loop algorithm (4) and the traditional P-type algorithm (5). The simulation results are shown in Figures 5-7. Figure 5 shows the output tracking curves for different iteration numbers under algorithm (3). Figure 6 shows the tracking error curves in the sense of the sup-norm and the Lebesgue-2 norm; Figure 7 shows the tracking error curves of algorithms (3)-(5) in the Lebesgue-2 norm sense.

As shown in Figure 5, after the 20th iteration, the system output fully tracks the expected trajectory in finite time. It can be seen from Figure 6 that both the Lebesgue-2 norm and the upper-bound norm of the error under algorithm (3) converge to 0. As can be seen from Figure 7, algorithm (3) has the highest convergence rate, algorithm (4) comes second, and algorithm (5) has the lowest convergence rate. The reason lies in the fact that algorithm (4) adds, on top of algorithm (5), the difference signal of the errors of two adjacent iterations. Algorithm (3) uses the current error and the previous error to form the difference signals, while algorithm (4) uses only previous errors to create the difference signal. Compared with algorithm (4), algorithm (3) makes full use of the current error information. To better illustrate the effectiveness of algorithm (3) designed in this paper, the numerical values of the tracking errors of algorithms (3)-(5) under different iteration numbers are given in Table 2.

Table 2 shows that the tracking error of algorithms (3)-(5) in the first iteration is 1.217316. After the 15th iteration, the error of algorithm (5) is 0.07538, the error of algorithm (4) is 0.024335, and the error of algorithm (3) is 0.003683. From the vertical data in Table 2, the tracking errors of the three algorithms decrease successively as the iteration number increases. From the horizontal data in Table 2, the tracking error of algorithm (3) is the smallest, followed by that of algorithm (4), while that of algorithm (5) is the largest at the same iteration number. Therefore, it is easily observed from Table 2 that the convergence speed of the fast iterative learning control algorithm (3) designed in this paper is significantly higher than that of algorithms (4) and (5).

4.4. Validation for Other Linear System

To illustrate the tracking capability of algorithm (3) in this paper for different expected signals, let us assume the expected trajectory, , to be as follows:

The parameter values are the same as those used for the sinusoidal expected trajectory above. The tracking effect of the output curve on the expected trajectory under different iteration numbers is shown in Figure 8, which shows the tracking effect at iteration 2, iteration 10, and iteration 15.

It can be seen from Figures 5 and 8 that the control algorithm (3) designed in this paper can achieve complete tracking of different expected trajectories, both slowly varying and abruptly changing, in the finite time interval as the number of iterations increases. The newly proposed iterative learning updating law includes feedback gains acting on the current and previous error information. As the number of iterations increases, the system's tracking error in the Lebesgue-p norm tends to zero. The system's output follows the reference within the finite time interval specified for this system, and ultimately perfect tracking of the desired trajectory is achieved. The output of the system is shown for different iterations in Figure 8. When the tracking error converges after 15 or more iterations and tends to zero, the system's output precisely follows the desired trajectory. Accordingly, algorithm (3) is robust in the sense of the Lebesgue-p norm, and its convergence condition is satisfied: when the number of iterations approaches infinity, the tracking error of the system approaches zero. This robust control topology can also be applied to higher-order dynamic systems with small changes in the proportional and derivative learning gains as required by the system. Furthermore, it can also work correctly for motor position control, aircraft altitude and attitude control, angle-of-attack control, soft articulated robot position control, satellite positioning systems, and piezoelectric nanopositioning control systems.

5. Conclusion

This paper has discussed a fast iterative learning control algorithm for a class of regular linear systems with direct input-output transmission terms, analyzed in the sense of the Lebesgue-p norm. The convergence of the algorithm is proved under the Lebesgue-p norm, and sufficient conditions are given for convergence in this norm. The algorithm not only has a higher convergence rate than the traditional P-type algorithm, but also avoids the defect of measuring the tracking error with the λ-norm and increases the degree of freedom in selecting the learning gains. Owing to the convolution structure required by Lemma 1, the algorithm in this paper applies only to regular linear systems. Therefore, in future studies, the convergence of typical nonlinear systems in the Lebesgue-p norm can be further analyzed.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.