VisTune: Auto-Tuner for UAVs Using Vision-Based Localization


Abstract—This letter presents VisTune, a method for automatic controller tuning specifically designed for UAVs using vision-based localization (VBL) for position control. In contrast to existing methods that involve manually flying the UAV to collect data for system identification and tuning, our approach leverages relay-based system identification and tuning, which autonomously generates stable oscillations without the need for a stabilizing controller. The entire process concludes within a few seconds. Prior work in vision-based position control of UAVs often ignores the delay from the perception pipeline, which is significant and results in suboptimal tuning and poor control performance. Our approach accounts for perception delay and addresses practical issues, such as varying delays due to varying computation requirements and inevitable estimation errors, which pose challenges in applying relay-based identification and tuning. Typically, a VBL system introduces over 100 ms of delay, compared to less than 20 ms when a motion capture system is used. Moreover, we show that the perception delay identified by VisTune can be effectively used to temporally advance the feedforward acceleration signal to achieve better tracking performance. Finally, we demonstrate the robustness of the tuned controllers on a trajectory tracking task, reaching speeds of up to 2.1 m/s with an RMS control error of only 0.054 m. Under a wind disturbance of 5 m/s, we report an RMSE of 0.116 m. A video of the experiments is available at https://youtu.be/hJoT8bn0K0o.

Index Terms—Aerial systems: mechanics and control, aerial systems: perception and autonomy, vision-based navigation.

I. INTRODUCTION
Manual tuning of UAV controllers is a tedious task and results in suboptimal control performance, with a great risk of crashing and no stability guarantees [1], [2]. Motivated by the need for systematic methods for system identification and controller tuning, several methods have been proposed in the literature [1], [3], [4], [5], [6], [7], [8], [9]. Most of these methods either require flying the UAV manually to collect the data [6], [8], [10] or performing specific tests [11], which is time consuming and may require piloting skills. In [1], the authors highlighted the importance of controller tuning for high-speed flights and proposed an auto-tuning approach using simulations. However, accurate model parameters are required for simulations, and these are usually not available. Recently, relay-based controllers have been used to collect data for system identification and controller tuning [9], [12]. The advantage of such an approach is process automation, requiring neither prior knowledge about the system nor controller stabilization.
Nevertheless, most of these works have been conducted using a highly accurate external motion capture system, which provides pose updates with negligible delay [1], [9]. Hence, their applicability has not been studied for UAVs using vision-based localization. A typical visual inertial odometry (VIO) system introduces a delay of ∼100 ms, while a motion capture system incurs only ∼20 ms of delay. High-performance control usually requires higher controller gains, which results in reduced robustness; hence, an additional delay of only 30 ms can result in a crash [13]. Most of the existing works on vision-based autonomous UAV navigation rely on primitive methods for system identification and controller tuning [2], [10], [11], [14] and do not account for perception delay, which results in poor control performance. Other works that demonstrate high-performance vision-based control [15], [16] do not mention the tuning method and probably resort to manual tuning.
In this paper, we present an automatic identification and tuning method and demonstrate its practical usefulness for aggressive flights using onboard vision-based localization. We leverage relay-based identification similar to [9], which was extended to deal with noisy vision sensors for visual servoing applications [17]. In contrast to visual servoing applications, where the objective is to track a target object, our approach is specifically designed for position control using vision-based localization.
The key difference between [17] and our work lies in addressing the issues specific to vision-based localization, such as varying delay due to varying computation requirements and inevitable estimation errors with onboard VIO systems. When using VIO, the oscillations generated by the relay-based controller are not consistent due to the aforementioned issues; an instance from our experiments is depicted in Fig. 1. Therefore, existing approaches [9], [17] are not directly applicable. Our method, summarized in Fig. 2, does not require prior information or controller stabilization, and optimally tunes the UAV within a few seconds.

A. Contributions
We propose VisTune, the first fully automatic controller tuning solution tailored specifically for vision-based autonomous UAVs. Our approach addresses the issues of varying delay and estimation errors that impact frequency-based identification methods, and is thereby able to identify the average representative dynamics of the overall system. We provide experimental studies of the effectiveness of our approach and the optimality of the tuned controller on a UAV system built from commercially available components. We present a nonlinear controller formulation that results in the overall system being approximately linear even at higher attitude angles; thus, the tuned controller remains nearly optimal even for aggressive flights. We conduct extensive real-world experiments to demonstrate the robustness and performance of the controller automatically tuned by VisTune. Finally, we also show how the perception delay identified by VisTune can be leveraged in the feedforward term for better tracking of aggressive trajectories.

II. RELATED WORK
With the advent of efficient and fast visual SLAM approaches [18], it became possible to control UAVs using vision-based positioning in unknown environments. Initial work [11] used off-board processing with a monocular camera but suffered from scale ambiguity, which caused platform stability issues. Using off-board processing limits the autonomy and robustness of the platform due to communication delays. Subsequently, autonomous UAVs with onboard processing were developed to address these issues [19]. Later, the use of stereo cameras to overcome the scale ambiguity problem, together with IMU fusion to provide high-update-rate pose estimates, enabled high-speed vision-based flights in GPS-denied environments [15]. The technology was further refined, and a fully onboard perception and control system was implemented on a small 250 g micro aerial vehicle (MAV) performing aggressive maneuvers [16]. In [10], the authors proposed a miniature-sized nano aerial vehicle using vision-based positioning for control, but the processing was done off-board and only near-hover flight experiments were attempted. Later, multiple research works introduced full solutions for autonomous vision-based navigation in unknown environments [20], [21].
Several controllers have been proposed in the literature exhibiting impressive performance and robustness. However, all of these methods require either certain prior knowledge about the platform (such as inertia, mass, and other dynamic parameters) [22] or experimental data to tune the controller parameters [10], [11].
To achieve high performance, controller tuning plays a prominent role [1]. The simplest and most commonly used approach in the community is to tune the controllers manually by trial and error. This method is tedious, time-consuming, and could potentially cause crashes as well. Moreover, although manually tuned controllers may demonstrate stability, it is not possible to determine whether they are optimal. Additionally, since UAVs are complex systems, they often involve tuning a considerable number of parameters, which adds to the complexity of the task.
Multiple methods have been proposed in the literature for automatic controller tuning. The methods from the adaptive control literature [3] use gradient-based optimization with an analytical cost function to optimize a certain performance metric. However, these methods may require significant amounts of data and are susceptible to sensor and actuation noise. Gain scheduling has also been proposed for implementing adaptive P or PI controllers based on different flight conditions [23], but real-time adaptation with several operating points is computationally intensive. Adaptive pole placement [24] is a classical technique to ensure robustness and performance by placing closed-loop poles at desired locations; however, its applicability in practical scenarios demands an accurate system model, which is usually not available. To deal with modelling uncertainties and external disturbances, L1 adaptive control has been proposed [25], providing performance bounds. However, tuning its parameters to achieve high performance is necessary but not easy. Fuzzy logic has also been investigated for self-tuning PID controllers [26]; however, it requires knowledge of the model itself and a tedious trial-and-error process to effectively design the fuzzy rule base. In [27], the authors proposed iterative learning control (ILC) to optimize parameters iteratively, which also requires a large amount of experimental data. Similarly, [28] proposed using a deep neural network to learn inverse dynamics, requiring training data from experimental trials. To avoid collecting large amounts of experimental data, the authors in [1] and [5] used a high-fidelity simulator to tune the controllers. Such simulators can also be used to develop deep learning-based approaches [29] for UAV control, but these incur huge computational costs during training and inference, along with increased memory requirements. Moreover, accurate model parameters are required for simulation, while controller deterioration due to the simulation-to-reality gap is also a known issue. In [4], the authors proposed AL-Tune to automatically tune the altitude controller, where only two parameters are tuned. Extending the same approach to more parameters in practical scenarios would require considerably long flight times.
Alternatively, controller tuning can be done by estimating dynamic parameters with system identification [6], [9], [30]. In particular, frequency-domain methods are attractive since they tend to be less sensitive to high-frequency noise [7]. To the best of our knowledge, none of the controller tuning approaches have been specifically designed for vision-based position control of UAVs, and the applicability and practical viability of existing tuning approaches in this setting have not been studied.

A. Preliminaries
We define a fixed inertial frame F_I with basis {i_x, i_y, i_z}, where i_z is antiparallel to the gravitational force. A pre-superscript denotes the frame in which a quantity is expressed (e.g., ^I p is the position expressed in F_I).

B. Nonlinear UAV Model
The nonlinear UAV model [31] is given as

$$ {}^{I}\dot{p} = {}^{I}v, \qquad {}^{I}\dot{v} = c_T\, b_z - g\, i_z - \tfrac{1}{m} D\, {}^{I}v, \tag{1} $$

$$ J\dot{\omega} = \tau - [\omega]_\times J\omega - B\omega, \tag{2} $$

where $^{I}p$, $^{I}v$, $\omega \in \mathbb{R}^3$ denote the position, velocity, and angular velocity of the UAV, respectively; $g$, $m$ are the gravitational acceleration and the mass of the UAV, respectively; $D, B, J \in \mathbb{R}^{3\times3}$ are diagonal matrices representing translational drag, rotational drag, and moment of inertia, respectively; $[\cdot]_\times$ gives the cross-product matrix of a vector; $b_z = R\,i_z$ is the thrust direction (body z-axis) expressed in F_I, with $R$ the rotation matrix from the body frame to F_I; and $c_T$, $\tau$ are the collective mass-normalized thrust and the angular moments produced by the motors. We use first-order motor dynamics with delay, hence we can write the lumped-parameter equations for $c_T$ and $\tau$ as

$$ T_m \dot{c}_T + c_T = k_T\, u_T(t - \delta_m), \tag{3} $$

$$ T_m \dot{\tau} + \tau = k_\tau\, u_\tau(t - \delta_m), \tag{4} $$

where $u_T$ and $u_\tau \in \mathbb{R}^3$ are the thrust and torque commands, respectively; $T_m$ and $\delta_m$ are the time constant and time delay of the motors, while $k_T$, $k_\tau$ are steady-state gains relating commands and outputs.
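As a sanity check, the lumped model above can be integrated numerically. The following is a minimal sketch using simple Euler integration; the motor delay δ_m is omitted, and all numeric parameter values (mass, drag, inertia, motor constants) are assumptions chosen for illustration only:

```python
import numpy as np

def skew(a):
    """Cross-product matrix [a]_x of a 3-vector."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def uav_step(p, v, R, w, c_T, trq, u_T, u_tau, dt,
             m=1.5, g=9.81,
             D=np.diag([0.3, 0.3, 0.2]),      # translational drag (assumed)
             B=np.diag([0.01, 0.01, 0.02]),   # rotational drag (assumed)
             J=np.diag([0.02, 0.02, 0.04]),   # moment of inertia (assumed)
             T_m=0.05, k_T=1.0, k_tau=1.0):
    """One Euler step of the nonlinear UAV model (1)-(4), delay omitted."""
    # first-order motor dynamics (3)-(4)
    c_T_new = c_T + dt * (k_T * u_T - c_T) / T_m
    trq_new = trq + dt * (k_tau * u_tau - trq) / T_m
    # translational dynamics (1): mass-normalized thrust along body z-axis
    v_dot = c_T * R[:, 2] - np.array([0.0, 0.0, g]) - (D @ v) / m
    # rotational dynamics (2)
    w_dot = np.linalg.solve(J, trq - np.cross(w, J @ w) - B @ w)
    return (p + dt * v, v + dt * v_dot,
            R @ (np.eye(3) + dt * skew(w)),    # first-order rotation update
            w + dt * w_dot, c_T_new, trq_new)
```

At hover (level attitude, zero rates, thrust equal to gravity) the translational state is a fixed point, which provides a quick consistency check of the signs.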

C. Control Design
Frequency-based methods for controller design and tuning require the system model to be linear. However, the UAV system has nonlinear and coupled dynamics, as given in (1) and (2). In [9], [17], linearization is considered at the hover state, thus restricting the applicability of these approaches to near-hover flights only. We take a different approach by designing a nonlinear controller such that the effect of the system nonlinearity is minimized, thereby allowing the use of linear control theory even for aggressive flights with large angles.
We introduce the axis-angle notation for orientation, with $r \in \mathbb{R}^3$ a unit vector representing the axis of rotation and $\theta \in [-\pi, \pi)$ the angle. Hence, according to Rodrigues' rotation formula, we can write

$$ R = I + \sin\theta\, [r]_\times + (1 - \cos\theta)\, [r]_\times^2. \tag{5} $$

For decoupling of the angular dynamics given in (2), we assume $r_z, \omega_z = 0$, which means the UAV does not change its heading. Substituting $r_z = 0$ in (5), we get

$$ b_z = \begin{bmatrix} r_y \sin\theta & -r_x \sin\theta & \cos\theta \end{bmatrix}^T \tag{6} $$

and (1) can be rewritten as

$$ {}^{I}\dot{v} = c_T\, b_z - g\, i_z - \bar{D}\, {}^{I}v =: h(c_T, \theta, r), \tag{7} $$

where $\bar{D}$ is the linearized drag. Using (7), we introduce inversion-based control, which is equivalent to geometric control without yaw. With $u_p$ being the position control commands, we get the desired attitude $\bar\theta \bar r$ and the desired thrust command by

$$ \bar u_T, \bar\theta, \bar r = h^{-1}(u_p). \tag{8} $$

Equation (8) can be solved to get

$$ \bar u_T = \lVert u_p + g\, i_z \rVert, \quad \cos\bar\theta = \frac{u_{p,z} + g}{\bar u_T}, \quad \bar r = \frac{1}{\bar u_T \sin\bar\theta} \begin{bmatrix} -u_{p,y} & u_{p,x} & 0 \end{bmatrix}^T. \tag{9} $$

When $\bar\theta = 0$, we can take $\bar r = 0$ to avoid division by zero in (9). Note that $\bar r_z = 0$. Finally, we present the feedback control to get $u_p$ and $u_\eta$ as follows:

$$ u_p = {}^{I}\bar a + k_C\,({}^{I}\bar p - {}^{I}p) + k_D\,({}^{I}\bar v - {}^{I}v), \tag{10} $$

$$ u_\eta = k_{C\eta}\,(\bar\theta \bar r - \theta r) - k_{D\eta}\, \omega, \tag{11} $$

where $^{I}\bar p$, $^{I}\bar v$, $^{I}\bar a$ are the desired position, velocity, and acceleration from the reference trajectory, $\bar\theta$, $\bar r$ are calculated using (9), and $k_C, k_D \in \mathbb{R}^3$ are tunable controller gains.
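The inversion in (8)–(9) can be sketched as follows, together with the forward thrust map implied by (6)–(7) with drag neglected; the function names are ours:

```python
import numpy as np

def h_inv(u_p, g=9.81):
    """Invert the thrust map (drag neglected): u_p -> (u_T, theta, r), r_z = 0."""
    a = u_p + np.array([0.0, 0.0, g])          # required specific-thrust vector
    u_T = np.linalg.norm(a)
    theta = np.arccos(np.clip(a[2] / u_T, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        r = np.zeros(3)                         # avoid division by zero at hover
    else:
        r = np.array([-a[1], a[0], 0.0]) / (u_T * np.sin(theta))
    return u_T, theta, r

def h(u_T, theta, r, g=9.81):
    """Forward map: acceleration from thrust u_T along b_z of (6), minus gravity."""
    b_z = np.array([r[1] * np.sin(theta), -r[0] * np.sin(theta), np.cos(theta)])
    return u_T * b_z - np.array([0.0, 0.0, g])
```

By construction h(h_inv(u_p)) reproduces u_p, so the outer position loop sees an approximately linear plant, which is what allows the frequency-domain identification that follows.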

D. Modelling for Identification
Since our identification model stems from frequency-based methods, which are only applicable to linear systems, we introduce linear decoupled lumped-parameter models that are used for system identification. Assuming $\omega_z = 0$ and $\eta = \theta r$, we can formulate the attitude dynamics from (2) as

$$ J_y\, \ddot\eta_y = \tau_y - B_y\, \dot\eta_y. \tag{12} $$

Combined with (4), the lumped parameters for the attitude dynamics are then given as the gain $k_{\eta}$ and time constant $T_{\eta}$ of the system, while the total delay is lumped as $\delta_\eta$. Hence, (4) and (12) give third-order attitude dynamics.
For the translational dynamics, considering (1), (6) and (7) with $\sin\theta \approx \theta$, we can write

$$ {}^{I}\ddot{p}_x \approx g\,\eta_y - \bar d_x\, {}^{I}\dot{p}_x. $$

Neglecting the $c_T$ dynamics, which are faster than the $\eta_y$ dynamics, the lumped parameters for the translational dynamics are $k_x \approx T_x = 1/\bar d_x$, where $k_x$ is the gain and $T_x$ is the time constant, while the total delay is lumped as $\delta_x$. Since there is a nonlinear closed-loop control for the angular dynamics (as described in Section III-C), the total dynamics between $u_{p_x}$ and $^{I}p_x$ can be represented by a fifth-order system, with the inner attitude closed loop

$$ T_{\eta_y}(s) = \frac{k_{\eta_y} k_{C\eta_y}}{s(T_{\eta_y} s + 1)(T_m s + 1) + k_{\eta_y}(k_{C\eta_y} + k_{D\eta_y} s)\,e^{-\delta_{\eta_y} s}}. \tag{15} $$

For the altitude dynamics, considering (7) with $\cos\theta \approx 1$, we can write

$$ {}^{I}\ddot{p}_z \approx c_T - g - d_z\, {}^{I}\dot{p}_z. \tag{16} $$

Here, considering $u_{p_z}$ from (10), we can define the lumped gain $k_z$ and time constant $T_z$ as $k_z \approx T_z = 1/d_z$, while the total time delay is lumped as $\delta_z$. Hence, from (16), (3) and (9), the altitude dynamics between $u_{p_z}$ and $^{I}p_z$ can be given as a third-order system.
E. Linearization

Fig. 3 illustrates the block diagram of the full nonlinear UAV control system. Note that there is a closed-loop attitude system between h(·) and h⁻¹(·). Since an onboard IMU with negligible delay is used for attitude control, high-bandwidth control can be achieved; hence, the closed-loop attitude dynamics will be close to unity at low frequencies. Given r̄ ≈ r, the system nonlinearity h(·) is cancelled by the controller nonlinearity h⁻¹(·), resulting in a linear system. Thus, our NDI controller effectively reduces the impact of nonlinearity in the UAV dynamics and preserves the optimality of the controllers tuned by VisTune, even at high angles when flying smooth trajectories.

F. DNN-MRFT
In [9], [17] the authors proposed to use the modified relay feedback test (MRFT) [32] to generate stable oscillations in each control channel to collect data. The MRFT control law can be given as

$$ u(t) = \begin{cases} h & \text{if } \sigma(t) \ge h_1 \ \lor\ \big(\sigma(t) > -h_2 \ \land\ u(t^-) = h\big) \\ -h & \text{if } \sigma(t) \le -h_2 \ \lor\ \big(\sigma(t) < h_1 \ \land\ u(t^-) = -h\big), \end{cases} $$

where $h$ is the relay amplitude, $h_1 = -\beta\sigma_{min}$, and $h_2 = \beta\sigma_{max}$; $\sigma_{min}$, $\sigma_{max}$ are the last minimum and maximum values of $\sigma(t)$, and $\beta$ is a parameter corresponding to the switching phase as $\varphi_d = \arcsin(\beta)$.

Algorithm 1: Disambiguation.

1) DNN Specification: Following [9], the DNN consists of 2 fully connected layers of 3000 and 1000 neurons with ReLU activations. The output layer has one neuron per class and utilizes the modified softmax function introduced in [33]. The input to the network is the sampled u(t) and σ(t) signals from a single MRFT cycle; they are concatenated and zero-padded to obtain a fixed-size input.
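The relay switching rule, using the last extrema of σ(t), can be transcribed as a small stateful update; the function and argument names here are ours:

```python
def mrft_step(sigma, u_prev, sig_min, sig_max, h, beta):
    """One evaluation of the MRFT relay law.

    sigma   : current error signal sigma(t)
    u_prev  : previous relay output u(t-)
    sig_min : last minimum of sigma(t)  ->  h1 = -beta * sig_min
    sig_max : last maximum of sigma(t)  ->  h2 =  beta * sig_max
    """
    h1, h2 = -beta * sig_min, beta * sig_max
    if sigma >= h1 or (sigma > -h2 and u_prev == h):
        return h
    if sigma <= -h2 or (sigma < h1 and u_prev == -h):
        return -h
    return u_prev
```

With β = 0 this reduces to the plain relay feedback test, switching at the zero crossings of σ(t); a nonzero β shifts the excited phase by φ_d = arcsin(β).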

2) Data Generation and Training:
Training data is generated from simulation models of representative systems (classes) that are obtained by sampling the parameter space using the least worst deterioration (LWD) criterion, given by

$$ \Delta(\check{D}, J^*) = \{ (\check{d}_i, \check{d}_j) : J(\check{d}_i, \check{d}_j) < J^* \ \land\ \check{d}_i, \check{d}_j \in \check{D} \}, \tag{19} $$

where $\check{D}$ represents the discretized parameter space and $J(d_i, d_j)$ is a relative sensitivity function, defined as

$$ J(d_i, d_j) = \frac{Q(C(d_j), G(d_i)) - Q(C(d_i), G(d_i))}{Q(C(d_i), G(d_i))}. \tag{20} $$

Given that $C(d_i)$ is the optimal controller for process $G(d_i)$, $J(d_i, d_j)$ is the performance deterioration when the optimal controller of process $G(d_j)$ is applied to process $G(d_i)$, compared with the optimal controller $C(d_i)$. $Q(\cdot)$ gives a performance measure, e.g., the integral square error (ISE). The ADAM optimizer with a cross-entropy loss function is used for the training. More details can be found in [9].
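The screening in (19)–(20) can be sketched as follows, where Q[i, j] holds the performance measure (e.g., ISE) of controller C(d_j) applied to process G(d_i); the function names and the example numbers are ours:

```python
import numpy as np

def relative_sensitivity(Q):
    """J[i, j] from (20): relative deterioration of C(d_j) applied to G(d_i)."""
    Q = np.asarray(Q, dtype=float)
    opt = np.diag(Q)[:, None]          # Q(C(d_i), G(d_i)) sits on the diagonal
    return (Q - opt) / opt

def lwd_pairs(Q, J_star):
    """Index pairs satisfying the LWD criterion (19): J(d_i, d_j) < J*."""
    J = relative_sensitivity(Q)
    n = len(J)
    return [(i, j) for i in range(n) for j in range(n)
            if i != j and J[i, j] < J_star]
```

For example, with Q = [[1.0, 1.1], [2.4, 2.0]], the off-diagonal deteriorations are 10% and 20%, so only the first pair passes a threshold of J* = 0.15.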

G. Disambiguation
Since we use vision-based positioning, variable delays and drifts can be introduced while performing the test. This can affect the quality of identification and tuning, resulting in inconsistent predictions. In practice, DNN-MRFT predicts different classes for different oscillations; in fact, we found that 5 different classes were predicted when DNN-MRFT was used with the real UAV platform described in Section IV-A. The parameters of these classes are given in Table I. Fig. 1 shows the obtained oscillations along with the MRFT control signals for each axis. It is clearly evident that the oscillations vary significantly in their time periods and amplitudes.
Note that we extend the parameter space compared with the original approach in [9], increasing the range of the time delay to τ_x ∈ [0.02, 0.2]. The optimum value of β for the modified parameter space is found by optimization over the parameter space, as explained in [9]. At the resulting switching phase, the discrepancies in oscillation frequencies among the predicted classes are very small, which explains the distinct predictions.

TABLE I DYNAMIC MODEL PARAMETERS
To alleviate the aforementioned issue, we propose changing β and performing an additional test to differentiate between the predicted classes. Algorithm 1 is proposed to find the optimal parameter β_opt for the additional MRFT test. For our UAV system we found β_opt = 0 (i.e., a standard relay feedback test, RFT), which can be easily verified from the phase plot in Fig. 4. To ensure reliability, multiple RFT oscillations are obtained, as shown in Fig. 5, since noise and drift from the vision-based positioning may introduce inconsistent oscillation frequencies. Finally, the frequency spectrum is obtained using the fast Fourier transform, as shown in Fig. 5, which exhibits a clear peak at 0.2 Hz.
From the phase plot of the predicted classes in Fig. 4, we can see that the closest curve to 0.2 Hz at the phase of φ = −180° corresponds to class C1. Alternatively, given the phase crossover frequencies of the n predicted classes {Ω_C1, . . ., Ω_Cn} and the frequency Ω_RFT from the RFT response, we can get the correct class by

$$ C^* = \arg\min_{i \in \{1,\dots,n\}} \left| \Omega_{Ci} - \Omega_{RFT} \right|. $$

From the identified model, we find the optimal control parameters with offline optimization using the constraint on the phase margin φ_m > 20°. However, we are more interested in the delay margin, which can be given by

$$ \Delta_m = \frac{\varphi_m}{\Omega_{gc}}\cdot\frac{\pi}{180^\circ}, $$

where Δ_m is the delay margin and Ω_gc is the gain crossover frequency. For example, for the parameters corresponding to ^I p_x in Table II, we find PM{(k_Cx + k_Dx s) G_x(s)} = 19.9° with Ω_gc = 3.8 rad/s. Hence Δ_m ≈ 0.09 s, which is sufficient considering that the average update rate does not fall below 30 Hz.
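The disambiguation pipeline — dominant frequency of the RFT response, nearest phase-crossover class, and the resulting delay margin — can be sketched as follows (function names are ours):

```python
import numpy as np

def dominant_freq_hz(sigma, fs):
    """Frequency (Hz) of the largest peak in the spectrum of the RFT response."""
    sigma = np.asarray(sigma, dtype=float) - np.mean(sigma)
    spec = np.abs(np.fft.rfft(sigma))
    return np.fft.rfftfreq(len(sigma), d=1.0 / fs)[np.argmax(spec)]

def pick_class(omega_rft, omega_classes):
    """Index of the class whose phase-crossover frequency is closest to Omega_RFT."""
    return int(np.argmin(np.abs(np.asarray(omega_classes) - omega_rft)))

def delay_margin(pm_deg, omega_gc):
    """Delay margin (s) from phase margin (degrees) and gain crossover (rad/s)."""
    return np.deg2rad(pm_deg) / omega_gc
```

For the x-axis values reported above, delay_margin(19.9, 3.8) evaluates to roughly 0.09 s.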
IV. RESULTS

A. Experimental Setup
The experimental setup consists of an off-the-shelf DJI F550 kit fitted with a Navio2 flight controller board and our customized flight control software. For vision-based localization, we use a ZED mini camera along with the ZED SDK, running on a Jetson TX2 board. Although the camera frame rate is set to 100 Hz, we observed that the VIO update frequency drops to roughly 25 Hz when the UAV is moving. Nevertheless, the ZED SDK exhibited the best performance among the open-source and commercial solutions we evaluated in a feature-sparse environment, as shown in the experimental video¹. We used OptiTrack only to obtain ground truth for benchmarking purposes. We used Kalibr [34] to find the spatial transformation from the camera to the onboard IMU, which is placed near the center of mass of the UAV; this calibration helps remove bias from the system. However, the drift in position estimates from VIO is unknown in practical situations, where the ground-truth pose is not available. This causes inconsistent oscillations, as seen in Fig. 1.

B. Optimality Verification
We approach the verification of the VisTune controller's optimality from a practical perspective, by comparing the step response of the optimal controller with that of a suboptimal controller, and by comparing the responses from simulation and experiment. From disambiguation, we found C1 to be the correct prediction, and its corresponding controller, Cont1, to be the optimal controller. From the frequency responses in Fig. 4, C4 has the closest frequency response; hence the optimal controller of C4, Cont2, would be suboptimal for C1. Therefore, we record the simulation step responses of Cont1 and Cont2 applied to the system defined by C1 (the identified class) and compare them with the experimental step responses obtained by applying Cont1 and Cont2 to the actual UAV platform. Fig. 6 shows that the shapes of the responses from simulation and real tests are quite similar. The additional oscillations in the experimental responses can be attributed to disturbances, discretization effects, and nonlinearity. Table III shows a lower integral square error (ISE) for Cont1 compared with Cont2, which signifies that Cont1 is indeed optimal and would result in lower tracking errors in practical scenarios where disturbances are present. In particular, a step signal covers the full frequency spectrum; hence it can be viewed as the most aggressive kind of disturbance. The ISE is given by

$$ \text{ISE} = \int_0^{t_F} e^2(t)\, dt. $$

In practical experiments, discrete-time measurements are available; hence the equivalent formulation with a discrete summation is used, with t_F = 8 seconds.

Fig. 7 shows a marginally stable response when the controller gains are multiplied by the gain margin, GM{(k_Cx + k_Dx s) G_x(s)} ≈ 1.7, using the parameters from Table II. This signifies the practical usefulness of VisTune in ensuring robustness and performance. Note that we only verify the gain margin, since the phase margin would be hard to verify due to varying delays. Moreover, the control design process itself guarantees a certain phase margin, and robustness to varying delays is demonstrated extensively on aggressive trajectory tracking in Section IV-D.
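The discrete-time ISE used for the comparison can be computed as follows (function name is ours):

```python
import numpy as np

def ise(ref, meas, dt, t_f=8.0):
    """Discrete-time integral square error over [0, t_f]."""
    e = np.asarray(ref, dtype=float) - np.asarray(meas, dtype=float)
    n = min(len(e), int(round(t_f / dt)))
    return float(np.sum(e[:n] ** 2) * dt)
```

The sum is truncated at t_F = 8 s (or at the end of the recording, whichever comes first), matching the window used for Table III.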

C. Effect of Temporal Adjustment of Feedforward
The feedforward acceleration term in (10) has been previously used in [16], [21] to add phase lead to the system for better tracking. However, since there is a significant delay in the position measurement due to the use of VIO, whereas the angle control loop uses an onboard IMU with negligible delay, the acceleration feedforward needs to be temporally adjusted. Thanks to VisTune, in addition to the optimal controller gains, we also obtain the lumped delay estimates. It can be shown that the acceleration feedforward signal should be advanced in time as ^I ā(t + δ_a) in (10), where δ_a = δ_x − δ_η.
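For a pre-planned trajectory, the temporal advance amounts to an index shift of the sampled acceleration reference; a minimal sketch (names are ours, and the tail is held at the last value as a simple boundary choice):

```python
import numpy as np

def advance_feedforward(acc_ref, delta_a, dt):
    """Return acc_ref advanced by delta_a = delta_x - delta_eta seconds, so the
    value commanded at step k is the reference at time k*dt + delta_a."""
    acc_ref = np.asarray(acc_ref, dtype=float)
    n = int(round(delta_a / dt))
    if n <= 0:
        return acc_ref.copy()
    # shift left by n samples, holding the final value at the tail
    return np.concatenate([acc_ref[n:], np.full(n, acc_ref[-1])])
```

With δ_a on the order of the VIO delay (∼0.1 s) and a 100 Hz control loop, this corresponds to a shift of roughly ten samples.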
We conducted experiments both with and without the temporal adjustment. Fig. 8(a) shows that the temporal adjustment reduces the overshoot and results in better angle reference tracking. The experiments were repeated six times for each scenario to obtain the box plot shown in Fig. 9(a). It can be seen that y-axis errors improve considerably more than x-axis errors; this is because the trajectory in the y-axis is more aggressive than in the x-axis. It is important to note that the temporal adjustment may not be as effective when there is significant external disturbance or when the estimation error from VIO is significantly high.

D. Performance and Robustness: Trajectory Tracking
For the performance and robustness evaluation in more practical scenarios, we tested VisTune on aggressive trajectory tracking. Minimum snap trajectory generation [35] is used to generate smooth trajectories. In total, 3 trajectories are flown: Circle, Lemniscate, and Spiral. Each scenario is repeated 4 times to obtain the statistical measures reported in Table IV, where RMSE is the root-mean-square error for each of the x, y, z axes and E_a is the absolute trajectory error as defined in [31]. Moreover, the control error is the difference between the reference and the VIO estimates, while the estimation error is the difference between the ground truth from the OptiTrack motion capture system and the VIO estimates. The standard deviation (STD) of the control errors is much lower, which shows the consistency of the control performance, although the STD of the estimation errors is quite high. It can be seen that E_a is more correlated with the average velocity than with the maximum velocity. This might be because of the significantly high estimation errors, as seen in Table IV. Fig. 10 shows temporal tracking and control/estimation errors over time, along with 2D and 3D plots of the spatial tracking. We found that estimation errors correlate with velocity; however, control errors are generally higher at sharp turns, as expected.
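The control and estimation errors reported in Table IV are per-axis RMS differences between position traces; a sketch of the computation (names are ours):

```python
import numpy as np

def rmse_per_axis(a, b):
    """Per-axis RMSE between two (N, 3) position traces."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return np.sqrt(np.mean(d ** 2, axis=0))
```

Here the control error would be rmse_per_axis(reference, vio) and the estimation error rmse_per_axis(optitrack, vio), with the traces time-aligned beforehand.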
It is worth noting that we use PD controllers; hence the altitude control system is Type I and results in a steady-state error for a ramp input, as seen for the spiral trajectory in Fig. 10(e). In fact, we can calculate the steady-state error from the identified model parameters as E_ss = 1/(k_Cz k_z) = 0.048 m. Since the only difference between the spiral and circle trajectories is that the z-axis reference is a ramp instead of a constant, comparing their z-axis errors gives a difference of 5.7 cm, which is close to the calculated value. Note that the estimation error is considerably higher for the spiral trajectory.
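The steady-state figure follows from the final value theorem applied to the Type I altitude loop; a short derivation, with symbols as in Section III and the loop transfer function written under our lumping assumptions for a ramp of slope $v$:

```latex
e_{ss} = \lim_{s\to 0} s\,\frac{R(s)}{1 + L_z(s)}, \qquad
R(s) = \frac{v}{s^{2}}, \qquad
L_z(s) = \frac{(k_{Cz} + k_{Dz}\,s)\,k_z\,e^{-\delta_z s}}{s\,(T_z s + 1)(T_m s + 1)},
\]
\[
e_{ss} = \frac{v}{\lim_{s\to 0} s\,L_z(s)} = \frac{v}{k_{Cz}\,k_z}.
```

The delay and the lag terms drop out in the limit, so only the DC gain of the loop matters; the reported E_ss = 1/(k_Cz k_z) corresponds to a unit-slope ramp.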

E. Benchmark Comparison
Most vision-based UAV systems in the literature attempted near-hover flight [2], [10], [11], [14], [21]; only [15] and [16] attempted aggressive flights, but they did not specify a controller tuning approach (probably manual tuning). Since the overall systems differ, a direct comparison may not be fair. They also did not report quantitative control errors, but a qualitative comparison of their trajectory tracking figures suggests that our results are on par with, if not exceeding, the state of the art [15], [16]. Fig. 8(b) shows that the system in [16] suffers undershoot, while our approach successfully mitigates the problem of over/undershoot at aggressive turning points, as seen in Fig. 8(a).

F. Tracking Under Wind Disturbance
We also test the performance of the controller under wind disturbances. For this purpose, we consider two scenarios: 3 m/s and 5 m/s directional wind disturbances (the wind is measured at the center of the trajectory, not at the source). Fig. 10(j) and (k) show that the trajectory is smoother in the 3 m/s case than in the 5 m/s case. Nevertheless, the controllers are quite robust in keeping the control error low, even though the trajectory velocity reaches up to 2 m/s. The quantitative error results are given in Table V.

V. LIMITATIONS AND FUTURE WORK
Although VisTune can be used with different types of multirotor UAVs, the tuning process may not work as efficiently in the presence of excessive wind disturbance, and addressing this issue within the current formulation is not trivial.

VI. CONCLUSION
In this letter, we present VisTune. The results show the effectiveness of VisTune as a practical tool for automatic controller tuning for autonomous UAVs that use VIO for position control. Although verifying optimality from an experimental setup is not trivial, we address this by comparing the step responses of simulations (using the identified model) and experiments. Moreover, we test whether the experimental system exhibits a marginally stable response when the controller gains are multiplied by the theoretical gain margin of the system. Finally, we test practical trajectory tracking scenarios, with and without external wind disturbance, to demonstrate the robustness and performance of our approach.

Fig. 1. (a) Example of inconsistent MRFT oscillations for the x, y, z axes from real tests. (b) Variable delays and estimation errors are evident in the VIO oscillations compared to OptiTrack.

Fig. 2. VisTune overview: the MRFT controller excites stable oscillations; the control command and response are recorded and fed to the respective DNNs. Model and control parameters from the inner DNN are used to train the outer DNN with simulated data. A disambiguation step is introduced to deal with multiple predictions (candidate classes) due to variable delays and estimation errors from VIO. The identified delay is used to temporally adjust the feedforward term in the controller, which is important when the delay is significant, as in the case of VIO.

Fig. 5. RFT response recorded from a real UAV flight and its frequency spectrum using the fast Fourier transform (FFT).
The optimum value of β for the modified parameter space was found to be β* = −0.72, which corresponds to a switching phase of φ_d = arcsin(β*) ≈ −46°. This gives the total phase of the system at which the oscillations are excited, φ = −180° + φ_d ≈ −226°. In Fig. 4 we show the phase plots for all the predicted classes {C1, . . ., C5}. It can be clearly seen that they are very close to each other at the phase where MRFT oscillations are generated, marked by the dashed line.

Fig. 6. Step responses using the optimal controllers of classes C1 and C4 in simulation with the dynamic model of C1 (identified using VisTune), compared with experimental step responses.

Fig. 8. (a) Tracking results for temporal adjustment of the acceleration feedforward in the y-axis. (b) Tracking results from [16] showing undershoot at a turning point.

TABLE II: IDENTIFICATION RESULTS. k_C AND k_D ARE CONTROLLER GAINS AS DESCRIBED IN SECTION III-C; k, T_m, T, δ ARE LUMPED PARAMETERS REPRESENTING GAINS, TIME CONSTANTS, AND DELAYS IN EACH SYSTEM

TABLE III: ISE COMPARISON OF THE IDENTIFIED CONTROLLER (CONT1) AND THE NEAREST SUBOPTIMAL CONTROLLER (CONT2)

Fig. 7. Marginally stable behavior is observed when the PD controller gains are scaled by the analytical gain margin.

TABLE IV: TRAJECTORY TRACKING STATISTICAL RESULTS. EACH SCENARIO IS REPEATED 4 TIMES TO OBTAIN MEAN AND STANDARD DEVIATION (STD)

TABLE V: QUANTITATIVE RESULTS FOR LEMNISCATE TRAJECTORY TRACKING IN THE PRESENCE OF DIRECTIONAL WIND

Using VisTune as an identification tool for training and deploying high-level reinforcement learning (RL)-based controllers, or for dynamics-based path planning algorithms, would be an interesting area for future research.