Delay-incorporating observability and predictability analysis of safety-critical continuous-time systems

Abstract: The authors suggest a framework for human–automation interaction in safety-critical continuous-time systems under shared control and consider continuous-time linear time-invariant (LTI) dynamics to formalise the physical models mathematically. Their goal is to determine whether or not a given user-interface provides the information required for a certain task, under the assumption that the user does not have access to any information beyond what is provided in the display. They identify observability-based conditions under which a user-interface provides the user with the information necessary to accomplish a given task, formulated as a subset of the state space. They therefore formulate the novel delay-incorporating user-observable subspace and the delay-incorporating user-predictable subspace and compare them with the space spanned by the combinations of states which define the task. They model the user as a special type of observer, with capabilities corresponding to different levels of knowledge about the current user input and its derivatives. In addition, they consider state reconstruction and prediction to incorporate a processing delay.


Introduction
As interfaces and their underlying systems become more complex, information beyond what is contained in the interface may not be accessible. We aim to identify tools that assess the correctness of a user-interface for a given task, an especially relevant problem in systems for which intuition and simulation may not be enough to assure that an interface is effective. While in many systems, such as a human-driven car, the user has access to information beyond what is contained in the interface, we focus here solely on information contained in the interface, as for a remotely controlled car or a pilot performing a task at high altitude.
Human-factors researchers have frequently pointed out the importance of incorporating human factors into traditional control theory, which can result in better human-automation interaction [1,2]. Many researchers have suggested frameworks for human-automation interaction in order to achieve more effective interaction between the elements of the system [1-3]. Attaining situation awareness (SA) is a human-factors concept which is known to be necessary for the user to interact correctly with the automation [4,5]. Attaining SA involves three levels of information processing: (i) perception of the information, (ii) comprehension of the information and (iii) projection or prediction of the information [4,6]. A lack of SA can lead to bad decisions and hence to faulty control actions by the operator. As we have already assumed that the display is the only source of information for the user, it has to be designed to allow the user to perceive, comprehend and predict the desired information to attain correct SA. Hence, the display design procedure requires a careful selection and clear presentation of the information for the user to perform the desired tasks correctly.
Some examples which are available in the literature about incorporating human-factors into traditional control theory are (i) modelling the human operator as a controller for systems without [7][8][9][10] or with uncertainties [11,12], (ii) modelling the user as an estimator and Kalman filter [13,14] and (iii) modelling the user as an observer-based fault detector with the focus on modelling the decision making process [15,16]. Rather than modelling the user as a full-order Kalman-filter/observer, in [17] we modelled the user as a customised functional observer which could be personalised based on the capabilities of the operator.
In [18], we suggested a method to analyse user-interfaces of a particular type of system under shared control. We obtained subspaces based on the observability and predictability requirements of the user and the user's limitations regarding input signals. In [18], in order to model shared-control systems, we only took into account the low-level control inputs from the human and the automation, that is, the input by human or automation which directly affects the control surfaces. In addition, in order to evaluate the states which are predictable by the user, we simply evaluated the observability of the states and their first derivatives.
The results of [18] are acceptable for systems in which a small delay in the comprehension and prediction of the task vector does not affect the safety of the system. However, for safety-critical systems which have to follow a precise trajectory of the functionals of the states, those results are not precise enough. It is up to the designer which framework suits a specific system better.
Several researchers have worked on the design of full-state and functional observers for systems with unknown or uncertain input for fault detection [19][20][21][22][23], and on creation of a rank test for unknown-input observability [24,25]. Other approaches on unknown-input observable systems focus on estimation of the input as well as the state [26][27][28][29]. In this work, we draw mainly on control concepts by Kalman [30] and Luenberger [31,32] on observability and observers for single and multi-variable systems, work by Basile and Marro [33] in unknown-input observability, and the techniques we developed in [18,34].
As our main contribution in this paper, we synthesise the limitations of the human regarding information about the input signal into the pure control concepts of observability and predictability and arrive at the delay-incorporating user-observable and delay-incorporating user-predictable notions. Our focus is on linear time-invariant systems and our goal is to determine whether or not a given user-interface provides the information required for a certain task. The results of this paper partially incorporate and partially modify our previous results in [18]. This work specifically targets cases where the human is part of the system and where safety and precision in task accomplishment are of extreme importance. In this paper, we take into account the processing delay of estimation and prediction, which we simply ignored in the previous work. In addition, in [18] we assumed that a subsequent value of a functional of the states is known if the current functional and its first derivative are known. Here, we relax this assumption and consider the actual evolution of the states of the system, which depends on the current functional, the inputs and the input derivatives. By relaxing these assumptions, we are able to modify our tool so that it can be used for the analysis of displayed information in safety-critical systems. In addition, in this work, we suggest a framework for systems under shared control and consider the system to be affected by a low-level input as well as a reference trajectory. Overall, we can briefly summarise our contributions as (i) incorporating the information-processing delay, (ii) considering a more realistic pattern of state prediction and (iii) considering a more practical model of the physical system. The results of this paper are applicable to safety-critical systems, such as surgical robots under shared control and fighter aircraft.
We, however, base all our analysis on the assumption that these systems can be represented by deterministic LTI dynamics with no noise or uncertainty. Further research is required to resolve this limitation of our proposed method.
In Section 2, we introduce a framework for systems under shared control and present the proposed mathematical framework. We determine formulas for the delay-incorporating user-observable subspace, and the delay-incorporating user-predictable subspace of shared control systems in Section 3. An example on a remotely driven car is provided in Section 4.

Common notations
In the following, ⊕ indicates the summation of subspaces, N (·) is the null space and R(·) is the range space.

Problem statement
While performing a desired task, a human controller might make mistakes because of different factors, including a lack of understanding of the system's dynamics and nondeterministic behaviour of the automation. These problems can be addressed by providing the user with sufficient training and by designing a deterministic plant, respectively. In addition, a lack of correct displayed information is a safety threat to the system. Our goal here is to address this safety aspect by analysing displays and rejecting those which do not let users attain the required understanding of the desired states of the system. Clearly, if a user does not have access to the necessary information about the desired states of the system, making correct decisions is not possible, and hence their actions may result in unwanted or even hazardous outcomes. In this work, in order to analyse the correctness of the displayed information in safety-critical systems, we derive an inclusion which has to be satisfied; otherwise the interface is rejected.
The importance of having a complete and correct understanding of the situation, that is, the desired combination of the states of the system, has been pointed out by various human-factors researchers [4, 35-37]. The process of information acquisition and analysis was formally named the SA process by Endsley [4] and is considered the foundation of correct decision making and action. SA, which is defined as 'the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning and the projection of their status in the near future' [4], incorporates three levels: perception of information, comprehension of information and projection (prediction) of information.
In this work, we assume that the information processing only occurs in the working memory -that is, we simply ignore the effect of pattern matching which can help the user with state estimation and prediction without loading the working memory. In addition, we assume that the only role of the mental model is to provide the user with adequate understanding about the system's dynamics.
On the basis of the above discussion, we assume that the comprehension of the non-measured part of the task functional (i.e. the portion of the task which is not measured in the display) and also the prediction of the task functional have to take place within the working memory, which is associated with a certain amount of processing delay [38]. We limit our work to systems with only visual cues available in their displays. From [39], the perception delay of visual cues can be seen to be smaller than the processing delay. Hence, by focusing on the processing delay and assuming the perception delay to be negligible, we build our technique on the following assumption.
Assumption 1: Comprehension of the directly measured combinations of the states of the system does not incorporate delay. However, prediction of all combinations of the states, and also comprehension of the non-measured functionals of the states, are associated with a delay τ1.

Framework for human-automation interaction
For analysing human-automation interaction, several researchers have developed various frameworks [1-3]. In [40], we suggested a framework for shared-control continuous-time systems in which the user interacts with an automated system by providing a low-level control input as well as a reference trajectory. For this work, we consider a model similar to that in [40] with a slightly different representation to make it better suited to our current case. The model is provided in Fig. 1 and shows a human who interacts with an automated plant through a user-interface. Deriving the model for the specific cases of a human-driven system and a fully automated system from Model 1 is straightforward.
In Model 1, we consider the user to be a function that maps display information to the user input. This mapping involves different stages of processing the information which we have earlier discussed. In our framework, we similarly consider the user to first obtain situation awareness, then decide on the required action, and finally act on the system. As was pointed out earlier, our focus is on the stage of attaining situation awareness and our purpose is to evaluate the understanding of the user from the available information in the display. This helps us to understand whether or not the user has access to the necessary information.
We also assume a trained user who is fully experienced with the given dynamics of the automation. It is worth mentioning that the amount of training and experience needed to attain the required level of proficiency depends on the complexity of the automation which the user is controlling or monitoring [41] (e.g. the difference between the training and experience required for a driver and for a pilot). In this paper, a trained and experienced user is one who can make use of the input and output signals and some of their derivatives to reconstruct the current and upcoming values of a desired functional.
The automation shown in Fig. 1 consists of a plant, actuators and a computer. Although in general the role of the computer is to determine the required movement of the control surfaces corresponding to the desired reference trajectory, in this model we partially combine the computer with the plant. Hence, we simplify this role to generating the final reference trajectory and model our plant as a set of differential equations that has both the low-level input and the reference trajectory as inputs:

ẋ(t) = Ax(t) + Bu(t) + Fr,    (1)

where x(t) ∈ R^n is the state vector, u(t) ∈ R^(m_h) is the low-level input from the control effort of the human, r ∈ R^(m_r) is the time-invariant reference trajectory and the matrices A, B and F have compatible dimensions. We also assume that no poles of the system have a zero real part.
Assumption 2: Matrix A has no eigenvalues on the imaginary axis.
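As a concrete illustration, the shared-control dynamics (1) can be simulated numerically. The sketch below is a minimal example; the matrices and the input signal are illustrative stand-ins chosen by us, not values taken from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch of the shared-control plant x'(t) = A x(t) + B u(t) + F r in (1).
# All matrices and the input signal below are illustrative stand-ins.
A = np.array([[0.0, 1.0], [-6.0, -5.0]])   # poles at -2 and -3, so Assumption 2 holds
B = np.array([[0.0], [1.0]])               # channel of the human's low-level input u
F = np.array([[0.0], [6.0]])               # channel of the constant reference r
r = np.array([1.0])

def u(t):
    # hypothetical low-level human control effort
    return np.array([0.1 * np.sin(t)])

def dynamics(t, x):
    return A @ x + B @ u(t) + F @ r

sol = solve_ivp(dynamics, (0.0, 5.0), [0.0, 0.0])
```

The simulation plays no role in the subspace analysis itself; it only makes the roles of u and r in (1) concrete.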
The actuators shown in Fig. 1 are devices which translate the low-level input into the movement of the control surfaces. As a result of possible saturation or rate limiting, the actuator behaviour is not necessarily deterministic. However, we consider operating conditions in which the actuators behave deterministically. Our focus here is not on the mathematical model of the actuator; since we assume a deterministic actuator, in order to analyse the displayed information it is enough for us to know how much knowledge the users have about their own input to the system.

Assumption 3:
The user knows up to λ derivatives of the user's low-level input to the system. For users who do not know their own input to the system, λ = −1.
One of the main elements of the system represented in Fig. 1 is the user-interface, an instrument through which the user interacts with the automation. This device consists of a display and controls. The display provides the user with abstract information about the states and inputs of the automation; the user, in turn, acts on the system through the controls.
In the real world, displays usually provide the user with information through multiple channels, that is, there are multiple sources of information in any realistic system which interact with each other. For instance, when the user is provided with too much information, multiple sensory channels can be used to reduce the workload and increase the user's capability of processing the information [42]. In the suggested framework, we, however, consider a uni-modal display and assume that all information in that display is of equal weight. We formulate the display as

y(t) = [y_x(t); y_r],    (2)

where y_x = Cx and y_r = Dr with C ∈ R^(p×n_aug) and D ∈ {0, 1}. Our focus in this paper is on systems in which the user does not have access to any information beyond what is provided in the display. For instance, for a remotely driven car, the user does not have any access to information from the environment. In addition, the information that a pilot gets from the environment at higher altitudes may not be what he actually relies on to accomplish what he wants. In most systems, however, part of the information is provided through the environment. Consider a driver who has access to position information through the windshield. Having direct measurements from the environment reduces his need for a GPS, which provides information similar to what is already available to him. A careful combination of information from the display and the environment, together with a powerful presentation of the information, helps the user to perceive information much better, for example, head-up displays.
We also impose the following assumption about the user.

Assumption 4:
The user knows up to γ derivatives of the measured output.

Task formulation
Another important notion that should be specified in order to reliably evaluate the interaction between the user and the automation is the task. Degani and Heymann [43] introduced a schematic diagram to define the desired relationship between the elements of the system, which are the user model, the task and the user-interface, during human-automation interaction. Using their suggested model, they could describe the interrelation between these elements of the system [43]. When the user's capabilities, the information from the display and the task requirements are aligned, correct interaction between the elements of the system is possible. As an extension to their model, we suggest Fig. 2, which presents the necessary relationship between the elements of the system. From Fig. 2, the human-automation interaction is not correct for all tasks unless the task requirements are entirely included in the intersection of the user model and the user-interface, that is, to have correct human-automation interaction, it is necessary for the user-interface to provide the user with information related to any feasible task.
In our previous work [18], we defined the task as 'conditions that must always be met or must be eventually met'. Hence, we formulated the task as a function

f : R^n → R^l,  f(x) = Tx,    (3)

together with a target set F, where the task matrix T ∈ R^(l×n) comprises l linear combinations of the state. For 'always F', the state trajectory must lie in F for all time in order for the task to be successfully completed. For 'eventually F', when the state enters the set F at some finite time, the task has been successfully completed. Denote the 'task space' T by the row space of T, that is, T = R(T^T). We impose some assumptions on the task. Note that for fully automated systems, by task accomplishment we essentially mean monitoring the accomplishment of a desired task.

Assumption 5:
To assure the feasibility of the task, let T ⊆ C, the controllable subspace of (1).
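Assumption 5 can be checked numerically with a standard rank test on the Kalman controllability matrix. The sketch below is ours, with illustrative stand-in matrices; `B_aug` stacks the low-level input and reference channels of (1):

```python
import numpy as np

def controllability_matrix(A, B_aug):
    # Kalman controllability matrix [B_aug, A B_aug, ..., A^{n-1} B_aug]
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B_aug for k in range(n)])

def task_is_feasible(T, A, B_aug, tol=1e-8):
    # T = R(T^T) is contained in the controllable subspace iff adjoining the
    # columns of T^T to the controllability matrix does not raise its rank.
    K = controllability_matrix(A, B_aug)
    return (np.linalg.matrix_rank(np.hstack([K, T.T]), tol=tol)
            == np.linalg.matrix_rank(K, tol=tol))

# Illustrative check (matrices are stand-ins, not the paper's):
A = np.array([[0.0, 1.0], [-6.0, -5.0]])
B_aug = np.array([[0.0, 0.0], [1.0, 6.0]])   # [B  F]
T = np.array([[1.0, 0.0], [0.0, 1.0]])       # task uses position and velocity
feasible = task_is_feasible(T, A, B_aug)
```

The same rank test reappears later when comparing the task space with the user-predictable subspace.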
Having developed a framework for shared-control systems and having formulated the task physically and mathematically, we are ready to come up with a mathematical criterion for evaluating the information content of the display for the system presented in Fig. 1 to meet the requirements of a task formulated in (3). Hence, we should evaluate whether the user has access to the information required to attain situation awareness regarding the task. We therefore impose the following assumption.

Assumption 6: In order to accomplish or monitor a desired task, all rows of Tx should be mathematically observable and predictable by the user.
Oftentimes, it might be acceptable for the user to reconstruct and predict the task functional with a small amount of processing delay, for example, to maintain the velocity of a car in a desired bound. However, for safety-critical systems, it is necessary that the user has precise information about the current and the upcoming task functionals. Therefore even very small processing delays may be hazardous.
Under Assumption 6, the user-interface of a safety-critical system under shared control must provide the user with information that renders the task delay-incorporating user-observable and delay-incorporating user-predictable, which we define below.
Definition 1: The delay-incorporating user-observable subspace, O*_H, is the space spanned by the combinations of the current states, x(t), which are known to the user at the current time, t.

Definition 2: The delay-incorporating user-predictable subspace, P*_H, is the space spanned by the combinations of the upcoming states, x(t + τ), which are known to the user at the current time, t.
To make this clearer, consider a user who aims to reconstruct a combination of the states x(t) given u(t) and y(t). Although this reconstruction might be feasible for the user, because of the processing delay the desired combination of x(t) only becomes available to the user at time t + τ1. For a safety-critical system, this late understanding about the states of the system can be unacceptable. Hence, in order to determine a necessary condition for having a 'good' user-interface in an LTI system under shared control, we pose the following problem.

Problem 1: Find the delay-incorporating user-observable and delay-incorporating user-predictable subspaces of the mentioned system.

Subproblem 1: Find the combinations of states x(t) and x(t + τ) which can be known to the user at time t, that is, the combinations of the states x(t) and x(t + τ) which can be reconstructed from u(t − τ1), y(t − τ1) and their available derivatives.
We obtain O*_H and P*_H using projection matrices to remove the unknown automation input and the unknown human-input derivatives from the derivatives of the state and output equations.

Methodology
Under Definitions 1 and 2, for system (1), we formulate the delay-incorporating user-observable subspace and the delay-incorporating user-predictable subspace. From Assumption 1, we can write

O*_H = O_H,y ⊕ O_H,τ1,    (4)

where O*_H is the delay-incorporating user-observable space, O_H,y = R(C^T) is obtained from the directly measured combinations of the states, y_x(t) = Cx(t), and O_H,τ1 is the delayed observable space, that is, the space spanned by the non-measured functionals of the states which can be reconstructed given a delay τ1 ≥ 0.

Delay-incorporating user-observable subspace
In this section, we determine the combinations of the states at time t, that is, x(t), which can be reconstructed by time t. Mathematically, this means obtaining which combinations of x(t) can be reconstructed given y(t − τ1), u(t − τ1) and some of their derivatives.
Consider the output equation and its ith derivative, for i ∈ {1, …, γ}, at time t − τ1:

y_x(t − τ1) = Cx(t − τ1),
y_x^(i)(t − τ1) = CA^i x(t − τ1) + Σ_(k=0)^(i−1) CA^(i−1−k) B u^(k)(t − τ1) + CA^(i−1) F r.    (5)

Putting all the derivatives of the output equation together in matrix form, we attain

Y_0:γ(t − τ1) = O x(t − τ1) + H_x U_0:γ−1(t − τ1) + H_r r.    (6)

In (6), O is the observability matrix and H_x is the Toeplitz matrix obtained from (5). In addition, the coefficient matrix of the reference trajectory has the form

H_r = [0^T, (CF)^T, (CAF)^T, …, (CA^(γ−1)F)^T]^T.    (7)

In (6), the delayed states of the system, x(t − τ1), are formulated as a function of the input, the output, their derivatives up to the γth derivative and the reference trajectory at time t − τ1.
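The construction of O, H_x and H_r for a constant reference can be sketched numerically as follows. The block layout is our reading of (5) and (6), and the matrices in the example call are illustrative stand-ins:

```python
import numpy as np

def stacked_output_maps(A, B, F, C, gamma):
    # Build O, Hx, Hr so that, for a constant reference r,
    #   Y_{0:gamma} = O x + Hx U_{0:gamma-1} + Hr r,
    # where Y stacks y and its first gamma derivatives and U stacks u and
    # its derivatives (a sketch; block layout assumed from the text).
    p = C.shape[0]
    m = B.shape[1]
    O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(gamma + 1)])
    # Block-Toeplitz input map: block (i, k) = C A^{i-1-k} B for k < i, else 0.
    Hx = np.zeros(((gamma + 1) * p, max(gamma, 1) * m))
    for i in range(1, gamma + 1):
        for k in range(i):
            Hx[i * p:(i + 1) * p, k * m:(k + 1) * m] = \
                C @ np.linalg.matrix_power(A, i - 1 - k) @ B
    # Reference map: zero block for i = 0, then C A^{i-1} F for i >= 1.
    Hr = np.vstack([np.zeros((p, F.shape[1]))] +
                   [C @ np.linalg.matrix_power(A, i - 1) @ F
                    for i in range(1, gamma + 1)])
    return O, Hx, Hr

# Illustrative call (matrices are stand-ins):
A = np.array([[0.0, 1.0], [-6.0, -5.0]])
B = np.array([[0.0], [1.0]])
F = np.array([[0.0], [6.0]])
C = np.array([[0.0, 1.0]])   # e.g. a speedometer
O, Hx, Hr = stacked_output_maps(A, B, F, C, gamma=1)
```

With γ = 1 the stacked map simply pairs y with ẏ = CAx + CBu + CFr, which matches the Toeplitz and reference blocks produced above.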
From Assumption 3, only λ derivatives of the input are known to the user. Hence, it is not possible to directly obtain x(t − τ1) from (6). We therefore define P_0 = I and select the projection matrices P_i,r and P_i,j as follows, so that pre-multiplying the ith derivative of the output equation by the product P_i ⋯ P_0 removes the unknown values from that derivative:

• The matrix P_i,r is the projection matrix onto the left null-space of the coefficient of the reference trajectory in the ith derivative of the output.
• For j ≤ λ, P_i,j = I; for λ + 1 ≤ j < i, P_i,j is the projection onto the left null-space of the coefficient of the unknown input derivative u^(j)(t − τ1).

Collecting these projections into the matrix M of (10) and pre-multiplying (6) by M eliminates the unknown values of the input derivatives and the reference trajectory. Hence

M Y_0:γ(t − τ1) = M O x(t − τ1),    (11)

in which the only unknown value is x(t − τ1).
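The projection matrices used throughout this section are orthogonal projectors onto left null-spaces, which are straightforward to compute via the SVD. A minimal sketch:

```python
import numpy as np

def left_nullspace_projector(M, tol=1e-10):
    # Orthogonal projector P with P @ M = 0 and P v = v for every v
    # orthogonal to range(M). Pre-multiplying an equation by P removes
    # every term whose coefficient matrix is M, which is the role the
    # P_{i,j} and P_{i,r} matrices play in the text.
    if M.size == 0:
        return np.eye(M.shape[0])
    U, s, _ = np.linalg.svd(M, full_matrices=True)
    rank = int(np.sum(s > tol * s[0])) if s.size else 0
    U2 = U[:, rank:]          # orthonormal basis of the left null-space
    return U2 @ U2.T

P = left_nullspace_projector(np.array([[1.0], [0.0]]))
```

For the example column [1, 0]^T, the projector keeps only the second coordinate, exactly the part of the equation free of the unknown term.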

Theorem 1: In a system of form (1) and under Assumptions 3 and 4, the delay-incorporating user-observable subspace is of the form

O*_H = R((P_τ1 M O e^(−Aτ1))^T) ⊕ R(C^T),    (12)

where M is from (10) and P_τ1 is from (18).
Proof: Since in general the delay is small, we consider τ1^2 to be negligible, hence

e^(Aτ1) ≈ I + Aτ1.    (13)

Note that, to make the results more precise, it is straightforward to model the current state as a longer series of previous states and modify the rest of the results as needed.

Under (13), the states of the continuous-time system (1) evolve as

x(t) = e^(Aτ1) x(t − τ1) + ∫_(t−τ1)^t e^(A(t−σ)) (Bu(σ) + Fr) dσ.    (14)

By introducing the variables θ0, θ1 and θ2 of (16) for the coefficient matrices which, under the approximation (13), multiply u(t − τ1), u̇(t − τ1) and r in this evolution, from (13)-(16) we can write

x(t) = e^(Aτ1) x(t − τ1) + θ0 u(t − τ1) + θ1 u̇(t − τ1) + θ2 r.    (17)

We can combine (11) and (17) to formulate x(t) as a function of the input, output and their derivatives at time t − τ1. As the new unknowns u(t − τ1), u̇(t − τ1) and r may arise, we introduce

P_τ1 = P_τ1,r P_τ1,1 P_τ1,0,    (18)

where P_τ1,0, P_τ1,1 and P_τ1,r are defined as follows:

• The matrix P_τ1,r is a projection onto the left null-space of the coefficient of r, M O e^(−Aτ1) θ2.
• For λ ≥ 1, P_τ1,0 = P_τ1,1 = I; otherwise, P_τ1,0 and P_τ1,1 are projections onto the left null-spaces of the coefficients of u(t − τ1) and u̇(t − τ1), respectively.

Pre-multiplying by P_τ1 removes the remaining unknowns, so the functionals of x(t) which span the row space of P_τ1 M O e^(−Aτ1) can be reconstructed by the user by time t; together with the directly measured part R(C^T), this gives (12). □
Procedure: The following steps are required to calculate the delay-incorporating user-observable subspace, O*_H:

• Determine the matrices O, H_x and H_r and calculate the values of θ0-θ2 from (16).
• Form M from (10) and P_τ1 from (18).
• Compute O*_H from (12).
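A simplified numerical sketch of this procedure is given below. It is our reading of the text, not a verified implementation: all input derivatives above order λ and the reference are projected out in a single step, and the additional unknowns handled by P_τ1 in the paper are ignored, so the result over-approximates O*_H:

```python
import numpy as np
from scipy.linalg import expm

def left_nullspace_projector(M, tol=1e-10):
    # Orthogonal projector P with P @ M = 0.
    if M.size == 0:
        return np.eye(M.shape[0])
    U, s, _ = np.linalg.svd(M, full_matrices=True)
    rank = int(np.sum(s > tol * s[0])) if s.size else 0
    U2 = U[:, rank:]
    return U2 @ U2.T

def delay_user_observable_basis(A, B, F, C, gamma, lam, tau1):
    # Rows spanning an approximation of O*_H (simplified reading of the text).
    p, m = C.shape[0], B.shape[1]
    O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(gamma + 1)])
    Hx = np.zeros(((gamma + 1) * p, max(gamma, 1) * m))
    for i in range(1, gamma + 1):
        for k in range(i):
            Hx[i*p:(i+1)*p, k*m:(k+1)*m] = \
                C @ np.linalg.matrix_power(A, i-1-k) @ B
    Hr = np.vstack([np.zeros((p, F.shape[1]))] +
                   [C @ np.linalg.matrix_power(A, i-1) @ F
                    for i in range(1, gamma + 1)])
    start = min(max(lam + 1, 0) * m, Hx.shape[1])
    unknown = np.hstack([Hx[:, start:], Hr])   # coefficients of signals unknown to the user
    M = left_nullspace_projector(unknown)      # M @ unknown = 0, as in (11)
    delayed = M @ O @ expm(-A * tau1)          # reconstructible combinations of x(t)
    return np.vstack([delayed, C])             # plus the directly measured part R(C^T)

basis = delay_user_observable_basis(
    A=np.array([[0.0, 1.0], [-6.0, -5.0]]),
    B=np.array([[0.0], [1.0]]),
    F=np.array([[0.0], [6.0]]),
    C=np.eye(2), gamma=1, lam=0, tau1=0.2)
```

The row space of `basis` plays the role of O*_H in the rank tests that follow.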

Corollary 1:
A delay-incorporating user-observable space is also user observable.
Proof: By definition, the user-observable space is the space spanned by the combinations of the current states which are known to the user at the current time, for τ1 = 0. From (11), the user-observable subspace can be formulated as

O_H = R((MO)^T) ⊕ R(C^T).

We also have the equation of O*_H from (12). Under Assumption 2, the matrix A is of full rank, hence

R((M O e^(−Aτ1))^T) = R((MO)^T).

In addition, for any matrices N and Q of compatible dimensions, we have R(NQ) ⊆ R(N). Hence

R((P_τ1 M O e^(−Aτ1))^T) ⊆ R((M O e^(−Aτ1))^T) = R((MO)^T),

and therefore O*_H ⊆ O_H. □
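The containment R(NQ) ⊆ R(N) invoked in the proof can be checked numerically: adjoining the columns of NQ to N never increases the column rank. A small demonstration with random matrices:

```python
import numpy as np

# Numerical check of R(NQ) ⊆ R(N): stacking the columns of N @ Q
# next to those of N does not raise the rank.
rng = np.random.default_rng(0)
N = rng.normal(size=(4, 3))
Q = rng.normal(size=(3, 3))
rank_N = np.linalg.matrix_rank(N)
rank_joint = np.linalg.matrix_rank(np.hstack([N, N @ Q]))
```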

Delay-incorporating user-predictable subspace
From Definition 2, the delay-incorporating user-predictable space is the space spanned by the combinations of the upcoming states, x(t + τ), which are known to the user at time t. As in Section 3.1, we can write the upcoming states as a function of Y_0:γ(t − τ1) and U_0:λ(t − τ1).
Theorem 2: In a system of form (1) and under Assumption 3, the delay-incorporating user-predictable subspace is of the form

P*_H = R((P_τ P_τ1 M O e^(−A(τ1+τ)))^T),    (23)

where P_τ is from (27), P_τ1 is from (18) and M is from (10).
Proof: From (19) and the evolution of the states over the prediction horizon, we can write

x(t + τ) = e^(Aτ) x(t) + δ0 u(t) + δ1 u̇(t) + δ2 r,    (24)

where τ is the required prediction horizon and δ0-δ2 are defined in (25) analogously to θ0-θ2. Combining this with the known functionals of x(t), where C_known is from (20), we obtain an equation for x(t + τ) in which new unknown input terms may appear (26). By pre-multiplying (26) by

P_τ = P_τ,r P_τ,λ+1 ⋯ P_τ,0,    (27)

we can remove all unknown values from it. In (27):

• The matrix P_τ,r is a projection onto the left null-space of the coefficient of r, P_τ1 M O e^(−A(τ1+τ)) δ2.
• For j ≤ λ, P_τ,j is an identity matrix; otherwise, P_τ,j is a projection onto the left null-space of the coefficient of the corresponding unknown input derivative.

Hence, the functionals of the upcoming states of the system, x(t + τ), which span the row space of P_τ P_τ1 M O e^(−A(τ1+τ)), can be reconstructed by the user by time t. □
Procedure: The steps required to calculate the delay-incorporating user-predictable subspace, P*_H, are as follows:

• Obtain all the matrices required to determine O*_H.
• Calculate δ0-δ2 from (25).
• Form P_τ from (27) and compute P*_H from (23).
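A heuristic sketch of the prediction step: if the additional projector P_τ, which removes newly arising unknown input terms, is ignored, the combinations of x(t + τ) reconstructible now are simply the delayed-observable rows pushed τ further back in time. This is our simplification, not the full procedure:

```python
import numpy as np
from scipy.linalg import expm

def predictable_rows(delayed_observable_rows, A, tau):
    # Sketch of Theorem 2 with the projector P_tau omitted: rows spanning
    # the reconstructible combinations of x(t + tau) are the delayed-
    # observable rows multiplied by e^{-A tau}.
    return delayed_observable_rows @ expm(-A * tau)

A = np.array([[0.0, 1.0], [-6.0, -5.0]])   # illustrative stand-in dynamics
R_obs = np.eye(2)                          # pretend the whole state is observable
R_pred = predictable_rows(R_obs, A, tau=0.1)
```

Because e^(−Aτ) is always invertible, this step alone never shrinks the subspace; any shrinkage comes from the omitted projector P_τ.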

Corollary 2:
A delay-incorporating user-predictable space is also delay-incorporating user-observable.
Proof: As in the proof of Corollary 1, it is straightforward to show that, with A being a full-rank matrix,

R((P_τ P_τ1 M O e^(−A(τ1+τ)))^T) ⊆ R((P_τ1 M O e^(−Aτ1))^T) ⊆ O*_H,

hence P*_H ⊆ O*_H. □

Corollary 3: A delay-incorporating user-predictable space is also user-predictable.
Proof: Consider τ1 = 0, hence P_τ1 = I and, from (23), P_H can be formulated as

P_H = R((P_τ M O e^(−Aτ))^T).

As in Corollaries 1 and 2, it is trivial to show that P*_H ⊆ P_H. □

Validation of the displayed information
In Section 2, we stated that, for safety-critical systems under human or shared control, the user-interface must provide the user with information that renders the task delay-incorporating user-observable and delay-incorporating user-predictable. Hence, we can introduce the following proposition.

Proposition 1:
In order for a user to be able to accomplish a desired task in a safety-critical condition, for a system of form (1) and under Assumptions 1-5, the following inclusion is necessary:

T ⊆ P*_H,    (28)

where P*_H is the delay-incorporating user-predictable subspace formulated in (23).
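The inclusion in Proposition 1 reduces to a rank test on row spaces. In the sketch below, `P_rows` stands for any matrix whose rows span P*_H (an assumption of this illustration, since P*_H is computed elsewhere):

```python
import numpy as np

def task_covered(T, P_rows, tol=1e-8):
    # Rank test for R(T^T) ⊆ R(P_rows^T): every task row must lie in the
    # row space of P_rows, i.e. stacking T under P_rows keeps the rank.
    return (np.linalg.matrix_rank(np.vstack([P_rows, T]), tol=tol)
            == np.linalg.matrix_rank(P_rows, tol=tol))

ok = task_covered(np.array([[1.0, 0.0]]), np.eye(2))          # task inside the subspace
bad = task_covered(np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]]))  # task outside it
```

A display failing this test is rejected, exactly as described in the problem statement.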

Example
We consider a remotely driven point-mass car modelled as a double integrator and stabilised to have poles at −2 and −3. Our goal is to evaluate whether, for such a system, displays which measure the position or the velocity of the car are effective for accomplishing a desired task when the processing delay is τ1 = 0.2 s [39].
We consider two cases: (1) a user who controls the system via a known force with a known constant rate, that is, all derivatives of the input are known (λ = ∞), and (2) a user whose input to the system is complicated and random and who thus has no knowledge of the input derivatives, that is, λ = 0. For both cases, we consider γ = 1.
Our desired task is stopping at a stop sign; hence, we can define the task space T from the corresponding combinations of the position and velocity states.

Delay-incorporating user-observable subspace
It is first required to calculate the delay-incorporating user-observable subspace for the different measurements of the states available to the user. From (12), for λ = ∞, we obtain M = I and P_τ1 = I, hence

O*_H|_(λ=∞) = R((O e^(−Aτ1))^T) ⊕ R(C^T).    (31)

Also, for λ = 0, we obtain M = I, therefore

O*_H|_(λ=0) = R((P_τ1 O e^(−Aτ1))^T) ⊕ R(C^T).    (32)

From (31) and (32), for either of the two displays, a GPS with C = [1 0] or a speedometer with C = [0 1], we can show that O*_H|_(λ=∞) spans R^2. In addition, when λ = 0, with either of the measurements in the display we obtain

P_τ1 = [0.9919 0.0899; 0.0899 0.0081].

The results state that, for such a system, regardless of the type of the measurements and the complexity of the user's input, the user can reconstruct both states of the system.

Delay-incorporating user-predictable subspace
For τ = 0.1, we calculate the delay-incorporating user-predictable subspace of this system for the different measurements of the states provided in the display.
When λ = 0, for either of the displays, the resulting delay-incorporating user-predictable subspace does not span R^2. These results can help the reader understand Corollary 2 better, as in all of the above cases the delay-incorporating user-predictable space is a subset of the delay-incorporating user-observable space.
We now consider λ = 0 for the case of having no processing delay, τ1 = 0. We can then obtain the user-predictable subspace for either of the displays. On the basis of Corollary 3, the delay-incorporating user-predictable space is always a subset of the user-predictable space, which we have also shown to be the case in this example. The result of having no delay shows that not considering the processing delay can result in a larger user-predictable subspace. Overlooking this delay can mislead the designer into misjudging the capability of the user to accomplish a task; that is, by ignoring the delay, the designer might find the user capable of accomplishing the task while, because of the processing delay, the space which is predictable by the user may not include the task space or may even be empty. Thus, for safety-critical systems, it is not safe to simply ignore this delay, as doing so may result in hazardous outcomes.

Task accomplishment
With either of the suggested displays, when λ = ∞, both the delay-incorporating user-observable and the delay-incorporating user-predictable spaces span R^2. Hence, T ⊂ P*_H|_(λ=∞). This means that, regardless of the displayed information, if the pattern of changes of the user's input is entirely clear to the user (i.e. the user has knowledge about his/her own input to the system and all its derivatives), it might be possible for the user to stop at a stop sign. As an example, consider a user who enters a constant-value input to the system by applying a constant amount of pressure on the brake. In this case, not only is the input known to the user, it is also known that this input is not changing over time.
On the other hand, it is clear that T ⊄ P*_H|_(λ=0) for either of the displays. Hence, under Assumption 6, for a complicated input to the system, neither the GPS nor the speedometer is effective for a human to control the velocity of the mass.
For this example, although not having processing delay is helpful in expanding the user predictable space, it still does not help with task accomplishment.
In summary, we have shown that, when the input and all its derivatives are known to the user, the user is capable of observing and predicting the states which are involved in the task. On the other hand, when the user does not have knowledge about the input derivatives, he may not be able to predict the task. Hence, overall, we can see that for a car modelled as a stable point-mass vehicle, acceptance or rejection of the display depends on the degree of the user's knowledge about his own input to the system. This is because, based on the theory of SA, delay-incorporating user-predictability of the task is necessary for task accomplishment.

Conclusions
In this paper, we presented a framework for systems under shared control to obtain a necessary condition for evaluating the information content of the user-interfaces of safety-critical systems. This framework incorporates both low-level control and reference tracking. We considered systems in which the environment does not provide the user with any information. We also focused on uni-modal displays containing only the visual modality. For safety-critical LTI systems, two novel subspaces, the delay-incorporating user-observable subspace, O*_H, and the delay-incorporating user-predictable subspace, P*_H, were formulated and compared to the task subspace for a feasible task. If the task subspace does not lie in the relevant space, then the user-interface is incorrect, meaning that there exists a possibility that the user cannot accomplish the desired task with the given user-interface.
So far, we considered a specific category of systems in which the user-interface was the only source of information for the user. Incorporating the effect of information from the environment on the required information content of the user-interface is an important work which is necessary in order to make our framework more applicable to realistic cases. Another important modification to our work can be evaluating the information content of user-interfaces in the presence of uncertainties and noisy measurements. Clearly, having uncertain measurements and noisy derivatives is very common for real systems and it is necessary that we modify our framework to become more suitable for real-world applications.
Other interesting pieces of further research are evaluating the direct effect of the mental model on situation awareness and extending the framework to systems with multi-modal displays by assigning proper weights to measurements from various sources. In addition, incorporating the perception delay and considering the differences in the amount of this delay for different types of measurements is a worthy extension to the work.