Games of entangled agents

ABSTRACT

The paper extends the mathematical formalism of quantum physics to games of intelligent agents that communicate through entanglement. The novelty of the approach lies in a human-factor-based behavioral model of an intelligent agent. The model is quantum inspired: it is represented by a modified Madelung equation in which the gradient of the quantum potential is replaced by a specially chosen information force. It consists of motor dynamics simulating the actual behavior of the agent, and mental dynamics representing the evolution of the corresponding knowledge base and incorporating this knowledge, in the form of information flows, into the motor dynamics. Due to feedback from the mental dynamics, the motor dynamics attains quantum-like properties: its trajectory splits into a family of different trajectories, each of which can be chosen with the probability prescribed by the mental dynamics; each agent is entangled (in a quantum way) with other agents and makes calculated predictions of future actions; the human factor is associated with violation of the second law of thermodynamics: the system can move from disorder to order without external help, and that represents intrinsic intelligence. These departures extend and complement the classical methods, making them especially successful in the analysis of communications between agents represented by the new mathematical formalism, and in particular in agent-based economics with a human factor.


ABOUT THE AUTHOR
This work opens the way to the development of a fundamentally new approach to the mathematical formalism of economics, inspired by the formalism of quantum mechanics. Mathematical treatment of economics has a relatively short history. Recently developed Newtonian models have been criticized when applied to human behavior, on the grounds that some human choices are irreducible to Newtonian physics. Actually, the alert was sounded much earlier by Newton, who stated: "I can calculate the motion of heavenly bodies, but not the madness of people." The limitations of modern mathematical methods in economics are especially transparent in the area of agent-based computational economics. In this context, the presented mathematical formalism incorporates a human factor into agent-based economics while departing from the formalism of physics toward the physics of life. The author's recent research activity covers mathematical physics, artificial intelligence, and the theory of turbulence.

PUBLIC INTEREST STATEMENT
The novelties introduced in the manuscript, and in particular the departures from the Newtonian approach, extend and complement the classical methods, making them especially successful in the analysis of communications between agents represented by the new mathematical formalism, and in particular in agent-based economics with a human factor.

Introduction
This paper is devoted to a new approach to differential games (Isaacs, 1965), i.e. to a group of problems related to the modeling and analysis of conflict in the context of a dynamical system. In the corresponding agent-based models, the "agents" are "computational objects modeled as interacting according to rules" over space and time, not real people. The rules are formulated to model behavior and social interactions based on incentives and information. Such rules could also be the result of optimization, realized through the use of AI methods.
We will concentrate on the non-Newtonian properties of dynamics describing a psychology-based behavior of intelligent agents as players. In other words, we will introduce dynamical systems with a human factor.

Justification for non-Newtonian approach
All the previous attempts to develop models for so-called active systems (i.e. systems that possess a certain degree of autonomy from the environment, which allows them to perform motions that are not directly controlled from outside) have been based upon the principles of Newtonian and statistical mechanics. These models appear to be so general that they predict not only physical but also some biological, economic, and social patterns of behavior, exploiting such fundamental properties of non-linear dynamics as attractors. Notwithstanding the indisputable successes of that approach (neural networks, distributed active systems, etc.), there is still a fundamental limitation that characterizes these models on the dynamical level of description: they propose no difference between a solar system, a swarm of insects, and a stock market. Such phenomenological reductionism is incompatible with the first principle of progressive biological evolution associated with Darwin. According to this principle, the evolution of living systems is directed toward the highest levels of complexity, if complexity is measured by an irreducible number of different parts that interact in a well-regulated fashion (although in some particular cases deviations from this general tendency are possible). At the same time, the solutions to models based upon dissipative Newtonian dynamics eventually approach attractors where the evolution stops, and these attractors dwell on subspaces of lower dimensionality, and therefore of lower complexity (until a "master" reprograms the model). Therefore, such models fail to provide an autonomous progressive evolution of living systems (i.e. evolution leading to an increase in complexity). Let us now extend the dynamical picture to include thermal forces.
That will correspond to the stochastic extension of Newtonian models, while the Liouville equation will extend to the Fokker-Planck equation, which includes thermal force effects through the diffusion term. Actually, it is a well-established fact that the evolution of life has a diffusion-based stochastic nature as a result of the multi-choice character of the behavior of living systems. Such an extended, thermodynamics-based approach is more relevant to modeling living systems, and therefore the simplest living species must obey the second law of thermodynamics as physical particles do. However, then the evolution of living systems (during periods of their isolation) will be regressive, since their entropy will increase. Therefore, Newtonian physics is not sufficient for simulating the specific properties typical of intelligence.
There is another argument in favor of a non-Newtonian approach to modeling intelligence. As pointed out by Penrose (1955), Gödel's famous theorem has the clear implication that mathematical understanding cannot be reduced to a set of known computational rules. That means that no knowable set of purely computational procedures could lead to a computer-controlled robot that possesses genuine mathematical understanding. In other words, such privileged properties of intelligent systems as common sense, intuition, or consciousness are non-computable within the framework of classical models. That is why a fundamentally new physics is needed to capture these "mysterious" aspects of intelligence, and in particular the decision-making process.

Dynamical model for simulations
In this section, we review and discuss a behavioral model of intelligent agents, or players. The model is based upon a departure from Newtonian dynamics to quantum-inspired dynamics that was first introduced in Zak (1998, 2007, 2008, 2014a, 2014b), Figure 1.

Destabilizing effect of Liouville feedback
We will start with the derivation of an auxiliary result that illuminates the departure from Newtonian dynamics. For mathematical clarity, we will consider here a one-dimensional motion of a unit mass under the action of a force f depending upon the velocity v and time t, and present it in dimensionless form, referring all the variables to their representative values v_0, t_0, etc.:

v̇ = f(v, t)  (1)

If the initial conditions are not deterministic, and their probability density is given in the form

ρ(V, 0) = ρ_0(V), ρ_0 ≥ 0, ∫ρ_0 dV = 1  (2)

while ρ is a single-valued function, then the evolution of this density is expressed by the corresponding Liouville equation

∂ρ/∂t + ∂(ρf)/∂V = 0  (3)

The solution of this equation subject to the initial conditions and the normalization constraint (2) determines the probability density as a function of V and t. Remark. Here and below we make a distinction between the random variable v(t) and its values V in probability space.

In order to deal with the constraint (2), let us integrate Equation (3) over the whole space, assuming that ρ → 0 at |V| → ∞ and |f| < ∞. Then d/dt ∫ρ dV = 0. Hence, the constraint (2) is satisfied for t > 0 if it is satisfied for t = 0.
Let us now specify the force f as a feedback from the Liouville equation

f = σ² ∂(ln ρ)/∂v  (6)

and analyze the motion after substituting the force (6) into Equation (1). This is a fundamental step in our approach: although the theory of ODEs does not impose any restrictions upon the force as a function of the space coordinates, Newtonian physics does, since equations of motion are never coupled with the corresponding Liouville equation. Moreover, it can be shown that such a coupling leads to non-Newtonian properties of the underlying model. Indeed, substituting the force f from Equation (6) into Equation (3), one arrives at a non-linear equation for the evolution of the probability density. Let us now demonstrate the destabilizing effect of the feedback (6). For that purpose, it should be noticed that the derivative ∂ρ/∂v must change its sign at least once within the interval −∞ < v < ∞ in order to satisfy the normalization constraint (2).
But since, in some of these regions,

∂f/∂v = σ² ∂²(ln ρ)/∂v² > 0  (9)

there will be regions of v where the motion is unstable, and this instability generates randomness with the probability distribution guided by the Liouville equation (8). It should be noticed that the condition (9) may lead to exponential or polynomial growth of v (in the latter case the motion is called neutrally stable; however, as will be shown below, it causes the emergence of randomness as well if, prior to the polynomial growth, the Lipschitz condition is violated).

Emergence of self-generated stochasticity
In order to illustrate the mathematical aspects of the concept of Liouville feedback in the systems under consideration, as well as the instability and randomness associated with it, let us take the feedback

f = −σ² ∂(ln ρ)/∂v  (10)

so that the equation of motion and the corresponding Liouville equation take the form, respectively,

v̇ = −σ² ∂(ln ρ)/∂v  (11)

∂ρ/∂t = σ² ∂²ρ/∂V²  (12)

Here v stands for a particle velocity, and σ² is the constant diffusion coefficient. The solution of Equation (12) subject to the sharp initial condition ρ(V, 0) = δ(V) describes diffusion of the probability density

ρ = (1/(2σ√(πt))) exp(−V²/(4σ²t))  (13)

and that is why the feedback (10) will be called a diffusion feedback.
Substituting this solution into Equation (11) at V = v, one arrives at the differential equation with respect to v(t)

v̇ = v/(2t)  (14)

and therefore

v = C√t  (15)

where C is an arbitrary constant. Since v = 0 at t = 0 for any value of C, the solution (15) is consistent with the sharp initial condition for the solution (13) of the corresponding Liouville equation (12). The solution (15) describes the simplest irreversible motion: it is characterized by the "beginning of time" where all the trajectories intersect (which results from the violation of the Lipschitz condition at t = 0, Figure 2), while the backward motion obtained by replacing t with (−t) leads to imaginary values of velocities. One can notice that the probability density (13) possesses the same properties.
It is easily verifiable that the solution (15) has the same structure as the solution of the Madelung equation (Zak, 2014b), although the dynamical system (11), (12) is not quantum! The explanation of such a "coincidence" is very simple: the system (11), (12) has the same dynamical topology as that of the Madelung equation, where the equation of conservation of probability is coupled with the equation of conservation of momentum. As will be shown below, the system (11), (12) is neither quantum nor Newtonian, and we will call such systems quantum-inspired, or self-supervised.
Further analysis demonstrates that the solution (15) is unstable, since

dv̇/dv = 1/(2t) > 0  (16)

and therefore an initial error always grows, generating randomness. Initially, at t = 0, this growth is of infinite rate, since the Lipschitz condition is violated at this point:

dv̇/dv → ∞ at t → 0  (17)

This type of instability was introduced and analyzed in Zak (1992). The unstable equilibrium point (v = 0) has been called a terminal repeller, and the instability triggered by the violation of the Lipschitz condition a non-Lipschitz instability. The basic property of the non-Lipschitz instability is the following: if the initial condition is infinitely close to the repeller, the transient solution will escape the repeller during a bounded time, while for a regular repeller the time would be unbounded. Indeed, an escape from the simplest regular repeller can be described by the exponent v = v_0 e^t. Obviously, v → 0 if v_0 → 0, unless the time period is unbounded. On the contrary, the period of escape from the terminal repeller (15) is bounded (and even infinitesimal) if the initial condition is infinitely small (see Equation (17)). The same results follow from the ensemble (15) at −∞ ≤ C ≤ ∞: the first equality in (18) results from the symmetry of the ensemble with respect to v = 0, while the second one follows from the variance of the diffusive density (13). It is interesting to notice that the stochastic process (15) is an alternative to the Langevin equation (Risken, 1989) that corresponds to the same Fokker-Planck Equation (12); here, Γ(t) is the Langevin (random) force with zero mean and constant variance σ.
Thus, the emergence of self-generated stochasticity is the first basic non-Newtonian property of the dynamics with the Liouville feedback.
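As a numerical illustration, the self-generated stochasticity can be emulated by sampling the arbitrary constant of the escaping trajectories. This is a minimal sketch, assuming that each trajectory of the unstable motion has the form v = C√t (solution (15)) and that the diffusive density (13) prescribes C to be normally distributed with variance 2σ²; the values of σ, t, and the ensemble size are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5     # diffusion coefficient (assumed value)
t = 2.0         # observation time
n = 200_000     # ensemble size

# Each run of the unstable motion escapes v = 0 along v = C*sqrt(t); the
# diffusive density prescribes C ~ N(0, 2*sigma**2), so that the ensemble
# reproduces the Liouville solution: mean 0, variance 2*sigma**2*t.
C = rng.normal(0.0, np.sqrt(2.0) * sigma, size=n)
v = C * np.sqrt(t)

print(v.mean())                        # ~ 0
print(v.var(), 2.0 * sigma**2 * t)     # both ~ 1.0
```

Running the ensemble many times reproduces the statistics of the diffusion equation even though each individual run is a deterministic curve chosen "by chance" at the terminal repeller.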

Second law of thermodynamics
In order to demonstrate another non-Newtonian property of the systems considered above, let us start with the dimensionless form of the Langevin equation for a one-dimensional Brownian motion of a particle subjected to a random force (Risken, 1989). Here, v is the dimensionless velocity of the particle (referred to a representative velocity v_0), k is the coefficient of a linear damping force, Γ(t) is the Langevin (random) force per unit mass, and σ > 0 is the noise strength. The representative velocity v_0 can be chosen, for instance, as the initial velocity of the motion under consideration.
The corresponding continuity equation for the probability density ρ is the following Fokker-Planck equation Obviously without external control, the particle cannot escape from the Brownian motion.
Let us now introduce a new force (referred to unit mass and divided by v_0) as a Liouville feedback. Here D is the dimensionless variance of the stochastic process, D(t) = ∫V²ρ dV. Obviously, the diffusion coefficient in Equation (25) is negative. Multiplying Equation (25) by V², then integrating it with respect to V over the whole space, one arrives at an ODE for the variance D(t). Thus, as a result of negative diffusion, the variance D monotonically vanishes regardless of the initial value D(0). It is interesting to note that the time T of approaching D = 0 is finite. This terminal effect is due to violation of the Lipschitz condition at D = 0 (Zak, 2014a).
Let us review the structure of the force (23): it is composed only of the probability density and its variance, i.e. of the components of the conservation Equation (25); at the same time, Equation (25) itself is generated by the equation of motion (24). Consequently, the force (23) is not an external force. Nevertheless, it allows the particle to escape from the Brownian motion using its own "internal effort." It would be reasonable to call the force (23) an information force, since it is linked to information rather than to energy.
Thus, we have come across a phenomenon that violates the second law of thermodynamics: the dynamical system moves from disorder to order without external interactions, due to a feedback from the equation of conservation of probability to the equation of conservation of momentum. One may ask why the negative diffusion was chosen to be non-linear. Let us turn to a linear version of Equation (26) and discuss negative diffusion in more detail. As follows from the linear equivalent of Equation (26), the variance decreases at a constant rate and therefore eventually becomes negative, and that disqualifies Equation (29) from being meaningful. As shown in Zak (2014a), the initial value problem for this equation is ill-posed: its solution is not differentiable at any point. Therefore, a negative diffusion must be non-linear in order to protect the variance from becoming negative, Figure 3.
It should be emphasized that negative diffusion represents a major departure from both Newtonian mechanics and classical thermodynamics by providing a progressive evolution of complexity against the second law of thermodynamics.
In the next subsection, we will demonstrate again that formally the dynamics introduced above does not belong to the Newtonian world; nevertheless, its self-supervising capability may qualify such dynamics as a potential model for intelligent behavior. For that purpose, we will turn to an even simpler version of this dynamics by removing the external Langevin force and simplifying the information force.
In 1944, Schrödinger wrote in his book What is Life?: "Life is to create order in the disordered environment against the second law of thermodynamics." The self-supervised dynamical system introduced above is fully consistent with this statement. Indeed, consider a simplified version of Equations (21) and (22) in which the Langevin force is removed, making the particle isolated. Nevertheless, the particle has a capability of moving from disorder to order. For demonstration of this property, we will assume that the Langevin force was suddenly removed at t = 0, so that the initial variance D_0 > 0. Then

D(t) = (√D_0 − t)² at t ≤ √D_0  (33)

As follows from Equation (33), as a result of the internal, self-generated force, the Brownian motion gradually disappears and then vanishes abruptly:

D → 0 at t → √D_0  (34)

Thus, the probability density shrinks to a delta-function at t = √D_0. Consequently, the entropy S = −∫ρ ln ρ dV decreases down to zero, and that violates the second law of thermodynamics.
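The finite-time collapse of the variance can be checked numerically. This is a minimal sketch, assuming the variance obeys dD/dt = −2√D, a form chosen so that D(t) = (√D_0 − t)², consistent with the collapse of the probability density at t = √D_0 stated above; the initial value D_0 = 4 is an arbitrary choice.

```python
import math

D0 = 4.0        # initial variance; collapse predicted at t = sqrt(D0) = 2
dt = 1e-4       # Euler step

# Non-linear negative diffusion: dD/dt = -2*sqrt(D) (assumed form).
# Its exact solution D(t) = (sqrt(D0) - t)**2 reaches D = 0 at t = sqrt(D0).
D, t = D0, 0.0
while D > 0.0:
    D = max(D - 2.0 * math.sqrt(D) * dt, 0.0)
    t += dt

print(t)  # ~ 2.0: the variance vanishes in finite time, not asymptotically

# A linear law dD/dt = -const would instead drive D below zero in finite
# time, which is why the negative diffusion must be non-linear (see text).
```

The contrast with a regular attractor is the point: an exponential decay D ∝ e^{−t} would never reach zero, while the terminal (non-Lipschitz) dynamics reaches it in the bounded time √D_0.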
Another non-Newtonian property is entanglement.

Entanglement
In this subsection, we will introduce a fundamental and still mysterious property that was predicted theoretically and corroborated experimentally in quantum systems: entanglement. Quantum entanglement is a phenomenon in which the quantum states of two or more objects have to be described with reference to each other, even though the individual objects may be spatially separated. This leads to correlations between observable physical properties of the systems. As a result, measurements performed on one system seem to be instantaneously influencing other systems entangled with it. Different views of what is actually occurring in the process of quantum entanglement give rise to different interpretations of quantum mechanics. Here, we will demonstrate that entanglement is not a prerogative of quantum systems: it occurs in quantum-inspired (QI) systems that are under consideration in this paper. That will shed light on the concept of entanglement as a special type of global constraint imposed upon a broad class of dynamical systems that include quantum as well as QI systems.
In order to introduce entanglement in QI system, we will start with Equations (11) and (12) and generalize them to the two-dimensional case As in the one-dimensional case, this system describes diffusion without a drift The solution to Equation (38) has a closed form Here, Substituting the solution (39) with Equations (36) and (37), one obtains Eliminating t from these equations, one arrives at an ODE in the configuration space This is a classical singular point treated in text books on ODE.

Its solution depends upon the roots of the characteristic equation
Since both roots are real in our case, let us assume for concreteness that they are of the same sign, for instance λ_1 = 1, λ_2 = 1. Then the solution to Equation (43) is represented by the family of straight lines

v_2 = C̃ v_1  (44)

Substituting this solution into Equation (41) yields the corresponding time history (45). Thus, the solutions to Equations (36) and (37) are represented by two-parametrical families of random samples, as expected, while the randomness enters through the time-independent parameters C and C̃ that can take any real values. Let us now find a combination of the variables that is deterministic. Obviously, such a combination should not include the random parameters C or C̃. It is easily verifiable that such a combination exists (Equations (46) and (47)), and therefore the ratio (48) is deterministic although both its numerator and denominator are random (see Equation (47)). This is a fundamental non-classical effect representing a global constraint. Indeed, in the theory of stochastic processes, two random functions are considered statistically equal if they have the same statistical invariants, but their point-to-point equality is not required (although it can happen with a vanishingly small probability). As demonstrated above, the diversion of determinism into randomness via instability (due to a Liouville feedback), and then the conversion of randomness into partial determinism (or coordinated randomness) via entanglement, is the fundamental non-classical paradigm that may lead to instantaneous transmission of conditional information over remote distances, as discussed below.
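This "coordinated randomness" can be illustrated numerically. The following is a minimal sketch, not the paper's exact Equations (46)-(48): it assumes trajectories of the one-dimensional form v_i = C_i√t (solution (15)), for which the individual velocities are random through C_1, C_2, while the log-derivative v̇/v = 1/(2t) is identical for every sample, i.e. a deterministic combination of random quantities.

```python
import numpy as np

rng = np.random.default_rng(1)
t, dt = 2.0, 1e-6   # observation time and step for a numerical derivative
n = 5               # ensemble of random samples

# Random trajectories v_i = C_i*sqrt(t): C differs from sample to sample.
C1 = rng.uniform(0.5, 2.0, n)
C2 = rng.uniform(0.5, 2.0, n)
v1 = C1 * np.sqrt(t)
v2 = C2 * np.sqrt(t)

# v1 and v2 (and their ratio) vary across samples ...
print(v2 / v1)

# ... but the log-derivative (dv/dt)/v = 1/(2t) is sample-independent:
rate1 = (C1 * np.sqrt(t + dt) - v1) / dt / v1
rate2 = (C2 * np.sqrt(t + dt) - v2) / dt / v2
print(rate1, rate2)  # every entry ~ 1/(2t) = 0.25
```

The arbitrary constants cancel in the combination, which is the toy analogue of the global constraint: each trajectory is random, yet a functional of the trajectories is the same for every member of the ensemble.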

Relevance to model of intelligent agents
The model under discussion was inspired by E. Schrödinger, one of the creators of quantum mechanics, who wrote in his book What is Life?: "Life is to create order in the disordered environment against the second law of thermodynamics." The proposed model illuminates the "border line" between living and non-living systems. The model introduces a biological particle that, in addition to Newtonian properties, possesses the ability to process information. The probability density can be associated with the self-image of the biological particle as a member of the class to which this particle belongs, while its ability to convert the density into the information force can be associated with self-awareness (both these concepts are adopted from psychology). Continuing this line of associations, the equation of motion (such as Equations (1) or (7)) can be identified with motor dynamics, while the evolution of the density (see Equations (3) or (8)) can be identified with mental dynamics. Actually, the mental dynamics plays the role of Maxwell's sorting demon: it rearranges the probability distribution by creating the information potential and converting it into a force that is applied to the particle. One should notice that the mental dynamics describes the evolution of a whole class of state variables (differing from each other only by initial conditions), and that can be associated with the ability to generalize, which is a privilege of living systems. Continuing our biologically inspired interpretation, it should be recalled that the second law of thermodynamics states that the entropy of an isolated system can only increase. This law has a clear probabilistic interpretation: an increase of entropy corresponds to the passage of the system from less probable to more probable states, while the highest probability of the most disordered state (that is, the state with the highest entropy) follows from a simple combinatorial analysis. However, this statement is correct only if there is no Maxwell's sorting demon, i.e. 
nobody inside the system is rearranging the probability distributions. But this is precisely what the Liouville feedback is doing: it takes the probability density ρ from Equation (3), creates functionals and functions of this density, converts them into a force and applies this force to the equation of motion (1). As already mentioned above, because of that property of the model, the evolution of the probability density becomes non-linear, and the entropy may decrease "against the second law of thermodynamics," Figure 4.
Obviously, the last statement should not be taken literally; indeed, the proposed model captures only those aspects of living systems that are associated with their behavior, and in particular with their motor-mental dynamics, since other properties are beyond the dynamical formalism. Therefore, physiological processes that are needed for metabolism, reproduction, etc., are not included in the model. That is why this model is in formal disagreement with the first and second laws of thermodynamics, while living systems are not. Indeed, in applying the first law of thermodynamics we imply a violation of the conservation of mechanical energy, since other types of energy (chemical, electro-magnetic, etc.) are beyond our mathematical formalism. In applying the second law of thermodynamics, we consider our system as an isolated one, while the underlying real system is open due to other activities of living systems that were not included in our model. Nevertheless, despite these limitations, the proposed model captures the "magic" of life: the ability to create self-image and self-awareness, and that fits perfectly the concept of an intelligent agent. Actually, the proposed model represents governing equations for interactions of intelligent agents. In order to emphasize the autonomy of the agents' decision-making process, we will associate the proposed models with self-supervised (SS) active systems.
By an active system, we will understand here a set of interacting intelligent agents capable of processing information, while an intelligent agent is an autonomous entity which observes and acts upon an environment and directs its activity toward achieving goals. The active system is not derivable from the Lagrange or Hamilton principles; it is rather created for information processing. One of the specific differences between active and physical systems is that the former are supposed to act under uncertainty originating from incompleteness of information. Indeed, an intelligent agent almost never has access to the whole truth of its environment. Uncertainty can also arise because of incompleteness and incorrectness in the agent's understanding of the properties of the environment. That is why QI SS systems are well suited for the representation of active systems.

Self-supervised active systems with integral feedback
In this subsection, we will introduce a feedback from the mental to the motor dynamics that is different from the feedback (6) discussed above. This feedback will make it easier to formulate new principles of the competitive mode of agents associated with game theory.
Let us introduce the following feedback (Zak, 2008):

f = (1/ρ(v, t)) ∫_{−∞}^{v} [ρ(ζ, t) − ρ*(ζ)] dζ  (49)

With the feedback (49), Equations (7) and (8) take the form, respectively,

v̇ = (1/ρ(v, t)) ∫_{−∞}^{v} [ρ(ζ, t) − ρ*(ζ)] dζ  (50)

∂ρ/∂t = ρ*(V) − ρ(V, t)  (51)

The last equation has the analytical solution

ρ = ρ* + (ρ_0 − ρ*) e^{−t}  (52)

subject to the initial condition ρ(V, 0) = ρ_0(V). This solution converges to a prescribed, or target, stationary distribution ρ*(V). Obviously, the normalization condition for ρ is satisfied if it is satisfied for ρ_0 and ρ*. This means that Equation (51) has an attractor in the probability space, and this attractor is stochastic. Substituting the solution (52) into Equation (50), one arrives at the ODE (54) that simulates the stochastic process with the probability distribution (52). As noticed above, the randomness of the solution to Equation (54) is caused by instability that is controlled by the corresponding Liouville equation. It should be emphasized that in order to run the stochastic process starting with the initial distribution ρ_0 and approaching a stationary process with the distribution ρ*, one should substitute into Equation (54) the analytical expressions for these functions.
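The attractor in probability space can be verified directly. This is a minimal sketch, assuming the mental dynamics takes the pointwise relaxation form ∂ρ/∂t = ρ* − ρ, consistent with the exponential convergence to the target distribution ρ* described above; the target, the initial density, and the grid are arbitrary choices.

```python
import numpy as np

V = np.linspace(-5.0, 5.0, 1001)
dV = V[1] - V[0]

rho_star = np.exp(-V**2 / 2) / np.sqrt(2 * np.pi)   # target density rho*
rho = np.where(np.abs(V) < 1.0, 0.5, 0.0)           # any normalized start rho0

# Mental dynamics d(rho)/dt = rho* - rho, integrated pointwise by Euler:
# the solution rho = rho* + (rho0 - rho*)exp(-t) decays to rho* for any rho0.
dt = 0.01
for _ in range(int(10.0 / dt)):                     # integrate up to t = 10
    rho += dt * (rho_star - rho)

print(np.max(np.abs(rho - rho_star)))   # ~ exp(-10): the attractor is reached
print(np.sum(rho) * dV)                 # normalization is preserved, ~ 1
```

Because the relaxation acts pointwise (no space derivatives), the same code works unchanged for any integrable target, which is the property exploited by the optimization algorithm below.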
It is reasonable to assume that the solution starts with the sharp initial condition

ρ_0(V) = δ(V)  (55)

As a result of that assumption, all the randomness is supposed to be generated only by the controlled instability of Equation (54). Substitution of Equation (55) into Equation (54) leads to two different domains of v, v ≠ 0 and v = 0, where the solution has two different forms, respectively: Equations (56) and (57). Equation (57) represents a singular solution, while Equation (56) is a regular solution that includes an arbitrary constant C. The regular solution is unstable at t = 0, |v| → 0, where the Lipschitz condition is violated, and therefore an initial error always grows, generating randomness.
Let us analyze the behavior of the solution (56) in more detail. As follows from this solution, all the particular solutions intersect at the same point v = 0 at t = 0, and that leads to non-uniqueness of the solution due to violation of the Lipschitz condition: |dv̇/dv| → ∞ at t → 0, |v| → 0. Therefore, the same initial condition v = 0 at t = 0 yields an infinite number of different solutions forming the family (56); each solution of this family appears with a certain probability guided by the corresponding Liouville equation (51). For instance, in the cases plotted in Figures 5 and 6, the "winner" solution is the one that passes through the maximum of the probability density (51). However, with lower probabilities, other solutions of the family (56) can appear as well. Obviously, this is a non-classical effect. Qualitatively, this property is similar to those of quantum mechanics: the system keeps all the solutions simultaneously and displays each of them "by chance," while that chance is controlled by the evolution of the probability density (51).
The approach is generalized to the n-dimensional case simply by replacing v with a vector v = (v_1, v_2, …, v_n), since Equation (51) does not include space derivatives.
Examples. Let us start with the normal distribution as the target density ρ*. Substituting the expressions (60) and (55) into Equation (56) at V = v, one obtains the solution (61). As another example, let us choose the target density ρ* as Student's distribution, or the so-called power-law distribution. Substituting the expression (62) into Equation (56) at V = v, with ν = 1, one obtains the solution (63). The 3D plots of the solutions of Equations (61) and (63) are presented in Figures 7 and 8, respectively.

Finding global maximum
Based upon the proposed model with integral feedback, a simple algorithm for finding the global maximum of an n-dimensional function can be formulated. The idea of the algorithm is very simple: based upon the model with integral feedback (50) and (51), introduce a positive function ψ(v_1, v_2, …, v_n), |v_i| < ∞, to be maximized as the probability density ρ*(v_1, v_2, …, v_n) to which the solution of Equation (50) is attracted. Then the larger values of this function will have a higher probability to appear. The following steps are needed to implement this algorithm.
(1) Build and implement the n-dimensional version of the model Equations (50) and (51) as an analog device.
(2) Normalize the function to be maximized: ψ̄ = ψ / ∫ψ dv.
(3) Using Equation (51), evaluate the time τ of approaching the stationary process to accuracy ε.
(4) Substitute ψ̄ instead of ρ* into Equations (64) and run the system during the time interval τ.
(5) The solution will "collapse" into one of the possible solutions with the probability ψ̄. Observing (measuring) the corresponding values of {v*}, find the first approximation to the optimal solution.
(6) Switching the device to the initial state and then starting again, arrive at the next approximations.
(7) The sequence of the approximations represents Bernoulli trials that exponentially improve the chances of the optimal solution to become a winner. Indeed, if the probabilities of success and failure after the first trial are, respectively, ρ_s and ρ_f = 1 − ρ_s, then the probability of success after M trials is P = 1 − (1 − ρ_s)^M. Therefore, after a polynomial number of trials, one arrives at the solution to the problem (unless the function ψ is flat).
The main advantage of the proposed methodology is the weak restriction imposed upon the spatial structure of the function ψ̄({v}): it need only be integrable, since there are no space derivatives included in the model (64). This means that ψ̄({v}) does not need to be differentiable. For instance, it can be represented by a Weierstrass-like function f(v) = Σ_{n=0}^{∞} a^n cos(b^n v), where 0 < a < 1, b is a positive odd integer, and ab > 1 + 1.5π.

In the particular case when ψ̄({v}) is twice differentiable, the algorithm is insensitive to local maxima because it is driven not by gradients, but by the values of this function.
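The algorithm relies only on drawing samples distributed according to ψ̄ and on repeated Bernoulli trials. The following is a toy software stand-in, not the analog device of step (1): rejection sampling plays the role of the "collapse" of the stochastic process onto a sample with probability ψ̄, and the best of M collapses approximates the global maximizer. The test function, its upper bound, and M are arbitrary choices; the function is non-smooth with several local maxima.

```python
import numpy as np

rng = np.random.default_rng(2)

# Non-differentiable multi-peaked function on [-1, 1]; global maximum at v = 0.
def psi(v):
    return 2.0 + np.cos(12.0 * v) - np.abs(v)

PSI_MAX = 3.0   # upper bound on psi, needed for rejection sampling

# One "collapse": a sample whose probability is proportional to psi.
def one_trial():
    while True:
        v = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, PSI_MAX) < psi(v):
            return v

# M Bernoulli trials; the success probability grows as 1 - (1 - p)^M, so the
# best observed sample converges quickly to the global maximizer.
M = 400
samples = np.array([one_trial() for _ in range(M)])
best = samples[np.argmax(psi(samples))]
print(best)   # close to 0, despite local maxima near |v| ~ 0.5
```

Note that the procedure never evaluates a gradient: it is driven only by the values of ψ, which is why non-differentiability and local maxima are harmless.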

Entanglement in QI active systems with integral feedback
We will continue the analysis of the QI system with integral feedback introduced above, proceeding with the two-dimensional case. The solution of Equation (71) has the same form as in the one-dimensional case (see Equation (52)). Substituting this solution into Equations (69) and (70) yields, respectively, equations similar to Equation (54). Following the same steps as in the one-dimensional case, one arrives at the solutions of Equations (73) and (74), respectively, which are similar to the solution (56). Since ρ(v1, v2) is a known (preset) function, Equations (75) and (76) implicitly define v1 and v2 as functions of time. Eliminating the time t and the arbitrary constants C1, C2, one obtains the ratio (77). Thus, the ratio (77) is deterministic although both its numerator and denominator are random (see Equations (75) and (76)).

Application to games of entangled agents
In this section, we will address the situation when agents are competing, i.e. when they have different objectives. Turning to Equations (75) and (76), one can rewrite them for the case of competing agents, where ρ*k is the preset density of the kth agent, which can be considered as his objective, and ak is a constant weight of the kth agent's effort to approach this objective.
Thus, each kth agent is trying to establish his own static attractor ρ*k, but due to entanglement, the whole system will approach the weighted average (80). Substituting the solution (80) into Equation (78), one arrives at a coupled system of n ODE with respect to the n state variables vi. Although a closed-form analytical solution of the system (78) and (79) is not available, its Lipschitz instability at t = 0 can be verified. This means that the solution of the system (78) and (79) is random, and if the system is run many times, the statistical properties of the whole ensemble will be described by Equation (80). Obviously, those agents who have chosen a density with a sharp maximum are playing a riskier game. Here, we have assumed that the competing agents are still entangled, and therefore, their information about each other is complete. A more complex situation, in which the agents are not entangled and the exchanged information is incomplete, is addressed in the next section.

The simplest way to formalize the incompleteness of information possessed by a competing agent is to include "vortex" terms in Equation (77): these terms could change each particular trajectory of the agent's motion, but they would not change the statistical invariants that remain available to the competing agents. It is easily verifiable that the augmented neural-net-like terms do not affect the corresponding Liouville equation, and therefore, they do not change the static attractor in the probability space described by Equation (76). However, they may significantly change the configuration of the random trajectories in physical space, making their entanglement more sophisticated. Another way to formalize uncertainty is to introduce a complex joint probability density whose imaginary part represents a measure of uncertainty in the density distribution. This case will be considered below in more detail.
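A minimal numerical reading of this weighted-average attractor can be sketched as follows, under the assumption that Equation (80) has the form ρ = (Σ a_k ρ*_k)/(Σ a_k); the Gaussian target densities and the weights below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
v = np.linspace(-4.0, 4.0, 1601)
dx = v[1] - v[0]

def gauss(mu, sig):
    # A preset target density rho*_k, normalized on the grid.
    g = np.exp(-0.5 * ((v - mu) / sig) ** 2)
    return g / (g.sum() * dx)

targets = [gauss(-1.5, 0.3), gauss(1.0, 0.8)]   # objectives of two agents
a = np.array([2.0, 1.0])                         # their constant weights

# The entangled ensemble is attracted to the weighted average density.
rho_avg = (a[0] * targets[0] + a[1] * targets[1]) / a.sum()

# Running the "system" many times: the empirical outcomes follow rho_avg,
# not either agent's individual target density.
samples = rng.choice(v, size=20000, p=rho_avg / rho_avg.sum())
print(samples.mean())
```

The agent with the larger weight pulls the ensemble toward his own objective; an agent with a sharply peaked target gains when his peak is sampled but concedes probability mass elsewhere, which is the "riskier game" noted above.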

Problem formulation
In this section, we will present a draft of an application of self-supervised active dynamical systems to differential games. Following von Neumann and Isaacs (Isaacs, 1965), we will introduce a two-player zero-sum (antagonistic) differential game described by the dynamical Equation (64) rewritten for i = 1, 2, where v is the state vector, f1 is the control vector of the maximizing player E (evader), f2 is the control vector of the minimizing player P (pursuer), and C1, C2 are normalizing factors. Obviously, f is a known function of both state variables.
However, the rules of the game we propose are slightly different from those introduced by Isaacs, namely: the player P tries to minimize the function f (i.e. to maximize the function f⁻¹), while the player E tries to maximize f in the same manner as described in the previous subsection, i.e. via entanglement. The Liouville equation for the system (69) follows from Equation (79). We will now describe the game.
The game starts with zero initial conditions. It is assumed that each player has access to the systems (82) and (84), and therefore has complete information about their state. The substitution of Equation (84) into Equation (82) closes the system (82). However, because of the failure of the Lipschitz condition at t = 0 (see Equation (58)), the solution of Equation (82) is random, and each player can predict it only in terms of probability. As follows from Equation (84), the solution that delivers the global maximum to the payoff function (86) has the highest probability of appearing. Obviously, the player with the higher weight ai has better chances to win, since the global maximum of Equation (86) is closer to the global maximum of his goal function. With reference to Equation (86), a player can evaluate the time τ of approaching the stationary process to accuracy ε, as in Equation (87). This is the end of the first move. After that, each player updates his weight as prescribed by Equations (88) and (89) and starts the next move with the same initial conditions (85). But the system (82), (84) is different now: the control functions f1, f2 are replaced by their updated values f1¹ and f2¹, respectively. Thus, during the first move, the potential winner is selected by chance, and during the next move, his chances are increased due to a favorable update of the weights. However, the role of chance is still significant even during the subsequent moves; indeed, if the global maximum of the control function F is sharp, the initially selected potential winner can still lose.
The game ends when one of the players achieves his goal by maximizing his control function to a preset level.
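The move structure just described can be sketched as an iteration. Since the weight-update rule of Equations (88) and (89) is not reproduced here, the multiplicative update below is a stand-in assumption, as are the players' goal densities and the gain beta.

```python
import numpy as np

rng = np.random.default_rng(2)
v = np.linspace(-3.0, 3.0, 1201)

def bump(mu, sig):
    # Illustrative goal density of a player, normalized on the grid.
    g = np.exp(-0.5 * ((v - mu) / sig) ** 2)
    return g / g.sum()

f = [bump(-1.0, 0.4), bump(1.2, 0.4)]  # goals of evader E and pursuer P
a = np.array([1.0, 1.0])               # initial weights
beta = 0.5                             # update gain (an assumption)

for move in range(5):
    # Payoff density of the move: weighted average of the goal densities.
    payoff = (a[0] * f[0] + a[1] * f[1]) / a.sum()
    # The random "collapse" of the move selects one outcome.
    outcome = rng.choice(v, p=payoff / payoff.sum())
    # Each player scores how well the outcome served his own goal;
    # the weights are then updated in favor of the current winner.
    idx = int(np.searchsorted(v, outcome))
    scores = np.array([fi[idx] for fi in f])
    a = a * (1.0 + beta * scores / scores.max())

print(a)
```

This reproduces the qualitative behavior described above: the first move selects a potential winner by chance, and subsequent weight updates amplify his advantage without eliminating the role of chance.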

Games with incomplete information
The theory presented above includes applications to such problems as battle games, games with moving craft, pursuit games, etc. However, the main limitation of this theory, as of most game theories, is that it requires complete information about the state variables to be available to both players. This significantly diminishes the applicability of the theory to real-life games, where complete information is not available. That is why the extension of this theory to cases of incomplete information is of vital importance. In our application, we will assume that each player knows only his own state variables, while he has to guess the state variables of his adversary. For that case, the mathematical formalism of QI systems offers a convenient tool: replace the unknown value of a state variable by its expected value. Such a possibility is available due to the players' dependence (but not necessarily entanglement) via the joint probability density: since each player possesses the joint density, he can, at any moment, compute the expected value of the state variable of the other player.
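The replacement of an unknown adversary state by its expected value can be sketched directly: given a discretized joint density, a player conditions on his own state and averages over the other's. The correlated Gaussian below is an illustrative stand-in for the joint density produced by the Liouville equation.

```python
import numpy as np

# Discretized joint density rho(v1, v2) on a grid; the correlated
# Gaussian form here is an illustrative assumption.
n = 201
v1 = np.linspace(-3.0, 3.0, n)
v2 = np.linspace(-3.0, 3.0, n)
V1, V2 = np.meshgrid(v1, v2, indexing="ij")
rho = np.exp(-(V1**2 - 1.2 * V1 * V2 + V2**2))
rho /= rho.sum()

def expected_v2_given_v1(x1):
    # The Evader knows his own state v1 = x1 and replaces the Pursuer's
    # unknown state by its conditional expectation under the joint density.
    i = int(np.argmin(np.abs(v1 - x1)))
    row = rho[i, :]
    return float(np.sum(v2 * row) / row.sum())

print(expected_v2_given_v1(1.0))
```

Correlation in the joint density is what makes the best guess nontrivial: here the expected adversary state grows with the player's own state, while for an uncorrelated density the guess would collapse to a constant.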
We assume that the players follow the strategy "what do you think I think you think …?", and we will start with the assumption that each player takes a conservative view, where v2|1 is the state variable of the Pursuer in the view of the Evader.
The corresponding Liouville equation that governs the joint probability density is not changed: it is still given by Equation (79). Its solution (84) should be substituted into Equations (91) and (93), along with Equation (92). Obviously, the expected value (92) is found from the solution (84). Thus, we obtain two independent systems of ODE, each describing the entanglement of a player with the image of his adversary. Each system has random solutions that appear with the probability described by Equation (79). After the time interval τ (see Equation (87)), each player gets access to the real values of the functions fi to be maximized, and based upon that, he can update the state variables and weights for the next move (see Equations (88) and (89)).
Let us consider now the case when the players do not know the values of the state variables of their adversary. It is easy to conclude that the image Equations (99) and (101) can be solved independently. Now, replacing V2 and V1 in Equations (98) and (100) by the solutions for v2|1|1 and v1|2|2, respectively, one arrives at two independent ODE describing the behaviors of the players. Therefore, at this level of incompleteness of information, the entanglement disappears.
The games with incomplete information give a reason to distinguish two types of dependence between the agents described by the variables vi in QI systems. The first type is the entanglement that has been introduced and discussed above. Recall that in order to be entangled, the agents are supposed to run the system jointly during some initial period of time. But what happens if the agents have never been in contact? Obviously, they are not entangled, i.e. they cannot predict each other's motions. However, they are not completely independent: they can make random decisions, but the probabilities of these decisions will be correlated via the joint probability. As a result, the agents will be able to predict the expected decisions of each other. We will call such correlation weak entanglement. In the games with incomplete information considered above, weak entanglement appeared as entanglement of an agent with the probabilistic image of another agent.

Games of partially entangled agents
In this section, we introduce a new, more sophisticated form of entanglement that does not exist in quantum mechanics but can be found in QI models. This finding is based upon the existence of incompatible stochastic processes, which are considered below.

Incompatible stochastic processes
Classical probability theory defines conditional probability densities based upon the existence of a joint probability density. However, one can construct correlated stochastic processes that are represented only by conditional densities, since a joint probability density does not exist. For that purpose, consider two coupled Langevin equations (Risken, 1989), where the Langevin forces L1(t) and L2(t) satisfy the standard white-noise conditions. Then the joint probability density ρ(X1, X2), describing uncertainties in the values of the random variables x1 and x2, evolves according to the corresponding Fokker-Planck equation. Let us now modify Equations (104) and (105) to the form of Equations (108) and (109), respectively. Now the uncertainties of x1 and x2 are characterized by the conditional probability densities ρ1(X1|X2) and ρ2(X2|X1), while each of these densities is governed by its own Fokker-Planck equation. The solutions of these equations subject to sharp initial conditions for t > t′ read as Equations (113) and (114). As shown in Zak (1998), a joint density for the conditional densities (113) and (114) exists only in special cases of the diffusion coefficients g11 and g22, when the conditional probabilities are compatible. The compatibility condition is

$$\frac{\partial^2}{\partial X_1\,\partial X_2}\,\ln\frac{\rho_1(X_1|X_2)}{\rho_2(X_2|X_1)} \equiv 0. \tag{115}$$

Indeed, if a joint density ρ(X1, X2) exists, then ρ1(X1|X2)/ρ2(X2|X1) = ρ(X1)/ρ(X2), so the logarithm of the ratio splits into a function of X1 alone minus a function of X2 alone, and that leads to Equation (115).
Thus, the existence of the joint density ρ(X1, X2) for the conditional densities ρ1(X1|X2) and ρ2(X2|X1) requires the identity (118). Obviously, the identity (118) holds only for specially selected functions g11(X2) and g22(X1), and therefore, the existence of the joint density is an exception rather than the rule.
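The compatibility condition can be checked numerically. In the sketch below (an illustration, not the paper's equations), a compatible pair of conditionals is derived from one joint density, so the log-ratio separates and its mixed derivative vanishes; an incompatible pair is built by letting the spread of each conditional depend on the conditioning variable, mimicking g11(X2) and g22(X1).

```python
import numpy as np

h = 1e-3

def mixed_partial(f, x1, x2):
    # Central-difference estimate of d^2 f / (dx1 dx2).
    return (f(x1 + h, x2 + h) - f(x1 + h, x2 - h)
            - f(x1 - h, x2 + h) + f(x1 - h, x2 - h)) / (4.0 * h * h)

def log_ratio_compatible(x1, x2):
    # For conditionals derived from any joint density,
    # rho1(x1|x2)/rho2(x2|x1) = rho(x1)/rho(x2), so the log-ratio
    # separates into a function of x1 minus a function of x2.
    return (-0.5 * x1**2) - (-0.5 * (x2 - 1.0) ** 2)

def log_ratio_incompatible(x1, x2):
    # Gaussian conditionals whose spread depends on the other variable
    # (mimicking diffusion coefficients g11(X2), g22(X1)); assumed form.
    s1 = 1.0 + 0.5 * x2**2
    s2 = 1.0 + 0.5 * x1**2
    return ((-0.5 * (x1 / s1) ** 2 - np.log(s1))
            - (-0.5 * (x2 / s2) ** 2 - np.log(s2)))

print(mixed_partial(log_ratio_compatible, 0.7, -0.4))    # ~ 0
print(mixed_partial(log_ratio_incompatible, 0.7, -0.4))  # clearly nonzero
```

The nonzero mixed derivative in the second case confirms that no joint density exists for such a pair of conditionals, which is exactly the "exception rather than the rule" situation described above.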

Partial entanglement
In order to prove the existence of a new form of entanglement, let us modify the system of Equations (36), (37), and (38) as follows. Since here we do not postulate the existence of a joint density, the system is written in terms of conditional densities, with Equations (120) and (121) similar to Equations (110) and (111). The solutions of these PDE can be written in a form similar to the solutions (113) and (114). As noted in the previous subsection, the existence of the joint density ρ(V1, V2) for the conditional densities ρ1(V1|V2) and ρ2(V2|V1) requires the condition (125). In that case, the joint density exists (although finding it is not trivial; Zak, 1998), and the system (119)-(122) can be reduced to a system similar to (36)-(38). But here we will be interested in the case when the joint density does not exist. It is much easier to find functions a11(V2), a22(V1) for which the identity (125) does not hold, and we assume that this is the case. Then the system (119)-(122) cannot be simplified. In order to analyze this system in detail, let us substitute the solutions (123) and (124) into Equations (119) and (121), respectively. Then, with reference to Equation (14), one obtains Equations (127) and (128), and therefore the solutions (129) and (130). It should be recalled that, according to the terminology introduced in Section 3, the system (119)-(120) and the system (121)-(122) can be considered as dynamical models for the interaction of two communicating agents, where Equations (119) and (121) describe their motor dynamics, and Equations (120) and (122) their mental dynamics, respectively. Also recall that the solutions (129) and (130) are represented by a one-parameter family of random samples, as in Equation (15), while randomness enters through the time-independent parameters C1 and C2, which can take any real values.
As follows from Figure 2, all the particular solutions (129) and (130) intersect at the same point v1,2 = 0 at t = 0, and that leads to non-uniqueness of the solution due to violation of the Lipschitz condition. Therefore, the same initial condition v1,2 = 0 at t = 0 yields an infinite number of different solutions forming a family; each solution of this family appears with a certain probability guided by the corresponding Fokker-Planck Equations (120) and (122), respectively. A similar scenario was described in Section 2 of this paper. But what is unusual in the system (119)-(122) is the correlations: although Equations (120) and (122) are correlated, and therefore the mental dynamics are entangled, Equations (119) and (121) are not correlated (since they can be presented in the form of the independent Equations (127) and (128), respectively), and therefore the motor dynamics are not entangled. This means that in the course of communications, each agent "selects" a certain pattern of behavior from the family of solutions (129) and (130), respectively, and these patterns are independent; but the probabilities of these "selections" are entangled via Equations (120) and (122). Such sophisticated correlations cannot be found in the physical world, and they obviously represent a "human touch." Unlike the entanglement in systems with a joint density (such as that in Equations (36)-(38)), here the agents do not share any deterministic invariants (compare with Equation (48)). Instead, the agents can communicate via "best guesses" based upon the known conditional probability densities.
In order to quantify the amount of uncertainty due to the incompatibility of the conditional probability densities (123) and (124), let us introduce the concept of complex probability (Zak, 1998). The resulting system of integral equations (138) is overdetermined unless the compatibility conditions (115) are satisfied.
As known from classical mechanics, incompatibility conditions are usually associated with a fundamentally new concept or physical phenomenon. For instance, incompatibility of velocities in a fluid (caused by the non-existence of a velocity potential) introduces vorticity in rotational flows, and incompatibility of strains describes continua with dislocations. In order to interpret the incompatibility (115), let us return to the system (138). Discretizing the functions in Equation (138) and replacing the integrals by the corresponding sums, one reduces Equation (138) to a system of n algebraic equations with respect to n unknowns. This means that the system is closed, and cases when a solution does not exist are exceptions rather than the rule. Therefore, in most cases, for any arbitrarily chosen conditional densities, for instance those given by Equations (123) and (124), the system (138) defines the complex joint density in the form (131). Now we are ready to discuss the physical meaning of the imaginary component of the complex probability density. Firstly, as follows from a comparison of Equations (138) and (140), the imaginary part of the probability density appears as a response to the incompatibility of the conditional probabilities, and therefore it can be considered as a "compensation" for the incompatibility. Secondly, as follows from the inequalities (137), the imaginary part consumes a portion of the "probability mass," thereby increasing the degree of uncertainty in the real part of the complex probability density. Hence, the imaginary part of the probability density can be defined as a measure of the uncertainty "inflicted" by the incompatibility upon the real part of this density.
In order to avoid solving the system of integral equations (138), we can reformulate the problem in an inverse fashion by assuming that the complex joint density is given. Then the real parts of the conditional probabilities that drive Equations (119) and (120) can be found from simple formulas (134) and (135).
Let us illustrate this new paradigm by considering two players, assuming that each player knows his own state but does not know the state of his adversary. In order to formalize the degree of initial incompleteness of information, introduce the complex joint probability density, which shows how much the players know and how much they do not know about each other when the game starts, subject to the normalization constraint (136). The structure of the real part of the joint probability density can be chosen the same as in Equation (84). However, since here a0 < 1, the real part of the joint probability density is reduced due to a "leak" of probability "mass" from the real to the imaginary part, and this makes predictions less certain for both players. Otherwise, the formal structure of the motor dynamics is similar to that described by Equations (83) and (84).
The imaginary part can be preset via a probability density ρ* characterizing the degree of uncertainty of the information that the players have about each other: the larger ρ*, the larger the probability leak from the real to the imaginary part of the complex probability density. The arbitrary constants C1, C2, and C3 couple the real and the imaginary parts via the normalization constraint (136). The motor dynamics is slightly changed compared to Equation (82). Thus, both players rely only upon the real part of the complex joint density instead of a real joint density (which may not exist in this case). But, as follows from the inequalities (137), the values of the density of the real part are lowered due to the loss of probability mass, and this increases the amount of uncertainty in the players' predictions. In order to mitigate that limitation, the players can invoke the imaginary part of the joint density, which gives them qualitative information about the amount of uncertainty at the selected maxima.
It should be noticed that the game starts with a significant amount of uncertainty that grows with subsequent moves. Such a subtle and sophisticated relationship is typical of communications between humans, and the proposed model captures it via the partial entanglement introduced above.
Remark. So far, we have considered the imaginary part of a joint probability density as a result of the incompatibility of the conditional densities of the players. However, this part can have a different origin: it can also represent a degree of deception that the players apply in real-life games. As in the previous example, in games with deception the imaginary part of the joint probability density increases the uncertainty of the players' predictive capabilities. The mathematical formalism of a game with deception is similar to that discussed above.

Passive period of players' performance
In this section, we will discuss the capacity of the mathematical formalism to extend the proposed model to a new space with imaginary time, where players exhibit virtual motions such as dreams and memories. In order to demonstrate this, let us replace Equation (54) by the feedback (148), where T is the period of active performance of the player.
Then at 0 < t < T the player is active, and its activity is described by the governing Equations (50) and (51).
For t = T, the player is at rest, and its state is described by a simple Newtonian state.
But for t > T, the feedback (148), as well as Equations (50), (61), and (52), become complex. For better interpretation, it is convenient to introduce an imaginary time. Then, in the formal solutions of these equations, the velocity v and the probability density ρ become real functions of the imaginary time. It is reasonable to assume that the family of trajectories in the solution (63) describes virtual motions evolving in imaginary time with the probability (62), while the time scale of these motions could differ from the real one. Such surrealistic activity can be associated with memories and dreams, i.e. with the period of passive performance during which a player has an opportunity to enrich his information with the help of memories, and to plan and test his future performance.

Discussion and conclusion
We start the discussion by outlining the mathematical novelties of this work. Actually, a new class of ODE coupled with their Liouville equation is introduced. The leading ideas came from the Madelung equation, which is a hydrodynamic version of the Schrödinger equation. Here ρ and S are the components of the wave function Ψ = √ρ e^{iS/ℏ}, and ℏ is the Planck constant divided by 2π. The last term in Equation (168) is known as the quantum potential. From the viewpoint of Newtonian mechanics, Equation (167) expresses continuity of the flow of probability density, and Equation (168) is the Hamilton-Jacobi equation for the action S of the particle. Actually, the quantum potential in Equation (168), as a feedback from Equation (167) to Equation (168), represents the difference between Newtonian and quantum mechanics, and therefore it is solely responsible for the fundamental quantum properties.
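For reference, the Madelung equations take their standard form (with m the mass of the particle and V the external potential):

```latex
\frac{\partial \rho}{\partial t}
  + \nabla \cdot \left( \rho \, \frac{\nabla S}{m} \right) = 0,
\qquad
\frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V
  - \frac{\hbar^2}{2m}\,\frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}} = 0.
```

The first corresponds to the continuity Equation (167) and the second to the Hamilton-Jacobi Equation (168); the last term of the latter is the quantum potential.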
Our approach is based upon replacing the quantum potential with a different Liouville feedback, Figure 1.
In Newtonian physics, the concept of probability ρ is introduced via the Liouville equation generated by the system of ODE (171), where v is the velocity vector.
It describes the continuity of the probability density flow originating from the error distribution in the initial condition of the ODE (171).

Let us rewrite Equation (168) in the following form
where v is the velocity of a hypothetical particle. This is a fundamental step in our approach: in Newtonian dynamics, the probability never explicitly enters the equation of motion. In addition, the Liouville equation generated by Equation (172) is non-linear with respect to the probability density ρ, and therefore the system (172), (173) departs from Newtonian dynamics. However, although it has the same topology as quantum mechanics (since now the equation of motion is coupled with the equation of continuity of the probability density), it does not belong to it either. Indeed, Equation (172) is more general than the Hamilton-Jacobi Equation (168): it is not necessarily conservative, and F is not necessarily the quantum potential, although further we impose some restrictions upon it that link F to the concept of information. The relation of the system (172), (173) to Newtonian and quantum physics is illustrated in Figure 1. Two different types of feedbacks replacing the quantum potential (see Equations (23) and (49)) are introduced and analyzed. Both of these feedbacks lead to different approaches to psychological games. As demonstrated, this new class of ODE includes a mathematical formalization of the human factor. Therefore, the paper extends the mathematical formalism of quantum physics to include games and economics with a component of psychology. The novelty of the approach is based upon a human factor-based behavioral model of an intelligent agent. The model is quantum inspired: it is represented by a modified Madelung equation in which the gradient of the quantum potential is replaced by a specially chosen information force. It consists of motor dynamics simulating the actual behavior of the agent, and mental dynamics representing the evolution of the corresponding knowledge base and incorporating this knowledge in the form of information flows into the motor dynamics.
Due to the feedback from the mental dynamics, the motor dynamics attains quantum-like properties: its trajectory splits into a family of different trajectories, and each of those trajectories can be chosen with the probability prescribed by the mental dynamics; each agent is entangled (in a quantum way) with other agents and makes calculated predictions of future actions. In the case of a complex density, its imaginary part represents a measure of uncertainty of the density distribution. The human factor is associated with violation of the second law of thermodynamics: the system can move from disorder to order without external help, and that represents intrinsic intelligence. All of these departures actually extend and complement the classical methods, making them especially successful in the analysis of communications between agents represented by the new mathematical formalism. Special attention is concentrated on a new approach to the theory of differential games, and in particular, on the behavioral properties of players as intelligent subjects possessing self-image and self-awareness. Due to quantum-like entanglement, they are capable of predicting and influencing the actions of their adversaries. The model addresses a new type of entanglement that correlates the probabilities of actions of living agents rather than the actions themselves.
There are several differences between the proposed and conventional game theories. Firstly, in the proposed game, the players are entangled: they cannot make independent deterministic decisions; instead, they make coordinated random decisions such that, at least, the probabilities of these decisions are dependent. Therefore, the proposed game represents a special case of a non-deterministic symmetric simultaneous zero-sum game. Secondly, the maximization of the payoff function here does not require any special methods (like gradient ascent), since it is built into the dynamical model. Indeed, the payoff function (86) is represented by the probability density of the stochastic attractor, and therefore its maximum value has the highest probability of appearing as a random solution of the underlying dynamical model (82). Moreover, the payoff function (86) is not required to be differentiable at all (although it must be integrable).
This work opens up the way to the development of a fundamentally new approach to the mathematical formalism of economics that is inspired by the formalism of quantum mechanics. Mathematical treatment of economics has a relatively short history. Formal economic modeling began in the nineteenth century with the use of differential calculus to represent and explain economic behavior, such as utility maximization, an early economic application of mathematical optimization. Economics became more mathematical as a discipline throughout the first half of the twentieth century, but the introduction of new and generalized techniques in the period around World War II, as in game theory (Isaacs, 1965), greatly broadened the use of mathematical formulations in economics. However, notwithstanding the undisputable success of mathematical methods in economics, there have been alarmed critics of the discipline, among them some noted economists. John Maynard Keynes, Robert Heilbroner, Friedrich Hayek, and others have criticized the broad use of mathematical models for human behavior, arguing that some human choices are irreducible to mathematics. Actually, the alert was expressed much earlier by Newton, who stated: "I can calculate the motion of heavenly bodies, but not the madness of people." Limitations of modern mathematical methods in economics are especially transparent in the area of agent-based computational economics. It is a relatively recent field, dating from about the 1990s as to published work. It studies economic processes, including whole economies, as dynamic systems of interacting agents over time. As such, it falls within the paradigm of complex adaptive systems. In the corresponding agent-based models, agents are not real people but "computational objects modeled as interacting according to rules" … "whose micro-level interactions create emergent patterns" in space and time. The rules are formulated to predict behavior and social interactions based on incentives and information.
The theoretical assumption of mathematical optimization by agents in markets is replaced by the less restrictive postulate of agents with bounded rationality adapting to market forces.
In this context, the presented mathematical formalism incorporates a human factor in agent-based economics while departing from physics formalism toward the physics of life.